2308.04740
Randomness-enhanced expressivity of quantum neural networks
As a hybrid of artificial intelligence and quantum computing, quantum neural networks (QNNs) have gained significant attention as a promising application on near-term, noisy intermediate-scale quantum (NISQ) devices. Conventional QNNs are described by parametrized quantum circuits, which perform unitary operations and measurements on quantum states. In this work, we propose a novel approach to enhance the expressivity of QNNs by incorporating randomness into quantum circuits. Specifically, we introduce a random layer, which contains single-qubit gates sampled from a trainable ensemble pool. The prediction of the QNN is then represented by an ensemble average over a classical function of measurement outcomes. We prove that our approach can accurately approximate arbitrary target operators using Uhlmann's theorem for majorization, which enables observable learning. Our proposal is demonstrated with extensive numerical experiments, including observable learning, R\'enyi entropy measurement, and image recognition. We find the expressivity of QNNs is enhanced by introducing randomness for multiple learning tasks, which could have broad application in quantum machine learning.
Yadong Wu, Juan Yao, Pengfei Zhang, Xiaopeng Li
2023-08-09T07:17:13Z
http://arxiv.org/abs/2308.04740v2
# Randomness-enhanced expressivity of quantum neural networks

###### Abstract

As a hybrid of artificial intelligence and quantum computing, quantum neural networks (QNNs) have gained significant attention as a promising application on near-term, noisy intermediate-scale quantum (NISQ) devices. Conventional QNNs are described by parametrized quantum circuits, which perform unitary operations and measurements on quantum states. In this work, we propose a novel approach to enhance the expressivity of QNNs by incorporating randomness into quantum circuits. Specifically, we introduce a random layer, which contains single-qubit gates sampled from a trainable ensemble pool. The prediction of the QNN is then represented by an ensemble average over a classical function of measurement outcomes. We prove that our approach can accurately approximate arbitrary target operators using Uhlmann's theorem for majorization, which enables observable learning. Our proposal is demonstrated with extensive numerical experiments, including observable learning, Renyi entropy measurement, and image recognition. We find the expressivity of QNNs is enhanced by introducing randomness for multiple learning tasks, which could have broad application in quantum machine learning.

_Introduction.-_ In recent years, significant breakthroughs have been made in the field of artificial intelligence. Among various machine learning algorithms, neural networks have played a vital role, thanks to their universal expressivity for deep architectures. As a quantum generalization of neural networks, quantum neural networks (QNNs) have been proposed based on parameterized quantum circuits. QNNs use quantum states instead of classical numbers as inputs [1, 2, 3, 4]. However, the evolution of the input quantum states is constrained to be unitary, which limits the expressivity of QNNs. For physical observables, which are linear functions of the input quantum states or density matrices, QNNs can achieve high accuracy only if the target operator shares the same eigenvalues with the measurement operator. A general situation requires introducing auxiliary qubits, as proposed in [5]. To express non-linear functions of the input density matrices, such as purities, traditional approaches introduce multiple replicas, which is unfavorable on near-term, noisy intermediate-scale quantum (NISQ) devices with a limited number of logical qubits. Previous studies have also reported only moderate accuracy for more general machine learning tasks, including image recognition [6, 7, 8, 9, 10]. In this work, we propose a universal scheme to overcome the expressivity obstacle without the need for additional replicas. Our main inspiration comes from the recent development of the randomized measurement toolbox for quantum simulators [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]. In all of these protocols, a measurement is performed after a random unitary gate, and the desired property is predicted by a classical computer after collecting sufficient measurement outcomes. In particular, randomized measurements have been experimentally realized in [35, 36, 37, 38, 39, 40]. These developments unveil that randomness plays a central role in efficiently extracting information from complex quantum systems. From a machine learning perspective, this implies that introducing random unitaries may enhance the expressivity of QNNs.
This naturally leads to the concept of randomized quantum neural networks, where we collect measurement outcomes from an ensemble of parametrized quantum circuits to make final predictions. Analogous to the different types of layers in classical neural networks, randomized QNNs consist of deterministic layers and random layers. In deterministic layers, the quantum gates contain parameterized quantum gates as in traditional QNNs, while in random layers, they are sampled from trainable ensembles of single-qubit gates. This is illustrated in FIG. 1.

Figure 1: An illustration of the proposed architecture of randomized quantum neural networks. In this example, the circuit contains two deterministic layers \(\hat{U}_{1(2)}\) and one random layer \(\hat{U}_{r}\) in between, with the final measurement performed on two qubits. As demonstrated in this work, this architecture shows randomness-enhanced expressivity for a variety of general learning tasks.

We demonstrate the high expressivity of the proposed architecture using several different tasks, including both linear and nonlinear functions of the input density matrix. Our results pave the way towards realizing universal expressivity for QNNs.

_Architecture.-_ We begin with a detailed description of randomized QNNs. To be concrete, we focus on the architecture illustrated in FIG. 1 for \(N_{\text{sys}}=5\) qubits, which comprises two deterministic layers, namely \(\hat{U}_{1}\) and \(\hat{U}_{2}\), with a random single-qubit gate layer \(\hat{U}_{r}\) in between. Each deterministic layer \(\hat{U}_{l_{d}}\) (\(l_{d}=1,2\)) contains a number of units \(\hat{V}_{l_{d}}^{l}(\mathbf{\theta}_{l_{d}}^{l})\) (\(l\in\{1,2,...,L_{l_{d}}\}\)) and is constructed as \[\hat{U}_{l_{d}}=\hat{V}_{l_{d}}^{L_{l_{d}}}(\mathbf{\theta}_{l_{d}}^{L_{l_{d}}})...\hat{V}_{l_{d}}^{2}(\mathbf{\theta}_{l_{d}}^{2})\hat{V}_{l_{d}}^{1}(\mathbf{\theta}_{l_{d}}^{1}), \tag{1}\] where \(\{\mathbf{\theta}_{l_{d}}^{l}\}\) are the parameters of the deterministic layers. In general, the arrangement of two-qubit gates in each deterministic layer allows for a large degree of freedom. In this work, we focus on the standard brick wall architecture with spatial locality. Each unit \(\hat{V}_{l_{d}}^{l}\) contains \(N_{\text{sys}}-1\) two-qubit gates, and each two-qubit gate is an SU(4) matrix which can be parameterized as \(\exp(\sum_{j}c_{j}\hat{g}_{j})\). Here \(\hat{g}_{j}\) are the generators of the SU(4) group and \(\{\mathbf{\theta}_{l_{d}}^{l}\}\) denotes the parameters \(\{\mathbf{c}\}\) of all two-qubit gates [41]. Nonetheless, alternative choices for each deterministic layer have the potential to enhance the expressivity of QNNs for a fixed number of gates [6]. For the sake of experimental convenience, the random layer \(\hat{U}_{r}\) comprises a tensor product of single-qubit gates, denoted by \(\hat{u}_{1}\otimes\hat{u}_{2}...\otimes\hat{u}_{N_{\text{sys}}}\). These gates are sampled from an ensemble \[\mathcal{E}=\{(w_{i},\hat{U}_{r,i}=\hat{u}_{1}^{i}(\mathbf{\alpha}_{i}^{1})\otimes\hat{u}_{2}^{i}(\mathbf{\alpha}_{i}^{2})...\otimes\hat{u}_{N_{\text{sys}}}^{i}(\mathbf{\alpha}_{i}^{N_{\text{sys}}}))\}, \tag{2}\] where \(i=1,2,...,N_{r}\) labels different elements and \(w_{i}\) is the corresponding weight with \(\sum_{i}w_{i}=1\). Each single-qubit gate is parametrized by generators of SU(2) with a 3-dimensional real vector \(\mathbf{\alpha}_{i}^{q}\) (\(q\in\{1,2,...,N_{\text{sys}}\}\)).
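To make the ensemble of Eq. (2) concrete, the following NumPy sketch builds the weighted tensor-product unitaries \(\hat{U}_{r,i}\). The SU(2) parametrization \(\hat{u}(\mathbf{\alpha})=\exp(-\tfrac{i}{2}\,\mathbf{\alpha}\cdot\hat{\mathbf{\sigma}})\) and the softmax used to keep \(\sum_i w_i=1\) during training are convention choices of this sketch, not prescriptions from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices (generators of SU(2) up to the conventional -i/2 factor).
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def single_qubit_gate(alpha):
    """SU(2) gate from a 3-component real vector alpha (one convention)."""
    return expm(-0.5j * (alpha[0] * SX + alpha[1] * SY + alpha[2] * SZ))

def random_layer_ensemble(alphas, logits):
    """Build the ensemble E of Eq. (2): weighted tensor products of
    single-qubit gates.  alphas has shape (N_r, N_sys, 3); the weights
    w_i come from trainable logits via a softmax so that sum_i w_i = 1
    is preserved during gradient descent (an assumption of this sketch)."""
    weights = np.exp(logits) / np.exp(logits).sum()
    unitaries = []
    for alpha_i in alphas:
        U = np.array([[1.0]], dtype=complex)
        for alpha_q in alpha_i:          # tensor product over the qubits
            U = np.kron(U, single_qubit_gate(alpha_q))
        unitaries.append(U)
    return weights, unitaries

rng = np.random.default_rng(0)
N_r, N_sys = 3, 5
w, Us = random_layer_ensemble(rng.normal(size=(N_r, N_sys, 3)),
                              rng.normal(size=N_r))
assert Us[0].shape == (2**N_sys, 2**N_sys) and np.isclose(w.sum(), 1.0)
```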
Both \(\{w_{i}\}\) and \(\{\mathbf{\alpha}_{i}^{q}\}\) are trainable parameters. It is also straightforward to introduce multiple random layers into the full architecture of QNNs. Importantly, it is worth noting the differences between our definition and typical random measurement protocols. Firstly, our random layer can be added at any point in the quantum circuit, not necessarily before the final measurement. Secondly, our definition of \(\mathcal{E}\) allows for non-trivial correlations between single-qubit gates on different sites, which is typically absent in random measurement protocols. Both features are necessary for achieving a high expressivity in QNNs.

We consider a dataset \(\{(|\psi_{m}\rangle,\mathcal{T}_{m})\}\), in which \(m\in\{1,2,...,N_{D}\}\) labels different data and \(\mathcal{T}_{m}\) is the target information for the corresponding state \(|\psi_{m}\rangle\). For each unitary \(\hat{U}_{r,i}\) in the ensemble \(\mathcal{E}\), we perform projective measurements in the computational basis for \(k\sim O(1)\) qubits. In FIG. 1, we set \(k=2\), and the measurement yields the probability distribution given by: \[p_{i,m}^{ss^{\prime}}=\langle\psi_{m}|\hat{U}_{1}^{\dagger}\hat{U}_{r,i}^{\dagger}\hat{U}_{2}^{\dagger}(\hat{P}_{s}^{2}\otimes\hat{P}_{s^{\prime}}^{3})\hat{U}_{2}\hat{U}_{r,i}\hat{U}_{1}|\psi_{m}\rangle, \tag{3}\] where the projection operator is \(\hat{P}_{s}^{q}=\frac{1+s\hat{\sigma}_{z}^{q}}{2}\) for \(s=\pm 1\). Due to the constraint \(\sum_{ss^{\prime}}p_{i,m}^{ss^{\prime}}=1\), there are only \(3\) non-trivial components of \(p_{i,m}^{ss^{\prime}}\), denoted by the vector \(\mathbf{p}_{i,m}\). We then use a classical computer to apply a general function \(f_{\mathbf{\beta}}(\cdot)\), parametrized by \(\mathbf{\beta}\), to the probability distribution \(p_{i,m}^{ss^{\prime}}\), which yields a single outcome denoted by \(\mathcal{P}_{i,m}=f_{\mathbf{\beta}}(\mathbf{p}_{i,m})\). The classical function can be described by elementary functions in the simplest setting, but is more generally described by classical neural networks. We further average the outcome over the ensemble \(\mathcal{E}\) to obtain the final prediction for the input state \(|\psi_{m}\rangle\) as: \[\mathcal{P}_{m}=\sum_{i=1}^{N_{r}}w_{i}\mathcal{P}_{i,m}=\sum_{i=1}^{N_{r}}w_{i}f_{\mathbf{\beta}}(\mathbf{p}_{i,m}). \tag{4}\] We use the mean square error (MSE) as the loss function \(\mathcal{L}=\frac{1}{N_{D}}\sum_{m}(\mathcal{P}_{m}-\mathcal{T}_{m})^{2}\) with data size \(N_{D}\) during the training process. We apply the gradient descent algorithm to optimize the parameters \(\{\mathbf{\theta}_{l_{d}}^{l},w_{i},\mathbf{\alpha}_{i}^{q},\mathbf{\beta}\}\) to minimize the loss function \(\mathcal{L}\). In the following sections, we focus on the demonstration of high expressivity for randomized QNNs. Our examples range from simple physical tasks, including observable learning and Renyi entropy measurement, to standard machine learning tasks such as image recognition.

_Observable learning.-_ To show the high expressivity of randomized QNNs, let us consider a simple scenario where the target, \(\mathcal{T}_{m}\), is an expectation value of a physical observable \(\hat{O}\) with \(\mathcal{T}_{m}=\langle\psi_{m}|\hat{O}|\psi_{m}\rangle\). For simplicity, focusing on single-qubit measurement with \(k=1\), we first investigate whether the randomized QNNs proposed in FIG.
1 can approximate the target function \(\mathcal{T}_{m}\) as accurately as possible for sufficiently deep circuit structures with sufficiently large \(N_{r}\). As physical observables are linear in density matrices, a linear function \(f_{\mathbf{\beta}}(x)=\beta_{0}+\beta_{1}x\) will be applied to the measurement result. Explicitly, we introduce \(\hat{U}_{\text{tot},i}\) for a random realization \(i\) of the quantum circuit. As an example, we have \(\hat{U}_{\text{tot},i}=\hat{U}_{2}\hat{U}_{r,i}\hat{U}_{1}\). An accurate prediction of the target function requires that \[\sum_{i=1}^{N_{r}}w_{i}\;\hat{U}_{\text{tot},i}^{\dagger}(\beta_{0}\hat{\sigma}_{0}^{1}+\beta_{1}\hat{\sigma}_{z}^{1})\hat{U}_{\text{tot},i}=\hat{O}, \tag{5}\] where \(\hat{\sigma}_{0}\) is the identity operator and the Pauli matrix \(\hat{\sigma}_{z}\) is the single-qubit measurement operator. For the case of \(N_{r}=1\) and \(w_{1}=1\), our setup reduces to the traditional QNN without randomness. In this scenario, Eq. (5) requires that \((\beta_{0}\hat{\sigma}_{0}^{1}+\beta_{1}\hat{\sigma}_{z}^{1})\) and \(\hat{O}\) be related by a unitary transformation. Since a unitary transformation preserves the eigenvalues of the operator, the requirement cannot be satisfied for a general operator \(\hat{O}\). When \(N_{r}>1\), Eq. (5) can be expressed as \(\Phi(\hat{\Sigma})=\hat{O}\), where \(\hat{\Sigma}\equiv\beta_{1}\hat{\sigma}_{z}^{1}+\beta_{0}\hat{\sigma}_{0}^{1}\) and \(\Phi\) is a mixed-unitary channel [42]. For sufficiently complex circuit structures, we expect \(\Phi\) to be generic. In comparison to the \(N_{r}=1\) case, there is no constraint from unitarity. However, we still need to ask whether Eq. (5) can be satisfied for an arbitrary operator \(\hat{O}\). In the following, we prove that the answer to this question is affirmative. The proof consists of three steps:

**Step 1.** Mathematically, if there exists a mixed-unitary channel \(\Phi\) such that \(Y=\Phi(X)\), we say that \(X\) majorizes \(Y\), denoted by \(Y\prec X\) [43]. Thus, for randomized QNNs to accurately predict an arbitrary observable \(\hat{O}\), we need to find values of \(\beta_{0}\) and \(\beta_{1}\) such that \(\hat{O}\prec\hat{\Sigma}\) for any \(\hat{O}\).

**Step 2.** According to Uhlmann's theorem for majorization [43, 44], \(\hat{O}\prec\hat{\Sigma}\) if and only if \(\mathbf{\lambda}_{\hat{O}}\prec\mathbf{\lambda}_{\hat{\Sigma}}\), where \(\mathbf{\lambda}_{\hat{X}}\) is the list of eigenvalues of the operator \(\hat{X}\) in descending order. Here the majorization between two real vectors \(\mathbf{y}\prec\mathbf{x}\) is defined by (i) \(\sum_{j=1}^{q}x_{j}\geq\sum_{j=1}^{q}y_{j}\) for arbitrary \(1\leq q<\mathcal{D}\) and (ii) \(\sum_{j=1}^{\mathcal{D}}x_{j}=\sum_{j=1}^{\mathcal{D}}y_{j}\), where \(\mathcal{D}\) is the dimension of the vectors. Note that condition (ii) reflects the trace-preserving property of mixed-unitary channels.

**Step 3.** We can always find \(\beta_{0}\) and \(\beta_{1}\) such that \(\mathbf{\lambda}_{\hat{O}}\prec\mathbf{\lambda}_{\hat{\Sigma}}\). Assuming \(\beta_{1}>0\), the first or last \(\mathcal{D}/2\) components of \(\mathbf{\lambda}_{\hat{\Sigma}}\) correspond to the values \(\beta_{0}+\beta_{1}\) or \(\beta_{0}-\beta_{1}\), respectively. The constant term \(\beta_{0}\) can then be determined using condition (ii), which gives \(\beta_{0}=\mathcal{D}^{-1}\sum_{j=1}^{\mathcal{D}}\lambda_{\hat{O},j}\). Moreover, condition (i) can always be satisfied for sufficiently large \(\beta_{1}\).
This proves the existence of \(\beta_{0}\) and \(\beta_{1}\) such that \(\hat{O}\prec\hat{\Sigma}\). Although randomized QNNs have the potential to express arbitrary operators, it is difficult to determine an upper bound or a required value for \(N_{r}\) in practical learning tasks. At the same time, it is unfavorable to have a large \(N_{r}\) or a large number of random layers, especially on NISQ devices. Therefore, we turn to numerical simulations of the randomized QNNs and investigate practical requirements on \(N_{r}\) and the number of random layers. Since a basis change can be efficiently captured by the deterministic layer \(\hat{U}_{1}\), we focus on observables \(\hat{O}\) that are diagonal in the computational basis. For simplicity, we further set \(\hat{U}_{1}=\hat{I}\) and take \(\hat{U}_{2}\) to be composed of \(L_{2}\) units of a brick wall structure [41]. For each system size \(N_{\text{sys}}\), we test whether a random diagonal operator \(\hat{O}\) can be predicted accurately for different values of \(N_{r}\) by monitoring the training loss for a sufficiently large dataset. As an example, we plot the training loss as a function of the training epoch for \(N_{\text{sys}}=2\) in FIG. 2 (a). The curves are averaged over 10 operators with random eigenvalues drawn from the uniform distribution on \([-2.5,2.5]\). When we increase \(N_{r}\) from 1 to 3, there is a rapid decrease in the training loss at large training epochs. The result shows that \(N_{r}=3\) is sufficient for learning general operators for \(N_{\text{sys}}=2\), where the mean absolute error \(\mathbf{E}[|\mathcal{P}-\mathcal{T}|]=\frac{1}{N_{D}}\sum_{m}|\mathcal{P}_{m}-\mathcal{T}_{m}|\) can be decreased to \(10^{-5}\). More detailed explanations of the trained random QNNs are given in the supplementary materials [41]. We further extend the system size \(N_{\text{sys}}\) to study how it affects the number of required random gates. The results are shown in FIG. 2 (b). Although we are limited to small system sizes \(N_{\text{sys}}\in\{2,3,4,5\}\), the results clearly show a weak dependence of \(N_{r}\) on \(N_{\text{sys}}\). The training results show that \(N_{r}=3\) already gives highly accurate predictions for \(N_{\text{sys}}=5\). This guarantees the applicability of our proposal for randomized QNNs on NISQ devices for observable learning tasks.

_Renyi entropy measurement.-_ We have established the universal expressivity for observables, and now we consider targets that are non-linear functions of density matrices. One example of such targets is the Renyi entropy, which is also of experimental interest. To compute the Renyi entropy for a subsystem \(A\) consisting of the central \(N_{\text{sub}}\) qubits, we first calculate the reduced density matrix \(\hat{\rho}_{A}\) of an input state \(|\psi_{m}\rangle\) by tracing out the degrees of freedom of the complementary subsystem \(\bar{A}\). We then choose the target as \[\mathcal{T}_{m}=\text{Tr}_{A}[\hat{\rho}_{A}^{n}], \tag{6}\] which is related to the \(n\)-th Renyi entropy through \(S_{A}^{(n)}=-\frac{1}{n-1}\ln(\mathcal{T}_{m})\). Since we are directly measuring a local property of the input wavefunction, it is reasonable to fix \(\hat{U}_{1}\) and \(\hat{U}_{2}\) to the identity matrix \(\hat{I}\) and focus on the random layer \(\hat{U}_{r}\) with \(k=N_{\text{sub}}\). This approach provides a lower bound on the expressivity of randomized QNNs.
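For concreteness, the target of Eq. (6) can be computed classically for small systems by a partial trace. Below is a minimal NumPy sketch under the paper's setup (subsystem \(A\) is the central \(N_{\text{sub}}\) qubits); the function name and the even split of the remaining qubits around \(A\) are our illustrative choices.

```python
import numpy as np

def renyi_target(psi, n_sys, n_sub, n=2):
    """Target of Eq. (6): T = Tr_A[rho_A^n], with subsystem A taken as
    the central n_sub qubits of an n_sys-qubit pure state psi."""
    left = (n_sys - n_sub) // 2              # qubits to the left of A
    right = n_sys - n_sub - left             # qubits to the right of A
    psi = psi.reshape(2**left, 2**n_sub, 2**right)
    # Reduced density matrix: trace out the complement of A.
    rho_a = np.einsum('iaj,ibj->ab', psi, psi.conj())
    return np.trace(np.linalg.matrix_power(rho_a, n)).real

rng = np.random.default_rng(1)
psi = rng.normal(size=32) + 1j * rng.normal(size=32)
psi /= np.linalg.norm(psi)                   # random N_sys = 5 state
t = renyi_target(psi, n_sys=5, n_sub=2, n=2)
s2 = -np.log(t)                              # second Renyi entropy S_A^(2)
```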
Because the target \(\mathcal{T}_{m}\) is proportional to \(\rho^{n}\), we choose the function \(f_{\mathbf{\beta}}(\mathbf{x})\) to be a polynomial up to the \(n\)-th order. However, it is worth noting that lower order polynomials may also work in certain cases [45]. We prepare a dataset with random states \(|\psi_{m}\rangle\), the detailed description of which is provided in the supplementary materials [41]. The numerical results for \(n=2\), \(N_{\text{sys}}=5\) and \(N_{\text{sub}}=1,2\) are shown in FIG. 3. To make accurate predictions, we need \(N_{r}=3\) for \(N_{\text{sub}}=1\) and \(N_{r}=9\) for \(N_{\text{sub}}=2\).

Figure 2: _Observable learning by QNN with a random layer._ (a) The training mean absolute loss is shown as a function of the training epoch for observable learning with \(N_{\text{sys}}=2\). The solid lines are averaged over the training process for 10 different random target operators with random initializations, and the shaded region represents the standard deviation. The dashed lines are the validation loss with the dataset containing 200 samples. (b) Mean absolute loss of the training dataset for \(N_{\text{sys}}\in\{2,3,4,5\}\) and \(N_{r}\in\{1,2,3,4\}\). Markers are averaged over 10 different random target operators with random initializations, and error bars are the standard deviation.

We have also checked that the threshold of \(N_{r}\) does not change if we instead consider \(n=3\), whose results are shown in the supplementary information [41]. It is interesting to compare our results to the proposed random measurement protocol for Renyi entropies. Our results indicate that \(N_{r}\) scales as \(3^{N_{\text{sub}}}\) when measuring Renyi entropies. In comparison, the previous protocol required each single-qubit gate \(\hat{u}_{q}^{i}\) to be sampled from the circular unitary ensemble [39, 18]. For \(n=2\), the circular unitary ensemble can be replaced by unitary 2-designs, which can be realized by the Clifford group. Since the single-qubit Clifford group contains 24 elements, the total number of unitary matrices \(\hat{U}_{r,i}\) would naively scale as \(24^{N_{\text{sub}}}\). However, in practice, this can be significantly reduced because randomized measurement protocols only require \(N_{\text{s}}\) snapshots sampled from the full ensemble. The theoretical bound of \(N_{\text{s}}\) for measuring general linear observables in a subsystem with \(N_{\text{sub}}\) qubits using random Pauli measurements up to an error \(\epsilon\) is given by \(N_{\text{s}}\gtrsim 3^{N_{\text{sub}}}/\epsilon^{2}\) [19, 18]. Consequently, in this quantum neural network structure, the number of unitary matrices \(\hat{U}_{r,i}\) that contribute is at most \(3^{N_{\text{sub}}}\), matching the scaling found by the randomized QNNs.

_Image recognition.-_ We consider image recognition, a more practical machine learning task, to demonstrate the enhanced expressivity of randomized QNNs. In this case, we use Google's 'Street View of House Number (SVHN)' dataset as an example [46]. Each image in the dataset corresponds to an integer number. For demonstration purposes, we select two categories of images containing the numbers '1' and '4'. Initially, we compress each image into an \(8\times 8\) pixel format, resulting in a \(64\)-dimensional real vector, which can be equivalently represented as a \(32\)-dimensional complex vector. Subsequently, we encode the image into the input wave function using \(N_{\text{sys}}=5\) qubits [41].
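The encoding step just described can be sketched as follows. How the 64 real pixel values are paired into 32 complex amplitudes is not specified in the main text (the details are in the supplementary materials [41]); the interleaved real/imaginary pairing below is our assumption for illustration.

```python
import numpy as np

def encode_image(img8x8):
    """Amplitude-encode an 8x8 real image into a 5-qubit state:
    64 real pixels -> 32 complex amplitudes -> normalized |psi>.
    The (real, imaginary) pairing of consecutive pixels is assumed."""
    v = np.asarray(img8x8, dtype=float).ravel()     # 64 real values
    amps = v[0::2] + 1j * v[1::2]                   # 32 complex values
    return amps / np.linalg.norm(amps)              # dim 2^5 = 32

rng = np.random.default_rng(2)
psi = encode_image(rng.random((8, 8)))
assert psi.shape == (32,) and np.isclose(np.vdot(psi, psi).real, 1.0)
```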
Unlike previous tasks, the mapping between the input and the output is highly complex and non-local, lacking a simple understanding. Consequently, we allow both \(\hat{U}_{1}\) and \(\hat{U}_{2}\) to be trainable. After measuring a single qubit, we choose a \(5^{\text{th}}\)-order polynomial for the function \(f_{\mathbf{\beta}}(\mathbf{x})\). Since image recognition here is a two-category classification task, after obtaining the final ensemble-averaged prediction \(\mathcal{P}_{m}\), we apply a logistic-sigmoid function to restrict the prediction to the interval (0,1), \(\mathcal{G}_{m}=1/(1+\exp(-\mathcal{P}_{m}))\), and use the cross-entropy loss \(\mathcal{L}=\frac{1}{N_{D}}\sum_{m}-\mathcal{T}_{m}\log(\mathcal{G}_{m})-(1-\mathcal{T}_{m})\log(1-\mathcal{G}_{m})\) to optimize the parameters in the randomized QNN. The accuracy \(F=\frac{1}{N_{D}}\sum_{m}|[\text{sign}(\mathcal{G}_{m}-0.5)+1]/2-\mathcal{T}_{m}|\) for \(N_{r}=1\) and \(N_{r}=4\) is shown in FIG. 4. For \(N_{r}=1\), the accuracy saturates at approximately 0.8 after a large number of epochs, while the averaged accuracy for the test dataset reaches \(69.8\%\). The introduction of a single random layer with \(N_{r}=4\) significantly enhances the accuracy of the predictions. In this case, the training dataset achieves an accuracy higher than \(90\%\), and the average accuracy for the test dataset is \(82.29\%\). The utilization of a non-trivial random layer with \(N_{r}=4\) demonstrates a significant improvement in the prediction capabilities of QNNs, indicating the enhanced expressivity of our randomized QNN architecture.

_Outlook.-_ This work introduces the concept of randomized quantum neural networks, which include random layers where quantum gates are selected from an ensemble of unitary matrices. It is proven that these random layers provide universal expressivity for general physical observables using Uhlmann's theorem for majorization. Numerical simulations further show that this architecture achieves high expressivity for non-linear functions of the density matrix, such as Renyi entropies and image recognition, with small ensemble sizes \(N_{r}\). These results indicate that the proposed method has potential for broad applications on NISQ devices.

Figure 3: _Renyi entropy measurements by QNN with a random layer._ The training loss is shown as a function of the training epoch for purity with \(N_{\text{sys}}=5\) and (a) \(N_{\text{sub}}=1\) or (b) \(N_{\text{sub}}=2\). The results are averaged over the training process for 10 different random initializations, and the shaded region represents the standard deviation. The dashed lines are the validation loss with the dataset containing 200 samples.

Figure 4: _Image recognition by QNN with a random layer._ The training loss is shown as a function of the training epoch for the image recognition task. The results are averaged over the training process for 10 different random initializations, and the shaded region represents the standard deviation.

We further highlight the differences between our architecture and the proposal presented in a very recent paper [47]. Their work also incorporates a series of parameterized quantum circuits, where the circuit consists of multiple parametrized (controlled-) rotations that share the same parameter. In contrast, our architecture features only a few random layers described by a tensor product of single-qubit gates, making its training process more efficient.
While the focus of this work is on parameterized quantum circuits with brick wall structures, it is straightforward to combine this novel architecture with other proposals to further improve expressivity or learning efficiency. For instance, it is possible to add ancilla qubits and explore more sophisticated architectures for the deterministic layers. Additionally, it would be interesting to investigate the impact of random layers on other quantum machine learning algorithms beyond traditional quantum neural networks [48, 49, 50, 51, 52, 53], such as quantum autoencoders [48, 49].

_Acknowledgement.-_ We thank Yingfei Gu, Ning Sun, Ce Wang, Hai Wang, and Yi-Zhuang You for helpful discussions. YW and XL are supported by the National Program on Key Basic Research Project of China (Grant No. 2021YFA1400900), the National Natural Science Foundation of China (Grant No. 11934002), and the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01). YW is supported by the National Natural Science Foundation of China (Grant No. 12174236). JY is supported by the National Natural Science Foundation of China (Grant No. 11904190).
2310.13077
NeuroSMPC: A Neural Network guided Sampling Based MPC for On-Road Autonomous Driving
In this paper we show an effective means of integrating data-driven frameworks with sampling-based optimal control to vastly reduce the compute time for easy adoption and adaptation to real-time applications such as on-road autonomous driving in the presence of dynamic actors. Presented with training examples, a spatio-temporal CNN learns to predict the optimal mean control over a finite horizon that precludes further resampling, an iterative process that makes sampling-based optimal control formulations difficult to adopt in real-time settings. Generating control samples around the network-predicted optimal mean retains the advantage of sample diversity while enabling real-time rollout of trajectories that avoid multiple dynamic obstacles in an on-road navigation setting. Further, the 3D CNN architecture implicitly learns the future trajectories of the dynamic agents in the scene, resulting in successful collision-free navigation despite no explicit future trajectory prediction. We show performance gains over multiple baselines in a number of on-road scenes through closed-loop simulations in CARLA. We also showcase the real-world applicability of our system by running it on our custom Autonomous Driving Platform (AutoDP).
Kaustab Pal, Aditya Sharma, Mohd Omama, Parth N. Shah, K. Madhava Krishna
2023-10-19T18:15:20Z
http://arxiv.org/abs/2310.13077v1
# NeuroSMPC: A Neural Network guided Sampling Based MPC for On-Road Autonomous Driving

###### Abstract

In this paper we show an effective means of integrating data-driven frameworks with sampling-based optimal control to vastly reduce the compute time for easy adoption and adaptation to real-time applications such as on-road autonomous driving in the presence of dynamic actors. Presented with training examples, a spatio-temporal CNN learns to predict the optimal mean control over a finite horizon that precludes further resampling, an iterative process that makes sampling-based optimal control formulations difficult to adopt in real-time settings. Generating control samples around the network-predicted optimal mean retains the advantage of sample diversity while enabling real-time rollout of trajectories that avoid multiple dynamic obstacles in an on-road navigation setting. Further, the 3D CNN architecture implicitly learns the future trajectories of the dynamic agents in the scene, resulting in successful collision-free navigation despite no explicit future trajectory prediction. We show performance gains over multiple baselines in a number of on-road scenes through closed-loop simulations in CARLA. We also showcase the real-world applicability of our system by running it on our custom Autonomous Driving Platform (AutoDP).

## I Introduction

On-road autonomous driving entails persistent and consistent real-time decision making in often highly dynamic and evolving scenes. To accomplish this, most trajectory planning frameworks employ a scheme of generating multiple candidate trajectory proposals and scoring them according to an appropriate cost function [3, 23, 8]. The candidate trajectories are typically obtained from large-scale driving data [3] or by sampling from a parameter distribution that defines a trajectory [23]. As an alternative, sampling-based optimization paradigms have been popular in the high-dimensional planning literature, as they seamlessly integrate the dynamical model of the system over which the trajectory plans are computed. The major concern here, however, is the difficulty of rolling out trajectories in real time.

In this paper we propose a novel framework that interleaves neural-network-driven prediction of finite horizon controls with samples drawn from the variance centred around the mean predicted by the network. The deep network outputs a vector of controls that constitute the optimal mean control over a finite horizon. The network is supervised with the best controls from the elite samples of the sampling based control framework [2]. Typically this mean is expected to be close to the global optimum [1, 15]. The best rollout sample is one that optimizes a scoring or cost function detailed later. We retain the advantages accrued from the original framework by not executing the network output controls but choosing the best rollout candidate from the samples around the network output controls. At the same time, we overcome the curse of time complexity of such frameworks by selecting the optimal mean obtained from the network, which is at least three orders of magnitude faster. It is to be noted that almost all such formulations involve many iterations of the following two steps to converge to the optimal control (a minimal sketch of this loop follows the list):

* Selection of a subset of the candidate samples (the elite set) based on a scoring function.
* Update of the mean in accordance with the elite samples and re-sampling from the newly updated distribution.
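For reference, a toy version of this iterative elite-set loop (a cross-entropy-method-style optimizer) is sketched below; the cost function and all hyper-parameters are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def elite_update_loop(cost_fn, horizon, n_samples=256, n_elite=32,
                      n_iters=5, sigma=0.5, seed=0):
    """Toy cross-entropy-style optimizer: sample control sequences,
    keep the elite set under cost_fn, refit the mean, and resample.
    This is the iterative loop the paper replaces with a single
    network-predicted mean; all hyper-parameters are illustrative."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, 2))                     # [v, omega] per step
    for _ in range(n_iters):
        samples = mean + sigma * rng.normal(size=(n_samples, horizon, 2))
        costs = np.array([cost_fn(u) for u in samples])
        elite = samples[np.argsort(costs)[:n_elite]]  # step 1: elite set
        mean = elite.mean(axis=0)                     # step 2: mean update
    return mean

# Toy cost: track v = 1 m/s with smooth steering.
toy_cost = lambda u: ((u[:, 0] - 1.0) ** 2).sum() + (np.diff(u[:, 1]) ** 2).sum()
mean_controls = elite_update_loop(toy_cost, horizon=30)
```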
Apart from this, the network through its spatio-temporal convolution (3D-CNN) architecture implicitly learns the evolution of dynamic actors over time without resorting to explicit trajectory prediction. Despite the lack of explicit trajectory prediction, the executed trajectory is collision-free in a diverse set of on-road navigation scenarios with many dynamic actors. We attribute this to the extremely low latency, high frequency control roll-outs possible through the deep network. The paper contributes specifically in the following ways:

1. A neural network interleaved sampling based optimal control that computes finite horizon control rollouts in real time, thereby making it suitable for real-time self driving in on-road scenes.
2. We show that 3D convolutions, which convolve the spatial and temporal components of a time-sequenced Bird's Eye View (BEV) layout, implicitly learn the future trajectories of the dynamic obstacles, so much so that the planner based on the network output avoids collision with dynamic obstacles despite the lack of explicit trajectory prediction into the future.
3. A number of closed loop simulations in diverse scenarios in the CARLA simulator as well as real-time on-campus navigation through our AutoDP confirm the efficacy and real-time capability of the proposed framework.
4. Moreover, the comparative analysis of compute time with other sampling based optimal control frameworks [2, 1] clearly depicts the vast performance gain of the proposed method.
5. Further, we propose _RoadSeg_, a system for on-board real-time road segmentation in known environments that can efficiently generate BEVs in resource constrained setups.

Fig. 1: **Run on a real car:** We show that our model trained on synthetic data from a simulation is able to run successfully in a real world setting. We ran the model on a real self driving car that avoids an obstacle in front of it in a controlled on-campus scene to ensure safety. The sequence of images in the top row shows that the ego-vehicle (highlighted in green) is able to avoid the obstacle (highlighted in yellow). The sequence of images in the bottom row shows the corresponding occupancy grid maps along with its predicted trajectory (red) and the best sampled trajectory (green) in the ego-vehicle coordinate frame.

## II Related Work

A large portion of on-road driving literature is devoted to getting layout representations [11, 16, 14], agent trajectory prediction [9, 20] and end-to-end trajectory generation [14, 23]. Most of the trajectory generation frameworks involve choosing the best possible trajectory out of an elite set that is typically obtained from a large dataset of driving examples or expert trajectories [23, 14]. However, the expert trajectories, despite their diversity, can be sub-optimal and need not be the best response for a given input scenario. Most of these methods do not show their trajectory planning in closed loop simulation scenarios, as they are evaluated on pre-recorded datasets that prevent perception and planning in coupled closed loop settings. Also, none of the above methods showcase their formulation on a real self driving vehicle in real world on-road scenes. Elsewhere, in the manipulation planning community, sampling based optimal control [17, 22] has become popular essentially due to the derivative-free blackbox optimization feature of such frameworks. Such sampling based trajectory optimization methods suffer from the curse of time complexity, with a number of variants being proposed to overcome this disadvantage.
For instance, [2] resorts to efficient parallelization of controls, while [15] contributes through sample efficient methods, and in [1] gradient based updates of sample parameters lead to enhanced performance. Nonetheless these methods still need iterative improvement of the sample parameters to reach optimum mean control values, which precludes their adaptation to on-road real-time autonomous driving applications. In contrast to the above methods, we showcase real-time closed loop simulations on a number of CARLA scenes along with a closed loop implementation on our AutoDP to achieve point to point on-campus autonomous driving. Moreover, our low latency control rollout facilitates navigation in dynamic scenes without collisions despite the lack of explicit trajectory prediction into the future of the dynamic actors.

## III Methodology

### _Problem statement_

For an on-road driving scene, given a point cloud and the global path that our AutoDP needs to follow, we consider the problem of generating occupancy grid maps from the point cloud data and using the occupancy grid map to generate controls that drive the AutoDP towards obstacle free regions on the road while moving along its global path. The state of the vehicle at timestep \(h\) is represented by the vector \(\widetilde{\mathbf{x}}_{h}=\left[x_{h},y_{h},\theta_{h}\right]\) where \(x_{h}\) and \(y_{h}\) represent the \(x\) and the \(y\) coordinate of the vehicle respectively and \(\theta_{h}\) is the orientation of the vehicle. The controls for the vehicle at timestep \(h\) are represented by the vector \(\mathbf{u}_{h}=\left[v_{h},\omega_{h}\right]\) where \(v_{h}\) and \(\omega_{h}\) are the velocity and the angular velocity of the vehicle respectively.

### _Pipeline_

We show the pipeline of our proposed framework in Fig. 2. The point cloud from our AutoDP is first passed through the RoadSeg network to segment the points that belong to the road. The points that do not belong to the road are considered as obstacles. A down-projection operation is performed on the obstacle points to create a birds-eye view (BEV) occupancy grid map. 5 of these BEVs are then stacked together and passed through a 3D-CNN based architecture to get the mean controls. These mean controls are used as the mean of a sampling based MPC to sample more trajectories. The best trajectory is chosen from amongst these trajectories to be executed by our AutoDP.

#### III-B1 **RoadSeg Network**

For the system to work in real time on a resource constrained setup, it's essential that the individual building blocks have minimal GPU footprint and the highest possible FPS. The major bottleneck in our pipeline is road segmentation, which is usually a GPU-heavy process. We can take advantage of the fact that our AutoDP is meant to operate in a known map. We can pre-segment the road using heavy neural networks, then distill this information with a smaller network that can work in real time. The resultant network is like an implicit representation of the operational area wherein the robot can navigate [7]. To do this, we first create a map of the operational area of our vehicle using LEGO-LOAM [19]. This map is used for global localization and global planning, similar to other autonomous driving platforms [6]. The map is created with our custom calibrated lidar-camera setup. Since we have 3D points and corresponding RGB values with the calibrated setup, we can use off-the-shelf segmentation models to segment out the road in the 3D map offline.
The choice of the segmentation model is arbitrary. Once the road points are segmented out, we can distill this information with a smaller neural network, which we call the RoadSeg network. This network takes in the \(x,y,z\) points, which are passed to a NeRF-like positional encoder [12] followed by a few MLP layers. The network's job is to predict whether the current points belong to the road or not, and it is trained using the offline-segmented LEGO-LOAM map as ground truth. Though the LEGO-LOAM map is very sparse, the positional encoding allows the network to learn general spatial trends and makes it capable of segmenting out the road of the entire (much denser) lidar point clouds at inference. It's also capable of segmenting out regions that were previously not seen in the map creation process (like those occupied by other vehicles). The RoadSeg network is shown in Fig 3. We show the advantage of using our RoadSeg network over other baselines in Sec. IV-B.

#### III-B2 **3D-CNN based Neural Network**

The occupancy grid map at the current timestep \(T\), along with the past 4 occupancy grid maps and the global path, are concatenated into a \(6\times H\times W\) spatio-temporal tensor. This spatio-temporal tensor is then passed into our 3D-CNN based encoder architecture to extract a feature vector. The feature vector is then passed through 4 fully-connected layers to get an output vector of dimension \(H\times 2\) which represents the optimal controls (velocity and angular velocity). The feature vector from the encoder architecture encodes the spatio-temporal correlations between the occupancy grid maps. For our implementation we have used a slightly modified version of MobileNet-V2 [18] as the encoder.

#### III-B3 **Sampling based MPC**

The objective is to generate controls for short horizons of \(H\) timesteps into the future. The output of the 3D-CNN based neural network is used as the mean of a Gaussian distribution. We now sample \(N\) control sequences of length \(H\) from this Gaussian distribution and roll them out using the unicycle kinematics model of the vehicle to get \(N\) trajectories. Each of these trajectories is then scored with two cost functions:

* _Smoothness cost_: The smoothness cost ensures that we always give more preference to a trajectory with a smooth change in its linear and angular velocities. \[\hat{c}_{ang}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})=\sqrt{\sum_{h=1}^{H-1}(u_{\omega_{h}}-u_{\omega_{h-1}})^{2}}\] (1) \[\hat{c}_{lin}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})=\sqrt{\sum_{h=1}^{H-1}(u_{v_{h}}-u_{v_{h-1}})^{2}}\] (2) where \(u_{v}\) and \(u_{\omega}\) represent the linear and angular velocity components of the sampled controls respectively.
* _Obstacle avoidance cost_: The obstacle avoidance cost ensures that the trajectories do not collide with any obstacles (occupied cells in the occupancy grid map). \[d(\widetilde{\mathbf{x}}_{h},\widetilde{\mathbf{o}}_{h})=\|\widetilde{\mathbf{x}}_{h}-\widetilde{\mathbf{o}}_{h}\|_{2}\] (3) Here \(\widetilde{\mathbf{x}}_{h}\) and \(\widetilde{\mathbf{o}}_{h}\) are the states of the agent and the obstacle respectively at time-step \(h\). The agent and the obstacles are represented as circles. Let \(r_{A}\) and \(r_{O}\) be the radii of the agent and the obstacle respectively.
The state \(\widetilde{\mathbf{x}}_{h}\) is said to be in collision with the obstacle state \(\widetilde{\mathbf{o}}_{h}\) if the euclidean distance \(d(\widetilde{\mathbf{x}}_{h},\widetilde{\mathbf{o}}_{h})\) between the agent's position and the obstacle's position is less than or equal to the sum of their radii. \[\hat{c}_{obs}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})=\begin{cases}\infty,&\text{if }d(\hat{\mathbf{x}}_{h},\hat{\mathbf{o}}_{h})\leq r_{A}+r_{O}\text{ for some }h\in[0,H)\\ 0,&\text{otherwise}\end{cases} \tag{4}\] where \(\hat{\mathbf{x}}_{i}\) and \(\mathbf{u}_{i}\) are the \(i\)-th trajectory and controls out of the \(N\) sampled trajectories and controls. The final cost \(\hat{C}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})\) for each trajectory is calculated as the weighted sum of the three costs \[\hat{C}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})=w_{ang}\hat{c}_{ang}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})+w_{lin}\hat{c}_{lin}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i})+w_{o}\hat{c}_{obs}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i}) \tag{5}\] where \(w_{ang}\), \(w_{lin}\) and \(w_{o}\) are the weights chosen by the user. The trajectory with the smallest cost is chosen as the best trajectory, and the controls at \(h=1\) are used to drive the ego-vehicle, after which we recompute a new trajectory again. Figure 4 shows the output of the neural network and the best trajectory. A minimal sketch of this rollout-and-scoring step is given at the end of this subsection.

Fig. 3: The RoadSeg network takes in the point positions as input, passes them through a NeRF-like positional encoding followed by a fully connected layer. The result is a segmentation score (road/not-road) for each point.

Fig. 2: **Pipeline:** Each point cloud from our AutoDP is first passed through the RoadSeg network to segment the points that belong to the road. The non-road points are considered as obstacles. A down-projection operation is performed on the obstacle points to create a BEV occupancy grid map. 5 past BEVs are then stacked together and passed through a 3D-CNN architecture to get the mean controls for a short horizon. These mean controls are then used as the mean of a sampling based MPC to sample more trajectories. The best trajectory is chosen from amongst these trajectories to be executed by our AutoDP.

### _Dataset_

Each sample in our dataset consists of a sequence of 5 occupancy grid maps along with the global path as the input and the short horizon optimal controls as the output. We used the CARLA simulator [5] to create a dataset. Obstacles were spawned with varying velocities randomly amongst the CARLA maps. Each occupied cell in the occupancy grid map is used as an obstacle while planning the optimal controls using the Sampling based MPC (SMPC). Since our SMPC operates in the center line reference frame (CRF), we transform all the obstacles to the frame attached to the global path [21]. This allows us to treat curved roads as straight roads and plan feasible trajectories easily without handling the curvature bound constraints. To plan trajectories that avoid collision with dynamic obstacles, whose velocities are known from the CARLA ground truth data, we used the SMPC with the MPPI update rule. The optimal trajectories are then transformed back to the global frame so that the trajectories are within the curvature bounds of the roads in the global frame. During inference the BEV does not need to be converted from the global frame to the CRF, which is a big advantage of our method.
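The following sketch illustrates the single-shot rollout-and-scoring step of Eqs. (1)-(5): sample control sequences around a (here hand-coded stand-in for the) network-predicted mean, roll them out with the unicycle model, and keep the lowest-cost trajectory. The weights, radii, time step and toy obstacle are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rollout_unicycle(u, dt=0.1):
    """Roll an (H, 2) control sequence [v, omega] through the unicycle
    kinematics model to get an (H, 3) state trajectory [x, y, theta]."""
    x = y = theta = 0.0
    traj = np.zeros((len(u), 3))
    for h, (v, w) in enumerate(u):
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
        traj[h] = (x, y, theta)
    return traj

def trajectory_cost(traj, u, obstacles, r_a=1.0, r_o=1.0,
                    w_ang=1.0, w_lin=1.0, w_o=1.0):
    """Weighted cost of Eq. (5); weights and radii are illustrative."""
    c_ang = np.sqrt((np.diff(u[:, 1]) ** 2).sum())        # Eq. (1)
    c_lin = np.sqrt((np.diff(u[:, 0]) ** 2).sum())        # Eq. (2)
    d = np.linalg.norm(traj[:, None, :2] - obstacles[None], axis=-1)
    c_obs = np.inf if (d <= r_a + r_o).any() else 0.0     # Eqs. (3)-(4)
    return w_ang * c_ang + w_lin * c_lin + w_o * c_obs

rng = np.random.default_rng(3)
mean_u = np.tile([1.0, 0.0], (30, 1))      # stand-in for the network output
samples = mean_u + 0.3 * rng.normal(size=(64, 30, 2))
obstacles = np.array([[2.5, 0.4]])         # a single toy (static) obstacle
costs = [trajectory_cost(rollout_unicycle(u), u, obstacles) for u in samples]
best_u = samples[int(np.argmin(costs))]    # controls of the best trajectory
```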
The 3D-CNN layers also learn the dynamic nature of the scene implicitly, so we do not need a separate obstacle trajectory predictor.

## IV Experimental Evaluation

### _Qualitative Analysis_

In this section, we show a qualitative comparison between the trajectories generated by our NeuroSMPC (NSMPC) formulation and trajectories generated by the Model Predictive Path Integral (MPPI) and gradient based Cross-Entropy Method (GradCEM) approaches. All the methods generate a short horizon (30 timesteps) control (velocity and angular-velocity) sequence. These controls are rolled out using the unicycle kinematics model to generate the trajectory. The NSMPC takes the past 5 birds-eye view occupancy grid maps and the global path as input. For both the MPPI and GradCEM approaches, the occupied cells in the occupancy grid map are considered as obstacles. In both approaches the distribution is updated iteratively to get the mean controls. In Fig. 5 we show the trajectory generated by our method compared to the trajectories generated by MPPI and GradCEM. We observe that the trajectory generated from our NSMPC formulation is qualitatively similar to the trajectories generated by the iterative approaches on an empty straight road, an empty curved road, and a road with obstacles.

### _Quantitative Analysis_

**RoadSeg Network:** The RoadSeg network, though limited to the current operational area, allows for much faster and more accurate road segmentation with minimal GPU footprint. We compare our RoadSeg network with two other baselines, RANSAC based plane segmentation and LSeg [10], in terms of computation time and GPU memory utilization. From Table I, we can observe that we outperform the other approaches by a significant margin. The lower GPU footprint of RoadSeg allows us to run the entire NeuroSMPC pipeline on a single laptop in our AutoDP.

| **Approach** | **GPU Memory Utilization (MB)** | **Inference Time (sec)** |
| --- | --- | --- |
| RoadSeg | 790 | 0.0015 |
| RANSAC | - | 0.08 |
| LSeg | 3492 | 0.018 |

TABLE I: RoadSeg performance compared against RANSAC and LSeg. RoadSeg results in a lower GPU footprint and faster inference.

**Neural guided Sampling based MPC:** We compare our method with two other sampling based MPC baselines, MPPI and GradCEM, during inference with respect to the number of iterations required to reach an optimal trajectory. In Table II, we show that using our method we achieve an optimal trajectory in a single shot, whereas using MPPI or GradCEM we need to iteratively update the mean to get an optimal trajectory. For MPPI we need to update the mean of the distribution for 5 iterations, whereas for GradCEM we need to update the mean for 3 iterations.

Fig. 4: This figure demonstrates the output trajectory (red) of the neural network, the sampled trajectories (blue) from a Gaussian distribution with the neural network output as the mean, and the best trajectory (green) selected from the sampled trajectories after scoring them with the smoothness and obstacle avoidance cost functions.

### _Sim2Real_

One of the key challenges of a deep learning based system is to ensure that a model trained on a dataset collected from a simulator can also run effectively in the real world. While it is very easy to generate large volumes of data from a simulator, it is very difficult to accurately model real world physical phenomena like friction, impact and uncertainty in a simulator.
This results in the dataset from the simulator not representing the real physical world accurately. We were able to overcome this issue by using a simplified form of data. The occupancy grid map from the simulator lidar is very similar to the occupancy grid map from the real world lidar. This is because the lidar configuration in the simulator is the same as on our AutoDP. The only thing missing in the simulator is sensor noise, which we overcame by randomly perturbing the lidar points by a small amount. After training the model on synthetic data collected from a simulator, we were able to run the model on our AutoDP. Fig. 1 shows the execution of the model on our AutoDP while avoiding an obstacle in a controlled on-campus scene. While there were still some instances where the model's output was colliding with the obstacle, the sampling nature of our formulation ensures that we still get an obstacle avoiding trajectory from the distribution of trajectories sampled with the model's output as mean.

### _Why Sample?_

The neural network's output is stochastic in nature and is not always guaranteed to avoid collisions with the obstacles. To ensure safety we need to make sure that the trajectory being executed by the AutoDP is always collision free. To solve this we sample a distribution of trajectories by using our model's output as the mean. The sampled trajectories are then scored using the cost functions mentioned in Section III-B3. The best trajectory is then selected from the sampled trajectories and executed by the AutoDP. Fig. 6 shows an example scenario where the neural network output is colliding with the obstacle in front. The best trajectory (green) chosen from the sampled trajectories is then executed by the AutoDP to avoid colliding with the obstacle.

## V Implementation and Training

The MPPI, GradCEM and our Neural guided Sampling based MPC have been implemented using the PyTorch [13] library and trained on a system with an Intel Core i7-5930K (6 cores, 12 threads) CPU and one NVIDIA RTX A4000 GPU with 16 GB VRAM and 64 GB RAM. We used the Lion optimizer [4] with a learning rate of 0.001 to train our network parameters.

## VI Conclusion

This paper proposed NeuroSMPC, a novel neural network guided sampling based optimal controller, as an effective mechanism for interleaving single shot optimal inference with sampling based frameworks. NeuroSMPC overcomes the need for iterative resampling in sampling based optimal control frameworks by inferring single shot optimal trajectory rollouts, yet retains the advantage of sample diversity by sampling controls around the predicted rollouts. NeuroSMPC trajectory rollouts are similar to those of sampling based optimal control formulations [1, 2] yet come at a much faster clip, thus making it suitable for real time settings like on-road autonomous driving.

| **Approach** | **No. of iterations** |
| --- | --- |
| NSMPC (Ours) | 0 |
| MPPI | 5 |
| GradCEM | 3 |

TABLE II: Comparison of NSMPC (our method) with MPPI and GradCEM in terms of the number of update iterations required to get an optimal trajectory.

Fig. 5: We show a qualitative comparison of the trajectories generated by NSMPC (our method), MPPI and GradCEM in three different driving conditions: a straight empty road, a curved empty road and a road with obstacles. The grey line denotes the future trajectory of the dynamic obstacles.
Fig. 6: **Why sample?** Since the NN output may not always be collision free, we sample a distribution of trajectories (red) by using the NN output (blue) as the mean of a Gaussian distribution. The best trajectory (green) is then selected to be executed by the AutoDP.

NeuroSMPC's spatio-temporal convolution architecture implicitly learns the dynamic nature of scenes without the need for explicit prediction of future states, as it seamlessly avoids dynamic obstacles through an implicit understanding of their future evolution. NeuroSMPC has been tested in various CARLA scenes, and a sim2real transfer on our AutoDP (Autonomous Driving Platform) for on-road campus autonomous driving establishes its efficacy.

## VII Acknowledgements

This work has been partially funded by the Centre for Artificial Intelligence & Robotics (CAIR).
2302.11317
Neural Network Analytic Continuation for Monte Carlo: Improvement by Statistical Errors
This study explores the use of neural network-based analytic continuation to extract spectra from Monte Carlo data. We apply this technique to both synthetic and Monte Carlo-generated data. The training sets for neural networks are carefully synthesized without ``data leakage". We found that the training set should match the input correlation functions in terms of statistical error properties, such as noise level, noise dependence on imaginary time, and imaginary time-displaced correlations. We have developed a systematic method to synthesize such training datasets. Our improved algorithm outperforms the widely used maximum entropy method in highly noisy situations. As an example, our method successfully extracted the dynamic structure factor of the spin-1/2 Heisenberg chain from quantum Monte Carlo simulations.
Kai-Wei Sun, Fa Wang
2023-02-22T12:03:22Z
http://arxiv.org/abs/2302.11317v1
# Neural Network Analytic Continuation for Monte Carlo: Improvement by Statistical Errors

###### Abstract

This study explores the use of neural network-based analytic continuation to extract spectra from Monte Carlo data. We apply this technique to both synthetic and Monte Carlo-generated data. The training sets for neural networks are carefully synthesized without "data leakage". We found that the training set should match the input correlation functions in terms of statistical error properties, such as noise level, noise dependence on imaginary time, and imaginary time-displaced correlations. We have developed a systematic method to synthesize such training datasets. Our improved algorithm outperforms the widely used maximum entropy method in highly noisy situations. As an example, our method successfully extracted the dynamic structure factor of the spin-\(\frac{1}{2}\) Heisenberg chain from quantum Monte Carlo simulations.

**Keywords:** Neural Network, Analytic Continuation, Quantum Monte Carlo

**PACS:** 07.05.Mh, 02.70.Ss

## 1 Introduction

Numerical analytic continuation (AC) solves the following inversion problem, \[G(\tau)=\int d\omega K(\tau,\omega)A(\omega). \tag{1}\] The goal of AC is to extract the real frequency spectrum \(A(\omega)\) from the imaginary-time correlation function \(G(\tau)\), which is typically obtained by Monte Carlo simulation. The spectrum \(A(\omega)\) is required to be non-negative at every \(\omega\)-point and is subject to a certain sum rule \(\int d\omega A(\omega)=\text{const}\). \(K(\tau,\omega)\) is the inversion kernel, the form of which depends on the specific problem being handled. This study involves two kinds of inversion kernels \(K(\tau,\omega)\): \(K_{F}(\tau,\omega)=e^{-\tau\omega}/(1+e^{-\beta\omega})\) and \(K_{S}(\tau,\omega)=e^{-\tau\omega}\). \(K_{F}(\tau,\omega)\) usually appears when calculating single-particle excitation spectra from measured Green's functions [1, 2]. \(K_{S}(\tau,\omega)\) is usually involved when extracting dynamic structure factors from spin-spin correlation functions in some spin models [3]. To carry out actual calculations, \(\tau\) and \(\omega\) are often discretized as \(\tau_{i}=\tau_{1},\cdots,\tau_{M}\), \(\omega_{i}=\omega_{1},\cdots,\omega_{N}\). The target problem can then be reformulated as \(G(\tau_{i})=\sum_{j=1}^{N}K(\tau_{i},\omega_{j})A(\omega_{j})\Delta\omega.\) For simplicity, \(\Delta\omega\) will be absorbed into \(A(\omega_{j})\) by \(A(\omega_{j})\Delta\omega\to A(\omega_{j})\) in further discussions. The sum rule is then discretized as \(\sum_{i=1}^{N}A(\omega_{i})\Delta\omega=\text{const}\). This looks like a simple problem of matrix inversion at first sight, but it turns out to be a notoriously challenging task due to the ill-conditioned nature of the inversion: in almost all cases, the corresponding condition numbers go far beyond the tolerance of existing computers' machine precision. Several methods have been proposed to solve this problem, such as the Maximum Entropy method (Maxent) [2] and Stochastic Analytic Continuation (SAC) [4]. Both of them succeed in extracting empirically correct spectra. However, these methods usually demand highly accurate simulated correlation functions \(G_{\text{sim}}(\tau)\). As an emerging technique for machine learning, neural networks (NNs) [5] have experienced great success in a variety of physics-related domains.
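To see the ill-conditioning concretely, one can discretize Eq. (1) with the kernel \(K_F\) and inspect the condition number directly. A minimal NumPy sketch follows; the inverse temperature and the \(\tau\), \(\omega\) grids are illustrative choices, not values from the paper.

```python
import numpy as np

# Discretize Eq. (1) with the fermionic kernel K_F on illustrative grids.
beta = 10.0                                  # inverse temperature (assumed)
taus = np.linspace(0.0, beta / 2, 64)        # imaginary-time grid tau_i
omegas = np.linspace(-8.0, 8.0, 200)         # real-frequency grid omega_j
d_omega = omegas[1] - omegas[0]

K_F = np.exp(-taus[:, None] * omegas[None, :]) \
      / (1.0 + np.exp(-beta * omegas[None, :]))

# The forward map A -> G is well behaved; a synthetic two-peak spectrum:
A = np.exp(-(omegas - 2.0) ** 2) + 0.5 * np.exp(-(omegas + 3.0) ** 2 / 2)
A /= A.sum() * d_omega                       # enforce the sum rule
G = K_F @ (A * d_omega)                      # G(tau_i) = sum_j K A d_omega

# The inverse problem is ill-conditioned: the condition number of K_F
# dwarfs the reciprocal of machine precision, so direct inversion
# amplifies any statistical noise in G beyond recovery.
print(np.linalg.cond(K_F))
```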
From the perspective of machine learning, analytic continuation can be viewed as a vector-to-vector prediction task, where \(G(\tau_{i})\) is mapped to \(A(\omega_{j})\). To construct a neural network capable of performing analytic continuation, both the network topology and the training set should be built appropriately. The common framework for this task usually contains several steps: (1) Build a neural network. (2) Synthesize spectra \(A_{\text{train}}\) for training purposes. (3) Calculate \(G_{\text{train}}\) by the forward mapping \(A\to G\). Note that the forward mapping is well-conditioned, so \(G_{\text{train}}\) can be determined exactly. (4) Train the network using the dataset pair \((G_{\text{train}},A_{\text{train}})\) so that spectra predicted from \(G_{\text{train}}\) closely match \(A_{\text{train}}\). (5) When developing and testing NNs, synthesize a testing set (\(G_{\text{test}}\), \(A_{\text{test}}\)) and evaluate the performance of the trained NN on it. When using NNAC in actual tasks, apply the trained network to predict spectra \(A_{\text{pred}}\) from simulated correlation functions \(G_{\text{sim}}\) generated by Monte Carlo simulations. To mimic real-world simulated data, noises are usually added to correlation functions obtained from synthetic spectra, such as \(G_{\text{train}}\) and \(G_{\text{test}}\). In a relatively early study, Hongkee Yoon and co-authors [6] designed a network mainly based on fully-connected layers (FCLs) [5]. In their research, both training and testing sets are obtained from synthetic Gaussian-type multi-peak spectra. Noises of Gaussian distribution are added to \(G_{\text{train}}\) and \(G_{\text{test}}\). The trained NN performs well on the testing set, as the predicted spectra are very close to the synthetic testing spectra. Several different network structures [7, 8, 9, 10] trained on similar Gaussian-type datasets have also been proposed. In addition to synthetic datasets, neural-network-based analytic continuation (NNAC) has also been examined on some exactly solvable models, such as the one-dimensional transverse-field Ising model [11] and a harmonic oscillator linearly coupled to an ideal environment [12]. In these two studies, artificial training sets (\(G_{\text{train}}\),\(A_{\text{train}}\)) are generated from exactly solved correlation functions and corresponding spectra. Different spectra in the training set correspond to different parameter values in the Hamiltonian being studied. Target spectra \(A_{\text{pred}}\) are predicted from simulated correlation functions \(G_{\text{sim}}\) using Monte Carlo techniques. Ref. [11] points out that the neural network's prediction performance can be improved by adding uniform noises to the exactly solved Green's functions at each imaginary time in the training set. In principle, we have no knowledge of the precise shapes of the spectra before the target spectra are actually predicted; for instance, Gaussian-type spectra should not be assumed in advance. This is an intriguing topic dubbed "data leakage" [13] in the field of machine learning. Data leakage occurs when information is used in the training process that is not expected to be available at prediction time. All aforementioned articles about NNAC suffer from data leakage at some level. In practice, we usually apply numerical analytic continuation to models that are not exactly solvable, where it is not possible to construct training sets from exactly solved spectra.
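To make steps (2)–(3) concrete, here is a toy synthesis sketch (our own illustration; the Gaussian peak generator and the small sample count are placeholders for the ASEP/Skew/Lorentz peaks and dataset sizes defined in Section 2.1):

```python
# Toy synthesis of a training pair (A_train, G_train) via the exact forward mapping.
import numpy as np

rng = np.random.default_rng(1)
M, N = 512, 1024
tau = np.linspace(0.0, 16.0, M)
omega = np.linspace(-15.0, 15.0, N)
K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-16.0 * omega[None, :]))

def random_spectrum(n_peaks):
    """Sum of random peaks, normalized so that sum_j A(omega_j) = 1."""
    A = np.zeros_like(omega)
    for _ in range(n_peaks):
        m = rng.uniform(-5, 5)          # peak center
        w = rng.uniform(0.3, 3)         # peak width
        h = rng.uniform(0.2, 1)         # peak height
        A += h * np.exp(-((omega - m) / w) ** 2)
    return A / A.sum()

A_train = np.stack([random_spectrum(rng.integers(1, 5)) for _ in range(10_000)])
G_train = A_train @ K.T                 # exact, well-conditioned forward mapping
```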
To design the training set, hints from experiments or traditional AC approaches such as Maxent should also be explored. It should be mentioned that NNAC is useful even when spectra have already been obtained from Maxent: NNAC performs better at least in highly noisy cases, as described in Ref. [12]. This topic will also be elaborated upon in this paper. In general, domain knowledge [14, 15], especially possible spectrum peak shapes, should be incorporated when designing the training set as much as feasible, but without data leakage. We then expect the trained NN to generalize [16, 17] well enough to handle unobserved correlation functions like \(G_{\text{test}}\) and \(G_{\text{sim}}\). Intuitively, one expects better predictions of spectra when more information is incorporated. Monte Carlo simulations can provide more information beyond the measured correlation functions, such as the statistical errors of \(G(\tau)\). Specifically, they can provide information regarding two aspects of statistical errors: the measured errors \(R(\tau_{i})\) of \(G(\tau_{i})\) at each \(\tau_{i}\), and the covariance of correlation functions at different imaginary times. This work avoids data leakage while synthesizing the training sets and incorporates information about statistical errors to improve the performance of NNAC. With these means, NNAC has the potential to be a usable algorithm in practical applications and a significant component in the Monte Carlo-Analytic Continuation toolchain. In section 2, NNAC of kernel \(K_{F}(\tau,\omega)\) is examined on synthetic data, where datasets synthesized from spectra of different types of shapes are addressed. In section 3, NNAC of kernel \(K_{S}(\tau,\omega)\) is applied to the one-dimensional Heisenberg chain as a real-world example of an AC problem. Conclusions are presented in the final section. ## 2 NNAC on Synthetic Datasets In this section, we design and test NNs on synthetic datasets. Principles for generating training sets will be developed. We first discuss three types of datasets, the training framework, as well as the actual training process. Noise level matching between the training and the testing set is then explored. The resulting spectra are then compared with those from Maxent. The impact of measured noise shapes and time-displaced correlations is then investigated. ### Preparation of Dataset Multi-peak spectra \(A(\omega)\) are produced by summing over single peaks \(F(\omega)\). \[A(\omega)=\frac{1}{Z}\sum_{i}F_{i}(\omega). \tag{2}\] In the formula above, \(Z\) is a scaling constant ensuring that \(A(\omega)\) obeys the sum rule. In this section, we assume \(\int d\omega A(\omega)=1\) for convenience. This paper involves three distinct peak types: asymmetric exponential power (ASEP), skew Gaussian (Skew), and Lorentz. The ASEP single-peak curve reads: \[F^{\text{ASEP}}(\omega)=\begin{cases}h\exp\big{[}-(\frac{m-\omega}{a_{1}})^{b_{1}}\big{]},\omega<m;\\ h\exp\big{[}-(\frac{\omega-m}{a_{2}})^{b_{2}}\big{]},\omega\geq m.\end{cases} \tag{3}\] In the above formula, \(h\), \(m\), \(a_{1}\), \(a_{2}\), \(b_{1}\), \(b_{2}\) are all control parameters. In this study, we set \(m\in[-5,5]\), \(a_{1},a_{2}\in[0.3,3]\), \(b_{1},b_{2}\in[1,3]\), \(h\in[0.2,1]\). The Skew peak takes the form \[F^{\text{Skew}}(\omega)=\begin{cases}0,z\leq 0;\\ \frac{h}{az}\exp(-\frac{y^{2}}{2}),z>0.\end{cases} \tag{4}\] Here \(z(\omega)=1-k\frac{\omega-m}{a}\) and \(y=\frac{1}{k}\ln(z)\). Control parameters are \(m\in[-2,2]\), \(a\in[0.5,1]\), \(k\in[-1,1]\), and \(h\in[0.2,1]\).
The Lorentz curve takes the relatively simple form \[F^{\text{Lorentz}}(\omega)=h\frac{1}{(\omega^{2}-a^{2})^{2}-g^{2}\omega^{2}}, \tag{5}\] where \(g\in[1,2]\), \(a\in[2,4]\) and \(h\in[0.2,1]\). In this study, we investigate spectra containing one to four peaks. At least \(10^{5}\) samples are generated for each peak number by randomly selecting control parameters. In other words, one single dataset includes at least \(4\times 10^{5}\) samples. Training and testing sets of the same peak type are created independently. The ASEP-type dataset has the most control parameters among these three types and thus contains a greater diversity of spectra, while not explicitly containing spectra of the Skew-type or Lorentz-type datasets. We expect the neural network to learn from the ASEP-type dataset and generalize effectively to achieve good performance on the other two datasets. It should be noted that, unlike in some previous studies, we will not examine Gaussian-type spectra here, as they are explicitly included in the ASEP-type dataset when \(b_{1}=b_{2}=2\) and \(a_{1}=a_{2}\). Such explicit inclusion does not frequently occur in real-world AC tasks, and the performance of NNAC would be overestimated on Gaussian-type testing sets. The imaginary time \(\tau\in[0,16]\) is discretized uniformly into \(512\) pieces and the frequency domain \(\omega\in[-15,15]\) is discretized into \(1024\) pieces. \(\beta\) is fixed to be \(16\) in the fermion kernel \(e^{-\tau\omega}/(1+e^{-\beta\omega})\). ### Training Framework Convolutional neural networks (CNNs) [18] are employed in this work. FCL-based neural networks were also evaluated in the early stages of this study and proved inferior to CNNs. Adding residual modules [19] or deep layer aggregation [20] also did not yield significant improvements. In the case of deep layer aggregation, both iterative deep aggregation and hierarchical deep aggregation were attempted. Based on the aforementioned factors, we employ the neural network shown in Figure 1. At first the \(512\)-length \(G(\tau_{i})\) is transferred to a \(p\)-length vector via an FCL (labeled "Dense") and then reshaped to be a \(1\times p\) matrix. This matrix can be regarded as a specific image that can be naturally processed by convolution layers. Next, this image is passed to a \(q\)-channel one-dimensional convolution layer "Conv1d", followed by the activation layer "Swish". Within the "Conv1d" layer, convolution kernels of size \(1\times 3\) are used. Within the activation layer, the activation function named "Swish" [21] is used. This activation function is both non-monotonic and smooth, and may improve the overall performance of the neural network compared to the commonly used ReLU [22] activation function, according to Ref. [21]. This "convolution \(\rightarrow\) activation" process is carried out \(n\) times. The \(q\)-channel image is then compressed by an average-pooling layer [23] and flattened to be a \(pq/2\)-long vector. The flattened vector is mapped to a 1024-long vector by another "Dense" layer. Ultimately, the "SoftMax" layer presents predictions of the spectra, where the sum rule \(\sum_{j}A(\omega_{j})=1\) is naturally satisfied after this softmax operation. Tricks to reduce overfitting such as dropout [24] are not adopted here. Instead, we recommend enlarging the training set when signs of overfitting emerge, since it is rather cheap to acquire data from synthetic spectra. Hyper-parameters are chosen to be \(n=8\), \(p=64\), and \(q=64\).
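For concreteness, a Keras sketch of this architecture is given below. It follows the description above with \(n=8\), \(p=64\), \(q=64\); details the text leaves unstated (e.g., "same" padding and the channel orientation of the reshape) are our assumptions, and it requires a TensorFlow 2.x build with the built-in "swish" activation.

```python
# Sketch of the CNN of Figure 1: Dense -> reshape -> n x (Conv1d + Swish)
# -> average pooling -> flatten -> Dense -> SoftMax.
from tensorflow.keras import layers, models

n, p, q = 8, 64, 64

inputs = layers.Input(shape=(512,))              # G(tau_i) on 512 time points
x = layers.Dense(p)(inputs)                      # 512 -> p
x = layers.Reshape((p, 1))(x)                    # treat as a 1D "image"
for _ in range(n):                               # "convolution -> activation", n times
    x = layers.Conv1D(q, kernel_size=3, padding="same")(x)
    x = layers.Activation("swish")(x)
x = layers.AveragePooling1D(pool_size=2)(x)      # (p, q) -> (p/2, q)
x = layers.Flatten()(x)                          # pq/2-long vector
x = layers.Dense(1024)(x)                        # map to the omega grid
outputs = layers.Activation("softmax")(x)        # enforces sum_j A(omega_j) = 1

model = models.Model(inputs, outputs)
```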
To select appropriate hyper-parameters, we build an additional ASEP-type validation set, on which we evaluate NNs trained on the ASEP-type training set. When selecting hyper-parameters, the trade-off between performance and training time is taken into consideration. We use the Kullback-Leibler divergence (KLD) [25] as the loss function, which takes the form \[D_{\text{KL}}(A_{\text{true}}||A_{\text{pred}})=\sum_{j}A_{\text{true}}(\omega_{j})\ln\frac{A_{\text{true}}(\omega_{j})}{A_{\text{pred}}(\omega_{j})}. \tag{6}\] Figure 1: The convolution-based structure of the neural network used in this work. Hyper-parameters are chosen to be \(n=8\), \(p=64\) and \(q=64\) in the actual training process. KLD measures the difference (more precisely, the relative entropy) between the true distribution \(A_{\text{true}}\) and the predicted distribution \(A_{\text{pred}}\), which makes it a natural choice for this task. Other commonly-used loss functions include the mean absolute error (MAE) and mean squared error (MSE) shown below. Like MAE and MSE, KLD is non-negative. \[\text{MAE}(A_{\text{true}},A_{\text{pred}}) =\frac{1}{N}\sum_{j=1}^{N}\big{|}A_{\text{true}}(\omega_{j})-A_{\text{pred}}(\omega_{j})\big{|} \tag{7}\] \[\text{MSE}(A_{\text{true}},A_{\text{pred}}) =\frac{1}{N}\sum_{j=1}^{N}\big{[}A_{\text{true}}(\omega_{j})-A_{\text{pred}}(\omega_{j})\big{]}^{2} \tag{8}\] Empirically, spectra from NNs with MSE loss are often smoother than those from NNs with MAE loss, since MSE punishes large spectrum differences more severely. In this study, we did not observe a discernible performance difference between MSE-loss and KLD-loss NNs. NNs are programmed using Keras toolkits [26] with Tensorflow [27] backends. The Adam [28] optimizer is used for gradient descent. The early-stopping trick is utilized during training. The training process terminates when the KLD measured on the validation set does not drop for 20 epochs, where the validation set is generated in the same manner as the training set. Trained weights are then restored to the epoch with the lowest KLD. Each training task is repeated at least 5 times with different random seeds. KLDs shown in this paper are averaged among NNs trained with different seeds. The training process is depicted in Figure 2, where both the training set and the testing set are of ASEP type. Errors at noise level \(10^{-3}\) are introduced to \(G_{\text{train}}\) and \(G_{\text{test}}\) (the concept of noise level will be discussed later). Relative values of three statistics measured on the testing set are tracked throughout the training process in Figure 2 (a). We track \(\text{RMSE}=\sqrt{\text{MSE}}\) instead of MSE itself because RMSE shares the same dimension as MAE and KLD. The relative loss in this figure is defined as "loss after this epoch"/"loss after the first epoch". In Figure 2 (b) we show an example from the testing set of how one predicted spectrum becomes closer to the true spectrum at different KLD levels. Selected checkpoints are indicated by red dots in Figure 2 (a). While visualizing the training process, we only use 1000 samples for each epoch because the statistics converge too quickly for visualization if the entire training set containing \(4\times 10^{5}\) samples is used. The complete training set is used in actual AC tasks hereafter. In this study, model training on an RTX3060 graphics card takes approximately 20 minutes on average.
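A sketch of the corresponding training setup is shown below; it reuses `model`, `G_train`, and `A_train` from the sketches above and assumes a validation pair (`G_val`, `A_val`) built the same way. The batch size and maximum epoch count are our guesses, as they are not stated in the text.

```python
# Adam optimizer, KLD loss of Eq. (6), and early stopping with patience 20
# that restores the weights of the best (lowest validation KLD) epoch.
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.KLDivergence())

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)

model.fit(G_train, A_train,
          validation_data=(G_val, A_val),
          epochs=1000, batch_size=256,      # guessed hyper-parameters
          callbacks=[early_stop])
```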
Such a training time is acceptable in the majority of circumstances, especially compared with the simulation time saved by not requiring highly accurate correlation functions. ### Noise Level Matching Correlation functions measured from Monte Carlo simulation inevitably contain statistical errors. To mimic simulated errors, Gaussian-type noises are added to \(G(\tau_{i})\) by \(G(\tau_{i})\to G(\tau_{i})+R(\tau_{i})\), where \(R(\tau_{i})\sim N(0,\sigma^{2})\). Four different noise levels are investigated in this work, \(\sigma=10^{-4},10^{-3},3\times 10^{-3},10^{-2}\). \(\sigma\) in this formula can also be interpreted as the typical magnitude of the noises. At this stage, we assume \(G(\tau_{i})\) to be independently measured for each \(i\). In real-world NNAC-based tasks, the noises of \(G_{\text{sim}}\) are measured from the Monte Carlo simulation, and the noises of the training set should be carefully arranged accordingly. Besides, the noise level of the testing set should be the same as that of the simulated data to mimic real-world tasks. To design the training set, a natural question arises: how should we set the noise level \(\sigma_{\text{train}}\) of the training set when the noise level \(\sigma_{\text{test}}\) of the testing set is known? We train NNs with training sets of different \(\sigma_{\text{train}}\) and apply these NNs to testing sets of different \(\sigma_{\text{test}}\). The corresponding results are shown in Table 1 and Figure 3. Table 1 contains KLDs of spectra predicted from testing sets with different noise levels \(\sigma_{\text{test}}\) by NNs trained on training sets with different \(\sigma_{\text{train}}\). The smallest KLD in each line (marked red) is obtained when the noise levels of the training set and the testing set match (\(\sigma_{\text{train}}=\sigma_{\text{test}}\)). Performance degrades but remains acceptable when \(\sigma_{\text{train}}\) increases beyond \(\sigma_{\text{test}}\), while the opposite does not hold when \(\sigma_{\text{train}}<\sigma_{\text{test}}\). For instance, the KLD is relatively small when \((\sigma_{\text{train}},\sigma_{\text{test}})=(10^{-2},10^{-4})\) but is large and unsatisfactory when \((\sigma_{\text{train}},\sigma_{\text{test}})=(10^{-4},10^{-2})\). That is because the information of ASEP(\(\sigma=10^{-4}\)) is, in a sense, "contained" in ASEP(\(\sigma=10^{-2}\)): for each curve in ASEP(\(\sigma=10^{-4}\)) we may find similar samples with similar noises in ASEP(\(\sigma=10^{-2}\)) if the datasets are large enough, given that noises are randomly selected, whereas the converse is not true. We train NNs with different noise levels and use them to predict one sample of \(G(\tau_{i})\) from the testing set with \(\sigma_{\text{test}}=3\times 10^{-3}\) and \(\sigma_{\text{test}}=10^{-2}\), presented in Figure 3 (a) and (b), respectively. The resulting spectra become closer to the ground truth when \(\sigma_{\text{train}}\) is closer to \(\sigma_{\text{test}}\). In Figure 3 (b), incorrect and unstable peaks are predicted by the NNs trained with \(\sigma_{\text{train}}=10^{-4}\) or \(10^{-3}\), whose KLDs are correspondingly large, as seen in Table 1. Note that in this part, data leakage is not intentionally avoided: the training set and the testing set are both of ASEP type. With the same \(\sigma_{\text{test}}\), KLD differences caused by different \(\sigma_{\text{train}}\) may be relatively small, and taking datasets with different line shapes would introduce unnecessary complexity, possibly leading to unsound or even incorrect conclusions.
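As a minimal illustration of this noise model (our own sketch; `G_train` is a clean matrix of correlation functions as synthesized earlier), matching noise levels simply means using the \(\sigma\) measured on the test or simulated data for the training set as well:

```python
# Add i.i.d. Gaussian noise of level sigma to clean correlation functions:
# G(tau_i) -> G(tau_i) + R(tau_i), with R(tau_i) ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(2)

def add_noise(G_clean, sigma):
    return G_clean + rng.normal(0.0, sigma, size=G_clean.shape)

sigma_test = 3e-3                          # noise level of the test/simulated data
G_train_noisy = add_noise(G_train, sigma=sigma_test)   # sigma_train = sigma_test
```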
Figure 2: Tracking the training process. (a) Relative losses, including KLD, MAE, and RMSE, _w.r.t._ the number of trained epochs. The relative loss is defined as "loss after this epoch"/"loss after the first epoch". (b) A typical example of the convergence process of one predicted spectrum to the true spectrum as the KLD decreases. Selected checkpoints are labeled by red dots in (a). From another perspective, we expect NNs to use the knowledge learned from the training set to predict correct spectra in actual tasks. The performance is usually slightly weakened if the line shapes of the testing set and training set are different. Therefore, we expect NNs with proper \(\sigma_{\text{train}}\) to at least achieve good results on a testing set with the same line shape. The KLD results here do not represent the actual performance of the NNs in practical tasks. ### Comparison with Maxent With the knowledge of noise level matching, \(G_{\text{train}}\) will hereafter be designed to have the same noise level as \(G_{\text{test}}\), and we are now ready to compare NNAC with traditional AC methods like Maxent. We train NNs on ASEP training sets and use them to predict ASEP-type, Skew-type, and Lorentz-type spectra, respectively. \begin{table} \begin{tabular}{||l|l|l|l|l||} \hline \(\sigma_{\text{test}}\backslash\sigma_{\text{train}}\) & \(10^{-4}\) & \(10^{-3}\) & \(3\times 10^{-3}\) & \(10^{-2}\) \\ \hline \hline \(10^{-4}\) & 0.0137(3) & 0.0151(4) & 0.0181(2) & 0.0280(1) \\ \hline \hline \(10^{-3}\) & 0.0172(1) & 0.0164(4) & 0.0185(2) & 0.0280(1) \\ \hline \hline \(3\times 10^{-3}\) & 0.045(2) & 0.0268(3) & 0.0217(1) & 0.02854(9) \\ \hline \hline \(10^{-2}\) & 0.31(2) & 0.148(6) & 0.060(1) & 0.0350(1) \\ \hline \hline \end{tabular} \end{table} Table 1: KLDs of spectra predicted from testing sets with different \(\sigma_{\text{test}}\) by NNs trained on training sets with different \(\sigma_{\text{train}}\). In each line, the smallest KLD (marked red) is obtained when \(\sigma_{\text{train}}=\sigma_{\text{test}}\). To determine the errors of the KLDs in the table, we train NNs with at least 10 distinct random seeds and calculate the statistical uncertainty of the KLDs of spectra predicted by these NNs. Figure 3: Illustration of noise level matching. Ground truths in both sub-figures are the same curve. (a) Prediction of spectra from the testing set with \(\sigma=3\times 10^{-3}\) by NNs trained with different \(\sigma_{\text{train}}\). The best spectrum is obtained when \(\sigma_{\text{train}}=\sigma_{\text{test}}=3\times 10^{-3}\). (b) Prediction of spectra from the testing set with \(\sigma=10^{-2}\) by NNs trained with different \(\sigma_{\text{train}}\). The best spectrum is obtained when \(\sigma_{\text{train}}=\sigma_{\text{test}}=10^{-2}\). The predicted spectrum contains unstable peaks at wrong locations when \(\sigma_{\text{train}}=10^{-4}\) or \(3\times 10^{-3}\). The corresponding outcomes are depicted in Figure 4. Figure 4 (a), (b), and (c) show KLDs of spectra predicted by these two methods on the ASEP, Skew, and Lorentz datasets, respectively. Error bars of KLDs are omitted in this and subsequent figures to make the graphs more comprehensible, as they are relatively small. Typical predicted results at noise level \(3\times 10^{-3}\) are shown in Figure 4 (d), (e), and (f) for the three peak types. The performance of NNAC is comparable with Maxent at the lowest noise level \(10^{-4}\) but outperforms Maxent significantly at relatively high noise levels.
The improvement in prediction is also obvious when the training set and testing set are not of the same spectrum type. In the spectrum examples depicted in Figure 4 (d), (e), and (f), peak locations are precisely predicted by NNAC, whereas Maxent does not provide accurate peak locations at this noise level. At some frequencies, Maxent may even give spurious peak signals. Peak heights predicted by NNAC are also more accurate and closer to the ground truths than Maxent's. Spectra from Maxent in this section about kernel \(K_{F}(\tau,\omega)\) are calculated mainly based on the software "TRIQS/maxent" [29] so that results can be easily checked. Various \(\alpha\)-choosing algorithms are evaluated, where \(\alpha\) is the penalty coefficient of the entropy term in the Maxent objective function [2]. Figure 4: Comparison with Maxent. NNs are trained on the ASEP dataset and applied to three different testing sets: ASEP, Skew, and Lorentz. (a) to (c): KLDs of predicted results for the ASEP, Skew, and Lorentz datasets, respectively, at different noise levels. (d) to (f): typical predicted spectra at the noise level \(3\times 10^{-3}\) by Maxent and NNAC. The ground truth is also shown as a comparison. The performance of NNAC is comparable with Maxent when the dataset contains low-level noise but outperforms Maxent at high-level noise, even if the NNs are not trained on a dataset of the same type as the testing set. Among the algorithms discussed in Ref. [29], "\(\chi^{2}\)-curvature", which is analogous to \(\Omega\)-Maxent [30], and the "Bryan" algorithm greatly outperform the others in terms of KLD in the tasks of interest. Between these two, "\(\chi^{2}\)-curvature" is marginally superior to the Bryan algorithm. We therefore use "\(\chi^{2}\)-curvature" in this work to ensure a level playing field for Maxent. ### Influence of Noise Dependency on Imaginary Time In the preceding discussion, the noise \(R(\tau_{i})\) at each \(\tau_{i}\) is assumed to be sampled from the same Gaussian distribution with the same variance, which is rarely the case in Monte Carlo simulation. We introduce the noise-shape multiplier \(\lambda(\tau)\) to investigate the influence of the noise dependency on imaginary time and assume \(R(\tau_{i})\sim N(0,\sigma(\tau_{i})^{2})\), \(\sigma(\tau_{i})=\lambda(\tau_{i})\sigma\). We refer to this dependency as the "noise shape" hereafter. These multipliers satisfy \(\frac{1}{\beta}\int_{0}^{\beta}\lambda(\tau)d\tau=1\) to ensure that datasets with the same \(\sigma\) but different noise shapes are at approximately the same noise level. \(\lambda(\tau)\) of four distinct linear shapes, labeled A, B, C, and D, are displayed in Figure 5 (a). To demonstrate the impact of the noise shape and how to appropriately arrange noises in the training set, we train NNs on ASEP-type training sets with equal noise (\(\lambda(\tau)=1\)) and with noise shape A, respectively. These trained NNs are applied to Skew-type testing sets with noise shape A. The corresponding measured KLDs are presented in Figure 5 (b). Spectrum examples at noise level \(3\times 10^{-3}\) are shown in Figure 5 (c). The origins of the different performances under different noise shapes can, to some extent, be explained by permutation feature importance (PFI) [31], despite the fact that neural networks are typically seen as black boxes. To calculate the PFI, we randomly rearrange \(G(\tau_{i})\) over samples at one certain time piece \(\tau_{i}\) in the testing set, and the PFI at this time piece is defined by how much the resulting KLD increases.
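A minimal sketch of this PFI estimate is given below (our own illustration; `model`, `G_test`, and `A_test` follow the earlier sketches):

```python
# PFI at time piece tau_i: shuffle G(tau_i) across test samples and record
# how much the testing KLD increases relative to the unshuffled baseline.
import numpy as np

def kld(A_true, A_pred, eps=1e-12):
    return np.mean(np.sum(A_true * np.log((A_true + eps) / (A_pred + eps)), axis=1))

def pfi(model, G_test, A_test, i, rng):
    base = kld(A_test, model.predict(G_test, verbose=0))
    G_perm = G_test.copy()
    rng.shuffle(G_perm[:, i])               # destroy information at tau_i only
    return kld(A_test, model.predict(G_perm, verbose=0)) - base

rng = np.random.default_rng(3)
importances = [pfi(model, G_test, A_test, i, rng) for i in range(G_test.shape[1])]
```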
The relative PFI difference between NNs trained on datasets of linear noise shapes and on the equal-noise dataset is defined as \([\text{PFI}^{\text{T}}(\tau_{i})-\text{PFI}^{\text{E}}(\tau_{i})]/[\text{PFI}^{\text{T}}(\tau_{i})+\text{PFI}^{\text{E}}(\tau_{i})]\). \(\text{PFI}^{\text{E}}(\tau_{i})\) denotes the PFI from NNs trained on the equal-noise dataset and \(\text{PFI}^{\text{T}}(\tau_{i})\) denotes the PFI from NNs trained on a dataset of some other noise shape, where \(\text{T}\in[\text{A},\text{B},\text{C},\text{D}]\). The resulting relative PFI differences are shown in Figure 5 (d). A moving average over five adjacent points is carried out to make the curves smoother and clearer. The relative PFI curves and the \(\lambda(\tau)\) curves increase or decrease in opposite directions, which means that the NNs assign large feature importance to imaginary-time pieces where \(G(\tau_{i})\) is less noisy. It should be emphasized that measured correlation functions do not often have linear-type noise shapes; instead, they are frequently of exponential-like shapes. However, things become more subtle in the case of exponential noise shapes, where it is more difficult to disentangle the effects of different noise levels and noise shapes. In light of these concerns, we only examine linear-type noise shapes here, and we believe the physical picture is similar in other scenarios. ### Influence of Time-Displaced Correlation So far we have assumed that \(G(\tau_{i})\) at different \(\tau_{i}\) are measured independently, which is not always true in practical Monte Carlo simulation. In that case, the covariance, rather than the independent errors of \(G(\tau_{i})\), should be considered. The covariance can be written as \(\Sigma_{ij}=U(\tau_{i})C_{ij}U(\tau_{j})\), where \(U(\tau_{i})\) is the independently measured statistical error of \(G(\tau_{i})\) and \(C\) is the correlation matrix, with \(C_{ij}\) defined as the Pearson correlation of the measured \(G(\tau_{i})\) and \(G(\tau_{j})\). Figure 5: Influence of linear noise shapes. (a) Four types of shape multiplier \(\lambda(\tau)\). (b) Noises of the testing set are of shape A. Two neural networks are trained on training sets with equal noise (\(\lambda(\tau)=1\)) and with noise shape A, respectively. KLDs of the trained neural networks are compared on the shape-A testing set at different noise levels. (c) Typical spectra predicted from \(G(\tau)\) with shape-A noises (\(\sigma=3\times 10^{-3}\)) by neural networks trained on the equal-noise and shape-A training sets, respectively. (d) The relative difference in PFI between the neural network trained on a training set with linearly-shaped noise and the neural network trained on a training set with uniformly-shaped (\(\lambda(\tau)=1\)) noise at various imaginary times. In practical AC tasks, the covariance \(\Sigma\) of \(G_{\text{sim}}\) should be measured before designing the training set. If we require the training set to share the same covariance as the testing set, the noises of the training set should be generated from the corresponding joint Gaussian distribution, that is, \(R\sim N(0,\Sigma)\). To illustrate the influence of time-displaced correlation, we create a toy correlation matrix for the testing set by \[C_{ij}=\frac{1}{1+|i-j|^{1/\gamma}}. \tag{9}\] In this work, we investigate correlation matrices with condition numbers \(10^{3}\), \(10^{6}\), \(10^{9}\), and \(10^{12}\), obtained by adjusting \(\gamma\). \(U(\tau)\) are generated at the four noise levels \(\sigma\in\{10^{-4},10^{-3},3\times 10^{-3},10^{-2}\}\).
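Drawing training noise \(R\sim N(0,\Sigma)\) with the toy correlation matrix of Eq. (9) can be sketched as follows (our own illustration; for brevity the per-\(\tau\) errors \(U(\tau_{i})\) are taken equal, and \(\gamma\) is simply a knob to be tuned toward a target condition number):

```python
# Generate one correlated noise realization R ~ N(0, Sigma), with
# Sigma_ij = U_i * C_ij * U_j and C_ij = 1 / (1 + |i-j|^(1/gamma)), Eq. (9).
import numpy as np

M, sigma, gamma = 512, 1e-2, 2.0
idx = np.arange(M)
C = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]) ** (1.0 / gamma))
U = np.full(M, sigma)                      # equal errors at every tau, for brevity
Sigma = C * np.outer(U, U)                 # Sigma_ij = U_i C_ij U_j

# Sample via the eigen-decomposition, which tolerates near-singular Sigma.
w, V = np.linalg.eigh(Sigma)
w = np.clip(w, 0.0, None)                  # guard tiny negative rounding errors
rng = np.random.default_rng(4)
R = V @ (np.sqrt(w) * rng.standard_normal(M))
```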
NNs are trained on ASEP-type datasets and applied to Skew-type testing sets with various noise levels and condition numbers. Training sets are designed in two ways: they may have zero correlation or the same correlation as the testing set. In Figure 6 (a), the condition number of the testing set is fixed to be \(10^{12}\). NNs are trained on datasets with or without time-displaced correlation at each noise level. As illustrated, the influence of the \(\tau\)-correlation is not significant at low noise levels, but correlation mismatching may lead to incorrect predictions at high noise levels. In Figure 6 (b), the noise level of the testing set (and the training set as well) is fixed to be \(10^{-2}\); KLDs are lower when the condition number is smaller. The reason may be that \(R(\tau_{i})\) is dominated by only a few singular values of \(\Sigma\), whose noise pattern is relatively easy for NNs to learn. Spectrum examples are shown in Figure 6 (c) with noise level \(10^{-2}\) and condition number \(10^{12}\), which contain the spectra predicted by NNs trained with zero correlation or with the same correlation as the testing set, as well as the ground truth. Clearly, the predicted spectra contain peaks at wrong locations when the time-displaced correlation is not matched. Figure 6: Illustration of the influence of the time-displaced correlation of \(G(\tau_{i})\). Noise levels of the training set and the testing set are matched. (a) The condition number of the correlation matrix is fixed to be \(10^{12}\). NNs trained on datasets without correlation may give wrong predictions if \(G(\tau_{i})\) in the testing set is correlated, especially when the noise level is high. (b) Noise levels are fixed to be \(10^{-2}\). KLDs are shown _w.r.t._ different condition numbers. (c) Spectrum examples with condition number \(10^{12}\) and noise level \(10^{-2}\). ## 3 NNAC on Heisenberg Chain In this section, NNAC is carried out to extract dynamic structure factors of the spin-\(\frac{1}{2}\) anti-ferromagnetic Heisenberg chain of length \(L\), which reads \[H=J\sum_{i=1}^{L}\vec{S}_{i}\cdot\vec{S}_{i+1}. \tag{10}\] \(\vec{S}_{i}\) represents a spin located on site \(i\). Periodic boundary conditions are assumed, _i.e._, \(\vec{S}_{L+1}=\vec{S}_{1}\). The imaginary-time-displaced spin-spin correlation of the \(z\)-component is measured by stochastic series expansion [32]. \[G_{i,j}(\tau) = \langle e^{\tau H}S_{i}^{z}e^{-\tau H}S_{j}^{z}\rangle, \tag{11}\] \[G_{k}(\tau) = \frac{1}{L}\sum_{i,j}G_{i,j}(\tau)e^{-ik(r_{i}-r_{j})}. \tag{12}\] \(G_{i,j}(\tau)\) is the time-displaced spin-spin correlation of the \(z\)-component between spin \(i\) and spin \(j\). The target correlation function \(G_{k}(\tau)\) in the wave-vector domain is then calculated via Fourier transformation. \(r_{i}\) denotes the location of spin \(i\), where the lattice constant is set to be 1. \(J\) is used as the energy unit. We set the inverse temperature \(\beta=1\) in the Monte Carlo simulation. In this work we will focus on \(k=\pi\), and \(G_{k}(\tau)\) will be denoted by \(G(\tau)\) for the sake of simplicity. The AC task then reads \(G(\tau)=\int d\omega e^{-\tau\omega}A(\omega)\), where \(A(\omega)\) is the target dynamic structure factor. The corresponding sum rule is obtained by setting \(\tau=0\), _i.e._, \(\int d\omega A(\omega)=G(0)\). The same NN structure and hyper-parameters are used as in the previous section. The frequency \(\omega\) takes the range \(\omega\in[-10,10]\).
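The Fourier step of Eq. (12) is straightforward to implement; a minimal sketch follows (our own illustration with placeholder data, taking \(r_{i}=i\) for the unit lattice constant):

```python
# Transform measured real-space correlations G_{i,j}(tau) to wave-vector
# space at k = pi, per Eq. (12).
import numpy as np

L, n_tau = 32, 512
rng = np.random.default_rng(5)
G_ij = rng.random((L, L, n_tau))            # placeholder for measured data

k = np.pi
r = np.arange(L)                            # r_i = i, lattice constant 1
phase = np.exp(-1j * k * (r[:, None] - r[None, :]))

G_k = np.einsum('ij,ijt->t', phase, G_ij) / L
G_k = G_k.real                              # imaginary part vanishes up to noise
```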
The time domain and the frequency domain are discretized into 512 and 1024 pieces, respectively, as before. The spectrum of the Heisenberg chain can be regarded as a sum of \(\delta\) functions at zero temperature. These \(\delta\) functions broaden as temperature increases. We perform quantum Monte Carlo simulation on a 32-site Heisenberg chain, where the \(\delta\) functions are dense enough on the required energy scale \(\Delta\omega\sim 0.02\) that a smooth spectrum can be obtained. The stochastic series expansion approach with the loop-update algorithm [32] is used in the simulation. The spin-spin correlation is measured every 100 update steps so that auto-correlation can be ignored. The covariance matrix \(\Sigma\) is measured by \(\Sigma_{ij}=[\langle G(\tau_{i})G(\tau_{j})\rangle-\langle G(\tau_{i})\rangle\langle G(\tau_{j})\rangle]/(N_{s}-1)\), where \(N_{s}\) is the number of independent samples. Spin-spin correlation functions are measured using different numbers of Monte Carlo samples to create datasets of different noise levels. In this section, noise levels are represented by the relative statistical error of \(G(0)\), which ranges from \(3.8\times 10^{-3}\) to \(3.6\times 10^{-2}\). Simulated \(G(\tau)\) are divided by the corresponding \(G(0)\) before being fed into the neural networks so that the sum rule is restored to \(\int d\omega A(\omega)=1\). The "SoftMax" layer then yields the correct sum rule, and the scale of the extracted spectra is recovered accordingly by multiplying with \(G(0)\). Correlation functions \(G(\tau_{i})\) at different imaginary times \(\tau_{i}\) are measured independently to ensure zero time-displaced correlation between the \(G(\tau_{i})\). The obtained covariance matrix \(\Sigma\) is then diagonal, since \(\langle G(\tau_{i})G(\tau_{j})\rangle-\langle G(\tau_{i})\rangle\langle G(\tau_{j})\rangle=0\) for \(i\neq j\). Extracted spectra are shown in Figure 7, where Maxent and NNAC are compared. In Figure 7 (a), spectra extracted by Maxent and NNAC from a spin-spin correlation function with relative error \(3.8\times 10^{-3}\) are compared; the two spectra coincide well with each other in this relatively simple single-peak case. Figure 7: Spectra extracted by different methods. (a) Comparison of spectra generated by Maxent and NNAC from highly accurate \(G(\tau_{i})\). (b) KLDs of spectra generated by Maxent and NNAC. The most accurate spectra (with the lowest noise level) are taken as ground truths while calculating KLDs. (c) Spectra predicted by Maxent from \(G(\tau_{i})\) of different noise levels. (d) Spectra predicted by NNAC from \(G(\tau_{i})\) of different noise levels. These two spectra also agree with those obtained from smaller systems using Lanczos-based methods [33]. Figure 7 (b) compares KLDs of the spectra produced by the two methods at different noise levels. The spectrum at the lowest noise level of each method is regarded as its ground truth when calculating KLDs. When the noise level increases, the accuracy of the spectra produced by both Maxent and NNAC decreases, but the accuracy of NNAC decays more slowly than that of Maxent. Here again, the previous conclusion is confirmed: at low noise levels, Maxent and NNAC produce equally accurate results; at high noise levels, however, NNAC performs better than Maxent. Figures 7 (c) and (d) show how the spectra extracted by the two methods change as the noise is gradually increased from \(3.8\times 10^{-3}\) to \(3.6\times 10^{-2}\). The spectra get progressively lower and wider in both cases.
Spectra generated by Maxent exhibit large shifts in peak position, while those generated by NNAC show little shift in peak position. ## 4 Conclusions Applications of neural network-based analytic continuation were discussed in this paper. Numerical experiments are carried out on both synthetic datasets and Monte Carlo data. The main conclusion is that an NN can learn from a carefully designed training set and make good predictions of spectra without data leakage, surpassing Maxent in highly noisy cases. To ensure that the neural network acquires adequate knowledge to predict the target spectral functions, the training dataset should comprise a sufficiently large number of diverse spectral functions. Incorporating information about the measured statistical errors leads to better predictions of spectra: \(G(\tau_{i})\) of the training set should match the simulated correlation functions in terms of the noise at each \(\tau_{i}\) and the time-displaced correlations. While acceptable, the time required for NNAC is relatively long compared to Maxent. Improving the efficiency of model training may be a fruitful area for future investigation. It may be possible to apply the idea of transfer learning [34] here, so that we do not need to train a model from scratch for each target spectrum but can rather begin with a pre-trained model. A more valuable and ambitious goal is to train a model that is general to any spectrum. The input to this model should probably be the simulated correlation functions and the accompanying covariance matrices, which contain most (if not all) of the information needed to perform analytic continuation. ## Acknowledgement FW acknowledges support from the National Natural Science Foundation of China (No. 12274004) and the National Natural Science Foundation of China (No. 11888101). Quantum Monte Carlo simulations are performed on TianHe-1A of the National Supercomputer Center in Tianjin.
2301.01440
Scalable Optimal Design of Incremental Volt/VAR Control using Deep Neural Networks
Volt/VAR control rules facilitate the autonomous operation of distributed energy resources (DER) to regulate voltage in power distribution grids. According to non-incremental control rules, such as the one mandated by the IEEE Standard 1547, the reactive power setpoint of each DER is computed as a piecewise-linear curve of the local voltage. However, the slopes of such curves are upper-bounded to ensure stability. On the other hand, incremental rules add a memory term into the setpoint update, rendering them universally stable. They can thus attain enhanced steady-state voltage profiles. Optimal rule design (ORD) for incremental rules can be formulated as a bilevel program. We put forth a scalable solution by reformulating ORD as training a deep neural network (DNN). This DNN emulates the Volt/VAR dynamics for incremental rules derived as iterations of proximal gradient descent (PGD). Analytical findings and numerical tests corroborate that the proposed ORD solution can be neatly adapted to single/multi-phase feeders.
Sarthak Gupta, Ali Mehrizi-Sani, Spyros Chatzivasileiadis, Vassilis Kekatos
2023-01-04T04:19:12Z
http://arxiv.org/abs/2301.01440v1
# Scalable Optimal Design of Incremental Volt/VAR Control using Deep Neural Networks ###### Abstract Volt/VAR control rules facilitate the autonomous operation of distributed energy resources (DER) to regulate voltage in power distribution grids. According to non-incremental control rules, such as the one mandated by the IEEE Standard 1547, the reactive power setpoint of each DER is computed as a piecewise-linear curve of the local voltage. However, the slopes of such curves are upper-bounded to ensure stability. On the other hand, incremental rules add a memory term into the setpoint update, rendering them universally stable. They can thus attain enhanced steady-state voltage profiles. Optimal rule design (ORD) for incremental rules can be formulated as a bilevel program. We put forth a scalable solution by reformulating ORD as training a deep neural network (DNN). This DNN emulates the Volt/VAR dynamics for incremental rules derived as iterations of proximal gradient descent (PGD). The rule parameters appear as DNN weights. To reduce the DNN depth, we leverage Nesterov's accelerated PGD iterations. Analytical findings and numerical tests corroborate that the proposed ORD solution can be neatly adapted to single/multi-phase feeders. IEEE Standard 1547.8, incremental control rules, multiphase feeders, proximal gradients, gradient backpropagation, deep neural networks. ## I Introduction Local Volt/VAR (Volt-Ampere Reactive) control facilitates voltage regulation on distribution grids by providing reactive power compensation from DERs equipped with smart inverters. Different from centralized control schemes, which incur a large computational and communication burden, local rules decide DER setpoints based on local measurements. Volt/VAR control rules can be categorized into _non-incremental_ and _incremental_ ones. The former compute DER reactive power setpoints based on local voltage readings. The IEEE Standard 1547.8 prescribes such non-incremental control rules as piecewise-linear functions of voltage [1]. On the other hand, incremental Volt/VAR rules compute the _change_ in VAR setpoints as a function of voltage [2, 3, 4, 5, 6]. The existing literature on designing Volt/VAR control rules can be classified into _stability_- and _optimality_-centric works. Stability-centric works study the effect of Volt/VAR rules as a closed-loop dynamical system, which may be rendered unstable under steep slopes of non-incremental rules [7, 8]. In fact, to ensure stability, non-incremental rules may have to compromise on the quality of their steady-state voltage profile [5, 8]. Incremental rules however do not experience stability limitations and can thus achieve improved voltage profiles compared to their non-incremental counterparts. Nonetheless, such improvements may come at the expense of longer settling times of the associated Volt/VAR dynamics [8]. Optimality-centric works focus on designing stable control rules to minimize a voltage regulation objective. To this end, optimization-based strategies have been employed to design affine non-incremental rules using heuristics [9, 10, 11]. Two of our recent works in [12] and [13] have addressed the problem of optimally designing the slope, deadband, saturation, and reference voltage. Reference [12] performs ORD via a bilevel optimization applicable to single-phase feeders. Reference [13] proposes DNN-based digital twins that emulate Volt/VAR dynamics, and reformulates ORD as a DNN training task for single-/multi-phase feeders.
This letter deals with optimally selecting the shape of incremental Volt/VAR control rules, with contributions on three fronts: _c1)_ Although this _optimal rule design_ (ORD) task can be posed as a mixed-integer nonlinear optimization program, it does not scale well with the numbers of DERs, nodes, and grid loading scenarios. To address this challenge, the key idea here is to reformulate ORD as a deep-learning task and judiciously adapt the fast software modules widely available for training deep neural networks (DNNs). We have put forth a similar approach for designing non-incremental control rules in [13]. However, migrating from non-incremental to incremental rules is non-trivial due to the different curve shapes, stability, and settling-time properties. _c2)_ To further expedite ORD for incremental rules, we suggest implementing accelerated Nesterov-type variants of the rules to yield a shallower DNN emulator. _c3)_ We also establish the convergence of incremental rules on multiphase feeders. Recently, reference [14] dealt with the optimal design of incremental rules. It uses DNNs with a single hidden layer to model piecewise-linear functions and formulates ORD as a reinforcement learning task. While [14] also utilizes DNNs to design incremental rules, we differ from it in several ways. Reference [14] focuses on voltage control during transient dynamics, whereas this work aims at ORD to drive steady-state voltages closer to unity and over different grid loading scenarios. Reference [14] utilizes a DNN to model the piecewise-linear mapping of the rule. In contrast, this work develops a DNN-based digital twin that emulates the end-to-end Volt/VAR dynamics. Lastly, we provide stability and convergence analysis for single- and multiphase feeders alike, whereas [14] applies only to single-phase feeders. The rest of this letter is organized as follows. Section II models the feeder and discusses non-incremental and incremental Volt/VAR control rules. Section III formulates DNN-based digital twins for the Volt/VAR dynamics of incremental rules, and their accelerated version. It also presents ORD for single-phase feeders as a deep learning task. Section IV extends the ORD process to multiphase feeders. The incremental rules are then benchmarked against the non-incremental rules from [13] using tests on real-world data in Section V. The letter is concluded in Section VI. ## II Volt/VAR Control Rules Consider a radial feeder serving \(N\) buses equipped with DERs, indexed by \(n\). Let \((\mathbf{q}^{\ell},\mathbf{q})\) collect the reactive loads and generations at all nodes. Vectors \((\mathbf{p},\mathbf{v})\) collect the net active power injections and voltage magnitudes at all nodes. The impact of \(\mathbf{q}\) on \(\mathbf{v}\) can be approximately captured using the linearized grid model [13] \[\mathbf{v}\simeq\mathbf{X}\mathbf{q}+\tilde{\mathbf{v}} \tag{1}\] where \(\tilde{\mathbf{v}}:=\mathbf{R}\mathbf{p}-\mathbf{X}\mathbf{q}^{\ell}+v_{0}\mathbf{1}\) models the underlying _grid conditions_, and \(v_{0}\) is the substation voltage. Vector \(\tilde{\mathbf{v}}\) represents the impact of the non-controlled quantities \((\mathbf{p},\mathbf{q}^{\ell})\) on voltages. Matrices \((\mathbf{R},\mathbf{X})\) depend on the feeder topology. For single-phase feeders, they are symmetric positive definite with positive entries [15]. For multiphase feeders, they are non-symmetric and have both positive and negative entries [5, 13].
Vector \(\mathbf{q}\) in (1) carries the reactive injections by the DERs we would like to control. Per the non-incremental rules of the IEEE Std. 1547 [1], DER setpoints are decided based on the Volt/VAR curve of Fig. 1, which is parameterized by \((\bar{v},\delta,\sigma,\bar{q})\). The standard further constrains these parameters within a polytopic feasible set [1, 12]. The negative slope of the linear segment of the curve in Fig. 1 can be expressed as \[\alpha:=\frac{\bar{q}}{\sigma-\delta}.\] The interaction of Volt/VAR rules with the feeder gives rise to nonlinear dynamics. These dynamics are stable if \(\|\operatorname{dg}(\boldsymbol{\alpha})\mathbf{X}\|_{2}<1\), where \(\operatorname{dg}(\boldsymbol{\alpha})\) is a diagonal matrix carrying the rule slopes over all buses on its diagonal [7]. The equilibrium setpoints for DERs cannot be expressed in closed form. However, they coincide with the minimizer of the convex optimization problem [7] \[\min_{-\bar{\mathbf{q}}\preceq\mathbf{q}\preceq\bar{\mathbf{q}}}\frac{1}{2}\mathbf{q}^{\top}\mathbf{X}\mathbf{q}+\mathbf{q}^{\top}(\tilde{\mathbf{v}}-\bar{\mathbf{v}})+\frac{1}{2}\mathbf{q}^{\top}\operatorname{dg}^{-1}(\boldsymbol{\alpha})\mathbf{q}+\boldsymbol{\delta}^{\top}|\mathbf{q}| \tag{2}\] where \(|\mathbf{q}|\) applies the absolute value on \(\mathbf{q}\) entrywise. Problem (2) depends on the rule parameters \((\bar{v},\delta,\alpha,\bar{q})\) across all buses, collected in the \(4N\)-long vector \(\mathbf{z}:=(\bar{\mathbf{v}},\boldsymbol{\delta},\boldsymbol{\alpha},\bar{\mathbf{q}})\). We denote by \(\mathbf{q}_{\mathbf{z}}(\tilde{\mathbf{v}})\) the equilibrium setpoints, and by \[\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}})=\mathbf{X}\mathbf{q}_{\mathbf{z}}(\tilde{\mathbf{v}})+\tilde{\mathbf{v}} \tag{3}\] the related equilibrium voltages reached by Volt/VAR rules parameterized by \(\mathbf{z}\) under grid conditions \(\tilde{\mathbf{v}}\). _Optimal rule design (ORD)_ can be stated as the task of selecting \(\mathbf{z}\) to bring the equilibrium voltages \(\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}})\) close to unity. To cater to diverse conditions, the utility may sample loading scenarios \(\{\tilde{\mathbf{v}}_{s}\}_{s=1}^{S}\) for the next hour, and find \(\mathbf{z}\) as \[\mathbf{z}^{*}\in\arg\min_{\mathbf{z}} F(\mathbf{z}):=\frac{1}{S}\sum_{s=1}^{S}\|\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}}_{s})-\mathbf{1}\|_{2}^{2}\] (ORD) subject to (3) and \[\mathbf{z}\in\mathcal{Z}.\] Once found, the customized rules \(\mathbf{z}^{*}\) are sent to the DERs to operate autonomously over the next hour. Note that \(\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}}_{s})\) depends on \(\mathbf{z}\) because the equilibrium setpoints \(\mathbf{q}_{\mathbf{z}}(\tilde{\mathbf{v}}_{s})\) in (3) are the minimizers of problem (2), which is parameterized by \(\mathbf{z}\). When solving (ORD) for non-incremental rules, the feasible set \(\mathcal{Z}\) consists of the polytopic constraints imposed on \(\mathbf{z}\) by the IEEE Std. 1547 as well as additional constraints on \(\boldsymbol{\alpha}\) to ensure \(\|\operatorname{dg}(\boldsymbol{\alpha})\mathbf{X}\|_{2}<1\); see [12]. Therefore, the feasible set \(\mathcal{Z}\) can be quite confined. This can lead to less desirable voltage profiles; that is, higher objective values \(F(\mathbf{z}^{*})\). The aforesaid issue can be addressed by replacing the non-incremental Volt/VAR rules of the IEEE Std. 1547 with incremental ones, as suggested in [2, 3, 4, 5, 6].
Incremental rules express the _change_ in setpoints, rather than their actual values, as a function of voltage. One option for incremental rules is to implement a proximal gradient descent (PGD) algorithm solving (2), as proposed in [5]. In this case, the control rule coincides with the PGD iterations, which are implemented by the DERs in a decentralized fashion. Using incremental rules, the set \(\mathcal{Z}\) is enlarged, as now we only need to ensure \[\mathbf{z}\geq\mathbf{0}\] \[0.95\cdot\mathbf{1}\leq\bar{\mathbf{v}}\leq 1.05\cdot\mathbf{1}\] and that \(\bar{\mathbf{q}}\) are within the reactive power ratings of the DERs. The PGD algorithm is an extension of gradient descent to handle constraints and non-differentiable costs [5]. At iteration \(t\), PGD proceeds with two steps: _s1)_ It first computes the gradient of the first two terms of the cost in (2), that is, \(\mathbf{X}\mathbf{q}^{t}+\tilde{\mathbf{v}}-\bar{\mathbf{v}}=\mathbf{v}^{t}-\bar{\mathbf{v}}\). Here \(\mathbf{q}^{t}\) is the latest estimate of the minimizer of (2); _s2)_ PGD then updates \(\mathbf{q}^{t+1}\) as the minimizer of \[\min_{-\bar{\mathbf{q}}\preceq\mathbf{q}\preceq\bar{\mathbf{q}}}\ \frac{1}{2}\mathbf{q}^{\top}\operatorname{dg}^{-1}(\boldsymbol{\alpha})\mathbf{q}+\boldsymbol{\delta}^{\top}|\mathbf{q}|+\frac{1}{2\mu}\|\mathbf{q}-\left(\mathbf{q}^{t}-\mu(\mathbf{v}^{t}-\bar{\mathbf{v}})\right)\|_{2}^{2} \tag{4}\] for a step size \(\mu>0\). The last problem involves the last two terms in the cost of (2), regularized by the Euclidean distance of \(\mathbf{q}\) to the gradient step \(\mathbf{q}^{t}-\mu(\mathbf{v}^{t}-\bar{\mathbf{v}})\) computed in step _s1)_. Converting PGD to control rules, step _s1)_ is performed by the physics of the feeder when injecting \(\mathbf{q}^{t}\) and measuring the local voltage deviations. Step _s2)_ is run by each DER independently, as (4) is separable across buses. Using the subdifferential, solving (4) provides the update [5] \[y_{n}^{t}=\tilde{\alpha}_{n}\cdot\left(q_{n}^{t}-\mu(v_{n}^{t}-\bar{v}_{n})\right) \tag{5a}\] \[q_{n}^{t+1}=g_{n}\left(y_{n}^{t}\right) \tag{5b}\] where \(g_{n}(y_{n})\) is the _proximal operator_ \[g_{n}(y_{n}):=\begin{cases}+\bar{q}_{n}&,\ y_{n}>\overline{q}_{n}+\mu\tilde{\delta}_{n}\\ y_{n}-\mu\tilde{\delta}_{n}&,\ \mu\tilde{\delta}_{n}<y_{n}\leq\overline{q}_{n}+\mu\tilde{\delta}_{n}\\ 0&,\ -\mu\tilde{\delta}_{n}\leq y_{n}\leq\mu\tilde{\delta}_{n}\\ y_{n}+\mu\tilde{\delta}_{n}&,\ -\overline{q}_{n}-\mu\tilde{\delta}_{n}\leq y_{n}<-\mu\tilde{\delta}_{n}\\ -\overline{q}_{n}&,\ y_{n}<-\overline{q}_{n}-\mu\tilde{\delta}_{n}.\end{cases} \tag{6}\] and the new parameters \((\tilde{\alpha}_{n},\tilde{\delta}_{n})\) are defined as \[\tilde{\alpha}_{n}:=\frac{1}{1+\mu/\alpha_{n}}\quad\text{and}\quad\tilde{\delta}_{n}:=\frac{\delta_{n}}{1+\mu/\alpha_{n}}.\] Fig. 1: Non-incremental Volt/VAR control rule provisioned by the IEEE Std. 1547 for the interconnection of DERs [1]. The proximal operator is plotted in the top panel of Figure 2. Note that in (5), the rule parameters are transformed from the representation \(\mathbf{z}=(\bar{\mathbf{v}},\boldsymbol{\delta},\boldsymbol{\alpha},\bar{\mathbf{q}})\) to the representation \(\tilde{\mathbf{z}}:=(\bar{\mathbf{v}},\tilde{\boldsymbol{\delta}},\tilde{\boldsymbol{\alpha}},\bar{\mathbf{q}})\). This is without loss of generality, as the transformation is a bijection, and so one can work exclusively with \(\tilde{\mathbf{z}}\).
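In code, the proximal operator (6) reduces to a soft-threshold followed by clipping; the sketch below (our own NumPy illustration, not from the letter) also implements one local update (5):

```python
# Proximal operator g_n of Eq. (6) and one incremental-rule update of Eq. (5).
import numpy as np

def prox(y, q_bar, mu_delta):
    """Soft-threshold |y| by mu*delta_tilde, then clip to [-q_bar, q_bar]."""
    shrunk = np.sign(y) * np.maximum(np.abs(y) - mu_delta, 0.0)
    return np.clip(shrunk, -q_bar, q_bar)

def pgd_step(q, v, v_bar, alpha_t, delta_t, q_bar, mu):
    """Eq. (5): y = alpha_tilde * (q - mu*(v - v_bar)); q_next = g(y)."""
    y = alpha_t * (q - mu * (v - v_bar))
    return prox(y, q_bar, mu * delta_t), y
```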
The feasible set \(\tilde{\mathcal{Z}}\) for \(\tilde{\mathbf{z}}\) is similar to \(\mathcal{Z}\) with the addition that \(\tilde{\boldsymbol{\alpha}}\leq\mathbf{1}\). As with non-incremental rules, the rules in (5)–(6) are driven by local data, but now \(q_{n}^{t+1}\) depends on \((v_{n}^{t},q_{n}^{t})\), and not on \(v_{n}^{t}\) alone. Both types of rules solve (2). Hence, they both converge to the same equilibrium. The advantage of incremental rules is that they are stable for all \(\boldsymbol{\alpha}\) as long as \(\mu<2/\lambda_{\text{max}}(\mathbf{X})\); see [5]. It is worth stressing that \(\mathbf{z}\) does not have the same physical interpretation as in non-incremental rules (slopes, deadband, or saturation), though \(\mathbf{z}\) parameterizes (2) for both rules. _Accelerated incremental rules_. Although PGD rules enlarge \(\mathcal{Z}\), their settling times can be long. They reach an \(\varepsilon\)-optimal cost of (2) within \(\mathcal{O}\left(\kappa(\mathbf{X})\log(1/\varepsilon)\right)\) iterations. Here \(\kappa(\mathbf{X}):=\lambda_{\text{max}}(\mathbf{X})/\lambda_{\text{min}}(\mathbf{X})\) is the condition number of \(\mathbf{X}\). References [5, 16] put forth _accelerated_ incremental rules based on accelerated PGD (APGD). These rules need only \(\mathcal{O}\left(\sqrt{\kappa(\mathbf{X})}\log(1/\varepsilon)\right)\) iterations to attain an \(\varepsilon\)-optimal cost, and take the form \[\tilde{y}_{n}^{t} :=\left(1+\beta_{t}\right)y_{n}^{t}-\beta_{t}y_{n}^{t-1} \tag{7a}\] \[q_{n}^{t+1} :=g_{n}\left(\tilde{y}_{n}^{t}\right) \tag{7b}\] where \(\beta_{t}:=\frac{t-1}{t+2}\), while \(y_{n}^{t}\) and \(g_{n}(y_{n})\) are as defined in (5a) and (6). Updates (7) remain local, but introduce additional memory, as \(q_{n}^{t+1}\) depends on \((v_{n}^{t},q_{n}^{t})\) and \((v_{n}^{t-1},q_{n}^{t-1})\). ## III Deep Learning for Optimal Rule Design (ORD) in Single-Phase Feeders Solving (ORD) is challenging as it is a nonconvex bilevel program. Although it can be modeled as a mixed-integer nonlinear program, such an approach does not scale well with the number of DERs and/or scenarios for non-incremental rules [13]. Seeking a more scalable solution, we reformulate (ORD) as a deep learning task. The key idea is to design a DNN that emulates the Volt/VAR dynamics under the control rule of (5). To this end, note that \(g_{n}(y_{n})\) is a piecewise-linear function with four breakpoints [5]. Interestingly, this operator can be expressed as the superposition of four rectified linear units (ReLUs), as illustrated in Fig. 2, where ReLUs are denoted by \(\rho(\cdot)\). The intercepts of the ReLUs depend linearly on \((\tilde{\delta}_{n},\bar{q}_{n})\). Building on this, one APGD iteration for DER \(n\) can be implemented by the 4-layer DNN in Fig. 3, whose weights depend affinely on \((\bar{v}_{n},\tilde{\delta}_{n},\tilde{\alpha}_{n},\bar{q}_{n})\). This DNN takes \((q_{n}^{t},v_{n}^{t})\) as its input, and computes \((q_{n}^{t+1},y_{n}^{t})\) at its output. It is termed \(\text{IC}_{n}\) and will be used as a building block to emulate the Volt/VAR dynamics. This is accomplished by the recurrent neural network (RNN) shown in Fig. 4. Here blocks \(\text{IC}_{n}\) are arranged vertically to model the parallel operation of DERs. Their outputs \(\mathbf{q}^{t+1}\) are multiplied by \(\mathbf{X}\), and the new voltage is computed as \(\mathbf{v}^{t+1}=\mathbf{X}\mathbf{q}^{t+1}+\tilde{\mathbf{v}}\). This is repeated \(T\) times.
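The forward pass of this RNN is nothing but the unrolled dynamics. The sketch below (our own toy feeder, reusing `prox` and `pgd_step` from the previous sketch) emulates \(T\) layers of the accelerated rule (7); all the numerical values are placeholders:

```python
# Unrolled emulation of the accelerated rule (7) on a toy positive-definite X.
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 200
A = rng.random((N, N))
X = A @ A.T / N + 0.1 * np.eye(N)            # toy PD "grid" matrix
v_tilde = 1.0 + 0.02 * rng.standard_normal(N)
v_bar, alpha_t, delta_t = np.ones(N), 0.9, 1e-3
q_bar = 0.1 * np.ones(N)
mu = 1.0 / np.linalg.eigvalsh(X).max()       # satisfies mu < 2/lambda_max(X)

q, y_prev = np.zeros(N), np.zeros(N)
for t in range(1, T + 1):
    v = X @ q + v_tilde                      # the feeder "computes" voltages
    _, y = pgd_step(q, v, v_bar, alpha_t, delta_t, q_bar, mu)
    beta = (t - 1) / (t + 2)
    y_acc = (1 + beta) * y - beta * y_prev   # Eq. (7a): momentum on y
    q = prox(y_acc, q_bar, mu * delta_t)     # Eq. (7b)
    y_prev = y
v_T = X @ q + v_tilde                        # approximates equilibrium voltages
```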
Thanks to the RNN structure, there is _weight sharing_, so the number of DNN weights is \(4N\) rather than \(4NT\). The RNN takes a grid loading vector \(\tilde{\mathbf{v}}_{s}\) as its input, the rule parameters \(\tilde{\mathbf{z}}\) as weights, and computes the voltages \(\mathbf{v}_{\tilde{\mathbf{z}}}^{T}(\tilde{\mathbf{v}}_{s})\) at time \(T\) at its output. For the output \(\mathbf{v}_{\tilde{\mathbf{z}}}^{T}(\tilde{\mathbf{v}}_{s})\) to approximate the equilibrium voltages well, the depth \(T\) can be chosen according to the convergence rate of PGD as follows. **Proposition 1**.: _For the DNN of Fig. 4 to ensure \(\|\Phi\left(\tilde{\mathbf{v}};\mathbf{z}\right)-\mathbf{v}^{*}(\mathbf{z})\|_{2}\leq\epsilon_{1}\ \forall\ \tilde{\mathbf{v}}\), its depth \(T\) should satisfy_ \[T\geq\left(\frac{\kappa-1}{2}\right)\log\left(\frac{2\|\mathbf{X}\|_{2}\|\hat{\mathbf{q}}\|_{2}}{\epsilon_{1}}\right). \tag{8}\] Fig. 3: A DNN emulating the accelerated incremental rules of (7). Plain incremental rules can be modeled by dropping the second layer (setting \(\beta^{t}=0\)) and ignoring the output \(y_{n}^{t}\). Fig. 2: Proximal operator \(g(y)\) expressed as a sum of four shifted rectified linear units (ReLUs).
The norm \(\|\mathbf{I}-\mu\mathbf{X}\|_{2}\) achieves its minimum of \(\left(1-\frac{2}{\kappa+1}\right)\) when \[\mu_{0}:=\frac{2}{\lambda_{\max}(\mathbf{X})+\lambda_{\min}(\mathbf{X})}.\] Plugging \(\mu_{0}\) into (9) and unfolding the dynamics over \(t\) provides \[\|\mathbf{q}^{t}-\mathbf{q}^{*}\|_{2}\leq\left(1-\tfrac{2}{\kappa+1}\right)^{t}\|\mathbf{q}^{0}-\mathbf{q}^{*}\|_{2}\leq 2\left(1-\tfrac{2}{\kappa+1}\right)^{t}\|\hat{\mathbf{q}}\|_{2}.\] For the voltage approximation error \(\|\mathbf{v}^{T}-\mathbf{v}^{*}\|_{2}=\|\mathbf{X}\left(\mathbf{q}^{T}-\mathbf{q}^{*}\right)\|_{2}\) at time \(T\) to be smaller than \(\epsilon_{1}\), we need \[\|\mathbf{v}^{T}-\mathbf{v}^{*}\|_{2}\leq 2\|\mathbf{X}\|_{2}\cdot\|\hat{\mathbf{q}}\|_{2}\cdot\left(1-\frac{2}{\kappa+1}\right)^{T}\leq\epsilon_{1}.\] This can be achieved by selecting \(T\) such that \[T\geq\frac{\log\left(\frac{2\|\mathbf{X}\|_{2}\|\hat{\mathbf{q}}\|_{2}}{\epsilon_{1}}\right)}{\log\left(1+\frac{2}{\kappa-1}\right)}\geq\left(\frac{\kappa-1}{2}\right)\log\frac{2\|\mathbf{X}\|_{2}\|\hat{\mathbf{q}}\|_{2}}{\epsilon_{1}},\] where the last inequality follows from \(\log(1+x)\leq x\). Plugging the values \(\|\mathbf{X}\|_{2}=0.463\) and \(\kappa=848\) for the IEEE 37-bus feeder, \(\|\hat{\mathbf{q}}\|_{2}=0.1\), and \(\epsilon_{1}=10^{-5}\) in (8) yields \(T\geq 2,892\) layers, which is relatively large. A key contributor to this large \(T\) is the \(\kappa\) term in (8). This motivates the adoption of accelerated rules (7), which are known to have \(\mathcal{O}(\sqrt{\kappa})\) dependence. Interestingly, during implementation, one does not need to fix \(T\) to the above worst-case bounds. Leveraging dynamic computation graphs offered by Python libraries such as PyTorch, one may determine \(T\) _'on the fly'_ depending on the convergence of \(\mathbf{v}^{t}\) between pairs of successive layers. Since the RNN emulates Volt/VAR dynamics, it can surrogate \(\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}}_{s})\) in (ORD). Then (ORD) can be posed as training a DNN over its weights \(\tilde{\mathbf{z}}\in\tilde{\mathcal{Z}}\) or \(\mathbf{z}\in\mathcal{Z}\). Grid loading scenarios \(\{\tilde{\mathbf{v}}_{s}\}_{s=1}^{S}\) are treated as features, and the equilibrium voltages \(\mathbf{v}_{\mathbf{z}}(\tilde{\mathbf{v}}_{s})\) as predictions that should be brought close to the target value of \(\mathbf{1}\) for all scenarios \(s\). The DNN can be trained using stochastic projected gradient descent (SPGD) as [13] \[\tilde{\mathbf{z}}^{i+1}=\left[\tilde{\mathbf{z}}^{i}-\frac{\lambda}{2B}\nabla_{\tilde{\mathbf{z}}^{i}}\left(\sum_{s\in\mathcal{B}_{i}}\|\Phi(\tilde{\mathbf{v}}_{s};\tilde{\mathbf{z}})-\mathbf{1}\|_{2}^{2}\right)\right]_{\tilde{\mathcal{Z}}}\tag{10}\] where \(\lambda>0\) is the learning rate; set \(\mathcal{B}_{i}\) is a batch of \(B\) scenarios; and \([\cdot]_{\tilde{\mathcal{Z}}}\) is the projection onto \(\tilde{\mathcal{Z}}\). Since \(\tilde{\mathcal{Z}}\) consists of simple box constraints, the projection amounts to clipping the values to the box. Lastly, \(\nabla_{\tilde{\mathbf{z}}^{i}}(\cdot)\) denotes the gradient with respect to \(\tilde{\mathbf{z}}\) evaluated at \(\tilde{\mathbf{z}}=\tilde{\mathbf{z}}^{i}\), and is calculated efficiently thanks to _gradient back-propagation_. Although our DNN-based ORD assumed PGD-based rules, it may be applicable to other incremental rules too. A discussion about control rules and their DNN-based emulators is due.
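Before turning to that discussion, here is a minimal PyTorch sketch of the SPGD update (10). It relies on the hypothetical `volt_var_emulator` sketched earlier; `X` is the sensitivity matrix, `scenarios` a tensor of grid-loading vectors \(\tilde{\mathbf{v}}_{s}\), and the box bounds are placeholders.

```python
import torch

# Hypothetical rule parameters z (scalars for brevity) and their box set Z.
z = {"vbar": torch.tensor(1.00, requires_grad=True),
     "delta": torch.tensor(0.02, requires_grad=True),
     "qbar": torch.tensor(0.10, requires_grad=True),
     "alpha": torch.tensor(0.90, requires_grad=True)}
box = {"vbar": (0.95, 1.05), "delta": (0.0, 0.05),
       "qbar": (0.05, 0.20), "alpha": (0.1, 1.0)}

opt = torch.optim.SGD(z.values(), lr=1e-3)
for i in range(200):                                        # SPGD iterations
    batch = scenarios[torch.randint(len(scenarios), (8,))]  # B = 8 scenarios
    loss = torch.stack([((volt_var_emulator(v, X, tuple(z.values())) - 1.0) ** 2).sum()
                        for v in batch]).mean()
    opt.zero_grad()
    loss.backward()                                         # gradient back-propagation
    opt.step()
    with torch.no_grad():                                   # projection [.]_Z = clipping
        for name, p in z.items():
            p.clamp_(*box[name])
```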
Recall that all three types of Volt/VAR control rules (non-incremental, incremental, and accelerated incremental) reach the same equilibrium voltages, if stable. The emulators aim at computing these equilibrium voltages. A natural question is whether the DNN emulator could implement a rule of a type different from the rule actually implemented on the feeder. This may be desirable to leverage the advantages of two types. Some caution is needed here. If the feeder implements non-incremental rules, but incremental rules converge faster to equilibrium voltages, it makes sense for the emulator to implement incremental rules. Of course, in this case, stability constraints on the non-incremental rules have to be enforced during DNN training. The reverse is not recommended: If the emulator implements non-incremental rules, its parameters \(\mathbf{z}\) should be constrained to be stable, and that would be a restriction of the actual ORD problem. Finally, given the convergence advantage of accelerated incremental rules, they are always preferable over plain incremental rules for the DNN implementation. This showcases the utility of accelerated control rules even if they are not actually implemented on the feeder.

## IV Deep Learning for Optimal Rule Design (ORD) in Multiphase Feeders

Fig. 4: Recurrent neural network (RNN) implementation for accelerated incremental Volt/VAR control rules.

In multiphase feeders, matrix \(\mathbf{X}\) is non-symmetric and has both positive and negative entries. Therefore, the rule analysis and design of Section III has to be revisited. For example, equilibrium setpoints cannot be found as the minimizers of an optimization problem as with (2). Moreover, increasing \(\mathbf{q}\) does not mean that all voltages increase. In multiphase feeders, the non-incremental rules of IEEE Std. 1547 remain stable as long as \(\|\operatorname{dg}(\boldsymbol{\alpha})\mathbf{X}\|_{2}<1\). This is the same condition as in the single-phase setup. How about the stability and equilibrium of incremental rules in multiphase feeders? Recall that for single-phase feeders, incremental rules were obtained as the PGD iterations solving (2). Lacking an equivalent inner optimization for multiphase feeders precludes a similar approach here. Although the incremental rules of (5) no longer correspond to PGD iterates, they can still be shown to be stable for multiphase feeders.

**Proposition 2**.: _Let \(\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\) be the eigen-decomposition of matrix \(\mathbf{X}\mathbf{X}^{\top}\). The incremental rules of (5) are stable for multiphase feeders if their step size is selected as \(\mu<\lambda_{\min}\left(\mathbf{\Lambda}^{-1/2}\mathbf{U}^{\top}\left(\mathbf{X}+\mathbf{X}^{\top}\right)\mathbf{U}\mathbf{\Lambda}^{-1/2}\right)\)._

The claim follows readily by adapting the proof of Proposition 1: If \(\mu\) is selected as above, then \(\|\mathbf{I}-\mu\mathbf{X}\|_{2}<1\) follows from [5, Prop. 6]. Similar to the single-phase case, incremental rules in multiphase feeders allow us to enlarge the feasible set \(\mathcal{Z}\) of rule parameters \(\mathbf{z}\). It is worth stressing that, unlike the single-phase setting, incremental and non-incremental rules do not converge to the same equilibrium on multiphase feeders. The ORD task for multiphase feeders can also be formulated as a deep-learning task, with some modifications. Firstly, matrices \(\mathbf{R}\) and \(\mathbf{X}\) need to be altered.
Secondly, the DNNs for multiphase feeders have \(12N\) trainable parameters, since each layer consists of \(3N\) building modules corresponding to bus/phase (node) combinations. Lastly, the step size has to be selected per Proposition 2. Adapting the proof of Proposition 1, we next find the minimum DNN depth in multiphase feeders.

**Proposition 3**.: _Let the DNN of Fig. 4 implement the incremental rules of (5) on multiphase feeders with \(\mu\) selected per Proposition 2. The DNN depth \(T\) ensuring voltage approximation error \(\|\Phi\left(\bar{\mathbf{v}};\mathbf{z}\right)-\mathbf{v}^{*}(\mathbf{z})\|_{2}\leq\epsilon_{1}\) is_ \[T\geq\frac{\log\frac{\epsilon_{1}}{2\|\mathbf{X}\|_{2}\|\hat{\mathbf{q}}\|_{2}}}{\log\|\mathbf{I}-\mu\mathbf{X}\|_{2}}.\]

We next numerically evaluate the proposed DNN-based ORD approach in single- and multiphase feeders, and contrast the performance of incremental control rules with that of non-incremental rules.

## V Numerical Tests

We benchmark the performance of DNN-based incremental rules against non-incremental rules from [13] on single- and multiphase feeders. Real-world data were sourced from the Smart* project on April 2, 2011 [17], as explained in [13]. The DNNs were implemented and trained using PyTorch. We first compare (non)-incremental rules, both designed via DNN training, for the single-phase IEEE 37-bus feeder of Fig. 5. Homes with IDs 20-369 were averaged 10 at a time and successively added as active loads to buses 2-26 as shown in Fig. 6. Active generation from solar panels was also added, as per the mapping in Fig. 6. Buses \(\{6,9,11,12,15,16,20,22,24,25\}\) were assumed to host DERs with Volt/VAR control customized per bus. Incremental rules were simulated in their accelerated rendition. Both sets of rules were trained over \(S=80\) scenarios and \(200\) epochs with a learning rate of \(0.001\), using the Adam optimizer, and setting \(\mu=1\) for incremental rules. To ensure repeatability, the experiments were repeated across several time periods between 1-6 PM, and the results are compiled in Table V. Incremental rules obtained marginally lower objectives than non-incremental rules across all periods, with a somewhat significant difference for the 5 PM period. This behavior is explained because incremental rules allow for a larger set \(\mathcal{Z}\).

Fig. 5: The IEEE 37-bus feeder converted to single-phase. Node numbering follows the format node number {panel ID}. DERs at buses \(\{6,9,11,12,15,16,20,22,24,25\}\) provide reactive power control; the rest operate at unit power factor.

DNN-based incremental control rules were also contrasted with non-incremental ones on the multiphase IEEE 13-bus feeder, using the testing setup from [13]. Active loads were sampled \(10\) at a time from homes with IDs \(20\)-\(379\) and added to all three phases for buses 1-12. Solar generation was added per the panel assignments shown in Fig. 6. Lastly, nine DERs with inverters were added across phases and bus indices as shown in Fig. 6. The learning rates for non-incremental and incremental DNNs were set as \(0.1\) and \(0.001\), respectively, with the design parameters \(\mathbf{z}:=(\bar{\mathbf{v}},\boldsymbol{\delta},\boldsymbol{\sigma},\boldsymbol{\alpha})\) initialized to feasible values \((0.95,0.01,0.3,1.5)\). Table V compares the performance of the two rule categories over multiple periods for \(S=80\).
While incremental rules took longer to train, they were successful in lowering the cost \(F(\mathbf{z})\) by more than \(50\%\), thus yielding improved voltage profiles across all periods.

## VI Conclusions

We have devised a DNN approach to optimally design incremental Volt/VAR control rules for single- and multi-phase feeders. The key idea is to construct a DNN that emulates end-to-end the associated Volt/VAR dynamics. The DNN takes grid conditions as the input, the rule parameters as weights, and outputs the associated equilibrium voltages. Leveraging the convergence rates of the related optimization algorithms, we have provided bounds on the minimum depth of the DNN emulator to approximate equilibrium voltages within the desired accuracy. We have also established the stability of incremental control rules for multiphase feeders. Numerical tests have demonstrated that the designed control rules attain improved voltage profiles compared to their non-incremental alternatives. The improvement was found to be starker for multiphase feeders, wherein (non)-incremental rules do not reach the same equilibrium. Our findings motivate further research to possibly characterize the equilibria of control rules for multiphase feeders; the convergence of accelerated incremental rules for multiphase feeders; and to deal with chance-constrained formulations, or ORD problems targeting phase imbalances.
2303.04427
Self-Supervised Learning for Group Equivariant Neural Networks
This paper proposes a method to construct pretext tasks for self-supervised learning on group equivariant neural networks. Group equivariant neural networks are the models whose structure is restricted to commute with the transformations on the input. Therefore, it is important to construct pretext tasks for self-supervised learning that do not contradict this equivariance. To ensure that training is consistent with the equivariance, we propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss. Equivariant pretext labels use a set of labels on which we can define the transformations that correspond to the input change. Invariant contrastive loss uses a modified contrastive loss that absorbs the effect of transformations on each input. Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed equivariant self-supervised tasks.
Yusuke Mukuta, Tatsuya Harada
2023-03-08T08:11:26Z
http://arxiv.org/abs/2303.04427v1
# Self-Supervised Learning for Group Equivariant Neural Networks

###### Abstract

This paper proposes a method to construct pretext tasks for self-supervised learning on group equivariant neural networks. Group equivariant neural networks are the models whose structure is restricted to commute with the transformations on the input. Therefore, it is important to construct pretext tasks for self-supervised learning that do not contradict this equivariance. To ensure that training is consistent with the equivariance, we propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss. Equivariant pretext labels use a set of labels on which we can define the transformations that correspond to the input change. Invariant contrastive loss uses a modified contrastive loss that absorbs the effect of transformations on each input. Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed equivariant self-supervised tasks.

## 1 Introduction

Self-supervised learning is the method by which we define pretext tasks based on our prior knowledge about the input data and train the feature extractor using the pretext tasks without supervised labels. Pretext tasks include tasks to solve ill-posed problems such as image completion, tasks to predict image context, and tasks to make the features of the augmented images from the same image close to each other. Currently, self-supervised learning demonstrates accuracy comparable to supervised learning and is an effective framework for learning features in an unsupervised manner. Another direction for utilizing prior knowledge is to incorporate the knowledge into the model structure of the feature extractor. For example, the convolutional layer is designed to be robust to the local translation of the object in an image. Group equivariant neural networks are an effective framework for building knowledge of invariance under transformations, such as image rotations and image flipping, into the structure of the neural networks. Given the input data \(x\), the transformation on the input \(T_{\mathrm{in}}(g)\), and the transformations on the output \(T_{\mathrm{out}}(g)\), the group equivariant neural networks \(f_{\theta}\) are constructed to satisfy \(T_{\mathrm{out}}(g)\big(f_{\theta}(x)\big)=f_{\theta}\big(T_{\mathrm{in}}(g)(x)\big)\) for any transformation parameter \(g\), input data \(x\), and model parameter \(\theta\). Because the structure of group equivariant neural networks is restricted to satisfy equivariance, this restriction regularizes the model, and the learned model generally demonstrates better accuracy than standard non-equivariant neural networks. Therefore, we expect that we can obtain an effective feature learning method by combining these two ideas to utilize the prior knowledge. When we combine the ideas of self-supervised learning and group equivariant neural networks, it is important to design the method such that these two components do not adversely affect each other. The functions that the group equivariant neural networks can learn are restricted to the mappings that preserve equivariance. Therefore, when a pretext task requires the mapping to violate the equivariance, it is difficult to learn the function through this pretext task. In this paper, we propose a self-supervised task that is suitable for group equivariant neural networks.
The idea is to construct a self-supervised loss that does not change under the transformations on the input data. This invariance guarantees that we can learn the same equivariant model even when we apply the considered transformations to the input. To construct a self-supervised loss that satisfies this condition, we propose two concepts for self-supervised loss: equivariant pretext labels and invariant contrastive loss. Equivariant pretext labels are constructed such that they are consistent with the considered transformations on the input. This means that when we apply the transformations on the input, the corresponding pretext labels are also changed according to the considered transformations. The invariant contrastive loss is a loss that does not change when we apply transformations to each input. We extend several existing self-supervised pretext tasks to satisfy these concepts. We apply the proposed loss to the image classification model on the ImageNet dataset and evaluate the model using several image recognition benchmarks. We demonstrate that the trained group equivariant neural networks achieve good classification accuracy when we use the proposed loss. The contributions of the paper are as follows: * We propose the concepts of equivariant pretext labels and invariant contrastive loss to train the group equivariant neural networks in a self-supervised manner. * We propose an equivariant extension to several existing self-supervised tasks. * We apply our method to standard image recognition benchmarks and demonstrate the effectiveness of the proposed loss.

## 2 Related Work

### Self-Supervised Learning

Self-supervised learning is one of the frameworks used to train the feature extractor in an unsupervised manner. In self-supervised learning, we first design a task to minimize the loss function based on the prior knowledge of the input data and pretrain the model with the task. Next, we fine-tune the model initialized with the pretrained parameters on the target dataset with labels to obtain the classifier. In image recognition, early research on self-supervised learning was based on hand-crafted pretext tasks. We design an ill-posed problem and expect the model to acquire basic knowledge about the image through training to solve the problem. Some examples of pretext tasks are tasks to recover an input image from an image with incomplete information (Pathak et al., 2016; Zhang et al., 2016, 2017; He et al., 2022), tasks to predict spatial relationships between subregions of an image (Doersch et al., 2015; Noroozi and Favaro, 2016; Noroozi et al., 2018; Kim et al., 2018), tasks to predict transformations from the transformed input images (Komodakis and Gidaris, 2018; Zhang et al., 2019), and tasks to predict image primitives in an image (Noroozi et al., 2017; Gidaris et al., 2020). A recent trend in self-supervised learning is to utilize relationships between the images in the dataset instead of considering one-to-one relationships between the images and pretext labels. In Caron et al. (2018), the model is trained iteratively by first assigning a pseudo-label to each image by applying clustering to the output of the current model, and then updating the model to predict the assigned pseudo-label more accurately. In Asano et al. (2019), this method is extended by imposing a uniformity restriction on the pseudo-label space and formulating the learning loss as the Kullback-Leibler divergence between the pseudo-label distribution and the predicted label distribution.
In Hjelm et al. (2018) and Oord et al. (2018), the model is trained to maximize mutual information between the intermediate features. These methods calculate the loss by utilizing the other images in the dataset as negative samples. Among data-driven methods, the self-supervised framework based on contrastive loss is well investigated. In contrastive learning, a positive image pair is constructed by applying data augmentation to the same input image and the negative image pair is constructed by sampling different images from the dataset. The model is then trained such that the positive pair is close while the negative pair is distant based on distance criteria. Chen et al. (2020) uses the softmax cross-entropy loss for this purpose. In He et al. (2020), the authors introduced a momentum encoder that is obtained from the moving average of the learned feature extractors and a memory bank to preserve the extracted features from the previous mini-batches in a queue data structure. Variants of contrastive learning include methods that use the consistency between the pseudo-label assignments obtained by applying clustering (Li et al., 2020; Caron et al., 2020), methods that match the feature only between the positive pair instead of using the softmax cross-entropy loss (Grill et al., 2020; Chen and He, 2021), and methods that train the model to output a correlation matrix close to the identity matrix (Zbontar et al., 2021).

### Group Equivariant Neural Networks

Group equivariant neural networks are the framework for utilizing knowledge about invariance in the model structure of the neural networks. Given the transformations on the input \(T_{\mathrm{in}}(g)\) and output \(T_{\mathrm{out}}(g)\) for the group element \(g\in G\), group equivariant neural networks \(f_{\theta}\) with learnable parameter \(\theta\) are constructed to satisfy \(T_{\mathrm{out}}(g)f_{\theta}(x)=f_{\theta}(T_{\mathrm{in}}(g)(x))\) for any input \(x\), transformation group element \(g\), and model parameter \(\theta\). Because neural networks are constructed as the composition of layers, we generally construct group equivariant neural networks by first designing layers that satisfy equivariance, and then composing the equivariant layers. We introduce the group equivariant neural networks for image recognition in this section. Group Equivariant CNNs (G-CNNs) (Cohen and Welling, 2016) are the standard model for group equivariant neural networks. G-CNNs have \(|G|\) learnable convolution filters per layer and calculate the layer output by applying group convolution to the input. The first layer calculates the output as: \[x_{g}^{\rm out}=(T_{w_{0}}(g)w_{0})*x^{\rm in}, \tag{1}\] and succeeding layers calculate the output as \[x_{g}^{\rm out}=\sum_{h\in G}w_{(gh^{-1})}*x_{h}^{\rm in}, \tag{2}\] where \(x_{g}^{\rm in}\) and \(x_{g}^{\rm out}\) denote the \(g\)-th elements of the input and output, respectively, \(w_{0},w\) are the learnable weights, \(T_{w_{0}}\) denotes the corresponding transformation on the weights for the first layer, and \(*\) indicates the image convolution. When we apply \(T_{\rm in}(g)\) to the input \(x\), the elements of the output \(x_{g}^{\rm out}\) are permuted accordingly. Therefore, the network is group equivariant. While G-CNNs are a versatile framework, this method has the drawback that the model carries substantial parameter and computation costs. Several methods for reducing these costs have been proposed.
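As an illustration of Eq. (1), the following is a minimal PyTorch sketch of the first (lifting) G-CNN layer for the group of \(\pi/2\) rotations, together with a numerical check of the equivariance property; this is our own toy implementation, not the reference code of Cohen and Welling (2016).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4LiftingConv(nn.Module):
    """First G-CNN layer (Eq. (1)) for G = {0, 90, 180, 270} degree rotations:
    a single learnable filter bank is correlated with the input under every
    rotated copy T_{w_0}(g) w_0, producing an extra group axis of size |G| = 4."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):                          # x: (B, in_ch, H, W)
        outs = [F.conv2d(x, torch.rot90(self.weight, g, dims=(2, 3)), padding=1)
                for g in range(4)]                 # one feature map per group element
        return torch.stack(outs, dim=2)            # (B, out_ch, |G|, H, W)

layer = C4LiftingConv(1, 8)
x = torch.randn(2, 1, 32, 32)
y = layer(x)
yr = layer(torch.rot90(x, 1, dims=(2, 3)))         # rotate the input by 90 degrees
# T_out(g): rotate each feature map and cyclically shift the group axis.
expected = torch.roll(torch.rot90(y, 1, dims=(3, 4)), shifts=1, dims=2)
print(torch.allclose(yr, expected, atol=1e-5))     # True: the layer is equivariant
```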
In Cohen and Welling (2017), the computation cost is reduced by first applying the Fourier transform to the input to decompose it into irreducible representations, and then applying the layer on this irreducible feature space. Weiler et al. (2018) represents a group convolutional filter using the weighted sum of basis filters, and uses the summation coefficients as the learnable parameters to reduce the number of model parameters. As a different construction, Marcos et al. (2017) constructs a rotation equivariant layer by applying the convolution filter at multiple rotations and calculating a two-dimensional vector from the statistics of the activations. Worrall et al. (2017) constructs the equivariant layer for the two-dimensional continuous rotation using spherical harmonics. Jenner and Weiler (2022) constructs the convolutional weights by incorporating the weighted sum of differential operators. These methods and their variants are summarized in Weiler and Cesa (2019).

### Self-Supervised Learning with Equivariance

In self-supervised methods, prior knowledge about invariance is used to construct the loss function rather than being incorporated into the model structure. For example, in contrastive learning, the invariance information is encoded as the data augmentation and used to construct the positive pair. Misra and Maaten (2020) try to learn the invariance using the similarity between the augmented samples. Mitrovic et al. (2021) utilizes the concept of the causal mechanism and explicitly adds a regularizer such that the output features from the same image with different data augmentations resemble each other. Dangovski et al. (2022) proposes a hybrid method of pretext tasks to predict the image transformations and a contrastive loss to obtain features that are invariant under the transformations. Based on this hybrid framework, Dangovski et al. (2022) analyzes the invariance and equivariance information that contribute to the performance. Keller et al. (2022) is close to our work in that the problem of training group equivariant neural networks is also considered. Whereas Keller et al. (2022) construct the invariant loss by averaging the loss with respect to the augmented features under the group action while preserving the shape of the similarity metric itself, we directly modify the similarity function such that the loss is instance-wise invariant. Further, Keller et al. (2022) applies the method to SimCLR (Chen et al., 2020), while we apply our method to several contrastive and non-contrastive self-supervised tasks.

## 3 Proposed Method

### Motivation

Our goal is to construct a self-supervised loss that is suitable for group equivariant neural networks. We design the loss function to learn \(\theta\) for the feature extractor \(f_{\theta}\) that satisfies \(T_{\rm out}(g)f_{\theta}(x)=f_{\theta}(T_{\rm in}(g)(x))\). We first discuss the problem that occurs when we employ the standard loss function. We consider the pretext task where we assign the pretext label \(y^{\rm pretext}(x)\) to the input \(x\), and train the model to predict the label \(y^{\rm pretext}(x)\) from \(x\). In this case, \(f_{\theta}\) is trained to map \(x\) onto \(y^{\rm pretext}(x)\), but from the equivariance of \(f_{\theta}\), \(T_{\rm in}(g)(x)\) is mapped onto \(T_{\rm out}(g)(y^{\rm pretext}(x))\). However, in general, the pretext label for the input \(T_{\rm in}(g)(x)\), denoted as \(y^{\rm pretext}(T_{\rm in}(g)(x))\), is not the same as \(T_{\rm out}(g)(y^{\rm pretext}(x))\).
Therefore, \(f_{\theta}\) is trained to map \(T_{\rm in}(g)(x)\) to multiple target labels, and it is difficult to train the model consistently. This discussion indicates the importance of designing a self-supervised task that is consistent with the group equivariance.

### Equivariant Pretext Labels

In the previous section, we discussed the problem that we cannot train the model to mimic a one-to-one mapping between input and output when we use standard self-supervised pretext labels. Therefore, we propose to use pretext labels that are restricted to reflect equivariance. We consider the case where the transformation group acts on the label space of \(y^{\mathrm{pretext}}\) such that \[y^{\mathrm{pretext}}(T_{\mathrm{in}}(g)(x))=T_{\mathrm{out}}(g)(y^{\mathrm{pretext}}(x)). \tag{3}\] When this condition is satisfied, the loss function does not change even when we apply \(T_{\mathrm{in}}(g)\) to the input \(x\), and the training becomes consistent. In this study, we consider the case where \(T_{\mathrm{out}}(g)\) acts as a permutation between labels. We plot the illustrative image of the equivariant pretext labels in Figure 1. Below, we propose equivariant extensions of existing pretext tasks.

#### 3.2.1 Context prediction with \(\pi/2\) rotation

Context prediction (Doersch et al., 2015) is the method where we first extract \(3\times 3\) adjacent image subregions and then use the pair of the central subregion and one of the other subregions as the input, and train the model to predict the relative position between the input subregions as an eight-category classification problem. We consider the effect of the \(\pi/2\) image rotation on this pretext label. As plotted in Figure 2, the rotation acts as a permutation on the relative position labels. For example, the subregion which is upper-left in the original image moves to the lower-left after the rotation, and the right subregion moves to the top position after the rotation. Therefore, when we align the relative position categories as ['left', 'down', 'right', 'up', 'upper left', 'lower left', 'lower right', 'upper right'], this space becomes the direct sum of two four-dimensional spaces on which \(\pi/2\) rotation acts as a permutation. We refer to training the equivariant model with this label ordering as equivariant context prediction.

#### 3.2.2 Solving the jigsaw puzzle with \(\pi/2\) rotation and image flipping

Solving the jigsaw task (Noroozi and Favaro, 2016) is a task that extends context prediction. We extract \(3\times 3\) adjacent image subregions, apply a permutation to the subregions, and then input all permuted subregions to the classification model. The classifier is then trained to predict the permutation that was applied to the input subregions. When we consider all permutations, we need to learn a \(9!\)-category classification problem, which is difficult to solve. Therefore, in general, we construct a subset whose permutations are distant from each other with respect to the Hamming distance and predict the permutation within the subset. As shown in Figure 3, \(\pi/2\) image rotation and image flipping act as transformations on the permutation labels because these transformations themselves act as permutations on the image subregions.
Therefore, by constructing the permutation subset such that it is closed under the effect of \(\pi/2\) image rotation and image flipping, and aligning the permutation labels in the appropriate order, we can construct a label space that is equivariant under the transformations. We denote the task using the equivariant permutation subset as the equivariant jigsaw.

Figure 1: Illustrative image of the equivariant pretext labels.

Figure 2: Illustrative image of equivariant context prediction.

### Invariant Contrastive Loss

In the previous subsection, by assigning the equivariance to the label space as in Eq. (3), we can learn the model consistently even when we apply transformations to one of the input images. We apply this idea to contrastive learning, where we use the relationship between the input images for training. In contrastive learning, we first apply data augmentation on the input images to construct the input mini-batch \(\{x_{b}\}_{b=1}^{B}\), and then train the model by minimizing \(l(f_{\theta}(x_{1}),f_{\theta}(x_{2}),...,f_{\theta}(x_{B}))\), where \(l\) is the predefined loss function. When we apply a transformation to the input \(x_{m}\) to use \(T_{\text{in}}(g)(x_{m})\) as the input, using the equivariance of \(f_{\theta}\), the loss function becomes \(l(f_{\theta}(x_{1}),f_{\theta}(x_{2}),...,T_{\text{out}}(g)f_{\theta}(x_{m}),...,f_{\theta}(x_{B}))\). When the value of the loss function does not change under this transformation, i.e., \[l(f_{\theta}(x_{1}),f_{\theta}(x_{2}),...,f_{\theta}(x_{B}))=l(f_{\theta}(x_{1}),f_{\theta}(x_{2}),...,T_{\text{out}}(g)f_{\theta}(x_{m}),...,f_{\theta}(x_{B})), \tag{4}\] for any \(m\), input \(x_{m}\) and transformation \(g\), the learned model that minimizes this loss is not affected by the transformations on the input. We denote such losses that ignore the effect of the transformations on the output as the invariant contrastive loss. The illustrative image of the invariant contrastive loss is shown in Figure 4. Below, we apply the invariant contrastive loss to several tasks, including contrastive-based [He et al., 2020], clustering-based [Caron et al., 2020] and distillation-based [Chen and He, 2021] tasks.

#### 3.3.1 Momentum Contrast

Momentum Contrast [He et al., 2020] is the standard contrastive learning method. Momentum Contrast uses the learnable parameter \(\theta\), the momentum encoder parameter \(\theta^{k}\), and a memory bank \(Q\) for training. In each iteration, the model is trained as follows: 1. Sample minibatch \(\{x_{n}\}_{n=1}^{N}\). 2. Apply data augmentation on \(\{x_{n}\}_{n=1}^{N}\) to obtain \(\{\widehat{x_{n}^{q}}\}_{n=1}^{N}\) and \(\{\widehat{x_{n}^{k}}\}_{n=1}^{N}\). 3. Apply \(f_{\theta^{k}}\) on \(\{\widehat{x_{n}^{k}}\}_{n=1}^{N}\) to obtain \(\{k_{n}\}_{n=1}^{N}\). 4. Update \(\theta\) with the loss \[-\sum_{n=1}^{N}\log\frac{\exp\left(\frac{f_{\theta}(\widehat{x_{n}^{q}})^{t}k_{n}}{\tau}\right)}{\exp\left(\frac{f_{\theta}(\widehat{x_{n}^{q}})^{t}k_{n}}{\tau}\right)+\sum_{k\in Q}\exp\left(\frac{f_{\theta}(\widehat{x_{n}^{q}})^{t}k}{\tau}\right)}. \tag{5}\] 5. Update \(\theta^{k}\) using the moving average of \(\theta\). 6. Update \(Q\) with \(\{k_{n}\}_{n=1}^{N}\) using the first-in-first-out rule. We extend Eq. (5) to satisfy the condition of the invariant contrastive loss. To this end, we modify the loss so that it does not change even when we apply \(T_{\text{out}}(g)\) to \(k\) and \(f_{\theta}(\widehat{x_{n}^{q}})\).
We focus on the inner product that appears in \(\exp\) and replace it with \[\frac{1}{|G|}\frac{1}{|G|}\sum_{g_{1}\in G}\sum_{g_{2}\in G}(T_{\text{out}}(g_{1})f_{\theta}(\widehat{x_{n}^{q}}))^{t}(T_{\text{out}}(g_{2})k). \tag{6}\] Because this value is the average over all pairs of transformations on the input features, the value of Eq. (6) does not change when we apply \(T_{\mathrm{out}}(g)\) to \(f_{\theta}(\widehat{x_{n}^{q}})\) and \(k\). Therefore, we can construct the invariant contrastive loss using this inner product. Further, using the bilinearity of the inner product, Eq. (6) can be calculated as the inner product between \(\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)f_{\theta}(\widehat{x_{n}^{q}})\) and \(\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)k\). Using this fact, the algorithm becomes invariant when we replace the feature extractor \(f_{\theta}\) with its average \(\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)\circ f_{\theta}\), where \(\circ\) denotes the composition of functions. We call this method invariant Momentum Contrast.

#### 3.3.2 Swapping Assignments between Views

We constructed the invariant Momentum Contrast by replacing the inner product with the invariant inner product. We can apply this procedure to more complex contrastive learning methods. In this section, we apply our method to Swapping Assignments between Views (SwAV) [Caron et al., 2020]. SwAV trains the model with a loss that considers the consistency of the pseudo-label assignments on the cluster centroids. SwAV learns the feature extractor \(f_{\theta}\) and the cluster centroids \(C\in\mathbb{R}^{d\times c}\), where \(d\) denotes the feature dimension and \(c\) denotes the number of clusters, as follows: 1. Sample minibatch \(\{x_{n}\}_{n=1}^{N}\). 2. Apply data augmentation on \(\{x_{n}\}_{n=1}^{N}\) to obtain \(\{\widehat{x_{n}^{q}}\}_{n=1}^{N}\) and \(\{\widehat{x_{n}^{k}}\}_{n=1}^{N}\). 3. Calculate the cluster assignment probability \(q_{n}\) of \(x_{n}\) using the entropy-regularized optimal transport between \(\{f_{\theta}(\widehat{x_{n}^{k}})\}_{n=1}^{N}\) and \(C\). We do not calculate the backpropagation through this part. 4. Calculate the predicted cluster assignment probability \(p_{n}=\mathrm{softmax}(C^{t}f_{\theta}(\widehat{x_{n}^{q}}))\). 5. Update \(\theta\) and \(C\) with the cross-entropy loss \(-\sum_{n=1}^{N}q_{n}\log p_{n}\). The entropy-regularized optimal transport is solved by the iterative Sinkhorn-Knopp algorithm [Cuturi, 2013]. This algorithm uses the cost function \(C^{t}f_{\theta}(\widehat{x_{n}^{k}})\) and iteratively applies matrix-vector products to calculate the assignment. Therefore, both \(q_{n}\) and \(p_{n}\) depend on the feature \(f_{\theta}(x)\) in the form of \(C^{t}f_{\theta}(x)\). We can construct the equivariant loss by replacing \(C^{t}f_{\theta}(x)\) with one that is invariant under the action of \(T_{\mathrm{out}}(g)\). Similar to invariant Momentum Contrast, we use \(C^{t}\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)f_{\theta}(x)\) as the invariant inner product. We denote this method as invariant SwAV.

#### 3.3.3 SimSiam

SimSiam [Chen and He, 2021] is a distillation-based method that trains the model to make the output features from the same input image close to each other, as follows: 1. Sample minibatch \(\{x_{n}\}_{n=1}^{N}\). 2. Apply data augmentation on \(\{x_{n}\}_{n=1}^{N}\) to obtain \(\{\widehat{x_{n}^{1}}\}_{n=1}^{N}\) and \(\{\widehat{x_{n}^{2}}\}_{n=1}^{N}\). 3.
Apply \(f_{\theta}\) on \(\{\widehat{x_{n}^{m}}\}_{n=1}^{N}\) to obtain \(\{z_{n}^{m}\}_{n=1}^{N}\) for \(m=1,2\). 4. Apply the predictor \(h\) on \(\{z_{n}^{m}\}_{n=1}^{N}\) to obtain \(\{p_{n}^{m}\}_{n=1}^{N}\) for \(m=1,2\). 5. Update \(h\) and \(f_{\theta}\) with the cosine similarity loss \(-\sum_{n=1}^{N}\left(\frac{(z_{n}^{1})^{t}p_{n}^{2}}{\|z_{n}^{1}\|\|p_{n}^{2}\|}+\frac{(z_{n}^{2})^{t}p_{n}^{1}}{\|z_{n}^{2}\|\|p_{n}^{1}\|}\right)\), while backpropagation through \(z_{n}^{m}\) is ignored. In most cases, we can assume that \(T_{\mathrm{out}}(g)\) preserves the L2 norm of the feature. For example, \(T_{\mathrm{out}}(g)\) in G-CNNs acts as a permutation of the feature elements, which can be represented as an orthogonal matrix. With this assumption, we simply need to replace the inner product \(z_{n}^{t}p_{n}\) with the invariant one. As in the case of Momentum Contrast, we do this by replacing \(z_{n}^{m}\) with \(\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)z_{n}^{m}\) and \(p_{n}^{m}\) with \(\frac{1}{|G|}\sum_{g\in G}T_{\mathrm{out}}(g)p_{n}^{m}\) when we calculate the cosine similarity loss. We call this method invariant SimSiam.

## 4 Experiment

In this section, we evaluate the proposed self-supervised loss on standard image recognition benchmarks.

### Experimental Setting

#### 4.1.1 Dataset for pretraining

We use the standard large-scale image recognition benchmark ImageNet (Deng et al., 2009) as the resource for self-supervised pretraining. ImageNet is a general image recognition dataset that consists of approximately 1,300,000 images with 1,000 categories. In the pretraining phase, we do not use the label information, and train the model with the self-supervised loss.

#### 4.1.2 Model architecture

We use ResNet50 (He et al., 2016) as the baseline model. We apply the non-equivariant baseline self-supervised methods to this model. For the group equivariant neural networks to be trained with the proposed loss, we use the model that replaces each convolutional layer of ResNet50 with the corresponding equivariant layers (Eqs. (1) and (2)). To match the number of learnable parameters to the original ResNet50, we divide the number of filters by \(\sqrt{|G|}\) relative to the original model. The equivariant ResNet50 we use has almost the same number of learnable parameters as the original ResNet50, and the dimension of the features before the last fully-connected layer becomes \(\sqrt{|G|}\) times the original feature dimension. We also replace the projection head with the corresponding equivariant layers with a reduced number of feature dimensions, except for the output feature. We match the output feature dimension for the contrastive loss. For the transformations \(G\), we use a group that consists of \(\pi/2\) rotations \((|G|=4)\) for the context prediction task, and a group that consists of \(\pi/2\) rotations and image flipping \((|G|=8)\) for the other tasks.

#### 4.1.3 Comparison method

As the baseline methods, we apply context prediction (Doersch et al., 2015), jigsaw (Noroozi and Favaro, 2016), Momentum Contrast (He et al., 2020), SwAV (Caron et al., 2020) and SimSiam (Chen and He, 2021) on ResNet50. For the proposed method, we apply the proposed equivariant variants of these methods to the group equivariant ResNet50.
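To make the proposed variants concrete, the following is a minimal PyTorch sketch of the group-averaged features of Eq. (6) and the resulting invariant SimSiam loss. The tensor layout (an explicit group axis on which \(T_{\mathrm{out}}(g)\) acts by a cyclic shift) is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def group_average(feat):
    """Replace f_theta with (1/|G|) sum_g T_out(g) o f_theta (cf. Eq. (6)).
    feat: (B, C, |G|), with T_out(g) acting as a cyclic shift of the last axis
    (an assumed layout)."""
    G = feat.shape[-1]
    avg = torch.stack([torch.roll(feat, shifts=g, dims=-1) for g in range(G)]).mean(0)
    return avg.flatten(1)                          # flat, T_out-invariant feature

def invariant_simsiam_loss(z1, p1, z2, p2):
    """Negative cosine similarity on group-averaged features,
    with the stop-gradient on z as in SimSiam."""
    z1, z2 = group_average(z1).detach(), group_average(z2).detach()
    p1, p2 = group_average(p1), group_average(p2)
    return -(F.cosine_similarity(p2, z1).mean() + F.cosine_similarity(p1, z2).mean()) / 2

z1, p1 = torch.randn(4, 64, 8), torch.randn(4, 64, 8)
z2, p2 = torch.randn(4, 64, 8), torch.randn(4, 64, 8)
loss = invariant_simsiam_loss(z1, p1, z2, p2)
# Applying T_out(g) (a cyclic shift of the group axis) leaves the loss unchanged:
loss_shifted = invariant_simsiam_loss(torch.roll(z1, 1, dims=-1), p1, z2, p2)
print(torch.allclose(loss, loss_shifted))          # True
```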
As an ablation study, to evaluate the effect of the equivariant model architecture and the effect of the proposed equivariant loss separately, we also evaluate the performance of the model obtained by training the group equivariant ResNet50 with the existing non-equivariant self-supervised losses, as follows. For context prediction, we use an order of the relative position labels different from the order proposed in the previous section; for the jigsaw task, the original permutation subset; and for Momentum Contrast, SwAV and SimSiam, the original inner product in the loss function.

#### 4.1.4 Training setting

We refer to Chen and He (2021) for training SimSiam. We refer to Goyal et al. (2019) and its extension Goyal et al. (2021) for the other methods. For context prediction, we use a two-layer MLP consisting of a layer that reduces the feature dimension from 2,048 to 1,000, and a layer that maps the 2,000-dimensional feature consisting of two 1,000-dimensional subregion features to the eight-dimensional category space.

\begin{table} \begin{tabular}{l l l l} \hline Method & Baseline & Equivariant Model \& Loss (Ours) & Equivariant Model Only \\ \hline \hline Context prediction & 32.7 & **35.1** & 31.5 \\ Jigsaw & 35.1 & **43.1** & 42.5 \\ Momentum Contrast & 63.8 & **65.7** & 65.0 \\ SwAV & 71.4 & **71.6** & 68.2 \\ SimSiam & 65.9 & **68.2** & 65.5 \\ \hline \end{tabular} \end{table} Table 1: Accuracy (%) on ImageNet with linear image classification setting.

\begin{table} \begin{tabular}{l l l l} \hline Method & Baseline & Equivariant Model \& Loss (Ours) & Equivariant Model Only \\ \hline \hline Context prediction & 51.7 & **53.6** & 49.1 \\ Jigsaw & 52.9 & 56.7 & **57.8** \\ Momentum Contrast & 80.7 & **81.1** & 80.2 \\ SwAV & 85.6 & **86.8** & 86.6 \\ SimSiam & **81.7** & 81.1 & 81.5 \\ \hline \end{tabular} \end{table} Table 2: Mean AP (%) on VOC2007 with linear image classification setting.

We use random grayscaling as the data augmentation and train the model for 105 epochs with a minibatch size of 256. We use momentum SGD for training. We use linear warm-up to increase the learning rate from 0.025 to 0.1 and then multiply it by 0.1 at 30, 60, 90, and 100 epochs. We set the momentum as 0.9 and the weight decay rate as 0.0001. For the jigsaw task, we use a subset consisting of 2,000 permutations and train the model on a 2,000-way classification problem. Similar to the case of context prediction, we use a two-layer MLP. The first layer reduces the dimension from 2,048 to 1,000, and the succeeding layer reduces the 9,000-dimensional feature consisting of nine 1,000-dimensional subregion features to the 2,000-dimensional category space. We use the same learning parameters as those for the context prediction task. For Momentum Contrast, we use the module introduced in Momentum Contrast v2 [2] and use a two-layer MLP with a 2,048 mid-feature dimension and a 128 output feature dimension as the projection head. We use random cropping, random color distortion, random Gaussian blur and random image flipping as the data augmentation. We set the size of the memory bank as 65,536, the temperature parameter as 0.2, and the coefficient of the moving average for the momentum encoder as 0.999. We train the model for 200 epochs with batch size 256. We set the momentum as 0.9 and the weight decay rate as 0.001. We initialize the learning rate with 0.03 and multiply it by 0.1 at 120 and 160 epochs. For SwAV, following the original paper, we use \(2\times 224+6\times 96\) multi-crop as the input.
We use random cropping, random color distortion, random Gaussian blur, and random image flipping as the data augmentation. We use the same architecture as Momentum Contrast for the projection head and set the number of clusters as 3,000. We set the batch size as 256 and train the model for 200 epochs. We set the initial learning rate as 0.6 and use LARC to modify the learning rate. We set the momentum as 0.9 and the weight decay rate as 0.000001. We set the temperature parameter as 0.1 and the number of iterations as 3 for the iterative Sinkhorn-Knopp algorithm. Further, following the original paper, we use a queue with a length of 3,840 after 15 epochs and also use samples from the queue when we calculate the assignment. For SimSiam, we use a two-layer MLP with a 2,048 mid-feature dimension and a 512 output feature dimension as the projection head. We use random cropping, random color distortion, random grayscaling, random Gaussian blur, and random image flipping as the data augmentation. We train the model for 100 epochs with a batch size of 512. We set the momentum as 0.9 and the weight decay rate as 0.0001. We initialize the learning rate with 0.05 and adjust the learning rate with a cosine decay schedule. As described in Section 4.1.2, we replace the projection head with the equivariant layers with the same output feature dimension for the proposed loss. We use the same learning parameters for the equivariant and non-equivariant methods.

#### 4.1.5 Evaluation Setting

We evaluate the pretrained model using the standard image recognition benchmarks. We evaluate the model using a linear image classification setting in which we fix the pretrained model and only learn the last fully-connected layer on top of the pretrained network. In addition to the ImageNet that is also used for the pretraining, we use the PASCAL VOC 2007 (VOC2007) (Everingham et al., 2015) and iNaturalist18 (Van Horn et al., 2018) datasets. VOC2007 is a traditional image recognition benchmark that consists of approximately 2,500 images for training and evaluation with 20 categories. The iNaturalist18 dataset is a fine-grained image recognition dataset that consists of approximately 450,000 images with approximately 8,000 categories of animal species.

\begin{table} \begin{tabular}{l l l l} \hline \hline Method & Baseline & Equivariant Model \& Loss (Ours) & Equivariant Model Only \\ \hline \hline Context prediction & **8.56** & **8.56** & 6.97 \\ Jigsaw & 8.72 & **13.8** & 13.2 \\ Momentum Contrast & 33.4 & **33.8** & 31.2 \\ SwAV & **42.1** & 35.8 & 32.4 \\ SimSiam & 32.6 & **33.7** & 27.8 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy (%) on iNaturalist18 with linear image classification setting.

\begin{table} \begin{tabular}{l|l l l l} \hline \hline & Dataset & \(\pi\) and Image Flipping & \(\pi/2\) and Image Flipping & \(\pi/4\) and Image Flipping \\ \hline \hline \multirow{3}{*}{Non Equivariant loss} & ImageNet & 64.6 & 65.0 & 63.8 \\ & VOC2007 & 79.8 & 80.2 & 80.9 \\ & iNaturalist & 32.4 & 31.2 & 29.1 \\ \hline \multirow{3}{*}{Equivariant loss (Ours)} & ImageNet & 64.7 & 65.7 & 65.2 \\ & VOC2007 & 79.9 & 81.1 & 80.4 \\ & iNaturalist & 32.4 & 33.8 & 30.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy and mean average precision (%) with different transformation groups using Momentum Contrast.

For ImageNet, following Goyal et al.
(2019), for the context prediction and jigsaw tasks we apply average pooling with kernel size \(6\times 6\) at the four corners of the output of the last convolutional layer of size \(7\times 7\times 2,048\sqrt{|G|}\), and align the pooled features to obtain \(8,192\sqrt{|G|}\)-dimensional features. We then apply a linear classification layer to predict the category. For Momentum Contrast and SwAV, following the original papers, we apply global average pooling and apply linear classification on the \(2,048\sqrt{|G|}\)-dimensional features. We train the model for 28 epochs with a batch size of 256 for the context prediction and jigsaw tasks. We set the weight decay rate as 0.0005 and the momentum as 0.9. We set the base learning rate as 0.01 and multiply it by 0.1 at the 8, 16, and 24 epochs. For Momentum Contrast, we set the batch size as 256, the weight decay rate as 0, the momentum as 0.9, and the base learning rate as 30.0. We multiply the learning rate by 0.1 at 60 and 80 epochs and train the model for 100 epochs. For SwAV, we set the batch size as 256, the weight decay rate as 0.00001, the momentum as 0.9, and the base learning rate as 0.3. We modify the learning rate with cosine learning rate decay and train the model for 100 epochs. For SimSiam, we set the batch size as 4,096, the weight decay rate as 0, the momentum as 0.9, and the base learning rate as 0.1. We apply LARC for the learning rate modification. We train the model for 90 epochs. For VOC2007, we apply a linear SVM on the output of global average pooling. For iNaturalist18, we set the batch size as 256, the weight decay rate as 0.0005, and the base learning rate as 0.001. We train the model for 84 epochs with the learning rate multiplied by 0.1 at 24, 48, and 72 epochs.

#### 4.1.6 Implementation

We implemented the self-supervised methods using Goyal et al. (2021) and implemented the G-CNNs using Weiler and Cesa (2019). We used eight V100 or A100 GPUs for pretraining and one for fine-tuning the models. We conducted the experiments for both the existing methods and the proposed method and report the scores obtained in our setting. Therefore, there exist cases for which the scores of the existing methods differ from those reported in previous studies.

### Results

Tables 1, 2 and 3 show the results. In many settings, the proposed method demonstrates better accuracy than the existing non-equivariant baselines. This indicates that group equivariant neural networks have higher representation ability than the standard non-equivariant models even in the self-supervised setting. When we compare the accuracy of the equivariant CNNs with the proposed and existing losses, there exist cases for which the equivariant model with the non-equivariant loss demonstrates lower accuracy than the original non-equivariant model. This implies the effectiveness of combining the proposed equivariant loss with the equivariant model. For the jigsaw case, the performance drop when using the non-equivariant loss is relatively small compared to context prediction, and in some cases the non-equivariant loss demonstrates better performance.

\begin{table} \begin{tabular}{l|l l l} \hline \hline & Dataset & \(\pi\) and Image Flipping & \(\pi/2\) and Image Flipping \\ \hline \hline \multirow{3}{*}{Non Equivariant loss} & ImageNet & 65.0 & 65.5 \\ & VOC2007 & 81.3 & 81.5 \\ & iNaturalist & 28.9 & 27.8 \\ \hline \multirow{3}{*}{Equivariant loss (Ours)} & ImageNet & 67.3 & 68.2 \\ & VOC2007 & 81.3 & 81.1 \\ & iNaturalist & 33.0 & 33.7 \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy and mean average precision (%) with different transformation groups using SimSiam.
\begin{table} \begin{tabular}{l|l l l l} \hline \hline & Dataset & \(\pi\) and Image Flipping & \(\pi/2\) and Image Flipping & \(\pi/4\) and Image Flipping \\ \hline \hline \multirow{3}{*}{Non Equivariant loss} & ImageNet & 68.5 & 68.2 & 68.6 \\ & VOC2007 & 86.3 & 86.6 & 86.9 \\ & iNaturalist & 34.5 & 32.4 & 32.6 \\ \hline \multirow{3}{*}{Equivariant loss (Ours)} & ImageNet & 71.5 & 71.6 & 70.9 \\ & VOC2007 & 86.3 & 86.8 & 86.4 \\ & iNaturalist & 38.5 & 35.8 & 32.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy and mean average precision (%) with different transformation groups using SwAV.

Because the jigsaw task solves a 2,000-way classification problem, compared to the eight-category classification of context prediction, we expect that the effect of equivariance on the label space becomes relatively small. Moreover, when we use the contrastive loss, the model that combines G-CNNs with the non-equivariant loss often demonstrates lower accuracy than the non-equivariant model. This implies that applying the contrastive loss directly to the equivariant model may diminish the performance. These results show the effectiveness of the proposed equivariant loss when we train the equivariant networks.

### Ablation Study on the Transformation Group

The previous experiments mainly handled the equivariance for the group that consists of \(\pi/2\) rotation and image flipping. For the case of the invariant contrastive loss, we can use any group whenever we can calculate the average feature. For the ablation study, we evaluated models that apply different transformation groups to invariant Momentum Contrast, invariant SwAV and invariant SimSiam. For the transformation group, in addition to the group consisting of \(\pi/2\) rotation and image flipping, we used the group consisting of \(\pi\) rotation and image flipping \((|G|=4)\) and the group consisting of \(\pi/4\) rotation and image flipping \((|G|=16)\). We omitted SimSiam with the group consisting of \(\pi/4\) rotation and image flipping \((|G|=16)\) owing to memory limitations. We used the same dataset and training settings and evaluated the accuracy and mean average precision. Tables 4, 5 and 6 show the results. The group that includes \(\pi/2\) rotations tends to demonstrate the best performance among the transformation groups, which implies that choosing appropriate transformations contributes to the performance. Regarding the comparison between the proposed and non-equivariant losses, the proposed method demonstrates better accuracy in most settings. This indicates that the equivariant loss is more effective even when we change the group size.

## 5 Conclusion

In this study, we proposed a method to construct the loss for training group equivariant neural networks in an unsupervised manner. To train a model that is robust under the transformations of the input data, we aim for a loss function that is invariant under the transformations. To construct such an invariant loss, we proposed the concepts of equivariant pretext labels and invariant contrastive loss. We then proposed equivariant versions of several existing self-supervised methods. Experiments on standard image recognition benchmarks demonstrate that we can obtain good pretrained models by combining the proposed loss with the equivariant neural networks. We expect that the ideas of equivariant pretext labels and invariant contrastive loss can be applied to a wider range of self-supervised tasks, which will be pursued in future work.
## Acknowledgements This work was partially supported by JSPS KAKENHI Grant Number JP19K20290, JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo.
2303.12245
Error Analysis of Physics-Informed Neural Networks for Approximating Dynamic PDEs of Second Order in Time
We consider the approximation of a class of dynamic partial differential equations (PDE) of second order in time by the physics-informed neural network (PINN) approach, and provide an error analysis of PINN for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation. Our analyses show that, with feed-forward neural networks having two hidden layers and the $\tanh$ activation function, the PINN approximation errors for the solution field, its time derivative and its gradient field can be effectively bounded by the training loss and the number of training data points (quadrature points). Our analyses further suggest new forms for the training loss function, which contain certain residuals that are crucial to the error estimate but would be absent from the canonical PINN loss formulation. Adopting these new forms for the loss function leads to a variant PINN algorithm. We present ample numerical experiments with the new PINN algorithm for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation, which show that the method can capture the solution well.
Yanxia Qian, Yongchao Zhang, Yunqing Huang, Suchuan Dong
2023-03-22T00:51:11Z
http://arxiv.org/abs/2303.12245v1
# Error Analysis of Physics-Informed Neural Networks for Approximating Dynamic PDEs of Second Order in Time

###### Abstract

We consider the approximation of a class of dynamic partial differential equations (PDE) of second order in time by the physics-informed neural network (PINN) approach, and provide an error analysis of PINN for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation. Our analyses show that, with feed-forward neural networks having two hidden layers and the tanh activation function, the PINN approximation errors for the solution field, its time derivative and its gradient field can be effectively bounded by the training loss and the number of training data points (quadrature points). Our analyses further suggest new forms for the training loss function, which contain certain residuals that are crucial to the error estimate but would be absent from the canonical PINN loss formulation. Adopting these new forms for the loss function leads to a variant PINN algorithm. We present ample numerical experiments with the new PINN algorithm for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation, which show that the method can capture the solution well.

Keywords: _physics informed neural network; neural network; error estimate; PDE; scientific machine learning_

## 1 Introduction

Deep neural networks (DNN) have achieved great success in a number of fields in science and engineering [35], such as natural language processing, robotics, computer vision, speech and image recognition, to name but a few. This has inspired a great deal of research effort in the past few years to adapt such techniques to scientific computing. DNN-based techniques seem particularly promising for problems in higher dimensions, e.g. high-dimensional partial differential equations (PDE), since traditional numerical methods for high-dimensional problems can quickly become infeasible due to the exponential increase in the computational effort (the so-called curse of dimensionality). Under these circumstances deep-learning algorithms can be helpful. In particular, the neural network approach for PDE problems provides implicit regularization and can alleviate and perhaps overcome the curse of high dimensions [3, 4]. This approach also provides a natural framework for estimating the unknown parameters [24, 46, 45, 56, 59]. As deep neural networks are universal function approximators, it is natural to employ them as ansatz spaces for solutions of (ordinary or partial) differential equations. This paves the way for their use in physical modeling and scientific computing and gives rise to the field of scientific machine learning [31, 52, 46, 21, 36]. The physics-informed neural network (PINN) approach was introduced in [46]. It has been successfully applied to a variety of forward and inverse PDE problems and has become one of the most commonly-used methods in scientific machine learning (see e.g. [46, 25, 10, 30, 61, 29, 6, 54, 16, 53, 15, 7, 57, 23, 33, 18, 19, 60, 43, 17, 51, 27, 44], among others). The references [31, 9] provide a comprehensive review of the literature on PINN and of the benefits and drawbacks of this approach. The mathematical foundation for PINN aiming at the approximation of PDE solutions is currently an active area of research. It is important to account for the different components of the neural-network error: the optimization error, the approximation error, and the estimation error [41, 49].
Approximation error refers to the discrepancy between the exact functional map and the neural-network mapping function on a given network architecture [8, 22]. Estimation error arises when the network is trained on a finite data set to get a mapping on the target domain. The generalization error is the combination of the approximation and estimation errors and defines the accuracy of the neural-network predicted solution trained on the given set of data. Theoretical understanding of PINN has been advanced by a number of recent works. In [49] Shin et al. rigorously justify why PINN works and show its consistency for linear elliptic and parabolic PDEs under certain assumptions. These results are extended in [50] to a general abstract framework for analyzing PINN for linear problems with the loss function formulated in terms of the strong or weak forms of the equations. In [39] Mishra and Molinaro provide an abstract framework on PINN for forward PDE problems, and estimate the generalization error by means of the training error and the number of training data points. This framework is extended in [38] to study several inverse PDE problems, including the Poisson, heat, wave and Stokes equations. Bai and Koley [2] investigate the PINN approximation of nonlinear dispersive PDEs such as the KdV-Kawahara, Camassa-Holm and Benjamin-Ono equations. In [5] Biswas et al. provide explicit error estimates (in suitable norms) and stability analyses for the incompressible Navier-Stokes equations. Zerbinati [63] presents PINN as an under-determined point matching collocation method, reveals its connection with the Galerkin Least Squares (GALS) method, and establishes an a priori error estimate for elliptic problems. An important theoretical result on the approximation errors from the recent work [13] establishes that a feed-forward neural network \(\hat{u}_{\theta}\) with a tanh activation function and two hidden layers may approximate a function \(u\) with a bound in a Sobolev space, \[\|\hat{u}_{\theta N}-u\|_{W^{k,\infty}}\leq C\ln(cN)^{k}/N^{s-k}.\] Here \(u\in W^{s,\infty}([0,1]^{d})\), \(d\) is the dimension of the problem, \(N\) is the number of training points, and \(c,C>0\) are explicitly known constants independent of \(N\). Based on this result, De Ryck et al. [12] have studied the PINN for the Navier-Stokes equations and shown that a small training error implies a small generalization error. In particular, Hu et al. [26] provide higher-order (spatial Sobolev norm) error estimates for the primitive equations, which improve the existing results in the PINN literature that only involve \(L^{2}\) errors. In [14] it has been shown that, with a sufficient number of randomly chosen training points, the total \(L^{2}\) error can be bounded by the generalization error for Kolmogorov-type PDEs, which in turn is bounded by the training error. It is proved that the size of the PINN and the number of training samples only increase polynomially with the problem dimension, thus enabling PINN to overcome the curse of dimensionality in this case. In [37] the authors investigate the high-dimensional radiative transfer equation and prove that the generalization error is bounded by the training error and the number of training points, where the upper bound depends on the dimension only through a logarithmic factor. Hence PINN does not suffer from the curse of dimensionality, provided that the training errors do not depend on the underlying dimension.
Although PINN has been widely used for approximating PDEs, theoretical investigations of its convergence and errors are still quite limited and are largely confined to elliptic and parabolic PDEs. There seems to be little theoretical analysis of the convergence of PINN for hyperbolic-type PDEs. In this paper, we consider a class of dynamic PDEs of second order in time, which are hyperbolic in nature, and provide an analysis of the convergence and errors of the PINN algorithm applied to such problems. We have focused on the wave equation, the Sine-Gordon equation and the linear elastodynamic equation in our analyses. Building upon the results of [13, 12] on tanh neural networks with two hidden layers, we have shown that for these three kinds of PDEs:

* The underlying PDE residuals in PINN can be made arbitrarily small with tanh neural networks having two hidden layers.
* The total error of the PINN approximation is bounded by the generalization error of PINN.
* The total error of the PINN approximations for the solution field, its time derivative and its gradient is bounded by the training error (training loss) of PINN and the number of quadrature points (training data points).

Furthermore, our theoretical analyses have suggested PINN training loss functions for these PDEs that are somewhat different in form from the canonical PINN formulation. The differences lie in two aspects: (i) Our analyses require certain residual terms (such as the gradient of the initial condition, the time derivative of the boundary condition, or in the case of the linear elastodynamic equation the strain and divergence of the initial condition) in the training loss, which would be absent from the canonical PINN formulation of the loss function. (ii) Our analyses may require, depending on the type of boundary conditions, a norm other than the \(L^{2}\) norm for certain boundary residuals in the training loss, which is different from the commonly-used \(L^{2}\) norm in the canonical PINN formulation of the loss function. These new forms for the training loss function suggested by the theoretical analyses lead to a variant PINN algorithm. We have implemented the PINN algorithm based on these new forms of the training loss function for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation. Ample numerical experiments based on this algorithm are presented. The simulation results indicate that the method captures the solution field reasonably well for these PDEs. The numerical results also to some extent corroborate the theoretical relation between the approximation error and the PINN training loss obtained from the error analysis. The rest of this paper is organized as follows. In Section 2 we present an overview of PINN for dynamic PDEs of second order in time. In Sections 3, 4 and 5, we present an error analysis of the PINN algorithm for approximating the wave equation, the Sine-Gordon equation, and the linear elastodynamic equation, respectively. Section 6 summarizes a set of numerical experiments with these three PDEs to supplement and support our theoretical analyses. Section 7 concludes the presentation with some closing remarks. Finally, the appendix (Section 8) recalls some auxiliary results for our analysis and provides the proofs of the main theorems in Sections 4 and 5.
## 2 Physics Informed Neural Networks (PINN) for Approximating PDEs

### Generic PDE of Second Order in Time

Consider a compact domain \(D\subset\mathbb{R}^{d}\) (\(d>0\) being an integer), and let \(\mathcal{D}\) and \(\mathcal{B}\) denote the differential and boundary operators. We consider the following general form of an initial boundary value problem with a generic PDE of second order in time. For any \(\boldsymbol{x}\in D\), \(\boldsymbol{y}\in\partial D\) and \(t\in[0,T]\), \[\frac{\partial^{2}u}{\partial t^{2}}(\boldsymbol{x},t)+\mathcal{D}[u](\boldsymbol{x},t)=0, \tag{1a}\] \[\mathcal{B}u(\boldsymbol{y},t)=u_{d}(\boldsymbol{y},t),\] (1b) \[u(\boldsymbol{x},0)=u_{in}(\boldsymbol{x}),\quad\frac{\partial u}{\partial t}(\boldsymbol{x},0)=v_{in}(\boldsymbol{x}). \tag{1c}\] Here, \(u(\boldsymbol{x},t)\) is the unknown field solution, \(u_{d}\) denotes the boundary data, and \(u_{in}\) and \(v_{in}\) are the initial distributions for \(u\) and \(\frac{\partial u}{\partial t}\). We assume that in \(\mathcal{D}\) the highest derivative with respect to the time variable \(t\), if any, is of first order.

### Neural Network Representation of a Function

Let \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) denote an activation function that is at least twice continuously differentiable. For any \(n\in\mathbb{N}\) and \(z\in\mathbb{R}^{n}\), we define \(\sigma(z):=(\sigma(z_{1}),\cdots,\sigma(z_{n}))\), where \(z_{i}\) (\(1\leq i\leq n\)) are the components of \(z\). We adopt the following formal definition for a feedforward neural network as given in [12]. **Definition 2.1** ([12]).: _Let \(R\in(0,\infty]\), \(L,W\in\mathbb{N}\) and \(l_{0},\cdots,l_{L}\in\mathbb{N}\). Let \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) be a twice differentiable function and define_ \[\Theta=\Theta_{L,W,R}:=\bigcup_{L^{\prime}\in\mathbb{N},L^{\prime}\leq L}\bigcup_{l_{0},\cdots,l_{L}\in\{1,\cdots,W\}}\times_{k=1}^{L^{\prime}}([-R,R]^{l_{k}\times l_{k-1}}\times[-R,R]^{l_{k}}). \tag{2}\] For \(\theta\in\Theta\), we define \(\theta_{k}:=(W_{k},b_{k})\) and \(\mathcal{A}_{k}^{\theta}:\mathbb{R}^{l_{k-1}}\to\mathbb{R}^{l_{k}}\) by \(z\mapsto W_{k}z+b_{k}\) for \(1\leq k\leq L\), and we define \(f_{k}^{\theta}:\mathbb{R}^{l_{k-1}}\to\mathbb{R}^{l_{k}}\) by \[f_{k}^{\theta}=\left\{\begin{array}{ll}\mathcal{A}_{L}^{\theta}(z)&k=L,\\ (\sigma\circ\mathcal{A}_{k}^{\theta})(z)&1\leq k<L.\end{array}\right. \tag{3}\] Denote by \(u_{\theta}:\mathbb{R}^{l_{0}}\to\mathbb{R}^{l_{L}}\) the function that satisfies for all \(z\in\mathbb{R}^{l_{0}}\) that \[u_{\theta}(z)=(f_{L}^{\theta}\circ f_{L-1}^{\theta}\circ\cdots\circ f_{1}^{\theta})(z)\qquad z\in\mathbb{R}^{l_{0}}. \tag{4}\] We set \(z=(\mathbf{x},t)\) and \(l_{0}=d+1\) for approximating the PDE problem (1). \(u_{\theta}\) as defined above is the neural-network representation of a parameterized function associated with the parameter \(\theta\). This neural network contains \((L+1)\) layers (\(L\geq 2\)), with widths \((l_{0},l_{1},\cdots,l_{L})\) for each layer. The input layer has a width \(l_{0}\), and the output layer has a width \(l_{L}\). The \((L-1)\) layers between the input/output layers are the hidden layers, with widths \(l_{k}\) (\(1\leq k\leq L-1\)). \(W_{k}\) and \(b_{k}\) are the weight/bias coefficients corresponding to layer \(k\) for \(1\leq k\leq L\). From layer to layer the network applies an affine transform, followed by composition with the activation function \(\sigma\). Note that no activation function is applied to the output layer.
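To make Definition 2.1 concrete, the following is a minimal sketch (our addition, not part of the original paper) of such a feedforward tanh network in PyTorch. The two-hidden-layer depth mirrors the networks used in the analyses below; the class name and layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TanhNet(nn.Module):
    """Feedforward network u_theta in the sense of Definition 2.1:
    affine maps composed with tanh on the hidden layers; no activation
    is applied to the output layer."""
    def __init__(self, l0, hidden, lL):
        super().__init__()
        # Two hidden layers (L = 3), matching the networks in the error analysis.
        self.A1 = nn.Linear(l0, hidden)      # A_1 : R^{l0} -> R^{l1}
        self.A2 = nn.Linear(hidden, hidden)  # A_2 : R^{l1} -> R^{l2}
        self.A3 = nn.Linear(hidden, lL)      # A_L : output layer, no activation

    def forward(self, z):
        z = torch.tanh(self.A1(z))
        z = torch.tanh(self.A2(z))
        return self.A3(z)

# For the problem (1) the input is z = (x, t), so l0 = d + 1; e.g. with d = 1:
u_theta = TanhNet(l0=2, hidden=64, lL=1)
```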
We refer to \(u_{\theta}\) with \(L=2\) (i.e. single hidden layer) as a shallow neural network, and \(u_{\theta}\) with \(L\geq 3\) (i.e. multiple hidden layers) as a deeper or deep neural network.

### Physics Informed Neural Network for Initial/Boundary Value Problem

Let \(\Omega=D\times[0,T]\) denote the spatial-temporal domain and \(\Omega_{*}=\partial D\times[0,T]\) its spatial boundary. We approximate the solution \(u\) to the problem (1) by a neural network \(u_{\theta}:\Omega\to\mathbb{R}^{n}\). With PINN we consider the residual function of the initial/boundary value problem (1), defined for any sufficiently smooth function \(u:\Omega\to\mathbb{R}^{n}\) as, for any \(\mathbf{x}\in D\), \(\mathbf{y}\in\partial D\) and \(t\in[0,T]\), \[\mathcal{R}_{int}[u](\mathbf{x},t)=\frac{\partial^{2}u}{\partial t^{2}}(\mathbf{x},t)+\mathcal{D}[u](\mathbf{x},t), \tag{5a}\] \[\mathcal{R}_{sb}[u](\mathbf{y},t)=\mathcal{B}u(\mathbf{y},t)-u_{d}(\mathbf{y},t),\] (5b) \[\mathcal{R}_{tb1}[u](\mathbf{x},0)=u(\mathbf{x},0)-u_{in}(\mathbf{x}),\] (5c) \[\mathcal{R}_{tb2}[u](\mathbf{x},0)=\frac{\partial u}{\partial t}(\mathbf{x},0)-v_{in}(\mathbf{x}). \tag{5d}\] These residuals characterize how well a given function \(u\) satisfies the initial/boundary value problem (1). If \(u\) is the exact solution, \(\mathcal{R}_{int}[u]=\mathcal{R}_{sb}[u]=\mathcal{R}_{tb1}[u]=\mathcal{R}_{tb2}[u]=0\). To facilitate the subsequent analyses, we introduce an auxiliary function \(v=\frac{\partial u}{\partial t}\) and rewrite \(\mathcal{R}_{tb2}\) as \[\mathcal{R}_{tb2}[v](\mathbf{x},0)=v(\mathbf{x},0)-v_{in}(\mathbf{x}). \tag{6}\] We reformulate (1a) into two equations, thus separating the interior residual into the following two components: \[\mathcal{R}_{int1}[u,v](\mathbf{x},t)=\frac{\partial u}{\partial t}(\mathbf{x},t)-v(\mathbf{x},t), \tag{7}\] \[\mathcal{R}_{int2}[u,v](\mathbf{x},t)=\frac{\partial v}{\partial t}(\mathbf{x},t)+\mathcal{D}[u](\mathbf{x},t). \tag{8}\] With PINN, we seek a neural network \((u_{\theta},v_{\theta})\) to minimize the following quantity, \[\mathcal{E}_{G}(\theta)^{2}= \int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x} \tag{9}\] \[+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{\Omega_{*}}|R_{sb}[u_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t.\] The different terms of (9) may be rescaled by different weights (penalty coefficients). For simplicity, we set all these weights to one in the analysis. \(\mathcal{E}_{G}\) as defined above is often referred to as the generalization error. Because of the integrals involved therein, \(\mathcal{E}_{G}\) can be hard to minimize.
In practice, one will approximate (9) by an appropriate numerical quadrature rule, as follows \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2}= \mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb}(\theta,\mathcal{S}_{sb})^{2}, \tag{10}\] where \[\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2}, \tag{11a}\] \[\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (11b) \[\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (11c) \[\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb2}[v_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (11d) \[\mathcal{E}_{T}^{sb}(\theta,\mathcal{S}_{sb})^{2} =\sum_{n=1}^{N_{sb}}\omega_{sb}^{n}|R_{sb}[u_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2}. \tag{11e}\] The quadrature points in the spatial-temporal domain and on the spatial and temporal boundaries, \(\mathcal{S}_{int}=\{(\mathbf{x}_{int}^{n},t_{int}^{n})\}_{n=1}^{N_{int}}\), \(\mathcal{S}_{sb}=\{(\mathbf{x}_{sb}^{n},t_{sb}^{n})\}_{n=1}^{N_{sb}}\) and \(\mathcal{S}_{tb}=\{(\mathbf{x}_{tb}^{n},t_{tb}^{n}=0)\}_{n=1}^{N_{tb}}\), constitute the input data sets to the neural network. In the above equations \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) is referred to as the training error (or training loss), and \(\omega_{\star}^{n}\) are suitable quadrature weights for \(\star=int\), \(sb\) and \(tb\). Therefore, PINN attempts to minimize the training error \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) over the network parameters \(\theta\), and upon convergence of the optimization the trained \(u_{\theta}\) contains the approximation of the solution \(u\) to the problem (1). **Remark 2.2**.: _The generalization error (9) (with the corresponding training error (10)) is the standard (canonical) PINN form if one introduces \(v=\frac{\partial u}{\partial t}\) and reformulates (1a) into two equations. We would like to emphasize that our analyses below suggest alternative forms for the generalization error, e.g._ \[\mathcal{E}_{G}(\theta)^{2}= \int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t \tag{12}\] \[+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\left(\int_{\Omega_{*}}|R_{sb}[u_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}},\] _which differs from (9) in the terms \(\nabla R_{int1}\), \(\nabla R_{tb1}\) and the last term._
_The corresponding training error is,_ \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2}= \mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}\] \[+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb}(\theta,\mathcal{S}_{sb}), \tag{13}\] _where_ \[\left\{\begin{aligned} \mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}&=\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\\ \mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}&=\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\nabla R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2}.\end{aligned}\right. \tag{14}\] _The error analyses also suggest additional terms in the generalization error for different equations._

### Numerical Quadrature Rules

As discussed above, we need to approximate the integrals of functions. The analysis in the subsequent sections requires well-known results on numerical quadrature rules, as reviewed below. Given \(\Lambda\subset\mathbb{R}^{d}\) and a function \(f\in L^{1}(\Lambda)\), we would like to approximate \(\int_{\Lambda}f(z)\mathrm{d}z\). A quadrature rule provides an approximation by \[\int_{\Lambda}f(z)\mathrm{d}z\approx\frac{1}{M}\sum_{n=1}^{M}\omega_{n}f(z_{n}), \tag{15}\] where \(z_{n}\in\Lambda\) (\(1\leq n\leq M\)) are the quadrature points and \(\omega_{n}\) (\(1\leq n\leq M\)) denote the appropriate quadrature weights. The approximation accuracy is influenced by the type of quadrature rule, the number of quadrature points (\(M\)), and the regularity of \(f\). For the mid-point rule, which is assumed in the analysis in the current work, the approximation accuracy is given by \[\left|\int_{\Lambda}f(z)\mathrm{d}z-\frac{1}{M}\sum_{n=1}^{M}\omega_{n}f(z_{n})\right|\leq C_{f}M^{-2/d}, \tag{16}\] where \(C_{f}\lesssim\|f\|_{C^{2}(\Lambda)}\) (\(a\lesssim b\) denotes \(a\leq Cb\)), \(\Lambda\) has been partitioned into \(M\sim N^{d}\) cubes, and \(z_{n}\) (\(1\leq n\leq M\)) denote the midpoints of these cubes [11]. In this paper, we use \(C\) to denote a universal constant, which may depend on \(k,d,T,u\) and \(v\) but not on \(N\). We use subscripts to emphasize its dependence when necessary, e.g. \(C_{d}\) is a constant depending only on \(d\). We focus on PDE problems in relatively low dimensions (\(d\leq 3\)) in this paper and employ the standard quadrature rules. We note that in higher dimensions the standard quadrature rules may not be favorable. In this case random training points or low-discrepancy training points [40] may be preferred. In subsequent sections we focus on three representative dynamic equations of second order in time (the wave equation, the Sine-Gordon equation, and the linear elastodynamic equation), and provide the error estimate for approximating these equations by PINN. We note that these analyses suggest alternative forms for the training loss function that are somewhat different from the standard PINN forms [46]. The PINN numerical results based on the standard form for the loss function, and based on the alternative forms as suggested by the error estimate, will be provided after the presentation of the theoretical analysis.
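As an illustration of the mid-point rule (15)-(16), here is a small sketch (our addition, not from the paper) that builds the mid-point quadrature points and weights on the unit hypercube; the function name is hypothetical, and the weights absorb the \(1/M\) factor of (15).

```python
import numpy as np

def midpoint_rule(n_per_dim, d):
    """Mid-point quadrature on [0,1]^d: M = n_per_dim**d equal cubes,
    nodes at the cube centers, equal weights summing to one (cf. (15)).
    Per (16), the quadrature error decays like M^{-2/d} for C^2 integrands."""
    h = 1.0 / n_per_dim
    centers = h * (np.arange(n_per_dim) + 0.5)
    grids = np.meshgrid(*([centers] * d), indexing="ij")
    z = np.stack([g.ravel() for g in grids], axis=-1)  # (M, d) midpoints
    w = np.full(z.shape[0], 1.0 / z.shape[0])          # equal weights, sum = 1
    return z, w

# Example: approximate the integral of sin(pi x) sin(pi y) over [0,1]^2,
# whose exact value is 4 / pi**2.
z, w = midpoint_rule(64, 2)
approx = np.sum(w * np.sin(np.pi * z[:, 0]) * np.sin(np.pi * z[:, 1]))
```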
In what follows, for brevity we adopt the notation \(\mathcal{F}_{\Xi}=\frac{\partial\mathcal{F}}{\partial\Xi}\), \(\mathcal{F}_{\Xi\Upsilon}=\frac{\partial^{2}\mathcal{F}}{\partial\Xi\partial\Upsilon}\) (\(\Xi,\Upsilon\in\{t,x\}\)), for any sufficiently smooth function \(\mathcal{F}:\Omega\rightarrow\mathbb{R}^{n}\).

## 3 Physics Informed Neural Networks for Approximating Wave Equation

### Wave Equation

Consider the following wave equations on the torus \(D=[0,1)^{d}\subset\mathbb{R}^{d}\) with periodic boundary conditions: \[u_{t}-v=0 \text{in }D\times[0,T], \tag{17a}\] \[v_{t}-\Delta u=f \text{in }D\times[0,T],\] (17b) \[u(\boldsymbol{x},0)=\psi_{1}(\boldsymbol{x}) \text{in }D,\] (17c) \[v(\boldsymbol{x},0)=\psi_{2}(\boldsymbol{x}) \text{in }D,\] (17d) \[u(\boldsymbol{x},t)=u(\boldsymbol{x}+1,t) \text{in }\partial D\times[0,T],\] (17e) \[\nabla u(\boldsymbol{x},t)=\nabla u(\boldsymbol{x}+1,t) \text{in }\partial D\times[0,T]. \tag{17f}\] The regularity results for linear evolution equations of second order in time have been studied in the book [55]. When the self-adjoint operator \(\mathcal{A}\) there is taken to be \(\Delta\), the linear evolution equations of second order in time become the classical wave equations, and we can obtain the following regularity results. **Lemma 3.1**.: _Let \(r\geq 1\), \(\psi_{1}\in H^{r}(D)\), \(\psi_{2}\in H^{r-1}(D)\) and \(f\in L^{2}([0,T];H^{r-1}(D))\), then there exists a unique solution \(u\) to the classical wave equations such that \(u\in C([0,T];H^{r}(D))\) and \(u_{t}\in C([0,T];H^{r-1}(D))\)._ **Lemma 3.2**.: _Let \(k\in\mathbb{N}\), \(\psi_{1}\in H^{r}(D)\), \(\psi_{2}\in H^{r-1}(D)\) and \(f\in C^{k-1}([0,T];H^{r-1}(D))\) with \(r>\frac{d}{2}+k\), then there exist \(T>0\) and a classical solution \(u\) to the wave equations such that \(u(t=0)=\psi_{1}\), \(u_{t}(t=0)=\psi_{2}\), \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\)._ Proof.: By Lemma 3.1, there exist \(T>0\) and a solution \((u,v)\) to the wave equations such that \(u(t=0)=\psi_{1}\), \(v(t=0)=\psi_{2}\), \(u\in C([0,T];H^{r}(D))\) and \(v\in C([0,T];H^{r-1}(D))\). As \(r>\frac{d}{2}+k\), \(H^{r-k}(D)\) is a Banach algebra. For \(k=1\), since \(u\in C([0,T];H^{r}(D))\), \(v\in C([0,T];H^{r-1}(D))\) and \(f\in C([0,T];H^{r-1}(D))\), we have \(u_{t}=v\in C([0,T];H^{r-1}(D))\) and \(v_{t}=\Delta u+f\in C([0,T];H^{r-2}(D))\). This implies that \(u\in C^{1}([0,T];H^{r-1}(D))\) and \(v\in C^{1}([0,T];H^{r-2}(D))\). For \(k=2\), by \(f\in C^{1}([0,T];H^{r-1}(D))\), we have \(u_{tt}=v_{t}\in C([0,T];H^{r-2}(D))\) and \(v_{tt}=\Delta u_{t}+f_{t}\in C([0,T];H^{r-3}(D))\). This implies that \(u\in C^{2}([0,T];H^{r-2}(D))\) and \(v\in C^{2}([0,T];H^{r-3}(D))\). Repeating the same argument, we have \(u\in\cap_{l=0}^{k}C^{l}([0,T];H^{r-l}(D))\) and \(v\in\cap_{l=0}^{k}C^{l}([0,T];H^{r-l-1}(D))\). Then, applying the Sobolev embedding theorem and \(r>\frac{d}{2}+k\), it holds that \(H^{r-l}(D)\subset C^{k-l}(D)\) and \(H^{r-l-1}(D)\subset C^{k-l-1}(D)\) for \(0\leq l\leq k\). Therefore, \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\).

### Physics Informed Neural Networks

We would like to approximate the solutions to the problem (17) with PINN. We seek deep neural networks \(u_{\theta}:D\times[0,T]\rightarrow\mathbb{R}\) and \(v_{\theta}:D\times[0,T]\rightarrow\mathbb{R}\), parameterized by \(\theta\in\Theta\), that approximate the solutions \(u\) and \(v\) of (17).
Define the residuals, \[R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)=u_{\theta t}-v_{\theta}, \tag{18a}\] \[R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)=v_{\theta t}-\Delta u_{\theta}-f,\] (18b) \[R_{tb1}[u_{\theta}](\mathbf{x})=u_{\theta}(\mathbf{x},0)-\psi_{1}(\mathbf{x}),\] (18c) \[R_{tb2}[v_{\theta}](\mathbf{x})=v_{\theta}(\mathbf{x},0)-\psi_{2}(\mathbf{x}),\] (18d) \[R_{sb1}[v_{\theta}](\mathbf{x},t)=v_{\theta}(\mathbf{x},t)-v_{\theta}(\mathbf{x}+1,t),\] (18e) \[R_{sb2}[u_{\theta}](\mathbf{x},t)=\nabla u_{\theta}(\mathbf{x},t)-\nabla u_{\theta}(\mathbf{x}+1,t). \tag{18f}\] Note that for the exact solution, \(R_{int1}[u,v]=R_{int2}[u,v]=R_{tb1}[u]=R_{tb2}[v]=R_{sb1}[v]=R_{sb2}[u]=0\). Let \(\Omega=D\times[0,T]\) and \(\Omega_{*}=\partial D\times[0,T]\) be the space-time domain and its spatial boundary. With PINN, we minimize the following generalization error, \[\mathcal{E}_{G}(\theta)^{2} =\int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\int_{\Omega_{*}}|R_{sb1}[v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t+\int_{\Omega_{*}}|R_{sb2}[u_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t. \tag{19}\] The form of the different terms in this expression will become clearer below. To complete the PINN formulation, we choose the training set \(\mathcal{S}\subset\overline{D}\times[0,T]\) based on suitable quadrature points. We divide the full training set \(\mathcal{S}=\mathcal{S}_{int}\cup\mathcal{S}_{sb}\cup\mathcal{S}_{tb}\) into the following three components (a sketch of how these residuals can be evaluated in practice follows the list):

* Interior training points \(\mathcal{S}_{int}=\{z_{n}\}\) for \(1\leq n\leq N_{int}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in D\times(0,T)\).
* Spatial boundary training points \(\mathcal{S}_{sb}=\{z_{n}\}\) for \(1\leq n\leq N_{sb}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in\partial D\times(0,T)\).
* Temporal boundary training points \(\mathcal{S}_{tb}=\{\mathbf{x}_{n}\}\) for \(1\leq n\leq N_{tb}\), with each \(\mathbf{x}_{n}\in D\).
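As a concrete illustration (our addition, with hypothetical helper names), the interior residuals (18a)-(18b) and the gradient term \(\nabla R_{int1}\) can be evaluated with automatic differentiation. A minimal PyTorch sketch, assuming \(d=1\) and networks like the `TanhNet` class shown in Section 2:

```python
import torch

def wave_residuals(u_net, v_net, x, t, f):
    """Evaluate R_int1, R_int2 of (18a)-(18b) and grad(R_int1) at interior
    points (x, t) via automatic differentiation; here d = 1 for simplicity."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    z = torch.cat([x, t], dim=1)
    u, v = u_net(z), v_net(z)
    grad = lambda out, var: torch.autograd.grad(
        out, var, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x, v_t = grad(u, t), grad(u, x), grad(v, t)
    u_xx = grad(u_x, x)              # Laplacian of u (one spatial dimension)
    r_int1 = u_t - v                 # R_int1 = u_t - v,        Eq. (18a)
    r_int2 = v_t - u_xx - f(x, t)    # R_int2 = v_t - Δu - f,   Eq. (18b)
    grad_r_int1 = grad(r_int1, x)    # ∇R_int1, used by the loss term (21c)
    return r_int1, r_int2, grad_r_int1
```

Summing the squared residuals over the training points with the quadrature weights then yields the discrete loss terms defined next.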
We define the PINN training loss, \(\theta\mapsto\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\), as follows, \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2} =\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}\] \[+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb1}(\theta,\mathcal{S}_{sb})^{2}+\mathcal{E}_{T}^{sb2}(\theta,\mathcal{S}_{sb})^{2}, \tag{20}\] where \[\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2}, \tag{21a}\] \[\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (21b) \[\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (21c) \[\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (21d) \[\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb2}[v_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (21e) \[\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\nabla R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (21f) \[\mathcal{E}_{T}^{sb1}(\theta,\mathcal{S}_{sb})^{2} =\sum_{n=1}^{N_{sb}}\omega_{sb}^{n}|R_{sb1}[v_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2},\] (21g) \[\mathcal{E}_{T}^{sb2}(\theta,\mathcal{S}_{sb})^{2} =\sum_{n=1}^{N_{sb}}\omega_{sb}^{n}|R_{sb2}[u_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2}. \tag{21h}\] Here the quadrature points in space-time constitute the data sets \(\mathcal{S}_{int}=\{(\mathbf{x}_{int}^{n},t_{int}^{n})\}_{n=1}^{N_{int}}\), \(\mathcal{S}_{tb}=\{\mathbf{x}_{tb}^{n}\}_{n=1}^{N_{tb}}\) and \(\mathcal{S}_{sb}=\{(\mathbf{x}_{sb}^{n},t_{sb}^{n})\}_{n=1}^{N_{sb}}\), and \(\omega_{\star}^{n}\) are suitable quadrature weights with \(\star\) denoting \(int\), \(tb\) or \(sb\). Let \[\hat{u}=u_{\theta}-u,\qquad\hat{v}=v_{\theta}-v,\] denote the difference between the solution to the wave equations and the PINN approximation of the solution. We define the total error of the PINN approximation by \[\mathcal{E}(\theta)^{2}=\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+|\nabla\hat{u}(\mathbf{x},t)|^{2}+|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t. \tag{22}\]

### Error Analysis

In light of the wave equations (17) and the definitions of the different residuals (18), we have \[R_{int1}=\hat{u}_{t}-\hat{v}, \tag{23a}\] \[R_{int2}=\hat{v}_{t}-\Delta\hat{u},\] (23b) \[R_{tb1}=\hat{u}(\mathbf{x},0),\] (23c) \[R_{tb2}=\hat{v}(\mathbf{x},0),\] (23d) \[R_{sb1}=\hat{v}(\mathbf{x},t)-\hat{v}(\mathbf{x}+1,t),\] (23e) \[R_{sb2}=\nabla\hat{u}(\mathbf{x},t)-\nabla\hat{u}(\mathbf{x}+1,t). \tag{23f}\]

#### 3.3.1 Bound on the Residuals

**Theorem 3.3**.: _Let \(d\), \(r\), \(k\in\mathbb{N}\) with \(k\geq 3\). Let \(\psi_{1}\in H^{r}(D)\), \(\psi_{2}\in H^{r-1}(D)\) and \(f\in C^{k-1}([0,T];H^{r-1}(D))\) with \(r>\frac{d}{2}+k\)._
_For every integer \(N>5\), there exist \(\tanh\) neural networks \(u_{\theta}\) and \(v_{\theta}\), each with two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that_ \[\|R_{int1}\|_{L^{2}(\Omega)},\|R_{tb1}\|_{L^{2}(D)}\lesssim\ln N\,N^{-k+1}, \tag{24a}\] \[\|R_{int2}\|_{L^{2}(\Omega)},\|\nabla R_{int1}\|_{L^{2}(\Omega)},\|\nabla R_{tb1}\|_{L^{2}(D)},\|R_{sb2}\|_{L^{2}(\partial D\times[0,t])}\lesssim\ln^{2}N\,N^{-k+2},\] (24b) \[\|R_{tb2}\|_{L^{2}(D)},\|R_{sb1}\|_{L^{2}(\partial D\times[0,t])}\lesssim\ln N\,N^{-k+2}. \tag{24c}\] Proof.: Based on Lemma 3.2, it holds that \(u\in H^{k}(\Omega)\) and \(v\in H^{k-1}(\Omega)\). In light of Lemma 8.5, there exist neural networks \(u_{\theta}\) and \(v_{\theta}\), with the same two hidden layers and widths \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that for every \(0\leq l\leq 2\) and \(0\leq s\leq 2\), \[\|u_{\theta}-u\|_{H^{l}(\Omega)} \leq C_{l,k,d+1,u}\lambda_{l,u}(N)N^{-k+l}, \tag{25}\] \[\|v_{\theta}-v\|_{H^{s}(\Omega)} \leq C_{s,k-1,d+1,v}\lambda_{s,v}(N)N^{-k+1+s}, \tag{26}\] where \(\lambda_{l,u}=2^{l}3^{d+1}(1+\sigma)\ln^{l}\left(\beta_{l,\sigma,d+1,u}N^{d+k+3}\right)\), \(\sigma=\frac{1}{100}\), \(\lambda_{s,v}=2^{s}3^{d+1}(1+\sigma)\ln^{s}\left(\beta_{s,\sigma,d+1,v}N^{d+k+2}\right)\), and the definition of the other constants can be found in Lemma 8.5. In light of Lemma 8.3, we can bound the PINN residual terms, \[\|\hat{u}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{1}(\Omega)},\qquad\|\hat{v}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{v}\|_{H^{1}(\Omega)},\] \[\|\Delta\hat{u}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)},\qquad\|\nabla\hat{u}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)},\] \[\|\nabla\hat{v}\|_{L^{2}(\Omega)}\leq\|\hat{v}\|_{H^{1}(\Omega)},\] \[\|\hat{u}\|_{L^{2}(D)}\leq\|\hat{u}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{1}(\Omega)},\] \[\|\hat{v}\|_{L^{2}(D)}\leq\|\hat{v}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)},\] \[\|\nabla\hat{u}\|_{L^{2}(D)}\leq\|\nabla\hat{u}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{2}(\Omega)},\] \[\|\hat{v}\|_{L^{2}(\partial D\times[0,t])}\leq\|\hat{v}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)},\] \[\|\nabla\hat{u}\|_{L^{2}(\partial D\times[0,t])}\leq\|\nabla\hat{u}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{2}(\Omega)}.\] By combining these relations with (25) and (26), we can obtain \[\|R_{int1}\|_{L^{2}(\Omega)}=\|\hat{u}_{t}-\hat{v}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{1}(\Omega)}+\|\hat{v}\|_{L^{2}(\Omega)}\] \[\leq C_{1,k,d+1,u}\lambda_{1,u}(N)N^{-k+1}+C_{0,k-1,d+1,v}\lambda_{0,v}(N)N^{-k+1}\lesssim\ln N\,N^{-k+1},\] \[\|R_{int2}\|_{L^{2}(\Omega)}=\|\hat{v}_{t}-\Delta\hat{u}\|_{L^{2}(\Omega)}\leq\|\hat{v}\|_{H^{1}(\Omega)}+\|\hat{u}\|_{H^{2}(\Omega)}\] \[\leq C_{2,k,d+1,u}\lambda_{2,u}(N)N^{-k+2}+C_{1,k-1,d+1,v}\lambda_{1,v}(N)N^{-k+2}\lesssim\ln^{2}N\,N^{-k+2},\] \[\|\nabla R_{int1}\|_{L^{2}(\Omega)}=\|\nabla(\hat{u}_{t}-\hat{v})\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)}+\|\hat{v}\|_{H^{1}(\Omega)}\] \[\leq C_{2,k,d+1,u}\lambda_{2,u}(N)N^{-k+2}+C_{1,k-1,d+1,v}\lambda_{1,v}(N)N^{-k+2}\lesssim\ln^{2}N\,N^{-k+2},\] \[\|R_{tb1}\|_{L^{2}(D)}\leq
C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+1},\] \[\|R_{tb2}\|_{L^{2}(D)},\|R_{sb1}\|_{L^{2}(\partial D\times[0,t])}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+2},\] \[\|\nabla R_{tb1}\|_{L^{2}(D)},\|R_{sb2}\|_{L^{2}(\partial D\times[0,t])}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2}.\] Theorem 3.3 implies that one can make the PINN residuals (18) arbitrarily small by choosing \(N\) to be sufficiently large. It follows that the generalization error \(\mathcal{E}_{G}(\theta)^{2}\) in (19) can be made arbitrarily small.

#### 3.3.2 Bounds on the Total Approximation Error

We next show that the total error \(\mathcal{E}(\theta)^{2}\) is small when the generalization error \(\mathcal{E}_{G}(\theta)^{2}\) is small with the PINN approximation \((u_{\theta},v_{\theta})\). Then we prove that the total error \(\mathcal{E}(\theta)^{2}\) can be made arbitrarily small, provided that the training error \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) is sufficiently small and the sample set is sufficiently large. **Theorem 3.4**.: _Let \(d\in\mathbb{N}\), \(u\in C^{1}(\Omega)\) and \(v\in C^{0}(\Omega)\) be the classical solution to the wave equations (17). Let \(u_{\theta}\) and \(v_{\theta}\) denote the PINN approximation with parameter \(\theta\). Then the following relation holds,_ \[\mathcal{E}(\theta)^{2}=\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+|\nabla\hat{u}(\mathbf{x},t)|^{2}+|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{G}T\exp(2T), \tag{27}\] _where_ \[C_{G} =\int_{D}(|R_{tb1}|^{2}+|R_{tb2}|^{2}+|\nabla R_{tb1}|^{2})\,\mathrm{d}\mathbf{x}+\int_{0}^{T}\int_{D}(|R_{int1}|^{2}+|R_{int2}|^{2}+|\nabla R_{int1}|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[\quad+\int_{0}^{T}\int_{\partial D}(|R_{sb1}|^{2}+|R_{sb2}|^{2})\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t.\] Proof.: By taking the inner product of (23a) and (23b) with \(\hat{u}\) and \(\hat{v}\), respectively, and integrating over \(D\), we have \[\frac{d}{2dt}\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x} =\int_{D}\hat{u}\hat{v}\,\mathrm{d}\mathbf{x}+\int_{D}R_{int1}\hat{u}\,\mathrm{d}\mathbf{x}\leq\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}, \tag{28}\] \[\frac{d}{2dt}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x} =-\int_{D}\nabla\hat{u}\cdot\nabla\hat{v}\,\mathrm{d}\mathbf{x}+\int_{\partial D}\hat{v}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\] \[=-\int_{D}\nabla\hat{u}\cdot\nabla\hat{u}_{t}\,\mathrm{d}\mathbf{x}+\int_{D}\nabla\hat{u}\cdot\nabla R_{int1}\,\mathrm{d}\mathbf{x}+\int_{\partial D}\hat{v}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\] \[=-\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}\nabla\hat{u}\cdot\nabla R_{int1}\,\mathrm{d}\mathbf{x}+\int_{\partial D}R_{sb1}R_{sb2}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\] \[\leq-\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad\quad+\frac{1}{2}\int_{\partial D}(|R_{sb1}|^{2}+|R_{sb2}|^{2})\,\mathrm{d}s(\mathbf{x})+\frac{1}{2}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}. \tag{29}\] Here, we have used \(\hat{v}=\hat{u}_{t}-R_{int1}\).
By adding (28) to (29), we have \[\frac{d}{2dt}\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{d}{2dt}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad\leq\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad+\frac{1}{2}\int_{D}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{\partial D}(|R_{sb1}|^{2}+|R_{sb2}|^{2})\,\mathrm{d}s(\mathbf{x}). \tag{30}\] Integrating (30) over \([0,\tau]\) for any \(\tau\leq T\) and applying the Cauchy-Schwarz inequality, we obtain \[\int_{D}|\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\hat{v}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad\leq\int_{D}|R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+2\int_{0}^{\tau}\int_{D}\left(|\hat{u}|^{2}+|\nabla\hat{u}|^{2}+|\hat{v}|^{2}\right)\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[\quad+\int_{0}^{T}\int_{D}\left(|R_{int1}|^{2}+|R_{int2}|^{2}+|\nabla R_{int1}|^{2}\right)\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{0}^{T}\int_{\partial D}(|R_{sb1}|^{2}+|R_{sb2}|^{2})\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t.\] We apply the integral form of the Gronwall inequality to the above inequality to get \[\int_{D}\left(|\hat{u}(\mathbf{x},\tau)|^{2}+|\nabla\hat{u}(\mathbf{x},\tau)|^{2}+|\hat{v}(\mathbf{x},\tau)|^{2}\right)\mathrm{d}\mathbf{x}\leq C_{G}\exp(2T),\] where \[C_{G}=\int_{D}(|R_{tb1}|^{2}+|R_{tb2}|^{2}+|\nabla R_{tb1}|^{2})\,\mathrm{d}\mathbf{x}+\int_{0}^{T}\int_{D}(|R_{int1}|^{2}+|R_{int2}|^{2}+|\nabla R_{int1}|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{0}^{T}\int_{\partial D}(|R_{sb1}|^{2}+|R_{sb2}|^{2})\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t.\] Then, we integrate the above inequality over \([0,T]\) to yield (27). **Remark 3.5**.: _For the wave equations (17) with periodic boundary conditions, we would like to mention below two other forms for the generalization error (and the related training loss)._
_Compared with (19), they differ only on the spatial boundary \(\Omega_{*}\), i.e.,_ \[\mathcal{E}_{G}(\theta)^{2} =\int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\left(\int_{\Omega_{*}}|R_{sb1}[v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}, \tag{31}\] _and_ \[\mathcal{E}_{G}(\theta)^{2} =\int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\left(\int_{\Omega_{*}}|R_{sb2}[u_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}. \tag{32}\] _The related training loss functions are given by_ \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2} =\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}\] \[+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb1}(\theta,\mathcal{S}_{sb}), \tag{33}\] _or_ \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2} =\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}\] \[+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb2}(\theta,\mathcal{S}_{sb}). \tag{34}\] _These three forms for the generalization error result from different treatments of the boundary term \(\int_{\partial D}\hat{v}\nabla\hat{u}\cdot\mathbf{n}\) in the proof of Theorem 3.4:_ \[\int_{\partial D}\hat{v}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x}) =\int_{\partial D}R_{sb1}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})\leq|\partial D|^{\frac{1}{2}}(\|u\|_{C^{1}(\partial D\times[0,t])}+\|u_{\theta}\|_{C^{1}(\partial D\times[0,t])})\left(\int_{\partial D}|R_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\right)^{\frac{1}{2}},\] \[\int_{\partial D}\hat{v}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x}) =\int_{\partial D}R_{sb1}R_{sb2}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})\leq\frac{1}{2}\left(\int_{\partial D}|R_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})+\int_{\partial D}|R_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\right).\] _Our numerical experiments indicate that adopting the training loss (33) or (34) seems to lead to poorer simulation results. For the periodic boundary, both terms \(R_{sb1}\) and \(R_{sb2}\) may be needed for the periodicity information.
We suspect that this may be why only a single boundary term (\(R_{sb1}\) or \(R_{sb2}\)), as given by (33) and (34), leads to poorer numerical results._ **Theorem 3.6**.: _Let \(d\in\mathbb{N}\) and \(T>0\). Let \(u\in C^{4}(\Omega)\) and \(v\in C^{3}(\Omega)\) be the classical solution of the wave equations (17), and let \((u_{\theta},v_{\theta})\) denote the PINN approximation with parameter \(\theta\in\Theta\). Then the total error satisfies_ \[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+|\nabla\hat{u}(\mathbf{x},t)|^{2}+|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp(2T)\] \[\qquad=\mathcal{O}(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}+M_{int}^{-\frac{2}{d+1}}+M_{tb}^{-\frac{2}{d}}+M_{sb}^{-\frac{2}{d}}). \tag{35}\] _The constant \(C_{T}\) is defined as_ \[C_{T}= C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2})+C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2})+C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2})\] \[+C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2})+C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2})+C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}\] \[+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2})+C_{(R_{sb1}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb1}^{2})+C_{(R_{sb2}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb2}^{2}),\] _where_ \[C_{(R_{tb1}^{2})}\lesssim\|\hat{u}\|_{C^{2}}^{2},\quad C_{(R_{tb2}^{2})}\lesssim\|\hat{v}\|_{C^{2}}^{2},\quad C_{(|\nabla R_{tb1}|^{2})}\lesssim\|\hat{u}\|_{C^{3}}^{2},\quad C_{(R_{int1}^{2})}\lesssim\|\hat{u}\|_{C^{3}}^{2}+\|\hat{v}\|_{C^{2}}^{2},\] \[\quad\quad C_{(R_{int2}^{2})},C_{(|\nabla R_{int1}|^{2})}\lesssim\|\hat{u}\|_{C^{4}}^{2}+\|\hat{v}\|_{C^{3}}^{2},\quad C_{(R_{sb1}^{2})}\lesssim\|\hat{v}\|_{C^{3}}^{2},\quad C_{(R_{sb2}^{2})}\lesssim\|\hat{u}\|_{C^{4}}^{2},\] _and the bounds \(\|u_{\theta}\|_{C^{n}}\) and \(\|v_{\theta}\|_{C^{n}}\) (\(n\in\mathbb{N}\)) are given by Lemma 8.4._ Proof.: By combining Theorem 3.4 with the quadrature error formula (16), we have \[\int_{D}|R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{D}|R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}-\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2})+\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2})\leq C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2}),\] \[\int_{D}|R_{tb2}|^{2}\,\mathrm{d}\mathbf{x}=\int_{D}|R_{tb2}|^{2}\,\mathrm{d}\mathbf{x}-\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2})+\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2})\leq C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2}),\] \[\int_{D}|\nabla R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{D}|\nabla R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}-\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2})+\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2})\leq C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2}),\] \[\int_{\Omega}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t=\int_{\Omega}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t-\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2})+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2})\leq C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2}),\] \[\int_{\Omega}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t=\int_{\Omega}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t-\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2})+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2})\leq C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2}),\]
\[\int_{\Omega}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t=\int_{\Omega}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t-\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2})+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2})\leq C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2}),\] \[\int_{\Omega_{*}}|R_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t=\int_{\Omega_{*}}|R_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t-\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb1}^{2})+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb1}^{2})\leq C_{(R_{sb1}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb1}^{2}),\] \[\int_{\Omega_{*}}|R_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t=\int_{\Omega_{*}}|R_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t-\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb2}^{2})+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb2}^{2})\leq C_{(R_{sb2}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb2}^{2}).\] By the above inequalities and (27), it holds that \[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+|\nabla\hat{u}(\mathbf{x},t)|^{2}+|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp(2T),\] where \[C_{T}= C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2})+C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2})+C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2})\] \[+C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2})+C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2})+C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}\] \[+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2})+C_{(R_{sb1}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb1}^{2})+C_{(R_{sb2}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\Omega_{*}}(R_{sb2}^{2}).\] The complexities of the constants \(C_{(R_{q}^{2})}\) are given by Lemma 8.4, and we observe that for every residual \(R_{q}\), it holds that \(\|R_{q}^{2}\|_{C^{n}}\leq 2^{n}\|R_{q}\|_{C^{n}}^{2}\) (\(n\in\mathbb{N}\)) for \(R_{q}=R_{tb1}\), \(R_{tb2}\), \(\nabla R_{tb1}\), \(R_{int1}\), \(R_{int2}\), \(\nabla R_{int1}\), \(R_{sb1}\) and \(R_{sb2}\).

## 4 Physics Informed Neural Networks for Approximating the Sine-Gordon Equation

### Sine-Gordon Equation

Let \(D\subset\mathbb{R}^{d}\) be an open connected bounded set with a boundary \(\partial D\). We consider the following Sine-Gordon equation: \[u_{t}-v=0 \text{in }D\times[0,T], \tag{36a}\] \[\varepsilon^{2}v_{t}=a^{2}\Delta u-\varepsilon_{1}^{2}u-g(u)+f \text{in }D\times[0,T],\] (36b) \[u(\boldsymbol{x},0)=\psi_{1}(\boldsymbol{x}) \text{in }D,\] (36c) \[v(\boldsymbol{x},0)=\psi_{2}(\boldsymbol{x}) \text{in }D,\] (36d) \[u(\boldsymbol{x},t)|_{\partial D}=u_{d}(t) \text{in }\partial D\times[0,T], \tag{36e}\] where \(u\) and \(v\) are the field functions to be solved for, \(f\) is a source term, and \(u_{d}\), \(\psi_{1}\) and \(\psi_{2}\) denote the boundary/initial conditions. \(\varepsilon>0\), \(a>0\) and \(\varepsilon_{1}\geq 0\) are constants. \(g(u)\) is a nonlinear term.
We assume that the nonlinearity is globally Lipschitz, i.e., there exists a constant \(L\) (independent of \(v\) and \(w\)) such that \[|g(v)-g(w)|\leq L|v-w|,\qquad\forall v,\,w\in\mathbb{R}. \tag{37}\] **Remark 4.1**.: _The existence and regularity of the solution to the Sine-Gordon equation with different nonlinear terms have been the subject of several studies in the literature; see [58, 34, 47, 48, 55]._ _The book [55] provides the existence and regularity result for the following Sine-Gordon equation,_ \[u_{tt}+\alpha u_{t}-\Delta u+g(u)=f.\] _Let \(\alpha\in\mathbb{R}\), and let \(g(u)\) be a \(C^{2}\) function from \(\mathbb{R}\) to \(\mathbb{R}\) satisfying certain assumptions. If \(f\in C([0,T];L^{2}(D))\), \(\psi_{1}\in H^{1}(D)\) and \(\psi_{2}\in L^{2}(D)\), then there exists a unique solution \(u\) to this Sine-Gordon equation such that \(u\in C([0,T];H^{1}(D))\) and \(u_{t}\in C([0,T];L^{2}(D))\). Furthermore, if \(f^{\prime}\in C([0,T];L^{2}(D))\), \(\psi_{1}\in H^{2}(D)\) and \(\psi_{2}\in H^{1}(D)\), then it holds that \(u\in C([0,T];H^{2}(D))\) and \(u_{t}\in C([0,T];H^{1}(D))\)._ _Let \(g\) be a smooth function of degree 2. The following equation is studied in [48],_ \[u_{tt}-\Delta u+u+g(u,u_{t},u_{tt})=0,\] _where it is reformulated as_ \[\boldsymbol{u}_{t}=A\boldsymbol{u}+G(\boldsymbol{u}),\] _in which \(\boldsymbol{u}=\begin{pmatrix}u\\ u_{t}\end{pmatrix}\), \(A=\begin{pmatrix}0&1\\ \Delta-1&0\end{pmatrix}\) and \(G=\begin{pmatrix}0\\ -g(u,u_{t},u_{tt})\end{pmatrix}\). Set \(X=H^{k}(\mathbb{R}^{n})\bigoplus H^{k-1}(\mathbb{R}^{n})\), \(k>n+2+2a\) with \(a>1\). Given \(\boldsymbol{u}_{0}=\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}\in X\) with \(\|\boldsymbol{u}_{0}\|_{X}=\sigma\), there exist a \(T_{0}=T_{0}(\sigma)\) depending on the size of the initial data \(\sigma\) and a unique solution \(\boldsymbol{u}\in C([0,T_{0}],X)\)._ _The reference [58] provides the following result. Under certain conditions on the nonlinear term \(g(u)\), with \(f=0\), \(d\leq 5\), \(k\geq\frac{d}{2}+1\), \(\psi_{1}\in H^{k}(D)\) and \(\psi_{2}\in H^{k-1}(D)\), there exists a unique solution \(u\in C((0,\infty);H^{k}(D))\) of the nonlinear Klein-Gordon equation._ _The following result is due to [34]. Under certain conditions on the nonlinear term \(g(u)\), with \(f=0\), \(\psi_{1}\in H^{k}(D)\) and \(\psi_{2}\in H^{k-1}(D)\) with a positive constant \(k\geq 4\), there exist a positive constant \(T_{k}\) and a unique solution \(u\in C([0,T_{k}];H^{k}(D))\cap C^{1}([0,T_{k}];H^{k-1}(D))\cap C^{2}([0,T_{k}];H^{k-2}(D))\) to the nonlinear wave equations with different speeds of propagation._ A survey of the literature indicates that, while several works have touched on the regularity of the solution to the Sine-Gordon equations, none of them is comprehensive. To facilitate the subsequent analyses, we make the following assumption in light of Remark 4.1. Let \(k\geq 1\), and let \(g(u)\) and \(f\) be sufficiently smooth and bounded. Given \(\psi_{1}\in H^{r}(D)\) and \(\psi_{2}\in H^{r-1}(D)\) with \(r\geq\frac{d}{2}+k\), we assume that there exist \(T>0\) and a classical solution \((u,v)\) to the Sine-Gordon equations (36) such that \(u\in C([0,T];H^{r}(D))\) and \(v\in C([0,T];H^{r-1}(D))\). Then, it follows that \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\) based on the Sobolev embedding theorem (arguing as in the proof of Lemma 3.2).

### Physics Informed Neural Networks

Let \(\Omega=D\times[0,T]\) and \(\Omega_{*}=\partial D\times[0,T]\) be the space-time domain and its spatial boundary.
We define the following residuals for the PINN approximation, \(u_{\theta}:\Omega\to\mathbb{R}\) and \(v_{\theta}:\Omega\to\mathbb{R}\), for the Sine-Gordon equations (36): \[R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)=u_{\theta t}-v_{\theta}, \tag{38a}\] \[R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)=\varepsilon^{2}v_{\theta t}-a^{2}\Delta u_{\theta}+\varepsilon_{1}^{2}u_{\theta}+g(u_{\theta})-f,\] (38b) \[R_{tb1}[u_{\theta}](\mathbf{x})=u_{\theta}(\mathbf{x},0)-\psi_{1}(\mathbf{x}),\] (38c) \[R_{tb2}[v_{\theta}](\mathbf{x})=v_{\theta}(\mathbf{x},0)-\psi_{2}(\mathbf{x}),\] (38d) \[R_{sb}[v_{\theta}](\mathbf{x},t)=v_{\theta}(\mathbf{x},t)|_{\partial D}-u_{dt}(t), \tag{38e}\] where \(u_{dt}=\frac{\partial u_{d}}{\partial t}\). Note that for the exact solution \((u,v)\), \(R_{int1}[u,v]=R_{int2}[u,v]=R_{tb1}[u]=R_{tb2}[v]=R_{sb}[v]=0\). With PINN we minimize the following generalization error, \[\mathcal{E}_{G}(\theta)^{2} =\int_{\Omega}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[+\int_{D}|R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|R_{tb2}[v_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla R_{tb1}[u_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\left(\int_{\Omega_{*}}|R_{sb}[v_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}. \tag{39}\] Let \[\hat{u}=u_{\theta}-u,\quad\hat{v}=v_{\theta}-v,\] where \((u,v)\) denotes the exact solution. We define the total error of the PINN approximation of the Sine-Gordon equations (36) as, \[\mathcal{E}(\theta)^{2}=\int_{\Omega}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t. \tag{40}\] Then we choose the training set \(\mathcal{S}\subset\overline{D}\times[0,T]\) with \(\mathcal{S}=\mathcal{S}_{int}\cup\mathcal{S}_{sb}\cup\mathcal{S}_{tb}\), based on suitable quadrature points:

* Interior training points \(\mathcal{S}_{int}=\{z_{n}\}\) for \(1\leq n\leq N_{int}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in D\times(0,T)\).
* Spatial boundary training points \(\mathcal{S}_{sb}=\{z_{n}\}\) for \(1\leq n\leq N_{sb}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in\partial D\times(0,T)\).
* Temporal boundary training points \(\mathcal{S}_{tb}=\{\mathbf{x}_{n}\}\) for \(1\leq n\leq N_{tb}\), with each \(\mathbf{x}_{n}\in D\).
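For concreteness, here is a hedged sketch (our addition, not from the paper) of the distinctive interior residual (38b), with the illustrative choice \(g(u)=\sin(u)\) — the classical Sine-Gordon nonlinearity, which is globally Lipschitz with \(L=1\). The helper names mirror the wave-equation sketch in Section 3, and \(d=1\) is assumed:

```python
import torch

def sine_gordon_r_int2(u_net, v_net, x, t, f, eps=1.0, a=1.0, eps1=0.0):
    """Interior residual (38b): eps^2 v_t - a^2 Δu + eps1^2 u + g(u) - f,
    with the illustrative Lipschitz nonlinearity g(u) = sin(u)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    z = torch.cat([x, t], dim=1)
    u, v = u_net(z), v_net(z)
    grad = lambda out, var: torch.autograd.grad(
        out, var, torch.ones_like(out), create_graph=True)[0]
    u_xx = grad(grad(u, x), x)   # Laplacian of u for d = 1
    v_t = grad(v, t)
    return eps**2 * v_t - a**2 * u_xx + eps1**2 * u + torch.sin(u) - f(x, t)
```

Note that the boundary residual (38e) is enforced on \(v_{\theta}\) against the time derivative of the boundary data, rather than on \(u_{\theta}\) directly; this is one of the loss-form differences suggested by the error analysis.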
The integrals in (39) are approximated by a numerical quadrature rule, resulting in the training loss, \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2} =\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}\] \[+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb}(\theta,\mathcal{S}_{sb}), \tag{41}\] where \[\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2}, \tag{42a}\] \[\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|R_{int2}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (42b) \[\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\nabla R_{int1}[u_{\theta},v_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (42c) \[\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (42d) \[\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|R_{tb2}[v_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (42e) \[\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\nabla R_{tb1}[u_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (42f) \[\mathcal{E}_{T}^{sb}(\theta,\mathcal{S}_{sb})^{2} =\sum_{n=1}^{N_{sb}}\omega_{sb}^{n}|R_{sb}[v_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2}. \tag{42g}\] Here the quadrature points in space-time constitute the data sets \(\mathcal{S}_{int}=\{(\mathbf{x}_{int}^{n},t_{int}^{n})\}_{n=1}^{N_{int}}\), \(\mathcal{S}_{tb}=\{\mathbf{x}_{tb}^{n}\}_{n=1}^{N_{tb}}\) and \(\mathcal{S}_{sb}=\{(\mathbf{x}_{sb}^{n},t_{sb}^{n})\}_{n=1}^{N_{sb}}\), and \(\omega_{\star}^{n}\) are the quadrature weights with \(\star\) being \(int\), \(tb\) or \(sb\).

### Error Analysis

By subtracting the Sine-Gordon equations (36) from the residual equations (38), we get, \[R_{int1}=\hat{u}_{t}-\hat{v}, \tag{43a}\] \[R_{int2}=\varepsilon^{2}\hat{v}_{t}-a^{2}\Delta\hat{u}+\varepsilon_{1}^{2}\hat{u}+g(u_{\theta})-g(u),\] (43b) \[R_{tb1}=\hat{u}(\mathbf{x},0),\] (43c) \[R_{tb2}=\hat{v}(\mathbf{x},0),\] (43d) \[R_{sb}=\hat{v}(\mathbf{x},t)|_{\partial D}. \tag{43e}\] The results on the PINN approximations to the Sine-Gordon equations are summarized in the following theorems. **Theorem 4.2**.: _Let \(d\), \(r\), \(k\in\mathbb{N}\) with \(k\geq 3\). Assume that \(g(u)\) is Lipschitz continuous, \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\). Then for every integer \(N>5\), there exist \(\tanh\) neural networks \(u_{\theta}\) and \(v_{\theta}\), each with two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that_ \[\|R_{int1}\|_{L^{2}(\Omega)},\|R_{tb1}\|_{L^{2}(D)}\lesssim\ln N\,N^{-k+1}, \tag{44a}\] \[\|R_{int2}\|_{L^{2}(\Omega)},\|\nabla R_{int1}\|_{L^{2}(\Omega)},\|\nabla R_{tb1}\|_{L^{2}(D)}\lesssim\ln^{2}N\,N^{-k+2},\] (44b) \[\|R_{tb2}\|_{L^{2}(D)},\|R_{sb}\|_{L^{2}(\partial D\times[0,t])}\lesssim\ln N\,N^{-k+2}. \tag{44c}\] The proof of this theorem is provided in Appendix 8.3. Theorem 4.2 implies that the PINN residuals in (38) can be made arbitrarily small by choosing a sufficiently large \(N\).
Therefore, the generalization error \(\mathcal{E}_{G}(\theta)^{2}\) can be made arbitrarily small. We next show that the PINN total approximation error \(\mathcal{E}(\theta)^{2}\) can be controlled by the generalization error \(\mathcal{E}_{G}(\theta)^{2}\) (Theorem 4.3 below), and by the training error \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) (Theorem 4.4 below). The proofs for Theorem 4.3 and Theorem 4.4 are provided in the Appendix 8.3.

**Theorem 4.3**.: _Let \(d\in\mathbb{N}\), \(u\in C^{1}(\Omega)\) and \(v\in C^{0}(\Omega)\) be the classical solution of the Sine-Gordon equation (36). Let \((u_{\theta},v_{\theta})\) denote the PINN approximation with parameter \(\theta\). Then the following relation holds,_ \[\mathcal{E}(\theta)^{2}=\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{G}T\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right), \tag{45}\] _where_ \[C_{G} =\int_{D}(|R_{tb1}|^{2}+a^{2}|\nabla R_{tb1}|^{2}+\varepsilon^{2}|R_{tb2}|^{2})\mathrm{d}\mathbf{x}+\int_{0}^{T}\int_{D}(|R_{int1}|^{2}+|R_{int2}|^{2}+a^{2}|\nabla R_{int1}|^{2})\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[\quad+2C_{\partial D}|T|^{\frac{1}{2}}\left(\int_{0}^{T}\int_{\partial D}|R_{sb}|^{2}\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}},\] _and \(C_{\partial D}=a^{2}|\partial D|^{\frac{1}{2}}(\|u\|_{C^{1}(\partial D\times[0,T])}+\|u_{\theta}\|_{C^{1}(\partial D\times[0,T])})\)._

**Theorem 4.4**.: _Let \(d\in\mathbb{N}\) and \(T>0\), and let \(u\in C^{4}(\Omega)\) and \(v\in C^{3}(\Omega)\) be the classical solution to the Sine-Gordon equation (36). Let \((u_{\theta},v_{\theta})\) denote the PINN approximation with parameter \(\theta\in\Theta\). Then the following relation holds,_ \[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right)\] \[\qquad=\mathcal{O}(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}+M_{int}^{-\frac{2}{d+1}}+M_{tb}^{-\frac{2}{d}}+M_{sb}^{-\frac{1}{d}}), \tag{46}\] _where the constant \(C_{T}\) is defined by_ \[C_{T}= C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb1}^{2})+\varepsilon^{2}\left(C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(R_{tb2}^{2})\right)\] \[+a^{2}\left(C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\nabla R_{tb1}|^{2})\right)+C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int1}^{2})\] \[+C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(R_{int2}^{2})+a^{2}\left(C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla R_{int1}|^{2})\right),\] \[+2C_{\partial D}|T|^{\frac{1}{2}}\left(C_{(R_{sb}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb}}^{\partial D\times[0,T]}(R_{sb}^{2})\right)^{\frac{1}{2}}.\] It follows from Theorem 4.4 that the PINN approximation error \(\mathcal{E}(\theta)^{2}\) can be made arbitrarily small, provided that the training error \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) is sufficiently small and the sample set is sufficiently large.

## 5 Physics Informed Neural Networks for Approximating Linear Elastodynamic Equation

### Linear Elastodynamic Equation

Consider an elastic body occupying an open, bounded convex polyhedral domain \(D\subset\mathbb{R}^{d}\).
The boundary \(\partial D=\Gamma_{D}\cup\Gamma_{N}\), with the outward unit normal vector \(\mathbf{n}\), is assumed to be composed of two disjoint portions \(\Gamma_{D}\neq\emptyset\) and \(\Gamma_{N}\), with \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\). Given a suitable external load \(\mathbf{f}\in L^{2}((0,T];\mathbf{L}^{2}(D))\), and suitable initial/boundary data \(\mathbf{g}\in C^{1}((0,T];\mathbf{H}^{\frac{1}{2}}(\Gamma_{N}))\), \(\mathbf{\psi}_{1}\in\mathbf{H}^{\frac{1}{2}}_{0,\Gamma_{D}}(D)\) and \(\mathbf{\psi}_{2}\in\mathbf{L}^{2}(D)\), we consider the linear elastodynamic equations, \[\mathbf{u}_{t}-\mathbf{v}=0 \text{in }D\times[0,T], \tag{47a}\] \[\rho\mathbf{v}_{t}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}))-\lambda\nabla(\nabla\cdot\mathbf{u})=\mathbf{f} \text{in }D\times[0,T],\] (47b) \[\mathbf{u}=\mathbf{u}_{d} \text{on }\Gamma_{D}\times[0,T],\] (47c) \[2\mu\underline{\mathbf{\varepsilon}}(\mathbf{u})\mathbf{n}+\lambda(\nabla\cdot\mathbf{u})\mathbf{n}=\mathbf{g} \text{on }\Gamma_{N}\times[0,T],\] (47d) \[\mathbf{u}=\mathbf{\psi}_{1} \text{in }D\times\{0\},\] (47e) \[\mathbf{v}=\mathbf{\psi}_{2} \text{in }D\times\{0\}. \tag{47f}\] In the above system, \(\mathbf{u}=(u_{1},u_{2},\cdots,u_{d})\) and \(\mathbf{v}=(v_{1},v_{2},\cdots,v_{d})\) denote the displacement and the velocity, respectively, and \([0,T]\) (with \(T>0\)) denotes the time domain. \(\underline{\mathbf{\varepsilon}}(\mathbf{u})\) is the strain tensor, \(\underline{\mathbf{\varepsilon}}(\mathbf{u})=\frac{1}{2}(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})\). The constants \(\lambda\) and \(\mu\) are the first and the second Lamé parameters, respectively. Combining (47a) and (47b), we can recover the classical linear elastodynamic equation: \[\rho\mathbf{u}_{tt}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}))-\lambda\nabla(\nabla\cdot\mathbf{u})=\mathbf{f}\qquad\text{in }D\times[0,T]. \tag{48}\] The well-posedness of this equation is established in [28].

**Lemma 5.1** ([28, 62]).: _Let \(\mathbf{\psi}_{1}\in H^{r}(D)\), \(\mathbf{\psi}_{2}\in H^{r-1}(D)\) and \(\mathbf{f}\in H^{r-1}(D\times[0,T])\) with \(r\geq 1\). Then there exists a unique solution \(\mathbf{u}\) to the classical linear elastodynamic equation (48) such that \(\mathbf{u}(t=0)=\mathbf{\psi}_{1}\), \(\mathbf{u}_{t}(t=0)=\mathbf{\psi}_{2}\) and \(\mathbf{u}\in C^{l}([0,T];H^{r-l}(D))\) with \(0\leq l\leq r\)._

**Lemma 5.2**.: _Let \(k\in\mathbb{N}\), \(\mathbf{\psi}_{1}\in H^{r}(D)\), \(\mathbf{\psi}_{2}\in H^{r-1}(D)\) and \(\mathbf{f}\in H^{r-1}(D\times[0,T])\) with \(r>\frac{d}{2}+k\). Then there exists \(T>0\) and a classical solution \((\mathbf{u},\mathbf{v})\) to the elastodynamic equations (47) such that \(\mathbf{u}(t=0)=\mathbf{\psi}_{1}\), \(\mathbf{u}_{t}(t=0)=\mathbf{\psi}_{2}\), \(\mathbf{u}\in C^{k}(D\times[0,T])\) and \(\mathbf{v}\in C^{k-1}(D\times[0,T])\)._

Proof.: As \(r>\frac{d}{2}+k\), \(H^{r-k}(D)\) is a Banach algebra. By Lemma 5.1, there exists \(T>0\) and a solution \((\mathbf{u},\mathbf{v})\) to the linear elastodynamic equations such that \(\mathbf{u}(t=0)=\mathbf{\psi}_{1}\), \(\mathbf{v}(t=0)=\mathbf{\psi}_{2}\), \(\mathbf{u}\in C^{l}([0,T];H^{r-l}(D))\) with \(0\leq l\leq r\) and \(\mathbf{v}\in C^{l}([0,T];H^{r-1-l}(D))\) with \(0\leq l\leq r-1\). In particular, \(\mathbf{u}\in\cap_{l=0}^{k}C^{l}([0,T];H^{r-l}(D))\) and \(\mathbf{v}\in\cap_{l=0}^{k-1}C^{l}([0,T];H^{r-l-1}(D))\).
By applying the Sobolev embedding theorem and \(r>\frac{d}{2}+k\), we obtain \(H^{r-l}(D)\subset C^{k-l}(D)\) for \(0\leq l\leq k\) and \(H^{r-l-1}(D)\subset C^{k-l-1}(D)\) for \(0\leq l\leq k-1\). Therefore, \(\mathbf{u}\in C^{k}(D\times[0,T])\) and \(\mathbf{v}\in C^{k-1}(D\times[0,T])\).

### Physics Informed Neural Networks

We now consider the PINN approximation of the linear elastodynamic equations (47). Let \(\Omega=D\times[0,T]\), \(\Omega_{D}=\Gamma_{D}\times[0,T]\) and \(\Omega_{N}=\Gamma_{N}\times[0,T]\) denote the space-time domains. Define the following residuals for the PINN approximation \(\mathbf{u}_{\theta}:\Omega\to\mathbb{R}^{d}\) and \(\mathbf{v}_{\theta}:\Omega\to\mathbb{R}^{d}\) for the elastodynamic equations (47): \[\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t)=\mathbf{u}_{\theta t}-\mathbf{v}_{\theta}, \tag{49a}\] \[\mathbf{R}_{int2}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t)=\rho\mathbf{v}_{\theta t}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}))-\lambda\nabla(\nabla\cdot\mathbf{u}_{\theta})-\mathbf{f},\] (49b) \[\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x})=\mathbf{u}_{\theta}(\mathbf{x},0)-\mathbf{\psi}_{1}(\mathbf{x}),\] (49c) \[\mathbf{R}_{tb2}[\mathbf{v}_{\theta}](\mathbf{x})=\mathbf{v}_{\theta}(\mathbf{x},0)-\mathbf{\psi}_{2}(\mathbf{x}),\] (49d) \[\mathbf{R}_{sb1}[\mathbf{v}_{\theta}](\mathbf{x},t)=\mathbf{v}_{\theta}|_{\Gamma_{D}}-\mathbf{u}_{dt},\] (49e) \[\mathbf{R}_{sb2}[\mathbf{u}_{\theta}](\mathbf{x},t)=(2\mu\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta})\mathbf{n}+\lambda(\nabla\cdot\mathbf{u}_{\theta})\mathbf{n})|_{\Gamma_{N}}-\mathbf{g}. \tag{49f}\] Note that for the exact solution \((\mathbf{u},\mathbf{v})\), we have \(\mathbf{R}_{int1}[\mathbf{u},\mathbf{v}]=\mathbf{R}_{int2}[\mathbf{u},\mathbf{v}]=\mathbf{R}_{tb1}[\mathbf{u}]=\mathbf{R}_{tb2}[\mathbf{v}]=\mathbf{R}_{sb1}[\mathbf{v}]=\mathbf{R}_{sb2}[\mathbf{u}]=0\). With PINN we minimize the following generalization error, \[\mathcal{E}_{G}(\theta)^{2} =\int_{\Omega}|\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\mathbf{R}_{int2}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{\Omega}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t))|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[+\int_{\Omega}|\nabla\cdot(\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x},t))|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+\int_{D}|\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\mathbf{R}_{tb2}[\mathbf{v}_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\int_{D}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x}))|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\nabla\cdot\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\] \[+\left(\int_{\Omega_{D}}|\mathbf{R}_{sb1}[\mathbf{v}_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}+\left(\int_{\Omega_{N}}|\mathbf{R}_{sb2}[\mathbf{u}_{\theta}](\mathbf{x},t)|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}. \tag{50}\] Let \[\hat{\mathbf{u}}=\mathbf{u}_{\theta}-\mathbf{u},\quad\hat{\mathbf{v}}=\mathbf{v}_{\theta}-\mathbf{v}\] denote the difference between the solution to the elastodynamic equations (47) and the PINN approximation with parameter \(\theta\).
We define the total error of the PINN approximation as, \[\mathcal{E}(\theta)^{2}=\int_{\Omega}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t. \tag{51}\] We choose the training set \(\mathcal{S}\subset\overline{D}\times[0,T]\) based on suitable quadrature points. The full training set is defined by \(\mathcal{S}=\mathcal{S}_{int}\cup\mathcal{S}_{sb}\cup\mathcal{S}_{tb}\), and \(\mathcal{S}_{sb}=\mathcal{S}_{sb1}\cup\mathcal{S}_{sb2}\):

* Interior training points \(\mathcal{S}_{int}=\{z_{n}\}\) for \(1\leq n\leq N_{int}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in D\times(0,T)\).
* Spatial boundary training points \(\mathcal{S}_{sb1}=\{z_{n}\}\) for \(1\leq n\leq N_{sb1}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in\Gamma_{D}\times(0,T)\), and \(\mathcal{S}_{sb2}=\{z_{n}\}\) for \(1\leq n\leq N_{sb2}\), with each \(z_{n}=(\mathbf{x},t)_{n}\in\Gamma_{N}\times(0,T)\).
* Temporal boundary training points \(\mathcal{S}_{tb}=\{\mathbf{x}_{n}\}\) for \(1\leq n\leq N_{tb}\) with each \(\mathbf{x}_{n}\in D\).

Then, the integrals in (50) can be approximated by a suitable numerical quadrature, resulting in the following training loss, \[\mathcal{E}_{T}(\theta,\mathcal{S})^{2} =\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{int4}(\theta,\mathcal{S}_{int})^{2}+\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2}\] \[\quad+\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{tb4}(\theta,\mathcal{S}_{tb})^{2}+\mathcal{E}_{T}^{sb1}(\theta,\mathcal{S}_{sb1})+\mathcal{E}_{T}^{sb2}(\theta,\mathcal{S}_{sb2}), \tag{52}\] where, \[\mathcal{E}_{T}^{int1}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2}, \tag{53a}\] \[\mathcal{E}_{T}^{int2}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\mathbf{R}_{int2}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (53b) \[\mathcal{E}_{T}^{int3}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n}))|^{2},\] (53c) \[\mathcal{E}_{T}^{int4}(\theta,\mathcal{S}_{int})^{2} =\sum_{n=1}^{N_{int}}\omega_{int}^{n}|\nabla\cdot\mathbf{R}_{int1}[\mathbf{u}_{\theta},\mathbf{v}_{\theta}](\mathbf{x}_{int}^{n},t_{int}^{n})|^{2},\] (53d) \[\mathcal{E}_{T}^{tb1}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (53e) \[\mathcal{E}_{T}^{tb2}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\mathbf{R}_{tb2}[\mathbf{v}_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (53f) \[\mathcal{E}_{T}^{tb3}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x}_{tb}^{n}))|^{2},\] (53g) \[\mathcal{E}_{T}^{tb4}(\theta,\mathcal{S}_{tb})^{2} =\sum_{n=1}^{N_{tb}}\omega_{tb}^{n}|\nabla\cdot\mathbf{R}_{tb1}[\mathbf{u}_{\theta}](\mathbf{x}_{tb}^{n})|^{2},\] (53h)
\[\mathcal{E}_{T}^{sb1}(\theta,\mathcal{S}_{sb1})^{2} =\sum_{n=1}^{N_{sb1}}\omega_{sb1}^{n}|\mathbf{R}_{sb1}[\mathbf{v}_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2},\] (53i) \[\mathcal{E}_{T}^{sb2}(\theta,\mathcal{S}_{sb2})^{2} =\sum_{n=1}^{N_{sb2}}\omega_{sb2}^{n}|\mathbf{R}_{sb2}[\mathbf{u}_{\theta}](\mathbf{x}_{sb}^{n},t_{sb}^{n})|^{2}. \tag{53j}\] Here the quadrature points in space-time constitute the data sets \(\mathcal{S}_{int}=\{(\mathbf{x}_{int}^{n},t_{int}^{n})\}_{n=1}^{N_{int}}\), \(\mathcal{S}_{tb}=\{\mathbf{x}_{tb}^{n}\}_{n=1}^{N_{tb}}\), \(\mathcal{S}_{sb1}=\{(\mathbf{x}_{sb1}^{n},t_{sb1}^{n})\}_{n=1}^{N_{sb1}}\) and \(\mathcal{S}_{sb2}=\{(\mathbf{x}_{sb2}^{n},t_{sb2}^{n})\}_{n=1}^{N_{sb2}}\). \(\omega_{\star}^{n}\) denote the suitable quadrature weights with \(\star\) being \(int\), \(tb\), \(sb1\) and \(sb2\).

### Error Analysis

Subtracting the elastodynamic equations (47) from the residual equations (49), we obtain \[\mathbf{R}_{int1}=\hat{\mathbf{u}}_{t}-\hat{\mathbf{v}}, \tag{54a}\] \[\mathbf{R}_{int2}=\rho\hat{\mathbf{v}}_{t}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}))-\lambda\nabla(\nabla\cdot\hat{\mathbf{u}}),\] (54b) \[\mathbf{R}_{tb1}=\hat{\mathbf{u}}|_{t=0},\] (54c) \[\mathbf{R}_{tb2}=\hat{\mathbf{v}}|_{t=0},\] (54d) \[\mathbf{R}_{sb1}=\hat{\mathbf{v}}|_{\Gamma_{D}},\] (54e) \[\mathbf{R}_{sb2}=(2\mu\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})\mathbf{n}+\lambda(\nabla\cdot\hat{\mathbf{u}})\mathbf{n})|_{\Gamma_{N}}. \tag{54f}\] The PINN approximation results are summarized in the following three theorems. The proofs of these theorems are provided in the Appendix 8.4.

**Theorem 5.3**.: _Let \(d\), \(r\), \(k\in\mathbb{N}\) with \(k\geq 3\). Let \(\mathbf{\psi}_{1}\in H^{r}(D)\), \(\mathbf{\psi}_{2}\in H^{r-1}(D)\) and \(\mathbf{f}\in H^{r-1}(D\times[0,T])\) with \(r>\frac{d}{2}+k\). For every integer \(N>5\), there exist \(\tanh\) neural networks \((\mathbf{u}_{j})_{\theta}\) and \((\mathbf{v}_{j})_{\theta}\), with \(j=1,2,\cdots,d\), each with two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that_ \[\|\mathbf{R}_{int1}\|_{L^{2}(\Omega)},\|\mathbf{R}_{tb1}\|_{L^{2}(D)}\lesssim\ln N\,N^{-k+1}, \tag{55a}\] \[\|\mathbf{R}_{int2}\|_{L^{2}(\Omega)},\|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})\|_{L^{2}(\Omega)},\|\nabla\cdot\mathbf{R}_{int1}\|_{L^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\] (55b) \[\|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})\|_{L^{2}(D)},\|\nabla\cdot\mathbf{R}_{tb1}\|_{L^{2}(D)},\|\mathbf{R}_{sb2}\|_{L^{2}(\Gamma_{N}\times[0,T])}\lesssim\ln^{2}N\,N^{-k+2},\] (55c) \[\|\mathbf{R}_{tb2}\|_{L^{2}(D)},\|\mathbf{R}_{sb1}\|_{L^{2}(\Gamma_{D}\times[0,T])}\lesssim\ln N\,N^{-k+2}. \tag{55d}\] It follows from Theorem 5.3 that, by choosing a sufficiently large \(N\), one can make the PINN residuals in (49), and thus the generalization error \(\mathcal{E}_{G}(\theta)^{2}\) in (50), arbitrarily small.

**Theorem 5.4**.: _Let \(d\in\mathbb{N}\), \(\mathbf{u}\in C^{1}(\Omega)\) and \(\mathbf{v}\in C(\Omega)\) be the classical solution to the linear elastodynamic equation (47). Let \((\mathbf{u}_{\theta},\mathbf{v}_{\theta})\) denote the PINN approximation with the parameter \(\theta\).
Then the following relation holds,_ \[\int_{0}^{T}\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{ \mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u} }(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\, \mathrm{d}t\leq C_{G}T\exp\left((2+2\mu+\lambda)T\right),\] _where_ \[C_{G} =\int_{D}|\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}2\mu| \underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D} \lambda|\nabla\cdot\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\rho\int_{D}|\mathbf{R}_{ tb2}|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad+\int_{0}^{T}\int_{D}\left(|\mathbf{R}_{int1}|^{2}+2\mu|\underline {\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}+\lambda|\nabla\cdot\mathbf{R}_{int1}|^{2}+| \mathbf{R}_{int2}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\] \[\quad+2|T|^{\frac{1}{2}}C_{\Gamma_{D}}\left(\int_{0}^{T}\int_{ \Gamma_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1 }{2}}+2|T|^{\frac{1}{2}}C_{\Gamma_{N}}\left(\int_{0}^{T}\int_{\Gamma_{N}}|\mathbf{R }_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}},\] _with \(C_{\Gamma_{D}}=(2\mu+\lambda)|\Gamma_{D}|^{\frac{1}{2}}\|\mathbf{u}\|_{C^{1}( \Gamma_{D}\times[0,T])}+(2\mu+\lambda)|\Gamma_{D}|^{\frac{1}{2}}\|\mathbf{u}_{ \theta}\|_{C^{1}(\Gamma_{D}\times[0,T])}\) and \(C_{\Gamma_{N}}=|\Gamma_{N}|^{\frac{1}{2}}(\|\mathbf{v}\|_{C(\Gamma_{N}\times[0,T ])}+||\mathbf{v}_{\theta}||_{C(\Gamma_{N}\times[0,T])})\)._ Theorem 5.4 shows that the total error of the PINN approximation \(\mathcal{E}(\theta)^{2}\) can be controlled by the generalization error \(\mathcal{E}_{G}(\theta)^{2}\). **Theorem 5.5**.: _Let \(d\in\mathbb{N}\), \(\mathbf{u}\in C^{4}(\Omega)\) and \(\mathbf{v}\in C^{3}(\Omega)\) be the classical solution to the linear elastodynamic equation (47). Let \((\mathbf{u}_{\theta},\mathbf{v}_{\theta})\) denote the PINN approximation with the parameter \(\theta\). 
Then the following relation holds,_ \[\int_{0}^{T}\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp\left((2+2\mu+\lambda)T\right)\] \[\qquad=\mathcal{O}(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}+M_{int}^{-\frac{2}{d+1}}+M_{tb}^{-\frac{2}{d}}+M_{sb}^{-\frac{1}{d}}), \tag{56}\] _where_ \[C_{T}= C_{(\mathbf{R}_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(\mathbf{R}_{tb1}^{2})+\rho\left(C_{(\mathbf{R}_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(\mathbf{R}_{tb2}^{2})\right)+2\mu\left(C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2})\right)\] \[+\lambda\left(C_{(|\nabla\cdot\mathbf{R}_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}_{M_{tb}}^{D}(|\nabla\cdot\mathbf{R}_{tb1}|^{2})\right)+C_{(\mathbf{R}_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(\mathbf{R}_{int1}^{2})\] \[+C_{(\mathbf{R}_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(\mathbf{R}_{int2}^{2})+2\mu\left(C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2})\right)\] \[+\lambda\left(C_{(|\nabla\cdot\mathbf{R}_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}_{M_{int}}^{\Omega}(|\nabla\cdot\mathbf{R}_{int1}|^{2})\right)+2|T|^{\frac{1}{2}}C_{\Gamma_{D}}\left(C_{(\mathbf{R}_{sb1}^{2})}M_{sb1}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb1}}^{\Omega_{D}}(\mathbf{R}_{sb1}^{2})\right)^{\frac{1}{2}}\] \[+2|T|^{\frac{1}{2}}C_{\Gamma_{N}}\left(C_{(\mathbf{R}_{sb2}^{2})}M_{sb2}^{-\frac{2}{d}}+\mathcal{Q}_{M_{sb2}}^{\Omega_{N}}(\mathbf{R}_{sb2}^{2})\right)^{\frac{1}{2}}.\] Theorem 5.5 shows that the PINN approximation error \(\mathcal{E}(\theta)^{2}\) can be controlled by the training error \(\mathcal{E}_{T}(\theta,\mathcal{S})^{2}\) with a large enough sample set \(\mathcal{S}\).

## 6 Numerical Examples

The theoretical analyses from Sections 3 to 5 suggest several forms for the PINN loss function for the wave, the Sine-Gordon, and the linear elastodynamic equations. These forms contain certain non-standard terms, such as the square root of the residuals or the gradient terms on some boundaries, which would generally be absent from the canonical PINN formulation of the loss function. The presence of such non-standard terms is crucial to bounding the PINN approximation errors, as shown in the error analyses. These non-standard forms of the loss function lead to a variant PINN algorithm. In this section we illustrate the performance of the variant PINN algorithm suggested by the theoretical analysis, as well as the more standard PINN algorithm, using several numerical examples in one spatial dimension (1D) plus time for the wave equation and the Sine-Gordon equation, and in two spatial dimensions (2D) plus time for the linear elastodynamic equation.

The following settings are common to all the numerical simulations in this section. Let \((\mathbf{x},t)\in D\times[0,T]\) denote the spatial and temporal coordinates in the spatial-temporal domain, where \(\mathbf{x}=x\) and \(\mathbf{x}=(x,y)\) for one and two spatial dimensions, respectively.
For the wave equation and the Sine-Gordon equation, the neural networks contain two nodes in the input layer (representing \(x\) and \(t\)), two hidden layers with the number of nodes to be specified later, and two nodes in the output layer (representing the solution \(u\) and its time derivative \(v=\frac{\partial u}{\partial t}\)). For the linear elastodynamic equation, three input nodes and four output nodes are employed in the neural network, as will be explained in more detail later. We employ the tanh (hyperbolic tangent) activation function for all the hidden nodes, and no activation function is applied to the output nodes (i.e. they are linear). For training the neural networks, we employ \(N\) collocation points within the spatial-temporal domain drawn from a uniform random distribution, and also \(N\) uniform random points on each spatial boundary and on the initial boundary. In the simulations the value of \(N\) is varied systematically among 1000, 1500, 2000, 2500 and 3000. After the neural networks are trained, for the wave equation and the Sine-Gordon equation, we compare the PINN solution and the exact solution on a set of \(N_{ev}=3000\times 3000\) uniform spatial-temporal grid points (evaluation points) \((x,t)_{n}\in D\times[0,T]\) (\(n=1,\cdots,N_{ev}\)) that covers the problem domain and the boundaries. For the elastodynamic equation, we compare the PINN solution and the exact solution at different time instants, and at each time instant the corresponding solutions are evaluated at a uniform set of \(N_{ev}=1500\times 1500\) grid points in the spatial domain, \(\mathbf{x}_{n}=(x,y)_{n}\in D\) (\(n=1,\cdots,N_{ev}\)). The PINN errors reported below are computed as follows. Let \(z_{n}=(\mathbf{x},t)_{n}\) (\((\mathbf{x},t)_{n}\in D\times[0,T],n=1,\cdots,N_{ev}\)) denote the set of uniform grid points, where \(N_{ev}\) denotes the number of evaluation points. The errors of PINN are defined by, \[l_{2}\text{-error}=\frac{\sqrt{\sum_{n=1}^{N_{ev}}|u(z_{n})-u_{\theta}(z_{n})|^{2}}}{\sqrt{\sum_{n=1}^{N_{ev}}u(z_{n})^{2}}}=\frac{\sqrt{\left(\sum_{n=1}^{N_{ev}}|u(z_{n})-u_{\theta}(z_{n})|^{2}\right)/N_{ev}}}{\sqrt{\left(\sum_{n=1}^{N_{ev}}u(z_{n})^{2}\right)/N_{ev}}}, \tag{57a}\] \[l_{\infty}\text{-error}=\frac{\max\{|u(z_{n})-u_{\theta}(z_{n})|\}_{n=1}^{N_{ev}}}{\sqrt{\left(\sum_{n=1}^{N_{ev}}u(z_{n})^{2}\right)/N_{ev}}}, \tag{57b}\] where \(u_{\theta}\) denotes the PINN solution and \(u\) denotes the exact solution. Our implementation of the PINN algorithm is based on the PyTorch library (pytorch.org). In all the following numerical examples, we combine the Adam [32] optimizer and the L-BFGS [42] optimizer (in batch mode) to train the neural network. We first employ the Adam optimizer to train the network for 100 epochs/iterations, and then employ the L-BFGS optimizer to continue the network training for another 30000 iterations. We employ the default parameter values in Adam, with the learning rate 0.001, \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\). The initial learning rate 1.0 is adopted in the L-BFGS optimizer.
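This two-stage training protocol and the error metrics (57) can be arranged as in the following sketch. It is a minimal illustration, assuming a network `net` and a closure `loss_fn` that assembles whichever training loss is being minimized; the function names and loop structure are our own illustrative choices, not the exact implementation used here.

```python
import torch

def train(net, loss_fn, adam_epochs=100, lbfgs_iters=30000):
    # Stage 1: Adam with learning rate 0.001, beta1 = 0.9, beta2 = 0.99.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.99))
    for _ in range(adam_epochs):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()
    # Stage 2: L-BFGS with initial learning rate 1.0.
    opt = torch.optim.LBFGS(net.parameters(), lr=1.0, max_iter=lbfgs_iters)
    def closure():
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    opt.step(closure)
    return net

def relative_errors(u_pred, u_exact):
    # Relative l2 and l-infinity errors as defined in (57a)-(57b).
    denom = torch.sqrt(torch.mean(u_exact**2))
    l2 = torch.sqrt(torch.mean((u_pred - u_exact)**2)) / denom
    linf = torch.max(torch.abs(u_pred - u_exact)) / denom
    return l2.item(), linf.item()
```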
### Wave Equation

We next test the PINN algorithm for solving the wave equation (17) in one spatial dimension (plus time), under a configuration in accordance with that of [20]. Consider the spatial-temporal domain, \((x,t)\in D\times[0,T]=[0,5]\times[0,2]\), and the initial-boundary value problem with the wave equation on this domain, \[\frac{\partial^{2}u}{\partial t^{2}}-c^{2}\frac{\partial^{2}u}{\partial x^{2}}=0, \tag{58a}\] \[u(0,t)=u(5,t),\qquad\frac{\partial u}{\partial x}(0,t)=\frac{\partial u}{\partial x}(5,t),\] (58b) \[u(x,0)=2\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}(x-x_{0})\right),\qquad\frac{\partial u}{\partial t}(x,0)=0, \tag{58c}\] where \(u(x,t)\) is the wave field to be solved for, \(c\) is the wave speed, \(x_{0}\) is the initial peak location of the wave, \(\delta_{0}\) is a constant that controls the width of the wave profile, and the periodic boundary conditions are imposed on \(x=0\) and \(5\). In the simulations, we employ \(c=2\), \(\delta_{0}=2\), and \(x_{0}=3\). Then the above problem has the solution, \[\begin{cases}u(x,t)=\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}\left(-2.5+\xi\right)\right)+\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}\left(-2.5+\eta\right)\right),\\ \xi=\operatorname{mod}\left(x-x_{0}+ct+2.5,5\right),\quad\eta=\operatorname{mod}\left(x-x_{0}-ct+2.5,5\right),\end{cases}\] where \(\operatorname{mod}\) refers to the modulo operation. The two terms in \(u(x,t)\) represent the leftward- and rightward-traveling waves, respectively. We reformulate the problem (58) into the following system, \[u_{t}-v=0,\qquad v_{t}-c^{2}u_{xx}=0, \tag{59a}\] \[u(0,t)=u(5,t),\qquad u_{x}(0,t)=u_{x}(5,t),\] (59b) \[u(x,0)=2\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}(x-x_{0})\right),\qquad v(x,0)=0, \tag{59c}\] where \(v(x,t)\) is an auxiliary field given by the first equation in (59a). To solve the system (59) with PINN, we employ 90 and 60 neurons in the first and the second hidden layers of the neural network, respectively. We employ the following loss function in PINN in light of (20), \[\text{Loss}= \frac{W_{1}}{N}\sum_{n=1}^{N}\left[u_{\theta t}(x_{int}^{n},t_{int}^{n})-v_{\theta}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[v_{\theta t}(x_{int}^{n},t_{int}^{n})-c^{2}u_{\theta xx}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[u_{\theta tx}(x_{int}^{n},t_{int}^{n})-v_{\theta x}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[u_{\theta}(x_{tb}^{n},0)-2\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}\left(x_{tb}^{n}-x_{0}\right)\right)\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[v_{\theta}(x_{tb}^{n},0)\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[u_{\theta x}(x_{tb}^{n},0)+\frac{18\sinh((3x_{tb}^{n}-3x_{0})/\delta_{0})}{\delta_{0}\cosh^{4}((3x_{tb}^{n}-3x_{0})/\delta_{0})}\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left[v_{\theta}(0,t_{sb}^{n})-v_{\theta}(5,t_{sb}^{n})\right]^{2}+\frac{W_{8}}{N}\sum_{n=1}^{N}\left[u_{\theta x}(0,t_{sb}^{n})-u_{\theta x}(5,t_{sb}^{n})\right]^{2}. \tag{60}\] Note that in the simulations we have employed the same number of collocation points (\(N\)) within the domain and on each of the domain boundaries.

Figure 1: Wave equation: Distributions of the true solutions, the PINN solutions and the PINN point-wise absolute errors for \(u\) and \(v\) in the spatial-temporal domain. \(N=2000\) training points within the domain and on each of the domain boundaries.

The above loss function differs slightly from the one in the error analysis (20), in several aspects.
First, we have added a set of penalty coefficients \(W_{n}>0\) (\(1\leq n\leq 8\)) for the different loss terms in the numerical simulations. Second, the collocation points used in the simulations (e.g. \(x_{int}^{n}\), \(t_{int}^{n}\), \(x_{sb}^{n}\), \(t_{sb}^{n}\), \(x_{tb}^{n}\)) are generated randomly within the domain or on the domain boundaries from a uniform distribution. In addition, the averaging used here does not exactly correspond to the numerical quadrature rule (mid-point rule) used in the theoretical analysis. We have also considered another form (given below) for the loss function, as suggested by an alternate analysis as discussed in Remark 3.5 (see equation (33)), \[\text{Loss}= \frac{W_{1}}{N}\sum_{n=1}^{N}\left[u_{\theta t}(x_{int}^{n},t_{int}^{n})-v_{\theta}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[v_{\theta t}(x_{int}^{n},t_{int}^{n})-c^{2}u_{\theta xx}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[u_{\theta tx}(x_{int}^{n},t_{int}^{n})-v_{\theta x}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[u_{\theta}(x_{tb}^{n},0)-2\operatorname{sech}^{3}\left(\frac{3}{\delta_{0}}(x_{tb}^{n}-x_{0})\right)\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[v_{\theta}(x_{tb}^{n},0)\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[u_{\theta x}(x_{tb}^{n},0)+\frac{18\sinh((3x_{tb}^{n}-3x_{0})/\delta_{0})}{\delta_{0}\cosh^{4}((3x_{tb}^{n}-3x_{0})/\delta_{0})}\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left|v_{\theta}(0,t_{sb}^{n})-v_{\theta}(5,t_{sb}^{n})\right|+\frac{W_{8}}{N}\sum_{n=1}^{N}\left|u_{\theta x}(0,t_{sb}^{n})-u_{\theta x}(5,t_{sb}^{n})\right|. \tag{61}\] The difference between this form and the form (60) lies in the last two terms, with the terms here not squared. The loss function (60) will be referred to as the loss form #1 in subsequent discussions, and (61) will be referred to as the loss form #2. The PINN schemes that employ these two different loss forms will be referred to as PINN-F1 and PINN-F2, respectively.

Figure 1 shows distributions of the exact solutions, the PINN solutions, and the PINN point-wise absolute errors for \(u\) and \(v\) in the spatial-temporal domain. Here the PINN solution is computed by PINN-F1, in which the penalty coefficients are given by \(\mathbf{W}=(W_{1},\ldots,W_{8})=(0.8,0.8,0.5,0.5,0.5,0.9,0.9)\). One can observe that the method has captured the wave fields for \(u\) and \(\frac{\partial u}{\partial t}\) reasonably well, with the error for \(u\) notably smaller than that of \(\frac{\partial u}{\partial t}\). Figures 2 and 3 provide a comparison of the solutions obtained using the two forms of the loss function. Figure 2 compares profiles of the PINN-F1 and PINN-F2 solutions, and the exact solution, for \(u\) (top row) at three time instants (\(t=0.5\), \(1.0\), and \(1.5\)), as well as the error profiles (bottom row). Figure 3 shows the corresponding results for the field variable \(v=\frac{\partial u}{\partial t}\). These results are obtained by using \(N=2000\) training data points in the domain and on each of the domain boundaries. It is observed that both PINN schemes, with the loss functions given by (60) and (61) respectively, have captured the solution reasonably well. We further observe that the PINN-F1 scheme (with the loss form (60)) produces notably more accurate results than PINN-F2 (with the loss form (61)), especially for the field \(\frac{\partial u}{\partial t}\).
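The only place where the two forms differ is in how the periodic-boundary mismatches enter the loss: squared means in form #1, and mean absolute values in form #2. A minimal sketch of the two variants is given below, assuming the boundary values of \(v_{\theta}\) and \(u_{\theta x}\) at \(x=0\) and \(x=5\) have already been evaluated; the helper name and argument convention are hypothetical.

```python
import torch

def periodic_bc_terms(v0, v5, ux0, ux5, W7, W8, squared=True):
    """Boundary terms of the wave-equation loss: squared=True gives the
    last two terms of form #1 (60); squared=False gives the unsquared
    (absolute-value) terms of form #2 (61)."""
    dv, dux = v0 - v5, ux0 - ux5
    if squared:
        return W7 * torch.mean(dv**2) + W8 * torch.mean(dux**2)
    return W7 * torch.mean(torch.abs(dv)) + W8 * torch.mean(torch.abs(dux))
```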
We have varied the number of training data points \(N\) systematically and studied its effect on the PINN results. Figure 4 shows the loss histories of PINN-F1 and PINN-F2 corresponding to different numbers of training data points (\(N\)) in the simulations, with a total of \(30,000\) training iterations. We can make two observations. First, the history curves with the loss function form #1 are generally smoother, indicating that the loss function decreases almost monotonically as the training progresses. On the other hand, significant fluctuations in the loss history can be observed with the form #2. Second, the eventual loss values produced by the loss form #1 are significantly smaller, by over an order of magnitude, than those produced by the loss form #2. Table 1 provides a further comparison between PINN-F1 and PINN-F2. Here the \(l_{2}\) and \(l_{\infty}\) errors of \(u\) and \(v\) computed by PINN-F1 and PINN-F2 corresponding to different numbers of training data points (\(N\)) are listed. There appears to be a general trend that the errors tend to decrease with an increasing number of training points, but the decrease is not monotonic. It can be observed that the \(u\) errors are notably smaller than those for \(v=\frac{\partial u}{\partial t}\), as observed earlier in e.g. Figure 1. One can again observe that the PINN-F1 results are notably more accurate than those of PINN-F2 for the wave equation. Theorem 3.6 suggests that the solution errors for \(u\), \(v=\frac{\partial u}{\partial t}\), and \(\nabla u\) scale approximately as the square root of the training loss function. Figure 5 provides some numerical evidence for this point. Here we plot the \(l^{2}\) errors for \(u\), \(\frac{\partial u}{\partial t}\) and \(\frac{\partial u}{\partial x}\) from our simulations as a function of the training loss value for PINN-F1 and PINN-F2 in logarithmic scales. It is evident that for PINN-F1 the scaling essentially follows the square root relation. For PINN-F2 the relation between the error and the training loss appears to scale with a power somewhat larger than \(\frac{1}{2}\).

\begin{table} \begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{method} & \multirow{2}{*}{\(N\)} & \multicolumn{2}{c|}{\(l_{2}\)-error} & \multicolumn{2}{c}{\(l_{\infty}\)-error} \\ \cline{3-6} & & \(u_{\theta}\) & \(v_{\theta}\) & \(u_{\theta}\) & \(v_{\theta}\) \\ \hline \multirow{5}{*}{PINN-F1} & 1000 & 5.7013e-03 & 1.3531e-02 & 1.8821e-02 & 4.6631e-02 \\ & 1500 & 2.1689e-03 & 4.1035e-03 & 6.7631e-03 & 1.5109e-02 \\ & 2000 & 4.6896e-03 & 9.6417e-03 & 1.3828e-02 & 3.3063e-02 \\ & 2500 & 3.7879e-03 & 9.8574e-03 & 1.2868e-02 & 3.3622e-02 \\ & 3000 & 2.6588e-03 & 6.0746e-03 & 8.1457e-03 & 1.9860e-02 \\ \hline \multirow{5}{*}{PINN-F2} & 1000 & 4.7281e-02 & 9.2431e-02 & 1.4367e-01 & 3.2764e-01 \\ & 1500 & 4.9087e-02 & 1.2438e-01 & 2.1525e-01 & 5.0601e-01 \\ & 2000 & 1.8554e-02 & 4.9224e-02 & 6.0780e-02 & 1.6358e-01 \\ & 2500 & 2.3526e-02 & 5.4266e-02 & 9.8690e-02 & 1.9467e-01 \\ & 3000 & 1.4164e-02 & 3.7796e-02 & 5.3045e-02 & 1.4179e-01 \\ \hline \end{tabular} \end{table} Table 1: Wave equation: The \(u\) and \(v\) errors versus the number of training data points \(N\).
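The square-root relation can be checked directly by a least-squares fit of log-error against log-loss. The sketch below illustrates the fit; the numerical values are illustrative placeholders, not the measured (loss, error) pairs from our simulations.

```python
import numpy as np

# Hypothetical (training loss, l2 error) pairs collected from repeated runs.
losses = np.array([1e-2, 1e-3, 1e-4, 1e-5])
errors = np.array([3e-1, 1e-1, 3e-2, 1e-2])

# Fit error ~ loss^p on log-log scales; p close to 0.5 corresponds to
# the square-root scaling suggested by Theorem 3.6.
slope, intercept = np.polyfit(np.log10(losses), np.log10(errors), 1)
print(f"fitted power: {slope:.2f}")
```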
Figure 2: Wave equation: Comparison of profiles of \(u\) (top row) and its absolute error (bottom row) between the PINN solutions (loss forms #1 and #2) and the exact solution at time instants (a) \(t=0.5\), (b) \(t=1.0\), and (c) \(t=1.5\). \(N=2000\) training data points within the domain and on each of the domain boundaries (\(x=0\) and \(5\), and \(t=0\)).

Figure 3: Wave equation: Comparison of the profiles of \(v=\frac{\partial u}{\partial t}\) (top row) and its absolute error (bottom row) between the PINN solutions (loss forms #1 and #2) and the exact solution at time instants (a) \(t=0.5\), (b) \(t=1.0\), and (c) \(t=1.5\). \(N=2000\) training data points within the domain and on each of the domain boundaries (\(x=0\) and \(5\), and \(t=0\)).

Figure 4: Wave equation: Histories of the loss function versus the training iteration with PINN-F1 and PINN-F2, corresponding to different numbers of training data points (\(N\)).

### Sine-Gordon Equation

We test the PINN algorithm suggested by the theoretical analysis for the Sine-Gordon equation (36) in this subsection. Consider the spatial-temporal domain \((x,t)\in\Omega=D\times[0,T]=[0,1]\times[0,2]\), and the following initial/boundary value problem on this domain, \[\frac{\partial^{2}u}{\partial t^{2}}-\frac{\partial^{2}u}{\partial x^{2}}+u+\sin(u)=f(x,t), \tag{62a}\] \[u(0,t)=\phi_{1}(t),\qquad u(1,t)=\phi_{2}(t),\] (62b) \[u(x,0)=\psi_{1}(x),\qquad\frac{\partial u}{\partial t}(x,0)=\psi_{2}(x). \tag{62c}\] In these equations, \(u(x,t)\) is the field function to be solved for, \(f(x,t)\) is a source term, \(\psi_{1}\) and \(\psi_{2}\) are the initial conditions, and \(\phi_{1}\) and \(\phi_{2}\) are the boundary conditions. The source term and the initial/boundary conditions are chosen appropriately such that the problem has the following exact solution, \[u(x,t)=\left[2\cos\left(\pi x+\frac{\pi}{5}\right)+\frac{9}{5}\cos\left(2\pi x+\frac{7\pi}{20}\right)\right]\left[2\cos\left(\pi t+\frac{\pi}{5}\right)+\frac{9}{5}\cos\left(2\pi t+\frac{7\pi}{20}\right)\right]. \tag{63}\] To simulate this problem with PINN, we reformulate the problem as follows, \[u_{t}-v=0, \tag{64a}\] \[v_{t}-u_{xx}+u+\sin(u)=f(x,t),\] (64b) \[u(0,t)=\phi_{1}(t),\qquad u(1,t)=\phi_{2}(t),\] (64c) \[u(x,0)=\psi_{1}(x),\qquad v(x,0)=\psi_{2}(x), \tag{64d}\] where \(v\) is a variable defined by equation (64a). In light of (41), we employ the following loss function in PINN, \[\text{Loss}= \frac{W_{1}}{N}\sum_{n=1}^{N}\left[u_{\theta t}(x_{int}^{n},t_{int}^{n})-v_{\theta}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[v_{\theta t}(x_{int}^{n},t_{int}^{n})-u_{\theta xx}(x_{int}^{n},t_{int}^{n})+u_{\theta}(x_{int}^{n},t_{int}^{n})+\sin(u_{\theta}(x_{int}^{n},t_{int}^{n}))-f(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[u_{\theta tx}(x_{int}^{n},t_{int}^{n})-v_{\theta x}(x_{int}^{n},t_{int}^{n})\right]^{2}+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[u_{\theta}(x_{tb}^{n},0)-\psi_{1}(x_{tb}^{n})\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[v_{\theta}(x_{tb}^{n},0)-\psi_{2}(x_{tb}^{n})\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[u_{\theta x}(x_{tb}^{n},0)-\psi_{1x}(x_{tb}^{n})\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left[\left|v_{\theta}(0,t_{sb}^{n})-\phi_{1t}(t_{sb}^{n})\right|+\left|v_{\theta}(1,t_{sb}^{n})-\phi_{2t}(t_{sb}^{n})\right|\right], \tag{65}\] where \(W_{n}>0\) (\(1\leq n\leq 7\)) are the penalty coefficients for the different loss terms added in the PINN implementation.
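Since the exact solution (63) is available in closed form, the source term \(f\) entering the \(W_{2}\) residual, as well as the initial/boundary data, can be generated symbolically rather than by hand. A minimal sympy sketch follows; the variable names are our own illustrative choices.

```python
import sympy as sp

x, t = sp.symbols('x t')
# Manufactured solution (63).
u = (2*sp.cos(sp.pi*x + sp.pi/5) + sp.Rational(9, 5)*sp.cos(2*sp.pi*x + 7*sp.pi/20)) \
  * (2*sp.cos(sp.pi*t + sp.pi/5) + sp.Rational(9, 5)*sp.cos(2*sp.pi*t + 7*sp.pi/20))

# Source term from (62a): f = u_tt - u_xx + u + sin(u).
f = sp.diff(u, t, 2) - sp.diff(u, x, 2) + u + sp.sin(u)

# Initial/boundary data follow the same way, e.g. psi_2(x) = u_t(x, 0).
psi2 = sp.diff(u, t).subs(t, 0)

# Vectorized callable for evaluating f at the training points.
f_num = sp.lambdify((x, t), f, 'numpy')
```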
It should be noted that the loss terms with the coefficients \(W_{3}\) and \(W_{6}\) would be absent from the conventional PINN formulation (see [46]). These terms in the training loss are necessary based on the error analysis in Section 4. It should also be noted that the \(W_{7}\) loss terms are not squared, as dictated by the theoretical analysis of Section 4.

Figure 5: Wave equation: The \(l^{2}\) errors of \(u\), \(\frac{\partial u}{\partial t}\), and \(\frac{\partial u}{\partial x}\) as a function of the training loss value. \(N=2000\) training data points.

We have also implemented a PINN scheme with a variant form for the loss function, \[\text{Loss}= \frac{W_{1}}{N}\sum_{n=1}^{N}\left[u_{\theta t}(x_{int}^{n},t_{int}^{n})-v_{\theta}(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[v_{\theta t}(x_{int}^{n},t_{int}^{n})-u_{\theta xx}(x_{int}^{n},t_{int}^{n})+u_{\theta}(x_{int}^{n},t_{int}^{n})+\sin(u_{\theta}(x_{int}^{n},t_{int}^{n}))-f(x_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[u_{\theta tx}(x_{int}^{n},t_{int}^{n})-v_{\theta x}(x_{int}^{n},t_{int}^{n})\right]^{2}+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[u_{\theta}(x_{tb}^{n},0)-\psi_{1}(x_{tb}^{n})\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[v_{\theta}(x_{tb}^{n},0)-\psi_{2}(x_{tb}^{n})\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[u_{\theta x}(x_{tb}^{n},0)-\psi_{1x}(x_{tb}^{n})\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left[(v_{\theta}(0,t_{sb}^{n})-\phi_{1t}(t_{sb}^{n}))^{2}+(v_{\theta}(1,t_{sb}^{n})-\phi_{2t}(t_{sb}^{n}))^{2}\right]. \tag{66}\] The difference between (66) and (65) lies in the \(W_{7}\) terms: these terms are squared in (66), but not in (65). We refer to the PINN scheme employing the loss function (65) as PINN-G1 and the scheme employing the loss function (66) as PINN-G2.

In the simulations we employ a feed-forward neural network with two input nodes (representing \(x\) and \(t\)), two output nodes (representing \(u\) and \(v\)), and two hidden layers, each having a width of 80 nodes. The tanh activation function has been used for all the hidden nodes. We employ \(N\) collocation points generated from a uniform random distribution within the domain, on each of the domain boundaries, and also on the initial boundary, where \(N\) is varied systematically in the simulations. The penalty coefficients in the loss functions are taken to be \(\mathbf{W}=(W_{1},\ldots,W_{7})=(0.5,0.4,0.5,0.6,0.6,0.6,0.8)\).

Figure 6: Sine-Gordon equation: Distributions of the exact solution (left column), the PINN solution (middle column) and the PINN absolute error (right column) for \(u\) (top row) and for \(v=\frac{\partial u}{\partial t}\) (bottom row). \(N=2000\) collocation points within the domain and on the domain boundaries.

Figure 6 shows distributions of \(u(x,t)\) and \(v=\frac{\partial u}{\partial t}\) from the exact solution (left column) and the PINN solution (middle column), as well as the point-wise absolute errors of the PINN solution for these fields (right column). These results are obtained by PINN-G2 with \(N=2000\) random collocation points within the domain and on each of the domain boundaries. The PINN solution is in good agreement with the true solution. Figures 7 and 8 compare the profiles of \(u\) and \(v\) between the exact solution and the solutions obtained by PINN-G1 and PINN-G2, at several time instants (\(t=0.5\), \(1\) and \(1.5\)).
Profiles of the absolute errors of the PINN-G1/PINN-G2 solutions are also shown in these figures. We observe that both PINN-G1 and PINN-G2 have captured the solution for \(u\) quite accurately, and to a lesser extent, also for \(v\). Comparison of the error profiles between PINN-G1 and PINN-G2 suggests that the PINN-G2 error in general appears to be somewhat smaller than that of PINN-G1, although this is not true consistently in the entire domain. The effect of the collocation points on the PINN results has been studied by varying the number of training collocation points systematically between \(N=1000\) and \(N=3000\) within the domain and on each of the domain boundaries. The results are provided in Figure 9 and Table 2. Figure 9 shows histories of the loss function corresponding to different numbers of collocation points for PINN-G1 and PINN-G2. Table 2 provides the \(l_{2}\) and \(l_{\infty}\) errors of \(u\) and \(v\) versus the number of collocation points computed by PINN-G1 and PINN-G2. The PINN errors in general tend to decrease with increasing number of collocation points, but this trend is not monotonic. It can be observed that both PINN-G1 and PINN-G2 have captured the solutions quite accurately, with the errors from PINN-G2 in general slightly smaller. Figure 10 provides some numerical evidence for the relation between the total error and the training loss as suggested by Theorem 4.4. Here we plot the \(l_{2}\) errors for \(u\), \(v\) and \(\frac{\partial u}{\partial x}\) as a function of the training loss value obtained by PINN-G1 and PINN-G2. The results indicate that the total error scales approximately as the square root of the training loss, which in some sense corroborates the error-loss relation as expressed in Theorem 4.4.

Figure 7: Sine-Gordon equation: Top row, comparison of profiles between the exact solution and PINN-G1/PINN-G2 solutions for \(u\) at several time instants. Bottom row, profiles of the absolute error of the PINN-G1 and PINN-G2 solutions for \(u\). \(N=2000\) training collocation points.

Figure 8: Sine-Gordon equation: Top row, comparison of profiles between the exact solution and PINN-G1/PINN-G2 solutions for \(v=\frac{\partial u}{\partial t}\) at several time instants. Bottom row, profiles of the absolute error of the PINN-G1 and PINN-G2 solutions for \(v\). \(N=2000\) training collocation points.

Figure 9: Sine-Gordon equation: Loss histories of (a) PINN-G1 and (b) PINN-G2 corresponding to various numbers of training collocation points.

### Linear Elastodynamic Equation

In this subsection we consider the linear elastodynamic equation (in two spatial dimensions plus time) and test the PINN algorithm suggested by the theoretical analysis in Section 5. Consider the spatial-temporal domain \((x,y,t)\in\Omega=D\times[0,T]=[0,1]\times[0,1]\times[0,2]\), and the following initial/boundary value problem with the linear elastodynamic equations on \(\Omega\):
\[\rho\frac{\partial^{2}\mathbf{u}}{\partial t^{2}}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}))-\lambda\nabla(\nabla\cdot\mathbf{u})=\mathbf{f}(\mathbf{x},t), \tag{67a}\] \[\mathbf{u}|_{\Gamma_{d}}=\mathbf{\phi}_{d},\qquad\big(2\mu\underline{\mathbf{\varepsilon}}(\mathbf{u})\mathbf{n}+\lambda(\nabla\cdot\mathbf{u})\mathbf{n}\big)|_{\Gamma_{n}}=\mathbf{\phi}_{n},\] (67b) \[\mathbf{u}(\mathbf{x},0)=\mathbf{\psi}_{1},\qquad\frac{\partial\mathbf{u}}{\partial t}(\mathbf{x},0)=\mathbf{\psi}_{2}, \tag{67c}\] where \(\mathbf{u}=(u_{1}(\mathbf{x},t),u_{2}(\mathbf{x},t))^{T}\) (\(\mathbf{x}=(x,y)\in D\), \(t\in[0,T]\)) is the displacement field to be solved for, \(\mathbf{f}(\mathbf{x},t)\) is a source term, and \(\rho\), \(\mu\) and \(\lambda\) are material constants. \(\Gamma_{d}\) is the Dirichlet boundary and \(\Gamma_{n}\) is the Neumann boundary, with \(\partial D=\Gamma_{d}\cup\Gamma_{n}\) and \(\Gamma_{d}\cap\Gamma_{n}=\emptyset\), where \(\mathbf{n}\) is the outward-pointing unit normal vector. In our simulations we choose the left boundary (\(x=0\)) as the Dirichlet boundary, and the rest are Neumann boundaries. \(\mathbf{\phi}_{d}\) and \(\mathbf{\phi}_{n}\) are Dirichlet and Neumann boundary conditions, respectively. \(\mathbf{\psi}_{1}\) and \(\mathbf{\psi}_{2}\) are the initial conditions for the displacement and the velocity. We employ the material parameter values \(\mu=\lambda=\rho=1\), and the following manufactured solution ([1]) to this problem, \[\mathbf{u}(\mathbf{x},t)=\sin(\sqrt{2}\pi t)\begin{bmatrix}-\sin(\pi x)^{2}\sin(2\pi y)\\ \sin(2\pi x)\sin(\pi y)^{2}\end{bmatrix}. \tag{68}\] The source term \(\mathbf{f}(\mathbf{x},t)\) and the boundary/initial distributions \(\mathbf{\phi}_{d}\), \(\mathbf{\phi}_{n}\), \(\mathbf{\psi}_{1}\) and \(\mathbf{\psi}_{2}\) are chosen according to the manufactured solution (68).

\begin{table} \begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{method} & \multirow{2}{*}{\(N\)} & \multicolumn{2}{c|}{\(l_{2}\)-error} & \multicolumn{2}{c}{\(l_{\infty}\)-error} \\ \cline{3-6} & & \(u_{\theta}\) & \(v_{\theta}\) & \(u_{\theta}\) & \(v_{\theta}\) \\ \hline \multirow{5}{*}{PINN-G1} & 1000 & 3.0818e-03 & 4.3500e-03 & 9.6044e-03 & 1.8894e-02 \\ & 1500 & 3.4335e-03 & 4.8035e-03 & 1.0566e-02 & 1.7050e-02 \\ & 2000 & 2.1914e-03 & 3.0055e-03 & 7.5882e-03 & 1.1099e-02 \\ & 2500 & 3.0172e-03 & 3.5698e-03 & 9.2515e-03 & 1.4645e-02 \\ & 3000 & 2.5281e-03 & 4.4858e-03 & 7.2785e-03 & 1.6213e-02 \\ \hline \multirow{5}{*}{PINN-G2} & 1000 & 3.0674e-03 & 2.0581e-03 & 7.3413e-03 & 1.1323e-02 \\ & 1500 & 1.0605e-03 & 1.4729e-03 & 2.2914e-03 & 6.2831e-03 \\ & 2000 & 2.2469e-03 & 1.6072e-03 & 4.8842e-03 & 8.8320e-03 \\ & 2500 & 6.6072e-04 & 6.0509e-04 & 1.4099e-03 & 4.3423e-03 \\ & 3000 & 6.6214e-04 & 1.0830e-03 & 1.9697e-03 & 7.8866e-03 \\ \hline \end{tabular} \end{table} Table 2: Sine-Gordon equation: The \(l_{2}\) and \(l_{\infty}\) errors for \(u\) and \(v\) versus the number of training collocation points \(N\) corresponding to PINN-G1 and PINN-G2.
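For reference, the manufactured solution (68) and the corresponding velocity \(\mathbf{v}=\partial\mathbf{u}/\partial t\), which supply \(\mathbf{\psi}_{1}\), \(\mathbf{\psi}_{2}\) and the time derivative of the Dirichlet data, can be evaluated as in the following simple numpy sketch (the function names are our own):

```python
import numpy as np

def exact_u(x, y, t):
    # Manufactured displacement field (68).
    s = np.sin(np.sqrt(2.0) * np.pi * t)
    u1 = -s * np.sin(np.pi * x)**2 * np.sin(2*np.pi*y)
    u2 =  s * np.sin(2*np.pi*x) * np.sin(np.pi*y)**2
    return u1, u2

def exact_v(x, y, t):
    # Velocity v = du/dt, obtained by differentiating (68) in time.
    c = np.sqrt(2.0) * np.pi * np.cos(np.sqrt(2.0) * np.pi * t)
    v1 = -c * np.sin(np.pi*x)**2 * np.sin(2*np.pi*y)
    v2 =  c * np.sin(2*np.pi*x) * np.sin(np.pi*y)**2
    return v1, v2
```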
To simulate this problem using the PINN algorithm suggested by the theoretical analysis from Section 5, we reformulate (67) into the following system \[\mathbf{u}_{t}-\mathbf{v}=\mathbf{0}, \mathbf{v}_{t}-2\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}))-\nabla(\nabla\cdot\mathbf{u})=\mathbf{f}(\mathbf{x},t), \tag{69a}\] \[\mathbf{u}|_{\Gamma_{d}}=\mathbf{\phi}_{d}, \big(2\underline{\mathbf{\varepsilon}}(\mathbf{u})\mathbf{n}+(\nabla\cdot\mathbf{u})\mathbf{n}\big)|_{\Gamma_{n}}=\mathbf{\phi}_{n},\] (69b) \[\mathbf{u}(\mathbf{x},0)=\mathbf{\psi}_{1}, \mathbf{v}(\mathbf{x},0)=\mathbf{\psi}_{2}, \tag{69c}\] where \(\mathbf{v}(\mathbf{x},t)\) is an intermediate variable (representing the velocity) as given by (69a). In light of (52), we employ the following loss function for PINN, \[\text{Loss} =\frac{W_{1}}{N}\sum_{n=1}^{N}\left[\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[\mathbf{v}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-2\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n})))-\nabla(\nabla\cdot\mathbf{u}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))-\mathbf{f}(\mathbf{x}_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))\right]^{2}+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[\nabla\cdot(\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n})\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[\mathbf{v}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{2}(\mathbf{x}_{tb}^{n})\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left[\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n}))\right]^{2}+\frac{W_{8}}{N}\sum_{n=1}^{N}\left[\nabla\cdot(\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n}))\right]^{2}\] \[+\frac{W_{9}}{N}\sum_{n=1}^{N}\left|\mathbf{v}_{\theta}(\mathbf{x}_{sb1}^{n},t_{sb1}^{n})-\mathbf{\phi}_{dt}(\mathbf{x}_{sb1}^{n},t_{sb1}^{n})\right|\] \[+\frac{W_{10}}{N}\sum_{n=1}^{N}\left|2\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n}))\mathbf{n}+(\nabla\cdot\mathbf{u}_{\theta}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n}))\mathbf{n}-\mathbf{\phi}_{n}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n})\right|, \tag{70}\] where we have added the penalty coefficients, \(W_{n}>0\) (\(1\leq n\leq 10\)), for the different loss terms in the implementation, and \(N\) denotes the number of collocation points within the domain and on the domain boundaries.
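The strain and divergence factors entering the \(W_{3}\), \(W_{4}\), \(W_{7}\), \(W_{8}\) and \(W_{10}\) terms can be assembled from autograd Jacobians. The sketch below is a minimal illustration for \(d=2\), assuming `x` and `y` are leaf tensors with `requires_grad=True` that were concatenated (together with `t`) into the network input; the helper names are our own, not the exact implementation.

```python
import torch

def vector_grad(w, x, y):
    # Jacobian entries of the vector field w = (w1, w2) w.r.t. (x, y).
    ones = torch.ones_like(w[:, :1])
    w1_x, w1_y = torch.autograd.grad(w[:, :1], (x, y), ones, create_graph=True)
    w2_x, w2_y = torch.autograd.grad(w[:, 1:], (x, y), ones, create_graph=True)
    return w1_x, w1_y, w2_x, w2_y

def strain_and_div(w, x, y):
    """Strain tensor eps(w) = (grad w + grad w^T)/2 and divergence of w,
    returned as the squared Frobenius norm of eps(w) and div w."""
    w1_x, w1_y, w2_x, w2_y = vector_grad(w, x, y)
    e11, e22 = w1_x, w2_y
    e12 = 0.5 * (w1_y + w2_x)            # symmetric off-diagonal entry
    div = w1_x + w2_y
    # |eps(w)|^2 = e11^2 + 2*e12^2 + e22^2 for the symmetric 2x2 tensor.
    eps_sq = e11**2 + 2.0 * e12**2 + e22**2
    return eps_sq, div
```

Applying these helpers to \(\mathbf{u}_{\theta t}-\mathbf{v}_{\theta}\) and to \(\mathbf{u}_{\theta}(\cdot,0)-\mathbf{\psi}_{1}\) yields the interior and initial-condition terms of (70), respectively.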
In the numerical tests we have also implemented another form for the loss function as follows, \[\text{Loss} =\frac{W_{1}}{N}\sum_{n=1}^{N}\left[\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{2}}{N}\sum_{n=1}^{N}\left[\mathbf{v}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-2\nabla\cdot(\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n})))-\nabla(\nabla\cdot\mathbf{u}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))-\mathbf{f}(\mathbf{x}_{int}^{n},t_{int}^{n})\right]^{2}\] \[+\frac{W_{3}}{N}\sum_{n=1}^{N}\left[\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))\right]^{2}+\frac{W_{4}}{N}\sum_{n=1}^{N}\left[\nabla\cdot(\mathbf{u}_{\theta t}(\mathbf{x}_{int}^{n},t_{int}^{n})-\mathbf{v}_{\theta}(\mathbf{x}_{int}^{n},t_{int}^{n}))\right]^{2}\] \[+\frac{W_{5}}{N}\sum_{n=1}^{N}\left[\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n})\right]^{2}+\frac{W_{6}}{N}\sum_{n=1}^{N}\left[\mathbf{v}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{2}(\mathbf{x}_{tb}^{n})\right]^{2}\] \[+\frac{W_{7}}{N}\sum_{n=1}^{N}\left[\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n}))\right]^{2}+\frac{W_{8}}{N}\sum_{n=1}^{N}\left[\nabla\cdot(\mathbf{u}_{\theta}(\mathbf{x}_{tb}^{n},0)-\mathbf{\psi}_{1}(\mathbf{x}_{tb}^{n}))\right]^{2}\] \[+\frac{W_{9}}{N}\sum_{n=1}^{N}\left[\mathbf{v}_{\theta}(\mathbf{x}_{sb1}^{n},t_{sb1}^{n})-\mathbf{\phi}_{dt}(\mathbf{x}_{sb1}^{n},t_{sb1}^{n})\right]^{2}\] \[+\frac{W_{10}}{N}\sum_{n=1}^{N}\left[2\underline{\mathbf{\varepsilon}}(\mathbf{u}_{\theta}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n}))\mathbf{n}+(\nabla\cdot\mathbf{u}_{\theta}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n}))\mathbf{n}-\mathbf{\phi}_{n}(\mathbf{x}_{sb2}^{n},t_{sb2}^{n})\right]^{2}. \tag{71}\] The difference between these two forms for the loss function lies in the \(W_{9}\) and \(W_{10}\) terms. It should be noted that the \(W_{9}\) and \(W_{10}\) terms in (70) are not squared, in light of the error terms (53a)\(-\)(53j) from the theoretical analysis. In contrast, these terms are squared in (71). The PINN scheme utilizing the loss function (70) is henceforth referred to as PINN-H1, and the scheme that employs the loss function (71) shall be referred to as PINN-H2.

In the simulations, we employ a feed-forward neural network with three input nodes, which represent \(\mathbf{x}=(x,y)\) and the time variable \(t\), and four output nodes, which represent \(\mathbf{u}=(u_{1},u_{2})\) and \(\mathbf{v}=(v_{1},v_{2})\). The neural network has two hidden layers, with widths of 90 and 60 nodes, respectively, and the tanh activation function for all the hidden nodes. For the network training, \(N\) collocation points are generated from a uniform random distribution within the domain, on each of the domain boundaries, as well as on the initial boundary. \(N\) is systematically varied in the simulations. We employ the penalty coefficients \(\mathbf{W}=(W_{1},...,W_{10})=(0.9,0.9,0.9,0.5,0.5,0.5,0.5,0.9,0.9)\) in the simulations. In Figures 11 and 12 we compare the PINN-H1/PINN-H2 solutions with the exact solution and provide an overview of their errors. Figure 11 is a visualization of the deformed configuration of the domain.
Here we have plotted the deformed field, \(\mathbf{x}+\mathbf{u}(\mathbf{x},t)\), for a set of grid points \(\mathbf{x}\in D\) at three time instants from the exact solution, the PINN-H1 and PINN-H2 solutions. Figure 12 shows distributions of the point-wise absolute error of the PINN-H1/PINN-H2 solutions, \(\|\mathbf{u}_{\theta}-\mathbf{u}\|=\sqrt{(u_{\theta 1}(\mathbf{x},t)-u_{1}(\mathbf{x},t))^{2}+(u_{\theta 2}(\mathbf{x},t)-u_{2}(\mathbf{x},t))^{2}}\), at the same three time instants. Here \(\mathbf{u}_{\theta}=(u_{\theta 1},u_{\theta 2})\) denotes the PINN solution. While both PINN schemes capture the solution fairly well at \(t=0.5\) and \(1\), at \(t=1.5\) both schemes show larger deviations from the true solution. In general, the PINN-H1 scheme appears to produce a better approximation to the solution than PINN-H2.

Figure 11: Linear elastodynamic equation: Visualization of the deformed configuration at time instants (a) \(t=0.5\), (b) \(t=1.0\), and (c) \(t=1.5\) from the exact solution (top row), the PINN-H1 solution (middle row) and the PINN-H2 solution (bottom row). Plotted here is the deformed field, \(\mathbf{x}+\mathbf{u}(\mathbf{x},t)\), for a set of grid points \(\mathbf{x}\in D=[0,1]\times[0,1]\). \(N=2000\) training collocation points within the domain and on the domain boundaries.

The effect of the number of collocation points (\(N\)) on the PINN results has been studied in Figure 13 and Table 3, where \(N\) is systematically varied in the range \(N=1000\) to \(N=3000\). Figure 13 shows the histories of the loss function for training PINN-H1 and PINN-H2 with different numbers of collocation points. Table 3 lists the corresponding \(l_{2}\) and \(l_{\infty}\) errors of \(\mathbf{u}\) and \(\mathbf{v}\) obtained from PINN-H1 and PINN-H2. One can observe that the PINN errors in general tend to improve with an increasing number of collocation points. It can also be observed that the PINN-H1 errors in general appear smaller than those of PINN-H2 for this problem. Figure 14 shows the errors of \(\mathbf{u}\), \(\mathbf{u}_{t}\), \(\underline{\mathbf{\varepsilon}}(\mathbf{u})\) and \(\nabla\cdot\mathbf{u}\) as a function of the loss function value in the network training of PINN-H1 and PINN-H2. The data indicate that these errors approximately scale as the square root of the training loss, which is consistent with the relation given by Theorem 5.5.
This in a sense provides numerical evidence for the theoretical analysis in Section 5.

\begin{table}
\begin{tabular}{c|cccc|cccc}
\hline
\multirow{2}{*}{\(N\)} & \multicolumn{4}{c|}{\(l_{2}\)-error} & \multicolumn{4}{c}{\(l_{\infty}\)-error} \\
\cline{2-9}
 & \(u_{\theta 1}\) & \(u_{\theta 2}\) & \(v_{\theta 1}\) & \(v_{\theta 2}\) & \(u_{\theta 1}\) & \(u_{\theta 2}\) & \(v_{\theta 1}\) & \(v_{\theta 2}\) \\
\hline
\multicolumn{9}{c}{PINN-H1} \\
\hline
1000 & 4.8837e-02 & 6.0673e-02 & 4.7460e-02 & 5.1640e-02 & 1.7189e-01 & 2.1201e-01 & 6.9024e-01 & 6.1540e-01 \\
1500 & 2.8131e-02 & 3.1485e-02 & 4.1104e-02 & 4.1613e-02 & 1.9848e-01 & 2.4670e-01 & 3.4716e-01 & 4.0582e-01 \\
2000 & 2.7796e-02 & 4.0410e-02 & 3.5891e-02 & 4.6334e-02 & 1.4704e-01 & 1.7687e-01 & 4.0678e-01 & 5.0022e-01 \\
2500 & 3.0909e-02 & 4.0215e-02 & 3.3966e-02 & 4.4024e-02 & 1.7589e-01 & 2.4211e-01 & 4.1403e-01 & 3.9570e-01 \\
3000 & 2.6411e-02 & 3.5600e-02 & 4.3209e-02 & 5.2802e-02 & 1.4289e-01 & 1.3625e-01 & 5.1167e-01 & 5.3298e-01 \\
\hline
\multicolumn{9}{c}{PINN-H2} \\
\hline
1000 & 4.9869e-02 & 1.3451e-01 & 5.6327e-02 & 5.4796e-02 & 3.2314e-01 & 3.4978e-01 & 6.7624e-01 & 5.7277e-01 \\
1500 & 5.4708e-02 & 1.3987e-01 & 4.5871e-02 & 5.1622e-02 & 2.8609e-01 & 5.2598e-01 & 4.9343e-01 & 2.3518e-01 \\
2000 & 6.2114e-02 & 1.0190e-01 & 6.4477e-02 & 5.0011e-02 & 2.5745e-01 & 3.1642e-01 & 5.9057e-01 & 5.8411e-01 \\
2500 & 3.7887e-02 & 6.0630e-02 & 5.4363e-02 & 5.0659e-02 & 2.2212e-01 & 2.4774e-01 & 5.3681e-01 & 3.5427e-01 \\
3000 & 5.4862e-02 & 6.3407e-02 & 5.5208e-02 & 6.0082e-02 & 3.4102e-01 & 2.1308e-01 & 5.1894e-01 & 4.4995e-01 \\
\hline
\end{tabular}
\end{table}
Table 3: Linear elastodynamic equation: The \(l_{2}\) and \(l_{\infty}\) errors for \(\mathbf{u}=(u_{1},u_{2})\) and \(\mathbf{v}=(v_{1},v_{2})\) versus the number of training data points \(N\) from the PINN-H1 and PINN-H2 solutions.

Figure 12: Linear elastodynamic equation: Distributions of the point-wise absolute error, \(\|\mathbf{u}_{\theta}-\mathbf{u}\|\), of the PINN-H1 solution (top row) and the PINN-H2 solution (bottom row) at three time instants (a) \(t=0.5\), (b) \(t=1.0\), and (c) \(t=1.5\). \(N=2000\) training collocation points within the domain and on the domain boundaries.

## 7 Concluding Remarks

In the present paper we have considered the approximation of a class of dynamic PDEs of second order in time by physics-informed neural networks (PINN). We provide an analysis of the convergence and the error of PINN for approximating the wave equation, the Sine-Gordon equation, and the linear elastodynamic equation. Our analyses show that, with feed-forward neural networks having two hidden layers and the tanh activation function for all the hidden nodes, the PINN approximation errors for the solution field, its time derivative and its gradient can be bounded by the PINN training loss and the number of training data points (quadrature points). Our theoretical analyses further suggest new forms for the PINN training loss function, which contain certain residuals that are crucial to the error estimate but would be absent from the canonical PINN formulation of the loss function. These typically include the gradient of the equation residual, the gradient of the initial-condition residual, and the time derivative of the boundary-condition residual.
In addition, depending on the type of boundary conditions involved in the problem, our analyses suggest that a norm other than the commonly used \(L^{2}\) norm may be more appropriate for the boundary residuals in the loss function. Adopting these new forms of the loss function suggested by the theoretical analyses leads to a variant PINN algorithm. We have implemented the new algorithm and presented a number of numerical experiments on the wave equation, the Sine-Gordon equation and the linear elastodynamic equation. The simulation results demonstrate that the method can capture the solution field well for these PDEs, and the numerical data corroborate the theoretical analyses.

Figure 13: Linear elastodynamic equation: Training loss histories of PINN-H1 and PINN-H2 corresponding to different numbers of collocation points (\(N\)) in the simulation.

Figure 14: Linear elastodynamic equation: The errors for \(\mathbf{u}\), \(\mathbf{u}_{t}\), \(\underline{\mathbf{\varepsilon}}(\mathbf{u})\) and \(\nabla\cdot\mathbf{u}\) versus the training loss value obtained by PINN-H1 and PINN-H2.

## Declarations

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Availability of data/code and material

Data will be made available on reasonable request.

## Acknowledgements

The work was partially supported by the China Postdoctoral Science Foundation (No.2021M702747), Natural Science Foundation of Hunan Province (No.2022JJ40422), NSF of China (No.12101495), General Special Project of Education Department of Shaanxi Provincial Government (No.21JK0943), and the US National Science Foundation (DMS-2012415).

## 8 Appendix: Auxiliary Results and Proofs of Main Theorems from Sections 4 and 5

### Notation

Let a \(d\)-tuple of non-negative integers \(\alpha\in\mathbb{N}_{0}^{d}\), with \(d\in\mathbb{N}\), be a multi-index. For two multi-indices \(\alpha,\beta\in\mathbb{N}_{0}^{d}\), we say that \(\alpha\leq\beta\) if, and only if, \(\alpha_{i}\leq\beta_{i}\) for all \(i=1,\cdots,d\). We then denote \[|\alpha|=\sum_{i=1}^{d}\alpha_{i},\qquad\alpha!=\prod_{i=1}^{d}\alpha_{i}!,\qquad\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\frac{\alpha!}{\beta!(\alpha-\beta)!}.\] Let \(P_{m,n}=\{\alpha\in\mathbb{N}_{0}^{n}:|\alpha|=m\}\), for which it holds that \[|P_{m,n}|=\begin{pmatrix}m+n-1\\ m\end{pmatrix}.\]
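As a concrete instance of this notation (a worked example added purely for illustration), take \(d=3\), \(\alpha=(2,0,1)\) and \(\beta=(1,0,1)\leq\alpha\). Then
\[|\alpha|=3,\qquad\alpha!=2!\cdot 0!\cdot 1!=2,\qquad\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\frac{\alpha!}{\beta!\,(\alpha-\beta)!}=\frac{2}{1\cdot 1}=2,\]
and, counting the multi-indices of order \(m=2\) in \(n=3\) variables,
\[|P_{2,3}|=\begin{pmatrix}4\\ 2\end{pmatrix}=6,\]
corresponding to \((2,0,0)\), \((0,2,0)\), \((0,0,2)\), \((1,1,0)\), \((1,0,1)\) and \((0,1,1)\).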
### Some Auxiliary Results

**Lemma 8.1**.: _Let \(d\in\mathbb{N}\), \(k,l\in\mathbb{N}_{0}\) with \(k>l+\frac{d}{2}\), and let \(\Omega\subset\mathbb{R}^{d}\) be an open set. Every function \(f\in H^{k}(\Omega)\) has a continuous representative belonging to \(C^{l}(\Omega)\)._

**Lemma 8.2**.: _Let \(d\in\mathbb{N}\), \(k\in\mathbb{N}_{0}\), \(f\in H^{k}(\Omega)\) and \(g\in W^{k,\infty}(\Omega)\) with \(\Omega\subset\mathbb{R}^{d}\). Then_ \[\|fg\|_{H^{k}(\Omega)}\leq 2^{k}\|f\|_{H^{k}(\Omega)}\|g\|_{W^{k,\infty}(\Omega)}.\]

**Lemma 8.3** (Multiplicative trace inequality, e.g. [13]).: _Let \(d\geq 2\), let \(\Omega\subset\mathbb{R}^{d}\) be a Lipschitz domain and let \(\gamma_{0}:H^{1}(\Omega)\to L^{2}(\partial\Omega):u\mapsto u|_{\partial\Omega}\) be the trace operator. Denote by \(h_{\Omega}\) the diameter of \(\Omega\) and by \(\rho_{\Omega}\) the radius of the largest \(d\)-dimensional ball that can be inscribed into \(\Omega\). Then it holds that_ \[\|\gamma_{0}u\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d,\rho_{\Omega}}\|u\|_{H^{1}(\Omega)}, \tag{72}\] _where \(C_{h_{\Omega},d,\rho_{\Omega}}=\sqrt{\frac{2\max\{2h_{\Omega},d\}}{\rho_{\Omega}}}\)._

**Lemma 8.4** ([12]).: _Let \(d,n,L,W\in\mathbb{N}\) and let \(u_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a neural network with \(\theta\in\Theta\) for \(L\geq 2\) and \(R,W\geq 1\), c.f. Definition 2.1. Assume that \(\|\sigma\|_{C^{n}}\geq 1\). Then it holds for \(1\leq j\leq d\) that_ \[\|(u_{\theta})_{j}\|_{C^{n}(\Omega)}\leq 16^{L}d^{2n}(e^{2}n^{4}W^{3}R^{n}\|\sigma\|_{C^{n}(\Omega)})^{nL}. \tag{73}\]

**Lemma 8.5** ([12]).: _Let \(d\geq 2\), \(m\geq 3\), \(\sigma>0\), \(a_{i},b_{i}\in\mathbb{Z}\) with \(a_{i}<b_{i}\) for \(1\leq i\leq d\), \(\Omega=\prod_{i=1}^{d}[a_{i},b_{i}]\) and \(f\in H^{m}(\Omega)\). Then for every \(N\in\mathbb{N}\) with \(N>5\) there exists a tanh neural network \(\hat{f}^{N}\) with two hidden layers, one of width at most \(3\lceil\frac{m}{2}\rceil|P_{m-1,d+1}|+\sum_{i=1}^{d}(b_{i}-a_{i})(N-1)\) and another of width at most \(3\lceil\frac{d+2}{2}\rceil|P_{d+1,d+1}|N^{d}\prod_{i=1}^{d}(b_{i}-a_{i})\), such that for \(k=0,1,2\) it holds that_ \[\|f-\hat{f}^{N}\|_{H^{k}(\Omega)}\leq 2^{k}3^{d}C_{k,m,d,f}(1+\sigma)\ln^{k}\left(\beta_{k,\sigma,d,f}N^{d+m+2}\right)N^{-m+k}, \tag{74}\] _where_ \[C_{k,m,d,f}=\max_{0\leq l\leq k}\begin{pmatrix}d+l-1\\ l\end{pmatrix}^{1/2}\frac{((m-l)!)^{1/2}}{([\frac{m-l}{d}]!)^{d/2}}\left(\frac{3\sqrt{d}}{\pi}\right)^{m-l}|f|_{H^{m}(\Omega)},\] \[\beta_{k,\sigma,d,f}=\frac{5\cdot 2^{kd}\max\{\prod_{i=1}^{d}(b_{i}-a_{i}),d\}\max\{\|f\|_{W^{k,\infty}(\Omega)},1\}}{3^{d}\sigma\min\{1,C_{k,m,d,f}\}}.\] _Moreover, the weights of \(\hat{f}^{N}\) scale as \(O(N^{\gamma})\) with \(\gamma=\max\{m^{2}/2,d(1+m/2+d/2)\}\)._

### Proof of Main Theorems from Section 4: Sine-Gordon Equation

**Theorem 4.2**:: Let \(d\), \(r\), \(k\in\mathbb{N}\) with \(k\geq 3\). Assume that \(g(u)\) is Lipschitz continuous, \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\). Then for every integer \(N>5\), there exist tanh neural networks \(u_{\theta}\) and \(v_{\theta}\), each having two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that
\[\|R_{int1}\|_{L^{2}(\Omega)},\ \|R_{tb1}\|_{L^{2}(D)}\lesssim\ln N\,N^{-k+1},\]
\[\|R_{int2}\|_{L^{2}(\Omega)},\ \|\nabla R_{int1}\|_{L^{2}(\Omega)},\ \|\nabla R_{tb1}\|_{L^{2}(D)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|R_{tb2}\|_{L^{2}(D)},\ \|R_{sb}\|_{L^{2}(\partial D\times[0,t])}\lesssim\ln N\,N^{-k+2}.\]

Proof.: Since \(u\in C^{k}(D\times[0,T])\) and \(v\in C^{k-1}(D\times[0,T])\), Lemma 8.5 guarantees that there exist tanh neural networks \(u_{\theta}\) and \(v_{\theta}\), with the same two hidden layers and widths \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that for every \(0\leq l\leq 2\) and \(0\leq s\leq 2\),
\[\|u_{\theta}-u\|_{H^{l}(\Omega)}\leq C_{l,k,d+1,u}\,\lambda_{l,u}(N)\,N^{-k+l},\]
\[\|v_{\theta}-v\|_{H^{s}(\Omega)}\leq C_{s,k-1,d+1,v}\,\lambda_{s,v}(N)\,N^{-k+1+s}.\]
It is now straightforward to bound the PINN residuals.
\[\|\hat{u}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{1}(\Omega)},\qquad\|\hat{v}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{v}\|_{H^{1}(\Omega)},\]
\[\|\Delta\hat{u}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)},\qquad\|\nabla\hat{u}_{t}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)},\qquad\|\nabla\hat{v}\|_{L^{2}(\Omega)}\leq\|\hat{v}\|_{H^{1}(\Omega)},\]
\[\|\hat{u}\|_{L^{2}(D)}\leq\|\hat{u}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{1}(\Omega)},\qquad\|\hat{v}\|_{L^{2}(D)}\leq\|\hat{v}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)},\]
\[\|\nabla\hat{u}\|_{L^{2}(D)}\leq\|\nabla\hat{u}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{2}(\Omega)},\qquad\|\hat{v}\|_{L^{2}(\partial D\times[0,t])}\leq\|\hat{v}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)}.\]
Similar to Theorem 3.3, we can then obtain
\[\|R_{int1}\|_{L^{2}(\Omega)}=\|\hat{u}_{t}-\hat{v}\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{1}(\Omega)}+\|\hat{v}\|_{L^{2}(\Omega)}\lesssim\ln N\,N^{-k+1},\]
\[\|R_{int2}\|_{L^{2}(\Omega)}=\|\varepsilon^{2}\hat{v}_{t}-a^{2}\Delta\hat{u}+\varepsilon_{1}^{2}\hat{u}+g(u_{\theta})-g(u)\|_{L^{2}(\Omega)}\leq\varepsilon^{2}\|\hat{v}\|_{H^{1}(\Omega)}+a^{2}\|\hat{u}\|_{H^{2}(\Omega)}+\varepsilon_{1}^{2}\|\hat{u}\|_{L^{2}(\Omega)}+L\|\hat{u}\|_{L^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\nabla R_{int1}\|_{L^{2}(\Omega)}=\|\nabla(\hat{u}_{t}-\hat{v})\|_{L^{2}(\Omega)}\leq\|\hat{u}\|_{H^{2}(\Omega)}+\|\hat{v}\|_{H^{1}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|R_{tb1}\|_{L^{2}(D)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+1},\]
\[\|R_{tb2}\|_{L^{2}(D)},\ \|R_{sb}\|_{L^{2}(\partial D\times[0,t])}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+2},\]
\[\|\nabla R_{tb1}\|_{L^{2}(D)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}\|_{H^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2}.\]

**Theorem 4.3**:: Let \(d\in\mathbb{N}\), and let \(u\in C^{1}(\Omega)\) and \(v\in C^{0}(\Omega)\) be the classical solution to the Sine-Gordon equation (36). Let \((u_{\theta},v_{\theta})\) denote the PINN approximation with the parameter \(\theta\). Then the following relation holds,
\[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{G}T\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right),\]
where \(C_{G}\) is defined in the proof.
Proof.: By taking the inner product of (43a) and (43b) with \(\hat{u}\) and \(\hat{v}\) and integrating over \(D\), respectively, we have
\[\frac{d}{2dt}\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}=\int_{D}\hat{u}\hat{v}\,\mathrm{d}\mathbf{x}+\int_{D}R_{int1}\hat{u}\,\mathrm{d}\mathbf{x}\leq\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}, \tag{75}\]
\[\begin{split}\varepsilon^{2}\frac{d}{2dt}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}&=-a^{2}\int_{D}\nabla\hat{u}\cdot\nabla\hat{v}\,\mathrm{d}\mathbf{x}+a^{2}\int_{\partial D}R_{sb}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})-\varepsilon_{1}^{2}\int_{D}\hat{u}\hat{v}\,\mathrm{d}\mathbf{x}-\int_{D}(g(u_{\theta})-g(u))\hat{v}\,\mathrm{d}\mathbf{x}+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\\ &=-a^{2}\int_{D}\nabla\hat{u}\cdot\nabla\hat{u}_{t}\,\mathrm{d}\mathbf{x}+a^{2}\int_{D}\nabla\hat{u}\cdot\nabla R_{int1}\,\mathrm{d}\mathbf{x}+a^{2}\int_{\partial D}R_{sb}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})-\varepsilon_{1}^{2}\int_{D}\hat{u}\hat{v}\,\mathrm{d}\mathbf{x}\\ &\qquad-\int_{D}(g(u_{\theta})-g(u))\hat{v}\,\mathrm{d}\mathbf{x}+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\\ &=-a^{2}\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+a^{2}\int_{D}\nabla\hat{u}\cdot\nabla R_{int1}\,\mathrm{d}\mathbf{x}+a^{2}\int_{\partial D}R_{sb}\nabla\hat{u}\cdot\mathbf{n}\,\mathrm{d}s(\mathbf{x})-\varepsilon_{1}^{2}\int_{D}\hat{u}\hat{v}\,\mathrm{d}\mathbf{x}\\ &\qquad-\int_{D}(g(u_{\theta})-g(u))\hat{v}\,\mathrm{d}\mathbf{x}+\int_{D}R_{int2}\hat{v}\,\mathrm{d}\mathbf{x}\\ &\leq-a^{2}\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{a^{2}}{2}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{a^{2}}{2}\int_{D}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+C_{\partial D}\left(\int_{\partial D}|R_{sb}|^{2}\,\mathrm{d}s(\mathbf{x})\right)^{\frac{1}{2}}\\ &\qquad+\frac{1}{2}(\varepsilon_{1}^{2}+L)\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}(\varepsilon_{1}^{2}+L+1)\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x},\end{split} \tag{76}\]
where \(C_{\partial D}=a^{2}|\partial D|^{\frac{1}{2}}(\|u\|_{C^{1}(\partial D\times[0,t])}+\|u_{\theta}\|_{C^{1}(\partial D\times[0,t])})\) and \(\hat{v}=\hat{u}_{t}-R_{int1}\) have been used. Adding (75) to (76), we get
\[\begin{split}\frac{d}{2dt}\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}&+a^{2}\frac{d}{2dt}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\varepsilon^{2}\frac{d}{2dt}\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}\\ &\leq\frac{1}{2}(\varepsilon_{1}^{2}+L+2)\int_{D}|\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{a^{2}}{2}\int_{D}|\nabla\hat{u}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}(\varepsilon_{1}^{2}+L+2)\int_{D}|\hat{v}|^{2}\,\mathrm{d}\mathbf{x}\\ &\qquad+\frac{1}{2}\int_{D}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}+\frac{a^{2}}{2}\int_{D}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}+C_{\partial D}\left(\int_{\partial D}|R_{sb}|^{2}\,\mathrm{d}s(\mathbf{x})\right)^{\frac{1}{2}}.\end{split} \tag{77}\]
Integrating (77) over \([0,\tau]\) for any \(\tau\leq T\) and applying the Cauchy-Schwarz inequality, we obtain
\[\begin{split}\int_{D}&|\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+a^{2}\int_{D}|\nabla\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\varepsilon^{2}\int_{D}|\hat{v}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}\\ &\leq\int_{D}|R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+a^{2}\int_{D}|\nabla R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\varepsilon^{2}\int_{D}|R_{tb2}|^{2}\,\mathrm{d}\mathbf{x}+(2+\varepsilon_{1}^{2}+L+a^{2})\int_{0}^{\tau}\int_{D}\left(|\hat{u}|^{2}+|\nabla\hat{u}|^{2}+|\hat{v}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\\ &\qquad+\int_{0}^{T}\int_{D}\left(|R_{int1}|^{2}+a^{2}|\nabla R_{int1}|^{2}+|R_{int2}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t+2C_{\partial D}|T|^{\frac{1}{2}}\left(\int_{0}^{T}\int_{\partial D}|R_{sb}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}.\end{split}\]
Applying the integral form of the Gronwall inequality to the above inequality leads to
\[\int_{D}|\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+a^{2}\int_{D}|\nabla\hat{u}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\varepsilon^{2}\int_{D}|\hat{v}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}\leq C_{G}\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right), \tag{78}\]
where
\[\begin{split}C_{G}&=\int_{D}(|R_{tb1}|^{2}+a^{2}|\nabla R_{tb1}|^{2}+\varepsilon^{2}|R_{tb2}|^{2})\,\mathrm{d}\mathbf{x}+\int_{0}^{T}\int_{D}(|R_{int1}|^{2}+|R_{int2}|^{2}+a^{2}|\nabla R_{int1}|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\\ &\quad+2C_{\partial D}|T|^{\frac{1}{2}}\left(\int_{0}^{T}\int_{\partial D}|R_{sb}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}.\end{split}\]
Then, we integrate (78) over \([0,T]\) to end the proof.

**Theorem 4.4**: _Let \(d\in\mathbb{N}\) and \(T>0\). Let \(u\in C^{4}(\Omega)\) and \(v\in C^{3}(\Omega)\) be the classical solution to the Sine-Gordon equation (36). Let \((u_{\theta},v_{\theta})\) denote the PINN approximation with the parameter \(\theta\in\Theta\)._
_Then the following relation holds,_
\[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right)=\mathcal{O}(\mathcal{E}_{T}(\theta)^{2}+M_{int}^{-\frac{2}{d+1}}+M_{tb}^{-\frac{2}{d}}+M_{sb}^{-\frac{1}{d}}),\]
_where the constant \(C_{T}\) is given in the proof._

Proof.: We can combine Theorem 4.3 with the quadrature error formula (16) to obtain the error estimates. For each residual term, adding and subtracting the corresponding quadrature rule and applying (16) yields
\[\int_{D}|R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(R_{tb1}^{2}),\qquad\int_{D}|R_{tb2}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(R_{tb2}^{2}),\]
\[\int_{D}|\nabla R_{tb1}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\nabla R_{tb1}|^{2}),\qquad\int_{\Omega}|R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(R_{int1}^{2}),\]
\[\int_{\Omega}|R_{int2}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(R_{int2}^{2}),\qquad\int_{\Omega}|\nabla R_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\nabla R_{int1}|^{2}),\]
\[\int_{\Omega_{*}}|R_{sb}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\leq C_{(R_{sb}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{*}}_{M_{sb}}(R_{sb}^{2}).\]
In light of (78) and the above inequalities, we have
\[\int_{0}^{T}\int_{D}(|\hat{u}(\mathbf{x},t)|^{2}+a^{2}|\nabla\hat{u}(\mathbf{x},t)|^{2}+\varepsilon^{2}|\hat{v}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq TC_{T}\exp\left((2+\varepsilon_{1}^{2}+L+a^{2})T\right),\]
where
\[\begin{split}C_{T}=&\;C_{(R_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(R_{tb1}^{2})+\varepsilon^{2}\left(C_{(R_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(R_{tb2}^{2})\right)+a^{2}\left(C_{(|\nabla R_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\nabla R_{tb1}|^{2})\right)\\ &+C_{(R_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(R_{int1}^{2})+C_{(R_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(R_{int2}^{2})+a^{2}\left(C_{(|\nabla R_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\nabla R_{int1}|^{2})\right)\\ &+2C_{\partial D}|T|^{\frac{1}{2}}\left(C_{(R_{sb}^{2})}M_{sb}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{*}}_{M_{sb}}(R_{sb}^{2})\right)^{\frac{1}{2}},\end{split}\]
and
\[C_{(R_{tb1}^{2})}\lesssim\|\hat{u}\|_{C^{2}}^{2},\quad C_{(R_{tb2}^{2})}\lesssim\|\hat{v}\|_{C^{2}}^{2},\quad C_{(|\nabla R_{tb1}|^{2})}\lesssim\|\hat{u}\|_{C^{3}}^{2},\quad C_{(R_{int1}^{2})}\lesssim\|\hat{u}\|_{C^{3}}^{2}+\|\hat{v}\|_{C^{2}}^{2},\]
\[C_{(R_{int2}^{2})},\;C_{(|\nabla R_{int1}|^{2})}\lesssim\|\hat{u}\|_{C^{4}}^{2}+\|\hat{v}\|_{C^{3}}^{2},\quad C_{(R_{sb}^{2})}\lesssim\|\hat{v}\|_{C^{3}}^{2}.\]
Here, the boundedness of \(\|u_{\theta}\|_{C^{n}}\) and \(\|v_{\theta}\|_{C^{n}}\) (\(n\in\mathbb{N}\)) appearing in the above constants can be obtained from Lemma 8.4, together with \(\|R_{q}^{2}\|_{C^{n}}\leq 2^{n}\|R_{q}\|_{C^{n}}^{2}\) for \(R_{q}=R_{tb1}\), \(R_{tb2}\), \(\nabla R_{tb1}\), \(R_{int1}\), \(R_{int2}\), \(\nabla R_{int1}\) and \(R_{sb}\).

### Proof of Main Theorems from Section 5: Linear Elastodynamic Equation

**Theorem 5.3**:: Let \(d\), \(r\), \(k\in\mathbb{N}\) with \(k\geq 3\). Let \(\mathbf{\psi}_{1}\in H^{r}(D)\), \(\mathbf{\psi}_{2}\in H^{r-1}(D)\) and \(\mathbf{f}\in H^{r-1}(D\times[0,T])\) with \(r>\frac{d}{2}+k\). For every integer \(N>5\), there exist tanh neural networks \((\mathbf{u}_{j})_{\theta}\) and \((\mathbf{v}_{j})_{\theta}\), with \(j=1,2,\cdots,d\), each with two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that
\[\|\mathbf{R}_{int1}\|_{L^{2}(\Omega)},\ \|\mathbf{R}_{tb1}\|_{L^{2}(D)}\lesssim\ln N\,N^{-k+1},\]
\[\|\mathbf{R}_{int2}\|_{L^{2}(\Omega)},\ \|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})\|_{L^{2}(\Omega)},\ \|\nabla\cdot\mathbf{R}_{int1}\|_{L^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})\|_{L^{2}(D)},\ \|\nabla\cdot\mathbf{R}_{tb1}\|_{L^{2}(D)},\ \|\mathbf{R}_{sb2}\|_{L^{2}(\Gamma_{N}\times[0,t])}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\mathbf{R}_{tb2}\|_{L^{2}(D)},\ \|\mathbf{R}_{sb1}\|_{L^{2}(\Gamma_{D}\times[0,t])}\lesssim\ln N\,N^{-k+2}.\]

Proof.: Lemma 5.2 implies that
\[\mathbf{u}\in C^{k}(D\times[0,T]),\qquad\mathbf{v}\in C^{k-1}(D\times[0,T]).\]
Let \(\mathbf{u}_{\theta}=((u_{1})_{\theta},(u_{2})_{\theta},\cdots,(u_{d})_{\theta})\) and \(\mathbf{v}_{\theta}=((v_{1})_{\theta},(v_{2})_{\theta},\cdots,(v_{d})_{\theta})\). Based on Lemma 8.5, there exist tanh neural networks \((u_{i})_{\theta}\) and \((v_{i})_{\theta}\), with \(i=1,2,\cdots,d\), each having two hidden layers, of widths at most \(3\lceil\frac{k}{2}\rceil|P_{k-1,d+2}|+\lceil NT\rceil+d(N-1)\) and \(3\lceil\frac{d+3}{2}\rceil|P_{d+2,d+2}|\lceil NT\rceil N^{d}\), such that for every \(0\leq l\leq 2\) and \(0\leq s\leq 2\),
\[\|u_{i}-(u_{i})_{\theta}\|_{H^{l}(\Omega)}\leq C_{l,k,d+1,u_{i}}\lambda_{l,u_{i}}(N)N^{-k+l}, \tag{79}\]
\[\|v_{i}-(v_{i})_{\theta}\|_{H^{s}(\Omega)}\leq C_{s,k-1,d+1,v_{i}}\lambda_{s,v_{i}}(N)N^{-k+1+s}. \tag{80}\]
Let \(\partial_{i}\) represent the derivative with respect to the \(i\)-th dimension.
For \(1\leq i,j\leq d\), we have
\[\|(\hat{u}_{t})_{i}\|_{L^{2}(\Omega)}\leq\|\hat{u}_{i}\|_{H^{1}(\Omega)},\qquad\|(\hat{v}_{t})_{i}\|_{L^{2}(\Omega)}\leq\|\hat{v}_{i}\|_{H^{1}(\Omega)},\]
\[\|\partial_{i}\partial_{j}\hat{u}_{i}\|_{L^{2}(\Omega)},\ \|\partial_{i}\partial_{i}\hat{u}_{i}\|_{L^{2}(\Omega)},\ \|\partial_{j}\partial_{j}\hat{u}_{i}\|_{L^{2}(\Omega)}\leq\|\hat{u}_{i}\|_{H^{2}(\Omega)},\]
\[\|\partial_{j}(\hat{u}_{t})_{i}\|_{L^{2}(\Omega)}\leq\|(\hat{u}_{t})_{i}\|_{H^{1}(\Omega)}\leq\|\hat{u}_{i}\|_{H^{2}(\Omega)},\qquad\|\partial_{j}\hat{v}_{i}\|_{L^{2}(\Omega)}\leq\|\hat{v}_{i}\|_{H^{1}(\Omega)},\]
\[\|\hat{u}_{i}\|_{L^{2}(D)}\leq\|\hat{u}_{i}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}_{i}\|_{H^{1}(\Omega)},\qquad\|\hat{v}_{i}\|_{L^{2}(D)}\leq\|\hat{v}_{i}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}_{i}\|_{H^{1}(\Omega)},\]
\[\|\partial_{j}\hat{u}_{i}\|_{L^{2}(D)}\leq\|\partial_{j}\hat{u}_{i}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}_{i}\|_{H^{2}(\Omega)},\qquad\|\hat{v}_{i}\|_{L^{2}(\Gamma_{D}\times[0,t])}\leq\|\hat{v}_{i}\|_{L^{2}(\partial\Omega)}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{v}_{i}\|_{H^{1}(\Omega)},\]
\[\|\partial_{i}\hat{u}_{i}n_{i}\|_{L^{2}(\Gamma_{N}\times[0,t])},\ \|\partial_{j}\hat{u}_{i}n_{i}\|_{L^{2}(\Gamma_{N}\times[0,t])},\ \|\partial_{j}\hat{u}_{i}n_{j}\|_{L^{2}(\Gamma_{N}\times[0,t])}\leq C_{h_{\Omega},d+1,\rho_{\Omega}}\|\hat{u}_{i}\|_{H^{2}(\Omega)}.\]
Using (79) and (80) together with the above relations, we can now bound the PINN residuals:
\[\|\mathbf{R}_{int1}\|_{L^{2}(\Omega)}\leq\|\hat{\mathbf{u}}_{t}-\hat{\mathbf{v}}\|_{L^{2}(\Omega)}\leq\|\hat{\mathbf{u}}\|_{H^{1}(\Omega)}+\|\hat{\mathbf{v}}\|_{L^{2}(\Omega)}\lesssim\ln N\,N^{-k+1},\]
\[\|\mathbf{R}_{int2}\|_{L^{2}(\Omega)}\leq\|\rho\hat{\mathbf{v}}_{t}-2\mu\nabla\cdot(\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}))-\lambda\nabla(\nabla\cdot\hat{\mathbf{u}})\|_{L^{2}(\Omega)}\lesssim\|\hat{\mathbf{v}}\|_{H^{1}(\Omega)}+\|\hat{\mathbf{u}}\|_{H^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})\|_{L^{2}(\Omega)},\ \|\nabla\cdot\mathbf{R}_{int1}\|_{L^{2}(\Omega)}\lesssim\|\hat{\mathbf{u}}\|_{H^{2}(\Omega)}+\|\hat{\mathbf{v}}\|_{H^{1}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\mathbf{R}_{tb1}\|_{L^{2}(D)}\leq\|\hat{\mathbf{u}}\|_{L^{2}(\partial\Omega)}\lesssim\|\hat{\mathbf{u}}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+1},\]
\[\|\mathbf{R}_{tb2}\|_{L^{2}(D)}\leq\|\hat{\mathbf{v}}\|_{L^{2}(\partial\Omega)}\lesssim\|\hat{\mathbf{v}}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+2},\]
\[\|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})\|_{L^{2}(D)},\ \|\nabla\cdot\mathbf{R}_{tb1}\|_{L^{2}(D)}\lesssim\|\hat{\mathbf{u}}\|_{H^{2}(D)}\lesssim\ln^{2}N\,N^{-k+2},\]
\[\|\mathbf{R}_{sb1}\|_{L^{2}(\Gamma_{D}\times[0,t])}\leq\|\hat{\mathbf{v}}\|_{L^{2}(\partial\Omega)}\lesssim\|\hat{\mathbf{v}}\|_{H^{1}(\Omega)}\lesssim\ln N\,N^{-k+2},\]
\[\|\mathbf{R}_{sb2}\|_{L^{2}(\Gamma_{N}\times[0,t])}\leq\|2\mu\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})\mathbf{n}+\lambda(\nabla\cdot\hat{\mathbf{u}})\mathbf{n}\|_{L^{2}(\partial\Omega)}\lesssim\|\hat{\mathbf{u}}\|_{H^{2}(\Omega)}\lesssim\ln^{2}N\,N^{-k+2}.\]

**Theorem 5.4**: Let \(d\in\mathbb{N}\), and let \(\mathbf{u}\in C^{1}(\Omega)\) and \(\mathbf{v}\in C(\Omega)\) be the classical solution to the linear elastodynamic equation (47). Let \((\mathbf{u}_{\theta},\mathbf{v}_{\theta})\) denote the PINN approximation with the parameter \(\theta\).
Then the following relation holds,
\[\int_{0}^{T}\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{G}T\exp\left((2+2\mu+\lambda)T\right),\]
where \(C_{G}\) is defined in the proof.

Proof.: By taking the inner product of (54a) and (54b) with \(\hat{\mathbf{u}}\) and \(\hat{\mathbf{v}}\) and integrating over \(D\), respectively, we have
\[\frac{d}{2dt}\int_{D}|\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}=\int_{D}\hat{\mathbf{u}}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{x}+\int_{D}\mathbf{R}_{int1}\cdot\hat{\mathbf{u}}\,\mathrm{d}\mathbf{x}\leq\int_{D}|\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\mathbf{R}_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\hat{\mathbf{v}}|^{2}\,\mathrm{d}\mathbf{x}, \tag{81}\]
\[\begin{split}\rho\frac{d}{2dt}\int_{D}|\hat{\mathbf{v}}|^{2}\,\mathrm{d}\mathbf{x}&=-2\mu\int_{D}\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}):\nabla\hat{\mathbf{v}}\,\mathrm{d}\mathbf{x}-\lambda\int_{D}(\nabla\cdot\hat{\mathbf{u}})(\nabla\cdot\hat{\mathbf{v}})\,\mathrm{d}\mathbf{x}+\int_{\partial D}(2\mu\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})\mathbf{n}+\lambda(\nabla\cdot\hat{\mathbf{u}})\mathbf{n})\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{s}(\mathbf{x})+\int_{D}\mathbf{R}_{int2}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{x}\\ &=-2\mu\int_{D}\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}):\nabla\hat{\mathbf{u}}_{t}\,\mathrm{d}\mathbf{x}+2\mu\int_{D}\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}):\nabla\mathbf{R}_{int1}\,\mathrm{d}\mathbf{x}-\lambda\int_{D}(\nabla\cdot\hat{\mathbf{u}})(\nabla\cdot\hat{\mathbf{u}}_{t})\,\mathrm{d}\mathbf{x}+\lambda\int_{D}(\nabla\cdot\hat{\mathbf{u}})(\nabla\cdot\mathbf{R}_{int1})\,\mathrm{d}\mathbf{x}\\ &\qquad+\int_{\Gamma_{D}}(2\mu\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})\mathbf{n}+\lambda(\nabla\cdot\hat{\mathbf{u}})\mathbf{n})\cdot\mathbf{R}_{sb1}\,\mathrm{d}\mathbf{s}(\mathbf{x})+\int_{\Gamma_{N}}\mathbf{R}_{sb2}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{s}(\mathbf{x})+\int_{D}\mathbf{R}_{int2}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{x}\\ &=-\frac{d}{dt}\int_{D}\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}\,\mathrm{d}\mathbf{x}-\frac{d}{dt}\int_{D}\frac{\lambda}{2}|\nabla\cdot\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+2\mu\int_{D}\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}):\nabla\mathbf{R}_{int1}\,\mathrm{d}\mathbf{x}+\lambda\int_{D}(\nabla\cdot\hat{\mathbf{u}})(\nabla\cdot\mathbf{R}_{int1})\,\mathrm{d}\mathbf{x}\\ &\qquad+\int_{\Gamma_{D}}(2\mu\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})\mathbf{n}+\lambda(\nabla\cdot\hat{\mathbf{u}})\mathbf{n})\cdot\mathbf{R}_{sb1}\,\mathrm{d}\mathbf{s}(\mathbf{x})+\int_{\Gamma_{N}}\mathbf{R}_{sb2}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{s}(\mathbf{x})+\int_{D}\mathbf{R}_{int2}\cdot\hat{\mathbf{v}}\,\mathrm{d}\mathbf{x}\\ &\leq-\frac{d}{dt}\int_{D}\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}\,\mathrm{d}\mathbf{x}-\frac{d}{dt}\int_{D}\frac{\lambda}{2}|\nabla\cdot\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\mu\int_{D}|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}\,\mathrm{d}\mathbf{x}+\mu\int_{D}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}\,\mathrm{d}\mathbf{x}\\ &\qquad+\frac{\lambda}{2}\int_{D}|\nabla\cdot\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\frac{\lambda}{2}\int_{D}|\nabla\cdot\mathbf{R}_{int1}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\hat{\mathbf{v}}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}|\mathbf{R}_{int2}|^{2}\,\mathrm{d}\mathbf{x}\\ &\qquad+C_{\Gamma_{D}}\left(\int_{\Gamma_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}\mathbf{s}(\mathbf{x})\right)^{\frac{1}{2}}+C_{\Gamma_{N}}\left(\int_{\Gamma_{N}}|\mathbf{R}_{sb2}|^{2}\,\mathrm{d}\mathbf{s}(\mathbf{x})\right)^{\frac{1}{2}}.\end{split} \tag{82}\]
Here we have used \(\hat{\mathbf{v}}=\hat{\mathbf{u}}_{t}-\mathbf{R}_{int1}\), and the constants are given by \(C_{\Gamma_{D}}=(2\mu+\lambda)|\Gamma_{D}|^{\frac{1}{2}}\left(\|\mathbf{u}\|_{C^{1}(\Gamma_{D}\times[0,T])}+\|\mathbf{u}_{\theta}\|_{C^{1}(\Gamma_{D}\times[0,T])}\right)\) and \(C_{\Gamma_{N}}=|\Gamma_{N}|^{\frac{1}{2}}(\|\mathbf{v}\|_{C(\Gamma_{N}\times[0,T])}+\|\mathbf{v}_{\theta}\|_{C(\Gamma_{N}\times[0,T])})\). Adding (81) to (82), we get
\[\begin{split}\frac{d}{2dt}\int_{D}|\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}&+\frac{d}{dt}\int_{D}\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}\,\mathrm{d}\mathbf{x}+\frac{d}{2dt}\int_{D}\lambda|\nabla\cdot\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\rho\frac{d}{2dt}\int_{D}|\hat{\mathbf{v}}|^{2}\,\mathrm{d}\mathbf{x}\\ &\leq\int_{D}|\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\mu\int_{D}|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}\,\mathrm{d}\mathbf{x}+\frac{\lambda}{2}\int_{D}|\nabla\cdot\hat{\mathbf{u}}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}|\hat{\mathbf{v}}|^{2}\,\mathrm{d}\mathbf{x}+\frac{1}{2}\int_{D}(|\mathbf{R}_{int1}|^{2}+|\mathbf{R}_{int2}|^{2})\,\mathrm{d}\mathbf{x}\\ &\qquad+\mu\int_{D}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}\,\mathrm{d}\mathbf{x}+\frac{\lambda}{2}\int_{D}|\nabla\cdot\mathbf{R}_{int1}|^{2}\,\mathrm{d}\mathbf{x}+C_{\Gamma_{D}}\left(\int_{\Gamma_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}\mathbf{s}(\mathbf{x})\right)^{\frac{1}{2}}+C_{\Gamma_{N}}\left(\int_{\Gamma_{N}}|\mathbf{R}_{sb2}|^{2}\,\mathrm{d}\mathbf{s}(\mathbf{x})\right)^{\frac{1}{2}}.\end{split} \tag{83}\]
Integrating (83) over \([0,\tau]\) for any \(\tau\leq T\) and applying the Cauchy-Schwarz inequality, we obtain
\[\begin{split}\int_{D}&|\hat{\mathbf{u}}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},\tau))|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}+\rho\int_{D}|\hat{\mathbf{v}}(\mathbf{x},\tau)|^{2}\,\mathrm{d}\mathbf{x}\\ &\leq\int_{D}|\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}2\mu|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}\lambda|\nabla\cdot\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\rho\int_{D}|\mathbf{R}_{tb2}|^{2}\,\mathrm{d}\mathbf{x}\\ &\qquad+(2+2\mu+\lambda)\int_{0}^{\tau}\int_{D}\left(|\hat{\mathbf{u}}|^{2}+|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}})|^{2}+|\nabla\cdot\hat{\mathbf{u}}|^{2}+|\hat{\mathbf{v}}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\\ &\qquad+\int_{0}^{T}\int_{D}\left(|\mathbf{R}_{int1}|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}+\lambda|\nabla\cdot\mathbf{R}_{int1}|^{2}+|\mathbf{R}_{int2}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\\ &\qquad+2|T|^{\frac{1}{2}}C_{\Gamma_{D}}\left(\int_{0}^{T}\int_{\Gamma_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}+2|T|^{\frac{1}{2}}C_{\Gamma_{N}}\left(\int_{0}^{T}\int_{\Gamma_{N}}|\mathbf{R}_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}.\end{split}\]
By applying the integral form of the Gronwall inequality to the above inequality, we have
\[\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},\tau)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},\tau))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},\tau)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},\tau)|^{2})\,\mathrm{d}\mathbf{x}\leq C_{G}\exp\left((2+2\mu+\lambda)T\right), \tag{84}\]
where
\[\begin{split}C_{G}&=\int_{D}|\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}2\mu|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2}\,\mathrm{d}\mathbf{x}+\int_{D}\lambda|\nabla\cdot\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}+\rho\int_{D}|\mathbf{R}_{tb2}|^{2}\,\mathrm{d}\mathbf{x}\\ &\quad+\int_{0}^{T}\int_{D}\left(|\mathbf{R}_{int1}|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}+\lambda|\nabla\cdot\mathbf{R}_{int1}|^{2}+|\mathbf{R}_{int2}|^{2}\right)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\\ &\quad+2|T|^{\frac{1}{2}}C_{\Gamma_{D}}\left(\int_{0}^{T}\int_{\Gamma_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}+2|T|^{\frac{1}{2}}C_{\Gamma_{N}}\left(\int_{0}^{T}\int_{\Gamma_{N}}|\mathbf{R}_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\right)^{\frac{1}{2}}.\end{split}\]
Then, we finish the proof by integrating (84) over \([0,T]\).

**Theorem 5.5**:: Let \(d\in\mathbb{N}\), and let \(\mathbf{u}\in C^{4}(\Omega)\) and \(\mathbf{v}\in C^{3}(\Omega)\) be the classical solution to the linear elastodynamic equation (47). Let \((\mathbf{u}_{\theta},\mathbf{v}_{\theta})\) denote the PINN approximation with the parameter \(\theta\). Then the following relation holds,
\[\int_{0}^{T}\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{T}T\exp\left((2+2\mu+\lambda)T\right)=\mathcal{O}(\mathcal{E}_{T}(\theta)^{2}+M_{int}^{-\frac{2}{d+1}}+M_{tb}^{-\frac{2}{d}}+M_{sb}^{-\frac{1}{d}}),\]
where \(C_{T}\) is defined in the proof.
Proof.: By the definitions of the different components of the training error (53) and the quadrature error estimate (16), adding and subtracting the corresponding quadrature rule for each residual term yields
\[\int_{D}|\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(\mathbf{R}_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(\mathbf{R}_{tb1}^{2}),\qquad\int_{D}|\mathbf{R}_{tb2}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(\mathbf{R}_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(\mathbf{R}_{tb2}^{2}),\]
\[\int_{D}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2}),\qquad\int_{D}|\nabla\cdot\mathbf{R}_{tb1}|^{2}\,\mathrm{d}\mathbf{x}\leq C_{(|\nabla\cdot\mathbf{R}_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\nabla\cdot\mathbf{R}_{tb1}|^{2}),\]
\[\int_{\Omega}|\mathbf{R}_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(\mathbf{R}_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(\mathbf{R}_{int1}^{2}),\qquad\int_{\Omega}|\mathbf{R}_{int2}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(\mathbf{R}_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(\mathbf{R}_{int2}^{2}),\]
\[\int_{\Omega}|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2}),\qquad\int_{\Omega}|\nabla\cdot\mathbf{R}_{int1}|^{2}\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq C_{(|\nabla\cdot\mathbf{R}_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\nabla\cdot\mathbf{R}_{int1}|^{2}),\]
\[\int_{\Omega_{D}}|\mathbf{R}_{sb1}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\leq C_{(\mathbf{R}_{sb1}^{2})}M_{sb1}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{D}}_{M_{sb1}}(\mathbf{R}_{sb1}^{2}),\qquad\int_{\Omega_{N}}|\mathbf{R}_{sb2}|^{2}\,\mathrm{d}s(\mathbf{x})\,\mathrm{d}t\leq C_{(\mathbf{R}_{sb2}^{2})}M_{sb2}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{N}}_{M_{sb2}}(\mathbf{R}_{sb2}^{2}).\]
In light of the above inequalities and (84), we obtain
\[\int_{0}^{T}\int_{D}(|\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+2\mu|\underline{\mathbf{\varepsilon}}(\hat{\mathbf{u}}(\mathbf{x},t))|^{2}+\lambda|\nabla\cdot\hat{\mathbf{u}}(\mathbf{x},t)|^{2}+\rho|\hat{\mathbf{v}}(\mathbf{x},t)|^{2})\,\mathrm{d}\mathbf{x}\,\mathrm{d}t\leq TC_{T}\exp\left((2+2\mu+\lambda)T\right),\]
where
\[\begin{split}C_{T}=&\;C_{(\mathbf{R}_{tb1}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(\mathbf{R}_{tb1}^{2})+\rho\left(C_{(\mathbf{R}_{tb2}^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(\mathbf{R}_{tb2}^{2})\right)+2\mu\left(C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})|^{2})\right)\\ &+\lambda\left(C_{(|\nabla\cdot\mathbf{R}_{tb1}|^{2})}M_{tb}^{-\frac{2}{d}}+\mathcal{Q}^{D}_{M_{tb}}(|\nabla\cdot\mathbf{R}_{tb1}|^{2})\right)+C_{(\mathbf{R}_{int1}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(\mathbf{R}_{int1}^{2})+C_{(\mathbf{R}_{int2}^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(\mathbf{R}_{int2}^{2})\\ &+2\mu\left(C_{(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})|^{2})\right)+\lambda\left(C_{(|\nabla\cdot\mathbf{R}_{int1}|^{2})}M_{int}^{-\frac{2}{d+1}}+\mathcal{Q}^{\Omega}_{M_{int}}(|\nabla\cdot\mathbf{R}_{int1}|^{2})\right)\\ &+2|T|^{\frac{1}{2}}C_{\Gamma_{D}}\left(C_{(\mathbf{R}_{sb1}^{2})}M_{sb1}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{D}}_{M_{sb1}}(\mathbf{R}_{sb1}^{2})\right)^{\frac{1}{2}}+2|T|^{\frac{1}{2}}C_{\Gamma_{N}}\left(C_{(\mathbf{R}_{sb2}^{2})}M_{sb2}^{-\frac{2}{d}}+\mathcal{Q}^{\Omega_{N}}_{M_{sb2}}(\mathbf{R}_{sb2}^{2})\right)^{\frac{1}{2}}.\end{split}\]
The boundedness of the constants \(C_{(\mathbf{R}_{q}^{2})}\) can be obtained from Lemma 8.4 and \(\|\mathbf{R}_{q}^{2}\|_{C^{n}}\leq 2^{n}\|\mathbf{R}_{q}\|_{C^{n}}^{2}\), with \(\mathbf{R}_{q}=\mathbf{R}_{tb1}\), \(\mathbf{R}_{tb2}\), \(\underline{\mathbf{\varepsilon}}(\mathbf{R}_{tb1})\), \(\nabla\cdot\mathbf{R}_{tb1}\), \(\mathbf{R}_{int1}\), \(\mathbf{R}_{int2}\), \(\underline{\mathbf{\varepsilon}}(\mathbf{R}_{int1})\), \(\nabla\cdot\mathbf{R}_{int1}\), \(\mathbf{R}_{sb1}\) and \(\mathbf{R}_{sb2}\).
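The quadrature rates invoked above can be checked numerically. The following is a small, self-contained sketch (our own illustration, not part of the paper's code) that estimates the midpoint-rule error rate \(M^{-2/d}\) for a smooth integrand in \(d=2\):

```python
import numpy as np

def midpoint_error(m):
    """Midpoint-rule error for f(x, y) = sin(pi x) sin(pi y) on [0, 1]^2
    with M = m * m quadrature points; the exact integral is (2 / pi)^2."""
    h = 1.0 / m
    c = (np.arange(m) + 0.5) * h                    # cell midpoints
    X, Y = np.meshgrid(c, c)
    approx = np.sum(np.sin(np.pi * X) * np.sin(np.pi * Y)) * h * h
    return abs(approx - (2.0 / np.pi) ** 2)

for m in (8, 16, 32, 64):
    M = m * m
    # For d = 2 the expected rate is M^(-2/d) = M^(-1), so error * M should
    # stay roughly constant as M grows.
    print(M, midpoint_error(m), midpoint_error(m) * M)
```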
2304.05946
Entanglement detection with classical deep neural networks
In this study, we introduce an autonomous method for addressing the detection and classification of quantum entanglement, a core element of quantum mechanics that has yet to be fully understood. We employ a multi-layer perceptron to effectively identify entanglement in both two- and three-qubit systems. Our technique yields impressive detection results, achieving nearly perfect accuracy for two-qubit systems and over $90\%$ accuracy for three-qubit systems. Additionally, our approach successfully categorizes three-qubit entangled states into distinct groups with a success rate of up to $77\%$. These findings indicate the potential for our method to be applied to larger systems, paving the way for advancements in quantum information processing applications.
Julio Ureña, Antonio Sojo, Juani Bermejo, Daniel Manzano
2023-04-12T16:14:18Z
http://arxiv.org/abs/2304.05946v2
# Entanglement detection with classical deep neural networks ###### Abstract In this study, we introduce an autonomous method for addressing the detection and classification of quantum entanglement, a core element of quantum mechanics that has yet to be fully understood. We employ a multi-layer perceptron to effectively identify entanglement in both two- and three-qubit systems. Our technique yields impressive detection results, achieving nearly perfect accuracy for two-qubit systems and over 90% accuracy for three-qubit systems. Additionally, our approach successfully categorizes three-qubit entangled states into distinct groups with a success rate of up to 77%. These findings indicate the potential for our method to be applied to larger systems, paving the way for advancements in quantum information processing applications. ## 1 Introduction Entanglement is one of the most important features of quantum mechanics. First proposed by Einstein, Podolsky, and Rosen as a pretended proof of the incompleteness of the theory [1], it was later considered by Schrödinger as _the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought_ [2]. Beyond its philosophical and fundamental interest, entanglement is a crucial resource for the development of new quantum technologies, being key to techniques such as quantum teleportation [3, 4], measurement-based quantum computation [5, 6], and super-dense coding [7]. One problem associated with entanglement is the development of separability criteria and entanglement measures [8, 9]. This problem consists of determining whether a given quantum system is entangled or not. Several criteria have been proposed, including the celebrated Bell inequalities [10], the Peres-Horodecki positive partial transpose (PPT) criterion [11, 12], entanglement witnesses [13, 14], and entropic criteria [15, 16]. For systems of dimension up to 6, the PPT criterion gives necessary and sufficient conditions for a quantum state, pure or mixed, to be entangled. For systems of higher dimension there are no known sufficient conditions. Recently, the fields of machine learning and quantum mechanics have merged in the new field of _quantum machine learning_ (QML). This connection has been made in two directions. First, quantum features can be used to enhance the learning process [17, 18]. Second, machine learning techniques can be used to learn quantum operations and to design experiments [19, 20]. One specific line of research in these directions is quantum neural networks (QNNs), meaning learning models inspired by biological systems [21, 22, 23]. In this paper, we address the problem of separability determination by the use of a deep multilayer perceptron (MLP) [24, 25], in order to develop an autonomous method for entanglement detection. We first test it on a solvable model, a two-qubit system, showing that it can acquire practically 100% efficiency after a small number of learning experiences and with simple topologies. We check these results in dependence on the entanglement of the system and on its purity. Furthermore, we also study the stability of the detection procedure when noise is added to the system. Finally, we apply the same method to a non-solvable model, a three-qubit system, and we show that the network can reach high efficiency rates close to 100% for the most entangled cases.
For this problem we also study the performance of the network across the different entanglement families, showing that some families are easier to classify than others. This problem has been recently addressed in Refs. [26, 27] for the two-qubit case and in Ref. [28] for systems of up to 10 qubits with bipartite entanglement. In these works entanglement is detected with very different efficiencies, up to 97% in the best-case scenario. We improve this efficiency and we also show the capability of neural networks to distinguish between different amounts of multipartite entanglement.

## 2 Multilayer perceptron and the learning procedure

In this section, we provide a short explanation of the MLP model used and how it is applied to our specific problem. Our MLP is a neural network (NN) model originally based on the McCulloch-Pitts model of neurons [29] and backpropagation of the error [30]. For the two-qubit case the input of the network will be the elements of the density matrix of the state, while for the three-qubit one it will be the state vector. By doing so we ensure that the complexity of the problem is comparable in both cases. As density matrices are Hermitian, for an \(N\)-qubit system the \(2^{N}\times 2^{N}\) density matrix corresponds to \(2^{2N}\) real values that the MLP takes as input. As MLPs are topologically invariant, the order of the input parameters plays no major role. For the vector case the input is composed of \(2^{N}\) complex values, corresponding to \(2^{(N+1)}\) real parameters. For some specific cases we have artificially increased the input space by redundancy to improve the learning procedure.

Figure 1: Scheme of the data processing. Explanation in the main text.

To analyse our network we study three figures of merit. At the end of the learning we calculate the Average Success Rate (ASR), meaning the percentage of well-classified states for the set of interest. Furthermore, to also study the evolution of the learning procedure in binary classification problems we use the binary cross-entropy (BCE) loss. If we have an output \(a^{\prime}\) and an expected output \(a\), the BCE is defined as \[BCE(a,a^{\prime})=-\left(a\,\log(a^{\prime})+(1-a)\,\log(1-a^{\prime})\right). \tag{1}\] Finally, in Section 4 we also study the problem of classifying four entanglement families. In this case, the output layer consists of four neurons with activations \(a_{i}\), \(i\in\{1,\,2,\,3,\,4\}\), each of them corresponding to one of the families. For this specific case the readout of the MLP is not the activation of the neurons but the softmax function of these activations, defined as \[S(a_{i})=\frac{\exp(a_{i})}{\sum_{j=1}^{4}\exp(a_{j})}. \tag{2}\] This can be considered as a probability distribution defined over the four entanglement families. The considered loss function for this problem will be the cross entropy between the predicted probability distribution \(\{S(a_{i})\}\) and the desired one, which is just \(\delta_{ii^{\prime}}\), with \(i^{\prime}\) the correct classification and \(\delta\) the Kronecker delta. Therefore, we define the Categorical Cross Entropy (CCE) as \[CCE(a_{i},i^{\prime})=-\sum_{j}\delta_{ji^{\prime}}\,\log\left(S(a_{j})\right)=-\log\left(S(a_{i^{\prime}})\right), \tag{3}\] which depends only on the softmax function of the neuron associated with the correct classification.
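For concreteness, a minimal NumPy sketch of the figures of merit just defined is given below. This is our own illustration and not the paper's code, which relies on the built-in losses of the libraries listed in the next section:

```python
import numpy as np

def bce(a, a_prime):
    """Binary cross entropy, Eq. (1): expected label a in {0, 1},
    network output a_prime in (0, 1)."""
    return -(a * np.log(a_prime) + (1 - a) * np.log(1 - a_prime))

def softmax(activations):
    """Softmax readout, Eq. (2), over the four output activations."""
    e = np.exp(activations - np.max(activations))  # shift for numerical stability
    return e / np.sum(e)

def cce(activations, i_correct):
    """Categorical cross entropy, Eq. (3): depends only on the softmax
    value of the neuron for the correct family."""
    return -np.log(softmax(activations)[i_correct])

# Example: four activations, correct family i' = 2.
print(cce(np.array([0.1, 1.3, 2.0, -0.5]), 2))
```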
The training procedure is organised in epochs. One epoch entails feeding the network with the whole training set, i.e. using every training sample once to compute an estimation of the loss-function gradient. Datasets have a size \(S\) that is generally divided into \(S/2\) separable and \(S/2\) entangled states. The batch size \(M\) is the number of samples that are processed before the MLP is updated once. Unless stated otherwise, we assume \(M=40\). The parameter \(f\) is the proportion of the whole dataset that is used for training, and it is set to \(f=0.8\) except where stated otherwise. As \(M\) takes an integer value between one and the number of samples in the training set, \(f\cdot S\), one epoch takes \((f\cdot S)/M\) updates of the trainable parameters.

The datasets are generated randomly (see Appendix A). Once the density matrices for both separable and entangled states are generated and stored in files, the data should be prepared to be computed by the MLP. The data processing is sketched in Fig. 1 and is divided into six steps. **Step one:** The separable and entangled density matrices (or state vectors in the three-qubit case) are read from the files and transformed into real vectors of size \(2^{2N}\), generating two arrays of size \(\left(2^{2N}+C\right)\times S/2\), where \(C\) is one for the case of a binary categorical classification, meaning that we add the value \(0/1\) to label separable/entangled states. For the three-qubit classification problem \(C=4\), as we add to the input vector a new vector of dimension 4 with each element determined by \(\delta_{ji^{\prime}}\), with \(i^{\prime}\) the family of the state and \(j=1,\,2,\,3,\,4\) the vector elements. **Step two:** Both arrays are stacked, giving rise to the whole dataset of \(S\) samples. **Step three:** The array is randomly shuffled to mix the separable and entangled density matrices. **Steps four and five:** The dataset is split into the training set, of size \(f\cdot S/2\), and the test set, of size \((1-f)\cdot S/2\). **Step six:** Both the training and test sets are split up into the input set (density matrices/state vectors) and the output set (binary labels for binary classification, and four-dimensional vectors for the three-qubit categorical classification). The rows of the input set are fed into the input layer of the MLP. The output is used, together with the expected output \(a\), to calculate the BCE via Eq. (1). The activation function for the hidden layers is set to the Rectified Linear Unit (ReLU) [31, 32], while the output layer has a sigmoidal activation function [30]. To backpropagate the error and optimise the network we use both the Adam optimization algorithm [33] and Root Mean Squared Propagation (RMSProp) [34], as indicated in the caption of each figure. The simulations have been performed with Python 3.8.10 and the libraries NumPy 1.21.4, Pandas 1.3.4, Tensorflow 2.7.0, Keras 2.7.0, Scipy 1.7.2 and Matplotlib 3.4.3. The initial values of the weights and biases of the network have been established by the uniform Glorot method of the Keras library [35].

## 3 Entanglement detection for two qubits systems

The first problem we have studied is the detection of entanglement in an analytically solvable case, a two-qubit system. For this case, necessary and sufficient separability conditions are given by the PPT separability criterion [11, 12, 8]. First of all we study the capability of the network to classify totally separable states versus maximally entangled ones. A description of the method used to generate the datasets is given in Appendix A.
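As an illustration of this training setup, the following is a minimal Keras sketch. It is our own reconstruction under stated assumptions: the \(\langle 16:8:1\rangle\) topology used below for the maximally-entangled-versus-separable task, ReLU hidden layers, a sigmoid output, RMSprop, and the BCE loss; the arrays `x_train` and `y_train` are placeholders standing in for the output of the six-step pipeline above.

```python
import numpy as np
from tensorflow import keras

# Placeholder data: flattened two-qubit density matrices (16 real values each)
# with binary labels 0 (separable) / 1 (entangled); in practice these come
# from the six-step data-processing pipeline.
x_train = np.random.rand(16000, 16)
y_train = np.random.randint(0, 2, size=(16000,))

model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(16, activation="relu"),    # <16:8:1> topology
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # binary readout
])
model.compile(optimizer="rmsprop",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=40, epochs=10, validation_split=0.2)
```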
For this problem the network achieves 100% efficiency even with a simple topology, with only two hidden layers, and in a small number of epochs. In Figure 2, we can see the evolution of the binary cross-entropy (BCE) loss as a function of the number of elapsed epochs of training. After a very small number of epochs, the network is able to classify the separable and maximally entangled states with practically 100% efficiency. Even if this is a solvable model, this result is remarkable: the PPT criterion is a complex procedure that involves both partial transposition and an eigenvalue calculation. The fact that a neural network can learn an equivalent procedure in less than five epochs highlights the potential of MLPs for the problem of entanglement detection.

Figure 2: Differentiation of maximally entangled states from separable states for pure states. The curve represents the BCE loss as a function of the elapsed epochs of the learning process. The curve is averaged over 100 simulations and belongs to an MLP with a topology \(\langle 16:8:1\rangle\). The RMSprop optimizer was used and the dataset size is \(S=2\cdot 10^{4}\).

Furthermore, the network can also be trained to detect entanglement for non-maximally entangled states. As a measure of the amount of entanglement we have used the negativity. For a general state \(\rho\), the negativity is defined as
\[\mathcal{N}(\rho)=\frac{\left|\left|\rho^{T_{1}}\right|\right|-1}{2}, \tag{4}\]
where \(\rho^{T_{1}}\) represents the partial transpose of the density matrix with respect to the first qubit and \(||A||\equiv\mathrm{Tr}\sqrt{A^{\dagger}A}\) is the trace norm. The maximum value of the negativity for a two-qubit state is 0.5, meaning that the system is fully entangled.
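Before turning to the results, a short NumPy sketch of how the negativity (4) can be evaluated for a two-qubit density matrix (our own illustration; the qubit-ordering convention is an assumption):

```python
import numpy as np

def negativity(rho):
    """Negativity of a two-qubit state, Eq. (4): partial transpose
    on the first qubit, then (||rho^T1|| - 1) / 2."""
    r = rho.reshape(2, 2, 2, 2)          # indices: (a, b; a', b')
    r_pt = r.transpose(2, 1, 0, 3)       # swap the first-qubit indices a <-> a'
    rho_t1 = r_pt.reshape(4, 4)
    # rho^T1 is Hermitian, so its trace norm is the sum of |eigenvalues|.
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho_t1)))
    return (trace_norm - 1.0) / 2.0

# Check on a Bell state (|00> + |11>)/sqrt(2): expected negativity 0.5.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
print(negativity(np.outer(psi, psi)))    # ~0.5
```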
However, this does not happen in the other direction, as networks trained with highly entangled states lose efficiency when applied to less entangled sets. These results indicate that networks should be trained for worst-case scenarios so they can perform well in any case. Based on these results we can infer that a network trained with both pure and mixed states with low entanglement may be the most general entanglement detector for arbitrary states (mixed and pure, with arbitrary negativity). To test this hypothesis we have trained a network with a dataset composed of a shuffled mixture of pure and mixed states with negativity \(\mathcal{N}\in(0,\,0.1)\). After the training procedure we tested the network on sets with both pure and mixed states with different negativities. As shown in Fig. 4, for all these sets we obtain an average success rate higher than \(97\%\).

Figure 3: Average success rate (efficiency) of MLPs that resulted from the training with pure (up) and mixed (down) states, when tested over datasets containing pure (left) and mixed (right) entangled states with different negativities. TW stands for 'trained with', while TO stands for 'tested on'. The success rates for TW and TO datasets that belong to different negativity subintervals are averaged over the whole datasets, whereas when TW and TO are the same dataset, the average is taken over the original test set (\(20\%\) of the data). The results are averaged over 10 simulations. The topologies used are, for pure states: \(\mathcal{N}\in(0.0,\,0.1)\rightarrow(256:128:16:1)\), \((0.1,\,0.2)\rightarrow(128:16:1)\), \((0.2,\,0.3)\rightarrow(64:16:1)\), \((0.3,\,0.4)\rightarrow(32:4:1)\), \((0.4,\,0.5)\rightarrow(16:4:1)\); and for mixed states: \(\mathcal{N}\in(0.0,\,0.1)\rightarrow(256:128:16:1)\), \((0.1,\,0.2)\rightarrow(128:16:1)\), \((0.2,\,0.3)\rightarrow(64:8:1)\), \((0.3,\,0.4)\rightarrow(16:4:1)\), \((0.4,\,0.5)\rightarrow(16:1)\). The dataset sizes are \(S=2\cdot 10^{4}\).

To further analyse the performance of the network at the boundary between separable and entangled states, we have studied two specific two-qubit families. First, we have trained the network with systems of arbitrary negativity and checked the probability of determining that a certain state is entangled for states \[\ket{\psi_{\epsilon}}=\frac{\ket{\psi_{\text{sep}}}+\epsilon\ket{\psi_{\text{Bell}}}}{\sqrt{1+|\epsilon|^{2}+2\,\text{Re}\left\{\epsilon\langle\psi_{\text{sep}}|\psi_{\text{Bell}}\rangle\right\}}}, \tag{5}\] where in general \(\epsilon\in\mathbb{C}\), but we have studied only the case \(\epsilon\in[0,1]\); \(\ket{\psi_{\text{sep}}}\) are bipartite separable states, and \(\ket{\psi_{\text{Bell}}}\) are maximally entangled states. These states are separable only in the limit \(\epsilon=0\) and entangled otherwise. The purpose of studying these types of states is to evaluate the network's robustness to noise. As the volume of separable states is much smaller than that of entangled states, especially for pure systems [36], it is expected that if a separable system is subject to noise, it will become entangled. Therefore, it would be desirable if the network could correctly classify states that are close to being separable as separable. However, in some contexts this could also be considered a failure in entanglement detection. The parameter \(\epsilon\) at which the network detects entanglement would vary depending on the specific states being analysed, making this case interesting to study and classify.
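A minimal sketch of how the family of Eq. (5) can be generated numerically; the choice of \(|00\rangle\) and a fixed Bell state below is illustrative, while the paper samples the separable and maximally entangled components as described in Appendix A.

```python
import numpy as np

def psi_epsilon(psi_sep, psi_bell, eps):
    # Eq. (5): a separable state plus a Bell admixture of weight eps,
    # normalised with the overlap term 2 Re{eps <psi_sep|psi_Bell>}
    overlap = np.vdot(psi_sep, psi_bell)          # <psi_sep|psi_Bell>
    norm = np.sqrt(1 + abs(eps)**2 + 2 * np.real(eps * overlap))
    return (psi_sep + eps * psi_bell) / norm

psi_sep = np.array([1, 0, 0, 0], dtype=complex)               # |00>
psi_bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
sweep = [psi_epsilon(psi_sep, psi_bell, e) for e in np.linspace(0, 1, 21)]
# Feeding `sweep` through a trained classifier traces out curves
# such as those in Figure 5.
```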
The results are presented in Figure 5 for both pure and mixed states (see the Appendix for details about the generation of states). Interestingly, for mixed states and a small network with only two hidden layers, the classification performance is not robust under the presence of small noise, as it classifies up to 15% of states as entangled for small \(\epsilon\) values. However, if we increase the network depth to four hidden layers, the learning becomes more robust, and it classifies almost all states as separable if \(\epsilon<0.1\). For higher values of \(\epsilon\), both networks behave similarly. For pure states the behaviour of both networks is similar, but there are qualitative differences, the deeper network being more efficient.

Figure 4: Average success rate (efficiency) of the MLPs that resulted from training with a mixture of minimally entangled (\(\mathcal{N}\in(0,\,0.1)\)) pure and mixed states, when applied to both pure and mixed sets with different negativities. The activation function in the hidden layers is ReLU, and the used optimizer is RMSprop. Every efficiency is averaged over ten simulations. The MLP architecture is \(\langle 256:128:16\rangle\). The dataset size for this case is \(S=4\cdot 10^{4}\).

This result suggests that, although the success probability of deep and non-deep networks is similar, deep networks are able to capture more entanglement features during the learning process. Hence, depending on the purpose of the network, it may be preferable to design it with a specific topology. A similar result is obtained by studying Werner states of the form [37] \[\rho_{W}=\frac{p}{3}\mathbb{I}+(1-\frac{4p}{3})|\psi_{\text{Bell}}\rangle \langle\psi_{\text{Bell}}| \tag{6}\] where \(p\in[0,1]\), \(\mathbb{I}\) is the identity matrix (so that \(\mathbb{I}/4\) is the maximally mixed state), and \(|\psi_{\text{Bell}}\rangle\) is a maximally entangled state (see the Appendix). These states are entangled if, and only if, \(p<\frac{1}{2}\). In this case, as shown in Figure 6, both networks overestimate the presence of entanglement, detecting more than half of the states as entangled up to \(p\sim 0.55\). A more interesting feature arises for values of \(p\) higher than \(0.8\). In this case, the network with two hidden layers starts detecting the states as entangled again. On the other hand, the network with four hidden layers makes a correct classification also in this case. This indicates that the deeper networks are better able to identify genuine entanglement properties of the systems, while the smaller ones can be tricked by other properties of the dataset, such as the Schmidt rank of the density matrices. Interestingly, in Refs. [26, 27] a similar result is obtained for a very different neural network model and training procedure.

## 4 Entanglement detection and classification for three-qubit systems

It is well known that three-qubit systems exhibit much more complex behavior with respect to their separability properties. It has been proven that there are six possible entanglement classes, meaning six types of states that can be connected by stochastic local operations and classical communication (SLOCC). These are the separable, bipartite entangled (BE), Greenberger-Horne-Zeilinger (GHZ), and W classes [38, 39]. The bipartite class can be further divided into three classes, depending on the way in which the bipartition is performed. In our case, where all three qubits are identical, we consider bipartite entanglement as only one family.
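To make the four families concrete, below is a minimal NumPy sketch of canonical class representatives; generic members of each class are obtained from these by (stochastic) local operations, and Appendix A describes how the actual datasets are sampled.

```python
import numpy as np

# Canonical three-qubit representatives of each family.
sep = np.zeros(8, dtype=complex); sep[0] = 1.0                    # |000>
be  = np.zeros(8, dtype=complex); be[[0, 3]] = 1 / np.sqrt(2)     # |0>(|00>+|11>)/sqrt(2)
ghz = np.zeros(8, dtype=complex); ghz[[0, 7]] = 1 / np.sqrt(2)    # (|000>+|111>)/sqrt(2)
w   = np.zeros(8, dtype=complex); w[[1, 2, 4]] = 1 / np.sqrt(3)   # (|001>+|010>+|100>)/sqrt(3)

def to_input_vector(psi):
    # The 16 real MLP inputs: real and imaginary parts of the
    # 8 state-vector amplitudes.
    return np.concatenate([psi.real, psi.imag])

def one_hot(family, n_classes=4):
    # Categorical label delta_{j,i'} for separable/BE/GHZ/W.
    v = np.zeros(n_classes); v[family] = 1.0
    return v
```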
To keep the complexity of the problem comparable to the two-qubit case, we have only worked with pure states in this section. The MLP input consists of the 16 real parameters corresponding to the elements of the state vector. First, we checked the learning rate of the MLP for each of the three families.

Figure 5: Probability of determining that a certain state is entangled as a function of the parameter \(\epsilon\) for states of the form (5), for pure (left) and mixed (right) states.

Figure 7 (left) shows the BCE loss as a function of the elapsed training epochs for each family. The vertical colored lines represent the moment when the best configuration is achieved and the MLP starts to suffer from overfitting. We can see that each family has a different learning speed, as well as a different maximum achievable efficiency. The W states are the easiest to classify, reaching a BCE loss below 0.1 after just 10 epochs. They are also the family that achieves the highest detection efficiency. The worst learning scenario occurs for the GHZ states, where the BCE loss cannot go below 0.2. This result is very interesting and indicates a relation between the amount of tripartite entanglement and the rate of entanglement detection. It is known that the W family contains the only states with entanglement that can be considered both tripartite and bipartite, while GHZ states can be considered only tripartite entangled [39]. From this result, we can conclude that tripartite entanglement is the hardest to detect with our MLP, bipartite entanglement is easier, and W states are the easiest as they contain both tripartite and bipartite entanglement. After training on the selected classes, we evaluated the performance of the MLP on all three classes. The results are shown in Figure 7 (right). The MLP trained with W states and applied to GHZ states had the worst performance, followed by the situation where the training was performed with BE states and the network was again applied to the GHZ family.

Figure 6: Probability of determining that a certain state is entangled as a function of the parameter \(p\) for states of the form (6).

Figure 7: Left: BCE loss as a function of the epochs of learning for the three entanglement families. The architecture of the network is \(\langle 16:512:128:32:1\rangle\). The fraction of states in the dataset taken for training is \(f=0.75\) and the BCE is evaluated only on the test set. Each curve is averaged over 10 simulations. The vertical lines indicate the epochs at which the best configuration is reached. The dataset sizes are \(S=2\cdot 10^{5}\).

This supports our claim that bipartite entanglement is easier to learn than tripartite entanglement, and when networks are trained with states containing bipartite entanglement, they perform poorly when faced with tripartite entanglement. On the other hand, the best performance was obtained when the network was applied to the W states regardless of the training set. This may be due to the presence of both tripartite and bipartite entanglement in this family. Additionally, we observed that when the MLP was trained with W states, it performed poorly when applied to any other family. Finally, we tested the MLP's ability to classify the states into four possible families: bipartite entangled, W, GHZ, and separable. To do this, we used a network with four output neurons as explained in Section 2. For this problem, the initial conditions of the network were very important, as shown in Figure 8 (left).
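The categorical classifier itself is a standard Keras MLP. The following is a minimal sketch consistent with the setup described (topology \(\langle 16:512:128:32:4\rangle\), ReLU hidden layers, Glorot-uniform initialization, CCE loss, early stopping); the softmax output and hyperparameters such as the patience value are illustrative assumptions.

```python
from tensorflow import keras

def build_classifier():
    # MLP <16:512:128:32:4> with ReLU hidden layers and a softmax
    # output for the four-family categorical classification.
    model = keras.Sequential([
        keras.layers.Input(shape=(16,)),
        keras.layers.Dense(512, activation="relu", kernel_initializer="glorot_uniform"),
        keras.layers.Dense(128, activation="relu", kernel_initializer="glorot_uniform"),
        keras.layers.Dense(32, activation="relu", kernel_initializer="glorot_uniform"),
        keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="rmsprop",
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)  # patience illustrative
# model = build_classifier()
# model.fit(X_train, Y_train, batch_size=1000, epochs=200,
#           validation_data=(X_test, Y_test), callbacks=[early_stop])
```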
We plotted the CCE as a function of the number of epochs for 10 different runs over the same dataset. Each run differed in the initialization of the network weights and in the ordering of the dataset elements. As can be seen, different initial conditions led to different behavior with respect to the speed of learning, the final efficiency, and overfitting. For these 10 runs, the best final success rate achieved is 79.7% and the average one is 73.2%. In Figure 8 (right) we can observe the number of runs that led to each final success rate. The worst case, 57.5%, seems to be an unlikely event, while most runs lead to a final efficiency above 70%. We may also remark that in this case the probability of correctly classifying a state by a purely random procedure is 25%, instead of the 50% of the binary classification.

Figure 8: Left: CCE of the categorical classification for 10 runs with different initial conditions over the same dataset. The vertical line marks the average number of epochs corresponding to the optimal training determined by an Early Stop algorithm. In these cases, the batch size is \(\mathsf{M}=1000\) and \(\mathsf{f}=0.75\). Right: Histogram showing the number of runs achieving each final success rate. The topology of the network is \(\langle 16:512:128:32:4\rangle\). The dataset size is \(S=4\cdot 10^{5}\).

## 5 Conclusions

In this study, we have showcased the potential of deep learning algorithms to effectively address entanglement detection and classification challenges. Our findings are striking, as the network attains up to 100% efficiency in two-qubit scenarios and over 90% efficiency in three-qubit situations. Additionally, we identified a strong relationship between entanglement detection and purity, with networks trained on pure states underperforming when presented with mixed states and vice versa. Moreover, deep networks display resilience to minor noise and can identify entanglement characteristics that allow them to excel when working with well-established quantum families like Werner states. In three-qubit instances, the network can pinpoint the entanglement family of a state with over 77% precision. Our research introduces a novel approach to detecting and classifying entanglement that bypasses the need for specific criteria or witnesses tailored to particular dimensions. This study paves the way for further exploration in several areas. Firstly, investigating the impact of state properties such as purity on neural network performance could lead to improved detection algorithms and a deeper understanding of the interplay between various quantum properties. The techniques proposed here can also be adapted for other quantum information tasks, such as state comparison. Lastly, it would be valuable to investigate the creation of a quantum neural network capable of executing the same tasks, which could open up a myriad of applications and serve as a benchmark problem for both classical and quantum neural networks.

## 6 Acknowledgements

We want to acknowledge funding from the FEDER/Junta de Andalucía program A.FQM.752.UGR20 and project PID2021-128970OA-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" (European Union).
2308.10708
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Causal Neural Network models have shown high levels of robustness to adversarial attacks as well as an increased capacity for generalisation tasks such as few-shot learning and rare-context classification compared to traditional Neural Networks. This robustness is argued to stem from the disentanglement of causal and confounder input signals. However, no quantitative study has yet measured the level of disentanglement achieved by these types of causal models or assessed how this relates to their adversarial robustness. Existing causal disentanglement metrics are not applicable to deterministic models trained on real-world datasets. We, therefore, utilise metrics of content/style disentanglement from the field of Computer Vision to measure different aspects of the causal disentanglement for four state-of-the-art causal Neural Network models. By re-implementing these models with a common ResNet18 architecture we are able to fairly measure their adversarial robustness on three standard image classification benchmarking datasets under seven common white-box attacks. We find a strong association (r=0.820, p=0.001) between the degree to which models decorrelate causal and confounder signals and their adversarial robustness. Additionally, we find a moderate negative association between the pixel-level information content of the confounder signal and adversarial robustness (r=-0.597, p=0.040).
Preben M. Ness, Dusica Marijan, Sunanda Bose
2023-08-21T13:22:12Z
http://arxiv.org/abs/2308.10708v1
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models

###### Abstract

Causal Neural Network models have shown high levels of robustness to adversarial attacks as well as an increased capacity for generalisation tasks such as few-shot learning and rare-context classification compared to traditional Neural Networks. This robustness is argued to stem from the disentanglement of causal and confounder input signals. However, no quantitative study has yet measured the level of disentanglement achieved by these types of causal models or assessed how this relates to their adversarial robustness. Existing causal disentanglement metrics are not applicable to deterministic models trained on real-world datasets. We, therefore, utilise metrics of content/style disentanglement from the field of Computer Vision to measure different aspects of the causal disentanglement for four state-of-the-art causal Neural Network models. By re-implementing these models with a common ResNet18 architecture we are able to fairly measure their adversarial robustness on three standard image classification benchmarking datasets under seven common white-box attacks. We find a strong association (r=0.820, p=0.001) between the degree to which models decorrelate causal and confounder signals and their adversarial robustness. Additionally, we find a moderate negative association between the pixel-level information content of the confounder signal and adversarial robustness (r=-0.597, p=0.040).

P. M. Ness, D. Marijan, and S. Bose, Simula Research Laboratory, Oslo, Norway ([email protected], [email protected], [email protected])

## 1 Introduction

The latent internal data representations of a model are said to be disentangled when different signal components or dimensions model separate semantic concepts in the input. For a dataset of face images, this could mean separate signal components for e.g. _gender_, _age_, and _presence of moustache_. It is a commonly held notion that such disentangled representations in Neural Network (NN) models are beneficial for the model's ability to adapt to new tasks or data distributions, decrease sample complexity, and increase the model's robustness to adversarial attacks [1, 13, 14]. However, the extensive investigation conducted in Locatello et al. [10] challenges these broad general assumptions and highlights the importance of more specific studies quantifying the concrete benefits of disentangled representations for different tasks and desirable model attributes. _Causal_ disentanglement is a special type of disentangled representation where the aim is to separately represent input features which are _causally_ related to some output label and features which are merely _spuriously correlated_ with the label. For an image classification task, this could mean separately representing the subject of an image - e.g. _a cat_ - from the information about the background, lighting levels, or camera angle. The latter is often correlated with the image label - e.g. images of wild animals tend to have nature backgrounds - but this is not the _cause_ of the label, and hence this pattern might not generalise to unseen tasks or datasets. It is demonstrated in Van Steenkiste et al. [10] that disentangled representations reduce sample complexity for specific abstract visual reasoning tasks which were intentionally difficult to solve based purely on statistical co-occurrences of depicted objects.
Furthermore, it is a commonly held belief that adversarial attacks exploit spurious or non-causal correlations learnt by a model [17]; it is therefore argued by e.g. Schölkopf et al. [20] that causal disentanglement should make models more robust against such attacks. There is a class of Neural Network (NN) models which explicitly aim to achieve causal disentanglement using the mathematical framework of Causal Inference [13]. Throughout this paper we will refer to such models as _Causal NNs_. These models have demonstrated good generalisation capabilities, and have been used successfully for long-tailed classification [14], to improve adversarial robustness [15], and to decrease sample complexity during training [14]. Although this performance is argued to stem from the models' ability to learn causally disentangled representations, there is a lack of studies investigating this claim. To the best of our knowledge, this is the first work to quantitatively test the association between causal disentanglement and adversarial robustness for NN models. Since a good disentangled representation is taken to mean one where there is a correspondence between signal components and high-level semantic content in the input data, quantitative investigations have so far primarily been confined to synthetic datasets [13]. This is because such a dataset allows one to both vary and measure the values of the true underlying data-generating factors. One can then vary a single factor - e.g. _presence of moustache_ - and confirm both qualitatively and quantitatively that only a subset of the representation's components varies while the rest are unchanged. This work is concerned with models operating on real-world datasets where the true values of the data-generating factors are of course unknown. Therefore, we propose a framework for measuring causal disentanglement using metrics based only on the information content and co-dependence of different representation components. This allows us for the first time to quantify the level of causal disentanglement achieved by state-of-the-art Causal NNs trained on real-world datasets, as well as measure the association between this disentanglement and each model's adversarial robustness.1 Footnote 1: Code for all models and experiments is available at [https://github.com/prebenness/causal_disentanglement_robustness](https://github.com/prebenness/causal_disentanglement_robustness)

#### Contributions

* We perform systematic benchmarks of four recent causal NN models across three standard datasets with a common ResNet18 backbone, allowing for a fair comparison of the models' performance and robustness.
* We introduce a framework for quantifying causal disentanglement which does not depend on access to data-generating factors or stochastic model signals.
* We find that the degree to which the different models achieve separation of causal and confounder signals varies significantly, but is largely independent of dataset.
* We find a strong positive association between the decorrelation of causal and confounder signals and model robustness to adversarial attacks.

## 2 Causal Neural Networks

Throughout this paper, a Neural Network model is said to be _causal_ if it aims to explicitly separate the causally linked and spuriously correlated information contained in an input \(\mathbf{x}\) with respect to some label \(\mathbf{y}\). We denote the causal signal \(\mathbf{c}\) and the spurious - or confounder - signal \(\mathbf{s}\).
Finally, any applied perturbation to the input data - e.g. an adversarial attack - is denoted \(\mathbf{m}\), and the resulting perturbed input is denoted \(\mathbf{\bar{x}}\). Lowercase bold letters indicate vectors or tensors and upper-case letters indicate random variables. A key assumption in classical NNs is that training and test data samples are drawn from the same data distribution. This causes degradation in performance when there is a shift in the distribution of data between the train and test domains. The motivation behind Causal NNs is to learn the causal features and relationships which hold true across such shifts in the data distribution, hence improving the model's ability to generalise. As a result, this class of models has seen an increase in popularity over the past few years for use cases such as long-tailed classification [15], learning feature importance [14], and defence against adversarial attacks [17]. Subject to a successful disentanglement of the causal signal \(\mathbf{c}\) and the confounder signal \(\mathbf{s}\), the central mathematical operation in most Causal NNs is the _back-door adjustment_. This is formalised in Pearl [16] as the _do-calculus_ operation \(P(Y|do(X))\). For a classifier predicting a label \(Y\) from an image \(X\) this becomes a marginalisation over the confounding variable given by \[P(Y|do(X))=\sum_{\mathbf{s}}P(Y|X,S=\mathbf{s})P(S=\mathbf{s}), \tag{1}\] where \(\mathbf{s}\) is the confounding signal, e.g. style and background information in the image. We can now see that causal NNs aim to provide robust classifications by smoothing out any learnt spurious correlations between \(S\) and \(Y\). Although the application of Equation 1 removes dependence on the confounder signal \(\mathbf{s}\), any practical implementation is necessarily approximate. Firstly, the summation over all possible values of \(\mathbf{s}\) is of course intractable, and in practice only finitely many terms can be used. Secondly, the isolation of \(\mathbf{s}\) depends on the model achieving causal disentanglement to a sufficient extent.

### Causal Disentanglement

No universally agreed-upon definition of _disentangled representations_ exists in the context of NNs [10]. The term is generally taken to mean that semantically distinct components of an input are represented as separate components or dimensions of the model's internal representations. _Causal_ disentanglement has a narrower meaning in that the separate signal components represent the information in the input which is causally linked to the output and the information which is only spuriously correlated with the output for a given dataset. For real-world datasets where the true data-generating process is unknown and inaccessible, the definition of causal disentanglement must necessarily be qualitative. In this work we investigate image classification models, and we take the causal information to be the information defining the image subject as given by the label \(\mathbf{y}\). We then take the spurious information to be the remaining information in the image, such as background, lighting, camera angle, and lens distortions. This is in line with the desired information content described in the works proposing our studied models. It is proven by Locatello et al. [15] that fully unsupervised learning of disentangled representations is impossible. Disentanglement must be enforced and encouraged by the choice of _inductive biases_, e.g.
the model architecture, the choice of loss function and training regime, and sample weights and dataset splits. Causal NNs are of course subject to the same limitations, and the implementation and modelling choices made are crucial in achieving the desired causal disentanglement. We therefore here highlight three important design parameters for Causal NNs. In Section 2.2 we describe how the models we have investigated realise these parameters.

#### Separation Mechanism

In order to split the signal representation into the \(C\) and \(S\) components, a dedicated separation mechanism is almost universally used in causal NN architectures. This can be as simple as a feedforward network with two outputs, but restrictions are often used to ensure that the two signal streams are in some sense complementary. Examples include using two orthogonal projection matrices [17], an attention mechanism \(a(\mathbf{x})\) and its complement \(\mathbf{1}-a(\mathbf{x})\) [20], and disjoint input masks based on measures of pixel classification importance [13][15].

#### Intervention Mechanism

As shown by Pearl [16], in order to identify causal signal components a so-called _intervention_ is necessary - in the case of a classifier, the do-calculus operation \(P(Y|do(X))\) as defined in Equation 1. In a physical approximation, this would correspond to e.g. collecting images of a target class under all possible lighting conditions, camera angles, etc., in order to evaluate the terms in the marginalisation sum. This is obviously practically impossible, and Causal NNs must therefore approximate the evaluation of Equation 1. We refer to the part of the model architecture that implements this approximation as the model's _intervention mechanism_. Some models move the intervention mechanism to the model's latent space and use additive noise \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to approximate different confounder signal values as \(\hat{\mathbf{s}}=\mathbf{s}+\mathbf{n}\) [22][23]. Others, such as Wang et al. (2021), iteratively partition the training data during training with the aim of grouping input samples with similar confounder signal values into the same partition stratum.

#### Auxiliary Loss

While the purpose of a model's separation mechanism is to enforce the independence between the causal signal \(C\) and the confounder signal \(S\), it is also necessary to apply an inductive bias to enforce the desired information content of each signal stream with respect to the input. The \(C\) signal is very often used as the basis for the model's primary task and can therefore be trained in the traditional way with a standard loss function. However, models differ in the choice of the auxiliary task associated with the confounder signal stream. Some employ \(S\) in an adversarial way to select or create augmented training samples [10][11], while others use \(S\) directly for the primary task [22] in order to align the model's output distributions for clean and adversarially perturbed data.

### The Investigated Models

In this paper, we study the following four models: the _deep causal manipulation augmented model_ (CAMA [22]), the _causal attention module_ (CaaM [22]), the _causal-inspired adversarial distribution alignment method_ (CausalAdv) [1], and the _domain-attack invariant causal learning_ model (DICE [10]). Next, we give an overview of these models' architectures and design choices.

#### CAMA

Based on a Variational Auto-Encoder (VAE) architecture, CAMA aims to model the causal variables \(M\) and \(S\) through separate encoder networks.
For clean training samples, the manipulation variable \(M\) is set to a null value, and horizontally and vertically shifted images are used during training to model manipulated data. Similar to a standard VAE, the model aims to maximise the Evidence Lower Bound (ELBO) of the training data [13], which corresponds to \(\sum_{\mathbf{x},\mathbf{y}}ELBO(\mathbf{x},\mathbf{y},\mathbf{m}=\mathbf{0})\) for clean data samples and \(\sum_{\mathbf{x},\mathbf{y}}ELBO(\mathbf{x},\mathbf{y})\) for manipulated data.

#### CaaM

The original use-case of CaaM was to perform rare-context image classification on the datasets NICO [14] and ImageNet-9 [20]. However, the model design utilises causal-confounder separation to the same end as the other models studied, namely to find distribution-invariant causal image features. The model uses a separation mechanism consisting of a CBAM [23] attention mechanism \(\mathbf{z}=\text{CBAM}(\mathbf{x})\) and its complement. The input \(\mathbf{x}\) is separated into causal features \(\mathbf{c}\) and confounder features \(\mathbf{s}\) by the relations \[\mathbf{c} =\text{Sigmoid}(\mathbf{z})\odot\mathbf{x},\] \[\mathbf{s} =\text{Sigmoid}(-\mathbf{z})\odot\mathbf{x}=\left(\mathbf{1}-\text{Sigmoid}(\mathbf{z})\right)\odot\mathbf{x},\] where \(\mathbf{z}\in\mathbb{R}^{w\times h\times c}\) and \(\odot\) is the elementwise product. The confounder features \(\mathbf{s}\) are then used to create a dataset partition \(\tau\) of splits \(t\) with similar confounder signal values, which are used to approximate the backdoor adjustment of Equation 1 as \(P(Y|do(X))\approx\sum_{t\in\tau}P(Y|X,t)P(t)\).

#### CausalAdv

The overall goal of CausalAdv is to align the modelled distributions of natural data \(P(Y|X,s)\) and adversarial data \(P(Y|\hat{X},s)\). The input signal \(\mathbf{x}\) is embedded into a latent space representation by a ResNet18 backbone to create \(\mathbf{h}=\text{ResNet}(\mathbf{x})\). A trainable linear projection \(\mathbf{W}_{c}\) is then used to extract the causal signal \(\mathbf{c}=\mathbf{W}_{c}\mathbf{h}\). In order to separate out the confounder signal \(\mathbf{s}\), a projection matrix \(\mathbf{W}_{s}\) is constructed so that it is orthogonal to \(\mathbf{W}_{c}\) in the sense that \(\mathbf{W}_{c}\mathbf{h}\perp\mathbf{W}_{s}\mathbf{h}\) for all \(\mathbf{h}\). As an approximation to the marginalisation over \(\mathbf{s}\) in the backdoor adjustment of Equation 1, random noise \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma\mathbf{I})\) is added to produce \(\hat{\mathbf{s}}=\mathbf{s}+\mathbf{n}\). The distribution alignment is then approximated by a cross-entropy (CE) loss, with two classifiers \(h\) and \(g\) predicting sample labels from \(\mathbf{c}\) and \(\hat{\mathbf{s}}\) respectively. This loss is summed across both adversarial and natural samples as \(\mathcal{L}=\alpha\text{CE}(h(\mathbf{c}),\mathbf{y})+\beta\text{CE}(g(\hat{\mathbf{s}}),\mathbf{y})\), where \(\alpha\) and \(\beta\) are positive real-valued scaling factors to adjust the relative weights of the different loss terms.

#### DICE

Similarly to CausalAdv, DICE also employs adversarial training to increase robustness. However, unlike the other models studied, DICE achieves the separation of causal and confounder signals through input masking. This mask is constructed by using the loss gradient \(\delta\in\mathbb{R}^{w\times h\times c}\) of a reference classifier with respect to the pixels in the input image, \(\delta=\nabla_{x}\mathcal{L}(f_{ref}(\mathbf{x}),\mathbf{y})\).
Pixels for which \(\max_{k}\delta_{ijk}\) is above some threshold value are set to \(0\) in order to produce a confounder sample \(\mathbf{s}_{x}\). In order to approximate the marginalisation over all possible confounders, DICE utilises a finite replay buffer of generated confounder samples \(\mathbb{S}\) and approximates the backdoor adjustment as \[P(Y|do(X))\simeq\sum_{s\in\mathbb{S}}P(Y|X,s)P(s). \tag{2}\]

### Adversarial Attacks

Even state-of-the-art NN models are susceptible to performance degradation when the input is perturbed, often only very slightly so as to be virtually imperceptible to a human observer [15]. Although the defence against such attacks is still an ongoing subject of research, a prevalent hypothesis in the field of Causal NNs is that adversarial attacks exploit learnt spurious correlations between \(\mathbf{s}\) and \(\mathbf{y}\) [16]. NNs are extremely adept at capturing statistical relations but, unlike humans, lack an understanding of causal relations. As a result, carefully crafted changes to an input image targeting the confounder signal \(\mathbf{s}\) can lead to misclassifications in a NN while being completely ineffective against humans. Since Causal NNs aim to correctly learn the causal relations between input and output data, it is argued that they can circumvent this adversarial attack vector. In order to measure the adversarial robustness of the investigated models we subject them to a range of common attacks, which are outlined in this section. All attacks are so-called _white-box_ attacks, where the attacker has full access to the weights \(\mathbf{\theta}\) and loss gradients \(\nabla\mathcal{L}(\mathbf{\theta},\mathbf{x},\mathbf{y})\) of the attacked model. White-box attacks are therefore considered the most difficult attack types to defend against. The perturbations generated by the attacks are constrained to lie within a ball of a small radius \(\epsilon\) around the clean sample, that is \(||\mathbf{x}-\tilde{\mathbf{x}}||_{p}\leq\epsilon\), where \(||\cdot||_{p}\) denotes the \(l_{p}\) norm of a vector or tensor.

#### Projected Gradient Descent

Originally proposed in Madry et al. (2017), Projected Gradient Descent (PGD) is an iterative perturbation scheme which at each iteration step \(t\) applies a small perturbation \(\delta_{t}\) to an input image \(\mathbf{x}\) in the direction of the loss function gradient \(\nabla_{\mathbf{x}}\mathcal{L}\). The new image \(\mathbf{x}_{t}=\mathbf{x}_{t-1}+\delta_{t}\) is then clipped to a ball of radius \(\epsilon\) under the chosen distance norm in order to ensure that the total allowed perturbation relative to the original input is not exceeded. The algorithm then iterates for a pre-specified number of steps or until a convergence criterion is met.

#### CW

Similar to PGD, the attack method CW proposed in Carlini and Wagner (2017) is an iterative optimisation-based scheme, but the objective in this context is to jointly maximise the discrepancy between the true and predicted label and minimise the perturbation distance relative to the original image. This is achieved by optimising a surrogate compound loss function using e.g. gradient descent for a specified number of iteration steps.

#### FGSM

Both PGD and CW are effective attack methods used to test the robustness of state-of-the-art adversarial defence methods, but due to their iterative formulations they are comparatively computationally expensive.
In contrast, the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) calculates a single perturbation proportional to the sign of the model's loss gradient as \(\delta=\epsilon\ \text{Sign}(\nabla_{\mathbf{x}}\mathcal{L})\). Although not as effective as PGD and CW, FGSM is a popular attack algorithm due to its lower computational cost.

### Disentanglement Metrics

Quantifying disentanglement in NNs is motivated by the heuristic idea that, in a disentangled representation, different signal components should correspond to different high-level semantic concepts in the data represented. Although a multitude of quantitative disentanglement metrics has been proposed (Carbonneau et al., 2022; Kim and Mnih, 2018), the vast majority are restricted by at least one of the following two strong assumptions. Firstly, a large body of work on disentanglement quantification is concerned with models trained on synthetically generated datasets (Locatello et al., 2019; Kim and Mnih, 2018). Such datasets have the benefit that it is possible to alter the parameters or _factors_ of the data-generating process explicitly and measure directly the effect this has on the model's internal representations. This limits the application of such metrics, as direct access to the ground-truth data-generating factors of real-world datasets is impossible. For real-world datasets, these values can only be approximated by extensive annotation of samples with some chosen set of semantically descriptive attributes - e.g. annotating images of humans with information about age, gender, background type and so on. The second limitation is the assumption of a probabilistic generative model, typically some form of Variational Auto-Encoder. Such models consist of a probabilistic encoder learning a latent space representation \(\mathbf{z}\) of the input data \(\mathbf{x}\) by approximating the distribution \(p(\mathbf{z}|\mathbf{x})\) and a decoder parameterising \(q(\mathbf{x}|\mathbf{z})\). Many disentanglement metrics, such as those proposed in Duan et al. (2019) and Do and Tran (2019), are concerned with measures of mutual information and conditional entropy between different signal components. While these measures are informative for probabilistic models, they are provably vacuous for deterministic NNs such as standard Convolutional NNs and Transformers. As demonstrated in Goldfeld et al. (2019), the conditional entropy \(H(Z|X)\) is no longer meaningful in the information-theoretic sense when \(Z\) is a deterministic function of \(X\). The task of quantitatively assessing signal disentanglement in deterministic models without access to the ground-truth data-generating process therefore limits the set of available metrics. However, Liu et al. (2020) propose the use of two metrics to measure the disentanglement of the representations of style and content in an image, which bears some similarities to our goal of quantifying the disentanglement of causal and confounder signals relative to some input data. The first of these two metrics is Distance Correlation (\(DC\)). Proposed in Székely et al. (2007), \(DC\) is a well-established measure of the dependence between two variables. The second is Information over Bias (\(IoB\)), proposed in Liu et al. (2020), which uses the reconstruction error of a NN trained to reconstruct a signal \(\mathbf{x}\) from a representation \(\mathbf{z}\) as a measure of the information content of \(\mathbf{z}\) with respect to \(\mathbf{x}\).
#### Distance Correlation

Given a set of \(N\) pairs of vector- or tensor-valued samples \(\{(\mathbf{u},\mathbf{v})\}_{n=1}^{N}=(\mathbf{U},\mathbf{V})\), the \(DC\) is defined as follows. Let \(\mathbf{A}^{*}\) and \(\mathbf{B}^{*}\) be the unnormalised distance matrices of \(\mathbf{u}\) and \(\mathbf{v}\) respectively, under some distance metric \(||\cdot||\), such that \(\mathbf{A}^{*}_{i,j}=||\mathbf{u}_{i}-\mathbf{u}_{j}||\) and \(\mathbf{B}^{*}\) is defined similarly for \(\mathbf{v}\). A normalisation is then applied by subtracting off the column mean and the row mean and adding the global mean of each matrix, to obtain \(\mathbf{A}\) and \(\mathbf{B}\), where \(\mathbf{A}_{i,j}=\mathbf{A}^{*}_{i,j}-\tilde{\mathbf{A}}_{i,\cdot}-\tilde{\mathbf{A}}_{\cdot,j}+\tilde{\mathbf{A}}_{\cdot,\cdot}\). The squared distance covariance \(dCov^{2}\) is defined as the arithmetic mean of \(\mathbf{A}_{i,j}\mathbf{B}_{i,j}\) over all \(N^{2}\) index pairs. The \(DC\) is then calculated analogously to a correlation coefficient, as the normalised distance covariance: \[DC(\mathbf{U},\mathbf{V}) =\frac{dCov(\mathbf{U},\mathbf{V})}{\sqrt{dCov(\mathbf{U},\mathbf{U})dCov(\mathbf{V},\mathbf{V})}},\] \[dCov(\mathbf{U},\mathbf{V}) =\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\mathbf{A}_{i,j}\mathbf{B}_{i,j}}{N^{2}}}.\] Unlike Pearson's Correlation Coefficient, a value of \(DC(X,Y)=0\) implies that \(X\) and \(Y\) are independent. Note also that \(DC\) allows for the measurement of dependence between variables of different dimensionalities. The computation of \(DC(X,Y)\) requires only that a distance metric is defined between samples of the _same_ variable, \(||x_{i}-x_{j}||\), but crucially does _not_ require \(||x_{i}-y_{i}||\) to be defined. Importantly for our application, this allows us to compute the dependence between e.g. a vector \(\mathbf{c}\) and a \(channel\times width\times height\) image tensor \(\mathbf{x}\). \(DC\) is therefore a general measure of the dependence between two variables.

#### Information over Bias

Given some input data \(\mathbf{x}\) and a learned representation \(\mathbf{z}\), a decoder network \(g_{\theta}\) is trained to reconstruct \(\mathbf{x}\) from \(\mathbf{z}\). The \(IoB\) is then defined as the average reconstruction performance gain, in terms of the Mean Squared Error, when operating on \(\mathbf{z}\) compared to on \(\mathbf{1}\), a dummy input vector of ones: \[IoB(\mathbf{x},\mathbf{z})=\frac{1}{N}\sum_{i=1}^{N}\frac{MSE(\mathbf{x}_{i},g_{\theta}(\mathbf{1}))}{MSE(\mathbf{x}_{i},g_{\theta}(\mathbf{z}_{i}))}. \tag{3}\] Like \(DC\), \(IoB\) is attractive as a metric because it admits both tensor and vector representations for the signals \(\mathbf{x}\) and \(\mathbf{z}\), and does not require \(\mathbf{x}\) and \(\mathbf{z}\) to be of the same size or dimensionality. It offers a flexible measure of relative information content, without being restricted to stochastic signals.

## 3 Related Work

For completeness, we briefly review a few other causal models and explain why they are not studied in this paper, followed by an overview of related approaches for measuring model disentanglement.

### Other Causal Neural Network Models

In this paper, we are concerned with models which aim to explicitly model causal and confounder signals, with the goal of using the causal signal for robust predictions. Approaches such as Ren et al. (2022), where Causal Inference is successfully used to create heuristic metrics for the detection of adversarial attacks, also exist.
This approach uses Causal Inference to motivate the analysis of the model, but does so as a second step on top of the trained model, and therefore falls outside the model type considered in this paper. Models from domains other than image recognition are also of relevance, although outside the scope of this paper. In Zhao et al. (2022), a Natural Language Processing model uses latent-space smoothing over the confounder signal, in a similar manner to CausalAdv, to increase adversarial robustness. CATT, proposed in Yang et al. (2021), has a similar design philosophy to the models investigated in this paper, although the causal intervention is performed as a front-door adjustment. However, the marginalisation over the confounding signal is absorbed into the model's intersample and intrasample attention mechanisms. This obfuscates the measurement of the \(C\) and \(S\) signals without the application of additional modelling assumptions. CONTA, as proposed in Zhang et al. (2020), is another related model, where the confounder signal is not constructed on a per-instance basis as in the models presented here, but rather as an average pixel classification importance map across all samples in a class. However, both CONTA and CATT could be interesting objects of future work in the measurement and analysis of causal disentanglement.

### Disentanglement of Representations

In terms of measuring the disentanglement of different model architectures, Locatello et al. (2019) offer a thorough investigation of VAE-style models on the task of learning disentangled representations in an unsupervised fashion for seven synthetic datasets. Similarly, Sepliarskaia et al. (2019) investigate the performance of disentanglement metrics for VAE models on synthetic datasets and propose a new quantitative metric for measuring this disentanglement. In contrast, we focus on measuring the disentanglement of Causal NN models with metrics which are generally applicable also to deterministic models trained on real-world datasets without access to the true data-generating factors. The most relevant paper to this end is probably Liu et al. (2020), which aims to measure the disentanglement of content and style in three representative computer vision models. However, this is not in the context of causal disentanglement, nor is it related to adversarial robustness.

## 4 Methodology

In this section, we detail the motivation for and setup of the experiments conducted, as well as the choice of causal and confounder signals for each model. With these experiments, we specifically aimed to address the following research questions. **RQ1:** To what extent and in what way do the investigated models exhibit causal disentanglement? **RQ2:** What is the relationship between the measured metric values and the models' performance? **RQ3:** What is the relationship between the measured metric values and the models' robustness to adversarial attacks?

### Measurements

As the models were trained on real-world datasets without any other annotation than class labels, the choice of exactly which aspects of the models' signals to measure does not have a unique, well-defined answer a priori. Therefore, we selected five measurements which we believe each capture important aspects of the models' causal disentanglement behaviour. These measurements are variations on the ones proposed in Liu et al. (2020), and are defined and motivated in this subsection as well as summarised in Table 1.
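All five measurements are built from the \(DC\) of Section 2.4 and the \(IoB\) of Equation 3. For reference, here is a minimal NumPy sketch of the empirical \(DC\) computation; flattening each sample to a vector before taking Euclidean distances is our illustrative choice of distance metric, and the explicit pairwise broadcast is only practical for modest \(N\).

```python
import numpy as np

def distance_correlation(U, V):
    # Empirical distance correlation between N paired samples.
    # U, V: arrays of shape (N, ...); samples are flattened, so the
    # two variables may have different dimensionalities.
    def centred_distances(M):
        M = M.reshape(len(M), -1)
        A = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
        # double centring: subtract row/column means, add the global mean
        return (A - A.mean(axis=0, keepdims=True)
                  - A.mean(axis=1, keepdims=True) + A.mean())

    A, B = centred_distances(U), centred_distances(V)
    dcov2_uv, dcov2_uu, dcov2_vv = (A * B).mean(), (A * A).mean(), (B * B).mean()
    denom = (dcov2_uu * dcov2_vv) ** 0.25
    return np.sqrt(max(dcov2_uv, 0.0)) / denom if denom > 0 else 0.0
```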
**Separation of Causal and Confounder Signals** Perhaps the most central characteristic of the signal flow in Causal NNs is the separation of the signal streams of the causal signal \(\mathbf{c}\) and the confounder signal \(\mathbf{s}\). The way we chose to quantify this behaviour was by measuring the \(DC\) between these two signal streams. A high \(DC(C,S)\) means that \(C\) and \(S\) are correlated and dependent, which is contrary to the goal of Causal NNs. We therefore take a high \(DC(C,S)\) value to indicate low causal disentanglement. The first measurement is then defined as \(M_{1}=1-DC(C,S)\), so that a high value of \(M_{1}\) corresponds to a high degree of causal/confounder separation.

**Causal Signal Informativeness** Since the Causal NNs studied in this work by definition employ the causal signal \(\mathbf{c}\) in performing their primary task, we believe it is useful to measure the information content of this signal with respect to the input \(\mathbf{x}\). In our experiments, this was done with two separate measurements. The first is \(M_{2}=DC(X,C)\), which measures the correlation between the causal signal and the input image. The second measurement is based on \(IoB(X,C)\), that is, how well the input image \(\mathbf{x}\) can be reconstructed on a pixel level from \(\mathbf{c}\) relative to from an uninformative signal. \(IoB(X,C)\) takes on its minimum value of \(1\) when the causal signal is completely uninformative, and higher values indicate higher informativeness. To normalise the range of our measurements we reciprocate the ratio, defining \(BoI(X,C)=\frac{1}{IoB(X,C)}\), and let \(M_{4}=1-BoI(X,C)\). \(M_{4}\) is now in the range \([0,1]\) and higher values indicate higher pixel-level information content of \(\mathbf{c}\) with respect to \(\mathbf{x}\).

**Confounder Signal Informativeness** What the desirable properties of the confounder signal \(\mathbf{s}\) are in Causal NNs is still an open research question. It is argued by Liu et al. (2020) that when measuring content/style disentanglement it is necessary for the style signal to be informative with respect to the input image. This is because a style signal consisting of e.g. random noise would be disentangled from the content signal in the sense that the two would be independent. Liu et al. (2020) consider this a failure mode of their content/style disentanglement and argue that an informative style signal is necessary in order to rule out such failure. Our experiments are concerned with the disentangling of causal and confounder signals, and we believe it is not a priori obvious which properties of the confounder signal \(\mathbf{s}\) are beneficial to the performance and robustness of Causal NNs. Nonetheless, we believe that the semantic information that Causal NNs encourage in the confounder signal stream, such as information about background, lighting, and camera angle, bears similarities with the information intended for the style signal in content/style disentangled NNs. Hence, we define the measurement \(M_{3}=DC(X,S)\) to assess the dependency between the input image \(\mathbf{x}\) and the confounder signal \(\mathbf{s}\). Similarly to \(M_{4}\), we finally define \(M_{5}=1-BoI(X,S)\) to measure the pixel-level reconstructive information in the confounder signal.

### Model Selection

The four models selected for analysis in this paper have shown good performance on challenging primary tasks such as adversarial attack robustness and rare-context image classification.
We have chosen to study the disentanglement behaviour of these models because they all explicitly aim to separate the modelling of causal and spurious signals, and argue that this causal consistency is the reason for each model's high performance. The models were published in the period 2020 to 2022 and we believe they are representative of the current state of the art in Causal NN models.

### Choice of Causal and Confounder Signals

Throughout our analysis, the causal and confounder signals for each model were taken as follows:

#### CAMA

The value of \(S\) is sampled once per input image from the latent style representation of the final encoder network as \(S\sim q(S|X,Y,M)\), and \(C\) is taken as the hidden-state representation \(\mathbf{h}_{y}\) of the label \(\mathbf{y}\) as produced by the pre-merge step in the decoder.

#### CaaM

\(C\) and \(S\) were taken as the outputs \(\mathbf{c}\) and \(\mathbf{s}\) of the final disentanglement block of the CNN-CaaM model with a ResNet18 backbone.

#### CausalAdv

After the latent space embedding of the input as \(\mathbf{h}=\text{ResNet18}(\mathbf{x})\), \(C\) was taken as the projection \(\mathbf{c}=\mathbf{W}_{c}\mathbf{h}\). \(S\) was chosen as \(\mathbf{s}=\mathbf{W}_{s}\mathbf{h}\), i.e. before the addition of the Gaussian noise \(\mathbf{n}\).

#### DICE

For DICE, \(S\) was taken as the embedded confounder sample \(\mathbf{s}=\text{ResNet18}(\mathbf{s}_{x})\) and \(C\) as the embedding of \(\mathbf{x}_{c}\), i.e. the input image \(\mathbf{x}\) after the backdoor adjustment of Equation 2 has been approximated as \(\mathbf{x}_{c}=\mathbf{x}+\sum_{\mathbf{s}\in\mathbb{S}}P(\mathbf{s})\mathbf{s}\).

### Experimental Setup

The four models we have studied vary in terms of their intended use case, as well as their natural performance on their primary tasks. In order to make as fair a comparison as possible, we altered or re-implemented DICE, CaaM, and CausalAdv to employ the same ResNet18 backbone architecture. CAMA, being structured as a VAE, differs quite significantly from the other three and does not rely on the same type of initial latent-space embedding of the input data. In order not to deviate too much from CAMA's original design, we opted to keep the architecture as described in Zhang et al. (2020). We conducted all experiments using the three standard image recognition benchmarking datasets MNIST (LeCun et al., 1998), CIFAR10, and CIFAR100 (Krizhevsky et al., 2009). All models were trained for a fixed number of epochs, and the model with the highest validation accuracy on the clean dataset was returned for testing in each case. For all datasets, the original training split was randomly partitioned into train and validation splits in the ratio \(4:1\).

#### Metrics and Measurements

All \(DC\) values were computed over each dataset's test split, and \(IoB\) models were trained on the train split and tested on the test split. The training budget for each model was set to roughly match the training setup in the respective original papers. For the training of decoder models in the computation of \(IoB\), \(20\%\) of the available training data was randomly selected as a validation split, and models were returned when no validation improvements were seen for \(40\) epochs. During the tracking of disentanglement metrics throughout entire training runs, this validation patience was lowered to \(5\) epochs and the total training budget was capped at \(50\) epochs.
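Putting the pieces together, a minimal sketch of how the five measurements of Table 1 can be computed from per-sample signals; it assumes the `distance_correlation` helper sketched in Section 2.4 and a hypothetical `iob(x, z)` routine that trains the decoder of Equation 3 and returns the \(IoB\) ratio (both are our assumptions, not the authors' code).

```python
def measurements(x, c, s):
    # x: input images, c: causal signals, s: confounder signals, each
    # shaped (N, ...) with matching sample order. Uses the
    # distance_correlation sketch above and a hypothetical iob(x, z).
    return {
        "M1": 1 - distance_correlation(c, s),  # causal/confounder separation
        "M2": distance_correlation(x, c),      # input/causal correlation
        "M3": distance_correlation(x, s),      # input/confounder correlation
        "M4": 1 - 1 / iob(x, c),               # pixel-level info in causal signal
        "M5": 1 - 1 / iob(x, s),               # pixel-level info in confounder signal
    }
```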
#### Adversarial Robustness

In order to assess the robustness of the models we used the three standard attack algorithms \(PGD\), \(FGSM\), and \(CW\) under different distance norms and optimisation budgets, for a total of 7 attack configurations - these are enumerated in Table 2. When measuring robustness we first measured the models' classification accuracy on the unperturbed test split of each dataset to get the clean accuracy \(a_{c}\). We then attacked each dataset's test split with each of the seven attack configurations and measured the models' resulting perturbed accuracy \(a_{p}\). Finally, we calculated the absolute performance drop as \(\Delta_{abs}=a_{c}-a_{p}\) and the relative performance drop as \(\Delta_{rel}=\frac{\Delta_{abs}}{a_{c}}\).

\begin{table} \begin{tabular}{c|l|l} \hline \hline \(M_{i}\) & **Value** & **Interpretation** \\ \hline \(M_{1}\) & \(1-DC(C,S)\) & Causal/confounder separation \\ \hline \(M_{2}\) & \(DC(X,C)\) & Input/causal signal correlation \\ \hline \(M_{3}\) & \(DC(X,S)\) & Input/confounder signal correl. \\ \hline \(M_{4}\) & \(1-BoI(X,C)\) & Pixel info in causal signal \\ \hline \(M_{5}\) & \(1-BoI(X,S)\) & Pixel info in confounder signal \\ \hline \hline \end{tabular} \end{table} Table 1: The measurements taken of the models investigated. All measurement values are in the range \([0,1]\).

## 5 Results and Analysis

In this section, we present the results of our experimental evaluation of the four chosen models, as well as analyse and discuss the findings in light of our three research questions.

### RQ1: Observed Disentanglement

The full set of measurement values across each of the tested models is shown in Table 3. The first thing to note is that although all models aim to disentangle the causal and confounder signal streams, there is a large variation in how well \(C\) and \(S\) are decorrelated. The VAE-style model CAMA achieves the highest separation, with an average value of \(M_{1}=1-DC(C,S)\) of 0.917, close to full statistical independence between \(C\) and \(S\). The lowest level of decorrelation is achieved by CaaM, with an average \(M_{1}\) value of 0.132. All models score on average 0.442 or higher on the correlation of causal signal and input content as measured by \(M_{2}=DC(X,C)\). This is to be expected, as the causal signal stream \(\mathbf{c}\) is used by each model to make classification predictions, and hence a high \(M_{2}\) value is directly encouraged during training. The models vary considerably in terms of how correlated the confounder signal and input are, from CAMA with an average \(M_{3}=DC(X,S)\) of 0.711 to CausalAdv with a value of 0.128. We see that CausalAdv _consistently_ exhibits low correlation between confounder and input across all three datasets, with a standard deviation of only 0.07. Note that the confounder signal is measured _before_ the addition of the Gaussian noise term in this model, which makes the low value even more notable. In terms of pixel-level information, it is interesting to note that even though CAMA is a VAE-type model and aims to reduce reconstruction loss during training, it is not the model which best manages to reconstruct the input from either the causal or confounder signal. Finally, we note that DICE's causal signal is both the most correlated with the input signal and the causal signal which is best able to reconstruct the input, indicating high causal signal information content.
Similarly, CausalAdv's confounder signal is both the least correlated with the input and has the least capacity to reconstruct the input. **Summary** Even though all models aim to disentangle the causal and confounder signal streams, there is a large variation in the extent to which these signal streams are decorrelated as measured by \(DC(C,S)\). There is also moderate variation between the models in terms of the information content of the confounder stream with respect to the input, as measured both by \(DC(X,S)\) and \(1-BoI(X,S)\). ### RQ2: Disentanglement and Performance The only measurement value with a statistically significant correlation with a model's performance on its primary task is \(M_{2}=DC(X,C)\), the distance correlation between the causal signal stream and the input image. The Pearson Correlation Coefficient (PCC) between \(M_{2}\) and a model's clean test classification accuracy \(a_{c}\) is \(r=0.741\) at a p-value of \(p=0.006\). This relationship is also illustrated in Figure 1, which plots \(DC(X,C)\) values vs clean test accuracy for all models on all datasets. The green dashed line in Figure 2 shows the evolution of measurement \(M_{1}=1-DC(C,S)\) as a function of training epoch for CAMA training on CIFAR10 and DICE training on CIFAR100. We note that both models see a decrease in \(M_{1}\) as training progresses, which corresponds to the type of disentanglement encouraged by both models' inductive biases. **Summary** There is a strong association (\(r=0.741\)) between the distance correlation of the causal signal and the input and the clean test accuracy of a model. Apart from this, no measurement shows a statistically significant association with model performance on clean data. ### RQ3: Disentanglement and Robustness Table 4 shows the clean and adversarial accuracy of all models on all datasets, as well as the relative adversarial performance decrease \(\Delta_{rel}\). There is some variation in the clean data performance between models, with CaaM achieving the highest accuracy for all datasets. CAMA scores significantly lower than the other models on both CIFAR10 and CIFAR100, but these accuracies are within expectations for a simple VAE-style model. CausalAdv and DICE achieve the best and second-best average adversarial accuracies respectively, which is also reasonable given that these two models use adversarial training with PGD10 attacks as part of their training loops. More surprising is the relative robustness of CAMA, which only uses slight rotations and translations of input images to model adversarial perturbations during training. Finally, we observe that CaaM suffers the largest performance degradation under adversarial attacks, by a large margin. \begin{table} \begin{tabular}{c|c|c|c} \hline Attack & \# Steps & Norm & \(\epsilon\) \\ \hline PGD & 20 & \(l_{2}\) & \(1.0\) \\ \hline PGD & 40 & \(l_{2}\) & \(1.0\) \\ \hline PGD & 20 & \(l_{\infty}\) & \(\frac{8}{255}\) \\ \hline PGD & 40 & \(l_{\infty}\) & \(\frac{8}{255}\) \\ \hline FGSM & - & \(l_{\infty}\) & \(\frac{8}{255}\) \\ \hline CW & 20 & \(l_{2}\) & \(1.0\) \\ \hline CW & 40 & \(l_{2}\) & \(1.0\) \\ \hline \end{tabular} \end{table} Table 2: Adversarial attacks used to test the robustness of models, with the number of iteration steps, maximum perturbations, and distance norms. Figure 1: \(DC(X,C)\) vs clean test classification accuracy for all models on all datasets. Linear best-fit line in black.
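For concreteness, the protocol that produces the numbers in Table 4 reduces to the sketch below. We show it with a single-step FGSM perturbation standing in for the full PGD/CW suite of Table 2; the function names and the assumption that inputs lie in \([0,1]\) are our own.

```python
import torch

def fgsm(model, x, y, eps=8/255, loss_fn=torch.nn.CrossEntropyLoss()):
    """Single-step l_inf attack (FGSM); a stand-in for the full attack suite."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def robustness_drop(model, loader):
    a_c = accuracy(model, loader)                       # clean accuracy
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(model, x, y)                       # perturbed inputs
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    a_p = correct / total                               # perturbed accuracy
    return a_c - a_p, (a_c - a_p) / a_c                 # Delta_abs, Delta_rel
```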
In order to assess the association between the different measurements made and model robustness quantitatively, Table 5 shows the PCC of the five measurements taken for each model and each model's clean accuracy, average adversarial accuracy across the seven attacks used, and corresponding average absolute and relative performance drop. At a significance threshold of \(p=0.05\), there are five statistically significant associations. Firstly, we see that a high \(M_{2}=DC(X,C)\) value is associated with both a high clean test accuracy (see Section 5.2) and a high average adversarial accuracy (\(r=0.638\), \(p=0.026\)). This is likely because the causal signal \(\mathbf{c}\) is used directly for classification, and hence a higher correlation with the input image makes the model's prediction task easier. It is interesting to see that high _pixel-level_ information content in the causal signal as measured by \(M_{4}=1-BoI(X,C)\) is _not_ associated with either clean or adversarial accuracy. This could indicate that the information in the causal signal should capture more high-level features of the input rather than low-level pixel information in order for the model to make accurate predictions. We also see that high pixel-level information in the confounder signal \(\mathbf{s}\) in terms of \(1-BoI(X,S)\) is moderately associated with relative adversarial performance degradation (\(r=0.597\), \(p=0.040\)), although this association is no longer significant when accuracy degradation is measured in absolute terms. This gives some indication that low-level input information in the confounder signal hurts model robustness. This is interesting as it goes against what is argued by Liu et al. (2020), namely that pixel-level informative content in the content and style signals is a desirable disentanglement property. However, this is in line with the recent trend of encouraging higher-level semantic content rather than low-level pixel information in learned representations, as seen in e.g. LeCun (2022). The strongest correlations we find are between the decorrelation of the causal and confounder signals \(M_{1}=1-DC(C,S)\) and adversarial robustness. Decorrelation is strongly negatively associated with both the absolute (\(r=-0.820\), \(p=0.001\)) and the relative (\(r=-0.720\), \(p=0.008\)) adversarial performance drop. This is strong evidence in support of the notion that causally disentangled representations are beneficial for adversarial robustness. This relationship is also illustrated in Figure 3, which shows the value of \(M_{1}\) against \(\Delta_{abs}\) for all models, datasets, and attacks in black, with the average performance drop across all attacks indicated by red diamonds. However, it is interesting to note that the bottom plot in Figure 2 shows a point during model training after which \(M_{1}\) increases and adversarial accuracy under the PGD40 attack decreases. This could indicate that there is a sweet spot during model training, after which the increasing \(M_{1}\) is a result of model overfitting. **Summary** We observe a strong association between the decorrelation of causal and confounder signals and a model's adversarial robustness: decorrelation is strongly negatively correlated with the absolute adversarial performance drop (\(r=-0.820\), \(p=0.001\)). This supports the idea that causal disentanglement helps robustness. ## 6 Key Findings and Conclusions In this paper, we investigated the causal disentanglement of four state-of-the-art Causal NN models.
We used metrics from content/style disentanglement to assess different aspects of the separation and information content of the causal and confounder signals in each model, without requiring access to the ground-truth data-generating function or restricting our analysis to stochastic models. Finally, we quantitatively assessed the association between the metrics and both the clean performance and the adversarial robustness of the models under a range of different common attacks. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Dataset & Model & Clean & Mean \(a_{p}\) & \(\Delta_{rel}\) \\ \hline \multirow{4}{*}{MNIST} & CAMA & 95.6\% & 81.8\% & 14.5\% \\ \cline{2-5} & CaaM & **99.6\%** & 18.3\% & 81.7\% \\ \cline{2-5} & CausalAdv & 99.3\% & **97.4\%** & **1.9\%** \\ \cline{2-5} & DICE & 99.1\% & 97.0\% & 2.1\% \\ \hline \multirow{4}{*}{CIFAR10} & CAMA & 35.1\% & 23.3\% & **33.6\%** \\ \cline{2-5} & CaaM & **83.6\%** & 4.5\% & 94.6\% \\ \cline{2-5} & CausalAdv & 80.5\% & **45.4\%** & 43.6\% \\ \cline{2-5} & DICE & 79.3\% & 39.7\% & 49.9\% \\ \hline \multirow{4}{*}{CIFAR100} & CAMA & 15.2\% & 9.2\% & **39.3\%** \\ \cline{2-5} & CaaM & **54.7\%** & 1.7\% & 96.8\% \\ \cline{1-1} \cline{2-5} & CausalAdv & 52.4\% & **23.7\%** & 54.8\% \\ \cline{1-1} \cline{2-5} & DICE & 52.1\% & 22.3\% & 57.2\% \\ \hline \hline \end{tabular} \end{table} Table 4: Clean accuracy, average adversarial accuracy, and relative accuracy drop \(\Delta_{rel}\) for all tested models and datasets. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Model & \(1-DC(C,S)\) & \(DC(X,C)\) & \(DC(X,S)\) & \(1-BoI(X,C)\) & \(1-BoI(X,S)\) \\ \hline CAMA & \(\mathbf{0.917}\pm 0.01\) & \(0.442\pm 0.21\) & \(\mathbf{0.711}\pm 0.18\) & \(0.156\pm 0.10\) & \(0.368\pm 0.22\) \\ \hline CaaM & \(0.132\pm 0.16\) & \(0.512\pm 0.16\) & \(0.493\pm 0.18\) & \(0.415\pm 0.08\) & \(\mathbf{0.460}\pm 0.06\) \\ \hline CausalAdv & \(0.819\pm 0.09\) & \(0.524\pm 0.14\) & \(0.128\pm 0.07\) & \(0.190\pm 0.12\) & \(0.006\pm 0.01\) \\ \hline DICE & \(0.424\pm 0.28\) & \(\mathbf{0.634}\pm 0.09\) & \(0.587\pm 0.25\) & \(\mathbf{0.455}\pm 0.03\) & \(0.275\pm 0.19\) \\ \hline \hline \end{tabular} \end{table} Table 3: Values of the five measured metrics, averaged over the three datasets for each model. Values are given as mean \(\pm\) std. #### Key Findings * Although each model aims to separate the representations of causal and confounder signals, there is a large variation in how well this aim is achieved. * High distance correlation between the causal and input signals is associated with higher classification accuracy on both clean and adversarially perturbed test data. * The decorrelation of causal and confounder signals is strongly associated with adversarial robustness. ### Conclusions Our findings suggest that the decorrelation of causal and confounder signals is useful for achieving robust Causal NNs, whereas low-level pixel information content appears unhelpful at best in the causal signal and seems to degrade robustness in the confounder stream. This indicates that the appropriate signal decorrelation should be encouraged during training in order to improve the robustness of the model. We also believe that the methodology applied in this work will be beneficial for other researchers investigating Causal NNs and disentangled representations, as the measurements used are flexible in that they permit an extensive range of signal types. **Limitations** Our choice of measurements was based on those taken in Liu et al.
(2020), with the motivation of capturing both signal information content and inter-signal dependency. Nonetheless, other measurement choices are possible. Similarly, the question of exactly which internal model signal to treat as the sampled value of \(C\) and \(S\) does not have a definite and unique answer for each model and entails some level of qualitative judgement. An exhaustive set of experiments using all possible reasonable choices for these values was infeasible; we have therefore chosen the values which we believe have, in each case, the closest correspondence to the causal variables employed in each model's design to represent the causal and confounder signals. Nonetheless, other researchers might have chosen differently. With a total of four model architectures trained on three relatively simple datasets, it is hard to draw definitive conclusions from our analysis. Although the results are promising, more datasets and models should be investigated. #### Future Work An obvious direction for future work is to expand this comparative analysis to include a larger selection of models, tasks, and datasets. This paper is concerned with measuring the potential benefits of disentangled causal representations for adversarial robustness. Still, other desirable model properties are also of interest, such as out-of-distribution generalisation, few-shot learning, and sample efficiency. We hope that the general disentanglement quantification system utilised in this work will prove useful to other researchers investigating these related topics.
2310.11884
From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks
In this paper, we review recent approaches for explaining concepts in neural networks. Concepts can act as a natural link between learning and reasoning: once the concepts are identified that a neural learning system uses, one can integrate those concepts with a reasoning system for inference or use a reasoning system to act upon them to improve or enhance the learning system. On the other hand, knowledge can not only be extracted from neural networks but concept knowledge can also be inserted into neural network architectures. Since integrating learning and reasoning is at the core of neuro-symbolic AI, the insights gained from this survey can serve as an important step towards realizing neuro-symbolic AI based on explainable concepts.
Jae Hee Lee, Sergio Lanza, Stefan Wermter
2023-10-18T11:08:02Z
http://arxiv.org/abs/2310.11884v2
# From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks ###### Abstract In this paper, we review recent approaches for explaining _concepts_ in neural networks. Concepts can act as a natural link between learning and reasoning: once the concepts are identified that a neural learning system uses, one can integrate those concepts with a reasoning system for inference or use a reasoning system to act upon them to improve or enhance the learning system. On the other hand, knowledge can not only be extracted from neural networks but concept knowledge can also be inserted into neural network architectures. Since integrating learning and reasoning is at the core of neuro-symbolic AI, the insights gained from this survey can serve as an important step towards realizing neuro-symbolic AI based on explainable concepts. Keywords: Explainable artificial intelligence, concept explanation, neuro-symbolic integration ## 1 Introduction In recent years, neural networks have been successful in tasks that were regarded to require human-level intelligence, such as understanding and generating images and texts, performing dialogues, and controlling robots to follow instructions [1, 2, 3]. However, their decision-making is often _not_ explainable, which undermines user trust and negatively impacts their usage in sensitive or critical domains, such as automation, law, and medicine. One way to overcome this limitation is by making neural networks explainable, e.g., by designing them to generate explanations or by using a _post-hoc_ explanation method that analyzes the behavior of a neural network after it has been trained. This paper reviews explainable artificial intelligence (XAI) methods with a focus on explaining how neural networks learn _concepts_, as concepts can act as primitives for building complex rules, presenting themselves as a natural link between learning and reasoning [4], which is at the core of neuro-symbolic AI [5, 6, 7, 8, 9]. On the one hand, identifying the concepts that a neural network uses for a given input can inform the user about what information the network is using to generate its output [10, 11, 12, 13, 14, 15]. Combined with an approach to extract all relevant concepts and their (causal) relationships, one could generate explanations in logical or natural language that faithfully reflect the decision procedure of the network. On the other hand, the identified concepts can help a symbolic reasoner intervene in the neural network, such that debugging the network becomes possible by modifying the concepts [16, 17, 18, 11]. Several XAI surveys have been published in recent years [19, 20, 21, 22, 23, 24, 25]. However, almost all of them are mainly concerned with the use of saliency maps to highlight important input features. Only a few surveys include concept explanation as a way to explain neural networks. A recent survey in this vein is by Casper et al. [26], which discusses a broad range of approaches to explaining the internals of neural networks. However, due to its broader scope, that survey does not provide detailed descriptions of methods for explaining concepts and misses recent advances in the field. The surveys by Schwalbe [27] and Sajjad et al. [28], on the other hand, are dedicated to specific kinds of concept explanation methods with a focus on either vision [27] or natural language processing [28] and are, therefore, limited in scope, failing to analyze the two areas together.
We categorize concept explanation approaches and structure this survey based on whether they explain concepts at the level of individual neurons (Section 2) or at the level of layers (Section 3). The last section summarizes this survey with open questions. ## 2 Neuron-Level Explanations The smallest entity in a neural network that can represent a concept is a _neuron_ [28], which could be--in a broader sense--also a _unit_ or a _filter_ in a convolutional neural network [10]. In this section, we survey approaches that explain, in a post-hoc manner, the concepts that a neuron of a pre-trained neural network represents, either by comparing the _similarity_ between a concept and the activation of the neuron (see Section 2.1) or by detecting the _causal relationship_ between a concept and the activation of the neuron (see Section 2.2). ### Using Similarities between Concepts and Activations In this category, the concept a neuron is representing is explained by comparing the concept with the activations of the neuron when the concept is passed as an input to the model. The _network dissection_ approach by Bau et al. [10] is arguably the most prominent approach in this category, and it is mainly applied to computer vision models. In this approach, a set \(\mathcal{C}\) of concepts is prepared, as well as a set \(\mathcal{X}_{C}\) of images for each concept \(C\in\mathcal{C}\). Then the activations of a convolutional filter are measured for each input \(x\in\mathcal{X}_{C}\). Afterward, the activation map is thresholded to generate a binary activation mask \(M(x)\) and scaled up to be compared with the original concept (e.g., concept head) in the binary segmentation mask \(L_{C}(x)\) of the input \(x\) (e.g., the head segment of an image with a bird). See Figure 1 for an illustration. Then, to measure to which degree concept \(C\) is represented by the convolutional filter, the dataset-wide intersection over union (IoU) metric is computed, which is defined as \(\text{IoU}(C)=\sum_{x\in\mathcal{X}_{C}}|M(x)\cap L_{C}(x)|/\sum_{x\in\mathcal{X}_{C}}|M(x)\cup L_{C}(x)|\). If the IoU value is above a given threshold, then the convolutional filter represents the concept \(C\). Several extensions of this approach have been introduced. Fong et al. [29] question whether a concept has to be represented by a single convolutional filter alone or whether it can be represented by a linear combination of filters. They show that the latter leads to a better representation of the concept and also suggest using binary classification to measure how well filters represent a concept. Complementary to that extension, Mu et al. [13] investigate how to better approximate what a single filter represents. To this end, they assume that a filter can represent a boolean combination of concepts (e.g., (water OR river) AND NOT blue) and show that this compositional explanation of concepts leads to a higher IoU. An intuitive extension of compositional explanations is using _natural language_ explanations. The approach called MILAN by Hernandez et al. [14] finds such natural language explanations as a sequence \(d\) of words that maximizes the pointwise mutual information between \(d\) and a set of image regions \(E\) that maximally activates the filter (i.e., \(\arg\max_{d}\log P(d\mid E)-\log P(d)\)).
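Returning to the dissection procedure above, a minimal sketch of the dataset-wide IoU score is given below; the thresholding quantile, the acceptance threshold, and the array shapes are illustrative assumptions rather than the exact settings of [10].

```python
import numpy as np

def dissection_iou(act_maps, concept_masks, quantile=0.995):
    """Dataset-wide IoU between one filter's activations and one concept.

    act_maps:      (N, H, W) upsampled activation maps of a single filter
                   over the N images of the concept dataset X_C.
    concept_masks: (N, H, W) binary segmentation masks L_C(x).
    """
    # Threshold activations at a dataset-wide quantile to obtain M(x).
    threshold = np.quantile(act_maps, quantile)
    bin_masks = act_maps > threshold
    inter = np.logical_and(bin_masks, concept_masks).sum()
    union = np.logical_or(bin_masks, concept_masks).sum()
    return inter / union if union > 0 else 0.0

# A filter is said to represent concept C if the IoU exceeds a small
# fixed threshold (an illustrative choice, e.g. 0.04).
```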
In the MILAN approach, the two distributions \(P(d\mid E)\) and \(P(d)\) are approximated by an image captioning model and a language model, respectively, which are trained on a dataset that the authors curated. One strong assumption made by the network dissection approach is the availability of a comprehensive set \(\mathcal{C}\) of concepts and corresponding labeled images to provide accurate explanations of neurons. This is, however, difficult to obtain in general. Oikarinen et al. [30] tackle this problem with their CLIP-Dissect method, which is based on the CLIP [31] vision-language model. (CLIP embeds images and texts in the same vector space, allowing for measuring the similarity between texts and images.) To explain the concept a convolutional filter \(k\) is representing, they choose a set \(\mathcal{X}_{k}\) of the most highly activating images for filter \(k\), then use CLIP to measure the similarity between \(\mathcal{X}_{k}\) and each concept \(C\in\mathcal{C}\) (here, the concept set \(\mathcal{C}\) consists of the 20K most common English words), and finally find the best matching concept \(C\). The dissection approach can also be used in generative vision models. Bau et al. [16] identify that units of generative adversarial networks [32] learn concepts in a manner similar to network dissection, and that one can intervene on the units and remove specific concepts to change the output image (e.g., removing units representing the concept tree leads to output images with fewer trees in their scenes). ### Using Causal Relationships between Concepts and Activations In this category, the concepts that a neuron is representing are explained by analyzing the causal relationship either (i) between the input concept and the neuron, by intervening on the input and measuring the neural activation, or (ii) between the neuron and the output concept, by intervening on the neural activation and measuring the probability of predicting the concept. This approach is often used for explaining neurons of NLP models [28], where the types of concepts can be broader (e.g., subject-verb behavior, causal relationships, semantic tags). The first line of work investigates the influence of a concept in the input on the activation of a neuron by intervening in the input. Kadar et al. [33] find the \(n\)-grams (i.e., sequences of \(n\) words) that have the largest influence on the activation of a neuron by measuring the change in its activations when a word is removed from the \(n\)-grams. Na et al. [34] first identify the \(k\) sentences that most highly activate a filter of a CNN-based NLP model. From these \(k\) sentences, they extract concepts by breaking down each sentence into a set of consecutive word sequences that form a meaningful chunk. Then they measure the contribution of each concept to the filter's activations by first repeating the concept to create a synthetic sentence of a fixed length (to normalize the input's contribution to the unit across different concepts) and then measuring the mean value of the filter's activations. The second line of work investigates the role of a neuron in generating a concept by intervening in the activation of the neuron. Dai et al. [35] investigate the factual linguistic knowledge of the BERT model [36], a widely used pre-trained model for text classification, which is pre-trained among other tasks by predicting masked words in a sentence. In this approach, given relational facts with a mask word (e.g.
"Rome is the capital of [MASK]"), each neuron's contribution to predicting the mask is measured using the integrated gradients method [37]. To verify the causal role of the neuron that is supposed to represent a concept, the authors also intervene in the neuron's activation (by suppressing or doubling) and measure the change in accuracy in predicting the concept. Finlayson et al. [38] analyze whether a neuron of a transformer-based language model (e.g., GPT-2 [39]) has acquired the concept of conjugation. The authors determine which neuron contributes most to the conjugation of a verb by using the causal mediation analysis [40]. To this end, they first modify the activation of a neuron to the one that the neuron would have output if there was an intervention on the input (e.g., the subject in the input sentence was changed from singular to plural) and then measure the amount of change between the predictions of the correct conjugation of a verb with and without the intervention (see Figure 2). Meng et al. [18] also apply causal mediation analysis to GPT-2 to understand which neurons memorize factual knowledge and modify specific facts (e.g., "The Eiffel Tower is in Paris" is modified Figure 1: Neuron-level explanation using similarities between concepts and activations. Depicted is the network dissection approach, which compares the segmented concept in the input with the activation mask of a neuron [10]. to "The Eiffel Tower is in Rome"). The data they use consists of triples of the form (_subject_, _relation_, _object_) and the model has to predict the _object_ given _subject_ and _relation_. They discover that the neurons in the middle layer feed-forward modules in GPT-2 are the most relevant for encoding factual information and implementing a weight modifier to change the value of weights and alter the factual knowledge. ## 3 Layer-Level Explanations Concepts can also be represented by a whole _layer_ as opposed to a neuron or a convolutional filter, as mentioned to in the paragraph about the work by Fong et al. [29] in Section 2.1. This can be achieved in a post-hoc manner for a pre-trained model by passing examples of a concept dataset \(\mathcal{C}\) to the model and extracting the activations of a specific layer to train a concept classifier. Two approaches are prominent in layer-level explanations: the first is explaining with _concept activation vectors_ (CAVs) (see Section 3.1) and the second is _probing_ (see Section 3.2). The main difference between the two approaches is that in the case of CAV a linear binary classifier is trained for each concept \(C\in\mathcal{C}\), and in probing a multiclass classifier is trained with classification labels that are often related to certain linguistic features (e.g, sentiments, part-of-speech tags). On the other hand, concepts can be baked in a layer, where each concept represents a neuron as was done with localist representations in the early days of neural network research (see Section 3.3). ### Using Vectors to Explain Concepts: Concept Activation Vectors A _concept activation vector_ (CAV) introduced by Kim et al. [12] is a continuous vector that corresponds to a concept represented by a layer of a neural network \(f\) (see Figure 3). Let \(f=f^{\top}\circ f^{\bot}\), where \(f^{\bot}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is the bottom part of the network whose final convolutional layer \(\ell\) is of interest. 
To identify the existence of a concept \(C\) (e.g., the concept stripes) in layer \(\ell\), network \(f^{\bot}\) is first fed with positive examples \(x_{C}^{+}\) that contain concept \(C\) and negative examples \(x_{C}^{-}\) that do not contain the concept, and then their corresponding activations \(f^{\bot}(x_{C}^{+})\in\mathbb{R}^{n}\) and \(f^{\bot}(x_{C}^{-})\in\mathbb{R}^{n}\) are collected. Next, a linear classifier is learned that distinguishes activations \(f^{\bot}(x_{C}^{+})\) from activations \(f^{\bot}(x_{C}^{-})\). The vector \(v_{C}\in\mathbb{R}^{n}\) normal to the decision boundary of the classifier is then a CAV of concept \(C\). One useful feature of a CAV is that it allows for testing how much an input image \(x\) is correlated with a concept \(C\) (e.g., an image of a zebra and the concept stripes), which is called _testing with CAVs_ (TCAV) in [12]. This is accomplished, roughly speaking, by measuring the probability of a concept \(C\) having a positive influence on predicting a class label \(k\in\{1,\dots,K\}\) on a dataset \(\mathcal{X}\), that is, by measuring how often moving the latent vector \(f^{\bot}(x)\in\mathbb{R}^{n}\) along the direction of \(v_{C}\), i.e., \(f^{\bot}(x)+\epsilon\cdot v_{C}\), increases the log-probability of label \(k\) when it is fed to \(f^{\top}\), over all images \(x\in\mathcal{X}\) with class label \(k\). CAVs can be used in many different ways. Nejadgholi et al. [41] use CAVs to identify the sensitivity of abusive language classifiers with respect to implicit (as opposed to explicit) types of abusive language. Different from the original approach [12], which obtains CAVs by taking the vector normal to the decision boundary, they obtain CAVs by simply averaging over the activations \(f^{\bot}(x_{C}^{+})\) of all positive samples \(x_{C}^{+}\) to mitigate the impact of the choice of random negative samples \(x_{C}^{-}\) on determining the decision boundary. Figure 2: Neuron-level explanation using causal relationships between concepts and activations. In causal mediation analysis, the activation of a neuron is modified to the one that the neuron would have output if there had been an intervention on the input (the subject in the input sentence was changed from singular to plural). Afterwards, the amount of change between the predictions of the correct conjugation of a verb with and without the intervention is measured [38]. Zhou et al. [42] decompose the row vector of the last linear layer for predicting a class label \(k\) and represent it as a linear combination of a basis that consists of CAVs using only positive weights. Each positive weight then indicates how much of the corresponding concept is involved in predicting class label \(k\). Similarly, Abid et al. [17] propose an approach that learns a set of CAVs, but for debugging purposes. Given an input image misclassified by a model, a weighted sum of the set of CAVs is computed that leads to a correct classification when added to the activations before the last linear layer of the model. In addition to explaining bugs on a conceptual level, this approach allows for identifying spurious correlations in the data. An issue with the original approach for learning CAVs is that one needs to prepare a set of concept labels and images to learn the CAVs. Ghorbani et al. [43] partially tackle this issue by preparing images of the same class and then segmenting them at multiple resolutions. The clusters of resulting segments then form concepts and can be used for TCAV.
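To make the CAV construction and TCAV testing concrete, consider the following minimal sketch in the spirit of [12]; the use of logistic regression as the linear classifier and all function names are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(acts_pos, acts_neg):
    """Learn a concept activation vector from layer activations.

    acts_pos, acts_neg: (n, d) activations f_bot(x) for examples with /
    without the concept. Returns the unit normal of the linear decision
    boundary, i.e. the CAV v_C.
    """
    X = np.vstack([acts_pos, acts_neg])
    y = np.r_[np.ones(len(acts_pos)), np.zeros(len(acts_neg))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def tcav_score(grads_class_k, v_c):
    """Fraction of class-k examples whose prediction is positively
    sensitive to the concept direction; grads_class_k holds the gradient
    d(log p_k) / d(f_bot(x)) for each example."""
    return float(np.mean(grads_class_k @ v_c > 0))
```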
As corresponding concept labels are missing in this segmentation-based approach, the discovered concepts need to be manually inspected. Yeh et al. [44] circumvent the problem of preparing a concept dataset by training CAVs together with a model on the original image classification dataset. To this end, they compute a vector-valued score, where each value corresponds to a learnable concept and indicates to which degree the concept is present in the receptive field of the convolutional layer (computed as a scalar product). The score is then passed to a multilayer perceptron (MLP) to perform classification. ### Using Classifiers to Explain Concepts: Probing Similar to the CAV-based approaches in Section 3.1, _probing_ uses a classifier to explain concepts. However, instead of training a binary linear classifier for each concept \(C\in\mathcal{C}\) to measure the existence of the concept in the activation of a layer, probing uses a classifier for multiclass classification with labels that often represent linguistic features in NLP (e.g., sentiments, part-of-speech tags). For example, given sentences as inputs to a pre-trained NLP model (e.g., BERT [36]), probing allows for evaluating how well the sentence embeddings of the model capture certain syntactic and semantic information, such as the length or the tense of the sentence [45; 46; 47] (see Figure 4). Probing, which is designed as a layer-level explanation method, can also be combined with a neuron-level explanation method (see Section 2) by applying the probing classifier only to neurons that are relevant for the classification [48]. Finding such neurons can be accomplished by applying elastic-net regularization to the classifier, which constrains both the L1 and the L2 norm of the classifier weights. The concepts learned by such probing classifiers can be combined with a knowledge base to provide richer explanations. Ribeiro and Leite [49] use identified concepts as evidence to draw conclusions from a set of axioms in a knowledge base (e.g., given an axiom LongFreightTrain \(\leftarrow\) LongTrain \(\wedge\) FreightTrain in the knowledge base, identifying both antecedent concepts LongTrain and FreightTrain in the activations explains the presence of the consequence LongFreightTrain in the input). Figure 3: Layer-level explanation using vectors to explain concepts. For each concept \(C\), positive examples \(x_{C}^{+}\) and negative examples \(x_{C}^{-}\) are fed to a pre-trained model to learn the so-called _concept activation vector_ (CAV) \(v_{C}\) from the corresponding activations of the target layer [12]. However, one cannot always assume the presence of a knowledge base for a given task. Ferreira et al. [50] weaken this assumption by learning the underlying theory from the identified concepts using an induction framework. Since the probing classifier is trained independently from the pre-trained model, it has been pointed out that the pre-trained model does not necessarily leverage the same features that the classifier uses for predicting a given concept, i.e., what the probing classifier detects can be merely a correlation between the activation and the concept [51; 52]. ### Using Localist Representations: Concept Bottleneck Models Different from the neuron-based approach in Section 2, where concepts are learned in a post-hoc manner, in a _concept bottleneck model_ (CBM) [11] each concept is represented by a unique neuron in the bottleneck layer \(f^{\ell}\) of a model \(f\) (see Figure 5), which is reminiscent of localist representations [53].
This layer provides information about the existence or the strength of each concept in the input. The output of the bottleneck layer is then used by a classifier or regressor \(f^{\top}\) for the prediction, which allows for explaining which concepts led to the given prediction. Often, the bottom part \(f^{\bot}\) of a pre-trained model is used for initializing the layers before the concept bottleneck \(f^{\ell}\), and \(f^{\ell}\) is a linear layer that maps the features from \(f^{\bot}\) to concepts. Therefore, a CBM is \(f=f^{\top}\circ f^{\ell}\circ f^{\bot}\). To train the concept bottleneck \(f^{\ell}\), the training data has to include concept labels in addition to task labels. One of the main limitations of CBMs is the need for the aforementioned concept labels, which might not be available for specific tasks. Several recent approaches overcome this limitation [54; 55; 56]. The main idea behind these approaches is to use an external resource to obtain a set \(\mathcal{C}\) of concepts relevant to the task. This external resource could be a knowledge base such as ConceptNet [54], the 20K most common English words [55], or a language model like GPT-3 [56]. After obtaining the concept set \(\mathcal{C}\), each concept word \(C\in\mathcal{C}\) is embedded into a vector \(v_{C}\) by means of the CLIP vision-language model (cf. Section 2.1) such that the vector \(v_{C}\) can be used for computing the strength of concept \(C\) for a given input \(x\in\mathcal{X}\), e.g., by measuring the cosine similarity between \(v_{C}\) and the embedding \(f^{\bot}(x)\). Finally, the presence of concepts in the concept bottleneck layer allows for inducing logical explanations; e.g., Ciravegna et al. [57] induce explanations in disjunctive normal form (DNF) from concept activations and predicted labels, which is similar to the logic-based explanation approaches [49; 50] in Section 3.2. Figure 4: Layer-level explanation using a classifier to explain concepts. In this example, a pre-trained model takes a sentence as its input, and a probing classifier is applied to the activation of the highlighted layer to check whether the activation encodes the concept of sentence length [46]. ## 4 Conclusion In this survey, we have reviewed recent methods for explaining concepts in neural networks. We have covered different approaches that range from analyzing individual neurons to learning classifiers for a whole layer. As witnessed by the increasing number of recent papers, this is an active research area and a lot is still to be discovered, for example, by empirically comparing or integrating different approaches. With the progress of concept extraction from neural networks, integrating the learned neural concepts with symbolic representations--also known as _neuro-symbolic integration_--is receiving (again) increasing attention [49, 50, 57, 58]. In conclusion, this line of research is still very active and in development, providing ample opportunities for new forms of integration in neuro-symbolic AI. ## 5 Acknowledgement The authors gratefully acknowledge support from the DFG (CML, MoReSpace, LeCAREbot), BMWK (SIDIMO, VERIKAS), and the European Commission (TRAIL, TERAIS). We would like to thank Cornelius Weber for valuable comments on this paper.
2301.00776
Physics-Informed Neural Networks for Prognostics and Health Management of Lithium-Ion Batteries
For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there are no general and flexible methods to fuse the information represented by those models. Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Li-ion Phosphate (LFP)/graphite batteries.
Pengfei Wen, Zhi-Sheng Ye, Yong Li, Shaowei Chen, Pu Xie, Shuai Zhao
2023-01-02T17:51:23Z
http://arxiv.org/abs/2301.00776v2
Fusing Models for Prognostics and Health Management of Lithium-Ion Batteries Based on Physics-Informed Neural Networks ###### Abstract For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there are no general and flexible methods to fuse the information represented by those models. Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Li-ion Phosphate (LFP)/graphite batteries. Battery Degradation, Information Fusion, Prognostics and Health Management, Physics-informed Machine Learning, Remaining Useful Life. ## I Introduction Global sales of Electric Vehicles (EVs) doubled in 2021 from 2020 to a new record of 6.6 million, according to a report [1] released by the International Energy Agency (IEA). Sales keep rising strongly: 2 million EVs were sold in the first quarter of 2022, up 75% from the same period in the previous year. As their power source, Lithium-ion (Li-ion) batteries have become one of the most important industrial consumables. The demand for batteries reached 340 Gigawatt-hours (GWh) in 2021, which also doubled from the previous year. In the same year, the average battery price was USD 132/kWh. The cost of battery replacement due to degradation is still high, and the Battery Management System (BMS) is developed to perform Prognostics and Health Management (PHM). The primary tasks of a BMS include both State of Health (SoH) estimation and Remaining Useful Life (RUL) prognostics. Fusing usage information collected from various sources via the BMS is expected to significantly improve the performance of PHM. Generally, information fusion has been achieved in various flexible forms, including data-level, feature-level, decision-level, and model-level fusion [2, 3]. The prognostic models can be categorized as physics-based models, experience-based models, and data-driven models [4, 5, 6]. Physics-based and experience-based models represent abundant prior domain knowledge of the monitored systems condensed by experts, and they have been widely accepted [7]. There is no insurmountable gap between these two categories of models, since many physics models originate from empirical models, such as the Paris-Erdogan equation. On the contrary, building data-driven models is a process of mining latent information in data [8]. Dynamic models are commonly built to represent the governing dynamics during degradation, which can provide critical insights into the internal change of monitored systems.
Dynamic models for Li-ion battery degradation can be established based on the classic Pseudo-Two-Dimensional (P2D) model, which possesses high accuracy but is severely restricted due to its complexity and computational burden [9]. The Single-Particle (SP) model was then proposed to simplify the P2D model; it assumes that both electrodes consist of multiple uniform-sized spherical particles. Based on the SP model, a widely accepted degradation mechanism was proposed. Since electrolytes can be reduced in the presence of lithiated carbon, Solid Electrolyte Interphase (SEI) formation is caused by the resulting lithium carbonate. The newly exposed surface of the particles is then covered by SEI during cycling. Under this mechanism, the degradation rate considering Diffusion Induced Stresses (DIS), crack growth, and SEI thickness growth was derived in [10, 11]. This dynamic model, proposed fully based on prior electrochemical knowledge, contains many parameters to estimate and verify, which restricts its generalizability. Another category of degradation models focuses more on the utilization of available degradation data over monitoring time or cycles, and the degradation trends are then modeled by analytical functions under given statistical assumptions [12]. These models commonly oversimplify the physics that the degradation trends of Li-ion batteries should follow. Semi-physical and semi-empirical dynamic models have been proposed to take both factors into account [13]. For instance, an exponential-law model can be derived based on battery Coulombic efficiency [14]. This exponential model can also be integrated from an empirically given differential form, and its parameters are then estimated based on temperature stress, State of Charge (SoC) stress, time stress, and Depth of Discharge (DoD) stress [13]. In view of the fact that there is an upper bound for the capacity loss, the Verhulst model has been proposed to quantify the rate of capacity loss [15]. As can be seen, there usually exists more or less available information from physics for SoH estimation, while this is not always true for RUL prognostics. To fill this gap, the discovery of dynamic models with a preset fixed form or a completely arbitrary form has been explored by developing Partial Differential Equation (PDE)-Net [16, 17] and Deep Hidden Physics Models (DeepHPM) [18, 19], respectively. According to the No-Free-Lunch (NFL) theorem, information from different models fits distinct problems well. Model fusion can leverage the advantages of combining the information from different categories of available models for PHM. It can be difficult to categorize the forms of model fusion. Five commonly applied forms of fusing models are reviewed in [4] according to the combination and interfaces of distinct types of prognostic models. Fusing physics-based models and data-driven PHM methods attracts much attention [20]. Among these fusion forms, transition equations built based on physics or experience have been widely integrated into the framework of Bayesian filtering [21, 22]. These transition equations are then used to iterate the degradation state. This implementation of fusing physics-informed dynamic models and data-driven prognostic models inspires the frontiers of Physics-Informed Machine Learning (PIML) [23] in PHM. The principle of PIML is to fuse physics (experience)-based models and data-driven models. As reviewed in [4], this category of model fusion can be instantiated in various forms as well.
Typical implementations of PIML are reviewed in [23] in terms of their applicable scenarios, principles, frameworks, and applications. As a formalized framework among PIML methods, the Physics-Informed Neural Network (PINN) is gaining momentum, since dynamic models established in various forms can be integrated. This high degree of flexibility makes it popular in multiple frontier fields such as epidemiology [24], power electronics [25, 26], acoustics [27], and electromagnetics [28]. In this paper, we propose a semi-physical and semi-empirical dynamic model based on the Verhulst model [15] to capture the fading trends of the battery SoH. This model is further generalized as a PDE considering observable features given charging and discharging profiles, and a new parameter that represents the level of initial SEI formation is introduced. To estimate the SoH of Li-ion batteries, we introduce PINN to fuse the prior information formulated as the dynamic model and the information extracted from monitoring data. Besides, DeepHPM is also employed to discover the dynamics, as a comparison with the improved Verhulst model for SoH estimation. It is also used for RUL prognostics without any explicit dynamic model. This implementation is a fusion of information mined by two data-driven models with different functions [4]. During the training of the PINN, we use an uncertainty-based method to adaptively weigh the losses of the subsequently proposed multi-task learning. The proposed methods are then verified on a public experimental cycling dataset of Li-ion Phosphate (LFP)/graphite batteries. The contributions and advantages of this paper include: 1) We propose a new scheme to fuse prior information of physical or empirical degradation models and that of monitoring data in PHM for Li-ion batteries. 2) We propose a new semi-physical semi-empirical dynamic degradation model to provide more insight into capacity loss. 3) We propose a scheme of fusing information of two distinctly designed data-driven models when there is little prior information. Moreover, the proposed models can be trained with an adaptive weighting method, which solves the challenge of combining losses appropriately in training PINNs. The code and data accompanying this paper are available on GitHub at [https://github.com/WenPengfei0823/PINN-Battery-Prognostics](https://github.com/WenPengfei0823/PINN-Battery-Prognostics). The rest of this article is organized as follows. Section II sets up the problem and introduces the establishment of two categories of dynamic models, including an improved Verhulst model and DeepHPM-based automatically discovered models. Section III presents the framework of PINN for model fusion and its training method, where an adaptive uncertainty-based multi-task balancing technique is included. Sections IV and V verify the proposed methods on a public experimental dataset, and several details are provided in Appendix A. Finally, Section VI concludes this article. ## II Problem Statement Battery aging results from the combined action of elapsed time and cycling [13, 29]. Fading in a Li-ion battery upon charging and discharging can be attributed to a coupled mechanical-chemical degradation within the cell. The DISs can boost the growth of the cracks on the electrode surfaces during cycling, thus leading to the growth of SEI [11, 10].
This degradation mechanism resembles the fatigue process of materials upon cyclic loading [13], where the capacity loss due to the stress accumulates independently and ultimately causes life loss. Such a process can be categorized as cycle aging. At the same time, a battery also degrades over time because of the inherent slow electrochemical reactions, forming the calendar aging. Since charging and discharging processes can often be completed in dozens of minutes while calendar aging becomes observable usually on the scale of months or even years, cycle aging contributes the most to capacity and life loss. As a result, we assume that calendar aging can be neglected here when modeling the degradation dynamics of batteries. The failure time of a Li-ion battery is commonly defined as the time when its currently usable capacity reduces below a pre-specified threshold. The SoH of a battery is consequently related to its current capacity and is typically defined as the ratio of the current capacity over the nominal one [30]. Here we focus on the Percentage Capacity Loss (PCL) \(u\), which represents the relative gap between the usable capacity and the nominal one, and it is denoted as: \[u_{k}=1-SoH_{k}=\left(1-\frac{Q_{k}}{Q_{Nom}}\right)\times 100\%, \tag{1}\] where \(Q_{Nom}\) represents the nominal capacity and \(Q_{k}\) represents the usable capacity in the \(k^{\mathrm{th}}\) cycle. Without loss of generality, PCL \(u\) can be set as a univariate function of a virtual continuous independent time variable \(t\) [31] whose unit is still cycles. When a battery is operated under identical conditions in each cycle, a series of observations can be acquired at discrete times \(t_{k}\): \[u_{k}=u\left(t_{k}\right). \tag{2}\] To model the dynamics of the battery degradation trend, the fading rate of its capacity can be denoted as \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}=\mathcal{G}\left(t,u;\Theta\right). \tag{3}\] Eq. (3) is an explicit Ordinary Differential Equation (ODE) parameterized by the set \(\Theta\), and \(\mathcal{G}\) denotes a nonlinear function of \(t\) and \(u\). For instance, based on the SEI formation and growth process on both the initial and the cracked surface [10, 11], eq. (3) can be specified as \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}\bigg|_{t=t_{k}}=\theta_{1}\theta_{5}\left(1+\theta_{2}t_{k}\right)^{\frac{\theta_{3}}{2-\theta_{3}}}+\theta_{4}t_{k}^{-\frac{1}{2}} \tag{4}\] \[+\theta_{1}\theta_{6}\sum_{l=1}^{k-1}\left[1+\theta_{2}\left(t_{k}-t_{l}\right)\right]^{\frac{\theta_{3}}{2-\theta_{3}}}\left(t_{k}-t_{l}\right)^{-\frac{1}{2}},\quad l<k, \tag{5}\] where \(t_{l}\) represents a monitoring time earlier than \(t_{k}\) and \(\Theta=\left\{\theta_{1},\theta_{2},\ldots,\theta_{6}\right\}\) are composite parameters. Each of them should be set according to dozens of electrochemical parameters, including the geometric area of the graphite electrode film, the activation energy for crack propagation, and the solid phase porosity of the electrode film. This theoretical model details the molecular-level aging mechanism, but it can hardly be employed in practical applications due to the lack of detailed cell conditions. Such a dynamic model describes a degradation trend reported in [32] with a decelerated trend of \(u\) during the early cycles and a moderate linear trend [33] during the later cycles.
Based on the observed trend, a simplified semi-empirical model was proposed in [13] by using regression analysis: \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}=\theta\left[1-u\left(t\right)\right], \tag{6}\] where \(\theta\) denotes a basic linearized degradation rate that is co-determined by SoC, DoD, and cell temperature. An exponential-function solution with a decreasing absolute value of the derivative can be acquired by integrating (6) for \(u\). It can be observed that this model may not fit battery degradation with an accelerated trend during the early cycles, as reported in [34, 35]. ### _Improved Verhulst Dynamic Model_ Here the function \(\mathcal{G}\left(t,u;\Theta\right)\) is also made dependent on the health state of the cell, and takes the simplest linear form as an instance: \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}=ru\left(t\right), \tag{7}\] \[\mathrm{s.t.}\ u\left(t\right),\ r>0,\] where \(r\) is the degradation constant. Eq. (7), like the state-dependent term of (6), makes the rate of capacity loss depend on the current health state; in (7), this rate is proportional to the capacity loss itself. The solution of (7) is the exponential function \(u\left(t\right)=u_{0}\exp\left(rt\right)\) with an initial loss \(u_{0}\), representing an accelerated degradation trend. This initial loss \(u_{0}\) is commonly set as 10% [10]. Considering that the capacity loss resulting from SEI growth and other mechanisms is always finite, a constant \(K\) representing the upper bound of the capacity loss is introduced to constrain the increasing rate of the loss: \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}=ru\left(t\right)\left[1-\frac{u\left(t\right)}{K}\right], \tag{8}\] \[\mathrm{s.t.}\ r>0,\] \[0<u\left(t\right)<K.\] Eq. (8) is also known as the logistic differential equation proposed by Pierre Verhulst to model the population growth process [15]. When the lost capacity increases close to \(K\), the increasing rate \(\mathrm{d}u\left(t\right)/\mathrm{d}t\) will decline until it is close to 0. Since a battery is commonly considered failed when its \(u\) exceeds 20%, it can be _a priori_ roughly known that \(K\) lies in the range of 20% to 100%. Furthermore, SEI will naturally form after a brand-new battery is manufactured and before use. In this process, the capacity loss caused by the SEI formation is supposed not to be governed by these dynamic models. Instead, a parameter \(C\) that denotes such initial capacity loss is introduced to modify the model (8) as \[\frac{\mathrm{d}u\left(t\right)}{\mathrm{d}t}=r\left[u\left(t\right)-C\right]\left[1-\frac{u\left(t\right)-C}{K-C}\right], \tag{9}\] \[\mathrm{s.t.}\ r>0,\] \[0<u\left(t\right)<K,\] \[0<C\leq u_{0}.\] Even under identical operating conditions, different cells manufactured in the same batch may show distinct degradation characteristics. With the setup that \(u\) is a univariate function of \(t\), this cell-to-cell heterogeneity can be modeled by differences in the degradation model parameters [36, 37]. Considering that the sole independent time variable \(t\) cannot distinguish specific trajectories when different batteries are fading, other variables that can indicate the latent health state can be introduced. In this setup, \(u\) is expanded to a multivariate function of both \(\mathbf{x}=\left[x_{1},x_{2},\ldots,x_{S}\right]^{\mathrm{T}}\in\mathbb{R}^{S}\) and \(t\). An \(S\)-dimensional Health Indicator (HI) vector \(\mathbf{x}\) can characterize the health state during battery fading, and both monitoring data and designed representative features can act as health indicators [33, 38].
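As an illustration, the improved Verhulst model (9) can be integrated numerically as sketched below; the parameter values are arbitrary placeholders, not values fitted in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not fitted values): degradation constant r,
# upper bound K of capacity loss, initial SEI-formation loss C,
# and initial loss u0; losses are fractions, t is in cycles.
r, K, C, u0 = 0.02, 0.8, 0.05, 0.10

def verhulst_rhs(t, u):
    """Right-hand side of the improved Verhulst model, eq. (9)."""
    return r * (u - C) * (1.0 - (u - C) / (K - C))

sol = solve_ivp(verhulst_rhs, t_span=(0, 2000), y0=[u0],
                t_eval=np.linspace(0, 2000, 201))
u = sol.y[0]                            # percentage capacity loss u(t)
eol_cycle = sol.t[np.argmax(u >= 0.2)]  # first cycle where u exceeds 20%
```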
It is feasible that cell-to-cell heterogeneity is quantified by different specific combinations of values in the feature space. The degradation rate given \(\mathbf{x}\) and monitoring time \(t\) is subsequently modeled by a PDE as: \[\frac{\partial u\left(\mathbf{x},t\right)}{\partial t}=r\left[u\left(\mathbf{x},t\right)-C\right]\left[1-\frac{u\left(\mathbf{x},t\right)-C}{K-C}\right], \tag{10}\] \[\mathrm{s.t.}\ r>0,\] \[0<u\left(\mathbf{x},t\right)<K,\] \[0<C\leq u_{0}.\] The parameters of eq. (10) have the same meanings as those in eq. (9). ### _Data-Driven Dynamic Model_ The dynamic model (10) depicts only one of many possible forms of SoH degradation. Beyond SoH prognostics, it can be difficult to distill degradation mechanisms when predicting other variables such as RUL. Following (3), we define more generalized nonlinear dynamics parameterized by \(\Theta\) to distill the mechanisms governing the evolution [18] of the given data of HIs and time as \[u_{t}-\mathcal{G}\left(\mathbf{x},t,u,u_{\mathbf{x}},u_{\mathbf{x}\mathbf{x}},u_{\mathbf{x}\mathbf{x}\mathbf{x}},\ldots;\Theta\right)=0. \tag{11}\] In eq. (11), \(u_{\mathbf{x}}=\left[\frac{\partial u}{\partial x_{1}},\frac{\partial u}{\partial x_{2}},\ldots,\frac{\partial u}{\partial x_{S}}\right]^{\mathrm{T}}\) denotes the first-order partial derivative of \(u\) with respect to \(\mathbf{x}\), and so on. The nonlinear function \(\mathcal{G}\) imposes more flexible relations on \(t\), \(u\), and their partial derivatives of any order. Subsequently, an infinite-dimensional dynamical system can be represented by \(\mathcal{G}\) [18]. As can be seen, (10) is a specific implementation of (11) where \(\mathcal{G}\left(u;\Theta\right)=r\left(u-C\right)\left(1-\frac{u-C}{K-C}\right)\). In most cases of health monitoring, it is challenging to define an explicit dynamic model. With scattered and noisy monitoring observations, bias still exists between the PDE model (10) and the more complicated latent dynamics. We construct a Neural Network (NN) as a function approximator [39] to fill this gap beyond a particular family of basis functions [19], forming a DeepHPM [18]: \[u_{t}-\mathrm{DeepHPM}\left(\mathbf{x},t,u,u_{\mathbf{x}},u_{\mathbf{x}\mathbf{x}},u_{\mathbf{x}\mathbf{x}\mathbf{x}},\ldots;\Theta\right)=0. \tag{12}\] Here we denote the NN used to approximate \(\mathcal{G}\) as the DeepHPM. The parameter set \(\Theta\) represents the trainable network parameters of the function approximator DeepHPM. ## III Methodology The PDE dynamic model established based on prior knowledge (10) or approximated by an NN (12) can be quite hard to solve. Based on the well-known capability of NNs as universal function approximators, we employ another NN parameterized by \(\Phi\) to approximate the hidden solution \(u\left(\mathbf{x},t;\Phi\right)\) of the system. The solution \(u\left(\mathbf{x},t;\Phi\right)\) is also called the _surrogate_ network. With this NN solver, direct access to, or approximations of, the involved partial derivatives are unnecessary [40, 18]. Model fusion of the surrogate NN with the explicit PDE models or the DeepHPM formulates a PINN. ### _Model Fusion by PINN_ A typical structure of PINN consists of three modules: a dynamic model, a surrogate NN, and an automatic differentiator. The dynamic model \(\mathcal{G}\left(\mathbf{x},t,u,u_{\mathbf{x}},u_{\mathbf{x}\mathbf{x}},u_{\mathbf{x}\mathbf{x}\mathbf{x}},\ldots;\Theta\right)\) distills the mechanisms governing the dynamics of a degrading system, which can be either explicitly defined as (10) or approximated by a DeepHPM as (12).
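A minimal sketch of these two networks is given below; the layer widths, depth, and the input composition of the DeepHPM (here \((\mathbf{x},t,u,u_{\mathbf{x}})\)) are illustrative assumptions, although the tanh activation follows the description of the surrogate NNs later in this section.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Fully connected network with tanh activations, used for both the
    surrogate u(x, t; Phi) and the DeepHPM dynamics G(...; Theta)."""
    def __init__(self, dims):
        super().__init__()
        layers = []
        for i in range(len(dims) - 2):
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.Tanh()]
        layers.append(nn.Linear(dims[-2], dims[-1]))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

S = 4                                       # number of health indicators
surrogate = MLP([S + 1, 32, 32, 1])         # (x, t) -> u
deep_hpm = MLP([2 * S + 2, 32, 32, 1])      # (x, t, u, u_x) -> u_t
```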
The surrogate NN \(u\left(\mathbf{x},t;\Phi\right)\) is used to approximate the hidden solution \(u\left(\mathbf{x},t\right)\) of the dynamic model, and the Automatic Differentiator (AutoDiff) [41] calculates the values of all involved partial derivatives input to the dynamic model. Besides solving the latent \(u\left(\mathbf{x},t\right)\) to build the prognostic model, discovering the dynamic model is realized by identifying the parameter set \(\Theta\) of either the specific PDE or the DeepHPM. Identifying the unknown parameters forms a high-dimensional inverse problem describing the dynamical system. With the surrogate NN \(u\left(\mathbf{x},t;\Phi\right)\), we define the left-hand side of eq. (12) as a function \(f\left(\mathbf{x},t;\Phi,\Theta\right)\): \[f\left(\mathbf{x},t;\Phi,\Theta\right)\coloneqq u_{t}\left(\mathbf{x},t;\Phi\right)-\mathcal{G}\left(\mathbf{x},t,u,u_{\mathbf{x}},u_{\mathbf{x}\mathbf{x}},u_{\mathbf{x}\mathbf{x}\mathbf{x}},\ldots;\Theta\right).\] The parameters \(\Phi\) of the surrogate NN \(u\left(\mathbf{x},t;\Phi\right)\) and \(\Theta\) of the dynamic model \(\mathcal{G}\left(\mathbf{x},t,u,u_{\mathbf{x}},u_{\mathbf{x}\mathbf{x}},u_{\mathbf{x}\mathbf{x}\mathbf{x}},\ldots;\Theta\right)\) can be trained by minimizing the mean squared error losses: \[\mathcal{L}=\lambda_{u}\mathcal{L}_{u}+\lambda_{f}\mathcal{L}_{f}+\lambda_{f_{t}}\mathcal{L}_{f_{t}}, \tag{13}\] where \[\mathcal{L}_{u}=\sum_{i=1}^{N}\left[u\left(\mathbf{x}_{i},t_{i};\Phi\right)-u_{i}\right]^{2}, \tag{14}\] \[\mathcal{L}_{f}=\sum_{i=1}^{N}\left[f\left(\mathbf{x}_{i},t_{i};\Phi,\Theta\right)\right]^{2}, \tag{15}\] \[\mathcal{L}_{f_{t}}=\sum_{i=1}^{N}\left[f_{t}\left(\mathbf{x}_{i},t_{i};\Phi,\Theta\right)\right]^{2}. \tag{16}\] The loss function (13) represents multi-task learning. The weight coefficients \(\lambda_{u}\), \(\lambda_{f}\), and \(\lambda_{f_{t}}\) can be tuned to balance the loss terms during training. The loss term \(\mathcal{L}_{u}\) corresponds to the regression fitting error at the collected observations, while \(\mathcal{L}_{f}\) and \(\mathcal{L}_{f_{t}}\) enforce the structure imposed on the PINN by (11) and its first-order partial derivative with respect to \(t\) [42]. Here \(\mathcal{D}_{Train}=\left\{\mathbf{x}_{i},t_{i},u_{i}\right\}_{i=1}^{N}\) denotes the training data, and \(u_{i}\) represents the label corresponding to the input data \(\left\{\mathbf{x}_{i},t_{i}\right\}\), which can be the actual SoH or RUL in battery PHM, for instance.

### _Framework of PINN_

The involved frameworks include a data-driven vanilla NN (baseline), a PINN with the Verhulst equation as the dynamic model (PINN-Verhulst), and a PINN with a DeepHPM as the dynamic model (PINN-DeepHPM), shown in Figs. 1, 2, and 3, respectively. For a clearer comparison of the learning frameworks, the vanilla NN in the baseline is also denoted as a surrogate NN, although no PDE is solved there. The basic structure of the surrogate NNs is illustrated in Fig. 4. The hidden layers of the surrogate NNs are all Fully Connected (FC) layers, and the hyperbolic tangent is employed as the activation function due to its differentiability. The surrogate NNs in Figs. 1, 2, and 3 all follow this structure unless otherwise noted, and the structure of the DeepHPM is set the same as that of the surrogate NNs.
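To make the model fusion concrete, the following is a minimal PyTorch sketch of how the residual \(f\) and the losses (14)-(16) could be assembled with automatic differentiation. For illustration it assumes that only first-order spatial derivatives are fed to the dynamic model; the actual input terms are selected from a candidate library, as discussed later:

```python
import torch

def pinn_losses(surrogate, dynamic_model, x, t, u_label):
    """Sketch of the losses (14)-(16); `surrogate` maps (x, t) -> u, and
    `dynamic_model` is either the explicit PDE (10) or a DeepHPM (12)."""
    x = x.detach().clone().requires_grad_(True)
    t = t.detach().clone().requires_grad_(True)
    u = surrogate(torch.cat([x, t], dim=1))

    # AutoDiff supplies the partial derivatives fed into the dynamic model.
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]

    f = u_t - dynamic_model(torch.cat([x, t, u, u_x], dim=1))   # residual of (12)
    f_t = torch.autograd.grad(f.sum(), t, create_graph=True)[0] # d f / d t

    L_u = ((u - u_label) ** 2).sum()   # eq. (14): data-fitting loss
    L_f = (f ** 2).sum()               # eq. (15): PDE residual loss
    L_ft = (f_t ** 2).sum()            # eq. (16): residual-derivative loss
    return L_u, L_f, L_ft
```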
### _Weight Coefficients Tuning in Training PINN_

Focusing on the loss function (13), it can be observed that training a PINN is constrained, or regularized, by the losses imposed by the given set of PDEs beyond a pure regression task on the given data set. Networks are known to be sensitive to the relative weights of the multiple objective functions of the desired training tasks [43]. Manually tuning the weighting coefficients \(\lambda\) can be quite expensive, so methods for adaptive tuning during training are desirable. Numerical stiffness causes unbalanced back-propagated gradients in the PINN training process, and the weighting coefficients can accordingly be tuned based on gradient statistics [44]. Other mechanisms have also been utilized to design adaptive balancing, such as random lookback [45] and homoscedastic aleatoric uncertainty [43]. Following [43], the likelihood of the observed data labels is assumed Gaussian. The mean of the likelihood is set as the output of the surrogate NN, and the variance is set according to the weighting coefficient: \[p\left(\left.\boldsymbol{u}\right|u\left(\boldsymbol{X},\boldsymbol{t};\Phi\right)\right)=\mathcal{N}\left(u\left(\boldsymbol{X},\boldsymbol{t};\Phi\right),\frac{1}{\lambda_{u}}\right), \tag{17}\] where \(\boldsymbol{u}=\left[u_{1},u_{2},\ldots,u_{N}\right]^{\mathrm{T}}\), \(\boldsymbol{X}=\left[\boldsymbol{x}_{1},\boldsymbol{x}_{2},\ldots,\boldsymbol{x}_{N}\right]^{\mathrm{T}}\), and \(\boldsymbol{t}=\left[t_{1},t_{2},\ldots,t_{N}\right]^{\mathrm{T}}\) consist of the labels and samples of the training set, respectively. The output corresponding to the loss \(\mathcal{L}_{u}\) is used as the instance; the other objectives in (13) can be defined accordingly. The observation noise scale is tied to the weighting coefficient \(\lambda\) of the corresponding output in the loss function. The log-likelihood can be written as \[\log p\left(\left.\boldsymbol{u}\right|u\left(\boldsymbol{X},\boldsymbol{t};\Phi\right)\right)\propto-\frac{\lambda_{u}}{2}\left\|\boldsymbol{u}-u\left(\boldsymbol{X},\boldsymbol{t};\Phi\right)\right\|^{2}+\frac{1}{2}\log\lambda_{u}. \tag{18}\] Based on (14), (15), (16), and (18), minimizing the multi-task minus log-likelihood regularizes the loss function (13) as: \[\mathcal{L}=\lambda_{u}\mathcal{L}_{u}+\lambda_{f}\mathcal{L}_{f}+\lambda_{f_{t}}\mathcal{L}_{f_{t}}-\log\lambda_{u}\lambda_{f}\lambda_{f_{t}}. \tag{20}\] With these settings, the relative weighting coefficients \(\lambda_{u}\), \(\lambda_{f}\), and \(\lambda_{f_{t}}\) of the loss terms can be learned adaptively by minimizing the regularized loss function (20). The weighted output noise is constrained by the penalty term \(-\log\lambda_{u}\lambda_{f}\lambda_{f_{t}}\). In practice, we define \(\lambda^{\prime}\coloneqq-\log\lambda\) and train \(\lambda^{\prime}\) for numerical stability. The loss function (20) can subsequently be rewritten as: \[\mathcal{L}=\exp\left(-\lambda^{\prime}_{u}\right)\mathcal{L}_{u}+\exp\left(-\lambda^{\prime}_{f}\right)\mathcal{L}_{f}+\exp\left(-\lambda^{\prime}_{f_{t}}\right)\mathcal{L}_{f_{t}}+\lambda^{\prime}_{u}+\lambda^{\prime}_{f}+\lambda^{\prime}_{f_{t}}. \tag{21}\] Training the PINN by minimizing (21) instead of (13) is expected to balance the multiple losses, which is denoted as AdpBal here.
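A minimal sketch of AdpBal, directly transcribing eq. (21) into trainable PyTorch parameters; the initial values and the optimizer wiring in the comment are illustrative assumptions:

```python
import torch

# Trainable parameters lambda' := -log(lambda), one per task;
# zero initialization (i.e., lambda = 1) is an illustrative choice.
log_lams = torch.zeros(3, requires_grad=True)  # [lambda'_u, lambda'_f, lambda'_ft]

def adaptive_loss(L_u, L_f, L_ft):
    """Uncertainty-based balancing, eq. (21): each task loss is scaled by
    exp(-lambda'), and the lambda' terms act as the regularizing penalty."""
    losses = torch.stack([L_u, L_f, L_ft])
    return (torch.exp(-log_lams) * losses).sum() + log_lams.sum()

# The lambda' parameters are optimized jointly with Phi and Theta, e.g.:
# optimizer = torch.optim.Adam(list(surrogate.parameters())
#                              + list(dynamic_model.parameters()) + [log_lams])
```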
## IV Dataset Description and Preprocessing

### _Dataset Description_

The dataset involved in this paper consists of three batches of commercial LFP/graphite battery cells manufactured by A123 Systems, where a total of 124 cells were cycled to failure under dozens of different fast-charging protocols [35]. The nominal capacity of the cells is 1.1 Ah, so the unit charging and discharging rate 1C corresponds to 1.1 A. Each charging protocol is marked as a string formatted as "C1(Q1)-C2": the corresponding cell was first charged with the current C1 from 0% SoC to the SoC Q1 (in %); upon reaching Q1, the charging current was switched to C2, with which the cell was charged to 80% SoC. All cells were finally charged from 80% to 100% SoC in a Constant-Current Constant-Voltage (CC-CV) manner to the 3.6 V upper cutoff potential and the C/50 cutoff current. Moreover, all cells were discharged, also in a CC-CV manner, at 4C to the 2.0 V lower cutoff potential and the C/50 cutoff current. Detailed information on the studied cells and the experimental settings can be accessed in [35].

### _Feature Extraction_

As introduced in Sections II and III, appropriately designed health features \(\boldsymbol{x}=\left[x_{1},x_{2},\ldots,x_{S}\right]^{\mathrm{T}}\) can effectively characterize the exact degradation process that each cell undergoes. Extracting features that vary with the number of cycles for which the battery has operated attracts much attention. A critical principle is that the extracted features should be strongly correlated with SoH [30]. Point features can be simply extracted from the charging/discharging profiles during cycling, such as the peaks of Incremental Capacity (IC) curves during CC charging [46]. Point features are convenient to extract and mainly characterize the transient state within each cycle. Interval features represent characteristics over a period, and they can also be easily extracted by intercepting values on the profiles at certain intervals. Therefore, information related to SoH may be over-simplified, since only the values at the interval endpoints are considered. Interval features can be extracted as, for instance, the charge time during CC charging [34] and the time intervals of equispaced charge current/voltage changes [47, 48]. To capture the trend variation of the charging/discharging profiles during cycling, trend features are designed by focusing on the parameters of a specific prior function that models the profiles within a cycle. The trend of the charging current during CV charging is approximately exponential, and the parameter of the fitted exponential function can be used as a trend feature [46].

Fig. 1: Framework of typical data-driven vanilla NN.

Within a certain range of discharging voltage, the differences in discharging capacity show a quadratic trend with respect to the discharging capacity. The parameters of the quadratic polynomial can accordingly be estimated as trend features as well [30]. Other statistical features computed from the whole profile of a cycle can also be employed, such as the mean [49, 30], energy [50], skewness [50], and kurtosis [50], etc. Following [30], we also extracted the trend feature based on the quadratic model: \[Q_{i+1}\left(V_{i+1}\right)-Q_{i}\left(V_{i}\right)=-\omega\left[Q_{i}\left(V_{i}\right)\right]^{2}+b+\varepsilon. \tag{22}\] Eq. (22) is defined for the profile of each cycle.
\(Q_{i}\left(V_{i}\right)\) is a measurement of the discharging capacity with respect to the discharging voltage between 2.7 V and 3.3 V, and \(\varepsilon\sim\mathcal{N}\left(0,\sigma^{2}\right)\) represents the normal measurement error. The undetermined coefficients \(\omega\) and \(b\) are estimated as two features for that cycle. The maximum, minimum, and variance of the IC curve when the discharging voltage is between 2.7 V and 3.3 V are also used as features. Other employed features include the average temperature, internal resistance, and charging time. To reduce the measurement noise, a moving average is then applied to the extracted features for better representative capability. All the extracted features from cell #124 are illustrated in Fig. 5 as an example; note that the features are 0-1 normalized in this figure for better illustration.

Figure 2: Framework of PINN-Verhulst.

Figure 3: Framework of PINN-DeepHPM.

Figure 4: Basic structure of NNs.

### _Standardization_

Note that there are significant variations among the different channels of inputs and outputs, which may severely impact performance. Standardization is consequently implemented: the NN processes standardized data, while the derivatives of the outputs with respect to the inputs remain consistent under AutoDiff, and the partial derivatives of the unstandardized outputs with respect to the unstandardized inputs can be recovered through the standardization factors. Here we employ the widely used z-score standardization to scale the inputs. Specifically, the standardization factors are set as the corresponding mean and standard deviation calculated from the training data: \[\widetilde{x}_{i}=\frac{x_{i}-\mathrm{Mean}\left(\mathbf{X}_{Train}\right)}{\mathrm{Std}\left(\mathbf{X}_{Train}\right)}, \tag{23}\] \[\widetilde{u}_{i}=\frac{u_{i}-\mathrm{Mean}\left(\mathbf{u}_{Train}\right)}{\mathrm{Std}\left(\mathbf{u}_{Train}\right)}. \tag{24}\] In eqs. (23) and (24), \(\mathbf{u}=\left[u_{1},u_{2},\dots,u_{N}\right]^{\mathrm{T}}\) and \(\mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{N}\right]^{\mathrm{T}}\) consist of the labels and samples of the training set, as denoted in Section III-C. The monitoring time \(t\) is also standardized in this way since it can be regarded as part of the inputs.

## V Case Study

The proposed prognostic framework is verified for both SoH estimation and RUL prediction. For better comparison with existing methods, the training and test sets are formed in different ways, marked as cases A, B, and C. In case A, data from cells #91 and #100 are used for training/validation and cell #124 is used as the test set [30]; their charging protocol is 4.8C(80%)-4.8C. In case B, data from cells #101, #108, and #120 are used for training/validation and cell #116 is used as the test set [30]; their charging protocol is 5.3C(54%)-4C. In case C, data from batch 2 are selected and 20% of them are randomly chosen as the test set [49]; these cells follow various charging protocols. The widely used Root Mean Square Error (RMSE) metric is adopted to measure the performance: \[\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{u}_{i}-u_{i}\right)^{2}}, \tag{25}\] where \(\hat{u}_{i}\) and \(u_{i}\) denote the output of the prediction model and the actual data label corresponding to the \(i^{\mathrm{th}}\) input sample, and there are a total of \(N\) samples in the test set. When RMSE is used for SoH estimation, the output PCL is transformed into SoH based on eq. (1).
Moreover, we also adopt the Root Mean Square Percentage Error (RMSPE) as a metric, since the scales of SoH and RUL can be quite distinct. RMSPE is formulated as: \[\mathrm{RMSPE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\hat{u}_{i}-u_{i}}{u_{i}}\right)^{2}}\times 100\%. \tag{26}\] The division of training and test data has been detailed above. Then 20% of the training-validation data are set aside for validation in cases A and B, and 25% in case C; the rest of the training-validation data is ultimately used for training. Several general settings are listed in Table I. Other critical hyper-parameters, such as the number of hidden layers and neurons, are further tuned depending on the performance on the validation data.

Figure 5: Extracted features of cell #124.

Figure 6: SoH estimation of cell #124 in case A.

Figure 7: SoH estimation of cell #116 in case B.

To reduce the number of hyper-parameters to be tuned, we tune them based on the baseline models as far as possible. The optimal parameters obtained for the baseline models are then applied to the proposed models; it is worth mentioning that those parameters may not be the optima for the proposed models. For instance, the optimal network structures (the number of hidden layers and neurons) are determined based on the vanilla data-driven NN illustrated in Fig. 1 and then used for both PINN-Verhulst and PINN-DeepHPM. It has been revealed that the performance of discovering dynamics with a DeepHPM can be sensitive to the input terms [18, 16, 17], so which terms should be involved needs further exploration. We organize various combinations of the input features, the monitoring time, and the partial derivatives with respect to them as a candidate library. Similarly, simply summing the multiple losses in eq. (13) when training the PINNs is set as a further baseline. The optimal combination of input terms in the library for the DeepHPM is determined with this baseline and then used to validate AdpBal for the multiple losses. The calculated RMSPEs for SoH estimation and RMSEs for RUL prognostics on the validation set, in terms of the number of hidden layers and neurons per layer, are listed in Tables IV, V, VI, VII, and VIII in the Appendix. The validation RMSPEs and RMSEs in terms of the combinations of inputs to the DeepHPM are listed in Table IX. Accordingly, the settings for the different cases are determined and summarized in Table II. With these settings, the proposed models are all trained in a regular computation environment (Intel Xeon E5-2630 CPU, 2.2 GHz; Nvidia GeForce GTX 1080 Ti GPU). The training-test process is repeated for 5 rounds in each case, and the results are averaged to reduce randomness. The results of SoH estimation and RUL prognostics are summarized in Table III.

### _SoH Estimation_

The two categories of dynamic models, i.e., the Verhulst model and the DeepHPM, are used to estimate the SoH, representing the two cases of whether an explicit dynamic model is available. In case A, the calculated RMSPEs of SoH estimation on the validation data for different combinations of the number of hidden layers and neurons are listed in Table IV. The optimal network structure for the data-driven baseline model is 2 hidden layers with 128 neurons per hidden layer. With this setting, the baseline model yields a 0.42% estimation error in terms of RMSPE on the test data.
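For reference, the error metrics reported throughout this section follow eqs. (25) and (26) and could be computed as in the short sketch below; the SoH mapping in the final comment is an assumption of the common convention, with the exact relation given by eq. (1):

```python
import numpy as np

def rmse(u_hat, u):
    # Eq. (25): root mean square error.
    return np.sqrt(np.mean((u_hat - u) ** 2))

def rmspe(u_hat, u):
    # Eq. (26): root mean square percentage error, in %.
    return np.sqrt(np.mean(((u_hat - u) / u) ** 2)) * 100.0

# For SoH estimation, the predicted PCL is first mapped back to SoH before
# scoring; SoH = 1 - PCL is assumed here as the common convention (cf. eq. (1)).
```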
PINN-Verhulst yields a 1.41% RMSPE when simply summing the multiple losses; with AdpBal tuning the weight coefficients of the losses, a 0.49% RMSPE is obtained. When no prior explicit dynamic model is available, the inputs of the DeepHPM are set as \(\mathbf{x},t\) when discovering the dynamic model, according to Table IX (these validation errors are calculated by simply summing the multiple losses). The resulting estimation errors are 0.43% and 0.47% without and with AdpBal, respectively.

Figure 8: Variation of relative weighting coefficients by using adaptive balancing in the model training. (a)-(e) Respective variation of relative weighting coefficients in each round.

In case B, the network structure is set as 2 hidden layers with 64 neurons per hidden layer according to Table V. With this setting, the baseline estimation error is 0.56%. PINN-Verhulst (Sum), PINN-Verhulst (AdpBal), PINN-DeepHPM (Sum), and PINN-DeepHPM (AdpBal) provide 0.56%, 0.44%, 0.51%, and 0.42% estimation errors in terms of RMSPE, respectively. The SoH estimation is further illustrated in Figs. 6 and 7. The data-driven baseline method without model fusion already performs satisfactorily for SoH estimation, especially in case A. When fusing the improved Verhulst model, simply summing the losses brings no significant improvement over the baseline, and the estimation accuracy even declines significantly in case A. With AdpBal, the performance is improved instead. Compared with case A, the improvement is larger in case B; the reason may be that the numbers of hidden layers and neurons are optimal for the baseline but probably not for the proposed methods. When the DeepHPM is used to discover the governing dynamics, similar results are observed, which verifies the ability of the DeepHPM to discover dynamics and the robustness of using AdpBal for training the PINN. With AdpBal, the relative values of the weighting coefficients over the training epochs of case B in all 5 rounds are illustrated in Fig. 8. The relative weighting coefficients vary in similar ways across the different training rounds, which also shows the robustness of AdpBal. Moreover, the proposed methods are compared with a Gaussian Process Regression (GPR)-based method [30]; it is worth mentioning that partial onboard test data are further used to calibrate this GPR-based method (GPR Onboard) in [30]. This comparison is shown in Table III. As can be seen, the proposed methods all perform better than this comparative study, even without using any onboard test data.

### _RUL Prognostics_

Since there is no prior explicit dynamic PDE model correlating the predicted RUL to the input features and the monitoring time, only the DeepHPM is employed here to discover the dynamics. The results are summarized in Table III. The predicted and actual RUL of the test cells #124 and #116 are illustrated in Figs. 9 and 10. In case A, the data-driven prognostic model yields an error of 46.38 cycles in terms of RMSE. With physics-informed losses based on the DeepHPM discovery, the prediction error is 48.81 cycles; with AdpBal turned on, the predictive RMSE is 45.86 cycles. The prognostic error is thus slightly reduced, by 6.04% ((48.81\(-\)45.86)/48.81\(\times\)100%), after the weighting coefficients are tuned automatically. In case B, the prediction RMSEs of the baseline, PINN-DeepHPM (Sum), and PINN-DeepHPM (AdpBal) are 64.48, 65.52, and 56.29 cycles, respectively.
The prognostic error is significantly reduced, by 14.09% ((65.52\(-\)56.29)/65.52\(\times\)100%), after the weighting coefficients are tuned automatically. Similar to SoH estimation, the proposed method shows a minor improvement in case A and a major improvement in case B. As a result, the generalizability of the proposed methods from SoH estimation to RUL prognostics is verified. In case C, the RUL prognostic error is reduced by 6.7% ((15.21\(-\)14.19)/15.21\(\times\)100%) by introducing the DeepHPM. However, the performance deteriorates when using AdpBal. A possible reason is that the cells in case C span multiple operating conditions, which severely impacts the homoscedasticity of the uncertainty among the cell measurements.

Fig. 9: RUL prognostics of cell #124 in case A.

Fig. 10: RUL prognostics of cell #116 in case B.

Moreover, the proposed methods are compared with a Long Short-Term Memory (LSTM) NN [49] and a Bayesian Deep Learning (DL) framework incorporating uncertainty quantification and calibration [49]. It can be seen from Table III that the fused PINN-DeepHPM outperforms those LSTM-based methods. LSTMs commonly fit temporal information better than vanilla NNs since they capture dynamics through difference equations; the fusion with the DeepHPM enables the vanilla NN to capture the dynamics as well.

## VI Conclusion

In this article, we propose a new model fusion framework for Li-ion battery PHM based on the Physics-Informed Neural Network. PINN provides a flexible form for fusing empirical or physical dynamic models with data-driven surrogate NNs. With the established dynamic model, the rate of capacity degradation can be quantified with respect to the operating cycles and the current SoH of the monitored cells. Generalizing the dynamic model to a PDE form allows different cells to be fitted with the same combination of parameters. The additional parameter represents the initial SEI formation, which is a common cause of capacity loss in the initial cycles; with this setting, the model is more consistent with the physical characteristics of battery degradation. The DeepHPM provides a specialized model to discover the governing dynamics of battery degradation when prior information is lacking. With the uncertainty-based weighting method, the losses of the multiple learning tasks can be adaptively balanced when training the PINN. Experiments on a public dataset verify the effectiveness of the proposed methods. The results show that the proposed model fusion scheme can improve the performance of PHM for Li-ion batteries, but an appropriate weighting method is essential to balance the multi-task losses when training the PINN.

## Appendix A Performance of Different Settings on the Validation Data

### _NN Structures for SoH Estimation_

For different numbers of hidden layers and neurons per layer, the calculated RMSPEs for SoH estimation on the validation set in the respective cases are listed in Tables IV and V. These results are calculated using the vanilla NN (Baseline). After the network structures are determined according to Appendices A-A and A-B, the remaining results are calculated using the PINN-DeepHPM by simply summing the losses (Sum).
2301.04510
Time of Arrival Error Estimation for Positioning Using Convolutional Neural Networks
Wireless high-accuracy positioning has recently attracted growing research interest due to the diversified nature of applications such as industrial asset tracking, autonomous driving, process automation, and many more. However, obtaining highly accurate location information is hampered by challenges due to the radio environment. A major source of error for time-based positioning methods is inaccurate time-of-arrival (ToA) or range estimation. Existing machine learning-based solutions to mitigate such errors rely on propagation environment classification hindered by a low number of classes, employ a set of features representing channel measurements only to a limited extent, or account for only device-specific proprietary methods of ToA estimation. In this paper, we propose convolutional neural networks (CNNs) to estimate and mitigate the errors of a variety of ToA estimation methods utilizing channel impulse responses (CIRs). Based on real-world measurements from two independent campaigns, the proposed method yields significant improvements in ranging accuracy (up to 37%) of the state-of-the-art ToA estimators, often eliminating the need of optimizing the underlying conventional methods.
Anil Kirmaz, Taylan Şahin, Diomidis S. Michalopoulos, Muhammad Ikram Ashraf, Wolfgang Gerstacker
2023-01-11T15:15:15Z
http://arxiv.org/abs/2301.04510v1
# Time of Arrival Error Estimation for Positioning Using Convolutional Neural Networks ###### Abstract Wireless high-accuracy positioning has recently attracted growing research interest due to the diversified nature of applications such as industrial asset tracking, autonomous driving, process automation, and many more. However, obtaining highly accurate location information is hampered by challenges due to the radio environment. A major source of error for time-based positioning methods is inaccurate time-of-arrival (ToA) or range estimation. Existing machine learning-based solutions to mitigate such errors rely on propagation environment classification hindered by a low number of classes, employ a set of features representing channel measurements only to a limited extent, or account for only device-specific proprietary methods of ToA estimation. In this paper, we propose convolutional neural networks (CNNs) to estimate and mitigate the errors of a variety of ToA estimation methods utilizing channel impulse responses (CIRs). Based on real-world measurements from two independent campaigns, the proposed method yields significant improvements in ranging accuracy (up to 37%) of the state-of-the-art ToA estimators, often eliminating the need of optimizing the underlying conventional methods. Time-of-arrival estimation, high accuracy positioning, convolutional neural networks. ## I Introduction Location information is vital for many applications across various domains including the industrial internet-of-things (IIoT), emergency services, transportation, and many more. Some of these applications, such as industrial asset tracking, autonomous driving and process automation, require highly accurate position estimation, as emphasized in 3GPP [1]. Location information can be obtained by various approaches including time-based, angle-based and fingerprinting-based techniques using radio signals. One of the major approaches in the widely utilized time-based positioning is to estimate the time-of-arrival (ToA) of the received positioning signals. Combined with the time-of-transmission (ToT), i.e., the time when the radio signal is sent from the transmitter, the ToA is used to calculate the time-of-flight (ToF), i.e., the time it takes for the radio signal to travel from transmitter to receiver. The range between transmitter and receiver can then be estimated from the ToF, since radio signals travel at a known speed, i.e., the speed of light, and can be utilized for positioning. The accuracy of ToA estimation is limited by various factors such as challenging propagation conditions, synchronization errors, measurement inaccuracies and limitations in radio resources. Some of these factors, such as hardware properties and limited radio bandwidth, are strictly determined by cost or regulatory limitations and are more difficult to eliminate. However, some others that are related to the propagation environment may be detected and mitigated to some degree by suitable post-processing, especially when a large radio bandwidth, e.g., that of an ultra-wideband (UWB) transmission, is available. Among the propagation-environment-related factors, non-line-of-sight (NLOS) propagation is one of the primary error sources in time-based positioning methods, since it decorrelates the ToF and the distance between transmitter and receiver. Various approaches have been proposed to improve the accuracy of time-based positioning techniques through identifying or mitigating the effect of propagation conditions on positioning.
Binary classification of the propagation environment has commonly been studied in the form of line-of-sight (LOS) versus non-line-of-sight (NLOS) classification, using hypothesis testing based on probabilistic models [2], supervised machine learning (ML) [3, 4] and unsupervised ML [5, 6]. Furthermore, multi-class classification has been proposed by dividing NLOS propagation into two sub-classes depending on the partial or full blockage of the LOS path [7, 8], by adding a _multipath_ class to the binary classification problem [9], or by classifying the material of the LOS-blocking objects [10]. Even though the classification approach can improve the ranging or positioning accuracy by utilizing only the favorable, i.e., LOS, measurements [3, 4, 5], discarding NLOS measurements might lead to poor positioning performance when the number of available measurements is low. Moreover, such classification methods may not utilize the full information present in the measurements, since the number of classes might be insufficient to fully describe the severity of the NLOS propagation in the measurements. Ranging error mitigation by processing various features extracted from a received UWB waveform has been studied by utilizing support vector machines and Gaussian process estimators [8, 11], or by fuzzy comprehensive evaluation along with propagation channel identification [12]. Although these methods were reported to yield an improvement in ranging, the predetermined features extracted from the received waveform might not represent all information in the received waveform with respect to the ranging error. Such information loss was overcome in [3, 13], where the ranging error was estimated directly from a given channel impulse response (CIR) by using artificial neural network (ANN) estimators. However, only a specific UWB measurement and ranging device (DWM1000 [14]), which utilizes a proprietary ranging algorithm, was considered. Although a leading-edge detection method was mentioned to be used for ToA estimation in [14], details on the adopted detection algorithm were not provided. In [15], ToA estimation via convolutional neural networks (CNNs) was studied, and the corresponding performance was compared with that of some conventional, i.e., non-ML, ToA estimators. However, the CNNs were trained mainly on simulation data, and ToA _error_ estimation, which can provide a measure of the reliability of the ToA estimation, was not studied. In this paper, we investigate the problem of _estimating the errors of various ToA estimators from a given CIR_. The estimated errors can then be mitigated to improve the ranging accuracy and, thereby, the performance of a positioning system. The main contributions of this paper are as follows: * We propose a novel CNN-based scheme to estimate and mitigate the errors of various conventional ToA estimation algorithms of different computational complexity, such as inflection point estimation (IFP) [16] and peak detection [17], and compare their performance to that of leading-edge detection (LDE) [18] and the DWM1000 module [14] for a given CIR. * We analyze the error mitigation performance of the proposed CNN estimator for both optimized and suboptimal versions of the underlying ToA estimation algorithms. * We evaluate the performance on two independent real-world datasets to ensure that the results are not specific or biased to a single measurement campaign.
The analysis in this paper demonstrates that the proposed CNN-based error mitigation scheme significantly improves the accuracy of the underlying conventional ToA estimators, even if they are already improved by a basic error mitigation method. Furthermore, the proposed method is shown to provide a robust ranging performance in case the parameters of the underlying conventional ToA estimators are suboptimal. ## II System Description The considered scheme is a two-step process. In the first step, an initial ToA estimation is obtained from a given CIR by one of the conventional methods listed in Section II-B1. In the second step, the initial ToA estimate and the CIR are input to an ANN to estimate the error of the initial ToA estimation. This information is then utilized to mitigate the error of the initial ToA estimation, according to \[\widehat{\text{ToA}}^{\prime}=\widehat{\text{ToA}}_{\text{conventional}}-\widehat{e}_{\text{ToA}}, \tag{1}\] where \(\widehat{\text{ToA}}_{\text{conventional}}\), \(\widehat{e}_{\text{ToA}}\) and \(\widehat{\text{ToA}}^{\prime}\) represent the initial ToA estimated by a conventional method, the estimated error of the conventional ToA estimate and the mitigated ToA, respectively. ### _CIR and ToA Estimation_ A CIR characterizes the communication channel and contains information on the travel time of radio signals from transmitter to receiver. Transmitted signals might arrive at the receiver via different paths, e.g., direct, reflected, or diffracted paths. The ToA represents the arrival time of the first arriving signal at the receiver and can be determined from a given CIR. ### _Baseline Methods_ #### II-B1 Conventional ToA Estimators In this work, we consider the widely used conventional ToA estimators, namely Peak, IFP and LDE, as well as DWM: * _Peak_: The delay time of the first peak of the CIR above a noise threshold is considered as the ToA [17]. * _IFP_: The delay time of the first point above a noise threshold where the CIR concavity changes [16] is estimated as the ToA. * _LDE_: The CIR is filtered by a moving average window whose output is further passed through two different moving maximum window filters in parallel. The first delay time above a noise threshold where the output of the smaller maximum window filter exceeds the output of the larger maximum window filter by a factor, i.e., the leading-edge detection factor, is determined as the ToA [18]. * _DWM_: The ToA is estimated by the DWM1000 device. The DWM estimates used in this paper are taken from the publicly available datasets [3, 19]. Although a leading-edge detection method was mentioned to be used for the ToA estimation in the device's user manual [14], the details of the DWM1000's internal estimation algorithm are not provided. For Peak, IFP, and LDE, we define the noise threshold in terms of the relative path strength, similar to [20], formulated as \[\gamma_{th_{i}}=\alpha\,\text{max}\{\text{CIR}_{i}\} \tag{2}\] with the noise threshold factor \(\alpha\). LDE has three additional parameters, namely the leading-edge detection factor and the sizes of the small and large windows. The parameters of Peak, IFP and LDE are optimized by an exhaustive search to yield the lowest mean absolute ToA error.
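For illustration, the sketch below shows one plausible way to implement the threshold of eq. (2) together with the Peak and IFP detectors. It is not the papers' reference code: the exact peak-finding and concavity-change logic (tie-breaking, smoothing, index alignment) are assumptions:

```python
import numpy as np

def noise_threshold(cir, alpha):
    # Eq. (2): threshold as a fraction alpha of the strongest path.
    return alpha * np.max(cir)

def toa_peak(cir, alpha):
    """Peak: index of the first local maximum above the noise threshold."""
    gamma = noise_threshold(cir, alpha)
    for i in range(1, len(cir) - 1):
        if cir[i] > gamma and cir[i - 1] < cir[i] >= cir[i + 1]:
            return i
    return None  # no peak found above the threshold

def toa_ifp(cir, alpha):
    """IFP: first index above the threshold where the concavity changes,
    detected via a sign change of the discrete second difference."""
    gamma = noise_threshold(cir, alpha)
    d2 = np.diff(cir, n=2)  # second difference; d2[i] relates to cir[i..i+2]
    for i in range(1, len(d2)):
        if cir[i + 1] > gamma and np.sign(d2[i]) != np.sign(d2[i - 1]):
            return i + 1
    return None
```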
#### II-B2 Benchmark ToA Error Mitigation Method

In addition to the described conventional ToA estimators, we consider a benchmark scheme to estimate the _error_ of the ToA estimation conducted by these conventional methods. Denoted by _CnstAvg_, this benchmark models the ToA error as constant, given by the mean error of the respective conventional ToA estimator. Following the estimation of the ToA error, the error can be mitigated according to (1). ### _Ranging Based on ToA Estimation_ The range, i.e., the distance between the tag and anchor, can be estimated by multiplying the mitigated ToA by the speed of the radio signals, i.e., the speed of light, according to \[\widehat{R}=c(\widehat{\text{ToA}}^{\prime}-\text{ToT}), \tag{3}\] where \(c\) and \(\widehat{R}\) represent the speed of light and the estimated range, respectively. ToT in (3) can be eliminated by using a two-way-ranging or a time-difference-of-arrival scheme. Subsequently, positioning of a target device can be performed by utilizing the range estimates with respect to multiple anchors with known locations. As a result, improving the accuracy of the ToA estimates, i.e., through the error mitigation, yields improved ranging and, thereby, more accurate positioning. ## III Proposed Method ### _ToA Error Mitigation Using ANNs_ The complex nature of NLOS or multipath propagation poses a challenge to accurately modelling the ToA estimation error based on an input CIR. Therefore, an ANN is a sensible choice for modelling the error of the ToA estimation. We employ a one-dimensional CNN similar to [3, 13] to estimate the error of the conventional ToA estimators based on the input CIR, since CNNs have been shown to be useful in identifying spatial correlations among the input samples [21]. Besides the CIR, \(\widehat{\text{ToA}}_{\text{conventional}}\) is also input to the CNN. The output of the CNN, \(\widehat{e}_{\text{ToA}}\), is then used to mitigate the error of the conventional ToA estimator according to (1). The utilized CNN comprises 3 convolutional layers followed by a fully connected layer. 16 output channels are used in each convolutional layer with a kernel size of 5 and a stride of 2, and no pooling layer is used in order to avoid a potential information loss. The rectified linear unit (ReLU) is used as the activation function in each neuron except for the output layer, and dropout regularization with a factor of 0.5 is utilized to prevent over-fitting. The CNNs are trained by using the Adam optimizer [22] with a learning rate of \(10^{-3}\) and a batch size of 32 to minimize the mean-squared error (MSE) between the estimated and the real ToA error. The parameters of the CNN estimator are optimized using the training and validation data. It was observed that further increasing the number of hidden layers or the number of output channels does not result in a significant additional performance gain. ### _Dataset Description and Pre-processing_ #### III-B1 Datasets We have used two publicly available datasets comprising real-world UWB measurements, which we refer to as _Office_ and _Room_. The Office dataset, given in [3], pertains to two different office environments, _Office1_ and _Office2_. The Room dataset, described in [13] and given in [19], comprises measurements taken in office-like rooms of different dimensions. The measurements in both datasets are taken with 499.2 MHz of bandwidth at a center frequency of 3993.6 MHz. It is assumed that the propagation channel between transmitter and receiver is reciprocal, i.e., identical, for the forward and backward transmit directions, and that the channel coherence time is larger than the reply time of the applied two-way ranging system.
Such assumptions are realistic and required since a single CIR is provided per two-way ranging in the datasets.

#### III-B2 ToA Labeling

The ToA delay time estimated by DWM, \(\widehat{\text{ToA}}_{\text{DWM}}\), the corresponding ranging error, \(\epsilon_{R}\), and the time resolution of the CIR (i.e., the absolute time lapse between consecutive CIR indices), \(\delta_{t}\), are given (or can be obtained) from the datasets [3, 19]. Utilizing this information, we determine the ground-truth ToA indices, i.e., the ToA labels, according to \[\text{ToA}_{\text{true}}=\widehat{\text{ToA}}_{\text{DWM}}-\frac{\epsilon_{R}}{c\,\delta_{t}}. \tag{4}\] As such, the ranging error is converted into a ToA error, which is subtracted from the estimated ToA to determine the true ToA. It should be noted that labeling the real ToA in real-world CIR measurements is challenging, and the introduced labeling may contain errors due to clock drift, finite bandwidth and finite sampling rate.

#### III-B3 Data Pre-processing

Only 152 (out of 1016) samples after the first detected path were considered for each CIR in [3], whereas an additional 5 CIR samples prior to the detected first path were also considered in [13], yielding CIRs with 157 samples. We further add a random number of noise-like samples (at most 30) prior to each CIR, shifting the CIRs randomly along the time axis to eliminate a potential bias, and apply padding to the end of the CIRs accordingly, yielding CIRs with 187 samples, as shown in Fig. 2.

Fig. 1: Flow diagram and the naming of the considered ToA estimators.

Fig. 2: Randomly shifted and padded CIRs using the described pre-processing.

The ToA labels are shifted together with the CIRs, i.e., by the same amount. Each CIR is normalized by its maximum value before being input to the proposed CNN estimator, to prevent a potential bias that might be caused by varying absolute amplitudes of the CIR samples. The datasets are divided into training, validation and test data for the CNN. Further, to enable a fair comparison, the training and validation data are used together to optimize the parameters of the conventional ToA estimators and the benchmark error mitigation method. The test data are selected from measurements taken in a different environment (i.e., another office or a room of a different size) than the training and validation data, to assess the generalizability of the results. This approach is in line with the recent 3GPP agreements on evaluating the generalization performance of ML models used for positioning [23]. Training and validation data comprise 70% and 30% of the measurements belonging to the same environment, respectively, resulting in approximately 5000 training samples in each scenario for the Office dataset. To make a fair comparison between the two datasets, we also use approximately 5000 training samples for each scenario in the Room dataset. It is noted that the Office dataset includes repeated measurements taken from each anchor-tag location pair, i.e., not every training sample is associated with a different anchor-tag location pair, unlike the Room dataset.

## IV Performance Evaluation

In this section, we present performance results based on real-world measurements for the proposed (CNN) and the benchmark (CnstAvg) ToA error mitigation methods as well as the conventional, i.e., unmitigated, ToA estimators (LDE, IFP, Peak, DWM). The naming of the estimators considered in this paper is shown in Fig. 1. We utilize the PyTorch framework to train the CNN.
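A minimal PyTorch sketch consistent with the architecture described in Section III-A is given below. The paper does not detail how the conventional ToA estimate is combined with the CIR features; concatenating it before the fully connected layer is an assumption, as are the single-channel magnitude input and the dropout placement:

```python
import torch
import torch.nn as nn

class ToAErrorCNN(nn.Module):
    """Sketch of the described estimator: three 1-D conv layers (16 channels,
    kernel 5, stride 2, no pooling) with ReLU, dropout of 0.5, and a fully
    connected output producing the estimated ToA error."""
    def __init__(self, cir_len=187):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, stride=2), nn.ReLU(),
        )
        with torch.no_grad():  # infer the flattened conv-feature size
            n_feat = self.conv(torch.zeros(1, 1, cir_len)).numel()
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(n_feat + 1, 1)  # +1 for the conventional ToA estimate

    def forward(self, cir, toa_conv):
        # cir: (batch, 1, cir_len) normalized magnitude CIR (assumed);
        # toa_conv: (batch, 1) initial ToA estimate.
        h = self.conv(cir).flatten(1)
        h = self.drop(torch.cat([h, toa_conv], dim=1))
        return self.fc(h).squeeze(1)  # estimated ToA error

# Training would follow the stated setup: Adam, lr 1e-3, batch size 32,
# minimizing the MSE between estimated and labeled ToA errors.
```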
The results are generated based on 10 random selections of the training and test measurement samples for each scenario to average out potential variations across data chunks. ### _Ranging Accuracy Evaluation_ As the evaluation metric, we consider the absolute ranging error, \(\epsilon_{|R|}\), given by \[\epsilon_{|R|}=|\hat{R}-R_{\text{true}}|, \tag{5}\] where \(R_{\text{true}}\) denotes the real range obtained from the datasets [3, 19]. We provide the CDF of the ranging error for the different ToA estimation schemes in Figs. 3 and 4, for the Office and Room datasets, respectively. It can be observed from Figs. 3-4 that the proposed CNN-based error mitigation scheme improves the accuracy of the conventional ToA estimators. The improvement in 90th percentile ranging error varies between 19-74% and 4-38% for the Room and Office datasets, respectively, depending on the utilized conventional ToA estimator. The smaller improvement for the Office dataset can be explained by the fact that the Office dataset contains repeated measurements taken for the same anchor-tag location pairs, unlike the Room dataset. As a result, fewer measurements are available for _unique_ anchor-tag location pairs, leading to an insufficient amount of unique data for training the CNN. Furthermore, all methods perform worse on the Office dataset than on the Room dataset despite the same measurement and ranging module, DWM1000, being used. This can be explained by the different propagation environments, i.e., the propagation environment of the Office dataset might be more challenging, or by a discrepancy in the calibration of the DWM1000 module, e.g., the antenna delay calibration. Comparing the two error mitigation methods, i.e., CNN and CnstAvg, the proposed CNN-based method yields a considerably better performance than CnstAvg in most cases, and a similar performance in the worst case, depending on the underlying conventional ToA estimator. The gain of the CNN estimator over the CnstAvg estimator lies between 16-37% and 3-16% in 90th percentile ranging accuracy for the Room and Office datasets, respectively. Our performance evaluation also enables a comparison of the conventional ToA estimators from the literature. Figures 3(a) and 4(a) show that LDE outperforms IFP and Peak. Peak shows the worst performance in both datasets, possibly due to the susceptibility of peak detection to multipath propagation [18, 24]. ### _Comparison with DWM_ Figure 3(a) shows that DWM outperforms LDE slightly, whereas LDE has a marginally better performance than DWM according to Fig. 4(a). The similar performance of DWM and LDE can be explained by the fact that a leading-edge detection method was utilized by the DWM1000 device.

Fig. 3: The CDF of ranging error of the proposed CNN-based estimator in comparison to (a) conventional ToA estimators and (b) benchmark CnstAvg estimators, and comparison of (c) 90th percentile and (d) mean absolute ranging error of the considered schemes.

Another observation is that CnstAvg degrades the performance of DWM, i.e., DWM+CnstAvg performs worse than DWM, in mean absolute ranging error for the Office dataset. This can be explained by the fact that the average ToA estimation error of DWM is substantially different for Office1 and Office2, i.e., for the training and test data. The accuracy comparison of DWM+CNN and LDE+CNN shows contradicting results between the datasets, similar to the comparison between DWM and LDE: LDE+CNN outperforms DWM+CNN for the Office dataset, while DWM+CNN has the superior performance for the Room dataset.
The underlying reason might be a discrepancy in the calibration of the DWM1000 device between the two measurement campaigns. The details of the DWM1000's internal estimation algorithm were provided neither in the device's user manual [14] nor in the descriptions of the measurement campaigns [3, 13]. Therefore, it is difficult to draw further conclusions regarding the performance of the DWM-related estimators. ### _Effect of Utilizing Sub-optimal Conventional Methods_ Various approaches can be used to optimize the parameters of the conventional methods. For instance, as an alternative to selecting the noise threshold in terms of the relative path strength [20], it can also be determined in terms of the thermal noise [11]. Additionally, the number and density of the candidate values in an exhaustive or grid search might yield different optimized parameters. As a result, the parameters of the utilized conventional ToA estimators can be suboptimal. In Table I, we provide results on the impact of optimizing the conventional ToA estimators; such impact could not be evaluated for DWM since it is based on a proprietary detection algorithm. It can be observed from Table I that the performance of the conventional ToA estimators heavily depends on the parameter optimization for the measurements in both datasets. The proposed CNN estimator provides robust range estimation in case the utilized conventional ToA estimators are not carefully optimized. Specifically, using the proposed CNN estimator, the loss in ranging performance due to suboptimal parameters of the conventional ToA estimators is at most 8 cm at the 90th percentile for both datasets, compared to 21 cm for CnstAvg and 37 cm for the conventional ToA estimators. ### _Complexity Analysis_ Finding the peaks of the input CIR dominates the computational complexity of Peak, requiring \(O(N)\) operations, where \(N\) denotes the length of the CIRs. The complexity of IFP is mainly determined by the calculation of the gradient, where a subtraction and a division are performed for each element, yielding a complexity of \(O(N)\). LDE is composed of a moving average filter followed by two moving maximum filters, where the outputs of the two moving maximum windows are compared element-wise. The window size is constant in all three filters, and each window is shifted through the CIR, yielding an overall complexity of \(O(N)\). Each one-dimensional convolutional layer of the proposed CNN has a constant filter size, and a constant number of filters is shifted along the input CIR. The subsequent single fully connected layer maps the output of the last convolutional layer to a scalar, resulting in an overall complexity of \(O(N)\). Although the dependence of the complexity on the input CIR size is linear for all considered estimators, their constant factors differ.
TABLE I: 90th percentile absolute ranging errors of the considered ToA estimators, and the increase in ranging error due to suboptimal (underlying) conventional ToA estimators.

| | ToA (error) estimation method | Office dataset (cm) | Room dataset (cm) |
|---|---|---|---|
| Optimized parameters | LDE | 71 | 27 |
| | LDE+CnstAvg | 65 | 29 |
| | LDE+CNN | 62 | 22 |
| | Peak | 117 | 85 |
| | Peak+CnstAvg | 75 | 35 |
| | Peak+CNN | 73 | 22 |
| | IFP | 81 | 35 |
| | IFP+CnstAvg | 81 | 35 |
| | IFP+CNN | n/a | n/a |
| Increase due to suboptimal parameters | LDE | +28 | +12 |
| | LDE+CnstAvg | +12 | +3 |
| | LDE+CNN | +8 | +0 |
| | Peak | +31 | +22 |
| | Peak+CnstAvg | +18 | +13 |
| | Peak+CNN | +4 | +7 |
| | IFP | +37 | +34 |
| | IFP+CnstAvg | +11 | +21 |
| | IFP+CNN | +6 | +6 |

Fig. 4: The CDF of ranging error of the proposed CNN-based estimator in comparison to (a) conventional ToA estimators and (b) benchmark CnstAvg estimators, and (c) 90th percentile and (d) mean absolute ranging errors of the considered schemes.

Table II shows the time complexity of inference for the estimators, implemented using the PyTorch, numpy and scipy libraries of the Python programming language and run on a computer equipped with an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz and 24 GB of RAM. The additional latency caused by the proposed CNN-based error mitigation scheme is comparable to the latency of the widely used LDE estimator.

## V Conclusions

In this paper, we have proposed a supervised ML approach based on CNNs for estimating the error of conventional ToA estimators. These estimates are in turn used to mitigate such errors and thereby improve the ranging accuracy. We have evaluated the performance of the proposed methods using real-world measurements collected in various environments. We first observed that the performance of the conventional ToA estimators differs significantly from one estimator to another, and further requires careful optimization of their parameters for an improved performance. While the errors of the conventional ToA estimators could be partly mitigated by a simple benchmark mitigation scheme, such an approach might even result in a worse performance in some cases. As an alternative, the proposed CNN-based error mitigation method can improve the ranging accuracy of the conventional ToA estimators with an acceptable amount of added latency. The proposed estimator was shown to outperform the benchmark error mitigation scheme by up to 37% (16-37% and 3-16% for the Room and Office environments, respectively) in 90th percentile ranging accuracy. In addition, it was shown that the proposed CNN estimator provides a robust ranging performance, with less than 8 cm of additional 90th percentile ranging error, in case the parameters of the underlying ToA estimators are suboptimal.
Thus, the CNN estimator can eliminate the necessity of carefully optimizing the underlying conventional ToA estimators, depending on the accuracy requirements. In this way, the proposed method offers an attractive solution for improving the ranging accuracy, providing a robust performance under different conventional ToA estimation algorithms and across various propagation environments. In addition to the proposed use of ML to mitigate the error of conventional ToA estimators, ML methods can also be applied to estimate the ToA _directly_, i.e., without requiring a conventional ToA estimator. Further research is needed to compare the performance of these two approaches.
2305.15698
Rethinking Diversity in Deep Neural Network Testing
Motivated by the success of traditional software testing, numerous diversity measures have been proposed for testing deep neural networks (DNNs). In this study, we propose a shift in perspective, advocating for the consideration of DNN testing as directed testing problems rather than diversity-based testing tasks. We note that the objective of testing DNNs is specific and well-defined: identifying inputs that lead to misclassifications. Consequently, a more precise testing approach is to prioritize inputs with a higher potential to induce misclassifications, as opposed to emphasizing inputs that enhance "diversity." We derive six directed metrics for DNN testing. Furthermore, we conduct a careful analysis of the appropriate scope for each metric, as applying metrics beyond their intended scope could significantly diminish their effectiveness. Our evaluation demonstrates that (1) diversity metrics are particularly weak indicators for identifying buggy inputs resulting from small input perturbations, and (2) our directed metrics consistently outperform diversity metrics in revealing erroneous behaviors of DNNs across all scenarios.
Zi Wang, Jihye Choi, Ke Wang, Somesh Jha
2023-05-25T04:13:51Z
http://arxiv.org/abs/2305.15698v2
# Rethink Diversity in Deep Learning Testing ###### Abstract Deep neural networks (DNNs) have demonstrated extraordinary capabilities and are an integral part of modern software systems. However, they also suffer from various vulnerabilities such as adversarial attacks and unfairness. Testing deep learning (DL) systems is therefore an important task, to detect and mitigate those vulnerabilities. Motivated by the success of traditional software testing, which often employs diversity heuristics, various diversity measures on DNNs have been proposed to help efficiently expose the buggy behavior of DNNs. In this work, we argue that many DNN testing tasks should be treated as directed testing problems rather than general-purpose testing tasks, because these tasks are specific and well-defined. Hence, the diversity-based approach is less effective. Following our argument based on the semantics of DNNs and the testing goal, we derive \(6\) metrics that can be used for DNN testing and carefully analyze their application scopes. We empirically show their efficacy in exposing bugs in DNNs compared to recent diversity-based metrics. Moreover, we also notice discrepancies between the practices of the software engineering (SE) community and the DL community. We point out some of these gaps, and hopefully, this can lead to bridging the SE practice and DL findings. software testing, deep learning, gradient methods ## I Introduction DL systems have demonstrated remarkable performance across various machine learning tasks and have also revolutionized modern society [1, 2]. More recently, large language models [3] and stable diffusion models [4] have shown impressive, or even superhuman, capabilities on many generative tasks. Despite the wide application of DL systems nowadays, their decision process remains largely black-box and hard for humans to interpret, and they have also been shown to be vulnerable to various bugs [5, 6, 7]. With the wide deployment of DL systems, especially in safety-critical applications, it is important to ensure the security and trustworthiness of those systems. Meanwhile, DL systems are also software artifacts, and are thus amenable to SE approaches. Testing, one of the classical SE techniques, has been practiced for decades to detect erroneous behaviors of traditional software [8]. By feeding numerous inputs to a program, testing can expose bugs when the execution deviates from the specification. Moreover, the inputs can serve as concrete examples for developers to demonstrate the existence of defects and to understand and debug the program. In traditional software testing, tools often employ the adaptive testing scenario: a repeated loop involving input mutation, program execution, and evaluation of fitness [9, 10]. Inputs with higher fitness scores are prioritized or given more mutation resources in the next loop iterations. The goal is to expose bugs quickly, and the fitness quantifies how likely an input is to expose bugs. As a result, to design an effective testing framework, an important task is to create a good metric that can prioritize potentially defective inputs. Motivated by the idea of adaptive testing, and to extend its utility toward detecting DNN bugs, a natural question arises: what is a good metric that can prioritize potentially erroneous inputs for DNNs? There are two types of testing frameworks: _diversity_-based testing and _directed_ testing.
Diversity-based testing relies on the assumption that diversified tests can execute the program thoroughly, and often expose corner cases that are overlooked by developers. This methodology is suitable for general-purpose testing, where tools aim to expose as many bugs as possible [8, 9]. On the other hand, the goal of directed testing is more specific, such as detecting a _particular type of bug_ rather than any bug. Driven by the specific goal, directed testing prioritizes inputs that are more likely to expose particular bugs rather than inputs that improve the "diversity" [10, 11]. For example, standard fuzz testing prioritizes tests that trigger unseen coverage. However, if the tool is to identify concurrency errors, test executions that are close to the potential data-race code in the program should be considered more promising for exposing concurrency bugs [12]. In recent years, researchers have proposed methods to test DL systems. Many of them are motivated by traditional software testing practices, in particular, to encourage the diversity of inputs. One of the common diversity measures in software testing is the code coverage of executions induced by the inputs [8]. Researchers were motivated by this measure and defined similar metrics for DNNs. They treated neurons as the basic computational components, similar to basic blocks in programs, and defined diversity metrics in terms of neuron activation patterns [13], namely, _Neuron Coverage_ (NC). However, later works argued that NC is _not_ a meaningful metric [14]. To this day, the use of neuron-activation diversity remains controversial: some recent works re-confirm that NC does not bring significant advantage [15], while others still design diversity metrics based on neuron activations [16]. Contrary to existing works that design different diversity metrics for DL testing, we argue that many DL testing tasks in the literature [13, 16] should be treated as directed testing tasks because they are well-defined and specific. We refer to the bugs under detection as _functionality_ bugs because they violate the DNN's functionality. Also, the mathematical semantics of DNNs are simple compared to traditional software, so one can quantify how likely an input is to change the DNN's functionality, and this provides a measure for testing. Therefore, test generation should be guided by this directed measure rather than a diversity measure. Moreover, we suggest that the testing can be further accelerated by exploiting an intrinsic property of DL systems: differentiability. The differentiability of DNNs allows us to derive fast and precise linear approximations in a small neighborhood using gradients. As we will explain, the state-of-the-art \(\ell_{p}\)-adversarial attack from the DL literature, projected gradient descent (PGD) [17], can also be categorized within our testing framework. Moreover, we provide a careful mathematical analysis of gradient-based methods, which validates these methods for \(\ell_{p}\)-transformations, but also _casts doubt_ on their application to other types of transformations. In fact, we notice that there are discrepancies between the practices in the SE and DL communities. Even though DL systems are also programs and many SE techniques can be applied, DL has also been studied extensively outside SE due to its capability and popularity. Many practices in both communities are similar at a high level but still different in many ways. 
We point out some of these gaps, and hopefully, this work can help bridge them. To summarize, we make the following contributions: * We provide a novel conceptual framework for many DL testing tasks, and demonstrate that some state-of-the-art DL practices are natural consequences of this framework. We believe that our work can help clarify the controversial DNN testing topic, i.e., NC maximization, and bridge the gaps between practices in the SE and DL communities. In fact, we point out that, in DL terminology, the NC maximization metric generates out-of-distribution (OOD) inputs. * To instantiate our framework, we provide several concrete testing metrics with careful mathematical analysis. Our unified testing framework allows us to handle various types of input transformations, which past works were unable to do. Moreover, our analysis validates the application of gradients to \(\ell_{p}\)-adversarial attacks but also calls for rigorous scrutiny of gradient methods on other types of input transformations. * We empirically validate our framework with several carefully designed experiments. Moreover, we also compare our framework with several diversity-based methods. The results show that our metrics are more effective in exposing functionality bugs in all settings, and also reveal that the diversity metrics are weak indicators for bugs under small \(\ell_{p}\)-perturbations. ## II Background ### _Mathematical notations and definitions_ Let \([n]=\{1,\ldots,n\}\). For a vector \(v\in\mathbb{R}^{n}\), \(\text{sign}(v)\in\{-1,0,1\}^{n}\) such that \[\text{sign}(v)_{i}=\left\{\begin{array}{ll}1,&v_{i}>0\\ 0,&v_{i}=0\\ -1,&v_{i}<0\end{array}\right.\] \(A^{T}\) denotes the transpose of a matrix \(A\). Let \(||v||_{p}\) denote the \(\ell_{p}\) norm of a vector \(v\): \[||v||_{p}=\sqrt[p]{\sum_{i=1}^{n}|v_{i}|^{p}}.\] The canonical Euclidean norm is then \(||v||_{2}\), and another commonly considered norm on the input is the \(\ell_{\infty}\)-norm: \[||v||_{\infty}=\max_{i\in[n]}|v_{i}|.\] For two functions \(f\) and \(g\), \(f\circ g(x)=f(g(x))\) denotes the composition of \(f\) and \(g\). If \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is differentiable, we use \(D^{i}f\) to denote the \(i\)-th order differentiation of partial derivatives, e.g., \(D^{1}f\) is the gradient of \(f\), and \(D^{2}f\) is the Hessian of \(f\). As a convention, we omit \(1\) when we denote the gradient, i.e., we use \(Df\) to denote the gradient of \(f\). For an infinitely differentiable multivariate function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the Taylor expansion \(T\) of \(f\) at \(a\in\mathbb{R}^{n}\) is \[T(a+\Delta x)=f(a)+(\Delta x)^{T}Df(a)+\frac{1}{2}(\Delta x)^{T}D^{2}f(a)( \Delta x)+\ldots,\] Therefore, \(T(a+\Delta x)\) is an infinite series. If \(f\) is analytic, then \(T(a+\Delta x)\) converges to \(f(a+\Delta x)\)[18]. In this work, we only work with the first two terms of \(T\), i.e., we only consider \[f(a)+(\Delta x)^{T}Df(a). \tag{1}\] Given the information of \(f\) at \(a\), i.e., \(f(a)\) and \(Df(a)\), Equation (1) is the linear approximation of \(f\) at \(a+\Delta x\). This provides an estimate of \(f(a+\Delta x)\) if we do not want to evaluate \(f\) at \(a+\Delta x\) directly, i.e., \(f(a+\Delta x)\approx f(a)+(\Delta x)^{T}Df(a)\). _Remark II.1_.: The quality of the linear approximation depends on how large \(\Delta x\) is, and how close \(f\) is to a linear function. 
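To make Equation (1) concrete, here is a minimal Python sketch; the toy quadratic \(f\) and the function name are our own illustrative choices, not part of the paper:

```python
import numpy as np

def linear_approx(f_a, grad_a, delta_x):
    # First-order Taylor estimate from Equation (1): f(a + dx) ~ f(a) + dx^T Df(a)
    return f_a + delta_x @ grad_a

# Toy analytic function: f(x) = sum(x_i^2), whose gradient is Df(x) = 2x.
a = np.array([1.0, -2.0, 0.5])
dx = np.array([0.01, -0.02, 0.005])

estimate = linear_approx(np.sum(a ** 2), 2 * a, dx)
exact = float(np.sum((a + dx) ** 2))
print(estimate, exact)  # 5.355 vs 5.3555...: close because dx is small (Remark II.1)
```

Making `dx` larger quickly degrades the estimate, which is exactly the scope caveat that the later metrics must respect.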
### _Software testing_ Large software artifacts have been an integral part of modern society for decades and are evolving on a constant basis. Despite its ubiquity, software usually contains various defects, and some of them can raise serious security concerns. Testing remains one of the most efficient ways to demonstrate the presence of bugs. Since modern software is complicated, bugs can occur anywhere in the program. For example, a buffer access in the program could potentially introduce an overflow, and a pointer operation could lead to a NULL pointer dereference. Due to the omnipresence of risky operations in the program, automated testing techniques are commonly guided by diversity heuristics: a diverse set of inputs can execute the program thoroughly and therefore reveal more bugs [8]. One of the diversity measures in testing is code coverage, which can have multiple granularities such as function coverage, basic block coverage, and path coverage. For example, the famous fuzzing tool American Fuzzy Lop (AFL) is a state-of-the-art general-purpose fuzzer that employs the coverage maximization heuristic [9]. With a loop involving input mutation, execution, and evaluation of a coverage-based diversity metric, AFL generates inputs aiming to execute different parts of the program and has successfully detected numerous bugs in various applications. ### _DNNs_ In this work, we mainly focus on feed-forward DNNs, but our methodology is quite general and architecture-agnostic. Here we discuss some basics of DNNs that are needed to elaborate our testing approach. A feed-forward DNN \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{l}\) is a composition of affine transformations and non-linear activation functions: \[f_{1}(x)=W^{(1)}x+b^{(1)};\ f_{i}(x)=W^{(i)}\sigma(x)+b^{(i)},i=2,\ldots,d,\] where \(W^{(i)}\in\mathbb{R}^{n_{i+1}\times n_{i}}\) is the weight matrix between the layers, \(n_{1}=m\) and \(n_{d+1}=l\), \(d\) is the depth of the network, and \(b^{(i)}\in\mathbb{R}^{n_{i+1}}\) is the bias term. \(\sigma\), the activation, is an element-wise non-linear function. \(f=f_{d}\circ\cdots\circ f_{1}\). _Remark II.2_.: In general, a neural network is highly non-linear. Therefore, for the linear approximation to be precise, we would need \(\Delta x\) to be small in Equation (1). In this work, we only consider the standard classification tasks. \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{l}\) has \(l\) outputs. Let \(f^{(i)}\) be the \(i\)-th output of \(f\). The classification of an input \(x\) is \(\mathcal{C}(f,x)=\arg\max_{k\in[l]}f^{(k)}(x)\). Suppose that the prediction of \(x\) is \(i\); then \(f^{(i)}(x)>f^{(j)}(x)\) for all \(j\neq i\). The output of \(f\) is called the logit score, and the classification model outputs the class with the highest logit score. Compared with traditional software, which might contain complicated structures like pointer aliasing, various data structures, and indirect function calls, DNNs are numerical programs with simple mathematical semantics. Therefore, the techniques designed to handle the complexity in traditional software might not be directly applicable to DNNs. In particular, we want to point out that diversity-based methods in software testing are not the ideal testing framework for many DNN testing tasks. ### _DNN testing_ The functionality of a DNN classifier is to classify inputs in alignment with human perception. 
Therefore, functionality bugs in the classifier setting are incorrect classifications, and the testing goal is to generate inputs that are misclassified. However, obtaining tests and manually labeling them is usually a very expensive process. Researchers have proposed metamorphic testing [19], which augments existing inputs with metamorphic relations. Metamorphic testing generates new inputs by transforming existing ones while preserving their labels. For image inputs, many transformations do not change human judgment, such as small perturbations, image rotation, zooming, and brightness change. We refer to these transformations as semantics-preserving transformations. One can augment the original dataset by applying those transformations to labeled inputs. If any of those transformed inputs alters the DNN's prediction, then we have identified a misclassified input. In this work, we focus on identifying misclassified inputs, which we refer to as _functionality testing_. Rather than being manually collected and labeled, these inputs are generated by composing a few small semantics-preserving transformations. Many of the existing works can be categorized as functionality testing [5, 6, 13, 16]: changing the DNN prediction by applying semantics-preserving transformations to the inputs. ### _Diversity in DNN testing_ Inspired by the success of traditional software testing, researchers have devised various diversity measures for DL system testing. One of the first diversity measures is NC [13], similar to the code coverage in traditional software testing, which measures how many neurons are activated above a specified threshold \(k\). The assumption is that the more neurons are activated, the more states of the DNN are explored. Later works refined the definition of NC and proposed several structural variants of NC, including neuron boundary coverage and strong neuron activations [20]. More recently, [16] proposed to use intermediate-layer activation divergence between metamorphic inputs as an indicator to detect prediction violations, arguing that inputs with different intermediate activation patterns should also have divergent predictions. [21] proposed a measure to compute the fault pattern of inputs based on the normalized logit score, and a test selection algorithm that chooses a subset of inputs with diverse fault patterns and higher prediction uncertainty. ### _Directed software testing_ Unlike general-purpose software testing tools such as AFL, directed testing prioritizes inputs that are close to the specific testing goal. For example, a data-race testing tool should prioritize inputs that are more likely to trigger concurrency bugs rather than inputs with unseen coverage [12]. In software testing, this proximity can often be measured statically by distance in code. For example, inputs that trigger concurrency bugs necessarily execute code where data races can occur, so the proximity can be measured by the distance between the execution trace and those race-condition targets in the program. Mutating inputs close to those targets can generate inputs even closer to the targets, eventually reaching the targets and potentially triggering specific bugs [10]. Researchers have developed several directed testing tools, and shown that they are more capable of detecting specific bugs than general-purpose testing tools [11]. 
In fact, we argue that functionality testing of DNNs should conceptually be considered a directed testing task, because bugs within this task are well-defined and specific. ## III Main thesis In this section, we elaborate on the main thesis of this work: Functionality testing of DNNs is a directed testing task. This is because the mathematical semantics of DNNs is simple and the testing goal becomes specific under the semantics. This thesis motivates us to devise a few concrete testing metrics. All of them are directness metrics for functionality testing, and the application scope of each metric depends on the underlying input transformation. In fact, we want to highlight that each metric has its own application scope, and misusing a metric outside its scope can degrade performance. ### _Forward fitness_ We use \(x\) to denote a human-labeled input, and let \(x^{\prime}\) be a generated input obtained from a few small semantics-preserving transformations, similar to input mutation in traditional software testing. We fix a DNN under test: \(f\). Let \(i\in[l]\) be \(x\)'s label. We assume that \(f\) classifies \(x\) correctly, i.e., \(C(f,x)=i\). The goal of testing is to identify a buggy \(x^{\prime}\) such that \(f\)'s classification of \(x^{\prime}\) changes, i.e., \(C(f,x^{\prime})\neq C(f,x)\). We provide two different interpretations of bugs, each of which yields a testing metric. _Remark III.1_.: Here we assume that \(x\) is classified correctly by \(f\), i.e., \(C(f,x)\) matches the ground truth for \(x\). We make this assumption for the following reasons: 1. Modern network architectures have high capacity for fitting data and can usually achieve high test accuracy. 2. Since our goal is to alter the prediction of \(f\) on \(x^{\prime}\), if we accomplish it, at least one of \(x\) or \(x^{\prime}\) is a bug. This is similar to the metamorphic testing methodology [16]. **Margin-based score.** \(C(f,x)=i\) implies \(f^{(i)}(x)>f^{(j)}(x),\forall j\in[l],j\neq i\). We define \[g_{ji}=f^{(j)}-f^{(i)},\] where \(g_{ji}(x)<0\), \(\forall j\neq i\), because \(C(f,x)=i\). To change the prediction of \(f\) on \(x^{\prime}\) that is close to \(x\) semantically, we need \(f^{(j)}(x^{\prime})>f^{(i)}(x^{\prime})\) for some \(j\in[l],j\neq i\). Hence, the fitness function for functionality testing could be \[\max_{j\neq i}(f^{(j)}(x^{\prime})-f^{(i)}(x^{\prime}))=\max_{j\neq i}g_{ji}(x ^{\prime}), \tag{2}\] i.e., any transformed input \(x^{\prime}\) that has the greatest value of \(\max_{j\neq i}(f^{(j)}(x^{\prime})-f^{(i)}(x^{\prime}))\) is considered the most promising one to expose the bug. Intuitively, a transformed input with a higher margin between the other classes and the correct label is considered better and has a greater chance of altering the prediction. We refer to Equation (2) as the _margin score_. **Loss-based score.** When the network is trained, a loss is used to measure the distance between the logit score \(f(x)\) and the ground truth \(i\). The training process minimizes the loss to fit the data to the label, i.e., if this measure is small, \(x\) is likely to be classified correctly. This naturally provides a measure for the bug: if the loss is large, then \(x\) is likely to be misclassified. In classification tasks, the conventional loss is the cross-entropy function [22, chapter 3]: \[CE(f(x^{\prime}),i)=-\log\frac{e^{f^{(i)}(x^{\prime})}}{\sum_{j=1}^{l}e^{f^{( j)}(x^{\prime})}}. \tag{3}\] We refer to this score as the _loss score_. 
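As a concrete reading of Equations (2) and (3), here is a minimal numpy sketch of the two forward scores; the function names are our own, and the log-sum-exp shift is a standard numerical-stability measure rather than something the text prescribes:

```python
import numpy as np

def margin_score(logits, i):
    # Equation (2): max_{j != i} (f^(j)(x') - f^(i)(x'))
    others = np.delete(logits, i)
    return float(np.max(others) - logits[i])

def loss_score(logits, i):
    # Equation (3): cross-entropy of the logits against label i,
    # computed with a max-shift for numerical stability.
    shifted = logits - np.max(logits)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return float(-log_probs[i])

logits = np.array([2.0, 0.5, 1.8])  # toy logit scores, correct label i = 0
print(margin_score(logits, 0))      # -0.2: negative, so x' is still classified as 0
print(loss_score(logits, 0))        # larger values indicate a more promising x'
```

Both scores rank the same underlying quantity, how close \(f\) is to flipping its prediction, which is why they behave similarly in the later evaluation.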
### _Gradient based fitness_ To collect the logit score of input \(x^{\prime}\), one has to execute the DNN on \(x^{\prime}\). This can be expensive, especially when there are many input transformations and the model is large. However, one distinguishing characteristic of DL systems compared to traditional software is that neural networks are differentiable. Recall that Equation (1) defines a linear approximation of a function via gradients. This allows us to quickly estimate the values of the functions without executing the neural networks. More formally, suppose we have an input \(x\) and a transformed input \(x^{\prime}\); we define \(\Delta x=x^{\prime}-x\). From Equation (1), we know that \(f(x^{\prime})\approx f(x)+(\Delta x)^{T}Df(x)\). This is particularly useful when the number of transformations is huge, or even infinite in some cases. We compute the gradient at the original input once; then, for every transformation, we only need to compute the difference in the input domain. **Approximating margin score.** For the margin score in Equation (2), the quantity we want to estimate is \(g_{ji}(x^{\prime})\). We can compute the value of \(g_{ji}\) at \(x\), \(g_{ji}(x)\), and the gradient at \(x\), \(Dg_{ji}(x)\), then compute the quantity: \[\bar{g}_{ji}(x^{\prime})=g_{ji}(x)+(\Delta x)^{T}Dg_{ji}(x).\] \(\bar{g}_{ji}(x^{\prime})\) is a linear approximation of \(g_{ji}(x^{\prime})\). Instead of identifying the input \(x^{\prime}\) that has the greatest value of \(\max_{j\neq i}(f^{(j)}(x^{\prime})-f^{(i)}(x^{\prime}))\), we choose the input \(x^{\prime}\) that has the greatest value of \[\max_{j\neq i}(\bar{g}_{ji}(x^{\prime})). \tag{4}\] We refer to this score as the _backward margin score_, because we need the gradient to compute this approximation, and gradient computation requires backpropagation over the computation graph. **Approximating loss score.** To approximate Equation (3) at \(x^{\prime}\), we have \[\overline{CE}(f(x^{\prime}),i)=CE(f(x),i)+(\Delta x)^{T}\cdot D(CE)(f(x),i), \tag{5}\] where \(D(CE)(f(x),i)\) is the gradient of the cross-entropy function at \(x\). We refer to this score as the _backward loss score_. **Computational efficiency.** Using the linear approximation surrogate is particularly beneficial when the number of transformations is huge and the number of model parameters is much larger than the dimension of the inputs, which is always true for modern network architectures. Suppose there are \(N\) transformations for an input, a forward execution takes \(F\) time, a backward gradient execution takes \(B\) time, which is usually similar to \(F\), and the inner-product computation takes \(I\) time. To compute the forward fitness score, we need \(N\) forward executions, which take \(NF\) time. To compute the backward fitness score, we need \(1\) forward execution to compute \(f(x)\) and \(1\) backward execution to compute the gradient \(Df(x)\), and then \(N\) inner-product computations to compute \((\Delta x)^{T}D(f)(x)\), which is usually very cheap compared to DNN executions. Taken together, the backward fitness score takes \(F+B+NI\) time to finish. **Analytical solution.** The backward gradient surrogate can be even more valuable when the transformation set is continuous and has a simple geometry. Because the set is continuous, theoretically, there are infinitely many transformations in the set. Therefore, forward-executing all transformed inputs becomes infeasible. 
However, if we use the gradient to approximate the function value, we can sometimes derive the optimal transformation among all possible ones to maximize the linear approximation in Equation (1) analytically. For instance, if the transformation set is \(\ell_{2}\)- or \(\ell_{\infty}\)-transformations, the geometry of the sets is either Euclidean balls or hypercubes. To maximize the gradient inner product, one only needs to project the gradient to the \(\ell_{p}\)-balls. Let \(h\) be the gradient. If the perturbation set is an \(\ell_{2}\)-ball with radius \(\epsilon\), the projection of \(h\) inside the ball is: \[\frac{h}{||h||_{2}}*\epsilon; \tag{6}\] if the perturbation set is an \(\ell_{\infty}\)-ball with radius \(\epsilon\), the projection of \(h\) is: \[\epsilon*\text{sign}(h). \tag{7}\] This is the core idea of PGD, the state-of-the-art \(\ell_{p}\)-attack, where the gradient comes from the loss-based score, similar to Equation (5). Thus, when using gradient methods and the underlying transformation includes small \(\ell_{p}\)-perturbations, we also use the gradient to generate the analytical solutions as in Equations (6) and (7) to test DNNs. _Remark III.2_.: In practice, PGD is a parameterizable multi-step adaptive projection of the gradients. On \(\ell_{\infty}\)-transformations, the one-step gradient projection coincides with the Fast Gradient Sign Method (FGSM) [6]. ### _Mixed score_ Recall from Remark II.2 that if \(\Delta x\) is large, the approximation quality from gradients is poor. Therefore, we can also define a \(\Delta x\)-dependent metric such that: when \(\Delta x\) is large, we use the forward score in Equation (2); and when \(\Delta x\) is small, we consider the linear approximation. Thus we can define \[MM(x^{\prime})=\left\{\begin{array}{ll}\max_{j\neq i}(g_{ji}(x^{\prime})), &||\Delta x||_{p}>\epsilon\\ \max_{j\neq i}(\bar{g}_{ji}(x^{\prime})),&||\Delta x||_{p}\leq\epsilon\end{array}\right. \tag{8}\] We refer to Equation (8) as the _mixed margin score_. Similarly, for the loss score, we have: \[ML(x^{\prime})=\left\{\begin{array}{ll}CE(f(x^{\prime}),i),&|| \Delta x||_{p}>\epsilon\\ \overline{CE}(f(x^{\prime}),i),&||\Delta x||_{p}\leq\epsilon\end{array}\right. \tag{9}\] We refer to Equation (9) as the _mixed loss score_. Notice that the condition decision in Equations (8) and (9) is a cheap operation. We only need to compute \(\Delta x=x^{\prime}-x\) and find the \(\ell_{p}\)-norm of each \(\Delta x\). If we only consider small \(\ell_{p}\)-transformations, mixed scores correspond to their gradient-score counterparts; and if all the transformations are large, mixed scores coincide with their forward-fitness counterparts. _Remark III.3_.: The gradient-based scores are useful when the additive difference between the transformed input and the original input is small, and especially when the set of transformations forms a nice geometrical space. However, if the difference is large, then gradients provide imprecise estimations. In this case, following our directness argument, a better score comes from the logits, especially when the number of inputs is tractable. However, gradient-based scores can still provide fast estimations, albeit with questionable precision. ## IV Evaluation We empirically evaluate our claims made in Section III: 1) the directness metrics are effective in exposing bugs in functionality testing; 2) each metric is effective within its application scope and can have performance degradation outside the scope. 
Because the application scope is defined by whether the underlying transformation allows all small \(\ell_{p}\)-perturbations, we will evaluate the performance with different types of transformations separately. In particular, several recent SE testing works only studied non-\(\ell_{p}\) input transformation scenarios [16, 21], and we use them as benchmarks. Our evaluation can also provide more understanding of their performance and application scope. More specifically, we want to answer the following research questions: **RQ1:**: How similarly do the metrics rank the inputs? **RQ2:**: Are the fitness scores induced from our framework better than the diversity metrics at exposing bugs in DNNs when not all small \(\ell_{p}\)-transformations are allowed? **RQ3:**: Are the fitness scores induced from our framework better than the diversity metrics at exposing bugs in DNNs when all \(\ell_{p}\)-transformations are allowed? Notice that RQ1 only asks for the correlations between different metric functions. This does not reflect which metrics are better for detecting DNN bugs. However, for most testing frameworks, the ordinal relations of inputs are more important than the absolute values of scores. The correlation measurement helps us understand how similar the metrics are in different input transformation regimes. ### _Input transformations_ We consider two sets of transformations on the input. 1. The \(\ell_{p}\) transformations, also known as adversarial attacks: the transformations are from \(\ell_{\infty}\)-balls or \(\ell_{2}\)-balls with small specified radii \(\epsilon_{\infty}\) and \(\epsilon_{2}\). The radii of transformations are specified according to recent adversarial robustness works [23, 17, 24]. 2. Common image transformations such as zooming and blurring, which we refer to as _benign transformations_ as in [21]. Notice that the two transformation sets are not mutually exclusive: some benign transformations might also be \(\ell_{p}\)-transformations. The difference is that for \(\ell_{p}\)-transformations, we would allow all possible small additive \(\ell_{p}\)-perturbations. Notice that because the \(\ell_{p}\)-balls are continuous spaces, there are infinitely many transformations in theory. In practice, even if we discretize the space, the number of transformations is exponential in the input dimension. As a result, if all \(\ell_{p}\)-transformations are allowed, for non-gradient methods, we can only sample the transformations within the \(\ell_{p}\)-balls; for gradient methods, we can project the gradient to the \(\ell_{p}\)-balls directly as described in Equations (6) and (7). We also report the average \(\ell_{2}\)-norm of \(\Delta x\) for all the transformations on the test dataset. We choose the \(\ell_{2}\)-norm instead of the \(\ell_{\infty}\)-norm because the input space and the inner product used in the linear approximation in Equation (1) are naturally equipped with the \(\ell_{2}\)-norm. Also, the \(\ell_{2}\)-norm result can reflect the difference in \(\ell_{\infty}\)-norm. \(\ell_{p}\)**-transformations specifications.** We use radius \(0.08\) for the \(\ell_{2}\) transformations and radius \(0.01\) for the \(\ell_{\infty}\) transformations. Because we will test DNNs adaptively for \(5\) iterations, the choice of radius can ensure that the commonly practiced \(\ell_{p}\)-perturbations in the adversarial robustness literature are reachable within \(5\) iterations. 
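For gradient methods, the projections in Equations (6) and (7) combine with the loss gradient into a one-step test. A hedged PyTorch sketch follows, assuming `model` is any differentiable classifier that returns logits and that inputs are images scaled to \([0,1]\); both assumptions and the function name are ours:

```python
import torch
import torch.nn.functional as F

def one_step_lp_test(model, x, label, eps, norm="linf"):
    # Take the loss gradient at x and step to the boundary of the lp-ball;
    # for linf this one-step projection coincides with FGSM (Remark III.2).
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([label]))
    h = torch.autograd.grad(loss, x)[0]          # gradient of the loss score
    if norm == "linf":
        step = eps * h.sign()                    # Equation (7)
    else:
        step = eps * h / (h.norm() + 1e-12)      # Equation (6)
    return (x + step).detach().clamp(0.0, 1.0)   # keep pixel values valid
```

Iterating this step with re-projection onto the ball would give a PGD-style multi-step test, matching the 5-iteration adaptive setup described above.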
**Benign-transformations specifications.** Our benign transformations include \(7\) types of transformations: shift, zoom, brightness, rotation, shearing, blur, and contrast ratio, as in [21]. The parameters are selected such that all the transformations in the evaluation of [21] are reachable within \(5\) iterations. ### _Experimental design_ **RQ1.** We fix randomly selected transformations for each input in the test set, and then rank the transformations based on various fitness measures. We consider three types of transformations separately: (i) one with only \(\ell_{p}\)-perturbations; (ii) one with only benign transformations; (iii) one with both adversarial and benign transformations. For \(\ell_{p}\)-transformations, we randomly select \(14\) \(\ell_{\infty}\)-perturbations and \(14\) \(\ell_{2}\)-perturbations. For benign transformations, we generate \(4\) transformations for each of the seven benign transformation types as in [21]. For mixed transformations, we combine all \(28\) \(\ell_{p}\)-perturbations and all \(28\) benign transformations together as the transformation candidates. For every set of transformations, each fitness metric produces a ranked list of transformations, where the top ones are assumed to be more likely to expose bugs in the DNN. We then compute the rank similarities between the lists. For example, given a fixed set of inputs \((x_{1},x_{2},\ldots,x_{n})\), each metric prioritizes them to generate a ranked list. We then compare the ranked lists from a pair of metrics to study their correlation. We use a well-established metric, Rank Biased Overlap (RBO) [25], to measure this similarity between a pair of ranked lists. The RBO score between two lists is the weighted average of overlaps between all top sublists from both lists, in which the weight takes the rank into account. The higher the score is, the more similar the two ranked lists are. RBO thus quantifies the similarity between two ranked lists, which we use to study the correlation of the fitness metrics. **RQ2.** We apply the adaptive testing scenario to evaluate the metrics, i.e., a repeated loop involving mutation, feedback, and evaluation of fitness [10]. For each iteration, we consider all transformations over the inputs and then prioritize the transformed inputs according to various fitness scores. We retain the top input as the seed input for the next transformation round. We record the accuracy induced by the retained inputs in each iteration, up to \(5\) iterations. This multi-iteration adaptive test is also similar to the PGD attack for \(\ell_{p}\)-perturbations [17]. In this experiment, we only allow benign transformations. This is similar to the evaluation in recent testing works [16, 21]. In particular, this can help us understand whether our directness metrics are effective in bug detection compared to existing works within their evaluation scope under the standard adaptive testing scenario. **RQ3.** We conduct the same experiments as in RQ2, except that the underlying transformations are either \(\ell_{p}\)-transformations alone or both \(\ell_{p}\) and benign transformations. The rationale is that human judgment is invariant to both small perturbations and usual image transformations, so we should consider all of them, especially if an adversary can utilize all semantics-preserving transformations. 
Because the goal of the experiment is to evaluate the bug-finding capability of the different fitness scores, and because, for non-gradient methods over \(\ell_{p}\)-perturbations, the bug-detection capability heavily relies on the number of sampled transformations, we therefore sample \(140\) \(\ell_{p}\)-transformations instead of the \(28\) used in the evaluation of RQ1. ### _Experimental specifications_ **Datasets.** We use three standard datasets: (1) SVHN [26] contains \(10\) classes representing digits, with \(73257\) digits for training and \(26032\) digits for testing. (2) CIFAR10 [27] contains \(60,000\) \(32\times 32\) colour images in \(10\) classes. There are \(50000\) training images and \(10000\) test images. (3) CIFAR100 [27] is similar to CIFAR10, but has \(100\) classes containing \(600\) images each. There are \(500\) training images and \(100\) testing images per class. **Model architectures.** We use classical architectures from the computer vision community for our experiments: VGG-16 [28], ResNet-9, and ResNet-18 [29]. More experimental specifications can be found in the supplementary materials. **Benchmarks.** We use three recent diversity-based testing works as benchmarks: 1. Boosting Diversity: [16] is our major benchmark. It is built upon the assumption that if the divergence between two executions is significant, then the inputs might be classified differently. More specifically, for the pair of executions, one extracts neuron activations from any hidden layer in the DNN, and then computes the distributional difference between the activations of the original input and the transformed input. Transformed inputs with a large difference should be prioritized. As noted in [16], as one uses features extracted at deeper layers of the DNN, the prioritization accuracy increases, but so does the execution time. To find the balance between these two, the standard practice is to choose the layer in the middle of the target DNN. We follow this guideline but also include a variant as a stronger benchmark; we use the logit layer as the representation layer, which, as the authors indicated, provided better prioritization estimation than the mid-intermediate layer. 2. NC: [20] monitors and gauges the intermediate neuron activities covered by current executions at various granularity levels. They profile the range of neuron outputs into \(k\)-multisections of the major function region and corner-case regions during training, then identify the ratio of sections covered under test-time executions. In [16], where NC is considered as one baseline of diversity measures, they adopt the original metric in [20] to compute the ratio of uncovered neurons given two executions; that is, the larger the value, the more diverse the two executions in terms of the NC patterns. We follow this modified NC measure as one of our baselines. 3. ATS: [21] is a recently proposed metric for input selection, i.e., choosing a small yet diverse subset of inputs from a large set. It computes a diversity metric from the normalized logit layer, similar to ours. It is not designed for our testing scenario, i.e., for every single input, choosing the best candidate among all transformed inputs. However, we adapt it to our scenario by feeding an input and its transformed mutants to the selection algorithm one at a time and then ranking the inputs based on their metric. We include this variant as another benchmark. 
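To make the activation-divergence idea behind the BD benchmark concrete, a schematic Python sketch follows; the symmetric KL over softmax-normalized activations is our own stand-in distance, and the actual measure implemented in [16] may differ:

```python
import numpy as np

def activation_divergence(act_orig, act_trans):
    # Schematic: distributional difference between the hidden-layer activations
    # of an original input and its transformed counterpart. Inputs whose
    # activations diverge more are prioritized by BD-style metrics.
    def to_dist(a):
        e = np.exp(a - np.max(a))   # softmax over flattened activations
        return e / np.sum(e)

    p, q = to_dist(act_orig.ravel()), to_dist(act_trans.ravel())
    eps = 1e-12                      # avoid log(0)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return 0.5 * (kl_pq + kl_qp)     # symmetric KL as a stand-in divergence
```

The sketch also makes the later RQ1 observation plausible: under tiny \(\ell_{p}\)-perturbations, the two activation vectors are nearly identical, so this divergence can be subtle even when the loss changes.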
We will use the following abbreviations for the methods considered in the evaluation: **FM** stands for forward margin score (Equation (2)); **FL** stands for forward loss score (Equation (3)); **BM** stands for backward margin score (Equation (4)); **BL** stands for backward loss score (Equation (5)); **MM** stands for mixed margin score (Equation (8)); **ML** stands for mixed loss score (Equation (9)); **BD** stands for the standard tool from [16], and **BD-F** stands for a variant that uses the logit layer to compute the divergence. **NC** stands for the NC method using the layer in the middle of the DNN architecture, and **ATS** stands for the adapted tool from [21]. To summarize, our metrics (FM, FL, BM, BL, MM, ML), ATS, and BD-F only utilize the logit layer, while BD and NC use the intermediate-layer neuron activations. We choose the same intermediate layer for both BD and NC. We do not include evaluations of ATS on the CIFAR100 dataset since the fault-pattern computation is cubic in the number of classes, which is too slow with 100 classes. ## V Results and discussion Due to space limits, in this section we present selected experimental results and discuss them. More results can be found in the supplementary materials. ### _Rq1_ **RBO Benchmark.** The RBO score lies within \([0,1]\), and a high score indicates that two ranked lists are positively correlated and similar. We benchmark \(3\) RBO scores: (i) two same-ranked lists; (ii) two randomly ranked lists; (iii) two oppositely ranked lists. All lists are of length \(28\). With these benchmarks, we can interpret the results more precisely. The RBO benchmarks are: 1. Two identically ranked lists: \(1\); 2. Two randomly ranked lists: \(0.52\); 3. Two oppositely ranked lists: \(0.33\). As a result, if the RBO score is less than \(0.52\), the two rankings are negatively correlated. The RBO scores between different ranking methods are summarized in Table I. **Observations.** 1. The forward loss and margin scores are always strongly positively correlated; 2. Forward scores and backward scores are strongly positively correlated when the underlying transformations are only \(\ell_{p}\)-perturbations; but for benign transformations, the norms induced from \(\Delta x\) are fairly large, and the correlations between forward and backward scores are much weaker. 3. Most of the scores are above \(0.52\), except for BD(-F) & FL and NC & FL in some \(\ell_{p}\)-transformation-only settings. **Implications.** 1. Both margin and loss scores can quantify how a DNN makes predictions, so their rankings of inputs likely to be classified incorrectly are very similar. 2. Gradient-based scores are excellent surrogates for forward scores only when the underlying transformations are small perturbations. 3. All the scores can be more efficient at exposing functionality bugs than random prioritization. The only exception is using BD with only \(\ell_{p}\)-transformations. When small perturbations are applied to the input, the difference measured in the intermediate-layer activations is not aligned with the loss change. This raises concerns about how accurate it is to use intermediate-layer divergence as a surrogate to measure the potential prediction change when the transformations are only small \(\ell_{p}\)-changes, which was not studied in [16]. ### _Rq2_ Selected results are summarized in Table II. **Observations.** 1. Our forward scores are the strongest testing metrics to expose functionality bugs in all experiments. 2. 
The recently proposed diversity methods BD(-F) and ATS are also competitive metrics for testing DNNs. Moreover, as suggested by [16], using the divergence in the final layers is a stronger metric than the intermediate layer. In fact, BD-F is the best metric apart from our forward scores. 3. The gradient-based scores are weaker than recent diversity-based methods, though they also consistently decrease the accuracy. _Remark V.1_.: [16] is designed for metamorphic testing, where the ground-truth labels are assumed unknown. For our metrics, we assume access to ground-truth labels, as in the adversarial robustness literature [6, 17]. In the supplementary materials, we also include a study of the metamorphic variant of our metrics, and the result is consistent with the ground-truth label case. This indicates that the metamorphic testing considered in [16] can also be more effectively addressed with our directed testing approach. **Implications.** 1. Because ATS and BD-F both use the logit layer, as our scores do, comparisons with them support the main thesis that functionality testing of DNNs is directed, and that the bug potential can be measured by our scores. 2. Our theoretical analysis demonstrates that gradients do not provide good estimates when the transformation induces a large difference in the inputs. This is consistent with our empirical observation that gradient-based metrics are quite weak for benign transformations. As a result, when applying gradients to test networks, one should quantify the change introduced by the transformation. Indeed, gradient methods have been applied to test neural networks beyond \(\ell_{p}\)-transformations, many of them in the natural language and programming language domains [30, 31]. Researchers have considered semantics-preserving transformations of natural language or programs, and devised gradient-based attacks and defenses against those vulnerabilities. These transformations are similar to the benign transformations in our evaluation, and there was no justification that the changes the transformations induce in the network input are small. The observation from our work raises concerns about the effectiveness of these attacks and defenses. As a result, we call for rigorous scrutiny of gradient-based tests on non-\(\ell_{p}\) types of input transformations, especially discrete tractable transformations. ### _Rq3_ Selected results are summarized in Table III. **Observations.** 1. When only \(\ell_{p}\)-transformations are present, the best metrics are gradient-based methods. In cases where both \(\ell_{p}\) and benign transformations are allowed, our mixed scores are the most effective metrics. 2. The BD(-F) scores are quite weak when only \(\ell_{p}\)-perturbations are allowed, even among all the non-gradient metrics. **Implications.** 1. From RQ1's result, we can see that gradient surrogates are strongly positively correlated with the forward scores. Moreover, the linear approximation allows directly deriving the optimal transformation for \(\ell_{p}\)-perturbations. This observation is consistent with our theoretical analysis and validates the application of gradient methods for small \(\ell_{p}\)-transformations. 2. From RQ1's result, when only \(\ell_{p}\)-transformations are allowed, the correlation between our loss score FL and the BD(-F) scores is in fact slightly negative. In the experiment, BD(-F) induces weak testing results, even compared with our non-gradient forward loss score. 
These results show that the neuron divergence can be subtle when the input transformations are small \(\ell_{p}\)-perturbations. These scenarios were not evaluated in [16]. Our evaluation brings new knowledge about the application scope of [16]. 3. In the mixed transformation setting, our mixed scores are the strongest metrics. Our results on various types of transformations indeed show that our metrics are superior to existing diversity-based metrics in detecting bugs within their application scopes. ## VI Related Work In this section, we further discuss some related topics on DNN testing. In particular, we want to emphasize some discrepancies between SE practices and DL findings. This brings awareness of the gaps and hopefully can lead to bridging them. **Neuron activation.** One of the first DNN testing works took inspiration from the code coverage in traditional software testing practice to define NC [13]. However, the inherent difference between software and DNNs challenges the direct application of the idea from one literature to the other. In software, an execution trace usually has logical and qualitative semantics, i.e., the execution trace is decided by the logical conditions in the program. In contrast, the neuron activation in DNNs is quantitative, and its semantics is not as clear as in software. Therefore, maximizing NC does not have as clear an interpretation as in traditional software. Nonetheless, NC does provide rich information about the execution of the inputs. In the DL community, neuron activation is referred to as _feature representation_, and researchers have exploited the information from feature representations for various purposes. For instance, higher-level explanations about DNN behaviors can be abstracted out of feature representations, which are more user-friendly than simply inspecting neuron outputs [32, 33, 34]. Moreover, [35] illustrated that feature representations can provide an effective metric for human perceptual measurement. They defined a metric based on normalized neuron activation and found that it is strongly aligned with human judgment. This leads to the design of defenses against more realistic and unseen threats [36]. [16] relies on the assumption that if the neuron representations of two metamorphic inputs on an intermediate layer are very divergent, then their predictions are also likely different. While the converse always holds, i.e., different predictions must have divergent intermediate representations, the original assumption is not necessarily true. In particular, how to measure the difference is unclear when the divergence is subtle, and this can lead to markedly different prioritizations of inputs, as in the \(\ell_{p}\)-transformation evaluations. However, similar ideas in the DL community are used to design OOD detectors. OOD tests are inputs that are far from the distribution the DNN has seen during training (i.e., in-distribution); hence predicting on those tests is undefined behavior for the DNN. Assuming that in-distribution and OOD inputs have distinctive feature attribution patterns, there have been various approaches to exploiting statistical information from feature representations to distinguish in-distribution vs. OOD inputs [37, 38], and to understand the characteristics of OOD inputs [34]. Consequently, bringing the OOD detection perspective to DNN testing, we suggest that NC maximization makes the feature representations of generated inputs deviate from the training data, i.e., the generated inputs are actually OOD. 
This is consistent with the finding in [14] that inputs obtained by maximizing the NC score are not natural. To this end, one needs to define the scope of bugs a priori to avoid unnecessary execution against OOD inputs. **Gradient.** We carefully analyzed the application of gradients in DL testing. Gradients naturally exist for many mathematical functions, including neural networks. They quantify the local variability of a function and can provide the optimal linear approximation in a small neighborhood. Our gradient-based metrics are applications of linear approximation, and some of them coincide with adversarial attack algorithms in the DL literature, which indicates that these algorithms can be viewed as concrete instantiations of our framework for small additive transformations. Because traditional software is mostly not differentiable, gradients are less often used in software testing works. However, gradient-based methods have been successful in many DL applications, encompassing \(\ell_{p}\)-adversarial attacks and gradient-based explanations [39, 40]. We believe that SE practice can also utilize gradients as tools for testing DNNs, as in recent works on numerical defects in DL programs [41, 42]. **Bugs.** In this work, we restrict our scope to functionality bugs, i.e., inputs that are misclassified according to human perception. However, DL bugs can manifest in various forms. In [7], the authors categorize model bugs into three types: model contamination, data contamination, and test-time contamination. Model contamination is caused by errors in the model parameters, for example, incorrectly initialized parameters; data contamination comes from defects in the training data; and test-time contamination is caused by shifts in test data. Functionality bugs can be viewed as test-time contamination: our data are transformed at test time. However, there are many more types of bugs in DL systems to be addressed, and SE techniques can be applied. For example, model contamination is often caused by API misuses, which have been studied in many SE works [43, 44]. **Diversity.** We claim that merely increasing diversity is not the ideal framework for DNN functionality testing. However, we do not claim that diversity is useless for DL testing. Whether diversity helps depends on the testing goal. A natural diversity issue of DL systems is fairness and bias [45, 46]: DL systems exhibit performance discrepancies between subgroups and make decisions biased towards certain subgroups. This would also cause concerns for the deployment of DL systems, because DNN classifications can discriminate against certain subgroups. If the tests were only from a single subgroup, then potential fairness and bias issues would be concealed. Therefore, designing meaningful diversity measures can help expose and alleviate the bias and fairness issues in DL systems. With these DL lessons, a natural question arises: What should be treated as bugs for DL systems? We leave this as an open problem. We believe that if bugs are defined properly, then directed testing is more effective at detecting them. ## VII Threats to validity **Internal validity.** Our subject of study is the DNN, whose training dynamics have many nuances. The selection of datasets, architectures, optimizers, and hyperparameters might not be optimal for all situations. However, we conduct the experiments in accordance with best practices to our knowledge. Moreover, the numbers reported in this work are averaged over tens of thousands of inputs, which can help reduce randomness. 
Our benchmarks come from the most up-to-date works, and the scale of our experiments is also larger than theirs [16, 21]. For example, we consider over a hundred transformations for each input, while the numbers of transformations considered in [16, 21] are significantly smaller. **External validity.** We restrict the scope of our work to testing functionality bugs and claim that this task is directed. This does not imply that all DL testing tasks are directed, or that our work can be used to detect all such bugs. One first needs to define what the bugs are and then consider our work if those bugs fall within our scope. ## VIII Conclusion In this paper, we argue that functionality testing should be addressed using the directed testing framework because the task is well-defined and specific. Moreover, we propose several concrete metrics to quantify the potential faultiness of inputs and carefully characterize their scopes. We find that metrics perform drastically differently across input transformations and call for careful scrutiny of these methods.
2310.02152
Graph Neural Network-based EEG Classification: A Survey
Graph neural networks (GNN) are increasingly used to classify EEG for tasks such as emotion recognition, motor imagery and neurological diseases and disorders. A wide range of methods have been proposed to design GNN-based classifiers. Therefore, there is a need for a systematic review and categorisation of these approaches. We exhaustively search the published literature on this topic and derive several categories for comparison. These categories highlight the similarities and differences among the methods. The results suggest a prevalence of spectral graph convolutional layers over spatial. Additionally, we identify standard forms of node features, with the most popular being the raw EEG signal and differential entropy. Our results summarise the emerging trends in GNN-based approaches for EEG classification. Finally, we discuss several promising research directions, such as exploring the potential of transfer learning methods and appropriate modelling of cross-frequency interactions.
Dominik Klepl, Min Wu, Fei He
2023-10-03T15:40:03Z
http://arxiv.org/abs/2310.02152v2
# Graph Neural Network-based EEG Classification: A Survey ###### Abstract Graph neural networks (GNN) are increasingly used to classify EEG for tasks such as emotion recognition, motor imagery and neurological diseases and disorders. A wide range of methods have been proposed to design GNN-based classifiers. Therefore, there is a need for a systematic review and categorisation of these approaches. We exhaustively search the published literature on this topic and derive several categories for comparison. These categories highlight the similarities and differences among the methods. The results suggest a prevalence of spectral graph convolutional layers over spatial. Additionally, we identify standard forms of node features, with the most popular being the raw EEG signal and differential entropy. Our results summarise the emerging trends in GNN-based approaches for EEG classification. Finally, we discuss several promising research directions, such as exploring the potential of transfer learning methods and appropriate modelling of cross-frequency interactions. graph neural network, classification, EEG, neuroscience, deep learning ## I Introduction Electroencephalography (EEG) is a non-invasive technique used for recording electrical brain activity with a wide range of applications in cognitive neuroscience [1], clinical diagnosis [2; 3], and brain-computer interfaces [4; 5]. However, analysing EEG signals poses several challenges, including a low signal-to-noise ratio, non-stationarity resulting from brain dynamics, and the multivariate nature of the signals [6; 7]. In this review, we focus on the classification of EEG, such as emotion recognition, motor imagery recognition or neurological disorders and diseases. Traditional feature extraction methods for EEG classification, such as common spatial patterns [6], wavelet transform [8], and Hilbert-Huang transform [9], have been commonly employed. These methods aim to extract meaningful features from EEG signals [10; 11], with key features like power spectral density (PSD) [7] to characterise brain states. However, relying on such manually defined features to train machine learning classifiers has several limitations. Subjectivity and biases in feature selection, along with time-consuming engineering and selection processes, limit scalability and generalisation [12; 7]. Automated feature extraction methods are needed to overcome these limitations, improve efficiency, reduce bias, and enhance classifier adaptability to different EEG datasets. Deep learning architectures, such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks, have also been explored for EEG analysis [13; 14]. However, they face challenges in effectively capturing the spatial dependencies between electrodes and handling the temporal dynamics of EEG signals [7]. Modelling the complex sequential and spatial relationships in EEG data is crucial for more accurate classification and analysis. Network neuroscience offers an alternative approach to EEG modelling by framing the signals as a graph. The brain exhibits a complex network structure, with neurons forming connections and communicating with each other [15]. Analysing EEG data as a graph enables the study of network properties, including functional connectivity, providing insights into brain function and dysfunction [12; 16; 17]. 
Graph-based analysis facilitates the examination of network features, node importance, community structure, and information flow, offering insights into brain organisation and dynamics. Such graph-theory-based features were shown to be powerful predictive features for EEG classification [12; 17; 18; 19; 20; 21; 22]. However, these features have the same limitations as manually defined features based on the traditional EEG analysis methods introduced above. Graph Neural Networks (GNNs) emerge as a powerful tool for modelling neurophysiological data [23], such as EEG, within the network neuroscience framework [7; 24]. GNNs are specifically designed to operate on graph-structured data. They can effectively leverage the spatial structure within EEG data to extract features, uncover patterns and make predictions based on the complex interactions between different electrodes. Designing GNN models for EEG classification will likely improve classification performance and potentially uncover new insights in neuroscience. Given the potential of GNNs and the increasing number of recent papers proposing GNNs for various EEG classification tasks, there is an urgent need for a comprehensive review of GNN models for EEG classification. The main contributions of this paper include: * Identifying emerging trends of GNN models tailored for EEG classification. * Reviewing popular graph convolutional layers and their applicability to EEG data. * Providing a unified overview of node feature and brain graph structure definitions in the context of EEG analysis. * Examining techniques for transforming sets of node feature embeddings into a single graph embedding for graph classification tasks. By addressing these essential aspects, this review paper will provide a comprehensive and in-depth analysis of the application of GNN models for EEG classification. The findings and insights gained from this review will serve as a resource to navigate this emerging field and identify promising future research directions. ## II Overview of Graph Neural Networks Graphs are widely used to capture complex relationships and dependencies in various domains, such as social networks, biological networks, and knowledge graphs. The problem of graph classification, which aims to assign a label to an entire graph, has gained attention in recent years. GNNs offer a promising solution to this problem by extending the concept of convolution from Euclidean inputs to graph-structured data. GNNs have been successfully applied in a wide range of fields, such as biology [23], bioinformatics [25], network neuroscience [26], chemistry [27; 28], drug design and discovery [29; 30], natural language processing [31; 32], recommendation systems [33; 34], traffic prediction [35; 36] and finance [37]. In graph classification problems, the input is a set of graphs, each with its own set of nodes, edges, and node features. Let \(G=(V,E,H)\) denote a featured graph, where \(V\) represents the set of nodes, \(E\) represents the set of edges connecting the nodes, and \(H\) represents the \(|V|\times D\) matrix of \(D\)-dimensional node features. In the case of EEG, the EEG channels are the nodes, and edges represent structural or functional connectivity between pairs of nodes. Each graph \(G\) is associated with a label \(y\), indicating its class. The goal is to learn a function \(f(G)\to y\) that can predict the class label \(y\) given an input graph \(G\). A general structure of a GNN model for EEG classification is presented in Fig. 1. 
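To ground the definition of \(G=(V,E,H)\) for EEG, a small numpy sketch follows; the montage size, feature dimension, and correlation threshold are illustrative assumptions rather than values from any of the surveyed papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, d_features = 19, 5   # hypothetical EEG montage and feature size

# H: |V| x D matrix of node features, one D-dim feature vector per EEG channel
# (e.g., band powers or differential entropy in the surveyed works).
H = rng.standard_normal((n_channels, d_features))

# E: edges from a toy functional-connectivity estimate, here thresholded
# absolute correlations of simulated channel signals, stored as adjacency A.
signals = rng.standard_normal((n_channels, 200))
A = (np.abs(np.corrcoef(signals)) > 0.5).astype(float)
np.fill_diagonal(A, 0.0)  # no self-loops

# A graph classifier then learns f(G) -> y from (A, H) pairs.
```

In practice, the connectivity estimate and the features are design choices of each method, which is exactly the space of options the following sections categorize.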
Multiple types of GNNs have been well introduced in [38; 39]. In this survey, we briefly introduce the two main branches of GNNs, namely, spatial and spectral GNNs (Fig. 2). Other types of GNNs, such as attention GNNs [40], recurrent GNNs [41], and graph transformers [42], can be viewed as special cases of spatial GNNs, and thus we will not provide detailed discussion of them in this survey. Both spatial and spectral GNNs aim to extend the convolution mechanism to graph data. For a detailed review of their similarities and differences, see [43]. Spatial GNNs aggregate information from neighbouring nodes, similar to traditional convolution applied to image data aggregating information from adjacent pixels. Stacking multiple spatial GNN layers leads to information aggregation at various scales, with local and global patterns captured in early and later layers, respectively. In contrast, spectral GNNs perform information aggregation in the graph frequency domain, with low-frequency and high-frequency components capturing global and local patterns, respectively. Thus, both approaches learn to capture local and global patterns within the graph, i.e. high- and low-frequency information in the spectral domain. The advantage of spectral GNNs is their connection to graph signal processing, allowing for interpretation from the perspective of graph filters. However, spectral GNNs do not generalise well to large graphs since they depend on the eigendecomposition of the graph Laplacian. In contrast, spatial GNNs can be applied to large graphs since they perform only local message-passing. On the other hand, spatial GNNs may be challenging to interpret and prone to over-smoothing, where embeddings of all nodes become similar. Figure 1: General architecture of a graph neural network model for classification of EEG. (A) The input to the model consists of node features and a possibly learnable brain graph structure. (B) Optionally, the node features can undergo pre-processing via a neural network. (C) Next, the node features are passed to a block of graph convolutional layers, where node embeddings are learned. (D) Then, a node pooling module can be utilised to coarsen the graph. Node pooling may contain learnable parameters as well. (E) Finally, the set of node embeddings forms a graph embedding, which can be used to predict the outcome. ### Spatial GNNs Spatial GNNs directly operate on the graph structure via the adjacency matrix operator. Given a set of nodes and associated features, spatial GNNs perform neighbourhood aggregation to derive node embeddings. This process is referred to as message passing. Intuitively, nodes connected by edges should have similar node embeddings, i.e. local node similarity. Message passing implements this idea by updating node embeddings with aggregated information collected from the node's neighbourhood. Formally, the node update equation in the \(l^{\text{th}}\) layer of a spatial GNN with \(L\) layers is defined as follows: \[h_{i}^{(l+1)}=\sigma\left(W_{1}^{(l)}h_{i}^{(l)}+\sum_{j\in\mathcal{N}(v_{i})}W_{2}^{(l)}h_{j}^{(l)}e_{ji}\right), \tag{1}\] where \(h_{i}^{(l)}\) is the node embedding vector (for \(l=1\), the input node feature vector), \(\sigma\) is the activation function, \(\sum\) is the aggregation function, \(\mathcal{N}(v_{i})\) is the neighbourhood of node \(v_{i}\), \(W\in\mathbb{R}^{d_{1}\times d_{2}}\) is a learnable parameter matrix projecting node embeddings from input dimension \(d_{1}\) to hidden dimension \(d_{2}\), and \(e_{ji}\) is the edge weight (\(e_{ji}=1\) for unweighted graphs).
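As an illustration of Eq. (1), the following NumPy sketch (our own simplification, assuming a sum aggregator, edge weights \(e_{ji}\) stored in a weighted adjacency matrix, and \(\tanh\) as \(\sigma\)) implements a single message-passing update; practical models would use a GNN library instead:

```python
import numpy as np

def spatial_gnn_layer(H, A, W1, W2):
    """One spatial message-passing update of Eq. (1):
    h_i' = sigma(W1 h_i + sum_{j in N(i)} W2 h_j e_ji)."""
    messages = A @ (H @ W2)             # aggregate transformed neighbour embeddings
    return np.tanh(H @ W1 + messages)   # fuse with the node's own embedding

rng = np.random.default_rng(0)
N, d_in, d_out = 5, 8, 16
H = rng.normal(size=(N, d_in))                  # node features
A = (rng.random((N, N)) < 0.4).astype(float)    # adjacency with e_ji in {0, 1}
np.fill_diagonal(A, 0.0)
W1, W2 = rng.normal(size=(d_in, d_out)), rng.normal(size=(d_in, d_out))
print(spatial_gnn_layer(H, A, W1, W2).shape)    # (5, 16)
```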
A single spatial GNN layer aggregates information from the 1-hop neighbourhood. Thus, to increase the receptive field of the model, \(L\) spatial GNN layers can be stacked to aggregate information from up to \(L\)-hop neighbourhoods. A disadvantage of spatial GNNs is the difficulty of training deep models with many layers. With an increasing number of layers, the node embeddings become increasingly smooth, i.e. the variance among the embeddings of all nodes decreases. This happens when the messages already contain aggregated information from the whole graph; continual message passing of such saturated messages leads to oversmoothing, i.e., all node embeddings becoming essentially identical. ### Spectral GNNs Spectral GNNs can also be applied to EEG classification tasks by leveraging the spectral-domain analysis of graph-structured data. The EEG graph is transformed into the spectral domain using the Graph Fourier Transform (GFT) and Graph Signal Processing (GSP) techniques. For a detailed review of spectral GNN methods, please refer to [44]. The graph spectrum is defined by the eigendecomposition of the graph Laplacian matrix. The GFT is then defined as \(\mathbf{\hat{H}}=\mathbf{U}^{T}\mathbf{H}\) and its inverse as \(\mathbf{H}=\mathbf{U}\mathbf{\hat{H}}\), where \(\mathbf{U}\) is the orthonormal matrix of eigenvectors of the graph Laplacian \(\mathbf{L}\) and \(\mathbf{H}\in\mathbb{R}^{N\times D}\) is the matrix of node feature vectors, with \(N\) and \(D\) being the number of nodes and the dimensionality of the node features, respectively. The graph Laplacian is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), but often the normalised version is preferred: \(\mathbf{\hat{L}}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) (\(\mathbf{A}\) and \(\mathbf{D}\) are the adjacency and degree matrices, respectively). A spectral GNN is then typically defined as the convolution (\(*\)) of a signal \(\mathbf{H}\) defined on the graph and a spatial kernel \(g\) in the spectral domain, where it becomes an element-wise multiplication (\(\odot\)): \[\mathbf{H}*g=\mathbf{U}\left(\left(\mathbf{U}^{T}\mathbf{H}\right)\odot\left(\mathbf{U}^{T}g\right)\right). \tag{2}\] Generally, \(\mathbf{U}^{T}g\) is defined as a learnable diagonal spectral filter \(\mathbf{G}=\mathrm{diag}(g_{1},...,g_{N})\) [44]. However, the full spectral graph convolution can be computationally expensive. A popular approximation is the Chebyshev GNN (ChebConv) [45], which performs localised spectral filtering on the graph. The node embedding update equation of a ChebConv is defined as: \[\mathbf{H}*g\approx\sum_{i=1}^{K}T_{i}(\mathbf{\hat{L}}^{\prime})\mathbf{\Theta}_{i}, \tag{3}\] where \(\mathbf{\Theta}\in\mathbb{R}^{K\times d\times d}\) are learnable parameters, \(T_{i}(\mathbf{\hat{L}}^{\prime})=2\mathbf{\hat{L}}^{\prime}T_{i-1}(\mathbf{\hat{L}}^{\prime})-T_{i-2}(\mathbf{\hat{L}}^{\prime})\), \(T_{1}(\mathbf{\hat{L}}^{\prime})=\mathbf{H}\), \(T_{2}(\mathbf{\hat{L}}^{\prime})=\mathbf{\hat{L}}^{\prime}\mathbf{H}\), and \(\mathbf{\hat{L}}^{\prime}=\frac{2\mathbf{\hat{L}}}{\lambda_{max}}-\mathbf{I}\) (\(\lambda_{max}\) is the largest eigenvalue of \(\mathbf{\hat{L}}\), often approximated as \(\lambda_{max}=2\)).
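The Chebyshev recurrence of Eq. (3) can be sketched in a few lines of NumPy (our own illustration; production implementations, e.g. the ChebConv layer in PyTorch Geometric, add bias terms and sparse operations):

```python
import numpy as np

def cheb_conv(H, L_norm, Theta, lam_max=2.0):
    """Localised spectral filtering of Eq. (3) with K = len(Theta)."""
    L_s = 2.0 * L_norm / lam_max - np.eye(L_norm.shape[0])   # scaled Laplacian
    T_prev, T_curr = H, L_s @ H                              # T_1 = H, T_2 = L' H
    out = T_prev @ Theta[0]
    if len(Theta) > 1:
        out += T_curr @ Theta[1]
    for k in range(2, len(Theta)):        # T_k = 2 L' T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * L_s @ T_curr - T_prev
        out += T_curr @ Theta[k]
    return out

rng = np.random.default_rng(1)
N, D, K = 6, 4, 3
A = np.triu((rng.random((N, N)) < 0.5).astype(float), 1)
A = A + A.T                                   # symmetric, unweighted adjacency
deg = A.sum(1)
with np.errstate(divide="ignore"):
    d_inv = np.where(deg > 0, deg ** -0.5, 0.0)
L_norm = np.eye(N) - (d_inv[:, None] * A) * d_inv[None, :]   # I - D^-1/2 A D^-1/2
print(cheb_conv(rng.normal(size=(N, D)), L_norm, rng.normal(size=(K, D, D))).shape)
```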
The \(K\) parameter controls the size of the Chebyshev filter. However, spectral GNNs are limited to input graphs with a fixed number of nodes because of the explicit use of the graph Laplacian. This is in contrast to spatial GNNs, which do not rely on a fixed-size spectral basis and require only local neighbourhood access. ## III Survey results This survey is based on a review of 63 articles. These articles were selected by title and abstract screening from a search on Google Scholar and ScienceDirect queried on November 1st, 2022. The search query for collecting the articles was defined as: ("Graph neural network" OR "Graph convolutional network") AND ("Electroencephalography" OR "EEG"). Both peer-reviewed articles and preprints were searched and utilised. All types of EEG classification tasks were included. We summarise the various types of EEG classification tasks identified in the surveyed papers in Fig. 3. The most common classification tasks are emotion recognition, epilepsy diagnosis and detection, and motor imagery. However, the type of classification task should have a relatively minor effect on the GNN architecture design. Thus, we do not analyse and discuss it in detail. Instead, we survey the various GNN-based methods for EEG classification, intending to systematically categorise the types of GNN modules and identify emerging trends in this field independent of the specific classification task. In the remaining portion of this paper, we report the categories of comparison we identified in the surveyed papers. These are based on the different modules of the proposed GNN-based models. Specifically, these are: * Definition of brain graph structure * Type of node features * Type of graph convolutional layer * Node feature preprocessing
* Node pooling mechanisms * Formation of graph embedding from the set of node embeddings Figure 2: Illustration of core mechanisms of spatial and spectral graph neural networks (GNNs). A) An undirected featured graph is given as an example input graph with node features shown as node labels and colours. B) Spatial GNNs operate in the graph domain directly using message passing to update node embeddings. 1) Messages, i.e. transformed node features or embeddings, are sent along edges. For simplicity, we show only one direction of the flow of messages. 2) The collected messages at each node are aggregated using a permutation-invariant function and are fused with the original node embedding to form an updated node embedding. Thus, one spatial GNN layer results in node embeddings containing information about the 1-hop neighbourhood of a given node; hence, \(L\) layers are required for node embeddings to access the information from the \(L\)-hop neighbourhood. C) In contrast, spectral GNNs operate in the graph spectral domain. 1) Node features are treated as signals on top of a graph and are deconstructed into graph frequencies given by the eigendecomposition of the graph Laplacian. Graph frequencies can be interpreted as variations of the signals. 2) The contribution of each graph frequency is weighted by the set of learnable kernels \(G\) that effectively function as graph filters. 3) Node embeddings are then obtained by aggregating the filtered graph frequencies and transforming them back to the spatial graph domain. Thus, full spectral GNNs can access information from \(N\)-hop neighbourhoods, where \(N\) is the number of nodes of a given graph. However, in practice, approximations such as Chebyshev graph convolution restrict this to the chosen hop size. The following sections will provide further details on these categories, and the paper will conclude by discussing trends and proposing plausible directions for future research. ## IV Definition of brain graph structure The first part of the input to a GNN model is the brain graph structure inferred from the EEG data itself (Fig. 1A). We summarise the methods for defining the brain graphs in Table 1. These methods can be generally categorised as learnable or pre-defined. An alternative categorisation of the brain graph structures is into functional (FC) and "structural" connectivity (SC). Generally, SC graphs are pre-defined, whereas FC graphs can be both pre-defined and learnable. SC in the classical sense of physical connections between brain regions is not possible to obtain using EEG signals since these are recorded at the scalp surface. Instead, we use the term to describe methods that construct brain graphs based on the physical distance between EEG electrodes. In contrast, FC refers to pairwise statistical relationships between EEG signals. The SC graph is pre-defined such that electrodes are connected by an edge in the following way: \[e_{ij}=\begin{cases}1\text{ or }1/d_{ij},&\text{if }d_{ij}\leq t\\ 0,&\text{otherwise}\end{cases}, \tag{4}\] where \(e_{ij}\) is the edge weight connecting nodes \(i\) and \(j\), \(d_{ij}\) is a measure of distance between EEG electrodes, and \(t\) is a manually defined threshold controlling the graph sparsity. Such an approach offers several advantages. First, the SC graph is insensitive to any noise effects of EEG recording since it is independent of the actual signals. Second, all data samples share an identical graph structure, provided the same EEG montage was utilised during the recording. This offers explainability advantages when combined with spectral GNNs, since the graph frequency components defined by the eigenvectors of the graph Laplacian are fixed. On the other hand, the SC graph is limited to short-range relationships. Thus, it might not accurately represent the underlying brain network. Some papers propose to overcome this limitation by manually inserting global [53; 56; 57; 58; 62] or inter-hemispheric edges [46; 54; 87]. In contrast, an FC graph can be obtained from either classical FC measures (FC measure in Table 1) or learnable methods (e.g. feature concatenation/distance and attention methods in Table 1). We refer to all of these methods as FC because they all measure the degree of interaction between two nodes, thus falling within the traditional definition of FC. Unlike SC, the FC graph is unique for each data sample and can contain both short- and long-range edges. On the other hand, since it is derived directly from EEG signals, it might be sensitive to noise. Learnable FC based on node feature distance or feature concatenation is generally computed as: \[e_{ij}=\theta_{1}(|h_{i}-h_{j}|)\text{ and} \tag{5}\] \[e_{ij}=\theta_{2}(h_{i}\parallel h_{j}), \tag{6}\] respectively, where \(\theta_{1}(\cdot)\) and \(\theta_{2}(\cdot)\) are neural networks mapping \(\mathbb{R}^{d}\to\mathbb{R}\) and \(\mathbb{R}^{2d}\to\mathbb{R}\), respectively; \(|\cdot|\) denotes absolute value; \(\parallel\) denotes concatenation and \(h_{i}\) is the node feature/embedding of node \(i\). We discuss the attention-based graphs together with the types of graph convolutional layers in Section VI and thus skip these methods in this section.
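As a concrete illustration of the pre-defined graph definitions, the sketch below (our own code; the threshold value and the choice of absolute Pearson correlation as the FC measure are assumptions made for the example) builds an SC graph following Eq. (4) and an FC graph from a classical connectivity measure:

```python
import numpy as np

def structural_graph(pos, t):
    """Distance-thresholded SC graph of Eq. (4): e_ij = 1/d_ij if d_ij <= t."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        return np.where((d <= t) & (d > 0), 1.0 / d, 0.0)

def functional_graph(X):
    """FC graph from a classical measure: absolute Pearson correlation."""
    return np.abs(np.corrcoef(X))

rng = np.random.default_rng(2)
pos = rng.random((8, 3))          # 8 electrodes with 3-D scalp coordinates
X = rng.normal(size=(8, 256))     # 8 channels x 256 time samples
print(structural_graph(pos, t=0.5).shape, functional_graph(X).shape)
```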
Special cases of brain graph definition are the shared-mask methods. These methods define a matrix of learnable parameters with the same shape as the adjacency matrix of the input graphs, which acts as a mask/filter multiplying the adjacency matrix. This learnable matrix is a part of the model; thus, the same mask is applied to all input graphs. However, a shared mask limits the size of the input graphs, i.e. the number of nodes must remain fixed so that the adjacency matrix can be multiplied with the shared mask. At the current stage, it is unclear which method should be preferred for brain graph classification tasks. Some authors attempt to avoid this issue by combining multiple methods. However, we instead suggest that researchers carefully consider each of the presented methods in the context of the given classification task, as each method poses its unique set of strengths and weaknesses. ## V Node feature definitions The second part of the input to a GNN model is the node feature matrix (Fig. 1A). We summarise the various definitions of node features in Table 2. We categorise these definitions based on the domain in which they are computed, i.e. the time, frequency and graph domains. Figure 3: Classification tasks presented in the current EEG-GNN literature. The time-domain methods are the most commonly used in the current literature. In particular, these are the differential entropy (DE) and raw signal methods. The popularity of DE stems from the fact that many of the open EEG datasets include this feature, such as the SEED [108] emotion recognition dataset. DE describes the complexity of a continuous variable and is defined as: \[DE(X)=-\int_{X}f(x)\log(f(x))\,dx, \tag{7}\] where \(X\) is a continuous random variable and \(f(x)\) is the probability density function. Many papers define the node feature as the raw EEG signal. However, the raw signal can be too long for a GNN to process effectively. Thus, it is often coupled with a node feature pre-processing module and spatio-temporal GNNs (see Sections V.1 and VI, respectively) to either reduce the dimensionality or to effectively extract the temporal patterns contained within the signal. An alternative to the raw signal node feature is descriptive statistics, such as the mean, median or standard deviation. Frequency-domain node features are usually defined as the Fourier frequency components obtained by the Fourier transform or the power spectral density. Both of these methods attempt to quantify the strength of various frequency components within the EEG signal. An advantage of these representations is their relatively low dimensionality compared to the raw signal described previously.
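In EEG practice, DE is usually computed per channel and frequency band under a Gaussian assumption, in which case Eq. (7) admits a closed form; the following minimal sketch (our own illustration of this common simplification) shows the computation:

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-filtered EEG segment under a Gaussian assumption,
    where Eq. (7) reduces to 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(3)
segment = rng.normal(scale=2.0, size=1000)   # one channel, one frequency band
print(differential_entropy(segment))          # approx. 0.5*ln(2*pi*e*4) ~ 2.11
```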
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Method & Learnable & Pre-defined & Papers \\ \hline Distance between electrode positions & ✗ & ✓ & [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64] \\ Functional connectivity measure & ✗ & ✓ & - \\ Node feature distance & ✓ & ✗ & - \\ Node feature concatenation & ✓ & ✗ & - \\ Attention & ✓ & ✗ & - \\ Shared mask & ✓ & ✗ & - \\ \hline \end{tabular} \end{table} Table 1: Overview of brain graph structure definitions. Finally, graph-theoretical features can be utilised to describe the nodes, e.g. mean node weight [65] and betweenness centrality [65; 73].
A severe limitation of this method is that the graph structure needs to be defined prior to node feature extraction. Thus, this node feature type is incompatible with learnable brain graph methods. ### Node Feature Preprocessing An optional next step after node feature construction is a node feature pre-processing (NFP) module (Fig. 1B). We summarise the types of NFPs in Table 3. Most of the NFPs are integrated within the GNN architecture, thus allowing the model to be trained in an end-to-end manner. The exceptions are methods that utilise a pre-trained feature extraction neural network implemented as a bidirectional LSTM [76] or a CNN [64]. The surveyed NFPs are all based on a neural network. In most cases, these are variants of a CNN or multilayer perceptron (MLP). These modules aim to (1) reduce the dimensionality of the node features and (2) enhance the node features, including potentially suppressing noise or redundant information. ## VI Type of Graph Convolutional Layer A core part of a GNN model is its graph convolutional layers (GCN) (Fig. 1C). We summarise the utilised types of GCNs in Table 4. We further categorise them based on the type of GNN as introduced in Section II, i.e. spatial or spectral. Additionally, we add the temporal category, which is not a standalone type of GCN layer but must be combined with a spatial or spectral GCN. Interestingly, ChebConv is used in the majority of the surveyed papers (counting both ChebConv and spectral spatio-temporal GNN in Table 4). Since even high-density EEG montages typically use at most 128 electrodes, the brain graphs are relatively small. In such cases, even a full spectral GNN would not be too computationally expensive for EEG classification. Therefore, it remains unclear why many authors opt for the ChebConv approximation of spectral GNN. We speculate that the influence of classical signal processing tools in EEG analysis might serve as an argument for using spectral GNNs for EEG classification. On the other hand, the other half of the surveyed papers experiment with a wide range of spatial GNNs. The (simplified) GCN is a popular method amongst these, which is equivalent to a 1st-order ChebConv (\(K=1\)). A special case of spatial GNN is the graph attention network (GAT). GAT allows for adjusting the graph by re-weighting the edges using an attention mechanism. Generally, the attention mechanism for computing the new softmax-normalised edge weight \(e_{ij}\) is defined as follows: \[e_{ij}=\frac{\exp\left(\sigma\left(\mathbf{w}^{\top}[\mathbf{W}h_{i}\,\|\,\mathbf{W}h_{j}]\right)\right)}{\sum_{k\in\mathcal{N}(i)}\exp\left(\sigma\left(\mathbf{w}^{\top}[\mathbf{W}h_{i}\,\|\,\mathbf{W}h_{k}]\right)\right)}, \tag{8}\] where \(\mathbf{w}\) and \(\mathbf{W}\) are the learnable parameters of the model, \(\sigma\) is an activation function, \(h\) is the node feature vector/embedding, and \(\mathcal{N}(i)\) is the set of nodes connected to node \(i\). The resulting edge weights can then be passed to Equation 1. Next, spatio-temporal GNNs were tested for EEG classification in several instances. A spatio-temporal block consists of one GCN layer and one 1D-CNN applied temporally. This structure allows the model to extract both spatial (i.e. graph) and temporal patterns. There are both spatial and spectral variants of spatio-temporal GNNs, and there is no indication as to which one should be preferred, as no comparative study exists to date.
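Returning to the attention mechanism of Eq. (8), a minimal NumPy sketch is given below (our own illustration, assuming a LeakyReLU for \(\sigma\) and at least one neighbour per node; explicit loops are used for clarity rather than efficiency):

```python
import numpy as np

def gat_edge_weights(H, A, W, w, leaky=0.2):
    """Softmax-normalised attention coefficients e_ij of Eq. (8)."""
    Z = H @ W                                  # projected embeddings W h_i
    N = A.shape[0]
    scores = np.full((N, N), -np.inf)          # -inf masks non-edges
    for i in range(N):
        for j in range(N):
            if A[i, j] > 0:                    # j in N(i)
                s = w @ np.concatenate([Z[i], Z[j]])       # w^T [W h_i || W h_j]
                scores[i, j] = s if s > 0 else leaky * s   # sigma: LeakyReLU
    e = np.exp(scores - scores[np.isfinite(scores)].max())
    return e / e.sum(axis=1, keepdims=True)    # rows sum to 1 over neighbours

rng = np.random.default_rng(4)
H = rng.normal(size=(5, 8))
A = np.ones((5, 5)) - np.eye(5)                # fully connected toy graph
W, w = rng.normal(size=(8, 4)), rng.normal(size=8)
print(gat_edge_weights(H, A, W, w).sum(axis=1))   # each row sums to 1
```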
Finally, several papers adopt multi-branch architectures. These methods utilise multiple GCN layers applied in parallel to allow the model to focus on various aspects (also called views) of the input graph. An example of such a model utilises a two-branch GNN to learn from both FC- and SC-based brain graph structures [63]. Alternatively, the individual frequency bands of EEG signals can be used to construct various graph views [85]. ## VII Node Pooling Mechanisms In some instances, reducing the number of nodes in the graph might be desirable. This can be achieved with a node pooling module (Fig. 1D). We summarise the node pooling modules utilised in the surveyed papers in Table 5. There are both learnable and non-learnable node pooling modules in the literature. Please see the corresponding papers for a detailed description of these methods (Table 5). Node pooling modules remain a relatively unexplored topic in EEG-GNN classification models. Node pooling can (1) remove redundant nodes, (2) reduce the size of the graph embedding when it is formed by concatenating node embeddings, and (3) aid in the explainability of the model by identifying node importance with respect to the classification task. ## VIII From Node Embeddings to Graph Embedding The output of the graph convolutions is a set of learned node embeddings. Node embeddings in this form are suitable for tasks such as node classification and link prediction. However, for graph classification, the set of node embeddings needs to be transformed into a unified graph representation (Fig. 1E). We summarise the methods for this transformation in Table 6. The most straightforward method to form a graph embedding is to simply concatenate the node embeddings. This approach poses a few limitations. First, the resulting graph embedding grows with the number of nodes; thus, the classification layer requires a large number of parameters. Second, all input graphs need to have the same number of nodes, limiting the model's generalisation to other datasets. Finally, such an approach is likely to include redundant or duplicated information in the graph embedding, since a GNN produces node embeddings by aggregating information from neighbouring nodes. A readout function is one of the methods to form a graph embedding that addresses these issues. A readout forms the embedding by passing the node features through a permutation-invariant function. A general definition of a readout to obtain the graph embedding of a graph \(G_{i}\) from a set of \(V\) node embeddings \(H=[h_{1},...,h_{V}]\) is given by: \[G_{i}=\sum_{k=1}^{V}h_{k}, \tag{9}\] where \(\sum\) can be any permutation-invariant function. In the surveyed papers, these functions were the sum, average and maximum. A few papers also experiment with an attention-weighted sum to attenuate the role of unimportant nodes within the graph embedding [88]. An interesting alternative is to apply CNN-style average or maximum pooling node-wise [105]. Alternatively, researchers explored various neural network models to obtain graph embeddings, such as CNNs [52; 69; 78], (bi-)LSTMs [51; 83; 84; 99; 100], Transformers [89] and capsule networks [73]. Additionally, graph pooling methods, such as DiffPool [109], SAGPool [110], iPool [111], TAP [112] and HierCorrPool [113], can be used for this purpose.
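A minimal sketch of the readout of Eq. (9), with the three permutation-invariant functions reported in the surveyed papers (our own illustrative code):

```python
import numpy as np

def readout(H, mode="mean"):
    """Collapse the (N, d) node embedding matrix H into one d-dimensional
    graph embedding with a permutation-invariant function, Eq. (9)."""
    if mode == "sum":
        return H.sum(axis=0)
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown readout: {mode}")

H = np.arange(12.0).reshape(4, 3)     # 4 nodes with 3-dimensional embeddings
print(readout(H, "sum"), readout(H, "max"))
```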
## IX Discussion Despite most of the surveyed papers being relatively recent, a wide range of GNN-based methods has already been proposed to classify EEG signals in a diverse set of tasks, such as emotion recognition, brain-computer interfaces, and psychological and neurodegenerative disorders and diseases (Fig. 3). This recent rise in popularity of GNN models for EEG might be attributed to (1) the development of new GNN methods and (2) advances in network neuroscience that inspired the extension of this framework to deep learning. GNNs offer unique advantages over other deep learning methods, mainly the possibility of modelling multivariate time series and the interactions among them with a single GNN model, which is not possible with CNNs or recurrent networks. Additionally, patterns learned by GNNs can readily be interpreted in the context of network neuroscience, thus enabling a wide range of avenues for model explainability. This survey categorises the proposed GNN models in terms of their inputs and modules. Specifically, these are the brain graph structure, node features and their pre-processing, GCN layers, node pooling mechanisms, and the formation of graph embeddings. This categorisation allows us to provide a quick and simple overview of the different methods presented in the EEG-GNN literature, appreciate the current state of the art in this field and identify promising future directions. \begin{table} \begin{tabular}{|c|c|c|} \hline Method & Learnable & Papers \\ \hline TopK & ✓ & [62; 67] \\ Hierarchical tree pooling & ✓ & [65] \\ SortPool & ✓ & [48] \\ EdgePool & ✓ & [48] \\ SAGPool & ✓ & [48; 54] \\ Set2Set & ✓ & [48] \\ Manual Clustering & ✗ & [101; 102] \\ Graclus Clustering & ✗ & [77; 80] \\ \hline \end{tabular} \end{table} Table 5: Overview of node pooling mechanisms. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & Spatial & Spectral & Temporal & Papers \\ \hline Graph Isomorphism Network & ✓ & ✗ & ✗ & [48; 65; 79] \\ (Simplified) Graph Convolution Network & ✓ & ✗ & ✗ & [7; 46; 53; 54; 56; 58; 61; 70; 72; 75; 83; 89; 106] \\ Chebyshev Graph Convolution & ✗ & ✓ & ✗ & [49; 51; 55; 57; 59; 66; 67; 69; 71; 74; 76–78; 80; 82; 85; 90; 97; 99; 104] \\ Graph Attention Network & ✓ & ✗ & ✗ & [60; 62; 73; 84; 88; 94; 98; 100] \\ Diffusion recurrent gated & ✗ & ✓ & ✗ & [50] \\ Spatio-temporal GNN (Spectral) & ✗ & ✓ & ✓ & [52; 81; 86; 95; 96; 107] \\ Spatio-temporal GNN (Spatial) & ✓ & ✗ & ✓ & [63; 105] \\ Powers of Adjacency Matrix GNN & ✓ & ✗ & ✗ & [101; 102] \\ GraphSAGE & ✓ & ✗ & ✗ & [48; 107] \\ Spectral GNN & ✗ & ✓ & ✗ & [87; 93] \\ B-Spline Kernel GCN & ✓ & ✗ & ✗ & [47] \\ Residual GCN & ✓ & ✗ & ✗ & [91] \\ Multibranch architectures & - & - & - & [58; 63; 80; 92; 97; 100] \\ \hline \end{tabular} \end{table} Table 4: Overview of graph convolutional layers. ### Limitations of Surveyed Papers Surprisingly, we have identified the least variety and innovation in the category of GCN layers (Table 4). A significant proportion of the surveyed papers utilise either ChebConv or "vanilla" spatial GCN. This might be due to the relative novelty of the EEG-GNN field, and thus, many papers explore other areas of model design, such as node features and brain graph definitions. A few papers seem to successfully experiment with more complex types of GCN layers [47; 50; 91] and multi-branch architectures [58; 63; 80; 92; 97; 100].
A major limitation of most surveyed papers is the lack of generalisability to external datasets that might use a different number of EEG channels. This is caused by (1) the use of ChebConv and (2) forming the graph embedding by node feature concatenation [47; 55; 56; 57; 58; 59; 60; 64; 66; 67; 70; 74; 77; 80; 81; 86; 87; 90; 91; 92; 93; 94; 100; 101; 102; 104]. (1) can be addressed by utilising spatial GCN layers as suggested above, and (2) can be solved by using a readout function or a suitable node pooling mechanism, which coarsens the graph to a fixed number of nodes. Additionally, there is a general lack of transfer learning experiments for EEG-GNN models, which might be a promising direction for future research. Finally, we have identified an interesting gap in EEG-GNN research: the limited exploitation of frequency-band information. A few papers train separate models for each frequency band in isolation [46; 47; 65]. Alternatively, they propose concatenating the graph embeddings generated from the frequency-band GNN branches [52; 87; 101]. ### Future Directions Several promising directions can be identified in the rapidly evolving landscape of EEG-GNN research. First, a comprehensive comparison of the various GCN layers (e.g. spatial GNN, ChebConv, GAT and graph transformer) with respect to their influence on classification performance should be carried out to address this crucial design question in a systematic manner. Second, enhancing the generalisability of models by addressing issues related to the varying number of EEG channels/electrodes and exploring transfer learning approaches can open new avenues for research. For instance, GNN models pre-trained on cheap-to-obtain large datasets, such as open databases for emotion recognition or BCI applications, would allow the application of complex GNN architectures to problems with limited data availability due to high costs or small populations (e.g. clinical data, rare diseases and disorders). Focusing on these issues would likely improve the generalisability of the models when evaluated on a diverse set of EEG datasets and different classification tasks. Lastly, the rich frequency information of EEG signals should be explored further. For instance, we suggest a plausible utility of integrating cross-frequency coupling (CFC) approaches into EEG-GNN models. There is growing evidence in the literature concerning the advanced brain functions (e.g. learning, memory) enabled by CFC [114]. Thus, integrating findings from neuroscience research into EEG-GNN design promises both performance and explainability gains. ### Limitations of Our Survey It is worth noting that this paper does not follow a systematic review methodology; therefore, we do not assert that our findings are exhaustive.
Instead, our objective is to offer a succinct and cohesive overview of the current research on EEG-GNN models to facilitate the development of innovative approaches and assist researchers new to this field. \begin{table} \begin{tabular}{|c|c|c|} \hline Method & Learnable & Papers \\ \hline Sum readout & ✗ & [46; 65; 82] \\ Average readout & ✗ & [49; 54; 61; 62; 72; 85; 107] \\ Maximum readout & ✗ & [7; 54; 62; 76; 106] \\ Concatenate node embeddings & ✗ & [47; 55; 56; 57; 58; 60; 64; 66; 67; 70; 74; 77; 80; 81; 86; 87; 90; 91; 92; 93; 94; 95; 96; 97; 98; 100; 102; 104] \\ CNN-like Average/Maximum Pooling & ✗ & [83; 105] \\ SortPool & ✓ & [68] \\ Attention weighted & ✓ & [63; 88; 97] \\ CNN & ✓ & [52; 69; 78] \\ LSTM & ✓ & [51; 99] \\ Capsule Network & ✓ & [73] \\ Transformer & ✓ & [89] \\ Bidirectional LSTM & ✓ & [83; 84; 100] \\ \hline \end{tabular} \end{table} Table 6: Overview of methods for the formation of a graph embedding from a set of node embeddings. One of the major parts of EEG-GNN models we omit in this survey is model explainability. We suggest that a survey paper is not well suited to comprehensively covering this aspect of research; instead, a comparative experimental study would be better suited to explore the various options for GNN explainability. However, to maintain the comprehensiveness of this survey, we list the papers that report the use of certain methods of model explainability: [50; 55; 89; 105; 106]. ## X Conclusion In conclusion, this survey examined the current research on EEG-GNN models for classifying EEG signals. Various GNN-based methods have been proposed for tasks such as emotion recognition, brain-computer interfaces, and psychological and neurodegenerative disorders. The surveyed papers were categorised based on inputs and modules, including the brain graph structure, node features, GCN layers, node pooling mechanisms, and graph embeddings. GNNs offer a unique method for analysing and classifying EEG in the graph domain, thus allowing the exploitation of complex spatial information in brain networks that other neural networks cannot access. Additionally, GNNs can be easily extended with CNN and recurrent network-based modules at various stages of the GNN architecture, such as for node feature pre-processing, node embedding post-processing and graph embedding formation. However, limitations and areas for improvement were identified. There is a lack of variety and innovation in GCN layers, with many papers utilising ChebConv or "simple" spatial GCN without clear justification. Generalisability to external datasets with varying numbers of EEG electrodes is limited. Transfer learning experiments and the integration of cross-frequency coupling approaches are potential future research directions to enhance the performance and explainability of GNNs.
2304.05991
Maximum-likelihood Estimators in Physics-Informed Neural Networks for High-dimensional Inverse Problems
Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial differential equations (PDE). Typical inverse PINNs are formulated as soft-constrained multi-objective optimization problems with several hyperparameters. In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE) to allow explicit error propagation from interpolation to the physical model space through Taylor expansion, without the need of hyperparameter tuning. We explore its application to high-dimensional coupled ODEs constrained by differential algebraic equations that are common in transient chemical and biological kinetics. Furthermore, we show that singular-value decomposition (SVD) of the ODE coupling matrices (reaction stoichiometry matrix) provides reduced uncorrelated subspaces in which PINNs solutions can be represented and over which residuals can be projected. Finally, SVD bases serve as preconditioners for the inversion of covariance matrices in this hyperparameter-free robust application of MLE to "kinetics-informed neural networks".
Gabriel S. Gusmão, Andrew J. Medford
2023-04-12T17:15:07Z
http://arxiv.org/abs/2304.05991v2
Maximum-likelihood Estimators in Physics-Informed Neural Networks for High-dimensional Inverse Problems ###### Abstract Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial differential equations (PDE). Typical inverse PINNs are formulated as soft-constrained multi-objective optimization problems with several hyperparameters. In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE) to allow explicit error propagation from interpolation to the physical model space through Taylor expansion, without the need of hyperparameter tuning. We explore its application to high-dimensional coupled ODEs constrained by differential algebraic equations that are common in transient chemical and biological kinetics. Furthermore, we show that singular-value decomposition (SVD) of the ODE coupling matrices (reaction stoichiometry matrix) provides reduced uncorrelated subspaces in which PINNs solutions can be represented and over which residuals can be projected. Finally, SVD bases serve as preconditioners for the inversion of covariance matrices in this hyperparameter-free robust application of MLE to "kinetics-informed neural networks". physics-informed neural network, surrogate approximator, maximum-likelihood, singular-value decomposition, dimensionality reduction, catalysis, transient, chemical kinetics ## 1 Main Physics-informed [1], -inspired or -constrained [2] neural networks (PINNs) comprise an emerging interdisciplinary area connecting machine learning to several fields of science, engineering and mathematics. It has been shown that neural networks (NNs) provide a flexible structure for the solution of problems arising from fully- or partially-known differential equations, such as heat transfer [3, 4], compressible and incompressible, inviscid and viscous flows [5, 6], advection-diffusion equations [7], and other problems involving conservation laws and physical constraints. The formulation of inverse problems based on PINNs typically entails softly-constrained cost functions with several penalty weights (hyperparameters) that need to be tuned for a given set of data of known underlying physics (differential equation) [4, 5, 8, 9, 10, 11, 12]. Other strategies, such as weighting based on gradient statistics, inverse Dirichlet weighting, and multiple gradient descent based on the Karush-Kuhn-Tucker (KKT) formulation, have been applied [13] to satisfy initial values or boundary conditions. However, a general strategy for balancing the contributions of data interpolation and physical model satisfaction remains to be devised. Mean-squared error (MSE) cost functions are equivalent to maximizing the error likelihood if errors are normally distributed, uncorrelated and homoscedastic with equal variance across components [14]. Consequently, the weighted MSE cost-function components in inverse PINN problems implicitly define variance-weighted maximum-likelihood estimators (MLE) around uncorrelated, homoscedastic, normally distributed error terms. The need for supervised screening of such weighting hyperparameters adds a cumbersome layer of complexity to the inverse problem solution, and prevents direct propagation of error between the loss function and the fitted parameters.
Although the applications of PINNs have spanned problems in time and/or spatial (ODE and PDE) domains (input spaces) [15, 16, 10, 17, 18], few works have assessed inverse problems of high dimensionality in the state space (targets) [19, 20], which is an intrinsic feature of large interacting systems, such as chemical reactions and biological systems. The few exceptions are the forward solution of the Allen-Cahn, Hamilton-Jacobi-Bellman and Black-Scholes equations [21] in partial stochastic differential equations. Chemical reaction networks may include thousands of chemical species and reactions [22] in often stiff systems of coupled nonlinear differential equations. Studies involving high-dimensional inverse chemical kinetics problems using standard numerical methods have also been limited [23]. They are computationally expensive, requiring successive computation of ODE integrals to estimate gradients through adjoint or sensitivity methods [24, 25, 26]. PINNs might also serve as an auxiliary integration and derivative estimation method in conjunction with mechanism-discovery strategies based on symbolic regression. These include methods that generate many integral-form model candidates, e.g. ALAMO [27], require derivative estimates, e.g. SINDy [28], or require successive numerical integration for validation [29]. Among the few studies involving NN-based approaches to chemical kinetics, De Florio et al. used the "extreme theory of functional connections" to derive upper bounds for the generalization error in stiff forward PINNs. They tested their approach against the _POLLU_ problem [30], consisting of a stiff ODE encompassing \(20\) chemical species in homogeneous phase. Wu et al. [31] explored the use of lasso-regularized neural ordinary differential equations (NODE) to enforce sparsity in inferring the mass-action rate-law coupling matrix for homogeneous chemical systems involving six chemical species. However, studies involving PINNs for inverse complex reaction systems, especially in heterogeneous catalysis, are still scarce [32]. Such reacting systems are ubiquitous in the chemical industry, and the ability to investigate the detailed mechanisms and elementary processes that underlie chemical reactions would allow the rigorous modeling, control and design of chemical reactors, and the optimization of industrial processes. Kinetics-informed neural networks (KINNs) have been proposed by the authors and co-workers as a particular application of PINNs to high-dimensional heterogeneous chemical kinetics, which structurally tackles the issue of differential-algebraic equation (DAE) constraints and the estimation of derivatives across the NN structure [32]. However, one of the main weaknesses of the original inverse KINNs formulation is that, like inverse PINNs in general, it consists of a multi-objective optimization problem. It required a priori estimation of a hyperparameter conveying the ratio between the variances of state values and their derivatives. The inverse solution was shown to be a function of the chosen hyperparameter, leading to a Pareto frontier of efficient solutions. A more effective approach is necessary to rigorously model the probability density function of the residuals. The MLE formulation is advantageous since it may be applied to complex error distributions, e.g., full covariance with correlated error terms for Gaussian error [33], and its structure allows the propagation of variance between residuals.
However, MLEs constructed around multivariate Gaussian distributions require the inversion of error covariance matrices, and correlated errors lead to ill-conditioning with respect to inversion. Better conditioning can be attained by finding a suitable preconditioner for the coupled differential equations [34]. The ODEs that describe the rates of change of species in a chemical system are dictated by material balances (conservation laws), which are conveyed as reaction stoichiometries [35, 36, 37]. For large chemical systems, these linear relationships between changes in concentrations of chemical species lead to inherently correlated errors in states and their derivatives, which results in ill-conditioned covariance matrices. In this work, we show that the Taylor expansion of residuals in Gaussian MLEs allows direct error propagation between interpolation and physics models for high-dimensional ODE inverse problems, thereby removing the need for hyperparameter tuning in inverse PINNs, especially for large chemical systems (KINNs). Furthermore, we resort to the singular-value decomposition (SVD) as a framework to generate orthonormal bases for the range and nullspace of the ODE coupling matrix (reaction stoichiometry matrix). The range basis serves as a preconditioner, since the projection of KINNs residuals onto it removes correlated terms from the error covariance matrix, which mitigates the issue of ill-conditioning with respect to inversion, allowing the estimation of precision matrices. In this robust KINNs formulation (rKINNs), we incorporate the three types of learning biases as defined by Karniadakis et al. [38]: (i) "observational bias" in the sampling of model error at points where observed data is not available, (ii) "inductive bias" in the conservation law defined by the stoichiometry matrix, the topological incorporation of the normalization operator that enforces DAE constraints, and the representation of the ODEs in the preconditioning space obtained from the SVD of the stoichiometry matrix, and (iii) "learning bias" in assuming that model and interpolation residuals are normally distributed and constructing the optimization problem in terms of MLEs. ## 2 Results ### Theoretical Overview #### 2.1.1 Mean-field chemical kinetics In its conceptualization, KINNs was designed around mean-field kinetic models with power-law kinetics and Arrhenius-like dependence on temperature, \(\theta\), for a set of chemical species whose concentrations (states) are given by an array \(\mathbf{c}\in\mathbb{R}_{+}^{n}\). A mean-field microkinetic model (MKM) is defined as in Eq. (2.1.1): \[\mathbf{\dot{c}}=\frac{d}{dt}\mathbf{c}=f(\mathbf{c},\mathbf{p}(\theta))=\mathbf{M}\,\mathbf{r}(\mathbf{c},\mathbf{p}(\theta))=\mathbf{M}\,(\mathbf{k}(\mathbf{p}(\theta))\circ\psi(\mathbf{c})) \tag{2.1.1}\] where \(\mathbf{M}\in\mathbb{Z}^{n\times m}\) is the stoichiometry matrix (ODE coupling matrix), which can include reactions between chemical species possibly in different phases (e.g. surface and gas/liquid phases), \(\psi(\cdot):\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{m}\) represents the power-law kinetics mapping, and \(\mathbf{k}:=\{\mathbf{k}=\exp(-\mathbf{p}(\theta))\in\mathbb{R}_{+}^{m},\ \theta\in\mathbb{R}_{+},\ \mathbf{p}(\theta)\in\mathbb{R}^{m}\}\) denotes the Arrhenius-like temperature-dependent rate constants, with \(\mathbf{p}\) a vector-valued function that includes entropic and enthalpic contributions to a transition state.
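For illustration, the sketch below (our own code; the reaction-order matrix `orders` encoding \(\psi\) and the toy reversible reaction are assumptions made for the example) evaluates the right-hand side of Eq. (2.1.1) for elementary mass-action kinetics:

```python
import numpy as np

def mkm_rhs(c, M, p, orders):
    """Mean-field rate law of Eq. (2.1.1): c_dot = M (k(p) o psi(c)),
    with power-law kinetics psi_j(c) = prod_i c_i ** orders[i, j]."""
    k = np.exp(-p)                              # Arrhenius-like rate constants
    psi = np.prod(c[:, None] ** orders, axis=0) # power-law kinetics mapping
    return M @ (k * psi)

# Toy reversible reaction A <-> B, treated as two elementary steps (fwd, rev)
M = np.array([[-1.0, 1.0],                      # species A
              [ 1.0, -1.0]])                    # species B
orders = np.array([[1.0, 0.0],                  # forward rate ~ c_A
                   [0.0, 1.0]])                 # reverse rate ~ c_B
c, p = np.array([1.0, 0.2]), np.array([0.0, 1.0])
print(mkm_rhs(c, M, p, orders))                 # A is consumed, B is produced
```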
#### 2.1.2 Inverse KINNs residuals "Surrogate approximators" (SAs) were introduced in KINNs [32] as a composite set of NNs that serve as basis functions with weights \(\boldsymbol{\omega}_{\mathrm{s}}\) for the solution of chemical kinetics ODEs, Eq. (2.1.1), with \(\mathbf{c}(t)\approx\mathbf{x}(t,\boldsymbol{\omega}_{\mathrm{s}})\), and with the ability to structurally represent DAE constraints through an operator \(\mathbf{C_{N}}\) (section 4.3). Such an approach builds upon the standard PINNs concept [39, 40, 41], which is a direct result of the universal approximation theorem [42, 43, 44, 45]. The inverse problem is defined in terms of a multi-objective optimization function involving interpolation and physics-model residuals, \(\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}\) and \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}\in\mathbb{R}^{n}\) for \(i=1,2,...,d\) datapoints, respectively. The interpolation error, Eq. (2.1.2), is defined by the difference between measured state values, \(\tilde{\mathbf{x}}_{i}\in\mathbb{R}^{n}\), and estimated state values, \(\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathrm{s}})\in\mathbb{R}^{n}\), for every \(t_{i}\): \[\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}(\boldsymbol{\omega}_{\mathrm{s}})=\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathrm{s}})-\tilde{\mathbf{x}}_{i} \tag{2.1.2}\] such that if \(\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}(\boldsymbol{\omega}_{\mathrm{s}})=0\) then the SA goes exactly through data point \(i\). The physics-model residual, \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}\), relates the SA derivative to the output of the differential equation (e.g. the kinetic model) in Eq. (2.1.3): \[\begin{split}\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}&=\dot{\mathbf{x}}(t_{i},\boldsymbol{\omega}_{\mathrm{s}})-f(\mathbf{x}_{i},\mathbf{p})\\ &=\dot{\mathbf{x}}(t_{i},\boldsymbol{\omega}_{\mathrm{s}})-\mathbf{M}\,(\mathbf{k}(\mathbf{p}(\theta))\circ\psi(\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathrm{s}})))\end{split} \tag{2.1.3}\] such that if \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}=0\) then the differential equation in Eq. (2.1.1) is exactly satisfied at time \(t_{i}\). In the original regularized KINNs loss function, a hyperparameter \(\alpha\) weights the model-fitting MSE relative to the interpolation MSE, Eq. (2.1.4): \[j_{t}(\boldsymbol{\omega}_{\mathrm{s}},\mathbf{p})=\mathrm{MSE}(\boldsymbol{\varepsilon}_{\mathbf{x}})+\alpha\,\mathrm{MSE}(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}})=\frac{1}{d}\sum_{i}^{d}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}^{T}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}+\alpha\frac{1}{d}\sum_{i}^{d}\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}^{T}\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}} \tag{2.1.4}\] The loss function \(j_{t}\) is then minimized with respect to \(\boldsymbol{\omega}_{\mathrm{s}}\) and \(\mathbf{p}\) for a fixed \(\alpha\). The error in the kinetic model parameters, \(\boldsymbol{\varepsilon}_{\mathbf{p}}^{*}\), is defined as: \[\boldsymbol{\varepsilon}_{\mathbf{p}}^{*}=\mathbf{p}-\mathbf{p}^{*} \tag{2.1.5}\] where \(\mathbf{p}\) represents the parameters of the physical model at any point in the optimization and \(\mathbf{p}^{*}\) represents the true model parameters. Unlike \(\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}\) and \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}\), \(\boldsymbol{\varepsilon}_{\mathbf{p}}^{*}\) is not dependent on time, and it cannot be evaluated explicitly from kinetic data since \(\mathbf{p}^{*}\) is not known in general. Rather, it is implicitly assumed that minimizing the KINNs loss function also minimizes \(\boldsymbol{\varepsilon}_{\mathbf{p}}^{*}\).
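A minimal PyTorch sketch of the residuals in Eqs. (2.1.2)-(2.1.4) follows (our own illustration: the surrogate network, the stand-in kinetic model `f` and the random 'measurements' are placeholders, and the MSE terms are averaged per element rather than with the exact normalization of Eq. (2.1.4)):

```python
import torch

torch.manual_seed(0)
n, d = 2, 16                                    # states, collocation points
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, n))   # surrogate x(t, omega_s)

def f(x, p):
    """Placeholder kinetic model f(x, p): a linear A -> B -> 0 cascade."""
    return torch.stack([-p[0] * x[:, 0],
                        p[0] * x[:, 0] - p[1] * x[:, 1]], dim=1)

t = torch.linspace(0.0, 1.0, d, requires_grad=True).reshape(-1, 1)
x_obs = torch.rand(d, n)                        # placeholder measurements
p = torch.tensor([1.0, 0.5], requires_grad=True)

x = net(t)
# dx/dt column-by-column via autograd (x[i, j] depends only on t[i])
x_dot = torch.cat([torch.autograd.grad(x[:, j].sum(), t, create_graph=True)[0]
                   for j in range(n)], dim=1)
eps_x = x - x_obs                               # interpolation residual, Eq. (2.1.2)
eps_xdot = x_dot - f(x, p)                      # physics residual, Eq. (2.1.3)
alpha = 0.1
j_t = eps_x.pow(2).mean() + alpha * eps_xdot.pow(2).mean()   # Eq. (2.1.4)
j_t.backward()                                  # gradients w.r.t. omega_s and p
print(float(j_t))
```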
### Derivation of maximum-likelihood estimator #### 2.2.1 Error probability density function To circumvent the need for a regularization hyperparameter (\(\alpha\)) and enable rigorous error propagation, the original KINNs problem, Eq. (2.1.4), and most related PINN problems, can be reformulated in terms of maximum-likelihood estimators (MLEs). The key idea of MLE is to assume an underlying probability distribution function, \(\pi(\varepsilon)\), for the error terms in the regression problem and to maximize the likelihood of observing the errors given the data. Let \(\boldsymbol{\varepsilon}_{\mathbf{x}}\), \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}}\), \(\boldsymbol{\varepsilon}_{\mathbf{p}}\) denote the errors related to the interpolation, the physical model and its parameters, respectively. Assuming errors are independent between timepoints \(i\), and that \(\boldsymbol{\varepsilon}_{\mathbf{x}}\) is normally distributed with covariance \(\boldsymbol{\Sigma}_{\mathbf{x}}\), the associated probability distribution function conditioned on the observed data \(\tilde{\mathbf{x}}\) and its covariance \(\boldsymbol{\Sigma}_{\mathbf{x}}\) is given in Eq. (2.2.1): \[\pi(\cap_{i=1}^{n}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}},\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}},\boldsymbol{\varepsilon}_{\mathbf{p}}|\boldsymbol{\Sigma}_{\mathbf{x}},\tilde{\mathbf{x}}_{i})=\prod_{i}^{n}\pi(\boldsymbol{\varepsilon}_{\mathbf{x}_{i}},\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}},\boldsymbol{\varepsilon}_{\mathbf{p}}|\boldsymbol{\Sigma}_{\mathbf{x}},\tilde{\mathbf{x}}_{i}) \tag{2.2.1}\] The main advantage of the MLE formulation for KINNs is that it allows a direct connection between errors associated with model fitting and those related to data interpolation. With the independent variables \(\mathbf{x}\) and \(\mathbf{p}\), and with \(\dot{\mathbf{x}}\) depending on \(\mathbf{x}\) through \(\boldsymbol{\omega}_{\mathrm{s}}\), a variation in \(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}}\) can be represented by its total derivative: \[\delta\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}}=\underbrace{\partial_{\mathbf{x}}\dot{\mathbf{x}}\,\delta\mathbf{x}}_{\text{representation}}-\underbrace{(\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})\delta\mathbf{x}+\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})\delta\mathbf{p})}_{\text{structural}}+\mathcal{O}(\delta\mathbf{x}^{2},\delta\mathbf{p}^{2}) \tag{2.2.2}\] which represents the sensitivity of the model residuals, \(\delta\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}}\), in a Taylor expansion, as a function of variations in the model parameters, \(\delta\mathbf{p}\), and states, \(\delta\mathbf{x}\). In Eq. (2.2.2), \(\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})\) and \(\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})\) are the kinetic model gradients with respect to the states and physical model parameters, respectively, about \(\mathbf{x}\) and \(\mathbf{p}\), and they represent the contribution of the physical model to the variations in the residuals of the SA. The partial derivatives of the time derivatives of the SA with respect to its states, \(\partial_{\mathbf{x}}\dot{\mathbf{x}}\), are implicit functions of the NN architecture, i.e., depth, width, connectivity, choice of activation functions and, ultimately, their weights, \(\boldsymbol{\omega}_{\mathrm{s}}\), as in Eq. (2.2.3).
\[\partial_{\mathbf{x}}\dot{\mathbf{x}}=\frac{\partial\dot{\mathbf{x}}}{\partial\boldsymbol{\omega}_{\mathrm{s}}}\frac{\partial\boldsymbol{\omega}_{\mathrm{s}}}{\partial\mathbf{x}}=\frac{\partial\dot{\mathbf{x}}}{\partial\boldsymbol{\omega}_{\mathrm{s}}}\left(\frac{\partial\mathbf{x}}{\partial\boldsymbol{\omega}_{\mathrm{s}}}\right)^{-1} \tag{2.2.3}\] The universal approximation theorem extends the approximation of functions not only to their values but also to their gradients [46, 47]. Therefore, in the limit of sufficiently complex NNs, the model states, \(\mathbf{x}\), and the model derivatives, \(\dot{\mathbf{x}}\), will be independent. This is analogous to the case of explicit Galerkin methods that employ bases which enforce orthogonality between states and associated derivatives [48, 49]. However, for smaller NNs there should exist some correlation between states and their derivatives, with the ideal scenario being \(\partial_{\mathbf{x}}\dot{\mathbf{x}}\propto\partial_{\mathbf{x}}f\), where the SA behavior mimics that of the physical model, making a PINN approximate a NODE. Nonetheless, in this work we neglect the representation error, \(\partial_{\mathbf{x}}\dot{\mathbf{x}}\), based on the assumption that the feed-forward NN basis is large enough to allow states and their derivatives to be represented independently. Under the assumption of normally distributed residuals, the derivative-residual covariance matrix at a timepoint \(i\), \(\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}}\), can be expressed as projections of the interpolation covariance onto the gradients of the model at the respective timepoints, as in section 4.1. The joint probability density function (PDF) for the MLE approach is then given by the product of independent distributions: \[\pi\left(\cap_{i=1}^{n}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}},\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}},\boldsymbol{\varepsilon}_{\mathbf{p}}|\boldsymbol{\Sigma}_{\mathbf{x}},\tilde{\mathbf{x}}_{i}\right)\approx\pi(\cap_{i=1}^{n}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}},\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}|\boldsymbol{\Sigma}_{\mathbf{x}},\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}},\tilde{\mathbf{x}}_{i},\mathbf{p}) \tag{2.2.4}\] \[=\prod_{i}^{n}\pi(\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}|\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}})\pi(\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}|\boldsymbol{\Sigma}_{\mathbf{x}}) \tag{2.2.5}\] In Eq. (2.2.5), we imply that the model-derivative and interpolation covariances are assumed constant for a given sample of evaluated model and model-derivative errors. In section 2.2.2, we show how this assumption about the statistical model can be used within a maximum-likelihood estimator framework. #### 2.2.2 Maximum likelihood estimation Maximizing likelihood estimators is equivalent to minimizing the negative log-likelihood of the error PDFs [50, 51]. In MLE form, Eq. (2.2.5) becomes a summation, as in Eq. (2.2.6), where the \(\pi\) are assumed to be Gaussian PDFs and \(\boldsymbol{\Omega}=\boldsymbol{\Sigma}^{-1}\) is the precision matrix, which is held constant within the parameter optimization and, therefore, does not depend on the suppressed variables \(\boldsymbol{\omega}_{\mathrm{s}}\) and \(\mathbf{p}\). For brevity, we lump all parameters being optimized into a single variable, \(\boldsymbol{\omega}_{t}=[\boldsymbol{\omega}_{\mathrm{s}},\mathbf{p}]\), yielding:
\[\begin{split}\min_{\mathbf{\omega}_{t}}&\quad\ell_{\mathrm{t}}=\frac{1}{n}\sum_{i}^{n}\left(\mathbf{\varepsilon}_{\dot{\mathbf{x}}_{i}}^{T}\mathbf{\Omega}_{\dot{\mathbf{x}}_{i}}\mathbf{\varepsilon}_{\dot{\mathbf{x}}_{i}}+\mathbf{\varepsilon}_{\mathbf{x}_{i}}^{T}\mathbf{\Omega}_{\mathbf{x}}\mathbf{\varepsilon}_{\mathbf{x}_{i}}\right)\\ \text{s.t.}&\quad\mathbf{\Omega}_{\dot{\mathbf{x}}_{i}}=\left(\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{x}}+\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{p}}\right)^{-1}\\ &\quad\mathbf{\Omega}_{\mathbf{x}}=\mathbf{\Sigma}_{\mathbf{x}}^{-1}\\ &\quad\mathbf{p}\in\mathbb{R}^{\dim(\mathbf{p})},\ \mathbf{\omega}_{\mathbf{s}}\in\mathbb{R}^{\dim(\mathbf{\omega}_{\mathbf{s}})}\end{split} \tag{2.2.6}\]

### Range-Nullspace Decomposition

Coupled ODEs describing rank-deficient dynamical systems can be reduced by defining proper bases for the states and their derivatives. The algebraic analysis of the stoichiometry coupling matrix for information recovery and data reconstruction has been the focus of investigation in chemical and biological systems [52, 53]. Here we generalize the algebraic analysis of the coupling matrix \(\mathbf{M}\) through its singular-value decomposition (SVD), given in Eq. (2.3.1). \[\mathbf{M}\triangleq\mathbf{U}\mathbf{S}\mathbf{V}^{\dagger} \tag{2.3.1}\] where \(\mathbf{U}\in\mathbb{R}^{n\times n}\) and \(\mathbf{V}\in\mathbb{R}^{m\times m}\) are orthonormal bases of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), respectively, and \(\mathbf{S}\in\mathbb{R}^{n\times m}\) is a matrix with the singular values of \(\mathbf{M}\) along its diagonal entries and zeros in the off-diagonal ones. Therefore, \(\mathbf{U}\) is a basis for concentrations and their derivatives over time, which can be split into \(\mathbf{U}^{\mathrm{R}}=\{u_{ij}\mid s_{jj}>0,\ \forall i\}\), a basis for the range of \(\mathbf{M}\), and \(\mathbf{U}^{\mathrm{N}}=\{u_{ij}\mid s_{jj}=0,\ \forall i\}\), a basis for the nullspace (kernel) of \(\mathbf{M}^{T}\). \[\mathbf{U}=\begin{bmatrix}\mathbf{U}^{\mathrm{R}}&\mathbf{U}^{\mathrm{N}}\end{bmatrix} \tag{2.3.2}\] Any net non-zero rate of reaction must lie in \(\mathbf{U}^{\mathrm{R}}\), while the boundary conditions and inner fluxes within the chemical system lie in the nullspace \(\mathbf{U}^{\mathrm{N}}\), allowing the concentration vector to be decomposed into range and nullspace components: \[\mathbf{c}_{(t)} =\mathbf{c}_{(t)}^{\mathrm{R}}+\mathbf{c}^{\mathrm{N}} \tag{2.3.3}\] \[\mathbf{c}_{(t)}^{\mathrm{R}} =\mathbf{U}^{\mathrm{R}}(\mathbf{U}^{\mathrm{R}})^{T}\mathbf{c}_{(t)} =\mathbf{U}^{\mathrm{R}}\mathbf{z}_{(t)}^{\mathrm{R}}\] (2.3.4) \[\mathbf{c}^{\mathrm{N}} =\mathbf{U}^{\mathrm{N}}(\mathbf{U}^{\mathrm{N}})^{T}\mathbf{c}_{(t)} =\mathbf{U}^{\mathrm{N}}\mathbf{z}^{\mathrm{N}} \tag{2.3.5}\] where \(\mathbf{z}^{\mathrm{N}}\) and \(\mathbf{z}_{(t)}^{\mathrm{R}}\) are the projections of \(\mathbf{c}\) onto the nullspace and range, respectively. Concentration derivatives, in turn, reside exclusively in the range of \(\mathbf{M}\), and can be decomposed using the projections as follows.
\[\dot{\mathbf{c}}_{(t)} =\dot{\mathbf{c}}_{(t)}^{\mathrm{R}}=\mathbf{U}^{\mathrm{R}}(\mathbf{U}^{\mathrm{R}})^{T}\dot{\mathbf{c}}_{(t)}=\mathbf{U}^{\mathrm{R}}\dot{\mathbf{z}}_{(t)}^{\mathrm{R}} \tag{2.3.6}\] \[\mathbf{0} =(\mathbf{U}^{\mathrm{N}})^{T}\dot{\mathbf{c}}_{(t)}=\dot{\mathbf{z}}^{\mathrm{N}} \tag{2.3.7}\] The non-zero singular values from the SVD are associated with the subspace over which all the model-derivative variance is confined. In section 4.7, we show that the SA can be built such that the time dependence is contained only in the span of \(\mathbf{U}^{\mathrm{R}}\), with a time-independent shift in \(\mathbf{U}^{\mathrm{N}}\). Hence, the projection of Eq. (2.1.2) onto \(\mathbf{U}^{\mathrm{N}}\) leads to a time-independent error \(\boldsymbol{\varepsilon}_{\mathbf{z}}^{\mathrm{N}}\in\mathbf{U}^{\mathrm{N}}\) between the SA estimates and the observed data. Assuming that \(\mathbf{z}^{\mathrm{N}}\) can be estimated from the observed data, the nullspace solution can be set a priori as \(\mathbf{z}^{\mathrm{N}}=\left\langle\left(\mathbf{U}^{\mathrm{N}}\right)^{T}\tilde{\mathbf{x}}\right\rangle\). In Eq. (2.2.6), it is assumed that \(\mathbf{\Omega}_{\mathbf{x}}\) can be defined as \(\mathbf{\Sigma}_{\mathbf{x}}^{-1}\); however, when \(\mathbf{M}\) has zero singular values, \(\mathbf{\Sigma}_{\mathbf{x}}\) is rank-deficient and, hence, ill-conditioned with respect to inversion. Projecting \(\mathbf{\Sigma}_{\mathbf{x}}\) onto \(\mathbf{U}^{\mathrm{R}}\) preserves the total variance and allows the inversion of the projected matrix, i.e., if \(\left(\mathbf{U}^{\mathrm{R}}\right)^{T}\mathbf{\Sigma}_{\mathbf{x}}\mathbf{U}^{\mathrm{R}}=\mathbf{\Sigma}_{\mathbf{z}}\) is full-rank, \(\mathbf{\Omega}_{\mathbf{z}}=\left(\mathbf{\Sigma}_{\mathbf{z}}\right)^{-1}\) can be computed. The same rationale holds for \(\mathbf{\Sigma}_{\dot{\mathbf{x}}}\), since the derivative residuals lie only in \(\mathbf{U}^{\mathrm{R}}\).

### Case Studies and Discussions

#### 2.4.1 Representative 'Deep' Microkinetic Model

Here, we utilize the most complex of the anecdotal test cases from the prior KINNs study [32], which includes latent species and reactions between them. The elementary steps are given in Eq. (2.4.1), where each step is reversible with forward and reverse rate constants as listed in Table 1, and the corresponding matrix representation is shown in the Supplementary Material, S5.

Heterogeneous Reaction Network \[\begin{split} A&+*\rightleftharpoons A*\\ B&+*\rightleftharpoons B*\\ C&+*\rightleftharpoons C*\\ A&*+*\rightleftharpoons 2D*\\ B&*+*\rightleftharpoons 2E*\\ D&*\;+E*\rightleftharpoons F*\;+*\\ F&*\;+E*\rightleftharpoons C*\;+*\end{split} \tag{2.4.1}\]

An overview of the generated synthetic data is shown in Fig. 1, where each initial condition is referred to as an "experiment", and the data are separated into gas-phase concentrations (\(\mathbf{x}_{\mathbf{o}}\)), which can be measured directly, and surface concentrations (\(\mathbf{x}_{\ast}\)), which can only be measured indirectly. This is a common challenge in the dynamics of heterogeneous catalytic systems, i.e., the existence of "latent" states arising from adsorbed surface species. The calibrated signals for surface concentrations were obtained from the raw synthetic data by using the closed-form solution for surface reconstruction defined in Eq. (S3.5), section S3, with an eigenvalue cutoff of \(5\times 10^{-3}\) to find the calibration coefficients \(\gamma\), which are shown in the parity plot in Fig. S3.1.
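To make the structure of the forward model concrete, the sketch below assembles the mass-action right-hand side \(f(\mathbf{x},\mathbf{p})=\mathbf{M}\,\mathbf{r}(\mathbf{x},\mathbf{p})\) for the network in Eq. (2.4.1). It is a minimal illustration rather than the released implementation: the state ordering and the rate-constant names (`kf`, `kr`) are assumptions made for the example, while the stoichiometric coefficients follow directly from the elementary steps.

```python
import numpy as np

# Assumed state ordering: [A, B, C, *, A*, B*, C*, D*, E*, F*].
# Columns of M correspond to the seven reversible steps of Eq. (2.4.1).
M = np.array([
    [-1,  0,  0,  0,  0,  0,  0],   # A
    [ 0, -1,  0,  0,  0,  0,  0],   # B
    [ 0,  0, -1,  0,  0,  0,  0],   # C
    [-1, -1, -1, -1, -1,  1,  1],   # * (free sites)
    [ 1,  0,  0, -1,  0,  0,  0],   # A*
    [ 0,  1,  0,  0, -1,  0,  0],   # B*
    [ 0,  0,  1,  0,  0,  0,  1],   # C*
    [ 0,  0,  0,  2,  0, -1,  0],   # D*
    [ 0,  0,  0,  0,  2, -1, -1],   # E*
    [ 0,  0,  0,  0,  0,  1, -1],   # F*
])

def rates(x, kf, kr):
    """Net mass-action rates r(x, p) for the seven reversible steps."""
    A, B, C, s, As, Bs, Cs, Ds, Es, Fs = x
    return np.array([
        kf[0] * A * s   - kr[0] * As,        # A + * <-> A*
        kf[1] * B * s   - kr[1] * Bs,        # B + * <-> B*
        kf[2] * C * s   - kr[2] * Cs,        # C + * <-> C*
        kf[3] * As * s  - kr[3] * Ds ** 2,   # A* + * <-> 2D*
        kf[4] * Bs * s  - kr[4] * Es ** 2,   # B* + * <-> 2E*
        kf[5] * Ds * Es - kr[5] * Fs * s,    # D* + E* <-> F* + *
        kf[6] * Fs * Es - kr[6] * Cs * s,    # F* + E* <-> C* + *
    ])

def f(x, kf, kr):
    """Physical model x_dot = f(x, p) = M r(x, p)."""
    return M @ rates(x, kf, kr)
```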
The coverages are normalized by construction, according to Eq. (S3.3), and satisfy the nullspace time invariance, which is expressed by the differential equation in \(z\)-space, Fig. 1. Therefore, for the synthetic experiments under study, the bulk- and surface-phase dynamics can be reconstructed directly rather than learned through NN training.

Figure 1: Bulk phase (a and e) and calibrated surface coverage fraction (b and f); \(\mathbf{z}\)-space: range, \(\mathbf{z}^{\mathrm{R}}\) (c and g), and nullspace, \(\mathbf{z}^{\mathrm{N}}\) (d and h), projections, where all variance is contained in the range projection and the time invariance in the nullspace, for IC1, \(\{x_{A},x_{B},x_{C},x_{\ast}\}_{t_{0}}=\{0.6,0.4,0.0,1.0\}\), and IC2, \(\{x_{A},x_{B},x_{C},x_{\ast}\}_{t_{0}}=\{0.2,0.3,0.5,1.0\}\).

#### 2.4.2 Application and Comparison of naive KINNs and robust KINNs

The range-nullspace decomposition in Fig. 1 leads to a dimensionality reduction in the inverse problem: of the nine initial degrees of freedom, i.e., three bulk-phase species and six surface species (adsorbates), with the free-sites balance "\(\ast\)" accounted for by the normalization operator \(\mathrm{C_{\mathbf{N}}}\), only the seven range components enter the optimization. Therefore, the KINNs output layer included three linear components for the bulk phase and seven DAE-constrained components, with an underlying NN with six outputs for the surface species. The rKINNs SA maps only the seven components of the range space, while the three nullspace components are estimated a priori.

In the naive approach, KINNs, the hyperparameter \(\alpha\) controls the expected ratio between interpolation (data) and model (physics) variances. From the MLE perspective, this represents a (naive) assumption that the underlying interpolation and model residuals are normally distributed with covariances \(\mathbf{\Sigma_{x}}=\sigma_{\mathbf{x}}\mathbf{I}\) and \(\mathbf{\Sigma_{\dot{x}}}=\sigma_{\dot{\mathbf{x}}}\mathbf{I}\), with \(\alpha=\sigma_{\dot{\mathbf{x}}}/\sigma_{\mathbf{x}}\), such that smaller values of \(\alpha\) denote lower variance for the model residuals relative to the interpolation residuals, and the opposite for larger values of \(\alpha\). In Fig. 2 (a), the inverse naive KINNs are applied to the case-study system. The evolution of the data and model residuals as a function of \(\alpha\) defines a Pareto set of efficient solutions. A parity plot for the kinetic model parameters at the inflection points depicted by open (tightening) and closed (relaxation) diamonds in Fig. 2 (a) is shown in (b). Optimal kinetic model parameters are associated with the "elbows", or inflection points, in the relaxation direction. Increasing \(\alpha\) leads the SA to overfit the data, whereas reducing it indefinitely leads the SA to "forget" the data, producing degenerate physics solutions [32]. Optimizing \(\alpha\) through multi-objective optimization is computationally expensive, since each iteration requires 300 epochs of NN training, and it leads to ambiguity in the solution due to hysteresis in the parameters as a function of \(\alpha\).

Fig. 2 (c) illustrates the main advantage of rKINNs over KINNs. Interpolation (\(\ell_{\mathbf{x}}\)) and model (\(\ell_{\dot{\mathbf{x}}}\)) negative log-likelihoods were computed from the interpolation residuals projected onto the final (stationary) covariance matrix obtained by rKINNs, and from the model residuals per time point according to Eq.
(4.7.3), respectively. In addition to the faster per-epoch convergence, rKINNs reaches a stable, stationary point (Fig. 2 (c), black-filled circle). Such behavior is attributed to the stationarity of the SA interpolation and, thus, of the sample covariance, \(\hat{\Sigma}_{\mathbf{x}}\), and of its propagation through the model gradients, \(\hat{\Sigma}_{\dot{\mathbf{x}}_{i}}\), over time points \(t_{i}\). Different choices of hidden-layer activation-function sets led to indistinguishable results for the stationary model parameters and confidence intervals for rKINNs, e.g., a \(\{20_{\times 3}\}\) network with all-swish activations \(\{swish_{\times 3}\}\), or a \(\{100_{\times 1}\}\) network with \(\{rbf\}\), highlighting the robustness of rKINNs not only in weighting the variances between interpolation and physics, but also in converging to similar convex basins irrespective of the choice of activation (basis) functions and SA architecture.

In Fig. 3, synthetic data (solid circles), SA estimates (open circles, dashed line), and the physical-model integral based on the initial conditions estimated from the SA (solid lines) are plotted for bulk species (a and d) and adsorbates (b and e). Additionally, parity plots of the physical model, \(f(\mathbf{x})\), against the SA algorithmic differentiation, \(\dot{\mathbf{x}}\), for all species are shown in Fig. 3 (c) and (f). The results illustrate strong agreement between the numerical integration of the model with recovered parameters (solid lines) and the synthetic experimental data for both experiments.

## 3 Discussion

In this work, we introduce a hyperparameter-free approach for inverse PINN problems applied to high-dimensional coupled non-linear ODEs. We express the probability density function for the residuals of PINNs applied to heterogeneous chemical kinetics systems, KINNs [32], as normal distributions. We reformulate the KINNs regularized MSE cost functions as MLEs with direct propagation of the SA interpolation error through the physical (kinetic) model gradients. The model gradients act as projectors that connect the cost functions for data interpolation and physical derivatives, eliminating the hyperparameter that is often required in inverse PINNs. In addition, we address the ill-conditioning of covariance matrices by using the SVD of the coupling matrix to find uncorrelated subspaces that function as pre-conditioners onto which the residuals and covariance matrices may be projected. The SVD also allows the decomposition of the SA representation into the range (time-variant) and nullspace (time-invariant) components of the stoichiometry matrix, leading to an intrinsic dimensionality reduction at the representation level, where only the range components are mapped by the SAs' NNs. Such a decomposition also allows a closed-form solution for the calibration factors needed to relate semi-quantitative signals of latent adsorbate populations to explicit adsorbate concentrations. The novel rKINNs framework with MLE estimators and the range-nullspace decomposition is applied to a complex case study in heterogeneous catalysis. It efficiently converges to a well-defined solution that is consistent with the results of expensive multi-objective optimization with the prior naive-KINN approach.
Moreover, the rKINN formalism allows for rigorous propagation of error between the model and parameter spaces, enabling the calculation of error bars on parameter estimates that are robust to noise in the data and to the architecture of the underlying NNs. The direct connection between interpolation and physics is achieved through the construction of "physically-coupled" residual probability density functions. This approach provides a general self-regularized framework for inverse PINN problems based on error propagation through MLE forms, applicable to high-dimensional coupled ODEs. Even in the case of highly correlated outputs, SVD (or other orthogonal decomposition methods) can be used to define orthogonal minimal subspaces for the residual projection and SA representation, and for robust evaluation of MLEs as the NN weights are trained. The applications of this framework span other types of reactor models: homogeneous continuous-flow stirred-tank reactor equations, whose nullspace would include information about the flows at the boundary and, hence, may not be time-invariant yet remains separable from the kinetics; and uniform plug-flow reactor differential equations, which require a change in the variable over which the system is integrated. Finally, additional effort would be necessary to extend the concepts devised here to non-uniform transient systems, where partial differential equations would need to be solved.

One limitation of the current rKINN approach is that semi-quantitative signals for all latent species are required. However, it is common in heterogeneous catalysis to lack information about some or all latent species. Additional work is required to address this scenario, where assumptions about the covariance structure of unobservable species will be required. This is complicated by the fact that the uniqueness of the solution to the inverse problem with incomplete information is not guaranteed, and further studies are needed to establish the limits of solutions obtained from inverse problems in the limit of missing information or extreme stiffness. Moreover, in the case study presented it is assumed that the physical model structure is fully known, which is often not the case in heterogeneous catalysis. This can potentially be overcome by coupling mechanism-discovery algorithms [31, 54, 55] with (r)KINNs or other frameworks for solving the inverse problem, or by the use of NODEs that can learn surrogate functions for the system dynamics [56, 57, 58, 59].

Figure 2: Inverse KINNs vs. rKINNs - (a) sensitivity analysis of the naive-KINNs regularization parameter \(\alpha\) with the Pareto-set estimate as a function of interpolation and model MSEs; highlighted tightening (increasing \(\alpha\), open diamond) and relaxation (decreasing \(\alpha\), closed diamond) inflection points; (b) naive-KINNs model-parameter parity plot at the inflection points highlighted in (a); (c) naive-KINNs residuals represented in terms of negative log-likelihoods (\(\alpha\)-colored circles) and rKINNs (open circles) log-likelihoods reconstructed in terms of the final stable point (black-filled circle in the inset of (c)); (d) rKINNs model parameters at the stable point (error bars defined as 2 standard deviations based on the Fisher information estimated through Hessian analysis with algorithmic differentiation). One epoch is defined as 100 parameter-update iterations.
## 4 Methods

### Probability density function

We assume that variations about \(\mathbf{x}\) and the incumbent parameters, \(\mathbf{p}\), in Eq. (2.2.2) are random variables that follow independent multivariate normal distributions: \[\begin{split}\delta\mathbf{x}\in\hat{\boldsymbol{\varepsilon}}_{\mathbf{x}}&\sim\mathcal{N}(0,\boldsymbol{\Sigma}_{\mathbf{x}})\\ \delta\mathbf{p}\in\hat{\boldsymbol{\varepsilon}}_{\mathbf{p}}&\sim\mathcal{N}(0,\boldsymbol{\Sigma}_{\mathbf{p}})\end{split} \tag{4.1.1}\] With these assumptions, we can show that the variations about \(\hat{\boldsymbol{\varepsilon}}_{\dot{\mathbf{x}}_{i}}\) also comprise a multivariate normal distribution, whose covariance matrix can be estimated by the outer product of Eq. (2.2.2) at timepoint \(i\). Neglecting \(\mathcal{O}(\boldsymbol{\varepsilon}^{2})\) terms and assuming orthogonality between \(\boldsymbol{\varepsilon}_{\mathbf{x}}\) and \(\boldsymbol{\varepsilon}_{\mathbf{p}}\) yields: \[\begin{split}\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}}&=\{\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}}|\mathbf{x}_{i},\dot{\mathbf{x}}_{i},\mathbf{p}\}=\langle\delta\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}\otimes\delta\boldsymbol{\varepsilon}_{\dot{\mathbf{x}}_{i}}\rangle\\&=\langle(-\partial_{\mathbf{x}}f_{i}\,\boldsymbol{\varepsilon}_{\mathbf{x}}-\partial_{\mathbf{p}}f_{i}\,\boldsymbol{\varepsilon}_{\mathbf{p}})\otimes(-\partial_{\mathbf{x}}f_{i}\,\boldsymbol{\varepsilon}_{\mathbf{x}}-\partial_{\mathbf{p}}f_{i}\,\boldsymbol{\varepsilon}_{\mathbf{p}})\rangle\\&=\partial_{\mathbf{x}}f_{i}\,\langle\boldsymbol{\varepsilon}_{\mathbf{x}}\otimes\boldsymbol{\varepsilon}_{\mathbf{x}}\rangle\,(\partial_{\mathbf{x}}f_{i})^{T}+\partial_{\mathbf{p}}f_{i}\,\langle\boldsymbol{\varepsilon}_{\mathbf{p}}\otimes\boldsymbol{\varepsilon}_{\mathbf{p}}\rangle\,(\partial_{\mathbf{p}}f_{i})^{T}\quad\text{for }\boldsymbol{\varepsilon}_{\mathbf{x}}\perp\boldsymbol{\varepsilon}_{\mathbf{p}}\\&=\partial_{\mathbf{x}}f_{i}\,\boldsymbol{\Sigma}_{\mathbf{x}}\,(\partial_{\mathbf{x}}f_{i})^{T}+\partial_{\mathbf{p}}f_{i}\,\boldsymbol{\Sigma}_{\mathbf{p}}\,(\partial_{\mathbf{p}}f_{i})^{T}\\&=\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{x}}+\boldsymbol{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{p}}\end{split} \tag{4.1.2}\] where the arguments of the physical model \(f\) are suppressed for brevity, and \(f_{i}=f(\mathbf{x}_{i},\mathbf{p})\).

Figure 3: KINNs results: synthetic data (solid markers), surrogate approximator (open markers), down-sampled to every two data points, and numerical integral with initial conditions estimated from the SA at \(t=0\) for the bulk phase (a and d) and adsorbates (b and e). Parity plots of standardized derivatives for all chemical species (c and f) from the kinetic model, \(f(\mathbf{x})\), and from algorithmic differentiation of the surrogate approximator (\(\dot{\mathbf{x}}\)). Physical-model and SA derivatives were standardized based on the mean standard deviation per state to facilitate the visualization of their correlation.

This implies that the perturbations to the model residual can also be considered as a random variable that follows a multivariate normal
distribution: \[\delta\epsilon_{\dot{\mathbf{x}}_{i}}\in\hat{\boldsymbol{\varepsilon}}_{\dot{\mathbf{x}}_{i}}\sim\mathcal{N}(0,\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}) \tag{4.1.3}\] Notably, the covariance matrix for the underlying model-parameter error distribution is generally unavailable, since the true parameters, \(\mathbf{p}\), are unknown. However, Eq. (2.2.2) can be rearranged to provide a least-squares approximation for \(\epsilon_{\mathbf{p}}\), where the currently evaluated state and derivative errors, \(\epsilon_{\mathbf{x}}\) and \(\epsilon_{\dot{\mathbf{x}}}\), are considered as samples drawn from the underlying random variables defined in Eq. (4.1.1) and Eq. (4.1.3), and the representation contribution in \(\partial_{\mathbf{x}}\hat{\mathbf{x}}\) is neglected since model-parameter errors should not depend on the SA: \[\epsilon_{\mathbf{p}_{i}}=\left(\partial_{\mathbf{p}}f_{i}^{T}\partial_{\mathbf{p}}f_{i}\right)^{-1}\partial_{\mathbf{p}}f_{i}^{T}\left(\epsilon_{\dot{\mathbf{x}}_{i}}-\partial_{\mathbf{x}}f_{i}\epsilon_{\mathbf{x}_{i}}\right) \tag{4.1.4}\] We assume that the covariance of the data interpolation can be obtained from the residuals in \(\mathbf{x}\). Letting \(\epsilon_{\mathbf{x}}\) be a sample of the random variable \(\hat{\boldsymbol{\varepsilon}}_{\mathbf{x}}\) (where \(\epsilon_{\mathbf{x}}\) is defined by Eq. (2.1.2)) yields: \[\mathbf{\Sigma}_{\mathbf{x}}=\langle(\epsilon_{\mathbf{x}}-\langle\epsilon_{\mathbf{x}}\rangle)\otimes(\epsilon_{\mathbf{x}}-\langle\epsilon_{\mathbf{x}}\rangle)\rangle \tag{4.1.5}\] In this case, "recentering" (subtracting the sample mean when estimating sample covariances) is not strictly necessary; however, to avoid numerical instabilities from large initial residuals, intermediate estimated covariances are recentered during KINNs training, which does not affect the final results given that the fundamental structural assumption is that \(\langle\epsilon_{\mathbf{x}}\rangle\rightarrow\mathbf{0}\) as the optimization proceeds. Notably, Eq. (4.1.4) yields a time-dependent parameter error, \(\epsilon_{\mathbf{p}_{i}}\), based on the time dependence of \(\epsilon_{\mathbf{x}}\) and \(\epsilon_{\dot{\mathbf{x}}}\), and is hence different from the true time-independent parameter error, \(\epsilon_{\mathbf{p}}^{*}\) (Eq. (2.1.5)). However, since we treat \(\epsilon_{\mathbf{x}}\) and \(\epsilon_{\dot{\mathbf{x}}}\) as independent samples from underlying multivariate normal distributions, the interpretation is that \(\epsilon_{\mathbf{p}_{i}}\) is also a sample from \(\hat{\boldsymbol{\varepsilon}}_{\mathbf{p}}\) (Eq. (4.1.1)) that obeys the structure imposed by Eq. (4.1.4). Thus, the model-parameter covariance matrix can be estimated as \(\mathbf{\Sigma}_{\mathbf{p}}=\langle(\epsilon_{\mathbf{p}}-\langle\epsilon_{\mathbf{p}}\rangle)\otimes(\epsilon_{\mathbf{p}}-\langle\epsilon_{\mathbf{p}}\rangle)\rangle\), with \(\langle\epsilon_{\mathbf{p}}\rangle\rightarrow\mathbf{0}\) as the optimization reaches a critical point.

### Covariance inversion stabilization

In the MLE formulation, a point of attention is that the data and model-parameter covariance structures are often unknown, requiring either a priori assumptions or the utilization of methods for covariance inference from the data itself [60, 61] or from sample residuals [62, 63, 64]. In this work, we assume a lack of prior knowledge and, thus, that the interpolation error is homoscedastic. The precision matrices are then updated at each epoch throughout the optimization.
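As a concrete illustration of Eqs. (4.1.2), (4.1.4) and (4.1.5), the sketch below propagates the interpolation covariance through the model gradients and builds the parameter-error samples by least squares. It is a minimal NumPy sketch under the stated assumptions (recentered residuals, \(\boldsymbol{\varepsilon}_{\mathbf{x}}\perp\boldsymbol{\varepsilon}_{\mathbf{p}}\)); the function names `dfdx_i`, `dfdp_i` and the array shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_cov(eps):
    """Recentered sample covariance <(e - <e>) x (e - <e>)>, Eq. (4.1.5).
    eps: (n_timepoints, dim) array of residual samples."""
    d = eps - eps.mean(axis=0)
    return d.T @ d / len(eps)

def propagate_cov(dfdx_i, dfdp_i, Sigma_x, Sigma_p):
    """Eq. (4.1.2): Sigma_xdot_i = J_x Sigma_x J_x^T + J_p Sigma_p J_p^T."""
    return dfdx_i @ Sigma_x @ dfdx_i.T + dfdp_i @ Sigma_p @ dfdp_i.T

def eps_p_sample(dfdx_i, dfdp_i, eps_x_i, eps_xdot_i):
    """Eq. (4.1.4): least-squares parameter-error sample at timepoint i."""
    rhs = eps_xdot_i - dfdx_i @ eps_x_i
    return np.linalg.lstsq(dfdp_i, rhs, rcond=None)[0]
```

At each covariance update, `sample_cov` would be applied to the interpolation residuals, `eps_p_sample` evaluated at every timepoint to estimate \(\mathbf{\Sigma}_{\mathbf{p}}\), and `propagate_cov` would then yield the per-timepoint \(\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}\) whose inverse enters the MLE loss.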
To stabilize successive covariance inversions and account for the fact that the residuals are not initially centered, the magnitude of the mean residual is added as uncorrelated additive noise to the covariances, Eq. (4.2.1), which is similar to the strategy adopted to stabilize Gaussian-process regression [65]. As \(\langle\epsilon_{\mathbf{x}}\rangle\to 0\) at the critical points of \(\ell_{\mathrm{t}}\), this approach does not affect the final solution of the inverse problem. \[\mathbf{\Sigma}_{\mathbf{x}/\dot{\mathbf{x}}}\leftarrow\mathbf{\Sigma}_{\mathbf{x}/\dot{\mathbf{x}}}+\mathrm{diag}(|\langle\epsilon_{\mathbf{x}}\rangle|) \tag{4.2.1}\] As the model converges, this stabilization term shrinks, and it can be removed prior to error estimation to provide more accurate error estimates.

### Neural-network representation of heterogeneous systems

Heterogeneous chemical systems comprise unbound (gas-phase) species together with adsorbed molecules and intermediates. Unbound (observable) species and adsorbate or intermediate (latent) species are represented by individual NNs, \(\mathbf{\hat{x}}_{\mathbf{o}}\) and \(\mathbf{\hat{x}}_{\mathbf{*}}\), respectively, where a normalization operator \(\mathrm{C}_{\mathrm{N}}\) is introduced to enforce normalization of the surface species (the DAE constraint) by construction, i.e., \(0\leq\mathbf{x}_{\mathbf{*}}=\mathrm{C}_{\mathbf{N}}[\mathbf{\hat{x}}_{\mathbf{*}}]\leq 1\) and \(|\mathbf{x}_{\mathbf{*}}|_{1}=1\ \forall\ \mathbf{\hat{x}}_{\mathbf{*}}\in\mathbb{R}^{\mathrm{dim}(\mathbf{x}_{\mathbf{*}})-1}\). In this work, we have reformulated the normalization operator \(\mathrm{C}_{\mathbf{N}}\) from its previous trigonometric description to a simpler structure in terms of products of logistic functions that enforces bijection (Supplementary Material, S1). \[\mathbf{x}=\begin{bmatrix}\mathbf{\hat{x}}_{\mathbf{o}}\\ \mathrm{C}_{\mathbf{N}}[\mathbf{\hat{x}}_{\mathbf{*}}]\end{bmatrix} \tag{4.3.1}\]

### KINNs architecture

In this work, derivatives are obtained through algorithmic differentiation (AD) with just-in-time (JIT) compilation using the JAX library [66, 67]. The code necessary for reproducing the results of this work can be found on GitHub ([https://github.com/gusmaogabriels/kinn/tree/rkinns](https://github.com/gusmaogabriels/kinn/tree/rkinns)). The MLE formulation was used to build the cost function minimized for learning the ODE solutions with rKINNs. The JAX library provided the routines for forward AD, from which the gradients with respect to the SAs' parameters were obtained; both SA training and model fitting were performed with the _Adam_ optimization algorithm [68]. For both KINNs and rKINNs, the underlying SA NNs consist of three hidden layers, each with 20 neurons, \(\{20_{\times 3}\}\), and corresponding activation functions \(\{tanh,\,swish,\,tanh\}\).

### Data Generation

Two sets of initial conditions (ICs) were used to generate synthetic data, built to represent scenarios of initially high and low concentrations of reactants and products: IC1, \(\left\{x_{A},x_{B},x_{C},x_{*}\right\}_{t_{0}}=\left\{0.6,0.4,0.0,1.0\right\}\), and IC2, \(\left\{x_{A},x_{B},x_{C},x_{*}\right\}_{t_{0}}=\left\{0.2,0.3,0.5,1.0\right\}\), respectively. Additive homoscedastic noise with \(\sigma=0.025\) was added to the generated data.
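A minimal sketch of this data-generation step is given below, assuming the mass-action right-hand side `f` from the earlier sketch. It uses the LSODA method through SciPy's `solve_ivp`, logarithmically spaced sampling times, and additive Gaussian noise as described in the text; the rate constants are taken from Table 1, while the integration time span is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Forward and reverse rate constants for the seven steps (Table 1).
kf = np.array([20., 24., 16., 640., 160., 560., 640.])
kr = np.array([8., 12., 40., 960., 80., 160., 240.])

# IC1: {x_A, x_B, x_C, x_*} = {0.6, 0.4, 0.0, 1.0}; adsorbates start empty.
x0 = np.array([0.6, 0.4, 0.0, 1.0, 0., 0., 0., 0., 0., 0.])

# 100 timepoints sampled in log-space to resolve the fast initial transient;
# the end time here is an assumed placeholder.
t_eval = np.logspace(-4, 0, 100)

sol = solve_ivp(lambda t, x: f(x, kf, kr), (t_eval[0], t_eval[-1]), x0,
                method="LSODA", t_eval=t_eval, rtol=1e-8, atol=1e-10)

# Additive homoscedastic noise, sigma = 0.025.
x_tilde = sol.y.T + rng.normal(scale=0.025, size=sol.y.T.shape)
```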
Signals for surface coverages were re-normalized to simulate unknown calibration factors, although it is assumed that semi-quantitative signals for all surface states are known. Numerical integrations for synthetic data generation were carried out with the stiff/non-stiff algorithm LSODA from the FORTRAN ODEPACK [69, 70]. The forward and reverse rate constants conveyed in Table 1 were chosen to span over two orders of magnitude, as is typical of reaction networks in heterogeneous catalysis. A total of 100 time points were sampled in logarithmic space, i.e., more points were sampled at the beginning of the experiments, where the derivatives exhibit higher variance.

### Latent states

In many complex dynamical systems the species can be decomposed into measurable states and "latent" states that cannot be directly observed. This is the case in heterogeneous catalysis, where unbound gas-phase species are typically observable and denoted by \(\mathbf{c_{o}}\), while \(\mathbf{c_{\star}}\) denotes the concentrations of the latent variables, i.e., adsorbed and intermediate surface species. In this case, the range and nullspace bases can be partitioned as: \[\mathbf{U}=\left[\begin{bmatrix}\mathbf{U_{o}}\\ \mathbf{U_{\star}}\end{bmatrix}^{\mathrm{R}}\quad\begin{bmatrix}\mathbf{U_{o}}\\ \mathbf{U_{\star}}\end{bmatrix}^{\mathrm{N}}\right] \tag{4.6.1}\] This leads to the partitioned concentration vectors in Eq. (4.6.2). \[\begin{bmatrix}\mathbf{c_{o}}\\ \mathbf{c_{\star}}\end{bmatrix}_{(t)}=\begin{bmatrix}\mathbf{c_{o}}\\ \mathbf{c_{\star}}\end{bmatrix}^{\mathrm{R}}_{(t)}+\begin{bmatrix}\mathbf{c_{o}}\\ \mathbf{c_{\star}}\end{bmatrix}^{\mathrm{N}} \tag{4.6.2}\] Latent dynamics are characterized by incomplete information about the evolution of a subset of states, which can be partially "hidden" or indirectly inferable through the calibration of intensity measurements in conjunction with known conservation laws. For example, in heterogeneous catalysis, operando analytical methods, such as temperature-programmed reaction spectroscopy, may be utilized to track catalyst surface composition over time-on-stream [71]. The raw spectroscopic signal, \(\mathbf{y}\), is linearly correlated with the latent states \(\mathbf{c_{\star}}\), as in Eq. (4.6.3), with calibration factors, \(\boldsymbol{\gamma}\), based on the conservation of the total number of active sites. \[\mathbf{c_{\star}}=\mathbf{y_{\star}}\circ\boldsymbol{\gamma},\quad\mathrm{s.t.}\ \mathbf{1}^{T}\mathbf{c_{\star}}=1 \tag{4.6.3}\] In section S3, we use the nullspace equations to derive a closed-form solution for the calibration factors \(\boldsymbol{\gamma}\) in the scenario where the \(\mathbf{y_{\star}}\) signal can be retrieved.

### Robust KINNs - MLE & Range-Nullspace decomposition

In Eq. (2.1.1), the underlying ODE can be decomposed into its components in the nullspace (\(\mathrm{N}(\mathbf{M})\)) and range (\(\mathrm{R}(\mathbf{M})\)), and the SVD provides a basis for the representation of the states and their derivatives in those spaces. Estimated variances can be propagated and separated into nullspace and range components. Such a decomposition eliminates linearly dependent terms, creates denser covariance matrices, and mitigates the propagation of numerical errors to the precision matrices due to ill-conditioned covariance-matrix inversions. The SA representation is constructed to structurally separate the orthogonal components of the time-independent (nullspace) and time-dependent (range) solutions, Fig. 4.
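The range/nullspace machinery of Eqs. (2.3.1)-(2.3.7) reduces to a few lines of linear algebra. The sketch below, a minimal NumPy illustration rather than the paper's code, splits \(\mathbf{U}\) by a singular-value cutoff and projects concentration trajectories onto \(\mathbf{z}^{\mathrm{R}}\) and \(\mathbf{z}^{\mathrm{N}}\); the tolerance value is an assumption.

```python
import numpy as np

def range_nullspace(M, tol=1e-10):
    """SVD of the stoichiometry matrix, Eq. (2.3.1); split U into a basis
    for the range of M (U_R) and for the nullspace of M^T (U_N)."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))        # numerical rank
    return U[:, :r], U[:, r:]       # U_R, U_N

def project(c, U_R, U_N):
    """Eqs. (2.3.4)-(2.3.5): subspace projections of concentrations.
    c: (n_timepoints, n_species) trajectory; returns z_R(t) and z_N."""
    z_R = c @ U_R                   # time-dependent range coordinates
    z_N = c @ U_N                   # time-invariant up to noise, Eq. (2.3.7)
    return z_R, z_N

# Example with the stoichiometry matrix M from the earlier sketch:
# U_R, U_N = range_nullspace(M)    # here U_R has 7 columns and U_N has 3
# z_R, z_N = project(x_tilde, U_R, U_N)
```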
By combining Eqs. (2.3.3) to (2.3.5) and Eq. (4.6.1), the SA is built as a function of the nullspace and range coverage solutions, i.e., \(\mathrm{C_{\mathbf{N}}[\cdot]}\) in Eq. (4.7.1) and Eq. (4.7.2), respectively.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
 & \multicolumn{7}{c}{Rate constants (steps 1-7 of Eq. (2.4.1))} \\
\cline{2-8}
Type & \(k_{1}\) & \(k_{2}\) & \(k_{3}\) & \(k_{4}\) & \(k_{5}\) & \(k_{6}\) & \(k_{7}\) \\
\hline
forward & \(20\) & \(24\) & \(16\) & \(640\) & \(160\) & \(560\) & \(640\) \\
reverse & \(8\) & \(12\) & \(40\) & \(960\) & \(80\) & \(160\) & \(240\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Forward and reverse reaction rate constants used to generate the synthetic data for the complex reaction mechanism (_dcs_).

The time-invariant solution \(\mathbf{z}^{\mathrm{N}}\) consists of the sum of a solution in the convex hull of \((\mathbf{U}_{\bullet}^{\mathrm{N}})^{T}\), since \(\mathrm{C}_{\mathrm{N}}[\mathbf{\hat{x}}_{\bullet}^{\mathrm{N}}]\) implies a convex-affine combination of its basis vectors, translated by solutions in the positive linear span of \((\mathbf{U}_{\bullet}^{\mathrm{N}})^{T}\), which is parameterized by \(\mathbf{\hat{x}}_{\mathbf{o}}^{\mathrm{N}}\) through \(\mathbf{U}_{\bullet}^{\mathrm{N}:\mathrm{N}}\). A visual representation of how all the subspaces are interconnected is shown in Fig. 4. \[\mathbf{z}^{\mathrm{N}}=\left(\mathbf{U}_{\bullet}^{\mathrm{N}}\right)^{+}\mathrm{C}_{\mathrm{N}}[\mathbf{\hat{x}}_{\bullet}^{\mathrm{N}}]+\mathbf{U}_{\bullet}^{\mathrm{N}:\mathrm{N}}\mathbf{\hat{x}}_{\mathbf{o}}^{\mathrm{N}} \tag{4.7.1}\] The range-component solution depends on the nullspace estimates for the latent variables, \(\mathbf{x}_{\bullet}^{\mathrm{N}}=\mathbf{U}_{\bullet}^{\mathrm{N}}\mathbf{z}^{\mathrm{N}}\). Similarly to the nullspace solution, to enforce normalization, \(\mathbf{x}_{\bullet}=\mathrm{C}_{\mathrm{N}}[\mathbf{\hat{x}}_{\bullet(t)}^{\mathrm{R}}]\) is first calculated, and variations in the latent-species solutions \(\mathbf{z}^{\mathrm{R}}\) are parameterized by \(\mathbf{U}_{\bullet}^{\mathrm{R}:\mathrm{N}}\in\mathrm{N}[\mathbf{U}_{\bullet}^{\mathrm{R}}]\), as in Eq. (4.7.2). Importantly, components in \(\mathbf{U}_{\bullet}^{\mathrm{N}:\mathrm{N}}\) and \(\mathbf{U}_{\bullet}^{\mathrm{R}:\mathrm{N}}\) are orthogonal to the latent-space solution and, therefore, represent variations in the gas phase that do not affect the dynamics of the surface phase. The latent-space solution is computed as \[\mathbf{z}^{\mathrm{R}}=\left(\mathbf{U}_{\bullet}^{\mathrm{R}}\right)^{+}\left(\mathrm{C}_{\mathrm{N}}[\mathbf{\hat{x}}_{\bullet(t)}^{\mathrm{R}}]-\mathbf{x}_{\bullet}^{\mathrm{N}}\right)+\mathbf{U}_{\bullet}^{\mathrm{R}:\mathrm{N}}\mathbf{\hat{x}}_{\mathbf{o}\left(t\right)}^{\mathrm{R}} \tag{4.7.2}\] where \(\mathbf{\hat{x}}_{\mathbf{o}\left(t\right)}^{\mathrm{R}}\in\mathbb{R}^{d}\) for \(d=\mathrm{nullity}[\mathbf{U}_{\bullet}^{\mathrm{R}}]\), and \(\mathbf{\hat{x}}_{\mathbf{o}\left(t\right)}^{\mathrm{N}}\in\mathbb{R}^{d}\) for \(d=\mathrm{nullity}[\mathbf{U}_{\bullet}^{\mathrm{N}}]\). With such a structure, all the variance is constrained to \(\mathrm{R}[\mathbf{M}]\), which is parameterized by the time-invariant nullspace. We refer to the reduced optimization-problem formulation as rKINN (Robust KINN), which is conveyed in Eq.
(4.7.3), where \(\mathbf{\varepsilon}_{\mathbf{z}}^{\mathrm{R}}=(\mathbf{U}^{\mathrm{R}})^{T}\mathbf{\tilde{x}}-\mathbf{z}^{\mathrm{R}}=(\mathbf{U}^{\mathrm{R}})^{T}\mathbf{\varepsilon}_{\mathbf{x}}\) and \(\mathbf{\varepsilon}_{\dot{\mathbf{z}}}^{\mathrm{R}}=(\mathbf{U}^{\mathrm{R}})^{T}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}\) are the state and state-derivative errors projected onto the \(\mathrm{R}[\mathbf{M}]\) subspace, respectively. This leads to a revised definition of the optimization problem, \[\begin{split}\min_{\mathbf{\omega}_{t}}&\quad\ell_{\mathrm{t}}=\frac{1}{n}\sum_{i}^{n}\left((\mathbf{\varepsilon}_{\dot{\mathbf{z}}_{i}}^{\mathrm{R}})^{T}\mathbf{\Omega}_{\dot{\mathbf{z}}_{i}}^{\mathrm{R}}\mathbf{\varepsilon}_{\dot{\mathbf{z}}_{i}}^{\mathrm{R}}+(\mathbf{\varepsilon}_{\mathbf{z}_{i}}^{\mathrm{R}})^{T}\mathbf{\Omega}_{\mathbf{z}}^{\mathrm{R}}\mathbf{\varepsilon}_{\mathbf{z}_{i}}^{\mathrm{R}}\right)\\ \text{s.t.}&\quad\mathbf{\Omega}_{\dot{\mathbf{z}}_{i}}^{\mathrm{R}}=\left((\mathbf{U}^{\mathrm{R}})^{T}\left(\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{x}}+\mathbf{\Sigma}_{\dot{\mathbf{x}}_{i}}^{\mathbf{p}}\right)\mathbf{U}^{\mathrm{R}}\right)^{-1}\\ &\quad\mathbf{\Omega}_{\mathbf{z}}^{\mathrm{R}}=\left((\mathbf{U}^{\mathrm{R}})^{T}\mathbf{\Sigma}_{\mathbf{x}}\mathbf{U}^{\mathrm{R}}\right)^{-1}\\ &\quad\mathbf{p}\in\mathbb{R}^{\mathrm{dim}(\mathbf{p})},\ \mathbf{\omega}_{\mathbf{s}}\in\mathbb{R}^{\mathrm{dim}(\mathbf{\omega}_{\mathbf{s}})}\end{split} \tag{4.7.3}\] which is solved with Algorithm 1.

```
Input: Observed data (t, x~), stoichiometry matrix M, max. iterations n_max,
       no. of epochs n_e, tolerance tol
 1: Build NN; initialize SA parameters w_s and kinetic model parameters p
 2: Evaluate precision matrices Omega_z^R and Omega_zdot^R
 3: while epoch < n_e and l_t(w_t) > tol do
 4:     while iter < n_max do
 5:         Predict states x(t, w_s) from SA over t
 6:         Predict state derivatives xdot(t, w_s) from SA over t
 7:         Compute projected errors eps_z^R and eps_zdot^R
 8:         Compute objective function l_t(w_t) over t
 9:         Compute gradients of l_t(w_t) with AD: d l_t / d(w_s, p)
10:         Update SA parameters w_s and kinetic model parameters p using Adam
11:     Update df/dx(x, p) and df/dp(x, p)
12:     Update Sigma_xdot^x and Sigma_xdot^p
13:     Update Sigma_x and Sigma_xdot with uncorrelated additive noise |<eps_x>|
14:     Update projected covariance matrices Sigma_z^R and Sigma_zdot_i^R
15:     Evaluate precision matrices Omega_z^R and Omega_zdot_i^R
Output: Kinetic model parameters p
```
**Algorithm 1** Robust KINNs
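For concreteness, a minimal JAX sketch of the projected MLE objective in Eq. (4.7.3) follows. It assumes a surrogate `sa(params, t)` returning the state vector at a scalar time, precomputed projected precision matrices, and a kinetic model `f(x, p)` such as the one sketched in section 2.4.1; it is an illustration of the loss structure, not the released implementation.

```python
import jax
import jax.numpy as jnp

def mle_loss(omega_t, t, x_tilde, U_R, Omega_z, Omega_zdot, sa, f):
    """Projected MLE objective, Eq. (4.7.3).
    omega_t = (w_s, p); Omega_zdot: (n, r, r) per-timepoint precision blocks."""
    w_s, p = omega_t
    x = jax.vmap(lambda ti: sa(w_s, ti))(t)                   # SA states, (n, dim_x)
    x_dot = jax.vmap(jax.jacfwd(lambda ti: sa(w_s, ti)))(t)   # SA time derivatives
    eps_z = (x_tilde - x) @ U_R                               # state errors in R[M]
    eps_zdot = (x_dot - jax.vmap(f, in_axes=(0, None))(x, p)) @ U_R
    quad_x = jnp.einsum('ni,ij,nj->n', eps_z, Omega_z, eps_z)
    quad_xdot = jnp.einsum('ni,nij,nj->n', eps_zdot, Omega_zdot, eps_zdot)
    return jnp.mean(quad_xdot + quad_x)

# Gradients w.r.t. all trainable parameters for an Adam update (w_s as a pytree):
# grads = jax.grad(mle_loss)(omega_t, t, x_tilde, U_R, Omega_z, Omega_zdot, sa, f)
```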
## Acknowledgements

G.S.G. is a PhD research fellow with the International Business Machines Corporation (IBM) since mid-2021 through the IBM Global University Program Awards. Acknowledgement is made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research. This work was supported by the U.S. Department of Energy (USDOE), Office of Energy Efficiency and Renewable Energy (EERE), Industrial Efficiency and Decarbonization Office (IEDO) Next Generation R&D Project DE-FOA-0002252-1775 under contract no. DE-AC07-05ID14517. We acknowledge Dr. John Kitchin for his formative work on the utilization of neural networks as bases for the solution of simple coupled forward kinetics ODEs (kitchingroup.cheme.cmu.edu/blog). The authors are grateful to Dr. M. Ross Kunz and Dr. Adam Yonge for the enlightening discussions involving the maximum-likelihood formulation of PINN problems and uncertainty quantification, to Ms. Sushree Jagriti Sahoo for her comments on the algebraic decomposition of ODE coupling matrices, and to Mr. Dingqi Nai for general assessments of the application of KINNs to experimental data. G.S.G. especially thanks Dr. William Bradley and Mr. Zachary Kilwein for the extensive debate-like discussions on the application of PINNs and NODEs to general Chemical Engineering problems.

Figure 4: Surrogate approximator structure based on the range-nullspace decomposition using SVD. Initial values and inner fluxes are contained in the time-invariant nullspace component (bottom). Time variations are represented by the fully-connected feed-forward neural network (top) with \(k\) hidden layers, which includes both unbound, \(\hat{\mathbf{x}}_{\mathbf{o}}^{\mathrm{R}}\), and bound, \(\hat{\mathbf{x}}_{\bullet}^{\mathrm{R}}\), latent representations that, through \(C_{\mathrm{N}}\) and linear operations, lead to \(\mathbf{z}^{\mathrm{R}}\). The MLE framework is constructed around the lower-dimensional representation \(\mathbf{z}^{\mathrm{R}}\), assuming that \(\mathbf{z}^{\mathrm{N}}\) can be obtained directly from the measured data.

### Conflict of Interest

The authors declare that no conflicts of interest exist.
### Nomenclature

**Ordinary Differential Equation**

- \(\mathbf{M}\): Stoichiometry matrix
- \(x\): State variable
- \(\mathbf{x}\): State variable array
- \(\mathbf{\dot{x}}\): State variable time derivatives
- \(\mathbf{c}\): Concentrations vector
- \(\mathbf{r}\): Rate-of-reaction vector
- \(\mathbf{\Sigma}\): Covariance matrix
- \(\mathbf{\Omega}\): Precision matrix
- \(\mathbf{\tilde{y}}\): Unscaled measurements
- \(\mathbf{\tilde{x}}\): Scaled measurements
- \(\mathbf{\gamma}\): Calibration factor

**Surrogate Models**

- \(\phi\): Activation function
- \(\mathrm{C_{N}}\): Normalization-constraint operator
- \(\ell\): Objective function; \(\ell_{\mathrm{t}},\ \ell_{\mathbf{x}},\ \ell_{\dot{\mathbf{x}}}\): total, data, and model likelihoods, respectively

**Residuals**

- \(\boldsymbol{\varepsilon}\): Residuals array
- \(\boldsymbol{\varepsilon}_{i}\): Residuals array at time point \(t_{i}\)
- \(\boldsymbol{\hat{\varepsilon}}\): Residuals random variable
- \(\delta\): Sensitivity through total derivative

**Parameters**

- \(\mathbf{p}\): Kinetic model parameters
- \(\boldsymbol{\omega_{\mathrm{s}}}\): Surrogate model parameters

**Dimensionality Reduction**

- \(\mathbf{U^{R}}\): Orthogonal basis for the range of \(\mathbf{M}\), i.e., \(\mathrm{col}[\mathbf{M}]\)
- \(\mathbf{U^{N}}\): Orthogonal basis for the nullspace of \(\mathbf{M}^{T}\), i.e., \(\mathrm{null}[\mathbf{M}^{T}]\)
- \(\mathbf{V^{R}}\): Orthogonal basis for the range of \(\mathbf{M}^{T}\), i.e., \(\mathrm{row}[\mathbf{M}]\)
- \(\mathbf{V^{N}}\): Orthogonal basis for the nullspace of \(\mathbf{M}\), i.e., \(\mathrm{null}[\mathbf{M}]\)
- \(\mathbf{S}\): Diagonal matrix with the singular values of \(\mathbf{M}\)

**Abbreviations**

- SVD: Singular-value Decomposition

## References

* [1] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational Physics_, 378:686-707, 2 2019. * [2] Dehao Liu and Yan Wang. Multi-Fidelity Physics-Constrained Neural Network and Its Application in Materials Modeling. _Journal of Mechanical Design_, 141(12), 12 2019. * [3] Shengze Cai, Zhicheng Wang, Sifan Wang, Paris Perdikaris, and George Em Karniadakis. Physics-informed neural networks for heat transfer problems. _Journal of Heat Transfer_, 143(6), 6 2021. * [4] Siddhartha Mishra and Roberto Molinaro. Physics informed neural networks for simulating radiative transfer. _Journal of Quantitative Spectroscopy and Radiative Transfer_, 270:107705, 8 2021. * [5] Ameya D. Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. _Computer Methods in Applied Mechanics and Engineering_, 365:113028, 6 2020. * [6] Shengze Cai, Zhiping Mao, Zhicheng Wang, Minglang Yin, and George Em Karniadakis. Physics-informed neural networks (PINNs) for fluid mechanics: a review. _Acta Mechanica Sinica/Lixue Xuebao_, 37(12):1727-1738, 12 2021. * [7] Guofei Pang, Lu Lu, and George Em Karniadakis. fPINNs: Fractional Physics-Informed Neural Networks. _SIAM Journal on Scientific Computing_, 41(4):A2603-A2626, 1 2019. * [8] Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, and Steven G. Johnson. Physics-Informed Neural Networks with Hard Constraints for Inverse Design. _SIAM Journal on Scientific Computing_, 43(6):B1105-B1132, 1 2021. * [9] Siddhartha Mishra and Roberto Molinaro.
Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs. _IMA Journal of Numerical Analysis_, 42(2):981-1022, 4 2022. * [10] Jeremy Yu, Lu Lu, Xuhui Meng, and George Em Karniadakis. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. _Computer Methods in Applied Mechanics and Engineering_, 393:114823, 4 2022. * [11] Xiaoli Chen, Liu Yang, Jinqiao Duan, and George Em Karniadakis. Solving Inverse Stochastic Problems from Discrete Particle Observations Using the Fokker-Planck Equation and Physics-Informed Neural Networks. _SIAM Journal on Scientific Computing_, 43(3):B811-B830, 1 2021. * [12] Ravi G. Patel, Indu Manickam, Nathaniel A. Trask, Mitchell A. Wood, Myoungkyu Lee, Ignacio Tomas, and Eric C. Cyr. Thermodynamically consistent physics-informed neural networks for hyperbolic systems. _Journal of Computational Physics_, 449:110754, 1 2022. * [13] Suryanarayana Maddu, Dominik Sturm, Christian L. Muller, and Ivo F. Sbalzarini. Inverse Dirichlet weighting enables reliable training of physics informed neural networks. _Machine Learning: Science and Technology_, 3(1):015026, 2 2022. * [14] Fumio Hayashi. Finite-Sample Properties of OLS. In _Econometrics_, pages 47-59. 2000. * [15] Han Gao, Matthew J. Zahr, and Jian-Xun Wang. Physics-informed graph neural Galerkin networks: A unified framework for solving PDE-governed forward and inverse problems. _Computer Methods in Applied Mechanics and Engineering_, 390:114502, 2 2022. * [16] Xuhui Meng, Zhen Li, Dongkun Zhang, and George Em Karniadakis. PPINN: Parareal physics-informed neural network for time-dependent PDEs. _Computer Methods in Applied Mechanics and Engineering_, 370, 10 2020. * [17] Liu Yang, Xuhui Meng, and George Em Karniadakis. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. _Journal of Computational Physics_, 425:109913, 1 2021. * [18] Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Lujan, Valentin Sulzer, Ashutosh Bharambe, Nand Vinchhi, Kaushik Balakrishnan, Devesh Upadhyay, and Chris Rackauckas. NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations. 7 2021. * [19] Son Ich Ngo and Young-Il Lim. Solution and Parameter Identification of a Fixed-Bed Reactor Model for Catalytic CO2 Methanation Using Physics-Informed Neural Networks. _Catalysts_, 11(11):1304, 10 2021. * [20] Alireza Yazdani, Lu Lu, Maziar Raissi, and George Em Karniadakis. Systems biology informed deep learning for inferring parameters and hidden dynamics. 2020. * [21] Jiequn Han, Arnulf Jentzen, and E. Weinan. Solving high-dimensional partial differential equations using deep learning. _Proceedings of the National Academy of Sciences of the United States of America_, 115(34):8505-8510, 8 2018. * [22] J.W. Thybaut and G.B. Marin. Single-Event MicroKinetics: Catalyst design for complex reaction networks. _Journal of Catalysis_, 308:352-362, 12 2013. * [23] Rob J. Berger, Freek Kapteijn, Jacob A. Moulijn, Guy B. Marin, Juray De Wilde, Maria Olea, De Chen, Anders Holmen, Luca Lietti, Enrico Tronconi, and Yves Schuurman. Dynamic methods for catalytic kinetics. _Applied Catalysis A: General_, 342(1-2):3-28, 6 2008. * [24] Patricia Rubert-Nason, Manos Mavrikakis, Christos T.
Maravelias, Lars C. Grabow, and Lorenz T. Biegler. Advanced solution methods for microkinetic models of catalytic reactions: A methanol synthesis case study. _AIChE Journal_, 60(4):1336-1346, 4 2014. * [25] Srinivas Rangarajan, Christos T. Maravelias, and Manos Mavrikakis. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems. _Journal of Physical Chemistry C_, 121(46):25847-25863, 11 2017. * [26] Guy B. Marin, Vladimir V. Galvita, and Gregory S. Yablonsky. Kinetics of chemical processes: From molecular to industrial scale. _Journal of Catalysis_, 404:745-759, 12 2021. * [27] Zachary T Wilson and Nikolaos V Sahinidis. The ALAMO approach to machine learning. _Computers and Chemical Engineering_, 106:785-795, 2017. * [28] Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. _Proceedings of the National Academy of Sciences of the United States of America_, 113(15):3932-3937, 4 2016. * [29] - Methodological Frameworks. 1 2023. * [30] J. G. Verwer. Gauss-Seidel Iteration for Stiff ODEs from Chemical Kinetics. _SIAM Journal on Scientific Computing_, 15(5):1243-1250, 1994. * [31] Qin Wu, Talin Avanesian, Xiaohui Qu, and Hubertus Van Dam. PolyODENet: Deriving mass-action rate equations from incomplete transient kinetics data. _The Journal of Chemical Physics_, 157(16):164801, 10 2022. * [32] Gabriel S. Gusmao, Adhika P. Retnanto, Shashwati C. da Cunha, and Andrew J. Medford. Kinetics-informed neural networks. _Catalysis Today_, 4 2022. * [33] L T Biegler, J J Damiano, and G E Blau. Nonlinear parameter estimation: A case study comparison. _AIChE Journal_, 32(1):29-45, 1986. * [34] A. J. Wathen. Preconditioning. _Acta Numerica_, 24:329-376, 5 2015. * [35] J. A. Dumesic. The Microkinetics of heterogeneous catalysis. _ACS professional reference book_, 1993. * [36] D. Constales, G. S. Yablonsky, and G. B. Marin. The C-matrix: Augmentation and reduction in the analysis of chemical composition and structure. _Chemical Engineering Science_, 110:164-168, 2014. * [37] Evgeniy A Redekop, Gregory S Yablonsky, Denis Constales, Palghat A Ramachandran, John T Gleaves, and Guy B Marin. Elucidating complex catalytic mechanisms based on transient pulse-response kinetic data. _Chemical Engineering Science_, 110:20-30, 5 2014. * [38] George Em Karniadakis, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. _Nature Reviews Physics_, 3(6):422-440, 6 2021. * [39] A.J. Meade and A.A. Fernandez. The numerical solution of linear ordinary differential equations by feedforward neural networks. _Mathematical and Computer Modelling_, 19(12):1-25, 6 1994. * [40] A.J. Meade and A.A. Fernandez. Solution of nonlinear ordinary differential equations by feedforward neural networks. _Mathematical and Computer Modelling_, 20(9):19-44, 11 1994. * [41] I. E. Lagaris, Aristidis Likas, and D. I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. _IEEE Transactions on Neural Networks_, 9(5):987-1000, 5 1998. * [42] G. Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of Control, Signals, and Systems_, 2(4):303-314, 12 1989. * [43] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks.
_Neural Networks_, 3(5):551-560, 1 1990. * [44] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. _Neural Networks_, 4(2):251-257, 1991. * [45] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. _Neural Networks_, 6(6):861-867, 1 1993. * [46] Allan Pinkus. Approximation theory of the MLP model in neural networks. pages 143-195, 1999. * [47] Nam Mai-Duy and Thanh Tran-Cong. Approximation of function and its derivatives using radial basis function networks. _Applied Mathematical Modelling_, 27(3):197-220, 3 2003. * [48] Philip W Livermore. Galerkin orthogonal polynomials. 2009. * [49] Philip W Livermore and Glenn R Ierley. Quasi-\(L^{p}\) norm orthogonal Galerkin expansions in sums of Jacobi polynomials. 54:533-569, 2010. * [50] Anders Hald. On the history of maximum likelihood in relation to inverse probability and least squares. _Statistical Science_, 14(2):214-222, 5 1999. * [51] In Jae Myung. Tutorial on maximum likelihood estimation. _Journal of Mathematical Psychology_, 47(1):90-100, 2 2003. * [52] F. H. MacDougall. Thermodynamic Theory of Affinity. By Th. De Donder and Pierre Van Rysselberghe. _The Journal of Physical Chemistry_, 41(5), 1937. * [53] Evgeniy A. Redekop, Gregory S. Yablonsky, Denis Constales, Palghat A. Ramachandran, Cathryn Pherigo, and John T. Gleaves. The Y-Procedure methodology for the interpretation of transient kinetic data: Analysis of irreversible adsorption. _Chemical Engineering Science_, 66(24):6441-6452, 12 2011. * [54] Weiqi Ji and Sili Deng. Autonomous Discovery of Unknown Reaction Pathways from Data by Chemical Reaction Neural Network. _The Journal of Physical Chemistry A_, 125(4):1082-1092, 2 2021. * [55] Weiqi Ji, Franz Richter, Michael J. Gollner, and Sili Deng. Autonomous kinetic modeling of biomass pyrolysis using chemical reaction neural networks. _Combustion and Flame_, 240:111992, 6 2022. * [56] Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. Universal Differential Equations for Scientific Machine Learning. 1 2020. * [57] William Bradley, Gabriel S. Gusmao, Andrew J. Medford, and Fani Boukouvala. Training Stiff Dynamic Process Models via Neural Differential Equations. volume 49, pages 1741-1746. Elsevier, 1 2022. * [58] Jun Yin, Jiali Li, Iftekhar A Karimi, and Xiaonan Wang. Generalized reactor neural ODE for dynamic reaction process modeling with physical interpretability. _Chemical Engineering Journal_, 452:139487, 2023. * [59] Felix A. Doppel and Martin Votsmeier. Efficient machine learning based surrogate models for surface kinetics by approximating the rates of the rate-determining steps. _Chemical Engineering Science_, 262:117964, 11 2022. * [60] Murali R. Rajamani and James B. Rawlings. Estimation of the disturbance structure from data using semidefinite programming and optimal weighting. _Automatica_, 45(1):142-148, 1 2009. * [61] Raf Roelant, Denis Constales, Gregory S. Yablonsky, Roger Van Keer, Michael A. Rude, and Guy B. Marin. Noise in temporal analysis of products (TAP) pulse responses. _Catalysis Today_, 121(3-4):269-281, 3 2007. * [62] Peter J. Rousseeuw and Katrien Van Driessen. A Fast Algorithm for the Minimum Covariance Determinant Estimator. _Technometrics_, 41(3):212-223, 8 1999. * [63] Olivier Ledoit and Michael Wolf.
A well-conditioned estimator for large-dimensional covariance matrices. _Journal of Multivariate Analysis_, 88(2):365-411, 2 2004. * [64] Yilun Chen, Ami Wiesel, Yonina C. Eldar, and Alfred O. Hero. Shrinkage Algorithms for MMSE Covariance Estimation. _IEEE Transactions on Signal Processing_, 58(10):5016-5029, 10 2010. * [65] Sivaram Ambikasaran, Daniel Foreman-Mackey, Leslie Greengard, David W. Hogg, and Michael O'Neil. Fast Direct Methods for Gaussian Processes. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 38(2):252-265, 2 2016. * [66] Roy Frostig, Matthew Johnson, and Chris Leary. Compiling machine learning programs via high-level tracing. In _SysML_, 3 2018. * [67] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. JAX: composable transformations of Python+NumPy programs. [http://github.com/google/jax](http://github.com/google/jax), 2018. * [68] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR - Conference Track Proceedings_, 12 2015. * [69] Alan C. Hindmarsh. LSODE and LSODI, two new initial value ordinary differential equation solvers. _ACM SIGNUM Newsletter_, 15(4):10-11, 12 1980. * [70] Linda Petzold. Automatic Selection of Methods for Solving Stiff and Nonstiff Systems of Ordinary Differential Equations. _SIAM Journal on Scientific and Statistical Computing_, 4(1):136-148, 3 1983. * [71] Robert J. Madix. The application of flash desorption spectroscopy to chemical reactions on surfaces: Temperature programmed reaction spectroscopy. _Critical Reviews in Solid State and Materials Sciences_, 7(2):143-152, 1978.

## S1 Normalization-Constraint Operator

The normalization operator \(\mathrm{C}_{\mathrm{N}}\) introduced in section 4.3 maps the unconstrained NN outputs \(\hat{x}_{j}\) onto the simplex through products of logistic functions \(\sigma(\cdot)\):

\[x_{i}(\mathbf{\omega_{s}},t) =(1-\sigma\left(\hat{x}_{i}(\mathbf{\omega_{s}},t)\right))\prod_{j<i}\sigma\left(\hat{x}_{j}(\mathbf{\omega_{s}},t)\right)\ \forall\ i<p;\ i,j\in\mathbb{N}\] (S1.2) \[x_{p}(\mathbf{\omega_{s}},t) =\prod_{j<p}\sigma\left(\hat{x}_{j}(\mathbf{\omega_{s}},t)\right)\]

## S2 MLE Optimality Conditions

The optimality condition for the cost function of the minimization problem in eq. (2.2.6) can be represented as expected values in eq. (S2.1).
\[\nabla\left\langle\mathbf{\varepsilon}_{\dot{\mathbf{x}}}^{T}\mathbf{\Omega}_{\dot{\mathbf{x}}}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}+\mathbf{\varepsilon}_{\mathbf{x}}^{T}\mathbf{\Omega}_{\mathbf{x}}\mathbf{\varepsilon}_{\mathbf{x}}\right\rangle=\mathbf{0}\] (S2.1) which can be expanded over the independent variables \(\mathbf{x}\) and \(\mathbf{p}\) as in \[\nabla\left\langle\mathbf{\varepsilon}_{\dot{\mathbf{x}}}^{T}\mathbf{\Omega}_{\dot{\mathbf{x}}}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}+\mathbf{\varepsilon}_{\mathbf{x}}^{T}\mathbf{\Omega}_{\mathbf{x}}\mathbf{\varepsilon}_{\mathbf{x}}\right\rangle =\left[\partial_{\mathbf{x}},\partial_{\dot{\mathbf{x}}},\partial_{\mathbf{p}}\right]\left\langle\mathbf{\varepsilon}_{\dot{\mathbf{x}}}^{T}\mathbf{\Omega}_{\dot{\mathbf{x}}}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}+\mathbf{\varepsilon}_{\mathbf{x}}^{T}\mathbf{\Omega}_{\mathbf{x}}\mathbf{\varepsilon}_{\mathbf{x}}\right\rangle\] (S2.2) \[\propto\left\langle\left[\partial_{\mathbf{x}}\hat{\mathbf{x}}+\mathbf{I}-\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p}),\ -\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})\right]^{T}\mathbf{\Omega}_{\dot{\mathbf{x}}}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}\right\rangle-\left\langle\left[\partial_{\dot{\mathbf{x}}}\mathbf{x},\ \mathbf{I}\right]^{T}\mathbf{\Omega}_{\mathbf{x}}\mathbf{\varepsilon}_{\mathbf{x}}\right\rangle\] (S2.3) \[=-\left\langle\begin{bmatrix}\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})^{T}-(\partial_{\mathbf{x}}\hat{\mathbf{x}})^{T}&\mathbf{I}\\ \partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})^{T}&\mathbf{0}\\ -\mathbf{I}&(\partial_{\dot{\mathbf{x}}}\mathbf{x})^{T}\end{bmatrix}\begin{bmatrix}\mathbf{\Omega}_{\dot{\mathbf{x}}}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Omega}_{\mathbf{x}}\end{bmatrix}\begin{bmatrix}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}\\ \mathbf{\varepsilon}_{\mathbf{x}}\end{bmatrix}\right\rangle=\mathbf{0}\] (S2.4) With the assumption that \(\partial_{\mathbf{x}}\hat{\mathbf{x}}\) and \(\partial_{\dot{\mathbf{x}}}\mathbf{x}=(\partial_{\mathbf{x}}\hat{\mathbf{x}})^{-1}\) do not contribute to structural variance, the condition defined in eq. (S2.4) simplifies to \[\left\langle\begin{bmatrix}\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})^{T}&\mathbf{I}\\ \partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})^{T}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{\Omega}_{\dot{\mathbf{x}}}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Omega}_{\mathbf{x}}\end{bmatrix}\begin{bmatrix}\mathbf{\varepsilon}_{\dot{\mathbf{x}}}\\ \mathbf{\varepsilon}_{\mathbf{x}}\end{bmatrix}\right\rangle=\mathbf{0}\] (S2.5) From the final result in eq. (4.1.2), eq. (S2.5) can be expressed as a function of \(\mathbf{\Omega}_{\mathbf{x}}\) and \(\mathbf{\Omega}_{\mathbf{p}}\) only, by expressing \(\partial_{\mathbf{x}}f(\mathbf{x}_{i},\mathbf{p})\) and \(\partial_{\mathbf{p}}f(\mathbf{x}_{i},\mathbf{p})\) in terms of their singular-value decompositions. \[\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p}) =\mathbf{U}_{\mathbf{x}}\mathbf{S}_{\mathbf{x}}\mathbf{V}_{\mathbf{x}}^{T}\] (S2.6) \[\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p}) =\mathbf{U}_{\mathbf{p}}\mathbf{S}_{\mathbf{p}}\mathbf{V}_{\mathbf{p}}^{T}\] \[\text{with }\mathbf{U}^{T}\mathbf{U} =\mathbf{I},\ \mathbf{V}^{T}\mathbf{V}=\mathbf{I}\] \[\text{and }\mathbf{S} =\mathrm{diag}(\mathbf{s}),\ \text{the singular values}\] which leads to eq. (S2.7) for the general case where \(\dim(\mathbf{p})>\dim(\mathbf{x})\).
\[\left\langle\begin{bmatrix}\mathbf{\Omega}_{\mathbf{x}}\left(\left(\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})^{T}\right)^{+}\mathbf{\varepsilon}_{\mathbf{\hat{x}}}+\mathbf{\varepsilon}_{\mathbf{x}}\right)\\ \mathbf{V}_{\mathbf{p}}\mathbf{V}_{\mathbf{p}}^{T}\mathbf{\Omega}_{\mathbf{p}}\left(\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})^{T}\right)^{+}\mathbf{\varepsilon}_{\mathbf{\hat{x}}}\end{bmatrix}\right\rangle=\mathbf{0}\] (S2.7)

In which \(\mathbf{A}^{+}\) is the Moore-Penrose pseudo-inverse of matrix \(\mathbf{A}\) [2, 3]. For \(\mathbf{\Omega}_{\mathbf{x}}\) full-rank, the optimality conditions simplify to:

\[\left\langle\mathbf{\varepsilon}_{\mathbf{x}}\right\rangle =\mathbf{0}\] (S2.8)
\[\left\langle\left(\partial_{\mathbf{x}}f(\mathbf{x},\mathbf{p})^{T}\right)^{+}\mathbf{\varepsilon}_{\mathbf{\hat{x}}}\right\rangle =\mathbf{0}\] (S2.9)
\[\left\langle\mathbf{V}_{\mathbf{p}}\mathbf{V}_{\mathbf{p}}^{T}\mathbf{\Omega}_{\mathbf{p}}\left(\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})^{T}\right)^{+}\mathbf{\varepsilon}_{\mathbf{\hat{x}}}\right\rangle =\mathbf{0}\] (S2.10)

Condition eq. (S2.8) follows naturally from the assumption of centered interpolation errors. Condition eq. (S2.9) expresses the expected orthogonality between the derivative residuals and the range of \(\partial_{\mathbf{x}}f\), and condition eq. (S2.10) relates the derivative residuals to the gradients of the physical model with respect to the incumbent model parameters. Importantly, \(\mathbf{V}_{\mathbf{p}}\) is also a function of \(\mathbf{x}\) and \(\mathbf{p}\), since it comprises an orthogonal basis for the row space of \(\partial_{\mathbf{p}}f(\mathbf{x},\mathbf{p})\).

## S3 Latent state reconstruction

With a mean-field approximation, and assuming the spectroscopic data include the dynamics of all kinetically relevant reaction intermediates in an NPT-system, calibration coefficients can be retrieved by minimizing the projection of the raw signal onto the nullspace of the coupling matrix. With the nomenclature in eq. (2.1.2), i.e., taking \(\tilde{\mathbf{x}}\) as observed values, the underlying material balance can be expressed in terms of the nullspace equations as in eq. (S3.1).
\[(\mathbf{U}_{\mathbf{o}}^{\mathrm{N}})^{T}\left(\tilde{\mathbf{x}}_{\mathbf{o}(t_{i})}-\tilde{\mathbf{x}}_{\mathbf{o}(t_{j})}\right)+(\mathbf{U}_{\star}^{\mathrm{N}})^{T}\left(\tilde{\mathbf{x}}_{\star(t_{i})}-\tilde{\mathbf{x}}_{\star(t_{j})}\right)=\varepsilon_{\mathbf{x}_{ij}}^{\mathrm{N}}\] (S3.1)

In the absence of statistical noise, the nullspace errors, \(\varepsilon_{\mathbf{x}_{ij}}^{\mathrm{N}}\,\forall\,i,j\), would be zero. Latent states can be represented in terms of the raw signal, \(\mathbf{y}\in\mathbb{R}_{+}^{n}\), and a calibration factor per chemical intermediate, \(\boldsymbol{\gamma}\in\mathbb{R}_{+}^{\mathrm{dim}(\star)}\), as in eq. (S3.2).

\[\mathbf{x}_{\star}=\mathbf{y}_{\star}\circ\boldsymbol{\gamma}\] (S3.2)

Coverage concentrations also need to be normalized, which can be expressed in terms of a parameterized set of solutions for \(\boldsymbol{\gamma}\), as in the following minimization problem, eq. (S3.3).

\[\boldsymbol{\gamma}(\boldsymbol{\beta})=\boldsymbol{\gamma}^{\mathrm{R}}+\boldsymbol{\gamma}^{\mathrm{N}}(\boldsymbol{\beta})=\left(\sum_{i}^{n}\tilde{\mathbf{y}}_{\star(t_{i})}\tilde{\mathbf{y}}_{\star(t_{i})}^{T}\right)^{+}\sum_{i}^{n}\tilde{\mathbf{y}}_{\star(t_{i})}+\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}}\boldsymbol{\beta}\] (S3.3)

Where \(\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}}\) is a basis for the nullspace of \(\sum_{i}^{n}\tilde{\mathbf{y}}_{\star(t_{i})}\tilde{\mathbf{y}}_{\star(t_{i})}^{T}\), which can be computed from its eigendecomposition. By replacing eq. (S3.2) with the partial solution of eq. (S3.3) in eq. (S3.1), the reduced form in eq. (S3.4) is obtained. In realistic scenarios, statistical noise will make the eigenvalues associated with the nullspace of \(\sum_{i}^{n}\tilde{\mathbf{y}}_{\star(t_{i})}\tilde{\mathbf{y}}_{\star(t_{i})}^{T}\) small but non-zero; therefore, a cutoff criterion must be established to obtain the nullspace basis by selecting the associated eigenvectors.

\[\begin{split}&(\mathbf{U}_{\mathbf{o}}^{\mathrm{N}})^{T}\left(\tilde{\mathbf{x}}_{\mathbf{o}(t_{i})}-\tilde{\mathbf{x}}_{\mathbf{o}(t_{j})}\right)+(\mathbf{U}_{\star}^{\mathrm{N}})^{T}\operatorname{diag}\left(\tilde{\mathbf{y}}_{\star(t_{i})}-\tilde{\mathbf{y}}_{\star(t_{j})}\right)\boldsymbol{\gamma}(\boldsymbol{\beta})\\ &=\mathbf{v}_{\mathbf{o}_{ij}}+\mathbf{V}_{\mathbf{\ast}_{ij}}\boldsymbol{\gamma}(\boldsymbol{\beta})\end{split}\] (S3.4)

The minimization of the MSE of eq. (S3.4) in terms of \(\boldsymbol{\beta}\), considering only the asymmetrical pairs \((i,j)\,:\,j>i\), leads to the closed-form solution in eq. (S3.5):

\[\boldsymbol{\gamma}=-\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}}\left((\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}})^{T}\left(\sum_{i}^{n}\sum_{j}^{i-1}\mathbf{V}_{\mathbf{\ast}_{ij}}^{T}\mathbf{V}_{\mathbf{\ast}_{ij}}\right)\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}}\right)^{+}(\mathbf{U}_{\boldsymbol{\gamma}}^{\mathrm{N}})^{T}\sum_{i}^{n}\sum_{j}^{i-1}\mathbf{V}_{\mathbf{\ast}_{ij}}^{T}\left(\mathbf{v}_{\mathbf{o}_{ij}}+\mathbf{V}_{\mathbf{\ast}_{ij}}\boldsymbol{\gamma}^{\mathrm{R}}\right)+\boldsymbol{\gamma}^{\mathrm{R}}\] (S3.5)

With \(\boldsymbol{\beta}\) from eq. (S3.5) replaced in eq. (S3.3), multiplicative or calibration factors \(\boldsymbol{\gamma}\) can be found that satisfy the chemical system nullspace condition in eq. (2.3.5) under coverage normalization.
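The following is a minimal NumPy sketch of eqs. (S3.3)-(S3.5), not the original implementation: the array names (`y_ads` for the adsorbate signals \(\tilde{\mathbf{y}}_{\star(t_{i})}\), `V_o` and `V_s` for the pair terms \(\mathbf{v}_{\mathbf{o}_{ij}}\) and \(\mathbf{V}_{\mathbf{\ast}_{ij}}\) of eq. (S3.4)) and the relative eigenvalue cutoff are illustrative assumptions.

```python
import numpy as np

def calibration_factors(y_ads, V_o, V_s, cutoff=1e-8):
    # Particular (range) solution gamma^R of the normalization constraint, eq. (S3.3)
    G = sum(np.outer(y, y) for y in y_ads)           # sum_i y_*(t_i) y_*(t_i)^T
    gamma_R = np.linalg.pinv(G) @ y_ads.sum(axis=0)
    # Nullspace basis U_gamma^N of G via eigendecomposition with a noise cutoff
    w, U = np.linalg.eigh(G)
    U_N = U[:, w < cutoff * w.max()]
    # Closed-form beta minimizing the MSE of eq. (S3.4) over asymmetric pairs, eq. (S3.5)
    A = sum(Vs.T @ Vs for Vs in V_s)
    b = sum(Vs.T @ (vo + Vs @ gamma_R) for vo, Vs in zip(V_o, V_s))
    beta = -np.linalg.pinv(U_N.T @ A @ U_N) @ (U_N.T @ b)
    return gamma_R + U_N @ beta                      # gamma(beta), eq. (S3.3)
```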
## S4 Model Parameters Conditional Covariance - Calibration Factor

The calibration factors, \(\boldsymbol{\gamma}\), can be included in the rKINNs objective function by expressing the error terms for interpolation and model in the original space, eqs. (2.1.2) and (2.1.3). We first note that, for reconstructed/calibrated data, the SAs have predefined nullspace invariances, \(\mathbf{z}^{\mathrm{N}}\), which are estimated directly from the unscaled data, \(\tilde{\mathbf{y}}\), and the optimal calibration factors, \(\boldsymbol{\gamma}^{\star}\), eq. (S3.5), such that

\[\begin{split}\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathbf{s}},\boldsymbol{\gamma})&=\mathbf{U}^{\mathrm{R}}\mathbf{z}^{\mathrm{R}}(t_{i},\boldsymbol{\omega}_{\mathbf{s}})+\mathbf{U}^{\mathrm{N}}\mathbf{z}^{\mathrm{N}}(\boldsymbol{\gamma})\\ &=\mathbf{U}^{\mathrm{R}}\mathbf{z}^{\mathrm{R}}(t_{i},\boldsymbol{\omega}_{\mathbf{s}})+\mathbf{U}^{\mathrm{N}}\mathbf{z}^{\mathrm{N}}\\ &=\mathbf{U}^{\mathrm{R}}\mathbf{z}^{\mathrm{R}}(t_{i},\boldsymbol{\omega}_{\mathbf{s}})+\mathbf{U}^{\mathrm{N}}\frac{1}{n}\sum_{j=1}^{n}\left((\mathbf{U}^{\mathrm{N}})^{T}\tilde{\mathbf{x}}_{j}\odot\boldsymbol{\gamma}\right)\\ &=\mathbf{U}^{\mathrm{R}}\mathbf{z}^{\mathrm{R}}(t_{i},\boldsymbol{\omega}_{\mathbf{s}})+\frac{1}{n}\sum_{j=1}^{n}\left(\mathbf{U}^{\mathrm{N}}(\mathbf{U}^{\mathrm{N}})^{T}\operatorname{diag}(\tilde{\mathbf{x}}_{j})\right)\boldsymbol{\gamma}\end{split}\] (S4.1)

Such a formulation conveys the effect of \(\boldsymbol{\gamma}\) on the nullspace solution, i.e., linear shifts in the data and, consequently, in the SAs. However, to allow the propagation of errors to the range equations, the error expression needs to be formulated based on the unscaled mapping \(\mathbf{y}=\mathbf{x}\odot(\boldsymbol{\gamma}^{\star})^{-1}\).
\[\begin{split}\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}(\boldsymbol{\omega}_{\mathbf{s}},\boldsymbol{\gamma})&=\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathbf{s}},\boldsymbol{\gamma})-\tilde{\mathbf{y}}_{i}\odot\boldsymbol{\gamma}\\ \therefore\,\,\boldsymbol{\varepsilon}_{\mathbf{y}_{i}}=\boldsymbol{\varepsilon}_{\mathbf{x}_{i}}(\boldsymbol{\omega}_{\mathbf{s}},\boldsymbol{\gamma})\odot(\boldsymbol{\gamma}^{\star})^{-1}&=\mathbf{x}(t_{i},\boldsymbol{\omega}_{\mathbf{s}},\boldsymbol{\gamma})\odot(\boldsymbol{\gamma}^{\star})^{-1}-\tilde{\mathbf{y}}_{i}\end{split}\] (S4.2)

Let \(\mathbf{y}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})=\mathbf{x}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})\odot(\mathbf{\gamma}^{*})^{-1}\); then \(\mathbf{\varepsilon}_{\mathbf{x}_{i}}\) can be expressed in terms of \(\mathbf{y}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})\) as follows,

\[\begin{split}\mathbf{\varepsilon}_{\mathbf{x}_{i}}&=\mathbf{y}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})\odot\mathbf{\gamma}-\tilde{\mathbf{y}}_{i}\odot\mathbf{\gamma}\\ \mathbf{\varepsilon}_{\mathbf{\dot{x}_{i}}}&=\dot{\mathbf{x}}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})-\mathbf{M}\left(\mathbf{k}(\theta,\mathbf{p})\circ\psi(\mathbf{x}(t_{i},\mathbf{\omega}_{\mathbf{s}}))\right)\\ &=\dot{\mathbf{y}}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})\odot\mathbf{\gamma}-\mathbf{M}\left(\mathbf{k}(\theta,\mathbf{p})\circ\psi(\mathbf{y}(t_{i},\mathbf{\omega}_{\mathbf{s}},\mathbf{\gamma})\odot\mathbf{\gamma})\right)\end{split}\] (S4.3)

With \(\mathbf{\varepsilon}_{\mathbf{x}_{i}}^{\text{R/N}}=(\mathbf{U}^{\text{R/N}})^{T}\mathbf{\varepsilon}_{\mathbf{x}_{i}}\) and \(\mathbf{\varepsilon}_{\mathbf{z}_{i}}^{\text{R/N}}=(\mathbf{U}^{\text{R/N}})^{T}\mathbf{\varepsilon}_{\mathbf{\dot{x}_{i}}}\), the negative log-likelihood after range-nullspace decomposition can be evaluated as in eq. (4.7.3), and the Fisher information matrix, \(\mathcal{I}\), which is equivalent to the Hessian matrix of the MLE loss, \(\ell_{\text{t}}\), can be estimated as a function of \(\mathbf{p}\) and \(\mathbf{\gamma}\).

\[\mathcal{I}(\mathbf{p},\mathbf{\gamma})=\nabla_{[\mathbf{p}\;\mathbf{\gamma}]}^{2}\ell_{\text{t}}=\begin{bmatrix}\partial_{\mathbf{p}}^{2}\ell_{\text{t}}&\partial_{\mathbf{p}}\partial_{\mathbf{\gamma}}\ell_{\text{t}}\\ \partial_{\mathbf{\gamma}}\partial_{\mathbf{p}}\ell_{\text{t}}&\partial_{\mathbf{\gamma}}^{2}\ell_{\text{t}}\end{bmatrix}\] (S4.4)

The corresponding asymptotic covariance matrix is given by:

\[\mathbf{\Sigma}_{[\mathbf{p}\;\mathbf{\gamma}]}=\frac{1}{n}(\mathcal{I}(\mathbf{p},\mathbf{\gamma}))^{-1}=\begin{bmatrix}\mathbf{\Sigma}_{\mathbf{p}\mathbf{p}}&\mathbf{\Sigma}_{\mathbf{p}\mathbf{\gamma}}\\ \mathbf{\Sigma}_{\mathbf{\gamma}\mathbf{p}}&\mathbf{\Sigma}_{\mathbf{\gamma}\mathbf{\gamma}}\end{bmatrix}\] (S4.5)

Where the diagonal terms represent the covariance matrices for model parameters and calibration factors, and the off-diagonal terms are the covariances between them.
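As a mechanical illustration of eqs. (S4.4)-(S4.5), the following hedged sketch uses JAX automatic differentiation; the callable `neg_log_lik` (an implementation of \(\ell_{\text{t}}\)) and the joint parameter layout `theta = [p, gamma]` are assumptions, not the authors' code.

```python
import jax
import jax.numpy as jnp

def asymptotic_covariance(neg_log_lik, p_hat, gamma_hat, n):
    # Fisher information as the Hessian of the negative log-likelihood, eq. (S4.4)
    theta_hat = jnp.concatenate([p_hat, gamma_hat])
    fisher = jax.hessian(neg_log_lik)(theta_hat)
    # Asymptotic covariance and its block partition, eq. (S4.5)
    sigma = jnp.linalg.inv(fisher) / n
    k = p_hat.size
    return {"pp": sigma[:k, :k], "pg": sigma[:k, k:],
            "gp": sigma[k:, :k], "gg": sigma[k:, k:]}
```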
The conditional covariance for the kinetic model parameters, \(\mathbf{\Sigma}_{\mathbf{p}}\), is evaluated as:

\[\mathbf{\Sigma}_{\mathbf{p}}=\mathbf{\Sigma}_{\mathbf{p}\mathbf{p}}-\mathbf{\Sigma}_{\mathbf{p}\mathbf{\gamma}}\left(\mathbf{\Sigma}_{\mathbf{\gamma}\mathbf{\gamma}}\right)^{-1}\mathbf{\Sigma}_{\mathbf{\gamma}\mathbf{p}}\] (S4.6)

This can be verified by a Taylor expansion of the residuals, assuming \(\boldsymbol{\gamma}\) and \(\mathbf{p}\) are uncorrelated and normally distributed, and using the Cholesky decomposition of the covariance matrices to generate perturbations around the optimal values.

## S5 Reaction Network Type _dcs_

The shape of the stoichiometry matrices reflects the elementary reactions of the system, such that rows are associated with the mass balance of each species and columns are associated with reaction rates, therefore having the same size as the kinetic parameter vector \(\mathbf{k}\). The \(s\)-type elementary steps embody elementary reactions exclusively between surface intermediates. In the following, \(D*\), \(E*\) and \(F*\) are reaction intermediates for which there are no stable desorbed counterparts, and which therefore cannot be directly measured or inferred from ordinary analytical chemistry techniques.

\[A+*\ \underset{k_{2}}{\overset{k_{1}}{\rightleftharpoons}}\ A*\] ( **d.1**) \[B+*\ \underset{k_{4}}{\overset{k_{3}}{\rightleftharpoons}}\ B*\] ( **d.2**) \[C+*\ \underset{k_{6}}{\overset{k_{5}}{\rightleftharpoons}}\ C*\] ( **d.3**) \[A*+*\ \underset{k_{8}}{\overset{k_{7}}{\rightleftharpoons}}\ 2D*\] ( **c.1**) \[B*+*\ \underset{k_{10}}{\overset{k_{9}}{\rightleftharpoons}}\ 2E*\] ( **c.2**) \[D*+E*\ \underset{k_{12}}{\overset{k_{11}}{\rightleftharpoons}}\ F*+*\] ( **s.1**) \[F*+E*\ \underset{k_{14}}{\overset{k_{13}}{\rightleftharpoons}}\ C*+*\] ( **c.3**)

\[\ln(\mathbf{k}_{0})^{T}=[3.00\text{ }2.08\text{ }3.18\text{ }2.48\text{ }2.77\text{ }3.69\text{ }6.46\text{ }6.87\text{ }5.08\text{ }4.38\text{ }6.46\text{ }5.48\text{ }6.33\text{ }5.08]\]

\[M=\left[\begin{array}{cccccccccccccc}-1&1&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&-1&1&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&-1&1&0&0&0&0&0&0&0&0\\ 1&-1&0&0&0&0&-1&1&0&0&0&0&0&0\\ 0&0&1&-1&0&0&0&0&-1&1&0&0&0&0\\ 0&0&0&0&1&-1&0&0&0&0&0&0&1&-1\\ 0&0&0&0&0&0&2&-2&0&0&-1&1&0&0\\ 0&0&0&0&0&0&0&0&2&-2&-1&1&-1&1\\ 0&0&0&0&0&0&0&0&0&0&1&-1&-1&1\\ -1&1&-1&1&-1&1&-1&1&-1&1&1&-1&1&-1\end{array}\right]\] (S5.1)

\[\mathbf{x}^{T}=[x_{A}\text{ }x_{B}\text{ }x_{C}\text{ }x_{A*}\text{ }x_{B*}\text{ }x_{C*}\text{ }x_{D*}\text{ }x_{E*}\text{ }x_{F*}\text{ }x_{*}]\]
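As a concrete check of the network above, the following sketch integrates the mass-action microkinetic system \(\dot{\mathbf{x}}=\mathbf{M}(\mathbf{k}\circ\psi(\mathbf{x}))\). The rate monomials \(\psi(\mathbf{x})\) are read off the elementary steps (d.1)-(c.3) and the corrected \(M\) of eq. (S5.1); the initial state and integration horizon are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stoichiometry matrix M of eq. (S5.1); rows follow x = [A B C A* B* C* D* E* F* *],
# columns follow the forward/reverse pairs of steps (d.1)-(d.3), (c.1)-(c.2), (s.1), (c.3).
M = np.array([
    [-1,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  0, -1,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  0, -1,  1,  0,  0,  0,  0,  0,  0,  0,  0],
    [ 1, -1,  0,  0,  0,  0, -1,  1,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  1, -1,  0,  0,  0,  0, -1,  1,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  1, -1,  0,  0,  0,  0,  0,  0,  1, -1],
    [ 0,  0,  0,  0,  0,  0,  2, -2,  0,  0, -1,  1,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  0,  0,  2, -2, -1,  1, -1,  1],
    [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1, -1, -1,  1],
    [-1,  1, -1,  1, -1,  1, -1,  1, -1,  1,  1, -1,  1, -1],
])
# Nominal rate constants from ln(k_0)
k = np.exp([3.00, 2.08, 3.18, 2.48, 2.77, 3.69, 6.46, 6.87,
            5.08, 4.38, 6.46, 5.48, 6.33, 5.08])

def psi(x):
    # Mass-action monomials for the elementary steps (d.1)-(c.3)
    A, B, C, As, Bs, Cs, Ds, Es, Fs, S = x
    return np.array([A * S, As, B * S, Bs, C * S, Cs,      # d.1-d.3
                     As * S, Ds**2, Bs * S, Es**2,         # c.1-c.2
                     Ds * Es, Fs * S, Fs * Es, Cs * S])    # s.1, c.3

def rhs(t, x):
    return M @ (k * psi(x))  # dx/dt = M (k o psi(x))

x0 = np.array([1.0, 1.0, 0.0, 0, 0, 0, 0, 0, 0, 1.0])  # assumed initial state
sol = solve_ivp(rhs, (0.0, 1.0), x0, method="LSODA", rtol=1e-8, atol=1e-10)
```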
2303.05088
Suppression of accidental backgrounds with deep neural networks in the PandaX-II experiment
The PandaX dark matter detection project searches for dark matter particles using the technology of the dual-phase xenon time projection chamber. The low expected rate of the signal events makes the control of backgrounds crucial for the experiment's success. In addition to reducing external and internal backgrounds during the construction and operation of the detector, special techniques are employed to suppress the background events during the data analysis. In this article, we demonstrate the use of deep neural networks (DNNs) for suppressing the accidental backgrounds, as an alternative to the boosted-decision-tree method used in a previous analysis of PandaX-II. A new data preparation approach is proposed to enhance the stability of the machine learning algorithms to be run and ultimately the sensitivity of the final data analysis.
Nasir Shaheed, Xun Chen, Meng Wang
2023-03-09T07:54:07Z
http://arxiv.org/abs/2303.05088v2
# Suppression of accidental backgrounds with deep neural networks in the PandaX-II experiment ###### Abstract The PandaX dark matter detection project searches for dark matter particles using the technology of the dual-phase xenon time projection chamber. The low expected rate of the signal events makes the control of backgrounds crucial for the experiment's success. In addition to reducing external and internal backgrounds during the construction and operation of the detector, special techniques are employed to suppress the background events during the data analysis. In this article, we demonstrate the use of deep neural networks (DNNs) for suppressing the accidental backgrounds, as an alternative to the boosted-decision-tree method used in a previous analysis of PandaX-II. A new data preparation approach is proposed to enhance the stability of the machine learning algorithms to be run and ultimately the sensitivity of the final data analysis. ## 1 Introduction The nature of dark matter in our universe remains one of the most fundamental unresolved questions in physics. The weakly interacting massive particles (WIMPs) proposed by various theories beyond the Standard Model of particle physics are considered a leading candidate for dark matter [1]. In recent decades, numerous experiments have been conducted to detect direct collisions between WIMPs and ordinary matter in deep underground laboratories, steadily shrinking the available parameter space for WIMPs [2; 3]. Among these projects are the PandaX-II [4] and PandaX-4T [5] experiments, located at the China Jinping Laboratory (CJPL) [6; 7; 8], which utilize the technology of a dual-phase xenon time projection chamber (TPC) [9]. Recently, the PandaX-4T experiment has established a more stringent constraint on the spin-independent interactions between WIMPs and nucleons than previous generations of the same type of experiments [10]. Background control is a crucial aspect of experiments searching for dark matter due to the extremely low expected signal rate. The main sources of background are gamma or neutron collisions inside the TPC, which originate from known radioactive sources in the detector material or dissolved sources within the xenon. The PandaX collaboration has made significant efforts to reduce backgrounds from the laboratory, detector materials, and the xenon recirculating pipelines [11; 12; 13; 14]. Nevertheless, accidental background still plays a significant role and must be taken into account during data analysis. The PandaX-II experiment successfully utilized the boosted-decision-tree (BDT) method, a machine learning technique, to suppress accidental background [15]. With the advancement of deep learning technologies, such as deep neural networks (DNNs), these have become valuable tools in various studies of particle physics in recent years [16; 17; 18]. In particular, in the field of deep underground experiments, DNNs have been widely used to discriminate signals from backgrounds [19; 20; 21], reconstruct the energy and position of events [22; 23], and improve the speed of data fitting [24]. In this article, we conduct a study on using DNNs to suppress accidental background in the PandaX-II experiment. In Section 2, we provide a brief overview of the detection principle and accidental backgrounds in PandaX-II. We then discuss data preparation in Section 3.
In Section 4, we present the testing of various DNN architectures for accidental background suppression, as well as a new data preparation method to improve stability. Finally, we summarize our findings and provide an outlook in Section 5. ## 2 Accidental background in PandaX-II The central components of the PandaX-II and PandaX-4T detectors are the dual-phase xenon TPCs. They have a similar structure. A TPC has a cylindrical sensitive volume enclosed by polytetrafluoroethylene (PTFE) reflection panels. A cathode mesh at the TPC's base and a gate grid electrode beneath the liquid xenon surface create the drift field. The gate, in conjunction with the anode mesh above the liquid level, generates an extraction field which extracts electrons from the liquid xenon into the gas xenon. Two arrays of photomultiplier tubes (PMTs) are placed above and below the TPC to detect the scintillation photons generated within the TPC. For more detailed information on the PandaX detectors, refer to references [4] and [10]. To aid in understanding the origin of accidental background, we provide a brief overview of the detection principle of the dual-phase TPC in this article. For a more in-depth explanation, we refer the readers to Ref. [9]. The collisions between incoming particles and target xenon atoms in the dual-phase xenon TPC may produce prompt scintillation photons (\(S1\)) and ionized electrons. The electrons drift along the drift field in the TPC and are extracted into the gaseous region, where they produce delayed photons (\(S2\)) through the process of electroluminescence. The time difference between the \(S1\) and \(S2\) signals can be used to determine the \(z\)-position of the collision. Additionally, the ratio of \(S2/S1\) is an important discriminator between electron recoil (ER) and nuclear recoil (NR) events. NR events, which are the interactions of interest for detecting WIMPs and the neutron backgrounds, are characterized by a lower ratio of \(S2/S1\) compared to ER events, because a large fraction of the recoil energy converts into heat and escapes detection. This allows for the discrimination of WIMPs from most of the background events. In the PandaX-II and PandaX-4T experiments, the majority of backgrounds are caused by gamma or neutron scattering events originating from radioactive isotopes in the detector materials and dissolved radioactive isotopes in liquid xenon, such as \({}^{222}\)Rn, \({}^{85}\)Kr, or \({}^{3}\)H. These backgrounds are controlled through techniques such as material screening and selection, and xenon distillation and purification. Another type of background, known as surface background, is generated by the \(\beta\)-decay of \({}^{210}\)Pb on the inner surface of the TPC, which affects the \(S2\) signal and is concentrated near the edges. This background can be modeled and estimated using a data-driven method. Accidental background refers to events where the \(S1\) and \(S2\) signals are not from the same collision event. Identifying and controlling these backgrounds can be challenging, but doing so is crucial for achieving a robust understanding of the signals in the detector. \(S1\)- or \(S2\)-like signals that are not correlated with any other recorded signals from the same source are referred to as "isolated". During event reconstruction, unrelated isolated \(S1\) and \(S2\) signals may appear in the same drift window, resulting in accidental backgrounds. In Ref. [15], the possible origins of isolated signals are analyzed in detail.
We present a brief overview of them here. The isolated \(S1\) signals may originate from regular scattering events whose corresponding \(S2\) signals are not produced or recorded. They could also be single electron signals that were misidentified as \(S1\)s. Additionally, overlapped dark noises from different photo-multiplier tubes (PMTs) may form \(S1\)-like signals. The isolated \(S2\) signals are produced by the electroluminescent process of electrons in the gas region, similar to regular \(S2\) signals. They can be regular \(S2\) signals without corresponding \(S1\) signals recorded, or overlap with \(S1\) signals in such a way that only the \(S2\) signals are recognized. Additionally, stagnant electrons created by large energy depositions may be randomly released into the gas region, resulting directly in \(S2\)-like signals. Ref. [15] estimated the average rates of isolated \(S1\) and \(S2\) signals, \(\bar{r}_{1}\) and \(\bar{r}_{2}\), using several data-driven methods, and obtained consistent results. The total number of accidental background events was calculated using the equation: \[n_{\mathrm{acc}}=\bar{r}_{1}\cdot\bar{r}_{2}\cdot\Delta t_{w}\cdot T\cdot\epsilon, \tag{1}\] where \(\Delta t_{w}\) is the time window defined by the fiducial volume cut, \(T\) is the duration of the science data run, and \(\epsilon\) is the efficiency of data quality cuts. The final number of accidental background events in PandaX-II is non-trivial, particularly in the region beneath the reference median line of NR events in the plot of \(\log(S2/S1)\) versus \(S1\) from neutron calibration, where the statistics for regular ER backgrounds are low. Fig. 1 shows the signal distributions of the NR and ER calibration events and the accidental background in the PandaX-II Run 11 data set [25] for the reader's reference. Backgrounds in this region can obscure or mimic the true WIMP signals. Suppressing this type of background event is crucial for the sensitivity of the WIMP search. In the PandaX-II experiment, the BDT algorithm was employed to effectively reduce the number of accidental background events [15]. This technique utilizes a set of characteristics specific to the signals to differentiate between accidental background and NR events. Given the importance of events below the NR median line in the search for WIMPs, only samples in that region were used in the BDT training process. The optimized BDT cuts achieved a powerful capability to identify accidental background while maintaining high efficiency for NR and ER signal events. For instance, the number of expected accidental events below the NR median in PandaX-II Run 11 was reduced from 2.93 to 0.77, while the efficiency for NR signal events in the same region was 90.7%. The successful implementation of the BDT technique in PandaX-II motivated us to investigate the use of advanced deep learning techniques for suppressing accidental background.

Figure 1: The signal distributions of NR and ER calibration events (scatter points) on top of the expected accidental background events (background histogram) in Run 11. The reference NR median line is plotted.

## 3 Data preparation In order to implement DNNs for the purpose of suppressing accidental background in the PandaX-II experiment, careful data preparation is crucial. The data samples used in this study are sourced from the PandaX-II Run 11 dataset [25], which was also used in the previous study with BDT.
This allows for consistency in the input variables and allows for a direct comparison of the performance of the two methods. The PandaX experiments take data by digitizing the output voltage of the PMTs into waveforms. In PandaX-II, the digitized samples within the \([-500,500]\)\(\mu\)s window of a global trigger are kept [26]. The raw data are processed following the standard procedure: 1) the region over a given threshold in each recorded waveform segment is marked as a "hit"; 2) hits with tight time correlation are clustered into "signals"; 3) for each signal, related properties are calculated, and the signal is tagged. At the end of data processing, the data are converted to collections of events with signals, with their properties calculated. The important properties of an event include:

* number of S1 signals,
* number of S2 signals,
* start time of the event,
* time difference of each signal to the start of the event,
* index of the maximum \(S1\) before the maximum \(S2\),
* index of the maximum \(S2\),
* summation of extra signals except for the paired maximum \(S1\) and maximum \(S2\),
* energy of the event by combining the paired \(S1\) and \(S2\).

In the search for WIMPs, events featuring a primary \(S1\) and \(S2\) signal are selected for analysis. The horizontal location of an event is determined using various techniques based on the top pattern of the \(S2\) signal, while the vertical position is determined by the temporal separation of the paired \(S1\) and \(S2\) signals. These signals are then corrected according to their position. The final data used for analysis include key characteristics of each signal, which are calculated from the waveform and summarized in Table 1. The aim of this study is to distinguish between accidental background events below the NR median and physical scattering events; therefore, two types of data samples are necessary. Samples of accidental background were generated by randomly pairing isolated \(S1\) and \(S2\) signals identified in the dark matter search data. Since the region below the NR median has sufficient statistics only in the neutron calibration run, the physical scattering event samples are extracted from the neutron calibration runs. Additionally, the DNNs are also expected to correctly classify as many physical events above the NR median as possible, especially the ER events. Therefore, a third dataset is acquired from the related ER calibration runs. The chosen events fall within the defined fiducial volume, meet all the established criteria of the PandaX-II data analysis process, and fall within the signal window: an \(S1\) charge between 3 and 45 PE, a raw \(S2\) charge greater than 100 PE, and a corrected \(S2\) charge smaller than 10,000 PE. For the accidental background, only the events below the NR median are selected. The generated data set of accidental background contains 43,719 events, and the full NR data set contains 10,881 events. The data samples are structured in the ROOT format, a widely used tool in high energy physics experiments [27]. The variables are organized as branches within the TTree structure, allowing for easy implementation of a cut to select NR events below the NR median line during data loading. In order to train the DNN, 80% of the input data are used, 10% of the data are set aside for validation, and the remaining 10% for testing purposes.
This split of the data allows for a thorough evaluation of the performance of the DNN and enables the identification of any potential issues during the training process. ## 4 Deep neural networks The task of identifying accidental background events can be approached as a binary classification problem. Since the number of features in the event data is limited, a Multi-Layer Perceptron (MLP) [28], a special type of artificial neural network, is an appropriate deep learning model for this task.

\begin{table} \begin{tabular}{c l} Name & Description \\ \hline \hline qS1 & raw charge of \(S1\), in the unit of photoelectron (PE) \\ qS2 & raw charge of \(S2\) \\ qS1c & position corrected charge of \(S1\) \\ qS2c & position corrected charge of \(S2\) \\ wS1 & width of the \(S1\) \\ wS2 & width of the \(S2\) \\ wtenS2 & full width of one-tenth maximum of the \(S2\) \\ S1TBA & asymmetry between the charge collected by the top (\(q_{T}\)) and bottom (\(q_{B}\)) array for \(S1\), defined as \((q_{T}-q_{B})/(q_{T}+q_{B})\) \\ S2TBA & asymmetry between the charge collected by the top and bottom array for \(S2\) \\ S2SY1 & the ratio of charge before the maximum height to the total charge for \(S2\), in the raw waveform \\ S2SY2 & the ratio of charge before the maximum height to the total charge for \(S2\), in the smeared waveform \\ S1NPeaks & number of local maximums in \(S1\) \\ S1LargestBCQ & ratio of the largest charge detected by one bottom PMT to the total charge of \(S1\) \\ \hline \end{tabular} \end{table} Table 1: Important properties of the signals used in data analysis.

The basic idea of the artificial neural network is to construct a function \(N\) with many parameters, which maps the input features \(\mathbf{X}\) to the prediction \(\mathbf{y}\), or \(\mathbf{y}=N(\mathbf{X})\). Within the network, there are many layers of computation units called neurons, which accept inputs from neurons in other layers and generate outputs for neurons in the following layers. By minimizing a given loss function \(l(\mathbf{y},\mathbf{Y})\), which takes the prediction \(\mathbf{y}\) and the label \(\mathbf{Y}\) of the data as inputs, the parameters of \(N\) can be adjusted. This process is generally referred to as "training" the function \(N\). The training process is done by backpropagation of errors, where the errors are propagated back through the layers of the network to update the parameters of the function. A trained function is able to fit the training data well and can be used to make predictions by feeding in new data. The MLP contains multiple layers of neurons fully connected in a feedforward manner. The input layer takes in the input features \(\mathbf{X}\), and the output layer produces the prediction \(\mathbf{y}\). In between the input and output layers, there are one or more hidden layers that help to extract complex features from the input data. Once the MLP is trained, it can be used to make predictions by feeding new data into the input layer. In this study, we implemented the MLP models with TensorFlow 2.5 [29]. The event properties listed in Table 1 are used as input features of the MLP. The input data have been rescaled with the min-max normalization. The activation function used in the output layer of the MLP is the "sigmoid" function, which maps the output to a value within the range of \([0,1]\). The value can be utilized to determine if an event is physical or non-physical according to an optimal cut value.
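For orientation, the following is a hedged Keras sketch of such an MLP and of the subsequent cut optimization described in the next paragraph; the hidden-layer sizes and the ReLU activation are illustrative assumptions (the exact configurations belong to the scanned models in Table 2), while the sigmoid output, binary cross entropy, Adam with learning rate 0.001, the early-stopping patience of 20, the 1,500-epoch cap, and the 13 input features of Table 1 follow the text.

```python
import numpy as np
import tensorflow as tf

N_FEATURES = 13  # the signal properties listed in Table 1

def build_mlp(hidden=(32, 16, 16, 12)):  # hidden sizes are illustrative only
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(N_FEATURES,)))
    for n in hidden:
        model.add(tf.keras.layers.Dense(n, activation="relu"))  # assumed activation
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))   # output in [0, 1]
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy")
    return model

# Early stopping on the validation loss with a patience of 20, up to 1,500 epochs:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=1500,
#           callbacks=[tf.keras.callbacks.EarlyStopping(
#               monitor="val_loss", patience=20, restore_best_weights=True)])

def best_cut(scores_sig, scores_bkg, n=1.0):
    # Scan cut values on the sigmoid output and maximize the significance S,
    # with n_s and n_b approximated as equal (see the definition that follows)
    cuts = np.linspace(0.0, 1.0, 1001)
    eff_s = np.array([(scores_sig > c).mean() for c in cuts])
    eff_b = np.array([(scores_bkg > c).mean() for c in cuts])
    denom = np.sqrt((eff_s + eff_b) * n)
    S = np.divide(eff_s * n, denom, out=np.zeros_like(denom), where=denom > 0)
    i = int(np.argmax(S))
    return cuts[i], S[i]
```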
The cut is obtained by maximizing the significance \(S\), the metric used in the previous study, which is defined as \[S=\frac{\epsilon_{s}n_{s}}{\sqrt{\epsilon_{s}n_{s}+\epsilon_{b}n_{b}}}, \tag{1}\] where \(n_{s}\) and \(n_{b}\) are the numbers of NR and accidental background events, respectively, and the \(\epsilon\) factors are the corresponding efficiencies obtained with a certain cut value. To conform with the methodology in the previous study [15], the values of \(n_{s}\) and \(n_{b}\) are approximated to be equal and used accordingly. Fig. 2 presents an example of the determination of the significance. The first objective of this study is to determine the optimal structure of the hidden layers, including the number of layers and the number of neurons in each layer. Thus we conducted a scan across different layer combinations to find the maximum value of \(S\). During the scan, the learning rate was fixed at 0.001, the loss function was binary cross entropy, and the optimizer used was Adam [30]. To make a fair comparison with the previous study of the BDT method, the training data contain only the events below the NR median, following the same strategy. In the training process of each MLP model, a maximum of 1,500 epochs was set to ensure the model's convergence and to prevent overfitting. The training stopped when the validation loss function reached its minimum value with a patience of 20. The network parameters with the lowest validation loss value were then saved and used to calculate the significance. To obtain reliable results, each model was trained from scratch 100 times and the average significance was calculated. The results of the tested MLP structures, including the average significance and the average number of epochs required to reach the best parameters, are summarized in Table 2 and visualized in Figure 3. It was observed that as the number of neurons in the network increased, the training process completed earlier and the significance was higher. Additionally, all the deep learning models achieved a higher significance compared to the 26.2 value obtained by the BDT method reported in Ref. [15]. The differences in average significances among the various models are relatively small. The BDT method has a unique attribute in that it demonstrates strong differentiation capabilities for events below the NR median, while maintaining a high level of efficiency for ER events that are dominant above the NR median line in dark matter searches. It is crucial to assess if the MLPs possess this characteristic. Unfortunately, the results indicate that the MLPs show inconsistent performance for ER event recognition, with most models displaying low efficiency for ER events, rendering the associated network parameters unsuitable for use in data analysis. Examples of the ER efficiencies as a function of \(qS1\) for three given MLP models are presented in Fig. 4.

Figure 3: The average significances and epochs of the scanned networks.

Figure 2: The determination of the cut value as well as the significance \(S\). The distributions of the MLP output of input test events are plotted as backgrounds. The distributions are obtained from one of the best parameters for model 4C (see text below).

However, after a thorough examination of the results, some MLP parameter sets were found to have both high discrimination power and high efficiency for ER events. One parameter set of the model labeled "4C" was found to have the highest ER efficiency.
The significance of this model is 26.77. The efficiencies for the accidental background, NR calibration events and ER calibration events are presented in Fig. 4(a). The selected model is applied to the PandaX-II Run 11 dark matter search data, with 676 out of 708 candidate events surviving1. The updated efficiencies and data were used to derive the exclusion limit and sensitivity on the spin-independent WIMP-nucleon cross section at a WIMP mass of 40 GeV/c\({}^{2}\). The results are summarized in Table 3, alongside the results obtained using the BDT method. By using the selected 4C model, the sensitivity is improved by 13% in comparison with that obtained by using the BDT method. Footnote 1: the post-unblinding cuts in Ref. [25] are already applied The application of regularization techniques was not successful in curing the unstable behavior. With L2 regularization added to each hidden layer, Model 4C failed to discriminate between NR signal events and the accidental background events. The fluctuation in the ER efficiency curve remained even with the addition of dropout layers (dropout rate 0.2) after each hidden layer, as illustrated in Fig. 3(d). To enhance the stability of ER efficiencies in the MLP models, we adopted a novel strategy for selecting the training data. We still utilized the accidental background events below the NR median, but included all the NR events.

\begin{table} \begin{tabular}{l c c c} Label & Hidden Layers & Average Significance & Average stop epoch \\ \hline [MISSING_PAGE_POST] & \(32\times 32\times 16\times 16\times 16\times 12\times 12\) & 26.83\(\pm\)0.15 & 101.7 \\ \hline \end{tabular} \end{table} Table 2: The average significances and epochs of the scanned networks. The numbers in the column “Hidden Layers” are neurons.

With the updated input data, we retrained the 4C and 4E models, each 100 times, and renamed them to 4Cu and 4Eu, respectively. All the trained models exhibit an ER efficiency close to 100%, with high significance. The efficiencies obtained using the 4Cu model with the highest significance are presented in Fig. 4(b). The average significances are \(26.82\pm 0.17\) and \(27.30\pm 0.13\) for the 4Cu and 4Eu models, respectively. The constraints and sensitivities from the 4Cu and 4Eu
By incorporating NR data above the NR median in the training, stable ER efficiencies and high background suppression power were achieved. Compared to the BDT method, the MLPs trained with the updated data preparation strategy demonstrated improved sensitivity for dark matter search. In the field of dark matter direct detection, the application of deep neural networks is not limited to signal and background discrimination. For instance, generative adversarial networks can be utilized to generate synthetic data that follows the same distribution as actual data, which can be employed in both simulation and analysis. We anticipate that machine learning techniques will continue to provide even greater benefits in underground experiments. ## Acknowledgment This project is supported in part by a grant from the Ministry of Science and Technology of China (No. 2016YFA0400301), grants from National Science Foundation of China (Nos. 12090063, 12175139), and by Office of Science and Technology, Shanghai Municipal Government (grant No. 18JC1410200). We also thank the sponsorship from the Chinese Academy of Sciences Center for Excellence in Particle Physics (CCEPP). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
2307.06062
Predicting the turbulent transport of cosmic rays via neural networks
A fast artificial neural network is developed for the prediction of cosmic ray transport in turbulent astrophysical magnetic fields. The setup is trained and tested on bespoke datasets that are constructed with the aid of test-particle numerical simulations of relativistic cosmic ray dynamics in synthetic stochastic fields. The neural network uses, as input, particle and field properties and estimates transport coefficients 10^7 times faster than standard numerical simulations with an overall error of ~5%.
D. I. Palade
2023-07-12T10:24:34Z
http://arxiv.org/abs/2307.06062v1
# Predicting the turbulent transport of cosmic rays via neural networks ###### Abstract A fast artificial neural network is developed for the prediction of cosmic ray transport in turbulent astrophysical magnetic fields. The setup is trained and tested on bespoke datasets that are constructed with the aid of test-particle numerical simulations of relativistic cosmic ray dynamics in synthetic stochastic fields. The neural network uses, as input, particle and field properties and estimates transport coefficients \(10^{7}\) times faster than standard numerical simulations with an overall error of \(\sim 5\%\). cosmic ray, magnetic turbulence, turbulent transport, neural networks ## I Introduction Cosmic rays (CRs) are charged particles originating mostly in galactic supernovae remnants and being accelerated by the Fermi mechanism in shock waves [1; 2], although more exotic sources are possible [3]. Due to their (ultra)relativistic energies, CRs can permeate systems of all sizes, from the heliosphere to galaxies and even the extra-galactic medium [3; 4]. Consequently, CRs carry important information for our understanding of fundamental high-energy physics, astrophysical magnetic fields, the structure of astrophysical media, space weather, etc. [5; 6; 7]. The dynamics of CRs is a long-standing fundamental problem of astrophysics with many open questions that are difficult to solve for several reasons. During their journey from sources to detection, CRs interact mainly with magnetic fields. The latter are highly turbulent [8] and, via non-linear interactions with particles, lead to consistent (anomalous) transport phenomena beyond ballistic motion [9]. CR energies cover a wide range of magnitudes (from \(MeV\) to \(10^{20}eV\) [2]) while the turbulent features of the magnetic fields are quite diverse, spanning different space-scales, anisotropies, and fluctuation amplitudes [10]. This richness in physical regimes opens a variety of possible transport types, ranging from subdiffusive to superdiffusive in the perpendicular and parallel directions, with complicated dependencies [9; 10; 11; 12]. Despite this discouraging picture, a lot of progress has been made in the past decades in the topic of turbulent CR transport. Quasilinear approaches [13] and nonlinear extensions [14] gained a lot of attention due to their technical simplicity and ability to provide insight into the physical mechanisms involved. Unfortunately, such models are known to fail in many relevant cases, such as the limit of strong turbulence or the _90-degree problem_. A much more accurate approach, which gained momentum in the scientific community, is the method of test-particle simulations either in synthetic [15; 16; 17; 18] or MHD-generated [19; 20] turbulent magnetic fields. Within this approach, the dynamics of CRs is mimicked at the statistical level with an ensemble of fictitious particles that allows for a direct evaluation of the transport coefficients. Its main drawback is the relatively high numerical effort required, which is hardly compatible with the diversity of possible physical regimes. It is clear that the astrophysics community would benefit from a fast and accurate method of prediction in their quest to understand CR turbulent transport. A helping hand might come from another scientific front.
In recent years, we have witnessed the rise of artificial intelligence (AI) methods, in particular machine learning (ML) techniques [21; 22; 23], that are able to provide inferences in various mathematical problems. Of the multitude of ML methods, this work employs artificial neural networks (ANNs) [22; 23]. The promise of ML is that, regardless of the chosen technique, if sufficient data is available, it can learn from that data and make fast and reliable predictions for unseen cases. Such abilities would be equivalent to having analytical expressions at hand and would bypass the technical difficulties that arise in running many simulations of CR dynamics. Given this scientific context, the purpose of the present work is to illustrate the methodology for developing an artificial neural network designed for predictions of cosmic ray turbulent transport in astrophysical magnetic fields. For the training and testing phases, a database is constructed with the aid of test-particle numerical simulations in synthetically generated fields. Since the main purpose is methodological, we restrict ourselves here to the evaluation of perpendicular diffusion [9] in a relatively limited range of physical parameters. Nonetheless, it is the hope of the author that this work will open a path towards more extensive databases and, consequently, more potent ANNs. The rest of this manuscript is organized as follows. The Theory section II describes, briefly, the CR transport model (II.1), the test-particle numerical method (II.2), the general architecture of ANNs (II.3) and the construction of a training and testing database (II.4). The Results section (III) presents the convergence properties and the predictive power of the ANN. In the Conclusions section (IV), the main findings are summarized and perspectives are discussed. ## II Theory The ingredients of an ANN are the programming structure and the training dataset. For our problem, the latter consists of input-output pairs representing field-particle properties and diffusion coefficients. Such data is obtained with the numerical method of test particles that move according to a transport model. In this section, these elements, shown schematically in Fig. (1), are discussed in reverse order. ### The transport model A cosmic ray is a charged particle characterized by its rest mass \(m\), charge \(q\), position \(\mathbf{r}(t)\) and velocity \(\mathbf{v}(t)=d\mathbf{r}(t)/dt\) in a global reference frame, obeying the relativistic Newton-Lorentz equation in the presence of an astrophysical magnetic field \(\mathbf{B}\): \[\frac{d(\gamma\mathbf{v})}{dt}=\frac{q}{m}\left(\mathbf{v}\times\mathbf{B}\right). \tag{1}\] The kinetic energy of the particle is \(T=mc^{2}(\gamma-1)\), where the Lorentz factor \(\gamma=(1-v^{2}/c^{2})^{-1/2}\), \(c\) is the speed of light, and any electric field is neglected, \(\mathbf{E}=0\). A Cartesian system of coordinates \(\mathbf{r}\equiv(x,y,z)\) is defined. The magnetic field \(\mathbf{B}\) is decomposable into an average constant component \(B_{0}\) along \(Oz\) and a fluctuating part \(\mathbf{b}\) which is turbulent. Relative to this decomposition, we refer to the \(Oz\) direction as "parallel", \(\hat{e}_{z}\equiv\hat{e}_{\parallel}\), and to the plane \((x,y)\) as "perpendicular". Naturally, for any wavevector \(\mathbf{k}=(k_{x},k_{y},k_{z})\), \(k_{z}\equiv k_{\parallel}\) and \((k_{x},k_{y})\equiv\mathbf{k}_{\perp}\).
The velocity \(\mathbf{v}=v_{\parallel}\hat{e}_{\parallel}+\mathbf{v}_{\perp}\) can be used to define the pitch angle \(\mu=v_{\parallel}/v\), where \(v^{2}=|\mathbf{v}|^{2}\). The turbulent component \(\mathbf{b}\) of the total magnetostatic field \(\mathbf{B}=B_{0}\hat{e}_{\parallel}+\mathbf{b}\) is represented in the paradigm of a 2D model [16; 18; 24]: \[\mathbf{b}(\mathbf{r})=\hat{e}_{\parallel}\times\nabla_{\perp}a_{\parallel}(\mathbf{r}). \tag{2}\] The magnetic vector potential \(a_{\parallel}\) is Gaussian, zero-averaged \(\langle a_{\parallel}(\mathbf{r})\rangle=0\) and homogeneous [15; 17; 24]. The last property is evident in the spectrum of its fluctuations: \[\langle\tilde{a}_{\parallel}(\mathbf{k})\tilde{a}_{\parallel}(\mathbf{k}^{\prime})\rangle=S(\mathbf{k})\delta(\mathbf{k}+\mathbf{k}^{\prime}).\tag{3}\] Here, \(\tilde{a}_{\parallel}(\mathbf{k})\) denotes the Fourier transform of \(a_{\parallel}(\mathbf{r})\), while \(\langle\cdot\rangle\) stands for the statistical average. The overall turbulence amplitude is defined as \(b=\sqrt{(\mathbf{b}^{2}(\mathbf{r}))}\) and the spectrum is assumed to be of Kolmogorov type with parallel-perpendicular anisotropic dependencies [15; 18; 24]: \[S(\mathbf{k}_{\perp},k_{\parallel})\sim\frac{(k_{\perp}\lambda_{\perp})^{q}\left(1+(k_{\parallel}\lambda_{\parallel})^{2}\right)^{-s/2}}{k_{\perp}(1+(k_{\perp}\lambda_{\perp})^{2})^{(s+q)/2}}, \tag{4}\] where \(q=3,s=5/3\) and \(\lambda_{\parallel},\lambda_{\perp}\) are "bend-over" scales, intimately related to coherence/integral scales [15]. Within this transport model, realizations of the turbulent field \(\mathbf{b}\) drive associated CR trajectories \(\mathbf{r}(t)\) via the eq. of motion (1). The ensemble of trajectories can be used to derive, as statistical averages, the diffusion transport coefficients in any direction \(Ox\): \[D_{x,x}(t)=\frac{\langle\left(x(t)-x(0)\right)^{2}\rangle}{2t}. \tag{5}\] In this paper, we discuss regimes that are strictly diffusive, for which the quantity of interest is the asymptotic perpendicular diffusion coefficient \(D_{\perp}=\lim_{t\rightarrow\infty}D_{x,x}(t)\). In this context, the problem of CR turbulent transport is equivalent to asking: _what are the diffusion coefficients \(Y=D_{\perp}\) for any given set of particle-field input parameters \(X=(T,\mu,b,\lambda_{\perp},\lambda_{\parallel})\)_? For practical reasons related to numerical implementation, all quantities involved in the transport model (eqns. (1)-(4)) are scaled as follows. The magnetic fields \(\mathbf{b}\rightarrow\mathbf{b}B_{0}\), the time \(t\to t\,\omega_{c}^{-1}\), the velocities \(\mathbf{v}\rightarrow\mathbf{v}c\), the kinetic energy \(T\to Tmc^{2}\), the space scales \((\mathbf{r},\lambda_{i})\rightarrow(\mathbf{r},\lambda_{i})\rho_{L}^{0}\) and the wave-vectors \(\mathbf{k}\rightarrow\mathbf{k}/\rho_{L}^{0}\). The cyclotron frequency is defined as \(\omega_{c}=|q|B_{0}/m\), while the "bare" Larmor radius \(\rho_{L}^{0}=mc/(|q|B_{0})\). Consequently, the diffusion coefficients are scaled as \(D\to Dmc^{2}/(qB_{0})\). ### Test-particle numerical simulations The present work employs the method of test-particle simulations [15; 17] (or direct numerical simulations [18; 25; 26]) for the calculation of diffusion coefficients. The main idea is that we can mimic the reality of turbulent dynamics through a numerical representation of
the transport model described in the previous subsection (II.1).

Figure 1: Schematic view of the elements required to construct the ANN.

A finite ensemble of \(N_{p}\) random fields \(\{\mathbf{b}\}\) with appropriate spectral properties (distribution and spectra) is constructed. For each realization of this ensemble, a CR trajectory \(\mathbf{r}(t)\) can be computed by solving eq. (1). In the limit of many particles \(N_{p}\rightarrow\infty\), the transport coefficients are given by statistical averages over the numerical ensemble of trajectories \(\{\mathbf{r}(t)\}\) (5). More details on the method can be found in [26]. The magnetic vector potential, \(a_{\parallel}\), is assumed Gaussian, zero-averaged, and homogeneous; thus, it can be constructed numerically in each realization with a Fourier-like (harmonic) decomposition [27; 26; 17]: \[a_{\parallel}(\mathbf{r})=b\lambda_{\perp}\sqrt{\frac{2}{N_{c}}}\sum_{j=1}^{N_{c}}\sin(\mathbf{k}^{j}\mathbf{r}+\alpha^{j}) \tag{6}\] where \(\alpha^{j}\) are independent random initial phases distributed uniformly in the interval \([0,2\pi)\). The partial waves \(\mathbf{k}^{j}\equiv\left(\mathbf{k}^{j}_{\perp},k^{j}_{\parallel}\right)\) are independent random variables generated with the use of the acceptance-rejection algorithm [28] for the PDF \(S(\mathbf{k})\). It can be shown that the representation (6) is, indeed, zero-averaged with the correct spectrum (4). In the limit of many partial waves \(N_{c}\rightarrow\infty\), Gaussianity is also achieved (as guaranteed by the _central limit theorem_). Solving eq. (1) is a standard numerical problem, which is tackled here with a relativistic Boris pusher [29]. Magnetic field values are evaluated in each realization directly using eq. (6). Within the present scaling, the gyro-period is \(2\pi\) and the time step is chosen as \(\delta t=2\pi/10\). This level of time discretization was found to be sufficient for an accurate prediction of the Larmor rotation and CR dynamics. Practical experience has shown that using \(N_{p}=10^{4}\) and \(N_{c}=500\) in a simulation is enough to achieve acceptable Gaussianity, as well as Eulerian and Lagrangian convergence, and thus reliable transport results. ### Artificial neural networks In this section, the generic features of an ANN architecture that will be used for predicting the transport of CRs in turbulent magnetic media are described. Its structure is represented graphically in Fig. (2). For more details, see [30]. A general ANN is a computing setup consisting of \(m+2\) "layers"; each layer \(i\) contains \(n_{i}\) "neurons", and each neuron \(j\) is described by a numerical value \(Z_{i,j}\), a "bias" \(\beta_{i,j}\), an "activation function" \(f_{i,j}\) and some "weights" \(W_{i,j,k}\). The latter connect neurons from neighbouring layers recursively; thus, \(W\) is an irregular tensor of dimension \((m+1)\times n_{i}\times n_{i-1}\). The first layer, \(i=0\), contains the values of the input variable \(X\), \(Z_{0,j}=X(j)\). The last layer, \(i=m+1\), contains the values of the output variable \(Y\), \(Z_{m+1,j}=Y(j)\). The rest are coined "hidden layers". Within an ANN there is a feed-forward recurrent relation between layers: \[Z_{i,j}=f_{i,j}\left(\sum_{k=1}^{n_{i-1}}W_{i,j,k}Z_{i-1,k}+\beta_{i,j}\right) \tag{7}\] The purpose of an ANN is to model the true relation (which is unknown) between the input variable \(X\) and the output \(Y\). This is achieved by finding appropriate weights and biases \(\{W_{i,j,k},\beta_{i,j}\}\) that minimize an error function \(E\) between the output of the network, \(Z_{m+1}\), and the real output, \(Y\).
This must be done over a dataset of \(N_{d}\) pairs \(\{X^{p},Y^{p}\}_{p=1,N_{d}}\) which should be _reasonably large_. The error function \(E\) is defined as a Minkowski error: \[E=\frac{1}{N_{d}}\sum_{p=1}^{N_{d}}\sum_{j=1}^{n_{m+1}}\left(Z_{m+1,j}^{p}-Y^{p}(j)\right)^{2}. \tag{8}\] The error minimization is reached iteratively, over multiple "epochs", using ADAM [31], a modified version of the gradient descent algorithm [32]. In this approach, during each iteration (epoch), the weights and biases \(\{W_{i,j,k},\beta_{i,j}\}\) (denoted generically \(\theta\)) are updated using a "learning"/"training" procedure: \[v_{\theta} =\alpha_{1}v_{\theta}+(1-\alpha_{1})\left(\frac{\partial E}{\partial\theta}\right)^{2} \tag{9}\] \[m_{\theta} =\frac{\alpha_{2}m_{\theta}}{1-\alpha_{1}^{epoch}}+\frac{1-\alpha_{2}}{1-\alpha_{1}^{epoch}}\left(\frac{\partial E}{\partial\theta}\right)\] (10) \[\theta =\theta-\gamma\frac{m_{\theta}}{\varepsilon+\sqrt{v_{\theta}}} \tag{11}\] where \(\gamma,\alpha_{1},\alpha_{2},\varepsilon\) are parameters controlling the learning rate and the convergence of the ANN. The "moments" \(v_{\theta},m_{\theta}\) are initialized to zero, while the weights and biases \(\theta\equiv\{W_{i,j,k},\beta_{i,j}\}\) are initialized as normal random variables with zero average and unit variance. The ADAM algorithm (9)-(11) should find, asymptotically, the global minimum of the error function \(E\).

Figure 2: The architecture of a general ANN with 5 input and 2 output variables.

This means that the ANN should predict output values \(Z_{m+1}^{p}\) as suitable approximations for the real \(Y^{p},\forall p=\overline{1,N_{d}}\) and beyond. Based on numerical experience, the optimal values used for the present ANN are \(m=2\) hidden layers and \(n_{0}=3,n_{1}=10,n_{2}=5,n_{3}=1\) neurons in each layer. The activation functions are \(f_{i,j}(z)=\tanh(z),\forall i=\overline{1,m},j=\overline{1,n_{i}}\) and \(f_{m+1,j}(z)=z^{2},\forall j=\overline{1,n_{m+1}}\) for the hidden and output layers, respectively. The parameters for ADAM are \(\gamma=0.001,\alpha_{1}=0.9,\alpha_{2}=0.95,\varepsilon=10^{-7}\). ### Defining the database At this point, we must recall how the question of CR turbulent transport fits into the formal description of an ANN. The eqns. of the model (1)-(5) have a set of free parameters, namely \(\left(T,\mu,b,\lambda_{\perp},\lambda_{\parallel}\right)\). In our simulations, we chose \(\mu=0.5\) and \(\lambda_{\parallel}=10\lambda_{\perp}\), values relevant to observations from the solar wind. The rest, \(X=(T,b,\lambda_{\perp})\), represents the input variable from the ANN's perspective. For each set of parametric values \(X\), associated perpendicular diffusion coefficients are obtained as output \(Y=D_{\perp}\). In order to train and test the ANN, we need to construct a database of \(N_{d}\) pairs \((X^{p},Y^{p})_{p=1,N_{d}}\). The latter is obtained through the test-particle method described in Section (II.2). An appropriate number of \(N_{d}=5\times 10^{3}\) simulations has been performed. Note that this task is far more demanding than constructing and training the ANN, since it requires a much larger volume of CPU computing time. The \(n_{0}\)-dimensional space of \(X\) parameters must be truncated to a finite domain. This is done by choosing a cube of limiting values: \(T\in(0,10)\), which for protons corresponds to a kinetic energy of \((0\,eV,10\,GeV)\); \(b\in(0,1)\), corresponding to a \((0,B_{0})\) amplitude of fluctuations; and \(\lambda_{\perp}\in(0.2,20)\).
Note that for a proton, \(T=1\) corresponds to \(\approx 1\,\mathrm{GeV}\) energy and a scaled Larmor radius \(\rho_{L}\approx 1.7\), thus comparable with most values of the correlation length \(\lambda_{\perp}\). In order to avoid spurious biases, the values \(\{X^{p}\}\) are generated uniformly at random inside the parametric hypercube. It is important to observe that the database can be completed with analytical results. For example, at \(b=0\), \(D_{\perp}=D_{\parallel}=0\). Approximately \(5-10\%\) of the database can be filled with such exact results. If this fraction is too large, the hypercube becomes non-uniformly filled and the ANN learns predominantly the analytical values, which are of no interest. From the total database, \(90\%\) of the \((X,Y)\) pairs are used for training, while the remaining \(10\%\) serve as testing grounds for the assessment of the ANN's predictive abilities.

## III Results

### Turbulent transport

The transport model has been used in the framework of the test-particle method to evaluate CR trajectories and, consequently, perpendicular diffusion coefficients \(Y=D_{\perp}\) for different random combinations of the free input parameters \(X\equiv(T,b,\lambda_{\perp})\). The equations of motion (1) are fully relativistic and do not rely on the guiding-center approximation [18]. Consequently, particle trajectories describe the full Larmor rotation induced by the average magnetic field \(B_{0}\) as well as the "scattering" in the fluctuating field \(\mathbf{b}\), which is 2D. This level of description is needed whenever the characteristic length scales of the fluctuations, in our case \(\lambda_{\perp}\), are comparable with the Larmor radius \(\rho_{L}\). This is precisely the case for the database, since for most energies \(\rho_{L}\sim 1\) (in the scaling described in Section (II.1)) while \(\lambda_{\perp}\in(0.2,20)\). Figs. (3a)-(3b) show typical CR trajectories in the perpendicular plane (Fig. (3a)) and in full 3D space (Fig. (3b)). Since the pitch angle has been set to \(\mu=0.5\) in all simulations, the particles propagate along the parallel direction with larger velocities than the perpendicular guiding-center drifts. The Larmor rotation and the scattering due to fluctuations are evident. The running diffusion coefficient \(D_{\perp}(t)\), related to the mean-square displacement of particles, \(\langle x^{2}(t)\rangle\), is computed in accordance with eq. (5). Another, more frequently used, formula for diffusion is \(d_{\perp}(t)=\partial_{t}\langle x^{2}(t)\rangle/2\). The reason why \(D_{\perp}(t)\) is used here instead of \(d_{\perp}(t)\) is that we capture Larmor rotations in our model: this component of motion produces large fluctuations in \(d_{\perp}(t)\) at small times. \(D_{\perp}(t)\), on the other hand, due to its functional form, suppresses the effect of rotations very quickly, as well as the statistical fluctuations at long times. Nonetheless, both forms converge asymptotically to the same value. All these features can be seen in Fig. (4), where two typical time-dependent diffusion profiles are shown for the same set of free parameters (a sketch of these estimators is given below). It must be emphasized that our database describes only purely diffusive regimes (in the perpendicular plane). In general, anomalous behaviour is possible, as has been described many times in the literature [7; 18; 16; 20; 9].
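For completeness, both estimators can be computed from an ensemble of simulated trajectories as in the minimal NumPy sketch below; the array names are ours, the trajectories themselves would come from the Boris-pusher integration described above, and we assume eq. (5) is the integral form \(\langle x^{2}(t)\rangle/2t\).

```python
import numpy as np

def running_diffusion(x, t):
    """x: (N_p, N_t) perpendicular displacements x(t) - x(0) for N_p particles;
    t: (N_t,) times. Returns D(t) = <x^2>/(2t) and d(t) = d<x^2>/dt / 2."""
    msd = np.mean(x ** 2, axis=0)        # mean-square displacement <x^2(t)>
    D = msd[1:] / (2.0 * t[1:])          # integral form: smooths Larmor rotations
    d = np.gradient(msd, t) / 2.0        # derivative form: noisier at small times
    return D, d
```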
In the present setting, the finite parallel correlation length \(\lambda_{\parallel}=10\lambda_{\perp}\) is responsible for the decay of the Lagrangian correlation and, consequently, for the saturation of the running diffusion to a constant value, i.e., diffusive transport. The mechanism is simple: the parallel motion of CRs in fields with parallel dependence induces an effective decorrelation time \(\tau_{c}\approx\lambda_{\parallel}/v_{\parallel}\). Describing subdiffusive or superdiffusive transport remains an important task for future developments of ANNs.

### Convergence properties of the ANN

Once the database has been built and the ANN constructed (programmed), we are all set to start the training phase. A random configuration of initial weights and biases \(\{W_{i,j,k},\beta_{i,j}\}\) is chosen and the minimization procedure (via the ADAM algorithm) begins. The random nature of the initial \(\{W_{i,j,k},\beta_{i,j}\}\) is needed to avoid setting the ANN in a configuration point from which it won't be able to converge to the global minimum. For the same reason, different initializations lead to distinct paths of convergence. There is also the question of whether the true global minimum is achieved [21] or the algorithm gets stuck in a neighbouring local minimum. Regardless, practical experience has shown that the asymptotic states are appropriate approximations of the real minimum. Fig. (5) shows a typical evolution of the error (8) relative to the norm of the output values. The learning stage is fast, approximately \(5\,s\) on a typical personal CPU. One must note that the error function is almost always decreasing, but the evolution can manifest periods of quasi-plateau, such as the one between epochs \(400-900\) in Fig. (5). For this reason, it is important to allow for long periods of minimization, in order to avoid confusing a local minimum (the plateau) with the global one. The number of particles propagated within a single simulation in this work is set to \(N_{p}=10^{4}\). Looking at Fig. (4), it might seem that this value is unnecessarily large, since the diffusion coefficient \(D_{\perp}(t)\) saturates to a nice, constant plateau without temporal fluctuations. While this is true, it does not mean that the ensemble is sufficiently well represented and, consequently, that the asymptotic value is statistically robust (without fluctuations). In other words, if \(N_{p}\) is too small, two identical simulations might yield relatively different diffusions. This would have a detrimental impact on the database, introducing spurious numerical fluctuations and making the job of convergence much more difficult.

Figure 3: Typical CR trajectories in turbulent 2D magnetic fields shown in the perpendicular plane (a) and in full 3D geometry (b).

Figure 4: Typical perpendicular diffusion coefficients computed in two ways.

Figure 5: Error \(E\) evolution during the iterative training of the ANN with ADAM.

When the ADAM algorithm reaches the final plateau and an acceptably small error, one is ready to enter the testing phase. Now the ANN, that is to say the values of \(\{W_{i,j,k},\beta_{i,j}\}\) obtained after minimization, can be used to evaluate diffusion coefficients for the remaining 10% of the inputs, \(X^{test}\), of the database, which were not used in the training stage. The predicted values, \(Z^{test}_{m+1}\), are compared with the exact output diffusions \(Y^{test}\) from the database. Fig. (6) shows the histogram of relative errors between the ANN's predictions and the real output, \(Err=100\left(Z^{test}_{m+1}/Y^{test}-1\right)\).
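The testing-phase error metric is straightforward to evaluate; a minimal sketch (names ours), including the second moment of the histogram that is used as a global accuracy figure in the next paragraph:

```python
import numpy as np

def test_errors(Z_pred, Y_test):
    """Relative errors Err = 100 (Z/Y - 1) and their RMS over the test set."""
    err = 100.0 * (Z_pred / Y_test - 1.0)
    return err, np.sqrt(np.mean(err ** 2))
```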
While there are points with relatively large errors (usually those of very small diffusion, \(Y^{test}\ll 1\)), the overall accuracy of the ANN can be estimated with the second moment of the histogram, and it is close to 5%. When comparing computing times between test-particle simulations, \(t_{sim}\), and the ANN, \(t_{ANN}\), the latter shows its true power: for a single diffusion coefficient on a two-processor CPU, \(t_{sim}\approx 2\,s\) while \(t_{ANN}\approx 10^{-7}\,s\). Thus, the ANN is roughly \(10^{7}\) times faster than the numerical simulations.

### Making predictions with the ANN

The histogram of errors obtained in the testing phase (Fig. (6)) suggests a global error of \(\sim 5\%\). But this alone is not enough to ensure an acceptable accuracy of the ANN. In fact, there are points which depart by more than 10% from the exact values. Moreover, these errors might be a serious liability if they are clustered in some special way inside the parametric space of input variables. In order to test whether this is the case, we must look at reduced subspaces of the database and evaluate whether the ANN's predictions are a good fit for the real data. We do that on two distinct levels: first, we look at two-dimensional dependencies of \(D_{\perp}\), varying two parameters at a time; then, we look at simple dependencies between \(D_{\perp}\) and only one parameter. In Figs. (7a)-(7c) we compare the ANN output (red dots) with exact values (blue surface, obtained with test-particle simulations). Fig. (7a) uses as variables the magnetic turbulence amplitude \(b\) and the perpendicular length \(\lambda_{\perp}\), while the energy is set to \(T=1\). Fig. (7b) sets \(b=0.2\) and varies \(\lambda_{\perp}\) and \(T\), while in Fig. (7c) \(\lambda_{\perp}=5\). As one can see, all predicted values (red) lie close to the surface of exact diffusions (blue) in all cases. Thus, there is no special parametric domain where errors tend to accumulate, and the ANN exhibits similar accuracy across the entire database. In Figs. (8a)-(8c) we set two parameters to constant values and vary only the third, as follows: Fig. (8a) uses \(b=0.2,\lambda_{\perp}=5\); Fig. (8b) \(T=1,\lambda_{\perp}=5\); and Fig. (8c) \(T=1,b=0.2\). The overlap between predictions (red) and exact data (blue) is, again, consistently accurate across all simulations. Furthermore, at the level of such single-parameter dependencies, we can understand how using an ANN tool could enhance our ability to investigate and understand the physical processes at play. For example, having easy access to the red curve in Fig. (8a) allows one to observe the maximum in the diffusion profile and infer the existence of two competing mechanisms through which particle energy influences transport. One is related to the monotonic increase of the parallel velocity \(v_{\parallel}\) with \(T\), which increases the transport, while the other is connected to finite-Larmor-radius effects, which tend to decrease diffusion [18].

Figure 6: Relative error distribution on the testing database.

Figure 7: ANN predicted diffusion (red dots) in comparison with exact, test-particle derived, data (blue surface) keeping one parameter constant at a time (\(T=1\)-(a), \(b=0.2\)-(b), \(\lambda_{\perp}=5\)-(c)).

## IV Conclusions

The present work described a methodology for building artificial neural networks designed for predictions of cosmic ray transport in turbulent magnetic fields. The ANN developed here has a standard architecture with two hidden layers and \(\tanh\)/quadratic activation functions.
It uses the ADAM algorithm for optimization in the learning phase. The input data in the training/testing database consist of the values of the free parameters for CRs and turbulence, chosen inside a (hyper)cube of convenience. The output is represented by the associated diffusion coefficients. The values of the latter are evaluated using a transport model which is tackled numerically with the aid of test-particle simulations. The learning stage showed fast and good convergence properties. In the testing phase, a good fit between exact and ANN-predicted data was found, with overall errors of \(\approx 5\%\). The predictive power of the ANN is proven by the dependencies of the transport coefficients on individual parameters. The most striking feature of the network is its speed, since it is able to compute diffusion coefficients in bulk approximately \(10^{7}\) times faster than test-particle simulations. One might argue that the overall database constructed here is too limited: the model of turbulent fields (2D, with a specific spectrum and connected correlation lengths \(\lambda_{\parallel}=10\lambda_{\perp}\)) is too simple to represent most astrophysical regimes, and the numerical range of parameters (energy, correlation lengths, field strengths) is quite narrow. This is all true, but the purpose of this work was not to exhaust wide physical regimes. It was the authors' intention to present a proof of concept for the use of machine learning techniques in the field of astrophysics, applied to a particular problem. Hopefully, the community will explore the use of this tool in future works using more sophisticated transport models and wider databases, thus developing more potent ANNs. It is likely that, given the numerical effort required in building large databases, collaborative efforts will be needed.

_A word of caution._ Developing ANNs to evaluate quantities of interest does not, by itself, produce real knowledge. It does not constitute insight into any physical processes or quantitative mechanisms. Nonetheless, for practical applications where predictions are important, ANNs might be the preferred choice. Finally, even for more academically oriented questions, having a fast tool at hand to explore different regimes could be invaluable in the quest for disentangling the physical mechanisms at work.

## Statement

The database and the ANN developed in this study are available upon reasonable request from the authors.

Figure 8: ANN predicted diffusion (red dots) in comparison with exact, test-particle derived, data (blue dots) keeping two parameters constant at a time (\(b=0.2,\lambda_{\perp}=5\)-(a), \(T=1,\lambda_{\perp}=5\)-(b), \(T=1,b=0.2\)-(c)).

## Acknowledgements

This research was partially supported by the Romanian Ministry of Research, Innovation and Digitalization under Romanian National Core Program LAPLAS VII - contract no. 30N/2023.
2306.15337
Homological Neural Networks: A Sparse Architecture for Multivariate Complexity
The rapid progress of Artificial Intelligence research came with the development of increasingly complex deep learning models, leading to growing challenges in terms of computational complexity, energy efficiency and interpretability. In this study, we apply advanced network-based information filtering techniques to design a novel deep neural network unit characterized by a sparse higher-order graphical architecture built over the homological structure of underlying data. We demonstrate its effectiveness in two application domains which are traditionally challenging for deep learning: tabular data and time series regression problems. Results demonstrate the advantages of this novel design, which can match or exceed the results of state-of-the-art machine learning and deep learning models while using only a fraction of the parameters.
Yuanrong Wang, Antonio Briola, Tomaso Aste
2023-06-27T09:46:16Z
http://arxiv.org/abs/2306.15337v1
# Homological Neural Networks ###### Abstract The rapid progress of Artificial Intelligence research came with the development of increasingly complex deep learning models, leading to growing challenges in terms of computational complexity, energy efficiency and interpretability. In this study, we apply advanced network-based information filtering techniques to design a novel deep neural network unit characterized by a sparse higher-order graphical architecture built over the homological structure of underlying data. We demonstrate its effectiveness in two application domains which are traditionally challenging for deep learning: tabular data and time series regression problems. Results demonstrate the advantages of this novel design, which can match or exceed the results of state-of-the-art machine learning and deep learning models while using only a fraction of the parameters. The code and the data are available at [https://github.com/FinancialComputingUCL/HNN](https://github.com/FinancialComputingUCL/HNN).

## 1 Introduction

Computational processes can be viewed as mapping operations from points or regions in space into points or regions in another space with different dimensionality and properties. Neural networks process information through stacked layers with different dimensions to efficiently represent the inherent structure of the underlying data. Uncovering this structure is, however, challenging, since it is typically unknown a priori. Nevertheless, studying dependencies among variables in a dataset makes it possible to characterise the structural properties of the data and shape ad-hoc deep learning architectures on it. Specifically, the basic operation in deep neural networks consists of aggregating input signals into one output. This operation is most effective in scenarios where the spatial organization of the variables is a good proxy for dependency. However, in several real-world complex systems, modelling dependency structures requires the usage of a complex network representation. Graph Neural Networks have been introduced as one possible way to address this issue (Samek et al., 2021). However, they present two main limits: (i) they are designed for data defined on the nodes of a graph (Yang et al., 2022), and (ii) they usually only explicitly consider low-order interactions as geometric priors (edges connecting two nodes), ignoring higher-order relations (triangles, tetrahedra, ...). Instead, dependency is not simply a bi-variate relation between couples of variables; it involves groups of variables with complex aggregation laws. In this work, we propose a novel deep learning architecture that takes into account higher-order interactions in the dependency structure as topological priors. Higher-order graphs are networks that connect not only vertices with edges (i.e. low-order 1-dimensional simplexes) but also higher-order simplexes (Torres and Bianconi, 2020). Indeed, any higher-order component can be described as a combination of lower-order components (i.e. edges connecting two vertices, triangles connecting three edges, ...). The study of networks in terms of the relationship between structures at different dimensionality is a form of homology. In this work, we propose a novel multi-layer deep learning unit capable of fully representing the homological structure of data, and we name it Homological Neural Network (HNN). This is a feed-forward unit where the first layer represents the vertices, the second the edges, the third the triangles, and so on.
Each layer connects with the next homological level according to the network's topology, representing the dependency structure of the underlying input dataset. Information only flows between connected structures at different order levels, and homological computations are thus obtained. Neurons in each layer have a residual connection to a post-processing readout unit. HNN's weights are updated through backward propagation using a standard gradient descent approach. Given the higher-order representation of the dependency structure in the data, this unit should provide better computational performances than those of fully connected multi-layer architectures. Furthermore, given the intrinsic sparsity of the network representation, this unit should be computationally more efficient, and results should be more intuitive to interpret. We test these hypotheses by evaluating the HNN unit on two application domains traditionally challenging for deep learning models: tabular data and time series regression problems. This work builds upon a vast literature concerning complex network representations of data dependency structures (Costa-Santos et al., 2011; Moyano, 2017). Networks are excellent tools for representing complex systems both mathematically and visually; they can be used both for qualitatively describing a system and for quantitatively modeling its properties. A dense graph with everything connected to everything else (a complete graph) does not carry any information; conversely, too sparse representations are oversimplifications of the important relations. There is a growing recognition that, in most practical cases, a good representation is provided by structures that are locally dense and globally sparse. In this paper we use a family of network representations, named Information Filtering Networks (IFNs), which have been proven to be particularly useful in data-driven modeling (Tumminello et al., 2005; Barfuss et al., 2016; Briola & Aste, 2023). The proposed methodology exploits the power of a specific class of IFNs, namely the Triangulated Maximally Filtered Graph (TMFG), which is a maximal planar chordal graph with a clique-tree structure made of tetrahedra (Massara et al., 2017). The TMFG is a good compromise between sparsity and density, and it is computationally efficient to construct. It has the further advantage of being chordal (every cycle of four or more vertices has a chord), which makes it possible to directly implement probabilistic graphical modeling on its structure (Barfuss et al., 2016). The rest of the paper is organised as follows. We first review, in Section 2, the relevant literature. Then, in Section 3, we introduce a novel representation for higher-order networks, the founding stone of HNNs. The design of the HNN as a modular unit of a deep learning architecture is discussed in Section 4. Applications of HNN-based architectures to tabular data experiments on the Penn Machine Learning Benchmark and to multivariate time series on the solar-energy power and exchange-rates datasets are discussed in Section 5.1 and Section 5.2. Conclusions are provided in Section 6.

## 2 Background literature

### Information Filtering Networks

The construction of sparse network representations of complex datasets has been a very active research domain during the last two decades. There are various methodologies and possibilities to associate data with network representations.
The overall idea is that, in a complex dataset, each variable is represented by a vertex in the network, and the interaction between variables is associated with the network structure. Normally, such a network representation is constructed from correlations or (non-linear) dependency measures (e.g. mutual information), and the network is built in such a way as to retain the largest significant dependencies in its interconnection structure. These networks are known as Information Filtering Networks (IFN), with one of the best-known examples being the Minimum Spanning Tree (MST) (Nesetril et al., 2001) built from pure correlations (Mantegna, 1998). The MST has the advantage of being the sparsest connected network and of being the exact solution of some optimization problems (Kruskal, 1956; Prim, 1957). However, other IFNs based on richer topological embeddings, such as planar graphs (Tumminello et al., 2005; Aste & Matteo, 2017) or clique trees and forests (Massara et al., 2017; Massara & Aste, 2019), can extract more valuable information and better represent the complexity of the data. These network constructions have been employed across diverse research domains, from finance (Barfuss et al., 2016) to brain studies (Telesford et al., 2011) and psychology (Christensen et al., 2016). In this paper, we use the Triangulated Maximally Filtered Graph (TMFG) (Massara et al., 2017), which is a planar and chordal IFN. It has the property of being computationally efficient, and it can yield a sparse precision matrix with the structure of the network, thereby being a tool for \(L_{0}\)-norm topological regularization in multivariate probabilistic models (Aste, 2022).

### Sparse neural networks

Recent advances in artificial intelligence have exacerbated the challenges related to models' computational and energy efficiency. To mitigate these issues, researchers have proposed new architectures characterized by fewer parameters and sparse structures. Some of them have successfully reduced the complexity of very large models to drastically improve efficiency with negligible performance degradation (Ye et al., 2018; Molchanov et al., 2016; Lee et al., 2018; Yu et al., 2017; Anwar et al., 2015; Molchanov et al., 2017; Zhuo et al., 2018; Wang et al., 2018). Others have not only simplified the architectures but also enhanced models' efficacy, further demonstrating that fewer parameters yield better model generalization (Wu et al., 2020; Wen et al., 2016; Liu et al., 2015, 2017; Hu et al., 2016; Zhuang et al., 2018; Peng et al., 2019; Louizos et al., 2017). Nonetheless, in the majority of the literature, sparse topological connectivity is pursued either after the training phase, which bears benefits only during the inference phase, or during the back-propagation phase, which usually adds complexity and run-time to the training. A first attempt to solve these issues is represented by network-inspired pruning methods that incorporate pruning at the earliest stage of the building process, allowing for the establishment of a foundational topological architecture that can then be elaborated upon (Stanley and Miikkulainen, 2002; Hausknecht et al., 2014; Mocanu et al., 2017). However, the most interesting solution is represented by Simplicial NNs (Ebli et al., 2020) and Simplicial CNNs (Yang et al., 2022). Indeed, these architectures constitute the very first attempt to exploit the topological properties of sparse graph representations to capture higher-order data relationships.
Despite their novelty, the design of these neural network architectures limits them to pre-designed network data, without the possibility to easily scale to more general data types (e.g., tabular data and time series). In this paper, we incorporate topological constraints within the design phase of the network architecture, generating a more intricate sparse topology derived from IFNs (Briola et al., 2022; Briola and Aste, 2022; Vidal-Tomas et al., 2023; Briola and Aste, 2022).

### Deep Learning models for tabular data

Throughout the previous ten years, conventional machine learning algorithms, exemplified by gradient-boosted decision trees (GBDT) (Chen and Guestrin, 2016), have predominantly governed the landscape of tabular data modelling, exhibiting superior efficacy compared to deep learning methodologies. Despite the encouraging results presented in the literature (Shwartz-Ziv et al., 2018; Poggio et al., 2020; Piran et al., 2020), deep learning tends to encounter significant hurdles when applied to tabular data. The works of (Arik and Pfister, 2019) and (Hollmann et al., 2022) claim to achieve results comparable to tree models, but they are all very large attention/transformer-based models. Indeed, tabular data manifest a range of peculiar issues, such as non-locality, data sparsity, heterogeneity in feature types, and an absence of a priori knowledge about underlying dependency structures. Therefore, tree ensemble methodologies, such as XGBoost, are still deemed the optimal choice for tackling real-world tabular data tasks (Friedman, 2001; Prokhorenkova et al., 2018; Grinsztajn et al., 2022). In this work, we propose a much more efficient sparse deep-learning model with similar results.

### Deep Learning models for multivariate time-series

Existing research in multivariate time series forecasting can be broadly divided into two primary categories: statistical methods and deep learning-based methods. Statistical approaches usually assume linear correlations among variables (i.e., time series) and use their lagged dependency to forecast through a regression, as exemplified by the vector auto-regressive model (VAR) (Zivot and Wang, 2003) and the Gaussian process model (GP) (Roberts et al., 2012). In contrast, deep learning-based methods, such as LSTNet (Lai et al., 2017) and TPA-LSTM (Shih et al., 2018), utilize Convolutional Neural Networks (CNN) to identify spatial dependencies among variables and combine them with Long Short-Term Memory (LSTM) networks to process the temporal information. Although they have been widely applied across various application domains, including finance (Lu et al., 2020) and weather data (Wan et al., 2019), these architectures do not explicitly model dependency structures among variables, being strongly limited on the interpretability side. Recently, spatio-temporal graph neural networks (STGNNs) (Shao et al., 2022a;b) have attracted interest, reaching state-of-the-art performances, as exemplified by MTGNN (Wu et al., 2020). STGNNs integrate graph convolutional networks and sequential recurrent models, with the former addressing non-Euclidean dependencies among variables and the latter capturing temporal patterns. The inclusion of advanced convolutional or aggregational layers accounting for sparsity and higher-order interactions has resulted in further improvements of STGNNs (Wang and Aste, 2022; Calandriello et al., 2018; Chakeri et al., 2016; Rong et al., 2020; Hasanzadeh et al., 2020; Zheng et al., 2020; Luo et al., 2021; Kim and Oh, 2021).
In this paper, we use the HNN unit as an advanced aggregational module to extract the dependency structure of variables from the temporal signals generated by LSTMs.

## 3 A novel representation for higher order networks and its use for HNN construction

The standard representation of undirected graphs explicitly accounts for the vertices and their connections through edges, but does not explicitly account for other, higher-order structures such as triangles, tetrahedra, and, in general, \(d\)-dimensional simplexes. Indeed, usually, an undirected graph is represented as a pair of sets, \(\mathcal{G}=(V,E)\): the vertex set \(V=(v_{1},...,v_{p})\) and the edge set \(E\), which is made of pairs of vertices \((v_{i},v_{j})\). The associated graphical representation is a network where vertices, represented as points, are connected through edges, represented as segments. This encoding of the structure accounts only for the edge skeleton of the network. However, in many real-world scenarios, higher-order sub-structures are crucial for the functional properties of the network, and it is therefore convenient - and sometimes essential - to use a representation that accounts for them explicitly. A simple higher-order representation can be obtained by adding triplets (triangles), quadruplets (tetrahedra), etc. to the sets in \(\mathcal{G}\). However, the associated higher-order network is hard to handle both visually and computationally. In this paper, we propose an alternative approach, which consists of a layered representation that explicitly takes into account the higher-order sub-structures and their interconnections. Such a representation is very simple, highly intuitive, of practical applicability as a computational architecture, and, to the best of our knowledge, it has never been proposed before. The proposed methodology is entirely based on a special class of networks: chordal graphs. These networks are constituted only of cliques organized in a higher-order tree-like structure (also referred to as a 'clique tree'). This class of networks is very broad and it has many useful applications, in particular for probabilistic modeling (Aste, 2022). A visual example of a higher-order chordal network (a clique tree), with 7 vertices, 11 edges, 6 triangles, and 1 tetrahedron, is provided in Figure 1. In the figure, the maximal cliques (largest fully-connected subgraphs) are highlighted and reported, in the right panel, as clique-tree nodes. Such nodes are connected to each other with links that are sub-cliques called separators. Separators have the property that, if removed from the network, they disconnect it into a number of components equal to the multiplicity of the separator plus one. In higher-order networks, cliques are the edge skeletons of simplexes. A 2-clique is a 1-dimensional simplex (an edge); a 3-clique is a 2-dimensional simplex (a triangle); and so on, with \((d+1)\)-cliques being the skeletons of \(d\)-dimensional simplexes. To represent the complexity of a higher-order network, we propose to adopt a layered structure (i.e. the Hasse diagram) where nodes in layer \(d\) represent \(d\)-dimensional simplexes. The structure starts with the vertices in layer 0; then pairs of vertices connect to edges represented in layer 1; edges connect to triangles in layer 2; triangles connect into tetrahedra in layer 3; and so on. This is illustrated in Figure 2.
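The layered (Hasse-diagram) construction can be read off directly from the clique structure of a chordal graph. Below is a minimal sketch using networkx; this is our own illustration, not the authors' implementation, and all names are ours.

```python
import itertools
import networkx as nx

def homological_layers(G, max_dim=3):
    """Group the cliques of a chordal graph G by simplex dimension d,
    and link each (d+1)-simplex to its d-dimensional faces."""
    cliques = list(nx.enumerate_all_cliques(G))  # all cliques, by size
    layers = {d: [tuple(sorted(c)) for c in cliques if len(c) == d + 1]
              for d in range(max_dim + 1)}
    links = {}
    for d in range(max_dim):
        faces_of = lambda s: itertools.combinations(s, d + 1)
        links[d] = [(tuple(sorted(f)), s)
                    for s in layers[d + 1] for f in faces_of(s)]
    return layers, links

# Example: a tetrahedron on vertices 0-3 (a 3-dimensional simplex).
G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
layers, links = homological_layers(G)
# layers[2] holds the four triangles; links[2] connects each to (0, 1, 2, 3).
```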
Such a representation has a one-to-one correspondence with the original network but shows explicitly the simplexes and sub-simplexes and their interconnections in the structure. All information about the network at all dimensions is explicitly encoded in this representation, including elements such as maximal cliques, separators, and their multiplicity (see caption of Figure 2). It is worth noting the resemblance of this layered structure to the layered architecture of deep neural networks. Indeed, we leverage this novel higher-order network representation as the neural network architecture of the HNN unit.

Figure 1: **(a)** Visual example of a higher order network made of 7 vertices, 11 edges, 6 triangles, and 1 tetrahedron. **(b)** This higher-order network is a clique tree made of four cliques (maximal cliques highlighted in the circles) connected through three separators (the thick red edges). One can observe that the separator constituted by the vertex ‘4’ has multiplicity 1, while the separator constituted of the edge ‘4-6’ has multiplicity 2 and indeed it appears twice.

Figure 2: Higher order homological representation of the chordal graph in Figure 1 (reproduced in **(a)**). **(b)** Nodes in each layer, \(L_{d}\), represent the \(d\)-dimensional simplexes in the structure. The links between nodes in layers \(d\) and \(d+1\) are the connections between \(d\) to \(d+1\) simplexes in the network. The degree on the left of nodes in \(L_{d}\) is always equal to \(d\). The degree on the right of nodes in \(L_{d}\) can instead vary. The \(d\)-dimensional simplexes with no connections towards \(d+1\) are the maximal cliques in the network (i.e. the nodes in the clique tree in Figure 1(b)).

Figure 3: The Homological Neural Network (HNN) unit is constructed by using as input layer 0 of the homological representation of the dependency structure (see Figure 2(b)) and then feeding forward through the homological layers. The output is produced by a readout unit that connects all neurons in the layers. The HNN is essentially a sparse MLP unit with residual connections.

In our experiments, the HNN is implemented from the TMFG generated from correlations. The TMFG is computationally efficient, and can thus be used to dynamically re-configure the HNN according to changeable system conditions (Wang and Aste, 2022). The HNN architecture is illustrated in Figure 3. Essentially, it is made by the layered representation of Figure 2 with the addition of residual connections linking each neuron in each simplex layer to a final read-out layer. Such an HNN is a sparse MLP-like neural network with extra residual connections, and it can be employed as a modular unit. It can directly replace fully connected MLP layers in several neural network architectures. In this paper, the HNN unit is implemented using the standard PyTorch deep learning framework, while the sparse connection between layers is obtained through the “sparselinear”1 PyTorch library (a minimal sketch of such a topologically constrained layer is given below). Footnote 1: [https://github.com/hyeon95y/SparseLinear](https://github.com/hyeon95y/SparseLinear)

## 4 Design of neural network architectures with HNN units for tabular data and time series studies

We investigate the performances of HNN units in two traditionally challenging application domains for deep learning: tabular data and time series regression problems. To process tabular data, the HNN unit can be directly fed with the data, and it can be constructed from correlations by using the TMFG. In this case, the HNN unit acts as a sparsified MLP.
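For illustration, a layer with fixed sparse connectivity can be sketched in PyTorch as a dense weight matrix multiplied by a binary mask. This mimics only the topology (the actual "sparselinear" layer also stores the weights sparsely), and the class name is ours:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose connectivity is restricted by a fixed binary mask:
    mask[i, j] = 1 iff input neuron j feeds output neuron i."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", mask.float())  # not trained

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)
```

In an HNN, the mask between layers \(L_{d}\) and \(L_{d+1}\) would be the incidence matrix of the Hasse diagram of Figure 2, with additional residual connections from every layer to the final read-out unit.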
The tabular architecture is schematically shown in Figure 4. In spatio-temporal neural networks, instead, the temporal layers are responsible for handling the temporal patterns of individual series, whereas the spatial component learns their dependency structures. Consequently, the temporal part is usually modeled through recurrent neural networks (e.g. RNNs, GRUs, LSTMs), while the spatial component employs convolutional layers (e.g. CNNs) or aggregation functions (e.g. MLPs, GNNs). Figure 5 presents the spatio-temporal neural network architecture employed in our multivariate time series experiments. The architecture consists of an LSTM for the temporal encoding of each time series and a graph generation unit that takes into account the correlations between different time series. This unit models time series as nodes and pairwise correlations as edges by imposing the topological constraints typical of the TMFG: planarity and chordality. The HNN is built on the resulting sparse TMFG and aggregates each of the encoded time series from the LSTM, generating the final output.

Figure 4: HNN architecture for tabular data. The tabular data is processed by a Graph Generation Unit to construct a prior sparse graph to represent spatial interdependencies between the feature columns. The prior graph guides the design of the HNN unit which then processes and transforms the feature columns into the final output.

Figure 5: LSTM-HNN architecture for time-series data. The multivariate time-series is processed by a Graph Generation Unit to construct a prior sparse graph to represent spatial interdependencies, and each of the multivariate time series is processed separately by LSTM in the Temporal Convolution Module to harness the temporal information. The prior graph guides the design of the HNN unit which then aggregates the single temporal representations from LSTMs into the final output.

## 5 Results

### Tabular Data

We test the performances of the HNN architecture on the Penn Machine Learning Benchmark (PMLB) (Romano et al., 2021) regression datasets. We select datasets with more than 10,000 samples, and we split each of them into a 70% training and a 30% testing set. The R2 scores for the HNN architecture are reported in Table 1 for groups of PMLB regression datasets with different numbers of variables. We first compare HNN performances with the ones achieved by a Multi-Layer Perceptron (MLP) with the same depth (i.e. 4, as imposed by the TMFG in the HNN's building process). We also test a sparse MLP (MLP-HNN) with the same sparse structure as the HNN but without the residual connections from each layer to the final read-out layer, and a standard MLP with residual connections to the final read-out layer, MLP-res. All these architectures are optimized using gradient descent for the parameters and grid search for the hyper-parameters.
We therefore compare the HNN results with baseline models including Linear Regression (LM), Random Forest (RF), Light Gradient Boosting Machine (LGBM), Extreme Gradient Boosting Machine (XGB) (Ke et al., 2017; Chen and Guestrin, 2016). The experiments are performed on the same datasets using the optimization and tuning pipeline described above. Table 2 and Figure 6 report the comparison between HNN results and the machine learning methods on the PMLB regression datasets. We underline that HNN outperforms traditional machine learning methods and nearly matches the state-of-the-art. Furthermore, the relative performance of HNN improves with the number of variables, notably with HNN obtaining equivalent and marginally better performance even than XGB for the datasets with large number of features (see Figure 6(d)). ### Multivariate Time-series Data The HNN module can be used as a portable component along with different types of neural networks to manage various input data structures and downstream tasks. In this Section, we apply HNN to process dependency structures in time series modelling after temporal dependencies are handled through the LSTM architecture. We use two different datasets which have been extensively investigated in the multivariate time-series literature (Wu et al., 2020): the solar-energy dataset from the National Renewable Energy Laboratory, which contains the solar-energy power output collected from 137 PV plants in Alabama State in 2007; and a financial dataset containing the daily exchange-rates rates of eight foreign countries including Australia, British, Canada, Switzerland, China, Japan, New Zealand, and Singapore in the period from 1990 to 2016 (see Table 5 in Appendix for further details). Analogously with the tabular data, we first compare the outcomes of LSTM-HNN with those obtained with adapted MLP units. Specifically, LSTM units plus an MLP (LSTM-MLP); LSTM units plus an MLP with added residual connections to the final read-out layer (LSTM-MLP-res); and LSTM units plus a sparse MLP of the same layout as HNN without residual connections (LSTM-MLP-HNN). We then compare the LSTM-HNN results with traditional and state-of-the-art spatio-temporal models for multivariate time-series problems: auto-regressive model (AR) (Zivot and Wang, 2003); a hybrid model that exploits both the power of MLP and auto-regressive modelling (VARMLP) (Zhang, 2003); a Gaussian process (GP) (Roberts et al., 2012); a recurrent neural network with fully connected GRU hidden units (RNN-GRU) (Wu et al., 2020); a LSTM recurrent neural network combined with a convolutional neural network (LSTNet) (Lai et al., 2017); a LSTM recurrent neural network with attention mechanism (TPA-LSTM) (Shih et al., 2018); and a graph neural network with temporal and graph convolution (MTGNN) (Wu et al., 2020). We evaluate performances of the LSTM-HNN and compare them with the ones achieved by benchmark methodologies by forecasting the solar-energy power outputs and the exchange-rates values at different time horizons with performances measured in terms of relative standard error (RSE) and correlation (CORR) (see Table 3). We underline that LSTM-HNN significantly outperforms all MLP-based models. On solar-energy data, LSTM-HNN reduces RSE by 38%, 25%, 17%, and 36% from LSTM-MLP and 8%, 7%, 3%, and 2% from LSTM-MLP-res across four horizons. On exchange-rates data, LSTM-HNN reduces RSE by 23%, 28%, 26%, and 13% from LSTM-MLP and 19%, 20%, 14%, and 10% from LSTM-MLP-res across four horizons. 
We also notice that the residual connections from each layer to the final read-out layer are effective both in the HNN architecture (i.e. LSTM-HNN outperforms LSTM-MLP-HNN) and within the MLP models (i.e. LSTM-MLP-res outperforms LSTM-MLP). In order to illustrate the significance of the gain, a paired t-test of LSTM-HNN against LSTM-MLP-res has been performed, revealing that all differences are significant at 1% or better, with the only exception being the correlation at horizon 3 in the solar-energy output data.

Figure 6: The R2 score from different models on PMLB (Penn Machine Learning Benchmarks) regression datasets. **(a)** All datasets (46 datasets). **(b)** Datasets with number of variables \(\in[0,20)\) (32 datasets). **(c)** Datasets with number of variables \(\in[20,40)\) (8 datasets). **(d)** Datasets with number of variables \(\in[40,\infty)\) (6 datasets).

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{\# variable \(\in[0,20)\)} & \multicolumn{4}{c|}{\# variable \(\in[20,40)\)} & \multicolumn{4}{c}{\# variable \(>40\)} \\ \hline & mean & 10th & 50th & 90th & mean & 10th & 50th & 90th & mean & 10th & 50th & 90th \\ \hline \hline HNN & \(\mathbf{0.70^{***}}\) & 0.45 & 0.78 & 0.93 & \(\mathbf{0.75^{***}}\) & 0.55 & 0.78 & 0.91 & \(\mathbf{0.89^{**}}\) & 0.78 & 0.92 & 0.96 \\ \hline \hline MLP-HNN & -9.64 & -8.97 & 0.01 & 0.82 & 0.21 & -0.01 & 0.03 & 0.56 & 0.55 & 0.14 & 0.54 & 0.96 \\ \hline MLP-res & -5.14 & 0.02 & 0.79 & 0.94 & 0.40 & 0.01 & 0.27 & 0.84 & 0.32 & 0.01 & 0.19 & 0.75 \\ \hline MLP & -7.18 & -0.63 & 0.19 & 0.87 & 0.09 & -0.01 & -0.00 & 0.22 & 0.12 & -0.14 & -0.00 & 0.50 \\ \hline \hline \end{tabular} \end{table} Table 1: R2 score from different models on the PMLB regression datasets, with different numbers of variables. The best-performing average result is highlighted in bold, and \({}^{*}\) denotes \(1\%\) significance, \({}^{**}\) \(0.1\%\) and \({}^{***}\) \(0.001\%\), respectively, from a paired T-test of the second-best-performing model result against the HNN result. We also report the 10%, 50% and 90% quantiles.

\begin{table} \begin{tabular}{l c c c c|c c c c|c c c} \hline \hline & \multicolumn{4}{c|}{solar-energy} & \multicolumn{4}{c}{exchange-rates} \\ \hline & & \multicolumn{4}{c|}{Horizon (days)} & \multicolumn{4}{c}{Horizon (days)} \\ Model & Metrics & 3 & 6 & 12 & 24 & 3 & 6 & 12 & 24 \\ \hline \hline LSTM-HNN & RSE & \(\mathbf{0.190^{*}}\) & \(\mathbf{0.270^{*}}\) & \(\mathbf{0.354^{*}}\) & \(\mathbf{0.446^{*}}\) & \(\mathbf{0.022^{*}}\) & \(\mathbf{0.027^{**}}\) & \(\mathbf{0.040^{*}}\) & \(\mathbf{0.049^{*}}\) \\ & CORR & \(\mathbf{0.981}\) & \(\mathbf{0.964^{*}}\) & \(\mathbf{0.942^{*}}\) & \(\mathbf{0.902^{**}}\) & \(\mathbf{0.976^{***}}\) & \(\mathbf{0.968^{**}}\) & \(\mathbf{0.956^{*}}\) & \(\mathbf{0.938^{*}}\) \\ \hline \hline LSTM-MLP-HNN & RSE & 0.207 & 0.292 & 0.365 & 0.454 & 0.028 & 0.034 & 0.046 & 0.054 \\ & CORR & 0.980 & 0.959 & 0.936 & 0.893 & 0.965 & 0.957 & 0.945 & 0.928 \\ \hline LSTM-MLP-res & RSE & 0.245 & 0.340 & 0.409 & 0.501 & 0.031 & 0.035 & 0.052 & 0.059 \\ & CORR & 0.972 & 0.944 & 0.905 & 0.898 & 0.850 & 0.829 & 0.835 & 0.828 \\ \hline LSTM-MLP & RSE & 0.307 & 0.361 & 0.425 & 0.697 & 0.029 & 0.037 & 0.054 & 0.056 \\ & CORR & 0.956 & 0.937 & 0.898 & 0.723 & 0.845 & 0.838 & 0.834 & 0.824 \\ \hline \hline \end{tabular} \end{table} Table 3: Relative Standard Error (RSE) and CORR (correlation). The best-performing results in a given metric and horizon are highlighted in bold.
In addition, a paired T-test has been performed, and the p-values for the LSTM-HNN against the second-best-performing model (i.e. the LSTM-MLP-res) in the given metrics and horizon are highlighted next to the best-performing results, where \({}^{*}\) denotes \(1\%\) significance, \({}^{**}\) \(0.1\%\) and \({}^{***}\) \(0.001\%\), respectively.

The comparison between the results for LSTM-HNN and the other benchmark models is reported in Table 4. Results reveal that LSTM-HNN consistently outperforms all three non-RNN-based methods (AR, VARMLP and GP) on both datasets. It also outperforms the LSTNet-skip results. LSTM-HNN outperforms RNN-GRU for all datasets and horizons except for the correlation in the exchange rates at horizon 6, where it returns an equivalent result according to the paired t-test conducted between LSTM-HNN and the best-performing model. LSTM-HNN is instead slightly outperformed by MTGNN in most results for solar-power and by TPA-LSTM in several results for exchange-rates. It must, however, be noticed that these are massive deep-learning models with a much larger number of parameters (respectively 1.5 and 2.5 times larger than LSTM-HNN for the solar-energy dataset, and 10 and 26 times larger for the exchange-rates dataset; see Table 6).

## 6 Conclusion

In this paper we introduce Homological Neural Networks (HNNs), a novel deep-learning architecture based on a higher-order network representation of multivariate data dependency structures. This architecture can be seen as a sparse MLP with extra residual connections, and it can be applied in place of any fully-connected MLP unit in composite neural network models. We test the effectiveness of HNNs on tabular and time-series heterogeneous datasets. Results reveal that the HNN, used either as a standalone model or as a modular unit within larger models, produces better results than MLP models with the same number of neurons and layers. We compare the performance of the HNN with a fully-connected MLP, an MLP sparsified with the HNN layered structure, and a fully-connected MLP with additional residual connections and a read-out unit. We design an experimental pipeline that verifies that the sparse higher-order homological layered structure on which the HNN is built is the main element that eases the computational process. Indeed, we verify that the sparsified MLP with the HNN structure (MLP-HNN) outperforms all other MLP models. We also verify that the residual links between layers and the readout unit consistently improve HNN performances. Noticeably, although residual connections also improve fully-connected MLP performances, the results are still inferior to the ones achieved by the sparse MLP-HNN.
We demonstrate that HNNs' performances are in line with those of the state-of-the-art best-performing computational models; however, it must be considered that HNNs have a much smaller number of parameters, and their processing architecture is easier to interpret.

\begin{table} \begin{tabular}{c c|c c c c|c c c c} \hline \hline & & \multicolumn{4}{c|}{solar-energy} & \multicolumn{4}{c}{exchange-rates} \\ \hline & & \multicolumn{4}{c|}{Horizon (days)} & \multicolumn{4}{c}{Horizon (days)} \\ Model & Metrics & 3 & 6 & 12 & 24 & 3 & 6 & 12 & 24 \\ \hline \hline LSTM-HNN & RSE & \(0.190\) & \(0.270\) & \(0.354\) & \(0.446\) & \(0.022\) & \(0.027\) & \(0.040\) & \(0.049\) \\ & CORR & \(0.981\) & \(0.964\) & \(0.942\) & \(0.902\) & \(0.976\) & \(0.968\) & \(0.956\) & **0.938** \\ \hline \hline MTGNN & RSE & **0.177\({}^{*}\)** & **0.234\({}^{**}\)** & **0.310\({}^{*}\)** & **0.427\({}^{*}\)** & 0.019 & 0.025 & 0.034 & 0.045 \\ & CORR & **0.985** & **0.972\({}^{*}\)** & **0.950\({}^{*}\)** & 0.903 & 0.978 & 0.970 & 0.955 & 0.937 \\ \hline TPA-LSTM & RSE & 0.180 & 0.234 & 0.323 & 0.438 & **0.017\({}^{*}\)** & **0.024** & **0.034** & **0.044** \\ & CORR & 0.985 & 0.974 & 0.948 & **0.908\({}^{*}\)** & **0.979\({}^{*}\)** & 0.970 & **0.956** & 0.938 \\ \hline LSTNet-skip & RSE & 0.184 & 0.255 & 0.325 & 0.464 & 0.022 & 0.028 & 0.035 & 0.044 \\ & CORR & 0.984 & 0.969 & 0.946 & 0.887 & 0.973 & 0.965 & 0.951 & 0.935 \\ \hline RNN-GRU & RSE & 0.193 & 0.262 & 0.416 & 0.485 & 0.019 & 0.026 & 0.040 & 0.062 \\ & CORR & 0.982 & 0.967 & 0.915 & 0.882 & 0.978 & **0.971** & 0.953 & 0.922 \\ \hline GP & RSE & 0.225 & 0.328 & 0.520 & 0.797 & 0.023 & 0.027 & 0.039 & 0.058 \\ & CORR & 0.975 & 0.944 & 0.851 & 0.597 & 0.871 & 0.819 & 0.848 & 0.827 \\ \hline VARMLP & RSE & 0.192 & 0.267 & 0.424 & 0.684 & 0.026 & 0.039 & 0.040 & 0.057 \\ & CORR & 0.982 & 0.965 & 0.905 & 0.714 & 0.860 & 0.872 & 0.828 & 0.767 \\ \hline AR & RSE & 0.243 & 0.379 & 0.591 & 0.869 & 0.022 & 0.027 & 0.035 & 0.044 \\ & CORR & 0.971 & 0.926 & 0.810 & 0.531 & 0.973 & 0.965 & 0.952 & 0.935 \\ \hline \hline \end{tabular} \end{table} Table 4: Relative Standard Error and correlation. The best-performing results in a given metric and horizon are highlighted in bold. In addition, a paired T-test has been performed, and the p-values for the best-performing result against LSTM-HNN in the given metrics and horizon are highlighted next to the best-performing results, where \({}^{*}\) denotes \(1\%\) significance, \({}^{**}\) for \(0.1\%\) and \({}^{***}\) for \(0.001\%\) respectively. The absence of \({}^{*}\) indicates statistical equivalence between the best-performing and LSTM-HNN models. When LSTM-HNN is the best-performing result, then the t-test is conversely performed against the second best-performing result.

In this paper, we build HNNs from TMFG networks computed on pure correlations. TMFGs are very convenient chordal network representations that are computationally inexpensive and provide opportunities for dynamically self-adjusting neural network structures. Future research work on HNNs will focus on developing an end-to-end dynamic model that addresses the temporal evolution of variable interdependencies. The TMFG is only one instance of a large class of chordal higher-order information filtering networks (Massara & Aste, 2019) which can be used as priors to construct HNN units. The exploration of this larger class of possible representations is a natural expansion of the present HNN configuration and will be pursued in future studies.
## Acknowledgements All the authors acknowledge the members of the University College London Financial Computing and Analytics Group for the fruitful discussions on foundational topics related to this work. All the authors acknowledge the ICML TAG-ML 2023 workshop organising committee and the reviewers for the useful comments that improved the quality of the paper. The author, T.A., acknowledges the financial support from ESRC (ES/K002309/1), EPSRC (EP/P031730/1) and EC (H2020-ICT-2018-2 825215).
2301.07794
HCE: Improving Performance and Efficiency with Heterogeneously Compressed Neural Network Ensemble
Ensemble learning has gained attention in recent deep learning research as a way to further boost the accuracy and generalizability of deep neural network (DNN) models. Recent ensemble training methods explore different training algorithms or settings on multiple sub-models with the same model architecture, which leads to a significant burden on the memory and computation cost of the ensemble model. Meanwhile, the heuristically induced diversity may not lead to a significant performance gain. We propose a new perspective on exploring the intrinsic diversity within a model architecture to build efficient DNN ensembles. We make an intriguing observation that pruning and quantization, while both leading to efficient model architectures at the cost of a small accuracy drop, lead to distinct behavior in the decision boundary. To this end, we propose the Heterogeneously Compressed Ensemble (HCE), where we build an efficient ensemble with the pruned and quantized variants from a pretrained DNN model. A diversity-aware training objective is proposed to further boost the performance of the HCE ensemble. Experiment results show that HCE achieves significant improvement in the efficiency-accuracy tradeoff compared to both traditional DNN ensemble training methods and previous model compression methods.
Jingchi Zhang, Huanrui Yang, Hai Li
2023-01-18T21:47:05Z
http://arxiv.org/abs/2301.07794v1
# HCE: Improving Performance and Efficiency with Heterogeneously Compressed Neural Network Ensemble ###### Abstract Ensemble learning has gained attention in recent deep learning research as a way to further boost the accuracy and generalizability of deep neural network (DNN) models. Recent ensemble training methods explore different training algorithms or settings on multiple sub-models with the same model architecture, which leads to a significant burden on the memory and computation cost of the ensemble model. Meanwhile, the heuristically induced diversity may not lead to a significant performance gain. We propose a new perspective on exploring the intrinsic diversity within a model architecture to build efficient DNN ensembles. We make an intriguing observation that pruning and quantization, while both leading to efficient model architectures at the cost of a small accuracy drop, lead to distinct behavior in the decision boundary. To this end, we propose the Heterogeneously Compressed Ensemble (HCE), where we build an efficient ensemble with the pruned and quantized variants from a pretrained DNN model. A diversity-aware training objective is proposed to further boost the performance of the HCE ensemble. Experiment results show that HCE achieves significant improvement in the efficiency-accuracy tradeoff compared to both traditional DNN ensemble training methods and previous model compression methods. Ensemble, model compression, efficient neural network

## I Introduction

Ensemble learning has been widely used to improve the generalization performance of machine learning algorithms by combining multiple diverse weak classifiers [1, 2, 3]. An ensemble achieves better accuracy when its sub-models are diverse (i.e. make different errors). Previous deep ensemble methods induce diversity by training multiple DNN models with the same model architecture but different weight initializations and training data [2, 4, 5, 6]. However, as the performance and size of DNN models increase dramatically, heuristically ensembling repeated model architectures brings limited performance improvement but heavy computational redundancy. This raises an intriguing question: _Can we build an efficient and accurate DNN ensemble while exploring the intrinsic diversity within a pre-trained model?_ Previous explorations in model compression unveil diverse methods to induce efficiency, notably pruning and quantization. Quantization aims to convert floating-point parameters into low-precision fixed-point representations [7, 8], whereas pruning aims to remove from the model architecture redundant parameters or structures that have little impact on the accuracy [9, 10]. Both methods can lead to significant reductions in model size and computation cost with a small accuracy drop. In this paper, we make an inspiring discovery that, starting from the same pretrained model, quantization and pruning lead to distinct outcomes in the decision region: quantization largely retains (or even increases) the margin of the decision region, but leads to an unsmooth decision boundary due to the discrete representation; pruning, on the other hand, retains the smoothness of the boundary, but has a smaller margin and worse generalizability due to the removal of learnt features. The intrinsic diversity of pruning and quantization can therefore lead to a diverse ensemble with an efficient architecture. Inspired by the analysis above, we propose the Heterogeneously Compressed Ensemble (HCE) to improve both the performance and efficiency of DNN ensembles.
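As a concrete illustration of the two compression routes, the sketch below derives a filter-pruned and a uniformly quantized variant from the same pretrained PyTorch model. This is a minimal sketch of the general idea, with our own function names and hyper-parameters; it is not the exact recipe proposed in this paper, which additionally applies the diversity-aware training objective described later.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def make_variants(model, sparsity=0.5, n_bits=4):
    """Return an L1-norm filter-pruned copy and a uniformly quantized copy
    of the same pretrained model (illustrative sketch only)."""
    pruned = copy.deepcopy(model)
    for m in pruned.modules():
        if isinstance(m, nn.Conv2d):
            # Structured pruning: remove whole output filters by L1 norm.
            prune.ln_structured(m, name="weight", amount=sparsity, n=1, dim=0)

    quantized = copy.deepcopy(model)
    for m in quantized.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            w = m.weight.data
            # Symmetric uniform quantization of the weights to n_bits.
            qmax = 2 ** (n_bits - 1) - 1
            scale = w.abs().max().clamp(min=1e-12) / qmax
            m.weight.data = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    return pruned, quantized
```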
We construct HCE as an ensemble of pruned and quantized variants of a pretrained DNN model, which enables us to utilize the intrinsic diversity brought by the heterogeneous model compression methods. Furthermore, we propose a novel diversity-aware model compression method to enable a further diversity boost between the compressed variants, thereby improving ensemble performance. Experiment results show that by ensembling specially trained sub-models, higher accuracy can be achieved while maintaining low model size and computational cost. Figure 1 shows the result of HCE compared with other SOTA structured pruning methods, including traditional filter pruning [11] (PF), channel pruning [12] (CP), the L1-norm based method (L1), Taylor expansion [10] (TE) and the KL-divergence metric [13] (KL). In this case, HCE uses uniform quantization and filter pruning [11]. The model size of HCE is the sum of both compressed sub-models. As shown, HCE achieves better accuracy than the original baseline model and the other SOTA structured pruning methods at the same model size. We also include a conventional deep ensemble method that trains two full ResNet56 models with different initializations. The results show that HCE achieves comparable accuracy at half the size. To the best of our knowledge, HCE is the first work to build a deep ensemble with heterogeneous model compression techniques. We make the following contributions: * We propose HCE, a novel efficient DNN ensemble training method that utilizes and enhances the intrinsic diversity between heterogeneously compressed sub-models. * We unveil the intrinsic difference between pruned and quantized models via the analysis of decision regions. * We propose a diversity-aware knowledge distillation objective for compressed sub-models that enables a higher pruning rate and more diversity in the sub-models. * We perform extensive experiments on CIFAR-10 and ImageNet, where we show HCE achieves a better efficiency-accuracy tradeoff than SOTA model compression methods and deep ensemble methods. The rest of the paper is organized as follows. We first provide background and related work on ensembles and model compression in Sec. II. Then we introduce our decision region visualization and analysis method and the training procedure for HCE in Sec. III. In Sec. IV we discuss the experimental results and show the effectiveness of HCE. ## II Related work ### _Ensemble_ Ensemble learning has been widely used to improve the performance of machine learning models. By combining the good performance of DNN models with the improved generalization ability of ensemble methods, deep ensemble learning models generally show better performance than both traditional ensemble models and single DNN models. Ensemble models can be broadly categorized into homogeneous and heterogeneous ensembles, based on whether each sub-model within the ensemble takes the same model architecture or not. In homogeneous ensembles, diversity is induced in the training process since all the model architectures are identical. For neural networks, training multiple sub-models with different initializations has proved to be an effective way to induce diversity [4, 5]. Also, some works use specially designed diversity training objectives to increase robustness against adversarial attacks [14]. However, current DNN models have large model sizes and high training and inference costs, so training multiple independent models is an inefficient option [15].
Some works have been done to solve this problem [16], but it is still challenging to train multiple DNN sub-models for an ensemble, since too many parameters and hyper-parameters need to be tuned and optimized. On the other hand, heterogeneous ensemble methods show the advantage of lower computational cost and higher diversity. Some studies used DNN models combined with traditional models, such as support vector machines (SVM) or random forests (RF), to lower computation and increase diversity [17]. By using different data subsets, model architectures and fusion strategies, heterogeneous ensembles introduce more diversity and hence show better generalization performance. Our proposed method tries to take advantage of both homogeneous and heterogeneous ensembles. We utilize multiple DNN models for their good performance, as in homogeneous ensembles, while introducing intrinsic model architecture diversity in the sub-models via heterogeneous compression. Also, we keep the sub-models small in size while introducing diverse model architectures for better ensemble generalization ability. ### _Model Compression_ Model compression for DNNs has been widely used to accelerate models on resource-limited devices. Two popular methods are pruning and quantization. Neural network pruning aims to zero out redundant parameters to reduce both model size and computation cost. Generally it can be categorized into 1) unstructured pruning and 2) structured pruning. Unstructured pruning can achieve high sparsity ratios with negligible accuracy drop [18, 19, 20, 21, 22, 23]. However, unstructured pruning can hardly bring actual speedup on hardware because of the irregular non-zero data structure. On the other hand, coarse-grained structured pruning, such as filter pruning [9] and channel pruning [11], can benefit real-life hardware inference. Although structured pruning generates hardware-friendly weight structures, it cannot achieve sparsity ratios as high as unstructured pruning. Quantization is another common and effective way to compress deep learning models. It reduces the DNN model size and lowers computation cost by replacing the floating-point weights with low-precision fixed-point data. Common quantization methods include directly applying uniform quantizers [24, 25], quantization-aware fine-tuning [26] and mixed-precision quantization [7, 8, 27]. Quantization can largely decrease a DNN's arithmetic intensity but still causes significant accuracy degradation at ultra-low data precision. Our work explores the intrinsic diversity in the compressed model decision boundary induced by different compression methods. HCE utilizes the diversity between the pruned and quantized models to build an efficient ensemble, and further improves the ensemble performance with a diversity-aware sub-model training objective.

Figure 1: Comparison of HCE with baselines and other model compression methods on ResNet56, CIFAR-10. Model size ratio denotes the size of the pruned model compared to the 32-bit ResNet56. The size of HCE is calculated as the sum of all sub-models in the ensemble.

## III Method In this section, we will first analyze the diversity of heterogeneously compressed DNN models by visualizing the decision region. Then we propose the HCE architecture and training scheme, which is an efficient way to produce highly compressed sub-models for a deep ensemble while guaranteeing high diversity within each individual sub-model. We visualize the improvement in sub-model diversity and ensemble decision regions brought by HCE at the end.
### _Decision Region Visualization_ We first illustrate how heterogeneously compressed DNN models learn different representations in the feature space. Figure 2 shows how the decision region changes in the quantized and pruned models as the quantization precision and the sparsity vary. We first trained a 32-bit ResNet56 model on CIFAR-10 as the baseline. For Figure 2(a), we generate a series of quantized models, 5-bit, 4-bit and 3-bit, progressively, using a post-training uniform quantization method. For Figure 2(b), we generate a series of structurally pruned models with 90%, 70%, 40% and 10% size ratios using filter pruning [11]. No finetuning is performed after the compression. For all the subplots in each figure, we plot the decision boundaries in the vicinity of the same testing data point and with the same scale on a 2-D plane spanned by two random vectors.

Figure 2: Decision region visualizations of the quantized and pruned models of a ResNet56 trained on CIFAR-10. The vertical and horizontal axes are two random Rademacher vectors. The zero point represents a test data point. The same color indicates the same predicted label. The decision region can be inferred from the shape of the pattern with the same color.

Figure 2(a) shows that when the bit precision decreases, the decision region grows larger while the decision boundaries become unsmooth. In detail, the top left figure in 2(a) is the decision region of the pre-trained baseline 32-bit ResNet56 model. When quantizing the model to 5-bit, the decision region enlarges and the surface gets larger and rougher compared to the baseline. This trend continues when the bit precision further decreases to 4-bit and 3-bit. One explanation for this change is that, since we quantize both the weight and activation parameters in the model, the output feature of each layer becomes inaccurate and sensitive to input perturbations. The quantized activations make the output feature more discrete, so the decision region may vary a lot under a small perturbation of the input. In fact, the decision region does not expand after quantization but becomes sensitive to input perturbations. Moreover, with quantized activation layers, the output features fall into discrete spaces. That is why the decision boundary gets unsmooth when the data precision decreases. Unlike the quantized case, the pruned case shows the opposite phenomenon. Figure 2(b) shows the trend of the decision region when the sparsity in the model increases. In this case, we use a structured pruning method to prune the filters inside the baseline model. As shown in Figure 2(b), when the sparsity level increases, i.e., the size ratio decreases, the decision boundaries get smaller. This is because when structured pruning zeros out filters in the model, fewer features can be extracted into the feature space. With less information extracted, the generalization ability of the DNN model decreases, and so does the margin of the decision boundary. From the above, we can infer that although compressed from the same baseline model, quantized and pruned models show great differences in their feature representation in the hyperspace. From a generalization perspective, quantization and pruning both affect the sensitivity of the DNN model. The difference is that quantization makes the decision boundary unsmooth and sensitive to perturbation, while pruning decreases the generalization capability but keeps the decision boundary smooth. This difference introduces different properties in quantized and pruned models. By applying different model compression methods, we can foster diversity in deep ensembles.
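For readers who want to reproduce this kind of plot, a minimal sketch follows, assuming a trained PyTorch classifier `model` and a single test input `x0`; the grid resolution, span, and plotting style are illustrative choices, not the exact settings used for Figure 2.

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

def plot_decision_region(model, x0, span=1.0, steps=41, device="cpu"):
    """Predicted labels on a 2-D plane around test point x0, spanned by
    two random Rademacher (+/-1) directions, as in Figure 2."""
    model.eval().to(device)
    d1 = (torch.randint(0, 2, x0.shape) * 2 - 1).float()  # random +/-1 direction
    d2 = (torch.randint(0, 2, x0.shape) * 2 - 1).float()
    alphas = np.linspace(-span, span, steps)
    labels = np.zeros((steps, steps), dtype=int)
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(alphas):
                x = (x0 + a * d1 + b * d2).unsqueeze(0).to(device)
                labels[i, j] = model(x).argmax(dim=1).item()
    plt.pcolormesh(alphas, alphas, labels, shading="auto")  # same color = same label
    plt.xlabel("Rademacher direction 1")
    plt.ylabel("Rademacher direction 2")
    plt.show()
```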
### _Training Method_ Based on the previous discussion, we propose our novel HCE training scheme, which ensembles a quantized and a pruned model to introduce more diversity while maintaining a low computation cost. Our HCE training method is composed of four steps. As shown in Figure 3, the HCE training algorithm is designed as follows: 1) Train the full-precision, full-size model O as the baseline. 2) Quantize the baseline model to get the low-precision model Q. 3) Iteratively prune a sparse model based on O and Q while training with our proposed ensemble-aware training objective. 4) For inference, simply average the outputs of the quantized model and the pruned model.

Figure 3: Training pipeline of HCE.

Here we propose to fix the quantized model while training the pruned model for accuracy improvement, because the discrete weight and activation representations in the quantized model may hinder the stability of the training process. Specifically, the pruned model is trained with the objective function in Equation (1). \[\begin{split} Loss(S)=\alpha*L_{CE}(output(S))+\\ (1-\alpha)*L_{KL}(output(S),output(O)-output(Q))\end{split} \tag{1}\] The objective function is inspired by knowledge distillation (KD) training. The first part is the cross entropy of the sparse model S. The latter part of the objective function is the KL divergence between the output probabilities of the base model O and the quantized model Q. This part keeps the pruned model focused on the information lost during quantization. \(\alpha\) is a coefficient that controls the level of learning from the difference versus directly from the training data. In detail, given an input \(x\), the base model produces a score vector \(s^{O}(x)=[s^{O}_{1}(x),s^{O}_{2}(x),...,s^{O}_{K}(x)]\), which is converted to a probability by softmax: \(p^{O}_{k}(x)=\frac{e^{s^{O}_{k}(x)}}{\sum_{j}e^{s^{O}_{j}(x)}}\). We also "soften" the probabilities using temperature scaling [28]: \[p^{O}_{k}(x)=\frac{e^{s^{O}_{k}(x)/\tau}}{\sum_{j}e^{s^{O}_{j}(x)/\tau}} \tag{2}\] where \(\tau>1\) is the temperature hyperparameter. In the same way, the quantized model generates a corresponding probability \(p^{Q}_{k}(x)=\frac{e^{s^{Q}_{k}(x)/\tau}}{\sum_{j}e^{s^{Q}_{j}(x)/\tau}}\), and the sparse model being trained has its own prediction \(p^{S}_{k}(x)\). The probability difference is \(p^{D}_{k}(x)=p^{O}_{k}(x)-p^{Q}_{k}(x)\), and the KL divergence loss is: \[L_{KL}=-\tau^{2}\sum_{k}p^{D}_{k}(x)\log p^{S}_{k}(x) \tag{3}\] Unlike training a student model from a teacher model in knowledge distillation, we force the pruned model to focus more on the prediction gap between the base model O and the quantized model Q. Since quantized models often retain acceptable accuracy, the gap between O and Q is limited. With less information to learn, the pruned model can achieve a higher pruning rate. The goal of our training objective in Equation (1) is to make the output of the ensembled models close to that of the original model. In other words, the pruning method is aware of the diverse ensemble training. The KL divergence pushes the sparse model to produce output logits \(output(S)\) similar to \(output(O)-output(Q)\). In this case, during inference, when ensembling the pruned model \(S\) and quantized model \(Q\), the output \(output(S)+output(Q)\) should be close to the output of the original model \(output(O)\).
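A sketch of this objective in PyTorch follows (Eqns. (1)-(3)); `logits_O` and `logits_Q` come from the frozen base and quantized models, and the names and default values are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def hce_loss(logits_S, logits_O, logits_Q, targets, alpha=0.2, tau=4.0):
    """Diversity-aware objective of Eqn. (1): cross entropy on the sparse
    model S plus a KL-style term (Eqn. (3)) pushing softmax(S) toward the
    gap between the base model O and the quantized model Q."""
    ce = F.cross_entropy(logits_S, targets)
    p_O = F.softmax(logits_O.detach() / tau, dim=1)   # softened base probs, Eqn. (2)
    p_Q = F.softmax(logits_Q.detach() / tau, dim=1)   # softened quantized probs
    p_D = p_O - p_Q                                   # probability gap to recover
    log_p_S = F.log_softmax(logits_S / tau, dim=1)
    kl = -(tau ** 2) * (p_D * log_p_S).sum(dim=1).mean()
    return alpha * ce + (1 - alpha) * kl
```

At inference time, per step 4 above, the ensemble prediction is simply the average of `logits_S` and `logits_Q`.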
Our experiment results prove that performing iterative pruning with the proposed diversity-aware training objective can lead to a very sparse sub-model that still results in high ensemble accuracy. It is worth pointing out that our method places no limitation on the quantization or pruning methods. Our experiments will demonstrate that both structured and unstructured pruning can be effective. ### _HCE Diversity Analysis_ To further analyze the effectiveness of our ensemble method, we look into the wrongly predicted labels of each individual sub-model. Again, we evaluate our method with a ResNet56 model on the CIFAR-10 dataset. We first generated a 3-bit quantized model from the pretrained model and then trained a 60% pruned model with filter pruning [11] under the HCE objective. As shown in Figure 4, among 10,000 test cases, the baseline model has 590 error cases. The quantized sub-model has 1,181 error cases and the HCE pruned sub-model has 654 error cases. However, among all error cases, only 401 overlap. The fraction of overlapped error cases among all error cases is \(|E_{Q}\cap E_{S}|/|E_{Q}\cup E_{S}|=27.96\%\), which shows that although the quantized and pruned models are trained from the same pretrained model, they make very different error predictions, i.e., they are diverse. After ensembling, HCE produces similar accuracy to the baseline. 58.79% of the errors made by individual sub-models are corrected, including 90% of the errors made by the quantized model alone. This indicates the proposed training objective is effective.

Figure 4: Venn diagram of wrongly predicted images of ResNet56 on CIFAR-10. Numbers in the dashed circles indicate the images wrongly classified by an individual sub-model but corrected by the ensemble.

We also compare the decision boundaries of HCE under different situations in Figure 5. Figure 5(a) shows that, as discussed above, the decision boundary of the quantized model is unsmooth while that of the pruned model has a small margin. HCE can restore the behavior of the original model by ensembling the two compressed models. Figure 5(b) shows the case where the quantized model makes a mistake while HCE corrects the prediction. As the quantized model is perturbed into a wrong prediction, the smooth decision region of the pruned model enables the ensemble to achieve the correct output. This shows that HCE can indeed introduce and make use of the diverse model compression methods and produce performance similar to the original full-precision model.

Figure 5: Decision region visualizations showing (a) the quantized model and HCE both making correct predictions and (b) HCE correcting mispredicted labels from the quantized model.

## IV Experiment Result ### _Setup_ We compare HCE with various model compression counterparts. The base ResNet is implemented following [29], and we used both ResNet20 and ResNet56 on CIFAR-10. On ImageNet we tried HCE with ResNet50 and MobileNetV2. We used conventional uniform quantization and filter pruning [11] for the experiments, but HCE can be extended to other model compression methods. We also conduct an ablation study on the hyper-parameter \(\alpha\) mentioned in Section III-B. To make a comparable estimate of the computation cost of the quantized and sparse models, we use a unified notion of FLOPs. Since a quantized model only requires fixed-point operations, we first calculate the bit operations (BOPs) for the quantized model, then convert them to estimated FLOPs by dividing by a scaling factor of 23, which is the number of fraction bits in a Float32 representation [30].
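As a rough sketch of this accounting, the snippet below assumes the common convention of counting `weight_bits * act_bits` bit operations per multiply-accumulate (an assumption; the text does not spell out the BOP counting rule), before applying the division by 23:

```python
def flops_equivalent(mac_count, weight_bits, act_bits, frac_bits=23):
    """FLOPs-equivalent cost of a quantized model: count bit operations
    (assumed here as weight_bits * act_bits per multiply-accumulate),
    then divide by the 23 fraction bits of a Float32 representation."""
    bops = mac_count * weight_bits * act_bits
    return bops / frac_bits

# e.g. a 3-bit-weight / 3-bit-activation model with the MAC count of the
# 126.8 MFLOP ResNet56 baseline:
print(flops_equivalent(126.8e6, 3, 3) / 1e6, "MFLOPs-equivalent")
```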
### _CIFAR-10 Result_ We first evaluate HCE on the CIFAR-10 dataset; the results are shown in Table I. The FLOPs of HCE are measured over both compressed sub-models. The quantized model in HCE is 3-bit with an accuracy of 88.19%. For large models like ResNet56, HCE can remove more than half of the FLOPs and achieve 93.56% accuracy, higher than the other SOTA structured pruning methods. One important observation is that if we directly apply the PF method, the accuracy is 92.07% at 50% FLOPs. In our experiment, our pruning method is also PF, combined with our training objective. By ensembling with a 3-bit model, the accuracy increases by about 1.5% with fewer FLOPs. The pruning method in HCE can be substituted with any other structured pruning method, like DCP or SFP, and the HCE result can be even better. For the smaller model ResNet20, HCE still achieves the best accuracy compared to all other counterparts. With 57% FLOPs, the ensemble accuracy reaches 92.35%. The hyperparameter \(\alpha\) in the training objective is 0.2 for ResNet56 and 0.4 for ResNet20, respectively.

\begin{table} \begin{tabular}{c c c c} \hline \hline Model & Approach & Accuracy & FLOPs(M) \\ \hline ResNet56 & Baseline & 93.80 & 126.8 (100\%) \\ & PF [11] & 92.07 & 63.4 (50.0\%) \\ & CP [12] & 91.80 & 63.4 (50.0\%) \\ & AMC [31] & 91.90 & 63.4 (50.0\%) \\ & DCP [32] & 93.49 & 63.4 (50.0\%) \\ & SFP [33] & 93.35 & 59.6 (47.1\%) \\ & FPGM [34] & 93.49 & 59.6 (47.1\%) \\ & CCP [35] & 93.46 & 67.2 (53.0\%) \\ & **HCE(\(\alpha\)=0.2)** & **93.56** & 60.6 (47.8\%) \\ \hline ResNet20 & Baseline & 92.20 & 41.2 (100\%) \\ & SFP [33] & 90.83 & 23.9 (58.0\%) \\ & FPGM [34] & 91.99 & 19.0 (46.1\%) \\ & **HCE(\(\alpha\)=0.4)** & **92.35** & 23.4 (56.8\%) \\ \hline \hline \end{tabular} \end{table} Table I: HCE results on CIFAR-10

### _ImageNet result_ For the ImageNet dataset, we test HCE on ResNet50 and MobileNetV2; the results are shown in Table II. For ResNet50, HCE achieves the highest accuracy compared to other popular structured pruning methods under similar FLOPs. We also try HCE on MobileNetV2. MobileNetV2 is a relatively small and thin DNN model compared to ResNet50, so compressing MobileNetV2 on ImageNet is not easy; we would like to see how HCE works on such small models. As shown in Table II, HCE also achieves better performance under similar FLOPs. In these ImageNet experiments, \(\alpha\) is 0.3 on ResNet50 and 0.5 on MobileNetV2. We discuss the \(\alpha\) hyperparameter further below.

\begin{table} \begin{tabular}{c c c c} \hline \hline Model & Approach & Accuracy & FLOPs(G) \\ \hline ResNet50 & Baseline & 76.01 & 4.1 (100\%) \\ & DCP [32] & 74.95 & 1.8 (44.5\%) \\ & FPGM [34] & 74.83 & 1.9 (46.5\%) \\ & AutoPruner [36] & 74.76 & 2.0 (48.8\%) \\ & SFP [33] & 74.61 & 1.7 (41.8\%) \\ & **HCE(\(\alpha\)=0.3)** & **75.13** & 1.8 (44.7\%) \\ \hline MobileNetV2 & Baseline & 71.8 & 0.32 (100\%) \\ & AMC [31] & 70.80 & 0.22 (70.0\%) \\ & MetaPruning [37] & 71.2 & 0.23 (72.3\%) \\ & **HCE(\(\alpha\)=0.5)** & **71.35** & 0.23 (71.8\%) \\ \hline \hline \end{tabular} \end{table} Table II: HCE results on ImageNet

### _Ablation study on \(\alpha\)_ To study the effect of the hyperparameter \(\alpha\) in Section III-B, we ran a series of experiments tuning \(\alpha\). In this ablation study we apply HCE to ResNet20 on CIFAR-10. Different from the above experimental setting, we applied L1 unstructured pruning in this case to demonstrate the extensibility of HCE. As shown in Figure 6, in the ResNet20 unstructured pruning case, \(\alpha\)=0.5 achieves higher accuracy than the other \(\alpha\) values in most cases and appears superior to them, while \(\alpha\)=0.1 always produces the lowest accuracy. However, for the structured pruning of ResNet56 in Section IV-B, \(\alpha\)=0.2 is the best choice. The explanation is that in our training objective in Equation (1), \(\alpha\) denotes the portion of HCE learning from the cross entropy of the input data, while (1-\(\alpha\)) represents the learning from the difference between the baseline and the quantized model. For large models, learning too much from the difference will cause heavy overfitting and degrade accuracy. For small models like MobileNetV2 on ImageNet, overfitting does not occur when \(\alpha\) increases. ## V Conclusion This work proposes HCE, a novel scheme to build heterogeneous deep ensembles that improve generalization and efficiency simultaneously. We also introduce a new method to illustrate the intrinsic diversity within compressed models and show that HCE can generate highly compressed but diverse sub-models. Both the CIFAR-10 and ImageNet experiments show the effectiveness of HCE. We hope our work can bring a new perspective on deep ensembles and help deploy ensembles on resource-limited devices.
2307.01725
RRCNN: A novel signal decomposition approach based on recurrent residue convolutional neural network
The decomposition of non-stationary signals is an important and challenging task in the field of signal time-frequency analysis. In the recent two decades, many signal decomposition methods led by the empirical mode decomposition, which was pioneered by Huang et al. in 1998, have been proposed by different research groups. However, they still have some limitations. For example, they are generally prone to boundary and mode mixing effects and are not very robust to noise. Inspired by the successful applications of deep learning in fields like image processing and natural language processing, and given the lack in the literature of works in which deep learning techniques are used directly to decompose non-stationary signals into simple oscillatory components, we use the convolutional neural network, residual structure and nonlinear activation function to compute in an innovative way the local average of the signal, and study a new non-stationary signal decomposition method under the framework of deep learning. We discuss the training process of the proposed model and study the convergence analysis of the learning algorithm. In the experiments, we evaluate the performance of the proposed model from two points of view: the calculation of the local average and the signal decomposition. Furthermore, we study the mode mixing, noise interference, and orthogonality properties of the decomposed components produced by the proposed method. All results show that the proposed model allows for better handling boundary effect, mode mixing effect, robustness, and the orthogonality of the decomposed components than existing methods.
Feng Zhou, Antonio Cicone, Haomin Zhou
2023-07-04T13:53:01Z
http://arxiv.org/abs/2307.01725v1
# RRCNN: A novel signal decomposition approach based on recurrent residue convolutional neural network ###### Abstract The decomposition of non-stationary signals is an important and challenging task in the field of signal time-frequency analysis. In the recent two decades, many signal decomposition methods led by the empirical mode decomposition, which was pioneered by Huang et al. in 1998, have been proposed by different research groups. However, they still have some limitations. For example, they are generally prone to boundary and mode mixing effects and are not very robust to noise. Inspired by the successful applications of deep learning in fields like image processing and natural language processing, and given the lack in the literature of works in which deep learning techniques are used directly to decompose non-stationary signals into simple oscillatory components, we use the convolutional neural network, residual structure and nonlinear activation function to compute in an innovative way the local average of the signal, and study a new non-stationary signal decomposition method under the framework of deep learning. We discuss the training process of the proposed model and study the convergence analysis of the learning algorithm. In the experiments, we evaluate the performance of the proposed model from two points of view: the calculation of the local average and the signal decomposition. Furthermore, we study the mode mixing, noise interference, and orthogonality properties of the decomposed components produced by the proposed method. All results show that the proposed model allows for better handling boundary effect, mode mixing effect, robustness, and the orthogonality of the decomposed components than existing methods. keywords: Empirical mode decomposition; Adaptive signal decomposition; Signal local average; Convolutional neural network; Residual network + Footnote †: journal: Pattern Recognition ## 1 Introduction With the development of technology, many everyday signals that exhibit nonlinearity and non-stationarity, such as human speech, radar systems, and seismic waves, can be accurately captured. It is well known that decomposing and exploring the features of these kinds of signals is quite challenging due to their nonlinear and non-stationary characteristics. In the past two decades, many studies have emerged for processing non-stationary signals. One of the most representative works is the empirical mode decomposition (EMD) algorithm along with the Hilbert spectrum analysis proposed by Huang et al. in 1998 [1]. Because EMD is fully data-driven and can adaptively decompose a signal into several intrinsic mode functions (IMFs), it has already shown its usefulness in a wide range of applications, including semantic recognition [2], alcoholism identification [3], and stock trend prediction [4]. Despite its remarkable success, it still lacks mathematical foundations and is sensitive to noise and sampling. This has sparked many efforts to improve EMD. The improvements share the same feature: a signal is decomposed into several simpler components, and then a time-frequency analysis method is applied to each component separately. These signal decomposition methods can be achieved mainly in two ways: by iteration or by optimization. Methods based on iteration include many techniques, such as moving averages, partial differential equations (PDEs) and filtering.
For instance, Smith presented a new iteration method, based on the local average, to decompose non-stationary signals into a set of functions [5]. Delechelle et al. proposed a new approach that resolves one major problem in EMD, that is, the mean envelope detection of a signal, by virtue of a parabolic PDE [6]. Hadji et al. used differential calculus on envelopes, which allowed them to prove that iterations of the sifting process are well approximated by the resolution of a PDE [7]. Hong et al. introduced a novel sifting method based on the concept of the local integral mean of a signal [8]. And Cicone et al. studied the method based on iterative filtering to compute the local average, which is utilized to replace the mean of the upper and lower envelopes in the sifting procedure of EMD [9; 10]. Tu et al. proposed the iterative nonlinear chirp mode decomposition (INCMD) [11] under the framework of the variational nonlinear chirp mode decomposition. On the other hand, there are methods based on optimization. Peng et al. designed an adaptive local linear operator-based optimization model to decompose a signal into several local narrow band signals [12]. Oberlin et al. proposed an optimization model for computing the mean envelope to replace the original one in EMD [13]. Inspired by compressed sensing theory, Hou et al. studied a new adaptive data analysis method, which can be seen as a nonlinear version of compressed sensing and provides a mathematical foundation for the EMD method [14]. Flandrin et al. proposed a convex optimization procedure to replace the sifting process in EMD, which follows the idea of texture-geometry decomposition with further EMD-specific features such as quasi-orthogonality and extrema-based constraints [15; 16]. Dragomiretskiy et al. put forward the variational mode decomposition (VMD), whose goal is to decompose a signal into a discrete number of modes that have specific sparsity properties while reproducing the input [17]. Rehman et al. generalized the VMD method to multivariate or multichannel data [18]. And Zhou et al. presented a new mathematical framework that finds the local average based on a local variational optimization model [19]. In addition, there are some methods that cannot be classified into the above two categories. For instance, Daubechies et al. proposed the method called synchrosqueezed wavelet transforms, combining wavelet analysis and the reallocation method [20]. Gilles presented the approach called empirical wavelet transform (EWT) to build adaptive wavelets [21], whose main idea is to extract the different modes by designing an appropriate wavelet filter bank. Singh et al. studied the adaptive Fourier decomposition method (FDM) based on Fourier theory, which decomposes any data into a small number of "Fourier intrinsic band functions" [22; 23]. And Wang et al. extended the adaptive FDM to the multi-channel case [24]. From the works described above, we find that whether a method is based on iteration or optimization, calculating the local average of a given signal is critical.
Although there exist many studies on the characterization of the local average, from a practical point of view it is basically impossible to find a method suitable for all signals. Customizing the local average according to the type of signal not only provides a new research perspective, but is also likely to become a trend in the near future in signal processing for non-stationary data. In recent years, thanks to the remarkable results obtained in research fields like image processing [25] and natural language processing [26], the usage and application of deep learning methods have spread widely across an ample variety of research fields. In signal processing, deep learning models have been used, so far, to achieve various goals, such as noise removal [27; 28], forecasting [29; 30; 31], and detection [32; 33]. However, to the best of our knowledge, no method has been proposed so far in the literature that decomposes a given non-stationary signal into simple oscillatory components, like the IMFs, based solely on deep learning techniques. For this reason, in the current work we propose an innovative signal decomposition algorithm, named recurrent residual convolutional neural network (RRCNN), which is based on deep learning models. In the RRCNN method, in fact, we first characterize the local average of a signal under the framework of deep learning, and then use it to handle signal decomposition. Specifically, the 1-Dimensional (1-D) convolutional neural network is primarily designed to adaptively compute the local average of the input signal, which is similar to the moving average method and the filter operation in the iterative filtering method, except that the weights in the 1-D convolutional neural network are not fixed, but are learned adaptively during the training phase according to the input signals. Moreover, both the residual and recurrent structures are employed to amend the computed local average, which is consistent with the role of the inner loop in many existing iteration-based signal decomposition methods. After the local average model is derived, it is cascaded in series to realize the decomposition model of the signal, whose function is equivalent to the outer loop structure of the existing iteration-based signal decomposition methods. Although the proposed method looks similar to those iteration-based decomposition methods that contain a two-loop structure, the use of deep learning techniques gives this method the following peculiarities: 1. Unlike the moving average method and the filter operation in the iterative filtering method, the convolutional filter weights that appear in the proposed RRCNN model are not fixed in advance, but are learnt adaptively in the training phase according to the inputs. 2. Since the proposed RRCNN model is constructed under the framework of deep learning, RRCNN is more flexible and adaptive in finding the local average and achieving the decomposition for a given signal. In particular, a nonlinear activation function can be added after the convolutional operation to increase the expressive ability. The loss function can also be customized according to the properties that the ground truths usually have in the specific application. 3. Several artificial signals are constructed to verify the performance of RRCNN in terms of local average characterization, noise interference, mode mixing, and orthogonality.
Furthermore, we compare the RRCNN model with state-of-the-art methods. In addition, we also use the solutions of the Duffing and Lorenz equations, and the real length-of-day (LOD) data, to evaluate the ability of RRCNN to approximate the existing models. 4. Generally speaking, the RRCNN model takes a certain amount of time in the training phase, which is a commonality of deep learning-based models. However, once the model training is completed, the prediction stage is computationally fast; in particular, it can use parallelization to predict multiple signals at the same time, which is not available in most existing methods. 5. RRCNN has the limitations inherited from supervised models. For example, in the training phase, each input signal needs to have its label known in advance. The rest of the paper is organized as follows. We review the iterative filtering method and provide its algorithm in Section 2.1. The concept of a \(\beta\)-smooth function and its properties are given in Section 2.2; they are used for proving the convergence of the proposed model. In Section 3, the new local average method and the derived signal decomposition method, collectively called RRCNN, are proposed. Moreover, the training process and convergence analysis of RRCNN are given in this section. In Section 4, we study a series of examples to evaluate the performance of RRCNN compared with existing methods. Finally, we give the conclusion in Section 5. ## 2 IF and \(\beta\)-smooth function ### IF The iterative filtering (IF) [9] is a recurrent algorithm that decomposes a nonlinear and non-stationary signal into a number of IMFs. The main idea of IF is the iterative subtraction of local moving averages from the signal, where the local moving averages are calculated through convolutions with low-pass filters. Alg. 1 shows the detailed steps, where the parameter \(l_{n}\), called the filter length, is important in the IF method and is determined by the information contained in the signal itself; \(w_{n}(\cdot)\) represents the low-pass filter function.

```
Data: Given a signal x(t)
Result: IMF
while the number of extrema of x >= 2 do
    n = 1; x_1(t) = x(t);
    while the stopping criterion is not satisfied do
        compute the filter length l_n and filter weight function w_n for x_n;
        x_{n+1}(t) = x_n(t) - \int_{-l_n}^{l_n} x_n(t + y) w_n(y) dy;
        n = n + 1;
    end while
    IMF = IMF ∪ {x_n};
    x(t) = x(t) - x_n(t);
end while
IMF = IMF ∪ {x}.
```
**Algorithm 1** Iterative filtering (IF)
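A compact NumPy sketch of this inner sifting step is given below; unlike the actual IF method, which derives the filter length \(l_n\) from the signal's extrema and uses prescribed low-pass filter shapes, this sketch fixes both, so it is only illustrative.

```python
import numpy as np

def sift_one_imf(x, filt, max_iter=100, tol=1e-6):
    """Inner loop of Alg. 1: iteratively subtract the local moving average,
    here computed as a convolution with a fixed low-pass filter `filt`."""
    for _ in range(max_iter):
        avg = np.convolve(x, filt, mode="same")     # local moving average
        if np.linalg.norm(avg) < tol * np.linalg.norm(x):
            break                                   # simple stand-in stopping rule
        x = x - avg
    return x                                        # one extracted IMF

# A normalized triangular window as a stand-in low-pass filter of half-length l.
l = 10
w = np.bartlett(2 * l + 1)
w /= w.sum()
t = np.linspace(0, 1, 512)
signal = np.cos(40 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
imf1 = sift_one_imf(signal, w)                      # the fast oscillation
residual = signal - imf1                            # input for the next IMF
```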
### \(\beta\)-smooth function and some of its properties We first introduce the concepts of \(L\)-Lipschitz continuity and \(\beta\)-smoothness for a function, following [34]. **Definition 2.1**.: _A function \(f\) is said to be \(L\)-Lipschitz continuous if for all \(x,y\in\mathcal{X}\), \(\|f(x)-f(y)\|\leq L\|x-y\|\), where \(\mathcal{X}\) denotes the convex domain of \(f\), and \(L\) is called the Lipschitz constant._ **Definition 2.2**.: _A continuously differentiable function \(f\) is \(\beta\)-smooth if the gradient \(\nabla f\) is \(\beta\)-Lipschitz, that is, if for all \(x,y\in\mathcal{X}\), \(\|\nabla f(x)-\nabla f(y)\|\leq\beta\|x-y\|\), where \(\mathcal{X}\) is the convex domain of \(f\)._ Then, for an unconstrained optimization problem whose objective function is \(\beta\)-smooth, we can prove that the sequence generated by the gradient descent algorithm converges to a stationary point when the learning rate is small enough. The details are given in Theorem 2.1. **Theorem 2.1**.: _Let \(f\) be a \(\beta\)-smooth function and \(f^{*}=\min f(x)>-\infty\). Then the gradient descent algorithm with a constant learning rate \(\lambda<\frac{2}{\beta}\), i.e., \(x^{(k+1)}=x^{(k)}-\lambda\nabla f(x^{(k)})\), converges to a stationary point, i.e., to the set \(\{x:\nabla f(x)=\mathbf{0}\}\)._ Proof.: According to the gradient descent algorithm, i.e., \[x^{(k+1)}=x^{(k)}-\lambda\nabla f(x^{(k)}), \tag{1}\] as \(f\) is \(\beta\)-smooth, we have \[f(x^{(k+1)})\overset{(a)}{\leq}f(x^{(k)})+\nabla f(x^{(k)})^{\top}(x^{(k+1)}-x^{(k)})+\frac{\beta}{2}\|x^{(k+1)}-x^{(k)}\|^{2}\overset{(b)}{=}f(x^{(k)})-\lambda\|\nabla f(x^{(k)})\|^{2}+\frac{\beta\lambda^{2}}{2}\|\nabla f(x^{(k)})\|^{2}=f(x^{(k)})-\lambda(1-\frac{\beta\lambda}{2})\|\nabla f(x^{(k)})\|^{2},\] where inequality (a) follows from Lemma 3.4 in [34], and equality (b) is obtained from Eqn. (1). Since \(\lambda<2/\beta\), this becomes \[\|\nabla f(x^{(k)})\|^{2}\leq\frac{f(x^{(k)})-f(x^{(k+1)})}{\lambda(1-\frac{\beta\lambda}{2})}.\] Next, we have \[\sum_{k=0}^{K}\|\nabla f(x^{(k)})\|^{2}\leq\frac{1}{\lambda(1-\frac{\beta\lambda}{2})}\sum_{k=0}^{K}(f(x^{(k)})-f(x^{(k+1)}))=\frac{f(x^{(0)})-f(x^{(K+1)})}{\lambda(1-\frac{\beta\lambda}{2})}\leq\frac{f(x^{(0)})-f(x^{*})}{\lambda(1-\frac{\beta\lambda}{2})},\] where \(x^{*}\) denotes the global minimizer. Taking the limit as \(K\rightarrow+\infty\), we have \(\sum_{k=0}^{+\infty}\|\nabla f(x^{(k)})\|^{2}<+\infty\). Hence, \(\lim_{k\rightarrow+\infty}\nabla f(x^{(k)})=0\).
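A tiny numerical illustration of Theorem 2.1: for \(f(x)=x^{2}\), which is \(\beta\)-smooth with \(\beta=2\), any learning rate \(\lambda<2/\beta=1\) drives the gradient norm to zero.

```python
import numpy as np

lam = 0.9                      # learning rate, below 2/beta = 1
x = 5.0
norms = []
for k in range(50):
    g = 2 * x                  # gradient of f(x) = x**2
    norms.append(abs(g))
    x = x - lam * g            # the update of Eqn. (1)
print(norms[0], norms[-1])     # gradient norm decays geometrically toward 0
```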
## 3 RRCNN inner loop block and RRCNN ### RRCNN inner loop block The main operation in IF is the computation of the moving average, which is essentially realized by a convolution operation, where the filter length depends on the given signal, and the filter weights are mainly given by empirical functions selected a priori. Therefore, it is very natural to convert the convolution operation into a 1-D convolutional neural network model, where both the filter length and the filter weights can be learnt adaptively from input signals given in advance. Furthermore, some ingenious mechanisms in deep learning, such as nonlinear activation functions and residual learning [35], can be adopted to make it more flexible. The structure we design to mimic the inner while loop of Alg. 1 is graphically depicted in Fig. 1. Since it mainly contains a recurrence mechanism, convolutional layers and a subtraction operation, we call it the recurrent residual convolutional neural network (RRCNN) inner loop block.

Figure 1: Graphic illustration of the RRCNN inner loop block.

```
Data: X ∈ R^N
Result: the local average of X
Initialize i = 0 and X^(0) = X;
while i < S do
    Pass X^(i) through the first 1-D convolutional layer, i.e.,
        X_C1 := Conv1D(X^(i), W_1^(i), padding=True, activation=tanh);
    Pass X_C1 to the second convolutional layer, i.e.,
        X_C2 := Conv1D(X_C1, W~_2^(i), padding=True), where W~_2^(i) := softmax(W_2^(i));
    X^(i+1) = X^(i) - X_C2;
    i = i + 1;
end while
Ŷ = X^(S); Ŷ and X - Ŷ are the IMF and the local average, respectively.
```
**Algorithm 2** RRCNN inner loop block

As shown in Fig. 1, the inner loop mainly consists of a judgment-loop structure and a residue operation; the judgment-loop structure is formed of two convolutional layers and a residual operation. Let \(X\in\mathbb{R}^{N}\) denote the input (vectors in this article are column vectors by default unless otherwise specified); the output of the RRCNN inner loop block, called \(\hat{Y}\in\mathbb{R}^{N}\), is computed following Alg. 2, and \(X-\hat{Y}\) is the local average of the input signal obtained by the RRCNN inner loop block. Mathematically, the output of the RRCNN inner loop block can be expressed as \(\hat{Y}=F(X,\mathbf{W})\), where \(\mathbf{W}\) denotes the undetermined weights in the RRCNN inner loop block, and the function \(F\) represents the structure of the RRCNN inner loop block, which is composed of convolution, nonlinear activation and subtraction operators. The detailed process of \(F\) can be formulated as \(F(X,\mathbf{W})=f(X^{(S-1)},\mathbf{W}^{(S-1)})\), where \(S\) represents the number of recursions in the RRCNN inner loop block, \(\mathbf{W}^{(S-1)}\) collects the undetermined weights in the \((S-1)\)-th recursion, and the function \(f\) and \(X^{(S-1)}\) are defined as: \[\begin{cases}f(X^{(i)},\mathbf{W}^{(i)})=\tanh(X^{(i)}*W_{1}^{(i)})*\tilde{W}_{2}^{(i)},\\ X^{(i+1)}=X^{(i)}-f(X^{(i)},\mathbf{W}^{(i)}),\end{cases} \tag{2}\] where \(X^{(0)}=X\), \(\mathbf{W}^{(i)}\) collects the undetermined weights of the \(i\)-th recursion, namely \(W_{1}^{(i)}\in\mathbb{R}^{K_{1}}\) and \(W_{2}^{(i)}\in\mathbb{R}^{K_{2}}\), \(\tilde{W}_{2}^{(i)}:=softmax(W_{2}^{(i)})=\{\frac{\exp(W_{2l}^{(i)})}{\sum_{k}\exp(W_{2k}^{(i)})}\}_{l=1}^{K_{2}}\), \(*\) is the 1-D convolution operation, and \(i=0,1,\ldots,S-2\). It is worth pointing out that the roles of the two 1-D convolutional layers in each judgment-loop are different. The role of the first convolutional layer, which is configured with a nonlinear activation function (we select tanh in this work), is to enhance the nonlinear expressive ability of the method. The purpose of the second convolutional layer, in contrast, is to produce a result that more reasonably describes the local average of the signal; therefore, non-negativity and normalization restrictions are imposed on its weights, and no nonlinear activation function is attached to it. The use of padding in the two layers ensures that the length of the output is consistent with the input. We will discuss the training details of the RRCNN inner loop block in the following section.
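A minimal PyTorch sketch of Alg. 2 / Eqn. (2) follows, assuming single-channel signals and a recent PyTorch with `padding="same"`; the kernel sizes, the recursion number S, and the initialization are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RRCNNInnerBlock(nn.Module):
    """One RRCNN inner loop block (Alg. 2): S recursions of
    X <- X - conv(tanh(conv(X))), where the second filter is passed
    through softmax so it is non-negative and sums to one."""
    def __init__(self, k1=11, k2=11, S=3):
        super().__init__()
        self.S = S
        # one (W_1, W_2) filter pair per recursion, as in Eqn. (2)
        self.W1 = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, 1, k1)) for _ in range(S)])
        self.W2 = nn.ParameterList(
            [nn.Parameter(torch.zeros(1, 1, k2)) for _ in range(S)])  # uniform filter at init

    def forward(self, x):                                  # x: (batch, 1, N)
        for i in range(self.S):
            xc1 = torch.tanh(F.conv1d(x, self.W1[i], padding="same"))
            w2 = torch.softmax(self.W2[i], dim=-1)         # non-negative, normalized
            xc2 = F.conv1d(xc1, w2, padding="same")        # local-average estimate
            x = x - xc2                                    # residual update of Eqn. (2)
        return x          # the IMF estimate; input minus output is the local average
```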
### RRCNN After the RRCNN inner loop block for the identification of the local average of a given signal is constructed, we can cascade a finite number of RRCNN inner loop blocks together to derive the signal decomposition; the resulting model, called RRCNN, is shown in Fig. 2. According to it, an input signal \(X\in\mathbb{R}^{N}\) (also denoted as \(X_{0}\)) is decomposed into \(M\) IMFs \(\hat{\mathbf{Y}}:=\{\hat{Y}_{m}\}_{m=1}^{M}\) (each \(\hat{Y}_{m}\in\mathbb{R}^{N}\)) and a residue \(X_{M}\in\mathbb{R}^{N}\). The detailed steps of RRCNN are listed in Alg. 3. The output of RRCNN can be formulated as: \[\begin{cases}\hat{Y}_{m}=F(X_{m-1},\mathbf{W}_{m}),\\ X_{m}=X_{m-1}-\hat{Y}_{m},\end{cases} \tag{3}\] where \(m=1,2,\ldots,M\), \(X_{0}=X\), \(F(X_{m-1},\mathbf{W}_{m})\) is the \(m\)-th RRCNN inner loop block, whose purpose is to extract an IMF from \(X_{m-1}\), and \(\mathbf{W}_{m}\) denotes the undetermined weights of the \(m\)-th RRCNN inner loop block.

```
Data: X ∈ R^N, and the number of IMFs M
Result: the IMFs and residue of X
Initialize m = 1, X_0 = X;
while m ≤ M do
    Compute the m-th IMF and the local average for the input X_{m-1}, denoted
        as Ŷ_m and X_m respectively, according to the RRCNN inner loop block;
    m = m + 1;
end while
Ŷ = {Ŷ_m}_{m=1}^M and X_M are the resulting IMFs and residue of RRCNN.
```
**Algorithm 3** RRCNN

Figure 2: Graphic illustration of the RRCNN.

All the generated IMFs are concatenated as the outputs of the RRCNN model. The errors between the outputs, i.e., \(\hat{\mathbf{Y}}\), and the labels, composed of the true first \(M\) IMFs and denoted as \(\mathbf{Y}\in\mathbb{R}^{N\times M}\), are computed by the loss function. For example, the loss function can be expressed as: \[L(\hat{\mathbf{Y}},\mathbf{Y})=\|\hat{\mathbf{Y}}-\mathbf{Y}\|_{F}^{2}, \tag{4}\] where the errors are measured by the mean square error (MSE), and \(\|\cdot\|_{F}\) denotes the Frobenius norm. In the RRCNN model equipped with the loss function of Eqn. (4), the computational complexity of the forward process is mainly attributed to the convolutional layers, and is \(O(N\cdot K)\), where \(N\) and \(K\) denote the length of the signal and the size of the convolutional filter, respectively. The loss can be customized according to the characteristics of the decomposition task. For example, if the 3rd IMF is smooth, the quadratic total variation term, expressed as \[QTV(\hat{\mathbf{Y}}_{\Omega_{1}}):=\sum_{m\in\Omega_{1}}\sum_{t=1}^{N-1}(\hat{Y}_{(t+1),m}-\hat{Y}_{t,m})^{2},\] can be added to the loss function, where \(\Omega_{1}\) represents the set of subscripts of the smooth components (here \(\Omega_{1}=\{3\}\)), i.e., \[L(\hat{\mathbf{Y}},\mathbf{Y})=\|\hat{\mathbf{Y}}-\mathbf{Y}\|_{F}^{2}+\eta QTV(\hat{\mathbf{Y}}_{\Omega_{1}}), \tag{5}\] where \(\eta\geq 0\) is a penalty parameter, \(\hat{\mathbf{Y}}\) is the dependent variable of the function \(F(\cdot,\cdot)\), and its independent variables are \(X\) and \(\mathbf{W}\), respectively.
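Reusing the `RRCNNInnerBlock` sketched above, the cascade of Alg. 3 / Eqn. (3) and the QTV-augmented loss of Eqn. (5) might look as follows (again only a sketch):

```python
import torch
import torch.nn as nn

class RRCNN(nn.Module):
    """Cascade of M inner blocks (Alg. 3): block m extracts IMF Y_m
    and hands the residual X_m to the next block."""
    def __init__(self, M=2, **block_kwargs):
        super().__init__()
        self.blocks = nn.ModuleList(
            [RRCNNInnerBlock(**block_kwargs) for _ in range(M)])

    def forward(self, x):                       # x: (batch, 1, N)
        imfs = []
        for block in self.blocks:
            y = block(x)                        # the m-th IMF
            imfs.append(y)
            x = x - y                           # the residual X_m
        return torch.cat(imfs, dim=1), x        # (batch, M, N) IMFs, plus residue

def rrcnn_loss(imfs_hat, imfs_true, smooth_idx=(), eta=0.1):
    """MSE of Eqn. (4) plus the quadratic total variation of Eqn. (5)
    on the components indexed by smooth_idx (the set Omega_1)."""
    loss = ((imfs_hat - imfs_true) ** 2).sum()
    for m in smooth_idx:
        diff = imfs_hat[:, m, 1:] - imfs_hat[:, m, :-1]
        loss = loss + eta * (diff ** 2).sum()
    return loss
```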
Moreover, if the 2nd and 3rd IMFs are orthogonal, an orthogonality constraint can be added to the loss function to ensure the orthogonality of the resulting components, i.e., \[\begin{split} L(\hat{\mathbf{Y}},\mathbf{Y})&=\sum_{i\in\hat{\Omega}_{2}^{c}}\|\hat{Y}_{i}-Y_{i}\|_{2}^{2}+\sum_{(i,j)\in\Omega_{2}}\left(\|\mathbf{W}^{o_{ij}}\hat{Y}_{i}-Y_{i}\|_{2}^{2}+\|\mathbf{W}^{o_{ij}}\hat{Y}_{j}-Y_{j}\|_{2}^{2}\right),\\ &\text{s.t. }\mathbf{W}^{o_{ij}}\mathbf{W}^{o_{ij}\top}=\mathbf{I},\end{split} \tag{6}\] where \(\mathbf{W}^{o_{ij}}\in\mathbb{R}^{N\times N}\) stands for the orthogonal matrix determined by \(\min_{\{\mathbf{W},\mathbf{W}^{o_{ij}}\}}L(\hat{\mathbf{Y}},\mathbf{Y})\), \(\Omega_{2}\) (here \(\Omega_{2}=\{(2,3)\}\)) denotes the subscript pairs of the orthogonal components, \(\hat{\Omega}_{2}\) (here \(\hat{\Omega}_{2}=\{2,3\}\)) represents the set of all subscripts appearing in \(\Omega_{2}\), and \(\hat{\Omega}_{2}^{c}=\{1,2,\ldots,M\}-\hat{\Omega}_{2}\). In the execution, the orthogonal transformation \(\mathbf{W}^{o_{ij}}\) applied to \(\hat{Y}_{i}\) and \(\hat{Y}_{j}\) can be regarded as adding a fully connected layer after the outputs \(\hat{Y}_{i}\) and \(\hat{Y}_{j}\). The two fully connected layers share the weights \(\mathbf{W}^{o_{ij}}\) and satisfy orthogonality, i.e., \(\mathbf{W}^{o_{ij}}\mathbf{W}^{o_{ij}\top}=\mathbf{I}\). In this case, the result of any IMF whose subscript satisfies \(i\in\hat{\Omega}_{2}\) is updated from \(\hat{Y}_{i}\) to \(\mathbf{W}^{o_{ij}}\hat{Y}_{i}\), and the results of the other components remain unchanged. Compared with IF, RRCNN also contains two loops. The outer loop successively acquires the \(M\) IMFs and one residue, and the purpose of the inner loop, i.e., the RRCNN inner loop block, is mainly to compute the local average through several iterations, finally yielding the IMF and local average of the current input signal. On the other hand, the two methods have important differences, which can be summed up as follows: 1. The number of outer loops of the IF method is data-driven and might differ for different signals, whereas it is given in advance in the proposed RRCNN approach. Therefore, RRCNN is suitable for decomposing non-stationary signals containing the same number of mono-components. 2. In the IF method, the filter length, which is a key parameter, is determined only by the signal under decomposition; however, its filter weights lack adaptability, being basically determined by the filter length. In the RRCNN approach, the filter length of each convolutional layer is adaptive and can be selected by hyper-parameter optimization. Moreover, its filter weights are data-driven, which makes RRCNN flexible in characterizing the local average. 3. In addition to computing the local average of the signal more flexibly, the proposed RRCNN method has all the advantages of deep learning, such as nonlinear expressive ability and loss functions customized to specific decomposition tasks, which are missing in the traditional signal decomposition methods. Although the proposed RRCNN method is developed based on IF, its ambition is not to replace IF, or any other existing signal decomposition method. RRCNN is essentially a supervised deep learning model. This not only provides advantages that the existing signal decomposition methods lack, but also carries all the limitations of a supervised model. For example, for any given signal, regardless of the decomposition performance, the existing signal decomposition methods can basically decompose it; the RRCNN model, however, does not work without training on a set of input signals for which the ground truth is known. In addition, the training process of the RRCNN model can be regarded as the process of exploring the potential patterns of the input signals.
If the patterns of the input signals are unique, i.e., not shared with the training data, the decomposition performance will also be greatly reduced. ### Training process and convergence analysis of RRCNN In the training process, the gradient-based back-propagation method is used to learn the filter weights appearing in the convolutional layers from the training data. For the hyper-parameters in RRCNN, including the filter lengths and the numbers of neurons in the convolutional layers, Hyperopt1 is adopted to select their values using the validation data; it is a Python library for serial and parallel optimization over awkward search spaces of hyper-parameters. Footnote 1: GitHub website: [https://github.com/hyperopt/hyperopt](https://github.com/hyperopt/hyperopt) Since, when the parameter \(\eta\) in Eqn. (5) equals \(0\), the loss degenerates to the case of Eqn. (4), we only discuss the training processes and convergence of the models whose loss functions are given in Eqns. (5) and (6), respectively. We first discuss the RRCNN model equipped with the loss function (5). For convenience, we consider a model whose output is a single IMF, with the number of recursions also limited to \(1\); that is, \(M=1\) and \(S_{1}=1\) in Fig. 2. Suppose that \((X^{j},Y^{j})_{j=1}^{J}\subset\mathbb{R}^{N}\times\mathbb{R}^{N}\) is a given set of training samples, where \(J\) denotes the number of samples. The process and the loss function of RRCNN are as follows: \[\left\{\begin{array}{l}X_{C1}^{j}=\sigma(X^{j}*W_{1}),\\ \\ X_{C2}^{j}=X_{C1}^{j}*\tilde{W}_{2},\\ \\ \hat{Y}^{j}=X^{j}-X_{C2}^{j},\end{array}\right. \tag{7}\] \(L(\hat{Y},Y)=\sum_{j=1}^{J}\|\hat{Y}^{j}-Y^{j}\|_{2}^{2}+\eta QTV(\hat{Y}^{j})\), where \(\tilde{W}_{2}=softmax(W_{2})\), \(\sigma=\tanh\) in our model, \(\eta\) is a non-negative parameter, and \(W_{1}\in\mathbb{R}^{K_{1}}\) and \(W_{2}\in\mathbb{R}^{K_{2}}\) are the undetermined convolution filters.
According to the gradient descent method and the chain rule, \(W_{1}\) and \(W_{2}\) are learned following the back-propagation method, that is, \(W_{h}^{(n+1)}=W_{h}^{(n)}-\lambda\nabla_{W_{h}}L\) (\(h=1,2\)), where \(\lambda\) denotes the learning rate and \(\nabla_{W_{h}}L=(\frac{\partial L}{\partial W_{h1}},\ldots,\frac{\partial L}{\partial W_{hK_{h}}})^{\top}\). The partial derivatives read \[\frac{\partial L}{\partial W_{2i}}=-\sum_{j=1}^{J}\sum_{t,l=1}^{N}\frac{\partial L}{\partial\hat{Y}_{t}^{j}}\frac{\partial X_{C2t}^{j}}{\partial\tilde{W}_{2l}}\frac{\partial\tilde{W}_{2l}}{\partial W_{2i}},\qquad\frac{\partial L}{\partial W_{1i}}=-\sum_{j=1}^{J}\sum_{t,l=1}^{N}\frac{\partial L}{\partial\hat{Y}_{t}^{j}}\frac{\partial X_{C2t}^{j}}{\partial X_{C1l}^{j}}\,\sigma^{\prime}(R_{l}^{j})\frac{\partial R_{l}^{j}}{\partial W_{1i}},\] where \(\frac{\partial L}{\partial\hat{Y}_{t}^{j}}=2(\hat{Y}_{t}^{j}-Y_{t}^{j})\), \(\frac{\partial X_{C2t}^{j}}{\partial\tilde{W}_{2l}}=X_{C1(t-[K_{2}/2]+l)}^{j}\), \(R_{l}^{j}=(X^{j}*W_{1})_{l}\), \(\frac{\partial R_{l}^{j}}{\partial W_{1i}}=X^{j}_{l-[K_{1}/2]+i}\), \[\frac{\partial\tilde{W}_{2l}}{\partial W_{2i}}=\left\{\begin{array}{l}-\frac{\exp(W_{2l}+W_{2i})}{(\sum_{k}\exp(W_{2k}))^{2}},\ \mbox{if}\ l\neq i;\\ \frac{\exp(W_{2i})\sum_{k\neq i}\exp(W_{2k})}{(\sum_{k}\exp(W_{2k}))^{2}},\ \mbox{otherwise},\end{array}\right.\mbox{ and }\frac{\partial X_{C2t}^{j}}{\partial X_{C1l}^{j}}=\left\{\begin{array}{l}\tilde{W}_{2(l-t+[K_{2}/2])},\ \mbox{if}\ 1\leq l-t+[K_{2}/2]\leq K_{2};\\ \\ 0,\ \mbox{otherwise}.\end{array}\right.\] Next, we discuss the convergence of the training process of the RRCNN model expressed in Eqn. (7). **Theorem 3.1**.: _For the RRCNN model defined in Eqn. (7), let the sequences \(\{W_{1}^{(n)}\}\) and \(\{W_{2}^{(n)}\}\) be generated by the gradient descent algorithm. Then there exists a positive constant \(L\), independent of the input data, such that when the learning rate \(\lambda\) is less than \(L\), \(\{W_{1}^{(n)}\}\) and \(\{W_{2}^{(n)}\}\) converge to their corresponding stationary points, i.e., the sets \(\{W_{1}:\nabla_{W_{1}}L=\mathbf{0}\}\) and \(\{W_{2}:\nabla_{W_{2}}L=\mathbf{0}\}\), respectively._ Proof.: From Lagrange's mean value theorem, it is straightforward to see that the composition of two functions that are \(\beta_{1}\)- and \(\beta_{2}\)-smooth, respectively, is still \(\beta\)-smooth (with \(\beta\leq\beta_{1}\beta_{2}\)) whenever the composition is well defined. According to Eqn. (3), the RRCNN model can be seen as a composition of a series of functions. Then, combining this with Theorem 2.1, we have that if the function of the \(i\)-th layer in the RRCNN model is \(\beta_{i}\)-smooth, the weight sequences obtained by the gradient descent method converge under the condition that the learning rate satisfies \(\lambda<2/\prod_{i}\beta_{i}\).
For the function \(\tilde{W}_{2}=softmax(W_{2})\) appearing in the RRCNN model, the second partial derivative of each \(\tilde{W}_{2l}\) (\(l=1,\ldots,K_{2}\)) with respect to its independent variable vector \(W_{2}\) exists and satisfies: \[\left|\frac{\partial^{2}\tilde{W}_{2l}}{\partial W_{2i}\partial W_{2j}}\right|=\frac{\exp(W_{2i})}{(\sum_{k}\exp(W_{2k}))^{3}}\begin{cases}\sum_{k\neq i}\exp(W_{2k})\left|\sum_{k}\exp(W_{2k})-2\exp(W_{2i})\right|,&\text{if }l=i=j,\\ \exp(W_{2l})\left|2\exp(W_{2i})-\sum_{k}\exp(W_{2k})\right|,&\text{if }l\neq i=j,\\ \exp(W_{2i}+W_{2l}+W_{2j}),&\text{if }l\neq i\neq j,\end{cases}\ \leq 2.\] Hence, each \(\tilde{W}_{2l}\) is \(\beta\)-smooth (with \(\beta\leq 2\)) according to Lagrange's mean value theorem. The remaining functions involved in RRCNN, including the quadratic function, tanh, and the 1-D convolution operation, can easily be proved to be \(\beta\)-smooth (\(\beta\) here is a generic notation, and the \(\beta\) value of each function may differ) by verifying that their second derivatives exist and are bounded. Therefore, the conclusion is proved. For the case of the model with an orthogonality constraint in the loss function \(L\) in Eqn. (6), the orthogonality constraint is a new obstacle compared to the previous model. However, in the field of optimization, the study of optimization problems with orthogonality constraints has become very common, and the gradient-based projection method can be used to find \(\mathbf{W}^{o}\) with convergence guarantees [36, 37]. Furthermore, following the back-propagation idea used in updating the weights of neural networks, the solutions of problem (6) can be obtained according to Alg. 4.

```
i = 0; given the learning rates λ_1, λ_2, initialize W^o and W;
while i < Max_Iter do
    W^o ← W^o - λ_1 ∇_{W^o} L;
    W^o ← P_{S_{p,q}}(W^o), where P_{S_{p,q}}(W^o) denotes the projection of
        W^o ∈ R^{p×q} onto the Stiefel manifold S_{p,q}: P_{S_{p,q}}(W^o) = Ũ Ṽ^T,
        with Ũ Σ̃ Ṽ^T the reduced singular value decomposition of W^o;
    W ← W - λ_2 ∇_W L according to the back-propagation method;
    i ← i + 1;
end while
```
**Algorithm 4** RRCNN with orthogonality constraint

Since the convergence analysis of \(\mathbf{W}\) computed with the gradient descent method is consistent with Theorem 3.1, and that of \(\mathbf{W}^{o}\) computed with the gradient projection method has been discussed in the literature [36, 37], the convergence of Alg. 4 follows directly.
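The projection step of Alg. 4 is easy to state in NumPy: replace the singular values of \(\mathbf{W}^{o}\) by ones via the reduced SVD. A small sketch:

```python
import numpy as np

def project_stiefel(W):
    """Step 4 of Alg. 4: project W onto the Stiefel manifold by replacing
    its singular values with ones, i.e. P(W) = U V^T from the reduced
    SVD W = U S V^T."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

W = np.random.randn(4, 4)
Wo = project_stiefel(W)
print(np.allclose(Wo @ Wo.T, np.eye(4)))   # True: W^o W^o^T = I after projection
```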
**Remark 1**.: _Similar to the case with the loss function in Eqn. (5), assume that the RRCNN model contains only two RRCNN inner loop blocks, i.e., \(M=2\) in Fig. 2, and that the IMFs obtained by the two blocks are orthogonal, i.e., \(\Omega_{2}=\{(1,2)\}\), \(\hat{\Omega}_{2}=\{1,2\}\) and \(\hat{\Omega}_{2}^{c}=\emptyset\) in Eqn. (6). In this case, the RRCNN model reduces to a simpler formula, close to the expressions of some orthogonality-constrained algorithms:_
\[\min_{\mathbf{W},\mathbf{W}^{o}}\|\mathbf{W}^{o}\hat{\mathbf{Y}}-\mathbf{Y}\|_{F}^{2},\quad s.t.\ \mathbf{W}^{o}\mathbf{W}^{o\,\top}=\mathbf{I}, \tag{8}\]
_where \(\hat{\mathbf{Y}}\) depends on \(\mathbf{W}\)._

## 4 Experiments

To evaluate the performance of the proposed RRCNN inner loop block and the RRCNN model, we test them against seven questions: 1) Can the RRCNN inner loop block be used to find the local average of a non-stationary signal? 2) Is the RRCNN inner loop block still effective on noisy signals? 3) Can RRCNN be used to decompose non-stationary signals? 4) Can RRCNN be effective on noisy signals? 5) Can RRCNN be effective on signals composed of orthogonal mono-components? 6) Can RRCNN be effective on solutions of differential equations? 7) Is RRCNN capable of processing real signals? Furthermore, we discuss the computational time of RRCNN and compare it with that of the other methods. Owing to space limitations, some experiments are provided as supplementary material.

In the experiments, each constructed input dataset is divided into training and validation sets with ratio \(7:3\). Our proposed signal average method, i.e., the RRCNN inner loop block, is compared with the existing signal average methods based on cubic spline envelopes (CSE) [1], an optimization model (OP), segment power-function based envelopes (SPFE) [38], and iterative filtering (IF) [9]. For simplicity, we denote the averages obtained from CSE, OP, SPFE, and IF by CSA, OPA, SPFA, and IF, respectively. The RRCNN method is compared with the state-of-the-art decomposition methods, including EMD, IF, VMD [17], continuous wavelet transform based synchrosqueezing (SYNSQ_CWT) [20], short time Fourier transform based synchrosqueezing (SYNSQ_STFT) [20], INCMD [11], EWT [21], FDM [22], and its variant DCT_GAS_FDM [23]. In addition, in the experiments verifying robustness to noise, the proposed RRCNN method is compared with the ensemble EMD (EEMD) model [39], and it is compared with the M-LFBF [15, 16] model to verify the orthogonality of the decomposed components. Links to the code of these methods are collected in the footnotes below.
Footnote 2: Code of RRCNN: [https://github.com/zhoudafa08/RRCNN](https://github.com/zhoudafa08/RRCNN)

Footnote 3: Code of IF: [http://people.disim.univaq.it/~antonio.cicone/Software.html](http://people.disim.univaq.it/~antonio.cicone/Software.html)

Footnote 4: Code of VMD: [https://www.mathworks.com/help/wavelet/ref/vmd.html](https://www.mathworks.com/help/wavelet/ref/vmd.html)

Footnote 5: Codes of SYNSQ_CWT and SYNSQ_STFT: [https://github.com/ebrevdo/synchrosqueezing](https://github.com/ebrevdo/synchrosqueezing)

Footnote 6: Code of INCMD: [https://github.com/sheadan/IterativeNCMD](https://github.com/sheadan/IterativeNCMD)

Footnote 7: Code of EWT: [https://www.mathworks.com/help/wavelet/ug/empirical-wavelet-transform.html](https://www.mathworks.com/help/wavelet/ug/empirical-wavelet-transform.html)

Footnote 8: Code of FDM: [https://www.researchgate.net/publication/274570245_Matlab_Code_Of_The_Fourier_Decomposition_Method_FDM](https://www.researchgate.net/publication/274570245_Matlab_Code_Of_The_Fourier_Decomposition_Method_FDM)

Footnote 9: Code of DCT_GAS_FDM: [https://www.researchgate.net/publication/326294577_MATLABCodeOfFDM_DCT_DFT_FIR_FSASJuly2018](https://www.researchgate.net/publication/326294577_MATLABCodeOfFDM_DCT_DFT_FIR_FSASJuly2018)

Footnote 10: Code of EEMD: [http://perso.ens-lyon.fr/patrick.flandrin/emd.html](http://perso.ens-lyon.fr/patrick.flandrin/emd.html)

Footnote 11: Code of M-LFBF: [http://perso.ens-lyon.fr/nelly.pustelnik/](http://perso.ens-lyon.fr/nelly.pustelnik/)

The results are measured by the metrics listed in Tab. 1, where the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) measure the errors between the predicted result \(\hat{Y}\in\mathbb{R}^{N}\) and the ground truth \(Y\in\mathbb{R}^{N}\), and \(\rho(c_{1},c_{2})\) evaluates the orthogonality between the resulting components \(c_{1}\in\mathbb{R}^{N}\) and \(c_{2}\in\mathbb{R}^{N}\).

\begin{table} \begin{tabular}{c c c c} \hline \hline Metric & MAE & RMSE & \(\rho(c_{1},c_{2})\) \\ \hline Expression & \(\frac{1}{N}\sum_{t=1}^{N}|\hat{Y}_{t}-Y_{t}|\) & \(\sqrt{\frac{1}{N}\sum_{t=1}^{N}(\hat{Y}_{t}-Y_{t})^{2}}\) & \(\frac{|\langle c_{1},c_{2}\rangle|}{\|c_{1}\|_{2}\|c_{2}\|_{2}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation indices.
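For reference, the three indices of Tab. 1 can be computed directly; the following NumPy sketch is our own illustration, not part of the released RRCNN code:

```python
import numpy as np

def mae(y_hat, y):
    # Mean absolute error between the prediction and the ground truth.
    return np.mean(np.abs(y_hat - y))

def rmse(y_hat, y):
    # Root mean squared error.
    return np.sqrt(np.mean((y_hat - y) ** 2))

def rho(c1, c2):
    # Normalized absolute inner product; values near 0 indicate
    # (near-)orthogonality of the two components.
    return np.abs(np.dot(c1, c2)) / (np.linalg.norm(c1) * np.linalg.norm(c2))
```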
### 4.1 Can the RRCNN inner loop block be used to find the local average of a non-stationary signal?

We first evaluate the performance of the proposed RRCNN inner loop block in computing the local average of a non-stationary signal. Several signals composed of a linear function plus a mono-component function, or of a mono-component function alone, are constructed as inputs, where a mono-component function can generally be expressed as \(a(t)\cos\theta(t)\) with \(a(t),\theta^{\prime}(t)>0\) for all \(t\), and with \(a(t)\) and \(\theta^{\prime}(t)\) changing much more slowly in time than \(\theta(t)\). Ideally, the average of a mono-component signal is zero. The input signals and the corresponding labels constructed in this part are listed in Tab. 2. It should be pointed out that the label here represents the first IMF of the corresponding input signal, not the local average; the local average can be computed by subtracting the label from the input signal. After the RRCNN inner loop block is trained with the inputs in Tab. 2, we select three mono-component signals with different instantaneous frequencies and instantaneous amplitudes, given in Examples 1-3 and not included in the inputs, to test the performance of the RRCNN inner loop block.

**Example 1**: \(x(t)=(3+2\cos(2t))\cos(2t^{2})\), \(t\in[0,3]\).

**Example 2**: \(x(t)=(2t+\cos(2t^{2}))\cos(12t+t^{2}+2\cos(t))\), \(t\in[0,3]\).

**Example 3**: \(x(t)=(3+2\cos(3t))\cos(5t^{2})\), \(t\in[0,3]\).

The moving averages of the signals in Examples 1-3 obtained by the different methods are shown in Fig. 3 (a)-(c), and the errors between the obtained moving averages and the true averages are listed in Tabs. 3-5. According to the results, we observe the following phenomena:

1. The existing methods are prone to boundary effects, which can be seen at the left boundaries of Fig. 3 (a)-(c). The RRCNN inner loop block, however, avoids this problem to a certain extent.
2. When the amplitude of the signal changes quickly and its frequency changes slowly, the RRCNN inner loop block performs best among all models, according to the left parts of Fig. 3 (a)-(c). When the amplitude change is reduced and the frequency change is accelerated, its performance may even be inferior to the other models, as can be seen in the right half of Fig. 3 (c).
3. Even though the RRCNN inner loop block shows both strong and weak spots compared with the other local average methods, Tabs. 3-5 show that its MAE and RMSE are significantly reduced compared to the other models.

These phenomena can be attributed to the following:

1. The averages obtained by the comparison methods are essentially determined by the local information of the signal, which makes the results reasonable when that information is sufficient (e.g., where the amplitude change is reduced and the frequency change is accelerated), and much less accurate when it is insufficient (e.g., where the amplitude changes quickly and the frequency changes slowly).
2. The filter weights of each convolutional layer in the RRCNN are shared, so they are determined by all the information contained in the whole signal. Therefore, the average obtained by the RRCNN is relatively stable and does not suffer much interference from large changes in the signal's amplitude and frequency.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Metric & CSA & OPA & SPFA & IF & **RRCNN** \\ \hline MAE & 0.2067 & 0.2448 & 0.3646 & 0.8624 & **0.1418** \\ RMSE & 0.4185 & 0.5427 & 0.6388 & 1.5833 & **0.1712** \\ \hline \hline \end{tabular} \end{table} Table 5: Metrics of the moving averages of the input in Example 3.

Figure 3: Moving averages by different methods of Examples 1-3.

### 4.2 Is the RRCNN inner loop block still effective on noisy signals?

In this part, we consider the robustness of the RRCNN inner loop block to noise, based on the inputs and labels constructed in Section 4.1. Specifically, each input signal is perturbed with additive Gaussian noise with the signal-to-noise ratio (SNR) set to \(15dB\), while the corresponding label remains unchanged, as detailed in Tab. 6.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(x_{1}(t)\) & \(x_{2}(t)\) & Inputs & Labels & Notes \\ \hline \multirow{3}{*}{\(0.1kt\)} & \(\cos(3lt)\) & & & \(k=2,3,\ldots,9\) \\ & \(\cos(3lt+t+\cos(t))\) & & & \(k=2,4,6,8\) \\ & \(\cos(3lt+t+\cos(t))\) & & & \(l=2,4,6,8\) \\ \hline \multirow{3}{*}{\(0.1k\)} & \(\sin(3lt)\) & & & \(k=1,2,\ldots,10\) \\ & \(\sin(3klt)\) & \(x_{1}(t)+x_{2}(t)+\varepsilon(t)\) & \(x_{2}(t)\) & \\ & \(\sin(3klt+t^{2}+\cos(t))\) & & & \(l=2,4,\ldots,28\) \\ \hline \multirow{2}{*}{\(3+2\cos(0.5kt)\)} & \(\cos(0.5klt^{2})\) & & & \(k=2,3,\ldots,6\) \\ & \(\cos(0.5lt^{2}+l\cos(t))\) & & & \(l=4,5,\ldots,9\) \\ \hline \hline \end{tabular} \end{table} Table 6: Inputs disturbed by Gaussian noise with the SNR set to \(15dB\), and the labels used in Section 4.2, where \(t\in[0,3]\).
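The SNR-controlled perturbation can be sketched as follows (a minimal illustration; the 1024-point sampling grid and the random seed are our assumptions). It is applied here to the signal that serves as Example 4 below:

```python
import numpy as np

def add_noise(x, snr_db, rng):
    # Additive Gaussian noise at a prescribed SNR, where
    # SNR(dB) = 10 * log10(P_signal / P_noise).
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

t = np.linspace(0, 3, 1024)
x = (2 * t + np.cos(2 * t**2)) * np.cos(20 * t + t**2 + 2 * np.cos(t))
x_noisy = add_noise(x, snr_db=15.0, rng=np.random.default_rng(0))
```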
Similar to the section above, we select two noisy signals, essentially the signals in Examples 2-3 with additive Gaussian noise, described in Examples 4-5, to test the performance.

**Example 4**: \(x(t)=(2t+\cos(2t^{2}))\cos(20t+t^{2}+2\cos(t))+\varepsilon(t)\), where \(t\in[0,3]\) and \(SNR=15dB\).

**Example 5**: \(x(t)=(3+2\cos(3t))\cos(5t^{2})+\varepsilon(t)\), where \(t\in[0,3]\) and \(SNR=15dB\).

The results of Examples 4-5 are shown in Fig. 4 (a)-(b) and Tabs. 7-8.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Metric & CSA & OPA & SPFA & IF & **RRCNN** \\ \hline MAE & 0.2023 & 0.1998 & 0.1936 & 0.2183 & **0.1190** \\ RMSE & 0.2529 & 0.2501 & 0.2420 & 0.2762 & **0.1500** \\ \hline \hline \end{tabular} \end{table} Table 7: Metrics of the moving averages of the input in Example 4 obtained from different methods.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Metric & CSA & OPA & SPFA & IF & **RRCNN** \\ \hline MAE & 0.2141 & 0.2126 & 0.2004 & 0.2339 & **0.1290** \\ RMSE & 0.2676 & 0.2658 & 0.2520 & 0.2967 & **0.1623** \\ \hline \hline \end{tabular} \end{table} Table 8: Metrics of the moving averages of the input in Example 5.

From the results, we find that the RRCNN inner loop block is the most robust of all the models for signals with additive Gaussian noise. Specifically, in Example 4 it reduces the MAE and RMSE from \(0.1936\) and \(0.2420\), obtained by the second-best method (SPFA), to \(0.1190\) and \(0.1500\), respectively; in Example 5 it reduces them from \(0.2004\) and \(0.2520\) (again for SPFA) to \(0.1290\) and \(0.1623\), respectively.

### 4.3 Can RRCNN be used to decompose non-stationary signals?

After evaluating the proposed RRCNN inner loop block in calculating the local average, the next task is to assess the decomposition performance of RRCNN on non-stationary signals. For illustration, we only consider decomposing signals consisting of two components. The input signals in this part can be divided into two categories: one composed of a mono-component signal and a zero signal, and the other of two mono-components with close frequencies. The former trains RRCNN to reproduce the zero local average of a mono-component signal; the latter enables RRCNN to decompose signals with close frequencies, which is the main factor causing the mode mixing effect. The inputs and labels are constructed as shown in Tab. 9. To challenge the proposed model, we choose a signal composed of two cosines with frequencies of \(2.5\) Hz and \(3.4\) Hz, described in Example 6, which is particularly prone to introducing the mode mixing effect in the existing methods.

**Example 6**: \(x(t)=\cos(5\pi t)+\cos(6.8\pi t)\), \(t\in[0,6]\).

In Example 6, the components predicted by the trained RRCNN model are compared with those obtained from the state-of-the-art methods in signal decomposition.
\begin{table} \begin{tabular}{c c c c c} \hline \hline \(x_{1}(t)\) & \(x_{2}(t)\) & Inputs & Labels & Notes \\ \hline \(\cos(k\pi t)\) & \(\cos((k+1.5)\pi t+t^{2}+\cos(t))\) & \(x_{1}(t)+x_{2}(t)\) & \([x_{2}(t),x_{1}(t)]\) & \(k=5,6,\ldots,14\) \\ \(0\) & \(\cos((k+1.5)\pi t+t^{2}+\cos(t))\) & & & \\ \hline \(\cos(k\pi t)\) & \(\cos(kl\pi t+t^{2}+\cos(t))\) & & & \(k=5,6,\ldots,14\) \\ \(0\) & \(\cos(kl\pi t)\) & \(x_{1}(t)+x_{2}(t)\) & \([x_{2}(t),x_{1}(t)]\) & \\ \(0\) & \(\cos(kl\pi t+t^{2}+\cos(t))\) & & & \(l=2,3,\ldots,19\) \\ \hline \hline \end{tabular} \end{table} Table 9: Inputs and labels used in Section 4.3, where \(t\in[0,6]\).

Figure 4: Moving averages by different methods of Examples 4-5.

The metrics of the errors between the obtained components and the labels, measured by MAE and RMSE, are shown in Tab. 10. In addition, to compare RRCNN and the existing methods more intuitively, we select the three methods with the best overall performance from Tab. 10, i.e., RRCNN, EMD, and INCMD, and plot their obtained components and the corresponding time-frequency-energy (TFE) representations in Fig. 5 (a), (b), respectively. It should be noted that the identification of an optimal TFE representation is a research topic in its own right and is out of the scope of this work; here, we adopt as TFE representation the Fourier quadrature transform proposed in [23]. According to the results, we draw the following conclusions:

1. The mode mixing problem is indeed a big challenge for some of the existing methods. For example, the maximum value of \(x(t)\) is 2, yet the MAEs of the two components obtained by the SYNSQ_STFT method are as high as 0.6620 and 0.6778, respectively, so it essentially fails to separate the cosines with frequencies of 2.5 Hz and 3.4 Hz from \(x(t)\).
2. Many methods achieve a satisfactory decomposition of \(x(t)\). For example, the left plots in the 2nd and 3rd rows of Fig. 5 (a) show that the components obtained by the EMD, INCMD, and RRCNN methods oscillate consistently with the ground truths. The same conclusion can be drawn from Fig. 5 (b): despite some obvious fluctuations, the TFE representations of the two components obtained by EMD, INCMD, and RRCNN are essentially as well separated as those of the true components.
3. Nonetheless, a closer look at the right plots in the 2nd and 3rd rows of Fig. 5 (a) reveals a subtle advantage of the RRCNN model at the boundaries. Due to the incompleteness of the waveform at the boundary, many existing methods are strongly affected there, as are EMD and INCMD. The weights of the convolution filters in the RRCNN model, however, depend on the entire waveform of all the training samples, which consistently reduces the boundary effects.

### 4.4 Can RRCNN be effective on noisy signals?

Similar to the RRCNN inner loop block, we verify the robustness of RRCNN to additive Gaussian noise in this part. The constructed inputs and labels are listed in Tab. 11, where the inputs are generated by adding Gaussian noise with an SNR of \(25dB\) to the signals in Tab. 9.
After the RRCNN model is trained, we choose as test data the signal consisting of two mono-components plus additive Gaussian noise with an SNR of \(15dB\), given in Example 7. Since a smaller SNR value corresponds to stronger noise, the noise in \(x(t)\) is larger than that in the training data.

\begin{table} \begin{tabular}{l c c c c l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} \\ \cline{2-5} \cline{7-10} & MAE & RMSE & MAE & RMSE & & MAE & RMSE & MAE & RMSE \\ \hline EMD [1] & 0.1049 & 0.2110 & 0.1132 & 0.2058 & INCMD [11] & 0.1144 & 0.1667 & 0.1160 & 0.1509 \\ VMD [17] & 0.2193 & 0.2479 & 0.2193 & 0.2479 & SYNSQ\_CWT [20] & 0.2227 & 0.2496 & 0.2589 & 0.2965 \\ EWT [21] & 0.2145 & 0.2605 & 0.2145 & 0.2605 & SYNSQ\_STFT [20] & 0.6620 & 0.7402 & 0.6778 & 0.7608 \\ FDM [22] & 0.2438 & 0.2947 & 0.2429 & 0.2893 & DCT\_GAS\_FDM [23] & 0.1608 & 0.2005 & 0.1605 & 0.2003 \\ IF [9] & 0.2253 & 0.2756 & 0.2060 & 0.2489 & **RRCNN** & **0.1013** & **0.1242** & **0.0331** & **0.0422** \\ \hline \hline \end{tabular} \end{table} Table 10: Metrics of the errors between the obtained components by different methods and the ground truth of Example 6.

**Example 7**: \(x(t)=\cos(5\pi t)+\sin(8\pi t+2t^{2}+\cos(t))+\varepsilon(t)\), \(t\in[0,6]\), SNR \(=15dB\).

Figure 5: Results of Examples 6-8.

The errors between the ground truths and the components obtained by the different methods, measured by MAE and RMSE, are reported in Tab. 12. Furthermore, the components, errors, and TFE representations of the three best-performing methods, i.e., FDM, DCT_GAS_FDM, and RRCNN, are shown in Fig. 5 (d), (e), respectively. According to the results, RRCNN works for signals with additive Gaussian noise, although without an overwhelming advantage, especially over FDM. Specifically, the left plots in the 2nd-3rd rows of Fig. 5 (d) show that RRCNN essentially separates the two mono-components, and the resulting components are consistent with the ground truths in their oscillation modes. Moreover, as shown in the right plots in the 2nd-3rd rows of Fig. 5 (d), the errors of RRCNN are relatively evenly spread over the entire time span, while those of the FDM and DCT_GAS_FDM methods are small in the middle and large at the boundaries, consistent with the observations in Section 4.3. Finally, since RRCNN is designed with CNNs operating in the time domain, it matches the existing methods there; because it lacks a priori information in the frequency domain, however, its performance viewed in the time-frequency domain may be slightly weaker. Indeed, according to Fig. 5 (e), the TFE distributions of the two mono-components obtained by FDM, DCT_GAS_FDM, and RRCNN are clearly separated, but the TFE distribution of the component \(c_{1}\) obtained by RRCNN jitters much more severely than that of the true component \(\sin(8\pi t+2t^{2}+\cos(t))\) in the interval \(t\in[2,4]\). On the other hand, the TFE representations also show that the RRCNN method reduces boundary effects compared to the other methods.

### 4.5 Can RRCNN be effective on signals composed of orthogonal mono-components?

In this part, we test the proposed RRCNN model on signals composed of orthogonal components.
As discussed in Section 3.2, the RRCNN model in this part should be equipped with a loss function carrying an orthogonal constraint; instead, we directly add an inner product term to the loss function to control the orthogonality, i.e., \(\gamma\frac{|\langle c_{1},c_{2}\rangle|}{\|c_{1}\|_{2}\|c_{2}\|_{2}}\), where \(\gamma\) denotes a positive penalty parameter, and \(c_{1}\) and \(c_{2}\) are two components required to be orthogonal. We then train the model by the back propagation method based on this loss. Although there is no guarantee of convergence in this case, the approach is simple and computationally efficient, and the experimental results show that it essentially meets the orthogonality expectation.

\begin{table} \begin{tabular}{l c c c c l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} \\ \cline{2-5} \cline{7-10} & MAE & RMSE & MAE & RMSE & & MAE & RMSE & MAE & RMSE \\ \hline EMD [1] & 0.1693 & 0.2961 & 0.2029 & 0.3485 & INCMD [11] & 0.2436 & 0.3870 & 0.3343 & 0.4720 \\ VMD [17] & 0.1440 & 0.2371 & 0.1363 & 0.2286 & SYNSQ\_CWT [20] & 0.5817 & 0.6477 & 0.0915 & 0.1522 \\ EWT [21] & 0.1840 & 0.2945 & 0.1754 & 0.2884 & SYNSQ\_STFT [20] & 0.4353 & 0.4992 & 0.2009 & 0.2463 \\ FDM [22] & **0.1039** & **0.1765** & 0.0857 & 0.1327 & DCT\_GAS\_FDM [23] & 0.1288 & 0.1801 & 0.0826 & 0.1252 \\ IF [9] & 0.1559 & 0.2318 & 0.1601 & 0.2256 & EEMD [39] & 0.1870 & 0.2308 & 0.1243 & 0.1925 \\ **RRCNN** & 0.1556 & 0.1977 & **0.0805** & **0.1014** & & & & & \\ \hline \hline \end{tabular} \end{table} Table 12: Metrics of the errors between the results obtained by different methods and the ground truths of Example 7.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(x_{1}(t)\) & \(x_{2}(t)\) & Inputs & Labels & Notes \\ \hline \(\cos(k\pi t)\) & \(\cos((k+1.5)\pi t)\) & & & \\ \(\cos(k\pi t)\) & \(\cos((k+1.5)\pi t+t^{2}+\cos(t))\) & \(x_{1}(t)+x_{2}(t)+\varepsilon(t)\) & \([x_{2}(t),x_{1}(t)]\) & \\ \(0\) & \(\cos((k+1.5)\pi t+t^{2}+\cos(t))\) & & & \\ \hline \(\cos(k\pi t)\) & \(\cos(kl\pi t)\) & & & \\ \(\cos(k\pi t+t^{2}+\cos(t))\) & \(\cos(kl\pi t+t^{2}+\cos(t))\) & \(x_{1}(t)+x_{2}(t)+\varepsilon(t)\) & \([x_{2}(t),x_{1}(t)]\) & \\ \(0\) & \(\cos(kl\pi t+t^{2}+\cos(t))\) & & & \\ \hline \hline \end{tabular} \end{table} Table 11: Inputs disturbed by Gaussian noise with the SNR set to \(25dB\), and the labels used in Section 4.4, where \(t\in[0,6]\).

The Fourier basis is adopted for the construction of the inputs of RRCNN because its elements are orthogonal. The constructed inputs and labels are given in Tab. 13. After the RRCNN is trained, we use it to decompose the signal given in Example 8, which is composed of two orthogonal mono-components with close frequencies.

**Example 8**: \(x(t)=\cos(7t)+\sin(9t)\), \(t\in[0,2\pi]\).

The orthogonality of, and the errors between, the resulting components and the corresponding ground truths are reported in Tab. 14. According to the results, the EWT and DCT_GAS_FDM methods perform best in terms of orthogonality, which is attributed to the former being based on the wavelet transform and the latter on the discrete cosine transform, both of which impose a strong orthogonality constraint on the decomposition results. For RRCNN, orthogonality is promoted by minimizing the loss function. Therefore, on the one hand, the results of RRCNN tend to balance the individual terms of the loss function; on the other hand, the finite iterative process cannot ensure that the results converge completely to the true solution.
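Concretely, the penalty term can be sketched as follows (a minimal NumPy illustration of the loss value only; the actual RRCNN training differentiates this term by back propagation, and `reconstruction_loss` and the value of \(\gamma\) are placeholders):

```python
import numpy as np

def orthogonality_penalty(c1, c2, gamma):
    # gamma * |<c1, c2>| / (||c1||_2 * ||c2||_2), cf. the term above.
    inner = np.abs(np.dot(c1, c2))
    return gamma * inner / (np.linalg.norm(c1) * np.linalg.norm(c2))

# total_loss = reconstruction_loss(c1, c2) \
#              + orthogonality_penalty(c1, c2, gamma=0.1)
```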
Combined with the orthogonality and error metrics, the overall performance of the RRCNN model is still satisfactory. Specifically, it is not the best in terms of orthogonality, but it is very close to orthogonal, outperforming all other models except EWT and DCT_GAS_FDM; in terms of error, its two components are the closest to the true components. Moreover, we select the top three methods in terms of the error metrics from Tab. 14, that is, FDM, DCT_GAS_FDM, and RRCNN, and plot their obtained components, errors, and TFE representations in Fig. 5 (c), (f), respectively. From the plots, we can draw a conclusion similar to Example 6: the RRCNN model obtains performance comparable to the state-of-the-art methods in the time domain, especially at the boundary, where it weakens the impact of incomplete waveforms to a certain extent.

\begin{table} \begin{tabular}{l c c c c c l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\(\rho(c_{1},c_{2})\)} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} & \multirow{2}{*}{Method} & \multirow{2}{*}{\(\rho(c_{1},c_{2})\)} & \multicolumn{2}{c}{\(c_{1}\)} & \multicolumn{2}{c}{\(c_{2}\)} \\ \cline{3-6} \cline{9-12} & & MAE & RMSE & MAE & RMSE & & & MAE & RMSE & MAE & RMSE \\ \hline EMD [1] & 0.1141 & 0.1796 & 0.2776 & 0.2615 & 0.3581 & INCMD [11] & 0.1785 & 0.1846 & 0.2485 & 0.6205 & 0.7004 \\ VMD [17] & 0.4351 & 0.5682 & 0.6303 & 0.5682 & 0.6303 & SYNSQ\_CWT [20] & 0.6268 & 0.7697 & 0.9299 & 0.4094 & 0.4823 \\ EWT [21] & **0.0000** & 0.6364 & 0.7070 & 0.6364 & 0.7070 & SYNSQ\_STFT [20] & 0.1561 & 0.3225 & 0.3880 & 0.3202 & 0.3835 \\ IF [9] & 0.6896 & 0.2262 & 0.2776 & 0.2519 & 0.3092 & M-LFBF [16] & 0.9543 & 0.3858 & 0.4721 & 0.3857 & 0.4720 \\ FDM [22] & 0.0939 & 0.1229 & 0.2047 & 0.1244 & 0.2035 & DCT\_GAS\_FDM [23] & **0.0000** & 0.1222 & 0.1898 & 0.1211 & 0.1824 \\ **RRCNN** & 0.0595 & **0.1195** & **0.1759** & **0.0639** & **0.0889** & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 14: Metrics of the orthogonality, and errors of the obtained components by different methods in Example 8.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(x_{1}(t)\) & \(x_{2}(t)\) & Inputs & Labels & Notes \\ \hline \(\cos kt\) & \(\sin(k+l)t\) & \(x_{1}(t)+x_{2}(t)\) & \([x_{2}(t),x_{1}(t)]\) & \(k=6,7,8,9\); \(l=3,5,\ldots,33\) \\ \hline \hline \end{tabular} \end{table} Table 13: Inputs and labels used in Section 4.5, where \(t\in[0,2\pi]\).

However, due to the lack of a priori frequency-domain information in the design of RRCNN, its advantage in terms of the TFE distribution may be reduced compared with the other methods, especially in the middle of the signal; at the boundaries, on the other hand, its performance is overall better than that of the existing methods, also in the time-frequency domain.

### 4.6 Can RRCNN be effective on solutions of differential equations?

To address this question, and following what has been done in the seminal work by Huang et al. [1], we test RRCNN on the solutions of the Duffing and Lorenz equations. Labels are necessary in the training process of the RRCNN model; however, they are not known in this instance. Since the EMD method works well for these two types of signals, we use the results of EMD as the labels to verify the learning and generalization capabilities of the RRCNN model.
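As an illustration of the input construction described in Examples 9-10 below, the two systems can be integrated numerically as follows (a sketch using SciPy's `solve_ivp` for brevity, whereas Example 9 uses the Euler method; the sample counts are our assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing_xdot(alpha, beta, gamma, omega, t_span=(0, 300), n=3000):
    # x'' + alpha*x + beta*x**3 = gamma*cos(omega*t), x(0) = x'(0) = 1.
    def f(t, s):
        x, v = s
        return [v, gamma * np.cos(omega * t) - alpha * x - beta * x**3]
    t = np.linspace(*t_span, n)
    return t, solve_ivp(f, t_span, [1.0, 1.0], t_eval=t).y[1]  # x'(t)

def lorenz_x(sigma, r, b, t_span=(0, 50), n=5000):
    # x' = -sigma(x - y), y' = rx - y - xz, z' = -bz + xy.
    def f(t, s):
        x, y, z = s
        return [-sigma * (x - y), r * x - y - x * z, -b * z + x * y]
    t = np.linspace(*t_span, n)
    return t, solve_ivp(f, t_span, [-10.0, 0.0, 0.0], t_eval=t).y[0]
```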
**Example 9**: We consider the following Duffing equation: \(\ddot{x}+\alpha x+\beta x^{3}=\gamma\cos(\omega t)\). We focus our attention on the decomposition of \(\dot{x}(t)\). We first construct the inputs by solving the equation using the Euler method with the parameters set to \(\alpha\in[-1,0)\), \(\beta\in[0.8,1.2]\), and \(\omega\in[1.0,1.4]\) with step size \(0.1\), and \(\gamma\in[0.04,0.06]\) with step size \(0.005\), where \(t\in[0,300]\) and the initial conditions are \(x(0)=1,\dot{x}(0)=1\). The first two IMFs of each input obtained by EMD are then collected as the labels. After the RRCNN is trained, we apply it to decompose the signal \(\dot{x}(t)\) with \(\alpha=-0.85,\beta=1.05,\gamma=0.045,\omega=1.3\), which was not included in the training data. The results are depicted in Fig. 6; both components produced by RRCNN are very close to those of EMD.

Figure 6: Components of \(\dot{x}(t)\) of the Duffing equation with \(\alpha=-0.85,\beta=1.05,\gamma=0.045,\omega=1.3\) by EMD and RRCNN. First row: the original signal. 2nd-3rd rows, left: component obtained by EMD (blue solid curve) and RRCNN (red dotted curve); right: the error between the components obtained by EMD and RRCNN.

**Example 10**: The Lorenz equation is mathematically expressed as: \(\dot{x}=-\sigma(x-y),\dot{y}=rx-y-xz,\dot{z}=-bz+xy\). We take into account the decomposition of \(x(t)\). The inputs are the \(x(t)\) obtained by the ode45 method (code: [https://www.mathworks.com/help/matlab/ref/ode45.html](https://www.mathworks.com/help/matlab/ref/ode45.html)) by varying the parameters \(\sigma\in[9,11]\) with step size \(0.2\), \(r\in[19,21]\) with step size \(0.2\), and \(b\in[2,5]\) with step size \(1\), where \(t\in[0,50]\) and the initial conditions are \(x(0)=-10,y(0)=0,z(0)=0\). Similarly to Example 9, we treat the first two IMFs of each input produced by EMD as the labels. The results for the signal \(x(t)\) with parameters set to \(\sigma=10.5,r=20.5,b=3.5\), predicted by the trained RRCNN model, are shown in Fig. 7. They show that RRCNN has good learning and generalization capabilities for solutions of the Lorenz equation, and essentially matches the performance of EMD.

Figure 7: Components of \(x(t)\) of the Lorenz equation with \(\sigma=10.5,r=20.5,b=3.5\) by EMD and RRCNN. Same as Fig. 6.

### 4.7 Is RRCNN capable of processing real signals?

We hereby employ the RRCNN model to process real data, namely the length of day (LOD, data source: [http://hpiers.obspm.fr/eoppc/eopc04/eopc04.62-now](http://hpiers.obspm.fr/eoppc/eopc04/eopc04.62-now), start date: Jan. 1, 1962, end date: May 24, 2022), which is widely used in the verification of signal decomposition methods. To generate input signals from the LOD data for training the RRCNN model, we first split the LOD into segments of length 720 (about two years) with a stride of 180 (about half a year). Then, as with the solutions of differential equations, the main challenge in training the RRCNN model on real signals is the ground truth calibration. We use the EWT method to produce the labels for each segment, because it can produce an arbitrarily pre-set number of components.

Figure 8: Comparisons of the components obtained by EWT and RRCNN for the two LOD series given in Examples 11-12. Top panel: the original signal. 2nd-6th panels: the components obtained by EWT (blue solid curve) and RRCNN (red dotted curve).
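The segmentation of the LOD record can be sketched as follows (our own minimal illustration):

```python
import numpy as np

def segment(series, length=720, stride=180):
    # Split a 1-D record into overlapping windows: length 720 is about
    # two years of daily data, stride 180 about half a year.
    return np.stack([series[i:i + length]
                     for i in range(0, len(series) - length + 1, stride)])
```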
**Example 11**: After the RRCNN model is trained, we apply it to an input consisting of the LOD data ranging from Dec. 12, 2018 to Nov. 30, 2020, which is excluded from the training dataset.

**Example 12**: We also apply the trained RRCNN model to another input, the LOD data ranging from Dec. 7, 2019 to Nov. 25, 2021, likewise excluded from the training dataset.

The components obtained from EWT and RRCNN for the two inputs are depicted in Fig. 8 (a)-(b). The plots show that RRCNN approximates EWT well.

### 4.8 Computational time of different methods

To compare the computational time of the proposed method with that of the other methods, we apply them to decompose the signal in Example 4 and record the corresponding runtimes. Since, in the prediction phase, RRCNN has the advantage of processing the decomposition of multiple signals in parallel, we also compare the running times of these methods when decomposing multiple signals. In this case, we do not construct new signals to be decomposed but simply repeat the processing of the signal in Example 4, which does not affect the runtime comparison. All experiments are performed in Python 3.8.12 or Matlab R2020b on a Dell Precision 5820 tower with an Intel(R) Xeon(R) W-2102 processor (2.90 GHz), 64 GB of memory, and the Ubuntu 18.04.3 operating system.

We depict the running times of the different methods when decomposing different numbers of signals in Fig. 9, from which we draw the following findings. When decomposing only one signal, the computational efficiency of RRCNN ranks in the middle of the 11 methods compared in this work. However, as the number of signals increases to 10, the computational efficiency of RRCNN begins to improve, and its ranking rises to 4th, surpassing the IF and VMD methods; when the number increases to 100, it ranks 2nd, beyond the EMD and DCT_GAS_FDM methods. Although there is still a gap between RRCNN and the EWT method, which ranks 1st in computational efficiency, the gap shrinks significantly as the number of signals increases.

Figure 9: Comparison of the computational time of different signal decomposition methods when decomposing different numbers of signals.

## 5 Conclusion

In this paper, we use deep learning techniques to tackle the problem of decomposing a non-stationary signal into oscillatory components. Specifically, we first construct the RRCNN inner loop block for obtaining the local average of a given signal, and then cascade these blocks into a deeper network, called RRCNN, which is used to decompose a given signal. Since the proposed RRCNN model is based on the deep learning framework and is a supervised model, it inherits the advantages of both kinds of models. First, the convolutional filter weights of the model are learned from the input signals, which makes the proposed method more adaptive. Second, common deep learning tools, like the residual structure and nonlinear activation functions, can be added to increase the expressive ability and flexibility of the proposed model. Third, the proposed model can be customized according to the application; for example, when processing signals composed of orthogonal components, an inner product term can be added to the loss function to enhance the orthogonality of the derived components. To verify the performance of the proposed model, we compare it with the existing models from seven aspects, using both artificial and real data in the experiments.
All results show that the proposed method handles boundaries, mode mixing effects, and the orthogonality of the decomposed components better, and is more robust than the existing methods. On the other hand, RRCNN has the classical limitations of supervised models; for example, the labels must be given in advance in the training phase. Therefore, the goal of this work is not to replace any existing method, but to propose a completely new kind of approach, based on deep learning and supervised learning, that adds to the existing methods a more flexible and adaptive way of processing signals. In the future, we plan to work on the extension of this work to multivariate signals. Furthermore, given that the main limitation of the RRCNN approach is that it requires other decomposition methods to calibrate the ground truths before the training phase, we plan to develop an unsupervised learning method with performance similar to the RRCNN algorithm, but that does not require any training based on other techniques.

## Declaration of Competing Interest

No conflict of interest exists in the submission of this manuscript, and the manuscript is approved by all authors for publication. We declare that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part.

## Acknowledgments

F. Zhou's research was partially supported by the National Natural Science Foundation of China [grant number 11901113], the Guangdong Basic and Applied Basic Research Foundation [grant number 2020A1515110951], and the China Scholarship Council [grant number 202008440024]. A. Cicone is a member of the Italian GNCS of the INdAM. H. Zhou's research was partially supported by the National Science Foundation of US [grant number DMS-1830225], and the Office of Naval Research [grant number N00014-18-1-2852].
2301.05815
First Three Years of the International Verification of Neural Networks Competition (VNN-COMP)
This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP) held in 2020, 2021, and 2022. In the VNN-COMP, participants submit software tools that analyze whether given neural networks satisfy specifications describing their input-output behavior. These neural networks and specifications cover a variety of problem classes and tasks, corresponding to safety and robustness properties in image classification, neural control, reinforcement learning, and autonomous systems. We summarize the key processes, rules, and results, present trends observed over the last three years, and provide an outlook into possible future developments.
Christopher Brix, Mark Niklas Müller, Stanley Bak, Taylor T. Johnson, Changliu Liu
2023-01-14T04:04:12Z
http://arxiv.org/abs/2301.05815v1
# First Three Years of the International Verification of Neural Networks Competition (VNN-COMP) ###### Abstract This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP) held in 2020, 2021, and 2022. In the VNN-COMP, participants submit software tools that analyze whether given neural networks satisfy specifications describing their input-output behavior. These neural networks and specifications cover a variety of problem classes and tasks, corresponding to safety and robustness properties in image classification, neural control, reinforcement learning, and autonomous systems. We summarize the key processes, rules, and results, present trends observed over the last three years, and provide an outlook into possible future developments. Keywords: certified robustness, adversarial robustness, formal verification, formal methods, neural networks, machine learning, deep learning ## 1 Introduction Neural networks are increasingly used in safety-critical applications [7; 23]. However, it has become apparent that they are highly susceptible to adversarial examples [47], i.e., minor and possibly imperceptible input perturbations can cause the output to change significantly. As such perturbations can occur in the real world either at random or due to malicious actors, it is of utmost importance to analyze the robustness of deep-learning-based systems in a mathematically rigorous manner before applying them in safety-critical domains. To this end, a wide range of methods and corresponding software tools have been developed [12; 16; 22; 24]. However, with tools becoming ever more numerous and specialized, it became increasingly difficult for practitioners to decide which tool to use. In 2020, the inaugural VNN-COMP was organized to tackle this problem and allow researchers to compare their neural network verifiers on a wide set of benchmarks. Initially conceived as a friendly competition with little standardization, it was increasingly standardized and automated to ensure a fair comparison on cost-equivalent hardware using standardized formats for both properties and networks. In this work, we outline this development, summarize key rules and results, describe the high-level trends observed over the last three years, and provide an outlook on possible future developments. ## 2 Neural Network Verification We consider the neural network verification problem defined as follows: given an input specification \(\phi\subseteq\mathds{R}^{d_{\text{in}}}\), also called the pre-condition, an output specification \(\psi\subseteq\mathds{R}^{d_{\text{out}}}\), also called the post-condition, and a neural network \(N:\mathds{R}^{d_{\text{in}}}\mapsto\mathds{R}^{d_{\text{out}}}\), we aim to prove that the pre-condition implies the post-condition, i.e., \[\forall x:x\vDash\phi\Rightarrow N(x)\vDash\psi, \tag{2.1}\] or provide a counterexample. Inspired by the notation common in the SAT-solver community, we encode this problem by specifying a constraint set describing an adversarial example, i.e., \[\exists x:x\vDash\phi\wedge N(x)\vDash\neg\psi. \tag{2.2}\] Instances where Equation (2.2) is satisfiable, so that the property encoded by Equation (2.1) does _not_ hold, are therefore called SAT; instances where Equation (2.2) is unsatisfiable, so that the property encoded by Equation (2.1) has been shown to hold, are called UNSAT.
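To make the SAT case concrete: for an \(\ell_{\infty}\) pre-condition and a classification post-condition, a counterexample can be sought by sampling, as in the following sketch (an illustration only, with a hypothetical fully connected ReLU network; it can return SAT with a witness but can never establish UNSAT):

```python
import numpy as np

def relu_net(x, params):
    # Forward pass of a fully connected ReLU network given as a list
    # of (weight, bias) pairs, the last pair being the output layer.
    *hidden, (W_out, b_out) = params
    for W, b in hidden:
        x = np.maximum(W @ x + b, 0.0)
    return W_out @ x + b_out

def search_counterexample(params, x0, eps, target, n_samples=10_000):
    # Randomly sample the l_inf-ball of radius eps around x0 and
    # return a concrete violation of "arg max equals target" if found.
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        x = x0 + rng.uniform(-eps, eps, size=x0.shape)
        if np.argmax(relu_net(x, params)) != target:
            return x  # SAT, with witness x
    return None       # unknown -- NOT a proof of UNSAT
```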
Note that while it is possible to show SAT by directly searching for counter-examples using adversarial attacks [17; 32], those approaches are not complete, i.e., if they are not successful in finding a counter-example this does _not_ imply that a property holds.

**Example Problems.** One particularly popular property is the robustness to adversarial \(\ell_{\infty}\)-norm bounded perturbations in image classification. There, the network \(N\) computes a numerical score \(y\in\mathds{R}^{d_{\text{out}}}\) corresponding to its confidence that the input belongs to each of the \(d_{\text{out}}\) classes for each input \(x\in\mathds{R}^{d_{\text{in}}}\). The final classification \(c\) is then computed as \(c=\operatorname*{arg\,max}_{i}N(x)_{i}\). In this setting, an adversary may want to perturb the input such that the classification changes. Therefore, the verification intends to prove that

\[\operatorname*{arg\,max}_{i}N(x^{\prime})_{i}=t,\quad\forall x^{\prime}\in\{x^{\prime}\in\mathds{R}^{d_{\text{in}}}\mid\|x-x^{\prime}\|_{\infty}\leq\epsilon\},\]

where \(t\) is the target class, \(x\) is the original image and \(\epsilon\) is the maximal permissible perturbation magnitude. There, the pre-condition \(\phi\) describes the inputs an attacker can choose from (\(\phi=\{x^{\prime}\in\mathds{R}^{d_{\text{in}}}\mid\|x-x^{\prime}\|_{\infty}\leq\epsilon\}\)), i.e., an \(\ell_{\infty}\)-ball of radius \(\epsilon\), and the post-condition \(\psi\) describes the output space corresponding to a classification as the target class \(t\) (\(\psi=\{y\in\mathds{R}^{d_{\text{out}}}\mid y_{t}>y_{i},\forall i\neq t\}\)).

When neural networks are used as controllers, more complex properties can be relevant. For example, in the ACAS Xu setting [23] a neural controller gives action recommendations based on the relative position and heading of the controlled and intruder aircraft. There, we want to, e.g., ensure that for inputs \(\mathcal{D}\) corresponding to the intruder aircraft being straight ahead and heading our way, neither of the evasive maneuvers "strong left" (SL) or "strong right" (SR) is considered the worst option. More formally, we want to verify that

\[\operatorname*{arg\,min}_{i}N(x^{\prime})_{i}\notin\{\operatorname{SL},\operatorname{SR}\},\;\forall x^{\prime}\in\mathcal{D}.\]

Here, we obtain a more complex, non-convex post-condition

\[\psi=\mathds{R}^{d_{\text{out}}}\backslash\big{(}\{y\in\mathds{R}^{d_{\text{out}}}\mid y_{\operatorname{SL}}<y_{i},\;\forall i\notin\{\operatorname{SL},\operatorname{SR}\}\}\cup\{y\in\mathds{R}^{d_{\text{out}}}\mid y_{\operatorname{SR}}<y_{i},\;\forall i\notin\{\operatorname{SL},\operatorname{SR}\}\}\big{)}.\]

## 3 Competition Goals

VNN-COMP is organized to further the following goals.

**Define Standards.** To enable practitioners to easily use and evaluate a range of different verification approaches and tools without substantial overhead, it is essential that all tools can process both networks and specifications in a standardized file format. To this end, the second iteration of the VNN-COMP established such a standard: problem specifications (pre- and post-conditions) are defined using the VNN-LIB [48] format, and neural networks are defined using the ONNX [3] standard. In 2022, a standardized format for counterexamples was additionally introduced.
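For example, a network in the ONNX format can be loaded and queried with the `onnxruntime` package; the file name and input shape below are placeholders:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("net.onnx")        # hypothetical network file
input_name = sess.get_inputs()[0].name
x = np.zeros((1, 784), dtype=np.float32)       # assumed input shape
logits = sess.run(None, {input_name: x})[0]    # forward pass
```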
**Facilitate Verification Tool Comparison.** Every year, dozens of papers are published on neural network verification, many proposing not only new methods but also new benchmarks. With authors potentially investing more time into tuning their method to the chosen benchmarks, a fair comparison between all these methods is difficult. VNN-COMP facilitates such a comparison between a large number of tools on a diverse set of benchmarks, using cost-equivalent hardware and test instances not available to participants in advance. Letting participants and industry practitioners propose a wide range of interesting benchmarks yields not only a ranking on the problems typically used in the field but also highlights which tools are particularly suitable for more specialized problems. Further, the standardized installation and evaluation process enables any publication to compare against a large number of state-of-the-art tools.

**Shape Future Work Directions.** The visibility VNN-COMP lends to the problems underlying the considered benchmarks has the potential to raise their profile in the community. As benchmarks are developed jointly by industry and academia, this constitutes a great opportunity to shape future research to be as impactful as possible. Over the last years, benchmarks have featured ever-increasing network sizes (see Table 5.2), promoting scalability; more complex networks (including, e.g., residual [19] and max-pooling layers [64]), promoting generalizability; and more complex specifications, enabling more interesting properties to be analyzed.

**Bring Researchers Together.** Both the rule and benchmark discussion phase in the lead-up to the competition and the in-person presentation of results at the Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS)¹ provide participants with a great opportunity to meet fellow researchers and discuss the future of the field. Further, the tool and benchmark descriptions participants provide for the yearly reports [2; 34; 5] serve as an excellent summary of state-of-the-art methods, allowing people entering the field to get a quick overview.

Footnote 1: [https://fomlas2022.wixsite.com/fomlas2022](https://fomlas2022.wixsite.com/fomlas2022)

## 4 Overview of Three Years of VNN-COMP

In this section, we provide a high-level description of how the VNN-COMP evolved from 2020 to 2022, listing all participants and the final rankings in Table 5.1. Generally, performance is measured on a set of equally weighted _benchmarks_, each consisting of a set of related _instances_. Each instance consists of a trained neural network, a timeout, and input and output constraints. Below, we group benchmarks into _categories_ to enable a quicker comparison between years.

### VNN-COMP 2020

The inaugural VNN-COMP² [2] was held in 2020 as a "friendly competition" with no winner. Its main goal was to provide a stepping stone for future iterations by starting the process of defining common problem settings and identifying possible avenues for standardization.

Footnote 2: [https://sites.google.com/view/vnn20/vnncomp](https://sites.google.com/view/vnn20/vnncomp)

#### 4.1.1 Benchmarks

Three benchmark categories were considered, with only one of the eight teams participating in all of them:

* two benchmarks, based on ACAS Xu and MNIST.
* one benchmark, based on MNIST.
* two benchmarks, based on MNIST and CIFAR10.

#### 4.1.2 Evaluation

Teams evaluated their tools using their own hardware. While this simplified the evaluation process, it made the reported results incomparable due to significant hardware differences: the teams reported using between 4 and 40 CPUs and between 16 and 756 GB of RAM.
### VNN-COMP 2021

Based upon the insights gained in 2020, the second iteration of VNN-COMP³ was organized with a stronger focus on comparability between the participating tools [5].

Footnote 3: [https://sites.google.com/view/vnn2021](https://sites.google.com/view/vnn2021)

#### 4.2.1 Benchmarks

Teams were permitted to propose one benchmark with a total timeout of at most six hours, split over its constituent instances. Networks were defined in the ONNX format [3] and problem specifications were given in the VNN-LIB format [48]. To prevent excessive tuning to specific benchmark instances, benchmark proposers were encouraged to provide a script enabling the generation of new random instances for the final tool evaluation. However, teams were allowed to tune their tools for each benchmark, using the initial set of benchmark instances. In 2021, the benchmarks could be split into the following categories, with multiple teams participating in all of them:

* two benchmarks, based on ACAS Xu and MNIST.
* one benchmark, based on MNIST.
* three benchmarks, based on CIFAR10.
* one benchmark, based on MNIST.
* one benchmark, based on CIFAR10.
* one benchmark, based on database indexing.

#### 4.2.2 Evaluation

To allow for comparability of results, all tools were evaluated on equal-cost hardware using Amazon Web Services (AWS). Each team could decide whether they wanted their tool to be evaluated on a CPU-focused r5.12xlarge or a GPU-focused p3.2xlarge instance (see Table 4.1 for more details). Further, instead of providing results and runtimes themselves, teams had to prepare scripts automating the installation and execution of their tools. After the submission deadline, the organizers installed and evaluated each tool using the provided scripts. In many cases, this process required some debugging in a back-and-forth between the organizers and the teams.

**Scoring.** For every benchmark, 10 points were awarded for correctly showing an instance to be SAT/UNSAT, with a 100-point penalty for incorrect results (see Table 4.2). A simple adversarial attack was used to identify "easy" SAT instances, on which the available points were reduced from 10 to 1. If tools reported contradicting results on an instance, the ground truth was decided by a majority vote. Bonus points were awarded to the fastest two tools on every instance (two points for the fastest and one point for the second fastest). Runtimes differing by less than 0.2 seconds, or lying below one second, were considered equal, so multiple teams could receive the two-point bonus. To correct for the notable differences in startup overhead, e.g., due to the need to acquire a GPU, the overhead was measured as the runtime on a trivial instance and subtracted from every runtime. The benchmark score was computed from the points obtained as discussed above by normalizing with the maximum number of points obtained by any tool; consequently, the tool with the most points was assigned a score of 100%. The total competition score was simply the sum of the per-benchmark scores, corresponding to equal weighting.
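The per-instance points and per-benchmark normalization can be summarized by the following sketch (our own reading of the rules; time bonuses and edge cases such as non-positive totals are omitted):

```python
def instance_points(truth, result, simple_sat=False):
    # Points per instance under the 2021 rules (cf. Table 4.2).
    if result not in ("sat", "unsat"):
        return 0
    if result != truth:
        return -100
    return 1 if (truth == "sat" and simple_sat) else 10

def benchmark_scores(points_per_tool):
    # Normalize by the best tool so that the leader scores 100%.
    best = max(points_per_tool.values())
    return {tool: 100.0 * pts / best
            for tool, pts in points_per_tool.items()}
```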
**Results.** In 2021, 12 teams participated in the competition. \(\alpha\)-\(\beta\)-CROWN won first place, followed by VeriNet in second, and ERAN/OVAL in third, depending on the overhead measurement and voting scheme used to determine result correctness. Except for VeriNet, they all used the GPU instance.

### VNN-COMP 2022

In the most recent iteration of VNN-COMP⁴ [34], the evaluation was fully automated, allowing the number of benchmarks to be increased.

Footnote 4: [https://sites.google.com/view/vnn2022](https://sites.google.com/view/vnn2022)

#### 4.3.1 Benchmarks

In 2022, each participating team could submit or endorse up to two benchmarks, allowing industry practitioners to propose benchmarks without entering a tool. Each benchmark had a total timeout of between three and six hours, with randomization of instances being mandatory this year. Tool tuning was still permitted on a per-benchmark level and, in practice, also per network using the network's statistics. The submitted benchmarks can be grouped into the following categories:

* three benchmarks, based on reinforcement tasks and MNIST.
* one benchmark.
* one benchmark, based on database indexing and cardinality estimation.
* three benchmarks, based on CIFAR10.
* two benchmarks, based on CIFAR10, CIFAR100, and TinyImageNet.
* one benchmark, based on image segmentation.

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Returned Result} \\ \cline{2-4} Ground Truth & SAT & UNSAT & Other \\ \hline SAT, simple & \(+1\) & \(-100\) & \(0\) \\ SAT, complex & \(+10\) & \(-100\) & \(0\) \\ UNSAT & \(-100\) & \(+10\) & \(0\) \\ \hline \hline \end{tabular} \end{table} Table 4.2: Points per instance in 2021. SAT instances were split into simple and complex based on whether a simple adversarial attack was successful.

\begin{table} \begin{tabular}{l c c c c l} \hline \hline & 2021 & 2022 & vCPUs & RAM [GB] & GPU \\ \hline r5.12xlarge & ✓ & ✗ & 48 & 384 & ✗ \\ p3.2xlarge & ✓ & ✓ & 8 & 61 & V100 GPU with 16 GB memory \\ m5.16xlarge & ✗ & ✓ & 64 & 256 & ✗ \\ g5.8xlarge & ✗ & ✓ & 32 & 128 & A10G GPU with 24 GB memory \\ t2.large & ✗ & ✓ & 2 & 8 & ✗ \\ \hline \hline \end{tabular} \end{table} Table 4.1: Available AWS instances.

#### 4.3.2 Evaluation

Similar to the previous year, teams could choose between a range of AWS instance types (see Table 4.1) providing a CPU, GPU, or mixed focus. Except for the much weaker t2.large instance, all instances were priced at around three dollars per hour. In contrast to 2021, where organizers had to manually execute installation scripts and debug together with the participants, an automated submission and testing pipeline was set up. Teams could submit their benchmarks and tools via a web interface by specifying a git repository, commit hash, and post-installation script (enabling, e.g., the acquisition of licenses). This triggered a new AWS instance to be spawned where all installation scripts were executed. If the installation succeeded, the tool was automatically evaluated on a previously selected set of benchmarks before the instance was terminated again. To enable debugging by the participants, all outputs were logged and made accessible live via the submission website, allowing them to monitor the progress. This automation allowed each team to perform as many tests as necessary without the need to wait for feedback from the organizers. Furthermore, teams could test on the same AWS instances used during the final evaluation without having to pay for their usage, with the costs kindly covered by the SRI Lab of ETH Zurich.

**Scoring.** Unlike in VNN-COMP 2021, SAT instances were not divided into simple and complex for scoring purposes, leading to 10 points being awarded for all correct results (see Table 4.3). Further, instead of relying on a voting scheme to determine the ground truth in the presence of dissent among tools, the burden of proof was placed on the tool reporting SAT, requiring it to provide a concrete counter-example.

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Returned Result} \\ \cline{2-4} Ground Truth & SAT & UNSAT & Other \\ \hline SAT & \(+10\) & \(-100\) & \(0\) \\ UNSAT & \(-100\) & \(+10\) & \(0\) \\ \hline \hline \end{tabular} \end{table} Table 4.3: Points per instance in 2022.
If no valid counter-example was provided, the corresponding tool was judged to be incorrect and awarded the 100-point penalty.

**Results.** Out of the eleven participating teams, \(\alpha\)-\(\beta\)-CROWN placed first, MN-BaB second, and VeriNet third. For a comparison of all participating tools across all benchmarks, see Figure 4.1.

## 5 Comparison Across the Years

In Table 5.1 we list all tools participating in any iteration of the VNN-COMP and refer the interested reader to the corresponding VNN-COMP report for a short description of each tool. In Table 5.2, we compare the scope of the competition across the last three years. As can be seen, the number, variety, complexity, and scale of benchmarks increased with every iteration. Starting with 5 benchmarks covering simple fully connected (FC) and convolutional (Conv) networks in 2020, the 2022 competition saw 12 benchmarks including a range of complex residual and U-Net architectures with up to 140 million parameters. Further, we believe that the increasing number of registered tools clearly shows that the interest in both the field in general and the competition in particular is growing year by year. However, the large and increasing discrepancy between registered and submitted tools might indicate that many teams feel like they are not able to invest the significant effort required to support not only the standardized network and specification formats but also the wide variety of different benchmarks. As tools are ranked by their total score, with each benchmark providing a score of up to 100%, the final ranking is biased towards tools that support all benchmarks. While we believe that this is a valuable incentive for tool developers to develop methods that can be easily applied to new problems, it might be daunting for new teams to implement all necessary features, deterring them from participating at all.

**Successful Trends.** While all teams started out using only CPUs in 2020, only one of the top four teams relied solely on CPUs in 2021, and all top three teams chose GPU instances in 2022. This transition enabled both the more efficient evaluation of simple bound propagation methods such as DeepPoly [45], CROWN [62], and IBP [18] and approximate solutions of the linear programming (LP) problems arising during verification [55; 14; 58]. Similarly, the top two teams in 2021 and all top three teams in 2022 relied on a branch-and-bound (BaB) based approach, recursively breaking down the verification problem into easier subproblems until it becomes solvable, thus effectively enabling the use of GPUs to solve tighter mixed integer linear programming (MILP) encodings of the verification problem [11; 14; 55; 61]. Both top two teams in the most recent iteration combined this approach with additional multi-neuron [14] and solver-generated cutting-plane constraints [61], first introduced by the 3rd-place ERAN in 2021 [36]. We thus conclude that successful tools leverage hardware accelerators such as GPUs to efficiently handle tight (MI)LP encodings of the verification problem.
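As a minimal illustration of the bound-propagation primitive underlying these tools, the following sketch implements IBP [18] for a fully connected ReLU network (our own illustration; returning True corresponds to UNSAT, while False only means "unknown"):

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # Interval propagation through the affine map x -> W @ x + b.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_verify(params, x0, eps, target):
    # Sound but incomplete check that every point of the l_inf-ball
    # of radius eps around x0 is classified as `target`.
    lo, hi = x0 - eps, x0 + eps
    *hidden, (W_out, b_out) = params
    for W, b in hidden:
        lo, hi = interval_affine(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU
    lo, hi = interval_affine(W_out, b_out, lo, hi)
    return all(lo[target] > hi[i] for i in range(len(lo)) if i != target)
```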
## 6 Outlook

Below we discuss considerations that could enable future iterations of the VNN-COMP to serve its goals and the community, discussed in Section 3, even better.

### Tracking Year-on-Year Progress

While we believe VNN-COMP already provides reasonable mechanisms for comparing the tools submitted in every iteration, the changing benchmarks and tools make it hard to track the year-on-year progress of the field as a whole. Because some tools are heavily optimized for the specific benchmarks of that year's competition, simply evaluating them on the benchmarks of previous (or future) years (even if they support them) does not yield a meaningful progress metric. While one benchmark from the inaugural competition was included as an unscored extra benchmark in the two following iterations (cifar2020), only a few unsolved instances remain, making it a very insensitive measure of further improvements. While including all benchmarks from previous years in the (scored) benchmark selection would place an undue burden on participants, choosing one particularly challenging, representative, and interesting benchmark every year to be included as a (scored) extra benchmark in future iterations might be a good compromise. Additionally, a more restrictive stance on tool tuning could enable a much more representative evaluation of new tools on old benchmarks.

### Tool Tuning

Many of the most successful tools do not employ a single verification strategy, but a whole portfolio of different modes, all coming with different hyperparameters. Depending on their choice, tool performance can vary significantly, making it essential for practitioners to get their choice right when applying these tools to new problems. However, this can be highly challenging given the large number of parameters and their complex interactions, especially without in-depth knowledge of the tool. For VNN-COMP, tuning tools was allowed explicitly on a per-benchmark basis and implicitly on a per-network basis, enabling teams to showcase the maximum performance of their tools. However, for future iterations it might be interesting to restrict tuning for some or all benchmarks to encourage authors to develop autotuning strategies, making the adaptation of their tools to new problems much easier. This could, for example, be implemented by not only generating random specifications but also random networks.

Figure 4.1: Cactus plot for all tools in the VNN-COMP 2022 across all benchmarks.

### Batch Processing

Every VNN-COMP benchmark consists of a set of instances that, while typically related, are evaluated in isolation, with the tool being terminated in between. Unfortunately, this means that any startup overhead, such as acquiring a GPU or preprocessing the considered network, is incurred for every instance. This is in contrast to most practical settings, where a large number of input-output specifications are considered for the same network. This discrepancy is accounted for by measuring and subtracting this overhead from each individual runtime. However, this overhead measurement process is not only flawed and noisy; the overhead can also dominate the evaluation time for easy instances. In future iterations, tools could be provided with a whole batch of properties at once to more closely match their typical application. Further, timeouts are currently defined per instance, making it optimal to always attempt verification until the timeout.
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
\multirow{2}{*}{Tool} & \multirow{2}{*}{Organization} & \multicolumn{3}{c}{Participation, Place} & References \\
\cline{3-5}
 & & 2020 [2] & 2021 [5] & 2022 [34] & \\
\hline
\(\alpha\)-\(\beta\)-CROWN & Carnegie Mellon, Northeastern, Columbia, UCLA & ✗ & ✓(1/12) & ✓(1/11) & [63, 57, 58] \\
AveriNN & Kansas State University & ✗ & ✗ & ✓(11/11) & N/A \\
CGDTest & University of Waterloo & ✗ & ✗ & ✓(5/11) & N/A \\
Debona & RWTH Aachen University & ✗ & ✓(6/12) & ✓(8/11) & [9] \\
DNNF & University of Virginia & ✗ & ✓(12/12) & ✗ & [42] \\
ERAN & ETH Zurich, UIUC & ✓ & ✓(3/12) & ✗ & [46, 43, 45, 36, 39] \\
FastBATLLNN & University of California & ✗ & ✗ & ✓(9/11) & N/A \\
Marabou & Hebrew University of Jerusalem, Stanford University, Amazon Web Services & ✗ & ✓(5/12) & ✓(7/11) & [25] \\
MIPVerify & Massachusetts Institute of Technology & ✓ & ✗ & ✗ & [49] \\
MN-BaB & ETH Zurich & ✗ & ✗ & ✓(2/11) & [14] \\
nnenum & Stony Brook University & ✓ & ✓(8/12) & ✓(4/11) & [4] \\
NNV & Vanderbilt University & ✓ & ✓(9/12) & ✗ & [50, 53, 52, 51, 56] \\
NV.jl & Carnegie Mellon, Northeastern & ✗ & ✓(10/12) & ✗ & [28] \\
Oval & University of Oxford & ✓ & ✓(3/12) & ✗ & [10, 11, 31] \\
PeregriNN & University of California & ✓ & ✗ & ✓(6/11) & [26] \\
RPM & Stanford & ✗ & ✓(11/12) & ✗ & [54] \\
Venus & Imperial College London & ✓ & ✓(7/12) & ✗ & [8, 27] \\
VeraPak & Utah State University & ✗ & ✗ & ✓(10/11) & N/A \\
VeriNet & Imperial College London & ✓ & ✓(2/12) & ✓(3/11) & [20, 21] \\
\hline \hline
\end{tabular}
\end{table}
Table 5.1: Participating tools.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
 & 2020 & 2021 & 2022 \\
\hline
Tools registered & N/A & 15 & 18 \\
Tools submitted & 8 & 13 & 11 \\
Benchmarks submitted & 5 & 8 (+1 unscored) & 12 (+1 unscored) \\
Max. network depth & 8 & 18 & 27 \\
Max. network parameters & 855,600 & 42,059,431 (sparse) & 138,356,520 \\
Activation functions & ReLU, tanh, sigmoid & ReLU, sigmoid, MaxPool, AveragePool \\
Layer types & Fully Connected, Conv & Fully Connected, Conv, Residual & Fully Connected, Conv, Residual, BatchNorm \\
Applications & Image Recognition, Control, Database Indexing & Image Recognition, Control, Database Indexing, Cardinality Estimation \\
Mean \#benchmarks/tool & 3.0 (min 2, max 5) & 5.5 (min 1, max 9) & 7.3 (min 1, max 13) \\
\hline \hline
\end{tabular}
\end{table}
Table 5.2: Comparison across years.

However, in a practical setting, recognizing instances where verification is likely to fail and stopping early can significantly increase a method's throughput and thus its utility. Switching to per-benchmark timeouts for the VNN-COMP would incentivize the development of effective heuristics towards this goal. Furthermore, tools could benefit from proof-sharing approaches [15], where verified sub-problems from one instance are reused for the following instances.

### Continuous Competition

In addition to a yearly VNN-COMP, tool submissions for the most recent benchmark set could be accepted on a rolling basis, made possible by the automated submission and evaluation process introduced this year. This would transform the competition from a yearly snapshot of current research into a centralized repository of the state of the art, updating as teams submit new methods that they publish. However, if not implemented with great care, this would enable tools to be tuned on the evaluation instances before submission, leading to a skewed comparison.
Further, the question of funding the required cloud compute remains open.

### Soundness Evaluation

An inherent requirement for neural network verifiers is that they are sound, i.e., they never claim a safety property holds when in fact it does not. However, assessing soundness is difficult, as the ground truth for VNN-COMP problem instances is generally only known if it was shown to be SAT with a valid counter-example. This is particularly problematic when no instances in a benchmark are SAT, in which case immediately returning UNSAT for every instance cannot be demonstrated to be unsound. Requiring a certain portion of instances of every benchmark to be SAT (in expectation) could alleviate this issue. An interesting alternative avenue to tackle this challenge is proof generation [37]. An extra category could be introduced where tools are additionally required or incentivized to provide a verifiable proof if they claim a property is UNSAT. While big soundness bugs are rare, few or none of the submitted tools are floating-point sound, i.e., even tools that would be sound using exact arithmetic might become unsound due to imprecisions introduced by floating-point arithmetic. This is particularly pronounced if tools choose to use single-precision computations for performance reasons. The sensitivity of different tools to such issues could be evaluated on a benchmark specifically designed to uncover floating-point soundness issues.

### Other Competition Modes

A dedicated falsifier category could be added to encourage teams to develop and submit stronger attacks, going beyond the standard adversarial attacks. Further, a meta-solver category could be added to investigate whether approaches that heuristically pick from a range of methods, successful in other domains [59], can significantly outperform individual tools. However, it would need to be ensured that these tools provide sufficient value over individual submissions, which already combine different verification strategies.

### Promote Common Tool Development

Parsing large and complex VNN-LIB files or converting ONNX files to other common formats can be time-consuming to implement. While many teams implemented their own tools to this end, available open-source tools for the parsing of VNN-LIB files [1] and the optimization of ONNX files (DNNV [41]) should be highlighted and their continued development encouraged.

### Remaining Challenges

We can broadly identify four groups of challenges in neural network verification:

* Verifying relatively small but only weakly regularized networks, which requires an extraordinarily precise analysis, can still be intractable with current methods.
* Scaling precise methods to medium-sized networks (e.g. small ResNets) and datasets (e.g. Cifar100 or TinyImageNet) with a large number of neurons is challenging, as the cost of branch-and-bound based algorithms scales exponentially with the required split depth, making branching decisions both harder and more important.
* Scaling verification to large networks (e.g. VGG-Net 16) and datasets (e.g. ImageNet) in the presence of dense input specifications requires particularly memory-efficient implementations due to the large number of neurons.
* Verification outside of the classification setting is underexplored, leading to a lack of established approaches for, e.g., image segmentation or object detection.

Orthogonally, the training of certifiably robust networks remains an open problem.
Despite significant progress over recent years [6; 18; 33; 35; 38; 40; 60], networks trained specifically to exhibit provable robustness guarantees still suffer from severely degraded standard accuracy. Therefore, most benchmarks considered in the VNN-COMP are based on networks trained without consideration for later certification. More broadly in the community, readers may also be interested in the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP)5 category on Artificial Intelligence and Neural Network Control Systems (AINNCS), which has been held annually since 2019 [30; 29] and considers neural network verification in closed-loop systems.

Footnote 5: [https://cps-vo.org/group/ARCH/FriendlyCompetition](https://cps-vo.org/group/ARCH/FriendlyCompetition)

## 7 Advice for Participants

In this section, we provide some guidance for teams that are interested in the VNN-COMP but have not participated yet. Note that these are neither rules nor requirements.

### For Benchmark Authors

The VNN-COMP intends to highlight areas where neural network verification can be successfully applied and to showcase interesting differences between the participating tools. Thus, ideally, tasks are not so hard that none of the instances can be solved by any participant, but also not so easy that every tool can solve all of them. For benchmarks related to real-world applications, we recommend including a detailed description of the background to highlight the benchmark's relevance and the characteristics of the verification problem, e.g., sparseness of the input or of some network layers. Further questions and requests for modifications should be expected while tool authors work on supporting the proposed benchmark.

### For Tool Authors

We recommend that teams use past benchmarks to test their tool before the new benchmarks are submitted. Given the ever-increasing diversity of submitted benchmarks, it may not be feasible to support all benchmarks from the get-go. If this is the case, we recommend focusing on the fully connected and convolutional ReLU networks, which in the past have covered a wide range of benchmarks, while minimizing implementation effort. Some operations, e.g., max-pooling, can also be simplified to multiple ReLU layers using tools such as DNNV [41]. Further, we recommend extensive testing against adversarial attacks to minimize the chance of soundness errors. For tools that are designed for very specific problems, we also want to encourage authors to submit a relevant benchmark highlighting this specialization. Finally, we recommend reading publications associated with the well-performing tools (see Table 5.1) to gain a better understanding of the techniques used by successful teams.

## 8 Conclusions

In this report, we summarize the main processes and results of the three VNN-COMP held so far from 2020 to 2022. We highlight the growing interest in the field, expressed in an increasing number of registered teams and considered benchmarks, including some submitted by industry. Further, we observe that every year, the size and complexity not only of the considered networks but also of the specifications grew, driving and exemplifying progress in the field. Finally, we highlight the increase in accessibility of verification methods resulting from the standardized input and output formats and the automated installation and evaluation process required for participation in VNN-COMP.

###### Acknowledgements.
This material is based upon work supported by the Air Force Office of Scientific Research and the Office of Naval Research under award numbers FA9550-19-1-0288, FA9550-21-1-0121, FA9550-22-1-0019, FA9550-22-1-0450, and N00014-22-1-2156, as well as the Defense Advanced Research Projects Agency (DARPA) Assured Autonomy program through contract number FA8750-18-C-0089, and the National Science Foundation (NSF) under grants 1911017, 2028001, 2220401, and 2220426. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force, the United States Navy, DARPA, or NSF.
2302.08875
Optimal Training of Mean Variance Estimation Neural Networks
This paper focusses on the optimal implementation of a Mean Variance Estimation network (MVE network) (Nix and Weigend, 1994). This type of network is often used as a building block for uncertainty estimation methods in a regression setting, for instance Concrete dropout (Gal et al., 2017) and Deep Ensembles (Lakshminarayanan et al., 2017). Specifically, an MVE network assumes that the data is produced from a normal distribution with a mean function and variance function. The MVE network outputs a mean and variance estimate and optimizes the network parameters by minimizing the negative loglikelihood. In our paper, we present two significant insights. Firstly, the convergence difficulties reported in recent work can be relatively easily prevented by following the simple yet often overlooked recommendation from the original authors that a warm-up period should be used. During this period, only the mean is optimized with a fixed variance. We demonstrate the effectiveness of this step through experimentation, highlighting that it should be standard practice. As a sidenote, we examine whether, after the warm-up, it is beneficial to fix the mean while optimizing the variance or to optimize both simultaneously. Here, we do not observe a substantial difference. Secondly, we introduce a novel improvement of the MVE network: separate regularization of the mean and the variance estimate. We demonstrate, both on toy examples and on a number of benchmark UCI regression data sets, that following the original recommendations and the novel separate regularization can lead to significant improvements.
Laurens Sluijterman, Eric Cator, Tom Heskes
2023-02-17T13:44:47Z
http://arxiv.org/abs/2302.08875v2
# Optimal Training of Mean Variance Estimation Neural Networks ###### Abstract This paper focusses on the optimal implementation of a Mean Variance Estimation network (MVE network) (Nix and Weigend, 1994). This type of network is often used as a building block for uncertainty estimation methods in a regression setting, for instance Concrete dropout (Gal et al., 2017) and Deep Ensembles (Lakshminarayanan et al., 2017). Specifically, an MVE network assumes that the data is produced from a normal distribution with a mean function and variance function. The MVE network outputs a mean and variance estimate and optimizes the network parameters by minimizing the negative loglikelihood. In this paper, we discuss two points: firstly, the convergence difficulties reported in recent work can be relatively easily prevented by following the recommendation from the original authors that a warm-up period should be used. During this period, only the mean is optimized assuming a fixed variance. This recommendation is often not used in practice. We experimentally demonstrate how essential this step is. We also examine if keeping the mean estimate fixed after the warm-up leads to different results than estimating both the mean and the variance simultaneously after the warm-up. We do not observe a substantial difference. Secondly, we propose a novel improvement of the MVE network: separate regularization of the mean and the variance estimate. We demonstrate, both on toy examples and on a number of benchmark UCI regression data sets, that following the original recommendations and the novel separate regularization can lead to significant improvements. Neural Networks, Uncertainty Quantification, Density Estimation, Regression ## I Introduction Neural networks are gaining tremendous popularity both in regression and classification applications. In a regression setting, the scope of this paper, neural networks are used for a wide range of tasks such as the prediction of wind power (Khosravi and Nahavandi, 2014), bone strength (Shaikhina and Khovanova, 2017), and floods (Chaudhary et al., 2022). Due to the deployment of neural networks in these safety-critical applications, uncertainty estimation has become increasingly important (Gal, 2016). The uncertainty in the prediction can be roughly decomposed into two parts: epistemic or model uncertainty, the reducible uncertainty that captures the fact that we are unsure about our model, and aleatoric uncertainty, the irreducible uncertainty that arises from the inherent randomness of the data (Abdar et al., 2021; Hullermeier and Waegeman, 2019). In this paper, we refer to the latter as the variance of the noise, to avoid any confusion or philosophical discussions. The variance of the noise can be _homoscedastic_ if it is constant, or _heteroscedastic_ if it depends on the input \(\mathbf{x}\). There is a vast amount of research that studies the model uncertainty. Notable approaches include Bayesian neural networks (MacKay, 1992; Neal, 2012), dropout (Gal and Ghahramani, 2016; Gal et al., 2017), and ensembling (Heskes, 1997). Conversely, a lot less emphasis is often placed on the estimation of the variance of the noise. Monte-Carlo dropout, for example, simply uses a single homoscedastic hyperparameter. Some other methods, such as concrete dropout and the hugely popular Deep Ensembles (Lakshminarayanan et al., 2017), use a Mean Variance Estimation (MVE) network (Nix and Weigend, 1994). An MVE network, see Figure 1, works as follows. 
We assume that we have a data set consisting of \(n\) pairs \((\mathbf{x}_{i},y_{i})\), with \(y_{i}\sim\mathcal{N}\left(\mu(\mathbf{x}_{i}),\sigma^{2}(\mathbf{x}_{i})\right)\). An MVE network consists of two sub-networks that output a prediction for the mean, \(\hat{\mu}(\mathbf{x})\), and for the variance, \(\hat{\sigma}^{2}(\mathbf{x})\). These sub-networks only share the input layer and do not have any shared weights or biases. In order to enforce positivity of the variance, a transformation such as a softplus or an exponential is used. The network is trained by using the negative loglikelihood of a normal distribution as the loss function:

\[\mathcal{L}=\sum_{i=1}^{n}\frac{1}{2}\log(\hat{\sigma}^{2}(\mathbf{x}_{i}))+\frac{1}{2}\frac{(y_{i}-\hat{\mu}(\mathbf{x}_{i}))^{2}}{\hat{\sigma}^{2}(\mathbf{x}_{i})}\]

Since the MVE network is often used as the building block for complex uncertainty estimation methods, it is essential that it works well. Multiple authors have noted that the training of an MVE network can be unstable (Seitzer et al., 2021; Skafte et al., 2019; Takahashi et al., 2018). The main argument, elaborated on in the next section, is that the network will start focussing on areas where it does well at the start of the training process while ignoring poorly fitted regions. However, Nix and Weigend (1994) already warned about the possibility of harmful overfitting of the variance and gave the solution: _The training of an MVE network should start with a warm-up period where the variance is fixed and only the mean is optimized._ Additionally, the variance is initialized at a constant value in order to make all data points contribute equally to the loss. Nix and Weigend (1994) did not demonstrate the importance of this warm-up period in the original paper. In this paper, we empirically demonstrate that using a warm-up period can greatly improve the performance of MVE networks and fixes the instability noted by other authors. A limited amount of research has investigated possible improvements of the MVE network (Seitzer et al., 2021; Skafte et al., 2019). Most improvements require a significant adaptation of the training procedure, such as a different loss function or locally aware mini-batches. However, to the best of our knowledge, none have investigated our proposed easy-to-implement improvement: _The mean and variance in an MVE network should be regularized separately._ Most modern methods (Egele et al., 2021; Gal et al., 2017; Jain et al., 2020; Lakshminarayanan et al., 2017) appear to use the same regularization for both the mean and the variance. In fact, the current use of the MVE network often does not even easily allow for different regularizations. Typically, only a second output node is added to represent the variance, instead of an entire separate sub-network (see Figure 2). As we will demonstrate in this paper, separate regularization can be very beneficial for the predictive results.

_Contributions:_

* We provide experimental results that demonstrate the importance of a warm-up period as suggested by Nix and Weigend (1994).
* We investigate if it is beneficial to update the mean and variance simultaneously after the warm-up, as opposed to keeping the mean fixed after the warm-up and only learning the variance.
* We provide a theoretical motivation for why different regularization for the mean and variance in an MVE network is desirable and experimentally demonstrate that this can lead to significant improvements.
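To make this setup concrete, here is a minimal sketch of an MVE network and its loss in PyTorch. This is our own illustration, not the authors' implementation (their code is linked in the next section); the layer sizes are placeholders, and the softplus is one of the positivity transformations mentioned above.

```python
import torch
import torch.nn as nn

class MVENetwork(nn.Module):
    """Two sub-networks sharing only the input: one for the mean,
    one for the variance (kept positive with a softplus)."""
    def __init__(self, n_in: int, n_hidden: int = 40):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ELU(), nn.Linear(n_hidden, 1)
        )
        self.var_net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ELU(), nn.Linear(n_hidden, 1)
        )

    def forward(self, x):
        mean = self.mean_net(x).squeeze(-1)
        # Softplus plus a small constant keeps the variance positive
        # and numerically stable.
        var = nn.functional.softplus(self.var_net(x)).squeeze(-1) + 1e-6
        return mean, var

def gaussian_nll(y, mean, var):
    """Negative loglikelihood of y under N(mean, var), up to a constant."""
    return (0.5 * torch.log(var) + 0.5 * (y - mean) ** 2 / var).mean()
```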
_Organisation:_ This paper consists of six sections, this introduction being the first. In Section II, we go through the problems with MVE networks that have recently been reported in the literature. In the same section, we show that these problems can be resolved by following the recommendation of using a warm-up period. The following Section III gives a theoretical motivation in favor of updating both the mean and the variance after the warm-up, as opposed to keeping the mean fixed and only learning the variance. Section IV explains, both intuitively and using classical theory, why we expect to need different amounts of regularization for the mean and the variance estimates. Both the effect of the warm-up and of separate regularization are experimentally examined in Section V. The final section summarizes the results, gives a list of recommendations when training an MVE network, and provides possible avenues for future work. All the code used in the experiments of this paper can be found at [https://github.com/LaurensSluyterman/Mean_Variance_Estimation](https://github.com/LaurensSluyterman/Mean_Variance_Estimation).

## II Difficulties with Training MVE Networks

It is known that the training of an MVE network can be unstable (Seitzer et al., 2021; Skafte et al., 2019; Takahashi et al., 2018). The main argument is that the network may fail to learn the mean function for regions where it initially has a large error. In these regions, the variance estimate will increase, which implies that the residual does not contribute to the loss as much. The network will start to focus more on regions where it is performing well, while increasingly ignoring poorly fit regions. To illustrate what can happen, we reproduced an experiment from Seitzer et al. (2021). We sampled 1000 covariates, \(x_{i}\), uniformly between 0 and 10, and subsequently sampled the targets, \(y_{i}\), from a \(\mathcal{N}\left(0.4\sin(2\pi x_{i}),0.01^{2}\right)\) distribution. Figure 3 shows that the MVE network is unable to properly learn the mean function. Increasing the training time does not solve this. A network with a similar architecture that was trained using the mean-squared-error loss _was_ able to learn the mean function well. We provide a second explanation for this behaviour by noting that the loss landscape is likely to have many local minima. We already encounter this in a very simple example. Suppose we have a data set consisting of two parts: 100 data points from a \(\mathcal{N}\left(2,0.5^{2}\right)\) distribution and 100 data points from a \(\mathcal{N}\left(5,0.1^{2}\right)\) distribution. For each part, we are allowed to pick a separate variance estimate, \(\hat{\sigma}_{1}^{2}\) and \(\hat{\sigma}_{2}^{2}\), but we can only pick a single estimate for the mean. In this situation, there are two local minima of the negative loglikelihood (see Figure 4): we can set \(\hat{\mu}\) to approximately 2 with a small \(\hat{\sigma}_{1}^{2}\) and a large \(\hat{\sigma}_{2}^{2}\), or set \(\hat{\mu}\) to 5 with a large \(\hat{\sigma}_{1}^{2}\) and a small \(\hat{\sigma}_{2}^{2}\). While this simplified setting is of course not a realistic representation of a neural network, it does illustrate that there can easily be many local minima when dealing with complex functions for the mean and the variance. When we start from a random estimate for the mean, it is therefore not unlikely to end up in a bad local minimum.

Fig. 1: Original MVE architecture
Fig. 2: Modern MVE architecture
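These two local minima are easy to reproduce numerically: for a fixed \(\hat{\mu}\), the negative-loglikelihood-optimal variance of each part is its mean squared residual, which yields the profile negative loglikelihood shown in Figure 4. A short sketch (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.normal(2.0, 0.5, size=100)  # first part of the data
y2 = rng.normal(5.0, 0.1, size=100)  # second part of the data

def profile_nll(mu):
    # For a fixed mean, the NLL-optimal variance of each part is the
    # mean squared residual within that part.
    s1 = np.mean((y1 - mu) ** 2)
    s2 = np.mean((y2 - mu) ** 2)
    return 0.5 * (len(y1) * np.log(s1) + len(y2) * np.log(s2)) \
        + 0.5 * (len(y1) + len(y2))

grid = np.linspace(0.0, 7.0, 1401)
nll = np.array([profile_nll(m) for m in grid])
# Interior local minima are points lower than both neighbors.
local_min = grid[1:-1][(nll[1:-1] < nll[:-2]) & (nll[1:-1] < nll[2:])]
print(local_min)  # two minima, one near 2 and one near 5
```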
### _The Solution: Warm-up_

The original authors advised using a warm-up, which indeed alleviates most problems. After initialization, the variance is fixed at a constant value and the mean estimate is learned. In a second phase, the mean and variance are updated simultaneously. We can motivate why a warm-up is beneficial both from the loss-contribution perspective and from the local-minima perspective. From the loss-contribution perspective, we no longer have the problem that regions with poor initial predictions fail to be learned: because the variance estimate at initialization is constant, all residuals contribute to the loss equally. From the loss-landscape perspective, we are less likely to end up in a bad local minimum if we start from a sensible mean function. Figure 5 shows that adding a warm-up period indeed solves the convergence problem that we previously had in the sine example in Figure 3.

## III What to do after the warm-up?

After the warm-up period, we could either update the variance while keeping the mean estimate fixed or update both simultaneously. In the original MVE paper, the authors argue that simultaneously estimating the mean and the variance is also advantageous for the estimate of the mean. The reasoning is that the model will focus its resources on low-noise regions, leading to a more stable estimator. We go through some classical theory that shows that this is the case for a linear model. All derivations for the statements in this paper regarding linear models can be found in van Wieringen (2015). We assume that we have a data set consisting of \(n\) data points \((\mathbf{x}_{i},y_{i})\), with \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) and \(y_{i}\in\mathbb{R}\). With \(X\), we denote the \(n\times p\) design matrix which has the \(n\) covariate vectors \(\mathbf{x}_{i}\) as rows. With \(Y\), we denote the \(n\times 1\) vector containing the observations \(y_{i}\). We assume \(X\) to be of full rank. Consider the linear model:

\[Y=X\beta+U,\quad\text{with }U\sim\mathcal{N}\left(0,\Sigma\right), \tag{1}\]

where \(\Sigma\) can be any invertible covariance matrix, possibly heteroscedastic and including interaction terms. Suppose this covariance matrix is known; then classical theory tells us that it is beneficial for our estimate of \(\beta\) to take this into account. To see this, we will compare the linear model in Equation (1) with a rescaled version that takes the covariance matrix into account. Since \(\Sigma\) is positive semi-definite, we can write it as \(BB^{T}\) and rescale our model by multiplying with \(B^{-1}\):

\[Z:=B^{-1}Y=B^{-1}X\beta+B^{-1}U=\tilde{X}\beta+V,\quad V\sim\mathcal{N}\left(0,I_{n}\right).\]

When finding the least-squares estimators, both formulations lead to different estimators of \(\beta\), denoted by \(\hat{\beta}\) and \(\hat{\beta}^{*}\) respectively. In Appendix A, we show that both estimators are unbiased estimators of \(\beta\) and that \(\hat{\beta}^{*}\) has a lower variance. We want to emphasize that this leads to improved metrics such as RMSE. Let us, for instance, look at the difference between the expected quadratic errors of a new pair \((\mathbf{x}_{\text{new}},y_{\text{new}})\) when using \(\hat{\beta}\) and \(\hat{\beta}^{*}\). In Appendix A, we prove that

\[\mathbb{E}\left[(y_{\text{new}}-\mathbf{x}_{\text{new}}^{T}\hat{\beta})^{2}-(y_{\text{new}}-\mathbf{x}_{\text{new}}^{T}\hat{\beta}^{*})^{2}\right]\geq 0.\]

This short example illustrates that it can be beneficial for the mean estimate to take the variance into account. Besides the obvious benefit of having an estimate of the variance, it may therefore be a good idea to use an MVE network in order to get a better estimate for the mean, as the authors of the original paper also pointed out. In summary, focussing on low-noise regions is beneficial. However, the estimate of the noise is made using the mean predictor. If the mean predictor is bad, we do not focus on low-noise regions but on high-accuracy regions, which can be very detrimental. We therefore need a warm-up period, after which classical theory would suggest that estimating the mean and variance simultaneously has advantages. In Section V, we test if estimating the mean and variance simultaneously is indeed beneficial for the mean estimate.

Fig. 4: A simple example of local minima in the negative loglikelihood. The data consist of two parts: 100 data points from a \(\mathcal{N}\left(2,0.5^{2}\right)\) distribution and 100 data points from a \(\mathcal{N}\left(5,0.1^{2}\right)\) distribution. The graph shows the negative loglikelihood as a function of \(\hat{\mu}\), where we take the optimal variance estimates for each value of \(\hat{\mu}\).
Fig. 5: By using a warm-up where only the mean is updated, the MVE network is able to learn the mean function well. In this example, the variance appears to be overfitting slightly.
Fig. 3: An example of an MVE network that fails to learn the mean function when simultaneously updating the mean and the variance. The orange area gives plus or minus a single standard deviation.
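A small simulation of our own illustrates the variance reduction: under model (1) with a diagonal \(\Sigma\), the rescaled (generalized least squares) estimator \(\hat{\beta}^{*}\) has a visibly smaller mean squared error than the ordinary least-squares estimator \(\hat{\beta}\). All quantities in the sketch are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
sigma = np.linspace(0.1, 2.0, n)   # heteroscedastic noise levels
B_inv = np.diag(1.0 / sigma)       # Sigma = B B^T with B = diag(sigma)

def estimates():
    y = X @ beta + sigma * rng.normal(size=n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]                   # ignores Sigma
    b_gls = np.linalg.lstsq(B_inv @ X, B_inv @ y, rcond=None)[0]   # rescaled model
    return b_ols, b_gls

draws = np.array([estimates() for _ in range(2000)])
print("OLS mean sq. error:", np.mean(np.sum((draws[:, 0] - beta) ** 2, axis=1)))
print("GLS mean sq. error:", np.mean(np.sum((draws[:, 1] - beta) ** 2, axis=1)))
```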
## IV The Need for Separate Regularization

In this section, we give a theoretical motivation for the need for a different regularization of the parts of the network that produce the mean and variance estimates. Intuitively this makes sense: there is no reason to assume that the mean function and the variance function are equally complex. If one is much more complex than the other, we do not want to regularize them the same way. We can give a more rigorous argument by again analyzing a classical linear model. We do this by considering two linear models that most closely resemble the scenario of an MVE network. The first model estimates the mean while knowing the variance, and the second model estimates the log of the variance1 while knowing the mean. Both models will generally have a different optimal regularization constant.

Footnote 1: An MVE network often uses an exponential transformation in the output of the variance neuron to ensure positivity. The network then learns the log of the variance.

### _Scenario 1: Estimating the Mean With a Known Variance_

We use the same notation as in the previous example and assume homoscedastic noise. The goal is to find the estimator that minimizes the mean-squared-error loss plus a regularization term,

\[\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\beta)^{2}+\lambda\sum_{j=1}^{p}\beta_{j}^{2}.\]

Depending on the value of \(\lambda\), we get different estimators \(\hat{\beta}(\lambda)\). In Appendix B, we show that the optimal regularization constant, \(\lambda^{*}\), satisfies

\[\lambda^{*}\propto p(\beta^{T}\beta)^{-1}.\]

We define optimal as the \(\lambda\) for which

\[\text{MSE}(\hat{\beta}(\lambda)):=\mathbb{E}\left[|\beta-\hat{\beta}(\lambda)|^{2}\right]\]

is minimal.
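The proportionality \(\lambda^{*}\propto p(\beta^{T}\beta)^{-1}\) can also be checked by simulation: sweeping \(\lambda\), estimating \(\text{MSE}(\hat{\beta}(\lambda))\) by Monte Carlo, and watching the minimizer shrink as the true coefficients grow. A sketch of our own, with unit noise variance and an arbitrary design:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 10
X = rng.normal(size=(n, p))
lams = np.logspace(-3, 3, 60)

def best_lambda(beta, n_rep=500):
    errs = np.zeros(len(lams))
    for _ in range(n_rep):
        y = X @ beta + rng.normal(size=n)
        for i, lam in enumerate(lams):
            # Ridge estimator: (X^T X + lam I)^{-1} X^T y
            b_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
            errs[i] += np.sum((beta - b_hat) ** 2)
    return lams[np.argmin(errs)]

for scale in [0.5, 1.0, 2.0]:
    beta = scale * np.ones(p)
    # The MSE-optimal lambda decreases as |beta| grows.
    print(scale, best_lambda(beta))
```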
### _Scenario 2: Estimating the Log-Variance With a Known Mean_

Next, we examine a linear model that estimates the logarithm of the variance. We again have \(n\) data points \((\mathbf{x}_{i},y_{i})\), where we assume the log of the variance to be a linear function of the covariates:

\[y_{i}=\mu_{i}+\epsilon_{i},\quad\epsilon_{i}\sim\mathcal{N}\left(0,e^{\mathbf{x}_{i}^{T}\tilde{\beta}}\right).\]

We use the same covariates and for the targets we define

\[z_{i}:=\log((y_{i}-\mu_{i})^{2})-C,\quad\text{with}\quad C=\psi(1/2)+\log(2),\]

where \(\psi\) is the digamma function. This somewhat technical choice for \(C\) is made such that

\[z_{i}=\log(\sigma^{2}(\mathbf{x}_{i}))+\tilde{\epsilon}_{i},\]

where \(\tilde{\epsilon}_{i}\) has expectation zero and a constant variance. The details can be found in Appendix B. In the same appendix, we repeat the same procedure, i.e., minimizing the mean-squared error with a regularization term, and demonstrate that the optimal regularization constant, \(\tilde{\lambda}^{*}\), satisfies

\[\tilde{\lambda}^{*}\propto p(\tilde{\beta}^{T}\tilde{\beta})^{-1}.\]

The conclusion is that for these two linear models, which most closely resemble the scenario of regularized neural networks that estimate the mean and log-variance, the optimal regularization constants depend on the true underlying parameters \(\beta\) and \(\tilde{\beta}\). Since there is no reason to assume that these are similar, there is also no reason to assume that the mean and variance should be similarly regularized.

### _Separate Regularization of the Variance Alleviates the Variance-Overfitting_

While the problem of ignoring initially poorly fit regions is still present, proper regularization of the variance can alleviate the harmful overfitting of the variance. To illustrate this effect, we trained 4 MVE networks, without a warm-up period, on a simple quadratic function with heteroscedastic noise. The \(x\)-values were sampled uniformly from \([-1,1]\) and the \(y\)-values were subsequently sampled from a \(\mathcal{N}\left(x^{2},(0.1+0.2x^{2})^{2}\right)\) distribution. We used the original MVE architecture, which has two sub-networks that estimate the mean and the variance, with separate \(l_{2}\)-regularization constants for the two sub-networks in order to be able to regularize the mean and the variance separately. We used the same mean regularization in all networks and gradually decreased the regularization of the variance. Figure 6 demonstrates the effect of different amounts of regularization of the variance. When the variance is regularized too much, the network is unable to learn the heteroscedastic variance. This is problematic both because the resulting uncertainty estimates will be wrong and because we lose the beneficial effect on the mean that we discussed in the previous subsection. In the second subfigure, the network was able to correctly estimate both the mean and variance. When we decreased the regularization of the variance further, however, we see that the network simply increased the variance on the right side instead of learning the function. When we removed the regularization of the variance altogether, the network was completely unable to learn the mean function. Additionally, we repeated the sine experiment while using a higher regularization constant for the variance than for the mean. In Figure 7, we see that the MVE network is now able to learn the sine function well, even without a warm-up period. We were unable to achieve this when using the same regularization constant for both the mean and the variance.
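In a split architecture, separate \(l_{2}\) constants are a one-line change in most frameworks: the weights of the two sub-networks simply go into separate optimizer parameter groups. A PyTorch sketch of our own, reusing the MVENetwork sketched in the introduction (the weight-decay coefficients play the role of the \(l_{2}\) penalties and are placeholder values):

```python
import torch

model = MVENetwork(n_in=1)

# Separate l2 regularization: one weight-decay constant per sub-network.
optimizer = torch.optim.Adam([
    {"params": model.mean_net.parameters(), "weight_decay": 1e-3},
    {"params": model.var_net.parameters(), "weight_decay": 1e-2},
], lr=1e-3)
```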
## V Experimental Results In this section, we experimentally demonstrate the benefit of a warm-up period and separate regularization. In Subsection V-A, we specify the three training strategies that we compare. Subsections V-B and V-C give details on the data sets, experimental procedure, and architectures that we use. Finally, the results are given and discussed in Subsection V-D. ### _Three Approaches_ We compare three different approaches: 1. No Warm-up: This is the approach that is used in popular methods such as Concrete dropout and Deep Ensembles. The mean and the variance are optimized simultaneously. 2. Warm-up: This is the approach recommended in the original paper. We first optimize the mean and then both the mean and the variance simultaneously. 3. Warm-up fixed mean: We first optimize the mean and then optimize the variance while keeping the mean estimate fixed. We add this procedure to test if optimizing both the mean and the variance after the warm-up further improves the mean estimate. For each approach, we consider two forms of \(l_{2}\)-regularization: 1. Separate regularization: The part of the network that estimates the mean has a different regularization constant than the part of the network that estimates the variance. 2. Equal regularization: Both parts of the network use the same regularization constant. ### _Data Sets and Experimental Procedure_ We compare the three approaches on a number of regression UCI benchmark data sets. These are the typical regression data sets that are used to evaluate neural network uncertainty estimation methods (Gal and Ghahramani, 2016; Hernandez-Lobato and Adams, 2015; Lakshminarayanan et al., 2017; Pearce et al., 2018). For each data set we use a 10-fold cross-validation and report the average loglikelihood and RMSE on the validation sets along with the standard errors. For each of the 10 splits, we use another 10-fold cross-validation to obtain the optimal regularization constants. The entire experimental procedure is given in Algorithm 1. ### _Architecture and Training Details_ * We use a split architecture, meaning that the network consists of two sub-networks that output a mean and a variance estimate. Each sub-network has two hidden layers with 40 and 20 hidden units and ELU (Clevert et al., 2015) activation functions. The mean-network has a linear transformation in the output layer and variance-network an exponential transformation to guarantee positivity. We also added a minimum value of \(10^{-6}\) for numerical stability. * All covariates and targets are standardized before training. * We use the Adam optimizer (Kingma and Ba, 2014) with gradient clipping set at value 5. We found that this greatly improves training stability in our experiments. * We use a default batch-size of 32. * We use 1000 epochs for each training stage. We found that this was sufficient for all networks to converge. * We set the bias of the variance to 1 at initialization. This makes sure that the variance predictions are more or less constant at initialization. Fig. 6: The effect of the different amounts of regularization of the variance. In all four figures, the mean has the same regularization constant of 0.1. The regularization of the variance is given in the subcaptions. The orange area gives plus or minus a single standard deviation. Fig. 7: By using a separate regularization constant for the variance, the MVE network is able to learn both the mean and the variance function well. 
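The three strategies then differ only in which parameters each 1000-epoch stage updates. A schematic loop of our own, building on the earlier MVENetwork and gaussian_nll sketches; here `loader` stands for a standard data loader of (x, y) batches, and we use norm-based gradient clipping since the paper does not specify the clipping variant:

```python
def train_phase(model, loader, params, epochs=1000, lr=1e-3):
    """One training stage that updates only the given parameters."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            mean, var = model(x)
            loss = gaussian_nll(y, mean, var)
            opt.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)
            opt.step()

# Warm-up: mean only, variance stays at its (constant) initialization.
train_phase(model, loader, list(model.mean_net.parameters()))
# Strategy "warm-up": afterwards update mean and variance together ...
train_phase(model, loader, list(model.parameters()))
# ... or strategy "warm-up fixed mean": update only the variance instead.
# train_phase(model, loader, list(model.var_net.parameters()))
```

Algorithm 1 below embeds this per-network training inside the nested cross-validation.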
```
Input: Data set (X, Y)
Divide (X, Y) into 10 distinct subsets, denoted (X^(i), Y^(i))
for i from 1 to 10 do
    X_train = union of X^(j) for j != i; Y_train = union of Y^(j) for j != i
    X_val = X^(i); Y_val = Y^(i)
    Use 10-fold cross-validation (using only (X_train, Y_train)) to find the
        optimal regularization constants, chosen as those for which the
        loglikelihood on the left-out sets is highest. The possible
        regularization constants are [0.00001, 0.0001, 0.001, 0.01]
    Train a model using the optimal separate regularization constants
    Train a model using the optimal equal regularization constant
    Evaluate the loglikelihood and root-mean-squared error on the validation set
Return: the average of the 10 loglikelihood and RMSE values along with the
    standard error
```
**Algorithm 1** Our experimental procedure. A 10-fold cross-validation is used to compare the different methods. In each fold, a second 10-fold cross-validation is used to obtain the optimal regularization constants. We use the same splits when comparing approaches.

### _Results and Discussion_

The results are given in Tables I and II. Bold values indicate that for that specific training strategy (no warm-up, warm-up, or warm-up fixed mean) there is a significant difference between equal and separate regularization. This means that every row can have up to three bold values. Significance was determined by taking the differences per fold and testing if the mean of these differences is significantly different from zero, using a two-tailed \(t\)-test at a 90% confidence level. We see that a warm-up is often very beneficial. For the yacht data set, we observe a considerable improvement in the RMSE when we use a warm-up period. A warm-up also drastically improves the result on the energy data set when we do not allow separate regularization. Generally, the difference between keeping the mean fixed after the warm-up and optimizing the mean and variance simultaneously after the warm-up is less pronounced. For a few data sets (Concrete, Kin8nm, Protein) we do observe a considerable difference in root-mean-squared error if we only consider equal regularization. If we allow separate regularization, however, these differences disappear. Separate regularization often drastically outperforms equal regularization. The energy data set gives the clearest example of this. For all three training strategies, separate regularization performs much better than equal regularization of the mean and variance. A similar pattern can be seen for the yacht data set. The optimal regularization for the variance was typically similar to, or an order of magnitude larger than, the optimal regularization of the mean, never lower. We would like to stress that statistically significant results are difficult to obtain with only 5 to 10 folds, but the pattern emerges clearly: separate regularization often improves the results while never leading to a significant decline. Equal regularization and no warm-up perform as well as the other strategies for some data sets, although never considerably better. For Boston Housing, for example, using a warm-up and separate regularization yields very similar results to the other strategies. This can happen since the problem may be easy enough that the network is able to simultaneously estimate the mean and the variance without getting stuck. Additionally, while there is no reason to assume so a priori, the optimal regularization constants for the mean and the variance can be very similar. In fact, for the Boston Housing experiment we often found the same optimal regularization constant for the mean and variance during the cross-validation.
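As an aside, the significance test used throughout this section is a standard paired two-tailed \(t\)-test on the per-fold differences; a sketch with SciPy follows. The fold values are placeholders for illustration, not numbers from the experiments.

```python
import numpy as np
from scipy import stats

# Loglikelihoods per validation fold for two strategies (placeholder values).
separate = np.array([-0.95, -1.02, -0.88, -0.99, -1.05,
                     -0.91, -0.97, -1.01, -0.93, -0.96])
equal = np.array([-1.04, -1.08, -0.95, -1.00, -1.12,
                  -0.99, -1.03, -1.06, -0.98, -1.01])

# Paired two-tailed t-test on the per-fold differences; significant at
# the 90% confidence level if p < 0.1.
t_stat, p_value = stats.ttest_rel(separate, equal)
print(t_stat, p_value, p_value < 0.1)
```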
## VI Conclusion

In this paper, we tested various training strategies for MVE networks. Specifically, we investigated whether following the recommendations of the original authors solves the recently reported convergence problems, and we proposed a novel improvement: separate regularization. We conclude that the use of a warm-up period is often essential for obtaining the best results and fixes the convergence problems. Contrary to what classical theory suggests, we do not observe a significant advantage of estimating the mean and variance simultaneously after a warm-up, as opposed to estimating the mean while assuming a constant variance and fixing it afterwards. Optimizing the mean while accounting for heteroscedastic noise was seemingly only beneficial when separate regularization was not allowed. We argued that we should expect to need different regularization constants for the mean and variance. There is no reason to assume that the mean and variance functions are equally complex, and we therefore should not expect a similar regularization constant. We experimentally demonstrated that a separate regularization constant indeed often leads to large improvements.

### _Recommendations_

Based on our experiments, we have come to the following recommendations when training an MVE network:

* Use a warm-up, as also suggested by the original authors. It is important to initialize the variance such that it is more or less constant for all inputs. Otherwise, some regions may be neglected. This is easily achieved by setting the bias of the variance neuron to 1 at initialization.
* Use gradient clipping. We found gradient clipping to yield more stable optimization when optimizing the mean and variance simultaneously.
* Use separate regularization for the mean and variance. If a hyperparameter search is computationally infeasible, we found that the variance should typically be regularized an order of magnitude more strongly than the mean.

### _Future Work_

The results on separate regularization indicate that estimating the variance and estimating the mean are often not equally difficult problems. It is therefore likely not optimal to use a similar architecture and training procedure for both. It would be interesting to investigate if the use of a separate architecture and training procedure leads to further improvements. On a more general note, it may be very worthwhile to get better estimates of the variance of the data noise. A vast amount of very intricate work, such as dropout, ensembling, and Bayesian neural networks, is being done to improve the model uncertainty estimate. For the predictive uncertainty, however, it may be just as important, or even more so, to estimate the variance of the noise correctly. This, of course, highly depends on the data set in question, but it could be that the uncertainty due to a heteroscedastic variance has much more influence on the predictive uncertainty than the model uncertainty. We therefore think that it is very worthwhile to investigate the optimal way to estimate the variance of the (possibly non-Gaussian) noise.
2306.09044
Hands-on detection for steering wheels with neural networks
In this paper the concept of a machine learning based hands-on detection algorithm is proposed. The hand detection is implemented on the hardware side using a capacitive method. A sensor mat in the steering wheel detects a change in capacity as soon as the driver's hands come closer. The evaluation and final decision about hands-on or hands-off situations is done using machine learning. In order to find a suitable machine learning model, different models are implemented and evaluated. Based on accuracy, memory consumption and computational effort the most promising one is selected and ported on a micro controller. The entire system is then evaluated in terms of reliability and response time.
Michael Hollmer, Andreas Fischer
2023-06-15T11:07:17Z
http://arxiv.org/abs/2306.09044v1
# Hands-on detection for steering wheels with neural networks ###### Abstract In this paper the concept of a machine learning based hands-on detection algorithm is proposed. The hand detection is implemented on the hardware side using a capacitive method. A sensor mat in the steering wheel detects a change in capacity as soon as the driver's hands come closer. The evaluation and final decision about hands-on or hands-off situations is done using machine learning. In order to find a suitable machine learning model, different models are implemented and evaluated. Based on accuracy, memory consumption and computational effort the most promising one is selected and ported on a micro controller. The entire system is then evaluated in terms of reliability and response time. machine learning, hands-on detection, driving assistance + Footnote †: publicationid: _6-7 October 2022, Barcelona, Spain_ ## I Introduction The development of advanced driver assistance systems is an essential goal for car manufacturers. As can be seen from a survey, driver assistance systems are by now an important purchase criterion for over 60% of potential buyers [1]. In addition, a unique selling point over the competition and thus a competitive advantage can be gained through the further automation of vehicles. An example is the system from Mercedes-Benz, which was the first to receive approval for autonomous driving at level 3 in December 2021. Autonomous driving at level 3 enables the driver to divert his attention from what is happening on the road in certain situations. The vehicle takes over the lateral and longitudinal guidance and independently recognizes errors or departure from system limits. In such a case, the system would prompt the driver to take back control of the vehicle. This transfer of vehicle control is a crucial challenge. An autonomous system must be able to recognize whether the driver is ready to take over control of the vehicle again. To ensure this, some form of driver monitoring is required. One way of detecting the driver's condition is a hands-on detection (HOD). This is a system that detects whether the driver's hands are on the steering wheel and therefore control over the vehicle can safely be transferred. A HOD can be implemented inexpensively by measuring steering angle and torque acting on the steering wheel. The necessary sensors are required for the servo-assistance, anyway. However, there is the disadvantage that false hands-off messages often occur in situations where the driver does not exert any significant force for lateral guidance. In such a case, the driver would be asked to put his hands back on the steering wheel, even though he has not let go of the steering wheel. A better HOD variant, also used in this paper, uses a capacitance sensor. This allows to detect the driver's contact with the steering wheel, without relying on any exerted force to the steering wheel. However, the evaluation of capacitance values is more complex, since these are dependent on the driver and his environment. In this paper a machine learning algorithm is implemented, which is able to distinguish between a hands-on and a hands-off situation based on the capacitance values. The AI model is then ported to a micro controller and the reliability and response time of the HOD is evaluated. A maximum response time of 200ms is assumed to be appropriate for timely HOD. This paper aims to answer the question: Can neural networks increase reliability of HOD within a response time of 200ms? 
## II Background

Two techniques are combined in this paper to realize HOD: capacitance measurement and machine learning.

### _Capacity measurement_

One option to realize HOD is to detect contact between the driver and the steering wheel by measuring the change in capacitance. There are different methods to measure the capacitance of the steering wheel. In this paper, a frequency-based measurement method is used. Touching the steering wheel is detected by a change in capacitance in a sensor element, with the capacitance being calculated indirectly from the measured frequency. The sensor element represents a measuring capacitor which forms a resonant circuit together with another capacitor and a coil. The frequency of the resonant circuit can be calculated using equation 1, which describes an ideal resonant circuit.

\[f_{0}=\frac{1}{2\pi\sqrt{L(C_{k}+C_{s})}} \tag{1}\]

The equation depends on the capacitance of the capacitor \(C_{k}\), the capacitance of the sensor element \(C_{s}\) and the inductance of the coil \(L\). As long as the steering wheel is in an untouched state, the resonant circuit oscillates at its maximum frequency \(f_{0}\). If the driver puts his hands on the steering wheel, the capacitance of the sensor element is increased, leading to a reduction in the frequency of the resonant circuit. The sensor element is a capacitive mat that is wrapped around the core of the steering wheel and represents the active part of the measuring capacitor. Since there is no opposite side, a stray electric field forms between the active capacitor side and the environment. An approaching object causes a change in the capacitance value of the sensor element. To illustrate, the measuring capacitor can be seen as a plate capacitor, which can be described by the equation \(C=\epsilon_{0}\cdot\epsilon_{r}\cdot\frac{A}{d}\). Here, both electrically conductive and non-conductive objects cause a change in capacitance, for different reasons. A nearby conductive object causes the distance \(d\) between the active capacitor side and its surroundings to decrease, increasing the capacitance. Non-conductive objects, on the other hand, lead to an increase in capacitance via a change in the relative permittivity \(\epsilon_{r}\).

### _Machine learning approaches_

To classify the capacitance values, four different machine learning models are trained. In the following, a brief overview of the different approaches is given.

#### II-B1 Time Delay Neural Network

One machine learning approach is the Time Delay Neural Network (TDNN). The TDNN is structured as a standard multilayer perceptron with a delay buffer connected in front. New values in a time series are buffered until a certain amount is reached. Subsequently, these buffered values are passed as one input into the multilayer perceptron, which then carries out the classification [2].
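A TDNN of this kind reduces to a small feed-forward network applied to a delay buffer. The following sketch is our own illustration, anticipating the 100-value window and 50 hidden neurons that appear later in the paper; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
from collections import deque

class TDNN(nn.Module):
    """Feed-forward classifier over a fixed-length delay buffer."""
    def __init__(self, window: int = 100, hidden: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # P(hands-on)
        )

    def forward(self, x):
        return self.net(x)

# Delay buffer: each new capacitance sample shifts the window by one.
buffer = deque(maxlen=100)
model = TDNN()

def on_new_sample(value: float):
    buffer.append(value)
    if len(buffer) == buffer.maxlen:
        x = torch.tensor([list(buffer)], dtype=torch.float32)
        return model(x).item()  # probability of a hands-on situation
    return None
```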
#### II-B2 Long Short Term Memory

As a second approach to classify the capacitance values, a Long Short Term Memory (LSTM) network, a variant of a recurrent neural network, is used. In contrast to feed-forward networks like the TDNN, the neurons of an LSTM can have connections to neurons in the previous layer, to the same layer, or to themselves, in addition to the standard forward-pointing connections. The feedback loops implement a memory, which allows the network to remember previous events [2]. This is an advantage in time-dependent series of measurements, since each measured value depends on its predecessor in a certain way. In contrast to the TDNN, which assumes independent measured values, recurrent neural networks can use this memory to take the temporal dependency into account [3].

#### II-B3 Random Forest

The last approach is the random forest, which combines the prediction results of multiple decision trees using the bootstrap aggregating (bagging) method. The idea behind bagging is to train several decision trees with a subset of the training data. The subsets are created by randomly selecting samples from the entire training data. This process is also called bootstrapping [4]. The result is multiple decision trees that are structured differently and ideally even out each other's classification errors. The output of the random forest is the class chosen by most of the decision trees.

## III Related Work

Other work has also used machine learning to develop HOD, differing in sensors, algorithms, and response times. Johansson and Linder [5] used a camera system and the torque acting on the steering wheel to implement HOD. For the camera, two CNN approaches were compared in classifying the most recently acquired image. For evaluating the torque measurement, a one-dimensional convolutional neural network and an LSTM network were used. According to the authors, the evaluation of the torque requires a few seconds to detect a hands-off situation and up to two seconds to detect a hands-on. The camera approach reacted to a situation change within 5.4 seconds. Both solutions are thus well above the response time of 200 ms we aim for in this work. Hoang Ngan Le et al. [6] have also developed a machine learning based HOD with a camera system. In their paper, the image evaluation is performed by a Region Based Convolutional Neural Network (RCNN), which was improved for the specific purpose. The improved RCNN achieved 0.09 frames per second, which roughly corresponds to the evaluation of one frame every eleven seconds. As such, the time required for detection is also well above the 200 ms limit. A solution not based on machine learning was published as a patent by Volkswagen AG. It combines two possible approaches for HOD: on the one hand, the values of the steering angle or torque sensor are used, and on the other hand, the capacitance values of the steering wheel are considered to distinguish between a hands-on and a hands-off situation. The idea behind the combined approach is to use the torque sensor to detect hands-on situations with high confidence. During these situations, the corresponding capacitance values are recorded. With this data, a function is set up with which it is possible to quickly decide for each new capacitance value whether it corresponds to a hands-on or hands-off situation [7]. Another option for evaluating capacitance sensors without machine learning was published by Analog Devices [8] and relies on dynamic threshold values. An algorithm continuously monitors the values of the capacitance sensor and measures the ambient level if no touch is detected. In addition, the average maximum sensor value is measured with each touch. The threshold from which a capacitance increase is counted as a touch is a certain percentage of the average measured maximum sensor value. These approaches bear a potential problem: if the driver only touches the steering wheel very lightly, the measured average maximum sensor value decreases. The dynamic threshold adapts to the small capacitance values. Thus, at some point a slight increase in capacitance values is erroneously recognized as a touch.
## IV Experimental description The implementation of the different machine learning models is divided into several steps. First, training data is recorded, classified and processed, which is then used to train the machine learning models. Based on the model results, the most promising model is selected and transferred to a microcontroller. Finally, the system is evaluated in terms of reliability and response time. ### _Generating training data_ For generating the training data, the steering wheel is alternately touched and released at defined points for five seconds. This process is repeated for 30 minutes each for two-finger, four-finger and two-hand touches. Figure 1 shows the points of contact. It should be noted that the points in the figure do not only refer to the front of the steering wheel. The outside, the back and the inside were also touched in turn. Regarding the sampling rate, a new capacitance value was recorded every 2 ms. ### _Preprocessing data for learning_ After recording the training data, two preprocessing steps were implemented. In the first step, every sample was assigned a "hands-on" or "hands-off" label. This was automated by following the change in capacitance when touching or releasing the steering wheel. The corresponding edge was used to separate and label all samples. Once the difference between two measured capacitance values is above the noise level, it is interpreted as an edge. The required change in capacitance to trigger an edge was set separately and fine-tuned for each of the three data sets. A rising edge triggers the "hands-on" label, while a falling edge triggers the "hands-off" label. In the second preprocessing step, every sample in the dataset was normalized in its length. This was done because the machine learning model should learn to classify a hands-on or hands-off situation based on capacitance values of just a few hundred milliseconds, to speed up the reaction time of the HOD. Therefore, a window with a fixed length of 100 values is placed over every sample. The values in the window form the input for the machine learning models. In each step, this window is moved by one value, dropping an old value and adding a new value. Thus, all models have a fixed-length input of 100 capacitance values, corresponding to 200 ms of recorded time. ### _Preparation of gradient data_ In order for the machine learning models to deliver optimal results, capacitance values have to be normalized. For this, it is necessary to obtain the minimum and maximum capacitance values during execution. In the training phase this is not a problem because all data is known a priori. In a real-world application this is not the case, which means that minimum and maximum values have to be determined dynamically. An estimate of the minimum value can be obtained by measuring the ambient level when the steering wheel is untouched. The maximum value, however, is a greater challenge. It would require the driver to place both of his hands on the steering wheel, which the system can never be sure is the case. Additionally, estimating the maximum value from the minimum value is not possible, as the change in capacitance caused by a driver heavily depends on his body weight. To eliminate this issue, the absolute capacitance values were converted into gradient values, focusing on the change in capacitance over time instead. This makes it easier to normalize the values, since only the maximum capacitive rate of change needs to be known.
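A minimal sketch of this gradient conversion and the sliding-window framing described above could look as follows; the normalization constant `max_rate` (the known maximum rate of change) and the function name are assumptions for illustration.

```python
import numpy as np

def to_gradient_windows(capacitance, max_rate, window=100):
    """Turn raw capacitance samples (2 ms spacing) into normalized gradient
    values and slice them into overlapping 100-value (200 ms) model inputs."""
    grad = np.diff(np.asarray(capacitance, dtype=float))  # change per 2 ms step
    grad /= max_rate  # only the maximum rate of change needs to be known
    # Sliding window shifted by one value per step, as described above.
    idx = np.arange(window)[None, :] + np.arange(len(grad) - window + 1)[:, None]
    return grad[idx]  # shape: (n_windows, window)
```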
Figure 2 shows gradient values when the steering wheel is touched with one hand. ## V Evaluation For the evaluation, all machine learning approaches are trained with the created datasets. The most promising model is then selected and ported to the STM32F769 microcontroller, where the final reliability and reaction-time testing is done. ### _Training_ In order to decide which machine learning model is best suited for the classification task, all models are trained with five different combinations of parameters. The resulting machine learning models are examined based on memory consumption, execution time and reliability. Fig. 1: Contact points on the steering wheel during the training phase. Fig. 2: Change in capacitance when the hand approaches. #### V-A1 Without gradient data First, the models are trained with absolute capacitance values. All models achieve a very high level of accuracy as well as precision and recall, with differences visible mainly in memory usage and execution time (cf. Tab. I). With no major difference in accuracy, the random forest requires far more memory than the other models, which is particularly disadvantageous for embedded systems. For this reason, its execution time was not measured. Looking at the neural networks, the biggest difference is the execution time. While the TDNN only needs a few microseconds for a forward pass, the LSTM networks need several milliseconds due to their more complex structure. This is relevant because data is sampled every 2 ms in the experiments. Thus, when used on the microcontroller, LSTM networks with more than one hidden neuron lose data because the measurement is faster than the processing. #### V-A2 With gradient data Looking at the models trained with the gradient data, the previously observed disadvantages regarding the memory consumption of the random forest remain (cf. Tab. II). The processing time of the LSTM is still inferior to that of the TDNN. However, the LSTM with one hidden neuron performs slightly better in accuracy, precision and recall compared to the TDNN with 50 hidden neurons and occupies, at 27 kB, only about a third of the memory. The TDNN, on the other hand, offers a significantly shorter execution time. The delay between input and output for the TDNN with 50 hidden neurons is only 150 \(\mu\)s compared to the 0.6 ms of the smallest LSTM. Training the TDNN takes 1:08 minutes, which is only a fraction of the training time for the LSTM, which takes 20:48 minutes. For this reason, the TDNN with 50 hidden neurons was selected and ported to the microcontroller. ### _Practical reliability test_ To test if the system recognizes touches by the driver reliably, the steering wheel was touched with two fingers, four fingers, one hand and two hands at the points shown in Figure 1. In the runs in which the steering wheel was touched with two and four fingers, a distinction was made between touching the front, back, inside and outside for each point. In each of the four runs, all points were touched ten times to see whether touches were only recognized sporadically in some places. In these experiments, the recognition of two fingers proved to be the most difficult.
Especially on the inside, where there is a seam, the distance to the sensor mat is particularly large. This decreases sensitivity, and a touch triggers only a small increase in capacitance, resulting in no touch detection at all in the two-finger experiments and a maximum of 7 out of 10 correctly identified events in the four-finger experiment. Regarding the position, the 6 o'clock position proved to be difficult, both with two and with four fingers. Somewhat less (but still noticeably) impacted were the 3 o'clock and 9 o'clock positions. These three positions are located where the steering wheel spokes connect to the wheel, which is likely the root cause of the problem. In the 10 o'clock and 2 o'clock positions typical for driving a car, all events were recognized reliably irrespective of finger count, as long as any area apart from the inside of the wheel was touched. Also, when the steering wheel was not just touched, but gripped with one or two hands, the success rate rose to 100%. ### _Reaction time_ Next, the reaction time of the system was tested with two fingers, which represents the hardest challenge, as shown in the previous section. Fig. 3 shows the capacitance values over time when the steering wheel was touched with two fingers. The red line represents the threshold from which the steering wheel was actually touched and released. The increase in capacitance below the red line is caused by the approaching fingers before they have made contact. Reaction-time measurement is started when the capacitance values first exceed the threshold and stopped when the values drop below it. In ten experimental runs, results ranged between 108 ms and 294 ms for "hands-on" events and a significantly faster reaction time of 30-60 ms for "hands-off" events if two fingers were used. With four fingers, reaction times could be reduced to 74-94 ms (hands-on) and 38-58 ms (hands-off), respectively. ## VI Conclusion The results show that it is possible to use a machine learning algorithm to evaluate capacitance values for HOD and achieve fast reaction times. By using the change in capacitance instead of the absolute values in the machine learning model, the problem of normalizing the input values was solved and the HOD worked without external calibration, independent of the driver and environment.
2308.08379
A distributed neural network architecture for dynamic sensor selection with application to bandwidth-constrained body-sensor networks
We propose a dynamic sensor selection approach for deep neural networks (DNNs), which is able to derive an optimal sensor subset selection for each specific input sample instead of a fixed selection for the entire dataset. This dynamic selection is jointly learned with the task model in an end-to-end way, using the Gumbel-Softmax trick to allow the discrete decisions to be learned through standard backpropagation. We then show how we can use this dynamic selection to increase the lifetime of a wireless sensor network (WSN) by imposing constraints on how often each node is allowed to transmit. We further improve performance by including a dynamic spatial filter that makes the task-DNN more robust against the fact that it now needs to be able to handle a multitude of possible node subsets. Finally, we explain how the selection of the optimal channels can be distributed across the different nodes in a WSN. We validate this method on a use case in the context of body-sensor networks, where we use real electroencephalography (EEG) sensor data to emulate an EEG sensor network. We analyze the resulting trade-offs between transmission load and task accuracy.
Thomas Strypsteen, Alexander Bertrand
2023-08-16T14:04:50Z
http://arxiv.org/abs/2308.08379v1
A distributed neural network architecture for dynamic sensor selection with application to bandwidth-constrained body-sensor networks ###### Abstract We propose a dynamic sensor selection approach for deep neural networks (DNNs), which is able to derive an optimal sensor subset selection for each specific input sample instead of a fixed selection for the entire dataset. This dynamic selection is jointly learned with the task model in an end-to-end way, using the Gumbel-Softmax trick to allow the discrete decisions to be learned through standard backpropagation. We then show how we can use this dynamic selection to increase the lifetime of a wireless sensor network (WSN) by imposing constraints on how often each node is allowed to transmit. We further improve performance by including a dynamic spatial filter that makes the task-DNN more robust against the fact that it now needs to be able to handle a multitude of possible node subsets. Finally, we explain how the selection of the optimal channels can be distributed across the different nodes in a WSN. We validate this method on a use case in the context of body-sensor networks, where we use real electroencephalography (EEG) sensor data to emulate an EEG sensor network. We analyze the resulting trade-offs between transmission load and task accuracy. Distributed deep neural networks, Sensor Selection, Wireless sensor networks, EEG channel selection ## I Introduction Wireless sensor networks (WSNs) consist of a collection of networked wireless sensor nodes with local processing capabilities, which cooperate to solve a specific inference task [1, 2, 3]. Due to technological advances in the miniaturization and energy-efficiency of sensors and microprocessors, such WSNs have become popular for long-term and wide-area monitoring in various domains such as acoustics, video surveillance, object tracking, and physiological sensing. In the latter case, such WSNs are also known as body area networks or body-sensor networks (BSNs), where physiological sensors at different locations on or in the body are wirelessly interconnected to share their data [4, 5]. Ensuring maximal battery lifetime is a crucial consideration in the design of these WSNs. The energy bottleneck will typically be found in the wireless transmission of the data between the sensors and/or a fusion center [4]. This not only motivates energy-efficient hardware design, but also a shift in the algorithmic design of the models running on these sensor nodes. Instead of optimizing the model only for accuracy, the amount of data that needs to be transmitted when the model is used in the context of a WSN becomes an important design factor. In this paper, we focus on reducing this data transmission by teaching the nodes a policy where they only transmit their data when the contribution of this specific node towards the inference task would be very informative for the current input sample, while upholding a given computational constraint. Such a constraint could be, e.g., that each node can on average only transmit at most 50% of its collected sensor data. To this end, we propose a _dynamic channel selection_ methodology. For each block of collected samples across the nodes of the WSN, a distributed dynamic channel selector computes an input-dependent, optimal subset of channels, represented by a binary mask across the channels. Inference is then performed on the masked input by a deep neural network (DNN) at a fusion center which collects the data transmitted by the sensors. 
The selector and the inference model are trained jointly in an end-to-end manner, with the discrete parameters involved in the selection process being made trainable through the Gumbel-Softmax trick [6, 7]. How often each channel is selected is limited by a per-channel sparsity loss on the computed masks. The use of this dynamic channel selection means that the inference model will be presented with different channel subsets for different inputs, as if channels were randomly missing. We show that applying dynamic spatial filtering (DSF) [8] to the masked input to re-weight the channels helps the inference model become more robust against missing channels and improves performance. To validate our proposed architecture, we focus on a specific use case in the area of brain-computer interfaces, where BSNs can be used to collect neural signals at different brain regions. One such example is a BSN that monitors the brain via multi-channel electroencephalography (EEG) sensors, a so-called wireless EEG sensor network (WESN) [4]. EEG is a widely used, noninvasive way to record electrical brain activity, measuring potentials on multiple locations on the scalp to yield multi-channel time signals. These signals contain useful information for a variety of tasks such as epileptic seizure detection [9], sleep stage analysis [10] and brain-computer interfacing (BCI) [11]. In a WESN, multiple lightweight mini-EEG devices record one or a few EEG channels from their respective scalp areas, process the data, and transmit it to other nodes or a central fusion center, rather than using a bulky cap as in traditional EEG recording methods. We note that, while our evaluation use case is focused on brain signals, our proposed methodology is generic and can be applied to other kinds of wireless sensor networks as well. The main contributions of this paper are: * We propose an end-to-end learnable dynamic sensor or channel selection method that selects, for each window of a multi-channel input, an optimal subset of channels to use for inference, given a certain selection budget. This dynamic selection is learned jointly with the task DNN model, and the Gumbel-Softmax trick is used to enable backpropagation for the discrete decisions involved. * We demonstrate how this methodology can be used to reduce the transmission load in a wireless sensor network, thus increasing its battery lifetime. We do this by moving from centralized to distributed channel selection and enforcing per-node constraints to ensure a proper balancing of the transmission load. In addition, we present a use case where the method can improve the robustness of the classifier to noise bursts. The paper is organized as follows. In section II, we go over previous work in static and dynamic feature selection. Section III formally presents our problem statement and dynamic channel selection methodology. In section IV, we provide an overview of the used dataset and how it was used to emulate a WESN environment, and provide more details on the used model architecture and training strategy for this specific experiment. Our experimental results are then presented in section V, and we end with some conclusions in section VI. **Note on terminology:** Throughout this paper, we will always use the term 'channel selection' to refer to a selection of channels from a multi-channel input signal. Sensor selection or node selection could be viewed as a special case of channel selection.
In the case of single-channel sensors, sensor or channel selection refer to the same thing. However, in the case of multi-channel sensors, sensor selection refers to the problem of selecting pre-defined _groups_ of channels rather than individual channels, where each group corresponds to a sensor. For the sake of an easy exposition, but without loss of generality, we will assume single-channel sensors throughout this paper. Sometimes, we will refer to sensors as 'nodes' for consistency in terminology with the WSN literature. ## II Related work ### _Static feature selection_ The goal of feature selection is to find an optimal subset of an available set of features that maximizes the performance of a classification or regression model on a given task. A host of literature exists that solves this problem in a _static_ way, i.e., the optimal subset is determined for a certain dataset as a whole and the same selection is then applied to all input samples. Filter-based approaches rank the available features by a criterion like mutual information (MI) with the target labels and select the \(K\) highest-scoring features [12]. Wrapper-based approaches use methods like greedy backward selection to efficiently explore the space of possible feature subsets, train the model on these candidate subsets and finally select the one that performs the best [13]. Embedded approaches jointly learn the subset and the task model in an end-to-end way, by performing \(L_{1}\) regularization on the input weights [14] or learning the discrete parameters of the feature selection using continuous relaxations [15, 16]. In this paper, we employ this approach of continuous relaxations to perform _dynamic_ channel selection instead. ### _Dynamic feature selection_ In _dynamic_, or _instance-wise_ feature selection, the aim is to find an optimal subset of features for each individual input sample. One area where this approach has been highly relevant is the field of explainable machine learning, where the goal is to indicate which features contributed most to the model output. For instance, L2X [17] trains an explainer model that maximizes the mutual information between the feature subset of size \(K\) of a given sample and the class distribution yielded by a trained task model. This line of work, however, is mainly interested in finding the most relevant features for an already trained model, not in optimizing the performance of the model on reduced feature sets. Another relevant area is the field of active feature acquisition [18, 19]. In this setting, obtaining features is associated with a certain cost. The goal is then to obtain maximal model performance with a minimal number of features, without being able to access all the features of a given input sample from the start. This typically results in an iterative procedure where, based on the current feature subset, the optimal feature to extend the set with is estimated, until sufficient confidence in the model prediction is reached or the budget is saturated. In our WSN setting, in contrast, we do have access to all the features to perform subset selection, but we are not allowed to centralize all of them by transmitting them over the wireless link. We also aim to avoid iterative procedures that require multiple communication rounds between the sensors and the fusion center, as these are prone to latency issues in real-time situations. The most similar approach to ours in terms of methods is taken by Verelst et al. in the field of computer vision [20].
The aim of their work is to decrease the computation time and energy of a CNN by learning input-dependent binary masks that are applied to the feature maps of an image at each layer. That layer then only performs convolutions on the pixels that are not masked out. A sparsity loss on these masks then forces the network to adhere to a certain computational budget, with backpropagation for the discrete masks being enabled through the Gumbel-Softmax trick. We will employ a similar strategy to learn binary masks for our dynamic selection, albeit in a distributed architecture and with the goal of reducing the data transmission over the wireless links, as detailed in the next section. ## III Proposed Method In this section, we describe our dynamic channel selection methodology. Without loss of generality, we assume a classification task, although all methods can be easily extended to a regression task. For the sake of an easy exposition, we will initially assume a centralized architecture where all the channels are available to make a decision on the selection. In a WSN context, this implies that the channel selection is performed _after_ transmitting all the channels to the fusion center, in which case there are no bandwidth savings. Nevertheless, this setting is still relevant to make the network robust against non-stationary noise influences and/or to reduce the computational complexity at the fusion center. Later on, in Section III-E, we will explain how the channel selection can be performed at the level of the sensor nodes, such that non-selected channels do not have to be transmitted at all. ### _Problem statement_ Let \(\mathcal{D}=\{(X^{(1)},y^{(1)}),(X^{(2)},y^{(2)}),\ldots,(X^{(N)},y^{(N)})\}\) be a dataset of \(N\) samples of a multi-channel signal \(X^{(i)}\) with class labels \(y^{(i)}\). Each \(X^{(i)}\in\mathbb{R}^{M\times L}\) contains \(M\) channels and a window of \(L\) consecutive time steps. We are also given a DNN model \(f_{\theta}\) that is used to perform inference on these samples. Our goal is to learn a dynamic selection function that, for each separate input sample, determines this sample's optimal subset of channels to be presented to the inference model, while adhering to certain budget constraints on the number of channels we are allowed to use on average. This selection of channels is based on a score vector \(\boldsymbol{\alpha}\in\mathbb{R}^{M}\) that is computed by a channel scoring function \(h_{\varphi}(X)\) for each input \(X\). To go from a continuous score to a discrete selection in a way that still allows for end-to-end learning through backpropagation, we make use of a Gumbel-Softmax module \(G\), which converts the score vector \(\boldsymbol{\alpha}\) to a binary mask \(\bar{\mathbf{z}}\in\{0,1\}^{M}\) to be applied to the input.
Formally, this means learning parameters \(\theta\) of the task model \(f_{\theta}\) and the parameters \(\varphi\) of a selection model \(s_{\varphi}=G\circ h_{\varphi}:\mathbb{R}^{M\times L}\mapsto\{0,1\}^{M\times 1};X\mapsto\bar{\mathbf{z}}\) such that \[\varphi^{*},\theta^{*}=\underset{\varphi,\theta}{\text{argmin}}\ \mathcal{L}_{CE}(f_{\theta}(X\odot\bar{\mathbf{z}}\mathbf{1}_{L}^{\top}),y)+\lambda\mathcal{L}_{S}(\bar{\mathbf{z}})=\underset{\varphi,\theta}{\text{argmin}}\ \mathcal{L}_{CE}(f_{\theta}(X\odot s_{\varphi}(X)\mathbf{1}_{L}^{\top}),y)+\lambda\mathcal{L}_{S}(s_{\varphi}(X)) \tag{1}\] with \(\mathcal{L}_{CE}(p,y)\) the cross-entropy loss between the predicted label \(p\) and the ground truth \(y\), \(\odot\) an element-wise product, \(\mathbf{1}_{L}^{\top}\) the row vector of dimension \(L\) containing only ones, \(\mathcal{L}_{S}\) a cost function that enforces sparsity in the learned masks and \(\lambda\) a hyperparameter to balance the two losses. A schematic overview of our method is presented in Fig. 1. Figure 1: Overview of the dynamic channel selection. An \(L\)-sample window of an \(M\)-channel signal is passed to a channel scoring module, yielding unnormalized channel scores \(\boldsymbol{\alpha}\). These are then converted into discrete selections by the Gumbel-Softmax module and applied to the input as a binary mask \(\bar{\mathbf{z}}\), dropping a number of channels. This masked input \(\bar{X}\) is then fed to the DSF module, which re-weights the received channels with an attention mechanism, computing a weight matrix \(W\) and bias \(b\) that are multiplied with and added to the masked input: \(\tilde{X}=W\bar{X}+b\)[8]. Finally, the original classifier is then applied to the resulting signal to obtain a prediction \(y\). The entire pipeline is jointly learned in an end-to-end manner. We will now delve deeper into the design of each of the modules involved. ### _Learning discrete decisions with Gumbel-Softmax_ To enable the network to learn discrete decisions while still keeping the entire network end-to-end learnable, we make use of the Gumbel-Softmax trick [6, 7]. Take a discrete random variable, drawn from a categorical distribution with \(K\) classes and class probabilities \(\pi_{1},\ldots,\pi_{K}\), represented as a one-hot vector \(\bar{\mathbf{y}}\in\{0,1\}^{K}\), with the index of the one indicating the class \(\bar{\mathbf{y}}\) belongs to. Discrete samples from this distribution can then be drawn with the Gumbel-Max trick: \[\bar{\mathbf{y}}=\text{one\_hot}(\underset{k}{\text{argmax}}\ (\log\pi_{k}+g_{k})) \tag{2}\] with \(g_{k}\) independent and identically distributed (i.i.d.) samples from the Gumbel distribution [21] and \(\text{one\_hot}(i)\) the operator that generates a one-hot \(K\times 1\) vector where the one is placed at position \(i\). The Gumbel-Softmax is then a continuous, differentiable relaxation of this discrete sampling procedure, approximating the discrete one-hot vectors \(\bar{\mathbf{y}}\) with continuous vectors \(\mathbf{y}\) whose elements sum to one instead, by replacing the argmax with a softmax. For the k-th element \(y_{k}\), this results in: \[y_{k}=\frac{\exp((\log\pi_{k}+g_{k})/\tau)}{\sum_{j=1}^{K}\exp((\log\pi_{j}+g_{j})/\tau)} \tag{3}\] with \(\tau\) the temperature of this continuous relaxation. Lowering the temperature causes the softmax to more closely resemble an argmax, thus causing the continuous \(\mathbf{y}\) to be a closer approximation of the discrete \(\bar{\mathbf{y}}\).
It will, however, also cause the relaxation to become less smooth and increase the variance of the gradients. Our goal is to model a learnable, binary random variable \(\bar{z}_{m}\) for each channel \(m\), which is 1 when the channel is selected and 0 otherwise. In the case of such a binary random variable \(\bar{z}_{m}\), with \(P(\bar{z}_{m}=1)=\pi_{1}\), it can be shown [20] that by setting \(K=2\) in Eq. 3, the Gumbel-Softmax trick can be simplified to \[\begin{split}&y_{1}=\sigma\left(\frac{\log\pi_{1}+g_{1}-g_{2}}{\tau}\right)\\ &y_{2}=1-y_{1}\end{split} \tag{4}\] with \(\sigma(\cdot)\) the sigmoid function. A continuous relaxation \(z_{m}\) of the binary random variable \(\bar{z}_{m}\) can then be obtained by taking \(z_{m}=y_{1}\). We can use this binary Gumbel-Softmax trick to transform unnormalized, learnable channel scores \(\boldsymbol{\alpha}\in\mathbb{R}^{M}\) yielded by a network \(h_{\varphi}(X)\) into continuous, differentiable approximations \(\mathbf{z}=[z_{1},\ldots,z_{M}]^{\top}\) of the discrete \(\bar{\mathbf{z}}\in\{0,1\}^{M}\). There are a number of ways this continuous relaxation can be used to obtain approximating gradients for the discrete \(\bar{\mathbf{z}}\), but we will follow the Straight-Through estimator approach [6, 20]. This means that we will sample discrete decisions from our binary distribution in the forward pass: \[\begin{split}\bar{z}_{m}=&\left\lfloor\sigma\left(\frac{\alpha_{m}+g_{1}-g_{2}}{\tau}\right)\right\rceil\\ =&\begin{cases}1,&\text{if }z_{m}=\sigma\left(\frac{\alpha_{m}+g_{1}-g_{2}}{\tau}\right)>0.5\\ 0,&\text{otherwise}\end{cases}\end{split} \tag{5}\] with \(\lfloor\cdot\rceil\) the rounding operator, resulting in a binary distribution where \(P(\bar{z}_{m}=1)=\sigma(\alpha_{m})\) (replacing \(\pi_{1}\) in Eq. 4). To enable backpropagation through the discrete rounding operator, we use gradients from the continuous relaxation in the backward pass, which implies the approximation \[\nabla_{\varphi}\bar{\mathbf{z}}\approx\nabla_{\varphi}\mathbf{z}. \tag{6}\] This scheme allows for hard decisions to be used during training and learned through end-to-end backpropagation. This process is schematically illustrated in Fig. 2. At inference time, Gumbel noise is no longer added to the score vector, resulting in the network no longer sampling from binary distributions, but behaving in a deterministic manner instead, i.e. \(\bar{z}_{m}=1\) if \(\sigma(\alpha_{m})>0.5\). ### _Enforcing sparsity_ We assume each channel is measured on a different node of a wireless sensor network, whose nodes are able to communicate with each other over bandwidth-constrained links. To reduce the communication load of these nodes, we want each node \(m\) to only transmit its data (i.e. yield a 1 in its binary mask) for a maximal target percentage \(T\in[0,1]\) of the input samples. We thus want the expected value of each element of the binary mask over the distribution of our input samples \(X\) to be below this target \(T\): \[\begin{split}&\mathbb{E}_{X}[\bar{z}_{m}]\leq T\\ &m=1,\ldots,M\end{split} \tag{7}\] These per-node constraints ensure that the masks are not only sparse, but also balanced across the different nodes, meaning that there is no single node that is transmitting significantly more than the others. Secondly, by applying the constraints to the expected value of the masks instead of to each separate mask, we allow a variable number of nodes to be used for different input samples.
To enforce these constraints, we use a mini-max optimization in which we impose a per-node sparsity loss on the decisions made during training and aggregate these by penalizing the node that currently violates this constraint the most: \[\begin{split}\mathcal{L}_{S,m}&=\max\left(\frac{1}{B}\sum_{b=1}^{B}\sigma\left(\frac{\alpha_{m}^{(b)}}{\tau_{0}}\right)-T,0\right)^{2}\\ \mathcal{L}_{S}&=\max_{m}\ \mathcal{L}_{S,m}\end{split} \tag{8}\] with \(B\) the batch size, \(\alpha_{m}^{(b)}\) the score for node \(m\) for the \(b\)-th input sample of the batch, and \(\tau_{0}\) a temperature constant we set at \(0.1\). This sparsity loss replaces the expected value in the constraints of Eq. 7 with a batch average and the discrete node decisions \(\bar{z}_{m}\) with the continuous approximation \(\sigma\left(\frac{\alpha_{m}^{(b)}}{\tau_{0}}\right)\). The fact that this approximation is computed through a sigmoid with a low temperature, without the addition of Gumbel noise, means that we more closely approximate the behaviour of the selection layer at inference time than we would if we directly penalized the hard decisions \(\bar{\mathbf{z}}\). This is important to ensure that if the sparsity constraints are met at training time, they will also be met at inference time. For instance, if the network ensures that for all \(X\), \(\sigma(\alpha_{m})=0.51\), then due to the addition of Gumbel noise in the computation of \(\bar{z}_{m}\), the network will sample \(\bar{z}_{m}=1\) and \(\bar{z}_{m}=0\) about equally often during training. At inference time, however, when no noise is added, the network will always yield \(\bar{z}_{m}=1\) since \(\sigma(\alpha_{m})>0.5\), surely violating the constraints. Also important to note is that using Eq. 8 requires training with a large enough batch size such that the batch average of Eq. 8 is a meaningful estimate of the expected value used in Eq. 7. ### _Dealing with different channel subsets_ Performing dynamic channel selection means the classification network \(f_{\theta}\) will see a different subset of active channels depending on the current input sample. Stated another way, our network needs to be able to deal with missing inputs. This can cause problems in the learning of the network weights, as the network needs to be able to extract relevant information when a channel is selected, but not cause interference when the channel is not selected and the corresponding input only contains zeros. Ideally, we would employ a number of separate classification networks, each optimized for a specific channel subset. In practice, however, this would require training and storage of \(2^{M}\) networks, which quickly becomes infeasible. Thus, the question arises of how a single network can cope as efficiently as possible with the multitude of possible input sets. We tackle this issue by extending our network with the Dynamic Spatial Filtering (DSF) proposed by Banville et al. [8]. The idea of DSF is to re-weight the \(M\) input channels using an attention layer. In this setting, new (virtual) channels are formed by applying a spatial filter to all input channels, i.e., making linear combinations of the channels, with the weights being computed from the spatial covariance matrix of the current input window. This re-weighting decreases the impact of missing channels on the network activations and has been shown to make a network more robust against noisy or missing channels.
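As a concrete sketch of the two mechanisms above, the following PyTorch-style code (the paper does not name its framework, so this is an assumption) implements the binary straight-through Gumbel-Softmax of Eqs. 4-6 and the per-node sparsity loss of Eq. 8.

```python
import torch

def gumbel_like(t):
    # i.i.d. Gumbel(0, 1) samples, clamped away from 0/1 for numerical safety.
    u = torch.rand_like(t).clamp(1e-10, 1.0 - 1e-10)
    return -torch.log(-torch.log(u))

def select_channels(alpha, tau=1.0):
    """Straight-through binary Gumbel-Softmax: hard 0/1 masks in the forward
    pass, gradients of the continuous relaxation in the backward pass."""
    z_soft = torch.sigmoid((alpha + gumbel_like(alpha) - gumbel_like(alpha)) / tau)
    z_hard = (z_soft > 0.5).float()
    return z_hard + z_soft - z_soft.detach()  # forward: z_hard, backward: z_soft

def sparsity_loss(alpha, T, tau0=0.1):
    """Eq. 8: penalize the node whose noise-free, batch-averaged selection
    probability most exceeds the target rate T. alpha has shape (batch, M)."""
    p = torch.sigmoid(alpha / tau0)
    excess = torch.clamp(p.mean(dim=0) - T, min=0.0) ** 2
    return excess.max()
```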
### _From centralized to distributed_ The channel scoring function \(h_{\varphi}\) in Eq. 1 currently still uses all \(M\) input channels to make a decision. However, an important aspect to be taken into account is the distributed nature of WSN platforms, where different channels are recorded on different physical devices. In this setting, we want to reduce the transmission load of these devices by only selecting, and thus transmitting, the signal of a node when its information is relevant for the current sample. However, this will only actually be beneficial when we are able to perform the selection _without centralizing the data of the different sensors_. We will consider three different cases corresponding to different constraints on our dynamic channel scoring function \(h_{\varphi}(X)\): * _Centralized_: The selection is derived from the joint information of all channels, i.e., \(\boldsymbol{\alpha}=h_{\varphi}(X)\). This setting serves as a theoretical upper bound for the following two practical settings. * _Distributed_: Each node has to decide whether to transmit based solely on its own data, i.e., \(\alpha_{m}=h_{\varphi,m}(\mathbf{x}_{m})\), where \(\mathbf{x}_{m}\) denotes the \(m\)-th row of \(X\). * _Distributed-Feedback_: Each node computes a short vector \(\boldsymbol{\beta}_{m}=h_{\varphi,m}(\mathbf{x}_{m})\in\mathbb{R}^{C}\) with \(C\ll L\), that is transmitted to the fusion center. At the fusion center, the \(\boldsymbol{\beta}_{m}\) of all nodes are combined into the stacked vector \(\boldsymbol{\beta}\) to determine a final scoring vector \(\boldsymbol{\alpha}=g_{\phi}(\boldsymbol{\beta})\). The discrete selection \(\bar{\mathbf{z}}\) resulting from this scoring vector is then returned to the nodes to inform them which of them should transmit. The size of these vectors \(\boldsymbol{\beta}_{m}\) should be small compared to the length \(L\) of the window to be transmitted, to minimize the overhead cost of the selection. In order to reduce the trainable parameters, one can decide to make the different \(h_{\varphi,m}\) models copies of each other, with shared weights for all layers, except the final layer, which has its own set of parameters for each channel \(m\). These three settings are illustrated in Fig. 3. ### _Training strategy_ Successfully training a model with masking units typically hinges on a good initialization. Since a sparsity loss is much easier to minimize than the training objective (by simply driving the weights of the binary masks to zero), the network can quickly collapse into a state where barely any units are executed [20, 22]. Once this has happened, it is very hard for the network to learn task-relevant information that could pull it out of this state. To avoid this, we adopt a step-wise training strategy that learns one module at a time. 1. Initialize the weights of the classifier \(f_{\theta}\) with the weights of the original M-channel network trained without any dynamic selection. 2. Add the _centralized_ dynamic selection layer and train it while fine-tuning (i.e., training at a lower learning rate) the classifier.
3. Add the DSF module and train it while fine-tuning the dynamic selection and classifier. 4. Transform the centralized dynamic selection layer into a distributed dynamic selection layer and fine-tune the whole model (see below). Figure 2: Illustration of the Gumbel-Softmax trick. During training, hard decisions are sampled by perturbing the channel scores \(\boldsymbol{\alpha}\) with Gumbel noise, passing this through a sigmoid to obtain soft probabilities and applying a thresholding operator. Backpropagation through the thresholding operation is enabled by using a Straight-Through estimator, treating the threshold in the backward pass as an identity function, i.e. \(\frac{\partial\bar{\mathbf{z}}}{\partial\mathbf{z}}\approx 1\), and thus \(\nabla_{\varphi}\bar{\mathbf{z}}\approx\nabla_{\varphi}\mathbf{z}\). To go from a centralized to a distributed architecture, we employ a 2-step transfer learning approach. First, we employ the centralized channel scoring function \(h_{\varphi}\) as a teacher model and try to ensure that the outputs \(\boldsymbol{\alpha}_{distr}\) of the student model (the distributed channel scoring function) match the outputs \(\boldsymbol{\alpha}_{centr}\) of the teacher by minimizing the following loss: \[\mathcal{L}(h_{\varphi,distr})=\mathcal{L}_{BCE}(\sigma(\boldsymbol{\alpha}_{distr}),\lfloor\sigma(\boldsymbol{\alpha}_{centr})\rceil) \tag{9}\] with \(\mathcal{L}_{BCE}\) the binary cross-entropy loss. By minimizing this loss, we do not necessarily ensure that the channel scores \(\boldsymbol{\alpha}\) are exactly alike, but rather that the discrete outputs at inference time will be similar, which is what we ultimately want. In the final step, we use the newly learned distributed channel selection layer, initialize the DSF module and the classifier with the corresponding weights of the centralized model, and fine-tune the whole network in an end-to-end fashion. ## IV Application to Wireless EEG Sensor Networks ### _Data set_ In the field of BCI, the motor execution paradigm is used to decode body movement from the corresponding neural signals in the sensorimotor areas of the brain. The High Gamma Dataset [23] contains EEG data from about 1000 trials of executed movements for each of the 14 subjects, as well as a separate test set of about 180 trials per subject, all following a visual cue. The dataset includes 4 classes of movements: left hand, right hand, feet, and rest. As in [23], we only use the 44 channels that cover the motor cortex, which are preprocessed by resampling at 250 Hz, high-pass filtering above 4 Hz, standardizing the per-channel mean and variance to 0 and 1 respectively, and extracting a window of 4.5 seconds for each trial. This pre-processing is adopted from [23] and described in full detail there. ### _WESN node emulation and selection_ In mini-EEG devices, we cannot measure the potential between a given electrode and a distant reference (e.g. the mastoid or Cz electrode), as we would in traditional EEG caps [13]. Instead, we can only record the local potential between two nearby electrodes belonging to the same sensor device. To emulate this setting using a standard cap-EEG recording, we follow the method proposed in [13], which considers each pair of electrodes within a certain maximum distance as a candidate electrode pair or node. By subtracting one channel from the other, we can remove the common far-distance reference and obtain a signal that emulates the local potential of the node. Applying this method with a distance threshold of 3 cm to our dataset, we obtain a set of 286 candidate electrode pairs or nodes, which have an average inter-electrode distance of 1.98 cm and a standard deviation of 0.59 cm.
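A minimal sketch of this emulation step could look as follows; the assumption here is that electrode coordinates are available in meters, and the function name is illustrative.

```python
import numpy as np

def emulate_nodes(eeg, positions, max_dist=0.03):
    """Emulate single-channel mini-EEG nodes from a cap-EEG recording: every
    electrode pair closer than 3 cm becomes a candidate node, whose local
    potential is the difference of the two channels (removing the common
    far-distance reference). eeg: (channels, samples); positions: (channels, 3)."""
    nodes = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
                nodes.append(eeg[i] - eeg[j])
    return np.stack(nodes)  # (n_candidate_nodes, samples)
```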
Given that our WESN will consist of a limited number of mini-EEG devices, we first need to select the \(M\) most informative sensor nodes from the pool of 286 candidate nodes. To achieve this, we adopt the static channel selection method described in [16], which enables us to learn the \(M\) optimal nodes for a given task and neural network by jointly training the network and a selection layer. Note that this is a fixed selection for the entire data set, not for each sample separately. We train this selection layer, along with the centralized network (\(f_{\theta}\) in Fig. 1) that we will use for classification (see Section IV-C), using data from all subjects in the dataset, which results in a subject-independent set of \(M\) mini-EEG nodes that are best suited for solving the motor execution task. We do this for 3 different values of \(M\), corresponding to a small WESN (\(M=4\) nodes), a medium-size WESN (\(M=8\) nodes), and a high-density WESN (\(M=16\) nodes). Figure 3: Different settings for the channel scoring module in a sensor network. Blue blocks indicate modules on the sensor nodes, orange blocks modules on the fusion center, and red dotted lines communication between the two. (a) The centralized upper bound employs the joint information of all channels to compute a score, but would require all data to be centralized. (b) The distributed setting does not allow for communication between the nodes (via the fusion center) and makes each score only dependent on the local data. (c) The distributed-feedback setting allows for a small amount of communication between the nodes to make a better, joint decision. ### _Model architecture_ As mentioned above, the neural network architecture we employ for classification (\(f_{\theta}\) in Fig. 1) is the MSFBCNN proposed in [24], which was designed specifically for a motor execution task. Inspired by the Filterbank-CSP approach of [25], this model computes log-power features by applying a number of temporal filters in parallel, aggregating these with spatial filters, and then applying squaring and average pooling over time. These features are then classified by a single linear layer. While the details of this network are not relevant for this study, we provide a summary of this network in table format in Appendix A for completeness. For our channel scoring module \(h_{\varphi}\) in the centralized setting, we will employ the same architecture, with the final linear layer being adapted to output a vector of dimension \(M\times 1\). In the two distributed settings, the different node scoring models \(h_{\varphi,m}\) are copies of each other, with shared weights for all layers, except the final layer, which has its own set of parameters for each node/channel \(m\). The network architecture of these \(h_{\varphi,m}\) is simply a single-input version of the \(M\)-input network defined by \(f_{\theta}\), but where the last fully-connected layer outputs the scalar \(\alpha_{m}\) in the distributed setting and the node summary \(\boldsymbol{\beta}_{m}\in\mathbb{R}^{C\times 1}\) in the distributed-feedback setting. The dimension of these node summaries \(\boldsymbol{\beta}_{m}\) is chosen to be \(C=10\), ensuring that the overhead of their transmission is negligible compared to the transmission of the full window of \(L=1125\) time samples. The module \(g_{\phi}(\boldsymbol{\beta})\) aggregating the node summaries in the fusion center is a simple 2-layer multilayer perceptron (MLP) with a hidden dimension of 50 and ReLU nonlinearity.
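For illustration, the distributed-feedback scoring path could be wired up as follows. This is a sketch: a toy linear trunk stands in for the single-channel MSFBCNN trunk used in the paper, while the shared-weight structure, the \(C=10\) summaries and the 2-layer fusion MLP with hidden dimension 50 follow the description above.

```python
import torch
import torch.nn as nn

class DistributedFeedbackScorer(nn.Module):
    def __init__(self, n_nodes, win_len, C=10, hidden=50):
        super().__init__()
        # Shared trunk for all nodes (stand-in for the single-channel MSFBCNN).
        self.trunk = nn.Sequential(nn.Linear(win_len, hidden), nn.ReLU())
        # Each node has its own final layer producing its summary beta_m.
        self.heads = nn.ModuleList([nn.Linear(hidden, C) for _ in range(n_nodes)])
        # Fusion center: 2-layer MLP mapping stacked summaries to scores alpha.
        self.fusion = nn.Sequential(nn.Linear(n_nodes * C, hidden), nn.ReLU(),
                                    nn.Linear(hidden, n_nodes))

    def forward(self, x):  # x: (batch, n_nodes, win_len)
        betas = [head(self.trunk(x[:, m])) for m, head in enumerate(self.heads)]
        return self.fusion(torch.cat(betas, dim=1))  # alpha: (batch, n_nodes)
```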
The DSF module also consists of a 2-layer MLP with hidden dimension 50 and ReLU nonlinearity, which is applied to the vectorized sample covariance matrix \(\frac{1}{L}\bar{X}\bar{X}^{\top}\) of the masked input sample, and which produces a weight matrix \(W\in\mathbb{R}^{M\times M}\) and a bias \(b\in\mathbb{R}^{M\times 1}\) that are used to compute a re-weighted output \(\tilde{X}=W\bar{X}+b\)[8]. Finally, for training, we follow the procedure described in section III-F, using the Adam optimizer [26] with a learning rate of \(10^{-3}\) when a module is trained for the first time and \(10^{-4}\) when it is fine-tuned during subsequent steps. A batch size of 64 is employed and training lasts for 100 epochs, with early stopping activated when the validation loss does not decrease for 10 epochs. The hyperparameter \(\lambda\), which controls the weight of the sparsity loss, was set to 10 for this application, and a fixed temperature \(\tau=1\) was used for the Gumbel-Softmax module. ## V Experimental Results ### _Centralized versus distributed performance_ We first analyze the rate-accuracy trade-off obtained by our proposed dynamic channel selection method and investigate the impact of going from a theoretical, centralized approach to a practical, distributed one. To do this, we train our model for a given target rate \(T\), indicating the maximal percentage of input samples for which the data of each node should be used and transmitted. Since the lifetime of the WSN as a whole will ultimately be determined by the node with the highest transmission rate, we will always report the _maximal_ rate \(\mathcal{R}_{max}\) among the nodes, rather than the _average_. As a proof-of-concept benchmark, we will employ a naive system that for each sample and for each node randomly determines whether the data should be transmitted, doing so with a probability equal to the relative target rate. This allows us to investigate whether the dynamic selection is truly able to make intelligent decisions that go beyond simply making sure the constraints are met. We train our model for a range of target rates and for networks consisting of 4, 8 and 16 nodes, each time averaging the results over 5 runs. Fig. 4 shows the resulting rate-accuracy trade-offs. Firstly, we can observe that the dynamic selection indeed consistently outperforms the random selection, with the gap widening as the target rate decreases. Secondly, while the distributed network shows a small performance gap with the centralized upper bound, allowing for even a small amount of communication between the nodes by employing the distributed-feedback setting compensates for this, as it performs very similarly to the centralized setting. When comparing the networks with different numbers of nodes, it can be observed that the more nodes are being used, the smaller the relative performance losses are when moving from the starting rate of 100% to a rate of 50% (for instance, the 16-node network only loses 5% accuracy, while the 4-node network loses 10%). Since there is more redundancy among the 16 nodes than among the 4 nodes, it makes sense that dropping channels in the former has less of an impact on the accuracy than in the latter. ### _Impact of dynamic spatial filtering_ Next, we analyze the importance of the presence of the DSF module by comparing it with the networks where it has not been added. The results are illustrated in Fig. 5.
For the 4-node network, it can be observed that the inclusion of DSF only results in a small improvement for the random selection network and no improvement at all on the dynamic selection network. As mentioned in section III-D, the main purpose of the DSF module is to increase the capability of the classifier to cope with the different channel subsets it is presented with. In this small, 4-node network, the number of channel subsets \(2^{M}\) is still manageable. Furthermore, in contrast to the random selection, the dynamic selection will not randomly sample from _all_ possible channel subsets, but will focus its sampling on a more select number of these. For instance, it will almost always make sure that both a node from the left and a node from the right hemisphere are included, since the difference between both is highly informative for motor execution. This explains why the inclusion of DSF is slightly more necessary, and thus beneficial, in the case of random selection than in the case of dynamic selection. When moving to 8- and 16-node networks, the number of subsets quickly grows and the performance gains afforded by DSF become more and more salient, with these gains once again being slightly higher for the random selection than for the dynamic selection. ### _Impact of noisy environments_ Up until now, we have discussed situations where we perform a trade-off between the number of channels we reject and the accuracy of the classifier. In some cases, however, working with only a subset of the channels can actually be beneficial for the accuracy as well. In environments where sudden noise bursts can occur, these unexpected inputs, even when limited to a single channel, can heavily disturb the activations of the entire neural network and lead to misclassifications. To make it easier for the network weights to be robust against these noise bursts, it can be beneficial to detect when these happen and zero the corresponding input instead. To test this hypothesis, we repeated our previous experiments with the dynamic selection method, but in this case, each channel of each input window had a 25% chance of being replaced by Gaussian noise with zero mean and standard deviation uniformly sampled between 0 and 3, leaving no more relevant information on this channel. Fig. 6 compares the performance of the distributed-feedback dynamic selection with a baseline network, which directly takes the perturbed data as input. The noise is added during both training and testing to enable a fair comparison, i.e., the network without dynamic channel selection can in principle learn how to cope with these noise bursts. Firstly, it can be observed that the dynamic selection never transmits more information than absolutely necessary: only 75% of the channels actually contain information, so the resulting rate is automatically capped around 75%. Secondly, the automatic rejection of noisy channels does indeed lead to an increased accuracy compared to the baseline accepting this noise as input. A probable reason is that it will be easier for the classifier to find weights that process normal inputs normally and minimize the impact of disturbances on the activations when these disturbances are zero inputs rather than noise bursts. Figure 4: Rate-accuracy trade-off for the proposed dynamic channel selection method for networks of 4, 8 and 16 nodes. Mean test accuracies are plotted against the percentage of samples for which the critical node of the network needs to transmit, i.e. the node with the highest percentage of transmission. Each data point is an average of 5 runs for a given maximal target rate \(T\) (see Eqs. 7 and 8). Baseline performance indicates accuracy without dynamic selection involved, i.e. each node transmits at a rate \(\mathcal{R}\) of 100%. Dynamic selection consistently outperforms random channel selection. While a gap exists between the distributed implementation and the centralized upper bound, sharing a small amount of information between the nodes in the distributed-feedback setting largely overcomes this and performs about as well as the centralized upper bound. Figure 5: Rate-accuracy trade-off for the proposed dynamic channel selection method for networks of 4, 8 and 16 nodes, with and without inclusion of the DSF module. Baseline performance indicates accuracy without dynamic selection involved, i.e. each node transmits at a rate \(\mathcal{R}\) of 100%. While DSF is not necessary for the 4-node network, it becomes more important as the size of the network increases, delivering a consistent performance gain across all settings. ## VI Conclusion and future outlook We have proposed a dynamic channel or sensor selection method in order to reduce the communication cost and improve the battery lifetime of WSNs. For each input window, the method selects the optimal subset of sensors to be used by a neural network classifier, while optimizing a trade-off between the number of channels selected and the accuracy of the given task. The dynamic selection and the classifier are jointly trained in an end-to-end way through backpropagation. The dynamic selection module consists of three major parts: a channel scoring function assigning a relevance score to each channel, a binary Gumbel-Softmax trick converting these scores to discrete decisions, and the dynamic spatial filtering module of Banville et al. [8] to make the classifier more robust against the resulting absence of channels. A crucial aspect of this dynamic selection is that it can be computed in a _distributed_ way, requiring minimal communication overhead between the nodes. We have demonstrated the use of this method to perform a trade-off between the transmission rate of the nodes in an emulated wireless EEG sensor network and the accuracy of a motor execution task. Additionally, we have presented a use case where the dynamic selection can even improve the accuracy of the model, by automatically rejecting inputs that might harm performance, such as heavy bursts of noise. Though we have focused on the application use case of wireless EEG sensor networks, our methodology is generic and can be applied to sensor networks with any kind of modality. In future work, we will explore applications of this method on distributed platforms other than WESNs.
2310.08133
Multi Level Dense Layer Neural Network Model for Housing Price Prediction
Predicting the price of a house remains a challenging issue that needs to be addressed. Research has attempted to establish a model with different methods and algorithms to predict the housing price, from the traditional hedonic model to a neural network algorithm. However, many existing algorithms in the literature are proposed without any fine-tuning and customization in the model. In this paper, the author proposes a novel neural network-based model to improve the performance of housing price prediction. Inspired by the modular neural network, the proposed model consists of a three-level neural network that is capable of processing information in parallel. The author compared several state-of-the-art algorithms available in the literature on the Boston housing dataset to evaluate the effectiveness of the proposed model. The results show that the proposed model provides better accuracy and outperforms existing algorithms on different evaluation metrics. The code for the implementation is available at https://github.com/wijayarobert/MultiLevelDenseLayerNN
Robert Wijaya
2023-10-12T08:46:26Z
http://arxiv.org/abs/2310.08133v1
# Multi Level Dense Layer Neural Network Model for Housing Price Prediction ###### Abstract Predicting the price of a house remains a challenging issue that needs to be addressed. Research has attempted to establish a model with different methods and algorithms to predict the housing price, from the traditional hedonic model to a neural network algorithm. However, many existing algorithms in the literature are proposed without any fine-tuning and customization in the model. In this paper, the author proposes a novel neural network-based model to improve the performance of housing price prediction. Inspired by the modular neural network, the proposed model consists of a three-level neural network that is capable of processing information in parallel. The author compared several state-of-the-art algorithms available in the literature on the Boston housing dataset to evaluate the effectiveness of the proposed model. The results show that the proposed model provides better accuracy and outperforms existing algorithms on different evaluation metrics. The code for the implementation is available at [https://github.com/wijayarobert/MultiLevelDenseLayerNN](https://github.com/wijayarobert/MultiLevelDenseLayerNN) ## 1 Introduction Estimating the value of a house is a major problem for many stakeholders. Several factors like the size of the house, the number of rooms, as well as the location affect the price of the house. Nevertheless, the price can be predicted with various methods. One of the common techniques is regression, which involves one or more features as input and a single target output. In [1], the author developed a Support Vector Regression model with a Gaussian filter to predict the housing price. Other researchers also proposed a neural network model with a promising testing accuracy on the Boston dataset (87.7%) [2]. This result indicates that a neural network is a viable algorithm for resolving difficult problems, even providing a more robust model than the traditional hedonic model. The neural network is a widely used machine learning algorithm because of its ability to learn from raw data. This is because a neural network model can automatically learn the features of the data without requiring extensive handcrafted features beforehand. In a deep neural network architecture, multiple layers represent and perform non-linear transformations of the data at different levels of a hierarchy. The effectiveness of the neural network algorithm leads the author to develop an appropriate architecture to forecast the housing price with better accuracy. In this paper, the author proposes a novel neural network model to improve the performance of housing price prediction. Unlike the regular feedforward neural network, where the input moves in only one direction, the author designed a three-level neural network that is capable of processing information simultaneously. The results of the proposed model outperform the existing model, with the testing accuracy on the Boston dataset reaching 91.1%. To verify the method, the author visualizes the predicted and actual values through a regression graph and plots a histogram to see the frequency of prediction errors. This paper is structured as follows: First, in section 2 the author briefly states the problem definition, including the dataset, performance criteria, and data preprocessing. Section 3 describes the architecture of the proposed neural network.
Then, Section 4 reports the performance of both the proposed method and the existing algorithms, and Section 5 summarizes the conclusions of this research.

## 2 Problem Settings
The prediction results of the neural network proposed in [2] are robust and promising. However, all the evaluated models, including the final proposed model, are standard feedforward neural networks without any fine-tuning or customization in the development of the model. Despite the high accuracy achieved in that research, the model can still be improved by applying advanced features and an empirical design of the neural network architecture.

**Dataset:** The Boston Housing Dataset is used in this work. The dataset is small but widely used, containing 506 cases of housing price information from different suburbs of Boston, Massachusetts. From the original dataset, 405 samples are used as training data and 101 samples as test data. The Boston dataset covers 14 attributes for each case; the attributes are explained in Table 1.

\begin{table} \begin{tabular}{c|l} **Attribute** & **Description** \\ \hline CRIM & per capita crime rate by town \\ ZN & proportion of residential land zoned for lots over 25,000 sq. ft. \\ INDUS & proportion of non-retail business acres per town \\ CHAS & Charles River dummy variable (= 1 if tract bounds river; = 0 otherwise) \\ NOX & nitric oxides concentration (parts per 10 million) \\ RM & average number of rooms per dwelling \\ AGE & proportion of owner-occupied units built before 1940 \\ DIS & weighted distances to five Boston employment centres \\ RAD & index of accessibility to radial highways \\ TAX & full-value property-tax rate per \$10,000 \\ PTRATIO & pupil-teacher ratio by town \\ B & \(1000(Bk-0.63)^{2}\), where \(Bk\) is the proportion of Black residents by town \\ LSTAT & \% lower status of the population \\ MEDV & median value of owner-occupied homes in \$1000s \\ \end{tabular} \end{table} Table 1: Attribute descriptions for the Boston dataset.

**Evaluation metrics:** Several metrics are used to evaluate the accuracy of the proposed model: R-Squared (R\({}^{2}\)), Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). In a regression problem, the coefficient of determination (R\({}^{2}\)) is commonly used to measure the correlation between the actual and predicted outputs, while MAE, MSE, and RMSE assess the error of the model. Specifically, MAE evaluates the absolute distance between the actual and predicted values, MSE represents the average of the squared differences between true and predicted values, and RMSE takes the square root of MSE, keeping the error in the same units as the target. The metrics are calculated with the following equations:

\[R^{2}=1-\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}} \tag{1}\]

\[MAE=\frac{1}{N}\sum_{i=1}^{N}|y_{i}-\hat{y}_{i}| \tag{2}\]

\[MSE=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2} \tag{3}\]

\[RMSE=\sqrt{MSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}} \tag{4}\]

where \(\hat{y}_{i}\) is the predicted value of \(y_{i}\) and \(\bar{y}\) is the mean value of \(y\). In general, R\({}^{2}\) represents the accuracy of the model: the higher the R\({}^{2}\) score, the better the model fits, while lower scores of MAE, MSE, and RMSE indicate better accuracy.
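For concreteness, Eqs. (1)-(4) can be computed directly from predictions; the following is a minimal NumPy sketch (the function name and toy values are illustrative, not from the paper).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, MSE, and RMSE as defined in Eqs. (1)-(4)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    mae = np.mean(np.abs(residual))                    # Eq. (2)
    mse = np.mean(residual ** 2)                       # Eq. (3)
    rmse = np.sqrt(mse)                                # Eq. (4)
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                         # Eq. (1)
    return {"R2": r2, "MAE": mae, "MSE": mse, "RMSE": rmse}

# Example with toy house prices (in $1000s):
print(regression_metrics([24.0, 21.6, 34.7], [23.1, 22.0, 33.2]))
```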
**Data Normalization:** Before training the model, both the training and testing datasets are normalized. This paper performs standard (z-score) normalization using the TensorFlow built-in function, which transforms the features to have a mean of 0 and a standard deviation of 1. The following equation describes how the normalization is implemented:

\[z=\frac{x-\mu}{\sigma} \tag{5}\]

where \(\mu=\frac{1}{n}\sum_{i=1}^{n}x_{i}\) and \(\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\mu)^{2}}\).

## 3 Methodologies
Artificial neural network (ANN) algorithms are inspired by the architecture of neurons in the brain. Through the dynamics of the network, the algorithm can learn to recognize patterns in a dataset and generalize from what it has processed. Given a training set containing a list of inputs with corresponding labels, an ANN can be trained to classify the data by adjusting the values of the weights between its neurons [3]. This is called supervised learning, where the network tries to fit an input-output function to the training data. This understanding can be translated into the following mathematical model:

\[z=\sum_{i=1}^{n}x_{i}w_{i}+b \tag{6}\]

Each input feature \(x_{1},x_{2},...,x_{n}\) is multiplied by a weight \(w_{1},w_{2},...,w_{n}\) before the products are summed together. Then, a constant value called the bias (\(b\)) is added to produce the net input (\(z\)) of the neuron. The net input is passed through an activation function (\(g\)) to produce the output (\(j\)), which is passed on to other neurons:

\[j=g(z)=g\left(\sum_{i=1}^{n}x_{i}w_{i}+b\right) \tag{7}\]

The neuron is responsible for receiving information from other neurons, processing the information, and transmitting the result to other neurons. This process is illustrated in Figure 1.

### Architecture
Inspired by the modular neural network, in which the network consists of several multilayer perceptrons, the author proposes a multi-level dense layer neural network to yield better generalization. Each module, or level, in the network is independent, which allows the system to work in parallel [4]. Moreover, from a computational perspective, the modular design leads to more robust and efficient computation because it eliminates the high coupling burden often encountered in a standard (monolithic) neural network. The architecture is shown in Figure 2. As shown in this figure, the network has 3 hidden layers consisting of 10 dense layers in total, each with the same number of neurons (128) and the ReLU activation function. The first layer is the input layer with 13 units, since the data have 13 features. The input layer is passed to six dense layers (3 pairs) in the first hidden layer. Each pair has the same number of neurons because, based on experiments conducted by the author, this leads to better accuracy than using different numbers of neurons. The output of each level in the first hidden layer is concatenated as the input to the second hidden layer. The results of the three dense layers in the second hidden layer are combined before being processed by a single dense layer in the last hidden layer. The output of this process is passed to the output layer with a single unit, which is the predicted price of the house.
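Some wiring details of this architecture are only shown in Figures 2 and 3; the Keras sketch below is one plausible reading of them (three concatenated pairs of parallel dense layers in the first hidden layer, which reproduces the 10 dense layers and 4 concatenations counted in Figure 3). It is a sketch under these assumptions, not the author's released implementation, which is linked in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_multi_level_model(n_features=13):
    """One plausible reading of Figure 2: three pairs of parallel
    128-unit ReLU dense layers with pairwise concatenation, three
    more dense layers, a final concatenation, one last dense layer,
    and a single-unit output (10 dense layers, 4 concatenations)."""
    inputs = layers.Input(shape=(n_features,))
    x = layers.BatchNormalization()(inputs)  # BN applied to the input data

    # First hidden layer: six dense layers arranged as three pairs.
    branches = []
    for _ in range(3):
        a = layers.Dense(128, activation="relu")(x)
        b = layers.Dense(128, activation="relu")(x)
        branches.append(layers.Concatenate()([a, b]))

    # Second hidden layer: one dense layer per branch, then merge.
    merged = layers.Concatenate()(
        [layers.Dense(128, activation="relu")(h) for h in branches])

    # Last hidden layer and the single-unit price output.
    h = layers.Dense(128, activation="relu")(merged)
    outputs = layers.Dense(1)(h)
    return models.Model(inputs, outputs)
```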
### Batch Normalization
Standard Batch Normalization (BN) is implemented in the early part of the model architecture to accelerate the training process of the network. Although the precise effect of BN remains a topic of further investigation [5], the experiments during model development in this paper show that BN has positive effects on several aspects of the neural network. According to [5], BN helps reduce the internal covariate shift caused by changes in the distribution of the input signal. In this experiment, it helps speed up the training process and benefits every evaluation metric used in this research.

Figure 1: The ANN receives the input features \(x_{1},x_{2},...,x_{n}\) and multiplies them by the weights \(w_{1},w_{2},...,w_{n}\). The weighted inputs are summed together with the bias \(b\) before passing through the activation function to produce the output of the neuron.

Figure 2: The architecture of the multi-level dense layer neural network. The architecture consists of 10 dense layers in total, spread over three hidden layers. A batch normalization function is applied to the input data before it is passed into the hidden layers. The output of the hidden layers is passed to the output layer as a single unit (the predicted price).

### Training Process
The model was trained for 1000 epochs, with 20% of the training dataset used for validation. The author used the Adam (Adaptive Moments) optimization method with a learning rate of 0.001. The optimizer minimizes the loss function during training by adjusting the parameters of the neural network, which helps reduce the loss and increase the accuracy. The MAE and MSE during training are shown in Figure 4.

## 4 Experimental Results
In this section, the author performs experiments to measure the accuracy of the model and compares the proposed model with the existing algorithms in [2].

Figure 4: MAE and MSE scores during training of the model. Both MAE and MSE decreased significantly in the first 50 epochs of training. For reference, training the model takes 1 minute 44 seconds on a CPU (AMD Ryzen 3) and 1 minute 25 seconds on a Google Colab GPU.

Figure 3: Network architecture generated with the Keras plot_model utility. In summary, the network consists of 1 input layer, 1 batch normalization function, 10 dense layers, 4 concatenation operations, and 1 output layer.

### Results on Boston Dataset
To verify the accuracy achieved during training, the model was evaluated on the testing dataset. The regression graph for the testing dataset and the histogram of the prediction errors are shown in Figure 5. As shown in the figure, the regression graph indicates that the proposed neural network model is capable of capturing the correlation between the input and output parameters and gives robust prediction results. The values of the four evaluation metrics for both the training and testing datasets are shown in Table 2. The R\({}^{2}\) value is 0.948 for the training dataset and 0.911 for the testing dataset. The MAE is 1.99 for the training dataset and 2.31 for the testing dataset. The MSE is 7.24 and 9.16 for the training and testing datasets, respectively. Lastly, the RMSE is 2.69 for the training set and 3.02 for the testing set. These values indicate that the model is robust and provides better accuracy than the existing algorithms. Furthermore, to illustrate the performance of the model, the author compares the actual values (the prices of the houses) with the values predicted by the proposed model, as shown in Table 3.
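For reference, the training and evaluation pipeline described in this section can be sketched as follows, reusing `build_multi_level_model` and `regression_metrics` from the earlier sketches. The Keras Boston housing loader is assumed to be available; note that it splits the data 404/102, close to but not exactly the paper's 405/101 split.

```python
import tensorflow as tf

# Load the Boston housing data via the Keras built-in loader
# (assumed available; default split is 404 train / 102 test).
(x_train, y_train), (x_test, y_test) = \
    tf.keras.datasets.boston_housing.load_data()

model = build_multi_level_model(n_features=x_train.shape[1])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse", metrics=["mae", "mse"])
# Training setup reported above: 1000 epochs, 20% held out for validation.
model.fit(x_train, y_train, epochs=1000, validation_split=0.2, verbose=0)

# Test-set metrics in the style of Table 2.
print(regression_metrics(y_test, model.predict(x_test).ravel()))
```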
### Algorithm Comparison
Several existing algorithms from the literature are compared with the proposed model. Figure 6 and Table 4 describe the performance of the proposed model compared to the existing algorithms. As shown in Figure 6 and Table 4, the proposed model provides more robust results on most evaluation metrics.

## 5 Conclusion
In this paper, the author developed a novel neural network model to improve the performance of housing price prediction using the Boston housing dataset. The architecture of the model is inspired by the modular neural network, in which the network consists of several independent multilayer perceptrons that work in parallel. The proposed model is evaluated by examining its performance on different metrics, namely R\({}^{2}\), MAE, MSE, and RMSE. The experimental results show that the proposed model significantly outperforms the existing state-of-the-art algorithms used in the literature, including ANN, XGBoost, Random Forest, Linear Regression, and SVM. Admittedly, the prediction accuracy has only been demonstrated on the Boston dataset, and the generality of the model needs to be assessed in further research. In future work, the author would like to explore novel neural network techniques and architectures that can be applied to a wider variety of real-world problems in society.
2303.06032
Exploring Adversarial Attacks on Neural Networks: An Explainable Approach
Deep Learning (DL) is being applied in various domains, especially in safety-critical applications such as autonomous driving. Consequently, it is of great significance to ensure the robustness of these methods and thus counteract uncertain behaviors caused by adversarial attacks. In this paper, we use gradient heatmaps to analyze the response characteristics of the VGG-16 model when the input images are mixed with adversarial noise and statistically similar Gaussian random noise. In particular, we compare the network response layer by layer to determine where errors occurred. Several interesting findings are derived. First, compared to Gaussian random noise, intentionally generated adversarial noise causes severe behavior deviation by distracting the area of concentration in the networks. Second, in many cases, adversarial examples only need to compromise a few intermediate blocks to mislead the final decision. Third, our experiments revealed that specific blocks are more vulnerable and easier to exploit by adversarial examples. Finally, we demonstrate that the layers $Block4\_conv1$ and $Block5\_conv1$ of the VGG-16 model are more susceptible to adversarial attacks. Our work could provide valuable insights into developing more reliable Deep Neural Network (DNN) models.
Justus Renkhoff, Wenkai Tan, Alvaro Velasquez, William Yichen Wang, Yongxin Liu, Jian Wang, Shuteng Niu, Lejla Begic Fazlic, Guido Dartmann, Houbing Song
2023-03-08T07:59:44Z
http://arxiv.org/abs/2303.06032v1
# Exploring Adversarial Attacks on Neural Networks: An Explainable Approach

###### Abstract
Deep Learning (DL) is being applied in various domains, especially in safety-critical applications such as autonomous driving. Consequently, it is of great significance to ensure the robustness of these methods and thus counteract uncertain behaviors caused by adversarial attacks. In this paper, we use gradient heatmaps to analyze the response characteristics of the VGG-16 model when the input images are mixed with adversarial noise and statistically similar Gaussian random noise. In particular, we compare the network response layer by layer to determine where errors occurred. Several interesting findings are derived. First, compared to Gaussian random noise, intentionally generated adversarial noise causes severe behavior deviation by distracting the area of concentration in the networks. Second, in many cases, adversarial examples only need to compromise a few intermediate blocks to mislead the final decision. Third, our experiments revealed that specific blocks are more vulnerable and easier to exploit by adversarial examples. Finally, we demonstrate that the layers \(Block4\_conv1\) and \(Block5\_conv1\) of the VGG-16 model are more susceptible to adversarial attacks. Our work could potentially provide useful insights into developing more reliable Deep Neural Network (DNN) models.

## I Introduction
Artificial intelligence (AI) and deep learning (DL) provide unlimited possibilities for addressing various engineering and scientific problems. However, the reliability and robustness of Deep Neural Networks (DNNs) have raised many concerns; for example, researchers have shown that DNNs such as the VGG-16 model [1] can be misled by intentionally mutated images that are imperceptible to humans [2, 3, 4]. In these scenarios, the mutated pixels have pseudo-random characteristics and thus raise concerns about the uncertainty and trustworthiness of DNNs under the natural Gaussian noise of their operational environments [5, 6]. For mitigation, on the one hand, solutions have been proposed to increase the robustness of DNNs by augmenting their training with perturbed samples or introducing a robust loss term [7, 8, 9]. These approaches encourage DNNs to treat a slightly perturbed image as its origin. In this context, finding adversarial examples and incrementally training DNNs on them is analogous to fuzz testing. Representative frameworks include DLFuzz [10], DeepXplore [2], DeepHunter [11], and TensorFuzz [12]. One common feature of these adversarial-example-enabled neural network fuzz testing frameworks is that they not only discover adversarial examples but also try to maximize the activation rates of neurons, a.k.a. neuron coverage [13, 14]. Neuron coverage describes how many neurons are activated during a prediction. DLFuzz [10] adapts this concept from DeepXplore [2] and tries to optimize this metric by generating adversarial examples and maximizing the prediction difference between the original and adversarial images. Higher neuron coverage usually contributes positively to the robustness of DNNs. However, incrementally training DNNs on perturbed samples is computationally expensive and can reduce classification accuracy [15]. Moreover, it is difficult to find a balance between accuracy and adversarial robustness. Defending existing DNNs against adversarial examples is therefore preferable.
Defense-GAN [16] trains a defensive generative adversarial network (GAN) on natural inputs; a noticeable behavior deviation can be detected when adversarial examples are fed into this defensive GAN. Instead of modeling the inputs directly, I-Defender [17] models the output distributions of the fully connected hidden layers for each class and then uses statistical testing to reject adversarial examples. Adversarial perturbations can also be treated as additive noise, so related approaches, such as denoising autoencoders, can be used to purify the inputs of DNNs [18, 19, 20, 21]. Most current efforts still regard DNNs as black-box models and have not yet analyzed the effect of adversarial attacks in an explainable way. In this paper, we use images from the ImageNet database [22] and manipulate them with DLFuzz [10], which generates adversarial examples from given seed images while trying to activate as many neurons as possible. Grad-CAM [23] heatmaps are generated to make the decision-making procedure explainable. Wrongly classified mutated images are analyzed and compared to their originals to find out why, and in which layer, the behavior deviations occur. We compare the response characteristics of the VGG-16 model using Grad-CAM when the input images are mixed with adversarial noise and statistically similar Gaussian noise. Our findings are as follows.

* Both random noise and adversarial noise cause behavioral deviations in intermediate layers. However, adversarial noise causes larger behavioral deviations by distracting the area of focus of the intermediate layers.
* Certain blocks are more vulnerable and more easily exploited by adversarial examples. In particular, we demonstrate that the layers \(Block4\_conv1\) and \(Block5\_conv1\) of the VGG-16 model are more susceptible to adversarial attacks.
* We show that a neural network model can be misled by compromising only a few blocks.

The remainder of this paper is organized as follows: a literature review of related work is presented in Section II; the methodology is presented in Section III; the evaluation and discussion are presented in Section IV, with conclusions in Section VI.

## II Related Work
Current efforts aim at increasing the robustness of DNNs against adversarial attacks. Accordingly, current research addresses approaches to make the decision-making of DNNs more transparent [24][25][26] and to find vulnerabilities in model architectures [27][28]. Since our approach builds on and is closely related to this research, we discuss these efforts below.

_Adversarial Attacks_: An adversarial example is generated by introducing small, imperceptible perturbations to a given seed image in order to cause misclassification. There are several procedures, such as FGSM [29], for generating adversarial examples. In general, an adversarial attacker has to know the parameters of the target neural classifier and then solve an optimization problem in which the perturbation maximizes the classification loss while minimizing the difference between the perturbed and original images. Defending DNNs against such adversarial examples can increase the robustness of the model.

_Visual Explanations_: Methods like Gradient-weighted Class Activation Mapping (Grad-CAM) [25] and Local Interpretable Model-Agnostic Explanations (LIME) [26] aim to improve the interpretability of DNNs. They have the ability to explain the predictions of black-box models and can therefore improve their trustworthiness.
Grad-CAM calculates a heatmap that highlights different areas of an image in different colors; these colors visualize how much an area contributes positively to a certain prediction. LIME highlights pixels that contribute positively or negatively on the basis of a threshold; these pixels can be represented in different colors, as in Grad-CAM. With Grad-CAM, it is possible to identify not only which areas contribute positively or negatively to a certain prediction, but also how much an area influences the decision [30]. Numerous efforts use or improve these methods to increase confidence in machine learning models [31][32][33].

## III Methodology
### _Problem Formulation_
To make DNNs as robust as possible against adversarial attacks, we must understand how adversarial examples cause misclassifications. For this reason, the classification process is analyzed layer by layer to find sections of an exemplary DNN that are particularly vulnerable to such attacks. In doing so, we present a procedure for analyzing any DNN, making it possible to find vulnerabilities in the fundamental network architecture and counteract them in the future. A brief workflow of this paper is given in Figure 1.

Fig. 1: Exploring neural network response against adversarial perturbations and Gaussian noise.

In general, original samples (a.k.a. natural images), adversarial examples, and noisy samples are analyzed using Grad-CAM. A heatmap is generated for each layer of the target VGG-16 model, showing the focus point of the DNN in the corresponding layer. Then, the cosine similarity between the heatmaps of the adversarial examples and those of the original images is calculated for every layer. By locating layers where the similarity between the adversarial and the original samples is particularly low, we can determine which layers react strongly to the perturbations.

### _Data Preparation_
We use randomly selected images from the ImageNet dataset as seeds and derive their adversarial perturbations (a.k.a. adversarial noise), denoted as:

\[\mathbf{N}_{a}=\text{ADV}(\mathbf{S},\mathbf{\varepsilon}) \tag{1}\]

where \(\text{ADV}(\cdot)\) is the adversarial perturbation function, \(\mathbf{\varepsilon}\) is the strength of the perturbation, and \(\mathbf{S}\) denotes a seed image. In this work, DLFuzz is configured to generate adversarial noise and maximize neuron coverage at the same time. We generate the same amount of Gaussian noise \(\mathbf{N}_{g}\), defined as random perturbations with statistical properties similar to the adversarial perturbations. Accordingly, we use the expected value, standard deviation, and shape of \(\mathbf{N}_{a}\) to generate \(\mathbf{N}_{g}\). We define the distribution as:

\[\mathcal{N}(\mu_{a},\Sigma_{a}) \tag{2}\]

where \(\mathcal{N}(\cdot)\) denotes a normal distribution, \(\mu_{a}\) denotes the expected value, and \(\Sigma_{a}\) denotes the standard deviation. We let \(\mu_{a}=\overline{\mathbf{N}_{a}}\) and \(\Sigma_{a}=\sigma(\mathbf{N}_{a})\). Based on this distribution, we generate random noise \(\mathbf{N}_{g}\), represented as a matrix with the same shape as \(\mathbf{N}_{a}\). For more details, please refer to our implementation on GitHub1.
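The authors' full implementation is linked in the footnote below; as an illustration, Eq. (2) amounts to the following NumPy sketch (the function name and toy shape are ours, not from the paper).

```python
import numpy as np

def matched_gaussian_noise(n_adv, rng=None):
    """Draw Gaussian noise N_g with the same expected value, standard
    deviation, and shape as an adversarial perturbation N_a (Eq. (2))."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(loc=n_adv.mean(),    # mu_a = mean of N_a
                      scale=n_adv.std(),   # Sigma_a = std of N_a
                      size=n_adv.shape)    # same shape as N_a

# Example with a stand-in 224x224x3 perturbation:
n_a = np.random.uniform(-0.05, 0.05, size=(224, 224, 3))
n_g = matched_gaussian_noise(n_a)
```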
The augmented input \(\mathbf{I}\) to the DNN then becomes:

Footnote 1: [https://github.com/JustusRen/Exploring-Adversarial-Attacks-on-Neural-Networks](https://github.com/JustusRen/Exploring-Adversarial-Attacks-on-Neural-Networks)

\[\mathbf{I}=\{\mathbf{S},\mathbf{S}+\mathbf{N}_{a},\mathbf{S}+\mathbf{N}_{g}\} \tag{3}\]

where \(\mathbf{S}+\mathbf{N}_{a}\) and \(\mathbf{S}+\mathbf{N}_{g}\) are the adversarial example and the noisy version of the seed image, respectively.

### _Network Behavior Deviation Detection_
For a given convolutional layer \(l\), its response to an input image, e.g., a seed image, can be visualized using its Grad-CAM heatmap, defined as:

\[\mathbf{H}(\mathbf{S})=R\left[sign\left(\frac{\partial\mathbf{J}(\mathbf{S})}{\partial\theta_{l}}\right)\right] \tag{4}\]

where \(\mathbf{J}(\mathbf{S})\) denotes the classification loss given \(\mathbf{S}\), and \(\theta_{l}\) denotes the parameters of layer \(l\). The function \(R(\cdot)\) denotes the operation that reshapes and interpolates the derived gradients to the same dimension and size as \(\mathbf{S}\). The Grad-CAM heatmap displays the focal areas of \(l\) in \(\mathbf{S}\). Therefore, the degree of behavioral deviation of the heatmap between a perturbed sample and the natural image can be used to quantify whether the layer is compromised. We use cosine similarity to calculate the degree of behavioral deviation:

\[\mathbf{G}[\mathbf{S_{1}},\mathbf{S_{2}}]=\frac{vec(\mathbf{H}(\mathbf{S_{1}}))\cdot vec(\mathbf{H}(\mathbf{S_{2}}))}{\|vec(\mathbf{H}(\mathbf{S_{1}}))\|\cdot\|vec(\mathbf{H}(\mathbf{S_{2}}))\|} \tag{5}\]

where \(\mathbf{H}(\mathbf{S_{1}})\) and \(\mathbf{H}(\mathbf{S_{2}})\) denote Grad-CAM heatmaps of different images, vectorized and normalized for the similarity calculation. For a specific layer and a seed image \(\mathbf{S}\), we calculate and compare two degrees of behavioral deviation:

\[\mathbf{D}_{a}=\mathbf{G}[\mathbf{S},\mathbf{S}+\mathbf{N}_{a}] \tag{6}\]

\[\mathbf{D}_{g}=\mathbf{G}[\mathbf{S},\mathbf{S}+\mathbf{N}_{g}] \tag{7}\]

Our observations revealed that both Gaussian noise and adversarial perturbations can cause behavioral deviations. For each layer, we use the median of its degree of behavioral deviation under Gaussian noise as the threshold to decide whether it is compromised, i.e., whether its behavior deviates more severely than under the same amount and strength of Gaussian noise.
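Eqs. (5)-(7) reduce to a straightforward computation; the following is a minimal sketch, assuming heatmaps are available as NumPy arrays and using `grad_cam` as a placeholder for any Grad-CAM implementation that returns a heatmap for an (image, layer) pair.

```python
import numpy as np

def heatmap_cosine_similarity(h1, h2):
    """Cosine similarity between two Grad-CAM heatmaps (Eq. (5))."""
    v1, v2 = np.ravel(h1), np.ravel(h2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Degrees of behavioral deviation for one layer l and seed image s
# (Eqs. (6)-(7)); grad_cam is a hypothetical placeholder here:
# d_a = heatmap_cosine_similarity(grad_cam(s, l), grad_cam(s + n_a, l))
# d_g = heatmap_cosine_similarity(grad_cam(s, l), grad_cam(s + n_g, l))
```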
### _DLFuzz for Adversarial Example Generation_
We use DLFuzz [10] to calculate the perturbations applied to images from the ImageNet [34] dataset. Deriving the additive perturbations is treated as an optimization problem:

\[\underset{\mathbf{N}_{a},\ ||\mathbf{N}_{a}||\leq\delta}{\text{argmax}}\left[\sum_{i=1}^{K}c_{i}-c+\lambda\sum_{i=0}^{m}n_{i}\right] \tag{8}\]

where \(\delta\) restricts the magnitude of the adversarial perturbation, \(c\) is the prediction on a given seed image, \(c_{i}\) is one of the top \(K\) candidate predictions, \(n_{i}\) is one of the activation values of the \(m\) selected neurons, and \(\lambda\) is a constant that balances activating more neurons against obtaining adversarial examples. Mathematically, DLFuzz manipulates the perturbations to mislead the tested network into making wrong predictions while simultaneously activating the selected neurons. By maximizing neuron coverage, DLFuzz ensures that adversarial examples are generated more efficiently during a test, based on the assumption that this triggers more logic in the network and thus provokes and detects more erroneous behavior [2]. The underlying DNN we selected is the VGG-16 [1] model pre-trained on the ImageNet dataset. After obtaining an effective adversarial perturbation for each seed, we amplify the perturbations by a total of five different ratios, defined as the perturbation strength: 25%, 50%, 100%, 200%, and 400%.

## IV Evaluation and Discussion
In this section, we evaluate the behavior deviation of a pre-trained VGG-16 network under adversarial examples and Gaussian noise.

### _Seed Selection_
The ImageNet dataset contains more than 10 million images in 1,000 categories. We randomly pick five images from each category to derive their adversarial examples. Unfortunately, not all images yield corresponding adversarial examples within the pre-defined magnitude of perturbation. Ultimately, we derived at least one adversarial example from 48% of the randomly collected images.

### _Neuron Coverage under Different Attacks_
Figure 2(a) shows the progression of the neuron coverage of the model using images with adversarial noise and random noise. It is noticeable that neuron coverage increases slightly faster using adversarial perturbations, although DLFuzz can also increase neuron coverage using random perturbations. Furthermore, Figure 2(b) shows how neuron coverage increases when different perturbation strengths are applied to the seed images. In general, the strength of the perturbation has a slight effect on neuron coverage: the weaker the perturbation, the slower the neuron coverage increases.

### _Network Vulnerability Analysis_
We used the Grad-CAM heatmaps of each layer under natural images as references and used cosine similarities to measure their behavior deviations under different attacks, as shown in Figure 3. A more detailed statistical analysis is presented in Figure 4(a), where the Grad-CAM responses of the neural network under adversarial examples deviate severely. Under adversarial examples, each layer's heatmap cosine similarities to the natural image have lower mean values and greater variances than the similarities under noisy images. In particular, the two layers Block4_Conv1 and Block5_Conv1 are considered easier to compromise, because their responses deviate more significantly under adversarial noise, showing a lower average cosine similarity than when the network receives images with pure Gaussian noise. Interestingly, Gaussian noise also causes network behavior to deviate from that on clean images. As shown in Figure 4(b), although we observe lower cosine similarities, the response deviation is not as significant as under adversarial examples.

### _Behavior Drift under Different Attack Strengths_
As mentioned, both Gaussian noise and adversarial perturbations can cause deviations in network behavior, but adversarial perturbations drift the network behavior more and play a significant role in adversarial attacks. As given in Equation 2, the Gaussian random perturbations have statistical properties consistent with the adversarial perturbations.
According to Figures 5(a) and 5(b), the comparison of cosine similarities of Grad-CAM heatmaps indicates that adversarial inputs have lower overall values (i.e., more severe behavior deviations) than noisy inputs at all attack levels. When we adjust the attack strength, the cosine similarities on adversarial inputs decrease more: the Grad-CAM heatmap cosine similarities decrease significantly, indicating more severe behavioral deviations, when the adversarial perturbations are amplified 2 and 4 times. Comparably, although the network's averaged behavior also deviates under Gaussian-noisy images, the average magnitude of the deviation is not as significant as under adversarial perturbations. Although it is generally believed that a stronger adversarial attack causes the behavior of the neural network to deviate further, our experiments show that the behavioral deviation is still significant when the attack strength is reduced to 0.25: the three curves for attack strengths 0.25, 2, and 4 are very similar. Interestingly, the behavioral deviations at attack strengths 0.5 and 1 are almost identical. In general, more significant behavioral deviations can be observed in deeper layers of the network, as shown in Figures 4(a) and 4(b). However, we do notice some fluctuations, which indicates that some specific layers are more vulnerable and easier to compromise.

Fig. 2: Comparison of neuron coverage optimization with random perturbations, targeted perturbations, and different perturbation strengths.

Fig. 3: Grad-CAM heatmaps of VGG-16 under different inputs. Red and green rectangles highlight the convolutional layers with significant behavior drifts.

### _Distributions of the Number of Compromised Layers_
We also discover that the VGG-16 network can be misled by compromising only a few intermediate layers. We analyze the probability of compromise for each network layer and derive Figure 6. The statistical distribution of the number of compromised layers over all adversarial examples is shown in Figure 7. As discussed above, perturbations driven by Gaussian noise can cause behavior drift but do not necessarily mislead the classifier. As mentioned in Section III-C, we define a threshold for each convolutional layer at each perturbation level; this threshold equals the median behavior deviation when the layer processes the input image with pure Gaussian noise. Intuitively, a layer is called compromised when its behavior under adversarial examples deviates further than it would under the same input mixed with Gaussian noise.

Fig. 4: Comparison of behavioral deviations under (a) adversarial examples and (b) random noise for the VGG-16 convolutional layers (the x-axis label 1-1 represents the Block1_Conv1 layer in VGG-16).

Fig. 5: Comparison of average behavioral deviation on (a) adversarial examples and (b) images with Gaussian noise at different attack strengths (the x-axis label 1-1 represents the Block1_Conv1 layer in VGG-16).

Fig. 6: Compromise probability of convolutional layers in the VGG-16 network.

As shown in Figure 6, the adversarial examples with the lowest attack strength have approximately a 40% chance of compromising any layer. This probability increases significantly and reaches 80% when we increase the attack strength to greater than 2 or reduce it to less than 0.25. In Figure 7, we count the number of convolutional layers that have been compromised for all adversarial examples.
We discover that some ultimate adversarial examples can compromise all convolutional layers in the VGG-16 network. The chance of obtaining such ultimate adversarial examples increases when the attack strength is greater than 2 or less than 0.25. In addition, some adversarial samples cause misclassification while compromising zero or only a small number of layers.

## V Conclusion
In this work, we generated adversarial examples using DLFuzz. Through Grad-CAM, we were able to analyze the decision-making procedure of the VGG-16 network layer by layer. In doing so, we showed that, compared to Gaussian random noise, intentionally generated adversarial perturbations cause more severe behavioral deviations. Furthermore, we showed that in many cases only a few intermediate blocks of a DNN need to be compromised in order to manipulate the final decision. Finally, we demonstrated that, in particular, the layers \(Block4\_conv1\) and \(Block5\_conv1\) of the VGG-16 model are more susceptible to adversarial attacks.

## VI Future Work
Using our proposed approach, it is possible to find vulnerable layers in a DNN model. In future work, we want to explore the use of a zero-bias layer [35] to create more robust DNNs. In addition, it would be interesting to investigate how the focal point changes during the decision-making process of the DNN, and to what extent this correlates with the deliberate perturbations.

## Acknowledgment
This research was supported in part by the Air Force Research Laboratory Information Directorate, through the Air Force Office of Scientific Research Summer Faculty Fellowship Program, Contract Numbers FA8750-15-3-6003, FA9550-15-0001 and FA9550-20-F-0005. This research was also partially supported by the National Science Foundation under Grant No. 2150213.
2304.10191
Efficient Uncertainty Estimation in Spiking Neural Networks via MC-dropout
Spiking neural networks (SNNs) have gained attention as models of the sparse and event-driven communication of biological neurons, and as such have shown increasing promise for energy-efficient applications in neuromorphic hardware. As with classical artificial neural networks (ANNs), predictive uncertainties are important for decision making in high-stakes applications, such as autonomous vehicles, medical diagnosis, and high-frequency trading. Yet, discussion of uncertainty estimation in SNNs is limited, and approaches for uncertainty estimation in ANNs are not directly applicable to SNNs. Here, we propose an efficient Monte Carlo (MC)-dropout based approach for uncertainty estimation in SNNs. Our approach exploits the time-step mechanism of SNNs to enable MC-dropout in a computationally efficient manner, without introducing significant overheads during training and inference, while demonstrating high accuracy and uncertainty quality.
Tao Sun, Bojian Yin, Sander Bohte
2023-04-20T10:05:57Z
http://arxiv.org/abs/2304.10191v1
# Efficient Uncertainty Estimation in Spiking Neural Networks via MC-dropout

###### Abstract
Spiking neural networks (SNNs) have gained attention as models of the sparse and event-driven communication of biological neurons, and as such have shown increasing promise for energy-efficient applications in neuromorphic hardware. As with classical artificial neural networks (ANNs), predictive uncertainties are important for decision making in high-stakes applications, such as autonomous vehicles, medical diagnosis, and high-frequency trading. Yet, discussion of uncertainty estimation in SNNs is limited, and approaches for uncertainty estimation in ANNs are not directly applicable to SNNs. Here, we propose an efficient Monte Carlo (MC)-dropout based approach for uncertainty estimation in SNNs. Our approach exploits the time-step mechanism of SNNs to enable MC-dropout in a computationally efficient manner, without introducing significant overheads during training and inference, while demonstrating high accuracy and uncertainty quality.

Keywords: Spiking Neural Network · Uncertainty Estimation · MC-dropout.

## 1 Introduction
Inspired by the brain's event-driven and sparse communication, spiking neural networks (SNNs) are enabling applications with high energy efficiency in the form of neuromorphic computing [21]. Analogous to biological neurons, spiking neurons in SNNs communicate using discrete spikes, and time stepping is typically used to account for the evolution of these neurons' internal state in response to impinging and emitted spikes. With recent advances in architectures and training methods, SNNs now achieve performance comparable to their artificial neural network (ANN) counterparts in many tasks [25, 26, 3]. To employ SNNs in the real world, however, accurate predictions have to be paired with high-quality uncertainty estimation to enable decision making in high-stakes applications such as autonomous vehicles, medical diagnosis, and high-frequency trading [4]: uncertain predictions in these applications may need to be reviewed by human experts for final decisions. In ANNs, predictive uncertainties in classification models are commonly represented by predictive distributions [13]. While evidence suggests that the brain performs a form of Bayesian inference based on uncertainty representations [18], the literature on uncertainty in SNNs is relatively limited and primarily concentrates on the sampling of probabilistic distributions, typically from a neuroscience perspective [20, 12]. Approaches for uncertainty estimation in classical deep learning models can be divided into two groups: deterministic methods and Bayesian methods [6]. With a deterministic method, a model learned from training data is essentially a point estimate of the model's parameters. In a deterministic deep network, each predictive distribution is estimated by a single forward propagation followed by the softmax function. Yet, although it is feasible to infer uncertainty with deterministic methods, these methods are known to be prone to producing overconfident estimates [13, 6]. In contrast, a Bayesian network learns the posterior distribution of the parameters of the network rather than depending on a single setting of parameters. The probability outputs of a Bayesian method can be analytically obtained by marginalizing the likelihood of the input with the estimated posterior distribution; this, however, is generally intractable.
To tackle this issue, many approximation methods and non-Bayesian methods have been introduced [6]. Examples of these methods, like Monte-Carlo dropout (MC-dropout) [5] and deep ensembles [13], achieve excellent performance in terms of uncertainty estimation quality, either by repeatedly carrying out inference for each sample in perturbed versions of the network, or by training a collection of networks and then carrying out inference in each network. Here, we propose an efficient uncertainty estimation approach for SNNs that exploits their time-step mechanism. Specifically, we apply continual MC-dropout in SNNs by taking their outputs averaged over time steps as predictive distributions, and we train these SNNs with a loss function that also involves their time steps: **A**verage-**O**ver-**T**ime-SNNs (AOT-SNNs, Figure 1).

Figure 1: (a) In ANNs, MC-dropout is performed by averaging results over a pre-defined number (\(M\)) of forward passes through a dropout-enabled network. (b) In AOT-SNNs, inference at each time step is taken as functionally equivalent to a forward pass in the MC-dropout method. As the SNN network evaluation requires \(M\) time steps already, only one effective forward pass is needed.

In AOT-SNNs, we take the inference at each time step as functionally equivalent to a forward pass in the standard MC-dropout method. Since only one forward pass is needed at inference, the computational overhead of AOT-SNNs is significantly reduced relative to the MC-dropout method while still allowing effective uncertainty estimation. We compare the performance of AOT-SNNs with more standard SNNs, as well as with SNNs using the classical MC-dropout approach and SNN ensembles, across multiple classification tasks. We demonstrate that for identical network architectures, AOT-SNNs achieve a significant gain over more standard SNNs in both accuracy and uncertainty quality while being much more computationally efficient.

## 2 Background
### Problem Setup
We assume a training dataset \(\mathcal{D}\) that consists of \(N\) i.i.d. data points, \(\mathcal{D}=\{\mathbf{X},\mathbf{Y}\}=\{\mathbf{x}_{n},y_{n}\}_{n=1}^{N}\), where \(\mathbf{x}_{n}\in\mathbb{R}^{d}\) and the true label \(y_{n}\in\{1,\dots,K\}\). Given a sample \(\mathbf{x}_{n}\), a neural network outputs the probabilistic predictive distribution \(p_{\omega}(y_{n}|\mathbf{x}_{n})\), where \(\omega\) denotes the parameters of the network. A number of non-Bayesian methods achieving excellent performance in terms of uncertainty estimation have been proposed, among which are deep ensembles [13] and post-hoc calibration methods [10]. Deep ensembles are considered a "gold standard" for uncertainty estimation [24]: a set of models is trained with a proper scoring rule as the loss function, and at inference time the outputs of all models are combined to obtain a predictive distribution. Post-hoc calibration methods, such as temperature scaling [10], re-calibrate probabilities using a validation dataset and achieve excellent calibration performance on the i.i.d. test dataset.

### Bayesian Neural Networks and MC-Dropout Approximation
In a Bayesian neural network, the predictive distribution for a sample \(\mathbf{x}\) is given by:

\[p(y|\mathbf{x},\mathcal{D})=\int p(y|\mathbf{x},\omega)p(\omega|\mathcal{D})d\omega. \tag{1}\]
The posterior distribution \(p(\omega|\mathcal{D})\), or \(p(\omega|\mathbf{X},\mathbf{Y})\), of the parameters \(\omega\) can be computed by applying Bayes' theorem:

\[p(\omega|\mathbf{X},\mathbf{Y})=\frac{p(\mathbf{Y}|\mathbf{X},\omega)p(\omega)}{p(\mathbf{Y}|\mathbf{X})}. \tag{2}\]

Due to the intractability of the normalizer in (2), the posterior distribution \(p(\omega|\mathcal{D})\) and the predictive distribution \(p(y|\mathbf{x},\mathcal{D})\) usually cannot be evaluated analytically. A variety of approximation methods have been introduced to tackle this issue [14, 9]. One such approximation is the MC-dropout method, which is often taken as a baseline model in uncertainty estimation [13, 17] due to its feasibility and relatively good performance. Dropout [22] is a simple but effective technique used in deep learning models to prevent overfitting. In the MC-dropout method, dropout is applied before each weight layer of a neural network in both **training** and **testing**. The predictive distribution under MC-dropout is computed by averaging the results of a predefined number of forward passes through the dropout-enabled network. Gal & Ghahramani [5] showed that neural networks with this configuration can be viewed as an approximation to a Bayesian method in the form of _deep Gaussian processes_ [2]. Both MC-dropout models and deep ensembles involve multiple forward propagation passes at inference. As a result, when naively applied to SNNs, the computational and energy costs become relatively high due to the necessity of running SNNs repeatedly during inference.

### Source and Quality of Predictive Uncertainty
The only source of predictive uncertainty in deterministic methods is the noisy data. Uncertainty in a Bayesian method comes from both the data and defects of the model itself [6]: uncertainty caused by data is referred to as _data uncertainty_, while uncertainty caused by defects of the model itself is referred to as _model uncertainty_. The quality of predictive uncertainties can be measured in two respects [13]. The first concerns uncertainty quality on in-distribution data, where test data and training data share the same distribution. The second evaluates the generalization of uncertainty on domain-shifted data. While certain post-hoc calibration methods may generate accurate predictive probabilities for i.i.d. data, their effectiveness in predicting uncertainty for domain-shifted data is not ensured [17]. In both cases, model calibration is examined as the indicator of uncertainty quality [17]. For classification tasks, accuracy and calibration are two mutually orthogonal evaluation measures [13]. Accuracy, defined as the ratio of correctly classified examples to the total number of examples, measures how often a model classifies correctly; calibration measures the quality of the predictive probability distributions [13] and indicates the extent to which the probability of a predicted class label reflects the true likelihood of correctness. One class of calibration metrics is the _proper scoring rules_ [8], which include the Brier score (BS) and negative log-likelihood (NLL); another calibration metric is the _Expected Calibration Error_ (ECE) [10], a scalar summary statistic of calibration that approximates miscalibration. Although the definition of ECE is intuitive and it is therefore widely used, it is not a perfect calibration metric because optimal ECE values can be produced by trivial solutions [17]; see the Appendix for details on proper scoring rules and ECE.
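For reference, these metrics can be computed from predictive distributions as in the minimal sketch below. The 15-bin ECE scheme and the per-class averaging of the Brier score are our assumptions (the latter matches the scale of the BS values reported in the experiments later in this paper).

```python
import numpy as np

def uncertainty_metrics(probs, labels, n_bins=15):
    """Accuracy, Brier score (BS), NLL, and a binned ECE estimate.

    probs  : predictive distributions, shape (N, K)
    labels : integer class labels, shape (N,)
    BS here averages the squared error over classes as well as
    samples (an assumption matching the scale of values reported
    below); the 15-bin ECE scheme is a common, assumed choice.
    """
    n, k = probs.shape
    onehot = np.eye(k)[labels]
    pred = probs.argmax(axis=1)
    acc = np.mean(pred == labels)
    bs = np.mean((probs - onehot) ** 2)
    nll = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # ECE: bin predictions by confidence, then average the gap
    # between per-bin accuracy and per-bin confidence.
    conf = probs.max(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean()
                                       - conf[in_bin].mean())
    return {"accuracy": acc, "BS": bs, "NLL": nll, "ECE": ece}

# Example with two toy predictions over K = 2 classes:
probs = np.array([[0.9, 0.1], [0.3, 0.7]])
print(uncertainty_metrics(probs, np.array([0, 1])))
```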
### SNNs
SNNs typically work with the same types of network topologies as ANNs, but computation in SNNs is distinct: SNNs use stateful, binary-valued spiking neurons rather than the stateless, analog-valued neurons of ANNs. As a result, unlike the synchronous computation in ANNs, inference in SNNs proceeds iteratively over multiple time steps \(t=0,1,...,T\): at each time step \(t\), the membrane potential \(U(t)\) of a spiking neuron is affected by the impinging spikes emitted at time step \(t-1\) by connected neurons, and by the past potential \(U(t-1)\). Once the membrane potential \(U(t)\) reaches a threshold \(\theta\), the neuron itself emits a spike. Such sparse and asynchronous communication between connected neurons is key to enabling SNNs to achieve high energy efficiency.

#### 2.4.1 LIF Neurons
Various spiking neuron models exist, ranging in complexity from the detailed Hodgkin-Huxley model to the simplified Leaky-Integrate-and-Fire (LIF) neuron model [7]. The latter is widely used in SNNs, as it is interpretable and computationally efficient. Resembling an RC circuit, the LIF neuron model is represented as:

\[\tau\frac{dU}{dt}=-U+RI, \tag{3}\]

where \(I\) and \(R\) are the input current and resistance, and \(\tau\) is the time constant of the circuit. The discrete approximation of (3) can be written as:

\[u_{i}^{t}=\lambda u_{i}^{t-1}+\sum_{j}w_{ij}s_{j}^{t}-s_{i}^{t-1}\theta, \tag{4}\]

\[s_{i}^{t}=\left\{\begin{array}{ll}1,&\mbox{if $u_{i}^{t}>\theta$}\\ 0,&\mbox{otherwise}\end{array}\right. \tag{5}\]

where \(u_{i}\) is the membrane potential of neuron \(i\), \(\lambda\) denotes the leak constant (\(<1\)) of the membrane potential, \(w_{ij}\) represents the weight connecting neuron \(i\) and its presynaptic neuron \(j\), and \(s_{i}\) indicates whether a neuron spikes. With the introduction of surrogate gradient methods [16, 25] and learnable LIF neurons [25, 3], both the trainability and the performance of SNNs have improved dramatically.
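The discrete LIF update of Eqs. (4)-(5) amounts to a few lines of array arithmetic; the following is a minimal NumPy sketch for one layer of neurons (function name and toy values are illustrative).

```python
import numpy as np

def lif_step(u_prev, s_prev, spikes_in, w, lam=0.9, theta=1.0):
    """One discrete LIF update (Eqs. (4)-(5)) for a layer of neurons.

    u_prev    : membrane potentials at time step t-1
    s_prev    : binary spike outputs at t-1 (drives the reset term)
    spikes_in : binary spikes from presynaptic neurons at t
    w         : weight matrix, shape (n_post, n_pre)
    lam       : leak constant (< 1); theta : firing threshold
    """
    u = lam * u_prev + w @ spikes_in - s_prev * theta  # Eq. (4)
    s = (u > theta).astype(float)                      # Eq. (5)
    return u, s

# Example: 4 postsynaptic neurons driven by 3 presynaptic spikes
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 3))
u, s = lif_step(np.zeros(4), np.zeros(4), np.array([1., 0., 1.]), w)
```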
## 3 Methods
Here, we present our proposed AOT-SNNs. We first explain how we efficiently apply MC-dropout to SNNs, and then introduce the loss function used in AOT-SNNs, which is based on the mean output values over time steps. Lastly, we explain the network architecture we use to demonstrate AOT-SNNs in practice.

### Efficient MC-dropout in SNNs
As noted, the standard MC-dropout method runs a test sample multiple times through a model with dropout enabled and takes the average output of these forward passes as the final predictive distribution. Applied in this way, the MC-dropout method achieves satisfactory predictive uncertainty in ANNs. In principle, MC-dropout can be applied directly to SNNs as an _MC-dropout SNN_. This, however, results in inefficient inference, mainly due to the time-step mechanism of SNNs: as each neuron's activation in an SNN is a continuous process over time, an SNN typically has to be run for multiple time steps to perform inference. Naively performing MC-dropout inference of a single sample in an SNN would mean running multiple forward passes through the network, where each individual pass itself entails the evaluation of multiple time steps. This is obviously computationally expensive. As an alternative, we propose to leverage the SNN time-step mechanism by enabling MC-dropout in AOT-SNNs during a single evaluation. Specifically, we compute predictive distributions in a dropout-enabled AOT-SNN by averaging the outputs of its multiple time steps. In this view, each time step in an AOT-SNN is weakly equivalent to a forward pass in the standard MC-dropout method. This approach requires only one forward pass during inference and thus significantly lowers computational costs.

### Loss Function
The loss functions in many current high-performing SNN learning algorithms [25, 3, 19, 27] are computed on the output values of the last time step; we refer to such loss functions as the _last-time-step_ loss, resulting in Last-Time-Step-SNNs (_LTS-SNNs_). The last-time-step loss is written as:

\[L=l(T) \tag{6}\]

where \(l(t)\) is the loss function computed on the output values at time step \(t\). Since the last-time-step loss is not compatible with the proposed uncertainty estimation approach in AOT-SNNs, we introduce the _average-over-time_ loss, which averages over multiple time steps:

\[L=\frac{1}{T}\sum_{t=1}^{T}l(t). \tag{7}\]

By combining the average-over-time loss with MC-dropout, we expect the quality of uncertainty estimation of our approach to improve, particularly in comparison to LTS-SNNs trained with the last-time-step loss, as the AOT loss pushes SNNs to classify correctly as much as possible at every time step, while LTS-SNNs do not. For \(l(t)\), either the negative log-likelihood (NLL) loss or the mean squared error (MSE) loss [3] can be used. Here, we use the MSE loss, as we find that in practice the NLL loss causes a disconnect between NLL and accuracy, which is an indication of miscalibration [10].

### Network Architecture
We use AOT-SNNs with a network architecture very similar to the high-performing PLIF networks in [3]. These networks are composed of a _spiking encoder network_ and a _classifier network_. The spiking encoder network consists of multiple downsampling modules. Each downsampling module has a certain number of convolution blocks and a pooling layer (\(kernel\,size=2,stride=2\)). A convolution block is composed of a convolution layer (\(kernel\,size=3,stride=1,padding=1\)), a batch normalization layer, and a spiking neuron layer. Our classifier network is slightly modified from [3] and includes a fully-connected layer, a spiking neuron layer, and another fully-connected layer, followed by a readout integrator layer. Unlike the original PLIF networks, which classify using relatively coarse summed rate coding collected from a population of output neurons, the probabilities of AOT-SNNs are computed from the membrane potentials of the readout integrator neurons, as in [25]. This modification enables AOT-SNNs to achieve better uncertainty estimation performance than corresponding standard PLIF networks while obtaining similar accuracy. In the spiking neuron layers, PLIF neurons [3] are used, whose time constants \(\tau\) are learned and shared by neurons within the same layer. Note that dropout is applied to the neurons' output spikes, and input data is injected directly into the network as current into the input neurons.
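To make the contrast with Figure 1 concrete, the following sketch compares standard MC-dropout inference (\(M\) dropout-enabled passes, each spanning all time steps) with AOT-SNN inference (one pass, averaging over time steps) and the average-over-time MSE loss of Eq. (7). The callables `run_snn` and `run_snn_step` are hypothetical placeholders for a dropout-enabled SNN forward pass and for one SNN time step returning class probabilities derived from the readout membrane potentials.

```python
import numpy as np

def mc_dropout_predict(run_snn, x, num_passes):
    """Standard MC-dropout (Fig. 1a): average over `num_passes`
    dropout-enabled forward passes, each of which internally
    simulates all T time steps of the SNN -- the expensive scheme."""
    return np.mean([run_snn(x) for _ in range(num_passes)], axis=0)

def aot_predict_and_loss(run_snn_step, x, y_onehot, num_steps):
    """AOT-SNN (Fig. 1b): a single dropout-enabled pass. The per-step
    probabilities are averaged for the prediction, and the MSE loss
    is averaged over time steps as in Eq. (7)."""
    step_probs = [run_snn_step(x, t) for t in range(num_steps)]
    prediction = np.mean(step_probs, axis=0)
    loss = np.mean([np.mean((p - y_onehot) ** 2) for p in step_probs])
    return prediction, loss
```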
## 4 Experiments
We performed a series of experiments to compare AOT-SNNs to LTS-SNNs, as well as to MC-dropout SNNs and the "gold standard" of SNN ensembles, across multiple classification tasks. As a proof of concept, we first applied this approach to the MNIST dataset. Second, we experimented on the CIFAR-10 dataset to compare our models with the corresponding LTS-SNNs. Additionally, we report and analyze results on the CIFAR-100 dataset. Furthermore, we carried out an ablation study in which we characterized the uncertainty properties of AOT-SNNs with regard to dropout rates and dropout types.

### Experimental Setup
In our experiments, the LTS-SNNs used the same layer structure as their corresponding AOT-SNNs. They differ in that the LTS-SNNs used the predictive distribution output at the last time step and were trained with the last-time-step loss. Note that dropout is not enabled during inference in LTS-SNNs; enabling dropout would lead to notably weak performance for LTS-SNNs, similar to that of ANNs. All the MC-dropout SNNs and SNN ensembles are based on their corresponding LTS-SNNs. The Adam optimizer was used with a cosine annealing learning rate scheduler, whose initial learning rate is 0.001 and \(T_{max}\) is 64. The default dropout rate is 0.5. For the MNIST dataset, we used a batch size of 150, while the batch sizes were 60 for CIFAR-10 and 15 for CIFAR-100. The numbers of epochs used were 200 (MNIST), 300 (CIFAR-10), and 300 (CIFAR-100).

### MNIST
The spiking encoder network for the MNIST dataset has two downsampling modules, each of which includes only one convolution block. In Table 1, we compare the AOT-SNN and its corresponding LTS-SNN, both using the best-performing models, which evaluate samples over eight time steps. The results demonstrate that the AOT-SNN outperforms the LTS-SNN in both accuracy and the predictive uncertainty metrics, including Brier score, NLL, and ECE.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & Accuracy (\%) \(\uparrow\) & BS \(\downarrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline AOT-SNN (8) & 99.54\(\pm\)0.030 & 7.0e-4\(\pm\)4.3e-5 & 0.0144\(\pm\)7.6e-4 & 0.0012\(\pm\)3.4e-4 \\ \hline LTS-SNN (8) & 99.37\(\pm\)0.080 & 1.0e-3\(\pm\)1.0e-4 & 0.021\(\pm\)0.0025 & 0.004\(\pm\)0.0011 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison between the AOT-SNN and its corresponding LTS-SNN on the MNIST dataset (mean\(\pm\)std across 5 trials). The numbers after the model names represent time steps.

For each time step of the two models, Figure 2 illustrates the Brier score and ECE, averaged over the entire test dataset. Note that for each time step of the AOT-SNN, the mean values of these two metrics over the preceding time steps are used.

Figure 2: Uncertainty performance for each time step of the AOT-SNN model and its corresponding LTS-SNN on the MNIST dataset. The metric values are averaged over the entire test dataset.

The graph illustrates that during the initial three time steps, the AOT-SNN's performance lags behind the LTS-SNN. However, from the fourth time step to the final one, the AOT-SNN demonstrates a significant improvement over the LTS-SNN in both uncertainty metrics. This improvement can be attributed to the change AOT-SNNs make relative to LTS-SNNs: averaging the outputs of dropout-enabled time steps and taking them as the final output.

### CIFAR-10 and CIFAR-100
The architectures of the AOT-SNNs for the CIFAR-10 and CIFAR-100 datasets are similar. They apply the same spiking encoder network, which has two downsampling modules, each with three convolution blocks. Their classifier networks differ only in the last fully-connected layer, due to the different numbers of ground-truth classes.
#### 4.3.1 CIFAR-10 held-out test dataset.

Table 2 presents a comparison of AOT-SNNs to LTS-SNNs, MC-dropout SNNs, and SNN ensembles. While each MC-dropout SNN ran five forward passes, each SNN ensemble consisted of five models. We show results for 4 and 8 time steps, corresponding to the respective best-performing durations (see also Table 3). AOT-SNNs exhibit superior performance compared to LTS-SNNs and achieve comparable accuracy to SNN ensembles, with slightly worse BS and NLL, underperforming mainly on ECE. In comparison to the MC-dropout SNNs, AOT-SNNs deliver superior accuracy and perform almost as well on BS and NLL, with only a slight loss in ECE.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Accuracy (\%) \(\uparrow\) & BS \(\downarrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ AOT-SNN (4, 1) & 90.2\(\pm\)0.26 & 0.0153\(\pm\)0.00030 & 0.38\(\pm\)0.012 & 0.040\(\pm\)0.0031 \\ AOT-SNN (8, 1) & 90.8\(\pm\)0.23 & 0.0144\(\pm\)0.00040 & 0.37\(\pm\)0.022 & 0.043\(\pm\)0.0041 \\ LTS-SNN (4, 1) & 88.9\(\pm\)0.71 & 0.017\(\pm\)0.0011 & 0.43\(\pm\)0.028 & 0.058\(\pm\)0.0044 \\ LTS-SNN (8, 1) & 88.5\(\pm\)0.60 & 0.0181\(\pm\)0.00081 & 0.47\(\pm\)0.013 & 0.067\(\pm\)0.0034 \\ MC-dropout SNN (4, 5) & 90.53\(\pm\)0.37 & 0.0140\(\pm\)0.00041 & 0.32\(\pm\)0.001 & 0.026\(\pm\)0.0030 \\ MC-dropout SNN (8, 5) & 90.43\(\pm\)0.37 & 0.0145\(\pm\)0.00053 & 0.35\(\pm\)0.013 & 0.037\(\pm\)0.0014 \\ SNN Ensembles (4, 5) & 90.9 & 0.0134 & 0.2919 & 0.012 \\ SNN Ensembles (8, 5) & 90.8 & 0.0135 & 0.2967 & 0.016 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison on the CIFAR-10 dataset between AOT-SNNs, LTS-SNNs, MC-dropout models, and deep ensembles (mean\(\pm\)std across 5 trials). The digits enclosed in brackets following the model names indicate the number of SNN time steps and the number of forward passes or models used in inference.

Table 3 presents the results of AOT-SNNs and LTS-SNNs with time steps smaller than or equal to 10. With each model trained five times, the table lists the mean and standard deviation for all the metrics. In this exhaustive comparison, we see that AOT-SNNs significantly outperform LTS-SNNs: all models with more than 3 time steps achieve significantly better accuracy and Brier score, with the best results at 8 time steps. Moreover, almost all AOT-SNNs achieve better NLL and ECE, except in comparison to the LTS-SNN with a single time step (which, however, has considerably lower accuracy). This is in line with the findings for MNIST in Figure 2.

**CIFAR-100.** Comparing the AOT-SNN with eight time steps to its corresponding LTS-SNN for CIFAR-100 (Table 4), we similarly find that the AOT-SNN achieves significantly better results than the LTS-SNN, in both accuracy and predictive uncertainty quality.

**CIFAR-10-C: domain-shifted test dataset.** As mentioned earlier, the quality of predictive uncertainties needs to be measured on both in-distribution held-out data and domain-shifted data. We evaluated AOT-SNNs on the CIFAR-10-C dataset [11], a domain-shifted test dataset of CIFAR-10.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & Time steps & Accuracy (\%) \(\uparrow\) & BS \(\downarrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline AOT-SNN & 2 & 89.4\(\pm\)0.18 & 0.0168\(\pm\)0.00014 & 0.417\(\pm\)0.0061 & 0.047\(\pm\)0.0023 \\ AOT-SNN & 3 & 89.7\(\pm\)0.26 & 0.0160\(\pm\)0.00027 & 0.40\(\pm\)0.021 & 0.044\(\pm\)0.0042 \\ AOT-SNN & 4 & 90.2\(\pm\)0.26 & 0.0153\(\pm\)0.00030 & 0.38\(\pm\)0.012 & 0.040\(\pm\)0.0031 \\ AOT-SNN & 5 & 90.4\(\pm\)0.07 & 0.0150\(\pm\)0.00024 & 0.39\(\pm\)0.026 & 0.043\(\pm\)0.0036 \\ AOT-SNN & 6 & 90.5\(\pm\)0.16 & 0.0149\(\pm\)0.00028 & 0.38\(\pm\)0.017 & 0.043\(\pm\)0.0030 \\ AOT-SNN & 7 & 90.2\(\pm\)0.34 & 0.0151\(\pm\)0.00043 & 0.37\(\pm\)0.012 & 0.043\(\pm\)0.0019 \\ AOT-SNN & 8 & 90.8\(\pm\)0.23 & 0.0144\(\pm\)0.00040 & 0.37\(\pm\)0.022 & 0.043\(\pm\)0.0041 \\ AOT-SNN & 9 & 90.5\(\pm\)0.55 & 0.0147\(\pm\)0.00073 & 0.37\(\pm\)0.024 & 0.044\(\pm\)0.0041 \\ AOT-SNN & 10 & 90.7\(\pm\)0.41 & 0.0146\(\pm\)0.00062 & 0.37\(\pm\)0.024 & 0.044\(\pm\)0.0052 \\ \hline LTS-SNN & 1 & 88.2\(\pm\)0.47 & 0.017\(\pm\)0.00068 & 0.36\(\pm\)0.013 & 0.0138\(\pm\)0.0034 \\ LTS-SNN & 2 & 88.6\(\pm\)0.40 & 0.0180\(\pm\)0.00031 & 0.46\(\pm\)0.0085 & 0.067\(\pm\)0.0055 \\ LTS-SNN & 3 & 88.0\(\pm\)0.56 & 0.0184\(\pm\)0.00076 & 0.44\(\pm\)0.023 & 0.060\(\pm\)0.0030 \\ LTS-SNN & 4 & 88.9\(\pm\)0.71 & 0.017\(\pm\)0.0011 & 0.43\(\pm\)0.028 & 0.058\(\pm\)0.0044 \\ LTS-SNN & 5 & 88.4\(\pm\)0.27 & 0.0181\(\pm\)0.00047 & 0.46\(\pm\)0.016 & 0.063\(\pm\)0.0031 \\ LTS-SNN & 7 & 88.3\(\pm\)1.12 & 0.018\(\pm\)0.0014 & 0.48\(\pm\)0.026 & 0.068\(\pm\)0.0062 \\ LTS-SNN & 8 & 88.5\(\pm\)0.60 & 0.0181\(\pm\)0.00081 & 0.47\(\pm\)0.013 & 0.067\(\pm\)0.0034 \\ LTS-SNN & 9 & 88.0\(\pm\)0.52 & 0.0189\(\pm\)0.00082 & 0.49\(\pm\)0.025 & 0.069\(\pm\)0.0036 \\ LTS-SNN & 10 & 88.0\(\pm\)0.91 & 0.019\(\pm\)0.0015 & 0.49\(\pm\)0.046 & 0.069\(\pm\)0.0062 \\ \hline \hline \end{tabular} \end{table} Table 3: In-distribution performance comparisons between AOT-SNNs and LTS-SNNs on CIFAR-10 (mean\(\pm\)std across 5 trials).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & Time steps & Accuracy (\%) \(\uparrow\) & BS \(\downarrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline AOT-SNN & 8 & 65.15 & 0.005028 & 1.6749 & 0.1352 \\ LTS-SNN & 8 & 62.32 & 0.005333 & 1.7325 & 0.1665 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparisons between the AOT-SNN and the corresponding LTS-SNN on the CIFAR-100 dataset.

The CIFAR-10-C dataset is designed to evaluate the robustness of image classification models against common corruptions. It contains 19 corruption types, each applied at 5 severity levels to the original CIFAR-10 test set. The CIFAR-10-C dataset is commonly used as a benchmark to evaluate uncertainty estimation in domain-shifted settings [17]. We compared the performance of the AOT-SNN with eight time steps, its corresponding LTS-SNN, and the temperature scaling method that re-calibrates the probabilities output by the LTS-SNN, on all the severity levels of CIFAR-10-C (Figure 3).

Figure 3: Comparisons of the AOT-SNN model, its corresponding LTS-SNN, and the post-hoc calibrated version of the LTS-SNN on each severity level of CIFAR-10-C.

The results show that the AOT-SNN outperforms the LTS-SNN at all severity levels. Furthermore, at severity levels one to four, the AOT-SNN achieves better performance than the post-hoc calibrated results, while at level five, the AOT-SNN has comparable performance.
Together, this shows that AOT-SNNs improve uncertainty estimation over both LTS-SNNs and the temperature scaling method, also in domain-shifted settings.

#### 4.2.2 Ablation study.

We further considered the impact of dropout rates and dropout types on the quality of the uncertainty estimates of AOT-SNNs.

Dropout type. We replaced the dropout in the LTS-SNN and our best-performing model, both of which have eight time steps, with DropConnect [23]. Instead of dropping the spikes like regular dropout, DropConnect randomly drops the weights in each layer before the PLIF neuron layer. As shown in Table 5, despite the slightly better performance of the LTS-SNN-DC compared to the corresponding dropout-based model (LTS-SNN), the AOT-SNN-DC outperforms the LTS-SNN-DC in terms of both accuracy and uncertainty quality (the models in the table all have a dropout rate of 0.5). This observation suggests that DropConnect may fulfill the same function as regular dropout in AOT-SNNs.

Dropout rate. To investigate the impact of the dropout rate on performance, we tested AOT-SNNs with dropout rates ranging from 0.1 to 0.9 in increments of 0.1. These experiments were based on our best-performing model of eight time steps and trained on the CIFAR-10 dataset separately for each dropout rate. The accuracy and Brier score are plotted in Figure 4. The trends in accuracy and Brier score are consistent: models with dropout rates up to 0.5 produce flat results, followed by a decline in performance at higher rates.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Accuracy (\%) \(\uparrow\) & BS \(\downarrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline AOT-SNN (8) & 90.8\(\pm\)0.23 & 0.0144\(\pm\)0.00040 & 0.37\(\pm\)0.022 & 0.043\(\pm\)0.0041 \\ AOT-SNN-DC (8) & 90.5\(\pm\)0.37 & 0.0140\(\pm\)0.00041 & 0.32\(\pm\)0.010 & 0.026\(\pm\)0.0030 \\ LTS-SNN (8) & 88.5\(\pm\)0.60 & 0.0181\(\pm\)0.00081 & 0.47\(\pm\)0.013 & 0.067\(\pm\)0.0034 \\ LTS-SNN-DC (8) & 90.2\(\pm\)0.25 & 0.0161\(\pm\)0.00036 & 0.47\(\pm\)0.035 & 0.065\(\pm\)0.0041 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance comparisons between the AOT-SNN with DropConnect and its corresponding LTS-SNN on the CIFAR-10 dataset. The numbers after the model names represent time steps.

## 5 Conclusion

We proposed a novel and efficient approach for uncertainty estimation in spiking neural networks (SNNs), based on the MC-dropout method combined with an appropriate choice of loss function. Our approach exploits the time-step mechanism of SNNs to enable MC-dropout in a computationally efficient manner, without introducing significant overheads during training and inference. We demonstrated that our proposed approach can be computationally efficient while delivering high-quality uncertainty estimates. Future work could investigate the potential of our approach in more applications, such as speech processing and medical imaging.

#### 5.0.1 Acknowledgments

TS is supported by NWO-NWA grant NWA.1292.19.298. SB is supported by the European Union (grant agreement 7202070 "HBP").

## Appendix

#### 5.0.2 Proper Scoring Rules.

A _scoring rule_ \(S(\mathbf{p},y)\) assigns a value for a predictive distribution \(\mathbf{p}\) and one of the labels \(y\). A _scoring function_ \(s(\mathbf{p},\mathbf{q})\) is defined as the expected score of \(S(\mathbf{p},y)\) under the distribution \(\mathbf{q}\): \[s(\mathbf{p},\mathbf{q})=\sum_{y=1}^{K}q_{y}S(\mathbf{p},y).
\tag{8}\] If a scoring rule satisfies \(s(\mathbf{p},\mathbf{q})\leq s(\mathbf{q},\mathbf{q})\), it is called a _proper scoring rule_. If \(s(\mathbf{p},\mathbf{q})=s(\mathbf{q},\mathbf{q})\) implies \(\mathbf{q}=\mathbf{p}\), the scoring rule is a _strictly proper scoring rule_. When evaluating the quality of probabilities, an optimal score output by a proper scoring rule indicates a perfect prediction [17]. In contrast, trivial solutions could generate optimal values for an improper scoring rule [17, 8].

Figure 4: The impact of dropout rate on the performance of AOT-SNNs on the CIFAR-10 dataset. Dropout rates range from 0.1 to 0.9 in increments of 0.1.

The two most commonly used proper scoring rules are the Brier score [1] and NLL. The Brier score is the squared \(L_{2}\) norm of the difference between \(\mathbf{p}\) and the one-hot encoding of the true label \(y\). NLL is defined as \(S(\mathbf{p},y)=-\log p(y|\mathbf{x})\), with \(y\) being the true label of the sample \(\mathbf{x}\). Among these two rules, the Brier score is preferable because NLL can unacceptably over-emphasize small differences between small probabilities [17]. Note that proper scoring rules are often used as loss functions to train neural networks [13, 8].

#### 5.0.3 ECE.

The ECE is a scalar summary statistic of calibration that approximates miscalibration [15, 10]. To calculate the ECE, the confidences of the predictions \(\hat{y}_{n}=\mathrm{argmax}_{y}\mathbf{p}(y|\mathbf{x_{n}})\) on the test instances are grouped into \(M\) equal-interval bins. The ECE is defined as \[ECE=\sum_{m=1}^{M}f_{m}|o_{m}-e_{m}|, \tag{9}\] where \(o_{m}\) is the fraction of correctly classified instances in the \(m^{th}\) bin, \(e_{m}\) is the average of the predicted probabilities (confidences) in the \(m^{th}\) bin, and \(f_{m}\) is the fraction of all test instances falling into the \(m^{th}\) bin. The ECE is not a proper scoring rule, and thus optimal ECEs could come from trivial solutions.
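To make these metrics concrete, the following is a minimal NumPy sketch (ours, not the authors' evaluation code) of the Brier score and the ECE of Equation 9, for an array of predicted class probabilities and integer labels:

```
import numpy as np

def brier_score(probs, labels):
    """Brier score: squared L2 distance between the predicted probabilities and
    the one-hot encoding of the true label, averaged over all instances."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE (Eq. 9): bin predictions by confidence into equal-interval bins and take
    the weighted average of |bin accuracy (o_m) - bin confidence (e_m)|."""
    conf = probs.max(axis=1)                     # confidence of the predicted class
    correct = probs.argmax(axis=1) == labels
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            o_m = correct[mask].mean()           # fraction correctly classified in bin
            e_m = conf[mask].mean()              # average confidence in bin
            ece += (mask.sum() / n) * abs(o_m - e_m)  # weighted by bin fraction f_m
    return ece
```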
2304.02595
Bayesian neural networks via MCMC: a Python-based tutorial
Bayesian inference provides a methodology for parameter estimation and uncertainty quantification in machine learning and deep learning methods. Variational inference and Markov Chain Monte-Carlo (MCMC) sampling methods are used to implement Bayesian inference. In the past three decades, MCMC sampling methods have faced some challenges in being adapted to larger models (such as in deep learning) and big data problems. Advanced proposal distributions that incorporate gradients, such as a Langevin proposal distribution, provide a means to address some of the limitations of MCMC sampling for Bayesian neural networks. Furthermore, MCMC methods have typically been constrained to statisticians and currently not well-known among deep learning researchers. We present a tutorial for MCMC methods that covers simple Bayesian linear and logistic models, and Bayesian neural networks. The aim of this tutorial is to bridge the gap between theory and implementation via coding, given a general sparsity of libraries and tutorials to this end. This tutorial provides code in Python with data and instructions that enable their use and extension. We provide results for some benchmark problems showing the strengths and weaknesses of implementing the respective Bayesian models via MCMC. We highlight the challenges in sampling multi-modal posterior distributions for the case of Bayesian neural networks and the need for further improvement of convergence diagnosis methods.
Rohitash Chandra, Joshua Simmons
2023-04-02T02:19:15Z
http://arxiv.org/abs/2304.02595v3
# Bayesian neural networks via MCMC: a Python-based tutorial

###### Abstract

Bayesian inference provides a methodology for parameter estimation and uncertainty quantification in machine learning and deep learning methods. Variational inference and Markov Chain Monte-Carlo (MCMC) sampling techniques are used to implement Bayesian inference. In the past three decades, MCMC methods have faced a number of challenges in being adapted to larger models (such as in deep learning) and big data problems. Advanced proposals that incorporate gradients, such as a Langevin proposal distribution, provide a means to address some of the limitations of MCMC sampling for Bayesian neural networks. Furthermore, MCMC methods have typically been constrained to use by statisticians and are still not prominent among deep learning researchers. We present a tutorial for MCMC methods that covers simple Bayesian linear and logistic models, and Bayesian neural networks. The aim of this tutorial is to bridge the gap between theory and implementation via coding, given a general sparsity of libraries and tutorials to this end. This tutorial provides code in Python with data and instructions that enable their use and extension. We provide results for some benchmark problems showing the strengths and weaknesses of implementing the respective Bayesian models via MCMC. We highlight the challenges in sampling multi-modal posterior distributions, in particular for the case of Bayesian neural networks, and the need for further improvement of convergence diagnosis.

keywords: Bayesian neural networks, MCMC, Langevin dynamics, Deep learning, Convolutional Neural Networks, time series prediction

## 1 Introduction

Bayesian inference provides a probabilistic approach for parameter estimation in a wide range of models used across the fields of machine learning, econometrics, and environmental and Earth sciences [1; 2; 3; 4; 5]. The term 'probabilistic' refers to the representation of unknown parameters as probability distributions, rather than the fixed point estimates used in conventional optimisation methods and machine learning models, where gradient-based methods are prominent [6]. The probability representation of unknown parameters requires a different approach to optimisation and, from the machine learning and computational statistics point of view, this is known as sampling. Markov Chain Monte-Carlo (MCMC) sampling methods have been prominent for inference (estimation) of model parameters via the posterior distribution (probability distribution). In other words, Bayesian methods attempt to quantify the uncertainty in model parameters by marginalising over the predictive posterior distribution. Hence, in the case of neural networks, MCMC methods can be used to implement Bayesian neural networks that represent weights and biases as probability distributions [7; 8; 9; 10; 11]. Probabilistic machine learning provides a natural way of quantifying uncertainty in predictions [12], since uncertainties can be obtained from the probabilistic representation of parameters. This inference procedure can be seen as a form of learning (optimisation) applied to the model parameters [8]. In this tutorial, we employ linear models and simple neural networks to demonstrate the use of MCMC sampling methods. The probabilistic representation of weights and biases in the respective models allows uncertainty quantification on model predictions.
We note that MCMC refers to a family of algorithms for implementing Bayesian inference for parameter and uncertainty estimation in models. The range of models to which Bayesian inference is applied, including statistical, graphical, and machine learning models, has led to the existence of a wide range of MCMC sampling algorithms. Some of the prominent ones are the Metropolis-Hastings algorithm [13; 14; 15], Gibbs sampling [16; 17; 18], Langevin MCMC [19; 20; 21], rejection sampling [22; 23], sequential MCMC [24], adaptive MCMC [25], parallel tempering (tempered) MCMC [26; 27; 28; 29], reversible-jump MCMC [30; 31], specialised MCMC methods for discrete time series models [32; 33; 34], constrained parameter and model settings [35; 36], and likelihood-free MCMC [37]. MCMC sampling methods have also been used for data augmentation [38; 39], model fusion [40], model selection [41; 42], and interpolation [43]. Apart from this, we note that MCMC methods have been prominent in a wide range of applications that include geophysical inversions [44; 45; 46], geo-scientific models [5; 47; 48], environmental and hydrological modelling [49; 50], bio-systems modelling [51; 52; 53], and quantitative genetics [54; 55]. In the case of Bayesian neural networks, the large number of model parameters that emerges from large neural network architectures and deep learning models poses challenges for MCMC sampling methods. Hence, progress in the application of Bayesian approaches to big data and deep neural networks has been slow. Research in this space has included a number of methods that have been fused with MCMC, such as gradient-based methods [56; 21; 57; 58; 59], and evolutionary (meta-heuristic) algorithms, which include differential evolution, genetic algorithms, and particle swarm optimisation [60; 61; 62; 63]. The use of gradients in MCMC was initially known as Metropolis-adjusted Langevin dynamics [21]; it has shown promising performance for linear models [58] and has also been extended to Bayesian neural networks [59]. Another direction has been the use of better exploration features in MCMC sampling, such as parallel tempering MCMC with Langevin gradients and parallel computing [59], providing a competitive alternative to stochastic gradient descent [64] and Adam optimizers [6], with the addition of uncertainty quantification in predictions. These methods have also been applied to Bayesian deep learning models, such as Bayesian autoencoders [65] and Bayesian graph convolutional neural networks (CNNs) [66], which require millions of trainable parameters to be represented as posterior distributions. Recently, Kapoor et al. [63] combined tempered MCMC with a particle swarm optimisation-based proposal distribution in a parallel computing environment and showed that the hybrid approach samples the posterior distribution more effectively than the conventional approach. The Hamiltonian Monte Carlo (HMC) approach uses gradient-based proposal distributions [57] and has been effectively applied to Bayesian neural networks [67]. In a similar way, Langevin dynamics can be used to incorporate gradient-based stepping with Gaussian noise into the proposal distribution [58]. HMC avoids random-walk behaviour by using an auxiliary momentum vector and implementing Hamiltonian dynamics, where the momentum samples are discarded later. The samples are hence less correlated and tend to converge to the target distribution more rapidly.
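As a concrete illustration of the Langevin proposal described above, the following is a minimal NumPy sketch (ours); `grad_log_post` is an assumed user-supplied function returning the gradient of the log-posterior:

```
import numpy as np

def langevin_proposal(theta, grad_log_post, step=1e-2):
    """One Langevin-gradient proposal (as in Metropolis-adjusted Langevin dynamics):
    a small gradient step on the log-posterior plus Gaussian noise. Note that this
    proposal is asymmetric, so the Metropolis-Hastings acceptance step must include
    the proposal ratio q(theta | theta') / q(theta' | theta)."""
    noise = np.sqrt(step) * np.random.randn(*theta.shape)
    return theta + 0.5 * step * grad_log_post(theta) + noise
```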
Variational inference (VI) provides an alternative approach to MCMC methods for approximating the Bayesian posterior distribution [68; 69]. _Bayes by backpropagation_ is a VI method that showed competitive results when compared to stochastic gradient descent and dropout methods used as approximate Bayesian methods [70]. Dropout is a regularisation technique that is relatively simple to implement and involves randomly dropping selected units (neurons) during the forward-pass operation of backpropagation. This improves the generalization performance of neural networks and has been widely adopted [71]. Gal and Ghahramani [72] presented an approximate Bayesian methodology based on dropout regularisation, which has been used for other deep learning models such as CNNs [73]. Later, Gal and Ghahramani [74] presented a VI-based dropout technique for recurrent neural networks (RNNs); particularly, long short-term memory (LSTM) and gated recurrent unit (GRU) models for language modelling and sentiment analysis tasks. We argue that dropout used for regularisation cannot be seen as an alternative to MCMC sampling directly from the posterior distribution. In this case, we neither know the priors nor much about the posterior distribution, and there is little theoretical rigour; only computational efficacy in capturing noise and uncertainty during model training. Furthermore, the Bayesian methodology requires a probabilistic representation using priors, which is questionable in the dropout methodology for Bayesian computation. Given that VI methods are seen as approximate Bayesian methods, we need to invest more effort in directly sampling from the posterior distribution in the case of Bayesian deep learning models. This can only be possible if both communities (i.e., statistics and machine learning) are aware of the strengths and weaknesses of MCMC methods for sampling Bayesian neural networks, which span hundreds to thousands of parameters and go orders of magnitude higher when looking at Bayesian deep learning models. The progress of MCMC for deep learning has been slow due to a lack of implementation details, libraries, and tutorials that provide a balance of theory and implementation. In this paper, we present a Python-based tutorial for MCMC methods that covers simple Bayesian linear models and Bayesian neural networks. We provide code in Python with data and instructions that enable their use and extension. We provide results for some benchmark problems showing the strengths and weaknesses of implementing the respective Bayesian models via MCMC, with detailed instructions for every code sample in a related GitHub repository so that it is easy to clone and run. Our tutorial is simple and relies on basic Python libraries such as NumPy. We do not use advanced libraries, as the goal is to serve as a go-to document for beginners who have basic knowledge of machine learning models and need to get hands-on experience with MCMC sampling. Hence, this is a code-based computational tutorial with theoretical background. Finally, we highlight the challenges in sampling multi-modal posterior distributions in the case of Bayesian neural networks and shed light on the use of convergence diagnostics. The rest of the paper is organised as follows. In Section 2, we present a background and literature review of related methods. Section 3 presents the proposed methodology, followed by experiments and results in Section 4. Section 5 provides a discussion and Section 6 concludes the paper with directions for future work.
## 2 Background

### Bayesian inference

We recall that Bayesian methods account for the uncertainty in prediction and decision making via the posterior distribution [75]. Note that the posterior is the conditional probability determined after taking into account the prior distribution and the relevant evidence or data via sampling methods. Thomas Bayes (1702 - 1761) presented and proved a special case of Bayes' theorem [76; 77], which is the foundation of Bayesian inference. However, it was Pierre-Simon Laplace (1749 - 1827) who introduced a general version of the theorem and used it to approach problems [78]. Figure 1 gives an overview of the Bayesian inference framework, which uses data with a prior and a likelihood to sample from the posterior distribution. This is the building block of the rest of the lessons, which will feature Bayesian logistic regression and Bayesian neural networks.

Figure 1: The relationship of the likelihood with data and prior distribution is shown for sampling the posterior distribution.

Bayesian inference estimates unknown parameters using prior information or belief about the variable. Prior information is captured in the form of a distribution. A simple example of a prior belief is a distribution over positive real values in some range. This essentially implies a belief that our result, the posterior distribution, would likely be a distribution of positive numbers in some range, which would be similar to the prior but not the same. If the posterior and prior both follow the same type of distribution, this is known as a conjugate prior [79]. If the prior provides useful information about the variable, rather than very loose constraints, it is known as an informative prior. The prior distribution is based on expert knowledge (opinion) and is also dependent on the domain for different types of models [80; 81].

The need for efficient sampling methods to implement Bayesian inference has been a significant focus of research in computational statistics. This is especially true in the case of multi-modal and irregular posterior distributions [82; 83; 29], which tend to dominate in Bayesian neural network problems [10; 84]. MCMC sampling methods are used to update the probability for a hypothesis (proposal \(\Theta\)) as more information becomes available. The hypothesis is given by a prior probability distribution that expresses one's belief about a quantity (or free parameter in a model) before some data (\(\mathbf{d}\)) are observed. MCMC methods use sampling to construct the posterior distribution \(P(\Theta|\mathbf{d})\) iteratively, using a proposal distribution, prior distribution \(P(\Theta)\), and likelihood function.

\[P(\Theta|\mathbf{d})=\frac{P(\mathbf{d}|\Theta)\times P(\Theta)}{P(\mathbf{d})} \tag{1}\]

We note that \(P(\mathbf{d}|\Theta)\) could be seen as the likelihood distribution in disguise. \(P(\mathbf{d})\) is the marginal distribution of the data; it is often seen as a normalising constant and ignored. Hence, ignoring it, we can also express the above as

\[P(\Theta|\mathbf{d})\propto P(\mathbf{d}|\Theta)\times P(\Theta) \tag{2}\]

The likelihood function is a function of the parameters of a given model provided specific observed data [85]. The likelihood function can be seen as a measure of fit to the data for the proposals, which are drawn from the proposal distribution.
Hence, from an optimisation perspective, the likelihood function can be seen as a fitness or error function. The posterior distribution is constructed after taking into account the relevant evidence (data) and prior distribution, with the likelihood that considers the proposal and the model. MCMC methods essentially implement Bayesian inference via a numerical approach that marginalizes or integrates over the posterior distribution [86]. Note that probability and likelihood are not the same in the field of statistics, while in everyday language they are used as if they are the same. The term "probability" refers to the possibility of something happening in relation to a given distribution of data. The likelihood refers to the likelihood function, which provides a measure of fit in relation to a distribution; it indicates which parameter values are more likely than others, given the observed data. Further, detailed explanations regarding Bayesian inference and MCMC sampling are given in [87; 88].

### Probability distributions

#### 2.2.1 Gaussian (Normal) Distribution

A normal probability density or distribution, also known as the Gaussian distribution, is described by two parameters: the mean (\(\mu\)), around which the distribution is centered, and the standard deviation (\(\sigma\)), which describes the spread (sometimes described instead by the variance, \(\sigma^{2}\)). Using these two parameters, we can fit a probability (normal) distribution to data from some source. In a similar way, given a probability distribution, we can generate data; this process is known as sampling from the distribution. In sampling the distribution, we simply present random (uniform) data points to the distribution and get data that is, in a way, transformed by the distribution. These parameters determine the shape of the probability distribution, e.g., whether it is peaked or spread. Note that the normal distribution is symmetrical in nature and caters for negative and positive real values. Equation 3 presents the Gaussian distribution probability density function (PDF) for parameters \(\mu\) and \(\sigma\).

\[f(x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right) \tag{3}\]

We will use the NumPy library [89] to sample from the various distributions discussed in this tutorial, and the SciPy library [90] to get a representation of the PDF, i.e., the probability distribution. The associated github repository1 contains the code to generate Figures 2 to 5 using the Seaborn and Matplotlib Python libraries. Footnote 1: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/01-Distributions.ipynb](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/01-Distributions.ipynb)

```
import numpy as np
from numpy import random
from scipy import stats

# 10 random draws from a standard Gaussian distribution - N(0, 1)
samples = random.randn(10)

# Gaussian distribution PDF with mean of 4 and standard deviation of 0.5
x = np.linspace(0, 8, 100)
pdf = stats.norm.pdf(x, loc=4, scale=0.5)
```
Listing 1: Random number generation for a Gaussian distribution

We note that the mean and standard deviation are purely based on the data and will change depending on the dataset. Let us visualise what happens when the standard deviation changes and the mean remains the same, as shown in Figure 2.
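The following is a minimal sketch of such a comparison plot (ours, a simplified version of the repository's plotting code for Figure 2), using Matplotlib:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Same mean, different standard deviations (as in Figure 2)
x = np.linspace(-6, 6, 200)
for sigma in (0.5, 1.0, 2.0):
    plt.plot(x, stats.norm.pdf(x, loc=0, scale=sigma), label=f'$\\mu=0, \\sigma={sigma}$')
plt.xlabel('x')
plt.ylabel('density')
plt.legend()
plt.show()
```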
We can review some more examples where changes to the mean and standard deviation give us different shapes of the PDF, as shown in Figure 3.

Figure 2: Normal distributions with the same mean and different standard deviations.

Figure 3: Normal distributions with different parameters, i.e., mean and standard deviation.

#### 2.2.2 Multivariate Normal distribution

The multivariate normal distribution, or joint normal distribution, generalises the univariate normal distribution to more variables or higher dimensions, as shown in the PDF in Equation 4.

\[f(x_{1},\ldots,x_{M})=\frac{1}{\sqrt{(2\pi)^{M}|\Sigma|}}\exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right) \tag{4}\]

where \(\mathbf{x}\) is a real \(M\)-dimensional column vector and \(|\Sigma|\) is the determinant of the symmetric covariance matrix, which is positive definite.

#### 2.2.3 Gamma distribution

A gamma distribution is defined by the parameters shape (\(\alpha\)) and rate (\(\beta\)), as shown below.

\[f(x;\alpha,\beta)=\frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)} \tag{5}\]

for \(x>0\) and \(\alpha,\beta>0\), where \(\Gamma(\alpha)\) is the gamma function (\(\Gamma(n)=(n-1)!\) for positive integers \(n\)). Figure 4 presents the Gamma distribution for various parameter combinations, with the corresponding Python code given below.

```
# 10 (size) random draws from a Gamma distribution with a shape parameter of 2 and scale (1/rate) of 2
samples = random.gamma(2, scale=2, size=10)

# Gamma distribution PDF with a shape parameter of 2 and scale (1/rate) of 2
x = np.linspace(0, 10, 100)
pdf = stats.gamma.pdf(x, a=2, scale=2)

# Inverse Gamma distribution PDF with a shape parameter of 2 and scale of 1
x = np.linspace(0, 5, 100)
pdf = stats.invgamma.pdf(x, a=2, scale=1)
```
Listing 2: Random number generation for the Gamma and inverse Gamma distributions

The corresponding inverse-Gamma (IG) distribution takes the same parameters, with examples given in Figure 5, and is more appropriate for real positive numbers.

Figure 4: Gamma distributions with different shape and rate parameters (\(\alpha\) and \(\beta\)).

Figure 5: Inverse gamma distributions with different shape and rate parameters (\(\alpha\) and \(\beta\)).

#### 2.2.4 Binomial distribution

So far, we have only addressed real numbers with respective probability distributions; however, we also need to consider discrete numbers. The Bernoulli distribution is a discrete probability distribution typically used for modelling binary classification problems. We begin with an example where a variable \(x\) takes the value 1 with probability \(p\) and the value 0 with probability \(q=1-p\). The probability mass function for this distribution over the possible outcomes (\(x\)) is given by Equation 6.

\[f(x;p)=p^{x}(1-p)^{1-x} \tag{6}\]

for \(x\in\{0,1\}\). The probability of getting exactly \(k\) successes (\(x=1\)) in \(n\) independent Bernoulli trials (\(f(k,n,p)\)) is given as \(\Pr(k;n,p)\) in Equation 7.

\[\Pr(k;n,p)=\binom{n}{k}p^{k}(1-p)^{n-k} \tag{7}\]

for \(k=0,1,2,...,n\), where \(\binom{n}{k}=\frac{n!}{k!(n-k)!}\). Listing 3 below shows the implementation in Python.
```
# 10 (size) random draws from a Binomial distribution with n=10 trials and success probability p=0.5
samples = random.binomial(10, 0.5, size=10)

# Binomial distribution PMF for k = 0, ..., 10 successes in n=10 trials with p=0.5
k = np.arange(0, 11)
pmf = stats.binom.pmf(k, 10, 0.5)
```
Listing 3: Random number generation for the Binomial distribution

#### 2.2.5 Multinomial distribution

Previously, we catered for the case of two outcomes; however, we can consider the case of more than two outcomes. Suppose a single trial can result in \(k\) (\(k\geq 2\)) possible outcomes numbered \(1,2,\ldots,k\), and let \(p_{i}=\mathbb{P}\)(a single trial results in outcome \(i\)) (\(\sum_{i=1}^{k}p_{i}=1\)). For \(n\) independent trials, let \(X_{i}\) denote the number of trials resulting in outcome \(i\) (then \(\sum_{i=1}^{k}X_{i}=n\)). Then we say that the distribution of \((X_{1},X_{2},\ldots,X_{k})\sim\text{Multinomial}(n;p_{1},p_{2},\ldots,p_{k})\) and it holds that

\[\mathbb{P}(X_{1}=x_{1},X_{2}=x_{2},\ldots,X_{k}=x_{k})=\frac{n!}{x_{1}!x_{2}!\ldots x_{k}!}p_{1}^{x_{1}}p_{2}^{x_{2}}\ldots p_{k}^{x_{k}},\ 0<p_{i}<1,\ \sum_{i=1}^{k}p_{i}=1. \tag{8}\]

## 3 MCMC

We begin by noting that a Markov process is uniquely defined by its transition probabilities \(P(x^{\prime}|x)\), which define the probability of transitioning from any given state \(x\) to another given state \(x^{\prime}\). The Markov process has a unique stationary distribution \(\pi(x)\) given that the following two conditions are met.

1. There must exist a stationary distribution \(\pi\) which solves the detailed balance equations, and this therefore requires that each transition \(x\to x^{\prime}\) is reversible. This implies that for every pair of states \(x,x^{\prime}\), the probability of being in state \(x\) and moving to state \(x^{\prime}\) must be equal to the probability of being in state \(x^{\prime}\) and moving to state \(x\); hence, \(\pi(x)P(x^{\prime}\mid x)=\pi(x^{\prime})P(x\mid x^{\prime})\).
2. The stationary distribution must be unique, which is guaranteed by ergodicity of the Markov process [91; 92; 93]. Ergodicity is guaranteed when every state is aperiodic (i.e., the system does not return to the same state at fixed intervals) and positive recurrent (i.e., the expected number of steps for returning to the same state is finite). An ergodic system is one that mixes well; in other words, you get the same result whether you average its values over time or over space.

Given that \(\pi(x)\) is chosen to be \(P(x)\), the condition of detailed balance becomes \(P(x^{\prime}\mid x)P(x)=P(x\mid x^{\prime})P(x^{\prime})\), which is re-written as shown in Equation 9.

\[\frac{P(x^{\prime}\mid x)}{P(x\mid x^{\prime})}=\frac{P(x^{\prime})}{P(x)} \tag{9}\]

Algorithm 1 presents a basic MCMC sampler with a random-walk proposal distribution that runs until a maximum number of samples (\(N_{max}\)) is reached, for training data \(\mathbf{d}\).

```
Data: Training data, \(\mathbf{d}\)
Result: \(N_{max}\) samples from the posterior distribution

- Initialise \(x_{0}\);
for \(i=1\) until \(N_{max}\) do
    1. Propose a value \(x^{\prime}|x_{i}\sim q(x_{i})\), where \(q(.)\) is the proposal distribution;
    2. Given \(x^{\prime}\), execute the model \(f(x^{\prime},\mathbf{d})\) to compute the predictions (output \(y\)) and the likelihood;
    3. Calculate the acceptance probability \(\alpha=\min\left(1,\frac{P(x^{\prime})}{P(x)}\frac{q(x|x^{\prime})}{q(x^{\prime}|x)}\right)\);
    4. Generate a random value from a uniform distribution \(u\sim U(0,1)\);
    5. Accept or reject the proposed value \(x^{\prime}\):
       if \(u<\alpha\) then accept the sample, \(x_{i}=x^{\prime}\);
       else reject and retain the previous sample, \(x_{i}=x_{i-1}\);
       end if
end for
```
**Algorithm 1** A basic MCMC sampler leveraging the Metropolis-Hastings algorithm

Algorithm 1 proceeds by proposing new values of the parameter \(x\) (Step 1) from the selected proposal distribution \(q(.)\), in this case a uniform distribution between 0 and 1. Conditional on these proposed values, the model \(f(x^{\prime},\mathbf{d})\) computes or predicts an output using the proposal \(x^{\prime}\) and data \(\mathbf{d}\) (Step 2). We compute the likelihood using the prediction, and employ the Metropolis-Hastings criterion (Step 3) to determine whether to accept or reject the proposal (Step 5). We compare the acceptance ratio \(\alpha\) with \(u\sim U(0,1)\); this ensures that the proposal is accepted with probability \(\alpha\). If the proposal is accepted, the chain moves to the proposed value. If rejected, the chain stays at the current value. The process is repeated until the convergence criterion is met, which in this case is the maximum number of samples (\(N_{max}\)) defined by the user.

### Priors

The prior distribution is generally based on belief, expert opinion, or other information without viewing the data [8; 94]. Information to construct the prior can be based on past experiments or the posterior distribution of the model for related datasets. There are no hard rules for how much information should be encoded in the prior distribution; hence, we can take multiple approaches. An _informative prior_ gives specific and definite information about a variable. If we consider the prior distribution for the temperature tomorrow evening, it would be reasonable to use a normal distribution with mean given by this evening's temperature and standard deviation given by the evening temperatures over the entire season. A _weakly informative prior_ expresses partial information about a variable. In the case of the prior distribution of the evening temperature, a weakly informative prior would use the daytime temperature of the day as the mean, with the standard deviation of the daytime temperatures over the whole year. An _uninformative prior_ or _diffuse prior_ expresses vague information about a variable, such as that the variable is positive or lies within some range. A number of studies have considered priors for linear models [95; 96] and Bayesian neural networks and deep learning models [97]. Hobbs et al. [98] presented a study of Bayesian priors in generalised linear models for clinical trials. We note that the incorporation of prior knowledge in deep learning models [99] is different from selecting or defining priors in Bayesian deep learning models. Due to the similarity of terms, we caution readers that these are often confused and mixed up.

### MCMC sampler in Python

We begin with a deliberately simple example in which we sample only a single parameter of a binomial model, in order to demonstrate a simple MCMC implementation in Python. Considering a simple binomial (e.g., coin flipping) likelihood problem (we explore the likelihood below), given the data of \(k\) successes in \(n\) trials, we will calculate the posterior distribution of the parameter \(p\), which defines the chance of success for any given trial.
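For reference, with a flat prior this particular posterior is available in closed form (it is a Beta(\(k+1\), \(n-k+1\)) distribution), which provides a useful check on the MCMC output below. A minimal grid-based sketch (ours):

```
import numpy as np
from scipy import stats

# Exact posterior for p with a flat prior: proportional to the binomial likelihood,
# normalised numerically on a grid (analytically this is Beta(k+1, n-k+1)).
k, n = 50, 100
grid = np.linspace(0, 1, 1001)
post = stats.binom.pmf(k, n, grid)
post /= np.trapz(post, grid)
print('posterior mean of p:', np.trapz(grid * post, grid))  # close to (k+1)/(n+2) = 0.5
```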
MCMC sampling requires a prior distribution, along with a likelihood function that is used to evaluate a set of parameters proposed for the given data and model. In other words, the likelihood is a measure of the quality of proposals obtained from a defined proposal distribution using MCMC sampling. We first need to define our prior. Listing 4 presents an implementation2 of this simple MCMC sampling exercise in Python, following Algorithm 1. Footnote 2: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/02-Basic-MCMC.ipynb](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/02-Basic-MCMC.ipynb)

In this example, we adopt a uniform distribution as an uninformative prior, only constraining \(p\) to be between the values of 0 and 1 (\(p\in[0,1]\)).

```
# First define our likelihood function which will be dependent on provided 'data'
# in this case we will choose k=50, n=100
def likelihood_function(query_prob):
    '''
    Given the data of k successes in n trials, return the likelihood, which
    evaluates the probability that the chance of a single success (p) is
    query_prob, for any given query_prob (between 0 and 1).
    '''
    k = 50
    n = 100
    return stats.binom.pmf(k, n, query_prob)

## MCMC Settings and Setup
n_samples = 10000  # number of samples to draw from the posterior
burn_in = 500  # number of samples to discard before recording draws from the posterior

x = random.uniform(0, 1)  # initialise a value of x_0

# create an array of NaNs to fill with our samples
p_posterior = np.full(n_samples, np.nan)

print('Generating {} MCMC samples from the posterior:'.format(n_samples))

# now we can start the MCMC sampling loop
for ii in np.arange(n_samples):
    # Sample a value uniformly from 0 to 1 as a proposal
    x_new = random.uniform(0, 1)

    # Calculate the Metropolis-Hastings acceptance probability based on the prior
    # (can be ignored in this case) and likelihood
    prior_ratio = 1  # for the simple example discussed above
    likelihood_ratio = likelihood_function(x_new) / likelihood_function(x)
    alpha = np.min([1, likelihood_ratio * prior_ratio])

    # Here we use a random draw from a uniform distribution between 0 and 1 as a
    # method of accepting the new proposal with a probability of alpha
    # (i.e., accept if u < alpha)
    u = random.uniform(0, 1)
    if u < alpha:
        x = x_new  # then update the current sample to the proposal for the next iteration

    # Store the current sample
    p_posterior[ii] = x
```
Listing 4: Python implementation of Algorithm 1
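As a quick way of producing the visualisations discussed next, here is a minimal sketch (ours, assuming Listing 4 has been run) that discards an initial portion of the samples and draws the histogram and trace plot:

```
import matplotlib.pyplot as plt

posterior = p_posterior[burn_in:]  # discard the burn-in (warmup) samples

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(posterior, bins=30)                  # histogram of the posterior for p
ax1.set_xlabel('p')
ax1.set_ylabel('count')
ax2.plot(p_posterior, lw=0.5)                 # trace plot, including burn-in
ax2.axvline(burn_in, color='red', ls='--')    # burn-in boundary
ax2.set_xlabel('sample')
ax2.set_ylabel('p')
plt.show()

print('posterior mean:', posterior.mean())
```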
An important note: in MCMC, a certain portion of the initial samples is discarded. The discarded samples are known as the burn-in or warmup period. The burn-in period varies depending on the sampling problem, but here we will use 25% to 50% depending on the complexity of the model. If you use MCMC for large neural network architectures, a 50% burn-in will likely be required. Note that the burn-in period can be seen as an optimisation stage: essentially, you are discarding samples that are not representative of the posterior distribution. The posterior distribution should feature good predictions, and that is what you obtain once the sampler approaches convergence.

In order to visualise the MCMC results, histograms of the posterior distribution and trace plots are typically used. The histogram of the posterior distribution allows us to examine the mean and variance visually, while the trace plot shows the value of the samples at each iteration, allowing us to examine the behaviour and convergence of the MCMC. Although it is necessary to exclude the burn-in samples from the posterior distribution, it can be helpful to include them in the trace plots so that we can examine where the model started and how well it converged. The plots in Figure 6 provide a visualisation of the results from Listing 4, where an approximately normal posterior is produced by a simple MCMC. Since this is a relatively simple model, a small burn-in proportion is used. The histogram of the posterior shows a normally distributed shape, and the trace plot shows the samples distributed around the convergence value, with the burn-in samples shown in red. We also note that the value of the posterior is usually summarised by the mean of the distribution, which in this case is close to 0.5, as expected for \(k=50\) successes in \(n=100\) trials.

## 4 Bayesian linear models via MCMC

We give details of implementing a Bayesian linear model that uses Metropolis-Hastings MCMC with a random-walk proposal distribution. We wish to model a univariate time series consisting of observed data points with inputs \(X=(\mathbf{x}_{1},\ldots,\mathbf{x}_{S})^{\prime}\). We assume that the relationship between inputs and outputs is a signal-plus-noise model where the signal depends upon a set of parameters \(\theta\), denoted by \(f(\mathbf{x},\theta)\).
We assume the noise to be Gaussian with a mean of zero and a variance of \(\tau^{2}\), so that

\[y=f(\mathbf{x},\theta)+e\qquad e\sim\mathcal{N}(0,\tau^{2}) \tag{10}\]

or, equivalently,

\[p(y|x,\theta,\tau^{2})\sim\mathcal{N}\left(f(\mathbf{x},\theta),\tau^{2}\right) \tag{11}\]

We have adopted the above form so that it generalises to the neural network model later; Equation 12 expresses the case of a linear model.

\[f(\mathbf{x},\theta)=\theta\mathbf{X}^{T} \tag{12}\]

Figure 6: Posterior and trace plot for the basic MCMC sampler example.

In this linear regression problem, we then have the parameters \(\theta\) (which represents all weights and biases) and \(\tau\) (which represents a single noise parameter). We sample the parameters using MCMC to find their posterior distributions. Note that, from an optimisation perspective, sampling can be seen as a form of optimisation, e.g., using gradient-based methods [100] for learning the parameters of linear models or neural networks, similar to training a perceptron model [101] in the machine learning and neural networks literature. The additional aspect of an MCMC sampler is its ability to sample a posterior probability distribution representing the parameters of a model, rather than the fixed point estimate given by optimisation methods.

### Likelihood

As detailed above in Section 2.1, our Bayesian treatment of the problem of finding the posterior \(p(\theta\mid y)\) requires the definition of both a likelihood \(p(y\mid x,\theta)\) and a prior distribution \(p(\theta)\). We begin by defining the likelihood. The likelihood, i.e., the probability of the data given the model, is given by the product of the likelihoods for each of our data points (\(S\) in total), as shown in Equation 13.

\[p(y\mid x,\theta,\tau^{2})=\prod_{t=1}^{S}p(y_{t}\mid x_{t},\theta,\tau^{2}) \tag{13}\]

We note that for the MCMC implementation, we use the log-likelihood (i.e., the log of the likelihood function) to eliminate numerical instabilities, which can occur since we multiply together many probabilities, whose number grows with the size of the data. It is also more convenient to maximize the log of the likelihood function: since the logarithm is a monotonically increasing function of its argument, maximization of the log of a function is equivalent to maximization of the function itself. In order to transform a likelihood function into a log-likelihood, we will use the log product rule as given below.

\[\log_{b}(x\times y)=\log_{b}(x)+\log_{b}(y) \tag{14}\]

The log-likelihood simplifies the subsequent mathematical analysis and also helps avoid numerical instabilities due to the product of a large number of small probabilities. With the log-likelihood, Equation 13 is much simplified by computing the sum of the log probabilities, as given in Equation 15.

\[\ln p(y\mid x,\theta,\tau^{2})=\sum_{t=1}^{S}\ln p(y_{t}\mid x_{t},\theta,\tau^{2}) \tag{15}\]

In order to construct the likelihood function, we use our definition of the probability for each data point given the model, as shown in Equation 11, and the form of the Gaussian distribution as defined in Equation 3. We use a set of weights and biases as the model parameters \(\theta\) in our model \(f(x,\theta)\) for \(S\) training data instances and variance \(\tau^{2}\). Our assumption of normally distributed errors leads to the likelihood given in Equation 16.
\[p(y\mid x,\theta,\tau^{2})=\frac{1}{(2\pi\tau^{2})^{S/2}}\times\exp\left(-\frac{1}{2\tau^{2}}\sum_{t=1}^{S}(y_{t}-f(x_{t},\theta))^{2}\right) \tag{16}\]

### Prior

We note that a linear model transforms into a Bayesian linear model with the use of a prior distribution and a likelihood function to sample the posterior distribution via MCMC. In Section 3.2, we discussed the need to define a prior distribution for our model parameters \(\theta\) and \(\tau\). In the case where the prior distribution comes from the same probability distribution family as the posterior distribution, the prior and posterior are called conjugate distributions [102; 103], and the prior is called a conjugate prior for the likelihood function of the Bayesian model. To implement conjugate priors in our linear model, we will assume a multivariate Gaussian prior for \(\theta\) (Equation 17) and an inverse Gamma (IG) distribution for \(\tau^{2}\) (Equation 18). We use an IG prior distribution since we require positive real numbers to represent \(\tau^{2}\). We use the multivariate Gaussian distribution to represent the parameters, such as the weights and bias of the linear model. These parameters take negative and positive real values, and the model has more than one parameter; hence, the multivariate Gaussian distribution is most appropriate for the prior.

\[\theta\sim\mathcal{N}(0,\sigma^{2}) \tag{17}\]

\[\tau^{2}\sim IG(\nu_{1},\nu_{2}) \tag{18}\]

In this example, we choose parameter values of \(\sigma=5\), \(\nu_{1}=0\), and \(\nu_{2}=0\) to define the prior distributions. These values are based on expert opinion, considering the parameter values of typical trained models. First, let us revisit the multivariate normal distribution from Equation 4, used here to define the prior distribution for our weights and biases. Suppose that our \(\mathbf{x}\) is our set of \(M\) weights and biases, \((\theta_{1},\ldots,\theta_{M})\), and our \(\mu\) is a vector of zeros (as defined above); then we get:

\[f(\theta_{1},\ldots,\theta_{M})=\frac{1}{\sqrt{(2\pi)^{M}|\Sigma|}}\exp\left(-\frac{1}{2}(\mathbf{\theta})^{\mathrm{T}}\Sigma^{-1}(\mathbf{\theta})\right) \tag{19}\]

The covariance matrix \(\mathbf{\Sigma}\) in this case is just a diagonal matrix with all diagonal values equal to \(\sigma^{2}\) (scalar), so \(\mathbf{\Sigma}^{-1}\) becomes \(I/\sigma^{2}\), where \(I\) is the identity matrix (with diagonal elements that are all ones). Hence, the term in the exponent of the above equation,

\[(\mathbf{\theta})^{\mathrm{T}}\mathbf{\Sigma}^{-1}(\mathbf{\theta}) \tag{20}\]

becomes

\[\frac{(\mathbf{\theta})^{\mathrm{T}}\mathbf{I}(\mathbf{\theta})}{\sigma^{2}} \tag{21}\]

We note that multiplying the identity matrix with any vector leaves the vector unchanged; hence, we finally get \(\sum_{i=1}^{M}\theta_{i}^{2}\) in the exponent. We can now move to the IG distribution used to define the prior for our model variance (\(\tau^{2}\)).

\[f(\tau^{2})=\frac{\nu_{2}^{\nu_{1}}}{\Gamma(\nu_{1})}\left(\frac{1}{\tau^{2}}\right)^{\nu_{1}+1}\exp\left(\frac{-\nu_{2}}{\tau^{2}}\right) \tag{22}\]

We note that \(\nu_{2}^{\nu_{1}}/\Gamma(\nu_{1})\) is a constant, which can be dropped considering proportionality. Equation 23 takes the product over all our parameters to define the overall prior.
\[p(\mathbf{\theta})\propto\frac{1}{(2\pi\sigma^{2})^{M/2}}\times\exp\bigg{\{}-\frac {1}{2\sigma^{2}}\bigg{(}\sum_{i=1}^{M}\theta^{2}\bigg{)}\bigg{\}}\times\tau^{ -2(1+\nu_{1})}\exp\left(\frac{-\nu_{2}}{\tau^{2}}\right) \tag{23}\] ### Python Implementation The following code presented in Listing 13 implements a model in the form outlined above. First, we will define our simple linear model as in Equation 12. ``` 1classLinearModel: 2''' 3Simplelinearmodelwithasingoutput(y)giventhecovariatesx_1...x_Moftheform: 4y=w_1*x_1+...+w_M*x_M+b 5whereM=numberoffeatures,waretheweights,andbisthebias. 6''' 7#Initialisevaluesofmodelparameters 8def_init...(self): 9self.w=None 10self.b=None 11 * Functiontotakeindataandparametersampleandreturntheprediction *default_proposal(self,data,theta): *" *Encodetheproposedparametersandthenusethemodeltoppredict *Input: *data:(NxM)arrayofdata *theta:(M+1)vectorofparameters.Thelastelementoftheatsomitesthebisterm(givingM+1elements) *";" *self.encode(theta)#methodotencodewandb *prediction=self.predict(data)#predictandreturn *returnprediction *"Linearmodelprediction *defpredict(self,x_in): *y_out=x_in.dot(self.w)+self.b *returny_out *#Helperfunctiontsplittheparametervectorintowandbandstoreinthemodel *defencode(self,theta): *self.w=theta[0:-1] *self.b=theta[-1] ``` 12#Functiontotakeindataandparametersampleandreturntheprediction *defevaluate_proposal(self,data,theta): *" *Encodetheproposedparametersandthenusethemodeltoppredict *Input: *data:(NxM)arrayofdata *theta:(M+1)vectorofparameters.Thelastelementoftheatsomitesthebisterm(givingM+1elements) *log_likelihood:loglikelihoodofthedatagiventheparameters *model_prediction:predictionofthemodelgiventheparameters *accuracy:accuracy(RMSE)ofthemodelgiventheparameters *"," *#firstmakeapplicationwithparameterstheta *model_prediction=self.model.evaluate_proposal(self.x_data,theta) *accuracy=self.rmse(model_prediction,self.y_data)#RMSEerrormetric *nowcalculatetheloglikelihood *log_likelihood=np.sum(-0.5*np.log(2*np.pi*tausq)-0.5*np.square(self.y_data-model_prediction)/tausq) *return[log_likelihood,model_prediction,accuracy] *Definetheprior *defprior_likelihood(self,sigma_squared,nu_1,nu_2,theta,tausq): *" *Calculatethepriorlikelihoodoftheparameters * ``` 24Input: 25sigma_squared: varianceofnormalpriorfortheta 26nu_1:parameternu_1ofttheinversegammapriorfortau^2 27nu_2:parameternu_2ofttheinversegammapriorfortau^2 28theta:(M+1)vectorofparameters.Thelastelementofthetaconstitutesthebias term(givingM+1elements) 29tausq:varianceoftheerrorterm 30Output: 31log_prior:logpriorlikelihood 32,, 33n_params=self.theta_size#numberofparametersinmodel 34part1=-1*(n_params/2)*np.log(sigma_squared) 35part2=1/(2*sigma_squared)*(sum(np.square(theta))) 36log_prior=part1-part2-(1+nu_1)*np.log(tausq)-(nu_2/tausq) 37returnlog_prior ``` Listing 6: Python implementation of likelihood and prior functions for linear regression model to be incorporated into the MCMC sampling class Before running MCMC, we need to set up the sampler. First we generate an initial sample for our parameters, and initialise arrays to capture the samples that form the posterior distribution, the accuracy, and the model predictions. Then we proceed with sampling as per the MCMC sampling algorithm detailed in Algorithm 1. This algorithm uses a Gaussian random walk for the parameter proposals (\(\theta_{p}\) and \(\tau_{p}^{2}\)), perturbing the previous proposed value with Gaussian noise as shown in Equations 26 and 27, respectively. 
Before running MCMC, we need to set up the sampler. First we generate an initial sample for our parameters and initialise arrays to capture the samples that form the posterior distribution, the accuracy, and the model predictions. Then we proceed with sampling as per the MCMC sampling algorithm detailed in Algorithm 1. This algorithm uses a Gaussian random walk for the parameter proposals (\(\theta_{p}\) and \(\tau_{p}^{2}\)), perturbing the previous value with Gaussian noise as shown in Equations 26 and 27, respectively. (In the implementation below, the random walk for \(\tau^{2}\) is carried out on \(\eta=\log\tau^{2}\), which keeps the proposals positive.)

\[\theta_{p}=\theta_{p-1}+\epsilon_{\theta},\qquad\epsilon_{\theta}\sim\mathcal{N}(0,\sigma_{\theta}^{2}) \tag{26}\]

\[\tau_{p}^{2}=\tau_{p-1}^{2}+\epsilon_{\tau},\qquad\epsilon_{\tau}\sim\mathcal{N}(0,\sigma_{\tau^{2}}^{2}) \tag{27}\]

We then accept or reject the proposed value according to the Metropolis-Hastings acceptance ratio. Note that the log-likelihood is used, and hence the ratio of previous and current likelihoods must be computed using the log laws (rules), i.e. the log product rule in Equation 28 and the log quotient rule in Equation 29.

\[\log_{b}(x\times y)=\log_{b}(x)+\log_{b}(y) \tag{28}\]

\[\log_{b}(x/y)=\log_{b}(x)-\log_{b}(y) \tag{29}\]

```python
# MCMC sampler
def sampler(self):
    '''
    Run the sampler for a defined linear model
    '''
    ## Define empty arrays to store the sampled posterior values
    # posterior of all weights and bias over all samples
    pos_theta = np.ones((self.n_samples, self.theta_size))
    # posterior defining the variance of the noise in predictions
    pos_tau = np.ones((self.n_samples, 1))

    # record output f(x) over all samples
    pred_y = np.ones((self.n_samples, self.x_data.shape[0]))
    # record the RMSE of each sample
    rmse_data = np.zeros(self.n_samples)

    # Initialisation
    # initialise theta - the model parameters
    theta = np.random.randn(self.theta_size)
    # make initial prediction
    pred_y[0,] = self.model.evaluate_proposal(self.x_data, theta)

    # initialise eta - we sample eta as a Gaussian random walk in the log space of tau^2
    eta = np.log(np.var(pred_y[0,] - self.y_data))
    tausq_proposal = np.exp(eta)

    # calculate the prior likelihood
    prior_likelihood = self.prior_likelihood(self.sigma_squared, self.nu_1, self.nu_2, theta, tausq_proposal)
    # calculate the likelihood considering observations
    [likelihood, pred_y[0,], _, rmse_data[0]] = self.likelihood_function(theta, tausq_proposal)

    n_accept = 0
    ## Run the MCMC sample for n_samples
    for ii in np.arange(1, self.n_samples):
        # Sample new values for theta and tau using a Gaussian random walk
        theta_proposal = theta + np.random.normal(0, self.step_theta, self.theta_size)
        eta_proposal = eta + np.random.normal(0, self.step_eta, 1)  # sample tau^2 in log space
        tausq_proposal = np.exp(eta_proposal)

        # calculate the prior likelihood
        prior_proposal = self.prior_likelihood(
            self.sigma_squared, self.nu_1, self.nu_2, theta_proposal, tausq_proposal
        )
        # calculate the likelihood considering observations
        [likelihood_proposal, pred_y[ii,], _, rmse_data[ii]] = self.likelihood_function(
            theta_proposal, tausq_proposal
        )

        # Noting that likelihood_function and prior_likelihood return log likelihoods,
        # we can use log laws to calculate the acceptance probability
        diff_likelihood = likelihood_proposal - likelihood
        diff_priorlikelihood = prior_proposal - prior_likelihood

        mh_prob = min(1, np.exp(diff_likelihood + diff_priorlikelihood))

        # sample to accept or reject the proposal according to the acceptance probability
        u = np.random.uniform(0, 1)
        if u < mh_prob:
            # accept and update the values
            n_accept += 1
            likelihood = likelihood_proposal
            prior_likelihood = prior_proposal
            theta = theta_proposal
            eta = eta_proposal
            # store to make up the posterior
            pos_theta[ii,] = theta_proposal
            pos_tau[ii,] = tausq_proposal
        else:
            # reject the move and store the old values
            pos_theta[ii,] = pos_theta[ii - 1,]
            pos_tau[ii,] = pos_tau[ii - 1,]

    # calculate the acceptance rate as a check
    accept_rate = (n_accept / self.n_samples) * 100
    print('{:.3f}% were accepted'.format(accept_rate))

    # store the posterior (samples after burn-in) in a pandas dataframe and return
    self.pos_theta = pos_theta[self.n_burnin:, ]
    self.pos_tau = pos_tau[self.n_burnin:, ]
    self.rmse_data = rmse_data[self.n_burnin:]

    # split theta into w and b
    results_dict = {'w{}'.format(_): self.pos_theta[:, _].squeeze() for _ in range(self.theta_size - 1)}
    results_dict['b'] = self.pos_theta[:, -1].squeeze()
    results_dict['tau'] = self.pos_tau.squeeze()
    results_dict['rmse'] = self.rmse_data.squeeze()

    results_df = pd.DataFrame.from_dict(results_dict)
    return results_df
```

Listing 7: Python implementation of an MCMC sampler for the linear model

Now that we have the sampler, we can create an MCMC class that brings together the model, data, hyperparameters, and sampling algorithm.

```python
class MCMC:
    def __init__(self, n_samples, n_burnin, x_data, y_data):
        self.n_samples = n_samples  # number of MCMC samples
        self.n_burnin = n_burnin    # number of burn-in samples
        self.x_data = x_data        # (N x M)
        self.y_data = y_data        # (N x 1)
        self.theta_size = x_data.shape[1] + 1  # weights for each feature and a bias term (M+1)

        # MCMC parameters - define the variance term in our Gaussian random walk
        self.step_theta = 0.02
        self.step_eta = 0.01  # note: eta is the log of tau^2, so the sampler works on the log scale

        # model hyperparameters
        # chosen by looking at the distribution of similar trained models,
        # i.e. the distribution of weights and bias
        self.sigma_squared = 5
        self.nu_1 = 0
        self.nu_2 = 0

        # initialise the linear model class
        self.model = LinearModel()

        # store output
        self.pos_theta = None
        self.pos_tau = None
        self.rmse_data = None

        # functions defined above - this is poor practice, but done for
        # readability and clarity
        self.likelihood_function = MethodType(likelihood_function, self)
        self.prior_likelihood = MethodType(prior_likelihood, self)
        self.sampler = MethodType(sampler, self)

    def model_draws(self, num_samples=10):
        '''
        Simulate new model predictions (mu) under the assumption that our
        posteriors are Gaussian.
        '''
        # num_samples x num_data_points
        pred_y = np.zeros((num_samples, self.x_data.shape[0]))
        sim_y = np.zeros((num_samples, self.x_data.shape[0]))

        for ii in range(num_samples):
            theta_drawn = np.random.normal(self.pos_theta.mean(axis=0), self.pos_theta.std(axis=0), self.theta_size)
            tausq_drawn = np.random.normal(self.pos_tau.mean(), self.pos_tau.std())

            [_, pred_y[ii, :], sim_y[ii, :], _] = self.likelihood_function(theta_drawn, tausq_drawn)
        return pred_y, sim_y

    # Additional error metric
    def rmse(self, predictions, targets):
        return np.sqrt(((predictions - targets) ** 2).mean())
```

Listing 8: Python implementation of an MCMC class for fitting a linear regression model

We can then run the MCMC sampler.

```python
## MCMC settings and setup
n_samples = 20000  # number of samples to draw from the posterior
burn_in = int(n_samples * 0.25)  # number of samples to discard before recording draws from the posterior

## Generate toy data
n_data = 100
n_features = 1
x_data = np.repeat(np.expand_dims(np.linspace(0, 1, n_data), axis=-1), n_features, axis=1)
y_data = 3 * x_data[:, 0] + 4 + np.random.randn(n_data) * 0.5

# Initialise the MCMC class
mcmc = MCMC(n_samples, burn_in, x_data, y_data)
# Run the sampler
results = mcmc.sampler()
# Draw sample models from the posterior
pred_y, sim_y = mcmc.model_draws(num_samples=100)
```

Listing 9: Code to call the MCMC sampling class and fit a model to some toy data

After the code runs, we can inspect the predictions of the trained Bayesian linear model and the simulated values (the model fit with added variance according to \(\tau^{2}\)), as shown in Figure 7.

Figure 7: Bayesian linear regression model fitting a toy dataset.
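As a brief, hypothetical follow-on to Listing 9, the posterior samples returned by the sampler can be summarised directly, since each column of `results` holds the post-burn-in chain for one parameter:

```python
# posterior summaries for the first weight, the bias and the error variance
print(results[['w0', 'b', 'tau']].describe())

# a 90% interval of the model predictions across posterior draws
lower = np.percentile(pred_y, 5, axis=0)
upper = np.percentile(pred_y, 95, axis=0)
```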
## 5 Bayesian neural networks via MCMC

### Neural networks

We utilise a simple neural network, also known as a multilayer perceptron, to demonstrate the process of training a Bayesian neural network via MCMC. A neural network model \(f(x)\) is made up of a series of lower-level computations, which can be used to transform inputs to their corresponding outputs \(\{\mathbf{\bar{x}_{t}},\mathbf{y_{t}}\}\).

Neural networks are structured as layers of neurons whose values are determined by a linear combination of inputs from the previous layer, with an activation function used to introduce nonlinearity. In this case, we consider a neural network with one hidden layer and one output, as shown in Figure 8. As an example, we can calculate the output value of the \(j\)-th neuron in the hidden layer using a weighted combination of the \(m\) inputs (\(\bar{x}_{t}\)), as shown in Equation 30.

\[g\Big{(}\delta_{h,j}+\sum_{i=1}^{m}w_{i,j}\bar{x}_{t,i}\Big{)} \tag{30}\]

where the bias (\(\delta_{h,j}\)) and weights (\(w_{i,j}\) for each of the \(m\) inputs) are parameters to be estimated (trained), and \(g(\cdot)\) is the activation function used to perform a nonlinear transformation. In our case, \(g(\cdot)\) is the sigmoid activation function, used for the hidden and output layers of the neural network model with a single hidden layer shown in Figure 8. We train the model to approximate the function \(f\) such that \(f(\mathbf{\bar{x}_{t}})=\mathbf{y_{t}}\) for all pairs. We extend our previous calculation for a single neuron in the hidden layer to calculate the output \(f(\mathbf{\bar{x}_{t}})\), as shown in Equation 31.

\[f(\mathbf{\bar{x}_{t}})=g\Big{(}\delta_{o}+\sum_{h=1}^{H}v_{h}\times g\Big{(}\delta_{h}+\sum_{i=1}^{m}w_{i,h}\bar{x}_{t,i}\Big{)}\Big{)}, \tag{31}\]

where \(H\) is the number of neurons in the hidden layer, \(\delta_{o}\) is the bias for the output, and \(v_{h}\) are the weights from the hidden layer to the output neuron. The complete set of parameters for the neural network model shown in Figure 8 is \(\theta=(\mathbf{\bar{w}},\mathbf{\bar{v}},\boldsymbol{\delta})\), where \(\boldsymbol{\delta}=(\boldsymbol{\delta}_{o},\boldsymbol{\delta}_{h})\): \(\mathbf{\bar{w}}\) are the weights transforming the input to the hidden layer, \(\mathbf{\bar{v}}\) are the weights transforming the hidden to the output layer, \(\boldsymbol{\delta}_{h}\) is the bias for the hidden layer, and \(\boldsymbol{\delta}_{o}\) is the bias for the output layer.

Figure 8: Simple neural network with a single hidden layer. The information is passed and processed from the input to the hidden layer, and then finally to the output layer, as shown.
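To make Equation 31 concrete, the following minimal sketch evaluates the single-hidden-layer forward pass for one input vector; all names and values here are illustrative rather than part of the tutorial code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

m, H = 4, 3                      # m inputs, H hidden neurons
rng = np.random.default_rng(0)
x = rng.normal(size=m)           # one input vector x_t
w = rng.normal(size=(m, H))      # input-to-hidden weights w_{i,h}
delta_h = rng.normal(size=H)     # hidden-layer biases
v = rng.normal(size=H)           # hidden-to-output weights v_h
delta_o = rng.normal()           # output bias

# Equation 31: f(x) = g(delta_o + sum_h v_h * g(delta_h + sum_i w_{i,h} x_i))
f_x = sigmoid(delta_o + v @ sigmoid(delta_h + x @ w))
```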
### Bayesian neural networks

A Bayesian neural network is a probabilistic implementation of a standard neural network, with the key difference being that the weights and biases are represented via posterior probability distributions rather than single point values, as shown in Figure 9. Similar to canonical neural networks [104], Bayesian neural networks also have universal continuous function approximation capabilities; however, the posterior distribution of the network parameters additionally allows uncertainty quantification on the predictions.

Figure 9: Bayesian neural network and MCMC sampling. Adapted from [105].

The task for MCMC sampling is to learn (sample) the posterior distributions representing the weights and biases of the neural network that best fit the data. As in the previous examples, we begin inference with prior distributions over the weights and biases of the network and use a sampling scheme to find the posterior distributions given the training data. Since nonlinear activation functions exist in the network, the conjugacy of prior and posterior is lost, and therefore we must employ an MCMC sampling scheme and make assumptions about the distribution of errors.

We specify the model similarly to the Bayesian linear regression, assuming a Gaussian error as given in Equation 32.

\[y=f(\mathbf{x},\theta)+e,\qquad e\sim\mathcal{N}(0,\tau^{2}) \tag{32}\]

This leads to the same likelihood function, presented in logarithmic form in Equation 24. As in Section 4.2, we adopt Gaussian priors for all parameters of the model (\(\theta\)), with zero mean and a user-defined variance (\(\sigma^{2}\)), and an IG distribution for the variance of the error model (\(\tau^{2}\)), with parameters \(\nu_{1}\) and \(\nu_{2}\). The _likelihood function_ and _prior likelihood_ function therefore remain unchanged from their definitions in Listing 6.

### Training neural networks via backpropagation

We note that random-walk proposal distributions are typically used for small-scale models; however, neural network models feature a large number of parameters, and the choice of proposal distribution is essential for such models. To better guide the sampling, we can incorporate gradients into our proposals, and we start by examining how gradients are used in non-probabilistic deep learning.

Gradient-based methods are widely used in deep learning alongside backpropagation to train models with various neural network architectures [106]. Stochastic gradient descent (SGD) involves stepping through the parameter space iteratively, guided by gradients, to optimise a differentiable objective function. The method has featured prominently in the training of various neural network architectures, including deep learning models, and SGD is combined with backpropagation when training neural networks [107, 108, 109]. Backpropagation involves a forward pass, which propagates information forward to obtain the prediction (decision) at the output layer, and a backward pass, which computes the local gradients for each of the parameters. These gradients are then used to update the model parameters in an iterative process, where the weights and biases are updated at each step.

Training a neural network can also be considered as solving the non-convex optimisation problem \(\operatorname{argmin}_{\theta}L(\theta)\), where \(\theta\in\mathbb{R}^{n}\) is the set of parameters and \(L\) is the loss function. The parameter (weight) update for an iteration (epoch) of SGD is shown in Equation 33.

\[\theta_{k}=\theta_{k-1}-a_{k-1}\nabla L(\theta_{k-1}) \tag{33}\]

where \(\theta_{k}\) denotes the \(k\)-th iteration, \(a_{k}\) is the learning rate, and \(\nabla L(\theta_{k})\) denotes the gradient. We note that the learning rate is a user-defined hyperparameter which depends on the problem and data at hand; it is typically determined through tuning via cross-validation or trial and error.
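As a minimal illustration of the update in Equation 33, the sketch below performs a few SGD steps on a toy quadratic loss \(L(\theta)=\lVert\theta-c\rVert^{2}\), whose gradient is \(2(\theta-c)\); the values are illustrative.

```python
import numpy as np

c = np.array([1.0, -2.0])   # minimiser of the toy loss
theta = np.zeros(2)
a = 0.1                     # learning rate

for k in range(50):
    grad = 2.0 * (theta - c)   # gradient of L(theta) = ||theta - c||^2
    theta = theta - a * grad   # Equation 33

print(theta)  # approaches c
```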
A number of extensions of the backpropagation algorithm employing SGD have been proposed to address its limitations. These include weight decay regularisation during training to improve generalisation ability [110], a momentum mechanism for faster training [111, 112], adaptive learning rates [112], and second-order gradient methods [113], which, although efficient, have problems scaling up computationally to larger models. In the last decade, with the deep learning revolution, further attempts have been made to ensure that enhanced backpropagation algorithms not only improve training accuracy but also scale well computationally to large deep learning models. Hence the development of methods such as the adaptive gradient algorithm (AdaGrad) [114], AdaDelta [115], and Adam (adaptive moment estimation) [116]. These advanced algorithms are generally based on the idea of adapting the learning rate automatically during training, taking into account the recent history of the optimisation process.

#### 5.3.1 Langevin gradient-based proposal distribution

We mentioned earlier that random-walk proposal distributions are suited to small-scale models, and that better proposal distributions are required for neural network models. Although simple neural networks have far fewer parameters than deep learning models, training even simple neural networks with MCMC sampling is a challenge when using a random-walk proposal distribution. We therefore utilise the properties of the backpropagation algorithm and its mechanism of gradient-based weight updates. Specifically, we utilise _stochastic gradient Langevin dynamics_ [58] for the proposal distribution, which features the addition of noise to the stochastic gradients. The method has been shown to be effective for linear models [58], which motivated its use in Bayesian neural networks; in our previous work, Langevin-gradient MCMC has been very promising for both simple and deep neural networks [59, 66, 65]. Hence, we draw the proposed values for the parameters (\(\mathbf{\theta}^{p}\)) according to a one-step (epoch) gradient, as shown in Equation 34.

\[\mathbf{\theta}^{p}\sim\mathcal{N}(\mathbf{\bar{\theta}}^{[s]},\Sigma_{\theta}) \tag{34}\]

This is a Gaussian distribution with covariance \(\Sigma_{\theta}\) and mean (\(\mathbf{\bar{\theta}}^{[s]}\)) calculated via a gradient-based update (Equation 35) of the parameter values from the previous step (\(\mathbf{\theta}^{[s]}\)).

\[\mathbf{\bar{\theta}}^{[s]}=\mathbf{\theta}^{[s]}+r\times\nabla E(\mathbf{\theta}^{[s]}) \tag{35}\]

with learning rate \(r\) (a user-defined hyperparameter) and gradient update (\(\nabla E(\mathbf{\theta}^{[s]})\)) according to the model residuals.

\[E(\mathbf{\theta}^{[s]})=\sum_{t\in\mathcal{T}}(y_{t}-F(\mathbf{x}_{t},\mathbf{\theta}^{[s]}))^{2} \tag{36}\]

\[\nabla E(\mathbf{\theta}^{[s]})=\left(\frac{\partial E}{\partial\theta_{1}},\cdots,\frac{\partial E}{\partial\theta_{L}}\right).\]

Hence, the Langevin gradient-based proposal consists of two parts:

1. a gradient descent-based weight update, and
2. the addition of Gaussian noise from \(\mathcal{N}(0,\Sigma_{\theta})\).

We need to ensure that detailed balance is maintained, since the Langevin-gradient proposals are not symmetric. (We note that MCMC implementations with a relaxed detailed balance condition also exist for some applications [117].) Therefore, the combined update is used as a proposal in a Metropolis-Hastings step, which accepts the proposal \(\mathbf{\theta}^{p}\) at position \(s\) with the probability \(\alpha\) shown in Equation 37.

\[\alpha=\min\left\{1,\frac{p(\mathbf{\theta}^{p}|\mathbf{y})q(\mathbf{\theta}^{[s]}|\mathbf{\theta}^{p})}{p(\mathbf{\theta}^{[s]}|\mathbf{y})q(\mathbf{\theta}^{p}|\mathbf{\theta}^{[s]})}\right\} \tag{37}\]

where \(p(\mathbf{\theta}^{p}|\mathbf{y})\) and \(p(\mathbf{\theta}^{[s]}|\mathbf{y})\) can be computed using the likelihood from Equation (16) and the priors given by Equation (23).
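Anticipating Equations 38 and 39 below, the asymmetric-proposal correction in Equation 37 only requires the log proposal densities up to a constant, since the Gaussian normalising terms cancel in the ratio. The sketch below is illustrative: `grad_E` stands for a function returning the gradient of Equation 36, and `sigma_sq` for a scalar proposal variance; both are assumed names, not part of the tutorial code.

```python
import numpy as np

def log_q(theta_to, theta_from, grad_E, r, sigma_sq):
    # log density (up to a constant) of proposing theta_to from theta_from
    # under the Langevin proposal N(theta_from + r * grad_E(theta_from), sigma_sq * I)
    mean = theta_from + r * grad_E(theta_from)
    diff = theta_to - mean
    return -0.5 * np.sum(diff ** 2) / sigma_sq

# The Metropolis-Hastings log ratio adds
#   log_q(theta_current, theta_proposal, ...) - log_q(theta_proposal, theta_current, ...)
# to the difference of log posteriors before the accept/reject step.
```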
The ratio of the proposal densities, \(q(\mathbf{\theta}^{[s]}|\mathbf{\theta}^{p})/q(\mathbf{\theta}^{p}|\mathbf{\theta}^{[s]})\), is given by Equation 38, which is based on a one-step (epoch) gradient \(\nabla E_{\mathbf{y}}[\mathbf{\theta}^{[s]}]\) with the user-defined learning rate \(r\) given in Equation 39.

\[q(\mathbf{\theta}^{[s]}|\mathbf{\theta}^{p})\sim\mathcal{N}(\bar{\mathbf{\theta}}^{[s]},\Sigma_{\theta}) \tag{38}\]

\[\bar{\mathbf{\theta}}^{[s]}=\mathbf{\theta}^{[s]}+r\times\nabla E_{\mathbf{y}}[\mathbf{\theta}^{[s]}] \tag{39}\]

This ensures that the detailed balance condition holds and that the sequence \(\mathbf{\theta}^{[s]}\) converges to draws from the posterior \(p(\mathbf{\theta}|\mathbf{y})\). Since our implementation works in the log scale, the log posterior, combined with the proposal correction used in the Metropolis-Hastings step, is given by Equation 40.

\[\log\left(p(\mathbf{\theta}|\mathbf{y})\right)=\log\left(p(\mathbf{\theta})\right)+\log\left(p(\mathbf{y}|\mathbf{\theta})\right)+\log\left(q(\mathbf{\theta}|\mathbf{\theta}^{\star})\right) \tag{40}\]

Algorithm 2 gives a full description of the Langevin-gradient MCMC sampling scheme, bringing together all the steps outlined above. User-defined parameters set the maximum number of samples (\(S_{max}\)), the rate of Langevin-gradient proposals (\(L_{prob}\)) relative to simple random-walk proposals, and the learning rate \(r\) for the Langevin-gradient proposals. We note that in a standard Langevin MCMC approach the rate \(L_{prob}\) is 1, with Gaussian noise already part of the Langevin gradients. In our implementation, however, we use a combination of random-walk proposals and Langevin gradients, as this seems to be computationally more efficient than Langevin gradients alone, which consume extra computational time when computing gradients, especially in larger models.

We begin by drawing initial values for \(\theta\) from the prior distribution given in Equation (23) (Stage 1.1). A new proposal is drawn for \(\theta^{p}\) (which incorporates the model weights and biases, and \(\tau^{2}\)) from either a Langevin-gradient or random-walk proposal distribution (Stage 1.2). The proposal is evaluated using the Bayesian neural network (BNN) model with the likelihood function in Equation 16 (Stage 1.4), and we also evaluate the prior likelihood given in Equation (23) (Stage 1.3). Using these likelihoods, we check whether the proposal should be accepted using the Metropolis-Hastings condition (Stages 1.5 and 1.6). If accepted, the proposal becomes part of the chain; otherwise, we retain the last accepted state of the chain. We repeat the procedure until the maximum number of samples (\(S_{max}\)) is reached. Finally, we execute the post-sampling stage, where we obtain the posterior distribution by concatenating the history of samples in the chain.

### Python Implementation

We first define and implement the simple neural network module (class), with methods (functions) for the forward and backward passes to calculate the output of the network given a set of inputs, to compute the gradients, and to update the model parameters given a model prediction and the observations. Listings 10, 11 and 12 present an implementation4 of the Bayesian neural network and the associated Langevin-gradient MCMC sampling scheme.
Footnote 4: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/04-Bayesian-Neural-Network.ipynb](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/blob/main/04-Bayesian-Neural-Network.ipynb)

```
Data: Dataset
Result: Posterior distribution of model parameters (weights and biases)

Stage 1.0: Metropolis Transition
  1.1 Draw initial values theta_0 from the prior
  for each s until S_max do
      1.2 Draw kappa from a Uniform distribution [0,1]
      if kappa <= L_prob then
          use the Langevin-gradient proposal: theta^p ~ N(theta_bar^[s], Sigma_theta)
      else
          use the random-walk proposal: theta^p ~ N(theta^[s], Sigma_theta)
      end
      1.3 Evaluate the prior given in Equation 23
      1.4 Evaluate the log-likelihood given in Equation 16
      1.5 Compute the posterior probability for the Metropolis-Hastings condition (Equation 40)
      1.6 Draw u from a Uniform distribution [0,1]
      if log(u) <= log(alpha) then
          accept the state: theta^[s+1] <- theta^p
      else
          reject and retain the previous state: theta^[s+1] <- theta^[s]
      end
  end
```

**Algorithm 2** Bayesian neural network framework using Langevin-based MCMC sampling.

```python
# NN prediction
def forward_pass(self, X):
    '''
    Take an input X and return the output of the network
    Input:
        - X: (N x num_features) array of input data
    Output:
        - self.l2_output: (N) array of output data f(x) which can be
          compared to observations (Y)
    '''
    # Hidden layer
    l1_z = np.dot(X, self.l1_weights) + self.l1_biases
    self.l1_output = self.sigmoid(l1_z)  # activation function g(.)
    # Output layer
    l2_z = np.dot(self.l1_output, self.l2_weights) + self.l2_biases
    self.l2_output = self.sigmoid(l2_z)
    return self.l2_output


def backward_pass(self, X, Y):
    '''
    Compute the gradients using a backward pass and undertake
    Langevin-gradient updating of parameters
    Input:
        - X: (N x num_features) array of input data
        - Y: (N) array of target data
    '''
    # dE/dtheta at the output layer
    l2_delta = (Y - self.l2_output) * (self.l2_output * (1 - self.l2_output))
    l2_weights_delta = np.outer(self.l1_output, l2_delta)

    # backpropagation of l2_delta to the hidden layer, as above
    l1_delta = np.dot(l2_delta, self.l2_weights.T) * (self.l1_output * (1 - self.l1_output))
    l1_weights_delta = np.outer(X, l1_delta)

    # update for the output layer
    self.l2_weights += self.rate * l2_weights_delta
    self.l2_biases += self.rate * l2_delta

    # update for the hidden layer
    self.l1_weights += self.rate * l1_weights_delta
    self.l1_biases += self.rate * l1_delta
```

Listing 10: Python implementation of the neural network forward and backward passes
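The factors `self.l2_output * (1 - self.l2_output)` and `self.l1_output * (1 - self.l1_output)` in Listing 10 come from the derivative of the sigmoid activation, which can be written in terms of the activation itself:

\[g(z)=\frac{1}{1+e^{-z}},\qquad g^{\prime}(z)=\frac{e^{-z}}{(1+e^{-z})^{2}}=g(z)\big{(}1-g(z)\big{)},\]

so the backward pass never needs to store the pre-activations \(z\); the layer outputs are sufficient.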
Next, we implement the model for a single hidden layer neural network with multiple input neurons and a single output neuron (one-step-ahead prediction).

```python
class NeuralNetwork:
    '''
    Neural network model with a single hidden layer and a single output (y)
    '''
    def __init__(self, layer_sizes, learning_rate=0.01):
        '''
        Initialise the model
        Input:
            - layer_sizes (input, hidden, output): array specifying the number
              of nodes in each layer
            - learning_rate: learning rate for the gradient update
        '''
        # Initial values of model parameters
        self.input_num = layer_sizes[0]
        self.hidden_num = layer_sizes[1]
        self.output_num = layer_sizes[2]

        # total number of parameters from weights and biases
        self.n_params = (self.input_num * self.hidden_num) + (self.hidden_num * self.output_num) + \
            self.hidden_num + self.output_num
        # learning params
        self.rate = learning_rate

        # Initialise network structure
        self.initialise_network()

        # functions defined above - this is poor practice, but done for
        # readability and clarity
        self.forward_pass = MethodType(forward_pass, self)
        self.backward_pass = MethodType(backward_pass, self)

    def initialise_network(self):
        '''
        Initialise network structure - weights and biases for the hidden layer
        and output layer
        '''
        # hidden layer
        self.l1_weights = np.random.normal(
            loc=0, scale=1 / np.sqrt(self.input_num),
            size=(self.input_num, self.hidden_num))
        self.l1_biases = np.random.normal(
            loc=0, scale=1 / np.sqrt(self.hidden_num),
            size=(self.hidden_num,))
        # placeholder for storing the hidden layer values
        self.l1_output = np.zeros((1, self.hidden_num))

        # output layer
        self.l2_weights = np.random.normal(
            loc=0, scale=1 / np.sqrt(self.hidden_num),
            size=(self.hidden_num, self.output_num))
        self.l2_biases = np.random.normal(
            loc=0, scale=1 / np.sqrt(self.hidden_num),
            size=(self.output_num,))
        # placeholder for storing the model outputs
        self.l2_output = np.zeros((1, self.output_num))

    def evaluate_proposal(self, x_data, theta):
        '''
        A helper function to take the input data and proposed parameter sample
        and return the prediction
        Input:
            x_data: (N x num_features) array of data
            theta: (w, v, b_h, b_o) vector of parameters with weights and biases
        '''
        self.decode(theta)  # method to decode theta into the weights and biases
        size = x_data.shape[0]
        fx = np.zeros(size)
        for i in range(0, size):  # to see what fx is produced by the current weight update
            fx[i] = self.forward_pass(x_data[i,])
        return fx

    def langevin_gradient(self, x_data, y_data, theta, depth):
        '''
        Compute the Langevin-gradient based proposal distribution
        Input:
            - x_data: (N x num_features) array of input data
            - y_data: (N) array of target data
            - theta: (w, v, b_h, b_o) vector of proposed parameters
            - depth: SGD depth
        Output:
            - theta_updated: updated parameter proposal
        '''
        self.decode(theta)  # method to decode theta into the weights and biases
        size = x_data.shape[0]
        # update the parameters based on gradient descent for `depth` epochs
        for _ in range(0, depth):
            for i in range(0, size):
                self.forward_pass(x_data[i,])
                self.backward_pass(x_data[i,], y_data[i])
        theta_updated = self.encode()
        return theta_updated

    # Helper functions
    def sigmoid(self, x):
        '''
        Implementation of the sigmoid function
        '''
        return 1 / (1 + np.exp(-x))

    def encode(self):
        '''
        Encode the model parameters into a vector
        Output:
            - theta: vector of parameters
        '''
        w1 = self.l1_weights.ravel()
        w2 = self.l2_weights.ravel()
        theta = np.concatenate([w1, w2, self.l1_biases, self.l2_biases])
        return theta

    def decode(self, theta):
        '''
        Decode the model parameters from a vector
        Input:
            - theta: vector of parameters
        '''
        w_layer1size = self.input_num * self.hidden_num
        w_layer2size = self.hidden_num * self.output_num

        w_layer1 = theta[0:w_layer1size]
        self.l1_weights = np.reshape(w_layer1, (self.input_num, self.hidden_num))

        w_layer2 = theta[w_layer1size:w_layer1size + w_layer2size]
        self.l2_weights = np.reshape(w_layer2, (self.hidden_num, self.output_num))
        self.l1_biases = theta[w_layer1size + w_layer2size:
                               w_layer1size + w_layer2size + self.hidden_num]
        self.l2_biases = theta[w_layer1size + w_layer2size + self.hidden_num:
                               w_layer1size + w_layer2size + self.hidden_num + self.output_num]
```

Listing 11: Python implementation of the NeuralNetwork class

Finally, we implement an MCMC sampler following the Langevin-gradient MCMC sampling scheme. As discussed above, the prior and likelihood functions remain the same as in the linear regression case, as presented in Listing 6.

```python
# MCMC sampler
def sampler(self):
    '''
    Run the sampler for a defined neural network model
    '''
    # define empty arrays to store the sampled posterior values
    # posterior of all weights and bias over all samples
    pos_theta = np.ones((self.n_samples, self.theta_size))
    # posterior defining the variance of the noise in predictions
    pos_tau = np.ones((self.n_samples, 1))

    # record output f(x) over all samples
    pred_y = np.ones((self.n_samples, self.x_data.shape[0]))
    # record simulated values f(x) + error over all samples
    sim_y = np.ones((self.n_samples, self.x_data.shape[0]))
    # record the RMSE of each sample
    rmse_data = np.zeros(self.n_samples)

    # Initialisation
    # initialise theta
    theta = np.random.randn(self.theta_size)
    # make initial prediction
    pred_y[0,] = self.model.evaluate_proposal(self.x_data, theta)

    # initialise eta
    eta = np.log(np.var(pred_y[0,] - self.y_data))
    tau_proposal = np.exp(eta)

    # Hyperpriors - chosen by looking at the distribution of similar trained
    # models, i.e. the distribution of weights and bias
    sigma_squared = self.sigma_squared
    nu_1 = self.nu_1
    nu_2 = self.nu_2

    # calculate the prior likelihood
    prior_likelihood = self.prior_likelihood(sigma_squared, nu_1, nu_2, theta, tau_proposal)
    # calculate the likelihood considering observations
    [likelihood, pred_y[0,], sim_y[0,], rmse_data[0]] = self.likelihood_function(theta, tau_proposal)

    n_accept = 0
    n_langevin = 0
    # Run the MCMC sample for n_samples
    # Sample new values for theta and tau
```

[A page of the proposal-and-acceptance loop is missing from the extracted source at this point; the complete implementation is available in the repository linked above. The sampler resumes below, storing either the accepted proposal or the previous values.]

```python
            pos_tau[ii,] = tau_proposal
        else:
            # store the old values
            pos_theta[ii,] = pos_theta[ii - 1,]
            pos_tau[ii,] = pos_tau[ii - 1,]

    # print the % of times the proposal was accepted
    accept_ratio = (n_accept / self.n_samples) * 100
    print(accept_ratio, '% was accepted')

    # store the posterior of theta and tau, as well as the RMSE of the samples
    self.pos_theta = pos_theta[self.n_burnin:, ]
    self.pos_tau = pos_tau[self.n_burnin:, ]
    self.rmse_data = rmse_data[self.n_burnin:]

    # Create a pandas dataframe to store the posterior samples of theta and
    # tau, and the associated RMSE
    results_dict = {'w{}'.format(_): self.pos_theta[:, _].squeeze() for _ in range(self.theta_size - 2)}
    results_dict['b0'] = self.pos_theta[:, self.theta_size - 2].squeeze()
    results_dict['b1'] = self.pos_theta[:, self.theta_size - 1].squeeze()
    results_dict['tau'] = self.pos_tau.squeeze()
    results_dict['rmse'] = self.rmse_data.squeeze()

    results_df = pd.DataFrame.from_dict(results_dict)
    return results_df
```

Listing 12: Python implementation of the MCMC sampler with the Langevin proposal distribution
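Putting the pieces together mirrors Listing 9. The snippet below is hypothetical usage, assuming an MCMC class analogous to Listing 8 that wires the `NeuralNetwork` model to the sampler in Listing 12, and assuming `x_train` and `y_train` are prepared arrays; the constructor signature is an assumption.

```python
n_samples = 25000
burn_in = int(n_samples * 0.5)

mcmc = MCMC(n_samples, burn_in, x_train, y_train)  # assumed constructor signature
results = mcmc.sampler()
print(results[['tau', 'rmse']].describe())
```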
### Results

We use the Sunspot time series5 and Abalone6 datasets for regression problems. The Abalone dataset provides the ring age of abalone based on eight features that represent physical properties such as length, width, and weight, with the ring age as the associated target feature. Determining the age of abalone is difficult, as it requires cutting the shell and counting the number of rings under a microscope; however, the other physical measurements can be used to predict the age, so a model can be developed that uses the physical features to determine the ring age. Sunspots are regions of reduced surface temperature in the Sun's photosphere caused by concentrations of magnetic flux, and they appear as spots darker than the surrounding areas. Sunspot cycles last roughly eleven years, with the number of sunspots rising and falling over the course of each cycle. Sunspot activity is monitored since it has an influence on Earth's climate and weather systems.

We obtain the Abalone dataset from the University of California, Irvine (UCI) Machine Learning Repository7 and keep a processed version of all the datasets in our repository8. We also obtain datasets for classification problems from the same repository, which features a large number of datasets; we choose the Iris9 and Ionosphere10 classification data. The Iris dataset contains 4 features (sepal length, sepal width, petal length, petal width) of three Iris flower species, featuring 50 instances for each species; it is one of the most prominent datasets used in machine learning. The Ionosphere dataset has 34 continuous features and a binary target of either "good" or "bad", with 351 instances. The task is to filter radar signals, where "good" returns indicate some structure in the ionosphere, while "bad" returns indicate that the signals pass straight through it.

Footnote 5: [https://www.sidc.be/silso/datafiles](https://www.sidc.be/silso/datafiles)

Footnote 6: [https://archive.ics.uci.edu/ml/datasets/abalone](https://archive.ics.uci.edu/ml/datasets/abalone)

Footnote 7: [https://archive-beta.ics.uci.edu/about](https://archive-beta.ics.uci.edu/about)

Footnote 8: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/tree/main/data](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/tree/main/data)

Footnote 9: [https://archive.ics.uci.edu/ml/datasets/iris](https://archive.ics.uci.edu/ml/datasets/iris)

Footnote 10: [https://archive.ics.uci.edu/ml/datasets/ionosphere](https://archive.ics.uci.edu/ml/datasets/ionosphere)

In the Sunspot time series problem, we employ one-step-ahead prediction and hence use one output neuron. We process the Sunspot dataset (a univariate time series) using Takens' embedding theorem [118] to construct a state-space vector, i.e., the standard approach for using neural networks for time series prediction [59]. This is essentially a sliding-window approach of size \(D\) overlapping \(T\) time lags, where the window size \(D\) determines the number of input neurons in the Bayesian neural network and the Bayesian linear model. We used \(D=4\) and \(T=2\) for the data reconstruction of the Sunspot time series, as these values have given good performance in our previous works [59].
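The sliding-window reconstruction is straightforward to implement; the sketch below shows one illustrative convention for the embedding with \(D=4\) and \(T=2\) (the exact indexing used in our repository may differ).

```python
import numpy as np

def embed(series, D=4, T=2):
    # each row holds D lagged values spaced T apart; the target is the
    # next lagged value after the window
    n = len(series) - D * T
    X = np.array([series[i:i + D * T:T] for i in range(n)])
    y = series[D * T:]
    return X, y

X, y = embed(np.sin(np.linspace(0, 20, 200)))
print(X.shape, y.shape)  # (192, 4) (192,)
```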
We created train and test sets for all the datasets by selecting two thirds of each dataset for training and the remainder for testing. For both the Bayesian linear model and the Bayesian neural network, we draw 25,000 samples for all problems, distributed across 5 chains and excluding burn-in.

In the Bayesian linear model, we choose the learning rate \(r=0.1\), and the step sizes 0.02 and 0.01 for \(\theta\) and \(\tau^{2}\), respectively; for the Gaussian prior distribution, we choose the parameters \(\sigma^{2}=5\), \(\nu_{1}=0\) and \(\nu_{2}=0\). In the Bayesian neural network models, we choose the learning rate \(r=0.01\), and the step sizes 0.025 and 0.2 for \(\theta\) and \(\tau^{2}\), respectively; for the Gaussian prior distribution, we choose the parameters \(\sigma^{2}=25\), \(\nu_{1}=0\) and \(\nu_{2}=0\). We also use a burn-in rate of 0.5 for both the Bayesian linear and the Bayesian neural network models, which indicates the proportion of MCMC samples that are discarded. We report the mean and standard deviation (std) of the accuracy for the respective problems (RMSE or classification performance), obtained from the posterior distribution after discarding the burn-in samples. We note that although a random-walk proposal distribution alone could be used, we show results where both random-walk and Langevin-gradient proposals are used, with the latter applied at a rate of 0.5.

We first present the results of Bayesian regression with the Sunspot (time series) and Abalone (regression) datasets. We evaluate model performance using the _root mean squared error_ (RMSE), a standard metric for time series prediction and regression problems. Table 1 presents the results obtained by the Bayesian linear model and the Bayesian neural network for the regression problems. In Table 1, we observe that the Bayesian neural network performs better for the Sunspot time series prediction problem, as it achieves a lower RMSE on both the training and test sets; this can also be seen in Figures 10 and 11. In the case of the Abalone problem, Table 1 shows that both models obtain similar performance, with the Bayesian neural network achieving better test performance. However, we note that in both problems the Bayesian neural network has a much lower acceptance rate; a common rule of thumb prefers an acceptance rate of roughly 23% [119], which would suggest that the posterior distribution has been sampled effectively.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Method & Problem & Train (RMSE) & Test (RMSE) & Accept. rate \\
\hline
Bayesian linear model & Sunspot & 0.025 (0.013) & 0.022 (0.012) & 13.5\% \\
Bayesian neural network & Sunspot & 0.027 (0.007) & 0.026 (0.007) & 7.4\% \\
\hline
Bayesian linear model & Abalone & 0.085 (0.005) & 0.086 (0.005) & 5.8\% \\
Bayesian neural network & Abalone & 0.080 (0.002) & 0.080 (0.002) & 3.8\% \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Regression results using the Bayesian linear model and Bayesian neural networks via MCMC. 100 samples are drawn from the parameter posteriors and the RMSE is calculated, with the mean and standard deviation (in brackets) reported.

Table 2 presents the results for the classification problems on the Iris and Ionosphere datasets. We notice that both models have similar training and test classification performance on the Iris problem, while the Bayesian neural network gives better test results on the Ionosphere problem. The acceptance rate is much higher for the Bayesian neural network, whereas the Bayesian linear model attains an acceptance rate closer to that recommended in the literature.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Method & Problem & Train Accuracy & Test Accuracy & Accept. rate \\
\hline
Bayesian linear model & Iris & 90.392\% (2.832) & 90.844\% (3.039) & 83.5\% \\
Bayesian neural network & Iris & 97.377\% (0.655) & 98.116\% (1.657) & 97.0\% \\
\hline
Bayesian linear model & Ionosphere & 89.060\% (1.335) & 85.316\% (2.390) & 58.8\% \\
Bayesian neural network & Ionosphere & 99.632\% (0.356) & 92.668\% (1.890) & 94.5\% \\
\hline \hline
\end{tabular}
\end{table}

Table 2: Classification accuracy with the Bayesian linear model and Bayesian neural networks via MCMC. We show results for the posterior mean and standard deviation (in brackets).

Figure 10: Bayesian linear regression model - Sunspot dataset

Figure 11: Bayesian neural network model - Sunspot dataset

## 6 Convergence diagnosis

It is important to ensure that the MCMC sampling adequately explores the parameter space and constructs an accurate picture of the posterior distribution. One method of monitoring the performance of the adopted MCMC sampler is to examine convergence diagnostics, which monitor the extent to which the Markov chains have reached a stationary distribution. Practitioners routinely apply the Gelman-Rubin (GR) convergence diagnostic to this end [120]. This diagnostic is computed from multiple MCMC chains, whereby the variance of each chain is assessed independently (within-chain variance) and then compared to the variance between the multiple chains (between-chain variance) for each parameter. Large differences between these two variances indicate that the chains have not converged to the same stationary distribution. For our examples above, we therefore run a number of independent experiments and compare the MCMC chains using this convergence diagnostic.

Figure 12: Distribution of \(\hat{R}\) values for our Bayesian linear model and Bayesian neural network.

This measure of convergence is calculated as the potential scale reduction factor (PSRF or \(\hat{R}\)), where values of the PSRF close to 1 indicate convergence. The PSRF is obtained by first calculating the between-chain variance

\[B=\frac{n}{m-1}\sum_{j=1}^{m}(\bar{\theta}_{j}-\bar{\bar{\theta}})^{2} \tag{41}\]

and the within-chain variance

\[W=\frac{1}{m}\sum_{j=1}^{m}s_{j}^{2} \tag{42}\]

where \(n\) is the number of samples, \(m\) is the number of chains,

\[\bar{\theta}_{j}=\frac{1}{n}\sum_{i=1}^{n}\theta_{ij},\qquad\bar{\bar{\theta}}=\frac{1}{m}\sum_{j=1}^{m}\bar{\theta}_{j},\]

and

\[s_{j}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}\left(\theta_{ij}-\bar{\theta}_{j}\right)^{2}.\]

Given these estimates, we may then estimate the target variance \(\sigma^{2}\) with

\[\hat{\sigma}^{2}=\frac{n-1}{n}W+\frac{B}{n}.\]

What is known about \(\theta\) can then be estimated, and the result is an approximate Student's \(t\) distribution for \(\theta\) with centre \(\bar{\bar{\theta}}\), scale \(\sqrt{\hat{V}}=\sqrt{\hat{\sigma}^{2}+\frac{B}{mn}}\), and degrees of freedom \(d=2\hat{V}/\widehat{\mathrm{var}}(\hat{V})\).
Here

\[\begin{split}\widehat{\mathrm{var}}(\hat{V})&=\left(\frac{n-1}{n}\right)^{2}\frac{1}{m}\widehat{\mathrm{var}}(s_{j}^{2})+\left(\frac{m+1}{mn}\right)^{2}\frac{2}{m-1}B^{2}\\&\quad+2\,\frac{(m+1)(n-1)}{mn^{2}}\cdot\frac{n}{m}\left[\widehat{\mathrm{cov}}(s_{j}^{2},\bar{\theta}_{j}^{2})-2\bar{\bar{\theta}}\,\widehat{\mathrm{cov}}(s_{j}^{2},\bar{\theta}_{j})\right].\end{split}\]

Finally, we calculate the PSRF, given by \(\hat{R}\), as

\[\sqrt{\hat{R}}=\sqrt{\frac{\hat{V}}{W}\,\frac{d}{d-2}} \tag{43}\]

Listing 13 gives further details of the Python implementation11.

Footnote 11: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/tree/main/convergence/convergence-GR.py](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial/tree/main/convergence/convergence-GR.py)
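For intuition, a minimal standalone PSRF sketch for a single parameter is given below; it follows Equations 41 and 42 but, for brevity, omits the \(B/(mn)\) term and the degrees-of-freedom correction, and is independent of Listing 13.

```python
import numpy as np

def gelman_rubin(chains):
    # chains: (m, n) array of m chains with n post-burn-in samples each
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance (Eq. 41)
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance (Eq. 42)
    V_hat = (n - 1) / n * W + B / n
    return np.sqrt(V_hat / W)

rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(5, 5000))))  # close to 1 for well-mixed chains
```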
### Results

We show results of the Gelman-Rubin diagnosis for Bayesian linear regression and Bayesian neural networks using the Sunspot problem. We test five chains for each model, each with 5,000 samples of weights excluding burn-in, and evaluate the \(\hat{R}\) values for each weight. Figure 12 shows the distribution of the \(\hat{R}\) values on a log scale, and we observe that the \(\hat{R}\) values of the weights for the Bayesian linear regression model are all much smaller than those for the Bayesian neural network. We conclude that the Bayesian neural network shows poor convergence based on the Gelman-Rubin diagnostics presented.

By closely examining each weight individually, we observe that the problem of non-convergence mainly arises from the multi-modality of the posterior. In Figure 13, we look at samples from a single chain of 25,000 samples, excluding 50% burn-in. Figures 13a and 13b present a visualisation for a selected weight from the Bayesian linear model, while Figures 13c and 13d present a visualisation for a selected weight from the Bayesian neural network model. We observe potentially multi-modal distributions in both cases, with a high degree of auto-correlation and poor convergence. To examine the impact of longer MCMC chains on convergence, an additional test was run taking 400,000 samples, excluding 20% burn-in and then thinning the chain by a factor of 50. These thinned results are shown for the Iris dataset in Figure 14. We can see that the chains exhibit more desirable properties and (particularly in the case of the linear model, as shown in Figures 14a and 14b) that the convergence diagnostics improved, as expected, once some of the auto-correlation was removed through thinning. We can see in Figures 14c and 14d that the Bayesian neural network still exhibits a multi-modal posterior for this parameter.

Figure 13: Posterior and trace-plot for a parameter in each of the Bayesian linear and Bayesian neural network models - Sunspot data. For this example, 25,000 samples were taken, excluding 50% burn-in, with no thinning.

Figure 14: Posterior and trace-plot for a parameter in each of the Bayesian linear and Bayesian neural network models - Iris data. For this example, 400,000 samples were taken, excluding 20% burn-in, with the chain thinned by a factor of 50.

## 7 Discussion

Overall, this highlights the problem of applying Gelman-Rubin diagnostics to the weights of Bayesian neural networks. We observe that the Bayesian neural network performs better than the Bayesian linear regression in most prediction problems, despite showing no indications of convergence. This is due to the challenge of attaining MCMC convergence in the weights of Bayesian neural networks, which are affected by the problem of multi-modality. Hence, we conclude that for a Bayesian neural network, poor performance in the Gelman-Rubin diagnostics does not necessarily imply poor performance in prediction tasks. We revisit the principle of equifinality [121, 122], which states that in open systems a given end state can be reached by many potential means. In our case, the system is a neural network model, and many solutions exist that represent a trained model displaying some level of performance accuracy. Despite this, we note that the goal of Bayesian models is to offer uncertainty quantification in predictions, and proper convergence is therefore required. The original Gelman-Rubin diagnosis [120] motivated a number of enhancements for different types of problems [123, 124, 125], and we may need to develop better diagnostics for Bayesian neural networks. Nonetheless, in our case, comparing Bayesian logistic regression (which converged) with Bayesian neural networks (which did not converge, but achieved good accuracy), we can state that our Bayesian neural network currently provides a means for uncertainty quantification, but is not yet mature enough to qualify as a robust Bayesian model.

We also revisit the convergence issue in the case of the Bayesian linear model, shown in Panel (a) of Figure 13, which exhibits a multi-modal distribution that has not converged well. In linear models, certain features may contribute little to the decision-making process (predictions), and the weights (coefficients) associated with those features may fail to converge precisely because they are not really contributing. This is similar to the case of neural networks, where certain weight links are not needed and can be pruned.

We note that we attain a much higher acceptance rate in Bayesian neural networks for classification problems (Table 2) than for regression problems (Table 1). A multinomial likelihood is used for the classification problems and a Gaussian likelihood for the regression problems, and both use Langevin-gradient proposal distributions that account for the detailed balance condition. Hence, we need to fine-tune the hyperparameters associated with the proposal distribution to ensure a higher acceptance rate for the regression problems. Although a 23% acceptance rate [119] has been prominently used as a "golden rule", the optimal acceptance rate depends on the nature of the problem; the number of parameters, the Langevin-based proposal distribution, and the type of model all raise questions about the established acceptance rate [126]. Hence, more work needs to be done to establish what acceptance rates are appropriate for simple neural networks and deep learning models.

One way to address the issue of convergence would be to develop an ensemble of linear models that can compete with the accuracy of neural networks or deep learning models. Using ensemble methods such as bagging and boosting, we could take linear models that have attained convergence according to the Gelman-Rubin diagnosis and combine their results via averaging or voting. Furthermore, we note that Langevin-based MCMC is one way forward, but not the only one. A number of works have utilised Hamiltonian MCMC, which provides another approach to incorporating gradients into MCMC methods and can offer advantages over Langevin-based methods; however, comprehensive evaluation studies are needed to check how both methods perform for deep learning models such as CNNs and LSTMs.
Our previous work has shown that, despite the challenges, the combination of Langevin gradients with parallel tempering MCMC [59] presents opportunities for sampling larger neural network architectures [66, 65]. A robust methodology for uncertainty quantification in CNNs would make them more suitable for applications where uncertainty in decision making poses major risks, such as medical image analysis [127] and human security [128]. Recently, CNNs have been considered for modelling temporal sequences and have proven successful for time series classification [129, 130] and time series forecasting problems [131, 132]. Our recent work has shown that one-dimensional CNNs provide better prediction performance than LSTM-based models for multi-step time series prediction problems [133]. Leveraging CNNs within a Bayesian framework can provide better uncertainty quantification in predictions and make them useful for cutting-edge real-world applications.

We envision that this tutorial will enable statisticians and machine learners to utilise MCMC sampling more effectively when developing new models, and also when developing Bayesian frameworks for existing deep learning models. The tutorial has introduced the basic concepts with code and has provided an overview of the challenges involved in the convergence of Bayesian neural networks.

## 8 Code and Data

All code (for the implementation, results, and figures) and data presented in this paper are available in the associated GitHub repository12. This repository presents the implementations in separate Jupyter notebooks in the base directory, with sub-directories containing the data, convenience functions, and details of the environment setup.

Footnote 12: [https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial](https://github.com/sydney-machine-learning/Bayesianneuralnetworks-MCMC-tutorial)
2310.10998
Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications. However, the sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs. Although existing Scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure, these methods still suffer from scalability issues when making inferences on unseen nodes, as the feature preprocessing requires the graph to be known and fixed. To further accelerate Scalable GNNs inference in this inductive setting, we propose an online propagation framework and two novel node-adaptive propagation methods that can customize the optimal propagation depth for each node based on its topological information and thereby avoid redundant feature propagation. The trade-off between accuracy and latency can be flexibly managed through simple hyper-parameters to accommodate various latency constraints. Moreover, to compensate for the inference accuracy loss caused by the potential early termination of propagation, we further propose Inception Distillation to exploit the multi-scale receptive field information within graphs. The rigorous and comprehensive experimental study on public datasets with varying scales and characteristics demonstrates that the proposed inference acceleration framework outperforms existing state-of-the-art graph inference acceleration methods in terms of accuracy and efficiency. Particularly, the superiority of our approach is notable on datasets with larger scales, yielding a 75x inference speedup on the largest Ogbn-products dataset.
Xinyi Gao, Wentao Zhang, Junliang Yu, Yingxia Shao, Quoc Viet Hung Nguyen, Bin Cui, Hongzhi Yin
2023-10-17T05:03:00Z
http://arxiv.org/abs/2310.10998v2
# Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation

###### Abstract

Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications. However, the sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs. Although existing Scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure, these methods still suffer from scalability issues when making inferences on unseen nodes, as the feature preprocessing requires the graph to be known and fixed. To further accelerate Scalable GNN inference in this inductive setting, we propose an online propagation framework and two novel node-adaptive propagation methods that can customize the optimal propagation depth for each node based on its topological information and thereby avoid redundant feature propagation. The trade-off between accuracy and latency can be flexibly managed through simple hyper-parameters to accommodate various latency constraints. Moreover, to compensate for the inference accuracy loss caused by the potential early termination of propagation, we further propose Inception Distillation to exploit the multi-scale receptive field information within graphs. The rigorous and comprehensive experimental study on public datasets with varying scales and characteristics demonstrates that the proposed inference acceleration framework outperforms existing state-of-the-art graph inference acceleration methods in terms of accuracy and efficiency. Particularly, the superiority of our approach is notable on datasets with larger scales, yielding a \(75\times\) inference speedup on the largest Ogbn-products dataset.

Graph condensation, graph neural network, inference acceleration, continual learning, label propagation.

## I Introduction

Developing graph neural networks (GNNs) for very large graphs has drawn increasing attention due to the powerful expressiveness of GNNs and their enormous success in many industrial applications. While GNNs provide a universal framework to tackle various downstream tasks, their implementation on large-scale industrial graphs is impeded by heavy computational demands, which severely limits their application in latency-sensitive scenarios. For example, recommender systems designed for streaming sessions must perform real-time inference on user-item interaction graphs [1, 2, 3, 4]. Fraud and spam detection tasks require millisecond-level inference on million-scale graphs to identify malicious users and prevent the loss of assets for legitimate users [5, 6, 7]. In some computer vision applications, GNNs are designed to process 3D point cloud data and are deployed in automated driving systems to perform object detection or semantic segmentation tasks [8, 9, 10]. In such scenarios, real-time inference is of utmost importance.

The root cause of the heavy computation and high latency in GNNs is known as the _neighbor explosion problem_. GNNs [11, 12, 13, 14] typically adopt the message-passing pipeline and leverage the feature propagation and transformation processes to construct the model, as shown in Figure 1 (a). By executing \(k\) feature propagation steps, the propagated features at depth \(k\) can capture node information from the \(k\)-hop neighborhoods (also known as supporting nodes).
This approach is of particular significance for large-scale and sparsely labeled graphs, where large propagation depths are needed to aggregate enough label information from distant neighbors [15, 16]. However, as the depth of propagation increases, the number of supporting nodes grows exponentially, incurring a substantial computational cost. To mitigate the expensive computation resulting from feature propagation, several Scalable GNNs [17, 18, 19, 20, 21, 22], e.g., SGC, were proposed to remove the non-linear transformation layers between propagation layers and use the propagated features directly for prediction. As shown in Figure 1 (b), the feature propagation of these methods can be pre-computed in the preprocessing procedure and only needs to be executed once. Since feature propagation is not performed during each training epoch, the training time complexity of Scalable GNNs is significantly lower than that of other GNNs [24, 25], and the training of these models scales well with graph size (Figure 1 (c)). Due to their notable advantage in training speed, Scalable GNNs have gained considerable popularity in large-scale graph processing tasks. This is evident from the widespread adoption of Scalable GNNs as the primary choice in numerous methods participating in the Open Graph Benchmark challenge [26].

However, Scalable GNNs still struggle to perform efficient inference on unseen nodes, because the preprocessing of feature propagation is based on the transductive premise (i.e., the graph is known and fixed). In the majority of practical situations, inference encompasses unseen nodes, requiring online execution of feature propagation [27, 12], as depicted in Figure 1 (d). This condition significantly hinders real-world applications. In addition, existing methods adopt a fixed propagation depth for all nodes, which restricts the flexibility of exploiting multi-scale features and tends to over-smooth the high-degree nodes [28], thereby leading to redundant computation and performance degradation.

In this paper, we aim to reduce the redundant computation of feature propagation to further accelerate the inference of Scalable GNNs in the inductive setting. To this end, we propose a general online propagation framework, Node-Adaptive Inference (NAI), which introduces a _personalized propagation depth_ to determine the optimal propagation depth for inference. Specifically, we provide two different node-adaptive propagation approaches within the NAI framework, which evaluate the smoothing status of the propagated features in an explicit and an implicit manner, respectively, and can thus terminate needless propagation in time. In particular, the trade-off between inference latency and accuracy can be flexibly managed by tuning simple global hyper-parameters, which adaptively adjust the propagation depth for each node. This provides a variety of inference options for users with different latency constraints. Moreover, taking inspiration from the Inception Network [29], a cornerstone work in convolutional neural networks, we design a novel Inception Distillation method in NAI to exploit the multi-scale receptive field information and mitigate the performance degradation caused by the adaptive propagation. With a more powerful supervision signal, NAI can accelerate the inference speed with a negligible performance drop.
The main contributions of this paper are summarized as follows:

* **New Perspective.** We focus on graph-based inductive inference, where even the prevalent Scalable GNNs struggle with the heavy online computation of feature propagation.
* **New Methodology.** We propose an online propagation framework for Scalable GNNs and two novel node-adaptive propagation approaches that generate the personalized propagation depth for each node, thereby avoiding the redundant computation of feature propagation and mitigating the over-smoothing problem. To compensate for the potential inference accuracy loss, we further propose Inception Distillation to exploit the multi-scale receptive field information and improve the accuracy.
* **SOTA Performance.** Extensive experiments are conducted on three public datasets with different scales and characteristics, and the experimental results show that our proposed efficient inference framework NAI outperforms the state-of-the-art (SOTA) graph inference acceleration baselines in terms of both accuracy and efficiency. In particular, the advantage of NAI is more significant on larger-scale datasets, and NAI achieves a \(75\times\) inference speedup on the largest Ogbn-products dataset.

### _Problem Formulation_

Given a graph \(\mathcal{G}\) = (\(\mathcal{V}\), \(\mathcal{E}\)) with \(|\mathcal{V}|=n\) nodes and \(|\mathcal{E}|=m\) edges, its node adjacency matrix and degree matrix are denoted as \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{D}=\mathrm{diag}(d_{1},d_{2},...,d_{n})\), where \(d_{i}=\sum_{v_{j}\in\mathcal{V}}\mathbf{A}_{i,j}\) is the degree of node \(v_{i}\in\mathcal{V}\). The adjacency matrix and degree matrix with self-loops are denoted as \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{D}}\). The node feature matrix is \(\mathbf{X}=\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},...,\boldsymbol{x}_{n}\}\), in which \(\boldsymbol{x}_{i}\in\mathbb{R}^{f}\) represents the node attribute vector of \(v_{i}\), and \(\mathbf{Y}=\{\boldsymbol{y}_{1},\boldsymbol{y}_{2},...,\boldsymbol{y}_{l}\}\) is the one-hot label matrix for the node classification task. In the inductive setting, the entire node set \(\mathcal{V}\) is partitioned into the training set \(\mathcal{V}_{train}\) (including the labeled set \(\mathcal{V}_{l}\) and the unlabeled set \(\mathcal{V}_{u}\)) and the test set \(\mathcal{V}_{test}\). GNNs are trained on \(\mathcal{G}_{train}\), which only includes \(\mathcal{V}_{train}\) and all edges connected to \(v\in\mathcal{V}_{train}\). The evaluation tests the performance of the trained GNNs on \(\mathcal{V}_{test}\) in graph \(\mathcal{G}\).

### _Graph Neural Networks_

GNNs aim to learn node representations by using topological information and node attributes. Existing GNNs adopt the message-passing pipeline and construct models utilizing two processes: feature propagation and transformation. By stacking multiple layers, the \(k\)-th layer feature matrix \(\mathbf{X}^{(k)}\) can be formulated as:

\[\mathbf{X}^{(k)}=\delta\left(\hat{\mathbf{A}}\mathbf{X}^{(k-1)}\mathbf{W}^{(k)}\right), \tag{1}\]
\[\hat{\mathbf{A}}=\widetilde{\mathbf{D}}^{\gamma-1}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\gamma},\]
where \(\mathbf{W}^{(k)}\) is the trainable weight matrix at layer \(k\) and \(\delta\left(\cdot\right)\) is the activation function, such as the ReLU function. \(\widetilde{\mathbf{D}}\) is the diagonal node degree matrix used to normalize \(\widetilde{\mathbf{A}}\). \(\mathbf{X}^{(k-1)}\) is the input of the \(k\)-th layer and \(\mathbf{X}^{(0)}=\mathbf{X}\). In each layer, \(\widetilde{\mathbf{A}}\) propagates the information among neighbors, and \(\mathbf{W}^{(k)}\) transforms the propagated features. By executing \(k\) feature propagation processes, the propagated features at depth \(k\) capture the node information from \(k\)-hop neighborhoods. However, as the depth of propagation increases, the number of supporting nodes grows exponentially, incurring a substantial computational cost. Note that \(\gamma\in[0,1]\) is the convolution coefficient, which generalizes Eq. (1) to various existing models. By setting \(\gamma=1\), 0.5 and 0, the convolution matrix \(\hat{\mathbf{A}}\) represents the transition probability matrix \(\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-1}\) [12, 27, 30], the symmetric normalization adjacency matrix \(\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}\) [11, 31] and the reverse transition probability matrix \(\widetilde{\mathbf{D}}^{-1}\widetilde{\mathbf{A}}\) [32], respectively. Employing the propagated feature \(\mathbf{X}^{(k)}\), the node classification task is accomplished by applying the softmax function and optimizing the cross-entropy loss over all labeled nodes.

Fig. 1: The comparison of GNN and Scalable GNN in the training and inference procedures. (a) The training and transductive/inductive inference procedure of GNNs, such as GCN [11]. The propagation and transformation processes are integrated as a single GNN layer. (b) The feature preprocessing approach of Scalable GNNs, illustrated by SGC [17]. The transformation processes are removed, and non-parametric propagation processes are conducted as the preprocessing. (c) The training/transductive inference procedure of Scalable GNNs. The classifier training or transductive inference is based on preprocessed features directly. (d) The inductive inference procedure of Scalable GNNs, where propagation has to be executed online.

### _Scalable Graph Neural Networks_

Although GNNs achieve excellent performance by executing multiple feature propagation and transformation processes, it was found that the aggregation of neighbor features (i.e., feature propagation) makes the major contribution to the performance of GNNs [17]. Based on this finding, to improve the scalability of GNNs, SGC [17] was proposed to decouple the two processes and remove the feature transformations in the middle layers. It propagates the node features \(k\) times as:

\[\mathbf{X}^{(k)}=\hat{\mathbf{A}}^{k}\mathbf{X}, \tag{2}\]

where \(\hat{\mathbf{A}}\) is defined as in Eq. (1). Then, \(\mathbf{X}^{(k)}\), the propagated feature at depth \(k\), is fed to a linear model for classification. Benefiting from the linear propagation of Eq. (2), SGC removes the trainable parameters between propagation steps and allows for offline precomputation of the feature matrix, as shown in Figure 1 (b). As a result, SGC effectively reduces training time by avoiding the computationally intensive feature propagation process in each training epoch, since the feature propagation is performed only once before the classifier training. Following SGC, more powerful Scalable GNNs were designed by exploring the precomputed features \(\{\mathbf{X}^{(0)},\mathbf{X}^{(1)},...,\mathbf{X}^{(k)}\}\) in Eq. (2), with \(\mathbf{X}^{(0)}\) representing \(\mathbf{X}\) for consistent notation.
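To make this preprocessing step concrete, the following is a minimal sketch of Eqs. (1)-(2) using SciPy sparse matrices. All names are illustrative assumptions of ours, not taken from any released codebase:

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(adj: sp.spmatrix, gamma: float = 0.5) -> sp.spmatrix:
    """Builds A_hat = D_tilde^(gamma-1) A_tilde D_tilde^(-gamma) from Eq. (1)."""
    adj_sl = (adj + sp.eye(adj.shape[0])).tocsr()    # add self-loops: A_tilde
    deg = np.asarray(adj_sl.sum(axis=1)).ravel()     # diagonal of D_tilde
    d_left = sp.diags(np.power(deg, gamma - 1.0))
    d_right = sp.diags(np.power(deg, -gamma))
    return d_left @ adj_sl @ d_right

def precompute_features(adj: sp.spmatrix, x: np.ndarray, k: int, gamma: float = 0.5):
    """Offline propagation of Eq. (2): returns the list [X^(0), X^(1), ..., X^(k)]."""
    a_hat = normalized_adjacency(adj, gamma)
    feats = [x]
    for _ in range(k):
        feats.append(a_hat @ feats[-1])              # X^(l) = A_hat X^(l-1)
    return feats
```

For SGC itself, only `feats[-1]` would be kept and fed to a linear classifier; the multi-depth variants below consume the whole list.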
For example, SIGN [18] proposes to transform the propagated features at different depths by linear transformations and then concatenate them to enhance the feature representation. The prediction objective can be represented as:

\[\mathbf{X}^{(k)}_{SIGN}=\mathbf{X}^{(0)}\mathbf{W}^{(0)}\,||\,\mathbf{X}^{(1)}\mathbf{W}^{(1)}\,||\,...\,||\,\mathbf{X}^{(k)}\mathbf{W}^{(k)}, \tag{3}\]

where \(||\) denotes the concatenation operation and \(\mathbf{W}^{(l)}\) are transformation matrices. \(\mathrm{S}^{2}\mathrm{GC}\) [22] averages the propagated features at different depths to construct a simple spectral graph convolution:

\[\mathbf{X}^{(k)}_{S^{2}GC}=\frac{1}{k}\sum_{l=0}^{k}\mathbf{X}^{(l)}. \tag{4}\]

GAMLP [23] combines propagated features at different depths by measuring the feature information gain and constructing a node-wise attention mechanism:

\[\mathbf{X}^{(k)}_{GAMLP}=\sum_{l=0}^{k}T^{(l)}\mathbf{X}^{(l)}, \tag{5}\]

where \(T^{(l)}\) are diagonal node-wise attention matrices. The propagated features \(\{\mathbf{X}^{(0)},\mathbf{X}^{(1)},...,\mathbf{X}^{(k)}\}\) used in these methods are all pre-computed in advance, successfully speeding up the training procedure and the transductive inference procedure. However, these Scalable GNNs still suffer from scalability issues when making inferences on unseen nodes that are not encountered during model training. In such an inductive inference scenario (Figure 1 (d)), the feature propagation for unseen nodes has to be executed online, and this time-consuming process directly leads to high inference latency.

## II Methodology

To mitigate the scalability problem of **Scalable GNNs** in the **inductive** inference setting, we propose an online propagation framework, i.e., Node-Adaptive Inference (NAI), as shown in Figure 2. In the inductive inference procedure, the personalized propagation depth is generated online for each test node by referring to its stationary state, and the nodes with low personalized propagation depths are inferred in advance during the propagation procedure. Within the framework, two optional modules (Distance/Gate-based Node-Adaptive Propagation) are designed to compare the propagated features with the stationary states and customize the personalized propagation depth. In the training procedure of the multiple classifiers, NAI leverages Inception Distillation to enhance the classifiers at lower depths. It constructs a more powerful teacher to capture multi-scale information from different-sized receptive fields and updates both teacher and students simultaneously, targeting higher prediction accuracy. In the following sections, we first present the two propagation modules and the inference algorithm in Section II-A. Subsequently, we discuss the computational complexity of inference in Section II-B. Finally, we delve into the details of training the classifiers for each depth and introduce Inception Distillation in Section II-C.

### _Node-Adaptive Propagation_

The key component of the NAI framework is generating the personalized propagation depth in the inference procedure. To this end, we introduce two optional Node-Adaptive Propagation (NAP) modules that refer to the stationary feature state; NAP is plugged directly into the propagation process to control the propagation depth for each node. Specifically, Scalable GNNs propagate the information within \(k\)-hop neighborhoods by multiplying the \(k\)-th order normalized adjacency matrix by the feature matrix as Eq. (2).
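For reference, the depth-wise combination rules of Eqs. (3)-(5) amount to only a few lines on top of the precomputed feature list from Eq. (2). The sketch below is a hedged illustration of ours; GAMLP's learned node-wise attention weights are omitted for brevity:

```python
import numpy as np

def sign_combine(feats, weights):
    """Eq. (3): transform each depth separately, then concatenate along the feature axis."""
    return np.concatenate([x @ w for x, w in zip(feats, weights)], axis=1)

def s2gc_combine(feats):
    """Eq. (4): (1/k) * sum of X^(0), ..., X^(k), with k = len(feats) - 1."""
    k = max(len(feats) - 1, 1)
    return sum(feats) / k
```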
Repeatedly multiplying by \(\hat{\mathbf{A}}\) in Eq. (2) gradually smooths each node's feature with its neighbors' features, and with the growth of \(k\), the propagated node features within the same connected component will reach a stationary state [28]. When \(k\rightarrow\infty\), the features are propagated infinitely many times and reach the stationary feature state \(\mathbf{X}^{(\infty)}\), which can be calculated as:

\[\mathbf{X}^{(\infty)}=\hat{\mathbf{A}}^{(\infty)}\mathbf{X}, \tag{6}\]

where \(\hat{\mathbf{A}}^{(\infty)}\) is the adjacency matrix propagated an infinite number of times. The element of \(\hat{\mathbf{A}}^{(\infty)}\) is calculated as:

\[\hat{\mathbf{A}}^{(\infty)}_{i,j}=\frac{\left(d_{i}+1\right)^{\gamma}\left(d_{j}+1\right)^{1-\gamma}}{2m+n}, \tag{7}\]

where \(\hat{\mathbf{A}}^{(\infty)}_{i,j}\) is the weight between nodes \(v_{i}\) and \(v_{j}\), i.e., the element in the \(i\)-th row and \(j\)-th column of \(\hat{\mathbf{A}}^{(\infty)}\). \(d_{i}\) and \(d_{j}\) are the node degrees of \(v_{i}\) and \(v_{j}\), \(m\) and \(n\) are the numbers of edges and nodes, and \(\gamma\) is the convolution coefficient in Eq. (1). Since \(\hat{\mathbf{A}}^{(\infty)}_{i,j}\) is only related to the degrees of the source node \(v_{i}\) and the target node \(v_{j}\), topological information is lost after infinitely many propagation steps and the final features are over-smoothed. For example, when \(\gamma=0\), \(\hat{\mathbf{A}}^{(\infty)}_{i,j}\) is determined only by the degree of the target node \(v_{j}\), and target nodes with equal degrees have the same weight value. Considering the different degree distributions of the nodes, the smoothness of the node features varies at the same propagation depth and should be evaluated to avoid the over-smoothing problem. Moreover, node features that already possess proper smoothness can be classified in advance to reduce redundant computation. To this end, we propose two different NAP approaches that generate the personalized propagation depth in an explicit and an implicit manner, respectively: Distance-based NAP (\(\mathrm{NAP_{d}}\)) and Gate-based NAP (\(\mathrm{NAP_{g}}\)).

#### II-A1 **Distance-based NAP**

\(\mathrm{NAP_{d}}\) uses the distance between the propagated feature at depth \(l\) (i.e., \(\mathbf{X}^{(l)}_{i}\)) and the stationary feature (i.e., \(\mathbf{X}^{(\infty)}_{i}\)) to measure the feature smoothness of node \(v_{i}\) explicitly, and the distance \(\Delta^{(l)}_{i}\) is defined as Eq. (8):

\[\Delta^{(l)}_{i}=\left\|\mathbf{X}^{(l)}_{i}-\mathbf{X}^{(\infty)}_{i}\right\|, \tag{8}\]

where \(\left\|\cdot\right\|\) denotes the \(l_{2}\) norm. A small distance indicates a strong smoothing effect and a higher risk of over-smoothing. To control the smoothness and generate a proper propagation depth for inference, we introduce a global hyper-parameter \(T_{s}\) as a threshold for all distances in the graph. The personalized propagation depth \(L(v_{i},T_{s})\) for node \(v_{i}\) is:

\[L(v_{i},T_{s})=\operatorname*{arg\,min}_{l}(\Delta^{(l)}_{i}<T_{s}). \tag{9}\]

The personalized propagation depth satisfies the following upper bound [28]:

\[L(v_{i},T_{s})\leq\min\Big\{\log_{\lambda_{2}}(T_{s}\sqrt{\frac{d_{i}+1}{2m+n}}),\max\{L(v_{j},T_{s}),v_{j}\in N_{v_{i}}\}+1\Big\}, \tag{10}\]

where \(\lambda_{2}\leq 1\) is the second largest eigenvalue of \(\hat{\mathbf{A}}\) and \(N_{v_{i}}\) is the neighbor node set of \(v_{i}\).
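Before interpreting this bound, Eqs. (6)-(9) can be condensed into the following NumPy sketch of \(\mathrm{NAP_{d}}\). The names are illustrative; note that the rank-1 structure of Eq. (7) means the dense \(n\times n\) matrix never needs to be materialized:

```python
import numpy as np
import scipy.sparse as sp

def stationary_features(adj: sp.spmatrix, x: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Eqs. (6)-(7): X^(inf) = A_hat^(inf) X, determined by node degrees only."""
    n = adj.shape[0]
    m = adj.nnz // 2                           # assumes an undirected graph without self-loops
    deg = np.asarray(adj.sum(axis=1)).ravel()
    left = np.power(deg + 1.0, gamma)          # (d_i + 1)^gamma
    right = np.power(deg + 1.0, 1.0 - gamma)   # (d_j + 1)^(1 - gamma)
    # A_inf = outer(left, right) / (2m + n), so A_inf @ X collapses to an outer product.
    return np.outer(left, right @ x) / (2 * m + n)

def personalized_depths(feats, x_inf: np.ndarray, t_s: float) -> np.ndarray:
    """Eqs. (8)-(9): the first depth l whose l2 distance to X^(inf) drops below T_s."""
    depths = np.full(x_inf.shape[0], len(feats) - 1)   # default: maximum depth
    assigned = np.zeros(x_inf.shape[0], dtype=bool)
    for l, x_l in enumerate(feats[1:], start=1):
        dist = np.linalg.norm(x_l - x_inf, axis=1)     # Delta_i^(l) in Eq. (8)
        hit = (~assigned) & (dist < t_s)
        depths[hit] = l
        assigned |= hit
    return depths
```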
Interpreting the bound in Eq. (10), the first term shows that the personalized propagation depth of \(v_{i}\) is positively correlated with the scale of the graph (i.e., the number of edges \(m\) and the number of nodes \(n\)) and the sparsity of the graph (a small \(\lambda_{2}\) means strong connection and low sparsity, and vice versa), and negatively correlated with its node degree \(d_{i}\). Moreover, the second term indicates that the difference between two neighboring nodes' personalized propagation depths is no more than 1, and the neighbors of high-degree nodes will also have a low personalized propagation depth. In summary, nodes located in sparse local areas and possessing smaller degrees should have higher personalized propagation depths, and vice versa. By referring to the distance explicitly, \(\mathrm{NAP}_{\mathrm{d}}\) takes both the node feature and topological information into account and can generate the personalized propagation depth during the propagation process. With the personalized propagation depth \(L(v_{i},T_{s})\), the propagated feature \(\mathbf{X}_{i}^{(L(v_{i},T_{s}))}\) will be predicted by classifier \(f^{(L(v_{i},T_{s}))}\) (refer to Section II-C for details) in advance.

Fig. 2: The inference procedure and classifier training procedure for NAI. In the inference procedure (left), the propagation depth is adaptively controlled by gates or by comparing the propagated feature with the stationary state. The propagated features with low propagation depth are predicted by the corresponding classifier in advance. The training procedure (right) includes feature propagation, base classifier training, and Inception Distillation. The classifier \(f^{(k)}\) is initially trained by cross-entropy (CE) loss and set as the teacher for Single-Scale Distillation. Subsequently, enhanced classifiers at higher depths are employed to construct the ensemble teacher, which serves to distill multi-scale receptive field information into lower-depth classifiers.

#### II-A2 **Gate-based NAP**

\(\mathrm{NAP}_{\mathrm{g}}\) leverages a series of gates at different depths to control the propagation process and dynamically decide whether the propagation of each node should be stopped. Nonetheless, the gate structure and the training of a series of gates encounter several challenges:

* The propagated feature should be compared with the stationary feature in each gate.
* Each node should possess a distinct and suitable propagation depth, signifying that it should be chosen by a single gate exclusively.
* Gates at varying depths exhibit dependency, as the decisions made by higher-depth gates are influenced by preceding ones. As a result, gates should be trained concurrently to effectively learn this dependency.
* Gates must be designed to be lightweight, guaranteeing that no heavy computation operations are incorporated.

Considering the above challenges, we design gates for Scalable GNNs and simultaneously train these gates end-to-end, as shown in Figure 3, by leveraging the well-trained classifiers (refer to Section II-C for details). Here we take the propagation depth \(l\) as an example. During the gate training procedure, the gate \(g^{(l)}\) at depth \(l\) takes the propagated feature \(\mathbf{X}_{i}^{(l)}\) and \(\hat{\mathbf{X}}_{i}^{(l)}\) from the previous depth as inputs, subsequently generating a one-hot mask \(\mathbf{m}_{i}^{(l)}\) as output according to Eq. (11).
The input of the first gate \(g^{(1)}\) is initialized with the stationary feature, i.e., \(\hat{\mathbf{X}}_{i}^{(1)}=\mathbf{X}_{i}^{(\infty)}\).

\[\begin{split}&\mathbb{X}_{i}^{(l)}=\mathbf{X}_{i}^{(l)}\,||\,\hat{\mathbf{X}}_{i}^{(l)},\\ &\mathbf{e}_{i}^{(l)}=\mathrm{softmax}\left(\mathbb{X}_{i}^{(l)}\mathbf{W}^{(l)}\right),\\ &\mathbf{m}_{i}^{(l)}=\mathrm{GS}\left(\mathbf{e}_{i}^{(l)}-\mathbf{\Theta}_{i}^{(l)}\right),\end{split} \tag{11}\]

where \(1\leq l<k\), \(\mathbf{W}^{(l)}\in\mathbb{R}^{2f\times 2}\) is the trainable weight matrix, and \(\mathbf{e}_{i}^{(l)}\in\mathbb{R}^{2}\) is the probability vector indicating the preference between \(\mathbf{X}_{i}^{(l)}\) and \(\hat{\mathbf{X}}_{i}^{(l)}\). \(\mathrm{GS}\) is the Gumbel softmax [33], which is used to generate the two-dimensional one-hot mask \(\mathbf{m}_{i}^{(l)}=[m_{i,1}^{(l)},m_{i,2}^{(l)}]\). To prevent a node from being chosen by multiple gates, or all nodes from employing the propagated feature at the highest depth, we introduce the penalty term \(\mathbf{\Theta}_{i}^{(l)}\). This term ensures that each node is selected by the gates only once. Specifically, \(\mathbf{\Theta}_{i}^{(l)}=[\theta_{i,1}^{(l)},0]\) is generated based on the masks from previous depths and operates exclusively along the first dimension, with \(\theta_{i,1}^{(l)}=\sum_{j=1}^{l-1}\mu\,\mathrm{sigmoid}(\varphi(m_{i,1}^{(j)}-0.5))\) in our implementation1. For \(1\leq j\leq l-1\), if \(m_{i,1}^{(j)}>0.5\) exists, the propagated feature of node \(v_{i}\) has already been selected at a previous depth \(j\) and a large penalty will be added to \(\theta_{i,1}^{(l)}\). Consequently, the mask will maintain \(m_{i,1}^{(l)}<m_{i,2}^{(l)}\) at all higher depths. Conversely, if this condition is not met, \(\theta_{i,1}^{(l)}\) will remain at a low value, approximating zero.

Footnote 1: \(\mu\) and \(\varphi\) are large and arbitrary constants. In our experiments, we set both \(\varphi\) and \(\mu\) equal to 1000.

With the mask \(\mathbf{m}_{i}^{(l)}\), the gate input at depth \(l+1\) is constructed as Eq. (12):

\[\hat{\mathbf{X}}_{i}^{(l+1)}=\mathbf{m}_{i}^{(l)}[\mathbf{X}_{i}^{(l)};\hat{\mathbf{X}}_{i}^{(l)}]. \tag{12}\]

* If \(m_{i,1}^{(l)}>m_{i,2}^{(l)}\), \(\mathbf{X}_{i}^{(l)}\) will be selected by the mask and used as the input of the next-depth gate. Owing to the penalty term, the masks for \(v_{i}\) at all higher depths will keep the first entry smaller than the second, which enables the propagated feature at depth \(l\) to be the one finally predicted.
* If \(m_{i,1}^{(l)}<m_{i,2}^{(l)}\), \(\hat{\mathbf{X}}_{i}^{(l)}\) will be chosen by the mask to retain the same value from previous depths. Notice that \(\mathbf{X}_{i}^{(\infty)}\) is fed into the first gate as \(\hat{\mathbf{X}}_{i}^{(1)}\), and the value of \(\hat{\mathbf{X}}_{i}^{(l)}\) will be preserved as \(\mathbf{X}_{i}^{(\infty)}\) if the node has never been selected by any previous gate. Ultimately, if \(\hat{\mathbf{X}}_{i}^{(k)}=\mathbf{X}_{i}^{(\infty)}\), \(\hat{\mathbf{X}}_{i}^{(k)}\) will be replaced with \(\mathbf{X}_{i}^{(k)}\).

Nodes selected by different gates will be predicted by the classifiers at their corresponding depths. Employing the well-trained classifiers and the cross-entropy loss, the gates at various depths are trained concurrently, while the classifier parameters remain unaltered during the gate training process. During the inference procedure, gates dynamically generate discrete masks according to the propagated feature and the stationary feature.
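A minimal PyTorch sketch of one gate, following Eqs. (11)-(12) and footnote 1, might look as follows. The module name and the penalty bookkeeping are illustrative assumptions of ours, not the authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PropagationGate(nn.Module):
    """One gate g^(l): decides between X^(l) and the carried-over X_hat^(l)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.w = nn.Linear(2 * feat_dim, 2, bias=False)   # W^(l) in Eq. (11)

    def forward(self, x_l, x_hat, theta):
        e = torch.softmax(self.w(torch.cat([x_l, x_hat], dim=1)), dim=1)
        # Eq. (11): Gumbel softmax over (e - Theta) yields a one-hot mask m.
        m = F.gumbel_softmax(e - theta, hard=True, dim=1)
        # Eq. (12): the mask selects the feature carried to the next gate.
        x_next = m[:, :1] * x_l + m[:, 1:] * x_hat
        # Footnote 1: once m[:, 0] = 1, theta grows so the node cannot be
        # selected again at any higher depth (mu = phi = 1000).
        theta = theta + torch.stack(
            [1000.0 * torch.sigmoid(1000.0 * (m[:, 0] - 0.5)),
             torch.zeros_like(m[:, 0])], dim=1)
        return m, x_next, theta
```

The straight-through Gumbel softmax keeps the mask discrete in the forward pass while letting gradients flow to \(\mathbf{W}^{(l)}\) during gate training.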
The personalized propagation depth for node \(v_{i}\) is:

\[L(v_{i})=\operatorname*{arg\,min}_{l}(m_{i,1}^{(l)}=1). \tag{13}\]

Given the personalized propagation depth \(L(v_{i})\), the propagated feature \(\mathbf{X}_{i}^{(L(v_{i}))}\) will be predicted in advance by classifier \(f^{(L(v_{i}))}\).

#### II-A3 **Inference Algorithm**

To unify both \(\mathrm{NAP}_{\mathrm{d}}\) and \(\mathrm{NAP}_{\mathrm{g}}\) into our proposed NAI framework and adapt them to different latency constraints and application scenarios, we introduce two more global hyper-parameters in the inference algorithm, i.e., \(T_{min}\) and \(T_{max}\), which indicate the minimum and the maximum propagation depth, respectively. The inference procedure is shown in Algorithm 1. In line 2, \(\mathbf{X}^{(\infty)}\) for batch \(\mathcal{V}_{b}\) is calculated over the entire graph by Eq. (6). In line 3, the supporting nodes are derived according to \(\mathcal{V}_{b}\) and \(T_{max}\), where \(1\leq T_{max}\leq k\). Then, the node features are propagated \(T_{min}\) times, where \(1\leq T_{min}\leq T_{max}\) (line 5). After \(T_{min}\) propagations, the features are compared with \(\mathbf{X}^{(\infty)}\) (or fed into \(g^{(l)}\) together with \(\mathbf{X}^{(\infty)}\)) and inferred by the classifier if the distances are smaller than \(T_{s}\) (or the mask is \([1,0]\)) (lines 9-12). When \(l=T_{max}\), all remaining nodes are classified by \(f^{(T_{max})}\) and the prediction results for \(\mathcal{V}_{b}\) are output (lines 17-18). Once the model is deployed on the device, users can choose, via the validation set, the hyper-parameters that align with their latency requirements and provide the highest validation accuracy for the inference process.

### _Computational Complexity Analysis_

Table I compares the inference computational complexity of four Scalable GNNs and their complexity after deploying NAI in the inductive setting. All computations include feature processing and classification, and we show the basic version of GAMLP, which utilizes the attention mechanism in feature propagation.

\begin{table} \begin{tabular}{l|l l l l} \hline & SGC & SIGN & S\({}^{2}\)GC & GAMLP \\ \hline Vanilla & \(\mathcal{O}(kmf+nf^{2})\) & \(\mathcal{O}(kmf+kPnf^{2})\) & \(\mathcal{O}(kmf+knf+nf^{2})\) & \(\mathcal{O}(kmf+Pnf^{2})\) \\ NAI & \(\mathcal{O}(qmf+nf^{2}+n^{2}f)\) & \(\mathcal{O}(qmf+qPnf^{2}+n^{2}f)\) & \(\mathcal{O}(qmf+qnf+nf^{2}+n^{2}f)\) & \(\mathcal{O}(qmf+Pnf^{2}+n^{2}f)\) \\ \hline \end{tabular} \end{table} TABLE I: The inference computational complexities of Scalable GNNs in the inductive setting. \(n\), \(m\) and \(f\) are the numbers of nodes, edges, and feature dimensions, respectively. \(k\) denotes the propagation depth and \(P\) is the number of layers in the classifiers. \(q\) is the averaged propagation depth when adopting NAI.

NAI reduces the computation of feature propagation by decreasing the propagation depth \(k\). Supposing \(q\) is the average propagation depth over all nodes when adopting NAI, the complexity of feature propagation in SGC decreases to \(\mathcal{O}(qmf)\). This means that NAI achieves stronger acceleration effects for graphs with many edges and high feature dimensions under the same \(q\). The classification complexity is \(\mathcal{O}(nf^{2})\), which is the same as vanilla SGC. The additional computational complexity for the stationary state and the distance calculation (or gate) is \(\mathcal{O}(n^{2}f)\) and \(\mathcal{O}(qnf)\), respectively. Similar results can be observed for \(\mathrm{S}^{2}\mathrm{GC}\) and GAMLP. SIGN concatenates propagated features at different depths before the classification procedure, leading to an increased feature dimension. As a result, its classification computation also decreases, from \(\mathcal{O}(kPnf^{2})\) to \(\mathcal{O}(qPnf^{2})\), when applying NAI to SIGN.
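The batched inference loop of Algorithm 1 for the distance-based variant can be condensed into the following sketch. Here `batch_feats` and `classifiers` are assumed callables of ours (online propagation over the sampled supporting nodes, and the per-depth classifiers \(f^{(l)}\)), not names from the paper:

```python
import numpy as np

def nai_inference(batch_feats, x_inf, classifiers, t_min, t_max, t_s):
    """Condensed sketch of Algorithm 1 with distance-based NAP."""
    preds = {}
    remaining = np.arange(x_inf.shape[0])
    x_l = batch_feats(t_min)                 # propagate T_min times (line 5)
    for l in range(t_min, t_max):
        dist = np.linalg.norm(x_l[remaining] - x_inf[remaining], axis=1)
        exit_now = dist < t_s                # smooth enough: classify early
        for i in remaining[exit_now]:        # lines 9-12
            preds[i] = classifiers[l](x_l[i])
        remaining = remaining[~exit_now]
        if remaining.size == 0:
            return preds
        x_l = batch_feats(l + 1)             # one more propagation step
    for i in remaining:                      # lines 17-18: leftovers use f^(T_max)
        preds[i] = classifiers[t_max](x_l[i])
    return preds
```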
### _Enhance Classifiers by Inception Distillation_

Although the NAP module generates the personalized propagation depth according to topological information and reduces the computational redundancies of feature propagation, plain classifiers restrict the classification performance. To compensate for the potential inference accuracy loss, we utilize multiple classifiers for the propagated features at different depths and further propose Inception Distillation to exploit the multi-scale receptive field information. The well-trained classifiers can be used directly in Algorithm 1. Inception Distillation includes two stages: Single-Scale and Multi-Scale Distillation.

Fig. 3: The end-to-end training procedure of gates for Scalable GNNs with the highest depth \(k\). The propagated features are compared with the stationary feature in each gate, and the decisions of gates at higher depths are influenced by previous gates. The propagated features are predicted by the corresponding classifiers, and the gates are optimized by using the cross-entropy (CE) loss.

#### II-C1 **Single-Scale Distillation**

For a Scalable GNN with the highest propagation depth \(k\), the classifier \(f^{(k)}\) is first trained with \(\mathbf{X}^{(k)}\) using the cross-entropy loss and is designated as the teacher. The outputs of the students \(f^{(l)}\) (\(1\leq l<k\)) and the teacher \(f^{(k)}\) are processed for knowledge distillation as Eq. (14):

\[\begin{split}\mathbf{z}_{i}^{(l)}&=f^{(l)}(\mathbf{X}_{i}^{(l)}),\\ \mathbf{z}_{i}^{(k)}&=f^{(k)}(\mathbf{X}_{i}^{(k)}),\\ \tilde{\mathbf{p}}_{i}^{(l)}&=\mathrm{softmax}(\mathbf{z}_{i}^{(l)}/T),\\ \tilde{\mathbf{p}}_{i}^{(k)}&=\mathrm{softmax}(\mathbf{z}_{i}^{(k)}/T),\end{split} \tag{14}\]

where \(T\) is the temperature, which controls how much to rely on the teacher's soft predictions [34]. Then, the knowledge of \(f^{(k)}\) is distilled to each student classifier \(f^{(l)}\) at lower depths separately as Eq. (15). We follow conventional knowledge distillation [34] and penalize the cross-entropy loss between the student's softmax outputs and the teacher's softmax outputs:

\[\mathcal{L}_{d}^{(l)}=\frac{1}{|\mathcal{V}_{train}|}\sum_{v_{i}\in\mathcal{V}_{train}}\ell(\tilde{\mathbf{p}}_{i}^{(l)},\tilde{\mathbf{p}}_{i}^{(k)}), \tag{15}\]

where \(\ell(\cdot,\cdot)\) is the cross-entropy loss, \(\mathcal{V}_{train}\) is the training set, and \(|\cdot|\) denotes the set size. Besides \(\mathcal{L}_{d}^{(l)}\), the node labels provide another supervision signal for the students:

\[\begin{split}\mathcal{L}_{c}^{(l)}&=\frac{1}{|\mathcal{V}_{l}|}\sum_{v_{i}\in\mathcal{V}_{l}}\ell(\tilde{\mathbf{y}}_{i}^{(l)},\mathbf{y}_{i}),\\ \tilde{\mathbf{y}}_{i}^{(l)}&=\mathrm{softmax}(\mathbf{z}_{i}^{(l)}),\end{split} \tag{16}\]

where \(\mathbf{z}_{i}^{(l)}\) is derived from Eq. (14), \(\mathbf{y}_{i}\) is the one-hot label, and \(\mathcal{V}_{l}\) is the labeled set. Finally, the Single-Scale Distillation loss \(\mathcal{L}_{single}^{(l)}\) is constructed by jointly optimizing \(\mathcal{L}_{c}^{(l)}\) and \(\mathcal{L}_{d}^{(l)}\):

\[\mathcal{L}_{single}^{(l)}=(1-\lambda)\mathcal{L}_{c}^{(l)}+\lambda T^{2}\mathcal{L}_{d}^{(l)}, \tag{17}\]

where \(T^{2}\) is used to adjust the magnitudes of the gradients produced by knowledge distillation [34] and \(\lambda\in[0,1]\) is the hyper-parameter that balances the importance of the two losses.

#### II-C2 **Multi-Scale Distillation**

Single-Scale Distillation enhances the classifiers according to single-scale receptive field information. To further enhance the model's representational capacity, an ensemble teacher is constructed to retain multi-scale receptive field information.
The ensemble teacher aggregates the \(r\) classifiers at the highest depths, whose predictions \(\tilde{\mathbf{y}}_{i}^{(l)}\) are combined as:

\[\begin{split}\bar{\mathbf{z}}_{i}&=\mathrm{softmax}(\sum_{l=k-r+1}^{k}w_{i}^{(l)}\tilde{\mathbf{y}}_{i}^{(l)}),\\ w_{i}^{(l)}&=\frac{exp(q_{i}^{(l)})}{\sum_{l=k-r+1}^{k}exp(q_{i}^{(l)})},\\ q_{i}^{(l)}&=\delta(\tilde{\mathbf{y}}_{i}^{(l)}\mathbf{s}^{(l)}),\end{split} \tag{18}\]

where \(\bar{\mathbf{z}}_{i}\) is the ensemble teacher prediction for node \(v_{i}\), \(\delta\left(\cdot\right)\) is the activation function (we employ the sigmoid function), and \(\mathbf{s}^{(l)}\in\mathbb{R}^{f\times 1}\) is the weight vector that projects the logits into a common subspace to measure self-attention scores. The scalars \(q_{i}^{(l)}\) are normalized to weight the predictions \(\tilde{\mathbf{y}}_{i}^{(l)}\). Subsequently, the teacher's knowledge is transferred to the student classifiers by optimizing the Multi-Scale Distillation loss \(\mathcal{L}_{multi}^{(l)}\):

\[\mathcal{L}_{multi}^{(l)}=\mathcal{L}_{t}+(1-\lambda)\mathcal{L}_{c}^{(l)}+\lambda T^{2}\mathcal{L}_{e}^{(l)}, \tag{19}\]

where \(1\leq l<k\) and \(\mathcal{L}_{multi}^{(l)}\) consists of three components: \(\mathcal{L}_{t}\), \(\mathcal{L}_{c}^{(l)}\) and \(\mathcal{L}_{e}^{(l)}\). \(\mathcal{L}_{t}\) represents the constraint for the ensemble teacher as Eq. (20):

\[\mathcal{L}_{t}=\frac{1}{|\mathcal{V}_{l}|}\sum_{v_{i}\in\mathcal{V}_{l}}\ell(\bar{\mathbf{z}}_{i},\mathbf{y}_{i}). \tag{20}\]

\(\mathcal{L}_{c}^{(l)}\) is the hard-label supervision signal defined in Eq. (16). \(\mathcal{L}_{e}^{(l)}\) distills the knowledge from the ensemble teacher to the student classifiers as Eq. (21):

\[\begin{split}\mathcal{L}_{e}^{(l)}&=\frac{1}{|\mathcal{V}_{train}|}\sum_{v_{i}\in\mathcal{V}_{train}}\ell(\tilde{\mathbf{p}}_{i}^{(l)},\bar{\mathbf{p}}_{i}),\\ \bar{\mathbf{p}}_{i}&=\mathrm{softmax}(\bar{\mathbf{z}}_{i}/T),\end{split} \tag{21}\]

where \(\tilde{\mathbf{p}}_{i}^{(l)}\) is derived from Eq. (14). Notice that not only the student classifiers but also the weight vectors \(\mathbf{s}^{(l)}\) and the ensemble teacher prediction \(\bar{\mathbf{z}}_{i}\) are updated simultaneously by optimizing \(\mathcal{L}_{multi}^{(l)}\), which provides a trainable regularization term [35] targeting higher student performance. Leveraging the enhanced classifiers obtained through Single-Scale Distillation, Inception Distillation can further capture comprehensive knowledge within multi-scale receptive fields, thereby improving the performance of classifiers at different depths.

## III Experiments

We test NAI on real-world graphs with different scales to verify its effectiveness, and aim to answer the following six questions. **Q1**: Compared with other state-of-the-art inference acceleration baselines, can NAI achieve better performance? **Q2**: How does each component (e.g., \(\mathrm{NAP_{d}}\), \(\mathrm{NAP_{g}}\), and Inception Distillation) in NAI affect the model performance? **Q3**: What is the difference between the performance of \(\mathrm{NAP_{d}}\) and \(\mathrm{NAP_{g}}\)? **Q4**: Can NAI generalize well to different Scalable GNN models? **Q5**: How about the acceleration performance of NAI under different batch sizes? **Q6**: How do hyper-parameters affect NAI?
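Before turning to the experiments, the distillation objectives of Eqs. (15)-(21) reduce to a short PyTorch sketch. The helper names are ours, and the class-dimension attention vectors assumed for Eq. (18) are our reading of the paper's \(\mathbf{s}^{(l)}\):

```python
import torch
import torch.nn.functional as F

def distill_loss(z_student, p_teacher, y, labeled, T=1.2, lam=0.5):
    """Eqs. (15)-(17)/(19): (1 - lambda) * hard-label CE + lambda * T^2 * soft CE."""
    log_p_s = F.log_softmax(z_student / T, dim=1)
    l_d = -(p_teacher * log_p_s).sum(dim=1).mean()          # soft cross-entropy
    l_c = F.cross_entropy(z_student[labeled], y[labeled])   # Eq. (16)
    return (1 - lam) * l_c + lam * T * T * l_d

def ensemble_teacher(logits_list, attn_vecs):
    """Eq. (18): self-attention-weighted vote of the r highest-depth classifiers."""
    y_soft = [torch.softmax(z, dim=1) for z in logits_list]
    q = torch.stack([torch.sigmoid(y @ s) for y, s in zip(y_soft, attn_vecs)], dim=1)
    w = torch.softmax(q, dim=1)                             # per-node weights w_i^(l)
    voted = (w.unsqueeze(-1) * torch.stack(y_soft, dim=1)).sum(dim=1)
    return torch.softmax(voted, dim=1)                      # z_bar in Eq. (18)
```

Because `attn_vecs` are trainable tensors, optimizing the combined loss updates students, attention vectors, and the ensemble teacher simultaneously, as described above.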
### _Experimental Settings_

**Baselines and Datasets**. We compare our proposed distance-based NAI (\(\mathrm{NAI_{d}}\)) and gate-based NAI (\(\mathrm{NAI_{g}}\)) with state-of-the-art methods designed for **GNN inference acceleration**, which include: (1) TinyGNN [36], which distills the knowledge from a deep GNN teacher to a single-layer GNN while exploiting the local structure information within peer nodes. (2) GLNN [37], which distills the knowledge from a deep GNN teacher to a simple MLP to eliminate the neighbor-fetching latency in GNN inference. Note that GLNN completely abandons feature propagation to speed up inference and can be seen as an extremely simplified case of NAI. (3) NOSMOG [38], which enhances GLNN by encoding the graph structural information explicitly and utilizing adversarial feature augmentation to ensure stable learning against noise. (4) Quantization [39], which quantizes model parameters from FP32 to INT8. We evaluate our proposed method on three datasets with different scales and characteristics: a citation network (Ogbn-arxiv) [40], an image network (Flickr) [27] and a product co-purchasing network (Ogbn-products) [40]. In the citation network, papers from different topics are considered as nodes and the edges are citations among the papers. The node attributes are word embedding vectors, and each paper's topic is regarded as a node class. Flickr contains descriptions and properties of images, and the node class is the image category. In Ogbn-products, the nodes represent products, and edges between two products indicate that the products are purchased together. Node features are generated from the product descriptions, and the task is to predict the category of a product. The detailed descriptions of the datasets are provided in Table II.2

Footnote 2: The dataset splits in GLNN [37] and NOSMOG [38] are different from ours since they further evaluate the methods in the transductive setting.

**Evaluation metrics**. The performance of each baseline is evaluated by five criteria, including the accuracy on the test set (ACC), averaged multiplication-and-accumulation operations per node (MACs), averaged feature processing MACs per node (FP MACs), averaged inference time per node (Time) and averaged feature processing time per node (FP Time). Specifically, MACs for NAI covers four procedures: stationary state computation, feature propagation, distance computation (or gate) and classification. Besides these procedures, Time for NAI further contains the time of supporting-node sampling. FP MACs and FP Time for NAI cover the feature propagation and distance computation (or gate) procedures.

**Implementation and Settings**.
Without loss of generality, we use the symmetric normalization adjacency matrix \(\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}\) in all base models. We set all student classifiers to be the same as the teacher GNNs, except for GLNN on the Ogbn-arxiv and Ogbn-products datasets. Following the GLNN paper, we set the hidden embedding size 4 times and 8 times wider than that of the teacher GNN on Ogbn-arxiv and Ogbn-products, respectively, to achieve higher accuracy. Moreover, for a fair comparison, we re-implement the position feature aggregation in NOSMOG by matrix multiplication to show NOSMOG's real inference performance in the inductive setting.3

Footnote 3: In the released codes of NOSMOG, the aggregation of position features is calculated node-by-node, which is inefficient and time-consuming in real inference scenarios.

For each method, the hyper-parameters used in the experiments are searched by the grid search method on the validation set or follow the original papers. We use the ADAM optimization algorithm to train all the models. The best propagation depth \(k\) for each dataset and base model is searched together with the learning rate, weight decay, and dropout to get the highest performance. Specifically, the values for \(k\), learning rate and weight decay are searched over [2, 10] with step 1, {0.6, 0.3, 0.1, 0.01, 0.001} and {0, 1e-3, 1e-4, 1e-5}, respectively. Dropout, \(T\) and \(\lambda\) are searched over [0, 0.7], [1, 2] and [0, 1] with step 0.1, respectively. The hyper-parameters of NAI under the base model SGC are shown in Table III, and Table IV shows the hyper-parameters for NAI under the other base models. To eliminate randomness, we repeat each method three times and report the mean performance. If not specified otherwise, the inference time is evaluated on the CPU with batch size 500. The codes are written in Python 3.9 and the operating system is Ubuntu 16.0. We use Pytorch 1.11.0 on CUDA 11.7 to train models on GPU. All experiments are conducted on a machine with Intel(R) Xeon(R) CPUs (Gold 5120 @ 2.20GHz) and NVIDIA TITAN RTX GPUs with 24GB GPU memory.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline Dataset & \(n\) & \(m\) & \(f\) & \(c\) & \#Train/Val/Test \\ \hline Flickr & 89,250 & 899,756 & 500 & 7 & 44k/22k/22k \\ Ogbn-arxiv & 169,343 & 1,166,243 & 128 & 40 & 91k/30k/48k \\ Ogbn-products & 2,449,029 & 123,718,280 & 100 & 47 & 196k/39k/2,213k \\ \hline \hline \end{tabular} \end{table} TABLE II: Dataset properties. \(n\), \(m\), \(f\) and \(c\) are the numbers of nodes, edges, feature dimensions and classes, respectively.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & Flickr & Ogbn-arxiv & Ogbn-products \\ \hline k & 7 & 5 & 5 \\ learning rate & 0.001 & 0.001 & 0.01 \\ weight decay & 0 & 0 & 1e-4 \\ dropout & 0.3 & 0.3 & 0.1 \\ \(T_{single}\) & 1.2 & 1 & 1.1 \\ \(\lambda_{single}\) & 0.6 & 0.1 & 0.2 \\ \(T_{multi}\) & 1.9 & 1.5 & 1 \\ \(\lambda_{multi}\) & 0.8 & 0.1 & 0.1 \\ \hline \hline \end{tabular} \end{table} TABLE III: The hyper-parameters for NAI under base model SGC. \(*_{single}\) and \(*_{multi}\) denote the hyper-parameters for Single-Scale and Multi-Scale Distillation, respectively.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & \(\mathrm{S}^{2}\mathrm{GC}\) & \(\mathrm{SIGN}\) & \(\mathrm{GAMLP}\) \\ \hline k & 10 & 5 & 5 \\ learning rate & 0.001 & 0.01 & 0.001 \\ weight decay & 1e-4 & 1e-5 & 1e-3 \\ dropout & 0.2 & 0.1 & 0.1 \\ \(T_{single}\) & 1 & 2 & 1.6 \\ \(\lambda_{single}\) & 0.1 & 0.9 & 0.9 \\ \(T_{multi}\) & 1.9 & 1.8 & 1.8 \\ \(\lambda_{multi}\) & 0.6 & 0.9 & 0.8 \\ \hline \hline \end{tabular} \end{table} TABLE IV: The hyper-parameters for NAI under different base models. \(*_{single}\) and \(*_{multi}\) denote the hyper-parameters for Single-Scale and Multi-Scale Distillation, respectively.

### _Performance Comparison_

To answer **Q1**, we compare \(\rm NAI_{d}\) and \(\rm NAI_{g}\) with the other baselines under the base model SGC. For NAIs, we select the hyper-parameters that prioritize the inference speed.

TABLE V: Inference comparison with acceleration baselines under base model SGC on all datasets. ACC is evaluated in percentage; Time and FP Time are evaluated in milliseconds.

From Table V, we observe that both \(\rm NAI_{d}\) and \(\rm NAI_{g}\) achieve a great balance between accuracy and inference speed. As for ACC, NAIs outperform the Quantization method and achieve the least ACC loss compared to vanilla SGC. Although Quantization also shows great accuracy, it only saves the classification computation and cannot reduce the computation of feature processing. Benefiting from removing the feature propagation in the inference procedure, GLNN has the smallest MACs and the fastest inference speed. However, for the same reason, GLNN does not generalize well to the inductive setting, as analyzed in its paper. Even with the increased embedding size, the accuracies on Ogbn-arxiv and Ogbn-products decrease significantly. This indicates that ignoring topological information severely impairs the prediction of unseen nodes. Although NOSMOG mitigates this problem by using DeepWalk and explicitly encoding the position information, it still has accuracy gaps compared with the base model. NAIs also outperform the single-layer TinyGNN on all datasets. Although TinyGNN saves a part of the computation of feature propagation, the self-attention mechanism and linear transformation used in its peer-aware module cause a large number of extra computations. Especially on datasets with high feature dimensions, e.g., Flickr, the MACs far exceed those of vanilla SGC. \(\rm NAI_{g}\) obtains better accuracy than \(\rm NAI_{d}\), at the cost of more computation from the gates. Compared with the other baselines, \(\rm NAI_{d}\) and \(\rm NAI_{g}\) accelerate inference significantly by controlling the FP MACs, and \(\rm NAI_{d}\) achieves a 75\(\times\) Time speedup and an 86\(\times\) FP Time speedup on Ogbn-products. Although the highest propagation depth is 5 on Ogbn-products, \(\rm NAI_{d}\) and \(\rm NAI_{g}\) both achieve nonlinear acceleration ratios, since the number of supporting nodes and the adjacency matrix size grow at an exponential rate with the propagation depth. This verifies the effectiveness of generating personalized propagation depths with the NAI framework. Besides the speed-first results in Table V, the NAI framework allows users to choose more accurate results based on their latency constraints. Figure 4 shows the trade-off between accuracy and inference time under different hyper-parameter settings. We select 3 typical settings for each dataset and method, which are denoted as "\(\rm NAI_{s}^{1}\)", "\(\rm NAI_{s}^{2}\)" and "\(\rm NAI_{s}^{3}\)", respectively. Note that "\(\rm NAI_{s}^{1}\)" is the speed-first setting in Table V. From Figure 4, NAIs achieve the highest classification accuracy and are even superior to vanilla SGC. This is because NAP mitigates the over-smoothing problem and Inception Distillation enhances the classifiers (Tables VII and VIII in the next subsection evaluate their impacts). For example, on Flickr, \(\rm NAI_{d}^{3}\) and
From Figure 4, NAIs achieve the highest classification accuracy and are even superior to vanilla SGC. This is due to that NAP mitigates the over-smoothing problem and Inception Distillation enhances the classifiers (Table VII and VIII in the next subsection evaluate their impacts). For example, on Flickr, \(\rm NAI_{d}^{3}\) and \begin{table} \begin{tabular}{l|l l l l l|l l l l l l|l \(\mathrm{NAI}_{\mathrm{g}}^{3}\) achieve more accurate results while spending a similar inference time with SGC. \(\mathrm{NAI}_{\mathrm{g}}^{2}\) speed up \(\mathrm{NAI}_{\mathrm{g}}^{3}\) by 1.8\(\times\), and \(\mathrm{NAI}_{\mathrm{d}}^{2}\) accelerates \(\mathrm{NAI}_{\mathrm{d}}^{3}\) by 2.1\(\times\) with little accuracy drop. More detailed comparisons between \(\mathrm{NAI}_{\mathrm{g}}\) and \(\mathrm{NAI}_{\mathrm{d}}\) can be found in Section III-C. Table VI shows the detailed test node distributions selected from a single run, i.e., the number of nodes at different propagation depths, for different datasets and hyper-parameter settings. The depth increases from 1 (left) to \(k\) (right). From Table VI, we observe that most of the nodes of \(\mathrm{NAI}_{\mathrm{d}}^{2}\) on Flickr adopt the propagated features at depth 4. This successfully reduces the number of supporting nodes and saves the computation of the feature propagation. To get the best accuracy, \(\mathrm{NAI}_{\mathrm{d}}^{3}\) makes full use of each classifier, and the propagation depths of tested nodes are various. As for the \(\mathrm{NAI}_{\mathrm{d}}^{1}\) on Ogbn-products, all nodes adopt the propagated features at depth 2 to trade off the inference speed and accuracy. It demonstrates the flexibility of the \(\mathrm{NAI}\) framework, and the fixed propagation depth used in classic GNNs is the special case of our proposed method. ### _Ablation Study_ To thoroughly evaluate our method and answer **Q2-3**, we provide ablation studies on: (1) Node-Adaptive Propagation; (2) Inception Distillation. Table VII shows the performance of \(\mathrm{NAI}_{\mathrm{d}}\), \(\mathrm{NAI}_{\mathrm{g}}\) and NAI without NAP under different hyper-parameter settings on Ogbn-arxiv and Ogbn-products. ACC and Time values are averaged over 3 runs and the node distributions are selected from a single run. Their maximum propagation depths \(k=5\), and \(T_{max}=1\) is omitted due to the same inference accuracy. Note that the accuracies of "NAI w/o NAP" do not grow monotonically with \(T_{max}\) because the Inception Distillation enhances the classifiers independently. Comparing \(\mathrm{NAI}_{\mathrm{d}}\) with "NAI w/o NAP" under the same \(T_{max}\), accuracies are all improved with less inference latency. To achieve a fast inference speed under the same \(T_{max}\), tested nodes adopt various propagation depths, contributing to both accuracy improvement and computation saving. \(\mathrm{NAI}_{\mathrm{g}}\) achieves much better accuracy than \(\mathrm{NAI}_{\mathrm{d}}\). This improvement comes from the powerful representation ability of gates and at the cost of the computation and inference time. For example, when \(T_{max}=3\), the inference time of \(\mathrm{NAI}_{\mathrm{g}}\) on Ogbn-arxiv is larger than both \(\mathrm{NAI}_{\mathrm{d}}\) and "NAI w/o NAP". However, when more nodes adopt lower propagation depths, \(\mathrm{NAI}_{\mathrm{g}}\) can perform better. When \(T_{max}=4\), \(\mathrm{NAI}_{\mathrm{g}}\) is more efficient and effective than \(\mathrm{NAI}_{\mathrm{d}}\). 
\begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{Ogbn-arxiv} & \multicolumn{3}{c}{Ogbn-products} \\ \hline \(T_{max}\) & Method & ACC (\%) & Time (ms) & Node distribution & ACC (\%) & Time (ms) & Node distribution \\ \hline \multirow{3}{*}{2} & NAI w/o NAP & 69.16 & 202.7 & [0, 48603, 0, 0, 0] & 73.70 & 923.2 & [0, 2213091, 0, 0, 0] \\ & \(\mathrm{NAI}_{\mathrm{d}}\) & 69.25 & 128.4 & [184967454, 0, 0, 0] & 73.70 & 923.2 & [0, 2213091, 0, 0, 0] \\ & \(\mathrm{NAI}_{\mathrm{g}}\) & 69.34 & 195.5 & [20524 e551, 0, 0] & 73.89 & 1088.3 & [452412, 210757, 0, 0, 0] \\ \hline \multirow{3}{*}{3} & NAI w/o NAP & 69.38 & 454.2 & [0, 0, 48603, 0, 0] & 73.95 & 17121.5 & [0, 0, 2213091, 0, 0] \\ & \(\mathrm{NAI}_{\mathrm{d}}\) & 69.48 & 427.4 & [0, 20528, 2075, 0, 0] & 73.97 & 16914.3 & [0, 1086, 2212050, 0, 0] \\ & \(\mathrm{NAI}_{\mathrm{g}}\) & 69.53 & 455.7 & [0, 16420, 23183, 0, 0] & 74.03 & 16223.9 & [0, 1369671, 8, 34420, 0] \\ \hline \multirow{3}{*}{4} & NAI w/o NAP & 69.26 & 889.3 & [0, 0, 0, 48603, 0] & 74.57 & 422322.2 & [0, 0, 0, 2213091, 0] \\ & \(\mathrm{NAI}_{\mathrm{d}}\) & 69.52 & 816.6 & [0, 30303, 5898, 12402, 0] & 74.58 & 39474.8 & [0, 1384, 239, 2211468, 0] \\ & \(\mathrm{NAI}_{\mathrm{g}}\) & 69.66 & 806.4 & [0, 31519, 11344, 5740, 0] & 74.66 & 37662.0 & [0, 0, 095524, 307567, 0] \\ \hline \multirow{3}{*}{5} & NAI w/o NAP & 69.36 & 1296.4 & [0, 0, 0, 0, 48603] & 74.24 & 68938.8 & [0, 0, 0, 0, 2213091] \\ & \(\mathrm{NAI}_{\mathrm{d}}\) & 69.82 & 1198.9 & [0, 16503, 12221, 1077, 18802] & 74.58 & 67523.2 & [0, 0, 0, 2213068, 23] \\ & \(\mathrm{NAI}_{\mathrm{g}}\) & 69.90 & 1224.7 & [0, 0, 13215, 3864, 31524] & 74.69 & 68087.0 & [0, 4, 488514, 3105162, 423015] \\ \hline \hline \end{tabular} \end{table} TABLE VII: The ablation study on NAPs under different \(T_{max}\). The propagation depth of the node distribution increases from 1 to \(k=5\).

Besides NAP, Inception Distillation is designed to explore multi-scale knowledge and improve the inference accuracy. We evaluate the accuracy of \(f^{(1)}\), which has the worst performance among the classifiers, to show the effectiveness of each component in Inception Distillation. Table VIII displays the results of NAI without Inception Distillation ("w/o ID"), NAI without Single-Scale Distillation ("w/o SS"), NAI without Multi-Scale Distillation ("w/o MS") and NAI. First, Multi-Scale Distillation explores multi-scale receptive fields and constructs a more powerful teacher via a self-attention mechanism, contributing to improvements on all datasets when comparing NAI with NAI w/o MS. For example, when ignoring Multi-Scale Distillation, the accuracy of NAI drops by 0.44% on Flickr. Besides, Single-Scale Distillation provides a solid foundation for Multi-Scale Distillation. With more accurate classifiers, the ensemble teacher becomes more expressive and powerful, providing higher-quality supervision signals. The classification results decrease on all datasets when Single-Scale Distillation is removed. With the help of Single-Scale Distillation, the accuracy of Multi-Scale Distillation increases by 2.04% on Flickr. These results indicate that both Single-Scale and Multi-Scale Distillation are essential to NAI.

\begin{table} \begin{tabular}{c|c c c} \hline \hline & Flickr & Ogbn-arxiv & Ogbn-products \\ \hline NAI w/o ID & 40.86 & 65.54 & 70.17 \\ NAI w/o MS & 44.41 & 65.91 & 70.28 \\ NAI w/o SS & 42.81 & 66.08 & 70.37 \\ NAI & 44.85 & 66.10 & 70.49 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: The ablation study on Inception Distillation. Accuracy (%) is averaged over 3 runs.

### _Generalization_

In addition to SGC, our proposed NAI framework can be applied to any Scalable GNN. To answer **Q4**, we test the generalization ability of \(\mathrm{NAI}_{\mathrm{d}}\) and \(\mathrm{NAI}_{\mathrm{g}}\) by deploying them on SIGN, \(\mathrm{S}^{2}\mathrm{GC}\) and GAMLP. The hyper-parameters, including the classifier structure, are searched to get the best performance for each base model. The detailed hyper-parameters are listed in Table IV. The accuracy and inference time results of SIGN, \(\mathrm{S^{2}GC}\) and GAMLP are shown in Tables IX, X and XI, respectively. NAIs consistently outperform the other baselines when considering both accuracy and inference speedup. Compared to GLNN, \(\mathrm{NAI_{d}}\) improves the accuracy by 4.18%, 2.35% and 3.90% on SIGN, \(\mathrm{S^{2}GC}\) and GAMLP, respectively. Although the attention mechanism used in TinyGNN requires a large number of MACs, feature propagation is even more time-consuming, and the acceleration ratios for the different base models range from 1.2\(\times\) to 2.9\(\times\) compared with vanilla GNNs. Quantization achieves the smallest accuracy loss, but its acceleration ratio is limited. When applying \(\mathrm{NAI_{d}}\) to SIGN, \(\mathrm{S^{2}GC}\) and GAMLP, the FP Time is accelerated by \(20\times\), \(43\times\) and \(12\times\), respectively. Considering the other computations, i.e., the computation of the stationary state and classification, the corresponding inference times are accelerated by \(10\times\), \(26\times\) and \(8\times\). The best acceleration result of \(\mathrm{NAI_{g}}\) is achieved on \(\mathrm{S^{2}GC}\): it achieves \(24\times\) and \(27\times\) speedups for inference time and MACs, respectively.

\begin{table} \begin{tabular}{l|c c c c c} \hline & ACC & \#mMACs & \#FP mMACs & Time & FP Time \\ \hline SIGN & 51.00 & 1574.8 & 1526.8 & 1667.1 & 1569.0 \\ GLNN & 46.84 & 8.1 & 0 & 7.8 & 0 \\ NOSMOG & 48.24 & 8.5 & 0.1 & 32.8 & 17.2 \\ TinyGNN & 47.21 & 8862.2 & 8846.1 & 1356.1 & 1345.9 \\ Quantization & 45.87 & 1574.9 & 1526.8 & 1654.3 & 1565.0 \\ \hline \(\mathrm{NAI_{d}}\) & 51.02 & 135.0 (12) & 112.5 (14) & 170.4 (10) & 78.7 (20) \\ \(\mathrm{NAI_{g}}\) & 50.93 & 135.2 (12) & 112.8 (14) & 181.3 (9) & 86.6 (18) \\ \hline \end{tabular} \end{table} TABLE IX: Inference comparison under base model SIGN on Flickr. ACC is evaluated in percentage. Time and FP Time are evaluated in milliseconds. Acceleration ratios between NAI and vanilla GNNs are shown in brackets.

\begin{table} \begin{tabular}{l|c c c c c} \hline & ACC & \#mMACs & \#FP mMACs & Time & FP Time \\ \hline \(\mathrm{S^{2}GC}\) & 50.08 & 3897.8 & 3889.2 & 3959.5 & 3717.6 \\ GLNN & 46.59 & 8.6 & 0 & 9.5 & 0 \\ NOSMOG & 48.19 & 9.0 & 0.1 & 31.6 & 17.3 \\ TinyGNN & 46.89 & 8855.1 & 8846.5 & 1366.7 & 1355.0 \\ Quantization & 49.10 & 3897.8 & 3889.2 & 3946.9 & 3714.6 \\ \hline \(\mathrm{NAI_{d}}\) & 48.94 & 120.1 (32) & 89.0 (44) & 149.9 (26) & 86.3 (43) \\ \(\mathrm{NAI_{g}}\) & 49.66 & 142.1 (27) & 97.7 (40) & 165.6 (24) & 88.0 (42) \\ \hline \end{tabular} \end{table} TABLE X: Inference comparison under base model \(\mathrm{S^{2}GC}\) on Flickr. ACC is evaluated in percentage. Time and FP Time are evaluated in milliseconds. Acceleration ratios between NAI and vanilla GNNs are shown in brackets.

### _Effect of Batch Size_

The inference of GNNs is related to the batch size, because the number of supporting nodes grows with the batch size.
To answer **Q5**, we evaluate the different methods and report the averaged MACs and inference time under different batch sizes on Flickr. The batch size changes from 100 to 2000, and all other settings and accuracy performances are the same as in Table V. The detailed results are shown in Figure 5. For the base model SGC, both MACs and inference time stay at the same magnitude as the batch size grows. Specifically, when the batch size changes from 100 to 2000, the MACs increase from 2282.8 to 2613.0, and the inference time increases from 2396.0 ms to 2701.1 ms. Quantization shows the same trend as the base model in both MACs and inference time. However, TinyGNN shows a strong positive correlation with the batch size, and its inference time exceeds that of SGC when the batch size is 1000. When the batch size grows to 2000, the number of MACs reaches 16917.9 and the inference time is 8725.0 ms. This performance degradation of TinyGNN comes from the heavy computations of the attention mechanism used in its peer-aware module. Benefiting from the extremely simplified MLP model, the number of MACs of GLNN stays around 100, and the inference time is around 10 ms. For NOSMOG, the position feature aggregation for unseen nodes causes MACs and inference time overhead compared with GLNN. For our proposed \(\mathrm{NAI_{d}}\) and \(\mathrm{NAI_{g}}\), the number of MACs increases with the growth of the batch size. This is because NAI requires the extra calculation of the stationary feature state \(\mathbf{X}^{(\infty)}\) and the distances (or gates) for all target nodes in the batch. However, these procedures can be computed quickly by matrix multiplication and subtraction, resulting in stable inference times across different batch sizes.

Fig. 5: The number of MACs and inference time comparison among different methods and batch sizes on the dataset Flickr.

### _Parameter Sensitivity Analysis_

Temperature \(T\) and weight \(\lambda\) are two influential hyper-parameters for Inception Distillation. Moreover, the ensemble number \(r\) controls the teacher quality in Multi-Scale Distillation. To analyze the influence of these hyper-parameters and answer **Q6**, we conduct the experiment on Flickr with SGC as the base model. The classification performance of \(f^{(1)}\) in terms of these hyper-parameters is shown in Figure 6. First, \(\lambda\) is quite important and can significantly affect the classification result. For example, \(\lambda\) for Multi-Scale Distillation should be kept between 0.8 and 1 to get better performance. This indicates that the supervision provided by the ensemble teacher is more important than the hard labels. In contrast, \(\lambda\) for Single-Scale Distillation should be selected carefully to balance the two losses. As \(T\) increases, the performance of Multi-Scale Distillation first decreases and then increases; thus, setting \(T\) to a larger value and using softer labels works best. The Single-Scale Distillation results in terms of \(T\) show that decreasing the temperature helps enhance the classification performance, and \(T\) should be kept in the range [1, 1.2]. Finally, the results in terms of \(r\) show that increasing the number of combined predictions can help enhance the classification performance, but it also introduces more unreliable labels into model training. Especially when introducing the low-quality labels from \(f^{(1)}\), the classification result drops rapidly. To sum up, Inception Distillation achieves stable and high classification performance when \(\lambda\) ranges from 0.5 to 1. Softer labels and an appropriate ensemble number should be applied to Multi-Scale Distillation for better performance.

## IV Related Works

To deploy models on large-scale graphs, researchers have proposed various techniques to accelerate training and inference, which can be categorized into the model perspective and the algorithm perspective.

**Acceleration models.** From the model perspective, acceleration models mainly include sampling-based and scalable models. Besides the models studied in this paper, sampling-based models can be divided into three categories according to the sampling method: node-wise [12, 24, 41], layer-wise [42, 43, 44], and graph-wise [27, 30] sampling. Although sampling-based GNNs mitigate the neighbor explosion problem by restricting the number of neighbors, they are greatly influenced by the sampling quality and suffer from the high-variance problem when applied to inference. As a result, the inference procedure typically adheres to conventional GNNs. A distinct sampling method for GNN acceleration is PPRGo [24]. PPRGo utilizes personalized PageRank to replace the hierarchical feature propagation and then proposes an approximate method to select the top-k neighbor nodes and speed up the PageRank matrix calculation. Compared with our NAI, PPRGo focuses on a different GNN framework, which conducts the propagation process after the transformation process [25] to solve the over-smoothing problem. Consequently, PPRGo must be trained end-to-end and cannot be generalized to the Scalable GNNs studied in this paper. Instead of using neighbor sampling or approximate matrix calculation, NAI adaptively generates a personalized propagation depth for each node and achieves great generalization ability.

**Acceleration algorithms**.
**Acceleration algorithms.** From the algorithm perspective, acceleration methods include pruning, quantization and knowledge distillation (KD). Pruning methods designed for GNNs [45, 46, 47] reduce the dimension of the embeddings in each hidden layer to save computation. Quantization [48] uses low-precision integer arithmetic during inference to speed up computation. However, these two kinds of methods concentrate on reducing the computation of feature transformation and classification, and raw features are preserved to avoid performance degradation. This limits the acceleration performance, considering that feature propagation accounts for the largest proportion of the runtime. KD aims to train a lightweight model with performance similar to the teacher model, and has thus been exploited in recent inference-acceleration research. Most KD methods for GNNs try to enhance student performance by introducing high-order structural information, because the receptive field is bounded by the number of GNN layers [49, 50, 51, 52]. Besides, GraphAKD [53] leverages adversarial training to decrease the discrepancy between teacher and student. ROD [54] uses information from multiple receptive fields to provide richer supervision signals for sparsely labeled graphs. RDD [55] defines node and edge reliability to make better use of high-quality data. Different from the above works, which concentrate on improving the performance of a single classifier, the Inception Distillation in NAI focuses on multi-scale knowledge transfer and boosts the performance of multiple students. ## V Conclusion We present Node-Adaptive Inference (NAI), a general inference acceleration framework for Scalable GNNs. NAI successfully reduces redundant computation in feature propagation and achieves adaptive node inference with personalized propagation depths. With the help of Inception Distillation, NAI exploits multi-scale receptive-field knowledge and compensates for the potential loss of inference accuracy. Extensive experiments on large-scale graph datasets verify that NAI offers high acceleration performance, good generalization ability, and flexibility under different latency constraints. NAI drives the industrial application of Scalable GNNs, especially in streaming and real-time inference scenarios. Fig. 6: Parameter sensitivity results of NAI under the base model SGC on Flickr.
2308.03210
Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series
Irregularly sampled multivariate time series are ubiquitous in several application domains, leading to sparse, not fully-observed and non-aligned observations across different variables. Standard sequential neural network architectures, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), consider regular spacing between observation times, posing significant challenges to irregular time series modeling. While most of the proposed architectures incorporate RNN variants to handle irregular time intervals, convolutional neural networks have not been adequately studied in the irregular sampling setting. In this paper, we parameterize convolutional layers by employing time-explicitly initialized kernels. Such general functions of time enhance the learning process of continuous-time hidden dynamics and can be efficiently incorporated into convolutional kernel weights. We, thus, propose the time-parameterized convolutional neural network (TPCNN), which shares similar properties with vanilla convolutions but is carefully designed for irregularly sampled time series. We evaluate TPCNN on both interpolation and classification tasks involving real-world irregularly sampled multivariate time series datasets. Our experimental results indicate the competitive performance of the proposed TPCNN model which is also significantly more efficient than other state-of-the-art methods. At the same time, the proposed architecture allows the interpretability of the input series by leveraging the combination of learnable time functions that improve the network performance in subsequent tasks and expedite the inaugural application of convolutions in this field.
Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis
2023-08-06T21:10:30Z
http://arxiv.org/abs/2308.03210v2
# Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series ###### Abstract Irregularly sampled multivariate time series are ubiquitous in several application domains, leading to sparse, not fully-observed and non-aligned observations across different variables. Standard sequential neural network architectures, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), consider regular spacing between observation times, posing significant challenges to irregular time series modeling. While most of the proposed architectures incorporate RNN variants to handle irregular time intervals, convolutional neural networks have not been adequately studied in the irregular sampling setting. In this paper, we parameterize convolutional layers by employing time-explicitly initialized kernels. Such general functions of time enhance the learning process of continuous-time hidden dynamics and can be efficiently incorporated into convolutional kernel weights. We, thus, propose the time-parameterized convolutional neural network (TPCNN), which shares similar properties with vanilla convolutions but is carefully designed for irregularly sampled time series. We evaluate TPCNN on both interpolation and classification tasks involving real-world irregularly sampled multivariate time series datasets. Our experimental results indicate the competitive performance of the proposed TPCNN model which is also significantly more efficient than other state-of-the-art methods. At the same time, the proposed architecture allows the interpretability of the input series by leveraging the combination of learnable time functions that improve the network performance in subsequent tasks and expedite the inaugural application of convolutions in this field. ## 1 Introduction Time series arise naturally in many contexts including quantitative finance, astrophysics and medicine, just to name a few. Recently, there is a growing interest in applying machine learning techniques to time series data. Besides time series forecasting, which has been extensively studied for decades [7], other tasks have also emerged recently such as time series classification [12] and generation [8]. Time series are constructed from real-world data and usually several of their observations are missing or are subject to noise. This is mainly due to irregular sampling and is common in different types of data including medical records, network traffic, and astronomical data. Unfortunately, the most successful machine learning models in sequential modeling, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs) cannot properly handle such irregularly sampled time series data. Indeed, those models treat observations successively and assume an equidistant sampling scheme. Thus, time series data that exhibits variable gaps between consecutive time points pose a significant challenge to such conventional deep learning architectures. A naive approach to deal with the above problem would be to drop some observations such that the distance between consecutive (remaining) observations is fixed. However, this would increase data sparsity, thus leading to poorly defined latent variables. A more prominent approach would be to first apply some imputation method to replace missing values with estimated values, and then to use the standard models which assume an equidistant sampling scheme. In fact, several recent approaches build on the above idea [3, 9]. 
However, this could potentially result in a loss of information and a violation of the underlying dynamics. Recently, there has been increasing interest in effectively capturing the continuous dynamics of real-world sparse and irregular multivariate time series. Most studies have extended RNNs to continuous-time hidden dynamics defined by ordinary differential equations (ODEs) [4, 24]. The effectiveness of Convolutional Neural Networks (CNNs) [15] as an alternative to recurrent architectures has been established, as long as the essential input dependencies fall within the memory horizon of the network. CNNs are based on parallel computations and are thus more efficient, in contrast to the training instability and gradient problems of RNNs that employ back-propagation through time [34]. However, since discrete convolutions learn independent weights for each time step in the kernel range, they do not directly capture time irregularities. Efforts towards continuous implementations of convolutional kernels have targeted 3D data [25, 33] and, recently, sequences [23]. The proposed continuous convolution for sequential data [23], CKConv, parameterizes the kernel values using a multi-layer perceptron (MLP) on the relative positions \(\{\Delta_{\tau_{i}}\}\) of the observations, followed by a periodic activation function [29]. In contrast to [23], which takes advantage of periodic activations, our layer can be constructed from any predefined set of continuous functions and followed by any activation, while using significantly fewer learnable parameters, since a single feed-forward layer is used for the parameterization of the convolutional kernel. Following the above line of research, in this paper we develop a new model, the so-called _Time-Parameterized Convolutional Neural Network_ (TPCNN), which generalizes the standard CNN model to irregularly sampled time series. To achieve that, we replace the fixed kernels of CNNs with kernels whose values are parameterized both by time and by trainable variables. Thus, instead of keeping the kernel weights fixed over the whole time series length, we use different functions (e.g., linear, sinusoidal) to produce the kernels that are convolved with each patch of the time series. Therefore, kernels can be seen as continuous functions of time, and the proposed TPCNN model can naturally learn continuous latent representations of irregular time series. Furthermore, the use of the aforementioned functions improves the explainability of the proposed model. We combine our time-parameterized convolutions with vanilla convolutions by stacking them in a deep encoder module. The proposed TPCNN model is evaluated on both time series classification and time series interpolation tasks. Our experiments demonstrate that the proposed model performs comparably to state-of-the-art methods. The main contributions of the paper are summarized as follows: 1. Generalizing conventional, fixed convolutional kernels to time functions that increase their representational power while still leveraging the properties of convolutions (e.g., locally aggregated information, fast training). 2. Enabling the application, and demonstrating the efficiency, of deep stacked convolutions in the irregular sampling setting. 3. Achieving high-performance results in interpolation and classification of irregularly sampled benchmark datasets, comparable to other state-of-the-art methods.
## 2 Related Work The long-standing challenge of modeling multivariate irregular time series has led to the development of various neural network architectures that explicitly handle such time-dependent peculiarities. One strategy suggests dividing the timeline into equal intervals, filling in missing data, and then using a Recurrent Neural Network (RNN) on the imputed inputs. Using a weighted average between the empirical mean and the previous observation to perform imputation has also been proposed [3]. Alternative methods for imputation include the use of Gaussian processes [9] or generative adversarial networks [16] prior to running the RNN on time-discretized inputs. The interpolation-prediction network [26] employs several semi-parametric interpolation layers for multivariate time series input with missing values, followed by a prediction network which is applied to the produced regularly spaced and fully observed representations. Multi-directional RNNs (M-RNN) combine past and future observations for each timestamp [36]. A differentiable set function method for classifying irregularly sampled time series is another line of work presented in [11]. An alternative strategy for handling irregularly sampled data involves architectures that directly model such temporal sequences. Various techniques, including adaptations of gated recurrent unit networks (GRUs) [5] and Long Short-Term Memory networks (LSTMs) [10], have been introduced for this purpose. Among the several proposed modified GRU architectures [3], a prominent example takes as input observed values, indicators denoting missing data points, and the differences in time between observations. The LSTM architecture has been extended to handle the time irregularity of the data by introducing a novel time gate in [19] that updates the memory state. The activation and deactivation of this gate are governed by distinct rhythmic oscillations, controlled by some learnable parameters. Another LSTM modification is presented in [21], where the proposed forget gate moderates the passing of memory from one time step to another. Another solution for handling irregularly sampled data is to incorporate the time gaps between observations directly into Recurrent Neural Networks (RNNs). One approach is to add the time gap \(\Delta_{t}\) to the RNN input, which has been found to be susceptible to overfitting [18]. An alternative method is to introduce hidden states that decay over time, which has been proposed in several works as a viable solution [3, 2, 22]. Hidden states with an exponential decay can be employed to parameterize neural Hawkes processes and explicitly model observations via latent state changes at each observation event [17]. Many works focus on the continuous modeling of time series by learning a continuous-time neural representation with a latent state defined at all times. More specifically, a variational auto-encoder model, which utilizes a neural network decoder in combination with a latent ordinary differential equation (ODE) model, has been presented in [4]. Based on this approach, an ODE-RNN encoder that consists of a neural ODE part that models the hidden state dynamics and an RNN part that updates the hidden state has been proposed [24]. A continuous version of the GRU architecture models the input series via continuous ODE dynamics describing the evolution of the probability distribution of the data [6].
Finally, as an alternative to Neural ODEs, Neural Controlled Differential Equations represent the continuous-time analogue of an RNN, which benefits from memory-efficient adjoint-based backpropagation across observations [14]. Attention mechanisms combined with time encodings, as an alternative to positional ones [32], have been proposed [30, 37, 31]. By extending attention with learnable time embeddings [35], the recently proposed Multi-Time Attention Network [27] computes the similarity between observations at different time points using a learnable time embedding. This approach works similarly to kernel-based interpolation, but leverages a learnable attention-based similarity kernel over time. Beyond the optimization issues of RNNs, the conventional dot-product self-attention mechanism matches queries with keys without considering the surrounding context, and its space complexity grows quadratically with the input length, leading to memory constraints and potential performance limitations. The use of implicit neural representations for creating continuous data representations by encoding the input in the weights of a neural network has recently gathered interest [20, 29]. Our approach can be conceptualized as an implicit representation of the convolutional kernels, since they are parameterized as learnable and continuous functions of time. In this study, the proposed time-parameterized convolutional layer (TPC) introduces time-varying convolutional kernels, allowing for more efficient representation learning of the time dependencies among partially observed variables. We leverage several continuous time functions for extracting learnable time embeddings of the time intervals across different variables. The proposed architecture is carefully designed for interpolation and classification tasks on irregularly sampled time series. ## 3 The TPC Layer In this section, we define the mathematical properties of the employed Time-Parameterized Convolutional (TPC) layer and analytically explain the proposed framework for tasks involving irregularly sampled, partially observed and multivariate time series. ### 3.1 Preliminaries Convolution is a well-studied mathematical operation which has applications in many diverse scientific fields [1]. The convolution of two functions \(f\) and \(g\), denoted by \(f*g\), expresses how the shape of one is modified by the other. Continuous convolution. If the domains of functions \(f\) and \(g\) are continuous, convolution is defined as the integral of the product of the two functions after one is reflected and shifted. Formally, given \(f\colon\mathbb{R}^{D}\to\mathbb{R}\) and \(g\colon\mathbb{R}^{D}\to\mathbb{R}\), the continuous convolution operation is defined as: \[(f*g)(\mathbf{x})=\int_{\mathbb{R}^{D}}f(\mathbf{y})g(\mathbf{x}-\mathbf{y})\,d\mathbf{y}\] Discrete convolution. In the real world, signals are discrete and finite. For functions \(f\) and \(g\) defined over \(\mathbb{Z}^{D}\) and the finite set \(\{-K,-K+1,...,K-1,K\}^{D}\), respectively, the discrete equivalent of convolution is defined as: \[(f*g)[n]=\sum_{k=-K}^{K}f[n-k]g[k] \tag{1}\] Thus, the integral is replaced by a finite summation. Standard CNN models consist of layers that perform discrete convolutions defined over this discrete domain.
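As a quick numerical check of Eq. (1) in one dimension (\(D=1\)), the snippet below convolves a short signal with a fixed kernel of size \(2K+1=3\); the helper name and values are illustrative.

```python
import numpy as np

def discrete_conv(f, g, K):
    """Eq. (1) in 1D: (f*g)[n] = sum_{k=-K}^{K} f[n-k] g[k], with zero padding."""
    L = len(f)
    f_pad = np.pad(f, K)                  # zeros outside the observed signal
    out = np.zeros(L)
    for n in range(L):
        for k in range(-K, K + 1):
            out[n] += f_pad[n + K - k] * g[k + K]  # g[k] stored at index k + K
    return out

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.25, 0.5, 0.25])           # fixed weights for k = -1, 0, 1
print(discrete_conv(f, g, K=1))           # [1.0, 2.0, 3.0, 2.75]
```

The kernel weights here are the same for every position \(n\), which is exactly the assumption the time-parameterized variant below relaxes.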
### 3.2 Time-Parameterized 1D Convolutions We first introduce the key notation behind the employed time-parameterized convolutions for irregular and multivariate time series and analyze their fundamental properties. Irregular time series and standard CNNs. Let \(\{\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\}\) be a collection of multivariate time series where \(\mathbf{X}^{(i)}\in\mathbb{R}^{m\times L}\) for all \(i\in\{1,\ldots,N\}\). Thus, each time series consists of \(m\) channels and has a length (i.e., number of observations) equal to \(L\), corresponding to the observation times \(\{t_{1},t_{2},\ldots,t_{L}\}\). Let also \(d(\cdot,\cdot)\) denote a function that measures the distance (in time) between observations of a single channel of the collection of time series. The convolution operation of standard CNNs assumes that consecutive observations are equally spaced across all samples, and thus, the weights of the different kernels of standard CNNs are fixed across all chunks of the time series. In other words, the summation in the right part of Equation (1) is performed over the elements of the same set for all \(n\). Formally, we have that \(d\big(\mathbf{X}^{(i)}_{l,j},\mathbf{X}^{(i)}_{l,j+1}\big)=\tau\) holds for all \(l\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,L-1\}\) and \(i\in\{1,\ldots,N\}\), where \(N\) is the number of samples. However, the above does not necessarily hold in the case of irregularly sampled time series data. Indeed, irregular sampling for multivariate series leads to variations in the number of observations across channels. Thus, due to the assumptions it makes, the standard convolution operation of CNNs is not suitable for irregular time series data. Time-parameterized convolutional kernels. To deal with the irregularity of time series, we propose to use time-parameterized kernels. Thus, instead of a fixed kernel that slides over the patches of the time series, we use a parameterized kernel whose components are functions of time. The kernel is also parameterized by the weights of a neural network. We constrain the size of the kernel to be equal to \(2K+1\) with \(K\in\mathbb{N}_{0}\), where \(\mathbb{N}_{0}\) denotes the set of natural numbers together with zero. Then, the elements of the kernel are constructed by some function \(g(\theta,\Delta t)\), where \(\theta\) denotes some trainable parameters and \(\Delta t\) denotes the distance (in time) between the observation associated with some element of the kernel and the \((K+1)\)-th observation. Formally, the convolution is defined as follows: \[(f*g)(t)=\sum_{i=1}^{2K+1}f(t_{i})g(\theta,t-t_{i})=\sum_{i=1}^{2K+1}f(t_{i})g(\theta,\Delta t_{i}) \tag{2}\] where \(t_{1},\ldots,t_{2K+1}\) are the timestamps associated with the observations of the patch the kernel is applied to. The function \(g(\theta,\Delta t)\) is quite general and can have different forms. In this paper, we focus on interpretability, and thus the function \(g(\theta,\Delta t)\colon\mathbb{R}^{5}\to\mathbb{R}\) is defined as follows: \[g\Big{(}\begin{bmatrix}\theta_{1}&\theta_{2}&\theta_{3}&\theta_{4}&\Delta t \end{bmatrix}^{\top}\Big{)}=\theta_{1}\Big{(}\sigma\Big{(}h\big{(}\theta_{3} \cdot\Delta t+\theta_{4}\big{)}\Big{)}+\theta_{2}\Big{)}\] where \(h\colon\mathbb{R}\to\mathbb{R}\) is a continuous function in \(\mathbb{R}\) and \(\sigma\colon\mathbb{R}\to\mathbb{R}\) denotes some activation function (i.e., sigmoid, ReLU, etc.). Thus, to construct each element of the kernel, the function \(g\) takes as input four trainable parameters (i.e., \(\theta_{1},\theta_{2},\theta_{3}\) and \(\theta_{4}\)) and the time difference between the current observation and the center observation of the patch.
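To make the construction concrete, here is a minimal sketch of \(g\) and of Eq. (2) for one univariate patch; the choice \(h=\sin\), the sigmoid \(\sigma\), and all variable names are illustrative assumptions.

```python
import torch

def g(theta, dt, h=torch.sin, sigma=torch.sigmoid):
    """g(theta, dt) = theta1 * (sigma(h(theta3 * dt + theta4)) + theta2)."""
    th1, th2, th3, th4 = theta
    return th1 * (sigma(h(th3 * dt + th4)) + th2)

# One univariate patch of 2K+1 = 5 irregular observations (illustrative values).
t = torch.tensor([0.0, 0.3, 1.0, 1.1, 2.4])   # timestamps t_1 .. t_5
x = torch.randn(5)                            # observed values f(t_i)
center = t[2]                                 # timestamp of the (K+1)-th observation
theta = torch.randn(4, requires_grad=True)    # the only trainable parameters

kernel = g(theta, center - t)                 # one weight per observation in the patch
out = (x * kernel).sum()                      # Eq. (2): sum_i f(t_i) g(theta, t - t_i)
```

Note that the kernel adapts to the timestamps of every patch while only the four entries of `theta` are trained, which is the constant-parameter property discussed in Section 3.4.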
Function \(h\) is chosen such that inductive bias is injected into the model. This can allow the model to capture patterns that commonly occur in time series data and also makes its internal operations more interpretable. For example, a function \(h(x)=c\), where \(c\) is some constant, would not be a good candidate for extracting useful features from the time series. On the other hand, we employ more informative functions which can capture useful properties of time series such as trend and seasonality. In particular, we employ the following ten functions: 1. \(h_{1}(x)=x\) 2. \(h_{2}(x)=\sin(x)\) 3. \(h_{3}(x)=\cos(x)\) 4. \(h_{4}(x)=\tan(x)\) 5. \(h_{5}(x)=\exp(x)\) 6. \(h_{6}(x)=x^{2}\) 7. \(h_{7}(x)=x^{3}\) 8. \(h_{8}(x)=\sinh(x)\) 9. \(h_{9}(x)=\cosh(x)\) 10. \(h_{10}(x)=\tanh(x)\) Most of the time, trend is a monotonic function, and therefore functions \(h_{1},h_{6}\) and \(h_{7}\) are chosen to detect trend in time series. Seasonality is a typical characteristic of time series in which the data experiences regular and predictable changes that recur over a defined cycle. Functions \(h_{2},h_{3},h_{9}\) and \(h_{10}\) are responsible for extracting features that take seasonality into account. The approach presented above generates kernels for univariate time series. In the case of multivariate time series, different parameters are learned for the different components of the time series. Therefore, the four parameters \((\theta_{1},\theta_{2},\theta_{3}\) and \(\theta_{4})\) are replaced by vectors of dimension \(m\), i.e., \(\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3},\mathbf{\theta}_{4}\in\mathbb{R}^{m}\). Thus, the function \(g(\mathbf{\theta},\Delta t)\colon\mathbb{R}^{4m+1}\to\mathbb{R}^{m}\) is computed by applying the function \(h(\cdot)\) pointwise to \(m\) different elements. Note that \(\Delta t\) is still a scalar since observation times are identical across all components of the series. ### 3.3 The Time-Parameterized Convolutional (TPC) Layer Given a sample \(\mathbf{X}^{(i)}\), its corresponding observation times \(\{t_{1},t_{2},\ldots,t_{L}\}\), and a time-parameterized function \(g\), the kernel centered at the \(j\)-th observation (i.e., \(\mathbf{X}^{(i)}_{:,j}\)) is constructed as \(\big[g(\boldsymbol{\theta},\Delta t_{j-K}),\ldots,g(\boldsymbol{\theta},0),\ldots,g(\boldsymbol{\theta},\Delta t_{j+K})\big]\), where \(\Delta t_{j+k}\) denotes the time difference between the \((j+k)\)-th observation and the center observation at \(t_{j}\). Note that \(\mathbf{X}^{(i)}_{:,j}\) denotes the \(j\)-th column of matrix \(\mathbf{X}^{(i)}\). Once we construct the kernel, the output of the convolution is computed as follows: \[c=\sum_{l=1}^{m}g(\boldsymbol{\theta},\Delta t_{j-K})_{l}\,\mathbf{X}^{(i)}_{l,j-K}+\ldots+\sum_{l=1}^{m}g(\boldsymbol{\theta},0)_{l}\,\mathbf{X}^{(i)}_{l,j}+\ldots+\sum_{l=1}^{m}g(\boldsymbol{\theta},\Delta t_{j+K})_{l}\,\mathbf{X}^{(i)}_{l,j+K}\] where \(c\in\mathbb{R}\). In some cases, features of the multivariate time series might be missing. In such cases, the above operation would compute the sum of a smaller number of terms (since missing features are ignored). Thus, we also experimented with the mean function: \[c=\frac{1}{\nu}\Bigg{(}\sum_{l=1}^{m}g(\boldsymbol{\theta},\Delta t_{j-K})_{l}\,\mathbf{X}^{(i)}_{l,j-K}+\ldots+\sum_{l=1}^{m}g(\boldsymbol{\theta},0)_{l}\,\mathbf{X}^{(i)}_{l,j}+\ldots+\sum_{l=1}^{m}g(\boldsymbol{\theta},\Delta t_{j+K})_{l}\,\mathbf{X}^{(i)}_{l,j+K}\Bigg{)} \tag{3}\] where \(\nu\) denotes the actual number of features (out of the \((2K+1)m\) features, those that are not missing). Thus, the convolution between a sequence of observations and the kernel outputs a real number.
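A minimal sketch of this masked mean aggregation at a single interior center position \(j\) follows; the NaN encoding of missing features, the choice \(h=\sin\), and the per-channel parameter layout are our own assumptions.

```python
import torch

def tpc_at(X, t, j, K, theta, h=torch.sin):
    """Eq. (3) at center j: mean of g-weighted observed features in the patch.

    X     -- (m, L) multivariate series; missing entries encoded as NaN (assumption)
    t     -- (L,) observation timestamps
    theta -- (4, m) per-channel parameters theta1..theta4
    """
    th1, th2, th3, th4 = theta                  # each of shape (m,)
    patch = X[:, j - K:j + K + 1]               # (m, 2K+1), assumes an interior j
    dt = t[j] - t[j - K:j + K + 1]              # time differences to the center
    # g(theta, dt) applied pointwise: one weight per channel and patch position.
    w = th1[:, None] * (torch.sigmoid(h(th3[:, None] * dt + th4[:, None])) + th2[:, None])
    obs = ~torch.isnan(patch)                   # mask of the nu observed features
    prod = torch.where(obs, patch * w, torch.zeros(()))
    return prod.sum() / obs.sum()               # divide by nu, as in Eq. (3)
```

Sliding \(j\) over all \(L\) positions (with zero padding at the borders) and stacking \(p\) such kernels yields the \(L\times p\) output described next.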
We use zero padding and apply the kernel to all observations; therefore, we obtain a vector \(\mathbf{c}\in\mathbb{R}^{L}\). Furthermore, similar to standard CNNs, not a single kernel but a collection of kernels is generated and applied to the input. These kernels might correspond to different ones of the functions defined above (i.e., \(h_{1},\ldots,h_{10}\)). Suppose that we use \(p\) different kernels in total (potentially of different functions). Then, the output of the TPC layer for the multivariate and irregularly sampled time series \(\mathbf{X}^{(i)}\) is computed as: \[TPC(\mathbf{X}^{(i)},\mathbf{t}^{(i)})=\big{\|}_{k=1}^{p}\mathbf{c}_{k}\in\mathbb{R}^{L\times p}\] where \(\|\) is the concatenation operator between vectors and \(\mathbf{t}^{(i)}\) is a vector that stores the observation times of the time series. ### 3.4 Properties of the TPC Layer Constant number of parameters. An interesting property of the TPC layer is that the number of parameters of each kernel is constant and equal to \(4m\), regardless of the size of the kernel. This is because the kernel is dynamically generated based on the observation times and only \(4m\) trainable parameters are involved. This is in contrast to standard convolutional layers, where the number of parameters is equal to the size of the kernel plus the bias. Thus, the number of parameters of the TPC layer will be smaller than that of a standard convolutional layer when the size of the kernels is greater than 4. This is likely to lead to less complex models and might significantly reduce overfitting. Time Complexity. The time complexity of the proposed TPC layer is approximately \(\mathcal{O}(L\ell mp)\) for kernel size \(\ell\), similar to the vanilla 1D convolution. Since TPC relies on convolutions, which take advantage of parallel computations, it can be trained faster than recurrent neural network architectures. The complexity comparison becomes even more advantageous when compared with continuous-time models, such as neural ODEs, which are significantly slower than RNNs [14]. Learning Properties. The proposed TPC layer introduces time-varying convolutional kernels, as opposed to the fixed kernels commonly employed in traditional convolutional neural networks (CNNs). In other words, the employed kernels do not remain fixed throughout the whole length of the input series. This particular trait of TPC does not explicitly force weight sharing between different subsequences of the time series during convolution. Weight sharing is, however, implicitly modeled via the learnable representations of time that are used to initialize the kernel weights. This is based on the assumption that observations mapped to similar time embeddings will probably share similar weight values in the convolution operation. The proposed approach still maintains the ability to locally aggregate information by retaining a fixed kernel size in the convolution operation, so the output of the convolution is locally aggregated while still incorporating the benefits of time-varying convolutional kernels. Invariance Properties. If some patterns in the time series are identical, both in terms of the observations and in terms of the differences in time between the observations, then the TPC layer will produce the same output for those two patterns.
For example, let \(\mathbf{x}_{i}=(x_{i-K},\ldots,x_{i},\ldots,x_{i+K})\) and \(\mathbf{x}_{j}=(x_{j-K},\ldots,x_{j},\ldots,x_{j+K})\) denote two sequences of values and \(\mathbf{t}_{i}=(t_{i-K},\ldots,t_{i},\ldots,t_{i+K})\) and \(\mathbf{t}_{j}=(t_{j-K},\ldots,t_{j},\ldots,t_{j+K})\) denote their respective observation times. If \(\mathbf{x}_{i}=\mathbf{x}_{j}\) holds and \(\Delta\mathbf{t}_{i}=\Delta\mathbf{t}_{j}\) also holds, where \(\Delta\mathbf{t}_{i}=(t_{i-K}-t_{i},\ldots,0,\ldots,t_{i+K}-t_{i})\) and \(\Delta\mathbf{t}_{j}=(t_{j-K}-t_{j},\ldots,0,\ldots,t_{j+K}-t_{j})\), then the kernels produced for these two sequences of values are identical and, therefore, the layer produces the same output. Furthermore, the different functions defined in the previous subsection make the kernels invariant to different transformations. For instance, in the above example, suppose that \(\Delta\mathbf{t}_{i}\neq\Delta\mathbf{t}_{j}\), and that the \(k\)-th element of the second sequence is equal to \((k+1)2\pi\) times the corresponding element of the first sequence for \(k\in\{1,\ldots,2K+1\}\). Then, the TPC layer equipped with the \(h_{2}\) function (i.e., the \(\sin(\cdot)\) function) and with \(\theta_{3}=1\) and \(\theta_{4}=0\) would produce the same output for both patterns. Such a function can capture periodic temporal correlations. Figure 1: (Left) An encoder that consists of the proposed TPC layer, convolutions and pooling layers and produces a fixed-size latent representation \(\mathbf{z}\). (Right) An encoder-decoder framework that reconstructs the series from the input using TPC and linear layers. ### 3.5 TPCNN Framework for Irregularly Sampled Time Series We next discuss how the TPC layer can be integrated into neural network architectures for dealing with various tasks that involve irregular time series, such as interpolation and classification. Following previous work, we propose an encoder-decoder framework, the so-called Time-Parameterized Convolutional Neural Network (TPCNN) framework. In what follows, we give more details about the two main components of the proposed framework, namely its encoder and its decoder. TPCNN Encoder. This module is responsible for mapping the input time series into a latent vector which captures their overall shape and their specificities. The first layer of the encoder is an instance of the TPC layer introduced above. The TPC layer receives as input the irregular and multivariate series \(\mathbf{X}^{(i)}\in\mathbb{R}^{m\times L}\) and the corresponding vector of observation times \(\mathbf{t}^{(i)}=\{t_{1},t_{2},...,t_{L}\}\). The output of the TPC layer is then successively fed to vanilla convolution layers, which can capture longer-range dependencies of the continuous latent representation of the time series. A pooling layer follows each convolution layer, including TPC. By down-sampling the output, such layers are expected to extract features that are good indicators of class membership or of the shape of the time series. Finally, a fully-connected layer is applied to the output of the last convolution layer to extract a low-dimensional representation \(\mathbf{z}^{(i)}\in\mathbb{R}^{d}\) of the series. TPCNN Decoder. This part of the architecture is responsible for reconstructing the multivariate input series from the latent vector produced by the encoder. The latent vector \(\mathbf{z}\) produced by the encoder is first given as input to a fully-connected layer whose objective is to perform rescaling.
The emerging vector is then passed on to another fully-connected layer which produces a matrix \(\hat{\mathbf{X}}^{(i)}\) that matches the dimension of the input time series. These reconstructed time series are then compared against the input series to evaluate the autoencoder's performance. Interpolation and Classification Setting. Note that some components of the TPCNN framework depend on the considered task, i.e., interpolation or classification. For instance, in the interpolation setting, each time a kernel of the TPC layer is applied to some subset of the input series, the observation that lies at the center of that subset is masked such that the model does not have direct access to it. Such masking is not performed in the classification setting. The reconstruction loss of a standard autoencoder is typically measured using the mean squared error (MSE) between the original input and the reconstructed output. For an input time series \(\mathbf{X}^{(i)}\), the MSE loss is computed as: \[\mathcal{L}_{interpolation}=\frac{1}{|\mathcal{O}|}\sum_{j\in\mathcal{O}}\left\|\mathbf{X}^{(i)}_{:,j}-\hat{\mathbf{X}}^{(i)}_{:,j}\right\|_{2}^{2}\] where \(\mathcal{O}\) is a set that contains the indices of the observed values and \(\hat{\mathbf{X}}^{(i)}\) denotes the reconstructed series produced by the decoder as a function of the latent vector \(\mathbf{z}\). The encoder-decoder framework of Figure 1 (Right) is combined with the MSE loss for the interpolation task. Additionally, as already discussed, masking is performed on the center element of each slice of the input series, and the rest of the observed values of the slice are used for interpolation. In the case of classification, the latent representation \(\mathbf{z}\) that is generated by the encoder, and which preserves the information about the multivariate time series' dependencies, can be directly fed to a classifier module to make predictions. In the experiments that follow, we employ a 2-layer multi-layer perceptron (MLP) with the \(ReLU\) activation function. Thus, in the case of a classification problem with \(|\mathcal{C}|\) classes, the output is computed as follows: \[\mathbf{p}=softmax(MLP(\mathbf{z}))\] Then, given a training set consisting of time series \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\), we use the negative log-likelihood of the correct labels as the training loss: \[\mathcal{L}_{classification}=-\sum_{i=1}^{N}\sum_{j=1}^{|\mathcal{C}|}\mathbf{y}^{(i)}_{j}\log\mathbf{p}^{(i)}_{j}\] where \(\mathbf{y}_{j}^{(i)}\) is equal to 1 if \(\mathbf{X}^{(i)}\) belongs to the \(j\)-th class, and 0 otherwise. The application of the TPCNN model to the above two scenarios is illustrated in Figure 1 (classification on the left and interpolation on the right). ## 4 Experiments In this section, we describe the experimental setup and methodology used to evaluate the performance of our proposed time-parameterized convolutional layer on various tasks involving irregular time series, including interpolation and classification. ### 4.1 Datasets We evaluate the performance of the proposed architecture and the baselines on the following real-world datasets: PhysioNet: The PhysioNet Challenge 2012 dataset [28] comprises 8000 multivariate time series that correspond to records from the first 48 hours of a patient's admission to the intensive care unit (ICU). Measurements include 37 variables which can be missing at different steps and occur at irregular intervals.
Half of the instances are labeled, with 13.8% of instances being in the positive class (in-hospital mortality). For the interpolation experiments, we used all 8000 instances, and for the classification experiments, we used the 4000 labeled instances. We use the same experimental protocols and preprocessing steps as in [24]. MIMIC-III: The MIMIC-III dataset [13] consists of multivariate health records, which can have missing values, collected at Beth Israel Deaconess Medical Center between 2001 and 2012. Based again on the preprocessing strategy of [24], we extract 53211 samples including 12 features. Given the first 48 hours of data, the task is to predict in-hospital mortality, with 8.1% of the data samples in the positive class. Human Activity: The human activity dataset contains time series data from five individuals performing various activities (such as walking, sitting, lying, standing, etc.), based on the 3D positions of tags attached to their belts, chest and ankles (12 features in total). Following the preprocessing procedures outlined by [24], a dataset of 6554 sequences and 50 time steps is extracted. The task for this dataset is to classify each time step in the series into one of the eleven activities. ### 4.2 Experimental Setting We next explain the experimental setting we follow for interpolation and classification, similar to the work of [27]. In the case of interpolation, we study all instances (labeled and unlabeled) from the PhysioNet dataset. The dataset is partitioned into an 80% training set and a 20% test set, with a fraction (20%) of the training data serving as the validation set. The interpolation task is to predict values for the unobserved points based on a subset of observed data points. This is executed using different percentages of observed steps, which vary between 50% and 90% of the total available steps. For this experiment, we perform five different runs and report performance on the unobserved data using the mean squared error (MSE) metric. We also use the labeled data from the PhysioNet, MIMIC-III and Human Activity datasets to conduct classification experiments. For the physiological data of PhysioNet and MIMIC-III, the classification task considers the entire time series, whereas, in the context of the human activity dataset, classification is performed for each time step in the series. We follow the same train, validation and test splitting procedure as described in the interpolation setting. For this experiment, we perform five different runs to provide the classification performance on the different datasets. For the PhysioNet and MIMIC-III datasets, we report performance using the area under the ROC curve (AUC) score, due to class imbalance. For the Human Activity dataset, we assess the model performance using the accuracy metric. The validation set is used to select the best set of hyperparameters for our models via grid search. ### 4.3 Baseline Models In this study, we conduct a thorough evaluation of several deep learning architectures as baseline models for performance comparison. These models are specifically designed to handle irregular time series and include variations of recurrent neural networks (RNNs), attention modules and encoder-decoder architectures. The specific models evaluated in this study include: 1. Basic RNN variants including: _RNN-Impute_, _RNN-\(\Delta_{t}\)_, _RNN-decay_, _GRU-D_.
The _RNN-Impute_ model employs a method to impute missing data points based on the weighted average between the last observation of the time series and the total mean of the variable in the training set [3]. In _RNN-\(\Delta_{t}\)_, the input to the RNN is extended with a missing indicator for the variable and the time interval \(\Delta_{t}\) since the last observed point. The _RNN-decay_ is an RNN with hidden states that decay exponentially over time [18, 3], whereas _GRU-D_ employs exponential decay on both hidden states and input [3]. 2. Other RNN variants, such as _Phased-LSTM_, _IP-Nets_, _SeFT_, _RNN-VAE_. The _Phased-LSTM_ model incorporates time irregularity through the use of a time gate that controls access to the hidden and cell states of the LSTM [19]. _IP-Nets_ are Interpolation-Prediction Networks (IPN), which perform interpolation prior to prediction with an RNN on the transformed equally-spaced intervals, using semi-parametric interpolation layers [26]. The _SeFT_ model employs learnable set functions for time series and combines the representations with an attention-based mechanism [11]. _RNN-VAE_ is a standard variational RNN encoder-decoder. 3. ODE variants, such as _ODE-RNN_, _L-ODE-RNN_, _L-ODE-ODE_. In _ODE-RNN_, neural ODEs model the dynamics of the hidden state, and an RNN updates the hidden state in the presence of new observations [24]. Similarly, _L-ODE-RNN_ and _L-ODE-ODE_ are latent ODEs, with the former combining an RNN encoder and a neural ODE decoder [4], and the latter an ODE-RNN encoder and a neural ODE decoder [24]. 4. Attention-based frameworks, including _mTAND_. The multi-time attention network, _mTAND_, interpolates missing data using a learnable attention similarity kernel between observations, which are accessed based on trainable temporal embeddings [27]. ### 4.4 Results Interpolation of missing data. In Table 1 we present the results of the experimental setting designed for interpolation, as described in Section 4.2. For different percentages of observed values (i.e., ranging from 50% to 90%), we record the interpolation performance on the reconstructed irregularly sampled multivariate time series of the PhysioNet dataset using the MSE metric. We compare the proposed TPCNN model to different baseline methods designed for interpolation, including RNN-VAE, L-ODE-RNN, L-ODE-ODE and mTAND-Full (i.e., the mTAND encoder-decoder framework for interpolation). \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Model** & \multicolumn{4}{c}{**Mean Squared Error** (\(\times 10^{-3}\))} \\ \hline RNN-VAE & 13.418 \(\pm\) 0.008 & 12.594 \(\pm\) 0.004 & 11.887 \(\pm\) 0.007 & 11.133 \(\pm\) 0.007 & 11.470 \(\pm\) 0.006 \\ L-ODE-RNN & 8.132 \(\pm\) 0.020 & 8.140 \(\pm\) 0.018 & 8.171 \(\pm\) 0.030 & 8.143 \(\pm\) 0.025 & 8.402 \(\pm\) 0.022 \\ L-ODE-ODE & 6.721 \(\pm\) 0.109 & 6.816 \(\pm\) 0.045 & 6.798 \(\pm\) 0.143 & 6.850 \(\pm\) 0.066 & 7.142 \(\pm\) 0.066 \\ mTAND-Full & **4.139 \(\pm\) 0.029** & **4.018 \(\pm\) 0.048** & **4.157 \(\pm\) 0.053** & **4.410 \(\pm\) 0.149** & **4.798 \(\pm\) 0.036** \\ TPCNN (ours) & 5.993 \(\pm\) 0.058 & 5.797 \(\pm\) 0.063 & 5.654 \(\pm\) 0.108 & 5.624 \(\pm\) 0.084 & 5.532 \(\pm\) 0.140 \\ \hline **Observed**(\%) & 50\% & 60\% & 70\% & 80\% & 90\% \\ \hline \hline \end{tabular} \end{table} Table 1: Performance for interpolation with different percentages of observed time points on _PhysioNet_. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests.
We mention in bold the best-performing method and underline the results of the second best-performing method. We also perform statistical significance tests on the studied methods, which results in two distinct models being highlighted as achieving the highest performances. We can observe that the best-performing method is mTAND-Full, which is closely followed by the proposed TPCNN model. The rest of the baselines show significantly worse performance compared to the proposed TPCNN, including L-ODE-ODE, an ODE-based method that is otherwise highly accurate in the irregular setting. The performance of the proposed model ranges from \(\sim 6.0\times 10^{-3}\) to \(\sim 5.5\times 10^{-3}\) in terms of MSE, showing slightly improved performance as the percentage of missing observations decreases. On the other hand, mTAND-Full shows slightly degrading performance for a smaller percentage of missing data, with RNN-VAE being the only baseline method that follows the same behavior. Classification. We also report in Table 2 the results of the different baselines, as described in Section 4.3, and the proposed TPCNN model on classification for the labeled instances of the PhysioNet, MIMIC-III and Human Activity datasets. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**AUC**} & \multicolumn{2}{c}{**Accuracy**} \\ & **PhysioNet** & **MIMIC-III** & **Human Activity** \\ \hline RNN-Impute & 0.764 \(\pm\) 0.016 & 0.8249 \(\pm\) 0.0010 & 0.859 \(\pm\) 0.004 \\ RNN-\(\Delta_{t}\) & 0.787 \(\pm\) 0.014 & 0.8364 \(\pm\) 0.0011 & 0.857 \(\pm\) 0.002 \\ RNN-Decay & 0.807 \(\pm\) 0.003 & 0.8392 \(\pm\) 0.0012 & 0.860 \(\pm\) 0.005 \\ RNN GRU-D & 0.818 \(\pm\) 0.008 & 0.8270 \(\pm\) 0.0010 & 0.862 \(\pm\) 0.005 \\ Phased-LSTM & 0.836 \(\pm\) 0.003 & 0.8429 \(\pm\) 0.0035 & 0.855 \(\pm\) 0.005 \\ IP-Nets & 0.819 \(\pm\) 0.006 & 0.8390 \(\pm\) 0.0011 & 0.869 \(\pm\) 0.007 \\ SeFT & 0.795 \(\pm\) 0.015 & 0.8485 \(\pm\) 0.0022 & 0.815 \(\pm\) 0.002 \\ RNN-VAE & 0.515 \(\pm\) 0.040 & 0.5175 \(\pm\) 0.0312 & 0.343 \(\pm\) 0.040 \\ ODE-RNN & 0.833 \(\pm\) 0.009 & **0.8561 \(\pm\) 0.0051** & 0.885 \(\pm\) 0.008 \\ L-ODE-RNN & 0.781 \(\pm\) 0.018 & 0.7734 \(\pm\) 0.0030 & 0.838 \(\pm\) 0.004 \\ L-ODE-ODE & 0.829 \(\pm\) 0.004 & **0.8559 \(\pm\) 0.0041** & 0.870 \(\pm\) 0.028 \\ mTAND-Full & **0.858 \(\pm\) 0.004** & **0.8544 \(\pm\) 0.0024** & **0.910 \(\pm\) 0.002** \\ \hline TPCNN (ours) & 0.833 \(\pm\) 0.001 & 0.8380 \(\pm\) 0.0011 & 0.897 \(\pm\) 0.004 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance for **per-sequence** classification on _PhysioNet_ and _MIMIC-III_ and **per-time-point** classification on _Human Activity_ datasets. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **PhysioNet** & **MIMIC-III** & **Human Activity** \\ \hline \multicolumn{4}{c}{Size (parameters)} \\ mTAND-Full & 1.3M & 1.4M & 1.6M \\ TPCNN (ours) & 350K & 100K & 300K \\ \hline \multicolumn{4}{c}{Time per epoch (\(min\))} \\ mTAND-Full & 0.06 & 0.5 & 0.006 \\ TPCNN (ours) & 0.15 & 0.2 & 0.008 \\ \hline \hline \end{tabular} \end{table} Table 3: Memory and computational costs, in terms of size (number of parameters) and time per epoch (in minutes).
For the first two imbalanced datasets, we use AUC as the evaluation metric and perform per-sequence binary classification, whereas for the Human Activity dataset we report accuracy for the task of per-time-point classification. For all datasets, we mark in bold the best-performing methods and underline the results of the second best-performing methods. Because several differences in performance are not statistically significant, multiple methods can be among the first or second best-performing. For the PhysioNet and Human Activity datasets, our proposed TPCNN framework is the second-best method in terms of metrics, surpassed by the attention-based model mTAND-Full. More specifically, on PhysioNet the proposed model performs as well as the ODE variants (i.e., ODE-RNN, L-ODE-ODE), which are, however, significantly slower in terms of computational time, as mentioned in [27]. In Human Activity classification, TPCNN shows quite improved performance, being \(\sim 1\%\) worse than mTAND-Full. However, in the MIMIC-III classification, the proposed TPCNN model lies among the third-best-performing methods, being surpassed by several baselines. In this dataset, the ODE-RNN, L-ODE-ODE and mTAND-Full methods achieve the highest AUC scores, followed by the SeFT model, which, however, performs significantly worse in the classification experiments on the other two datasets. The significant performance advantage of mTAND-Full in this task can be attributed to its design, which jointly performs interpolation and classification while directly attending only to observed time points. On the other hand, the proposed model handles missing data inside the convolutional kernel of the TPC layer by applying the mean aggregator of Equation 3. The aggregation neighborhood, however, is constrained by the kernel size and remains fixed throughout the series length. Extending the proposed architecture to incorporate size-varying kernels could further improve the learning capabilities of the TPC layer. Computational cost. In Table 3 we provide a comparison in terms of memory and computational costs between the proposed TPCNN and its main competitor mTAND-Full. We report the size, i.e., the number of parameters, and the time per epoch in minutes for the two methods and the three real-world datasets. Comparisons of mTAND against previous state-of-the-art models, among which the ODE-based methods, as shown in [27], have demonstrated that the former is significantly faster (i.e., approximately 100 times) than ODE-based methods that make use of an ODE solver. As we can observe in Table 3, TPCNN is comparable to mTAND-Full in terms of time cost. When it comes to the size of the model, the proposed TPCNN uses significantly fewer parameters than mTAND-Full while maintaining competitive performance. More specifically, TPCNN uses some hundreds of thousands of parameters, i.e., \(100-350\) thousand parameters, while the size of mTAND-Full scales to millions of parameters, i.e., approximately 1.5 million. This comparison highlights the high efficiency of convolutions in the irregular sampling setting, which allows the training of neural networks that are significantly smaller and faster than the baselines. Therefore, the proposed TPCNN can easily scale to larger datasets and remains efficient even when trained with fewer parameters. Figure 2: Reconstruction results using the proposed TPCNN model on the synthetic dataset. Three different samples of the test set are visualized.
Experiments on synthetic data. Following the line of work of [27], we reproduce their synthetic sinusoidal dataset, which consists of 1000 samples, each describing a time series of 100 time points where \(t\in[0,1]\). Given 10 reference points, an RBF kernel with bandwidth 100 is used to obtain local interpolations at the 100 time steps. For each sample, 20 time points are randomly selected so as to represent an irregularly spaced series. An 80%/20% split extracts the respective train and test sets. We employ the encoder-decoder interpolation framework of Figure 1 (Right). Contrary to the interpolation setting for PhysioNet, we give as input the 20 irregular time steps, without the missing points, and reconstruct each observation based on the rest using TPCNN with the functions \(h_{2}(x)=\sin(x)\) (blue points) and \(h_{5}(x)=\exp(x)\) (green points). We visualize the obtained reconstructions for 3 samples of the test set in Figure 2. Each plot consists of the true values (ground truth) for a test sample, while the dark markers represent the 20 observed input data points (observed data), and the blue and green markers represent the 20 predicted values (reconstruction) using the \(\sin(\cdot)\) and \(\exp(\cdot)\) functions, respectively. By employing the function \(h_{2}(x)=\sin(x)\), we are able to achieve a lower MSE loss compared to the ones achieved with the rest of the time functions defined in Section 3.2. We should mention here that, in case domain knowledge is available, it can be incorporated into the proposed TPCNN method via the employed time function, which is likely to lead to performance improvements. Ablation study. We also present in Figure 3 an ablation study on different time functions employed for parameterizing the weights of the convolutional kernels. The performance metric (AUC or accuracy) on the test set is reported for the classification task on the real-world datasets, given a different time function or combination of time functions. For all three datasets, we examine a subset of the functions described in Section 3.2. More specifically, we employ \(h_{1}(x),h_{2}(x),h_{3}(x),h_{5}(x)\) (i.e., \(\mathrm{lin}(\cdot),\sin(\cdot),\cos(\cdot),\exp(\cdot)\)) and their combinations (e.g., \(\{\sin(\cdot),\cos(\cdot)\},\{\sin(\cdot),\cos(\cdot),\mathrm{lin}(\cdot),\exp(\cdot)\}\)). We observe that different functions may contribute more or less to the classification performance for a given dataset. In PhysioNet, while the linear function \(\mathrm{lin}(\cdot)\) and the exponential function \(\exp(\cdot)\) lead to the lowest AUC on the test set, when combined with \(\sin(\cdot)\) and \(\cos(\cdot)\) they achieve a performance improvement of \(\sim 1\%\). Additionally, in MIMIC-III classification, the \(\cos(\cdot)\) and \(\exp(\cdot)\) functions show the highest test AUC, while \(\sin(\cdot)\) and \(\mathrm{lin}(\cdot)\) (i.e., the linear function) lead to a performance reduced by \(\sim 4\%\). At the same time, the combination of functions improves performance but does not surpass \(\cos(\cdot)\) and \(\exp(\cdot)\) employed alone. Finally, on the Human Activity dataset, the \(\cos(\cdot)\) function and the combination \(\{\sin(\cdot),\cos(\cdot),\mathrm{lin}(\cdot),\exp(\cdot)\}\), followed by the \(\exp(\cdot)\) function, achieve the highest test accuracy. Figure 3: Ablation study on different time functions for the parameterization of convolutional kernels for each dataset. Each plot captures the performance (AUC or Accuracy) for each function or combination of functions on the test set.
The linear \(\mathrm{lin}(\cdot)\) function again, in this case, leads to the lowest accuracy score compared to the rest of the time functions. During training, we can observe that the linear time function, followed by a standard non-linear activation (e.g., ReLU), suffers from slow convergence and consequently worse performance when used for the parameterization of the convolutional kernel weights. On the other hand, periodic time functions and the exponential function seem to describe the time dynamics more efficiently and lead to smoother training when used for parameterizing convolutions. This experiment highlights the explainability aspects of the proposed TPCNN model, since it allows us to determine which time functions better describe the considered time series. Furthermore, under certain conditions, the time series could be considered a composition of such functions. ## 5 Conclusion In this work, we carefully designed and experimentally evaluated a novel time-parameterized convolutional neural network, which incorporates learnable time functions into the weights of convolutional kernels. The proposed method generalizes well to different tasks involving irregularly sampled multivariate time series while being computationally efficient and interpretable.
2310.13249
TempGNN: Temporal Graph Neural Networks for Dynamic Session-Based Recommendations
Session-based recommendations which predict the next action by understanding a user's interaction behavior with items within a relatively short ongoing session have recently gained increasing popularity. Previous research has focused on capturing the dynamics of sequential dependencies from complicated item transitions in a session by means of recurrent neural networks, self-attention models, and recently, mostly graph neural networks. Despite the plethora of different models relying on the order of items in a session, few approaches have been proposed for dealing better with the temporal implications between interactions. We present Temporal Graph Neural Networks (TempGNN), a generic framework for capturing the structural and temporal dynamics in complex item transitions utilizing temporal embedding operators on nodes and edges on dynamic session graphs, represented as sequences of timed events. Extensive experimental results show the effectiveness and adaptability of the proposed method by plugging it into existing state-of-the-art models. Finally, TempGNN achieved state-of-the-art performance on two real-world e-commerce datasets.
Eunkyu Oh, Taehun Kim
2023-10-20T03:13:10Z
http://arxiv.org/abs/2310.13249v1
# TempGNN: Temporal Graph Neural Networks for Dynamic Session-Based Recommendations ###### Abstract. Session-based recommendations which predict the next action by understanding a user's interaction behavior with items within a relatively short ongoing session have recently gained increasing popularity. Previous research has focused on capturing the dynamics of sequential dependencies from complicated item transitions in a session by means of recurrent neural networks, self-attention models, and recently, mostly graph neural networks. Despite the plethora of different models relying on the order of items in a session, few approaches have been proposed for dealing better with the temporal implications between interactions. We present Temporal Graph Neural Networks (TempGNN), a generic framework for capturing the structural and temporal dynamics in complex item transitions utilizing temporal embedding operators on nodes and edges on dynamic session graphs, represented as sequences of timed events. Extensive experimental results show the effectiveness and adaptability of the proposed method by plugging it into existing state-of-the-art models. Finally, TempGNN achieved state-of-the-art performance on two real-world e-commerce datasets. recommender systems, session-based recommendations, graph neural networks, temporal embedding Temporal information is typically encoded as an absolute time pattern and a relative time pattern. However, such temporal encoding strategies have rarely been considered in SRRs, which restricts their ability to capture the significance of interactions at different times. We conjecture that there are two reasons why it is difficult to infer temporal meaning from session data. First, unlike in sequential recommendations, a session usually has a very short length and a short expiration period, so the time differences between interactions in a session are very small. Second, because user information is not clearly provided for security reasons, temporal information can be extracted only within the limited time span of a single session. To tackle the aforementioned problems, we propose Temporal Graph Neural Networks (TempGNN), a generic framework for capturing the structural and temporal dynamics in complex item transitions on dynamic session graphs, represented as sequences of timed events. To facilitate GNNs, we propose a temporal embedding method for nodes and edges. The embedding on nodes models the time differences between the prediction time and the timestamps of items, whereas the embedding on edges models the time differences between events in a session. In addition, the proposed temporal encoding approach considers time frequency to avoid learning biased towards a specific time range with high popularity. We also propose a novel method for combining temporal information with an item through a gate network, which allows the model to consider the degree to which time and an item depend on each other. Extensive experimental results using two real-world e-commerce datasets show that our method outperforms various existing state-of-the-art models, confirming the effectiveness and adaptability of the proposed temporal embedding. ## 2. Related Work **Session-Based Recommendations.** A session is formally represented as a variable-length event sequence with clear boundaries (Song et al., 2015).
To build better-performing SBRs, a GNN has been widely adopted in recent years for modeling the complex item transitions within and across sessions by transforming a session into a graph structure. SR-GNN (Song et al., 2015) adopted a gated graph neural network (GGNN) (Song et al., 2015) and an attention mechanism for predicting the next item in a given session. NISER+ (Beng et al., 2015) extended SR-GNN to address popularity bias by introducing L2 normalization. SGNN-HN (Song et al., 2015) introduced a virtual node that considers unconnected items and a highway gate to reduce overfitting. However, previous studies still do not perform well in comprehending complicated transition relationships because they do not consider the time differences between the transitions at all. Unlike other methods, our model can capture item transition patterns better by injecting temporal information from a session into GNNs.

**Temporal Embeddings in Session-Based Recommendations.** TA-GNN (Beng et al., 2015) utilized time interval information between nodes by introducing time-aware adjacency matrices. This method assumes that two interactions are more relevant when the time interval between them is short. TiRec (Tie et al., 2015) also modeled a session using time intervals. This method adopted a fixed set of sinusoid functions as a basis. KSTT (Tie et al., 2015) introduced three types of time embedding methods that utilize the time difference between each behavior and the prediction time: time bucket embedding, time2vec (Dong et al., 2015), and Mercer time embedding (Marcel et al., 2016). These multiple temporal embeddings were expected to capture different time patterns. The above methods attempted to capture a user's intention more accurately using either time intervals between interactions or time differences with regard to the prediction timing, whereas our model adopts both to maximize the effect.

**Temporal Embeddings in Sequential Recommendations.** Many attempts have been made to encode temporal information in sequential recommendations. PosRec (Paszl et al., 2015) fully exploited positional information through dual-positional encoding with position-aware GGNNs. TGSRec (Beng et al., 2015) unified sequential patterns and temporal collaborative signals based on Bochner's theorem (Bochner, 2015). However, the small number of learnable vectors in these methods was insufficient to capture a large amount of time information. Thus, temporal embedding through bucketizing has been widely used. TiSASRec (Tie et al., 2015) modeled the time intervals between items in a sequence. ATRank (Tie et al., 2015) bucketized a time feature with an exponentially increasing time range. TASER (Tie et al., 2015) jointly learned user interests and two typical temporal patterns in absolute and relative times. However, learning with these methods could be skewed in favor of several buckets with high popularity, because the buckets are divided without considering their frequency of appearance. In contrast, our time encoding approach additionally considers the time frequency, which ensures that each bucket has the same amount of temporal information to avoid biased learning. This is effective in preventing the overfitting of temporal embeddings, regardless of the distribution of time. In addition, we propose a novel method combining temporal information with an item through a gate network that adjusts each weight by considering the relationship between time and an item.

## 3. Problem Definition
A dynamic session-based recommendation predicts the next item based on an ongoing session taking into account the recommendation timing, as shown in Figure 1. In other words, the next click may change according to the prediction timing. We formulate this task as follows. Let \(I=\left\{i_{1},i_{2},...,i_{|I|}\right\}\) denote all unique items in all sessions, where \(|I|\) is the number of unique items. A session \(S=[(i_{1}^{S},ts_{1}^{S}),(i_{2}^{S},ts_{2}^{S}),...,(i_{|S|}^{S},ts_{|S|}^{S})]\) is a sequence of items and their timestamps, where \(i_{j}^{S}\in I\) and \(ts_{j}^{S}\) are the \(j\)-th clicked item and timestamp in \(S\), and \(|S|\) is the length of the session. Given a session and a prediction timestamp \(ts_{|S|+1}^{S}\), we aim to predict the next clicked item \(i_{|S|+1}^{S}\). Items with the highest top-K scores are recommended by estimating a probability vector \(\hat{y}\in\mathbb{R}^{|I|}\) corresponding to the relevance scores for the unique items.

Figure 2. Distributions of time differences in three datasets. The blue distribution indicates time differences from the prediction timing (TN), and the green one is time intervals between interactions in a session (TE).

## 4. Method

Our workflow consists of three main parts: graph construction, information propagation in a GNN, and attention and prediction, as shown in Figure 3. A detailed description of each process is given in the following sections.

### Graph Construction

We construct a graph \(G=(V,E)\) from each session \(S\) to better capture user behavior. The vertices of the graph \(V=\{v_{1},v_{2},...,v_{|V|}\}\) denote unique nodes obtained from the combinations of items and times. Edges \(E=\{e_{1},e_{2},...,e_{|E|}\}\) are obtained through the temporal embedding of edges. The dimensions of the embeddings can be set differently, but here they are all set to \(d\) for convenience.

#### 4.1.1. Item Embedding

Each unique item is mapped onto a trainable vector \(i\in\mathbb{R}^{d}\) in \(I\). We apply \(L_{2}\) normalization to each embedding during training and inference to reduce the effect of item popularity on the model, because items with a high frequency during training have a high \(L_{2}\) norm, whereas less popular items do not. Detailed descriptions and experiments can be found in (Bang et al., 2017). Therefore, our model uses the normalized item embedding \[\tilde{i}=\frac{i}{\|i\|_{2}}. \tag{1}\]

#### 4.1.2. Temporal Embedding for Nodes

We define the temporal embedding for nodes (TN) as a feature of the difference between the prediction timing and the click timestamp of each item in a session. This encodes how much temporal information should be utilized in predictions, taking into consideration how long ago the interaction occurred. Specifically, we first bucketize the time differences. This is a prerequisite for utilizing continuous temporal information, which is difficult to learn directly. If there are too few buckets, utilizing temporal information has little effect, whereas if the number of buckets is too high, memory is wasted with no performance boost. After preparing an appropriate number of buckets, we apply a quantile function to the time differences to ensure that each bucket holds the same amount of information. This is, of course, performed using the training data. Therefore, time differences that are not observed during training fall into one of the two end buckets.
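As a concrete illustration of this quantile bucketizing step, a minimal sketch (ours, not from the paper; the function names are hypothetical) fits equal-frequency bucket boundaries on the training time differences and maps new time differences to bucket indices:

```python
import numpy as np

def fit_quantile_buckets(train_time_diffs, n_buckets):
    """Choose bucket boundaries so each bucket holds an equal share
    of the training time differences (equal-frequency bucketizing)."""
    quantiles = np.linspace(0.0, 1.0, n_buckets + 1)[1:-1]  # interior cut points
    return np.quantile(train_time_diffs, quantiles)

def bucketize(time_diff, boundaries):
    """Map a time difference to a bucket index; values never seen during
    training fall into one of the two end buckets."""
    return int(np.searchsorted(boundaries, time_diff))

# Hypothetical training time differences (seconds) and a lookup.
train_diffs = np.random.exponential(scale=300.0, size=10_000)
boundaries = fit_quantile_buckets(train_diffs, n_buckets=40)  # 40 TN buckets
bucket_idx = bucketize(125.0, boundaries)  # index into the TN embedding table
```

The equal-frequency split is what distinguishes this scheme from fixed-interval bucketizing: no bucket is starved of training examples regardless of how skewed the time distribution is.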
After each TN is normalized, it passes through a leaky ReLU function for nonlinearity and a linear layer. In summary, we formulate the \(j\)-th TN of a session as \[\begin{split}&\tilde{tn}_{j}=W\left(\sigma^{lr}\left(normalize\left(tn_{j}\right)\right)\right)+b,\\ &\text{where }tn_{j}=TN\left[bucketize^{TN}\left(ts_{|S|+1}^{S}-ts_{j}^{S}\right)\right],\end{split} \tag{2}\] where \(W\in\mathbb{R}^{d\times d}\) and \(b\in\mathbb{R}^{d}\) are learnable parameters, \(\sigma^{lr}\) is a leaky ReLU function, \(normalize\) means \(L_{2}\) normalization, \(bucketize^{TN}\) is a function used to obtain a specific bucket index of \(TN\), and \(TN[]\) is a lookup function that takes one embedding vector corresponding to the index.

#### 4.1.3. Temporal Node Aggregation

The nodes used in our GNN reflect temporal information onto items. Conventional models have used the addition of two embedding vectors as a combination method. The problem with this is that the same temporal information is reflected whenever the time buckets are the same, regardless of the type of item. However, the relationship between an item and time is more complex. In reality, the degree of sensitivity to time differs depending on the item, even if the temporal embedding is the same. To capture this complex relationship between an item and time, we propose a novel method for controlling the degree of reflection through a gate network when the two embeddings are aggregated, where the weight is calculated by considering the relationship as \[\begin{split}& v=(1-g)\odot\tilde{i}+g\odot\tilde{tn},\\ &\text{where }g=\sigma^{s}\left(W\left[\tilde{i};\tilde{tn}\right]+b\right),\end{split} \tag{3}\] where \(\odot\) is an element-wise multiplication, \(\sigma^{s}\) is a sigmoid function, \([;]\) denotes a concatenation, and \(W\in\mathbb{R}^{d\times 2d}\) and \(b\in\mathbb{R}^{d}\) are trainable parameters.

Figure 3. The overall workflow of our model, TempGNN. It consists of three main processes. It takes a prediction timing as input and outputs recommended probabilities for candidate items.

#### 4.1.4. Temporal Embedding for Edges

Our model exchanges information with neighboring nodes by considering their time intervals. This temporal information is very important when adjacent nodes exchange information. For example, a time difference between two nodes that is too long might mean they are not adjacent. In addition, a time interval that is too short could indicate a misclick within a session. Therefore, we add a temporal embedding for edges (TE) to consider temporal information during propagation between adjacent nodes. We take the timestamp differences of interactions within a session in two directions, incoming and outgoing, and then embed them in the same way as TN in Section 4.1.2. The formula is \[\begin{split}&\tilde{te}_{j}=W\left(\sigma^{lr}\left(normalize\left(te_{j}\right)\right)\right)+b,\\ &\text{where }te_{j}=TE\left[bucketize^{TE}\left(ts_{j+1}^{S}-ts_{j}^{S}\right)\right],\end{split} \tag{4}\] where \(j\in\{1,2,...,|S|-1\}\).
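To close Section 4.1, the gated aggregation of Eq. (3) can be sketched as follows (ours; the class name and tensor shapes are assumptions, with \(d=256\) as in Section 5.1.4):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalNodeGate(nn.Module):
    """Gated combination of an item embedding and a TN embedding,
    following Eq. (3): v = (1 - g) * i + g * tn."""
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)  # W in R^{d x 2d} plus bias b

    def forward(self, item_emb, tn_emb):
        item_emb = F.normalize(item_emb, p=2, dim=-1)  # Eq. (1): L2-normalized item
        g = torch.sigmoid(self.gate(torch.cat([item_emb, tn_emb], dim=-1)))
        return (1 - g) * item_emb + g * tn_emb

d = 256
nodes = TemporalNodeGate(d)(torch.randn(8, d), torch.randn(8, d))  # 8 session nodes
```

Because the gate is computed from the concatenated pair, the same time bucket can be weighted differently for different items, which is the point of the design.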
### Information Propagation in a GNN

Many GNN-based models have been developed for SBRs. Our model advances the previous studies and is particularly based on SGNN-HN and NISER+ (NisER, 2017; NisER, 2017). In addition, we utilize temporal information as an additional feature, which can be applied in any recommendation model.

#### 4.2.1. Star Node

A star node is a virtual node connected to all nodes in a graph with bidirectional edges. Non-adjacent nodes can also propagate information using the star node as an intermediate node (NisER, 2017). At the same time, it holds information that integrates the whole graph. It is updated like other nodes and initialized to the average value of all nodes in the graph as \[v_{s}=\frac{1}{|V|}\sum_{j=1}^{|V|}v_{j}, \tag{5}\] where \(v_{s}\) denotes the star node and \(|V|\) is the number of nodes in the graph.

#### 4.2.2. Message Passing

GNNs typically go through message-passing and neighbor-aggregation steps in order to update a node (Nis et al., 2017; Nis et al., 2018; Nis et al., 2019). Unlike previous GNN-based methods, we design this exchange of information by considering the time intervals between nodes in the message passing phase. Subsequently, a GGNN is applied as a method for updating node information (Nis et al., 2018; Nis et al., 2019). Message passing and aggregation proceed in both directions for incoming and outgoing edges. We obtain an aggregated message for the \(j\)-th node from its neighbors through the incoming edges as \[\begin{split}& m_{j}^{I}=W^{I}\left(\frac{1}{\left|N_{j}^{I}\right|}\sum_{v_{i}\in N_{j}^{I}}\left(\left(1-g_{ij}\right)\odot v_{i}+g_{ij}\odot e_{ij}\right)\right)+b^{I},\\ &\text{where }g_{ij}=\sigma^{s}\left(W\left[v_{i};v_{j};e_{ij}\right]+b\right),\end{split} \tag{6}\] where \(W^{I}\in\mathbb{R}^{d\times d}\) and \(b^{I}\) are trainable parameters for incoming message passing, \(N_{j}^{I}\) is the set of incoming neighbors for the \(j\)-th node, \(e_{ij}\in E\) denotes the temporal embedding of the edge from \(v_{i}\) to \(v_{j}\), \(W\in\mathbb{R}^{d\times 3d}\) and \(b\) are learnable parameters, and the gate \(g_{ij}\) considers the characteristics of the two nodes and the time interval between them (i.e., an incoming edge) in order to adjust the transmitted information. The formula for outgoing message passing is similar to Equation 6: \[\begin{split}& m_{j}^{O}=W^{O}\left(\frac{1}{\left|N_{j}^{O}\right|}\sum_{v_{o}\in N_{j}^{O}}\left(\left(1-g_{jo}\right)\odot v_{o}+g_{jo}\odot e_{jo}\right)\right)+b^{O},\\ &\text{where }g_{jo}=\sigma^{s}\left(W\left[v_{j};v_{o};e_{jo}\right]+b\right),\end{split} \tag{7}\] where \(W^{O}\in\mathbb{R}^{d\times d}\) and \(b^{O}\) are trainable parameters and \(N_{j}^{O}\) is the set of outgoing neighbors for the \(j\)-th node. Then, both directional messages are concatenated to update the node as \[m_{j}=\left[m_{j}^{I};m_{j}^{O}\right]. \tag{8}\]

#### 4.2.3. Updating a Node

Updating nodes proceeds by applying the aggregated message vector and the star node. First, the message updates the previous information of a node with a gate as \[\begin{split}& z_{j}^{l}=\sigma^{s}(W_{z}m_{j}^{l}+U_{z}v_{j}^{l-1}+b_{z}),\\ & r_{j}^{l}=\sigma^{s}(W_{r}m_{j}^{l}+U_{r}v_{j}^{l-1}+b_{r}),\\ &\tilde{v}_{j}^{l}=\sigma^{t}(W_{h}m_{j}^{l}+U_{h}(r_{j}^{l}\odot v_{j}^{l-1})+b_{h}),\\ &\hat{v}_{j}^{l}=(1-z_{j}^{l})\odot v_{j}^{l-1}+z_{j}^{l}\odot\tilde{v}_{j}^{l},\end{split} \tag{9}\] where \(W_{z},W_{r},W_{h}\in\mathbb{R}^{d\times 2d}\), \(U_{z},U_{r},U_{h}\in\mathbb{R}^{d\times d}\), and \(b_{z},b_{r},b_{h}\) are learnable parameters, \(l\) denotes the \(l\)-th layer of the GNN, \(m_{j}^{l}\) is the message of Equation 8 computed at the \(l\)-th layer, and \(\sigma^{t}\) is a hyperbolic tangent function.
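A loop-based sketch of the incoming message passing of Eq. (6) might read as follows (ours, written for clarity rather than speed; a practical implementation would batch these operations):

```python
import torch
import torch.nn as nn

class IncomingMessagePassing(nn.Module):
    """Eq. (6): average, over incoming neighbors, gated mixtures of the
    neighbor state and the temporal edge embedding, then project."""
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(3 * d, d)  # takes [v_i; v_j; e_ij]
        self.proj = nn.Linear(d, d)      # W^I and b^I

    def forward(self, v, edges, e_emb):
        # v: (N, d) node states; edges: list of (i, j) pairs (i -> j);
        # e_emb: (len(edges), d) temporal edge embeddings.
        N, d = v.shape
        msg_sum = torch.zeros(N, d)
        deg = torch.zeros(N, 1)
        for k, (i, j) in enumerate(edges):
            g = torch.sigmoid(self.gate(torch.cat([v[i], v[j], e_emb[k]])))
            msg_sum[j] = msg_sum[j] + (1 - g) * v[i] + g * e_emb[k]
            deg[j] += 1
        return self.proj(msg_sum / deg.clamp(min=1))
```

The outgoing direction of Eq. (7) is symmetric, and the two messages are concatenated as in Eq. (8) before the GGNN update of Eq. (9).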
After the propagation of adjacent nodes, the star node is reflected in the update, which considers the overall information in the graph. A gate network determines how much information from the previous star node should be propagated as \[\begin{split}& v_{j}^{l}=(1-a_{j}^{l})\hat{v}_{j}^{l}+a_{j}^{l}v_{s}^{l-1},\\ &\text{where }a_{j}^{l}=\sigma^{s}\left(\frac{\left(\hat{v}_{j}^{l}\right)^{\top}v_{s}^{l-1}}{\sqrt{d}}\right),\end{split} \tag{10}\] where \(v_{s}^{l-1}\) is the star node of the previous layer in the GNN and \(\sqrt{d}\) denotes a scaling factor. A non-parametric mechanism is applied for efficient learning, unlike SGNN-HN (NisER, 2017). Then, the star node is also updated for continuous graph learning using a non-parametric attention mechanism (i.e., a scaled dot product) as \[\begin{split}& v_{s}^{l}=\left[v_{1}^{l},v_{2}^{l},...,v_{|V|}^{l}\right]^{\top}\beta^{l},\\ &\text{where }\beta^{l}=softmax\left(\frac{\left[v_{1}^{l},v_{2}^{l},...,v_{|V|}^{l}\right]v_{s}^{l-1}}{\sqrt{d}}\right),\end{split} \tag{11}\] where \(|V|\) is the number of nodes in a graph and \(\left[v_{1}^{l},v_{2}^{l},...,v_{|V|}^{l}\right]\in\mathbb{R}^{|V|\times d}\) denotes a matrix that includes all nodes in the graph.

#### 4.2.4. Highway Gate

This propagation proceeds iteratively over \(L\) layers with shared parameters, through adjacent and intermediate nodes. This allows the model to obtain more distant information over multiple propagations. However, the individuality of each node can be diluted if propagation is excessive. Thus, a highway gate (Hamilton et al., 2017) is applied to take advantage of both as \[\begin{split}& v^{f}=(1-g)\odot v^{L}+g\odot v^{0},\\ &\text{where }g=\sigma^{s}\left(W\left[v^{L};v^{0}\right]+b\right),\end{split} \tag{12}\] where \(v^{f}\) denotes the final node after the highway gate, \(v^{L}\) and \(v^{0}\) are the node after the propagation of the \(L\)-th layer and the initial node, respectively, and \(W\in\mathbb{R}^{d\times 2d}\) and \(b\) are trainable parameters.

### Attention and Prediction

#### 4.3.1. Obtaining a Preference

Nodes that have completed all propagations are transformed back to a session format as \[U=\left[u_{1},u_{2},...,u_{|S|}\right], \tag{13}\] where \(|S|=|U|\) is the length of the session and \(u_{j}\in\left\{v_{1}^{f},v_{2}^{f},...,v_{|V|}^{f}\right\}\) denotes a node arranged in the original order of the sequence. A representation is obtained by reflecting all nodes in different proportions determined by a soft attention mechanism considering the last and overall information (i.e., the star node) as \[\begin{split}& r=\sum_{j=1}^{|S|}\gamma_{j}u_{j},\\ &\text{where }\gamma_{j}=w_{0}^{\top}\sigma^{s}(W_{1}u_{j}+W_{2}u_{|S|}+W_{3}v_{s}^{L}+b),\end{split} \tag{14}\] where \(w_{0}\in\mathbb{R}^{d}\), \(W_{1},W_{2},W_{3}\in\mathbb{R}^{d\times d}\), and \(b\) are learnable parameters, \(u_{|S|}\) is the last node, and \(v_{s}^{L}\) is the star node after \(L\) layers. Because the last node could be a decisive clue for estimating a user's next interaction, a preference vector is formulated as \[p=W\left[r;u_{|S|}\right]+b, \tag{15}\] where \(W\in\mathbb{R}^{d\times 2d}\) and \(b\) are trainable parameters.
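The soft attention readout of Eqs. (14)-(15) could be sketched as follows (ours; the layer names are assumptions):

```python
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    """Eqs. (14)-(15): soft attention over session nodes conditioned on the
    last node and the star node, then concatenation with the last node."""
    def __init__(self, d):
        super().__init__()
        self.w0 = nn.Linear(d, 1, bias=False)
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.W3 = nn.Linear(d, d, bias=True)   # carries the bias b
        self.out = nn.Linear(2 * d, d)         # Eq. (15)

    def forward(self, U, v_star):
        # U: (|S|, d) nodes in session order; v_star: (d,) star node after L layers.
        u_last = U[-1]
        gamma = self.w0(torch.sigmoid(self.W1(U) + self.W2(u_last) + self.W3(v_star)))
        r = (gamma * U).sum(dim=0)                       # Eq. (14)
        return self.out(torch.cat([r, u_last], dim=-1))  # preference vector p
```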
#### 4.3.2. Prediction

We obtain the normalized probabilities for the next click by measuring the similarities between the preference and all candidate items. To address the long-tail problem of recommendation (Beng et al., 2017), cosine similarity is applied as \[\tilde{y}[j]=\frac{p^{\top}i_{j}}{\|p\|_{2}\|i_{j}\|_{2}}, \tag{16}\] where \(i_{j}\in I\) is the \(j\)-th item embedding and \(\tilde{y}[j]\), an element of the vector \(\tilde{y}\in\mathbb{R}^{|I|}\), denotes the similarity between the \(j\)-th item and a user preference. Then, the similarity vector is normalized by a scaled softmax function, which addresses the convergence problem (Beng et al., 2017; Wang et al., 2017), as \[\hat{y}[j]=\frac{\exp\left(\tau\tilde{y}[j]\right)}{\sum_{k=1}^{|I|}\exp\left(\tau\tilde{y}[k]\right)}, \tag{17}\] where \(\tau\) is a scaling factor, \(|I|\) is the number of candidate items, and \(\hat{y}[j]\) denotes the probability that the next click of a user is the \(j\)-th candidate item. Then, the items with the highest top-K probabilities in \(\hat{y}\in\mathbb{R}^{|I|}\) are recommended.

#### 4.3.3. Objective Function

We adopt a cross-entropy loss function as the objective for the probabilities. Our model is trained by minimizing the loss, which is formulated as \[\mathcal{L}=-\sum_{j=1}^{|I|}y[j]\log\left(\hat{y}[j]\right), \tag{18}\] where \(y[j]\in\{0,1\}\) is a target that indicates whether the next click is the \(j\)-th item or not. In other words, \(y\in\mathbb{R}^{|I|}\) is a one-hot vector corresponding to the candidate items.
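Putting Eqs. (16)-(18) together, a minimal prediction-and-loss sketch (ours; the sizes and the target index are illustrative) is:

```python
import torch
import torch.nn.functional as F

def predict(preference, item_table, tau=12.0):
    """Eqs. (16)-(17): cosine similarity between the preference vector and
    every candidate item, sharpened by a scaled softmax."""
    sims = F.cosine_similarity(preference.unsqueeze(0), item_table, dim=-1)
    return F.softmax(tau * sims, dim=-1)

probs = predict(torch.randn(256), torch.randn(43_097, 256))  # |I| items, d = 256
target = 5                          # index of the true next item (illustrative)
loss = -torch.log(probs[target])    # Eq. (18) with a one-hot target
```

Because cosine similarity is bounded in [-1, 1], the scaling factor \(\tau\) is what allows the softmax to produce sufficiently peaked distributions.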
## 5. Experiments

In this section, we first describe the experimental settings, followed by four experimental results and analyses. All experiments were averaged over five replicates.

### Experimental Settings

#### 5.1.1. Datasets

* **Yoochoose** was released for the RecSys Challenge 2015,¹ and contains click streams from an e-commerce website over 6 months.
* **Diginetica** was used as a challenge dataset for the CIKM Cup 2016.² We only adopt the transaction data.

Footnote 1: [https://recsys.acm.org/recsys15/challenge/](https://recsys.acm.org/recsys15/challenge/)

Footnote 2: [https://competitions.codalab.org/competitions/11161](https://competitions.codalab.org/competitions/11161)

Our preprocessing of these datasets followed previous studies (Beng et al., 2017; Wang et al., 2017; Wang et al., 2017) for fairness. We filtered out sessions with only one item and items that occurred fewer than five times. The sessions were split for training and testing, where the last day of Yoochoose and the last week of Diginetica were used for testing. Items that were not included in the training set were excluded from the testing set. Finally, we split the sessions into several sub-sequences. Specifically, for a session \(S=[x_{1},x_{2},...,x_{|S|}]\), where \(x_{j}=(item,timestamp)\) denotes a pair of an item and its timestamp, we generated sub-sequences and the corresponding next interaction as \(\{[x_{1}],x_{2}\}\), \(\{[x_{1},x_{2}],x_{3}\},...,\{[x_{1},x_{2},...,x_{|S|-1}],x_{|S|}\}\) for the training and testing sets. As Yoochoose is too large, we only utilized the most recent 1/64 and 1/4 fractions of the training set, which are denoted as Yoochoose 1/64 and Yoochoose 1/4, respectively. Statistics for the three datasets are shown in Table 1.

\begin{table} \begin{tabular}{l r r r} \hline \hline & Yoochoose 1/64 & Yoochoose 1/4 & Diginetica \\ \hline \# of clicks & 557,248 & 8,326,407 & 982,961 \\ \# of train sessions & 369,859 & 5,917,745 & 719,470 \\ \# of test sessions & 55,898 & 55,598 & 60,858 \\ \# of items & 17,745 & 30,470 & 43,097 \\ Avg. session length & 6.16 & 5.71 & 5.13 \\ \hline \hline \end{tabular} \end{table} Table 1. Statistics of three datasets.

#### 5.1.2. Baselines

* **GRU4Rec** [(6)] applied gated recurrent units (GRUs) to model sequential information in an SBR.
* **CSRM** [(20)] employed GRUs to model sequential behavior with an attention mechanism and utilized neighbor sessions as auxiliary information.
* **STAMP** [(13)] applied an attention mechanism to obtain the general preference.
* **SR-IEM** [(17)] utilized a modified self-attention mechanism to estimate item importance and recommended the next item based on the global preference and current interest.
* **SR-GNN** [(22)] adopted GGNNs to obtain item embeddings and recommended by generating a session representation with an attention mechanism.
* **NISER+** [(4)] extended SR-GNN by introducing \(L_{2}\) normalization, positional embedding, and dropout.
* **SGNN-HN** [(16)] extended SR-GNN by introducing a highway gate to avoid overfitting and a star node, which is a virtual node connected with all nodes.

#### 5.1.3. Evaluation Metrics

Following previous studies [(4; 16; 22)], we used the same evaluation metrics R@K (recall) and M@K (mean reciprocal rank), where K is 20 and 5. R@K represents the proportion of test instances that have the target items in the top-K recommended items. M@K is the average of the reciprocal ranks of the target items in the recommendation list.

#### 5.1.4. Parameter Setup

We used the most recent 10 items in a session to ensure fairness across all models. An Adam optimizer was adopted, where the initial learning rate is 0.001 with a decay factor of 0.1 for every 3 epochs, \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\). In addition, the \(L_{2}\) regularization rate was set to \(10^{-5}\). The batch size was 100. All trainable parameters were initialized using a uniform distribution with a range of \(\left[\frac{-1}{\sqrt{d}},\frac{1}{\sqrt{d}}\right]\) according to the dimension of each model. For our model, the dimension of the embeddings \(d\) was set to 256, the scaling factor \(\tau\) was 12, and the number of layers \(L\) was 6. The numbers of buckets of TN and TE were 40 and 50, respectively. For the other settings of the baselines, we referred to the corresponding papers and official code.

### Overall Performance

The overall performance of the baselines on the three datasets is summarized in Table 2, measured using R@20, M@20, R@5, and M@5. First, analyzing the differences in performance across the datasets, Diginetica appears more difficult to predict than Yoochoose. In addition, better performance is obtained when we use a larger training set for Yoochoose; however, the improvement does not seem commensurate with the 16-fold increase in training time and memory usage. Among the RNN-based models, CSRM outperforms GRU4Rec in all measures due to the addition of an attention mechanism to the GRU. It also performs better than STAMP, except on Yoochoose 1/4, for which both show similar results. Although SR-IEM, based on a self-attention mechanism, performs similarly to CSRM on Diginetica, it usually outperforms the previous models. SR-GNN, the first GNN-based method to be proposed, has similar performance to SR-IEM overall. NISER+, an extended version of SR-GNN, outperforms all previous results and shows a remarkable performance improvement, especially on Diginetica.
SGNN-HN ranks second on most of the measures in this experiment and shows the best performance among the baselines. However, our model outperforms the previous studies in all results. In particular, the improvements in terms of M@20 are notable, as this measure is difficult to improve compared with R@20. Compared with SGNN-HN, the improvement rates of M@20 on the three datasets are 5.13%, 4.59%, and 2.67%, whereas those of R@20 are 1%, 0.82%, and 0.94%, respectively. Because mean reciprocal rank is a measure that considers the recommendation rank, we can recommend item lists with more sophisticated priorities by adding temporal information.

\begin{table} \begin{tabular}{l l c c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Yoochoose 1/64} & \multicolumn{4}{c}{Yoochoose 1/4} & \multicolumn{4}{c}{Diginetica} \\ \cline{3-14} & & R@20 & M@20 & R@5 & M@5 & R@20 & M@20 & R@5 & M@5 & R@20 & M@20 & R@5 & M@5 \\ \hline \multirow{2}{*}{RNN-based} & GRU4Rec & 62.03 & 23.34 & 37.04 & 20.74 & 67.63 & 27.32 & 42.69 & 24.71 & 34.25 & 9.45 & 14.71 & 7.58 \\ & CSRM & 70.20 & 29.77 & 46.05 & 27.20 & 70.50 & 29.23 & 45.37 & 26.58 & 51.51 & 17.20 & 26.55 & 14.76 \\ \hline \multirow{2}{*}{Attention-based} & STAMP & 68.64 & 29.89 & 45.65 & 27.47 & 70.62 & 30.36 & 46.53 & 27.83 & 47.66 & 15.54 & 24.16 & 13.25 \\ & SR-IEM & 70.86 & 31.59 & 47.95 & 29.16 & 71.02 & 30.49 & 46.69 & 27.92 & 51.70 & 17.14 & 26.46 & 14.66 \\ \hline \multirow{2}{*}{GNN-based} & SR-GNN & 70.38 & 30.71 & 47.08 & 28.26 & 71.39 & 30.96 & 47.07 & 28.40 & 51.46 & 17.54 & 26.94 & 15.11 \\ & NISER+ & 71.36 & 31.91 & 48.21 & 29.46 & 72.74 & 32.09 & 48.82 & 29.55 & 54.39 & 19.20 & 29.15 & 16.70 \\ & SGNN-HN & 71.88 & 31.94 & 48.40 & 29.46 & 72.92 & 32.69 & 48.78 & 30.13 & 55.56 & 19.44 & 29.72 & 16.88 \\ \cline{1-1} & **TempGNN** & **72.60** & **33.58** & **48.88** & **31.09** & **73.52** & **34.19** & **49.62** & **31.67** & **56.08** & **19.96** & **30.25** & **17.39** \\ \hline \hline \end{tabular} \end{table} Table 2. Overall performance for three datasets. A bold-faced number indicates the best score and the second performer is underlined in each column.

### Models with Temporal Embeddings

The proposed temporal embedding method can easily be adopted in any SBR model. Figure 4 shows the results of utilizing TN, TE, and both together on our model and the baselines described above. Because GRU4Rec, CSRM, STAMP, and SR-IEM are not GNN-based, only TN can be used for them.
The results for the three datasets show different aspects as time information is added. First, according to the result graphs for Yoochoose 1/64, TN induces an improvement in all models except GRU4Rec, which is a pure RNN-based model; this fairly large drop indicates that the model is not trained harmoniously with TN. In addition, although TE does not improve as much as TN, it improves the performance of all GNN-based models. Therefore, including temporal information through TN and TE is a very important factor for recommendations on Yoochoose 1/64. The results for Yoochoose 1/4 are generally similar to those for Yoochoose 1/64, but even GRU4Rec shows a performance improvement with TN. It appears that a large amount of temporal information leads to improved performance. In addition, TN again yields a larger improvement than TE, as in the results for Yoochoose 1/64. Interestingly, looking at the results of SR-GNN, NISER+, and SGNN-HN, using only TE does little to improve the performance of R@20, but it helps improve M@20. This means that exploiting the time differences between interactions helps in more sophisticated predictions. In addition, even if TE alone cannot improve R@20, TE with TN yields better results. The results of using the two sets of temporal information together show better performance than those of using TN alone, leading to significant improvements. The results for Diginetica are different from those for the other two datasets, where the temporal embeddings lead to high improvement rates. In most of the baselines, TN does not help improve performance, but rather seems to hinder efficient learning. In contrast, TE is helpful for improving all models, which means that the time intervals between interactions in Diginetica provide more clues for accurate predictions. Even the results of GNN-based models show that adding TE alone is better than using both types of temporal information together.

### Comparison of Temporal Embedding Methods

Table 3 shows the results of comparing methods using temporal information for an SBR. Base is a basic version that removes both temporal embeddings (i.e., TN and TE) from TempGNN. Position refers to the model in which positional embedding (Shi et al., 2016; Wang et al., 2017) is added to Base. Because only the most recent 10 items are used in our experiment, a maximum of 10 positional embeddings can be used. Constant utilizes a single learnable vector multiplied by the time value, normalized between 0 and 1. This is similar to the method used in TGSRec (Brockett et al., 2016) except for the periodicity. Bucket splits the time range into buckets of equal intervals after clipping both ends by 2% to remove outliers. This method has been adopted by many previous models (Shi et al., 2016; Wang et al., 2017; Wang et al., 2018), which do not consider time frequency, whereas our method splits the information into groups of equal size.

Figure 4. Performance of baseline models with temporal embedding using three datasets. Gray bars indicate the results of basic models. The results of models with TN are shown as blue bars, the ones of models with TE are shown as green bars, and purple bars indicate performance when both are used together.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Yoochoose 1/64} & \multicolumn{2}{c}{Yoochoose 1/4} & \multicolumn{2}{c}{Diginetica} \\ \cline{2-7} & R@20 & M@20 & R@20 & M@20 & R@20 & M@20 \\ \hline Base & 71.90 & 32.58 & 72.99 & 33.31 & 55.85 & 19.67 \\ \hline Position & 71.86 & 31.84 & 72.88 & 32.24 & 55.68 & 19.43 \\ Constant & 72.27 & 32.57 & 73.38 & 33.75 & 55.85 & 19.76 \\ Bucket & 72.49 & 32.93 & 73.42 & 33.46 & 55.94 & 19.83 \\ \hline Q & 72.57 & 33.44 & **73.53** & 34.08 & 55.90 & 19.83 \\ Q+A & 72.56 & 33.50 & 73.47 & 34.07 & 56.00 & 19.82 \\ Q+G & 72.43 & 33.43 & 73.52 & 34.08 & 56.07 & 19.91 \\ **Q+A+G** & **72.60** & **33.58** & 73.52 & **34.19** & **56.08** & **19.96** \\ \hline \hline \end{tabular} \end{table} Table 3. Performance of temporal embedding methods. Q means quantile bucketizing for time, A is an activation function, and G is a gate network when applying temporal embeddings.

The performance after adding positional embedding shows a rather small decrease on the three datasets. The result of Constant shows a slight improvement overall.
Although Constant uses fewer trainable parameters than Position, it performs better. This means that the clues provided by time differences are more helpful in predicting user behavior from a session than positional differences. The result of Bucket is better than that of Constant, except for M@20 on Yoochoose 1/4, and shows a fairly good improvement compared with Base. Q, which changes the bucket-allocation method of Bucket, shows a high improvement on Yoochoose 1/64 and 1/4, particularly for M@20. Adding an activation function (i.e., a leaky ReLU) results in little change from Q, shown as Q+A, whereas a gate network leads to an improvement on Diginetica, shown as Q+G. Finally, our model's method, Q+A+G, has the best performance, except for R@20 on Yoochoose 1/4. Even there, it exhibits the second-best performance, which is almost equal to the best.

### Number of Buckets

To maximize the use of time clues in session data, an appropriate number of buckets should be set, which depends on the characteristics of the dataset. If the number of buckets is too small, the temporal information contained is insufficient, because even timestamps with different meanings can be classified into the same group. Conversely, if too many buckets are set, memory is wasted on unnecessary splitting: even though two timestamps belong to different groups, they may show no significant difference. Figure 5 shows the number of buckets suitable for TN and TE for the two datasets. These graphs have three common characteristics. First, too few buckets (e.g., 1 or 2) may degrade the performance compared with a base version without any temporal embedding. This is because unnecessary clues are provided, which actually hinder effective learning. Second, a convergence pattern is exhibited once a certain number is exceeded. This indicates that preparing only a certain number of groups that distinguish times is sufficient. Looking at the converged performance, using TN leads to a meaningful improvement on Yoochoose 1/64, and utilizing TE does so on Diginetica. Interestingly, both embeddings contribute to improving M@20, which implies that more sophisticated predictions are made. Finally, TN with 10-quantiles always shows higher performance than positional embedding, represented by a gray dotted line in the graphs. Even if the same number of parameters is used, the difference lies in how the buckets are allocated. Two interactions that are one position apart may fall into the same time bucket or may be separated by a large time difference. Our method can thus capture subtle differences in user behavior that positions alone cannot reveal.

## 6. Conclusion

We introduced TempGNN, a generic framework for capturing the structural and temporal patterns in complex item transitions through temporal embedding operators on nodes and edges on dynamic session graphs represented as sequences of timed events. State-of-the-art results were obtained on several datasets. Extensive experimental results confirm that even if a session consists of relatively few interactions over a short period, the temporal relationships between the items in the session and the prediction point are important factors in predicting the next item and can improve performance. Meanwhile, although our discrete buckets somewhat reflect the continuous distribution over time, as shown in Figure 6, we plan to investigate how to fully capture this in future work.
2303.07310
Learning Reduced-Order Models for Cardiovascular Simulations with Graph Neural Networks
Reduced-order models based on physics are a popular choice in cardiovascular modeling due to their efficiency, but they may experience reduced accuracy when working with anatomies that contain numerous junctions or pathological conditions. We develop one-dimensional reduced-order models that simulate blood flow dynamics using a graph neural network trained on three-dimensional hemodynamic simulation data. Given the initial condition of the system, the network iteratively predicts the pressure and flow rate at the vessel centerline nodes. Our numerical results demonstrate the accuracy and generalizability of our method in physiological geometries comprising a variety of anatomies and boundary conditions. Our findings demonstrate that our approach can achieve errors below 2% and 3% for pressure and flow rate, respectively, provided there is adequate training data. As a result, our method exhibits superior performance compared to physics-based one-dimensional models, while maintaining high efficiency at inference time.
Luca Pegolotti, Martin R. Pfaller, Natalia L. Rubio, Ke Ding, Rita Brugarolas Brufau, Eric Darve, Alison L. Marsden
2023-03-13T17:32:46Z
http://arxiv.org/abs/2303.07310v1
# Learning Reduced-Order Models for Cardiovascular Simulations with Graph Neural Networks

###### Abstract

Reduced-order models based on physics are a popular choice in cardiovascular modeling due to their efficiency, but they may experience reduced accuracy when working with anatomies that contain numerous junctions or pathological conditions. We develop one-dimensional reduced-order models that simulate blood flow dynamics using a graph neural network trained on three-dimensional hemodynamic simulation data. Given the initial condition of the system, the network iteratively predicts the pressure and flow rate at the vessel centerline nodes. Our numerical results demonstrate the accuracy and generalizability of our method in physiological geometries comprising a variety of anatomies and boundary conditions. Our findings demonstrate that our approach can achieve errors below 2% and 3% for pressure and flow rate, respectively, provided there is adequate training data. As a result, our method exhibits superior performance compared to physics-based one-dimensional models, while maintaining high efficiency at inference time.

## 1 Introduction

In the last twenty years, Computational Fluid Dynamics (CFD) has become an essential tool in the study of the cardiovascular system [1; 2; 3]. For example, CFD simulations have been used to noninvasively assess the severity of coronary artery aneurysms [4; 5], to propose novel surgical methods for congenital heart disease [6], and to optimize medical devices [7; 8]. Full 3D blood flow models, often solved with the finite element method, allow for patient-specific modeling and extraction of detailed quantities such as wall shear stress and velocity fields. However, the implementation of these simulations in clinical practice is still limited due, in part, to their high computational cost. Reduced-order models (ROMs) have been devised to overcome this issue, though their increased efficiency generally comes with a cost of lost accuracy. This paper aims to develop and validate a novel ROM following a data-driven approach based on graph neural networks (GNNs). The typical approach to deriving ROMs for cardiovascular simulations is physics-based. These formulations rely on simplifying assumptions that reduce the vessel geometry's complexity, and, consequently, the number of variables necessary to describe the quantities of interest. This physics-based class of ROMs includes popular zero-dimensional and one-dimensional models. In zero-dimensional formulations (often called lumped-parameter network models) the cardiovascular system is analogous to an electric circuit in which the pressure drop across portions of the arterial tree and blood flow rate play the role of electric potential difference and currents, respectively. These quantities of interest do not depend on any spatial variable. We refer to [9; 10; 11; 12] for examples of uses of such models in cardiovascular simulations, from simple Windkessel models [13] to full circulatory networks. One-dimensional models are derived by integrating the three-dimensional Navier-Stokes equations over the vessel cross-section to reduce the equations to a single spatial variable [14]. Specifically, arterial trees are approximated as compositions of segments representing the centerline of the vessels, and pressure, flow rate, and vessel wall displacement are considered functions of the axial component only.
Compared to lumped-parameter network models, one-dimensional models capture more of the physics owing to their ability to account for wave propagation phenomena arising from the interaction of flow with elastic vessel walls. One-dimensional models have proven to be useful in numerous studies; see, for example, [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Zero- and one-dimensional formulations often lead to reasonably accurate results at a fraction of the computational cost of full three-dimensional simulations. However, these ROMs sometimes perform poorly because (i) they do not account correctly for pressure losses at vascular junctions and (ii) they rely on specific mathematical models (mostly based on empirical data or heuristics), particularly to describe pathological cases (e.g., stenoses or aneurysms). A recent study conducted on 72 cardiovascular models selected from a freely available database of cardiovascular models, the Vascular Model Repository (VMR),¹ compared zero- and one-dimensional formulations against three-dimensional reference simulations [29]. The authors observed average relative errors of 2.1% and 1.8% on the pressure (zero- and one-dimensional models, respectively) and of 3.9% and 3.4% on the flow rate (zero- and one-dimensional models, respectively).

Footnote 1: [http://www.vascularmodel.com](http://www.vascularmodel.com)

Data-driven ROMs have the potential to overcome the issues mentioned above. Typical data-driven reduced-order approaches include projection-based methods such as reduced-basis or proper-orthogonal decomposition techniques [30, 31]. Although these methods can be adapted to specific geometries through interpolation strategies [32, 33, 34, 35], they are often not capable of representing the full range of geometrical variability characterizing patient-specific anatomic models. These problems can be partially overcome by employing projection-based methods in simple subdomains making up the vascular geometry of interest [36, 37]. Another option for generating data-driven models for physical simulations is deep learning. While, in the context of cardiovascular modeling, deep learning has been used to accelerate other aspects of the simulation pipeline, for example, image-based segmentation [38, 39] and model generation [40, 41, 42], it has not yet been used widely to model physical processes. Among the most popular architectures for this purpose are physics-informed neural networks (PINNs) [43, 44], which are neural networks trained to mimic known physical laws. PINNs have been applied, for example, to ocean and climate modeling in [45, 46], in the field of atomistic simulations in [47], and to approximate cardiac activation mapping in [48] and blood flow dynamics in [49] and [50]. For an overview of fluid dynamics and machine learning efforts, we refer to [51]. As with projection-based methods, these algorithms are typically not flexible with respect to changes in the domain geometry, which is a crucial drawback for their use in cardiovascular simulations on patient-specific applications. Recently, GNNs have been proposed as an alternative to classic fully connected and convolutional neural networks to address these difficulties. GNNs are used, for example, in [52] to learn the laws governing particle interactions in particle-based simulations and in [53] as solvers for mesh-based simulations called MeshGraphNets. More recently, GNNs have also been used to estimate steady blood flow in a synthetic dataset of three-dimensional arteries [54].
In this paper, we consider an approach inspired by MeshGraphNets to derive a one-dimensional surrogate ROM for cardiovascular simulations. We apply the GNN to learn the laws governing bulk quantities, such as average pressure and flow rate at the centerline nodes, as opposed to using it to approximate the solution of the three-dimensional blood flow equations (as in the original paper). Owing to its capacity to resolve pressure and flow rate on cardiovascular models' centerlines, we consider our method a data-driven one-dimensional model. The network iteratively considers the state of the system, comprising pressure and flow rate at a particular timestep and other relevant features (such as cross-sectional area and parameters governing the boundary conditions), and computes approximations for the next values of pressure and flow rate. We adapt the original MeshGraphNet implementation to make it suitable for blood flow simulations. In particular, we include graph edges to efficiently transfer boundary condition information to the interior graph nodes, and we include special parameters as node features to handle Windkessel-type or resistance-type boundary conditions, which are typically encountered in cardiovascular modeling. In our numerical results, we demonstrate the GNN on a diverse set of geometries. We assess its ability to generalize to multiple geometries and compare its performance with physics-based one-dimensional models. In summary, the main contributions of this paper are threefold:

1. We consider a modified version of MeshGraphNet capable of handling complex boundary conditions by adding special edges and patient-specific features to the graph. We show that these modifications lead to substantial improvements compared to MeshGraphNet in an ablation study performed in Section 5.2.
2. We assess the ability of the GNN to generalize to different geometries, which, to our knowledge, has not been investigated in previous works. We reiterate that geometrical variability plays an essential role in patient-specific modeling (see Section 5.1 and Section 5.2).
3. We demonstrate that our ROM outperforms physics-driven one-dimensional models in geometries characterized by many junctions or pathological conditions (see Section 5.2).

Our GNN implementation is freely available at [https://github.com/StanfordCBCL/gROM](https://github.com/StanfordCBCL/gROM).

## 2 Notation

Let us consider a set of \(G\) patient-specific cardiovascular geometries \(\Omega_{1},\ldots,\Omega_{G}\). Given a geometry \(\Omega_{g}\), we generate a directed graph consisting of a set of nodes \(n_{1}^{g},\ldots,n_{N^{g}}^{g}\) along the vessel centerline; see Figure 1 (left). We denote \(e_{ij}^{g}\) as the directed edge connecting node \(i\) to node \(j\). We denote \(T_{\text{cc}}^{g}\) as the period of one cardiac cycle of the patient and consider the discrete time sequence \[t^{0,g},t^{1,g},\ldots,t^{M,g},\] such that \(t^{0,g}=0\), \(t^{M,g}=T_{\text{cc}}^{g}\), and \(\Delta t^{g}=t^{1,g}-t^{0,g}=\ldots=t^{M,g}-t^{M-1,g}\). In this paper, we take a constant \(\Delta t^{g}\) for all patients to simplify the learning process. However, we envision that the generalization to variable time step size is possible with minimal changes to the architecture (for instance, by including \(\Delta t^{g}\) in the set of node features).
We call the set of all node and edge features at time \(t^{k,g}\) the state of the system \(\Theta^{k,g}(\boldsymbol{\mu})\), where \(\boldsymbol{\mu}\) is a vector of system parameters--in this work, these are related to its boundary conditions. We call a sequence of states \(\Theta^{k,g}(\widetilde{\boldsymbol{\mu}})\) for a particular choice of system parameters \(\widetilde{\boldsymbol{\mu}}\) a trajectory. We refer to Section 3.1 for a description of the features we consider in this work. For brevity, we will implicitly assume that all quantities are patient-specific and omit the superscript \(g\) in the following sections whenever possible.

## 3 MeshGraphNets for cardiovascular simulations

Our GNN acts as a data-driven one-dimensional ROM; for a description of classical physics-based one-dimensional models, we refer to Appendix A. During the rollout phase shown in Figure 1 (top-right), the network takes as an input \(\Theta^{k}\) and computes an update that allows us to advance the state of the system from \(\Theta^{k}\) to \(\Theta^{k+1}\). We apply the GNN iteratively. In each time step \(t^{k}\) with \(k>0\) we provide it with the previously estimated system state. At \(t^{0}\), we feed the network a prescribed initial condition. The action of the GNN is described in Section 3.4.

### Graph features

We select the node and edge features based on our knowledge of the problem. For example, it is well known that, under a Poiseuille condition, a linear relation exists between the flow rate \(Q\) and the pressure drop \(\Delta P\) across an approximately cylindrical vessel. Specifically, \[\Delta P=RQ=\frac{8\mu L}{\pi r^{4}}Q.\] The proportionality constant \(R\) is called resistance and depends on the viscosity of blood \(\mu\), the length of the vessel \(L\), and its radius \(r\). In this work, we assume that the viscosity and density of the blood are equal to \(\mu=0.04\) g cm\({}^{-1}\) s\({}^{-1}\) and \(\rho=1.06\) g cm\({}^{-3}\) for all patients, respectively, and we do not include viscosity and density as graph features. Based on these notions of fluid dynamics, we include the cross-sectional area in the node features.
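As a quick worked example of this relation (ours, in the CGS units used throughout the paper), the Poiseuille resistance of an idealized vessel segment and the resulting pressure drop can be computed as:

```python
import math

def poiseuille_resistance(mu, length, radius):
    """R = 8 * mu * L / (pi * r^4); the pressure drop is then dP = R * Q."""
    return 8.0 * mu * length / (math.pi * radius**4)

mu = 0.04                                               # g cm^-1 s^-1
R = poiseuille_resistance(mu, length=10.0, radius=0.5)  # vessel dimensions in cm
dP = R * 50.0                                           # drop for Q = 50 cm^3/s
```

The strong \(r^{-4}\) dependence is the reason cross-sectional area is such an informative node feature.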
**Node features.** We consider the cross-sectional average pressure \(p_{i}^{k}\in\mathbb{R}^{+}\) and the flow rate \(q_{i}^{k}\in\mathbb{R}\) at every centerline node \(n_{i}\) as descriptors of the state of the system \(\Theta^{k}\) at time \(t^{k}\). These are computed on the section obtained as the intersection between the plane orthogonal to the centerline and the three-dimensional model of the vessel. Similarly, the area of the vessel lumen \(A_{i}\in\mathbb{R}\) is the area of the section passing through node \(i\) and is considered a node feature for the above reasons. We introduce a one-hot vector \(\boldsymbol{\alpha}_{i}\in\mathbb{R}^{4}\) to encode different node types. These are branch nodes, junction nodes, the model inlet (one per geometry), and model outlets, as shown in Figure 1 (left). The distinction between branch and junction nodes is necessary because the area of sections within junctions varies discontinuously across the centerline (owing to nearby slices sometimes cutting through a different number of branches). This leads to different blood dynamics in branches and junctions when looking at quantities averaged over such sections. We use the same automated algorithm for junction detection as in [29]. Additional node features are the tangent to the centerline evaluated at node \(n_{i}\), which we denote \(\boldsymbol{\phi}_{i}\in\mathbb{R}^{3}\), the period of the cardiac cycle \(T_{\text{cc}}\in\mathbb{R}^{+}\), minimum and maximum pressure over the whole cardiac cycle (\(p_{\text{min}}\in\mathbb{R}^{+}\) and \(p_{\text{max}}\in\mathbb{R}^{+}\)), three parameters associated with the boundary conditions (\(R_{i,p}\in\mathbb{R}^{+}\), \(C_{i}\in\mathbb{R}^{+}\), and \(R_{i,d}\in\mathbb{R}^{+}\); see Section 3.2 for further information), and a boolean loading variable \(l^{k}\in\{0,1\}\) associated with model initialization (see Section 3.3). We remark that, although in this work \(p_{\text{min}}\) and \(p_{\text{max}}\) are set based on simulation data, these values are typically known for the patient at hand--for example, they are identified as diastolic and systolic pressure in aorta models--and are often used in physics-based cardiovascular simulations to tune the boundary conditions discussed in Section 3.2. In summary, we associate with each node \(n_{i}\) the vector of features \[\mathbf{v}_{i}^{k}=[p_{i}^{k},q_{i}^{k},A_{i},\boldsymbol{\alpha}_{i}^{\text{T}},\boldsymbol{\phi}_{i}^{\text{T}},T_{\text{cc}},p_{\text{min}},p_{\text{max}},R_{i,p},C_{i},R_{i,d},l^{k}]^{\text{T}}\in\mathbb{R}^{17}. \tag{1}\] Node features and their definitions are summarized in Table 1.

Figure 1: Schematics of MeshGraphNet. Left: graph of an aortofemoral model, with branch (light blue), junction (dark blue), inlet (red), and outlet (yellow) nodes. Top-right: rollout phase of the method. We provide the input state \(\Theta^{k}\) to the GNN, which computes a prediction for the update of the state variables. The update is combined with the current state to compute \(\Theta^{k+1}\). Bottom-right: steps in the GNN. The node and edge features are encoded using fully-connected neural networks (FCNNs), processed \(L\) times using aggregation operations, and decoded into the output space.

**Edge features.** We define \(\mathbf{d}_{ij}=\mathbf{x}_{j}-\mathbf{x}_{i}\in\mathbb{R}^{3}\) as the difference between the positions of nodes \(n_{j}\) and \(n_{i}\), and \(z_{ij}\) as the length of the shortest path between the two nodes. In addition to _physical_ edges, in this work we introduce edges connecting boundary nodes to interior ones (see Section 3.2). We therefore introduce the one-hot vector \(\boldsymbol{\beta}_{ij}\in\mathbb{R}^{4}\) to encode the edge type. We define the following edge types: edges connecting branch nodes to branch nodes, edges connecting junction nodes to junction nodes, and edges connecting the model inlet or outlets to interior nodes. For simplicity, edges connecting branch nodes to junction nodes are assigned the same type as those within branches. We note that edges within branches and junctions are those defining the centerline of the anatomical model and that, when computing the shortest path \(z_{ij}\), we only consider these edges. Moreover, we highlight that although the introduction of boundary edges changes the topology of the graph, information regarding the original topology is maintained through the different edge types. The edge features associated with \(e_{ij}\) are \[\mathbf{w}_{ij}=[\mathbf{d}_{ij}^{\text{T}}/\|\mathbf{d}_{ij}\|_{2},z_{ij},\boldsymbol{\beta}_{ij}^{\text{T}}]^{\text{T}}\in\mathbb{R}^{8}. \tag{2}\] Edge features and their definitions are summarized in Table 2.
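A small sketch (ours; the numerical values are illustrative) of assembling the 17-dimensional node feature vector of Eq. (1):

```python
import numpy as np

def node_features(p, q, area, node_type, tangent, T_cc, p_min, p_max,
                  Rp=0.0, C=0.0, Rd=0.0, loading=0.0):
    """Assemble v_i^k of Eq. (1). node_type is the one-hot alpha_i
    (branch/junction/inlet/outlet); Rp, C, Rd are nonzero only at outlets."""
    return np.concatenate([[p, q, area], node_type, tangent,
                           [T_cc, p_min, p_max, Rp, C, Rd, loading]])

v = node_features(1.0e5, 80.0, 3.1, np.array([1, 0, 0, 0]),
                  np.array([0.0, 0.0, 1.0]), 0.9, 8.0e4, 1.5e5)
assert v.shape == (17,)  # 3 + 4 + 3 + 7 entries
```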
Note that all features (except for the unit vectors \(\boldsymbol{\phi}_{i}\) and \(\mathbf{d}_{ij}/\|\mathbf{d}_{ij}\|_{2}\)) are normalized to follow a standard Gaussian distribution \(\mathcal{N}(0,1)\) using statistics computed on the dataset.

\begin{table} \begin{tabular}{c l} \hline \hline Node feature & Definition \\ \hline \(p_{i}^{k}\) & Pressure at time \(t^{k}\) \\ \(q_{i}^{k}\) & Flow rate at time \(t^{k}\) \\ \(A_{i}\) & Cross-sectional area \\ \(\boldsymbol{\alpha}_{i}\) & Nodal type \\ \(\boldsymbol{\phi}_{i}\) & Centerline tangent \\ \(T_{\text{cc}}\) & Cardiac cycle duration \\ \(p_{\text{min}}\) & Minimum pressure (across cardiac cycle) \\ \(p_{\text{max}}\) & Maximum pressure (across cardiac cycle) \\ \(R_{i,p}\) & \(R_{p}\) parameter in RCR boundary conditions (see Figure 2) \\ \(C_{i}\) & \(C\) parameter in RCR boundary conditions (see Figure 2) \\ \(R_{i,d}\) & \(R_{d}\) parameter in RCR boundary conditions (see Figure 2) \\ \(l^{k}\) & Boolean load variable at time \(t^{k}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Node features and their definitions.

\begin{table} \begin{tabular}{c l} \hline \hline Edge feature & Definition \\ \hline \(\mathbf{d}_{ij}/\|\mathbf{d}_{ij}\|_{2}\) & Normalized distance vector between nodes \(n_{i}\) and \(n_{j}\) \\ \(z_{ij}\) & Length of the shortest path between nodes \(n_{i}\) and \(n_{j}\) \\ \(\boldsymbol{\beta}_{ij}\) & Edge type \\ \hline \hline \end{tabular} \end{table} Table 2: Edge features and their definitions.

Figure 2: Circuit schematics of three-element Windkessel (RCR) and resistance boundary conditions (left and right, respectively). In physics-based models these boundary conditions are modeled using ordinary differential equations such as Eq. (3). Here, we train the GNN to characterize these boundary conditions by including \(R_{p},C,R_{d}\) or \(R\) into the graph features.

### Extension of MeshGraphNet to support cardiovascular boundary conditions

The hemodynamics in the vasculature are determined by the boundary conditions at the inlet and outlets. In cardiovascular simulations, we represent the downstream vasculature as three-element Windkessel-type (RCR) or resistance-type boundary conditions, as shown in Figure 2. In RCR boundary conditions, the pressure in the capacitor \(P_{c}(t,Q)\) is determined by the relationship \[\frac{\text{d}P_{c}}{\text{d}t}=\frac{1}{C}\left(Q(t)-\frac{P_{c}}{R_{d}}\right). \tag{3}\] Then, the pressure at the outlet (\(P\) in Figure 2) is found as \(P(t,Q)=P_{c}(t,Q)+R_{p}Q(t)\). In resistance boundary conditions, the relationship between pressure and flow rate at the outlet is simply \(P(t)=RQ(t)\). We remark that this is a special case of RCR boundary conditions where \(C\) and \(R_{d}\) tend to zero. In both RCR and resistance boundary conditions, we assume that the distal pressure is zero for simplicity. To integrate these boundary condition types into the MeshGraphNet framework, we provide the values of \(R_{p}\), \(C\), and \(R_{d}\) as node features in the outlet nodes. We set those features to zero for all remaining nodes in the graph. When using resistance-only boundary conditions, we also set \(C=0\) and \(R_{d}=0\). During training and in every iteration of the rollout phase, we solely prescribe the flow rate value at the next time step at the inlet.
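For intuition, Eq. (3) can be integrated with an explicit scheme; the following sketch (ours, a simple forward Euler loop, not the coupling strategy used in the reference 3D solver) produces the outlet pressure from a prescribed flow rate:

```python
import numpy as np

def rcr_outlet_pressure(q, dt, Rp, C, Rd, Pc0=0.0):
    """Integrate dPc/dt = (Q - Pc/Rd)/C with forward Euler and return
    the outlet pressure P = Pc + Rp * Q at every timestep."""
    Pc = Pc0
    P = np.empty_like(q)
    for k, Q in enumerate(q):
        P[k] = Pc + Rp * Q
        Pc += dt * (Q - Pc / Rd) / C
    return P

t = np.linspace(0.0, 0.9, 300)                           # one cardiac cycle (s)
q = 50.0 * np.maximum(np.sin(2 * np.pi * t / 0.9), 0.0)  # pulsatile inflow
P = rcr_outlet_pressure(q, dt=t[1] - t[0], Rp=100.0, C=1e-4, Rd=1000.0)
```

Note that, consistently with the text, letting \(C\) and \(R_{d}\) tend to zero drives \(P_{c}\) to zero, recovering the resistance-only relation \(P=R_{p}Q\).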
Due to the nature of the centerlines we consider in this paper, most centerline nodes are connected to only two neighbors, which hinders the information transfer between the boundary nodes and the interior ones. For this reason, we add artificial edges connecting each interior node to the closest node on the boundary. Specifically, every interior node \(n_{i}\) is connected to the boundary node \(n_{j}\) for which the shortest path length between \(n_{i}\) and \(n_{j}\), i.e., \(z_{ij}\), is minimized. Edges connecting inlets and outlets to interior nodes are associated with dedicated edge types (see _Edge features_ in Section 3.1) and are bidirectional, as are all the other edge types we consider in this paper.

### Initial conditions

As we shall see in Section 4, we train our GNN on time-dependent simulation data over one cardiac cycle. However, the initial condition of the system at the start of the cardiac cycle is in general not a piece of information that can be easily estimated from clinical data. For this reason, when generating the trajectories in the dataset, we start from a constant solution in which \(p^{0}=p_{\text{min}}\) and \(q^{0}=0\) in all graph nodes and then linearly interpolate between this initial state and the actual initial condition at the start of the cardiac cycle over \(T_{l}=0.1\) s. We then include a node feature, denoted \(l^{k}\), taking a unitary value between \(t=-0.1\) s and \(t=0\) s, and zero for \(t>0\), to differentiate between the loading stage and the actual simulation of the cardiac cycle.

### Forward step of the Graph Neural Network

In this section, we denote by \(\mathbf{f}_{*}\) a fully-connected neural network (FCNN) [55] with \(n_{h}\) hidden layers. The number of neurons in the hidden layers is constant and equal to \(n_{s}\). We consider the LeakyReLU activation function for every layer except the output layer, and we apply a layer normalization on the output layer unless explicitly specified. When an FCNN accepts multiple arguments, these are concatenated into a single tensor. MeshGraphNet is based on the three main stages shown in Figure 1 (bottom-right):

1. _Encode_: we transform the node and edge features into latent features using FCNNs. In particular, given the node features \(\mathbf{v}_{i}^{k}\), we compute their latent representation as \(\mathbf{v}_{i}^{k,(0)}=\mathbf{f}_{\text{en}}(\mathbf{v}_{i}^{k})\in\mathbb{R}^{n_{l}}\), where \(\mathbf{f}_{\text{en}}\) is an FCNN mapping the space of node features into \(\mathbb{R}^{n_{l}}\). Similarly, we compute \(\mathbf{w}_{ij}^{(0)}=\mathbf{f}_{\text{ce}}(\mathbf{w}_{ij})\in\mathbb{R}^{n_{l}}\) for all edges \(e_{ij}\). Although the size of the latent space \(n_{l}\) does not need to coincide for edges and nodes, we take it to be the same to facilitate the optimization of our architecture, discussed in Section 3.6.
2. _Process_: the process stage is performed \(L\) times and is further divided into two phases. In the first phase, we compute new edge features as \[\mathbf{w}_{ij}^{(l)}=\mathbf{f}_{\text{pe}}^{(l)}(\mathbf{w}_{ij}^{(l-1)},\mathbf{v}_{i}^{k,(l-1)},\mathbf{v}_{j}^{k,(l-1)})\in\mathbb{R}^{n_{l}},\] where \(l\geq 1\) is the iteration number. Then, we compute new node features using aggregation functions as \[\mathbf{v}_{j}^{k,(l)}=\mathbf{f}_{\text{pn}}^{(l)}(\mathbf{v}_{j}^{k,(l-1)},\sum_{i:\exists e_{ij}}\mathbf{w}_{ij}^{(l)},\mathbf{w}_{\text{in},j},\mathbf{w}_{\text{out},j})\in\mathbb{R}^{n_{l}}.\] All the networks considered in this stage feature a residual connection.
3. _Decode_: in the last stage, node features are transformed from the latent space to the desired output space using an FCNN.
The desired output is a vector that contains the update of pressure and flow rate \([\delta p_{i}^{k},\delta q_{i}^{k}]=\mathbf{f}_{\text{fa}}(\mathbf{v}_{i}^{k,(L)})\in\mathbb{R}^{2}\). Following [53], we do not use layer normalization on the output layer of \(\mathbf{f}_{\text{fa}}\). At the end of the forward pass of the GNN, we update the pressure and flow rate nodal values \(p_{i}^{k}\) and \(q_{i}^{k}\) as \(p_{i}^{k+1}=p_{i}^{k}+\delta p_{i}^{k}\) and \(q_{i}^{k+1}=q_{i}^{k}+\delta q_{i}^{k}\). We introduce \(\delta\mathbf{v}_{i}^{k}=[\delta p_{i}^{k},\delta q_{i}^{k},0,\ldots,0]\in\mathbb{R}^{17}\) and the function \(\Psi_{m}\), which denotes the result of applying the GNN \(m\) consecutive times (rollout phase). Specifically,

\[\Psi_{1}(\Theta^{k})=\bigcup_{i=1}^{N}\{\mathbf{v}_{i}^{k}+\delta\mathbf{v}_{i}^{k}\}\ \cup\bigcup_{i,j:\exists e_{ij}}\{\mathbf{w}_{ij}\},\]

and

\[\Psi_{m}(\Theta^{k})=(\underbrace{\Psi_{1}\circ\cdots\circ\Psi_{1}}_{m})(\Theta^{k}).\]

We denote the approximation of \(p\) and \(q\) at node \(i\), after \(m\) applications of the GNN, as \(\Psi_{m}(\Theta^{k})|_{p,i}\) and \(\Psi_{m}(\Theta^{k})|_{q,i}\), respectively.
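To make the encode-process-decode structure concrete, the following is a minimal PyTorch sketch of the forward pass. It follows the update rules above but simplifies several details (in particular, it omits the boundary inputs \(\mathbf{w}_{\text{in},j}\) and \(\mathbf{w}_{\text{out},j}\) in the node update), so it should be read as an illustration rather than the exact architecture.

```python
import torch
import torch.nn as nn

def fcnn(n_in, n_out, n_s=64, layer_norm=True):
    # Two hidden layers with LeakyReLU, optional layer norm on the output.
    layers = [nn.Linear(n_in, n_s), nn.LeakyReLU(),
              nn.Linear(n_s, n_s), nn.LeakyReLU(),
              nn.Linear(n_s, n_out)]
    if layer_norm:
        layers.append(nn.LayerNorm(n_out))
    return nn.Sequential(*layers)

class SimplifiedMeshGraphNet(nn.Module):
    def __init__(self, n_node, n_edge, n_l=16, L=5):
        super().__init__()
        self.f_en = fcnn(n_node, n_l)                 # node encoder
        self.f_ee = fcnn(n_edge, n_l)                 # edge encoder
        self.f_pe = nn.ModuleList([fcnn(3 * n_l, n_l) for _ in range(L)])
        self.f_pn = nn.ModuleList([fcnn(2 * n_l, n_l) for _ in range(L)])
        self.f_de = fcnn(n_l, 2, layer_norm=False)    # outputs [dp, dq]

    def forward(self, v, w, src, dst):
        # v: (N, n_node) node features; w: (E, n_edge) edge features;
        # src, dst: (E,) endpoint indices of each (directed) edge.
        v, w = self.f_en(v), self.f_ee(w)
        for f_pe, f_pn in zip(self.f_pe, self.f_pn):
            w = w + f_pe(torch.cat([w, v[src], v[dst]], dim=-1))  # edge update
            agg = torch.zeros_like(v).index_add_(0, dst, w)       # sum incoming edges
            v = v + f_pn(torch.cat([v, agg], dim=-1))             # node update
        return self.f_de(v)
```

The residual connections in the processing stage mirror the description above, and the decoder carries no output-layer normalization, as stated in the text.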
### Training

We now introduce the loss function \(\mathcal{L}\) minimized during training of the GNN. Let us define the training set \(\{\Omega_{g}\}_{g\in\mathcal{T}}\), where \(\mathcal{T}\) is a subset of \([1,\ldots,G]\). We train the network on small strides of \(s\) consecutive time steps so that it approximates the exact values of the nodal pressure and flow rate \(\hat{p}_{i}^{k,g}\) and \(\hat{q}_{i}^{k,g}\). Here, we consider \(s=5\). Considering multiple steps has been proven to be beneficial in [56]. In particular, we define the strided MSE

\[\text{sMSE}^{k,g,s}=\frac{1}{N^{g}}\sum_{l=1}^{s}\sum_{i=1}^{N^{g}}a_{l}b_{i}\left[(\hat{p}_{i}^{k+l,g}-\Psi_{l}(\Theta^{k,g})|_{p,i})^{2}+(\hat{q}_{i}^{k+l,g}-\Psi_{l}(\Theta^{k,g})|_{q,i})^{2}\right],\]

where \(a_{l}=1\) if \(l=1\) and \(a_{l}=0.5\) otherwise, and \(b_{i}=100\) for boundary nodes and \(b_{i}=1\) otherwise. Then, the loss function reads

\[\mathcal{L}=\text{MSE}=\sum_{g\in\mathcal{T}}\frac{1}{|\mathcal{T}|(M^{g}-s)}\sum_{k=0}^{M^{g}-s}\text{sMSE}^{k,g,s}. \tag{4}\]

Our goal is to make the GNN robust for rollouts of many time steps. In order to achieve this, it is imperative to add random noise during training to simulate the effect of the error caused by the network in the rollout phase, as described in [52, 53]. During training, we perturb the pressure and flow rate in the state \(\Theta^{k,g}\) as \(\tilde{p}_{i}^{g}=\hat{p}_{i}^{g}+\varepsilon_{p}\) and \(\tilde{q}_{i}^{g}=\hat{q}_{i}^{g}+\varepsilon_{q}\), where \(\varepsilon_{p}\sim N(0,\sigma^{2})\), \(\varepsilon_{q}\sim N(0,\sigma^{2})\), and \(\sigma\) is a hyperparameter controlling the noise standard deviation. The loss function (4) is optimized using stochastic gradient descent and the Adam optimizer. For details on the GNN and training hyperparameters, we refer to Section 3.6. We consider each entire trajectory as a datapoint. Due to the limited size of the datasets that we obtain in this way (as opposed to, for example, considering every possible set of \(s\) consecutive time steps as a separate datapoint), we use \(k\)-fold cross-validation [57], that is, we train \(k\) networks on different train-test splits using the same set of hyperparameters and report their average performance. The train-test splits are made such that \(1-1/k\) and \(1/k\) of the trajectories are assigned to the train and test set, respectively, and such that each trajectory appears in the test set exactly once. Consider as an example the dataset employed in Section 5.1, which is composed of 160 trajectories. During 10-fold cross-validation, each of the 10 networks is trained on 144 trajectories and tested on 16, and we report the average performance achieved on train and test.

### Hyperparameter optimization

Hyperparameter optimization is essential for achieving state-of-the-art performance using machine learning methods. In this work, we employ the optimization platform SigOpt.2 As our objective function, we employed the average rollout error on pressure and flow rate obtained on the test set after training. More specifically, given the anatomy of patient \(g\), the errors for pressure and flow rate are computed as

Footnote 2: [https://sigopt.com](https://sigopt.com)

\[\text{e}^{g}_{p}=\frac{\sum_{i\in\mathcal{B}^{g}}\sum_{k=1}^{M^{g}}(\hat{p}^{k,g}_{i}-\Psi_{k}(\Theta^{0,g})|_{p,i})^{2}}{\sum_{i\in\mathcal{B}^{g}}\sum_{k=1}^{M^{g}}(\hat{p}^{k,g}_{i})^{2}}, \tag{5}\]

\[\text{e}^{g}_{q}=\frac{\sum_{i\in\mathcal{B}^{g}}\sum_{k=1}^{M^{g}}(\hat{q}^{k,g}_{i}-\Psi_{k}(\Theta^{0,g})|_{q,i})^{2}}{\sum_{i\in\mathcal{B}^{g}}\sum_{k=1}^{M^{g}}(\hat{q}^{k,g}_{i})^{2}}, \tag{6}\]

where \(\hat{p}^{k,g}_{i}\) and \(\hat{q}^{k,g}_{i}\) are the exact pressure and flow rate values at time \(t^{k}\) and at node \(n_{i}\) for patient \(g\). Note that, when computing these errors, we only consider the indices of nodes located in branches, that is, \(\mathcal{B}^{g}\). Our optimization yielded the following choices of hyperparameters:

1. _Network architecture._ Each fully-connected network in the GNN consists of \(n_{h}=2\) hidden layers and \(n_{s}=64\) neurons per hidden layer. The optimal number of processing iterations in the forward step of the GNN is \(L=5\), and we consider \(n_{l}=16\) as the size for the latent space.

2. _Training hyperparameters._ We consider a starting learning rate of \(10^{-3}\). This is decreased using a cosine annealing function; the final value of the learning rate is \(10^{-6}\). We use batches of 100 graphs. During training, we inject normally distributed noise \(N(0,\sigma^{2})\) with \(\sigma=5\cdot 10^{-2}\). Unless otherwise stated, we train the GNNs for 100 epochs.
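For reference, the relative rollout errors of Eqs. (5) and (6) used as the optimization objective can be written compactly as follows; this is a sketch in which the array names are placeholders.

```python
import numpy as np

def rollout_errors(p_exact, q_exact, p_rollout, q_rollout, branch_idx):
    """Relative rollout errors, Eqs. (5)-(6). Arrays have shape (M, N):
    time steps x nodes; branch_idx selects the branch nodes B^g."""
    dp = p_exact[:, branch_idx] - p_rollout[:, branch_idx]
    dq = q_exact[:, branch_idx] - q_rollout[:, branch_idx]
    e_p = np.sum(dp ** 2) / np.sum(p_exact[:, branch_idx] ** 2)
    e_q = np.sum(dq ** 2) / np.sum(q_exact[:, branch_idx] ** 2)
    return e_p, e_q
```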
## 4 Dataset

We consider eight healthy and diseased patient-specific models selected from the VMR and shown in Figure 3. Models 1-5 are healthy aorta models, Model 6 is an aortofemoral model featuring an aneurysm, Model 7 is a healthy pulmonary model, and Model 8 is an aorta affected by coarctation. We selected these models as, based on previous studies such as [29], they present features--e.g., many junctions (Model 7) or stenoses (Model 8)--that are typically challenging for traditional physics-based models. The data-generation pipeline consists of three main steps: three-dimensional simulation using SimVascular [58; 59], transformation to the one-dimensional representation, and graph generation.

Figure 3: Cardiovascular models from the Vascular Model Repository. Models 1-5: healthy aorta models. Model 6: aortofemoral model affected by an aneurysm. Model 7: healthy pulmonary model. Model 8: aorta model affected by coarctation. The locations marked in the bottom geometries are used in the results presented in Section 5.2.

Three-dimensional simulation. In addition to geometrical information, the VMR also contains simulation data and boundary conditions information tuned to match clinical measurements. The models include a prescribed flow rate profile over one cardiac cycle at the inlet and RCR (Models 1-6 and Model 8) or resistance boundary conditions (Model 7) at the outlets. We performed 32 (Models 1-5) and 50 (Models 6-8) three-dimensional simulations for each of these geometries under random perturbations of the original boundary conditions. In particular, we multiplied the inlet flow rate and each of the parameters governing the outlet boundary conditions by independent factors uniformly distributed in the range \([0.8,1.2]\). Using the notation introduced in Section 2, each model in the VMR was associated with a particular parameter set \(\boldsymbol{\mu}=[\mu^{\text{in}},\mu^{\text{out}}_{1},\ldots,\mu^{\text{out}}_{N_{\text{out}}}]\) governing its boundary conditions, and we ran simulations for multiple perturbed parameter sets, each of the form \(\widetilde{\boldsymbol{\mu}}=[c^{\text{in}}\mu^{\text{in}},c_{1}^{\text{out}}\mu_{1}^{\text{out}},\dots,c_{N_{\text{out}}}^{\text{out}}\mu_{N_{\text{out}}}^{\text{out}}]\), where \(c^{\text{in}},c_{1}^{\text{out}},\dots,c_{N_{\text{out}}}^{\text{out}}\sim U(0.8,1.2)\). We performed the three-dimensional finite-element simulations of the unsteady Navier-Stokes equations over two cardiac cycles on 128 dual-socket AMD(R) EPYC 7742 cores of the San Diego Super Computing Center (SDSC) Expanse cluster. In Table 3 we report the average simulation run time for each of the eight considered models.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Run time [h] & 3.8 & 2.7 & 1.7 & 2.9 & 3.7 & 19.4 & 5.8 & 5.9 \\ Number of simulations & 32 & 32 & 32 & 32 & 32 & 50 & 50 & 50 \\ \hline \hline \end{tabular} \end{table} Table 3: Average run time in hours of three-dimensional simulations and total number of simulations for each of the considered cardiovascular models.

Post-processing of simulation results. We restricted the three-dimensional results to the model centerlines. We achieved this by considering the orthogonal sections of the vascular geometry at each centerline node and integrating the pressure and normal component of the velocity over the cross section, thus computing average pressure and flow rate. We also associated with each centerline node the area of the corresponding section. For clarity, Figure 4 (middle) shows a subset of the sections where we integrated the pressure and velocity field.

Graph generation. Our GNN implementation is based on PyTorch and the Deep Graph Library (DGL)3. In this last step of the pipeline, we generated nodes, edges, and relative features, in a format compatible with DGL. As discussed in Section 3.2 and shown in Figure 4 (right), at this stage we added edges connecting boundary nodes to interior ones. The time-dependent pressure and flow rate fields are here resampled at each graph node using cubic splines at a constant \(\Delta t\). We also performed data augmentation and increased the number of graphs 4-fold by starting each trajectory at a different offset. Table 4 shows statistics computed over the graphs considered in this paper. Each row is indexed based on the ID of the corresponding cardiovascular model (see Figure 3 for reference). These three steps of the pipeline are summarized in Figure 4.
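The spline-based resampling step mentioned above might look as follows; this is a sketch assuming SciPy, with placeholder variable names.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_field(t_sim, field, dt):
    """Resample a nodal field (shape: time steps x nodes) from the
    simulation time grid t_sim to a uniform grid with spacing dt."""
    t_uniform = np.arange(t_sim[0], t_sim[-1], dt)
    return t_uniform, CubicSpline(t_sim, field, axis=0)(t_uniform)
```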
## 5 Results

In this section, we present the results obtained on the datasets described in Section 4. We trained the networks in a distributed fashion over 16 36-core dual-socket Intel(R) Ice Lake Xeon(R) nodes. For inference, we use an Apple M1 Max processor.

### Convergence study with respect to dataset size and sensitivity analysis

We first assessed the performance of the GNN as a function of the dataset size. We used the five healthy models shown in Figure 3 (Models 1-5) and extracted the solution from the last cardiac cycle (32 per geometry) to train MeshGraphNet. Considering data augmentation, this resulted in 640 resolved flows in the dataset. We performed 10-fold cross-validation by considering a 90-10% split between train and test sets and avoiding data leaks--that is, the augmented data for each simulation belonged to the same (train or test) set as the original simulation.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline ID & \(p\) [mmHg] & \(q\) [cm\({}^{3}\)/s] & \(A\) [cm\({}^{2}\)] & \(h\) [mm] & nodes & edges \\ \hline 1 & 84 [46,148] & 35 [-43,449] & 1.23 [0.2,3.3] & 2.06 [1.1,4.2] & 288 & 1138 \\ 2 & 72 [42,209] & 15 [-20,376] & 0.55 [0.1,1.4] & 1.36 [0.8,3.0] & 292 & 1154 \\ 3 & 74 [33,120] & 17 [-10,235] & 0.78 [0.1,2.6] & 2.08 [1.2,4.6] & 240 & 946 \\ 4 & 86 [52,125] & 20 [-9,243] & 1.23 [0.2,3.8] & 2.42 [1.4,5.6] & 257 & 1014 \\ 5 & 91 [13,143] & 43 [-61,594] & 2.60 [0.3,5.5] & 2.88 [1.6,5.2] & 144 & 564 \\ \hline 6 & 91 [63,159] & 10 [-3,227] & 0.95 [0.1,6.9] & 2.91 [1.7,6.6] & 479 & 1892 \\ 7 & 6 [1,26] & 2 [-1,172] & 0.19 [0.1,2.6] & 3.97 [2.3,10.7] & 848 & 3200 \\ 8 & 86 [19,161] & 20 [-22,310] & 1.07 [0.1,5.3] & 1.68 [0.9,4.2] & 326 & 1290 \\ \hline \hline \end{tabular} \end{table} Table 4: Pressure \(p\), flow rate \(q\), cross-sectional area \(A\), nodal distance \(h\), number of nodes, and number of edges, in the graphs used in the results of Section 5.1 (rows 1-5) and Section 5.2 (rows 6-8). Values of \(p\), \(q\), \(A\), and \(h\) are averaged over branch nodes. Minimum and maximum values are reported in brackets.

Figure 4: Pipeline for generating input data to MeshGraphNet. We first run three-dimensional simulations using SimVascular, then restrict the results to the centerlines by integrating pressure and velocity over slices, and finally generate graphs and relative node and edge features starting from the centerline nodes and connectivity. The figure on the right shows the boundary edges discussed in Section 3.2.

Figure 5 shows the performance of the network when trained over 10, 20, 40, 80, and 160 trajectories. On the left, we report the decay of the train and test loss over the training epochs. As the size of the dataset increases, the gap between the train and test loss decreases, as does the value of the final loss. On the right, we numerically demonstrate the expected convergence in the pressure and flow rate errors (computed as in Eq. (5) and Eq. (6)) with respect to the dataset size. Our results also show that, as we consider more trajectories to train the GNN, the generalization gap between train and test rollout errors decreases and is less variable. The variability in performance is represented in Figure 5 (right) by 95% confidence intervals computed over the 10 networks trained during 10-fold cross-validation.

Figure 5: MeshGraphNet performance when trained over datasets with variable numbers of unique (i.e., not considering augmented data) trajectories. Left: decay of the loss function over 100 epochs (solid lines: train loss, dashed lines: test loss). The displayed loss is relative to the largest value achieved by the network trained on 10 trajectories. Right: convergence of relative rollout errors in pressure \(p\) and flow rate \(q\). Vertical bars represent 95% confidence intervals based on 10 independent training runs.

We analyzed the sensitivity of MeshGraphNet with respect to each of the node and edge features as follows. We selected one trajectory at random in the dataset and assessed the baseline performance of each of the 10 GNNs trained over the whole dataset in terms of pressure and flow rate rollout errors.
Then, we performed many rollouts using the same GNNs while applying random Gaussian noise \(N(0,0.05)\) to only one feature per rollout in each node and edge in each timestep (in the case of multidimensional node features such as the tangent and node type, we divide the standard deviation by the size of the vector). We recall that node and edge features are standard normalized, which motivates the use of the same noise distribution for all features. We define the sensitivity factor with respect to a particular feature as the ratio between the error obtained in the perturbed configuration and the baseline error. Therefore, a sensitivity close to one indicates robustness with respect to noise. Features associated with higher values of sensitivity are more important to the result accuracy than those associated with lower values.

Our findings are summarized in Figure 6. The accuracy of the global approximation of pressure and flow rate did not depend dramatically on the values of pressure and flow rate at the previous time step. As a matter of fact, we train our GNNs specifically to be robust to noise in pressure and flow rate, as discussed in Section 3.5. The networks were most sensitive to variations in the cross-sectional area (which is consistent with our understanding of fluid dynamics) and patient-specific data such as boundary conditions parameters and reference values for pressure (\(p_{\text{min}}\) and \(p_{\text{max}}\)). Unexpectedly, the cardiac cycle period \(T_{\text{cc}}\) also played an important role; we believe that this is due to the low number of distinct \(T_{\text{cc}}\) considered here (five, i.e., one per geometry) and that increasing the variability of that parameter in the dataset will also reduce its effect. Geometrical features such as the tangent or node distance did not significantly affect the accuracy of the results. This might be partially due to the fact that those features are not independent of each other--for instance, the tangent value can be estimated starting from the node positions and vice versa. Finally, we observe that the loading variable \(l^{k}\), which we introduced to distinguish between the loading phase and the cardiac cycle simulation, did not play an important role in the rollout accuracy.
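The sensitivity factor described above can be summarized in a few lines; the sketch below assumes the graph features are stored in a dictionary of arrays, and `rollout_error` stands for the evaluation routine (not shown) that returns the rollout error of a trained GNN.

```python
import numpy as np

def sensitivity_factor(model, graph, feature, baseline_error, sigma=0.05):
    """Ratio between the rollout error obtained with Gaussian noise added
    to a single feature and the baseline error; values near 1 indicate
    robustness with respect to noise in that feature."""
    x = graph[feature]
    dim = x.shape[-1] if x.ndim > 1 else 1  # divide sigma by the vector size
    noisy = dict(graph)
    noisy[feature] = x + np.random.normal(0.0, sigma / dim, size=x.shape)
    # rollout_error: placeholder for the (model, graph) -> error routine.
    return rollout_error(model, noisy) / baseline_error
```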
### Comparison against physics-based one-dimensional models

To assess the performance of our method against a physics-based ROM, we considered Models 6, 7, and 8 (see Figure 3), which present characteristics that are typically difficult to handle using one-dimensional models. For each geometry, we simulated two cardiac cycles using 50 random configurations of the boundary conditions associated with that cardiovascular model in the VMR. We also performed one-dimensional simulations using the same boundary conditions as in the three-dimensional ones. We set the material properties of the arterial wall to obtain close-to-zero variations in the lumen area, following the approach adopted in [29]. We refer to Appendix A for more details about our implementation of one-dimensional models. We initialized the ROM with the pressure and flow rate approximations by the three-dimensional model evaluated at the one-dimensional nodes and ran it for two cardiac cycles. In our experience, this configuration leads to better agreement with the three-dimensional data than initializing the ROM using constant values for pressure and flow rate or performing the simulation for more cardiac cycles. Similarly to Section 5.1, we used 4-fold data augmentation for training and, therefore, each anatomical model was associated with a training dataset of 200 trajectories.

We considered two approaches for training. In the first approach, we considered a global dataset with trajectories computed on all three cardiovascular models. We denoted the networks trained this way GNN-A. The second strategy consists of training three distinct GNNs, each over a dataset comprising trajectories from a single anatomical model. In other words, we trained one GNN on all trajectories associated with Model 6, one on all trajectories associated with Model 7, and one on all trajectories associated with Model 8. As these networks are trained on a single geometry, we denote them GNN-B\(g\), where \(g=6\), \(g=7\), or \(g=8\) is the patient identifier. Table 4, which reports statistics on the models considered in this section, motivates the choice of training geometry-specific networks. Indeed, different vascular regions are characterized by different ranges for key quantities such as pressure, flow rate, and cross-sectional area. Since the GNN operates on data normalized using statistics computed over all graphs in the dataset, we aim to investigate whether normalizing the data with values adapted to each cardiovascular region presents advantages in terms of accuracy.

Figure 6: Sensitivity analysis of MeshGraphNet with respect to node and edge features. Sensitivity is measured as the ratio between the perturbed and baseline errors, where we obtain the perturbed configuration by adding random Gaussian noise to the features on the x-axis during rollout. Results are averaged over ten independently trained GNNs. The node and edge features are: nodal pressure at time \(t^{k}\) (\(p_{i}^{k}\)), nodal flow rate at time \(t^{k}\) (\(q_{i}^{k}\)), cross-sectional area \(A_{i}\), centerline tangent \(\mathbf{\phi}_{i}\), node type, cardiac cycle period \(T_{cc}\), minimum pressure \(p_{\text{min}}\), maximum pressure \(p_{\text{max}}\), RCR parameters, loading variable \(l^{k}\), relative node position \(\mathbf{d}_{ij}/\|\mathbf{d}_{ij}\|\), shortest path length between nodes \(n_{j}\) and \(n_{i}\), and edge type.

We performed 5-fold cross-validation for GNN-A and GNN-B\(g\). Due to the different dataset sizes, we trained GNN-A networks for 100 epochs and GNN-B\(g\) ones for 500. Figure 7 shows the performance of GNN-A, GNN-B6 (left column), GNN-B7 (middle column), and GNN-B8 (right column), over trajectories contained in the test sets. We evaluate the error in pressure at 20 graph nodes sampled at random in the models' branches and display the pressure and flow rate evolution at the locations where the error in pressure achieved by GNN-A is minimum (top row), median (middle row), and maximum (bottom row). The GNNs are remarkably accurate in all considered locations. Figure 7 also shows the performance of one-dimensional models run with \(\Delta t=10^{-2}\) s (same time step as GNN-A and GNN-B\(g\)).
The data- and physics-driven models both performed well on the aortofemoral model (left column), but the GNNs outperformed the one-dimensional ROM on the pulmonary and aorta coarctation models (middle and right columns).

Figure 7: Pressure and flow rate approximated by GNNs and one-dimensional models at the locations shown in Figure 3. Left, middle, and right columns correspond to Models 6, 7, and 8, respectively. All results are obtained with \(\Delta t=10^{-2}\) s. In the legend, GNN-A refers to a network trained on all three geometries at the same time, GNN-B\(g\) is a GNN trained on simulation trajectories for a single patient (\(g=6\), \(g=7\), or \(g=8\)), and 1D refers to the physics-driven one-dimensional model (see Appendix A). Ground truth values are reported in black dashed lines.

Figure 8 compares the accuracy of the GNN-A and one-dimensional models over the whole dataset; since GNN-A and GNN-B\(g\) performed comparably well, we here focus on the more general GNN-A for simplicity. In all cases \(\Delta t=10^{-2}\) s. We plot the pressure (top row) and flow rate (bottom row) approximations by the GNNs and one-dimensional models against the ground truth at locations randomly sampled in space and in time in all the models in the dataset. Points positioned on the bisector indicate good agreement between ROM approximation and ground truth. Once again, these results demonstrate that the GNN achieves a much higher accuracy than the physics-based one-dimensional models.

Figure 8: Pressure and flow rate approximations by GNN-A and one-dimensional models with \(\Delta t=10^{-2}\) s (y-axis) vs ground truth (x-axis). Left, middle, and right columns refer to Models 6, 7, and 8, respectively. Points are randomly sampled in space and time over all trajectories in the dataset.

In Figure 9, we report the average performance of the GNNs and one-dimensional models over all 150 trajectories. Owing to 5-fold cross-validation and the fact that each trajectory appears in the test set exactly once, the GNN results are averaged over all trajectories in the dataset. We observe that the accuracies of GNN-A and GNN-B\(g\) are comparable, which suggests that a single GNN can generalize to multiple geometries. We also note that using \(\Delta t=2\cdot 10^{-2}\) s instead of \(\Delta t=10^{-2}\) s when training GNN-A leads to lower accuracy. However, we highlight that different time-step sizes might require a different set of hyperparameters--most notably, the standard deviation of the noise we use to make the network robust during rollout. In our case, the hyperparameter optimization discussed in Section 3.6 is performed considering \(\Delta t=10^{-2}\) s. The relative errors for pressure and flow rate produced by the GNNs in each anatomical model are considerably lower than those produced by the one-dimensional ROMs. Moreover, we do not observe a strong effect of \(\Delta t\) on the global accuracy of one-dimensional models. This indicates that the poor performance achieved on Models 7 and 8 was due to limitations intrinsic to the methods.

Figure 9: Average errors in pressure and flow rate (left panel) and average run time in seconds (right panel) for different configurations of GNNs and one-dimensional models.

The GNNs showed efficiency comparable to the one-dimensional models in terms of run time using the same time step size \(\Delta t\) for Model 6, and lower efficiency for Model 8.
In the case of the more complex pulmonary model (Model 7), the GNNs are over an order of magnitude more efficient than the one-dimensional models: for example, for \(\Delta t=0.01\) s the one-dimensional models took 60.6 seconds to complete a single cardiac cycle, whereas GNN-A and GNN-B\(g\) took around 4.0 seconds. We observe that, when using a larger \(\Delta t\) to train GNN-A, the run time decreased accordingly: the run times for Models 6, 7, and 8 scale from 3.1, 4.0, and 2.9 seconds (approximately the same for GNN-A and GNN-B\(g\)) to 1.6, 2.0, and 1.5 seconds, respectively. The run times of the one-dimensional models do not scale linearly due to the larger number of Newton iterations necessary to reach convergence in the solution of the nonlinear system with larger \(\Delta t\).

Finally, we performed an ablation study to evaluate the effects of different GNN components by excluding them altogether. Our goal was to determine whether the main modifications to MeshGraphNet proposed in this paper improve the accuracy of the original algorithm. These modifications are: (i) all graph features listed in Eq. (1) and Eq. (2), except for \(p_{i}^{k}\), \(q_{i}^{k}\), \(\boldsymbol{\alpha}_{i}\), \(\mathbf{d}_{ij}/\|\mathbf{d}_{ij}\|_{2}\), and \(z_{ij}\), and (ii) the boundary edges discussed in Section 3.2. For the sake of clarity, we define the set

\[\tau=\{A_{i},\boldsymbol{\phi}_{i},T_{\text{cc}},p_{\text{min}},p_{\text{max}},R_{i,p},C_{i},R_{i,d},l^{k},\boldsymbol{\beta}_{ij},\text{ for all }i,j,k\}, \tag{7}\]

which includes all node and edge graph features we propose to incorporate in MeshGraphNet, for all nodes, edges, and timesteps. We trained one GNN-A by excluding \(\tau\) and another one by excluding boundary edges, using the same hyperparameters discussed in Section 3.6, performed 5-fold cross-validation, and compared their performance against a baseline GNN-A. Figure 10 shows the results averaged over all networks and divided into Models 6, 7, and 8. We observed the biggest performance decline when excluding the boundary edges. This suggests that, in our application, MeshGraphNet cannot be used as is without including those edges to allow information to flow more quickly in the graph. Excluding all the graph features we introduced in this paper also resulted in a noticeable performance drop, particularly in the diseased models (Models 6 and 8). These results demonstrate that our modifications to MeshGraphNet lead to noticeable improvements in the ROM.

Figure 10: Ablation study. We show the decline in performance in pressure (left) and flow rate (right) when we exclude the features defined in Eq. (7) from the graph (green bars), or when we do not add boundary edges (blue bars), compared to the baseline GNN-A (red bars).

## 6 Conclusions

We presented a reduced-order model to simulate blood dynamics in one-dimensional approximations of patient-specific vasculatures. Our architecture is a modified version of MeshGraphNet adapted to suit cardiovascular simulations. We demonstrated the generalizability of the network on a variety of different geometries and topologies. We showed the convergence of the rollout error on train and test datasets as the number of trajectories used for training increased. We performed a sensitivity analysis to determine which node and edge features are most important for correctly approximating pressure and flow rate.
In our experiments, the most influential features were the nodal cross-sectional area and patient-specific quantities such as the parameters governing the boundary conditions. In Section 5.2, we carried out a direct comparison with physics-based one-dimensional models, showing superior performance of the graph neural network, in particular when handling complex geometries such as those with many junctions (Model 7) or stenoses (Model 8). Specifically, our networks consistently achieved errors below 2% and 3% in pressure and flow rate, respectively. We also compared two approaches to train our algorithm: training networks specific to different cardiovascular regions, or training a single network able to handle different geometries. Our results show that low errors can be obtained by following the latter strategy, which indicates that, with sufficient training data, our algorithm will adequately mimic three-dimensional simulations in all regions of the cardiovascular tree. Motivated by our sensitivity study, we performed an ablation study in which we excluded our main contributions altogether to determine which of them played a more critical role in the accuracy of the network. Our results indicate that introducing boundary edges is essential to ensure meaningful results and that using patient-specific graph features also leads to performance improvements with respect to the original MeshGraphNet architecture.

Future work will focus on further quantifying and improving the graph neural network's ability to generalize to geometries unseen during training. We will pursue methods that enable the graph neural network to robustly achieve the accuracies and efficiencies demonstrated in this work using smaller training datasets. This could be done by modifying the structure of the graph neural network (e.g., by incorporating notions of physics like conservation of mass) or the composition of the training dataset to include more patient-specific geometries with fewer trajectories each. Another open direction of research (motivated by our sensitivity analysis in Section 5.1) consists of investigating whether introducing new features or removing some of the existing ones will improve the performance of the method. Removing the parameters associated with the boundary conditions--while keeping the patient-specific data that is normally used to determine them--would be a significant advantage over existing approaches. Indeed, boundary condition tuning is a critical step in current physics-based simulations and is typically performed by varying the boundary condition parameters of surrogate models (for example, zero- or one-dimensional reduced-order models) in Bayesian optimization frameworks that usually require many model evaluations to converge. These optimization procedures are based on objective functions that measure how closely the surrogate model reproduces key physiological quantities such as systolic and diastolic pressures. Incorporating these physiological measures into the neural networks would allow us to bypass the boundary condition tuning stage and reduce the number of steps between medical image acquisition and simulation result assessment. The results presented in Appendix C give us confidence that further tuning of the model and improvements in the training dataset will allow us to obtain accurate approximation results without providing boundary condition parameters to the networks.

## Acknowledgments

This work was supported by NIH Grants R01LM013120, R01EB029362, and K99HL161313.
Additional funding was provided by the Stanford Graduate Fellowship and an NSF GRFP. This publication was additionally supported by the Stanford Maternal and Child Health Research Institute. The authors gratefully acknowledge the San Diego Super Computing Center (SDSC) and Intel for providing the computational resources to run the three-dimensional simulations and to train the GNNs presented in this paper. The authors also thank Dr. Tailin Wu for the insightful discussions and support on GNNs implementation and calibration.
2310.19943
The Acquisition of Physical Knowledge in Generative Neural Networks
As children grow older, they develop an intuitive understanding of the physical processes around them. Their physical understanding develops in stages, moving along developmental trajectories which have been mapped out extensively in previous empirical research. Here, we investigate how the learning trajectories of deep generative neural networks compare to children's developmental trajectories using physical understanding as a testbed. We outline an approach that allows us to examine two distinct hypotheses of human development - stochastic optimization and complexity increase. We find that while our models are able to accurately predict a number of physical processes, their learning trajectories under both hypotheses do not follow the developmental trajectories of children.
Luca M. Schulze Buschoff, Eric Schulz, Marcel Binz
2023-10-30T18:58:03Z
http://arxiv.org/abs/2310.19943v1
# The Acquisition of Physical Knowledge

###### Abstract

As children grow older, they develop an intuitive understanding of the physical processes around them. Their physical understanding develops in stages, moving along developmental trajectories which have been mapped out extensively in previous empirical research. Here, we investigate how the learning trajectories of deep generative neural networks compare to children's developmental trajectories using physical understanding as a testbed. We outline an approach that allows us to examine two distinct hypotheses of human development - stochastic optimization and complexity increase. We find that while our models are able to accurately predict a number of physical processes, their learning trajectories under both hypotheses do not follow the developmental trajectories of children.

## 1 Introduction

More than 70 years ago, Turing (1950) famously suggested that "instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain." If we want to take Turing's proposal seriously, we have to ask ourselves: how do children learn? Developmental psychologists have investigated children's learning in a number of different realms. One of the most well-studied is their acquisition of physical knowledge (Baillargeon, 1996; 2004; Spelke & Kinzler, 2007; Lake et al., 2017). Here, prior empirical work provides us with a precise understanding of the stages that children undergo during their cognitive development (see Figure 1A for an example). It, therefore, serves as an ideal testbed for our investigation. In the present paper, we set out to formalize and test two distinct hypotheses of children's development. The first is the idea of _development as stochastic optimization_, which argues that cognitive development results from some form of stochastic optimization procedure (Gopnik et al., 2017; Ullman & Tenenbaum, 2020; Giron et al., 2022; Wolff, 1987). The second is the idea of _development as complexity increase_, which instead stipulates that the knowledge structures involved in human reasoning become more complex over time (Baillargeon, 2002; Binz & Endres, 2019). First, we show how both hypotheses can be instantiated in a \(\beta\)-variational autoencoder (\(\beta\)-VAE) framework. We then probe models with different degrees of complexity and optimization on physical reasoning tasks using violation-of-expectation (VOE) methods (Piloto et al., 2018; Smith et al., 2019). Finally, we compare the learning trajectories of these artificial systems to the developmental trajectories of children. We find that even fairly generic deep generative neural networks acquire many physical concepts. However, the order in which they acquire these concepts under both hypotheses does not align well with the acquisition order of children - neither hypothesis fully captures the learning trajectories of children. Thus, we conclude that the investigated models do not acquire their knowledge in accordance with Turing's proposal. The remainder of this paper is organized as follows. Section 2 surveys previous literature on models of human-like physical knowledge and developmental trajectories.
In Section 3, we illustrate how to instantiate the development as stochastic optimization and development as complexity increase hypotheses in the \(\beta\)-VAE framework. We then apply these models to different physical reasoning domains in Section 4. Section 5 concludes this report with a general discussion of our findings.

## 2 Related work

### Models with human-like physical knowledge

Building models with human-like physical knowledge has become an active research area in recent years (see Table 1 for a summary). Battaglia et al. (2013) argued that human reasoning in complex natural scenes is driven by an intuitive physics engine that relies on probabilistic simulations to make inferences. Following this idea, they introduced interaction networks - a model that performs simulations by combining an object-centric and a relation-centric component (Battaglia et al., 2016). In contrast to the initial approach that relied on a hard-coded physics engine, interaction networks are learnable engines, allowing them to generalize to novel systems with different configurations of objects and relations. In a similar vein, Smith et al. (2019) combined a perception module that infers physical object representations from raw images with a reasoning module that predicts future object states conditioned on the object representations. They found that this model matched human performance in a number of scenarios. Lerer et al. (2016) trained large convolutional neural networks to predict the stability of wooden block towers as well as the trajectories of falling blocks. They showed that the performance of such networks exceeds that of human subjects on synthetic data. Zhang et al. (2016) compared the intuitive physics engine of Battaglia et al. (2013) to the convolutional neural network of Lerer et al. (2016). They found that while convolutional networks are able to achieve superhuman accuracy in judging the stability of block towers, their physical understanding is dissimilar to that of humans.

### The violation-of-expectation method

How physical knowledge of artificial systems should be evaluated has also received attention. Taking inspiration from developmental psychology, Piloto et al. (2018) proposed to use the VOE method to probe the knowledge of neural networks (Baillargeon, 1996). In particular, they measured the surprise of a network after observing physically implausible sequences. Their work was among the first to demonstrate that the VOE method can elucidate black-box models' inference mechanisms. Moreover, recent intuitive physics benchmarks have also been inspired by work in developmental psychology. Riochet et al. (2021) presented an "evaluation benchmark which diagnoses how much a given system understands about physics by testing whether it can tell apart well-matched videos of possible versus impossible events constructed with a game engine." Likewise, Weihs et al. (2022) proposed a benchmark testing for knowledge about continuity, solidity, and gravity using videos filmed in infant-cognition labs and robotic simulation environments. Finally, Piloto et al. (2022) also introduced a
data set for evaluating intuitive physics in neural networks using the VOE method and used this data set to probe the physical knowledge of a deep learning model equipped with object-centric representations.

Figure 1: **A**: Human developmental trajectory for support events outlined by Baillargeon (1996). The illustrations are taken from Baillargeon (1996) and they show the physical rules acquired at the respective ages. With \(3\) months, infants decide based on a simple contact or no contact rule. According to this rule, a block configuration is considered stable if the blocks touch each other. At around \(5\) months, infants understand that the type of contact matters. Now, only configurations with blocks stacked on top of each other are judged as stable. At \(6.5\) months, they begin to also consider the overlap of the blocks. Finally, at \(12.5\) months they are able to incorporate the block shapes into their judgement, relying not only on the amount of contact but also on how the mass is distributed for each block. **B**: Illustration of our generative video prediction model.

### Modeling human development

Even though developmental psychology has inspired how to evaluate physical knowledge in neural networks, the emphasis of prior machine learning research has always been on reproducing adult-level performance. In contrast, computational cognitive scientists also strive to build artificial learning systems that capture the developmental trajectories of children. Perhaps most closely related to our work is the approach of Binz and Endres (2019) who compared trajectories of Bayesian neural networks that had access to different amounts of data to human developmental trajectories. They investigated both occlusion and support events and found that the acquisition order of concepts in their model aligned with that of children. However, in contrast to their work, which uses an oracle to provide a supervision signal about block stability and visibility, our approach solely relies on an unsupervised training objective. If we look beyond the realm of intuitive physics, we can find other works that have attempted to model the process of human development. Huber et al. (2022) investigated the emergence of object recognition in children. They showed that four- to six-year-olds are already more robust to image distortions compared to deep neural networks trained on ImageNet. Furthermore, children predominantly relied on shape instead of texture for object detection, making them more similar to adults than deep neural networks (Geirhos et al., 2018). Averbeck (2022) pruned recurrent neural networks by removing weak synapses. They found that pruned networks were more resistant to distractions in a working memory task and made optimal choices more frequently in a reinforcement learning setting. These results were consistent with developmental improvements during adolescence, where performance on cognitive operations improves as excitatory synapses in the cortex are pruned. Finally, Giron et al. (2022) examined a theory of development as stochastic optimization. In particular, they combined this idea with a model of human decision-making in multi-armed bandit problems and demonstrated that development resembles a stochastic optimization process in the parameter space of this model. In contrast to these earlier models of development, our setup uses high-dimensional visual stimuli (i.e., video sequences) and solely relies on an unsupervised training objective. It, therefore, more closely resembles the actual learning processes of children in the real world.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & support events & occlusion events & collision events & unsupervised learning & violation-of-expectation & sequential predictions & developmental trajectories \\ \hline \hline \multicolumn{8}{l}{**Work attempting to build models with human-like physical intuitions:**} \\ \hline Battaglia et al. (2013) & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ Battaglia et al. (2016) & ✗ & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ \\ Smith et al. (2019) & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ \\ Lerer et al. (2016) & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ Zhang et al. (2016) & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ Piloto et al. (2018) & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ \\ Riochet et al. (2021) & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ \\ Piloto et al. (2022) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ \\ \hline \hline \multicolumn{8}{l}{**Work attempting to model human developmental trajectories:**} \\ \hline Giron et al. (2022) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ Averbeck (2022) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ Huber et al. (2022) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ Binz and Endres (2019) & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Table summarizing previous work attempting to build models with human-like physical intuitions and work attempting to model human developmental trajectories. Machine learning research has predominantly focused on reproducing adult-level performance, while computational cognitive science has relied heavily on low-dimensional and static stimuli. The present paper combines the best of both worlds.

## 3 Methods

In the following, we discuss how the _development as stochastic optimization_ and _development as complexity increase_ hypotheses can be instantiated in the \(\beta\)-VAE framework1. For the development as stochastic optimization hypothesis, we train a generative video prediction model using gradient descent. To obtain a learning trajectory of this model, we evaluate snapshots of the model in every epoch. For the development as complexity increase hypothesis, we train models of different complexities by making use of the \(\beta\)-VAE framework (Higgins et al., 2016). Doing so enforces a bottleneck on the representational capacity of the hidden representations (Sims, 2016; Bates & Jacobs, 2020), which can be interpreted as a particular form of computational complexity (we will discuss potential alternatives in our general discussion). Learning trajectories for this hypothesis are obtained by increasing the model's representational capacity, i.e., by moving from higher to lower \(\beta\)-values within fully converged models.

Footnote 1: We do not propose that these exact mechanisms are implemented by the brain. Rather, we propose these mechanisms as “as if” models: while the mechanisms behind biological development are likely not exactly analogous, they have similar characteristics.

### Model architecture and objective

We use the recurrent state space model (RSSM) (Hafner et al., 2019; Saxena et al., 2021) - which can be seen as a sequential version of a VAE - as an exemplary model for our analysis. We selected this particular model for two reasons. First, generative models are, in general, a good starting point as they have previously been used to model intuitive physics (Piloto et al., 2018, 2022; Riochet et al., 2021) (see also (Duan et al., 2022) for an overview).
Furthermore, we decided on a VAE-based model since such models have been shown to capture many aspects of human cognition (Malloy et al., 2022; Nagy et al., 2020; Bates & Jacobs, 2020), thereby making them a reasonable candidate hypothesis for studying human development. The model maintains a latent state at each time step, which comprises a deterministic component \(h_{t}\) and a stochastic component \(s_{t}\) (see Figure 1B). These components depend on the previous time steps through a function \(f(h_{t-1},s_{t-1})\), which is implemented as a gated recurrent neural network. We train our models by optimizing the following objective:

\[\sum_{t=1}^{T}\Big(-\mathbb{E}_{q(s_{t}\mid o_{\leq t})}\big[\ln p(o_{t}\mid s_{t})\big]+\beta\,\mathbb{E}_{q(s_{t-1}\mid o_{\leq t-1})}\big[\text{KL}\big(q(s_{t}\mid o_{\leq t})\,\big\|\,p(s_{t}\mid s_{t-1})\big)\big]\Big), \tag{1}\]

where \(o_{\leq t}=o_{1},o_{2},\ldots,o_{t}\) is a sequence of rendered images obtained from a 3D physics engine. For all models, the size of the stochastic hidden dimension \(s_{t}\) was kept at \(20\), while the size of the deterministic hidden dimension \(h_{t}\) was set to \(200\), as in previous implementations of the RSSM (Hafner et al., 2019; Saxena et al., 2021). We furthermore adopted the image encoder and decoder architectures described by Dittadi et al. (2020). We refer the reader to Appendix A for further details about the model architecture and training procedure. We can use the RSSM to generate either open- or closed-loop predictions. For open-loop predictions, the model processes a number of initial observations to infer an approximate posterior \(q(s_{t-1}\mid o_{\leq t-1})\), followed by decoding subsequent latent representations sampled from the prior \(p(s_{t}\mid s_{t-1})\). For closed-loop reconstructions, the decoder is instead continuously given representations sampled from the posterior, which is updated at every time step using the previously observed frame. We generally report results obtained via open-loop predictions unless stated otherwise.

### Measuring surprise

To assess whether a model has learned a specific physical rule, we make use of the VOE paradigm (Piloto et al., 2018, 2022). For this, the model is presented with two video sequences: a _violated_ sequence, which constitutes a violation according to the rule, and an _expected_ sequence, which is consistent with the rule. If the model has successfully learned a specific rule, it should show a larger degree of surprise for the violated compared to the expected sequence. Following Piloto et al. (2022) and Smith et al. (2019), we measure the model's surprise using the decoder's negative log-likelihood (NLL) of observations (i.e., the first term of Equation 1)2. More specifically, for each sequence, we determine if the NLL is larger for the violated compared to the expected sequence for the majority of the frames. We then take the mean over all sequences for a specific condition in order to check whether the reconstructions of the model better match the expected or the violated sequences. This approach is inspired by developmental psychology and allows us to measure a model's surprise similar to how developmental psychologists measure surprise in children (see Appendix B for a discussion on different measures of surprise).

Footnote 2: Note that this definition differs from the common usage of surprise in the context of information theory, where it is defined to be \(-\log p(o_{\leq t})\).
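In code, the comparison described above amounts to a few lines; the sketch below assumes per-frame decoder NLLs have already been computed for each pair of test sequences (the evaluation of the model itself is not shown).

```python
import numpy as np

def majority_surprised(nll_violated, nll_expected):
    """True if the NLL of the violated sequence exceeds that of the
    expected sequence for the majority of frames."""
    return np.mean(np.asarray(nll_violated) > np.asarray(nll_expected)) > 0.5

def condition_score(sequence_pairs):
    """Mean over all (violated, expected) pairs of a condition; this is
    the percentage-of-sequences quantity reported in the results below."""
    return np.mean([majority_surprised(v, e) for v, e in sequence_pairs])
```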
## 4 Results

We evaluated our models on three distinct physical processes. For each of these processes, we generated training data sets inspired by experiments from developmental psychology using the Unity game engine (Unity Technologies, 2005). We randomly varied a number of properties to ensure sufficient variability in the training data. We also generated test data sets that - following the VOE paradigm - contain pairs of violated and expected sequences for each of the conditions in the respective event types (see Appendix C for a detailed description of the data generation process and a visualization of the employed test sequences).

### Support events

We began our investigations by looking at support events, which consist of block configurations such as the ones shown in Figure 1A. Similar tasks have been studied extensively in both the machine learning and developmental psychology community, and they, therefore, serve as an ideal starting point for our analyses. Each scene in our data set contains two randomly configured blocks in a gray room.3

Footnote 3: Note that it would certainly be possible to consider more complex configurations (e.g., by increasing the number of blocks), but we deliberately made this design choice to match the experimental paradigms used in developmental psychology.

Baillargeon (1996) has shown that, as infants grow older, they make use of increasingly complex rules to decide whether a given block configuration is stable or not (also see Baillargeon (2002; 2004)). With \(3\) months, infants decide based on a simple contact or no contact rule. According to this rule, a block configuration is considered stable if the blocks touch each other. At around \(5\) months, infants understand that the type of contact matters. Now, only configurations with blocks stacked on top of each other are judged as stable. At \(6.5\) months, they begin to also consider the overlap of the blocks. Finally, at \(12.5\) months they are able to incorporate the block shapes into their judgement, relying not only on the amount of contact but also on how the mass is distributed for each block.

For each of the four rules for support events, we constructed pairs of violated and expected test sequences with identical first frames. For example, according to the overlap rule, a block configuration should only be stable if the blocks are stacked on top of each other with enough overlap. A test sequence pair for this rule shows two blocks that only slightly overlap. The expected test sequence, which is consistent with the rule, shows the top block falling. In contrast - and in violation of real physics - the violated test sequence shows the same block configuration that, however, appears stable (Figure 2 shows an example pair for this rule, together with the predictions of our model and its corresponding surprise values).

Figure 2: The first row shows the difference in surprise between the violated and expected test sequences given by the fully trained model with \(\beta=1\) for the overlap condition in the support event data set. The surprise curve is smoothed using cubic spline interpolation. The second row shows the expected test sequence. The third row shows the violated test sequence. The last row shows the open-loop reconstruction from the model given the first two frames.

We first verified that our model is able to predict a given scene accurately into the future. For this purpose, we plotted the open-loop predictions given by the fully trained model with \(\beta=1\).
Figure 2 shows an exemplary result for the overlap condition. We see that the predictions of the fully trained model closely match the expected sequence. Furthermore, we see that high surprise values for the violated sequence coincide with differences to the expected sequence - the model is surprised when it observes parts of a video sequence that diverge from real physics. Appendix D.1 shows further examples for open- and closed-loop predictions, while Appendix F contains a visualization of prediction errors.

Figure 3 illustrates how knowledge about physical rules develops over time for the two earlier outlined hypotheses. On the left, the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence is plotted for each of the four conditions over the course of training for the model with \(\beta=1\). Here, the model becomes increasingly optimized over the epochs, thereby implementing the development as stochastic optimization hypothesis. It is evident that the model is able to learn three of the four conditions as it shows more surprise for the violated than the expected sequences for the majority of the cases. However, it learns the conditions at roughly the same rate, which does not match the developmental trajectories of children. While it settles at different levels for the conditions, the order of these conditions also does not match the acquisition order of children: the shape condition, for instance, shows the second highest percentage while it is the last rule that children acquire. On the right, the percentage of sequences for which the surprise for the violated sequence exceeds that of the matching expected sequence is plotted for each of the four conditions for fully trained models with different \(\beta\)-values. This relates to the development as complexity increase hypothesis since the representational capacity of the model increases as \(\beta\) decreases. The order in which increasingly complex models learn the different conditions again does not resemble the developmental trajectories of children: the model with \(\beta=8\) performs very similarly to the model with \(\beta=1\). To summarize, for support events, neither hypothesis yields learning trajectories that resemble the developmental trajectories of children (see also Appendix G for a closed-loop counterpart to Figure 3).

### Occlusion events

Next, we wanted to test whether the results obtained in the last section hold across domains. Thus, we extended our analyses to occlusion events, which display a moving object passing behind two vertical columns. The two columns, together with an optional horizontal connection at the top or bottom, form an occluder which may hide the moving object. Like in the preceding section, we created a randomized training data set alongside several test sequences that violate physical principles (see Appendix C for examples and further details about the data generation process). Baillargeon (1996) reported that, in this setting, (1) infants form a simple behind/not-behind distinction by 2.5 months. In doing so, they assume that the object will not re-appear in the gap between the columns that are connected at the top or bottom. By 3 months, (2) infants expect objects to re-appear when the columns are connected at the top but fail to do so if the columns are connected at the bottom.
Finally, at 3.5 months, (3) they also expect objects to appear behind screens that are connected at the bottom, given that the object is taller than the connecting part.

Figure 3: The plot on the left shows the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence for the model with \(\beta=1\) at every epoch and for each condition of the support event data set. The lines are smoothed with a uniform kernel of size 10. The plot on the right shows the same metric for fully trained models with different \(\beta\).

Figure 4 visualizes our modeling results by showing the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence for each condition. First, we can observe that the fully trained model with \(\beta=1\) is surprised when presented with any of the test sequences that violate physical principles, indicating that it understood all of the three aforementioned occlusion settings. For the left side of the plot, which depicts the development as stochastic optimization hypothesis, we see that the number of sequences for which the surprise for the violated sequence exceeds that of the matching expected sequence increases at the same rate for all three conditions, meaning that the model learns the three concepts at approximately the same time. Thus, for occlusion events, the stochastic optimization hypothesis again does not yield a learning trajectory that matches that of children. The right side of Figure 4 relates to the development as complexity increase hypothesis. Here, we see that the percentage of sequences for which the surprise for the violated sequence exceeds that of the matching expected sequence remains relatively stable as the complexity of the model increases. This again does not lend support to the complexity increase hypothesis.

### Collision events

The last physical process that we investigated was collision events. Here, each scene shows an object rolling down a hill and colliding with a stationary object. To train our models, we again created a randomized training data set of such scenarios together with several test sequences that violate physical principles (see Appendix C for examples and further details about the data generation process). Baillargeon (1996) provides two insights when it comes to collision events: (1) at first, infants expect any stationary object that collides with a moving object to be displaced by the same amount. However, as they grow older, they are able to take the relative sizes of the two objects into account and understand that the larger the size of the moving object compared to the stationary object, the larger the displacement of the stationary object. (2) Furthermore, at around \(8\) months, infants become subject to a _vertical bias_, meaning that they judge stationary objects as immovable if they have a salient vertical dimension (Wang et al., 2003; 2004).

Figure 5 shows the learning trajectories of the two hypotheses, again by plotting how the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence develops over training epochs and complexity. Empirical research suggests that children first expect a size-independent displacement for all objects.
To test whether our models exhibit such a characteristic, we compared their predictions for violated sequences with size-independent displacement (thereby violating physical principles) and expected sequences with a size-dependent displacement (working according to normal physics). While children initially show higher surprise when observing the expected sequences, our models offer a very different picture: at no point do they show more surprise for the expected compared to the violated sequences, as is apparent from the plot on the left side of Figure 5. To test for the vertical bias, we constructed violated sequences where vertical objects do not move upon a collision and expected sequences where they do move according to normal physics. When presented with such sequence pairs, children show more surprise for the expected compared to the violated sequences at some point during their development. However, our models do not exhibit this characteristic at any point in time. Throughout training, they are more surprised by the violated compared to the expected sequences. Figure 4: The plot on the left shows the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence for the model with \(\beta=1\) at every epoch and for each condition of the occlusion event data set. The lines are smoothed with a uniform kernel of size 10. The plot on the right shows the same metric for fully trained models with different \(\beta\). Likewise, we also do not observe this effect when manipulating the representational capacity of our models, as shown on the right side of Figure 5. For collision events, we therefore again find that neither the development as stochastic optimization nor the development as complexity increase hypothesis yields learning trajectories that resemble the developmental trajectories of children. ## 5 Discussion We have compared the learning trajectories of an artificial system to the developmental trajectories of children for three physical processes. For this purpose, we outlined an approach that allowed us to investigate two distinct hypotheses of human development: stochastic optimization and complexity increase. We found that the learning trajectories under both hypotheses do not follow children's developmental trajectories. For all three event types, we found differences from human learning. For support and occlusion events, the predictions of our models improve at roughly the same rate for all conditions, which indicates that our models do not move along separate stages. For collision events, our models crucially exhibit none of the biases that appear in children. We argue that this is to be expected. The vertical bias, for example, is likely a product of children's self-directed movement in the world: as children begin to move around, the majority of vertical objects they encounter, such as walls or furniture, are immovable (Baillargeon, 1996). In contrast to this, our models do not have access to such experiences and are therefore not incentivized to show this bias. While previous work on modeling cognitive development (Binz and Endres, 2019; Giron et al., 2022) focused on tasks with low-dimensional and static stimuli, our approach employs high-dimensional visual stimuli (e.g., video sequences) and solely relies on an unsupervised training objective. It therefore more closely mirrors the actual learning processes of children in the real world.
We furthermore extend previous research on building models with human-like physical intuitions by not focusing on adult-level performance but instead investigating developmental trajectories (see again Table 1 for a comparison to previous research). To showcase how our approach functions as a general framework for testing the learning trajectories of artificial systems, we used a fairly generic generative model. It would be interesting to evaluate the two hypotheses for other model classes, such as generative adversarial networks (Goodfellow et al., 2020) or diffusion models (Sohl-Dickstein et al., 2015), for instance a denoising diffusion probabilistic model (Ho et al., 2020). Furthermore, it has been argued in previous work that object-centric representations are crucial for a proper physical understanding of more complex scenes (Piloto et al., 2022). However, our models did not feature explicit object-centric representations and were still able to predict a number of physical processes. Thus, future work should aim for a systematic comparison of models with and without explicit object-centric representations. Additionally, future work should ideally incorporate model-based analyses investigating how changes in the hidden state are related to the models' inability to capture the developmental trajectories of children (we have performed a rudimentary first analysis towards this end; see Section H in the Appendix). We used very simple data sets to determine the viability of our approach. Evidently, children do not learn by looking at a large number of stylized sequences. Instead, they observe the real world and generalize their acquired knowledge to a given experimental setting. To capture this process, future research should ideally train models in a similar way. This could, for example, be accomplished by utilizing the SAYCam data set (Sullivan et al., 2021). Figure 5: The plot on the left shows the percentage of sequences for which the surprise for the violated sequence exceeds that of the expected sequence for the model with \(\beta=1\) at every epoch and for each condition of the collision event data set. The lines are smoothed with a uniform kernel of size 10. The plot on the right shows the same metric for fully trained models with different \(\beta\). The SAYCam data set contains a large number of longitudinal video recordings from infants' perspectives. We believe that using this data set, it might be possible for an artificial model to acquire a vertical bias. It additionally includes time stamps indicating when a child has encountered a particular scene, which could be used to investigate how the nature of the training data influences development. It is also possible that children's pursuit of goals has an influence on their learning trajectories. Perhaps this could also be investigated using the SAYCam data set; however, it is possible that collecting new data is required to answer this question. Finally, the complexity constraint we impose is a constraint on the size of the latent representations of the model. However, it is entirely possible that other parts of children's physical models change in complexity throughout their development. For example, Binz and Endres (2019) implement complexity increase through varying the complexity of model weights instead of the complexity of latent representations. In contrast to our work, they found that the acquisition order of concepts in their model aligned with that of children for support and occlusion events.
Besides relying on a different complexity measure, their study diverged from ours in two additional ways: they used a much simpler set of stimuli (2D instead of 3D environments) and they relied on much less sophisticated models (plain feedforward networks instead of our sequential RSSM). We think an important question that needs to be addressed in future work is which of these design choices causes the divergence in results. What do we make of our results on the whole? On the one hand, they demonstrate that it is possible to use tools developed in psychology to elucidate the inner workings of deep learning models (Ritter et al., 2017; Binz and Schulz, 2022). From this perspective, our work highlights yet another mismatch between human learning and learning in artificial neural networks (Flesch et al., 2018; Dekker et al., 2022). On the other hand, our results also indicate that current modeling approaches are quite far away from implementing Turing's proposal for obtaining a programme that simulates the adult mind. If we want to keep following this direction, we have to therefore ask ourselves what is needed to build models that acquire their knowledge in human-like ways. Towards this end, it is possible that the training data plays an important role, as suggested by some of our results. However, it might be equally plausible that we need to develop new model architectures and come up with more sophisticated ways to train them. ## Acknowledgements This work was funded by the Max Planck Society and the Volkswagen Foundation under grant number VW98569.
2308.05846
Seed Kernel Counting using Domain Randomization and Object Tracking Neural Networks
High-throughput phenotyping (HTP) of seeds, also known as seed phenotyping, is the comprehensive assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, yield, and the measurement of parameters that form more complex traits. One of the key aspects of seed phenotyping is cereal yield estimation that the seed production industry relies upon to conduct its business. While mechanized seed kernel counters are available in the market currently, they are often priced high and sometimes outside the range of small scale seed production firms' affordability. The development of object tracking neural network models such as You Only Look Once (YOLO) enables computer scientists to design algorithms that can estimate cereal yield inexpensively. The key bottleneck with neural network models is that they require a plethora of labelled training data before they can be put to task. We demonstrate that the use of synthetic imagery serves as a feasible substitute to train neural networks for object tracking that includes the tasks of object classification and detection. Furthermore, we propose a seed kernel counter that uses a low-cost mechanical hopper, a trained YOLOv8 neural network model, and object tracking algorithms based on StrongSORT and ByteTrack to estimate cereal yield from videos. The experiment yields a seed kernel count with an accuracy of 95.2\% and 93.2\% for Soy and Wheat respectively using the StrongSORT algorithm, and an accuracy of 96.8\% and 92.4\% for Soy and Wheat respectively using the ByteTrack algorithm.
Venkat Margapuri, Prapti Thapaliya, Mitchell Neilsen
2023-08-10T19:56:15Z
http://arxiv.org/abs/2308.05846v2
# Seed Kernel Counting using Domain Randomization and Object Tracking Neural Networks ###### Abstract High-throughput phenotyping (HTP) of seeds, also known as seed phenotyping, is the comprehensive assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, yield, and the measurement of parameters that form more complex traits [1]. One of the key aspects of seed phenotyping is cereal yield estimation that the seed production industry relies upon to conduct its business. While mechanized seed kernel counters are available in the market currently, they are often priced high and sometimes outside the range of small scale seed production firms' affordability. The development of object tracking neural network models such as You Only Look Once (YOLO) enables computer scientists to design algorithms that can estimate cereal yield inexpensively. The key bottleneck with neural network models is that they require a plethora of labelled training data before they can be put to task. We demonstrate that the use of synthetic imagery serves as a feasible substitute to train neural networks for object tracking that includes the tasks of object classification and detection. Furthermore, we propose a seed kernel counter that uses a low-cost mechanical hopper, a trained YOLOv8 neural network model, and object tracking algorithms based on StrongSORT and ByteTrack to estimate cereal yield from videos. The experiment yields a seed kernel count with an accuracy of 95.2% and 93.2% for Soy and Wheat respectively using the StrongSORT algorithm, and an accuracy of 96.8% and 92.4% for Soy and Wheat respectively using the ByteTrack algorithm. YOLOv8, Artificial Intelligence, Domain Randomization, Object Tracking, Seed Counter ## I Introduction The advent of technology in the field of agriculture commenced over a century ago, and several studies have been conducted since the 1990s to improve upon production efficiency [2]. High-throughput Phenotyping (HTP) of seeds, also known as seed phenotyping, is the comprehensive assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, yield, and the measurement of parameters that form more complex traits [1]. HTP increases the accuracy of measurements while reducing costs through the application of automation, remote sensing, data integration, and experimental design. The current work addresses the aspect of cereal yield estimation, a use case primarily geared toward the seed production industry. In the absence of accurate seed kernel count, seed production firms package bags of seed kernels by weight. However, the weight of seed kernels depends on the external environment. For instance, the weight of seed kernels increases from the moisture they absorb when soaked in water. There is often a discrepancy between the expected and actual number of seed kernels when the customer acquires the seed kernel bags packaged by weight. Packaging seeds by count is tedious when performed by hand, making it infeasible. Seed production firms have to resort to the use of expensive mechanized seed counting machinery to pack seed kernels by count. However, the seed counting machinery, ranging from $500 to $3000, at times proves too expensive for small scale production firms, denying them the ability to pack by count.
The significant developments made in the field of Artificial Neural Networks, specifically Object Tracking Neural Network models, enable plant and computer scientists to collaboratively develop low-cost, high-throughput systems for seed kernel counting. This paper demonstrates the idea of leveraging videos of seed kernels rolling down a platform to estimate seed kernel count using low-cost hardware components (described in section II) and the object tracking neural network model of You Only Look Once (YOLO). YOLO belongs to a class of supervised neural networks that provides object tracking ability. The task of object tracking bundles the tasks of object detection and classification within it. Supervised neural network models require a plethora of labelled information to train for tasks such as object detection and classification. However, labelled training data is not always readily available for entities such as seed kernels. We demonstrate that the use of synthetic image datasets, generated following the principles of Domain Randomization [3, 4, 5], is a feasible alternative to train neural network models in the absence of real-world labelled datasets. ## II Related Work Neilsen et al. [6] proposed an image processing algorithm to conduct seed kernel counting from videos. The working of the algorithm is based on tracking each of the seed kernels as they flow down a backlit platform. A seed kernel is considered a valid detection and counted if the seed kernel is detected a predefined number of times (threshold). However, the image processing algorithm is highly sensitive to the video's frame rate. GridFree [7] is a Python package for image analysis of interactive grain counting and measurement. GridFree uses an unsupervised machine learning approach, K-Means, to segment kernels from the background by using principal component analysis (PCA) on both raw image channels and their color indices. The package incorporates users' experiences as a dynamic criterion to set thresholds for a divide-and-combine strategy that effectively segments adjacent kernels. When multiple adjacent kernels are incorrectly segmented as a single object, they form an outlier on the distribution plot of kernel area, length, and width. The software exhibits great performance on multiple crop types such as alfalfa, canola, lentil, wheat, chickpea, and soybean. Parico et al. [8] performed real-time pear fruit detection and counting using YOLOv4 models and the Deep SORT algorithm. The study provides a systematic and pragmatic methodology to choose the most suitable neural network model for a desired application in the field of agriculture. The region-of-interest (ROI) line technique was used by the study to estimate the number of pear fruits detected by the neural network model. Wu et al. [9] performed detection of Camellia oleifera fruit in complex scenes by using YOLOv7 and data augmentation. The comparison of YOLOv7's performance with YOLOv5, YOLOv3-spp, and Faster RCNN showed that YOLOv7 provided the best detection performance. The experiment yielded a Mean Average Precision, Precision, Recall, F1 Score, and average detection time of 96.03%, 94.76%, 95.54%, 95.15%, and 0.025 seconds per image respectively. Hajar et al. [10] performed vision-based moving obstacle detection and tracking in a paddy field using YOLOv3 and Deep SORT. The center point positions of the obstacles were used to track the objects as they moved through the paddy field.
The augmented YOLOv3 architecture consisted of 23 residual blocks and up-sampled only once. The augmented architecture obtained a mean intersection over union score of 0.779 and was 27.3% faster in processing speed than standard YOLOv3. Huang et al. [11] developed a video-based detection model to identify damage within unwashed eggs using YOLOv5 and the ByteTrack object tracking algorithm. The detection results of the different frames were associated by ID, and five different frames were used to determine egg category. The experimental results showed an accuracy of 96.4% when YOLOv5 in conjunction with the ByteTrack algorithm was used to detect broken/damaged eggs from videos. ## III Hardware Components We propose a low-cost setup for the capture of seed kernel videos for algorithmic analysis using YOLOv8. Fig. 1 shows the seed kernel image capture setup designed for the experiment. The mechanical hopper helps to deliver seeds at a constant rate, unlike free-hand delivery that tends to be erratic. The mobile phone is placed on a 3-D printed stand to ensure that the camera is always held orthogonal to the surface. It helps to eliminate any skew that may result during the capture of the video. The stand is fitted with a 3-D printed platform at the bottom. The platform at the bottom channels the seed kernels and ensures that they remain in the field of view of the camera as they roll down the light box. In the absence of the platform, it is observed that seed kernels often drift to the side and fall off the light box prematurely, hindering their detection. The mobile phone used for image capture is a Google Pixel 2 XL mobile phone whose default capture frame rate is 60 fps. Fig. 2 shows a frame of the wheat seed kernel video captured using the proposed setup in Fig. 1. Fig. 1: Mechanical hopper delivering seed kernels. ## IV Domain Randomization and Image Datasets Domain Randomization (DR) is the idea of training neural network models on images containing simulated objects that translate closely to real-world objects. A small sample of images containing real objects is required for the creation of synthetic images using DR. The images of soy and wheat are captured by placing a mobile phone on a 3-D printed stand that holds the mobile phone orthogonal to the surface. Ensuring that the mobile phone is held orthogonal to the surface is key to eliminating any skew in the resulting images. Fig. 4 shows soy seeds being captured by the proposed image capture setup. Images of 25 seed kernels of soy and wheat are captured using the setup shown in Fig. 3. Using the synthetic image generator developed as part of a previous work [3], synthetic images containing seed kernels of soy and wheat are generated. The synthetic image generator applies image augmentation techniques such as rotation, flipping, and noising to generate image datasets that are akin to real-world scenarios. The synthetic images are meticulously curated to allow for at most about 25% overlap, to account for clustered seed kernels as the frames of the video are processed. Synthetic image datasets are created for the seed types of soy and wheat, wherein each dataset consists of 200 images of size 320x320x3 with each image containing between 25 and 50 seed kernels overlaid on a light background, as shown in Fig. 4. Furthermore, the synthetic image generator outputs annotation files that contain location coordinates pertinent to each seed kernel in the image in the TXT format for YOLOv8 to consume and process during training.
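To make the data-generation recipe concrete, the following is a minimal Python sketch of such a generator. It is our own illustration under stated assumptions, not the generator released in [3]: it assumes seed crops are available as RGBA cut-outs with transparent backgrounds, overlays them with random rotation and flipping while limiting pairwise overlap to roughly 25%, and emits YOLO-style TXT labels; the noising step applied by the actual generator is omitted for brevity.

```python
import random
from PIL import Image, ImageOps

def iou(a, b):
    # Intersection-over-union of two (x0, y0, x1, y1) boxes.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def make_image(seed_crops, size=320, n_min=25, n_max=50, max_overlap=0.25):
    # seed_crops: list of RGBA PIL images, each smaller than the canvas.
    canvas = Image.new("RGB", (size, size), (240, 240, 240))  # light background
    boxes = []
    for _ in range(random.randint(n_min, n_max)):
        crop = random.choice(seed_crops).rotate(random.uniform(0, 360), expand=True)
        if random.random() < 0.5:
            crop = ImageOps.mirror(crop)  # horizontal flip
        x = random.randint(0, size - crop.width)
        y = random.randint(0, size - crop.height)
        box = (x, y, x + crop.width, y + crop.height)
        if any(iou(box, b) > max_overlap for b in boxes):
            continue  # keep pairwise overlap to at most ~25%
        canvas.paste(crop, (x, y), crop)  # RGBA alpha acts as the paste mask
        boxes.append(box)
    # YOLO TXT labels: class x_center y_center width height, all normalized.
    labels = [f"0 {(b[0]+b[2])/2/size} {(b[1]+b[3])/2/size} "
              f"{(b[2]-b[0])/size} {(b[3]-b[1])/size}" for b in boxes]
    return canvas, labels
```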
In addition to the 200 images of each seed kernel type generated for training YOLOv8, 35 synthetic images containing 30 seed kernels of each seed kernel type are generated for testing on YOLOv8. ## V YOLOv8 and Object Tracking Algorithms The YOLO model is a single-shot detector [12, 13, 14] that uses a fully convolutional neural network as the backbone to process the input image. The YOLOv8 [15] model used for this work was released in January 2023 by Ultralytics to support multiple computer vision tasks such as object detection, classification, segmentation, pose estimation, and tracking. YOLOv8 comprises a convolutional neural network that is divided into two parts: backbone and head. The backbone, CSPDarknet53 [16], consists of 53 convolutional layers and uses cross stage partial connections to facilitate information flow between different layers. The head consists of multiple convolutional layers followed by a series of fully connected layers. Another key characteristic of YOLOv8 is the use of the Self-Attention [17, 18] mechanism in the head to enable the model to focus on specific parts of the image for better performance. Object tracking involves the tasks of object detection and classification, and feature mapping between objects in different frames as they move around. YOLOv8 is a single-shot object detector that detects and labels objects in a given image. However, object tracking requires that the object be detected in every frame across the video, i.e., the object is to be re-identified within every frame as each frame of the video gets processed. Several object tracking algorithms [19, 20, 21] have been proposed over the years. This paper considers two object tracking algorithms for experimentation, namely, StrongSORT and ByteTrack. ### _StrongSORT_ The StrongSORT [19] algorithm is an improvement over the DeepSORT [20] algorithm. The StrongSORT algorithm, like DeepSORT, is a two-branch framework consisting of an Appearance branch and a Motion branch. The algorithm works on objects that have been detected in each frame of the video. The YOLOX-X [21] model is used to detect the objects in the frame. However, detections may be performed using alternate object detectors such as Faster R-CNN [24] as well. Fig. 2: Wheat seed kernels flowing down the light box. Fig. 3: Image capture of soy seed kernels. Fig. 4: Image capture of soy seed kernels. The Appearance branch identifies the features of each of the objects detected in a given frame. The detected features are used to match a given object across different frames. BoT [22] is leveraged as the feature extractor by the StrongSORT algorithm. The appearance state for the \(i^{th}\) tracklet within frame \(t\), \(e_{i}^{t}\), is updated as the exponential moving average (EMA) given by \(e_{i}^{t}=\alpha e_{i}^{t-1}+(1-\alpha)f_{i}^{t}\), where \(f_{i}^{t}\) is the appearance embedding of the current matched detection and \(\alpha=0.9\) is a momentum term. The benefit of EMA lies in the fact that it leverages information on inter-frame feature changes and suppresses detection noise. The Motion branch leverages the Kalman Filter [25] to predict the position of the object in the frame based on a constant velocity model. However, the basic Kalman Filter offers poor performance when the detections made by the object detector are subpar. In order to tackle this issue, the StrongSORT algorithm uses the NSA Kalman Filter algorithm borrowed from the GIAO tracker [26].
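In code, the EMA appearance update above is a one-liner. The sketch below (names are ours) also re-normalizes the embedding, a common convention so the appearance cost can later be computed as one minus cosine similarity; the paper does not state that step explicitly.

```python
import numpy as np

ALPHA = 0.9  # momentum term from the EMA update above

def update_appearance(e_prev, f_t, alpha=ALPHA):
    # e_i^t = alpha * e_i^(t-1) + (1 - alpha) * f_i^t
    e_t = alpha * e_prev + (1.0 - alpha) * f_t
    return e_t / np.linalg.norm(e_t)  # keep unit length for cosine matching
```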
### _ByteTrack_ The ByteTrack [23] algorithm is best suited to cases where objects detected in one frame of the video by detectors such as YOLOv8 are ignored in a different frame of the video. Such instances are generally a result of occlusion of objects as they move from one frame to the other. Motion tracking (MoT) algorithms typically leverage bounding boxes plotted by object detectors (such as YOLOv8) to assign a unique ID to each object in the video. Furthermore, object detectors associate a confidence level with each bounding box that is plotted. Most MoT algorithms ignore bounding boxes that have a confidence level below a threshold to avoid false positives. However, ignoring the bounding boxes at low confidence levels risks missing objects that may otherwise be detected (true positives). The ByteTrack algorithm mitigates the risk by leveraging bounding boxes at all confidence levels and meticulously attempts to identify all objects in a frame agnostic of the detection confidence level. The ByteTrack algorithm leverages a queue called Tracklets to store all the objects (and bounding boxes) that have been detected by the object detector (YOLOv8). The bounding boxes are separated into high score (\(D^{high}\)) and low score (\(D^{low}\)) based on a confidence threshold. Each of the objects in the Tracklets queue is tracked across each frame of the video agnostic of the confidence level of the bounding boxes associated with them. The tracking of objects across frames is performed using the Kalman Filter [25]. Firstly, the position (bounding box) of each of the objects in the tracklets queue is predicted in the subsequent frame. The predictions are matched with the actual detections made by the object detector using a motion similarity score, computed as the Intersection over Union (IoU) between the predicted and actual bounding boxes. Initially, tracklet matching is done between the predicted and high score (\(D^{high}\)) bounding boxes. The tracklets that do not match with any of the high score bounding boxes are matched with low score (\(D^{low}\)) bounding boxes. Any tracklet that is not matched is preserved for a predefined number of frames to test for rebirth in case of occlusion. Finally, the tracklet is removed from the queue if a match is never found. ## VI Experiment The development of the Seed Counter using YOLOv8 and object tracking algorithms involves three steps: * Seed detection * Seed tracking * Seed counting ### _Seed Detection_ The YOLOv8 model is trained on the image dataset generated using the synthetic image generator (described in section IV). 80% of the image dataset is used for training and 20% is used for validation. The model is tested on images containing real Soy and Wheat seed kernels. The test dataset consists of 100 images, 50 of soy and 50 of wheat, each containing 20-30 seed kernels. Transfer Learning, the ability of a neural network to apply the knowledge gained by training on one dataset to a different dataset that shares common domains with the source, is leveraged to train the YOLOv8 model. Model weights from the YOLOv8 model pretrained on the COCO image dataset are leveraged as provided by Ultralytics. The hyperparameters used to train the YOLOv8 model are as shown in Table I. The results of seed kernel detection on the test dataset are evaluated using the metrics of Precision, Recall, and Average Precision (AP).
The metrics are briefly described below and the obtained results are presented in Table II. **Precision:** Precision indicates the number of positives accurately classified by the classifier from all the classifications made. It is given by _true positives/(true positives + false positives)_. **Recall:** Recall indicates the number of positives identified by the classifier of all the positives present in the dataset. It is given by _true positives/(true positives + false negatives)_. **Average Precision:** The Average Precision is computed as the ratio of the Intersection-over-Union (IoU) between the bounding box predictions made by the classifier and ground truth bounding boxes. It is computed for a 50% overlap between ground truth and predicted bounding boxes for the purposes of this paper, denoted \(AP_{50}\). **Note:** Average Precision is only reported for the validation data set but not the test data set because the images in the test set do not have ground truth bounding boxes plotted around them. From the results in Table II, the high recall scores of 91% and 90% on soy and wheat respectively for the test set indicate that the model, albeit trained on synthetic images, detects real seed kernels well. The precision scores of 93% and 92% indicate that the model classifies the seed kernels correctly in most instances. The high precision is likely due to the clear morphometric distinction between soy and wheat. ### _Seed Tracking_ The Seed Tracking phase involves the application of StrongSORT and ByteTrack algorithms to videos of Soy and Wheat seed kernels captured using the setup described in section III. The experiment is conducted on videos containing 250 seed kernels of each seed type captured at three different frame rates of 30, 60, and 120, i.e., normal speed, slow motion, and super slow motion. Videos with higher frame rates capture a greater level of detail than those with lower frame rates. Frame rate has a significant impact on object tracking algorithms since they function by predicting the position of objects in future frames. The StrongSORT and ByteTrack algorithms are applied using the detection weights obtained in the Seed Detection phase as input. The tracking algorithms apply a unique ID to each seed kernel detected in the video and track them throughout the video. In an ideal world where the seed kernels in the video are not prone to occlusion or clustering, mere tracking of objects in the video and counting the number of IDs tracked by the object tracking algorithm is sufficient to obtain a count of the seed kernels in the video. However, the seed kernels in the video are clustered in parts, occluded, and prone to sudden deviations in trajectory, as shown in Fig. 2. The aforementioned issues of clustering, occlusion, and sudden trajectory deviations lead to the risk of object tracking algorithms assigning different unique IDs to the same seed kernel in different frames of the video, eventually leading to a discrepancy between the actual number of seed kernels in the video and the number of unique IDs generated by the object tracking algorithms. The numbers of unique IDs generated by the StrongSORT and ByteTrack algorithms on each of the videos captured for the wheat seeds are shown in Tables III and IV respectively. The results show that both object tracking algorithms consistently overcount the number of seed kernels in the video.
The seed kernel counts are most accurate on videos captured at a frame rate of 30 and least accurate on videos captured at a frame rate of 120 for either seed type and object tracking algorithm, demonstrating that a lower frame rate positively influences the counting performance of the object tracking algorithms. However, both algorithms undercount the number of seed kernels agnostic of frame rate. This is due to the clustering of seed kernels in certain frames, which results in the seed tracking algorithms considering multiple seed kernels as one. ## VII Pitfalls, Future Work, and Conclusion The key pitfall of the experiment is that the videos used for the experiment contain seed kernels that are clustered. As a result, the object tracking algorithms failed to track each seed kernel accurately. In further experiments, the video capture mechanism will be altered to ensure that the videos do not contain clustered (or occluded) seed kernels. Overall, the experiment demonstrates the feasibility of synthetic images to train object tracking neural network models, and their application in seed kernel counting aimed at the seed production industry. As the results are encouraging, future work will involve the development of a mobile application (Android/iOS) and a robust video capture mechanism.
2304.06042
A physical neural network training approach toward multi-plane light conversion design
Multi-plane light converter (MPLC) designs supporting hundreds of modes are attractive in high-throughput optical communications. These photonic structures typically comprise >10 phase masks in free space, with millions of independent design parameters. Conventional MPLC design using wavefront matching updates one mask at a time while fixing the rest. Here we construct a physical neural network (PNN) to model the light propagation and phase modulation in MPLC, providing access to the entire parameter set for optimization, including not only the profiles of the phase masks but also the distances between them. PNN training supports flexible optimization sequences and is a superset of existing MPLC design methods. In addition, our method allows tuning of hyperparameters of PNN training such as learning rate and batch size. Because PNN-based MPLC is found to be insensitive to the number of input and target modes in each training step, we have demonstrated a high-order MPLC design (45 modes) using mini batches that fit into the available computing resources.
Zheyuan Zhu, Joe H. Doerr, Guifang Li, Shuo Pang
2023-04-06T01:54:59Z
http://arxiv.org/abs/2304.06042v1
# A physical neural network training approach toward multi-plane light conversion design ## Abstract Multi-plane light converter (MPLC) designs supporting hundreds of modes are attractive in high-throughput optical communications. These photonic structures typically comprise >10 phase masks in free space, with millions of independent design parameters. Conventional MPLC design using wavefront matching updates one mask at a time while fixing the rest. Here we construct a physical neural network (PNN) to model the light propagation and phase modulation in MPLC, providing access to the entire parameter set for optimization, including not only the profiles of the phase masks but also the distances between them. PNN training supports flexible optimization sequences and is a superset of existing MPLC design methods. In addition, our method allows tuning of hyperparameters of PNN training such as learning rate and batch size. Because PNN-based MPLC is found to be insensitive to the number of input and target modes in each training step, we have demonstrated a high-order MPLC design (45 modes) using mini batches that fit into the available computing resources. ## 1 Introduction Multi-plane light conversion (MPLC) devices are attractive as mode (de)multiplexers in beam shaping and optical communication applications [1, 2, 3, 4]. As the demand for high optical throughput grows, large-scale mode multiplexing and demultiplexing devices with tens to hundreds of spatial modes become increasingly desirable, leading to the continuous search for MPLC designs that push the limit of the number of spatial modes supported in a single device [5, 6, 7, 8]. Although there has been no consensus on the minimal number of phase masks required for a specific MPLC device, the general rule of thumb suggests that it increases slower than \(O(N)\), where \(N\) is the number of spatial mode pairs, given the same requirement on coupling efficiency [9]. Yet when designing a large-scale MPLC system supporting the simultaneous conversion between hundreds of modes, challenges can arise from the high-dimensional parameter space. For example, a 10-mode MPLC using 7 phase masks, each with 256x256 pixels, contains about 0.46 million independent phase parameters, complicating the optimization process. The conventional wavefront matching (WFM) method transforms the high-dimensional MPLC design process into a low-dimensional sequential optimization problem. WFM iteratively adjusts one mask at a time while fixing the rest to match the wavefronts between forward and backward propagating fields through the photonic structure [10]. As a result, WFM can be considered as a specific optimization sequence in the entire parameter space, leaving other potential optimization processes unexplored. In addition, WFM lacks the flexibility to optimize both phase profiles and distances between masks simultaneously. The WFM results cannot be easily modified to accommodate specific input, output, and inter-mask distances. In recent years, the state-of-the-art artificial neural network (ANN) models have reached scales comparable to those of the MPLC models. Motivated by the similarity between optical backward propagation and gradient-based ANN training [8, 11, 12], here we have constructed a physical neural network (PNN) based on the optical propagation model in MPLC.
The PNN-based MPLC design leverages the hardware and software development in ANN training [13, 14, 15] to perform a global search in the parameter space, with the capability to adjust various hyperparameters to steer among various optimization paths. This paper is organized as follows. We first modeled the free-space propagation in MPLC with an equivalent PNN and formulated the MPLC design problem as PNN training. We demonstrated that PNN-based design can incorporate various optimization sequences, including the WFM method, which can be expressed as a specific optimization sequence. We then performed PNN training on both phase profiles and distances to refocus an existing MPLC design for different input and output distances, a capability that is not available in WFM. Finally, we explored the MPLC designs generated from tuning the hyperparameters in PNN training, primarily the batch size and learning rate, and discovered that the performances were insensitive to the batch size. This allowed us to break down a large-scale MPLC design into mini batches, a technique commonly used in ANN, to fit the PNN training process into available computing resources without compromising the performance. ## 2 Theory ### Multi-plane light conversion (MPLC) model MPLC transforms a set of orthogonal two-dimensional (2D) input fields into another orthogonal set of target fields with cascaded 2D masks along the longitudinal direction. The masks are typically implemented with phase-only spatial light modulators (SLMs) or fabricated as diffractive optical elements. The spacing between the input fields, phase plates, and output fields is generally on the order of mm. Thus, the electric fields in the MPLC system can be calculated using free-space propagation based on wave optics. Figure 1 illustrates the principle of a light conversion model that aims to transform \(M\) input fields into \(M\) target fields using \(N\) phase plates. The distance between adjacent phase plates \(\phi_{i}\) and \(\phi_{i+1}\) is \(z_{i}\). Here we use \(z_{0}\) and \(z_{N}\) to denote the distances from the input fields to the first phase mask, and that from the last phase mask to the output fields, respectively. Figure 1: Principle of multi-plane light conversion (MPLC). The system cascades a set of phase masks in free space to successively convert the input fields to target output fields. Each phase mask along with the subsequent free-space propagation is equivalent to a neural network layer. Calculating the optimal masks can be treated as a physical neural network training problem. After discretizing and serializing the fields and masks into vectors, the output field, \(E_{N}\), produced from the input field, \(E_{0}\), can be expressed by successive free-space propagations and phase modulations in matrix form as \[E_{N}=\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{N}))\mathcal{F}\left[\prod_{i=1}^{N}(\mathrm{diag}(\exp(i\phi_{i}))\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{i-1}))\mathcal{F})\right]E_{0}. \tag{1}\] Here we have adopted the angular-spectrum method for free-space propagation, where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the 2D Fourier transform matrix and its inverse, respectively. \(\mathrm{diag}(\mathbf{x})\) creates a diagonal matrix from the vector \(\mathbf{x}\). The longitudinal wave vector is \(k_{z}=\sqrt{k_{0}^{2}-k_{x}^{2}-k_{y}^{2}}\), where \(k_{x}\) and \(k_{y}\) are the spatial frequencies along the \(x\) and \(y\) dimensions, respectively, and \(k_{0}\) is the wave number of the light in vacuum.
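For readers who wish to experiment with the forward model, Eq. (1) can be implemented in a few lines. The following NumPy sketch is our own simplification (function names, the square-field restriction, and the parameter choices are ours, not the authors' released code):

```python
import numpy as np

def angular_spectrum_propagate(field, z, dx, wavelength):
    # F^{-1} diag(exp(i k_z z)) F applied to a square 2D complex field.
    n = field.shape[0]
    k0 = 2 * np.pi / wavelength
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # angular spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kz = np.sqrt((k0**2 - kx**2 - ky**2).astype(complex))  # complex sqrt handles evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def mplc_forward(field, masks, distances, dx, wavelength):
    # Eq. (1): propagate by z_{i-1}, modulate by phi_i, ..., final propagation by z_N.
    # distances = [z_0, ..., z_N] has one more entry than masks = [phi_1, ..., phi_N].
    for phi, z in zip(masks, distances[:-1]):
        field = np.exp(1j * phi) * angular_spectrum_propagate(field, z, dx, wavelength)
    return angular_spectrum_propagate(field, distances[-1], dx, wavelength)
```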
Eq. (1) indicates that the input and output fields follow a series of linear, unitary transforms, defined by the parameters \(\phi_{i}\) and \(z_{i}\). To optimize the parameter sets \(\Phi=\{\phi_{i},i=1,2,...,N\}\) and \(Z=\{z_{i},i=0,1,...,N\}\) for the best mode conversion efficiency, we establish a feed-forward PNN to model the MPLC process with \(N+1\) layers, in which layer \(W_{i}\) consists of free-space propagation over a distance \(z_{i-1}\), followed by phase modulation \(\phi_{i}\), \[W_{i}=\mathrm{diag}(\exp(i\phi_{i}))\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{i-1}))\mathcal{F}, \tag{2}\] and the last layer, \(W_{N+1}\), consists only of free-space propagation for a distance \(z_{N}\), \[W_{N+1}=\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{N}))\mathcal{F}. \tag{3}\] All the \(\phi_{i}\) and \(z_{i}\) parameters in layer \(W_{i}\) can be toggled between trainable and untrainable states. For an MPLC design supporting \(M\) modes, a set of \(M\) output fields \(\{E_{N}^{(j)},j=1,2,...,M\}\), generated from a set of \(M\) input fields \(\{E_{0}^{(j)},j=1,2,...,M\}\) via Eq. (1), are designed to couple into the respective target fields \(\{E_{t}^{(j)},j=1,2,...,M\}\). The coupling efficiency, \(\eta^{(j)}\), which quantifies the percentage of optical power coupled from output field \(E_{N}^{(j)}\) into the target field \(E_{t}^{(j)}\), is given by the magnitude square of the overlap integral in Eq. (4), \[\eta^{(j)}=\left|\left(E_{N}^{(j)}\right)^{*}\left(E_{t}^{(j)}\right)\right|^{2}. \tag{4}\] Here \(*\) denotes the conjugate transpose of a matrix/vector. All pairs of input and target fields are assumed to be normalized, such that \(\left|\left(E_{0}^{(j)}\right)^{*}\left(E_{0}^{(j)}\right)\right|=1\), and \(\left|\left(E_{t}^{(j)}\right)^{*}\left(E_{t}^{(j)}\right)\right|=1\). Since the free-space propagation and phase modulation are both unitary transformations, the output fields are also normalized, \(\left|\left(E_{N}^{(j)}\right)^{*}\left(E_{N}^{(j)}\right)\right|=1\). ### Optimization in MPLC model The PNN model given by Eq. (1) is constructed in TensorFlow 2.4, which provides a gradient-based optimizer, Adaptive Moment Estimation (ADAM), to train all or a subset of the MPLC parameters, including the phase profiles of the masks \(\phi_{i}\) and the inter-mask distances \(z_{i}\). The loss function to minimize is the average percentage of power loss, \(L\), among all \(M\) pairs of output fields \(E_{N}\) and target fields \(E_{t}\), \[L\coloneqq\frac{1}{M}\sum_{j=1}^{M}\left(1-\eta^{(j)}\right)=-\frac{1}{M}\sum_{j=1}^{M}\eta^{(j)}+C. \tag{5}\] Here we can omit the constant \(C\) in the loss function since it is independent of all the MPLC parameters. In addition, we can also designate a subset, \(\{j_{1},j_{2},...,j_{B}\}\subseteq\{1,2,...,M\}\), of all the input-target field pairs in the loss function as \[L=-\frac{1}{B}\sum_{b=1}^{B}\eta^{(j_{b})}, \tag{6}\] which is equivalent to using a mini batch of the dataset in neural network training. The ADAM optimizer updates the parameters according to Eq. (7) for \(i=1,2,...,N\), taking into account the running averages of the gradient and momentum [16].
\[\phi_{i}\leftarrow\phi_{i}-\gamma f\big(\nabla_{\phi_{i}}L\big),\qquad z_{i}\leftarrow z_{i}-\gamma f\big(\nabla_{z_{i}}L\big). \tag{7}\] Here \(f(\cdot)\) summarizes the gradient and momentum scaling operations in each ADAM iteration, and \(\gamma\) is the learning rate. The gradients, \(\nabla_{\phi_{i}}L\) and \(\nabla_{z_{i}}L\), are calculated by TensorFlow using automatic differentiation. We can further customize the optimization sequence using macros that define the parameter(s) to fix or update, as well as which pairs of output and target fields to include in the loss, \(L\), in each training iteration. For example, we can choose to update the first mask and the distance between the output and the last mask in one training step by executing ADAM training (Eq. (7)) only on the parameters \(\phi_{1}\) and \(z_{N}\), while leaving all other parameters fixed. The default PNN optimization sequence executes Eq. (7) on all the masks \(\Phi\), while fixing the distances \(Z\) between masks. The conventional wavefront matching (WFM) method is also a gradient-based optimizer with a specific optimization sequence. Contrary to the full parameter update in PNN, WFM considers only one phase mask, \(\phi_{i}\), trainable in each iteration. By setting the gradient of \(L\) with respect to \(\phi_{i}\) to zero, the critical point, \(\nabla_{\phi_{i}}L=0\), on the coupling efficiency contour satisfies the analytical expression in Eq. (8), \[\frac{1}{M}\sum_{j=1}^{M}\left(\xi_{i+1}\exp(i\phi_{i})\,\varepsilon_{i}-c.c.\right)=0. \tag{8}\] Here _c.c._ denotes the complex conjugate of the preceding term. \(\xi_{i+1}=\left(E_{t}^{(j)}\right)^{*}\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{N}))\mathcal{F}\big[\prod_{k=i+1}^{N}W_{k}\big]\mathcal{F}^{-1}\mathrm{diag}(\exp(ik_{z}z_{i}))\mathcal{F}\) can be interpreted as the conjugate transpose of the backward-propagating field, and \(\varepsilon_{i}=\big[\prod_{k=1}^{i-1}W_{k}\big]E_{0}^{(j)}\) can be interpreted as the forward-propagating field, both at the location of mask \(i\). WFM chooses \(\phi_{i}\) to match the wavefronts of the forward- and backward-propagating fields, so that Eq. (8) holds. Figure 2(a) illustrates the optimization sequences undertaken by PNN (Eq. (7)) and WFM (Eq. (8)) to reach their respective solutions. Because of the phase-shift ambiguity, the coupling efficiency contour contains an infinite number of local maxima. WFM jumps to a series of critical points along one of the mask dimensions with the rest of the masks fixed. To reach other maxima, WFM requires different initial conditions, which can be hard to engineer. In contrast, PNN follows the gradient direction towards a local maximum, which can lead to a different solution. In addition to scanning the initial conditions, we can change the learning rate and momentum of the training process to access different maxima nearby. Figure 2: (a) Comparison of the optimization sequences between PNN and WFM on the coupling efficiency contour. (b) Illustration of the evaluation metrics, loss \(L\) and sharpness \(\delta L\), used in PNN inference. ### Evaluation of MPLC model After optimization, the inference process evaluates both the loss \(L\) and the resilience of the model to perturbations, using the sharpness metric from ANN [18], defined by the change in the loss function under perturbed phase in Eq. (9), \[\delta L=\frac{\left|L_{\Phi+\delta\Phi_{k}}-L_{\Phi}\right|}{1+\left|L_{\Phi}\right|}. \tag{9}\]
Here \(\delta\Phi\) represents the amount of phase noise added to the MPLC design. \(L_{\Phi}\) and \(L_{\Phi+\delta\Phi_{k}}\) denote the average PNN loss given the trained and perturbed phase masks, respectively. \(k=1,2,...,K\) instances of random perturbations are drawn from a uniform distribution \(\mathcal{U}_{[-\delta\Phi,\delta\Phi]}\) to calculate the mean and standard deviation of \(\delta L\), as illustrated in Figure 2(b). In ANN, a lower sharpness implies the potential of the model to generalize better to data it has not been trained on before [19]. In the PNN-based MPLC model, a smaller sharpness is also preferable, as it implies higher optical tolerance against phase errors. We also evaluate the optical performance using the insertion loss, IL (dB), and optical tolerance \(\delta\mathrm{IL}\) (dB), which are parallel concepts to the PNN loss \(L\) and sharpness \(\delta L\), respectively. The insertion loss in dB is calculated from the mean eigenvalues of the crosstalk matrix as in Ref. [8]. The optical tolerance is defined as \(\delta\mathrm{IL}=\left|\mathrm{IL}_{\Phi+\delta\Phi_{k}}-\mathrm{IL}_{\Phi}\right|\), where \(\mathrm{IL}_{\Phi}\) and \(\mathrm{IL}_{\Phi+\delta\Phi_{k}}\) denote the insertion loss (dB) given the trained and perturbed phase masks, respectively. Likewise, \(K\) instances of random perturbations are drawn to calculate the mean and standard deviation of \(\delta\mathrm{IL}\). Finally, since masks that differ by a constant phase are identical in terms of functionality, we quantify the similarity, \(S\), between two masks, \(\phi_{1}\) and \(\phi_{2}\), using the cross-correlation between the complex phasors \(\exp(i\phi_{1})\) and \(\exp(i\phi_{2})\), as defined in Eq. (10), \[S=\frac{\left|\int\exp(i\phi_{1})\exp(-i\phi_{2})\,\mathrm{d}x\mathrm{d}y\right|}{\sqrt{\int\!\left|\exp(i\phi_{1})\right|^{2}\mathrm{d}x\mathrm{d}y\int\!\left|\exp(i\phi_{2})\right|^{2}\mathrm{d}x\mathrm{d}y}}. \tag{10}\] To evaluate the sharpness and optical tolerance of the MPLC models in the following experiments, we chose \(K\)=10 random instances with the level of phase perturbation \(\delta\Phi\)=0.05 rad, which is equivalent to 2 quantization levels on a typical 8-bit liquid crystal on silicon (LCoS) SLM. ## 3 Numerical Simulations ### Optimization sequences of PNN We set up a 10-mode MPLC model to compare the phase mask designs from different optimization sequences in PNN. The model was designed to convert a linear array of 10 Gaussian spots into the 10 Hermite-Gaussian modes in the first 4 mode groups using 5 phase plates, shown in Figure 3. The input spots were linearly spaced 128\(\mu\)m apart with a Gaussian beam waist of 50\(\mu\)m, matching the output generated from a linear fiber array. The masks contained 512x512 pixels with a pixel size of 3\(\mu\)m. The distances between the phase masks, as well as the inputs to the first mask, and the output from the last mask, were all 6mm. The output Hermite-Gaussian modes all had beam waists of 200\(\mu\)m. For comparison with WFM, all the distances were fixed to 6mm as non-trainable parameters. Both WFM and PNN initialized phase masks \(\Phi\) with all zeros. PNN can incorporate a sequential optimization process using the macro in Table 1, which mimics the behavior of the WFM method [17]. The sequential training macro iteratively updates one mask while fixing the rest and the inter-mask distances.
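In TensorFlow, one global iteration of this sequential macro reduces to a loop that differentiates the loss with respect to a single mask at a time. The sketch below is a minimal rendering under our own assumptions: `propagate()`, `inputs`, and `targets` are illustrative placeholders for the forward model of Eq. (1) and the mode pairs, and the learning rate is our own choice.

```python
import tensorflow as tf

# Trainable phase masks, initialized to zero as in the simulations above.
masks = [tf.Variable(tf.zeros([512, 512]), name=f"phi_{i+1}") for i in range(5)]
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

def loss_fn():
    # Eq. (5) with the constant dropped: L = -(1/M) sum_j |<E_N^(j), E_t^(j)>|^2.
    # propagate() stands in for the PNN forward model of Eq. (1).
    etas = [tf.abs(tf.reduce_sum(tf.math.conj(propagate(e0, masks)) * et)) ** 2
            for e0, et in zip(inputs, targets)]
    return -tf.reduce_mean(tf.stack(etas))

# One global iteration of the sequential (WFM-like) macro:
# each mask is updated in turn while all other parameters stay fixed.
for phi in masks:
    with tf.GradientTape() as tape:
        loss = loss_fn()
    optimizer.apply_gradients([(tape.gradient(loss, phi), phi)])
```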
Figure 4(a) compares the average coupling efficiency, \(\eta\), of all modes as a function of global iteration steps \(k\) between WFM and the sequential training macro. The masks after the iterations \(k\)=1 and 2 are visualized in Figure 4(b). Table 2 lists the similarities, \(S\), between the WFM solutions and the sequential training macro for each phase mask. The similarities exceed 92% for all phase masks, indicating that WFM and PNN could reach identical solutions. The default PNN optimization sequence performs the optimization on all the masks simultaneously. Figure 5(a) compares loss as a function of iteration steps for PNN and WFM. Both algorithms stopped when the relative change in the loss function dropped below \(10^{-3}\). WFM and PNN converged to average coupling efficiencies of 0.764 and 0.763, respectively. Although PNN took >10 times more steps to reach a similar coupling efficiency as WFM, the overall execution time, 933s, was ~3% shorter than that of WFM, 961s, since TensorFlow can leverage GPUs to parallelize and accelerate the PNN training. Figure 5(b) compares the five phase masks from WFM and PNN. The similarities between the first to the fifth masks were 0.038, 0.098, 0.077, 0.092, and 0.084, respectively, clearly indicating two different solutions. The sharpnesses of the PNN and WFM solutions are \((7.9\pm 0.037)\times 10^{-3}\) and \((6.5\pm 0.014)\times 10^{-3}\), respectively, consistent with the respective optical tolerances, \((1.81\pm 0.015)\times 10^{-2}\) dB and \((1.81\pm 0.056)\times 10^{-2}\) dB, of the two designs. These results imply similar local behaviors around the two solutions. We have demonstrated that PNN can produce a wider variety of phase mask designs that completely cover the solution space of WFM. Within the PNN framework, WFM can be considered as a special sequential optimization path. It is worth mentioning that PNN is not limited to the sequential (WFM-like) and global training sequences defined in these two examples. Better optimization sequences can be designed to accelerate the convergence of PNN training. ### Optimizing inter-mask distances in PNN-based MPLC design The ability to optimize the input and output distances is an important degree of freedom in MPLC design. Similar to adjusting the object- and image-space distances in lens design [20], tuning the input and output distances alongside the phase masks could potentially improve MPLC performance, or adapt an existing MPLC design to different experimental conditions. However, WFM is not efficient at finding the optimal distances, a task that amounts to enumerating a list of possible MPLC designs with different inter-mask distances, and then executing the WFM iterations from scratch for each configuration. In contrast, the distances and masks are both adjustable in the PNN model. In this example, we optimized the input and output distances \(z_{0}\) and \(z_{N}\) in an existing MPLC design with PNN training. Based on the 5-plate, 10-mode MPLC model, we set the distances \(z_{0}\) and \(z_{5}\), as well as a subset of the phase masks \(\Phi\), as trainable parameters. Table 3 summarizes the optimization sequence of the MPLC refocusing macro. In the first round of updates, we trained only the first and last phase masks, \(\phi_{1}\) and \(\phi_{5}\), alongside \(z_{0}\) and \(z_{5}\), which is equivalent to defocusing in conventional lens design. In the second round, we freed all phase masks for training along with \(z_{0}\) and \(z_{5}\). The MPLC model was initialized with the PNN mask design from Figure 5.
Figure 5: Comparison between the phase mask solutions from WFM and the default PNN optimization sequence. (a) Average coupling efficiency as a function of iteration steps \(k\) for both WFM (red curve) and PNN (blue curve). (b) Phase masks from WFM and PNN at the end of the iterations. Figure 6 plots the phase profiles of the masks after both rounds of updates in the refocusing macro. After round 1 of the updates, the distances \(z_{0}\) and \(z_{5}\) changed from 6mm to 6.34mm and 15.95mm, respectively. Quadratic phase profiles that compensated the curvature of the defocusing wavefront appeared on the last phase mask. After freeing all phase masks alongside \(z_{0}\) and \(z_{5}\), the final design showed a 2.2% improvement in average coupling efficiency, indicating that PNN had performed further optimizations on top of the quadratic phase profile. It is worth noting that the inter-mask distances can also be optimized in a similar way as the input and output distances. Constraints can be set to enforce equal distances between adjacent masks to create a feasible reflective MPLC design.
Table 3: MPLC refocusing macro that optimizes phase plates alongside distances.
\(\varepsilon\leftarrow 10^{-3}\), \(Z\leftarrow\) 6mm; load \(\Phi\) from Figure 5
// Round 1: Adjust first and last masks alongside input and output distances.
\(\delta L\leftarrow 1\), \(L_{prev}\leftarrow 1\)
While \(\delta L>\varepsilon\):
  Update \(\phi_{1}\), \(\phi_{5}\), \(z_{0}\), and \(z_{5}\) with Eq. (7)
  Calculate loss \(L\) with Eq. (5)
  \(\delta L\leftarrow|L-L_{prev}|/L_{prev}\); \(L_{prev}\leftarrow L\)
// Round 2: Adjust all masks alongside input and output distances.
\(\delta L\leftarrow 1\), \(L_{prev}\leftarrow 1\)
While \(\delta L>\varepsilon\):
  Update \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), \(\phi_{4}\), \(\phi_{5}\), \(z_{0}\), and \(z_{5}\) with Eq. (7)
  Calculate loss \(L\) with Eq. (5)
  \(\delta L\leftarrow|L-L_{prev}|/L_{prev}\); \(L_{prev}\leftarrow L\)
Figure 6: Phase masks after round 1 and round 2 updates with the MPLC refocusing macro. ### Effects of batch size in PNN-based MPLC design Apart from the full MPLC design parameters, PNN also provides access to the hyperparameters for fine-tuning the training process. This example explores the effect of one of the hyperparameters, batch size, on PNN-based MPLC design. Training using mini batches of a full dataset is a common practice in ANN to break down a large dataset into smaller blocks that fit in the available computing resources. Since the pairs of all input and target output electric fields are analogous to the training samples in a full dataset, the potential of using batch training could scale up PNN-based MPLC design to hundreds or thousands of modes. Table 4 shows the macro to train an \(M\)-mode MPLC using a batch size of \(B\) (\(B<M\)). The inner loops ensure a full epoch is consumed before updating the loss function of the MPLC model. We applied the batch training macro to the 10-mode example with a batch of 4, 6, 8, and 10 pairs of input and output electric fields during the parameter update (Eq. (7)). Here a batch size of 10 is equivalent to training using the full dataset. For each batch size, we tuned the learning rate from 0.1 to 0.9 with a step size of 0.1 and selected the resulting MPLC models with the best PNN loss.
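A Python rendering of one epoch of this batch-training macro might look as follows. This is again a sketch under our own naming, with `propagate()` abstracting the forward model of Eq. (1):

```python
import numpy as np
import tensorflow as tf

def train_epoch(masks, inputs, targets, optimizer, batch_size):
    # One epoch of the batch-training macro: the M mode pairs are randomly
    # permuted and consumed in mini batches of size B, with Eq. (6) as the
    # per-batch loss.
    order = np.random.permutation(len(inputs))
    for start in range(0, len(order), batch_size):
        batch = order[start:start + batch_size]
        with tf.GradientTape() as tape:
            etas = [tf.abs(tf.reduce_sum(
                        tf.math.conj(propagate(inputs[j], masks)) * targets[j])) ** 2
                    for j in batch]
            loss = -tf.reduce_mean(tf.stack(etas))   # Eq. (6)
        grads = tape.gradient(loss, masks)
        optimizer.apply_gradients(zip(grads, masks))
```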
Figure 7 plots the loss and sharpness with respect to different batch sizes. The different batch sizes introduced marginal changes in PNN loss, comparable in order of magnitude to the changes from phase perturbation. In contrast, there is considerable evidence that changing the batch size often results in a change in sharpness in conventional ANNs [19, 21]. We attribute this behavioral difference to the lack of nonlinearities in MPLC (see Appendix), as the electric field propagation from one phase plate to the next can be completely described by a linear transformation (Eq. (2)). Since batch size does not affect the performance or tolerance of a linear PNN model, we can extend PNN to larger MPLC designs using mini batches of the input and target mode pairs. \begin{table} \begin{tabular}{l} \hline \(\varepsilon\leftarrow 10^{-3}\), \(\delta L\leftarrow 1\), \(L_{prev}\leftarrow 1\) \\ While \(\delta L>\varepsilon\) \\ Generate random permutations \(\{j_{1},...,j_{M}\}\) from \(\{1,2,...,M\}\) \\ \(Q\leftarrow\) ceil(\(M/B\)) \\ For \(k\)=1 to \(Q\) \\ Calculate loss \(L\) with Eq. (6) using \(j_{(k-1)B+1}\) to \(j_{kB}\) \\ Update \(\phi_{1}\) to \(\phi_{N}\) with Eq. (7) \\ Calculate loss \(L\) with Eq. (5) \\ \(\delta L\leftarrow\big{|}L-L_{prev}\big{|}/L_{prev}\) \\ \(L_{prev}\gets L\) \\ \end{tabular} \end{table} Table 4: Batch training macro for \(M\)-mode MPLC with \(N\) masks. Figure 7: PNN design of the 10-mode MPLC with batch training. (a) Sharpness (blue) and PNN loss \(L\) (red) with respect to the training batch size. (b) Insertion loss (red) and tolerance (blue) after phase perturbation as a function of batch size. Figure 8(a) shows a 45-mode MPLC example we have used to test PNN training with mini batches. The model was designed to convert a linear array of 45 Gaussian spots into the 45 Hermite-Gaussian modes in the first 9 mode groups using 8 phase plates. The input spots were linearly spaced 127\(\mu\)m apart with a Gaussian beam waist of 30\(\mu\)m. The masks contained 1280x512 pixels with a pixel size of 5\(\mu\)m. The total number of trainable parameters was 5.2 million. The distances between adjacent phase masks, as well as from the input to the first mask and from the last mask to the output, were fixed at 24mm. The output Hermite-Gaussian modes all had a beam waist of 200\(\mu\)m. We performed the PNN training using batch sizes of 4, 6, 8, and 45 (full dataset). Within each batch size, we tuned the learning rate from 0.1 to 0.9 with a step size of 0.1 and selected the resulting MPLC models with the best PNN loss. Given the size of the model and the number of trainable parameters, the GPUs on our host PC (two NVIDIA RTX2080Ti) supported a maximum batch size of \(B\)=8. To test the MPLC design trained with the full dataset, we wrote the macro in Table 5 to perform an equivalent full-dataset training. Figure 8(b) and (c) compare the loss and sharpness with respect to the batch sizes. All models reached >71% average coupling efficiency and <3dB insertion loss. Considering the variance of the models due to phase perturbation, no noticeable change in the sharpness and optical tolerance could be observed across different batch sizes, indicating successful training of the 45-mode MPLC using smaller batch sizes.
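As a concrete companion to Table 4, the following is a minimal plain-Python sketch of the batch training loop; `model.step` and `model.loss` are hypothetical stand-ins for the mini-batch update of Eq. (7) and the loss evaluations of Eqs. (5)-(6):

```python
import math
import random

def train_batched(model, M, B, eps=1e-3):
    """Batch training macro of Table 4 for an M-mode MPLC with batch size B."""
    dL, L_prev = 1.0, 1.0
    while dL > eps:
        order = random.sample(range(M), M)      # random permutation of mode indices
        Q = math.ceil(M / B)
        for k in range(Q):                      # consume one full epoch
            batch = order[k * B:(k + 1) * B]
            model.step(batch)                   # mini-batch loss (Eq. 6) + update (Eq. 7)
        L = model.loss(range(M))                # full-dataset loss, Eq. (5)
        dL = abs(L - L_prev) / L_prev
        L_prev = L
```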
Hence, we can divide the entire set of mode-conversion pairs into mini batches that fit in the available memory of individual CPU cores and GPUs, making the PNN training applicable to large-scale MPLC models supporting hundreds to thousands of modes. Figure 8: PNN design of a 45-mode MPLC with batch training. (a) Inputs and target outputs of PNN. (b) Sharpness (blue) and PNN loss \(L\) (red) with respect to the training batch size. (c) Insertion loss (red) and tolerance (blue) after phase perturbation as a function of batch size. \begin{table} \begin{tabular}{l} \hline \(B\leftarrow\)8, \(\varepsilon\leftarrow 10^{-3}\), \(\delta L\leftarrow 1\), \(L_{prev}\leftarrow 1\) \\ While \(\delta L>\varepsilon\) \\ Generate random permutations \(\{j_{1},...,j_{M}\}\) from \(\{1,2,...,M\}\) \\ \(Q\leftarrow\) ceil(\(M/B\)) \\ Initialize \(\{g_{i}\}\) as zero vectors \\ For \(k\)=1 to \(Q\) \\ Calculate loss \(L\) with Eq. (6) using \(j_{(k-1)B+1}\) to \(j_{kB}\) \\ For \(i\)=1 to \(N\) \\ // Aggregate partial gradients \\ \(g_{i}\gets g_{i}+\nabla_{\phi_{i}}L\) \\ Update \(\phi_{1}\) to \(\phi_{N}\) with Eq. (7) \\ Calculate loss \(L\) with Eq. (5) \\ \(\delta L\leftarrow\big{|}L-L_{prev}\big{|}/L_{prev}\) \\ \(L_{prev}\gets L\) \\ \end{tabular} \end{table} Table 5: Full-dataset update macro for \(M\)-mode MPLC with \(N\) masks. ## 4 Summary We have demonstrated a new approach towards MPLC design based on a PNN model that incorporates optical propagation and phase modulation. The proposed method can perform simultaneous searches over all the design parameters, including the phase profiles and the distances between the masks. The PNN training opens the possibility of engineering the underlying optimization sequences, constituting a superset of the conventional WFM design methods. We have demonstrated the capability to optimize the input and output distances alongside the masks, yielding a superior solution that cannot be easily reached by WFM. Combined with hyperparameter tuning techniques, we have arrived at solutions with similar performance but using smaller batch sizes. The ability to perform batch training could scale the PNN-based MPLC design approach up to hundreds or thousands of modes while maintaining compatibility with limited computing resources. ## Acknowledgement This work is supported in part by the National Science Foundation (ECCS-1932858) and the Office of Naval Research (N00014-20-1-2441). We would like to thank Dr. Stephen Becker from University of Colorado at Boulder for helpful discussions. ## Appendix Supported by the universal approximation theorem [22], ANN models often contain nonlinear activation functions to ensure sufficient expressive power. We speculate that the inclusion of nonlinear activations in conventional ANNs contributes to the tradeoff between batch size and sharpness, which we did not observe in the linear MPLC model. Here we provide empirical evidence supporting our hypothesis using a fully connected MNIST classification model with two hidden layers. The model has an input of size 784, two hidden ReLU layers of size 100, and an output layer of size 10. We quantify the number of activations as the average number of hidden neurons inhibited by ReLU over all input samples in the dataset. The model was compiled with cross-entropy loss and trained with the Adam optimizer using batch sizes of 600 and 6000.
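The appendix model and the activation metric can be sketched in a few lines; the following PyTorch version is an assumed reconstruction of the described architecture (784-100-100-10 with ReLU), counting a hidden neuron as "inhibited" when its ReLU output is exactly zero:

```python
import torch
import torch.nn as nn

class MNISTNet(nn.Module):
    """Fully connected MNIST classifier with two hidden ReLU layers."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        return self.fc3(h2), (h1, h2)           # logits and hidden activations

@torch.no_grad()
def num_inhibited(model, x):
    """Average number of hidden neurons zeroed by ReLU per input sample."""
    _, (h1, h2) = model(x)
    return ((h1 == 0).sum(dim=1) + (h2 == 0).sum(dim=1)).float().mean().item()
```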
For the batch size 6000, we aggregated the partial gradients over 10 mini batches of 600 before performing the weight update, as described by the full-dataset update macro in Table 5. We logged the sharpness and the number of activations during the training process. Figure 9 shows that small-batch training leads to a lower sharpness than large-batch training, consistent with the observations in Ref. [19]. The lower sharpness can be explained by the larger number of activations in small-batch training: when more neurons are inhibited, the outputs are less likely to be impacted by perturbations. The correlation between activations and sharpness has implications for improving ANN generalization algorithms, as it provides a novel metric to consider. Further understanding of why batch size influences the number of activations could be valuable for controlling the number of activations in the future, as well as for using large-batch training without the cost of generalization capabilities. In the absence of activation functions in the MPLC model, batch size hardly affects the sharpness and optical tolerance. Figure 9: Training with different batch sizes in the MNIST model. (a) Sharpness and (b) number of activations as a function of the training steps for batch sizes of 600 (blue curves) and 6000 (red curves).
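The large-batch runs can be emulated by accumulating gradients, as in the sketch below (following the MNISTNet sketch above; dividing the loss keeps the aggregated gradient equal to the mean over the full batch):

```python
def large_batch_step(model, optimizer, loss_fn, minibatches):
    """One weight update from partial gradients, e.g. 10 mini batches of 600."""
    optimizer.zero_grad()
    for x, y in minibatches:
        logits, _ = model(x)
        loss = loss_fn(logits, y) / len(minibatches)
        loss.backward()                  # gradients accumulate in .grad buffers
    optimizer.step()                     # single update from aggregated gradient
```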
2307.01241
Data-driven decoding of quantum error correcting codes using graph neural networks
To leverage the full potential of quantum error-correcting stabilizer codes it is crucial to have an efficient and accurate decoder. Accurate, maximum likelihood, decoders are computationally very expensive whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of models and estimates of error rates for idling qubits, gates, measurements, and resets, and will typically assume symmetric error channels. In this work, instead, we explore a model-free, data-driven, approach to decoding, using a graph neural network (GNN). The decoding problem is formulated as a graph classification task in which a set of stabilizer measurements is mapped to an annotated detector graph for which the neural network predicts the most likely logical error class. We show that the GNN-based decoder can outperform a matching decoder for circuit level noise on the surface code given only simulated experimental data, even if the matching decoder is given full information of the underlying error model. Although training is computationally demanding, inference is fast and scales approximately linearly with the space-time volume of the code. We also find that we can use large, but more limited, datasets of real experimental data [Google Quantum AI, Nature {\bf 614}, 676 (2023)] for the repetition code, giving decoding accuracies that are on par with minimum weight perfect matching. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction, which is competitive in terms of speed, accuracy, and versatility.
Moritz Lange, Pontus Havström, Basudha Srivastava, Valdemar Bergentall, Karl Hammar, Olivia Heuts, Evert van Nieuwenburg, Mats Granath
2023-07-03T17:25:45Z
http://arxiv.org/abs/2307.01241v1
# Data-driven decoding of quantum error correcting codes using graph neural networks ###### Abstract To leverage the full potential of quantum error-correcting stabilizer codes it is crucial to have an efficient and accurate decoder. Accurate, maximum likelihood, decoders are computationally very expensive whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of models and estimates of error rates for idling qubits, gates, measurements, and resets, and will typically assume symmetric error channels. In this work, instead, we explore a model-free, data-driven, approach to decoding, using a graph neural network (GNN). The decoding problem is formulated as a graph classification task in which a set of stabilizer measurements is mapped to an annotated detector graph for which the neural network predicts the most likely logical error class. We show that the GNN-based decoder can outperform a matching decoder for circuit level noise on the surface code given only simulated experimental data, even if the matching decoder is given full information of the underlying error model. Although training is computationally demanding, inference is fast and scales approximately linearly with the space-time volume of the code. We also find that we can use large, but more limited, datasets of real experimental data [Google Quantum AI, Nature **614**, 676 (2023)] for the repetition code, giving decoding accuracies that are on par with minimum weight perfect matching. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction, which is competitive in terms of speed, accuracy, and versatility. ## I Introduction Quantum Error Correction (QEC) is foreseen to be a vital component in the development of practical quantum computing [1; 2; 3; 4; 5]. The need for QEC arises due to the susceptibility of quantum information to noise, which can rapidly accumulate and corrupt the final output. Unlike noise mitigation schemes where errors are reduced by classical post-processing [6; 7; 8], QEC methods encode quantum information in a way that allows for the detection and correction of errors without destroying the information itself. A prominent framework for this is topological stabilizer codes, such as the surface code, for which the logical failure rates can be systematically suppressed by increasing the size of the code if the intrinsic error rates are below some threshold value [9; 10; 11; 12; 13]. Stabilizer codes are based on a set of commuting, typically local, measurements that project an \(n\)-qubit state to a lower dimensional code space representing one or more logical qubits. Errors take the state out of the code space and are then indicated by a syndrome, corresponding to stabilizer violations. The syndrome needs to be interpreted in order to gauge whether a logical bit or phase flip may have been incurred on the logical qubit. Interpreting the syndrome, to predict the most likely logical error, requires both a decoder algorithm and, traditionally, a model of the qubit error channels. The fact that measurements may themselves be noisy makes this interpretation additionally challenging [10; 13]. Efforts are underway to realize stabilizer codes experimentally using various qubit architectures [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28].
In [28], code distance 3 and 5 surface codes were implemented, using 17 and 49 superconducting qubits, respectively. After initialization of the qubits, repeated stabilizer measurements are performed over a given number of cycles capped by a final round of single qubit measurements. The results are then compared with the initial state to determine whether errors have caused a logical bit- (or phase-) error. The decoder analyses the collected sets of syndrome measurements in post-processing, where the fraction of correct predictions gives a measure of the logical accuracy. The better the decoder, the higher the coherence time of the logical qubit, and in [28] a computationally costly tensor network based decoder was used to maximize the logical fidelity of the distance 5 code compared to the distance 3 code. However, with the objective of moving from running and benchmarking a quantum memory to using it for universal quantum computation, it will be necessary to do error correction both with high accuracy and in real time. In the present work, we explore the viability of using a purely data-driven approach to decoding, based on the potential of generating large amounts of experimental data. We use a graph neural network (GNN), which is well suited for addressing this type of data. Namely, a single data point, as in [28], consists of a set of "detectors", i.e., changes in stabilizer measurements from one cycle to the next, together with a label indicating the measured logical bit- or phase-flip error. This can be represented as a labeled graph with nodes that are annotated by the information on the type of stabilizer and the space-time position of the detector, as shown in Figure 1. The maximum degree of the graph can be capped by removing edges between distant detectors, keeping only a fixed maximum number of neighboring nodes. The latter ensures that each network layer in the GNN (see Figure 2) performs a number of matrix multiplications that scales linearly with the number of nodes, i.e., linearly with the number of stabilizer measurements and the overall error rate. We have trained this decoder on simulated experimental data for the surface code using Stim [31] as well as real experimental data on the repetition code [28]. For both of these, the decoder is on par with, or outperforms, the state-of-the-art matching decoder [32], suggesting that with sufficient data and a suitable neural network architecture, model-free machine learning based decoders trained on experimental data can be competitive for future implementations of quantum error-correcting stabilizer codes. ## II Stabilizer Codes and Decoding A stabilizer code is defined through a set of commuting operators constructed from products of Pauli operators acting on a Hilbert space of \(n\) data qubits [3]. With \(n_{S}\) independent stabilizers the Hilbert space is split into sectors of dimension \(2^{n-n_{S}}\), specified by the parity under each stabilizer. For concreteness we will consider the case \(n_{S}=n-1\), such that each of the sectors represents a single qubit degree of freedom. Each syndrome measurement is performed with the help of an ancilla qubit following a small entangling circuit with the relevant data qubits. The measured state of the ancilla qubits provides a syndrome \(S=\{s_{i}\in\{0,1\}\,|\,i=1,...,n_{S}\}\), and projects the density matrix of the \(n\)-qubit state into a single 2-dimensional block, a Pauli frame [33; 34].
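As a toy illustration of the syndrome map (our own example, not taken from the paper), the stabilizers of a 3-qubit repetition code can be collected into a parity-check matrix \(H\), so that a bit-flip pattern \(e\) produces the syndrome \(s=He \bmod 2\):

```python
import numpy as np

H = np.array([[1, 1, 0],     # stabilizer Z1 Z2
              [0, 1, 1]])    # stabilizer Z2 Z3
e = np.array([0, 1, 0])      # bit flip on the middle qubit
s = H @ e % 2                # -> array([1, 1]): both stabilizers violated
```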
Given uncertainties in the measurements, a number of rounds are typically performed before the information is interpreted by means of a decoder. Defining a pair of anticommuting operators \(Z_{L}\) and \(X_{L}\) that commute with the stabilizer group provides the logical computational space through \(Z_{L}|0\rangle_{L}=|0\rangle_{L}\) and \(|1\rangle_{L}=X_{L}|0\rangle_{L}\). Assuming a fixed pair of logical operators for a given code defines the corresponding logical states in each Pauli frame. Thus, a number of subsequent rounds of stabilizer measurements, during which the code is affected by decoherence, transforms the density matrix from the initial state \(\rho=\sum_{i,j\in\{0,1\}}\rho_{ij}|i\rangle_{L}\langle j|_{L}\) to the final state \(\rho^{\prime}=\sum_{i,j\in\{0,1\}}\rho^{\prime}_{ij}|i\rangle^{\prime}_{L}\langle j|^{\prime}_{L}\), where \(|0/1\rangle_{L}\) (\(|0/1\rangle^{\prime}_{L}\)) are the logical qubit states in the initial (final) Pauli frame. The logical error channels are approximated by \[\rho\rightarrow\rho^{\prime}=\epsilon_{L}(\rho)=(1-P)\rho+P_{X}X_{L}\rho X_{L}+P_{Z}Z_{L}\rho Z_{L}+P_{Y}Y_{L}\rho Y_{L}\,, \tag{1}\] with \(Y_{L}=-iZ_{L}X_{L}\) and \(P=\sum_{i=X,Y,Z}P_{i}\). In general there may be additional non-symmetric channels (see, for example, [19]), but we will assume that the data (as in [28]) does not resolve such channels. Figure 1: Memory experiment on the distance \(d=5\) surface code. Data qubit initialization is followed by \(d_{t}=2\) stabilizer measurement rounds and a final data qubit measurement round. Data qubits are on the vertices of plaquettes (circles, shown in the bottom and top planes). Ancilla qubits (not shown) at the center of plaquettes provide stabilizer measurement outcomes. The detector graph has nodes corresponding to changes in stabilizers from the previous time step. (Not all edges shown.) Nodes are annotated by the type of stabilizer and the space-time coordinate. The label, here \(\lambda_{Z}=1\), corresponds to a change of \(\langle Z_{L}\rangle\), measured along the northwest edge. Also shown, bottom layer, are some example stabilizers, and the logical \(X_{L}\) (not measured).
Even though such decoders may be useful for benchmarking and optimizing the theoretical performance of stabilizer codes [28], they are computationally too demanding for real time operation, even for small codes. The more standard decoders instead are based on the minimum weight perfect matching (MWPM) algorithm [41; 42; 43; 44; 45; 46]. Such a decoder aims to find the single, most likely, configuration of single qubit errors consistent with the set of measured stabilizers. Detectors are mapped to nodes of a graph with edges that are weighted by the probability of the pair of nodes. For codes where nodes appear in pairs (such as the repetition or surface code), the most likely error corresponds to pairwise matching such that the total weight of the edges is minimized. This algorithm is fast, in practice scaling approximately linearly with the size of the graph (see Figure 8). Nevertheless, it has several short-comings that limits accuracy and applicability: 1) Approximate handling of crossing edges (such as coinciding X and Z errors) means that the effective error model is oversimplified. 2) Except at very low error rates, degeneracies of less likely error configurations are ignored. 3) For models where a single error may give rise to more than two detector events, more sophisticated algorithms are needed [47; 48; 49; 50; 51; 52; 53]. These shortcomings can be partially addressed by more sophisticated approaches such as counting multiplicity or using belief propagation [54; 55; 56; 57], but often at the cost of added computational complexity. Other examples of decoder algorithms are based on decoding from small to large scale, such as cellular-automata [58; 59; 60], renormalization group [61], or union-find [49; 62]. The latter, in particular, is very efficient, but at the cost of sub-optimal performance. ### Related work A number of different deep learning based decoder algorithms have also been formulated, based on supervised learning, reinforcement learning, and genetic neural algorithms [63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83]. Focusing on the works on the surface code and based on supervised learning, these can roughly be separated according to whether they primarily consider perfect stabilizers [63; 64; 65; 72; 77; 78; 81], Figure 2: Schematic of the GNN decoder. It takes as input an annotated detector graph, c.f. Figure 1. Several layers of graph convolutional operations (following Eqn. 3) transform each node feature vector. (The empty circle shows the message passing to this particular node from neighboring nodes on the graph.) Next, a mean-pooling operation averages all the node feature vectors into a single graph embedding, which is independent of the size of the graph. Finally, the latter is passed through two separate dense networks to give two binary class predictors, corresponding to the logical \(X\) and \(Z\) labels, respectively. (For details see Appendix A.) or include measurement noise or circuit-level noise [66, 69, 80, 82, 83], and whether they are purely data-driven [63, 65, 66, 69, 81, 83] or involve some auxiliary, model-informed, algorithm or multi-step reduction of decoding [77, 78, 80, 64, 82]. The present work is in the category, realistic (circuit-level) noise, and purely data-driven. It is distinguished primarily in that we 1) Use graph neural networks and graph structured data, and 2) Train and test the neural network decoder on real experimental data. 
In addition, as in several of the earlier works [66, 68, 69], we emphasize the use of a model-free, purely data-driven, approach. By using experimental stabilizer data, the approximations of traditional model-based decoder algorithms can be avoided. The fact that the real error channels at the qubit level may be asymmetric, due to amplitude damping, have long-range correlations, or involve leakage outside the computational space, is intrinsic to the data. This is also in contrast to other data-driven approaches [84, 85, 86, 21, 84] that use stabilizer data to learn the detailed Pauli channels, optimize a decoder algorithm through the edge weights of a matching decoder, or the individual qubit and measure error rates of a tensor network based decoder, as these are all constrained by a specific error model. ### Repetition code and surface code The decoder formalism that we present in this work can be applied to any stabilizer code, requiring only a dataset of measured (or simulated) stabilizers, together with the logical outcomes. Nevertheless, to keep to the core issues of training and performance we consider only two standard scalable stabilizer codes: the repetition code and the surface code. The repetition code is defined on a one-dimensional grid of qubits with neighboring pair-wise \(Z_{i}\otimes Z_{i+1}\) stabilizers. In the Pauli frame with all \(+1\) stabilizers, the code works are \(|0\rangle_{L}=|0\rangle^{\otimes n}\) and \(|1\rangle_{L}=|1\rangle^{\otimes n}\). Consider a logical qubit state \(|\psi\rangle=\alpha|0\rangle_{L}+\beta|1\rangle_{L}\), with complex amplitudes \(|\alpha|^{2}+|\beta|^{2}=1\). The logical bit-flip operator is given by \(X_{L}=\bigotimes_{i}X_{i}\), which sets the code distance \(d_{X}=n\). Assuming perfect stabilizer measurements and independent and identically distributed single qubit bit-flip error probabilities, decoding the repetition code is trivial. For any set of stabilizer violations, i.e., odd parity outcomes, there are only two consistent configurations of errors that map to each other by acting with \(X_{L}\). A decoder (maximum-likelihood in the case of this simple error model) would suggest the one with fewer errors. The repetition code, set up to detect bit-flip errors, is insensitive to phase flip errors, as is clear from the fact that a phase-flip (\(Z\)) error on a single qubit also gives a phase-flip error (\(\beta\rightarrow-\beta\)) on the logical qubit, corresponding to a code distance \(d_{Z}=1\). To detect and correct both bit- and phase-flip errors we need a more potent code, the most promising of which may be the surface code. We consider the qubit-efficient "rotated" surface code [87, 88, 89] (see Figure 1), constructed from weight-4, \(Z^{\otimes 4}\) and \(X^{\otimes 4}\), stabilizers (formally stabilizer generators), with complementary weight-2 stabilizers on the boundary. On a square grid of \(d\times d\) data qubits, the \(d^{2}-1\) stabilizers give one logical qubit. We define the logical operator \(X_{L}\) as a string of \(X\)'s on the southwest edge, and a string of \(Z\)'s on the northwest edge, as shown in Figure 1. These are the two (unique up to products of stabilizers) lowest weight operators that commute with the stabilizer group, without being part of said group. Stabilizer measurements are performed by means of entangling circuits between the data qubits and an ancilla qubit. 
Assuming hardware with one ancilla qubit per stabilizer, and the appropriate gate schedule, these can all be measured simultaneously, corresponding to one round of stabilizer measurements. ### Memory experiments on the surface code To train and test our decoder we consider a real or simulated experiment, illustrated schematically in Figure 1, to benchmark a surface code as a quantum memory. The following procedure can be used for any stabilizer code: * Initialize the individual qubits: Data qubits in a fixed or random configuration in the computational basis \(|0\rangle\) and \(|1\rangle\). Ancilla qubits in \(|0\rangle\). The initial data qubit configuration is viewed as a \(0\)'th round of measurements that initialize the \(Z\)-stabilizers. This also corresponds to an effective measurement \(\langle Z_{L}\rangle_{t=0}=\prod_{i\in Z_{L}}Z_{i}=\pm 1\). (Northwest row of qubits in Figure 1.) * A first round, \(t=1\), of actual stabilizer measurements is performed. The Z-stabilizers are determined, up to bit-flip errors, by the \(0\)'th round. Hence, the difference between the two provides the first round of Z-detectors. The X-stabilizers have randomized outcomes, projecting to an even or odd parity state over the four (or two) qubits in the Hadamard (\(|+\rangle\), \(|-\rangle\)) basis. The values of these stabilizers form the reference for subsequent error detecting measurements of the X-stabilizers. Ancilla qubits are reset to \(0\) after this and subsequent rounds. * Subsequent rounds \(t=2,...,d_{t}\) of Z and X stabilizer measurements provide the input for corresponding detectors based on changes from the previous round. * Finally, data qubits are measured individually in the Z-basis, which provides a final measurement, \(\langle Z_{L}\rangle_{t=d_{t}+1}\). These also provide a final round of Z-stabilizers, which, since they are provided by the actual qubit outcomes rather than by measuring an ancilla, are perfect stabilizers by definition. The outlined experiment provides a single data point \(D=(\{V_{Z}\},\{V_{X}\},\lambda_{Z})\) consisting of a set of Z-detectors \(\{V_{Z}\}\) over \(d_{t}+1\) cycles and a set of X-detectors \(\{V_{X}\}\) over \(d_{t}-1\) cycles. In addition to the stabilizer type, each detector is tagged with its space-time coordinate, \((x,y,t)\), with \(0\leq x,y\leq d\) and \(1\leq t\leq d_{t}+1\) (\(d_{t}-1\)) for \(Z\) (\(X\)) detectors, respectively. The logical label is given by \[\lambda_{Z}=\frac{1}{2}|\langle Z_{L}\rangle_{t=0}-\langle Z_{L}\rangle_{t=d_{t}+1}|\in\{0,1\}\,. \tag{2}\] The probability of \(\lambda_{Z}=1\) is, according to Eqn. 1, given by \(P_{X}+P_{Y}\), and the probability of \(\lambda_{Z}=0\) by \(P_{I}+P_{Z}\) (with \(P_{I}=1-P\)), corresponding to a logical bit-flip or not. What has been described is a "memory-Z" experiment [31], i.e., one in which we detect logical bit-flips. Qubits are initialized in the computational basis \(|0\rangle\) and \(|1\rangle\). A "memory-X" experiment prepares the qubits in the Hadamard basis, with the roles of X- and Z-stabilizers reversed. Physically, in the lab, one cannot do both experiments in the same run, as \(Z_{L}\) and \(X_{L}\) do not commute. This also implies that each data point only has one of the two binary labels, \(\lambda_{Z}\) or \(\lambda_{X}\), even though there is information in the detectors about both labels.
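The construction of detectors and labels from raw stabilizer outcomes is mechanical; the following NumPy sketch (an assumed reconstruction, with hypothetical array conventions) turns a memory-Z record into detector coordinates and the label of Eq. (2):

```python
import numpy as np

def detectors_and_label(z_rounds, ZL_initial, ZL_final):
    """z_rounds: (d_t + 2, n_Z) integer array of 0/1 Z-stabilizer values, where
    row 0 is the initialization round and the last row is reconstructed from
    the final data-qubit readout; ZL_initial, ZL_final are the +/-1 values of <Z_L>."""
    detectors = z_rounds[1:] ^ z_rounds[:-1]   # change w.r.t. the previous round
    coords = np.argwhere(detectors)            # (round t, stabilizer index) pairs
    label = int(ZL_initial != ZL_final)        # Eq. (2): logical bit flip or not
    return coords, label
```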
The neural network will be constructed to predict both labels for a given set of detectors, which implies that the learning framework is effectively that of semi-supervised learning, with partially labeled data. Thus, in contrast to a matching based decoder, which breaks the surface code detectors into two independent sets with a corresponding graph for each, the GNN decoder can make use of the complete information. This, in addition to the fact that it is not constrained by the limitations of the matching algorithm itself, provides a possible advantage in terms of prediction accuracy. Additionally, some fraction of the data is incorrectly labeled. This follows from the fact that measured labels will not always be the most likely. In fact, the fraction of incorrectly labeled data corresponds to the logical failure rate that an optimal decoder would provide. For the data that we use, this fraction ranges from marginal (see Figure 4) to quite substantial (see Figure 6), depending in particular on the number of cycles that are decoded, as the logical failure rate grows exponentially with the number of cycles. We have also assumed that there is no post-processing to remove leakage. Assuming there is some mechanism of relaxation back to the computational qubit subspace, including the last round of measurements, leakage events will be handled automatically by the neural network decoder, based on the signature they leave in the detector data. ## III Graph neural network decoder A graph neural network (GNN) can be viewed as a trainable message passing algorithm, where information is passed between the nodes through the edges of the graph and processed through a neural network [90; 91; 92]. The input is data in the form of a graph \(G=(V,E)\), with a set of nodes \(V=\{i\,|\,i=1,..,N\}\) and edges \(E=\{(i,j)\,|\,i,j\in V,\,i\neq j\}\), which is annotated by \(n\)-dimensional node feature vectors \(\vec{X}_{i}\) and edge weights (or vectors) \(e_{ij}\). The basic building blocks are the message passing graph convolutional layers, which take a graph as input and output an isomorphic graph with transformed feature vectors. Specifically, in this work we have used a standard graph convolution [93] where for each node the \(d_{in}\)-dimensional feature vector \(\vec{X}_{i}\) is transformed to a new feature vector \(\vec{X}^{\prime}_{i}\) with dimension \(d_{out}\) according to \[\vec{X}^{\prime}_{i}=\sigma\left(W_{1}\vec{X}_{i}+\sum_{j}e_{ij}W_{2}\vec{X}_{j}\right)\,, \tag{3}\] where non-existent edges are indicated by \(e_{ij}=0\). Here \(W_{1}\) and \(W_{2}\) are \(d_{out}\times d_{in}\) dimensional trainable weight matrices and \(\sigma\) is an element-wise non-linear activation function which includes a \(d_{out}\)-dimensional trainable bias vector. For the task at hand, which is graph classification, graph convolutions are followed by a pooling layer that contracts the information to a single vector, a graph embedding, which is independent of the dimension of the graph. We use a simple mean-pooling layer \(\vec{X}^{\prime}=N^{-1}\sum_{i}\vec{X}_{i}\). For the classification we use two structurally identical, but independent, standard multi-layer feedforward networks that each end with a single node with sigmoid activation that acts as a binary classifier. The weights and biases of the complete network are trained using stochastic gradient descent with a loss function which is a sum of the binary cross-entropy losses of the network outputs with respect to the binary labels.
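A minimal version of this architecture can be written directly with PyTorch Geometric, whose `GraphConv` layer implements the update of Eqn. 3; the sketch below uses illustrative layer widths (not those of Table 1) and a single dense layer per head:

```python
import torch
from torch_geometric.nn import GraphConv, global_mean_pool

class GNNDecoder(torch.nn.Module):
    def __init__(self, num_features=5, hidden=128):
        super().__init__()
        self.conv1 = GraphConv(num_features, hidden)
        self.conv2 = GraphConv(hidden, hidden)
        self.head_x = torch.nn.Linear(hidden, 1)    # binary classifier for lambda_X
        self.head_z = torch.nn.Linear(hidden, 1)    # binary classifier for lambda_Z

    def forward(self, x, edge_index, edge_weight, batch):
        x = torch.relu(self.conv1(x, edge_index, edge_weight))
        x = torch.relu(self.conv2(x, edge_index, edge_weight))
        g = global_mean_pool(x, batch)              # one embedding per graph
        return torch.sigmoid(self.head_x(g)), torch.sigmoid(self.head_z(g))
```

During training, the binary cross-entropy loss would be evaluated only on the head whose label (\(\lambda_{Z}\) or \(\lambda_{X}\)) is available for a given graph.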
Since the experimental data, or simulated experimental data, only has one of the two binary labels (\(\lambda_{Z}\), \(\lambda_{X}\)) for each complete detector graph, gradients are only calculated for the provided label. To avoid overfitting to the training data we employ two different approaches depending on the amount of available data. When using experimental data from [28], we use a two-way split into a training set and a test set. To avoid diminishing the training data further, we do not use a validation set, and instead train for a fixed number of epochs. We observe (see Figure 6) that the test accuracy does not change significantly over a large number of epochs, even though the network continues to overfit. For the case with simulated experimental data (Figure 4), we avoid overfitting by replacing a substantial fraction (25%) of the data with new data, generated within the training cycle, after each epoch of training. This effectively emulates a much larger dataset, while keeping within the memory limits set by the available hardware. Since the training set is effectively unbounded, the number of unique detector graphs scales as \(2^{d_{t}d^{2}}\) and the network cannot overfit. Also here, a fixed test set is used to gauge the performance. The GNN training and testing is implemented in PyTorch Geometric [94], simulated data is generated using Stim [31], and the MWPM decoding results use Pymatching [32]. The Adam optimizer is used for stochastic gradient descent, with manual learning rate decrements when the training accuracy has leveled out. Details on the training procedure can be found in Appendix A. Several other graph layers were experimented with, including graph attention for both convolutions [95] and pooling [96; 97], as well as top\({}_{k}\) pooling [98; 99]. These were found not to improve results. The width and depth of the final network were arrived at after several rounds of iterations. Naturally, we expect that larger code distances, i.e., larger graphs, will require scaling up the network size. (See also Sec. IV.4.) ### Data structure As discussed previously, the data is in the form \(D=(\{V_{Z}\},\{V_{X}\},\lambda_{Z/X})\), consisting of a set of detectors \(V_{Z/X}\), specified by a space-time coordinate, together with a binary label. Based on this we construct a single graph. Each node corresponds to a detector event, and is annotated by a 5-vector (for the surface code with circuit-level noise) \(\vec{X}=(b_{1},b_{2},x,y,t)\) containing the space-time coordinate \((x,y,t)\) and two exclusive binary (one-hot encoded) labels with \(\vec{b}=(1,0)\) for an X-stabilizer and \(\vec{b}=(0,1)\) for a Z-stabilizer. (The encoding of the type of stabilizer may be superfluous, as it can be deduced from the coordinate.) We initially consider a complete graph, with edge weights given by the inverse squared supremum norm of the separation between the vertices, \(e_{ij}=(\max\{|x_{i}-x_{j}|,|y_{i}-y_{j}|,|t_{i}-t_{j}|\})^{-2}\). This form of the edge weights is motivated by a naive picture of the minimum number of single data qubit or measurement errors that would cause a pair of detectors. The main purpose of the weights is to give some measure of locality, in order to prune the graph. Smaller weight edges are removed, leaving only a fixed maximal node degree, which for the results presented in this work was capped at six.
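The graph construction itself reduces to a pairwise distance computation and a per-node top-\(k\) selection; the following NumPy sketch (our reconstruction of the described procedure, not the paper's code) builds the pruned edge list:

```python
import numpy as np

def build_graph(coords, max_degree=6):
    """coords: (N, 3) array of detector coordinates (x, y, t)."""
    coords = np.asarray(coords, dtype=float)
    diff = np.abs(coords[:, None, :] - coords[None, :, :])
    sup = diff.max(axis=-1)                        # pairwise supremum norm
    with np.errstate(divide="ignore"):
        w = 1.0 / sup**2                           # inverse squared sup norm
    np.fill_diagonal(w, 0.0)                       # no self-edges
    edges, weights = [], []
    for i in range(len(coords)):
        for j in np.argsort(w[i])[::-1][:max_degree]:   # keep strongest edges
            edges.append((i, j))
            weights.append(w[i, j])
    return np.array(edges).T, np.array(weights)
```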
The main focus is on using simulated or real experimental data, presented in IV.1 and IV.2, respectively. We also present some results on the surface code with perfect stabilizers, IV.3, where we are able to train the network for larger code distances. ### Surface code with circuit-level noise We use Stim to generate data with circuit-level noise. Simulated circuits use standard settings for the surface code, containing Hadamard single qubit gates, controlled-Z (CZ) entangling gates, and measure and reset operations. All of these operations, and the idling, contain Pauli noise, scaled by an overall error rate \(p\). (See Appendix B.) Datasets of several million graphs are generated, with partial replacement after each epoch of training to avoid overfitting. Figure 3 shows test results evaluated at \(p=1.0\cdot 10^{-3}\) for decoders trained with data using an even mix of error rates \(p=\{1.0,2.0,3.0,4.0,5.0\}\cdot 10^{-3}\) and memory-Z experiments. The logical failure rate is thus approximately 50% of the true failure rate (up to correlations between failures in \(X_{L}\) and \(Z_{L}\)), but consistent with the type of data that would be experimentally accessible. (We have also tried training and testing with a mix of memory-Z and memory-X experiments, which works as well but takes longer to train to the same accuracy.) The MWPM decoder uses the information provided by the simulated error model to optimize edge weights on the decoding graph, whereas the GNN decoder uses only the data provided by the simulated measurements. Despite this, we find that with sufficient training the GNN decoder outperforms the matching decoder. A different network is trained for each code distance \(d\) and for each number of rounds of stabilizer measurements \(d_{t}\). Figure 4 shows a representative plot of the training and validation accuracy, evaluated on the mixed error rate dataset. With an active (in-memory) dataset containing \(5\cdot 10^{6}\) data points, and given that 25% is replaced in each epoch, 1000 epochs correspond to a total of \(1.25\cdot 10^{9}\) data points. ### Repetition code using experimental data Having trained GNN based decoders on simulated experimental data in the previous section, we now turn to real experimental data. We use the public data provided together with [28]. This contains data on both the \(d=3\) and \(d=5\) surface codes as well as the \(d=25\) bit-error correcting repetition code. All datasets are of the form described in II.3, thus readily transferred to the annotated and labeled graphs used to train the GNN, as described in III.1. The datasets contain approximately \(10^{6}\) data points for the different codes, code distances, and varying number of stabilizer rounds. Our attempts to train a GNN on the data provided for the various implementations of the surface code were generally unsuccessful. While it gave good results on the training data, the logical failure rate on the test set was poor. Given the fact that on the order of \(10^{9}\) data points were used for the simulated circuit-level noise on the surface code (IV.1), it is not surprising that the significantly smaller dataset turned out to be insufficient. The network cannot achieve high accuracy without overfitting to the training data given the relatively small dataset. For the repetition code, the data which is provided is of a single type, for a \(d=25\) code measured over \(d_{t}=50\) rounds.
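The rolling-dataset trick can be sketched as follows, where `simulate_graphs` and `train_one_epoch` are hypothetical stand-ins for the Stim data generator and one epoch of gradient descent:

```python
def train_with_replacement(model, simulate_graphs, size=5_000_000,
                           frac=0.25, epochs=1000):
    """Replace the oldest fraction of the dataset with fresh samples each epoch."""
    data = simulate_graphs(size)
    k = int(frac * size)
    for _ in range(epochs):
        train_one_epoch(model, data)
        data = data[k:] + simulate_graphs(k)   # emulates an unbounded dataset
```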
Each round thus contains the measurement of \(24\) ancilla qubits for the \(ZZ\) stabilizers of neighboring data qubit pairs along a one-dimensional path. As done in [28] this data can be split up into data for smaller repetition codes, by simply restricting to stabilizers over a subset of \(d\) subsequent data qubits. In this way the dataset can be increased by a factor \(25-(d-1)\), and used to train a single GNN for each code distance. It should be noted that this is suboptimal compared to generating the same amount of data on a single distance-\(d\) device, as variations in the performance of the constituent qubits and gates will be averaged out in the dataset. Nevertheless, using this scheme we successfully trained GNN decoders for short distance repetition codes, with test accuracies shown in Figure 5. Results for (what we refer to as) "Device-optimized MWPM" are taken from [28]. The GNN decoder performs almost on par with this sophisticated matching decoder for \(d=3\). As expected, the relative performance deteriorates with increased code distance. We expect that we would need more training data for larger code distance, but instead we have access to less. As the comparison with the matching decoder that uses a device-specific error model may be biased compared to using training data from different devices, as mentioned above, we also give results for an "uninformed" matching decoder with edge weights based on the 1-norm distance between space-time coordinates. It may also be noted that using MWPM corresponds to a near optimal decoder for the repetition code, at least for the case of phenomenological measurement noise where it is equivalent to the bit-flip error surface code. This is in contrast to the surface code, for which MWPM is suboptimal, even in the case of perfect stabilizers. Thus, outperforming MWPM for the repetition code may be more challenging than for the surface code. ### Surface code with perfect stabilizers To complement the results on simulated and real experimental data we have also trained the GNN decoder on the surface code with perfect stabilizers under depolarizing noise. The same network (see Appendix A) is used as for circuit-level noise, but trained at \(p=[0.01,0.05,0.1,0.15]\). Results up to code distance \(d=15\) are shown in Figure 7 and found to significantly outperform MWPM. We also compare to a tensor network based [100] maximum likelihood decoder (MLD), showing that for code distance \(d=5\) the GNN decoder has converged to the level of being an approximate MLD. We do not attempt to derive any threshold for the GNN decoder. Given a sufficiently expressive network we expect that the decoder would eventually converge to a maximum likelihood decoder, but in practice the accuracy is limited by the training time. It gets progressively more difficult to converge the training for larger code distances, which means that any threshold estimate would be a function of the training time versus code distance. Figure 4: GNN training and test accuracy versus number of training epochs for circuit-level noise, comparing training using a fixed dataset to replacing \(25\%\) of the data after each epoch. We see that the latter setup is crucial to avoid overfitting, allowing the test accuracy to closely follow the training accuracy. Code distance \(d=5\), with \(d_{t}=5\) cycles, dataset size \(5\cdot 10^{6}\) samples using an error rate randomly selected from \(p=[0.001,0.002,...,0.005]\). The test set is a fixed dataset of the same type containing \(5\cdot 10^{4}\) data points. (The sharp kink at \(250\) epochs corresponds to a decrement of the learning rate.)
Figure 3: Decoding simulated experimental data [31], with circuit-level noise, on the rotated surface code with code distance \(d\). Logical failure rate versus number of rounds of stabilizer measurements \(d_{t}\). Comparing the GNN decoder trained on the detector data with a MWPM decoder [32] that has full information of the data-generating error model. Each data point is evaluated over \(10^{8}\) samples (\(10^{7}\) for \(d<7\)), memory-Z, at a reference error rate of \(p=1\cdot 10^{-3}\). In fact, in principle, since the threshold is a \(d\to\infty\) quantity, we would not expect that a supervised learning algorithm can give a proper threshold if trained separately for each code distance. Nevertheless, with GNNs it is quite natural to use the same network to decode any distance code, as the data objects (detector graphs) have the same structure. We have investigated training the same network for different code distances and different numbers of rounds. This shows some promise, but so far does not achieve accuracy levels that can match MWPM. ### Scalability We are limited to relatively small codes in this work. For the repetition code using experimental data, it is quite clear that the main limitation to scaling up the code distance is the size of the available dataset. For the surface code using simulated data it is challenging to increase the code distance while still surpassing MWPM. As the logical failure rates decrease exponentially with code distance, the test accuracy of the supervised training needs to follow. One way to counter this is to increase the number of stabilizer cycles, \(d_{t}\), but this also increases the graph size, making the training more challenging from the perspective of increased memory requirements as well as the increased complexity of the data. Nevertheless, it is interesting to explore the intrinsic scalability of the algorithm, by quantifying how the decoding time using a fixed size GNN scales with the code size. Here we present results on the decoding time per syndrome for the surface code, as a function of code volume \(d^{2}d_{t}\), at fixed error rate. The network architecture is the same as used for all the results in this paper, as described in Appendix A. In line with expectations, the GNN inference scales approximately linearly with the code volume, i.e., average graph size, \(T\sim d^{2}d_{t}\) (with \(d_{t}=1\) for perfect stabilizers). Figure 5: Decoding experimental data [28] on the repetition code with code distance \(d\), over 50 rounds of stabilizer measurements. Comparing the GNN decoder, using a dataset containing \((26-d)\cdot 5\cdot 10^{7}\) graphs, with a MWPM decoder with "device-optimized" edge weights ([28]) and a simple model-free MWPM decoder with 1-norm edge weights. The training-test split of the dataset is 99 to 1, and the logical failure rate is mapped to an error rate per round. Results for two different random training-test splits are shown. Figure 6: Training curves for the GNN decoder on the repetition code, following Figure 5. Each epoch trains through the whole training set, which eventually leads to overfitting, where the training accuracy starts to significantly surpass the test accuracy. To maximize the amount of training data, no validation set was used. No early stopping was implemented in order to avoid optimizing results to the test set. Figure 7: Decoding the rotated surface code with perfect stabilizers and code distance \(d\). Logical failure rate versus error rate \(p\), for depolarizing noise, evaluated over failures with respect to both \(X_{L}\) (\(\lambda_{X}\)) and \(Z_{L}\) (\(\lambda_{Z}\)). Comparing the GNN decoder with a MWPM decoder that has full information of the data-generating error model. Each data point is evaluated over \(10^{5}\) data points. Dashed lines show the accuracy using a matrix product state (MPS) decoder [100] at code distances 5 and 7, evaluated over \(10^{4}\) data points.
The number of matrix operations per graph convolutional layer, following Equation 3, is proportional to the number of nodes times the (capped) number of edges per node. The number of layers is fixed, multiplying this by a constant factor. The feature vector pooling is proportional to the number of nodes, whereas the subsequent dense network classifiers are independent of the graph size. We find that inference scales slightly better than the highly optimized matching decoder. However, several caveats are in order. 1) The size of the GNN is fixed. Larger code sizes may eventually require scaled-up networks, unless the error rate is scaled down accordingly. 2) The network has not been trained on code distances larger than \(d=15\) (2D). It is only a test of the decoding time, not the accuracy. 3) For GNN inference, the data is batched in order to eliminate a fixed loading time to the GPU. Treating batched data doesn't seem viable for real-time decoding. Similarly, our graph construction algorithm is slower, scaling quadratically with code volume, and this time has also been removed to get the decoding time per graph. These are both issues that most likely can be improved significantly with more hardware-efficient algorithms and, in the longer term, special-purpose hardware. ## V Conclusion and Outlook In this paper we develop a model-free, data-driven, approach to decoding quantum error correcting stabilizer codes, using graph neural networks. A real or simulated memory experiment is represented as a single detector graph, with annotated nodes corresponding to the type of stabilizer and its space-time coordinate, and labeled by the measured logical operation on the encoded qubit. The maximal node degree is capped by cropping edges between distant nodes. The data is used to train a convolutional GNN for graph classification, with classes corresponding to logical Pauli operations, and used for decoding. We show that we can use real and simulated experimental data, for the repetition code and surface code respectively, to train a decoder with logical failure rates on par with minimum weight perfect matching, despite the latter having detailed information about the underlying real or simulated error channels. The use of a graph structure provides an efficient way to store and process the syndrome data. Training the GNN requires significant amounts of data, but as shown in the case of simulated experiments, data can be produced in parallel with training. Network inference, i.e., using the network as a decoder, is fast, scaling approximately linearly with the space-time dimension of the code. As an extension of this work there are several interesting possibilities to explore. One example is to use a GNN for edge weight generation within a hybrid algorithm with a matching decoder (similarly to [21]). This would depart from the pure data-driven approach pursued in this paper, with peak performance limited by the matching decoder, but with the potential advantage of requiring less data to train.
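The regression in Figure 8 amounts to a straight-line fit in log-log space; a short sketch with illustrative numbers (not the measured timings):

```python
import numpy as np

volume = np.array([1e3, 1e4, 1e5, 1e6])          # code volumes d^2 * d_t
times = np.array([2e-4, 1.5e-3, 1.2e-2, 9e-2])   # decoding times per syndrome (s)
alpha, logC = np.polyfit(np.log(volume), np.log(times), 1)
print(f"T ~ {np.exp(logC):.2e} * V^{alpha:.2f}")  # ansatz T = C * V**alpha
```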
An alternative to this, to potentially improve performance and lower data requirements, is to use device-specific input to the edge weights, or encode soft information on measurement fidelities into edge or node features. Going beyond the most standard codes considered in this paper, we expect that any error correcting code for which labeled detector data can be generated can also be decoded with a GNN. This includes Clifford-deformed stabilizer codes [101; 102; 103; 104; 105], color codes [106; 107] or hexagonal stabilizer codes [108; 109; 110; 111], where syndrome defects are not created in pairs, but potentially also Floquet-type codes [112; 113]. In addition, heterogeneous and correlated noise models [114; 115] would also be interesting to explore, where in particular the latter is difficult to handle with most standard decoders. The software code for the project can be found at [116]. ###### Acknowledgements. We acknowledge financial support from the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT). Computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE), partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973. We thank Viktor Rehnberg and Hampus Linander for technical support. Figure 8: Scaling of average decoding time per syndrome versus code volume \(d^{2}d_{t}\) for GNN and MWPM. Perfect stabilizer measurements sampled at \(p=0.05\) (labeled 2D) and circuit-level noise sampled at \(p=10^{-3}\) (labeled 3D). Dotted lines show a regression for \(d\geq 31\) and \(d\geq 11\) for 2D and 3D, respectively, according to the ansatz \(T=C\cdot(d^{2}\cdot d_{t})^{\alpha}\), with the GNN showing sub-linear scaling. The average graph construction time per sample is shown in green. ## Appendix A GNN architecture and training Figure 9 displays the architecture of the GNN decoder. The node features are sent through 7 subsequent graph convolutional layers (Equation 3). The node features are passed through a rectified linear unit (ReLU) activation function (which zeroes negative values) after each layer. After the graph convolutional layers, the node features from all nodes are pooled into one high-dimensional vector by computing the mean across all nodes. This vector is then cloned and sent to two identical fully connected neural networks. Both heads consist of 4 dense layers which map the pooled node feature vector down to one real-valued number, which is output in the range 0 to 1 through a sigmoid function. The input and output dimensions \(d_{in}\) and \(d_{out}\) of the graph convolutional and dense layers can be found in Table 1. Networks are trained on NVIDIA Tesla A100 HGX GPUs, using the Python multiprocessing module to generate data in parallel on a CPU. For gradient descent, samples are grouped into batches of size \(10^{3}\). The learning rate is set to \(10^{-4}\) and decreased manually to \(10^{-5}\) whenever the validation accuracy reached a plateau. An example of a training history for \(d=5\) and varying number of surface code cycles \(d_{t}\) is shown in Figure 10. For this example, with \(d_{t}=5\), 100 epochs of training takes approximately 10 hours. The code is available at [116].
## Appendix B Stabilizer circuits and error model for circuit-level noise Quantum circuits for weight-four \(Z\)- (\(X\)-) stabilizers of the surface code are displayed in Figure 11 (12). The gate set used for the stabilizer measurements consists of the Hadamard gate (\(H\)) and the \(CNOT\) gate. Under circuit-level noise, a single-qubit depolarizing noise gate \(D_{p}\) (which applies a gate \(\sigma_{i},i\in\{X,Y,Z\}\), where each of the gates is applied with probability \(p/3\), and \(I\) with probability \(1-p\)) acts on the data qubits before each stabilizer measurement cycle and on each target qubit after single-qubit gates. Two-qubit depolarizing noise gates (which apply a gate \(\sigma_{i}\sigma_{j},i,j\in\{I,X,Y,Z\}\), where \(II\) is applied with probability \(1-p\), and the rest with probability \(p/15\)) act on the two qubits involved after every two-qubit gate. Furthermore, each qubit suffers from reset and measurement errors with probability \(p\), displayed by operators \(X_{p}\) when measuring and resetting in the computational basis. \begin{table} \begin{tabular}{|c|c|c|} \hline \hline Layer & \(d_{in}\) & \(d_{out}\) \\ \hline GraphConv\({}_{1}\) & 5 & 32 \\ GraphConv\({}_{2}\) & 32 & 128 \\ GraphConv\({}_{3}\) & 128 & 256 \\ GraphConv\({}_{4}\) & 256 & 512 \\ GraphConv\({}_{5}\) & 512 & 512 \\ GraphConv\({}_{6}\) & 512 & 256 \\ GraphConv\({}_{7}\) & 256 & 256 \\ Dense\({}_{1}\) X & 256 & 128 \\ Dense\({}_{2}\) X & 128 & 128 \\ Dense\({}_{3}\) X & 128 & 32 \\ Dense\({}_{4}\) X & 32 & 1 \\ Dense\({}_{1}\) Z & 256 & 128 \\ Dense\({}_{2}\) Z & 128 & 128 \\ Dense\({}_{3}\) Z & 128 & 32 \\ Dense\({}_{4}\) Z & 32 & 1 \\ \hline \end{tabular} \end{table} Table 1: Overview of the input and output dimensions of the graph convolutional and dense layers of the GNN decoder. Figure 10: Example of training and validation accuracy versus the number of epochs of training for code distance 5. One epoch corresponds to training with a dataset containing \(5\cdot 10^{6}\) detector graphs of different error rates, as in Figure 4. The test set is a fixed dataset of the same type containing \(5\cdot 10^{4}\) data points. After each epoch the oldest 25% of the dataset is replaced with new data to avoid overfitting to the training data. The kink at 300 epochs corresponds to a decrement of the learning rate, whereas the spikes are due to network fluctuations. Figure 9: Schematic of the GNN architecture, with details in Table 1. The same architecture is used for all the results, except that for the repetition code there is only one output head. Also the input dimension is two (2D space-time coordinate) for the repetition code and four (two types of stabilizers, and 2D spatial coordinate) for the surface code with perfect stabilizers.
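Data of the type used in this appendix can be generated in a few lines with Stim's built-in surface code circuits; the sketch below assumes a uniform error rate \(p\) across Stim's standard circuit-level noise parameters, which approximates (but may not exactly match) the paper's noise model:

```python
import stim

p = 1e-3
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=5,
    rounds=5,
    after_clifford_depolarization=p,
    before_round_data_depolarization=p,
    before_measure_flip_probability=p,
    after_reset_flip_probability=p,
)
sampler = circuit.compile_detector_sampler()
detectors, observables = sampler.sample(10_000, separate_observables=True)
# `detectors` gives the node events per shot; `observables` gives the logical labels.
```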
2310.01842
SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering
The intersection of vision and language is of major interest due to the increased focus on seamless integration between recognition and reasoning. Scene graphs (SGs) have emerged as a useful tool for multimodal image analysis, showing impressive performance in tasks such as Visual Question Answering (VQA). In this work, we demonstrate that despite the effectiveness of scene graphs in VQA tasks, current methods that utilize idealized annotated scene graphs struggle to generalize when using predicted scene graphs extracted from images. To address this issue, we introduce the SelfGraphVQA framework. Our approach extracts a scene graph from an input image using a pre-trained scene graph generator and employs semantically-preserving augmentation with self-supervised techniques. This method improves the utilization of graph representations in VQA tasks by circumventing the need for costly and potentially biased annotated data. By creating alternative views of the extracted graphs through image augmentations, we can learn joint embeddings by optimizing the informational content in their representations using an un-normalized contrastive approach. As we work with SGs, we experiment with three distinct maximization strategies: node-wise, graph-wise, and permutation-equivariant regularization. We empirically showcase the effectiveness of the extracted scene graph for VQA and demonstrate that these approaches enhance overall performance by highlighting the significance of visual information. This offers a more practical solution for VQA tasks that rely on SGs for complex reasoning questions.
Bruno Souza, Marius Aasan, Helio Pedrini, Adín Ramírez Rivera
2023-10-03T07:14:53Z
http://arxiv.org/abs/2310.01842v1
# SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering

###### Abstract

The intersection of vision and language is of major interest due to the increased focus on seamless integration between recognition and reasoning. Scene graphs (SGs) have emerged as a useful tool for multimodal image analysis, showing impressive performance in tasks such as Visual Question Answering (VQA). In this work, we demonstrate that despite the effectiveness of scene graphs in VQA tasks, current methods that utilize idealized annotated scene graphs struggle to generalize when using predicted scene graphs extracted from images. To address this issue, we introduce the SelfGraphVQA framework. Our approach extracts a scene graph from an input image using a pre-trained scene graph generator and employs semantically-preserving augmentation with self-supervised techniques. This method improves the utilization of graph representations in VQA tasks by circumventing the need for costly and potentially biased annotated data. By creating alternative views of the extracted graphs through image augmentations, we can learn joint embeddings by optimizing the informational content in their representations using an un-normalized contrastive approach. As we work with SGs, we experiment with three distinct maximization strategies: node-wise, graph-wise, and permutation-equivariant regularization. We empirically showcase the effectiveness of the extracted scene graph for VQA and demonstrate that these approaches enhance overall performance by highlighting the significance of visual information. This offers a more practical solution for VQA tasks that rely on SGs for complex reasoning questions.

## 1 Introduction

The successful execution of Visual Question Answering (VQA) relies on a comprehensive understanding of the scene, including spatial interrelationships and reasoning inference capabilities [1, 14]. Incorporating scene graph (SG) representations in SG-VQA tasks has shown promising outcomes [13, 16, 25, 32, 18], providing concise representations of complex spatial and relational information. Earlier investigations into SG-VQA demonstrated that successful models primarily rely on the utilization of manually annotated scene graphs for training [20, 21, 25], resulting in remarkably high levels of accuracy on the GQA dataset [14], surpassing human performance by a significant margin (see Table 1). Despite the promising results, we argue that utilizing pre-annotated SGs in VQA is impractical in the real world due to its labor-intensive nature. Also, an image permits a wide range of semantically corresponding SGs [12], and manual annotation could potentially introduce question-related biases, giving rise to concerns about generalizability [2]. These issues may limit the model's ability to solve real-world problems beyond the dataset [23]. This is evident in a significant decline in accuracy, of approximately 60%, when models are confronted with automatically generated SGs. Additionally, studies assert that the main limitation in generalizing stems largely from linguistic correlations [2, 17]. In this study, we address these challenges by extracting an SG from a given image using an unbiased, off-the-shelf scene graph generator [16], with the aim of removing any potential information leakage, as illustrated in Fig. 1. 
Furthermore, our method employs semantically preserving augmentation, integrated with an un-normalized contrastive framework, to further mitigate potential linguistic biases and to enhance the visual cues translated as SGs for VQA. We refer to it as the _SelfGraphVQA framework_, cf. Fig. 1. Given its simplicity [7], our approach is trained using joint embeddings and a Siamese network architecture, inspired by the SimSiam model, which does not require negative samples [5, 9]. In this work, we explore three un-normalized contrastive approaches (node-wise, graph-wise, and regularization for permutation equivariance) and demonstrate their effectiveness in enhancing the visual information for the VQA task. A graph neural network (GNN) with a self-attention strategy (GAT) is employed to distill an SG representation relevant to the question by capturing visual interaction content among objects in the scene [7]. Our work differs from existing VQA models in three main aspects: (i) we generate an SG using a pre-trained, unbiased scene graph generator [16] in a more practical approach; (ii) we utilize un-normalized contrastive learning on the SG representation, along with augmentation, to eliminate any potential spurious correlations from annotated data and to heighten the visual information; and (iii) we use a GAT encoder to enhance high-level semantic and spatial reasoning on the SG. We further investigate the behavior of visual enhancement when employing a more expressive language encoder, specifically BERT [15]. Importantly, our SelfGraphVQA framework does not require the costly pre-training strategy of transformer-based models commonly used in vision-language tasks [8, 28, 32].

## 2 Related Work

Scene Graph and Visual Question Answering. Accurate assessment of VQA tasks, which requires a comprehensive understanding of visual perception and semantic reasoning, has gained substantial attention in the academic community, as these tasks hold significant practical value, particularly in enhancing accessibility for the visually impaired [4, 15, 33, 34, 19]. Several works have explored the information that SG representations may bring to VQA [20, 31], as opposed to the more data-hungry transformer-based visual language models [8, 19, 28]. However, existing SG-VQA approaches typically rely on idealized scene graphs and inherent dataset reasoning [20, 21]. Obtaining such annotations can be costly without an end-to-end pipeline. Moreover, even SoTA methods in SG-VQA exhibit limited generalization capabilities, potentially due to spurious correlations [2]. Self-Supervised Learning. Broadly speaking, recent advancements in self-supervised learning can be categorized into normalized [3, 6] and similarity-maximization [7, 11, 29] representation learning. Contrastive methods aim to bring embeddings of identically labelled images closer together while separating embeddings generated from different images. In visual-language data, the prevailing approach for self-supervised learning involves pretraining a transformer-based model on a large dataset to solve pretext tasks before fine-tuning for downstream tasks [8, 27, 28, 32]. However, these methods can be computationally expensive and complex due to the use of negative samples and masking techniques. Modern un-normalized contrastive learning methods, e.g., BYOL [11] and SimSiam [7], use architectures inspired by reinforcement learning to maximize the informational content of the representations. 
In our proposal, we adopt a similarity maximization approach using a Siamese architecture for visual scene graph representation.

\begin{table} \begin{tabular}{l l c} \hline \hline Method & Eval. Data & Acc (\%) \\ \hline Human [14] & – & 89.3 \\ GraphVQA [20] & Annotated/SGG & 94.8 \\ LRTA [21] & Annotated/SGG & 93.1 \\ Lightweight [25] & Annotated/SGG & 77.9 \\ CRF [24] & Annotated & 72.1 \\ LXMERT [28] & Extracted & 59.8 \\ \hline GraphVQA (original pre-trained on ideal) & **Test Extracted/SGG** & 29.7 \\ \hline SelfGraphVQA (Local) & Extracted/SGG & 51.5 \\ SelfGraphVQA (Global) & Extracted/SGG & 52.3 \\ SelfGraphVQA (SelfSim) & Extracted/SGG & 54.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Our experiments revealed a notable accuracy reduction in top-notch methods on the GQA dataset when transitioning from well-annotated to extracted scene graphs. We categorize methods by data type (e.g., annotated data or purely image-question extraction) and SGG usage. All methods are trained and validated uniformly, except for the test extracted configuration, trained on ideal data and validated on extracted SGG data.

Figure 1: (Left) The statistical dependence of the task and the ideal graph, \(G\). (Right) Our proposed framework removes data leakage by using the extracted SG \(G^{\prime}\). Our architecture comprises a question encoder \(f_{q}\), a graph encoder \(f_{g}\), and a classifier \(f_{c}\). Two distinct views of one image are processed by the same pipeline. We use a frozen pre-trained SG generator \(g\), and a prediction head \(h\) is applied through the top view with gradient backpropagation, while gradients are not propagated back from the lower view. We maximize the representation of the views using the similarity loss \(L^{\prime}\).

## 3 Methodology

We refer the reader to the appendix for the implementation details. We experiment with the maximization strategy with three independent and distinct similarity losses over either a localized node representation (i.e., object-wise), a global pooled graph representation (i.e., scene-wise), or a regularization node representation term to induce permutation equivariance. We denote the graph representations \(z_{i}=f_{g}\big{(}g(x_{i}),f_{q}(q)\big{)}\), and the predictor's output vectors \(p_{i}=h(z_{i})\). Generally, the representations are maximized by minimizing the generic cosine distance loss \(D\).

**Local Similarity.** To account for permutation invariance in the node representations, we compute cosine distances over all object pairs from the two views and use the maximally similar node embedding pairs to compute the local loss by \[L_{\ell}^{*}(p_{1},z_{2})=\frac{1}{O}\sum_{i}^{O}\min_{j}D(p_{1,i},z_{2,j}), \tag{1}\] where \(O\) is the number of objects in the scene. Symmetrically, we compute \(L_{\ell}^{*}(p_{2},z_{1})\) to obtain the overall local loss \[L_{\ell}(z_{1},z_{2})=\frac{1}{2}\big{(}L_{\ell}^{*}(p_{1},z_{2})+L_{\ell}^{*}(p_{2},z_{1})\big{)}. \tag{2}\]

**Global Similarity.** After obtaining a graph representation, we follow an approach similar to cosine similarity maximization for image classification [7, 11]. Along with the intuition that contrasting global representations may enhance the visual cues, we assume that the global representation contains the full information about the scene. Similar to the local representation, we minimize the cosine distance, yielding a loss of the form \[L_{g}(z_{1},z_{2})=\frac{1}{2}\big{(}D(p_{1},z_{2})+D(p_{2},z_{1})\big{)}. 
\tag{3}\]

**Regularization for Permutation Equivariance.** We employ an _anchor_, where the SG of an unmodified image guides the SG of the augmented image, allowing us to obtain a more accurate representation of the original scene. Our assumption is that the local similarity loss decreases the global performance, while the global similarity provides a contextual representation but loses local details. This technique aligns similar nodes and encourages regularization, making augmented scene representations closer to the original, thus mitigating permutation invariance in graph representations. Denote the anchored representation by \(z_{1}\), and the augmented representation by \(z_{2}\). We determine intra-similarities of the anchors \(s_{1,ij}=D(z_{1,i},z_{1,j})\) and similarities of the augmented views \(s_{2,ij}=D(z_{2,i},z_{2,j})\). We then compute the cross-entropy (CE) between anchors and augmentations \[J(z_{1},z_{2})=\operatorname{CE}(s_{1},s_{2}), \tag{4}\] which acts as a regularizer to constrain permutation equivariance for the augmentations in addition to the local loss. We combine these losses using \[L_{s}(z_{1},z_{2})=L_{\ell}(z_{1},z_{2})+J(z_{1},z_{2}), \tag{5}\] which we refer to as the local self-similarity loss (SelfSim).

**Distribution Link Representation Regularization.** Similarly to the regularization for permutation equivariance, we apply link regularization _in conjunction with one of the other three similarity strategies_. The edges of the _anchor_ SG guide the edges of the augmented SG. Denote the anchored edge score representation by \(r_{1}\), and the augmented edge score representation by \(r_{2}\). These scores characterize the relationships between the objects in the scene, and we aim to make the link distribution more robust to perturbation. _In this case, the scene graph generator [16] is trainable._ We compute the cross-entropy between the anchored edge scores and the augmented edge scores, \(J_{e}(r_{1},r_{2})=\operatorname{CE}(r_{1},r_{2})\), which acts as a regularizer to constrain the link prediction distribution, yielding \[L_{\epsilon}(z_{1},z_{2})=L_{\ell}(z_{1},z_{2})+J_{e}(r_{1},r_{2}). \tag{6}\] All models utilizing this added link distribution regularizer are marked by the inclusion of the term "link".

**Overall Optimization Objective.** Lastly, we outline the overall loss for optimizing the VQA objective. To identify the correct answer \(a\in A\) given an example \((x,q,A)\), where \(x\) represents the input image and \(q\) is the associated question, we extract a point estimate of probabilities \[p(a\mid x,q)=\sigma\left(\operatorname{logit}(x)\right), \tag{7}\] where \(\sigma\) is the softmax function and \(\operatorname{logit}(x)=f(x,q)\) are the logits for all possible answers produced by our encoder. We calculate the cross-entropy loss for each instance, \[L_{\text{aug}}(x)=\operatorname{CE}\left(p(a\mid x,q),a\right). \tag{8}\] Our combined training loss is then given by \[L(x)=\alpha L_{\text{aug}}(x)+\beta L^{\prime}(z_{1},z_{2}), \tag{9}\] where \(L^{\prime}\) can be any of the aforementioned similarity loss strategies (\(L_{\ell}\), \(L_{g}\), or \(L_{s}\), with or without \(L_{\epsilon}\)), and \(\alpha\) and \(\beta\) are hyperparameters that balance the contributions of the two components to the total loss.

## 4 Experiments and Ablations

We evaluate our framework on the GQA dataset [14]. 
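Before turning to the results, the similarity losses above can be summarized in a few lines of PyTorch (a sketch of our reading of Eqs. (1)-(3), not the authors' released code; we assume predictor outputs \(p\) and targets \(z\) of shape `[O, d]` for the local loss, with a SimSiam-style stop-gradient on \(z\)):

```python
import torch.nn.functional as F


def D(p, z):
    """Negative cosine similarity; gradients are blocked through z."""
    return -F.cosine_similarity(p, z.detach(), dim=-1)


def local_loss(p1, z2, p2, z1):
    """Eqs. (1)-(2): match each node to its closest node in the other view."""
    def one_side(p, z):
        # pairwise distances [O, O], then min over the other view's nodes
        d = -F.cosine_similarity(p.unsqueeze(1), z.detach().unsqueeze(0), dim=-1)
        return d.min(dim=1).values.mean()
    return 0.5 * (one_side(p1, z2) + one_side(p2, z1))


def global_loss(p1, z2, p2, z1):
    """Eq. (3): symmetric distance between pooled graph representations."""
    return 0.5 * (D(p1, z2).mean() + D(p2, z1).mean())
```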
Our study aims to establish a practical foundation for demonstrating the potential of SGs, along with an un-normalized contrastive approach, to improve visual cues for VQA. Despite the noisy data in the extracted SGs, we demonstrate their effectiveness (Fig. 2), highlighting the importance of further exploration. The utilization of non-idealized SG-VQA methods with un-normalized contrastive learning leads to improvements across all metrics (Table 2). Furthermore, our framework demonstrates faster convergence during training, requiring approximately 20% fewer epochs than the baselines; however, further investigation is required to validate these observations. The un-normalized contrastive approach universally enhances results across question categories (Fig. 2), with specific approaches further improving the model's performance depending on the query type. We conducted ablations to demonstrate how our approach works, with detailed observations on the GQA dataset that go beyond mere reliance on metrics.

**Does the Scene Graph Really Matter?** Through a perturbation study where images were augmented based on question types, we introduced disruptive noise, such as image flipping, to challenge the model's ability to answer spatial relational questions. The goal was to observe mistakes in the model's answers. The results, compared to the baseline (Table 3), showed greater variation in our model's performance, indicating that it pays more attention to visual information, whereas the baseline appears to rely on other sources of information.

**Are Performance Gains Mainly Due to Augmentations?** We compared our approach with the baseline architecture trained solely with data augmentation techniques, to evaluate their influence on overall performance. Table 4 provides evidence that data augmentation alone actually impairs the performance of the architecture.

**Are Our Models Less Biased?** Our initial hypothesis was that current top-performing models might incorporate biases present in the questions into their weights. We conducted experiments to analyze this issue, introducing random noise to features in the scene graph while preserving its topology, and perturbing the language in up to 50% of the words in the questions. The results in Table 5 demonstrate that our approach relies less on linguistic features, prioritizing overall information and reducing linguistic bias. Additionally, we explored visual enhancement even when training with a more expressive language module such as BERT. The experiments in Table 5 examine the impact of using BERT and its effect on enhancing visual information.

**Examples.** Given the wide range of acceptable answers, we argue that solely relying on standard evaluation metrics may not provide a fair comparison, thus presenting additional challenges to the field. Fig. 3 demonstrates the utility of SGs for interpretability, as they enable a graphical analysis of objects and the overall composition of the scene.

## 5 Conclusions

Despite promising results in VQA tasks with idealized SGs, our study revealed that models relying on manually annotated and expensive SGs struggle with real-world data. To address this, we proposed SelfGraphVQA, a more practical SG-VQA framework that breaks the spurious correlations of annotated SGs and learns to answer questions using SGs extracted by a pre-trained SG generator. We employed un-normalized contrastive learning to maximize the similarity of graph representations in different views. 
All approaches utilizing self-supervision showed improvement over their baselines. Overall, we demonstrated the effectiveness of extracted SGs in VQA, underscoring the significance of continued exploration of the potential of SGs for complex tasks. We also showed that self-supervision over the SG representation improved the results by enhancing the visual information within the task. We hope that this work raises awareness of the challenges of accentuating the role of the scene in answering questions from images.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Binary & Open & Validity & Plausibility & Acc \\ \hline Baseline Aug & 65.1 & 28.7 & 94.6 & 90.1 & 50.1 \\ SelfSim & 68.4 & 31.3 & 94.9 & 90.7 & 54.0 \\ \hline \hline \end{tabular} \end{table} Table 4: Results (%) of the Aug. Baseline and SelfSim.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & Binary (\(\uparrow\)) & Open (\(\uparrow\)) & Consistent (\(\uparrow\)) & Validity (\(\uparrow\)) & Plausibility (\(\uparrow\)) & Dist. (\(\downarrow\)) & Acc (\(\uparrow\)) \\ \hline Baseline & 65.8 & 29.7 & 58.2 & 94.9 & 90.5 & 11.7 & 50.1 \\ Baseline+BERT & 68.0 & 32.2 & 62.6 & 95.0 & 90.9 & 7.7 & 53.8 \\ \hline Local & 66.8 & 30.2 & 59.4 & 94.9 & 90.6 & 8.8 & 51.5 \\ Global & 67.7 & 30.8 & 62.5 & 94.9 & 90.6 & 6.7 & 52.3 \\ SelfSim & **68.4** & 31.3 & **65.9** & 94.9 & 90.7 & **2.1** & 54.0 \\ \hline Global+BERT+link & 68.0 & **33.0** & 63.9 & 95.0 & **91.2** & 8.9 & **54.8** \\ SelfSim+BERT+link & 68.2 & 32.8 & 64.3 & **95.0** & 91.0 & 8.0 & **54.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Results (%) on GQA by standard metrics.

Figure 3: Examples to demonstrate the complexity of VQA.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Setup** & \multicolumn{4}{c}{**Methods**} \\ \hline Scene Graph + Question & Baseline & Local & Global & SelfSim \\ \hline Noise + SG & 16.2 & 16.6 & 28.6 & 26.6 \\ Question + Noise & 39.9 & 38.3 & 37.4 & 39.8 \\ Noise + Noise & 12.7 & 14.6 & 18.9 & 21.0 \\ \hline Question + Scene Graph & BERT Baseline & Global+BERT+link & SelfSim+BERT+link \\ \hline Noise + SG & 21.0 & 23.2 & 24.5 \\ Question + Noise & 42.4 & 41.8 & 42.8 \\ Noise + Noise & 19.8 & 21.7 & 21.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Sensitivity of accuracy (%) for biased-question analyses of SelfGraphVQA and SelfGraphVQA+BERT.

Figure 2: Accuracy on different question types.

## Acknowledgements

This work was supported in part by the FAPESP (São Paulo Research Foundation) grant no. 2022/09849-8 and BEPE grant no. 2022/09849-8. The computations were performed in part on resources provided by Sigma2, the National Infrastructure for High-Performance Computing and Data Storage in Norway, through Project NN8104K. This work was funded in part by the Research Council of Norway, via the Centre for Research-based Innovation funding scheme (grant no. 309439), and Consortium Partners.
2307.12229
EchoGLAD: Hierarchical Graph Neural Networks for Left Ventricle Landmark Detection on Echocardiograms
The functional assessment of the left ventricle chamber of the heart requires detecting four landmark locations and measuring the internal dimension of the left ventricle and the approximate mass of the surrounding muscle. The key challenge of automating this task with machine learning is the sparsity of clinical labels, i.e., only a few landmark pixels in a high-dimensional image are annotated, leading many prior works to heavily rely on isotropic label smoothing. However, such a label smoothing strategy ignores the anatomical information of the image and induces some bias. To address this challenge, we introduce an echocardiogram-based, hierarchical graph neural network (GNN) for left ventricle landmark detection (EchoGLAD). Our main contributions are: 1) a hierarchical graph representation learning framework for multi-resolution landmark detection via GNNs; 2) induced hierarchical supervision at different levels of granularity using a multi-level loss. We evaluate our model on a public and a private dataset under the in-distribution (ID) and out-of-distribution (OOD) settings. For the ID setting, we achieve the state-of-the-art mean absolute errors (MAEs) of 1.46 mm and 1.86 mm on the two datasets. Our model also shows better OOD generalization than prior works with a testing MAE of 4.3 mm.
Masoud Mokhtari, Mobina Mahdavi, Hooman Vaseli, Christina Luong, Purang Abolmaesumi, Teresa S. M. Tsang, Renjie Liao
2023-07-23T05:31:47Z
http://arxiv.org/abs/2307.12229v1
EchoGLAD: Hierarchical Graph Neural Networks for Left Ventricle Landmark Detection on Echocardiograms ###### Abstract The functional assessment of the left ventricle chamber of the heart requires detecting four landmark locations and measuring the internal dimension of the left ventricle and the approximate mass of the surrounding muscle. The key challenge of automating this task with machine learning is the sparsity of clinical labels, i.e., only a few landmark pixels in a high-dimensional image are annotated, leading many prior works to heavily rely on isotropic label smoothing. However, such a label smoothing strategy ignores the anatomical information of the image and induces some bias. To address this challenge, we introduce an **ech**ocardiogram-based, hierarchical **g**raph neural network (GNN) for left ventricle **l**andmark **d**tection (EchoGLAD). Our main contributions are: 1) a hierarchical graph representation learning framework for multi-resolution landmark detection via GNNs; 2) induced hierarchical supervision at different levels of granularity using a multi-level loss. We evaluate our model on a public and a private dataset under the in-distribution (ID) and out-of-distribution (OOD) settings. For the ID setting, we achieve the state-of-the-art mean absolute errors (MAEs) of 1.46 mm and 1.86 mm on the two datasets. Our model also shows better OOD generalization than prior works with a testing MAE of 4.3 mm. Keywords:Graph Neural Networks Landmark Detection Ultrasound. ## 1 Introduction Left Ventricular Hypertrophy (LVH), one of the leading predictors of adverse cardiovascular outcomes, is the condition where heart's mass abnormally increases secondary to anatomical changes in the Left Ventricle (LV) [10]. These anatomical changes include an increase in the septal and LV wall thickness, and the enlargement of the LV chamber. More specifically, Inter-Ventricular Septal (IVS), LV Posterior Wall (LVPW) and LV Internal Diameter (LVID) are assessed to investigate LVH and the risk of heart failure [21]. As shown in Figure 1 (a), four landmarks on a parasternal long axis (PLAX) echo frame can characterize IVS, LVPW and LVID, and allow cardiac function assessment. To automate this, machine learning-based (ML) landmark detection methods have gained traction. It is difficult for such ML models to achieve high accuracy due to the sparsity of positive training signals (four or six) pertaining to the correct pixel locations. In an attempt to address this, previous works use 2D Gaussian distributions to smooth the ground truth landmarks of the LV [9, 13, 18]. However, as shown in Figure 1 (b), for LV landmark detection where landmarks are located at the wall boundaries (as illustrated by the dashed line), we argue that an isotropic Gaussian label smoothing approach confuses the model by being agnostic to the structural information of the echo frame and penalizing the model similarly whether the predictions are perpendicular or along the LV walls. In this work, to address the challenge brought by sparse annotations and label smoothing, we propose a hierarchical framework based on Graph Neural Networks (GNNs) [25] to detect LV landmarks in ultrasound images. As shown in Figure 2, our framework learns useful representations on a hierarchical grid graph built from the input echo image and performs multi-level prediction tasks. Our contributions are summarized below. 
* We propose a novel GNN framework for LV landmark detection, performing message passing over hierarchical graphs constructed from an input echo;
* We introduce a hierarchical supervision that is automatically induced from sparse annotations to alleviate the issue of label smoothing;
* We evaluate our model on two LV landmark datasets and show that it not only achieves state-of-the-art mean absolute errors (MAEs) (1.46 mm and 1.86 mm across three LV measurements) but also outperforms other methods in out-of-distribution (OOD) testing (achieving 4.3 mm).

Figure 1: (a) IVS, LVID and LVPW measurements visualized on a PLAX echo frame. (b) If the wall landmark labels are smoothed by an isotropic Gaussian distribution, points along the visualized wall and ones perpendicular are penalized equally. Ideally, points along the walls must be penalized less.

## 2 Related Work

Various convolution-based LV landmark detection works have been proposed. Sofka _et al._ [26] use Fully Convolutional Networks to generate prediction heatmaps followed by a center-of-mass layer to produce the coordinates of the landmark locations. Another work [18] uses a modified U-Net [24] model to produce a segmentation map followed by a focal loss to penalize pixel predictions in close proximity of the ground truth landmark locations, modulated by a Gaussian distribution. Jafari _et al._ [13] use a similar U-Net model with Bayesian neural networks [8] to estimate the uncertainty in model predictions and reject samples that exhibit high uncertainties. Gilbert _et al._ [6] smooth ground truth labels by placing 2D Gaussian heatmaps around landmark locations at angles that are statistically obtained from training data. Lastly, Duffy _et al._ [4] use atrous convolutions [1] to make predictions for LVID, IVS and LVPW measurements. Other related works focus on the detection of cephalometric landmarks from X-ray images. These works are highly transferable to the task of LV landmark detection as they must also detect a sparse number of landmarks. McCouat _et al._ [20] is one of these works that abstains from using Gaussian label smoothing, but still relies on one-hot labels and treats landmark detection as a pixel-wise classification task. Chen _et al._ [2] is another cephalometric landmark detection work that creates a feature pyramid from the intermediate layers of a ResNet [11]. Our approach is different from prior works in that it aims to avoid the issue shown in Fig. 1 (b) and the sparse annotation problem through the introduction of simpler auxiliary tasks that guide the main pixel-level task, so that the ML model learns the location of the landmarks without relying on Gaussian label smoothing. It further improves the representation learning via efficient message passing [25, 7] of GNNs among pixels and patches at different levels without having as high a computational complexity as transformers [3, 19].

Figure 2: Overview of our proposed model architecture. **Hierarchical Feature Construction** provides node features for the hierarchical graph representation of each echo frame, where the nodes in the main graph correspond to pixels in the image, and nodes in the auxiliary graphs correspond to patches of different granularity in the image. **Graph Neural Networks** are used to process the hierarchical graph representation and produce node embeddings for the auxiliary graphs and the main graph. **Multi-Layer Perceptrons (MLPs)** are followed by a Sigmoid output function to map the node embeddings into landmark heatmaps of different granularity over the input echo frame. 
Lastly, while GNNs have never been applied to the task of LV landmark detection, they have been used for landmark detection in other domains. Li _et al._[16] and Lin _et al._[17] perform face landmark detection via modeling the landmarks with a graph and performing a cascaded regression of the locations. These methods, however, do not leverage hierarchical graphs and hierarchical supervision and instead rely on initial average landmark locations, which is not an applicable approach to echo, where the anatomy of the depicted heart can vary significantly. Additionally, Mokhtari _et al._[22] use GNNs for the task of EF prediction from echo cine series. However, their work focuses on regression tasks. ## 3 Method ### Problem Setup We consider the following supervised setting for LV wall landmark detection. We have a dataset \(D=\{X,Y\}\), where \(|D|=n\) is the number of \(\{x^{i},y^{i}\}\) pairs such that \(x^{i}\in X\), \(y^{i}\in Y\), and \(i\in[1,n]\). Each \(x^{i}\in\mathbb{R}^{H\times W}\) is an echo image of the heart, where H and W are height and width of the image, respectively, and each \(y^{i}\) is the set of four point coordinates \([(h_{1}^{i},w_{1}^{i}),(h_{2}^{i},w_{2}^{i}),(h_{3}^{i},w_{3}^{i}),(h_{4}^{i},w_{4}^{i})]\) indicating the landmark locations in \(x^{i}\). Our goal is to learn a function \(f:\mathbb{R}^{H\times W}\mapsto\mathbb{R}^{4\times 2}\) that predicts the four landmark coordinates for each input image. _A figure in the supp. material further clarifies how the model generates landmark location heatmaps on different scales (Fig. 2)._ ### Model Overview As shown in Figure 2, each input echo frame is represented by a hierarchical grid graph where each sub-graph corresponds to the input echo frame at a different resolution. The model produces heatmaps over both the main pixel-level task as well as the coarse auxiliary tasks. While the pixel-level heatmap prediction is of main interest, we use a hierarchical multi-level loss approach where the model's prediction over auxiliary tasks is used during training to optimize the model through comparisons to coarser versions of the ground truth. The intuition behind such an approach is that the model learns nuances in the data by performing landmark detection on the easier auxiliary tasks and uses this established reasoning when performing the difficult pixel-level task. ### Hierarchical Graph Construction To learn representations that better capture the dependencies among pixels and patches, we introduce a hierarchical grid graph along with multi-level prediction tasks. As an example, the simplest task consists of a grid graph with only four nodes, where each node corresponds to four equally-sized patches in the original echo image. In the main task (the one that is at the bottom in Figure 2 and is the most difficult), the number of nodes is equal to the total number of pixels. More formally, let us denote a graph as \(G=(V,E)\), where \(V\) is the set of nodes, and \(E\) is the set of edges in the graph such that if \(v_{i},v_{j}\in V\) and there is an edge from \(v_{i}\) to \(v_{j}\), then \(e_{i,j}\in E\). To build hierarchical task representations, for each image \(x\in X\) and the ground truth \(y\in Y\), \(K\) different auxiliary graphs \(G_{k}(V_{k},E_{k})\) are constructed using the following steps for each \(k\in[1,K]\): 1. \(2^{k}\times 2^{k}=4^{k}\) nodes are added to \(V_{k}\) to represent each patch in the image. 
Note that the larger values of \(k\) correspond to graphs of finer resolution, while the smaller values of \(k\) correspond to coarser graphs. 2. Grid-like, undirected edges are added such that \(e_{m-1,q},e_{m+1,q},e_{m,q-1},e_{m,q+1}\in E_{k}\) for each \(m,q\in[1\dots 2^{k}]\), if these neighbouring nodes exist in the graph (border nodes will not have four neighbouring nodes). 3. A patch feature embedding \(z_{j}^{k}\), where \(j\in[1\dots 4^{k}]\), is generated and associated with that patch (node) \(v_{j}\in V_{k}\). The patch feature construction technique is described in Section 3.4. 4. Binary node labels \(\hat{y}_{k}\in\{0,1\}^{4^{k}\times 4}\) are generated such that \(\hat{y}_{kj}=1\) if at least one of the ground truth landmarks in \(y\) is contained in the patch associated with node \(v_{j}\in V_{k}\). Note that for each auxiliary graph, four different one-hot labels are predicted, which correspond to each of the four landmarks required to characterize LV measurements. The main graph, \(G_{\text{main}}\), has a grid structure and contains \(H\times W\) nodes regardless of the value of \(K\), where each node corresponds to a pixel in the image. Additionally, to allow the model to propagate information across levels, we add inter-graph edges such that each node in a graph is connected to four nodes in the corresponding region in the next finer graph, as depicted in Fig. 2.

### Node Feature Construction

The graph representation described in Section 3.3 is not complete without proper node features, denoted by \(z\in\mathbb{R}^{|V|\times d}\), characterizing patches or pixels of the image. To achieve this, the grey-scale image is initially expanded in the channel dimension using a CNN. The features are then fed into a U-Net, where the decoder part is used to obtain node features such that deeper-layer embeddings correspond to the node features of the finer graphs. This means that the main pixel-level graph takes its features from the last layer of the network. _A figure clarifying node feature construction is provided in the supp. material (Fig. 1)._

### Hierarchical Message Passing

We now introduce how we perform message passing on our constructed hierarchical graph using GNNs to learn node representations for predicting landmarks. The whole hierarchical graph created for each sample, _i.e._, the main graph, auxiliary graphs, and cross-level edges, is collectively denoted as \(G^{i}\), where \(i\in[1,\ldots,n]\). Each \(G^{i}\) is fed into GNN layers followed by an MLP: \[h^{l+1}_{\text{nodes}}=\text{ReLU}(\text{GNN}_{l}(G^{i},h^{l}_{\text{nodes}})),\quad l\in[0,\ldots,L] \tag{1}\] \[h_{\text{out}}=\sigma(\text{MLP}(h^{L+1}_{\text{nodes}})), \tag{2}\] where \(\sigma\) is the Sigmoid function, \(h^{l}_{\text{nodes}}\in\mathbb{R}^{|V_{G^{i}}|\times d}\) is the set of \(d\)-dimensional embeddings for all nodes in the graph at layer \(l\), and \(h_{\text{out}}\in[0,1]^{|V_{G^{i}}|\times 4}\) is the four-channel prediction for each node, with each channel corresponding to the heatmap of one of the four landmarks. The initial node features \(h^{0}_{\text{nodes}}\) are set to the features \(z\) described in Sections 3.3 and 3.4. 
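Putting Sections 3.3-3.5 together, the sketch below builds the 4-neighbour grid edges of one level and runs a GCN stack as in Eqs. (1)-(2) (our illustration only; helper names are hypothetical, inter-level edges and the U-Net feature extractor are omitted, and node features are assumed to be already projected to dimension \(d\)):

```python
import torch
from torch_geometric.nn import GCNConv


def grid_edges(side: int) -> torch.Tensor:
    """Undirected 4-neighbour edges of a side x side grid graph (step 2)."""
    idx = torch.arange(side * side).view(side, side)
    right = torch.stack([idx[:, :-1].flatten(), idx[:, 1:].flatten()])
    down = torch.stack([idx[:-1, :].flatten(), idx[1:, :].flatten()])
    e = torch.cat([right, down], dim=1)
    return torch.cat([e, e.flip(0)], dim=1)  # add both edge directions


class HierGNN(torch.nn.Module):
    def __init__(self, d: int = 128, num_layers: int = 3):
        super().__init__()
        self.convs = torch.nn.ModuleList([GCNConv(d, d) for _ in range(num_layers)])
        self.mlp = torch.nn.Linear(d, 4)  # one heatmap channel per landmark

    def forward(self, z, edge_index):
        h = z
        for conv in self.convs:            # Eq. (1): message passing + ReLU
            h = torch.relu(conv(h, edge_index))
        return torch.sigmoid(self.mlp(h))  # Eq. (2): per-node heatmaps in [0, 1]
```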
The coordinates \((x^{p}_{\text{out}},y^{p}_{\text{out}})\) for each landmark location \(p\in[1,2,3,4]\) are obtained by taking the expected value of individual heatmaps \(h^{p}_{\text{out}}\) along the \(x\) and \(y\) directions such that: \[x^{p}_{\text{out}}=\sum_{s=1}^{|V_{G^{i}}|}\text{softmax}(h^{p}_{\text{out}})_ {s}*\text{loc}_{x}(s), \tag{3}\] where similar operations are performed in the y direction for \(y^{p}_{\text{out}}\). Here, we vectorize the 2D heatmap into a single vector and then feed it to the softmax. \(\text{loc}_{x}\) and \(\text{loc}_{y}\) return the \(x\) and \(y\) positions of a node in the image. It must be noted that unlike some prior works such as Duffy _et al_. [4] that use post-processing steps such as imposing thresholds on the heatmap values, our work directly uses the output heatmaps to find the final predictions. ### Training and Objective Functions To train the network, we leverage two types of objective functions. 1) _Weighted Binary Cross Entropy (BCE):_ Since the number of landmark locations is much smaller than non-landmark locations, we use a weighted BCE loss; 2) _L2 regression of landmark coordinates:_ We add a regression objective which is the L2 loss between the predicted coordinates and the ground truth labels. ## 4 Experiments ### Datasets **Internal Dataset:** Our private dataset contains 29,867 PLAX echo frames, split in a patient-exclusive manner with 23824, 3004, and 3039 frames for training, validation, and testing, respectively. **External Dataset:** The public Unity Imaging Collaborative (UIC) [12] LV landmark dataset consists of a combination of 3822 end-systolic and end-diastolic PLAX echo frames acquired from seven British echocardiography labs. The provided splits contain 1613, 298, and 1911 training, validation, and testing samples, respectively. For both datasets, we down-sample the frames to a fixed size of \(224\times 224\). ### Implementation Details Our model creates \(K\)=7 auxiliary graphs. For the node features, the initial single-layer CNN uses a kernel size of 3 and zero-padding to output features with a dimension of \(224\times 224\times 4\) (\(C\)=4). The U-Net's encoder contains 7 layers with \(128\times 128,64\times 64,32\times 32,16\times 16,8\times 8,4\times 4\), and \(2\times 2\) spatial dimensions, and \(8,16,32,64,128,256\), and 512 number of channels, respectively. Three Graph Convolutional Network (GCN)[15] layers (\(L=3\)) with a hidden node dimension of 128 are used. To optimize the model, we use the Adam optimizer [14] with an initial learning rate of 0.001, \(\beta\) of (0.9, 0.999) and a weight decay of 0.0001, and for the weighted BCE loss, we use a weight of 9000. The model is implemented using PyTorch [23] and Pytorch Geometric [5] and is trained on two 32-GB Nvidia Titan GPUs. Our code-base is publicly available at [https://github.com/MasoudMo/echoglad](https://github.com/MasoudMo/echoglad). ### Results We evaluate models using Mean Absolute Error (MAE) in mm, and Mean Percent Error (MPE) in percents, which is formulated as \(\text{MPE}=100\times\frac{|L_{\text{pred}}-L_{\text{true}}|}{L_{\text{true}}}\), where \(L_{\text{pred}}\) and \(L_{\text{true}}\) are the prediction and ground truth values for every measurement. We also report the Success Detection Rate (SDR) for LVID for 2 and 6 mm thresholds. This rate shows the percentage of samples where the absolute error between ground truth and LVID predictions is below the specific threshold. 
These thresholds are chosen based on the healthy ranges for IVS (0.6-1.1 cm), LVID (2.0-5.6 cm), and LVPW (0.6-1.1 cm). Hence, the 2 mm threshold provides a stringent evaluation of the models, while the 6 mm threshold facilitates the assessment of out-of-distribution performance. **In-Distribution (ID) Quantitative Results.** In Tab. 1, we compare the performance of our model with previous works in the ID setting, where the training and test sets come from the same distribution (_e.g._, the same clinical setting); we separately train and test the models on the private and the public dataset. _The results for the public dataset are provided in the supp. material (Table 1)._ **Out-of-Distribution (OOD) Quantitative Results.** To investigate the generalization ability of our model compared to previous works, we train all models on the private dataset (which consists of a larger number of samples compared to UIC) and test the trained models on the public UIC dataset, as shown in Tab. 2. Based on our visual assessment, the UIC dataset looks very different from the private dataset, thus serving as an OOD test-bed. **Qualitative Results.** _Failure cases are shown in supp. material (Fig. 3)._ **Ablation Studies.** In Table 3, we show the benefits of a hierarchical graph representation with a multi-scale objective for the task of LV landmark detection. _We provide a qualitative view of the ablation study in supp. material (Fig. 4)._

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} Model & \multicolumn{3}{c|}{MAE [mm] \(\downarrow\)} & \multicolumn{3}{c|}{MPE [\%] \(\downarrow\)} & \multicolumn{2}{c}{SDR [\%] of LVID \(\uparrow\)} \\ & LVID & IVS & LVPW & LVID & IVS & LVPW & 2.0 mm & 6.0 mm \\ \hline \hline Gilbert _et al._ [6] & 2.9 & 1.4 & 1.4 & 6.5 & 14.5 & 15.2 & 48.1 & 88.9 \\ Lin _et al._ [18] & 9.4 & 11.2 & 9.0 & 21.2 & 116.5 & 92.9 & 26.0 & 49.1 \\ McCouat _et al._ [20] & **2.2** & 1.3 & 1.4 & **4.8** & 13.5 & 15.1 & 58.3 & 93.9 \\ Chen _et al._ [2] & 2.3 & 1.2 & 1.2 & 5.2 & 12.6 & 13.8 & 60.4 & 92.6 \\ Duffy _et al._ [4] & 2.5 & 1.2 & 1.2 & 5.4 & 13.2 & 13.5 & 52.1 & 93.0 \\ Ours & **2.2** & **1.1** & **1.1** & **4.8** & **11.2** & **12.2** & **62.4** & **94.4** \\ \end{tabular} \end{table} Table 1: **Quantitative results** on the private test set for models trained on the private training set. We see that our model has the best average performance over the three measurements, which shows the superiority of our model in the in-distribution setting for the high-data regime.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} Model & \multicolumn{3}{c|}{MAE [mm] \(\downarrow\)} & \multicolumn{3}{c|}{MPE [\%] \(\downarrow\)} & \multicolumn{2}{c}{SDR [\%] of LVID \(\uparrow\)} \\ & LVID & IVS & LVPW & LVID & IVS & LVPW & 2.0 mm & 6.0 mm \\ \hline \hline Gilbert _et al._ [6] & 9.5 & 4.8 & 4.1 & 23.5 & 32.3 & 26.8 & 22.5 & 52.2 \\ Lin _et al._ [18] & 51.5 & 51.7 & 41.3 & 121.0 & 375.8 & 298.0 & 11.3 & 24.6 \\ McCouat _et al._ [20] & 5.9 & 3.6 & 4.4 & 18.5 & 30.5 & 36.4 & 34.6 & 72.3 \\ Chen _et al._ [2] & 7.4 & 5.3 & 6.9 & 22.5 & 49.4 & 62.4 & 28.9 & 65.3 \\ Duffy _et al._ [4] & 13.7 & 4.1 & 5.5 & 36.8 & 36.4 & 45.4 & 6.2 & 20.6 \\ Ours & **5.8** & **2.8** & **4.3** & **18.4** & **23.8** & **34.6** & **35.8** & **74.9** \\ \end{tabular} \end{table} Table 2: **Quantitative results** on the public UIC test set for models trained on the private training set. This table shows the out-of-distribution performance of the models when trained on a larger dataset and tested on a smaller external dataset. 
We can see that in this case, our model outperforms previous works by a large margin, which attests to the generalizability of our framework. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} Model & \multicolumn{3}{c|}{MPE [\%]} \\ & LVID & IVS & LVPW \\ \hline \hline Vanilla U-Net & 5.31 & 13.17 & 13.47 \\ U-Net Main Graph & 4.98 & 11.67 & 12.78 \\ Single-Scale Loss & 5.41 & 12.37 & 12.8 \\ Main Model & **4.91** & **11.45** & **12.36** \\ \end{tabular} \end{table} Table 3: **Ablation results** on the validation set of our private dataset. Vanilla U-Net uses a simple U-Net model, while U-Net Main Graph only uses the pixel-level graph (no aux. graphs). Main Model is our proposed approach. Lastly, Single-Scale Loss has the same framework as the Main Model but only computes the loss for the model’s predictions on the main graph (no multi-scale loss). ## 5 Conclusion and Future Work In this work, we introduce a novel hierarchical GNN for LV landmark detection. The model performs better than the state-of-the-art on most measurements without relying on label smoothing. We attribute this gain in performance to two main contributions. First, our choice of representing each frame with a hierarchical graph has facilitated direct interaction between pixels at differing scales. This approach is effective in capturing the nuanced dependencies amongst the landmarks, bolstering the model's performance. Secondly, the implementation of a multi-scale objective function as a supervisory mechanism has enabled the model to construct a superior inductive bias. This approach allows the model to leverage simpler tasks to optimize its performance in the more challenging pixel-level landmark detection task. For future work, we believe that the scalability of the framework for higher-resolution images must be studied. Additionally, extension of the model to video data can be considered since the concept of intra-scale and inter-scale edges connecting nodes could be extrapolated to include temporal edges linking similar spatial locations across frames. Such an approach could greatly enhance the model's performance in unlabeled frames, mainly through the enforcement of consistency in predictions from frame to frame.
2310.11270
Graph Neural Networks for Recommendation: Reproducibility, Graph Topology, and Node Representation
Graph neural networks (GNNs) have gained prominence in recommendation systems in recent years. By representing the user-item matrix as a bipartite and undirected graph, GNNs have demonstrated their potential to capture short- and long-distance user-item interactions, thereby learning more accurate preference patterns than traditional recommendation approaches. In contrast to previous tutorials on the same topic, this tutorial aims to present and examine three key aspects that characterize GNNs for recommendation: (i) the reproducibility of state-of-the-art approaches, (ii) the potential impact of graph topological characteristics on the performance of these models, and (iii) strategies for learning node representations when training features from scratch or utilizing pre-trained embeddings as additional item information (e.g., multimodal features). The goal is to provide three novel theoretical and practical perspectives on the field, currently subject to debate in graph learning but long been overlooked in the context of recommendation systems.
Daniele Malitesta, Claudio Pomo, Tommaso Di Noia
2023-10-17T13:42:32Z
http://arxiv.org/abs/2310.11270v3
# Graph Neural Networks for Recommendation: Reproducibility, Graph Topology, and Node Representation

###### Abstract

Graph neural networks (GNNs) have gained prominence in recommendation systems in recent years. By representing the user-item matrix as a bipartite and undirected graph, GNNs have demonstrated their potential to capture short- and long-distance user-item interactions, thereby learning more accurate preference patterns than traditional recommendation approaches. In contrast to previous tutorials on the same topic, this tutorial aims to present and examine three key aspects that characterize GNNs for recommendation: (i) the reproducibility of state-of-the-art approaches, (ii) the potential impact of graph topological characteristics on the performance of these models, and (iii) strategies for learning node representations when training features from scratch or utilizing pre-trained embeddings as additional item information (e.g., multimodal features). The goal is to provide three novel theoretical and practical perspectives on the field, currently subject to debate in graph learning but long overlooked in the context of recommendation systems.

## 1 Learning objectives

With the current tutorial, we plan to cover both theoretical and practical aspects of GNNs-based recommendation. First, we investigate the current challenges in experimentally reproducing the results of state-of-the-art approaches, and compare their performance to that of other, shallower models for recommendation (with unexpected outcomes) [1; 2]. For the experimental study, we adopt Elliot [3], our framework for the rigorous reproducibility and evaluation of recommender systems. Indeed, such an investigation paves the way to understanding whether, how, and why topological graph properties (conceptually related to node degree) may influence the performance of GNNs-based recommendation systems [4]. Finally, the tutorial delves into the representation strategies for node embeddings [5; 6]; while most techniques learn node representations from scratch, our focus is on the adoption of pre-trained item side information (such as multimodal features [7]), analysing the main experimental implications of such a design choice [8].

## 2 Tutorial schedule

November 30, 2023 (5pm-8pm GMT), Online. 
Total tutorial duration: 180 minutes

**Introduction and background** (Tommaso Di Noia) (20 minutes)
* Introduction and motivations of the tutorial (5 minutes)
* Basic concepts of recommender systems & GNNs-based recommendation (15 minutes)

**Reproducibility** (Claudio Pomo) (60 minutes)
* **[Hands-on #1]** Implementation and reproducibility of GNNs-based recsys in Elliot with PyG, and reproducibility issues (35 minutes)
* Performance comparison of GNNs-based approaches to traditional recommendation systems (25 minutes)

**Break and Q&A** (15 minutes)

**Graph topology** (30 minutes)
* Concepts and formulations of graph topological properties of the user-item graph (Tommaso Di Noia) (15 minutes)
* Impact of topological graph properties on the performance of GNNs-based recommender systems (Daniele Malitesta) (15 minutes)

**Node representation** (Daniele Malitesta) (45 minutes)
* Design choices to train node embeddings from scratch (20 minutes)
* **[Hands-on #2]** Leveraging item side information (e.g., multimodal features) to represent node embeddings (25 minutes)

**Closing remarks and Q&A** (10 minutes)
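For attendees who want to preview Hands-on #1: Elliot experiments are configuration-driven, so a run typically boils down to pointing the framework at a YAML file that declares the dataset, splitting strategy, models (including GNNs-based ones), and evaluation metrics. A minimal sketch (the configuration filename is hypothetical):

```python
from elliot.run import run_experiment

# Launches the full train/evaluate pipeline declared in the YAML config
run_experiment("config_files/gnn_recsys_experiment.yml")
```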
**Other related tutorials.** Scalable Graph Neural Networks with Deep Graph Library (KDD 2020); Deep Graph Learning: Foundations, Advances and Applications (KDD 2020); Learning Graph Neural Networks with Deep Graph Library (The Web Conf 2020); Graph Representation Learning: Foundations, Methods, Applications and Systems (KDD 2021); Advanced Deep Graph Learning: Deeper, Faster, Robuster, Unsupervised (The Web Conf 2021); Learning from Graphs: From Mathematical Principles to Practical Tools (The Web Conf 2021); Scalable Graph Neural Networks with Deep Graph Library (WSDM 2021); Frontiers of Graph Neural Networks with DIG (KDD 2022); Graph Neural Networks: Foundation, Frontiers and Applications (KDD 2022); Graph Neural Networks: Foundation, Frontiers and Applications (The Web Conf 2023); Self-supervised Learning and Pre-training on Graphs (The Web Conf 2023); When Sparse Meets Dense: Learning Advanced Graph Neural Networks with DGL-Sparse Package (The Web Conf 2023). ## 5 Useful materials All useful materials are available at the tutorial's website: [https://sisinflab.github.io/tutorial-gnns-recsys-log2023/](https://sisinflab.github.io/tutorial-gnns-recsys-log2023/), along with the GitHub repository: [https://github.com/sisinflab/Log-2023-GNNs-RecSys](https://github.com/sisinflab/Log-2023-GNNs-RecSys). ## 6 Tutorial speakers **Daniele Malitesta** is a PhD candidate at the Polytechnic University of Bari (Italy). During his research career so far, he has been studying and developing recommendation algorithms leveraging side information, with a specific focus on graph- and multimodal-based recommender systems. He has published at top-tier conferences, such as SIGIR, ECIR, RecSys, and MM, and has served as a reviewer at SIGIR 2023, RecSys 2023 (outstanding reviewer), NeurIPS 2023, LoG 2022-2023, ICLR 2024, and ECIR 2024. He is one of the organizers of the First International Workshop on Graph-Based Approaches in Information Retrieval (IRonGraphs), co-located with ECIR 2024. He is among the developers of Elliot, a framework for the rigorous evaluation and reproducibility of recommender systems, where he contributed with the implementation of more than 15 algorithms from the state-of-the-art. Recently, he has visited Dr. Pasquale Minervini at the University of Edinburgh as part of the internship period of his PhD. **Claudio Pomo** is a research fellow at the Polytechnic University of Bari in Italy, where he obtained his doctorate in computer engineering. His research focuses on responsible AI for personalization, with a particular emphasis on reproducibility of results and multi-objective performance evaluation. Claudio has made significant contributions in these areas, with his work being accepted at prominent conferences such as SIGIR, RecSys, ECIR, UMAP, and in journals including Information Science and IPM. He has also actively participated in the academic community, serving as a reviewer for conferences like SIGIR, RecSys, NeurIPS, WSDM, ECIR, and UMAP. Claudio delivered a tutorial at RecSys21 titled "Pursuing Privacy in Recommender Systems: the View of Users and Researchers from Regulations to Applications." More recently, he co-organized a workshop at KDD 2023 focused on recommender system evaluation, known as EvalRS. Claudio is also one of the authors and contributors to Elliot, a framework designed to assess the rigorous evaluation and reproducibility of recommender systems. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Title** & **Venue** & **Website** & **Slides** & **Video** \\ \hline Learning and Reasoning on Graph for Recommendation [9] & WSDM 2020 & link & link & ✗ \\ \hline Graph-based Representation Learning for Web-scale Recommender Systems [10] & KDD 2022 & link & link & ✗ \\ \hline Graph Neural Networks for Recommender System [11] & WSDM 2022 & link & link & link \\ \hline Tutorial on User Profiling with Graph Neural Networks and Related Beyond-Accuracy Perspectives [12] & UMAP 2023 & link & link & ✗ \\ \hline Leveraging Graph Neural Networks for User Profiling: Recent Advances and Open Challenges [13] & CIKM 2023 & link & link & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Tutorials about graph-based recommendation from 2020 to 2023.

**Tommaso Di Noia** is Professor of Computer Science at the Polytechnic University of Bari (Italy). His research activities focus on AI and Data Management. They were initially devoted to knowledge representation and automated reasoning; he then studied how to apply knowledge representation techniques to automated negotiation. Following these ideas, he has devoted his interest to applying knowledge graphs and Linked Open Data to RSs, with papers published in international journals, conferences, and book chapters. In recent years, he has moved his research toward Trustworthy AI, with a particular interest in adversarial ML, explainability, fairness, and privacy protection of RSs. He is serving as program co-chair of RecSys 2023 and general co-chair of RecSys 2024.

## 7 Intended audience and level

The proposed tutorial deals with both intermediate and advanced theoretical/practical topics spanning recommendation, graph representation learning, reproducibility in machine learning, graph topology, and multimodal learning. For these reasons, the tutorial should ideally be of interest to a wide range of researchers and practitioners working on (even a subset of) such aspects. Programming knowledge of Python and PyTorch (with a specific focus on PyTorch Geometric) would be a good-to-have skill, even though the tutorial will guide the attendees step by step, especially during the hands-on sessions. Given the virtual nature of the tutorial, we do not set a strong requirement for the maximum number of participants (ideally up to 200).
2302.11707
A Deep Neural Network Based Approach to Building Budget-Constrained Models for Big Data Analysis
Deep learning approaches require collection of data on many different input features or variables for accurate model training and prediction. Since data collection on input features could be costly, it is crucial to reduce the cost by selecting a subset of features and developing a budget-constrained model (BCM). In this paper, we introduce an approach to eliminating less important features for big data analysis using Deep Neural Networks (DNNs). Once a DNN model has been developed, we identify the weak links and weak neurons, and remove some input features to bring the model cost within a given budget. The experimental results show our approach is feasible and supports user selection of a suitable BCM within a given budget.
Rui Ming, Haiping Xu, Shannon E. Gibbs, Donghui Yan, Ming Shao
2023-02-23T00:00:32Z
http://arxiv.org/abs/2302.11707v1
# A Deep Neural Network Based Approach to Building Budget-Constrained Models for Big Data Analysis

###### Abstract

_Deep learning approaches require collection of data on many different input features or variables for accurate model training and prediction. Since data collection on input features could be costly, it is crucial to reduce the cost by selecting a subset of features and developing a budget-constrained model (BCM). In this paper, we introduce an approach to eliminating less important features for big data analysis using Deep Neural Networks (DNNs). Once a DNN model has been developed, we identify the weak links and weak neurons, and remove some input features to bring the model cost within a given budget. The experimental results show our approach is feasible and supports user selection of a suitable BCM within a given budget._

_Keywords: Deep learning, big data analysis, budget-constrained model, input feature, deep neural network_

## I Introduction

With the emergence of big data, large-scale data-driven machine learning becomes increasingly important. Deep learning, also called deep structured learning, is a subfield of machine learning based on artificial neural networks (ANNs). A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers. There are many different types of DNNs, e.g., the feedforward deep neural network (FF-DNN), recurrent neural network (RNN), and convolutional neural network (CNN), all of which follow similar procedures for training and testing [1]. Deep learning approaches have been very successful in recent years for processing big data from sources such as social media, Internet search engines, e-commerce platforms, and healthcare systems. Successful deep learning mechanisms require collecting a large amount of data, or purchasing data from a third-party vendor, on many different input features or variables in order to develop feasible and accurate models for classification and prediction. However, data collection on input features could be very expensive and time-consuming. Such cost may also include preprocessing, maintenance, and storage of the data associated with the input features. For example, a recommendation system of a major e-commerce application using deep learning would require storing millions of user access records per month. Dozens of features, such as the amount of time a user views a certain item and the other items that are also viewed, would be recorded for each user access. The preprocessing of such data and the costs associated with its storage, transmission, and maintenance can be remarkably high. Similarly, in a deep learning application that determines when a cruise ship needs to be maintained, a huge amount of data on the status measurements and usage statistics of the different system components of the cruise ship would also be required. As one more example, in a healthcare application using deep learning, various medical test data, such as blood pressure, cholesterol levels, and heart rates, need to be collected to develop an accurate medical diagnosis model for the detection of certain diseases. The cost associated with input features includes the cost to collect the training and testing data as well as the cost of collecting a new data point for classification or prediction purposes. In this study, we assume there are existing training and testing datasets for building a deep learning model.
Therefore, we can focus on the total cost of collecting a new data point on all required features of the model. We call the cost of collecting a new data point the _model cost_. Note that a high model cost would also imply a high cost of acquiring the needed datasets for model training and testing. Practically, there are always limits to the budgets in deep learning applications. Due to the budget constraints, we must limit the number of features used in a model, while keeping the model accuracy high enough. In our approach, we reduce the model cost by selecting a subset of the most important features and deriving a reasonable model within a certain budget. In other words, with a given budget, we need to eliminate the least important features to ensure the model cost is lower than the budget. Since removing features typically reduces the accuracy of the model, our approach must deliver a budget-constrained model (BCM) with a reasonable accuracy. In previous work, we proposed several ways to select a set of features under a certain cost profile [2]. In this paper, we focus on deep learning methods and introduce a DNN-based approach to identifying the least important features of a DNN, subject to a given budget. Instead of deriving a single BCM, we produce a list of BCMs with expected predictive accuracies, sorted by predefined budget levels. This could be used to choose a BCM with the best predictive accuracy under a given budget, or to allow a user to better trade off between budget and model accuracy.

## II Related Work

There have been many research efforts on big data analytics using deep learning approaches. Deep architectures such as DNNs can often capture hierarchical and complex patterns of the inputs for more effective analysis of big data than traditional statistical learning methods. For example, the "Google Brain" project has used large DNNs with about one million simulated neurons and one billion simulated connections to leverage big data for image enhancement, language translation, and robotics research [3]. Esteva et al. presented deep learning techniques using DNNs for medical imaging, electronic health record data processing, and robotic-assisted surgery in the healthcare domain [4]. They also demonstrated the application of deep learning in bioinformatics, e.g., building a deep learning system in genomics to convert raw data into input data tensors, processed by DNNs for specific biomedical applications. Xu and Gade presented a systematic approach to designing a layered knowledge graph that can be converted into a structured DNN [5]. The structured DNN model has been used for smart real estate assessments, where it outperforms conventional multivariate linear regression methods as well as prediction mechanisms used by leading real estate companies such as Zillow and Redfin. Most deep learning approaches assume the availability of the required datasets for predictive analysis without considering the cost associated with data collection. In contrast, our approach aims to derive budget-constrained models by eliminating the least important features. Previous work related to cost-sensitive learning is summarized as follows. Elkan showed that the proportion of negative examples in a training set would affect the optimal cost-sensitive classification decisions for problems with differing misclassification costs [6]. He recommended first developing a classifier and then using the probability estimates calculated from the classifier to compute optimal decisions.
Sheng and Ling proposed a method to select a proper threshold that produces the lowest misclassification cost [7]. The experimental results showed that thresholding, as a general method to develop a cost-sensitive algorithm, is the least sensitive to the misclassification cost ratio. O'Brien et al. analyzed the relationship between systematic errors in the class probability estimates and cost matrices for multiclass classification [8]. They explored the effect of the cost matrix on the class partitioning and demonstrated the effectiveness of learning a new partition matrix. Zhou et al. proposed a method to select features by probabilities that are inversely proportional to their costs [9]. They constructed a decision tree with feature costs and used a random-forest-based feature selection algorithm to produce low-cost feature subsets. Ji and Carin presented a formal definition of the cost-sensitive classification problem and provided a solution using a partially observable Markov decision process (POMDP) [10]. Different from traditional approaches, features were selected in a sequential manner until no additional feature acquisition could be justified based on classification results. More recently, Maliah and Shani formulated the cost-sensitive classification problem as a POMDP, taking both test and misclassification costs into consideration [11]. They used a tree-based MDP approach to modeling a belief space and provided a scalable method for reasoning about future actions. Frumosu et al. proposed a method to reduce production cost by predicting the number of faulty products while ensuring production quality delivery [12]. They reduced the problem to an imbalanced binary classification problem and solved it using Voronoi diagrams and the genetic algorithm. The above cost-sensitive learning approaches provide useful methods to reduce test and misclassification costs; however, they do not aim to provide users with model options that meet budget constraints. In addition, most existing cost-sensitive learning approaches are not deep learning approaches, which intrinsically limits them in dealing with large datasets and complex problems such as medical diagnosis. In previous work [2], Yan et al. approached the problem of budget-constrained learning in terms of variable costs. They explored the solution space to produce a model schedule as a list of models, sorted by model costs and expected predictive accuracy. Based on this work, we further propose a deep-learning-based approach to building budget-constrained models using deep neural networks. In this sense, our approach complements existing cost-sensitive learning approaches that are suitable for applications not involving large amounts of data, and provides a scalable solution to complex problems, such as cybersecurity, fraud detection, and medical diagnosis.

## III Model Cost and Budget-Constrained Models

Deep learning has been widely used in various fields such as medical diagnosis, autonomous driving, and mathematics education. DNNs are a type of deep learning method widely adopted in big data analytics and large-scale data-driven applications. Since DNN-based approaches have shown ground-breaking results in speech recognition and image recognition tasks in recent years, the number of applications using DNNs has exploded. In this paper, we demonstrate our deep learning approach using the FF-DNN, a simple type of DNN, to build BCMs for big data analysis.
### _Model Cost of a Deep Neural Network_

The FF-DNN model is usually treated as a "black box"; however, it is undeniable that every neuron in a hidden layer of a FF-DNN has certain significance or hidden semantics, and different neurons have different effects on the outputs of the model [5]. To a certain extent, the absolute weight value of a link in a neural network represents the impact of the source neuron on the target neuron. Such impact may pass through the layers of the neural network and influence the results of the output neurons. When an input neuron has the least impact on the results of the output neurons, its corresponding feature becomes a candidate to be removed from the model with minimal impact on the model accuracy. To adopt a well-trained deep learning model for prediction or classification, we need to collect data on a set of input features. For example, the set of input features to determine if a patient has a certain heart disease may include measures such as "blood pressure", "heart rate", "fasting blood sugar", "age", and "gender". The collection, purchase, and storage of data on different features may incur different feature costs. Let \(F\) be the set of all measurable features in a certain domain, where \(|F|=m\). Let \(\mathbf{f}=\{f_{1},f_{2},\ldots,f_{K}\}\) be the set of input features of a model \(\mathcal{O}(\mathbf{f})\), which uses a total of \(K\) measurable features; thus, \(\mathbf{f}\subseteq F\) and \(K\leq m\). Let function \(c:F\mapsto\mathbb{Z}^{+}\) be a mapping from feature \(f\in F\) to the cost of measuring feature \(f\). To simplify matters, we assume a feature cost is a nonnegative integer from the set \(\mathbb{Z}^{+}\). We define the model cost of \(\mathcal{O}(\mathbf{f})\) as the summation of all feature costs as in Eq. (1). \[C\big{(}\mathcal{O}(\mathbf{f})\big{)}=\sum_{i=1}^{K}c(f_{i})\quad\text{where}\ f_{i}\in\mathbf{f}\text{ and }|\mathbf{f}|=K \tag{1}\] Given a budget level \(b\), we need to find a set of features \(\mathbf{f}\subseteq F\), such that the model cost of \(\mathcal{O}(\mathbf{f})\) is no more than \(b\), and \(\mathcal{O}(\mathbf{f})\) has the best predictive accuracy. That is, we solve the optimization problem defined in Eq. (2). \[\arg\max_{\mathbf{f}\subseteq F}accuracy\big{(}\mathcal{O}(\mathbf{f})\big{)}\ \ \text{subject to}\ \ C\big{(}\mathcal{O}(\mathbf{f})\big{)}\leq b \tag{2}\] In our DNN-based approach, we start with all measurable features and a list of predefined budget levels. We gradually remove the least important input features until the model costs are within the budgets. For each budget level, once the least important features are removed, the remaining features form a new set of inputs for the development of a new classifier. It is expected that the new model will be less accurate as the number of input features decreases; however, the costs of collecting data for training and prediction can be significantly reduced.

### Budget-Constrained Models

In the context of FF-DNN, a budget-constrained model or BCM \(\mathcal{O}(\mathbf{f})\) is defined as a 4-tuple \((S,\mathbf{f},\mathbf{w},p)\), where \(S\) is the structure of the FF-DNN, \(\mathbf{f}\) is the set of input features that correspond to the set of input neurons in \(\mathcal{O}(\mathbf{f})\), \(\mathbf{w}\) is the weights of the links in \(\mathcal{O}(\mathbf{f})\), and \(p\) is the expected accuracy of the model.
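To make the cost model concrete, the following is a minimal Python sketch of the model cost in Eq. (1) and the budget check in Eq. (2); the feature names and cost values are hypothetical and chosen only for illustration.

```python
# A minimal sketch of the model cost in Eq. (1) and the budget check in
# Eq. (2); the feature names and cost values below are hypothetical.

def model_cost(features, cost):
    """Sum the per-feature costs c(f) over the model's feature set f."""
    return sum(cost[f] for f in features)

# Hypothetical cost profile c : F -> Z+ for a heart disease model.
cost = {"blood_pressure": 200, "heart_rate": 150,
        "fasting_blood_sugar": 250, "age": 0, "gender": 0}

features = {"blood_pressure", "heart_rate", "age"}
budget = 400

print(model_cost(features, cost))            # 350
print(model_cost(features, cost) <= budget)  # True: within budget, as in Eq. (2)
```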
Note that in this paper, \(S\) is defined as a fully connected DNN (FCDNN), while using partially connected DNNs (PCDNNs) is envisioned as a future and more ambitious research direction. Given a list of budget levels \(B=(b_{1},b_{2},\ldots,b_{n})\) and a set of measurable features \(F=\{f_{1},f_{2},\ldots,f_{m}\}\), our task is to build a list of BCMs \(\mathcal{O}_{1},\ldots,\mathcal{O}_{n}\), where each \(\mathcal{O}_{i}\), \(1\leq i\leq n\), has an identified subset of features from \(F\) such that the model satisfies Eq. (2) for budget \(b_{i}\). Table I shows an example of a list of BCMs with their expected model accuracies and lists of features, sorted by budget levels. With a given budget and a required accuracy, we can find the most suitable model from the table. For example, if the given budget is 1750, and the required model accuracy is 0.94, we shall choose the BCM with the set of features {1,4,5,8,10}, whose model cost of 1500 is less than 1750.

### A Framework for Building a Budget-Constrained Model

With sufficient training and testing datasets, our approach aims to develop a BCM with its model cost within a given budget level. The framework for generating a BCM under a given budget is illustrated in Fig. 1.

Fig. 1: A framework for generating a BCM under a given budget.

For any raw data, whether captured by measurement or purchased from a third-party vendor, there is typically a lot of unnecessary information. We first need to preprocess the data and retrieve the needed fields in a desired format. Since data points with missing or wrong information could negatively affect the training and testing results, such data points must be fixed or considered as outliers to be removed from the dataset. The dataset is then split into a training dataset and a testing dataset. Note that to simplify Fig. 1, we do not show the process of partitioning the dataset into \(k\) equal-sized subsamples for the \(k\)-fold cross-validation purpose. We extract all the features from the dataset to build the first model. After the model is fully trained, we check if the model cost is higher than the given budget. If the answer is yes, we find the least important feature, remove it using an algorithm described in Section IV, and create a new model using the remaining features. This procedure is repeated until the model cost becomes less than or equal to the given budget. In this case, the testing dataset is used to calculate the expected model accuracy. Finally, the 4-tuple \((S,\mathbf{f},\mathbf{w},p)\), i.e., the structure \(S\) of the FF-DNN, the set of input features \(\mathbf{f}\), the weights \(\mathbf{w}\) of the links in the model, and the expected accuracy \(p\) of the model, is recorded as the resulting BCM for the given budget.

## IV Generation of Budget-Constrained Models

### Identifying the Least Important Input Feature

In our DNN-based approach, we define a set of thresholds for the links of the neural network, designed to identify and eliminate the weak links. When a neuron's output links are all identified as weak links, the neuron is considered to have minimal impact on the output, and thus, it is considered a weak neuron. Our approach starts with the last hidden layer, which most directly affects the output neurons, and then works backward to determine the weak links and weak neurons. The procedure repeats until we find weak input neurons, whose corresponding features become candidates to be removed from the model. We now use a few examples to show how to identify weak links and weak neurons.
Since multiple source neurons link to a target neuron, the weight of each link represents its impact on the target neuron. The higher the absolute value of a weight, the higher the impact a source neuron has on the target neuron. We can identify the weak links by setting a threshold for each target neuron of a link. For example, in Fig. 2 (a), the link threshold \(t_{n}\) for target neuron \(n\) is set to 0.3. Consequently, the link from source neuron \(n_{1}\) to \(n\) is marked as a weak link (denoted by a dashed line), as the weight of the link is 0.1, which is less than \(t_{n}\). On the other hand, a source neuron links to multiple target neurons. If the links coming from a source neuron have all been identified as weak links, that source neuron is marked as a weak neuron. For example, in Fig. 2 (b), all links coming from source neuron \(n\) are weak links because their link weights are less than their corresponding thresholds; thus, neuron \(n\) is marked as a weak neuron, denoted by a dashed circle. If a neuron is identified as a weak neuron, its impact on the outputs of the neural network is considered minimal. Therefore, all links connecting to that weak neuron are considered weak links, because if we remove the weak neuron from the DNN, all its incoming links will also be removed. Fig. 3 shows such an example with neuron \(n_{5}\) being a weak neuron. As shown in Fig. 3, since neuron \(n_{5}\) is a weak neuron, neurons \(n_{1}\) to \(n_{4}\) would have little impact on the outputs of the neural network through their links to \(n_{5}\); therefore, we can reasonably mark links \(l_{1}\), \(l_{2}\), \(l_{3}\) and \(l_{4}\) as weak links. That is, a link is marked as a weak link in either of the following two cases: 1) its weight is less than the threshold, or 2) its target neuron is a weak neuron. Fig. 4 presents an example of a FF-DNN model with four layers, including two hidden layers. There are three input neurons \(n_{11}\), \(n_{12}\) and \(n_{13}\), which correspond to three input features. All neurons except the input neurons have been assigned thresholds. Note that the thresholds for the neurons can be different, and each threshold of a neuron \(n\) is initialized based on the weights of all links that connect to neuron \(n\). As described later in this section, the thresholds need to be adjusted if no weak input neuron can be identified. The steps to identify weak links and weak neurons of the neural network in Fig. 4 are illustrated in Fig. 5. From the figure, we can see the process starts with the last hidden layer and works backward to the input layer. For example, in Layer 3, since neuron \(n_{31}\) contains only one output link, which is a weak link, it is marked as a weak neuron. Similarly, in Layer 2, since neuron \(n_{21}\) contains only links that are either weak or connect to a weak neuron, neuron \(n_{21}\) is marked as a weak neuron as well. Finally, in Layer 1, two input neurons \(n_{11}\) and \(n_{12}\) are identified as weak neurons; thus, their corresponding input features are candidate features to be removed from the model. It is worth noting that, in our approach, when more than one weak input neuron is identified, the least important feature is considered to be the one having the highest feature cost, thereby minimizing the model cost.

Fig. 2: Examples of (a) a weak link and (b) a weak neuron.
Fig. 3: An example of weak links connecting to a weak neuron.
Fig. 4: An example of a FF-DNN model.
Fig. 5: The steps to identify weak links and weak neurons.
The procedure of finding the least important feature is shown in Algorithm 1. As described in the algorithm, given a FF-DNN \(\mathcal{O}(\mathbf{f})\) with \(L\) layers, all neurons and links in \(\mathcal{O}(\mathbf{f})\) are initially considered strong. A very low initial threshold \(\delta_{n}\) is set for each neuron \(n\), except for the input layer, based on the weights of the input links to neuron \(n\). Starting from the last hidden layer \(l_{L-1}\), all the weak neurons and weak links are marked in a backward manner. To ensure that the input layer contains at least one weak neuron, the value of each threshold can be increased gradually. Finally, a weak neuron in the input layer is selected, and its corresponding input feature is identified as the least important feature \(f^{*}\).

```
 1. Let all neurons/links in \(\mathcal{O}(\mathbf{f})\) be strong neurons/links
 2. Let \(f^{*}\) be the least important feature, initialized to null
 3. for \(i=2\) to \(L\)
 4.     for each neuron \(n\) in layer \(l_{i}\)
 5.         Initialize the threshold \(t_{n}\) of neuron \(n\) with a small value \(\delta_{n}\)
 6. while \(f^{*}\) is null
 7.     for \(i=L-1\) to \(1\)    // identify weak links/neurons backward
 8.         for each target neuron \(\beta\) in \(l_{i+1}\)
 9.             for each source neuron \(\alpha\) in \(l_{i}\)
10.                 Let \(w_{\gamma}\) be the weight of the link \(\gamma\) from \(\alpha\) to \(\beta\)
11.                 if \(|w_{\gamma}|<t_{\beta}\) or \(\beta\) is a weak neuron
12.                     Mark link \(\gamma\) as a weak link
13.         for each source neuron \(\alpha\) in \(l_{i}\)
14.             if all links from source neuron \(\alpha\) are weak links
15.                 Mark source neuron \(\alpha\) as a weak neuron
16.     if there is no weak neuron in input layer \(l_{1}\)
17.         Double the threshold \(t_{n}\) of each target neuron \(n\) in \(\mathcal{O}(\mathbf{f})\)
18.     else    // there are one or more weak input neurons
19.         Select a weak neuron \(\alpha^{*}\) in \(l_{1}\) with the highest feature cost
20.         Set \(f^{*}\) to the input feature corresponding to \(\alpha^{*}\)
21. return \(f^{*}\)
```
**Algorithm 1** Finding the Least Important Feature
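As an illustration, below is a compact Python sketch of Algorithm 1 under simplifying assumptions: the trained FF-DNN is given as a list of NumPy weight matrices, one per pair of adjacent layers, and the thresholds are initialized per target neuron as a small fraction of its mean incoming absolute weight and doubled until a weak input neuron appears. This is a sketch of the idea, not the paper's implementation.

```python
import numpy as np

def least_important_feature(weights, feature_cost, init_scale=0.05):
    """Sketch of Algorithm 1. `weights[i]` is the |l_i| x |l_{i+1}| weight
    matrix of a trained FF-DNN; `feature_cost[a]` is the cost of the input
    feature of input neuron a. Returns the index of the feature to remove."""
    # One threshold per target neuron, initialized from its incoming weights.
    thresholds = [init_scale * np.abs(W).mean(axis=0) for W in weights]
    while True:
        # weak[i][a] == True means neuron a in layer i is weak; output
        # neurons are never marked weak.
        weak = [np.zeros(W.shape[0], dtype=bool) for W in weights]
        weak.append(np.zeros(weights[-1].shape[1], dtype=bool))
        for i in range(len(weights) - 1, -1, -1):   # backward pass
            # A link is weak if |w| < t_beta or its target neuron is weak.
            weak_links = (np.abs(weights[i]) < thresholds[i]) | weak[i + 1]
            # A neuron is weak if all of its outgoing links are weak.
            weak[i] = weak_links.all(axis=1)
        if weak[0].any():
            # Among weak input neurons, drop the one with the highest cost.
            return max(np.flatnonzero(weak[0]), key=lambda a: feature_cost[a])
        thresholds = [2.0 * t for t in thresholds]  # relax thresholds, retry
```

Because the thresholds are doubled whenever no weak input neuron is found, the loop always terminates: eventually every link falls below its threshold.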
### Generating a FF-DNN based BCM

Once we are able to identify the least important input feature in our FF-DNN based deep learning approach, we can generate a FF-DNN model that satisfies a budget requirement. Let the given budget be \(b\). We develop a FF-DNN model that satisfies the requirements described in Eq. (2). This may require going through a number of steps to remove more than one input feature to meet the budget requirement. Each time the least important feature is removed, we build a new FF-DNN model and train it using the same datasets. It is expected that the new model is less accurate than its previous version, as the number of input features decreases. With the trained new model, we identify the least important feature again, until the budget requirement is met. Algorithm 2 shows the procedure to generate a BCM given a budget level \(b\), a dataset \(D\) with a set of features \(F\), and the model cost function \(C(\mathcal{O}(\mathbf{f}))\). To make the model cost \(C(\mathcal{O}(\mathbf{f}))\leq b\), starting from \(\mathbf{f}=F\), the method gradually removes the least important feature using Algorithm 1. Finally, the model \(\mathcal{O}(\mathbf{f})\) is created and trained on the \(\mathbf{f}\) that satisfies \(C(\mathcal{O}(\mathbf{f}))\leq b\), and the corresponding 4-tuple \((S,\mathbf{f},\mathbf{w},p)\) representing BCM \(\mathcal{O}(\mathbf{f})\) is returned as the result.

```
 1. Let \(\mathbf{f}\) be the set of measurable features \(F\)
 2. Randomly partition \(D\) into \(k\) equal-sized subsamples
 3. while \(C(\mathcal{O}(\mathbf{f}))>b\)    // model cost is greater than the given budget
 4.     Create a FF-DNN \(\mathcal{O}(\mathbf{f})\) with the set of features \(\mathbf{f}\)
 5.     Train and test \(\mathcal{O}(\mathbf{f})\) with dataset \(D\) using \(k\)-fold cross-validation
 6.     Invoke Algorithm 1, and let \(f^{*}\) be the least important feature
 7.     Remove feature \(f^{*}\) from \(\mathbf{f}\)
 8. Create a FF-DNN \(\mathcal{O}(\mathbf{f})\) with the set of features \(\mathbf{f}\)
 9. Train and test \(\mathcal{O}(\mathbf{f})\) with \(D\), and save weights \(\mathbf{w}\) and accuracy \(p\)
10. Let \(S\) be the structure of FF-DNN \(\mathcal{O}(\mathbf{f})\)
11. return the 4-tuple \((S,\mathbf{f},\mathbf{w},p)\)
```
**Algorithm 2** Generating a BCM Under a Given Budget

### Generating a List of BCMs

Developing a deep learning model under a specific budget may result in failing to achieve the required predictive accuracy, or in wasting money on unnecessary features. For example, a low budget for a deep learning model adopted in a cardiac diagnosis application may allow only a limited number of features, which could make the prediction accuracy less than 60%. Such an application is obviously not marketable. On the other hand, suppose a vehicle routing simulation application has already achieved close to 100% prediction accuracy with a reasonable model cost. If we continue to improve the model with more features under a higher budget, the predictive accuracy cannot improve significantly, and money will inevitably be wasted. To avoid the above undesirable situations, users shall be allowed to trade off between various budget levels and the required predictive accuracy for a suitable cost-effective deep learning model. Algorithm 3 shows the procedure to generate a list of BCMs \(L_{BCM}\), given a maximum budget \(b_{max}\), a distance \(d\) between two consecutive budget levels, and a minimum required accuracy \(p_{min}\). Each generated BCM satisfies the minimal accuracy requirement as well as its corresponding budget requirement.

```
 1. Let \(L_{BCM}\) be a list of BCMs, initialized to an empty list
 2. Let \(b\) be a budget level, initialized to \(b_{max}\)
 3. while \(b>0\)    // a budget level should always be greater than 0
 4.     Invoke Algorithm 2, and let \((S,\mathbf{f},\mathbf{w},p)\) be the 4-tuple representing BCM \(\mathcal{O}(\mathbf{f})\) with \(C(\mathcal{O}(\mathbf{f}))\leq b\)
 5.     if \(p\geq p_{min}\)    // the expected model accuracy is no less than \(p_{min}\)
 6.         Add the 4-tuple \((S,\mathbf{f},\mathbf{w},p)\) into \(L_{BCM}\)
 7.     else return \(L_{BCM}\)
 8.     \(b=b-d\)
 9. return \(L_{BCM}\)
```
**Algorithm 3** Generating a List of BCMs
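A hedged Python sketch of the budget loop in Algorithms 2 and 3 follows; `train_and_eval` and `least_important_feature` (Algorithm 1) stand in for the actual training and feature-ranking routines, and all names here are illustrative assumptions rather than the paper's code.

```python
# A sketch of Algorithms 2-3. `train_and_eval(f)` is an assumed callback
# that trains and k-fold cross-validates a FF-DNN on feature set f and
# returns (model, accuracy); `least_important_feature` follows Algorithm 1.

def generate_bcm_list(b_max, d, p_min, F, cost, train_and_eval,
                      least_important_feature):
    bcm_list, b = [], b_max
    while b > 0:
        f = set(F)
        model, p = train_and_eval(f)
        # Algorithm 2: drop features until the model cost fits budget b.
        while sum(cost[x] for x in f) > b:
            f.remove(least_important_feature(model, f, cost))
            model, p = train_and_eval(f)
        if p < p_min:      # Algorithm 3: stop once accuracy drops below p_min
            break
        bcm_list.append((model, frozenset(f), p, b))
        b -= d             # move to the next (lower) budget level
    return bcm_list
```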
## V Case Study

The major goal of our approach is to develop a list of DNN models that meet budget requirements, while keeping the predictive accuracy of each model as high as possible. To validate the feasibility and performance of our approach, we conduct experiments on two datasets from the UC Irvine Machine Learning Repository [13]. The two datasets are the _Early-stage diabetes risk prediction dataset_ (_DS1_) and the _Heart disease dataset_ (_DS2_). To facilitate the application of our approach and ensure fully trained models as well as improved model accuracy, we adopt TensorFlow [14] to develop FF-DNNs for the experiments and apply _k_-fold cross-validation to train and test the models. Each dataset is randomly divided into 10 subsets for 10-fold cross-validation. For categorical data, we use one-hot encoding to split the corresponding feature into multiple binary features in order to improve model performance. For example, the feature _Itching_ in _DS1_ is a categorical feature with a value of either "Yes" or "No", representing the presence or absence of itching symptoms, respectively. Using one-hot encoding, the feature _Itching_ can be split into two features, namely _Itching_Yes_ and _Itching_No_. If _Itching_ has value "Yes", it is replaced by the two features with _Itching_Yes_ = 1 and _Itching_No_ = 0; otherwise, we set _Itching_Yes_ = 0 and _Itching_No_ = 1.

### The Early Stage Diabetes Risk Prediction Dataset

The Early-stage diabetes risk prediction dataset includes 520 instances, collected using direct questionnaires from the patients of Sylhet Diabetes Hospital in Sylhet, Bangladesh [13]. There are 13 categorical attributes used as features, namely _Age_, _Sex_, _Polyphagia_, _Genital thrush_ (_Gthrush_), _Visual blurring_ (_Vblur_), _Itching_, _Irritability_, _Delayed healing_ (_Dheal_), _Partial paresis_ (_Par_), _Muscle stiffness_ (_Mstiff_), and _Alopecia_. Each input feature is assigned a feature ID as shown in Table II. The label of each data point is an output categorical feature, _Diabetes_, which has the value of either "Yes" or "No", indicating whether a patient has diabetes or not. The FF-DNN models that we build for this dataset have 5 hidden layers with 120 hidden neurons in each hidden layer. We set the feature costs randomly by sampling from [100, 300] uniformly, except for the costs of _Sex_ and _Age_, which are set to 0. In the following experiments, the maximum budget level \(b_{max}\) is set to 1900, which is greater than the total cost of all features in the diabetes dataset; thus, the initial BCM model shall consist of all the features with the potential maximum predictive accuracy. We set a distance \(d=200\) between two consecutive budget levels, gradually decrease the budget level, and derive the corresponding BCMs. This process stops when the predictive accuracy becomes less than the minimum required predictive accuracy \(p_{min}=0.65\), as predefined for the experiments.
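Before turning to the results, the snippet below illustrates the one-hot encoding step described at the start of this section; using pandas here is our illustrative choice, not necessarily the paper's tooling.

```python
import pandas as pd

# Illustrative one-hot encoding of the categorical feature Itching
# ("Yes"/"No") into the two binary features Itching_Yes and Itching_No.
df = pd.DataFrame({"Itching": ["Yes", "No", "Yes"]})
encoded = pd.get_dummies(df, columns=["Itching"], dtype=int)
print(encoded)
#    Itching_No  Itching_Yes
# 0           0            1
# 1           1            0
# 2           0            1
```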
Table III shows a list of BCMs generated by applying Algorithm 3, where the features are represented by the feature IDs defined in Table II. For example, the BCM \(\mathcal{O}_{8}\), with the given budget of 500 and expected predictive accuracy of 0.8173, has the set of input features {3,6,7,13}, representing the features _Visual blurring_, _Genital thrush_, _Polydipsia_, and _Age_. The list of BCMs in Table III allows a user to select an appropriate deep learning model based on the budget and accuracy requirements. For example, when the given budget is 1600 and the required accuracy is 0.94, the user shall select the BCM \(\mathcal{O}_{3}\) with the set of features {1,3,4,5,6,7,8,9,10,13}. In this case, the expected predictive accuracy is 0.9423, which is greater than the required accuracy of 0.94. However, if the required accuracy becomes 0.95, the user will have to increase the budget to 1900 and select the first model \(\mathcal{O}_{1}\), with expected predictive accuracy 0.9615, which is greater than 0.95. To demonstrate the expected performance of our approach, we compare it with two different approaches: the cost-based approach and the random selection approach. With the given features and the cost function, the cost-based approach works in accordance with the principle that it always removes the most expensive feature, to make the model cost decrease quickly, while the random selection approach randomly removes a feature each time to reduce the model cost. For each of the three approaches, we generate 10 BCMs for each budget level and select the model with the highest prediction accuracy. The highest predictive accuracy vs. model cost at each budget level is presented in Fig. 6.

Fig. 6: Predictive accuracy over predefined budget levels (_DS1_).

From Fig. 6, we can see that our approach outperforms the other two approaches at each budget level, while the cost-based approach works better than the random selection approach in most cases. Since the cost-based approach removes the most expensive feature at each step, it can remove the minimum number of features to bring the model cost below a given budget. The cost-based approach would thus generally perform better than the random selection approach, as more features can be kept for a given budget, potentially leading to higher accuracy. However, the cost-based approach may also mistakenly remove a most expensive feature that is also an important one. This is the reason why the cost-based approach cannot perform as well as our approach. Note that our approach always removes the least important feature first, which has the lowest impact on the model prediction accuracy. In the figure, all three curves intersect at budget level 1900, the reason being that all three approaches share the same FF-DNN that uses all input features, and thus have the same accuracy. In addition, we notice that the accuracies of all three methods drop sharply when the budget level becomes less than 900, whereas all three methods maintain high accuracy at budget levels above 900. This indicates that the BCM \(\mathcal{O}_{6}\) from Table III, with the budget level of 900, may be considered the most cost-effective model. To further demonstrate that our approach leads to higher model accuracy than the other two approaches,
we conduct experiments using the three approaches by removing only one feature at a time. Fig. 7 shows the comparison results among the three approaches, i.e., how accuracy changes with the number of features removed. As demonstrated in the figure, for any number of features removed, our approach consistently achieves higher model accuracy than the other two approaches.

Fig. 7: Accuracy changes with the number of features removed (_DS1_).

### The Heart Disease Dataset

The Heart disease dataset contains 76 attributes, but only 14 features are used in this experiment for demonstration purposes [13]. The 14 features include 7 categorical attributes, namely _Sex_, _Chest pain type_ (_Cp_), _Slope of the peak exercise ST segment_ (_Slope_), _Resting electrocardiographic results_ (_Restecg_), _Number of major vessels colored by fluoroscopy_ (_Ca_), _Exercise induced angina_ (_Exang_), and _Thallium stress test result_ (_Thal_), along with 6 integer attributes, namely _Age_, _Resting blood pressure_ (_Trestbps_), _Serum cholesterol in mg/dl_ (_Chol_), _Fasting blood sugar_ (_Fbs_), _Maximum heart rate achieved_ (_Thalach_), and _ST depression induced by exercise relative to rest_ (_Oldpeak_). Each input feature is assigned a feature ID as shown in Table IV. The label of each data point is an output categorical feature, _Diagnosis of heart disease_, which has the value of either "Yes" or "No", indicating whether a patient has a heart disease or not. The FF-DNN model we built for this dataset contains 3 hidden layers with 200 hidden neurons in each hidden layer. Similar to the experiments on the Early-stage diabetes risk prediction dataset, we set the maximum budget level \(b_{max}\) to 1600, the distance \(d=200\) between two consecutive budget levels, and the minimum required predictive accuracy \(p_{min}\) to 0.65. Table V shows the list of BCMs generated using our approach, with random feature costs sampled from [100, 300]. Now, we compare the performance of our approach with that of the cost-based approach and the random selection approach by generating lists of BCMs for various budget levels. The results of predictive accuracy over predefined budget levels for the three approaches are shown in Fig. 8.

Fig. 8: Predictive accuracy over predefined budget levels (_DS2_).

From Fig. 8, we can see that our approach has the highest model accuracy at all budget levels, while the cost-based approach has higher accuracy than the random selection approach at most of the budget levels. These results are consistent with those from the previous experiments on _DS1_. However, we notice that when the two features _Exang_ and _Restecg_ are removed from BCM \(\mathcal{O}_{6}\) and \(\mathcal{O}_{7}\), respectively, the predictive accuracy is not significantly changed. This is different from the situation shown in Fig. 6, where accuracy drops sharply when the budget level becomes low. Since our approach always removes the least important feature first, the features _Exang_ and _Restecg_ are supposed to be important ones; thus, removing them should result in a significant decrease of the predictive accuracy. The reason why this does not happen could be the correlations the input features may have with each other. In this particular case, the importance of the features _Exang_ and _Restecg_ may have relied on features that had already been removed, e.g., the features _Ca_ and _Fbs_ in BCM \(\mathcal{O}_{5}\).
Similar to the experiments on _DS1_, we develop models using the three approaches by removing only one feature at a time. Fig. 9 shows the comparison results among the three approaches. As shown in the figure, for any number of features removed, our approach again consistently achieves higher model accuracy than the other two approaches.

## VI Conclusions and Future Work

Big data analytics is increasingly becoming one of the trending industry practices, but it has also brought major challenges for data processing, data maintenance, and accurate prediction. One such major challenge is associated with the high cost of model features in many applications. In this paper, we introduced a DNN-based approach to developing deep learning models subject to budget constraints. Our approach gradually reduces the model cost by removing the least important feature at each step. We presented an algorithm to find weak links and weak neurons in a backward manner and identify the least important feature in a model. To support user selection of a suitable BCM under a given budget, or a trade-off between budget and predictive accuracy, we demonstrated how to generate a list of BCMs under predefined budget levels and a minimum required accuracy. Since our approach is based on deep neural networks, it is scalable and provides a promising method for big data analysis. In our current work, we performed experiments using the FF-DNN on standard datasets. In future work, we will adopt more advanced DNNs such as RNNs, further verify the performance of our approach using much larger datasets, and evaluate the computational cost of our approach. We will also look into the dependency among input features and seek a more efficient method that removes a group of highly correlated but less important features. Instead of deriving a list of BCMs, we will explore building dynamic models with mutable feature costs. This would require developing real-time classifiers, as shown in previous work [15]. Finally, we plan to build partially connected FF-DNNs under given budget levels. This could be a challenging task, because partially connected FF-DNNs are currently not supported in major deep learning tools such as TensorFlow. However, as in earlier work [5], using partially connected DNNs in our deep learning approach can simplify the computation process and lead to more efficient BCMs.
2306.14388
Nonlinear Functional Principal Component Analysis Using Neural Networks
Functional principal component analysis (FPCA) is an important technique for dimension reduction in functional data analysis (FDA). Classical FPCA method is based on the Karhunen-Lo\`{e}ve expansion, which assumes a linear structure of the observed functional data. However, the assumption may not always be satisfied, and the FPCA method can become inefficient when the data deviates from the linear assumption. In this paper, we propose a novel FPCA method that is suitable for data with a nonlinear structure by neural network approach. We construct networks that can be applied to functional data and explore the corresponding universal approximation property. The main use of our proposed nonlinear FPCA method is curve reconstruction. We conduct a simulation study to evaluate the performance of our method. The proposed method is also applied to two real-world data sets to further demonstrate its superiority.
Rou Zhong, Chunming Zhang, Jingxiao Zhang
2023-06-26T02:51:36Z
http://arxiv.org/abs/2306.14388v1
# Nonlinear Functional Principal Component Analysis Using Neural Networks

###### Abstract

Functional principal component analysis (FPCA) is an important technique for dimension reduction in functional data analysis (FDA). The classical FPCA method is based on the Karhunen-Loeve expansion, which assumes a linear structure of the observed functional data. However, the assumption may not always be satisfied, and the FPCA method can become inefficient when the data deviate from the linear assumption. In this paper, we propose a novel FPCA method that is suitable for data with a nonlinear structure by a neural network approach. We construct networks that can be applied to functional data and explore the corresponding universal approximation property. The main use of our proposed nonlinear FPCA method is curve reconstruction. We conduct a simulation study to evaluate the performance of our method. The proposed method is also applied to two real-world data sets to further demonstrate its superiority.

**Keywords**: Functional principal component analysis; Neural network; Nonlinear dimension reduction; Curve reconstruction.

## 1 Introduction

Functional data analysis (FDA) has attracted widespread attention, with the rapid development of data collection technology. There are many monographs that provide a detailed introduction to FDA, such as Ramsay and Silverman (2005), Ferraty and Vieu (2006) and Horvath and Kokoszka (2012). Functional principal component analysis (FPCA), as a dimension reduction technique, plays a greatly important role in FDA, since functional data are intrinsically infinite-dimensional. However, traditional FPCA is merely a linear approach, and the linear assumption can limit the effectiveness of dimension reduction. In this paper, we aim to develop a nonlinear FPCA method based on the use of neural networks. Recently, neural network approaches have drawn more and more attention in the field of FDA and show strong potential. Thind et al. (2023) proposed a functional neural network to handle nonlinear regression models with functional covariates and a scalar response. Further, Rao and Reimherr (2023) introduced a continuous layer in the construction of functional neural networks, so that the functional nature of the data can be maintained as long as possible. The nonlinear function-on-scalar regression model has been considered by Wu et al. (2022) with the use of neural networks. Moreover, neural network methods are also employed in classification problems for functional data, such as in Thind et al. (2020) and Wang et al. (2022). The above works all focus on supervised learning, as regression or classification issues for functional data are considered, where label variables are defined. Furthermore, unsupervised learning problems for functional data have also been studied with neural networks. Wang et al. (2021) discussed mean function estimation for functional data using neural networks. Multi-dimensional functional data are taken into account by Wang and Cao (2022), and a robust location function estimator is proposed via neural networks. Sarkar and Panaretos (2022) concentrated on covariance estimation for multi-dimensional functional data, and three types of covariance networks are defined correspondingly. Though neural networks have gained extensive interest in FDA, there are only very few studies working on nonlinear dimension reduction for functional data through neural networks.
Wang and Cao (2022) presented a functional nonlinear learning method, which is a representation learning approach for multivariate functional data and can be applied to curve reconstruction and classification. However, their method is developed based on a recurrent neural network (RNN), and thus only discrete values of the data are used in the neural network. Therefore, a nonlinear dimension reduction method by neural networks that treats the continuously observed data from a functional perspective is needed. FPCA is a crucial dimension reduction tool in FDA. A great many works have contributed to the development of FPCA in various aspects. These include, but are not limited to, the study of principal component analysis for sparsely observed functional data (Yao et al., 2005; Hall et al., 2006; Li and Hsing, 2010). Robust FPCA approaches were introduced in Locantore et al. (1999), Gervini (2008) and Zhong et al. (2022). Moreover, Chiou et al. (2014) and Happ and Greven (2018) discussed principal component analysis methods for more complex functional data, such as multivariate functional data and multi-dimensional functional data. For nonlinear FPCA, Song and Li (2021) generalized kernel principal component analysis to accommodate functional data. Currently, research on nonlinear FPCA is not yet sufficient. Nevertheless, the consideration of the nonlinear structure of functional data can be beneficial, since a more parsimonious representation can be obtained. To this end, we propose a new nonlinear functional principal component analysis method by neural networks, which can be simply denoted as nFunNN. Specifically, we borrow the idea of the autoassociative neural networks in Kramer (1991) for the construction of our networks, to realize dimension reduction and curve reconstruction. Kramer (1991) achieved the purpose of dimension reduction for multivariate data through an internal "bottleneck" layer. However, the extension to functional data is nontrivial due to the infinite-dimensional nature of functional data, which adds to the complexity of the neural networks and increases the difficulty of the optimization. For our proposed neural network, both input and output are functions. To the best of our knowledge, though neural networks with functional input have been studied in existing works, networks with both functional input and functional output have not been considered yet, and their treatment can be more complicated. B-spline basis functions are employed in our computation, and the backpropagation algorithm is applied. The simulation study and real data application show the superiority of the nFunNN method under various nonlinear settings. Moreover, we also establish the universal approximation property of our proposed nonlinear model. The contributions of this paper can be summarized as follows. First, our work is the first attempt to generalize the autoassociative neural networks to functional data settings, which is not straightforward. The use of neural networks provides a new framework for nonlinear dimension reduction of functional data. Second, the universal approximation property of the proposed model is discussed, which brings theoretical guarantees to our method. Third, we present an innovative algorithm for the computation in practice and develop a Python package, called nFunNN, for implementation. The organization of this paper is as follows.
In Section 2, we first give an explanation of nonlinear FPCA, and then introduce a functional autoassociative neural network to perform nonlinear principal component analysis for functional data. We also discuss the practical implementation of our method. In Section 3, we display the simulation results of our numerical study. The evaluation of our method on real-world data is provided in Section 4. In Section 5, we conclude this paper with some discussions.

## 2 Methodology

### 2.1 Nonlinear FPCA

Let \(X(t)\) be a smooth random function in \(L^{2}(\mathcal{T})\), where \(\mathcal{T}\) is a bounded and closed interval. In this paper, \(\mathcal{T}\) is set as \([0,1]\) unless otherwise specified. Let \(\mu(t)\) and \(\Sigma(s,t)\) denote the mean function and covariance function of \(X(t)\), respectively. For linear FPCA, according to the Karhunen-Loeve Theorem, \(X(t)\) admits the following expansion \[X(t)=\mu(t)+\sum_{k=1}^{\infty}\xi_{k}\phi_{k}(t),\] where \(\xi_{k}\) is the \(k\)-th functional principal component score, \(\phi_{k}(t)\) is the \(k\)-th eigenfunction of \(\Sigma(s,t)\) and satisfies \(\int_{\mathcal{T}}\phi_{k}(t)^{2}dt=1\) and \(\int_{\mathcal{T}}\phi_{k}(t)\phi_{l}(t)dt=0\) for \(l\neq k\). Moreover, the functional principal component scores are uncorrelated random variables with mean zero and \(E\xi_{k}^{2}=\lambda_{k}\), where \(\lambda_{k}\) is the \(k\)-th eigenvalue of \(\Sigma(s,t)\). In practice, as only the first several functional principal component scores dominate the variation, a truncated expansion \[X(t)=\mu(t)+\sum_{k=1}^{K}\xi_{k}\phi_{k}(t) \tag{1}\] is often applied, where \(K\) is the number of functional principal components that are used. Furthermore, we also have that \[\xi_{k}=\int_{\mathcal{T}}\{X(t)-\mu(t)\}\phi_{k}(t)dt. \tag{2}\] It is obvious that \(X(t)\) is mapped into a lower-dimensional space through a linear transformation. However, with the constraint of a linear map, the nonlinear structure is ignored, which may lower the efficiency of dimension reduction. For nonlinear FPCA, we extend the linear map in (2) to an arbitrary nonlinear map. That is, \[\xi_{k}=G_{k}(X), \tag{3}\] where \(G_{k}\) is a nonlinear function that maps a function in the square integrable space \(L^{2}(\mathcal{T})\) to a scalar in \(\mathbb{R}\). Similarly, (1) can be generalized into a nonlinear version as \[X(t)=H(\mathbf{\xi})(t),\] where \(\mathbf{\xi}=(\xi_{1},\dots,\xi_{K})^{\top}\) and \(H\) is a nonlinear function that maps from a vector space to a square integrable space. The scores obtained from nonlinear FPCA also contain the main information of \(X(t)\), but the dimension of the feature space can be lower, as the nonlinear structure is taken into account in the process of dimension reduction. To estimate the nonlinear functional principal component scores, the nonlinear functions \(G_{k}\) and \(H\) have to be learnt, and a neural network method is employed.

### 2.2 Neural Networks for Nonlinear FPCA

In this section, we construct a functional autoassociative neural network for nonlinear FPCA. The structure of the proposed neural network is shown in Figure 1. The output is the reconstruction of the input data; thus both input and output are functions in our network, which brings challenges to the optimization of the neural network. Furthermore, dimension reduction is realized by the second hidden layer, which can also be called the "bottleneck" layer as in Kramer (1991).
More details about the computation of the proposed functional autoassociative neural network are given below. Specifically, the left two hidden layers are designed to learn the nonlinear functions \(G_{k}\) that map the functional input to the scores, which can be viewed as a dimension reduction process. For the \(j\)th neuron in the first hidden layer, we define \[H_{j}^{(1)}=\sigma\Big{\{}b_{j}+\int_{\mathcal{T}}\beta_{j}(t)X(t)dt\Big{\}},\ j=1,\ldots,J,\] where \(b_{j}\in\mathbb{R}\) is the intercept, \(\beta_{j}(t)\in L^{2}(\mathcal{T})\) is the weight function, \(\sigma(\cdot)\) is a nonlinear activation function, and \(J\) is the number of neurons in the first hidden layer. This is a natural generalization of the first two layers of a multilayer perceptron to adapt to the functional input. As \(H_{j}^{(1)}\in\mathbb{R}\), the computation of the scores in the second hidden layer follows naturally. The \(k\)th neuron in the second hidden layer is \[H_{k}^{(2)}=\sum_{j=1}^{J}w_{jk}H_{j}^{(1)},\ k=1,\ldots,K,\] where \(w_{jk}\in\mathbb{R}\) is the weight. Moreover, let \(\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\) be the set of functions from \(L^{2}(\mathcal{T})\) to \(\mathbb{R}\) of the form \[X\mapsto\sum_{j=1}^{J}w_{j0}\sigma\Big{\{}b_{j}+\int_{\mathcal{T}}\beta_{j}(t)X(t)dt\Big{\}}.\]

Figure 1: The proposed functional autoassociative neural network for nonlinear FPCA.

According to Corollary 5.1.2 in Stinchcombe (1999) and the proof in Section 6.1.2 of Rossi and Conan-Guez (2006), \(\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\) has the universal approximation property for \(L^{2}(\mathcal{T})\). That means any nonlinear function from \(L^{2}(\mathcal{T})\) to \(\mathbb{R}\) can be approximated up to an arbitrary degree of precision by functions in \(\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\). The right two layers in Figure 1, which correspond to the estimation of the nonlinear function \(H\), are used to reconstruct \(X(t)\) from the low-dimensional scores \(H_{k}^{(2)}\) in the second hidden layer. This step is more challenging, since we must produce functional output from the scalars in the second hidden layer. To this end, the \(r\)th neuron in the third layer is defined as \[H_{r}^{(3)}(t)=\sigma\Big{\{}a_{r}(t)+\sum_{k=1}^{K}\gamma_{kr}(t)H_{k}^{(2)}\Big{\}},\ t\in\mathcal{T},\ r=1,\ldots,R,\] where \(a_{r}(t)\in L^{2}(\mathcal{T})\) is the intercept function, \(\gamma_{kr}(t)\in L^{2}(\mathcal{T})\) is the weight function, and \(R\) is the number of the neurons in the third hidden layer. It can be observed that each neuron in the third hidden layer is a function. Then, \(X(t)\) is reconstructed by \[\widehat{X}(t)=\sum_{r=1}^{R}u_{r}H_{r}^{(3)}(t),\ t\in\mathcal{T}, \tag{4}\] where \(u_{r}\in\mathbb{R}\) is the weight. The whole network is trained by minimizing the following reconstruction error \[RE=\int_{\mathcal{T}}\{X(t)-\widehat{X}(t)\}^{2}dt.\] As functional data is involved in the proposed network, some of the parameters to be estimated are functions, which makes the optimization of the network nontrivial. In Section 2.3, we introduce the optimization algorithm for practical implementation. Note that \(H_{k}^{(2)}\) can be viewed as the estimation of \(\xi_{k}\) in (3). Therefore, the dimension of \(X\) is reduced to \(K\) through the functional autoassociative neural network. We can use the low-dimensional vector to complete further inference, such as curve reconstruction, regression and clustering.
In this paper, we mainly focus on curve reconstruction by the low-dimensional representation.

### The Transformed Network for Practical Implementation

As discussed in Section 2.2, it can be hard to optimize the proposed functional autoassociative neural network in Figure 1 directly, since many of its parameters appear as functions. Here, we employ B-spline basis functions to transform the estimation of functions into the estimation of their coefficients. Let \(\mathbf{B}_{L}=\{B_{l}(t),l=1,\ldots,L\}\) be a set of B-spline basis functions with degree \(d\), where \(L\) is the number of basis functions. Then, we have \[\beta_{j}(t)=\sum_{l=1}^{L}c_{jl}B_{l}(t),\ a_{r}(t)=\sum_{l=1}^{L}\alpha_{rl}B_{l}(t),\ \gamma_{kr}(t)=\sum_{l=1}^{L}v_{krl}B_{l}(t),\] for \(j=1,\ldots,J\), \(k=1,\ldots,K\), and \(r=1,\ldots,R\), where \(c_{jl}\), \(\alpha_{rl}\) and \(v_{krl}\) are the basis expansion coefficients of \(\beta_{j}(t)\), \(a_{r}(t)\) and \(\gamma_{kr}(t)\), respectively. With the use of B-spline basis functions, the computation of the first two layers of the proposed functional autoassociative neural network becomes \[H_{k}^{(2)}=\sum_{j=1}^{J}w_{jk}\sigma\Big{\{}b_{j}+\sum_{l=1}^{L}c_{jl}\int_{\mathcal{T}}X(t)B_{l}(t)dt\Big{\}}\triangleq\sum_{j=1}^{J}w_{jk}\sigma\Big{(}b_{j}+\sum_{l=1}^{L}c_{jl}\widetilde{X}_{l}\Big{)}, \tag{5}\] where \(\widetilde{X}_{l}=\int_{\mathcal{T}}X(t)B_{l}(t)dt\). In practice, \(\widetilde{X}_{l}\) is calculated through the B-spline expansion of \(X\), that is, \(\widetilde{X}_{l}=\sum_{h=1}^{L}x_{h}\{\int_{\mathcal{T}}B_{h}(t)B_{l}(t)dt\}\), where \(x_{h}\) is the basis expansion coefficient of \(X(t)\). Then, we have \[H_{k}^{(2)}=\sum_{j=1}^{J}w_{jk}\sigma\Big{[}b_{j}+\sum_{h=1}^{L}\Big{\{}\sum_{l=1}^{L}c_{jl}\int_{\mathcal{T}}B_{h}(t)B_{l}(t)dt\Big{\}}x_{h}\Big{]}\triangleq\sum_{j=1}^{J}w_{jk}\sigma\Big{(}b_{j}+\sum_{h=1}^{L}d_{jh}x_{h}\Big{)},\] where \(d_{jh}=\sum_{l=1}^{L}c_{jl}\int_{\mathcal{T}}B_{h}(t)B_{l}(t)dt\). The following theorem establishes the universal approximation property of the first two layers based on B-spline basis functions. The proof is provided in the Appendix.

**Theorem 1**.: _Let \(\sigma\) be a continuous non-polynomial function from \(\mathbb{R}\) to \(\mathbb{R}\), and \(\mathcal{S}(\sigma,\mathbf{B}_{L})\) be the set of functions from \(L^{2}(\mathcal{T})\) to \(\mathbb{R}\) of the form_ \[X\mapsto\sum_{j=1}^{J}w_{j0}\sigma\Big{(}b_{j}+\sum_{h=1}^{L}d_{jh}x_{h}\Big{)},\] _where \(x_{h}\) is the \(h\)th coordinate of \(X\) on the basis \(\mathbf{B}_{L}\), \(J\in\mathbb{N}^{*}\), \(w_{j0}\in\mathbb{R}\), \(b_{j}\in\mathbb{R}\) and \(d_{jh}\in\mathbb{R}\). Then, \(\mathcal{S}(\sigma,\mathbf{B}_{L})\) has the universal approximation property.
That is, for any compact subset \(\mathbb{K}\) of \(L^{2}(\mathcal{T})\), for any continuous \(F\) from \(\mathbb{K}\) to \(\mathbb{R}\) and for any \(\epsilon>0\), there exists \(G\in\mathcal{S}(\sigma,\mathbf{B}_{L})\) such that for all \(X\in\mathbb{K}\), \(|G(X)-F(X)|<\epsilon\)._

For the computation of the third hidden layer of the proposed functional autoassociative neural network, we have \[a_{r}(t)+\sum_{k=1}^{K}\gamma_{kr}(t)H_{k}^{(2)}=\sum_{l=1}^{L}\alpha_{rl}B_{l}(t)+\sum_{k=1}^{K}\sum_{l=1}^{L}v_{krl}B_{l}(t)H_{k}^{(2)}=A_{r}^{\top}B(t)+\sum_{k=1}^{K}V_{kr}^{\top}B(t)H_{k}^{(2)}=\Big{(}A_{r}+\sum_{k=1}^{K}V_{kr}H_{k}^{(2)}\Big{)}^{\top}B(t),\] where \(A_{r}=(\alpha_{r1},\ldots,\alpha_{rL})^{\top}\), \(V_{kr}=(v_{kr1},\ldots,v_{krL})^{\top}\), and \(B(t)=(B_{1}(t),\ldots,B_{L}(t))^{\top}\). The term \(A_{r}+\sum_{k=1}^{K}V_{kr}H_{k}^{(2)}\) can be viewed as weighted sums of \(H_{1}^{(2)},\ldots,H_{K}^{(2)}\). Thus, the computation from the second hidden layer to the third hidden layer of the functional autoassociative neural network can be transformed accordingly, that is, \[H_{r}^{(3)}(t)=\sigma\Big{\{}\Big{(}A_{r}+\sum_{k=1}^{K}V_{kr}H_{k}^{(2)}\Big{)}^{\top}B(t)\Big{\}},\ t\in\mathcal{T}, \tag{6}\] the structure of which is shown in Figure 2.

Figure 2: The network structure corresponding to the computation of (6).

The middle layer in Figure 2 represents the elements of the \(L\)-dimensional vector \(A_{r}+\sum_{k=1}^{K}V_{kr}H_{k}^{(2)}\). Furthermore, the reconstruction of \(X(t)\) can be obtained by (4) as discussed in Section 2.2. In practice, suppose that \(X_{1},\ldots,X_{n}\) are \(n\) independent realizations of \(X\), where \(n\) is the sample size. Then, the loss function for the network can be represented as \[\frac{1}{n}\sum_{i=1}^{n}\int_{\mathcal{T}}\{X_{i}(t)-\widehat{X}_{i}(t)\}^{2}dt.\] However, the integral appearing in the loss function increases the difficulty of the optimization. Hence, a right-hand Riemann sum is employed for the computation. Specifically, let \(0=t_{1}<t_{2}<\ldots<t_{M}=1\) be equally spaced time points on \(\mathcal{T}\). Moreover, let \(s_{1},\ldots,s_{T}\) denote the observation time points of the random curves, where \(T\) is the observation size. Then, we consider the following loss function \[\widetilde{RE}=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{M-1}\sum_{m=2}^{M}\{\widetilde{X}_{i}(t_{m})-\widehat{X}_{i}(t_{m})\}^{2}, \tag{7}\] where \(\widetilde{X}_{i}(t_{m})\) is the estimation of \(X_{i}(t_{m})\) using the observed data by smoothing. Note that only the values of the curves at \(t_{1},\ldots,t_{M}\) are involved in the loss function. Therefore, we just need to consider discrete values of \(X(t)\) in the last two layers of the network. Let \(\mathbf{B}_{l}=(B_{l}(t_{1}),\ldots,B_{l}(t_{M}))^{\top}\) for \(l=1,\ldots,L\), and \(\mathbf{B}=(\mathbf{B}_{1},\ldots,\mathbf{B}_{L})^{\top}\). The \(r\)th neuron of the third hidden layer can be computed by \[\widetilde{H}_{r}^{(3)}=\sigma\Big{\{}\Big{(}A_{r}+\sum_{k=1}^{K}V_{kr}H_{k}^{(2)}\Big{)}^{\top}\mathbf{B}\Big{\}}. \tag{8}\] Then the output is given by \[\widehat{\mathbf{X}}=\sum_{r=1}^{R}u_{r}\widetilde{H}_{r}^{(3)}, \tag{9}\] where \(\widehat{\mathbf{X}}=(\widehat{X}(t_{1}),\ldots,\widehat{X}(t_{M}))\). To be clear, the transformed functional autoassociative neural network is shown in Figure 3. The computations involved in the transformed autoassociative neural network correspond to (5), (8) and (9), respectively.
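To make the transformed computation concrete, below is a minimal PyTorch sketch of (5), (8) and (9). It is not the nFunNN package itself; the class name, the sigmoid activation, and the assumption that the inputs are the precomputed B-spline coefficients \(x_{h}\) of each curve (with `Bmat` holding the basis values \(B_{l}(t_{m})\) on the evaluation grid) are ours.

```python
import torch
import torch.nn as nn

class TransformedFunNN(nn.Module):
    """Sketch of the transformed functional autoassociative network."""

    def __init__(self, L, J, K, R, Bmat):
        super().__init__()
        self.R, self.L = R, L
        self.register_buffer("Bmat", Bmat)        # basis values B_l(t_m), shape (L, M)
        self.enc1 = nn.Linear(L, J)               # b_j + sum_h d_jh x_h, as in (5)
        self.enc2 = nn.Linear(J, K, bias=False)   # weights w_jk, as in (5)
        self.coef = nn.Linear(K, R * L)           # A_r + sum_k V_kr H_k^(2), one block per r
        self.out = nn.Linear(R, 1, bias=False)    # weights u_r, as in (9)

    def forward(self, x):                         # x: (batch, L) B-spline coefficients
        h2 = self.enc2(torch.sigmoid(self.enc1(x)))      # scores H^(2), (batch, K)
        c = self.coef(h2).view(-1, self.R, self.L)       # coefficient vectors, (batch, R, L)
        h3 = torch.sigmoid(c @ self.Bmat)                # H^(3) on the grid, as in (8)
        return self.out(h3.transpose(1, 2)).squeeze(-1)  # reconstruction, as in (9)
```

Training then amounts to minimizing the discretized reconstruction error (7) with a standard optimizer, e.g. `((model(x) - xtilde) ** 2).mean()` for smoothed curve values `xtilde` on the grid.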
By turning the proposed functional autoassociative neural network in Section 2.2 into the transformed functional autoassociative neural network, the Adam algorithm (Kingma and Ba, 2014) can be employed in the optimization. This widely used algorithm can be realized by the Python package torch. Moreover, we also provide the Python package nFunNN for the implementation specific to our method.

Figure 3: The transformed functional autoassociative neural network for nonlinear FPCA.

## 3 Simulation

### Numerical performance

In this section, we conduct a simulation study to explore the performance of the proposed nFunNN method. For the simulated data, we take the measurement error into account to make it more consistent with practical cases. Specifically, the observed data are generated by \[Y_{ij}=X_{i}(s_{j})+\epsilon_{ij},\ i=1,\ldots,n,\ j=1,\ldots,T,\] where \(Y_{ij}\) is the \(j\)th observation for the \(i\)th subject, and the \(\epsilon_{ij}\)'s are independent measurement errors drawn from the normal distribution \(\mathcal{N}(0,\delta^{2})\). We set \(\delta=0.1\), and the observation size is set as \(T=51\). The observation grids are equally spaced on \(\mathcal{T}=[0,1]\). For the setting of \(X_{i}\), we consider the following cases.

* Case 1: \(X_{i}(t)=\xi_{i1}\sin(2\pi t)+\xi_{i2}\cos(2\pi t),\ t\in\mathcal{T}\), where \(\xi_{i1}\)'s and \(\xi_{i2}\)'s are simulated from \(\mathcal{N}(0,3^{2})\) and \(\mathcal{N}(0,2^{2})\), respectively.
* Case 2: \(X_{i}(t)=\xi_{i2}\sin(\xi_{i1}t),\ t\in\mathcal{T}\), where both \(\xi_{i1}\)'s and \(\xi_{i2}\)'s are simulated from \(\mathcal{N}(0,2^{2})\).
* Case 3: \(X_{i}(t)=\xi_{i2}\cos(\xi_{i1}t),\ t\in\mathcal{T}\), where both \(\xi_{i1}\)'s and \(\xi_{i2}\)'s are simulated in the same way as in Case 2.
* Case 4: \(X_{i}(t)=\xi_{i1}\sin(2\pi t)+\xi_{i2}\cos(2\pi t)+\xi_{i2}\sin(\xi_{i1}t),\ t\in\mathcal{T}\), where both \(\xi_{i1}\)'s and \(\xi_{i2}\)'s are simulated in the same way as in Case 2.
* Case 5: \(X_{i}(t)=\xi_{i1}\sin(2\pi t)+\xi_{i2}\cos(2\pi t)+\xi_{i2}\cos(\xi_{i1}t),\ t\in\mathcal{T}\), where both \(\xi_{i1}\)'s and \(\xi_{i2}\)'s are simulated in the same way as in Case 2.

The above setups include various structures of \(X_{i}(t)\). In Case 1, \(X_{i}(t)\) is actually generated through the Karhunen-Loève expansion, with zero mean, \(\lambda_{1}=3^{2}\), \(\lambda_{2}=2^{2}\), and \(\lambda_{k}=0\) for \(k\geq 3\). Thus, \(X_{i}(t)\) in Case 1 has a linear structure, and a linear method may be suitable enough for this case. Moreover, the other four cases consider nonlinear structures of \(X_{i}(t)\). Case 2 and Case 3 impose only one nonlinear term in the setup, while Case 4 and Case 5 combine the linear terms in Case 1 with the nonlinear terms in Case 2 and Case 3, respectively. Further, whether a linear structure or a nonlinear structure is considered, all five cases are set to contain two principal components. The proposed nFunNN method is compared with the classical linear FPCA method (Ramsay and Silverman, 2005). Specifically, the numbers of neurons in the different layers of our transformed functional autoassociative network are set as \(L=10\), \(J=20\), \(K=2\), and \(R=20\), and the number of principal components for the linear FPCA method is selected as 2.
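For concreteness, the simulated observations above can be generated with a few lines of NumPy; the sketch below covers Case 1 and Case 2 under the stated settings (\(T=51\), \(\delta=0.1\)), and the remaining cases are analogous.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, delta = 1000, 51, 0.1
s = np.linspace(0.0, 1.0, T)                    # equally spaced grid on [0, 1]

def curves_case1(n):
    xi1 = rng.normal(0.0, 3.0, size=(n, 1))     # xi_1 ~ N(0, 3^2)
    xi2 = rng.normal(0.0, 2.0, size=(n, 1))     # xi_2 ~ N(0, 2^2)
    return xi1 * np.sin(2 * np.pi * s) + xi2 * np.cos(2 * np.pi * s)

def curves_case2(n):
    xi1 = rng.normal(0.0, 2.0, size=(n, 1))
    xi2 = rng.normal(0.0, 2.0, size=(n, 1))
    return xi2 * np.sin(xi1 * s)                # nonlinear in the scores

X = curves_case2(n)                             # true curves on the grid
Y = X + rng.normal(0.0, delta, size=(n, T))     # observations with measurement error
```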
To evaluate the performance of curve reconstruction, we consider the following criteria: \[\text{RMSE}=\sqrt{\frac{1}{nM}\sum_{i=1}^{n}\sum_{m=1}^{M}\{\widehat{X}_{i}(t_{m})-X_{i}(t_{m})\}^{2}},\] \[\text{RRMSE}=\sqrt{\sum_{i=1}^{n}\sum_{m=1}^{M}\{\widehat{X}_{i}(t_{m})-X_{i}(t_{m})\}^{2}}\Bigg{/}\sqrt{\sum_{i=1}^{n}\sum_{m=1}^{M}X_{i}(t_{m})^{2}},\] where RRMSE denotes the relative RMSE. Note that the prediction at \(t_{1},\ldots,t_{M}\) is assessed, and these time points can be different from the observation time points. In our simulation study, \(t_{1},\ldots,t_{M}\) are also equally spaced on \([0,1]\), with \(M\) set as 101. We consider the performance for both the training set and the test set in the evaluation, and the sizes of both sets are set as 1000. Moreover, 100 Monte Carlo runs are conducted for each considered case. Table 1 lists the simulation results of the proposed nFunNN method and the FPCA method in all the five cases. In Case 1, though the FPCA method yields slightly better RMSE and RRMSE than our nFunNN method, both methods perform well for the training set and the test set. The result is not surprising, since \(X_{i}(t)\) is generated by the Karhunen-Loève expansion in Case 1, and the classical linear FPCA is good enough to handle such a case. For Case 2 and Case 3, where nonlinear structure is considered, it is evident that the proposed nFunNN method outperforms the FPCA method and gives more accurate predictions. That implies the advantage of our nFunNN method when handling nonlinear cases. Furthermore, the FPCA method almost fails for Case 4 and Case 5, while the proposed nFunNN method still provides encouraging curve reconstruction results. In the setting of \(X_{i}(t)\) in Case 4 and Case 5, we add a nonlinear term besides two linear terms, which makes it impossible for the linear FPCA method to accomplish the prediction with only two principal components. It can be observed that the prediction error of our nFunNN method is much lower than that of the FPCA method in Case 4 and Case 5. That indicates our method can achieve more effective dimension reduction in nonlinear settings. To sum up, the proposed nFunNN method gives satisfactory predicting results for all five cases. Although the error can be a bit larger than that of the linear method in the linear case, the predicting results of our nFunNN method are still reasonable. Furthermore, the nFunNN method shows obvious superiority in the nonlinear cases. Therefore, our nFunNN method can be a good choice when we have no idea whether the data at hand have a linear structure or a nonlinear structure.

### Effect of the tuning parameters

In this section, we discuss the effect of \(L\), \(J\), \(K\), and \(R\) on the performance of the proposed nFunNN method. Though the values of \(J\) and \(R\) can be different for our method, we set \(J=R\) for simplicity here. We consider the settings of Case 1, Case 2 and Case 4 for illustration in this section. For the discussion of the effect of \(L\), we fix \(J=20\) and \(K=2\), and the value of \(L\) is selected from 10, 15, and 20. Moreover, we set \(J=10,15,20\), and \(L=10\), \(K=2\) for the exploration of the influence of \(J\). When discussing the effect of \(K\), we set \(K=2,3,4\), \(L=10\) and \(J=20\). Note that the number of parameters in the network is \(J(KL+K+2L+2)\) when \(J=R\). As the sample size should be larger than the number of parameters, we increase the sample size of the training set to 2000, and the size of the test set is still 1000 as in Section 3.1.
Tables 2-4 present the simulation results of our nFunNN method with the use of various \(L\), \(J\) and \(K\). From Table 2, it can be observed that with the rise of \(L\), both RMSE and RRMSE increase slightly. As shown in Table 3, there is no obvious difference in the prediction errors with the use of various \(J\), which implies that the effect of \(J\) is not significant. For the results in Table 4, the prediction errors for Case 1 are similar when different values of \(K\) are considered. However, for Case 2 and Case 4, which are both nonlinear cases, various values of \(K\) lead to different performance of the network. In Case 2, the prediction error first decreases with the increase of \(K\), and then shows a minor growth. Furthermore, the performance of the nFunNN method gets much better when a larger \(K\) is used in Case 4. We conjecture the reason is related to the complex setting of Case 4. To summarize, according to the simulation results, the effects of \(L\) and \(J\) are not very obvious, while different values of \(K\) can bring large changes to the prediction in some complex cases. Moreover, the selection of the tuning parameters in the neural network can be completed through the validation set.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & RMSE & RRMSE & RMSE & RRMSE \\
\hline
\multirow{2}{*}{Case 1} & nFunNN & 0.0285 (0.0088) & 0.0112 (0.0034) & 0.0324 (0.0187) & 0.0127 (0.0072) \\
 & FPCA & 0.0201 (0.0003) & 0.0079 (0.0002) & 0.0201 (0.0003) & 0.0079 (0.0002) \\
\multirow{2}{*}{Case 2} & nFunNN & 0.0453 (0.0146) & 0.0386 (0.0121) & 0.0607 (0.0182) & 0.0519 (0.0154) \\
 & FPCA & 0.0890 (0.0177) & 0.0760 (0.0147) & 0.0878 (0.0192) & 0.0753 (0.0164) \\
\multirow{2}{*}{Case 3} & nFunNN & 0.0790 (0.0250) & 0.0488 (0.0156) & 0.1036 (0.0300) & 0.0639 (0.0182) \\
 & FPCA & 0.1955 (0.0255) & 0.1207 (0.0156) & 0.2033 (0.0290) & 0.1254 (0.0174) \\
\multirow{2}{*}{Case 4} & nFunNN & 0.1828 (0.0576) & 0.0791 (0.0252) & 0.2181 (0.0746) & 0.0943 (0.0320) \\
 & FPCA & 0.9992 (0.0279) & 0.4323 (0.0093) & 1.0065 (0.0275) & 0.4356 (0.0096) \\
\multirow{2}{*}{Case 5} & nFunNN & 0.2248 (0.0514) & 0.0877 (0.0199) & 0.2781 (0.0637) & 0.1080 (0.0244) \\
 & FPCA & 0.6573 (0.0291) & 0.2565 (0.0112) & 0.6628 (0.0348) & 0.2566 (0.0123) \\
\hline
\end{tabular}
\end{table}
Table 1: The averaged RMSE and RRMSE of the nFunNN and FPCA methods across 100 Monte Carlo runs for all the five cases, with standard deviation in parentheses.
\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & RMSE & RRMSE & RMSE & RRMSE \\
\hline
\multirow{3}{*}{Case 1} & \(L=10\) & 0.0247 (0.0038) & 0.0097 (0.0015) & 0.0262 (0.0042) & 0.0103 (0.0016) \\
 & \(L=15\) & 0.0301 (0.0060) & 0.0119 (0.0024) & 0.0312 (0.0062) & 0.0122 (0.0024) \\
 & \(L=20\) & 0.0375 (0.0082) & 0.0147 (0.0032) & 0.0386 (0.0080) & 0.0151 (0.0031) \\
\hline
\multirow{3}{*}{Case 2} & \(L=10\) & 0.0414 (0.0084) & 0.0354 (0.0071) & 0.0522 (0.0162) & 0.0443 (0.0138) \\
 & \(L=15\) & 0.0547 (0.0209) & 0.0469 (0.0179) & 0.0628 (0.0240) & 0.0533 (0.0204) \\
 & \(L=20\) & 0.0708 (0.0309) & 0.0605 (0.0261) & 0.0780 (0.0318) & 0.0663 (0.0276) \\
\hline
\multirow{3}{*}{Case 4} & \(L=10\) & 0.1802 (0.0770) & 0.0777 (0.0328) & 0.1956 (0.0796) & 0.0845 (0.0352) \\
 & \(L=15\) & 0.1881 (0.0678) & 0.0812 (0.0293) & 0.2018 (0.0763) & 0.0870 (0.0327) \\
 & \(L=20\) & 0.2350 (0.1851) & 0.1014 (0.0804) & 0.2473 (0.1885) & 0.1067 (0.0812) \\
\hline
\end{tabular}
\end{table}
Table 2: The averaged RMSE and RRMSE of the nFunNN method across 100 Monte Carlo runs with the use of various values of \(L\) for Case 1, Case 2 and Case 4, with standard deviation in parentheses.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & RMSE & RRMSE & RMSE & RRMSE \\
\hline
\multirow{3}{*}{Case 1} & \(K=2\) & 0.0248 (0.0042) & 0.0097 (0.0016) & 0.0258 (0.0045) & 0.0102 (0.0018) \\
 & \(K=3\) & 0.0255 (0.0038) & 0.0100 (0.0015) & 0.0266 (0.0040) & 0.0105 (0.0016) \\
 & \(K=4\) & 0.0263 (0.0029) & 0.0103 (0.0012) & 0.0275 (0.0036) & 0.0108 (0.0015) \\
\hline
\multirow{3}{*}{Case 2} & \(K=2\) & 0.0394 (0.0082) & 0.0337 (0.0071) & 0.0512 (0.0185) & 0.0436 (0.0158) \\
 & \(K=3\) & 0.0271 (0.0016) & 0.0232 (0.0015) & 0.0349 (0.0133) & 0.0297 (0.0114) \\
 & \(K=4\) & 0.0290 (0.0015) & 0.0248 (0.0014) & 0.0352 (0.0096) & 0.0300 (0.0082) \\
\hline
\multirow{3}{*}{Case 4} & \(K=2\) & 0.1733 (0.0522) & 0.0748 (0.0224) & 0.1899 (0.0663) & 0.0820 (0.0283) \\
 & \(K=3\) & 0.0610 (0.0084) & 0.0263 (0.0036) & 0.0754 (0.0158) & 0.0326 (0.0067) \\
 & \(K=4\) & 0.0334 (0.0027) & 0.0144 (0.0012) & 0.0405 (0.0079) & 0.0175 (0.0033) \\
\hline
\end{tabular}
\end{table}
Table 4: The averaged RMSE and RRMSE of the nFunNN method across 100 Monte Carlo runs with the use of various values of \(K\) for Case 1, Case 2 and Case 4, with standard deviation in parentheses.
\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & RMSE & RRMSE & RMSE & RRMSE \\
\hline
\multirow{3}{*}{Case 1} & \(J=10\) & 0.0228 (0.0031) & 0.0090 (0.0012) & 0.0251 (0.0060) & 0.0099 (0.0023) \\
 & \(J=15\) & 0.0231 (0.0024) & 0.0091 (0.0009) & 0.0250 (0.0041) & 0.0098 (0.0016) \\
 & \(J=20\) & 0.0246 (0.0035) & 0.0097 (0.0014) & 0.0262 (0.0043) & 0.0103 (0.0017) \\
\hline
\multirow{3}{*}{Case 2} & \(J=10\) & 0.0522 (0.0122) & 0.0445 (0.0104) & 0.0599 (0.0165) & 0.0512 (0.0137) \\
 & \(J=15\) & 0.0457 (0.0112) & 0.0390 (0.0094) & 0.0541 (0.0147) & 0.0463 (0.0125) \\
 & \(J=20\) & 0.0413 (0.0088) & 0.0352 (0.0074) & 0.0506 (0.0136) & 0.0433 (0.0114) \\
\hline
\multirow{3}{*}{Case 4} & \(J=10\) & 0.2102 (0.0507) & 0.0907 (0.0220) & 0.2266 (0.0612) & 0.0978 (0.0259) \\
 & \(J=15\) & 0.1953 (0.0794) & 0.0843 (0.0340) & 0.2145 (0.0898) & 0.0926 (0.0386) \\
 & \(J=20\) & 0.1684 (0.0413) & 0.0727 (0.0179) & 0.1854 (0.0507) & 0.0801 (0.0224) \\
\hline
\end{tabular}
\end{table}
Table 3: The averaged RMSE and RRMSE of the nFunNN method across 100 Monte Carlo runs with the use of various values of \(J\) for Case 1, Case 2 and Case 4, with standard deviation in parentheses.

## 4 Real Data Analysis

In this section, we discuss the performance of the proposed nFunNN method in real data application. The Yoga data set and the StarLightCurves data set from (Chen et al., 2015) are considered. Specifically, we aim to assess the predicting ability of the nFunNN method on these two data sets. The Yoga data set contains 3300 samples, and the observation size is 426 for each subject. We randomly divide the data set into a training set and a test set, with the sample sizes being 3000 and 300, respectively. The values at these 426 observation grids are predicted by both our nFunNN method and the classical FPCA method via different \(K\). The numbers of neurons in the other layers of our transformed functional autoassociative neural network are set as \(L=20\), \(J=20\), and \(R=20\). The following criteria are considered: \[\widetilde{\text{RMSE}}=\sqrt{\frac{1}{nM}\sum_{i=1}^{n}\sum_{m=1}^{M}\{\widehat{X}_{i}(t_{m})-Y_{im}\}^{2}},\] \[\widetilde{\text{RRMSE}}=\sqrt{\sum_{i=1}^{n}\sum_{m=1}^{M}\{\widehat{X}_{i}(t_{m})-Y_{im}\}^{2}}\Bigg{/}\sqrt{\sum_{i=1}^{n}\sum_{m=1}^{M}Y_{im}^{2}},\] where \(M=426\) for the Yoga data set. The data set is randomly split 100 times, and the predicting results for both the training set and the test set are presented in Table 5. It is shown that with the rise of \(K\), the predicting results get better for both the nFunNN and FPCA methods. Moreover, the proposed nFunNN method can provide more precise predicting results when using the same \(K\) as the FPCA method, which demonstrates the advantage of our method in real data application. Figure 4 exhibits the predicting performance of both methods for the training set and the test set by boxplots. It can be observed that the nFunNN method always produces less predicting error. Furthermore, we consider the analysis of the StarLightCurves data set. There are 9236 subjects in this data set and each subject has 1024 observations. Similar to the analysis of the Yoga data set, we randomly divide the StarLightCurves data set into a training set and a test set, with the sizes set as 8000 and 1236, respectively. We intend to predict the values at these 1024 observation grids by both the nFunNN and FPCA methods using various \(K\).
The numbers of the neurons for the transformed functional autoassociative neural network are set the same as those in the analysis of the Yoga data set. We also conduct 100 runs for the StarLightCurves data set, and the averaged \(\widetilde{\text{RMSE}}\) and \(\widetilde{\text{RRMSE}}\) are reported in Table 6. The trend of the prediction error for both methods is analogous to that for the Yoga data set. Figure 5 provides a visual illustration for the comparison of the nFunNN and FPCA methods. It is shown that our nFunNN method constantly outperforms the linear FPCA method with the use of different \(K\). That further indicates the effectiveness of the proposed nFunNN method in real-world application.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & \(\widetilde{\text{RMSE}}\) & \(\widetilde{\text{RRMSE}}\) & \(\widetilde{\text{RMSE}}\) & \(\widetilde{\text{RRMSE}}\) \\
\hline
\multirow{2}{*}{\(K=2\)} & nFunNN & 0.2709 (0.0061) & 0.2712 (0.0061) & 0.2777 (0.0127) & 0.2780 (0.0127) \\
 & FPCA & 0.4464 (0.0009) & 0.4469 (0.0009) & 0.4453 (0.0093) & 0.4459 (0.0093) \\
 & nFunNN & 0.2172 (0.0052) & 0.2174 (0.0052) & 0.2265 (0.0097) & 0.2267 (0.0097) \\
 & FPCA & 0.3745 (0.0009) & 0.3750 (0.0009) & 0.3769 (0.0085) & 0.3773 (0.0085) \\
 & nFunNN & 0.1747 (0.0032) & 0.1749 (0.0032) & 0.1821 (0.0089) & 0.1823 (0.0089) \\
 & FPCA & 0.2943 (0.0009) & 0.2947 (0.0009) & 0.2952 (0.0092) & 0.2955 (0.0092) \\
 & nFunNN & 0.1492 (0.0032) & 0.1494 (0.0032) & 0.1572 (0.0068) & 0.1574 (0.0068) \\
 & FPCA & 0.2554 (0.0007) & 0.2557 (0.0007) & 0.2565 (0.0068) & 0.2568 (0.0068) \\
 & nFunNN & 0.1294 (0.0029) & 0.1295 (0.0029) & 0.1366 (0.0059) & 0.1367 (0.0059) \\
 & FPCA & 0.2240 (0.0006) & 0.2243 (0.0006) & 0.2242 (0.0063) & 0.2245 (0.0063) \\
 & nFunNN & 0.1159 (0.0028) & 0.1161 (0.0028) & 0.1232 (0.0055) & 0.1234 (0.0055) \\
 & FPCA & 0.1947 (0.0005) & 0.1949 (0.0005) & 0.1955 (0.0055) & 0.1958 (0.0055) \\
 & nFunNN & 0.1043 (0.0036) & 0.1044 (0.0036) & 0.1110 (0.0063) & 0.1111 (0.0063) \\
 & FPCA & 0.1707 (0.0005) & 0.1709 (0.0005) & 0.1714 (0.0049) & 0.1716 (0.0049) \\
\hline
\end{tabular}
\end{table}
Table 5: The averaged \(\widetilde{\text{RMSE}}\) and \(\widetilde{\text{RRMSE}}\) of the nFunNN and FPCA methods across 100 runs for the Yoga data set, with standard deviation in parentheses.

Figure 4: The averaged \(\widetilde{\text{RMSE}}\) of the nFunNN and FPCA methods across 100 runs for the Yoga data set. (a) The boxplot for the training set. (b) The boxplot for the test set.

## 5 Conclusions and Discussion

In this paper, we introduce a nonlinear FPCA method to realize effective dimension reduction and curve reconstruction. We generalize the autoassociative neural network to the functional data analysis framework and construct a transformed functional autoassociative neural network for practical implementation. The proposed method takes into account the nonlinear structure of the functional observations. A Python package is developed for the convenience of using the proposed nFunNN method. The theoretical properties of the proposed networks are also considered. Moreover, the results of the simulation study and real data application further suggest the superiority of our nFunNN method. There are also several possible extensions of our work. First, we only consider usual functional data in the development of our method.
However, complex functional data, such as multivariate functional data (Chiou et al., 2014) and multidimensional functional data (Wang et al., 2022b), become more and more common nowadays. Thus, developing nonlinear FPCA methods for these types of data and generalizing our method to such settings can be of great significance. Second, only the curve reconstruction error is considered in the construction of the loss function (7) for our method, so our method is particularly suitable for the curve reconstruction issue. It can be beneficial if other concerns, such as regression and clustering problems, can be imposed in the loss function. To achieve this goal, some modifications of the proposed neural network are needed, which is worth further research.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
 & & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\
 & & \(\widetilde{\text{RMSE}}\) & \(\widetilde{\text{RRMSE}}\) & \(\widetilde{\text{RMSE}}\) & \(\widetilde{\text{RRMSE}}\) \\
\hline
\multirow{2}{*}{\(K=2\)} & nFunNN & 0.2048 (0.0039) & 0.2049 (0.0039) & 0.2074 (0.0057) & 0.2075 (0.0057) \\
 & FPCA & 0.3052 (0.0007) & 0.3053 (0.0007) & 0.3047 (0.0046) & 0.3048 (0.0046) \\
 & nFunNN & 0.1636 (0.0032) & 0.1637 (0.0032) & 0.1662 (0.0045) & 0.1663 (0.0045) \\
 & FPCA & 0.2615 (0.0007) & 0.2617 (0.0007) & 0.2609 (0.0047) & 0.2611 (0.0047) \\
 & nFunNN & 0.1484 (0.0023) & 0.1485 (0.0023) & 0.1519 (0.0044) & 0.1520 (0.0044) \\
 & FPCA & 0.2344 (0.0008) & 0.2345 (0.0008) & 0.2346 (0.0050) & 0.2347 (0.0050) \\
 & nFunNN & 0.1386 (0.0019) & 0.1387 (0.0019) & 0.1415 (0.0033) & 0.1416 (0.0033) \\
 & FPCA & 0.2110 (0.0007) & 0.2112 (0.0007) & 0.2110 (0.0044) & 0.2111 (0.0044) \\
 & nFunNN & 0.1305 (0.0020) & 0.1305 (0.0020) & 0.1344 (0.0032) & 0.1344 (0.0032) \\
 & FPCA & 0.1908 (0.0007) & 0.1909 (0.0007) & 0.1919 (0.0048) & 0.1920 (0.0048) \\
 & nFunNN & 0.1228 (0.0017) & 0.1229 (0.0017) & 0.1267 (0.0032) & 0.1267 (0.0032) \\
 & FPCA & 0.1744 (0.0007) & 0.1745 (0.0007) & 0.1747 (0.0044) & 0.1748 (0.0044) \\
 & nFunNN & 0.1158 (0.0021) & 0.1158 (0.0021) & 0.1198 (0.0031) & 0.1199 (0.0031) \\
 & FPCA & 0.1568 (0.0006) & 0.1569 (0.0006) & 0.1580 (0.0037) & 0.1581 (0.0037) \\
\hline
\end{tabular}
\end{table}
Table 6: The averaged \(\widetilde{\text{RMSE}}\) and \(\widetilde{\text{RRMSE}}\) of the nFunNN and FPCA methods across 100 runs for the StarLightCurves data set, with standard deviation in parentheses.

## Acknowledgments

This work was supported by Public Health & Disease Control and Prevention, Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Renmin University of China. This work was also supported by the Outstanding Innovative Talents Cultivation Funded Programs 2021 of Renmin University of China.

Figure 5: The averaged \(\widetilde{\text{RMSE}}\) of the nFunNN and FPCA methods across 100 runs for the StarLightCurves data set. (a) The boxplot for the training set. (b) The boxplot for the test set.

## Appendix: Proof of Theorem 1

Proof.: Let \(\mathbb{K}\) be a compact subset of \(L^{2}(\mathcal{T})\). Recall that \(\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\) is the set of functions from \(L^{2}(\mathcal{T})\) to \(\mathbb{R}\) of the form \[X\mapsto\sum_{j=1}^{J}w_{j0}\sigma\Big{\{}b_{j}+\int_{\mathcal{T}}X(t)\beta_{j}(t)dt\Big{\}},\] where \(J\in\mathbb{N}^{*}\), \(w_{j0}\in\mathbb{R}\), \(b_{j}\in\mathbb{R}\) and \(\beta_{j}(t)\in L^{2}(\mathcal{T})\). According to Corollary 5.1.2 in Stinchcombe (1999) and Section 6.1.2 of Rossi and Conan-Guez (2006), \(\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\) has the universal approximation property.
Hence, for any continuous function \(F\) from \(\mathbb{K}\) to \(\mathbb{R}\) and for any \(\epsilon>0\), there exists \(H\in\mathcal{S}(\sigma,L^{2}(\mathcal{T}))\) such that for all \(X\in\mathbb{K}\), \[|H(X)-F(X)|<\epsilon/2. \tag{10}\] As \(H\) is continuous on \(L^{2}(\mathcal{T})\), we have that for any \(X\in\mathbb{K}\), there exists \(\eta(X)>0\) such that for any \(f\in B(X,\eta(X))\), \(|H(X)-H(f)|<\epsilon/4\). By the approximation property of B-splines (Zhong et al., 2023) and the compactness of \(\mathbb{K}\), we can get \[|H(\widetilde{X})-H(X)|<\epsilon/2, \tag{11}\] similar to the proof in Section 6.1.3 of Rossi and Conan-Guez (2006), where \(\widetilde{X}(t)=\sum_{h=1}^{L}x_{h}B_{h}(t)\). Then, according to (10) and (11), \[|H(\widetilde{X})-F(X)|\leq|H(\widetilde{X})-H(X)|+|H(X)-F(X)|<\epsilon.\] Moreover, \[H(\widetilde{X})=\sum_{j=1}^{J}w_{j0}\sigma\Big{\{}b_{j}+\int_{\mathcal{T}}\widetilde{X}(t)\beta_{j}(t)dt\Big{\}}=\sum_{j=1}^{J}w_{j0}\sigma\Big{[}b_{j}+\int_{\mathcal{T}}\Big{\{}\sum_{h=1}^{L}x_{h}B_{h}(t)\Big{\}}\beta_{j}(t)dt\Big{]}=\sum_{j=1}^{J}w_{j0}\sigma\Big{\{}b_{j}+\sum_{h=1}^{L}x_{h}\int_{\mathcal{T}}\beta_{j}(t)B_{h}(t)dt\Big{\}}=\sum_{j=1}^{J}w_{j0}\sigma\Big{(}b_{j}+\sum_{h=1}^{L}d_{jh}x_{h}\Big{)}.\] Define \(\widetilde{H}(X)=H(\widetilde{X})\); then we have \(\widetilde{H}\in\mathcal{S}(\sigma,\mathbf{B}_{L})\). The proof is completed.
2310.10608
Quality control using convolutional neural networks applied to samples of very small size
Although there is extensive literature on the application of artificial neural networks (NNs) in quality control (QC), to monitor the conformity of a process to quality specifications, at least five QC measurements are required, increasing the related cost. To explore the application of neural networks to samples of QC measurements of very small size, four one-dimensional (1-D) convolutional neural networks (CNNs) were designed, trained, and tested with datasets of $ n $-tuples of simulated standardized normally distributed QC measurements, for $ 1 \leq n \leq 4$. The designed neural networks were compared to statistical QC functions with equal probabilities for false rejection, applied to samples of the same size. When the $ n $-tuples included at least two QC measurements distributed as $ \mathcal{N}(\mu, \sigma^2) $, where $ 0.2 < |\mu| \leq 6.0 $, and $ 1.0 < \sigma \leq 7.0 $, the designed neural networks outperformed the respective statistical QC functions. Therefore, 1-D CNNs applied to samples of 2-4 quality control measurements can be used to increase the probability of detection of the nonconformity of a process to the quality specifications, with lower cost.
Rallou A. Chatzimichail, Aristides T. Hatjimihail
2023-10-16T17:33:08Z
http://arxiv.org/abs/2310.10608v2
Quality Control Using Convolutional Neural Networks Applied to Samples of Very Small Size

## 1 Abstract

Although there is extensive literature on the application of artificial neural networks (NNs) in quality control (QC), to monitor the conformity of a process to quality specifications, at least five QC measurements are required, increasing the related cost. To explore the application of neural networks to samples of QC measurements of very small size, four one-dimensional (1-D) convolutional neural networks (CNNs) were designed, trained, and tested with datasets of \(n\)-tuples of simulated standardized normally distributed QC measurements, for \(1\leq n\leq 4\). The designed neural networks were compared to statistical QC functions with equal probabilities for false rejection, applied to samples of the same size. When the \(n\)-tuples included at least two QC measurements distributed as \(\mathcal{N}(\mu,\sigma^{2})\), where \(0.2<|\mu|\leq 6.0\), and \(1.0<\sigma\leq 7.0\), the designed neural networks outperformed the respective statistical QC functions. Therefore, 1-D CNNs applied to samples of 2-4 quality control measurements can be used to increase the probability of detection of the nonconformity of a process to the quality specifications, with lower cost.

## 2 Introduction

### QC

Alternative quality control procedures can be applied to a process to test the null hypothesis, that the process conforms to the quality specifications and consequently is in control, against the alternative, that the process is out of control. When a true null hypothesis is rejected, a statistical type I error is committed. We then have a false rejection of a run of the process. The probability of a type I error is called the probability for false rejection. When a false null hypothesis is accepted, a statistical type II error is committed. We then fail to detect a significant change of the probability density function of a quality characteristic of the process. The probability for rejection of a false null hypothesis equals the probability of detection of the nonconformity of the process to the quality specifications. A QC procedure can be formulated as a Boolean expression of QC functions, applied to a sample of QC measurements. If it is true, then the null hypothesis is considered as false, the process as out of control, and the run is rejected (A. T. Hatjimihail 1992). A statistical QC procedure is evaluated as a Boolean proposition of one or more statistical QC functions. Each QC function is a decision rule, evaluated by calculating a statistic of the measured quality characteristic of a sample of QC measurements. Then, if the statistic is out of the interval between the decision limits, the decision rule is considered as true. If it is true, then the null hypothesis is considered as false, the process as out of control, and the run is rejected. Control charts are plots of the statistics, in time order.

### NNs

NNs are adaptive computational algorithms for statistical data modeling and classification of arbitrary precision, inspired by the brain structure and information processing. They are equivalent to nonlinear mathematical functions \(\mathbf{W}=\mathrm{F}(\mathbf{Q})\), where \(\mathbf{W}\) and \(\mathbf{Q}\) are vectors, matrices, or tensors of finite dimensions. NNs consist of interconnected input, processing, and output nodes. Information flows from input to output, through weighted directed interconnections (see Figure 1).
During forward propagation, each processing node receives as input the sum of the weighted outputs of other nodes plus a bias. Its output is calculated by an activation function and is propagated to other nodes. During back propagation, the weights and biases of each node are adapted to minimize an output-dependent loss function. In training, forward and back propagation are applied repeatedly, using appropriate datasets. Since 1943 (McCulloch and Pitts 1943), many NN architectures have been designed, growing in complexity (Leijnen and van Veen 2020) and evolving into multilayer deep neural networks (DNNs). Common types of NNs are:

1. Feedforward NNs, directed acyclic networks with fully connected input, hidden and output layers.
2. Convolutional NNs (CNNs), directed acyclic networks with partially connected input, hidden and output layers.
3. Recurrent NNs, directed networks with connected input, hidden and output layers, including cyclic subnetworks.

Figure 1: An NN with an input layer of four nodes, two hidden layers of eight and six nodes, and an output layer of two nodes.

NNs are bioinspired computational algorithms, as are genetic algorithms (GA). Since 1989, NNs have been proposed as an alternative to statistical quality control procedures (Zorriassatine and Tannock 1998), while GA have been used for the design of QC since 1993 (A. T. Hatjimihail 1993). The first application of NNs to QC was published by Pugh (Pugh 1989), who applied NNs to samples of 5 QC measurements, with performance comparable to that of simple statistical quality control rules. Hwang and Hubele (Hwang and Hubele 1993) applied NNs to samples of 8 QC measurements for control chart pattern recognition among 8 unnatural control chart patterns. Other early applications of NNs to QC were published by Smith (Smith 1994), Stutzle (Stutzle 1995), Cheng (Cheng 1995) and Chang and Aw (Chang and Aw 1996). The application of NNs to statistical process control was reviewed comprehensively by Psarakis (Psarakis 2011). Hachicha and Ghorbel (Hachicha and Ghorbel 2012) reviewed extensively their application to control chart pattern recognition. The input of the NNs applied to statistical process control was either \(n\)-tuples of QC measurements or derived statistics. To the best of our knowledge, there has not been any publication in quality control about NNs applied to \(n\)-tuples of QC measurements for \(n<5\). QC costs are substantial (Howanitz, Tetrault, and Steindel 1997). As the cost of QC sampling increases with the size of the sample, especially when it involves expensive QC materials or is destructive, we explored the application of NNs to samples of QC measurements of very small size. For that purpose, four 1-D CNNs were designed and applied to \(n\)-tuples of QC measurements, for \(1\leq n\leq 4\), as described in detail in the _Methods_ section. CNNs are compact DNNs with low computational complexity, easier to implement in relatively low-cost systems, as they include convolutional and pooling layers. Through small kernels, each neuron of a convolutional layer receives input from only a restricted area of the previous layer, called the neuron's _receptive field_. Moreover, pooling layers combine the outputs of neuron clusters at one layer into a single neuron in the next layer, further reducing the dimensions of the data. Since CNNs merge both feature extraction and classification tasks into a single structure, they have been used extensively for feature extraction in image processing (Khan et al. 2020).
One-dimensional (1-D) CNNs have been used in signal processing and classification, as in anomaly detection in quality control (Kiranyaz et al. 2021).

## 3 Methods

### 3.1 Simulated QC Sample Tuples

It is assumed that \(u\) are standardized normally distributed measurements of control samples when the measurement process is in control and \(v\) when it is out of control. Then:
\[U\sim\mathcal{N}(0,1)\]
\[V\sim\mathcal{N}(\mu,\sigma^{2})\]
where:
\[|\mu|>0\ \vee\ \sigma>1\]
Series of QC measurement \(n\)-tuples (QCMT) were simulated, with up to \(k\) measurements out of control, in all possible combinations, where:
\[1\leq n\leq 4\]
\[1\leq k\leq n\]
The combinations of the elements of the QCMT are presented in Table 1.

### 3.2 Simulated Datasets

Training and testing datasets were created with simulated QC measurements, normally distributed and standardized.

#### 3.2.1 Training Datasets

For \(a\in\{1,2,3,4,6,8,12,16\}\), and \(1\leq n\leq 4\), each training dataset \(\mathbf{T}_{a}(n)\) consists of:

1. \(a\ 2^{n-1}10^{5}\) \(n\)-tuples of random simulated QC measurements distributed as \(\mathcal{N}(0,1)\),
2. \(2^{n-1}10^{5}\) \(n\)-tuples of random simulated QC measurements, \(k\) distributed as \(\mathcal{N}(0,\sigma^{2})\) and \(n-k\) as \(\mathcal{N}(0,1)\), for \(1\leq k\leq n\) and \(1<\sigma\leq 11\),
3. \(2^{n-1}10^{5}\) \(n\)-tuples of random simulated QC measurements, \(k\) distributed as \(\mathcal{N}(\mu,1)\) and \(n-k\) as \(\mathcal{N}(0,1)\), for \(1\leq k\leq n\) and \(0<|\mu|\leq 10\).

#### 3.2.2 Testing Datasets

The following testing datasets were created:

1. \(\mathbf{D}(n,0,1)\), for \(1\leq n\leq 4\), with \(10^{8}\) \(n\)-tuples of random simulated QC measurements distributed as \(\mathcal{N}(0,1)\).
2. \(\mathbf{D}(n,k,0,\sigma)\) for \(\sigma=1.1,1.2,\ldots,7.0\), \(1\leq n\leq 4\) and \(1\leq k\leq n\), with \(\binom{n}{k}10^{5}\) \(n\)-tuples of random simulated QC measurements, \(k\) distributed as \(\mathcal{N}(0,\sigma^{2})\) and \(n-k\) as \(\mathcal{N}(0,1)\).
3. \(\mathbf{D}(n,k,\mu,1)\) for \(|\mu|=0.1,0.2,\ldots,6.0\), \(1\leq n\leq 4\) and \(1\leq k\leq n\), with \(\binom{n}{k}10^{5}\) \(n\)-tuples of random simulated QC measurements, \(k\) distributed as \(\mathcal{N}(\mu,1)\) and \(n-k\) as \(\mathcal{N}(0,1)\).

Tables 2 and 3 present the elements of the simulated training and testing datasets, with the respective numbers of the QCMT.
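As an illustration, the testing tuples \(\mathbf{D}(n,k,\mu,\sigma)\) can be simulated along the following lines; this NumPy sketch uses our own naming and is not the authors' Wolfram Language code.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def qc_tuples(n, k, mu, sigma, m=100_000):
    """m n-tuples per position combination: k measurements out of control,
    drawn from N(mu, sigma^2), and the remaining n - k from N(0, 1)."""
    tuples, labels = [], []
    for pos in combinations(range(n), k):
        x = rng.normal(0.0, 1.0, size=(m, n))                 # in-control measurements
        x[:, list(pos)] = rng.normal(mu, sigma, size=(m, k))  # out-of-control positions
        tuples.append(x)
        labels.append(np.full(m, k > 0))                      # True if out of control
    return np.concatenate(tuples), np.concatenate(labels)

X, y = qc_tuples(n=3, k=2, mu=0.0, sigma=2.0)                 # binom(3, 2) * 10^5 tuples
```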
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & & \(n\) & \(U\) & \(V\) \\ \hline \multirow{3}{*}{\(\text{T}_{a}(1)\)} & \(\{[u_{1}],\textit{False}\}_{i=1}^{n}\) & \(a\ 10^{5}\) & \(\mathcal{N}(0,1)\) & \\ \cline{2-4} & \(\{[v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \hline \multirow{3}{*}{\(\text{T}_{a}(2)\)} & \(\{[u_{1},u_{2}],\textit{False}\}_{i=1}^{n}\) & \(3a\ 10^{5}\) & \(\mathcal{N}(0,1)\) & \\ \cline{2-4} & \(\{[u_{1},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[v_{1},u_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1},u_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[v_{1},v_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \hline \multirow{3}{*}{\(\text{T}_{a}(3)\)} & \(\{[u_{1},u_{2},u_{3}],\textit{False}\}_{i=1}^{n}\) & \(7a\ 10^{5}\) & \(\mathcal{N}(0,1)\) & \\ \cline{2-4} & \(\{[u_{1},u_{2},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[u_{1},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1},u_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[u_{1},u_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},u_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[u_{1},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[u_{1},v_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1},v_{2},u_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1},v_{2},u_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[v_{1},v_{2},u_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \hline \multirow{3}{*}{\(\text{T}_{a}(3)\)} & \(\{[v_{1},u_{2},u_{3}],\textit{False}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},u_{2},u_{3},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},u_{2},v_{1}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & 
\(\{[u_{1},u_{2},v_{1},u_{3}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},v_{1},u_{2},u_{3}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \hline \multirow{3}{*}{\(\text{T}_{a}(4)\)} & \(\{[u_{1},v_{1},v_{2},u_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(0,\sigma^{2}),1<\sigma\leq 11\) \\ \cline{2-4} & \(\{[u_{1},v_{1},v_{2},u_{2}],\textit{True}\}_{i=1}^{n}\) & \(10^{5}\) & \(\mathcal{N}(0,1)\) & \(\mathcal{N}(\mu,1),0<|\mu|\leq 10\) \\ \cline{2-4} & \(\{[u_{1},v_{1},u_{2},v_{2}],\textit{True}\}_{i=1}^{n}\) & & & \\ \hline
\end{tabular}
\end{table}
Table 2: The elements of the simulated training datasets \(\mathbf{T}_{a}(n)\), with the respective numbers of the QCMT.

### 3.3 QC Functions

As a QC function we define a Boolean-valued function applied to an \(n\)-tuple \(\mathbf{x}\) of QC measurements of a run of a process. If it is _true_, the process is considered out of control, and the run is rejected.

#### 3.3.1 Neural Network QC Functions

\(N_{a}(\mathbf{x})\) denotes a neural network QC function. For this project, four 1-D CNNs were designed, with 69 layers, \(486-11634\) nodes, and 1x1 kernels. Figure 2 presents their template. Table 4 presents a summary of their layers and nodes.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
CNNs & \multicolumn{2}{c|}{\(N_{a}(x_{1})\)} & \multicolumn{2}{c|}{\(N_{a}(x_{1},x_{2})\)} & \multicolumn{2}{c|}{\(N_{a}(x_{1},x_{2},x_{3})\)} & \multicolumn{2}{c|}{\(N_{a}(x_{1},x_{2},x_{3},x_{4})\)} \\ \hline
 & Layers & Nodes & Layers & Nodes & Layers & Nodes & Layers & Nodes \\ \hline
Input & 1 & 1 & 1 & 2 & 1 & 3 & 1 & 4 \\ \hline
Hidden & 67 & 483 & 67 & 2102 & 67 & 5577 & 67 & 11628 \\ \hline
Output & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 \\ \hline
\end{tabular}
\end{table}
Table 4.
CNNs Layers and nodes \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & & \(n\) & \(U\) & \(V\) \\ \hline \(\mathbf{D}(1,0,1)\) & \(\{[u_{1},False]_{in}^{n}\}\) & \(8\cdot 10^{5}\) & \(N(0,1)\) & \\ \hline \(\mathbf{D}(1,1,\mu,\sigma)\) & \(\{[v_{1},True]_{in}^{n}\}\) & \(10^{5}\) & \(N(\mu,\sigma^{2}),|\mu|>0\) V \(\sigma>1\) \\ \hline \(\mathbf{D}(2,0,1)\) & \(\{[u_{1},u_{2},False]_{in}^{n}\}\) & \(8\cdot 10^{5}\) & \(N(0,1)\) & \\ \hline \(\mathbf{D}(2,1,\mu,\sigma)\) & \(\{[u_{1},v_{1},True]_{in}^{n}\}\) & \(10^{5}\) & \(N(0,1)\) & \\ \hline & \(\{[v_{1},u_{1},True]_{in}^{n}\}\) & \(10^{5}\) & \(N(0,1)\) & \(N(\mu,\sigma^{2}),|\mu|>0\) V \(\sigma>1\) \\ \hline \(\mathbf{D}(2,2,\mu,\sigma)\) & \(\{[v_{1},v_{2}]_{in}^{n}\}\) & \(10^{5}\) & \(N(\mu,\sigma^{2}),|\mu|>0\) V \(\sigma>1\) \\ \hline \(\mathbf{D}(3,0,1)\) & \(\{[u_{1},u_{2},u_{3}]_{False}\}\) & \(8\cdot 10^{5}\) & \(N(0,1)\) & \\ \hline & \(\{[u_{1},u_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},v_{1},v_{2}]_{True}\}\) & \(10^{5}\) & \\ \hline \(\mathbf{D}(3,2,\mu,\sigma)\) & \(\{[v_{1},u_{1},v_{2}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},u_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},u_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline \(\mathbf{D}(3,3,\mu,\sigma)\) & \(\{[v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},u_{3},u_{4}]_{False}\}\) & \(8\cdot 10^{5}\) & \(N(0,1)\) & \\ \hline & \(\{[u_{1},u_{2},u_{3},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline \(\mathbf{D}(4,1,\mu,\sigma)\) & \(\{[u_{1},u_{2},v_{1},u_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},u_{3},u_{4}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},v_{1},v_{2}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{2},v_{1},v_{2}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline \(\mathbf{D}(4,2,\mu,\sigma)\) & \(\{[u_{1},v_{1},v_{2},u_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{1},v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},v_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[u_{1},u_{1},v_{2},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{1}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v_{1},v_{2},v_{1},v_{3}]_{True}\}\) & \(10^{5}\) & \\ \hline & \(\{[v Figure 2: The template of the CNN \(N_{a}(x)\) For \(a\in\{1,2,3,4,6,8,12,16\}\), \(n=\#\mathbf{x}\) (that is \(n\) equal to the cardinality of \(\mathbf{x}\)), and \(1\leq n\leq 4\), each 1-D CNN 
\(N_{a}(\mathbf{x})\) consists of:

1. The input layer, with \(n\) nodes.
2. Ten parallel arrays of five convolutional layers each. Every convolutional layer has \(n\) output channels, and kernels of size 1x1.
3. A catenating layer, which catenates the ten parallel arrays.
4. A pooling layer, performing 1-D pooling with kernels of size 2x1.
5. Ten convolutional net layers having \(n\) output channels, and kernels of size 1x1.
6. A pooling net layer, performing 1-D pooling with kernels of size 2x1.
7. Four linear net layers with output vectors of decreasing size.
8. A _softmax_ net layer, normalizing the exponentials of the output vector of the fourth linear layer.
9. The output layer, which is a decoder with two nodes, classifying the input as _true_ (out of control) or _false_ (in control).

For \(a\in\{1,2,3,4,6,8,12,16\}\), each CNN \(N_{a}(\mathbf{x})\) was trained using the respective training set \(\mathbf{T}_{a}(n)\), and then tested on the datasets \(\mathbf{D}(n,0,1)\) and \(\mathbf{D}(n,k,\mu,\sigma)\), for each combination of \(k\) and \(n\), for \(1\leq n\leq 4\) and \(1\leq k\leq n\). The misclassification rate of the application of \(N_{a}(\mathbf{x})\) to the dataset \(\mathbf{D}(n,0,1)\) equals its probability for false rejection, denoted as \(P_{N,a}(n,0,1)\). The correct classification rate of the application of \(N_{a}(\mathbf{x})\) to the dataset \(\mathbf{D}(n,k,\mu,\sigma)\), for \(|\mu|>0\) or \(\sigma>1\), equals its probability for rejection, denoted as \(P_{N,a}(n,k,\mu,\sigma)\) (see _Notation and Formalism_). The thirty-two trained CNNs are available at [https://www.hcsl.com/Supplements/hcsltr21s1.zip](https://www.hcsl.com/Supplements/hcsltr21s1.zip), in Apache MXNet, ONNX, and Wolfram Language framework formats, as described in _Supplement 1_.

#### 3.3.2 Statistical QC Functions

Let \(m\) be the expected mean and \(s\) the standard deviation of the QC measurements \(x_{i}\) of a measurement process when the process is in control. \(S_{a}(\mathbf{x};l,m,s)\) denotes a statistical QC function, applied to an \(n\)-tuple \(\mathbf{x}\) of QC measurements \(x_{i}\), which is true if \(|x_{i}-m|>l\,s\) for any of them. Furthermore, let: \[S_{a}(\mathbf{x};\;l)=S_{a}(\mathbf{x};l,0,1)\] For \(a\in\{1,2,3,4,6,8,12,16\}\), the probability for rejection of the application of \(S_{a}(\mathbf{x};l)\) to the dataset \(\mathbf{D}(n,0,1)\) is denoted as \(P_{S,a}(n,0,1)\). The decision limit \(l\) of \(S_{a}(\mathbf{x};l)\) is defined (see _Notation and Formalism_) so that: \[P_{S,a}(n,0,1)\;=\;P_{N,a}(n,0,1)\] For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1\leq n\leq 4\) and \(1\leq k\leq n\), the probability for rejection of the application of \(S_{a}(\mathbf{x};l)\) to the dataset \(\mathbf{D}(n,k,\mu,\sigma)\) is denoted as \(P_{S,a}(n,k,\mu,\sigma)\). The calculated values of \(l\) are presented in Table 5.

### 3.4 Computer System and Software

The datasets were created and the CNNs were designed, trained, and tested using Wolfram Mathematica Ver. 13.0, Wolfram Research, Inc., Champaign, IL, USA, on a computer system with an Intel Core i9-11900K CPU, 128 GB RAM, and an NVIDIA GeForce RTX 3080 Ti GPU, under the Microsoft Windows 11 Professional operating system.
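As a rough illustration of the CNN template, consider the following hypothetical PyTorch sketch. The exact channel counts, pooling type, activations, and linear layer sizes are not fully specified above and are our assumptions, so the node counts will not match Table 4 exactly.

```python
import torch
import torch.nn as nn

class QCCNN(nn.Module):
    """Hypothetical sketch of the 1-D CNN template, not the published networks."""

    def __init__(self, n):
        super().__init__()
        def branch():                              # five 1x1 convolutional layers
            layers = [nn.Conv1d(1, n, kernel_size=1)]
            layers += [nn.Conv1d(n, n, kernel_size=1) for _ in range(4)]
            return nn.Sequential(*layers)
        self.branches = nn.ModuleList(branch() for _ in range(10))  # ten parallel arrays
        self.pool = nn.MaxPool1d(2)                # pooling type assumed
        self.convs = nn.Sequential(*[nn.Conv1d(n, n, kernel_size=1) for _ in range(10)])
        width = n * ((5 * n) // 2)                 # features left after the two poolings
        self.head = nn.Sequential(                 # four linear layers, sizes assumed
            nn.Linear(width, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
            nn.Linear(8, 4), nn.ReLU(),
            nn.Linear(4, 2), nn.Softmax(dim=1),
        )

    def forward(self, x):                          # x: (batch, n) QC measurement tuples
        z = x.unsqueeze(1)                         # one input channel
        z = torch.cat([b(z) for b in self.branches], dim=2)  # catenating layer
        z = self.pool(z)                           # 1-D pooling, kernel size 2
        z = self.pool(self.convs(z))               # ten further 1x1 conv layers, pooling
        return self.head(z.flatten(1))             # two nodes: out of / in control
```

Moreover, since \(S_{a}(\mathbf{x};l)\) rejects when any of \(n\) i.i.d. \(\mathcal{N}(0,1)\) measurements falls outside \([-l,l]\), its probability for false rejection is \(1-(2\Phi(l)-1)^{n}\), and the decision limit can be computed by inverting this expression; the values agree with Table 5.

```python
from scipy.stats import norm

def decision_limit(p_fr, n):
    """l such that 1 - (2 * Phi(l) - 1)**n equals the probability for false rejection."""
    return norm.ppf((1.0 + (1.0 - p_fr) ** (1.0 / n)) / 2.0)

print(decision_limit(0.034726, 1))   # 2.1115..., cf. Table 5 (n = 1, a = 2)
print(decision_limit(0.114545, 2))   # 1.8880..., cf. Table 5 (n = 2, a = 1)
```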
## 4 Results

The plots of the probabilities for rejection \(P_{N,a}(n,k,\mu,\sigma)\) and \(P_{S,a}(n,k,\mu,\sigma)\) of the application of the QC functions to the respective datasets, their differences \(\Delta P_{a}(n,k,\mu,\sigma)\), and their relative differences \(\Delta P_{R,a}(n,k,\mu,\sigma)\) are presented in the _Appendix_. The probabilities for rejection \(P_{N,a}(n,k,\mu,\sigma)\) and \(P_{S,a}(n,k,\mu,\sigma)\), their differences \(\Delta P_{a}(n,k,\mu,\sigma)\) with the respective statistical significance (\(p\)-values), and their relative differences \(\Delta P_{R,a}(n,k,\mu,\sigma)\) are presented in detail in _Supplement 2_. As examples, Figure 2 presents the probabilities for rejection \(P_{N,a}(2,2,\mu,\sigma)\) of the application of the CNN functions to the respective datasets, while Figure 3 presents the probabilities for rejection \(P_{N,6}(3,3,\mu,\sigma)\) and \(P_{S,6}(3,3,\mu,\sigma)\) of the application of the QC functions to the respective datasets.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
CNN & \(n\) & \(a\) & \(P_{N,a}(n,0,1)\) & \(l\) \\ \hline
\multirow{8}{*}{\(N_{a}(x_{1})\)} & \multirow{8}{*}{1} & 1 & 0.127240 & 1.525077 \\
 & & 2 & 0.034726 & 2.111536 \\
 & & 3 & 0.027421 & 2.205468 \\
 & & 4 & 0.023750 & 2.261149 \\
 & & 6 & 0.012258 & 2.504643 \\
 & & 8 & 0.008197 & 2.643835 \\
 & & 12 & 0.005178 & 2.795778 \\
 & & 16 & 0.004186 & 2.863775 \\ \hline
\multirow{8}{*}{\(N_{a}(x_{1},x_{2})\)} & \multirow{8}{*}{2} & 1 & 0.114545 & 1.888090 \\
 & & 2 & 0.046078 & 2.268308 \\
 & & 3 & 0.031890 & 2.407227 \\
 & & 4 & 0.008733 & 2.849716 \\
 & & 6 & 0.016204 & 2.646416 \\
 & & 8 & 0.009049 & 2.838333 \\
 & & 12 & 0.003311 & 3.145682 \\
 & & 16 & 0.002574 & 3.218706 \\ \hline
\multirow{8}{*}{\(N_{a}(x_{1},x_{2},x_{3})\)} & \multirow{8}{*}{3} & 1 & 0.120814 & 2.033406 \\
 & & 2 & 0.041528 & 2.456272 \\
 & & 3 & 0.024233 & 2.646056 \\
 & & 4 & 0.007836 & 3.009274 \\
 & & 6 & 0.009234 & 2.958895 \\
 & & 8 & 0.005146 & 3.135029 \\
 & & 12 & 0.005211 & 3.131339 \\
 & & 16 & 0.002915 & 3.298332 \\ \hline
\multirow{7}{*}{\(N_{a}(x_{1},x_{2},x_{3},x_{4})\)} & \multirow{7}{*}{4} & 1 & 0.101483 & 2.220312 \\
 & & 2 & 0.040159 & 2.56916 \\
 & & 3 & 0.029011 & 2.681327 \\
 & & 4 & 0.004909 & 3.231921 \\
 & & 6 & 0.010381 & 3.010814 \\
 & & 8 & 0.005599 & 3.194108 \\
 & & 12 & 0.003848 & 3.301041 \\ \hline
\end{tabular}
\end{table}
Table 5: The probabilities for false rejection \(P_{N,a}(n,0,1)\) and the calculated values of the decision limits \(l\).

Figure 2: The probabilities for rejection of the CNN \(N_{a}(x_{1},x_{2})\), for \(a\in\{1,2,3,4,6,8,12,16\}\), when applied to two QC measurements, distributed as \(\mathcal{N}(0,\sigma^{2})\) (upper plot) or as \(\mathcal{N}(\mu,1)\) (lower plot).
\(\mathcal{N}(\mu,1)\) (lower plot).

Figure 3: The probabilities for rejection of the CNN \(N_{6}(x_{1},x_{2},x_{3})\) when applied to three QC measurements, distributed as \(\mathcal{N}(0,\sigma^{2})\) (upper plot) or as \(\mathcal{N}(\mu,1)\) (lower plot).

Likewise, Figures 4 and 5 present the differences \(\Delta P_{8}(n,k,\mu,\sigma)\) and the relative differences \(\Delta P_{R,8}(n,k,\mu,\sigma)\) of the probabilities for rejection \(P_{N,8}(n,k,\mu,\sigma)\) and \(P_{S,8}(n,k,\mu,\sigma)\) of the application of the QC functions to the respective datasets.

Figure 4: The differences between the probabilities for rejection of \(N_{8}(\mathbf{x})\) and the respective statistical QC functions vs \(\sigma\).

Figure 5: The relative differences between the probabilities for rejection of \(N_{8}(\mathbf{x})\) and the respective statistical QC functions vs \(\mu\).

The results show that for \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1<k\leq n\), \(0.2<|\mu|\leq 6.0\), and \(1.0\leq\sigma\leq 7.0\) (see _Supplement 2_ and Figures 1-80 and 147-178 of _Appendix_): \[\Delta P_{a}(n,k,\mu,\sigma)\,>0\] For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1<k\leq n\), \(0.2<|\mu|\leq 6.0\), and \(1.2<\sigma\leq 7.0\), the differences are statistically significant (\(p<0.01\)), except for some differences where \(\Delta P_{a}(n,k,\mu,\sigma)<0.002\) (see _Supplement 2_). For \(a\in\{1,2,3,4,6,8,12,16\}\) and \(1<n\leq 4\), most of the differences \(\Delta P_{a}(n,1,\mu,\sigma)\) are negative, with \(-0.03<\Delta P_{a}(n,1,\mu,\sigma)<0\) (see _Supplement 2_). For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1<k\leq n\), and \(|\mu|<0.2\), there are some negative differences \(\Delta P_{a}(n,k,\mu,\sigma)\), however \(|\Delta P_{a}(n,k,\mu,\sigma)|<0.0008\) (see Table 6).
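For the statistical QC functions, the rejection probabilities above have a closed form. The following NumPy/SciPy sketch (an illustration written for this text) assumes that \(k\) of the \(n\) measurements are out of control, distributed as \(\mathcal{N}(\mu,\sigma^{2})\), while the remaining \(n-k\) are in control, \(\mathcal{N}(0,1)\); it reproduces, for example, the values of the illustrative example of Section 5:

```python
# Closed-form rejection probability of the statistical QC function S_a(x; l).
from scipy.stats import norm

def p_reject_statistical(n, k, mu, sigma, l):
    p_in_ic = 2.0 * norm.cdf(l) - 1.0  # in-control measurement within limits
    p_in_ooc = norm.cdf((l - mu) / sigma) - norm.cdf((-l - mu) / sigma)  # out of control
    return 1.0 - p_in_ic ** (n - k) * p_in_ooc ** k

# With l = 2.958895 (Table 5, n = 3, a = 6):
print(p_reject_statistical(3, 3, 0.0, 3.33, 2.958895))  # ~0.754969 = P_{S,6}(3,3,0,3.33)
print(p_reject_statistical(3, 3, 2.67, 1.0, 2.958895))  # ~0.768898 = P_{S,6}(3,3,2.67,1)
```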
Furthermore (see _Supplement 2_):
1. For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1\leq k\leq n\), \(0\leq|\mu_{h}|<|\mu_{j}|\leq 6.0\), and \(1.0\leq\sigma\leq 7.0\) (see Figures 1-146 of _Appendix_): \[P_{N,a}\big{(}n,k,\mu_{h},\sigma\big{)}<P_{N,a}\big{(}n,k,\mu_{j},\sigma\big{)}\]
2. For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1\leq k\leq n\), \(0\leq|\mu|\leq 6.0\), and \(1.0\leq\sigma_{h}<\sigma_{j}\leq 7.0\) (see Figures 1-146 of _Appendix_): \[P_{N,a}(n,k,\mu,\sigma_{h})<P_{N,a}\big{(}n,k,\mu,\sigma_{j}\big{)}\]
3. For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1<n\leq 4\), \(1\leq k_{h}<k_{j}\leq n\), \(0\leq|\mu|\leq 6.0\), and \(1.0\leq\sigma\leq 7.0\) (see Figures 81-104 of _Appendix_): \[P_{N,a}(n,k_{h},\mu,\sigma)<P_{N,a}\big{(}n,k_{j},\mu,\sigma\big{)}\]
4. For \(a_{h}\in\{1,2,3,4,6,8,12\}\), \(a_{j}\in\{2,3,4,6,8,12,16\}\), \(a_{h}<a_{j}\), \(1\leq n\leq 4\), \(1\leq k\leq n\), \(0<|\mu|\leq 6.0\), and \(1.0<\sigma\leq 7.0\) (see Figures 105-114 of _Appendix_): \[P_{N,\,a_{h}}(n,k,\mu,\sigma)>P_{N,\,a_{j}}(n,k,\mu,\sigma)\]
5. For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1\leq n_{h}<n_{j}\leq 4\), \(0<|\mu|\leq 6.0\), and \(1.0<\sigma\leq 7.0\) (see Figures 115-122 of _Appendix_): \[P_{N,a}(n_{h},n_{h},\mu,\sigma)<P_{N,a}\big{(}n_{j},n_{j},\mu,\sigma\big{)}\]
6. For \(a\in\{1,2,3,4,6,8,12,16\}\), \(1\leq n_{h}<n_{j}\leq 4\), \(1\leq k\leq n_{h}\), \(0<|\mu|\leq 6.0\), and \(1.0<\sigma\leq 7.0\) (see Figures 123-146 of _Appendix_): \[P_{N,a}\big{(}n_{j},k,\mu,\sigma\big{)}<P_{N,a}(n_{h},k,\mu,\sigma)\]

## 5 Illustrative example

The performance of the designed CNNs was compared to the performance of the respective statistical functions with equal probabilities for false rejection, for the detection of the critical systematic error of a clinical chemistry assay. Briefly, the coefficient of variation and the bias of the uricase and peroxidase method uric acid assay of the AU5800 (Beckman Coulter, Brea, CA) analyser were estimated as equal to 1.44% and 1.46%, respectively (Xia et al. 2018). The optimal total allowable analytical error for uric acid, based on the biological variation, is equal to 5.98% (Fraser et al. 1997). Therefore, the standardized critical random and systematic errors of the assay, as defined by Linnet (Linnet 1989), are equal to 3.33 and 2.67, respectively. Tables 7 and 8 show the probabilities for critical error detection of the QC functions \(N_{a}(\mathbf{x})\) and the respective statistical QC functions \(S_{a}(\mathbf{x};l)\), their differences and relative differences, as well as their probabilities for false rejection, for \(2\leq n\leq 4\), \(2\leq k\leq n\), and \(a\in\{1,2,3,4,6,8,12,16\}\). All the differences are positive and statistically significant (\(p<10^{-6}\)). The selection of the NN QC function to be applied depends upon the risk of critical error, and the cost of the control measurements and the false rejections. Although the costs can be calculated, the risk can only be approximated.
Hence, an applicable QC function, with a relatively high probability for critical random error detection (\(>0.75\)), a high probability for critical systematic error detection (\(>0.9\)), and a low probability for false rejection (\(<0.01\)), could be \(N_{6}(x_{1},x_{2},x_{3})\) (see Figure 3), as:
1. \(P_{N,6}(3,0,1)=P_{S,6}(3,0,1)=0.009234\)
2. \(P_{N,6}(3,3,0,3.33)=0.780677\)
3. \(P_{S,6}(3,3,0,3.33)=0.754969\)
4. \(\Delta P_{6}(3,3,0,3.33)=0.025708\)
5. \(\Delta P_{R,6}(3,3,0,3.33)=0.034052\)
6. \(P_{N,6}(3,3,2.67,1)=0.922430\)
7. \(P_{S,6}(3,3,2.67,1)=0.768898\)
8. \(\Delta P_{6}(3,3,2.67,1)=0.153532\)
9. \(\Delta P_{R,6}(3,3,2.67,1)=0.199677\)

Therefore, the NN QC function \(N_{6}(x_{1},x_{2},x_{3})\) significantly outperforms the statistical QC function \(S_{6}(x_{1},x_{2},x_{3};2.958895)\), although they have equal probabilities for false rejection.

Table 7: The probabilities for critical systematic error detection \(P_{N,a}(n,k,\mu_{c},1)\) and \(P_{S,a}(n,k,\mu_{c},1)\), their differences \(\Delta P_{a}(n,k,\mu_{c},1)\), and their relative differences \(\Delta P_{R,a}(n,k,\mu_{c},1)\), with the respective decision limits \(l\) and probabilities for false rejection.

Table 8: The probabilities for critical random error detection \(P_{N,a}(n,k,0,\sigma_{c})\) and \(P_{S,a}(n,k,0,\sigma_{c})\), their differences \(\Delta P_{a}(n,k,0,\sigma_{c})\), and their relative differences \(\Delta P_{R,a}(n,k,0,\sigma_{c})\), with the respective decision limits \(l\) and probabilities for false rejection.

## 6 Discussion

The complexity of statistical QC has increased significantly since the 1920s, when it was introduced into industry by Shewhart (Shewhart, 1930), especially after the development of low-cost digital computers on integrated circuits with substantial processing power. Since 1989, the wide availability of more processing power led to the development of NNs and later of DNNs, including CNNs, and their application to statistical QC. Deep learning transforms low-level representations to higher-order abstract ones, extracting features from raw data in multiple stages. CNNs are relatively simple and compact DNNs, with exemplary learning ability and performance (Khan et al., 2020). Particularly, 1-D CNNs simple enough to be implemented in low-cost computer systems have been successfully used to process 1-D signals (Kiranyaz et al., 2021; Kiranyaz, Ince, and Gabbouj, 2016), including control charts (Xu et al., 2019). Although the relationship among a QC procedure, the reliability of a system, the risk of nonconformity to quality specifications, and the QC-related cost is complex (Hatjimihail, 2009), it is obvious that cost increases with the number of the QC measurements required for the safe detection of nonconformity. The application of statistical QC functions to very small samples of QC measurements has been extensively studied (Duncan, 1986; Montgomery, 2019). However, NN QC functions have only been applied to _n_-tuples of QC measurements with \(n>5\). Consequently, it is meaningful to explore their application to _n_-tuples of QC measurements for \(1\leq n\leq 4\). For this project, four 1-D CNNs with 1x1 kernels were designed, applicable to 1 to 4 QC measurements, respectively. As the directly cost-related probability for false rejection is a key index of the performance of a QC function, each of the four 1-D CNNs was trained with eight training datasets, with different ratios of \(n\)-tuples of random simulated QC measurements distributed as \(\mathcal{N}(0,1)\) (see Tables 1-4).
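For illustration, the following is a hypothetical NumPy sketch of how a simulated dataset \(\mathbf{D}(n,k,\mu,\sigma)\) could be generated. The placement of the \(k\) out-of-control measurements at randomly chosen positions of each \(n\)-tuple is an assumption made here; the exact construction is specified earlier, in Section 3:

```python
# Hypothetical generator for D(n, k, mu, sigma): n-tuples in which k measurements
# are out of control, N(mu, sigma^2), while the rest are in control, N(0, 1).
import numpy as np

def make_dataset(n, k, mu, sigma, size, rng):
    x = rng.standard_normal((size, n))              # in-control: N(0, 1)
    for row in x:
        idx = rng.choice(n, size=k, replace=False)  # assumed: random positions
        row[idx] = rng.normal(mu, sigma, size=k)    # out of control: N(mu, sigma^2)
    return x

rng = np.random.default_rng(0)
D = make_dataset(n=3, k=2, mu=2.67, sigma=1.0, size=10_000, rng=rng)
```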
As a result, thirty-two trained 1-D CNNs were obtained, which were applied to testing datasets containing all the possible combinations of measurements. The range of their probabilities for false rejection was from 0.002574 to 0.127240 (see Table 5). Another approach for obtaining varying probabilities for false rejection could be weighting the output of the CNN, with differing penalization of the probability for false rejection. The comparison of the trained 1-D CNNs with the respective statistical QC functions, with the decision limits shown in Table 5, shows that the designed CNNs outperform the respective statistical QC functions for \(2\leq n\leq 4\) and \(2\leq k\leq n\), that is, when the samples they are applied to include at least two QC measurements out of control, distributed either as \(\mathcal{N}(\mu,1)\), for \(0.2<|\mu|\leq 6.0\), or as \(\mathcal{N}(0,\sigma^{2})\), for \(1.0<\sigma\leq 7.0\). However, the statistical QC functions outperform the 1-D CNNs for \(1\leq n\leq 4\) and \(k=1\), that is, when the samples they are applied to include only one QC measurement out of control, distributed either as \(\mathcal{N}(\mu,1)\), for \(0<|\mu|\leq 6.0\), or as \(\mathcal{N}(0,\sigma^{2})\), for \(1.0<\sigma\leq 7.0\), but then \(|\Delta P_{a}(n,1,\mu,\sigma)|<0.028\) (see Figures 1-85 and 147-178 of _Appendix_ and _Supplement 2_). Besides, the probability for rejection of the 1-D CNNs increases with the sample size when all the QC measurements of the sample are out of control (see Figures 115-122 of _Appendix_ and _Supplement 2_), while it decreases with the sample size when the numbers of the out-of-control measurements of each sample are equal (see Figures 123-146 of _Appendix_ and _Supplement 2_). Limitations of this project, which could be addressed by further research, are the following:

* The assumption of normality of either the measurements or their applicable transforms (Gillard 2012; Atkinson, Riani, and Corbellini 2021; Sakia 1992; Box and Cox 1964); however, this assumption is usually valid. Furthermore, the four designed CNNs can be trained and tested using datasets of any parametric or nonparametric distribution.
* The arbitrary selection of the different ratios of \(n\)-tuples of random simulated QC measurements distributed as \(\mathcal{N}(0,1)\) in the simulated datasets, for obtaining varying probabilities for false rejection. Another approach could be weighting the output of the CNNs with differing penalization of the probability for false rejection.
* The arbitrary design of the four CNNs. As the processing power of computer systems is increasing exponentially, efficient methods for the optimal design of NNs will be developed, including evolutionary algorithms (Ding et al. 2013).

The thirty-two trained CNNs can be used as they are, or they can be retrained with different datasets. Their models, including their weights and biases, are available in various formats, to be implemented in any neural network framework (see _Supplement 1_).

## 7 Conclusion

One-dimensional CNNs applied to samples of 2-4 QC measurements can improve the detection of nonconformity of a process to quality specifications, at a lower cost.
2302.01500
Spiking Synaptic Penalty: Appropriate Penalty Term for Energy-Efficient Spiking Neural Networks
Spiking neural networks (SNNs) are energy-efficient neural networks because of their spiking nature. However, as the spike firing rate of SNNs increases, the energy consumption does as well, and thus, the advantage of SNNs diminishes. Here, we tackle this problem by introducing a novel penalty term for the spiking activity into the objective function in the training phase. Our method is designed so as to optimize the energy consumption metric directly without modifying the network architecture. Therefore, the proposed method can reduce the energy consumption more than other methods while maintaining the accuracy. We conducted experiments for image classification tasks, and the results indicate the effectiveness of the proposed method, which mitigates the dilemma of the energy--accuracy trade-off.
Kazuma Suetake, Takuya Ushimaru, Ryuji Saiin, Yoshihide Sawada
2023-02-03T02:30:00Z
http://arxiv.org/abs/2302.01500v1
# Spiking Synaptic Penalty: Appropriate Penalty Term for Energy-Efficient Spiking Neural Networks

###### Abstract

Spiking neural networks (SNNs) are energy-efficient neural networks because of their spiking nature. However, as the spike firing rate of SNNs increases, the energy consumption does as well, and thus, the advantage of SNNs diminishes. Here, we tackle this problem by introducing a novel penalty term for the spiking activity into the objective function in the training phase. Our method is designed so as to optimize the energy consumption metric directly without modifying the network architecture. Therefore, the proposed method can reduce the energy consumption more than other methods while maintaining the accuracy. We conducted experiments for image classification tasks, and the results indicate the effectiveness of the proposed method, which mitigates the dilemma of the energy-accuracy trade-off.

## 1 Introduction

With the rapid growth and spread of neural networks, realizing energy-efficient neural networks is an urgent mission for sustainable development. One such model is the spiking neural network (SNN), which is also known to be more biologically plausible than ordinary artificial neural networks (ANNs). SNNs are energy-efficiently driven on neuromorphic chips (Akopyan et al., 2015; Davies et al., 2018) or certain field-programmable gate arrays (FPGAs) (Maguire et al., 2007; Misra & Saha, 2010) by asynchronously processing spike signals. However, as the spike firing rate of an SNN increases, the energy consumption does as well, and thus, the advantage of the SNN diminishes. Therefore, in addition to the shift from ANNs to SNNs, it is advantageous to adopt training methods that reduce energy consumption in the inference phase. At the same time, such a training method should be independent of the network architecture to avoid limitations in the application. That is, our goal is to develop a training method that realizes energy-efficient SNNs without any constraint on the network architecture. There are various approaches toward energy-efficient SNNs, such as pruning, quantization, and knowledge distillation (Kundu et al., 2021; Chowdhury et al., 2021; Lee et al., 2021), which are also widely used approaches in ANNs. Further, there are SNN-specific approaches sparsifying the spiking activity related to the energy consumption (Lee et al., 2020; Kim & Panda, 2021; Naya et al., 2021), to which our method belongs. In particular, the methods that penalize the spike firing rate in the training phase are close to our aforementioned goal (Esser et al., 2016; Sorbaro et al., 2020; Pellegrini et al., 2021). However, they reduce the energy consumption only indirectly, by reducing the spike firing rate, which has no strict positive correlation with the energy consumption. Hence, reducing energy consumption while maintaining accuracy is difficult.

Principle and Idea. Our principle is that we should optimize the metric as it is in the training phase. In this spirit, we propose to introduce a proper penalty term for the spiking activity--a _spiking synaptic penalty_--into the objective function. It is derived so that its expected value is precisely proportional to the energy consumption metric for the SNN (see Fig. 1).

Figure 1: Comparison among layer-wise penalty terms.
The \(x\)-axis represents the layer number, and the \(y\)-axis represents the ratio of some layer-wise metric or penalty term to some total metric or penalty term. The network architecture is CNN7 (App. A.1), and the computation details are described in Sec. 3. Our penalty term (blue) is precisely proportional to the ground truth (gray).

Although the difference between the proposed and existing methods is only at the spike, we demonstrate that this minor correction causes significant improvement.

#### Main Contributions

* We derived a novel penalty term that can directly optimize the metric for the total energy consumption of an SNN without modifying the network architecture.
* We demonstrated that the proposed method is compatible with the weight decay, which imposes implicit sparsity on the network (Yaguchi et al., 2018), and that the proposed method creates a higher sparsification effect than the weight decay.
* We demonstrated that the proposed method can reduce the energy consumption more than other methods while maintaining the accuracy for image classification tasks, which mitigates the dilemma of the energy-accuracy trade-off.

## 2 Related Work

### Spike Sparsification in Direct SNN Training

The most relevant approaches to our proposal introduce a penalty term for the spike firing rate and directly train SNNs by the surrogate gradient method (Esser et al., 2016; Pellegrini et al., 2021). It is a straightforward idea to penalize the spike firing rate to obtain energy-efficient SNNs because it appears in the SNN energy consumption metric (Lee et al., 2020; Kim and Panda, 2021). We refer to the reduction in the spike firing rate as spike sparsification. Although the spike firing rate cannot be optimized by the ordinary backpropagation method owing to non-differentiability, it can be optimized by the surrogate gradient method, which is the same technique as training spiking neurons of SNNs (Zenke and Ganguli, 2018; Shrestha and Orchard, 2018). However, neither of these penalty terms (Esser et al., 2016; Pellegrini et al., 2021) precisely matches the energy consumption metric. As opposed to them, our spiking synaptic penalty resolves this limitation.

### Spike Sparsification via Conversion from ANN

Other approaches introduce a penalty term for corresponding ReLU networks (ANNs with ReLU activations) and convert them to SNNs (Sorbaro et al., 2020; Narduzzi et al., 2022). Although there is no guarantee that the penalty terms for ReLU networks contribute to the reduction of the energy consumption for converted SNNs, ReLU networks can be optimized by the ordinary backpropagation method. Note that the same synaptic scaling factor for the penalty term as ours is proposed to reduce the energy consumption for SNNs in Sorbaro et al. (2020). However, they failed to provide evidence to support their claim, as they mentioned. As opposed to them, we provide theoretical and experimental proof in the setting of direct SNN training by the surrogate gradient.

### Neuron Sparsification

Neuron sparsification means increasing the number of permanently zero-valued activations--dead neurons. It is a stronger condition than spike sparsification, which does not force neurons to be permanently inactive. In ReLU networks, training with the Adam optimizer and weight decay regularization implicitly induces neuron sparsification (Yaguchi et al., 2018) because ReLU activations have an inactive state. However, this claim has yet to be demonstrated in the context of SNNs, where spiking neurons also have an inactive state, even though weight decay is usually adopted in SNNs.
Therefore, to detect the effect of our method correctly, we shall also focus on the weight decay.

## 3 Method

In this section, we propose the spiking synaptic penalty. First, we describe the spiking neuron model with the surrogate gradient mechanism. Next, we describe the metric for energy consumption, which can be represented by the spiking activity. Note that we need to optimize both the accuracy and the energy efficiency in the training phase. Finally, we state that the spiking synaptic penalty is the proper penalty term to optimize the energy consumption metric.

### Neuron Model and Surrogate Gradient

In this study, we use SNNs constructed from single-step spiking neurons, which are superior to multi-time-step SNNs in terms of training and inference costs for static tasks (Suetake et al., 2023). Note that the single-step spiking neuron is the same setup as that in a previous study of the penalty term (Esser et al., 2016). Let us denote \(l\in\{l\in\mathbf{Z}\mid 1\leq l\leq L\}\) as the layer, \(d_{l}\) as the number of neurons in the \(l\)-th layer, \(\mathbf{s}_{0}=x\in X\subset\mathbf{R}^{d_{0}}\) as the input data, and the subscript \(i\) of any vector as its \(i\)-th component. Then, the single-step spiking neuron is defined as follows (Suetake et al., 2023):

**Definition 3.1**.: The forward mode of a single-step spiking neuron consists of two ingredients, the membrane potential \(\mathbf{u}_{l}\in\mathbf{R}^{d_{l}}\) and the spikes emitted by neurons \(\mathbf{s}_{l}\in\{0,1\}^{d_{l}}\). They are defined using the Heaviside step function \(H\) as follows: \[\mathbf{u}_{l}:=\mathbf{W}_{l}\mathbf{s}_{l-1}, \tag{1}\] \[s_{l,i}\left(u_{l,i}\right):=H\left(u_{l,i}-u_{\mathrm{th}}\right)=\left\{\begin{array}{ll}1&\left(u_{l,i}\geq u_{\mathrm{th}}\right),\\ 0&\left(u_{l,i}<u_{\mathrm{th}}\right),\end{array}\right. \tag{2}\] where \(\mathbf{W}_{l}\in\mathbf{R}^{d_{l}\times d_{l-1}}\) is the strength of the synapse connections, also called the weight matrix, and \(u_{\mathrm{th}}\in\mathbf{R}\) is the spike firing threshold (Eq. 1 for \(l=1\) corresponds to the direct encoding (Rueckauer et al., 2017)).
### Metric for Energy Consumption We prepare the symbol \(\psi_{l,i}\) for the number of synapses outgoing from the \(i\)-th neuron in the \(l\)-th layer, _i.e._, the number of matrix elements in \((\mathbf{W}_{l})_{\ast,i}\in\mathbf{R}^{d_{l}}\) that is not forced to vanish in terms of network architecture. Let us denote \(W_{l},H_{l}\) as the width and height of the feature map, respectively, \(C_{l}\) as the channel size, and \(k_{l}\) as the kernel size in the \(l\)-th layer. We restrict both the kernel width and height to be identical to \(k_{l}\) for the sake of simplicity. Then, explicit forms of \(\psi_{l,i}\) are as follows, _e.g._, the standard fully connected (fc) and two-dimensional convolutional (conv) layers, \[\psi_{l,i}=\psi_{l}=\left\{\begin{array}{ll}C_{l+1}&\left(\mathrm{fc}\right),\\ \frac{W_{l+1}}{W_{l}H_{l}}C_{l+1}k_{l}^{2}&\left(\mathrm{conv}\right),\end{array}\right. \tag{6}\] where the convolutional layer assumes appropriate padding. Note that some non-trivial padding, stride, and dilation can induce \(i\)-dependency of \(\psi_{l,i}\). Using \(\psi_{l,i}\), we can express the number of floating point operations (FLOPs), which is often used as a metric to measure the computational complexity in ANNs, as follows: \[\text{FLOPs}(l):=\sum_{i=1}^{d_{l}}\psi_{l,i}, \tag{7}\] and the layer-wise and total spike firing rates, which are also important metrics to measure the sparsity of spiking activity in SNNs, as follows: \[R(l) :=\mathop{\mathbb{E}}_{x\in X}\left[\frac{\sum_{i=1}^{d_{l}}s_{l,i }}{d_{l}}\right], \tag{8}\] \[R :=\sum_{l=1}^{L}R(l), \tag{9}\] where the operation \(\mathop{\mathbb{E}}_{x\in X}\) means taking the empirical expectation in the dataset \(X\). Then, the energy consumption metric that we should optimize is defined as follows. **Definition 3.2**.: Let us denote \(T\) as the size of time steps and \(E_{\text{AC}}\) as the energy consumption per accumulate operation. Then, the layer-wise and total energy consumption metrics for the SNN are defined as follows: \[E_{\text{SNN}}(l) :=TE_{\text{AC}}\mathop{\mathbb{E}}_{x\in X}\left[\sum_{i=1}^{d_{l }}\psi_{l,i}s_{l,i}\right], \tag{10}\] \[E_{\text{SNN}} :=\sum_{l=1}^{L}E_{\text{SNN}}(l). \tag{11}\] Note that \(T\) is equal to one for the single-step neuron model, and we use \(E_{\text{AC}}=0.9\) [J] (Horowitz, 2014). If \(\psi_{l,i}\) is independent of \(i\) (\(\exists\psi_{l},\forall i,\psi_{l,i}=\psi_{l}\)), by combining Eqs. 7 and 8, Eq. 10 is rewritten as follows: \[E_{\text{SNN}}(l) =TE_{\text{AC}}\sum_{i=1}^{d_{l}}\psi_{l}\mathop{\mathbb{E}}_{x \in X}\left[\frac{\sum_{i=1}^{d_{l}}s_{l,i}}{d_{l}}\right]\] \[=TE_{\text{AC}}\text{FLOPs}(l)R(l), \tag{12}\] which is the same metric as that used in Kim and Panda (2021). For the sake of simplicity, we consider the case where \(\psi_{l,i}\) is independent of \(i\) in Sec. 4. ### Spiking Synaptic Penalty To optimize the energy consumption \(E_{\text{SNN}}\) (Eq. 11), we propose the following penalty terms. **Definition 3.3**.: The layer-wise and total spiking synaptic penalty terms are defined as follows: \[\Omega_{\text{syn}}(l) =\Omega_{\text{syn}}(l,\mathbf{s}_{l}):=\frac{1}{p}\sum_{i=1}^{d_{l}} \psi_{l,i}s_{l,i}^{p}, \tag{13}\] \[\Omega_{\text{syn}} =\Omega_{\text{syn}}(\mathbf{s}):=\sum_{l=1}^{L}\Omega_{\text{syn}}(l, \mathbf{s}_{l}), \tag{14}\] where \(\mathbf{s}:=\{\mathbf{s}_{l}\}_{l=1}^{L}\) and \(p\geq 1\). 
### Spiking Synaptic Penalty

To optimize the energy consumption \(E_{\text{SNN}}\) (Eq. 11), we propose the following penalty terms.

**Definition 3.3**.: The layer-wise and total spiking synaptic penalty terms are defined as follows: \[\Omega_{\text{syn}}(l)=\Omega_{\text{syn}}(l,\mathbf{s}_{l}):=\frac{1}{p}\sum_{i=1}^{d_{l}}\psi_{l,i}s_{l,i}^{p}, \tag{13}\] \[\Omega_{\text{syn}}=\Omega_{\text{syn}}(\mathbf{s}):=\sum_{l=1}^{L}\Omega_{\text{syn}}(l,\mathbf{s}_{l}), \tag{14}\] where \(\mathbf{s}:=\{\mathbf{s}_{l}\}_{l=1}^{L}\) and \(p\geq 1\).

The equivalency between the total energy consumption metric and the total spiking synaptic penalty immediately follows from their definitions and the equation \(s_{l,i}^{p}=s_{l,i}\), which is derived from Eq. 2.

**Theorem 3.4**.: _The expected values of the layer-wise and total spiking synaptic penalties are precisely proportional to the layer-wise and total energy consumption metrics of SNNs:_ \[pE_{\text{AC}}\mathop{\mathbb{E}}_{x\in X}\left[\Omega_{\text{syn}}(l)\right]=E_{\text{SNN}}(l), \tag{15}\] \[pE_{\text{AC}}\mathop{\mathbb{E}}_{x\in X}\left[\Omega_{\text{syn}}\right]=E_{\text{SNN}}, \tag{16}\] _for arbitrary \(p\geq 1\)._

This fact means that optimizing Eq. 14 leads to optimizing Eq. 11. Hence, we strongly propose to use Eq. 14 as the penalty term to optimize the energy consumption metric. In the following, we mean the total spiking synaptic penalty when we simply write the _spiking synaptic penalty_ (and similarly for the total metrics, Eqs. 9 and 11).

_Remark 3.5_.: The proposed penalty term can be optimized in the manner of the surrogate gradient as in Sec. 3.1, and using \(p\neq 1\) options controls the backward signal when the spike does not fire as follows: \[\frac{1}{p}\frac{\partial s_{l,i}^{p}}{\partial u_{l,i}}\left(u_{l,i}\right)=s_{l,i}^{p-1}\frac{\partial s_{l,i}}{\partial u_{l,i}}\simeq\left\{\begin{array}{ll}\frac{\partial s_{l,i}}{\partial u_{l,i}}&\left(u_{l,i}\geq u_{\text{th}}\right),\\ 0&\left(u_{l,i}<u_{\text{th}}\right),\end{array}\right. \tag{17}\] where we used Eq. 2. From Eqs. 13 and 17, there is no intrinsic difference among the \(p>1\) options; hence, we do not consider \(p>1\) options except for \(p=2\), which is commonly used in several studies (Esser et al., 2016; Pellegrini et al., 2021). Note that the derivation of Eq. 17 is clearly independent of the choice of the surrogate gradient. However, an open problem still remains, _i.e._, it cannot be theoretically decided which choice is better, \(p=1\) or \(p>1\). We examined it experimentally for \(p=1,2\), as described in Sec. 4.

#### 3.3.1 Differences from Other Penalty Terms

The other candidates for the penalty term are as follows: \[\Omega_{\text{total}}=\frac{1}{p}\sum_{l=1}^{L}\sum_{i=1}^{d_{l}}s_{l,i}^{p}, \tag{18}\] \[\Omega_{\text{balance}}=\frac{1}{p}\sum_{l=1}^{L}\sum_{i=1}^{d_{l}}\frac{1}{d_{l}}s_{l,i}^{p}, \tag{19}\] where, for \(p=2\), \(\Omega_{\text{total}}\) and \(\Omega_{\text{balance}}\) are the same as those in Esser et al. (2016) and Pellegrini et al. (2021), respectively. However, we claim that neither can directly optimize the energy consumption because they do not have the proportionality property (Eq. 16), although they sparsify the spiking activity to some extent. Figs. 1 and 2 and Table 1 show the discrepancy between the energy consumption metric and the penalty terms of the models used in the following experiment. In these figures and the table, we assumed that all the spiking neurons fired, _i.e._, \(s_{l,i}=1\ (\forall l,i)\), for the sake of simplicity. Under this assumption, the ground truth is proportional to FLOPs without loss of generality. These figures and the table indicate that the spiking synaptic penalty is precisely proportional to the energy consumption metric, but the other penalties are not. In the next section, we will experimentally verify how this claim affects performance.
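Before that, the following toy NumPy sketch contrasts the three penalty terms at \(\mathbf{s}=\mathbf{1}\), in the spirit of Table 1. The architecture is made up for illustration, and the conv layers assume equal input and output spatial sizes in Eq. 6, so that \(\psi_{l}=C_{l+1}k_{l}^{2}\):

```python
import numpy as np

# Per-layer (number of neurons d_l, synapses per neuron psi_l):
layers = [(32 * 32 * 16, 32 * 3 ** 2),  # conv, 16 -> 32 channels, 3x3 kernel
          (32 * 32 * 32, 64 * 3 ** 2),  # conv, 32 -> 64 channels, 3x3 kernel
          (256, 10)]                    # fc: psi_l = C_{l+1}

s = [np.ones(d) for d, _ in layers]     # s_{l,i} = 1 for all l, i, as in Fig. 1

omega_syn     = sum(psi * sl.sum() for (_, psi), sl in zip(layers, s))  # Eq. 14, p = 1
omega_total   = sum(sl.sum() for _, sl in zip(layers, s))               # Eq. 18, p = 1
omega_balance = sum(sl.mean() for _, sl in zip(layers, s))              # Eq. 19, p = 1
ground_truth  = sum(psi * sl.sum() for (_, psi), sl in zip(layers, s))  # E_SNN/E_AC, T = 1

print(omega_syn == ground_truth)   # True: exact match, as in Theorem 3.4
print(omega_total, omega_balance)  # neither is proportional to the ground truth

# Normalized intensity, introduced in the next subsection: lambda' = lambda / Omega_*(1).
lam_prime = 1e-3 / omega_syn
```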
#### 3.3.2 Normalization of Penalty Terms

Penalty terms are included in the objective function with their intensity parameter \(\lambda\) as the coupling \(\lambda\Omega_{*}\), where the symbol \(*\) denotes "syn", "total", or "balance". For tractable treatment of the intensity parameter between various penalty terms or among models of various scales, we recommend normalizing the penalty terms by \(\Omega_{*}(\mathbf{1})\), where \(\mathbf{1}\) indicates \(\mathbf{s}=\mathbf{1}\), _i.e._, \(s_{l,i}=1\,\forall l,i\). Note that replacing \(\Omega_{*}\) with \(\Omega_{*}/\Omega_{*}(\mathbf{1})\) is equivalent to replacing \(\lambda\) with \[\lambda^{\prime}=\lambda/\Omega_{*}(\mathbf{1}). \tag{20}\] Hence, we sometimes adopt the normalized notation \(\lambda^{\prime}\) instead of \(\lambda\).

\begin{table} \begin{tabular}{l r r r r} \hline \hline Model & \(E_{\text{SNN}}/E_{\text{AC}}\) & \(\Omega_{\text{syn}}\) & \(\Omega_{\text{total}}\) & \(\Omega_{\text{balance}}\) \\ \hline CNN7 & 98895888 & 98895888 & 59688 & 6 \\ VGG11 & 2526060544 & 2526060544 & 249856 & 10 \\ ResNet18 & 553730048 & 553730048 & 671744 & 20 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison among total penalty terms. Our penalty term \(\Omega_{\text{syn}}\) is precisely equal to the ground truth \(E_{\text{SNN}}/E_{\text{AC}}\).

Figure 2: Comparison among layer-wise penalty terms. The network architectures are (A) VGG11 and (B) ResNet18 (App. A.1). As in Fig. 1, our penalty term (blue) is precisely proportional to the ground truth (gray).

## 4 Experiment

In this section, we evaluate the effectiveness of the proposed spiking synaptic penalty. First, we describe the setup for the experiments. Next, we show that the proposed method can decrease the energy consumption. Finally, we show that the proposed method can reduce the energy consumption more than other methods while maintaining the accuracy. In particular, we show that the proposed method is independent of the choice of a function for the surrogate gradient and better than the conversion approach (Sorbaro et al., 2020). Overall, the main objective is to analyze the behavior of our method rather than to achieve state-of-the-art performances.

### Experimental Setup

As the single-step spiking neuron is developed for static tasks (Suetake et al., 2023), we experimented on the Fashion-MNIST (Xiao et al., 2017), CIFAR-10, and CIFAR-100 (Krizhevsky, 2009) datasets widely used in SNN experiments (Esser et al., 2016; Zhang and Li, 2020; Chowdhury et al., 2021), with the network architectures CNN7, VGG11, and ResNet18 (refer to App. A.1 for details). For these experiments, we implemented the program with the PyTorch framework and used one GPU, an NVIDIA GeForce RTX 3090, with 24 GB (refer to Table B.1 in the appendix for the difference in the training time).
In these experiments, we used the following objective function: \[L=\frac{1}{N}\sum_{n=1}^{N}\left(CE(f(x_{n}),t_{n})+\lambda\Omega_{\mathrm{syn}}(\mathbf{s}(x_{n}))\right)+\lambda_{\mathrm{WD}}\left(\|\mathbf{W}\|_{L_{2}}^{2}+B_{\mathrm{BN}}\|\mathbf{W}_{\mathrm{BN}}\|_{L_{2}}^{2}\right), \tag{21}\] where \((x_{n},t_{n})\) denotes the pair of input data and its label, \(f\) denotes some spiking neural network, \(CE\) denotes the cross-entropy function, \(\|\mathbf{W}\|_{L_{2}}^{2}\) denotes the \(L_{2}\) penalty for the weights, _i.e._, the weight decay, \(\|\mathbf{W}_{\mathrm{BN}}\|_{L_{2}}^{2}\) denotes the \(L_{2}\) penalty for the trainable parameters of the batch normalization layers, \(\lambda\) and \(\lambda_{\mathrm{WD}}\) denote the intensities of the penalties, and \(B_{\mathrm{BN}}\in\{0,1\}\). Note that we included the weight decay in the objective function to verify its sparsifying effect (Yaguchi et al., 2018) in the context of SNNs. In addition, we explicitly specified the weight decay for the batch normalization layers because it is included in the default setting of the PyTorch framework (Paszke et al., 2019) but not in Yaguchi et al. (2018). The training of \(f\) was done by the backpropagation algorithm with the surrogate gradient of Eq. 3 unless otherwise stated. The optimizer was selected from momentum SGD (mSGD) and Adam to confirm the claims in Yaguchi et al. (2018). Refer to App. A for further details of the experimental setup such as hyperparameters.

### Energy Reduction by Spiking Synaptic Penalty

We investigated whether optimizing the spiking synaptic penalty (Eq. 14) led to optimizing the energy consumption metric (Eq. 11) and whether there was any conflict with other terms in the objective function (Eq. 21). The setting was as follows. The baseline model was trained for Eq. 21 with \(\lambda=\lambda_{\mathrm{WD}}=0\). The other models were trained from scratch with some combinations of the weight decay (\(\lambda_{\mathrm{WD}}>0\)), the \(L_{2}\) penalty for batch normalization layers (\(B_{\mathrm{BN}}=1\)), and the \(p=1\) spiking synaptic penalty (\(\lambda>0\)). The results are presented in Tables 2 and 3. Note that the values of \(\lambda\) and \(\lambda_{\mathrm{WD}}\) in these tables range from a point where the accuracy was very low (approximately \(20\%\)) to the point where the accuracy reached its upper bound and stopped changing. From the result in Table 2, we can observe the following. First, as the intensity of the penalty term increases, the energy consumption metric decreases; the inference accuracy also decreases. Therefore, the intensity parameter \(\lambda\) controls the trade-off between them. Second, the combination of the spiking synaptic penalty and the weight decay further reduces the energy consumption metric. Therefore, we propose to adopt both of them simultaneously. In addition, we found that the combination of the weight decay and the Adam optimizer induces neuron sparsification even without the spiking synaptic penalty, though its contribution to the energy reduction is smaller than that of the spiking synaptic penalty. Furthermore, the neuron sparsification proceeds more strongly for the Adam optimizer than for the mSGD optimizer, which is consistent with the claim in Yaguchi et al. (2018) (see also Table 3). Note that we cannot observe a remarkable difference between \(B_{\mathrm{BN}}=0\) and \(1\). Therefore, we adopt the weight decay with \(B_{\mathrm{BN}}=0\) in further experiments to simplify our objective function.
Finally, all the above results hold not only for VGG11 but also for CNN7 and ResNet18 (see Tables 8-11 in the appendix).

### Trade-off between Accuracy and Energy Efficiency

#### 4.3.1 Comparison between Penalties

To see how the difference between the various penalty terms affected the training result, we conducted a comparative experiment. The setting was as follows. For a fair comparison, we used the \(\lambda^{\prime}\) notation for the intensity parameter of the penalties rather than the raw \(\lambda\) (see Eq. 20). The baseline model was trained for Eq. 21 with \(\lambda^{\prime}=B_{\mathrm{BN}}=0\), and we tuned \(\lambda_{\rm WD}>0\) to obtain the highest accuracy. Then, the others were trained from scratch by varying \(\lambda^{\prime}>0\) and by replacing \(\Omega_{\rm syn}\) in Eq. 21 with \(\Omega_{\rm total}\) or \(\Omega_{\rm balance}\). The results are shown in Fig. 3 (A) as \(\lambda^{\prime}\)-parameterized curves of the energy-accuracy trade-off, where the energy consumption rate was computed as the energy consumption of each model normalized by that of the baseline model. Note that it is better for data to be located at the upper left corner of the figure. Refer to App. A.3 for the sampling of \(\lambda^{\prime}\). In addition, a quantitative analysis is presented in Table 4, where higher scores are better for all the metrics: the area under the curve (AUC), Spearman's rank correlation coefficient (Spearman), and the mutual information (MI). Note that the argument of each metric represents a cutoff parameter; data with accuracy lower than the cutoff are omitted. We introduced the cutoff parameter because training tended to break as the intensity parameter was increased, for all the methods. Refer to App. A.4 for details of the quantitative metrics. From the result in Fig. 3 (A) and Table 4, we can observe the following. First, for each \(\Omega_{*}\), the \(p=1\) option is apparently better than the \(p=2\) option. Therefore, we propose to adopt the \(p=1\) option. Note that this difference arises from the backward control of Eq. 17. However, the mechanism by which it affects the training result is still unclear; it remains our future work. Second, for \(p=1\), the trade-off curve of \(\Omega_{\rm syn}\) is the best, followed in order by \(\Omega_{\rm total}\) and \(\Omega_{\rm balance}\). Therefore, we experimentally clarified the advantage of the coefficient \(\psi_{l,i}\) in Eq. 13, which had remained an open issue in the method proposed by Sorbaro et al. (2020). Finally, all the above results hold not only for CNN7 but also for VGG11 and ResNet18 (see Fig. 5 and Tables 12-17 in the appendix).

#### 4.3.2 Independence of Surrogate Gradients

To see how the choice of a function for the surrogate gradient affected the training result, we conducted the same experiment as that in Sec. 4.3.1 except for the choice of the function. Instead of Eq. 3, we adopted the piece-wise linear function (Esser et al., 2016) and the scaled sigmoid function (Pellegrini et al., 2021) for the surrogate gradient as follows: \[\frac{\partial s}{\partial u}\simeq\max\left(1-|u-u_{\rm th}|,0\right), \tag{22}\] \[\frac{\partial s}{\partial u}\simeq\frac{\partial\sigma_{\alpha}}{\partial u}, \tag{23}\] where \(\sigma_{\alpha}\) is the same as in Eq. 5. The results are presented in Figs. 3 (B) and (C), and Tables 18 and 19 in the appendix. From the results in Figs. 3 (B) and (C), the same observations as those in Sec. 4.3.1 hold.
That is, the \(p=1\) option is apparently better than the \(p=2\) option; the trade-off curve of \(\Omega_{\rm syn}\) is the best, followed in order by \(\Omega_{\rm total}\) and \(\Omega_{\rm balance}\). Therefore, the spiking synaptic penalty works independently of the choice of a function for the surrogate gradient.

Table 2: Performance with respect to varying \(\lambda\), \(\lambda_{\rm WD}\), and \(B_{\rm BN}\) (columns: \(\lambda\); \(\lambda_{\rm WD}\); \(B_{\rm BN}\); Accuracy [\%]; \(E_{\rm SNN}/E_{\rm baseline}\) [\%]; Dead rate [\%]; \(R\) [\%]). The network architecture is VGG11, the dataset is CIFAR-10, and the optimizer is Adam. \(E_{\rm SNN}\) denotes the energy consumption metric for the SNN (Eq. 11), \(E_{\rm baseline}\) denotes the \(E_{\rm SNN}\) for the model with \(\lambda=\lambda_{\rm WD}=B_{\rm BN}=0\), the dead rate denotes the ratio of the number of dead neurons to the number of total neurons, and \(R\) denotes the spike firing rate (Eq. 9).

#### 4.3.3 Superiority to Conversion Approach

To see how the difference between training methods affected the training result, we also produced the trade-off curve for the conversion approach (Sorbaro et al., 2020). We trained a single QReLU network (an ANN with quantized ReLU activations) while increasing the intensity of the penalty, and evaluated the converted SNNs for each intensity in different time steps: \(T=1,5\), and \(10\) (refer to the original paper (Sorbaro et al., 2020) for details). The results are presented in Fig. 3 (D) and Table 20 in the appendix, where the energy consumption was normalized by that of the baseline for \(\Omega_{\mathrm{syn}}\). From the result in Fig. 3 (D), we can observe that both the energy consumption and the accuracy for the conversion approach are worse than those for the surrogate gradient approach. This is because the conversion process degrades the accuracy, and the penalty term for the QReLU network cannot directly optimize the energy consumption metric for the SNN. Hence, we should directly train SNNs by the surrogate gradient and the spiking synaptic penalty to avoid such degradation.

## 5 Conclusion

We studied a training method to obtain energy-efficient SNNs in terms of the surrogate gradient. Based on our principle that we should optimize the metric as it is, we derived the spiking synaptic penalty to optimize the energy consumption metric. Then, we experimentally showed that the spiking synaptic penalty (especially for \(p=1\)) is superior to the existing penalties and the conversion approach.
Furthermore, its effectiveness is indifferent to the network architecture and the choice of surrogate gradient. We conclude that our principle has worked well. An apparent limitation is that the definition of the spiking synaptic penalty depends on that of the energy consumption metric. However, if the target metric is deformed, the penalty should be deformed accordingly, in line with our principle--even if it is a metric irrelevant to the energy consumption. Another limitation is that although the target metric is directly included in the objective function, it is only optimized indirectly through the surrogate gradient. To overcome this issue, we need an alternative training method such as equilibrium training (Xiao et al., 2021). We further list some outstanding issues. First, it is unclear why there was a difference between the training results for \(p=1\) and \(2\) of the spiking synaptic penalty. Elucidating the mechanism of this difference could help us understand the surrogate gradient. Second, we did not focus on the synergy between spike sparsification and pruning. A pruning-aware sparsification training will help us obtain more energy-efficient SNNs. Finally, the broad applicability of the spiking synaptic penalty should be verified, for example, in the case of real datasets, large networks, and other tasks. By solving these issues, we can contribute to the realization of genuinely eco-friendly SNNs. To facilitate further research, we will release the code shortly.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & AUC(\(70\))[\(\%\)] & AUC(\(50\))[\(\%\)] & Spearman(\(70\)) & Spearman(\(50\)) & MI(\(70\)) & MI(\(50\)) \\ \hline \(\Omega_{\mathrm{syn}}\left(p=1\right)\) (Ours) & **68.02** & **79.60** & **0.9861** & **0.9865** & **3.465** & **3.610** \\ \(\Omega_{\mathrm{syn}}\left(p=2\right)\) (Ours) & 61.62 & 72.69 & 0.9474 & 0.9709 & 3.233 & 3.476 \\ \(\Omega_{\mathrm{total}}\left(p=1\right)\) & 63.30 & 76.05 & 0.9766 & 0.9767 & 3.244 & 3.319 \\ \(\Omega_{\mathrm{total}}\left(p=2\right)\) & 55.16 & 64.63 & 0.9831 & 0.9701 & 3.218 & 3.295 \\ \(\Omega_{\mathrm{balance}}\left(p=1\right)\) & 54.23 & 67.16 & 0.9412 & 0.9412 & 2.978 & 2.978 \\ \(\Omega_{\mathrm{balance}}\left(p=2\right)\) & 31.47 & 42.94 & 0.8500 & 0.8946 & 2.708 & 2.833 \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative comparison corresponding to Fig. 3 (A). Higher scores are better. The best and the second-best results are highlighted in bold and underlined, respectively. Refer to App. A.4 for details of the quantitative metrics.

Table 3: Performance with respect to varying \(\lambda\), \(\lambda_{\mathrm{WD}}\), and \(B_{\mathrm{BN}}\) (columns as in Table 2). The optimizer is the mSGD.
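Until then, a minimal PyTorch sketch of one training step for the objective of Eq. 21 might look as follows. The `model` interface (returning logits and the list of per-layer spike tensors) and the hyperparameter values are assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

lam, lam_wd = 1e-9, 5e-4  # illustrative intensities (lam plays the role of lambda')

def training_step(model, psi, optimizer, x, t):
    """One step on the objective of Eq. 21 with the p = 1 spiking synaptic penalty.
    `model(x)` is assumed to return (logits, [s_1, ..., s_L]); `psi` holds psi_l."""
    logits, spikes = model(x)
    # Omega_syn averaged over the batch (Eqs. 14 and 21), psi_l constant per layer:
    penalty = sum(p * s.sum(dim=tuple(range(1, s.dim()))).mean()
                  for p, s in zip(psi, spikes))
    # Weight decay over all parameters (B_BN = 1); for B_BN = 0, one would
    # exclude the batch-normalization parameters from this sum.
    wd = sum((w ** 2).sum() for w in model.parameters())
    loss = F.cross_entropy(logits, t) + lam * penalty + lam_wd * wd
    optimizer.zero_grad()
    loss.backward()       # the penalty is differentiated via the surrogate gradient
    optimizer.step()
    return loss.item()
```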
2305.14814
What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small sizes, or study graph classification or regression tasks, while prediction tasks on nodes are far more relevant on large graphs. Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows. In this paper, we provide a more complete and intuitive description of the function space generated by equivariant GNNs for node-tasks, through general notions of convergence that encompass several previous examples. We emphasize the role of input node features, and study the impact of node Positional Encodings (PEs), a recent line of work that has been shown to yield state-of-the-art results in practice. Through the study of several examples of PEs on large random graphs, we extend previously known universality results to significantly more general models. Our theoretical results hint at some normalization tricks, which are shown numerically to have a positive impact on GNN generalization on synthetic and real data. Our proofs contain new concentration inequalities of independent interest.
Nicolas Keriven, Samuel Vaiter
2023-05-24T07:09:53Z
http://arxiv.org/abs/2305.14814v1
# What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding

###### Abstract

We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small sizes, or study graph classification or regression tasks, while prediction tasks on _nodes_ are far more relevant on large graphs. Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows. In this paper, we provide a more complete and intuitive description of the function space generated by equivariant GNNs for node-tasks, through general notions of convergence that encompass several previous examples. We emphasize the role of input node features, and study the impact of _node Positional Encodings_ (PEs), a recent line of work that has been shown to yield state-of-the-art results in practice. Through the study of several examples of PEs on large random graphs, we extend previously known universality results to significantly more general models. Our theoretical results hint at some normalization tricks, which are shown numerically to have a positive impact on GNN generalization on synthetic and real data. Our proofs contain new concentration inequalities of independent interest.

## 1 Introduction

Machine learning on graphs with Graph Neural Networks (GNNs) [53, 5] is now a well-established domain, with application fields ranging from combinatorial optimization [6] to recommender systems [50, 11], physics [45, 1], chemistry [16], epidemiology [37], physical networks such as power grids [41], and many more. Despite this, there is still much that is not properly understood about GNNs, both empirically and theoretically, and their performances are not always consistent [52, 22], compared to simple baselines in some cases. It is generally admitted that a better theoretical understanding of GNNs, especially of their fundamental limitations, is necessary to design better models in the future. Theoretical studies of GNNs have largely focused on their _expressive power_, kickstarted by a seminal study [54] that relates their ability to _distinguish non-isomorphic graphs_ to the historical Weisfeiler-Lehman (WL) test [51]. Following this, many works have defined improved versions of GNNs to be "more powerful than WL" [34, 35, 26, 49, 38], often by augmenting GNNs with various features, or by implementing "higher-order" versions of the basic message-passing paradigm. Among the simplest and most effective ideas to "augment" GNNs is the use of _Positional Encodings_ (PEs) as input to the GNN, inspired by the vocabulary of Transformers [48]. The idea is to equip nodes with carefully crafted input features that would help break some of the indeterminacy in the subsequent message-passing framework. In early works, unique and/or random node identifiers have been used [32, 47], but they technically break the permutation-invariance/equivariance (consistency with a reordering of the nodes in the graph) of the GNN. Most PEs in the current literature are based on eigenvectors of the adjacency matrix or Laplacian of the graph [12, 13] (with recent variants to handle the sign/basis indeterminacy [31]), random walks [13], node metrics [56, 30], or subgraphs [4]. Some of these have been shown to have an expressive power beyond WL [30, 4, 31].
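For concreteness, a minimal NumPy sketch of one such eigenvector-based PE is given below (an illustration, not any of the cited implementations; it assumes a connected graph so that only the first eigenvector is trivial). Note the sign ambiguity of eigenvectors, which SignNet-type architectures are designed to handle:

```python
import numpy as np

def laplacian_eigenvector_pe(A, k):
    """First k non-trivial eigenvectors of the normalized Laplacian as node PEs."""
    deg = A.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))             # guards isolated nodes
    L = np.eye(A.shape[0]) - d[:, None] * A * d[None, :]  # normalized Laplacian
    _, eigvecs = np.linalg.eigh(L)                        # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                            # drop the trivial eigenvector
```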
In some contexts, however, WL-based analyses have limitations: they pertain to tasks on graphs (e.g. graph classification or regression) and have limited to no connection to tasks on nodes or links; and they are mostly relevant for small-scale graphs, as medium or large graphs are never isomorphic but exhibit different characteristics (e.g. community structures). At the other end of the spectrum, the properties of GNNs on _large_ graphs have been analysed in the context of latent position Random Graphs (RGs) [24, 25, 43, 44, 29, 2, 36], a family of models slightly more general than graphons [33]. Such statistical models of large graphs are classically used in graph theory [18, 9] to model various data such as epidemiological [27, 39], biological [17], social [20], or protein-protein interaction [19] networks, and are still an active area of research [9]. For GNNs, the use of such models has shed light on their stability to deformations of the model [44, 29, 24], expressive power [25], generalisation [15, 36], or some phenomena such as oversmoothing [23, 3]. One basic idea is that, as the number of nodes in a random graph grows, GNNs converge to "continuous" equivalents [24, 8], whose properties are somewhat easier to characterize than their discrete counterparts. As prediction tasks on _nodes_ are far more common and relevant on large graphs modelled by random graphs, this paper will focus on _permutation-equivariant_ GNNs, rather than permutation-invariant ones. In the limit, it has been shown that their outputs converge to _functions_ over some latent space to label the nodes, but the description of this space of functions and its properties is still very much incomplete. A partial answer was given in [25], in which some universality properties are established, but for limited models of random graphs _with no random edges_, and for specific models of GNNs that do not include node features or PEs.

Contributions. In this paper, we significantly extend existing results by providing a **complete description of the function space generated by permutation-equivariant GNNs** (Theorem 1), in terms of simple stability rules, and show that it is equivalent to previous implicit definitions that were based on convergence bounds. We outline the **role of the input node features**, and particularly of Positional Encodings (PEs). We then study several representative examples of PEs on large random graphs. In particular, we analyze SignNet [31] (eigenvector-based) PEs (Theorem 2), and distance-based PEs [30] (Theorem 3). We derive simple normalization rules that are necessary for convergence, and show that they are relevant even on real data. Finally, our proofs contain new universality results for square-integrable functions and new concentration inequalities that are of independent interest. All technical proofs, and the code to reproduce the figures, are available as supplementary material or in the Appendix.

## 2 Background on Random Graphs and Graph Neural Networks

Let us start with generic notations and definitions. The norm \(\left\lVert\cdot\right\rVert\) is the Euclidean norm for vectors and the operator norm for matrices and compact operators between Hilbert spaces. The latent space \(\mathcal{X}\) is a compact metric set with a probability distribution \(P\) over it. Square-integrable functions from \(\mathcal{X}\) to \(\mathbb{R}^{q}\) w.r.t.
\(P\) are denoted by \(L_{q}^{2}\), and are equipped with the Hilbertian norm \(\left\lVert f\right\rVert_{L^{2}}^{2}\stackrel{{\text{\tiny def.}}}{{=}}\int_{\mathcal{X}}\left\lVert f(x)\right\rVert^{2}dP(x)\). The (disjoint) union of multidimensional functions \(L_{\sqcup}^{2}\stackrel{{\text{\tiny def.}}}{{=}}\bigsqcup_{q\in\mathbb{N}^{*}}L_{q}^{2}\) is a metric space for a metric defined as \(\left\lVert f-g\right\rVert_{L^{2}}\) if \(f,g\in L_{q}^{2}\) for some \(q\), and \(1\) otherwise. Continuous _Lipschitz_ functions between metric spaces \(\mathcal{X}\rightarrow\mathcal{Y}\) are denoted by \(\mathcal{C}_{\text{Lip}}(\mathcal{X},\mathcal{Y})\). For \(X=\left\{x_{1},\ldots,x_{n}\right\}\) where \(x_{i}\in\mathcal{X}\), we define the sampling of \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) as \(\iota_{X}f=\left[f(x_{i})\right]_{i=1}^{n}\in\mathbb{R}^{n\times d}\). Given \(Z\in\mathbb{R}^{n\times d}\), the Frobenius norm is \(\left\lVert Z\right\rVert_{\text{F}}\) and we define the normalized Frobenius norm as \(\left\lVert Z\right\rVert_{\text{MSE}}=n^{-\frac{1}{2}}\left\lVert Z\right\rVert_{\text{F}}\). The notation comes from the fact that \(\left\lVert\iota_{X}(f-f^{\star})\right\rVert_{\text{MSE}}^{2}=n^{-1}\sum_{i}\left\lVert f(x_{i})-f^{\star}(x_{i})\right\rVert^{2}\), which is akin to an empirical Mean Square Error.

Latent position Random Graphs. In this paper, we consider _latent position random graphs_ [20, 28, 33], a family of models that includes Stochastic Block Models (SBMs), graphons, random geometric graphs, and many other examples. They are the primary models used for the study of GNNs in the literature [24, 25, 29, 43]. We generate a graph \(G=(X,A,Z)\), where \(X\in\mathbb{R}^{n\times d}\) are _unobserved_ latent variables, \(A\in\{0,1\}^{n\times n}\) is its symmetric adjacency matrix, and \(Z\in\mathbb{R}^{n\times p}\) are (optional) observed node features. The latent variables and the adjacency matrix are generated as follows: \[\forall i,\ x_{i}\stackrel{{ iid}}{{\sim}}P,\qquad\forall i<j,\ a_{ij}\sim\text{Bernoulli}(\alpha_{n}w(x_{i},x_{j}))\quad\text{independently} \tag{1}\] where \(w:\mathcal{X}\times\mathcal{X}\to[0,1]\) is a continuous _connectivity kernel_ and \(\alpha_{n}\) is the _sparsity level_ of the graph, such that the expected degrees are in \(\mathcal{O}\left(n\alpha_{n}\right)\). Non-dense graphs can be obtained with \(\alpha_{n}=o(1)\); here we will go down to the _relatively sparse_ case \(\alpha_{n}\gtrsim(\log n)/n\), a classical choice in the literature [28, 24]. Note that the continuity hypothesis of the kernel \(w\) is not really restrictive: neither \(\mathcal{X}\) nor the support of the distribution \(P\) needs to be connected. For instance, SBMs can be obtained by taking \(\mathcal{X}\) to be a finite set. We do not specify a model for the node features yet, see Sec. 4.

Graph shift matrix and operator. When the number of nodes of a random graph grows, it is known that certain discrete operators associated with the graph converge to their continuous versions, as well as the GNNs that employ them [24, 8]. Here, some of our results will be valid under quite generic assumptions. We consider a **graph shift matrix** [46] \(S=S(G)\in\mathbb{R}^{n\times n}\), which can be either directly the adjacency matrix of the graph or various notions of graph Laplacians. We define an associated **graph shift operator** \(\mathbf{S}:L^{2}_{\sqcup}\to L^{2}_{\sqcup}\) such that the restriction \(\mathbf{S}_{|L^{2}_{q}}\) is a compact linear operator of \(L^{2}_{q}\) onto itself.
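For illustration, a latent position random graph following Eq. (1), together with the normalized adjacency and normalized Laplacian shift matrices discussed next, can be sampled as in the following NumPy sketch (the kernel and latent distribution are toy choices made here):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_random_graph(n, alpha_n, kernel, sample_latent):
    """Sample (X, A) following Eq. (1): x_i iid ~ P, a_ij ~ Bernoulli(alpha_n w(x_i, x_j))."""
    x = sample_latent(n)
    W = kernel(x[:, None], x[None, :])                 # w(x_i, x_j) in [0, 1]
    U = rng.random((n, n))
    A = (np.triu(U, 1) < alpha_n * np.triu(W, 1)).astype(float)
    A = A + A.T                                        # symmetric, zero diagonal
    return x, A

# Toy model: X = [0, 1], P = Uniform([0, 1]), Gaussian kernel, relatively sparse regime.
n = 500
alpha_n = 5 * np.log(n) / n
x, A = sample_random_graph(n, alpha_n,
                           kernel=lambda a, b: np.exp(-8.0 * (a - b) ** 2),
                           sample_latent=lambda m: rng.random(m))

S_adj = A / (n * alpha_n)                              # normalized adjacency (Example 1)
deg = np.maximum(A.sum(axis=1), 1.0)                   # guard against isolated nodes
S_lap = A / np.sqrt(np.outer(deg, deg))                # normalized Laplacian (Example 2)
```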
Note that we reserve "matrix" and "operator" respectively for the discrete and continuous versions. The results in Sec. 3 will be valid under generic convergence assumptions from \(S\) to \(\mathbf{S}\), while the results of Sec. 4 will focus on the following two representative examples.

**Example 1** (Normalized adjacency matrix and kernel operator).: _Here \(S=\bar{A}=(n\alpha_{n})^{-1}A\) and \(\mathbf{S}f=\mathbf{A}f=\int w(\cdot,x)f(x)dP(x)\). This choice requires knowing, or estimating, the sparsity level \(\alpha_{n}\). In this case, our results will hold whenever \(\alpha_{n}\gtrsim(\log n)/n\) with an arbitrary multiplicative constant._

**Example 2** (Normalized Laplacian matrix1 and operator).: _Here \(S=L=D_{A}^{-1/2}AD_{A}^{-1/2}\) where \(D_{A}=\operatorname{diag}(A1_{n})\) is the degree matrix of \(G\), and \(\mathbf{S}f=\mathbf{L}f=\int\frac{w(\cdot,x)}{\sqrt{d(\cdot)d(x)}}f(x)dP(x)\) where \(d(\cdot)=\int w(\cdot,x)dP(x)\) is the degree function. Whenever we opt for this choice, we assume that \(d_{\min}\stackrel{{\text{\tiny def.}}}{{=}}\inf_{x\in\mathcal{X}}d(x)>0\), and our results will hold whenever \(\alpha_{n}\geqslant C(\log n)/n\) with a multiplicative constant \(C\) that depends (in a non-trivial way) on \(w\), see Thm. 9 in App. D._

Footnote 1: Note that the normalized Laplacian is traditionally defined as \(\operatorname{Id}-L\); here it does not change our definition of GNNs since they include residual connections.

To unify notations, when we adopt these examples, we define \(w_{\mathbf{S}}\) such that \(w_{\mathbf{S}}(x,y)=w(x,y)\) in the adjacency case and \(w_{\mathbf{S}}(x,y)=\frac{w(x,y)}{\sqrt{d(x)d(y)}}\) in the normalized Laplacian case. Therefore, for these two examples the continuous operator has a single expression \(\mathbf{S}f=\int w_{\mathbf{S}}(\cdot,x)f(x)dP(x)\).

**Graph Neural Network.** As mentioned in the introduction, we focus on _equivariant_ GNNs that can compute functions over _nodes_, as this makes the most sense on the large graphs that RGs seek to model. Recall that we observe a graph shift matrix \(S\) and node features \(Z\in\mathbb{R}^{n\times p}\), and we return a vector per node, \(\Phi(S,Z)\in\mathbb{R}^{n\times d_{L}}\). We adopt a traditional message-passing neural network (MPNN) that uses the graph shift matrix \(S\): given input features \(Z^{(0)}\in\mathbb{R}^{n\times d_{0}}\),

\[Z^{(\ell)} =\rho\left(Z^{(\ell-1)}\theta_{0}^{(\ell-1)}+SZ^{(\ell-1)}\theta_{1}^{(\ell-1)}+1_{n}(b^{(\ell)})^{\top}\right)\in\mathbb{R}^{n\times d_{\ell}},\]
\[\Phi_{\theta}(S,Z^{(0)}) =Z^{(L-1)}\theta^{(L-1)}+1_{n}(b^{(L)})^{\top} \tag{2}\]

where \(\rho\) is the ReLU function applied element-wise, and \(\theta^{(\ell)}_{i}\in\mathbb{R}^{d_{\ell}\times d_{\ell+1}}\), \(b^{(\ell)}\in\mathbb{R}^{d_{\ell}}\) are learnable parameters gathered in \(\theta\in\Theta\), the set of all possible parameters. We note that here we employ the ReLU function as a non-linearity, as some of our results will use its specific properties. Multi-Layer Perceptrons (MLPs, densely connected networks) using the ReLU activation, and with potentially more than one hidden layer, will be denoted by \(f^{\text{MLP}}_{\gamma}\), where \(\gamma\) gathers their parameters.
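A minimal NumPy sketch of one forward pass of the MPNN (2); the random parameters, dimensions, and the helper name `mpnn_forward` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)

def mpnn_forward(S, Z, thetas0, thetas1, biases, theta_out, b_out):
    """Forward pass of the message-passing GNN (2)."""
    for th0, th1, b in zip(thetas0, thetas1, biases):
        Z = relu(Z @ th0 + S @ Z @ th1 + b)  # dense term + neighbourhood term
    return Z @ theta_out + b_out             # final linear readout per node

# Illustrative random parameters: widths d0 = 3 -> 16 -> d_L = 1.
n, d0, h = 100, 3, 16
M = rng.random((n, n))
S = (M + M.T) / n                            # any symmetric shift matrix
Z0 = rng.standard_normal((n, d0))
out = mpnn_forward(
    S, Z0,
    thetas0=[rng.standard_normal((d0, h)) / 4],
    thetas1=[rng.standard_normal((d0, h)) / 4],
    biases=[np.zeros(h)],
    theta_out=rng.standard_normal((h, 1)) / 4,
    b_out=np.zeros(1),
)
print(out.shape)  # (100, 1): one output per node (permutation-equivariant)
```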
Following recent literature [12, 13], we consider inputting _Positional Encodings_ (PEs) at each node. Such PEs are generally computed using only the graph structure and concatenated to existing node features \(Z\); here we simply introduce a generic notation:

\[Z^{(0)}=\text{PE}_{\gamma}(S,Z)\in\mathbb{R}^{n\times d_{0}} \tag{3}\]

with some parameter \(\gamma\in\Gamma\). In our notations, the PE module uses the node features \(Z\), generally by concatenating them to its output. For short, we may denote the whole architecture with PE and GNN as \(\Phi_{\theta,\gamma}(S,Z)\stackrel{{\text{\tiny def.}}}{{=}}\Phi_{\theta}(S,\text{PE}_{\gamma}(S,Z))\). It is not difficult to see that the whole GNN is equivariant if and only if the PE computation is equivariant: denoting by \(\sigma\) a permutation matrix of \(\{1,\dots,n\}\),

\[\forall\sigma,\ \Phi_{\theta,\gamma}(\sigma S\sigma^{\top},\sigma Z)=\sigma\Phi_{\theta,\gamma}(S,Z)\quad\Leftrightarrow\quad\forall\sigma,\ \text{PE}_{\gamma}(\sigma S\sigma^{\top},\sigma Z)=\sigma\text{PE}_{\gamma}(S,Z).\]

All the examples of PEs examined in Sec. 4 are equivariant.

## 3 Function spaces of Graph Neural Networks

In this section, we provide a complete and intuitive description of the function space approximated by equivariant GNNs applied on RGs. All technical proofs are provided in App. A. It has been shown [24, 25, 8, 29] that GNNs converge to functions over the latent space: when the node features are a sampling \(\iota_{X}f^{(0)}\) of a certain function, the output of the GNN is close to being a sampling \(\iota_{X}f^{(L)}\) of another function. Assuming the node features or PEs approximate some function set \(\mathcal{B}\subset L^{2}_{\sqcup}\), we define the space of functions that a GNN can approximate as follows.

**Definition 1**.: _Given a base set \(\mathcal{B}\subset L^{2}_{\sqcup}\), the **set of functions approximated by GNNs** \(\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})\) is formed by all the functions \(f\in L^{2}_{\sqcup}\) such that: for all \(\varepsilon>0\), there are \(\theta\in\Theta,f^{(0)}\in\mathcal{B}\) such that_

\[\mathbb{P}\Big{(}\left\|\Phi_{\theta}(S,\iota_{X}f^{(0)})-\iota_{X}f\right\|_{\mathrm{MSE}}\geqslant\varepsilon\Big{)}\xrightarrow[n\to\infty]{}0. \tag{4}\]

In other words, \(\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})\) contains the functions whose sampling can be \(\varepsilon\)-approximated by the output of a GNN, with probability going to \(1\) as \(n\) grows. Note that if the quantifiers of \(\theta,f^{(0)}\) and \(\varepsilon\) were reversed, the MSE would converge to \(0\) in probability. Here this is _not_ the case: \(\theta,f^{(0)}\) _may depend on_ \(\varepsilon\), which is akin to an approximation level.

Similar to the permutation equivariance of GNNs, there is a notion of continuous equivariance for functions well-approximated by GNNs [24, 25, 8], where the permutations are replaced by bijections over the latent space \(\mathcal{X}\). We adopt the notations \(\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})=\mathcal{F}_{\mathrm{GNN}}(\mathcal{B},w,P)\). For all continuous bijections \(\phi\) over \(\mathcal{X}\), we define \(w_{\phi}(x,y)=w(\phi(x),\phi(y))\), \(P_{\phi}=\phi^{-1}\sharp P\) where \(\sharp\) is the push-forward operation, and \(\mathcal{B}_{\phi}=\{f\circ\phi\ |\ f\in\mathcal{B}\}\). Then, we have the following result.

**Proposition 1**.: _Let \(S=S(A)\) be a graph shift matrix that only depends on the adjacency matrix of the graph in a permutation-equivariant manner.
Then, for all continuous bijections \(\phi:\mathcal{X}\to\mathcal{X}\),_

\[\mathcal{F}_{\mathrm{GNN}}(\mathcal{B}_{\phi},w_{\phi},P_{\phi})=\left\{f\circ\phi\ |\ f\in\mathcal{F}_{\mathrm{GNN}}(\mathcal{B},w,P)\right\}.\]

That is, if one "permutes" the kernel \(w\), the distribution \(P\) and the base set \(\mathcal{B}\), then the function space \(\mathcal{F}_{\mathrm{GNN}}\) contains exactly the permuted version of the original space. The goal of this section is to provide a more intuitive description of the space \(\mathcal{F}_{\mathrm{GNN}}\), which we will do under some basic convergence assumption from \(S\) to \(\mathbf{S}\). GNNs (2) basically include two components: dense connections and MLPs, which can approximate any continuous function by the universality theorem [40], and applications of \(S\). Hence, we define the following function space.

**Definition 2**.: _We define \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\subset L^{2}_{\sqcup}\), the (minimal) \(\mathbf{S}\)**-extension** of a base set \(\mathcal{B}\subset L^{2}_{\sqcup}\), by the following rules:_

1. _Base space:_ \(\mathcal{B}\subset\mathcal{F}_{\mathbf{S}}(\mathcal{B})\);
2. _Stability by composition with continuous functions:_ for all \(f\in\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) with a \(p\)-dimensional output and \(g\in\mathcal{C}_{\mathrm{Lip}}(\mathbb{R}^{p},\mathbb{R}^{q})\), it holds2 that \(g\circ f\in\mathcal{F}_{\mathbf{S}}(\mathcal{B})\);
3. _Stability by graph operator:_ for all \(f\in\mathcal{F}_{\mathbf{S}}(\mathcal{B})\), it holds that \(\mathbf{S}f\in\mathcal{F}_{\mathbf{S}}(\mathcal{B})\);
4. _Linear span:_ for all \(q\), \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\cap L^{2}_{q}\) is a vector space;
5. _Closure:_ \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) is closed in \(L^{2}_{\sqcup}\);
6. _Minimality:_ for all \(\mathcal{G}\subset L^{2}_{\sqcup}\) satisfying all the properties above, \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\subset\mathcal{G}\).

Footnote 2: Note that, since \(g\) is Lipschitz, when \(f\in L^{2}_{\sqcup}\) we indeed have \(g\circ f\in L^{2}_{\sqcup}\).

In words, \(\mathcal{F}_{\mathbf{S}}\) takes a base set \(\mathcal{B}\) and extends it to be stable by composition with Lipschitz functions, application of the graph operator, and linear combinations (of its elements with the same dimensionality). Our result will use the following assumption, which is naturally true for our running examples.

**Assumption 1**.: _With probability going to \(1\), \(\|S\|\) is bounded. Moreover, for all \(f\in L^{2}_{\sqcup}\),_

\[\left\|S\iota_{X}f-\iota_{X}\mathbf{S}f\right\|_{\mathrm{MSE}}\xrightarrow[n\to\infty]{\mathcal{P}}0\]

_where \(\xrightarrow[n\to\infty]{\mathcal{P}}\) indicates convergence in probability._

**Proposition 2**.: _Assumption 1 is true for the normalized adjacency matrix (ex. 1) and the normalized Laplacian (ex. 2)._

Under this assumption, the main result of this section states that the functions well-approximated by GNNs are exactly the \(\mathbf{S}\)-extension of the base input features \(\mathcal{B}\).

**Theorem 1**.: _Under Assumption 1, for all \(\mathcal{B}\subset L^{2}_{\sqcup}\), we have:_

\[\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})=\mathcal{F}_{\mathbf{S}}(\mathcal{B})\]

Given the definition of GNNs (2) and the construction of \(\mathcal{F}_{\mathbf{S}}\), Theorem 1 appears quite natural. Its proof, provided in App. A.3, is however far from trivial.
The inclusion \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\subset\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})\) is similar in spirit to previous convergence results [24], since one has to construct a GNN that approximates a particular function. It involves however a new extended universality theorem for MLPs over square-integrable functions (Lemma 3 in App. A.3), which uses the _special properties of ReLU_. The reverse inclusion \(\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})\subset\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) is quite different from previous work on GNN convergence: given \(f\in\mathcal{F}_{\mathrm{GNN}}(\mathcal{B})\) whose only property is to be well-approximated by GNNs, one must construct a sequence of functions in \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) that converges to \(f\), and use the closure of \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\). The need to work within square-integrable functions is here obvious, as we only have convergence of the MSE, an approximation of the \(L^{2}\)-norm. For instance, this inclusion would not be true in the space of continuous functions.

Using composition with continuous functions, if \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) contains a continuous bijection \(\phi:\mathcal{X}\to\mathrm{Im}(\phi)\), then \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\) contains all continuous functions, and by density all square-integrable functions. That is, the equivariant GNNs are then **universal** over \(\mathcal{X}\): they can generate any function to label the nodes. Another criterion, using the Stone-Weierstrass theorem (e.g. [21]) and similar to the proofs in [25], is the following.

**Proposition 3**.: _Assume that for all \(x\neq x^{\prime}\) in \(\mathcal{X}\), there is a continuous function \(f\in\mathcal{F}_{\mathbf{S}}(\mathcal{B})\cap\mathcal{C}_{\mathrm{Lip}}(\mathcal{X},\mathbb{R})\) such that \(f(x)\neq f(x^{\prime})\). Then, \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})=L^{2}_{\sqcup}\)._

In the rest of the paper, we study several examples of PEs and the corresponding sets \(\mathcal{B}\), which will generalize the results of [25]. We expect many other interesting characteristics of \(\mathcal{F}_{\mathbf{S}}\) to be derived in the future.

## 4 Node features and Positional encodings

In the previous section, we have provided a complete description of the function space generated by equivariant GNNs when fed samplings of functions as node features, and the set \(\mathcal{B}\) is thus crucial for the properties of \(\mathcal{F}_{\mathbf{S}}(\mathcal{B})\). For instance, in the absence of node features and PEs, it is classical to input _constant features_ to GNNs [25], such that the space of interest is \(\mathcal{F}_{\mathbf{S}}(1)\). However, similar to the failure of the WL test on regular graphs, if \(\mathbf{S}1\propto 1\) (e.g. constant degree function), then \(\mathcal{F}_{\mathbf{S}}(1)\) _contains only constant functions_! The role of PEs is often to mitigate such situations.

**Definition 3**.: _The **set of functions approximated by PEs** \(\mathcal{F}_{\mathrm{PE}}\) is formed by all the functions \(f\in L^{2}_{\sqcup}\) such that: for all \(\varepsilon>0\), there is \(\gamma\in\Gamma\) such that_

\[\mathbb{P}\Big{(}\left\|\mathrm{PE}_{\gamma}(S,Z)-\iota_{X}f\right\|_{\mathrm{MSE}}\geqslant\varepsilon\Big{)}\xrightarrow[n\to\infty]{}0\,. \tag{5}\]

Note that, as before, \(\gamma\) may depend on \(\varepsilon\).
When passing PEs as input to GNNs, \(\mathcal{F}_{\mathrm{PE}}\) serves as the base space \(\mathcal{B}\), and the space of interest to characterize the functions well approximated by the whole architecture \(\Phi_{\theta,\gamma}\) is therefore \(\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{PE}})\). In fact, by a simple Lipschitz property, for any \(f\in\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{PE}})\) and \(\varepsilon>0\), there are \(\theta\in\Theta,\gamma\in\Gamma\) such that

\[\mathbb{P}\Big{(}\left\|\Phi_{\theta,\gamma}(S,Z)-\iota_{X}f\right\|_{\mathrm{MSE}}\geqslant\varepsilon\Big{)}\xrightarrow[n\to\infty]{}0\]

In the rest of the section, we therefore aim to characterize \(\mathcal{F}_{\mathrm{PE}}\) for several representative examples. We first briefly comment on observed node features, then move on to PEs. Proofs are in App. B.

### 4.1 Node features

A first, simple example is when the observed node features are actually a sampling of some function, \(Z=\iota_{X}f^{(0)}\). This is a convenient choice that is often adopted in the literature [24, 25, 23, 8, 29]. In this case, by adopting the identity \(\mathrm{PE}_{\gamma}(S,Z)=Z\), it is immediate that \(\mathcal{F}_{\mathrm{PE}}=\{f^{(0)}\}\). A more realistic example is the presence of centered noise:

\[Z=\iota_{X}f^{(0)}+\nu\in\mathbb{R}^{n\times d_{0}} \tag{6}\]

where \(\nu=[\nu_{1},\dots,\nu_{n}]\) and the \(\nu_{i}\) are i.i.d. noise vectors with \(\mathbb{E}\nu_{i}=0\) and \(\mathrm{Cov}(\nu_{i})=C_{\nu}\). This time, \(\mathcal{F}_{\mathrm{PE}}\) cannot directly contain \(f^{(0)}\), as the Law of Large Numbers (LLN) gives

\[\left\|Z-\iota_{X}f^{(0)}\right\|_{\mathrm{MSE}}^{2}=\left\|\nu\right\|_{\mathrm{MSE}}^{2}\xrightarrow[n\to\infty]{}\mathrm{Tr}(C_{\nu})>0\]

However, when applying the graph shift matrix at least once, one obtains convergent PEs.

**Proposition 4**.: _Consider the normalized adjacency matrix (ex. 1) or the normalized Laplacian (ex. 2). If the node features are a noisy sampling (6) and the PEs are defined as \(\mathrm{PE}_{\gamma}(S,Z)=SZ\), then \(\mathcal{F}_{\mathrm{PE}}=\{\mathbf{S}f^{(0)}\}\)._

Of course, this may not be the only possibility for removing noise from node features, and moreover it is not clear how realistic the node features model (6) actually is. The study of more refined models linking graph structure and node features is a major path for future work.

### 4.2 Positional Encodings

In this section, we consider classical PEs computed solely from the graph structure and show how they articulate with our framework. We consider two examples that are among the most often used in the literature: PEs as eigenvectors of the graph shift matrix [12, 13] (actually a recent variant that accounts for sign indeterminacy [31]), and PEs based on distance encoding [30] (again a variant that, as we will see, generalizes other architectures [49]). For most of the results below, we will focus on two representative cases of kernels, which include many practical examples:

**Example a** (Stochastic Block Models).: _In this case, the space of latent variables \(\mathcal{X}=\{1,\ldots,K\}\) is finite, and each element corresponds to a community label. The kernel \(w\) is represented by a matrix \(C\stackrel{{\text{\tiny def.}}}{{=}}[w(\ell,k)]\in\mathbb{R}_{+}^{K\times K}\) that gives the probability of connection between communities \(\ell\) and \(k\), and \(P\in\mathbb{R}_{+}^{K}\) is a probability vector of size \(K\) that sums to \(1\)._

**Example b** (P.s.d. kernel).: _Here we assume that \(w\) is positive semi-definite (p.s.d.).
This includes for instance the Gaussian kernel._

For any symmetric matrix (resp. self-adjoint compact operator) \(M\), we denote by \(\lambda_{i}^{M}\) its eigenvalues and \(u_{i}^{M}\) its eigenvectors (resp. eigenfunctions), with any arbitrary choice of sign or basis here. Since in all our examples the operators are either p.s.d. or finite-rank, the eigenvalues are ordered as follows: first the non-zero eigenvalues in decreasing order (from positive to negative), then all zero eigenvalues.

#### 4.2.1 Eigenvectors and SignNet

It has been proposed [12, 13] to feed the first \(q\) eigenvectors of the graph into the GNN, for a fixed \(q\). A potential problem with this approach is the sign ambiguity of the eigenvectors, or even the basis ambiguity in case of eigenvalues with multiplicities. Here we consider only the sign ambiguity for simplicity: we will assume that the first eigenvalues of \(\mathbf{S}\) are distinct. The sign ambiguity was alleviated in [31] by taking a _sign-invariant_ function: considering an eigenvector \(u_{i}^{S}\) of \(S\),

\[(\mathbf{Q}f)(u_{i}^{S})\stackrel{{\text{\tiny def.}}}{{=}}f(u_{i}^{S})+f(-u_{i}^{S})\in\mathbb{R}^{n\times p} \tag{7}\]

where \(f:\mathbb{R}\rightarrow\mathbb{R}^{p}\) is a function applied to each coordinate of \(u_{i}^{S}\) to preserve permutation-equivariance. The resulting function is sign-invariant, and one can parameterize \(f\). Given the first \(q\) eigenvectors \(u_{i}^{S}\) and a collection of MLPs \(f_{\gamma_{i}}^{\text{MLP}}:\mathbb{R}\rightarrow\mathbb{R}^{p_{i}}\) for some output dimensions \(p_{i}\), the PE considered in this subsection concatenates the outputs:

\[\text{PE}_{\gamma}(S)=[(\mathbf{Q}f_{\gamma_{i}}^{\text{MLP}})(\sqrt{n}u_{i}^{S})]_{i=1}^{q}\in\mathbb{R}^{n\times p} \tag{8}\]

where \(p=\sum_{i=1}^{q}p_{i}\) and the MLPs are applied element-wise. The parameter \(\gamma\) gathers the \(\gamma_{i}\). The equation (8) involves a renormalization of the eigenvectors \(u^{S}\) by the square root of the size of the graph, \(\sqrt{n}\): indeed, as \(u_{i}^{S}\) is normalized _in_ \(\mathbb{R}^{n}\), this is necessary for consistency across different graph sizes. See Sec. 4.2.3 for a discussion and some numerical illustrations.

As can be expected, the eigenvectors of \(S\) generally converge to the eigenfunctions of \(\mathbf{S}\), under a spectral gap assumption. We provide the theorem below, which handles all of our running examples. We suppose that the relevant eigenvalues have multiplicity one, so as to only have sign ambiguity.

**Theorem 2**.: _Consider either an SBM (ex. a) or a p.s.d. kernel (ex. b), and either the normalized adjacency matrix (ex. 1) or the normalized Laplacian (ex. 2). Fix \(q\), and assume the first \(q+1\) eigenvalues \(\lambda_{1}^{\mathbf{S}},\ldots,\lambda_{q+1}^{\mathbf{S}}\) of \(\mathbf{S}\) are pairwise distinct. We define_

\[\mathcal{F}_{\mathrm{Eig}}\stackrel{{\text{\tiny def.}}}{{=}}\left\{[(\mathbf{Q}f_{i})\circ u_{i}^{\mathbf{S}}]_{i=1}^{q}\ \middle|\ f_{i}\in\mathcal{C}_{\mathrm{Lip}}(\mathbb{R},\mathbb{R}^{p_{i}}),\,p_{i}\in\mathbb{N}^{*}\right\} \tag{9}\]

_Then \(\mathcal{F}_{\mathrm{PE}}=\overline{\mathcal{F}_{\mathrm{Eig}}}\)._

Hence \(\mathcal{F}_{\mathrm{PE}}\) contains the eigenfunctions of \(\mathbf{S}\), modified by the SignNet architecture to account for the sign indeterminacy. We further discuss this space in Sec. 4.2.3. An illustration is provided in Fig. 1.
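A minimal NumPy sketch of the SignNet-style PEs (8): take the leading \(q\) eigenvectors of \(S\), renormalize by \(\sqrt{n}\), and apply the sign-invariant map \(f(u)+f(-u)\). The tiny random one-hidden-layer MLPs standing in for the \(f_{\gamma_{i}}^{\text{MLP}}\), the random symmetric \(S\), and the helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)

def tiny_mlp(u, W1, W2):
    """One-hidden-layer ReLU MLP R -> R^p, applied coordinate-wise to u."""
    return relu(u[:, None] * W1) @ W2          # (n, hidden) @ (hidden, p)

def signnet_pe(S, mlps):
    """SignNet PEs (8): sqrt(n)-renormalized eigenvectors, sign-invariant."""
    n = S.shape[0]
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(-vals)[: len(mlps)]     # leading eigenvalues first
    feats = []
    for i, (W1, W2) in zip(order, mlps):
        u = np.sqrt(n) * vecs[:, i]            # renormalization by sqrt(n)
        feats.append(tiny_mlp(u, W1, W2) + tiny_mlp(-u, W1, W2))  # (Q f)(u)
    return np.concatenate(feats, axis=1)       # shape (n, sum_i p_i)

# Illustrative random MLP weights for q = 4 eigenvectors, p_i = 2 each.
q, hidden, p = 4, 8, 2
mlps = [(rng.standard_normal((1, hidden)), rng.standard_normal((hidden, p)))
        for _ in range(q)]
n = 200
M = rng.standard_normal((n, n))
S = (M + M.T) / (2 * np.sqrt(n))               # any symmetric shift matrix
print(signnet_pe(S, mlps).shape)               # (200, 8)
```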
#### 4.2.2 Distance-encoding PEs

In [30], the authors propose to define PEs through the aggregation of a set of "distances" \(\xi(i,j)\) from each node \(i\) to a set \(j\in V_{T}\) of target nodes (typically, labelled nodes in semi-supervised learning, or anchor nodes selected randomly [56]):

\[(\text{PE}_{\gamma})_{i,:}=\text{AGG}(\{\xi(i,j)\ |\ j\in V_{T}\})\]

where AGG is an _aggregation_ function that acts on (multi-)sets, and \(\xi(i,j)\) is selected in [30] as random-walk based distances \(\xi(i,j)=[(AD_{A}^{-1})_{ij},\ldots,((AD_{A}^{-1})^{q})_{ij}]\in\mathbb{R}^{q}\). For simplicity, since here we do not consider any particular set of target nodes, we just take \(V_{T}=V\), the set of all nodes. Moreover, to use our convergence results, we replace the random walk matrix with our graph shift matrix \(S\). As aggregation, we opt for the deep-set architecture [58], which applies an MLP on each \(\xi(i,j)\) and then a sum. Deep sets can approximate any permutation-invariant function. As we will see below, with the proper normalization to ensure convergence, we obtain:

\[\text{PE}_{\gamma}=\tfrac{1}{n}\sum_{j}f_{\gamma}^{\text{MLP}}\left(n\cdot[Se_{j},\ldots,S^{q}e_{j}]\right)\in\mathbb{R}^{n\times p}\]

where \(f_{\gamma}^{\text{MLP}}:\mathbb{R}^{q}\rightarrow\mathbb{R}^{p}\) is applied row-wise and the \(e_{j}\in\mathbb{R}^{n}\) are one-hot basis vectors. We note that a similar architecture was proposed in a different line of work: it was called Structured Message Passing by [49], and Structured GNN by [25]. In these works, the inspiration is to give nodes unique identifiers, _e.g._, one-hot encodings \(e_{i}\). However, this process is not equivariant. To restore equivariance, [49] propose a deep-set pooling in the "node-id" dimension, \(\text{PE}_{\gamma}(S)=\sum_{j}\Phi_{\gamma}(S,e_{j})\), where \(\Phi_{\gamma}\) is itself a permutation-equivariant GNN. By choosing \(\Phi_{\gamma}(S,e_{j})=n^{-1}f_{\gamma}^{\text{MLP}}\left(n\cdot[Se_{j},\ldots,S^{q}e_{j}]\right)\) (which is a valid choice for a message-passing GNN), we obtain exactly the distance-encoding PEs above.

In [25], powerful universality results were shown for this choice of architecture _in the case of non-random edges_ \(a_{ij}=w(x_{i},x_{j})\) and \(q=1\). With our notations, they implicitly studied PE functions of the following form: \(\int f(w(\cdot,x))dP(x)\). This allows one to _modify the values of the kernel_ before computing the degree function, and can therefore break potential indeterminacies such as constant degrees. Unfortunately, their proof technique and the concentration inequalities they use are _not true anymore for Bernoulli random edges_, which are far more realistic than deterministic weighted edges. Here we show that for a large class of kernels, concentration can be restored when we add an MLP filter with ReLU on the eigenvalues of \(S\). Our definition of distance-encoding PEs is therefore:

\[\text{PE}_{\gamma}=\tfrac{1}{n}\sum_{j}f_{\gamma_{1}}^{\text{MLP}}\left(n\cdot[S_{\gamma_{2}}e_{j},\ldots,S_{\gamma_{2}}^{q}e_{j}]\right) \tag{10}\]

where \(S_{\gamma_{2}}\stackrel{{\text{\tiny def.}}}{{=}}h_{f_{\gamma_{2}}^{\text{MLP}}}(S)\) is a filter that applies an MLP \(f_{\gamma_{2}}^{\text{MLP}}\) on the eigenvalues of \(S\).
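A minimal NumPy sketch of the distance-encoding PEs (10). A hard eigenvalue threshold stands in for the learned ReLU MLP filter \(h_{f_{\gamma_{2}}^{\text{MLP}}}\) (its value \(\tau\), the random one-hidden-layer MLP playing the role of \(f_{\gamma_{1}}^{\text{MLP}}\), and the Erdős–Rényi graph are all illustrative assumptions). Note that the second (linear) MLP layer commutes with the \(\frac{1}{n}\sum_{j}\) pooling, which the sketch exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)

def threshold_filter(S, tau):
    """Zero out eigenvalues in [-tau, tau]. A ReLU MLP can be uniformly zero
    on such an interval (cf. Thm. 4); a polynomial filter cannot."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.where(np.abs(vals) > tau, vals, 0.0)
    return (vecs * vals) @ vecs.T

def distance_pe(S, q, tau, W1, W2):
    """Distance-encoding PEs (10): MLP on n * [S^1, ..., S^q], mean over j."""
    n = S.shape[0]
    S_f = threshold_filter(S, tau)
    powers = [S_f]
    for _ in range(q - 1):
        powers.append(powers[-1] @ S_f)
    xi = n * np.stack(powers, axis=-1)   # xi[i, j] = n * [S_f e_j, ...]_i
    hidden = relu(xi @ W1)               # MLP hidden layer on each pair (i, j)
    return hidden.mean(axis=1) @ W2      # (1/n) sum_j commutes with W2

# Illustrative assumptions: Erdos-Renyi graph, q = 2, random MLP weights.
n, q, tau, h, p = 100, 2, 0.05, 8, 4
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T
S = A / (n * 0.1)                        # normalized adjacency, alpha_n = 0.1
W1, W2 = rng.standard_normal((q, h)), rng.standard_normal((h, p))
print(distance_pe(S, q, tau, W1, W2).shape)   # (100, 4)
```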
Figure 1: Illustration of the role of the SignNet architecture and of the renormalization by \(\sqrt{n}\) of the eigenvectors on synthetic data, with a latent space \(\mathcal{X}=[-1,1]\) (\(x\)-axis), a Gaussian kernel \(w\), and a uniform distribution \(P\). Blue dots represent a graph from the training set, orange dots a test graph that is twice as big. **From left to right:** eigenvectors with renormalization (with a different sign for the two graphs), eigenvectors without, PEs with, and PEs without, with the regression test errors of a GNN trained using these PEs with or without renormalization. We observe that SignNet indeed fixes the sign ambiguity. The absence of renormalization yields inconsistent PEs across graphs of different sizes, which results in a higher test error on test graphs than on training graphs.

**Theorem 3**.: _Consider either an SBM (ex. a) or a p.s.d. kernel (ex. b), and either the normalized adjacency matrix (ex. 1) or the normalized Laplacian (ex. 2). Consider the PE (10). We define_

\[\mathcal{F}_{\mathrm{Dist}}\stackrel{{\text{\tiny{def.}}}}{{=}}\left\{\int f([\mathbf{S}\delta_{x}(\cdot),\ldots,\mathbf{S}^{q}\delta_{x}(\cdot)])dP(x)\ |\ f\in\mathcal{C}_{\mathrm{Lip}}([0,1]^{q},\mathbb{R}^{p}),p\in\mathbb{N}^{*}\right\} \tag{11}\]

_where \(\mathbf{S}\delta_{x}\stackrel{{\text{\tiny{def.}}}}{{=}}\{z\mapsto w_{\mathbf{S}}(z,x)\}\) by abuse of notation. Then \(\mathcal{F}_{\mathrm{Dist}}\subset\mathcal{F}_{\mathrm{PE}}\)._

Note that here we only have an inclusion \(\mathcal{F}_{\mathrm{Dist}}\subset\mathcal{F}_{\mathrm{PE}}\) instead of an equality as in Thm. 2: indeed, we show that the PEs (10) can approximate functions in \(\mathcal{F}_{\mathrm{Dist}}\), but they may converge to other functions. Nevertheless, as a consequence of our analysis, all the universality results of [25, Sec. 5.3] are valid with the choice of PE (10); see Appendix C for a reminder using our notations. This is a strict and non-trivial improvement over [25], as their results were only derived for non-random edges. For this, Theorem 3 relies mostly on a new concentration inequality for Bernoulli matrices with ReLU filters in Frobenius norm, which we give below since it is of independent interest.

**Theorem 4**.: _Consider either an SBM (ex. a) or a p.s.d. kernel (ex. b), and either the normalized adjacency matrix (ex. 1) or the normalized Laplacian (ex. 2). Define the Gram matrix \(W=[w_{\mathbf{S}}(x_{i},x_{j})/n]_{ij}\). For all \(\varepsilon>0\), there is an MLP filter \(S_{\gamma}=h_{f_{\gamma}^{\mathrm{MLP}}}(S)\) such that_

\[\mathbb{P}(\left\|S_{\gamma}-W\right\|_{\mathrm{F}}\geqslant\varepsilon)\to 0.\]

The proof of this theorem, given in App. B.3, is inspired by the so-called USVT estimator [7]. One notes that the use of an MLP graph filter is quite unconventional. A more classical choice is polynomial filters: this avoids the diagonalization of \(S\) by computing \(\sum_{k}a_{k}S^{k}\), and it is for instance the basis for the ChebNet architecture [10]. For the purpose of Theorems 3 and 4, _polynomial filters do not work, and ReLU is of crucial importance_: indeed, we need the filter to zero out \(\mathcal{O}\left(n\right)\) eigenvalues _uniformly_ in some interval \([-\tau,\tau]\). With polynomials, this could be done by taking learned parameters that depend on \(n\) (to get a finer approximation as \(n\) increases), but this is not allowed in our framework, where we want to generalize to large graphs \(n\to\infty\).
On the other hand, when choosing \(f\) as an MLP with ReLU, due to the shape of this non-linearity, \(f_{\gamma_{2}}^{\mathrm{MLP}}\) can be _uniformly_ \(0\) on a whole domain. Of course, polynomial filters offer great computational advantages and perform well in practice, despite their flaw in our asymptotic analysis. Moreover, ReLU is technically non-differentiable. Designing filters that offer both computational advantages and exact approximation is still an open question. In practice, we observe that the ReLU filter does learn to approximate its expected shape when we minimize the reconstruction error \(\left\|S_{\gamma}-W\right\|_{\mathrm{F}}\) on synthetic data where \(W\) is known, see Fig. 2.

#### 4.2.3 Discussion

**Approximation power.** As mentioned above, in the absence of node features, one may opt for constant input, but this may lead to degenerate situations. PEs aim to counteract that by increasing GNNs' approximation power. We quickly verify that this is indeed the case for our two examples.

**Proposition 5**.: _There are cases where \(\mathcal{F}_{\mathbf{S}}(1)\subset\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{Eig}})\) or \(\mathcal{F}_{\mathbf{S}}(1)\subset\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{Dist}})\) with strict inclusions._

Figure 2: Illustration of Theorem 4 on synthetic data where \(W\) is known, with a Gaussian kernel. Unfiltered eigenvalues of \(S\) are represented by blue crosses, filtered ones obtained by minimizing \(\min_{\gamma_{2}}\left\|S_{\gamma_{2}}-W\right\|_{\mathrm{F}}\) by orange dots, and the ideal ReLU filter used in the proof of Thms. 3 and 4 is represented by a red line.

Moreover, as mentioned in the previous section, existing universality results [25] can be generalized in our case, see App. C. Another interesting question is somewhat the opposite: given the already rich class of functions generated by PEs, are GNNs really more powerful?

**Proposition 6**.: _There are cases where \(\mathcal{F}_{\mathrm{Eig}}\subset\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{Eig}})\) or \(\mathcal{F}_{\mathrm{Dist}}\subset\mathcal{F}_{\mathbf{S}}(\mathcal{F}_{\mathrm{Dist}})\), with strict inclusions._

The proof, which is not so trivial, invokes functions with at least one round of message-passing after the computation of PEs, so the additional approximation power does not come only from MLPs. Intuitively, it seems natural that message-passing rounds are useful for other reasons, e.g. noise reduction or smoothing [23]. We leave these complementary lines of investigation for future work.

**Renormalization.** A striking point in our variants of PEs is the presence of various normalization factors by the graph size \(n\) to ensure convergence: the equation (8) involves a renormalization of the eigenvectors \(u^{S}\) by the square root of the size of the graph \(\sqrt{n}\), while (10) involves a multiplicative factor \(n\) _inside_ the MLP \(f_{\gamma_{1}}^{\mathrm{MLP}}\) (the \(1/n\) outside of the sum is more classical). Our analysis shows that these normalization factors are necessary for convergence when \(n\to\infty\), and more generally for consistency across different graph sizes. In practice, this is generally not used. Indeed, if the training and testing graphs have roughly the same "range" of sizes \(n\in[n_{\min},n_{\max}]\), then a GNN model can _learn_ the proper normalization to perform, which is not the point of view of our asymptotic analysis \(n\to\infty\).
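The necessity of the \(\sqrt{n}\) factor can be checked directly. In the minimal sketch below (the Gaussian kernel, the uniform latent distribution as in Fig. 1, and the dense sparsity level are illustrative assumptions), raw eigenvector entries shrink like \(1/\sqrt{n}\), while the renormalized entries remain comparable across graph sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
w = lambda x, y: np.exp(-(x - y) ** 2)    # illustrative Gaussian kernel

def leading_eigvec(n, alpha=0.5):
    """Leading eigenvector of the normalized adjacency of a size-n graph."""
    X = rng.uniform(-1.0, 1.0, n)
    probs = alpha * w(X[:, None], X[None, :])
    A = (np.triu(rng.random((n, n)), 1) < np.triu(probs, 1)).astype(float)
    A = A + A.T
    vals, vecs = np.linalg.eigh(A / (n * alpha))
    u = vecs[:, np.argmax(vals)]
    return np.sign(u.sum()) * u           # fix the sign for comparison

for n in (500, 1000, 2000):
    u = leading_eigvec(n)
    # Raw entries shrink like 1/sqrt(n); sqrt(n) * u stays comparable.
    print(n, np.abs(u).max(), np.abs(np.sqrt(n) * u).max())
```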
While in-depth benchmarking of PEs has been done in the literature [13] and is out of the scope of this paper, we give a small numerical illustration of the effect of normalization, for a synthetic dataset (Fig. 1) and two real-world datasets that contain graphs of different sizes3 (Tab. 1). On a synthetic dataset that is exactly formed of random graphs of vastly different sizes, the renormalization is of course necessary to obtain good performance, as predicted by our theory: without it, the PEs do not converge when \(n\) grows. On real data, we see that renormalization generally helps generalization, and this is more true for IMDB-BINARY, which contains a larger range of graph sizes, and for distance-based PEs. Note that here we use relatively small GNNs that are _not state-of-the-art_, as well as a different train/test split than most papers (\(K=5\) CV-folds instead of \(K=10\)): indeed, we do not want our models to _learn_ the proper normalization on the limited range of sizes \(n\) in the dataset, so we limit their number of parameters and use a smaller training set. We do not expect our simple renormalization process to make a significant difference on large-scale benchmarks with state-of-the-art models [13], but this is a pointer in an interesting direction that will be explored in the future. In particular, this type of normalization may be useful in real-world scenarios where the test graphs are far larger than the labelled training graphs.

Footnote 3: Technically, these datasets are graph-tasks instead of node-tasks. Indeed, we needed graphs of different sizes to test the renormalization, and there are few (if any) node-task datasets containing many graphs of different sizes. We perform a final pooling on our equivariant GNNs to obtain permutation-invariant versions.

\begin{table} \begin{tabular}{l r r r r} \hline \hline Dataset & \multicolumn{2}{c}{Eigenvectors} & \multicolumn{2}{c}{Distance-encoding} \\ & w/ norm. & w/o norm. & w/ norm. & w/o norm. \\ \hline IMDB-BINARY[55] & 67.80 & 66.10 & 71.10 & 63.95 \\ COLLAB[55] & 73.74 & 74.77 & 75.65 & 75.02 \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracy for GNNs with different PEs, with or without renormalization by the graph size \(n\). Results for 5-fold cross-validation averaged over 3 experiments.

## 5 Conclusion

On large random graphs, the manner in which GNNs label _nodes_ can be modelled by functions. The analysis of the resulting function spaces is still in its infancy, and of a very different nature to the studies of _graph-tasks_, whether discrete [54] or in the limit [36]. In this paper, we clarified significantly the nature of the space of functions well-approximated by GNNs on large graphs, showing that it can be defined by a few extension rules within the space of square-integrable functions. We then showed the usefulness of Positional Encodings by analyzing two popular examples, established new universality results, and derived concentration inequalities of independent interest. Our theory hinted at a normalization process for consistency across graphs of different sizes that can help generalization in practice. This paper, which in large part consisted in _properly defining_ the objects of interest, is without doubt only a first step in their analysis. Future studies might look at specific settings and derive more useful properties of the space \(\mathcal{F}_{\mathbf{S}}\), more powerful PEs, a better understanding of their limitations, or more realistic models for node features.
2310.03890
Accelerated Neural Network Training with Rooted Logistic Objectives
Many neural networks deployed in real world scenarios are trained using cross entropy based loss functions. From the optimization perspective, it is known that the behavior of first order methods such as gradient descent crucially depends on the separability of datasets. In fact, even in the simplest case of binary classification, the rate of convergence depends on two factors: (1) condition number of the data matrix, and (2) separability of the dataset. With no further pre-processing techniques such as over-parametrization, data augmentation etc., separability is an intrinsic quantity of the data distribution under consideration. We focus on the landscape design of the logistic function and derive a novel sequence of {\em strictly} convex functions that are at least as strict as the logistic loss. The minimizers of these functions coincide with those of the minimum norm solution wherever possible. The strict convexity of the derived functions can be extended to finetune state-of-the-art models and applications. In empirical experimental analysis, we apply our proposed rooted logistic objective to multiple deep models, e.g., fully-connected neural networks and transformers, on various classification benchmarks. Our results illustrate that training with the rooted loss function converges faster and yields performance improvements. Furthermore, we illustrate applications of our novel rooted loss function in generative-modeling-based downstream applications, such as finetuning a StyleGAN model with the rooted loss. The code implementing our losses and models can be found here for open source software development purposes: https://anonymous.4open.science/r/rooted_loss.
Zhu Wang, Praveen Raj Veluswami, Harsh Mishra, Sathya N. Ravi
2023-10-05T20:49:48Z
http://arxiv.org/abs/2310.03890v1
# Accelerated Neural Network Training with Rooted Logistic Objectives

###### Abstract

Many neural networks deployed in real world scenarios are trained using cross entropy based loss functions. From the optimization perspective, it is known that the behavior of first order methods such as gradient descent crucially depends on the separability of datasets. In fact, even in the simplest case of binary classification, the rate of convergence depends on two factors: 1. condition number of the data matrix, and 2. separability of the dataset. With no further pre-processing techniques such as over-parametrization, data augmentation etc., separability is an intrinsic quantity of the data distribution under consideration. We focus on the landscape design of the logistic function and derive a novel sequence of _strictly_ convex functions that are at least as strict as the logistic loss. The minimizers of these functions coincide with those of the minimum norm solution wherever possible. The strict convexity of the derived functions can be extended to finetune state-of-the-art models and applications. In empirical experimental analysis, we apply our proposed rooted logistic objective to multiple deep models, e.g., fully-connected neural networks and transformers, on various classification benchmarks. Our results illustrate that training with the rooted loss function converges faster and yields performance improvements. Furthermore, we illustrate applications of our novel rooted loss function in generative-modeling-based downstream applications, such as finetuning a StyleGAN model with the rooted loss. The code implementing our losses and models can be found here for open source software development purposes: [https://anonymous.4open.science/r/rooted_loss](https://anonymous.4open.science/r/rooted_loss).

## 1 Introduction

Neural networks have become a necessity to enable various real-world applications, especially in large scale settings. An appropriate parameterized model is chosen using knowledge of the domains or use cases pertaining to the applications ([11; 42; 8]). Then, the parameters are iteratively modified to optimize a mathematically valid loss function applied on data points which represent the application under consideration ([17; 16; 32; 28; 21]). Once the iterative procedure terminates (or is terminated with stopping conditions), the model parameters can be used to make predictions on unseen points. Thus, it is crucial to understand how different algorithms behave during the optimization phase that corresponds to the training procedure. In large scale settings, first order methods are preferred since they require the least computing resources and are easier to implement with Automatic Differentiation packages ([39; 36; 44]). Naturally, the success and efficiency of first order methods depend on the landscape properties of the loss function when applied on samples in datasets ([10; 26]).

**How does the dataset affect the optimization landscape?** Consider the task of classification, in which a dataset \(\mathcal{D}\) is represented as a set of pairs \((x,y)\), where \(x\) denotes features and \(y\) denotes corresponding classes or labels ([40; 47; 53]). In binary classification, the task is to categorize \(x\) into one of two classes using model parameters after optimization.
Here, it is known that the rate of convergence of (stochastic) gradient descent - the de-facto first order method - to the optimal solution is primarily influenced by two factors: (1) _condition number_ of the loss function ([38]): this number gives an insight into the structure and properties of the dataset. A lower condition number implies better gradient directions, which makes optimization faster for first order methods ([19; 4]). When using a one-layer neural network, this condition number is determined by the so-called data matrix, in which the \((x,y)\) pairs are appropriately stacked as rows/columns; (2) recent work has shown that _separability_ of \(\mathcal{D}\) is an important factor to consider for modeling and training purposes ([45; 50]). Intuitively, separability is a measure of how easily a model can distinguish two \(x\)'s from different classes \(y\) in \(\mathcal{D}\). A highly separable dataset is easier to classify, and the optimization process is expected to converge faster. Indeed, separability is inherent to the dataset, and so without employing extra pre-processing steps like normalization ([22; 51]), augmentation ([46; 52]), or over-parametrization (more than one layer) ([13; 7]), the level of separability is determined by the distribution from which \(\mathcal{D}\) was sampled.

Furthermore, the landscape of objective functions that are used for generating or sampling points similar to \(x\) has also been under investigation ([41]). A standard assumption in designing models or architectures for sampling is that \(x\) is a smooth function - usually an image (or audio) considered as a two (or one) dimensional smooth function. With this assumption, various architectures have been proposed with (discrete) convolution or smoothing operators as the building blocks, such as DCGAN ([43]), BigGAN ([6]), and StyleGAN ([27]). These smoothing based architectures, called Generators, gradually transform a random signal to \(\tilde{x}\), a "fake" or synthetic sample. Then, a classification architecture called the Discriminator is used to assign the probability of \(\tilde{x}\) being a real sample from \(\mathcal{D}\). While separability might not be the deciding factor in training the overall models, conditioning of the loss functions used to train the Discriminator is crucial in determining the success of first order algorithms, and thereby the sampling process to obtain \(\tilde{x}\sim x\in\mathcal{D}\) ([1]).

**Our Contributions.** We provide a plug-in replacement for \(\log\) based loss functions for supervised classification and unsupervised generation tasks with provable benefits. **First**, we show that there is a natural approximation to \((-\log)\) that is bounded from below and has nice theoretical properties such as convexity and smoothness. Our novel result shows that the proposed _Rooted_ loss with one additional parameter \(k\) is at least as well conditioned as the \((-\log)\) based convex loss function, yielding provable acceleration. **Second,** we apply our loss to various dataset and architecture combinations and show that it can lead to significant empirical benefits for classification. In image classification, we show that the training time with our proposed rooted loss is much lower than with cross-entropy or focal loss. It also provides 1.44% - 2.32% gains over cross-entropy loss and 5.78% - 6.66% gains over focal loss in terms of test accuracy.
**Third,** we apply the rooted loss to generative models as a downstream application, showing lower FID scores and better generated images with limited training data.

## 2 Preliminaries

Logistic regression is the task of finding a vector \(w\in\mathbb{R}^{d}\) which approximately minimizes the empirical logistic loss ([23]). While logistic regression can be seen as a single-layer neural network, deep neural networks contain multiple such layers stacked together. Each layer captures increasingly complex features from the input data. This hierarchical structure allows deep networks to model complex relationships. Consider datapoints \((x_{i},y_{i}),i=1,\ldots,n\), where \(x_{i}\in\mathbb{R}^{d}\) denotes the features in \(d\) dimensions and \(y_{i}\in\{+1,-1\}\) is the binary label. By parametrizing the prediction function for a new sample \(x\) as

\[f(x):=\mathbb{P}(y=\pm 1|x)=\sigma(\pm w^{\top}x) \tag{1}\]

where \(\sigma\) is the sigmoid function, the maximum likelihood estimator of \(w\in\mathbb{R}^{d}\) can be obtained by minimizing the _negative_ log-likelihood function of \(w\) ([49]), written as,

\[\mathcal{L}_{\text{LR}}(w):=\frac{1}{n}\sum_{i=1}^{n}\log\left(1+\exp\left(-y_{i}w^{\top}x_{i}\right)\right). \tag{2}\]

The cross-entropy (CE) loss is one of the most commonly used loss functions for training deep neural networks, most notably in multi-class classification problems. Consider datapoints \((x_{i},y_{ik})\), where \(k\in\{1,\ldots,c\}\), \(c\) is the number of classes, and \(y_{ik}\in\{0,1\}\) is a binary indicator of whether class \(k\) is the correct classification for example \(i\). Following (2), the multi-class cross-entropy loss is written as,

\[\mathcal{L}_{\text{CE}}(w):=-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{c}y_{ik}\log\left(\frac{\exp(w_{k}^{\top}x_{i})}{\sum_{j=1}^{c}\exp(w_{j}^{\top}x_{i})}\right), \tag{3}\]

where \(w_{j}^{\top}x_{i}\) represents the prediction score for the \(i\)-th example and the \(j\)-th class.

## 3 Rooted Logistic Objective Function

### 3.1 Motivation: From Logistic Objective to Rooted Logistic Objective

Logistic loss can serve as a smooth approximation to the element-wise maximum function, where smoothness is desirable in model design since gradient-based optimizers are commonly used. In this work, we consider the following approximation of the natural logarithm, obtained from the definition of the derivative:

1. for a fixed \(u\in\mathbb{R}_{+}\), the derivative of \(u^{v}\) with respect to \(v\) is given by \(u^{v}\log(u)\) by the Chain rule;
2. now observe that by evaluating this derivative at \(v=0\), we obtain \(\log(u)\); and
3. finally, plugging the above two into the definition of the derivative, we have that \(\log(u)=\lim_{v\downarrow 0}\frac{u^{v}-1}{v}=\lim_{k\uparrow\infty}k\left(u^{1/k}-1\right)\).

Thus, for training purposes, we propose using a fixed, sufficiently large \(k\) with the following approximation to the \(\log\) function: \(\log(u)\approx k\,u^{\frac{1}{k}}-k\). Here, the approximation seeks to express \(\log(u)\) in terms of a function raised to the power of \(\frac{1}{k}\). The constant \(k\) provides a degree of freedom that can be adjusted to fine-tune the approximation. Building on this approximation, a novel loss function, termed the **R**ooted **L**ogistic **O**bjective function (**RLO**), is introduced. The key idea is to modify the traditional logistic loss by incorporating the above approximation.
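As a quick numerical sanity check of this approximation (the grid of test values below is an illustrative assumption):

```python
import numpy as np

u = np.linspace(0.5, 10.0, 50)
for k in (2, 10, 100, 1000):
    approx = k * u ** (1.0 / k) - k        # k * u^(1/k) - k
    print(k, np.max(np.abs(approx - np.log(u))))
# The worst-case gap shrinks as k grows, recovering log(u) as k -> infinity.
```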
The loss function for this RLO can be defined as:

\[\mathcal{L}_{\text{RLO}}^{k}(w)=\frac{1}{n}\sum_{i=1}^{n}k\cdot\left[l_{i}^{k}(w):=\left(1+\exp\left(-y_{i}w^{\top}x_{i}\right)\right)^{\frac{1}{k}}\right]. \tag{4}\]

Note that the additive constant \(-k\) from the approximation is dropped, since it does not affect the minimizers.

**Intuition to prefer Rooted Loss over Log based losses.** The logistic loss plays a pivotal role in penalizing prediction errors, particularly for the true class denoted as \(y_{i}\) in classification tasks. One of its notable characteristics is the high loss and large gradient when the function \(f(x)\) approaches zero. This sharp gradient is beneficial in gradient-based optimization methods, such as gradient descent, because it promotes more significant and effective update steps, leading to convergence towards optimal solutions. Moreover, when we consider the gradient contributions from incorrect classes, the "signal" coming from the gradient is weaker, so such optimization schemes may be less effective in driving the probabilities for these classes to zero. In simpler terms, while the logistic loss is adept at penalizing mistakes for the true class, it might be gentler or slower in correcting overconfident incorrect predictions. The deep neural networks (DNNs) trained by the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks ([17]).

### 3.2 Convexity of RLO

The standard logistic regression function in (2) has favorable convexity properties for optimization. In particular, it is strictly convex with respect to the parameters \(w\); for more details, see ([15]). By direct calculation using the Chain and Product rules, we obtain the gradient of the per-sample loss \(k\,l_{i}^{k}\) for a single point \((x_{i},y_{i})\),

\[\nabla_{w}\left[k\,l_{i}^{k}(w)\right] =\left(1+\exp\left(-y_{i}w^{\top}x_{i}\right)\right)^{(\frac{1}{k}-1)}\cdot\exp\left(-y_{i}w^{\top}x_{i}\right)\cdot(-y_{i}x_{i}) \tag{5}\]
\[=l_{i}^{k}(w)\cdot\frac{\exp\left(-y_{i}w^{\top}x_{i}\right)}{1+\exp\left(-y_{i}w^{\top}x_{i}\right)}\cdot(-y_{i}x_{i})=l_{i}^{k}(w)\cdot\sigma\left(-y_{i}w^{\top}x_{i}\right)\cdot(-y_{i}x_{i}) \tag{6}\]
\[=-g(w,x_{i})\cdot y_{i}x_{i}, \tag{7}\]

where \(g(w,x_{i}):=\sigma(-y_{i}w^{\top}x_{i})\cdot l_{i}^{k}(w)\geq 0\). Similarly, we obtain the Hessian for a single point \((x_{i},y_{i})\) as follows,

\[\nabla_{w}^{2}\left[k\,l_{i}^{k}(w)\right]=h(w,x_{i})\cdot x_{i}x_{i}^{\top}, \tag{8}\]

where \(h(w,x_{i}):=l_{i}^{k}(w)\cdot\sigma(-y_{i}w^{\top}x_{i})\cdot\left[1-\sigma(-y_{i}w^{\top}x_{i})\cdot(1-1/k)\right]>0\) since both \(\sigma(\cdot),1/k\in(0,1)\). The full derivation of the Hessian is included in Appendix A. With these calculations, we have the following result:

**Lemma 1**.: \(\mathcal{L}_{\text{RLO}}^{k}(w)\) _is a strictly convex function whenever \(k>1\) as is considered here._

Note that our result is novel because standard composition rules for convex optimization do not apply. This is due to the fact that the function \((\cdot)^{\frac{1}{k}}\) is a _concave_ function on the nonnegative orthant. Numerically, the main advantage is that the condition number of \(\mathcal{L}_{\text{RLO}}^{k}(w)\) is independent of the data, while \(\mathcal{L}_{\text{LR}}(w)\) can be quite ill-conditioned for inseparable datasets due to the \(\log(\cdot)\) function. More details can be found in Chapter 12 of ([38]).
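A minimal PyTorch sketch of (4), cross-checking autograd against the closed-form gradient in (5)-(7). The synthetic data and the value of \(k\) are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

def rooted_logistic_loss(w, X, y, k):
    """L_RLO^k(w) from (4); X: (n, d), y: (n,) with entries in {-1, +1}."""
    margins = y * (X @ w)                        # y_i * w^T x_i
    return (k * (1 + torch.exp(-margins)) ** (1.0 / k)).mean()

n, d, k = 32, 5, 4.0
X = torch.randn(n, d)
y = torch.where(torch.rand(n) < 0.5, -1.0, 1.0)
w = torch.randn(d, requires_grad=True)

rooted_logistic_loss(w, X, y, k).backward()

# Closed form from (5)-(7): grad_i = -sigma(-y_i w^T x_i) * l_i^k(w) * y_i x_i
with torch.no_grad():
    m = y * (X @ w)
    l_ik = (1 + torch.exp(-m)) ** (1.0 / k)
    g = torch.sigmoid(-m) * l_ik                 # g(w, x_i) >= 0
    grad_closed = (-(g * y)[:, None] * X).mean(dim=0)
    print(torch.allclose(w.grad, grad_closed, atol=1e-5))  # True
```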
While strict convexity holds for both the logistic and RLO loss functions, the following result says that the full batch RLO is guaranteed to be at least as well conditioned as the logistic objective, by comparing the coefficient of the Hessian term \(x_{i}x_{i}^{\top}\) in the RLO and logistic objectives (LO):

**Lemma 2**.: _Let \(r_{i}:=h_{\text{RLO}}(w_{i}^{*},x_{i})/h_{\text{LO}}(w_{i}^{*},x_{i})\in\mathbb{R}_{\geq 0}\), where \(w_{i}^{*}\) denotes the optimal parameters for sample \(i\). Then, if \(k\leq\exp(l_{i}^{k}(w_{i}^{*}))\), we have \(r_{i}>1\)._

Lemma 2 above states that as long as \(k\) is not chosen to be too large, the gradient directions may provide the sufficient descent needed for fast convergence. This property makes it ideal for solving classification problems. From Lemmas 1 and 2, we can conclude that there is a range of values of \(k\) that provides better conditioning for individual data points. This is beneficial when using stochastic algorithms that compute the gradient from a random mini-batch of samples at each iteration instead of the full dataset.

**Generalization properties of RLO.** Assuming that the points \(x_{i}\in\mathbb{R}^{d}\) are bounded, i.e., \(\|x\|\leq B_{x}\), and that there is a bounded optimal solution \(\|w\|\leq B_{o}\), we expect the generalization bounds for LR in (2) to hold for RLO in (4). This is because, asymptotically - when \(k\uparrow+\infty\) - the Hessian coefficient of RLO is at most \(1\), which guarantees that the gradient is Lipschitz continuous ([31; 3]).

### 3.3 Applying RLO for Generative Models

Generative modeling was studied as a _statistical_ problem where the goal is, given a training dataset \(x_{i},i=1,\ldots,n\), to learn a parametric model of its distribution \(p(x)\). For an appropriate parametric model \(f_{\theta}\), we need \(\theta\) such that \(f_{\theta}(z)\approx x\), where \(z\) is usually a Gaussian vector, so that some \(x_{i}\) is approximated through the transformation \(f_{\theta}\). For sampling, given a mapping \(f_{\theta}\), synthetic data points can be generated by sampling a Gaussian vector \(z\) and computing \(f_{\theta}(z)\). This overcomes some of the architectural restrictions on \(f_{\theta}\). This property is leveraged in Generative Adversarial Networks (GANs), see Chapter 10 in ([33]). GANs are a class of models that synthesize data points from the model using \(f_{\theta}\), which takes a Gaussian vector \(z\) as input. GANs are trained by comparing these synthetic samples with real samples \(x_{i}\) from the training data. The comparison is done by a critic, e.g., a binary classifier \(g_{\eta}\), which judges the authenticity of the samples. It is an adversarial game where the generator's parameters \(\theta\) are continuously updated to synthesize data close to reality, while the classifier, i.e., the discriminator, wants to label them correctly as fake. The result is a generator that has successfully learned to generate data that the discriminator labels as real. The generator tries to maximize the classifier loss with respect to \(\theta\) while the classifier tries to minimize the loss with respect to \(\eta\). This leads to a rooted minimax problem with a loss that is similar to (4), written as,

\[\min_{\theta}\max_{\eta}V_{k}(f_{\theta},g_{\eta})=\mathbb{E}_{x\sim p_{data}(x)}[k\left(g_{\eta}(x)\right)^{1/k}]+\mathbb{E}_{z\sim p_{z}(z)}[k\left(1-g_{\eta}(f_{\theta}(z))\right)^{1/k}].
\tag{9}\]

## 4 Experiments

In this section, we illustrate the use of our proposed RLO with multiple model architectures on various benchmark datasets. Specifically, we compare rooted logistic regression with standard logistic regression on a synthetic dataset and 4 benchmark datasets from the UCI machine learning repository. Furthermore, we evaluate the rooted loss against cross-entropy loss and focal loss by training state-of-the-art deep models, e.g., ResNet ([20]), ViT ([12]) and Swin ([35]), on image classification tasks. Finally, we showcase the application of image generation using RLO with StyleGAN ([24]).

### 4.1 Datasets

**Synthetic dataset setup ([21]):** We use a version of the popular 2-class spiral with 1500 samples, and we use 70% of the data for training and the remaining 30% for testing.

**Datasets for regression:** The empirical studies are conducted on the following 4 benchmark classification datasets, which can be found in the publicly available UCI machine learning repository ([2]): Wine, Ionosphere, Madelon and Specheart.

**Image datasets:** We conduct image classification experiments to test the performance of the rooted loss. In particular, we use CIFAR-10/100 ([30]) for training from scratch, and Tiny-ImageNet ([37]) and Food-101 ([5]) for finetuning. For our image generation experiments with StyleGAN, we use the FFHQ dataset ([25]) and the Stanford Dogs dataset ([29]). More dataset information is in Appendix B.

### 4.2 Shallow Logistic regression vs Rooted Logistic regression

**Experiments setups:** The baseline is standard logistic regression. To showcase the benefits of RLO, we run the experiments with different values of \(k\in[3,20]\) for the proposed rooted logistic regression. Note that, for all the datasets except Specheart, we use the same number of iterations (200) and learning rate (0.01) for all the experiment settings. For Specheart, we increase the number of iterations to 1000 for better convergence and higher accuracy. We also evaluate standard logistic regression as well as RLO with/without \(\ell_{2}\) regularization. More setup details are in Appendix D.1.

Figure 1: The rate of convergence over iterations of standard logistic regression and RLO. The lines for the rooted logistic regression show the convergence for the value of \(k\) which gives the best test accuracy: \(k=4\) for Ionosphere, \(k=6\) for Madelon, \(k=20\) for Specheart and \(k=3\) for Wine. RLO converges faster than standard logistic regression in all the settings.

**Convergence analysis:** As mentioned above, we keep the experimental settings the same across all datasets, except Specheart. Figure 1 shows the convergence performance for the Ionosphere, Madelon, Specheart and Wine datasets respectively. For all the datasets, we can clearly see that RLO has better convergence performance compared to standard logistic regression. RLO both with and without \(\ell_{2}\) regularization converges quicker than standard logistic regression, with RLO without \(\ell_{2}\) regularization converging comparatively faster. For the convergence results for other values of \(k\) in the case of RLO, please refer to Appendix D.1.

**Performance gains:** Table 1 shows the test accuracy for all the datasets under the different regression settings. For RLO, we also show the top 3 \(k\) values which achieved the highest accuracy.
As seen in the table, for all the datasets, RLO with/without \(\ell_{2}\) regularization outperforms standard logistic regression with/without \(\ell_{2}\) in terms of accuracy on the test set. Specifically, RLO with \(\ell_{2}\) regularization consistently achieves higher accuracy rates for different values of \(k\). Hence, we conclude that our proposed RLO is beneficial for accelerating training and also provides performance improvements.

### 4.3 Deep Neural Networks for classification with RLO

**Experiments setups:** First, we implemented fully-connected neural networks (FCNs) of three different depths (2, 3, and 4 layers) on the synthetic dataset. The training iterations are 1000, 100, and 50 respectively. We use the same hidden size of 100, a learning rate of 0.01, and \(k=3\) for the three FCNs. For the vision models in multi-class image classification tasks, we train and finetune ViT-B ([12]), ResNet-50 ([20]), and Swin-B ([35]) models. The \(k\) parameters of our proposed RLO are chosen from the set {5, 8, 10}. We train on CIFAR-10 and CIFAR-100 for 200 epochs with ViT and 100 epochs with ResNet and Swin. Moreover, we finetune these models on Tiny-ImageNet and Food-101 for 10 epochs. We train and fine-tune on 3 NVIDIA RTX 2080Ti GPUs. To evaluate our proposed RLO, we use cross-entropy (CE) loss and focal loss as baselines. More implementation details are in Appendix C.

**Observations on FCN decision boundaries:** To enable interpretative understanding, we use the synthetic setups to visualize the decision boundaries learned by RLO compared with CE. Figure 2 shows the decision boundaries obtained from RLO and CE for the three FCNs trained for different numbers of iterations. The intervening white line between the red and blue regions denotes the decision boundary, a critical threshold distinguishing classifications within the model. Specifically, comparing (b) and (e), and (c) and (f), we observe that the margins, i.e., the distances from the datapoints to the decision boundary, are larger for RLO in most regions. Hence, RLO better separates data points, which enables a faster convergence rate.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & LR & LR - L2 & \multicolumn{2}{c}{RLO} & RLO - L2 \\ \hline Dataset & Test Acc. & Test Acc. & k & Test Acc. & Test Acc.
\\ \hline \multirow{3}{*}{Wine} & \multirow{3}{*}{90 \(\pm\) 4.15} & \multirow{3}{*}{89.44 \(\pm\) 5.66} & 3 & **97.22 \(\pm\) 1.75** & 94.55 \(\pm\) 3.51 \\ \cline{3-5} & & & 11 & 82.77 \(\pm\) 1.23 & **95.55 \(\pm\) 2.22** \\ \cline{3-5} & & & 13 & 91.66 \(\pm\) 5.55 & **95 \(\pm\) 5.09** \\ \hline \multirow{3}{*}{Ionosphere} & \multirow{3}{*}{81.4 \(\pm\) 2.73} & \multirow{3}{*}{83.94 \(\pm\) 2.1} & 4 & 85.07 \(\pm\) 1.12 & **86.47 \(\pm\) 1.12** \\ \cline{3-5} & & & 3 & **86.47 \(\pm\) 1.69** & 85.63 \(\pm\) 0.56 \\ \cline{3-5} & & & 16 & 84.5 \(\pm\) 0.00 & **86.19 \(\pm\) 0.56** \\ \hline \multirow{3}{*}{Madelon} & \multirow{3}{*}{52.03 \(\pm\) 1.9} & \multirow{3}{*}{50.83 \(\pm\) 1.51} & 6 & **54.36 \(\pm\) 0.71** & 52.75 \(\pm\) 0.97 \\ \cline{3-5} & & & 9 & **54.13 \(\pm\) 0.58** & 51.8 \(\pm\) 1.43 \\ \cline{3-5} & & & 19 & 52.36 \(\pm\) 1.14 & **54.13 \(\pm\) 1.42** \\ \hline \multirow{3}{*}{Specheart} & \multirow{3}{*}{80.49 \(\pm\) 3.92} & \multirow{3}{*}{88.25 \(\pm\) 1.69} & 20 & 84 \(\pm\) 3.10 & **88.5 \(\pm\) 1.83** \\ \cline{3-5} & & & 15 & 82.75 \(\pm\) 1.83 & **88 \(\pm\) 1.49** \\ \cline{3-5} & & & 13 & 82.99 \(\pm\) 2.44 & **88 \(\pm\) 1.00** \\ \hline \hline \end{tabular} \end{table} Table 1: Testing Accuracy from 5-Fold Cross Validation, using Shallow Logistic regression vs Rooted Logistic regression (RLO). Top 3 values of \(k\) are shown for RLO. RLO with/without \(\ell_{2}\) regularization outperforms Shallow Logistic regression with/without \(\ell_{2}\), in terms of accuracy on the test sets of all 4 datasets. Figure 3: RLO performance on CIFAR-100 training with different models. The x-axis is wall time in minutes. RLO obtains a more stable validation loss, and uses less time for training on all models. Figure 2: The color shows the estimated probability of a class label being identified as 1, aligned with the scale located on the right side of the figures. The intervening white line between the red and blue regions denotes the decision boundary. In (a), (b) and (c), we train a 2-layer FCN for 1000 iterations, a 3-layer FCN for 100 iterations, and a 4-layer FCN for 50 iterations with cross-entropy loss. In (d), (e) and (f), we train a 2-layer FCN for 1000 iterations, a 3-layer FCN for 100 iterations, and a 4-layer FCN for 50 iterations with the rooted logistic objective loss. **Nonconvex Optimization Benefits:**_(1) Performance gains:_ we evaluate the effectiveness of RLO on nonconvex optimization. In Figure 2, training with RLO for FCNs outperforms CE in terms of accuracy in all the different settings, providing 1% - 4.3% improvements for binary classification. Furthermore, we illustrate the results of RLO on multiple image classification benchmarks. Table 2 shows that RLO achieves the best or second-best accuracy across all datasets and network architectures. Specifically, training with RLO brings roughly 1.44% - 2.32% gains over CE and 5.78% - 6.66% gains over focal loss in terms of test accuracy. Additionally, Figures 3(a) and 3(b) show that the training time of RLO is significantly less than that of CE and focal on different models. For example, the training wall time of ViT on CIFAR-100 for 200 epochs is 54 minutes and 109 minutes less than CE and focal respectively. Therefore, our proposed RLO can accelerate neural network training and also provide performance improvements regardless of datasets and model architectures. _(2) Effects on overfitting_: Figure 3(b) shows that the validation loss using CE is increasing over iterations.
However, we observe that the validation loss with RLO is decreasing over time on the same dataset and model, which helps reduce overfitting. ### GAN training with RLO **Experiment setup:** For the image generation setup, we use the version of StyleGAN capable of being trained with limited training data, as proposed in ([24]). All training is done on 3 NVIDIA RTX 2080Ti GPUs with the FFHQ and Stanford Dogs datasets. We evaluate the effectiveness of RLO by replacing the original loss and compare it to StyleGAN's CE loss, for different values of \(k\). To compare the efficacy of the models trained using RLO and CE loss, we take a large image from the original dataset, and compute its projection on the latent space using our model snapshots from the initial and final stages of the training. We then use these projections to generate an image using their respective models. More implementation details are in Appendix C. **Observations:** As shown in Figure 4(a), the setup with RLO trained on the FFHQ dataset produces a lower FID, which means better-quality images, than the one with CE. The progressive image generation while training these models is illustrated in Figure 4(b). Images produced by RLO (bottom 2 rows) seem to be slightly better at the final stages of the training. A similar FID vs time comparison and progressive image generation for the Stanford Dogs dataset are shown in Figures 4(c) and 4(d), where the FID scores for the two models are close. Finally, in Figure 5, for a target image, we show the images obtained from the projections using the initial and final stages of training. For both the FFHQ and Stanford Dogs datasets, we can see that the images generated using the final stage RLO models (bottom image, in the last column) produce details that are closer to the target image than CE. More generated images are shown in Appendix D.3. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\# Param} & \multicolumn{2}{c}{CE} & \multicolumn{2}{c}{Focal} & \multicolumn{2}{c}{RLO-10} \\ & & Train & Test & Train & Test & Train & Test \\ \hline ResNet-34 & 21.3M & 99.76 & 89.92 & 94.04 & 85.79 & **99.79** & **89.98** \\ ResNet-50 & 23.7M & **99.47** & 86.65 & 93.98 & 80.13 & **99.47** & **86.72** \\ ResNet-101 & 42.7M & 99.69 & 84.28 & 94.40 & 77.75 & **99.74** & **85.12** \\ \hline ViT-S & 14.4M & 67.17 & 67.7 & 66.51 & 67.74 & **68.31** & **68.37** \\ ViT-B & 85.1M & 72.49 & 72.12 & 71.61 & 71.45 & **73.23** & **72.41** \\ ViT-L & 226.8M & 76.56 & **74.81** & 74.56 & 73.72 & **77.17** & **74.81** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablations on model architectures, including running time and test performance on CIFAR-10. Time is the averaged one-epoch time in seconds. Note that CE is short for cross-entropy. The \(k\) value is 10. Our RLO obtains the best train and test accuracy on all models. Figure 4: Results on the FFHQ dataset and Stanford Dogs dataset. (a) FID score vs training time for both the cross-entropy loss and RLO-2 setups. (c) FID score vs training time for both the cross-entropy loss and RLO-11 setups. In (b) and (d), the top \(8\times 2\) rows contain four instances of the image generation (each image is a part of a \(2\times 2\) grid containing four images) using CE loss. The bottom \(8\times 2\) rows represent the same with the RLO setup at the same instances. Figure 5: For each setup, the target image can be seen in the left-most image.
To its right, the first row shows the generated images with the projections obtained from the initial and final stages of the training respectively, with CE. The second row is the result of replacing it with RLO. We train Swin using different \(m\) on CIFAR-10 with \(k=8\). The performances for different \(m\) are similar, and \(m=8\) is slightly better than the others in terms of test accuracy. More results for different \(k\) values are in Appendix D. **Is RLO sensitive to model architectures/sizes?** First, in Table 2, we showed the performance of RLO on image classification tasks for various combinations of datasets and deep neural network models. We saw that \(k\) values from the set {5, 8, 10} achieved performance gains over the CE and focal methods, in terms of test accuracy, for almost all the settings. For further ablation, we chose the \(k\) value to be \(10\) and compare with the baselines under different model architectures of ResNet and ViT. The ablation results in Table 3 suggest that RLO consistently performs better, even under different architectures of the same model. Both the training and the test accuracy under RLO-10 are better than those trained with CE and focal losses, suggesting that RLO provides performance gains across different hyperparameters and model architectures. ## 5 Conclusions and Future work We presented comprehensive evaluations of a new class of loss functions for prediction problems in shallow and deep networks. This class of loss functions has many favorable properties in terms of optimization and generalization of learning problems defined with high-dimensional data. Recent results suggest that the standard logistic loss \(L_{\text{LR}}(\cdot)\) needs to be adjusted for better convergence and generalization properties ([49]). By taking the limit as \(k\uparrow+\infty\), or equivalently \(1/k\downarrow 0\) (say using L'Hopital's rule), we can see that \(\lim_{k\to\infty}L_{\text{RLO}}^{k}(\cdot)=L_{\text{LR}}(\cdot)\). Moreover, since \(L_{\text{RLO}}^{k}(\cdot)\) and its first-order necessary condition are both smooth with respect to \(k\), the minimizers also coincide in the limit. We leave the rate of convergence to the max classifier and the generalization properties of the obtained solution, as in ([48; 15]), for future work. Finally, the dependence of excess risk and generalization bounds for RLO on the parameter \(k\) is also left as future work. We believe insights from recent generalized linear models are fruitful directions to pursue ([18; 14]). Our investigations show that the rooted logistic loss function performs better when using first-order methods. However, the convergence guarantees for first-order methods are relatively weak for pretraining architectures with a large number of parameters, such as vision models. Moreover, since these models have sequential aspects in their training formulations, the convergence rate is further reduced in practice. Therefore, it would be interesting to consider second-order methods like Sophia [34] to optimize \(\mathcal{L}_{\text{RLO}}^{k}\) for some \(k\).
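The exact rooted objective is the one defined earlier in the paper (cf. Eq. (9)). Purely to illustrate the limiting argument above, the following sketch uses a hypothetical per-sample form \(L_{\text{RLO}}^{k}(p)=k(1-p^{1/k})\), where \(p\) is the predicted probability of the true class. This is an assumed stand-in, not the paper's exact formula, but it is consistent with the stated limit, since \(\lim_{k\to\infty}k(1-p^{1/k})=-\log p\) recovers the standard logistic (cross-entropy) loss:

```python
import numpy as np

# Hypothetical per-sample rooted loss (assumed form, not the paper's exact
# definition): L_RLO^k(p) = k * (1 - p**(1/k)), with p the predicted
# probability of the true class. As k -> infinity this tends to -log(p),
# i.e., the standard logistic / cross-entropy loss.
def rooted_loss(p, k):
    return k * (1.0 - p ** (1.0 / k))

p = np.array([0.9, 0.5, 0.1])
for k in [1, 3, 10, 100, 10_000]:
    print(f"k={k:6d}", rooted_loss(p, k))
print("-log p :", -np.log(p))  # the limiting logistic loss
```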
2303.13563
Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs) due to their potential for energy efficiency and their ability to model spiking behavior in biological systems. However, the training of SNNs is still a challenging problem, and new techniques are needed to improve their performance. In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANN to SNN. We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs by enabling faster convergence and increasing information flow through the network. Our results show an average +8% accuracy increase on CIFAR-10-DVS and DVS128 Gesture datasets adaptation of multiple state-of-the-art models.
Hadjer Benmeziane, Amine Ziad Ounnoughene, Imane Hamzaoui, Younes Bouhadjar
2023-03-23T07:57:32Z
http://arxiv.org/abs/2303.13563v1
# Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training ###### Abstract Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs) due to their potential for energy efficiency and their ability to model spiking behavior in biological systems. However, the training of SNNs is still a challenging problem, and new techniques are needed to improve their performance. In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANN to SNN. We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs by enabling faster convergence and increasing information flow through the network. Our results show an average +8% accuracy increase on CIFAR-10-DVS and DVS128 Gesture datasets adaptation of multiple state-of-the-art models. Spiking Neural Network, efficient deep learning, neural architecture search ## I Introduction While Deep Learning (DL) has become a foundational technology for many applications, including autonomous driving and medical image segmentation, its inference is still energy-, resource- and time-expensive. An ongoing challenge in DL research is to devise energy-efficient inference algorithms and hardware platforms. Taking inspiration from the brain, two intertwined solutions are investigated: (1) designing neuromorphic hardware, in which memory and computations reside within the same node, and (2) building bio-inspired networks such as the Spiking Neural Network (SNN), which can be more energy efficient than non-spiking neural networks. In this paper, we refer to the latter as Artificial Neural Networks (ANNs). Although ANNs are brain-inspired, there are fundamental differences in their computations and learning mechanisms compared to the brain. In the brain, neurons communicate with each other via sequences of action potentials, also known as _spikes_. An action potential travels along the axon of a neuron and activates synapses. The synapses release neurotransmitters that arrive at the postsynaptic neuron. Here, the membrane potential rises with each incoming pulse of neurotransmitters. If the membrane potential reaches a certain threshold, the postsynaptic neuron fires a spike itself. These individual spikes are sparse in time and space, which is the main reason behind the energy efficiency of biological neuronal networks, i.e., SNNs. In addition, the information in SNNs is conveyed by spike times and thus can have a large capacity for encoding and representing the input data at the edge. However, SNN architecture design and training are still in their early phases. An important scientific question is to what extent architecture characteristics (e.g., operations, skip connections) are compliant with the spatial and temporal constraints in SNNs. To answer this question, this paper presents the following contributions: * We investigate the relationship between the number of skip connections and the accuracy drop observed when standard architectures, including denseNet121 [1], resnet18 [2], and mobilenetv2 [3], are converted to SNNs. To the best of our knowledge, this is the first work introducing a dense spiking neural network. * We propose an adaptation hyperparameter tuning algorithm that selects the best number of skip connections to optimize the trade-off between accuracy drop and energy efficiency.
## II Related Works ANNs use gradient descent to optimize the weights during training, but this has proven challenging in SNNs due to their non-differentiable nonlinear spikes [4]. A solution to this problem was to introduce surrogate derivatives during the backpropagation (BP) operation, which replaces the spiking activations with a smooth function [5]. While this has been successful at improving the accuracy of SNNs in solving certain problems, their performance often falls short compared to that of ANNs. The reason for this discrepancy is twofold: 1) the surrogate gradient approaches only provide an approximation of the gradients, and 2) the unfolding of SNNs in time to perform BP leads to the vanishing gradient problem, similar to the problem faced in vanilla Recurrent Neural Networks (RNNs). A study such as [6] provides a method that computes exact gradients for arbitrary SNN architectures, but its applications were limited to relatively simple datasets, e.g., MNIST. The incorporation of local losses in space [7] and time [8] has shown promising results in circumventing the vanishing gradient problem, but as these methods only roughly approximate the gradients, they lead to less competitive performance. To further improve learning in SNNs, other studies took a different approach by introducing novel architectures and ingredients. For example, the work in [9] showed that batch normalization through time could effectively train deep SNNs. Another example is presented in [10], where the authors implemented a dedicated neural architecture search (NAS) to find the best architecture within common NAS benchmarks, such as NAS-Bench-201. While their methodology is promising, we found that adapting standard ANN architectures such as resnet18 or densenet yields better accuracies in less search time. In our study, we adapt standard architectures and explore the effect of skip connections on learning in SNNs. ## III Proposed Methodology In this section, we first describe the skip connection investigation. The investigation aims at understanding the importance of skip connections in SNNs and motivating the use of hyperparameter optimization on the number of skip connections. We then present the general steps of our hyperparameter optimization strategy. ### _Skip connections in SNN_ Common state-of-the-art topologies comprise a small repeated set of layers, called _blocks_. The blocks are connected with a single sequential connection. The block's topology is described as a directed acyclic graph (DAG). Each vertex corresponds to a layer such as convolution, attention, or fully connected. The number of vertices in the graph is referred to as the depth of the block, \(d_{b}\). An adjacency matrix represents the connections between vertices. Skip connections are commonly used inside the blocks to increase overall accuracy. Their main goal is to overcome the vanishing gradient problem that may prevent layers from training. There are two types of skip connections in the literature, as shown in figure 1 (a) and (b). * Densenet-like Skip Connections (DSC) [1] concatenate previous layers' outputs, \(l_{i-n}\) where \(0<n<i\), as the input to the next layer. A direct mathematical relation is then created between the weights of layer \(l_{i-n}\) and the output of layer \(l_{i}\), which enhances backward gradient computation. However, adding these connections enlarges the input tensor, which augments the number of multiply-accumulate operations (MACs).
* Addition-type Skip Connections (ASC) [2] perform element-wise summation of the outputs of previous layers, \(l_{i-n}\) where \(0<n<i\). The result is the input to the current layer \(l_{i}\). This type is usually used in resnet-like architectures. To study the topological properties, we do not use the original architectures, such as DenseNets, which contain all-to-all connections. Instead, we consider a generalized version where we vary the number of skip connections by randomly selecting only some channels for concatenation. We define \(n_{\text{skip},i}\) as the number of skip connections coming into layer \(i\). To analyze the skip connection effect, we first build a single-block architecture, with 4 convolution layers inside the block. Figure 1 (right) shows the results of varying \(n_{\text{skip}}\) and using both the DSC and ASC types of skip connections. If \(n_{\text{skip}}\) is greater than the number of previous layers, we use the number of previous layers instead. For example, the second layer can only have \(n_{\text{skip}}=1\), and the fourth layer can have up to \(n_{\text{skip}}=3\). Fig. 1: Left: Commonly used skip connections in neural networks. Right: Skip connections investigation results. The hyperparameters and setup are described in section IV. We show the test accuracy on CIFAR-DVS [11] as well as the average firing rate for the SNN models. The firing rate refers to the rate at which a block generates output signals, which are typically referred to as spikes. From figure 1 (right), we draw the following conclusions: * Overall, increasing the number of skip connections, regardless of their type, enhances the model's accuracy and decreases the drop from the corresponding ANN version. * Adding DSC increases the average firing rate only slightly compared to ASC. The firing rate of the original architecture, i.e., \(n_{\text{skip}}=0\), is low (11%). The summation operation adds up the spikes of multiple inputs, which increases the overall firing rate of the network, while the concatenation operation combines the input signals into a single signal, which results in a lower overall firing rate but yields an increasing number of MACs. The choice of connection type between layers is critical. A lower firing rate and fewer operations can lead to energy efficiency and reduced computational requirements while offering a decent increase in accuracy. ### _Skip Connections Optimization_ We define the skip connections in the adjacency matrix of each block \(A_{b}\), where \(b\) is the index of the block in the overall topology. Each element of the adjacency matrix is defined in equation 1. Note that the adjacency matrix does not include any backward connections. \[a_{ij}=\left\{\begin{array}{ll}0&\text{if no connection between i and j}\\ 1&\text{DSC connection between i and j}\\ 2&\text{ASC connection between i and j}\end{array}\right. \tag{1}\] Figure 2 shows the overall hyperparameter optimization strategy used to adapt a given ANN such as densenet [1], resnet [2], or mobilenetv2 [3] to an SNN. Given an initial ANN topology, denoted as \(\alpha\), the optimization aims at finding the right number, position, and type of skip connections that minimize the drop between the ANN accuracy and that of its SNN counterpart. The overall optimization process comprises two steps: (1) we begin by constructing the search space of all possible adjacency matrices. Each block is extracted from the given topology, and the number of layers in each block as well as the initial adjacency matrices are defined.
(2) We use Bayesian optimization (BO) to optimize the accuracy drop. Formally, BO seeks to compute \(A^{*}=\operatorname{argmin}_{A\in\Lambda}f(A)\), where \(\Lambda\) is the set of all possible adjacency matrices and \(f(A)\) denotes the accuracy drop between the topology obtained with the adjacency matrix \(A\) and the accuracy of \(\alpha\). Over a sequence of iterations, the results from all previous iterations are used to model \(f\) over \(\Lambda\) using the posterior distribution of the model. The next architecture is then chosen by optimizing an acquisition function. Two design decisions need to be taken in the case of BO: (1) the prior model, and (2) the acquisition function. * **The prior:** defines the probability distribution over the objective function. We use a Gaussian process (GP) [12] to model this distribution. A GP is a valuable surrogate model for such an expensive training objective. We do not use a predictor as it would require creating a dataset for each given topology. * **The acquisition function:** defines how we select the next point to sample, given a conditional distribution over the values of the objective given by the prior. The most common acquisition functions used in the literature are expected improvement (EI), probability of improvement (PI), and the upper confidence bound (UCB) [13]. The latter is used in our search strategy. The UCB algorithm enables us to balance exploration with exploitation during the search: it shifts from concentrating on exploration, choosing the least preferred actions, to focusing on exploitation. The chosen architecture is then trained and used to update the GP. Evaluating \(f(A)\) in each iteration is the bottleneck of BO since we need to train the model. Because we optimize the skip connections, we can use previously trained weights and share them among all candidate topologies. We only fine-tune the networks for \(n\) epochs to account for the removal or addition of skip connections. Besides, we use parallel BO, i.e., our strategy outputs \(k\) architectures to train in each iteration, so that the \(k\) architectures can be trained in parallel. ## IV Experiments In this section, we present the results of the effects of skip connections on the performance of SNNs. The experiments were conducted using the snnTorch library [14]. We evaluated the optimization on three datasets: CIFAR-10 [15], CIFAR-10-DVS [11], and DVS128 Gesture [16]. For CIFAR-10, we trained the model on 50k images split into 10 classes, with 5k each for validation and testing. We used an SGD optimizer with a learning rate of 0.01 and a momentum of 0.9. The employed number of steps is equal to 25 and the maximum number of epochs is equal to 200. Fig. 2: Overview of the hyperparameter optimization process. For CIFAR-10-DVS, the dataset contains 10k images recorded by a DVS128 sensor, split into 90% training and 10% test, with the training further divided into 80% new training and 20% validation. We train the models for 100 epochs with a learning rate set to 0.025, an SGD optimizer, and a momentum of 0.9. For DVS128 Gesture, event data was recorded from 29 subjects performing 11 hand gestures and preprocessed into 4k sequences for training, 500 for validation, and 500 for testing. The training was performed for 200 epochs with a learning rate of 0.01 and the Adam optimizer.
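To make the adjacency-matrix encoding of equation 1 and the two connection types of Section III concrete, the following is a minimal sketch of a single spiking block in snnTorch (our illustration, not the authors' implementation; channel counts, the leak parameter \(\beta\), and the single-time-step forward pass are simplifying assumptions):

```python
import torch
import torch.nn as nn
import snntorch as snn

class SpikingBlock(nn.Module):
    """Spiking conv block; skip[i][j] in {0, 1, 2} encodes Eq. (1) for j < i:
    0 = no connection, 1 = DSC (concatenation), 2 = ASC (addition)."""

    def __init__(self, channels=16, depth=4, skip=None, beta=0.9):
        super().__init__()
        self.depth = depth
        self.skip = skip or [[0] * depth for _ in range(depth)]
        self.convs, self.lifs = nn.ModuleList(), nn.ModuleList()
        for i in range(self.depth):
            n_cat = sum(1 for j in range(i) if self.skip[i][j] == 1)
            in_ch = channels * (1 + n_cat)  # DSC enlarges the input tensor
            self.convs.append(nn.Conv2d(in_ch, channels, 3, padding=1))
            self.lifs.append(snn.Leaky(beta=beta))  # leaky integrate-and-fire

    def forward(self, x):  # one time step shown; loop over T steps in practice
        mems = [lif.init_leaky() for lif in self.lifs]
        outs = []
        for i in range(self.depth):
            inp = x if i == 0 else outs[-1]
            # DSC: concatenate spike outputs of the selected earlier layers
            cat = [inp] + [outs[j] for j in range(i) if self.skip[i][j] == 1]
            cur = self.convs[i](torch.cat(cat, dim=1))
            # ASC: element-wise sum of the selected earlier spike outputs
            for j in range(i):
                if self.skip[i][j] == 2:
                    cur = cur + outs[j]
            spk, mems[i] = self.lifs[i](cur, mems[i])
            outs.append(spk)
        return outs[-1]

# Example: one DSC (layer 0 -> layer 2) and one ASC (layer 1 -> layer 3).
block = SpikingBlock(skip=[[0, 0, 0, 0], [0, 0, 0, 0],
                           [1, 0, 0, 0], [0, 2, 0, 0]])
out = block(torch.rand(8, 16, 32, 32))
```

In this sketch, concatenation (DSC) grows the convolution's input channels, matching the MAC increase discussed above, while addition (ASC) sums spike maps into the layer's input current, matching the firing-rate increase.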
### _Overall results_ Table I shows the overall results of optimizing the skip connections on three state-of-the-art architectures: resnet18 [2], densenet121 [1], and mobilenetv2 [3]. Our optimized architectures consistently perform better than the original SNN versions. On average, we achieve +11.3%, +9.3%, and +10.2% accuracy on CIFAR-10, CIFAR-10-DVS, and DVS128 Gesture respectively. Note that given the non-static images in CIFAR-10-DVS and DVS128 Gesture, conventional ANNs are omitted for this type of data. ### _Comparison with Random Search_ The efficacy of our hyperparameter optimization is compared to random search in figure 3. We implement a random search (RS) over the constructed search space of adjacency matrices. The search builds the architecture based on a randomly sampled adjacency matrix without replacement. Then RS trains the architecture from scratch, which requires a massive computing budget. Our method achieves better performance for optimizing multiple models with fewer iterations. Besides, our hyperparameter optimization is more stable and consistently provides closer solutions, which ensures finding the right architectures within a single search run. ## V Conclusion and Future directions This paper presents novel insights into the design and training of spiking neural networks (SNNs) and highlights the potential of skip connections as a promising tool for advancing SNN research. Our study evaluated both densenet-like and addition-type skip connections and found that both improved accuracy, with densenet-like connections being more energy-efficient as they only slightly increase the firing rate. Our comprehensive hyperparameter optimization process led to the discovery of the optimal ANN to SNN adaptation, resulting in an average accuracy improvement of 8% within approximately 5 minutes. These results demonstrate the significance of skip connections in designing and training SNNs and pave the way for further research in this field. In future work, we plan to further improve the performance of SNNs by incorporating backward connections into our hyperparameter optimization. Additionally, exploring the split and connectivity between ANN/SNN processing on edge devices and the cloud is another promising avenue for future research. The integration of these innovations could lead to more practical applications of SNNs in real-world scenarios.
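As a closing illustration of the search loop from Section III, here is a minimal sketch using scikit-optimize as a stand-in for the GP-based BO described above (skopt's "LCB" acquisition is the confidence-bound analogue for minimization); `train_and_eval` is a placeholder for fine-tuning the candidate SNN with shared weights for a few epochs and returning the accuracy drop:

```python
from skopt import gp_minimize
from skopt.space import Categorical

DEPTH = 4
# One categorical decision per pair (i, j), j < i: 0 = none, 1 = DSC, 2 = ASC.
space = [Categorical([0, 1, 2], name=f"a_{i}_{j}")
         for i in range(DEPTH) for j in range(i)]

def accuracy_drop(flat_entries):
    # Rebuild the lower-triangular adjacency matrix of Eq. (1).
    skip = [[0] * DEPTH for _ in range(DEPTH)]
    it = iter(flat_entries)
    for i in range(DEPTH):
        for j in range(i):
            skip[i][j] = next(it)
    # Placeholder: fine-tune the SNN built from `skip` for n epochs with
    # shared weights, then return ANN accuracy minus SNN accuracy.
    return train_and_eval(skip)  # assumed user-supplied routine

result = gp_minimize(accuracy_drop, space, n_calls=30,
                     acq_func="LCB", random_state=0)
print("best adjacency entries:", result.x, "accuracy drop:", result.fun)
```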
2302.03459
On the relationship between multivariate splines and infinitely-wide neural networks
We consider multivariate splines and show that they have a random feature expansion as infinitely wide neural networks with one-hidden layer and a homogeneous activation function which is the power of the rectified linear unit. We show that the associated function space is a Sobolev space on a Euclidean ball, with an explicit bound on the norms of derivatives. This link provides a new random feature expansion for multivariate splines that allow efficient algorithms. This random feature expansion is numerically better behaved than usual random Fourier features, both in theory and practice. In particular, in dimension one, we compare the associated leverage scores to compare the two random expansions and show a better scaling for the neural network expansion.
Francis Bach
2023-02-07T13:29:06Z
http://arxiv.org/abs/2302.03459v2
# On the relationship between multivariate splines and infinitely-wide neural networks ###### Abstract We consider multivariate splines and show that they have a random feature expansion as infinitely wide neural networks with one hidden layer and a homogeneous activation function which is the power of the rectified linear unit. We show that the associated function space is a Sobolev space on a Euclidean ball, with an explicit bound on the norms of derivatives. This link provides a new random feature expansion for multivariate splines that allows efficient algorithms. This random feature expansion is numerically better behaved than usual random Fourier features, both in theory and practice. In particular, in dimension one, we compare the associated leverage scores to compare the two random expansions and show a better scaling for the neural network expansion. ## 1 Introduction Multivariate non-parametric regression can be approached from a variety of methods: decision trees, local averaging methods such as Nadaraya-Watson estimation or \(k\)-nearest-neighbor regression, neural networks, and methods based on positive definite kernels such as smoothing splines, kriging, and kernel ridge regression (see, e.g., [1, 2, 3]). In this paper, we build on the following known relationship between kernel-based methods and infinitely-wide neural networks [4, 5]. We consider an activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), and a one-hidden-layer neural network model on \(\mathbb{R}^{d}\) of the form \[f(x)=\sum_{j=1}^{m}\eta_{j}\sigma(w_{j}^{\top}x+b_{j}),\] where \(\eta_{j}\in\mathbb{R}\), \((w_{j},b_{j})\in\mathbb{R}^{d+1}\), for \(j=1,\ldots,m\). When an \(\ell_{2}\)-regularization is added to the objective function which is used to fit the model, this is equivalent to using a kernel-based method with the positive-definite kernel \[\hat{k}(x,y)=\frac{1}{m}\sum_{j=1}^{m}\sigma(w_{j}^{\top}x+b_{j})\sigma(w_{j}^{\top}y+b_{j}). \tag{1}\] See, e.g., [6] for an introduction to kernel methods. If the \(m\) input weights \((w_{j},b_{j})\in\mathbb{R}^{d+1}\) are sampled independently and identically distributed, when \(m\) tends to infinity, by the law of large numbers, \(\hat{k}(x,y)\) tends to the equivalent kernel \[k(x,y)=\mathbb{E}_{(w,b)}\big{[}\sigma(w^{\top}x+b)\sigma(w^{\top}y+b)\big{]}. \tag{2}\] This equivalence between infinitely wide neural networks and kernel methods has already been used in several ways: * Given a known kernel \(k\) which can be expressed as an expectation as in Eq. (2), we can use the approximate kernel \(\hat{k}\) as in Eq. (1), and its explicit random features to derive efficient algorithms [5, 7]: with \(n\) observations, we can circumvent the computation of the \(n\times n\) kernel matrix by computing the \(m\)-dimensional feature vector for each of the \(n\) observations, which is advantageous when \(m<n\). * Given a known neural network architecture, they allow the study of the regularization properties of using over-parameterized models, that is, with the number of hidden neurons going to infinity [8].
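As a quick numerical illustration of the convergence of Eq. (1) to Eq. (2), the following sketch (ours, not from the paper) Monte-Carlo-estimates the kernel in dimension one with the step activation (\(\alpha=0\)) and the normalization adopted in Section 3 below, \(w\) uniform on \(\{-1,1\}\) and \(b\) uniform on \([-R,R]\), for which the closed form \(\frac{1}{2}-\frac{1}{4R}|x-y|\) is derived there:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0
x, y = 0.3, -0.5  # two points in [-R, R]

def k_hat(m):
    # Normalization used for d = 1 in Section 3: w uniform on {-1, +1},
    # b uniform on [-R, R], step activation (alpha = 0).
    w = rng.choice([-1.0, 1.0], size=m)
    b = rng.uniform(-R, R, size=m)
    feat_x = (w * x + b > 0).astype(float)
    feat_y = (w * y + b > 0).astype(float)
    return np.mean(feat_x * feat_y)  # empirical kernel of Eq. (1)

exact = 0.5 - abs(x - y) / (4 * R)  # closed form derived in Section 3
for m in [10, 100, 10_000, 1_000_000]:
    print(f"m={m:8d}  k_hat={k_hat(m):.4f}  exact={exact:.4f}")
```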
In this paper, we make the following contributions, which contribute to the two ways mentioned above of relating kernels and neural networks: * We consider multivariate splines [9, 10], with kernels proportional to \(\|x-y\|_{2}^{2\alpha+1}\), for \(\alpha\in\mathbb{N}\) (where \(\|\cdot\|_{2}\) denotes the standard Euclidean norm), and show that they have a random feature expansion as infinitely wide neural networks with one hidden layer and a homogeneous activation function which is the \(\alpha\)-th power of the "rectified linear unit" [11]. This extends the earlier work of [8], which proved this link for \(\alpha=0\) (step activation function). * We show that the associated function space is a Sobolev space with order \(s=\frac{d+1}{2}+\alpha\), with an explicit dependence between the norms. * This link provides a new random feature expansion for multivariate splines that allows efficient algorithms. This random feature expansion is numerically better behaved than usual random Fourier features, both in theory and practice. In particular, in dimension one, we use the associated leverage scores [12] to compare the two random expansions. This also provides a more efficient alternative to the random Fourier feature expansion for all Matern kernels [23, page 84]. ## 2 From one-hidden-layer neural networks to positive-definite kernels We consider the Euclidean ball \(\mathbb{B}^{d}(R)\) of center \(0\) and radius \(R\) in \(\mathbb{R}^{d}\), for \(R>0\), and consider the estimation of real-valued functions on \(\mathbb{B}^{d}(R)\). For \(\alpha\in\mathbb{N}\), we consider activation functions \(\sigma\) of the form \(\sigma(u)=(u_{+})^{\alpha}=\max\{u,0\}^{\alpha}\), that is, \(\sigma(u)=0\) for \(u\leqslant 0\), and \(\sigma(u)=u^{\alpha}\) for \(u>0\), with the usual convention that \(u^{0}=1\) if \(u>0\), and with a particular focus on \(\alpha\in\{0,1\}\). For \(\alpha=0\), we recover the step function \(\sigma(u)=1_{u>0}\), and for \(\alpha=1\) the rectified linear unit \(\sigma(u)=u_{+}\). We consider randomly distributed weights \((w,b)\in\mathbb{R}^{d+1}\), and the positive definite kernel \[k(x,y)=\mathbb{E}_{(w,b)}\big{[}\sigma(w^{\top}x+b)\sigma(w^{\top}y+b)\big{]}.\] Since we use homogeneous activation functions, we can normalize the weights \((w,b)\) so that they have compact supports. Several normalizations and distributions can be used to obtain closed-form formulas. Full spherical symmetry in \(\mathbb{R}^{d+1}\). The first normalization is to write \[w^{\top}x+b=\binom{w}{b/R}^{\top}\binom{x}{R},\] and let \(\binom{w}{b/R}\) be rotationally invariant, for example, be uniformly distributed on the unit \(\ell_{2}\)-sphere in \(\mathbb{R}^{d+1}\).
This leads to closed-form formulas [13, 14] for the corresponding kernel \(\tilde{k}_{d}^{(\alpha)}\), with \(\cos\varphi=\frac{x^{\top}y+R^{2}}{(\|x\|_{2}^{2}+R^{2})^{1/2}(\|y\|_{2}^{2}+R^{2})^{1/2}}\geqslant 0\) (so that \(\varphi\in[0,\pi/2]\)), leading, for small values of \(\alpha\), to: \[\tilde{k}_{d}^{(0)}(x,y) = \frac{1}{2\pi}(\pi-\varphi)\] \[\tilde{k}_{d}^{(1)}(x,y) = \frac{1}{2\pi(d+1)}(\|x\|_{2}^{2}+R^{2})^{1/2}(\|y\|_{2}^{2}+R^{2})^{1/2}\times\big{[}\sin\varphi+(\pi-\varphi)\cos\varphi\big{]}\] \[\tilde{k}_{d}^{(2)}(x,y) = \frac{1}{2\pi(d+1)(d+3)}(\|x\|_{2}^{2}+R^{2})(\|y\|_{2}^{2}+R^{2})\times\big{[}3\sin\varphi\cos\varphi+(\pi-\varphi)(1+2\cos^{2}\varphi)\big{]}.\] More generally (see [13]), we have: \[\tilde{k}_{d}^{(\alpha)}(x,y)=\frac{1}{2\pi(d+1)(d+3)\cdots(d+2\alpha-1)}(\|x\|_{2}^{2}+R^{2})^{\alpha/2}(\|y\|_{2}^{2}+R^{2})^{\alpha/2}\times J_{\alpha}(\varphi),\] with \(J_{\alpha}(\varphi)=(-1)^{\alpha}(\sin\varphi)^{2\alpha+1}\Big{(}\frac{1}{\sin\varphi}\frac{d}{d\varphi}\Big{)}^{\alpha}\Big{(}\frac{\pi-\varphi}{\sin\varphi}\Big{)}\), which is of the form \(P_{\alpha}(\cos\varphi,\sin\varphi)+Q_{\alpha}(\cos\varphi,\sin\varphi)(\pi-\varphi)\) for \(P_{\alpha}\) and \(Q_{\alpha}\) polynomials of degree less than \(\alpha\). Regularization properties. The associated space of functions can be described through spherical harmonics in ambient dimension \(d+1\), as, e.g., described in [15, 14], through the use of the Laplacian on the hyper-sphere. This requires, however, a substantial knowledge of spherical harmonics and is not easy to relate to classical notions of derivatives in \(\mathbb{R}^{d}\). Note that Hermite polynomials can be used as well [16]. Overall, we obtain the Sobolev space with degree \(s=\frac{d+1}{2}+\alpha\) over \(\mathbb{B}^{d}(R)\), but with a non-explicit expression in terms of derivatives. In this paper, we consider another normalization with easier interpretations and links with existing kernels from the statistical literature. This is done using only a spherical symmetry on \(w\in\mathbb{R}^{d}\). Partial spherical symmetry (in \(\mathbb{R}^{d}\)). We can instead choose to have \(\binom{w}{b/R}\) uniformly distributed on the product \(\mathbb{S}^{d-1}\times[-1,1]\), where \(\mathbb{S}^{d-1}\subset\mathbb{R}^{d}\) is the unit \(\ell_{2}\)-sphere, following [8], which introduced this normalization for \(\alpha=0\). This corresponds to \(w\) uniform on the sphere \(\mathbb{S}^{d-1}\) and \(b\) uniform on \([-R,R]\). The main goal of this paper is to provide closed-form formulas for the kernel as well as to study the regularization properties. We will start in dimension one (\(d=1\)) and extend to all dimensions in later sections. We thus define the positive-definite kernel: \[k_{d}^{(\alpha)}(x,y)=\mathbb{E}_{(w,b)}\big{[}(w^{\top}x+b)_{+}^{\alpha}(w^{\top}y+b)_{+}^{\alpha}\big{]},\] where \((w,b)\) is uniform on \(\mathbb{S}^{d-1}\times[-R,R]\). ## 3 Kernels on the interval \([-R,R]\) (\(d=1\)) We first consider the kernel for \(d=1\), where \(w\in\{-1,1\}\), for which we have, by the change of variable \(b\to-b\): \[k_{1}^{(\alpha)}(x,y) = \frac{1}{4R}\int_{-R}^{R}(x+b)_{+}^{\alpha}(y+b)_{+}^{\alpha}db+\frac{1}{4R}\int_{-R}^{R}(-x+b)_{+}^{\alpha}(-y+b)_{+}^{\alpha}db\] \[= \frac{1}{4R}\int_{-R}^{R}(x-b)_{+}^{\alpha}(y-b)_{+}^{\alpha}db+\frac{1}{4R}\int_{-R}^{R}(b-x)_{+}^{\alpha}(b-y)_{+}^{\alpha}db.\] ### Closed-form formulas We first consider the case \(\alpha=0\) and then generalize from it.
We have for \(\alpha=0\), by direct integration: \[k_{1}^{(0)}(x,y) = \frac{1}{4R}\int_{-R}^{\min\{x,y\}}db+\frac{1}{4R}\int_{\max\{x, y\}}^{R}db=\frac{1}{2}-\frac{1}{4R}\big{[}\max\{x,y\}-\min\{x,y\}\big{]}\] \[= \frac{1}{2}-\frac{1}{4R}|x-y|.\] A more tedious direct computation gives the expression for other small values of \(\alpha\), as: \[k_{1}^{(1)}(x,y) = \frac{R^{2}}{6}+\frac{1}{2}xy+\frac{1}{24R}|x-y|^{3}\] \[k_{1}^{(2)}(x,y) = \frac{R^{4}}{10}+\frac{2R^{2}xy}{3}+\frac{R^{2}}{6}(x^{2}+y^{2}) +\frac{1}{2}x^{2}y^{2}-\frac{1}{120R}|x-y|^{5}.\] This can be extended to all values in \(\alpha\) in the following proposition, shown in Appendix B.1. Note that in one dimension, [17] already made the connection between cubic splines and infinitely-wide neural networks. **Proposition 1** (Closed-form formula for \(d=1\)): _Let \(\alpha\in\mathbb{N}\), we have, for \((w,b)\) uniformly distributed on the product \(\{-1,1\}\times[-R,R]\),_ \[k_{1}^{(\alpha)}(x,y)=\mathbb{E}_{(w,b)}\big{[}(w^{\top}x+b)_{+}^{\alpha}(w^{ \top}y+b)_{+}^{\alpha}\big{]}=P_{1}^{(\alpha)}(x^{2},y^{2},xy)+\frac{1}{R}c_{1 }^{(\alpha)}|x-y|^{2\alpha+1},\] _where \(P_{1}^{(\alpha)}\) is a polynomial of degree \(\alpha\), such that \(k_{1}^{(\alpha),(\mathrm{pol})}(x,y)=P_{1}^{(\alpha)}(x^{2},y^{2},xy)\) is a positive-definite kernel, and \(c_{1}^{(\alpha)}=\frac{(-1)^{\alpha+1}}{4}\frac{(\alpha!)^{2}}{(2\alpha+1)!}\)._ Moreover, as shown in Appendix B.1, we have a special form of polynomial kernel \(k_{1}^{(\alpha),(\mathrm{pol})}(x,y)=P_{1}^{(\alpha)}(x^{2},y^{2},xy)\), as: \[k_{1}^{(\alpha),(\mathrm{pol})}(x,y) = \frac{1}{4R}\int_{-R}^{R}(x-b)^{\alpha}(y-b)^{\alpha}db,\ \ \mbox{which can be expressed as}\] \[k_{1}^{(\alpha),(\mathrm{pol})}(x,y) = \frac{1}{2R}\sum_{i,j=0}^{\alpha}1_{i+j\ \mathrm{even}}\cdot \binom{\alpha}{i}\binom{\alpha}{j}x^{i}y^{j}\frac{R^{2\alpha+1-i-j}}{2\alpha+1 -i-j} \tag{3}\] \[= \frac{1}{2}\sum_{s=0}^{\alpha}\frac{R^{2\alpha-2s}}{2\alpha+1-2s} \sum_{i,j=0}^{\alpha}1_{i+j=2s}\cdot\binom{\alpha}{i}\binom{\alpha}{j}x^{i}y^{ j}.\] Note that the term for \(s=0\), is \(\frac{R^{2\alpha}}{2(2\alpha+1)}\), while for \(\alpha>0\), the term corresponding to \(s=1\) is equal to \(\frac{\alpha^{2}R^{2(\alpha-1)}}{2(2\alpha-1)}xy\). In all cases, it can be computed in time at most \(O(\alpha^{2})\). Moreover, the corresponding feature space leads to all polynomials of degree less than \(\alpha\) (see proof in Appendix B.3). This result will be directly extended to dimensions \(d\) greater than one in Prop. 3. ### Corresponding norm All positive-definite kernels define a Hilbert space of real-valued functions on \(\mathbb{B}^{d}(R)\) with a particular norm. For kernels that can be expressed as expectations, this norm \(\Omega_{1}^{(\alpha)}\) is equal to [18, 14]: \[\Omega_{1}^{(\alpha)}(f)^{2} = \inf_{\eta_{\pm}:[-R,R]\to\mathbb{R}}\frac{1}{4R}\int_{-R}^{R} \big{[}\eta_{+}(b)^{2}+\eta_{-}(b)^{2}\big{]}db\] \[\mbox{such that }\forall x\in[-R,R],f(x)=\frac{1}{4R}\int_{-R}^{R} \big{[}\eta_{+}(b)(x-b)_{+}^{\alpha}+\eta_{-}(b)(b-x)_{+}^{\alpha}\big{]}db,\] where the infimum is taken over square-integrable functions \(\eta_{+}\) and \(\eta_{-}\). 
Special case \(\alpha=0\). For \(f\) continuously differentiable, we can use and average the two simple representations: \[f(x)=f(-R)+\int_{-R}^{x}f^{\prime}(b)(x-b)_{+}^{0}db=f(R)-\int_{-R}^{R}f^{\prime}(b)(b-x)_{+}^{0}db,\] to get \(f(x)=\frac{1}{2}[f(R)+f(-R)]+2R\int_{-R}^{R}f^{\prime}(b)(x-b)_{+}^{0}\frac{db}{4R}-2R\int_{-R}^{R}f^{\prime}(b)(b-x)_{+}^{0}\frac{db}{4R}.\) The constant function equal to \(1/2\) on \([-R,R]\) can be represented as: \[\int_{-R}^{R}(x-b)_{+}^{0}\frac{db}{4R}+\int_{-R}^{R}(b-x)_{+}^{0}\frac{db}{4R}.\] We can thus take: \(\eta_{+}(b)=2Rf^{\prime}(b)+[f(R)+f(-R)]\) and \(\eta_{-}(b)=-2Rf^{\prime}(b)+[f(R)+f(-R)]\). This leads to the squared norm \(\Omega_{1}^{(0)}(f)^{2}\) being less than (since the cross-terms cancel): \[2R\int_{-R}^{R}f^{\prime}(x)^{2}dx+\big{[}f(-R)+f(R)\big{]}^{2}.\] In particular, the norm is finite as soon as the quantity above is well-defined, that is, \(f^{\prime}\) square integrable. To show that this is indeed the correct norm, we simply need to check that our representation is optimal, which is shown below for all \(\alpha\)'s (see Prop. 2). Thus \[\Omega_{1}^{(0)}(f)^{2}=2R\int_{-R}^{R}f^{\prime}(x)^{2}dx+\big{[}f(-R)+f(R)\big{]}^{2}.\] General case \(\alpha\geqslant 0\). To obtain the norm, we can notice that continuous expansions with functions \((x-b)_{+}^{\alpha}\) are exactly obtained from Taylor expansions with integral remainders, which apply to functions defined on \([-R,R]\) with \(\alpha+1\) continuous derivatives: \[f(x)=\sum_{i=0}^{\alpha}\frac{f^{(i)}(-R)}{i!}(x+R)^{i}+\int_{-R}^{R}\frac{f^{(\alpha+1)}(b)}{\alpha!}(x-b)_{+}^{\alpha}db.\] Ignoring the boundary conditions, we see that \(\eta_{+}(b)\) should be related to \(\frac{1}{\alpha!}f^{(\alpha+1)}(b)\), and that the RKHS norm should include the integral \(\int_{-R}^{R}f^{(\alpha+1)}(x)^{2}dx\). The following proposition makes this explicit (see proof in Appendix B.2). **Proposition 2** (RKHS norm for \(d=1\)): _The RKHS norm on functions on \([-R,R]\) associated to the kernel \(k_{1}^{(\alpha)}\) is equal to:_ \[\Omega_{1}^{(\alpha)}(f)^{2}=\frac{2R}{\alpha!^{2}}\int_{-R}^{R}f^{(\alpha+1)}(x)^{2}dx+\Theta^{(\alpha)}\big{[}f^{(0)}(-R),f^{(0)}(R),\ldots,f^{(\alpha)}(-R),f^{(\alpha)}(R)\big{]},\] _where \(\Theta^{(\alpha)}\) is a non-negative quadratic form._ For example, for \(\alpha=1\), we get: \[\Omega_{1}^{(1)}(f)^{2} = 2R\int_{-R}^{R}f^{\prime\prime}(x)^{2}dx+\big{[}f^{\prime}(R)+f^{\prime}(-R)\big{]}^{2}+\frac{3}{2R^{2}}\big{[}f(-R)+f(R)-Rf^{\prime}(R)+Rf^{\prime}(-R)\big{]}^{2}.\] Equivalence to classical Sobolev norms. Using classical results on Sobolev spaces [19], the norm in Proposition 2 can be shown to be equivalent to the classical squared Sobolev norm \(R\int_{-R}^{R}f^{(\alpha+1)}(x)^{2}dx+\frac{1}{R^{2\alpha+1}}\int_{-R}^{R}f(x)^{2}dx\). We will generalize this to all dimensions and provide an explicit equivalence in the following sections. ## 4 Kernels on the ball \(\mathbb{B}^{d}(R)\) (\(d\geqslant 1\)) We now extend the results from Section 3 to all dimensions \(d\geqslant 1\). We will get explicit closed-form formulas but with a slightly less explicit formulation for the RKHS norm. ### Closed-form formulas We start with the closed-form formula that directly extends Prop. 1.
**Proposition 3** (Closed-form formula for \(d\geqslant 1\)): _Let \(\alpha\in\mathbb{N}\), we have, for \(\binom{w}{b/R}\) uniformly distributed on the product \(\mathbb{S}^{d-1}\times[-1,1]\),_ \[k_{d}^{(\alpha)}(x,y)=\mathbb{E}_{(w,b)}\big{[}(w^{\top}x+b)_{+}^{\alpha}(w^{\top}y+b)_{+}^{\alpha}\big{]}=P_{d}^{(\alpha)}(\|x\|_{2}^{2},\|y\|_{2}^{2},x^{\top}y)+\frac{1}{R}c_{d}^{(\alpha)}\|x-y\|_{2}^{2\alpha+1},\] _where \(P_{d}^{(\alpha)}\) is a polynomial of degree \(\alpha\), such that \(k_{d}^{(\alpha),(\mathrm{pol})}(x,y)=P_{d}^{(\alpha)}(\|x\|_{2}^{2},\|y\|_{2}^{2},x^{\top}y)\) is a positive-definite kernel, and \(c_{d}^{(\alpha)}=\frac{(-1)^{\alpha+1}}{4\sqrt{\pi}}\frac{\alpha!^{3}\Gamma(\frac{d}{2})}{(2\alpha+1)!\,\Gamma(\frac{d}{2}+\frac{1}{2}+\alpha)}\)._ Proof. We have \(k_{d}^{(\alpha)}(x,y)=\mathbb{E}_{w}\big{[}k_{1}^{(\alpha)}(w^{\top}x,w^{\top}y)\big{]}\), and we simply use, for \(w\) uniform on the sphere: \(\mathbb{E}[|w^{\top}z|^{2\alpha+1}]=\|z\|_{2}^{2\alpha+1}\frac{\Gamma(1+\alpha)\Gamma(\frac{d}{2})}{\Gamma(\frac{1}{2})\Gamma(\frac{d}{2}+\frac{1}{2}+\alpha)}\) (see Appendix A), which leads to the expression for \(c_{d}^{(\alpha)}\). To treat the polynomial kernel part, we use Eq. (3), and the fact that for \(w\) uniform, and \(i+j\) even, \(\mathbb{E}\big{[}(w^{\top}x)^{i}(w^{\top}y)^{j}\big{]}\) is a polynomial of degree less than \(i+j\) in \(x^{\top}x\), \(y^{\top}y\) and \(y^{\top}x\). \(\blacksquare\) Like for \(d=1\), we have an integral representation for the kernel \(P_{d}^{(\alpha)}(\|x\|_{2}^{2},\|y\|_{2}^{2},x^{\top}y)=k_{d}^{(\alpha),(\mathrm{pol})}(x,y)\), as \[k_{d}^{(\alpha),(\mathrm{pol})}(x,y)=\frac{1}{4R}\int_{-R}^{R}\mathbb{E}_{w}\big{[}(w^{\top}x+b)^{\alpha}(w^{\top}y+b)^{\alpha}\big{]}db.\] Note that we have defined a new positive-definite polynomial kernel, which is an alternative to the standard kernel \((x,y)\mapsto(R+x^{\top}y)^{\alpha}\), and that can be computed in time \(O(d)\) (with a constant that depends on \(\alpha\)). As shown in Appendix B.4, the corresponding space spans all polynomials of degree less than or equal to \(\alpha\). We have, for \(\alpha\in\{0,1,2\}\): \[k_{d}^{(0)}(x,y) = \frac{1}{2}-\frac{1}{4R}\frac{\Gamma(1)\Gamma(\frac{d}{2})}{\Gamma(1/2)\Gamma(\frac{d+1}{2})}\|x-y\|_{2}\] \[k_{d}^{(1)}(x,y) = \frac{R^{2}}{6}+\frac{1}{2d}x^{\top}y+\frac{1}{24R}\frac{\Gamma(2)\Gamma(\frac{d}{2})}{\Gamma(\frac{1}{2})\Gamma(\frac{d}{2}+\frac{3}{2})}\|x-y\|_{2}^{3}\] \[k_{d}^{(2)}(x,y) = \frac{R^{4}}{10}+\frac{2R^{2}}{3d}x^{\top}y+\frac{R^{2}}{6d}(\|x\|_{2}^{2}+\|y\|_{2}^{2})+\frac{1}{2d(d+2)}(2(x^{\top}y)^{2}+\|x\|_{2}^{2}\|y\|_{2}^{2})\] \[-\frac{1}{120R}\frac{\Gamma(3)\Gamma(\frac{d}{2})}{\Gamma(\frac{1}{2})\Gamma(\frac{d}{2}+\frac{5}{2})}\|x-y\|_{2}^{5}.\] A simple bound. We will need to provide a bound on the associated features. We have, for \(\|x\|_{2}\leqslant R\): \[k_{d}^{(\alpha)}(x,x) = \frac{1}{4R}\int_{-R}^{R}\mathbb{E}_{w}\big{[}(w^{\top}x+b)_{+}^{2\alpha}\big{]}db \tag{4}\] \[\leqslant \frac{1}{4R}\int_{-R}^{R}\mathbb{E}_{w}(2R)^{2\alpha}db=\frac{1}{2}(2R)^{2\alpha}.\] ### Corresponding norms In dimension \(d=1\), we could give an explicit formula for the corresponding RKHS norm, which relied on Taylor's formula with integral remainders. This will be less explicit in higher dimensions, and we will need to use the theory of multivariate splines [9, 10]. ## 5 Link with multivariate splines In this section, we first review splines and then draw explicit links. For more details on multivariate splines, see [20, 21, 18].
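As a sanity check on Proposition 3 (again a sketch of ours, not from the paper), one can Monte-Carlo-estimate the expectation defining \(k_{d}^{(0)}\), with \(w\) uniform on the unit sphere and \(b\) uniform on \([-R,R]\), and compare it against the closed form for \(\alpha=0\) displayed above:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
d, R, m = 3, 1.0, 1_000_000
x = np.array([0.2, -0.1, 0.4])   # both points lie inside B^d(R)
y = np.array([-0.3, 0.2, 0.1])

# Monte Carlo estimate: w uniform on the unit sphere S^{d-1} (normalized
# Gaussians), b uniform on [-R, R], step activation (alpha = 0).
w = rng.standard_normal((m, d))
w /= np.linalg.norm(w, axis=1, keepdims=True)
b = rng.uniform(-R, R, size=m)
mc = np.mean((w @ x + b > 0) * (w @ y + b > 0))

# Closed form for alpha = 0 from the display above.
const = gamma(1.0) * gamma(d / 2) / (gamma(0.5) * gamma((d + 1) / 2))
exact = 0.5 - const * np.linalg.norm(x - y) / (4 * R)
print(f"monte carlo: {mc:.4f}   closed form: {exact:.4f}")
```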
### Review of multivariate splines We consider the function: \[E_{d}^{(\alpha)}(z)=c_{d}^{(\alpha)}\|z\|_{2}^{2\alpha+1},\] which has the Fourier transform (defined as a distribution, see [22]): \[c_{d}^{(\alpha)}(-1)^{\alpha+1}2^{d+1+2\alpha}\pi^{d/2-1}\Gamma(\alpha+3/2)\Gamma(d/2+1/2+\alpha)\frac{1}{\|\omega\|_{2}^{d+1+2\alpha}}=b_{d}^{(\alpha)}\frac{1}{\|\omega\|_{2}^{d+1+2\alpha}}.\] The kernel \(E_{d}^{(\alpha)}(x-y)\) is known to be "conditionally positive of order \(\alpha\)" [20, 21], that is, for each \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\), and \(\lambda_{1},\ldots,\lambda_{n}\) such that \(\sum_{i=1}^{n}\lambda_{i}P(x_{i})=0\) for all polynomials \(P\) of degree less than \(\alpha\), \[\sum_{i,j=1}^{n}\lambda_{i}\lambda_{j}E_{d}^{(\alpha)}(x_{i}-x_{j})\geqslant 0.\] We also know that for any function \(L:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\), the minimization of \[L(f(x_{1}),\ldots,f(x_{n}))+\frac{1}{2b_{d}^{(\alpha)}}\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}|\hat{f}(\omega)|^{2}\|\omega\|_{2}^{d+1+2\alpha}d\omega\] is attained at \(f(x)=P(x)+\sum_{i=1}^{n}\lambda_{i}E_{d}^{(\alpha)}(x-x_{i})\), with \(P\) and \(\lambda\) obtained through the minimization of \[L(f(x_{1}),\ldots,f(x_{n}))+\frac{1}{2}\sum_{i,j=1}^{n}\lambda_{i}\lambda_{j}E_{d}^{(\alpha)}(x_{i}-x_{j})\] with respect to the polynomial \(P\) of degree less than \(\alpha\), and \(\lambda\in\mathbb{R}^{n}\) such that \(\sum_{i=1}^{n}\lambda_{i}Q(x_{i})=0\) for all polynomials \(Q\) of degree less than \(\alpha\) [9]. When \(d\) is odd, we have an explicit representation in terms of partial derivatives: \[\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}|\hat{f}(\omega)|^{2}\|\omega\|_{2}^{d+1+2\alpha}d\omega=\int_{\mathbb{R}^{d}}\|D^{\frac{d+1}{2}+\alpha}f(x)\|^{2}dx,\] where \(D^{\frac{d+1}{2}+\alpha}f\) is the tensor of all partial derivatives of order \(\frac{d+1}{2}+\alpha\). The expansion above can be extended to the representation of functions on \(\mathbb{B}^{d}(R)\) such that the norm above is finite as \(f(x)=P(x)+\int_{\mathbb{B}^{d}(R)}E_{d}^{(\alpha)}(x-y)d\lambda(y)\), where \(\lambda\) is a Radon measure. Algorithms. Given a map \(\varphi:\mathbb{R}^{d}\to\mathbb{R}^{m}\) that can represent all polynomials of degree less than \(\alpha\), that is, with \(m=\binom{d+\alpha}{\alpha}\), we can write the vector \(y\in\mathbb{R}^{n}\), defined as \(y_{i}=f(x_{i})\), as \(y=\Phi\nu+K\lambda\), with \(\Phi\in\mathbb{R}^{n\times m}\) the matrix with all \(\varphi(x_{i})\), \(i=1,\ldots,n\), and \(K\in\mathbb{R}^{n\times n}\) the kernel matrix associated with \(E_{d}^{(\alpha)}\). We add the constraint \(\Phi^{\top}\lambda=0\). Being conditionally positive means that for \(\rho\) large enough, \(K+\rho\Phi\Phi^{\top}\) is positive semi-definite. In this paper, we provide an explicit \(\rho\Phi\Phi^{\top}\) that makes this happen when the data are constrained in \(\mathbb{B}^{d}(R)\). Note that when \(\Phi^{\top}\lambda=0\), the polynomial part of our kernel becomes irrelevant. The algorithm above requires solving an optimization problem of dimension \(n\), while we will see below how this can be reduced using random features. ### Equivalence with Sobolev space As shown in Prop.
3, our kernel is equal to \[k_{d}^{(\alpha)}(x,y)=k_{d}^{(\alpha),(\mathrm{pol})}(x,y)+c_{d}^{(\alpha)}\| x-y\|_{2}^{2\alpha+1}.\] We will show that the RKHS norm is equivalent to \(\Omega_{d}^{(\alpha)(\mathrm{eq})}(f)^{2}\) defined as the minimal value of \[\frac{1}{R}\frac{1}{b_{d}^{(\alpha)}(2\pi)^{d}}\int_{\mathbb{R}^{d}}\|\omega\| _{2}^{d+1+2\alpha}|\hat{g}(\omega)|^{2}d\omega+\frac{\underline{c}^{2}}{R^{2 \alpha}}\int_{\mathbb{B}^{d}(R)}|g(x)|^{2}dx, \tag{5}\] over all functions \(g:\mathbb{R}^{d}\to\mathbb{R}\) that is equal to \(f\) on \(\mathbb{B}^{d}(R)\), for a well-chosen constant \(\underline{c}\). Norm comparisons.We consider two positive constants \(\underline{c}\) and \(\overline{c}\) such that: * For any polynomials \(P\) of degree less than \(\alpha\), we have \(\Omega_{d}^{(\alpha)}(P)\leqslant\frac{\underline{c}}{R^{\alpha}}\|P\|_{L_{2} (\mathbb{B}^{d}(R))}\) for a positive constant \(\underline{c}\). Such a constant exists because two kernels defining a norm on the finite-dimensional space must have equivalent norms. We currently do not have an explicit upper bound on the constant \(\underline{c}\). * For any \(f\) in the RKHS defined by \(k_{d}^{(\alpha)}\), \(\|f\|_{L_{2}(\mathbb{B}^{d}(R))}\leqslant R^{\alpha}\overline{c}\Omega_{d}^{( \alpha)}(f)\). This has to exist because, for any \(x\in\mathbb{B}^{d}(R)\), we have, like for all RKHSs, \(f(x)^{2}\leqslant k_{d}^{(\alpha)}(x,x)\cdot\Omega_{d}^{(\alpha)}(f)^{2}\). We thus have, by integration, \(\overline{c}^{2}R^{2\alpha}\leqslant\operatorname{vol}(\mathbb{B}^{d}(R)) \sup_{x\in\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,x)\leqslant\operatorname{vol} (\mathbb{B}^{d}(R))\frac{1}{2}(2R)^{2\alpha}\) by Eq. (4), and thus \(\overline{c}^{2}\leqslant\operatorname{vol}(\mathbb{B}^{d}(R))2^{2\alpha-1}\). Note that we must have \(\underline{c}\overline{c}\geqslant 1\). We now prove the equivalence. **Proposition 4** (RKHS norm for \(d\geqslant 1\)): _For \(f\in L_{2}(\mathbb{B}^{d}(R))\), we have:_ \[\frac{1}{\underline{c}\overline{c}\sqrt{2}}\Omega_{d}^{(\alpha)(\mathrm{eq})}( f)\leqslant\Omega_{d}^{(\alpha)}(f)\leqslant 2\sqrt{2}\,\underline{c}\overline{c} \,\Omega_{d}^{(\alpha)(\mathrm{eq})}(f).\] **Proof** For the upper-bound, we consider a function \(g\) attaining the minimization problem defining \(\Omega_{d}^{(\alpha)(\mathrm{eq})}(f)^{2}\) in Eq. (5). From Section 5.1, we can express \(f\) as \(f(x)=\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\lambda(y)+P(x)\), where \(\lambda\) is a Radon measure on \(\mathbb{B}^{d}(R)\), such that \(\int_{\mathbb{B}^{d}(R)}\!\!Q(y)d\lambda(y)=0\) for all polynomials \(Q\) of degree less than \(\alpha\). Moreover, since we have a minimum norm representation, we get, \[\frac{1}{b_{d}^{(\alpha)}}\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d }}\|\omega\|_{2}^{d+1+2\alpha}|\hat{g}(\omega)|^{2}d\omega \geqslant \int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}E_{d}^{(\alpha)} (x-y)d\lambda(y)d\lambda(x)\] \[= \int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)} (x,y)d\lambda(y)d\lambda(x).\] The last quantity is equal to \(\Omega_{d}^{(\alpha)}(f-P)^{2}\) because of the reproducing property of kernels. The polynomial \(P\) can be expressed in the RKHS because of the part \(k_{d}^{(\alpha),(\mathrm{pol})}(x,y)\). Therefore \(f\) is in the RKHS. We thus only need to show that the RKHS norm of \(P\) is less than its \(L_{2}\)-norm on \(\mathbb{B}^{d}(R)\), since then the RKHS norm of \(f\) is less than a constant times \(\Omega_{d}^{(\alpha)(\mathrm{eq})}(f)\). 
The \(L_{2}\)-norm of \(P\) on \(\mathbb{B}^{d}(R)\) is less than the \(L_{2}\)-norm of \(g\) (which is that of \(f\)) plus the \(L_{2}\)-norm of the function \(x\mapsto\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\lambda(y)\), which is less than \(\overline{c}R^{\alpha}\Omega_{d}^{(\alpha)}(f-P)\) by definition of \(\overline{c}\). Thus, since \(\Omega_{d}^{(\alpha)}(P)\leqslant\frac{\underline{c}}{R^{\alpha}}\|P\|_{L_{2}(\mathbb{B}^{d}(R))}\) by definition of \(\underline{c}\), we get: \[\Omega_{d}^{(\alpha)}(f) \leqslant \Omega_{d}^{(\alpha)}(P)+\Omega_{d}^{(\alpha)}(f-P)\leqslant\frac{\underline{c}}{R^{\alpha}}\|P\|_{L_{2}(\mathbb{B}^{d}(R))}+\Omega_{d}^{(\alpha)}(f-P)\] \[\leqslant \frac{\underline{c}}{R^{\alpha}}\Big{(}\|f\|_{L_{2}(\mathbb{B}^{d}(R))}+\overline{c}R^{\alpha}\Omega_{d}^{(\alpha)}(f-P)\Big{)}+\Omega_{d}^{(\alpha)}(f-P)\] \[\leqslant 2\underline{c}\overline{c}\bigg{(}\frac{1}{b_{d}^{(\alpha)}}\frac{1}{(2\pi)^{d}R}\int_{\mathbb{R}^{d}}\|\omega\|_{2}^{d+1+2\alpha}|\hat{g}(\omega)|^{2}d\omega\bigg{)}^{1/2}+\frac{\underline{c}}{R^{\alpha}}\|f\|_{L_{2}(\mathbb{B}^{d}(R))}.\] Thus \(\Omega_{d}^{(\alpha)}(f)^{2}\leqslant 8\underline{c}^{2}\overline{c}^{2}\Omega_{d}^{(\alpha)(\mathrm{eq})}(f)^{2}\). For the lower-bound, given \(f=\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\lambda(y)\) in the RKHS, the extension \(g\) that minimizes the squared norm \(\frac{1}{b_{d}^{(\alpha)}}\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\|\omega\|_{2}^{d+1+2\alpha}|\hat{g}(\omega)|^{2}d\omega\) is the one that can be written \(g=\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\mu(y)\), with \(\mu\) orthogonal to polynomials, and \(\int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\mu(x)d\mu(y)\) minimized. By introducing the measure \(\bar{\lambda}\) obtained by projecting on the orthogonal of all polynomials of degree less than \(\alpha\), we have: \[\int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\mu(x)d\mu(y) \leqslant \int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\bar{\lambda}(x)d\bar{\lambda}(y)\] \[\leqslant \int_{\mathbb{B}^{d}(R)}\int_{\mathbb{B}^{d}(R)}k_{d}^{(\alpha)}(x,y)d\lambda(x)d\lambda(y),\] which is equal to \(\Omega_{d}^{(\alpha)}(f)^{2}\). Thus, we get: \[\Omega_{d}^{(\alpha)(\mathrm{eq})}(f)^{2} \leqslant \frac{1}{b_{d}^{(\alpha)}}\frac{1}{(2\pi)^{d}R}\int_{\mathbb{R}^{d}}\|\omega\|_{2}^{d+1+2\alpha}|\hat{g}(\omega)|^{2}d\omega+\frac{\underline{c}^{2}}{R^{2\alpha}}\|g\|_{L_{2}(\mathbb{B}^{d}(R))}^{2}\] \[\leqslant \Omega_{d}^{(\alpha)}(f)^{2}+\overline{c}^{2}\underline{c}^{2}\Omega_{d}^{(\alpha)}(f)^{2}\leqslant 2\overline{c}^{2}\underline{c}^{2}\Omega_{d}^{(\alpha)}(f)^{2},\] which leads to the desired norm equivalence. ### Two competing random feature expansions for \(\alpha=0\) We can first consider the random feature expansion obtained from neural networks in Prop. 3, but also a classical one based on the Fourier transform [5]. We indeed have, for \(w\) uniform on the sphere \(\mathbb{S}^{d-1}\): \[k_{d}^{(\alpha)}(x,y) = \mathbb{E}_{w}\big{[}k_{1}^{(\alpha)}(w^{\top}x,w^{\top}y)\big{]}\] \[= k_{d}^{(\alpha),(\mathrm{pol})}(x,y)+\frac{1}{R}\mathbb{E}_{w}\big{[}c_{1}^{(\alpha)}|w^{\top}(x-y)|^{2\alpha+1}\big{]}=\frac{1}{2}-\frac{1}{4R}\mathbb{E}_{w}\big{[}|w^{\top}(x-y)|\big{]},\] for \(\alpha=0\).
Since \(|w^{\top}(x-y)|\leqslant 2R\) almost surely (because \(x,y\in[-R,R]\)), for \(\alpha=0\), we can use the one-dimensional Fourier transform of the function: \[\varphi:u\mapsto(\frac{1}{2}-\frac{1}{4R}|u|)1_{|u|\leqslant 2R},\] which is equal to \[\hat{\varphi}(\omega)=\frac{\sin^{2}(R\omega)}{R\omega^{2}}.\] We thus have, for \(\|x-y\|_{2}\leqslant 2R\), \[k_{d}^{(0)}(x,y) = \mathbb{E}_{w}\Big{[}\frac{1}{2\pi R}\int_{-\infty}^{+\infty} \frac{\sin^{2}(R\tau)}{R\tau^{2}}e^{i\tau w^{\top}(x-y)}d\tau\Big{]}\] \[= \frac{2}{\mathrm{vol}(\mathbb{S}^{d-1})}\frac{1}{2\pi}\int_{ \mathbb{S}^{d-1}}\int_{0}^{+\infty}\frac{\sin^{2}(R\tau)}{R\tau^{d+1}}e^{i\tau w ^{\top}(x-y)}\tau^{d-1}d\tau dw\] \[= \frac{2}{\mathrm{vol}(\mathbb{S}^{d-1})}\frac{1}{2\pi R}\int_{ \mathbb{R}^{d}}\frac{\sin^{2}(R\|\omega\|_{2})}{\|\omega\|_{2}^{d+1}}e^{i\omega ^{\top}(x-y)}d\omega\text{ using the change of variable }\omega=\tau w,\] \[= \frac{\Gamma(d/2)}{\pi^{d/2}}\frac{1}{2\pi R}\int_{\mathbb{R}^{d }}\frac{\sin^{2}(R\|\omega\|_{2})}{\|\omega\|_{2}^{d+1}}e^{i\omega^{\top}(x-y) }d\omega\] \[= \frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\frac{\Gamma(d/2)(2\pi) ^{d-1}}{\pi^{d/2}R}\frac{\sin^{2}(R\|\omega\|_{2})}{\|\omega\|_{2}^{d+1}}e^{ i\omega^{\top}(x-y)}d\omega.\] In other words, the Fourier transform of the function \(x\mapsto(\frac{1}{2}-\frac{1}{R}c_{d}^{(0)}\|x\|_{2})\text{1}_{\|x\|_{2} \leqslant 2R}\) is equal to \(\omega\mapsto\frac{\Gamma(d/2)(2\pi)^{d-1}}{\pi^{d/2}R}\frac{\sin^{2}(R\| \omega\|_{2})}{\|\omega\|_{2}^{d+1}}\). This leads naturally to the random feature \(\cos(\omega^{\top}x+b)\) with \(b\) uniform in \([-\pi,\pi]\) and \(\omega\) sampled from the distribution with density \(\frac{2}{(2\pi)^{d}}\frac{\Gamma(d/2)(2\pi)^{d-1}}{\pi^{d/2}R}\frac{\sin^{2}(R \|\omega\|_{2})}{\|\omega\|_{2}^{d+1}}\), which corresponds to \(\omega=\tau w\), with \(w\) uniform on the sphere and \(\tau\) sampled from \(\frac{\sin^{2}(R\tau)}{\pi R\tau^{2}}\), which can be done by sampling from a Cauchy distribution and using rejection sampling. We can also consider \(b\) taking values \(0\) and \(\pi/2\) uniformly, that is, with random features \(\cos(\omega^{\top}x)\) and \(\sin(\omega^{\top}x)\). Empirical comparison for \(d=1\).We compare the two random feature expansions, first visually in Figure 1, then numerically in Figure 2, showing that the random feature expansion based on neural networks has better approximation properties.1 Footnote 1: Matlab code to reproduce all figures is available at [https://www.di.ens.fr/~fbach/neural_splines_online.zip](https://www.di.ens.fr/~fbach/neural_splines_online.zip). Comparison of leverage scores for \(d=1\).We want to compare the two random feature expansions, which are of the form \[k(x,y)=\mathbb{E}_{v}[\varphi(x,v)\varphi(y,v)],\] for a feature \(\varphi(x,v)\in\mathbb{R}\) for \(v\in\mathcal{V}\). As described in [12, Section 4], to assess the capacity of random feature expansions to approximate the initial function space, a key quantity is the "leverage score" \[v\mapsto\langle\varphi(\cdot,v),\left(\Sigma+\lambda I\right)^{-1}\!\varphi( \cdot,v)\rangle_{L_{2}(\mathbb{B}^{d}(R))},\] where \(\Sigma=\mathbb{E}_{v}\!\left[\varphi(\cdot,v)\otimes_{L_{2}(\mathbb{B}^{d}(R) )}\varphi(\cdot,v)\right]\) is an integral operator on \(L_{2}(\mathbb{B}^{d}(R))\). The maximal leverage score over \(v\in\mathcal{V}\) has a direct influence on the number of needed random features to get a \(\lambda\)-approximation in \(L_{2}\)-norm of the RKHS ball of the original RKHS: from [12, Prop. 
1], up to logarithmic terms, the maximal leverage score is proportional to the number \(m\) of necessary random features. In Appendix C, we compute these leverage scores explicitly for \(d=1\), and we see that for the neural network features, the maximal leverage score diverges as \(1/\sqrt{\lambda}\) when \(\lambda\) tends to zero, while for random Fourier features, they diverge faster as \(1/\lambda\), explaining the empirical superiority seen above. Figure 1: Minimum norm interpolation of the green points by the full RKHS (in blue) and the random feature expansion, for \(m=200\). Left: neural network expansion, right: Fourier expansion. Four different draws of the random features are plotted. ### A new random feature expansions for all \(\alpha\) For all \(\alpha\), we provided a new kernel \(k_{d}^{(\alpha)}\) that makes the classical multivariate spline positive-definite, together with a random feature expansion, that can be used for efficient estimation. See [7] for an analysis. Other kernels lead to RKHS norms that are equivalent to the same Sobolev norm, such as the Matern kernels [23, page 84]. These have a natural random Fourier feature expansion (with empirically the same behavior as shown for \(\alpha=0\) above), while ours are based on neural networks, with a better behavior when used within random feature expansions. ## 6 Conclusion In this paper, we provided new random feature expansions for kernels associated with splines, leading to better properties for Sobolev space on the Euclidean balls than existing expansions based on the Fourier transform. As done by [14] with feature expansions based on spherical harmonics, this link could be used to provide explicit approximation bounds for neural networks for a large number of neurons (where input weights are also estimated). Figure 2: Estimation of the minimum interpolation of the full RKHS by random feature expansions for different values of \(m\) (number of random features). We sample \(n=20\) points \(x_{1},\ldots,x_{n}\) uniformly in \([-1,1]\) as well as \(n\) random labels \(y_{1},\ldots,y_{n}\) from a standard Gaussian distribution. We then compare the minimum interpolation fits with the \(L_{2}([-1,1])\)-norm for the full kernel and the random feature approximations. This is averaged over 20 replications for the choice of the input points, and with infinitely many replications for the labels (as the expectation can be taken in closed form): given test points \(x_{1}^{\prime},\ldots,x_{m}^{\prime}\), and the training and testing kernel matrices \(K\) and \(K^{\prime}\), together with their approximations \(\hat{K}\) and \(\hat{K}^{\prime}\), the error is proportional to \(\|K^{\prime}K^{-1}y-\hat{K}^{\prime}\hat{K}^{-1}y\|_{2}^{2}\), and we can thus compute the expectation with respect to \(y\), which is equal to \(\|K^{\prime}K^{-1}-\hat{K}^{\prime}\hat{K}^{-1}\|_{F}^{2}\), which is the quantity we plot above. ### Acknowledgements We thank Nicolas Le Roux and Alessandro Rudi for the interesting discussions related to this work. We acknowledge support from the French government under the management of the Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), as well as from the European Research Council (grant SEQUOIA 724063). 
## Appendix A A few lemmas about uniform distributions on the sphere If \(w\) is uniform on the unit sphere, then: \[\mathbb{E}[|w^{\top}z|^{2\alpha+1}] = \|z\|_{2}^{2\alpha+1}\frac{\Gamma(1+\alpha)\Gamma(\frac{d}{2})}{ \Gamma(\frac{1}{2})\Gamma(\frac{d}{2}+\frac{1}{2}+\alpha)}\] \[\mathbb{E}[(w^{\top}z)^{2\alpha}] = \|z\|_{2}^{2\alpha}\frac{\Gamma(\frac{1}{2}+\alpha)\Gamma(\frac {d}{2})}{\Gamma(\frac{1}{2})\Gamma(\frac{d}{2}+\alpha)}\] \[\mathbb{E}[(w^{\top}z)^{2}] = \|z\|_{2}^{2}/d\] \[\mathbb{E}[z^{\top}ww^{\top}t] = \frac{1}{d}z^{\top}t\] \[\mathbb{E}[(z^{\top}ww^{\top}t)^{2}] = \frac{1}{d(d+2)}\big{[}2(z^{\top}t)^{2}+z^{\top}z\cdot t^{\top}t \big{]}.\] This is obtained from \(w_{1}^{2}\) having a Beta distribution with parameters \((\frac{1}{2},\frac{d-1}{2})\), and using invariance by rotation. ## Appendix B Proof for expressions of RKHS norms, \(d=1\) ### Proof of Proposition 1 We have \(\int_{-R}^{R}(x-b)_{+}^{\alpha}(y-b)_{+}^{\alpha}db=\int_{R}^{\min\{x,y\}}(x- b)^{\alpha}(y-b)^{\alpha}db\), which we can reformulate with \(s=\frac{x+y}{2}\) and \(\delta=\frac{x-y}{2}\), leading to \(x=s+\delta\) and \(y=s-\delta\), with \(\min\{x,y\}=s-|\delta|\). We get: \[\int_{-R}^{R}(x-b)_{+}^{\alpha}(y-b)_{+}^{\alpha}db = \int_{-R}^{s-|\delta|}(s+\delta-b)^{\alpha}(s-\delta-b)^{\alpha} db=\int_{-R}^{s-|\delta|}((s-b)^{2}-\delta^{2})^{\alpha}db\] \[= \int_{-R}^{s-|\delta|}\sum_{i=0}^{\alpha}\binom{\alpha}{i}(s-b)^ {2i}(-1)^{i-\alpha}\delta^{2\alpha-2i}db\] \[= \sum_{i=0}^{\alpha}\binom{\alpha}{i}\frac{1}{2i+1}\Big{[}(R+s)^ {2i+1}-|\delta|^{2i+1}\Big{]}(-1)^{i-\alpha}\delta^{2\alpha-2i}\] \[= \sum_{i=0}^{\alpha}\binom{\alpha}{i}\frac{(-1)^{i-\alpha}}{2i+1} (R+s)^{2i+1}\delta^{2\alpha-2i}-\sum_{i=0}^{\alpha}\binom{\alpha}{i}\frac{(-1 )^{i-\alpha}}{2i+1}\times|\delta|^{2\alpha+1}\] \[= A_{\alpha}(x,y)-B_{\alpha}|x-y|^{2\alpha+1},\] with \[A_{\alpha} = \sum_{i=0}^{\alpha}\binom{\alpha}{i}\frac{(-1)^{i-\alpha}}{2i+1}(R+s )^{2i+1}\delta^{2\alpha-2i}=\int_{-y}^{R}(x+b)^{\alpha}(y+b)^{\alpha}db\] \[B_{\alpha} = \frac{1}{2^{2\alpha+1}}\sum_{i=0}^{\alpha}\binom{\alpha}{i}\frac{ (-1)^{i-\alpha}}{2i+1}=\frac{1}{2^{2\alpha+1}}\int_{0}^{1}\sum_{i=0}^{\alpha} \binom{\alpha}{i}(-1)^{i-\alpha}x^{2i}dx\] \[= \frac{(-1)^{\alpha}}{2^{2\alpha+1}}\int_{0}^{1}(1-x^{2})^{\alpha }dx=\frac{(-1)^{\alpha}}{2^{2\alpha+2}}2^{2\alpha+1}\int_{0}^{1}u^{\alpha}(1-u )^{\alpha}du=\frac{(-1)^{\alpha}}{2}\frac{\Gamma(\alpha+1)^{2}}{\Gamma(2 \alpha+2)},\] using the change of variable \(\frac{1+x}{2}=u\), \(\frac{1-x}{2}=1-u\). This leads to, using symmetries: \[k_{d}^{(\alpha)}(x,y) = \frac{1}{4R}\int_{-y}^{R}(x+b)^{\alpha}(y+b)^{\alpha}db+\frac{1}{ 4R}\int_{y}^{R}(-x+b)^{\alpha}(-y+b)^{\alpha}db\] \[-\frac{(-1)^{\alpha}}{4R}\frac{\Gamma(\alpha+1)^{2}}{\Gamma(2 \alpha+2)}|x-y|^{2\alpha+1}\] \[= \frac{1}{4R}\int_{-y}^{R}(x+b)^{\alpha}(y+b)^{\alpha}db+\frac{1}{ 4R}\int_{-R}^{-y}(-x-b)^{\alpha}(-y-b)^{\alpha}db\] \[-\frac{(-1)^{\alpha}}{4R}\frac{\Gamma(\alpha+1)^{2}}{\Gamma(2 \alpha+2)}|x-y|^{2\alpha+1}\] \[= \frac{1}{4R}\int_{-R}^{R}(x+b)^{\alpha}(y+b)^{\alpha}db-\frac{(- 1)^{\alpha}}{4R}\frac{\Gamma(\alpha+1)^{2}}{\Gamma(2\alpha+2)}|x-y|^{2\alpha+1}.\] We can then expand using the binomial formula. 
### Proof of Proposition 2 If we have the representation, for \(f\) with \(\alpha+1\) continuous derivatives: \[\forall x\in[-R,R],f(x)=\frac{1}{4R}\int_{-R}^{R}\big{[}\eta_{+}(b)(x-b)_{+}^ {\alpha}+\eta_{-}(b)(b-x)_{+}^{\alpha}\big{]}db,\] then by taking the \((\alpha+1)\)-derivative, we must have: \[f^{(\alpha+1)}(x)=\frac{\alpha!}{4R}\eta_{+}(x)+\frac{\alpha!}{4R}(-1)^{\alpha +1}\eta_{-}(x).\] We thus have: \[\eta_{+}(x) = \frac{2R}{\alpha!}f^{(\alpha+1)}(x)+c(x)\] \[\eta_{-}(x) = (-1)^{\alpha+1}\frac{2R}{\alpha!}f^{(\alpha+1)}(x)+(-1)^{\alpha }c(x)\] for a certain function \(c:[-R,R]\rightarrow\mathbb{R}\). We have from Taylor formula with integral remainder: \[f(x) = \sum_{i=0}^{\alpha}\frac{f^{(i)}(-R)}{i!}(x+R)^{i}+\int_{-R}^{R} \frac{f^{(\alpha+1)}(b)}{\alpha!}(x-b)_{+}^{\alpha}db\] \[f(x) = \sum_{i=0}^{\alpha}\frac{(-1)^{i}f^{(i)}(R)}{i!}(R-x)^{i}-(-1)^{ \alpha}\int_{-R}^{R}\frac{f^{(\alpha+1)}(b)}{\alpha!}(b-x)_{-}^{\alpha}db,\ \mbox{and by averaging them},\] \[f(x) = \frac{1}{2}\sum_{i=0}^{\alpha}\Big{[}\frac{f^{(i)}(-R)}{i!}(x+R) ^{i}+\frac{(-1)^{i}f^{(i)}(R)}{i!}(R-x)^{i}\Big{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+\frac{1}{2}\int_{-R}^{R}\frac{f^{(\alpha+1)}(b)}{\alpha!}\big{[}(x-b) _{+}^{\alpha}-(-1)^{\alpha}(b-x)_{+}^{\alpha}\big{]}db.\] Given our expression for \(\eta_{+}\) and \(\eta_{-}\), this implies that for all \(x\in[-R,R]\) \[\frac{1}{2}\sum_{i=0}^{\alpha}\Big{[}\frac{f^{(i)}(-R)}{i!}(x+R)^ {i}+\frac{(-1)^{i}f^{(i)}(R)}{i!}(R-x)^{i}\Big{]} = \frac{1}{4R}\int_{-R}^{R}\big{[}c(b)(x-b)_{+}^{\alpha}+(-1)^{ \alpha}c(b)(b-x)_{+}^{\alpha}\big{]}db\] \[= \frac{1}{4R}\int_{-R}^{R}c(b)(x-b)^{\alpha}db,\] leading to constraints on \(\int_{-R}^{R}c(b)b^{i}\) for \(i\in\{0,\ldots,\alpha\}\). The optimal \(c\) is obtained by minimizing: \[\frac{1}{4R}\int_{-R}^{R}\big{[}\eta_{+}(b)^{2}+\eta_{-}(b)^{2}\big{]}db=\frac {2R}{\alpha!^{2}}\int_{-R}^{R}f^{(\alpha+1)}(b)^{2}db+\frac{1}{2R}\int_{-R}^ {R}c(b)^{2}db.\] Thus \(c\) has to be a polynomial of degree less than \(\alpha\), with coefficients which are linear combinations of \(f^{(i)}(\pm R)\) for \(i\in\{0,\ldots,\alpha\}\). This leads to the desired result. ### Polynomial kernel in one dimension Given a polynomial \(P\) on \(\mathbb{R}\) of degree less than \(\alpha\) (or equal), if we can write it as: \[P(x)=\frac{1}{2R}\int_{-R}^{R}\eta(b)(x-b)^{\alpha}db, \tag{6}\] then its squared RKHS norm (for \(k_{1}^{(\alpha),(\mathrm{pol})}\)) is equal to the infimum of \(\frac{1}{4R}\int_{-R}^{R}\eta(b)^{2}db\). Given the representation in Eq. (6), we have: \[P^{(k)}(0)=\frac{1}{2R}\frac{\alpha!}{(\alpha-k)!}\int_{-R}^{R}\eta(b)(-b)^{ \alpha-k}db=\frac{(-1)^{\alpha-k}}{2R}\frac{\alpha!}{(\alpha-k)!}\int_{-R}^{R }\eta(b)b^{\alpha-k}db,\] which is equal to \(\frac{(-1)^{\alpha-k}}{2R}\frac{\alpha!}{(\alpha-k)!}\langle\eta,Q_{\alpha-k} \rangle_{L_{2}(\mathbb{B}^{d}(R))}\), where \(Q_{j}(b)=b^{j}\). Thus, given that we want to minimize \(\langle\eta,\eta\rangle_{L_{2}(\mathbb{B}^{d}(R))}\), the solution has to be a polynomial \(\eta=\sum_{j=0}^{\alpha}s_{j}Q_{j}\), with \(s\in\mathbb{R}^{\alpha+1}\) minimizing \[\sum_{i,j=0}^{\alpha}s_{i}s_{j}\langle Q_{i},Q_{j}\rangle_{L_{2}(\mathbb{B}^{d }(R))}\] such that \((-1)^{j}2R\frac{j!}{\alpha!}P^{(\alpha-j)}(0)=\sum_{i=0}^{\alpha}s_{i}\langle Q_{ i},Q_{j}\rangle_{L_{2}(\mathbb{B}^{d}(R))}\) for all \(j\in\{0,\ldots,\alpha\}\). 
If \(P=\sum_{j=0}^{\alpha}t_{j}Q_{j}\), we obtain \[(-1)^{j}2R\binom{\alpha}{j}^{-1}t_{\alpha-j}=\sum_{i=0}^{\alpha}s_{i}\langle Q _{i},Q_{j}\rangle_{L_{2}(\mathbb{B}^{d}(R))}\text{. Since the Gram matrix of the monomials is invertible, the optimal }s\text{ is a linear function of the coefficients }t\text{. Thus the norm of }P\text{ is a positive-definite quadratic form in the coefficients. Hence the norm is equivalent to the }L_{2}\text{-norm on the space of polynomials of degree less than }\alpha\text{ (or equal). ### Polynomial kernel in dimension \(d\geqslant 1\) We can apply the same reasoning as in the section above and need to show that we can represent all polynomials of degree less than \(\alpha\) as \[P(x)=\frac{1}{2R}\int_{-R}^{R}\int_{\mathbb{S}^{d}}\eta(w,b)(w^{\top}x+b)^{ \alpha}dwdb,\] for \(\eta(w,b)\) square integrable. By taking all partial derivatives at \(x=0\), this imposes that all \[\int_{-R}^{R}\int_{\mathbb{S}^{d}}\eta(w,b)w_{1}^{u_{1}}\cdots w_{d}^{u_{d}}b ^{v}dwdb\] are fixed, for \(u_{1}+\cdots+u_{d}+v=\alpha\), and since the family of polynomials \((w,b)\mapsto w_{1}^{u_{1}}\cdots w_{d}^{u_{d}}b^{v}\) is linearly independent in \(L_{2}(\mathbb{S}^{d}\times[-1,1])\), the same reasoning above leads to an RKHS norm which is equivalent the \(L_{2}\)-norm on the space of polynomials of degree less than \(\alpha\) (or equal). ## Appendix C Computing leverage scores In this section, we explicitly compute leverage scores for \(d=1\) and \(\alpha=0\) for the two expansions. ### General solution We consider \(d=1\), \(R=1\), and the classical integral operator for the uniform distribution on \([-1,1]\): \[\Sigma f(x)=\frac{1}{2}\int_{-1}^{1}k_{1}^{(0)}(x,y)f(y)dy=\frac{1}{4}\int_{- 1}^{1}f(y)dy-\frac{1}{8}\int_{-1}^{1}|x-y|f(y)dy.\] Given a function \(g\in L_{2}([-1,1])\), we aim to compute the leverage score: \[\frac{1}{2}\int_{-1}^{1}g(x)\big{[}(\Sigma+\lambda I)^{-1}g\big{]}(x)dx.\] We thus compute \(f=(\Sigma+\lambda I)^{-1}g\), which is such that: \[g(x)=\frac{1}{4}\int_{-1}^{1}f(y)dy-\frac{1}{8}\int_{-1}^{1}|x-y|f(x)dy+ \lambda f(x).\] By taking two derivatives and using the fact that the second-order derivative of \(x\mapsto|x-y|\) is \(2\delta_{y}\), we get: \[g^{\prime\prime}(x)=\lambda f^{\prime\prime}(x)-\frac{1}{4}f(x).\] Once we know a solution \(f_{0}\) for the ordinary differential equation above, then all solutions are obtained as \[f(x)=f_{0}(x)+A\cosh\frac{x}{2\sqrt{\lambda}}+B\sinh\frac{x}{2\sqrt{\lambda}}, \tag{7}\] for some \(A,B\in\mathbb{R}\). 
Obtaining a solution in "closed-form".We can solve the ODE in \(f\) using standard techniques [24], by writing \(f(x)=e^{\frac{x}{2\sqrt{\lambda}}}a(x)\), so that \[g^{\prime\prime}(x) = \lambda e^{\frac{x}{2\sqrt{\lambda}}}a^{\prime\prime}(x)+\sqrt{ \lambda}e^{\frac{x}{2\sqrt{\lambda}}}a^{\prime}(x)\] \[e^{-\frac{x}{2\sqrt{\lambda}}}g^{\prime\prime}(x) = \lambda a^{\prime\prime}(x)+\sqrt{\lambda}a^{\prime}(x).\] We then write \(a^{\prime}(x)=e^{-\frac{x}{\sqrt{\lambda}}}c(x)\), so that \[e^{-\frac{x}{2\sqrt{\lambda}}}g^{\prime\prime}(x)=\lambda e^{-\frac{x}{\sqrt{ \lambda}}}c^{\prime}(x),\] and thus \[c^{\prime}(x)=\frac{1}{\lambda}e^{\frac{x}{2\sqrt{\lambda}}}g^{\prime\prime}( x),\] leading to a particular solution by integration: \[c(x)=\frac{1}{2\lambda}\int_{-1}^{1}e^{\frac{y}{2\sqrt{\lambda}}}g^{\prime \prime}(y)\operatorname{sign}(x-y)dy.\] Moreover, we get a particular solution: \[a(x) = \frac{1}{2}\int_{-1}^{1}e^{-\frac{y}{\sqrt{\lambda}}}c(y) \operatorname{sign}(x-y)dy\] \[= \frac{1}{4\lambda}\int_{-1}^{1}e^{-\frac{y}{\sqrt{\lambda}}}\int _{-1}^{1}e^{\frac{t}{2\sqrt{\lambda}}}g^{\prime\prime}(t)\operatorname{sign}( y-t)dt\operatorname{sign}(x-y)dy\] \[f(x) = \frac{1}{4\lambda}\int_{-1}^{1}\int_{-1}^{1}e^{\frac{x}{2\sqrt{ \lambda}}}e^{-\frac{y}{\sqrt{\lambda}}}e^{\frac{t}{2\sqrt{\lambda}}}g^{\prime \prime}(t)\operatorname{sign}(y-t)\operatorname{sign}(x-y)dydt,\] with all solutions obtained by adding \(A\cosh\frac{x}{2\sqrt{\lambda}}+B\sinh\frac{x}{2\sqrt{\lambda}}\). Finding constants \(A\) and \(B\).We have for the unique solution \(f\) of \(g=(\Sigma+\lambda I)f\): \[g(1) = \frac{1}{4}\int_{-1}^{1}f(y)dy-\frac{1}{8}\int_{-1}^{1}(1-y)f(y)dy +\lambda f(1)\] \[g(-1) = \frac{1}{4}\int_{-1}^{1}f(y)dy-\frac{1}{8}\int_{-1}^{1}(1+y)f(y)dy +\lambda f(-1),\text{ leading to}\] \[g(1)+g(-1) = \frac{1}{4}\int_{-1}^{1}f(y)dy+\lambda[f(1)+f(-1)] \tag{8}\] \[g(1)-g(-1) = \frac{1}{4}\int_{-1}^{1}yf(y)dy+\lambda[f(1)-f(-1)]. \tag{9}\] For \(f(x)=\cosh\frac{x}{2\sqrt{\lambda}}\), we have: \(\int_{-1}^{1}f(x)dx=4\sqrt{\lambda}\sinh\frac{1}{2\sqrt{\lambda}}\). For \(f(x)=\sinh\frac{x}{2\sqrt{\lambda}}\), we have: \(\int_{-1}^{1}xf(x)dx=4\sqrt{\lambda}\cosh\frac{1}{2\sqrt{\lambda}}-8 \lambda\sinh\frac{1}{2\sqrt{\lambda}}\). Thus, for our solution in Eq. (7): \[g(1)+g(-1) = \frac{1}{4}\int_{-1}^{1}f_{0}(y)dy+\lambda[f_{0}(1)+f_{0}(-1)]+A \Big{[}\sqrt{\lambda}\sinh\frac{1}{2\sqrt{\lambda}}+2\lambda\cosh\frac{1}{2 \sqrt{\lambda}}\Big{]}\] \[g(1)-g(-1) = \frac{1}{4}\int_{-1}^{1}yf_{0}(y)dy+\lambda[f_{0}(1)-f_{0}(-1)]+B \sqrt{\lambda}\cosh\frac{1}{2\sqrt{\lambda}}.\] Therefore, to obtain \(A\) and \(B\), we simply need to compute \(f_{0}(1)\), \(f_{0}(-1)\) as well as \(\int_{-1}^{1}f_{0}(y)dy\) and \(\int_{-1}^{1}yf_{0}(y)dy\). ### Neural networks We consider \(g(x)=1_{x>b}=(x-b)_{+}^{0}\), we then consider \(f_{0}(x)=1_{x>b}\frac{1}{\lambda}\cosh\frac{x-b}{2\sqrt{\lambda}}\). We have: \[g(x)-\lambda f_{0}(x) = 1_{x>b}\big{[}1-\cosh\frac{x-b}{2\sqrt{\lambda}}\big{]}\] \[g^{\prime}(x)-\lambda f_{0}^{\prime}(x) = -\frac{1}{2\sqrt{\lambda}}1_{x>b}\sinh\frac{x-b}{2\sqrt{\lambda}}\] \[g^{\prime\prime}(x)-\lambda f_{0}^{\prime\prime}(x) = -\frac{1}{4\lambda}1_{x>b}\cosh\frac{x-b}{2\sqrt{\lambda}}=-\frac{ 1}{4}f_{0}(x),\] and thus \(f_{0}\) is a particular solution. 
We have: \[f_{0}(1)+f_{0}(-1) = f_{0}(1)-f_{0}(-1)=f_{0}(1)=\frac{1}{\lambda}\cosh\frac{1-b}{2 \sqrt{\lambda}}\] \[\int_{-1}^{1}f_{0}(x)dx = \int_{b}^{1}\frac{1}{\lambda}\cosh\frac{x-b}{2\sqrt{\lambda}}dx= \frac{2}{\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}\] \[\int_{-1}^{1}xf_{0}(x)dx = \int_{b}^{1}\frac{1}{\lambda}x\cosh\frac{x-b}{2\sqrt{\lambda}}dx= \int_{b}^{1}\frac{1}{\lambda}b\cosh\frac{x-b}{2\sqrt{\lambda}}dx+\int_{b}^{1} \frac{1}{\lambda}(x-b)\cosh\frac{x-b}{2\sqrt{\lambda}}dx\] \[= \frac{2b}{\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}+\frac{2 }{\sqrt{\lambda}}\Big{[}(1-b)\sinh\frac{1-b}{2\sqrt{\lambda}}-2\sqrt{\lambda} \cosh\frac{1-b}{2\sqrt{\lambda}}\Big{]}\] \[= \frac{2}{\sqrt{\lambda}}\Big{[}\sinh\frac{1-b}{2\sqrt{\lambda}}-2 \sqrt{\lambda}\cosh\frac{1-b}{2\sqrt{\lambda}}\Big{]}.\] Thus \[\frac{1}{2} = \frac{1}{2\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}+\cosh \frac{1-b}{2\sqrt{\lambda}}+2\lambda A\Big{[}\frac{1}{2\sqrt{\lambda}}\sinh \frac{1}{2\sqrt{\lambda}}+\cosh\frac{1}{2\sqrt{\lambda}}\Big{]}\] \[\frac{1}{2} = \frac{1}{2\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}+B\sqrt {\lambda}\cosh\frac{1}{2\sqrt{\lambda}},\] which allows to solve for \(A\) and \(B\). Moreover \[\int_{-1}^{1}f(x)g(x)dx = \int_{-1}^{1}g(x)\big{[}f_{0}(x)+A\cosh\frac{x}{2\sqrt{\lambda}}+B \sinh\frac{x}{2\sqrt{\lambda}}\big{]}dx\] \[= \frac{2}{\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}+2A\sqrt{ \lambda}\big{[}\sinh\frac{1}{2\sqrt{\lambda}}-\sinh\frac{b}{2\sqrt{\lambda}} \big{]}+2B\sqrt{\lambda}\big{[}\cosh\frac{1}{2\sqrt{\lambda}}-\cosh\frac{b}{2 \sqrt{\lambda}}\big{]}\] \[= \frac{2}{\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}}+\frac{1} {\sqrt{\lambda}}\frac{\big{[}\frac{1}{2}-\frac{1}{2\sqrt{\lambda}}\sinh\frac{ 1-b}{2\sqrt{\lambda}}-\cosh\frac{1-b}{2\sqrt{\lambda}}\big{]}}{\frac{1}{2 \sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}+\cosh\frac{1}{2\sqrt{\lambda}}} \big{[}\sinh\frac{1}{2\sqrt{\lambda}}-\sinh\frac{b}{2\sqrt{\lambda}}\big{]}\] \[+\frac{1-\frac{1}{\sqrt{\lambda}}\sinh\frac{1-b}{2\sqrt{\lambda}} }{\cosh\frac{1}{2\sqrt{\lambda}}}\big{[}\cosh\frac{1}{2\sqrt{\lambda}}-\cosh \frac{b}{2\sqrt{\lambda}}\big{]},\] which is our desired quantity (multiplied by 2). 
This quantity is maximized at \(b=0\), for which we have the value: \[\frac{2}{\sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}+\frac{1}{ \sqrt{\lambda}}\frac{\big{[}\frac{1}{2}-\frac{1}{2\sqrt{\lambda}}\sinh\frac{1}{ 2\sqrt{\lambda}}-\cosh\frac{1}{2\sqrt{\lambda}}\big{]}}{\frac{1}{2\sqrt{ \lambda}}\sinh\frac{1}{2\sqrt{\lambda}}+\cosh\frac{1}{2\sqrt{\lambda}}}\big{[} \sinh\frac{1}{2\sqrt{\lambda}}\big{]}+\frac{1-\frac{1}{\sqrt{\lambda}}\sinh \frac{1}{2\sqrt{\lambda}}}{\cosh\frac{1}{2\sqrt{\lambda}}}\big{[}\cosh\frac{1 }{2\sqrt{\lambda}}-1\big{]}\] \[= \frac{1}{\sqrt{\lambda}}\frac{\big{[}\frac{1}{2}+\frac{1}{2\sqrt{ \lambda}}\sinh\frac{1}{2\sqrt{\lambda}}-\cosh\frac{1}{2\sqrt{\lambda}}\big{]}} {\frac{1}{2\sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}+\cosh\frac{1}{2\sqrt {\lambda}}}\big{[}\sinh\frac{1}{2\sqrt{\lambda}}\big{]}+\big{[}1-\frac{1}{ \sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}\big{]}\cdot\big{[}1-\frac{1}{ \cosh\frac{1}{2\sqrt{\lambda}}}\big{]}\] \[= \Big{[}1+\frac{1/2}{\frac{1}{2\sqrt{\lambda}}\sinh\frac{1}{2\sqrt {\lambda}}+\cosh\frac{1}{2\sqrt{\lambda}}}\Big{]}\frac{1}{\sqrt{\lambda}} \sinh\frac{1}{2\sqrt{\lambda}}+\big{[}1-\frac{1}{\sqrt{\lambda}}\sinh\frac{1 }{2\sqrt{\lambda}}\big{]}\cdot\big{[}1-\frac{1}{\cosh\frac{1}{2\sqrt{\lambda}} }\big{]}\] \[= \frac{\frac{1}{2\sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}}{ \frac{1}{2\sqrt{\lambda}}\sinh\frac{1}{2\sqrt{\lambda}}+\cosh\frac{1}{2 \sqrt{\lambda}}}+\big{[}1-\frac{1}{\cosh\frac{1}{2\sqrt{\lambda}}}\big{]}+ \frac{1}{\sqrt{\lambda}}\frac{\sinh\frac{1}{2\sqrt{\lambda}}}{\cosh\frac{1}{2 \sqrt{\lambda}}}.\] The maximal leverage score has thus order \(\frac{1}{2\sqrt{\lambda}}\). ### Fourier feature We consider \(g(x)=e^{i\omega x}\) so that we can obtain both \(\cos\omega x\) and \(\sin\omega x\). Then we can take \(f_{0}(x)=\frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}e^{i\omega x}\) as a special solution, since \[\lambda f_{0}^{\prime\prime}(x)-\frac{1}{4}f_{0}(x)=\omega^{2}\frac{-\lambda \omega^{2}-\frac{1}{4}}{\lambda\omega^{2}+\frac{1}{4}}e^{i\omega x}=g^{\prime \prime}(x).\] We get, from Eq. (8) and Eq. 
(9): \[2\cos\omega = \frac{\omega^{2}}{4\lambda\omega^{2}+1}\frac{1}{i\omega}2i\sin \omega+\frac{\lambda\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}2\cos\omega+A \Big{[}\sqrt{\lambda}\sinh\frac{1}{2\sqrt{\lambda}}+2\lambda\cosh\frac{1}{2 \sqrt{\lambda}}\Big{]}\] \[2i\sin\omega = \frac{\omega^{2}}{4\lambda\omega^{2}+1}\Big{[}\frac{1}{\omega^{2} }e^{i\omega x}(1-i\omega x)\Big{]}_{-1}^{1}+\frac{\lambda\omega^{2}}{\lambda \omega^{2}+\frac{1}{4}}2i\sin\omega+B\sqrt{\lambda}\cosh\frac{1}{2\sqrt{ \lambda}}.\] This leads to explicit formulas for the constants \(A\) and \(B\): \[\frac{2\cos\omega-2\omega\sin\omega}{4\lambda\omega^{2}+1} = A\Big{[}\sqrt{\lambda}\sinh\frac{1}{2\sqrt{\lambda}}+2\lambda \cosh\frac{1}{2\sqrt{\lambda}}\Big{]}\] \[\frac{2i\sin\omega}{4\lambda\omega^{2}+1} = \frac{1}{4\lambda\omega^{2}+1}\Big{[}2i\sin\omega-2i\omega\cos \omega\Big{]}+B\sqrt{\lambda}\cosh\frac{1}{2\sqrt{\lambda}},\text{ leading to}\] \[\frac{2i\omega\cos\omega}{4\lambda\omega^{2}+1} = B\sqrt{\lambda}\cosh\frac{1}{2\sqrt{\lambda}}.\] We then get \[A = \frac{2\cos\omega-2\omega\sin\omega}{4\lambda\omega^{2}+1}\frac{1} {\sqrt{\lambda}\sinh\frac{1}{2\sqrt{\lambda}}+2\lambda\cosh\frac{1}{2\sqrt{ \lambda}}}\] \[\frac{B}{i} = \frac{2\omega\cos\omega}{4\lambda\omega^{2}+1}\frac{1}{\sqrt{ \lambda}\cosh\frac{1}{2\sqrt{\lambda}}}.\] Thus, the solution for \(g(x)=\cos\omega x\) is \(f(x)=\frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\cos\omega x+A\cosh \frac{x}{2\sqrt{\lambda}}\), while the solution for \(g(x)=\sin\omega x\) is \(f(x)=\frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\sin\omega x+\frac{B}{i} \sinh\frac{x}{2\sqrt{\lambda}}\). Thus, we can compute for \(g(x)=\cos\omega x\) \[\int_{-1}^{1}f(x)g(x)dx = \int_{-1}^{1}\cos\omega x\Big{[}\frac{\omega^{2}}{\lambda\omega^ {2}+\frac{1}{4}}\cos\omega x+A\cosh\frac{x}{2\sqrt{\lambda}}\Big{]}dx\] \[= \frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\Big{(}1+\frac{1 }{2}\frac{\sin\omega}{\omega}\Big{)}+A\int_{-1}^{1}\cos\omega x\cosh\frac{x}{2 \sqrt{\lambda}}dx\] \[= \frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\Big{(}1+\frac{1 }{2}\frac{\sin\omega}{\omega}\Big{)}+\frac{2A}{\omega^{2}+\frac{1}{4\lambda}} \Big{[}\frac{1}{2\sqrt{\lambda}}\cos\omega\sinh\frac{1}{2\sqrt{\lambda}}+ \omega\sin\omega\cosh\frac{1}{2\sqrt{\lambda}}\Big{]}.\] For \(g(x)=\sin\omega x\), we get: \[\int_{-1}^{1}f(x)g(x)dx = \int_{-1}^{1}\sin\omega x\Big{[}\frac{\omega^{2}}{\lambda\omega^ {2}+\frac{1}{4}}\sin\omega x+\frac{B}{i}\sinh\frac{x}{2\sqrt{\lambda}}\Big{]}dx\] \[= \frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\Big{(}1-\frac{1 }{2}\frac{\sin\omega}{\omega}\Big{)}+\frac{B}{i}\int_{-1}^{1}\sin\omega x\sinh \frac{x}{2\sqrt{\lambda}}dx\] \[= \frac{\omega^{2}}{\lambda\omega^{2}+\frac{1}{4}}\Big{(}1-\frac{1 }{2}\frac{\sin\omega}{\omega}\Big{)}+\frac{2B/i}{\omega^{2}+\frac{1}{4\lambda }}\Big{[}\frac{1}{2\sqrt{\lambda}}\sin\omega\cosh\frac{1}{2\sqrt{\lambda}}- \omega\cos\omega\sinh\frac{1}{2\sqrt{\lambda}}\Big{]}.\] We thus obtain the two leverage scores (divided by 2). We notice that the two leverage scores tend to \(1/(2\lambda)\) for \(\omega\) tending to infinity, which is the largest value for all \(\omega\). ### Empirical comparisons As detailed in [25, Appendix A], we can estimate the leverage scores from a grid in \([-1,1]\) with \(n\) points by computing \(\sum_{i,j=1}^{n}\varphi(x_{i},v)\varphi(x_{j},v)\big{[}(K+n\lambda I)^{-1} \big{]}_{ij}\), and compare with the theoretical expression found above, which match. See Figure 3.
2301.12762
Causality-based CTR Prediction using Graph Neural Networks
As a prevalent problem in online advertising, CTR prediction has attracted plentiful attention from both academia and industry. Recent studies have been reported to establish CTR prediction models in the graph neural networks (GNNs) framework. However, most of GNNs-based models handle feature interactions in a complete graph, while ignoring causal relationships among features, which results in a huge drop in the performance on out-of-distribution data. This paper is dedicated to developing a causality-based CTR prediction model in the GNNs framework (Causal-GNN) integrating representations of feature graph, user graph and ad graph in the context of online advertising. In our model, a structured representation learning method (GraphFwFM) is designed to capture high-order representations on feature graph based on causal discovery among field features in gated graph neural networks (GGNNs), and GraphSAGE is employed to obtain graph representations of users and ads. Experiments conducted on three public datasets demonstrate the superiority of Causal-GNN in AUC and Logloss and the effectiveness of GraphFwFM in capturing high-order representations on causal feature graph.
Panyu Zhai, Yanwu Yang, Chunjie Zhang
2023-01-30T10:16:40Z
http://arxiv.org/abs/2301.12762v1
# Causality-based CTR Prediction using Graph Neural Networks ###### Abstract **Abstract:** As a prevalent problem in online advertising, CTR prediction has attracted plentiful attention from both academia and industry. Recent studies have been reported to establish CTR prediction models in the graph neural networks (GNNs) framework. However, most of GNNs-based models handle feature interactions in a complete graph, while ignoring causal relationships among features, which results in a huge drop in the performance on out-of-distribution data. This paper is dedicated to developing a causality-based CTR prediction model in the GNNs framework (Causal-GNN) integrating representations of feature graph, user graph and ad graph in the context of online advertising. In our model, a structured representation learning method (GraphFwFM) is designed to capture high-order representations on feature graph based on causal discovery among field features in gated graph neural networks (GGNNs), and GraphSAGE is employed to obtain graph representations of users and ads. Experiments conducted on three public datasets demonstrate the superiority of Causal-GNN in AUC and Logloss and the effectiveness of GraphFwFM in capturing high-order representations on causal feature graph. **Keywords:** CTR prediction, graph neural networks, feature interactions, causal inference, online advertising _Zhai, Panyu, Yanwu Yang, and Chunjie Zhang. "Causality-based CTR prediction using graph neural networks." Information Processing & Management 60.1 (2023): 103137. DOI: [https://doi.org/10.1016/j.ipm.2022.103137_](https://doi.org/10.1016/j.ipm.2022.103137_). ## 1 Introduction A variety of advertising forms have emerged with the development of the Internet. Online advertising and recommender systems have become dominant channels to promote products or services for worldwide firms (Yang et al., 2017). In 2020, online advertising amounted to $378.16 billion, in spite of the negative economic impact of the COVID-19 pandemic (Statista, 2022). According to IAB report (IAB, 2022), the online advertising revenue soared 35% to $189 billion in 2021 in U.S. alone, which is the highest growth since 2006. As recognized by both researchers and practitioners, predicting the click-through rate (CTR) is a critical issue in the multi-billion-dollar online advertising industry (McMahan et al., 2013). As a prevalent problem in online advertising, CTR prediction has attracted plentiful research efforts from academia and industry. Existing CTR prediction research mainly focuses on feature interactions based on factorization machines (FMs), deep neural networks (DNNs) and graph neural networks (GNNs) (Yang & Zhai, 2022). Although FMs theoretically support high-order representations, FMs-based models typically use pairwise feature interactions for CTR prediction due to the complexity raised by high-order interactions (Rendle, 2010; Li et al., 2021b). DNNs have been integrated with LR and FMs into a rich set of modeling frameworks to capture sophisticated feature interactions, such as Wide & Deep (Cheng et al., 2016), NFM (He & Chua, 2017), DeepFM (Guo et al., 2017). Although DNN-based models are generally better than FMs attributing to their deeper networks, interactions are performed in implicit manners and thus with low interpretability. Moreover, DNN- and FMs-based models explore feature interactions in Euclidean spaces, which may be unrealistic in most scenarios. 
In contrast, GNNs-based models investigate feature interactions in non-Euclidean spaces, by converting feature interactions to node interactions in a graph structure. In GNNs, features (nodes) aggregate information from their neighbors to update their hidden states (Song et al., 2021). Recently, quite a few studies have been reported to establish CTR prediction models in the GNNs framework, by modeling high-order interactions in feature graph (Li et al., 2019b; Li et al., 2021a; Li et al., 2021b), designing graph intention networks (Li et al., 2019a) and dynamic sequential graph (Chu et al., 2021) to enrich users' behaviors, constructing attribution graph and collaborative graph to address the sparsity issue (Guo et al., 2021). However, most of existing GNNs-based models handle feature interactions by aggregating information from all neighbors equally in a complete graph (Tao et al., 2020). On one hand, these models are vulnerable to over-smoothing due to the complete-graph aggregation. On the other hand, these models fail to consider inner relationships among features and thus suffer from the performance drop on the out-of-distribution data (Wu et al., 2022). In effect, causal inference endows the prediction with better model generalization and performance robustness (Zhang et al., 2022). Moreover, causality-based feature interactions can enhance the interpretability of aggregation functions in GNNs-based models in that causality characterizes inherent relationships among features (Sui et al., 2022). To the best of our knowledge, this is the first research designing causality-based CTR prediction models in the GNNs framework. This paper proposes a causality-based CTR prediction model in the GNNs framework (Causal-GNN). In order to capture the fact that each feature may behave differently when interacting with others, we develop a structured representation learning approach (GraphFwFM) to capture feature representations based on causal discovery among field features in gated graph neural networks (GGNNs). We also construct user graph and ad graph through statistical analysis reflecting the similarity among users and among ads, respectively, and employ GraphSAGE to generate embeddings by sampling and aggregating from the local neighborhood in user graph and ad graph. The multi-head attention mechanism is utilized to fuse graph representations of features, users and ads, and the neural-based attention aware predictor takes attention-weighted representations to predict the clicking probability. We conducted experiments on three public datasets (i.e., Criteo, Avazu and MovieLens-1M) to evaluate the performance of Causal-GNN by comparing with several state-of-the-art baselines. Experimental results illustrate the superiority of Causal-GNN in AUC and Logloss and the effectiveness of GraphFwFM in modeling high-order representations in causal feature graph. The main contribution of this study can be summarized as follows. First, we propose a causality-based CTR prediction model in the GNNs framework integrating multiple representations of graph enabled features. Second, we design a structured representation learning approach (GraphFwFM) to capture causal feature representations in GNNs. Third, experiments are conducted on three public datasets to demonstrate the superiority of the Causal-GNN model and the merit of GraphFwFM in CTR prediction tasks. The remainder of this paper is organized as follows. 
Section 2 gives a brief review on the related work on CTR prediction and discusses the linkage of this research with the extant literature. Section 3 presents the modeling structure and details of Causal-GNN. Experimental evaluations are reported in Section 4, and we conclude this research in Section 5. ## 2 Related work In this section, we first provide a brief review on four major classes of advertising CTR prediction models reported in the literature, then primarily focus on GNNs-based models for CTR prediction. ### The classification of CTR prediction models In the literature on advertising CTR prediction, researchers have primarily explored four classes of modeling frameworks including multivariate statistical models, factorization machines (FMs) based models, deep learning models and tree models. For a comprehensive survey on CTR prediction models in online advertising, refer to see Yang & Zhai (2022). (1) **Multivariate statistical models** include logistic regression (LR) (Richardson et al., 2007) and Poly2 model (Chang et al., 2010), which independently uses an individual feature or simply considers interactions between features (Chang et al., 2010). (2) **Factorization machines (FMs)-based models** (e.g., FMs, FFMs, FwFMs and AFM) effectively capture pairs of feature interactions by using the factorized mechanism (Rendle, 2010; Xiao et al., 2017). Formally, in the FMs-based modeling framework, the inner product is used to capture pairwise feature interactions of latent vectors between features. FMs-based models are efficient in achieving low-order feature interactions and show the recognized performance in CTR prediction tasks (Guo et al., 2017). (3) **Deep learning models** are generally utilized to capture high-order feature interactions for CTR prediction, including standard long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), convolutional neural network (CNN) (Zhang et al., 2022c), factorization machine supported neural network (FNN) (Zhang et al., 2016). In order to capture feature interactions of multiple orders flexibly, a bunch of ensemble models have been reported integrating deep learning models with (either low-order or high-order) explicit components, e.g., Wide & Deep (Cheng et al., 2016), DeepFM (Guo et al., 2018), Deep & Cross network (DCN) (Wang et al., 2017) and xDeepFM (Lian et al., 2018). (4) **Tree models** are developed based on the idea of boosting in ensemble learning, including Gradient boosting decision tree (GBDT) (Friedman, 2001) and XGBoost (Chen & Guestrin, 2016), which have shown considerable successes in CTR prediction tasks (He et al., 2014). Tree model largely suffers from the sparsity problem. To this end, researchers have extensively explored ensemble models combining tree models with various modeling components, e.g., GBDT+gcForest (Qiu et al., 2018), GBDT+DNN (Ke et al., 2019), XGBoost+FwFMs (Shi et al., 2019) and XGBoost+DeepFM (An & Ren, 2020). However, the above CTR prediction models focus on unstructured combinations of features and are largely limited by the prediction capability and/or implicit forms. Moreover, these models handle feature interactions in Euclidean spaces, which may be unrealistic in most scenarios. ### 2.2 GNNs in CTR prediction Graph neural networks (GNNs) are an emerging modeling framework to broaden the feature horizon of CTR prediction in non-Euclidean spaces and support more interpretable models. 
Note that GNNs-based CTR prediction falls into the third class (i.e., deep learning models) as discussed in the previous section. Due to their strength in graph representations, GNNs have been used to alleviate the feature sparsity and behavior sparsity problems in CTR prediction, by converting feature interactions into node interactions in a graph structure. Li et al. (2019b) proposed a feature interaction GNNs (Fi-GNN) where field-aware feature interactions were realized through assigning two interaction matrices to each node in a complete graph; in a later work, Li et al. (2021a) used the pre-trained GNNs to generate explicit semantic-cross features and applied the weighted square loss to compute the importance of these features. Users' potential interests are beneficial to predicting their future behaviors. Graph models are efficient in representing users' rich, diverse and fluid interests (Li et al., 2019; Wang et al., 2022) and latent correlations among various interests (Wang et al., 2021), which can be extracted from their past behaviors and interactions with items (Zhang et al., 2022b) by using hierarchical attention and multi-head attention mechanisms (Zheng et al., 2022), graph-masked transformer (Min et al., 2022), and triangle graph interest network (Jiang et al., 2022). Collaborative information between users and items is very valuable for extracting users' interests and in turn improving CTR prediction. In particular, collaborative graphs constructed based on users' behaviors and items' attributions can enhance feature and behavior embeddings in GNNs frameworks (Guo et al., 2021; Zhang et al., 2022b). Aiming to mine potential users' interests and real intentions from their behaviors, Min et al. (2022) constructed a heterogeneous graph with four types of interactions and used graph-masked transformer to capture highly representative embeddings of users and items, and Jiang et al. (2022) proposed a triangle graph interest network (TGIN) that utilized triangles in the neighborhood in the item co-occurrence graph to extract implicit user interests and aggregated the information of several interest units represented by triangles. Online advertising is an extremely dynamic and complex environment for marketing and promotions (Yang et al., 2022; Li & Yang, 2022). In order to capture users' real-time interest, Li et al. (2019a) designed a Graph Intention Network (GIN) based on a co-occurrence commodity graph and adopt multi-layer graph diffusion to enrich users' behaviors, Chu et al. (2021) applied graph convolutional networks on dynamic sequential graphs of users and items to obtain representations of the target user and the candidate item iteratively, and Wang et al. (2022) proposed a dynamic graph-based disentangled representation framework (DisenCTR) where a disentangled representation learning component was used to extract users' diverse interests from a time-evolving user-item interaction graph. Existing GNNs-based models can capture sophisticated feature interactions and representations of users' behaviors and interests helpful for CTR prediction. However, GNNs are vulnerable to data biases and especially shortcut features (Wu et al., 2022). That is, when the data is out-of-distribution, the performance of GNNs-based models may drop drastically. Moreover, most of existing GNNs-based models handle feature interactions in a complete graph, while ignoring inner relationships among features (e.g., causality). 
Theoretically, causality can improve the generalization of prediction models and boost the model performance on out-of-distribution data (Wu et al., 2022). It is worthwhile to note that causal inference provides an alternative perspective for the interpretability of aggregation functions in the GNNs framework. In addition, the input for existing CTR prediction models is generally restricted to field features and interactions among features. As a matter of fact, there exist various types of graph-enabled features (e.g., relationships between users and between advertisements) valuable for CTR predictions. This research aims to promote the performance of CTR prediction by integrating causal feature representations in the GNNs modeling framework (Causal-GNN). Specifically, we use multiple GNNs to extract useful information from causal feature graph, user graph and ad graph constructed through causal inference and statistical analysis. Moreover, we develop a GGNNs-based representation learning approach (GraphFwFM) to capture high-order representations based on causal discovery among field features. In addition to causal feature graph, user graph and ad graph representation learning components encapsulated in our CTR prediction model are expected to address the sparsity problem. This is the first research using causal inference to facilitate CTR prediction in the GNNs framework integrating multiple graph representations. ## 3 The Model In this section, we start with the modeling structure of our CTR prediction model (Causal-GNN), then turn to details of each component. Table 1 lists the notations used in this paper. \begin{table} \begin{tabular}{l l} \hline Terms & Definition \\ \hline \(f_{k}\) & The \(k\)-th field feature. \\ \(u_{l}\) & The \(i\)-th user. \\ \(a_{j}\) & The \(j\)-th ad. \\ \(S\) & The number of field features. \\ \(N\) & The number of users. \\ \(M\) & The number of ads. \\ \hline \end{tabular} \end{table} Table 1: Notations ### Modeling Structure We propose a CTR prediction model (i.e., Causal-GNN) including five major components, as shown in Figure 1. The first component is the graph construction layer, which builds feature graph, user graph and ad graph through causal learning and statistical analysis. The second component is the graph construction layer, which builds feature graph, user graph, and ad graph through causal learning and statistical analysis. component is the embedding layer, which encodes each field as a binary vector by one-hot encoders, and embeds field features as dense vectors through field-aware embedding and graphs as latent representations through graph embedding. The third component is the GNNs layer, which learns graph representations of features, users and ads by using GraphFwFM and GraphSAGE. The fourth is the attention layer, which processes the output of the GNNs with multi-head attention mechanisms, and the fifth, i.e., the prediction layer, utilizes a neural-based attention method to integrate representations of users, ads and features to conduct CTR prediction. Figure 1: The structure of the proposed CTR prediction model (Causal-GNN) ### 3.2 Graph construction The graph construction layer establishes three graphs namely feature graph, user graph and ad graph through causal learning and statistical analysis on historical advertising logs. Conceptually, a causal graph is a directed graph with nodes denoting features (or variables) and edges denoting the dependencies among these features (Helmert, 2004; Hu et al., 2014; Kocaoglu et al., 2017). 
In our context, we construct a causal feature graph to capture high-order feature interactions for CTR prediction. Whereas there is no causal relationship among users and among ads, we construct user graph and ad graph based on the similarity. #### 3.2.1 Causal learning in feature graph This research constructs a causal graph with field features of each instance through causal learning. In the causal feature graph \(\mathbf{\mathcal{G}_{f}(\mathcal{F},\mathbf{\mathcal{E}_{\mathcal{F}}})}\), each node corresponds to a field \(f_{k}\), i.e., \(\mathbf{\mathcal{F}}=\{f_{1},f_{2},...,f_{S}\}\), and an edge from one node to another is directed and denotes the causal relationship on the path. Causal relationships in \(\mathbf{\mathcal{G}_{f}(\mathcal{F},\mathbf{\mathcal{E}_{\mathcal{F}}})}\) are represented with a weighted adjacency matrix \(\mathbf{W}^{(f)}\in\mathbf{R}^{S\times S}\). In most of existing causality learning frameworks, directed acyclic graphs (DAGs) are formed to represent the causal structure of the feature space (Zheng et al., 2018; Wei et al., 2020). Given an independent and identically distributed (i.i.d.) sample \(\mathbf{X}\) (\(\mathbf{X}\in\mathbf{R}^{S\times q}\), where \(S\) the number of field features and \(q\) are the dimension of a (vector) field, respectively, causal learning aims to recover a DAG structure over features (represented by \(\mathbf{W}^{(f)}\)) from \(\mathbf{X}\). Following prior studies (Yu et al., 2019; Zhang et al., 2019), causal learning generalizes the SEM model in the GNNs framework, which is given as \[f^{-1}\ \ {\mathbf{(X)}}={\mathbf{W}^{(f)}}^{T}f^{-1}{\mathbf{(X)}}+\text{g}(\mathbf{Z}), \qquad{\color[rgb]{0,0,0}(1)}\] where \(\mathbf{Z}\in\mathbf{R}^{S\times q}\) is the noise matrix, g and \(f\) are parameterized functions on \(\mathbf{Z}\) and \(\mathbf{X}\), respectively. Causal learning based on variational Bayes (Kingma & Welling, 2013) approximates the true posterior \(\mathbf{p}_{\theta}(\mathbf{Z}|\mathbf{X})\) with the variational posterior \(\mathbf{q}_{\phi}(\mathbf{Z}|\mathbf{X})\), by minimizing the Kullback-Leibler (KL) divergence of the latter from the former (Blundell et al., 2015), given as \[arg\min_{\theta,\phi}D_{KL}[q_{\phi}(\mathbf{Z}|\mathbf{X})||p_{\theta}(\mathbf{Z}| \mathbf{X})]\] \[=arg\min_{\theta,\phi}\int q_{\phi}(\mathbf{Z}|\mathbf{X})log\frac{q_{\phi} (\mathbf{Z}|\mathbf{X})}{p_{\theta}(\mathbf{Z})p_{\theta}(\mathbf{X}|\mathbf{Z})}\,d\mathbf{Z}\] \[=arg\min_{\theta,\phi}D_{KL}[q_{\phi}(\mathbf{Z}|\mathbf{X})||p_{\theta}( \mathbf{Z})]-E_{q_{\phi}}(\mathbf{Z}|\mathbf{X})[log\,p_{\theta}(\mathbf{X}|\mathbf{Z})], \tag{2}\] where \(D_{KL}\) is the KL-divergence, \(\theta=(M_{X},S_{X})\) and \(\phi=(M_{Z},S_{Z})\) are generative parameters and variational parameters, respectively. The resulting cost function in Equation (2) is known as the expected lower bound, whose negative form is called the variational lower bound or the evidence lower bound (ELBO). Given a distribution of \(\mathbf{Z}\) and a set of samples \(\mathbf{X}^{1},\)\(\mathbf{X}^{2}\),..., \(\mathbf{X}^{n}\), the loss is defined as the mean negative lower bound, which is reformulated as \[L_{ELBO}=-\frac{1}{n}\sum_{k=1}^{n}D_{KL}(q_{\phi}(\mathbf{Z}|\mathbf{X}^{k})||p_{ \theta}(\mathbf{Z}))-E_{q_{\phi}}(\mathbf{Z}|\mathbf{X}^{k})[\log p_{\theta}(\mathbf{X}^{k}| \mathbf{Z})]. 
\tag{3}\] Density functions \(q_{\phi}(\mathbf{Z}|\mathbf{X}^{k})\) and \(p_{\theta}(\mathbf{X}^{k}|\mathbf{Z})\) can be obtained through probabilistic encoder and decoder of Bayesian neural networks (BNNs) (Kingma & Welling, 2013; Yu et al., 2019). Specifically, the encoder instantiates \(f^{-1}\) with BNNs and \(\mathbf{g}\) with an identity mapping to obtain parameters \((\mathbf{M}_{Z}\) and \(\mathbf{S}_{Z})\) of the distribution of the variational posterior \(q_{\phi}(\mathbf{Z}|\mathbf{X})\), and the decoder employs inverse functions of \(f\) and \(\mathbf{g}\) to obtain parameters \((\mathbf{M}_{X}\) and \(\mathbf{S}_{X})\) of the distribution of the true posterior \(p_{\theta}(\mathbf{X}|\mathbf{Z})\). Taking into account the acyclicity constraint, causal learning can be transformed into the following optimization problem through the augmented Lagrangian method with \(\mathcal{L}_{1}\) - regularization (Ng et al., 2019), which is given as \[\big{(}\mathbf{W}^{(f)},\mathbf{\theta}\big{)}=arg\min_{\mathbf{W}^{(f)},\mathbf{ \theta}}(-L_{ELBO}+\lambda||\mathbf{W}^{(f)}||_{1}+\alpha h(\mathbf{W}^{(f)})+\frac{ \rho}{2}|h(\mathbf{W}^{(f)})|^{2}), \tag{4}\] \[s.t. h\big{(}\mathbf{W}^{(f)}\big{)}=tr\left[\big{(}\mathbf{I}+\alpha\mathbf{W}^{(f )}\circ\mathbf{W}^{(f)}\big{)}^{S}\right]-S=0,\qquad(5)\] where \(tr\) is the trace of matrix \(\big{(}\mathbf{I}+\alpha\mathbf{W}^{(f)}\circ\mathbf{W}^{(f)}\big{)}^{S},\)\(\mathbf{\theta}\) is a set of parameters of BNNs in variational autoencoders, \(\alpha\) is the Lagrange multiplier and \(\rho\) is the penalty parameter. The optimization problem (Equations 4 and 5) can be solved through updating \(\alpha\) and increasing \(\rho\) by using stochastical optimization solvers to obtain the adjacency matrix \(\mathbf{W}^{(f)}\) and \(\mathbf{\theta}\). In this research, causal relationships described by \(\mathbf{W}^{(f)}\) is used to feed the causal feature graph representation learning discussed in Section 3.4.1. #### 3.2.2 User graph and ad graph construction User graph is established based on common ads displayed to each pair of users. In the user graph \(\mathbf{G}_{u}(\mathbf{U},\mathbf{\mathcal{E}}_{u}),\ \mathbf{U}=\{u_{1},u_{2},...,u_{N}\}\), each user corresponding to a node (\(u_{i}\)); there exists an edge from \(u_{i^{\prime}}\) to \(u_{i}\) if one or more common ads are displayed to the two users. The weight of the edge from \(u_{i^{\prime}}\) to \(u_{i}\) is calculated as \(w_{i^{\prime}l^{\prime}}(u)=\frac{|S_{i}\cap S_{l}|}{|S_{l}|}\), where \(S_{i^{\prime}},S_{i}\in\mathbf{\mathcal{A}}\) denote the collections of advertisements displayed to \(u_{i^{\prime}}\) and \(u_{i}\), respectively. Ad graph is constructed in a similar way. That is, in the ad graph \(\mathbf{\mathcal{G}_{a}(\mathbf{\mathcal{A}},\mathbf{\mathcal{E}}_{a})},\ \mathbf{\mathcal{A}}=\{a_{1},a_{2},...,a_{M}\}\), each node corresponds to an advertisement (\(a_{j}\)) and the presence of an edge means that the two ads are displayed to one or more common users. The weight of the edge from \(a_{j^{\prime}}\) to \(a_{j}\) is calculated as \(w_{j^{\prime}j}(u)=\frac{|A_{j^{\prime}}\cap A_{j}|}{|A_{j}|}\), where \(A_{j^{\prime}},A_{j}\in\mathbf{\mathcal{U}}\) denote the sets of users to whom \(a_{j^{\prime}}\) and \(a_{j}\) are exposed, respectively. ### Embedding In this Section, we present embedding techniques to transform features, users and advertisements into dense vectors. 
#### 3.3.1 Field feature embedding In CTR prediction, multi-field categorical features are usually transformed into a vector containing 0 and 1 through one-hot encoding. For example, 'ad position=3' can be encoded as '[0, 0, 1,..., 0]', which are high-dimensional sparse vectors. Hence, it is necessary to transform one-hot vectors \(\mathbf{x}_{f_{k}}\) (\(f_{k}\in\mathcal{F}\)) into low-dimensional dense feature vectors through field-aware embedding methods (Ma et al., 2016; Wang et al., 2017; Xie et al., 2019). Formally, each field \(f_{k}\) can be represented as a dense vector as follows. \[\mathbf{\tilde{e}}_{f_{k}}=\mathbf{W}_{emb}\mathbf{x}_{f_{k}}, \tag{6}\] where \(\mathbf{W}_{emb}\in\mathbf{R}^{d\times n_{c}}\) is the embedding matrix, \(n_{c}\) is the number of features, \(d\) is the embedding size, \(\mathbf{\tilde{e}}_{f_{k}}\in\mathbf{R}^{d}\) is the embedding vector of field feature \(f_{k}\). #### Graph embedding This research employs DeepWalk capture relationships in feature graph \(\mathbf{\mathcal{G}_{f}(\mathbf{\mathcal{F}},\mathbf{\mathcal{E}}_{f})}\), user graph \(\mathbf{\mathcal{G}_{u}(\mathbf{\mathcal{U}},\mathbf{\mathcal{E}}_{u})}\) and ad graph \(\mathbf{\mathcal{G}_{a}(\mathbf{\mathcal{A}},\mathbf{\mathcal{E}}_{a})}\), which is a prevalent graph embedding technique. DeepWalk treats random walks in a graph as a sentence to learn the embedding of each node (Perozzi et al., 2014; Goyal & Ferrara, 2018). In a directed graph (e.g., feature graph, user graph and ad graph), a random walk \(\mathcal{W}_{t}\) rooted at \(\mathbf{node}_{i}\) with length \(t\) can be made by the RandWalk algorithm, i.e., \(\mathcal{W}_{i}=RandWalk(G,\mathbf{node}_{t},t)\). Then the embedding of each user can be learnt with the mapping function \(\mathbf{E}\in\mathbf{R}^{N\times d}\) using the SkipGram algorithm (Perozzi et al., 2014) with a specific window size \(w\), which is given as \[\mathbf{e}= SkipGram(\mathbf{E},\mathcal{W}_{t}\,w), \tag{7}\] In Equation (7), \(\mathbf{e}\) is instantiated as \(\mathbf{e}_{f}\in\mathbf{R}^{N\times d_{f}}\), \(\mathbf{e}_{u}\in\mathbf{R}^{N\times d_{u}}\) and \(\mathbf{e}_{a}\in\mathbf{R}^{N\times d_{a}}\) denoting the embedding representations of feature graph \((\mathbf{\mathcal{G}_{f}(\mathbf{\mathcal{F}},\mathbf{\mathcal{E}}_{f})})\), user graph \((\mathbf{\mathcal{G}_{u}(\mathbf{\mathcal{U}},\mathbf{\mathcal{E}}_{u})})\) and ad graph \((\mathbf{\mathcal{G}_{a}(\mathbf{\mathcal{A}},\mathbf{\mathcal{E}}_{a})})\), respectively. \(\mathbf{e}_{f_{k}}\in\mathbf{R}^{d_{f}}\) denotes the \(k\)-th row of \(\mathbf{e}_{f}\), corresponding to \(f_{k}\); \(\mathbf{e}_{u_{l}}\in\mathbf{R}^{d_{u}}\) denotes the \(i\)-th row of \(\mathbf{e}_{u}\), corresponding to \(u_{i}\); and \(\mathbf{e}_{a_{j}}\in\mathbf{R}^{d_{a}}\) denotes the \(j\)-th row of \(\mathbf{e}_{a}\), corresponding to \(a_{j}\). ### Graph representation learning In this section, we present graph representation learning methods for feature graph, user graph and ad graph constructed in Section 3.2, in order to enhance the performance robustness of CTR prediction. Specifically, we propose a novel causality-based feature graph representation learning method to learn graph representations of causal features, then apply GraphSAGE to learn user graph and ad graph representations. #### 3.4.1 Feature graph representation learning We present a causality-based feature representation learning method termed GraphFwFM in gated graph neural networks (GGNNs). 
Simply put, GraphFwFM integrates causal inference in feature graph and the FwFMs feature interaction mechanism. GGNNs are taken as the fundamental GNNs framework because GGNNs overcome a limitation of GNNs (Li et al., 2015), i.e., no guarantee of convergence in a fixed number of steps, by using the gate mechanism in the propagation (Zhou et al., 2020). GGNNs update each node state through its neighbors and its previous time step state information using GRU. Following (Li et al., 2015), GGNNs are represented as \[\begin{cases}\mathbf{H}_{f_{k}}{}^{(l)}=\left(\mathbf{1}-\mathbf{z}_{f_{k}}{}^{(l)} \right)\mathbf{\odot}\mathbf{H}_{f_{k}}{}^{(l-1)}+\mathbf{z}_{f_{k}}{}^{(l)}\mathbf{\odot}\overline {\mathbf{H}_{f_{k}}{}^{(l)}}\\ \overline{\mathbf{H}_{f_{k}}{}^{(l)}}=\tanh\left(\mathbf{W}\mathbf{a}_{f_{k}}{}^{(l)}+\bm {U}\left(\mathbf{r}_{f_{k}}{}^{(l)}\mathbf{\odot}\mathbf{H}_{f_{k}}{}^{(l-1)}\right)\right) \\ \mathbf{z}_{f_{k}}{}^{(l)}=\sigma\left(\mathbf{W}_{x}\mathbf{a}_{f_{k}}{}^{(l)}+\mathbf{U}_{x} \mathbf{H}_{f_{k}}{}^{(l-1)}\right)\\ \mathbf{r}_{f_{k}}{}^{(l)}=\sigma\left(\mathbf{W}_{r}\mathbf{a}_{f_{k}}{}^{(l)}+\mathbf{U}_{r }\mathbf{H}_{f_{k}}{}^{(l-1)}\right)\\ \mathbf{a}_{f_{k}}{}^{(l)}=\mathbf{A}_{f_{k}}^{T}\cdot\left[\mathbf{H}_{f_{1}}{}^{(l-1)} \right]^{T},...,\mathbf{H}_{f_{S}}{}^{(l-1)}T\right]^{T}+\mathbf{b}\\ \mathbf{H}_{f_{k}}{}^{(1)}=[\mathbf{e}_{f_{k}}^{T},\mathbf{0}]^{T}\end{cases}, \tag{8}\] where \(\mathbf{A}_{f_{k}}\) represents the strength of node \(f_{k}\) when it communicates with other nodes; \(\mathbf{H}_{f_{k}}{}^{(l-1)}\) is the hidden state of node \(f_{k}\) (\(k=1,2,...,S\)) in \((l-1)\)-th step; \(\mathbf{b}\) is the bias and \(\mathbf{e}_{f_{k}}\) is the initial state of node \(f_{k}\). \(\mathbf{z}_{f_{k}}{}^{(l)}\) and \(\mathbf{r}_{f_{k}}{}^{(l)}\) are the update gate and the reset gate of GRU, respectively. Each node in GGNNs has two matrices (i.e., \(\mathbf{A}^{in}\) and \(\mathbf{A}^{out}\)) to determine the strength of information flow in and out from/to other nodes. Because each feature may behave differently when interacting with others (Ma et al., 2016; Juan et al., 2016; Juan et al., 2017), it may be insufficient to realize feature interactions in feature graph with the two interaction matrices. To this end, we design a causality-based representation learning approach (GraphFwFM) to overcome this shortcoming of GGNNs, by taking advantage of causal inference in feature graph and the FwFMs feature interaction mechanism. Figure 2 illustrates the mechanism of GraphFwFM. GraphFwFM samples the neighbors of each feature node according to the weights (\(w_{k^{\prime}k}{}^{(f)}\)) of directed edges, then aggregates state information from sampled neighbors based on the principle of feature interactions in FwFMs, and updates the state information by employing the update function in GGNNs. In GraphFwFM, each node has a state vector \(\mathbf{H}_{f_{k}}{}^{(l)}\) which corresponds to field feature \(f_{k}\) in the \(l\)-th step. Then the state of feature graph in the \(l\)-th step can be represented as \(\mathbf{H}^{(l)}=[\mathbf{H}_{f_{1}}{}^{(l)},\mathbf{H}_{f_{2}}{}^{(l)},...,\mathbf{H}_{f_{S}}{ }^{(l)}]\). The initial state of feature graph is fed with the concatenation of field embedding \((\mathbf{\tilde{e}}_{f_{k}})\) and graph embedding \((\mathbf{e}_{f_{k}})\) of features \((\mathbf{e}_{f_{k}}=[\mathbf{\tilde{e}}_{f_{k}},\mathbf{e}_{f_{k}}])\), i.e., \(\mathbf{H}^{(0)}=[\mathbf{e}_{f_{1}},\mathbf{e}_{f_{2}},...,\mathbf{e}_{f_{S}}]\). 
For a given node, its state is updated through the aggregation of its neighborhood nodes' states and its previous state. GGNNs and other types of GNNs are prone to over-smoothing as the number of neural network layers increases (Zhou et al., 2020). As illustrated by previous research (e.g., Little and Badawy, 2019; Guo et al., 2020; Zhang et al., 2022), sampling based on causal relationships among features can increase the performance robustness of prediction models. In this research, we sample a node's neighbors in feature graph based on the causality-based weighted adjacency matrix learnt in Section 3.2.1. Meanwhile, for each node, the associated edges are pruned by a threshold \(\varepsilon\) and the remaining neighbors form its sampled neighborhood. Formally, for node \(f_{k}\) (\(k=1,2,...,S\)),

\[w_{k^{\prime}k}{}^{(f)}=\begin{cases}1,\ w_{k^{\prime}k}{}^{(f)}>\varepsilon\\ 0,\ w_{k^{\prime}k}{}^{(f)}\leq\varepsilon\end{cases},\ k^{\prime}=1,2,...,S, \tag{9}\]

where \(w_{k^{\prime}k}{}^{(f)}\) is the element in the \(k^{\prime}\)-th row and the \(k\)-th column of the adjacency matrix \(\mathbf{W}^{(f)}\), which denotes whether \(f_{k^{\prime}}\) is a sampled neighbor of \(f_{k}\) (\(w_{k^{\prime}k}{}^{(f)}=1\)) or not (\(w_{k^{\prime}k}{}^{(f)}=0\)).

Figure 2: The mechanism of GraphFwFM for feature graph

At each step, \(f_{k}\) interacts with other nodes in feature graph with strengths described by a transformation matrix \(\mathbf{W}_{k}\). The weight of an edge from \(f_{k^{\prime}}\) to \(f_{k}\) characterizes the strength of the directional interaction, i.e., \(r_{k^{\prime}k}\), which explicitly describes the relationship strength from \(f_{k^{\prime}}\) to \(f_{k}\). At the \(l\)-th step, \(f_{k}\) aggregates information of its sampled neighbors in the following way.

\[\mathbf{H}_{\mathcal{N}(f_{k})}{}^{(l)}=\sum_{f_{k^{\prime}}\in\mathcal{N}(f_{k})}r_{k^{\prime}k}\alpha_{k^{\prime}k}w_{k^{\prime}k}{}^{(f)}\mathbf{W}_{k}\mathbf{W}_{k^{\prime}}\mathbf{H}_{f_{k^{\prime}}}{}^{(l-1)}, \tag{10}\]

where \(\mathcal{N}(f_{k})\) is the neighborhood set of \(f_{k}\), and \(f_{k^{\prime}}\) is a neighbor of \(f_{k}\); \(\mathbf{H}_{f_{k^{\prime}}}{}^{(l-1)}\) is the state vector of \(f_{k^{\prime}}\) at the \((l-1)\)-th step. \(\alpha_{k^{\prime}k}\) characterizes the importance of \(f_{k^{\prime}}\) in the aggregation for feature \(f_{k}\). The attention mechanism has a significant effect on the state aggregation (Velickovic et al., 2017; Zhang et al., 2018; Wang et al., 2019). Thus, we design an attention-based aggregator based on \(r_{k^{\prime}k}\), \(\mathbf{W}_{k}\) and \(\mathbf{W}_{k^{\prime}}\), which is given as

\[\alpha_{k^{\prime}k}=\frac{\exp(\text{LeakyReLU}(x_{k^{\prime}k}))}{\sum_{f_{k^{\prime\prime}}\in\mathcal{N}(f_{k})}\exp(\text{LeakyReLU}(x_{k^{\prime\prime}k}))}, \tag{11}\]

\[x_{k^{\prime}k}=r_{k^{\prime}k}\overline{a}^{T}[\mathbf{W}_{k}\mathbf{H}_{f_{k}}{}^{(l-1)}\ \|\ \mathbf{W}_{k^{\prime}}\mathbf{H}_{f_{k^{\prime}}}{}^{(l-1)}], \tag{12}\]

The new state of each feature node (\(f_{k}\)) depends on its neighbors' information \(\mathbf{H}_{\mathcal{N}(f_{k})}{}^{(l)}\) and its previous state \(\mathbf{H}_{f_{k}}{}^{(l-1)}\). GRU is employed as the state update function, in order to improve the long-term propagation across the graph structure and deepen GraphFwFM.
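A minimal sketch of the causality-pruned, attention-weighted aggregation of Equations (9)-(12) could look as follows; the toy dimensions, the LeakyReLU slope of 0.01, and the assumption that every node keeps at least one sampled neighbor are illustrative choices.

```python
import numpy as np

def aggregate(H, Wf, r, W_list, a_vec, k, eps=0.1):
    """Causality-pruned, attention-weighted aggregation for node f_k (Eqs. 9-12).
    H: (S, d) states; Wf: (S, S) causal weighted adjacency; r: (S, S) interaction
    strengths; W_list: per-node (d, d) transforms; a_vec: (2d,) attention vector."""
    S, d = H.shape
    mask = Wf[:, k] > eps                             # Eq. (9): prune weak causal edges
    scores = np.full(S, -np.inf)                      # -inf -> zero attention weight
    for kp in np.flatnonzero(mask):
        x = r[kp, k] * (a_vec @ np.concatenate([W_list[k] @ H[k], W_list[kp] @ H[kp]]))
        scores[kp] = x if x > 0 else 0.01 * x         # LeakyReLU, Eqs. (11)-(12)
    alpha = np.exp(scores - scores.max())             # softmax over sampled neighbors
    alpha /= alpha.sum()
    out = np.zeros(d)
    for kp in np.flatnonzero(mask):                   # Eq. (10): weighted aggregation
        out += r[kp, k] * alpha[kp] * (W_list[k] @ W_list[kp] @ H[kp])
    return out

rng = np.random.default_rng(0)
S, d = 4, 8
H = rng.normal(size=(S, d))
Wf = rng.uniform(size=(S, S))                         # causal weights from Section 3.2.1
r = rng.uniform(size=(S, S))
W_list = [rng.normal(scale=0.1, size=(d, d)) for _ in range(S)]
h_neigh = aggregate(H, Wf, r, W_list, a_vec=rng.normal(size=2 * d), k=0)
```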
The operation \(Add\&Norm\) in transformer has an excellent performance in prediction (Vaswani et al., 2017). In particular, the \(Add\) operation applies a residual connection between each pair of two neural network layers, which combines low-order and high-order features; and the \(Norm\) operation employs the layer normalization function to stabilize each hidden layer (Ba et al., 2016). The state update can be formally expressed as follows.

\[\mathbf{H}_{f_{k}}{}^{(l^{\prime})}=GRU\big{(}\mathbf{H}_{\mathcal{N}(f_{k})}{}^{(l)},\mathbf{H}_{f_{k}}{}^{(l-1)}\big{)}, \tag{13}\]

\[\mathbf{H_{f_{k}}}^{(l)}=LayerNorm(\mathbf{H_{f_{k}}}^{(l^{\prime})}+\mathbf{H_{f_{k}}}^{(l-1)}),\quad l=0,1,\ldots,K_{f}, \tag{14}\]

where \(K_{f}\) is the depth of GraphFwFM for feature graph.

#### 3.4.2 User graph and ad graph representation learning

In user (and ad) graph, when two users (and ads) are connected, their neighborhoods might be similar. In this research, considering the neighborhood similarity of connected users (and ads), we employ GraphSAGE to learn graph representations of users and ads by using relationships between nodes in user graph and ad graph. GraphSAGE is an inductive graph representation learning approach that can generate node representations for previously unseen nodes (Hamilton et al., 2017). In order to generalize to unseen nodes, GraphSAGE learns a function that generates a node's representation by sampling and aggregating its local neighborhood. In user graph, user representations can be learnt through the following neural network

\[\begin{cases}\mathbf{H_{\mathcal{N}(u_{i})}}^{(l)}=Aggregator(\mathbf{H_{u}}^{(l-1)},\forall u\in\mathcal{N}(u_{i}))\\ \mathbf{H_{u_{i}}}^{(l)}=\sigma\left(\mathbf{W}^{(l)}\cdot concat\big{(}\mathbf{H_{u_{i}}}^{(l-1)},\mathbf{H_{\mathcal{N}(u_{i})}}^{(l)}\big{)}\right)\end{cases},\;l=1,2,...,K_{u}, \tag{15}\]

where \(Aggregator\) denotes the aggregator function, \(l\) is the index of a GraphSAGE layer, \(K_{u}\) is the depth of GraphSAGE in user graph, \(\mathcal{N}(u_{i})\) is the sampled neighborhood of node \(u_{i}\) based on the weights of edges computed in Section 3.2.2, and \(\mathbf{H}_{u_{i}}^{(0)}=\mathbf{e}_{u_{i}}\ (i=1,2,...,N)\). Similarly, the graph representation of ad graph can also be obtained by using GraphSAGE, denoted as \(\mathbf{H_{a_{j}}}^{(l)}\ (l=1,2,...,K_{a},\ j=1,2,...,M)\).
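A minimal sketch of one such GraphSAGE layer (Equation 15) is given below; the element-wise max aggregator stands in for the max pooling aggregator (which, in the original formulation, additionally applies a learned transformation before pooling), and all dimensions are illustrative assumptions.

```python
import numpy as np

def graphsage_layer(H, neighborhoods, W):
    """One GraphSAGE layer in the spirit of Eq. (15), with an element-wise
    max aggregator standing in for the max pooling aggregator.
    H: (N, d) node states; neighborhoods: sampled neighbor indices per node; W: (d, 2d)."""
    out = np.zeros_like(H)
    for i, nbrs in enumerate(neighborhoods):
        h_nbr = H[nbrs].max(axis=0) if len(nbrs) else np.zeros(H.shape[1])
        out[i] = np.maximum(W @ np.concatenate([H[i], h_nbr]), 0.0)  # sigma = ReLU here
    return out

rng = np.random.default_rng(0)
N, d = 5, 8
H0 = rng.normal(size=(N, d))                      # H^{(0)} = graph embeddings e_{u_i}
neighborhoods = [[1, 2], [0], [0, 3], [2], [0]]   # sampled by edge weights (Section 3.2.2)
W = rng.normal(scale=0.1, size=(d, 2 * d))
H1 = graphsage_layer(H0, neighborhoods, W)        # user representations H_{u_i}^{(1)}
```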
### 3.5 Attention

This research applies the multi-head self-attention mechanism to discriminate the importance of different neural network layers. The multi-head self-attention mechanism is the core idea of the transformer, which maps a query and a set of key-value pairs with multiple different linear projections and jointly integrates these sub-space representations by a concatenation operation to capture information from various sources (Vaswani et al., 2017; Liang et al., 2022). The multi-head self-attention can be formulated as

\[\boldsymbol{H}_{h}=softmax\left(\frac{\boldsymbol{Q}_{h}\boldsymbol{K}_{h}^{T}}{\sqrt{d_{K}}}\right)\boldsymbol{V}_{h},\quad\boldsymbol{Q}_{h}=\boldsymbol{Q}\boldsymbol{W}_{h}^{(Q)},\ \boldsymbol{K}_{h}=\boldsymbol{K}\boldsymbol{W}_{h}^{(K)},\ \boldsymbol{V}_{h}=\boldsymbol{V}\boldsymbol{W}_{h}^{(V)}, \tag{16}\]

\[\boldsymbol{H}=[\boldsymbol{H}_{1},\boldsymbol{H}_{2},...,\boldsymbol{H}_{n_{h}}]\boldsymbol{W}^{0}, \tag{17}\]

where \(\boldsymbol{W}_{h}^{(Q)}\in\mathbb{R}^{n\times d_{K}}\), \(\boldsymbol{W}_{h}^{(K)}\in\mathbb{R}^{n\times d_{K}}\) and \(\boldsymbol{W}_{h}^{(V)}\in\mathbb{R}^{n\times d_{V}}\) are parameter matrices learnt in the \(h\)-th head, \(h=1,2,...,n_{h}\), \(n_{h}\) is the number of heads, \(d_{K}\) is the dimension of queries and keys, \(d_{V}\) is the dimension of values, \(\boldsymbol{W}^{0}\in\mathbb{R}^{(n_{h}\times d_{V})\times n}\) is a parameter matrix projecting the concatenation of the \(n_{h}\) heads into the output space \(\mathbb{R}^{n}\), and \(\boldsymbol{H}_{h}\) is the output matrix of the \(h\)-th head. In each head, the attention score is normalized by the softmax function (Equation 16). The multi-head self-attention mechanism concatenates results from multiple heads to obtain the final representation (Equation 17). In feature graph representation learning described in Section 3.4.1, the graph state at the \(l\)-th step is obtained from the \(l\)-th layer of GraphFwFM, i.e., \(\boldsymbol{H}^{(l)}=\left[\boldsymbol{H}_{f_{1}}{}^{(l)},\boldsymbol{H}_{f_{2}}{}^{(l)},...,\boldsymbol{H}_{f_{S}}{}^{(l)}\right]\) (\(l=0,1,2,\ldots,K_{f}\)). Conceptually, GraphFwFM with \(K_{f}\) layers produces multiple orders of graph representations. Following prior research (e.g., He et al., 2021; Liu et al., 2021), let \(Q=K=V=\left[\boldsymbol{H}^{(0)},\boldsymbol{H}^{(1)},...,\boldsymbol{H}^{(K_{f})}\right]^{T}\). This research utilizes the multi-head self-attention mechanism to learn the effect of low-order and high-order features simultaneously (Gao et al., 2018), which is given as

\[\left[\boldsymbol{H}_{f_{1}},\boldsymbol{H}_{f_{2}},...,\boldsymbol{H}_{f_{S}}\right]=multihead(\left[\boldsymbol{H}^{(0)},\boldsymbol{H}^{(1)},...,\boldsymbol{H}^{(K_{f})}\right]^{T}), \tag{18}\]

where \(\boldsymbol{H}_{f_{k}}\) is the resulting node representation of \(f_{k}\) in feature graph. In order to accurately characterize the importance of users and ads, the multi-head self-attention mechanism allocates a specific weight to each user and each ad. Specifically, we apply the multi-head self-attention mechanism on the representation of each user (\([\boldsymbol{H}_{u_{i}}{}^{(0)},\boldsymbol{H}_{u_{i}}{}^{(1)},...,\boldsymbol{H}_{u_{i}}{}^{(K_{u})}],\ i=1,2,...,N\)) in user graph and the representation of each ad (\([\boldsymbol{H}_{a_{j}}{}^{(0)},\boldsymbol{H}_{a_{j}}{}^{(1)},...,\boldsymbol{H}_{a_{j}}{}^{(K_{a})}],\ j=1,2,...,M\)) in ad graph, respectively. In a similar way, we can obtain new user representations \(\boldsymbol{H}_{u_{i}}\) (\(i=1,2,...,N\)) and ad representations \(\boldsymbol{H}_{a_{j}}\) (\(j=1,2,...,M\)) through attention mechanisms.
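For concreteness, a minimal NumPy sketch of the multi-head self-attention of Equations (16)-(17) follows; the head count and dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead(Q, K, V, WQ, WK, WV, W0):
    """Multi-head self-attention (Eqs. 16-17). WQ/WK/WV: lists of per-head projections."""
    heads = []
    for Wq, Wk, Wv in zip(WQ, WK, WV):
        Qh, Kh, Vh = Q @ Wq, K @ Wk, V @ Wv
        dK = Kh.shape[-1]
        heads.append(softmax(Qh @ Kh.T / np.sqrt(dK)) @ Vh)   # Eq. (16)
    return np.concatenate(heads, axis=-1) @ W0                 # Eq. (17)

rng = np.random.default_rng(0)
L, n, n_h, dK = 4, 16, 2, 8    # L = K_f + 1 stacked layer states of dimension n
X = rng.normal(size=(L, n))    # Q = K = V = [H^{(0)}, ..., H^{(K_f)}]^T for one node
WQ = [rng.normal(scale=0.1, size=(n, dK)) for _ in range(n_h)]
WK = [rng.normal(scale=0.1, size=(n, dK)) for _ in range(n_h)]
WV = [rng.normal(scale=0.1, size=(n, dK)) for _ in range(n_h)]
W0 = rng.normal(scale=0.1, size=(n_h * dK, n))
H = multihead(X, X, X, WQ, WK, WV, W0)
```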
### 3.6 Prediction

In the prediction layer, this research takes a simple neural-based attention prediction method to determine the contributions of graph representations of features, users and ads in predicting the CTR (Li et al., 2020). Specifically, we utilize a fully connected layer for each graph representation and then perform a weighted sum over these representations, which is given as

\[\begin{split}\mathbf{att}_{u_{i}}&=tanh(\mathbf{W}_{u}\mathbf{H}_{u_{i}}+\mathbf{b}_{u}),\\ \mathbf{att}_{a_{j}}&=tanh(\mathbf{W}_{a}\mathbf{H}_{a_{j}}+\mathbf{b}_{a}),\\ \mathbf{att}_{f_{k}}&=tanh(\mathbf{W}_{f_{k}}\mathbf{H}_{f_{k}}+\mathbf{b}_{f_{k}}),\end{split} \tag{19}\]

\[\mathbf{H}=\mathbf{att}_{u_{i}}\mathbf{\odot}\mathbf{H}_{u_{i}}+\mathbf{att}_{a_{j}}\mathbf{\odot}\mathbf{H}_{a_{j}}+\sum_{k}\mathbf{att}_{f_{k}}\mathbf{\odot}\mathbf{H}_{f_{k}}, \tag{20}\]

where \(\mathbf{W}_{u}\), \(\mathbf{W}_{a}\) are weight matrices for user graph and ad graph, respectively, \(\mathbf{W}_{f_{k}}\) is the weight matrix for the \(k\)-th feature (\(f_{k}\)) in a fully connected neural network, \(\mathbf{b}_{u}\), \(\mathbf{b}_{a}\) and \(\mathbf{b}_{f_{k}}\) are bias vectors for users, ads and feature \(f_{k}\) (\(k=1,2,\ldots,S\)), respectively, \(\mathbf{\odot}\) is the element-wise multiplication, and \(\mathbf{att}_{u_{i}}\), \(\mathbf{att}_{a_{j}}\) and \(\mathbf{att}_{f_{k}}\) are weight vectors for user \(u_{i}\), ad \(a_{j}\) and feature \(f_{k}\), respectively. The vector \(\mathbf{H}\) is taken as the input of a fully connected neural network layer to make the CTR prediction, where the sigmoid function squeezes predicted values to [0,1], given as

\[\hat{\mathcal{Y}}=sigmoid(\mathbf{WH}+b). \tag{21}\]

## 4 Experiments

This section starts with experimental settings, including datasets and evaluation metrics, then turns to model comparison, and finally makes a hyper-parameter analysis and an ablation study of the proposed model.

### 4.1 Datasets

We evaluate the proposed model (Causal-GNN) and baselines on the following three benchmark datasets that have been widely used in the extant literature on CTR prediction1.

Footnote 1: Note that MovieLens-1M is a popular dataset for CTR prediction in recommender systems, which are a similar Web service to online advertising (Yang and Zhai, 2022). This study chooses MovieLens-1M for the following reasons: (1) it provides rich information for evaluating the strength of modeling components in Causal-GNN; (2) it can be used to illustrate the generalization ability of Causal-GNN in a different but related application.

* **Criteo2**: The Criteo display advertising challenge dataset was provided by CriteoLab on Kaggle in 2014. Each record includes 13 numerical fields and 26 categorical fields.
* **Avazu3**: The Avazu dataset is from the Kaggle 2014 CTR prediction competition, containing Avazu data during 11 days, ordered chronologically. Each record includes 23 fields such as ad-id, site-id, etc.
* **MovieLens-1M4**: The MovieLens-1M dataset is collected from the MovieLens Web site, and contains 1,000,209 ratings from 6,040 anonymous users on about 3,900 movies. Each record includes user-id, ad-id, rating, timestamp, gender, age, occupation, Zipcode, title, and genres.

Footnote 2: [https://www.kaggle.com/c/criteo-display-ad-challenge](https://www.kaggle.com/c/criteo-display-ad-challenge)

Footnote 3: [https://www.kaggle.com/c/avazu-ctr-prediction/data](https://www.kaggle.com/c/avazu-ctr-prediction/data)

We randomly sample 2 million instances from each of Criteo and Avazu, and use all instances in MovieLens-1M for experimental evaluation. Each dataset is divided into three parts - training (80%), validation (10%) and test (10%).
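A minimal sketch of such a random shuffle-and-split could look as follows; the fixed seed is an illustrative assumption.

```python
import numpy as np

def split_dataset(X, y, seed=42):
    """Shuffle and split into 80% train / 10% validation / 10% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train, n_val = int(0.8 * len(y)), int(0.1 * len(y))
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```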
### 4.2 Evaluation metrics

This study employs the two most popular metrics, namely AUC-ROC and Logloss, to evaluate the performance of Causal-GNN and baselines in CTR prediction tasks.

* **AUC-ROC** is defined as the area under the receiver operating characteristics (ROC) curve, measuring the probability of positive instances being ranked higher than negative ones. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR). A higher AUC-ROC indicates better prediction performance.
* **Logloss** reflects the average deviation of predicted values from true values. A lower Logloss suggests better prediction capacity. Specifically, the Logloss is formulated as

\[L=-\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}[y_{i}\log\hat{y}_{i}+(1-y_{i})\log(1-\hat{y}_{i})], \tag{22}\]

where \(y_{i}=0\) (or 1) denotes the label, \(\hat{y}_{i}\in[0,1]\) is the predicted CTR, and \(N_{s}\) is the size of the dataset.

### 4.3 Experimental settings

#### 4.3.1 Baselines

This study validates the performance of Causal-GNN by comparing with a class of state-of-the-art models, including low-order models (i.e., LR, FMs, AFMs), high-order models (i.e., NFMs, CIN, DeepFM) and graph models (i.e., Fi-GNN, GAT, GraphFM), as specified below.

* **Logistic regression (LR)** is one of the most widely used baseline methods, which is a linear combination of individual features.
* **Factorization Machines (FMs)** (Rendle, 2010) is a second-order feature interaction model, supporting better prediction on sparse data by using factorized parameters.
* **Attentional factorization machine (AFM)** (Xiao et al., 2017) utilizes the attention-based pooling mechanism to represent contributions of various feature interactions.
* **Neural factorization machine (NFM)** (He & Chua, 2017) is an implicit high-order feature interaction model, which describes second-order feature interactions by using FMs, and captures high-order feature interactions through a DNNs-based component.
* **Compressed Interaction Network (CIN)** (Lian et al., 2018) is an explicit high-order model, which captures feature interactions at the vector-wise level and compresses the intermediate tensor to update the feature map.
* **DeepFM** (Guo et al., 2017) is an end-to-end model, which integrates FMs to capture low-order feature interactions and DNNs to capture high-order feature interactions.
* **Feature interaction graph neural networks (Fi-GNN)** (Li et al., 2019) is a GNNs-based feature interaction model, which realizes field-aware feature interactions in a graph structure.
* **Graph attention networks (GAT)** (Velickovic et al., 2017) utilizes attention mechanisms to assign importance to the neighbors of a focal node, with no requirement of prior knowledge of the entire graph structure.
* **Graph Factorization Machine (GraphFM)** (Li et al., 2021b) integrates the interaction function of FMs into the feature aggregation strategy of GNNs and captures higher-order feature interactions with the increase of stacking layers.
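Before turning to parameter settings, note that both metrics can be computed with off-the-shelf tooling; the sketch below uses scikit-learn on illustrative toy labels and predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

y_true = np.array([1, 0, 0, 1, 0])             # click labels
y_pred = np.array([0.8, 0.3, 0.4, 0.6, 0.1])   # predicted CTRs from Eq. (21)

auc = roc_auc_score(y_true, y_pred)            # AUC-ROC
ll = log_loss(y_true, y_pred)                  # Logloss, Eq. (22)
```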
#### 4.3.2 Parameter settings

In the following experiments, for the purpose of fair comparison, the Logloss (Equation 22) is used as the objective function for all the prediction methods, Adam is taken as the optimizer, and the batch size is set to 512. In order to achieve stable results, we run each experiment three times for every method, with 12 epochs per run, to reduce the influence of random factors. The early stopping strategy is used to interrupt the training when the Logloss on the validation set does not decrease in five successive epochs. Final results are calculated as the average of the three experimental results. In order to find a locally optimal configuration, we use a grid search method to determine the hyper-parameters of the proposed model (Causal-GNN). Specifically, the learning rate is searched over the grid [0.0005, 0.001, 0.0015, 0.002, 0.0025, 0.003], and we obtained different values for the three datasets, i.e., 0.003 on Criteo, 0.002 on Avazu, and 0.0015 on MovieLens-1M. Similarly, the embedding size in feature graph is searched over the grid [8, 16, 32, 64, 128], and we took the value of 128 for the three datasets. The embedding sizes in user graph and ad graph are both set to 64. The number of GNNs layers is set to 3. Moreover, on MovieLens-1M, the aggregator used in GraphSAGE for user graph and ad graph is chosen from four candidates, namely the mean aggregator, LSTM aggregator, mean pooling aggregator and max pooling aggregator. We found that the max pooling aggregator achieves the best performance. For fair comparison, we also set the number of DNNs and GNNs layers of the baselines (i.e., NFM and DeepFM) to 3. Meanwhile, as for other settings of the baselines, we use the parameters suggested in the articles originating these methods to ensure their best performance.
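A minimal sketch of this grid search with early stopping is given below; `train_and_validate` is a hypothetical callable (not part of the paper) assumed to yield the validation Logloss after each epoch.

```python
import itertools

def grid_search(train_and_validate, lrs, emb_sizes, patience=5, max_epochs=12):
    """Pick the (learning rate, embedding size) pair with the lowest validation Logloss."""
    best_cfg, best_loss = None, float("inf")
    for lr, emb in itertools.product(lrs, emb_sizes):
        run_best, bad_epochs = float("inf"), 0
        for loss in itertools.islice(train_and_validate(lr, emb), max_epochs):
            if loss < run_best:
                run_best, bad_epochs = loss, 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:   # early stopping on the validation set
                    break
        if run_best < best_loss:
            best_cfg, best_loss = (lr, emb), run_best
    return best_cfg, best_loss
```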
### 4.4 Performance comparison

In the following, we report the performance comparison among Causal-GNN and the baselines for CTR prediction tasks on the three datasets. Table 2 presents the performance comparison of these models.

\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Criteo} & \multicolumn{2}{c}{Avazu} & \multicolumn{2}{c}{MovieLens-1M} \\ \cline{2-7} & AUC & Logloss & AUC & Logloss & AUC & Logloss \\ \hline LR & 0.7652 & 0.4782 & 0.7626 & 0.3800 & 0.8219 & 0.3426 \\ FMs & 0.7502 & 0.4850 & 0.7699 & 0.3765 & 0.8243 & 0.3405 \\ AFMs & 0.7544 & 0.4820 & 0.7667 & 0.3780 & 0.8238 & 0.3415 \\ NFM & 0.7561 & 0.4807 & 0.7694 & 0.3764 & 0.8339 & 0.3349 \\ CIN & 0.7663 & 0.4753 & 0.7614 & 0.3865 & 0.8280 & 0.3372 \\ DeepFM & 0.7787 & 0.4645 & 0.7666 & 0.3793 & 0.8286 & 0.3359 \\ Fi-GNN & 0.7839 & 0.4609 & 0.7684 & 0.3761 & 0.7840 & 0.3648 \\ GAT & 0.7726 & 0.4698 & 0.7644 & 0.3785 & 0.7271 & 0.3955 \\ GraphFM & 0.7837 & 0.4601 & **0.7741** & 0.3772 & 0.7843 & 0.3642 \\ Causal-GNN & **0.7844** & **0.4599** & 0.7728 & **0.3750** & **0.8342** & **0.3298** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison

From Table 2, we can observe the following results. First, overall our proposed method (Causal-GNN) outperforms the baselines in terms of AUC and Logloss on the three datasets, with the exception that GraphFM outperforms Causal-GNN in AUC on Avazu at the 0.001 level. Specifically, on the three datasets, compared with the baselines except for GraphFM, the CTR prediction performance in AUC is improved by Causal-GNN significantly at the 0.01 level; in the meanwhile, Causal-GNN achieves smaller Logloss at the 0.001 level than the baselines. Second, it is worthwhile to note that MovieLens-1M contains specific information of users and ads, while Criteo and Avazu do not. Thereby, MovieLens-1M supports the full Causal-GNN model validation. Thus, the superiority of Causal-GNN on MovieLens-1M illustrates the effectiveness of the full Causal-GNN model, while its superiority on Criteo and Avazu proves the effectiveness of the GraphFwFM on the causal feature graph. Third, Causal-GNN overwhelmingly outperforms the low-order models (i.e., LR, FMs, AFMs). This indicates that GNNs-based feature representations entitle Causal-GNN to capture more sophisticated features. Fourth, Causal-GNN has comparable performance to high-order models (NFMs, CIN, DeepFM). In this sense, Causal-GNN can retain better interpretability of feature interactions with no sacrifice of performance. Fifth, the superiority of Causal-GNN over graph models (i.e., Fi-GNN, GraphFM and GAT) can be attributed to the effectiveness of causal inference among features and graph representation learning on feature graph, user graph and ad graph. Note that experiments are conducted using the original Fi-GNN and GraphFM codes with the same settings of parameters specified by Li et al. (2019b) and Li et al. (2021b). We can notice that the performance results of Fi-GNN and GraphFM illustrated in Table 2 differ from those reported in Li et al. (2019b) and Li et al. (2021b). Because the codes and experimental settings are exactly the same, we speculate that: (1) the performance difference of Fi-GNN and GraphFM on Criteo and Avazu may result from the fact that Li et al. (2019b) and Li et al. (2021b) used all instances in Criteo and Avazu, while this research used 2 million sampled instances to conduct experiments; (2) the performance difference of Fi-GNN and GraphFM on MovieLens-1M may be attributed to the way Li et al. (2021b) converted the label values and removed some instances in MovieLens-1M. MovieLens-1M (about 1,000,000 instances) has 5 levels of ratings (1-5) as labels, which need to be converted into 2 labels (click or not click). As described in Li et al. (2021b), instances with labels 1-2 and those with labels 4-5 were treated as negative samples (not click) and positive samples (click), respectively, and neutral instances with ratings of 3 (about 200,000 instances) were removed, which most likely leads to prediction biases. However, in order to get valid results, this research retains all instances in MovieLens-1M to conduct experiments. Last but not least, the superiority of Causal-GNN on cross-domain CTR prediction tasks, i.e., display ads (Criteo), mobile ads (Avazu) and movie recommendations (MovieLens-1M), may reveal its good generalization ability.
We will explore this issue in more detail in Section 4.6.3.

### 4.5 Hyper-parameter analysis

In order to get deeper insights into the architecture of the proposed model (Causal-GNN), we study the effects of hyper-parameters, including the learning rate, the embedding size in GraphFwFM and different aggregators in GraphSAGE, on the prediction performance.

#### 4.5.1 Effect of learning rate

The learning rate determines the learning speed of a supervised learning model and influences how and when the objective function converges to a local optimum. In the experiment, we search the learning rate over the grid [0.0005, 0.001, 0.0015, 0.002, 0.0025, 0.003] for Causal-GNN on the three datasets. Figure 3 presents the performance of Causal-GNN with different learning rates.

Figure 3: The performance of Causal-GNN with different learning rates

From Figure 3, we can observe that AUC and Logloss move in opposite directions as the learning rate increases. In other words, the performance of Causal-GNN is consistent under the two evaluation metrics. Moreover, the locally optimal learning rate differs across the three datasets: 0.003 on Criteo, 0.002 on Avazu and 0.0015 on MovieLens-1M. That is, Causal-GNN converges at different speeds on the three datasets, possibly due to differences in data characteristics. The three datasets contain different numbers of fields: Criteo has 39 fields, Avazu has 23 fields and MovieLens-1M has 10 fields. Intuitively, a richer dataset may require a larger learning rate for a given model to achieve local convergence. This may explain the fact that Causal-GNN is assigned the largest learning rate (0.003) on Criteo and the smallest learning rate (0.0015) on MovieLens-1M.

#### 4.5.2 Effect of embedding size

The embedding size is a significant parameter in deep learning frameworks, which substantially influences the model performance and the computational cost. We explore the embedding size with a grid search method for Causal-GNN on the three datasets. Figure 4 presents the performance of Causal-GNN with different embedding sizes in feature graph. The latent representation preserves more information as the embedding size increases. From Figure 4, we can see that the embedding size is a sensitive parameter in feature representations of Causal-GNN. The performance of Causal-GNN becomes better when the embedding size increases, although there is a small-range fluctuation on Avazu. Results show that the embedding size is optimal at 128 on all three datasets. This may indicate that the three datasets require strong representations to fit the data.

#### 4.5.3 Effect of aggregator

The aggregator is an important operation in GraphSAGE. An ideal aggregator should be symmetric and trainable. Hence, choosing a good aggregator can improve the representation capacity of GNNs. Among the three datasets, only MovieLens-1M provides the necessary information to build user graph and ad graph. Therefore, this study explores the effects of four aggregators, namely the mean aggregator, LSTM aggregator, mean pooling aggregator and max pooling aggregator, on the performance of Causal-GNN on MovieLens-1M, as shown in Figure 5. From Figure 5, we can observe that the max pooling aggregator results in the best AUC performance of Causal-GNN, while the mean aggregator and the LSTM aggregator lead to the worst AUC. Note that the mean aggregator and the LSTM aggregator have a small difference in AUC at the 0.00001 level.
Figure 4: The performance of Causal-GNN with different embedding sizes

Figure 5: The performance of Causal-GNN with different aggregators on MovieLens-1M

From the perspective of Logloss, the max pooling aggregator results in the smallest Logloss of Causal-GNN, while the LSTM aggregator has the largest Logloss. Thus, we can conclude that the max pooling aggregator is the best aggregator and the LSTM aggregator is the worst aggregator on MovieLens-1M. A possible explanation is that the LSTM aggregator is not inherently symmetric in that it processes the inputs as a sequence (Hamilton et al., 2017). Additionally, the mean aggregator has an unsatisfactory performance because it is untrainable, whereas a trainable aggregator is indispensable for GNNs-based models.

### 4.6 Ablation study

We conduct ablation studies to evaluate the contribution of the modeling components of Causal-GNN. Specifically, we remove GraphFwFM to validate the effect of our proposed GGNNs-based graph representation learning approach in Section 4.6.1, remove each of the three graph representations to test the effect of graph representations of features, users and ads in Section 4.6.2, and remove the component of causal learning on feature graph to examine the role of causality in feature representations in Section 4.6.3.

#### 4.6.1 Effect of GraphFwFM

We examine how our proposed graph representation learning component (GraphFwFM) on feature graph benefits Causal-GNN. In particular, we remove the GraphFwFM component (i.e., model-GraphFwFM) and compare its performance with the full Causal-GNN model, as illustrated in Table 3.

\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Criteo} & \multicolumn{2}{c}{Avazu} & \multicolumn{2}{c}{MovieLens-1M} \\ & AUC & Logloss & AUC & Logloss & AUC & Logloss \\ \hline Model-GraphFwFM & 0.7722 & 0.4694 & 0.7637 & 0.3814 & 0.7591 & 0.4601 \\ Causal-GNN model & 0.7844 & 0.4599 & 0.7728 & 0.3750 & 0.8342 & 0.3298 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effect of GraphFwFM

From Table 3, we can notice that the model without GraphFwFM performs poorly compared with the full model. This indicates that GraphFwFM is substantially effective in learning graph representations of features.

#### 4.6.2 Effect of graph representations of feature, user and ad

We examine how Causal-GNN benefits from graph representations of features, users and ads, respectively. Specifically, we remove each of the three graph representations and compare these models with the full Causal-GNN model, as shown in Table 4.

\begin{table}
\begin{tabular}{l l l} \hline \hline Model & AUC & Logloss \\ \hline User graph representation & 0.7232 & 0.3960 \\ Ad graph representation & 0.7639 & 0.3756 \\ Feature graph representation & 0.7918 & 0.3584 \\ \hline User+ad graph representations & 0.8273 & 0.3350 \\ User+feature graph representations & 0.8042 & 0.3517 \\ Ad+feature graph representations & 0.8287 & 0.3324 \\ \hline Causal-GNN model & 0.8342 & 0.3298 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of graph representations of feature, user and ad on MovieLens-1M

From Table 4, we can observe that removing one or more graph representations leads to worse performance than the full Causal-GNN model. Moreover, among the three graph representations, feature graph is the most effective in improving the prediction performance. Among combinations of two graph representations, user+ad and ad+feature graph have comparable performance, and user+feature graph leads to the worst performance, which suggests that ad graph is an important input for CTR prediction.

#### 4.6.3 Effect of causal inference in feature representation

In the following we examine the effect of causal inference in feature representation. First, we explore how causal inference among features improves the performance of Causal-GNN. Specifically, we replace the causal feature graph with a complete feature graph (i.e., model-causality) and compare its performance with the full Causal-GNN model, as demonstrated in Table 5.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Criteo} & \multicolumn{2}{c}{Avazu} & \multicolumn{2}{c}{MovieLens-1M} \\ & AUC & Logloss & AUC & Logloss & AUC & Logloss \\ \hline Model-causality & 0.7796 & 0.4650 & 0.7674 & 0.3772 & 0.8285 & 0.3342 \\ Causal-GNN model & 0.7844 & 0.4599 & 0.7728 & 0.3750 & 0.8342 & 0.3298 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Effect of causal inference on the performance

From Table 5, we can see that model-causality performs worse than the full Causal-GNN model. Moreover, it is apparent that the model performance on Criteo is the most affected by causality among the three datasets. A possible reason is that Criteo has more fields and in turn possesses richer causal relationships among features than the other two datasets.

Next, we inspect how causal inference among features enhances the generalization of Causal-GNN. Specifically, we compare the reduced model without causality (model-causality) with the full Causal-GNN model with respect to how the model performance evolves in the training phase and the testing phase as the number of epochs increases, as demonstrated in Figure 6.

Figure 6: The model performance in the training phase and the testing phase

From Figure 6, we can see that the performance difference between the training phase and the testing phase achieved by the full Causal-GNN model is much smaller than that of its reduced form without causality, in terms of both AUC and Logloss on the three datasets. This implies that causal inference among features makes Causal-GNN less prone to over-fitting. In other words, Causal-GNN learns underlying rules and patterns in the data, rather than shallow information. Moreover, compared to the reduced form without causality, the full Causal-GNN model reaches stable performance faster as the number of epochs increases, both in the training phase and in the testing phase.

## 5 Conclusion

In this study, we propose a causality-based CTR prediction model in the GNNs framework (Causal-GNN), integrating multiple representations (feature graph, user graph and ad graph). Moreover, we propose a graph representation learning approach (GraphFwFM) based on GGNNs to capture causal relationships in feature graph. GraphSAGE is employed to obtain graph representations of users and ads. Experiments conducted on three public datasets (Criteo, Avazu and MovieLens-1M) show that Causal-GNN achieves better results than state-of-the-art baselines for CTR prediction tasks and that GraphFwFM is substantially effective in capturing high-order sophisticated representations in the causal feature graph. This research also yields several interesting findings that provide valuable managerial insights for online advertisers and platform providers. First, causal relationships among features are essential information for CTR prediction.
Hence, it is important to explicitly derive causal relationships either manually or statistically and to focus on the roles of causal factors in generating more clicks. We believe that empirical studies on causal relationships among various advertising variables certainly deserve further investigation. Note that this research focuses on the development of causal learning based GNNs models for CTR prediction; meanwhile, the benchmark datasets used in the experiments are for prediction tasks, rather than empirical studies. Second, user graph and ad graph can enhance CTR prediction. This implies that relationships among users and among advertisements complement users' and advertisements' characteristics in determining the CTR. This reminds advertisers to carefully design advertising portfolios and inspect relationships among their advertisements. Moreover, it also implies that whether a specific user will click an advertisement may be influenced by their friends, which is consistent with simulation results provided by Yang et al. (2018). In this sense, both advertisers and online platforms should try all means to improve interactions among users (Yang & Gao, 2021). Third, the comparison experiment illustrates that graph models are generally superior to non-graph ones, which indicates that it is promising to handle the CTR prediction problem in a graph structure.

In future research, we first plan to explore more sophisticated feature interactions in the GNNs framework. That is, we aim to study flexible causality-based interaction rules and diverse aggregation strategies in graph neural architectures in order to achieve better performance for CTR prediction. Second, how to build interpretable GNNs-based CTR prediction models is a meaningful research perspective. In particular, we intend to integrate domain-specific knowledge with causal inference to support explicit representations of high-order feature aggregations and interactions. Then functionally-grounded evaluation metrics of model interpretability can be developed for prediction tasks. Last but not least, it is desirable to build real-time prediction models through designing efficient computing architectures and search strategies.

## Acknowledgements

We are thankful to the editor and anonymous reviewers who provided valuable suggestions that led to a considerable improvement in the organization and presentation of this manuscript. This work is partially supported by the National Natural Science Foundation of China (NSFC) grants (72171093, 62072026) and the Beijing Natural Science Foundation grant (JQ20022).
2303.08157
Graph Neural Network Surrogates of Fair Graph Filtering
Graph filters that transform prior node values to posterior scores via edge propagation often support graph mining tasks affecting humans, such as recommendation and ranking. Thus, it is important to make them fair in terms of satisfying statistical parity constraints between groups of nodes (e.g., distribute score mass between genders proportionally to their representation). To achieve this while minimally perturbing the original posteriors, we introduce a filter-aware universal approximation framework for posterior objectives. This defines appropriate graph neural networks trained at runtime to be similar to filters but also locally optimize a large class of objectives, including fairness-aware ones. Experiments on a collection of 8 filters and 5 graphs show that our approach performs equally well or better than alternatives in meeting parity constraints while preserving the AUC of score-based community member recommendation and creating minimal utility loss in prior diffusion.
Emmanouil Krasanakis, Symeon Papadopoulos
2023-03-14T18:14:40Z
http://arxiv.org/abs/2303.08157v2
# Graph Neural Network Surrogates of Fair Graph Filtering

###### Abstract

Graph filters that transform prior node values to posterior scores via edge propagation often support graph mining tasks affecting humans, such as recommendation and ranking. Thus, it is important to make them fair in terms of satisfying statistical parity constraints between groups of nodes (e.g., distribute score mass between genders proportionally to their representation). To achieve this while minimally perturbing the original posteriors, we introduce a filter-aware universal approximation framework for posterior objectives. This defines appropriate graph neural networks trained at runtime to be similar to filters but also locally optimize a large class of objectives, including fairness-aware ones. Experiments on a collection of 8 filters and 5 graphs show that our approach performs equally well or better than alternatives in meeting parity constraints while preserving the AUC of score-based community member recommendation and creating minimal utility loss in prior diffusion.

graph signal processing, node ranking, algorithmic fairness, disparate impact, graph neural networks

## 1 Introduction

Graph signal processing (Gavili and Zhang, 2017; Ortega et al., 2018; Sandryhaila and Moura, 2013) is a graph analysis discipline that studies node value propagation via edges. In particular, it defines graph signals as collections of prior values spread across graph nodes, such as numeric attribute values or known probabilities of nodes exhibiting a characteristic. Then, graph filters produce posterior node scores by diffusing priors through edges. Large posteriors correspond to notions of structural proximity to nodes with large priors. This scheme facilitates many downstream graph mining tasks, such as unsupervised extraction of tightly knit node clusters (Andersen et al., 2006; Schaeffer, 2007; Kulis et al., 2009; Wu et al., 2012), node recommendation based on structural proximity to a set of query nodes (Tong et al., 2006; Kloster and Gleich, 2014), and certain graph neural network architectures for node or graph classification (Kipf and Welling, 2016; Chen et al., 2020; Klicpera et al., 2018; Dong et al., 2020; Huang et al., 2020). As a running example, starting from a set of a social network's users that have formed a community (social group) based on a shared interest, filters can recommend the community to other users based on the structure of the social network's interaction graph.

Fairness concerns arise when the outputs of artificial intelligence and data mining systems are correlated with protected attribute values, such as gender or ethnicity (Chouldechova, 2017; Kleinberg et al., 2018).1 In these cases, one bias mitigation goal is to consider the organization of data samples into protected and non-protected groups depending on given attribute values (e.g., men and women) and require that both groups produce similar outputs when evaluated by the same measures of choice (Chouldechova, 2017; Krasanakis et al., 2018; Zafar et al., 2019; Ntoutsi et al., 2020). A popular fairness objective is the statistical parity of the fractions of positive predictions between the groups; this is known as disparate impact elimination (Biddle, 2006; Calders and Verwer, 2010; Kamiran and Calders, 2012; Feldman et al., 2015) and is often assessed through a measure called _prule_ (Subsection 2.4).

Footnote 1: We focus on binary protected attributes in that they obtain values either 1 or 0 for each data sample.
Fairness concerns also arise for graph filters, whose posteriors can be affected by prior value or graph structure biases (Subsection 2.4). For example, the average posterior score for men could be higher than the average score for women if nodes with large priors were mostly men and the graph featured few inter-gender interactions. To quantify how fair graph filters are with respect to disparate impact, we employ a generalization of the prule that accounts for node scores (Krasanakis et al., 2020) and aim to maximize it while in large part maintaining the original posteriors. This way, we can retain the predictive capabilities of graph filters within downstream tasks, such as the AUC of recommendations (Subsection 2.2), while addressing disparate impact concerns. To achieve this, we expand on a previous hypothesis (Krasanakis et al., 2020) that graph filter fairness can be induced by editing priors so that resulting posteriors largely preserve the outcome of filtering but also optimize fairness-aware objectives. Here, we introduce a novel mathematical framework to theoretically ground this hypothesis, as well as a drastically improved mechanism for editing posteriors that accounts for our new analysis (Section 3). We also reframe this approach as a type of graph neural network that parses as inputs the original prior values, the original posteriors, and the sensitive attribute, and is used as a surrogate model of ideal posteriors (Section 4). The network's neural parameters are learned on-the-fly depending on the original priors and filter, and architectural hyperparameters are chosen so that they satisfy conditions stipulated by our analysis.

The contribution of this work is threefold:

* We introduce a novel graph neural network framework for adjusting the outcome of graph filtering so that it becomes fairness-aware. This substitutes most filters within existing systems while maintaining the same notions of structural proximity and making sure that groups of nodes are protected.
* We prove that prior editing can control posteriors to reach local optima of many objectives, such as fairness-aware ones. We also show that appropriate L1 regularization can make any twice differentiable objective suitable for prior editing. Universal approximation via filtering is also applicable to objectives other than ours.
* We compare our approach to existing ones on a corpus of 5 multidisciplinary real-world graphs combined with 8 graph filters and variations of 2 downstream tasks (community member recommendation and node value diffusion) to show that the proposed framework better preserves posteriors while mitigating their disparate impact.

## 2 Background and Related Work

In this section we provide the theoretical background necessary to understand our work. First, in Subsection 2.1 we overview graph signal processing concepts used to study a wide range of methods for obtaining node scores given prior information on node values. In Subsection 2.2 we also present two popular graph filtering tasks, namely community member recommendation and node score diffusion. These are employed by many practical applications, and we later use them as the main scenarios for evaluating fairness-aware graph filtering. Additionally, in Subsection 2.4 we discuss algorithmic fairness under the prism of graph signal processing, and overview the limited research done to merge these disciplines. Finally, Subsection 2.3 introduces related graph neural network principles and terminology.
The operations and symbols defined in this work are summarized in Table 1.

### 2.1 Graph Signal Processing

#### 2.1.1 Graph signal propagation

Graph signal processing (Gavili and Zhang, 2017; Ortega et al., 2018; Sandryhaila and Moura, 2013) is a domain that extends traditional signal processing to graph-structured data.

\begin{table}
\begin{tabular}{l|l} **Notation** & **Interpretation** \\ \hline \(\mathcal{I}\) & Identity matrix with appropriate dimensions \\ \(\mathbf{0}\) & Column vector of appropriate rows and zero elements \\ \(r[v]\) & Element corresponding to node \(v\) of graph signal \(r\) \\ \(\mathcal{L}(r)\) & Loss function for graph filter posteriors \(r\) \\ \(\phi\) & The parameter controlling \(\phi\)-fairness definitions \\ \(\theta\) & Parameters (typically neural ones) of prior editing/generation schemes \\ \(\mathcal{M}(\theta)\) & Prior graph signal generation scheme \\ \(\nabla\mathcal{L}(r)\) & Gradient vector of loss \(\mathcal{L}(r)\) with elements \(\nabla\mathcal{L}(r)[v]=\frac{\partial\mathcal{L}(r)}{\partial r[v]}\) \\ \(\mathbb{J}_{\mathcal{M}}(\theta)\) & The Jacobian matrix of multivariate function \(\mathcal{M}(\theta)\) with \(\mathbb{J}_{\mathcal{M}}(\theta)[v]=\nabla_{\theta}(\mathcal{M}(\theta)[v])\) for rows \(v\) \\ \(\mathbb{H}_{\mathcal{L}}(r)\) & The Hessian matrix of a real-valued function \(\mathcal{L}(r)\) obtained per \(\mathbb{H}_{\mathcal{L}}(r)=\nabla^{2}\mathcal{L}(r)\) \\ \(|x|\) & Absolute value for numbers, number of elements for sets \\ \(\|x\|\) & L2 norm of vector \(x\) computed as \(\sqrt{\sum_{v}x[v]^{2}}\) \\ \(\|x\|_{1}\) & L1 norm of vector \(x\) computed as \(\sum_{v}|x[v]|\) \\ \(\|x\|_{\infty}\) & Maximum absolute value of \(x\) computed as \(\max_{v}|x[v]|\) \\ \(\lambda_{1}\) & Smallest eigenvalue of a positive definite matrix \\ \(\lambda_{\max}\) & Largest eigenvalue of a positive definite matrix \\ \(\hat{A}\) & Normalized version of adjacency matrix \(A\) \\ \(F(\hat{A})\) & Graph filter on normalized adjacency matrix \(\hat{A}\) \\ \(A\setminus B\) & Set difference, that is the elements of \(A\) not found in \(B\) \\ \(a^{T}b\) & Dot product of column vectors \(a,b\) as matrix multiplication \\ \(\mathbb{R}^{|\mathcal{V}|}\) & Space of column vectors comprising all graph nodes \\ \(\text{diag}([\lambda_{i}]_{i})\) & A diagonal matrix with \(\text{diag}([\lambda_{i}]_{i})[i,j]=\{\lambda_{i}\text{ if }i=j,0\text{ otherwise}\}\) \\ \(A[u,v]\) & Element of matrix \(A\) at row \(u\) and column \(v\) \\ \(A^{T}\) & Transposition of matrix \(A\) for which \(A^{T}[u,v]=A[v,u]\) \\ \(A^{-1}\) & Inverse of invertible matrix \(A\) \\ \(\{x\,|\,cond(x)\}\) & Elements \(x\) satisfying a condition \(cond\) \\ \(\mathcal{S}\) & Set of nodes with sensitive attribute values \\ \end{tabular}
\end{table}
Table 1: Mathematical notation. Graph-related quantities refer to a common studied graph.

To do this, it starts by defining graph signals \(q:\mathcal{V}\rightarrow\mathbb{R}\) as maps that assign real values \(q[v]\) to graph nodes \(v\in\mathcal{V}\).2 Graph signals can be represented as column vectors \(q^{\prime}\in\mathbb{R}^{|\mathcal{V}|}\) with elements \(q^{\prime}[i]=q[\mathcal{V}[i]]\), where \(|\cdot|\) is the number of set elements and \(\mathcal{V}[i]\) is the \(i\)-th node of the graph after assuming an arbitrary fixed order. For ease of notation, in this work we use graph signals and their vector representations interchangeably by replacing nodes with their ordinality index. In other words, we assume the isomorphism \(\mathcal{V}[i]=i\).
Intuitive interpretations of the information captured by graph signals are presented in Subsection 2.2.

Footnote 2: Signals with multidimensional node values can be expressed as ordered collections of real-valued signals.

A pivotal operation in graph signal processing is the one-hop propagation of node values to their graph neighbors, where incoming values are aggregated on each node. Expressing this operation for unweighted graphs with edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) requires the definition of adjacency matrices \(A\), whose elements correspond to the binary existence of respective edges, i.e., \(A[i,j]=\{1\text{ if }(\mathcal{V}[i],\mathcal{V}[j])\in\mathcal{E},0\text{ otherwise}\}\). A normalization operation is typically employed to transform adjacency matrices into new ones \(\hat{A}\) with the same dimensions, but which model some additional assumptions about the propagation mechanism (see below for details). Then, single-hop propagation of node values stored in graph signals \(q\) to neighbors yields new graph signals \(q_{next}\) with elements \(q_{next}[u]=\sum_{v\in\mathcal{V}}\hat{A}[u,v]q[v]\). For the sake of brevity, this operation is usually expressed using linear algebra as \(q_{next}=\hat{A}q\).

Two popular types of adjacency matrix normalization are column-wise and symmetric. The first views one-hop propagation as a stochastic process (Tong et al., 2006) that is equivalent to randomly walking the graph, where the next node to move to is selected uniformly at random among the current node's neighbors. Formally, this is expressed as \(\hat{A}_{col}=AD^{-1}\), where \(D=\text{diag}\big{(}\big{[}\sum_{v}A[u,v]\big{]}_{u}\big{)}\) is the diagonal matrix of node degrees. Columns of the column-normalized adjacency matrix sum to 1. This way, if graph signal priors model probability distributions over nodes, i.e., their values sum to 1 and are non-negative, posteriors also model probability distributions. On the other hand, symmetric normalization arises from a perspective where the eigenvalues of the normalized adjacency matrix are treated as the graph's spectrum (Chung and Graham, 1997; Spielman, 2012). In this case--and if the graph is unweighted and undirected in that the existence of edges \((u,v)\) also implies the existence of edges \((v,u)\)--a symmetric normalization formula is needed to guarantee that eigenvalues are real numbers. The normalization \(\hat{A}_{symm}=D^{-1/2}AD^{-1/2}\) is predominantly selected, on the merit that it has bounded eigenvalues (see below) and that \(D^{1/2}(I-\hat{A}_{symm})D^{1/2}\) is a Laplacian operator that implements the equivalent of discrete derivation over graph edges.
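A minimal NumPy sketch of the two normalizations could look as follows, assuming a connected unweighted graph so that all degrees are positive.

```python
import numpy as np

def normalize_adjacency(A, mode="symmetric"):
    """Column or symmetric normalization of a binary adjacency matrix."""
    degrees = A.sum(axis=0)                      # node degrees (assumed positive)
    if mode == "column":
        return A / degrees                       # A_col = A D^{-1}
    d_inv_sqrt = 1.0 / np.sqrt(degrees)
    return A * d_inv_sqrt * d_inv_sqrt[:, None]  # A_symm = D^{-1/2} A D^{-1/2}

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A_symm = normalize_adjacency(A)
# eigenvalues of the symmetric normalization lie in [-1, 1]
assert np.all(np.abs(np.linalg.eigvalsh(A_symm)) <= 1 + 1e-9)
```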
Graph signal processing research often targets undirected graphs and symmetric normalization, since these enable computational tractability and closed form convergence bounds of resulting tools.3 In this work, we also favor this practice, because it allows graph signal filters to maintain spectral characteristics needed by our analysis. The key property we take advantage of is that, as long as the graph is connected, the normalized adjacency matrix \(\hat{A}\) is invertible and has the same number of real-valued eigenvalues as the number of graph nodes \(|\mathcal{V}|\). Furthermore, all eigenvalues reside in the range \([-1,1]\). If we annotate these eigenvalues as \(\{\lambda_{i}\in[-1,1]\,|\,i=1,\ldots,|\mathcal{V}|\}\), the symmetric normalized adjacency matrix's Jordan decomposition takes the form:

\[\hat{A}=U^{-1}\text{diag}([\lambda_{i}]_{i})U\]

where \(U\) is an orthonormal matrix whose columns are the corresponding eigenvectors. Therefore, scalar multiplication, power and addition operations on the adjacency matrix transform its eigenvalues the same way. For example, it holds that \(\hat{A}^{n}=U^{-1}\text{diag}([\lambda_{i}^{n}]_{i})U\).

#### 2.1.2 Graph filters

The one-hop propagation of graph signals is a type of shift operator in the multidimensional space induced by the graph, in that it propagates values based on a notion of structural/relational proximity. Based on this observation, graph signal processing uses it analogously to the time shift \(z^{-1}\) operator and defines the notion of graph filters as weighted aggregations of multi-hop propagation. In particular, since \(\hat{A}^{n}q\) expresses the propagation of graph signals \(q\) a number of \(n=0,1,2,\dots\) hops away, a weighted aggregation of these hops means that the outcome of graph filters can be expressed as:

\[\begin{split}& r=F(\hat{A})q\\ & F(\hat{A})=\sum_{n=0}^{\infty}f_{n}\hat{A}^{n}\end{split} \tag{1}\]

where \(F(\hat{A})\) is the filter characterized by real-valued weights \(\{f_{n}\in\mathbb{R}|n=0,1,2,\dots\}\) indicating the importance placed on the propagation of graph signals \(n\) hops away. In this work, we understand the resulting graph signal \(r\) to capture posterior node scores that arise from passing the prior node values of the original signal \(q\) through the filter. Given the weighted aggregation of multi-hop propagations, a generalized understanding of posteriors is in terms of quantifying proximity to nodes with large prior values via many paths of length \(n\), for lengths corresponding to large weights \(f_{n}\). Different interpretations of which real-world properties are captured by proximity give rise to different graph mining tasks, such as the ones presented in Subsection 2.2.

In this work, we focus only on graph filters that are positive definite matrices, i.e., whose eigenvalues are all positive. For symmetric normalized graph adjacency matrices with eigenvalues \(\{\lambda_{i}\in[-1,1]|i=1,\dots,|\mathcal{V}|\}\), it is easy to check whether graph filters defined per Equation 1 are positive definite, as they assume eigenvalues \(\left\{F(\lambda_{i})=\sum_{n=0}^{\infty}f_{n}\lambda_{i}^{n}\,|\,i=1,\dots,|\mathcal{V}|\right\}\) and hence we can check whether the graph filter's corresponding polynomial assumes only positive values:

\[F(\lambda)>0\,\forall\lambda\in[-1,1] \tag{2}\]

For example, filters arising from decreasing importance of propagating more hops away, \(f_{n}>f_{n+1}>0\,\forall n=0,1,\dots\), are positive definite. Two well-known graph filters that are positive definite for symmetric adjacency matrix normalization are personalized pagerank (Andersen et al., 2007; Bahmani et al., 2010) and heat kernels (Kloster and Gleich, 2014). These respectively arise from power degradation of hop weights \(f_{n}=(1-a)a^{n}\) and the exponential kernel \(f_{n}=e^{-t}t^{n}/n!\) for parameters \(a\in[0,1]\) and \(t\in\{1,2,3,\dots\}\). The best parameter values vary, depending on the graph and mining task.
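As an illustration, the sketch below applies a graph filter by truncating the series of Equation (1) to a finite number of hops; the truncation length of 50 terms is an illustrative assumption.

```python
import numpy as np
from math import exp, factorial

def apply_filter(A_hat, q, weights):
    """Posterior r = sum_n f_n A_hat^n q, truncated to len(weights) terms (Eq. 1)."""
    r, power = np.zeros_like(q), q.copy()
    for f_n in weights:
        r += f_n * power       # accumulate f_n * A_hat^n q
        power = A_hat @ power  # move one hop further
    return r

def ppr_weights(a, n_terms=50):     # personalized pagerank: f_n = (1-a) a^n
    return [(1 - a) * a**n for n in range(n_terms)]

def heat_weights(t, n_terms=50):    # heat kernel: f_n = e^{-t} t^n / n!
    return [exp(-t) * t**n / factorial(n) for n in range(n_terms)]

# e.g., r = apply_filter(A_symm, q, ppr_weights(a=0.85)) for a prior signal q
```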
### 2.2 Example Applications of Graph Filtering

#### 2.2.1 Community member recommendations

Nodes of real-world graphs are often organized into communities based either on ground truth structural characteristics (Fortunato and Hric, 2016; Leskovec et al., 2010; Xie et al., 2013; Papadopoulos et al., 2012) or on shared node attribute values (Hric et al., 2014, 2016; Peel et al., 2017). A common task in graph analysis is to score or subsequently rank all nodes based on their relevance to such communities. This is particularly important for large graphs, where community boundaries can be vague (Leskovec et al., 2009; Lancichinetti et al., 2009). Furthermore, node scores can be combined with other characteristics, such as their past values when discovering nodes of emerging importance in time-evolving graphs, in which case they should be of high quality across the whole graph. Many algorithms that discover communities from only a few known members also rely on transforming and thresholding node scores (Andersen et al., 2006; Whang et al., 2016).

In many applications, node scores are computed with graph filters. In particular, prior graph signals are constructed given the knowledge that certain example nodes exhibit a common attribute of interest. This knowledge may be inputted from real-world data gathering or, in the case of unsupervised community detection, by automatic extraction of candidate nodes around which communities should be discovered. In both cases, prior signal elements are assigned binary values depending on whether respective nodes are provided as examples:

\[q[v]=\{1\text{ if }v\text{ is known to have the attribute},0\text{ otherwise}\}\]

In general, some but not all nodes that belong to the same structural community could serve as examples to construct a prior graph signal. All known members should be used, but there may be unknown ones too. Then, graph filters \(F(\hat{A})\) can be deployed to produce posteriors \(r=F(\hat{A})q\) whose elements \(r[v]\) indicate the structural proximity of nodes \(v\) to known members. Finally, nodes with large posteriors are good candidates to be recommended as community members, to be granularly identified as exhibiting the shared metadata attribute of the examples, or, depending on the application, to receive recommendations about the studied community. The success of such actions depends on the choice of filter (Krasanakis et al., 2022), which can be determined either with an informed identification of dominant structural characteristics leading to the creation of the community, or with supervised experimentation on domain graphs.

The measure that typically quantifies the quality of community member recommendations arising from node scores is the area under the curve of receiver operating characteristics (AUC) (Hanley and McNeil, 1982), which compares operating characteristic trade-offs at different decision thresholds. If we pack ground truth node scores in a graph signal \(q_{test}[v]=\{1\text{ if node }v\text{ is a community member},0\text{ otherwise}\}\), so as to evaluate a posterior score signal \(r[v]\), the True Positive Rate (TPR) and False Positive Rate (FPR) operating characteristics for decision thresholds \(\theta\) can be respectively defined as:

\[TPR(\theta)=P(r[v]\geq\theta\,|\,q_{test}[v]=1)\]
\[FPR(\theta)=P(r[v]\geq\theta\,|\,q_{test}[v]=0)\]

where \(P(a|b)\) denotes the probability of event \(a\) conditioned on \(b\).
AUC is defined as the cumulative effect induced on the TPR when new decision thresholds are used to change the FPR and is quantified per the following formula: \[AUC=\int_{-\infty}^{\infty}TPR(\theta)FPR^{\prime}(\theta)\,d\theta \tag{3}\] AUC values closer to 100% indicate that community members exhibit higher scores compared to non-community members, whereas 50% AUC corresponds to random node scores.

#### 2.2.2 Graph diffusion

Graph diffusion refers to procedures that smooth prior node values throughout graph structures. This task directly translates to a graph signal processing pipeline, where node values are organized into prior graph signals and graph filters play the role of the diffusion mechanism. The notion of structural proximity is understood as an indication of prior-posterior correlations between nodes. Based on this understanding, diffusion has been used for inference of continuous-valued node attributes when nodes exhibit missing or noisy information (Castillo et al., 2007; Gadde et al., 2013; Anis et al., 2016). These efforts have culminated in integrating diffusion mechanisms in graph neural networks that smooth the prediction logits of multilayer perceptrons through the graph structure (Subsection 2.3).

Compared to community member recommendation, which typically handles binary priors indicating community membership--and therefore may start from only a few non-zero prior values--diffusion parses continuous (sometimes even negative) values that could be non-zero for most graph nodes. One could be tempted to think of diffusion as the more general setting, but in practice the two domains often remain distinct due to addressing different application needs that present different engineering and theoretical challenges. For example, when looked at under the prism of graph filters, diffusion can perfectly reconstruct appropriately downsampled graph signals (Anis et al., 2016), but community analysis frequently deals with extremely sparse data that produce large posteriors in (and can hence be restricted to run on) only certain graph segments (Lofgren et al., 2016).

There are many branching applications of diffusion where its efficacy is entangled with other application domain intricacies, such as system hyperparameters. Thus, it has been implicitly acknowledged (Tsioutsiouliklis et al., 2020, 2021) that exploratory variations involving fairness-aware objectives should not be evaluated on a case-by-case basis but in a vacuum, where regression error measures capture the distance between the original (biased) posteriors and debiased ones. In this work, we follow the same principle and leave for future work the application of fair diffusion to specific downstream tasks. We thus quantify the impact of inserting fairness postulates into diffusion mechanisms by measuring the error between graph filter posteriors \(r_{original}\) and the fairness-aware ones \(r_{fair}\) generated by ours or other approaches. In the literature, the absolute error commonly serves this purpose, but we point out that it largely ignores the impact on lower-scored posteriors, such as sensitive group members.
To prevent the utility loss itself from suffering from disparate impact, and given that we later experiment in settings that yield positive posteriors, in this work we work with the average relative error as the utility loss: \[\mathcal{L}_{util}=\tfrac{1}{|\mathcal{V}|}\big{\|}1-\tfrac{r_{fair}}{r_{ original}}\big{\|}_{1} \tag{4}\] Utility losses should ideally be small, but for biased original posteriors there inevitably exist trade-offs between zero losses and deviations from trying to make new posteriors fair.

### Graph Neural Networks

Graph neural networks generalize base graph diffusion principles, which are applied to graph signals with numeric node values, to multidimensional node feature values. Given node feature matrices \(H^{(0)}\) whose columns can be considered graph signals of respective features (their rows correspond to node feature values), graph neural networks are broadly derived by adjusting the following equation (Kipf and Welling, 2016): \[H^{(\ell)}=\sigma\big{(}\hat{A}H^{(\ell-1)}W^{(\ell)}+b^{(\ell)}\big{)}\] where \(\hat{A}\) is the symmetrically normalized adjacency matrix (often edited with the renormalization trick of adding self-loops in the graph) and \(W^{(\ell)}\) and \(b^{(\ell)}\) define a dense affine layer transformation of the graph shift \(\hat{A}H^{(\ell-1)}\). The output of each transformed graph shift passes through a non-linear transformation \(\sigma\), which at intermediate layers is typically chosen as the rectified linear activation \(\text{relu}(x)=\max\{0,x\}\) applied element-wise. When graph neural networks are used for node classification, their output is activated through a row-wise softmax to approximate one-hot encoding of predicted classes. In this work, our output activation and objective are rather different.

Recent research has shown that the above type of architecture suffers from oversmoothing and should thus be limited to a couple of layers, which fail to account for node information many hops away. To address this issue, recursive schemes have been proposed to partially maintain early representations on later layers. Of these, our work resembles the predict-then-propagate architecture (Klicpera et al., 2018), which decouples the neural and graph diffusion components by first enlisting multilayer perceptrons to train representations \(H^{(L)}\) of node features end-to-end, and then applying graph filters on each representation dimension to arrive at final predictions: \[H^{(\ell)}=\text{relu}(H^{(\ell-1)}W^{(\ell)}+b^{(\ell)})\quad \ell=1,2,\dots,L\] \[\hat{Y}_{logits}=F(\hat{A})H^{(L)}\] \[\hat{Y}=\text{softmax}(\hat{Y}_{logits})\] Originally, this approach used personalized pagerank as the filter of choice \(F(\hat{A})\). Our analysis arrives at a similar scheme of producing predictive logits \(\hat{Y}_{logits}\), but finds that the selection of graph filter should depend on the desired objective in that it should yield qualitative priors to be adjusted. The applied setting also differs from the typical graph neural network formulation in that we extract node features \(H^{(0)}\) from the graph filtering process and directly use the logits as surrogate estimators of ideal posteriors optimizing fairness-aware objectives. Furthermore, our approach is designed only for objectives that evaluate one-dimensional node ranking.
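Returning to evaluation, the utility loss of Equation 4 admits a one-line transcription. In the minimal sketch below, the small constant guarding against zero original posteriors is our own assumption:

```python
import numpy as np

def utility_loss(r_fair, r_original, eps=1e-12):
    """Average relative error between fairness-aware and original
    posteriors (Equation 4); eps guards against zero scores."""
    return np.mean(np.abs(1 - r_fair / (r_original + eps)))
```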
Prospective extension to scopes like node classification is outlined in Appendix A but requires non-trivial theoretical groundwork to extend our analysis, as well as experimental validation that is left to future work.

### Fairness of graph filter posteriors

In this subsection we introduce the well-known concept of disparate impact in the domain of algorithmic fairness, which we aim to mitigate. We also explore systemic graph biases and overview previous works that inject fairness in graph filter posteriors.

#### 2.4.1 Disparate impact elimination

When not focused on equal treatment between specific individuals, algorithmic fairness is broadly understood as parity between sensitive and non-sensitive samples over a chosen statistical property. Three popular fairness-aware objectives (Chouldechova, 2017; Krasanakis et al., 2018; Zafar et al., 2019; Ntoutsi et al., 2020) are disparate treatment elimination, disparate impact elimination, and disparate mistreatment elimination. These correspond to not using the protected attribute in predictions, preserving statistical parity between the fractions of positive labels among sensitive and non-sensitive samples, and achieving identical predictive performance on the two groups under a measure of choice. Here, we focus on mitigating disparate impact unfairness (Chouldechova, 2017; Biddle, 2006; Calders and Verwer, 2010; Kamiran and Calders, 2012; Feldman et al., 2015). An established measure that quantifies this objective for binary protected attributes is the _prule_ (Biddle, 2006). Denoting as \(R[v]\in\{0,1\}\) the binary outputs of a system \(R\) for samples \(v\), \(S\) the group (set) of protected samples and \(S^{\prime}\) its complement, this measure is defined as: \[\begin{split}& prule=\frac{\min\{r_{S},r_{S^{\prime}}\}}{\max\{r_{S},r_{S^{ \prime}}\}}\in[0,1]\\ & r_{S}=P(R[v]=1\,|\,v\in S)\\ & r_{S^{\prime}}=P(R[v]=1\,|\,v\not\in S)\end{split} \tag{5}\] The prule computes to 100% when the fractions \(r_{S},r_{S^{\prime}}\) are perfectly equal and to 0% when there is no positive output for one of the sensitive attribute's values. There is precedent (Biddle, 2006) for considering an 80% prule or higher fair.

Calders-Verwer disparity \(|r_{S}-r_{S^{\prime}}|\) (Calders and Verwer, 2010) is also a well-known disparate impact assessment measure. However, although it is optimized at the same point as the prule, it biases fairness assessment against high fractions of positive predictions. For example, it considers the fractions of positive labels \((r_{S},r_{S^{\prime}})=(0.8,0.6)\) less fair than \((r_{S},r_{S^{\prime}})=(0.4,0.3)\). We shy away from this understanding because we later employ a stochastic interpretation of posterior node scores that could be scaled by a constant yet unknown factor. On the other hand, the prule would quantify both numerical examples as \(\frac{0.6}{0.8}=\frac{0.3}{0.4}=75\%\) fair.

#### 2.4.2 Posterior biases in graphs

Fairness concerns also arise in graph filters. To begin with, linked social network nodes often exhibit similar attribute values. This phenomenon is known as homophily (McPherson et al., 2001) and is the core argument behind the use of low-pass graph filters, i.e., filters that concentrate on propagating priors only a few hops away. However, the same phenomenon also means that prior biases (e.g., lower values on average against protected group nodes) are transferred to the posteriors of low-pass graph filters, such as personalized pagerank (Karimi et al., 2018).
Additionally, there may exist structural biases in graphs with non-homogeneous node degree distributions (Vaccario et al., 2017; Cui et al., 2021). For example, graph filters often assign higher scores to well-linked nodes (as Fortunato et al. (2006) demonstrate for the personalized pagerank filter) and this causes those sensitive attribute values that are correlated with node degrees to also be correlated with node scores.

As an example of how graph filtering can be biased, consider the left graph of Figure 1 and the node shape as a sensitive attribute. The circle and triangle groups of nodes tend to form more intra-group than other edges, but there also exist two structural communities A and B. Constructing graph signal priors from the red known members of community B, which are both triangles, leads an example graph filter towards favoring member recommendations (pink) for more triangles rather than the top circles, even if the former happen to reside in community A instead. These recommendations are in line with notions of structural proximity, but if the goal was to recommend members of community B, example members and respective priors are clearly biased in that they do not represent any circles and thus fail to indicate the top-right area of the graph as important. One way to mitigate this type of bias would be by including circles in the examples, as shown on the right graph of the same figure.

Figure 1: Biased (left) and fair (right) community member recommendations with regard to the protected subgroup of circles.

Prior bias could come from unfair data gathering procedures. However, given a small number of example community members, this bias can also arise inadvertently, due to random selection of structurally close same-group nodes as examples. This is unlikely for unbiased sampling mechanisms and large numbers of example nodes, but it is commonplace to perform graph filtering with as few as one or two non-zero prior values. Regardless of its cause (e.g., chance or malice), prior bias can be exacerbated by homophilous graph structures. One view of this phenomenon is that the structures themselves are biased (Karimi et al., 2018). However, in this work we show that appropriate graph signal prior selection can mitigate any structural biases in connected graphs, which indicates that bias can also be seen as the fault of priors, namely of large prior values residing structurally far away from nodes of the protected group.

#### 2.4.3 Posterior score fairness

In domains related to the outcome of graph mining algorithms, fairness has been defined for the order of recommended items (Beutel et al., 2019; Biega et al., 2018; Yang and Stoyanovich, 2017; Zehlike et al., 2017) as equity in the ranking positions between protected and non-protected group members. These notions of fairness are not applicable to the more granular understanding provided by node scores.

Another definition of graph mining fairness has been introduced for node embeddings (Bose and Hamilton, 2019; Rahman et al., 2019) under the guise of fair random walks, which is the stochastic process modeled by personalized pagerank when the adjacency matrix is normalized by columns. Yet, the fairness of these walks is only implicitly asserted through embedding fairness. Furthermore, they require at least one member of the protected group to be adjacent to each node, to make sure that the walks can favor them at every step on which fairness needs to be corrected.
Fairness has also been recently explored for the outcomes of graph neural networks (Dai and Wang, 2020; Ma et al., 2021; Dong et al., 2021; Dai and Wang, 2021; Zhang et al., 2022; Dong et al., 2022). Approaches are typically trained to produce fair predictions (e.g., recommendations, link predictions) by a) incorporating traditional notions of fairness as regularizers to differentiable relaxations of accurate node classification (e.g., cross-entropy minimization), b) imposing similar fairness constraints on the training process, or c) rebalancing algorithmic components to debias the inference process. These works also support partial knowledge of sensitive attribute values, given that these can be neurally estimated. In line with fairness-aware graph filtering (see below), a popular type of rebalancing consists of adding or removing edges (Loveland et al., 2022).

Despite the existence of works tailored to other graph mining paradigms, it remains important to investigate fairness for traditional graph filtering. To begin with, advances in graph neural network theory (Dong et al., 2020) suggest that this paradigm can be equivalent to decoupling neural network and graph filter components. Thus, the question of how to make the outcome of graph filters fair can also be of use there. At the same time, graph neural networks do not see use in all graph mining applications, as they tend to rely on an abundance of node features, which are not always present for graph mining, and are often computationally intractable to parse with modern GPUs at the scale of graphs with millions of nodes. Finally, existing architectures tend to focus on node classification or link prediction instead of the more granular scoring we explore in this work.

Recent works by Tsioutsiouliklis et al. (2020, 2021, 2022) have initiated a discourse on the posterior score fairness of graph filters by recognizing the need for optimizing trade-offs between fairness and posterior preservation. Furthermore, they provide a first definition of node score fairness, called \(\phi\)-fairness, which for scores \(r[v]\) of graph nodes \(v\in\mathcal{V}\) allocates a fixed fraction of their total mass for the protected group nodes \(v\in S\): \[\sum_{v\in S}r[v]=\phi\sum_{v\in\mathcal{V}}r[v]\] To achieve perfect compliance with \(\phi\)-fairness, the same researchers apply graph structure modifications (edge addition or removal) to balance the score mass flowing to and from protected group nodes during graph shifts. Furthermore, they address fairness-aware algorithms for personalized graph filter inputs by introducing the concept of _universal fairness_ as the practice of modifying the graph structure so that \(\phi\)-fairness is satisfied irrespective of provided priors. They also show that graph modifications are the only way to make the personalized pagerank filter universally fair.

Unfortunately, modifying graphs so that filters become universally fair admits several restrictions that are not always possible to satisfy. First, this kind of approach has been analysed only for the family of personalized pagerank filters with column-wise normalization; modifications could exist for symmetrically normalized personalized pagerank or other filters, but additional case-by-case analysis is needed. Second, not only are the modifications that would best preserve the original posteriors still unknown, but existing mechanisms remain dependent on priors.
As an extension of this point, we argue that the intent to make algorithms universally fair--even for graph signal priors that capture biases--introduces drastic modifications in what graph filters consider as structural proximity and ultimately hinders the preservation of original posteriors by affecting them more than needed. In fact, some modifications artificially penalize the scores of nodes with non-zero priors to offset some of the biased probability mass retained by personalized pagerank.

A different proposition for fairness-aware graph filtering is to tailor bias mitigation to the specific priors injecting unfairness (Krasanakis et al., 2020). In this work, we expand on this direction and aim to improve upon preliminary heuristics of editing priors that optimize fair graph filter posterior objectives without altering the relations between nodes. This approach addresses traditional disparate impact mitigation by generalizing the prule as a measure of posterior score fairness. The generalization starts from a stochastic interpretation of posterior node scores, where they are proportional to the probability of nodes assuming positive labels, and calculates the expected numbers of positive sensitive and non-sensitive node labels obtained by a sampling mechanism that independently assigns positive labels with probabilities proportional to posterior scores: \[p_{S}=P(R[v]=1|v\in S)=\frac{\Omega}{|S|}{\sum_{v\in S}r[v]}\] \[p_{S^{\prime}}=P(R[v]=1|v\not\in S)=\frac{\Omega}{|\mathcal{V} \setminus S|}{\sum_{v\not\in S}r[v]}\] where the value \(\Omega>0\) is the scale of the proportion and \(R\) is a stochastic process with probabilities \(P(R[v]=1)=\Omega\,r[v]\). Plugging these in the prule cancels out the scale between the numerator and denominator and yields the following formula for calculating a stochastic interpretation of the prule given posterior node scores \(r\): \[prule=\frac{\min\left\{|\mathcal{V}\setminus S|\sum_{v\in S}r[v],|S|\sum_{v \not\in S}r[v]\right\}}{\max\left\{|\mathcal{V}\setminus S|\sum_{v\in S}r[v],| S|\sum_{v\not\in S}r[v]\right\}} \tag{6}\] We also adopt this approach in this work and, by convention, consider \(prule=0\) when all node scores are zero.

Although conceived through independent processes, the conditions of 100% prule and \(\phi\)-fairness capture the same type of posterior equity when \(\phi=\frac{|S|}{|\mathcal{V}|}\), i.e., they become equivalent in this case. When the protected group nodes end up with smaller scores on average, which is the typical case we address in this work, the relation between the prule and \(\phi\) generalizes to: \[prule=\frac{\phi|\mathcal{V}\setminus S|}{(1-\phi)|S|}\Leftrightarrow\phi= \frac{|S|prule}{|\mathcal{V}|+|S|(prule-1)}\] Otherwise, the desired correspondence between the two notions of fairness can be found by considering the complement \(\mathcal{V}\setminus S\) as the protected group.

## 3 Prior Editing to Optimize Posterior Scores

In this work we attempt to make graph filtering fairness-aware while respecting how it understands structural proximity. Before working on the fairness domain, in this section we conduct a more general analysis of how to search for posteriors optimizing a broad class of objectives while respecting the propagation mechanisms of graph filters. Our proposed theoretical framework is exploited later on, in Section 4, to eliminate disparate impact from node scores while minimally perturbing the outcome of traditional graph filtering.
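Since the remaining analysis tracks disparate impact through this measure, we sketch a minimal numpy transcription of Equation 6, together with its zero-score convention; the boolean protected-group mask is an assumed input format:

```python
import numpy as np

def stochastic_prule(r, is_protected):
    """Stochastic prule of Equation 6 for posterior scores r and a boolean
    protected-group mask; returns 0 when all node scores are zero (convention)."""
    mass_S = r[is_protected].sum() * (~is_protected).sum()     # |V\S| * sum_S r
    mass_rest = r[~is_protected].sum() * is_protected.sum()    # |S| * sum_{V\S} r
    if max(mass_S, mass_rest) == 0:
        return 0.0
    return min(mass_S, mass_rest) / max(mass_S, mass_rest)
```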
One possible approach to optimizing posteriors would be by directly adjusting them (Tsioutsiouliklis et al., 2020). However, this practice deteriorates the quality gained by passing priors through graph filters. Ultimately, this loses the robustness of propagating information through node relations by introducing a degree of freedom for each node (Subsection 6.1). For example, under this approach, there is no difference between increasing the scores of low-scored nodes in the protected group by a small but non-negligible amount and increasing the scores of high-scored nodes of the same group. However, the first case could lead to a catastrophic loss of intra-group recommendation order, even if it were Pareto optimal with respect to any small node score perturbation.

To prevent loss of the posterior quality gained by passing graph signal priors through filters, for instance to facilitate downstream tasks, it has been proposed that, instead of the posteriors, the priors should be edited (Krasanakis et al., 2020). In this work, we theoretically back this claim by deriving the editing as the convergent point of a gradual procedure that leads to posterior objective optimization near the original priors. An intuitive understanding of this process is presented in Figure 2, where original priors \(q_{0}\) are edited over time \(t\) until they reach new ones \(q_{\infty}\). The corresponding original posteriors \(r_{0}\) turn into new node scores \(r_{\infty}\) that asymptotically reach the local optima of an objective \(\mathcal{L}(r_{\infty})\).

Figure 2: A prior editing process and its optimization trajectory for two nodes.

Individually editing each prior node value may still introduce too many degrees of freedom that make it hard to track which (out of many possible) gradient trajectory is followed at each point in time. Thus, the same research direction conceives editing mechanisms of few parameters that are applied on a node-by-node basis and reduce the available degrees of freedom to the bare minimum needed to optimize posterior objectives. In this work, we build on this proposition and drastically improve previous heuristics by deriving both which node properties (information available during graph filtering) should be involved in editing and what form editing mechanisms should take.

To support our analysis, in Subsection 3.1 we express how tightly prior editing mechanisms should approximate the gradients of graph filter posterior objectives to let the latter reach local optimality with respect to a broad class of twice differentiable objectives. In Subsection 3.2, we progress our prior editing framework to explain why parameterized (e.g., neural) models with enough degrees of freedom are suited to prior editing. Based on theoretical analysis, in Subsection 3.3 we devise appropriate neural networks to be trained at runtime for each prior graph signal and filter, so that they estimate ideal priors that trade off original posterior preservation and meeting fairness constraints. After a certain point, our analysis requires that ideal posteriors lie close enough to the original ones outputted by base graph filters. This means that original filters should already procure "good enough" first posteriors for downstream objectives.
### Local optimality of prior editing

In this subsection we investigate theoretical properties of positive definite graph filters that allow coarse approximations of optimal prior editing schemes to reach local optimality with regard to their induced posterior objective values. Before going through our main analysis, we point out that real-world graph filters often end up with a post-processed version of posteriors. For example, personalized pagerank is often implemented as an iterative application of the power method scheme \(r=a\hat{A}r+(1-a)q\) that quickly converges to small numerical tolerance by performing L1 normalization after each step (e.g., as Lin and Cohen (2010) do), i.e., by dividing node posteriors with their sum. This does not affect the ratio of importance scores placed on propagating prior graph signals different numbers of hops away, but produces scaled scores that sum to 1. More complex post-processing mechanisms may induce different transformations per node that can not be modeled by graph filters.

Taking these concerns into account, we hereby introduce a notation with which to formalize post-processing and integrate it in our proposed approach; we consider a _post-processing vector_ with which posteriors are multiplied element-by-element. Mathematically, this lets us write post-processed posteriors \(r\) of passing graph signal priors \(q\) through a graph filter \(F(\hat{A})\) as: \[r=\text{diag}(p)F(\hat{A})q \tag{7}\] The exact transformation arising from post-processing mechanisms could vary, depending on both the graph filter and graph signal priors. However, we can decouple this dependency by thinking of the finally selected post-processing as one particular selection out of many possible ones. For instance, L1 output normalization can be modeled as multiplication with the inverse of the original posteriors' L1 norm \(p[v]=\frac{1}{\|F(\hat{A})q\|_{1}}\).

Given a graph filter with post-processing, we now introduce the concept of approximately tracking the optimization slope of posterior objectives with small enough error. This is formalized in Definition 1, which introduces a class of multivariate multivalue functions, called \(\lambda\)-optimizers of objectives, that approximate the negative gradients of objectives with fixed relative error bound \(\lambda\). Smaller values of this strictness parameter indicate tighter approximation of optimization slopes, whereas larger values indicate looser tracking. To disambiguate the possible directions of slopes, our analysis considers loss functions of non-negative values to be minimized.

**Definition 1**: _A continuous function \(f:\mathcal{R}\rightarrow\mathbb{R}^{|\mathcal{V}|}\) will be called a \(\lambda\)-optimizer of a loss function \(\mathcal{L}(r)\) over graph signal domain \(\mathcal{R}\subseteq\mathbb{R}^{|\mathcal{V}|}\) only if:_ \[\|f\big{(}r\big{)}+\nabla\mathcal{L}(r)\|<\lambda\|\nabla\mathcal{L}(r)\|\quad \text{ for all }\nabla\mathcal{L}(r)\neq\textbf{0}\]

We now analyse the robustness of positive definite graph filters with post-processing in terms of how tight optimizers of posterior losses should be for the graph filter's propagation mechanism to "absorb" relative errors and eventually reach local minima. To this end, in Lemma 2 (proof in Appendix C) we find a maximum tightness parameter sufficient to lead to local optimality of posteriors with respect to the loss.
The required tightness depends on the graph filter's maximum and minimum eigenvalues and the post-processing vector's maximum and minimum values (these depend on both the filter and the normalized adjacency matrix's eigenvalues). For the results to hold true, the graph filter needs to be symmetric positive definite. The fact that non-exact optimizers suffice to track trajectories supports the rest of our analysis.

**Lemma 2**: _Let \(F(\hat{A})\) be a positive definite graph filter and \(p\) a post-processing vector. If \(f(r)\) is a \(\frac{\lambda_{1}\min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\)-optimizer of a differentiable loss \(\mathcal{L}(r)\) over graph signal domain \(\mathcal{R}\subseteq\mathbb{R}^{|\mathcal{V}|}\), where \(\lambda_{1},\lambda_{\max}>0\) are the smallest positive and largest eigenvalues of \(F(\hat{A})\) respectively, updating graph signals per the rule:_ \[\begin{split}&\frac{\partial q(t)}{\partial t}=f(r(t))\\ & r(t)=\text{diag}(p)F(\hat{A})q(t)\end{split} \tag{8}\] _asymptotically leads the loss to local optimality if posterior updates are closed in the domain, i.e. \(r(t)\in\mathcal{R}\,\forall t\in[0,T]\Rightarrow\text{diag}(p)F(\hat{A}) \int_{0}^{T}f\big{(}r(t)\big{)}dt\in\mathcal{R}\)._

Following an approach similar to the positive definiteness check of Equation 2, we can also bound the eigenvalue ratio \(\frac{\lambda_{1}}{\lambda_{\max}}\) involved in calculating the necessary approximation tightness when we know only the type of filter but not the graph or priors. In particular, for filters \(F(\hat{A})\) where \(\hat{A}\) are symmetrically normalized adjacency matrices of undirected graphs it holds that: \[\frac{\lambda_{1}}{\lambda_{\max}}\geq\frac{\min_{\lambda\in[-1,1]}F(\lambda) }{\max_{\lambda\in[-1,1]}F(\lambda)} \tag{9}\]

Such bounds are typically lax. To demonstrate this, we compute them for the personalized pagerank and heat kernel filters, which can respectively be expressed in closed forms as \((1-a)(I-a\hat{A})^{-1}\) and \(e^{-t(I-\hat{A})}\), where \(a\) and \(t\) are their parameters. For these filters, the bound of Equation 9 evaluates to \(\frac{1-a}{1+a}\) and \(e^{-2t}\) respectively. For larger values of \(a\) and \(t\), which penalize the spread of graph signal priors farther away less, stricter optimizers are required to keep track of the gradient's negative slope. When normalization is the only post-processing employed, all elements of the post-processing vector are the same and the bounds for sufficient optimizer strictness coincide with the aforementioned eigenvalue ratios. Inserting some specific numbers, for the most widespread version of personalized pagerank with \(a=0.85\) it suffices to select \(0.081\)-optimizers to edit priors. Even for wider prior diffusion with \(a=0.99\) it suffices to select \(0.005\)-optimizers. When graphs have adequately many nodes, such optimizers are orders of magnitude larger than the average posteriors \(\frac{1}{|\mathcal{V}|}\) arising from L1 normalization. In Subsection 6.1, we explain why larger eigenvalue ratios broaden the applicability of our mathematical analysis and identify threats that arise when those filters that focus on many propagations--and thus require tight optimizers--are applied on graphs with too few nodes. In that scenario, optimization errors become comparable to the average posteriors and therefore generate untraceable optimization trajectories.
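The sufficient strictness bounds above are easy to reproduce numerically; a minimal sketch for the two closed-form filters, assuming normalization is the only post-processing so that the bound coincides with the eigenvalue ratio:

```python
import numpy as np

def ppr_strictness(a):
    """Equation 9 bound for personalized pagerank: min/max of (1-a)/(1-a*l) over l in [-1, 1]."""
    return (1 - a) / (1 + a)

def heat_kernel_strictness(t):
    """Equation 9 bound for heat kernels: min/max of exp(-t(1-l)) over l in [-1, 1]."""
    return np.exp(-2 * t)

print(ppr_strictness(0.85))   # ~0.081
print(ppr_strictness(0.99))   # ~0.005
```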
### Surrogate models for prior signal editing

The above analysis lets us express what objectives like fair graph filtering would look like under the prism of accrediting their quality (e.g., biases) to priors. In particular, we try to meet posterior objectives by employing appropriate \(\lambda\)-optimizers of objective gradients to adjust priors per Equation 8. To this end, we argue that finding the editing mechanisms and computing line integrals leading to locally optimal priors is unnecessary; instead, we can create models that directly compute the integration outcome. This intuition holds on a theoretical level too; Lemma 3 (proof in Appendix C) transcribes the required tightness of optimizers around loss gradients to near-invertibility of a prior editing model's Jacobian across the gradients' direction. If the requirement is met, there exist prior editing model parameters that lead posteriors to locally optimize the objective. Since the same quantity as in Lemma 2 identifies adequate tightness, even coarse prior editing models can be met with success.

**Lemma 3**: _Let us consider graph signal priors \(q_{0}\), positive definite graph filter \(F(\hat{A})\) with largest and smallest eigenvalues \(\lambda_{\max},\lambda_{1}\), post-processing vector \(p\), differentiable loss function \(\mathcal{L}(r)\) with at least one minimum in domain \(\mathcal{R}\subseteq\mathbb{R}^{|\mathcal{V}|}\), and a differentiable graph signal generation function \(\mathcal{M}:\mathbb{R}^{K}\rightarrow\mathbb{R}^{|\mathcal{V}|}\) for which there exist parameters \(\theta_{0}\) satisfying \(\mathcal{M}(\theta_{0})=q_{0}\) and \(\text{diag}(p)F(\hat{A})\mathcal{M}(\theta)\in\mathcal{R}\). If, for any parameters \(\theta\), its Jacobian \(\mathbb{J}_{\mathcal{M}}(\theta)\) has linearly independent columns and satisfies:_ \[\left\|E(\theta)\nabla\mathcal{L}(r)\right\|<\tfrac{\lambda_{1} \min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\|\nabla\mathcal{L}(r)\|\quad\text{ for }\nabla\mathcal{L}(r)\neq\textbf{0}\] \[E(\theta)=\mathcal{I}-\mathbb{J}_{\mathcal{M}}(\theta)(\mathbb{ J}_{\mathcal{M}}^{T}(\theta)\mathbb{J}_{\mathcal{M}}(\theta))^{-1}\mathbb{J}_{ \mathcal{M}}^{T}(\theta)\] \[r(\theta)=\text{diag}(p)F(\hat{A})\mathcal{M}(\theta)\] _then there exist parameters \(\theta_{\infty}\) that make \(\mathcal{L}\big{(}r(\theta_{\infty})\big{)}\) locally optimal._

In Lemma 3, the matrix \(E\) is the difference between the identity matrix and the product of the Jacobian \(\mathbb{J}_{\mathcal{M}}(\theta)\) with its Moore-Penrose pseudo-inverse, i.e., the projection onto the orthogonal complement of the span of the Jacobian's columns; the precondition constrains this difference to matter only along directions with high gradient values. Thus, as long as the objective's gradient retains a clear--though maybe changing--direction to move towards and this can be captured by a surrogate prior editing model \(\mathcal{M}(\theta)\), which may (or should) fail at following other kinds of trajectories, then a parameter trajectory path exists to arrive at locally optimal prior edits. For example, if prior editing had the same number of parameters as the number of nodes and its parameter gradients were linearly independent, \(\mathbb{J}_{\mathcal{M}}(\theta)\) would be a square invertible matrix, yielding \(\mathbb{J}_{\mathcal{M}}(\theta)(\mathbb{J}_{\mathcal{M}}^{T}(\theta)\mathbb{J}_{\mathcal{M}}(\theta))^{-1}\mathbb{J}_{\mathcal{M}}^{T}(\theta)=\mathcal{I}\Leftrightarrow E=\mathbf{0}\), and the lemma's precondition inequality would always hold.
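For intuition, the precondition of Lemma 3 can be checked numerically for a candidate model; a minimal sketch with an illustrative Jacobian, assuming linearly independent columns:

```python
import numpy as np

def precondition_ratio(J, grad):
    """Relative residual of projecting the loss gradient onto the span of
    the prior editing model's parameter directions (Jacobian columns).
    Lemma 3 asks this to stay below lambda_1 min(p) / (lambda_max max(p))."""
    pseudo = np.linalg.solve(J.T @ J, J.T)   # (J^T J)^{-1} J^T
    E = np.eye(J.shape[0]) - J @ pseudo      # residual of the projection
    return np.linalg.norm(E @ grad) / np.linalg.norm(grad)
```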
As the number of parameters decreases, however, it becomes more important for \(\mathcal{M}(\theta)\) to be able to exhibit degrees of freedom in the same directions as its induced loss's gradients. Next, we aim to learn the directions of the degrees of freedom with deep neural models.

### Neural approximation of posterior optimization with prior signal editing

From a high-level standpoint, Lemma 3 suggests that, for each prior graph signal \(q_{0}\) and a graph filter posterior objective, there exists an appropriate prior editing mechanism \(\mathcal{M}(\theta)\) that follows a differentiable trajectory towards a point of locally optimal priors \(\mathcal{M}(\theta_{\infty})\) corresponding to parameter values \(\theta_{\infty}\). Found priors are considered locally optimal both in the sense that they end up locally optimizing the objective and in that they need to lie close enough to the original ones to be discovered. We now propose that neural network architectures are valid prior editing mechanisms; thanks to the universal approximation theorem (we use the form presented by Kidger and Lyons (2020) to work with non-compact domains and obtain sufficient layer widths), they can produce tight enough approximations of the desired objectives to serve as \(\lambda\)-optimizers. The minimum requirements neural architectures need to satisfy are summarized in Theorem 4 (proof in Appendix C). Importantly, the theorem specifies appropriate neural architectures and several hyperparameters (e.g., their width), but their ideal depth can only be acquired via hyperparameter tuning on each filtering task.

**Theorem 4**: _Let us consider graph signal priors \(q_{0}\), positive definite graph filter \(F(\hat{A})\) with largest and smallest eigenvalues \(\lambda_{\max},\lambda_{1}\) respectively, post-processing vector \(p\), and twice differentiable loss function \(\mathcal{L}(r)\), with some unique minimum at unknown posteriors \(r=r_{\infty}\) of a domain \(\mathcal{R}\subseteq\mathbb{R}^{|\mathcal{V}|}\), and whose Hessian matrix \(\mathbb{H}_{\mathcal{L}}(r)\) linearly approximates its gradients \(\nabla\mathcal{L}(r)\) within the domain with error at most \(\epsilon_{\mathbb{H}}\). Let us also construct a node feature matrix \(H^{(0)}\) whose columns are graph signals (its rows \(H^{(0)}[v]\) correspond to nodes \(v\)). If \(q_{0}\) and all graph signals involved in the calculation of \(\mathcal{L}(r)\) are columns of \(H^{(0)}\), and it holds that:_ \[\frac{\lambda_{1}\min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\|\nabla \mathcal{L}(r)\|>2\epsilon_{\mathbb{H}}\|r-r_{\infty}\|\] _for any \(r\in\mathcal{R}\setminus\{r_{\infty}\}\), then for any \(\epsilon_{\infty}>0\) there exists a deep neural network architecture_ \[H^{(\ell)}=\phi_{\ell}(H^{(\ell-1)}W^{(\ell)}+b^{(\ell)})\] _with activation functions_ \[\phi_{\ell}(x)=\{x\text{ for }\ell=L,relu(x)\text{ otherwise}\}\] _learnable weights \(W^{(\ell)}\), and biases \(b^{(\ell)}\) for layers \(\ell=1,2,\ldots,L\) with depth \(L\geq 4\), in which using the output of the last layer \(H^{(L)}\) as priors yields posteriors arbitrarily close to the local minimum \(\|diag(p)F(\hat{A})H^{(L)}-r_{\infty}\|<\epsilon_{\infty}\).
Additionally, layers other than the last can have a number of output columns equal to the number of columns of \(H^{(0)}\) plus two._

To summarize the main result of Theorem 4, there exists a small enough area \(\mathcal{R}\) around locally optimal priors \(q_{\infty}\), in which biased posteriors corresponding to priors \(q_{0}\) can still be usable as a reference to approximate \(q_{\infty}\). Such an area is enclosed within the brown dashed curve in Figure 3 and is wider the closer posterior objectives are to linearity, or the looser the \(\lambda\)-optimizers that graph filters can absorb. At the same time, loss derivatives should be large enough to push optimization trajectories towards minima, even from faraway points of the domain.

Figure 3: Finding nearby priors that locally optimize graph filtering objectives.

## 4 Surrogate Graph Neural Networks of Fairness-Aware Graph Filtering

At this point, we have all the necessary tools to tackle the main goal of this work, i.e., to procure new posteriors that are similar to those of graph filters but which also achieve high prule fairness. In this regard, we introduce fairness-aware posterior objectives to be optimized with neural prior editing. At first, we formulate the objectives to trade-off node score preservation and fairness, but later evolve our pipeline to optimize only preservation under hard-coded fairness constraints. Our approach is summarized in Figure 4; starting from only one prior value for each node, we obtain features \(H^{(0)}[v]\) for nodes \(v\) to input in the neural network _NN_, and train the latter to predict new node priors that in the end lead to fair posteriors. The final step of producing fair posteriors also passes the filter's outcome through a transformation function \(f(\cdot)\) that ensures theoretical properties of our analysis transfer to the practical setting at hand (e.g., lets guarantees derived for symmetric normalization apply to column-wise normalization). The pipeline of learning new priors, passing them through the graph filter, and performing output transformations is equivalent to training a predict-then-propagate graph neural network whose propagation mechanism is the filter itself and whose final activation applies post-processing (e.g., normalization) and transformation functions. In other words, the proposed framework is equivalent to a surrogate graph neural network model of ideal posteriors (Subsection 4.3). The same framework and corresponding analysis is in principle applicable to a variety of posterior objectives, though in this work we only operate it for disparate impact mitigation.

Figure 4: Overview of the proposed fair posterior framework.

In this section, we explore the architecture and hyperparameters of the above framework. First, Subsection 4.1 specifies fairness-aware objectives, and introduces a methodology for adapting them or any other twice differentiable loss to our mathematical analysis by adding an L1 posterior regularization term. Then, Subsection 4.2 gathers the node features \(H^{(0)}\) requisite for Theorem 4 to optimize the desired objective. Subsection 4.3 presents the exact graph neural network architecture of our approach; architecture details and some hyperparameters are to be selected anew for each graph filtering task (e.g., with grid search among promising values), such as when different priors are analysed.
Finally, Subsections 4.4 and 4.5 present activation functions that respectively find local optima for graph filters applied on asymmetric adjacency matrix normalization or on objectives prone to numerical instability, and impose fairness constraints without needing to search for appropriate weight trade-offs within fairness-aware objectives.

### Defining fairness-aware objectives in our framework

Training prior editing mechanisms so that they preserve the outcome of graph filtering but also become fairness-aware requires appropriate selection of objective/loss functions that satisfy the conditions of Theorem 4. We devise losses that trade-off disparate impact mitigation and original posterior score preservation: \[\mathcal{L}(r)=\mathcal{L}_{ret}(r)+l_{bias}\mathcal{L}_{bias}(r)\] where the component \(\mathcal{L}_{ret}(r)\) captures the degree of preservation of original posteriors and \(\mathcal{L}_{bias}(r)\) quantifies posterior bias. Within such losses, \(l_{bias}\geq 0\) is a trade-off hyperparameter; larger values indicate more importance being placed on bias mitigation than on retaining posteriors. The ideal trade-off is not necessarily fixed, but should assume the smallest possible value needed to reach desired fairness constraints. Too large values may make fairness dominate neural learning processes and thus end up with shallow approximations of original posteriors, but too small values may also fail to induce fairness. Subsection 4.5 presents a methodology that substitutes the necessary hyperparameter exploration by setting \(l_{bias}=0\) and imposing additional post-processing (by modifying the graph neural network's output activation) to achieve perfect disparate impact mitigation. Here we define a full objective for the sake of completeness.

Before fleshing out the loss, we address universal applicability, i.e., enhancing _any_ twice differentiable loss to satisfy the preconditions of Theorem 4. Double differentiability is easy to satisfy, at least within the optimization domains of machine learning frameworks constrained to avoid computational singularities; these domains are often implicitly defined in machine learning tasks, given that optimization trajectories tend to avoid singularities in probability, and thus there could exist equivalents that match the same points functions are evaluated at but also bypass the singularities. However, the theorem also requires that gradient magnitudes be significantly larger than the error of linearly approximating them via the Hessian, so as to overcome local optima. To facilitate universal applicability of our framework, i.e., to losses that may not necessarily satisfy the last property, we introduce the practice of **adding sufficiently large L1 posterior regularization** to base losses \(\mathcal{L}(r)\), resulting in the following regularized versions: \[\tilde{\mathcal{L}}(r)=\mathcal{L}(r)+l_{reg}\int_{\mathcal{M}(r)}sgn(r)\cdot dr \approx\mathcal{L}(r)+l_{reg}(\|r\|_{1}-\|r_{0}\|_{1}) \tag{10}\] where \(\cdot\) is the dot product, \(sgn(\cdot)\) is a twice differentiable tight enough approximation of the element-by-element sign operator, \(l_{reg}\geq 0\) a sufficiently large regularization parameter, and \(\mathcal{M}(r)\subseteq\mathcal{R}\) is an (unknown) optimization trajectory within domain \(\mathcal{R}\) that starts from \(r_{0}\) and asymptotically approaches \(r\).
Theorem 5 (proof in Appendix C) explains why large enough L1 regularization introduces a constant push of posteriors towards zero that keeps gradients sufficiently large. The upper bound we provide for the regularization is a sufficient condition. Recall that the optimization domain can attract many but not all possible posteriors to locally optimal priors. Hence, graph filters employed in practice should be selected to form close enough approximations of the desired objective (i.e., not all filters are suited to all graphs and objectives).

**Theorem 5**: _For any twice differentiable loss \(\mathcal{L}(r)\), there exists a sufficiently large parameter \(l_{reg}\in\left[0,2^{\frac{\sup_{q}\|q-q_{0}\|}{|\mathcal{V}|}}\right]\) such that the loss regularization of Equation 10 satisfies the properties needed by Theorem 4 within the optimization domain_ \[\mathcal{R}=\{r_{\infty}\}\cup\left\{r:\|r-r_{\infty}\|<0.5\max\left\{| \mathcal{V}|,\frac{\|\nabla\mathcal{L}(r)\|}{\epsilon_{\mathbb{H}}}\right\}\frac{ \lambda_{1}\min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\right\}\] _where \(r_{\infty}\) denotes the ideal posteriors optimizing the regularized loss. If the second term of the \(\max\) is selected, it suffices to have no regularization \(l_{reg}=0\)._

Given the freedom in loss selection granted by the regularization term, we now focus solely on the business objective of surrogate graph filtering that addresses the disparate impact of node scores. First, we aim to minimize the utility loss of Equation 4 by setting \(\mathcal{L}_{ret}(r)=\mathcal{L}_{util}\). Second, we quantify disparate impact mitigation via the prule; to reach perfect mitigation, this measure needs to compute to 1 and its penalization can be directly used as a loss component, i.e., \(\mathcal{L}_{bias}(r)=1-prule\).

### Node features

Overall, when applying Theorem 4 to fairness-aware graph filtering, it suffices to introduce only three node feature dimensions for the graph neural network to process. First, original priors need to be inputs of the architecture. This enables the perfect replication of original posteriors requisite for Lemma 3, for instance when the neural prior editing mechanism just outputs the original priors. Second, original posteriors are directly used in the loss function calculation. Third, the sensitive attribute indicates which nodes belong to the protected group and is also a type of graph signal (a value that depends on the node) contributing to the computation of the loss. All three signals should be set as node feature dimensions. Thus, we extract the following node features \(H^{(0)}:|\mathcal{V}|\times 3\) to input in graph neural network architectures: \[H^{(0)}[v,0] =q_{0}[v]\] \[H^{(0)}[v,1] =r_{0}[v]\] \[H^{(0)}[v,2] =\{1\text{ if }v\in S,0\text{ otherwise}\}\] where \(q_{0}\) are the original priors, \(r_{0}=diag(p)F(\hat{A})q_{0}\) the original posteriors for graph filter \(F(\hat{A})\) with post-processing vector \(p\), and \(S\) is the group of protected nodes. No other signals contribute to the computation of the previous subsection's loss, which means that \(H^{(0)}\) suffices to generate near-ideal priors. This analysis holds true for symmetric normalized graph adjacency matrices and is extended to asymmetric ones in Subsection 4.4 by adding a fourth node feature dimension.
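A minimal sketch of assembling these features, assuming numpy arrays and a boolean mask for the protected group:

```python
import numpy as np

def node_features(q0, r0, protected_mask):
    """Assemble the |V| x 3 feature matrix H^(0): original priors,
    original posteriors, and the binary sensitive attribute."""
    return np.stack([q0, r0, protected_mask.astype(float)], axis=1)
```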
### Architecture

Given a loss function and node feature matrix \(H^{(0)}\), both of which are appropriately constructed as in the previous subsections to meet the requirements of Theorem 4, we formally express the theorem's estimation of ideal posteriors organized into a column vector \(H^{(L)}\) via a model of the following form: \[H^{(\ell)}=\text{relu}(H^{(\ell-1)}W^{(\ell)}+b^{(\ell)})\quad \ell=1,2,\dots,L\] \[\hat{r}_{\infty}=diag(p)F(\hat{A})H^{(L)}\] for post-processing vector \(p\), and appropriate layer weights and biases \(W^{(\ell)}\) and \(b^{(\ell)}\) respectively. This formula resembles the predict-then-propagate architecture of Subsection 2.3 and can therefore be implemented in graph neural network frameworks supporting related operations. But, contrary to the widespread practice of engaging the same predefined heuristic architectures each time, our analysis reveals several necessary choices, listed below.

First, we employ neural layers of \(3+2=5\) dimensions--or \(4+2=6\) dimensions for asymmetric adjacency matrix normalization. This is different than the larger number (e.g., 64 or 128) of hidden layer dimensions in traditional graph neural networks, but is justified in that we are not working with too many node features and thus require a smaller number of latent dimensions. Similarly to the predict-then-propagate scheme, we employ _relu_ activations at intermediate layers. However, at the top layer, outputs are not used for classification; given that we need the architecture to be able to perfectly reconstruct base graph filtering posteriors despite not explicitly using those as inputs, the output activation approximating this concept should ideally remain linear. Thus, the only transformations we apply are the post-processing vector multiplication already employed by filters and the filter transfer trick, which is a complementary mechanism that will be developed in Subsection 4.4.

Second, the graph filter performing the propagation of neural logits \(H^{(L)}\), where in our case these predictions are the edited priors, should be identical to the one producing the original potentially biased posteriors. For instance, if iterative formulas are used to compute graph filters until numerical tolerance (e.g., the power iteration for personalized pagerank), the same number of iterations needs to be repeated during propagation, regardless of the latter's convergence properties. From a theoretical standpoint, breaking the premise of reusing the same kind of filtering with a different number of iterations would make underlying optimization trajectories non-continuous and thus make the applicability of our analysis uncertain. The negative effect of using different types of filtering together is demonstrated in the ablation study of Appendix B.

A third and final consideration, which plays a vital role in the generalization abilities of neural networks but has not been addressed so far, is how neural parameters should be initialized before training starts. To avoid shallow minima for deep graph neural networks, Glorot and Bengio (2010) recommend parameter initialization strategies that assign randomized values to dense matrix transformations while preserving the expected variance of inputs, and therefore prevent vanishing or exploding gradients at the first optimization steps. For _relu_ activations, He et al. (2015) show that this property is satisfied at layer \(\ell\) for zero initial biases and dense weights sampled from a normal distribution \(\mathcal{N}\big{(}0,\sigma^{(\ell)}\big{)}\) with zero mean and standard deviation \(\sigma^{(\ell)}=\sqrt{\frac{2}{\text{columns of }W^{(\ell)}}}\). We devote the rest of this subsection to adjusting initialization so that it avoids pitfalls stemming from narrow neural layer widths.

When initialization weights are sampled from a distribution that is symmetric around zero, and which therefore has zero mean, narrow layer widths like ours tend to create a lot of "dying" neurons, where _relu_ does not backpropagate during training due to activations that reside in its non-positive region (Lu et al., 2019). This poses a significant risk to the ability of training our proposed neural architecture; at worst, for some nodes its single output may not contribute to training. This concern is addressed in the literature either by adjusting activation functions to produce small but non-zero derivatives for negative inputs (He et al., 2015), or by employing asymmetric initialization strategies (Lu et al., 2019). We choose the latter type of approach, because our analysis is applied on potentially non-compact domains (e.g., where objectives are not differentiable at certain points); to work on such domains we stick to the universal approximation theory results of Kidger and Lyons (2020), who only present guarantees for _relu_ under non-compactness. Since propagating graph signals with non-negative values yields non-negative posteriors, we elect to use a folded normal distribution for initialization, i.e., one that applies an absolute value on the normal distribution's samples and thus preserves the positive sign of all inputs for the posteriors arising from initialization. Given that we start from a normal distribution with zero mean, the standard deviation of the folded variation (Tsagris et al., 2014) becomes \(\sigma_{fold}=\sigma\sqrt{1-\frac{2}{\pi}}\), where \(\sigma\) is the standard deviation of the zero-mean normal distribution on which absolute value folding is applied. Thus, to preserve the variance of inputs, we initialize weights per: \[W^{(\ell)}=\big{|}W^{(\ell)}_{normal}\big{|}:W^{(\ell)}_{normal}\sim\mathcal{N}\bigg{(} 0,\sqrt{\frac{2}{(1-\frac{2}{\pi})\text{ columns of }W^{(\ell)}}}\bigg{)} \tag{11}\]
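To make the above choices concrete, we sketch the surrogate model's forward pass and initialization in numpy; the truncated-series filter, layer dimensions, and helper names are illustrative assumptions rather than an exact transcription of our implementation:

```python
import numpy as np

def init_weights(shape, rng):
    """Folded-normal initialization of Equation 11: absolute values of
    zero-mean normal samples, rescaled to preserve input variance."""
    sigma = np.sqrt(2.0 / ((1 - 2 / np.pi) * shape[1]))
    return np.abs(rng.normal(0.0, sigma, size=shape))

def surrogate_forward(H0, weights, biases, A_hat, hop_weights):
    """Edit priors with relu layers, then propagate with the SAME filter
    (same hop weights/iterations) that produced the original posteriors."""
    H = H0
    for W, b in zip(weights[:-1], biases[:-1]):
        H = np.maximum(0.0, H @ W + b)        # relu hidden layers
    q = H @ weights[-1] + biases[-1]          # linear output: edited priors
    r, shifted = np.zeros_like(q), q
    for f_n in hop_weights:                   # truncated filter series
        r += f_n * shifted
        shifted = A_hat @ shifted
    return r / np.abs(r).sum()                # L1 post-processing

rng = np.random.default_rng(42)
dims = [3, 5, 5, 5, 1]                        # depth L >= 4 with 5-dim hidden layers
weights = [init_weights((d_in, d_out), rng) for d_in, d_out in zip(dims[:-1], dims[1:])]
biases = [np.zeros(d_out) for d_out in dims[1:]]
```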
### The filter transfer trick

The theoretical groundwork and graph neural network architecture we presented so far are only applicable to filters of symmetrically normalized adjacency matrices. However, other types of normalization also hold practical value. For example, the adjacency matrices of directed graphs are typically asymmetric, whereas personalized pagerank filters often model Markov chains via column-wise normalization. Additionally, objectives involving division with small original posteriors, such as the relative errors in the unbiased utility loss of Equation 4, tend to make low-scored node errors dominate the optimization process. In this subsection, we devise a blanket methodology that addresses both of these issues, which we dub the _filter transfer trick_.
This extends our approach to both asymmetric filtering, i.e., the outcome of filtering asymmetric adjacency matrices, and numerically unstable objectives by devising appropriate twice differentiable transformation functions \(f:\mathbb{R}^{|\mathcal{V}|}\to\mathbb{R}^{|\mathcal{V}|}\) to generate posteriors \(f(r)\) in "problematic" settings from the posteriors \(r\) learned for symmetric graph filtering on numerically stable objectives.

We first express the filter transfer trick for asymmetric filtering. In this setting, we search for posteriors \(r_{ns}=diag(p_{ns})F(\hat{A}_{ns})q\) that locally minimize losses \(\mathcal{L}_{ns}(r_{ns})\) for asymmetrically normalized adjacency matrices \(\hat{A}_{ns}\) and post-processing vectors \(p_{ns}\) with non-zero elements. Thus, to reproduce asymmetric filtering in an optimizable manner, we depend on posteriors \(r=diag(p)F(\hat{A})q\) obtained for symmetrically normalized adjacency matrices \(\hat{A}\) and any post-processing \(p\) with non-zero elements, and compute the transformations \(r_{ns}=f(r)\) for which \(\mathcal{L}_{ns}\big{(}f(r)\big{)}\) is minimized. Given that appropriate transformations exist, it suffices to find \(r\) with our graph neural network methodology to minimize losses \(\mathcal{L}(r)=\mathcal{L}_{ns}\big{(}f(r)\big{)}\).

A barrier in applying the above methodology is that, to avoid shallow minima, transformation functions should also preserve structural information captured by asymmetric filtering. Future research can consider learning the functions, for instance via adversarial techniques, but this requires extensive analysis and experimentation that lie outside the scope of this work. Instead, we conceive functions that, for many common settings, find locally optimal posteriors \(r_{ns}\) of asymmetric adjacency matrix filtering that lie close to original posteriors \(r_{ns0}\) of unedited priors \(q_{0}\), and exhibit the same receptive breadth as original filters, i.e., let nodes be influenced from the same number of hops away. Finding posteriors close to original ones is similar to what we achieved for symmetric filtering, but this time we do not maintain all implicit structural characteristics of asymmetric filtering (the receptive breadth is only one characteristic) and thus run the risk of discovering shallow optima. Nonetheless, we make the assumption that too shallow optima are avoided thanks to different kinds of adjacency normalization following roughly the same structural analysis principles of graph filters. Given the mathematical intractability of working with abstract types of normalization, we only experimentally validate this claim for column-wise normalization.

The transformation functions we study come from the following family: \[f_{\delta}(r)=r_{ns0}\tfrac{\delta+r}{\delta+r_{0}} \tag{12}\] where all operations are applied element-by-element on graph signals, \(r_{ns0}\) are the original asymmetric filter posteriors, and \(r_{0}\) are the original symmetric filter posteriors arising from the unedited priors \(q_{0}\). The parameter \(\delta\geq 0\) trades off numerical stability for larger values vs. fast learning for smaller values. In particular, given that the loss has a bounded derivative (i.e., is Lipschitz continuous), it holds that \(\lim_{\delta\to\infty}\nabla\mathcal{L}(f_{\delta}(r))=\mathbf{0}\). During hyperparameter selection, we search among potential values \(\delta=\delta_{0}\max_{v}r_{0}[v]\) where \(\delta_{0}\in\{0.1,1,10\}\).
These values control optimization robustness (Subsection 6.2) while remaining comparable to the order of magnitude of filtering outcomes, thus providing leeway for derivative-based optimization. In Theorem 6 (proof in Appendix C) we show that, for the class of filters with non-negative coefficients, non-negative prior node values, and adjacency matrix normalization with positive edge weights (e.g., column-wise normalization), the proposed transformation can find locally optimal posteriors for asymmetric filtering based on a dual symmetric problem. Additionally, the theorem defines an equally hard-to-solve symmetric problem, which determines how tight the neural approximation of the symmetric filter should be.

**Theorem 6**: _Let us consider the case where priors and filter parameters are all non-negative, and asymmetric adjacency matrix normalization yields positive edge weights. Equation 12 can express locally optimal asymmetric filtering posteriors, which can be found by applying the graph neural architecture of this section on the equivalent symmetric filter to optimize the loss \(\mathcal{L}(f_{\delta}(r))\) from node features \(H^{(0)}\) extended with a fourth dimension \(H^{(0)}[v,3]=r_{ns0}[v]\) and \(6\) neural layer dimensions. The optimization tightness of the symmetric filter is the same as if we minimized \(\mathcal{L}\big{(}\text{diag}\big{(}\tfrac{p}{\delta+r_{0}}\big{)}F(\hat{\mathcal{A}})q\big{)}\)._

The proof of the above theorem can be generalized to locally optimize posteriors of any filter with a predefined symmetric filter of the same receptive breadth instead of only the symmetric counterpart of the asymmetric filter. Although it is tempting to find local optima this way, when different filters are used to optimize each other's posteriors there is a non-negligible risk of reaching shallow minima (Appendix B), similar to the concerns for unconstrained posterior optimization. A more thorough discussion of this point is reserved for Subsection 6.1. For now, we stress that, when applied to asymmetric filtering, this subsection's approach is a heuristic tailored to filters of the same functional forms. The family of transformations presented in Equation 12 is also applicable on symmetric filtering to guide optimization towards posteriors with the same order of magnitude as the original ones. In this case, the transformed posteriors \(f(r)\) are tied to the same filter form, adjacency matrix \(\hat{A}_{ns}=\hat{A}\), and original posteriors \(r_{ns0}=r_{0}\) as the ones used to compute \(r\). Thus, the transformation collapses to \[f(r)=r_{0}\tfrac{\delta+r}{\delta+r_{0}}\] and maintains near-identical propagation mechanisms to the original filter as \(\delta\to 0\). The main difference becomes that gradients remain proportional to the order of magnitude of \(r_{0}\)'s elements, which prevents optimizers from abruptly inducing large score changes to low-scored nodes; the check below verifies this scaling. For this case, we entertain the same hyperparameter search for values \(\delta\) as before. Furthermore, the transformation function does not inject any new graph signal in the computation of the loss \(\mathcal{L}(f(r))\), which means that no additional columns should be concatenated to the node feature matrix \(H^{(0)}\), i.e., the original columns can be maintained when making symmetric filtering fairness-aware.
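Since the symmetric collapse is linear in \(r\), the gradient scaling can be verified directly; a small check with illustrative numbers of our own:

```python
import numpy as np

# The symmetric collapse f(r) = r0 (delta + r) / (delta + r0) has the
# element-wise derivative df/dr = r0 / (delta + r0), so gradients of
# L(f(r)) are rescaled roughly proportionally to r0 as delta -> 0.
r0 = np.array([0.6, 0.3, 0.1])
delta, eps = 0.01, 1e-6
f = lambda r: r0 * (delta + r) / (delta + r0)
grad_scale = (f(r0 + eps) - f(r0)) / eps  # finite-difference df/dr at r0
assert np.allclose(grad_scale, r0 / (delta + r0), atol=1e-4)
```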
As a final remark, although the filter transfer trick can port our approach to some problematic settings and even control the optimization robustness, the family of transformation functions we employ does not naturally extend to negative priors.

### Hard-coded fairness constraints

When fairness is integrated into the loss component of our architecture, as we previously did, it requires careful hyperparameter exploration to achieve fairness while avoiding getting stuck at unfair yet Pareto optimal posteriors, for example due to the different scales of the prule and utility loss derivatives. However, the exploration can be computationally costly and therefore limit the practical viability of our approach (Subsection 6.4). To address this issue, this subsection presents another application of the filter transfer trick's principles that algorithmically imposes fairness constraints instead of incorporating them in the loss; we rely on posterior transformation functions that make their outputs always meet desired fairness constraints and remove the respective terms from the desired loss. To derive such functions in the context of mitigating disparate impact, which is the focus of this work, we look at the simple methodology for balancing non-negative posterior mass between the protected and non-protected groups \(S\) and \(\mathcal{V}\setminus S\) by upscaling the node scores of the one and downscaling the scores of the other. For a binary sensitive attribute \(s[u]=\{1\text{ if }u\in S,0\text{ otherwise}\}\), the appropriate scaling is achieved with the formula: \[f_{mult}(r)[v]=p_{0}\bigg{(}\frac{\phi\,s[v]}{\sum_{u\in S}r[u]}+\frac{\left(1-\phi\right)\left(1-s[v]\right)}{\sum_{u\not\in S}r[u]}\bigg{)}r[v]\] where \(p_{0}\) is a normalization term that ensures that the same type of scalar multiplication (if any) is imposed on \(f_{mult}(r)\) as was used to procure \(r\). The balancing process hard-codes \(\phi\)-fairness adherence (recall that this is equivalent to specific prule values, i.e., \(prule=1\) for \(\phi=\frac{|S|}{|\mathcal{V}|}\)) in our setting, and is applied element-by-element on posteriors \(r\). It can also be generalized to multi-value sensitive attributes by dividing the scores of each group of nodes sharing the same attribute value by their sum, but we do not experiment with such scenarios in this work. The transformation function \(f_{mult}(r)\) does not inject any new graph signal in the computation of the filter transfer loss \(\mathcal{L}(f_{mult}(r))\), which means that no additional columns should be concatenated to the node feature matrix \(H^{(0)}\). Furthermore, the practice remains compatible with our analysis. Thus, now that a fairness constraint is enforced by applying an appropriate transformation as part of our architecture's output activation, we can go back to minimizing the utility loss--plus any necessary regularization term--with our framework. Devising a fairness-imposing transformation was simple for disparate impact mitigation, but it may be more complicated for other types of fairness encountered in future work. Figure 5 presents the graph neural network that integrates into output activations the filter transfer trick \(f\), the fairness constraint imposition \(f_{mult}\), and L1 normalization of final posteriors to be compatible with the normalization employed by the graph filters we experiment on. Other types of normalization could also be used. The figure captures implementation details that were not visible in the introductory overview of Figure 4.
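Complementing the figure, a minimal numpy sketch of the balancing activation \(f_{mult}\) follows; the function name, the default for \(p_{0}\) (the L1 mass of the input), and the toy numbers are our own illustration.

```python
import numpy as np

def mult_constraint(r, s, phi, p0=None):
    """Hard-code phi-fairness on non-negative posteriors r: the protected
    group (s == 1) ends up holding a phi fraction of the posterior mass and
    the rest holds 1 - phi, mirroring f_mult. p0 restores the same total
    mass to the transformed posteriors (here, the L1 mass of r)."""
    p0 = np.abs(r).sum() if p0 is None else p0
    scale = phi * s / (s * r).sum() + (1 - phi) * (1 - s) / ((1 - s) * r).sum()
    return p0 * scale * r

# For a binary attribute, phi = |S|/|V| yields prule = 1 (average parity).
r = np.array([0.5, 0.3, 0.1, 0.1])
s = np.array([1, 0, 0, 1])
balanced = mult_constraint(r, s, phi=s.mean())
```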
We refer to this system as neural surrogate graph filter fairness (NSGFF).

Figure 5: Detailed view of the proposed NSGFF system.

## 5 Experiments

In this section we assess whether prior editing can mitigate disparate impact while largely preserving the quality of original posteriors in graph mining tasks. First, in Subsection 5.1 we describe benchmark filtering tasks; these comprise different graphs and prior graph signals, and define quantitative assessment of filter posteriors. In Subsection 5.2 we present competing approaches for injecting fairness in graph filters. The methodology we follow to explore their efficacy is presented in Subsection 5.3. Finally, in Subsection 5.4 we run experiments and present results.

### Graph mining tasks

Our exploration spans the two types of downstream graph filtering tasks presented in Subsection 2.2: community member recommendation and graph diffusion. In both tasks, we account for a sensitive graph node attribute, and aim to fully mitigate its disparate impact on filter posteriors by achieving prule \(=1\). The two types of mining tasks are each replicated under three ways of emulating how real-world priors would be constructed (see below), all targeting the same fairness constraint. In total, we investigate \(2\cdot 3=6\) task variations. Community member recommendation experiments are conducted on publicly available graphs that comprise metadata communities (e.g., node classification labels or known overlapping communities) and a binary sensitive attribute to be protected from a business, ethical, or legal standpoint. Our work is also applicable to other types of fairness, multi-value sensitive attributes, and constraints that exclude certain nodes from fairness evaluation. However, as other literature approaches do not always account for such settings, we experiment on disparate impact mitigation of a binary sensitive attribute as a common ground. To assess member recommendation, we randomly use either 10%, 30%, or 50% of known community members to construct binary graph signals, and use the other known members as test data, to which we aim to assign higher posterior scores than to non-community members. This assessment is quantified with AUC. We experiment on the following three graphs:

Citeseer (Sen et al., 2008). A network of scientific publications among six scientific topics that are linked based on citations. We consider the first two of these topics as the communities whose members we aim to rediscover, for instance as if they were new keywords whose entries we needed to populate. In line with many fairness benchmarks (Dong et al., 2022), we protect recommendation results with respect to one of the topics--in our case, the one with the most members--so that they obtain on average equal posteriors to the rest of the recommended nodes.

Highschool (Mastrandrea et al., 2015). A network of high school student interactions. From this dataset, we extract known friendship relations in Facebook, the classes students attend, and student gender (male/female/unknown). We perform community recommendation experiments, in which we aim to recommend classes to students that are structurally proximate to others already attending the class. During this process, we protect female students from being assigned on average lower recommendation scores while striving to maintain the original AUC for male students.

Pokec (Takac and Zabovsky, 2012). A social network dataset of the namesake platform, where user profiles are linked based on friendship relations.
To reduce the running time of experiments, we construct a subgraph with the first 100,000 (out of more than 30 million) edges found in the original dataset. Using these, we aim to recommend users for the cooking and sports communities. At the same time, we consider the binary gender attribute of user profiles (male or female) to be sensitive, in the sense that recommendation scores should achieve average parity between the respective groups of users.

We also experiment with diffusing prior scores through graphs whose nodes have sensitive labels. For this task, we diffuse priors for which a fraction of node values (30%, 50%, or 70%) is non-zero and uniformly sampled from the range \([0,1]\). We aim to mitigate disparate impact when diffusing such priors while in large part minimizing the utility loss compared to original (biased) graph filtering. In addition to the previous graphs involved in community member recommendation experiments, diffusion is also tested on the following two publicly available graphs:

Polblogs. A network of blogs mined in 2005 that form edges based on the hyperlinks between them. We consider the political opinions expressed in each blog to be a sensitive attribute that should not influence data mining outcomes. For example, this concern may arise when trying to cover news stories, where understanding both sides of an opinion hinders the creation of political echo chambers, and lets stakeholders (e.g., journalists) glean a well-rounded understanding by not suppressing a specific type of opinion. Experiments starting from a few nodes replicate a real-world setting of searching for blogs related to a given topic.

Polbooks. A network of political science books that are linked based on whether they have been frequently co-purchased. The books are marked based on the political opinion of left, right, or neutral. To convert these values into a binary sensitive attribute, we consider the protection of left vs non-left political opinions.

We treat the edges of all the above graphs as undirected. The number of nodes and edges, alongside the number of communities we experiment on, the number of nodes in the largest community, and the number of protected group members (i.e., the nodes with the protected sensitive attribute value), are summarized in Table 2.

\begin{table} \begin{tabular}{l r r r r r} **Graph** & **Nodes** & **Edges** & **Communities** & **Largest community** & **Protected** \\ \hline Citeseer & 3,327 & 4,732 & 2 & 668 & 701 \\ Highschool & 156 & 1,437 & 3 & 32 & 85 \\ Pokec & 49,683 & 100,000 & 2 & 411 & 26,395 \\ Polblogs & 1,224 & 19,090 & - & - & 636 \\ Polbooks & 105 & 441 & - & - & 43 \\ \end{tabular} \end{table} Table 2: Characteristics of the graphs on which we experiment.

All experiments for the three graphs with communities run once for each analysed community and each setting, whereas for the final two graphs they run once for each setting. Experiments are seeded and enough in number to make aggregate reports robust; in total, we investigate \((3+2+2\) communities) \(\cdot 6\) variations \(+\)\(2\) graphs \(\cdot 3\) variations \(=\)\(48\) tasks.

### Compared approaches

In this subsection we outline promising existing and new approaches that improve the fairness of base graph filters, such as the ones outlined in the next subsection. All implementations are available as open source.4 A smaller scale ablation study of our proposed NSGFF system's architectural choices is presented in Appendix B.

Footnote 4: [https://github.com/maniospas/pygrank-f](https://github.com/maniospas/pygrank-f)

_None._ Running the base graph filter without injecting any type of fairness awareness. This approach serves as the baseline against which we compute the preservation of original posteriors, i.e., it corresponds to the ideal AUC assessment during community member recommendation and to zero utility loss during diffusion.

_LFPRO (Tsioutsiouliklis et al., 2020)_. Near-optimal redistribution of the rank mass that causes disparate impact.
Unlike this work, which aims to influence posteriors through prior editing, LFPRO directly edits posteriors by redistributing excess scores between protected and non-protected group nodes to improve the prule while keeping the scores non-negative. The original posteriors are mostly preserved by making small incremental changes. To avoid numerical underflows that might prevent this approach from converging in graphs with many nodes, we repeat the gradual redistribution of scores up to a numerical tolerance of \(10^{-12}\).

_LFPRP (Tsioutsiouliklis et al., 2021)_. This approach can only be applied on the personalized pagerank filter with column-wise adjacency matrix normalization. It evolves previous fair random walk heuristics (Rahman et al., 2019) by redistributing excess inter-group score transfer during graph shifts from the protected group to the non-protected group and conversely. The redistribution within each group is weighted proportionally to the original posteriors. The weighting could also be uniform (known as the LFPRU approach (Tsioutsiouliklis et al., 2021)), but this yields worse results across all experiments and we do not report it to make comparisons easier to parse. LFPRP achieves exact \(\phi\)-fairness (Subsection 2.4) for the adjusted node scores \(r-(1-a)q\), where \(r\) are its posteriors and \(q\) the priors. In our experiments we assess the adjusted scores and set the value of \(\phi\) as the equivalent of the target prule \(=1\) described in Subsection 2.4.

_Mult (baseline)._ Applying the post-processing of Subsection 4.5 on base filters to remove the disparate impact of their posteriors. This serves both as a baseline to compare with other approaches and as an ablation study that demonstrates the gains of prior editing using the NSGFF approach presented below.

_FP (Krasanakis et al., 2020)._ Employing heuristic prior editing instead of this work's neural mechanism. The success of this approach hinges on its underlying assumptions being able to model optimization trajectories. We optimize the same utility loss as this work while applying the same filter transfer tricks (otherwise, this approach gets stuck in very shallow minima in some graphs). Best parameters are estimated using _pygrank_'s black-box tuning (Krasanakis et al., 2022), which we set to divide the parameter search range by two on each iteration while keeping other hyperparameters at their defaults.

_NSGFF (this work)._ The neural surrogate graph filter fairness introduced in this work, optimized for the utility loss with ad-hoc regularization parameter \(l_{reg}=\frac{\|q_{0}\|_{1}}{|\mathcal{V}|}\) (this is sufficient for Theorem 5 as long as \(\|q_{0}\|_{1}\geq 2\|q-q_{0}\|\) everywhere on some trajectory from \(q_{0}\) to \(q\)). Following common graph neural network training practices, training employs the Adam optimizer with learning rate \(0.01\) and other hyperparameters at their defaults (Kingma and Ba, 2014), and the number of training epochs is determined by repeating parameter updates until the loss does not decrease for a patience of \(100\) consecutive epochs.
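For clarity, the following is a minimal sketch of the patience-based stopping rule used during training; the function name and the `train_step` interface are our own abstraction of one optimization epoch (e.g., one Adam update with learning rate 0.01).

```python
def train_with_patience(train_step, patience=100, max_epochs=100000):
    """Repeat parameter updates until the loss has not decreased for
    `patience` consecutive epochs. `train_step` is assumed to run one
    training epoch and return the resulting loss value."""
    best_loss, stale = float("inf"), 0
    for _ in range(max_epochs):
        loss = train_step()
        if loss < best_loss:
            best_loss, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_loss
```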
The number of neural layers \(L\) and the filter transfer trick hyperparameter \(\delta\) are tuned each time training takes place. To limit running time, we make this exploration coarser by searching among all combinations of \(L\in\{3,\ldots,9\}\) and \(\delta_{0}\in\{0.1,1,10\}\) with shallow training that has a patience of \(5\) epochs, a proportionally sped-up learning rate of \(0.1\), and an upper limit of \(50\) epochs.

### Evaluation methodology

We now detail the methodology of assessing competing approaches. This consists of going through all \(48\) filtering tasks outlined in Subsection 5.1 and combining these with the \(8\) base graph filters presented below and the \(6\) compared approaches of Subsection 5.2 that mitigate the disparate impact of base filters. Since LFPRP can only be applied on personalized pagerank with asymmetric adjacency matrix normalization, experiments create a total of \(48\) tasks \(\cdot\) (\(6\) filters \(\cdot\)\(5\) approaches \(+\)\(2\) filters \(\cdot\)\(6\) approaches) \(=2,016\) posterior signals. Each of these is assessed in terms of prule and, depending on the task, AUC or utility loss. To simplify the presentation of results, we report an average of measure assessments for compared approaches per base filter and disparate impact mitigation approach. The average is weighted so that each graph is equally represented. We remind that the goal is not to find the best base filters, but to find the best disparate impact mitigation approach. Experiments are conducted on the two popular families of graph filters outlined in Subsection 2.1: personalized pagerank and heat kernels. We denote these as PPR{a}{norm} and HK{t}{norm} and create variations with propagation parameters \(a=\{0.85,0.9\}\) and \(t=\{1,3\}\) and either symmetric or column-wise adjacency matrix normalization \(norm\in\{\)Sym, Col\(\}\). Depending on the choice of filter type, propagation parameter, and adjacency matrix normalization, we study \(2\cdot 2\cdot 2=8\) base filters. All filters are implemented to account for up to their 20th polynomial term (this is their maximal receptive breadth).

### Experiment results

Community member recommendation and graph diffusion experiments are summarized in Tables 3 and 4 respectively. First, we assert that original posteriors, i.e., the ones procured by filtering with no fairness awareness, are imbalanced between the protected group and the rest. This is indeed the case for community member recommendation, as indicated by the inadmissible (less than 80%) prule of base graph filters in the second column. Disparate impact is not as prominent in diffusion experiments, where the unbiased randomization of priors lets base filters exhibit high prule by themselves. But prule is still not maximal, which indicates that there exist some, though not pervasive, graph structure biases that prevent perfect transfer of the priors' statistical parity to posteriors. These observations verify the existence of prior and structural biases, and that the former are more invasive. Hence, fairness-inducing interventions are needed to combat disparate impact. Looking at the results, our proposed NSGFF approach achieves maximal prule while also better preserving the community member recommendation AUC than other approaches and inducing small utility loss. Furthermore, compared to LFPRP, which is the second best-performing approach in terms of AUC, our system applies to a variety of filters.
It also presents significant improvements compared to the heuristic prior editing of FP--which serves as motivation for this work. Notably, the Mult baseline is a viable competitor against the previous state-of-the-art. In fact, for diffusion tasks, where small reweighting suffices to fully mitigate disparate impact, it yields the smallest utility loss for asymmetric filtering compared to all other approaches. This baseline runs the risk of being less successful in debiasing diffusion with high disparate impact (Appendix B), but its usage should hereby be considered in practical applications (Subsection 6.4) and in baselines of future work, especially since it preserves a similar portion of AUC compared to LFPRP. Nonetheless, NSGFF remains by far the best approach for all symmetric filtering and some asymmetric filtering tasks.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} & \multicolumn{2}{c}{None} & \multicolumn{2}{c}{LFPRO} & \multicolumn{2}{c}{LFPRP} & \multicolumn{2}{c}{Mult} & \multicolumn{2}{c}{FP} & \multicolumn{2}{c}{NSGFF} \\ \cline{2-13} & AUC & prule & AUC & prule & AUC & prule & AUC & prule & AUC & prule & AUC & prule \\ PPR0.85Col & 0.759 & 0.603 & 0.703 & 0.842 & 0.744 & 1.000 & 0.742 & 1.000 & **0.752** & 1.000 \\ PPR0.90Col & 0.756 & 0.625 & 0.690 & 0.864 & 0.742 & 1.000 & 0.740 & 1.000 & 0.739 & 1.000 & **0.751** & 1.000 \\ HK1Col & 0.765 & 0.479 & 0.652 & 0.776 & - & - & 0.750 & 1.000 & 0.750 & 1.000 & **0.766** & 1.000 \\ HK3Col & 0.763 & 0.577 & 0.720 & 0.834 & - & - & 0.749 & 1.000 & 0.749 & 1.000 & **0.762** & 1.000 \\ PPR0.85Sym & 0.774 & 0.588 & 0.722 & 0.856 & - & - & 0.750 & 1.000 & 0.755 & 1.000 & **0.766** & 1.000 \\ PPR0.90Sym & 0.773 & 0.619 & 0.730 & 0.884 & - & - & 0.749 & 1.000 & 0.750 & 1.000 & **0.763** & 1.000 \\ HK1Sym & 0.774 & 0.445 & 0.664 & 0.763 & - & - & 0.753 & 1.000 & 0.761 & 1.000 & **0.766** & 1.000 \\ HK3Sym & 0.774 & 0.555 & 0.737 & 0.834 & - & - & 0.755 & 1.000 & 0.758 & 1.000 & **0.766** & 1.000 \\ \end{tabular} \end{table} Table 3: Community member recommendation. Best fairness-aware AUC is bolded.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} & \multicolumn{2}{c}{None} & \multicolumn{2}{c}{LFPRO} & \multicolumn{2}{c}{LFPRP} & \multicolumn{2}{c}{Mult} & \multicolumn{2}{c}{FP} & \multicolumn{2}{c}{NSGFF} \\ \cline{2-13} & \(\mathcal{L}_{util}\) & prule & \(\mathcal{L}_{util}\) & prule & \(\mathcal{L}_{util}\) & prule & \(\mathcal{L}_{util}\) & prule & \(\mathcal{L}_{util}\) & prule & \(\mathcal{L}_{util}\) & prule \\ PPR0.85Col & 0.000 & 0.916 & 0.111 & 0.999 & 0.174 & 1.000 & **0.041** & 1.000 & 0.283 & 1.000 & 0.084 & 1.000 \\ PPR0.90Col & 0.000 & 0.923 & 0.094 & 0.999 & 0.146 & 1.000 & **0.038** & 1.000 & 0.215 & 1.000 & 0.073 & 1.000 \\ HK1Col & 0.000 & 0.907 & 152.596 & 0.993 & - & - & **0.047** & 1.000 & **0.047** & 1.000 & 0.173 & 1.000 \\ HK3Col & 0.000 & 0.897 & 0.258 & 0.998 & - & - & **0.053** & 1.000 & 0.378 & 1.000 & 0.100 & 1.000 \\ PPR0.85Sym & 0.000 & 0.934 & 0.051 & 0.999 & - & - & 0.032 & 1.000 & 0.032 & 1.000 & **0.024** & 1.000 \\ PPR0.90Sym & 0.000 & 0.934 & 0.051 & 0.999 & - & - & 0.032 & 1.000 & 0.032 & 1.000 & **0.026** & 1.000 \\ HK1Sym & 0.000 & 0.934 & 73.384 & 0.997 & - & - & 0.032 & 1.000 & 0.032 & 1.000 & **0.019** & 1.000 \\ HK3Sym & 0.000 & 0.926 & 0.128 & 0.999 & - & - & 0.037 & 1.000 & 0.037 & 1.000 & **0.027** & 1.000 \\ \end{tabular} \end{table} Table 4: Node value diffusion. Best fairness-aware utility loss is bolded.

## 6 Discussion

In this section we list theoretical and practical properties to keep in mind when deploying our approach. We follow four threads. First, in Subsection 6.1 we describe theoretical limits--mostly for graphs with too few nodes--on finding deep local optima for arbitrary objectives. Second, in Subsection 6.2 we delve into the role of NSGFF's filter transfer trick in controlling posterior preservation and the robustness of local optima. Third, in Subsection 6.3 we point to objectives other than posterior preservation. Fourth, in Subsection 6.4 we analyse the scalability of our approach and how this affects practical usage.

### Applicability criteria

Going back to the generic prior editing framework of Section 3, when the desired posterior loss is not quadratic (i.e., when the Hessian at one point cannot form a good approximation of derivatives everywhere and consequently forms a large error \(\epsilon_{\mathbb{H}}\)), the radius of the domain from which ideal posteriors can attract original ones, as described in Theorem 5 for adequate regularization, becomes dominated by the term: \[radius=0.5|\mathcal{V}|\frac{\lambda_{1}\min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\] We call this quantity the _optimization horizon radius_, as it indicates an L2 distance between ideal (maybe locally if not globally optimal) posteriors and the ones produced by filters that, if not exceeded, always lets our methodology discover the former from the latter. That is, any posteriors within hyperspheres of such radii around local optima will be attracted towards the ideal centers.
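As a quick aid for practitioners, a minimal helper that evaluates this radius is sketched below; we treat \(\lambda_{1}\) and \(\lambda_{\max}\) as given spectral quantities of the filter under the conventions of Theorem 5, and the function name is our own.

```python
import numpy as np

def optimization_horizon_radius(p, lambda1, lambda_max):
    """Evaluate 0.5 |V| (lambda1 min_v p[v]) / (lambda_max max_v p[v])
    for a post-processing vector p with positive elements and the filter
    spectral quantities lambda1, lambda_max referenced in Theorem 5."""
    p = np.asarray(p, dtype=float)
    return 0.5 * len(p) * lambda1 * p.min() / (lambda_max * p.max())
```

Each such radius delimits a hypersphere of attraction around a locally optimal point.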
If such hyperspheres are overlapping for multiple optima, it is uncertain which of those our approach will select, whereas for distances greater than the one indicated by the radii, it is unclear whether our approach will approximate optimal posteriors at all. An implicit assumption that drives the applicability of our posterior optimization framework--and by extension the derived NSGFF system designed to mitigate disparate impact--is that original posteriors are close to the desired ones; large enough optimization horizon radii allow greater edits to be applied on posteriors while searching for optimal points without picking up trajectories that miss the latter. Sometimes, appropriate and potentially deeper edits can also be found for smaller radii (Subsection 6.2). Overall, radius values depend on the spectral characteristics of filters, the post-processing involved, and the number of graph nodes. When these create concerns over the applicability of our approach, we suggest validating the effectiveness of our framework instead of blindly applying it, for instance by following settings similar to our experimental methodology to assert that unsupervised losses are smaller than those of alternatives. Our analysis primarily targets connected graphs; for non-connected ones, the number of nodes when computing the radius should be replaced by the smallest number of nodes among connected components. In addition to understanding limitations for graphs with too few nodes, the above intuition can also explain why directly optimizing posteriors, for instance with LFPRO in experiments or NN in the ablation study of Appendix B, is fundamentally flawed, as first theorized in the beginning of Section 3. This mechanism can be understood as employing the identity filter \(F(\hat{A})=I\) to transform some priors that are also posteriors of a different (the base) graph filter. When all elements of the post-processing vector \(p\) are equal (e.g., when L1 normalization is applied to posteriors), such approaches exhibit exceptionally large optimization horizon radii \(0.5|\mathcal{V}|\), which are likely to encompass locally optimal posterior adjustments.
But, at the same time, simple optimization does not respect any propagation mechanism, which means that many of the competing local optima candidates will not preserve the relations between adjusted posteriors that would have been imposed by the graph structure, i.e., many of those minima are shallow.

### Using the filter transfer trick to control robustness

Armed with the concept of the optimization horizon radius, we finalize the analysis of the filter transfer trick introduced in Subsection 4.4 and examine the impact of different values of the hyperparameter \(\delta=\delta_{0}\max_{v}r_{0}[v]\) on the radii attracting posteriors to (asymmetrically filtered or numerically robust) locally optimal points \(r_{ns}\). To this end, we use Theorem 6's equivalent problem to find an optimization horizon radius and transcribe that back to the space of outputted surrogate posteriors: \[radius=0.5|\mathcal{V}|\frac{\lambda_{1}\min_{v}\frac{p[v]}{\delta+r_{0}[v]}}{\lambda_{\max}\max_{v}\frac{p[v]}{\delta+r_{0}[v]}}\frac{1}{\delta+\max_{v}r_{0}[v]}\geq 0.5\frac{\delta_{0}}{(\delta_{0}+1)^{2}\max_{v}r_{0}[v]}|\mathcal{V}|\frac{\lambda_{1}\min_{v}p[v]}{\lambda_{\max}\max_{v}p[v]}\] \[\geq\frac{0.5\delta_{0}}{(\delta_{0}+1)^{2}}|\mathcal{V}|^{2}\frac{\lambda_{1}}{\lambda_{\max}}\text{ for L1 normalization of }r_{0}\] Given that the inequalities are inclusive (they can be equalities given appropriate conditions), we consider the last expression to be the optimization horizon radius when the filter transfer trick is employed. The radius grows quadratically with the number of graph nodes, which means that local minima are likely to be found even for graphs with few nodes, though this can be deceptive in that the same radius without the square is achieved when normalization snaps the maximum posterior to one. The radius for the underlying symmetric filtering task is always \(\frac{\delta_{0}}{\delta_{0}+1}\) that of normal symmetric filtering (this holds true for any normalization that multiplies score vectors with scalars). A corollary of this analysis is that the filter transfer trick's hyperparameter \(\delta_{0}\) controls how close local optima can be to ideal posteriors; small values encompass only local optima very close to posteriors, though they may also fail to find any. Given that our NSGFF approach hard-codes fairness constraints and is only interested in minimizing the utility loss, sufficiently small \(\delta_{0}\) values that encompass at least one local minimum effectively serve as mechanisms that reject larger yet locally optimal utility losses. Effectively, they shrink the hyperspheres around shallow local minima and thus prevent them from encompassing the original posteriors. For example, for symmetric filtering, the explored values \(\delta_{0}\in\{0.1,1,10\}\) shrink optimization horizon radii to \(0.091,0.5,0.909\) of those that would occur without the filter transfer trick, i.e., they range from only favoring posteriors very close to the original ones to maintaining almost the same radii. Similarly, we explore the effect of Subsection 4.5's methodology, which hard-codes fairness constraints into our system.
Given that all optimization horizon radius computations are multiplied with the term \(\frac{\min_{v}p[v]}{\max_{v}p[v]}\), where \(p\) is the post-processing vector, applying the disparate impact mitigation constraint effectively multiplies radii with the term: \[\inf_{r\in\mathcal{R}}\min\left\{\frac{1-\phi_{*}(r)}{\phi_{*}(r)}\frac{\phi}{1-\phi},\frac{\phi_{*}(r)}{1-\phi_{*}(r)}\frac{1-\phi}{\phi}\right\}=\inf_{r\in\mathcal{R}}\frac{\min\{prule_{*}(r),prule\}}{\max\{prule_{*}(r),prule\}}\] where in this case \(\mathcal{R}\) denotes the optimization domain or trajectory, \(\phi\) and \(prule\) are the ideal values of the respective disparate impact quantifications we impose on our system, and \(\phi_{*}(r)\) and \(prule_{*}(r)\) compute these values for the filter's posteriors within the forward pipeline, i.e., before applying the constraint. The last two quantities cannot be analytically derived but, when learning posteriors similar to the original ones as we do, we expect them to be similar to the corresponding computations of the original filter \(\phi_{*}(r_{0})\), \(prule_{*}(r_{0})\). Thus, when imposing the constraint \(prule=1\), as we did to fully mitigate disparate impact, the optimization horizon radius is approximately multiplied with \(prule_{*}(r_{0})\) computed for base filters (this corresponds to the None approach in experiments). This analysis reveals that it may be challenging to debias graph filtering whose original prule is very close to zero, i.e., whose disparate impact is severe.

### Objectives other than posterior preservation

In experiments, our surrogate graph neural network model succeeds in tightly approximating original posteriors (as indicated by the small utility loss) while mitigating the disparate impact of filter scores, and should therefore be preferred for this objective. Small utility losses in general correspond to high community recommendation quality too. However, following posteriors less tightly could produce other business benefits. Characteristically, fairness sometimes improves AUC in community recommendation experiments, as seen from its higher average value for NSGFF compared to vanilla HK1Col filtering. In such cases, by merit of being more inclusive, fairness-aware graph filtering identifies more representative communities. Indeed, like in the real world, the random prior selection methodology used in experiments has a chance of excluding highly relevant areas of the graph by not exhibiting non-zero priors near them. Consequently, inclusivity during filtering lets it account for relevant faraway areas in the recommendations. This argument has previously been voiced by Stoica et al. (2020), although in our case we infer the more diverse prior node values instead of sampling them from the real world. Thus, there could be interest in approaches that follow posteriors less tightly in favor of other objectives. Future works can accommodate such criteria, for instance via supervised losses instead of utility losses, or training schemes that progressively move away from the original posteriors while maintaining local optimality. Such research could extend our local universal approximation results to families of graph neural networks for node classification, as described in Appendix A. In addition to other objective/loss components, our theoretical framework is made universally applicable to twice differentiable objectives only by enriching it with a sufficiently large L1 regularization term.
In experiments, we obtained near-identical results when omitting this term, which is to be expected given how small its penalties become relative to other loss components for graphs with hundreds or more nodes.

### Scalability limitations

Before closing this work, we point out that the computation cost of running our approach can be prohibitive for large graphs. Training the graph neural network architecture typically requires up to a few thousand filter runs, and the hyperparameter search for its ideal depth repeats several training procedures. These costs are no greater than what would be expected of other graph neural networks in the literature addressing different tasks. Furthermore, they scale linearly with the cost of running one graph filter and can be sped up with distributed or parallel computing. At the same time, the added societal value of making graph filtering fairness-aware can be worth the filtering delay or infrastructure cost. On the other hand, compared to the small cost of running simple filters, our approach is not well-suited to processing large graphs with millions of nodes, as it requires thousands of filter reruns. For example, the running time of our system's implementation (including on-the-fly training and hyperparameter exploration) for the Citeseer graph on a 2.6GHz system with DDR3 RAM is several minutes, whereas a single filter runs in less than 20 milliseconds. Given that we go beyond running individual tasks and conduct large-scale benchmarking, our investigation is limited to graphs with fewer (thousands instead of millions of) nodes and to a lightweight hyperparameter search. Still, given enough resources, we expect similar or better results to be obtained even for graphs with more nodes, whose optimization horizon radii are larger. Sparse matrix multiplications needed by graph shifts do not exhibit parallelization gains in current GPU framework implementations, which is partly to blame for the lack of speed. We expect future breakthroughs in related technologies to help better scale our approach. For the time being, and when processing graphs with millions of edges, there is still merit to devising heuristics, like LFPRP or even the Mult baseline, that may find solutions shallower than but comparable to ours. Given the increasing fairness concerns of applying artificial intelligence in many aspects of the real world, even naive alternatives to our approach are preferable to taking no measures against disparate impact when that is a concern.

## 7 Conclusions and Future Work

In this work we explored the concept of editing graph signal priors to locally optimize fairness-aware graph filter posterior objectives. To this end, we introduced a framework for filter-aware universal approximation of local posterior objective minima. In this framework, we set fairness-aware objectives, and implemented the resulting system via a graph neural network architecture to be used as a surrogate of ideal graph filtering. The architecture conducts new optimization and hyperparameter search at runtime for new priors and filters. We also designed output activations that, when posteriors are non-negative, ensure usability even under non-symmetric adjacency matrix normalization and numerically unstable objectives. We finally experimented on real-world graphs with various graph filters and signal priors, and used our approach to mitigate disparate impact while producing higher quality posterior scores than competing approaches, in terms of either AUC or an unbiased utility loss.
Promising applications of this work could involve designing more fair or powerful decoupled graph neural network architectures, where feature extraction and propagation with graph filters are performed separately. Future research can also improve our approach via more informed (e.g., learned) filter transfer tricks for non-symmetric filtering, as well as algorithmic speed-ups to training and hyperparameter exploration. Finally, we recognize the scientific validity of devising evaluation objectives for graph filtering that are not affected by the biases being mitigated, as we did for our selected utility loss, and using them to assess existing approaches.

## Acknowledgments

This work received funding from the European Union under contract number GA-101070285 MAMMOth.
2303.01826
TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents
Autonomous mobile agents require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer low power/energy processing due to their sparse computations and efficient online learning with bio-inspired learning mechanisms for adapting to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for only inference, thereby hindering the SNNs from achieving further energy efficiency gains in both training and inference. These techniques also restrict the SNNs from performing efficient online learning at run time. Toward this, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference, while keeping its accuracy close to the accuracy of SNNs without timestep reduction. The ideas of TopSpark include: analyzing the impact of different timesteps on the accuracy; identifying neuron parameters that have a significant impact on accuracy in different timesteps; employing parameter enhancements that make SNNs effectively perform learning and inference using less spiking activity; and developing a strategy to trade-off accuracy, latency, and energy to meet the design requirements. The results show that TopSpark saves the SNN latency by 3.9x as well as energy consumption by 3.5x (training) and 3.3x (inference) on average, across different network sizes, learning rules, and workloads, while maintaining the accuracy within 2% of SNNs without timestep reduction.
Rachmad Vidya Wicaksana Putra, Muhammad Shafique
2023-03-03T10:20:45Z
http://arxiv.org/abs/2303.01826v2
# TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents

###### Abstract

Autonomous mobile agents (e.g., mobile ground robots and UAVs) typically require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks (e.g., object recognition) while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer low power/energy processing due to their sparse computations and efficient online learning with bio-inspired learning mechanisms for adapting to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (i.e., timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for only the inference phase, thereby hindering the SNN systems from achieving further energy efficiency gains in both the training and inference phases. These techniques also restrict the SNN systems from performing efficient online learning at run time. Toward this, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both the training and inference phases, while keeping its accuracy close to the accuracy of SNNs without timestep reduction. The key ideas of our TopSpark include: (1) analyzing the impact of different timestep settings on the accuracy; (2) identifying neuron parameters that have a significant impact on accuracy in different timesteps; (3) employing parameter enhancements that make SNNs effectively perform learning and inference using less spiking activity due to reduced timesteps; and (4) developing a strategy to trade-off accuracy, latency, and energy to meet the design requirements. The experimental results show that our TopSpark saves the SNN latency by 3.9x as well as energy consumption by 3.5x for training and 3.3x for inference on average, across different network sizes, learning rules, and workloads, while maintaining the accuracy within 2% of that of SNNs without timestep reduction. In this manner, TopSpark enables low-power/energy-efficient SNN processing for autonomous mobile agents.

## I Introduction

Autonomous mobile agents (e.g., UGVs and UAVs) usually require low-power ML algorithms to complete their ML-based tasks (e.g., object recognition through images/videos), since these agents are typically powered by batteries [1]; see Fig. 1(a). Furthermore, these mobile robots also need to continuously adapt to different operational environments (so-called _dynamic environments_), since the offline-trained knowledge is typically learnt from limited samples and may be obsolete at run time, hence leading to accuracy degradation; see Fig. 1(a). These requirements can be fulfilled by Spiking Neural Networks (SNNs) because of the following two reasons. First, advancements in neuromorphic computing have led SNNs to achieve ultra-low power/energy processing and high accuracy by leveraging their bio-inspired spike-based operations [2, 3, 4]. Second, SNNs employ bio-inspired learning rules locally in each synapse, such as Spike-Timing-Dependent Plasticity (STDP), which are suitable for efficient online learning (i.e., training at run time for updating the knowledge of systems using _unlabeled data_1), thereby adapting to dynamic environments [5, 6, 7].
Furthermore, SNNs are also expected to provide high accuracy that meets the applications' requirements. To achieve higher accuracy, larger-sized SNN models are usually employed as they have a larger number of neurons to recognize more input features than the smaller-sized models; see Fig. 1(b). For instance, the work of [8] showed that a network with 3600 excitatory neurons achieves \(\sim\)90% accuracy for the MNIST dataset, while a network with 100 excitatory neurons only achieves \(\sim\)75%. However, SNN hardware platforms for embedded applications (e.g., mobile robots) typically have limited memory and compute capabilities [9], thereby making it challenging to achieve both high energy efficiency and high accuracy for SNNs at the same time. To address this, previous works have explored different optimization methodologies [8, 10, 11, 12, 13]. However, _most of them have not optimized the computational time of a neuron for processing input spikes (i.e., timestep)_, which also has the potential to substantially reduce the energy consumption of SNN processing, while preserving the excitatory neurons to recognize diverse features.

Footnote 1: Unlabeled data from environments can be leveraged for efficient SNN training at run time using STDP-based unsupervised learning.

**Targeted Research Problem:**_How can we improve the energy efficiency of SNN processing in both the training and inference phases through timestep optimizations, while keeping the accuracy high? An efficient solution to this problem will enable low-latency and energy-efficient autonomous mobile agents/robots that can adapt to different environments._

### _State-of-the-Art and Their Limitations_

State-of-the-art works have employed different timestep reduction techniques to optimize the energy consumption of SNN inference, while achieving acceptable accuracy [14, 15, 16, 17].

Fig. 1: (a) Autonomous mobile agents typically require low-power processing and capabilities for adapting to different operational environments to complete ML-based tasks. (b) SNN architecture that supports STDP-based unsupervised learning, i.e., a fully-connected network. A larger SNN model has a higher number of excitatory neurons to recognize more features than a smaller one.

For instance, the work of [14] trained binary-weighted SNNs and then studied the impact of different timesteps on accuracy.
This solution will enable efficient SNN training and inference with smaller timesteps (i.e., lower latency) and lower energy consumption. Moreover, this efficient training can also be leveraged at run time to enable efficient online learning mechanisms. To highlight the potential of timestep reduction in SNN processing, we present an experimental case study in the following Section I-B. ### _Motivational Case Study and Key Challenges_ We study the accuracy, latency, and energy consumption profiles of an SNN model considering different timesteps. To do this, we perform experiments employing a fully-connected SNN with 400 neurons, rate coding, and pair-wise STDP in unsupervised learning settings [10]. For each timestep setting, we perform training and inference using the MNIST dataset2. Detailed information on the experimental setup will be discussed in Section IV. The experimental results are shown in Fig. 2, from which we draw the following **key observations**. Footnote 2: The MNIST dataset is commonly used in SNN community for evaluating SNNs with unsupervised learning settings [2][8][10]. * Accuracy scores are typically proportional to timestep settings due to the spiking activity, i.e., smaller timestep leads to lower accuracy, and larger timestep leads to higher accuracy; see Fig. 2(a). * Accuracy profile of an SNN with timestep reduction may have two regions, compared to the baseline accuracy (i.e., an SNN without timestep reduction): (1) a region with comparable accuracy, and (2) a region with notable accuracy degradation; see Fig. 2(a). * Timestep reduction can effectively save the latency and energy consumption of SNN processing in both the training and inference phases due to reduced neuron and learning operations; see Fig. 2(b). _Although reducing timesteps can effectively curtail the latency and energy consumption of SNN processing, aggressive reductions in the timestep may significantly decrease the accuracy_, thereby limiting the applicability of timestep reduction techniques for improving the efficiency gains of SNNs. **Research Challenges:** Our experiments and observations expose several design challenges that should be solved to address the targeted problem, as discussed in the following. * _The solution should maximally reduce the timestep_ in the training and the inference to significantly optimize the computation latency and energy consumption of SNN processing, but without noticeably degrading the accuracy. * _The solution should effectively learn from reduced spiking activity_ to maintain the learning quality as compared to the original SNNs (i.e., without timestep reduction). * _The optimization techniques should incur minimum overheads (e.g., energy)_ to accommodate efficient online learning and better battery life management. ### _Our Novel Contributions_ To address the research challenges, we propose _TopSpark_, _a novel Timestep optimization methodology for energy efficient Spiking neural networks in training and inference on autonomous mobile agents_. To the best of our knowledge, our TopSpark is the first work that optimizes timesteps of SNNs in both the training and inference phases. Following are the novel steps performed in the TopSpark methodology (see the overview in Fig. 3). 1. **Analyzing the accuracy under different timesteps** to investigate the impact of timestep reduction in the SNN training and inference on the accuracy profiles. 2. 
2. **Analyzing the impact of neuron parameters** to investigate the role of neuron parameters and the influence of their values on the accuracy profiles under different timesteps.
3. **Devising parameter enhancement rules to maintain accuracy** by leveraging the knowledge from the previous analysis steps, thereby providing effective yet simple solutions with minimum overheads.
4. **Devising a strategy to trade off accuracy, latency, and energy** to meet the constraints (i.e., target accuracy, target latency, and energy budget), hence enabling better battery life management.

**Key Results:** We evaluate the TopSpark methodology using Python simulations [19] on GPU machines to get the accuracy profiles, and leverage the computation time and power to estimate the latency and energy consumption of SNN training and inference. Experimental results with different workloads and STDP-based learning rules show that our TopSpark saves the latency by 3.9x and energy consumption by 3.5x for training and 3.3x for inference on average, while keeping the accuracy close to that of SNNs without timestep reduction.

## II Preliminaries of SNNs

Spiking neural networks (SNNs) are the class of neural networks that employs bio-plausible computation models based on action potentials (i.e., spikes) [2].

Fig. 2: Experimental results considering an SNN with 400 excitatory neurons with the fully-connected architecture in Fig. 1(b), rate coding, and pair-based STDP [10] under different timesteps in training and inference: (a) accuracy profiles; (b) latency and energy consumption normalized to timestep 350.

An SNN model has a specific neuron model, network architecture, learning rule, and spike coding [8]. The _neuron model_ defines how a neuron operates, updates its internal dynamics (e.g., membrane potential), and generates output spikes over time. The operational time of a neuron to process a sequence of input spikes from a single input (e.g., an image pixel) is defined as the _timestep_[17]. Here, we consider the Leaky Integrate-and-Fire (LIF) neuron model since it is widely used in the SNN community thanks to its simple yet highly bio-plausible operations [8]; see an overview in Fig. 4. The LIF neuron updates its membrane potential (\(V_{mem}\)) each timestep. \(V_{mem}\) is increased each time an input spike comes and is otherwise decreased. If \(V_{mem}\) reaches a defined threshold voltage (\(V_{th}\)), the neuron generates an output spike. Then, \(V_{mem}\) goes to the reset potential (\(V_{reset}\)) and the neuron cannot generate spikes during a defined refractory period (\(T_{ref}\)). The _network architecture_ determines the connections among neurons and synapses, as well as inputs and outputs. In this work, we consider the fully-connected network in Fig. 1(b) since it supports unsupervised learning, which is required for enabling efficient online learning mechanisms. In such a network, each input (e.g., an image pixel) is connected to all excitatory neurons. Each excitatory neuron is expected to recognize a specific class. Hence, the connecting synapses are trained to learn the corresponding features. For the _learning rules_, we consider the bio-plausible STDP rules (i.e., pair-based weight-dependent STDP [10] and adaptive learning rate STDP [8]) since they support unsupervised learning scenarios to train the synaptic weights using unlabeled data from operational environments, thus enabling efficient online learning mechanisms [5][12].
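To illustrate these dynamics, the following is a minimal Python sketch of a single LIF neuron; the parameter values are our own illustrative choices rather than the paper's, and the input spike train is drawn with the Poisson-based rate coding formalized in Eq. 1 below.

```python
import numpy as np

def lif_neuron(in_spikes, v_th=1.0, v_reset=0.0, leak=0.05, t_ref=2):
    """Simulate one LIF neuron over a binary spike train (one entry per
    timestep): the membrane potential leaks, integrates incoming spikes,
    fires when it reaches v_th, then resets and stays unresponsive for
    t_ref timesteps."""
    v, refractory, out_spikes = v_reset, 0, []
    for spike in in_spikes:
        if refractory > 0:                  # neuron is unresponsive
            refractory -= 1
            out_spikes.append(0)
            continue
        v = max(v_reset, v - leak) + spike  # leak, then integrate the input
        if v >= v_th:                       # fire, reset, enter refractory
            out_spikes.append(1)
            v, refractory = v_reset, t_ref
        else:
            out_spikes.append(0)
    return out_spikes

# Rate-code one input pixel into a Poisson spike train and feed the neuron;
# higher pixel intensity yields more input spikes and thus more output spikes.
rng = np.random.default_rng(0)
pixel_intensity = 0.4  # normalized to [0, 1]
in_spikes = rng.poisson(pixel_intensity, size=100).clip(0, 1)
out_spikes = lif_neuron(in_spikes)
```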
To perform SNN processing, the input data is converted into a sequence of spikes (i.e., a spike train) using a specific _neural/spike coding_. Here, we consider the rate coding as it can be coupled with different STDP-based learning rules for achieving high accuracy [10]. This rate coding uses a Poisson distribution to generate the spike train of each input (e.g., a pixel), whose probability mass function (\(P_{pmf}\)) is given by Eq. 1, where \(\lambda\) denotes the rate parameter, \(k\) the number of occurrences, and \(e\) Euler's number. \[P_{pmf}=\lambda^{k}\cdot\frac{e^{-\lambda}}{k!} \tag{1}\] To avoid the domination of some excitatory neurons in the training phase, the adaptation potential (\(\theta\)) is employed [10]. Here, \(V_{th}\) is increased by \(\theta\) each time the corresponding neuron generates a spike that triggers the learning process for a specific input feature/pattern, thereby making it harder to stimulate the same neuron to generate spikes for other input patterns; see Fig. 4. In this manner, a certain neuron is expected to produce spikes only when stimulated with a specific pattern (i.e., recognizing a specific class), and other neurons can produce spikes when stimulated with other input patterns (different classes), thereby maximizing the accuracy.

## III TopSpark Methodology

Our TopSpark methodology employs several key novel steps, as shown in the overview in Fig. 5. Detailed descriptions of these steps are provided in the subsequent sections.

### _Analysis of Accuracy Profiles in Different Timesteps_

We first investigate and analyze the impact of timestep reduction in training and inference on accuracy. This study aims at understanding the characteristics of the accuracy profiles considering a given SNN model, dataset, and timestep. To do this, we perform experimental studies with a 400-neuron SNN while considering different timesteps, datasets (i.e., MNIST and Fashion MNIST), and learning rules, i.e., a pair-based weight-dependent STDP (STDP1) [10] and an adaptive learning rate-based STDP (STDP2) [8]3. The experimental results are shown in Fig. 6, from which we draw the following key observations.

Footnote 3: Detailed information on the experimental setup is provided in Section IV.

* **Trends of the accuracy:** In general, different models with different combinations of timesteps, learning rules, and datasets have similar trends, i.e., the accuracy profiles are proportional to the timestep, as a larger timestep leads to higher accuracy while a smaller one leads to lower accuracy. It indicates that accuracy degradation in different SNN models may be solved using a similar approach, which is beneficial for developing a simple yet effective solution.
* **Advantages:** Most of the timesteps lead to accuracy scores that are comparable to the baseline accuracy (i.e., SNNs without timestep reduction) despite employing different SNN models with different learning rules and datasets; see label-1. It shows that the timestep may be reduced significantly without facing noticeable accuracy degradation as compared to the baseline.
* **Potentials of small timesteps:** The low accuracy in small timesteps (shown by label-2) should be improved so that these timesteps can be used at run time for offering good trade-offs among accuracy, latency, and energy consumption.
## III TopSpark Methodology

Our TopSpark methodology employs several key novel steps, as shown in the overview in Fig. 5. Detailed descriptions of these steps are provided in the subsequent sections.

### _Analysis of Accuracy Profiles in Different Timesteps_

We first investigate and analyze the impact of timestep reduction in training and inference on accuracy. This study aims at understanding the characteristics of the accuracy profiles considering a given SNN model, dataset, and timestep. To do this, we perform experimental studies with a 400-neuron SNN while considering different timesteps, datasets (i.e., MNIST and Fashion MNIST), and learning rules, i.e., a pair-based weight-dependent STDP (STDP1) [10] and an adaptive learning rate-based STDP (STDP2) [8].\(^{3}\) The experimental results are shown in Fig. 6, from which we draw the following key observations. Footnote 3: Detailed information on the experimental setup is provided in Section IV.

* **Trends of the accuracy:** In general, different models with different combinations of timesteps, learning rules, and datasets show similar trends, i.e., the accuracy profiles are roughly proportional to the timestep: a larger timestep leads to higher accuracy, while a smaller one leads to lower accuracy. This indicates that accuracy degradation in different SNN models may be addressed with a similar approach, which is beneficial for developing a simple yet effective solution.
* **Advantages:** Most of the timesteps lead to accuracy scores that are comparable to the baseline accuracy (i.e., SNNs without timestep reduction) despite employing different SNN models with different learning rules and datasets; see label 1. This shows that the timestep may be reduced significantly without noticeable accuracy degradation compared to the baseline.
* **Potentials of small timesteps:** The low accuracy in small timesteps (shown by label 2) should be improved so that these timesteps can be used at run time to offer good trade-offs among accuracy, latency, and energy consumption. For instance, if the SNN-based systems need to reduce their operational power/energy for better battery life, they may reduce the timestep without accuracy loss or with acceptable accuracy degradation as compared to the baseline.

Our observations expose that a smaller timestep typically has less spiking activity (i.e., a smaller number of pre- and post-synaptic spikes), hence leading to relatively less learning activity and lower accuracy. This indicates that _a small number of spikes in a reduced timestep should be effectively used for learning the input features_.

### _Identifying the Roles of Neuron Parameters in Different Timesteps_

To maintain the learning quality of SNNs in a reduced timestep, the learning rules should benefit from the available spikes during training. Hence, the neurons should effectively make use of the pre-synaptic spikes for generating the post-synaptic spikes, which are then used by the STDP learning rules to recognize different classes. Otherwise, the learning rules will not benefit from the spikes. To address this, _we investigate the roles of neuron parameters and their impact on the accuracy_. We perform experimental case studies with a 400-neuron network with different timesteps, datasets, and learning rules as employed in Section III-A, while considering different values of neuron parameters.

Fig. 3: An overview of our novel contributions (shown in blue boxes).

Fig. 4: The neuronal dynamics of a LIF neuron model.

Here, we investigate the threshold potential (\(V_{th}\)), refractory period (\(T_{ref}\)), and adaptation potential (\(\theta\)) since we employ the LIF neuron model. We reduce the values of these parameters with the following settings: (1) \(V_{th}^{1}\) = \(V_{th}^{0}-1\); (2) \(T_{ref}=1\); and (3) \(\theta=0\); a configuration sketch of these settings is given after the list of observations below. Index-\(0\) denotes a parameter with its original value, while index-\(1\) denotes a parameter with an adjusted value. The experimental results are presented in Fig. 7, from which we derive the following key observations.

* **Reduced \(V_{th}\):** We observe that \(V_{th}^{1}\) = \(V_{th}^{0}-1\) leads to better accuracy than the direct timestep reduction in most of the timesteps, and leads to competitive accuracy compared to the baseline in small timesteps, as shown by labels 3 and 4. The reason is that a smaller \(V_{th}\) lets the corresponding neuron produce post-synaptic spikes more easily than the original \(V_{th}\), thus increasing the post-synaptic spike and learning activities that lead to better accuracy. Therefore, _the idea of threshold potential (\(V_{th}\)) reduction can be exploited for devising an effective solution_.
* **Reduced \(T_{ref}\):** We observe that \(T_{ref}=1\) leads to accuracy comparable to the baseline in small timesteps (see labels 3 and 4), since this setting makes all neurons responsive to any input spikes, which leads to higher learning activities and relatively good accuracy. In high timesteps, this setting leads to highly responsive neurons that may encounter difficulties in distinguishing a specific class, hence leading to accuracy lower than the baseline and the direct timestep reduction; see label 3. Therefore, _the idea of refractory period (\(T_{ref}\)) reduction should be exploited carefully if considered for developing an effective solution_.
* **Reduced \(\theta\):** We observe that, in general, \(\theta=0\) leads to a significant accuracy drop as compared to the baseline and the direct timestep reduction, as shown by label 4. The reason is that this setting may make some neurons dominate the spiking activity (i.e., spike generation), hence restricting the other neurons and their connecting synapses from learning and recognizing diverse input features, which in turn causes accuracy degradation [10]. Therefore, _the adaptation potential (\(\theta\)) should be considered when developing an effective solution_.
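For illustration, the three case-study settings above can be expressed as configuration variants applied on top of a baseline parameter set (the numeric values below are placeholders, not the exact settings of [10]):

```python
# Baseline LIF parameters (placeholder values, not the exact ones from [10]).
baseline = {"v_th": -52.0, "v_reset": -65.0, "t_ref": 5, "theta": 0.05}

# The three studied variants from Section III-B.
variants = {
    "reduced_v_th":  {**baseline, "v_th": baseline["v_th"] - 1.0},  # V_th^1 = V_th^0 - 1
    "reduced_t_ref": {**baseline, "t_ref": 1},                      # T_ref = 1
    "no_adaptation": {**baseline, "theta": 0.0},                    # theta = 0
}
```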
### _Parameter Enhancements for Maintaining Accuracy_

Section III-B suggests that neuron parameters can be exploited to maintain accuracy in reduced timesteps. However, finding appropriate values for these parameters requires intensive searches, which restricts the SNN systems from efficient online learning/fine-tuning. Toward this, _we propose a set of simple policies that enhance the neuron parameters with minimum overheads_, thereby enabling the SNN-based systems to employ adaptive timestep reduction for efficient online learning through training at run time.

Fig. 5: The overview of our TopSpark methodology, where the novel steps are highlighted in blue boxes. We first analyze the impact of timestep reduction on the accuracy profiles (Section III-A). We also identify the roles of neuron parameters in different timesteps (Section III-B). Then, we leverage the previous observations to enhance the neuron parameters (Section III-C) as well as develop a strategy to trade off accuracy, latency, and energy (Section III-D). The output of the TopSpark methodology is an optimized SNN model with an optimized timestep, which is employed on autonomous mobile agents. Furthermore, the mobile agents can adjust the timestep of the SNN processing at run time to adaptively meet the power/energy requirements (e.g., to save battery life).

Fig. 6: The accuracy profiles of a 400-neuron network with reduced timesteps during training and inference phases, while considering different learning rules and datasets: (a) MNIST and (b) Fashion MNIST.

**Threshold potential (\(V_{th}\)):** We leverage \(V_{th}\) reduction to maintain accuracy for most of the timesteps by adjusting the gap between \(V_{th}\) and \(V_{reset}\), so that the neurons can have proportional and sufficient spiking (i.e., pre- and post-synaptic spikes) and learning activities for distinguishing different classes. To do this, _we propose to linearly scale down \(V_{th}\) from its original value considering the reduced timestep, as stated in Eq. 2_. Here, \(V_{th}^{0}\) and \(V_{th}^{1}\) are the original and adjusted threshold potentials, respectively; \(V_{reset}\) is the reset potential; and \(T_{0}\) and \(T_{1}\) are the original and adjusted timesteps, respectively. \[V_{th}^{1}=V_{reset}+\frac{T_{1}}{T_{0}}\cdot(V_{th}^{0}-V_{reset}) \tag{2}\] **Refractory period (\(T_{ref}\)):** We leverage \(T_{ref}\) reduction to define the effective duration for a neuron to be unresponsive after generating spikes, so that the other neurons have a chance to process the input spikes and trigger the learning process. To do this, _we propose to proportionally decrease \(T_{ref}\) from its original value considering the reduced timestep, as stated in Eq. 3_. Here, \(T_{ref}^{0}\) and \(T_{ref}^{1}\) denote the original and adjusted refractory periods, respectively. In this manner, \(T_{ref}\) is set to a small value when the timestep is small, and vice versa. Furthermore, we employ a ceiling function to discretize the refractory period into a timestep form and ensure that the minimum value of \(T_{ref}\) is 1.
\[T_{ref}^{1}=\left\lceil\frac{T_{1}}{T_{0}}\cdot T_{ref}^{0}\right\rceil \tag{3}\]

**Adaptation potential (\(\theta\)):** We keep \(\theta\) in the neuronal dynamics and carefully adjust its value so that the spiking and learning activities are increased while avoiding the domination of some neurons, thereby maintaining good accuracy. To do this, _we propose to linearly reduce \(\theta\) from its original value, as stated in Eq. 4_. Here, \(\theta_{0}\) and \(\theta_{1}\) are the original and adjusted adaptation potentials, respectively. \[\theta_{1}=\frac{T_{1}}{T_{0}}\cdot\theta_{0} \tag{4}\]

### _Trade-Off Strategy for Accuracy, Latency, and Energy_

Sections III-A and III-B show that timestep reduction improves the latency and energy efficiency of SNN systems; however, at the same time, their accuracy may be degraded below an acceptable threshold. Hence, the accuracy level, latency, and energy consumption of SNN systems should meet the design constraints. This is especially important for applications that need adaptive adjustments at run time for better battery life, such as smart mobile agents/robots. Toward this, _we propose a strategy to trade off accuracy, latency, and energy consumption to meet the constraints (i.e., acceptable accuracy, acceptable latency, and energy budget)_. Our strategy is to quantify the trade-off benefit for a given model using our proposed multi-objective trade-off function in Eq. 5, which considers accuracy, latency, and energy consumption. \(S\) is the score of the trade-off benefit, which is useful for design space exploration considering different trade-offs between accuracy, latency, and energy consumption. \(A\) is the accuracy, and \(L_{n}\) is the normalized latency, i.e., the ratio between the latency after timestep reduction and the original latency. Meanwhile, \(E_{n}\) is the normalized energy, i.e., the ratio between the energy consumption after timestep reduction and the original one. \(\tau\) and \(\varepsilon\) are the adjustment factors for latency and energy consumption, respectively; an adjustment factor should be set higher than the others if the respective metric is more important, and vice versa. Here, \(\tau\) and \(\varepsilon\) are non-negative real numbers. \[S=A-(\tau\cdot L_{n}+\varepsilon\cdot E_{n})\ \ \text{with}\ \ L_{n}=\frac{L_{1}}{L_{0}}\ \ \ \text{and}\ \ \ E_{n}=\frac{E_{1}}{E_{0}} \tag{5}\]

**The use of our TopSpark methodology in autonomous mobile agents:** The output of TopSpark is an optimized SNN model with enhanced parameters and an optimized timestep, which can be employed directly for performing energy-efficient SNN inference on mobile agents/robots. If the mobile agents consider online learning for adapting to different operational environments, they can also employ TopSpark's output model, since this model can be trained under reduced timestep settings. Furthermore, the mobile agents can adjust the timestep of SNN processing at run time to adaptively meet the power/energy requirements through TopSpark's parameter enhancements. In this manner, mobile agents/robots can save their battery life on-the-fly without noticeably sacrificing accuracy.
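A minimal sketch of the enhancement rules in Eqs. 2-4 and the trade-off score in Eq. 5 is given below; the function and variable names are our own, and the example values are illustrative assumptions.

```python
import math

def enhance_parameters(t0, t1, v_th0, v_reset, t_ref0, theta0):
    """Scale neuron parameters from original timestep t0 to reduced t1
    following Eqs. 2-4."""
    ratio = t1 / t0
    v_th1 = v_reset + ratio * (v_th0 - v_reset)   # Eq. 2
    t_ref1 = max(1, math.ceil(ratio * t_ref0))    # Eq. 3 (ceiling, min 1)
    theta1 = ratio * theta0                       # Eq. 4
    return v_th1, t_ref1, theta1

def tradeoff_score(acc, lat, lat0, energy, energy0, tau, eps):
    """Trade-off benefit S = A - (tau * L_n + eps * E_n), Eq. 5."""
    return acc - (tau * lat / lat0 + eps * energy / energy0)

# Example: scale parameters from timestep 350 down to 50 (illustrative values).
print(enhance_parameters(t0=350, t1=50, v_th0=-52.0, v_reset=-65.0,
                         t_ref0=5, theta0=0.05))
```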
## IV Evaluation Methodology

To evaluate our TopSpark methodology, we use the same evaluation scenarios that are employed widely in the SNN community, with the experimental setup shown in Fig. 8. We use a Poisson-based rate coding for converting data into spikes. We employ the fully-connected network architecture (as shown in Fig. 1) with different numbers of neurons. For brevity, we use the term M_n_ for an SNN model with _n_ neurons. We also employ different STDP-based learning rules, i.e., a pair-based weight-dependent STDP (STDP1) [10] and an adaptive learning rate-based STDP (STDP2) [8]. We consider the MNIST and Fashion MNIST datasets as the workloads. For each timestep setting, we perform both the training and inference phases. These experimental scenarios aim at highlighting the generality of the TopSpark methodology. For comparison partners, we consider the original SNN models without timestep reduction as the _baseline_, and the SNNs with the _direct_ timestep reduction technique. Furthermore, for both comparison partners, we consider the original parameter values from the work of [10], as shown in Table I.

Fig. 8: Experimental setup for evaluating the TopSpark methodology.

**Evaluation:** We employ Python-based simulations [19], which run on GPU machines (i.e., an Nvidia GeForce RTX 2080 Ti), to evaluate the accuracy. Afterward, we leverage the simulation time to evaluate the latency. We also obtain the power consumption using the _nvidia-smi_ utility, following the approach used in [20]. The recorded simulation time and operational power are then leveraged to estimate the energy consumption in the training and inference phases.
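As an illustration of this estimation step (not the exact script of [19, 20]), one can query the GPU power draw with nvidia-smi around the simulation run and approximate energy as average power times simulation time:

```python
import subprocess, time

def gpu_power_watts():
    """Query the current GPU power draw (in watts) via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"])
    return float(out.decode().strip().splitlines()[0])

def run_and_estimate_energy(workload_fn):
    """Run workload_fn() and return (latency in s, energy in J),
    approximating energy as average power draw times elapsed time.
    For simplicity, power is sampled only before and after the run;
    a real measurement would sample periodically in the background."""
    start = time.time()
    p0 = gpu_power_watts()
    workload_fn()
    p1 = gpu_power_watts()
    latency = time.time() - start
    return latency, 0.5 * (p0 + p1) * latency   # E = P_avg * t
```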
## V Results and Discussion

### _Maintaining Accuracy_

Experimental results for the accuracy are provided in Fig. 9. In general, the direct timestep reduction technique achieves comparable accuracy profiles in large timesteps, but suffers from a significant degradation in small timesteps due to the loss of information (i.e., spikes) for maintaining the learning quality; see label 1. Meanwhile, our TopSpark achieves competitive accuracy profiles for all timestep settings across different learning rules and workloads, as shown by labels 1 and 2. For instance, if we consider an acceptable 2% accuracy loss from the baseline (i.e., the original SNN without timestep reduction) for M400 with MNIST, our TopSpark achieves an accuracy of 86% in timestep 30 (STDP1) and 90% in timestep 100 (STDP2), whereas the direct reduction technique achieves an accuracy of 85% in timestep 50 (STDP1) and 90% in timestep 150 (STDP2). If we need to save more battery life, we can further reduce the timestep while relaxing the accuracy constraint. For instance, TopSpark achieves an accuracy of 77% (STDP1) and 75% (STDP2) in timestep 10 for M400 with MNIST, while the direct reduction technique achieves an accuracy of only 52% (STDP1) and 14% (STDP2); see label 1. The reason is that our TopSpark employs parameter enhancements that proportionally scale down the values of neuron parameters to (1) make the neurons preserve the spiking and learning activities in large timesteps for maintaining high accuracy, and (2) increase the spiking and learning activities to compensate for the loss of spikes, thereby improving the learning quality and accuracy in small timesteps.

Fig. 9: Accuracy profiles of SNN models across different timestep settings, learning rules, and workloads: (a) MNIST and (b) Fashion MNIST.

### _Latency Improvements_

Fig. 10 shows the experimental results for latency. In general, the direct reduction technique and our TopSpark effectively reduce the processing latency as compared to the baseline, since they employ smaller timesteps. For instance, if we consider an acceptable 2% accuracy loss from the baseline for M400 with MNIST, the direct reduction technique improves latency by 3.9x in training and 4.2x in inference for STDP1, and by 2.3x in training and inference for STDP2, as compared to the baseline. Meanwhile, our TopSpark further improves latency by 10.8x in training and 10.9x in inference for STDP1, and by 2.5x in training and 3.5x in inference for STDP2, as compared to the baseline. Across all experimental scenarios, our TopSpark improves latency by 3.9x on average over different network sizes, learning rules, workloads, and processing phases (i.e., training and inference). The reason is that the parameter enhancements in our TopSpark enable more timestep reduction while preserving the learning quality through appropriate spiking and learning activities.

### _Energy Efficiency Improvements_

The experimental results for energy consumption are provided in Fig. 10. The direct reduction technique and our TopSpark effectively improve the energy efficiency as compared to the baseline, since they have smaller latency and operational power. For instance, if we consider an acceptable 2% accuracy loss from the baseline for M400 with MNIST, the direct reduction technique improves energy efficiency by 4x (STDP1) and by 2.3x (STDP2) in both the training and inference phases, as compared to the baseline. Meanwhile, our TopSpark further improves energy efficiency by 10x (STDP1) and by 2.5x-3.5x (STDP2) in both the training and inference phases, as compared to the baseline. Across all experimental scenarios, our TopSpark improves energy efficiency by 3.5x (training) and by 3.3x (inference) on average over different network sizes, learning rules, and workloads. The reason is that our TopSpark employs effective parameter enhancements that preserve the learning quality across different network sizes, timesteps, learning rules, workloads, and processing phases; the reduced timesteps therefore lead to reduced latency and operational power, and hence reduced energy consumption.

Fig. 10: Latency and energy consumption of SNNs during (a) training and (b) inference, considering different learning rules (STDP1 and STDP2), network sizes, timestep settings, and workloads. The experimental results for MNIST and Fashion MNIST are similar due to the same number and size of samples.

### _Design Trade-Offs_

Experimental results for design trade-offs are provided in Fig. 11. The results show that we can set the adjustment factors to meet the design constraints. If we need to prioritize accuracy over the other metrics, we may set \(\tau=0\) and \(\varepsilon=0\); as a result, the trade-off benefit points to design points that achieve high accuracy as potential solutions. If we set a higher priority on latency over the other metrics (e.g., \(\tau=10\) and \(\varepsilon=0\)), the trade-off benefit shifts to a design point that has a significant timestep reduction. Similar results are observed if we set a higher priority on energy consumption, as a significant timestep reduction also effectively improves energy efficiency. We also observe that the trade-off benefit may change when we change the adjustment factors (\(\tau\) and \(\varepsilon\)): when we prioritize one metric above the others, the benefit of the other metrics decreases, and vice versa. These results show that SNN-based systems can employ our strategy to trade off their accuracy, latency, and energy consumption.
The above results and discussion show that the TopSpark methodology is applicable for enabling energy-efficient SNNs with STDP-based learning rules in both training and inference phases, across different timesteps, network sizes, and workloads, hence making it amenable to autonomous mobile agents/robots.

## VI Conclusion

We propose the novel TopSpark methodology that leverages adaptive timestep optimizations to enable energy-efficient SNNs via analysis of accuracy in different timesteps, parameter enhancements, and an accuracy-latency-energy trade-off strategy. Our TopSpark reduces latency by 3.9x and energy consumption by 3.5x (training) and 3.3x (inference) on average, while keeping accuracy within 2% of SNNs without timestep reduction. Therefore, our work may enable low-latency and energy-efficient SNN training and inference for autonomous mobile agents/robots, including their efficient online learning process.
2306.07445
Least-Squares Neural Network (LSNN) Method For Linear Advection-Reaction Equation: Non-constant Jumps
The least-squares ReLU neural network (LSNN) method was introduced and studied for solving the linear advection-reaction equation with discontinuous solutions in \cite{Cai2021linear,cai2023least}. The method is based on an equivalent least-squares formulation and \cite{cai2023least} employs ReLU neural network (NN) functions with $\lceil \log_2(d+1)\rceil+1$-layer representations for approximating solutions. In this paper, we show theoretically that the method is also capable of accurately approximating non-constant jumps along discontinuous interfaces that are not necessarily straight lines. Theoretical results are confirmed through multiple numerical examples with $d=2,3$ and various non-constant jumps and interface shapes, showing that the LSNN method with $\lceil \log_2(d+1)\rceil+1$ layers approximates solutions accurately with fewer degrees of freedom than mesh-based methods and without the common Gibbs phenomenon along discontinuous interfaces having non-constant jumps.
Zhiqiang Cai, Junpyo Choi, Min Liu
2023-06-12T22:20:51Z
http://arxiv.org/abs/2306.07445v3
# Least-Squares Neural Network (LSNN) Method ###### Abstract The least-squares ReLU neural network (LSNN) method was introduced and studied for solving the linear advection-reaction equation with discontinuous solutions in [4, 5]. The method is based on an equivalent least-squares formulation and employs ReLU neural network (NN) functions with a \(\lceil\log_{2}(d+1)\rceil+1\)-layer representation for approximating the solution. In this paper, we show theoretically that the method is also capable of approximating a non-constant jump along the discontinuous interface of the underlying problem that is not necessarily a straight line. Numerical results for test problems with various non-constant jumps and interfaces show that the LSNN method with \(\lceil\log_{2}(d+1)\rceil+1\) layers approximates the solution accurately with fewer DoFs than mesh-based methods and without the common Gibbs phenomenon along the discontinuous interface.

Keywords: Least-Squares Method, ReLU Neural Network, Linear Advection-Reaction Equation, Discontinuous Solution. MSC: 65N15, 65N99.

## 1 Introduction

Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{d}\) (\(d\geq 2\)) with Lipschitz boundary \(\partial\Omega\), and denote the advective velocity field by \(\boldsymbol{\beta}(\mathbf{x})=(\beta_{1},\cdots,\beta_{d})^{T}\in C^{0}(\bar{\Omega})^{d}\). Define the inflow part of the boundary \(\Gamma=\partial\Omega\) by \[\Gamma_{-}=\{\mathbf{x}\in\Gamma:\,\boldsymbol{\beta}(\mathbf{x})\cdot\boldsymbol{n}(\mathbf{x})<0\} \tag{1}\] with \(\boldsymbol{n}(\mathbf{x})\) being the unit outward normal vector to \(\Gamma\) at \(\mathbf{x}\in\Gamma\). Consider the linear advection-reaction equation \[\left\{\begin{array}{rcl}u_{\boldsymbol{\beta}}+\gamma\,u&=&f,&\mbox{in}\;\;\Omega,\\ u&=&g,&\mbox{on}\;\;\Gamma_{-},\end{array}\right. \tag{2}\] where \(u_{\boldsymbol{\beta}}\) denotes the directional derivative of \(u\) along \(\boldsymbol{\beta}\). Assume that \(\gamma\in C^{0}(\bar{\Omega})\), \(f\in L^{2}(\Omega)\), and \(g\in L^{2}(\Gamma_{-})\) are given scalar-valued functions. A major challenge in numerical simulation is that the solution of (2) is discontinuous along an interface because of a discontinuous inflow boundary condition, where the discontinuous interface can be the streamline from the inflow boundary. Traditional mesh-based numerical methods often exhibit oscillations near a discontinuity (the Gibbs phenomenon) and may not be extended to nonlinear hyperbolic conservation laws. The least-squares ReLU neural network (LSNN) method for solving (2) with discontinuous solutions was introduced and studied in [4, 5]. The method is based on an equivalent least-squares formulation studied in [2, 7] and employs ReLU neural network (NN) functions with a \(\lceil\log_{2}(d+1)\rceil+1\)-layer representation for approximating the solution. The LSNN method is capable of automatically approximating the discontinuous solution since the free hyperplanes of ReLU NN functions adapt to the solution (see [3, 4, 5]). Compared to various adaptive mesh refinement (AMR) algorithms that locate the discontinuous interface through local mesh refinement (see, e.g., [6, 8, 10]), the LSNN method is much more effective in terms of the number of degrees of freedom. The approximation property of ReLU NN functions for a step function was recently studied in [4, 5].
In particular, we showed theoretically that two or \(\lceil\log_{2}(d+1)\rceil+1\) layer ReLU NN functions are necessary and sufficient to approximate a step function with any given accuracy \(\varepsilon>0\) when the discontinuous interface is a hyperplane or a general hyper-surface, respectively. This approximation property was used to establish _a priori_ error estimates of the LSNN method. The jump of the discontinuous solution of (2) is generally non-constant when the reaction coefficient \(\gamma\) is non-zero. The main purpose of this paper is to establish _a priori_ error estimates (see Theorem 3.3) for the LSNN method without making the assumption that the jump is constant. To this end, we decompose the solution as the sum of discontinuous and continuous parts (see (3.2)), so that the discontinuous part of the solution can be described as a cylindrical surface on one subdomain and zero otherwise. Then we construct a continuous piecewise linear (CPWL) function with sharp transition layers along the discontinuous interface to approximate the discontinuous part accurately with \(\mathcal{O}(J_{2}\sqrt{\varepsilon_{2}})+\mathcal{O}(J_{3}\sqrt{\varepsilon_{3}})\) error, where \(\varepsilon_{2}\) and \(\varepsilon_{3}\) are, respectively, the width of the transition layers and a given positive number bounding the differences of the function values and directional derivatives between the shifted surface and the piecewise plane functions constructed to approximate the surface on the corresponding subdomain (see Lemmas 5.1 and 5.2 and Theorem 3.1). From [1, 5, 11, 12], we know that a CPWL function is a ReLU NN function \(\mathbb{R}^{d}\to\mathbb{R}\) with a \(\lceil\log_{2}(d+1)\rceil+1\)-layer representation, from which it follows that the discontinuous part of the solution can be approximated by this class of functions to any prescribed accuracy. The rest of the paper is organized as follows. In Section 2, we briefly review and discuss properties of ReLU NN functions and the LSNN method in [5]. Then the theoretical convergence analysis is conducted in Section 3, showing that the discretization error of the method mainly depends on the continuous part of the decomposition of the solution. Finally, to demonstrate the effectiveness of the method, we provide numerical results for test problems with various non-constant jumps in Section 4.

## 2 ReLU NN Functions and LSNN Method

This section briefly reviews properties of ReLU neural network (NN) functions and the least-squares ReLU neural network (LSNN) method in [5]. For a given positive integer \(n\), denote the collection of all ReLU NN functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) that have a representation with depth \(L\) and total number of hidden neurons \(n\) by \(\mathcal{M}(d,1,L,n)\), and the collection of all ReLU NN functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) with an \(L\)-layer representation by \(\mathcal{M}(d,1,L)\). Then \[\mathcal{M}(d,1,L)=\bigcup_{n\in\mathbb{N}}\mathcal{M}(d,1,L,n). \tag{2.1}\]
**Proposition 2.1** (see [1, 5]): _The collection of all continuous piecewise linear (CPWL) functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) is equal to \(\mathcal{M}(d,1,\lceil\log_{2}(d+1)\rceil+1)\), i.e., the collection of all ReLU NN functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) that have a representation with depth \(\lceil\log_{2}(d+1)\rceil+1\)._

**Proposition 2.2**: _\(\mathcal{M}(d,1,L,n)\subset\mathcal{M}(d,1,L,n+1)\)._

Define the least-squares (LS) functional \[\mathcal{L}(v;\mathbf{f})=\|v_{\boldsymbol{\beta}}+\gamma\,v-f\|_{0,\Omega}^{2}+\|v-g\|_{-\boldsymbol{\beta}}^{2}, \tag{2.2}\] where \(\mathbf{f}=(f,g)\) and \(\|\cdot\|_{-\boldsymbol{\beta}}\) is given by \[\|v\|_{-\boldsymbol{\beta}}=\langle v,v\rangle_{-\boldsymbol{\beta}}^{1/2}=\left(\int_{\Gamma_{-}}|\boldsymbol{\beta}\boldsymbol{\cdot}\boldsymbol{n}|\,v^{2}\,ds\right)^{1/2}.\] The LS formulation of problem (2) is to seek \(u\in V_{\boldsymbol{\beta}}\) such that \[\mathcal{L}(u;\mathbf{f})=\min_{v\in V_{\boldsymbol{\beta}}}\mathcal{L}(v;\mathbf{f}), \tag{2.3}\] where \(V_{\boldsymbol{\beta}}=\{v\in L^{2}(\Omega):v_{\boldsymbol{\beta}}\in L^{2}(\Omega)\}\) is a Hilbert space equipped with the norm \[\left|\!\left|\!\left|v\right|\!\right|\!\right|_{\boldsymbol{\beta}}=\left(\|v\|_{0,\Omega}^{2}+\|v_{\boldsymbol{\beta}}\|_{0,\Omega}^{2}\right)^{1/2}.\] Then the corresponding LS and discrete LS approximations are, respectively, to find \(u_{{}_{N}}\in\mathcal{M}(d,1,L,n)\) such that \[\mathcal{L}\big{(}u_{{}_{N}};\mathbf{f}\big{)}=\min_{v\in\mathcal{M}(d,1,L,n)}\mathcal{L}\big{(}v;\mathbf{f}\big{)}, \tag{2.4}\] and to find \(u_{{}_{T}}^{N}\in\mathcal{M}(d,1,L,n)\) such that \[\mathcal{L}_{{}_{T}}\big{(}u_{{}_{T}}^{N};\mathbf{f}\big{)}=\min_{v\in\mathcal{M}(d,1,L,n)}\mathcal{L}_{{}_{T}}\big{(}v;\mathbf{f}\big{)}, \tag{2.5}\] where \(\mathcal{L}_{{}_{T}}\big{(}v;\mathbf{f}\big{)}\) is the discrete LS functional (see [4]).

## 3 Error Estimate

In this section, we establish an error estimate of the LSNN method for the linear advection-reaction equation with a non-constant jump along a discontinuous interface. For simplicity, we restrict ourselves to two dimensions. To this end, assume the advection velocity field \(\boldsymbol{\beta}\) is piecewise constant; that is, there exists a partition of the domain \(\Omega\) such that \(\boldsymbol{\beta}\) has the same direction but possibly different magnitude at each interior point of each subdomain. Without loss of generality, assume that there are only two sub-domains, \(\Omega=\Upsilon_{1}\cup\Upsilon_{2}\), and that the inflow boundary data \(g(\mathbf{x})\) is discontinuous at only one point \(\mathbf{x}_{0}\in\Gamma_{-}\) with \(g(\mathbf{x}_{0}^{-})=\alpha_{1}\) and \(g(\mathbf{x}_{0}^{+})=\alpha_{2}\) from different sides. (Figure 1(a) depicts \(\Upsilon_{1}\) and \(\Upsilon_{2}\) as the left-upper and the right-lower triangles, respectively.) Let \(I\) be the streamline emanating from \(\mathbf{x}_{0}\); then the discontinuous interface \(I\) divides the domain \(\Omega\) into two sub-domains, \(\Omega=\Omega_{1}\cup\Omega_{2}\), where \(\Omega_{1}\) and \(\Omega_{2}\) are the left-lower and the right-upper sub-domains separated by the discontinuous interface \(I\), respectively (see Figure 1(a)). The corresponding solution \(u\) of (2) is discontinuous across the interface \(I\) and is piecewise smooth with respect to the partition \(\{\Omega_{1},\Omega_{2}\}\).
For a given \(\varepsilon_{2}>0\), take an \(\varepsilon_{2}\)-neighborhood around the interface \(I\) in the direction of \(\boldsymbol{\beta}\) as in Figure 1(b). Next, we estimate the error in the sub-domain \(\Upsilon_{i}\), say, \(\Upsilon_{2}\). To further simplify the error estimate, assume that \[\Upsilon_{i}=(0,1)\times(0,1),\quad\boldsymbol{\beta}(\mathbf{x})=(0,v_{2}(\mathbf{x}))^{T},\quad\text{and}\quad\mathbf{x}_{0}=(x_{0},0)\text{ for }x_{0}\in(0,1).\] These assumptions imply that the restriction of the interface \(I\) to \(\Upsilon_{i}\) is a vertical line segment \[I=\{(x_{0},y)\in\Upsilon_{i}:y\in(0,1)\},\] and that \(\Upsilon_{i}\) is partitioned into two sub-domains \[\Omega_{1i}=\{\mathbf{x}=(x,y)\in\Upsilon_{i}:\,x<x_{0}\}\quad\text{and}\quad\Omega_{2i}=\{\mathbf{x}=(x,y)\in\Upsilon_{i}:\,x>x_{0}\}\] (see Figure 1(c)). In \(\Upsilon_{i}=\Omega_{1i}\cup\Omega_{2i}\cup I\), let \(u_{1}\) and \(u_{2}\) be the solutions of (2) defined only on \(\Omega_{1i}\) with constant inflow boundary conditions \(g=\alpha_{1}\) and \(\alpha_{2}\) on \(\{(x,0):\,x\in[0,x_{0}]\}\), respectively. (When \(\Upsilon_{i}\) is different from \(\Upsilon_{2}\), the discontinuous point is not \(\mathbf{x}_{0}\) but an interior point of the domain \(\Omega\), and the values of the solution \(u\) at that discontinuous point from different sides are taken as the constant inflow boundary conditions.) Set \(a(\mathbf{x})=u_{1}(\mathbf{x})-u_{2}(\mathbf{x})\) and let \(\chi(\mathbf{x})\) be a piecewise discontinuous function defined by \[\chi(\mathbf{x})=\left\{\begin{array}{ll}a(\mathbf{x}),&\mathbf{x}\in\Omega_{1i},\\ 0,&\mathbf{x}\in\Omega_{2i};\end{array}\right. \tag{3.1}\] then the solution \(u\) of (2) has the following decomposition (see [5]) \[u(\mathbf{x})=\hat{u}(\mathbf{x})+\,\chi(\mathbf{x})\,\mbox{ in }\Upsilon_{i}. \tag{3.2}\] Here, \(\hat{u}(\mathbf{x})=u(\mathbf{x})-\,\chi(\mathbf{x})\) is clearly piecewise smooth; moreover, it is also continuous in \(\Upsilon_{i}\) since \(\hat{u}\big{|}_{I}=u_{2}\big{|}_{I}\) from both sides. Then we have the following error estimate, whose proof is postponed to Section 5.

**Theorem 3.1**: _For any given \(\varepsilon_{2}>0\) and \(\varepsilon_{3}>0\), on \(\Upsilon_{i}\), there exists a CPWL function \(p_{i}(\mathbf{x})\) such that_ \[\left|\!\left|\!\left|\chi-p_{i}\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq D_{1}\sqrt{\varepsilon_{2}}+D_{2}\sqrt{\varepsilon_{3}}. \tag{3.3}\]

**Remark 3.2**: _We now construct a CPWL function \(p(\mathbf{x})\) on \(\Omega\) given by_ \[p(\mathbf{x})=p_{i}(\mathbf{x}),\ \mathbf{x}\in\Upsilon_{i},\] _so that \(p_{i}(\mathbf{x})=p_{i+1}(\mathbf{x})\) on the boundary of \(\Upsilon_{i}\) and \(\Upsilon_{i+1}\). Using the triangle inequality, Theorem 3.1 can be extended to the case that \(\boldsymbol{\beta}\) is piecewise constant, establishing the error estimate on the whole domain \(\Omega\)._

**Theorem 3.3**: _Let \(u\) and \(u_{{}_{N}}\) be the solutions of problems (2.3) and (2.4), respectively.
If the depth of ReLU NNs in (2.4) is at least \(\lceil\log_{2}(d+1)\rceil+1\), then for a sufficiently large integer \(n\) there exists an integer \(\hat{n}\leq n\) such that_ \[\left|\!\left|\!\left|u-u_{{}_{N}}\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq C\,\left(\sqrt{\varepsilon_{2}}+\sqrt{\varepsilon_{3}}+\inf_{v\in\mathcal{M}(d,n-\hat{n})}\left|\!\left|\!\left|\hat{u}-v\right|\!\right|\!\right|_{\boldsymbol{\beta}}\right), \tag{3.4}\] _where \(\mathcal{M}(d,n-\hat{n})=\mathcal{M}(d,1,\lceil\log_{2}(d+1)\rceil+1,n-\hat{n})\)._

_Proof._ The proof is similar to that of Theorem 4.4 in [5].

Figure 1: Domain decomposition for the case that \(\boldsymbol{\beta}\) is piecewise constant

**Lemma 3.4**: _Let \(u\), \(u_{{}_{N}}\), and \(u_{{}_{T}}^{N}\) be the solutions of problems (2.3), (2.4), and (2.5), respectively. Then there exist positive constants \(C_{1}\) and \(C_{2}\) such that_ \[\left|\!\left|\!\left|u-u_{{}_{T}}^{N}\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq C_{1}\,\left(\left|(\mathcal{L}-\mathcal{L}_{{}_{T}})(u_{{}_{N}}-u_{{}_{T}}^{N};\mathbf{0})\right|+\left|(\mathcal{L}-\mathcal{L}_{{}_{T}})(u-u_{{}_{N}};\mathbf{0})\right|\right)^{1/2}+C_{2}\,\left(\sqrt{\varepsilon_{2}}+\sqrt{\varepsilon_{3}}+\inf_{v\in\mathcal{M}(d,n-\hat{n})}\left|\!\left|\!\left|\hat{u}-v\right|\!\right|\!\right|_{\boldsymbol{\beta}}\right). \tag{3.5}\]

_Proof._ The proof is similar to that of Lemma 4.7 in [5].

## 4 Numerical Experiments

In this section, we present numerical results for two-dimensional test problems with constant, piecewise constant, or variable advection velocity fields. The discrete LS functional was minimized by the ADAM optimization algorithm [9] on a uniform mesh with mesh size \(h=10^{-2}\). The directional derivative \(v_{\boldsymbol{\beta}}\) was approximated by the backward finite difference quotient multiplied by \(|\boldsymbol{\beta}|\), \[v_{\boldsymbol{\beta}}(\mathbf{x}_{{}_{K}})\approx|\boldsymbol{\beta}|\frac{v(\mathbf{x}_{{}_{K}})-v\big{(}\mathbf{x}_{{}_{K}}-\rho\bar{\boldsymbol{\beta}}(\mathbf{x}_{{}_{K}})\big{)}}{\rho}, \tag{4.1}\] where \(\bar{\boldsymbol{\beta}}=\frac{\boldsymbol{\beta}}{|\boldsymbol{\beta}|}\) and \(\rho=h/4\) (except for the last test problem, which used \(\rho=h/15\)). The LSNN method was implemented with an adaptive learning rate that started at \(0.004\) and was halved every \(50000\) iterations (except for the fourth test problem, where it was halved every \(100000\) iterations). For each experiment, to avoid local minima, \(10\) ReLU NN functions were trained for \(5000\) iterations each, and then the experiment began with the pretrained network function that gave the minimum loss. Tables 1 to 5 report the numerical errors in the relative \(L^{2}\) norm, the relative \(V_{\boldsymbol{\beta}}\) norm, and the relative LS functional, with "parameters" being the total number of weights and biases. Since the input dimension is \(d=2\) (see [5]), we employed ReLU NN functions with a \(2\)-\(n_{1}\)-\(n_{2}\)-\(1\) representation or structure, which means the representation has two hidden layers with \(n_{1}\) and \(n_{2}\) neurons, respectively. The \(l^{\text{th}}\)-layer breaking lines, defined by the set \[\{\mathbf{x}\in\Omega:\boldsymbol{\omega}^{(l)}(N^{(l-1)}\circ\cdots\circ N^{(2)}\circ N^{(1)}(\mathbf{x}))-\mathbf{b}^{(l)}\text{ has a zero component}\}\text{ with }N^{(0)}=I,\] are depicted in Figures 2 to 6 to follow the behavior of the ReLU NN function approximation along the discontinuous interface.
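For illustration, a minimal NumPy sketch of the discrete LS loss with the backward finite difference quotient of (4.1) is given below; the sampling, quadrature weights, and function names are simplified assumptions rather than the authors' implementation.

```python
import numpy as np

def discrete_ls_loss(v, pts, beta, gamma, f, inflow_pts, inflow_w, g, rho):
    """Discrete least-squares functional L_T(v; f) on uniform mesh points.

    v:          callable, network function v(x) for x of shape (N, 2)
    pts:        interior mesh points, shape (N, 2)
    beta:       callable advection field, beta(x) of shape (N, 2)
    gamma, f:   callables for reaction coefficient and source term
    inflow_*:   inflow boundary points and |beta . n| weights; g inflow data
    rho:        finite difference step (h/4 in the paper)
    """
    b = beta(pts)
    speed = np.linalg.norm(b, axis=1)        # |beta|
    b_unit = b / speed[:, None]              # beta / |beta|
    # Backward difference quotient of the directional derivative, Eq. (4.1).
    v_beta = speed * (v(pts) - v(pts - rho * b_unit)) / rho
    interior = np.mean((v_beta + gamma(pts) * v(pts) - f(pts)) ** 2)
    boundary = np.mean(inflow_w * (v(inflow_pts) - g(inflow_pts)) ** 2)
    return interior + boundary
```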
All of the test problems are defined on the domain \(\Omega=(0,1)^{2}\) with \(\gamma=1\) (\(f=1\) for the first three test problems and \(f=0\) for the remaining test problems). The exact solution of the first test problem is \[u(x,y)=\left\{\begin{array}{rl}1,&(x,y)\in\Omega_{1}=\{(x,y)\in\Omega:x<1/2\},\\ 1+e^{-y},&(x,y)\in\Omega\setminus\Omega_{1}.\end{array}\right. \tag{4.2}\] The LSNN method with a random initialization and 50000 iterations was implemented with 2-20-20-1 ReLU NN functions. The numerical results are presented in Figure 2 and Table 1. The traces (Figure 2(b)) of the exact and numerical solutions on the plane \(y=0.5\) show no difference or oscillation. The exact solution (Figure 2(c)), which has a non-constant jump along the vertical interface (Figure 2(a)), is accurately approximated by a 3-layer ReLU NN function (Figure 2(d) and Table 1). Note that the solution of this test problem takes the same form as \(\chi\) in Section 3, which was approximated by a CPWL function constructed by partitioning the domain into rectangles stacked on top of each other. It appears from Figure 2(e) that the 3-layer ReLU NN function approximation has a similar partition and that the second layer breaking lines were generated for approximating the jump along the discontinuous interface by sharp transition layers and the non-constant part of the solution, which is consistent with our theoretical analysis on the convergence of the method.

Figure 2: Approximation results of the problem in subsection 4.1

### Problem with a piecewise smooth solution

This example is a modification of subsection 4.1 by changing the inflow boundary and the inflow boundary condition to \[\Gamma_{-}=\{(x,0):x\in(0,1)\}\] \[\text{and}\ \ g(x,y)=\left\{\begin{array}{rl}0,&(x,y)\in\Gamma_{-}^{1}\equiv\{(x,0):x\in(0,1/2)\},\\ 2,&(x,y)\in\Gamma_{-}^{2}=\Gamma_{-}\setminus\Gamma_{-}^{1},\end{array}\right.\] respectively. The exact solution of this test problem is \[u(x,y)=\left\{\begin{array}{rl}1-e^{-y},&(x,y)\in\Omega_{1},\\ 1+e^{-y},&(x,y)\in\Omega_{2}.\end{array}\right. \tag{4.3}\] The LSNN method with a random initialization and 50000 iterations was implemented with 2-20-20-1 ReLU NN functions. The numerical results are presented in Figure 3 and Table 2. Unlike the previous test problem, the exact solution (Figure 3(c)) consists of two non-constant smooth parts. The LSNN method is capable of approximating the solution accurately without oscillation (Figures 3(b) to 3(d) and Table 2). The 3-layer ReLU NN function approximation has a partition (Figure 3(e)) similar to the one in subsection 4.1, with second layer breaking lines on both sides for approximating the two non-constant smooth parts of the solution. The exact solution of the third test problem is \[u(x,y)=\left\{\begin{array}{rl}1-\sin(2\pi x)e^{-y},&(x,y)\in\Omega_{1},\\ 1+(3/2-x)e^{-y},&(x,y)\in\Omega_{2}.\end{array}\right. \tag{4.4}\] The LSNN method with a random initialization and 100000 iterations was implemented with 2-40-40-1 ReLU NN functions. The numerical results are presented in Figure 4 and Table 3. Since the solution on the inflow boundary consists of two non-constant smooth curves, we increased the number of hidden neurons to obtain a more accurate solution. Figures 4(c) and 4(d) and Table 3 show that the approximation is accurate pointwise and on average. The traces (Figure 4(b)) on \(y=0.5\) exhibit no oscillation, and one can note a few corners on the curve, verifying that the ReLU NN function approximation is a CPWL function.
The partition generated by the breaking lines (Figure 4(e)) of the approximation shows how the exact solution was approximated.

Figure 3: Approximation results of the problem in subsection 4.2

Table 2: Relative errors of the problem in subsection 4.2

| Network structure | \(\Vert u-u_{T}^{N}\Vert_{0}/\Vert u\Vert_{0}\) | \(\Vert u-u_{T}^{N}\Vert_{\boldsymbol{\beta}}/\Vert u\Vert_{\boldsymbol{\beta}}\) | \(\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{f})/\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{0})\) | Parameters |
| --- | --- | --- | --- | --- |
| 2-20-20-1 | 0.078036 | 0.013157 | 0.010386 | 501 |

Figure 4: Approximation results of the problem in subsection 4.3

Table 3: Relative errors of the problem in subsection 4.3

| Network structure | \(\Vert u-u_{T}^{N}\Vert_{0}/\Vert u\Vert_{0}\) | \(\Vert u-u_{T}^{N}\Vert_{\boldsymbol{\beta}}/\Vert u\Vert_{\boldsymbol{\beta}}\) | \(\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{f})/\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{0})\) | Parameters |
| --- | --- | --- | --- | --- |
| 2-40-40-1 | 0.041491 | 0.016480 | 0.012733 | 1801 |

### Problem with a piecewise constant advection velocity field

Let \(\bar{\Omega}=\bar{\Upsilon}_{1}\cup\bar{\Upsilon}_{2}\) with \[\Upsilon_{1}=\{(x,y)\in\Omega:y\geq x\}\quad\text{and}\quad\Upsilon_{2}=\Omega\setminus\Upsilon_{1}.\] The advective velocity field is a piecewise constant field given by \[\boldsymbol{\beta}(x,y)=\left\{\begin{array}{cl}(-1,\sqrt{2}-1)^{T},&(x,y)\in\Upsilon_{1},\\ (1-\sqrt{2},1)^{T},&(x,y)\in\Upsilon_{2}.\end{array}\right. \tag{4.6}\] The inflow boundary and the inflow boundary condition are given by \[\Gamma_{-}=\{(1,y):y\in(0,1)\}\cup\{(x,0):x\in(0,1)\}\] \[\text{and}\ \ g(x,y)=\left\{\begin{array}{rl}\exp(x/(\sqrt{2}-1))x,&(x,y)\in\Gamma_{-}^{1}\equiv\{(x,0):x\in(0,1)\},\\ \exp(1/(\sqrt{2}-1))(11+(\sqrt{2}-1)y),&(x,y)\in\Gamma_{-}^{2}=\Gamma_{-}\setminus\Gamma_{-}^{1},\end{array}\right.\] respectively. Let \[\widehat{\Upsilon}_{11}=\{(x,y)\in\Upsilon_{1}:y<(1-\sqrt{2})x+1\},\ \widehat{\Upsilon}_{12}=\Upsilon_{1}\setminus\widehat{\Upsilon}_{11},\] \[\widehat{\Upsilon}_{21}=\{(x,y)\in\Upsilon_{2}:y<\frac{1}{1-\sqrt{2}}(x-1)\},\ \text{and}\ \widehat{\Upsilon}_{22}=\Upsilon_{2}\setminus\widehat{\Upsilon}_{21}.\] The exact solution of this test problem is \[u(x,y)=\left\{\begin{array}{rl}\exp(\sqrt{2}x+y)(y+(\sqrt{2}-1)x),&(x,y)\in\widehat{\Upsilon}_{11},\\ \exp(\sqrt{2}x+y)(y+(\sqrt{2}-1)x+10),&(x,y)\in\widehat{\Upsilon}_{12},\\ \exp(x/(\sqrt{2}-1))(x+(\sqrt{2}-1)y),&(x,y)\in\widehat{\Upsilon}_{21},\\ \exp(x/(\sqrt{2}-1))(x+(\sqrt{2}-1)y+10),&(x,y)\in\widehat{\Upsilon}_{22}.\end{array}\right. \tag{4.7}\] The LSNN method with a random initialization and 300000 iterations was implemented with 2-40-40-1 ReLU NN functions. The numerical results are presented in Figure 5 and Table 4. The exact solution (Figure 5(c)) consists of four non-constant smooth parts and has a non-constant jump along two connected line segments (Figure 5(a)). The traces (Figure 5(b)) of the exact and numerical solutions, the 3-layer ReLU NN function approximation (Figure 5(d)), and Table 4 indicate that the approximation is accurate pointwise and on average. Most of the second layer breaking lines (Figure 5(e)) are along the discontinuous interface, corresponding to sharp transition layers of the approximation for approximating the jump. The LSNN method with a random initialization and 300000 iterations was implemented with 2-60-60-1 ReLU NN functions.
The numerical results are presented in Figure 6 and Table 5. We increased the number of hidden neurons, and \(\rho\) was set to \(h/15\) in the finite difference quotient in (4.1) because of the jump along the curved interface (Figure 6(a)). Although a theoretical analysis of the convergence of the method in the case of a smooth interface was not conducted, Figures 6(b) to 6(d) and Table 5 show that the LSNN method is still capable of accurately approximating the discontinuous solution with the curved interface without oscillation. Finally, again, most of the second layer breaking lines are along the interface (Figure 6(e)) to approximate the discontinuous jump.

Figure 5: Approximation results of the problem in subsection 4.4

Figure 6: Approximation results of the problem in subsection 4.5

Table 5: Relative errors of the problem in subsection 4.5

| Network structure | \(\Vert u-u_{T}^{N}\Vert_{0}/\Vert u\Vert_{0}\) | \(\Vert u-u_{T}^{N}\Vert_{\boldsymbol{\beta}}/\Vert u\Vert_{\boldsymbol{\beta}}\) | \(\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{f})/\mathcal{L}^{1/2}(u_{T}^{N},\mathbf{0})\) | Parameters |
| --- | --- | --- | --- | --- |
| 2-60-60-1 | 0.046528 | 0.049423 | 0.019995 | 3901 |

## 5 Proof of Theorem 3.1

In this section, we provide the proof of Theorem 3.1 by constructing a CPWL function to approximate \(\chi(\mathbf{x})\) in (3.1). Note that \(a(\mathbf{x})\) is generally a cylindrical surface and that the jump of \(\chi(\mathbf{x})\) is non-constant, with \[\chi(x,0)=a(x,0)=\alpha_{1}-\alpha_{2}\quad\forall\ x\in[0,x_{0}]\] (see Figure 7(a)). For a given \(\varepsilon_{1}>0\), take \(\widehat{\Upsilon}=(0,1)\times(y_{0},y_{0}+\varepsilon_{1})\) (see Figure 7(b)). Without loss of generality, let \(\alpha_{1}=1\), \(\alpha_{2}=0\), and \(\widehat{\Upsilon}=(0,1)\times(0,\varepsilon_{1})\).

Figure 7: Illustration of the convergence analysis on one subdomain \(\Upsilon_{i}\)

Hence, \[\chi(\mathbf{x})=\chi_{0}(\mathbf{x})+\chi_{1}(\mathbf{x})\text{ on }\widehat{\Upsilon}, \tag{5.1}\] where \(\chi_{0}(\mathbf{x})\) is a step function and \(\chi_{1}(\mathbf{x})\) vanishes on the inflow boundary, given by \[\chi_{0}(\mathbf{x})=\left\{\begin{array}{ll}1,&\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon},\\ 0,&\mathbf{x}\in\Omega_{2i}\cap\widehat{\Upsilon},\end{array}\right.\quad\text{and}\quad\chi_{1}(\mathbf{x})=\left\{\begin{array}{ll}a(\mathbf{x})-1,&\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon},\\ 0,&\mathbf{x}\in\Omega_{2i}\cap\widehat{\Upsilon}.\end{array}\right.\]

**Lemma 5.1**: _Let_ \[b(\mathbf{x})=\left\{\begin{array}{ll}\boldsymbol{b}\cdot(\mathbf{x}-(x_{0},0)),&\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon},\\ 0,&\mathbf{x}\in\Omega_{2i}\cap\widehat{\Upsilon},\end{array}\right.\] _where \(\boldsymbol{b}=(0,d)^{T}\) is a constant vector, and let \(p_{1}(\mathbf{x})\) be a two-layer neural network function on \(\widehat{\Upsilon}\) given by_ \[p_{1}(\mathbf{x})=-c\,\sigma(\mathbf{w}_{1}\cdot\mathbf{x}+x_{0})+c\,\sigma(\mathbf{w}_{2}\cdot\mathbf{x}+x_{0})\] _with weights and coefficient_ \[\mathbf{w}_{1}=\begin{pmatrix}-1\\ -\varepsilon_{2}\end{pmatrix},\quad\mathbf{w}_{2}=\begin{pmatrix}-1\\ \varepsilon_{2}\end{pmatrix},\quad c=\frac{d}{2\varepsilon_{2}}\] _(see Figure 7(c))._
_Then we have on \(\widehat{\Upsilon}\),_ \[\left|\!\left|\!\left|b-p_{1}\right|\!\right|\!\right|_{\boldsymbol{\beta}}=\left(\|b-p_{1}\|_{0,\widehat{\Upsilon}}^{2}+\|b_{\boldsymbol{\beta}}-p_{1\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}^{2}\right)^{1/2}\leq\sqrt{\frac{\varepsilon_{1}^{3}}{24}+\frac{B^{2}}{4}}\,|d|\,\sqrt{\varepsilon_{1}\varepsilon_{2}}, \tag{5.2}\] _where we assume \(|v_{2}(\mathbf{x})|\leq B\) (recall \(\boldsymbol{\beta}(\mathbf{x})=(0,v_{2}(\mathbf{x}))\))._

Proof.: Set \[\widehat{\Upsilon}_{\varepsilon_{2}}\equiv\widehat{\Upsilon}_{1,\varepsilon_{2}}\cup\widehat{\Upsilon}_{2,\varepsilon_{2}}\equiv\{\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}:\,\mathbf{w}_{1}\cdot\mathbf{x}+x_{0}<0\}\cup\{\mathbf{x}\in\Omega_{2i}\cap\widehat{\Upsilon}:\,\mathbf{w}_{2}\cdot\mathbf{x}+x_{0}>0\}.\] Then we have \[b(\mathbf{x})-p_{1}(\mathbf{x})=\left\{\begin{array}{ll}-c(\mathbf{w}_{1}\cdot\mathbf{x}+x_{0}),&\mathbf{x}\in\widehat{\Upsilon}_{1,\varepsilon_{2}},\\ -c(\mathbf{w}_{2}\cdot\mathbf{x}+x_{0}),&\mathbf{x}\in\widehat{\Upsilon}_{2,\varepsilon_{2}},\\ 0,&\mathbf{x}\in\widehat{\Upsilon}\setminus\widehat{\Upsilon}_{\varepsilon_{2}}.\end{array}\right.\] Calculating the double integral, \[\|b-p_{1}\|_{0,\widehat{\Upsilon}}^{2}=\|b-p_{1}\|_{0,\widehat{\Upsilon}_{1,\varepsilon_{2}}}^{2}+\|b-p_{1}\|_{0,\widehat{\Upsilon}_{2,\varepsilon_{2}}}^{2}=\frac{d^{2}}{24}\varepsilon_{1}^{4}\varepsilon_{2}, \tag{5.3}\] and using the directional derivative, \[\|b_{\boldsymbol{\beta}}-p_{1\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}^{2}=\int_{\widehat{\Upsilon}_{1,\varepsilon_{2}}}(c\mathbf{w}_{1}\cdot\boldsymbol{\beta})^{2}\,d\mathbf{x}+\int_{\widehat{\Upsilon}_{2,\varepsilon_{2}}}(c\mathbf{w}_{2}\cdot\boldsymbol{\beta})^{2}\,d\mathbf{x}\leq\frac{(dB)^{2}}{4}\varepsilon_{1}\varepsilon_{2}. \tag{5.4}\] Now (5.2) follows from (5.3) and (5.4).

**Lemma 5.2**: _Let \(\widehat{\Upsilon}\), \(I\), \(b(\mathbf{x})\), \(p_{1}(\mathbf{x})\), and \(\boldsymbol{\beta}(\mathbf{x})\) be as in Lemma 5.1 with \(d=\chi_{1}(0,\varepsilon_{1})/\varepsilon_{1}\), and let \(p_{0}(\mathbf{x})\) be a two-layer neural network function on \(\widehat{\Upsilon}\) given by_ \[p_{0}(\mathbf{x})=\frac{1}{2\varepsilon_{2}}\left(\sigma(x-x_{0}+\varepsilon_{2})-\sigma(x-x_{0}-\varepsilon_{2})\right). \tag{5.5}\] _Then we have on \(\widehat{\Upsilon}\),_ \[\left|\!\left|\!\left|\chi-(p_{0}+p_{1})\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq C_{1}\sqrt{\varepsilon_{1}\varepsilon_{2}}+C_{2}\sqrt{\varepsilon_{1}}, \tag{5.6}\] _where \(C_{2}\) is given by the square root of_ \[\sup\{(u_{1}(\mathbf{x})-u_{2}(\mathbf{x})-1-b(\mathbf{x}))^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}\}\,x_{0}+\sup\{2\gamma(\mathbf{x})^{2}(u_{2}(\mathbf{x})-u_{1}(\mathbf{x}))^{2}+2(dB)^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}\}\,x_{0}. \tag{5.7}\]

Proof.: From the triangle inequality, \[\left|\!\left|\!\left|\chi-(p_{0}+p_{1})\right|\!\right|\!\right|_{\boldsymbol{\beta}}=\left|\!\left|\!\left|\chi_{0}+\chi_{1}-(p_{0}+p_{1})\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq\left|\!\left|\!\left|\chi_{0}-p_{0}\right|\!\right|\!\right|_{\boldsymbol{\beta}}+\left|\!\left|\!\left|\chi_{1}-p_{1}\right|\!\right|\!\right|_{\boldsymbol{\beta}}. \tag{5.8}\]
Since \(\|\chi_{0\boldsymbol{\beta}}-p_{0\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}=0\), calculating the double integral, \[\left|\!\left|\!\left|\chi_{0}-p_{0}\right|\!\right|\!\right|_{\boldsymbol{\beta}}=\left(\|\chi_{0}-p_{0}\|_{0,\widehat{\Upsilon}}^{2}+\|\chi_{0\boldsymbol{\beta}}-p_{0\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}^{2}\right)^{1/2}=\|\chi_{0}-p_{0}\|_{0,\widehat{\Upsilon}}=\frac{1}{\sqrt{6}}\sqrt{\varepsilon_{1}\varepsilon_{2}}. \tag{5.9}\] Next, again by the triangle inequality, \[\left|\!\left|\!\left|\chi_{1}-p_{1}\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq\left|\!\left|\!\left|\chi_{1}-b\right|\!\right|\!\right|_{\boldsymbol{\beta}}+\left|\!\left|\!\left|b-p_{1}\right|\!\right|\!\right|_{\boldsymbol{\beta}}.\] By Lemma 5.1, \[\left|\!\left|\!\left|b-p_{1}\right|\!\right|\!\right|_{\boldsymbol{\beta}}\leq\sqrt{\frac{\varepsilon_{1}^{3}}{24}+\frac{B^{2}}{4}}\,|d|\,\sqrt{\varepsilon_{1}\varepsilon_{2}}.\] To bound \(\left|\!\left|\!\left|\chi_{1}-b\right|\!\right|\!\right|_{\boldsymbol{\beta}}\), recall the definition of the graph norm, \[\left|\!\left|\!\left|\chi_{1}-b\right|\!\right|\!\right|_{\boldsymbol{\beta}}=\left(\|\chi_{1}-b\|_{0,\widehat{\Upsilon}}^{2}+\|\chi_{1\boldsymbol{\beta}}-b_{\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}^{2}\right)^{1/2}.\] First we have \[\|\chi_{1}-b\|_{0,\widehat{\Upsilon}}^{2}=\|\chi_{1}-b\|_{0,\Omega_{1i}\cap\widehat{\Upsilon}}^{2}\leq\sup\{(\chi_{1}(\mathbf{x})-b(\mathbf{x}))^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}\}\,\varepsilon_{1}x_{0}=\sup\{(u_{1}(\mathbf{x})-u_{2}(\mathbf{x})-1-b(\mathbf{x}))^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}\}\,\varepsilon_{1}x_{0}.\] Next, observing \(b_{\boldsymbol{\beta}}=dv_{2}\) and, from (2), \[\chi_{1\boldsymbol{\beta}}=(u_{1}-u_{2}-1)_{\boldsymbol{\beta}}=(u_{1}-u_{2})_{\boldsymbol{\beta}}=\gamma(u_{2}-u_{1}),\] we have \[\|\chi_{1\boldsymbol{\beta}}-b_{\boldsymbol{\beta}}\|_{0,\widehat{\Upsilon}}^{2}=\|\chi_{1\boldsymbol{\beta}}-b_{\boldsymbol{\beta}}\|_{0,\Omega_{1i}\cap\widehat{\Upsilon}}^{2}=\|\gamma(u_{2}-u_{1})-dv_{2}\|_{0,\Omega_{1i}\cap\widehat{\Upsilon}}^{2}\leq\left\|\sqrt{2\gamma^{2}(u_{2}-u_{1})^{2}+2(dv_{2})^{2}}\right\|_{0,\Omega_{1i}\cap\widehat{\Upsilon}}^{2}\leq\sup\{2\gamma(\mathbf{x})^{2}(u_{2}(\mathbf{x})-u_{1}(\mathbf{x}))^{2}+2(dB)^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}\}\,\varepsilon_{1}x_{0}.\] Combining the above inequalities, we obtain (5.6). Given \(\varepsilon_{3}>0\), choose \(\varepsilon_{1}=1/m\) such that \[\sup\{(u_{1}(\mathbf{x})-u_{2}(\mathbf{x})-\chi(0,j/m)-b_{j}(\mathbf{x}))^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}_{j}\}<\varepsilon_{3}\quad\text{and}\quad\sup\{2\gamma(\mathbf{x})^{2}(u_{2}(\mathbf{x})-u_{1}(\mathbf{x}))^{2}+2(d_{j}B)^{2}:\mathbf{x}\in\Omega_{1i}\cap\widehat{\Upsilon}_{j}\}<\varepsilon_{3}, \tag{5.10}\] where \(\widehat{\Upsilon}_{j}=(0,1)\times(j/m,(j+1)/m)\) for \(j=0,\ldots,m-1\), and \(b_{j}(\mathbf{x})\), \(d_{j}\) are as in Lemma 5.2 on each \(\widehat{\Upsilon}_{j}\).
Then define on each \(\widehat{\Upsilon}_{j}\) the functions \(p_{0j}(\mathbf{x})\) and \(p_{1j}(\mathbf{x})\) as in Lemma 5.2, and construct a CPWL function \(p_{i}(\mathbf{x})\) on \(\Upsilon_{i}\) given by \[p_{i}(\mathbf{x})=p_{0j}(\mathbf{x})+p_{1j}(\mathbf{x}),\;\mathbf{x}\in\widehat{\Upsilon}_{j}.\]

Proof of Theorem 3.1.: By Lemma 5.2 and the given condition, \[\begin{split}\left|\!\left|\!\left|\chi-p_{i}\right|\!\right|\!\right|_{\boldsymbol{\beta}}&=\left(\sum_{j=0}^{m-1}\left|\!\left|\!\left|\chi-(p_{0j}+p_{1j})\right|\!\right|\!\right|_{\boldsymbol{\beta}}^{2}\right)^{1/2}\\ &\leq\left(\sum_{j=0}^{m-1}\left(D_{1j}\sqrt{\varepsilon_{1}\varepsilon_{2}}+D_{2j}\sqrt{\varepsilon_{1}\varepsilon_{3}}\right)^{2}\right)^{1/2}\\ &\leq\left(m\max_{0\leq j\leq m-1}\left(D_{1j}\sqrt{\varepsilon_{1}\varepsilon_{2}}+D_{2j}\sqrt{\varepsilon_{1}\varepsilon_{3}}\right)^{2}\right)^{1/2}\\ &=\sqrt{m}\max_{0\leq j\leq m-1}\left(D_{1j}\sqrt{\varepsilon_{1}\varepsilon_{2}}+D_{2j}\sqrt{\varepsilon_{1}\varepsilon_{3}}\right)\\ &=\max_{0\leq j\leq m-1}\left(D_{1j}\sqrt{\varepsilon_{2}}+D_{2j}\sqrt{\varepsilon_{3}}\right),\end{split} \tag{5.11}\] where, for the first identity, each norm on the right-hand side is taken over \(\widehat{\Upsilon}_{j}\). Now \(D_{1}=D_{1k}\) and \(D_{2}=D_{2k}\) for some \(0\leq k\leq m-1\).
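To make the construction in Lemmas 5.1 and 5.2 concrete, the following small numerical sketch (our own illustration, with arbitrarily chosen \(d\), \(x_{0}\), and \(\varepsilon_{2}\)) evaluates the two-layer ReLU function \(p_{1}\) of Lemma 5.1 and checks that it reproduces the discontinuous target \(b\) exactly away from the transition layer:

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

x0, eps2, d = 0.5, 1e-3, 2.0   # interface, transition width, slope (assumed)
c = d / (2 * eps2)

def p1(x, y):
    """Two-layer ReLU function of Lemma 5.1 with w1=(-1,-eps2), w2=(-1,eps2)."""
    return -c * relu(-x - eps2 * y + x0) + c * relu(-x + eps2 * y + x0)

def b(x, y):
    """Target: b(x) = d*y on Omega_1i (x < x0), and 0 on Omega_2i."""
    return d * y if x < x0 else 0.0

for x, y in [(0.2, 0.7), (0.8, 0.7)]:   # points away from the transition layer
    print(abs(p1(x, y) - b(x, y)))       # exactly 0 outside the layer
```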
2301.12951
Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications. To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations. Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance. Yet, the interplay between these two aspects remains unexplored. In this paper, we pioneer the exploration of the interaction between the privacy risks of edge leakage and the individual fairness of a GNN. Our theoretical analysis unravels that edge privacy risks unfortunately escalate when the nodes' individual fairness improves. Such an issue hinders the accomplishment of privacy and fairness of GNNs at the same time. To balance fairness and privacy, we carefully introduce fairness-aware loss reweighting based on influence function and privacy-aware graph structure perturbation modules within a fine-tuning mechanism. Experimental results underscore the effectiveness of our approach in achieving GNN fairness with limited performance compromise and controlled privacy risks. This work contributes to the comprehensively developing trustworthy GNNs by simultaneously addressing both fairness and privacy aspects.
He Zhang, Xingliang Yuan, Shirui Pan
2023-01-30T14:52:23Z
http://arxiv.org/abs/2301.12951v2
# On the Interaction between Node Fairness and Edge Privacy ###### Abstract Due to the emergence of graph neural networks (GNNs) and their widespread implementation in real-world scenarios, the fairness and privacy of GNNs have attracted considerable interest since they are two essential social concerns in the era of building trustworthy GNNs. Existing studies have respectively explored the fairness and privacy of GNNs and exhibited that both fairness and privacy come at the cost of GNN performance. However, the interaction between them is yet to be explored and understood. In this paper, we investigate the interaction between the fairness of a GNN and its privacy for the first time. We empirically identify that edge privacy risks increase when the individual fairness of nodes is improved. Next, we present the intuition behind such a trade-off and employ the influence function and Pearson correlation to measure it theoretically. To take the performance, fairness, and privacy of GNNs into account simultaneously, we propose implementing fairness-aware reweighting and privacy-aware graph structure perturbation modules in a retraining mechanism. Experimental results demonstrate that our method is effective in implementing GNN fairness with limited performance cost and restricted privacy risks.

## 1 Introduction

In recent years, in addition to competent performance, there has been an increasing desire for fairness [1, 2, 3] and private information security [21, 22] in GNNs, because both privacy and fairness are essential concerns of GNN users [16]. In the context of GNNs, existing studies [14, 15] only focus on performance and privacy/fairness; however, privacy and fairness are not isolated from each other in practical scenarios. For example, in e-commerce platforms [12, 13], where users and items are considered as nodes and their interactions are regarded as edges in a graph, a recommender system should serve all users with competent performance. Meanwhile, user bias [21] should be avoided and similar users (i.e., nodes) should receive similar item recommendations to boost individual fairness; the purchase records of a customer (i.e., edges between nodes) should not be exposed to others without permission. Therefore, it is necessary to study the interaction among performance, fairness, and privacy to build trustworthy GNNs comprehensively [16, 15]. Existing literature has studied the interaction between performance and fairness/privacy of GNNs. For example, to boost individual fairness in GNNs, a method called REDRESS [11] proposes to add a regularisation term concerning fairness, from the ranking perspective, into the loss function of GNNs. Moreover, InFoRM [15] uses the Lipschitz property and proposes a metric to evaluate the individual fairness of nodes. This metric can also be incorporated into the loss function of a target model to reduce the bias existing in the GNN predictions. To promote edge privacy, for instance, Wu _et al._ [21] explore the vulnerability of edges in the training graph and introduce differential privacy (DP) mechanisms [17] to protect edges from leakage. These studies also demonstrate that both individual fairness and edge privacy come at the cost of GNN performance. To comprehensively build trustworthy GNNs, it is essential to study the interaction between fairness and privacy.
Currently, a few works have explored the impact of improving model privacy on fairness [16] or promoting algorithm fairness on privacy [16], which are in the context of general machine learning models for independent and identically distributed (IID) data. In contrast, this paper will study the interaction of node fairness and edge privacy of GNNs for complex graph data, which has not yet been explored and measured to the best of our knowledge. Moreover, given the trade-off between fairness/privacy and performance and the unexplored interaction between fairness and privacy, a challenging research topic in building trustworthy GNNs is how to ensure the privacy and fairness of a GNN model simultaneously while keeping competent model performance (i.e., with limited performance cost). In this paper, we empirically verify the adverse effect of individual fairness of nodes on edge privacy in GNN models. To understand this observation, we employ the influence functions of training samples to measure and explain this trade-off. Furthermore, we propose a Privacy-aware Perturbations and Fairness-aware Re-weighting (PPFR) method to implement GNN fairness with limited performance cost and restricted privacy risks. The contributions of this paper are summarised as follows: * In this paper, we explore the interaction between fairness and privacy of GNNs for the first time. We empirically show that the edge privacy risk increases when enhancing the individual fairness of nodes, i.e., there exists a trade-off between the fairness and privacy of GNNs. * To understand GNN behaviours concerning different trustworthiness aspects (e.g., fairness and privacy), we propose employing the influence function and Pearson correlation coefficient to quantitatively measure the interaction (e.g., trade-off) between them. * Based on a re-weighting method and a graph structure perturbation method, we propose a novel method to devise competent GNNs with reduced bias and edge leakage risks, whose effectiveness has been demonstrated by our experimental evaluations. ## 2 Background **Graphs.** A graph \(G=\{\mathcal{V},\mathcal{E}\}\) includes a node set \(\mathcal{V}=\big{\{}v_{1},\dots,v_{|\mathcal{V}|}\big{\}}\) and an edge set \(\mathcal{E}\). \(\mathcal{E}\) characterises the relationship information in \(G\). The set of edges can also be denoted by an adjacency matrix \(\mathbf{A}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\), in which \(\mathbf{A}_{i,j}=1\) when \(e_{ij}=(v_{i},v_{j})\in\mathcal{E}\), otherwise \(\mathbf{A}_{i,j}=0\). Matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times k}\) (\(k\) indicates the dimensionality of features) denotes node features; the \(i\)-th row of \(\mathbf{X}\) represents the feature of node \(v_{i}\). Without loss of generality, another description form of a graph is \(G=\{\mathbf{A},\mathbf{X}\}\). In this paper, we focus on undirected graphs, i.e., \(\mathbf{A}_{i,j}=\mathbf{A}_{j,i}\). **Node Classification** and **GCN.** For a graph \(G=\{\mathcal{V},\mathcal{E}\}\), the set of labelled nodes is denoted by \(\mathcal{V}_{l}\subset\mathcal{V}\), where \(y_{v}\) is the label of \(v\in\mathcal{V}_{l}\). The set of unlabelled nodes in \(G\) is indicated by \(\mathcal{V}_{u}=\mathcal{V}\setminus\mathcal{V}_{l}\). Given \(G\) and node labels, node classification aims to train a GNN model \(f\), which can predict labels for nodes in \(\mathcal{V}_{u}\). In this paper, we consider the graph convolutional network (GCN) model [13], which is a typical GNN model for node classification. 
Given an \(L\)-layer GCN model, we assume that \(\mathbf{E}^{(l)}\) and \(\mathbf{W}^{(l)}\) represent the output node embeddings and the weight matrix of the \(l\)-th hidden layer, respectively. The graph convolution at the \(l\)-th layer can be formulated as \(\mathbf{E}^{(l)}=\sigma(\hat{\mathbf{A}}\mathbf{E}^{(l-1)}\mathbf{W}^{(l)})\), where \(\sigma\) is the nonlinear activation, \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\tilde{\mathbf{D}}^{-\frac{1}{2}}\), and \(\tilde{\mathbf{D}}\) is the degree matrix of \((\mathbf{A}+\mathbf{I})\). **Individual Fairness.** To be fair to all users, GNNs should ensure that everyone is treated equally and receives the same quality of service regardless of their background. In the context of node classification, individual fairness requires that any two similar nodes receive similar GNN predictions [12]. Specifically, given the similarity matrix \(\mathbf{S}\) of nodes and GNN predictions \(\mathbf{Y}\), the bias with respect to individual fairness is measured by \(Bias(\mathbf{Y},\mathbf{S})=Tr(\mathbf{Y}^{T}\mathbf{L_{S}}\mathbf{Y})\), where \(\mathbf{Y}^{T}\) represents the transpose of \(\mathbf{Y}\), and \(\mathbf{L_{S}}\) indicates the Laplacian matrix of \(\mathbf{S}\) [12]. To improve the individual fairness of GNNs, a method called InFoRM proposes to incorporate \(Bias(\mathbf{Y},\mathbf{S})\) into the loss function during the training phase of GNNs [12]. **Link Stealing Attacks.** Existing studies [12, 13] have demonstrated that attackers are capable of inferring the existence of a link between any two nodes in the training graph of a GNN model. For example, by querying node predictions of the target GNN model, attackers can use the prediction similarity of node pairs to infer whether two nodes are connected in a specific node pair [12]. This attack is based on the intuition that if two nodes share more similar predictions from the target GNN model, then there is a greater likelihood of the nodes being linked together [12]. Currently, existing methods employ the AUC score to evaluate the vulnerability of a GNN model to link-stealing attacks, i.e., the leakage risk of edges in the training graph [12, 13]. ## 3 Interaction Between Fairness and Privacy: A Preliminary Study This section presents a preliminary study on the node classification task to assess the effect of promoting individual fairness on the risk of edge leakage. Introducing the term concerning fairness into the loss function of a GNN model is effective in reducing bias, while it potentially affects the privacy risk of edges in the training graph. ### Preliminary Study Settings **Datasets and Models.** In our preliminary experiments, we employ Cora [13], Citeseer [13], and Pubmed [13] datasets, which are commonly used in evaluating GNNs in node classification. Models selected for this study are GCNs [13] with 16 hidden layers that employ ReLU and softmax activation functions. We use accuracy as the metric for evaluating the performance of GCNs. **Fairness.** Following previous studies [12, 13], we combine \(Bias(\mathbf{Y},\mathbf{S})\) and the original loss function together in the training phase to promote fairness. In this paper, the similarity matrix \(\mathbf{S}\) is defined as the Jaccard index [12], and the bias in GNN predictions is measured by \(Bias(\mathbf{Y},\mathbf{S})\). The smaller the bias value, the fairer the GNN prediction \(\mathbf{Y}\). 
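To make the fairness metric concrete, the following is a minimal sketch (ours, not the authors' released code) of the Jaccard-based similarity and the bias measure \(Bias(\mathbf{Y},\mathbf{S})=Tr(\mathbf{Y}^{T}\mathbf{L_{S}}\mathbf{Y})\); it assumes a dense binary adjacency matrix and dense GNN predictions, and the function names are hypothetical.

```python
import torch

def jaccard_similarity(adj: torch.Tensor) -> torch.Tensor:
    """Pairwise Jaccard index of 1-hop neighbourhoods (dense 0/1 adjacency)."""
    inter = adj @ adj.T                 # |N(i) ∩ N(j)| for every node pair
    deg = adj.sum(dim=1, keepdim=True)  # |N(i)|
    union = deg + deg.T - inter         # |N(i) ∪ N(j)|
    return inter / union.clamp(min=1)

def individual_fairness_bias(y: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Bias(Y, S) = Tr(Y^T L_S Y), with L_S the Laplacian of the similarity S."""
    laplacian = torch.diag(s.sum(dim=1)) - s
    return torch.trace(y.T @ laplacian @ y)

# Training with fairness regularisation ("Reg") then amounts to minimising
#   loss = task_loss + lam * individual_fairness_bias(y_pred, s)
# for some regularisation weight lam (a hyper-parameter).
```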
**Privacy.** In this paper, we assume that attackers can only query target models to obtain node predictions from target GNNs, which is the most practical link-stealing attack (i.e., Attack-0 in [12]). Based on prediction similarity, attackers attempt to infer the existence of an edge in any node pair in the training graph. The edge leakage risk is measured by the AUC (area under the ROC curve) score, where larger values indicate higher privacy risks. Following a previous study [12], the prediction similarity is calculated using Cosine, Euclidean, Correlation, Chebyshev, Braycurtis, Canberra, Cityblock and Squeuclidean distances. ### Observations In our preliminary studies, we focus on the change of privacy risk on edges when boosting individual fairness in GNN predictions. As shown in Table 1, we observe that boosting fairness comes at the cost of model performance, i.e., the prediction accuracy is sacrificed when reducing bias on all datasets. However, performance reduction is not the only adverse effect. Changes in AUC scores indicate that edge leakage risks increase when GNN fairness is promoted. This phenomenon can be explained by the definition of \(\mathbf{S}\) and its different influences on different node pairs. In homophily graphs (e.g., Cora, Citeseer, Pubmed), similar nodes (i.e., nodes with the same label) are more likely to connect to each other [15, 14]. In these graphs, calculating the Jaccard similarity [14] between nodes leads \(\mathbf{S}\) to assign higher values (e.g., 1) to node pairs in which the two nodes share a higher proportion of the same 1-hop neighbour nodes, and lower values (e.g., 0) to other node pairs. Therefore, boosting individual fairness based on Jaccard similarity has limited or even zero influence on the latter node pairs (i.e., almost unchanged large distances), but encourages nodes in the former to obtain more similar predictions (i.e., smaller distances). As a byproduct of promoting fairness, the distinction between connected and unconnected node pairs is increased. Although we can explain the interaction between fairness and privacy intuitively, comprehensively building trustworthy GNNs calls for quantitatively measuring this trade-off and balancing the performance, fairness, and privacy of GNNs. ## 4 Method This section presents our method for promoting the fairness of GNN models while restricting edge privacy risks. We first introduce how to evaluate the influence of training samples and then employ a correlation index to measure the interaction between fairness and privacy. Finally, we propose a method that can restrict edge privacy risks when promoting GNN fairness. ### The Influence of Training Samples In this section, we first present how the weight of training samples influences the parameters of models, and then discuss its influence on functions of interest. **Influence of Training Samples on GNN Parameters** Generally, given a set of labelled nodes in graph \(G\), we can train a GCN model by minimising the following loss function [23]: \[\theta^{*}(\mathbf{1})=\arg\min_{\theta}\sum_{v\in\mathcal{V}_{l}}L(\hat{y}_{v},y_{v};\theta), \tag{1}\] where \(y_{v}\) represents the ground truth label of node \(v\in\mathcal{V}_{l}\), and \(\hat{y}_{v}\) indicates the predicted label from the GCN model with parameter \(\theta\). The \(\mathbf{1}\) (i.e., all-one vector) here represents that all nodes in \(\mathcal{V}_{l}\) are treated equally during the training of the GCN model. 
When changing the weight of training samples, the obtained parameter can be expressed as \[\theta^{*}(\mathbf{1}+\mathbf{w})=\arg\min_{\theta}\sum_{v\in\mathcal{V}_{l}}(1+w_{v})L(\hat{y}_{v},y_{v};\theta), \tag{2}\] where \(w_{v}\in[-1,1]\) and \((1+w_{v})\) represents the weight of node \(v\) in the loss function when training a GCN model (e.g., \(w_{v}=-1\) indicates leaving node \(v\) out of the training phase). To estimate \(\theta^{*}(\mathbf{1}+\mathbf{w})\) without retraining the GCN model, here we employ the influence function [13, 14] and Taylor expansion to conduct the following first-order approximation: \[\theta^{*}(\mathbf{1}+\mathbf{w})\approx\theta^{*}(\mathbf{1})+\sum_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{\theta^{*}(\mathbf{1})}(v), \tag{3}\] where \(\mathcal{I}_{\theta^{*}(\mathbf{1})}(v)=\frac{d\theta^{*}(\mathbf{1})}{dw_{v}}|_{w_{v}=0}\) is the influence function with respect to node \(v\). According to the classical analysis [13], it can be calculated as \[\mathcal{I}_{\theta^{*}(\mathbf{1})}(v)=\mathbf{H}_{\theta^{*}(\mathbf{1})}^{-1}\nabla_{\theta}L(\hat{y}_{v},y_{v};\theta^{*}(\mathbf{1})), \tag{4}\] where \(\mathbf{H}_{\theta^{*}(\mathbf{1})}=\frac{1}{|\mathcal{V}_{l}|}\sum_{v\in\mathcal{V}_{l}}\nabla_{\theta}^{2}L(\hat{y}_{v},y_{v};\theta^{*}(\mathbf{1}))\) is the Hessian matrix of the loss function with respect to parameter \(\theta^{*}(\mathbf{1})\). \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & & \multirow{2}{*}{Acc\(\uparrow\)} & \multirow{2}{*}{Bias\(\downarrow\)} & \multicolumn{8}{c}{Privacy Risks \(\downarrow\)} \\ \cline{5-12} & & & & Cosi & Eucl & Corr & Cheb & Bray & Canb & City & Squeu \\ \hline \multirow{2}{*}{Cora} & Vanilla & 86.12 & 0.0766 & 92.34 & 91.98 & 92.13 & 92.41 & 92.57 & 90.41 & 92.57 & 91.98 \\ \cline{2-12} & Reg & 85.38 & 0.0494 & 93.42 & 93.67 & 93.14 & 93.73 & 94.10 & 93.81 & 94.10 & 93.67 \\ \hline \multirow{2}{*}{Citeseer} & Vanilla & 63.66 & 0.0445 & 92.82 & 92.68 & 92.74 & 93.00 & 93.10 & 94.15 & 93.10 & 92.68 \\ \cline{2-12} & Reg & 63.11 & 0.0301 & 94.31 & 94.64 & 94.00 & 94.77 & 95.03 & 96.10 & 95.03 & 94.64 \\ \hline \multirow{2}{*}{Pubmed} & Vanilla & 85.37 & 0.0706 & 88.59 & 89.26 & 87.04 & 89.37 & 89.37 & 92.85 & 89.37 & 89.26 \\ \cline{2-12} & Reg & 83.37 & 0.0108 & 92.66 & 92.90 & 89.44 & 92.95 & 92.95 & 93.67 & 92.95 & 92.90 \\ \hline \hline \end{tabular} * In this table, “Vanilla” represents the generic training of GNNs, and “Reg” indicates that the fairness regularisation is introduced into the loss function of GNNs during their vanilla training. “Cosi”, “Eucl”, “Corr”, “Cheb”, “Bray”, “Canb”, “City”, and “Squeu” represent the Cosine, Euclidean, Correlation, Chebyshev, Braycurtis, Canberra, Cityblock and Squeuclidean distances, respectively. \end{table} Table 1: Comparison of the accuracy, bias, and privacy risks of GCN models. **Influence on Functions of Interest** To study the behaviour of GNN models, different functions have been proposed to evaluate GNN outputs. To evaluate whether GNNs treat all users equally, individual fairness requires that similar individuals are treated equally. As mentioned in Section 2, the existing bias [14] in a GNN model can be measured by \[f_{bias}(\theta)=Bias(\mathbf{Y},\mathbf{S})=Tr(\mathbf{Y}^{T}\mathbf{L_{S}}\mathbf{Y}). \tag{5}\]
To steal edges in the training graph, existing attacks [13] first calculate the distance of any two nodes with respect to their GNN predictions, then categorise all distances into two clusters with the KNN method. The authors propose to employ AUC scores between the connected and unconnected node pairs to evaluate the vulnerability of GNNs. In this paper, we use the following function to measure link leakage risks of the training graph, i.e., \[f_{risk}(\theta)=\frac{|\mathbb{E}[d_{0}(\hat{y}_{i},\hat{y}_{j})]-\mathbb{E}[d_{1}(\hat{y}_{i},\hat{y}_{j})]|}{(var(d_{0}(\hat{y}_{i},\hat{y}_{j}))+var(d_{1}(\hat{y}_{i},\hat{y}_{j})))/2}, \tag{6}\] where \(d(\hat{y}_{i},\hat{y}_{j})\) represents the distance between the GNN predictions on the \(i\)-th and \(j\)-th nodes in graph \(G\), and the subscript in \(d_{0}(\cdot,\cdot)/d_{1}(\cdot,\cdot)\) (i.e., 0 or 1) indicates that the input comes from unconnected/connected nodes. \(\mathbb{E}[\cdot]\) and \(var(\cdot)\) represent the mean and variance operations, respectively. We empirically verify the effectiveness of \(f_{risk}\) in measuring privacy risks and only show its value on the Citeseer dataset in Table 2 due to limited space. In this paper, we use the Taylor expansion of \(f\) with respect to the parameters \(\theta\) and Equation (3) to calculate the influence of training samples on \(f\). Specifically, we assume that both \(f_{bias}\) and \(f_{risk}\) are induced from the parameter \(\theta\) of the target GNN since the input of these functions is the final prediction of the target GNN. Therefore, the influence of training samples [12, 14] on a function of interest \(f\) can be expressed as \[\mathcal{I}_{f}(\mathbf{w}) \approx\nabla_{\theta}f(\theta^{*}(\mathbf{1}))^{T}\left[\theta^{*}(\mathbf{1}+\mathbf{w})-\theta^{*}(\mathbf{1})\right] \tag{7}\] \[\approx\nabla_{\theta}f(\theta^{*}(\mathbf{1}))^{T}\left[\sum_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{\theta^{*}(\mathbf{1})}(v)\right]\] _Remarks_. In this paper, any differentiable function that takes the GNN prediction \(\mathbf{Y}\) as input can be considered as \(f\). For example, \(f\) can be instantiated as the loss function that concerns the utility (i.e., performance) of the target model [11], i.e., \[\mathcal{I}_{util}(\mathbf{w})=\sum_{v\in\mathcal{V}_{l}}\nabla_{\theta}L(\hat{y}_{v},y_{v};\theta^{*}(\mathbf{1}))^{T}\left[\sum_{v^{\prime}\in\mathcal{V}_{l}}w_{v^{\prime}}\mathcal{I}_{\theta^{*}(\mathbf{1})}(v^{\prime})\right]. \tag{8}\] ### Measuring Interactions by Influence Functions Measuring the interaction between fairness and privacy is not trivial. This is because individual fairness is defined from the node perspective, while the privacy risk lies at the edge level. Thus, considering fairness and privacy in the same coordinate space is essential for measuring their interaction when building trustworthy GNNs. According to recent research [11], after the vanilla training of neural networks, considering the influence of training samples on model fairness and utility at the same time can achieve fairness at no utility cost, which inspires us to measure the interaction between fairness and privacy. First, we respectively calculate the influence on \(f_{bias}\) and \(f_{risk}\), which helps analyse their interaction in the same coordinate space. According to Equation 7, we can estimate how training samples impact \(f_{bias}\) and \(f_{risk}\) by calculating \(\mathcal{I}_{f}(\mathbf{w})\). 
Specifically, the influence on fairness can be expressed as follows, \[\mathcal{I}_{f_{bias}}(\mathbf{w})=\nabla_{\theta}f_{bias}(\theta^{*}(\mathbf{1}))^{T}\left[\sum_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{\theta^{*}(\mathbf{1})}(v)\right], \tag{9}\] and the influence on privacy risk can be calculated as \[\mathcal{I}_{f_{risk}}(\mathbf{w})=\nabla_{\theta}f_{risk}(\theta^{*}(\mathbf{1}))^{T}\left[\sum_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{\theta^{*}(\mathbf{1})}(v)\right]. \tag{10}\] In this paper, we use \(\mathbf{w}_{v}\) to denote an all-zero vector whose entry \(w_{v}\) is \(-1\). Thus, \(\mathcal{I}_{f}(\mathbf{w}_{v})\) represents leaving node \(v\) out of the training of a GCN model, i.e., \[\mathcal{I}_{f}(\mathbf{w}_{v})=-\nabla_{\theta}f(\theta^{*}(\mathbf{1}))^{T}\mathbf{H}_{\theta^{*}(\mathbf{1})}^{-1}\nabla_{\theta}L(\hat{y}_{v},y_{v};\theta^{*}(\mathbf{1})). \tag{11}\] Concatenating all \(\mathcal{I}_{f}(\mathbf{w}_{v})\) (\(v\in\mathcal{V}_{l},f=f_{bias/risk}\)) together, we obtain the influence vector \(\mathcal{I}_{f_{bias/risk}}\in\mathbb{R}^{1\times|\mathcal{V}_{l}|}\). Next, we employ the Pearson correlation coefficient \(r\) between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\) to measure the interactions between fairness and privacy. Specifically, \[r=\texttt{Pearson}(\mathcal{I}_{f_{bias}},\mathcal{I}_{f_{risk}}), \tag{12}\] where \(r\in[-1,1]\). In the context of measuring interactions between fairness and privacy, \(r=1\) represents a mutual promotion, while \(r=-1\) indicates there is a conflict between them. ### Boosting Fairness with Restricted Privacy Risks Building trustworthy GNNs requires considering performance and several aspects of trustworthiness simultaneously [15]; however, taking performance, fairness, and privacy into consideration at the same time is not trivial [15]. First, the existing literature shows that both fairness [13] and privacy [20] of GNNs come at the cost of performance. Furthermore, our empirical study (i.e., Tables 1 and 2) shows that promoting fairness (i.e., reducing bias) in GNN predictions results in an increased privacy risk of edges in the training graph. In this paper, we argue that the performance of GNNs remains central when they serve users. Our goal is to devise a method that can boost fairness with limited performance costs and restricted privacy risks. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & Acc & Bias & \(f_{risk}\) & Cosi & Eucl & Corr \\ \hline Vanilla & 63.66 & 0.0445 & 7.99 & 92.82 & 92.68 & 92.74 \\ Reg & 63.11 & 0.0301 & 9.40 & 94.31 & 94.64 & 94.00 \\ Reg & 58.15 & 0.005 & 20.80 & 96.89 & 96.64 & 95.04 \\ \hline \hline \end{tabular} \end{table} Table 2: Effectiveness of \(f_{risk}\) in evaluating link leakage risks on the Citeseer dataset. The risks of link leakage increase when promoting fairness, and \(f_{risk}\) is effective in measuring these risk changes. Next, we will present our approach to satisfying this design goal, including fairness-aware re-weighting and privacy-aware perturbations. ### Fairness-aware Re-weighting. Inspired by the previous study [11], we propose to re-weight the training samples to boost fairness with a limited performance cost. 
Specifically, the weight can be obtained by solving the following Quadratically Constrained Linear Programming (QCLP), \[\begin{array}{ll}\min&\sum\limits_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{f_{bias}}(\mathbf{w}_{v})\\ s.t.&\sum\limits_{v\in\mathcal{V}_{l}}w_{v}^{2}\leq\alpha|\mathcal{V}_{l}|,\\ &\sum\limits_{v\in\mathcal{V}_{l}}w_{v}\mathcal{I}_{util}(\mathbf{w}_{v})\leq\beta\sum\limits_{v\in\mathcal{V}_{l}}\mathcal{I}_{util}^{+}(\mathbf{w}_{v}),\\ &w_{v}\in[-1,1].\end{array} \tag{13}\] In this optimisation, \(w_{v}\in[-1,1]\) is the variable. The objective tends to minimise the total bias existing in GNN predictions. The first constraint is designed to control the degree of re-weighting. The second constraint represents that the obtained weights only cost limited model utility, where \(\mathcal{I}_{util}^{+}(\mathbf{w}_{v})\) indicates \(\mathcal{I}_{util}(\mathbf{w}_{v})\) with a positive value. \(\alpha\) and \(\beta\) are hyperparameters. In this paper, after solving this QCLP with the Gurobi optimiser [14], the obtained training-sample weights \(\mathbf{w}_{fair}=[w_{1},w_{2},...,w_{|\mathcal{V}_{l}|}]\) can be used in the re-training of the target GNN model with the weighted loss in Equation (2). ### Privacy-aware Perturbations Given the proposed re-weighting method, a straightforward approach to taking privacy and fairness into account simultaneously is to involve a term concerning \(\mathcal{I}_{f_{risk}}\) in the QCLP (e.g., as a constraint or as part of the objective). However, the negative correlation between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\) leads to the weak effectiveness of this simple method. Hence, it is necessary to locate the cause of edge leakage risks and devise a method to restrain it. Next, we will first present the analysis of link-stealing attacks, and then show our perturbation method for inhibiting leakage risks. **(1) Modeling Edge Leakage Risks.** In this paper, we take the most practical link-stealing attacks (i.e., Attack-0 in [11], black-box setting) as our object of study. Specifically, in these attacks, after querying the target GNN, attackers can calculate a link score to infer if two nodes are connected based on their prediction similarity. Instead of directly analysing privacy risks in the link score space, we will study them in the embedding space and show how the existence of an edge impacts the learned embedding. In the embedding space, a well-trained GCN model clusters similar nodes together so that they have similar predictions. According to a previous study [10], we assume the learned node embedding follows a normal distribution. For simplicity, we employ the left normalisation \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-1}(\mathbf{A}+\mathbf{I})\) in GCN models and consider the binary node classification task. Assuming that \(\mu_{i}\) and \(\sigma\) indicate the mean and standard deviation of the node embedding from class \(y_{i}\) (\(i=0,1\)) at the \(t\)-th layer, we obtain that the learned embedding \(\mathbf{E}^{(t,y_{i})}\sim N(\mu_{i},\sigma^{2})\). In this paper, we focus on analysing the intra-class node pairs (i.e., nodes with the same label), since they are the majority of all node pairs. In a GCN model, edges are involved in the one-hop mean-aggregation operation, i.e., \(\hat{\mathbf{A}}\mathbf{E}\). 
From the view of an individual node \(v_{i}\), the one-hop mean-aggregation operation is \([\hat{\mathbf{A}}\mathbf{E}^{(t)}]_{i}=\frac{1}{d_{i}+1}\big(\mathbf{E}_{i}^{(t)}+\sum_{\begin{subarray}{c}v_{j}\in\mathcal{N}(v_{i})\\ m_{j}=0\end{subarray}}\mathbf{E}_{j}^{(t,y_{0})}+\sum_{\begin{subarray}{c}v_{j}\in\mathcal{N}(v_{i})\\ m_{j}=1\end{subarray}}\mathbf{E}_{j}^{(t,y_{1})}\big)\), where \(\mathcal{N}(v_{i})\) represents the neighbour node set of \(v_{i}\), and \(m_{i}=1\) indicates \(v_{i}\) is from class \(y_{1}\), otherwise \(m_{i}=0\). Given \(\mathbf{m}=[m_{1},...,m_{|\mathcal{V}|}]\), \(\mathbf{E}^{(t,y_{1})}=diag(\mathbf{m})\mathbf{E}^{(t)}\), \(\mathbf{E}^{(t,y_{0})}=(\mathbf{I}-diag(\mathbf{m}))\mathbf{E}^{(t)}\), and \(\mathbf{E}^{(t)}=\mathbf{E}^{(t,y_{0})}+\mathbf{E}^{(t,y_{1})}\). Therefore, \(\sum_{\begin{subarray}{c}v_{j}\in\mathcal{N}(v_{i})\\ m_{j}=0\end{subarray}}\mathbf{E}_{j}^{(t,y_{0})}\sim N(d_{i}^{y_{0}}\mu_{0},d_{i}^{y_{0}}\sigma^{2})\) and \(\sum_{\begin{subarray}{c}v_{j}\in\mathcal{N}(v_{i})\\ m_{j}=1\end{subarray}}\mathbf{E}_{j}^{(t,y_{1})}\sim N(d_{i}^{y_{1}}\mu_{1},d_{i}^{y_{1}}\sigma^{2})\), where \(d_{i}^{y_{0}/y_{1}}\) represents the number of neighbours of \(v_{i}\) from class \(y_{0}/y_{1}\) and \(d_{i}=d_{i}^{y_{0}}+d_{i}^{y_{1}}\) indicates the degree of \(v_{i}\). Without loss of generality, we assume that \(v_{i}\) and \(v_{j}\) in the node pair \((v_{i},v_{j})\) come from class \(y_{0}\). As shown below, the node distance sensitivity in the embedding space can be calculated as the difference between the cases where \(v_{i}\) and \(v_{j}\) are connected or unconnected. **Case 0**: When \(v_{i}\) and \(v_{j}\) are unconnected, \(\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{0}\) and \(\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{0}\) can be approximately expressed as \[\begin{split}\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{0}\approx&\frac{1}{d_{i}+1}\left(\mathbf{E}_{i}^{(t)}+d_{i}^{y_{0}}\mu_{0}+d_{i}^{y_{1}}\mu_{1}\right),\\ \left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{0}\approx&\frac{1}{d_{j}+1}\left(\mathbf{E}_{j}^{(t)}+d_{j}^{y_{0}}\mu_{0}+d_{j}^{y_{1}}\mu_{1}\right).\end{split} \tag{14}\] **Case 1**: When \(v_{i}\) and \(v_{j}\) are connected, \(\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{1}\) and \(\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{1}\) can be approximately expressed as \[\begin{split}\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{1}\approx&\frac{1}{d_{i}+2}\left(\mathbf{E}_{i}^{(t)}+\mathbf{E}_{j}^{(t)}+d_{i}^{y_{0}}\mu_{0}+d_{i}^{y_{1}}\mu_{1}\right),\\ \left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{1}\approx&\frac{1}{d_{j}+2}\left(\mathbf{E}_{j}^{(t)}+\mathbf{E}_{i}^{(t)}+d_{j}^{y_{0}}\mu_{0}+d_{j}^{y_{1}}\mu_{1}\right).\end{split} \tag{15}\] Given (14) and (15), the distance of \(v_{i}\) and \(v_{j}\) in the embedding space is calculated as \[\begin{split} d_{0}(v_{i},v_{j})&=\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{0}-\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{0},\\ d_{1}(v_{i},v_{j})&=\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{1}-\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{1}.\end{split} \tag{16}\]
Thus, the sensitivity of \(d(v_{i},v_{j})\) with respect to the existence of edge \(e_{ij}\) is \[\begin{split}\Delta d(v_{i},v_{j})=&\|d_{0}(v_{i},v_{j})-d_{1}(v_{i},v_{j})\|\\ =&\left\|\frac{\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{0}-\mathbf{E}_{j}^{(t)}}{d_{i}+2}-\frac{\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{0}-\mathbf{E}_{i}^{(t)}}{d_{j}+2}\right\|, \end{split} \tag{17}\] and its expectation is \(\mathbb{E}\left[\Delta d(v_{i},v_{j})\right]=\|(\mu_{1}-\mu_{0})\delta\|\), where \(\delta=\frac{d_{i}^{y_{1}}}{(d_{i}+1)(d_{i}+2)}-\frac{d_{j}^{y_{1}}}{(d_{j}+1)(d_{j}+2)}\), due to \[\mathbb{E}\left[\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{i}^{0}-\mathbf{E}_{j}^{(t)}\right] =\frac{(1+d_{i}^{y_{0}})\mu_{0}+d_{i}^{y_{1}}\mu_{1}}{d_{i}+1}-\mu_{0} \tag{18}\] \[=\frac{d_{i}^{y_{1}}(\mu_{1}-\mu_{0})}{d_{i}+1},\] \[\mathbb{E}\left[\left(\hat{\mathbf{A}}\mathbf{E}^{(t)}\right)_{j}^{0}-\mathbf{E}_{i}^{(t)}\right] =\frac{(1+d_{j}^{y_{0}})\mu_{0}+d_{j}^{y_{1}}\mu_{1}}{d_{j}+1}-\mu_{0}\] \[=\frac{d_{j}^{y_{1}}(\mu_{1}-\mu_{0})}{d_{j}+1}.\] **(2) Perturbation-based Method.** Although our modeling process cannot fully depict privacy risks, this coarse-grained analysis provides us with insights into understanding risk sources and designing methods to restrict edge leakage. For example, the term \(\mu_{1}-\mu_{0}\) in \(\mathbb{E}\left[\Delta d(v_{i},v_{j})\right]\) indicates that a GNN model with higher performance (i.e., higher discrimination, corresponding to a larger \(\mu_{1}-\mu_{0}\) value) has higher edge leakage risks due to the homophily of graph data (i.e., connected nodes are more likely to have similar attributes and the same label). The other term, \(\delta\), implies that, for nodes with similar degree values, the heterophily difference between nodes is positively correlated with the privacy risk. According to these insights, we introduce heterophily edge noises into the graph structure to reduce edge leakage risks with a limited performance cost. Specifically, after the vanilla training (i.e., with (1)) of target models, we employ its predictions on all nodes to add heterophily neighbours for each node, as illustrated in the sketch below, i.e., \[\mathcal{N}(v_{i})=\mathcal{N}(v_{i})\cup\{v_{i_{1}},...,v_{i_{k}}\}, \tag{19}\] where the GNN prediction on \(v\in\{v_{i_{1}},...,v_{i_{k}}\}\) is different from that on \(v_{i}\), \(k=\gamma|\mathcal{N}(v_{i})|\) and \(\gamma\) is a hyper-parameter. With Equation (19) we obtain the perturbed adjacency matrix \(\mathbf{A}^{\prime}\), which is involved in the retraining of target GNNs. In addition to reducing heterophily differences between nodes of the same class (i.e., \(\delta\) in \(\mathbb{E}\left[\Delta d(v_{i},v_{j})\right]\)), the introduction of heterophily edges can help reduce the distance between nodes across classes (i.e., \(\mu_{1}-\mu_{0}\) in \(\mathbb{E}\left[\Delta d(v_{i},v_{j})\right]\)). **Target Model Re-training** Fig. 1 shows the whole framework of our Privacy-aware Perturbations and Fairness-aware Re-weighting (PPFR) method, whose goal is to promote fairness with limited performance costs and restricted privacy risks. The entire training of the target model includes two phases: vanilla training and PPFR retraining. Specifically, we first conduct vanilla training to obtain a competent GNN model. After that, we continue to train (i.e., re-train) the target GNN with the perturbed graph structure and the weighted loss function derived from the fairness-aware weights. 
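As a concrete illustration of Equation (19), the sketch below adds heterophily neighbours based on the vanilla model's predictions. This is our illustrative reading, not the authors' implementation: the text does not specify how the \(k\) heterophily neighbours are selected, so we sample them uniformly at random among nodes with a different predicted label.

```python
import torch

def heterophily_perturbation(adj: torch.Tensor, preds: torch.Tensor,
                             gamma: float) -> torch.Tensor:
    """Equation (19): for each node v, add k = gamma * |N(v)| edges towards
    nodes whose predicted label differs (heterophily neighbours)."""
    labels = preds.argmax(dim=1)
    adj_pert = adj.clone()
    for i in range(adj.size(0)):
        k = int(gamma * adj[i].sum().item())
        # candidate heterophily neighbours: nodes predicted with another label
        candidates = torch.nonzero(labels != labels[i]).flatten()
        if k > 0 and candidates.numel() > 0:
            chosen = candidates[torch.randperm(candidates.numel())[:k]]
            adj_pert[i, chosen] = 1.0
            adj_pert[chosen, i] = 1.0  # keep the perturbed graph undirected
    return adj_pert
```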
In PPFR, the epoch number of the re-training phase \(\mathsf{e}_{re}\) is defined as \(\mathsf{e}_{re}=s\,\mathsf{e}_{va}\), where \(\mathsf{e}_{va}\) is the epoch number of vanilla training and \(s=0.1\) is a hyper-parameter. Figure 1: The Framework of Our Privacy-aware Perturbations and Fairness-aware Re-weighting (PPFR) Method. After the vanilla training of a GNN model, the privacy-aware module introduces heterophily edges to generate the perturbed graph structure, and the fairness-aware re-weighting module employs the influence function and quadratically constrained linear programming to obtain fairness-aware sample weights. After that, the perturbed graph structure and fairness-aware sample weights are involved in the re-training of the target GNN model to promote fairness with limited performance cost and restricted privacy risks. ## 5 Experiments In this section, we conduct experiments to evaluate our proposed PPFR method. We first introduce the experimental setup, followed by our experimental results and discussion. ### Experimental Setup **Datasets, Models, and Metrics.** The datasets, model setup, and fairness and privacy metrics follow those in Section 3.1. Note that the privacy result in this section is the average AUC derived from 8 different distances. Moreover, we use the following metric to evaluate the effect of a method \(\Omega\) on both fairness and privacy (a small helper transcribing this metric is given at the end of this subsection), i.e., \[\Delta=\frac{\Delta_{bias}\Delta_{risk}}{|\Delta_{acc}|}, \tag{20}\] where \(\Delta_{(\cdot)}=\frac{\Omega_{(\cdot)}-\mathtt{w/o}_{(\cdot)}}{\mathtt{w/o}_{(\cdot)}}\) takes the bias/risk/accuracy values of methods \(\Omega\) and \(\mathtt{w/o}\) as inputs to evaluate the change ratio in bias/risk/accuracy when using method \(\Omega\); \(\mathtt{w/o}\) represents the GNN model obtained by vanilla training. According to the definition, \(\Delta\) measures the cost-effectiveness with respect to GNN performance when promoting both fairness and privacy. A positive \(\Delta\) indicates that \(\Omega\) can boost fairness and privacy simultaneously, otherwise \(\Delta\) is negative. **Baselines.** To verify the effectiveness of our method (i.e., **PPFR**), we combine privacy and fairness methods together as the baseline methods. In this paper, we consider differential privacy (DP) (i.e., EdgeRand and LapGraph [20]) as the method to improve the edge privacy of GNNs, and both EdgeRand and LapGraph are used to generate a perturbed adjacency matrix. Specifically, to boost edge privacy, EdgeRand/LapGraph uses a randomisation/Laplacian mechanism to introduce edge noises into the original graph structure. We follow the parameter setting in [21] to introduce \(\epsilon\)-edge DP (i.e., EdgeRand/LapGraph) into our evaluation. According to the previous study [21], EdgeRand and LapGraph have similar effectiveness when \(\epsilon\) is small, while LapGraph is more applicable to large graphs. Thus, we apply EdgeRand on the Cora and Citeseer datasets and LapGraph on the Pubmed dataset. In this paper, **Reg** represents that the fairness regularisation is introduced into the loss function of GNN models during their vanilla training. **DPReg** represents using the edge DP method and adding the fairness regularisation into the loss simultaneously. **DPFR** indicates the combination of edge DP and our fairness-aware re-weighting (FR) in this paper, where a perturbed graph is engaged in the retraining phase. 
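For reference, the evaluation metric of Equation (20) can be transcribed directly; the naming is ours, and it assumes the vanilla values are non-zero so the change ratios are well defined.

```python
def delta_metric(acc, bias, risk, acc0, bias0, risk0):
    """Equation (20): cost-effectiveness of a method versus the vanilla model
    (the 0-suffixed arguments). Reducing both bias and risk makes Delta_bias
    and Delta_risk negative, hence a positive (desired) Delta."""
    ratio = lambda new, old: (new - old) / old
    return ratio(bias, bias0) * ratio(risk, risk0) / abs(ratio(acc, acc0))
```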
### Experimental Results As shown in Table 3, we evaluate the effectiveness of our Privacy-aware Perturbations and Fairness-aware Re-weighting (PPFR) method in boosting fairness with limited performance costs and restricted privacy risks. Results in Table 3 include two parts: correlation results between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\), and a comparison between our PPFR and baseline methods. **Correlation.** In the last column of Table 3, we quantitatively measure the correlation between fairness and privacy with our proposed index (i.e., Eq. 12). The negative correlation results between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\) show that there exists a negative correlation between node individual fairness and edge leakage risks. These correlation results are consistent with our observations (i.e., the trade-off between fairness and privacy) in Table 1, and also underpin the design of our PPFR method, that is, it does not involve \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\) simultaneously in the linear programming. **Effectiveness.** The other evaluation results also verify the effectiveness of our PPFR method. **(1)** Since GNN performance is taken into consideration, our PPFR method can maintain the same level of performance as vanilla GNNs on all datasets (i.e., 81.09, 60.50, 81.57 on Cora, Citeseer, and Pubmed, respectively), which is vital in serving GNN users. However, the low performance of DPReg (i.e., 63.26, 47.47, and 64.95 on the three datasets, respectively) potentially leads to the impracticability of GNN systems. **(2)** When comparing the DPFR and PPFR methods, our privacy-aware perturbation (PP) method is more effective than the edge DP methods. Although edge DP (i.e., DPFR rows) is effective in controlling edge leakage risks compared to current fairness promotion methods (e.g., Reg rows), it still poses a higher privacy risk than vanilla GNNs. In contrast, our method is more effective since it presents equal or even lower privacy risks in almost all cases, which indicates PPFR can achieve fairness with non-negative privacy impacts. **(3)** According to the \(\Delta\) results, PPFR is the only method that has a positive \(\Delta\) across all datasets, indicating that it can boost fairness and privacy while maintaining competent accuracy. ## 6 Conclusion In this paper, we investigate the interaction between the fairness and privacy of GNNs, which is indispensable in comprehensively building trustworthy GNNs. We empirically observe the adverse effects of node fairness on edge privacy risks and propose to quantitatively measure their trade-off through influence functions and Pearson correlation. Finally, we devise a retraining method to increase GNN fairness with limited performance cost and restricted privacy risk, whose effectiveness is demonstrated by our experimental evaluations on real-world datasets. In the future, we will conduct a theoretical analysis of the relation between the fairness and privacy of GNNs and explore other methods to balance the different aspects of trustworthy GNNs. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Datasets & & \(\Delta_{bias}\downarrow\) & \(\Delta_{risk}\downarrow\) & \(\Delta\uparrow\) & \(r\) \\ \hline \multirow{4}{*}{Cora} & Reg & -35.51 & 1.80 & -7.44\(\times 10^{-1}\) & \\ \cline{2-6} & DPReg & 29.11 & -17.07 & -1.87\(\times 10^{-1}\) & \\ \cline{2-6} & DPFR & -1.17 & 0.34 & -6.55\(\times 10^{-3}\) & -0.66 \\ \cline{2-6} & PPFR & -11.75 & -0.73 & 1.46\(\times 10^{-2}\) & \\ \hline \multirow{4}{*}{Citeseer} & Reg & -32.36 & 1.91 & -7.17\(\times 10^{-1}\) & \\ \cline{2-6} & DPReg & 290.34 & -14.95 & -1.71\(\times 10^{0}\) & \\ \cline{2-6} & DPFR & 0.22 & -0.08 & -7.01\(\times 10^{-4}\) & -0.51 \\ \cline{2-6} & PPFR & -14.16 & -0.31 & 8.97\(\times 10^{-3}\) & \\ \hline \multirow{4}{*}{Pubmed} & Reg & -84.70 & 3.54 & -1.28\(\times 10^{0}\) & \\ \cline{2-6} & DPReg & -64.45 & -32.71 & 8.81\(\times 10^{-1}\) & \\ \cline{1-1} \cline{2-6} & DPFR & -29.60 & 0.94 & -1.98\(\times 10^{-1}\) & -0.41 \\ \cline{1-1} \cline{2-6} & PPFR & -31.30 & -0.18 & 1.25\(\times 10^{-2}\) & \\ \hline \hline \end{tabular} * In this table, columns \(\Delta_{bias}\) and \(\Delta_{risk}\) show evaluation results in the percentage (i.e.,%) form. In the \(\Delta\) column, blue colour represents desired results, while red colour indicates undesired results. “\(r\)” in the last column indicates the Pearson correlation coefficient between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\). \end{table} Table 3: Effectiveness of our PPFR method and Correlation between \(\mathcal{I}_{f_{bias}}\) and \(\mathcal{I}_{f_{risk}}\).
2306.03623
Spike-based computation using classical recurrent neural networks
Spiking neural networks are a type of artificial neural networks in which communication between neurons is only made of events, also called spikes. This property allows neural networks to make asynchronous and sparse computations and therefore drastically decrease energy consumption when run on specialised hardware. However, training such networks is known to be difficult, mainly due to the non-differentiability of the spike activation, which prevents the use of classical backpropagation. This is because state-of-the-art spiking neural networks are usually derived from biologically-inspired neuron models, to which are applied machine learning methods for training. Nowadays, research about spiking neural networks focuses on the design of training algorithms whose goal is to obtain networks that compete with their non-spiking version on specific tasks. In this paper, we attempt the symmetrical approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based. This new RNN cell, called the Spiking Recurrent Cell, therefore communicates using events, i.e. spikes, while being completely differentiable. Vanilla backpropagation can thus be used to train any network made of such RNN cell. We show that this new network can achieve performance comparable to other types of spiking networks in the MNIST benchmark and its variants, the Fashion-MNIST and the Neuromorphic-MNIST. Moreover, we show that this new cell makes the training of deep spiking networks achievable.
Florent De Geeter, Damien Ernst, Guillaume Drion
2023-06-06T12:19:12Z
http://arxiv.org/abs/2306.03623v3
# Spike-based computation using classical recurrent neural networks ###### Abstract Spiking neural networks are a type of artificial neural networks in which communication between neurons is only made of events, also called spikes. This property allows neural networks to make asynchronous and sparse computations and therefore to drastically decrease energy consumption when run on specialized hardware. However, training such networks is known to be difficult, mainly due to the non-differentiability of the spike activation, which prevents the use of classical backpropagation. This is because state-of-the-art spiking neural networks are usually derived from biologically-inspired neuron models, to which are applied machine learning methods for training. Nowadays, research about spiking neural networks focuses on the design of training algorithms whose goal is to obtain networks that compete with their non-spiking version on specific tasks. In this paper, we attempt the symmetrical approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based. This new RNN cell, called the Spiking Recurrent Cell, therefore communicates using events, i.e. spikes, while being completely differentiable. Vanilla backpropagation can thus be used to train any network made of such RNN cell. We show that this new network can achieve performance comparable to other types of spiking networks in the MNIST benchmark and its variants, the Fashion-MNIST and the Neuromorphic-MNIST. Moreover, we show that this new cell makes the training of deep spiking networks achievable. ## 1 Introduction In the last decade, artificial neural networks (**ANNs**) have become increasingly powerful, overtaking human performance in many tasks. However, the functioning of ANNs diverges strongly from the one of biological brains. Notably, ANNs require a huge amount of energy for training and inferring, whereas biological brains consume much less power. This energy greediness prevents ANNs from being used in some environments, for instance in embedded systems. One of the considered solutions to this problem is to replace the usual artificial neurons by spiking neurons, mimicking the function of biological brains. Spiking Neural Networks (**SNNs**) are considered as the third generation of neural networks (Maass, 1997). Such networks, when run on neuromorphic hardware (like _Loihi_ (Davies et al., 2018) for instance), can show very low power consumption. Another advantage of the SNNs is their event-driven computation. Unlike usual ANNs that propagate information in each layer and each neuron at each forward pass, SNNs only propagate information when a spike occurs, leading to more _event-driven_ and sparse computations. Nonetheless, the development of SNNs faces a challenging problem: the activation function that is usually used to generate spikes is not differentiable, therefore preventing any training using usual backpropagation (Rumelhart et al., 1986), which is at the core of ANNs' success. Several solutions are being considered nowadays, as discussed in section 2. The classical approach consists in using a simple model for the spiking neurons, to which are added learnable weights. Then, methods inspired by classical machine learning are used for training, either by directly training the SNN, or by first training an ANN and then converting it into a SNN. 
In this paper, we approach the problem from the other side: from the well-known _Gated Recurrent Cell_ (**GRU**) (Cho et al., 2014), we derive a new event-based recurrent cell, called the _Spiking Recurrent Cell_ (**SRC**). SRC neurons communicate via events, generated with differentiable equations. The SRC and its equations are described in section 3. Such an event-based cell permits to leverage the potential of classical recurrent neural network (**RNN**) training approaches to create networks that compute using spikes. The performance of SRC-based RNNs has been tested on neuromorphic versions of classical benchmarks, such as the MNIST benchmark and some variants, whose results are discussed in section 4. SNNs built with SRCs achieve comparable results to other types of SNNs on these benchmarks. ## 2 Related Works This section aims at introducing RNNs and SNNs. Different approaches to train SNNs are also described. ### Recurrent Neural Networks RNNs are a type of neural networks that carry fading memory by propagating a vector, called the _hidden state_, through time. More precisely, a RNN is usually composed of recurrent layers, also called _recurrent cells_, and classical fully-connected layers. Each recurrent cell has its own hidden state. At each time step, a new hidden state is computed from the received input and the hidden state. This allows RNNs to process sequences. Mathematically, this gives: \[h[t]=\phi\left(x[t],h[t-1];\Theta\right)\] where \(h[t]\) and \(x[t]\) are respectively the hidden state and the input at time \(t\), \(\phi\) is the recurrent cell and \(\Theta\) its parameters. Training RNNs has always been difficult, especially for long sequences, due to vanishing and exploding gradients (Pascanu et al., 2013). Indeed, RNNs are trained using backpropagation through time (**BPTT**) (Werbos, 1990). This algorithm consists in first _unfolding_ the RNN in time, i.e. turning it into a very deep feedforward network whose number of hidden layers is equal to the sequence length and whose weights are shared among layers. Then usual backpropagation is applied to this network. However, due to the huge number of layers, gradient problems are much more prone to appear than in usual feedforward networks. There exist several solutions to solve or at least attenuate these problems. For instance, exploding gradients can be easily solved using gradient clipping (Pascanu et al., 2013). But the most notable improvement in RNNs was the introduction of the gating mechanism: gates, i.e. vectors of reals between 0 and 1, are used to control the flow of information, i.e. what is added to the hidden state, what is forgotten, etc. This has led to the two most known recurrent cells: the _Long Short-Term Memory_ (**LSTM**) (Hochreiter and Schmidhuber, 1997) and the _Gated Recurrent Unit_ (**GRU**) (Cho et al., 2014). LSTM uses 3 gates, while GRU is more lightweight and uses 2 gates. The new recurrent cell introduced in this paper (section 3) is a derivation of GRU and can be expressed as a usual recurrent neural network. ### Spiking Neural Networks Biological neurons communicate using spikes, i.e. short pulses in neuron membrane potential, generated by a non-linear phenomenon. These membrane potential variations are created from the flow of ions that go in and out of the cell. There exist a lot of different models to model neuron excitable membranes, the most notable being the Hodgkin-Huxley model (Hodgkin and Huxley, 1952), and similar models called _conductance-based models_. 
Such models represent the neuron membrane as a capacitance in parallel with several voltage sources and variable conductances that respectively model the electrochemical gradients that apply on the different ions and the ion gates. Despite being very physiological, this model contains too many equations and parameters to be used in machine learning. That is why much simpler, phenomenological models are usually used to model spiking neurons in a SNN. A classical model of this type is the _Leaky Integrate-and-Fire_ (**LIF**) model. It is composed of a leaky integrator, to integrate the input current into membrane potential variation, associated with a reset rule that is triggered once a threshold potential is reached. Once the threshold potential is reached, a spike is emitted and the potential is reset to its resting value. Unlike conductance-based models, the LIF model generates _binary_ spikes, i.e. spikes that last one timestep and whose value is always 1. Mathematically, this gives: \[V[t]=\alpha_{V}V[t-1]+x[t]\] \[\left\{\begin{aligned} &\text{if }V[t]>V_{thresh}\text{, then }s[t]=1\text{ and }V[t]=V_{rest}\\ &\text{otherwise }s[t]=0\end{aligned}\right.\] where \(V[t]\), \(x[t]\) and \(s[t]\) are the membrane potential, the input and the output at time \(t\), respectively, \(\alpha_{V}\) is the leakage factor, \(V_{thresh}\) the threshold and \(V_{rest}\) the resting potential (a minimal simulation of this model is sketched below). The LIF model is far less physiological than conductance-based models, but it is much more lightweight and retains the core of spike-based computation. LIF neurons can be organized in layers to form a complete network. The question is now: how to train such a network? Due to the non-differentiable reset rule, usual backpropagation can not be used (or at least can not be used directly). To achieve reasonable training performance, many approaches to train SNNs have been proposed (Yamazaki et al., 2022; Tavanaei et al., 2018), which can be split into three categories. First, SNNs can be trained using unsupervised learning rules, which are local to the synapses (Masquelier and Thorpe, 2007; Neftci et al., 2014; Diehl and Cook, 2015; Lee et al., 2019). These learning rules are often derived from the _Spike-timing-dependent plasticity_ process (Markram et al., 1997), which strengthens or weakens synaptic connections depending on the coincidence of pre- and post-synaptic spikes. This non-optimization-based training method is usually slow, often unreliable, and leads to unsubstantial performance. The second category is indirect training. It consists in first training a usual ANN (with some constraints) and then converting it into a SNN (Cao et al., 2015; Diehl et al., 2015; Esser et al., 2016). Indeed, ANNs can be seen as special spiking networks that use a rate-based coding scheme. These methods allow one to use all the algorithms developed for training ANNs, and thus can reach high performance. However, they do not unlock the full potential of spiking networks, as rate-coding is not the only way of transmitting information through spikes. Also, rate-based coding usually results in a higher number of generated spikes, weakening the energy-efficiency of SNNs. The third and last approach is to rely on gradient-based optimization to directly train the SNN (Bohte et al., 2000; Sporea and Gruning, 2013; Hunsberger and Eliasmith, 2015; Zenke and Ganguli, 2018; Shrestha and Orchard, 2018; Lee et al., 2016; Neftci et al., 2019). 
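As an aside, the discrete-time LIF dynamics given above are straightforward to simulate; the following sketch is ours, and the default parameter values are arbitrary illustrative choices.

```python
import torch

def lif_simulate(x, alpha_v=0.9, v_thresh=1.0, v_rest=0.0):
    """Simulate one discrete-time LIF neuron on an input sequence x[t]."""
    v, spikes = v_rest, []
    for x_t in x:
        v = alpha_v * v + x_t  # leaky integration of the input current
        if v > v_thresh:       # threshold crossing: emit a binary spike
            spikes.append(1.0)
            v = v_rest         # reset rule (the non-differentiable step)
        else:
            spikes.append(0.0)
    return torch.tensor(spikes)
```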
These gradient-based methods usually smooth the entire network or use a surrogate smoothed gradient for the non-differentiable activation to allow backpropagation. SNNs trained by gradient-based algorithms have achieved good performance, even competing with ANNs on some benchmarks. Notably, Huh and Sejnowski (2018) used a smooth spike-generating process which replaces the non-differentiable activation of the LIF neurons. This approach is closely related to ours, as they both use soft non-linear activations to generate spikes. ## 3 Spiking Recurrent Cell The new spiking neuron introduced in this paper is derived from the well-known recurrent neural network GRU. This section describes its derivation and the different parts of the neuron, namely the spike-generation and the input-integration parts. ### Spike-generation As the starting point of the derivation of the SRC equations, another recurrent cell will be used, itself derived from GRU: the _Bistable Recurrent Cell_ (**BRC**) created by Vecoven et al. (2021). Its main property is its never-fading memory created by the bistability property of its neurons. Here are the equations of GRU: \[z[t] =\sigma\left(U_{z}x[t]+W_{z}h[t-1]+b_{z}\right)\] \[r[t] =\sigma\left(U_{r}x[t]+W_{r}h[t-1]+b_{r}\right)\] \[h[t] =z[t]\odot h[t-1]+(1-z[t])\odot\tanh\left(U_{h}x[t]+r[t]\odot W_{h}h[t-1]+b_{h}\right)\] And here are the ones of BRC: \[z[t] =\sigma\left(U_{z}x[t]+w_{z}\odot h[t-1]+b_{z}\right)\] \[r[t] =1+\texttt{tanh}\left(U_{r}x[t]+w_{r}\odot h[t-1]+b_{r}\right)\] \[h[t] =z[t]\odot h[t-1]+(1-z[t])\odot\tanh\left(U_{h}x[t]+r[t]\odot h[t-1]+b_{h}\right)\] Both use two gates (\(z\) and \(r\)) to control the flow of information. There are two major differences between GRU and BRC, highlighted in red. First, the memory in BRC is cellular, meaning that each neuron of the cell has its own internal memory that is not shared with the others, while in GRU all internal states can be accessed by each neuron. The second difference is the range of possible values of \(r\): in GRU, it is included between \(0\) and \(1\) while in BRC, it is included between \(0\) and \(2\). This difference allows the BRC neuron to switch from monostability (\(r\leq 1\)) to bistability (\(r>1\)). These two properties of BRC, i.e. the cellular memory and the bistability, can be used to generate spikes. The cellular memory can represent the membrane potential of the spiking neurons, while the bistability is created by a local positive feedback, which is the first step of a spike. Indeed, a spike can be described in two steps: a fast local positive feedback that brings the potential to a high value, followed by a slower global negative feedback that brings the potential back to its resting value. Therefore, integrating such a negative feedback into the BRC equations will allow the cell to generate spikes. This can be done by adding a second hidden state \(h_{s}\) which lags behind \(h\) (Equation 1c) and a new term in the update equation of \(h\) (highlighted in red in Equation 1a). As no information can be transmitted between neurons except when a spike occurs, the fast hidden state \(h\) is passed through a ReLU function to isolate the spikes from the small, subthreshold variations of \(h\). This creates the output spike train \(s_{out}\) (Equation 1d). The input of SRC, i.e. the integration of the input pulses, will be discussed afterwards, therefore we will simply use \(x\) to denote the input used by the spike generation. 
This leads to the equations that generate spikes: \[h[t] =z\odot h[t-1]+(1-z)\odot\texttt{tanh}\left(x[t]+r\odot h[t-1]+r_{s}\odot h_{s}[t-1]+b_{h}\right) \tag{1a}\] \[z_{s}[t] =z_{s}^{hyp}-(z_{s}^{hyp}-z_{s}^{dep})*\frac{1}{1+\exp\left(-10*(h[t-1]-0.5)\right)}\] (1b) \[h_{s}[t] =z_{s}\odot h_{s}[t-1]+(1-z_{s})\odot h[t-1]\] (1c) \[s_{out}[t] =\texttt{ReLU}\left(h[t]\right) \tag{1d}\] Two new gates (\(r_{s}\) and \(z_{s}\)) have to be introduced. To enforce that no computation could be achieved through alterations in the shape of a spike, the 4 gates no longer depend on learnable weights. Three of them are fixed to constant values: \(r=2\), \(r_{s}=-7\) and \(z=0\). The fourth one, \(z_{s}\), controls the speed at which \(h_{s}\) catches up with \(h\): the lower, the faster. To create spikes with short depolarization periods, \(z_{s}\) should be low at depolarization potentials, and larger at subthreshold potentials, mimicking the voltage-dependency of ion channel time constants in biological neurons. This is modeled using Equation 1b, where \(z_{s}^{hyp}\) is the value at hyperpolarization potentials (low \(h\)) and \(z_{s}^{dep}\) the value at depolarization potentials (high \(h\)). In practice, \(z_{s}^{hyp}=0.9\) and \(z_{s}^{dep}=0\). Finally, the bias \(b_{h}\) controls the propensity of neurons to fire spikes: the higher, the easier. However, if it reaches too high a value, the neurons may saturate. As this is a behavior that we would rather avoid, the bias should be constrained to always be smaller than some value. In the experiments, we have fixed this upper bound to \(-4\). Figure 1 shows the behavior of one SRC neuron given different inputs \(x\) and biases \(b_{h}\). It can be observed that for a high bias (Figure 1b), the neuron is able to spike even with a null input, while for a lower one (Figure 1a), the neuron remains silent. SNNs are often put forward for their very small energy consumption, due to the sparse activity of spiking neurons. It is thus important to be able to measure the activity of such networks. In the case of SRC neurons, the spikes do not last exactly one timestep. It is therefore better to compute the number of timesteps during which spikes are emitted rather than the number of spikes. This brings us to define the relative number of _spiking_ timesteps: \[\mathcal{T}(s)=\frac{1}{T}\sum_{t=1}^{T}H(s[t]) \tag{2}\] where \(H\) denotes the Heaviside step function. ### Input-integration The last point to be addressed before being able to construct networks of SRCs is how to integrate the input spikes. We have decided to use leaky integrators with learnable weights \(w_{j}\): \[i[t]=\alpha\,i[t-1]+\sum_{j}w_{j}s_{j}[t]\] where \(\alpha\) is the leakage factor. To prevent the SRC from saturating due to large inputs, we also apply a rescaled hyperbolic tangent to \(i[t]\) to create the neuron input \(x[t]\). The equations of a whole SRC layer therefore write, starting from the input pulses \(s_{in}\) up to the output pulses \(s_{out}\): \[i[t] =\alpha\,i[t-1]+W_{s}\,s_{in}[t] \tag{3a}\] \[x[t] =\rho\cdot\text{tanh}\left(\frac{i[t]}{\rho}\right)\] (3b) \[z_{s}[t] =z_{s}^{hyp}-(z_{s}^{hyp}-z_{s}^{dep})*\frac{1}{1+\exp\left(-10*(h[t-1]-0.5)\right)}\] (3c) \[h[t] =\text{tanh}\left(x[t]+r\odot h[t-1]+r_{s}\odot h_{s}[t-1]+b_{h}\right)\] (3d) \[h_{s}[t] =z_{s}[t]\odot h_{s}[t-1]+(1-z_{s}[t])\odot h[t-1]\] (3e) \[s_{out}[t] =\text{ReLU}\left(h[t]\right) \tag{3f}\] To sum up, Equation 3a first integrates the input pulses using a leaky integrator. 
## 4 Experiments

This section describes the different experiments that were made to assess SRC performance.

### Benchmarks

The SRC has been tested on the well-known MNIST dataset (Deng, 2012), as well as two variants. The Fashion-MNIST dataset (Xiao et al., 2017) contains images of fashion products instead of handwritten digits. It is known to be more difficult than the original MNIST. The second variant is the Neuromorphic MNIST (N-MNIST) (Orchard et al., 2015) which, as its name suggests, is a neuromorphic version of MNIST where the handwritten digits have been recorded by an event-based camera. The MNIST and Fashion-MNIST datasets are not made to be used with spike-based networks, therefore their images must first be encoded into spike trains. To do so, a rate-based coding and a latency-based coding were used in the experiments. The first one creates one spike train per pixel, where the number of spikes per time period is proportional to the value of the pixel. More precisely, the pixel is converted into a Poisson spike train using its value as the mean of a binomial distribution. To avoid having too many spikes, we have scaled the pixel values by a factor (the _gain_) of \(0.25\). Therefore, a white pixel (value of \(1\)) will spike with a probability of 25% at each timestep, while a black one (value of 0) will never spike. The latency-based coding is much more sparse, as each pixel will spike at most one time. In this case, the information is contained in the time at which the spike occurs. The idea is that brighter pixels will spike sooner than darker ones. The spike time \(t_{spk}\) of a pixel is defined as the duration needed by the potential of a (linearized) RC circuit to reach a threshold \(V_{th}\) if this circuit is driven by a current \(I\) equivalent to the pixel value: \[t_{spk}=\text{min}\left(-\tau\left(I-1\right),V_{th}\right)\] where \(\tau\) is the time constant of the RC circuit. In our experiments, we have used \(\tau=10\) and \(V_{th}=0.01\). The spike times are then normalized to span the whole sequence length, and the spikes located at the last timestep (i.e. the spikes whose \(t_{spk}\) equals \(V_{th}\)) are removed. The encodings were performed using the snnTorch library (Eshraghian et al., 2021). All the experiments were made using spike trains of length 200. Therefore, the MNIST (or Fashion-MNIST) inputs of dimension \((1,28,28)\) are converted to tensors of size \((200,1,28,28)\).
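For reference, the two encodings described above can be produced with snnTorch's `spikegen` module roughly as follows; the argument names are snnTorch's, but the exact flag combination is our reading of the setup above rather than the authors' script.

```python
import torch
from snntorch import spikegen

# img: a single MNIST image, shape (1, 28, 28), pixel values in [0, 1]
img = torch.rand(1, 28, 28)  # placeholder input

# Rate coding: Bernoulli spikes with probability gain * pixel value per step
rate_spikes = spikegen.rate(img, num_steps=200, gain=0.25)  # (200, 1, 28, 28)

# Latency coding: at most one spike per pixel; brighter pixels fire earlier.
# normalize stretches the spike times over the 200 steps; clip removes the
# last-timestep spikes, as described above.
lat_spikes = spikegen.latency(img, num_steps=200, tau=10, threshold=0.01,
                              linear=True, normalize=True, clip=True)
```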
On the other hand, N-MNIST already has event-based inputs. Indeed, each sample contains the data created by an event-based camera, so this data just needs to be converted to tensors of spikes. An event-based camera pixel outputs an event each time its brightness changes. There are therefore two types of events: those issued when the brightness increases and those issued when it decreases. An N-MNIST sample is a list of such events, each of which contains a timestamp, the coordinates of the pixel that emitted it, and its type. The Tonic library (Lenz et al., 2021) was used to load the N-MNIST dataset and convert its samples into tensors of size \((200,2,34,34)\). The first dimension is the time, the second is related to the type of the event, and the two last are the x and y spatial coordinates.

### Readout layer

In order to extract the predictions from the outputs of an SRC network, the final SRC layer is connected with predefined and frozen weights to a readout layer of leaky integrators, with one integrator per label and a leakage factor of \(0.99\). Each integrator is excited (positive weight) by a small group of SRC neurons and is inhibited (negative weight) by the others. In our experiments, this final SRC layer contains 100 neurons. Each integrator is connected to all neurons: 10 of these connections have a weight of 10, while the others have a weight of -1. The prediction of the model corresponds to the integrator with the highest value at the final timestep.

### Loss function

The networks were trained using the cross-entropy loss, which is usually used in classification tasks. This function takes as inputs the values \(x\) of the leaky integrators at the final timestep and the target class \(y\). The loss is then computed (for a single sample) as: \[l(x,y)=-\log\left(\frac{\exp(x_{y})}{\sum_{c=1}^{C}\exp(x_{c})}\right)\] where \(C\) is the number of classes and \(x_{c}\) refers to the element of \(x\) associated with the class \(c\). This function applies the _Softmax_ function to \(x\) and then computes the negative log-likelihood. For a whole batch, we simply take the mean of the \(l\)'s.

### Learning

The loss function being defined, it is now possible to train networks of SRCs using the usual automatic differentiation of PyTorch. Experiments showed that bypassing the ReLU during backpropagation considerably speeds up learning. As explained in subsection 3.1, this ReLU is used to isolate the spikes (high variations of \(h\)) from the small fluctuations. During the backward pass, this ReLU _blocks_ the gradients when no spike is currently occurring, i.e. when \(h[t]<0\). We therefore let these gradients pass even when no spike is occurring, as sketched below. This is reminiscent of the surrogate gradient optimization (Neftci et al., 2019) used to train LIF neurons; in our case, the activation function is a ReLU, while the backward pass assumes it was a linear activation: \[s_{out}[t]=\text{ReLU}\left(h[t]\right)\] \[\frac{\partial s_{out}[t]}{\partial h[t]}=1,\,\forall h[t]\] Figure 2 shows the evolution of the accuracy and cross-entropy of two SRC networks composed of 3 layers, one trained with the surrogate gradient, the other without. For each network we have trained 5 models. Except for the usage of the surrogate gradient, all the other parameters are the same. It is clear that the surrogate gradient speeds up the learning, and it will therefore be used in all our experiments.
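Concretely, such a bypass can be implemented as a custom autograd function that overrides only the backward pass; this sketch is our illustration of the trick, not the authors' code.

```python
import torch

class SpikeReLU(torch.autograd.Function):
    """ReLU in the forward pass; identity in the backward pass, so gradients
    also flow through silent timesteps (h[t] < 0)."""

    @staticmethod
    def forward(ctx, h):
        return torch.relu(h)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # surrogate: pretend the activation was linear

spike_relu = SpikeReLU.apply  # drop-in replacement for torch.relu in Eq. (3f)
```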
### Results

All experiments have been performed using PyTorch on GPUs, without any modification of the automatic differentiation and backpropagation algorithm, except for the ReLU that is bypassed during backward passes. All trainings lasted 30 epochs. We have used the _Adam_ optimizer with an initial learning rate of \(0.005\) that decays exponentially with a factor of \(0.97\). For each set of parameters, 5 models were trained.

#### 4.5.1 Shallow networks

As a first experiment, we have tested several shallow networks with either 1, 2, or 3 SRC layers. The final layer always contains 100 neurons connected to the readout layer, as described in subsection 4.2. The size of the hidden layers, if the model has any, was fixed to 512 neurons. The leakage factor of the SRC integrators was set to \(0.9\), except in the experiments where the latency-based coding was used, where it was set to \(0.99\) to deal with the high sparsity of the inputs. The leakage factor of the readout integrators was fixed to \(0.99\). Neuron biases \(b_{h}\) are initialized to \(-6\) and the Xavier uniform initialization is used for the synaptic weights \(W_{s}\).

Figure 2: Evolution of the cross-entropy and the accuracy on the MNIST dataset for two networks composed of 3 SRC layers. One of these two networks has been trained with the surrogate gradient, while the other has not.

Table 1 shows the different testing accuracies achieved by these networks. We can observe that SRC networks were able to learn and achieved performances comparable to other non-convolutional SNNs, despite only being trained for 30 epochs. We also observe that multi-layer networks perform better than single-layer ones. As previously mentioned, another important aspect of such networks is the neuron activity. Using the measure defined in Equation 2, the mean activity of the neurons has been computed and is shown in Table 2. The mean activity stays quite low, which is desirable for energy efficiency. It can also be observed that when the encoding is not sparse (rate coding), the shallower the network, the lower the activity, while the opposite holds for the sparse encoding (latency coding).

#### 4.5.2 Training deeper neural networks

Shallow networks of SRC neurons have successfully been trained. However, one of the breakthroughs in deep learning was the ability to train deep neural networks. Training deep SNNs is known to be difficult. We have therefore tested several networks with different numbers of hidden layers to see if SRCs still manage to learn when the network becomes deeper. As previously, all trainings lasted 30 epochs. These were made on the MNIST dataset with the rate-based coding. All hidden layers consist of 512 neurons, while the final SRC layer still contains 100 neurons. Figure 3 shows the results of this experiment. All networks manage to learn and achieve good performances after 30 epochs. However, the higher the number of hidden layers, the slower the training. This explains why the models with a high number of hidden layers do not perform as well as shallow networks at the end of the 30 epochs. Nevertheless, the goal of this experiment was not to assess the performance but rather the ability of deep networks to learn. Furthermore, the top-right graph shows the duration of each epoch for each number of hidden layers. It obviously increases with the network depth, but the training duration stays quite small even for a large number of hidden layers. For instance, the training of the 10-hidden-layer networks lasted about one day.

## 5 Conclusion

In this paper, we have introduced a new type of artificial spiking neuron. Instead of deriving this neuron from existing spiking models, as is classically done, we have started from a widely used RNN cell.
This new spiking neuron, called the _Spiking Recurrent Cell_ (**SRC**), can be expressed as a usual recurrent cell. Its major advantage is the differentiability of its equations. This property allows the usual backpropagation algorithm to be applied directly to train SRCs. Spiking neural networks made of SRCs have been tested on the MNIST benchmark as well as two variants, the Fashion MNIST and the Neuromorphic MNIST. These networks have achieved results which are comparable to the ones obtained with other non-convolutional SNNs. Also, multi-layer networks have been shown to be able to learn. This proof of concept shows promising results and paves the way for new experiments. For instance, trying a convolutional version of the SRC on more complex image classification tasks would be interesting. Also, adding feedback connections could increase the computational power of the SRC, as it is up to now only a feedforward SNN. Improving the initialization of the synaptic weights should also be considered. Finally, the neuron is currently only able to modify the synaptic weights and the biases via backpropagation. It is possible to add new learnable parameters to the SRC equations in order to give the neuron the possibility to control more aspects of its dynamics, such as the firing rate or the firing pattern.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Layers} & \multicolumn{4}{c|}{MNIST} & \multicolumn{4}{c|}{Fashion MNIST} & \multicolumn{2}{c|}{N-MNIST} \\ \cline{2-9} & \multicolumn{2}{c|}{Rate coding} & \multicolumn{2}{c|}{Latency coding} & \multicolumn{2}{c|}{Rate coding} & \multicolumn{2}{c|}{Latency coding} & \multicolumn{2}{c|}{} \\ \cline{2-11} & Mean & Max & Mean & Max & Mean & Max & Mean & Max & Mean & Max \\ \hline 1 & 96.32 & 96.43 & 94.78 & 95.06 & 85.43 & 85.66 & 85.53 & 85.64 & 95.16 & 95.25 \\ \hline 2 & **97.78** & 97.86 & **97.89** & **98.01** & 85.89 & 86.15 & **86.83** & **87.25** & **97.81** & **97.88** \\ \hline 3 & 97.67 & **97.94** & 97.73 & 97.99 & **86.04** & **86.31** & 85.36 & 85.58 & 97.75 & 97.84 \\ \hline \end{tabular} \end{table} Table 1: Accuracies (in %) obtained on the test sets of the different datasets and encodings after 30 epochs.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{2}{*}{Layers} & \multicolumn{2}{c|}{MNIST} & \multicolumn{2}{c|}{Fashion MNIST} & \multirow{2}{*}{N-MNIST} \\ \cline{2-5} & Rate coding & Latency coding & Rate coding & Latency coding & \\ \hline 1 & 2.61 & 3.17 & 2.48 & 3.75 & 4.59 \\ \hline 2 & 3.89 & 1.61 & 5.11 & 3.30 & 4.07 \\ \hline 3 & 3.89 & 2.00 & 4.51 & 2.33 & 4.03 \\ \hline \end{tabular} \end{table} Table 2: Mean spiking activity (in %) on the test sets of the different datasets and encodings after 30 epochs.

## Acknowledgments and Disclosure of Funding

Florent De Geeter gratefully acknowledges the financial support of the Walloon Region for Grant No. 2010235 - ARIAC by DW4AI.
2305.15511
Hard-constrained neural networks for modelling nonlinear acoustics
We model acoustic dynamics in space and time from synthetic sensor data. The tasks are (i) to predict and extrapolate the spatiotemporal dynamics, and (ii) reconstruct the acoustic state from partial observations. To achieve this, we develop acoustic neural networks that learn from sensor data, whilst being constrained by prior knowledge on acoustic and wave physics by both informing the training and constraining parts of the network's architecture as an inductive bias. First, we show that standard feedforward neural networks are unable to extrapolate in time, even in the simplest case of periodic oscillations. Second, we constrain the prior knowledge on acoustics in increasingly effective ways by (i) employing periodic activations (periodically activated neural networks); (ii) informing the training of the networks with a penalty term that favours solutions that fulfil the governing equations (soft-constrained); (iii) constraining the architecture in a physically-motivated solution space (hard-constrained); and (iv) combination of these. Third, we apply the networks on two testcases for two tasks in nonlinear regimes, from periodic to chaotic oscillations. The first testcase is a twin experiment, in which the data is produced by a prototypical time-delayed model. In the second testcase, the data is generated by a higher-fidelity model with mean-flow effects and a kinematic model for the flame source. We find that (i) constraining the physics in the architecture improves interpolation whilst requiring smaller network sizes, (ii) extrapolation in time is achieved by periodic activations, and (iii) velocity can be reconstructed accurately from only pressure measurements with a combination of physics-based hard and soft constraints. In and beyond acoustics, this work opens strategies for constraining the physics in the architecture, rather than the training.
Defne Ege Ozan, Luca Magri
2023-05-24T19:04:31Z
http://arxiv.org/abs/2305.15511v2
# Hard-constrained neural networks for modelling nonlinear acoustics

###### Abstract

In this computational paper, we model acoustic dynamics in space and time from synthetic sensor data. The tasks are (i) to predict and extrapolate the spatiotemporal dynamics, and (ii) to reconstruct the acoustic state from partial observations. To achieve this, we develop acoustic neural networks. These are networks that learn from sensor data, whilst being constrained by prior knowledge on acoustic and wave physics. The prior knowledge is constrained as a soft constraint, which informs the training, and as a hard constraint (Galerkin neural networks), which constrains parts of the network's architecture as an inductive bias. First, we show that standard feedforward neural networks are unable to extrapolate in time, even in the simplest case of periodic oscillations. This motivates the constraints on the prior knowledge. Second, we constrain the prior knowledge on acoustics in increasingly effective ways by (i) employing periodic activations (periodically activated neural networks); (ii) informing the training of the networks with a penalty term that favours solutions that fulfil the governing equations (soft-constrained); (iii) constraining the architecture in a physically-motivated solution space (hard-constrained); and (iv) a combination of these. Third, we apply the networks on two testcases for two tasks in nonlinear regimes, from periodic to chaotic oscillations. The first testcase is a twin experiment, in which the data is produced by a prototypical time-delayed model. In the second testcase, the data is generated by a higher-fidelity model with mean-flow effects and a kinematic model for the flame source. We find that (i) constraining the physics in the architecture improves interpolation whilst requiring smaller network sizes, (ii) extrapolation in time is achieved by periodic activations, and (iii) velocity can be reconstructed accurately from only pressure measurements with a combination of physics-based hard and soft constraints. In acoustics and thermoacoustics, this work opens possibilities for physics-constrained data-driven modelling. Beyond acoustics, this work opens strategies for constraining the physics in the architecture, rather than the training.

## I Introduction

When modelling, reconstructing, and forecasting dynamics from data, constraining prior knowledge into machine learning methods can significantly improve prediction, robustness and generalizability (e.g., [1, 2, 3, 4]). On the one hand, constraints that are imposed in the loss function as penalty terms, which act during training, are referred to as "soft constraints". Soft constraints can include the governing partial differential equations (PDEs) derived from conservation laws, and initial and boundary conditions [5]. Deep feedforward neural networks with soft-constrained physics information, coined physics-informed neural networks (PINNs) [6], have been employed to infer flow fields from synthetic data of prototypical flows [7, 8] and of puffing pool fires [9], from experimental data of a flow over an espresso cup [10], and from clinical MRI data [11], to name only a few. Beyond PINNs, physics information has enabled super-resolution tasks without high-resolution labels in deep feedforward neural networks [12, 13, 14] and in convolutional neural networks [15, 16]. On the other hand, constraints that are imposed in the architecture (as opposed to the training) are referred to as "hard constraints".
Hard constraints span the region of the function space in which physical solutions live, i.e., they create an _inductive bias_ [17]. Known PDEs [18, 19, 20], invariances [21], Dirichlet boundary conditions [5], and periodic boundary conditions [16, 22, 23] have been incorporated in the architecture of neural networks as hard constraints. In this paper, we design Galerkin neural networks, which hard-encode the solution structure into the architecture. These networks are inspired by Galerkin projection, which is a common technique for solving PDEs by projecting the equations onto a finite set of modes, which transforms the problem into a set of ordinary differential equations [24]. The library of modes can be a generic basis, such as Fourier, or data-driven, such as proper orthogonal decomposition (POD) modes. In this paper, we take advantage of a physical basis. We focus on solutions from acoustics and thermoacoustics, which originate from nonlinear wave equations. Thermoacoustic systems contain nonlinearities in the heat release law, which, when coupled with the acoustics in a positive feedback loop, can generate self-excited oscillations and rich nonlinear behaviours via bifurcations (e.g., [25, 26, 27, 28, 29]). Because these oscillations can have detrimental effects on the system's structure and performance, their prediction and control are active areas of research, for example, in gas turbines [30, 31] and rocket engines [32]. Traditionally, the prediction of thermoacoustics has been achieved with first principles. In the time domain, a direct approach is the brute-force time-integration of the governing equations following a discretization scheme. High-fidelity models, such as large-eddy simulations that model the acoustics and the flame simultaneously on fine grids, provide highly accurate solutions, but they are computationally expensive [31]. Low-fidelity models reduce the computational effort at the expense of accuracy, while obtaining models that can be used for stability and bifurcation analyses, and parametric studies. Generally, these approaches combine a linear acoustic solver with a heat release law. In this direction, the nonlinear behaviour of longitudinal and annular combustors has been investigated by employing network models based on travelling waves [e.g., 33; 34; 35; 36; 37], on Galerkin decompositions, in which pressure and velocity are expanded onto a finite number of acoustic eigenmodes and the PDEs are projected onto those modes [e.g., 38; 39], and on numerical discretizations of the PDEs [40; 41]. Predictions with first principles only are either computationally expensive (e.g., large-eddy simulation) or only as good as the model assumptions. Because thermoacoustics is a multi-physics phenomenon, the model assumptions made in low-order models are inevitably substantial. This motivates adding data into the first-principles modelling of thermoacoustics, a task at which machine learning methods excel. The data typically comes from laboratory experiments that are conducted with setups consisting of a duct with a heat source, which can be a flame or an electrically heated wire mesh [e.g., 42; 43]. In the experiments, the collected data is usually the acoustic pressure measured by microphones at high sampling rates. In this direction, data assimilation techniques have been applied to improve physics-based qualitatively accurate models of a prototypical thermoacoustic system, whilst estimating model parameters.
Novoa and Magri [44] combined a thermoacoustic low-order model with an ensemble Kalman filter, and with an Echo State Network for real-time bias-aware data assimilation of the acoustic state [45]. In other studies, neural networks have been developed for heat-release-law or flame-response inference in thermoacoustics [46; 47; 48]. In [49], a physics-informed feedforward neural network approach was developed for learning thermoacoustic limit cycles by using periodic activation functions. The overarching goal of this paper is to generalize and develop acoustic neural networks to embed the structure of the nonlinear wave solution into the architecture. The specific objective of this paper is three-fold: (i) to predict and extrapolate in time thermoacoustic oscillations; (ii) to reconstruct pressure and velocity over the entire domain from pressure sensors only; and (iii) to obtain a model that is robust to noise and generalizable to unseen scenarios. This paper is organized as follows. In Section II, we provide the mathematical background for feedforward neural networks and Galerkin decompositions. In Section III, we introduce Galerkin neural networks. In Section IV, we discuss the application of Galerkin neural networks to acoustics and thermoacoustics. In Sections V and VI, we show results for twin experiments on synthetic data from a Rijke tube and on synthetic data from a higher-fidelity model. The paper ends with a conclusion section.

## II Background

### Standard feedforward neural networks

Let \(\mathbf{y}\in\mathbb{R}^{N_{y}}\) represent some vector of physical quantities pertaining to a system that depend on space, \(\mathbf{x}\in\mathbb{R}^{N_{d}}\), where \(N_{d}\) is the dimension, and time, \(t\in\mathbb{R}\). Then, given full or partial observations of \(\mathbf{y}\), our goal is to learn a model \(\mathbf{f}\) that predicts an output vector \(\mathbf{\hat{y}}\in\mathbb{R}^{N_{y}}\) from an input vector \((\mathbf{x},t)\) while minimizing an error metric, \(\mathcal{L}\). A feedforward neural network (FNN) is defined by a composition of functions, \(\mathbf{f}\), which, when appropriately designed, can approximate any continuous function in a specified range [51] \[\mathbf{f}(\mathbf{x},t):=\mathbf{f}^{(L)}\left(\mathbf{f}^{(L-1)}\left(\ldots\mathbf{f}^{(1)}\left(\mathbf{x},t\right)\right)\right), \tag{1a}\] \[\mathbf{f}^{(l)}(\mathbf{z}):=\mathbf{\phi}^{(l)}\left(\mathbf{W}^{(l)}\mathbf{z}+\mathbf{b}^{(l)}\right), \tag{1b}\] where \(\mathbf{f}^{(l)}\) is the function that maps the layer \(l-1\) to the layer \(l\), with \(l=1,\ldots,L\) and \(L\) the number of layers; \(\mathbf{W}\) are the weights matrices and \(\mathbf{b}\) are the biases, which are the trainable parameters; and \(\mathbf{\phi}^{(l)}\) are the activation functions from the layer \(l-1\) to the layer \(l\), which are applied to each component of the argument in (1b). For regression, the activation in the last layer, \(\mathbf{\phi}^{(L)}\), is linear. The neural network offers an ansatz for a continuous function through linear operations and simple nonlinear activations (please refer to [50] for a pedagogical and geometric explanation). The FNN is a standard architecture that is fully data-driven, i.e., no prior knowledge is embedded in the network (Figure 1).

Figure 1: Example of a standard feedforward neural network (FNN) [e.g., 50] with two hidden layers. This is the data-driven-only model, which has no prior knowledge embedded in the architecture. The trainable parameters are the weights, \(\mathbf{W}^{(.)}\), and the biases, \(\mathbf{b}^{(.)}\).
The network's weights and biases, collectively grouped in a variable \(\mathbf{\chi}\), are optimized via gradient descent to minimize an error, which is provided by a loss function \(\mathcal{L}\) \[\mathbf{\chi}^{\star}=\operatorname*{arg\,min}_{\mathbf{\chi}}\,\mathcal{L}(\mathbf{\chi}). \tag{2}\] When no physics constraint is imposed, as in standard neural networks, the data-driven loss is quantified by the mean-squared error (MSE) between the measured data and the predictions of the network. In the formulation of the data-driven loss, we also account for the cases of partial state observations with a measurement matrix \(\mathbf{M}\in\mathbb{R}^{N_{y}\times N_{y}}\) that indicates which states are measured, \[M_{ij}=\begin{cases}1&\text{when $i=j$ and $y_{i}$ is measured},\\ 0&\text{otherwise}.\end{cases} \tag{3}\] When full state measurements are available, \(\mathbf{M}\) is the identity matrix, \(\mathbf{I}_{N_{y}\times N_{y}}\). The loss is given as \[\mathcal{L}\equiv\mathcal{L}_{DD}=\frac{1}{NN_{m}}\sum_{k=1}^{N}||\mathbf{M}\mathbf{y}_{k}-\mathbf{M}\hat{\mathbf{y}}_{k}||_{2}^{2}, \tag{4}\] where the subscript \(DD\) stands for data-driven, \(N_{m}\leq N_{y}\) is the number of measured states, the subscript \(2\) denotes the \(\ell_{2}\) norm, and the subscript \(k\) denotes the \(k\)-th element in the dataset of \(N\) pairs of input and output vectors. In Section IV.2, we tailor the neural networks to acoustic problems by specifying the data, input vectors, task, activation functions, nonlinear maps, and loss functions.
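As an illustration of Eqs. (3)-(4), a minimal TensorFlow sketch of the masked data-driven loss is given below; the function name and the representation of \(\mathbf{M}\) by its diagonal are our choices, not part of the original formulation.

```python
import tensorflow as tf

def data_loss(y_true, y_pred, mask):
    """Masked mean-squared error, Eq. (4).
    mask: diagonal of the measurement matrix M, e.g. [1., 0.] when only
    the first state (pressure) is observed."""
    m = tf.constant(mask, dtype=y_pred.dtype)
    n_measured = tf.reduce_sum(m)                                # N_m
    sq = tf.reduce_sum(m * tf.square(y_true - y_pred), axis=-1)  # ||M y - M y_hat||_2^2
    return tf.reduce_mean(sq) / n_measured                       # 1/(N N_m) sum over k
```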
### Separation of variables and Galerkin methods

Separation of variables seeks solutions to partial differential equations in a special form, as the product of functions of the individual independent variables, e.g., time and single space coordinates, which results in an eigenvalue problem [52]. Linear wave equations and wave solutions can thus be represented in an acoustic eigenbasis, namely Fourier modes, which is complete. On the other hand, the thermoacoustic problem is governed by a wave equation with a nonlinearity arising from the heat release (please refer to Section IV.1 for details). However, the nonlinearity only slightly changes the frequency and mode coupling, which motivates the use of the acoustic eigenspace as an expressive basis to represent the nonlinear solutions as well [38]. A weak solution to the nonlinear problem is provided by the Galerkin method, which approximates the solution to the PDE by projecting the equations onto a finite-dimensional subspace [24]. The solution is written as a linear combination of the basis functions that span this subspace [24], \[y_{i}(x,t)=\sum_{j=1}^{N_{g}}\alpha_{j}^{i}(t)\Psi_{j}^{i}(x), \tag{5}\] where \(\mathbf{y}\in\mathbb{R}^{N_{y}}\) is a vector of physical quantities (Section II.1) and \(i=1,2,\ldots,N_{y}\) denotes the \(i\)-th element of \(\mathbf{y}\), \(\Psi_{j}^{i}(x)\) are the basis functions, or Galerkin modes, \(N_{g}\) is the number of Galerkin modes, and \(\alpha_{j}^{i}(t)\) are the Galerkin amplitudes. The orthogonality of the Galerkin modes guarantees that the approximation error is orthogonal to the space spanned by the finite number of modes retained. This property makes the Galerkin method a suitable choice for the modelling of acoustic problems because the low-frequency modes contain most of the energy and, thus, the truncation error can be minimal.

## III Galerkin neural networks

FNNs are flexible and useful tools for function approximation; however, they can suffer from under- or overfitting, especially in the case of scarce or noisy data. This requires a careful hyperparameter tuning, which becomes computationally expensive in proportion to the size of the network. Even then, the network may not be able to capture some features of the solution because of missing data. In order to counteract these shortcomings, we exploit the physical knowledge about the spatiotemporal basis of the system in question, and propose a network structure inspired by the Galerkin decomposition of the system, as motivated in Section II.2. The chosen Galerkin modes are a known nonlinear transformation of the spatial coordinates, which is introduced in the network. Therefore, by design, this network can be configured to automatically satisfy the boundary conditions. We will refer to this network architecture as the _Galerkin neural network_ (_GalNN_). The Galerkin network is composed of a spatial branch that transforms \(\mathbf{x}\) into the _a-priori_ Galerkin modes, \(\mathbf{\Psi}^{i}(\mathbf{x})\) (5), and a temporal branch that is an FNN (1a) that takes only \(t\) as an input and predicts the unknown Galerkin amplitudes, \(\mathbf{\alpha}^{i}(t)\) (5), as outputs. The final outputs, \(\mathbf{\hat{y}}\), are computed by (5). Formally, the GalNN is defined as \[f_{i}(\mathbf{x},t)=\sum_{j=(i-1)N_{g}+1}^{iN_{g}}g_{j}(\mathbf{x})h_{j}(t), \tag{6}\] where \(\mathbf{f}\) is the map from \((\mathbf{x},t)\) to \(\mathbf{\hat{y}}\), \(i=1,2,\ldots,N_{y}\) denotes the \(i\)-th element of the output vector, \(\mathbf{g}(\mathbf{x})\) is the spatial branch given by \(\mathbf{g}(\mathbf{x})=(\mathbf{\Psi}^{1}(\mathbf{x}),\mathbf{\Psi}^{2}(\mathbf{x}),\ldots,\mathbf{\Psi}^{N_{y}}(\mathbf{x}))\) and \(\mathbf{h}(t)\) is the temporal branch modelled as an FNN, the outputs of which are denoted as \((\mathbf{\alpha}^{1}(t),\mathbf{\alpha}^{2}(t),\ldots,\mathbf{\alpha}^{N_{y}}(t))\). The data-driven loss is defined in the same way as in the standard FNN case (4), over the prediction error on \(\mathbf{y}\). The time evolution of the Galerkin amplitudes is thus obtained as an intermediary step. This architecture is shown in Figure 2.

Figure 2: Galerkin neural network (GalNN). This network is composed of two branches: a) a spatial branch (hard constraint), and b) a temporal branch, which learns the temporal behaviour. The trainable parameters are the weights, \(\mathbf{W}^{(.)}\), and the biases, \(\mathbf{b}^{(.)}\).
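To make the architecture concrete, here is a minimal TensorFlow sketch of a GalNN for a pressure output with fixed sine modes on a uniform-density duct; the layer sizes, the sine activation, and the function names are illustrative assumptions, not the configuration used for the results.

```python
import numpy as np
import tensorflow as tf

N_g = 10  # number of Galerkin modes (illustrative)

def pressure_modes(x):
    """Spatial branch (hard constraint): pressure eigenfunctions -sin(j*pi*x)
    of a uniform-density duct, which vanish at x = 0 and x = 1 so that the
    ideal boundary conditions are satisfied by construction."""
    j = np.arange(1, N_g + 1)
    return -np.sin(np.pi * np.outer(x, j)).astype(np.float32)  # (N_points, N_g)

# Temporal branch: an FNN in t that predicts the Galerkin amplitudes
temporal_branch = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation=tf.math.sin),
    tf.keras.layers.Dense(32, activation=tf.math.sin),
    tf.keras.layers.Dense(N_g),  # linear output: alpha_j(t)
])

def galnn_pressure(x, t):
    """Prediction p'(x, t) = sum_j alpha_j(t) * Psi_j(x), cf. Eqs. (5)-(6)."""
    Psi = tf.constant(pressure_modes(x))              # (N_points, N_g)
    alpha = temporal_branch(tf.reshape(t, (-1, 1)))   # (N_points, N_g)
    return tf.reduce_sum(alpha * Psi, axis=-1)
```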
## IV Application to acoustics and thermoacoustics

In this paper, we develop a physics-constrained data-driven model of the acoustic variables as a function of time and space. Given enough capacity in terms of the number of neurons and layers, FNNs can fit the data within the training range. This can be considered an interpolation problem, by which we mean that the network predicts on data points that lie within the bounds of the training input range. When the prediction is performed for scenarios outside the training input range, we call this an extrapolation problem. Thermoacoustic systems exhibit oscillations, which can be periodic, quasi-periodic, or chaotic in time. First, given time series data from a time window, our task is to obtain a model that can extrapolate such oscillatory behaviour. In experiments, full-state observations may be unavailable or prohibitively expensive; e.g., in acoustics, only pressure data may be available through measurements with microphones. Second, our task is to reconstruct the flow variables over the entire spatial domain from full or partial noisy state observations, which poses a challenge for purely data-driven models. Third, we seek a robust, generalizable, low-order model for acoustic and thermoacoustic solutions using as little data as possible.

### Background on thermoacoustics

We consider a thermoacoustic system composed of a straight duct of length \(\tilde{L}\) with a compact heat source located at \(\tilde{x}=\tilde{x}_{f}\), where \((\tilde{\cdot})\) denotes a dimensional variable (Figure 3). We make the following assumptions about the system: (i) the acoustics are one-dimensional, i.e., the tube is sufficiently longer than its diameter for the cut-on frequency to be large enough for longitudinal acoustics only to propagate; (ii) the heat release from the flame acts as a pointwise monopole source of sound (compact assumption); (iii) the mean flow is low Mach number and there is no entropy wave convection; and (iv) the effects of viscosity and heat conduction are negligible (e.g., [28]). The reflection of the acoustic waves at the boundaries is given by the reflection coefficients \(R_{in}\) and \(R_{out}\) for the inlet and outlet, respectively, which determine the boundary conditions.

Figure 3: Schematic of the thermoacoustic system. A straight duct with open ends and a heat source, \(\dot{q}\), located at \(x_{f}\). The acoustic reflection coefficients at the inlet and outlet are \(R_{in}\) and \(R_{out}\). The Rijke tube is employed in Section V for twin experiments with the assumptions: (i) zero Mach number, i.e., \(\bar{u}_{1}=\bar{u}_{2}=0\); (ii) no change in the mean flow variables before and after the flame, e.g., \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\); and (iii) ideal boundary conditions, i.e., \(R_{in}=R_{out}=-1\). A higher-fidelity model with a mean flow and a kinematic model for the flame is employed in Section VI for analysing model generalization.

The dynamics are governed by the dimensional equations derived from mass, momentum, and energy conservation. Modelling the heat source as a compact source results in two duct segments related by jump conditions that are enforced at the heat source location. For brevity, the subscripts 1 and 2 denote conditions before and after the flame, i.e., \(\tilde{x}=\tilde{x}_{f,1}\) and \(\tilde{x}=\tilde{x}_{f,2}\), respectively. The jump conditions are found from the mass, momentum, and energy fluxes across the flame with the ideal gas law (e.g., [53]). The governing equations and the jump conditions are linearized by assuming that the flow variables can be expressed as infinitesimal perturbations on top of a mean flow, i.e., \((\tilde{\cdot})=\bar{(\tilde{\cdot})}+(\tilde{\cdot})^{\prime}\), where \(\bar{(\cdot)}\) denotes the steady mean-flow variable and \((\cdot)^{\prime}\) denotes the unsteady infinitesimal perturbations.
Under the low Mach number assumption, the acoustics are governed by the momentum and energy equations (e.g., [28]) \[\bar{\tilde{\rho}}\left(\frac{\partial\tilde{u}^{\prime}}{\partial\tilde{t}}+\bar{\tilde{u}}\frac{\partial\tilde{u}^{\prime}}{\partial\tilde{x}}\right)+\frac{\partial\tilde{p}^{\prime}}{\partial\tilde{x}}=0, \tag{7a}\] \[\frac{\partial\tilde{p}^{\prime}}{\partial\tilde{t}}+\bar{\tilde{u}}\frac{\partial\tilde{p}^{\prime}}{\partial\tilde{x}}+\gamma\bar{\tilde{p}}\frac{\partial\tilde{u}^{\prime}}{\partial\tilde{x}}-(\gamma-1)\frac{\tilde{\dot{q}}^{\prime}}{\tilde{A}}\delta(\tilde{x}-\tilde{x}_{f})=0, \tag{7b}\] where \(\tilde{\rho}\) is the density; \(\tilde{u}\) is the velocity; \(\tilde{p}\) is the pressure; \(\tilde{\dot{q}}\) is the heat release rate; \(\tilde{A}\) is the cross-sectional area of the duct; \(\gamma\) is the heat capacity ratio; and \(\delta\) is the Dirac delta distribution. In Sections V and VI, the governing equations are employed with different simplifications to generate the synthetic data for our study. We will impose the governing equations in their non-dimensional PDE form, denoted by omitting the symbol \(\tilde{(\cdot)}\), as soft constraints during the training of our neural networks. This approach will be discussed in detail in Section IV.2.2.

### Acoustic neural networks

The dynamics of thermoacoustic oscillations are dominated by unstable eigenfunctions (e.g., [28]), which are periodic oscillations in time. We propose neural networks that can naturally infer acoustic dynamics, which are the basis functions of nonlinear thermoacoustic behaviours. The proposed networks embed the prior knowledge through the activation functions (Section IV.2.1), through a penalization loss function in the training (soft constraint, Section IV.2.2), and through the architecture (hard constraint, Section IV.2.3). For the thermoacoustic system described in Section IV.1, the input vector consists of the one-dimensional spatial coordinate, \(x\), and time, \(t\), and the output vector consists of the acoustic pressure and velocity fluctuations, \(\mathbf{y}=(p^{\prime},u^{\prime})\). In Section IV.2.1, we motivate employing periodic activation functions in the standard FNN for acoustic problems, the eigenfunctions of which are periodic. Further, in Section IV.2.2, we include a physics-based regularization term that penalizes solutions that violate the conservation laws governing the acoustic dynamics. Finally, in Section IV.2.3, we promote a hard-constrained architecture in the form of a GalNN (Section III). Inspired by the physical remarks of Section II.2, we design a neural network architecture that spans the Hilbert space with the acoustic eigenfunctions as the Galerkin modes, whilst having trainable parameters for inference and closure of the unknown physical terms. The acoustic neural networks are summarized at the end of this section in Table 1.

#### iv.2.1 Periodically activated feedforward neural networks (P-FNNs)

In order to augment the extrapolation capability, observe that nonlinear thermoacoustic dynamics originate from the nonlinear coupling of acoustic eigenfunctions (e.g., [28]), which are Fourier modes. From a data-driven perspective, this means that the weights and biases should be periodically activated. A straightforward strategy is to employ periodic activations in the acoustic neural networks, i.e., \(\mathbf{\phi}^{(l)}(\mathbf{z})=\sin(\mathbf{z})\) in Eq. (1b) [49].
The physics of the system, namely the periodic nature of the solutions, is embedded in the network itself via the choice of activation function, which provides an inductive bias on the function space of the network and improves extrapolation [54]. The weights of a layer \(l\) with the sine activation are initialized from a uniform distribution in the range \([-\sqrt{3/\text{fan}_{\text{in}}},\sqrt{3/\text{fan}_{\text{in}}}]\), where \(\text{fan}_{\text{in}}\) is the number of neurons, \(n_{l-1}\), in the layer \(l-1\) [54].

#### iv.2.2 Soft constraints with physics-informed losses

Prior knowledge can be embedded as a penalization term in the loss function in Eq. (2) to minimize during training (e.g., [5; 6]). This approach improves the generalizability of the network, while promoting physical outputs [3]. In thermoacoustics, neural network predictions should fulfil the acoustic conservation laws; therefore, we penalize solutions that violate the momentum and energy equations in the loss function \[\mathcal{L}=\lambda_{DD}\mathcal{L}_{DD}+\lambda_{M}\mathcal{L}_{M}+\lambda_{E}\mathcal{L}_{E}, \tag{8}\] where \(\mathcal{L}_{DD}\) is the data-driven loss (4) (Section II.1), and \(\mathcal{L}_{M}\) and \(\mathcal{L}_{E}\) are the residual losses from the conservation of momentum and energy, respectively. The non-negative scalars \(\lambda_{DD}\), \(\lambda_{M}\), and \(\lambda_{E}\) are regularization hyperparameters. Obtaining the momentum and energy losses, \(\mathcal{L}_{M}\) and \(\mathcal{L}_{E}\), requires the evaluation of the physical residuals of the network predictions. We denote the partial differential operators that define the residuals of the momentum and energy equations as \(\mathcal{F}_{M}(p^{\prime},u^{\prime})\) and \(\mathcal{F}_{E}(p^{\prime},u^{\prime})\), respectively. A given \((p^{\prime},u^{\prime})\) is a solution of the system if the residuals are zero, i.e., \(\mathcal{F}_{M}(p^{\prime},u^{\prime})=0\) and \(\mathcal{F}_{E}(p^{\prime},u^{\prime})=0\). We will define these residuals exactly, separately for the Rijke tube in Section V.1 and for the higher-fidelity model in Section VI.1, along with the modelling assumptions and non-dimensionalization of the original equations (7). The physics-informed losses are then computed as \[\mathcal{L}_{M}=\frac{1}{N+N_{s}}\sum_{k=1}^{N+N_{s}}\mathcal{F}_{M}(\hat{p}^{\prime}_{k},\hat{u}^{\prime}_{k})^{2}, \tag{9}\] \[\mathcal{L}_{E}=\frac{1}{N+N_{s}}\sum_{k=1}^{N+N_{s}}\mathcal{F}_{E}(\hat{p}^{\prime}_{k},\hat{u}^{\prime}_{k})^{2}, \tag{10}\] where \(N\) is the number of training data points, i.e., \(\{x_{k},t_{k}\}_{k=1}^{N}\), and \(N_{s}\) is the number of uniformly sampled points over the whole training domain, as we can evaluate the physical loss at any location in time and space. For this purpose, we employ automatic differentiation using the tf.GradientTape functionality from TensorFlow [55]. The automatic differentiation of the output variables with respect to the input variables yields the Jacobian that contains the partial derivatives, \(\frac{\partial\hat{p}^{\prime}}{\partial x},\frac{\partial\hat{p}^{\prime}}{\partial t},\frac{\partial\hat{u}^{\prime}}{\partial x},\frac{\partial\hat{u}^{\prime}}{\partial t}\). These partial derivatives are plugged into the operators, \(\mathcal{F}_{M}\) and \(\mathcal{F}_{E}\), to calculate the residual of the network.
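A minimal sketch of this residual evaluation is given below, assuming a model that maps \((x,t)\) to \((\hat{p}^{\prime},\hat{u}^{\prime})\) and using, for concreteness, the zero-Mach-number residuals introduced in the next subsection (with the pointwise heat-release term omitted for brevity); the function name and signature are illustrative.

```python
import tensorflow as tf

def physics_residuals(model, x, t, rho_mean=1.0):
    """Evaluate F_M and F_E by automatic differentiation with tf.GradientTape.
    model(x, t) -> (p, u). Here the zero-Mach-number residuals
    F_M = rho_mean * du/dt + dp/dx and F_E = dp/dt + du/dx are used,
    with the heat-release source term left out for brevity."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])
        p, u = model(x, t)
    dp_dx = tape.gradient(p, x)
    dp_dt = tape.gradient(p, t)
    du_dx = tape.gradient(u, x)
    du_dt = tape.gradient(u, t)
    del tape  # release the persistent tape
    F_M = rho_mean * du_dt + dp_dx
    F_E = dp_dt + du_dx
    return F_M, F_E

# Mean-squared physics losses over the N + N_s evaluation points, Eqs. (9)-(10):
# L_M = tf.reduce_mean(F_M ** 2);  L_E = tf.reduce_mean(F_E ** 2)
```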
#### iv.2.3 Hard constraints with Galerkin neural networks (GalNNs)

A suitable eigenbasis for the thermoacoustic problem is provided by the natural acoustic eigenfunctions, because most of the energy of the solution is contained in the first acoustic modes, and the effects of the mean flow or of the nonlinearity only slightly change the frequencies and the eigenshapes (Section II.2). The eigenfunctions of the thermoacoustic system in the zero Mach number case are similar to those of a low Mach number case, \(M\lesssim 0.1\) [56]. Hence, to derive them, we assume a zero Mach number for simplification, in which case the dynamics are governed by the non-dimensional equations [e.g., 56] \[\bar{\rho}\frac{\partial u^{\prime}}{\partial t}+\frac{\partial p^{\prime}}{\partial x}=0, \tag{11a}\] \[\frac{\partial p^{\prime}}{\partial t}+\frac{\partial u^{\prime}}{\partial x}-\dot{q}^{\prime}\delta(x-x_{f})=0, \tag{11b}\] where the density is modelled as \(\bar{\rho}=\bar{\rho}_{1}\) when \(0\leq x<x_{f}\), and \(\bar{\rho}=\bar{\rho}_{2}\) when \(x_{f}<x\leq 1\). The dimensional variables have been scaled as \(x=\tilde{x}/\tilde{L}\); \(t=\tilde{t}\tilde{\bar{c}}/\tilde{L}\), where \(\tilde{\bar{c}}\) is the mean speed of sound; \(u^{\prime}=\tilde{u}^{\prime}/\tilde{\bar{c}}\); \(\rho=\tilde{\rho}/\tilde{\bar{\rho}}\); \(p^{\prime}=\tilde{p}^{\prime}/(\tilde{\bar{p}}\tilde{\bar{c}}^{2})\); \(\dot{q}^{\prime}=\tilde{\dot{q}}^{\prime}(\gamma-1)/(\tilde{\bar{p}}\tilde{\bar{c}}^{3})\). A physically-justified method to solve the set of PDEs (11) is to decompose the pressure and velocity as \[p^{\prime}(x,t)=\sum_{j=1}^{N_{g}}\begin{cases}\mu_{j}(t)\Pi_{j}^{(1)}(x),&0\leq x<x_{f},\\ \mu_{j}(t)\Pi_{j}^{(2)}(x),&x_{f}<x\leq 1,\end{cases} \tag{12a}\] \[u^{\prime}(x,t)=\sum_{j=1}^{N_{g}}\begin{cases}\eta_{j}(t)\Upsilon_{j}^{(1)}(x),&0\leq x<x_{f},\\ \eta_{j}(t)\Upsilon_{j}^{(2)}(x),&x_{f}<x\leq 1,\end{cases} \tag{12b}\] and to project the equations onto the acoustic eigenfunctions, given below for the case of ideal boundary conditions, i.e., \(p^{\prime}(x=0)=p^{\prime}(x=1)=0\) [56], \[\Pi^{(1)}_{j}(x)=-\sin(\omega_{j}\sqrt{\bar{\rho}_{1}}x),\quad\Pi^{(2)}_{j}(x)=-\left(\frac{\sin\gamma_{j}}{\sin\beta_{j}}\right)\sin(\omega_{j}\sqrt{\bar{\rho}_{2}}(1-x)), \tag{13a}\] \[\Upsilon^{(1)}_{j}(x)=\frac{1}{\sqrt{\bar{\rho}_{1}}}\cos(\omega_{j}\sqrt{\bar{\rho}_{1}}x),\quad\Upsilon^{(2)}_{j}(x)=-\frac{1}{\sqrt{\bar{\rho}_{2}}}\left(\frac{\sin\gamma_{j}}{\sin\beta_{j}}\right)\cos(\omega_{j}\sqrt{\bar{\rho}_{2}}(1-x)), \tag{13b}\] where \(N_{g}\) is the number of Galerkin modes retained in the approximation, and \(\gamma_{j}=\omega_{j}\sqrt{\bar{\rho}_{1}}x_{f},\quad\beta_{j}=\omega_{j}\sqrt{\bar{\rho}_{2}}(1-x_{f})\). The acoustic angular frequencies \(\omega_{j}\) are the solutions of the dispersion relationship \(\sin\beta_{j}\cos\gamma_{j}+\cos\beta_{j}\sin\gamma_{j}\sqrt{\frac{\bar{\rho}_{1}}{\bar{\rho}_{2}}}=0\). In the limit of no jump in the mean-flow density, i.e., \(\bar{\rho}_{1}=\bar{\rho}_{2}\), \(\omega_{j}=j\pi\), which are the natural acoustic angular frequencies [56]. Motivated by the Galerkin decomposition (12), we express the acoustic and thermoacoustic solutions as a GalNN (6), where \(\Pi^{(1)}_{j}\), \(\Pi^{(2)}_{j}\) (13a), and \(\Upsilon^{(1)}_{j}\), \(\Upsilon^{(2)}_{j}\) (13b) are the Galerkin modes, and \(\mu_{j}\) and \(\eta_{j}\) are the Galerkin amplitudes. The pressure and velocity are determined from the predicted Galerkin amplitudes using (12). This architecture is shown in Figure 4 (a, b).
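The angular frequencies \(\omega_{j}\) entering the modes (13) can be computed numerically from the dispersion relationship; the following is a small sketch, where the bracketing grid and the use of SciPy's `brentq` root finder are our choices, and the mean densities and flame location are those of the higher-fidelity testcase of Section VI.

```python
import numpy as np
from scipy.optimize import brentq

rho1, rho2, x_f = 1.145, 0.769, 0.61  # values of the higher-fidelity testcase

def dispersion(omega):
    """Left-hand side of the dispersion relationship for omega_j."""
    gamma = omega * np.sqrt(rho1) * x_f
    beta = omega * np.sqrt(rho2) * (1.0 - x_f)
    return (np.sin(beta) * np.cos(gamma)
            + np.cos(beta) * np.sin(gamma) * np.sqrt(rho1 / rho2))

# Bracket sign changes on a fine grid, then polish each root with Brent's method
grid = np.linspace(0.1, 40.0, 4000)
signs = np.sign(dispersion(grid))
omegas = [brentq(dispersion, a, b)
          for a, b, sa, sb in zip(grid[:-1], grid[1:], signs[:-1], signs[1:])
          if sa != sb]
print(omegas[:5])  # the first few omega_j; they reduce to j*pi when rho1 = rho2
```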
Whereas the acoustic pressure is continuous across the flame, the acoustic velocity undergoes a discontinuity due to the dilation from the heat-release rate [28]. When such a discontinuity is approximated with a finite Fourier series, such as the Galerkin modes we are using, the Gibbs phenomenon manifests itself as high-frequency oscillations around the discontinuity [57; 40], which causes an unphysical behaviour. To physically capture the discontinuity and eliminate unphysical oscillations in the predictions, we add two step functions to the velocity modes \[\Upsilon^{(1)}_{N_{g}+1}(x)=1,\quad\Upsilon^{(2)}_{N_{g}+1}(x)=0, \tag{14a}\] \[\Upsilon^{(1)}_{N_{g}+2}(x)=0,\quad\Upsilon^{(2)}_{N_{g}+2}(x)=1, \tag{14b}\] such that the summation (12b) runs from \(1\) to \(N_{g}+2\). The modes \(\Upsilon^{(.)}_{N_{g}+1}\) and \(\Upsilon^{(.)}_{N_{g}+2}\) are weighted by independent coefficients, \(\eta_{N_{g}+1}\) and \(\eta_{N_{g}+2}\), which allows them to capture a jump discontinuity at the flame location. The temporal branch of the GalNN should be capable of extrapolating thermoacoustic oscillations in time. These solutions bifurcate from limit-cycle oscillations, which are periodic, to quasiperiodic and chaotic oscillations. As they originate from periodic solutions, even the quasiperiodic and chaotic solutions are in fact dominated by periodicity, which can also be observed in the frequency spectrum of the oscillations (later in Section V.2.2). Therefore, we deploy a periodically activated FNN (Section IV.2.1) as the temporal branch of the GalNN.

#### iv.2.4 Periodic Galerkin networks (P-GalNNs)

We enforce periodic activations (Section IV.2.1) in the Galerkin neural network in order to achieve extrapolation for limit-cycle, quasiperiodic, and chaotic solutions. For simplicity, if we consider a single-layer periodically activated GalNN, then the outputs of the temporal branch of the network are given as a sum of sinusoids, where the hidden-layer weights represent the angular frequencies and the biases represent the phases, i.e., \(\sin(\mathbf{W}^{(1)}t+\mathbf{b}^{(1)})\). For a signal given as a sum of sinusoids, its angular frequency is the greatest common divisor of the angular frequencies of its components. For a limit-cycle, the greatest common divisor is its fundamental angular frequency, and the other frequencies are its harmonics. This applies to a periodically activated neural network as well. However, the weights of the neural network are initialised randomly and then trained on a finite amount of data, so it is numerically unlikely for the learned weights to be exact integer multiples of a fundamental frequency. To overcome this numerical challenge, we add a further constraint on the GalNN to guarantee periodic behaviour with the fundamental frequency of the limit-cycle. We create a hidden layer that takes time as input and outputs \((\sin(\mathbf{h}\theta t),\cos(\mathbf{h}\theta t))\), \(\mathbf{h}=(1,2,\ldots,N_{h})\), such that the weights in the periodic activations are integer multiples of one trainable variable, \(\theta\). The variable \(\theta\) is initialized to the non-dimensional angular frequency of the acoustic system, \(\pi\), which is an educated guess for the actual thermoacoustic frequency because, from our physical knowledge, we know that the nonlinearity will only slightly change this value. Hence, upon training, \(\theta\) corresponds to the angular frequency of the limit-cycle, and the number of harmonics, \(N_{h}\), can be regarded as a hyperparameter.
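A small TensorFlow sketch of such a harmonic layer is given below; the class name and the default number of harmonics are illustrative assumptions.

```python
import math
import tensorflow as tf

class HarmonicLayer(tf.keras.layers.Layer):
    """Outputs (sin(k*theta*t), cos(k*theta*t)) for k = 1..N_h, with a single
    trainable fundamental angular frequency theta initialised at pi."""

    def __init__(self, n_harmonics=5):
        super().__init__()
        self.k = tf.range(1.0, n_harmonics + 1.0)             # harmonic multipliers
        self.theta = tf.Variable(math.pi, dtype=tf.float32)   # trainable frequency

    def call(self, t):
        # t: (batch, 1); broadcasting gives phases of shape (batch, N_h)
        phase = self.k * self.theta * t
        return tf.concat([tf.sin(phase), tf.cos(phase)], axis=-1)
```

A linear layer on top of these \(2N_{h}\) features then predicts the Galerkin amplitudes.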
Because we have both sine and cosine activations for the same angular frequency, the phase information is automatically captured; hence, we do not require a bias term. The temporal branch of the GalNN tailored for limit-cycles is shown in Figure 4 (c).

## V Extrapolation and state reconstruction: twin experiments

### Dataset from the Rijke tube

We perform twin experiments on synthetic data generated from a prototypical thermoacoustic system, the Rijke tube, to demonstrate how the acoustic neural networks developed in Section IV.2 tackle extrapolation and state reconstruction. The Rijke tube (Figure 3) captures the qualitative nonlinear dynamics and bifurcations observed in real-life applications [39]. In this model, we further assume that (i) the mean flow has a zero Mach number with uniform density, and (ii) the boundary conditions are ideal, i.e., \(p^{\prime}(x=0,t)=p^{\prime}(x=1,t)=0\), hence \(R_{in}=R_{out}=-1\). We solve the associated PDEs (11) using the Galerkin decomposition (12)-(13), with \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\), which implies that \(\omega_{j}=j\pi\) [56]. By substituting the pressure and velocity variables in (11) with their Galerkin decompositions in (12) and projecting the dynamics onto the Galerkin modes, the dynamics of the Galerkin variables \(\eta_{j}\) and \(\mu_{j}\) are described by a \(2N_{g}\)-dimensional system of ordinary differential equations [41] \[\dot{\eta}_{j}-\mu_{j}j\pi=0, \tag{15a}\] \[\dot{\mu}_{j}+\eta_{j}j\pi+\zeta_{j}\mu_{j}+2\dot{q}^{\prime}\sin(j\pi x_{f})=0. \tag{15b}\] In the models of the Rijke tube in the literature [e.g., 58], a modal damping term acting on the pressure, \(\zeta p^{\prime}(x,t)\), is added to the energy equation (11b) to account for the unmodelled effects of dissipation at the boundaries, and at the viscous and thermal boundary layers at the tube walls. This term allows the higher modes to be damped out. In the projected dynamics, the damping term is \(\zeta_{j}\mu_{j}\), where \(\zeta_{j}=c_{1}j^{2}+c_{2}j^{1/2}\) [59]. The heat release rate is described by a modified King's law \[\dot{q}^{\prime}=\beta\left(\sqrt{|1+u^{\prime}(x_{f},t-\tau)|}-1\right), \tag{16}\] where \(\beta\) and \(\tau\) are the heat release strength and the flame time delay, respectively [60]. The time-delayed problem is transformed into an initial value problem via an advection function with the dummy variable \(v\), so that it can be solved with a standard time-marching scheme [41] \[\frac{\partial v}{\partial t}+\frac{1}{\tau}\frac{\partial v}{\partial X}=0,\quad 0\leq X\leq 1,\quad v(X=0,t)=u^{\prime}(x_{f},t). \tag{17}\] The PDE (17) is discretized using a Chebyshev spectral method with \(N_{c}\) points [61]. The physics-informed loss of the Rijke tube system is calculated by evaluating the left-hand side of the PDEs (11), i.e., the momentum residual, \(\mathcal{F}_{M}\), is defined as the left-hand side of (11a), and the energy residual, \(\mathcal{F}_{E}\), is defined as the left-hand side of (11b). When calculating the heat-release term (16) in the physics-informed loss, the time-delayed velocity at the flame can be predicted from the network as \(\hat{u}^{\prime}(x_{f},t_{k}-\tau)\) (for the GalNN, \(\hat{u}^{\prime}(x_{f},t_{k}-\tau)=\sum_{j=1}^{N_{g}}\hat{\eta}_{j}(t_{k}-\tau)\cos(j\pi x_{f})\)), i.e., by evaluating the network at \((x=x_{f},t=t_{k}-\tau)\). Therefore, since the model takes time as an input for both the FNN and GalNN architectures, there is no need to account for the time delay with a separate dummy variable.
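For reference, the right-hand side of the projected system (15) with the heat-release law (16) can be sketched as follows; supplying the delayed velocity from a history buffer, instead of the advection trick of Eq. (17), is a simplification we make for illustration.

```python
import numpy as np

N_g, x_f, beta, tau = 10, 0.2, 5.7, 0.2
c1, c2 = 0.1, 0.06
j = np.arange(1, N_g + 1)
zeta = c1 * j**2 + c2 * np.sqrt(j)  # modal damping, zeta_j = c1 j^2 + c2 j^(1/2)

def heat_release(u_f_delayed):
    """Modified King's law, Eq. (16)."""
    return beta * (np.sqrt(np.abs(1.0 + u_f_delayed)) - 1.0)

def rhs(eta, mu, u_f_delayed):
    """Galerkin ODEs (15a)-(15b) for the amplitudes eta_j, mu_j.
    u_f_delayed = u'(x_f, t - tau) must be supplied externally, e.g. from
    a history buffer of past solutions."""
    q = heat_release(u_f_delayed)
    deta = mu * j * np.pi
    dmu = -eta * j * np.pi - zeta * mu - 2.0 * q * np.sin(j * np.pi * x_f)
    return deta, dmu

# The velocity at the flame follows from the decomposition (12b):
# u'(x_f, t) = sum_j eta_j(t) * cos(j * pi * x_f)
```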
The evaluation of the heat-release law requires the system parameters \(x_{f}\), \(\beta\), and \(\tau\) to be known. For the FNN, the implementation details concerning the calculation of the heat release and modal damping are provided in Appendix A. For the GalNN, as the Galerkin amplitudes are available, the physical residuals of the Rijke tube can be directly computed from the ODEs (15) in these twin experiments.

### Extrapolation in time

We simulate the Rijke tube system (11) with the parameters \(N_{g}=10\), \(N_{c}=10\), \(x_{f}=0.2\), \(\beta=5.7\), \(\tau=0.2\), \(c_{1}=0.1\), \(c_{2}=0.06\). After a transient period, this system settles onto a limit-cycle with a period of \(\approx 1.927\) time units. Before training, the inputs are standardized to have a zero mean and unit variance.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Network & Abbreviation & Constraint & Separation of variables & Can extrapolate in time & Section \\ \hline Feedforward neural network & FNN & No & No & No & II.1 \\ Periodically activated feedforward neural network & P-FNN & No & No & Yes - via activation function & IV.2.1 \\ Galerkin neural network & GalNN & Yes - in the architecture & Yes & Yes - via activation function & III \\ Periodic Galerkin neural network & P-GalNN & Yes - in the architecture & Yes & Yes - via harmonics & IV.2.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the proposed neural networks. All the networks can be equipped with a soft constraint, which penalizes non-physical solutions in the loss function (see Section IV.2.2). When the soft constraint is present, the abbreviation is prefixed with PI (physics-informed). For example, a Galerkin network with the soft constraint is abbreviated as PI-GalNN.

#### v.2.1 Standard feedforward neural networks

Figures 5 (a,b) and (c,d) show the results of FNNs with ReLU and tanh activation functions when trained on this dataset. The network hyperparameters (number of layers, number of neurons, learning rate) have been tuned via a grid search, in which we choose the architectures that resulted in the smallest validation losses. The network properties are provided in Table 2. The ReLU network is initialized with the method of He _et al._ [62], and the tanh network with the method of Glorot and Bengio [63]. As illustrated in Figures 5 (a,b) and (c,d), FNNs with conventional activation functions fail at extrapolating periodic functions in time. Figure 5 (e,f) shows the extrapolation capability of an FNN equipped with the sine activation, i.e., a periodically activated FNN (P-FNN), as introduced in Section IV.2.1. (Note that when using the sine activation, the inputs are not normalized before training.) The activation function is modified to include a hyperparameter \(a\) such that it becomes \[\phi(z)=\frac{1}{a}\sin(az). \tag{18}\] This hyperparameter is used to fine-tune the frequency content of the learned functions. For this dataset, it is determined as \(a=10\) with a grid search; the effect of varying \(a\) is shown in more detail in Appendix B. Physically, since the weights are initialized as \(\sim\)_Uniform_\((-1.22,1.22)\) (Section IV.2.1), multiplying this by \(a=10\) provides a good initial guess on the order of magnitude of the angular frequency for the training. Figure 5 (g,h) shows the case for the sine-ReLU configuration, which can also extrapolate in time.
Deeper architectures are required when training with purely ReLU or tanh networks compared to sine or sine-ReLU networks. For example, in this case we use two layers of sine and sine-ReLU vs. five layers of ReLU and tanh, and even then the ultimate training and validation losses of the sine and sine-ReLU networks were one order of magnitude smaller. To conclude, using periodic activations produces more physical and expressive neural network models for acoustics than the conventional activations, which can also extrapolate in time, whilst speeding up training and hyperparameter search as a consequence of the smaller network sizes required.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline & \multicolumn{4}{c|}{FNN} & GalNN & P-GalNN \\ \cline{2-5} & ReLU & tanh & sine & sine-ReLU & & \\ \hline Hidden layers & 5 & 5 & 3 & 2 & 2 & 1 \\ Activations & ReLU & tanh & sine & sine-ReLU & sine & harmonics \\ Neurons & 96 & 96 & 32 & 64 & 16 & 40 \\ Learning rate & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ Optimizer & \multicolumn{6}{c}{Adam} \\ Batch size & \multicolumn{6}{c}{64} \\ \hline \end{tabular} \end{table} Table 2: Summary of neural network model properties trained on the Rijke tube data.

Figure 5: Extrapolation in time of the acoustic pressure with standard neural networks. Effect of activation functions: (a, b) ReLU, (c, d) tanh, (e, f) sine in all hidden layers, (g, h) sine-ReLU (sine in the first layer, ReLU for the rest of the hidden layers). Top row: predictions in both time and space. Bottom row: predictions in time at \(x=0.25\), training data, and ground truth. Typical activation functions, such as ReLU (a, b) and tanh (c, d), fail to extrapolate in time.

#### v.2.2 Physics-constrained networks and long-term extrapolation

We showed that networks equipped with periodic activations make accurate predictions on data points chosen in a time range right after the training. Here, we analyse how the learned models extrapolate in the long term. The ideal model of a periodic solution should capture the periodicity that the system exhibits for all times. Network predictions over a long time period are plotted in Figure 6 (a) and (b), for the P-FNN and the GalNN, respectively. Over a long time range, we observe a beating-like phenomenon. In the frequency spectra of the acoustic variables, the peaks are at the harmonics of the angular frequency of the limit-cycle, which we determined as \(\theta^{*}=3.2605\) by inspection of the frequency spectrum and the autocorrelation of the timeseries. After the training, this is captured by the weights in the first hidden layer, as discussed in Section IV.2.4. These weights are multiplied with the time input before the periodic activation, i.e., \(\sin(a\mathbf{W}_{1}^{(1)}x+a\mathbf{W}_{2}^{(1)}t+a\mathbf{b}^{(1)})\) for the P-FNN, and \(\sin(a\mathbf{W}^{(1)}t+a\mathbf{b}^{(1)})\) for the GalNN, where \(a\) is the hyperparameter of the sine activation (18). The optimized weights are placed close to the harmonics of the angular frequency of the limit-cycle. Figure 7 shows the (sorted) weights of the P-FNN and of a GalNN. The harmonics of the angular frequency of the limit-cycle are shown as dashed horizontal lines. Although these weights are close to the harmonics of the frequency of the limit-cycle, they are not exact integer multiples of each other. Even the weights clustered around one frequency slightly differ from each other. This can be observed in the power spectral density (PSD) of the predicted timeseries as well, shown in Figure 6 (d) and (e). These observations align with the beating behaviour.
We overcome this issue by using the GalNN tailored for this specific purpose, as described in Section IV.2.4. The long-term predictions are shown in Figure 6(c) and the power spectral density of the predicted signal in Figure 6(f). The \(\theta^{*}\) estimated by the periodic GalNN is \(3.2615\), which aligns with the true signal. The trained weights, which are activated as \((\sin(\mathbf{W}^{(1)}t),\cos(\mathbf{W}^{(1)}t))\) in the periodic GalNN, are shown in Figure 7(c). With this constrained structure, we eliminate the beating behaviour without using more training data. In addition to limit-cycles, thermoacoustic systems can also exhibit quasiperiodic and chaotic oscillations through bifurcations [e.g., 42]. With \(x_{f}=0.2\) and \(\tau=0.2\) and increasing \(\beta\), the system takes a Ruelle-Takens-Newhouse route to chaos; at \(\beta=0.5\), it bifurcates from a fixed-point solution to a limit-cycle, at \(\beta=5.8\) from a limit-cycle to a quasiperiodic attractor, and at \(\beta=6.5\) from a quasiperiodic attractor to chaos [41]. We collect quasiperiodic data from the \(\beta=6\) regime and chaotic data from the \(\beta=7\) regime. For the quasiperiodic system, we train a GalNN with 2 sine layers of 96 neurons on data from the 80-time-unit-long time series, following a hyperparameter tuning with a grid search. The predictions are shown in Figure 8(a-c) with the PSD. The Galerkin network captures the quasiperiodic behaviour and the frequency spectrum correctly. For the chaotic system, we train a GalNN with 4 sine layers of 256 neurons on data from the 200-time-unit-long time series. The predictions are shown in Figure 8(d-f) along with the PSD. To train on chaotic data, we required a longer time series and a deeper and wider network because the nonlinear dynamics is richer than in the quasiperiodic solution. The network's prediction correctly captures the dominant frequencies and the amplitude of the true signal. ### Learning from partial state measurements Full state measurements are difficult to perform in thermoacoustics; typically, only pressure measurements are taken with microphones. To emulate such a scenario, we perform the training using only pressure data, discarding the velocity data. Therefore, in the formulation of the data-driven loss (4), the measurement matrix becomes \(\mathbf{M}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\). Our aim is to learn a model that can also output velocity information without ever being exposed to measurements of it during training. Figure 9 shows the predictions of the P-FNN and P-GalNNs trained with and without physics information. When the loss is computed solely from the data, only the pressure can be learned. On the other hand, velocity predictions are made possible by enforcing the physics via the physics-informed loss term. For the purely data-driven networks, the output weights related to velocity are not updated; hence, the velocity predictions are made with the initial weight values, which are small. The physics-informed P-FNN captures the trend of the velocity state; however, there is a negative offset from the actual values. On the other hand, the physics-informed P-GalNN can accurately reconstruct the velocity. ## VI Results on higher-fidelity data ### Higher-fidelity data In order to assess how well the GalNN approach generalizes to higher-fidelity data, which is not generated by the Galerkin method, we employ a higher-fidelity model that includes the effects of the mean flow and has a kinematic flame model for the flame [33] (Figure 3).
The solution of the PDEs (7) can be obtained by a travelling wave approach in conjunction with the jump conditions at the flame. This model can also account for nonideal open boundary conditions via acoustic reflection coefficients \(R_{in}\) and \(R_{out}\) at the inlet and outlet. We simulate a system with ideal boundary conditions, i.e., \(R_{in}=R_{out}=-1\), which results in limit-cycle oscillations, using the mean flow parameters for the inlet velocity \(\tilde{\bar{u}}_{1}=4\:m/s\); for the inlet temperature \(\tilde{T}_{1}=293\:K\); for the inlet and outlet pressure \(\tilde{P}_{1}=\tilde{P}_{2}=101300\:Pa\); and for the heat release rate \(\tilde{Q}=2000\:W\). This is a low-Mach-number configuration with \(\tilde{M}_{1}=0.0117\) and \(\tilde{M}_{2}=0.0107\). In the preprocessing of the data, we non-dimensionalize the variables as \(x=\tilde{x}/\tilde{L}\); \(t=\tilde{t}\,\tilde{\bar{c}}_{ref}/\tilde{L}\); \(u^{\prime}=\tilde{u}^{\prime}/\tilde{\bar{c}}_{ref}\); \(\rho^{\prime}=\tilde{\rho}^{\prime}/\tilde{\bar{\rho}}_{ref}\); \(p^{\prime}=\tilde{p}^{\prime}/(\tilde{\bar{\rho}}_{ref}\tilde{\bar{c}}_{ref}^{2})\); \(\dot{q}^{\prime}=\tilde{\dot{q}}^{\prime}(\gamma-1)/(\tilde{\bar{\rho}}_{ref}\tilde{\bar{c}}_{ref}^{3})\), where the reference value \(\tilde{\bar{(\cdot)}}_{ref}=(\tilde{L}_{u}\tilde{\bar{(\cdot)}}_{1}+\tilde{L}_{d}\tilde{\bar{(\cdot)}}_{2})/\tilde{L}\) is chosen as the weighted average of the mean values of variable \((\cdot)\) across the flame. Figure 6: Long-term extrapolation in time of periodic acoustic pressure time series with physics-constrained neural networks. The training data is small and spans the first four time units inside the red vertical line. Predictions on the pressure time series with (a) the periodically activated feedforward neural network (P-FNN), (b) the Galerkin neural network (GalNN), and (c) the periodic Galerkin neural network (P-GalNN). Power spectral density of the true and predicted time series with (d) P-FNN, (e) GalNN, and (f) P-GalNN. Constraining prior knowledge on periodicity is key to an accurate long-term prediction. The non-dimensional flame location is at \(x_{f}=0.61\), and the non-dimensional densities in the duct segments before and after the flame are \(\bar{\rho}_{1}=1.145\) and \(\bar{\rho}_{2}=0.769\). These quantities will be used to determine the acoustic mode shapes for the Galerkin neural network (13). While creating the training and validation sets, we consider two types of validation: (i) validation of the interpolation capability, and (ii) validation of the extrapolation capability, i.e., when the data is sampled from outside the time range of training. A good validation of the interpolation loss prevents overfitting. Validation of the extrapolation loss indicates how well the model generalizes beyond the time range of training. After discarding a transient, we split the first 4 non-dimensional time units of the limit-cycle time series (approximately two periods) into training and interpolation-validation sets, assigning an 80% to 20% ratio between them. The validation data for the extrapolation is then taken as the next 4 non-dimensional time units. The physics-informed loss of the higher-fidelity system is calculated by evaluating the left-hand side of the PDEs (7) after being non-dimensionalized as described above. The momentum residual, \(\mathcal{F}_{M}\), is defined as the left-hand side of (7a). In contrast to the twin experiments with the Rijke tube, for the higher-fidelity data, we assume that we do not have access to the heat release law.
Hence, the energy residual, \(\mathcal{F}_{E}\), is defined as the left-hand side of (7b), excluding the heat release term, \(\dot{q}\). This is acceptable because the heat release term only acts at the flame location due to the compact assumption and the Dirac delta. ### Comparison of network performance for reconstruction \begin{table} \begin{tabular}{c|c|c|c|c|c|c} & ReLU FNN & sin FNN & sin-ReLU FNN & P-GalNN (i) & P-GalNN (ii) & P-GalNN (iii) \\ \# hidden layers & 6 & 3 & 4 & 1 & 1 & 1 \\ \# Galerkin modes & - & - & - & 20 & 20 & 20 \\ Activations & ReLU & sine & sine-ReLU & harmonics & harmonics & harmonics \\ \# neurons & 64 & 64 & 32 & 20 & 20 & 20 \\ Learning rate & 0.01 & 0.01 & 0.01 & 0.0004 & 0.0004 & 0.0004 \\ Optimizer & \multicolumn{6}{c}{Adam} \\ Batch size & \multicolumn{6}{c}{32} \\ \end{tabular} \end{table} Table 3: Summary of neural network model properties trained on higher-fidelity data. Feedforward neural networks (FNNs) with different activations: ReLU, sine, sine-ReLU (sine in the first layer, ReLU in the rest). Periodic Galerkin neural networks (P-GalNNs) with different choices of spatial bases: (i) natural acoustic modes (13) for \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\); (ii) natural acoustic modes for \(\bar{\rho}_{1}=1.145,\bar{\rho}_{2}=0.769\); and (iii) case (ii) with the addition of step functions in the velocity modes to handle the jump discontinuity (14). Figure 7: In reference to Fig. 6, trained weights of the first hidden layers of (a) the periodically activated feedforward neural network (P-FNN), (b) the Galerkin neural network (GalNN), and (c) the periodic Galerkin neural network (P-GalNN). These weights are multiplied with the time input before the periodic activation, i.e., \(\sin(a\mathbf{W}_{1}^{(1)}x+a\mathbf{W}_{2}^{(1)}t+a\mathbf{b}^{(1)})\) for the P-FNN and \(\sin(a\mathbf{W}^{(1)}t+a\mathbf{b}^{(1)})\) for the GalNN, where \(a\) is a hyperparameter, and \(\sin(\mathbf{W}^{(1)}t)\) for the periodic GalNN. Figure 8: Extrapolation in time for non-periodic solutions. (a-c) Quasiperiodic solution; and (d-f) chaotic solution. Training data shown within the red vertical line (80 time units for the quasiperiodic case, 200 time units for the chaotic case). We analyse the performance of the different networks that we have built in Section IV.2 on the reconstruction of the higher-fidelity data. We compare three configurations of the feedforward neural network: (i) ReLU activations in all hidden layers; (ii) sine activation in all hidden layers; (iii) sine activation in the first layer and ReLU activations in the rest of the hidden layers; and three configurations of the periodic Galerkin neural network with one hidden trainable frequency layer: (i) with \(\omega_{j}=j\pi\) as the acoustic angular frequencies as before, i.e., when \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\); (ii) with \(\omega_{j}\) determined from the dispersion relationship using the real mean-flow density values of the system, i.e., when \(\bar{\rho}_{1}=1.145,\bar{\rho}_{2}=0.769\); and (iii) the Galerkin modes chosen as in (ii), with the addition of step functions in the velocity modes to handle the jump discontinuity (14). For each configuration of activations, the hyperparameters of the architecture and training (number of layers, number of neurons, and learning rate) have been tuned with a grid search, in which we chose the set of hyperparameters that resulted in the smallest validation losses. These properties are provided in Table 3 for each network. The training and validation loss histories of all models are shown in Figure 10.
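A minimal sketch of a periodic Galerkin network consistent with this description is given below. It reflects our reading of the architecture (a temporal branch built from harmonics of one trainable fundamental frequency, configuration (i) with \(\omega_{j}=j\pi\) as the spatial basis); the sizes and names are assumptions, not the reference implementation.

```python
import math
import torch
import torch.nn as nn

class PeriodicGalNN(nn.Module):
    """Sketch: p'(x, t) = sum_j mu_j(t) sin(omega_j x), with the temporal
    branch built from harmonics of a single trainable fundamental frequency
    theta, which hard-constrains periodicity."""

    def __init__(self, n_modes=20, n_harmonics=10, theta0=3.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(float(theta0)))   # trainable theta*
        self.register_buffer("j", torch.arange(1.0, n_harmonics + 1))
        # configuration (i): natural acoustic frequencies omega_j = j * pi
        self.register_buffer("omega", math.pi * torch.arange(1.0, n_modes + 1))
        self.amps = nn.Linear(2 * n_harmonics, n_modes)           # -> mu_j(t)

    def forward(self, x, t):
        # x, t: (batch, 1); phase: (batch, n_harmonics)
        phase = self.j * self.theta * t
        feats = torch.cat([torch.sin(phase), torch.cos(phase)], dim=-1)
        mu = self.amps(feats)                      # Galerkin amplitudes mu_j(t)
        modes = torch.sin(self.omega * x)          # pressure mode shapes
        return (mu * modes).sum(-1, keepdim=True)  # p'(x, t)
```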
The ReLU network has higher training and interpolation losses; it cannot fit the data as well as the sine and sine-ReLU networks, and it cannot extrapolate. For all the other networks (sine FNN, sine-ReLU FNN, P-GalNN (i), P-GalNN (ii), P-GalNN (iii)), the validation losses are close to the training loss. The P-GalNN (iii) scores the lowest losses. The periodic Galerkin networks can reach similar or better prediction performance than the other, deeper feedforward neural networks even with one hidden layer, following a physical choice of spatial basis. Figure 11 visualizes the velocity predictions and the errors between the predictions and the true data obtained with the networks. The P-GalNN (i) exhibits an error pattern that resembles an offset from the true solution, which indicates a biased model. The prediction is also oscillatory, and the error is especially high at the boundaries. The more physically motivated choice of P-GalNN (ii) eliminates these errors; however, the jump discontinuity still cannot be captured, which leads to high-frequency oscillations in the prediction around the flame location. A similar phenomenon is also observed for the sine-ReLU network, where the error is highest at the flame location. This problem is overcome by P-GalNN (iii), which results in the smallest training and validation errors overall. This network finds the non-dimensional dominant angular frequency as \(\theta^{*}=3.3164\) and thus the period as \(T^{*}=1.8946\). From the power spectral density of the true pressure signal, we read \(\theta^{*}=3.3166\) and \(T^{*}=1.8945\). The periodic Galerkin network successfully learns the correct period. Figure 9: Reconstruction of acoustic velocity from pressure measurements only. Predictions on acoustic (a, b) pressure and (c, d) velocity from only 11 pressure observations with different physics-constrained networks. Top row: spatial modes at \(t=1.5\). Bottom row: time series at \(x=0.25\). Accurate pressure predictions (a, b) do not necessarily imply accurate velocity reconstructions (c, d). The acoustic velocity is accurately reconstructed from pressure measurements with the physics-informed periodic Galerkin neural network (PI-P-GalNN) in panels (c, d). ### Robustness to sparsity and noise In this section, we demonstrate the robustness of the acoustic neural networks to sparsity and noise in the training data. We select P-GalNN (iii) from the previous section because of its more generalizable performance in comparison to the other architectures, and we investigate the regularization methods that can be used to prevent overfitting in the case of corrupted or scarce data. We compare a physics-based regularization, i.e., the physics-informed loss, with \(\ell_{1}\)- and \(\ell_{2}\)-norm regularizations, which act on the network weights as \(||\mathbf{W}||_{1}\) and \(||\mathbf{W}||_{2}\), respectively, in the loss term. We add zero-mean Gaussian noise to the non-dimensional training data with a standard deviation equal to 10% of the standard deviation of the true solution function over the whole domain. Using the described regularization methods, we train networks with 10 Galerkin modes on this new dataset, which this time is given only on a coarse grid. The comparison of the predictions obtained when using (i) no regularization, (ii) \(\ell_{2}\)-norm, (iii) \(\ell_{1}\)-norm, and (iv) physics-based regularization is shown in Figure 12.
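As a concrete illustration, the corruption of the training data can be sketched as follows; the function name and seed are ours, not from the original code.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(y_true, level=0.10, rng=rng):
    """Zero-mean Gaussian noise with a standard deviation equal to
    `level` times the std of the true solution over the whole domain
    (10% in this section)."""
    return y_true + rng.normal(0.0, level * y_true.std(), size=y_true.shape)
```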
We train multiple networks with varying values of the regularization coefficients and show only the results for those with the smallest validation losses. Without regularization, the model can fit the training data perfectly; however, the interpolation points show spikes, which is a clear demonstration of overfitting. This type of overfitting emerges when there is a discrepancy between the number of sensors and the number of Galerkin modes dictated by the Nyquist-Shannon sampling theorem. The highest number of Galerkin modes is given as \[N_{g}^{*}=\operatorname*{arg\,max}_{j}\,\{j\mid\omega_{j}\sqrt{\bar{\rho}_{1}}\leq k_{Nyquist},\;\omega_{j}\sqrt{\bar{\rho}_{2}}\leq k_{Nyquist}\} \tag{19}\] where \(k_{Nyquist}=\frac{\pi}{\Delta x}\), with \(\Delta x\) being the spacing between sensor locations. The physics information significantly reduces overfitting and outperforms the \(\ell_{1}\)- and \(\ell_{2}\)-norm regularizations. The \(\ell_{2}\)-norm regularization does not work as well because the weights are not selectively regularized, i.e., dominant modes/frequencies are regularized in the same way as non-dominant ones. The \(\ell_{1}\)-norm regularization performs better since modes corresponding to higher wavenumbers are discarded, as the \(\ell_{1}\)-norm promotes sparsity and, in the physical basis, the energy is contained in the low-frequency modes. Figure 10: Data from the higher-fidelity model for a periodic acoustic solution. Loss-function histories. (a-b) Training, (c-d) validation of interpolation, and (e-f) validation of extrapolation. Feedforward neural networks (FNNs) with different activations: ReLU, sine, sine-ReLU (sine in the first layer, ReLU in the rest). Periodic Galerkin neural networks (P-GalNNs) with different choices of spatial bases: (i) natural acoustic modes (13) for \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\); (ii) natural acoustic modes for \(\bar{\rho}_{1}=1.145,\bar{\rho}_{2}=0.769\); and (iii) case (ii) with the addition of step functions in the velocity modes to handle the jump discontinuity (14). Figure 11: Comparison of acoustic neural networks in terms of reconstruction performance from higher-fidelity data. From left to right: (a-d) sine-ReLU feedforward neural network (FNN), i.e., sine in the first layer, ReLU in the rest; (e-h) periodic Galerkin neural network (P-GalNN) (i) with natural acoustic modes (13) for \(\bar{\rho}_{1}=\bar{\rho}_{2}=1\); (i-l) P-GalNN (ii) with natural acoustic modes for \(\bar{\rho}_{1}=1.145,\bar{\rho}_{2}=0.769\); (m-p) P-GalNN (iii) with the addition of step functions in the velocity modes to handle the jump discontinuity (14). From top to bottom: velocity prediction in time and space, error between ground truth and prediction in time and space, velocity prediction and error at a fixed time instance, \(t=1.5\). We perform a quantitative study to investigate the robustness of the prediction performance to sparsity and noise. First, we vary the number of sensors along the tube. Figure 13 shows the relative \(\ell_{2}\)-error achieved by Galerkin networks with different numbers of modes when trained on noise-free data given on spatial grids of 2, 5, 11, and 23 points. The relative \(\ell_{2}\)-error is calculated from the prediction on the fine grid of 49 points over the training range by dividing the \(\ell_{2}\)-norm of the error by the \(\ell_{2}\)-norm of the ground truth. The networks are trained with and without physics information in the loss function.
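For illustration, Eq. (19) can be evaluated directly. The function below is a small sketch, assuming the natural acoustic angular frequencies \(\omega_{j}=j\pi\) and equally spaced sensors; the default densities are those of the higher-fidelity case.

```python
import math

def max_galerkin_modes(dx, rho1=1.145, rho2=0.769, n_max=100):
    """Largest mode count N_g* satisfying Eq. (19):
    omega_j * sqrt(rho_i) <= k_Nyquist = pi / dx in both duct segments,
    with omega_j = j * pi (natural acoustic frequencies assumed)."""
    k_nyq = math.pi / dx
    valid = [j for j in range(1, n_max + 1)
             if j * math.pi * math.sqrt(rho1) <= k_nyq
             and j * math.pi * math.sqrt(rho2) <= k_nyq]
    return max(valid, default=0)

n_star = max_galerkin_modes(dx=1 / 4)  # 5 equally spaced sensors spanning [0, 1]
```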
We fix the weighting of the data-driven loss to \(\lambda_{D}=1\), and the weighting of the physics loss to \(\lambda_{M}=\lambda_{E}=0.0001\) for the physics-informed networks and to 0 otherwise. As discussed above, a high number of Galerkin modes leads to overfitting for coarse grids, when the Nyquist wavenumber is not sufficient to resolve the high wavenumbers, e.g., 5 sensors and 10 Galerkin modes. Furthermore, we observe a large prediction error even when the Nyquist condition is satisfied, e.g., 5 sensors and 5 Galerkin modes. We find that this type of error is mostly concentrated at the boundaries of the velocity: since we have not provided any boundary data, the model overfits to the rest of the data and does not generalize well over the boundaries. For the pressure, this is not an issue, because the boundary conditions are Dirichlet and already encapsulated within the provided Galerkin basis. On the other hand, a low number of Galerkin modes may not be enough to approximate the pressure and velocity shapes in fine detail, e.g., 11 sensors and 5 Galerkin modes. Figure 12: In reference to Fig. 11, effect of the regularization hyperparameter on periodic Galerkin neural networks (P-GalNNs) in the case of sparse and noisy measurements. From left to right: (a-d) no regularization, (e-h) \(\ell_{2}\)-norm regularization, (i-l) \(\ell_{1}\)-norm regularization, and (m-p) physics-based regularization. From top to bottom: acoustic velocity, error between ground truth and prediction, velocity prediction at a fixed time instance, and error at a fixed time instance, \(t=1.5\). The addition of the physics information shows a marked difference and helps overcome these limitations. Next, we choose the network with 10 modes and add increasing levels of noise to the data from 5 sensor measurements. We take 5 different realizations of noise and show the relative \(\ell_{2}\)-error of the predictions as a box plot in Figure 14 for noise levels of 5, 10, 20, and 40%. Noting that even in the noise-free case the non-physics-informed network performs poorly, we focus on physics-informed training with varying weightings of the physics loss. As the noise level increases, the optimum weighting of the physics information increases as well, since the data becomes less reliable. We obtain good predictions of pressure and velocity over the entire domain, even though the data is sparse and noisy. The network is also robust to different realizations of noise. We conclude by demonstrating how the physics-informed training performs in the absence of velocity data. The prediction results of a physics-informed periodic Galerkin neural network, PI-P-GalNN, are compared with those of a sine-ReLU physics-informed feedforward neural network, PI-P-FNN, in Figure 15. For the PI-P-GalNN, we report the following relative \(\ell_{2}\)-errors: 2.63% for pressure and 2.75% for velocity in the training range (\(t=0-4\) time units), and 2.84% for pressure and 2.80% for velocity in the extrapolation range (\(t=4-8\) time units). In comparison, for the sine-ReLU feedforward neural network, we report the following relative \(\ell_{2}\)-errors: 4.81% for pressure and 164.22% for velocity in the training range (\(t=0-4\) time units), and 9.77% for pressure and 160.57% for velocity in the extrapolation range (\(t=4-8\) time units). With the P-GalNN, we can reconstruct the velocity as well as the pressure. Figure 14: Robustness to noise. Galerkin neural networks with different regularization hyperparameters for physics-based losses with data corrupted with noise. (The regularization hyperparameter of the data-driven loss is \(\lambda_{D}=1\).)
Figure 13: Effect of the number of pressure sensors in data collection and of the number of modes in Galerkin neural networks, with and without the physics-based loss. (The regularization hyperparameter of the data-driven loss is \(\lambda_{D}=1\).) The PI-P-FNN fits the noisy data more closely than the PI-P-GalNN and can only recover the velocity with an offset, completely discarding the velocity jump. ### Nonideal boundary conditions So far, we have restricted ourselves to ideal boundary conditions. However, in real experiments, we deal with nonideal conditions, in which the acoustic waves are not fully reflected at the tube ends [e.g., 64]. As a result, in contrast to the ideal case and the acoustic modes we have obtained under that assumption, the pressure fluctuations at the tube ends are not zero, i.e., \(p^{\prime}(x=0,t)\neq 0,\;p^{\prime}(x=1,t)\neq 0\). In order to tackle nonideal boundary conditions, we add a linear term in the pressure modes (13a), \[\Pi^{(1)}_{N_{g}+1}(x) =\Pi^{(2)}_{N_{g}+1}(x)=x, \tag{20a}\] \[\Pi^{(1)}_{N_{g}+2}(x) =\Pi^{(2)}_{N_{g}+2}(x)=1, \tag{20b}\] such that the summation (12a) runs from \(1\) to \(N_{g}+2\). The modes \(\Pi_{N_{g}+1}\) and \(\Pi_{N_{g}+2}\) are weighted by the independent coefficients \(\mu_{N_{g}+1}\) and \(\mu_{N_{g}+2}\). This way, the linear term provides the offset to the predicted pressure fluctuations at the tube ends, i.e., \(\mu_{N_{g}+1}(t)x+\mu_{N_{g}+2}(t)\). Once trained, we expect to estimate \(\mu_{N_{g}+2}(t)=p^{\prime}(x=0,t)\) and \(\mu_{N_{g}+1}(t)=p^{\prime}(x=1,t)-p^{\prime}(x=0,t)\). We generate data from the higher-fidelity model using the mean flow parameters \(\tilde{\bar{u}}_{1}=8\:m/s\) for the inlet velocity and \(\tilde{Q}=10000\:W\) for the heat release rate. Figure 15: Reconstruction of acoustic velocity from noisy pressure measurements only. Predictions on acoustic (a, b) pressure and (c, d) velocity from only 5 pressure observations with the physics-based loss, using the periodically activated feedforward neural network (P-FNN) and the periodic Galerkin neural network (P-GalNN). Top row: pressure and velocity shapes along the tube at a fixed time instance, \(t=1.5\). Bottom row: pressure and velocity time series at a fixed point in the tube, \(x=0.33\). We set the reflection coefficients \(R_{in}=R_{out}=-0.985\). The non-dimensional densities in the duct segments before and after the flame are \(\bar{\rho}_{1}=1.269,\bar{\rho}_{2}=0.571\). Figure 16 compares the predictions of a standard physics-informed periodic Galerkin neural network (PI-P-GalNN) with a PI-P-GalNN with a linear mode for a training case of only 5 pressure sensors. With the addition of the linear mode, the PI-P-GalNN is capable of capturing the non-zero pressure fluctuations at the boundaries even from sparse measurements given at other locations. ## VII Conclusions In this work, we model acoustic and thermoacoustic pressure and velocity oscillations from synthetic data. The synthetic data captures the rich nonlinear behaviour of thermoacoustic oscillations observed in propulsion and power generation. We develop acoustic neural networks to tackle the tasks of (i) extrapolation in space and time, and (ii) reconstruction of the full acoustic state from partial observations by exploiting prior knowledge on the physics of acoustics. The prior knowledge is embedded in the network as both soft and hard constraints.
First, as acoustic and thermoacoustic systems are dominated by sinusoidal eigenfunctions, we promote a physically motivated choice of sinusoidal activation functions. Unlike standard feedforward neural network architectures that employ ReLU or tanh activations, periodically activated networks can extrapolate in time, and the trained weights hold frequency information. This means that spatiotemporal patterns can be learned more efficiently from less data, because the network is both more expressive and more robust. Second, we inform the training with the acoustic conservation laws via penalty terms in the loss function. This term regularizes the predictions in the case of noisy and scarce data, and it enables the reconstruction of unobserved states; typically, in thermoacoustic experiments, only pressure measurements are available. Third, inspired by the Galerkin decomposition, we design a neural network with temporal and spatial branches (Galerkin neural network), which spans a physical function space via the choice of acoustic eigenmodes as the spatial basis. In order to account for nonideal boundary conditions, which occur when the acoustic waves are not fully reflected at the boundaries, we add a linear mode to the pressure modes, which captures the non-zero pressure fluctuations at the inlet and outlet. Figure 16: Prediction in the case of nonideal boundary conditions for a training case of only 5 noise-free pressure sensors. Comparison of a standard physics-informed periodic Galerkin neural network (PI-P-GalNN) with a PI-P-GalNN for nonideal boundary conditions. Shown are (a) the pressure shape at a fixed time instance, \(t=1.5\), and (b, c) the pressure fluctuations at \(x=0\) and \(x=1\), respectively. The linear mode can capture the non-zero pressure fluctuations at the boundaries. We consider two test cases: (i) twin experiments on synthetic data from a Rijke tube with a nonlinear, time-delayed heat release law and Galerkin discretization of the PDEs, and (ii) higher-fidelity data generated by a thermoacoustic network model with a kinematic flame model that takes into account mean-flow effects. The first case shows that standard feedforward neural networks fail at extrapolation and at the accurate reconstruction of the velocity in the absence of observations, while the developed acoustic Galerkin neural networks can solve each case. On this test case, we also demonstrate the long-term extrapolation capability of Galerkin neural networks on periodic and quasiperiodic solutions, while for chaotic solutions, we recover the dominant components in the frequency spectrum. In the second test case, we show the generalizability and robustness of the Galerkin neural networks to higher-fidelity data, which can be noisy and contain only pressure measurements. This work opens up possibilities to learn the nonlinear dynamics of thermoacoustics using physics-aware data-driven methods. The developed methods can be transferred to other problems that are solved by Galerkin methods to enhance robustness and expressivity. ###### Acknowledgements. This research has received financial support from the ERC Starting Grant No. PhyCo 949388. ## Appendix A Calculation of modal damping and heat release terms in the physics-informed loss ### Modal damping The modal damping is defined as a multiplication with the pressure in the frequency domain and, hence, a convolution in the spatial domain. Using an FNN, we predict the pressure at a given time in the spatial domain.
In order to compute the contribution of the damping to the physics-informed loss, at each training step we transform the pressure predicted in the spatial domain to the frequency domain via the Fast Fourier Transform (FFT), compute the damping in the frequency domain, and then transform the result back to the spatial domain. When sampling from signals, the distinct Fourier frequencies are given as \(\omega_{k}=k\frac{2\pi}{KT_{s}},k=0,1,...,K/2\), where \(K\) is the total number of samples, \(T_{s}\) is the sampling time, and \(KT_{s}\) gives the length of the sampled signal. During simulation, we considered \(N_{g}\) Galerkin modes, so, in our case, we set \(K\geq 2N_{g}\). Notice that the spatial domain is \([0,1]\), while the wavenumbers of the Galerkin modes have a resolution of \(\pi\), which is only half a period in this domain. Consequently, the resolution of the Fourier frequencies must be set to \(\frac{2\pi}{KT_{s}}=\pi\) and hence \(KT_{s}=2\), which means that the FFT must be taken over a domain of \([0,2]\), such that the samples are at \(x_{k}=k\frac{2}{K},k=0,1,..,K-1\) and we have one period of discrete samples. One could predict the pressure over the \([0,2]\) domain, but this could lead to errors, as there is no training data in this region. Instead, we first predict over the original \([0,1]\) domain and then stack this prediction with its reflection about \(x=1\), negated, since we know that the pressure is given as a sum of sines, which are odd functions. So, we have \[\hat{p}^{\prime}=\left\{\hat{p}^{\prime}(x_{k},t)\left|x_{k}=k\frac{2}{K},\,k=0,1,..,K-1,\,K\geq 2N_{g}\right\},\right. \tag{10}\] and its Fourier transform \(\mathcal{F}(\hat{p}^{\prime})\). Now, we take only the part of \(\mathcal{F}(\hat{p}^{\prime})\) that corresponds to the positive frequencies and element-wise multiply it with the damping modes \(\zeta\), \[\xi_{j}=\mathcal{F}_{j}^{+}(\hat{p}^{\prime})\zeta_{j},\quad j=1,2,...,N_{g}. \tag{11}\] In the next step, we take the inverse Fourier transform of the convolution. The training is done in batches, so the \(x\) locations at which we are interested in calculating the effect of the damping may not necessarily coincide with the spatial grid that we have previously constructed. Therefore, we perform the inverse Fourier transform at the locations in the training batch. Ultimately, the damping term is found as \[\zeta p^{\prime}(x)=\frac{1}{2K}\sum_{j=1}^{N_{g}}\left(\xi_{j}e^{ij\pi x}+\xi_{j}^{*}e^{-ij\pi x}\right), \tag{12}\] where \(\xi^{*}\) denotes the complex conjugate of \(\xi\). We use the contribution of the damping in the residual of the energy equation (11b) when computing the physics-informed loss (10). ### Heat release Although the heat-release term acts as a Dirac delta in the spatial domain, the solver implements a Galerkin decomposition, which projects this term onto a truncated set of modes. Plugging the decomposition (12) back into the left-hand side of the energy equation (11b), the remaining heat release term is equal to \(\sum_{j=1}^{N_{g}}2\dot{q}\sin(j\pi x_{f})\sin(j\pi x)\), which is the effective term in the simulations. As the number of Galerkin modes approaches infinity, this approaches the Dirac delta. However, from a practical point of view, the number of modes in the simulation is finite; thus, we use this expression to find the contribution of the heat release in the residual of the energy equation (11b) when computing the physics-informed loss (10).
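The procedure of Eqs. (10)-(12) can be summarized in a few lines of NumPy. This is a sketch of our reading of the appendix, not the original code; `predict_p` stands in for the network's pressure prediction on an array of \(x\) locations.

```python
import numpy as np

def damping_term(predict_p, zeta, x_batch, t, n_g):
    """FFT-based damping contribution zeta * p'(x), Appendix A.1, Eqs. (10)-(12)."""
    K = 2 * n_g                                   # K >= 2 N_g samples on [0, 2)
    xk = np.arange(K) * 2.0 / K
    p = predict_p(xk[: K // 2 + 1], t)            # predict only on [0, 1]
    p_full = np.concatenate([p, -p[1:-1][::-1]])  # odd extension about x = 1
    F = np.fft.fft(p_full)                        # Eq. (10), then FFT
    xi = F[1 : n_g + 1] * zeta                    # Eq. (11): positive freqs * zeta_j
    j = np.arange(1, n_g + 1)
    # Eq. (12), evaluated at the (off-grid) batch locations:
    # (1/2K) sum_j (xi_j e^{i j pi x} + c.c.) = (1/K) Re sum_j xi_j e^{i j pi x}
    basis = np.exp(1j * np.pi * np.outer(x_batch, j))
    return (basis @ xi).real / K
```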
## Appendix B Effect of hyperparameter in the sine activation We observe that varying the hyperparameter \(a\) in the sine activation, formulated as \(\frac{1}{a}\sin(az)\), affects the frequency content of the learned functions. Hence, it is a hyperparameter that requires tuning. Figure 17 illustrates this effect for the Rijke tube data discussed in Section V.2 for \(a=1,10,20\). A low value, \(a=1\), results in a low-frequency model, whereas a high value, \(a=20\), results in a high-frequency model, which can especially be observed in the pressure and velocity shapes in the spatial domain. We found the optimum value for this dataset to be \(a=10\). Since the sampling frequency of the training data is high enough, we do not observe the effect of a high \(a\) in the time domain. Ziyin _et al._ [54] reported similar findings in their studies when using the \(1+\sin^{2}(z)\) activation. ## Appendix C Travelling wave solution to the higher-fidelity model In this model, the pressure and velocity are expressed as functions of two acoustic travelling waves, which propagate up- and downstream of the tube. These waves are derived by applying the method of characteristics to the acoustic wave equation and are defined as \(f\) and \(g\), with propagation velocities \(\tilde{c}_{1}\pm\tilde{u}_{1}\) in the upstream region \(\tilde{x}\leq\tilde{x}_{f}\), and \(h\) and \(j\), with propagation velocities \(\tilde{c}_{2}\pm\tilde{u}_{2}\) in the downstream region \(\tilde{x}\geq\tilde{x}_{f}\). This model can also account for nonideal open boundary conditions via the reflection coefficients \(R_{in}\) and \(R_{out}\) at the inlet and outlet, \(f(\tilde{t})=R_{in}g(\tilde{t}-\tilde{\tau}_{u})\) and \(j(\tilde{t})=R_{out}h(\tilde{t}-\tilde{\tau}_{d})\), where \(\tilde{\tau}_{u}\) and \(\tilde{\tau}_{d}\) are the travelling times of the waves from the flame to the up- and downstream boundaries, respectively. If the boundary conditions are ideal, i.e., fully reflective, then \(R_{in}=R_{out}=-1\). The full set of equations that describe the dynamics of the waves is given by \[\mathbf{X}\begin{bmatrix}g(\tilde{t})\\ h(\tilde{t})\end{bmatrix}=\mathbf{Y}\begin{bmatrix}g(\tilde{t}-\tilde{\tau}_{u})\\ h(\tilde{t}-\tilde{\tau}_{d})\end{bmatrix}+\begin{bmatrix}0\\ \frac{\tilde{\dot{q}}-\tilde{\bar{\dot{q}}}}{\tilde{A}_{1}\tilde{c}_{1}}\end{bmatrix}, \tag{10}\] where the matrices \(\mathbf{X}\) and \(\mathbf{Y}\) are functions of the mean-flow variables and are obtained from the jump conditions. We generate the data with the code implementation of the kinematic flame model from [53] (code available at [65]).
2302.00193
$\rm A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks
As graph data size increases, the vast latency and memory consumption during inference pose a significant challenge to the real-world deployment of Graph Neural Networks (GNNs). While quantization is a powerful approach to reducing GNN complexity, most previous works on GNN quantization fail to exploit the unique characteristics of GNNs, suffering from severe accuracy degradation. Through an in-depth analysis of the topology of GNNs, we observe that the topology of the graph leads to significant differences between nodes, and most of the nodes in a graph appear to have a small aggregation value. Motivated by this, in this paper, we propose the Aggregation-Aware mixed-precision Quantization ($\rm A^2Q$) for GNNs, where an appropriate bitwidth is automatically learned and assigned to each node in the graph. To mitigate the vanishing gradient problem caused by sparse connections between nodes, we propose a Local Gradient method that serves the quantization error of the node features as the supervision during training. We also develop a Nearest Neighbor Strategy to deal with the generalization to unseen graphs. Extensive experiments on eight public node-level and graph-level datasets demonstrate the generality and robustness of our proposed method. Compared to the FP32 models, our method can achieve up to an 18.6x (i.e., 1.70bit) compression ratio with negligible accuracy degradation. Moreover, compared to the state-of-the-art quantization method, our method can achieve up to 11.4\% and 9.5\% accuracy improvements on the node-level and graph-level tasks, respectively, and up to a 2x speedup on a dedicated hardware accelerator.
Zeyu Zhu, Fanrong Li, Zitao Mo, Qinghao Hu, Gang Li, Zejian Liu, Xiaoyao Liang, Jian Cheng
2023-02-01T02:54:35Z
http://arxiv.org/abs/2302.00193v1
# A\({}^{2}\)Q: Aggregation-Aware Quantization for Graph Neural Networks ###### Abstract As graph data size increases, the vast latency and memory consumption during inference pose a significant challenge to the real-world deployment of Graph Neural Networks (GNNs). While quantization is a powerful approach to reducing GNN complexity, most previous works on GNN quantization fail to exploit the unique characteristics of GNNs, suffering from severe accuracy degradation. Through an in-depth analysis of the topology of GNNs, we observe that the topology of the graph leads to significant differences between nodes, and most of the nodes in a graph appear to have a small aggregation value. Motivated by this, in this paper, we propose the Aggregation-Aware mixed-precision Quantization (A\({}^{2}\)Q) for GNNs, where an appropriate bitwidth is automatically learned and assigned to each node in the graph. To mitigate the vanishing gradient problem caused by sparse connections between nodes, we propose a Local Gradient method that serves the quantization error of the node features as the supervision during training. We also develop a Nearest Neighbor Strategy to deal with the generalization to unseen graphs. Extensive experiments on eight public node-level and graph-level datasets demonstrate the generality and robustness of our proposed method. Compared to the FP32 models, our method can achieve up to an 18.6x (i.e., 1.70bit) compression ratio with negligible accuracy degradation. Moreover, compared to the state-of-the-art quantization method, our method can achieve up to 11.4% and 9.5% accuracy improvements on the node-level and graph-level tasks, respectively, and up to a 2x speedup on a dedicated hardware accelerator. ## 1 Introduction Recently, Graph Neural Networks (GNNs) have attracted much attention due to their superior learning and representation ability for non-Euclidean geometric data. A number of GNNs have been widely used in real-world applications, such as recommendation systems (Jin et al., 2020) and social network analysis (Lerer et al., 2019). Many of these tasks put forward high requirements for low-latency inference. However, real-world graphs are often extremely large and irregular, such as Reddit with 232,965 nodes, which needs 19G floating-point operations (FLOPs) to be processed by a 2-layer Graph Convolutional Network (GCN) with only 81KB of parameters (Tailor et al., 2020), while ResNet-50, a 50-layer DNN, only takes 8G FLOPs to process an image (Canziani et al., 2016). What is worse, GNN inference requires a huge amount of memory access, e.g., the node features of Reddit take up to 534MB, leading to high latency. Therefore, the aforementioned problems pose a challenge to realizing efficient inference of GNNs. Neural network quantization can reduce the model size and accelerate inference without modifying the model architecture, which has made it a promising method to solve this problem in recent years. Unfortunately, there remain some issues in the existing works on GNN quantization. Feng et al. (2020) only quantizes the node features and keeps floating-point calculations during inference. Tailor et al. (2020) proposes a degree-quant training strategy to quantize GNNs to low-bit fixed point but causes a large accuracy drop, e.g., an 11.1% accuracy drop when quantizing to 4bits. Moreover, some works (Wang et al., 2021b; Bahri et al., 2021; Wang et al., 2021a; Jing et al., 2021) quantize GNNs to 1-bit and compute with XNOR and bit-count operations.
However, these 1-bit quantization methods are either restricted to node-level tasks or cannot generalize well to other GNNs. Most of the above methods do not make full use of the properties of GNNs and graph data, resulting in severe accuracy degradation or poor generalization. As presented in the MPNN framework (Gilmer et al., 2017), GNN processing is divided into two phases: first, in the aggregation phase, a node collects information from neighboring nodes and uses the aggregation function to generate hidden features; second, in the update phase, the hidden features are transformed into new features by an update function. We analyze the node features after aggregation in Figure 1 and find that the higher the in-degree is, the larger the node features tend to be after aggregation. The features vary significantly between nodes with different in-degrees, which represent the topology of a graph. Moreover, according to Xie et al. (2014) and Aiello et al. (2001), the degrees of nodes in most real-world graph data often follow a power-law distribution, i.e., nodes with a low degree account for the majority of the graph. Therefore, quantizing the node features according to the topology of the graph is beneficial for reducing the quantization error while achieving a higher compression ratio. In this paper, we propose the **Aggregation-Aware Quantization** (\(\rm A^{2}Q\)) method, which quantizes the features of different nodes with different learnable quantization parameters, including the bitwidth and step size. These parameters can be adaptively learned during training and are constrained by a penalty on the memory size to improve the compression ratio. However, when quantizing the model in semi-supervised tasks, the gradients for most quantization parameters are zero due to the sparse connections between nodes, which makes the training non-trivial. We propose the **Local Gradient** method to solve this problem by introducing the quantization error as supervised information. Finally, to generalize our method to unseen graphs in which the number of nodes varies, we develop the **Nearest Neighbor Strategy**, which assigns the learned quantization parameters to the unseen graph nodes. To the best of our knowledge, we are the first to introduce mixed-precision quantization to GNNs. Compared with previous works, our proposed method can significantly compress GNNs with a negligible accuracy drop. In summary, the key contributions of this paper are as follows: Figure 1: The analysis of the average aggregated node features in different in-degree node groups on various tasks. (a) The values at the final layer for GNNs trained on Cora. (b) The values at layers 2-5 of GIN trained on REDDIT-BINARY. The average values are all generated from 10 runs. 1. We propose the Aggregation-Aware mixed-precision Quantization (\(\rm A^{2}Q\)) method to enable adaptive learning of the quantization parameters. Our learning method is powerful because it fully utilizes the characteristics of GNNs, and the learned bitwidth is strongly related to the topology of the graph.
Experiments demonstrate that we can achieve a compression ratio of up to 18.6x with negligible accuracy degradation compared to the full-precision (FP32) models. Moreover, the model trained with our \(\mathrm{A^{2}Q}\) method outperforms the state-of-the-art (SOTA) method by up to 11.4% with a speedup of up to 2.00x in semi-supervised tasks, and obtains up to 9.5% gains with a 1.16x speedup in graph-level tasks. We provide our code at https://github.com/weihai-98/A2Q. ## 2 Related Work _Graph Neural Networks:_ The concept of the graph neural network was first proposed in Scarselli et al. (2008), which attempted to generalize neural networks to model non-Euclidean data. In the following years, various GNN models were proposed. For example, the Graph Convolution Network (GCN) (Kipf and Welling, 2016) uses a layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs, the Graph Isomorphism Network (GIN) (Xu et al., 2018) designed a provably maximally powerful GNN under the MPNN framework, and the Graph Attention Network (GAT) (Velickovic et al., 2017) introduced the attention mechanism to graph processing. Although GNNs have encouraging performance in a wide range of domains (Jin et al., 2020; Yang, 2019), the huge amount of floating-point operations and memory accesses during processing poses a challenge to efficient inference, which hinders the application of GNNs. _Quantized GNNs:_ As a promising method to reduce model size and accelerate the inference process, quantization has also been applied to GNNs. Some works quantize the features and weights in GNNs to low bitwidths (Feng et al., 2020; Tailor et al., 2020) or even to 1-bit (Wang et al., 2021b; Bahri et al., 2021; Wang et al., 2021a; Jing et al., 2021), i.e., they use fixed-point numbers instead of floating-point numbers for computation. But when the compression ratio is high (e.g., \(<\)4bit), the performance degradation of these works is significant, and the generalization of the 1-bit methods is limited. There are also some works on vector quantization (VQ), which use vectors from a codebook obtained during the training process instead of the original features (Ding et al., 2021; Huang et al., 2022). However, searching for vectors in the codebook is computationally complex. _Mixed-Precision Quantization:_ Based on the idea that different layers have different sensitivities to quantization, mixed-precision quantization was proposed in CNNs to quantize different layers to different bitwidths for better model compression. Early works (Wang et al., 2019; Lou et al., 2019) proposed reinforcement learning (RL)-based methods to search for the bitwidths of different layers, but they often require large computational resources, which limits the exploration of the search space. Another important class of mixed-precision methods is the criteria-based methods, which use specific criteria to represent the quantization sensitivity; e.g., Dong et al. (2019), Dong et al. (2020), and Chen et al. (2021) quantize different layers with different bitwidths based on the trace of the Hessian. Recently, there have also been methods that learn the bitwidth during training (Uhlich et al., 2019; Esser et al., 2019; Jain et al., 2020). However, due to the huge difference between GNNs and CNNs, it is difficult to use these methods on GNNs directly, and our \(\mathrm{A^{2}Q}\) is the first method to introduce mixed-precision quantization to GNNs, further improving the inference efficiency of GNNs.
## 3 Method In this section, we describe our proposed Aggregation-Aware Quantization in detail. Firstly, we present the formulation of the mixed-precision quantization for GNNs, which fully utilizes the properties of GNNs and graph data. Secondly, we introduce the Local Gradient method to address the gradient vanishing problem during training. Finally, we detail the Nearest Neighbor Strategy, which is used for generalizing our approach to unseen graphs. ### Aggregation-Aware Quantization We assume a graph with \(N\) nodes whose features are \(F\)-dimensional, i.e., the feature map is \(\mathbf{X}\in\mathbb{R}^{N\times F}\) and \(\mathbf{x}_{i}\) is the feature vector of node \(i\). We use the learnable step size \(\alpha_{i}\in\mathbb{R}_{+}\) and bitwidth \(b_{i}\in\mathbb{R}_{+}\) to quantize the features of the \(i\)-th node as: \[\bar{\mathbf{x}}_{i}=sign(\mathbf{x}_{i})\left\{\begin{aligned} \lfloor\frac{|\mathbf{x}_{i}|}{\alpha_{i}}+0.5\rfloor,&|\mathbf{x}_{i}|<\alpha_{i}(2^{[b_{i}]-1}-1)\\ 2^{[b_{i}]-1}-1,&|\mathbf{x}_{i}|\geq\alpha_{i}(2^{[b_{i}]-1}-1)\end{aligned}\right., \tag{1}\] where \(\lfloor\cdot\rfloor\) is the floor function and \([\cdot]\) is the round function, which ensures that the bitwidth used for quantization is an integer. The learnable parameters are \(\mathbf{s}_{X}=(\alpha_{1},\alpha_{2},...,\alpha_{N})\) and \(\mathbf{b}_{X}=(b_{1},b_{2},...,b_{N})\). We then obtain the fixed-point feature map \(\bar{\mathbf{X}}\), and the original features can be represented as \(\mathbf{X}_{q}=\mathbf{S}_{X}\cdot\bar{\mathbf{X}}\), where \(\mathbf{S}_{X}=diag(\alpha_{1},\alpha_{2},...,\alpha_{N})\). Note that we use \([b]+1\) as the quantization bitwidth for the features after ReLU because the values are all non-negative. In the update phase, the node features are often transformed with a linear mapping or an MLP, in which the matrix multiplication \(\mathbf{X}\mathbf{W}\) is the main computation, and the transformed node features are the input to the next layer of the GNN. In order to accelerate the update phase, we also quantize \(\mathbf{W}\). Due to the fact that \(\mathbf{W}\) in a certain layer is shared by all nodes, we quantize \(\mathbf{W}\) to the same bitwidth of 4bits for all GNNs in this paper. However, each column of \(\mathbf{W}\) has its own learnable quantization step size, i.e., \(\mathbf{s}_{W}=(\beta_{1},\beta_{2},..,\beta_{F_{2}})\), where \(F_{2}\) is the output dimension of the node features in the current layer and \(\beta_{i}\) is the quantization step size for the \(i\)-th column of \(\mathbf{W}\); we also use Eq. 1 to quantize \(\mathbf{W}\). We can obtain the integer representation \(\bar{\mathbf{W}}\) and the quantized representation \(\mathbf{W}_{q}=\bar{\mathbf{W}}\cdot\mathbf{S}_{W}\), where \(\mathbf{S}_{W}=diag(\beta_{1},\beta_{2},...,\beta_{F_{2}})\). The floating-point matrix multiplication in the update phase can then be reformulated as follows: \[\mathbf{X}\cdot\mathbf{W}\approx\mathbf{X}_{q}\cdot\mathbf{W}_{q}=(\mathbf{S}_{X}\cdot\bar{\mathbf{X}})\cdot(\bar{\mathbf{W}}\cdot\mathbf{S}_{W})=(\bar{\mathbf{X}}\cdot\bar{\mathbf{W}})\odot(\mathbf{s}_{X}\otimes\mathbf{s}_{W})\;, \tag{2}\] where \(\odot\) denotes an element-wise multiplication and \(\otimes\) denotes the outer product. After training, we obtain \(\mathbf{s}_{X}\) and \(\mathbf{s}_{W}\), so the outer product can be pre-computed before inference. An example is illustrated in Figure 2.
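A minimal PyTorch sketch of this per-node fake quantizer is given below. It uses a straight-through estimator for the non-differentiable rounding (introduced formally in the next subsection), and `torch.round` stands in for the round-half-up \(\lfloor\cdot+0.5\rfloor\) of Eq. 1; the function names are ours, not the authors'.

```python
import torch

def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round in the forward pass, pass gradients straight through."""
    return x + (torch.round(x) - x).detach()

def quantize_features(X, alpha, b):
    """Fake-quantize node features per Eq. (1).

    X     : (N, F) node features
    alpha : (N, 1) learnable per-node step sizes
    b     : (N, 1) learnable per-node bitwidths (real-valued, rounded with STE)
    Returns X_q = S_X . X_bar, the dequantized features used downstream.
    """
    qmax = 2.0 ** (round_ste(b) - 1) - 1                       # 2^{[b]-1} - 1
    level = torch.minimum(round_ste(X.abs() / alpha), qmax)    # round & saturate
    return torch.sign(X) * alpha * level
```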
For the aggregation phase, i.e., \(\mathbf{AX}\), where \(\mathbf{A}\) is the adjacency matrix and \(\mathbf{A}\in\{0,1\}^{N\times N}\), we quantize \(\mathbf{X}\) in the same way as \(\mathbf{W}\), because the node features involved in the aggregation come from the update phase, in which the features lose the topology information of the graph. The aggregation phase can then be performed with integer operations to reduce the computational overhead. The quantization parameters \((s,b)\) are trained by the backpropagation algorithm. Since the floor and round functions used in the quantization process are not differentiable, we use the straight-through estimator (Bengio et al., 2013) to approximate the gradient through these functions, and the gradients of the quantization parameters can be calculated by: \[\frac{\partial L}{\partial s}=\sum_{i=1}^{d}\frac{\partial L}{\partial x_{q}^{i}}\cdot\frac{\partial x_{q}^{i}}{\partial s}\;, \tag{3}\] \[\frac{\partial L}{\partial b}=\sum_{i=1}^{d}\frac{\partial L}{\partial x_{q}^{i}}\cdot\frac{\partial x_{q}^{i}}{\partial b}\;, \tag{4}\] where \(d\) is the dimension of the vector \(\mathbf{x}\), \((s,b)\) are the quantization parameters for \(\mathbf{x}\), and \(x_{q}^{i}\) is the value of the \(i\)-th dimension of \(\mathbf{x}_{q}\). Detailed information about the quantization process and the backpropagation is given in Appendix A.1 and A.3, **Proof 2 and 3**. In order to improve the compression ratio of the node features, we introduce a penalty term on the memory size: \[L_{memory}=(\frac{1}{\eta}\cdot\sum_{l=1}^{L}\sum_{i=1}^{N}\mathrm{dim}^{l}\cdot b_{i}^{l}-M_{target})^{2}\;, \tag{5}\] where \(L\) is the number of layers in the GNN, \(N\) is the total number of nodes, \(\mathrm{dim}^{l}\) is the length of the node features in the \(l\)-th layer, \(b_{i}^{l}\) is the quantization bitwidth for node \(i\) in the \(l\)-th layer, \(M_{target}\) is the target memory size for the total node-feature memory, and \(\eta=8\times 1024\), a constant that converts the unit of memory size to \(\mathrm{KB}\). The model and quantization parameters can then be trained with the loss function: \[L_{total}=L_{task}+\lambda\cdot L_{memory}\;, \tag{6}\] where \(L_{task}\) is the task-related loss function and \(\lambda\) is a penalty factor on \(L_{memory}\). ### Local Gradient Although the above end-to-end learning method is concise and straightforward, the gradients for the quantization parameters of the node features, i.e., \(\frac{\partial L_{task}}{\partial s}\) and \(\frac{\partial L_{task}}{\partial b}\), are almost all zero during the training process of semi-supervised tasks, which poses a significant challenge to training the quantization parameters for the node features. We analyze the properties of GNNs and graph data and find two reasons for this phenomenon: 1. the extreme sparsity of the connections between nodes in graph data; 2. only a tiny fraction of nodes with labels are used for training in semi-supervised tasks (e.g., \(0.30\%\) in the PubMed dataset). Therefore, \(\frac{\partial L_{task}}{\partial x_{q}}\) is zero for most node features (detailed proof in Appendix A.3.2), which causes the gradients for the quantization parameters of these nodes to vanish according to Eq. 3 and Eq. 4. To clarify, we visualize \(\frac{\partial L_{task}}{\partial x_{q}}\) in the second layer of a GCN trained on Cora. As shown in Figure 3, most gradients for the node features are zero. The gradients of \(L_{task}\) w.r.t. the quantized node features can be viewed as the supervised information from the labeled nodes, which enables the training of the quantization parameters for the node features.
However, this supervised information is missing due to the zero gradients. Considering that the quantization error is related to \(L_{task}\), we introduce the quantization error \(E=\frac{1}{d}\left|\mathbf{x}_{q}-\mathbf{x}\right|_{1}\) as the supervised information for the quantization parameters of the node features, where \(\mathbf{x}\) is the feature vector before quantization, \(\mathbf{x}_{q}\) is the feature vector after quantization, and \(|\cdot|_{1}\) denotes the L1 norm. We refer to this method as the **Local Gradient** method because the gradients are computed from the local quantization errors instead of the back-propagated task-related gradients. The quantization parameters for the node features can then be trained with gradients from \(E\): \[\frac{\partial E}{\partial s}=\frac{1}{d}\sum_{i=1}^{d}sign(x_{q}^{i}-x^{i})\cdot\frac{\partial x_{q}^{i}}{\partial s}\;, \tag{7}\] \[\frac{\partial E}{\partial b}=\frac{1}{d}\sum_{i=1}^{d}sign(x_{q}^{i}-x^{i})\cdot\frac{\partial x_{q}^{i}}{\partial b}\;. \tag{8}\] Note that the quantization parameters of \(\mathbf{W}\) are still trained by utilizing the gradients in Eq. 3. ### Nearest Neighbor Strategy In graph-level tasks, the quantized GNNs are required to generalize to unseen graphs. In such a scenario, the number of input nodes may vary during training or inference. However, the learnable method can only train a fixed number of \((s,b)\) pairs, equal to the number of input nodes, so it is challenging to learn \(s\) and \(b\) for every node in graph-level tasks. To solve this problem, we propose the **Nearest Neighbor Strategy**, which allows learning a fixed number of quantization parameters and selects quantization parameters for the nodes of unseen graphs. The proposed strategy is shown in Algorithm 1. To ensure that the numerical range of \(\mathbf{x}_{q}\) is as close as possible to that of the FP32 \(\mathbf{x}\), a simple way is to keep the maximum quantization value equal to the maximum absolute value of \(\mathbf{x}\). Based on this idea, we first initialize \(m\) groups of quantization parameters; then we calculate the maximum quantization value for every group, i.e., \(q_{max}=s(2^{[b]-1}-1)\). When quantizing the features of node \(i\), the feature with the largest absolute value \(f_{i}\) in the node features \(\mathbf{x}_{i}\) is first selected, and then we find the nearest \(q_{max}\) and quantize the node features with the \((s,b)\) corresponding to this \(q_{max}\). When performing backpropagation, we first calculate the gradients of the loss function w.r.t. the quantization parameters according to Eq. 3 and Eq. 4. For a specific set of quantization parameters \((s_{j},b_{j})\), we collect the gradients from the nodes that have used them and add these gradients together. After the model has been trained, we obtain the quantization parameters \((\mathbf{s},\mathbf{b})\). Since \(q_{max}\) can be calculated and sorted in advance, searching for the nearest \(q_{max}\) can be implemented by binary search. Usually, we set \(m=1000\) for all graph-level tasks in this paper, and the overhead introduced at inference time is negligible. ## 4 Experiments ### Experimental Settings In this section, we evaluate our method on three typical GNN models, i.e., GCN, GIN, and GAT.
We compare our method with the FP32 GNN models and DQ-INT4 (Tailor et al., 2020) on eight datasets, including four semi-supervised node-level tasks (Cora, CiteSeer, PubMed, ogbn-arxiv) (Hu et al., 2020; Yang et al., 2016) and four graph-level tasks (REDDIT-BINARY, MNIST, CIFAR10, ZINC) (Yanardag & Vishwanathan, 2015; Dwivedi et al., 2020), to demonstrate the generality and robustness of our method. Among these datasets, ZINC is a dataset for regression tasks, which uses the regression loss as the metric of model performance, while the others are all for classification tasks. For a fair comparison, we set the quantization bitwidth of \(\mathbf{W}\) for all GNNs to 4bits, as in DQ-INT4. We count the average bitwidth of the node features over all layers of the model and list it in our results, denoted by "Average bits". Since today's CPUs and GPUs cannot support mixed-precision operations well, we implement a precision-scalable hardware accelerator to perform the overall inference process of the GNNs. The accelerator employs massive bit-serial multipliers (Judd et al., 2016); therefore, the latency of the integer multiplications is determined by the bitwidth of the node features. To evaluate the performance gains of our method over DQ-INT4, we develop a cycle-accurate simulator for our accelerator. More details about the accelerator architecture are given in Appendix A.7.5. Moreover, we show the compression ratio of the quantized GNNs compared to the FP32 models in terms of overall memory size. For simplicity, we use GNN(DQ) to denote the GNNs quantized by DQ-INT4 and GNN-dataset to denote the task on which we run the experiment; e.g., GCN-Cora denotes the GCN model trained on Cora. Detailed information about the datasets and settings is given in Appendix A.5 and Appendix A.6. \begin{table} \begin{tabular}{c l l l l l} \hline \hline Dataset & Model & Accuracy & Average bits & Compression Ratio & Speedup \\ \hline \multirow{4}{*}{**Cora**} & GCN(FP32) & 81.5\(\pm\)0.7\% & 32 & 1x & — \\ & GCN(DQ) & 78.3\(\pm\)1.7\% & 4 & 8x & 1x \\ & GCN(ours) & **80.9\(\pm\)0.6\%** & **1.70** & **18.6x** & **2.00x** \\ \cline{2-6} & GAT(FP32) & 83.1\(\pm\)0.4\% & 32 & 1x & — \\ & GAT(DQ) & 71.2\(\pm\)2.9\% & 4 & 8x & 1x \\ & GAT(ours) & **82.6\(\pm\)0.6\%** & **2.03** & **15.4x** & **1.49x** \\ \hline \multirow{4}{*}{**CiteSeer**} & GCN(FP32) & 71.1\(\pm\)0.7\% & 32 & 1x & — \\ & GCN(DQ) & 66.9\(\pm\)2.4\% & 4 & 8x & 1x \\ & GCN(ours) & **70.6\(\pm\)1.1\%** & **1.87** & **17.0x** & **1.91x** \\ \cline{2-6} & GIN(FP32) & 66.1\(\pm\)0.9\% & 32 & 1x & — \\ & GIN(DQ) & 60.8\(\pm\)2.1\% & 4 & 8x & 1x \\ & GIN(ours) & **65.1\(\pm\)1.7\%** & **2.54** & **12.6x** & **1.37x** \\ \hline \multirow{3}{*}{**PubMed**} & GAT(FP32) & 79.0\(\pm\)0.3\% & 32 & 1x & — \\ & GAT(DQ) & 70.6\(\pm\)12.5\% & 4 & 8x & 1x \\ & GAT(ours) & **78.8\(\pm\)0.4\%** & **2.12** & **15.1x** & **1.38x** \\ \hline \multirow{3}{*}{**ogbn-arxiv**} & GCN(FP32) & 71.7\(\pm\)0.3\% & 32 & 1x & — \\ & GCN(DQ) & 65.4\(\pm\)3.9\% & 4 & 8x & 1x \\ \cline{1-1} & GCN(ours) & **71.1\(\pm\)0.3\%** & **2.65** & **12.1x** & **1.28x** \\ \hline \hline \end{tabular} \end{table} Table 1: The results comparison on node-level tasks. The average bits are counted for each task when the best results are achieved. ### Node-Level Tasks Table 1 shows the experimental results for three GNN architectures trained on four node-level datasets.
Compared with DQ-INT4, our method achieves significantly better accuracy on each task, even with a higher compression ratio, improving the inference performance with 1.28x to 2.00x speedups. On almost all node-level tasks, our proposed \(\mathrm{A^{2}Q}\) has a negligible accuracy drop compared to the FP32 baselines while achieving a 12.1x-18.6x compression ratio. Since both GIN and GAT involve more complex computations, such as the calculation of attention coefficients in GAT, these models are more challenging to quantize, and DQ performs poorly on them. Our method overcomes this problem and maintains accuracy comparable to the FP32 models. It outperforms DQ-INT4 by 11.4% on the GAT-Cora task with a smaller bitwidth (2.03 vs. 4). Even on ogbn-arxiv, which has a large number of nodes, \(\mathrm{A^{2}Q}\) achieves a 12.1x compression ratio with accuracy comparable to the FP32 baseline, which demonstrates the robustness of our method. Moreover, to demonstrate the generality of our method, we also evaluate it on heterogeneous graphs and inductive learning tasks and compare with more related works in Appendix A.7.1.

### Graph-Level Tasks

Table 2 presents the comparison results on the graph-level tasks.

\begin{table} \begin{tabular}{c l l l l l} \hline \hline Dataset & Model & Accuracy (Loss\(\downarrow\)) & Average bits & Compression ratio & Speedup \\ \hline \multirow{4}{*}{**MNIST**} & GCN(FP32) & 90.1\(\pm\)0.2\% & 32 & 1x & — \\ & GCN(DQ) & 84.4\(\pm\)1.3\% & 4 & 8x & 1x \\ & GCN(ours) & **89.9\(\pm\)0.8\%** & **3.50** & **9.12x** & **1.17x** \\ \cline{2-6} & GIN(FP32) & 96.4\(\pm\)0.4\% & 32 & 1x & — \\ & GIN(DQ) & 95.5\(\pm\)0.4\% & 4 & 8x & 1x \\ & GIN(ours) & **95.7\(\pm\)0.2\%** & **3.75** & **8.52x** & **1.07x** \\ \hline \multirow{4}{*}{**CIFAR10**} & GCN(FP32) & 55.9\(\pm\)0.4\% & 32 & 1x & — \\ & GCN(DQ) & 51.1\(\pm\)10.7\% & 4 & 8x & 1x \\ & GCN(ours) & **52.5\(\pm\)0.8\%** & **3.32** & **9.62x** & **1.25x** \\ \cline{2-6} & GAT(FP32) & 65.4\(\pm\)0.4\% & 32 & 1x & — \\ & GAT(DQ) & 56.5\(\pm\)0.6\% & 4 & 8x & 1x \\ & GAT(ours) & **64.7\(\pm\)2.8\%** & **3.73** & **8.57x** & **1.12x** \\ \hline \multirow{2}{*}{**ZINC**} & GCN(FP32) & 0.450\(\pm\)0.008 & 32 & 1x & — \\ & GCN(DQ) & 0.536\(\pm\)0.011 & 4 & 8x & 1x \\ & GCN(ours) & **0.49\(\pm\)0.05** & **3.68** & **8.68x** & **1.08x** \\ \hline \multirow{2}{*}{**REDDIT-BINARY**} & GIN(FP32) & 92.2\(\pm\)2.3\% & 32 & 1x & — \\ & GIN(DQ) & 81.3\(\pm\)4.4\% & 4 & 8x & 1x \\ & GIN(ours) & **90.8\(\pm\)1.8\%** & **3.50** & **9.14x** & **1.16x** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of results on graph-level tasks.

Our method obtains better results than DQ-INT4 on all tasks, with higher compression and a considerable speedup. Especially on the GIN-REDDIT-BINARY task, our method outperforms DQ-INT4 by 9.5% while achieving a 1.16x speedup. Even for graph datasets with similar in-degrees, such as MNIST and CIFAR10, our method learns appropriate bitwidths for a higher compression ratio and better accuracy. Although the improvement of our method on the GIN-MNIST task is relatively small due to the similarity of the in-degrees between different nodes, our method achieves comparable accuracy with a smaller bitwidth (3.75 vs. 4).

### Analysis

To understand why our approach works, we analyze the relationship between the learned bitwidths and the topology of the graph.
Figures 4(a) and 4(b) reveal that the bitwidths learned by \(\mathrm{A}^{2}\mathrm{Q}\) are strongly related to the topology of the graph data in node-level tasks. As the bitwidth increases, the average in-degree of the corresponding nodes becomes larger. In other words, the \(\mathrm{A}^{2}\mathrm{Q}\) method tends to learn higher bitwidths for nodes with higher in-degrees. In GAT, however, as shown in Figure 4(c), the learned bits are irregular. This is because the features aggregated in GAT are topology-free. Nevertheless, our method can still learn appropriate quantization bitwidths for different nodes, which improves accuracy while reducing memory usage. In addition, Figure 4 also shows the node distribution over bitwidths, and the result is consistent with a power-law distribution. Since nodes in graph data mainly have low in-degrees, most of the nodes are quantized to low bitwidths (\(\leq 4\)), compressing the GNNs as much as possible, while some high in-degree nodes are quantized to high bitwidths, which helps maintain the accuracy of the GNN models. As a result, the average bitwidth over the entire graph features is low, and the accuracy degradation is negligible.

Figure 4: The relationship between quantized bitwidth and average in-degrees of nodes. (a), (b) and (c) represent the results of three GNN models trained on CiteSeer. (d) and (e) concern the first and the second layer of the MLP that serves as the update function of GIN trained on REDDIT-BINARY. The green bars represent the average in-degree of the nodes using a certain bitwidth, and the orange polylines represent the number of nodes that use this bitwidth.

For the graph-level tasks, in which the number of nodes varies, our method is also aggregation-aware. We select a layer of GIN trained on REDDIT-BINARY and, in Figures 4(d) and 4(e), analyze the relationship between each bitwidth and the average in-degree of the nodes quantized with it. It can be seen that the bitwidths learned for the node features input to the second layer of the MLP, which is the update function in GIN for graph-level tasks, do not present a correlation with the topology of the graph. The reason is that the node features entering the second layer have already been mapped by the first MLP layer and passed through an activation function, e.g., ReLU, which makes them lose the topology information. We present more experimental results in Appendix A.7 to demonstrate that our method is generally applicable.

## 5 Ablation Study

**The advantage of learning-based mixed-precision quantization:** In Figure 5, we compare our \(\mathrm{A}^{2}\mathrm{Q}\) with a manual mixed-precision method, which assigns high bitwidths to nodes with high in-degrees and low bitwidths to nodes with low in-degrees. In the figure, the suffix "learn" denotes our \(\mathrm{A}^{2}\mathrm{Q}\) method, "manual" denotes that we assign bits to nodes and the model only learns the step size, and "mixed-precision" denotes that the model uses the same quantization method as DQ-INT4 but assigns different bitwidths to nodes. For "mixed-precision", we assign 5 bits to the nodes with the top 50% in-degrees and 3 bits to the others. The implications are two-fold. First, compared with DQ-INT4, which uses a single quantization bitwidth, the mixed-precision method obtains 1.1% gains on the GCN-Cora task, demonstrating that mixed precision is more effective.
Second, the learning method outperforms the manual method on all tasks. This is especially true for models with a high compression ratio: on the GIN-CiteSeer task, the learning method achieves 21.5% higher accuracy. This demonstrates that, for the mixed-precision quantization of GNNs, our learning method performs better than assigning bitwidths according to prior knowledge.

**The power of learning the quantization parameters:** Ablations of the two quantization parameters \((\mathbf{s},\mathbf{b})\) on the GIN-Cora task are reported in the first row of Table 3. Here "no-lr" denotes that no learning method is used, "no-lr-b" that only the step size \(\mathbf{s}\) is learned, "no-lr-s" that only the bitwidths \(\mathbf{b}\) are learned, and "lr-all" that the bitwidth and step size are learned simultaneously. We can see that learning the step size significantly increases the accuracy; even the "no-lr-b" model outperforms DQ-INT4 at the same compression ratio. When learning the bitwidth and step size simultaneously, the model achieves higher accuracy with a higher compression ratio. This is because our method learns lower bitwidths for the majority of nodes with low in-degrees and higher bitwidths for a tiny fraction of nodes with high in-degrees, which improves the compression ratio while achieving higher accuracy.

**Local Gradient vs. Global Gradient:** To demonstrate the effectiveness of our Local Gradient method, we compare models trained with and without it on the GCN-CiteSeer task in the last row of Table 3. "Global" denotes that the model is trained with Eq. 3 and Eq. 4. The model trained with the local method outperforms the global method by 13.8% with a higher compression ratio. This is because the Local Gradient method can learn quantization parameters for all nodes, whereas with the Global Gradient method only the quantization parameters of a subset of nodes can be updated, due to the extremely sparse connectivity of the graph in node-level semi-supervised tasks.

**The overhead of the Nearest Neighbor Strategy:** We evaluate the real inference time of the GIN model on a 2080Ti GPU. On the REDDIT-BINARY task, the model without the selection process requires 121.45 ms, while the model with our Nearest Neighbor Strategy takes 122.60 ms, which introduces only a 0.95% overhead. With the help of the Nearest Neighbor Strategy, however, our model obtains 19.3% accuracy gains for the quantized GIN on REDDIT-BINARY.

## 6 Conclusion

This paper proposes A\({}^{2}\)Q, an aggregation-aware mixed-precision quantization method for GNNs, and introduces the Local Gradient and Nearest Neighbor Strategy to generalize A\({}^{2}\)Q to node-level and graph-level tasks, respectively. Our method learns the quantization parameters for different nodes by fully utilizing the properties of GNNs and graph data. The model quantized by our A\({}^{2}\)Q achieves up to an 18.6x compression ratio with negligible accuracy degradation compared with the FP32 baseline. Compared with the prior SOTA, DQ-INT4, our method improves accuracy by up to 11.4% with up to a 2.00x speedup on different tasks. Our work provides a general, robust and practical solution for speeding up the inference of GNNs.
2307.05635
Fundamental limits of overparametrized shallow neural networks for supervised learning
We carry out an information-theoretical analysis of a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture, in overparametrized regimes. Our results come in the form of bounds relating i) the mutual information between training data and network weights, or ii) the Bayes-optimal generalization error, to the same quantities but for a simpler (generalized) linear model for which explicit expressions are rigorously known. Our bounds, which are expressed in terms of the number of training samples, input dimension and number of hidden units, thus yield fundamental performance limits for any neural network (and actually any learning procedure) trained from limited data generated according to our two-layer teacher neural network model. The proof relies on rigorous tools from spin glasses and is guided by ``Gaussian equivalence principles'' lying at the core of numerous recent analyses of neural networks. With respect to the existing literature, which is either non-rigorous or restricted to the case of the learning of the readout weights only, our results are information-theoretic (i.e. are not specific to any learning algorithm) and, importantly, cover a setting where all the network parameters are trained.
Francesco Camilli, Daria Tieplova, Jean Barbier
2023-07-11T08:30:50Z
http://arxiv.org/abs/2307.05635v1
# Fundamental limits of overparametrized shallow neural networks for supervised learning

###### Abstract

We carry out an information-theoretical analysis of a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture, in overparametrized regimes. Our results come in the form of bounds relating \(i)\) the mutual information between training data and network weights, or \(ii)\) the Bayes-optimal generalization error, to the same quantities but for a simpler (generalized) linear model for which explicit expressions are rigorously known. Our bounds, which are expressed in terms of the number of training samples, input dimension and number of hidden units, thus yield fundamental performance limits for any neural network (and actually any learning procedure) trained from limited data generated according to our two-layer teacher neural network model. The proof relies on rigorous tools from spin glasses and is guided by "Gaussian equivalence principles" lying at the core of numerous recent analyses of neural networks. With respect to the existing literature, which is either non-rigorous or restricted to the case of the learning of the readout weights only, our results are information-theoretic (i.e. are not specific to any learning algorithm) and, importantly, cover a setting where _all_ the network parameters are trained.

+ Footnote †: {fcamilli,dtieplov,jbarbier}@ictp.it

## 1 Introduction

Artificial neural networks (NNs) are universal approximators [1, 2, 3] with remarkable abilities for supervised learning tasks such as regression or classification. In particular, modern deep neural networks, originally inspired by multilayer perceptrons [4, 5, 6, 7, 8], achieve exceptional performance in image classification or speech recognition [9], just to name a few examples. However, despite the important activity revolving around them, their theoretical understanding remains rather poor. One reason for the lack of strong theoretical guarantees for realistic NN models is the complex interplay between at least three aspects, whose individual effects are hard to single out: their architecture, the structure inherent to the data sets on which they are trained, as well as the algorithms and optimization procedures used to do so. It is therefore of crucial interest to tackle well-defined theoretical models which are rich enough to capture some of the features of real NNs while remaining theoretically tractable. In this work, we propose to analyse a teacher-student set-up from a Bayesian-optimal perspective, with random input data and dependent responses generated according to a rule based on a teacher NN. This setting has the advantage of disentangling the aforementioned three components of NN learning by allowing us to focus mostly on how the _architecture_ of the NN used for learning (and data generation), and the amount of accessible data, influence the prediction performance. More precisely, we are going to show that when learning a complex rule linking _unstructured_ inputs to responses in the _information-theoretically optimal way_, and this in an _overparametrized regime_, then an explicit characterization of the prediction capabilities of the NN is possible. Our results being of an information-theoretic nature, they will not depend on a specific learning procedure. Moreover, because the inputs will be structure-less, the conclusions drawn will essentially capture architecture-dependent features of the learning; in the present case, the effect of overparametrization.
A key challenge one has to face when analysing NNs is the presence of non-linear activation functions, whose role is essential for the network expressivity. Models with linear activations cannot capture non-linearities, but they serve as a starting point for deeper understanding [10]. The case of a narrow hidden layer was already studied more than thirty years ago [11, 12, 13] and more recently in [14]. However, in the more challenging regimes where _all layers are large and of comparable sizes_ it was observed (for instance in [15, 16, 17, 18, 19]) that certain NN models behave like finely tuned linear models regardless of the activation type, given sufficient regularity. From these observations, a whole set of "Gaussian equivalence principles" (GEPs) have emerged as valuable tools for handling non-linear activations in both rigorous and more heuristic approaches. GEPs leverage a well-known fact in high-dimensional probability: suitably rescaled low-dimensional projections of high-dimensional vectors with weakly correlated components exhibit Gaussian behavior. Classical results [20, 21] and recent developments [22, 23] support the validity of GEPs in various high-dimensional inference contexts, such as in the description of certain observables for shallow neural networks [17, 18, 19]. However, the extent to which GEPs apply to the _information-theoretic study of NNs where all weights are learned_ remains uncertain. Certain scaling regimes relating the number of data samples and network weights must cause GEPs to break down, as NNs do not always behave like linear models [24]. This paper aims to bridge this gap by means of rigorous mathematical physics techniques developed in the study of spin glasses. We demonstrate the existence of a scaling regime for two-layer networks where GEPs are rigorously applicable, with the number of data playing a central role. As a result, we establish the information-theoretical equivalence between a two-layer NN and a generalized linear model, which hence share the same optimal generalization error.

**Notations.** Bold notations are reserved for vectors and matrices. By default a vector \(\mathbf{x}\) is a column vector, and its transpose \(\mathbf{x}^{\intercal}\) is therefore a row vector. Thus the usual \(L_{2}\) norm \(\|\mathbf{x}\|^{2}=\mathbf{x}^{\intercal}\mathbf{x}\) and \(\mathbf{x}\mathbf{x}^{\intercal}\) is a rank-one projector. \(\mathbb{E}_{A}\) is an expectation with respect to the random variable \(A\); \(\mathbb{E}\) is an expectation with respect to all random variables entering the ensuing expression. For a function \(F\) of one argument we denote \(F^{\prime}\) its derivative. Notations like \(i\leq N\) always implicitly assume that the index \(i\) starts at \(1\). We often compactly write \(\mathbb{E}(\cdots)^{2}=\mathbb{E}[(\cdots)^{2}]\geq(\mathbb{E}(\cdots))^{2}=\mathbb{E}^{2}(\cdots)\) and similarly for other functions, we denote equivalently \(\mathbb{E}[f(\cdots)]\) and \(\mathbb{E}f(\cdots)\). For any functions \(f\) and \(g\), \(f=O(g)\) means that there exists a constant \(K\) such that \(|f|\leq K|g|\); in other words we simply denote \(O(g):=O(|g|)\) where in the r.h.s. we use the standard big O notation. Hence, taking for instance a Gaussian r.v. \(Z\sim\mathcal{N}(0,1)\), \(f=O(Z)\) does _not_ imply \(\mathbb{E}f=0\), but rather \(|\mathbb{E}f|\leq\mathbb{E}|f|\leq K\mathbb{E}|Z|\).

**Acknowledgements.** The authors were funded by the European Union (ERC, CHORAL, project number 101039794).
Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. We wish to thank Marco Mondelli for suggesting meaningful references, and Rosalba Pacelli and Pietro Rotondo for fruitful clarifications on [10, 25].

## 2 Setting and main results

### Bayesian-optimal learning of a shallow network in the teacher-student set-up

We consider supervised learning with a Bayesian two-layer neural network in a teacher-student set-up, with matched architecture for the teacher (i.e., data-generating) and student (i.e., trained) models. To be precise, let the dataset be \(\mathcal{D}_{n}=\{(\mathbf{X}_{\mu},Y_{\mu})\}_{\mu=1}^{n}\), with inputs \(\mathbf{X}_{\mu}\in\mathbb{R}^{d}\) and responses \(Y_{\mu}\in\mathbb{R}\). The \(Y_{\mu}\) could also be taken from \(\mathbb{R}^{K}\) with \(K\) independent of \(d,p,n\) without much change. The teacher network generates the responses according to the following rule: \[Y_{\mu}=f\Big{(}\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)};\mathbf{A}_{\mu}\Big{)}+\sqrt{\Delta}Z_{\mu}\,. \tag{1}\] Here, \(\mathbf{a}^{*}\in\mathbb{R}^{p}\) and \(\mathbf{W}^{*}\in\mathbb{R}^{p\times d}\). The readout function \(f\) can include stochasticity, modeled through its second argument: the \(\mathbf{A}_{\mu}\) are independent and identically distributed (i.i.d.) random variables in \(\mathbb{R}^{k}\) with \(k\) fixed, whose common distribution is denoted by \(P_{A}\). Whenever it is not specified, real functions are applied component-wise to vectors, such as the non-linearities \(f,\varphi\). We assume the following regularity hypotheses:

* The activation function \(\varphi:\mathbb{R}\mapsto\mathbb{R}\), \(\varphi\in C^{3}\) is \(c\)-Lipschitz for some absolute constant \(c\), is an odd function, and has bounded second and third derivatives.
* The readout function \(f\) as well as its first and second derivatives are \(P_{A}\)-almost surely bounded.

We draw the independent inputs \(\mathbf{X}_{\mu}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,I_{d})\) from the standard Gaussian measure. Furthermore, Gaussian label noise \(Z_{\mu}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,1)\) is added in (1), whose variance is tuned by \(\Delta>0\). Introducing the output kernel \[P_{\text{out}}(y\mid x):=\int P_{A}(d\mathbf{A})\frac{1}{\sqrt{2\pi\Delta}}\exp\Big{(}-\frac{1}{2\Delta}(y-f(x;\mathbf{A}))^{2}\Big{)}\,, \tag{2}\] one can see that the random outputs are generated independently, conditionally on the teacher parameters \(\boldsymbol{\theta}^{*}=(\mathbf{a}^{*},\mathbf{W}^{*})\) and the inputs, as \[Y_{\mu}\sim P_{\text{out}}\Big{(}\cdot\mid\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}\Big{)}\,. \tag{3}\] The weights of the teacher network are drawn from a centered, factorized Gaussian prior distribution. Taking them with different variances is possible but does not add much to the result, so we consider them all equal to one for our purposes: \(a_{i}^{*},W_{ij}^{*}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,1)\). The same Gaussian law is used as prior distribution for the Bayesian student neural network model. In empirical risk minimization, a Gaussian prior would induce an \(L^{2}\) norm regularization for the weights.
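To fix ideas, here is a minimal sketch sampling a dataset from the teacher rule (1). The choices \(\varphi=\tanh\) (odd, Lipschitz, with bounded higher derivatives) and a deterministic bounded readout \(f=\tanh\) with no extra randomness \(\mathbf{A}_{\mu}\) are illustrative assumptions compatible with (A1)-(A2), not requirements of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_teacher_dataset(n, d, p, Delta=0.1, phi=np.tanh, f=np.tanh):
    """Sample (X_mu, Y_mu), mu = 1..n, from the teacher rule (1).

    Illustrative choices: phi = tanh (odd, Lipschitz), f = tanh (bounded with
    bounded derivatives), and no extra readout randomness A_mu."""
    a_star = rng.standard_normal(p)        # readout weights a* ~ N(0, I_p)
    W_star = rng.standard_normal((p, d))   # hidden weights  W*_{ij} ~ N(0, 1)
    X = rng.standard_normal((n, d))        # inputs X_mu ~ N(0, I_d)
    pre = X @ W_star.T / np.sqrt(d)        # pre-activations W* X_mu / sqrt(d)
    S = phi(pre) @ a_star / np.sqrt(p)     # S_mu = a*^T phi(W* X_mu/sqrt(d))/sqrt(p)
    Y = f(S) + np.sqrt(Delta) * rng.standard_normal(n)   # label noise
    return X, Y, a_star, W_star

X, Y, a_star, W_star = sample_teacher_dataset(n=200, d=500, p=1000)
```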
In this paper we will instead deal with Bayesian learning in the so-called _Bayes-optimal setting_, which is the proper framework to quantify the fundamental performance limits of neural networks. Concretely, the Bayes-optimal scenario corresponds to a realizable matched setting where the student network has exactly the same architecture as the teacher network used to generate the responses in the data \(\mathcal{D}_{n}\). The analysis in this setting therefore yields the best information-theoretically achievable generalization performance of the student when optimally trained, namely, when using Bayesian learning based on the posterior distribution of the student's parameters. As a consequence, _our results lay down fundamental upper bounds for the performance of any neural network, Bayesian or not, trained in any possible manner (including empirical risk minimization rather than Bayesian learning) or, more generally, for any learning procedure for the supervised task at hand. Moreover, both the readout and internal hidden layer are trained, with a number of hidden units that can grow proportionally to the input dimension._ To the best of our knowledge, this is the first rigorous result of this kind in this challenging scaling regime. More formally, a student network is said to be Bayes-optimal if it employs the same output kernel \(P_{\text{out}}\) as used by the teacher network, or equivalently the same number of layers and layer widths, readout \(f\) and activation \(\varphi\), label noise variance \(\Delta\), as well as the same prior law for its weights. Bayes-optimal learning is then based on the Bayes posterior of the network parameters \(\boldsymbol{\theta}=(\mathbf{a},\mathbf{W})\) which reads \[dP(\boldsymbol{\theta}\mid\mathcal{D}_{n})=\frac{1}{\mathcal{Z}(\mathcal{D}_{n})}\prod_{\mu=1}^{n}P_{\text{out}}\Big{(}Y_{\mu}\mid\frac{\mathbf{a}^{\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}\Big{)}D\boldsymbol{\theta} \tag{4}\] where for brevity
Then, (optimal) Bayesian learning means that the predictor \(\hat{Y}_{\text{Bayes}}(\mathbf{X}_{\text{new}})\) of the response associated with a fresh input test sample corresponds to the mean of the Bayes posterior distribution of the response given the training data: \[\hat{Y}_{\text{Bayes}}(\mathbf{X}_{\text{new}}):=\mathbb{E}[Y_{\text{new}}\mid \mathcal{D}_{n},\mathbf{X}_{\text{new}}]=\int dY\,Y\,P_{\text{out}}\Big{(}Y\mid \frac{\mathbf{a}^{\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}\mathbf{X }_{\text{new}}}{\sqrt{d}}\Big{)}\Big{)}dP(\mathbf{\theta}\mid\mathcal{D}_{n})\,. \tag{8}\] We will sometimes employ the language of statistical mechanics. In particular we interpret the posterior distribution as a Boltzmann-Gibbs measure over degrees of freedom which are the network weights. We shall denote the expectations w.r.t. the posterior with the so-called Gibbs brackets \(\langle\cdot\rangle\). For future convenience we introduce also its replicated version: for a function \(g\) dependent of \(k\) copies \((\mathbf{\theta}_{b})_{b\leq k}\) of the parameters, \[\langle g\rangle^{\otimes k}:=\frac{1}{\mathcal{Z}(\mathcal{D}_{n})^{k}}\int \prod_{b=1}^{k}D\mathbf{\theta}_{b}\prod_{\mu=1}^{n}P_{\text{out}}\Big{(}Y_{\mu} \mid\frac{\mathbf{a}_{b}^{\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W} _{b}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}\Big{)}\,g((\mathbf{\theta}_{b})_{b\leq k})\,, \tag{9}\] that, with a slight abuse of notation, will still be denoted by \(\langle\cdot\rangle\). From the above definition we see that the replicated Boltzmann-Gibbs measure is factorized for a given realization of the dataset, interpreted as quenched randomness in the analogy with spin glasses [26]. Hence, _replicas_, namely i.i.d. samples from the posterior measure, are independent conditionally on \(\mathcal{D}_{n}\). However, when computing so-called quenched averages \(\mathbb{E}\langle\cdot\rangle\) a further expectation \(\mathbb{E}\) is taken w.r.t. the quenched data which couples the replicas. One of the main object of interest is the _free entropy_ (i.e., log-partition function) per sample, which is nothing else than minus the Shannon entropy \(H(\mathcal{D}_{n})\) of the data distribution per sample: \[\bar{f}_{n}:=\frac{1}{n}\mathbb{E}\log\mathcal{Z}(\mathcal{D}_{n})=-\frac{1}{ n}H(\mathcal{D}_{n})\,, \tag{10}\] where the expectation \(\mathbb{E}\) is w.r.t. to the training data \(\mathcal{D}_{n}=\{(\mathbf{X}_{\mu},Y_{\mu})\}_{\mu=1}^{n}\). The normalization by \(n\) is natural given that the number of terms in the "energy" defined by the exponent in (6) is precisely \(n\). The data has a joint law that can be written in terms of the output kernel \[dP(\mathcal{D}_{n}) =\prod_{\mu=1}^{n}\Big{(}\prod_{j=1}^{d}\frac{dX_{\mu j}}{\sqrt{2 \pi}}e^{-\frac{X_{\mu j}^{2}}{2}}\Big{)}dY_{\mu}\,\mathbb{E}_{\mathbf{a}^{*}, \,\mathbf{W}^{*}}\prod_{\mu=1}^{n}P_{\text{out}}(Y_{\mu}\mid S_{\mu})\] \[=:\prod_{\mu=1}^{n}D\mathbf{X}_{\mu}dY_{\mu}\,\mathbb{E}_{ \mathbf{a}^{*},\,\mathbf{W}^{*}}\exp\Big{(}\sum_{\mu=1}^{n}u_{Y_{\mu}}(S_{\mu} )\Big{)}\,. \tag{11}\] Two observations are in order. First, the samples, indexed by \(\mu\), are not independent because the responses were all drawn from the teacher, even though the \(\mathbf{X}_{\mu}\)'s are independently generated. Second, except for the presence of differentials on the quenched variables this expression is very similar to the partition function (6). 
This is due to the Bayes-optimality of the student network and has some pragmatic consequences, such as the Nishimori identities, which will be key for the proof, see Appendix A. Finally, note that for the sake of simplicity we did not include trainable biases in the definition of our NN model. However, we believe that adding them would not change much in our analysis as long as, like the other trainable parameters, they are Gaussian distributed and the student network is again Bayes-optimal and uses the same architecture and number of parameters as the teacher model.

### An information-theoretically equivalent generalized linear model

We now introduce another model, known as the generalized linear model (GLM) [6, 7, 8], which can be represented as a one-layer neural network and which is thus a generalization of the classical perceptron [4, 5]. One particular instance of the GLM turns out to be deeply connected to the setting with shallow networks introduced in the previous section. In this model the responses are generated independently conditionally on the inputs as \[Y_{\mu}^{\circ}=f\Big{(}\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{\mu}^{*}\,;\,\mathbf{A}_{\mu}\Big{)}+\sqrt{\Delta}Z_{\mu}\,,\quad\text{or}\quad Y_{\mu}^{\circ}\sim P_{\text{out}}\Big{(}\cdot\mid\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{\mu}^{*}\Big{)} \tag{12}\] where \(\mathbf{v}^{*}=(v_{j}^{*})_{j\leq d}\in\mathbb{R}^{d}\), \(v_{j}^{*}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,1)\), \(\xi_{\mu}^{*}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,1)\), all independently, and the rest is as before. With respect to the two-layer neural network, the non-linearity brought by the middle layer has been replaced by a linear model with an additional effective Gaussian noise \(\xi_{\mu}^{*}\). \(\rho\) and \(\epsilon\geq 0\) are two real parameters that will be specified later. This connection between the Bayes-optimal learning of neural networks and this GLM was recently conjectured in [27] based on the replica method and the application of Gaussian equivalences. Our results vindicate their conjecture, but for different scaling regimes relating the diverging parameters \(d,p,n\). We are going to further comment on this point later on. All the above construction can be repeated for the generalized linear model. From now on, quantities characterized by a \({}^{\circ}\) superscript will refer to the GLM. For starters, we denote the dataset generated through (12) by \(\mathcal{D}_{n}^{\circ}:=\{(\mathbf{X}_{\mu},Y_{\mu}^{\circ})\}_{\mu=1}^{n}\). Let \[s_{\mu}^{\circ}=\rho\frac{\mathbf{v}^{\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{\mu}\,,\qquad S_{\mu}^{\circ}=\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{\mu}^{*}\] and \(D\boldsymbol{\xi}=\prod_{\mu}D\xi_{\mu}\). The expectation under the GLM posterior of any bounded test function \(g\) of \(k\) "replicas" (i.e., conditionally on the data i.i.d. copies) \((\mathbf{v}_{b},\boldsymbol{\xi}_{b})_{b\leq k}\) reads \[\langle g\rangle^{\circ\,\otimes k}:=\frac{1}{\mathcal{Z}^{\circ}(\mathcal{D}_{n}^{\circ})^{k}}\int\prod_{b=1}^{k}D\mathbf{v}_{b}D\boldsymbol{\xi}_{b}\prod_{\mu=1}^{n}P_{\text{out}}\Big{(}Y_{\mu}^{\circ}\mid\rho\frac{\mathbf{v}_{b}^{\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{b\mu}\Big{)}\,g((\mathbf{v}_{b},\boldsymbol{\xi}_{b})_{b\leq k})\,, \tag{13}\] with \(\mathcal{Z}^{\circ}(\mathcal{D}_{n}^{\circ})\) the GLM posterior normalization.
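As a companion to the sketch of the teacher network given earlier, the GLM rule (12) can be sampled as follows; we anticipate the values \(\rho=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{\prime}\) and \(\epsilon=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}-\rho^{2}\) fixed in Theorem 1 below, estimated here by Monte Carlo for the illustrative choice \(\varphi=\tanh\) (the readout \(f=\tanh\) is again an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimates of rho = E[phi'(Z)] and eps = E[phi(Z)^2] - rho^2
# (as fixed in Theorem 1), for the illustrative choice phi = tanh, Z ~ N(0,1).
z = rng.standard_normal(10**6)
rho = np.mean(1.0 - np.tanh(z) ** 2)        # phi'(z) = 1 - tanh(z)^2
eps = np.mean(np.tanh(z) ** 2) - rho ** 2

def sample_glm_dataset(n, d, rho, eps, Delta=0.1, f=np.tanh):
    """Sample from the equivalent noisy GLM rule (12):
    Y = f(rho v*^T X / sqrt(d) + sqrt(eps) xi*) + sqrt(Delta) Z."""
    v_star = rng.standard_normal(d)          # v* ~ N(0, I_d)
    X = rng.standard_normal((n, d))          # inputs, as for the 2-layer model
    xi_star = rng.standard_normal(n)         # effective Gaussian noise xi*_mu
    S = rho * X @ v_star / np.sqrt(d) + np.sqrt(eps) * xi_star
    Y = f(S) + np.sqrt(Delta) * rng.standard_normal(n)
    return X, Y

X_glm, Y_glm = sample_glm_dataset(n=200, d=500, rho=rho, eps=eps)
```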
As before, the free entropy reads \[\bar{f}_{n}^{\circ}:=\frac{1}{n}\mathbb{E}\log\mathcal{Z}^{\circ}(\mathcal{D}_{n}^{\circ})=\frac{1}{n}\mathbb{E}\log\int D\mathbf{v}D\boldsymbol{\xi}\exp\Big{(}\sum_{\mu=1}^{n}u_{Y_{\mu}^{\circ}}(s_{\mu}^{\circ})\Big{)}\,. \tag{14}\] Finally, we write the distribution of the dataset, which is used for the quenched average in the above formula: \[dP(\mathcal{D}_{n}^{\circ})=\prod_{\mu=1}^{n}\Big{(}\prod_{j=1}^{d}\frac{dX_{\mu j}}{\sqrt{2\pi}}e^{-\frac{X_{\mu j}^{2}}{2}}\Big{)}dY_{\mu}^{\circ}\,\mathbb{E}_{\mathbf{v}^{*},\boldsymbol{\xi}^{*}}\prod_{\mu=1}^{n}P_{\text{out}}(Y_{\mu}^{\circ}\mid S_{\mu}^{\circ})=:\prod_{\mu=1}^{n}D\mathbf{X}_{\mu}dY_{\mu}^{\circ}\,\mathbb{E}_{\mathbf{v}^{*},\boldsymbol{\xi}^{*}}\exp\Big{(}\sum_{\mu=1}^{n}u_{Y_{\mu}^{\circ}}(S_{\mu}^{\circ})\Big{)} \tag{15}\] and the optimal Bayesian predictor is \[\hat{Y}^{\circ}_{\text{Bayes}}(\mathbf{X}_{\text{new}}):=\mathbb{E}[Y^{\circ}_{\text{new}}\mid\mathcal{D}^{\circ}_{n},\mathbf{X}_{\text{new}}]=\int dY\,Y\,P_{\text{out}}\Big{(}Y\mid\rho\frac{\mathbf{v}^{\intercal}\mathbf{X}_{\text{new}}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{\mu}\Big{)}dP(\mathbf{v},\boldsymbol{\xi}\mid\mathcal{D}^{\circ}_{n})\,. \tag{16}\]

### Results

Our first theorem concerns the equivalence between the Bayes-optimal learning of the neural network and GLM models at the level of the free entropy. Letting \(Z\sim\mathcal{N}(0,1)\) we denote \(\mathbb{E}_{\mathcal{N}(0,1)}g:=\mathbb{E}g(Z)\).

**Theorem 1** (Free entropy equivalence).: _Let_ \[\rho:=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{\prime}\quad\text{and}\quad\epsilon:=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}-\rho^{2}\,, \tag{17}\] _and suppose (A1) and (A2) hold. Then_ \[|\bar{f}_{n}-\bar{f}^{\circ}_{n}|=O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n}{p}+\frac{n}{d^{3/2}}+\frac{1}{\sqrt{d}}\Big{)}}\Big{)}\,. \tag{18}\]

From the previous statement we can identify the scaling regime for which the equivalence holds, namely, when the right hand side of (18) goes to \(0\). From now on we shall denote by \[\widetilde{\lim}\,g_{d,p,n}:=\lim_{i\to\infty}\,g_{d_{i},p_{i},n_{i}}\] where \((d_{i},p_{i},n_{i})_{i}\) is any sequence of triplets of integers such that \[\lim_{i\to\infty}\Big{(}1+\frac{n_{i}}{d_{i}}\Big{)}\Big{(}\frac{n_{i}}{p_{i}}+\frac{n_{i}}{d_{i}^{3/2}}+\frac{1}{\sqrt{d_{i}}}\Big{)}=0\,. \tag{19}\] As a corollary we have the matching of the mutual informations between the dataset \(\mathcal{D}_{n}\) and the network weights in the same limit. For the two-layer neural network the mutual information is related to the free entropy through the following expression (\(H\) is the Shannon entropy) \[\frac{1}{n}I_{n}(\boldsymbol{\theta}^{*};\mathcal{D}_{n})=\frac{1}{n}H(\mathcal{D}_{n})-\frac{1}{n}H(\mathcal{D}_{n}\mid\boldsymbol{\theta}^{*})=-\bar{f}_{n}+\mathbb{E}\log P_{\text{out}}\Big{(}Y_{1}\mid\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{1}}{\sqrt{d}}\Big{)}\Big{)}\,, \tag{20}\] whereas for the equivalent GLM we have \[\frac{1}{n}I^{\circ}_{n}(\boldsymbol{\theta}^{\circ*};\mathcal{D}^{\circ}_{n})=-\bar{f}^{\circ}_{n}+\mathbb{E}\log P_{\text{out}}\Big{(}Y_{1}\mid\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{1}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{1}^{*}\Big{)}\,. \tag{21}\]
Considering that the teacher weights and inputs are Gaussian, we have in law \[\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{1}}{\sqrt{d}}\Big{)}\overset{\text{\tiny D}}{=}Z\sqrt{\frac{1}{p}\Big{\|}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{1}}{\sqrt{d}}\Big{)}\Big{\|}^{2}} \tag{22}\] with \(Z\sim\mathcal{N}(0,1)\), where \(\|\cdot\|\) is the standard \(L^{2}\) norm for vectors. Therefore, it is clear that the randomness in \(\mathbf{W}^{*}\) and \(\mathbf{X}_{1}\) will disappear in the limit. A similar equality holds for the GLM: \[\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{1}}{\sqrt{d}}+\sqrt{\epsilon}\xi_{1}^{*}\overset{\text{\tiny D}}{=}Z\sqrt{\rho^{2}\frac{\|\mathbf{X}_{1}\|^{2}}{d}+\epsilon}\,. \tag{23}\] Our goal is now to show that the arguments under the square roots both tend to \(\rho^{2}+\epsilon=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\) in the limit, and that we can plug this result inside the last terms in (20) and (21), with a control on the error we make. To this end, define \[S_{d}(t)=\sqrt{t\rho^{2}\Big{(}\frac{\|\mathbf{X}_{1}\|^{2}}{d}-1\Big{)}+\epsilon+\rho^{2}}\,,\quad\text{or}\quad S_{d}(t)=\sqrt{t\Big{(}\frac{1}{p}\Big{\|}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{1}}{\sqrt{d}}\Big{)}\Big{\|}^{2}-\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\Big{)}+\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}} \tag{24}\] and \[\Psi(t):=\mathbb{E}\int dYP_{\text{out}}(Y\mid ZS_{d}(t))\log P_{\text{out}}(Y\mid ZS_{d}(t))\,. \tag{25}\] Using the properties later described in Lemma 4 and the definition of \(P_{\text{out}}\) in (2), under assumptions (A1)-(A2) one can readily verify that \[|\dot{\Psi}(t)|\leq C(f)\mathbb{E}|Z||\dot{S}_{d}(t)|\,, \tag{26}\] where \(C(f)\) is a constant depending on \(f\). From this bound, by the fundamental theorem of calculus, we have \[|\Psi(1)-\Psi(0)|\leq C(f)\int_{0}^{1}dt\,\mathbb{E}|\dot{S}_{d}(t)|\leq C(f)\begin{cases}\dfrac{\mathbb{E}\big{|}\|\varphi(\mathbf{W}^{*}\mathbf{X}_{1}/\sqrt{d})\|^{2}/p-\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\big{|}}{\sqrt{\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}}}&\quad\text{for the 2-layer NN}\\ \rho^{2}\dfrac{\mathbb{E}\big{|}\|\mathbf{X}_{1}\|^{2}/d-1\big{|}}{\sqrt{\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}}}&\quad\text{for the GLM}\end{cases}\,. \tag{27}\] The remainder for the two-layer NN is \(O(p^{-1/2}+d^{-1/4})\) whereas for the GLM we have \(O(d^{-1/2})\). Finally, thanks to the previous argument, \[\frac{1}{n}I_{n}(\boldsymbol{\theta}^{*};\mathcal{D}_{n})=-\bar{f}_{n}+\Psi(\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2})+O(p^{-1/2}+d^{-1/4})\,, \tag{28}\] \[\frac{1}{n}I_{n}^{\circ}(\boldsymbol{\theta}^{\circ*};\mathcal{D}_{n}^{\circ})=-\bar{f}_{n}^{\circ}+\Psi(\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2})+O(d^{-1/2})\,, \tag{29}\] where \[\Psi(\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}):=\Psi(0)=\mathbb{E}\int dYP_{\text{out}}(Y\mid Z\sqrt{\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}})\log P_{\text{out}}(Y\mid Z\sqrt{\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}})\,. \tag{30}\] Hence, we have just proved the information-theoretical equivalence:

**Corollary 2** (Mutual information equivalence).: _Under the same hypotheses as Theorem 1 the following holds:_ \[\Big{|}\frac{1}{n}I_{n}(\boldsymbol{\theta}^{*};\mathcal{D}_{n})-\frac{1}{n}I_{n}^{\circ}(\boldsymbol{\theta}^{\circ*};\mathcal{D}_{n}^{\circ})\Big{|}=O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n}{p}+\frac{n}{d^{3/2}}+\frac{1}{\sqrt{d}}\Big{)}}\Big{)}\,. \tag{31}\]
The performance of the neural network is quantified by the generalization error on test data under the square loss. The Bayes-optimal generalization error thus reads \[\mathcal{E}_{n}:=\mathbb{E}\big{(}Y_{\text{new}}-\mathbb{E}[Y_{\text{new}}\mid\mathcal{D}_{n},\mathbf{X}_{\text{new}}]\big{)}^{2}\,. \tag{32}\] The outer expectation is taken w.r.t. the training data set and the test sample, which is generated independently from the training data according to the same teacher model. The GLM Bayes-optimal generalization error \(\mathcal{E}_{n}^{\circ}\) is defined similarly, but considering the GLM teacher-student setting described in the previous section. As a consequence of the proof technique of Theorem 1 it is possible to show the following equivalence at the level of the generalization error, proven in Section 5.

**Theorem 3** (Generalization error equivalence).: _Under the same hypotheses as Theorem 1 the following holds:_ \[\widetilde{\lim}\,|\mathcal{E}_{n}-\mathcal{E}_{n}^{\circ}|=0\,, \tag{33}\] _i.e., the shallow neural network and noisy GLM settings lead to the same Bayes-optimal generalization error in any high-dimensional limit such that (19) holds._

Our results combined with those of [8] for the GLM provide explicit rigorous formulas for the mutual information and Bayes-optimal generalization error of the Bayesian neural network studied in this paper. A remark is in order. The previous theorem states that the Bayes-optimal generalization error of a two-layer NN, trained on the dataset \(\mathcal{D}_{n}\) generated by a teacher two-layer NN with the same architecture, equals that of a noisy GLM trained on the dataset \(\mathcal{D}_{n}^{\circ}\) generated by a matched teacher GLM. However, we cannot deduce from this that the two-layer NN trained using the GLM teacher dataset \(\mathcal{D}_{n}^{\circ}\), or vice versa, achieves the Bayes-optimal generalization error. It would be interesting to investigate this aspect in the future.

### Related works

There exists by now a whole zoology of theoretical models for NNs studied in the literature, and it becomes increasingly challenging to cover them all. We provide here a partial classification of the main models, divided according to how their internal widths scale compared to the input dimension, and whether the internal weights are trainable or not, see Figure 1. For each class we provide a selection of relevant references without trying to be exhaustive.

**Perceptrons and committee machines.** The perceptron (and its generalization, the GLM) is a linear classifier with a non-linear readout. Committee machines can be viewed as two-layer neural networks with a narrow hidden layer and a single-neuron output layer. These models have been studied in teacher-student set-ups and with online learning since the nineties [28, 31, 12, 11, 32, 33, 34, 35, 36, 8, 14, 29, 30, 17, 18]. Despite their rich phenomenology, with a so-called specialization phase transition where the model realizes that the input-output rule is non-linear, these machines cannot capture the features of realistic data: they project high-dimensional data in a comparatively too low-dimensional space.
The relevant regime for these models is \(n,d\to\infty\) with \(n/d\to\alpha\in(0,\infty)\) and \(p=O(1)\). Although all the weights between the layers have to be learnt, in contrast with our setting the middle layer remains finite while the input dimension and the number of data points diverge together. Note that, at first sight, it might be surprising that our result does not imply equivalence when \(p\) is finite. However, for any \(p>1\) such equivalences are not expected, because a committee machine with two or more hidden units is not, in general, a linear classifier and can represent more complex non-linear relations. On the contrary, GEP-type results are expected to provide reductions towards (generalized) linear models.

Figure 1: Standard neural network architectures analyzed in the literature, classified according to how their layer widths scale (large diverging layers are depicted with many neurons, those which are much smaller or of fixed size have one or two neurons), and whether the internal weights are trainable (red edges) or completely fixed and not trained (blue edges). **(1)** represents a _generalized linear model_ (also called _perceptron_), **(2)** corresponds to _committee machines_, with large input size and small (finite) hidden layer; **(3)** and **(4)** are examples of the _mean-field_ regime, where the input size is small while the hidden layer is large; in particular model **(4)** corresponds to the _random feature model_; **(5)** and **(6)** represent the challenging _linear-width_ regimes. Our results cover, e.g., models belonging to the regimes **(6)**, and **(3)** when \(p\gg d\gg n\gg 1\).

**Mean-field regime.** In the series of works [37, 42, 43, 44, 45, 46, 47, 48, 49, 38, 39, 40, 41] the authors study the stochastic gradient descent (SGD) dynamics of multilayer neural networks. In contrast with the committee machine, here the hidden layer can also diverge in size (see [47]). This projection of a relatively low-dimensional signal into a very high-dimensional space has a regularizing effect on the risk landscape. In particular, it causes the merging of the possibly multiple minima of the finite-\(p\) risk. SGD is then able to reach a near optimum with controllable guarantees. However, mean-field analyses of SGD dynamics differ from the information-theoretical one. Indeed, SGD produces a "one-shot estimator", which is in general outperformed by the Bayes-optimal one. Also, online learning is generally considered, while our analysis considers (optimal) learning from a large fixed data set. Furthermore, for the information-theoretical equivalence in Theorem 1 to be valid, we need to control the size of the training set w.r.t. the network size, and in principle we can send both \(d,p\to\infty\) with \(d/p\) finite, as long as the training set is not too big (\(n\ll p\)).

**Frozen hidden weights: neural tangent kernel, random features and lazy training.** The neural tangent kernel (NTK) [50] is a linearization of, say, a two-layer neural network, which reduces its training to a linear regression on the readout weights. As specified in [47], the NTK describes well the neural network performance at the initial stage of learning using SGD, when the network weights are virtually frozen around their initialization. In a similar fashion, in random feature models and lazy training regimes [51, 61, 62, 63, 64, 65, 66, 67, 68, 52, 53, 16, 54, 55, 56, 57, 58, 59, 60] the internal weights of the network are quenched, i.e., fixed once and for all.
Other results [69, 70, 71, 72, 73, 74] show that large-width NNs with purely random weights behave as Gaussian processes. Finally, recent works based on random matrix theory consider linear-width NNs, but again under the assumption that only the last layer is learned, while the internal ones are random [75, 76, 77, 78, 79, 80]. Even though some of the above results, and more recently [19], extend to extensive-width input and hidden layers, as well as an extensive number of data, i.e., \(d,p,n\to\infty\) with \(d,p,n\) all proportional, they hold in a setting that is fundamentally different from ours. In fact, we address the learning of _all_ the parameters in the network, considering all of them as _annealed_ variables, from a statistical mechanics perspective. Moreover, it is worth stressing again that we study the Bayes-optimal generalization error, and not the one coming from ERM. In this regard, it was shown in [81] that ERM with hinge or logistic losses can reach generalization errors that are close to Bayes-optimal in GLMs. In addition, as long as a suitable (though not convex) loss is taken into account, ERM can yield Bayes-optimal performances. However, this holds for GLMs, which have no hidden layer and thus only \(p\) parameters to be learnt.

**Linear-width regimes.** A line of recent works [10, 25, 27] deals with the full training of the network, as we do here. [10] in particular carries out a thorough study for linear neural networks. In [27], instead, the authors conjecture the Bayes-optimal limits in the extensive width and data regime, \(d,p,n\to\infty\) all proportionally. Their computations are based on a combination of the heuristic replica method from spin glasses and a Gaussian equivalence principle, which allows one to treat the non-linear activations in an efficient way. Although GEPs have been proved rigorously in other contexts (for instance [18, 19]), it is not obvious that they are directly applicable to the extensive width and data regime when the full training of the network is carried out. Indeed, it is not clear to us whether our proof can be extended to the whole regime considered in [27]; in particular, we cannot assess whether the equivalence results provided in the next section hold in the fully proportional regime of [27], where \(d=\Theta(n)\) and large (this is allowed by our bounds) _and_ \(p=\Theta(n)\) (which is instead prevented by our bounds). Gaussian equivalence principles are also present in the random matrix theory literature [82, 83, 84, 75] and find applications in the study of random feature models.

**Estimation in multi-layer generalized linear models.** Finally, we emphasize the difference between the learning problem considered in our work and the inference problem discussed in [85], later extended in [86], where the authors consider the task of reconstructing a vector from observations obtained from a multi-layer GLM with fixed, known weight matrices. We believe that the proof of the concentration of the free entropy in [86], which was a bottleneck for GLM extensions to the multi-layer setting initiated in [85], can be adapted to our learning problem, yielding the Bayes-optimal generalization error GLM reduction for a deep network.

## 3 Proof of Theorem 1

### The interpolating model

Our proof is based on the interpolation method, introduced in the seminal papers [87, 88]. This method is a very effective tool whenever a comparison between two high-dimensional models is needed.
The idea is to introduce an interpolating model, for any \(t\in[0,1]\), which at its endpoints \(t=0\) and \(t=1\) matches the two models to be compared. In analogy with [8], we shall interpolate at the level of the variables \(s_{\mu}\), \(S_{\mu}\): \[S_{t\mu}:=\sqrt{1-t}\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}+\sqrt{t}\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{t\epsilon}\,\xi_{\mu}^{*}\,, \tag{34}\] \[s_{t\mu}:=\sqrt{1-t}\frac{\mathbf{a}^{\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}+\sqrt{t}\rho\frac{\mathbf{v}^{\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{t\epsilon}\,\xi_{\mu}\,. \tag{35}\] We thus introduce an interpolating teacher and an interpolating student, such that the latter is Bayes-optimal for any \(t\in[0,1]\). This allows us to use the so-called Nishimori identities of Appendix A uniformly in \(t\). The interpolating teacher network produces the \(\mu=1,\ldots,n\) conditionally independent responses \[Y_{t\mu}\sim P_{\text{out}}(\cdot\mid S_{t\mu})\,, \tag{36}\] where the output kernel is unchanged since it depends only on \(f\). Posterior means now read \[\langle g\rangle_{t}:=\frac{1}{\mathcal{Z}_{t}}\int D\mathbf{a}D\mathbf{W}D\mathbf{v}D\boldsymbol{\xi}\exp\Big{[}\sum_{\mu=1}^{n}u_{Y_{t\mu}}(s_{t\mu})\Big{]}g \tag{37}\] for any observable \(g\) depending on \(\mathbf{a},\mathbf{W},\mathbf{v},\boldsymbol{\xi}\), where \[\mathcal{Z}_{t}=\int D\mathbf{a}D\mathbf{W}D\mathbf{v}D\boldsymbol{\xi}\exp\Big{[}\sum_{\mu=1}^{n}u_{Y_{t\mu}}(s_{t\mu})\Big{]}\,. \tag{38}\] In the following we drop the subscript \(t\) to keep the notation light and simply use \(\langle\cdot\rangle\). We also introduce the compact notation \[\mathbb{E}_{(t)}(\cdot):=\mathbb{E}_{\mathbf{a}^{*}}\mathbb{E}_{\setminus\mathbf{a}^{*}}(\cdot)=\mathbb{E}_{\mathbf{a}^{*}}\mathbb{E}_{\mathbf{W}^{*},\mathbf{v}^{*},\boldsymbol{\xi}^{*},\{\mathbf{X}_{\mu}\}}\int\prod_{\mu=1}^{n}dY_{t\mu}e^{u_{Y_{t\mu}}(S_{t\mu})}(\cdot)\,. \tag{39}\] The free entropy of this interpolating model is thus \[\bar{f}_{n}(t):=\frac{1}{n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\,, \tag{40}\] whence it can be verified that \[\bar{f}_{n}(0)=\bar{f}_{n}\,,\quad\bar{f}_{n}(1)=\bar{f}_{n}^{\circ}\,. \tag{41}\] Due to the identity \(\bar{f}_{n}(1)-\bar{f}_{n}(0)=\int_{0}^{1}\frac{d}{dt}\bar{f}_{n}(t)dt\), a sufficient condition to prove Theorem 1 is to show that \(\frac{d}{dt}\bar{f}_{n}(t)\) is uniformly bounded by the same order as in the statement. A direct computation shows \[\frac{d}{dt}\bar{f}_{n}(t)=-A_{1}+A_{2}+A_{3}+B \tag{42}\] where \[A_{1}:=\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\frac{\mathbf{a}^{*\intercal}}{\sqrt{(1-t)p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}\,, \tag{43}\] \[A_{2}:=\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\rho\frac{\mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{td}}\,, \tag{44}\] \[A_{3}:=\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\sqrt{\frac{\epsilon}{t}}\xi_{\mu}^{*}\,, \tag{45}\] \[B:=\frac{1}{n}\mathbb{E}_{(t)}\Big{\langle}\sum_{\mu=1}^{n}u^{\prime}_{Y_{t\mu}}(s_{t\mu})\frac{ds_{t\mu}}{dt}\Big{\rangle}\,. \tag{46}\] We will control each term individually. For that we will need a number of Lemmas which we provide now.
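A quick sanity check of the interpolating pre-activations (34): averaging over \(\mathbf{a}^{*},\mathbf{W}^{*},\mathbf{v}^{*},\xi_{\mu}^{*}\) at fixed \(\mathbf{X}_{\mu}\) gives \(\mathrm{Var}(S_{t\mu})\approx(1-t)\,\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}+t(\rho^{2}+\epsilon)=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\) when \(\|\mathbf{X}_{\mu}\|^{2}/d\approx 1\), i.e., the interpolation keeps the second moment essentially constant in \(t\), in line with the Gaussian equivalence intuition. The following Monte Carlo sketch (our own illustration, with \(\varphi=\tanh\) and small sizes) verifies this numerically, up to finite-size fluctuations.

```python
import numpy as np

rng = np.random.default_rng(2)
phi = np.tanh

# rho and eps from Theorem 1 (eq. (17)), for the illustrative phi = tanh.
z = rng.standard_normal(10**6)
rho = np.mean(1.0 - np.tanh(z) ** 2)
eps = np.mean(phi(z) ** 2) - rho ** 2

def var_S_t(t, d=100, p=200, n_mc=2000):
    """Monte Carlo estimate of Var(S_{t,mu}) in eq. (34) for one fixed input."""
    X = rng.standard_normal(d)               # a fixed input X_mu
    S = np.empty(n_mc)
    for k in range(n_mc):                    # fresh teacher variables each draw
        a = rng.standard_normal(p)
        W = rng.standard_normal((p, d))
        v = rng.standard_normal(d)
        xi = rng.standard_normal()
        S[k] = (np.sqrt(1 - t) * a @ phi(W @ X / np.sqrt(d)) / np.sqrt(p)
                + np.sqrt(t) * rho * v @ X / np.sqrt(d)
                + np.sqrt(t * eps) * xi)
    return S.var()

# Var(S_t) stays close to E[phi(Z)^2] (roughly 0.39 for tanh) for all t.
print([round(var_S_t(t), 3) for t in (0.0, 0.5, 1.0)])
```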
### Lemmata

We collect here some Lemmas that shall be used intensively in the following. For convenience we postpone the proofs to the Appendix.

**Lemma 4** (Properties of \(P_{\mathrm{out}}\)).: _Recall the definition \(u_{y}(x):=\log P_{\mathrm{out}}(y\mid x)\). We denote \(u^{\prime}_{y}(x):=\partial_{x}u_{y}(x)\). Furthermore, let_ \[U_{\mu\nu}:=\delta_{\mu\nu}u^{\prime\prime}_{Y_{t\mu}}(S_{t\mu})+u^{\prime}_{Y_{t\mu}}(S_{t\mu})u^{\prime}_{Y_{t\nu}}(S_{t\nu})\,. \tag{47}\] _Under Assumptions (A1) and (A2) the following statements hold:_ \[\mathbb{E}[u^{\prime}_{Y_{t\mu}}(S_{t\mu})\mid S_{t\mu}]=\mathbb{E}[U_{\mu\nu}\mid S_{t\mu},S_{t\nu}]=0\,, \tag{48}\] \[\mathbb{E}[(u^{\prime}_{Y_{t\mu}}(S_{t\mu}))^{2}\mid S_{t\mu}]\,,\,\mathbb{E}[U^{2}_{\mu\nu}\mid S_{t\mu},S_{t\nu}]\leq C(f)\,, \tag{49}\] _for a positive constant \(C(f)\) depending solely on the readout function._

_Remark 1_.: It is worth pointing out that for \(\mu=\nu\) we have \(U_{\mu\mu}=P^{\prime\prime}_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})/P_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})\), where \[P^{\prime}_{\mathrm{out}}(y\mid x):=\partial_{x}P_{\mathrm{out}}(y\mid x)\,,\qquad P^{\prime\prime}_{\mathrm{out}}(y\mid x):=\partial_{x}\partial_{x}P_{\mathrm{out}}(y\mid x)\,,\] from which it immediately follows \[\mathbb{E}\Big{[}\Big{(}\frac{P^{\prime\prime}_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})}{P_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})}\Big{)}^{2}\mid S_{t\mu}\Big{]}\leq C(f)\,. \tag{50}\]

The following lemma will play a crucial role: it contains all the approximations due to the law of large numbers. We introduce here a convenient notation for the pre-activations: \[\mathbf{\alpha}_{\mu}:=\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\,. \tag{51}\] Hence, conditionally on the inputs \((\mathbf{X}_{\mu})_{\mu\leq n}\), the \(\mathbf{\alpha}\)'s have covariance \[\frac{1}{p}\mathbb{E}[\mathbf{\alpha}_{\mu}^{\intercal}\mathbf{\alpha}_{\nu}\mid\mathbf{X}_{\mu},\mathbf{X}_{\nu}]:=\frac{1}{p}\mathbb{E}_{\mathbf{W}^{*}}\frac{(\mathbf{W}^{*}\mathbf{X}_{\mu})^{\intercal}}{\sqrt{d}}\frac{\mathbf{W}^{*}\mathbf{X}_{\nu}}{\sqrt{d}}=\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\,. \tag{52}\]

**Lemma 5** (Approximations).: _Let \(\tilde{\varphi}\) be either \(\varphi\) or the identity function.
Under assumptions (A1) and (A2) the following estimates hold for any choice of \(\mu,\nu\leq n\):_ \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i})=\rho+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}\,, \tag{53}\] \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu i})=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}\,, \tag{54}\] \[\mathbb{E}_{\mathbf{W}^{*}}\varphi(\alpha_{\mu i})\tilde{\varphi}(\alpha_{\nu i})=\rho\mathbb{E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{\prime}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}\Big{)}+O\Big{(}\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)}^{2}\Big{)}+O\Big{(}\frac{(\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu})^{2}}{\|\mathbf{X}_{\nu}\|^{2}d}\Big{)}\,, \tag{55}\] \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i})\varphi^{\prime}(\alpha_{\nu i})=\rho^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)}\,, \tag{56}\] \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu i})\tilde{\varphi}^{2}(\alpha_{\nu i})=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\mathbb{E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)}\,. \tag{57}\]

The final key ingredient is the concentration of the free entropy, which we prove in Section 4.

**Theorem 6** (Free entropy concentration).: _Under assumptions (A1) and (A2) there exists a non-negative constant \(C(f,\varphi)\) such that_ \[\mathbb{E}_{\mathbf{a}^{*}}\mathbb{V}_{\backslash\mathbf{a}^{*}}\Big{(}\frac{1}{n}\log\mathcal{Z}_{t}\Big{)}=\mathbb{E}\Big{(}\frac{1}{n}\log\mathcal{Z}_{t}-\mathbb{E}_{\backslash\mathbf{a}^{*}}\frac{1}{n}\log\mathcal{Z}_{t}\Big{)}^{2}\leq C(f,\varphi)\Big{(}\frac{1}{d}+\frac{1}{n}\Big{)}\,. \tag{58}\]

### Proof of Theorem 1

We split the proof of Theorem 1 into different Lemmas for the sake of readability. Unless otherwise specified, all the following Lemmas hold under the same hypotheses as Theorem 1. The first one concerns the \(B\) contribution to the derivative of the free entropy (42).

**Lemma 7** (\(B\) term).: \(B=0\)_._

Proof.: The random variable inside the brackets in (46) is a function of the data \(Y_{t\mu}\) and of a sample from the posterior through \(s_{t\mu}\). Hence we can use the Nishimori identities to get rid of the brackets, replacing \(s_{t\mu}\) with the ground truth version \(S_{t\mu}\) (from now on we denote with an upper dot \(\dot{S}:=\frac{dS}{dt}\) the \(t\)-derivative): \[B=\frac{1}{n}\mathbb{E}_{(t)}\sum_{\mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\dot{S}_{t\mu}=\frac{1}{n}\sum_{\mu=1}^{n}\mathbb{E}_{(t)}\Big{[}\mathbb{E}_{(t)}[u^{\prime}_{Y_{t\mu}}(S_{t\mu})\mid S_{t\mu}]\dot{S}_{t\mu}\Big{]} \tag{59}\] where we used the tower rule for expectations. The latter is identically zero thanks to Lemma 4.
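The Nishimori identities used here allow one to exchange a posterior sample with the ground truth inside quenched averages in the Bayes-optimal setting. As a toy illustration (our own minimal example, not taken from the paper), consider the scalar Gaussian channel \(Y=\theta^{*}+\sqrt{\Delta}Z\) with \(\theta^{*},Z\sim\mathcal{N}(0,1)\): the posterior mean is \(\langle\theta\rangle=Y/(1+\Delta)\), and one checks \(\mathbb{E}[\theta^{*}\langle\theta\rangle]=\mathbb{E}[\langle\theta\rangle^{2}]=1/(1+\Delta)\).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scalar Gaussian channel: theta* ~ N(0,1), Y = theta* + sqrt(Delta) Z.
# Bayes posterior mean: <theta> = Y / (1 + Delta). Nishimori identity:
# E[theta* <theta>] = E[<theta>^2]  (ground truth <-> posterior sample).
Delta = 0.5
n_mc = 10**6
theta_star = rng.standard_normal(n_mc)
Y = theta_star + np.sqrt(Delta) * rng.standard_normal(n_mc)
post_mean = Y / (1 + Delta)

lhs = np.mean(theta_star * post_mean)   # E[theta* <theta>]
rhs = np.mean(post_mean ** 2)           # E[<theta>^2]
print(lhs, rhs, 1 / (1 + Delta))        # all three agree (= 2/3 here)
```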
We split \(A_{1}\) into two other contributions \(A_{1}=A_{11}+A_{12}\) where \[A_{11} :=\frac{1}{2n\sqrt{1-t}}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{ \mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\Big{(}\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{ \sqrt{d}}\Big{)}-\frac{\rho\mathbf{a}^{*\intercal}\mathbf{W}^{*}\mathbf{X}_{ \mu}}{\sqrt{pd}}\Big{)}\,, \tag{60}\] \[A_{12} :=\frac{1}{2n\sqrt{1-t}}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{ \mu=1}^{n}u^{\prime}_{Y_{t\mu}}(S_{t\mu})\frac{\rho\mathbf{a}^{*\intercal} \mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{pd}}\,. \tag{61}\] Let us simplify these terms by Gaussian integration by parts. In \(A_{12}\), integrating by parts w.r.t. \(\mathbf{W}^{*}\) yields \[A_{12}=\frac{\rho}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{ \mu,\nu=1}^{n}U_{\mu\nu}\frac{\mathbf{a}^{*\intercal}\big{(}\mathbf{a}^{*} \circ\varphi^{\prime}\big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}} \big{)}\big{)}}{p}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d} \tag{62}\] with \(U_{\mu\nu}\) defined in Lemma 4, and where \(\circ\) denotes the entry-wise (Hadamard) product. Concerning \(A_{11}\), because of the non-linearity we can only integrate by parts w.r.t. \(\mathbf{a}^{*}\), and we obtain \[A_{11}=\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu, \nu=1}^{n}U_{\mu\nu}\Big{[}\frac{\varphi(\boldsymbol{\alpha}_{\mu})^{\intercal} \varphi(\boldsymbol{\alpha}_{\nu})-\rho\boldsymbol{\alpha}_{\mu}^{\intercal} \varphi(\boldsymbol{\alpha}_{\nu})}{p}\Big{]}\,. \tag{63}\] The off-diagonal (\(\mu\neq\nu\)) and diagonal terms in the previous equations play two very different roles, and they shall be treated separately in the following. **Lemma 8** (Off-diagonal part of \(A_{11}\)).: _The following holds:_ \[A_{11}^{\rm off}:=\frac{1}{n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t }\sum_{\mu\neq\nu}U_{\mu\nu}\Big{[}\frac{\varphi(\boldsymbol{\alpha}_{\mu})^{ \intercal}\varphi(\boldsymbol{\alpha}_{\nu})-\rho\boldsymbol{\alpha}_{\mu}^{ \intercal}\varphi(\boldsymbol{\alpha}_{\nu})}{p}\Big{]}=O\Big{(}\sqrt{\Big{(}1 +\frac{n}{d}\Big{)}\Big{(}\frac{n}{p}+\frac{n}{d^{3/2}}\Big{)}}\Big{)}\,. \tag{64}\] Proof.: Let us start by noticing that, for any smooth function \(F(\boldsymbol{\alpha}_{\mu},\boldsymbol{\alpha}_{\nu})\), we have \[\mathbb{E}_{\setminus\mathbf{a}^{*}}U_{\mu\nu}F(\boldsymbol{\alpha }_{\mu},\boldsymbol{\alpha}_{\nu})=\mathbb{E}_{\setminus\mathbf{a}^{*}} \big{[}\mathbb{E}_{\setminus\mathbf{a}^{*}}[U_{\mu\nu}\mid\mathbf{W}^{*}, \mathbf{v}^{*},\boldsymbol{\xi}^{*},\mathbf{X}]F(\boldsymbol{\alpha}_{\mu}, \boldsymbol{\alpha}_{\nu})\big{]}=0 \tag{65}\] thanks to Lemma 4. As a consequence, with \(\mathbf{a}^{*}\) fixed, we can modify \(A_{11}^{\rm off}\) as follows without changing its value: \[A_{11}^{\rm off}=\mathbb{E}_{\setminus\mathbf{a}^{*}}(f_{n }-\mathbb{E}_{\setminus\mathbf{a}^{*}}f_{n})\sum_{\mu\neq\nu}U_{\mu\nu} \Big{[}\frac{\varphi(\boldsymbol{\alpha}_{\mu})^{\intercal}\varphi(\boldsymbol{ \alpha}_{\nu})-\rho\boldsymbol{\alpha}_{\mu}^{\intercal}\varphi(\boldsymbol{ \alpha}_{\nu})}{p}\Big{]} \tag{66}\] with \(f_{n}:=\log\mathcal{Z}_{t}/n\). In the following we shall simply write \(u^{\prime}_{\mu}\) in place of \(u^{\prime}_{Y_{t\mu}}(S_{t\mu})\), and \(\boldsymbol{\varphi}_{\mu}\) instead of \(\varphi(\boldsymbol{\alpha}_{\mu})\), for brevity. 
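Before proceeding, it may help to visualize numerically the cancellation between \(\boldsymbol{\varphi}_{\mu}^{\intercal}\boldsymbol{\varphi}_{\nu}\) and \(\rho\boldsymbol{\alpha}_{\mu}^{\intercal}\boldsymbol{\varphi}_{\nu}\) that this proof exploits. In the sketch below (sizes and activation are our own toy choices) we sample the pair \((\alpha_{\mu i},\alpha_{\nu i})\) directly from the Gaussian law with covariance (52), which is equivalent to sampling \(\mathbf{W}^{*}\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_mc = 400, 10**7                     # toy input dimension and Monte Carlo samples

phi = np.tanh                            # odd Lipschitz activation, compatible with (A2)
rho = np.mean(1 - np.tanh(rng.standard_normal(10**6))**2)   # rho = E_{N(0,1)} phi'

X_mu, X_nu = rng.standard_normal(d), rng.standard_normal(d)
cov = np.array([[X_mu @ X_mu, X_mu @ X_nu],
                [X_mu @ X_nu, X_nu @ X_nu]]) / d            # covariance (52)
a_mu, a_nu = rng.multivariate_normal(np.zeros(2), cov, n_mc).T

t1 = np.mean(phi(a_mu)*phi(a_nu))        # per-coordinate E_W [phi phi], ~ rho^2 X_mu.X_nu/d
t2 = rho*np.mean(a_mu*phi(a_nu))         # per-coordinate rho E_W [alpha phi], same leading term

print(f"{t1:+.5f}  {t2:+.5f}  diff {t1-t2:+.2e}")  # difference typically an order smaller
```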
We are now in a position to use the Cauchy-Schwarz inequality: \[(A_{11}^{\rm off})^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*} }[f_{n}]\sum_{\mu\neq\nu}\sum_{\lambda\neq\eta}\mathbb{E}_{\setminus \mathbf{a}^{*}}U_{\mu\nu}U_{\lambda\eta}\Big{[}\frac{\boldsymbol{\varphi} _{\mu}^{\intercal}\boldsymbol{\varphi}_{\nu}-\rho\boldsymbol{\alpha}_{\mu}^{ \intercal}\boldsymbol{\varphi}_{\nu}}{p}\Big{]}\Big{[}\frac{\boldsymbol{\varphi} _{\lambda}^{\intercal}\boldsymbol{\varphi}_{\eta}-\rho\boldsymbol{\alpha}_{ \lambda}^{\intercal}\boldsymbol{\varphi}_{\eta}}{p}\Big{]}\,. \tag{67}\] Note that when all four Greek indices are different from one another we get the highest combinatorial factor of \(O(n^{4})\). However, using the conditional independence of the responses and Lemma 4, the expectation sets them to \(0\). Hence, the only contributions from the double sums come from \(\mu=\lambda\) and \(\nu=\eta\), or \(\mu=\eta\) and \(\nu=\lambda\), which yield twice the same quantity. Thus \[(A_{11}^{\rm off})^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}} [f_{n}]\frac{2}{p^{2}}\sum_{\mu\neq\nu}\mathbb{E}_{\setminus\mathbf{a}^{*}}(u^{\prime}_{\mu}u^{\prime}_{\nu})^{2}\sum_{i,j=1}^{p}\big{[}\varphi_{ \mu i}\varphi_{\nu i}\varphi_{\mu j}\varphi_{\nu j}-2\rho\alpha_{\mu i}\varphi_{ \nu i}\varphi_{\mu j}\varphi_{\nu j}+\rho^{2}\alpha_{\mu i}\varphi_{\nu i} \alpha_{\mu j}\varphi_{\nu j}\big{]}\,. \tag{68}\] The double sum over \(i,j\) comes from the square of a scalar product. Lemma 4 then allows us to bound \(\mathbb{E}[U_{\mu\nu}^{2}\mid S_{t\mu},S_{t\nu}]\) by a constant. Let us treat the off-diagonal terms (\(i\neq j\)) first. We invoke Lemma 5, in particular (55), to simplify the first term in (68): \[\mathbb{E}_{\setminus\mathbf{a}^{*}}\sum_{i\neq j}^{p}\varphi( \alpha_{\mu i})\varphi(\alpha_{\nu i})\varphi(\alpha_{\mu j})\varphi(\alpha_{\nu j})=p(p-1)\mathbb{E} _{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\big{(}\mathbb{E}_{\mathbf{W}^{*}}[ \varphi(\alpha_{\mu 1})\varphi(\alpha_{\nu 1})]\big{)}^{2}\] \[=p(p-1)\mathbb{E}\Big{[}\rho^{4}\Big{(}\frac{\mathbf{X}_{\mu}^{ \intercal}\mathbf{X}_{\nu}}{d}\Big{)}^{2}+O\Big{(}\frac{\mathbf{X}_{\mu}^{ \intercal}\mathbf{X}_{\nu}}{d}\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal} \mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)}^{2}\Big{)}+O\Big{(}\Big{(} \frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{)}^{2}\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)} +O\Big{(}\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{)} ^{2}\Big{(}\frac{\|\mathbf{X}_{\nu}\|^{2}}{d}-1\Big{)}\Big{)}\Big{]}\,. \tag{69}\] The first term in the square brackets corresponds to the square of the leading term in (55) with \(\tilde{\varphi}=\varphi\). The other terms are obtained as cross products between the leading term in (55) and the remainders. Exploiting the fact that the norm of a Gaussian vector concentrates at exponential speed, i.e., \[\mathbb{P}\Big{(}\Big{|}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{|}\geq h \Big{)}\leq\exp\Big{(}-\frac{dLh^{2}}{2}\Big{)}\,,\quad L>0\,,\forall h>0\,, \tag{70}\] one can conclude that \[\mathbb{E}_{\setminus\mathbf{a}^{*}}\sum_{i\neq j}^{p}\varphi( \alpha_{\mu i})\varphi(\alpha_{\nu i})\varphi(\alpha_{\mu j})\varphi(\alpha_{ \nu j})=p(p-1)\Big{[}\frac{\rho^{4}}{d}+O\Big{(}\frac{1}{d^{3/2}}\Big{)}\Big{]}\,. 
\tag{71}\] We now turn to the second term of (68): using again a Gaussian integration by parts for the second equality below, followed by Lemma 5, we get \[\rho\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\mathbb{E}_{ \mathbf{W}^{*}} \sum_{i\neq j}^{p}\alpha_{\mu i}\varphi_{\nu i}\varphi_{\mu j} \varphi_{\nu j}=\rho p(p-1)\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}} \mathbb{E}_{\mathbf{W}^{*}}[\alpha_{\mu 1}\varphi_{\nu 1}]\mathbb{E}_{\mathbf{W}^{*}}[ \varphi_{\mu 1}\varphi_{\nu 1}]\] \[=p(p-1)\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\Big{[}\Big{(}\rho^ {2}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}+O\Big{(}\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{(}\frac{\|\mathbf{X}_{\mu }\|^{2}}{d}-1\Big{)}\Big{)}\Big{)}\mathbb{E}_{\mathbf{W}^{*}}[\varphi_{\mu 1}\varphi_{\nu 1}]\Big{]} \tag{72}\] which, using (55) on the remaining factor, shows that \[\rho\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\mathbb{E}_{ \mathbf{W}^{*}}\sum_{i\neq j}^{p}\alpha_{\mu i}\varphi_{\nu i}\varphi_{\mu j} \varphi_{\nu j}=p(p-1)\Big{[}\frac{\rho^{4}}{d}+O\Big{(}\frac{1}{d^{3/2}}\Big{)} \Big{]}\,. \tag{73}\] Finally, still concerning the off-diagonal terms \(i\neq j\), we deal with the last term of (68): \[\rho^{2}\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\mathbb{E}_{\mathbf{W}^{*}}\sum_{i\neq j}^{p}\alpha_{\mu i}\varphi_{\nu i}\alpha_{\mu j}\varphi_{\nu j} =\rho^{2}p(p-1)\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}} \mathbb{E}_{\mathbf{W}^{*}}[\alpha_{\mu 1}\varphi_{\nu 1}]\mathbb{E}_{\mathbf{W}^{*}}[ \alpha_{\mu 1}\varphi_{\nu 1}]\] \[=p(p-1)\rho^{2}\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}} \Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{)}^{2} \big{(}\mathbb{E}_{\mathbf{W}^{*}}\varphi_{\nu 1}^{\prime}\big{)}^{2}\] \[=p(p-1)\Big{[}\frac{\rho^{4}}{d}+O\Big{(}\frac{1}{d^{3/2}}\Big{)} \Big{]} \tag{74}\] where we used integration by parts and the approximation Lemma 5. From this computation we see that, remarkably, the leading orders of the off-diagonal terms \(i\neq j\) in (68) cancel each other, leaving the more convenient rate \(O(1/d^{3/2})\). More precisely, there exists an absolute constant \(K\) such that \[(A_{11}^{\rm off})^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]\frac{2K} {p^{2}}\sum_{\mu\neq\nu}\mathbb{E}_{\setminus\mathbf{a}^{*}}\Big{\{}\sum_{i=1 }^{p}\big{[}\varphi_{\mu i}^{2}\varphi_{\nu i}^{2}-2\rho\alpha_{\mu i}\varphi_{ \nu i}^{2}\varphi_{\mu i}+\rho^{2}\alpha_{\mu i}^{2}\varphi_{\nu i}^{2}\big{]}+O \Big{(}\frac{p^{2}}{d^{3/2}}\Big{)}\Big{\}}\,. \tag{75}\] From the previous bound we see that we cannot hope for the same cancellation in the diagonal terms \(i=j\). Using again the results from Lemma 5 one can show that \[(A_{11}^{\rm off})^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]\frac{2K }{p^{2}}\sum_{\mu\neq\nu}\Big{[}O(p)+O\Big{(}\frac{p^{2}}{d^{3/2}}\Big{)} \Big{]}\,. \tag{76}\] The statement is thus proved once we take care of the remaining expectation over \(\mathbf{a}^{*}\) using Theorem 6: \[|\mathbb{E}_{\mathbf{a}^{*}}A_{11}^{\rm off}|\leq\mathbb{E}_{\mathbf{a}^{*}} \sqrt{(A_{11}^{\rm off})^{2}}\leq\sqrt{\mathbb{E}_{\mathbf{a}^{*}}\mathbb{V}_ {\setminus\mathbf{a}^{*}}[f_{n}]O\Big{(}\frac{n^{2}}{p}+\frac{n^{2}}{d^{3/2}} \Big{)}}=O\Big{(}\sqrt{\frac{n}{p}+\frac{n}{d^{3/2}}+\frac{n^{2}}{dp}+\frac{n^ {2}}{d^{5/2}}}\Big{)}\,. \tag{77}\] _Remark 2_.: The previous result tells us that the number of data points \(n\) can grow as fast as \(o(d^{3/2})\) with the size of the input layer, but has to be much smaller than \(p\), the size of the hidden layer. 
Furthermore, treating the difference \(\varphi(\boldsymbol{\alpha}_{\mu})^{\intercal}\varphi(\boldsymbol{\alpha}_{ \nu})-\rho\boldsymbol{\alpha}_{\mu}^{\intercal}\varphi(\boldsymbol{\alpha}_{ \nu})\) as a whole is fundamental to obtaining the scaling \(n^{2}d^{-3/2}\) instead of \(n^{2}d^{-1}\). We also stress that the pre-activations \(\boldsymbol{\alpha}_{\mu}\) in the hidden layer have correlations among them that scale as \(d^{-1/2}\); if \(d\) is not big enough they cannot be considered weakly correlated. From Lemma 8 we thus infer that \[A_{11}=\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1}^{n}\frac{P _{\rm out}^{\prime\prime}(Y_{t\mu}\mid S_{t\mu})}{P_{\rm out}(Y_{t\mu}\mid S_ {t\mu})}\Big{[}\frac{\|\varphi(\boldsymbol{\alpha}_{\mu})\|^{2}-\rho \boldsymbol{\alpha}_{\mu}^{\intercal}\varphi(\boldsymbol{\alpha}_{\mu})}{p} \Big{]}+O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n}{p}+\frac{n}{ d^{3/2}}\Big{)}}\Big{)}\,. \tag{78}\] For the term \(A_{3}\) we use integration by parts with respect to the Gaussian variables \(\xi_{\mu}^{*}\): \[A_{3}=\frac{\epsilon}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1}^{n} \Big{(}(u^{\prime}_{Y_{t\mu}}(S_{t\mu}))^{2}+u^{\prime\prime}_{Y_{t\mu}}(S_{t \mu})\Big{)}=\frac{\epsilon}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu=1 }^{n}\frac{P_{\rm out}^{\prime\prime}(Y_{t\mu}\mid S_{t\mu})}{P_{\rm out}(Y_{t \mu}\mid S_{t\mu})}\,. \tag{79}\] Hence \[A_{3}-A_{11}\,=\,\frac{1}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t} \sum_{\mu=1}^{n}\frac{P_{\rm out}^{\prime\prime}(Y_{t\mu}\mid S_{t\mu})}{P_{ \rm out}(Y_{t\mu}\mid S_{t\mu})}\Big{[}\epsilon\,-\,\frac{\|\varphi( \boldsymbol{\alpha}_{\mu})\|^{2}-\rho\boldsymbol{\alpha}_{\mu}^{\intercal} \varphi(\boldsymbol{\alpha}_{\mu})}{p}\Big{]}+O\Big{(}\sqrt{\Big{(}1+\frac{n}{ d}\Big{)}\Big{(}\frac{n}{p}+\frac{n}{d^{3/2}}\Big{)}}\Big{)}\,. \tag{80}\] **Lemma 9** (\(A_{3}-A_{11}^{\rm diag}\) term).: _The following asymptotic estimate holds:_ \[A_{3}-A_{11}^{\rm diag}:=\frac{1}{2n}\mathbb{E}_{(t)}\log \mathcal{Z}_{t}\sum_{\mu=1}^{n}\frac{P_{\rm out}^{\prime\prime}(Y_{t\mu}\mid S _{t\mu})}{P_{\rm out}(Y_{t\mu}\mid S_{t\mu})}\Big{[}\epsilon-\frac{\|\varphi( \boldsymbol{\alpha}_{\mu})\|^{2}-\rho\boldsymbol{\alpha}_{\mu}^{\intercal} \varphi(\boldsymbol{\alpha}_{\mu})}{p}\Big{]}=O\Big{(}\sqrt{\Big{(}\frac{n}{d}+ 1\Big{)}\Big{(}\frac{1}{p}+\frac{1}{\sqrt{d}}\Big{)}}\Big{)}\,. \tag{81}\] Proof.: Define \[C:=\frac{1}{n}\mathbb{E}_{\setminus\mathbf{a}^{*}}\log\mathcal{Z}_{t}\sum_{\mu=1 }^{n}\frac{P_{\mathrm{out}}^{\prime\prime}(Y_{t\mu}\mid S_{t\mu})}{P_{\mathrm{ out}}(Y_{t\mu}\mid S_{t\mu})}\Big{[}\epsilon-\frac{\|\varphi(\mathbf{\alpha}_{\mu})\|^{2}- \rho\mathbf{\alpha}_{\mu}^{\mathsf{T}}\varphi(\mathbf{\alpha}_{\mu})}{p}\Big{]}\,. \tag{82}\] First, thanks to Lemma 4, \[\mathbb{E}_{\setminus\mathbf{a}^{*}}\Big{[}\frac{P_{\mathrm{out}}^{\prime \prime}(Y_{t\mu}\mid S_{t\mu})}{P_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})}\mid \mathbf{W}^{*},\mathbf{v}^{*},\mathbf{X}_{\mu},\xi_{\mu}^{*}\Big{]}=0\,, \tag{83}\] and this allows us to center \(f_{n}=\log\mathcal{Z}_{t}/n\) around its mean without changing the value of \(C\). 
After using Cauchy-Schwarz we have \[\begin{split} C^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{ n}]\sum_{\mu,\nu=1}^{n}\mathbb{E}_{\setminus\mathbf{a}^{*}}\Big{[}\mathbb{E}_{ \setminus\mathbf{a}^{*}}\Big{[}\frac{P_{\mathrm{out}}^{\prime\prime}(Y_{t\mu} \mid S_{t\mu})}{P_{\mathrm{out}}(Y_{t\mu}\mid S_{t\mu})}\frac{P_{\mathrm{ out}}^{\prime\prime}(Y_{t\nu}\mid S_{t\nu})}{P_{\mathrm{out}}(Y_{t\nu}\mid S_{t \nu})}\mid\mathbf{W}^{*},\mathbf{v}^{*},\mathbf{X}_{\mu},\mathbf{X}_{\nu}, \xi_{\mu}^{*},\xi_{\nu}^{*}\Big{]}\\ \times\Big{(}\epsilon-\frac{\|\varphi(\mathbf{\alpha}_{\mu})\|^{2}- \rho\mathbf{\alpha}_{\mu}^{\mathsf{T}}\varphi(\mathbf{\alpha}_{\mu})}{p}\Big{)}\Big{(} \epsilon-\frac{\|\varphi(\mathbf{\alpha}_{\nu})\|^{2}-\rho\mathbf{\alpha}_{\nu}^{ \mathsf{T}}\varphi(\mathbf{\alpha}_{\nu})}{p}\Big{)}\Big{]}\,.\end{split} \tag{84}\] Thanks to the observation in (83), only the diagonal terms \(\mu=\nu\) survive in the double sum on the r.h.s. of the previous inequality. Furthermore, recall (50). Hence the bound on \(C^{2}\) becomes \[C^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]C(f)n\mathbb{E}_{ \mathbf{X}_{1},\mathbf{W}^{*}}\Big{(}\epsilon-\frac{\|\varphi(\mathbf{\alpha}_{1 })\|^{2}-\rho\mathbf{\alpha}_{1}^{\mathsf{T}}\varphi(\mathbf{\alpha}_{1})}{p}\Big{)}^ {2}\,. \tag{85}\] Following an integration by parts and Lemma 5 we have \[\begin{split}\mathbb{E}_{\mathbf{W}^{*}}\alpha_{1i}\varphi(\alpha _{1i})&=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{1i}) \frac{\|\mathbf{X}_{1}\|^{2}}{d}=\frac{\|\mathbf{X}_{1}\|^{2}}{d}\Big{(} \mathbb{E}_{\mathcal{N}(0,1)}\varphi^{\prime}+O\Big{(}\frac{\|\mathbf{X}_{1} \|^{2}}{d}-1\Big{)}\Big{)}\\ &=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{\prime}+O\Big{(}\frac{\| \mathbf{X}_{1}\|^{2}}{d}-1\Big{)}\,.\end{split} \tag{86}\] From this and the approximation Lemma 5 it follows that (letting \(\mathbb{E}^{2}(\cdots)=(\mathbb{E}(\cdots))^{2}\)) \[\mathbb{E}_{\mathbf{X}_{1}}\mathbb{E}_{\mathbf{W}^{*}}\Big{[} \frac{\|\varphi(\mathbf{\alpha}_{1})\|^{2}-\rho\mathbf{\alpha}_{1}^{\mathsf{T}} \varphi(\mathbf{\alpha}_{1})}{p}\Big{]} =\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}-\mathbb{E}_{\mathcal{N}( 0,1)}^{2}\varphi^{\prime}+O\Big{(}\mathbb{E}_{\mathbf{X}_{1}}\Big{|}\frac{\| \mathbf{X}_{1}\|^{2}}{d}-1\Big{|}\Big{)}\] \[=\epsilon+O(d^{-1/2})\,, \tag{87}\] \[\mathbb{E}_{\mathbf{X}_{1}}\mathbb{E}_{\mathbf{W}^{*}}\Big{[} \frac{\|\varphi(\mathbf{\alpha}_{1})\|^{2}-\rho\mathbf{\alpha}_{1}^{\mathsf{T}} \varphi(\mathbf{\alpha}_{1})}{p}\Big{]}^{2} =\frac{1}{p}\mathbb{E}_{\mathbf{X}_{1}}\mathbb{E}_{\mathbf{W}^{*} }\big{(}\varphi^{4}(\alpha_{11})-2\rho\varphi^{3}(\alpha_{11})\alpha_{11}+ \rho^{2}\alpha_{11}^{2}\varphi^{2}(\alpha_{11})\big{)}\] \[+\frac{p-1}{p}\mathbb{E}_{\mathbf{X}_{1}}\big{[} \mathbb{E}_{\mathbf{W}^{*}}^{2}\varphi^{2}(\alpha_{11})-2\rho \mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{11})\mathbb{E}_{\mathbf{W}^{*}} \varphi(\alpha_{11})\alpha_{11}+\rho^{2}\mathbb{E}_{\mathbf{W}^{*}}^{2}\varphi( \alpha_{11})\alpha_{11}\big{]}\] \[=\epsilon^{2}+O(p^{-1})+O(d^{-1/2})\,. \tag{88}\] Hence, we finally have \[\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{W}^{*}}\Big{(}\epsilon-\frac{\|\varphi( \mathbf{\alpha}_{\mu})\|^{2}-\rho\mathbf{\alpha}_{\mu}^{\mathsf{T}}\varphi(\mathbf{\alpha}_ {\mu})}{p}\Big{)}^{2}=O(p^{-1})+O(d^{-1/2})\,. \tag{89}\] Plugging this and the bound in Theorem 6 into the inequality for \(C^{2}\) we readily get the statement. Now the remaining goal is to prove that \(A_{2}-A_{12}\to 0\). Using integration by parts in \(A_{2}\) w.r.t. 
\(\mathbf{v}^{*}\) we obtain \[A_{2}=\frac{\rho^{2}}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu,\nu=1}^{n}U _{\mu\nu}\frac{\mathbf{X}_{\mu}^{\mathsf{T}}\mathbf{X}_{\nu}}{d}\,. \tag{90}\] Recall also formula (62) for \(A_{12}\). **Lemma 10** (\(A_{12}-A_{2}\) term).: _The following asymptotic estimate holds:_ \[A_{12}-A_{2}=\frac{\rho}{2n}\mathbb{E}_{(t)}\log\mathcal{Z}_{t}\sum_{\mu,\nu=1}^ {n}U_{\mu\nu}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{[} \frac{\mathbf{a}^{*\intercal}(\mathbf{a}^{*}\circ\varphi^{\prime}(\mathbf{\alpha}_ {\nu}))}{p}-\rho\Big{]}=O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n }{dp}+\frac{n}{d^{3/2}}\Big{)}}\Big{)}\,. \tag{91}\] Proof.: Conditionally on \(\mathbf{a}^{*}\), define \[C:=\frac{1}{n}\mathbb{E}_{\setminus\mathbf{a}^{*}}\log\mathcal{Z}_{t}\sum_{ \mu,\nu=1}^{n}U_{\mu\nu}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{ d}\Big{[}\frac{\mathbf{a}^{*\intercal}(\mathbf{a}^{*}\circ\varphi^{\prime}(\mathbf{ \alpha}_{\nu}))}{p}-\rho\Big{]}\,. \tag{92}\] As before, thanks to Lemma 4, we can center the random variable \(\log\mathcal{Z}_{t}\) around its expectation \(\mathbb{E}_{\setminus\mathbf{a}^{*}}\log\mathcal{Z}_{t}\) without affecting \(C\). We can thus use the Cauchy-Schwarz inequality, obtaining \[C^{2}\leq\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]\mathbb{E}_{\setminus \mathbf{a}^{*}}\sum_{\mu,\nu=1}^{n}\sum_{\lambda,\eta=1}^{n}U_{\mu\nu}U_{ \lambda\eta}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{[} \frac{\mathbf{a}^{*\intercal}(\mathbf{a}^{*}\circ\varphi^{\prime}(\mathbf{\alpha} _{\nu}))}{p}-\rho\Big{]}\frac{\mathbf{X}_{\lambda}^{\intercal}\mathbf{X}_{ \eta}}{d}\Big{[}\frac{\mathbf{a}^{*\intercal}(\mathbf{a}^{*}\circ\varphi^{ \prime}(\mathbf{\alpha}_{\eta}))}{p}-\rho\Big{]} \tag{93}\] for a given \(\mathbf{a}^{*}\). Thanks again to Lemma 4, the only terms that survive in the above quadruple sum are those with \(\mu=\nu=\lambda=\eta\), and \(\mu\neq\nu\), \(\lambda\neq\eta\) but with \(\mu=\lambda\), \(\nu=\eta\) or vice versa. Up to constants, everything can be summed up as follows: \[C^{2}\leq K\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]\mathbb{E}_{\setminus \mathbf{a}^{*}}\sum_{\mu,\nu=1}^{n}\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal} \mathbf{X}_{\nu}}{d}\Big{)}^{2}\Big{[}\frac{\mathbf{a}^{*\intercal}(\mathbf{a }^{*}\circ\varphi^{\prime}(\mathbf{\alpha}_{\nu}))}{p}-\rho\Big{]}^{2} \tag{94}\] where we used again Lemma 4. Now, expanding the square and computing the \(\mathbf{W}^{*}\) average of \(\varphi^{\prime}\) via Lemma 5, we readily get \[C^{2}\leq K^{\prime}\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}]\sum_{ \mu,\nu=1}^{n}\mathbb{E}_{\mathbf{X}_{\mu},\mathbf{X}_{\nu}}\Big{(}\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}\Big{)}^{2}\Big{[}\Big{(} \frac{\|\mathbf{a}^{*}\|^{2}}{p}-1\Big{)}^{2}+O\Big{(}\frac{\|\mathbf{X}_{\nu} \|^{2}}{d}-1\Big{)}\Big{(}\frac{\|\mathbf{a}^{*}\|^{4}}{p^{2}}+\frac{\| \mathbf{a}^{*}\|^{2}}{p}\Big{)}\Big{]} \tag{95}\] where \(K^{\prime}\) is a suitable positive constant. Denoting the double sum by \(D\) we have \[|A_{2}-A_{12}|\leq K^{\prime\prime}\mathbb{E}_{\mathbf{a}^{*}}\sqrt{\mathbb{ V}_{\setminus\mathbf{a}^{*}}[f_{n}]}\sqrt{D}\leq K^{\prime\prime}\sqrt{ \mathbb{E}_{\mathbf{a}^{*}}\mathbb{V}_{\setminus\mathbf{a}^{*}}[f_{n}] \,\mathbb{E}_{\mathbf{a}^{*}}D}=O\Big{(}\sqrt{\Big{(}\frac{1}{n}+\frac{1}{d} \Big{)}\Big{(}\frac{n^{2}}{dp}+\frac{n^{2}}{d^{3/2}}\Big{)}}\Big{)}\,. 
\tag{96}\] Putting the results of all the lemmas in this section together, we get that the time derivative of the interpolating free entropy is bounded by \[\frac{d}{dt}\bar{f}_{n}(t) =\underbrace{O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(} \frac{n}{p}+\frac{n}{d^{3/2}}\Big{)}}\Big{)}}_{A_{11}^{\rm off}}+\underbrace {O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{1}{p}+\frac{1}{\sqrt{d} }\Big{)}}\Big{)}}_{A_{3}-A_{11}^{\text{diag}}}+\underbrace{O\Big{(}\sqrt{ \Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n}{dp}+\frac{n}{d^{3/2}}\Big{)}}\Big{)} }_{A_{12}-A_{2}} \tag{97}\] \[=O\Big{(}\sqrt{\Big{(}1+\frac{n}{d}\Big{)}\Big{(}\frac{n}{p}+\frac {n}{d^{3/2}}+\frac{1}{\sqrt{d}}\Big{)}}\Big{)}\,. \tag{98}\] All the bounds in this section are uniform in \(t\in[0,1]\). This finishes the proof of Theorem 1. ## 4 Concentration of the free entropy Here we prove that the free entropy of the interpolating model concentrates, i.e., Theorem 6. To simplify the notations we use \(C(f,\varphi)\) for a generic non-negative constant depending only on \(f\) and \(\varphi\). We recall that the partition function is defined as \[\mathcal{Z}_{t}=\int D\mathbf{a}D\mathbf{v}D\mathbf{W}\prod_{\mu=1}^{n}D \boldsymbol{\xi}_{\mu}\exp\Big{[}\sum_{\mu=1}^{n}\log P_{\text{out}}(Y_{t\mu}| s_{t\mu})\Big{]}\,, \tag{99}\] where \[Y_{t\mu} =f(S_{t\mu};\mathbf{A}_{\mu})+\sqrt{\Delta}Z_{\mu}\,, \tag{100}\] \[S_{t\mu} =\sqrt{1-t}\frac{\mathbf{a}^{*\intercal}}{\sqrt{p}}\varphi\Big{(} \frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}+\sqrt{t}\rho\frac{ \mathbf{v}^{*\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{t\epsilon}\xi_{\mu}^{*}\,,\] (101) \[s_{t\mu} =\sqrt{1-t}\frac{\mathbf{a}^{\intercal}}{\sqrt{p}}\varphi\Big{(} \frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}+\sqrt{t}\rho\frac{\mathbf{ v}^{\intercal}\mathbf{X}_{\mu}}{\sqrt{d}}+\sqrt{t\epsilon}\xi_{\mu}\,. \tag{102}\] We prove Theorem 6 in several steps: first we show concentration with respect only to \(\{Z_{\mu}\}_{\mu}\), \(\{\xi_{\mu}^{*}\}_{\mu}\) and \(\{\mathbf{X}_{\mu}\}_{\mu}\) using the classical Poincaré-Nash inequality, then with respect to \(\{\mathbf{A}_{\mu}\}_{\mu}\) using a corollary of the Efron-Stein inequality, and finally with respect to \(\mathbf{W}^{*}\) and \(\mathbf{v}^{*}\). For this we rewrite \[\mathbb{E}\Big{(}\frac{1}{n}\log\mathcal{Z}_{t}-\mathbb{E}_{ \mathbf{v}^{*},\mathbf{W}^{*},\mathbf{X},\boldsymbol{\xi}^{*},\mathbf{A}, \mathbf{Z}}\frac{1}{n}\log\mathcal{Z}_{t}\Big{)}^{2}=\\ =\mathbb{E}\Big{(}\frac{1}{n}\log\mathcal{Z}_{t}-\mathbb{E}_{ \mathbf{X},\boldsymbol{\xi}^{*},\mathbf{Z}}\frac{1}{n}\log\mathcal{Z}_{t} \Big{)}^{2}+\mathbb{E}\Big{(}\frac{1}{n}\mathbb{E}_{\mathbf{X},\boldsymbol{ \xi}^{*},\mathbf{Z}}\log\mathcal{Z}_{t}-\mathbb{E}_{\mathbf{A}}\mathbb{E}_{ \mathbf{X},\boldsymbol{\xi}^{*},\mathbf{Z}}\frac{1}{n}\log\mathcal{Z}_{t} \Big{)}^{2}+\\ +\mathbb{E}\Big{(}\mathbb{E}_{\psi}\frac{1}{n}\log\mathcal{Z}_{t }-\mathbb{E}_{\mathbf{v}^{*},\mathbf{W}^{*}}\mathbb{E}_{\psi}\frac{1}{n}\log \mathcal{Z}_{t}\Big{)}^{2} \tag{103}\] where by \(\mathbb{E}_{\psi}\) we denote the joint expectation with respect to \(\mathbf{Z}\), \(\boldsymbol{\xi}^{*}\), \(\mathbf{X}\), and \(\mathbf{A}\). Also for brevity, in what follows, by writing \(\mathbf{Z}\), \(\boldsymbol{\xi}^{*}\), \(\mathbf{X}\), and \(\mathbf{A}\) we mean the sets \(\{Z_{\mu}\}_{\mu}\), etc. We recall two classical concentration results, whose proofs can be found in [89], Chapter 3. 
**Proposition 11** (Poincaré-Nash inequality).: _Let \(\xi=[\xi_{1},\ldots,\xi_{K}]^{\intercal}\) be a real standard Gaussian random vector. If \(g:\mathbb{R}^{K}\to\mathbb{R}\) is a continuously differentiable function, then_ \[\mathbb{V}g(\xi)\leq\mathbb{E}\|\nabla g(\xi)\|^{2}\,. \tag{104}\] **Proposition 12** (Bounded difference).: _Let \(\xi=[\xi_{1},\ldots,\xi_{K}]^{\intercal}\) be a random vector with i.i.d. elements taking values in some space \(\mathcal{A}\). If a function \(g:\mathcal{A}^{K}\to\mathbb{R}\) satisfies_ \[\sup_{1\leq i\leq K}\sup_{x_{1},\ldots,x_{K},x_{i}^{\prime}\in\mathcal{A}}|g(x_{1}, \ldots,x_{i},\ldots,x_{K})-g(x_{1},\ldots,x_{i}^{\prime},\ldots,x_{K})|\leq C \tag{105}\] _for some \(C>0\), then_ \[\mathbb{V}\{g(\xi)\}\leq\frac{1}{4}KC^{2}. \tag{106}\] In what follows we will denote \(P^{y}(y|x):=\frac{\partial P_{\text{out}}(y|x)}{\partial y}\) and \(P^{x}(y|x):=\frac{\partial P_{\text{out}}(y|x)}{\partial x}\). **Lemma 13**.: _There exists a non-negative constant \(C(f,\varphi)\) such that_ \[\mathbb{E}\Big{(}\frac{1}{n}\log\mathcal{Z}_{t}-\mathbb{E}_{\mathbf{X}, \boldsymbol{\xi}^{*},\mathbf{Z}}\frac{1}{n}\log\mathcal{Z}_{t}\Big{)}^{2} \leq\frac{C(f,\varphi)}{n}\,.\] Proof.: Since \(\xi_{\mu}^{*}\), \(Z_{\mu}\) and all elements of the vectors \(\mathbf{X}_{\mu}\) are jointly independent for all \(\mu\), we have, thanks to Proposition 11, \[\mathbb{E}\mathbb{V}_{\mathbf{X},\boldsymbol{\xi}^{*},\mathbf{Z}} \Big{(}\frac{1}{n}\log\mathcal{Z}_{t}\Big{)} \leq\frac{1}{n^{2}}\mathbb{E}\|\nabla\log\mathcal{Z}_{t}\|^{2}\] \[=\frac{1}{n^{2}}\sum_{\mu=1}^{n}\mathbb{E}\Big{(}\frac{\partial \log\mathcal{Z}_{t}}{\partial\xi_{\mu}^{*}}\Big{)}^{2}+\frac{1}{n^{2}}\sum_{ \mu=1}^{n}\sum_{i=1}^{d}\mathbb{E}\Big{(}\frac{\partial\log\mathcal{Z}_{t}}{ \partial X_{\mu}^{i}}\Big{)}^{2}+\frac{1}{n^{2}}\sum_{\mu=1}^{n}\mathbb{E}\Big{(} \frac{\partial\log\mathcal{Z}_{t}}{\partial Z_{\mu}}\Big{)}^{2}=:I_{1}+I_{2}+I_ {3}\,.\] For the sake of brevity we drop the index "out" and write \(P_{\mu}=P_{\mathrm{out}}(Y_{t,\mu}|s_{t,\mu})\) in what follows. Gibbs brackets \(\langle\cdot\rangle\) are defined as in (39). After taking the derivative in the first term we obtain \[\Big{|}\frac{\partial\log\mathcal{Z}_{t}}{\partial\xi_{\mu}^{*}}\Big{|}=\Big{|} \Big{\langle}\frac{P_{\mu}^{y}}{P_{\mu}}\Big{\rangle}f^{\prime}(S_{t,\mu}; \mathbf{A}_{\mu})\sqrt{t\epsilon}\Big{|}\leq c\sqrt{t\epsilon}C(f)(|Z_{\mu}|^ {2}+1)\,.\] The last inequality is due to the boundedness of \(f^{\prime}\) and Lemma 17. One can see that the only randomness left is in \(Z_{\mu}\); since it is Gaussian, the average of any polynomial of it is bounded by some uniform constant \(c\). We have obtained that each term in \(I_{1}\) is bounded by a constant, and the number of terms is \(n\); from this it follows immediately that \(I_{1}\leq C(f)/n\). 
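(Incidentally, Proposition 11 is easy to sanity-check numerically before it is put to work on \(I_{2}\) and \(I_{3}\). The toy function and sizes below are our own arbitrary choices; the empirical variance should never exceed the gradient bound.)

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_mc = 50, 200_000                    # dimension and number of samples (arbitrary)

xi = rng.standard_normal((n_mc, K))
g = np.log(np.exp(xi).sum(axis=1))       # smooth test function g = log-sum-exp
p = np.exp(xi)/np.exp(xi).sum(axis=1, keepdims=True)
grad_sq = (p**2).sum(axis=1)             # ||grad g||^2 = ||softmax(xi)||^2

print("Var g          :", g.var())
print("E ||grad g||^2 :", grad_sq.mean())   # Poincare-Nash: the first is <= the second
```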
The second type of partial derivative gives us \[\frac{\partial\log\mathcal{Z}_{t}}{\partial X_{\mu}^{i}}=\Big{\langle} \frac{P_{\mu}^{y}}{P_{\mu}}\Big{\rangle}f^{\prime}(S_{t,\mu};\mathbf{A}_{\mu} )\Big{(}\sqrt{1-t}\frac{\big{(}\mathbf{a}^{*}\circ\varphi^{\prime}(\frac{ \mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}^{*}\big{)}_{i}}{\sqrt{pd }}+\sqrt{t}\rho\frac{v_{i}^{*}}{\sqrt{d}}\Big{)}\\ +\Big{\langle}\frac{P_{\mu}^{x}}{P_{\mu}}\frac{\big{(}\mathbf{a} \circ\varphi^{\prime}(\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}) \mathbf{W}\big{)}_{i}}{\sqrt{pd}}\Big{\rangle}\sqrt{1-t}+\Big{\langle}\frac{P_ {\mu}^{x}}{P_{\mu}}\frac{v_{i}}{\sqrt{d}}\Big{\rangle}\sqrt{t}\rho\,.\] Plugging this into \(I_{2}\) and using the simple inequality \(\mathbb{E}(a+b)^{2}\leq 2(\mathbb{E}a^{2}+\mathbb{E}b^{2})\) in order to square each term of the r.h.s. separately, we notice that the terms \[\mathbb{E}\Big{\langle}\frac{P_{\mu}^{x}}{P_{\mu}}\frac{\big{(}\mathbf{a} \circ\varphi^{\prime}(\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}) \mathbf{W}\big{)}_{i}}{\sqrt{pd}}\Big{\rangle}^{2}\,,\qquad\mathbb{E}\Big{\langle} \frac{P_{\mu}^{x}}{P_{\mu}}\frac{v_{i}}{\sqrt{d}}\Big{\rangle}^{2}\] appear, which depend only on \(Y_{t\mu},S_{t\mu}\) and \(s_{t\mu}\). This allows us to use the Nishimori identity, removing the brackets and adding \(*\) to \(\mathbf{a}\), \(\mathbf{W}\) and \(\mathbf{v}\). Then, evaluating each ratio of \(P_{\mathrm{out}}\)'s using Lemma 17, we get \[I_{2}\leq\frac{C(f)(1-t)}{n^{2}}\sum_{\mu=1}^{n}\mathbb{E}\Big{(}(|Z_{\mu}|^{2 }+1)^{2}\frac{\|\mathbf{a}^{*}\circ\varphi^{\prime}\Big{(}\frac{\mathbf{W}^{*} \mathbf{X}_{\mu}}{\sqrt{d}}\Big{)}\mathbf{W}^{*}\|^{2}}{pd}\Big{)}+\frac{C(f)t \rho^{2}}{n}\mathbb{E}\Big{(}(|Z_{\mu}|^{2}+1)^{2}\frac{\|\mathbf{v}^{*}\|^{2}}{ d}\Big{)}\,, \tag{107}\] after which we notice that the factors \((|Z_{\mu}|^{2}+1)^{2}\) are independent of everything else, and their expectation can be bounded by a positive constant. In the end we obtain \[I_{2}\leq\frac{C(f)}{n^{2}}\sum_{\mu=1}^{n}\mathbb{E}\frac{\|\mathbf{a}^{*} \circ\varphi^{\prime}\big{(}\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}} \big{)}\mathbf{W}^{*}\|^{2}}{pd}+\frac{C(f)}{n}\,. \tag{108}\] To finish the proof we notice that, when we expand the norm appearing above, all non-zero terms contain only squares, e.g., \(\mathbb{E}(a_{i}^{*}\varphi_{i}^{\prime}W_{ij}^{*})^{2}\), and are therefore positive. This gives us the opportunity to bound \(\varphi^{\prime}\) by \(C(\varphi)\) in each term and compute its expectation, which is simply \(C(\varphi)\). The number of such terms is exactly \(pd\), which gives the simple bound \(I_{2}\leq C(f,\varphi)/n\). Finally, the derivatives with respect to \(Z_{\mu}\) are of the form \[\frac{\partial\log\mathcal{Z}_{t}}{\partial Z_{\mu}}=\sqrt{\Delta}\Big{\langle} \frac{P_{\mu}^{y}}{P_{\mu}}\Big{\rangle}\,. \tag{109}\] Similarly to what was done above, we bound \(I_{3}\) with the help of Lemma 17: \[I_{3}\leq\frac{C(f)}{n}\,. \tag{110}\] The next step is to prove the concentration of the function \(\mathbb{E}_{\mathbf{X},\mathbf{Z},\boldsymbol{\xi}^{*}}\log\mathcal{Z}_{t}/n\) with respect to \(\mathbf{A}\) using Proposition 12, while keeping \(\mathbf{W}^{*}\) and \(\mathbf{v}^{*}\) fixed. 
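Proposition 12, on which the next lemma relies, can similarly be illustrated on the simplest bounded-difference example (again a toy choice of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(4)
K, n_mc = 100, 100_000                   # coordinates per sample and repetitions (toy values)

x = rng.uniform(0.0, 1.0, size=(n_mc, K))
g = x.mean(axis=1)                       # changing one coordinate moves g by at most C = 1/K

print(g.var(), "<=", K*(1.0/K)**2/4)     # empirical 1/(12K) against the bound K C^2/4 = 1/(4K)
```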
**Lemma 14**.: _There exists a constant \(C(f,\varphi)>0\) such that_ \[\mathbb{E}\Big{(}\frac{1}{n}\mathbb{E}_{\mathbf{X},\mathbf{Z},\boldsymbol{ \xi}^{*}}\log\mathcal{Z}_{t}-\mathbb{E}_{\mathbf{A}}\frac{1}{n}\mathbb{E}_{ \mathbf{X},\mathbf{Z},\boldsymbol{\xi}^{*}}\log\mathcal{Z}_{t}\Big{)}^{2}\leq \frac{C(f,\varphi)}{n}\,. \tag{111}\] Proof.: We consider \(h(\mathbf{A})=\mathbb{E}_{\mathbf{X},\mathbf{Z},\boldsymbol{\xi}^{*}}\log \mathcal{Z}_{t}/n\) as a function of all the elements \(\mathbf{A}_{\mu,i}\) of \(\mathbf{A}_{\mu}\) for \(1\leq i\leq k\) and \(1\leq\mu\leq n\). We denote by \(\mathbf{A}^{\prime}\) a vector such that \(\mathbf{A}^{\prime}_{\mu,i}=\mathbf{A}_{\mu,i}\) for \((\mu,i)\neq(\nu,j)\), and \(\mathbf{A}^{\prime}_{\nu,j}\) is a random variable with distribution \(P_{A}\), independent of all the others. According to Proposition 12 it is sufficient to prove that \[|h(\mathbf{A}^{\prime})-h(\mathbf{A})|<\frac{C(f,\varphi)}{n}\,. \tag{112}\] If we denote by \(H\) (and \(H^{\prime}\)) the Hamiltonian corresponding to \(\mathcal{Z}_{t}\) (and \(\mathcal{Z}_{t}\) with \(\mathbf{A}^{\prime}\)) one can see that \[h(\mathbf{A}^{\prime})-h(\mathbf{A})=\frac{1}{n}\mathbb{E}_{\mathbf{X}, \mathbf{Z},\boldsymbol{\xi}^{*}}\log\Big{\langle}e^{H^{\prime}-H}\Big{\rangle} _{H}\geq\frac{1}{n}\mathbb{E}_{\mathbf{X},\mathbf{Z},\boldsymbol{\xi}^{*}} \Big{\langle}H^{\prime}-H\Big{\rangle}_{H}\,, \tag{113}\] the last step being true by Jensen's inequality. On the other hand \(h(\mathbf{A}^{\prime})-h(\mathbf{A})\leq\mathbb{E}_{\mathbf{X},\mathbf{Z}, \boldsymbol{\xi}^{*}}\langle H^{\prime}-H\rangle_{H^{\prime}}/n\). We recall the definition (2) of \(P_{\mathrm{out}}(Y_{t\nu}\mid s_{t\nu})\) and similarly we obtain \[H-H^{\prime}\geq\frac{1}{2\Delta}\Big{\langle}(f(S_{t\nu};\mathbf{A}^{\prime} _{\nu})-f(s_{t\nu};\tilde{\mathbf{A}})+\sqrt{\Delta}Z_{\nu})^{2}-(f(S_{t\nu}; \mathbf{A}_{\nu})-f(s_{t\nu};\tilde{\mathbf{A}})+\sqrt{\Delta}Z_{\nu})^{2} \Big{\rangle}_{G^{\prime}} \tag{114}\] and \[H-H^{\prime}\leq\frac{1}{2\Delta}\Big{\langle}(f(S_{t\nu};\mathbf{A}^{\prime} _{\nu})-f(s_{t\nu};\tilde{\mathbf{A}})+\sqrt{\Delta}Z_{\nu})^{2}-(f(S_{t\nu}; \mathbf{A}_{\nu})-f(s_{t\nu};\tilde{\mathbf{A}})+\sqrt{\Delta}Z_{\nu})^{2} \Big{\rangle}_{G}\,, \tag{115}\] where \(\langle\cdot\rangle_{G}\) (or with \(G^{\prime}\)) is defined as \[\langle\cdot\rangle_{G}=\frac{\int P_{A}(d\tilde{\mathbf{A}})e^{-\frac{1}{2 \Delta}(Y_{t\nu}-f(s_{t\nu};\tilde{\mathbf{A}}))^{2}}(\cdot)}{\int P_{A}(d \tilde{\mathbf{A}})e^{-\frac{1}{2\Delta}(Y_{t\nu}-f(s_{t\nu};\tilde{\mathbf{A }}))^{2}}} \tag{116}\] or with \(Y^{\prime}_{t\nu}\) where \(\mathbf{A}_{\nu}\) is changed to \(\mathbf{A}^{\prime}_{\nu}\). Since \(f\) is bounded we immediately obtain \(|H-H^{\prime}|\leq C(f)(|Z_{\nu}|^{2}+1)\) and (112). Now, due to Proposition 12, the statement of the Lemma is proved. The last part is to prove the concentration of the function \(g=\mathbb{E}_{\psi}\log\mathcal{Z}_{t}/n\) with respect to \(\mathbf{W}^{*},\mathbf{v}^{*}\). **Lemma 15**.: _There exists a constant \(C(f,\varphi)>0\) such that_ \[\mathbb{E}(g-\mathbb{E}_{\mathbf{W}^{*},\mathbf{v}^{*}}g)^{2}\leq\frac{C(f, \varphi)}{d}\,. \tag{117}\] Proof.: Due to the Poincaré-Nash inequality we have \[\mathbb{E}(g-\mathbb{E}_{\mathbf{W}^{*},\mathbf{v}^{*}}g)^{2}\leq\sum_{i,j}^{p,d} \mathbb{E}\Big{(}\frac{\partial g}{\partial W^{*}_{ij}}\Big{)}^{2}+\sum_{i}^{d }\mathbb{E}\Big{(}\frac{\partial g}{\partial v^{*}_{i}}\Big{)}^{2}\,. 
\tag{118}\] Let us first deal with the partial derivatives with respect to \(W^{*}_{ij}\): \[\frac{\partial g}{\partial W^{*}_{ij}}=\frac{1}{n}\sum_{\mu}^{n}\mathbb{E}_{ \psi}\Big{(}\Big{\langle}\frac{P^{y}_{\mu}}{P_{\mu}}\Big{\rangle}f^{\prime}(S_ {t\mu};\mathbf{A}_{\mu})\frac{\sqrt{1-t}\,a^{*}_{i}\varphi^{\prime}_{i}X^{j}_{ \mu}}{\sqrt{pd}}\Big{)}\,, \tag{119}\] where \(\varphi^{\prime}_{i}=\varphi^{\prime}(\mathbf{W}^{*}_{i}\mathbf{X}_{\mu}/ \sqrt{d})\). Before integrating by parts with respect to \(X^{j}_{\mu}\), let us notice that in the sum over \(\mu\) all terms are the same, since we are taking the expectation over the i.i.d. vectors \(\mathbf{X}_{\mu}\), \(\mathbf{A}_{\mu}\), \(Z_{\mu}\) and \(\xi_{\mu}\); this means that we can drop the sum and just multiply by \(n\) directly. Then we have \[\frac{\partial g}{\partial W^{*}_{ij}}=\mathbb{E}_{\psi}\Big{[} \frac{\partial S_{t\mu}}{\partial X^{j}_{\mu}}\frac{\sqrt{1-t}\,a^{*}_{i}\varphi ^{\prime}_{i}}{\sqrt{pd}}\Big{(}-\Big{\langle}\frac{P^{y}_{\mu}}{P_{\mu}} \Big{\rangle}^{2}f^{\prime}(S_{t\mu};\mathbf{A}_{\mu})^{2}+\Big{\langle}\frac {P^{yy}_{\mu}}{P_{\mu}}\Big{\rangle}f^{\prime}(S_{t\mu};\mathbf{A}_{\mu})^{2} +\Big{\langle}\frac{P^{y}_{\mu}}{P_{\mu}}\Big{\rangle}f^{\prime\prime}(S_{t\mu };\mathbf{A}_{\mu})\Big{)}\Big{]}\\ +\mathbb{E}_{\psi}\Big{[}\frac{\sqrt{1-t}\,a^{*}_{i}\varphi^{\prime }_{i}}{\sqrt{pd}}\Big{(}-\Big{\langle}\frac{P^{y}_{\mu}}{P_{\mu}}\Big{\rangle} \Big{\langle}\frac{P^{x}_{\mu}}{P_{\mu}}\frac{\partial s_{t\mu}}{\partial X^{j }_{\mu}}\Big{\rangle}+\Big{\langle}\frac{P^{yx}_{\mu}}{P_{\mu}}\frac{\partial s _{t\mu}}{\partial X^{j}_{\mu}}\Big{\rangle}\Big{)}f^{\prime}(S_{t\mu};\mathbf{ A}_{\mu})\Big{]}+\mathbb{E}_{\psi}\Big{[}\frac{\sqrt{1-t}\,a^{*}_{i}\varphi^{\prime\prime}_{i}W^{*}_{ ij}}{\sqrt{pd}}f^{\prime}(S_{t\mu};\mathbf{A}_{\mu})\Big{]}\,.\] Due to Lemma 17, the absolute values of the ratios of \(P\)'s are bounded by \(C(f)(|Z_{\mu}|^{2}+1)\). In the first term of the expression above one can easily get rid of \(Z_{\mu}\) since \(\mathbb{E}_{\psi}((1+|Z_{\mu}|^{2}))<C\). 
On the other hand the derivatives of \(f\) and \(\varphi\), in view of (A2), remain bounded by a non-negative constant \(C(f,\varphi)\); combining all of the above and plugging it into the latter expression, along with the derivatives of \(S_{t\mu}\) and \(s_{t\mu}\), we obtain \[\Big{|}\frac{\partial g}{\partial W^{*}_{ij}}\Big{|}\leq\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi}\Big{|}\frac{\big{(}\mathbf{a}^{*}\circ \varphi^{\prime}(\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}^{* }\big{)}_{j}}{\sqrt{pd}}\frac{a^{*}_{i}}{\sqrt{p}}\Big{|}+\frac{C(f,\varphi)}{ \sqrt{d}}\mathbb{E}_{\psi}\Big{|}\frac{v^{*}_{j}}{\sqrt{d}}\frac{a^{*}_{i}}{ \sqrt{p}}\Big{|}\\ +\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi}\Big{|}(|Z_{\mu}|^{ 2}+1)\frac{a^{*}_{i}}{\sqrt{p}}\Big{\langle}\frac{\big{(}\mathbf{a}\circ\varphi ^{\prime}(\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}\big{)}_{j}}{ \sqrt{pd}}\Big{\rangle}\Big{|}+\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi} \Big{|}(|Z_{\mu}|^{2}+1)\frac{a^{*}_{i}}{\sqrt{p}}\Big{\langle}\frac{v_{j}}{ \sqrt{d}}\Big{\rangle}\Big{|}+\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi} \Big{|}\frac{a^{*}_{i}W^{*}_{ij}}{\sqrt{pd}}\Big{|}\,.\] After using repeatedly \(\mathbb{E}(a+b)^{2}\leq 2\mathbb{E}a^{2}+2\mathbb{E}b^{2}\), along with Jensen's inequality, one can show \[\sum_{i,j}^{p,d}\mathbb{E}\Big{|}\frac{\partial g}{\partial W^{*}_ {ij}}\Big{|}^{2}\leq\frac{C(f,\varphi)}{d}\Big{(}\mathbb{E}\Big{[}\frac{\| \mathbf{a}^{*}\circ\varphi^{\prime}(\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{ \sqrt{d}})\mathbf{W}^{*}\|^{2}}{pd}\frac{\|\mathbf{a}^{*}\|^{2}}{p}\Big{]}\\ +\mathbb{E}\Big{[}\frac{(|Z_{\mu}|^{2}+1)^{2}\|\mathbf{a}^{*}\|^{ 2}}{p}\Big{\langle}\frac{\|\mathbf{a}\circ\varphi^{\prime}(\frac{\mathbf{W} \mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}\|^{2}}{pd}\Big{\rangle}\Big{]}+\mathbb{E} \Big{[}\frac{(|Z_{\mu}|^{2}+1)^{2}\|\mathbf{a}^{*}\|^{2}}{p}\Big{\langle}\frac{ \|\mathbf{v}\|^{2}}{d}\Big{\rangle}\Big{]}+2\Big{)}\,. \tag{120}\] The two terms of the form \(\mathbb{E}[b\langle c\rangle]\) are bounded by using the Cauchy-Schwarz inequality, Jensen's inequality and the Nishimori identity consecutively: \[\mathbb{E}[b\langle c\rangle]\leq\mathbb{E}^{1/2}[b^{2}]\mathbb{E}^{1/2}[ \langle c\rangle^{2}]\leq\mathbb{E}^{1/2}[b^{2}]\mathbb{E}^{1/2}[\langle c^{2} \rangle]\leq\mathbb{E}^{1/2}[b^{2}]\mathbb{E}^{1/2}[c^{2}]\,. \tag{121}\] This allows us to rewrite (120) as \[\sum_{i,j}^{p,d}\mathbb{E}\Big{|}\frac{\partial g}{\partial W^{*}_{ij}}\Big{|}^{2} \leq\frac{C(f,\varphi)}{d}\Big{(}\mathbb{E}^{1/2}\Big{[}\frac{\|\mathbf{a}^{*}\circ \varphi^{\prime}(\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}^{*}\|^{ 4}}{p^{2}d^{2}}\Big{]}+C\Big{)}\,. \tag{122}\] It remains to notice that in \(\mathbb{E}\|\mathbf{a}^{*}\circ\varphi^{\prime}(\mathbf{W}^{*}\mathbf{X}_{\mu}/ \sqrt{d})\mathbf{W}^{*}\|^{4}/(p^{2}d^{2})\) all non-zero terms have only even powers, so we can bound \(\varphi^{\prime}\) by a constant in all of them, which immediately gives \[\sum_{i,j}^{p,d}\mathbb{E}\Big{|}\frac{\partial g}{\partial W^{*}_{ij}}\Big{|}^{ 2}\leq\frac{C(f,\varphi)}{d}\,. \tag{123}\] Now we consider the partial derivative with respect to \(v^{*}_{i}\): \[\frac{\partial g}{\partial v^{*}_{i}}=\frac{1}{n}\sum_{\mu}^{n}\mathbb{E}_{ \psi}\Big{[}\Big{\langle}\frac{P^{y}_{\mu}}{P_{\mu}}\Big{\rangle}f^{\prime}(S_ {t,\mu};\mathbf{A}_{\mu})\frac{\sqrt{t}\rho X^{i}_{\mu}}{\sqrt{d}}\Big{]}\,. 
\tag{124}\] As in the case of \(W^{*}_{ij}\), it is necessary to integrate by parts also with respect to \(X^{i}_{\mu}\), since naive bounds would not give the desired order. The result is very similar to the previous calculation; this time we simply do not have the factors \(\varphi^{\prime}_{i}\) and \(a^{*}_{i}\). After similar simplifications (bounds on the ratios of \(P\)'s, etc.) we obtain \[\Big{|}\frac{\partial g}{\partial v^{*}_{i}}\Big{|}\leq\frac{C(f, \varphi)}{\sqrt{d}}\mathbb{E}_{\psi}\Big{|}\frac{\big{(}\mathbf{a}^{*}\circ \varphi^{\prime}(\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W}^{ *}\big{)}_{i}}{\sqrt{pd}}\Big{|}+\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi} \Big{|}\frac{v^{*}_{i}}{\sqrt{d}}\Big{|}\\ +\frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi}\Big{|}(|Z_{\mu}|^ {2}+1)\Big{\langle}\frac{(\mathbf{a}\circ\varphi^{\prime}(\frac{\mathbf{W} \mathbf{X}_{\mu}}{\sqrt{d}})\mathbf{W})_{i}}{\sqrt{pd}}\Big{\rangle}\Big{|}+ \frac{C(f,\varphi)}{\sqrt{d}}\mathbb{E}_{\psi}\Big{|}(|Z_{\mu}|^{2}+1)\Big{ \langle}\frac{v_{i}}{\sqrt{d}}\Big{\rangle}\Big{|}\,. \tag{125}\] Similarly to the previous case it is easy to see that \[\sum_{i}^{d}\mathbb{E}\Big{|}\frac{\partial g}{\partial v^{*}_{i} }\Big{|}^{2}\leq\frac{C(f,\varphi)}{d}\Big{(}\mathbb{E}\Big{[}\frac{\|\mathbf{ a}^{*}\circ\varphi^{\prime}(\frac{\mathbf{W}^{*}\mathbf{X}_{\mu}}{\sqrt{d}}) \mathbf{W}^{*}\|^{2}}{pd}\Big{]}+\mathbb{E}\Big{[}\frac{\|\mathbf{v}^{*}\|^{2}} {d}\Big{]}\\ +\mathbb{E}\Big{[}(|Z_{\mu}|^{2}+1)^{2}\Big{\langle}\frac{\| \mathbf{a}\circ\varphi^{\prime}(\frac{\mathbf{W}\mathbf{X}_{\mu}}{\sqrt{d}}) \mathbf{W}\|^{2}}{pd}\Big{\rangle}\Big{]}+\mathbb{E}\Big{[}(|Z_{\mu}|^{2}+1)^ {2}\Big{\langle}\frac{\|\mathbf{v}\|^{2}}{d}\Big{\rangle}\Big{]}\Big{)}\,. \tag{126}\] In order to be able to apply the Nishimori identity to the last two terms, we first have to use the Cauchy-Schwarz inequality to separate the factors containing the noise \(Z_{\mu}\) from the Gibbs bracket. Due to the fact that \(\varphi^{\prime}\) is bounded we obtain \[\sum_{i}^{d}\mathbb{E}\Big{|}\frac{\partial g}{\partial v^{*}_{i}}\Big{|}^{2} \leq\frac{C(f,\varphi)}{d}\,. \tag{127}\] This, combined with (123), finishes the proof. ## 5 Proof of Proposition 3 The proof is implied by that of Theorem 1. We introduce a further set of data \(\tilde{\mathcal{D}}_{n}:=\{(\mathbf{X}_{\nu},\tilde{Y}_{\nu})\}_{n+1\leq\nu \leq n(1+\varepsilon)}\) with responses \[\tilde{Y}_{\nu}=\sqrt{\lambda}Y^{\prime}_{\nu}+\tilde{Z}^{\prime}_{\nu}\,, \quad Y^{\prime}_{\nu}\sim P_{\text{out}}(\cdot\mid S_{\nu}) \tag{128}\] where \(\lambda\geq 0\) and \(n+1\leq\nu\leq n(1+\varepsilon)\) for some \(\varepsilon\geq 0\), the \(\tilde{Z}^{\prime}_{\nu}\) are i.i.d. Gaussian variables independent of everything else, and where \(S_{\nu}\) is defined as in (7) but for the new inputs, with the same teacher as the one used to generate \(\mathcal{D}_{n}\). We now define a proxy for the Bayes-optimal generalization error given the original and new data: \[\mathcal{E}_{n}(\lambda,\varepsilon):=\frac{1}{n\varepsilon}\sum_{\nu=n+1}^{n( 1+\varepsilon)}\mathbb{E}\big{[}(Y^{\prime}_{\nu}-\mathbb{E}[Y^{\prime}_{\nu} \mid\mathcal{D}_{n}\cup\tilde{\mathcal{D}}_{n}])^{2}\big{]}\,. \tag{129}\] This recovers the true definition of the generalization error if we set \(\lambda=0\) in it. 
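The identification below rests on the I-MMSE relation. In the scalar Gaussian channel, where both sides are explicit, the relation can be checked directly; the following snippet and its parameter values are purely illustrative:

```python
import numpy as np

sigma2, lam, h = 2.0, 0.7, 1e-6          # illustrative prior variance, SNR, step size

# For Y = sqrt(lam) X + Z with X ~ N(0, sigma2) and Z ~ N(0, 1):
I    = lambda l: 0.5*np.log(1 + l*sigma2)    # mutual information I(X; Y)
mmse = sigma2/(1 + lam*sigma2)               # minimum mean-square error E(X - E[X|Y])^2

dI = (I(lam + h) - I(lam - h))/(2*h)         # numerical derivative in lambda
print(dI, "?=", 0.5*mmse)                    # I-MMSE: dI/dlam = MMSE/2
```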
The quantity \(\mathcal{E}_{n}(\lambda,\varepsilon)\) can be obtained through the I-MMSE relation [90] \[\frac{1}{n}\frac{\partial}{\partial\lambda}I_{n}(\mathbf{Y}^{\prime};\sqrt{ \lambda}\mathbf{Y}^{\prime}+\mathbf{Z}^{\prime}\mid\mathbf{Y},(\mathbf{X}_{ \mu})_{\mu\leq n(1+\varepsilon)})=\frac{\varepsilon}{2}\mathcal{E}_{n}( \lambda,\varepsilon)\,. \tag{130}\] Following the general arguments in [8], the mutual information on the l.h.s. is concave in \(\lambda\). Moreover, the proof of Theorem 1 can be readily extended to take into account the additional side information (128): indeed, the proof is exactly the same as before, except that the "channel" (128) generating the \(n\varepsilon\) responses is slightly different from the original one \(P_{\mathrm{out}}\). The new channel is equivalent to the original one once we rescale the readout \(f\) by \(\sqrt{\lambda}f\) and the noise variance \(\Delta\) as \(\lambda\Delta+1\) in (1). This channel still verifies the same hypotheses. One then just needs to keep track of the indices with responses generated according to the basic model (1) and those from the rescaled channel (128). In particular, it is possible to show the asymptotic equivalence of the quantities \(I_{n}(\mathbf{Y}^{\prime};\sqrt{\lambda}\mathbf{Y}^{\prime}+\mathbf{Z}^{ \prime}\mid\mathbf{Y},(\mathbf{X}_{\mu})_{\mu\leq n(1+\varepsilon)})/n\) and \(I_{n}^{\circ}(\mathbf{Y}^{\prime};\sqrt{\lambda}\mathbf{Y}^{\prime}+\mathbf{Z }^{\prime}\mid\mathbf{Y},(\mathbf{X}_{\mu})_{\mu\leq n(1+\varepsilon)})/n\), where the second one refers to the equivalent GLM, with \(S_{\nu}^{\circ}=\rho\mathbf{v}^{\star\intercal}\mathbf{X}_{\nu}/\sqrt{d}+\sqrt{ \epsilon}\xi_{\nu}^{\star}\). Then, by concavity, one can use Griffiths' Lemma (see for instance [91, Lemma IV.6.3]) to exchange the derivative with the limit \(n\to\infty\) almost everywhere: \[\lim_{n\to\infty}\frac{1}{n}\frac{\partial}{\partial\lambda}I_{n}(\mathbf{Y}^ {\prime};\sqrt{\lambda}\mathbf{Y}^{\prime}+\mathbf{Z}^{\prime}\mid\mathbf{Y},(\mathbf{X}_{\mu})_{\mu\leq n(1+\varepsilon)})=\frac{\partial}{\partial \lambda}\lim_{n\to\infty}\frac{1}{n}I_{n}^{\circ}(\mathbf{Y}^{\prime};\sqrt{ \lambda}\mathbf{Y}^{\prime}+\mathbf{Z}^{\prime}\mid\mathbf{Y},(\mathbf{X}_{ \mu})_{\mu\leq n(1+\varepsilon)})\,. \tag{131}\] The exchange fails when the model on the r.h.s., i.e. the GLM, presents a phase transition. Then one has to send both \(\lambda,\varepsilon\to 0\). The authors of [8] showed that these limits commute with the \(n\to\infty\) limit for the GLM. So the r.h.s. yields the optimal generalization error for the GLM proved in [8], which is therefore shared by the Bayesian two-layer neural network under study. ## Appendix A Nishimori identity The Nishimori identities are a very general set of symmetries arising in inference in the Bayes-optimal setting as a consequence of Bayes' rule. They were initially discovered in the context of the gauge theory of spin glasses [92], which possesses a sub-region of its phase space, called the _Nishimori line_, where the most relevant thermodynamic quantities can be exactly computed, and where we can generally count on replica symmetry, namely the concentration of order parameters [93]. To introduce them, consider a generic inference problem where a Bayes-optimal statistician observes \(\mathbf{Y}\) that is a random function of some ground truth signal \(\mathbf{X}^{*}\): \(\mathbf{Y}\sim P_{Y|X}(\mathbf{X}^{*})\). 
Then the following holds: **Proposition 16** (Nishimori identity).: _For any bounded function \(f\) of the signal \(\mathbf{X}^{*}\), the data \(\mathbf{Y}\) and of conditionally i.i.d. samples from the posterior \(\mathbf{x}^{j}\sim P_{X|Y}(\,\cdot\mid\mathbf{Y})\), \(j=1,2,\ldots,n\), we have that_ \[\mathbb{E}\langle f(\mathbf{Y},\mathbf{X}^{*},\mathbf{x}^{2},\ldots,\mathbf{x }^{n})\rangle=\mathbb{E}\langle f(\mathbf{Y},\mathbf{x}^{1},\mathbf{x}^{2}, \ldots,\mathbf{x}^{n})\rangle \tag{132}\] _where the bracket notation \(\langle\,\cdot\,\rangle\) is used for the joint expectation over the posterior samples \((\mathbf{x}^{j})_{j\leq n}\), and \(\mathbb{E}\) is over the signal \(\mathbf{X}^{*}\) and the data \(\mathbf{Y}\)._ Proof.: The proof follows directly from Bayes' rule. An elementary proof can be found in [94]. ## Appendix B Proof of Lemma 4 Let us start by proving an auxiliary lemma in which we collect all the necessary bounds on the derivatives of \(P_{\mathrm{out}}(y|x)\). In what follows \(C(f)\) is a generic constant that depends on \(f\) and may also depend on \(\Delta\). Below, upper indices represent partial derivatives, e.g., \(P_{\mathrm{out}}^{x}(y|x)=\partial_{x}P_{\mathrm{out}}(y|x)\) and \(P_{\mathrm{out}}^{xx}(y|x)=\partial_{x}\partial_{x}P_{\mathrm{out}}(y|x)\). **Lemma 17**.: _Let \(y=Y_{t\mu}=f(S_{t\mu};\mathbf{A}_{\mu})+\sqrt{\Delta}Z_{\mu}\). Under assumption (A2) there exists a constant \(C(f)\) such that_ \[\max\Big{\{}\Big{|}\frac{P_{\mathrm{out}}^{y}(y|x)}{P_{\mathrm{ out}}(y|x)}\Big{|},\Big{|}\frac{P_{\mathrm{out}}^{x}(y|x)}{P_{\mathrm{out}}(y|x)} \Big{|},\Big{|}\frac{P_{\mathrm{out}}^{yy}(y|x)}{P_{\mathrm{out}}(y|x)}\Big{|},\Big{|} \frac{P_{\mathrm{out}}^{yx}(y|x)}{P_{\mathrm{out}}(y|x)}\Big{|},\Big{|}\frac{P_ {\mathrm{out}}^{xx}(y|x)}{P_{\mathrm{out}}(y|x)}\Big{|}\Big{\}}<C(f)(|Z_{\mu}|^ {2}+1)\,. \tag{133}\] Proof.: For convenience we recall here the definition of \(P_{\mathrm{out}}(y|x)\): \[P_{\mathrm{out}}(y\mid x)=\int dP_{A}(\mathbf{A})\frac{1}{\sqrt{2\pi\Delta}} \exp\Big{(}-\frac{1}{2\Delta}(y-f(x;\mathbf{A}))^{2}\Big{)}\,. \tag{134}\] It is easy to see that the ratio of any of these derivatives of \(P_{\mathrm{out}}\) with \(P_{\mathrm{out}}\) can be rewritten using the average \[\langle\cdot\rangle_{\mathbf{A}}:=\frac{\int dP_{A}(\mathbf{A})(\cdot)e^{- \frac{1}{2\Delta}(y-f(x;\mathbf{A}))^{2}}}{\int dP_{A}(\mathbf{A})e^{-\frac{1 }{2\Delta}(y-f(x;\mathbf{A}))^{2}}}\,. \tag{135}\] After some algebra we get \[\frac{P_{\mathrm{out}}^{y}(y|x)}{P_{\mathrm{out}}(y|x)}=\Big{\langle}- \frac{1}{\Delta}(y-f(x;\mathbf{A}))\Big{\rangle}_{\mathbf{A}}\,, \tag{136}\] \[\frac{P_{\mathrm{out}}^{x}(y|x)}{P_{\mathrm{out}}(y|x)}=\Big{\langle} \frac{1}{\Delta}(y-f(x;\mathbf{A}))f^{\prime}(x;\mathbf{A})\Big{\rangle}_{ \mathbf{A}}\,,\] (137) \[\frac{P_{\mathrm{out}}^{yy}(y|x)}{P_{\mathrm{out}}(y|x)}=\Big{\langle} \frac{1}{\Delta^{2}}(y-f(x;\mathbf{A}))^{2}\Big{\rangle}_{\mathbf{A}}-\frac{1 }{\Delta}\,,\] (138) \[\frac{P_{\mathrm{out}}^{yx}(y|x)}{P_{\mathrm{out}}(y|x)}=\Big{\langle} -\frac{1}{\Delta^{2}}(y-f(x;\mathbf{A}))^{2}f^{\prime}(x;\mathbf{A})+\frac{1 }{\Delta}f^{\prime}(x;\mathbf{A})\Big{\rangle}_{\mathbf{A}}\,,\] (139) \[\frac{P_{\mathrm{out}}^{xx}(y|x)}{P_{\mathrm{out}}(y|x)}=\Big{\langle} \Big{(}\frac{1}{\Delta^{2}}(y-f(x;\mathbf{A}))^{2}-\frac{1}{\Delta}\Big{)}f^{ \prime}(x;\mathbf{A})^{2}+\frac{1}{\Delta}(y-f(x;\mathbf{A}))f^{\prime\prime}( x;\mathbf{A})\Big{\rangle}_{\mathbf{A}}\,. 
\tag{140}\] Since all the expressions have a similar form, we treat only the last one; all the others can be bounded in the same way. We have \[\Big{|}\frac{P_{\mathrm{out}}^{xx}(y|x)}{P_{\mathrm{out}}(y|x)}\Big{|}\leq \Big{\langle}\Big{(}\frac{1}{\Delta^{2}}(y-f(x;\mathbf{A}))^{2}+\frac{1}{ \Delta}\Big{)}f^{\prime}(x;\mathbf{A})^{2}\Big{\rangle}_{\mathbf{A}}+\Big{ \langle}\frac{1}{\Delta}\Big{|}y-f(x;\mathbf{A})\Big{|}\Big{|}f^{\prime\prime}( x;\mathbf{A})\Big{|}\Big{\rangle}_{\mathbf{A}}\,. \tag{141}\] When \(y=f(S_{t\mu};\mathbf{A}_{\mu})+\sqrt{\Delta}Z_{\mu}\), since \(f\) is bounded along with its first two derivatives (see (A2)), we immediately obtain \[\Big{|}\frac{P_{\mathrm{out}}^{xx}(y|x)}{P_{\mathrm{out}}(y|x)}\Big{|}\leq C(f )(|Z_{\mu}|^{2}+1)\,. \tag{142}\] _Remark 3_.: With such a bound one can see that, after averaging such ratios (with or without \(\langle\cdot\rangle\) as in (39)) with respect to \(Z_{\mu}\), we simply obtain a uniform bound \(C(f)\). Now let us return to the proof of Lemma 4. Proof of Lemma 4.: By definition one has \(\mathbb{E}[u^{\prime}_{Y_{t\mu}}(S_{t\mu})\mid S_{t\mu}]=\int dyP_{\mathrm{ out}}^{x}(y\mid S_{t\mu})=0\). For \(U_{\mu\nu}\) instead, we first need to realize that \(U_{\mu\mu}=P_{\mathrm{out}}^{xx}(Y_{t\mu}\mid S_{t\mu})/P_{\mathrm{out}}(Y_{t \mu}\mid S_{t\mu})\), which implies \(\mathbb{E}[U_{\mu\mu}\mid S_{t\mu}]=\int dyP_{\mathrm{out}}^{xx}(y\mid S_{t\mu})=0\). Concerning the off-diagonal terms instead, conditionally on \(S_{t\mu}\), \(S_{t\nu}\) the remaining disorder in the \(Y\)'s is independent, so for \(\mu\neq\nu\) we have \(\mathbb{E}[U_{\mu\nu}\mid S_{t\mu},S_{t\nu}]=\int dyP_{\mathrm{out}}^{x}(y\mid S _{t\mu})\int dy^{\prime}P_{\mathrm{out}}^{x}(y^{\prime}\mid S_{t\nu})=0\). For the boundedness of the derivatives it is sufficient to notice that \[(u^{\prime}_{Y_{t\mu}}(S_{t\mu}))^{2}=\Big{(}\frac{P_{\mathrm{ out}}^{y}(Y_{t\mu}|S_{t\mu})}{P_{\mathrm{out}}(Y_{t\mu}|S_{t\mu})}\Big{)}^{2}\leq C(f)(|Z_{ \mu}|^{4}+1)\,, \tag{143}\] the last inequality being true due to Lemma 17 and the fact that \(Y_{t\mu}=f(S_{t\mu};\mathbf{A}_{\mu})+\sqrt{\Delta}Z_{\mu}\). Now it is immediate that, after taking the expectation conditioned on \(S_{t\mu}\), we obtain a bound \(C(f)\). In order to deal with the last quantity \(U_{\mu\nu}^{2}\) we rewrite \[U_{\mu\nu}^{2}=\Big{(}\delta_{\mu\nu}\Big{(}\frac{P_{\text{out}}^{xx}(Y_{t\mu}| S_{t\mu})}{P_{\text{out}}(Y_{t\mu}|S_{t\mu})}-\Big{(}\frac{P_{\text{out}}^{x}(Y_{t \mu}|S_{t\mu})}{P_{\text{out}}(Y_{t\mu}|S_{t\mu})}\Big{)}^{2}\Big{)}+\frac{P_{ \text{out}}^{x}(Y_{t\mu}|S_{t\mu})}{P_{\text{out}}(Y_{t\mu}|S_{t\mu})}\frac{P_ {\text{out}}^{x}(Y_{t\nu}|S_{t\nu})}{P_{\text{out}}(Y_{t\nu}|S_{t\nu})}\Big{)}^ {2}\,.\] With the help of Lemma 17 one can see immediately that \(U_{\mu\nu}^{2}\leq C(f)P(Z_{\mu},Z_{\nu})\), where \(P\) is some polynomial with even powers. Once again, after taking the expectation we get a bound by a positive constant \(C(f)\). ## Appendix C Proof of Lemma 5 For the reader's convenience we repeat the statement of the lemma below. **Lemma 18** (Approximations).: _Recall \(\rho:=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{\prime}\) and \(\epsilon:=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}-\rho^{2}\). Let \(\tilde{\varphi}\) be either \(\varphi\) or the identity function. 
Under assumptions (A1) and (A2) the following estimates hold:_ \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i})=\rho+ O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}\,, \tag{144}\] \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu i})=\mathbb{E }_{\mathcal{N}(0,1)}\varphi^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1 \Big{)}\,,\] (145) \[\mathbb{E}_{\mathbf{W}^{*}}\varphi(\alpha_{\mu i})\tilde{\varphi }(\alpha_{\nu i})=\rho\mathbb{E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{\prime} \frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}+O\Big{(}\Big{(}\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)} ^{2}\Big{)}+O\Big{(}\frac{(\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu})^{2}} {\|\mathbf{X}_{\nu}\|^{2}d}\Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal} \mathbf{X}_{\nu}}{d}\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\Big{)}\Big{)}\,,\] (146) \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i}) \varphi^{\prime}(\alpha_{\nu i})=\rho^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2 }}{d}-1\Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\| \mathbf{X}_{\nu}\|^{2}}\Big{)}\,,\] (147) \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu i})\tilde{ \varphi}^{2}(\alpha_{\nu i})=\mathbb{E}_{\mathcal{N}(0,1)}\varphi^{2}\mathbb{ E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1 \Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\| \mathbf{X}_{\nu}\|^{2}}\Big{)}\,. \tag{148}\] Proof.: Starting from (144), using the fundamental theorem of calculus we get \[|\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i})- \rho|=|\mathbb{E}\varphi^{\prime}\Big{(}z\sqrt{\frac{\|\mathbf{X}_{\mu}\|^{2} }{d}}\Big{)}-\rho|\leq\int_{0}^{1}ds\mathbb{E}\frac{|z|}{2}\Big{|}\varphi^{ \prime\prime}\Big{(}z\sqrt{s\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}+1-s}\Big{)} \Big{|}\frac{\big{|}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\big{|}}{\sqrt{s\frac{\| \mathbf{X}_{\mu}\|^{2}}{d}+1-s}} \tag{149}\] where the average is only over \(z\sim\mathcal{N}(0,1)\). If we use the bound on the second derivative \(\varphi^{\prime\prime}\leq\bar{K}\) we decouple completely \(s\) from \(z\), and we can compute both the average (\(\mathbb{E}|z|\leq\sqrt{\mathbb{E}z^{2}}=1\)) and the integral, obtaining \[|\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu i})- \rho|\leq\bar{K}\frac{\big{|}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1\big{|}}{ \sqrt{\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}}+1}\leq\bar{K}\Big{|}\frac{\|\mathbf{X }_{\mu}\|^{2}}{d}-1\Big{|}\,. \tag{150}\] Let us now focus on (145). In the same spirit as the previous point: \[|\mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{2}(\alpha_{\mu i})- \mathbb{E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{2}|=|\mathbb{E}\tilde{\varphi}^{2 }\Big{(}z\sqrt{\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}}\Big{)}-\mathbb{E}_{ \mathcal{N}(0,1)}\tilde{\varphi}^{2}|\leq\bar{K}\int_{0}^{1}ds\mathbb{E}|z \tilde{\varphi}(\cdots)|\frac{\big{|}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1 \big{|}}{\sqrt{s\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}+1-s}} \tag{151}\] with \(\tilde{\varphi}^{\prime}\leq\bar{K}\), and where the argument of \(\tilde{\varphi}\) is \(z(s(\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1)+1)^{1/2}\). Here, before being able to integrate over \(s\), we need to bound the expectation \(\mathbb{E}|z\tilde{\varphi}(\cdots)|\). Recall that \(\varphi\) is Lipschitz; a simple bound is thus given by \(|\tilde{\varphi}(\cdots)|\leq\bar{K}|(\cdots)|\), using \(\tilde{\varphi}^{\prime}\leq\bar{K}\) and that \(\tilde{\varphi}(0)=0\) as it is odd. 
This yields immediately: \[|\mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{2}(\alpha_{\mu i})-\mathbb{ E}_{\mathcal{N}(0,1)}\tilde{\varphi}^{2}|\leq\bar{K}^{2}\Big{|}\frac{\| \mathbf{X}_{\mu}\|^{2}}{d}-1\Big{|}\,. \tag{152}\] Consider now (146). In what follows we drop the \(i\)-index for brevity. Let \[\alpha_{\mu\perp\nu}:=\alpha_{\mu}-\alpha_{\nu}\frac{\mathbb{E}\alpha_{\mu} \alpha_{\nu}}{\mathbb{E}\alpha_{\nu}^{2}}=\alpha_{\mu}-\alpha_{\nu}\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\,,\] which is independent of \(\alpha_{\nu}\). Now we expand \(\varphi\) around \(\alpha_{\mu\perp\nu}\): \[\mathbb{E}_{\mathbf{W}^{*}}\varphi(\alpha_{\mu})\tilde{\varphi}(\alpha _{\nu})=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu\perp\nu}) \mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{\prime}(\alpha_{\nu})\frac{ \mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{d}+\frac{1}{2}\mathbb{E}_{ \mathbf{W}^{*}}\varphi^{\prime\prime}(p)\tilde{\varphi}(\alpha_{\nu})\alpha_{ \nu}^{2}\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{ X}_{\nu}\|^{2}}\Big{)}^{2}\,. \tag{153}\] The zero-th order is zero because \(\varphi\) is odd, and we have performed Gaussian integration by parts in the first order. Here \(p\) is a point in between \(\alpha_{\mu\perp\nu}\) and \(\alpha_{\mu}\). Now we expand again \(\varphi^{\prime}(\alpha_{\mu\perp\nu})\) around the initial point \(\alpha_{\mu}\): \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu\perp\nu})= \mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu})-\mathbb{E}_{\mathbf{ W}^{*}}\varphi^{\prime\prime}(p)\alpha_{\nu}\frac{\mathbf{X}_{\mu}^{ \intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}=\mathbb{E}_{\mathbf{W}^{* }}\varphi^{\prime}(\alpha_{\mu})+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal} \mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big{)} \tag{154}\] where \(p\) has the same meaning as before, and we used \(\varphi^{\prime\prime}\leq\bar{K}\). At this point it suffices to use (144) on \(\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu})\) and \(\mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{\prime}(\alpha_{\nu})\), and the estimate is proved. Let us move to (147). As for the previous point, we follow the orthogonalization procedure: \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu}) \varphi^{\prime}(\alpha_{\nu}) =\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu \perp\nu})\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\nu})+\mathbb{E} _{\mathbf{W}^{*}}\varphi^{\prime\prime}(p)\varphi^{\prime}(\alpha_{\nu})\alpha _{\nu}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^ {2}} \tag{155}\] \[=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu \perp\nu})\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\nu})+O\Big{(} \frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}} \Big{)}\,. \tag{156}\] Now we use (154) and (144) to conclude: \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{\prime}(\alpha_{\mu})\varphi^{ \prime}(\alpha_{\nu})=\rho^{2}+O\Big{(}\frac{\|\mathbf{X}_{\mu}\|^{2}}{d}-1 \Big{)}+O\Big{(}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{ X}_{\nu}\|^{2}}\Big{)} \tag{157}\] as in the statement. Now we move to (148). 
As before, we expand \(\varphi^{2}(\alpha_{\mu})\) around \(\alpha_{\mu\perp\nu}\): \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu})\tilde{\varphi}^{2}(\alpha_{\nu})=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu\perp\nu})\tilde{\varphi}^{2}(\alpha_{\nu})+2\mathbb{E}_{\mathbf{W}^{*}}\int_{0}^{1}ds\,\varphi(\alpha_{\mu,\nu}(s))\varphi^{\prime}(\alpha_{\mu,\nu}(s))\tilde{\varphi}^{2}(\alpha_{\nu})\alpha_{\nu}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}} \tag{158}\] where \(\alpha_{\mu,\nu}(s)=\alpha_{\mu\perp\nu}+s\alpha_{\nu}\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}/\|\mathbf{X}_{\nu}\|^{2}\). The integral on the r.h.s. can be bounded in different ways. For instance, one can first integrate the \(\alpha_{\nu}\) by parts, recalling that \(\alpha_{\mu\perp\nu}\) is independent of it, and then exploit the fact that both \(\varphi\) and \(\tilde{\varphi}\) are Lipschitz. This yields the \(O(\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}/\|\mathbf{X}_{\nu}\|^{2})\) in the statement. The leading term \(\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu\perp\nu})\tilde{\varphi}^{2}(\alpha_{\nu})\) can be split into \(\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu\perp\nu})\,\mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{2}(\alpha_{\nu})\) thanks to the orthogonalization. Expanding \(\varphi^{2}(\alpha_{\mu\perp\nu})\) around \(\alpha_{\mu}\) we get \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu\perp\nu})=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu})-2\int_{0}^{1}ds\,\mathbb{E}_{\mathbf{W}^{*}}\varphi(\alpha_{\mu,\nu}(s))\varphi^{\prime}(\alpha_{\mu,\nu}(s))\alpha_{\nu}\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}} \tag{159}\] with \(\alpha_{\mu,\nu}(s)\) as above. The integral contributes again at the same order as the one above, therefore \[\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu})\tilde{\varphi}^{2}(\alpha_{\nu})=\mathbb{E}_{\mathbf{W}^{*}}\varphi^{2}(\alpha_{\mu})\,\mathbb{E}_{\mathbf{W}^{*}}\tilde{\varphi}^{2}(\alpha_{\nu})+O\Big(\frac{\mathbf{X}_{\mu}^{\intercal}\mathbf{X}_{\nu}}{\|\mathbf{X}_{\nu}\|^{2}}\Big)\,. \tag{160}\] Finally, it only remains to apply (145) to both factors in the leading contribution on the r.h.s., which yields the missing remainder \(O(\|\mathbf{X}_{\mu}\|^{2}/d-1)\) in the statement.
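Although not part of the proof, estimate (144) is easy to sanity-check numerically. The following Monte Carlo sketch (ours, not from the source) uses \(\varphi=\tanh\), which is odd, Lipschitz, and has bounded second derivative, as a stand-in for an activation satisfying the assumptions:

```python
# Monte Carlo check of (144): E_z[phi'(z * sqrt(r))] = rho + O(r - 1),
# with r = ||X_mu||^2 / d and rho = E_{N(0,1)} phi'.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(2_000_000)

def phi_prime(x):
    return 1.0 - np.tanh(x) ** 2  # derivative of tanh

rho = phi_prime(z).mean()  # rho = E_{N(0,1)} phi'
for r in [0.9, 0.99, 1.0, 1.01, 1.1]:  # r = ||X_mu||^2 / d near 1
    lhs = phi_prime(z * np.sqrt(r)).mean()
    # The deviation from rho shrinks linearly as |r - 1| -> 0.
    print(f"r={r:5.2f}  E phi'(z sqrt(r)) = {lhs:.5f}  |dev| = {abs(lhs - rho):.5f}")
```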
2301.09381
A Structural Approach to the Design of Domain Specific Neural Network Architectures
This is a master's thesis concerning the theoretical ideas of geometric deep learning. Geometric deep learning aims to provide a structured characterization of neural network architectures, specifically focused on the ideas of invariance and equivariance of data with respect to given transformations. This thesis aims to provide a theoretical evaluation of geometric deep learning, compiling theoretical results that characterize the properties of invariant neural networks with respect to learning performance.
Gerrit Nolte
2023-01-23T11:50:57Z
http://arxiv.org/abs/2301.09381v1
# A Structural Approach to the Design of Domain Specific Neural Network Architectures
2304.01665
Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks
Language models' (LMs) proficiency in handling deterministic symbolic reasoning and rule-based tasks remains limited due to their dependence on implicit learning from textual data. To endow LMs with genuine rule comprehension abilities, we propose "Neural Comprehension" - a framework that synergistically integrates compiled neural networks (CoNNs) into the standard transformer architecture. CoNNs are neural modules designed to explicitly encode rules through artificially generated attention weights. By incorporating CoNN modules, the Neural Comprehension framework enables LMs to accurately and robustly execute rule-intensive symbolic tasks. Extensive experiments demonstrate the superiority of our approach over existing techniques in terms of length generalization, efficiency, and interpretability for symbolic operations. Furthermore, it can be applied to LMs across different model scales, outperforming tool-calling methods in arithmetic reasoning tasks while maintaining superior inference efficiency. Our work highlights the potential of seamlessly unifying explicit rule learning via CoNNs and implicit pattern learning in LMs, paving the way for true symbolic comprehension capabilities.
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, Jun Zhao
2023-04-04T09:50:07Z
http://arxiv.org/abs/2304.01665v3
# Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks ###### Abstract Language models' (LMs) proficiency in handling deterministic symbolic reasoning and rule-based tasks remains limited due to their dependence on implicit learning from textual data. To enable full rule comprehension ability, we explore how to incorporate compiled neural networks (CoNNs), whose weights are specially designed, into the architecture of LMs, to achieve high accuracy and robust performance. CoNNs are transformer-based neural networks that execute rules through artificially generated attention weights. Our method, called "Neural Comprehension", incorporates CoNN modules into the LM so that the framework effectively tackles rule-intensive challenges. Our experiments on symbolic reasoning tasks and real-world arithmetic reasoning tasks demonstrate the superior performance of our method compared to existing techniques. Furthermore, our LM achieves flawless execution on symbolic operations tasks, highlighting the potential of our method in enabling LMs to possess true symbolic comprehension capabilities. Our code is publicly available at: [https://github.com/WENGSYX/Neural-Comprehension](https://github.com/WENGSYX/Neural-Comprehension). ## 1 Introduction Language models (LMs), particularly large language models (LLMs), have exhibited impressive performance on complex reasoning tasks (Brown et al., 2020; Zhang et al., 2022; Chowdhery et al., 2022; Wei et al., 2022; Ma et al., 2022). Despite this, the proficiency of LMs in tackling deterministic symbolic reasoning and rule-based tasks is still limited (Welleck et al., 2022). For example, GPT-3's arithmetic performance declines with higher digit numbers (Brown et al., 2020), and its mathematical accuracy is influenced by word frequency in the training data (Razeghi et al., 2022). Moreover, length generalization (Anil et al., 2022) remains a challenge even for 100-billion-parameter models, such as GPT-4 (Bubeck et al., 2023). We hypothesize that these limitations stem from LMs' dependence on implicitly learning rules from textual data. During the training process, the primary objective of implicit learning based on gradient updates is to minimize the loss associated with the given textual dataset.

Figure 1: The length generalization of T5 (with fine-tuning) (Raffel et al., 2020), GPT-3.5 (with few-shot) (Ouyang et al., 2022) and GPT-4 (with few-shot) on symbolic operations (Addition) tasks. The tasks included examples such as _"15673 + 3186"_ (length = 10). To evaluate the model's proficiency, we conducted tests on tasks ranging from 3 to 30 digits, with lengths greater than 10 digits being out of the distribution of the training data.

As illustrated in Figure 1, a simple length generalization experiment using addition tasks with varying numbers of digits highlights this limitation. Performance deteriorates as test length increases, indicating that these models strongly rely on statistical patterns in the data rather than capturing fundamental logical structures. This reliance on implicit learning constrains LMs' accuracy in executing symbolic operations tasks. As a result, their performance suffers when confronted with out-of-distribution and rule-intensive tasks that require a more profound understanding of abstract rules.
We propose a transformer-based language model framework, termed "Neural Comprehension", which synergistically integrates a pre-trained LM [Li et al., 2021b] and compiled neural networks (CoNNs) [Weiss et al., 2021] to achieve high accuracy and robust performance. CoNNs are neural networks in which the rules are explicitly coded through transformer-like structures and attention. A CoNN is therefore human-controllable, executes rules through artificially generated attention weights, and achieves perfect accuracy once the network is compiled. Neural Comprehension relies solely on neural networks, without requiring additional tools. It employs a token-by-token generation method, analogous to GPT-3, where each token can be generated by either the pre-trained LM or one of the CoNNs. The framework comprises a pre-trained LM and multiple sets of CoNNs. The implementation of the Neural Comprehension framework facilitates the integration of rule-intensive abilities and reasoning capabilities into LMs, endowing them with genuine symbolic comprehension skills. In this work, we conduct extensive experiments to evaluate the performance of our proposed Neural Comprehension method on a variety of rule-intensive tasks. Our experimental results demonstrate the effectiveness of our approach in comparison with existing state-of-the-art techniques, such as vanilla fine-tuning, few-shot learning, and Chain-of-Thought reasoning. Specifically, Neural Comprehension outperforms these methods in terms of accuracy, efficiency, and interpretability, showcasing its superiority in handling rule-intensive tasks. Our study presents a strong case for the deployment of Neural Comprehension in language models, highlighting its potential to transform the landscape of symbolic reasoning and language understanding capabilities. Contributions. Our main contributions are as follows: * We pioneer the development and implementation of language models that flawlessly execute rule-intensive symbolic operations while relying solely on neural networks. By employing a versatile and interpretable method, we successfully integrate CoNNs, which are explicitly coded and human-controllable, into the language model. Our method facilitates direct rule deduction without the need for learning from conditional probabilities, leading to a more robust and effective approach. (**Section** 3) * To expand the application field, we leverage the in-context learning ability of large language models to automatically generate CoNNs. Our method can be easily extended to various symbolic operations tasks. (**Appendix C**) * Our experimental results on controllable symbolic reasoning tasks and real-world numerical calculation tasks demonstrate the superior performance of our method in comparison to existing techniques. Notably, our language model achieves flawless execution on symbolic reasoning tasks. (**Sections** 5.1, 5.2, 5.3) * We also study the potential of combining multiple CoNNs and find that adding correlated CoNNs continuously increases performance, while adding uncorrelated CoNNs rarely leads to performance degradation. This provides a new approach for model fusion, enabling the model to easily acquire new knowledge. (**Section** 5.4)
## 2 Related Works As model parameters, training computation, and dataset sizes have increased, language models have gained new capabilities [Srivastava et al., 2022, Wei et al., 2022a], such as coding [Li et al., 2022b, Nijkamp et al., 2022], medical diagnosis [Li et al., 2021a, Xia et al., 2022], complex question-answering [Zhu et al., 2022, Daull et al., 2023], cross-language translation [Fan et al., 2021, Li et al., 2022a], few-shot learning [Brown et al., 2020, Perez et al., 2021], and thought chaining [Wei et al., 2022c, Weng et al., 2022]. However, these models also exhibit limitations, as they generally learn superficial patterns rather than the innate logic and rules of language. Consequently, humans often find it challenging to trust the results provided by language models [Sarker et al., 2021, Moore, 2022]. **Pre-trained Language Models** encompass those trained on general-purpose corpora [Lewis et al., 2019, Scao et al., 2022] and specialized symbolic tasks [Geva et al., 2020, Lewkowycz et al., 2022]. They primarily aim to capture statistical patterns in language, which limits their capacity for symbolic reasoning. Symbolic reasoning involves manipulating abstract symbols and logical rules to derive new knowledge [Shindo et al., 2021, Yang and Deng, 2021] and necessitates the ability to extrapolate to novel situations and reason about concepts absent from the training data [Fujisawa and Kanai, 2022]. Due to the constraints of gradient learning, neural networks face challenges in wholly solving symbolic reasoning problems. **In-Context Learning** has emerged as a promising approach to address these challenges [Dong et al., 2022] and closely approximates the predictors computed by gradient descent [Akyurek et al., 2022]. By prompting the language model to generate an explanation before generating an answer, the chain of thought [Wei et al., 2022c, Kojima et al., 2022, Zhang et al., 2022b, Zhou et al., 2022a] encourages the model to think sequentially. This technique has been employed in various numerical and symbolic reasoning tasks, such as scratchpad prompting [Nye et al., 2021] for length generalization [Anil et al., 2022] and utilizing the chain of thought to perform arithmetic operations like summing pairs of single digits with carry [Zhou et al., 2022b]. However, this approach often necessitates substantial computational resources, and achieving perfect accuracy remains challenging. **Augmented Language Models** have been proposed as an alternative, supplementing language models with external tools [Mialon et al., 2023]. Examples include generating Python code for numerical reasoning [Gao et al., 2022, Chen et al., 2022] or incorporating tool usage as a pre-training task [Schick et al., 2023]. However, external tools lack a unified framework with language models and instead rely on the normativity of program generation. Consequently, if a task demands higher-level abstraction or intricate and robust capabilities, such as Redefine [Wei et al., 2022b], Autoformalization [Wu et al., 2022], and Theorem Proving [Wu et al., 2020], the language model may struggle to solve it, even if it possesses the ability to operate external tools [Zhou et al., 2022b]. ## 3 Methods ### Preliminaries **In-Context Learning (ICL)**. Recent studies on ICL algorithms have shown that the learning process of language models within the ICL framework is analogous to gradient descent [Akyurek et al., 2022].
Specifically, transformer-based in-context learners implicitly implement standard learning algorithms by encoding smaller models in their activations and updating these implicit models as new examples appear in the context. However, these models face challenges with rule-intensive questions, as the rules represent abstract, high-dimensional knowledge that cannot be directly learned from the data, resulting in difficulties with implicit learning. **Compiled Neural Network (CoNN)**. The flexibility of neural networks to adjust their weights is a unique characteristic not found in the human brain. We propose incorporating CoNNs into LLM architectures to leverage this feature. A CoNN is a transformer-based neural network leveraging artificially compiled attention weights to execute rules. A transformer model comprises multiple attention layers and Multi-Layer Perceptron (MLP) layers. Each attention layer facilitates interactions between tokens, with the multiplication of query and key elements representing a "Select" operation in CoNN. The subsequent multiplication with value elements corresponds to an "Aggregate" operation. The MLP layer operates on each token itself and is referred to as the "Zipmap" operation [Weiss et al., 2021]. Utilizing these three operations (Select, Aggregate, and Zipmap) to represent the sequence-to-sequence process, we can convert this information into transformer weights [Lindner et al., 2023]. By stacking multiple attention layers, CoNNs can address various human-defined rule understanding problems, such as mathematical calculations and symbol operations 2. Footnote 2: **Appendix B** provides a more detailed description of CoNN. ### Neural Comprehension Language models excel in language understanding tasks, while CoNNs achieve absolute accuracy in rule-intensive operation tasks using attention weights guided by abstract rules. To combine the language understanding capabilities of existing language models with accurate problem-solving for rule-based tasks (e.g., computation), we propose Neural Comprehension, which integrates the language model's implicit learning parameters and the CoNNs' explicit learning parameters. In Neural Comprehension, CoNNs represent high-dimensional rules explicitly using multiple attention matrices and incorporate these with the original LM's attention matrix. As illustrated in Figure 2, we maintain the use of a decoder architecture to iteratively generate the subsequent context step by step. In particular, the language model encodes the context and produces the textual and reasoning-process context \(D(x)\) step by step, while CoNNs handle sequence transformations involving rules. When a rule-required operation emerges, the CoNN's attention is utilized to calculate specific values. The structure of Neural Comprehension is similar to MoE (Shazeer et al., 2017). For example, when calculating _364425-216582_, the pre-trained language model outputs _148843_, which is incorrect. However, the Subtraction CoNN corrects the result to _147843_ within the Neural Comprehension framework. This process is encoded into the context dynamically, improving the interpretability of intermediate results and the accuracy of the final result. Neural Comprehension combines the LM and CoNNs in a piecewise function to perform gradient updates. The LLM hidden state output is \(H_{L}=\left(H_{L_{1}}\cdots H_{L_{d_{L}}}\right)^{\top}\in\mathbb{R}^{d_{L}},\quad H_{L_{i}}\in(0,1)\), and the CoNN output is \(H_{C}=\left(H_{C_{1}}\cdots H_{C_{d_{C}}}\right)^{\top}\in\mathbb{R}^{d_{C}},\quad H_{C_{i}}\in(0,1)\)3.
Specifically, we perform model fusion by extending the mapping from the last hidden-layer representation to the vocabulary. Footnote 3: It is worth noting that \(d_{L}\) and \(d_{C}\) here refer to the vocabulary size of the model's decoded output. In this paper, for ease of implementation, the output vocabulary size of the CoNNs' decoder \(d_{C}\) is generally less than 100 due to limitations in computing resources (detailed information is shown in **Appendix Table 1**). Neural Comprehension combines the pre-trained LM's hidden state output, \(H_{L}\), and the CoNN's output, \(H_{C}\), using identity matrices \(I_{d_{L}}\) (for \(d_{L}\)) and \(I_{d_{C}}\) (for \(d_{C}\)) to concatenate them for model fusion. \[\hat{i}=\operatorname*{argmax}_{i}\left[\left(\begin{array}{c}I_{d_{L}},0\\ 0,\beta I_{d_{C}}\end{array}\right)\left(\begin{array}{c}H_{L},0\\ 0,H_{C}\end{array}\right)\right],\quad\beta\in\{0,1\} \tag{1}\] Within Neural Comprehension, CoNNs manage sequence transformations involving rules. When the model encounters a rule-required operation, a gating mechanism determines whether to use the CoNN's attention for computation. The gating mechanism assesses whether to maintain the initial output, provided by the pretrained language model, or to modify it using the CoNN, whereby the model corrects the answer by applying a gradient to the in-context learning function through \(\beta\). In Equation 1, since the elements of the CoNN's hidden state output \(H_{C_{i}}\) are in \(\{0,1\}\), when \(\beta=0\) the model adopts the original decoded token of the LM. When encountering a rule calculation problem, \(\beta=1\), and the model calculates the result by taking the maximum value of the CoNN's hidden layer output \(H_{C}\) and decodes the result from the CoNN's vocabulary. Regarding the selection of \(\beta\): since the CoNNs involved in this paper are relatively simple, it is determined by the forward computation results of the CoNN. For example, when we set up an Addition CoNN, we specify that the final result should be output when encountering '=', so when encountering '=', \(\beta=1\). However, for larger-scale CoNNs, we recommend that a learnable gating network determine \(\beta\).

Figure 2: The architecture of Neural Comprehension.

### Gradient Modification in Neural Comprehension To better appreciate the benefits of our method in handling rule-intensive tasks and improving accuracy, it is crucial to understand the gradient perspective of ICL. The optimization process in ICL can be viewed as a search for suitable gradients to minimize the loss function. Due to the implicit learning nature of standard ICL methods, gradients learned from data may not always be ideal for addressing rule-intensive tasks. Therefore, our proposed method introduces an explicit learning component to provide more appropriate gradient updates for such tasks, ultimately leading to enhanced overall performance. In this section, we focus on elucidating the changes in the gradient introduced by the Neural Comprehension model. The gradient of the model during the execution of ICL can be partitioned into two categories based on the origin of the gradients: \[\text{Gradient}=\left\{\begin{array}{ll}I_{d_{1}}&\text{Text}\\ I_{d_{2}}&\text{Rule}\end{array}\right. \tag{2}\] Here, \(I_{d_{1}}\) represents the gradients derived implicitly from the language model (LM) and corresponds to the text-based learning aspect of the model. Conversely, \(I_{d_{2}}\) represents the gradients explicitly derived from the CoNNs, encoding rule-based knowledge.
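Before continuing with the gradient analysis, note that the token-level fusion rule of Equation (1) is compact enough to sketch in a few lines. The following is a minimal illustration under our reading of the paper, not the released implementation; since \(H_{L_i}\in(0,1)\) and \(H_{C_i}\in\{0,1\}\), the CoNN entry wins the argmax exactly when \(\beta=1\):

```python
# Minimal sketch of Eq. (1): concatenate the LM probabilities with the gated
# CoNN output and take the argmax over the joint vocabulary [LM | CoNN].
import numpy as np

def neural_comprehension_step(h_lm, h_conn, beta):
    """h_lm: (d_L,) LM token probabilities, each in (0, 1).
    h_conn: (d_C,) CoNN output with entries in {0, 1} (one-hot when it fires).
    beta: gate in {0, 1}; e.g. set to 1 when an Addition CoNN sees '='."""
    fused = np.concatenate([h_lm, beta * h_conn])
    return int(np.argmax(fused))  # index < d_L -> LM token, else CoNN token

probs = np.array([0.7, 0.2, 0.1])        # toy LM distribution
conn = np.array([0.0, 1.0, 0.0, 0.0])    # CoNN fired its second symbol
assert neural_comprehension_step(probs, conn, beta=0) == 0  # keep LM token
assert neural_comprehension_step(probs, conn, beta=1) == 4  # CoNN token wins
```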
The Neural Comprehension model integrates both gradient sources to optimize the ICL process. In linear regression problems, the loss function can be expressed as a piecewise function according to Equation 1; here \(P_{1}(x)\) is the LLM and \(P_{2}(x)\) is the CoNN, and the in-context learner can be separated into two processes: \[L=\left\|y-\beta^{\top}x\right\|^{2} \tag{3}\] \[=\left\{\begin{array}{ll}\left\|y-\beta_{1}^{\top}x\right\|^{2}&x\in P_{1}(x)\\ \left\|y-\beta_{2}^{\top}x\right\|^{2}&x\in P_{2}(x)\end{array}\right. \tag{4}\] Based on the partitioned gradient defined in Equation 2, the overall gradient of the Neural Comprehension model can be obtained by computing the individual gradients with respect to the respective \(\beta\): \[\underbrace{\frac{\partial L}{\partial\beta}}_{\text{Gradient}}=\left\{\begin{array}{ll}\frac{\partial L}{\partial\beta_{1}}&x\in P_{1}(x)\\ \frac{\partial L}{\partial\beta_{2}}&x\in P_{2}(x)\end{array}\right. \tag{5}\] This partitioning allows the Neural Comprehension model to specifically address the gradient requirements of both implicit learning via the LM and explicit learning via CoNNs. It is crucial to note that CoNNs are designed to minimize the loss associated with rule-based tasks, essentially providing an optimal gradient for tasks involving rule-intensive operations. This leads to a substantial improvement in the model's accuracy on rule-based tasks, as the gradient updates provided by CoNNs are more suitable for rule learning than the gradients initially available from the LM. By amalgamating both gradient sources, the Neural Comprehension model achieves a more refined optimization of in-context learning. Additionally, from the perspective of gradients, our approach surpasses conventional data-driven implicit learning techniques, as it integrates explicit rule-based learning mechanisms that exhibit more suitable gradient updates for rule-intensive questions. The Neural Comprehension model effectively balances the need for implicit and explicit learning within the ICL framework, leading to enhanced overall performance in terms of accuracy and interpretability. ## 4 Experimental Settings We primarily explore the capacity of language models to address symbolic reasoning tasks, concentrating on three areas: symbolic operations, symbolic reasoning, and arithmetic reasoning. **Symbolic Operations.** Building upon the approaches developed by Anil et al. (2022) and Qian et al. (2022), we examine the following tasks: Parity, Reverse, Addition and Subtraction. These tasks do not require complex text understanding, but only require faithfully implementing symbolic operations and outputting the corresponding results. **Symbolic Reasoning.** We employ the experimental framework of Wei et al. (2022) for two tasks, Last Letter Concatenation and Coin Flip. These tasks require a combination of language understanding and rule comprehension abilities. **Arithmetic Reasoning.** To evaluate the method's ability to generalize from symbolic operations to arithmetic reasoning in addition and subtraction tasks, we use five established arithmetic reasoning datasets: AddSub (Hosseini et al., 2014), SingleEq (Koncel-Kedziorski et al., 2015), MultiArith (Roy and Roth, 2016), GSM8K (Cobbe et al., 2021), and SVAMP (Arkil et al., 2021). Additionally, we introduce the AddSub\({}^{+}\) dataset, containing tasks of varying complexity based on the number of digits involved in arithmetic operations, ranging from 1-digit addition to 20-digit addition/subtraction tasks.
## 5 Experiment and Result ### Symbolic Tasks In this study, we conduct a length generalization experiment (Anil et al., 2022) to examine the distinctions between Neural Comprehension and learning-based methods, as depicted in Figure 3. Our experimental design encompasses \(1000\times 40\) independent test sets, comprising problems with varying digit lengths from 1 to 40 digits.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline **Techniques** & **In-distribution** & **Out-of-distribution** & **Time and Space Complexity** & **Interpretability** \\ \hline Vanilla Fine-tune (For LM) & ✓ & ✗ & ✓ & ✗ \\ Vanilla Few-shot (For LLM) & ✓ & ✓ & ✓ & ✗ \\ Scratchpad (Anil et al., 2022) & ✓ & ✓ & ✗ & ✓ \\ Algorithmic (Zhou et al., 2022) & ✓ & ✓ & ✗ & ✓ \\ **Neural Comprehension (Ours)** & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on symbolic operations tasks of five techniques that language models admit: (1) Vanilla Fine-tuning, (2) Vanilla Few-shot, (3) Scratchpad (Chain-of-Thought reasoning), (4) Algorithmic (Chain-of-Thought reasoning) and (5) Neural Comprehension. We find that the first four learning-based methods have different modes of failure regarding in- and out-of-distribution coverage for symbolic operations. Neural Comprehension, by contrast, has strong advantages in terms of length generalization, efficiency, and interpretability. ✗ signifies poor, ✓ signifies nontrivial, ✓ signifies near-perfect performance. (*) Refers to task-dependency.

Figure 3: Comparison of Neural Comprehension and other implicit learning-based methods on symbolic operations tasks to test length generalization performance. Here, the T5 model uses the Vanilla Fine-tune method for learning, and LLMs use the few-shot learning method. In Neural Comprehension, each task has a different CoNN, namely Parity, Reverse, Addition, and Subtraction.

Problems with 10 to 20 digits are provided for training the methods based on implicit learning; during the testing phase, this range is called In-Dist. Furthermore, we present results for both the Scratchpad [10] and Algorithmic [15] approaches. The results of our experiment demonstrate that the Vanilla Fine-tune (red lines) method performs optimally on the in-domain (10-20 digit) training set, while its performance deteriorates on both simpler and more intricate problems. This finding suggests that the absence of relevant samples in the training set may cause gradient-descent-based language models to underperform on both simpler and more complex tasks. As further discussed in **Appendix D.1**, this phenomenon can be attributed to the inherent generalization limitations of statistical models and the position bias of language models. Considering the Vanilla Few-shot method (green lines), we determine that its performance is less impacted by the prompt sample range than Vanilla Fine-tune. Large language models, which are trained on extensive text corpora, excel at solving more straightforward problems such as symbolic operations within a ten-digit range. Nevertheless, performance remains below par for test sets with more than ten digits, even when prompted with 10-20 digit samples. Observing CoT-like methods (we use GPT-3.5), including Scratchpad and Algorithmic, unveils their robust length generalization capabilities. Scratchpad works by requiring large language models to record intermediate steps, while Algorithmic employs a similar approach to record the carry operations involved in the addition process.
This can be primarily attributed to their proficiency in decomposing complex problems into smaller incremental steps and maintaining intermediate states. However, these methods necessitate substantial computational resources, and extending the length beyond the input limit of the model becomes challenging. Our study reveals that Neural Comprehension attains remarkably high accuracy in symbolic operations. This implies that Neural Comprehension, unlike conventional methods, does not rely on training data and remains unaffected by discrepancies in input lengths for in-distribution and out-of-distribution data. Consequently, it alleviates the requirement for step-by-step work tracking, and language models with CoNNs need relatively few computational steps to execute sequence operations directly. Encoding rules into neural network modules grants us greater interpretability, enabling language models to flawlessly perform purely symbolic operation tasks. ### Symbolic Reasoning In this section, we investigate the performance of Neural Comprehension in terms of symbolic reasoning capabilities. Our hypothesis is that, although pretrained Language Models (LMs) demonstrate strong language understanding abilities, they lack the capacity to deduce and comprehend rules in symbolic reasoning tasks. Thus, we aim to evaluate whether the incorporation of compiled neural networks in the form of CoNNs can address this limitation and improve the LM's symbolic reasoning abilities. To assess the performance of the rule comprehension component (CoNNs) in symbolic reasoning, we devise an experiment that measures the model's accuracy using intermediate processes and represents them in a "Chain of Thought"-like manner. In doing so, the experiment decomposes language understanding and rule comprehension explicitly into simpler outputs, avoiding the complexities of reasoning and additional error propagation in the models. Example outputs from this approach can be found in **Appendix F**.

Figure 4: The iterative process of gradient descent during training. The blue line represents a language model that incorporates Neural Comprehension, and the red line represents the original language model. Additionally, we provide Direct, which is a direct prediction of the final result, as a reference.

We observed that Neural Comprehension improves the symbolic reasoning capabilities of pre-trained language models in most cases (Neural Comprehension almost always outperforms Vanilla Fine-tune in Figure 4), and converges faster. This observation suggests that the introduction of compiled neural networks has a positive impact on pretrained LMs, addressing rule comprehension limitations in symbolic reasoning tasks. ### Arithmetic Reasoning Arithmetic reasoning serves as a suitable testbed for evaluating language models and their ability to address real-world problems. In this study, we examine the AddSub\({}^{+}\) dataset variants that involve different digit lengths, utilizing the Addition and Subtraction models from the CoNNs family. Notably, the capabilities of Neural Comprehension extend beyond these tasks, as CoNNs can also simulate calculators that support multiplication and division operations, and potentially perform linear algebra computations or even in-context learning algorithms that employ backpropagation (Giannou et al., 2023).
To evaluate the impact of Neural Comprehension on arithmetic reasoning, we compare the outputs of vanilla CoT language models and those incorporating Neural Comprehension, using the vanilla CoT baseline as a reference. As demonstrated in Figure 5, the vanilla CoT model struggles to extrapolate and solve arithmetic problems involving longer digit lengths. However, integrating Neural Comprehension significantly improves the performance of language models on such complex arithmetic tasks. Since we only incorporated the Addition and Subtraction CoNNs, we attribute the observed performance enhancement to the increased computational accuracy of the language model. For further evidence, we present additional experimental results on widely-used arithmetic reasoning datasets in **Appendix D.2**, which reinforce the benefits of using Neural Comprehension over the vanilla CoT model. In comparison to language models employing external tools like PAL (Gao et al., 2022), our findings suggest that generating accurate code for the less code-trained GLM-130B model might be challenging for PAL, resulting in performance levels inferior to those of the vanilla CoT. This outcome indicates that language models offer greater flexibility, whereas external tools may have difficulties in more complex or unique situations. The integration of compiled neural networks appears to be a more promising approach, as evidenced by the performance improvements observed in our experiments. Specifically, when language models encounter intricate arithmetic tasks that involve nested operations or multi-step calculations, the integrated CoNNs can efficiently handle these operations, allowing the language model to focus on higher-level reasoning. In contrast, the use of external tools often requires explicit coding and may not generalize effectively to more complicated scenarios. In conclusion, our results demonstrate that incorporating compiled neural networks into language models provides a more robust and versatile solution for arithmetic reasoning and related challenges, underlining the superiority of this approach over external tools such as PAL.

Figure 5: We conducted simulations of the AddSub dataset with varying digits by modifying the "lEquations" parameter. We then tested the performance of three LLMs with and without Neural Comprehension in generating CoT outputs for AddSub\({}^{+}\), and we report the solve rates of the three LLMs compared with the solve rates obtained using additional tools (PAL (Gao et al., 2022)).

### Ablation and Analyses: Module Combination for Neural Comprehension Efficiently deploying multiple CoNNs is crucial for achieving exceptional Neural Comprehension performance. As depicted in Figure 6, the amalgamation of distinct CoNNs, tailored for both symbolic and arithmetic reasoning tasks within the language model framework, can lead to remarkable benefits. It is observed that integrating pertinent CoNNs bolsters the performance of the initial language model, whereas the inclusion of unrelated modules rarely causes detrimental effects, regardless of whether single or multiple CoNNs are combined. This can be ascribed to the refined design of the Neural Comprehension framework, which ensures the precise execution of assigned tasks by CoNNs without interference from irrelevant modules. Each CoNN module is adept at generating the appropriate output when needed, thereby preventing the emergence of erroneous results from unrelated components.
Importantly, as seen in **Appendix B.3**, the parameter count for each CoNN module ranges from 1/1000 to 1/1000000 of that of GPT-3, and the experiments in **Appendix D.3** show that the inference latency of the Neural Comprehension framework only increases by 1%-3% compared to the vanilla model. This observation underscores the remarkable scalability of the Neural Comprehension framework, which possesses the capability to not only accommodate existing knowledge concepts but also assimilate novel ones as the number of CoNNs expands. Theoretically, the integration of tens of thousands of CoNN modules within language models holds the potential to foster a comprehensive understanding of concepts. ## 6 Conclusion We have observed that pretrained language models lack an intrinsic comprehension of rule-based concepts, and we have explored how Neural Comprehension can integrate compiled neural networks into the language model framework in a simple and generic manner. We demonstrated the superiority of our approach over existing learning-based methods. Without external tools, our approach enables language models to perform nearly perfect symbolic operations and can be applied to more realistic arithmetic reasoning tasks. Our study opens new avenues for language models, such as the investigation of more complex CoNNs related to higher-order abstract reasoning, the development of more advanced gating mechanisms for smoother integration, and the exploration of other domains in which Neural Comprehension could exhibit significant advantages. Furthermore, our framework provides a foundation for future work on unifying implicit and explicit learning in language models and facilitating their seamless integration.

Figure 6: The performance of multiple different module combinations within the Neural Comprehension framework. The left side shows the effect of combining a pre-trained language model with a single CoNN, while the right side shows the impact of combining a language model with multiple CoNNs. For different tasks, we categorize CoNNs as Correlated (green) and Uncorrelated (red), indicating whether the CoNN is related to the current task or not.
2304.07200
EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks
Event-based sensors have recently drawn increasing interest in robotic perception due to their lower latency, higher dynamic range, and lower bandwidth requirements compared to standard CMOS-based imagers. These properties make them ideal tools for real-time perception tasks in highly dynamic environments. In this work, we demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects. We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency, as well as a learning-based approach that allows real-time inference of a confidence-enabled control signal to the robot. To validate our approach, we present an experimental catching system in which we catch fast-flying ping-pong balls. We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, with a velocity of up to 13 m/s even on compute-constrained embedded platforms such as the Nvidia Jetson NX.
Ziyun Wang, Fernando Cladera Ojeda, Anthony Bisulco, Daewon Lee, Camillo J. Taylor, Kostas Daniilidis, M. Ani Hsieh, Daniel D. Lee, Volkan Isler
2023-04-14T15:23:28Z
http://arxiv.org/abs/2304.07200v1
# EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks ###### Abstract Event-based sensors have recently drawn increasing interest in robotic perception due to their lower latency, higher dynamic range, and lower bandwidth requirements compared to standard CMOS-based imagers. These properties make them ideal tools for real-time perception tasks in highly dynamic environments. In this work, we demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects. We introduce a lightweight event representation called _Binary Event History Image_ (BEHI) to encode event data at low latency, as well as a learning-based approach that allows real-time inference of a confidence-enabled control signal to the robot. To validate our approach, we present an experimental catching system in which we catch fast-flying ping-pong balls. We show that the system is capable of achieving a success rate of \(81\%\) in catching balls targeted at different locations, with a velocity of up to \(13\) m/s, even on compute-constrained embedded platforms such as the Nvidia Jetson NX. Visual Tracking, Sensor-based Control ## I Introduction Biological systems are able to estimate and catch objects moving at very fast speeds. For example, professional table tennis players can hit table tennis balls at speeds greater than 25 \(\frac{m}{s}\), and Major League Baseball catchers can catch fast balls flying toward them at a speed of 40 \(\frac{m}{s}\)[1]. On the other hand, state-of-the-art robots can only catch objects at lower speeds using vision-based sensors, as shown in Tab. I [2, 3, 4, 5, 6]. This performance difference can be mainly explained by the limitations of robot perception using frame-based cameras: with high-speed motion, traditional cameras can only receive a few frames within the flight time of the object. A naive approach would be to increase the frame rate of the camera. However, there is an inherent trade-off between frame rate, bandwidth, and latency. Increasing the frame rate and resolution of the sensor would lead to a larger volume of data to process, and thus incur longer latencies that are detrimental to the performance of the catching system. The trade-off between high latency and high computational cost for traditional cameras is a critical obstacle on the path to achieving human-level catching performance. Many of these problems can be avoided by using bio-inspired event cameras, which are designed to emulate biological vision to enable fast perception. Their high temporal sampling resolution, low bandwidth, high dynamic range, and asynchronous capabilities make them ideal sensors for dynamic environments. In this work, we address the question: _can we narrow the gap between robots and humans in vision-based catching tasks by using event-based vision?_ Dynamic Vision Sensors (DVSs) have previously been used in perception systems for dodging and avoidance of dynamic obstacles [7]. We focus on the task of catching fast balls shot towards the camera, which is a harder task as it requires both precision and speed. Since actuators have mechanical limitations, the time allocated for perception is bounded. Under these circumstances, we have significant constraints on the latency of the perception system. Additionally, the deadline to make a control decision depends on the velocity and size of the incoming object [8].

Figure 1: Experimental setup used to validate our approach: an event-based camera observes events from the incoming ball. These events are packaged into BEHI images and sent to the lightweight prediction network. The prediction network produces trajectory estimations with uncertainty. The robust motion estimator takes the network output and predicts the impact location, which is passed to the motor controller for commanding a linear actuator to catch the ball.
**We present the first coupled event-based perception-action system capable of catching balls flying at a speed of up to \(13\) m/s.** At the heart of our approach is a **novel, lightweight representation which can accurately encode event history**. The system is capable of performing real-time inference of the \(x\) impact location and issuing the appropriate motion command to a linear rail to intercept the incoming ball. In summary, our contributions are: 1. A new lightweight representation for events that significantly reduces the computational cost of real-time event representation processing. This representation outperforms both event-volume and grayscale-image-based perception baselines, while achieving considerably lower latency. 2. A compact and fast event-based neural network and a robust motion estimation algorithm. The average error of the impact location is 1.9 cm. 3. An end-to-end system to perform fast ball catching with visual perception, achieving an average success rate of \(81\%\) on unseen trajectories with a top speed of \(13\) m/s. ## II Related Work ### _Object Catching in Robotics_ There have been multiple attempts to develop robots capable of catching a fast ball by predicting its trajectory. One of the earlier works in catching a moving object is Mousebuster [9], which intercepts an object moving at \(0.7\) m/s using a robot manipulator. More recently, visual-servoing methods [2] use pixel coordinates to directly control the arm to catch an object whose flight trajectory takes approximately \(1.5\,\)s. Some works use high-speed sensors to reduce the perception latency. Sato et al. [4] use a high-speed camera running at \(500\,\)Hz to better estimate the trajectory of the object. However, achieving such a frame rate requires a high-bandwidth PCI-E interface to transfer the data. A fast color segmentation algorithm is used to estimate the position of the ball. Another relevant area of research in object catching is sports robots. Researchers have attempted to build robots to play table tennis against real human players. The robot system in [10] is able to hit a vertically moving ball at \(1.5\,\)m/s. Monocular cameras coupled with a small-baseline motion were used in [6] to regress the trajectory of a ball. Recent techniques use deep learning to encode visual images of an object trajectory and predict its future location [6]. Learning-based approaches are used to predict the full trajectory of the ball given partial observations [3]. Although the trajectory estimator in [3] works with a ping-pong ball that travels as fast as 7 m/s, experiments are performed only in post-processing. Despite these previous attempts, the presented systems are usually too slow for real-time catching of balls returned by a real human. Finally, while there are videos online about robots catching balls, it is hard to assess the performance and generality of these instances. We observe that most pieces of work in this area either 1) target low-speed motion, 2) require an external motion capture system, or 3) rely on high-bandwidth and intensive computation.
In contrast, our work focuses on intercepting balls at higher speeds, using low-bandwidth event representations, capable of running in resource-constrained systems. We show a comparison between previous work and our event-based catcher in Tab. I. Although the tasks in these papers are not exactly the same, the table captures the main challenge we address in building high-speed catching/intercepting systems.

\begin{table} \begin{tabular}{l|c|c|c|c} & Real & Real-time & Monocular & Speed \\ \hline Deguchi et al. [2] & ✓ & ✓ & ✗ & \(<5\) m/s \\ Cigliano et al. [6] & ✓ & ✓ & ✓ & \(\sim 4\) m/s \\ Rapp [10] & ✓ & ✓ & ✗ & \(\sim 0.5\) m/s \\ Lin \& Huang [3] & ✗ & ✗ & ✓ & \(\sim 7\) m/s \\ Sato et al. [4] & ✗ & ✗ & ✓ & \(\sim 1.2\) m/s \\ Zhang et al. [5] & ✗ & ✗ & ✓ & \(\sim 5\) m/s \\ Ours & ✓ & ✓ & ✓ & \(\sim 13\) m/s \\ \end{tabular} \end{table} TABLE I: Feature comparison of select existing catching and table tennis robots.

### _Event Cameras_ Event-based cameras measure asynchronous changes in _log-light intensity_. These cameras output a set of events \(E=\{e_{1},e_{2},...e_{n}\}\), where for each event \(e_{i}=\{x_{i},y_{i},p_{i},t_{i}\}\), \((x_{i},\,y_{i})\) correspond to the camera pixel location, \(p_{i}\) is the polarity (sign change of the log-light intensity), and \(t_{i}\) is the time at which the light change occurs. Event-based 2D tracking approaches directly estimate the motion parameters by optimizing over image gradients [11]. Tracking can be done by contrast maximization [12, 13] or a globally optimal search [14]. [15] showcases the advantages of using event cameras compared to traditional cameras to track bouncing balls using long short-term memory (LSTM) architectures. Related to our task, [16] detects ball positions by applying a Hough transform to identify full circles projected onto the image frame. These methods show promising results in tracking objects in 2D, but bringing the motion into 3D space remains an unsolved task for event cameras. Learning-based approaches have been proposed to directly learn depth from monocular event data [17]. Attempts have also been made to directly learn the structure of the scene and the movement of the camera from event cameras [18]. Another line of research learns dense time-to-collision (TTC) from monocular event sequences [19]. In addition to these perception-focused works, end-to-end learning of control inputs from event data has enabled complicated control tasks such as UAV navigation [20]. Recently, event-based cameras have been used for high-speed dodging [7, 21, 22]. Early demonstrations of DVS catching have been performed for the task of goal keeping [23]. In this paper, we present results which further the state of the art in this line of inquiry. We take on the challenging task of catching high-speed objects, which generally requires more precision in estimating the impact location than just dodging objects. ## III Method ### _Binary Event History Images_ A key challenge in dealing with event data is choosing the appropriate representation. To reduce the computational cost while preserving the necessary trajectory information of a flying object, we propose using a compact representation of events called a _Binary Event History Image (BEHI)_. For a list of events \(\{e_{1},e_{2},...,e_{N}\}\), a BEHI at time \(T\) is defined as: \[I_{T}(x,y)=\Big(\sum_{i=1}^{N}[x_{i}=x,y_{i}=y,t_{i}<T]\Big)>0.
\tag{1}\] The BEHI highlights the trajectory of the ball projected onto the image plane because only pixels that have changed during the flight time are activated. In addition, such images have the same number of channels independently of the total time range of the events. Compared to grayscale images, whose history requires heavy concatenation, a BEHI keeps a constant-sized image that preserves the trajectory information. BEHIs are lightweight representations for event cameras: for sensors of resolution \(W\times H\), the size required for a BEHI is only (\(W\times H\)) bits. In contrast, event volumes [18] require (\(C\times 2\times H\times W\times 32\)) bits, where \(C\) is the number of channels of the event volume. On the other hand, if we use grayscale images as input, the size requirement is (\(H\times W\times N\times 8\)) bits, where \(N\) is the number of frames used to estimate the trajectory information. ### _Learning Object Trajectory with Uncertainty_ We adopt a learning-based approach to estimate the final impact location by predicting \(x\) positions and time-to-collision (TTC) along the flight trajectory. An important aspect of our problem consists of acquiring early event data on the incoming object, when its size in the camera is small and perception information is uncertain. To properly deal with this uncertainty, one must incorporate it into the model to produce robust perception. At the beginning of each trajectory, the motion is projected onto a small number of pixels in the image. This could make initial predictions misleading. As the object gets closer to the camera, the motion becomes more apparent and therefore provides a more robust estimate. However, due to the limited number of data points available to the perception front-end before having to perform a catch maneuver, we would like to use all the available data to estimate the impact position, even if the initial data usually contain a certain amount of noise. Therefore, we adopt a confidence-driven approach to estimate the impact location by a weighted least-squares regression. Traditionally, such uncertainty is obtained by filtering techniques such as the Kalman Filter [24]. Inspired by [25], which uses IMU-only data to estimate displacement and uncertainty, our network learns to minimize a log-likelihood function in order to learn the uncertainty directly. Instead of producing a single-scalar prediction, we output the normal distribution \(\mathcal{N}(d_{i},\sigma_{i})\) of the prediction, where \(d_{i}\) is the predicted object location in the world frame and \(\sigma_{i}\) is the standard deviation of the distribution, inversely correlated with confidence. ### _Robust Prediction Network_ In this section, we describe the prediction network used to estimate the trajectory, as well as the prediction algorithm that outputs the impact location. Given a stream of events, our goal is to estimate the time and impact location of a single moving object in the scene. It is difficult to track the object using traditional window-based approaches for a number of reasons. First, detecting arbitrary objects and tracking their scales are challenging tasks, as the shape of the ball is highly dependent on the motion itself [26]. Second, applying naive tracking algorithms becomes increasingly difficult when there is external noise. Finally, without depth information, an accurate relative scale needs to be continuously estimated. To overcome these challenges, we propose using a lightweight network to learn these tasks simultaneously.
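As a concrete illustration of Eq. (1), the following minimal numpy sketch (ours, not the authors' released code) rasterizes an event list into a BEHI:

```python
# Minimal numpy sketch of Eq. (1): a BEHI marks every pixel that fired at
# least one event strictly before time T, regardless of polarity, so the
# whole flight history fits in one (H x W) binary image (1 bit per pixel).
import numpy as np

def behi(xs, ys, ts, T, height, width):
    """xs, ys: integer pixel coordinates; ts: event timestamps (seconds)."""
    img = np.zeros((height, width), dtype=bool)
    mask = ts < T                      # keep only events before time T
    img[ys[mask], xs[mask]] = True     # any count > 0 maps to 1
    return img

# Toy usage: three events, two of them before T = 0.5.
xs = np.array([3, 4, 5]); ys = np.array([7, 7, 8])
ts = np.array([0.1, 0.3, 0.9])
print(behi(xs, ys, ts, T=0.5, height=10, width=10).sum())  # -> 2
```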
For a given BEHI from the camera, the network predicts three values: 1) the **current ball x-location** in the camera frame, 2) the **prediction uncertainty**, and 3) the **TTC**. Previous efforts have been made to predict the impact location and TTC directly from a dart trajectory [22]. However, such regression tasks require a significant amount of training data, since there is only a single pair of such ground-truth values for each event sequence. In [22], a carefully designed data augmentation method was applied using random shifting and rotation. In this work, we overcome this problem by introducing an explicit motion model for the 3D trajectory, so that the network is supervised with higher-frequency ground truth data. Due to the real-time nature of the system, our network must run at low latency. After obtaining the BEHI input, we resize it to \(250\times 250\) before feeding it into the network. We chose a regression network that has only 4 convolutional layers (with batch normalization) and 1 linear layer, with a total of 370,947 parameters. Each convolutional layer has 64 channels with \(3\times 3\) kernels. We used a stride of 2 to decrease the resolution. Before the last linear prediction layer, average pooling is used to spatially aggregate the features. Rectified Linear Units (ReLU) are used after each BatchNorm layer as activation functions. This simple design allows us to run inference in batches of 12 within \(20\,\mathrm{ms}\). The small network demonstrates competitive performance in learning motions. We trained the network for 100 epochs with a learning rate of \(2\cdot 10^{-4}\) and a batch size of 16. All networks mentioned in Section V are trained with the same architecture and hyperparameters.

Figure 2: Binary Event History Image (BEHI) generated from a sample trajectory. From left to right, top to bottom, we show the BEHI progression of a ball flying towards the camera.

#### Iii-C1 Object Position Regression Loss Given the output position \(d_{i}\) and the ground truth object location \(\hat{d}_{i}\), the regression loss forces the network to predict the mean of the predicted normal distribution. \[\mathcal{L}_{L1}^{POS}(d,\hat{d})=\frac{1}{n}\sum_{i=1}^{n}|d_{i}-\hat{d}_{i}| \tag{2}\] #### Iii-C2 Object TTC Regression Loss Given the output time \(ttc_{i}\), an L1 regression loss is used to supervise the TTC prediction from the current event timestamp, which is defined as the timestamp of the last event in the current BEHI. \[\mathcal{L}_{L1}^{TIME}(ttc,\hat{ttc})=\frac{1}{n}\sum_{i=1}^{n}|ttc_{i}-\hat{ttc}_{i}| \tag{3}\] #### Iii-C3 Negative Log-Likelihood Loss Given the output position \(d_{i}\) and standard deviation \(\sigma_{i}\) from the network, and the ground truth object location \(\hat{d}_{i}\), the negative log-likelihood loss can be computed as: \[\mathcal{L}_{NLL}(d,\sigma,\hat{d})=-\frac{1}{n}\sum_{i=1}^{n}\log\Big(\frac{1}{\sigma_{i}\sqrt{2\pi}}\exp\Big(-\frac{1}{2}\Big(\frac{d_{i}-\hat{d}_{i}}{\sigma_{i}}\Big)^{2}\Big)\Big) \tag{4}\] \[=\frac{1}{2n}\sum_{i=1}^{n}\log(\sigma_{i}^{2})+\Big(\frac{d_{i}-\hat{d}_{i}}{\sigma_{i}}\Big)^{2}+cst \tag{5}\] We jointly optimize the loss function by taking a weighted sum of the three functions above. The log term in \(\mathcal{L}_{NLL}\) makes the network unstable if the initial mean estimate \(d_{i}\) is inaccurate. The ball location could be learned with the negative log-likelihood loss term alone; however, the convergence of the log-likelihood function depends on a good initial mean estimate and is vulnerable to outliers.
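For concreteness, here is a minimal PyTorch sketch of the regressor described above. This is our reconstruction, not the authors' code; the exact layer widths needed to reach 370,947 parameters are not fully specified, so the sizes below are assumptions beyond the stated 64 channels and \(3\times 3\) kernels:

```python
# Sketch of the described regressor: 4 stride-2 conv layers (BatchNorm + ReLU),
# spatial average pooling, and one linear layer emitting (d, log sigma, ttc).
import torch
import torch.nn as nn

class BEHIRegressor(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        layers, in_ch = [], 1  # a BEHI is a single binary channel
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, channels, 3, stride=2, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 3)

    def forward(self, x):  # x: (B, 1, 250, 250) resized BEHIs
        f = self.features(x).mean(dim=(2, 3))      # spatial average pooling
        d, log_sigma, ttc = self.head(f).unbind(1)
        return d, log_sigma.exp(), ttc             # exp keeps sigma positive

net = BEHIRegressor()
d, sigma, ttc = net(torch.zeros(12, 1, 250, 250))  # batch of 12, as in the text
```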
Therefore, we add the L1 regression loss to help stabilize network training, with \(\lambda=0.1\):

\[\mathcal{L}=\mathcal{L}_{L1}^{POS}+\mathcal{L}_{L1}^{TIME}+\lambda\cdot\mathcal{L}_{NLL} \tag{6}\]

### _Estimating Impact Location with Uncertainty_

Given a list of noisy object position predictions \(x_{i}\) with standard deviations \(\sigma_{i}\), predicted TTCs \(ttc_{i}\), and event frame timestamps \(t_{i}\), the goal is to recover the impact location \(x_{impact}\). To reduce the computational cost of the prediction, we assume that the trajectory model is linear with respect to time.
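A minimal sketch of this uncertainty-weighted estimate (our illustration, assuming weights \(1/\sigma_i^2\) for the linear fit and a simple average of the per-frame collision-time estimates; the original fusion may differ in detail):

```python
import numpy as np

def estimate_impact(x_pred, sigma, t, ttc):
    """Fit x(t) = a*t + b by least squares weighted with 1/sigma^2,
    then evaluate the fitted line at the estimated collision time."""
    w = 1.0 / sigma ** 2
    A = np.stack([t, np.ones_like(t)], axis=1)
    W = np.diag(w)
    # weighted normal equations: (A^T W A) [a, b]^T = A^T W x
    a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ x_pred)
    t_impact = np.mean(t + ttc)  # each frame predicts its own collision time
    return a * t_impact + b
```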
#### IV-A2 Processing computer

We use an NVIDIA Jetson NX embedded system-on-module (SOM) with a 384-core Volta GPU. The processing computer collects events from the DVS, generates the BEHIs, runs the network inference, and sends the position command index to the motor controller through its GPIO. We use the Jetson for two reasons: 1) the fast GPIO is crucial in our perception-action loop, with an input/output latency below \(1\,\)ms; 2) the platform requires only \(15\,\)W, which is important in the context of low-power perception with event cameras. The only disadvantage of the Jetson NX is that this platform only allows for a _single-shot_ inference, and thus it requires a trigger signal.

#### IV-A3 Controller and actuator

We use an off-the-shelf Festo CMMP-AS motor controller, with an EMME-AS motor and an EGC toothed belt axis. The actuator is located perpendicular to the motion of the ball. It has a motion span of \(600\,\)mm (\(300\,\)mm from the center position), and it is capable of achieving a velocity of \(10\,\)m/s and an acceleration of \(50\,\)m/s\({}^{2}\). The target motions are precomputed in a look-up table at intervals of \(100\,\)m/s using a constant-jerk trajectory planner. We attached a rim with a net that has a height of \(400\,\)mm and a width of \(240\,\)mm (including the rim). The hard latency constraint of our mechanical system (see Figure 3) comes from the actuator: the constant-jerk trajectory planner is only capable of \(\pm 1300\,\)m/s\({}^{3}\), while we would like to reach distances as far as \(400\,\)mm from the center of the linear actuator, including the rim. With our controller, it takes \(160\,\)ms to move \(283\,\)mm. As the whole motion of the ball takes at least \(310\,\)ms (see Sec. IV-B), this leaves us with \(150\,\)ms to execute our perception and action algorithms.

#### IV-A4 Perception-Action

The main sensor of our perception system is an Inivation DVXplorer DVS, featuring a Samsung DVS Gen3 sensor [28]. The DVS is located in front of the actuator, pointing toward the ball launcher. Our DVS driver generates a BEHI from the events as soon as we receive them, every \(10\,\)ms. Then, in the inference program, we assemble a batch of 12 BEHIs and make a prediction. Next, we perform the regression to estimate the impact position and decide which is the best position command for the linear actuator to intercept the ball. Finally, the target impact location is sent to the controller to move the linear actuator.
### _Data Collection and Preprocessing_

While there exist simulators for event cameras, we have observed that they may not produce accurate representations of the sensor. For this reason, we decided to train our network using only real-world data. The data collection process was carried out using a motion capture system. We wrapped our balls in reflective tape and recorded ball positions, events, and global-shutter grayscale images for comparison (using a Chameleon3 camera working at 76 FPS). We manually synchronized the different sensors in the dataset. A total of 235 data points were collected, and we show the distribution of flight time and impact location in Fig. 4. As the ball motion is fast, motion capture systems may lose track intermittently for a few milliseconds. To solve this problem, we performed a second-order polynomial fitting of the ball position. This has the added benefit of simplifying the ball motion pattern, making it easier for the network to learn. Furthermore, the interpolation provides intermediate position data points that can be used during the training process, as described in Sec. III-D.

Fig. 4: Top: Impact location and flight time for the different data points in our collected dataset, with their respective histograms. We aim to cover different shooting angles and speeds, allowing the network to learn unbiased estimates of ball motions. Bottom: experimental end-to-end success rates for different impact locations of the ball.

### _Triggering the Motion with Network Prediction_

The latency of our system is bounded by the mechanical limitations of our actuators (Section IV-A). Moreover, once we send the motion command, it is not possible to make any corrections afterward. Therefore, it is critical to issue the motion command to the rail on time. For this, we have two options: 1) we can use a trigger signal to start processing the events as soon as the ball is shot, or 2) we can run our network continuously and wait for the last predicted TTC to fall within a certain amount of time reserved for moving the rail. In Tab. III, we show a comparison of the performance of these two methods. To ensure enough time for the rail motion, we continuously run the network until the last predicted TTC is within \(210\,\)ms. We then stop the network inference and issue a command based on the predictions up to this point. Despite the similar performance of trigger signals obtained by thresholding the predicted TTC, this method relies on running our inference pipeline every time we receive a BEHI. The experiments in Tab. III are performed on a much more powerful laptop computer with a mobile RTX 3080 GPU to simulate pipelining. Our computing platform is a lightweight Jetson NX, which does not have such computational power. Therefore, we relax this computational requirement by using a trigger signal.

## V Experimental Results

In this section, we present the quantitative results of the front-end perception of our system, as well as the end-to-end object catching results.

### _Perception_

To provide baselines for the perception module, we evaluated the performance on two other input modalities: event volumes and grayscale images. For event volumes, we follow the formulation in [18] to generate event volumes with 10 temporal bins for each polarity.
For a set of events \(\{e_{i}=\left(x_{i},y_{i},t_{i},p_{i}\right)\}\), an event volume \(V_{p}(x,y,t)\) with \(B\) temporal bins is defined as

\[t_{i}^{*}=\left(B-1\right)\left(t_{i}-t_{1}\right)/\left(t_{N}-t_{1}\right) \tag{12}\]

\[V_{p}(x,y,t)=\sum_{i}p_{i}k_{b}\left(x-x_{i}\right)k_{b}\left(y-y_{i}\right)k_{b}\left(t-t_{i}^{*}\right) \tag{13}\]

\[k_{b}(a)=\max(0,1-|a|) \tag{14}\]

where \(k_{b}\) is a bilinear interpolation kernel. This representation has been widely used in flow estimation because it preserves the temporal gradient of the events. For grayscale images, due to the framed nature of the camera, we cannot directly encode variable-sized frame stacks in a convolutional neural network without computationally heavy recurrent structures. In addition, event volumes and stacked image frames both have multiple input channels with non-binary values (unsigned integer or floating point). These properties make it challenging to run inference in real time, even with GPU acceleration. Limited by the longer network inference time and the data transmission latency of grayscale images, we have time to collect only 8 images for prediction.

We evaluate the perception component of the system using two kinds of errors: 1) per-frame errors and 2) per-trajectory errors. We provide four error metrics for the following predictions.

1. Ball location (per frame): the predicted location of the object in the camera frame. This is evaluated on all data before the \(160\,\mathrm{ms}\) decision deadline.
2. TTC (per frame): the predicted time to collision of the object. This is also evaluated on all data before the \(160\,\mathrm{ms}\) decision deadline.
3. Impact location (per trajectory): the predicted impact location (x coordinate) when the object hits the image plane. This prediction combines the time-to-collision and ball location estimates.
4. Collision time (per trajectory): the time from the "trigger time" to the time when the ball hits the image plane.

We provide the quantitative evaluation in Tab. II. Additionally, in Fig. 6 we report the per-frame ball location error of the three input representations for a subset of unseen test trajectories. We observe that the network trained with BEHIs consistently outperforms the two baseline networks on different unseen trajectories. Grayscale images contain not only the motion trails but also the background pixels; with the limited amount of training data, the neural network needs to learn to identify the motion in the cluttered background. In contrast, as shown in Fig. 2, the trajectory of the ball is apparent in the BEHI, which could reduce the network capacity required for the task. The BEHI does better on average than the heavier event volume representation despite its compact size. We observe that more informative representations, such as event volumes, require more network capacity for learning the motion. This fundamental trade-off between network size and performance leads us to use a compressed representation such as the BEHI. In this task, the impact location error for grayscale cameras has a mean of 9.4 cm with a standard deviation of 5 cm. This means that many of the predictions may have a positional error greater than 14 cm. As our fast rail uses a discretized positional controller with precomputed trajectories, a slight shift in the predicted position could cause the robot to move to a wrong position, further magnifying the inaccuracy. An important component of the system is the predicted uncertainty, which is learned with an unsupervised loss.
We conducted an ablation study to analyze the effect of uncertainty. The reader should note that this only changes the impact location prediction results. In our experiments, the impact location error for BEHI becomes \(30.078\pm 31.378\,\mathrm{mm}\) when uncertainty is ignored, as opposed to \(19.000\pm 14.878\,\mathrm{mm}\) when weighted with uncertainty.

\begin{table} \begin{tabular}{c|c|c|c|c} & Ball Location (mm) & TTC (ms) & Impact Location (mm) & Collision Time (ms) \\ \hline BEHI & **7.809\(\pm\)4.208** & **8.990\(\pm\)7.310** & **19.000\(\pm\)14.478** & **7.950\(\pm\)6.920** \\ Event Volume & 18.340\(\pm\)5.523 & 39.380\(\pm\)12.453 & 21.600\(\pm\)14.34 & 39.380\(\pm\)10.654 \\ Grayscale Image & 32.639\(\pm\)17.311 & 57.047\(\pm\)22.242 & 94.099\(\pm\)49.154 & 56.792\(\pm\)14.849 \\ \hline BEHI (BG) & 20.5541\(\pm\)0.100 & 14.437\(\pm\)12.709 & 59.586\(\pm\)53.165 & 12.292\(\pm\)11.371 \\ Event Volume (BG) & 19.296\(\pm\)8.441 & 36.214\(\pm\)12.478 & 74.754\(\pm\)49.308 & 36.214\(\pm\)10.102 \\ \end{tabular} \end{table} TABLE II: Average error of location, TTC, impact location and collision time on 20 held-out unseen trajectories. The error shown corresponds to one standard deviation. “BG” indicates that there is background motion in the scene.

\begin{table} \begin{tabular}{c|c|c} & Impact Location (mm) & Collision Time (ms) \\ \hline Hardware Trigger & **19.000\(\pm\)14.878** & **7.950\(\pm\)6.920** \\ Network TTC Trigger & 22.755\(\pm\)21.728 & 8.216\(\pm\)6.757 \\ \end{tabular} \end{table} TABLE III: Influence of the trigger signal on the perception performance of our system.

### _Influence of Background Motion_

In this section, we present additional experiments to test the performance of the network under significant background motion. As performing data collection of flying balls with background motion is challenging, we propose the following evaluation scheme for the perception pipeline: we first record events of people moving at the same location where we performed the experiments, and later merge the flying-ball events with the background motion events. We trained our networks on this new dataset and measured performance on unseen trajectories. Example BEHIs for the augmented sequences are shown in Fig. 7. We report this performance in Tab. II. In these extremely challenging cases, traditional blob detection and tracking algorithms are inadequate due to the dominating noise. Although some performance degradation is observed due to the background movements, the system is able to predict with a positional error of 6 cm. The TTC and the collision time are mostly unaffected by this change. This increase is expected because background events make it more difficult to segment the ball motion. However, the latency of the data acquisition and inference pipelines is not affected. From Tab. II, we can see that the BEHI underperforms event volumes in this scenario. The additional time channels of the event volumes compared to the BEHI could allow the network to group actions more effectively.

### _End-to-end Performance_

To assess the performance of our system, we performed 120 shots targeting different locations of the end actuator, uniformly covering the whole range of motion of the linear actuator, and we tried to catch them. We show an example catching sequence in Fig. 5. After each motion, we return the linear actuator to the center position. The results are summarized in Fig. 4.
We observe that the average success rate is \(81\%\), and the lowest success rate is \(73\%\). For reference, the expected average success rate for random motions would be \(28.5\%\). We should note that the failure cases differ depending on the impact location: for extreme impact locations and fast balls, the latency of our perception algorithm is sometimes higher than our deadline. In rare cases, our perception system estimates the wrong position and misses the ball.

## VI Discussion

Overall, our system is designed to close the gap between robots and humans in the task of object catching. Acosta et al. [29] highlighted the challenges of this task for robots using traditional cameras. Although the high sampling rate and low latency of event cameras help solve this problem, a perception pipeline must be carefully designed to minimize the latency of each system component. Among these designs, we describe in detail the BEHI representation and a lightweight event network. From the quantitative evaluation in Tab. II, we observe a significant increase in perception performance using the proposed BEHI representation, compared with images and event volumes. We have also shown that the BEHI is robust to simple background motion. For robust prediction of the trajectory and time to collision, we present an uncertainty-driven approach for fusing the high-frequency predictions from the network. This approach is necessary to compensate for inaccurate predictions from the lightweight network. The uncertainty of each prediction is learned without supervision from the distribution of the training data and therefore does not require additional labeling. Using this approach, the perception algorithm is able to maintain an increasingly robust estimate of the impact location at low latency. Our system does not require pre-calibrating the sensors, as the calibration is learned from the training data. Since we do not explicitly model the calibration and the configuration of objects, knowledge such as ball shapes and lens parameters that is essential for motion estimation may not easily transfer to new scenes. Therefore, retraining the network is expected for a new experimental setup.

Fig. 5: Catching sequence for an example ball launch. We collect events every \(10\,\)ms, for a total of \(120\,\)ms. During this time, the rail does not perform any motion. Next, we perform inference on the embedded computer. Finally, we select a target position to perform a one-shot motion to catch the ball.

Fig. 6: Average error of the object position prediction (reported in meters) on a subset of the 20 unseen test trajectories.

Fig. 7: Example BEHIs with background motion.

We should note that, due to the mechanical limitations of our position controller, we could only issue single-shot commands to the rail. One could imagine a faster controller able to "follow" the prediction from the network over time, which would allow the actuator more time for movement. Moreover, we have assumed a prior on the motion of the flying object (linear) during our experiments, as explained in Sec. III-C.

## VII Conclusion

This work investigated the problem of catching high-speed balls using event-based sensors. Through our study, we were able to show that event-based cameras are an attractive sensor for this task compared to frame-based sensors.
Additionally, we demonstrated a full-scale perception, planning, and action system that achieves catching at a top speed of \(13\,\)m/s. Both of these achievements present interesting routes for further high-speed catching systems. One direction of future investigation is solving the vision task of perceiving the terminal object state given a different camera viewpoint. This would enable the development of mobile robots that can perform the object-catching task. Moreover, we have not analyzed in this paper the effect of sensor egomotion. Although simple rotations can easily be compensated using an on-board IMU [7], general motion segmentation using monocular event cameras remains challenging under the tight time constraints presented in this paper. Another future direction is to reduce the system latency through the development of specialized hardware for edge inference. This would permit the deployment of our method on a resource-constrained system running on minimal energy. All in all, we seek to create the next generation of low-latency robot systems that can respond and react to the dynamic environment around them.

## VIII Acknowledgement

We gratefully acknowledge the Samsung AI 2021-2022 Award to the University of Pennsylvania.
2307.03907
The relationship between activated H2 bond length and adsorption distance on MXenes identified with graph neural network and resonating valence bond theory
Motivated by the recent experimental study on hydrogen storage in MXene multilayers [Nature Nanotechnol. 2021, 16, 331], for the first time we propose a workflow to computationally screen 23,857 compounds of MXene to explore the general relation between the activated H2 bond length and adsorption distance. By using density functional theory (DFT), we generate a dataset to investigate the adsorption geometries of hydrogen on MXenes, based on which we train physics-informed atomistic line graph neural networks (ALIGNNs) to predict adsorption parameters. To fit the results, we further derived a formula that quantitatively reproduces the dependence of H2 bond length on the adsorption distance from MXenes within the framework of Pauling's resonating valence bond (RVB) theory, revealing the impact of transition metal's ligancy and valence on activating dihydrogen in H2 storage.
Jiewei Cheng, Tingwei Li, Yongyi Wang, Ahmed H. Ati, Qiang Sun
2023-07-08T05:43:11Z
http://arxiv.org/abs/2307.03907v1
The relationship between activated H2 bond length and adsorption distance on MXenes identified with graph neural network and resonating valence bond theory

###### Abstract

Motivated by the recent experimental study on hydrogen storage in MXene multilayers [_Nature Nanotechnol._ 2021, 16, 331], for the first time we propose a workflow to computationally screen 23,857 compounds of MXene to explore the general relation between the activated H2 bond length and adsorption distance. By using density functional theory (DFT), we generate a dataset to investigate the adsorption geometries of hydrogen on MXenes, based on which we train physics-informed atomistic line graph neural networks (ALIGNNs) to predict adsorption parameters. To fit the results, we further derived a formula that quantitatively reproduces the dependence of the H2 bond length on the adsorption distance from MXenes within the framework of Pauling's resonating valence bond (RVB) theory, revealing the impact of the transition metal's ligancy and valence on activating dihydrogen in H2 storage.

## Introduction

One of the most promising solutions to achieving carbon neutrality is to use hydrogen energy, which is renewable and has zero CO2 emission[1, 2, 3]. However, the most difficult challenge is to find materials that can store hydrogen with large gravimetric and volumetric density and operate under ambient thermodynamic conditions[4, 5, 6]. The bonding of hydrogen found in nature is either too strong, as in metal hydrides, or too weak, as in MOFs[7]. To balance the thermodynamics and kinetics, hydrogen binding needs to be between physisorption and chemisorption, namely, in quasi-molecular form, where H2 is activated with an elongated H-H bond length. The activation can be achieved either by electron transfer, as in the Kubas effect[8], or by the charge polarization unveiled by Jena and co-workers[9]. The former takes advantage of the unfilled \(d\) orbitals of transition-metal atoms, where H2 molecules donate electrons to the unfilled \(d\) orbitals and the transition metals back-donate the electrons to the H2 molecules; the latter relies on the charge polarization induced by the local electric field. In both cases, H2 retains its molecular bond but with a stretched H-H bond length.

Figure 1: Workflow for the high-throughput screening of MXene for hydrogen storage via graph neural network and multiscale simulation.

The H-H bond length is like a barometer that directly measures the interaction strength of the H\({}_{2}\) molecule with materials, while the interaction directly depends on the distance of H\({}_{2}\) from the substrate. Therefore, it is of vital significance to explore how the H-H bond length changes with the adsorption distance, which motivates the studies in this direction. For example, Grundemann _et al._[10] developed an equation based on Pauling's bond order concept and experimental evidence to explain the correlation between H-H bond lengths and metal-H distances in transition metal dihydrides and dihydrogen complexes. Zhou _et al._[11] computationally studied the impact of an external electric field on stretching the dihydrogen bond and the adsorption distance on a boron nitride sheet. Friederich _et al._[12] revealed the correlation between the H-H bond length and the activation process of dihydrogen in Vaska's complexes via _ab-initio_ simulations and machine learning. These studies clearly showed that the pursued relationship depends on the substrate materials.
Recently, MXenes have been found to be promising candidates both as hydrogen storage materials[13, 14, 15] and as catalysts for enhancing metal hydrides' performance[16, 17, 18], due to their superior ability to activate H\({}_{2}\) bonds. However, current studies have only covered a very small part of the MXene compounds, and the underlying hydrogen activation mechanisms on MXenes are still unclear. Intriguingly, the transition metal (TM) sites are normally coordinated in MXenes, with one functional group bonded to three TM sites. Therefore, the dihydrogen bond activation process occurs neither on open metal sites nor on normally saturated TM sites. Two-dimensional (2D) MXene monolayers are the building blocks of multilayer MXenes. It is therefore important to explore the activation of dihydrogen on 2D MXene layers, especially the general relationship between the H-H bond length and the adsorption distance from MXenes. In this work, we design a workflow combining high-throughput simulation and machine learning to screen the MXenes' material space for hydrogen activation, as shown in Figure 1, where the state-of-the-art ALIGNN model[19] and physics-informed machine learning[20] are applied.

**Results and discussion**

**MXene's hydrogen adsorption geometry datasets**

According to the experiment[14] and the aNANt database[21], the MXenes' H\({}_{2}\) adsorption geometry dataset is constructed. An MXene's H\({}_{2}\) adsorption geometry refers to H\({}_{2}\)'s bond length and its adsorption distance from the MXene. In a unit cell of MXene (see Figure 1), five joint layers are arranged in an array of FG\({}_{1}\), TM\({}_{1}\), C/N, TM\({}_{2}\) and FG\({}_{2}\) (FG\({}_{1/2}\): functional group, TM\({}_{1/2}\): early transition metal, C/N: carbon or nitrogen). According to the aNANt database[22], there are 11 and 13 choices for the TM sites and FG layers, respectively, which eventually generates 23,857 distinct computationally designed 2D MXene monolayers. To avoid jeopardizing the hydrogen storage performance, MXenes with 5\(d\) transition metals (i.e., Hf, Ta and W) are excluded due to their heavy atomic weight. The remaining 12,647 structures are randomly divided into subsets of 1,305 and 11,342 structures (Figure S2) as the training and prediction subsets. We assign one hydrogen molecule to each TM site and conduct high-throughput calculations to obtain the hydrogen adsorption geometry for each MXene monolayer in the first subset. More details of our simulations can be found in the Supplementary Information. We then extract four labels to identify the adsorption geometry of H\({}_{2}\) on MXenes: the bond lengths of the two hydrogen molecules (\(d_{\text{H1}}\), \(d_{\text{H2}}\)) and the distances between the TM sites and the geometric centers of the hydrogen molecules (\(d_{\text{MH1}}\), \(d_{\text{MH2}}\)).

Figure 2: (a) side view and (b) top view of H\({}_{2}\) adsorbed on a \(3\times 3\times 1\) supercell of Ti\({}_{2}\)CH\({}_{2}\); (c) adsorption energy of H\({}_{2}\) changing with \(d_{\text{MH}}\) and \(d_{\text{H}}\) on Ti\({}_{2}\)CH\({}_{2}\).

Figure 3: Distribution of distances between (a) two hydrogen atoms and (b) TM and center of H\({}_{2}\). PI-ALIGNN predictions versus DFT calculation results of (c) \(d_{\text{H}}\) and (d) \(d_{\text{MH}}\) in the test set. The data related to the up/bottom hydrogen molecule are plotted in aqua/light-blue color.

The H\({}_{2}\) bond length and adsorption distance faithfully reflect the process of H\({}_{2}\)'s adsorption, as illustrated in Figure 2. A potential energy surface scan is performed for H\({}_{2}\) adsorbed on a \(3\times 3\times 1\) supercell of Ti\({}_{2}\)CH\({}_{2}\) (Figure 2(a-b)), where H\({}_{2}\)'s z coordinate and the sorbent atoms are fixed while H\({}_{2}\)'s bond length is allowed to relax. Figure 2(c) shows that when H\({}_{2}\) approaches the sorbent, \(d_{\text{H}}\) scarcely stretches and the adsorption energy E\({}_{ads}\) increases slightly due to charge polarization. When \(d_{\text{MH}}\) is smaller than 4 Å, the H-H bond length and E\({}_{ads}\) observably increase with the decrease of the adsorption distance,
When \(d_{\text{MH}}\) is smaller than 4 A, H-H bond and E\({}_{ads}\) observably increases with the decrease of adsorption distance Figure 3: Distribution of distances between (a) two hydrogen atoms and (b) TM and center of H\({}_{2}\). PI-ALIGNN predictions versus DFT calculation results of (c) \(d_{\text{H}}\) and (d) \(d_{\text{MH}}\) in the test set. The data related to up/bottom hydrogen molecule are plotted with aqua/light-blue color. Figure 2: (a) side view and (b) top view of H\({}_{2}\) adsorbed on a \(3\times 3\times 1\) supercell of Ti\({}_{2}\)CH\({}_{2}\); (c) adsorption energy of H\({}_{2}\) changing with \(d_{\text{MH}}\) and \(d_{\text{H}}\) on Ti\({}_{2}\)CH\({}_{2}\). primarily due to Kubas effect. Therefore, the four adsorption geometry labels are sufficient to give an insight into Hz's asorption on MXenes. The distributions of these four labels in our dataset are plotted in Figure 3 (a-b) and the relation between \((\,d_{\rm H1}\,,d_{\rm H2})\) and \((\,d_{\rm MH1}\,,d_{\rm MH2})\) is plotted in Figure S1. Figure 3 (a) shows that the molecular or quasi-molecular adsorption of H\({}_{2}\) happens with \(d_{\rm H}\) close to the equilibrium distance and atomic H binding also happens. **Two-step ALIGNN model** The first ALIGNN model as H\({}_{2}\) quasi-molecule classifier is designed to predict whether the hydrogen maintains a molecule or quasi-molecule form, since the adsorbed H\({}_{2}\) can be dissociated in many ways (Figure S3), which will reduce the accuracy of H\({}_{2}\) adsorption geometry predictor due to limited amount of data. The model is composed of two identical ALIGNNs in the classification form to predict the state of one hydrogen molecule. If the distance between hydrogen atoms in the adsorbed molecule is smaller than 0.9 A (1.2 times of the equilibrium distance), the MXene structure will be labeled quasi-molecular configuration. The precision of our two models on test set reaches 0.938 and 0.957, respectively (Table S2). Next, 1,021 structures in our dataset with two quasi-molecule labels are picked out to train the second model, a H\({}_{2}\) adsorption geometry predictor. This model is a multi-output regression ALIGNN trained to predict four hydrogen adsorption geometry labels, namely two H\({}_{2}\) bond lengths and two H\({}_{2}\) adsorption distances. To achieve a better performance, we inform our model of the fact that when H\({}_{2}\) is far away from TM, it tends to maintain its equilibrium bond length by adding a bond-order term in loss function, making it a physics-informed (PI) version. Compared to normal ALIGNN, our PI-ALIGNN's mean MAE of 5 folds is reduced by 0.00013 A, which is about 4% of total MAE (Table S3-4). The mean MAE of 0.00295 A in 5-fold cross-validation is in the same order of magnitude as the error range in DFT calculations for geometry (around 0.002 A), making our model reliable in predicting \(d_{\rm H}\). As shown in Figure 3(c-d), the adsorption geometries of the H\({}_{2}\) are well predicted with a MAE of 0.00312 A for \(d_{\rm H}\) and 0.166 A for \(d_{\rm MH}\) in test set. More details of ALIGNN model's implementation are given in Supporting Information. Then, this two-step model is applied to the remaining database. The first H\({}_{2}\) quasi-molecule classifier ALIGNN picked out 9,883 structures with two quasi-molecule labels from 12,647 pieces of data. The H\({}_{2}\) adsorption geometry predictor PI-ALIGNN predicted four hydrogen adsorption geometry labels of these 9,883 MXenes. 
**Analysis of MXene's hydrogen storage mechanism**

To understand the mechanism of hydrogen adsorption on MXenes both qualitatively and quantitatively, we choose the five mono-component FGs, i.e., H, O, F, Cl and Br, which have a relatively simple FG-H\({}_{2}\) interaction, to analyze the dependence of the H\({}_{2}\) bond length on the adsorption distance (Figure 4). The remaining data, containing binary and ternary component FGs, are presented in Figure S4. Note that we do not discriminate between up and bottom H\({}_{2}\) in the following discussions.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline element (ligancy 6) & Sc & Ti & V & Cr & Y & Zr & Nb & Mo \\ \hline valence number \(v\) & 3 & 4 & 5 & 6 & 3 & 4 & 5 & 6 \\ \hline \(\nu_{L}^{v}\) & 6.25 & 5.48 & 3.62 & 1.86 & 6.25 & 5.48 & 3.62 & 1.86 \\ \hline \hline \end{tabular} \end{table} Table 1: Different values of \(\nu_{L}^{v}\) for TMs in MXenes

Figure 4: Predicted along with calculated hydrogen adsorption geometries of the five mono-component FGs separated by different TMs with (a) Sc, (b) Ti, (c) V, (d) Cr, (e) Y, (f) Zr, (g) Nb and (h) Mo. Dashed lines are from Eq. (3) and solid lines from Eq. (6), our formula for H\({}_{2}\) adsorption geometry.

In the existing research literature, it has been well accepted that \(d_{\rm H}\) increases as \(d_{\rm MH}\) decreases, due to the electron donation and back-donation between the transition metal's \(d\) orbital and H\({}_{2}\)'s \(\sigma^{*}\) orbital. This picture gives rise to a natural assumption that the total valence bond order of a hydrogen atom is unity (since each H has one electron) and can be decomposed into individual valence bond orders; namely, we have the following equation[10],

\[p_{\rm HH}+p_{\rm MH}=1 \tag{1}\]

where \(p_{\rm HH}\) and \(p_{\rm MH}\), the bond orders between H-H and TM-H, are given by[22],

\[p_{\rm XH}=\exp\left(-\frac{r_{\rm XH}-r_{\rm XH}^{0}}{b_{\rm XH}}\right) \tag{2}\]

where X is H or M, \(r_{\rm XH}\) is the distance between the X and H atoms, \(r_{\rm XH}^{0}\) represents the equilibrium X-H distance of the free donor XH (when \(p_{\rm XH}=1\)), and \(b_{\rm XH}\) is a decay parameter. A quantitative relation then reads[10],

\[r_{\rm HH}=r_{\rm HH}^{0}-b_{\rm HH}\ln\left[1-\exp\left(-\frac{r_{\rm MH}-r_{\rm MH}^{0}}{b_{\rm MH}}\right)\right] \tag{3}\]
For this purpose, we adapt a new formula for the bond order \(p_{\text{MH}}\) between TM and H atom, deduced from Pauling's RVB theory through a statistic treatment of valence electrons [24, 23], \[p_{\text{MH}}^{*}=exp\left(-\frac{r_{\text{MH}}-r_{\text{MH}}^{0}}{b_{\text{ MH}}}\right)\times\frac{1}{1+A\left(v_{\text{L}}^{0}-1\right)} \tag{5}\] where A is a coupling coefficient of resonating between different structures and \(v_{\text{L}}^{0}\) is the number of unsynchronized resonance structures per atom, determined with valence \(v\) and ligancy \(L\), given by Eq. (54) (see Supporting Information for detailed deductions). Eq. (5) gives a basic picture of the resonating process of valence bonds. Strength of resonance is represented with coupling constant A and possible resonating structures \(v_{\text{L}}^{0}\). The valence electrons are delocalized due to resonance so that the bond strength, or bond order, decreases under the same bond distance \(r_{\text{MH}}\) The bond order of dihydrogen bond is still described with Eq. (2) since H\({}_{2}\) has no ligand. The values of \(v_{\text{L}}^{0}\) for TMs in this work is shown in Table 1, and the revised formula for H\({}_{2}\) adsorption geometry on NCMS reads \[r_{\text{HH}}=r_{\text{HH}}^{0}-b_{\text{HH}}\,n\left[1-exp\left(-\frac{r_{ \text{MH}}-r_{\text{HH}}^{0}}{b_{\text{MH}}}\right)\times\frac{1}{1+A\left(v _{\text{L}}^{0}-1\right)}\right]. \tag{6}\] We choose A = 0.3 to achieve a good agreement with our data (solid lines in Figure 4). Through \(v_{\text{L}}^{0}\), the activation of H\({}_{2}\) bond on MXenes is properly described by Eq. (6) as the solid lines plotted in Fig.3. What's more, TM with more valence electrons tends to have longer H-H bond length at the same adsorption distance, implying a better activation of H-H bond. From the perspective of Pauling's RVB theory, the increase of \(v_{\text{L}}^{0}\) for the unsynchronized resonance structures per atom results from the lack of valence electron compared with atom's ligancy. The resonance between different electron configurations delocalizes the electrons, thus TM's ability to activate H\({}_{2}\) bond is weakened. Therefore, TM with more valence electrons tends to perform as better dihydrogen bond activator in MXenes, whose TM sites all have same ligancy of 6. Eq. (6) gives us a quantitative correlation between dihydrogen bond length and H\({}_{2}\)'s adsorption distance from NCMSs on MXenes with mono-component FGs, which can be readily extended to other crystal material surfaces with similar local structures. ## Conclusion In summary, to better understand the recent experiment on hydrogen storage in multilayer MXenes, we focus on the activation of H\({}_{2}\) on the functionalized MXenes, which can be measured by the H-H bond length changing with the adsorption distance. Based on high-throughput simulation, machine learning and Pauling's RVB theory, we derived a general formula for describing the relationship between activated H\({}_{2}\) bond length and adsorption distance, which could be extended to other 2D materials and materials surfaces to get insight on H\({}_{2}\)'s activation process. ## Associated Content The implementation and detailed analysis of high-throughput simulation, ALIGNN model and related deductions in Pauling's RVB theory are available in the Supporting Information. This material is available free of charge via the Internet at [http://pubs.acs.org](http://pubs.acs.org). 
## Author Information

* School of Materials Science and Engineering, Peking University, Beijing 100871, China; Center for Applied Physics and Technology, Peking University, Beijing 100871, China; Email: [email protected]

## Acknowledgment

This work was partially supported by grants from the National Key Research and Development Program of China (2021YFB4000601), and from the China Scholarship Council (CSC). The calculations were supported by the High-performance Computing Platform of Peking University.
2308.14426
Low-complexity Samples versus Symbols-based Neural Network Receiver for Channel Equalization
Low-complexity neural networks (NNs) have successfully been applied for digital signal processing (DSP) in short-reach intensity-modulated directly detected optical links, where chromatic dispersion-induced impairments significantly limit the transmission distance. The NN-based equalizers are usually optimized independently from other DSP components, such as matched filtering. This approach may result in lower equalization performance. Alternatively, optimizing a NN equalizer to perform functionalities of multiple DSP blocks may increase transmission reach while keeping the complexity low. In this work, we propose a low-complexity NN that performs samples-to-symbol equalization, meaning that the NN-based equalizer includes match filtering and downsampling. We compare it to a samples-to-sample equalization approach followed by match filtering and downsampling in terms of performance and computational complexity. Both approaches are evaluated using three different types of NNs combined with optical preprocessing. We numerically and experimentally show that the proposed samples-to-symbol equalization approach applied for 32 GBd on-off keying (OOK) signals outperforms the samples-domain alternative keeping the computational complexity low. Additionally, the different types of NN-based equalizers are compared in terms of performance with respect to computational complexity.
Yevhenii Osadchuk, Ognjen Jovanovic, Stenio M. Ranzini, Roman Dischler, Vahid Aref, Darko Zibar, Francesco Da Ros
2023-08-28T09:05:27Z
http://arxiv.org/abs/2308.14426v1
# Low-complexity Samples versus Symbols-based Neural Network Receiver for Channel Equalization

###### Abstract

Low-complexity neural networks (NNs) have successfully been applied for digital signal processing (DSP) in short-reach intensity-modulated directly detected optical links, where chromatic dispersion-induced impairments significantly limit the transmission distance. The NN-based equalizers are usually optimized independently from other DSP components, such as matched filtering. This approach may result in lower equalization performance. Alternatively, optimizing a NN equalizer to perform functionalities of multiple DSP blocks may increase transmission reach while keeping the complexity low. In this work, we propose a low-complexity NN that performs samples-to-symbol equalization, meaning that the NN-based equalizer includes matched filtering and downsampling. We compare it to a samples-to-sample equalization approach followed by matched filtering and downsampling in terms of performance and computational complexity. Both approaches are evaluated using three different types of NNs combined with optical preprocessing. We numerically and experimentally show that the proposed samples-to-symbol equalization approach applied for 32 GBd on-off keying (OOK) signals outperforms the samples-domain alternative while keeping the computational complexity low. Additionally, the different types of NN-based equalizers are compared in terms of performance with respect to computational complexity.

Neural Network equalizer, Optical communications, Intensity-modulation, Direct-detection.

## I Introduction

Intensity-modulated and directly detected (IM/DD) transceivers have been widely implemented due to their low cost, simplicity, and low footprint, making them highly suitable for applications requiring a large number of transceivers, such as short-reach interconnects [1, 2]. However, the transmission reach of IM/DD links is limited by the intersymbol interference (ISI) induced by the linear chromatic dispersion (CD) accumulated during fiber propagation and by the nonlinear square-law photodetector (PD). To tackle the nonlinear impairments and extend the transmission reach, a nonlinear equalizer is a requirement for IM/DD receivers operating in the C-band. Although various equalization techniques have been proposed, the search for high-performance and low-complexity digital signal processing (DSP) remains ongoing [3]. Due to the rapid development of machine learning computing frameworks, various types of neural network (NN)-based equalizers have been proposed as promising solutions for short-reach fiber transmission, outperforming traditional equalization techniques [3, 4, 5, 6, 7]. NNs in the shape of feedforward NNs (FNN) [5], recurrent NNs (RNN) [8, 6, 9], and convolutional NNs (CNN) [10] show higher equalization performance than conventional Volterra [11] or feedforward equalizers (FFE) [8], effectively addressing nonlinearities in IM/DD links. However, the NN-based equalizers that have high equalization capabilities are often highly complex [3]. Therefore, there is a need to develop a low-complexity NN-based equalizer that can effectively tackle nonlinear channel impairments. Recently, an equalization proposal has emerged that combines optical pre-processing with digital NNs to divide the complexity between the optical and electrical domains [12, 13, 14].
It was experimentally shown that reservoir computing can compensate for CD when applied to a sequence of samples of the time-domain signal. Such equalizers are usually optimized taking into account the receiver-side matched filter and downsampling. However, optimizing such equalizers independently from the rest of the DSP, such as matched filtering, can lead to a lower equalization accuracy [5, 15]. To address this, an alternative approach involves embedding multiple DSP blocks into a single FNN-based equalizer and optimizing it as a unified entity [16]. This approach has the potential to enable NNs to fully perform symbol recovery and avoid additional postprocessing complexity in the shape of matched filtering. In this work, we extend the investigation of FNN equalizers from [16] by comparing the performance of different NN types with respect to their computational complexity. We propose using a samples-to-symbol NN-based equalizer that performs the tasks of downsampling, matched filtering of the pulse shape, and equalization together as one NN receiver block. We compare this approach to a samples-to-sample NN-based equalizer (the idea is to reconstruct the information from the sliced spectrum together with equalization), followed by the matched filter of the pulse shaping and downsampling. Additionally, we compare the performance of the two approaches for different NN types, such as FNN, RNN, and CNN. We show that the samples-to-symbol approach significantly outperforms samples-to-sample equalization in terms of bit error ratio (BER) performance for all NN types. Furthermore, we investigate the complexity of the proposed NN and show that it provides a low BER while keeping the computational complexity low enough for digital equalizer implementation. Finally, we numerically and experimentally show that the samples-to-symbol FNN requires fewer or the same number of multiplications per symbol than the rest of the investigated equalizers, extending the transmission distance for complexity-constrained applications.

The structure of the paper is as follows: Section II outlines the numerical and experimental setup used for short-reach transmission. Section III provides a description of the various NN-based equalizers that were examined. In Section IV, we explain the method for calculating the computational complexity of all the NN-based equalizers discussed. The performance and complexity of the NN equalizers are compared and highlighted in Section V. Finally, we summarize our findings in the conclusion section.

## II Short-Reach System Under Investigation

### _Numerical Setup_

First, we describe the numerical setup used to investigate the NN-based equalizers. The simulation setup is shown in Fig. 1. At the transmitter, a pseudorandom sequence of \(2^{21}\) bits is generated using a Mersenne Twister generator, upsampled to 8 samples per symbol (sps), and shaped by a root-raised-cosine (RRC) filter (\(\alpha=0.1\)) to generate a 32-GBd OOK signal. Then, the electrical signal is encoded in the optical domain using a Mach-Zehnder modulator (MZM) and input into the communication channel. As our goal is to investigate the impact of CD on the signal, the standard single-mode fiber (SSMF) transmission is modeled with CD only (\(D=16.4\) ps/nm/km). The receiver pre-amplifier noise is modeled as an additive white Gaussian noise (AWGN) source with a tunable variance (\(\sigma^{2}\)), which allows adjusting the signal-to-noise ratio (SNR) of the received signal.
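A minimal sketch of this channel model (our illustration: a frequency-domain CD operator plus tunable AWGN; the RRC shaping and MZM are omitted, and the sign convention of the dispersion phase is an assumption):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def apply_cd(field, fs, length_km, d_ps_nm_km=16.4, wl=1550e-9):
    """All-pass CD filter H(f) = exp(j*pi*D*wl^2*L*f^2/c)."""
    D = d_ps_nm_km * 1e-6               # ps/(nm km) -> s/m^2
    L = length_km * 1e3                 # km -> m
    f = np.fft.fftfreq(field.size, d=1.0 / fs)
    H = np.exp(1j * np.pi * D * wl ** 2 * L * f ** 2 / C)
    return np.fft.ifft(np.fft.fft(field) * H)

def add_awgn(signal, snr_db):
    """Receiver pre-amplifier noise with variance set by the target SNR."""
    p_sig = np.mean(np.abs(signal) ** 2)
    sigma2 = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(sigma2 / 2) * (np.random.randn(signal.size)
                                   + 1j * np.random.randn(signal.size))
    return signal + noise
```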
After amplification, the signal's spectrum is divided into \(N_{slices}=4\) equal slices by an arrayed waveguide grating (AWG) to reduce the power fading effect, as proposed in [15]. Numerically, the AWG is modeled as a fixed number of second-order Gaussian filters with a 3-dB bandwidth of 16 GHz, slicing the signal into \(N_{slices}=4\) equal slices of 8 GHz; this number of slices is fixed throughout the rest of the work. More details of the signal slicing can be found in [12, 15]. After that, each slice of the total signal is detected by a separate photodetector (PD) in a square-law fashion. Before feeding the signal into the NN equalizer, the sequence is reshaped to create a serial structure via a reshaping layer (Fig. 1). Then, the signal sequence is fed into one of two types of NN-based equalization: samples-to-sample (Sa-NN) and samples-to-symbol (Sy-NN). The reshaping procedure and the details of the input-output structure for both Sa-NN and Sy-NN are described in Section III. Finally, the BER is estimated through error counting as a function of SNR and distance. ### _Experimental Setup_ To validate the numerical results, the experimental scenario used in this work is adopted from [12]. In the experiment, the signal is shaped with an RRC filter (\(\alpha=0.1\)) and upsampled to 2 sps. The signal is then resampled to match the sampling rate of the digital-to-analog converter at 88 GSa/s. Next, the signal is modulated using an MZM with the bias adjusted to the quadrature point. The optical signal is then transmitted through 74 km of SSMF. At the receiver, the signal is first amplified using an erbium-doped fiber amplifier (EDFA) and then filtered by a wavelength selective switch (WSS) modeling the AWG. The filters are configured as second-order Gaussian filters with a 3-dB bandwidth of 16 GHz, as in the numerical analysis. Subsequently, the signal is independently detected by four PDs, each having the same bandwidth of 40 GHz. At the receiver, the signal is digitized using a real-time oscilloscope operating at a sampling rate of 80 GSa/s with an electrical bandwidth of 33 GHz. After detection, the signal is post-processed offline with a low-pass filter, followed by equalization as in the numerical simulations. Then, a hard decision is made on the symbols and the BER is calculated. The structure of the equalization approaches is defined in Section III. ## III Neural Networks-based Receiver Design In this section, we describe the Sa-NN and Sy-NN equalizer architectures. Both Sy-NN and Sa-NN equalizers are applied to the sliced time-domain signal right after the PDs. Both approaches employ a simple NN architecture consisting of one input layer, one hidden layer, and one output layer. The input to the NN is defined by a memory \(M\) and a number of features \(N_{slices}\) corresponding to the optical slices of the same signal, giving \(M\cdot N_{slices}\) inputs in total. Each sample in the memory \(M\) is sliced into \(N_{slices}=4\) slices, and this is fixed throughout the rest of the work.

Fig. 1: Communication setup under investigation. a) Sa-NN equalizer, and b) Sy-NN equalizer.

The size of the memory \(M\) is determined by the formula: \[M=2L+q, \tag{1}\] where \(L=K\cdot sps\) represents the \(K\) number of previous and future symbols upsampled by \(sps\), while \(q\) represents the current equalized unit. For Sa-NN, \(q=1\), because it processes one input sample and provides one sample at the output.
For Sy-NN, however, \(q=sps\), since it is aimed at equalizing all the samples that belong to one symbol at the output. The serial structure of the total input sequence will be defined in more detail in the following paragraph. It is important to highlight that, for the numerical simulations, the Sa-NN equalizers are applied to the signal with \(sps=8\). Reducing the number of sps significantly degrades the equalization performance; therefore, keeping \(sps=8\) is essential for all Sa-NN equalizers. The performance of Sy-NN, instead, remains stable for both \(sps=8\) and \(sps=2\) samples per equalized symbol. Therefore, in this work, for Sa-NN the signal is upsampled to 8 sps, while for Sy-NN \(sps=2\) is used. In the experimental scenario, however, \(sps=2\) was used for both the Sa-NN and Sy-NN cases due to the limited sampling rate of the ADCs. Because of the time-delay memory at the input, the number of sps plays an important role in defining the computational complexity, as will be discussed in the next sections. Because Sa-NN equalizes a single sample and Sy-NN a single symbol, the input sequences to the two NN equalizers differ in size. If we define \(\mathbf{x}_{k}^{r}=[x_{k}^{(1)},x_{k}^{(2)},x_{k}^{(3)},x_{k}^{(4)}]\) to be the set of 4 slices of the \(k\)-th sample, with \(L\) previous or future samples, the input to Sa-NN can be defined as: \[\mathbf{x}^{Sa}= \Big{[}\mathbf{x}_{k-L}^{r},\mathbf{x}_{k-(L-1)}^{r},...,\underset {\text{sample}}{\mathbf{x}_{k}^{r}},...,\mathbf{x}_{k+(L-1)}^{r},\mathbf{x}_{ k+L}^{r}\Big{]} \tag{2}\] For the input of a Sy-NN equalizer, we first define \(\mathbf{x}_{t}^{s}=[\mathbf{x}_{k}^{r},\mathbf{x}_{k+1}^{r},\mathbf{x}_{k+2}^{r},...,\mathbf{x}_{k+sps-1}^{r}]\) with \(t=\lfloor k/sps\rfloor\), as the sequence of \(sps\) samples that correspond to one input symbol. Then, the input to Sy-NN with \(K\) previous or future symbols can be defined as: \[\mathbf{x}^{Sy}= \Big{[}\mathbf{x}_{t-K}^{s},\mathbf{x}_{t-(K-1)}^{s},..., \underset{\text{symbol}}{\mathbf{x}_{t}^{s}},...,\mathbf{x}_{t+(K-1)}^{s}, \mathbf{x}_{t+K}^{s}\Big{]}. \tag{3}\] For both Sa-NN and Sy-NN, the hidden layer is defined with \(N_{h}\) hidden units. At the output, a single neuron represents one equalized sample for the Sa-NN or one equalized symbol for the Sy-NN equalizers. The activation function and other hyperparameters are determined through Bayesian optimization (BO) [17]. The output activation function \(f_{out}\) was found to be \(sigmoid\) for the Sa-based and \(linear\) for the Sy-based NNs. The mini-batch size was found to be 1800 data samples for the Sa-based and 1000 for the Sy-based equalizers. The time-domain signal is re-scaled before being input to the NN by using an optimized variance parameter \(var\). The learning rate \(l_{rate}\) is equal to \(0.5\times 10^{-2}\) for Sa-NN and \(1\times 10^{-2}\) for Sy-NN. The optimized hyperparameters are summarized in Table I. The training uses a regression approach with a mean squared error loss function on \(2^{19}\) training symbols and \(2^{16}\) testing symbols. Backpropagation with stochastic gradient descent was used for learning. The following subsections describe the architecture of each NN in detail. ### _Feedforward Neural Network_ The feedforward neural network (FNN) is among the simplest options widely proposed for short-reach channel equalization, where the densely connected structure helps to effectively learn the memory-induced impairments caused by chromatic dispersion [18, 4, 19].
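To make the serial input structures of Eqs. (2) and (3) concrete, the sketch below builds both the Sa-NN and Sy-NN inputs from a stream of 4-slice received samples. It is an illustration under the notation above, not the authors' code; the array sizes are arbitrary.

```python
import numpy as np

def sa_input(x, k, L):
    """Eq. (2): Sa-NN input around sample k.
    x has shape (num_samples, N_slices); returns (2L+1)*N_slices values."""
    return x[k - L : k + L + 1].reshape(-1)

def sy_input(x, t, K, sps):
    """Eq. (3): Sy-NN input around symbol t.
    Stacks the sps samples of each of the 2K+1 neighbouring symbols."""
    k = t * sps
    return x[k - K * sps : k + (K + 1) * sps].reshape(-1)

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 4))                # received stream, N_slices = 4
print(sa_input(x, k=500, L=3 * 8).shape)      # K=3 at 8 sps -> M=49 -> (196,)
print(sy_input(x, t=60, K=3, sps=2).shape)    # (2*3+1)*2 samples * 4 -> (56,)
```

With \(q=1\) and \(q=sps\) respectively, both windows match the memory formula \(M=2L+q\) of Eq. (1).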
In this work, the input layer is designed with a time-delay window to support feedforward connectivity with additional short memory. The schematic of the FNN is shown in Fig. 2. To limit the complexity, we consider an FNN with a single hidden layer and limit the number of hidden units to \(N_{h}=10\). The hidden layer activation functions were found to be \(sigmoid\) for Sa-FNN and \(ReLU\) for Sy-FNN. The FNN with an input \(x_{t}\) and a single hidden layer for the recovery of a single output \(y_{t}\) can be described in matrix form by: \[y_{t}=f_{out}(f_{h}[x_{t}\cdot\mathbf{W}_{h}+b_{h}]\cdot\mathbf{W}_{out}), \tag{4}\] where \(\mathbf{W}_{h}\) and \(\mathbf{W}_{out}\) are the hidden and output weight matrices, and \(f_{h}\) and \(f_{out}\) are the hidden and output activation functions. The rest of the hyperparameters were optimized using BO for a single scenario of 30 km transmission, an intermediate choice among the distances considered in this work. It is worth noting that picking a specific transmission scenario for hyperparameter optimization does not result in a significant change in equalization performance. The goal of BO in this case is to choose well-fitting NN hyperparameters rather than to find a highly specific NN architecture providing the absolutely optimal performance. For example, optimization is used to find an appropriate hidden layer activation function from among the functions suitable for this type of NN, instead of manually applying each function to find the best performance. Therefore, similar hyperparameter values were found when applying BO for other transmission distances.

Fig. 2: FNN-based equalizer structure with time-delayed input.

### _Gated Recurrent Unit Neural Network_ A gated recurrent unit (GRU) is a special type of NN with recurrent connections that enhances the capabilities of traditional NNs by incorporating internal memory and a gated architecture. By storing the hidden state at each time step, GRUs are very suitable for the sequential data processing used within the DSP framework. GRUs were successfully used in [20] to equalize a 200 Gbps PAM-4 signal over 1 km transmission. The structure of a GRU cell is depicted in Fig. 3. The calculation of the reset gate \(\mathbf{r_{t}}\), the update gate \(\mathbf{s_{t}}\), the hidden state \(\mathbf{h_{t}}\), and the candidate hidden state \(\hat{\mathbf{h}}_{t}\) of a single hidden GRU cell with input \(\mathbf{u_{t}}\) can be described as: \[\mathbf{r_{t}}=\sigma(\mathbf{W_{r}}\mathbf{u_{t}}+\mathbf{U_{r}}\mathbf{h_{t-1}}+\mathbf{b_{r}}) \tag{5}\] \[\mathbf{s_{t}}=\sigma(\mathbf{W_{s}}\mathbf{u_{t}}+\mathbf{U_{s}}\mathbf{h_{t-1}}+\mathbf{b_{s}})\] (6) \[\hat{\mathbf{h}}_{t}=tanh(\mathbf{W_{h}}\mathbf{u_{t}}+\mathbf{r_{t}}\odot(\mathbf{U_{h}}\mathbf{h_{t-1}}+\mathbf{b_{h}}))\] (7) \[\mathbf{h_{t}}=(1-\mathbf{s_{t}})\odot\mathbf{h_{t-1}}+\mathbf{s_{t}}\odot\hat{\mathbf{h}}_{t}, \tag{8}\] where \(\mathbf{W_{r}}\), \(\mathbf{U_{r}}\), \(\mathbf{W_{s}}\), \(\mathbf{U_{s}}\), \(\mathbf{W_{h}}\), and \(\mathbf{U_{h}}\) are the corresponding weight matrices, and \(\mathbf{b_{r}}\), \(\mathbf{b_{s}}\), and \(\mathbf{b_{h}}\) are the biases. \(\sigma\) and \(tanh\) are the logistic sigmoid and hyperbolic tangent activation functions, respectively. \(\odot\) denotes the Hadamard product. Following our approach with the FNNs, here we use a GRU network with a single layer and up to 10 hidden units to limit the maximum computational capacity.
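A minimal PyTorch sketch of such a single-layer Sy-GRU equalizer is shown below. Feeding one symbol's worth of sliced samples per recurrent step is our illustrative arrangement of the input of Eq. (3), not a detail stated in the paper, and the training loop is omitted.

```python
import torch
import torch.nn as nn

class SyGRU(nn.Module):
    """Single-layer GRU over the 2K+1 symbol memory, one symbol at the output."""
    def __init__(self, n_slices=4, sps=2, n_hidden=10):
        super().__init__()
        self.gru = nn.GRU(input_size=n_slices * sps,   # one symbol's samples per step
                          hidden_size=n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, 1)              # single linear output neuron

    def forward(self, x):
        # x: (batch, 2K+1, n_slices*sps) -- neighbouring symbols as time steps
        h, _ = self.gru(x)
        return self.out(h[:, -1])                      # equalized symbol estimate

model = SyGRU()
y = model(torch.randn(16, 7, 8))                       # K=3 -> 7 steps; y: (16, 1)
```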
The activation function \(f_{h}\) found by BO for both Sy-GRU and Sa-GRU is \(tanh\).

Fig. 3: Schematic of the gated architecture of a single GRU cell.

### _Convolutional Neural Network_ The convolutional neural network (CNN) is a powerful feed-forward filtering tool that has been found effective in one-dimensional sequence processing, due to its ability to extract essential features from the input sequences. A CNN showed outstanding performance in equalizing a 112 Gbps PAM-4 signal transmitted over 40 km of SSMF [21]. In [22], the authors experimentally demonstrated the effectiveness of a CNN applied for the equalization of a 56 Gbps PAM-4 IM/DD transmission over 25 km of SSMF. In this work, we introduce a time delay at the input layer, which gives the CNN a short memory and allows it to filter multiple input features simultaneously. The architecture of the CNN is shown in Fig. 4. The convolution layer itself consists of \(N_{h}\) sliding filters with a size of \(N_{w}\). Without downsampling by pooling and additional processing layers, the number of output feature maps of the convolution layer is equal to \(N_{h}\). To simplify the convolution structure, we fix the padding to 0, the dilation to 1, and the stride to 1. The input-output relationship can be defined as: \[y_{i}^{g}=f_{h}(\sum_{n=1}^{N_{slices}}\sum_{j=1}^{N_{w}}x_{i+j-1,n}\,w_{j,n}^{g}+b^{g}), \tag{9}\] where \(y_{i}^{g}\) is the \(i\)-th element of the output feature map generated by filter \(g\) of the convolution layer [23]. \(x\) is the input data vector, \(w_{j,n}^{g}\) is the \(j\)-th kernel coefficient of filter \(g\), and \(b^{g}\) is the bias. The index \(n\) corresponds to the feature index in the range of 1 to \(N_{slices}=4\), and \(f_{h}\) is a nonlinear activation function. The output \(y^{g}\) is then followed by the feedforward output layer. As optimized by BO, the number of filters \(N_{h}\) is equal to 15, and the filter size \(N_{w}\) is equal to 14 with the \(sigmoid\) activation function. The architectures of both the Sa-NN and Sy-NN as well as their optimized hyperparameters are summarized in Table I.

Fig. 4: Architecture of the CNN-based equalizer with time-delayed input.

## IV Computational Complexity of NN-based Equalizers The computational complexity of forward NN propagation, represented by the number of real multiplications per equalized symbol (RMPS), is one of the primary comparison metrics in hardware channel equalization [3]. Multiplications consume most of the processing logic when dealing with float values with high decimal digits [24]. In this work, we focus on assessing the inference-phase computational complexity (CC), rather than the training complexity of a NN, which is incurred offline during the calibration phase. Furthermore, our framework does not incorporate the computational complexity associated with nonlinear activation functions, as they often rely on approximation techniques rather than direct multiplicative calculations. Notably, with the traditional lookup-table-based approximation, such activation functions can be digitally implemented with significantly reduced computational requirements [25].
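To illustrate the lookup-table approach mentioned above, the sketch below replaces a sigmoid evaluation with a small pre-computed table. The table size and input range are arbitrary illustrative choices; in hardware, the index scaling would be a fixed-point operation rather than a float multiply.

```python
import numpy as np

# Pre-computed 256-entry sigmoid table over a clipped input range.
LUT_MIN, LUT_MAX, LUT_SIZE = -8.0, 8.0, 256
_grid = np.linspace(LUT_MIN, LUT_MAX, LUT_SIZE)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_grid))

def sigmoid_lut(x):
    """Sigmoid via table lookup: clip the input, map it to an index, read out."""
    idx = ((np.asarray(x) - LUT_MIN) / (LUT_MAX - LUT_MIN) * (LUT_SIZE - 1)).astype(int)
    return SIGMOID_LUT[np.clip(idx, 0, LUT_SIZE - 1)]

x = np.linspace(-10, 10, 5)
print(np.max(np.abs(sigmoid_lut(x) - 1 / (1 + np.exp(-x)))))  # small LUT error
```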
The complexity of a feedforward NN-based equalizer [23] with input size (mini-batch, \(M\), \(N_{slices}\)), a single hidden layer with \(N_{h}\) neurons, and a single output neuron \(N_{out}\) can be defined as: \[CC_{FNN}=MN_{slices}N_{h}+N_{h}N_{out}. \tag{10}\] The complexity of the GRU NN is higher than that of the FNN structure due to the gated recurrent structure [26]. The RMPS of a GRU equalizer is calculated as: \[CC_{GRU}=3(N_{slices}N_{h}+N_{h}N_{h})M+N_{h}N_{out}M. \tag{11}\] The complexity of a feed-forward CNN architecture composed of a single convolution layer [23] is defined as: \[\begin{split} CC_{CNN}=N_{slices}N_{h}N_{w}(M-N_{w}+1)\\ +(M-N_{w}+1)N_{h}N_{out},\end{split} \tag{12}\] where \(N_{h}\) is the number of filters and \(N_{w}\) is the filter size. Finally, we also show the complexity of the commonly used FFE as a benchmark for the NNs. It is important to mention that the FFE can only be applied to the original signal without slicing, detected with a single PD [3]. The complexity of an FFE can be calculated from the number of window taps \(N_{taps}\) as: \[CC_{FFE}=N_{taps}+1. \tag{13}\] The FFE is implemented with \(N_{taps}=11\) taps using the least mean squares (LMS) algorithm trained on 50000 samples. One important point to mention is that the CC we calculate in this work is given for the Sy-based equalizers. The Sy-FNN/GRU/CNN architectures are aimed at equalizing a single symbol at the output, while the Sa-FNN/GRU/CNN are applied in the time domain, equalizing a single sample at the NN output. To quantify the complexity of the Sa-based NNs, the total \(CC_{NN}\) has to be further multiplied by the number of \(sps\), which is 8 in the simulation and 2 in the experimental scenario. At the same time, the complexity of matched filtering is not accounted for in this case. ## V Results and Discussions ### _Performance Comparison_ To compare and evaluate the Sa-NN and Sy-NN equalizers in numerical simulations, the BER versus SNR performance for \(l=74\) km transmission is shown in Fig. 5. First, the back-to-back (B2B) transmission is referenced at the KP4 forward error correction (FEC) threshold (BER \(=2.24\times 10^{-4}\)). Fig. 5 shows that the proposed Sy-NN approach outperforms the Sa-NN for all types of NN equalizers. Because the Sy-based NN learns not only to compensate for the impairments but also to approximate the output OOK symbol in a regression manner, optimizing it as a single DSP block improves the transmission reach. Compared to the Sy-NN, the Sa-NN is optimized disregarding the subsequent RRC filter. Although both types of equalization approaches have the same \(K=3\) number of previous and future symbols at the input, the Sa-based NNs cannot properly compensate the ISI and equalize the samples into a correct symbol using a fixed post-processing RRC-based matched filter. Additionally, the Sy-based approach has a single output neuron which is trained to provide a symbol output between 0 and 1. For the Sa-based approach, however, the output neuron corresponds to a value in the broader range of the time-domain input signal: the output neuron of a Sa-NN also has to provide values within the sample-domain range, while the Sy-NN aims to output values only close to 0 and 1. Even though both methods have the same degrees of freedom at the input obtained from the neighboring samples, the Sa-GRU outperforms the Sa-FNN and Sa-CNN, showing a 0.8 dB SNR penalty at the KP4 FEC threshold.
This improvement can be understood as follows: the GRU cells involve higher internal multiplicative complexity by memorizing previously calculated states, which are added to the short-term memory from the input. As for the Sy-based approach, all three equalizers, Sy-FNN, Sy-GRU, and Sy-CNN, show similar equalization performance, improving the required SNR at the KP4 FEC threshold by around 2.8 dB compared to the un-equalized B2B reference. To quantify the impact of the NN-based equalizers on the transmission reach, we calculate the SNR penalty at the KP4 FEC threshold as the difference in dB between the transmission with equalization and the un-equalized reference IM/DD at \(l=0\) km (Fig. 5 - SNR Penalty). To further compare the analyzed equalization approaches, Fig. 6 shows the KP4 FEC SNR penalty versus the transmission distance of the communication system. First, the single-PD receiver with no equalization is plotted as a reference. To show the inability of a conventional feedforward equalizer to compensate for longer transmission distances, the FFE with LMS is also applied [27].

Fig. 5: BER versus SNR at \(l=74\) km transmission for all NN-based equalizers.

The Sa-NN approaches show a robust KP4 FEC transmission reach of up to 75 km, outperforming the FFE equalizer. It can be noticed that the simple Sa-FNN shows performance similar to the Sa-GRU and Sa-CNN up to 20-km transmission. However, the strong impact of the ISI combined with insufficient input memory leads to more errors at the output of the Sa-FNN at longer distances. Therefore, the Sa-FNN can reach only up to 63 km transmission. In contrast, the Sa-CNN and Sa-GRU have more internal complexity, which slightly increases the equalization capacity and improves the transmission reach to 71 km and 74 km, respectively. As for the Sy-NN, the proposed Sy-based equalization approach outperforms the Sa-based one by 2 dB on average for all transmissions. Due to the sufficient amount of short-term input memory and the simpler feed-forward structure, the Sy-FNN performs better for shorter distances of up to 50 km. However, all three Sy-FNN, Sy-GRU, and Sy-CNN show identical performance and increase the penalty-free transmission reach up to 93 km. This shows that the short-term memory introduced at the input plays a crucial role in the equalization capacity of the symbol-output Sy-NN structure. ### _Computational Complexity Comparison_ The computational complexity in terms of RMPS of the NN-based equalizers is an essential aspect when considering simple IM/DD systems. To evaluate the proposed NN-based equalizers from the complexity angle, we limit the available number of RMPS to the range from 100 to 1500 multiplications. To vary the number of RMPS, we change the number of hidden units in the FNN and GRU, and the number of filters in the CNN architectures. The corresponding optimized hyperparameters used for both the Sa-NN and Sy-NN architectures are shown in Table II. Additionally, we do not include the complexity of the matched filter for the Sa-based equalizers, which would further increase the total CC, but with a constant, architecture-independent offset. Therefore, our main focus here is to investigate the Sy-NN architectures and evaluate their performance at different complexity levels. Fig. 7 shows the achievable penalty-free transmission reach for restricted complexity levels in terms of the number of RMPS. It is shown that the Sy-FNN and Sy-CNN can reach similar transmission performance for \(10^{2}\) and \(2\times 10^{2}\) RMPS.
However, starting from \(5\times 10^{2}\) RMPS, the Sy-FNN reaches its equalization capacity for 93 km transmission and keeps a similar transmission reach up to \(15\times 10^{2}\) RMPS, while the Sy-CNN reaches equalization performance similar to the Sy-FNN only at \(15\times 10^{2}\) RMPS. As for the Sy-GRU architecture, restricting the number of RMPS to a couple of hundred limits the equalization capacity of the GRU cells, significantly decreasing the transmission reach. Due to the complex gated structure, to define the Sy-GRU with \(10^{2}\) RMPS we had to decrease the input memory \(M\) by setting the number \(K\) in Eq. 1 to \(K=1\) symbol, which led to a decrease in equalization performance. For the rest of the GRU complexity levels, \(K\) is kept fixed and equal to 3. It can be seen that the Sy-GRU simply does not have enough multiplicative capability to compensate the memory-related ISI at longer distances. Additionally, for the CC comparison, Fig. 7 also reports the minimum possible Sa-FNN and Sa-CNN structures, demonstrating the inability of these equalizers to properly compensate for the impairments within the limited complexity. To validate the numerical results of the proposed approach, we experimentally compare the equalization performance of the Sy- and Sa-based equalizers applied for 32-GBd 74-km transmission in SSMF.

Fig. 6: SNR penalty at the KP4 FEC threshold with respect to B2B un-equalized performance versus the fiber length.

Fig. 7: Computational complexity in RMPS for the numerical analysis.

The BER versus the restricted complexity number is shown in Fig. 8. It is important to highlight that in the experimental setup the signal is upsampled to \(sps=2\); therefore, the CC of the Sa-NN is adapted accordingly. The memory \(M\) and the number of hidden units for the corresponding CC levels are summarized in Table III. It can be seen that optimizing the Sy-NN as a single DSP block outperforms the Sa-based equalization for all the investigated NN architectures. The Sy-FNN, in particular, shows the lowest BER compared to the Sy-GRU and Sy-CNN for CC levels up to \(10\times 10^{2}\). However, at \(15\times 10^{2}\) CC, the Sy-GRU outperforms the Sy-FNN due to the higher recurrent memory that allows storing essential information about the accumulated impairments. It can be seen that at the CC level of \(10^{2}\) the Sa-GRU and Sy-GRU perform equally poorly, because a single hidden GRU cell is unable to capture the dynamics of the transmission impairments. However, increasing the number of hidden GRU cells along with the input memory leads to a higher Sy-GRU and Sa-GRU equalization capacity and a lower BER. For the Sy-CNN, the combination of filters is unable to compensate for the memory-induced impairments. Similarly to the numerical results, the proposed Sy-NN equalization outperforms the Sa-NN in terms of BER performance for all levels of CC. This leads to the conclusion that when the CC is around a couple of hundred multiplications, the Sy-FNN equalizer is a primary candidate for a simple IM/DD setup. However, increasing both the input memory and the number of hidden recurrent units can increase the equalization performance at the expense of higher CC. ## VI Conclusions In this work, we demonstrate that designing a samples-to-symbol NN-based equalizer (Sy-NN) outperforms a samples-to-sample neural network (Sa-NN) for compensating the impairments in a 32 GBd on-off keying intensity-modulated and directly-detected transmission, in both numerical and experimental scenarios.
Numerical simulations show that the Sy-NNs can efficiently transform 4 slices of the upsampled, pulse-shaped signal into a single symbol output, increasing the transmission reach up to 93 km. This design minimizes the computational complexity (CC) of the NN itself, eliminating the need for external digital signal processing (DSP) blocks such as matched filters. Additionally, experimental transmission over 74 km validates the numerical results, showing an improvement of an order of magnitude in BER for the Sy-FNN over the Sa-FNN. Comparing the CC of the Sy-based equalizers, we show that the FNN delivers high equalization performance while keeping the complexity at a couple of hundred real multiplications per equalized symbol. ## Acknowledgments This work was financially supported by the Villum Fonden's YIP OPTIC-AI project (grant no. 29344) and ERC CoG FRECOM (grant no. 771878).
2304.01498
DCANet: Dual Convolutional Neural Network with Attention for Image Blind Denoising
Noise removal of images is an essential preprocessing procedure for many computer vision tasks. Currently, many denoising models based on deep neural networks can perform well in removing noise with known distributions (e.g., the additive white Gaussian noise). However, eliminating real noise is still a very challenging task, since real-world noise often does not simply follow one single type of distribution, and the noise may spatially vary. In this paper, we present a new dual convolutional neural network (CNN) with attention for image blind denoising, named DCANet. To the best of our knowledge, the proposed DCANet is the first work that integrates both the dual CNN and attention mechanism for image denoising. The DCANet is composed of a noise estimation network, a spatial and channel attention module (SCAM), and a CNN with a dual structure. The noise estimation network is utilized to estimate the spatial distribution and the noise level in an image. The noisy image and its estimated noise are combined as the input of the SCAM, and a dual CNN containing two different branches is designed to learn the complementary features to obtain the denoised image. The experimental results have verified that the proposed DCANet can suppress both synthetic and real noise effectively. The code of DCANet is available at https://github.com/WenCongWu/DCANet.
Wencong Wu, Guannan Lv, Yingying Duan, Peng Liang, Yungang Zhang, Yuelong Xia
2023-04-04T03:18:27Z
http://arxiv.org/abs/2304.01498v2
# DCANet: Dual Convolutional Neural Network with Attention for Image Blind Denoising ###### Abstract Noise removal of images is an essential preprocessing procedure for many computer vision tasks. Currently, many denoising models based on deep neural networks can perform well in removing noise with known distributions (e.g., the additive white Gaussian noise). However, eliminating real noise is still a very challenging task, since real-world noise often does not simply follow one single type of distribution, and the noise may spatially vary. In this paper, we present a novel dual convolutional neural network (CNN) with attention for image blind denoising, named DCANet. To the best of our knowledge, the proposed DCANet is the first work that integrates both the dual CNN and attention mechanism for image denoising. The DCANet is composed of a noise estimation network, a spatial and channel attention module (SCAM), and a dual CNN. The noise estimation network is utilized to estimate the spatial distribution and the noise level in an image. The noisy image and its estimated noise are combined as the input of the SCAM, and a dual CNN containing two different branches is designed to learn the complementary features to obtain the denoised image. The experimental results have verified that the proposed DCANet can suppress both synthetic and real noise effectively. The code of DCANet is available at [https://github.com/WenCongWu/DCANet](https://github.com/WenCongWu/DCANet). Image blind denoising, dual convolutional neural network, attention mechanism, noise estimation. ## 1 Introduction As one of the most significant research areas of low-level visual tasks, image denoising aims to restore clean images from noisy ones. During the past decades, many researchers have presented a number of denoising methods. Before the wide application of deep neural networks (DNNs), filtering techniques and sparse learning were the widely used denoising methods. For instance, in the NLM [1], the weighted average of all pixels within a search window in an image is used to achieve noise removal. The BM3D [2] improves the sparse representation by collaborative filtering. The trilateral weighted sparse coding (TWSC) [3] accomplishes real image denoising by employing image priors. The weighted nuclear norm minimization (WNNM) [4] and the multichannel WNNM (MCWNNM) for color images [5] employ the low-rank approach and prior knowledge to enhance denoising performance. These denoising methods can obtain favorable denoising performance; however, most of them involve complex and time-consuming optimization algorithms. Meanwhile, many manually adjusted parameters are usually required for these models to perform well, which may lead to uncertainty in their denoising performance. Therefore, these models can hardly be applied in practical denoising scenarios. From the early successful deep neural network (DNN) based image denoising model DnCNN [6] to the present, DNN-based denoising models have received much attention due to their superior denoising effect. However, many DNN-based denoising models are designed for the additive white Gaussian noise (AWGN) only. Moreover, in early DNN-based models, usually one model is trained only for one specific noise level; therefore, it is very hard to generalize these models to other noise levels or other types of noise [7]. To make DNN-based denoising models more flexible, many techniques have been proposed.
For instance, a tunable noise level map is exploited as the input of the FFDNet [8]. As the map can be variable or non-uniform, the FFDNet is able to handle different noise levels, as well as spatially varying noise. Nevertheless, the map of the FFDNet has to be set manually based on human experience, which keeps the model still far from real denoising tasks. In real denoising scenes, noise removal has to be accomplished with an unknown noise level or distribution; namely, blind denoising is required. To this end, many methods have been proposed. As an extension of the FFDNet, the CBDNet [9] introduces a noise estimation sub-network, which makes the CBDNet a fully blind denoising model. An optimal fusion function derived from the Signal-to-Noise Ratio (SNR) was proposed in [10], and the corresponding BUIFD was then designed for image blind denoising. The AirNet model was proposed in [11], which can restore low-quality images corrupted by multiple degradation models using only one single network. The TC-Net model was presented in [12], which combines a transformer with a CNN to extract local details and global relationships. The TGVLT model was proposed in [13], which adopts the total generalized variation method in the network design to improve its flexibility and generalization ability. However, many existing blind denoising models have massive numbers of parameters to be learned, and their performance can still be further improved, especially in the removal of real noise. Recently, some researchers have investigated different network structures to improve the learning ability of denoising models. For instance, the BRDNet [14] contains two different branches (an upper sub-network and a lower one), which enables the model to extract complementary features and therefore enhances the denoising performance. Pan et al. [15] presented a DualCNN containing two sub-networks of different depths, where the shallow sub-network is utilized to extract the image structures, and the deeper one aims to learn the image details. Both the BRDNet and the DualCNN achieve competitive denoising performance, demonstrating that using a dual-structure CNN to learn complementary features can be helpful for image denoising. That the attention mechanism can be a helpful tool for image denoising has also been verified by many researchers. For example, Anwar et al. [16] proposed the RIDNet model containing feature attention, which mainly includes four enhancement attention modules (EAM) with short skip connections and local connections. The ADNet model was developed in [17], in which an essential attention block is utilized to filter the noise information. Although both the RIDNet and ADNet achieve noticeable denoising performance, only one type of attention mechanism is used in these models, which may lead to other inter-relationships of the convolutional features being ignored. Motivated by the success of the dual network structure and the attention mechanism in image denoising, in this paper we present an effective dual CNN with an attention mechanism for image blind denoising, named DCANet, which achieves competitive blind denoising performance. In our DCANet, the noise level map of a noisy image is first estimated by the noise estimation network; the noise information and the noisy image are then concatenated as the input of the spatial and channel attention module (SCAM). The SCAM makes the model focus on the features that contain more useful contextual information.
The dual CNN in our DCANet is composed of two different sub-networks, where downsampling and dilated convolutions are used to enlarge the receptive field. Skip connections are also used to improve the training of the network. The DCANet has the following characteristics: (1) As far as we know, the proposed DCANet is the first model that investigates the integration of a dual CNN and attention mechanisms for image blind denoising. (2) A novel dual CNN denoising network and a new noise estimator are designed in our denoising model. (3) The DCANet is capable of obtaining competitive denoising results compared with other state-of-the-art denoising models. Moreover, the DCANet has a simpler model structure compared with many existing blind denoising models. The remainder of this paper is organized as follows. Section 2 reviews the related denoising methods. Our proposed DCANet model is introduced in Section 3. Section 4 reports the experimental results. The paper is summarized in Section 5. ## 2 Related work ### Image blind denoising models Some early DNN-based denoising models have already made efforts toward blind denoising. For example, the famous DnCNN [6] can be used for real denoising; however, its performance is far from satisfactory. Zhang et al. [18] proposed the IRCNN model, which learns a deep denoiser prior to realize blind denoising, and promising results were reported. Peng et al. [19] designed a dilated residual network for blind denoising, where the authors claimed that dilated convolution and skip connections can be helpful for noise removal. To capture the noise features better, many researchers have tried to utilize a noise estimation network to estimate the noise distributions and levels in an image. Based on the DnCNN, the BUIFD model with a noise estimator was proposed in [10]. The BUIFD can handle unknown and universal noise with a single model. Soh et al. [20] presented a deep universal blind denoising (DUBD) network, where a conditional estimation network (CENet) is introduced for incorporating human prior knowledge, and a tunable denoising sub-network is designed for noise removal. Guo et al. [9] designed the CBDNet model for blind denoising, where a noise estimation sub-network is used to estimate the noise level, and asymmetric learning is adopted to restrain the possible under-estimation of the noise level. Yue et al. [21] presented a novel variational denoising network (VDN) for image blind denoising. The VDN contains a denoising network and a Sigma network; the Sigma network is applied to estimate the noise distribution in a noisy image. Kim et al. [22] proposed an adaptive instance normalization denoising network (AINDNet), where the adaptive instance normalization (AIN) is developed to regularize the network so that the model avoids overfitting to the synthetic noisy images, and a noise level estimator is applied to estimate the noise level. It can be found that currently most blind denoising models utilize a noise estimation sub-network as the solution for achieving blind denoising, and the effectiveness of this solution has been verified. In our proposed denoising model, a noise estimation block is also developed. Compared with the noise estimation networks of the CBDNet [9], VDN [21], and DUBD [20], our noise estimator is designed to contain more convolutional layers, which we think can help to extract more complex noise information.
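As a concrete illustration of this design choice, a minimal PyTorch sketch of a seven-layer estimator of this kind (Conv+ReLU, five Conv+BN+ReLU blocks, Conv+Tanh, matching the layer recipe detailed in Section 3) is shown below. The 3×3 kernels and the output channel count are our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    """Seven-layer noise estimator: Conv+ReLU, 5x(Conv+BN+ReLU), Conv+Tanh."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(5):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        # Zero padding keeps the per-pixel noise map the same size as the input.
        return self.net(y)

sigma = NoiseEstimator()(torch.randn(1, 3, 64, 64))   # -> (1, 3, 64, 64)
```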
### Attention-guided denoising models Extracting and selecting appropriate features is extremely important for image processing tasks [23; 24; 25]. However, for images with complex textures, it is very difficult to obtain features with high discriminative ability [26]. To solve this problem, the attention mechanism was proposed in [27], which is able to focus on the salient regions in images and can capture the useful features better. Since then, attention mechanisms have been widely utilized in various image analysis tasks, including image denoising. A residual dilated attention network (RDAN) was designed in [28] for blind denoising, which contains two attention blocks; the attention mechanism in RDAN enhances the restoration of texture details. A multi-stage denoising architecture, MPRNet [29], was developed for image restoration, where supervised attention modules and channel attention blocks were introduced to promote the restoration quality. A very deep residual non-local attention network (RNAN) was proposed in [30] for image restoration; the attention-based RNAN achieves competitive performance on different image restoration tasks, including image denoising. The channel and space attention neural network (CSANN) was presented in [31]; the CSANN predicts the residual image to eliminate noise. An adaptive consistency prior (ACP) was presented in DeamNet [32], in which the dual element-wise attention mechanism (DEAM) modules were designed and utilized to enhance its learning ability. A multi-attention complementary fusion network (MACFNet) [33] was designed for image denoising, where multiple dimensional attention mechanisms were merged to enhance denoising performance and detail restoration. It can be observed that the MPRNet [29], RNAN [30], CSANN [31], and MACFNet [33] all use multiple attention mechanisms to exploit different dependencies in convolutional features, and remarkable denoising results were produced by these models. The success of these attention-guided models reveals that the attention mechanism can be an effective tool for image denoising. ### Dual convolutional neural network for image denoising In general, very deep networks may suffer from vanishing or exploding gradients, which affects the performance of the network. To address this issue, some researchers have tried to promote the learning ability of the denoising model by increasing the width of the network instead of the depth. The dual CNN adopts such a network composition strategy: two parallel branches (sub-networks) are contained in a dual CNN, and usually different network structures are designed for the two branches to boost the learning ability. Dual CNN structures for image denoising tasks have also been investigated by several researchers. Tian et al. [14] presented the BRDNet model, which consists of a dual CNN containing two different sub-networks. As different network architectures can capture different image features [34], the BRDNet can therefore extract complementary features to improve its denoising performance, and impressive denoising results were reported for BRDNet. Later, Tian et al. [35] designed another dual denoising network (DudeNet) with two different branches to achieve effective noise removal, and this model also obtains favorable results. For general low-level vision tasks, Pan et al. [36] presented a dual CNN (DualCNN).
The DualCNN is composed of two different parallel branches, where one branch is used for capturing the image structures, and the other branch is applied to extract the image details. ## 3 The proposed model ### Network architecture In this section, we introduce the proposed DCANet. Fig. 1 shows the architecture of the DCANet. The DCANet model consists of a noise estimation network, a spatial and channel attention module (SCAM), and a dual CNN. The denoising process of the DCANet can be formulated as follows: \[x=f_{DCANet}(y), \tag{1}\] where \(x\) and \(y\) denote the denoised and noisy image, respectively. The noise estimation network is utilized to estimate the noise information of the noisy image, as formulated below: \[F_{1}=Net_{est}(y), \tag{2}\] where \(Net_{est}\) represents the noise estimation network, and \(F_{1}\) is the predicted noise information. Then a convolution layer takes the concatenation of the noisy image and the estimated noise level map as its input, producing a feature map with 64 channels, which is sent to the SCAM for feature filtering. The focused feature \(F_{2}\) is extracted by the SCAM, as formulated below: \[F_{1}^{1} =K_{s}*[F_{1},y], \tag{3}\] \[F_{2} =SCAM(F_{1}^{1}),\] where \([,]\) denotes the concatenation operation, \(K_{s}\) is the standard convolutional kernel, and \(*\) represents convolution. The dual CNN is then applied to capture complementary features from \(F_{2}\), as formulated below: \[F_{3} =USN(F_{2}), \tag{4}\] \[F_{4} =LSN(F_{2}),\] where \(USN\) and \(LSN\) represent the upper and lower sub-networks, respectively. \(F_{3}\) and \(F_{4}\) denote the extracted features. Finally, the denoised image \(x\) is obtained by concatenating the outputs of the two sub-networks, as formulated below: \[x=K_{s}*[F_{3}+y,F_{4}+y]+y. \tag{5}\] In our model, the channel number is set to 64, which balances network complexity and denoising performance. The BN and ReLU layers attached to each convolution or dilated convolution layer are not presented in Fig. 1. Although many existing methods utilize dual CNN structures or incorporate attention mechanisms for image denoising, as far as we know, no previous work has reported the combination of the dual CNN and attention mechanism for image blind denoising. In the following subsections, we introduce the noise estimator, the attention module, and the dual CNN in detail. #### Noise estimation network To accurately estimate the noise in an image, unlike the noise estimation networks of CBDNet [9] and VDN [21], our noise estimation network contains more convolutional layers, which enables the network to extract more complex noise information. The architectures of the noise estimation sub-networks of the proposed DCANet, CBDNet [9], AINDNet [22], and VDN [21] can be seen in Fig. 2. Our estimator has 7 convolutional layers, where Convolution (Conv), Rectified Linear Units (ReLU) [37], Batch Normalization (BN) [38], and Tanh [39] are used in the network. Specifically, "Conv+ReLU", "Conv+BN+ReLU", and "Conv+Tanh" are used in the first, the middle, and the last layers, respectively. The noise feature \(F_{1}\) in Eqn. (2) can be obtained by: \[F_{0}^{0} =\phi(K_{s}*y), \tag{6}\] \[F_{0}^{i+1} =\phi(BN(K_{s}*F_{0}^{i})),i\in\{0,1,2,3,4\},\] \[F_{1} =\xi(K_{s}*F_{0}^{5}),\] where \(y\), \(K_{s}\), and \(F_{1}\) are the noisy image, the standard convolutional kernel, and the estimated noise level, respectively. \(\phi\) and \(\xi\) represent the ReLU and Tanh activation functions, respectively.
\(F_{0}^{0}\) is the feature obtained by the first layer, \(F_{0}^{i+1}\) denotes the features sequentially extracted by the five middle convolutional layers, and \(F_{1}\) is the noise feature obtained by the last layer of the noise estimator. Zero padding is used in our noise estimator to keep the feature map size constant. It should be noted that both our noise estimation network and the estimator in AINDNet [22] have seven convolutional layers. Different from ours, the noise estimation network of the AINDNet has two downsampling operations. Although a larger receptive field is beneficial for extracting more image information, the downsampling operations bring information loss. In particular, compared with the noise estimation sub-networks of the CBDNet, VDN, and AINDNet, our estimation network incorporates multiple BN layers, which not only accelerate network training, but also enhance network performance. #### Spatial and channel attention module To leverage the useful features in images, many denoising models employ attention mechanisms to improve their learning ability, such as the ADNet [17], RDAN [28], RNAN [30], CSANN [31], and DeamNet [32]. Following the attention network introduced in CSANN [31], our proposed DCANet also utilizes spatial and channel attention for feature selection, as presented in Fig. 3. Our spatial and channel attention module (SCAM) consists of a spatial attention module (SAM) and a channel attention module (CAM) arranged in parallel, which contain the following operations: Conv, PReLU [40], Global Average Pooling (GAP) [41], Global Max Pooling (GMP) [42], BN, ReLU, and Sigmoid [43]. The implementation procedure of the SCAM can be expressed as follows: \[F_{1}^{2} =\varphi(K_{s}*F_{1}^{1}), \tag{7}\] \[F_{1}^{3} =K_{s}*F_{1}^{2},\] \[F_{1}^{4} =\delta(\phi(BN(K_{s}*[GAP(F_{1}^{3}),GMP(F_{1}^{3})])))\cdot F _{1}^{3},\] \[F_{1}^{5} =\delta(K_{s}*\phi(K_{s}*GAP(F_{1}^{3})))\cdot F_{1}^{3},\] \[F_{2} =K_{s}*[F_{1}^{4},F_{1}^{5}]+F_{1}^{1},\] where \(F_{1}^{i}\) (\(i\in\{1,2,3,4,5\}\)) and \(F_{2}\) denote the captured features. \(\varphi\), \(\delta\), and \(\phi\) represent the PReLU, Sigmoid, and ReLU activation functions, respectively. The '\(\cdot\)' and \([,]\) denote the element-wise product and the concatenation operation. The SCAM is employed to exploit the inter-spatial and inter-channel relationships of the convolutional features. The informative features are retained by the SCAM module, and the unimportant ones are suppressed. #### Dual convolutional neural network It has been verified that expanding the width of the denoising network is also a useful way to improve network performance [44].

Figure 1: The network structure of the proposed model.

Figure 2: The noise estimation network architectures of our DCANet (a), CBDNet (b), AINDNet (c), and VDN (d).

In this work, we develop a dual CNN with two different sub-networks for image denoising, which can be seen in the brown and blue dotted boxes in Fig. 1, where the upper sub-network is a U-shaped network, and the lower branch is a dilated convolution network. It has been verified that using different sub-networks in a dual CNN can enhance the performance of the whole network, as complementary features can be learned by the two different branches [14; 34; 35].
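A skeletal PyTorch sketch of this dual-branch composition, following Eqs. (4) and (5), is given below. The branch bodies are single-convolution placeholders standing in for the U-shaped and dilated sub-networks described next, so it illustrates only how the two outputs are fused.

```python
import torch
import torch.nn as nn

class DualBranchSkeleton(nn.Module):
    """Skeleton of Eqs. (4)-(5): x = Conv([USN(F2)+y, LSN(F2)+y]) + y."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        # Placeholder branch bodies; the real USN/LSN are detailed below.
        self.usn = nn.Conv2d(features, channels, 3, padding=1)
        self.lsn = nn.Conv2d(features, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f2, y):
        f3, f4 = self.usn(f2), self.lsn(f2)                # complementary features
        return self.fuse(torch.cat([f3 + y, f4 + y], 1)) + y

x_hat = DualBranchSkeleton()(torch.randn(1, 64, 32, 32),  # F2: 64-channel feature
                             torch.randn(1, 3, 32, 32))   # y: noisy image
```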
The upper branch of our dual CNN involves the following operations: Conv, ReLU, BN, max-pooling (downsampling) [45], bilinear interpolation (upsampling) [46], and skip connections [47; 48]. The lower branch includes Dilated Convolution (DConv) [49], ReLU, BN, and skip connections as well. The outputs of the two branches are combined through a concatenation operation. Compared with existing dual CNN based denoising models such as the BRDNet [14] and DualCNN [36], our proposed dual network is designed to have a larger receptive field, which allows the network to capture more contextual information. Specifically, in our dual network, the upper branch employs two downsampling operations to increase the receptive field. However, the downsampling operations may cause the loss of image information; therefore, two skip connections are applied in the upper sub-network to suppress this loss. The upsampling operations are employed to recover the size of the feature map. The lower sub-network applies dilated convolutions to increase the size of the receptive field, and symmetric skip connections are utilized to speed up network training. In particular, we adopt the hybrid dilated convolution (HDC) in the lower branch; it has been verified that the HDC can remove the gridding phenomenon effectively and improve network performance [50; 51]. The dilation rate of each dilated convolution layer in the lower branch is shown in Table 1.

Figure 3: The structure of the SCAM.

### Loss functions The mean squared error (MSE) is a widely used loss function for optimizing neural networks [52; 53]. For the AWGN removal, we optimize our DCANet with the following loss function: \[\begin{split}\mathcal{L}&=\frac{1}{2N}\sum_{i=1}^{ N}\left\|f(y_{i},\theta)-x_{i}\right\|^{2}\\ &=\frac{1}{2N}\sum_{i=1}^{N}\left\|\hat{x}_{i}-x_{i}\right\|^{2},\end{split} \tag{8}\] where \(x_{i}\), \(y_{i}\), and \(\hat{x}_{i}\) are the clean, noisy, and denoised images, respectively. \(N\) is the number of clean-noisy image patches, and \(\theta\) represents the parameters of the DCANet. For removing unknown real noise, since the high-frequency textures will be lost due to the square penalty, the MSE often produces a blurred and overly smoothed visual effect. Therefore, we select the Charbonnier loss [54] as the reconstruction loss to optimize our model for real noise removal. Moreover, to preserve the fidelity and authenticity of the high-frequency details, we utilize the edge loss proposed in [55] to constrain the loss of the high-frequency components between the ground-truth image \(x\) and the denoised image \(\hat{x}\). Our proposed DCANet contains a noise estimator to predict the noise level \(\sigma(y)\) in a noisy image \(y\). For real noise removal, the estimated noise level should not be over-smoothed, especially for spatially variant noise, which may change dramatically throughout the whole noisy image. Therefore, we employ a total variation (TV) regularizer [9] to restrict the smoothness of the estimated noise level.
Therefore, the total loss of our DCANet is expressed as: \[\mathcal{L}=\mathcal{L}_{char}(\hat{x},x)+\lambda_{edge}\mathcal{L}_{edge}( \hat{x},x)+\lambda_{TV}\mathcal{L}_{TV}(\sigma(y)), \tag{9}\] where we empirically set \(\lambda_{edge}\) and \(\lambda_{TV}\) to \(0.1\) and \(0.05\), respectively, and \(\mathcal{L}_{char}\) is the Charbonnier loss, which is represented as: \[\mathcal{L}_{char}=\sqrt{\left\|\hat{x}-x\right\|^{2}+\epsilon^{2}}, \tag{10}\] where the constant \(\epsilon\) is set to \(10^{-3}\). \(\mathcal{L}_{edge}\) is the edge loss, which can be expressed as: \[\mathcal{L}_{edge}=\sqrt{\left\|\triangle(\hat{x})-\triangle(x)\right\|^{2}+ \epsilon^{2}}, \tag{11}\] where \(\triangle\) denotes the Laplacian operator [56]. \(\mathcal{L}_{TV}\) is defined as: \[\mathcal{L}_{TV}=\left\|\nabla_{h}\sigma(y)\right\|_{2}^{2}+\left\|\nabla_{v}\sigma(y)\right\|_{2}^{2}, \tag{12}\] where \(\nabla_{h}\) (\(\nabla_{v}\)) is the gradient operator along the horizontal (vertical) direction. ## 4 Experiments and results ### Datasets The DIV2K dataset [57], containing \(800\) high-resolution color images, was used to train the proposed DCANet for the AWGN removal, and we resized these high-resolution images to the size \(512\times 512\). Moreover, the images were grayscaled to train the DCANet for grayscale image denoising. According to the size of the receptive field of the DCANet, the training images were randomly cropped into patches of size \(140\times 140\). AWGN with noise levels in the range \([0,75]\) is added to the clean image patches to generate their noisy counterparts. To augment the training samples, rotation and flipping were utilized. We tested our model on five public, commonly used datasets: Set12 [58], BSD68 [58], CBSD68 [58], Kodak24 [59], and McMaster [60]. For real noise removal, we selected the SIDD training data [61] and the RENOIR dataset [62] for model training. The SIDD training data and the RENOIR dataset consist of \(320\) and \(240\) pairs of noisy images and their near noise-free counterparts, respectively. We randomly cut the images in these datasets into image patches of size \(140\times 140\), and applied rotation and flipping operations for data augmentation. The SIDD validation set [61] and the DND sRGB dataset [63] were used for model evaluation. ### Experimental settings All our experiments were performed on a PC equipped with a CPU of Intel(R) Core(TM) i7-11700KF, 32 GB RAM, and a GPU of NVIDIA GeForce RTX 3080Ti. The proposed DCANet was trained for grayscale and color images, respectively. The training of the DCANet for the synthetic noise cost about \(48\) hours. For real image denoising, it took about \(45\) hours to train the model. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c} \hline \hline Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline Dilated rate & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: The dilation rates of different layers in the lower sub-network. The network parameters of the DCANet were optimized by the Adam optimizer [64]. For the AWGN removal, the DCANet was trained for 600,000 iterations, during which the initial learning rate was \(10^{-4}\) and then decreased by half every 100,000 iterations, and the batch size was set to 24.
For real image denoising, we used 120 epochs to train the model, during which the initial learning rate was \(2\times 10^{-4}\) and was steadily reduced to \(10^{-6}\) using a cosine annealing strategy [65]; the batch size was set to 16. For the other hyper-parameters of the Adam algorithm, we used the default settings. ### Ablation study To verify the effectiveness of the proposed denoising architecture, especially the effect of the SCAM, the dual CNN, and the skip connections, we trained ten different networks for grayscale image denoising. The ten different models and their corresponding performance are shown in Table 2. The Set12 [58] dataset was utilized for our ablation evaluation, and the noise level was set to 75. In Table 2, one can find that the short and long skip connections improve the network performance. Besides, the denoising performance of the models using only a single attention mechanism (SAM or CAM) is better than that of the models using serial attention mechanisms, but lower than that of the model using the parallel attention mechanism (the SCAM in the proposed model), which shows the effectiveness of the proposed SCAM. In addition, the denoising performance of the models using a single sub-network is inferior to that of the dual CNN structure. The results validate that using the SCAM, the dual CNN, and skip connections can improve the model performance. ### The additive white Gaussian noise removal In this subsection, we show the results of our DCANet on grayscale and color images corrupted by the AWGN. The BM3D [2], WNNM [4], MCWNNM [5], TNRD [66], DnCNN [6], BUIFD [10], IRCNN [18], FFDNet [8], BRDNet [14], ADNet [17], DudeNet [35], DSNetB [19], RIDNet [16], AINDNet [22], and AirNet [11] were used for comparison with our DCANet. Table 3 lists the PSNR values at different noise levels of the compared denoising models on the Set12 dataset. One can see that the denoising performance of the DCANet is slightly behind the BRDNet and ADNet at noise levels 15 and 25; however, our DCANet beats the other methods at noise level 50. It should also be noted that for the image "Barbara", the traditional methods BM3D and WNNM obtain significant denoising results; we think the reason is that this image contains rich repetitive structures, and the non-local self-similarity learning based methods can capture such structures better. Table 4 presents the average SSIM results of the compared models on the Set12 dataset. As can be seen from the table, our DCANet also produces competitive results. Table 5 reports the quantitative results of the compared denoising models on the BSD68 dataset. It can be found that the RIDNet obtains the leading performance at noise levels 15 and 25. Our proposed model obtains competitive denoising results at all tested noise levels, and in particular it outperforms the other denoising methods at noise level 50. Fig. 4 shows the visual comparison results of different denoising models on the "test003" image at noise level 50. One can find that, compared with the other models, the DCANet obtains better denoising performance, both quantitatively and qualitatively. It can be seen from our experimental results on the grayscale images that our DCANet achieves competitive denoising performance compared with other state-of-the-art methods. In particular, a trend can be discovered from Table 3, Table 4, and Table 5: for grayscale image denoising, our model becomes more effective as the noise power increases. For the noise removal evaluation on color images, the CBSD68, Kodak24, and McMaster datasets were applied.
Table 6 lists the average PSNR values of the compared models on the three datasets at different noise levels. It can be seen that our model obtains leading denoising results compared with the other state-of-the-art denoising models. Table 7 reports the average SSIM values of the compared denoising methods on the three color image datasets at multiple noise levels. One can find that the DCANet again generates better results. Fig. 5 presents the visual comparison results of the different denoising models on the "kodim23" image from the Kodak24 dataset. One can see that our model obtains a visually appealing result and achieves a favorable trade-off between image detail preservation and noise reduction.

### Real noise removal

For real noise removal, we evaluated the denoising methods on two commonly used public datasets: the SIDD validation set and the DND sRGB images. The two datasets contain real noisy images and their near noise-free counterparts; the counterparts serve as the ground truth, from which the PSNR and SSIM values can be computed. Table 8 lists the average PSNR and SSIM values of the compared models on the SIDD validation set and the DND sRGB images. One can see that the DCANet model obtains effective denoising performance. Fig. 6 shows the visual results of the compared models for real noise removal. It can be found that the MCWNNM, TWSC, and CDnCNN-B generate unsatisfactory visual effects. In contrast, the AINDNet, VDN, and the proposed DCANet achieve much better visual quality and PSNR/SSIM values.

### Model complexity analysis

We evaluated the network complexity of the proposed DCANet in terms of the running time, the number of floating point operations (FLOPs), and the number of network parameters. The codes of all the compared denoising models are from the original authors. The BM3D [2], WNNM [4] and MCWNNM [5] were run in the Matlab (R2020a) environment. The DnCNN-B [6], BUIFD [10], IRCNN [18], FFDNet [8], BRDNet [14], CBDNet [9], RIDNet [16], DudeNet [35], AINDNet [22], VDN [21], ADNet [17], AirNet [11], and the proposed DCANet were implemented in the PyCharm (2021) environment. Three randomly chosen grayscale and color noisy images of different sizes were used to evaluate the runtime of the different denoising models. The noise level was set to 25. We obtained the runtime of every tested method on each image by averaging the time of 20 runs. It should be noted that our experiments neglected the memory transfer time between the CPU and GPU. Table 9 lists the runtimes of the tested denoising methods. It can be found that our DCANet is fast on both grayscale and color images.
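The runtime protocol above (20 timed runs per image, with tensors already resident on the GPU so that CPU-GPU transfer is excluded) can be reproduced with a helper along these lines; this is a sketch of the measurement procedure, not the authors' benchmarking script.

```python
import time
import torch

@torch.no_grad()
def average_runtime(model, image, repeats=20):
    """Mean GPU inference time over `repeats` runs, excluding CPU-GPU transfer."""
    model = model.cuda().eval()
    x = image.cuda()           # transfer happens before the timer starts
    model(x)                   # warm-up run (kernel selection, caching)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        model(x)
    torch.cuda.synchronize()   # wait until all queued kernels finish
    return (time.perf_counter() - start) / repeats
```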
Table 10 reports the number of parameters and FLOPs of the different models on grayscale and color images, respectively. It can be observed from Table 10 that some state-of-the-art blind image denoising methods have large numbers of parameters; these models are usually equipped with complex structures. However, the proposed DCANet can obtain competitive or even better performance. For those models with a similar number of parameters, like the BUIFD, BRDNet, DudeNet, and RIDNet, our model achieves better results in most of the tested denoising tasks, especially in color image denoising and real noise removal. Namely, our model obtains a favorable trade-off between network complexity and denoising performance.

\begin{table}
\begin{tabular}{c l c c}
\hline\hline
Model ID & Models & PSNR & SSIM \\
\hline
1 & DCANet without short skip connection & 25.60 & 0.7356 \\
2 & DCANet without long skip connection & 25.72 & 0.7447 \\
3 & DCANet without SCAM & 25.65 & 0.7406 \\
4 & DCANet without SAM & 25.73 & 0.7453 \\
5 & DCANet without CAM & 25.71 & 0.7444 \\
6 & DCANet with serial attention mechanism (SAM front, CAM behind) & 25.62 & 0.7377 \\
7 & DCANet with serial attention mechanism (CAM front, SAM behind) & 25.66 & 0.7409 \\
8 & DCANet with only the upper sub-network & 25.52 & 0.7325 \\
9 & DCANet with only the lower sub-network & 25.70 & 0.7443 \\
10 & Proposed DCANet model & **25.75** & **0.7458** \\
\hline\hline
\end{tabular}
\end{table}
Table 2: Quantitative comparison results of the ten models containing different components on the Set12 dataset. The noise level is set to 75. The best results are bolded.

Figure 4: Visual comparison results on the "test003" image from the BSD68 dataset at the noise level 50. (I) Ground-truth image, (II) Noisy image / 14.15dB, (III) BM3D / 26.21dB, (IV) WNNM / 26.50dB, (V) DnCNN-S / 26.90dB, (VI) IRCNN / 26.85dB, (VII) BUIFD / 26.32dB, (VIII) FFDNet / 26.92dB, (IX) ADNet / 27.06dB, (X) DCANet / 27.18dB.

Figure 5: Visual comparison results on the "kodim23" image from the Kodak24 dataset at the noise level 50. (I) Ground-truth image, (II) Noisy image / 14.16dB, (III) CBM3D / 30.95dB, (IV) MCWNNM / 25.25dB, (V) CDnCNN-S / 31.47dB, (VI) CDnCNN-B / 31.12dB, (VII) IRCNN / 31.16dB, (VIII) BUIFD / 29.41dB, (IX) FFDNet / 31.50dB, (X) ADNet / 31.47dB, (XI) AirNet / 31.50dB, (XII) DCANet / 31.86dB.

\begin{table}
\begin{tabular}{c l c c c c c c c c c c c c c}
\hline\hline
Noise levels & Models & C.man & House & Peppers & Starfish & Monar. & Airpl. & Parrot & Lena & Barbara & Boat & Man & Couple & Average \\
\hline
\(\sigma=15\) & BM3D [2] & 31.91 & 34.93 & 32.69 & 31.14 & 31.85 & 31.07 & 31.37 & 34.26 & 33.10 & 32.13 & 31.92 & 31.10 & 32.37 \\
 & WNNM [4] & 32.17 & 35.13 & 32.99 & 31.82 & 32.71 & 31.39 & 31.62 & 34.27 & 33.60 & 32.27 & 32.11 & 32.17 & 32.70 \\
 & TNRD [66] & 32.19 & 34.53 & 33.04 & 31.75 & 32.56 & 31.46 & 31.63 & 34.24 & 32.13 & 32.14 & 32.23 & 32.11 & 32.50 \\
 & DnCNN-S [6] & 32.61 & 34.97 & 33.30 & 32.20 & 33.09 & 31.70 & 31.83 & 34.62 & 32.64 & 32.42 & 32.46 & 32.47 & 32.86 \\
 & IRCNN [18] & 32.58 & 34.89 & 33.31 & 32.02 & 32.82 & 31.70 & 31.84 & 34.53 & 32.43 & 32.34 & 32.40 & 32.40 & 32.77 \\
 & BUIFD [10] & 31.74 & 34.78 & 32.80 & 31.92 & 32.77 & 31.34 & 31.39 & 34.38 & 31.68 & 32.18 & 32.25 & 32.22 & 32.46 \\
 & FFDNet [8] & 32.43 & 35.07 & 33.25 & 31.90 & 32.66 & 31.57 & 31.81 & 34.62 & 32.54 & 32.38 & 32.41 & 32.46 & 32.77 \\
 & BRDNet [14] & 32.80 & 35.27 & 33.47 & 32.24 & 33.35 & 31.85 & 32.00 & 34.75 & 32.93 & 32.55 & 32.50 & 32.62 & 33.03 \\
 & ADNet [17] & 32.81 & 35.22 & 33.49 & 32.17 & 33.17 & 31.86 & 31.96 & 34.71 & 32.80 & 32.57 & 32.47 & 32.58 & 32.98 \\
 & DudeNet [35] & 32.71 & 35.13 & 33.38 & 32.29 & 33.28 & 31.78 & 31.93 & 34.66 & 32.73 & 32.46 & 32.46 & 32.40 & 32.94 \\
 & DCANet & 32.43 & 35.13 & 33.09 & 32.06 & 33.16 & 31.55 & 31.80 & 34.70 & 32.76 & 32.42 & 32.40 & 32.48 & 32.83 \\
\hline
\(\sigma=25\) & BM3D [2] & 29.45 & 32.85 & 30.16 & 28.56 & 29.25 & 28.42 & 28.93 & 32.07 & 30.71 & 29.90 & 29.61 & 29.71 & 29.97 \\
 & WNNM [4] & 29.64 & 33.22 & 30.42 & 29.03 & 29.84 & 28.69 & 29.15 & 32.24 & 31.24 & 30.03 & 29.76 & 29.82 & 30.26 \\
 & TNRD [66] & 29.72 & 32.53 & 30.57 & 29.02 & 29.85 & 28.88 & 29.18 & 32.00 & 29.41 & 29.91 & 29.87 & 29.71 & 30.06 \\
 & DnCNN-S [6] & 30.18 & 33.06 & 30.87 & 29.41 & 30.28 & 29.13 & 29.43 & 32.44 & 30.00 & 30.21 & 30.10 & 30.12 & 30.43 \\
 & IRCNN [18] & 30.08 & 33.06 & 30.88 & 29.27 & 30.09 & 29.12 & 29.47 & 32.43 & 29.92 & 30.17 & 30.04 & 30.08 & 30.38 \\
 & BUIFD [10] & 29.42 & 33.03 & 30.48 & 29.21 & 30.20 & 28.99 & 28.94 & 32.20 & 29.18 & 29.97 & 29.88 & 29.90 & 30.12 \\
 & FFDNet [8] & 30.10 & 33.28 & 30.93 & 29.32 & 30.08 & 29.04 & 29.44 & 32.57 & 30.01 & 30.25 & 30.11 & 30.20 & 30.44 \\
 & BRDNet [14] & 30.39 & 33.41 & 31.04 & 29.46 & 30.50 & 29.20 & 29.55 & 32.65 & 30.34 & 30.33 & 30.14 & 30.28 & 30.61 \\
 & ADNet [17] & 30.34 & 33.41 & 31.14 & 29.41 & 30.39 & 29.17 & 29.49 & 32.61 & 30.25 & 30.37 & 30.08 & 30.24 & 30.58 \\
 & DudeNet [35] & 30.23 & 33.24 & 30.98 & 29.53 & 30.44 & 29.14 & 29.48 & 32.52 & 30.15 & 30.24 & 30.08 & 30.15 & 30.52 \\
 & DCANet & 30.18 & 33.39 & 30.84 & 29.42 & 30.51 & 29.10 & 29.49 & 32.72 & 30.49 & 30.35 & 30.14 & 30.26 & 30.57 \\
\hline
\(\sigma=50\) & BM3D [2] & 26.13 & 29.69 & 26.68 & 25.04 & 25.82 & 25.10 & 25.90 & 29.05 & 27.22 & 26.78 & 26.81 & 26.46 & 26.72 \\
 & WNNM [4] & 26.45 & 30.33 & 26.95 & 25.44 & 26.32 & 25.42 & 26.14 & 29.25 & 27.79 & 26.97 & 26.94 & 26.64 & 27.05 \\
 & TNRD [66] & 26.62 & 29.48 & 27.10 & 25.42 & 26.31 & 25.59 & 26.16 & 28.93 & 25.70 & 26.94 & 26.98 & 26.50 & 26.81 \\
 & DnCNN-S [6] & 27.03 & 30.00 & 27.32 & 25.70 & 26.78 & 25.87 & 26.48 & 29.39 & 26.22 & 27.20 & 27.24 & 26.90 & 27.18 \\
 & IRCNN [18] & 26.88 & 29.96 & 27.33 & 25.57 & 26.61 & 25.80 & 26.55 & 29.40 & 26.24 & & & & \\
\hline\hline
\end{tabular}
\end{table}
Table 3: PSNR results of the compared denoising models on the Set12 dataset at noise levels 15, 25 and 50.

## 5 Conclusion

In this paper, we propose a dual convolutional neural network with attention (DCANet) for blind image denoising. The proposed DCANet is composed of a noise estimation network, an attention module, and a dual convolutional denoising network. The noise estimation network is applied to estimate the noise in a noisy image. The attention module consists of spatial and channel attention blocks and is utilized to filter out unimportant information. The dual convolutional denoising network contains two different branches, which not only widens the network to improve its learning ability, but also captures complementary image features to enhance the denoising effect. To the best of our knowledge, the combination of the dual CNN and the attention mechanism for blind image denoising has not been investigated before.

\begin{table}
\begin{tabular}{l c c c}
\hline\hline
Models & \(\sigma\)=15 & \(\sigma\)=25 & \(\sigma\)=50 \\
\hline
BM3D [2] & 0.896 & 0.851 & 0.766 \\
WNNM [4] & 0.894 & 0.846 & 0.756 \\
TNRD [66] & 0.896 & 0.851 & 0.768 \\
DnCNN-S [6] & 0.903 & 0.862 & 0.783 \\
IRCNN [18] & 0.901 & 0.860 & 0.780 \\
BUIFD [10] & 0.899 & 0.855 & 0.755 \\
FFDNet [8] & 0.903 & 0.864 & 0.791 \\
BRDNet [14] & 0.906 & 0.866 & 0.794 \\
ADNet [17] & 0.905 & 0.865 & 0.791 \\
RIDNet [16] & 0.906 & 0.867 & 0.793 \\
DCANet & 0.903 & 0.866 & 0.798 \\
\hline\hline
\end{tabular}
\end{table}
Table 4: Quantitative comparison results on the Set12 dataset. The three best results are respectively emphasized in red, blue and green.

Figure 6: Visual comparison results on the "11.4" image from the SIDD validation set. (I) Ground-truth image / (PSNR/SSIM), (II) Noisy image / (18.25/0.169), (III) MCWNNM / (28.63/0.725), (IV) TWSC / (30.42/0.837), (V) CDnCNN-B / (20.76/0.231), (VI) AINDNet / (36.24/0.904), (VII) VDN / (36.39/0.907), (VIII) DCANet / (36.36/0.912).

Compared with the state-of-the-art denoising models, the proposed DCANet obtains competitive denoising performance. Our DCANet also achieves a favorable trade-off between model complexity and denoising ability; therefore, the model can be an option for practical image denoising tasks. In the future, we will further investigate the noise estimation network to obtain more accurate noise estimation. The application of the DCANet to other low-level vision tasks such as image deraining and super-resolution will also be addressed.

## Declarations

Funding. This work was supported by the Natural Science Foundation of China under grants No. 61863037 and No. 41971392, and by the Applied Basic Research Foundation of Yunnan Province under grant No. 202001AT070077.

Authors contribution statement. Wencong Wu conceived and designed the study. Wencong Wu, Guannan Lv, and Yingying Duan performed the experiments. Wencong Wu and Peng Liang were responsible for drawing the figures and tables. Data analysis and collation were carried out by Guannan Lv and Yingying Duan. Wencong Wu, Yungang Zhang, and Yuelong Xia wrote the paper. Yungang Zhang provided the funding support. Wencong Wu, Yungang Zhang, and Yuelong Xia reviewed and edited the manuscript. All authors read and approved the manuscript.
Ethical and informed consent for data used. The datasets used in this work comply with ethical standards, and the authors were given permission to access them.

Data availability and access. If necessary, the data involved in this work can be provided by the corresponding author, and the code of this work is accessible at [https://github.com/WenCongWu/DCANet](https://github.com/WenCongWu/DCANet).

Competing Interests. The authors declare that they have no conflicts of interest related to this work. The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
2308.14340
HRGCN: Heterogeneous Graph-level Anomaly Detection with Hierarchical Relation-augmented Graph Neural Networks
This work considers the problem of heterogeneous graph-level anomaly detection. Heterogeneous graphs are commonly used to represent behaviours between different types of entities in complex industrial systems for capturing as much information about the system operations as possible. Detecting anomalous heterogeneous graphs from a large set of system behaviour graphs is crucial for many real-world applications like online web/mobile service and cloud access control. To address the problem, we propose HRGCN, an unsupervised deep heterogeneous graph neural network, to model complex heterogeneous relations between different entities in the system for effectively identifying these anomalous behaviour graphs. HRGCN trains a hierarchical relation-augmented Heterogeneous Graph Neural Network (HetGNN), which learns better graph representations by modelling the interactions among all the system entities and considering both source-to-destination entity (node) types and their relation (edge) types. Extensive evaluation on two real-world application datasets shows that HRGCN outperforms state-of-the-art competing anomaly detection approaches. We further present a real-world industrial case study to justify the effectiveness of HRGCN in detecting anomalous (e.g., congested) network devices in a mobile communication service. HRGCN is available at https://github.com/jiaxililearn/HRGCN.
Jiaxi Li, Guansong Pang, Ling Chen, Mohammad-Reza Namazi-Rad
2023-08-28T06:32:09Z
http://arxiv.org/abs/2308.14340v1
# HRGCN: Heterogeneous Graph-level Anomaly Detection with Hierarchical Relation-augmented Graph Neural Networks

###### Abstract

This work considers the problem of heterogeneous graph-level anomaly detection. Heterogeneous graphs are commonly used to represent behaviours between different types of entities in complex industrial systems for capturing as much information about the system operations as possible. Detecting anomalous heterogeneous graphs from a large set of system behaviour graphs is crucial for many real-world applications like online web/mobile service and cloud access control. To address the problem, we propose HRGCN, an unsupervised deep heterogeneous graph neural network, to model complex heterogeneous relations between different entities in the system for effectively identifying these anomalous behaviour graphs. HRGCN trains a hierarchical relation-augmented Heterogeneous Graph Neural Network (HetGNN), which learns better graph representations by modelling the interactions among all the system entities and considering both source-to-destination entity (node) types and their relation (edge) types. Extensive evaluation on two real-world application datasets shows that HRGCN outperforms state-of-the-art competing anomaly detection approaches. We further present a real-world industrial case study to justify the effectiveness of HRGCN in detecting anomalous (e.g., congested) network devices in a mobile communication service. HRGCN is available at [https://github.com/jiaxililearn/HRGCN](https://github.com/jiaxililearn/HRGCN).

GNN, Heterogeneous Graph, Anomaly Detection, Industrial Application

## I Introduction

Anomaly detection is an important area of data analytics and machine learning which involves the development of algorithms and techniques to identify deviations from normal behaviours. By distinguishing between normal and abnormal patterns, anomaly detection empowers organisations to proactively address emerging issues, prevent system failures, and enhance reliability [1]. In the realm of large and complex systems, such as cloud platforms and mobile/IoT networks, anomaly detection has garnered increasing attention in recent years [2]. These systems are characterised by their extensive data volumes and diverse services and relations, presenting formidable challenges in detecting anomalous patterns, such as system performance degradation, unauthorised access, or malicious activities. Addressing these challenges requires comprehensive evaluations of behaviours involving diverse entities in the system. In this context, behaviour data is typically represented by graphs to capture the entity interactions. Figure 1 illustrates an example of normal and abnormal devices based on heterogeneous mobile networks/graphs involving two types of entities (users and devices) and multiple types of interactions among the entities. Although the users may occasionally encounter intermittent "bad" events, this does not necessarily imply that the entire graph exhibits abnormal behaviours. Therefore, a nuanced understanding of the complex inter-dependencies and interactions among the heterogeneous entities within the system is crucial for accurately detecting the anomalies. There are three main streams of related anomaly detection approaches on graphs, including homogeneous-graph-based approaches [3, 4, 5], heterogeneous-graph-based approaches [6, 7, 8], and hybrid approaches considering both graph and non-graph data (e.g., system logs) [9].
Their common objective is to identify graph-level anomalies, i.e., exceptional graphs that deviate significantly from the other graphs. Compared with homogeneous-graph-based approaches that focus on dealing with graphs consisting of a single type of nodes,

Fig. 1: Example mobile network events with heterogeneous entities and relations. Regular (green) vs. irregular (red) events and devices.
2305.18592
Deep Neural Networks Generalization and Fine-Tuning for 12-lead ECG Classification
Numerous studies are aimed at diagnosing heart diseases based on 12-lead electrocardiographic (ECG) records using deep learning methods. These studies usually use specific datasets that differ in size and parameters, such as patient metadata, number of doctors annotating ECGs, types of devices for ECG recording, data preprocessing techniques, etc. It is well-known that high-quality deep neural networks trained on one ECG dataset do not necessarily perform well on another dataset or clinical settings. In this paper, we propose a methodology to improve the quality of heart disease prediction regardless of the dataset by training neural networks on a variety of datasets with further fine-tuning for the specific dataset. To show its applicability, we train different neural networks on a large private dataset TIS containing various ECG records from multiple hospitals and on a relatively small public dataset PTB-XL. We demonstrate that training the networks on a large dataset and fine-tuning it on a small dataset from another source outperforms the networks trained only on one small dataset. We also show how the ability of a deep neural networks to generalize allows to improve classification quality of more diseases.
Aram Avetisyan, Shahane Tigranyan, Ariana Asatryan, Olga Mashkova, Sergey Skorik, Vladislav Ananev, Yury Markin
2023-05-19T14:49:04Z
http://arxiv.org/abs/2305.18592v1
# Deep Neural Networks Generalization and Fine-Tuning for 12-lead ECG Classification ###### Abstract Numerous studies are aimed at diagnosing heart diseases based on 12-lead electrocardiographic (ECG) records using deep learning methods. These studies usually use specific datasets that differ in size and parameters, such as patient metadata, number of doctors annotating ECGs, types of devices for ECG recording, data preprocessing techniques, etc. It is well-known that high-quality deep neural networks trained on one ECG dataset do not necessarily perform well on another dataset or clinical settings. In this paper, we propose a methodology to improve the quality of heart disease prediction regardless of the dataset by training neural networks on a variety of datasets with further fine-tuning for the specific dataset. To show its applicability, we train different neural networks on a large private dataset TIS containing various ECG records from multiple hospitals and on a relatively small public dataset PTB-XL. We demonstrate that training the networks on a large dataset and fine-tuning it on a small dataset from another source outperforms the networks trained only on one small dataset. We also show how the ability of a deep neural networks to generalize allows to improve classification quality of more diseases. Electrocardiography, time series analysis, deep learning, ECG classification, fine-tuning, pre-trained neural networks ## I Introduction cardiovascular diseases nowadays represent the leading cause of death. Several tests are commonly used to diagnose heart conditions. For instance, tests such as echocardiograms can more accurately determine the presence of disease than others. However, some of them are generally not easily accessible to patients, require specialized equipment, and are also time-consuming and expensive. Electrocardiography (ECG) is a non-invasive tool to assess the general cardiac condition of a patient. Due to it being a fast, painless, efficient, and low-maintenance test, it is used as the primary method for diagnosing cardiovascular diseases. In this paper, we focus on the improvement of the most common type of ECG records used by medical institutions, short 12-lead ECGs. Despite the wide application of ECG recording, ECG analysis is a complex task that requires the expertise of a specialist with broad specific knowledge. Often not only the health but also the life of the patient depends on the timely decoding of all the data. This is further complicated by the difficulty of manual ECG analysis, which increases the likelihood of errors or incompleteness of diagnosis in interpretation. Therefore, numerous studies aim to detect abnormalities using automated methods such as applying DNNs in ECG analysis. Automated interpretation in ECG analysis can transform the ECG into a screening tool and predictor of diseases. However, despite the great development of various deep learning methods applied to ECGs, there are still significant limitations that do not allow us to assert the success of these methods in a clinical setting with more non-unified and noisy data. The number of digital ECGs is increasing dramatically. There are promising studies that use classical machine learning methods to detect cardiac abnormalities [1, 2]. Nevertheless, the most popular methods for analyzing ECGs are DNNs. To detect an abnormality, the researchers either use raw or preprocessed ECG signals [3, 4] or convert signals to images using wavelet or short-time Fourier transform [5, 6, 7]. 
Multiple surveys describe deep learning methods for ECG classification [8, 9]. Most of the studies use partial and end-to-end deep learning techniques with convolutional layers and residual connections. Several approaches have already shown cardiologist-level performance for some cardiac abnormalities [10]. One of the main problems that arise in medical domain studies is the small amount of data. The data is often difficult to obtain because of privacy and security reasons. Consequently, few papers use large amounts of ECG data [11, 12]. Even with a large dataset available [11], there may be challenges because the diagnosis is usually presented in text form. In this case, labels have to be extracted from textual reports, which can lead to additional errors [13]. Most of the studies that analyze 12-lead ECGs use open datasets, such as PTB-XL, CPSC2018, Chapman-Shaoxing Database [14, 15, 16], for evaluation. These datasets consist of several thousand ECGs with fixed parameters such as duration and frequency. In addition, each dataset is preprocessed and annotated by a small group of cardiologists, who can make similar mistakes during annotation. Due to the small number of samples in open datasets, neural networks are limited to the number of abnormalities that they can classify well. Thus, DNNs trained on these datasets to predict widespread abnormalities do not show the same quality on datasets from different domains. Several studies demonstrate that the same DNNs trained on different datasets show significantly different results on their respective test datasets even for the most popular heart diseases such as atrial fibrillation [17, 18, 19]. Such a difference in prediction performance for various ECG datasets makes the networks less robust. This makes it impossible to widely use them on clinical datasets nowadays. In this paper, we propose a methodology to improve the accuracy of ECG classification. We demonstrate that training DNN on a large dataset assembled from a variety of different datasets and then fine-tuning it on a relatively smaller dataset improves abnormality prediction for this specific dataset. Additionally, we demonstrate that DNNs trained on a large dataset have good generalization ability for ECG analysis and can help to improve prediction results for different datasets. We conduct the experiments using a large, assembled dataset TIS of raw 12-lead ECG records and one of the most popular public datasets PTB-XL [14]. We compare DNNs with different architectures trained on TIS and PTB-XL for 7 selected abnormalities and show that neural networks trained on TIS show stable prediction quality regardless of the test data whereas networks trained on PTB-XL start degrading on the data from a different source. We also demonstrate, for two selected abnormalities, that trained DNNs deliver results comparable to cardiologists' one. ## 2 Methods ### Dataset retrieval and preprocessing Two datasets are used for analysis. The first one is a publicly available PTB-XL dataset of 21,837 ECG records [14]. All ECGs in PTB-XL are 10 seconds in duration and have a 500 Hz frequency. The ECGs are preprocessed by a bandpass filter to reduce the noise and annotated by up to 2 cardiologists. The second one is the TIS dataset of more than 1,500,000 12-lead ECGs collected from multiple hospitals over 2 years with almost 57% of labeled records annotated by more than 130 cardiologists. The data is stored in signal format and does not have any initial preprocessing. 
These hospitals use different devices to record ECGs in a clinical setting. As a result, the dataset is heterogeneous in various parameters: types of recording devices, frequency, duration, etc. For this research, we use 549,279 records from the TIS dataset, which are collected and annotated by the doctors for the period from December 2019 to December 2021. Ethical approval was granted by the Ethics Committee of the I.M. Sechenov First Moscow State Medical University (Protocol No. 06-23). One of the important advantages of records in TIS is the initial annotation of the dataset with labels apart from textual diagnoses. Existing research suggests methods to extract diseases from textual diagnoses [11, 20]. However, these methods perform with a certain percentage of errors, which can affect the prediction. TIS dataset is developed in a way that each doctor can select the abnormality from the list of heart diseases along with writing a diagnosis. Therefore, there is no need to use the algorithms to retrieve the labels from texts, potentially creating additional errors in the dataset. To conduct valid experiments, TIS and PTB-XL formats should be standardized. Therefore, annotated TIS records are filtered and converted to the same data format as PTB-XL. For training and validation, we selected ECG records of 9-10 seconds duration and 500, 1000 Hz frequency. All records are resampled to 500 Hz and zero-padded to 10 seconds. For the TIS test dataset, we use only 10-second ECG records with 500 Hz frequency. We also considered only patients older than 18 years. Errors can appear while recording ECGs in hospitals, for example, in case of a bad electrode cable connection or improperly working cardiographs. To avoid these errors, the records that had at least one lead with the constant value are removed during data preprocessing. The output ECG signal characteristics can vary in datasets because they depend on the recording devices, the systems, and the formats in which they are stored. For example, PTB-XL and TIS records have 1.6 and 2906 average amplitude respectively, therefore, can be interpreted incorrectly by the DNNs without preprocessing. To achieve a similar average amplitude, we apply z-score normalization to both datasets after the initial transformations. ### Model architecture and training We use a popular DenseNet model [21] from the family of CNNs widely used in ECG classification studies [22] which is adapted for unidimensional data. We also reviewed the DNNs proposed in [23]. To avoid adjusting architecture to a specific dataset, we use default values for the kernel size, convolutional layers number, and node number per layer for DenseNet. We use default parameters for the other observed DNNs for the same reasons. The observed DenseNet model consists of a Convolutional (Conv) layer with kernel size 7, followed by a max pooling layer, 4 Dense blocks, and 3 Transition blocks. The output of the last Dense block is fed into the Adaptive Average Pooling layer and the fully connected layer for classification. The first Conv layer and each layer inside the Dense block are followed by batch normalization and ReLU activation function. For the Transition block, we use average pooling with kernel size 2. The architecture of DenseNet is presented in Fig. 1. We split both TIS and PTB-XL datasets into train and test parts and use the test dataset to evaluate the DNNs. The test datasets are fixed for the experiments and the models trained on TIS and PTB-XL are both evaluated on the same data. 
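A minimal sketch of the preprocessing pipeline described above (resampling to 500 Hz, zero-padding to 10 s, rejection of flat leads, z-score normalization) is given below. Per-lead normalization is our assumption; the text does not specify whether the z-score is computed per lead or per record.

```python
import numpy as np
from scipy.signal import resample

def preprocess_ecg(sig, fs, target_fs=500, duration_s=10):
    """sig: array of shape (12, T), a raw 12-lead ECG sampled at fs Hz."""
    if fs != target_fs:  # records come at 500 or 1000 Hz
        sig = resample(sig, int(sig.shape[1] * target_fs / fs), axis=1)
    n = target_fs * duration_s
    if sig.shape[1] < n:  # zero-pad 9-10 s records to exactly 10 s
        sig = np.pad(sig, ((0, 0), (0, n - sig.shape[1])))
    sig = sig[:, :n]
    if np.any(sig.std(axis=1) == 0):  # a constant lead indicates a recording error
        return None
    # z-score normalization to align amplitude scales across devices
    return (sig - sig.mean(axis=1, keepdims=True)) / sig.std(axis=1, keepdims=True)
```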
After filtering, preprocessing, and splitting the data, 549,279 ECG records from the TIS dataset are selected for training and validation, and 31,872 records are used to evaluate the DNNs. The characteristics of TIS are presented in Table 1. ECGs recorded during two selected months are used from the TIS dataset for tests. PTB-XL has a dedicated database parameter to divide the dataset for training and testing. Hence, ECG records of the 10th fold are selected for the test dataset, and the other records are used for training and validation. For each selected abnormality, we trained a binary classifier. Each trained neural network has the same input format: ECG records of 10 seconds duration with a 500 Hz frequency. We use the binary cross-entropy with logits loss, which is minimized with the Adam optimizer [24]. The optimizer has default parameters with a learning rate of 0.003. Due to class imbalance, we use a weighted loss, giving higher weight to the positive class based on the proportion of its presence. The learning rate is reduced by a factor of 0.8 whenever the validation loss does not present any improvement for 3 epochs in a row. The convolutional layers' weights are initialized with Kaiming normal values [25]. The rest of the layers are initialized with constant weight and zero bias. The training runs for 100 epochs with early stopping with a patience of 20. Finally, the DNN with the best validation loss during training is selected.

### Hyperparameter tuning

We use stratified cross-validation with 5 folds to find the best values of the hyperparameters for both datasets. We repeated the selection, training, and evaluation process 15 times. To select optimal values for the learning rate and the reducing factor, we use the following options: a learning rate in the range of 0.001 to 0.003 and a factor between 0.4 and 0.8. The best metrics are achieved for a learning rate equal to 0.003 and a factor of 0.8. On the one hand, such a choice is large enough to ensure convergence in few steps; on the other hand, it is small enough to avoid divergent behavior. The activation function ReLU is chosen among ReLU, ELU, SELU, and LeakyReLU. For the batch size, we selected 256, the maximum value that can fit in the memory of the GPU. We want to demonstrate how pre-training the DNN on a variety of datasets combined into one affects the prediction results on another dataset. We train the models on the TIS dataset and use the obtained weights to fine-tune them on the PTB-XL dataset. We selected the same architecture to train the neural network and froze the first blocks of the pre-trained DNN. Then the weights of the last layers are randomly initialized and fine-tuned on PTB-XL. We set up multiple experiments to find the best number of blocks of DenseNet to freeze as well as the learning rate. The parameters are chosen from the following options: 5, 7, or 9 frozen blocks and a learning rate of 0.001, 0.002, or 0.003. The best metrics on the PTB-XL test dataset are achieved when 7 blocks of DenseNet are frozen and the learning rate is equal to 0.003.

Figure 1: The DenseNet architecture for ECG classification.

Figure 2: Scheme of experiments to compare TIS_dn, PTBXL_dn, and TIS_tuned_dn.

## 3 Experiments and Results

We set up experiments to compare the effect of the training datasets on ECG classification. We use both test datasets to evaluate, for each selected abnormality, the neural networks trained exclusively on the observed dataset.
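Before presenting the results, we give a minimal PyTorch sketch of the training and fine-tuning recipe just described: weighted binary cross-entropy with logits, Adam at learning rate 0.003, a factor-0.8 plateau schedule with patience 3, Kaiming-normal convolution weights, and freezing of the first 7 top-level blocks before fine-tuning on PTB-XL. The positive-class weight `n_neg / n_pos`, the constant weight value 1.0, and the flat block decomposition via `children()` are our assumptions about details the text leaves open.

```python
import torch
import torch.nn as nn

def configure_training(model, n_pos, n_neg):
    # Weighted loss: up-weight the (rarer) positive class
    criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([n_neg / n_pos]))
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
    # Multiply the lr by 0.8 when the validation loss stalls for 3 epochs
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.8, patience=3)
    for m in model.modules():
        if isinstance(m, nn.Conv1d):
            nn.init.kaiming_normal_(m.weight)   # Kaiming normal for convolutions
        elif isinstance(m, (nn.Linear, nn.BatchNorm1d)):
            nn.init.constant_(m.weight, 1.0)    # constant weight elsewhere
            nn.init.zeros_(m.bias)              # zero bias
    return criterion, optimizer, scheduler

def freeze_for_finetuning(pretrained, n_frozen=7):
    blocks = list(pretrained.children())
    for block in blocks[:n_frozen]:             # freeze the first 7 blocks
        for p in block.parameters():
            p.requires_grad = False
    head = blocks[-1]
    if isinstance(head, nn.Linear):             # randomly re-initialize the head
        nn.init.kaiming_normal_(head.weight)
        nn.init.zeros_(head.bias)
    # only the unfrozen parameters are handed to the optimizer
    return [p for p in pretrained.parameters() if p.requires_grad]
```

After each epoch, `scheduler.step(val_loss)` is called with the current validation loss, and training stops once 20 consecutive epochs bring no improvement (early stopping).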
Furthermore, we use the PTB-XL test data to investigate the effect of pre-training the DNN on the large dataset. We compare three neural networks with the same architecture: the network pre-trained on TIS and fine-tuned on PTB-XL (TIS_tuned_dn), and the ones trained either on PTB-XL (PTBXL_dn) or TIS (TIS_dn). The scheme of the performed experiments is shown in Fig. 2. We selected 7 abnormalities according to SCP-ECG standards to train the DNNs: Atrial FIBrillation (AFIB), Right Bundle Branch Block (RBBB), Sinus TACHycardia (STACH), Sinus BRADycardia (SBRAD), first-degree AV Block (1AVB), Left Bundle Branch Block (LBBB), and Premature Ventricular Complexes (PVC). We should note that we merged Complete and Incomplete Left Bundle Branch Blocks (CLBBB and ILBBB) into one disease: the Left Bundle Branch Block (LBBB). Similarly, we merged Complete and Incomplete Right Bundle Branch Blocks into the Right Bundle Branch Block (RBBB). These abnormalities are common and widely represented in each of the considered datasets. The distribution of abnormalities for both datasets is summarized in Table 2. The prediction results of the networks on the TIS and PTB-XL test datasets are presented in Tables 3 and 4, respectively. We chose the Sensitivity (Sens.) and Specificity (Spec.) metrics to compare the quality of the DNNs; these are widely used metrics for ECG analysis [26]. Sensitivity assesses how well the model finds people with heart diseases. Specificity gives an estimate of how accurately the model detects people without a disease. We also compute the G-mean and F2-score to understand which neural network performs better. Higher values of the considered metrics indicate which model gives more accurate abnormality prediction. Comparison of the DNNs demonstrates the importance of large and diverse training data. TIS_dn models generalize disease classification significantly better than PTBXL_dn models and are stable regardless of the test dataset. For most of the observed abnormalities, TIS_dn models show quality on the PTB-XL test dataset comparable to PTBXL_dn models. On the contrary, the metrics of the PTBXL_dn decrease on the TIS test dataset. Different training data have a smaller impact on prediction quality for abnormalities that have less variety among patients. Therefore, the metrics of the PTBXL_dn DNNs detecting atrial fibrillation or right bundle branch block demonstrate a moderate change. However, the difference becomes significant for other heart diseases such as 1AVB, PVC, or LBBB and reaches up to 10% for the G-mean. TIS_tuned_dn networks give promising results. Considering the observed metrics, a TIS_tuned_dn gives better predictions than both the TIS_dn and PTBXL_dn networks for all the abnormalities. The G-mean metric is increased by up to 2% compared to TIS_dn and up to 4% compared to PTBXL_dn, which is a significant improvement in the quality of ECG classification. Moreover, even when the G-mean does not show a significant improvement, the F2-score increases dramatically, which means that a TIS_tuned_dn can much better detect people with abnormalities. We also trained several neural networks with architectures other than DenseNet. We conducted the same experiments with various neural networks used as benchmarks for the classification of the PTB-XL dataset [23] and achieved the same results. The results of the experiments for these DNNs are presented in the tables in the Appendix. The TIS dataset is large, which affects the quality of classification and helps to generalize the DNNs.
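For reference, the four reported metrics can be computed from the binary confusion matrix as in the sketch below; this is a standard formulation (the F2-score is the F-beta score with beta = 2), not code from the study itself.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sens = tp / (tp + fn)          # sensitivity: recall on diseased records
    spec = tn / (tn + fp)          # specificity: recall on healthy records
    g_mean = np.sqrt(sens * spec)
    prec = tp / (tp + fp)
    f2 = 5 * prec * sens / (4 * prec + sens)  # F-beta with beta = 2
    return sens, spec, g_mean, f2
```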
However, we demonstrate that there are other characteristics of each dataset apart from the size that influence the DNN prediction performance. We run the experiments halving the size of the TIS training dataset with the maintaining proportions of abnormality prevalence to show that fine-tuning on relatively small PTB-XL will still improve the classification of PTB-XL test data. We selected 2 abnormalities AFIB and PVC and trained the DenseNet model on a halved TIS train dataset (hTIS_dn). Then we froze the layers of pre-trained networks and fine-tuned them on PTB-XL (hTIS_tuned_dn), which is almost 30 times smaller than TIS. The comparison of the DNNs trained on datasets of different sizes on PTB-XL test data is presented in Table 5. The metrics are not improved for the hTIS_dn model compared to PTBXL_dn and TIS_dn. Wherein, the metrics of hTIS_tuned_dn models are the best among the observed neural networks. Despite halving the training dataset and reducing the number of samples by almost 300,000, fine-tuning the model with approximately 17,000 samples from another data source improves the metrics on PTB-XL. We demonstrate that a model trained on a much smaller but specific dataset showed higher quality on the selected test data compared to the models trained on the full-size dataset. To estimate the quality of DNNs in clinical practice, we compared their prediction with the independent ECG evaluation by three doctors from different hospitals with years of experience in reading and diagnosing ECG records. We took a random subset of 500 samples from the PTB-XL test dataset with the maintaining proportions of the classes. Each cardiologist annotated these 500 records selecting whether two abnormalities observed in this paper (AFIB and PVC) are present in the ECG record. The subset of PTB-XL had 34 and 26 ECG records with AFIB and PVC respectively. After the evaluation, we compared the annotation of the doctors with the PTBXL_dn and TIS_tuned_dn DNNs. The prediction performance of Densenet networks and the doctors' annotation on a subset of 500 ECGs are presented in Table VI. We also plot sensitivity/specificity curves in Fig. 3 and confusion matrices of the doctor's annotation and the DNNs prediction in Appendix Fig. 4, 5. The metrics demonstrate that TIS_tuned_dn outperforms 2 cardiologists for AFIB and all the cardiologists for PVC. ## IV Discussion Despite many convincing studies on 12-lead ECG classification, the research rarely addresses the problem of generalization of the neural networks. This is primarily since usually there is no opportunity to train the DNNs on large amounts of data from various sources. Most studies use the same train and test dataset for the ECG analysis [26], and there are a few researchers that use different datasets [27, 28]. However, they are trained and validated on the same dataset. This eliminates the possibility of correctly validating the DNNs on ECGs from various sources because records may differ for multiple reasons. For instance, data can be initially preprocessed during the ECG recording or later during the doctors' annotation. Another possible reason that can cause the difference in the quality of records is the diversity of ECG devices that can record ECGs with specific parameters, filtering, or distinct noise. Large and heterogeneous ECG data help with networks' generalization. 
We demonstrate that DNNs trained on the collection of different datasets maintain high metrics across different test datasets for the observed heart diseases, and, for two selected abnormalities, show metrics of clinical quality on data from another domain. DNNs trained on open datasets can achieve good initial results for several cardiac abnormalities, but they are unstable on data from different domains. It is necessary to use training data from different sources to be able to generalize the neural network. Another way to make the network more stable is to create a unified approach to filtering and preprocessing during data collection. Most of the open data is initially preprocessed, which may cause the loss of features that determine the abnormality of the record. One of the ways to solve this problem can be publishing raw ECG signals in addition to preprocessed data. An important observation from the experiments in this paper is that neural networks trained on a large variety of data can improve in quality with a relatively small increase in the training data. These results allow us to fine-tune the DNNs on datasets from different sources to get better prediction performance. Thus, researchers who do not have large amounts of data can achieve high-quality results on relatively small amounts of data.

The good prediction performance of the DNNs trained to diagnose 12-lead ECGs does not limit the potential for improvement. As the next steps to develop the existing classification methods, new parameters, which can lead to better generalization of the neural networks, must be considered. First of all, these are patient metadata, which give additional information for certain records. There are already some developments in analyzing metadata [29]; however, the analysis has limitations. The reason for that is the small amount of additional information about a record, due to security reasons or the lack of a single system for storing patients' data. One of the solutions to the problem associated with the confidentiality of storing patients' metadata can be the identification of a person by the ECG [30]. To improve the abnormality prediction models, it will be helpful to develop algorithms for each disease individually. In some cases, there is no need to use the entire ECG record when only a few parts are sufficient. This approach can simplify the neural networks by not providing the parts of the record that are not affected by the considered abnormality. In addition, algorithms should be tested on a variety of real-time data from different sources with feedback from cardiologists. This will help in analyzing incorrectly predicted samples and in the early identification of errors in the algorithm.

Figure 3: Sensitivity-specificity curves show the prediction quality of the PTBXL_dn and TIS_tuned_dn networks for detecting atrial fibrillation (AFIB) and premature ventricular complexes (PVC). Points correspond to the performance of three different doctors' annotations.

The ability to fine-tune the DNNs pre-trained on a large dataset allows us to work on the described improvements. Researchers can use the networks as backbones for more complex architectures to achieve the best quality. ECG metadata and hand-crafted features can be used with the DNNs to evaluate their influence on the prediction.
The weights of the neural networks can also be used for transfer learning for studies that do not have a lot of labeled data, for instance, analysis of ECG records to detect hyperkalemia, hypokalemia [31], or human emotions [32]. Moreover, trained networks can be used as benchmarks for datasets from different sources and provide the possibility to verify new models. ## 5 Conclusion Deep neural networks have achieved promising results in predicting heart diseases with ECG records. However, due to the difference in ECG records from multiple data sources and the small amount of available data for training, current DNNs are not generalized to be widely applied in practice. In this paper, we present the methodology to achieve better abnormality prediction quality regardless of the dataset. We demonstrate that training neural networks on a variety of datasets with further fine-tuning on the specific dataset shows better quality than DNN trained on this specific data set only. This approach can help the application of DNNs for ECGs analysis in medical centers that have different data sources. In addition, it can reduce both the cost and time spent on data annotation by cardiologists, since this approach requires less labeled data. We also show the significance of a large diverse data for a generalization of the neural networks. The improvement of generalization ability leads to the more efficient use of neural networks in real-life applications. We demonstrate that current DNNs trained on a large collection of different datasets and fine-tuned for the selected dataset can achieve cardiologist-level performance. The proposed methodology can be applied to new algorithms and architectures, thereby providing better prediction quality. For further improvement, we consider the analysis of ECG metadata for abnormality classification. Considering additional information about the patient can be substantial and give more accurate predictions. Moreover, we consider only patients that are older than 18 years. This limitation has to be considered when implementing new methods for ECG classification due to criteria for diagnosing children's and adults' heart diseases may differ, which can cause the inability of trained DNNs to predict the abnormalities accurately. One of the approaches would be training different models for predicting cardiac abnormalities in children and adults.
2303.02644
Expectation consistency for calibration of neural networks
Despite their incredible performance, it is well reported that deep neural networks tend to be overoptimistic about their prediction confidence. Finding effective and efficient calibration methods for neural networks is therefore an important endeavour towards better uncertainty quantification in deep learning. In this manuscript, we introduce a novel calibration technique named expectation consistency (EC), consisting of a post-training rescaling of the last layer weights by enforcing that the average validation confidence coincides with the average proportion of correct labels. First, we show that the EC method achieves similar calibration performance to temperature scaling (TS) across different neural network architectures and data sets, all while requiring similar validation samples and computational resources. However, we argue that EC provides a principled method grounded on a Bayesian optimality principle known as the Nishimori identity. Next, we provide an asymptotic characterization of both TS and EC in a synthetic setting and show that their performance crucially depends on the target function. In particular, we discuss examples where EC significantly outperforms TS.
Lucas Clarté, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
2023-03-05T11:21:03Z
http://arxiv.org/abs/2303.02644v2
# Expectation consistency for calibration of neural networks ###### Abstract Despite their incredible performance, it is well reported that deep neural networks tend to be overoptimistic about their prediction confidence. Finding effective and efficient calibration methods for neural networks is therefore an important endeavour towards better uncertainty quantification in deep learning. In this manuscript, we introduce a novel calibration technique named _expectation consistency_ (EC), consisting of a post-training rescaling of the last layer weights by enforcing that the average validation confidence coincides with the average proportion of correct labels. First, we show that the EC method achieves similar calibration performance to temperature scaling (TS) across different neural network architectures and data sets, all while requiring similar validation samples and computational resources. However, we argue that EC provides a principled method grounded on a Bayesian optimality principle known as the _Nishimori identity_. Next, we provide an asymptotic characterization of both TS and EC in a synthetic setting and show that their performance crucially depends on the target function. In particular, we discuss examples where EC significantly outperforms TS. ## 1 Introduction As deep learning models become more widely employed in all aspects of human society, there is an increasing necessity to develop reliable methods to properly assess the trustworthiness of their predictions. Indeed, different uncertainty quantification procedures have been proposed to measure the confidence associated with trained neural network predictions (Abdar et al., 2021; Gawlikowski et al., 2021). Despite their popularity in practice, it is well known that some of these metrics, such as interpreting the last-layer softmax scores as confidence scores, lead to an overestimation of the true class probability (Guo et al., 2017). As a consequence, various methods have been proposed to calibrate neural networks (Gal and Ghahramani, 2016; Guo et al., 2017; Maddox et al., 2019; Minderer et al., 2021). In this work, we propose a novel method for the post-training calibration of neural networks named _expectation consistency_ (EC). It consists of fixing the scale of the last-layer weights by enforcing the average confidence to coincide with the average classification accuracy on the validation set. This procedure is inspired by optimality conditions steaming from the Bayesian inference literature. Therefore, it provides a mathematically principled alternative to similar calibration techniques such as temperature scaling, besides being simple to implement and computationally efficient. Our goal in this work is to introduce the expectation consistency calibration method, illustrate its performance across different deep learning tasks and provide theoretical guarantees in a controlled setting. More specifically, our **main contributions** are: * We introduce a novel method, _Expectation Consistency_ (EC) to calibrate the post-training predictions of neural networks, see Algorithm 1. The method is based on rescaling the last-layer weights so that the average confidence matches the average accuracy on the validation set. We provide a Bayesian inference perspective on expectation consistency that grounds it mathematically. * While calibration methods abound in the uncertainty quantification literature, we compare EC to a close and widely employed method in the deep learning practice: _temperature scaling_ (TS). 
Our experiments with different network architectures and real data sets show that the two methods yield very similar results in practice. * We provide a theoretical analysis of EC in a high-dimensional logistic regression exhibiting overconfidence issues akin to deep neural networks. We show that in this setting EC consistently outperforms temperature scaling in different uncertainty metrics. The theoretical analysis also elucidates the origin of the similarities between the two methods.

```
Input: Validation set \((\mathbf{x}_{i},y_{i})_{i=1}^{n_{val}}\), classifier \(\hat{f}:\mathcal{X}\rightarrow\mathbb{R}^{K}\)
Compute the logits \(\mathbf{z}_{i}=\hat{f}(\mathbf{x}_{i})\in\mathbb{R}^{K}\) and outputs \(\hat{y}_{i}=\arg\max_{k}\mathbf{z}_{ik}\)
Compute the accuracy on the validation set \(\mathcal{A}_{val}=\frac{1}{n_{val}}\sum_{i}\delta(y_{i}=\hat{y}_{i})\)
Determine \(T_{\mathrm{EC}}\) such that \(\frac{1}{n_{val}}\sum_{i}\max_{k}\sigma^{(k)}(\mathbf{z}_{i}/T)=\mathcal{A}_{val}\)
Output: Temperature \(T_{\mathrm{EC}}\), and probabilities on new samples \(\max_{k}\sigma^{(k)}(\mathbf{z}^{\mathrm{new}}/T_{\mathrm{EC}})\)
```
**Algorithm 1** Expectation consistency (EC)

The code used in this project is available at the repository: [https://github.com/SPOC-group/expectation-consistency](https://github.com/SPOC-group/expectation-consistency)

### Related work

Calibration of neural networks -- The calibration of predictive models, in particular neural networks, has been extensively studied; see Abdar et al. (2021) and Gawlikowski et al. (2021) for two reviews. In particular, modern neural network architectures have been observed to return overconfident predictions (Guo et al., 2017; Minderer et al., 2021). While their overconfidence could be partly attributed to their over-parametrization, some theoretical works (Bai et al., 2021; Clarte et al., 2022; DBLAN, 2022) have shown that even simple regression models in the under-parametrized regime can exhibit overconfidence. There exists a range of methods that guarantee calibration asymptotically (i.e. when the number of samples is sufficiently large) without assuming anything about the data distribution, see e.g. Gupta et al. (2020). However, for a limited number of samples, it is less clear which of the proposed methods provides the most accurate calibration.

Temperature scaling -- Guo et al. (2017) proposed _Temperature Scaling_ (TS), a simple post-processing method consisting of rescaling and cross-validating the norm of the last-layer weights. Due to its simplicity and efficiency compared to other methods such as Platt scaling (Platt, 2000) or histogram binning (Zadrozny and Elkan, 2001), TS is widely used in practice to calibrate the output of neural networks (Abdar et al., 2021). Moreover, Clarte et al. (2022) has shown that in some settings, TS is competitive with much more costly Bayesian approaches in terms of uncertainty quantification. While Gupta et al. (2020) has shown that without any assumption on the data model, injective calibration methods such as TS cannot be calibrated in general, Guo et al. (2017) conclude that: "_Temperature scaling is the simplest, fastest, and most straightforward of the methods, and surprisingly is often the most effective._" This justifies why TS is used so widely in practice.

Bayesian methods -- Bayesian methods such as Gaussian processes allow estimating the uncertainty out of the box for a limited number of samples under (at least implicit) data distribution assumptions.
When the data-generating process is known, the best way to estimate the uncertainty of a model is to use the predictive posterior. However, Bayesian inference is often intractable, and several approximate Bayesian methods have been adapted to neural networks, such as deep ensembles (Lakshminarayanan et al., 2017) or weight averaging (Maddox et al., 2019). On the other hand, the strength of post-hoc methods like temperature scaling is that they apply directly to the unnormalized output of the network and do not require additional training. A comparable Bayesian approach has been developed in Kristiadi et al. (2020), where a Gaussian distribution is applied to the last-layer weights. Bayesian methods typically involve sampling from a high-dimensional posterior (Mattei, 2019), and different methods have been proposed to compute them efficiently (Graves, 2011; Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Maddox et al., 2019).

Notation -- We denote \([n]\coloneqq\{1,\cdots,n\}\); \(\mathbf{1}(A)\) the indicator function of the set \(A\); \(\mathcal{N}(\mathbf{x}|\mu,\Sigma)\) the multivariate Gaussian p.d.f. with mean \(\mu\) and covariance \(\Sigma\).

## 2 Setting

Consider a \(K\)-class classification problem where a neural network classifier is trained on a data set \((\mathbf{x}_{i},y_{i})_{i\in[n]}\in\mathbb{R}^{d}\times[K]\). Without loss of generality, for a given input \(\mathbf{x}\in\mathbb{R}^{d}\) we can write the output of the classifier as a \(K\)-dimensional vector \(\mathbf{z}(\mathbf{x})=W\mathbf{\varphi}(\mathbf{x})\in\mathbb{R}^{K}\), where we have denoted the last-layer features by \(\mathbf{\varphi}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{p}\) and the read-out weights by \(W\in\mathbb{R}^{K\times p}\). We define the _confidence_ of the prediction for class \(k\) as: \[\hat{f}(\mathbf{x},k)\coloneqq\sigma^{(k)}(\mathbf{z}(\mathbf{x}))=\frac{e^{z_{k}(\mathbf{x})}}{\sum_{l\in[K]}e^{z_{l}(\mathbf{x})}}\in(0,1) \tag{1}\] where \(\sigma:\mathbb{R}^{K}\rightarrow(0,1)^{K}\) is the softmax activation function. In short, \(\hat{f}(\mathbf{x},k)\) defines a probability, as estimated by the network, that \(\mathbf{x}\) belongs to class \(k\). For a given \(\mathbf{x}\in\mathbb{R}^{d}\), the final _prediction_ of the model is then given by \(\hat{y}(\mathbf{x})=\arg\max_{k}\hat{f}(\mathbf{x},k)\in[K]\), and the associated prediction confidence is \(\hat{f}(\mathbf{x})=\max_{k}\hat{f}(\mathbf{x},k)=\hat{f}(\mathbf{x},\hat{y}(\mathbf{x}))\in(0,1)\). As is common practice, in what follows we will be mostly interested in the case in which the network is trained by minimizing the empirical risk (ERM) with the cross-entropy loss: \[\ell(\hat{f}(\mathbf{x}),y)=-\log\hat{f}(\mathbf{x},y)=-\sum_{k=1}^{K}\delta(y=k)\log\sigma^{(k)}(W\mathbf{\varphi}(\mathbf{x})),\] although many of the concepts introduced here straightforwardly generalize to other training procedures. The quality of the training is typically assessed by the capacity of the model to generalize on unseen data. This can be quantified by the test misclassification error and the test loss: \[\mathcal{E}_{g}=\mathbb{E}_{\mathbf{x},y}\left[\delta\left(\hat{y}(\mathbf{x})\neq y\right)\right],\quad\mathcal{L}_{g}=-\mathbb{E}_{\mathbf{x},y}\left[\log\hat{f}(\mathbf{x},y)\right]\] These are point performance measures. However, often we are also interested in quantifying the quality of the network prediction confidence.
Different uncertainty metrics exist in the literature, but some of the most common ones are the _calibration, expected calibration error_ (ECE) and _Brier score_ (BS), defined as: \[\begin{cases}\Delta_{p}&=p-\mathbb{P}_{\mathbf{x},y}\left(\hat{y}(\mathbf{x})=y|\hat{f}(\mathbf{x})=p\right)\\ \text{ECE}&=\mathbb{E}_{\mathbf{x},y}\left(|\Delta_{\hat{f}(\mathbf{x})}|\right)\\ BS&=\mathbb{E}_{\mathbf{x},y}\left(\sum_{k=1}^{K}(\hat{f}(\mathbf{x},k)-\delta(y=k))^{2}\right)\end{cases} \tag{2}\] Note that the Brier score is a proper loss, meaning that it is minimized when \(\hat{f}(\mathbf{x},k)\) is the true marginal distribution \(\mathbb{P}(y=k|\mathbf{x})\). This is not the case of the ECE: indeed, the estimator defined as \(\hat{f}(\mathbf{x},k)=\mathbb{P}(y=k)\) has \(0\) ECE but does not correspond to the marginal distribution of \(y\) conditioned on \(\mathbf{x}\) and has suboptimal test error. Finally, we introduce the confidence function with temperature \(T>0\): \[\hat{f}_{T}(\mathbf{x},k)=\sigma^{(k)}(\nicefrac{{W\varphi(\mathbf{x})}}{{T}}). \tag{3}\]

## 3 Expectation consistency calibration

The method proposed in this work acts similarly to the temperature scaling method Guo et al. (2017) discussed in the related work section, with a key difference in how the temperature parameter is chosen. The popular and widely adopted temperature scaling (TS) procedure will also serve as the main benchmark in what follows. Temperature scaling -- Although the score-based confidence measure introduced in (1) might appear natural, numerical evidence suggests that for modern neural network architectures, it tends to be overconfident Guo et al. (2017). In other words, it overestimates the probability of class belonging. To mitigate overconfidence, Guo et al. (2017) introduced a post-training calibration method known as _temperature scaling_ (TS) Minderer et al. (2021); Wang et al. (2021). Temperature scaling consists of rescaling the trained network output \(\mathbf{z}\mapsto\nicefrac{{\mathbf{z}}}{{T}}\) by a positive constant \(T>0\) (the "temperature"), which is then tuned to adjust the prediction confidence. Equivalently, TS can be seen as a re-scaling of the norm of the last-layer weights \(W\). Guo et al. (2017) found that choosing the \(T\) that minimizes the cross-entropy loss on the validation set \(\{(\mathbf{x}_{i},y_{i})_{i\in[n_{val}]}\}\): \[T_{\mathrm{TS}}=\arg\min_{T>0}\sum_{i=1}^{n_{val}}\ell(\hat{f}_{T}(\mathbf{x}_{i}),y_{i}) \tag{4}\] results in a better calibrated rescaled predictor \(\hat{f}_{T_{\mathrm{TS}}}\). To get a feeling for its effect on the confidence, it is instructive to look at the two extreme limits of TS. On one hand, if \(T\ll 1\), the softmax will be dominated by the class with the largest confidence, eventually converging to a hard-thresholding as \(T\to 0^{+}\). This will typically lead to an overconfident predictor. On the other hand, for \(T\gg 1\), the softmax will be less and less sensitive to the trained weights, converging to a uniform vector at \(T\to\infty\). This will typically correspond to an underconfident predictor. Therefore, by tuning \(T\), we can either make a predictor less overconfident (by increasing the temperature, \(T>1\)) or less underconfident (by lowering it, \(T<1\)). Temperature scaling is a specific instance of matrix/vector scaling, where the logits \(z_{i}\) are multiplied by a matrix/vector before the softmax. Despite being more general, matrix and vector scaling have been observed in Guo et al.
(2017) to perform worse than TS. Different variants of TS have been developed. Similarly to vector scaling, class-based temperature scaling Frenkel and Goldberger (2021) computes one temperature per class and finds the best temperature by minimizing the validation ECE instead of the validation loss. While TS can be naturally applied to the last-layer output of neural networks, Kull et al. (2019) has extended TS to more general multi-class classification models. Expectation consistency -- In this work, we introduce a novel calibration method, which we will refer to as _Expectation Consistency_ (EC). As for TS, the starting point is a pre-trained confidence function \(\hat{f}\) which we rescale to \(\hat{f}_{T}\) by introducing a temperature \(T>0\). The key difference resides in the procedure we use to tune the temperature.

Figure 1: Comparison of expected calibration error (ECE) and Brier score (BS) of temperature scaling (TS) and expectation consistency (EC) on various models and data sets. We see very minor differences between the two calibration methods. Given how well TS works in practice we conjecture at least the same for EC.

Instead of minimizing the validation loss (4), we search for a temperature such that the average confidence is equal to the proportion of correct labels on the validation set. In mathematical terms, we define \(T_{\mathrm{EC}}\) such that the following is satisfied: \[\frac{1}{n_{\mathrm{val}}}\sum_{i=1}^{n_{\mathrm{val}}}\hat{f}_{T_{\mathrm{EC}}}(\mathbf{x}_{i})=\frac{1}{n_{\mathrm{val}}}\sum_{i=1}^{n_{\mathrm{val}}}\mathbf{1}(\hat{y}(\mathbf{x}_{i})=y_{i}) \tag{5}\] The intuition behind this choice is the following: a calibrated classifier is such that for all \(p\in(0,1),\Delta_{p}=0\). This condition is not achievable by tuning the single temperature parameter \(T\), so a less strict condition is to enforce it in expectation, \(\mathbb{E}_{\mathbf{x}}\left[\Delta_{\hat{f}(\mathbf{x})}\right]=0\), ensuring that the classifier is calibrated on average. This is equivalent to enforcing the average confidence to be equal to the probability of predicting the correct class on a validation set. Note that the fact that we directly compare to the confidence on the validation set is analogous to what is done in conformal prediction (Papadopoulos et al., 2002) methods to estimate prediction sets (as opposed to the calibration that we are aiming at here). We refer to Algorithm 1 for a pseudo-code of expectation consistency. It is instructive to consider a Bayesian perspective on EC. For the sake of this paragraph, assume that both the training and validation data were independently drawn from a parametric probability distribution \(p(\mathbf{x},y|\theta)\). If we had access to the distribution of the data (but not the specific realization of the parameters \(\theta\)), the Bayes-optimal confidence function would be given by the expectation of \(f_{\star}(\mathbf{x}|\theta)=p(y|\mathbf{x},\theta)\) with respect to the posterior distribution of the weights given the training data \(p(\theta|(\mathbf{x}_{i},y_{i})_{i\in[n]})\). In this case, one would not even need a validation set since the expected test accuracy would be predicted by the uncertainties under the posterior. In Section 5.1 we illustrate this discussion for a concrete data distribution. This expectation consistency property of the Bayes-optimal predictor is known as the _Nishimori condition_ in the information theory and statistical physics literature (Iba, 1999; Measson et al., 2009; Zdeborova and Krzakala, 2016). Therefore, from this perspective, requesting condition (5) to hold can be seen as enforcing the Nishimori conditions for the rescaled confidence function. The Nishimori conditions are also used within the expectation-maximization algorithm for learning hyperparameters Dempster et al. (1977). We describe in Section 5 how to interpret both temperature scaling and expectation consistency as learning procedures for the hyperparameter \(T\).
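In code, condition (5) reduces to a one-dimensional root-finding problem: anticipating the uniqueness argument below, the average confidence decreases monotonically in \(T\), so the equation can be solved by bisection. A minimal NumPy sketch (names are ours):

```python
import numpy as np

def fit_T_ec(logits, y, t_lo=1e-3, t_hi=1e3, iters=60):
    """Solve Eq. (5): find T_EC such that the mean confidence at
    temperature T equals the validation accuracy, by bisection."""
    acc = np.mean(logits.argmax(axis=1) == y)

    def mean_conf(T):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        return p.max(axis=1).mean()

    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if mean_conf(t_mid) > acc:
            t_lo = t_mid     # still too confident: raise the temperature
        else:
            t_hi = t_mid     # now underconfident: lower the temperature
    return 0.5 * (t_lo + t_hi)
```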
The main idea behind the EC method proposed here is that even in the absence of knowledge of the data-generating model, the expectation consistency relation (5) should hold for a calibrated uncertainty quantification method. Note that \(T_{EC}\) exists and is unique. Indeed, the average confidence is a decreasing function of the temperature, converging to one when \(T\to 0^{+}\) and to \(\nicefrac{{1}}{{K}}\) when \(T\rightarrow\infty\). Therefore, there is a unique \(T_{EC}\) that satisfies the constraint (5), and in practice, it can be found by bisection. We refer to Figure 2 for an illustration of the uniqueness of \(T_{EC}\).

Figure 2: Left: The validation loss and average confidence of the model as a function of the temperature \(T\), for a DenseNet121 trained on CIFAR10. The dark dashed line is the accuracy on the validation set. The orange (respectively blue) cross corresponds to \(T_{EC}\) (respectively \(T_{TS}\)). Middle: ECE of the model as a function of \(T\); blue and orange dots respectively correspond to TS and EC. Right: Reliability diagram of a ResNet20 trained on CIFAR10, before and after temperature scaling. The reliability diagram after EC is indistinguishable from the one of TS.

Moreover, note that expectation consistency is more flexible than temperature scaling: in multi-class classification problems, we can fix the temperature so that the average confidence is equal to the top \(N\) accuracy for any \(N\in[K]\). In this work, we focus on the top \(1\) accuracy.

## 4 Experiments on real data

In this section, we present numerical experiments carried out on real data sets and compare the performance of EC and TS. As we will see, both methods yield similar calibration performances in practical scenarios. Experimental setup -- We consider the performance of the calibration methods from Section 3 in image classification tasks. Experiments were conducted on three popular image classification data sets: * SVHN Netzer et al. (2011) is made of colored \(32\times 32\) labelled digit images. Train/validation/test set sizes are 65931/325/26032. * CIFAR10 and CIFAR100 data sets Krizhevsky (2009), consisting of \(32\times 32\) colored images from 10/100 classes (dog, cat, plane, etc.), respectively. Train/validation/test set sizes are 45000/5000/10000 images for CIFAR10, 50000/5000/5000 for CIFAR100. We consider different neural network architectures adapted to image classification tasks: ResNets (He et al., 2016), DenseNets (Huang et al., 2017), VGG (Simonyan and Zisserman, 2014) and RepVGG (Ding et al., 2021). For CIFAR100, pre-trained models available online were employed. More details on the training procedure are available in Appendix A. Results -- We refer to Table 1 for a comparison of TS and EC on the various data sets and models discussed above. Curiously, we observe that both EC and TS yield very similar temperatures across the different tasks and architectures, implying a similar ECE and Brier score. In particular, note that both methods give \(T>1\), consistent with the fact that the original networks were overconfident.
Therefore, as expected, both methods improve the calibration of the classifiers. The right panel of Figure 2 shows the reliability diagram of the ResNet20 trained on CIFAR10: we observe that before applying TS and EC, the accuracy is lower than the confidence. In other words, the model is overconfident, and both TS and EC improve its calibration. Note that both methods improve the Brier score and yield very similar results. From the computational cost perspective, EC is as efficient to run as TS and requires only a few lines of code; see the GitHub repository where we provide the code to reproduce the experiments discussed here. However, we believe expectation consistency is a more principled calibration method, as it constrains the confidence of the model to correspond to the accuracy and has a natural Bayesian interpretation. Moreover, as we will discuss in Section 5, we can derive explicit theoretical results for EC.

Figure 3: Accuracy of a ResNet20 model (Left), the temperature returned by TS and EC (Middle), and the ECE of the model (Right) as a function of the size of the training set \(\alpha=\nicefrac{{n_{\text{train}}}}{{50000}}\). The model is trained with the same hyperparameters as in Figure 1. Again we see that the two methods are comparable even at largely different sample sizes.

Our experiments suggest that the similarity between TS and EC is independent of the accuracy of the model. Indeed, in Figure 3, we observe the accuracy and ECE of a ResNet model trained on different amounts of data. As expected, the accuracy of the model increases with the amount of training data. We observe in the middle and right panels that the temperatures and ECE obtained from both methods are extremely similar, independently of the accuracy of the model. Finally, we plot in the middle panel of Figure 2 the ECE as a function of the temperature and observe that neither \(T_{TS}\) nor \(T_{EC}\) is close to the minimum of the ECE. However, as we have discussed in Section 3, ECE is only one uncertainty quantification metric and is not a proper loss, so we do not wish to optimize the temperature for this metric in particular.

## 5 Theoretical analysis of the expectation consistency algorithm

As we have seen in Section 4, our experiments with real data and neural network models suggest that despite their different nature, EC and TS achieve a similar calibration performance across different architectures and data sets. In this section, we investigate EC and TS in specific settings where we can derive theoretical guarantees on their calibration properties. For concreteness, in the examples that follow we will focus on binary classification problems for which, without loss of generality, we can assume \(y\in\{-1,+1\}\). In this encoding, the softmax function is equivalent to the logit \(\sigma(t)\coloneqq(1+e^{-t})^{-1}\), and the hard-max is given by the sign function. Further, we assume that both the training \((\mathbf{x}_{i},y_{i})_{i\in[n]}\) and validation set \((\mathbf{x}_{i},y_{i})_{i\in[n_{val}]}\) were independently drawn from the following data generative model: \[f_{*}(\mathbf{x})\coloneqq\mathbb{P}(y=1|\mathbf{x})=\sigma_{*}\left(\frac{\mathbf{w}_{*}^{\top}\mathbf{x}}{T_{*}}\right),\qquad\mathbf{x}\sim\mathcal{N}(\mathbf{0},\nicefrac{{1}}{{d}}\mathbf{I}_{d}),\quad\mathbf{w}_{*}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}) \tag{6}\] with \(\sigma_{\star}:\mathbb{R}\to(0,1)\) an activation function and \(T_{*}>0\) explicitly parametrizing the norm of the weights.
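The generative model (6) is straightforward to simulate. A minimal sketch; the function name and seed handling are our own choices:

```python
import numpy as np

def sample_data(n, d, sigma_star, T_star, seed=0):
    """Draw (X, y, w_star) from the synthetic model (6): Gaussian inputs
    x ~ N(0, I_d / d), teacher w_star ~ N(0, I_d), and binary labels with
    P(y = +1 | x) = sigma_star(w_star @ x / T_star)."""
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d)
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    y = np.where(rng.random(n) < sigma_star(X @ w_star / T_star), 1, -1)
    return X, y, w_star

# Example: logit teacher at T_star = 2, as in the well-specified setting.
X, y, w_star = sample_data(n=1000, d=200,
                           sigma_star=lambda z: 1 / (1 + np.exp(-z)),
                           T_star=2.0)
```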
First, in Section 5.1 we provide a Bayesian interpretation of both the TS and EC methods, in an example where \(T_{\mathrm{TS}}=T_{\mathrm{EC}}=T_{\star}\). Next, in Section 5.2 we analyze a misspecified empirical risk minimization setting where they yield different results. Finally, we discuss in Section 5.3 one case in which EC consistently outperforms TS.

### Relation with Bayesian estimation

Consider a Bayesian inference problem: given the training data \(\mathcal{D}\coloneqq\{(\mathbf{x}_{i},y_{i})_{i\in[n]}\}\), what is the predictor that maximizes the accuracy? If the statistician had complete access to the data generating process (6), this would be given by integrating the likelihood of the data over the posterior distribution of the weights given the data: \[f_{\mathrm{bo}}(\mathbf{x})\coloneqq\mathbb{P}(y=1|\mathcal{D},\mathbf{x})=\int\mathrm{d}\mathbf{w}\;\sigma_{*}\left(\frac{\mathbf{w}^{\top}\mathbf{x}}{T_{*}}\right)p(\mathbf{w}|\mathcal{D},T_{*})\] where the posterior distribution is explicitly given by: \[p(\mathbf{w}|\mathcal{D},T_{*})\propto\mathcal{N}(\mathbf{w}|0,\mathbf{I}_{d})\prod_{i\in[n]}\sigma_{*}\left(y_{i}\frac{\mathbf{w}^{\top}\mathbf{x}_{i}}{T_{\star}}\right). \tag{7}\] The Bayes-optimal predictor above is well calibrated (Clarte et al., 2022), and consequently satisfies the expectation consistency condition: its average confidence equates to its accuracy. Consider now a scenario where the statistician only has _partial_ information about the data-generating process: she knows the prior and likelihood but does not have access to the true temperature \(T_{\star}\). In this case, she could still write a posterior distribution but would need to estimate the temperature \(T\) from the data. This can be done by finding the \(T\) that minimizes the classification error, or equivalently the generalization loss; equivalently, this corresponds to expectation maximization as discussed e.g. in (Decelle et al., 2011; Krzakala et al., 2012). This estimation of the temperature would lead to \(T=T_{*}\) and recover the Bayes-optimal estimator \(f_{\mathrm{bo}}\). Hence, in the _well-specified_ Bayesian setting, doing temperature scaling amounts to expectation consistency, providing a very natural interpretation of both the temperature scaling and expectation consistency methods in a Bayesian framework. Note that in this paper we are concerned with frequentist estimators trained via empirical risk minimization. In that case, even in the well-specified setting, neither TS nor EC will recover the correct temperature \(T_{*}\) in the high-dimensional limit. This inability to recover \(T_{*}\) comes from the fact that we are no longer sampling from a posterior but instead consider a point estimate, and we do not have enough samples to be in the regime where point estimators are consistent.

### Misspecified empirical risk minimization

Consider now the case in which the statistician only has access to the training data \(\mathcal{D}\), with no knowledge of the underlying generative model.
A popular classifier for binary classification in this case is logistic regression, for which: \[\hat{f}_{\mathrm{erm}}(\mathbf{x})=\sigma(\hat{\mathbf{w}}^{\top}\mathbf{x}) \tag{8}\] and the weights are obtained by minimizing the empirical risk over the training data \(\hat{\mathbf{w}}=\mathrm{argmin}\,\hat{\mathcal{R}}_{n}(\mathbf{w})\) where: \[\hat{\mathcal{R}}_{n}(\mathbf{w})=-\sum_{i\in[n]}\log\sigma(y_{i}\mathbf{w}^{\top}\mathbf{x}_{i})+\nicefrac{{\lambda}}{{2}}\|\mathbf{w}\|^{2} \tag{9}\] and we remind that \(\sigma\) is the sigmoid/logit function. In this setting, the calibration is given by \(\Delta_{\ell}=\ell-\mathbb{E}_{\mathbf{x}}\left[f_{*}(\mathbf{x})|\hat{f}_{\mathrm{erm}}(\mathbf{x})=\ell\right]\), and the ECE by \(\mathbb{E}_{\mathbf{x}}\left[\left|\Delta_{\hat{f}_{\mathrm{erm}}(\mathbf{x})}\right|\right]\). Note that logistic regression can also be seen as the maximum likelihood estimator for the logit model, which given the data model (6) for \(\sigma_{*}\neq\sigma\) is misspecified. Sur and Candes (2019) have shown that even in the well-specified case \(\sigma_{\star}=\sigma\), non-regularized logistic regression yields a biased estimator of \(\mathbf{w}_{\star}\) in the high-dimensional limit where \(n,d\to\infty\) at a proportional rate \(\alpha=\nicefrac{{n}}{{d}}\), which Bai et al. (2021) has shown to be overconfident. Clarte et al. (2022) characterized the calibration as a function of the regularization strength and the number of samples, and has shown that overconfidence can be mitigated by properly regularizing. The goal in this section is to leverage these results on high-dimensional logistic regression in order to provide theoretical results on the calibration properties of TS and EC. In particular, we will be interested in comparing the following three choices of data likelihood function \(\sigma_{\star}\): \[\begin{cases}\sigma_{\text{logit}}(z)=\frac{1}{1+e^{-z}}\\ \sigma_{\text{affine}}(z)=0\text{ if }z<-1,\ 1\text{ if }z>1,\ \frac{z+1}{2}\text{ else}\\ \sigma_{\text{constant}}(z)=0\text{ if }z<-1,\ 1\text{ if }z>1,\ \nicefrac{{1}}{{2}}\text{ else}\end{cases} \tag{10}\] Asymptotic uncertainty metrics -- The starting point of the analysis is to note that the uncertainty metrics of interest (2) only depend on the weights through the pre-activations \((\mathbf{w}_{\star}^{\top}\mathbf{x},\mathbf{\hat{w}}^{\top}\mathbf{x})\) on a test point \(\mathbf{x}\). Since the distribution of the inputs is Gaussian, the joint statistics of the pre-activations is Gaussian: \[(\mathbf{w}_{\star}^{\top}\mathbf{x},\mathbf{\hat{w}}^{\top}\mathbf{x})\sim\mathcal{N}\left(\mathbf{0}_{2},\begin{bmatrix}\nicefrac{{1}}{{d}}||\mathbf{w}_{\star}||_{2}^{2}&\nicefrac{{1}}{{d}}\mathbf{w}_{\star}^{\top}\hat{\mathbf{w}}_{\text{erm}}\\ \nicefrac{{1}}{{d}}\mathbf{w}_{\star}^{\top}\hat{\mathbf{w}}_{\text{erm}}&\nicefrac{{1}}{{d}}||\hat{\mathbf{w}}_{\text{erm}}||_{2}^{2}\end{bmatrix}\right)\] As discussed above, different recent works (Sur and Candes, 2019; Bai et al., 2021; Clarte et al., 2022b) have derived exact asymptotic formulas for these statistics at different levels of generality for logistic regression. In particular, the following theorem from Clarte et al. (2022b), which considers a general misspecified model, will be used for the analysis:
**Theorem 1** (Thm. 3.2 from Clarte et al. (2022b)).: _Consider the logit classifier (8) trained by minimizing the empirical risk (9) on a data set \((\mathbf{x}_{i},y_{i})_{i\in[n]}\) independently sampled from model (6). Then, in the high-dimensional limit when \(n,d\to\infty\) at fixed \(\alpha=\nicefrac{{n}}{{d}}\):_ \[(\nicefrac{{1}}{{d}}\mathbf{w}_{\star}^{\top}\hat{\mathbf{w}}_{\rm{erm}},\nicefrac{{1}}{{d}}||\hat{\mathbf{w}}_{\rm{erm}}||_{2}^{2})\xrightarrow[d\to\infty]{}(m,q) \tag{11}\] _where \((m,q)\in\mathbb{R}^{2}_{+}\) are explicitly given by the solution of a set of low-dimensional self-consistent equations depending only on \((\alpha,\lambda,\sigma,\sigma_{\star})\), which for the sake of space are discussed in Appendix B._ Leveraging Thm. 1, we can derive an exact characterization of the asymptotic limit of the uncertainty metrics defined in (2). **Proposition 1**.: _Under the same assumptions as Theorem 1, the asymptotic limit of the uncertainty metrics defined in (2) is given by:_ \[\begin{cases}\Delta_{\ell}(m,q)&=\ell-\mathcal{Z}_{\star}(1,\nicefrac{{m}}{{q}}\sigma^{-1}(\ell),1-\nicefrac{{m^{2}}}{{q}})\\ {\rm{ECE}}(m,q)&=\int_{0}^{\infty}{\rm{d}}z|\Delta_{\sigma(z)}(m,q)|\mathcal{N}(z|0,q)\end{cases}\] _where \((m,q)\in\mathbb{R}^{2}_{+}\) are the asymptotic limits of the correlation functions in (11) and_ \[\mathcal{Z}_{\star}(y,\omega,V)=\mathbb{E}_{\xi\sim\mathcal{N}(\omega,V)}\left[\sigma_{\star}\left(\nicefrac{{y\xi}}{{T_{\star}}}\right)\right] \tag{12}\] The proof of this result is given in Appendix B. Proposition 1 provides us with all we need to fully characterize the calibration properties of TS and EC in our setting. In the next paragraphs, we discuss its implications. In practice, the \(\ell_{2}\) regularization parameter \(\lambda\) in the empirical risk (9) is optimized by cross-validation. Clarte et al. (2022b, a) has shown that appropriately regularizing the risk not only improves the prediction accuracy but also the calibration and ECE of the logistic classifier. In particular, it was shown that cross-validating on the loss function yields different results from cross-validation on the misclassification error, with a larger difference arising in the case of misspecified models. Curiously, Clarte et al. (2022a) has shown that in this case, good performance and calibration can be achieved by combining an \(\ell_{2}\) penalty with TS. In the following, we discuss how this compares with EC. Note that the exact asymptotic characterization from Thm. 1 allows us to bypass cross-validation and find the optimal \(\lambda\) by directly optimizing the low-dimensional formulas. We thus define \(\lambda_{\rm{error}}\) (respectively \(\lambda_{\rm{loss}}\)) as the value of \(\lambda\) such that \(\mathbf{w}_{\rm{erm}}\) yields the lowest test misclassification error (respectively test loss).

### Expectation consistency outperforms temperature scaling

In Section 4, we have numerically observed that EC and TS yield almost the same temperature and thus have similar performance in terms of different uncertainty quantification metrics for different architectures trained on real data sets.
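Before turning to the synthetic-data comparison, note that the quantities in Proposition 1 are easy to evaluate numerically once \((m,q)\) are known. A minimal sketch using Gauss-Hermite quadrature for the Gaussian average; the self-consistent equations producing \((m,q)\) are not reproduced here, and all names are ours:

```python
import numpy as np

# The three data likelihoods of Eq. (10).
sigma_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
sigma_affine = lambda z: np.clip((z + 1.0) / 2.0, 0.0, 1.0)
sigma_constant = lambda z: np.where(z < -1, 0.0, np.where(z > 1, 1.0, 0.5))

def Z_star(y, omega, V, sigma_star, T_star=1.0, nodes=100):
    """Z_*(y, omega, V) = E_{xi ~ N(omega, V)}[sigma_star(y xi / T_star)],
    evaluated with probabilists' Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite_e.hermegauss(nodes)   # weight e^{-x^2/2}
    xi = omega + np.sqrt(V) * x
    return np.sum(w * sigma_star(y * xi / T_star)) / np.sqrt(2.0 * np.pi)

def Delta(ell, m, q, sigma_star, T_star=1.0):
    """Asymptotic calibration Delta_ell(m, q) of Proposition 1;
    sigma^{-1} is the inverse logit."""
    omega = (m / q) * np.log(ell / (1.0 - ell))
    V = 1.0 - m**2 / q
    return ell - Z_star(1, omega, V, sigma_star, T_star)
```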
Figure 5: Relative difference \(\nicefrac{{|T_{EC}-T_{TS}|}}{{T_{TS}}}\) as a function of the sampling ratio \(\alpha\) with three different \(\sigma_{\star}\), and \(\lambda=10^{-4}\). We observe that when \(\sigma_{\star}\) differs more from \(\sigma\), EC and TS yield different results. Points are simulations done at \(d=200\).

Figure 5 shows the relative difference \(\delta T=\nicefrac{{|T_{TS}-T_{EC}|}}{{T_{TS}}}\) between the two methods for logistic regression on the synthetic data model (6) for the different choices of target activation \(\sigma_{\star}\in\{\sigma_{\rm{logit}},\sigma_{\rm{affine}},\sigma_{\rm{constant}}\}\) defined in (10). Contrary to the real data scenario in Section 4, we observe a significant difference between the two methods for \(\sigma_{\star}\in\{\sigma_{\rm{affine}},\sigma_{\rm{constant}}\}\). For instance, for the piece-wise constant function \(\sigma_{\star}=\sigma_{\rm{constant}}\), \(\delta T\) is a non-decreasing function of the sampling ratio \(\alpha\), and is around \(30\%\) at \(\alpha=20\). Figure 4 shows that expectation consistency yields a lower ECE than temperature scaling in all the settings considered in Section 5. On one hand, the effect is small in the well-specified case where the target and model likelihoods are the same: the ECE of temperature scaling is higher by around \(0.01\%\). This is quite intuitive from the discussion in Section 5.1, since in this case, we are closer to the Bayesian setting where both methods were shown to coincide. On the other hand, this difference increases in the misspecified setting, suggesting that model misspecification plays an important role in these calibration methods. In particular, note that in all cases considered here, EC has a lower ECE than TS for all three regularizations considered: \(\lambda=10^{-4},\lambda_{\rm{error}},\lambda_{\rm{loss}}\). Figure 6 shows the joint probability density function of the variables \((f_{\star}(\mathbf{x}),\hat{f}_{\mathrm{erm}}(\mathbf{x}))\in[0,1]^{2}\). In particular, we show in white-dashed lines the conditional mean \(\mathbb{E}\left[f_{\star}(\mathbf{x})|\hat{f}_{\mathrm{erm}}(\mathbf{x})\right]\), which corresponds to the accuracy-confidence chart in Figure 2. As in the real data case, we observe that the ERM estimator is consistently overconfident, i.e. \(\forall\ell\geqslant\nicefrac{{1}}{{2}},\Delta_{\ell}\geqslant 0\). Moreover, we see that after TS and EC, the conditional mean gets closer to the diagonal (red curve), implying that the model is better calibrated. The phenomenology of the simple data model seems to correspond to what we observe with real data and suggests that expectation consistency is a better approach to calibration. Interpretation of the results -- Temperature scaling corresponds to rescaling the outputs of the network by minimizing the validation loss. In the literature, the cross-entropy loss is one of the most widespread choices, both for training and for measuring uncertainty scores (with the softmax). From a Bayesian perspective, minimizing the cross-entropy loss corresponds to maximizing the likelihood under the assumption that the data has been generated from a softmax (a.k.a. multinomial logit) model. Hence, the underlying assumption behind temperature scaling is that the labels are generated using a softmax likelihood. Therefore, we expect it to perform better when this assumption is met. Indeed, our experiments in Section 5 confirm this intuition. In the case where the ground truth model is indeed given by a logit, TS performs well and is close to EC. However, in the misspecified case, where this assumption does not hold, TS performs worse than EC. For completeness, let us note that in most settings we tried, EC was equivalent to or outperformed TS.
We searched for a case where the opposite happens and found one where TS is very slightly better than EC, which we present in Appendix C.

## 6 Conclusion and future work

In this work, we introduced _Expectation Consistency_, a new post-training calibration method for neural networks. We have shown that EC is close to temperature scaling across different image classification tasks, giving almost the same expected calibration error and Brier score, while having comparable computational cost. We provided an analysis of the asymptotic properties of both methods in a synthetic setting where data is generated by a ground truth model, showing that while EC and TS yield the same performance for well-specified models, EC provides a better and more principled calibration method under model misspecification. Our experiments on simple data models showed that when there is a discrepancy between our linear model and the true data model, EC performs better than TS. However, our experiments on real data show a very similar performance across different architectures, data sets and overall model accuracy. In future work, we aim to understand better why both methods are so similar in practical scenarios.

## 7 Acknowledgements

We acknowledge funding from the ERC under the European Union's Horizon 2020 Research and Innovation Program Grant Agreement 714608-SMiLe, the Swiss National Science Foundation grant SNFS OperaGOST, \(200021\_200390\) and the _Choose France - CNRS AI Rising Talents_ program. This research was supported by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602).
2305.17879
Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking
Static deep neural network (DNN) watermarking techniques typically employ irreversible methods to embed watermarks into the DNN model weights. However, this approach causes permanent damage to the watermarked model and fails to meet the requirements of integrity authentication. Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity, hindering their practical adoption. In this paper, we propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM). Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding. Furthermore, we design two schemes to address the challenges of integrity protection and legitimate authentication for DNNs. Through simulation results on training loss and classification accuracy, we demonstrate the feasibility and effectiveness of our proposed schemes, highlighting their superior adaptability compared to existing methods.
Junren Qin, Shanxiang Lyu, Fan Yang, Jiarui Deng, Zhihua Xia, Xiaochun Cao
2023-05-29T04:39:17Z
http://arxiv.org/abs/2305.17879v2
# Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking

###### Abstract

Static deep neural network (DNN) watermarking techniques typically employ irreversible methods to embed watermarks into the DNN model weights. However, this approach causes permanent damage to the watermarked model and fails to meet the requirements of integrity authentication. Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity, hindering their practical adoption. In this paper, we propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM). Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding. Furthermore, we design two schemes to address the challenges of integrity protection and legitimate authentication for DNNs. Through simulation results on training loss and classification accuracy, we demonstrate the feasibility and effectiveness of our proposed schemes, highlighting their superior adaptability compared to existing methods.

_Index Terms_ -- deep neural network (DNN), watermarking, reversible data hiding (RDH).

## 1 Introduction

Deep neural networks (DNNs) have gained significant popularity due to their remarkable performance and have found applications in various fields [1, 2, 3, 4, 5, 6, 7]. However, the increasing use of deep learning-based systems also poses a risk of unauthorized usage or modification of DNN models without proper attribution to the original authors. To address this concern, watermarking techniques for DNNs have emerged as an important step in protecting the intellectual property embedded in these models [8]. Watermarking provides an additional layer of security that allows the original authors to prove ownership of their models, safeguard them from unauthorized access and use, track their provenance, ensure integrity, facilitate versioning, and identify malicious models [9]. Deep neural network (DNN) watermarking techniques can be broadly categorized into static and dynamic watermarking approaches [10, 11], depending on where the watermark can be read from. In static watermarking methods (e.g., [12, 13, 14, 15]), the watermark can be directly extracted from the network weights. During the training phase, these weights are determined and typically represented in floating-point formats, which differ from the popular unsigned-integer format commonly used for images. Static watermarking techniques aim to embed the watermark directly into the weights of the DNN, ensuring that the ownership and integrity of the model can be verified by examining these weight values. Static watermarking is particularly relevant in scenarios where the protection of the model's weights and ownership verification are of utmost importance. On the other hand, dynamic watermarking techniques (e.g., [16, 17]) rely on the modification of the network's behavior when provided with specific inputs, resulting in a visible watermark in the model's output. By carefully designing the input signals or modifying the network's architecture, dynamic watermarking allows for the extraction of the watermark through the observation of specific output patterns. Dynamic watermarking techniques offer a more flexible approach by embedding the watermark in the network's behavior rather than its weights.
This enables the watermark to be extracted from the model's output, making it suitable for applications where the focus is on detecting unauthorized usage or tracking the dissemination of the model. An intriguing research direction is the development of reversible watermarking schemes for DNNs. Reversible watermarking is a type of digital watermarking that enables content owners to protect their digital data without causing any permanent modifications [18, 19]. It allows embedded information to be retrieved from the host object without any data loss or damage. Reversible watermarking algorithms have been successfully applied to the unsigned integer format commonly used in images, including techniques such as difference expansion (DE) [20], prediction-error expansion (PEE) [21, 22], and histogram shifting (HS) [23]. Considering that the weights in DNNs can be treated as conventional multimedia objects, reversible watermarking of DNNs can be seen as an extension of static DNN watermarking, where reversible watermarks are embedded within the weights. However, existing approaches, such as the HS-based method proposed by Guan _et al._[24] for watermarking convolutional neural networks (CNNs), face challenges when dealing with floating-point weights and suffer from degradation when the host exhibits a uniform or uniform-like distribution. Motivated by these challenges, we propose a novel reversible watermarking scheme specifically tailored for floating-point weights in DNNs. Our contributions, along with their highlights, are summarized as follows: * First, we design a simple yet efficient reversible watermarking algorithm, named reversible quantization index modulation (R-QIM), which improves upon the widely used quantization index modulation (QIM) [25, 26, 27, 28]. R-QIM allows for reversible embedding of watermarks in floating-point or real-valued objects, resembling a lattice quantizer that maps input values from a large continuous set to a countable smaller set with a finite number of elements. While QIM is naturally lossy, we leverage the availability of the cover object during the watermark embedding process to add a scaled version of the difference vector back to the quantized output values, enabling reversibility. * Second, we demonstrate how R-QIM can be deployed in DNN watermarking to achieve integrity protection and legitimacy authentication. For integrity protection, our scheme allows the owner or a trusted third-party institution to verify the occurrence of data tampering, regardless of noiseless or known noisy channel conditions. This addresses the limitations of existing schemes that are unavailable in noisy channel transmission. For legitimacy authentication, our proposed scheme provides an effective means to differentiate between legal and illegal use of target DNNs. This added layer of protection helps deter attackers and facilitates the identification of individuals responsible for unauthorized use. Additionally, it provides assurance that a given DNN is authentic, ensuring the integrity of the produced data. * Third, we provide theoretical justifications and conduct numerical simulations to showcase the advantages of R-QIM. We analyze the signal-to-watermark ratio (SWR) of R-QIM, which measures capacity and fidelity, and compare the training loss and classification accuracy of R-QIM with the HS-based method [24] by analyzing the weights of multi-layer perceptron (MLP) and visual geometry group (VGG) models. The remainder of the paper is organized as follows. 
Section 2 introduces DNN watermarking models and existing algorithms. Sections 3 and 4 present R-QIM along with theoretical analyses and its applications in DNN watermarking. Section 5 provides simulation results, and Section 6 concludes the paper. ## 2 Preliminaries ### _Reversible DNN Watermarking Basics_ Reversible deep neural network (DNN) watermarking involves the embedding of a watermark into the weights of a DNN model in a manner that allows for its extraction without any permanent modifications or loss of information. This reversible embedding process is analogous to static DNN watermarking, where the watermark is embedded directly into the network weights during the training phase [10, 12]. However, reversible watermarking techniques ensure that the original weights can be perfectly recovered after the watermark is extracted. The mathematical model for reversible DNN watermarking can be described as follows. Let \(\mathbf{W}\) denote the set of all weights in a trained DNN model. During watermark embedding, specific weights from \(\mathbf{W}\) are selected based on a location sequence \(\mathbf{c}\) guided by a clue or key \(cl\), resulting in a cover sequence \(\mathbf{s}\). The information sequence \(\mathbf{m}\) is then embedded into \(\mathbf{s}\) using a carefully designed embedding function \(\text{Emb}(\cdot)\), resulting in the watermarked sequence \(\mathbf{s}_{w}\). To ensure correct extraction and recovery of the watermark, the following triplet of operations is applied: \[\begin{cases}\mathbf{s}_{w}=\text{Emb}(\mathbf{s},\mathbf{m})\\ \hat{\mathbf{m}}=\text{Ext}(\mathbf{s}_{w}+\mathbf{n})=\text{Ext}(\mathbf{y}) \\ \hat{\mathbf{s}}=\text{Rec}(\mathbf{s}_{w}+\mathbf{n})=\text{Rec}(\mathbf{y}) \end{cases} \tag{1}\] where \(\text{Emb}(\cdot)\) represents the embedding function that embeds the information sequence \(\mathbf{m}\) into the cover sequence \(\mathbf{s}\) to produce the watermarked sequence \(\mathbf{s}_{w}\). \(\text{Ext}(\cdot)\) and \(\text{Rec}(\cdot)\) denote the extraction and recovery functions, respectively. \(\mathbf{n}\) represents the additive noise present in the received watermarked sequence \(\mathbf{y}=\mathbf{s}_{w}+\mathbf{n}\). While reversible DNN watermarking shares similarities with reversible image watermarking, there are notable differences in terms of the cover format, robustness, and fidelity requirements. Table I summarizes the key differences between reversible image watermarking and reversible DNN watermarking. Reversible DNN watermarking operates on floating-point weights, which differ from the unsigned integers typically used in reversible image watermarking. The fidelity requirement in reversible DNN watermarking pertains to the effectiveness of the host network after watermark embedding, rather than the visual quality of the host signal as in image watermarking. Additionally, reversible DNN watermarking should have the capacity to embed a large amount of data or information into the network weights. Security is crucial to prevent unauthorized parties from accessing, reading, or modifying the watermark. Lastly, efficiency is important to ensure faster embedding and extraction processes for DNN watermarking algorithms. By understanding the unique characteristics and requirements of reversible DNN watermarking, we can develop tailored algorithms and techniques that enable the embedding, extraction, and recovery of watermarks while preserving the integrity and effectiveness of the DNN models. 
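The triplet in Eq. (1) maps naturally onto a small programming interface. The following Python sketch is purely illustrative — the names and types are ours, not the paper's — but it fixes the contract that any concrete reversible scheme (such as R-QIM below) must satisfy:

```python
from typing import Protocol
import numpy as np

class ReversibleScheme(Protocol):
    """Contract mirroring the (Emb, Ext, Rec) triplet of Eq. (1)."""
    def emb(self, s: np.ndarray, m: np.ndarray) -> np.ndarray: ...
    def ext(self, y: np.ndarray) -> np.ndarray: ...
    def rec(self, y: np.ndarray) -> np.ndarray: ...

def roundtrip_ok(scheme: ReversibleScheme, s: np.ndarray, m: np.ndarray) -> bool:
    # Over a noiseless channel (n = 0), extraction and recovery must be exact.
    y = scheme.emb(s, m)
    return np.array_equal(scheme.ext(y), m) and np.allclose(scheme.rec(y), s)
```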
### _Existing Methods_

#### 2.2.1 HS

HS (Histogram Shifting) is a reversible watermarking algorithm originally developed for images, but it has been adapted for use in CNNs [24]. The method consists of three main parts: host sequence construction, data preprocessing, and the watermarking algorithm. In the host sequence construction, a host matrix is constructed from a convolutional layer in the CNN. This step is not directly relevant to this paper and will not be discussed further. In the data preprocessing step, each weight is defined as follows: \[\omega=\pm 0.\underbrace{00...0}_{p\text{ digits}}n_{1}n_{2}...n_{c}n_{c+1}...n_{q}, \tag{2}\] Here, \(q\) represents the total length of digits for the weight. To meet the requirements of an integer host, the consecutive non-zero digit pairs \((n_{c},n_{c+1})\) in \(\omega\), corresponding to the minimum entropy, are chosen as the significant digit pairs to construct the host sequence. These chosen pairs are then adjusted by adding an adjustable integer parameter \(V\) to ensure they fall within the appropriate range of \([-99,99]\). For the watermarking algorithm, the HS [23] scheme is employed as the embedding and extraction strategy. The 1-bit HS embedding process for the watermark \(m\) can be described as follows: \[\omega^{{}^{\prime}}=\begin{cases}\omega+m,&\omega=\Omega_{\max}\\ \omega+1,&\omega\in(\Omega_{\max},\Omega_{\min})\\ \omega,&\omega\notin[\Omega_{\max},\Omega_{\min})\end{cases}. \tag{3}\] The histogram shifting operation in this 1-bit embedding process is depicted in Fig. 1(a), where the bins greater than \(\Omega_{\max}\) are shifted to the right by a fixed \(\Delta=1\) to create a vacant bin for embedding. The watermark \(m\) with a uniform distribution is then embedded into the bin equal to \(\Omega_{\max}\) using HS. This divides the entire cover into three regions, as depicted in Fig. 1(c): region i for covers smaller than \(\Omega_{\max}\), region ii for covers equal to \(\Omega_{\max}\), and region iii for covers larger than \(\Omega_{\max}\). The mapping rule for \(\omega\) changes depending on the bit, as shown in Fig. 1(b). Using the same process of host sequence construction and data preprocessing, the extraction process can be described as follows: \[\hat{m}=\begin{cases}1,&\omega^{{}^{\prime}}=\Omega_{\max}+1\\ 0,&\omega^{{}^{\prime}}=\Omega_{\max}\end{cases}, \tag{4}\] and the recovery process as: \[\hat{\omega}=\begin{cases}\omega^{{}^{\prime}}-1,&\omega\in(\Omega_{\max},\Omega_{\min})\\ \omega^{{}^{\prime}},&\omega\notin[\Omega_{\max},\Omega_{\min})\end{cases}. \tag{5}\]

#### 2.2.2 QIM

QIM (Quantization Index Modulation) is a widely used method for non-reversible watermarking [25, 26, 27, 28]. Its rationale can be explained using the example shown in Fig. 2(a). The circle and cross positions in Fig. 2(a) represent two sets, \(\Lambda_{0}\) and \(\Lambda_{1}\), arranged alternately. Given a host or cover sample \(s\in\mathbb{R}\) and a one-bit message \(m\in\{0,1\}\), the watermarked value is obtained by moving \(s\) to the nearest point in \(\Lambda_{0}\) when \(m=0\), and to the nearest point in \(\Lambda_{1}\) when \(m=1\). Let \(Q_{\Delta}(s)=\Delta\lfloor s/\Delta\rfloor\) be a quantization function with \(\Delta\) as the step-size parameter.
The embedding process can be described as follows: \[s_{\mathrm{QIM}}\triangleq Q_{m}(s)=Q_{\Delta}(s-d_{m})+d_{m},\ m\in\{0,1\}, \tag{6}\] where \(d_{0}=-(\Delta/4)\), \(d_{1}=\Delta/4\), \(\Lambda_{0}=d_{0}+\Delta\mathbb{Z}\), and \(\Lambda_{1}=d_{1}+\Delta\mathbb{Z}\). Assuming that the transmitted \(s_{\mathrm{QIM}}\) has been contaminated by an additive noise term \(n\), the received signal is given by \(y=s_{\mathrm{QIM}}+n\). A minimum distance decoder is used to extract the watermark as follows: \[\hat{m}=\mathop{arg\min}_{m\in\{0,1\}}\left[\min_{s\in\Lambda_{m}}|y-s|\right]. \tag{7}\] If \(|n|<\Delta/4\), the estimated value \(\hat{m}\) is correct. In terms of embedding distortion, as shown in Fig. 3(a), the maximum error caused by embedding is \(\Delta/2\). If the quantization errors are uniformly distributed over \([-(\Delta/2),(\Delta/2)]\), the mean-squared embedding distortion is given by \(D=\Delta^{2}/12\). Considering the capacity, QIM achieves an approximate rate of 1 bit per sample, which means each sample of the host cover can carry 1 bit of watermark information.

| Features | Reversible image watermarking | Reversible DNN watermarking |
| --- | --- | --- |
| Format of covers | Unsigned integers | Floating-point numbers |
| Fidelity | Higher quality of the host signal after watermark embedding | Higher effectiveness of the host network after watermark embedding |
| Capacity | Ability to embed a watermark with a massive amount of data/information (common to both) | |
| Security | Ability to remain secret from unauthorized parties accessing, reading, and modifying the watermark (common to both) | |
| Efficiency | Higher speed for the embedding and extraction process of the watermarking algorithm (common to both) | |

TABLE I: Comparison of reversible image watermarking and reversible DNN watermarking.

Fig. 1: Illustration of the HS algorithm.

## 3 The Proposed Method

In this section, we introduce a QIM-based RDH (Reversible Data Hiding) algorithm called reversible QIM (R-QIM) and highlight its advantages compared to the method proposed in [24].

### _R-QIM_

We observe that there exists a quantization error \(e\) between the cover vector \(s\) and its quantized watermarked vector \(Q_{m}(s)\), given by: \[e=s-Q_{m}(s). \tag{8}\] If we only use \(Q_{m}(s)\) as the watermarked vector, the information about \(e\) is lost. However, QIM has a certain error tolerance capability. If we consider \(e\) as "beneficial noise" and add it back to \(Q_{m}(s)\), we can maintain the information about the cover \(s\), making the scheme reversible. The challenge lies in properly scaling the "beneficial noise" to meet specific requirements. First, the scaled \(e\) should be small enough to stay within the correct decoding region. Second, the scaled \(e\) should not be so small as to exceed the representation accuracy of numbers. The method that incorporates these ideas is called R-QIM. Its embedding operation is defined as: \[s_{\rm R-QIM}\triangleq\alpha Q_{(m,k)}(s)+(1-\alpha)s, \tag{9}\] where \(\alpha\) represents a scaling factor that satisfies \(\alpha\in\left(\frac{|\mathcal{M}|-1}{|\mathcal{M}|},1\right)\), and \(Q_{(m,k)}(s)\) is an encrypted quantizer defined as: \[Q_{(m,k)}(s)\triangleq Q_{\Delta}(s-d_{m}-k)+d_{m}+k,\ m\in\mathcal{M}. \tag{10}\] In Eq. (10), \(Q_{\Delta}(s)\) denotes the same \(Q_{\Delta}(s)=\Delta\lfloor s/\Delta\rfloor\) used in conventional QIM, and \(k\) represents a dithering component for secrecy.
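As a concrete illustration, here is a minimal NumPy sketch of the two embedding rules for the binary alphabet \(|\mathcal{M}|=2\). We use a round-to-nearest base quantizer so that quantization errors are centered in \([-\Delta/2,\Delta/2)\), matching the distortion analysis above; the function names and default parameters are ours:

```python
import numpy as np

def q_delta(s, delta):
    # Base quantizer; round-to-nearest, so s - q_delta(s, delta) lies in
    # [-delta/2, delta/2), as assumed in the distortion analysis.
    return delta * np.round(s / delta)

def qim_embed(s, m, delta):
    """Conventional QIM, Eq. (6): snap s onto the coset Lambda_m,
    with d_0 = -delta/4 and d_1 = +delta/4."""
    d = np.where(m == 0, -delta / 4.0, delta / 4.0)
    return q_delta(s - d, delta) + d

def rqim_embed(s, m, delta, alpha=0.75, k=0.0):
    """R-QIM, Eqs. (9)-(10): keep a scaled copy of the quantization
    error so the cover can be recovered. For |M| = 2, alpha in (1/2, 1)."""
    d = np.where(m == 0, -delta / 4.0, delta / 4.0)
    q = q_delta(s - d - k, delta) + d + k      # encrypted quantizer Q_(m,k)
    return alpha * q + (1.0 - alpha) * s
```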
R-QIM can be considered a fast version of the lattice-based method proposed in [29]. The parameters \(k\) and \(\alpha\) are typically treated as secret keys in a watermarking scheme. By setting \(k=0\) and \(\alpha=0.5\), we can achieve a 1-bit embedding example of R-QIM as depicted in Fig. 2(b). In this case, the watermarked covers are distributed in the green and red zones around the circle and cross positions, rather than on the positions themselves. For the receiver, the estimated watermark can be extracted from the received signal \(y\) using the following equation: \[\hat{d}_{m}=Q_{\frac{\Delta}{|\mathcal{M}|},k}(y)=\left[Q_{\frac{\Delta}{|\mathcal{M}|}}(y-k)+k\right]\bmod\Delta. \tag{11}\] If the noise term \(n\) is small enough to satisfy the condition: \[Q_{\frac{\Delta}{|\mathcal{M}|},k}(n)=0, \tag{12}\] the correct extraction \(\hat{d}_{m}=d_{m}\) is achieved, whether in a noiseless or noisy channel. To estimate the original weight \(s\) from the received signal \(y\), we use the following equation: \[\hat{s}=\frac{y-\alpha Q_{\frac{\Delta}{|\mathcal{M}|},k}(y)}{1-\alpha}. \tag{13}\] The correct restoration \(\hat{s}=s\) occurs if and only if \(n=0\) such that \(y=s_{\rm R-QIM}\). In the presence of noise, the estimation error is given by: \[\hat{s}-s=\frac{n}{1-\alpha}. \tag{14}\] By setting \(\alpha=0.5\) and \(k=0\), the embedding distortion is depicted in Fig. 3(b), with a maximum error of \(\alpha\Delta/2=\Delta/4\). If the quantization errors are uniformly distributed over \([-(\Delta/2),(\Delta/2)]\), the mean-squared embedding distortion is: \[D=\frac{\alpha}{12}\Delta^{2}. \tag{15}\] Since \(\bigcup_{m=0}^{|\mathcal{M}|-1}\Lambda_{m}=\mathbb{R}\) (as shown in Fig. 2), each bit of the watermark can be embedded into a host sample with any characteristic and distribution. These features make R-QIM capable of accommodating a watermark of almost the same maximum length as the number of host samples.

Fig. 2: Embedding one bit into a sample with different versions of QIM. (a) Conventional QIM. (b) Reversible QIM.

Fig. 3: Selection of the watermarked signal for a given \(s\) and \(m\in\{0,1\}\) in one-bit watermarking. (a) Conventional QIM. (b) Reversible QIM with \(\alpha=0.5\).

### _Discussions_

In this section, we compare the R-QIM algorithm with the HS algorithm proposed in [24] and discuss their respective advantages in terms of usability, capacity, and imperceptibility. First, let's consider usability. The HS algorithm is not suitable for RDH-based static DNN watermarking for two main reasons. First, it is mismatched to hosts with a uniform distribution: embedding leaves obvious statistical traces in the watermarked sequence, leaving the algorithm defenseless against passive attacks. Second, the low capacity of the HS algorithm becomes even worse when applied to uniformly distributed hosts. Therefore, HS is not feasible for static DNN watermarking, as the data preprocessing operation makes the host sequence uniform rather than normally distributed. We conducted experiments to verify this by preprocessing different randomly generated data of normal distribution, testing them multiple times for skewness, kurtosis, and Kolmogorov-Smirnov (K-S) tests, and plotting the Quantile-Quantile (Q-Q) plot for one of the test results. The results (Fig. 4)
clearly show that the preprocessed data becomes flatter and deviates from a normal distribution according to the K-S test. Figures 4(d), (e), and (f) demonstrate that the preprocessed data follows a uniform distribution. Thus, HS [24] lacks practical usability, while R-QIM is feasible for data admitting any distribution. Next, let's analyze the theoretical advantages of R-QIM in terms of capacity and imperceptibility compared to HS. To evaluate the embedding capacity, we consider a host sequence of length \(L\) and analyze the maximum available watermark length \(C_{\max}\) for both R-QIM and HS. In the HS algorithm, the watermark is only embedded into the bin where \(s=\Omega_{\max}\), and the other bins do not contain any information about the watermarks. Recall that Regions i, ii and iii are shown in Fig. 1(c). The maximum length of the available watermark in HS can be calculated as: \[C_{\max,\mathrm{HS}}=\mathrm{Pr}(X\in\mathrm{Region\ ii})\cdot L. \tag{16}\] On the other hand, in R-QIM, the entire host sequence can be used to embed the watermark, resulting in \(C_{\max,\mathrm{R-QIM}}=L\). It is evident that for host sequences of the same length, R-QIM has a higher embedding capacity. In terms of embedding distortion or imperceptibility, we define the signal-to-watermark ratio (SWR) as a measure. The SWR is defined as: \[\mathrm{SWR}\ (\mathrm{dB})=10\times\log\left(\frac{\sigma_{\mathbf{s}}^{2}}{\sigma_{\mathbf{w}}^{2}}\right), \tag{17}\] where \(\sigma_{\mathbf{s}}^{2}\) and \(\sigma_{\mathbf{w}}^{2}\) represent the power of the host and the additive watermark, respectively. A smaller value of \(\sigma_{\mathbf{w}}^{2}\) indicates a higher SWR, which implies better imperceptibility. To analyze the embedding distortion fairly, we assume the same capacity and host distribution for both HS in [24] and R-QIM, corresponding to embedding the watermark into the host \(\Omega_{\max}\), which follows a Gaussian distribution. Regarding the embedding distortion, we have the following result: **Theorem 1**.: _R-QIM achieves a larger SWR than HS when \(\Delta\leq\sqrt{3}\)._ Proof.: Due to the symmetry of the Gaussian distribution, we have \(\mathrm{Pr}(X\in\mathrm{Region\ i})=\mathrm{Pr}(X\in\mathrm{Region\ iii})\). Therefore, \(\mathrm{Pr}(X\in\mathrm{Region\ ii})=1-2\mathrm{Pr}(X\in\mathrm{Region\ iii})\). With the same settings, the \(\sigma_{\mathbf{w}}^{2}\) of HS is given by: \[\sigma_{\mathbf{w},\mathrm{HS}}^{2}=\frac{1}{4}\mathrm{Pr}(X\in\mathrm{Region\ ii})+\mathrm{Pr}(X\in\mathrm{Region\ iii})=\frac{1}{4}+\frac{1}{2}\mathrm{Pr}(X\in\mathrm{Region\ iii}), \tag{18}\] while the \(\sigma_{\mathbf{w}}^{2}\) of R-QIM is given by: \[\sigma_{\mathbf{w},\mathrm{R-QIM}}^{2}=\frac{\alpha\Delta^{2}}{12}\mathrm{Pr}(X\in\mathrm{Region\ ii})=\frac{\alpha\Delta^{2}}{12}-\frac{\alpha\Delta^{2}}{6}\mathrm{Pr}(X\in\mathrm{Region\ iii}). \tag{19}\] Based on Eqs. (18) and (19), we have: \[\sigma_{\mathbf{w},\mathrm{HS}}^{2}-\sigma_{\mathbf{w},\mathrm{R-QIM}}^{2}=\frac{3-\alpha\Delta^{2}}{12}+\frac{3+\alpha\Delta^{2}}{6}\mathrm{Pr}(X\in\mathrm{Region\ iii}). \tag{20}\] Since \(0<\mathrm{Pr}(X\in\mathrm{Region\ iii})<1/2\), we have: \[\frac{3-\alpha\Delta^{2}}{12}<\frac{3-\alpha\Delta^{2}}{12}+\frac{3+\alpha\Delta^{2}}{6}\mathrm{Pr}(X\in\mathrm{Region\ iii})<\frac{1}{2}. \tag{21}\] Equation (20) is larger than \(0\) when \(\Delta\leq\sqrt{3}\). Thus, the theorem is proved.
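As a quick numerical sanity check of the fidelity metric (17), one can embed into a synthetic Gaussian host and measure the SWR directly. The sketch below uses our own parameter choices and the round-to-nearest quantizer from the earlier embedding example; it simply confirms that smaller step sizes \(\Delta\) give higher SWR:

```python
import numpy as np

def swr_db(s, s_w):
    """Empirical SWR of Eq. (17): 10 * log10(host power / watermark power)."""
    return 10.0 * np.log10(np.mean(s**2) / np.mean((s_w - s) ** 2))

rng = np.random.default_rng(0)
s = rng.standard_normal(100_000)        # Gaussian host with unit power
m = rng.integers(0, 2, s.size)          # uniform watermark bits
alpha, k = 0.75, 0.0
for delta in (0.5, 1.0, np.sqrt(3.0)):
    d = np.where(m == 0, -delta / 4.0, delta / 4.0)
    q = delta * np.round((s - d - k) / delta) + d + k   # Q_(m,k), Eq. (10)
    y = alpha * q + (1.0 - alpha) * s                   # R-QIM, Eq. (9)
    print(f"Delta = {delta:.3f}: SWR = {swr_db(s, y):.1f} dB")
```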
According to Theorem 1, when considering a fixed setting with \(\Delta=1\) in HS [24], it can be observed that R-QIM achieves lower embedding distortion and better fidelity, based on the aforementioned assumption. Furthermore, it indicates that the fidelity of R-QIM can be controlled. When \(\Delta>\sqrt{3}\), by adjusting the parameters, we can obtain flexible fidelity performance, whether it is better or worse than HS. In the subsequent scheme design, we will demonstrate the benefits of this feature.

## 4 Applications of R-QIM in Static DNN Watermarking

In this section, we explore the application of R-QIM in static deep neural network (DNN) watermarking. We propose a scheme that includes several algorithms to facilitate the embedding, extraction, and restoration processes. Furthermore, we outline the concrete steps for functions such as integrity protection and infringement identification, which can be realized using the proposed scheme. The schematics of the two applications are depicted in Figure 5. For the sake of simplicity, we refer to the owner of the DNNs as "Alice," the legal user as "Bob," the illegal user as "Mallory," and the trusted third-party institution as "Institution."

### _Wrapping up R-QIM_

To address security concerns, R-QIM requires additional measures. The watermarking, extracting, and restoring processes based on R-QIM are presented through pseudo-codes in **Mark** (Algorithm 1), **Extract** (Algorithm 2), and **Restore** (Algorithm 3). In these algorithms, certain parameters such as \(cl\) and \(k\) are set using a pseudo-random number generator (PRNG), while others like the step size \(\Delta\) and scaling factor \(\alpha\) are determined by the owner. **Mark** (Algorithm 1) takes as input the trained model \(\mathbf{W}\), the watermark \(\mathbf{m}\), and the aforementioned parameters. It outputs the watermarked model \(\mathbf{W}_{wtm}\) along with side information, including the watermark information \(\mathbf{w\_info}\) and the secret key \(\mathbf{sk}\). By selecting a sequence \(\mathbf{s}=[s_{0},s_{1},...,s_{L-1}]\) with the clue \(cl\) and extracting relevant information (\(L\) and \(|\mathcal{M}|\)) from \(\mathbf{m}\) using the Info function, each bit of the watermark \(m_{i}\) is embedded into \(s_{i}\) using the R-QIM embedding equation (9) via the \(\mathrm{Emb}(\cdot)\) function. The watermark information \(\mathbf{w\_info}\) combines \(L\) and \(|\mathcal{M}|\), while \(\mathbf{sk}\) includes \(cl\), \(k\), and \(\Delta\). To maintain the security properties of the embedded watermark, the owner of the DNN model should keep \(\mathbf{w\_info}\), \(\mathbf{sk}\), and \(\alpha\) confidential. **Extract** (Algorithm 2) performs the watermark extraction from the watermarked model \(\mathbf{W}_{wtm}\) generated by **Mark**. With the assistance of the watermark information \(\mathbf{w\_info}\) and the secret key \(\mathbf{sk}\) held by the owner, an estimated sequence \(\hat{\mathbf{d}}\) is created using the R-QIM extraction equation (11) via the \(\mathrm{Ext}(\cdot)\) function, following the same selection process as in **Mark**. Then, utilizing the watermark information in the codebook, **Extract** outputs an estimated watermark \(\hat{\mathbf{m}}\) derived from \(\hat{\mathbf{d}}\).
Notably, since watermark extraction requires the assistance of the secret key \(\mathbf{sk}\) rather than the scaling factor \(\alpha\), which relates to the security of DNN model recovery, **Extract** should be performed by the DNN model owner or a trusted third-party institution, ensuring the non-disclosure of the scaling factor \(\alpha\). **Restore** (Algorithm 3) takes the watermarked model \(\mathbf{W}_{wtm}\) as input and restores it to its original form using the watermark information \(\mathbf{w\_info}\), the secret key \(\mathbf{sk}\), and the scaling factor \(\alpha\) as side information. After the same selection process as **Mark**, each sample \(y_{i}\) is recovered to \(s_{i}\) one by one using the R-QIM recovery equation (13) via the \(\mathrm{Rec}(\cdot)\) function. Since the correct restoration relies on the noise term \(n\), we can detect tampering in the watermarked model under a noiseless channel or a known noisy channel, making the watermarking process reversible for protecting the integrity of the watermarked model. Furthermore, as the restored model no longer contains the watermark, the effectiveness of the restoration process can be evaluated by verifying the absence of the watermark in the DNN model.

### _Integrity Protection_

To enable reversibility in DNN watermarks, Guan et al. [24] introduced the concept of integrity protection for DNN models. They proposed a scheme that verifies whether a DNN model has been tampered with by comparing the bit differences in weights between the restored and original models. Building on this idea, we present an integrity protection scheme that employs R-QIM, as depicted in Figure 5 (a). In this scheme, Alice embeds her watermarks into a commercialized DNN model \(\mathbf{W}\) using the **Mark** algorithm, resulting in a watermarked DNN model \(\mathbf{W}_{wtm}\). During transmission, Mallory illegally intercepts \(\mathbf{W}_{wtm}\), modifies its weights, and profits from sharing the tampered model.

Fig. 4: Skewness, Kurtosis and K-S test results on normally distributed data, and their respective Q-Q plots.

To identify tampering, we define two types of operations for noiseless and noisy channels.

* **Noiseless channel:** In this scenario, where correct recovery is guaranteed, the tampered DNN model is restored using the **Restore** algorithm to obtain an estimated model. Notably, the **Restore** function can meet the requirements of perfect recovery after watermark extraction since Equation (13) contains \(Q_{\frac{\Delta}{|\mathcal{M}|}+k}(y)\), which can be regarded as watermark extraction. The weights of the restored model are then compared to the original model using a difference function \(\text{Diff}(\cdot)\), which calculates a difference ratio \(b\). Due to the sensitivity of the recovery process, even minor changes to \(\mathbf{W}_{wtm}\) would lead to differences in the weights of the restored model. This characteristic allows for integrity assessment, where \(b=1\) (or \(b=0\)) indicates that \(\mathbf{W}_{wtm}\) has (not) been tampered with.
* **Noisy channel:** In this scenario, tampering of DNN models can be identified when the noise term \(n\) is sufficiently small. By leveraging Equation (14), the difference between the restored and original models can be theoretically measured, allowing for a comparison that excludes the interference of the noise term \(n\). Theoretical differences between the restored and original models are computed using Equation (14), and a difference ratio \(b\) is obtained.
When \(b=1\) (or \(b=0\)), it indicates that \(\mathbf{W}_{wtm}\) has (not) been tampered with.

Fig. 5: Schematics of using R-QIM for integrity protection and infringement identification.

In summary, our proposed scheme is well-suited for integrity protection compared to the scheme presented by Guan et al. [24]. Our scheme offers higher fidelity for \(\mathbf{W}_{wtm}\), as justified by Theorem 1. Additionally, our scheme is the first to protect the integrity of \(\mathbf{W}_{wtm}\) over noisy channels.

### _Infringement Identification_

In addition to integrity protection, reversible DNN watermarking can be utilized for infringement identification of suspicious DNN models. When the watermark is removed during the recovery operation, no watermark remains in the restored model. This enables the distinction between a legal user holding the restored model and an illegal user holding the watermarked model. Based on this concept, we propose a novel scheme for infringement identification of DNN models, where a user receives a secret key for recovery after legalization and obtains a restored model. The proposed scheme for legitimate authentication is illustrated in Figure 5 (b). In our proposed scheme, Alice sells her commercialized DNN model \(\mathbf{W}\) through an online/offline platform and utilizes our scheme for marking the ownership of \(\mathbf{W}\). After embedding a watermark into \(\mathbf{W}\) using the **Mark** algorithm, the resulting watermarked model \(\mathbf{W}_{wtm}\) is sent to the platform, serving as an exhibit or trial product to promote Alice's model. As the embedding process occurs after model training, the fidelity of \(\mathbf{W}_{wtm}\) is intentionally lower, ensuring that its disclosure does not harm Alice's rights. When Bob expresses interest in the product, Alice shares \(\mathbf{w\_info}\), \(\mathbf{sk}\), and \(\alpha\) with him. Bob can then recover \(\mathbf{W}_{wtm}\) to its original form using the **Restore** algorithm. The recovered model is identical to the original model, maximizing its effectiveness, and the watermark is completely removed, making it undetectable in the recovered model. If Mallory illegally steals the DNN model and shares it on public platforms, Alice can report the incident to the Institution for arbitration. To authenticate the legitimacy of the suspicious model held by Mallory, Alice or the Institution can extract the estimated watermark \(\hat{\mathbf{m}}\) from the suspect model using the **Extract** algorithm. Then, \(\hat{\mathbf{m}}\) can be compared to Alice's watermark using \(\mathrm{Diff}(\cdot)\), which outputs a difference ratio \(b\) for detecting the presence of the embedded watermark. When \(b\leq 0.1\), the watermark is considered detected, and Mallory is identified as an illegal user. To avoid infringing on Alice's rights, the DNN watermarking scheme for infringement identification should exhibit lower fidelity, ensuring that the effectiveness of the watermarked model is no better than the original one. Thanks to Theorem 1, R-QIM offers greater distortion than HS [24] by setting \(\Delta>\sqrt{3}\) and an appropriate \(\alpha\), making it more suitable for the infringement identification scenario.
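The arbitration step reduces to a bit-error-rate comparison, which can be sketched in a few lines of Python; the function names are illustrative, and the 0.1 threshold follows the detection rule above.

```python
import numpy as np

def diff_ratio(m, m_hat):
    """Difference ratio b: fraction of differing bits between watermarks."""
    m, m_hat = np.asarray(m, dtype=int), np.asarray(m_hat, dtype=int)
    return float(np.mean(m != m_hat))

def watermark_detected(m, m_hat, threshold=0.1):
    """The watermark counts as detected (illegal copy) when b <= threshold."""
    return diff_ratio(m, m_hat) <= threshold
```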
## 5 Simulations To evaluate the effectiveness of the proposed R-QIM scheme, the simulations are divided into three parts: i) Usability of the HS method in [24]. ii) Comparison between R-QIM and HS in terms of capacity and fidelity. iii) Impact of R-QIM parameters. The experimental setups for these simulations are summarized as follows: **Datasets and Models**: The datasets chosen for training the models are **MNIST**[30] and **CIFAR10**[31]. The **MNIST** dataset consists of 60,000 training and 10,000 testing gray-scale images of handwritten digits, each with a size of 28\(\times\)28 pixels and divided into 10 classes. The **CIFAR10** dataset contains 50,000 training and 10,000 testing color images of various objects, with a size of 32\(\times\)32 pixels. Two combinations of models and datasets are used: the Multilayer Perceptron (MLP) model trained on **MNIST** (referred to as Group A) and the Visual Geometry Group (VGG) model trained on **CIFAR10** (referred to as Group B). **Parameters**: The watermark is generated by converting a piece of text into a bit stream. The step size is set to \(\Delta=1\), and the scaling factor is chosen as \(\alpha=0.8675\) with a dithering value of \(k=0\). **Indicators**: The fidelity of the watermarked model is evaluated using training loss and classification accuracy. The training loss measures the damage caused by watermark embedding, while the classification accuracy reflects the effectiveness of the watermarked network. Lower training loss indicates less impact from watermark embedding, and higher classification accuracy indicates greater effectiveness. To detect the existence of copyright information and perform tampering detection, the bit error rate (BER) is employed. BER is calculated as the ratio of the number of differing bits between the original and estimated message to the total number of bits. In copyright protection, if BER is not larger than 10%, it indicates the presence of information; otherwise, it does not. For tampering detection, a model is considered untampered if BER equals 0. ### _Usability test_ In Section 3.2, we theoretically identified potential weaknesses of the HS method. To support our claims, we conducted a numerical analysis of the weights of DNN models. Figure 6 presents the Q-Q plot comparing the original and preprocessed weights of the MLP and VGG models against normal and uniform distributions. We observe that the original weights of both models exhibit a distribution closer to the normal distribution, while the preprocessed weights clearly follow the uniform distribution as integers. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Metric**} & \multicolumn{2}{c}{**MLP (\(198656\) length)**} & \multicolumn{2}{c}{**VGG (\(200000\) length)**} \\ \cline{2-5} & Original & Preprocessed & Original & Preprocessed \\ \hline Skewness & 0.0426 & -0.1059 & 34.8544 & -0.0028 \\ Kurtosis & 2.8642 & 1.8324 & 1692.39 & 1.8276 \\ \(P\leq 5\%\) in K-S test & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \(P\leq 5\%\) in J-B test & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \hline \end{tabular} \end{table} TABLE II: The analysis of weights in different models. Furthermore, we performed an analysis of skewness, kurtosis, K-S test, and Jarque-Bera (J-B) test results for the original and preprocessed weights of the MLP and VGG models. The summarized analysis is presented in Table II. 
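The distributional checks behind Table II can be reproduced with standard `scipy.stats` routines; a minimal sketch, assuming the weights are passed as a flat array and using the non-excess (Pearson) kurtosis convention of the table:

```python
import numpy as np
from scipy import stats

def weight_distribution_report(weights):
    """Skewness, kurtosis, and K-S / J-B normality tests for model weights."""
    w = np.ravel(weights).astype(np.float64)
    z = (w - w.mean()) / w.std()          # standardize for the K-S test
    return {
        "skewness": stats.skew(w),
        "kurtosis": stats.kurtosis(w, fisher=False),  # normal data -> 3
        "ks_pvalue": stats.kstest(z, "norm").pvalue,
        "jb_pvalue": stats.jarque_bera(w).pvalue,
    }
```

A p-value below 5% in either test rejects normality, which is the criterion reported in the last two rows of Table II.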
The results indicate that: i) Data preprocessing flattens the distribution of the weights, resulting in lower kurtosis values for the preprocessed weights. ii) Neither the original nor the preprocessed weights of the MLP and VGG models pass the K-S and J-B tests. These experiments demonstrate that realistic DNN weights do not follow a normal distribution. This does not undermine the validity of the data preprocessing method proposed in [24] for transforming data from a normal to a uniform distribution; nevertheless, it highlights the limited usability of the method in [24].

### _Capacity and fidelity comparisons_

In order to assess the adaptability and superiority of the two proposed schemes in terms of capacity and fidelity, we conducted several experiments to compare R-QIM with HS in two specific applications. The first application focuses on integrity protection, which requires a watermarking method with a high embedding capacity and minimal embedding damage. Therefore, we compared the maximum available capacity, classification accuracy, and training loss between our proposed scheme and the method proposed by [24] with a fixed step size \(\Delta=1\).

Fig. 6: Q-Q plot on the original and preprocessed weights with \(c=3\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Host length**} & \multicolumn{2}{c}{**MLP (\(198656\) length)**} & \multicolumn{2}{c}{**VGG (\(200000\) length)**} \\ \cline{2-5} & R-QIM & HS [24] & R-QIM & HS [24] \\ \hline 20\% & 39732 & 377 & 40000 & 396 \\ 40\% & 79463 & 811 & 80000 & 760 \\ 60\% & 119194 & 1187 & 120000 & 1177 \\ 80\% & 158925 & 1565 & 160000 & 1604 \\ 100\% & 198656 & 1969 & 200000 & 1982 \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison of information embedding capacity.

Fig. 7: Comparison of loss and accuracy with different epochs.

The results for the maximum available capacity are presented in Table III, revealing a significant difference between R-QIM and HS. Regardless of whether it is group A or B, the table clearly indicates that R-QIM exhibits a higher embedding capability than the benchmark method, which aligns with our theoretical analysis. Regarding fidelity, we compared the training loss and classification accuracy of the watermarked models embedded at different epochs using R-QIM and HS. The corresponding results are depicted in Fig. 7. It can be observed that R-QIM consistently outperforms the benchmark method in both metrics for both groups A and B, with the difference being more pronounced in group B. Importantly, these observations support our assumption that lower embedding distortion leads to better fidelity of the watermarked model. For infringement identification, according to Theorem 1, R-QIM can achieve a more noticeable decline in fidelity than HS by setting \(\Delta>\sqrt{3}\). To verify this, we examined the classification accuracy and training loss of the MLP and VGG models with different values of \(\alpha\) and \(\Delta\), as shown in Fig. 8. Based on these observations, we conclude the following: i) As \(\alpha\) and \(\Delta\) increase, the loss of the watermarked model increases while the accuracy decreases, which aligns with our expectation regarding the relationship between distortion and fidelity. ii) When \(\Delta=1<\sqrt{3}\) in R-QIM, it outperforms HS in terms of both loss and accuracy for groups A and B. However, when \(\Delta=3\) or \(\Delta=5\) (both larger than \(\sqrt{3}\)), HS performs better than R-QIM.
This finding supports Theorem 1 and demonstrates the flexible fidelity performance of R-QIM, which determines its applicability in infringement identification.

### _Performance of R-QIM Recovery_

To assess the performance of R-QIM recovery, we conducted several simulations focusing on the accuracy of the recovered values and the presence of watermarks. In these experiments, the watermarks were converted to uniform data consisting of 4264 bits. In the first experiment, we aimed to compare the performance difference when implementing the reversible operation. We trained the two combinations from scratch twice for 60 epochs and embedded the watermark at epoch 30 using the proposed scheme. In the first run (denoted by the green line), a reversible operation was applied immediately after the embedding process, while in the second run (denoted by the red line), no reversible operation was applied. Fig. 9 presents notable observations from this experiment: i) After watermark embedding, the model's accuracy sharply degraded due to the parameter modification. However, with the implementation of the proposed reversible operation, the reduced accuracy was immediately restored. This demonstrates the effectiveness of the reversible operation in offsetting the damage caused by watermark embedding. ii) Without the reversible operation, when the reduced accuracy reached a plateau, the accuracy of both groups remained below that of the case where the reversible operation was applied. This indicates that subsequent training only partially compensates for the embedding damage without the reversible operation, whereas the compensation is complete with it. iii) We observed that group A was more severely affected than group B after watermark embedding and reached a plateau more slowly in subsequent training, suggesting a higher effectiveness of the reversible operation in group A. To analyze the specific effects of the various processes in the proposed method, we compared the values of the original, watermarked, and recovered weights for the two combinations in Fig. 10. In this figure, the points representing the original and recovered cover (represented by a horizontal line and a vertical bar, respectively) coincide at each index of the sample, indicating correct recovery. Additionally, the distribution of watermarked weights (represented by crosses) in Fig. 10 illustrates the impact on the weights caused by watermark embedding. Finally, as the infringement identification function relies on determining whether the restored DNN model contains a watermark, we compared the bit error rate (BER) of the watermark with and without the reversible operation, as depicted in Fig. 11. The BER value of the watermarked model was 0.0005, whereas it increased to 0.43 after applying the reversible operation. This confirms that the reversible operation can effectively remove the watermark embedded in the host model, thereby demonstrating the validity of the legitimacy authentication scheme.

## 6 Conclusion

In this paper, we have proposed a novel static deep neural network (DNN) watermarking scheme called Reversible QIM (R-QIM). The R-QIM scheme offers higher capacity and fidelity compared to existing methods, and it overcomes the weaknesses associated with the usability of host data under various distributions. We have also introduced two R-QIM-based schemes for integrity protection and infringement identification of DNNs.
The integrity protection scheme enables the verification of watermarked DNNs' integrity by comparing the restored model with the original model.

Fig. 8: Comparison of loss and accuracy with different \(\alpha\) and \(\Delta\).

In infringement identification, the presence of watermarks in the watermarked model can determine the legality of the current user. Theoretical analyses and numerical simulations have demonstrated the superior performance of R-QIM compared to the method proposed in [24]. R-QIM exhibits greater flexibility in fidelity performance, higher embedding capacity, and adaptability to weights with arbitrary distributions. In conclusion, the R-QIM scheme presents a significant advancement in DNN watermarking, offering enhanced capacity, fidelity, and applicability in various scenarios. This scheme holds promise for effective integrity protection and infringement identification of DNN models in practical applications.
2309.02179
High-resolution 3D Maps of Left Atrial Displacements using an Unsupervised Image Registration Neural Network
Functional analysis of the left atrium (LA) plays an increasingly important role in the prognosis and diagnosis of cardiovascular diseases. Echocardiography-based measurements of LA dimensions and strains are useful biomarkers, but they provide an incomplete picture of atrial deformations. High-resolution dynamic magnetic resonance images (Cine MRI) offer the opportunity to examine LA motion and deformation in 3D, at higher spatial resolution and with full LA coverage. However, there are no dedicated tools to automatically characterise LA motion in 3D. Thus, we propose a tool that automatically segments the LA and extracts the displacement fields across the cardiac cycle. The pipeline is able to accurately track the LA wall across the cardiac cycle with an average Hausdorff distance of $2.51 \pm 1.3~mm$ and Dice score of $0.96 \pm 0.02$.
Christoforos Galazis, Anil Anthony Bharath, Marta Varela
2023-09-05T12:33:05Z
http://arxiv.org/abs/2309.02179v1
# High-resolution 3D Maps of Left Atrial Displacements using an Unsupervised Image Registration Neural Network

###### Abstract

Functional analysis of the left atrium (LA) plays an increasingly important role in the prognosis and diagnosis of cardiovascular diseases. Echocardiography-based measurements of LA dimensions and strains are useful biomarkers, but they provide an incomplete picture of atrial deformations. High-resolution dynamic magnetic resonance images (Cine MRI) offer the opportunity to examine LA motion and deformation in 3D, at higher spatial resolution and with full LA coverage. However, there are no dedicated tools to automatically characterise LA motion in 3D. Thus, we propose a tool that automatically segments the LA and extracts the displacement fields across the cardiac cycle. The pipeline is able to accurately track the LA wall across the cardiac cycle with an average Hausdorff distance of \(2.51\pm 1.3\)\(mm\) and Dice score of \(0.96\pm 0.02\).

Left Atrial, Image Registration Neural Network, Displacement Field Vector.

## 1 Introduction

The analysis of the anatomy and function of the left atrium (LA) is becoming more important for the prognosis and diagnosis of cardiac conditions such as atrial fibrillation (AF) or heart failure (HF) (Hoi, 2017; Peters et al., 2021). Structural characteristics of the LA are established atrial disease biomarkers (Varela et al., 2017) and analysis of LA deformations has been explored using speckle-tracking echocardiography (Smiseth et al., 2022). These biomarkers are typically obtained for a single LA view and as spatial averages across LA wall regions. Spatiotemporal 3D maps of LA deformation are expected to provide more specific signatures of LA pathology, with greater diagnostic and prognostic value, as has been shown for the left ventricle (LV) (Duchateau et al., 2020). However, there are currently no publicly available MRI datasets or adequate image analysis tools to extract high-resolution displacement field vector (DFV) maps of the whole LA. In this paper, we use a novel high-resolution Cine MRI protocol designed specifically for the LA. These Cine MRI offer information about the LA at higher spatial resolution than images of any other existing database. However, given that only a small number of subjects have been imaged with this protocol, we develop and utilize methods suited to a limited number of training images.

**Aim** We propose the following pipeline to automatically obtain high-resolution 3D DFVs of the LA: 1) A few-shot segmentation network (LA-SNet) of the LA across the cardiac cycle to guide the registration; 2) Extraction of the LA segmentation contour and dilation; 3) An automatic subject-by-subject image registration of the LA contour image (LA-DNet).

## 2 Methods

### Data

We use 3D LA Cine MRI bSSFP scans acquired using a novel acquisition protocol (Varela et al., 2020). In summary, they were acquired in a single breath-hold, with a resolution of \(1.72\times 1.72\times 2.00\)\(mm^{3}\) and 20 phases across the cardiac cycle. Phase 0 corresponds to cardiac end diastole (smallest LA volume). As proof of concept, we analyse images from six subjects: three healthy volunteers and three subjects with suspected cardiovascular disease.

### Preprocessing

The images are cropped to a size of \(96\times 96\times 36\) voxels, centered at the LA. Additionally, they are translated such that the LA centroid is stationary across the cardiac cycle and their intensity is min-max normalized.
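A minimal numpy sketch of this preprocessing step, assuming the LA centroid is already known (e.g. from a rough segmentation) and ignoring volume-boundary handling:

```python
import numpy as np

def preprocess_frame(volume, centroid, size=(96, 96, 36)):
    """Crop a Cine MRI frame around the LA centroid and min-max normalize."""
    start = [int(round(c - s / 2)) for c, s in zip(centroid, size)]
    crop = volume[tuple(slice(a, a + s) for a, s in zip(start, size))]
    crop = crop.astype(np.float32)
    return (crop - crop.min()) / (crop.max() - crop.min() + 1e-8)
```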
We manually segment the LA across the entire cardiac cycle to use as ground truth. From the segmented data, the contour is extracted and dilated using a 2 voxel radius spherical structure, which is used to mask the images.

### Model

Details of LA-SNet and LA-DNet are shown in Figure 1; their parameters were selected experimentally. They share the same architecture, which is based on a 3D U-Net (Ronneberger et al., 2015). The models incorporate squeeze and excitation blocks (Hu et al., 2018), which have previously been applied to LV MRI segmentation and image registration (Galazis et al., 2022). LA-DNet also utilizes a spatial transformer (Jaderberg et al., 2015) to obtain the DFV in an unsupervised way. The DFV is smoothed using a bending energy regularizer (Rueckert et al., 1999). LA-SNet is trained on the augmented whole LA images on cardiac phases 0, 8, and 15 and predicts the respective LA segmentation. LA-DNet takes the two contour masked images (moving: cardiac phase 0; fixed: cardiac phase [0-19]) to generate a displacement field that resamples the moving to the target image.

Figure 1: The proposed pipeline to extract high-resolution LA displacement field maps.

## 3 Results

LA-SNet can accurately segment the LA across the cardiac cycle, with an average Hausdorff distance (HD) of \(3.03\pm 1.12\ mm\) and Dice score (DS) of \(0.95\pm 0.02\). Similarly, LA-DNet is able to accurately track the LA wall across the cycle (see Figure 2). The LA segmentations obtained when adding the estimated DFV to the LA segmentation in phase 0 compare extremely well with the ground-truth segmentations: \(HD=2.51\pm 1.3\ mm;DS=0.96\pm 0.02\). This outperformed the previously used symmetric diffeomorphic image normalization from the ANTs package (Avants et al., 2009), which obtained (\(HD=2.57\pm 1.16\ mm;DS=0.85\pm 0.04\)) when applied to the same LA contour images and (\(HD=3.35\pm 1.48\ mm;DS=0.77\pm 0.09\)) when applied to the unsegmented LA images. Using LA-DNet directly on the unsegmented LA images as inputs also led to poor results (\(HD=3.35\pm 1.05\ mm;DS=0.78\pm 0.07\)). The LA-DNet estimated DFVs are spatially and temporally smoother, and the Jacobian of the deformation gradient is consistent with the known volumetric changes of the LA, as can be seen in: [https://tinyurl.com/2eju3r9f](https://tinyurl.com/2eju3r9f).
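The two evaluation metrics can be computed from binary masks with a short Python sketch; the voxel spacing follows the acquisition protocol above, and `directed_hausdorff` from scipy is applied to the foreground point sets of both masks:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_mm(a, b, spacing=(1.72, 1.72, 2.0)):
    """Symmetric Hausdorff distance in mm between two binary masks."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```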
## 4 Conclusions

The proposed pipeline is able to extract DFVs that accurately track the LA wall across the cardiac cycle. The estimated high-resolution 3D LA DFVs pave the way towards potentially detecting regional functional biomarkers for conditions such as AF or HF. They may also provide useful information for the identification of LA fibrosis (Sohns and Marrouche, 2020). The LA registration across the cardiac cycle is more challenging than that of the LV. For the latter, several registration tools are available (Hernandez et al., 2021; De Vos et al., 2019), but these performed poorly for the LA registration task. The usual assumption that the intensity of the different image components (e.g. the LV myocardium) is constant across the cardiac cycle is not valid for the LA. This is because the LA myocardium is very thin (Varela et al., 2017), and thus barely identifiable in bSSFP images, and the LA blood pool voxels' intensity depends on blood velocity and is therefore very variable across the cardiac cycle. We successfully propose a different approach for automatic LA registration, using LA contours from automated segmentations as inputs and training on a subject-by-subject basis to allow its deployment to small datasets of Cine MRI of the LA.

Figure 2: The image registration metrics plotted for LA-DNet and ANTs: A) Hausdorff distance (HD) and B) Dice score (DS). HD and DS are obtained by comparing manual LA segmentations across the cardiac cycle with segmentations transformed using the estimated DFV on phase 0.

## Acknowledgments

This work was supported by the UKRI CDT in AI for Healthcare [http://ai4health.io](http://ai4health.io) (Grant No. EP/S023283/1) and the British Heart Foundation Centre of Research Excellence at Imperial College London (RE/18/4/34215). We acknowledge computational resources and support provided by the Imperial College Research Computing Service ([http://doi.org/10.14469/hpc/2232](http://doi.org/10.14469/hpc/2232)). Last but not least, we thank the volunteers for allowing the use of their data for this research.
2310.09167
A Deep Neural Network -- Mechanistic Hybrid Model to Predict Pharmacokinetics in Rat
An important aspect in the development of small molecules as drugs or agrochemicals is their systemic availability after intravenous and oral administration. The prediction of the systemic availability from the chemical structure of a potential candidate is highly desirable, as it allows to focus the drug or agrochemical development on compounds with a favorable kinetic profile. However, such predictions are challenging as the availability is the result of the complex interplay between molecular properties, biology and physiology and training data is rare. In this work we improve the hybrid model developed earlier [1]. We reduce the median fold change error for the total oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to 1.62. This is achieved by training on a larger data set, improving the neural network architecture as well as the parametrization of the mechanistic model. Further, we extend our approach to predict additional endpoints and to handle different covariates, like sex and dosage form. In contrast to a pure machine learning model, our model is able to predict new end points on which it has not been trained. We demonstrate this feature by predicting the exposure over the first 24h, while the model has only been trained on the total exposure.
Florian Führer, Andrea Gruber, Holger Diedam, Andreas H. Göller, Stephan Menz, Sebastian Schneckener
2023-10-13T15:01:55Z
http://arxiv.org/abs/2310.09167v2
# A Deep Neural Network - Mechanistic Hybrid Model to Predict Pharmacokinetics in Rat

###### Abstract

An important aspect in the development of small molecules as drugs or agrochemicals is their systemic availability after intravenous and oral administration. The prediction of the systemic availability from the chemical structure of a potential candidate is highly desirable, as it allows to focus the drug or agrochemical development on compounds with a favorable kinetic profile. However, such predictions are challenging as the availability is the result of the complex interplay between molecular properties, biology and physiology and training data is rare. In this work we improve the hybrid model developed earlier [1]. We reduce the median fold change error for the total oral exposure from **2.85** to **2.35** and for intravenous administration from **1.95** to **1.62**. This is achieved by training on a larger data set, improving the neural network architecture as well as the parametrization of the mechanistic model. Further, we extend our approach to predict additional endpoints and to handle different covariates, like sex and dosage form. In contrast to a pure machine learning model, our model is able to predict new end points on which it has not been trained. We demonstrate this feature by predicting the exposure over the first 24h, while the model has only been trained on the total exposure.

Keywords: Hybrid modelling, Deep Learning, Property prediction, PBPK modelling, Drug design, Bioavailability, Pharmacokinetics

## 1 Introduction

Drug discovery is about the optimization of the interaction of molecules with biological targets to achieve the desired therapeutic effect, while reducing toxic effects. The same is true for developing compounds for applications in agriculture, with a large focus on the reduction of toxic effects in mammals. Both development processes are long and risky. Numerous methods and tools have therefore been established to support decisions in the search for the best performing candidates. Experimental characterization of compounds can take up considerable resources and time. The targeted, early identification of favorable properties and the consequently informed selection of compounds can significantly reduce development cycles and the associated costs. Selection criteria include both pharmacological and toxicological effects, as well as pharmacokinetics (PK)1, in particular the availability of the compound in the body.

Footnote 1: Even though the name Pharmacokinetics implies that the field is only concerned with pharmaceutical substances, the field is concerned with all types of xenobiotic substances, see [https://en.wikipedia.org/wiki/Pharmacokinetics](https://en.wikipedia.org/wiki/Pharmacokinetics)

In this multi-parameter optimization of physicochemical properties, efficacy, safety and PK, many compounds are usually tested in different high-throughput assays to generate a basic understanding of a compound's characteristics. However, as PK is determined by the complex non-linear interplay of compound properties and physiology, using these assays to test and optimize all aspects and parameters relevant for PK is usually not possible. Therefore, animal studies remain an important contribution to understanding the PK characteristics of a potential drug candidate. However, animal studies are usually performed later in research for selected compounds that are already optimized with respect to the early accessible assays.
This approach helps to keep the number of animal experiments low, but unfortunately often suffers from eventual limitations in further PK optimization. The most important quantity in PK is the blood plasma concentration \(C\) as a function of time after an oral (per os, PO) or intravenous (IV) administration. In this publication we are mainly interested in a few key parameters characterizing the concentration-time curve, such as the maximal concentration (\(C_{max}\)) and the exposure between two time points \(t_{1}\) and \(t_{2}\): \[AUC_{t_{1},t_{2}}=\int_{t_{1}}^{t_{2}}dt\,C(t). \tag{1}\] Most important is the total exposure, i.e. the exposure between the time of administration and infinity, here simply denoted as \(AUC\); sometimes the exposure during the first 24h after administration, AUC\({}_{24\text{h}}\), is also considered. For pharmaceutical compounds, oral drug delivery plays an important role. It represents the most common administration route and is convenient for patients and physicians, leading to high patient compliance. The extent to which the systemic exposure of a drug after PO administration (AUC\({}_{\text{PO}}\)) differs from the exposure after intravenous (IV) administration (AUC\({}_{\text{IV}}\)) is quantified by the oral bioavailability \(F\) defined as \[F=\frac{AUC_{PO}}{AUC_{IV}}\cdot\frac{D_{IV}}{D_{PO}}, \tag{2}\] where \(D_{PO}\) denotes the oral dose and \(D_{IV}\) the IV dose. We would like to stress that AUC\({}_{\text{PO}}\), AUC\({}_{\text{IV}}\) and hence \(F\) depend on the compound as well as on the dose and the formulation. In general the \(AUC\) is a non-linear function of the dose, depending e.g. on the metabolic capacity or the availability of binding proteins. In addition, the AUC\({}_{\text{PO}}\) can have non-linear dose dependencies related to the oral absorption, e.g. due to a limited solubility or a transport mechanism. Typically, non-linearities in the dose dependence of AUC\({}_{\text{IV}}\) can be neglected; this allows extrapolating the AUC\({}_{\text{IV}}\) from \(D_{IV}\) to \(D_{PO}\), hence it is sufficient to consider \(D_{PO}=D_{IV}\). Furthermore, the oral dosage form can affect the PK, i.e. whether the compound is administered as solution, suspension or tablet. The AUC\({}_{\text{PO}}\) is typically lower for administration of a suspension or tablet than for a solution, since for a suspension or a tablet the particles first have to be released from the formulation and dissolve in the gastrointestinal tract (GIT). Note that for tablets special so-called enabling formulations exist, which can increase the dissolution rate and hence increase AUC\({}_{\text{PO}}\) and \(F\). Those formulations are usually not used in the early phases of drug and agrochemical development, hence we do not consider them in this publication. While in pharmaceutical development a compound's systemic availability is desired to be as high as possible, for agrochemicals a compound's systemic availability should be as low as possible to minimize safety and toxicity risks. As the determination of the PK-parameters \(AUC\), \(C_{max}\) and \(F\) requires performing in-vivo studies in animals or even humans, they cannot be used as a selection criterion for the early screening and optimization phases due to effort, cost and animal welfare considerations. Therefore, being able to predict them as early as possible, preferably directly from a compound's chemical structure, would reduce the risk during lead identification and optimization phases.
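As a minimal illustration of Eqs. (1) and (2), assuming sampled concentration-time data and simple trapezoidal integration (the extrapolation of the terminal phase to infinity used in practice is omitted):

```python
import numpy as np

def auc(t, c):
    """Exposure between the first and last sampling time, Eq. (1)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))  # trapezoids

def bioavailability(auc_po, auc_iv, dose_po, dose_iv):
    """Oral bioavailability F, Eq. (2)."""
    return auc_po / auc_iv * dose_iv / dose_po
```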
Equally important, focusing only on the most promising compounds reduces the number of animal experiments. There have been several attempts to predict PK-parameters from chemical structure [2; 3]. However, most of them are purely data-driven, hence do not exploit the available mechanistic knowledge about the different processes determining PK. In the present work, we combine Deep Learning with a mechanistic model to predict \(AUC_{PO}\), AUC\({}_{\text{IV}}\) and C\({}_{\text{max,PO}}\) in rats from the chemical structures only. Predictions for \(F\) can be calculated using (2) from the predictions for AUC\({}_{\text{PO}}\) and AUC\({}_{\text{IV}}\). Our approach builds on the recent progress in applying Deep Learning to molecule property predictions [4; 5; 6; 7]. But, in contrast to these works, our data set is rather small with only a few thousand compounds. To compensate for this, we combine Deep Learning for property prediction with physiologically based pharmacokinetic (PBPK) models. PBPK models are well established mechanistic models describing the kinetics of compounds in physiological environments [8, 9]. Doing so we benefit from our knowledge about rat physiology and the interplay of different processes, and make much more efficient use of the available data to learn relevant molecule characteristics.

## 2 Methods and materials

In this section we first describe our hybrid model and give a brief overview of PBPK models. Then we describe the training procedure for our model. Finally, we give an overview of the data used to train our hybrid model.

### Hybrid modelling

To predict pharmacokinetics in rats we combine Deep Learning for molecular property prediction with PBPK models. A PBPK model is a system of ordinary differential equations (ODE) describing the PK processes a compound is undergoing within an organism. The processes are usually referred to as ADME processes, which stands for absorption, distribution, metabolism and excretion. PBPK models are compartment models in which organs are represented by the compartments and the processes are parametrized by physicochemical and other molecule properties. For readers not familiar with PBPK models we provide a brief overview in Appendix A.

Figure 1: Overview over our hybrid model structure consisting of a graph convolutional neural network for predicting a set of molecule properties. These molecule properties are the free parameters of a physiological model of rats predicting the pharmacokinetics. In practice, we approximate the PBPK model by a surrogate neural network.

In our approach, depicted in figure 1, the molecular properties predicted by a neural network, here called property net, correspond to parameters of the PBPK model, e.g. the solubility and the amount of substance cleared in the liver (hepatic clearance). The other parameters of the PBPK model are compound independent and describe the physiology of the organism, e.g., the organ volumes and blood flows, or specify the drug administration, e.g., administration route (IV/PO), dose and formulation. The clear split between parameters describing the physiology and the molecule in our model highlights the expected advantages of our hybrid model in terms of required data and generalization. In our model the physiological parameters are fixed by the choice of target organism, so the neural network bridges between the molecular structure and the physiology. Furthermore, as certain aspects of the problem, e.g.
the dose dependency, are mechanistically modelled, our model is able to exploit the extrapolation capabilities of the PBPK model, for example by generalizing to dosages outside the training range or predicting properties it has not been trained on, e.g. concentrations in different tissues. Furthermore, we can exploit the flexibility of the property net to compensate for a misspecified or inaccurate mechanistic model. Two examples we consider in this publication are the differences between male and female rats and the different formulations used for the oral administration. We model those cases by a mechanistic model for male rats with a solution as formulation, but let the molecule properties depend on these covariates by passing them to the last layers of the property net. By doing so, the property net learns to adapt its outputs such that even though we are using a wrong mechanistic model, we are still able to obtain an accurate model for female rats and suspensions.

### Physiologically based pharmacokinetic models

For the mechanistic model, we use the generic rat PBPK model available in the Open Systems Pharmacology (OSP) Suite [11] and add a generic hepatic (metabolism) and renal clearance (glomerular filtration in the kidney) as well as a generic global P-glycoprotein (P-gp)-like active transport, which causes a flow from the inside to the outside of cells. For our purpose of predicting bioavailability in the early phases of drug and agrochemical development, it is sufficient to fix the physiological parameters to those of a typical rat by using the OSP default values. The compound properties used as input for the PBPK model are listed in table 1. For oral administrations we assume a solution as formulation and account for differences between solution and suspension as well as differences between male and female rats by passing formulation and sex to the property net as described in section 2.1.

### Neural network architecture

Compared to the model developed in [1] we here replace the SMILES string representation of molecules and the corresponding 1D convolutional architecture with a graph convolutional network (GCN) architecture directly acting on the graph representation of molecules. As SMILES representations are generated by a depth-first traversal through the molecular graph with an arbitrary starting node, they are not unique, and connected sub-graphs are neither represented as contiguous sub-strings nor represented in the same way when occurring in different molecules. In contrast, graph convolutional layers explicitly respect the permutation invariance of the graph nodes and the connectivity of the graph. Hence, they do not suffer from the non-uniqueness and connectivity issues of SMILES based architectures. We therefore expect a GCN architecture to be superior to a SMILES based architecture.
Indeed, we even had difficulties finding a set of hyperparameters for the SMILES based architecture which result in stable pre-training and acceptable accuracy, while we easily found such hyperparameters for the GCN.

\begin{table} \begin{tabular}{|p{56.9pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline **Parameter** & **Short description** & **Pretraining data source** & **Distribution for surrogate model training** \\ \hline Hepatic clearance & Clearance rate in liver & In-vitro data (from hepatocyte assay) & Log-normal \\ \hline V\({}_{\text{max}}\) & P-gp like transporter & In-vitro data (Caco-assay) & Mixture of point mass at 0 and a half-normal \\ \hline GFR fraction & Fraction filtered in kidney & No pretraining & Uniform \\ \hline Fraction unbound & Fraction unbound in plasma & Predicted by independent DNN & Truncated normal \\ \hline Lipophilicity & Membrane affinity (log(MA)) & Predicted by independent DNN & Normal \\ \hline Effective molecular weight & Surrogate for molecule size & Calculated molecular weight excluding halogens & Half-normal \\ \hline Stomach solubility & Solubility in stomach & Predicted using Henderson-Hasselbalch equation, DNN for solubility and pKa & Log-normal \\ \hline Small intestine solubility & Solubility in small intestine & Predicted using Henderson-Hasselbalch equation, DNN for solubility and pKa & Log-normal \\ \hline Large intestine solubility & Solubility in large intestine & Predicted using Henderson-Hasselbalch equation, DNN for solubility and pKa & Log-normal \\ \hline Small intestine permeation & Absorption rate in small intestine & Predicted from predicted log(MA) and molecular weight & Log-normal \\ \hline Large intestine permeation & Absorption rate in large intestine & Predicted from predicted log(MA) and molecular weight & Log-normal \\ \hline \end{tabular} \end{table} Table 1: Overview over the compound properties used as input of the PBPK model and whether predicted or in-vitro observed values are used for pretraining. Details on the used prediction models can be found in [10].

We use the GCN-architecture proposed in [12] implemented in deepchem [13]. This GCN-architecture uses differentiable operations inspired by those used to calculate circular fingerprints and equips them with learnable weights.

### Model training

#### 2.4.1 Surrogate

For end-to-end training of the hybrid model, we need to back-propagate through the PBPK model. Even though this is possible for small ODE systems, it is computationally prohibitive for our model with about 300 stiff ODEs. Therefore, we replace the PBPK model by a surrogate neural network approximating the PBPK model. Here, we use a fully connected neural network. We train the surrogate model on 2.4M simulations with random model parameters, sampled using Latin hypercube sampling, and test it on an additional 0.72M randomly sampled simulations. Each parameter is distributed according to a simple parametric distribution, e.g. normal or log-normal, roughly matching the distribution of values in our training data. The functional forms of the distributions are summarized in table 1. We increase the variances of the distributions by 50% to avoid the predictions of the property net leaving the training range of the surrogate. As can be seen from figure 2, the surrogate is able to reproduce the PBPK model accurately, with a fold change error of about 1.04 for the two PO endpoints and 1.02 for the IV endpoint.
The error of the surrogate is negligible compared to the expected error caused by the high biological variability of about 50% of the in-vivo data. To be able to reliably back-propagate through the surrogate, good pointwise approximations of the PBPK model are not sufficient. Also the surrogate gradients, and ideally higher order derivatives, need to be good approximations of the PBPK model's derivatives, i.e. the surrogate needs to reproduce the response of the PBPK model to changes in the molecule properties. We confirm this qualitatively by randomly generating a set of points in the PBPK input space and then varying each molecule property individually, while holding the others fixed; some examples are shown in figure 3. Overall, we find good agreement between the curves predicted by the surrogate and the PBPK model.

Figure 2: Overall accuracy of the surrogate network evaluated on a hold-out test set of simulations. A median fold change error of 2-4% is small compared to the expected biological variability of the data of 50%.

#### Training strategy

To overcome the small data set of about 7000 compounds, we pre-train the property net on molecule properties of about 140k compounds. As this is only a pretraining step, we use predicted properties where available and measured values otherwise. Table 1 indicates for which properties predicted and for which measured values are used.

Figure 3: Some examples showing the full PBPK model and the surrogate model as a function of a single model parameter, while keeping the others fixed. In these examples the dependence on the hepatic clearance (top left) and dose (bottom right) is very accurately described by the surrogate. In the GFR example (top right) the surrogate is able to reproduce the shape of the PBPK model, but shows a constant offset of about 20%, which is acceptable given the variability of the PK-data. The solubility example (bottom left) shows an offset of similar size, but is able to qualitatively reproduce the step-like behavior seen in the PBPK model. The small oscillations of a few % do not introduce major problems during training of the hybrid model.

Details on the pretraining data set are given in section 2.5. The pretrained property net is then trained end-to-end, as part of the hybrid model, on the in-vivo data to predict the target PK-parameters. To constrain the model parameters predicted by the property net to physiological values, e.g. a non-negative clearance, and to the range of the surrogate training data, we add a penalty term to the loss function \(L_{total}\): \[L_{total}=\sum_{i}L\left(y_{i},\widehat{y_{i}}\right)+\lambda\sum_{i,j}\left[\max\left(p_{j,i}-p_{max,j},0\right)^{2}+\max\left(p_{min,j}-p_{j,i},0\right)^{2}\right]. \tag{3}\] The first sum is over all data points \(i\), the second over all data points \(i\) and all molecule properties indexed by \(j\); \(p_{j,i}\) denotes the predicted molecule property \(j\) for data point \(i\). The penalty is zero as long as \(p_{min,j}\leq p_{j,i}\leq p_{max,j}\) and positive otherwise. When during training the penalty term is not decreasing for several epochs, we increase the weight \(\lambda\) until a pre-defined tolerance, here \(10^{-8}\), is reached. Empirically, we find that this is sufficient to constrain the PBPK model parameters to the viable range, see section B.
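A minimal PyTorch sketch of the penalized loss in Eq. (3), assuming a mean-squared fit loss; the schedule that increases \(\lambda\) when the penalty stalls is omitted:

```python
import torch

def penalized_loss(pred, target, props, p_min, p_max, lam=1.0):
    """Fit loss plus quadratic penalty keeping properties inside the bounds."""
    fit = torch.mean((pred - target) ** 2)
    over = torch.clamp(props - p_max, min=0.0)    # excess above p_max
    under = torch.clamp(p_min - props, min=0.0)   # shortfall below p_min
    return fit + lam * torch.sum(over ** 2 + under ** 2)
```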
### Data

We retrieved all in-vivo data taken after PO or IV administration in Wistar rats from the Bayer data warehouse. After filtering out pro-drugs, salts, molecules heavier than \(1500\,g/mol\) and non-standard formulations we are left with 7192 compounds, with in total 5731 \(AUC_{PO}\), 6183 \(C_{max,PO}\) and 6408 \(AUC_{IV}\) measurements. In contrast to [1], in addition to male rats and PO administrations as solution we also consider female rats and suspensions. Furthermore, in contrast to our previous work [1], we do not restrict the dose; therefore, our data covers a dose range from \(0.0024\,mg/kg\) to \(1000\,mg/kg\). Overall, there is more data available for low dosages than for high dosages, see figure 5. The large dose range reflects the fact that our data set includes relatively low dose (\(\sim 1\,mg/kg\)) data mainly taken from male rats at Bayer Pharmaceuticals, as well as high dose data (\(\gtrsim 10\,mg/kg\)) mainly taken from female rats at Bayer CropScience. Furthermore, the compounds from both divisions are expected to have different properties. This increases the diversity in our data set. We expect that this results in a better generalization of the model.

Figure 4: Number of data points for different sub-sets of the data set. Since the standard test for pharmaceuticals is performed on male rats using, for PO, a solution, most of our compounds are tested on male rats.

To challenge the capabilities of our hybrid model to generalize to new observables we also collect available AUC\({}_{\rm 24h}\) data. For pretraining, we use about 100k compounds from the Bayer data warehouse. We use our internally available models to predict solubility, pKa values, lipophilicity and plasma protein binding in human for all compounds. For the hepatic clearance and membrane permeation no model is available, so we use all available in-vitro measurements, resulting in an additional 40k compounds for pretraining. Note that usually no urine data is collected, so neither data nor a model is available for the GFR fraction; hence the GFR fraction is only trained end-to-end. In total, we use about 140k compounds for pre-training.

## 3 Results

In this section we validate the predictive performance of our model and compare it to a standard GCN. We further challenge the generalization capabilities of our hybrid model by using it to predict the AUC\({}_{\rm 24h}\), a quantity the model has not been trained on.

### Model performance

We optimize the hyperparameters of our hybrid model using the HORD algorithm [15]. The model architecture is optimized on the pretraining set while the training hyperparameters are optimized on the training set. We validate the best model on our 20% hold-out test set in figure 6. For evaluation, we use the median fold change error defined as: \[mfce=\exp\left(\mathrm{median}\left|\log\left(y\right)-\log\left(\hat{y}\right)\right|\right), \tag{4}\] such that for a perfect model \(mfce=1\).

Figure 5: Distribution of used doses in PO (left) and IV (right) measurements. High doses are typically tested only in PO experiments, hence the PO experiments span a much larger dose range than the IV experiments.

A fold change error of 2 to 3 is considered sufficient to inform compound selection [16, 17, 18, 19, 20, 21, 22]. For all targets, except for the AUC\({}_{\rm PO}\) for male rats and suspension, we reach this goal. For AUC\({}_{\rm IV}\) and \(F\) our model even achieves \(mfce<2\).
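For reference, Eq. (4) in code, as a one-line Python implementation:

```python
import numpy as np

def mfce(y_true, y_pred):
    """Median fold change error; equals 1 for a perfect model."""
    return float(np.exp(np.median(np.abs(np.log(y_true) - np.log(y_pred)))))
```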
Also the \(mfce\) of \(F\) predictions improved from 1.83 to 1.62. Additionally, we observe a more stable training and easier to tune hyperparameters. \(\mathrm{C_{max,PO}}\) after oral administration, which has not been considered in [1], can be predicted with a slightly higher accuracy than the AUC. Even though there is less data available for female rats than for male rats, predictions for female rats after an IV administration or a PO administration using solution can be made with similar accuracy as for male rats. We observe that PO predictions for suspensions are less accurate than predictions for solutions. This is expected, given that the dissolution of a suspension adds complexity to the dynamics in the GIT. Likewise, predictions for IV are more accurate than predictions for PO. As described in section 2.1 we predict suspension by using a mechanistic model for solutions with adapted molecule properties, we expect that using a mechanistic model for suspension, the prediction accuracy for suspension can be improved. Figure 6: Model predictions vs observed values for our three training tasks \(\mathrm{AUC_{PO}}\) (top left), \(\mathrm{AUC_{IV}}\) (top right) and \(\mathrm{C_{max,PO}}\) (bottom left). For comparison with earlier work [1; 14] we also show the derived predictions for \(F\) (bottom). The predictions for \(\mathrm{AUC_{IV}}\) are more accurate than the PO predictions, this is expected since the processes involved in a PO administration are more complex than those involved in an IV administration. We observe that despite having less data for female rats than for male rats the predictions have a similar accuracy, while predictions for suspensions are a bit less accurate than those for solutions. ### Advantages of hybrid models Figure 7 shows for comparison the performance of a standard GCN, having the same architecture as the property net except for the output layer's size. The predictions of the GCN are for all three endpoints worse than those of the hybrid model. While the performance drop of the AUC\({}_{\text{IV}}\) predictions is moderate, the performance drop of AUC\({}_{\text{PO}}\) and C\({}_{\text{max,PO}}\) is of practical relevance as the standard GCN does not reach \(mfce<3\). In addition to the improved performance of the hybrid model, compared to a pure Deep Learning model, we can expect that the hybrid model is able to extrapolate and predict target parameters on which it has not been trained. For a first assessment of the extrapolation capability of our hybrid model we use the AUC\({}_{\text{24h}}\). Figure 8 shows the AUC\({}_{\text{24h}}\) predictions of our hybrid model compared to the observed values. The accuracy of the AUC\({}_{\text{24h}}\) predictions are comparable to the endpoints the model was trained on. Note that the AUC\({}_{\text{24h}}\) are predicted using the full PBPK model instead of the surrogate. The high predictive accuracy reconfirms that our surrogate is an accurate approximation of the PBPK model. This is further confirmed by the predictions of our training targets using the PBPK model instead of the surrogate model in section C. Figure 8: Predicted AUC\({}_{\text{24h}}\) compared to the observed values. The AUC\({}_{\text{24h}}\) are predicted from the molecule properties predicted by the property net, using the exact PBPK model. Despite not being trained on the AUC\({}_{\text{24h}}\) our hybrid model achieves an accuracy comparable to the total \(AUC\). 
Figure 7: Model accuracy of a pure GCN model for the same three prediction tasks AUC\({}_{\text{PO}}\) (top), AUC\({}_{\text{IV}}\) (mid) and C\({}_{\text{max,PO}}\) (bottom) as for the hybrid model. The accuracy of all 3 end-points is higher for the hybrid model than for the pure deep learning model, see figure 6.

## 4 Summary and conclusion

In this work we present a hybrid model to predict the pharmacokinetics and bioavailability of pharmaceutical and agrochemical compounds in rats directly from the chemical structure. Predicting in-vivo targets is challenging due to the complex non-linear interplay of many processes as well as the low amount and high variability of data. We tackled these challenges by combining expert knowledge about rat physiology and the processes affecting pharmacokinetics with Deep Learning for molecular property predictions. The work in [1] was extended by using a GCN for the property net and changing the parametrization of the mechanistic model, which improves the interface between the neural network and the mechanistic model. Additionally, we increased the available training data set by less restrictive filtering of the data and the inclusion of additional training endpoints, such as \(\mathrm{C_{max,PO}}\) or predictions for female rats and suspensions. Furthermore, we used a larger internal data set for pretraining, which is expected to be better correlated to bioavailability than the previously used public data from the TOX21 challenge [23]. This led to an improved performance of the model compared to [1]. For all except one end-point our model has an \(mfce<3\). Our \(AUC_{IV}\) and \(F\) predictions even have an mfce \(<2\). This is expected to be accurate enough to inform decisions during the early phases of drug discovery [16, 17, 18, 19, 20, 21, 22]. Furthermore, our model enables the selection and prioritization of compounds which are directly optimized with respect to their pharmacokinetic profile [24, 25, 26]. Our prediction accuracy is competitive with the \(F\) prediction accuracy in [14] when their model is trained only on in-vivo data from the chemical series it is applied to, and superior otherwise. Our model shows a similar performance to the different models in [27], which seem to have a slightly higher accuracy; but to achieve this, the chemical structure as well as in-vitro parameters are required, whereas our approach does not rely on in-vitro measurements. We would like to stress that such a comparison should not be over-interpreted, as different data sets are used for training and validating the different models. Additionally, our approach is able to handle different covariates like sex and formulation by predicting effective molecule properties, which are sex and formulation dependent. We expect that the prediction accuracy can be improved by accounting for these covariates in the mechanistic model. The inclusion of further covariates like body weight is therefore likely to result in even better predictions. However, the incorporation of more covariates is limited in practice, as the typically available covariates do not fully specify the physiology of an individual. To account for the residual variability and to possibly improve predictions further, probabilistic models which estimate population parameter distributions are required. To do so one could build on the recent progress in building deep generative models [28, 29, 30].
Incorporating more covariates, either deterministic or probabilistic, requires more input parameters to the mechanistic model, which complicates the use of a surrogate neural network for the PBPK model. Training and using a surrogate worked well for the small number of inputs and outputs considered in this work, but becomes harder as their number increases. In such cases the use of a full PBPK model can become superior. However, this is currently computationally infeasible. Recently, there has been increasing interest in combining differential equations and Deep Learning [31, 32, 33] and, consequently, in tools to train these models [34, 35, 36], so one can expect that using the full PBPK model will become feasible in the near future. A complementary approach is to use simpler and hence computationally cheaper PK models, such as compartmental models or a reduced version of the full PBPK model. Using PBPK models directly would also alleviate the need for constraining the molecule properties to the validity range of the surrogate. Constraining the molecule properties by introducing a penalty term in the loss worked in our case, but still complicated model training. Using a PBPK model directly would also make it possible to train the hybrid model on concentration-time profiles, which would be highly desirable, since more accurate predictions of concentration-time profiles would allow a much more detailed description of the pharmacokinetics of a compound. Successful application of PK models not only depends on the prediction accuracy, but also on the possibility to estimate the uncertainty of a prediction, such that decisions based on predictions are only made for molecules for which the model is expected to be accurate. Ref. [27] assesses prediction uncertainty; for none of the tested approaches are the uncertainty estimates fully satisfactory, but approaches with statistically correct Bayesian or frequentist epistemic uncertainty provide better uncertainty estimates. We expect that in both cases well-calibrated uncertainties can be provided by computationally expensive ensemble techniques [37, 38, 39]. Our model has the potential to reduce cost, development time and animal experiments in drug and agrochemical research by focusing the development on the most promising candidates and being able to directly optimize a compound's PK. Furthermore, our approach can be used to predict human PK [40], therefore directly optimizing for clinical use. ## Declaration of Competing Interest The authors declare no competing financial interest. ## Acknowledgments ## Appendix A Physiologically based pharmacokinetic models Physiologically based pharmacokinetic (PBPK) models are ordinary differential equation models describing how a substance, e.g. a drug, is absorbed, distributed, metabolized, and excreted in an organism. For the reader not familiar with PBPK models we provide a brief overview of the basic concepts, building blocks and equations that form a PBPK model. For more details we refer to [41]. In PBPK models, physiological organs and tissues are represented by compartments.
The transport of substance via the blood is modeled by balance equations of the form \[\frac{dC_{i}}{dt}=\frac{Q_{i}}{V_{i}}\left(C_{art}-\frac{C_{i}}{P_{i}}\right), \tag{1}\] where \(C_{i}\) denotes the compound concentration in compartment \(i\), \(V_{i}\) its volume, \(Q_{i}\) the blood flow, \(P_{i}\) the partition coefficient between blood and tissue, and \(C_{art}\) the compound concentration in arterial blood, which is governed by \[\frac{dC_{art}}{dt}=-\sum_{i}\frac{Q_{i}}{V_{i}}\left(C_{art}-\frac{C_{i}}{P_{i}}\right). \tag{2}\] To describe dissolution, absorption, metabolism and excretion, as well as additional distribution mechanisms, equations 1 and 2 need to be extended. For example, dissolution and absorption in a single GIT compartment are described by the following equations: \[\frac{dC_{g}}{dt} =\frac{Q_{g}}{V_{g}}\left(C_{art}-\frac{C_{g}}{P_{g}}\right)+K_{a}C_{lum}, \tag{3}\] \[\frac{dC_{lum}}{dt} =-K_{a}C_{lum}+\frac{dC_{dis}}{dt},\] (4) \[\frac{dC_{dis}}{dt} =K\left(C_{0}-C_{dis}\right)^{2/3}\left(C_{s}-C_{lum}\right). \tag{5}\] Equation 3 describes the concentration in the GIT tissue \(C_{g}\), which is sourced by a linear absorption process from the GIT lumen. Equation 4 describes the compound concentration in the GIT lumen \(C_{lum}\), which is sourced by the dissolved compound \(C_{dis}\). Equation 5 is the Noyes-Whitney equation describing the dissolution of the compound in the GIT lumen, where \(K\) is a compound-dependent constant, \(C_{0}\) is the total amount of compound administered divided by the administered volume, and \(C_{s}\) is the solubility, i.e. the compound concentration in the GIT lumen at (thermal) equilibrium. Metabolism is described by Michaelis-Menten kinetics, which for \(C\ll K_{M}\) can be linearized: \[\frac{dC}{dt}=-V_{max}\frac{C}{K_{M}+C}=-\frac{V_{max}}{K_{M}}C+O\left(\left(\frac{C}{K_{M}}\right)^{2}\right). \tag{6}\] The constants \(V_{max}\) and \(K_{M}\) depend on the compound and the metabolizing enzyme and control the speed and saturation of metabolism. We assume a single generic metabolizing enzyme, hence in our hybrid model hepatic clearance is fully characterized by the rate \(\frac{V_{max}}{K_{M}}\). An active P-gp-like transport via membrane proteins, assuming a constant protein concentration, also follows Michaelis-Menten kinetics: \[\frac{dC_{1}}{dt} =-V_{max}\frac{C_{1}}{K_{M}+C_{1}}, \tag{7}\] \[\frac{dC_{2}}{dt} =V_{max}\frac{C_{1}}{K_{M}+C_{1}}. \tag{8}\] As for metabolism, the constants \(V_{max}\) and \(K_{M}\), which control the speed and saturation of the transport, depend on the compound and the transport protein. For our purpose it is sufficient to set \(K_{M}=1\frac{\mu\text{mol}}{\text{L}}\), i.e. use the OSP default value, hence the transport is parametrized by its maximal velocity \(V_{max}\).
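To make the structure of these equations concrete, below is a minimal numerical sketch, assuming scipy, of a toy model with arterial blood, a single GIT tissue compartment, and GIT lumen dissolution and absorption (equations 1-5). All parameter values are illustrative placeholders, not values used in this work.

```python
# Toy integration of eqs. 1-5 for a single GIT compartment; all parameter
# values below are illustrative placeholders, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

Q_g, V_g, P_g = 1.0, 0.5, 2.0   # blood flow, volume, partition coefficient
K_a, K = 0.8, 0.3               # absorption rate, dissolution constant (eq. 5)
C0, Cs = 10.0, 5.0              # administered amount / volume, solubility

def rhs(t, y):
    C_art, C_g, C_lum, C_dis = y
    exchange = Q_g / V_g * (C_art - C_g / P_g)                   # eq. 1
    dC_dis = K * max(C0 - C_dis, 0.0) ** (2 / 3) * (Cs - C_lum)  # eq. 5
    dC_lum = -K_a * C_lum + dC_dis                               # eq. 4
    dC_g = exchange + K_a * C_lum                                # eq. 3
    dC_art = -exchange                                           # eq. 2, one compartment
    return [dC_art, dC_g, dC_lum, dC_dis]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0, 0.0, 0.0], max_step=0.1)
auc_24h = np.trapz(sol.y[0], sol.t)   # AUC of the arterial concentration
```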
## Appendix B Validation of property constraints In figure B1 the distributions of the predicted molecule properties of the test set are shown together with the maximal and minimal values in the surrogate training data set. All predicted molecule properties lie within the surrogate's training range, confirming the effectiveness of the penalized loss described in section 2.4.2. Note that for \(V_{max}\) and \(FU\) we used heavy-tailed distributions for generating the surrogate training data, resulting in the large range shown in figure B1. For the \(FU\) this results in unphysiological values \(>1\), for which the equations of the PBPK model are still defined. In practice, however, the property net does not predict an \(FU>1\). Furthermore, to increase the flexibility of our clearance model we increased the maximal allowed value for the GFR fraction from 1 to 5.25. ## Appendix C A posteriori surrogate validation We can validate the surrogate model a posteriori by predicting the training targets of our hybrid model using the PBPK model instead of the surrogate. Figure C2 shows the predictions obtained using the PBPK model vs. those obtained using the surrogate. The accuracy is not as good as expected from the analysis in section 2.4.1, but still sufficient for use in the hybrid model: the \(mfce\) of the surrogate (\(1.2-1.4\)) is clearly better than the \(mfce\) of the hybrid model (\(mfce\gtrsim 1.6\)). Additionally, figure C3 shows the predictions using the full PBPK model vs. the observed values. These predictions are almost as accurate as those using the surrogate model. A maximal difference of 0.24 in the \(mfce\) can be observed, and no additional features are visible. This again highlights the accuracy of the surrogate model.
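Since the surrogate and hybrid-model comparisons above are all stated in terms of the \(mfce\), a minimal sketch of a geometric-mean fold-change error is given below for reference. Note that this particular formula is an assumption on our part; the paper's exact definition is given in an earlier section not reproduced here.

```python
# Hedged sketch of a fold-change-error metric. Assumption: mfce is a
# geometric-mean fold error; the paper's exact definition may differ in detail.
import numpy as np

def fold_change_error(predicted, observed):
    """Geometric mean of the fold error, returned as a multiplicative factor:
    a value of 2 means predictions are off by a factor of 2 on average."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

print(fold_change_error([2.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # ~1.59
```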
2303.13775
GSplit: Scaling Graph Neural Network Training on Large Graphs via Split-Parallelism
Graph neural networks (GNNs), an emerging class of machine learning models for graphs, have gained popularity for their superior performance in various graph analytical tasks. Mini-batch training is commonly used to train GNNs on large graphs, and data parallelism is the standard approach to scale mini-batch training across multiple GPUs. One of the major performance costs in GNN training is the loading of input features, which prevents GPUs from being fully utilized. In this paper, we argue that this problem is exacerbated by redundancies that are inherent to the data parallel approach. To address this issue, we introduce a hybrid parallel mini-batch training paradigm called split parallelism. Split parallelism avoids redundant data loads and splits the sampling and training of each mini-batch across multiple GPUs online, at each iteration, using a lightweight splitting algorithm. We implement split parallelism in GSplit and show that it outperforms state-of-the-art mini-batch training systems like DGL, Quiver, and $P^3$.
Sandeep Polisetty, Juelin Liu, Kobi Falus, Yi Ren Fung, Seung-Hwan Lim, Hui Guan, Marco Serafini
2023-03-24T03:28:05Z
http://arxiv.org/abs/2303.13775v2
# GSplit: Scaling Graph Neural Network Training on Large Graphs via Split-Parallelism ###### Abstract Large-scale graphs with billions of edges are ubiquitous in many industries, science, and engineering fields such as recommendation systems, social graph analysis, knowledge bases, material science, and biology. Graph neural networks (GNN), an emerging class of machine learning models, are increasingly adopted to learn on these graphs due to their superior performance in various graph analytics tasks. Mini-batch training is commonly adopted to train on large graphs, and data parallelism is the standard approach to scale mini-batch training to multiple GPUs. In this paper, we argue that several fundamental performance bottlenecks of GNN training systems have to do with inherent limitations of the data parallel approach. We then propose split parallelism, a novel parallel mini-batch training paradigm. We implement split parallelism in a novel system called GSplit and show that it outperforms state-of-the-art systems such as DGL, Quiver, and PaGraph. ## 1 Introduction Graph neural networks (GNN), an emerging class of machine learning models, are increasingly adopted to analyze graph-structured data due to their superior performance in various graph analytics tasks. An input graph for a GNN can have millions of vertices and billions of edges [14]. GNNs are broadly adopted in companies such as Pinterest [35] and Twitter [8] to improve user experiences and in engineering and science domains for computer vision [23], natural language understanding [12], quantum chemistry [11], material design [39], and drug discovery [10]. A common approach to train a GNN on large-scale graphs is to use _mini-batch_ gradient descent. This approach partitions the training data into subsets called mini-batches. Each training _iteration_ calculates updates to the model parameters, called gradients, based on a different mini-batch. In GNN training, we want to learn how to compute the features of the vertices in the training set, which are called _target vertices_, based on the input features of the vertices in their k-hop neighborhood. A mini-batch consists of a subset of target vertices and a _sample_ of their k-hop neighborhood, which could be excessively large if sampling were not used (see Figure 1). Mini-batch training is commonly used in production and research GNN training systems such as DGL [30], PyTorch Geometric [7], Quiver [4], AliGraph [38], PaGraph [18], and GNNLab [34]. These systems use _data parallelism_ to execute each training iteration across multiple GPUs (see Figure 2). At each iteration, data parallelism partitions the mini-batch into _micro-batches_, which consist of a partition of the target vertices in the mini-batch and their sampled k-hop neighbors, and assigns each micro-batch to one GPU. The entire GNN model is replicated at each GPU. Each GPU then loads the input features of all the vertices in its micro-batch and computes gradients locally and independently from other GPUs. At the end of the iteration, all GPUs aggregate and apply the gradients they computed. **Limitations of Existing Work.** Unfortunately, data parallel training for GNNs is _inherently redundant_ due to the _overlaps_ between the k-hop neighborhoods of target vertices that are assigned to different micro-batches (gray area in Figure 1). This creates both data loading and computation overheads.
Loading training data to GPUs at each iteration is known to be a significant overhead in data-parallel training since each GPU must gather the input features of all the vertices in its micro-batch, which can be large [18]. This overhead is exacerbated by the overlaps across micro-batches, since several vertices are likely to appear in micro-batches assigned to multiple GPUs. The input features of these vertices are sent to multiple GPUs, which increases the communication cost. An orthogonal problem caused by overlaps across micro-batches is _redundant computation_. GNN models organize the vertices of a micro-batch sample into layers. They compute the hidden features of the vertices in a layer by aggregating the features of their neighbors in the lower layer. If the same vertex appears in multiple micro-batches assigned to multiple GPUs, these will compute their hidden features redundantly, which can be a significant overhead. **Proposed Approach.** In this paper, rather than patching the data-parallel pipeline incrementally, we introduce a new paradigm for parallel mini-batch GNN training called _split parallelism_ (see Figure 3). At each iteration, split parallelism splits the vertices of the mini-batch into multiple partitions. Unlike micro-batches, partitions do not have overlaps. Each partition is assigned to a GPU for computation and partitions are created in a way that maximizes local computation. During training, GPUs cooperate with each other to exchange lightweight intermediate results in each iteration. Split parallelism solves the two fundamental problems of data parallelism. It eliminates redundant computation because each vertex is uniquely associated to one GPU that is responsible for computing its hidden features. It also eliminates redundant data loading because each vertex belongs to the split of exactly one GPU, which is the only one responsible for loading its input features. Split parallelism builds on some ideas used in systems for full-graph training, which are designed to train the GNN on the entire graph at each iteration [16, 19, 20]. To scale to large graphs, these systems partition the input graph and exchange intermediate data across GPUs. Mini-batch training, however, presents two unique challenges and opportunities to make split parallelism efficient. First, _mini-batches change at each iteration_, since they are randomly sampled. Split parallelism splits each mini-batch into non-overlapping partitions, each corresponding to one GPU, to eliminate redundancies. We call this novel scheduling problem _split-parallel online scheduling_. The scheduling algorithm is executed on the critical path of each iteration so it must be very fast. We show how to achieve this goal by combining offline partitioning and online scheduling. In full-graph training, online scheduling is not necessary since each iteration operates on the same graph using the same schedule, which is computed offline. Data parallelism also uses the same schedule at each iteration (see Figure 2). The consequence of splitting a mini-batch into non-overlapping partitions is that split parallelism requires inter-GPU communications to complete one iteration of training, because some local partitions may have a few edges that require data from other GPUs. We implement _cooperative training_ that invokes _split-aware_ kernels and transparently supports GNN models written using the standard high-level SAGA operators offered by existing GNN training systems. 
Our implementation leverages fast GPU-GPU buses such as NVLink, when available, to speed up coordination. We present the first experimental evaluation of cooperative training in the context of mini-batch training. Our evaluation demonstrates that the inter-GPU communication required for cooperative training represents a much smaller cost than the redundancies in input data loading and computation inherent in data parallelism. Second, _mini-batches are only subsets of the whole graph and some vertices are much more likely to be included in a mini-batch than others_. This offers caching opportunities to make split parallelism efficient. Existing work on mini-batch training has shown that _caching in the GPU memory_ the input features of frequently-loaded vertices can reduce data transfers and significantly speed up training [4, 18, 34]. In full-graph training, caching is not used because the input features of _all_ vertices of a potentially large graph must be loaded to the GPUs at every iteration. Although caching is used in some data-parallel GNN training systems [4, 18, 34], split parallelism is much better suited than data parallelism to leverage in-GPU input feature caches. Data parallelism cannot take full advantage of caches because it moves input data (the micro-batches) to computation (the GPUs training independently). A GPU still needs to load all the input features in its micro-batch that are not cached locally. Split parallelism, instead, uses online scheduling to split mini-batches so that if a GPU caches the input features of a vertex, it is given the responsibility of processing those features locally. This follows the well-established principle of moving computation to data, not data to computation. By moving computation to data, split parallelism enables multiple GPUs to cache _non-overlapping subsets_ of input features and form a _distributed GPU cache_, whose size scales with the number of GPUs in the system. Input features only need to be transferred from the CPU memory to a GPU when there are _global_ cache misses, that is, when a feature is not cached by _any_ GPU. Data parallelism can take advantage of a distributed GPU cache only in systems with fast GPU-GPU buses, such as NVLink. When such a bus is available and a GPU has a local cache miss, it is faster to load input features from another GPU's memory than from the CPU memory, as done by systems like Quiver [4] and WholeGraph [33]. Without a fast GPU-GPU bus, a GPU running data-parallel training must load data through the PCIe bus regardless of whether the features are cached by another GPU or stored in the host memory, so a distributed GPU cache is not effective. Split parallelism improves data loading time independently of the system configuration because it does not require transferring cached features across GPUs. **Results.** We implement the proposed solutions in GSplit, a novel split-parallel GNN training framework targeting _synchronous multi-GPU_ training over large graphs on a single host. We compare GSplit against state-of-the-art single-host multi-GPU systems for mini-batch training: DGL [30], PaGraph [18], and Quiver [4]. GSplit consistently and significantly outperforms all these baselines, sometimes by more than one order of magnitude. Split parallelism reduces data loading time, one of the main bottlenecks in the end-to-end training pipeline, by a large margin.
Its training time is competitive with data-parallel training because the communication cost of cooperative training is balanced by the gains of eliminating computational redundancy. Online scheduling is fast enough not to become a bottleneck, yet it produces partitions that are good enough to deliver consistent speedups over the baselines. This paper makes the following key contributions: * We introduce split parallelism, a novel paradigm for mini-batch GNN training that maximizes data access locality and avoids the redundant computation and data transfer overheads of data parallelism. * We design a fast online scheduling algorithm to obtain per-GPU computation graphs from each mini-batch at each iteration. * We implement cooperative training to support GNN models written using standard SAGA operators. We present the first experimental comparison of cooperative mini-batch training and data parallel training. * We implement GSplit, an end-to-end multi-GPU GNN training system based on split parallelism with efficient split-aware operators. ## 2 Background and Motivations This section first provides a brief background on GNN training and then discusses the limitations of existing approaches to motivate this work. ### Graph Neural Network Training **Mini-batch training on GNN models.** Stochastic Gradient Descent (SGD) trains on a mini-batch of the training data at each iteration. In GNNs, a mini-batch consists of a subset of vertices of the graph, which are called _target vertices_, and a _sample_ of their k-hop neighborhood (see Figure 1). In data-parallel training, micro-batches are the partitions of a mini-batch that are trained in parallel, one per GPU, in one iteration. They are obtained by sampling the k-hop neighborhood of a partition of the target vertices in the mini-batch. A GNN model is defined as a sequence of _GNN layers_.1 In the forward propagation, each GNN layer \(l>0\) aggregates and transforms the features of the vertices in the layer \(l-1\) of the sample and produces the features of the vertices in the layer \(l\) (see Figure 1). The last GNN layer computes the features of the target vertices, which are used to compute the loss. In the backward propagation, the layers are executed in reverse order to compute gradients. Footnote 1: In the following, the term “layer” will refer to GNN layers, not to neural network layers, unless otherwise stated. **GNN layers.** Each GNN layer \(l\) calculates the features of \(V^{(l)}\), the vertices in layer \(l\), using features of \(V^{(l-1)}\), the vertices in layer \(l-1\), and sometimes also features of \(E^{(l)}\), the edges between vertices in layers \(l-1\) and \(l\). The implementation typically follows a message-passing framework that executes a scatter function, a message function, a gather function, and an update function sequentially. The _scatter function_\(\sigma^{(l)}\) prepares an _edge tensor_ of edge-wise vectors to compute messages. The edge tensor combines, for each edge \(e(u,v)\in E^{(l)}\), vectors for the source and destination vertices, typically the feature vectors \(h_{u}^{(l-1)}\) and \(h_{v}^{(l-1)}\), and optionally an edge feature vector \(w_{e}^{(l-1)}\): \[\sigma_{e}^{(l)}=[h_{u}^{(l-1)},h_{v}^{(l-1)},w_{e}^{(l-1)}],\quad e(u,v)\in E^{(l)}.\] This function does not perform any computation: it merely collects and combines sparse data from different vectors based on the structure of the graph.
The _message function_\(\phi^{(l)}\) produces a message tensor \(M\) containing a message for each edge in \(E^{(l)}\). It typically takes as input an edge tensor from the scatter operator and applies \(\phi^{(l)}\) on each edge \(e\) to build a message: \[m_{e}^{(l)}=\phi^{(l)}(\sigma_{e}^{(l)}),\quad e(u,v)\in E^{(l)}.\] The \(\phi\) function is defined by the user and can be, for example, a Multi-Layer Perceptron (MLP). In some simpler GNN models, the message function uses only the source vector of each edge, so it does not require building a full edge tensor using a scatter. In the example of Figure 1, \(\phi\) outputs the features of the source vertex for each edge. The _gather_ function \(\oplus\) takes a message tensor as input and aggregates all messages to the same destination vertex using a commutative and associative aggregation function such as sum or mean (see the Aggr block in Figure 1). The entry in the output tensor for a vertex \(v\in V^{(l)}\) is: \[m_{v}^{(l)}=\oplus_{e(u,v)\in E^{(l)}}\,m_{e}^{(l)},\quad v\in V^{(l)}.\] Finally, the _update_ function \(\psi\) computes a new hidden feature for a vertex \(v\) based on the aggregated message to \(v\). This can also be an arbitrary neural network, similar to the message function \(\phi\) (see the MLP block in Figure 1): \[h_{v}^{(l)}=\psi^{(l)}(h_{v}^{(l-1)},m_{v}^{(l)}),\quad v\in V^{(l)}.\] We call a chain of Scatter, Apply-message, Gather, Apply-update operations a _SAGA_, following the terminology introduced by NeuGraph [19].
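As a concrete single-device illustration of one SAGA chain, the following sketch, assuming PyTorch, implements the four functions above with a sum gather; the helper names and the block-style layout (destination vertices first) are our own illustrative assumptions, not GSplit's kernels.

```python
# Minimal sketch of one SAGA chain (scatter, apply-message, gather,
# apply-update) for a single GNN layer on one device. Illustrative only.
import torch

def saga_layer(h_prev, edge_src, edge_dst, num_dst, phi, psi):
    """h_prev: [num_src, d] features of the layer l-1 vertices.
    edge_src/edge_dst: [num_edges] endpoint indices of E^(l).
    phi, psi: user-defined message and update networks.
    Assumes destination vertices occupy the first num_dst rows of h_prev."""
    # Scatter: build the edge tensor [h_u || h_v] for every edge e(u, v).
    edge_tensor = torch.cat([h_prev[edge_src], h_prev[edge_dst]], dim=1)
    # Apply-message: compute a message per edge.
    messages = phi(edge_tensor)                       # [num_edges, d_msg]
    # Gather: sum-aggregate messages by destination vertex.
    agg = torch.zeros(num_dst, messages.shape[1], device=messages.device)
    agg.index_add_(0, edge_dst, messages)
    # Apply-update: combine previous features with aggregated messages.
    return psi(torch.cat([h_prev[:num_dst], agg], dim=1))
```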
### Limitations of Data Parallelism In data parallelism, micro-batches often overlap, resulting in redundant data loading and computation. Suppose that the mini-batch of Figure 1 is the union of two micro-batches, one for each of the target vertices. The two micro-batches are associated with two different GPUs. The triangle in the figure shows overlaps in the k-hop samples of the different target vertices. The input features of the layer-0 vertices in the overlap need to be loaded by both GPUs. Similarly, the hidden features of the layer-1 vertex in the overlap need to be computed by both GPUs. We now discuss the overheads resulting from these redundancies in detail. **Issue 1: Redundant data loading.** Data loading transfers training data (including the micro-batch graph structures and the involved input vertex features) to GPUs for computation at each iteration. Data-parallel training systems cache input vertex features on GPUs to mitigate the data loading bottleneck. Some work, such as PaGraph [18] or GNNLab [34], focuses on configurations where GPUs are connected with each other through the same bus as the CPU (the PCIe bus). These systems keep independent per-GPU caches. To decide which vertices to cache on each GPU, they logically partition the training vertices across GPUs. They then construct a subgraph for each partition as the union of the k-hop neighborhoods of the training vertices. Finally, they cache at each GPU a subset of the vertices in each partition. Because the k-hop subgraphs have large overlaps, the caches of the GPUs also overlap. High-end multi-GPU systems have fast GPU-GPU buses such as NVLink [3], which are faster than CPU-GPU buses like PCIe. Quiver is a recent GNN training system that leverages these hardware features to partition cached input features across multiple GPUs [4]. Whenever a GPU needs to load the features of a vertex that is not cached locally, it loads them from the cache of another GPU if possible, and from the CPU memory otherwise. This strategy ensures that caches have no overlaps. Both strategies make data loading cheaper, but they still suffer from significant data loading costs by redundantly loading the same features on multiple GPUs. Table 1 quantifies the data transfer volume and epoch time overhead in prior data-parallel systems. The experiments consider single-server systems with 4 GPUs connected with a PCIe bus for PaGraph and NVLink for Quiver (see Section 5 for the detailed experimental setup). The table considers different cache sizes, expressed as the fraction of the input features that are stored in the cache by each GPU. In Quiver, GPUs cache non-overlapping sets of vertex features. These systems can cache the input features for the entire graph using a per-GPU cache size of 25%, so we consider that as the maximum cache size in this experiment. The size of the unique input features in an average mini-batch represents the minimal amount of data transferred without using caching. PaGraph and Quiver load much more data than that, even with caching, because in data parallelism different GPUs must load the same input features redundantly. When comparing the two systems for the same cache size in Table 1, caching is more beneficial in Quiver because it uses a distributed cache, whereas PaGraph keeps independent per-GPU caches. As the size of the cache grows, Quiver transfers less data from the CPU than PaGraph. Quiver, however, still needs to perform redundant loads from the CPU, and some of the CPU-GPU data transfers are replaced by GPU-GPU transfers due to cache misses. Inter-GPU transfers are faster but not free and still make up a high fraction of the epoch time. These results show that redundant data loading is a fundamental problem in data parallelism, even when using a distributed cache and fast GPU-GPU buses. Split parallelism eliminates this problem by design. \begin{table} \begin{tabular}{|l|l||r|r|r|} \hline **System** & **Cache \%** & **0\%** & **10\%** & **25\%** \\ \hline \hline \multirow{4}{*}{PaGraph} & CPU-GPU & 13682 & 10829 & 7404 \\ & GPU-GPU & 0 & 0 & 0 \\ & \% load/epoch & 79\% & 78\% & 68\% \\ \hline \multirow{3}{*}{Quiver} & CPU-GPU & 19267 & 6334 & 0 \\ & GPU-GPU & 0 & 9698 & 14448 \\ \cline{1-1} & \% load/epoch & 95\% & 71\% & 38\% \\ \hline \end{tabular} \end{table} Table 1: **Redundant data loading.** Average data volume in MB loaded by all GPUs per iteration and percentage of loading time over epoch time. (Dataset: ogbn-products. Model: GraphSage. Mini-batch size: 4096.) **Issue 2: Redundant computation.** After loading all the input features of a micro-batch, each GPU runs the GNN model to compute the hidden features of the vertices in the upper layers. The overlaps among micro-batches create computation overheads at this stage. The target vertices of the micro-batches for the same iteration are non-overlapping sets. However, the samples of the k-hop neighborhoods of those vertices can have large overlaps. \begin{table} \begin{tabular}{|l||c|c|c|} \hline **Dataset** & **4x Micro** & **1x Mini** & **\% redundancy** \\ \hline \hline ogbn-products & 195M & 155M & 25.5\% \\ ogbn-papers100M & 327M & 131M & 148.9\% \\ amazon & 473M & 263M & 79.2\% \\ \hline \end{tabular} \end{table} Table 2: **Redundant computation.** Total number of edges computed over one epoch when each mini-batch is sampled as 4 micro-batches of size 1024 (4x Micro) vs. 1 mini-batch of size 4096 (1x Mini).
For this reason, the hidden features of the same vertex at the same layer can be computed by multiple GPUs, resulting in redundant computation. Note that this computational overhead is additional to the data transfer overhead discussed previously. Table 2 evaluates the degree of computational redundancy in data-parallel training in terms of the total number of edges processed over one epoch. With 4 GPUs, data parallelism creates 4 separate micro-batches, which together have the total number of edges reported in the table. Instead of creating multiple independent and overlapping micro-batches, split parallelism generates a single mini-batch for all the target vertices and splits it without overlaps, avoiding redundant computation. The total number of edges to process with this approach is substantially lower, up to 2.5x for ogbn-papers100M (see Table 2). ## 3 Split-Parallel Training ### Overview Each split-parallel training iteration consists of four main steps: sampling, online scheduling, data transfer, and cooperative training. Figure 4 shows an overview of one iteration, excluding the preliminary sampling step that produces a mini-batch and the final synchronous gradient aggregation and parameter update step. In split parallelism, _sampling_ returns a single mini-batch sample for all the target vertices in the mini-batch. _Split-parallel online scheduling_ (Step 1 in Figure 4) splits the mini-batch into _local splits_ (e.g., \(S_{1}\) and \(S_{2}\) in Figure 4), one for each GPU. Unlike micro-batches in data-parallel training, the local splits are non-overlapping partitions of the mini-batch sample. Layer-0 vertices are assigned based on the content of the GPU caches. Each GPU is responsible for loading and processing the input and hidden features of the vertices in its assigned split. The hidden features of vertices at the other layers are computed by only one GPU, which is selected to maximize data access locality. Splitting eliminates all redundant input data loading and hidden feature computation. The scheduling algorithm must be very fast because it is run online at each iteration on a different mini-batch. The step following scheduling is _data transfer_ (Step 2 in Figure 4). Split parallelism optimizes this step because it only transfers the features that are missing from the combined caches of all GPUs, and it transfers them to only one GPU. After data transfer, the system invokes the training kernels of each local split on its corresponding GPU. Finally, _cooperative training_ (Step 3 in Figure 4) executes the local splits to train the model parameters. This step can involve inter-GPU communication to complete one iteration of training because some local splits may have a few edges that require data from other GPUs. Cooperative training uses _split-aware_ implementations of the message-passing functions of Section 2.1 to transparently support GNN models written using the standard high-level SAGA operators offered by existing GNN training systems. The GNN model in Figure 4 has two layers, each having edges across splits. Both layers require inter-GPU communication (Steps 3.1-2), which we discuss in more detail in Section 3.3.2. After this coordination, each GPU is able to compute an embedding for the target vertices in its local split, compute the loss, and start propagating partial gradients backward. This requires GPUs to send back the partial gradients over the hidden features they received in the forward pass (Steps 3.3-4).
At the end of the backward pass, each GPU has gradients for the model parameters (\(g_{1},g_{2}\)). As we will discuss shortly and show in the evaluation, the inter-GPU communication required for cooperative training is a cost that is comparable to, and typically lower than, the redundant input data loading and redundant computation costs of data parallelism. Figure 4: Example of a Training Iteration in Split Parallel Training (GraphSage). ### Split-Parallel Online Scheduling Split-parallel online scheduling splits a mini-batch into per-GPU local splits by assigning each vertex of a mini-batch sample to one local split. The scheduling algorithm needs to meet the following two requirements: 1. It should not become a performance bottleneck of the end-to-end training pipeline. 2. It should minimize the number of edge cuts in order to reduce the inter-GPU communication during forward and backward computation. The cheapest possible splitting algorithm would be to randomly assign each vertex in a mini-batch to a split. Moreover, a uniform random distribution would ensure that all splits are perfectly balanced. However, this would produce a bad edge cut, since the probability that the two endpoint vertices of an edge are in the same split is only \(1/g\), where \(g\) is the number of splits. A large edge cut increases the communication cost of cooperative training. On the other hand, min-cut graph partitioning algorithms such as Metis [17] could meet the second requirement, but applying them to every mini-batch is very time-consuming. To meet both requirements, we propose a two-step procedure that combines online splitting with offline partitioning and caching. The first step, offline partitioning, applies min-cut graph partitioning algorithms to the full input graph. The output is a vertex partitioning map, which assigns each vertex of the entire input graph to one GPU, and per-GPU cache sets, which specify the set of vertices cached by each GPU, if caches are used. The second step, online splitting, slices each mini-batch into local splits by looking up the vertex partitioning map and the per-GPU cache sets. The lookup operation is much more lightweight than applying a min-cut graph partitioning algorithm. Specifically, the online splitting algorithm assigns the vertices of the mini-batch to GPUs by looking up the vertex partitioning map. All the vertices assigned to a specific GPU and their induced computation graph form the local split for that GPU. GPUs are responsible for the computation only in their local splits. If the destination vertex \(v\) belongs to a different split than the source vertex \(u\), the algorithm adds a special _reference vertex_ for \(v\) to the split of \(u\). The online splitting algorithm also looks up the cache sets to determine which layer-0 vertices must be loaded. If a layer-0 vertex is assigned to a GPU but its feature is not cached, its features are loaded only by that GPU at the beginning of the iteration. In the example of Figure 4, the mini-batch sample is split into two local splits \(S_{1}\) and \(S_{2}\) in Step 1. Vertex \(v_{1}\) is included in \(S_{2}\) as a reference vertex, since it belongs to \(S_{1}\) but has an incoming edge from a source vertex in \(S_{2}\). Similarly, \(v_{2}\) is included in \(S_{1}\) as a reference vertex. Using full-graph partitions to split mini-batches, which are subgraphs of the input graph, does not necessarily result in splits as balanced and edge cuts as low as running graph partitioning on each mini-batch. However, it is a good compromise to keep both the scheduling time and the training time low, as we show in our evaluation.
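To make the lookup-based second step concrete, the following is a minimal sketch of online splitting in Python; the data structures are illustrative assumptions, not GSplit's internal representation.

```python
# Sketch of online splitting: assign sampled vertices to local splits via the
# offline vertex-partition map and add reference vertices for cut edges.
def online_split(sampled_edges, partition_map, num_gpus):
    """sampled_edges: iterable of (src, dst) pairs in the mini-batch sample.
    partition_map: dict vertex -> GPU id, computed offline (e.g., with Metis)."""
    splits = [{"edges": [], "ref_vertices": set()} for _ in range(num_gpus)]
    for src, dst in sampled_edges:
        owner = partition_map[src]          # the GPU owning the source vertex
        splits[owner]["edges"].append((src, dst))
        if partition_map[dst] != owner:
            # Cross-split edge: the owner of src computes a partial result and
            # later shuffles it to the GPU owning dst, via a reference vertex.
            splits[owner]["ref_vertices"].add(dst)
    return splits
```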
### Cooperative Training Splitting a mini-batch into non-overlapping local splits results in edge cuts across splits, which require coordination across GPUs in each training iteration. In this paper, we present the first implementation and experimental evaluation of cooperative training for mini-batch GNN training. We transparently support GNN models specified using the existing message-passing primitives used by practitioners to implement GNN models (see Section 2.1). Each GNN layer runs one or more SAGAs. Two of the message-passing functions, scatter and gather, operate on edges and thus need to be aware of edges that connect vertices across different local splits. #### 3.3.1 Split-Aware Scatter and Gather There are two possible approaches to make scatter and gather split-aware. Recall that scatter builds an edge tensor by combining the source and destination vectors of each edge, while the gather operator aggregates all messages to each destination vertex. The first approach implements a source-to-destination scatter, which sends the source vertices' features to the destination of each edge. In the backward pass, gradients flow in the opposite direction. The advantage of this approach is that all the subsequent SAGA operators can be executed locally until the end of the forward pass. This is also the default approach adopted in full-graph training systems [19]. The second approach implements a destination-to-source scatter, which builds the edge tensor by sending the destination vertex vector to the source vertices. It can significantly reduce the communication volume in mini-batch training, since the number of destination vertices is usually much smaller than the number of source vertices. However, it requires a second shuffle for the gather operator, since the gather function requires aggregating data by destination. Our empirical observation is that more rounds of shuffles with less data are usually more costly than a one-time shuffle with more data. Therefore, split parallelism implements the source-to-destination scatter in order to minimize the number of shuffles. An example using the source-to-destination scatter schedule is shown in Figure 5. At the beginning of a GNN layer \(l\), split parallelism has partitioned the hidden feature tensor \(H^{(l-1)}\) by vertex across GPUs based on the splits. In this example, we compute the hidden features of a vertex \(v\) at layer \(l\). Vertex \(v\) is in the split of GPU 1, but some of its neighbors are on GPU 2, so it is added to the split of GPU 2 as a reference vertex. The scatter operation builds an edge tensor that combines source and destination features for each edge. The tensor is partitioned by destination vertex: the scatter sends the source vertex features for all incoming edges incident to \(v\) at layer \(l\) to GPU 1, which is responsible for \(v\). The edge-wise message function computes all messages to \(v\) locally on GPU 1, so the gather function only aggregates local messages. Finally, the update function computes the new hidden feature vector of \(v\). It is possible to optimize this schedule for models that do not need to build a full edge tensor to obtain messages, as we discuss shortly when considering GraphSage.
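Before turning to concrete models, here is a minimal sketch of the inter-GPU shuffle underlying a split-aware source-to-destination scatter, assuming PyTorch with an initialized torch.distributed process group (NCCL backend) and CUDA tensors; the bucketing logic is illustrative, not GSplit's kernel.

```python
# Sketch of a split-aware source-to-destination scatter: each rank sends the
# source-vertex features of its cross-split edges to the rank owning the
# destination vertex. Assumes torch.distributed is initialized (NCCL backend).
import torch
import torch.distributed as dist

def shuffle_boundary_features(h_local, edge_src, dst_rank, world_size):
    """h_local: [n_local, d] features owned by this rank.
    edge_src: [num_cut_edges] local index of the source of each cut edge.
    dst_rank: [num_cut_edges] rank that owns each edge's destination."""
    send_bufs = [h_local[edge_src[dst_rank == r]].contiguous()
                 for r in range(world_size)]
    # Exchange per-rank counts first, so receive buffers can be sized.
    counts = torch.tensor([b.shape[0] for b in send_bufs], device=h_local.device)
    recv_counts = torch.empty_like(counts)
    dist.all_to_all_single(recv_counts, counts)
    recv_bufs = [torch.empty(int(c), h_local.shape[1],
                             device=h_local.device, dtype=h_local.dtype)
                 for c in recv_counts]
    dist.all_to_all(recv_bufs, send_bufs)   # one shuffle per layer (forward)
    return torch.cat(recv_bufs)             # remote source features, by sender
```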
#### 3.3.2 Cooperative Training Examples We now discuss implementations of split-parallel training considering two popular and diverse GNN models: GraphSage [13] and Graph Attention Networks (GAT) [27]. Split parallelism runs the two models unmodified: it simply replaces the local implementation of scatter with the split-aware implementation described previously. **GraphSage.** A GraphSage layer builds messages directly from the input features of the source vertices of the edges. The simplicity of its message function allows us to pre-aggregate feature vectors before the scatter operation in order to reduce the communication volume. Figure 4 shows an example of cooperative training using a GraphSage model [13] with two layers. In the first layer, vertex \(v_{1}\) is in split \(S_{1}\) but has incoming edges from vertices in split \(S_{2}\). GPU 2 computes the partial aggregated message \(\hat{m}_{v_{1}}^{(1)}\) and then executes a shuffle (i.e., a source-to-destination scatter) to send \(\hat{m}_{v_{1}}^{(1)}\) to GPU 1 (see Step 3.1). GPU 1 computes the final aggregation to obtain the hidden features of \(v_{1}\) at layer 1. GPU 1 also locally computes the hidden features \(h_{v_{2}}^{(1)}\). In the second layer, \(h_{v_{2}}^{(1)}\) on GPU 1 is scattered to GPU 2 to compute messages for \(v_{3}\) (see Step 3.2). At the end of layer 2, the GNN computes the loss and starts the backward propagation. The GPUs need to send the gradients for the activations they received back to the sender. In the example of Figure 4, GPU 1 sends \(d\hat{m}_{v_{1}}^{(1)}\) and GPU 2 sends \(dh_{v_{2}}^{(1)}\) (see Steps 3.3-4 in Figure 4). **GAT.** The GAT model computes attention scores for each edge before computing the messages. The attention score of an edge \(e(u,v)\) at layer \(l\) is computed as \[\alpha_{(u,v)}^{(l)}=\exp(e_{(u,v)}^{(l-1)})/\sum_{(u^{\prime},v)\in E^{(l)}}\exp(e_{(u^{\prime},v)}^{(l-1)}),\] where \(e_{(u,v)}^{(l-1)}\) is an attention coefficient for the edge, computed using an attention function \(a\) over the features of the source and destination, \(h_{u}^{(l-1)}\) and \(h_{v}^{(l-1)}\). A GAT layer \(l\) can be implemented using two SAGA operations. The first SAGA operation computes the denominator of \(\alpha_{(u,v)}^{(l)}\) for all the incoming edges of each vertex \(v\) at layer \(l\). It requires a shuffle operation using split-aware scattering. The second SAGA computes the features of the destination vertices and is completely local. Specifically, in the first SAGA, a split-aware scatter first collects the source and destination features for all incoming edges of \(v\), which requires a shuffle. The message function then computes, for each local edge, its term \(\exp(e_{(u,v)}^{(l-1)})\) of the denominator. Next, a gather aggregates the messages for each destination vertex to compute the denominator of \(\alpha_{(u,v)}^{(l)}\). The GAT layer then proceeds with the second SAGA to compute the feature of each destination vertex \(v\). A local scatter builds edge tensors that include the features of the source and destination and the denominator of \(\alpha_{(u,v)}^{(l)}\). The message function calculates \(\alpha_{(u,v)}^{(l)}\) for each edge using the attention function \(a\) and the precomputed denominator, and obtains the message. Finally, a gather aggregates the messages and the update function computes the hidden features of the vertices in layer \(l\). In summary, both GraphSage and GAT perform up to two shuffles per layer, one in the forward and one in the backward propagation.
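The GraphSage pre-aggregation described above can be sketched as follows, assuming PyTorch and a shuffle primitive like the one sketched in Section 3.3.1; the layer structure and helper signatures are illustrative assumptions.

```python
# Hedged sketch of a cooperative GraphSage-style layer with pre-aggregation
# (Step 3.1 in Figure 4): partial sums for reference vertices are computed
# locally and shuffled to the owning rank. Illustrative, not GSplit's kernels.
import torch

def cooperative_sage_layer(h, local_edges, ref_edges, shuffle, linear):
    """h: [n_local, d] features owned by this rank.
    local_edges: (src, dst) index tensors of fully local edges.
    ref_edges: (src, ref_id) edges whose destination lives on another rank;
               ref_id indexes the local table of reference vertices.
    shuffle: assumed helper that exchanges partial sums between ranks and
             returns, for each locally owned vertex, the sum of partial
             aggregates computed on the other ranks."""
    n, d = h.shape
    agg = torch.zeros(n, d, device=h.device)
    src, dst = local_edges
    agg.index_add_(0, dst, h[src])            # aggregate local neighbors
    rsrc, rid = ref_edges                     # assumed non-empty for brevity
    partial = torch.zeros(int(rid.max()) + 1, d, device=h.device)
    partial.index_add_(0, rid, h[rsrc])       # partial sums for remote owners
    agg += shuffle(partial)                   # add partials from peer ranks
    return torch.relu(linear(torch.cat([h, agg], dim=1)))
```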
## 4 Implementation We implemented split-parallel training in GSplit, a multi-GPU GNN training system supporting synchronous training. GSplit is implemented on top of DGL 0.8.2 and Torch 1.8.0 with CUDA 11.1. The implementation consists of about 10k lines of code. The sampling and splitting code is in C++, the remaining code in Python. **Sampling and scheduling.** GSplit runs sampling and splitting on the CPU to easily scale to large graphs that do not fit in GPU memory. It uses one thread to randomly shuffle the set of training nodes and create a set of target vertices for each mini-batch. Sampling and scheduling are performed using multiple _sampler_ processes, each working independently in parallel on one mini-batch to sample it and split vertices and edges as they are sampled. It also uses one _trainer_ process per GPU, each responsible for invoking kernels on that GPU. Figure 5: A SAGA computing the hidden features of vertex \(v\) at layer \(l\) over two GPUs using _source-to-destination_ scatter. In split-parallel training, all GPUs must work on splits of the same mini-batch at the same time. GSplit uses one trainer as a leader, which selects which mini-batch to process next among the ones currently complete. To avoid making the leader a bottleneck, GSplit separates the data and control paths. Workers write mini-batches into shared memory and send a handle to the leader. The leader picks one handle and instructs all other trainers to operate on that mini-batch. **Partitioning and caching policy.** GSplit can use any offline partitioning and caching policy. By default, it uses Metis for offline graph partitioning [17]. After partitioning the graph offline, GSplit caches the highest-degree vertices of each partition in the corresponding GPU, unless the user specifies a different policy. It allows the user to decide how much GPU memory should be dedicated to caching. **Cooperative training.** The cooperative training we discussed requires a split-aware version of the scatter operator. It uses fast direct GPU-GPU buses such as NVLink if available in the system, or the PCIe bus otherwise. Shuffles use data coalescing so that a GPU sends at most one message to each other GPU. Vertices in the same local split have contiguous ids, so that the online scheduling algorithm and our split-aware operators can find the local split of a vertex using a simple range check. **Integration with DGL and Torch.** GSplit implements sampling, split-parallel online scheduling, feature extraction, feature loading, and feature caching from scratch. It runs all GPU-local GNN operators by invoking unmodified DGL. The message and update operators are fully local, whereas the split-aware scatter and gather functions interleave local steps, executed by invoking DGL, and shuffle steps, which are implemented by GSplit. GSplit uses NCCL to implement inter-GPU peer-to-peer communication and exploits NVLink if available. It invokes NCCL through Torch. GSplit also uses Torch for gradient aggregation and model updates. ## 5 Evaluation ### Experiment Settings **Hardware setup.** We run our experiments on two types of hosts, both with 4 GPUs but with different GPU-GPU interconnects. The first host type, which we call PCIe, has four NVIDIA GeForce RTX 3090 Ti GPUs, each with 24 GB of memory, and four Intel(R) Xeon(R) Silver 4214R CPUs @ 2.40GHz, each with 12 cores, and 192 GB RAM. The GPUs in this host are connected using a PCIe 3.0 bus. The second host type, which we call NVLink, is an AWS EC2 P3.8xlarge instance.
It has four NVIDIA V100 GPUs with 16GB memory and Xeon E5-2686 v4 CPUs @ 2.70GHz, with 18 cores and 244 GB RAM. GPUs are connected to the CPU with a PCIe 3.0 bus and with each other via NVLink. **Datasets.** We use the three datasets listed in Table 3. Two of the datasets are from the Open Graph Benchmark (OGB), a standard benchmark for GNN training [1]. We use the two largest graphs in the benchmark: products and papers100M. We also use the Amazon dataset from [36]. **GNN models.** We consider two popular and diverse GNN models, which we described in Section 3.3.2: GraphSage [13] and GAT [27]. Both GraphSage and GAT perform up to two shuffles per layer. We use the standard neighbor sampling algorithm, with a fanout of 20 and three hops. We use a default hidden size of 16, as used in [13, 27], and a batch size of 4096. **Baselines.** All state-of-the-art systems for mini-batch multi-GPU GNN training use data-parallel training. We use three systems as baselines: DGL, PaGraph, and Quiver. * **DGL** (Deep Graph Library) is a standard library for GNN training [2]. We use DGL version 0.8.2, the same version we use as a component of GSplit. DGL does not use GPU caching. * **PaGraph** uses GPU caching to reduce data transfer overheads [18]. PaGraph always loads missing input features from the host memory through the CPU-GPU bus (PCIe). We run the publicly available implementation of PaGraph, updated to run with DGL 0.8.2. * **Quiver** is a recent GNN training system that leverages fast direct GPU-GPU buses like NVLink [4] to reduce data loading overhead. Quiver partitions the input features across GPU caches. It loads missing features from other GPUs' caches whenever possible. All systems use CPU-based sampling to scale to graphs that do not fit in GPU memory. They all run 32 sampler processes in parallel with training. GSplit uses the same degree-based caching policy as PaGraph and Quiver for consistency. ### Running Time Comparison We compare the per-epoch running times of all systems. All systems except DGL make use of the local cache. Sampling, and slicing for GSplit, happen in parallel with training. Therefore, we break down the epoch time of a trainer process into two sequential steps: loading time and training time. We compare the performance of the different GNN training systems on our two hosts, which use different buses. \begin{table} \begin{tabular}{|l||c|c|c|} \hline **Dataset** & **\# Nodes** & **\# Edges** & **\# Feat** \\ \hline \hline ogbn-products (PR) & 2.4M & 62M & 100 \\ ogbn-papers100M (PA) & 111M & 1.6B & 128 \\ Amazon (AM) & 1.56M & 168M & 200 \\ \hline \end{tabular} \end{table} Table 3: Datasets used for the evaluation. #### 5.2.1 Performance using PCIe We first consider the PCIe host. We use DGL as the baseline in configurations without a GPU cache and PaGraph in configurations with a cache. We do not run Quiver in this setup since its distributed caching strategy requires NVLink. Figure 6 reports the epoch time comparison. Overall, GSplit outperforms both baselines significantly, sometimes by one order of magnitude. We now discuss the impact of split parallelism in terms of loading and training time. **Loading time.** The most important factor in this speedup is the reduction in loading time, which is a significant bottleneck in the baselines. When each GPU caches 25% of the graph or more, the combined distributed cache contains all the features. The scheduler schedules computation based on the placement of the cached features, so there are no cache misses.
GSplit only has to load the computation graph, and this results in the lowest loading time. DGL has the highest loading time because it does not use caching. PaGraph's loading time is lower than DGL's thanks to caching. However, it still has a much higher loading overhead than GSplit for two reasons. First, PaGraph maintains an independent cache at each GPU, with large overlaps among the caches. Unlike GSplit, it can only cache a subset of the input features. Second, it loads data redundantly because it uses data parallelism, an overhead GSplit avoids. Figure 6 also shows the effect of using different cache sizes. We consider a cache size of up to 25%, which is where GSplit achieves full caching. GSplit can reduce loading time compared to both baselines even when the distributed cache does not contain all the features. Consider for example the 0% case, where caching is not used. In this case, all the input features of the mini-batch need to be loaded from the CPU memory. However, GSplit loads each feature vector to only one GPU, whereas the other data-parallel systems need to perform redundant loads. **Training time.** GSplit does not show large speedups in terms of training time on the PCIe host. On the one hand, it has a lower computation cost than the other systems because it eliminates redundant computation. The effect is particularly visible for the PA and AM datasets, which have larger overlaps among micro-batches (see Table 2). On the other hand, it has the additional communication and synchronization cost of cooperative training, which the baseline systems do not have. Eliminating redundant computation does not always entirely offset this cost on the PCIe host, which has slower inter-GPU communication. However, the training time of GSplit is still in line with the baseline systems. As expected, the cache size has no significant impact on training time, as shown in Figure 6. Figure 6: Epoch time (in seconds) on the PCIe host for different GPU cache sizes. Speedups are over DGL. Figure 7: Epoch time (in seconds) on the NVLink host for different GPU cache sizes. Speedups are over DGL. #### 5.2.2 Performance using NVLink In the experiments on the NVLink host, we include Quiver since its distributed cache is designed to leverage the NVLink bus and it is a superior baseline to PaGraph. Figure 7 reports the epoch time comparison. GSplit shows large speedups of up to one order of magnitude in this configuration too, this time for both loading and training. **Loading time.** As before, the loading time is strongly influenced by the size of the GPU caches. In the 25% cache case, both Quiver and GSplit can cache the input features of the entire graph thanks to their distributed cache. Neither system needs to load input features from the CPU memory. However, Quiver replaces loads from the CPU memory with loads from other GPUs' memory, as shown in Table 1. These data transfers are faster thanks to NVLink, so the data loading time is significantly reduced compared to DGL. However, they are still not free and represent a low but not negligible cost. GSplit can take advantage of the caches much more effectively than Quiver, since it does not require loading input features at all. The drawback of redundant data transfers is particularly evident when the cache size is lower than 25%. In this case, both Quiver and GSplit need to load features from the CPU memory. The loading time of Quiver grows significantly and becomes a major bottleneck.
In contrast, GSplit's loading time also grows, but much less significantly. **Training time.** GSplit shows significant speedups compared to DGL and Quiver in terms of training time because it eliminates redundant computation. The additional communication cost of cooperative training is greatly reduced thanks to NVLink, leading to a clearly positive net gain, particularly for PA and AM, the two datasets with larger computational redundancy. As in the previous case, the training time of GSplit is not affected by the size of the cache, as shown in Figure 7. #### 5.2.3 Effect of Mini-Batch Size We now consider the effect of the mini-batch size on the running time. For these experiments, we consider a cache size of 25%. The results are shown in Figures 8 and 9. As the size of the mini-batch grows, the epoch time for all systems decreases since the system performs fewer training iterations. GSplit achieves consistent speedups regardless of the mini-batch size and on both hosts. It tends to do slightly better on larger batch sizes since they present larger overlaps among micro-batches. ### Online Scheduling Evaluation **Cost of Online Splitting.** Split parallelism requires an additional step in the training pipeline executed at each iteration: online splitting, which occurs after a mini-batch sample is produced and before training starts. It is important to ensure that this step does not become a performance bottleneck. We now perform a micro-benchmark to evaluate the throughput of sampling and splitting in GSplit compared to DGL. We run a single thread for both DGL and GSplit on the PCIe host and measure the time it takes to produce all the data required for a training iteration, considering a system with four GPUs. A data-parallel system like DGL consumes four micro-batch samples per training iteration, each for one fourth of the target vertices in the mini-batch. A split-parallel system like GSplit consumes four splits of a single mini-batch. Even though different samples are sampled in parallel, our single-thread experiment estimates the throughput at which sampling can operate. The results are shown in Table 4. Splitting adds some overhead to sampling: for each vertex, we need to check which offline partition it belongs to and whether it is cached. DGL does not perform splitting or caching, so it does not have this overhead. However, sampling for split-parallel training also benefits from the elimination of redundancy. Instead of sampling four independent and potentially overlapping micro-batches, a split-parallel sampler only needs to sample one larger mini-batch. The combined effect of these two factors is that the overhead of online splitting is at most 74% of the time spent on sampling. **Offline-Online Split Quality.** GSplit uses a combination of offline partitioning and online splitting to reduce the edge cut. Previous experiments have shown that this solution achieves both high training performance and low splitting time. Here, we examine the split quality by measuring _edge skew_ and _percentage of local edges_. Specifically, _edge skew_ is the maximum number of edges assigned to a split in an iteration minus the minimum, divided by the average (i.e., (max - min)/avg). It aims to capture whether the computation workloads are balanced across GPUs. A large edge skew means the local splits are not balanced. Splitting a mini-batch in a random uniform way (called _random uniform partition_) would result in zero edge skew and thus perfect load balancing.
However, it could lead to many edge cuts and thus high inter-GPU communication overhead. A _local edge_ is an edge whose source and destination vertex are in the same split. Otherwise, it is remote and introduces a reference vertex. The _percentage of local edges_ is the ratio between the number of local edges and the total number of edges in a mini-batch. A higher percentage of local edges is better since it indicates lower inter-GPU communication costs. A _random uniform partition_ would result in a low percentage of local edges (e.g., an average of 25% when splitting a mini-batch into four local splits), as explained in Section 3.2. Table 5 reports the edge skew (column "Edge Skew") and percentage of local edges (column "Local Edges") of our offline-partitioning online-splitting approach using different datasets and batch sizes, when producing 4 splits per mini-batch. Compared to a uniform random partition approach, our approach achieves higher skew but a much higher percentage of local edges (83% - 96%), indicating much lower inter-GPU communication cost.
Figure 8: Epoch time (in seconds) on the PCIe host for different mini-batch sizes.
Figure 9: Epoch time (in seconds) on the NVLink host for different batch sizes. Speedups are over DGL.
### Accuracy Evaluation To validate the correctness of GSplit, we compare its test accuracy with DGL. Test accuracy is the accuracy of the GNN model on the test dataset, which is not seen during training. Figure 10 compares the accuracy at different epochs for the two OGB datasets, PR and PA, using GraphSage and GAT, respectively. GSplit's accuracy at each epoch matches DGL's accuracy, as shown in the figure. GSplit's convergence speed, however, is much faster since each epoch takes a much shorter time to complete.
Figure 10: Test accuracy.
## 6 Related Work Our work focuses on single-host multi-GPU mini-batch training. Distributed mini-batch training is another important area of research for GNN training systems. Systems like DGL [37] and AliGraph [38] use data-parallel mini-batch training to scale to large graphs. \(P^{3}\) introduces pipelined push-pull hybrid parallelism for distributed mini-batch training [9]. Marius++ runs data-parallel GNN training on large graphs using a single GPU and an out-of-core approach rather than a distributed system [28]. For a survey on distributed GNN training, we refer the reader to [25]. Many systems focus on full-graph GNN training, which is a different problem than mini-batch training as discussed in Sections 1 and 3.2. Other notable but less directly related work in full-graph training includes Dorylus [26], which uses serverless functions, DGCL [5], which optimizes communication in distributed training, and FlexGraph, which supports aggregation from indirect neighbors [29]. NeutronStar is a distributed full-graph training system that uses standard full-graph training for some target vertices and fetches the k-hop neighborhood similarly to data parallelism for others [31]. Other work has explored other types of bottlenecks than the ones discussed in this paper. Mini-batch sampling and extraction can become bottlenecks in some cases. Existing work has explored using in-GPU sampling to alleviate this bottleneck. NextDoor proposes a programming API and a runtime to speed up in-GPU sampling [15]. C-SAW is another system with similar goals [22]. GPU sampling in these systems does not scale to large graphs that cannot be stored in the GPU memory. 
GNNLab is a multi-GPU mini-batch training system that uses a factorized approach to share GPU resources between sampling and training to leave more GPU memory for sampling [34]. Adding support for in-GPU sampling and splitting to GSplit is an interesting avenue of future work. GNNLab also proposes a pre-sampling technique to select vertices to cache based on sampling probability, using an independent per-GPU cache. GSplit can leverage this and other caching policies to increase its cache hit rate, also thanks to its distributed cache design. Finally, some work proposes dedicating a subset of the GPU threads to load data directly from the CPU memory concurrently with training [21]. This speeds up loading, but it requires a double in-GPU buffer to load features. None of these systems explores different paradigms to parallelize GNN training. Work on optimizing GPU kernels for GNN training (for example, SeaStar [32]) is complementary to the work described in this paper. CPU-GPU data transfers can also be limited by biasing the sampling algorithms to prioritize sampling from graph partitions that are stored on GPUs [6, 24]. However, this approach is not transparent, as it relies on ML practitioners to change the model design to be cache-aware, which affects training convergence and accuracy. ## 7 Conclusions This paper discusses some inherent performance limitations of the common data-parallel approach to mini-batch GNN training. It proposes split parallelism, a novel parallel training paradigm that eliminates the computation and data-transfer redundancies of data parallelism. Split parallelism requires new approaches for online scheduling and cooperative training. We implement GSplit, a multi-GPU mini-batch training system based on split parallelism, and show that it outperforms state-of-the-art systems in a variety of scenarios. By moving computation to data, split parallelism significantly reduces data loading time. While this result was expected given the premise of split-parallel training, showing that cooperative training can match and sometimes surpass the performance of data-parallel training is an interesting empirical result. Similarly, we show that offline pre-processing can effectively support fast online splitting while achieving good training performance. We believe that split parallelism represents an interesting new avenue to scale GNN training to large graphs and systems.
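As a concrete illustration of the split-quality metrics defined in Section 5.3, the following minimal Python sketch computes the edge skew and the percentage of local edges for one sampled mini-batch. The function, its inputs, and the convention that each edge is charged to the split owning its destination vertex are illustrative assumptions, not GSplit's actual implementation.

```python
from collections import Counter

def split_quality(edges, vertex_to_split, num_splits=4):
    """Split-quality metrics from Section 5.3 (hypothetical helper).

    edges: list of (src, dst) vertex pairs sampled in one mini-batch.
    vertex_to_split: dict mapping each vertex id to its split id, i.e.,
    the outcome of offline partitioning plus online splitting.
    """
    per_split = Counter()
    local = 0
    for src, dst in edges:
        split = vertex_to_split[dst]  # assume an edge is charged to its destination's split
        per_split[split] += 1
        # An edge is local when both endpoints fall in the same split;
        # otherwise it is remote and introduces a reference vertex.
        if vertex_to_split[src] == split:
            local += 1
    counts = [per_split.get(i, 0) for i in range(num_splits)]
    avg = sum(counts) / num_splits
    edge_skew = (max(counts) - min(counts)) / avg if avg else 0.0
    pct_local = local / len(edges) if edges else 0.0
    return edge_skew, pct_local
```

Under a random uniform partition, `edge_skew` would approach zero while `pct_local` would drop toward `1 / num_splits`, which is exactly the load-balance versus communication trade-off discussed above.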
2305.14394
Unsupervised Spiking Neural Network Model of Prefrontal Cortex to study Task Switching with Synaptic deficiency
In this study, we build a computational model of the Prefrontal Cortex (PFC) using Spiking Neural Networks (SNNs) to understand how neurons adapt and respond to tasks switched at short and longer durations of stimulus change. We also explore behavioral deficits arising out of PFC lesions by simulating lesioned states in our spiking architecture model. Although there are some computational models of the PFC, SNNs have not been used to model it. In this study, we use SNNs with parameters close to biologically plausible values and train the model using the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule. Our model is based on connectionist architectures and exhibits neural phenomena like sustained activity, which helps in generating short-term or working memory. We use these features to simulate lesions by deactivating synaptic pathways, record the weight adjustments of learned patterns, and capture the accuracy of learning tasks under such conditions. All our experiments are trained and recorded using the real-world Fashion MNIST (FMNIST) dataset, and through this work, we bridge the gap between bio-realistic models and those that perform well in pattern recognition tasks.
Ashwin Viswanathan Kannan, Goutam Mylavarapu, Johnson P Thomas
2023-05-23T05:59:54Z
http://arxiv.org/abs/2305.14394v1
Unsupervised Spiking Neural Network Model of Prefrontal Cortex to study Task Switching with Synaptic deficiency ###### Abstract In this study, we build a computational model of the Prefrontal Cortex (PFC) using Spiking Neural Networks (SNNs) to understand how neurons adapt and respond to tasks switched at short and longer durations of stimulus change. We also explore behavioral deficits arising out of PFC lesions by simulating lesioned states in our spiking architecture model. Although there are some computational models of the PFC, SNNs have not been used to model it. In this study, we use SNNs with parameters close to biologically plausible values and train the model using the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule. Our model is based on connectionist architectures and exhibits neural phenomena like sustained activity, which helps in generating short-term or working memory. We use these features to simulate lesions by deactivating synaptic pathways, record the weight adjustments of learned patterns, and capture the accuracy of learning tasks under such conditions. All our experiments are trained and recorded using the real-world Fashion MNIST (FMNIST) dataset, and through this work, we bridge the gap between bio-realistic models and those that perform well in pattern recognition tasks. unsupervised, pattern recognition, neural network, artificial neural network, computational intelligence, bio-inspired ## I Introduction Decision making is one of the fundamental cognitive processes of human behavior. Evidence arising out of cognitive studies shows that the PFC region of the brain is involved in decision making and task switching activities [2, 24]. There has been a renewed interest in SNNs as they imitate biological neurons and have been used in a variety of supervised and unsupervised tasks [4]. In our previous study of task switching [30], we used a computationally tractable SNN to implement a bio-inspired neuron model and analyze the effects of task switching from the perspective of the synaptic weight. In this study, we implement a more realistic architecture of the PFC based on bio-realistic connectionist models [2, 24] and study the behavioral deficits and learning impairment suffered by patients with PFC lesions within the context of task switching. We explore these avenues using a Hebbian-based biological unsupervised learning rule known as STDP [11, 27]. Our contributions in this paper are: (a) implementing a biologically realistic architecture of the PFC neurons using SNNs, (b) studying behavioral and memory deficits by simulating lesions in our SNN model, (c) using a real-world dataset to mimic real-world possibilities in learning and memory formation, and (d) using a computationally efficient and tractable SNN framework to record and graphically plot neural phenomena like synaptic lesions and memory impairment when learning patterns. Finally, unlike previous works, which take a top-down approach where human participants engage in task switching experiments based on cues or which use traditional artificial neural networks, we employ a bottom-up approach at a biologically realistic neuronal level. We take this approach so as to understand the detailed biological neuronal mechanisms of task switching. This paper is organized as follows: In Section II, we provide a background for this work. Section III describes the neuron model design, the learning rule, the dataset used, and our PFC architecture. In Section IV we present the experiments. 
This study concludes in Section V, where we present and analyze the results. ## II Related Work Studies like [10, 25] provide evidence that the PFC is the region most involved in enabling decision making and task switching in the brain. In these works, PFC decision-making is tested using task-switching experiments performed by human participants. Moreover, these experiments take place in a controlled manner by providing cues to participants before enabling changes in tasks. In our model, we address this issue by training in an unsupervised manner using STDP and analyzing the response of neurons. Although there have been studies on task switching using computational models [2, 24, 13], they use traditional neural networks and train the models in a supervised manner. Our model improves upon these studies by using a spiking neural architecture, which provides more realistic biological behavior. We use the Leaky Integrate and Fire (LIF) type of neurons to simulate the spiking behavior. Lesions and the behavioral deficits arising out of them are explored in works like [3, 24], which also use task switching to gauge the memory and learning capabilities of the PFC. These studies provided validation for our results. Our model also implements sustained or persistent neural activity, which acts as short-term or working memory. This is used to retrieve an entire episode of memory from a partial stimulus. We validate our model results arising from sustained activity by comparing them with experimental results and studies done in works like [12, 23, 24]. Previous research like [2, 22] specifies short and longer time durations for experiments on human participants. We also use the same time duration shifts for testing our models with lesions by deactivating synapses. ## III Methods ### _Neuron Design Model_ SNNs mimic real neurons due to their behavior of firing spikes. In our research, we implement the LIF model due to its computational tractability [17]. LIF neurons are widely used to simulate computational models as they capture a variety of neuronal behavior dynamics [8]. Let the membrane potential of LIF neurons be represented by \(V_{m}\). We can then define our LIF neurons in terms of ordinary differential equations that describe the evolution of \(V_{m}\) over time. The synaptic currents are defined as follows: \[I_{leak}=g_{leak}*(E_{leak}-V_{m}) \tag{1}\] \[I_{exc}=g_{exc}*(E_{exc}-V_{m}) \tag{2}\] \[I_{inh}=g_{inh}*(E_{inh}-V_{m}) \tag{3}\] Here, \(g_{leak}\) is the leak conductance, which stays constant and determines the leak current \(I_{leak}\). \(g_{exc}\) is the excitatory conductance and reflects the excitatory input \(I_{exc}\). Similarly, \(g_{inh}\) is the inhibitory conductance and measures the strength of the inhibitory input \(I_{inh}\). \(E_{leak}\), \(E_{exc}\) and \(E_{inh}\) are the leak, excitatory and inhibitory membrane potentials. Combining (1), (2) and (3), the membrane equation with several synapses is given as: \[C_{m}\frac{dV}{dt}=I_{leak}+I_{exc}+I_{inh}+\xi \tag{4}\] where \(C_{m}\) is the membrane capacitance constant specified in farads and \(\xi\) is the standard noise term. The evolution of the inhibitory and excitatory conductances over time is given by \[\frac{dg_{inh}}{dt}=-\frac{g_{inh}}{\tau_{inh}} \tag{5}\] \[\frac{dg_{exc}}{dt}=-\frac{g_{exc}}{\tau_{exc}} \tag{6}\] where \(\tau_{inh}\) and \(\tau_{exc}\) are the synaptic time constants. Parameters used for neurons are close to actual biological values in all our experimental trials. 
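As a minimal illustration of how Eqs. (1)-(7) can be simulated, the following Python sketch integrates a single LIF neuron with forward Euler. The membrane constants follow Table I, while the capacitance, conductance magnitudes, synaptic time constants, input statistics, and time step are assumptions made for illustration; the refractory period \(\tau_{ref}\) and the noise term \(\xi\) are omitted for brevity.

```python
import numpy as np

# Forward-Euler simulation of one LIF neuron, Eqs. (1)-(7).
V_phi, V_rest = -55e-3, -70e-3                # threshold and reset (Table I)
E_leak, E_exc, E_inh = -65e-3, 0.0, -75e-3    # reversal potentials (Table I)
C_m, g_leak = 200e-12, 10e-9                  # assumed capacitance and leak conductance
tau_exc = tau_inh = 5e-3                      # assumed synaptic time constants (s)
dt, T = 1e-4, 0.5                             # 0.1 ms step, 500 ms of stimulus

rng = np.random.default_rng(0)
V, g_exc, g_inh = V_rest, 0.0, 0.0
spike_times = []
for step in range(int(T / dt)):
    I_leak = g_leak * (E_leak - V)            # Eq. (1)
    I_exc = g_exc * (E_exc - V)               # Eq. (2)
    I_inh = g_inh * (E_inh - V)               # Eq. (3)
    V += dt * (I_leak + I_exc + I_inh) / C_m  # Eq. (4), noise term omitted
    g_exc += -dt * g_exc / tau_exc            # Eq. (6)
    g_inh += -dt * g_inh / tau_inh            # Eq. (5)
    if rng.random() < 0.05:                   # stand-in for presynaptic Poisson input
        g_exc += 2e-9
    if V > V_phi:                             # Eq. (7): spike, then reset
        spike_times.append(step * dt)
        V = V_rest
```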
Spiking neurons communicate by generating and propagating electrical impulses known as action potentials or spikes. After firing a spike, the membrane potential \(V_{m}(t)\) is reset to the resting potential \(V_{rest}\). The neuron then enters a refractory period \(\tau_{ref}\), during which it cannot spike again. We can model the spiking behavior of LIF neurons having spike voltage threshold \(V_{\phi}\) by: \[V_{m}(t)=\begin{cases}V_{rest},&\text{if }V_{m}(t)>V_{\phi}\\ V_{m}(t),&\text{otherwise}\end{cases} \tag{7}\] The various biological parameter values are specified in Table I. ### _Unsupervised Plasticity Learning Rule_ In our study, we use the unsupervised synaptic plasticity learning rule known as Spike Timing Dependent Plasticity (STDP). STDP has been linked to various learning rules and mechanisms of biological neurons, like Hebbian learning and short-term prediction [15, 19]. This type of biologically inspired learning rule is used in SNNs. Learning of patterns takes place by modification of synaptic weights in response to the firing times of pre- and post-synaptic spikes. This leads to two neural phenomena: Long Term Potentiation (LTP) [11], which occurs when the neurons are learning patterns, and Long Term Depression (LTD) [1, 11], which leads to forgetting of patterns in the network. As STDP depends on the timing of pre- and post-synaptic spikes, we can define the STDP function following [1], where \(w\) is the synaptic weight: \[(\Delta w)=\begin{cases}A^{+}e^{-\Delta t/\tau^{+}}&\Delta t>0\\ A^{-}e^{\Delta t/\tau^{-}}&\Delta t<0\end{cases} \tag{8}\] where \(A^{+}\) holds the synaptic trace resulting from LTP and \(A^{-}\) stores the LTD synaptic trace. \(\tau^{+}\) and \(\tau^{-}\) denote the pre- and post-synaptic spike times, and \(\Delta t=\tau^{-}-\tau^{+}\) is the time difference between the post-synaptic and pre-synaptic spikes. In the case of LTP, the synaptic weight \(w\) is updated by: \[\begin{split} a^{-}&\to a^{-}+A^{-}\\ w&\to w+a^{+}\end{split} \tag{9}\] Updating the weight when there is depression in the network is given by: \[\begin{split} a^{+}&\to a^{+}+A^{+}\\ w&\to w+a^{-}\end{split} \tag{10}\]
\begin{table} \begin{tabular}{|l|l|} \hline **Parameter** & **Value** \\ \hline \(V_{\phi}\) & \(-55mV\) \\ \hline \(V_{rest}\) & \(-70mV\) \\ \hline \(E_{inh}\) & \(-75mV\) \\ \hline \(E_{exc}\) & \(0mV\) \\ \hline \(E_{leak}\) & \(-65mV\) \\ \hline \end{tabular} \end{table} TABLE I: Spiking Neural Network parameters.
For computational efficiency, we store only the synaptic traces, and the weight \(w\) is bounded by \(0\leq w\leq w_{max}\). ### _Unsupervised Spiking Neural Architecture_ Our architecture draws on aspects derived from computational neuroscience and psychology [24]. Our network is depicted in Figure 1. It consists of three layers, beginning with the Input Sensory Layer, which receives the input stimuli, encodes them into Poisson spike trains, and transfers the encoded spikes to both the PFC Memory layer and the PFC Response layer. Neuron and STDP parameters are set to biologically realistic values. In the following sections, we will describe the role and functions of these layers. #### III-C1 Input Sensory Layer This is the input layer. We convert the input stimuli into spike trains having frequency proportional to the value of each pixel. This type of encoding is known as rate coding. Within the cortex, neuronal firing activity is irregular and stochastic [16]. 
This behavior is captured by a Poisson process, and spiking activity in the brain roughly approximates a Poisson distribution [14]. Hence, we model the input spike train as a Poisson process. The input layer is made up of excitatory neurons, one mapped to each pixel in the input row. SNNs carry information in spikes, which are transmitted between different layers via synapses.
Fig. 1: Spiking Neural Architecture of PFC.
#### III-C2 PFC Memory Layer This layer receives the stimulus through excitatory synapses from the Input Sensory Layer and makes lateral self-inhibitory synaptic connections to generate competition among neurons. This mode of lateral self-inhibition is known as Winner Take All; it ensures that a single neuron which fires consistently is picked among the competing neurons [11, 13]. The weight matrices of the neurons act as receptive fields, and they are either strengthened or weakened based on the timing of spikes arriving from the Input layer. The STDP plasticity rule enables the strengthening of synapses based on the response of neurons to stimuli [27]. #### III-C3 Sustained Activity From our model in Figure 1, we observe that each LIF unit in the Memory layer has self-excitatory connections. This helps the PFC neurons carry a trail of learned activity from the input stimulus and maintain it for a short duration in the absence of any input. This type of dynamics is known as an attractor network [24]. This persistent firing activity in the absence of any input or stimulus is known as sustained activity. This phenomenon is responsible for the formation of short-term or working memory [23]. These representations are then maintained in a stable manner even after the network ceases to be activated by any input stimuli. Several research studies have observed persistent neuronal activity whenever the PFC is involved in decision making or selection of tasks, among other activities [9, 18, 21]. Figure 2 illustrates persistent spiking activity in our architecture modeled using LIF neurons. As seen in Figure 2, we present the Target (T) stimulus, a \(28\times 28\) image array from the FMNIST dataset, for a duration of \(500ms\) and then stop it. The neurons continue spiking beyond \(500ms\) up to \(800ms\), indicated by the shaded portion, and gradually decline as the inhibitory neurons suppress the activity of excitatory neurons, which is one of the key functions of the PFC [23]. This continued spiking holds knowledge of the learned pattern for a short duration, which can be used to reconstruct the entire pattern from the whole memory when the network is tested with inputs contextually similar to the original input. #### III-C4 Response Layer This is the decision-making layer, and the responses are recorded by observing which of the two units of LIF neurons spikes first on seeing a target stimulus. This layer receives excitatory signals from the Input and Memory layers. The target and non-target neurons are denoted by \(N_{t}\) and \(N_{nt}\), respectively. Recordings are made by noting which neuron's membrane potential \(V_{m}\) first exceeds the threshold \(V_{\phi}\). ## IV Experiments We validate our architecture by devising experiments to observe the learning and adaptive properties of the PFC neurons. We use the real-world Fashion-MNIST (FMNIST) image dataset, which is suitable for classification and image recognition tasks. 
Our goals in these experiments are the following: (a) simulate experiments based on previous research studies in the areas of cognitive psychology [22] and brain frontal cortex lesions [3, 20], (b) analyze and record the learning and adaptation behavior of the PFC neurons when stimuli are switched at longer and shorter time durations, and (c) observe the memory formation capacity when synaptic connections between the Input and Memory modules are partially switched off. This simulates a network having neuronal lesions [20, 24]. The FMNIST dataset comprises \(70,000\) records with \(10\) categories of fashion products. The labels consist of \(7,000\) images each, with \(60,000\) training and \(10,000\) test samples. Every row represents a \(28\times 28\) matrix. For this study, we chose four stimulus types, i.e., Target, Non-target, Context-Target, and Context-Non-target. The model learns patterns by modifying the synaptic weight \(w\) of neurons \(N\), represented as \(w:N\times N\), describing the synaptic connections between neurons in the network. The weight \(w\) gains higher values whenever a post-synaptic spike is preceded by a pre-synaptic spike, i.e., \(\tau_{pre}<\tau_{post}\). Every pixel \(p_{1},p_{2},\ldots,p_{i}\), where \(i\in\{1,2,3,\ldots,784\}\) corresponds to the number of features, is mapped to an excitatory neuron, \(p_{i}\to N_{exc}^{i}\). The input and output are excitatory such that \(N_{input}\cup N_{output}\subseteq N_{exc}\). The _Target_ is used predominantly and serves as the stimulus responsible for eliciting a spike from the target neuron \(N_{t}\). In the FMNIST dataset, this image falls under the class of _t-shirts_. The _Non-Target_ should not evoke a spike response from \(N_{t}\), and its effect on the model is observed in the dynamics of the output neuron \(N_{nt}\). This image comes under the class of _ankle boots_. Target and Non-Target stimuli are presented in a \(70:30\) mix, with the target signals appearing in the majority of our trials. As the PFC model exhibits persistent neural activity, we use two additional sets of stimuli to test this functionality. The _Context-Target_ and _Context-Non-Target_ are used to analyze the pattern recognition capacity of the short-term working memory of the PFC neurons. These two stimuli are presented after the model learns patterns from the target and non-target. This leads to sustained activity, where the neurons maintain a steady firing state despite the absence of input. In this state, we present the context-target and context-non-target to record how the neurons use this short-term memory to retrieve the whole pattern. Capturing the firing activity of \(N_{t}\) and \(N_{nt}\) when they are in their persistent state helps in better understanding the PFC's decision-making capabilities. To effectively test this, we choose stimuli contextually similar to the main target and non-target images. We used cosine similarity to obtain images similar to the Target and Non-Target: an image having a high similarity value to the target is chosen as our context-target, and similarly we choose an image for the context-non-target. The \(4\) types of input stimuli are shown in Figure 1. For studying the effects of lesions, we partially deactivate synapses between the Input Sensory and the PFC Memory layer.
Fig. 2: Sustained Activity in the Memory Layer.
Fig. 3: Neuron responses with full synapse connectivity under short and long delays between task switching.
Fig. 4: Final STDP weights for the Target stimulus at full synaptic connectivity.
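A minimal Python sketch of how such a lesioned network can be trained is given below: trace-based STDP updates implement Eqs. (8)-(10), and the lesion is a Bernoulli mask over the Input-to-Memory synapses with survival probability \(Syn_{p}\) (set to 0.5 in the results below). The memory-layer width, trace magnitudes, and time constants are illustrative assumptions, not the exact values used in our trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 784, 100               # 28x28 input pixels -> memory-layer neurons (assumed width)
w_max = 1.0
A_plus, A_minus = 0.01, -0.012         # assumed LTP/LTD trace increments
tau_plus = tau_minus = 20e-3           # assumed trace time constants (s)

w = rng.uniform(0.0, 0.3, (n_pre, n_post))
Syn_p = 0.5                            # probability that a synapse survives the lesion
mask = rng.random((n_pre, n_post)) < Syn_p
a_pre = np.zeros(n_pre)                # a+ traces, one per pre-synaptic neuron
a_post = np.zeros(n_post)              # a- traces, one per post-synaptic neuron

def decay(trace, dt, tau):
    """Exponential decay of a trace array over dt seconds of silence (in place)."""
    trace *= np.exp(-dt / tau)

def on_pre_spike(i, dt):
    decay(a_pre, dt, tau_plus); decay(a_post, dt, tau_minus)
    a_pre[i] += A_plus                 # a+ -> a+ + A+, Eq. (10)
    w[i, :] += mask[i, :] * a_post     # w -> w + a- (depression), Eq. (10)
    np.clip(w, 0.0, w_max, out=w)      # enforce 0 <= w <= w_max

def on_post_spike(j, dt):
    decay(a_pre, dt, tau_plus); decay(a_post, dt, tau_minus)
    a_post[j] += A_minus               # a- -> a- + A-, Eq. (9)
    w[:, j] += mask[:, j] * a_pre      # w -> w + a+ (potentiation), Eq. (9)
    np.clip(w, 0.0, w_max, out=w)
```

Deactivated synapses never update or transmit, so with \(Syn_{p}=0.5\) roughly half of the receptive field is frozen, producing the partial weight patterns discussed in the results.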
## V Results Our goal is to capture and understand how the neurons in our PFC model respond to different stimulus conditions with full and partial synaptic connectivity. This aids in understanding how memory and learning get impaired in the absence of synapses between neurons. We discuss our results obtained for the different trials of experiments conducted. The responses for full synaptic connectivity between all the layers in our model are illustrated in Figure 3. We performed \(1000\) trials for every stimulus category and recorded the responses. From Figure 3, the model responds well to the target stimulus, with an accuracy of \(76\%\). As STDP is unsupervised, there was no prior training of the model, and this simulates real-world conditions of task switching on demand. The learning of patterns in this trial can be attributed to LTP, which causes the synaptic weight \(w\) to increase [11]. In task switching with short delays, we switch the task stimulus every \(350ms\). For the non-target stimulus, the model performs at \(54\%\) accuracy, indicating adjustment to the new task and leading to the synaptic weights undergoing depression [30]. We then analyze the effect of short-term memory by presenting the context-target and observe the neurons responding with \(60\%\) correct firing of spikes. This higher accuracy is because of dense excitatory connections within the memory module layer, as shown in Figure 1. These recurrent excitatory connections provide a higher firing activity, which is then regulated by the local self-inhibition in the memory layer. This enables the neurons to maintain their firing activity for a short duration and to retrieve an entire memory from a partial stimulus. This phenomenon occurs in real neurons of the PFC, as discussed in previous research works [12, 23, 24]. We observe a similarly higher accuracy for the context-non-target due to the effect of sustained firing activity. Similar to our previous study on task switching using SNNs [30], we notice that for longer durations, the model is better able to adapt to the incoming patterns, with a higher accuracy of \(81\%\) for the target. Here, the switch occurs every \(550ms\). The higher accuracy is attributed to the neurons storing sufficient information in their memory and being better able to adapt when tasks are switched [29]. The accuracy is \(74\%\) for the context-target stimulus, \(57\%\) for the non-target, and \(52\%\) for the context-non-target tasks. These results show that our network can reproduce most of the behavioral responses found in [2], which performed similar task-switching trials on human participants. We show the final synaptic weight representation for the target stimulus learned by the model using STDP in Figure 4. We now simulate the network by deactivating synapses between the Input and Memory layers. This causes a deficiency in the information being transmitted and is analogous to a lesion in the cortical regions of the brain [20]. Altering the synaptic connectivity inhibits the model from learning representations, thereby impairing the decision-making capability of the PFC. This leads to an absence of short-term or working memory [6], leading to poor learning and decision making. We shift our focus to studying the response accuracy of neurons under partial deactivation of synapses. We deactivate the synapse connections between the Input Sensory layer and the Memory layer with a probability value of \(Syn_{p}=0.5\). 
This gives the model a \(50\%\) chance of making a successful synaptic connection and transmitting spike trains. From Figure 5, for short delays, the accuracy response to the target stimulus is \(42\%\), which is low compared to the accuracy with full synapse connectivity. This shows that patients' decision-making and memory formation are impaired in lesioned neurons, as the dopamine needed to enable sustained activity is deficient [5]. Accuracy for the context-target is \(37\%\) and comes close to the target (T) accuracy due to the presence of partial representations held actively by weakly persistent neurons in a deficient synaptic network. This scenario showcases our model's ability to learn patterns with partial synaptic connectivity between its layers, which happens due to STDP. Non-target accuracy is \(12\%\), and \(10\%\) for the context-non-target. For longer delays with synaptic deficiency, the accuracy for the target stimulus is \(40\%\), but there is a rapid fall in response for the context-target, giving an accuracy of only \(16\%\). This happens as PFC lesions have been shown to have more impact when there is a longer delay between task switches [25, 28]. There is a \(350ms\) gap between stimulus presentations in longer delays, and during this interval the weakened sustained activity of the neurons decays, leading to little or no working memory. Accuracies for the non-target and context-non-target are even lower, as they do not occur frequently. Lesions in synaptic activity cause the neurons to have partial information, with no spiking activity in some neurons. This causes higher error rates in learning, leading to a high cost of switching [25, 26]. This leads to incorrect firing activity and hence to depression, which is characterized by \(\tau_{post}<\tau_{pre}\), and brings down the accuracy of correct responses. The partial pattern representations formed by lesioned synaptic weights \(w\) under STDP are seen in Figure 6, which shows that our model captures the general pattern of the image but deficient synaptic connectivity prevents it from learning the pattern as a whole. ## VI Conclusions We have presented a computational model of the PFC using spiking LIF neurons to study the mechanisms of learning and decision making using task-switching trials. The results of our model align with experiments performed on human and primate subjects [2, 7, 25, 31]. They also follow the biological processes observed in real neurons having lesions. We showcase neural phenomena like LTP and LTD, which occur in real neurons during the formation of memory. Additionally, our model also exhibits persistent attractor network states due to recurrent self-excitatory connections. These connection dynamics have their basis in the actual biology of the PFC found in the brain. Our model also uses an unsupervised STDP learning rule to learn patterns and adapt to different task stimuli. Unlike previous studies, we have used a real-world dataset to study and record the behavior dynamics of the neurons. We show a novel way of simulating lesions by deactivating synapses. The effect of these lesions has been plotted as synaptic weights, which show a partial learning of the stimulus. Our experiments on task switching have been devised in line with research done on human subjects. The characteristic behaviors observed in our study help in understanding and testing various hypotheses associated with neural structures involved in task switching. 
To the best of our knowledge, this model is the first to use STDP and a real dataset with the architecture of the PFC neurons derived from studies done on the brain using neuroimaging techniques.
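As a companion to the Input Sensory Layer described in Section III-C1, the following is a minimal Python sketch of the Poisson rate coding used to turn FMNIST pixels into spike trains; the maximum firing rate, presentation duration, and time step are illustrative assumptions.

```python
import numpy as np

def poisson_encode(image, duration=0.35, dt=1e-3, max_rate=63.75, rng=None):
    """Rate-code a 28x28 image into Poisson spike trains.

    Each pixel maps to one excitatory input neuron whose firing rate is
    proportional to the pixel intensity (max_rate Hz at intensity 255 is
    an assumed scaling). Returns a (steps, 784) boolean spike raster.
    """
    rng = rng or np.random.default_rng()
    rates = image.reshape(-1).astype(float) / 255.0 * max_rate
    steps = int(duration / dt)
    # In each time step, a neuron spikes with probability rate * dt,
    # which approximates a Poisson process when rate * dt << 1.
    return rng.random((steps, rates.size)) < rates * dt
```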
2310.10422
A Neural Network-Based Approach to Normality Testing for Dependent Data
There is a wide availability of methods for testing normality under the assumption of independent and identically distributed data. When data are dependent in space and/or time, however, assessing and testing the marginal behavior is considerably more challenging, as the marginal behavior is impacted by the degree of dependence. We propose a new approach to assess normality for dependent data by non-linearly incorporating existing statistics from normality tests as well as sample moments such as skewness and kurtosis through a neural network. We calibrate (deep) neural networks with simulated normal and non-normal data covering a wide range of dependence structures, and we determine the probability of rejecting the null hypothesis. We compare several approaches for normality tests and demonstrate the superiority of our method in terms of statistical power through an extensive simulation study. A real-world application to global temperature data further demonstrates how the degree of spatio-temporal aggregation affects the marginal normality in the data.
Minwoo Kim, Marc G Genton, Raphael Huser, Stefano Castruccio
2023-10-16T14:03:25Z
http://arxiv.org/abs/2310.10422v1
# A Neural Network-Based Approach to Normality Testing for Dependent Data ###### Abstract There is a wide availability of methods for testing normality under the assumption of independent and identically distributed data. When data are dependent in space and/or time, however, assessing and testing the marginal behavior is considerably more challenging, as the marginal behavior is impacted by the degree of dependence. We propose a new approach to assess normality for dependent data by non-linearly incorporating existing statistics from normality tests as well as sample moments such as skewness and kurtosis through a neural network. We calibrate (deep) neural networks with simulated normal and non-normal data covering a wide range of dependence structures, and we determine the probability of rejecting the null hypothesis. We compare several approaches for normality tests and demonstrate the superiority of our method in terms of statistical power through an extensive simulation study. A real-world application to global temperature data further demonstrates how the degree of spatio-temporal aggregation affects the marginal normality in the data. _Keywords:_ Adaptive Cut-Off; Aggregation of Test Statistics; Neural Network; Normality Test; Spatio-Temporal Statistics ## 1 Introduction One of the fundamental tasks for both model design and validation is to identify a marginal distribution for the data (or the residuals according to some trend), and to test whether it can be ascribed to a known parametric model. Arguably one of the most important cases, if not the most important one, is that of the normal distribution. In this case, in addition to informal methods such as quantile-quantile plots and histograms, there is a wide variety of normality tests under the assumption of independent and identically distributed (_i.i.d._) data; see, e.g., Anderson and Darling (1952), Shapiro and Wilk (1965), Lilliefors (1967), and Jarque and Bera (1980). Normality tests are based on statistics such as skewness and kurtosis, which summarize some properties of the distribution and compare them to the values expected from a normal distribution. The tests may not provide unanimous results if, for instance, the data resemble a normal distribution with respect to one statistic but not with respect to others; see Thode (2002). When the data are not _i.i.d._, with dependence informed possibly (but not necessarily) by space and/or time, testing the marginal behavior is considerably more challenging. Indeed, while it is methodologically convenient to assume a Gaussian process, i.e., a random function with marginal Gaussian distribution, the dependence leads to excessive rejections in normality tests intended for _i.i.d._ data. As an extreme example, one may consider a Gaussian process with perfect correlation: for every realization, every observation will have the same value, hence leading to the impossibility of assessing the marginal behavior. Therefore, standard tests intended for _i.i.d._ data are bound to exhibit inflated Type I error rates on dependent data, even if the process is in fact Gaussian. It is hence necessary to develop tests that account for dependence, and which would adjust the decision criterion accordingly. The recent work of Horvath et al. (2020) proposed a modification of the Jarque-Bera normality test (Jarque and Bera, 1980) by estimating the spatial structure. In their review on multivariate normality tests, Chen and Genton (2023) also extended the test of Horvath et al. (2020) to the multivariate setting. 
While a test adjustment may provide a partial solution, relying on only a single test with dependent data is limiting, as the null distribution of the test statistic strongly depends on the correlation structure. For instance, the null distribution of a single test statistic such as the Shapiro-Wilk normality test (Shapiro and Wilk, 1965) will differ depending on the strength of the spatial dependence. In order to enhance the test power, a solution is to combine different tests so as to use multiple statistics at the same time. One simple approach is the Bonferroni correction, which predicates rejection of \(H_{0}\) if at least one of the \(m\) tests is rejected at level \(\alpha/m\); see, e.g., Haynes (2013). The Bonferroni correction guarantees the appropriate Type I error rate but is overly conservative and has optimal power only if the test statistics are independent. Another approach to combine \(m\) tests is to use Fisher's method, which combines information from the p-values of all tests. If the tests are all independent, then \(-2\sum_{i=1}^{m}\ln p_{i}\) follows a \(\chi^{2}_{2m}\) distribution (Fisher, 1992; Kost and McDermott, 2002). A linear combination of p-values has also been suggested by Edgington (1972). Winkler et al. (2016) reviewed fifteen methods for combining p-values. Neural network-based approaches with descriptive statistics as inputs for _i.i.d._ data have been introduced to test for normality and compared with standard tests (Wilson and Engel, 1990). Sigut et al. (2006) assessed univariate normality using trained neural networks with input features including sample skewness, sample kurtosis, the test statistic in Shapiro and Wilk (1965), the Fisher transform of the Pearson correlation coefficient, and the family of test statistics proposed by Vasicek (1976). More recently, Simic (2021) extended previous approaches by adding summary statistics such as minimum, maximum, and sample size to the representative input set. All the past studies showed that neural network approaches can often outperform typical statistical tests by combining information in a non-linear fashion. In this work, we propose a more general neural network-based test for normality aimed at dependent data (in space, time, space/time, or simply multivariate) with a novel adaptive cut-off technique, which will be shown to outperform currently available methods for testing normality when the independence assumption is violated. The paper proceeds as follows. In Section 2, we present the general framework of combining multiple tests and introduce our neural network methodology. In Section 3, we conduct a simulation study for testing the assumption of normality on a spatial grid and we show the improvement against currently available methods. In Section 4, we apply the proposed method to spatially distributed data from a global climate model simulation in order to test normality at different levels of spatial aggregation. In Section 5, we discuss conclusions and directions for future research. ## 2 Methodology for Normality Testing Let \(\mathbf{Y}=\left(Y(\mathbf{s}_{1}),\ldots,Y(\mathbf{s}_{M})\right)^{\top}\) be a vector of real-valued random processes on a manifold. This manifold can represent a spatial domain such as a Euclidean space or a sphere for a spatial process, the positive real line for time series, or a Cartesian product of the two in the case of space-time processes. 
Let \(H_{0}\) be any model property that \(Y(\cdot)\) may satisfy (in our case the marginal distribution being Gaussian). We aim to create a most-powerful classifier \(C:\mathbf{Y}\mapsto\{0,1\}\) with Type I error rate \(\alpha\); that is, we have \(P(C(\mathbf{Y})=1\mid H_{0}\text{ true})=\alpha\), and for any other classifier \(\tilde{C}\) at the same Type I error rate we have that \(P(C(\mathbf{Y})=1\mid H_{0}\text{ false})\geq P(\tilde{C}(\mathbf{Y})=1\mid H_{0}\text{ false})\). ### Individual normality tests For simplicity of notation, we denote with \(Y_{i}=Y(\mathbf{s}_{i})\), \(i=1,\ldots,M\), \(\mathbf{Y}=(Y_{1},\ldots,Y_{M})^{\top}\) the data for which one wants to assess normality. We focus on four tests that are used as inputs for our neural network: Shapiro-Wilk (Shapiro and Wilk, 1965), Lilliefors (Lilliefors, 1967), Jarque-Bera (Jarque and Bera, 1980), and Anderson-Darling (Anderson and Darling, 1952). The Shapiro-Wilk test relies on calculating the order statistics and comparing the observed versus expected values \(W=(\sum_{i=1}^{M}a_{i}Y_{(i)})^{2}/\sum_{i=1}^{M}(Y_{i}-\bar{Y})^{2}\), where \(Y_{(i)}\) is the \(i^{th}\) order statistic, \(\bar{Y}\) is the sample mean, and \(a_{i}\) is a weight calculated from the expected means and covariances of the order statistics under the null hypothesis of _i.i.d._ data. Despite its popularity, the Shapiro-Wilk test relies on the availability of appropriate values of \(a_{i}\) which have no closed form, so the values are determined through Monte Carlo simulation, and for large sample sizes \(M\), it is more difficult to obtain accurate \(a_{i}\) estimates (Das and Imon, 2016). Indeed, in all the code implementations we used throughout this work, the size of \(M\) is limited to a few thousand points. The Lilliefors test is an adaptation of the Kolmogorov-Smirnov test for Gaussian data. It measures the maximum deviation of the empirical and theoretical cumulative distribution functions (CDFs), denoted with \(F_{M}\) and \(F\), respectively: \(D_{M}=\sup_{y}|F_{M}(y)-F(y)|\). Then \(D_{M}\) is compared to the expected distribution under the null hypothesis, and a p-value is calculated. The Anderson-Darling test statistic is also based on deviation from the theoretical CDF: \(A^{2}=M\int_{-\infty}^{\infty}\frac{\{F_{M}(y)-F(y)\}^{2}}{F(y)\{1-F(y)\}} \mathrm{d}F(y)\). Rather than measuring the maximum deviation between the empirical and theoretical CDFs, Anderson-Darling weighs deviations in the tails more heavily. Finally, the Jarque-Bera test calculates the test statistic \(JB=\frac{M}{6}\{S^{2}+(K-3)^{2}/4\}\), where \(S\) and \(K\) are the sample skewness and kurtosis, respectively. Informally, the Jarque-Bera test checks whether the sample's skewness and kurtosis match those of a normal distribution. The asymptotic expected values of the empirical skewness and kurtosis are 0 and 3, and the asymptotic variances of the empirical skewness and kurtosis are \(6/M\) and \(24/M\). Thus, the Jarque-Bera statistic is a sum of squares of two asymptotically independent standardized normal variables, and is thus distributed as a \(\chi^{2}\) random variable. ### Combining tests Let \(C_{1},C_{2},\ldots,C_{m}\) be \(m\) classifiers with Type I error \(\alpha\). Insofar as they are distinct classifiers, they assess at least partly different properties implied by \(H_{0}\). 
For example, to test \(H_{0}\): \(\mathbf{Y}(\mathbf{s})\) is normally distributed, \(C_{1}\) may be testing whether the skewness is zero, while \(C_{2}\) may be testing whether the excess kurtosis is zero. Both are appropriate level-\(\alpha\) tests of \(H_{0}\) and their performance, measured by statistical power, will vary depending on how the departure of the alternative hypothesis \(H_{1}\) from \(H_{0}\) affects the properties assessed by each classifier. Ideally, we would like to combine the \(m\) classifiers into a single level-\(\alpha\) classifier \(C\) that is more powerful. In our case, combining the classifiers is complicated because of two main issues. First, since each individual classifier is testing different but related properties of \(H_{0}\), the \(m\) classifiers are expected to be dependent; the Bonferroni correction is overly conservative because the effective number of tests is less than \(m\) due to this dependence, and Fisher's method's asymptotic distribution is no longer valid. In the field of statistical genetics, Greco et al. (2015) accounted for the dependence of various genetic tests of association for case-control studies by repeatedly permuting cases and controls in order to calculate the null distribution of either the Fisher statistic or minimum p-value statistic, which naturally adjusts for the dependence. The method relies on creating a representative sample of data under the null hypothesis through permutations. Second, in our setting, we only have a single realization of the process \(\mathbf{Y}(\mathbf{s})\), so instead of a permutation, we will create a representative sample of data under \(H_{0}\) through simulation. ### Combining tests through neural networks If \(T_{1},T_{2},\ldots,T_{m}\) are test statistics for classifiers \(C_{1},C_{2},\ldots,C_{m}\), the simplest approach to combine them is through a classifier comprising a linear combination and a logit transformation: \(\text{logit}\{P(C(\mathbf{Y})=1)\}=\gamma_{0}+\gamma_{1}T_{1}+\cdots+\gamma_{m}T_{m}\). While this approach allows combining information across tests, its functional form limits its flexibility. In this work, we propose a more flexible approach which relies on a (deep) neural network, i.e., we filter the test statistics through a combination of multiple non-linear functions (Goodfellow et al., 2016). More specifically, we consider the following: \[F(\mathbf{Y})=P(C(\mathbf{Y})=1)=S\{W_{L}\sigma_{L}(W_{L-1}\cdots\sigma_{2}(W_{2} \sigma_{1}(W_{1}\mathbf{T})))\}, \tag{1}\] which is a composition of: 1. The \(m\)-dimensional vector of all the test statistics considered \(\mathbf{T}=(T_{1},\ldots,T_{m})^{\top}\). If no classifiers are available, one may also consider \(\mathbf{T}\) to be the identity function so that the vector of the observed data \(\mathbf{Y}\) itself is the desired input. For simplicity of notation in the next points, we set \(n_{0}=m\). 2. \(L\) matrices representing linear transformations \(W_{i}:\mathbb{R}^{n_{i-1}}\mapsto\mathbb{R}^{n_{i}}\). The parameter \(n_{i}\) is the _width_ of layer \(i\), while \(L\) is the _depth_ of the neural network. 3. \(L\) fixed non-linear transformations \(\sigma_{i}\) that are applied component-wise. In this paper, we use the common rectified linear unit (ReLU; Goodfellow et al., 2016) activation function defined by \(\sigma(z)=\max(0,z)\). 4. A sigmoid function \(S(z)=(1+e^{-z})^{-1}\), which guarantees an output in \([0,1]\) that we can interpret as \(P(C(\mathbf{Y})=1)\). 
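As an illustration of (1), and anticipating the training details given below, the following is a minimal PyTorch sketch of the classifier. The input dimension, widths, and dropout rate follow the settings later used in Section 3.3, while the training step itself is an assumed, simplified version rather than the exact implementation.

```python
import torch
import torch.nn as nn

# Classifier of Eq. (1): m = 6 test statistics -> L = 2 ReLU hidden layers
# (widths 256 and 128, with 30% dropout, as in Section 3.3) -> sigmoid output
# interpreted as P(C(Y) = 1).
model = nn.Sequential(
    nn.Linear(6, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()                       # binary cross-entropy, cf. Eq. (2)
optimizer = torch.optim.Adam(model.parameters())

def train_step(T_batch, labels):
    """One gradient step; T_batch is (batch, 6), labels are 1 under H1 and 0 under H0."""
    optimizer.zero_grad()
    p = model(T_batch).squeeze(1)            # predicted P(C(Y) = 1)
    loss = loss_fn(p, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```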
Inference (i.e., learning) can be performed by simulating the representative samples \(\mathbf{Y}_{1}^{H_{0}},\ldots,\mathbf{Y}_{N_{0}}^{H_{0}}\in\mathbb{R}^{M}\) satisfying \(H_{0}\) and \(\mathbf{Y}_{1}^{H_{1}},\ldots,\mathbf{Y}_{N_{1}}^{H_{1}}\) satisfying \(H_{1}\). The matrix entries of \(W_{i}\) are then learned by minimizing the binary cross-entropy (or log loss), which penalizes overly-confident incorrect predictions: if we denote by \(p_{i}^{H_{0}}=P\left(C\left(\mathbf{Y}_{i}^{H_{0}}\right)=1\right)\) and \(p_{i}^{H_{1}}=P\left(C\left(\mathbf{Y}_{i}^{H_{1}}\right)=1\right)\), then: \[\text{logloss}=-\sum_{i=1}^{N_{0}}\log\left(1-p_{i}^{H_{0}}\right)-\sum_{i=1}^{N_{1}}\log p_{i}^{H_{1}}. \tag{2}\] In this work, we use the stochastic gradient descent-based optimization algorithm Adam (Kingma and Ba, 2015). Since the neural network outputs a probability, instead of setting an arbitrary cut-off of 0.5, we set it such that the method has a pre-specified Type I error rate \(\alpha\). Formally, this cut-off \(q_{\alpha}\) is defined using (1) as: \[q_{\alpha}=\inf_{q\in[0,1]}\left[\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\mathbb{I}\{ F(\mathbf{Y}_{i}^{H_{0}})>q\}\leq\alpha\right]. \tag{3}\] ### Adaptive cut-off In this section we assume for simplicity that the Gaussian training data are spatially dependent and generated from a Matern covariance model (Stein, 1999) with varying degrees of spatial dependence. The proposed adaptive cut-off approach can however be easily generalized to other spatial, temporal and spatio-temporal models. For any two observations \(Y(\mathbf{s}_{i}),Y(\mathbf{s}_{j})\) at two generic locations \(\mathbf{s}_{i},\mathbf{s}_{j}\in\mathbb{R}^{2}\), the covariance in the Matern model is: \[\text{cov}\{Y(\mathbf{s}_{i}),Y(\mathbf{s}_{j})\}=\frac{\sigma^{2}}{2^{\nu-1}\Gamma( \nu)}\left(\frac{\|\mathbf{s}_{i}-\mathbf{s}_{j}\|}{\beta}\right)^{\nu}\mathcal{K}_{ \nu}\left(\frac{\|\mathbf{s}_{i}-\mathbf{s}_{j}\|}{\beta}\right), \tag{4}\] where \(\mathcal{K}_{\nu}\) is the modified Bessel function of the second kind of order \(\nu>0\), and \(\|\mathbf{s}_{i}-\mathbf{s}_{j}\|\) is the Euclidean distance. The parameter \(\sigma^{2}\) specifies the marginal variance and \(\beta>0\) controls the range of the spatial dependence: when we consider a distance \(\sqrt{8\nu}\,\beta\), the spatial correlation is near 0.1 for all \(\nu\) (Stein, 1999). Finally, \(\nu\) specifies the regularity/smoothness of the process, i.e., the degree of mean square differentiability. Since we simulate the training data by varying the spatial range \(\beta\), a single cut-off value independent of this parameter would inevitably result in incorrect Type I error rates. In this work, we propose a more flexible cut-off \(q_{\alpha}\) in (3) as a function of \(\beta\). Specifically, let \(n_{\beta;\text{train}}\) be the number of range parameters for the training set such that \(\beta_{1},\dots,\beta_{n_{\beta;\text{train}}}\) are the parameters used to generate \(\mathbf{Y}_{1}^{H_{0}},\dots,\mathbf{Y}_{N_{0}}^{H_{0}}\). For each \(\beta_{g}\) and its corresponding observations, a cut-off value is elicited as in (3), denoted by \(q_{\alpha}(\beta_{g})\) for \(g=1,\dots,n_{\beta;\text{train}}\). We employ non-parametric kernel regression to estimate the cut-off function based on pairs \((\beta_{1},q_{\alpha}(\beta_{1}))^{\top},\ldots,(\beta_{n_{\beta;\text{train}}},q_{ \alpha}(\beta_{n_{\beta;\text{train}}}))^{\top}\). 
We use a Gaussian kernel and assume that the estimated cut-off at a new testing value \(\beta\) is: \[\hat{q}_{\alpha}(\beta)=\frac{\sum_{g=1}^{n_{\beta;\text{train}}}K_{h}(\beta- \beta_{g})q_{\alpha}(\beta_{g})}{\sum_{g=1}^{n_{\beta;\text{train}}}K_{h}( \beta-\beta_{g})}, \tag{5}\] where \(K_{h}(\beta-\beta_{g})=h^{-1}K\left(h^{-1}(\beta-\beta_{g})\right)\), \(K(z)=\exp\left(-z^{2}/2\right)/\sqrt{2\pi}\) for any \(z\in\mathbb{R}\), and \(h\) is a selected bandwidth. We implement this kernel regression using the R package np (Li and Racine, 2003; Li et al., 2013). ### An existing test for dependent normal data Horvath et al. (2020) introduced a test to determine whether some dependent data on a regular grid can be regarded as a realization of a Gaussian process. We show here the main idea behind their approach, and we refer to their manuscript for a comprehensive derivation of the test statistic and relevant estimators. Their method involves modeling a process that accounts for the spatial correlation and computing two statistics related to sample skewness and kurtosis. The test can be performed since Horvath et al. (2020) demonstrated that the sum of squares of the two statistics asymptotically follows a chi-square distribution with two degrees of freedom. Specifically, the data \(\{Y(\mathbf{s}_{1}),\ldots,Y(\mathbf{s}_{M})\}\), where \(\{\mathbf{s}_{1},\ldots,\mathbf{s}_{M}\}\in\mathbb{Z}^{d}\) are locations in a \(d\)-dimensional spatial domain, are assumed to follow the moving average model \(Y(\mathbf{s})=\mu+\sum_{\mathbf{s}^{\prime}\in\mathbb{Z}^{d}}a(\mathbf{s}^{ \prime})\epsilon(\mathbf{s}-\mathbf{s}^{\prime})\), \(\mathbf{s}\in\mathbb{Z}^{d}\), where \(\mu\) is the process mean and \(\epsilon(\mathbf{s}),\mathbf{s}\in\mathbb{Z}^{d}\) are independent, standard normal innovations. We denote sample skewness and kurtosis with the standardized data by \(\mathcal{S}_{M}\) and \(\mathcal{K}_{M}\) respectively, and by \(\phi_{\mathcal{S}}^{2}\) and \(\phi_{\mathcal{K}}^{2}\) their asymptotic variances (which depend on \(\mu\) and \(a(\mathbf{s}^{\prime})\)). The test statistic is defined as \(\mathcal{S}_{M}^{2}/\hat{\phi}_{\mathcal{S}}^{2}+\mathcal{K}_{M}^{2}/\hat{ \phi}_{\mathcal{K}}^{2}\), where \(\hat{\phi}_{\mathcal{S}}^{2}\) and \(\hat{\phi}_{\mathcal{K}}^{2}\) are kernel estimators whose detailed explanation and comprehensive derivations are given in their paper. In Section 3 of this work, we use this test as a benchmark to compare the performance of our proposed method. ## 3 Simulation Study ### Simulation design We simulate a zero-mean, isotropic Gaussian random field with Matern covariance function in (4) on a two-dimensional unit square regular grid of size \(60\times 60\). We assume \(\nu\in\{0.5,1.0\}\), where the former value simplifies the covariance function to \(\sigma^{2}\exp(-\|\mathbf{s}_{i}-\mathbf{s}_{j}\|/\beta)\). We present results for \(\nu=0.5\) in this section, while the results for \(\nu=1.0\) are deferred to Section A of the supplement. We choose \(n_{\beta;\text{train}}=30\) equally spaced values of \(\beta\) between \(0\) and \(\beta_{\text{max}}=0.234\) (including both endpoints) in the training set, spanning from zero to strong dependence on a unit square. The range parameter bound \(\beta_{\text{max}}\) is chosen so that the _effective range_, i.e., the distance at which the correlation between two locations reaches \(0.05\), is \(0.7\). This bound is valid only for the unit square, so it requires a rescaling in the application, and also depends on \(\nu\). 
In the test set we choose \(n_{\beta;\text{test}}=50\) equally spaced values of \(\beta\) from \(0\) to \(\beta_{\text{max}}\), to demonstrate that the neural network is capable of interpolating between different choices of range parameters. The sets of \(\beta\)s in the training set and testing set are denoted by \(\mathcal{B}_{\text{train}}\) and \(\mathcal{B}_{\text{test}}\), respectively, such that \(|\mathcal{B}_{\text{train}}|=n_{\beta;\text{train}}\) and \(|\mathcal{B}_{\text{test}}|=n_{\beta;\text{test}}\). Non-normal distributions in the training and testing set were created by applying a signed power transformation to the baseline Matern Gaussian random field. Specifically, for an exponent parameter \(p\), a value \(z\) was transformed to \(f(z;p)=|z|^{p}\text{sign}(z)\), for values of \(p\) in the set \(\mathcal{P}_{\text{train}}=\{1.2,1.4,1.6,1.8\}\) in the training set, and in the set \(\mathcal{P}_{\text{test}}=\{1.1,1.2,\ldots,2.0\}\) in the testing set, to demonstrate the neural network's ability to interpolate and (modestly) extrapolate. We denote by \(|\mathcal{P}_{\text{train}}|=n_{p;\text{train}}\) and \(|\mathcal{P}_{\text{test}}|=n_{p;\text{test}}\), and we generate \(n_{\text{sample}}=200\) sample points for each combination of \((\beta,p)\) in the case of non-normal data. Therefore, the training set contains \(n_{\beta;\text{train}}\times n_{p;\text{train}}\times n_{\text{sample}}=24,000\) (non-normal) data points, while the testing set contains \(n_{\beta;\text{test}}\times n_{p;\text{test}}\times n_{\text{sample}}=100,000\) (non-normal) data points. For the null hypothesis, i.e., normal data with \(p=1\), we generate an equivalent number of samples, i.e., the training set contains \(24,000\) points, while the testing set contains \(100,000\) points using the same sets \(\mathcal{B}_{\text{train}}\) and \(\mathcal{B}_{\text{test}}\), respectively. Type I errors for individual normality tests introduced in Section 2.1 are presented in Section 3.2. Results in terms of Type I error and power for our neural network, the linear classifiers, and Horvath et al. (2020)'s method are shown in Section 3.3. ### Classical tests The Type I errors for the classical normality tests increase as the range of dependence increases in the simulation data, as is apparent in Figure 1. These tests are therefore not appropriate given their assumption of independence. Given their uncalibrated Type I error, we do not calculate the power of these tests and do not compare them with the other methods shown in the following sections. ### Tests for dependent data We use \(m=6\) inputs: the four test statistics of the normality tests in Section 2.1 along with the sample skewness and kurtosis. We rely on a neural network with \(L=2\) hidden layers and with \(n_{1}=256\) and \(n_{2}=128\) nodes. To at least partly mitigate overfitting, we use dropout (Srivastava et al., 2014) during training, which randomly removes a fraction of nodes during each training step and acts as a form of regularization. In each of the \(L\) layers, 30% of nodes are randomly removed during each training step. We provide a sensitivity study in Section 3.3.3 to demonstrate the robustness of the results with respect to other choices of network depth, width, and drop-out rate. Inference is performed by minimizing the binary cross-entropy logarithmic loss (2), which is equivalent to maximizing the log-likelihood. 
For each \(\beta\in\mathcal{B}_{\text{train}}\), we set a cut-off at the observed \(1-\alpha=95\)th percentile in (3) using the associated Gaussian data in the training set, such that we collect \((\beta_{1},q_{\alpha}(\beta_{1}))^{\top},\ldots,(\beta_{n_{\beta;\text{train}}},q_{\alpha}(\beta_{n_{\beta;\text{train}}}))^{\top}\) and obtain cut-off functions for the neural network and linear classifiers from the non-parametric kernel regression, as shown in Figure 2.

Figure 2: Simulation study: Non-parametric Gaussian kernel regressions as defined in (5) with a bandwidth \(h=0.3\) for the neural network (red) and linear (blue) classifiers. On the \(x\)-axis is the range parameter \(\beta\) of the Matérn covariance (4), while on the \(y\)-axis the predicted cut-off and the corresponding pointwise 95% confidence interval are represented by solid lines and bands, respectively. The other two parameters are fixed at \(\sigma^{2}=1\) and \(\nu=0.5\).

#### 3.3.1 Type I error comparison

First, we compare the Type I errors for the method in Horváth et al. (2020), the linear and the neural network classifiers assuming that the true \(\beta\)s in \(\mathcal{B}_{\text{test}}\) are known, in order to calibrate the testing data points with a suitable cut-off value from the pre-computed kernel regressions. In practice, the true values of \(\beta\) are unknown and require estimation, so in order to assess the Type I errors in a realistic setting, we estimate \(\beta\) and \(\sigma^{2}\) simultaneously with fixed \(\nu=0.5\) using the software ExaGeoStatR (Abdulah et al., 2023), which provides a unified, high-performance parallel system designed to optimize a covariance-based Gaussian likelihood for spatial data. With the help of advanced high-performance dense linear algebra libraries, ExaGeoStatR offers exact solutions for calculating the inverse of the covariance matrix and its determinant, which are necessary for evaluating the Gaussian log-likelihood. The optimization step in ExaGeoStatR relies on the Bound Optimization BY Quadratic Approximation (BOBYQA) method, a numeric, global, derivative-free and bound-constrained optimization algorithm (Powell, 2009), such that we can obtain faster and more accurate estimation than brute-force methods. Figure 3 illustrates the resulting Type I errors for both the cases of known and unknown parameters. In the first case (known parameters), our adaptive cut-off methods have approximately nominal 5% Type I error rates for all \(\beta\) values (see the red and blue lines in Figure 3), while Horváth et al. (2020)'s method has unstable Type I error rates as the dependence parameter varies (see the green lines in Figure 3). In the second scenario (unknown parameters), the outcomes are still comparable to those of known parameters, although we utilize estimated \(\beta\)s instead of the true values. In Section B of the supplement, we discuss the case where the parameter \(\nu\) is misspecified. Specifically, we train the linear and neural network models using the data generated with \(\nu=1\), while the actual test data are generated with \(\nu=0.5\), and vice versa.
The misspecification of \(\nu\) significantly worsens the size of the tests because the value of \(\beta_{\max}\), which controls the size of a test, is computed based on the wrong \(\nu\), so incorrect cut-off functions are derived (see Figure 2 and Figure S1 in the supplement). In real-world scenarios, \(\nu\) has to be estimated along with the linear models and neural networks. In Section 4, we demonstrate how to practically calibrate the tests with an estimated \(\nu\).

Figure 3: Panel (a): Type I errors for the neural network (red), linear classifier (blue), and Horváth et al. (2020)'s method (green), assuming that the parameters \(\beta\in\mathcal{B}_{\text{test}}\) are known, where \(|\mathcal{B}_{\text{test}}|=50\) and the other two parameters of the Matérn covariance function are fixed, \(\sigma^{2}=1\) and \(\nu=0.5\); Panel (b): Same as (a) when the parameters \(\beta\) on the \(x\)-axis and \(\sigma^{2}\) are estimated by maximum likelihood and the smoothness parameter is fixed to \(\nu=0.5\). The black dashed horizontal line in each panel represents the 5% Type I error.

#### 3.3.2 Power comparison

In order to identify the best test, we need to assess the power under the alternative hypothesis \(H_{1}\) while maintaining a predetermined Type I error rate \(\alpha\). We compare the powers of our proposed neural network model and the linear aggregation with adaptive cut-off, along with the approach in Horváth et al. (2020). Figure 4 shows the power curves as a function of the departure from normality, measured by the exponent \(p\). Each curve is computed as an average across all choices of dependence parameters \(\beta\in\mathcal{B}_{\text{test}}\), assuming that they are known (Panel (a)) or estimated (Panel (b)). It is readily apparent that the neural network classifier achieves the highest power for all choices of \(p\in\{1.1,1.2,\ldots,2.0\}\). Also, our adaptive cut-off method has higher power as the non-normal distribution's tails become heavier (with larger \(p\)). Here, neural networks perform only slightly better than linear combinations. The use of only six inputs can be one reason for the slight improvement in this case. It is expected that the accuracy of neural networks would be enhanced if a larger number of inputs were employed.

Figure 4: Panel (a): Averaged powers across all choices of \(\beta\in\mathcal{B}_{\text{test}}\) for the neural network (red), linear classifier (blue), and Horváth et al. (2020)'s method (green) given a value of the exponent \(p\) (i.e., the non-normality parameter) on the \(x\)-axis, assuming that the parameters \(\beta\in\mathcal{B}_{\text{test}}\) are known, where \(|\mathcal{B}_{\text{test}}|=50\) and the other two parameters of the Matérn covariance function are fixed, \(\sigma^{2}=1\) and \(\nu=0.5\); Panel (b): Same as (a) when the parameters \((\beta,\sigma^{2})\) are estimated by maximum likelihood and the smoothness parameter is fixed, \(\nu=0.5\). The black dashed horizontal line in each panel represents a power of \(5\%\).

#### 3.3.3 Sensitivity analysis

We perform a sensitivity analysis with respect to the choice of depth \(L\), width \((n_{1},n_{2})\), and dropout rate of the neural network. First, we consider the same dropout rate of \(0.3\) but different numbers of layers and nodes: 1) three hidden layers with \((n_{1},n_{2},n_{3})=(256,128,64)\); 2) two hidden layers with \((n_{1},n_{2})=(32,16)\); and 3) one hidden layer with \(n_{1}=128\).
Second, we use the same number of layers and nodes as in Section 3.3 but different dropout rates, \(0.6\) or \(0.1\). Hence, we have a total of six distinct network structures, including the original one, and the results are summarized in Table 1.

\begin{table} \begin{tabular}{c c c c c c c} \hline & Model 1 & Model 2 & Model 3 & Model 4 & Model 5 & Model 6 \\ \hline \# of layers & 2 & 3 & 2 & 1 & 2 & 2 \\ \# of nodes & (256, 128) & (256, 128, 64) & (32, 16) & (128) & (256, 128) & (256, 128) \\ Drop-out & 0.3 & 0.3 & 0.3 & 0.3 & 0.6 & 0.1 \\ \hline \end{tabular} \end{table} Table 1: Summary of the different network architectures: Model 1 is the original network used in Section 3.3. The dropout rate in Models 2, 3, and 4 is identical to that of the original model; however, they differ in their network structures. Models 5 and 6 have modified dropout rates with the same structure as the original one.

We also recompute the Type I error and power of Figure 3-(a) and Figure 4-(a) for all models. The results, shown in Figure 5, indicate that all six networks display a very similar pattern.

Figure 5: (a): Type I errors for various architectures of neural networks—Model 1 (red), Model 2 (blue), Model 3 (green), Model 4 (yellow), Model 5 (orange), and Model 6 (brown)—assuming that the parameters \(\beta\in\mathcal{B}_{\text{test}}\) are known, where \(|\mathcal{B}_{\text{test}}|=50\); (b): Overall powers for the various architectures of neural networks, computed as an average over all values of \(\beta\in\mathcal{B}_{\text{test}}\). For both panels, the other two parameters of the Matérn covariance function are fixed to \(\sigma^{2}=1\) and \(\nu=0.5\), and the black dashed horizontal lines represent \(y=0.05\).

## 4 Testing Normality for Global Climate Data

### Motivation

Climate change is bound to affect both natural and human systems, with varying outcomes depending on the region, economic sector, and time. The magnitude and range of future
Testing for normality in this framework is therefore of high relevance as it would provide indications as to which modeling strategy would be more appropriate: a Gaussian process emulator (Sacks et al., 1989) or more complex trans-Gaussian (Jeong et al., 2019; Tagle et al., 2020) or latent Gaussian models (Zhang et al., 2023). In this application, we make use of our adaptive cut-off method to assess normality of a widely used collection of climate simulations under different levels of aggregation. ### CMIP6 data We focus on the data from the Coupled Model Intercomparison Project Phase 6 (CMIP6, Eyring et al. (2016)), the reference collection of simulations (_ensemble_) of the Intergovernmental Panel on Climate Change Assessment Report 6 (Juckes et al., 2020) and in particular on the MIROC-ES2L model (Hajima et al., 2020) given its complete record of simulations. We consider on monthly near surface air temperature data (at 2 meters above the ground level, in Celsius) under SSP245, an intermediate scenario in terms of global mean temperature increase and degree of global socio-economic collaboration throughout the 21st century (Van Vuuren et al., 2014). The data set comprises \(T=12\times 86=1032\) time points (all months in 2015-2100) on a regular \(2.79^{\circ}\times 2.81^{\circ}\) latitude and longitude grid, for a total of \(M=64\times 128=8192\) locations. We denote the temperature as \(Y_{t}(\mathbf{s}_{i})\) at location \(i=1,\ldots,M\) and time point \(t=1,\ldots,T\). Before assessing normality, we provide a model for the trend and the temporal dependence, which need to be removed before applying our proposed methdology. ### Modeling trend and temporal dependence We consider the following additive spatio-temporal autoregressive moving average (ARMA)-like model: \[Y_{t}(\mathbf{s}_{i}) = \mu_{r(t)}(\mathbf{s}_{i})+\epsilon_{t}(\mathbf{s}_{i}), \tag{6a}\] \[\epsilon_{t}(\mathbf{s}_{i}) = \sum_{j=1}^{p}\psi_{j;i}\epsilon_{t-j}(\mathbf{s}_{i})+\sum_{k=0}^ {q}\theta_{k;i}\eta_{t-k}(\mathbf{s}_{i}). \tag{6b}\] where \(\theta_{0;i}=1\), \(\mu_{r(t)}(\mathbf{s}_{i})\) is the monthly trend with indices \(r(t)\in\{0,\ldots,11\}\) representing the remainder when \(t\) is divided by \(12\) and \(\eta_{t}(\mathbf{s}_{i})\) is a zero-mean residual uncorrelated in time. Further, we assume that \(\text{Var}\{\epsilon_{t}(\mathbf{s}_{i})\}=\sigma_{r(t)}^{2}(\mathbf{s}_{i})\) for \(t=1,\ldots,T\), i.e., there is a month-specific variance. For each location independently, both mean and variance are estimated in a non-parametric fashion with a moving window estimator: \[\widehat{\mu}_{r(t)}(\mathbf{s}_{i})=\frac{1}{|A_{r(t)}|}\sum_{t\in A_{r(t)}}Y _{t}(\mathbf{s}_{i}),\quad\widehat{\sigma}_{r(t)}^{2}(\mathbf{s}_{i})=\frac{1 }{|A_{r(t)}|}\sum_{t\in A_{r(t)}}\left\{Y_{t}(\mathbf{s}_{i})-\widehat{\mu}_{ r(t)}(\mathbf{s}_{i})\right\}^{2},\] where \(A_{r(t)}=\{t:t\bmod 12=r(t)\}\). The average \(R^{2}\) across all locations is \(0.80\) with standard deviation \(0.21\) and \(89\%\) values of \(R^{2}\) are greater than \(0.5\), which is better than harmonic regression (performed in the supplementary material). 
We then remove the trend and the variance by computing the standardized residuals: \[\widehat{\epsilon}_{t}(\mathbf{s}_{i})=\frac{Y_{t}(\mathbf{s}_{i})-\widehat{ \mu}_{r(t)}(\mathbf{s}_{i})}{\widehat{\sigma}_{r(t)}(\mathbf{s}_{i})}.\] Finally, for each location, we perform inference on the ARMA model (6b) on \(\widehat{\epsilon}_{t}(\mathbf{s}_{i})\) using the R package forecast (Hyndman and Khandakar, 2008), with the orders \(p\) and \(q\) selected via the Bayesian information criterion (BIC). Once the model orders are identified, the model parameters \(\psi_{j;i}\) and \(\theta_{k;i}\) are estimated by maximum likelihood, and we use them to compute the residuals \(\widehat{\eta}_{t}(\mathbf{s}_{i})\) as estimates of our target quantity \(\eta_{t}(\mathbf{s}_{i})\). Intuitively, the normality assumption for the air temperature data would be violated due to the occurrence of exceptional temperatures at certain locations, resulting in heavier tail probabilities compared to a Gaussian distribution. Hence, it might not be preferable to employ the normality assumption for modeling the original temperature data. In this regard, we are interested in assessing the impact of spatial aggregation on the normality of \(\widehat{\eta}_{t}(\mathbf{s}_{i})\). To simplify the exposition, we abuse notation and use the same expression for the residuals at different levels of spatial aggregation.

### Data aggregation

The emulator residuals \(\hat{\eta}_{t}(\mathbf{s}_{i})\) are likely not normal at the native grid resolution, as it is expected that some locations will have unusual temperatures with heavier-than-normal tails. However, some degree of spatial aggregation should result in more normal residuals, and we aim at formally testing this assumption with our proposed approach. We partition the pixels (locations) into smaller squares and compute the mean of the estimated residuals, \(\hat{\eta}_{t}(\mathbf{s}_{i})\), within each square. We choose squares of sizes \(2\times 2\), \(4\times 4\), \(8\times 8\), and \(16\times 16\), such that the corresponding aggregated data have \(M=2048,512,128,32\) locations, respectively. Figure 6 shows the map of the estimated residuals in January 2015 at four different levels of aggregation.

Figure 6: Standardized emulator residuals \(\hat{\eta}_{t}(\mathbf{s}_{i})\) in January 2015 for different levels of spatial aggregation. (a): original grid resolution; (b): 4 observations in each square of size \(2\times 2\); (c): 64 observations in each square of size \(8\times 8\); (d): 256 observations in each square of size \(16\times 16\).

### Calibration of classifiers

First of all, we simulate the data from a Gaussian distribution using the Matérn covariance in (4) with \(\nu\in\mathcal{N}_{\text{train}}=\{0.5,1.0,1.5,2.0,2.5,3.0\}\), covering rough to smooth spatial processes, \(\sigma^{2}=1\), and \(\beta\in\mathcal{B}_{\text{train}}=\{0,\ldots,\beta_{\text{max}}\}\), thereby covering independence to strong dependence, with \(n_{\beta;\text{train}}=|\mathcal{B}_{\text{train}}|=30\) as in Section 3. The range parameter bound \(\beta_{\text{max}}\) depends on the choice of \(\nu\) and the spatial domain.
Since in the case of a unit square we had an effective range of \(0.7\) corresponding to strong dependence, for the domain here we rescale it using the following ratio: effective range/maximum distance \(=0.7/\sqrt{2}\), where the maximum distance and the effective range are \(12742\) km and \(6307\) km, respectively, in chordal distance for all levels of aggregation. The different values of \(\beta_{\text{max}}\) across the different choices of the smoothness parameter \(\nu\) are shown in Table S2 of the supplementary materials. Here, we emphasize that we train six pairs of neural network and linear classifiers, one pair for each value of \(\nu\), and every testing data point is assigned to one of the six pairs based on the estimated value of \(\nu\). For non-normal data, the same transformation as in Section 3 is used with \(p\in\mathcal{P}_{\text{train}}=\{1.2,1.4,1.6,1.8\}\). We draw \(n_{\text{sample}}=200\) sample points for each setup, such that we have \(n_{\nu;\text{train}}\times n_{\beta;\text{train}}\times n_{p;\text{train}} \times n_{\text{sample}}=144,000\) non-normal data points and the same amount of normal data points, where \(n_{\nu;\text{train}}=|\mathcal{N}_{\text{train}}|=6\). Calibration is performed with the simulated normal and non-normal data, and the resulting cut-off functions for each value of \(\nu\in\mathcal{N}_{\text{train}}\) are obtained using non-parametric kernel regression, as illustrated in Figure 7. For the structure of the neural networks, the number of hidden layers is \(L=2\), with \(n_{1}=256\) and \(n_{2}=128\) nodes, and we use \(m=5\) inputs among those used in Section 3. We do not use the Shapiro-Wilk test because the number of locations at the original resolution, \(M=8192\), exceeds the maximum allowed by the R implementation of the test (see the discussion about the reliability of the test for large \(M\) in Section 2.1).

Figure 7: Application (native grid resolution): Non-parametric Gaussian kernel regressions as defined in (5) with a bandwidth \(h=0.3\) for the neural network (red) and linear (blue) classifiers. On the \(x\)-axis is the range parameter \(\beta\) of the Matérn covariance (4), while on the \(y\)-axis the predicted cut-off and the corresponding pointwise 95% confidence interval are represented by solid lines and bands, respectively. Since the residuals are normalized, we set \(\sigma^{2}=1\), while the smoothness parameter is equal to (a) \(\nu=0.5\) and (b) \(\nu=1.0\).

To determine suitable neural network and linear classifiers and corresponding cut-off values for each time point \(t=1,\ldots,T\), we estimate the Matérn parameters \((\sigma^{2},\beta,\nu)\) simultaneously, given the location information with chordal distances and the spatial residuals \((\hat{\eta}_{t}(\mathbf{s}_{1}),\ldots,\hat{\eta}_{t}(\mathbf{s}_{M}))^{\top}\), using the package ExaGeoStat (Abdulah et al., 2018), which relies on BOBYQA optimization (Powell, 2009). Then, each testing data vector is allocated to a trained neural network and a linear classifier according to the closest approximation of the estimated smoothness parameter. For example, if the estimated smoothness parameter for a data vector is \(\hat{\nu}=0.8\), we use the neural network and linear classifier calibrated with \(\nu=1\); if \(\hat{\nu}=0.3\), we use the neural network and linear classifier calibrated with \(\nu=0.5\).

### Test results

For the different levels of data aggregation we perform the calibration as detailed in Section 4.5 and compute the rejection rates across all time points (\(T=1032\)). The results are shown in Table 2. As expected from the central limit theorem, as the spatial aggregation increases, both the neural network and the linear test indicate that the residuals become more normally distributed. Indeed, at native resolution the normality tests are rejected for more than 95% of time points for both classifiers, while higher levels of aggregation decrease the rejection rates down to approximately 20%.
The neural network model is overall less favorable towards the normality assumption, and the discrepancy between the two approaches is slightly higher when the degree of spatial aggregation is moderate (\(M=512\)). As we expected, the rejection rate is very high at the original resolution of the temperature data and, interestingly, it is still high at the moderate level of aggregation, therefore flagging the normality assumption as generally inappropriate. This can likely be attributed to the large number of time points (\(T=1032\)), which results in high power of a normality test against any alternative distribution.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Rejection rate & All locations (\(M=8192\)) & \(M=2048\) & \(M=512\) & \(M=128\) & \(M=32\) \\ \hline NN & 0.994 & 0.967 & 0.845 & 0.532 & 0.227 \\ Linear & 0.958 & 0.924 & 0.735 & 0.511 & 0.191 \\ \hline \hline \end{tabular} \end{table} Table 2: Rejection rates for the estimated residuals of the emulator (6a) for the neural network and linear normality testing approaches. The results are shown across the different levels of spatial aggregation.

## 5 Discussion and Conclusion

We proposed a new test of Gaussianity for dependent data that merges the test statistics of individual normality tests (which may or may not assume dependence) via neural networks. By means of a simulation study, we have shown how the proposed approach results in higher power than the individual tests, as well as than a linear aggregation of the tests. Our application to temperature data highlighted how increasing the level of spatial aggregation results in more normal data, as could be expected from the central limit theorem. The proposed approach has been applied here to normality testing for dependent data, but its scope is far more general. In fact, other marginal distributions can be tested: a generalized extreme value distribution can be assessed for maxima at different levels of temporal aggregation, or skew-normality for high-resolution weather data. Such an approach could also be generalized to multivariate data to test marginal univariate properties. While the proposed approach represents a significant step forward in assessing Gaussianity under dependence, it comes with several caveats that a practitioner must be aware of. Firstly, the method must assume a given structure of spatial dependence, so the reliability of the results is inextricably linked with the assumptions associated with it, most noticeably isotropy and stationarity. While these assumptions may be hard to defend for the original data, the focus on residuals at least partially justifies the spatial structure. Additionally, the proposed method depends on a prespecified type of alternative hypothesis, in this case a non-Gaussian power transformation, and this may or may not be a good alternative hypothesis depending on the application.
## Supporting Information and Data Availability

The code for this work is available at [https://github.com/stat-kim/adaptive-cutoff](https://github.com/stat-kim/adaptive-cutoff). The data that support the findings of this study are openly available as part of the Large Ensemble project at the National Center for Atmospheric Research at www.earthsystemgrid.org.

## Acknowledgments

We would like to thank Brian Greco for insightful discussions. This research was supported by the King Abdullah University of Science and Technology (KAUST).
2303.15489
Railway Network Delay Evolution: A Heterogeneous Graph Neural Network Approach
Railway operations involve different types of entities (stations, trains, etc.), making the existing graph/network models with homogeneous nodes (i.e., the same kind of nodes) incapable of capturing the interactions between the entities. This paper aims to develop a heterogeneous graph neural network (HetGNN) model, which can address different types of nodes (i.e., heterogeneous nodes), to investigate the train delay evolution on railway networks. To this end, a graph architecture combining the HetGNN model and the GraphSAGE homogeneous GNN (HomoGNN), called SAGE-Het, is proposed. The aim is to capture the interactions between trains, trains and stations, and stations and other stations on delay evolution based on different edges. In contrast to the traditional methods that require the inputs to have constant dimensions (e.g., in rectangular or grid-like arrays) or only allow homogeneous nodes in the graph, SAGE-Het allows for flexible inputs and heterogeneous nodes. The data from two sub-networks of the China railway network are applied to test the performance and robustness of the proposed SAGE-Het model. The experimental results show that SAGE-Het exhibits better performance than the existing delay prediction methods and some advanced HetGNNs used for other prediction tasks, and that the predictive performances of SAGE-Het under different prediction time horizons (10/20/30 min ahead) all outperform other baseline methods. Specifically, the influences of train interactions on delay propagation are investigated based on the proposed model. The results show that train interactions become subtle when the train headways increase. This finding directly contributes to decision-making in situations where conflict-resolution or train-canceling actions are needed.
Zhongcan Li, Ping Huang, Chao Wen, Filipe Rodrigues
2023-03-27T12:08:34Z
http://arxiv.org/abs/2303.15489v1
# Railway Network Delay Evolution: A Heterogeneous Graph Neural Network Approach

###### Abstract

Railway operations involve different types of entities (stations, trains, etc.), making the existing graph/network models with homogeneous nodes (i.e., the same kind of nodes) incapable of capturing the interactions between the entities. This paper aims to develop a heterogeneous graph neural network (HetGNN) model, which can address different types of nodes (i.e., heterogeneous nodes), to investigate the train delay evolution on railway networks. To this end, a graph architecture combining the HetGNN model and the GraphSAGE homogeneous GNN (HomoGNN), called SAGE-Het, is proposed. The aim is to capture the interactions between trains, trains and stations, and stations and other stations on delay evolution based on different edges. In contrast to the traditional methods that require the inputs to have constant dimensions (e.g., in rectangular or grid-like arrays) or only allow homogeneous nodes in the graph, SAGE-Het allows for flexible inputs and heterogeneous nodes. The data from two sub-networks of the China railway network, namely the Guangzhou South network (GZS-Net) and the Changsha South network (CSS-Net), are applied to test the performance and robustness of the proposed SAGE-Het model. The experimental results show that SAGE-Het exhibits better performance than the existing delay prediction methods and some advanced HetGNNs used for other prediction tasks; the predictive performances of SAGE-Het under different prediction time horizons (10/20/30 min ahead) all outperform other baseline methods; the accuracies are over 90% under a permissible 3-minute error for the three prediction time horizons. Specifically, the influences of train interactions on delay propagation are investigated based on the proposed model. The results show that train interactions become subtle when the train headways increase (e.g., when the train headways are over 20 min, canceling the edges does not decrease the prediction performance). This finding directly contributes to decision-making in situations where conflict-resolution or train-canceling actions are needed.

**Keywords:** Heterogeneous graph neural network, GraphSAGE, Railway network, Delay evolution, Train interactions

## 1 Introduction

Railway networks are complex systems consisting of multitudinous fixed facilities, including stations and tracks, and moving objects, most notably trains. The states of running trains are a result of interactions between the facilities and objects in the systems, as well as the effects of the external environment, such as bad weather (Huang et al., 2020; Wang and Zhang, 2019). As a measurement of the deviation between the actual and scheduled operation plans, delays are generally used to describe the state of a railway network, where trains are the most common moving objects in the system. As such, the delays of running trains are good indicators for evaluating the state of a railway network. Network-oriented train delay evolution is thus critical for railway operators and controllers, as a comprehensive understanding of train delays in the network improves the quality of traffic control actions and rescheduling strategies (Sciences, 2018; Wen et al., 2019). Due to the availability of data, numerous state-of-the-art machine-learning methods have been used in railway systems (Oneto et al., 2017; Ye et al., 2021). Among them, delay prediction or propagation is a popular field.
However, the majority of previous delay prediction and propagation research is train-oriented (Huang et al., 2020; Markovic et al., 2015); in other words, it concentrates mainly on individual trains and forecasts the delays of each train at the downstream stations. In practice, however, dispatchers need to pay more attention to the network states from a systematic perspective, i.e., the delays of the railway network. In this study, we model train delays from a network perspective, considering the different kinds of trains and stations in the systems. The proposed network-oriented approach considers all trains at each moment and explores the evolution of railway network delay by predicting the delays of the running trains in the network after a given time interval. The network-oriented approach is expected to support dispatchers with a more comprehensive understanding of the state of the entire railway network, thus enabling them to make a global, rather than partial, adjustment plan. Although all elements, such as trains, stations, disturbance events, etc., can be viewed as entities that interact to cause delays in the railway network, not all trains are simultaneously impacted by the same entities; for instance, at a given timestamp, some trains in the railway network are affected by a facility failure, but others are not. Thus, trains will have diverse input features in each prediction task. Previous studies were mainly based on traditional machine learning algorithms (e.g., the random forest (RF) (Nair et al., 2019), support vector regression (SVR) (Markovic et al., 2015), etc.) and graph-based approaches (e.g., graph neural networks (GNNs) (Heglund et al., 2020; Zhang et al., 2021) and Bayesian networks (Corman and Kecman, 2018; Lessan et al., 2019)). Machine learning models typically take rectangular or grid-like arrays as inputs (Sanchez-Lengeling et al., 2021); in other words, they require all samples to have the same input dimensions (i.e., the width and length of the array). To this end, samples with missing features or values may be intentionally filled with human-specified values (e.g., zero or one). This artificial supplementation inevitably requires prior and domain knowledge, potentially increasing the model complexity and lowering the accuracy of the results. The existing graph-based approaches for delay prediction only allow for homogeneous nodes (i.e., the same type of node). Heterogeneous GNNs (HetGNNs) update node information based on the adjacent edges and neighboring nodes, and allow nodes to have heterogeneous neighboring nodes (different kinds of neighboring nodes). Therefore, a HetGNN is a great fit for complex systems in which multiple types of entities must be considered. The railway network at each timestamp can be viewed as a separate graph, with various entities (i.e., trains and stations) acting as nodes. The HetGNN can address heterogeneous neighboring nodes and enables each node to connect with various numbers of nodes by edges (Hu et al., 2020; Wang et al., 2019). This means that the HetGNN can eliminate the inconsistency of the input dimensions between train and station features. Thus, trains and stations can be viewed as different types of nodes. Then, connecting the trains to the interacting stations (or between trains), the HetGNN can be applied to explore the delay evolution of the railway network by predicting the delays of running trains on the graph (i.e., the railway network).
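To make this idea concrete, here is a minimal sketch of how one snapshot of a railway network could be encoded as a heterogeneous graph, using PyTorch Geometric's HeteroData as an illustrative backend; the node-type names, counts, relations, and feature dimensions are placeholders of ours, not values from the paper.

```python
import torch
from torch_geometric.data import HeteroData

snapshot = HeteroData()
snapshot['train'].x = torch.randn(8, 5)      # 8 running trains, 5 features each
snapshot['station'].x = torch.randn(12, 3)   # 12 stations, 3 features each

# Directed edges as (source type, relation, destination type) triplets; each
# column of edge_index is one edge, so every train may have a different number
# of neighbours -- no padding to a fixed-size input array is required.
snapshot['train', 'interacts', 'train'].edge_index = torch.tensor([[1, 2], [0, 1]])
snapshot['station', 'serves', 'train'].edge_index = torch.tensor([[0, 1], [0, 1]])
```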
We develop the delay evolution model (SAGE-Het) by using GraphSAGE (Hamilton et al., 2017) as the basic HomoGNN in the proposed HetGNN model. Leveraging the latter, the strengths of the interactions between nodes (i.e., trains and stations) are clarified by experimenting under different train headways. Thus, the results directly contribute to decision-making in situations where traffic control/rescheduling action is needed. Therefore, the main contributions of this research are four-fold.

1. A novel network-oriented HetGNN approach is proposed to predict the future delays of running trains on the whole network, which contrasts with existing models that only predict the delay of one train at downstream stations.
2. A hybrid framework that can address heterogeneous nodes is developed to consider the interaction between different types of entities (trains and stations) in railway systems.
3. A graph model called SAGE-Het, whose nodes allow for different numbers of heterogeneous neighboring nodes and flexible inputs, is put forward. This provides a better fit for complex systems with diverse numbers of influencing factors at different moments, such as railways.
4. The SAGE-Het model is leveraged to clarify the strength of train interactions by experimenting (canceling edges between train-train nodes) under different train headways (e.g., 3, 5, 10, and 20 minutes).

The remainder of this paper is organized as follows. In Section 2, related studies that focused on delay prediction and some applications of GNNs are reviewed. Section 3 describes the problem. Next, the method of this study, namely SAGE-Het, which combines the HetGNN model and the GraphSAGE model, is designed in Section 4. The data are introduced and the SAGE-Het model is implemented in Section 5. In Section 6, the performances of different baselines and SAGE-Het are compared and analyzed, and the train-train interaction on delay evolution is investigated. Finally, the conclusions and discussion of this work are presented in Section 7.

## 2 Literature Review

Delay prediction is a typical supervised-learning task, which usually aims at estimating the delay duration (Markovic et al., 2015; Nair et al., 2019), delay influences (Huang et al., 2020; Kecman and Goverde, 2015), and delay patterns (Huang et al., 2022). With the development of data collection and storage techniques and computing ability, data-driven methods have been widely employed in delay prediction based on historical data. According to the methods employed, related studies can be classified into those using traditional machine learning methods and those using GNNs. The train operation process is a chronological process in which the train arrivals and departures can be abstracted as nodes, while the section running and dwelling times are treated as edges to link these nodes. Relying on this abstraction, some Markov property-based methods have been applied to predict arrival or departure delays. The Markov chain (MC) is an essential Markov property-based method that considers the previous states to decide the current states. For delay prediction, MC-based studies usually consider that the arrival delays depend on the departure delay at the last station, while the departure delays are influenced by the arrival delays at the same stations (Barta et al., 2012; Gaurav and Srivastava, 2018; Kecman et al., 2015; Sahin, 2017). Another Markov property-based method is the Bayesian network (BN).
Compared with the MC, the BN can usually link several previous states in a network format (e.g., the current arrival can be linked with the arrival and departure states at the previous station(s)). In addition, the linking methods of the BN have been studied by relying on some optimization algorithms (Huang et al., 2020; Lessan et al., 2019), resulting in hybrid BNs with better performance. Existing BN-based delay prediction studies have been applied to tackle several different problems, such as the delay lengths of trains (Corman and Kecman, 2018; Lessan et al., 2019; Li et al., 2021), the delay effects (i.e., primary delays, the number of delayed trains, and the total delay times) (Huang et al., 2020), the impacts of delays at stops on the network (Ulak et al., 2020), and the disruption duration (Zilko et al., 2016). These Markov property-based methods are highly interpretable in terms of train operations, so they have been widely used, but they only consider train delays as node attributes. However, delays are influenced by various internal and external factors of the railway system, meaning that the performance of these methods is hindered by this limitation. Traditional machine learning methods used in delay prediction mainly include tree-based algorithms (e.g., the RF), SVR, artificial neural networks (ANNs), and hybrid neural networks. The decision tree is a classical machine learning method with a simple structure but high prediction accuracy (Zilko et al., 2016); therefore, its variants have been widely employed. For example, the RF (Gaurav and Srivastava, 2018; Jiang et al., 2019; Klumpenhouwer and Shalaby, 2022; Nair et al., 2019), an ensemble algorithm based on decision trees, and some boosting algorithms (e.g., XGBoost (Li et al., 2020; Shi et al., 2021) and the gradient boosted decision tree (GBDT) (Wang and Zhang, 2019)) have been used for delay prediction. SVR is another commonly used algorithm for modeling delay propagation, with applications such as delay time prediction (Barbour et al., 2018; Markovic et al., 2015), delay recovery prediction (Wang et al., 2021), and running time prediction (Huang et al., 2020). In addition, the \(k\)-nearest neighbors (KNN) algorithm (Li et al., 2016; Pongnumkul et al., 2014) and deep extreme learning machines (DELMs) (Oneto et al., 2017, 2018) have also been applied to delay prediction. Simple ANNs have been applied to delay prediction in several railway networks (Kecman and Goverde, 2015; Peters et al., 2005; Yaghini et al., 2013), but with the development of neural network techniques, more advanced neural networks have been proposed to capture specific dependencies or address specific data attributes in railway systems. For instance, the recurrent neural network (RNN) and its variants (e.g., long short-term memory (LSTM) and gated recurrent units (GRUs)) are capable of handling time-series data. Train operation along stations and the chronological train arrival process at a station can both be treated as time-series processes; thus, LSTM has been used to extract the hidden information in the delay prediction process (Huang et al., 2020; Li et al., 2022; Wen et al., 2020). However, the train operation process is influenced by different kinds of entities, which makes it difficult to make an accurate prediction via the use of only a single neural network.
Therefore, hybrid neural networks that can use diverse neural network blocks to handle different kinds of data, such as the combinations of an LSTM and a fully-connected neural network (FCNN) (Huang et al., 2020), a convolutional neural network (CNN) and an FCNN (Huang et al., 2021; Huang et al., 2020), and a CNN, LSTM, and FCNN (Li et al., 2022), have been proposed to predict delays. These traditional machine learning methods are typical supervised learning methods by which the relationship between the input and output features is learned, based on which the future is then predicted. In the modeling process of these traditional machine learning methods, the influencing factors (i.e., input features) for each sample must be the same, which requires samples with some missing input features to be filled artificially. This artificial supplementation inevitably requires prior and domain knowledge, potentially increasing the model's complexity and lowering the accuracy of the results. GNN-related studies have achieved numerous successes, thus contributing to their wide application in many transportation fields, such as traffic state prediction (Cui et al., 2019), traffic flow prediction (Guo et al., 2019; Wang et al., 2020), and traffic demand prediction (Zou et al., 2021). In terms of railway delay prediction, some studies (Heglund et al., 2020; Li et al., 2021) have applied graph convolutional networks (GCNs) with a spatiotemporal attention mechanism to predict railway delays. Similarly, based on the spatiotemporal GCN (STGCN), a previous study (Zhang et al., 2021) predicted the number of delayed trains at each station in the railway network by taking stations as nodes. In addition, a multi-layer time-series graph neural network (MTGNN) model has been established to predict the arrival delay under diverse delay causes (Ding et al., 2021). Alternatively, by considering each train at each previous station (Ding et al., 2021; Li et al., 2021) or section (Heglund et al., 2020) as a node, the delay(s) of a given train at the downstream station(s) were estimated according to the delay(s) at the previous station(s). As with the traditional machine learning studies, these HomoGNN-based delay prediction studies were single-train-oriented (Ding et al., 2021; Heglund et al., 2020; Li et al., 2021). These railway delay prediction studies all used HomoGNNs, which can only address homogeneous nodes. Suppose some samples have specific influencing factors while others do not (e.g., some delayed trains are influenced by a facility failure, while others are not). In this case, these samples must be filled artificially or deleted. HetGNNs can address the shortcomings of the previous HomoGNN-based delay prediction studies because they can have heterogeneous nodes. HetGNNs have been used in text classification (Linmei et al., 2019), system recommendation (Fan et al., 2019), and spam review detection (Li et al., 2019). In the traffic domain, an approach based on the heterogeneous graph attention network for the prediction of traffic speed has been proposed (Jin et al., 2021). The trajectories of different traffic participants (e.g., vehicles, pedestrians, and cyclists) have been predicted using a hierarchical HetGNN model (Li et al., 2021c). In terms of railway delay prediction, there is no existing HetGNN-based study. The preceding review of the existing delay prediction literature demonstrates that previous studies mainly relied on various traditional machine learning or HomoGNN methods.
These methods require rectangular or grid-like arrays as input features. If the samples have different numbers of features, the values must be filled artificially. In other words, previous studies could only address consistent input features (with the same input dimensions). A HetGNN updates the information of nodes by relying on their neighboring nodes (edges), and each node can have different numbers of heterogeneous neighboring nodes (edges). Thus, to address the gap, this paper proposes a HetGNN model in which trains and stations are treated as nodes and linked with the corresponding affected train nodes to explore railway delay evolution. This allows the model to take the interactions between different types of trains and stations in railway systems into account, thus potentially boosting the accuracy of delay evolution estimation.

## 3 Problem Statement

Railway systems are composed of various entities, including stations, trains, tracks, the external environment, etc. Train delays, which are the consequences of interactions among diverse entities in the railway network, are mainly used to measure the state of the railway network. We propose a network-oriented approach to explore the evolution of railway network delay by estimating the future delays of running trains, considering the influences of diverse entities. Let us use **Fig. 1** to clarify the delay evolution problem. In the figure, TS 1 represents a terminal station, which has multiple arrival-departure yards to serve train operations. PS 1 to PS 11 represent passing (intermediate) stations, which are connected only by one railway line. **Fig. 1(A)** presents the current railway network state (at timestamp _T_), while the to-be-predicted railway network state (at timestamp \(T\) + \(\Delta\)_T_) is exhibited in **Fig. 1(B)**.

Figure 1: The railway network delay evolution process.

We assume that Trains 0-7 are running from passing stations toward the terminal station TS 1 at timestamp \(T\); they are therefore running trains. Train 8 has already arrived at TS 1 and is therefore called a terminated train at timestamp \(T\). TS 1 has three arrival-departure yards (TS 1-1, TS 1-2, TS 1-3 in **Fig. 1**) to serve trains on four railway lines. These yards are linked with each other. The aim of this work is to estimate the delays of all running trains at _T_+\(\Delta\)_T_ on a network, based on the current railway network state (i.e., at timestamp _T_). We model the effects and interactions between trains and stations using a heterogeneous graph \(G\), where \(G\) = (_V_, _E_) (_V_ and \(E\) denote the node and edge sets, respectively). We construct the graph \(G\) to model the following influences between entities in the railway system. First, train-train interactions are crucial for delay evolution; for example, in **Fig. 1**, if Train 1 is delayed and the headway between it and Train 0 is short, Train 0 may be delayed due to the minimum headway requirement. We use the change of running trains' delays to explore the railway delay evolution on the network. Given a particular timestamp, running trains differ from terminated trains because the latter have already arrived at their terminal stations and thus do not need to be predicted at the next timestamp. However, terminated trains still influence other running trains. This means that all trains should be addressed as nodes in the graph to capture the train interactions.
Therefore, running trains and terminated trains are defined as RT and TT nodes in the heterogeneous graph (i.e., the railway network). Edges (also known as metapaths) reflect the interactions between nodes. We define two kinds of edges to reflect the train-train interaction, including the interactions between RTs and between RTs and TTs. Here, the edges between RTs are designated as _RT-rr-RT_, and the edges between RTs and TTs are defined as _RT-rt-TT_. The relationship between nodes is represented by the intermediate abbreviations (e.g., _rr_ and _rt_) in the edges; for instance, _write_ is the relationship between the nodes Author and Paper in the edge _Author-write-Paper_. Second, trains update their delay states when passing the signal record points, which are the stations in this study (i.e., trains update the delay states at stations). Thus, stations also influence the delay evolution of the network. A typical example is that trains have to wait for the opening of the train routes (releasing of the block) before entering the station due to potential route conflicts. From the perspective of railway operations, there are two types of stations: passing (intermediate) and terminal stations. Trains only pass through passing stations, while they terminate at terminal stations. In addition, passing stations are usually traversed by only one railway line (as shown in **Fig. 1**, where there is only one railway line from PS 1 to PS 2). Therefore, only trains running on one line will arrive at or depart from passing stations. However, a terminal station may have several arrival-departure yards linking different railway lines (e.g., TS 1 has TS 1-1, TS 1-2, and TS 1-3 in **Fig. 1**). Trains running on multiple lines may arrive at or depart from TSs. For example, trains on any of the four railway lines can arrive at or depart from TS 1 in **Fig. 1**. In TSs, trains may run from one yard to another for transferring lines. This means that interactions between trains from different lines at terminal stations are possible. Hence, passing stations and the arrival-departure yards of terminal stations are referred to as PS and TS nodes, respectively, in the proposed graph model, meaning that a terminal station may contain multiple nodes. TS and PS nodes are referred to as station nodes in this study. We name the edges between station nodes and RT nodes _Station-sr-RT_ and the edges between station nodes _Station-ss-Station_. The proposed graph thus contains four types of nodes, namely RTs (e.g., Train 0 in **Fig. 1(A)**), TTs (e.g., Train 8 in **Fig. 1(A)**), PSs (e.g., PS 1 in **Fig. 1(A)**), and TSs (e.g., TS 1-1 in **Fig. 1(A)**). In addition, there are four types of edges to reflect the interaction between nodes, including _RT-rr-RT_, _RT-rt-TT_, _Station-sr-RT_, and _Station-ss-Station_. Thus, the aforementioned delay evolution problem on a railway network can be described by: \[F([V^{T},E^{T}];G)=V^{T+\Delta T}_{1}, \tag{1}\] where \(F(\cdot)\) is the HetGNN model, \(\Delta T\) is the prediction time interval/horizon, and \(V^{T}\) and \(E^{T}\) are the node and edge sets at timestamp \(T\). Additionally, \(V^{T}=\{V^{T}_{1},V^{T}_{2},V^{T}_{3},V^{T}_{4}\}\), where \(V^{T}_{1},V^{T}_{2},V^{T}_{3},V^{T}_{4}\) denote the RT, TT, PS, and TS nodes at timestamp \(T\), respectively.
\(E^{T}=\{E^{T}_{1},E^{T}_{2},E^{T}_{3},E^{T}_{4}\}\) are the edge sets, where \(E^{T}_{1},E^{T}_{2},E^{T}_{3},E^{T}_{4}\) represent the _RT-rr-RT_, _RT-rt-TT_, _Station-sr-RT_, and _Station-ss-Station_ edges at timestamp \(T\), respectively. It is noted that we only consider the trains running to terminal stations on the network; therefore, downstream stations will affect the upstream stations, but not the other way around. For instance, only PS 2 influences train operations at PS 1. In other words, graph \(G\) is a directed graph.

## 4 Method

### The procedure of the proposed method

To predict the delays of all running trains on a railway network, a HetGNN-based approach is proposed in this study. The overview of the HetGNN approach is described in **Fig. 2**. The proposed approach contains four steps, i.e., Step (a) to Step (d). In Step (a), the railway network at timestamp \(T\) is abstracted as a heterogeneous graph \(G=(V^{T},E^{T})\) and is used as the input of our HetGNN model. In Step (b), the nodes are updated by the Homogeneous Model or the Heterogeneous Model (shown below **Fig. 2**), depending on whether one kind or multiple kinds of edges connect them. If multiple types of edges connect a node, the updated node result will be the aggregation of the updated results from each edge type. For instance, RT nodes will be updated relying on the edges _RT-rr-RT_, _TT-tr-RT_, and _Station-sr-RT_. The output of Step (b) is then aggregated in Step (c). Step (b) and Step (c) can be repeated several times, i.e., nodes are updated by several convolutional layers in the HetGNN. Finally, the delays of RT nodes at timestamp \(T+\Delta T\) (i.e., \(V^{T+\Delta T}_{1}\)) are obtained in Step (d), by feeding the aggregated RT nodes into a linear layer.

Figure 2: Overview of the proposed HetGNN-based approach in this study.

### Heterogeneous Graph Neural Network (HetGNN)

The HetGNN takes a heterogeneous graph as input. In this study, trains and stations are addressed as nodes to form heterogeneous graphs, and the HetGNN is used to process the different types of entities in railway systems. Subsequently, the HetGNN updates node information based on the convolutional layers, resulting in an updated graph. Finally, the prediction task, such as node prediction or graph classification, can be executed using the updated graph. First, the HetGNN takes a heterogeneous graph as input. The heterogeneous graph contains different kinds of nodes, and the nodes connect to each other based on diverse edges. For instance, the railway network heterogeneous graph (\(G=(V,E)\)) consists of the nodes RT, TT, PS, and TS and the edges _RT-rr-RT_, _TT-tr-RT_, _Station-sr-RT_, and _Station-ss-Station_. Subsequently, nodes connected by different edges will be updated based on the HetGNN convolutional layers. The Homogeneous Model and the Heterogeneous Model (shown in **Fig. 2 Step (b, I and II)**) are used to update node information in the HetGNN convolutional layers. In heterogeneous graphs, nodes at edges whose two end nodes are homogeneous (e.g., the edge _RT-rr-RT_) can be updated by standard HomoGNNs in the HetGNN convolutional layers. The node features updated by the Homogeneous Model for node \(i\) in layer \(k\), namely \(x_{i}^{(k)}\), are updated by
\[x_{i}^{(k)}=r^{(k)}\left(x_{i}^{(k-1)},\ \square_{j\in N(i)}\,\phi^{(k)}\left(x_{i}^{(k-1)},x_{j}^{(k-1)},e_{ji}\right)\right), \tag{2}\]

where \(e_{ji}\) denotes the edge features from node \(j\) to node \(i\), \(N(i)\) represents the neighboring node set of node \(i\), and the function \(\square\) is a differentiable, permutation-invariant function (e.g., the sum, mean, or max). Moreover, \(\phi\) and \(r\) denote differentiable functions, which can be instantiated by many existing models, such as graph attention networks (GATs) (Velickovic et al., 2017), GCNs (Kipf and Welling, 2016), and GraphSAGE (Hamilton et al., 2017). For the edges connecting heterogeneous nodes (e.g., _TT-tr-RT_ in our case), the nodes cannot be updated by standard HomoGNNs. Therefore, the Heterogeneous Model is needed to update the node information. Since the feature dimensionalities of the nodes in the edge are different, this Heterogeneous Model first applies the basic HomoGNN (shown as basic HomoGNN 2 in **Fig. 2 Step (b, II)**) to map the two distinct nodes into the same dimensional space. Subsequently, the node information updated by the basic HomoGNN is aggregated by a summation function, the aggregated result is activated by a ReLU activation function, and the destination nodes (e.g., the RT nodes in the edges _TT-tr-RT_ in our case) are finally updated. Based on the HetGNN convolutional layers, all nodes in different kinds of edges can be updated, resulting in an updated heterogeneous graph. The mathematical expression of the HetGNN convolutional layers for all nodes can be represented by Eq. (3), as follows:

\[v_{i}^{(k)}=\mathop{\oplus}_{e\in E}f_{\theta}^{(k,e)}\left(v_{i}^{(k-1)},\{v_{w}^{(k-1)}:w\in N^{(e)}(i)\}\right), \tag{3}\]

where \(v_{i}^{(k)}\) is the feature vector of node \(i\) in the \(k\)-th layer, and the nodes connected by various edges are updated based on the basic HomoGNN \(f_{\theta}^{(k,e)}\) in the \(k\)-th layer. Additionally, \(N^{(e)}(i)\) denotes the set of corresponding beginning nodes that point to the destination node \(i\) (e.g., the TT node in _TT-tr-RT_ edges when the updated node is an RT) in the edge type \(e\), and \(\oplus\) is an aggregation function (e.g., the sum, max, or min) for aggregating the node information generated by different edge types. Lastly, based on the new heterogeneous graph updated by the HetGNN convolutional layers, node prediction or graph classification tasks can be executed.

### GraphSAGE-based HetGNN Architecture

We employ GraphSAGE (Hamilton et al., 2017) as the basic HomoGNN in the proposed model, as it has achieved satisfactory performance in numerous prediction tasks. GCNs are the most commonly used HomoGNNs in the existing GNN-based delay prediction studies. Compared with the classical GCN, GraphSAGE updates node information through random sampling of the neighboring nodes during training, which controls the number of nodes involved in the calculation and therefore dramatically reduces the training resources required. In this way, GraphSAGE updates node information based on its neighbors without the need to re-process the entire graph when a node is added, giving it better generalization ability than the GCN. In other words, for all basic HomoGNNs in Step (b), the functions \(\phi\) and \(r\) in Eq. (2) and \(f_{\theta}^{(k,e)}\) in Eq. (3) are GraphSAGE models in the proposed model. Thus, we refer to the proposed model as SAGE-Het.
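A compact sketch of such an architecture in PyTorch Geometric is given below, anticipating the hyperparameters stated in the next paragraph (four convolutional layers of 256 units with ReLU activations and summation aggregation, followed by a linear readout on the RT nodes). The edge-type list is our reading of Section 3 (we split _Station-sr-RT_ into PS- and TS-sourced relations because the library requires concrete node types), and carrying the TT features through unchanged between layers is an implementation choice of ours, not necessarily the authors'.

```python
import torch
from torch_geometric.nn import HeteroConv, SAGEConv

EDGE_TYPES = [('RT', 'rr', 'RT'), ('TT', 'tr', 'RT'),
              ('PS', 'sr', 'RT'), ('TS', 'sr', 'RT'),
              ('PS', 'ss', 'PS'), ('TS', 'ss', 'TS')]

class SAGEHet(torch.nn.Module):
    def __init__(self, hidden=256, num_layers=4):
        super().__init__()
        # One SAGEConv per edge type and layer; (-1, -1) lazily infers the
        # (possibly different) source/destination feature dimensions.
        self.convs = torch.nn.ModuleList([
            HeteroConv({et: SAGEConv((-1, -1), hidden) for et in EDGE_TYPES},
                       aggr='sum')               # summation over edge types
            for _ in range(num_layers)
        ])
        self.readout = torch.nn.Linear(hidden, 1)  # RT delays at T + delta T

    def forward(self, x_dict, edge_index_dict):
        for conv in self.convs:
            out = conv(x_dict, edge_index_dict)
            out = {k: v.relu() for k, v in out.items()}
            out['TT'] = x_dict['TT']   # TT nodes are sources only; carry through
            x_dict = out
        return self.readout(x_dict['RT'])
```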
The number of convolutional layers in the SAGE-Het (i.e., the number of repetitions of Steps (b) and (c)) is a crucial hyperparameter. After hyperparameter tuning, the number of convolutional layers is selected as four, each with 256 neurons and a ReLU activation function. In addition, all aggregation functions (i.e., \(\square\) in Eq. (2) and \(\oplus\) in Eq. (3)) are performed by summation functions. A brief summary of the hyperparameters in SAGE-Het is provided in **Table 1**. Let us consider **Fig. 2** as an example to introduce the prediction process of delay evolution. The railway network state at timestamp \(T\) (i.e., \(G=(V^{T},E^{T})\)) is first taken as the input. Steps (b) and (c) form the schematic diagram of the SAGE-Het convolutional layers; therefore, they are used to update the node information in different edges. For the RT nodes in the edges _RT-rr-RT_, as both ends of these edges are homogeneous, they are updated by using the Homogeneous Model (shown in **Fig. 2 Step (b, I)**). In this case, \(x_{i}^{(k)}\) is the feature of the RT node \(i\) in layer \(k\), \(N(i)\) represents the set of the beginning RT nodes in the edges _RT-rr-RT_ (i.e., the first forward RT node) for RT node \(i\), and the functions \(\phi\) and \(r\) are GraphSAGE models. The RT nodes in the edges _TT-tr-RT_ and _Station-sr-RT_ are updated based on the Heterogeneous Model (shown in **Fig. 2 Step (b, II)**). Then, the node information updated from different edges (i.e., RT\({}^{rr}\), RT\({}^{tr}\), and RT\({}^{sr}\)) can be aggregated by a summation function (the "aggregation" block in Step (c)), and the RT nodes after aggregation are then obtained (e.g., RT' in **Fig. 2**). The same process can be applied to other nodes (such as TS nodes), thus producing a new heterogeneous graph updated by the SAGE-Het convolutional layer. Steps (b) and (c) can be repeated to represent that all nodes are updated via several successive SAGE-Het convolutional layers. It should be noted that each SAGE-Het convolutional layer can vary by using different basic HomoGNNs. Finally, the RT nodes are predicted by feeding the RT nodes in the new heterogeneous graph into a linear layer. The pseudo-code of the SAGE-Het is shown in Algorithm 1. ``` Input: The heterogeneous graph \(G=(V^{T},E^{T})\), i.e., the railway network state at timestamp \(T\) ``` ## 5 Model Implementation ### 5.1 Data Description The data used in this study were obtained from the Guangzhou Railway Bureau, China, and the period of the data is from 03/24/2015 to 11/10/2016. Two railway networks, namely the Guangzhou South network (GZS-Net) and the Changsha South network (CSS-Net), were taken as cases to validate the proposed SAGE-Het model. The schematic diagrams of GZS-Net and CSS-Net are shown in **Fig. 3**. Guangzhou South (GZS) is the terminal station of five HSRs, namely the Jing-Guang (JG) HSR, Nan-Guang (NG) HSR, Gui-Guang (GG) HSR, Guangdong (GZ) HSR, and Guang-Shen (GS) HSR. Changsha South (CSS) is also a multi-line station, which can be deemed the connection station of four HSRs, namely the Jing-Chang HSR, Kun-Chang HSR, Chang-Guang (CG) HSR, and Hu-Chang (HC) HSR. In this paper, the historical train operation data is used, and all data is recorded in minutes. Some samples of the raw operation data are shown in **Table 2**, including the Date, Train number, Station, Scheduled arrival, Scheduled departure, Actual arrival, Actual departure, and Occupied track.
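As a brief illustration of how records like these can be turned into the minute-level delay values used later, the snippet below derives arrival and departure delays from a table shaped like Table 2; the file name and column names are assumptions, not the actual data schema.

```python
# A hedged sketch: deriving minute-level arrival/departure delays from
# records shaped like Table 2 (file and column names are assumptions).
import pandas as pd

df = pd.read_csv(
    "operation_records.csv",
    parse_dates=["actual_arrival", "actual_departure",
                 "scheduled_arrival", "scheduled_departure"],
)
# Delay = actual minus scheduled; negative values denote early arrivals,
# zero denotes on time, and positive values denote delays (in minutes).
df["arrival_delay"] = (df["actual_arrival"]
                       - df["scheduled_arrival"]).dt.total_seconds() / 60
df["departure_delay"] = (df["actual_departure"]
                         - df["scheduled_departure"]).dt.total_seconds() / 60
```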
The attributes of train nodes (RT and TT nodes) are calculated and used to explore the delay evolution based on the raw data. In addition, the number of station tracks (the variable \(N\)) can be obtained from the corresponding station (yard) layouts. As the raw data only recorded the train operation states at stations, train delay states are updated when the trains arrive at and depart from the station. As described in Section 3, this paper only considers the trains heading to the terminal station (e.g., towards TS 1 in **Fig. 1**), while trains heading away from terminal stations are not considered. Railway networks at different timestamps can be abstracted to various heterogeneous graphs, as shown in **Fig. 4**. **Fig. 4(A)** is a heterogeneous graph sample, which is generated from the railway network at timestamp \(T\) (i.e., it corresponds to **Fig. 1(A)**). Similarly, **Fig. 4(B)** is another sample that corresponds to **Fig. 1(B)**. Taking **Fig. 4** as an example, the aim of this study is to predict the delays of the RT nodes of **Fig. 4(A)** at timestamp \(T+\Delta T\), taking the heterogeneous graph sample in **Fig. 4(A)** (the heterogeneous graph at timestamp \(T\)) as the model input, with the prediction time interval \(\Delta T\). It should be noted that the delays of terminated trains do not need to be predicted in the next timestamp, as they have already arrived at the terminal stations. Therefore, the graph structure at \(T+\Delta T\) is different from that at \(T\). \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Train Number & Station & Date & Actual arrival & Actual departure & Scheduled arrival & Scheduled departure & Occupied track \\ \hline D903 & Guangzhou North & 2015/3/24 & 6:09 & 6:09 & 6:13 & 6:13 & 1 \\ G6023 & Hengshan West & 2016/1/5 & 14:13 & 14:15 & 14:09 & 14:11 & 3 \\ G6023 & Guangzhou South & 2016/1/5 & 16:30 & 16:33 & 13:30 & 16:35 & 11 \\ G6143 & Changsha South & 2015/4/4 & 12:03 & 12:19 & 11:38 & 11:52 & 3 \\ \hline \hline \end{tabular} Note: For the occupied track, the passage tracks at the station are labeled with Roman characters, while the dwelling tracks are labeled with numbers. \end{table} Table 2: Some samples of the raw operation data Figure 3: The schematic diagram of the Guangzhou South network and the Changsha South network. The graphs vary at different timestamps due to the differences in the number of running trains at various times. For example, **Fig. 4(A)** has 8 RT nodes, while **Fig. 4(B)** has 7 RT nodes. Railway networks at each timestamp can be abstracted to a heterogeneous graph sample, as shown in the two graph samples in **Fig. 4(A)** and **(B)**. The prediction time interval (\(\Delta T\)) is an adjustable hyperparameter, which reflects how far ahead the railway network state is predicted. The prediction time interval of this paper was first set as 20 min, with reference to the _2018 RAS Competition_ (Sciences, 2018). Thus, the railway networks (graphs) were updated every 20 min, and the delays of running trains 20 min later were predicted based on the current graph structure. The prediction periods were only considered from 8:00 to 23:00 each day (i.e., timestamps should be 8:00, 8:20, ..., 23:00), because too few (or no) trains run in the railway network during other periods.
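To show what one such graph sample might look like in code, the sketch below packs a tiny, made-up snapshot in the spirit of Fig. 4(A) into a PyTorch Geometric `HeteroData` object; every tensor value is a placeholder, and the RT features follow the definitions given in the next subsection.

```python
# A hedged sketch of one heterogeneous graph sample (cf. Fig. 4(A));
# all values below are placeholders, not real data.
import torch
from torch_geometric.data import HeteroData

sample = HeteroData()
# Node features: RT nodes carry [C, I, S, M, R] (Section 5.2).
sample['RT'].x = torch.tensor([[2., 5., 18., 15., 12.],
                               [0., 7., 20., 17., 9.]])
sample['TT'].x = torch.tensor([[3.]])          # current delay C^TT
sample['PS'].x = torch.tensor([[4.], [6.]])    # number of station tracks N
sample['TS'].x = torch.tensor([[8.]])          # yard tracks N^TS
# Directed edges as [source row; destination row] index pairs.
sample['RT', 'rr', 'RT'].edge_index = torch.tensor([[1], [0]])  # RT 1 -> RT 0
sample['TT', 'tr', 'RT'].edge_index = torch.tensor([[0], [1]])  # TT 0 -> RT 1
sample['PS', 'sr', 'RT'].edge_index = torch.tensor([[1], [0]])  # PS 2 -> RT 0
sample['PS', 'ss', 'PS'].edge_index = torch.tensor([[1], [0]])  # PS 2 -> PS 1
# Target: RT-node delays at timestamp T + ΔT.
sample['RT'].y = torch.tensor([1., 0.])
```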
Graph samples with RT node delays more than 90 min were eliminated because the proportions of these delays were too small (less than 1% in GZS-Net and CSS-Net) to ensure that the model was correctly trained. This elimination guaranteed the statistical significance and sufficient training of the HetGNN model, and the considered horizon could be prolonged once there were enough of these delayed data (Corman and Kecman, 2018; Lessan et al., 2019). The future delays of the RT nodes are unknown in advance, so if only the delayed trains were applied to train the model, some early and on-time arrivals would, in practical application, also be predicted as delays (Corman and Kecman, 2018; Huang et al., 2020d; Lessan et al., 2019). Thus, each train was assigned a value, corresponding to the actual arrival/departure minus the scheduled arrival/departure, to describe the delay state. The value of an on-time arrival was 0, while early arrivals had negative values. The delays considered in this study therefore include early arrivals, on-time arrivals, and delayed arrivals, all of which are imperative. Ultimately, the datasets of GZS-Net and CSS-Net had 23417 and 23879 graph samples respectively. These graph samples were shuffled, and 60% of the graph samples were used for model training, 20% were used as the validation dataset to tune the model hyperparameters, and the remaining 20% were employed for model testing. Figure 4: The heterogeneous graphs abstracted from the railway network at different timestamps. ### 5.2 Node and Edge Feature Determination **(1) Features of RT nodes** In this study, the network states (train delays) are predicted based on the graph information. The node and edge features first need to be determined to predict train delays. Because delays are usually (and are in this study) calculated on train arrival/departure events, we first define some train operation-related variables. The following variables are used to express the node features: \(A_{Train,Station}^{Sch}\): the scheduled arrival time of the _Train_ at the _Station_; \(D_{Train,Station}^{Sch}\): the scheduled departure time of the _Train_ at the _Station_; \(A_{Train,Station}^{Act}\): the actual arrival time of the _Train_ at the _Station_; \(D_{Train,Station}^{Act}\): the actual departure time of the _Train_ at the _Station_; \(AD_{Train,Station}\): the arrival delay of the _Train_ at the _Station_, \(AD_{Train,Station}=A_{Train,Station}^{Act}-A_{Train,Station}^{Sch}\); \(DD_{Train,Station}\): the departure delay of the _Train_ at the _Station_, \(DD_{Train,Station}=D_{Train,Station}^{Act}-D_{Train,Station}^{Sch}\). The factors influencing delay prediction are determined by referencing previous delay prediction studies (Huang et al., 2020; Li et al., 2020; Nabian et al., 2019) and the variables defined above. For the RT nodes, the current delays (\(C\)) are the basis of the subsequent delay state in delay evolution. Thus, current delays are considered as features of RT nodes. When a train (e.g., Train 1 in **Fig. 1**) is dwelling at a passing station at timestamp \(T\), \(C\) should be the arrival delay of the train (i.e., \(AD_{Train,Station}\)). If a train (Train 1 in **Fig. 1**) is running in a section (e.g., section (_Station 1_, _Station 2_)) at the current timestamp (timestamp \(T\)), then \(C\) should be the departure delay of the train at _Station 1_ (which is \(DD_{Train,Station\,1}\)). For instance, at timestamp \(T\), the \(C\) of RT 0 is the departure delay of RT 0 at PS 1 in **Fig. 4(A)**.
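As a small, hypothetical illustration of the rule for \(C\), the helper below picks the arrival or departure delay depending on the train's position; the record structure is an assumption introduced only for this example.

```python
# A hedged helper for the feature C above; `TrainState` and its fields are
# hypothetical and introduced only for illustration.
from dataclasses import dataclass

@dataclass
class TrainState:
    dwelling: bool          # True if the train dwells at a station at T
    arrival_delay: float    # AD at the current station (minutes)
    departure_delay: float  # DD at the last departed station (minutes)

def current_delay_C(state: TrainState) -> float:
    # Dwelling at a station -> arrival delay; running in a section ->
    # departure delay at the previously departed station.
    return state.arrival_delay if state.dwelling else state.departure_delay
```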
The headway between the to-be-predicted trains and forward trains, labeled as \(I\), reflects the train-train interaction. Due to the minimum headway requirement, if the headways are too small, the delayed forward train may influence the rear trains, resulting in knock-on delays. In contrast, when the headways are long enough, the rear trains will not be influenced by the delayed forward trains. Thus, \(I\) is another critical factor for delay evolution, and therefore is also considered as an attribute of RT nodes. For instance, at timestamp \(T\), the \(I\) of RT 0 (i.e., Train 0 in **Fig. 1(A)**) is the headway between RT 1 (i.e., Train 1 in **Fig. 1(A)**) and RT 0 (equaling \(D_{Train\,0,PS\,1}^{Act}-D_{Train\,1,PS\,1}^{Act}\)) in **Fig. 4(A)**. The supplement time between the current timestamp (\(T\)) and the to-be-predicted timestamp (\(T+\Delta T\)) is crucial for running trains as it determines the maximum recovery times. The supplement time is the difference between the scheduled running time and the minimum running time. We define the scheduled running time of RT nodes from timestamp \(T\) to timestamp \(T+\Delta T\) as the variable \(S\), while the minimum running time of RT nodes from timestamp \(T\) to timestamp \(T+\Delta T\) is the variable \(M\). The variable \(S\) is the scheduled running time in the section(s) where the trains pass through during \(\Delta T\). For example, in **Fig. 1(A)**, Train 7 is running in the section (PS 9, PS 10) at timestamp \(T\), and it will be running in the section (PS 11, TS 1) based on the scheduled timetable at timestamp \(T+\Delta T\). Because trains update their delay states at stations, we will only know the scheduled delay state of Train 7 at PS 11. In this way, the variable \(S\) is the scheduled running time of Train 7 from PS 9 to PS 11 during \(\Delta T\), i.e., \(S=D_{Train\,7,PS\,11}^{Sch}-D_{Train\,7,PS\,9}^{Sch}\). Similarly, the variable \(M\) is the minimum running time in the section(s) where the trains pass through during \(\Delta T\); for instance, the variable \(M\) is the minimum running time of Train 7 from PS 9 to PS 11 in the above case. Generally, \(M\) differs between trains in the same section because of their diverse service types. In addition, the scheduled remaining running time from now (\(T\)) to the to-be-predicted timestamp (\(T+\Delta T\)) is also critical, and it is labeled as \(R\). It should be noted that \(R\) should be the scheduled running time from the current timestamp (\(T\)) to the nearest delay-updating timestamp after \(\Delta T\), because trains update delay states at stations in this study. Taking **Fig. 1** as an example, Trains 4 and 5 run in the same section at timestamp \(T\), while Train 4 runs in the downstream section and Train 5 still runs in the previous section (i.e., Train 4 updates its delay state but Train 5 does not) at timestamp \(T+\Delta T\). For Train 4, the nearest delay-updating timestamp after \(\Delta T\) is the departure time at PS 7; therefore \(R=D_{Train\,4,PS\,7}^{Sch}-T\). The variable \(S\) for Train 4 can be represented as \(S=D_{Train\,4,PS\,7}^{Sch}-D_{Train\,4,PS\,6}^{Sch}\) (PS 6 being the station Train 4 most recently departed from), so \(R\) can also be written as \(R=S-(T-D_{Train\,4,PS\,6}^{Sch})\) for Train 4 in **Fig. 1**.
Assuming that Train 4 and Train 5 do not have a delay change at timestamp \(T+\Delta T\), the scheduled remaining running time to the next station is less than \(\Delta T\) for Train 4, while that of Train 5 exceeds \(\Delta T\). In this case, Train 5 will never cause delay changes at the next timestamp \(T+\Delta T\), whereas for Train 4 the assumption only holds for samples in which the delays are not changed between two adjacent timestamps. If we do not consider variable \(R\), the prediction model cannot distinguish these two situations. Therefore, \(R\) is also considered as an attribute for RT nodes. These influencing factors are related to running trains, so they are considered attributes (features) of RT nodes. In summary, the attributes (features) of RT nodes are as follows: \(C\): the current delays of the predicted RT nodes; \(I\): the headways between the predicted RT nodes and their first forward RT nodes; \(S\): the scheduled running times of the predicted RT nodes from the current timestamp to the to-be-predicted timestamp (i.e., from timestamp \(T\) to \(T+\Delta T\)); \(M\): the minimum running times of the predicted RT nodes from the current timestamp to the to-be-predicted timestamp (i.e., from timestamp \(T\) to \(T+\Delta T\)); \(R\): the scheduled remaining running times of the predicted RT nodes from the current timestamp to the to-be-predicted timestamp (i.e., from timestamp \(T\) to \(T+\Delta T\)). **(2) Features of TT nodes** TT nodes correspond to the terminated trains which have already completed their itineraries by arriving at the terminal station at the current timestamp. Therefore, it is unnecessary to predict their delays in the next timestamp. However, as TT nodes may influence the rear predicted RT nodes, they must also be considered in this study. The current delays should be considered as the attributes of TT nodes. In addition, TT nodes do not have the corresponding attributes \(I\), \(S\), \(M\), and \(R\) because terminated trains will not continue their itineraries. To distinguish it from variable \(C\) of the RT nodes, the current delay of the TT nodes is defined as \(C^{TT}\). **(3) Features of PS and TS nodes** The other two types of nodes, namely PS and TS nodes, are stations/yards. We assume that the impact of stations/yards on trains depends mainly on the number of station tracks; therefore, the number of station tracks (\(N\)) is used as the feature of PS nodes. As described previously, the arrival-departure yards are treated as the TS nodes, so the number of station tracks is also considered as the feature of TS nodes, labeled as \(N^{TS}\). Although the features of PS and TS nodes are identical, they are different from the railway operation and management perspectives. For instance, trains only need to (not frequently) dwell at the passing (intermediate) stations, while there may be vehicle cleaning, inspection, and other tasks at terminal stations. We therefore assume that TS and PS nodes are different (even though they have the same features) from an operation and management perspective. **(4) Features of Edges** The edge _RT-rr-RT_ connects consecutive trains (i.e., two RT nodes) to reflect the interactions of running trains. Because the _RT-rr-RT_ edges reflect the influence of the forward train on the rear train, this edge is directed (e.g., RT 1 to RT 0 in **Fig. 4(A)**). Another edge, _TT-tr-RT_, reveals the influences of the forward terminated trains on the rear running trains. It is also directed (e.g., TT 1 to RT 6 in **Fig.
4(A)**), but the nodes have different attributes. The _Station-ss-Station_ edges reflect the physical structure of the railway network. The downstream station should influence the previous station, because we only focus on trains in one direction (i.e., the trains running to the terminal station); in other words, edges between TS (PS) and PS nodes are directed (e.g., PS 2 points to PS 1 and TS 1-1 points to PS 2 in **Fig. 4(A)**). However, the arrival-departure yards of terminal stations interconnect, which allows trains to travel between different yards. Hence, the edges between TS nodes are undirected (e.g., the edge between TS 1-1 and TS 1-2 in **Fig. 4(A)**). The _Station-sr-RT_ edge links the PS/TS nodes with RT nodes to uncover their relationship. _Station-sr-RT_ should be the directed edge from the forward station to the rear train when the train is in the previous section or at the previous station. For instance, in **Fig. 1(A)**, Train 1 (i.e., RT 1) runs in the section (PS 1, PS 2) with PS 2 pointing to RT 1, while Train 5 (RT 5) dwells at PS 7, resulting in TS 1-3 pointing to RT 5. Because GraphSAGE is not edge-weighted, the attributes of edges are equal to 1 when the nodes in the edges are linked. The node/edge types and features are summarized in **Table 3**. \begin{table} \begin{tabular}{l l l} \hline \hline & Types & Brief introduction of the attributes \\ \hline \multirow{5}{*}{Node} & \multirow{5}{*}{RT node} & _C_: the delays of the RT nodes at timestamp \(T\). \\ & & _I_: the headways between the RT nodes and their first forward RT (TT) nodes. \\ & & _S_: the scheduled running times from timestamp \(T\) to \(T+\Delta T\). \\ & & _M_: the minimum running times from timestamp \(T\) to \(T+\Delta T\). \\ & & _R_: the scheduled remaining running times from timestamp \(T\) to the nearest delay-updating timestamp after \(\Delta T\). \\ \cline{2-3} & TT node & \(C^{TT}\): the delays of the TT nodes at timestamp \(T\). \\ \cline{2-3} & PS node & \(N\): the number of station tracks. \\ \cline{2-3} & TS node & \(N^{TS}\): the number of tracks of the arrival-departure yard. \\ \hline Edge & All types & The edge attribute equals 1 when the two nodes are linked. \\ \hline \hline \end{tabular} \end{table} Table 3: The summary of the types and attributes of nodes and edges ### Model Training The mean absolute error (MAE), as defined by Eq. (4), was chosen as the loss function to train the SAGE-Het model. \[MAE=\frac{1}{N}\sum_{i=1}^{N}\left|y_{i}-y_{i}^{\prime}\right|, \tag{4}\] where \(y_{i}\) and \(y_{i}^{\prime}\) respectively correspond to the actual and predicted delays for the RT nodes, and \(N\) represents the total size of the dataset. The Adam optimizer with an initial learning rate of 0.001 was used as the model optimizer, and the learning rate was decreased by 10% when the validation loss decreased by less than 0.01 over 10 epochs. To prevent overfitting, the early-stop technique was also used during training. When the validation loss decreased by less than 0.01 over 30 epochs, the model stopped training, and the maximum number of epochs was set as 300. A data loader with a batch size of 64 was used. The training techniques are exhibited in **Table 4**. \begin{table} \begin{tabular}{l l} \hline \hline Optimizer & Adam \\ \hline Loss function & MAE \\ \hline Initial learning rate & 0.001 \\ \hline Activation function & ReLU \\ \hline Learning-rate schedule & Reduce by 10\% when the validation loss decreases by less than 0.01 over 10 epochs \\ \hline Early stopping & Stop when the validation loss decreases by less than 0.01 over 30 epochs \\ \hline Mini-batch & 64 \\ \hline Maximum epochs & 300 \\ \hline \hline \end{tabular} \end{table} Table 4: A brief summarization of training techniques ## 6 Result Analysis In this section, we evaluate the proposed method. First, the comparative results between the proposed SAGE-Het and other existing methods are demonstrated. Then, we compare the actual values with the predicted values based on SAGE-Het. Subsequently, we demonstrate the performance of the proposed model under different prediction horizons (\(\Delta T\)).
Finally, we investigate the influence of train-train interactions on delay evolution. ### Compared with Existing Methods First, the proposed model is compared with the existing delay prediction models. The RF (Li et al., 2020; Lulli et al., 2018; Nabian et al., 2019; Nair et al., 2019; Oneto et al., 2020), SVR (Barbour et al., 2018; Markovic et al., 2015; Wang et al., 2021), and ANN (Peters et al., 2005; Yaghini et al., 2013) are some commonly-used methods in previous delay prediction studies, which also exhibit satisfactory performance. According to the _2018 RAS Competition_ (Sciences, 2018), in the practical railway delay prediction process, some dispatchers assume that the delay is unchanged. In other words, once there is a delay, this delay is assumed to be the same until the next timestamp; thus, this prediction method was also considered and labeled as "Keep Constant". Each RT node serves as a sample for the baselines RF, SVR, and ANN. The input features of RT nodes for these baselines contain the variables \(C\), \(I\), \(M\), \(S\), and \(R\) in **Table 3**. In addition, the delay of the first forward train of the RT node (labeled as \(D\)) is also considered to reflect the interaction between adjacent trains, and the number of station tracks at the station (i.e., the corresponding \(N\) of station nodes) is also taken into account to represent the influence of the station on the train. In summary, the input features of each RT node include \(C\), \(I\), \(M\), \(S\), \(R\), \(D\), and the \(N\)/\(N^{TS}\) at the station. The proposed SAGE-Het cannot be compared with the existing HomoGNN-based delay prediction methods because different types of nodes are considered in this study, which HomoGNNs cannot handle. The introductions and related hyperparameter tuning of the baseline models are provided as follows. **Keep Constant:** This method postulates that delays in the next timestamp are equal to those at the current timestamp, i.e., delays are kept constant until the train arrives at the terminal station. **RF:** The Scikit-learn package (Pedregosa et al., 2011) was used to build the RF models in this study. The MSE is used as the loss function because the training time is extremely long with the MAE loss function. The hyperparameters, including \(n\_estimators\) and \(max\_depth\), were tuned by GridSearchCV in the Scikit-learn package. The candidate values of \(n\_estimators\) and \(max\_depth\) were chosen through a grid search over [50, 100, 150, 200] and [1, 2, 3, ..., 20], respectively. **SVR:** The SVR model was also established with the Scikit-learn package. The hyperparameters \(C\), \(gamma\), and \(epsilon\) were optimized according to the validation dataset, with the candidate set [0.01, 0.1, 1, 10]. **ANN:** The ANN model consisted of two hidden layers with 256 and 128 neurons, respectively. The activation function was ReLU, and the other training techniques were the same as those for SAGE-Het (described in Section 4.3). In addition, some existing HetGNN algorithms, such as the Heterogeneous Graph Transformer (HGT) (Hu et al., 2020) and the Heterogeneous Graph Attention Network (HAN) (Wang et al., 2019), were also selected for comparison with the proposed SAGE-based approach. Although they have not been applied in existing delay prediction studies, they have achieved great success in other prediction tasks (Mei et al., 2022; Yang et al., 2020). **HAN:** This method combines the attention mechanism with the HetGNN model.
In this study, the HAN model had four layers, each of which had 256 hidden units. The same training techniques as those used for SAGE-Het were applied to HAN. **HGT:** This method merges the transformer architecture with the HetGNN model. The same hyperparameters and training techniques as those for HAN were used to train HGT. The MAE and root-mean-square error (RMSE), as respectively given by Eqs. (4) and (5), were chosen as metrics to measure the performance on the testing dataset. All compared models were preprocessed in the same way as the SAGE-Het (e.g., normalized). \[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-y_{i}^{\prime}\right)^{2}} \tag{5}\] The predictive performances of SAGE-Het and the other baselines for all RT node samples are shown in **Fig. 5**. The results indicate that SAGE-Het achieved the best prediction performance at GZS-Net and CSS-Net. The Keep Constant method achieved the worst performance; this is unsurprising, as train operation is a dynamic process in which dispatchers adjust the scheduled operational plan of trains to reduce delays according to the real-time delay situation and timetable structure. Regarding the most commonly-used machine learning algorithms, namely the RF, SVR, and ANN, RF performed quite well in terms of the RMSE. Compared with RF regarding RMSE, SAGE-Het exhibited 1.6% and 6.2% improvement at GZS-Net and CSS-Net, respectively, while for MAE, SAGE-Het exhibited a comparatively larger 18.6% and 17.2% improvement at GZS-Net and CSS-Net, respectively. The two existing HetGNNs, namely HAN and HGT, did not achieve better performance than SAGE-Het, which may be because this task does not involve the influences of too many entity types, so a simpler model can achieve even better performance. More attention should be given to delayed trains than to early and on-time arrivals. Therefore, we investigate the predictive performance of SAGE-Het and the other baselines for delayed RT node samples. The results in **Table 5** show that SAGE-Het outperforms the other baselines, including RF, which has the best predictive performance among the baseline models. In addition, the Keep Constant method has a more precise prediction performance for delayed RT nodes than for the whole set of samples. This is because early-arrival trains exist in the dataset. Early-arrival trains usually need to return to scheduled operation (by slowing down). However, delay recoveries are limited by the maximum recovery times; therefore, the change in delays of delayed trains is less significant than that of early-arrival trains. For instance, trains with a 5-minute early arrival may be on time at the next timestamp, while trains with a 5-minute delay may still be delayed by 4 minutes if the maximum recovery time is 1 minute. ### Comparison between the actual values and predicted values based on SAGE-Het Then, to explore the prediction performance of SAGE-Het for different delay magnitudes, the predicted and actual delays are compared in **Fig. 6**.
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Algorithms & GZS-MAE & Increase (\%) & GZS-RMSE & Increase (\%) & CSS-MAE & Increase (\%) & CSS-RMSE & Increase (\%) \\ \hline Keep constant & 2.318 & 76\% & 3.282 & 26.7\% & 1.77 & 55.1\% & 3.022 & 29.4\% \\ RF & 1.462 & 11\% & 2.388 & 7.8\% & 1.305 & 14.4\% & 2.45 & 4.9\% \\ SVR & 2.81 & 113.4\% & 3.693 & 42.6\% & 1.976 & 73.2\% & 2.991 & 28\% \\ ANN & 1.551 & 17.8\% & 2.393 & 7.6\% & 1.325 & 16.1\% & 2.456 & 5.1\% \\ HAN & 2.099 & 59.4\% & 3.264 & 26\% & 2.226 & 95.1\% & 3.221 & 37.9\% \\ HGT & 1.393 & 5.8\% & 2.598 & 0.3\% & 1.26 & 10.4\% & 2.459 & 5.3\% \\ SAGE-Het & 1.317 & - & 2.59 & - & 1.141 & - & 2.336 & - \\ \hline \hline \end{tabular} Note: the increase-related indicators are the increasing percentages of the corresponding models compared with SAGE-Het. \end{table} Table 5: The prediction performance of SAGE-Het and other baselines for delayed samples. Figure 5: The prediction performance of SAGE-Het and other baselines for all samples. The results show that these scattered points were almost distributed near the 45-degree line, which demonstrates that the differences between the actual and predicted values were insignificant, i.e., SAGE-Het has high prediction accuracy. In addition, with the increment of actual delays, the prediction values did not deviate from the 45-degree line, which demonstrates that SAGE-Het has stable prediction performance for both short and long delays. ### Performance for Different Prediction Time Intervals The SAGE-Het was used above to predict the delays 20 min later according to the current railway network delays. To investigate the influence of different prediction time intervals, the model performances for different prediction time intervals (i.e., \(\Delta T\) in Eq. (1)), namely 10 and 30 min, were determined, and are exhibited in **Table 6**. It can be seen from **Table 6** that the prediction performance decreased with the increase of the prediction time interval (\(\Delta T\)) at both GZS-Net and CSS-Net. In addition, SAGE-Het outperformed the other baselines under different prediction time intervals. To better understand the prediction performance under different prediction time intervals, **Fig. 7** presents the residual distributions under these intervals. The results demonstrate that SAGE-Het achieved satisfactory performance for different prediction time intervals at the two railway networks.
\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{GZS-Net: 10/20/30 (min)} & \multicolumn{2}{c}{CSS-Net: 10/20/30 (min)} \\ \cline{2-5} & MAE & RMSE & MAE & RMSE \\ \hline Keep Constant & 1.475/2.323/2.674 & 2.581/3.384/3.779 & 1.069/1.957/2.445 & 2.152/3.044/3.558 \\ RF & 0.654/0.988/1.055 & 1.315/1.745/2.051 & 0.540/0.805/0.926 & 1.2/1.550/1.818 \\ SVR & 1.484/2.308/2.827 & 2.589/3.064/3.799 & 1.069/1.508/1.838 & 2.157/2.245/2.811 \\ ANN & 0.688/1.112/1.241 & 1.339/1.891/2.231 & 0.553/0.857/0.966 & 1.278/1.617/1.867 \\ HAN & 1.101/1.604/1.771 & 1.781/2.522/2.897 & 1.101/1.582/1.721 & 2.015/2.363/2.606 \\ HGT & 0.657/0.959/1.011 & 1.3/1.801/1.897 & 0.546/0.792/0.883 & 1.308/1.563/1.697 \\ **SAGE-Het** & **0.558/0.833/0.901** & **1.201/1.716/1.897** & **0.447/0.687/0.781** & **1.236/1.460/1.666** \\ \hline \hline \end{tabular} Note: the bold fonts denote the best performance. \end{table} Table 6: The prediction performance under different prediction time intervals. Figure 6: The comparison of the actual and predicted values for SAGE-Het. The residual distributions almost followed a bell-shaped distribution, which indicates that the residuals met both the zero-mean and normal distribution assumptions. In addition, the relationships between the cumulative distribution functions (CDFs) and the absolute values of residuals (\(|\)Residual\(|\)) are shown in **Fig. 8**. Each \(|\)Residual\(|\) value corresponds to the percentage of the absolute values of residuals that are no more than this value. For instance, when \(|\)Residual\(|\) equals 1 min, the corresponding CDF should be the percentage of the absolute values of residuals that are no more than 1 min. For both networks, when the prediction time interval was 10 min, the percentages of \(|\)Residual\(|\) no more than 1 min exceeded 80%, which indicates quite satisfactory performance. Even for the 30-min prediction time interval, the percentages of \(|\)Residual\(|\) no more than 1 min reached 70% for these two networks. The percentages of \(|\)Residual\(|\) no more than 3 min were over 90% for all three prediction time intervals (10, 20, 30 min) for the two networks. ### Influences of Train-Train Interaction In SAGE-Het, nodes are allowed to have different numbers of neighboring heterogeneous nodes and are updated based on their neighbors. Therefore, unlike traditional machine-learning approaches, SAGE-Het does not require all samples to have the same input dimension. In this way, the significance of interactions between nodes can be validated by canceling the corresponding connecting edges between nodes. Since the interaction between adjacent trains is a crucial factor in the delay propagation process, the significance of train interactions on delay evolution can be tested by changing the links between nodes. SAGE-Het considers the interactions between adjacent running trains, and those between forward terminated trains and rear running trains, via the edges _RT-rr-RT_ and _TT-tr-RT_. Therefore, we first investigated whether these train interactions matter by canceling all edges between trains (i.e., deleting the edges _RT-rr-RT_ and _TT-tr-RT_). In this way, only the current states of running trains were considered when updating the train-related edge (_RT-rr-RT_ and _TT-tr-RT_) information, so this approach is labeled as **SAGE-Het-selflink**. In addition, train headways also determine the train interactions.
In other words, if the intervals between adjacent trains are large enough, the rear train is less likely to be influenced by the forward train (i.e., the train interaction does not matter in this situation). Therefore, the performances when canceling the edges whose headways between trains were more than a specified value (e.g., 20 min, 10 min, etc.) were investigated. For example, the method **SAGE-Het-cut-10** in **Table 7** denotes the deletion of the edges in the SAGE-Het model when the headways between consecutive running trains, and those between the forward terminated trains and rear running trains, were more than 10 min. Similarly, **SAGE-Het-cut-3**, **SAGE-Het-cut-5**, and **SAGE-Het-cut-20** were obtained. **Table 7** reveals that **SAGE-Het-selflink** achieved an obviously worse performance than SAGE-Het, indicating that train interactions are critical for delay propagation. Regarding different headways, the prediction performances of **SAGE-Het-cut-3** and **SAGE-Het-cut-5** were obviously decreased compared with that of SAGE-Het. This is because the forward train will dramatically influence the rear train when their interval is so small that no (or little) supplementary time can be utilized to maintain the safety distance. With the increase of the headways, the predictive performance was found to increase. The performance of **SAGE-Het-cut-20** was almost equal to that of the SAGE-Het model, which indicates that train interactions do not matter if the train interval is sufficient. In existing studies, train interaction was considered regardless of the length of the interval. To ensure consistency for all samples, the related information of forward trains must be inputted into the model. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & GZS-MAE & Increase (\%) & GZS-RMSE & Increase (\%) & CSS-MAE & Increase (\%) & CSS-RMSE & Increase (\%) \\ \hline SAGE-Het-selflink & 0.965 & 15.8\% & 1.876 & 9.3\% & 0.741 & 7.9\% & 1.607 & 10.1\% \\ SAGE-Het-cut-3 & 0.925 & 11.0\% & 1.848 & 7.7\% & 0.729 & 6.1\% & 1.577 & 8.0\% \\ SAGE-Het-cut-5 & 0.905 & 8.6\% & 1.810 & 5.5\% & 0.737 & 7.3\% & 1.557 & 6.6\% \\ SAGE-Het-cut-10 & 0.875 & 5.0\% & 1.802 & 5.0\% & 0.696 & 1.3\% & 1.473 & 0.9\% \\ SAGE-Het-cut-20 & 0.866 & 4.0\% & 1.737 & 1.2\% & 0.693 & 0.9\% & 1.475 & 1.0\% \\ SAGE-Het & 0.833 & - & 1.716 & - & 0.687 & - & 1.460 & - \\ \hline \hline \end{tabular} Note: the increase-related indicators are the increasing percentages of the corresponding models compared with SAGE-Het. \end{table} Table 7: The prediction performance of SAGE-Het under different edge-linking methods. The proposed SAGE-Het model can modify the edge-linking method to address this shortcoming. While this study considered that all adjacent trains (stations) are linked, in future studies, the optimal edge-linking method can be learned by relying on some encoder blocks (Tygesen et al., 2022). ## 7 Conclusion and Discussions This paper proposed a HetGNN model to investigate delay evolution in the railway network.
In the proposed model, four kinds of nodes (i.e., RT, TT, PS, and TS nodes) are taken into account, and their interactions are represented by four edges (_RT-rr-RT_, _TT-tr-RT_, _Station-ss-Station_, and _Station-sr-RT_). A new neural architecture that is specific to the train delay prediction problem was developed; it is based on the GraphSAGE HomoGNN and the HetGNN model, and we refer to it as SAGE-Het. The SAGE-Het model can process different types of nodes and capture the interactions between nodes. Based on the SAGE-Het model, the evolution of railway network delay was investigated by predicting the delays of running trains at future timestamps. The prediction performance of the SAGE-Het model was compared with that of existing delay prediction models (the RF, SVR, and ANN) and some advanced HetGNNs (HGT and HAN) that have achieved great success in other prediction tasks. Subsequently, the prediction performance under different prediction time intervals was compared. In addition, the prediction precision was demonstrated by calculating the residual distributions under diverse prediction time intervals. Lastly, the influence of train interactions on delay evolution was explored by changing the edge-linking methods in the SAGE-Het model. The main conclusions reached from these analyses are summarized as follows. 1. The SAGE-Het model has better predictive performance than other existing delay prediction methods from the literature, as well as HGT and HAN. 2. The SAGE-Het model exhibits satisfactory predictive performance under different prediction time intervals. The percentage of absolute residuals of no more than 1 min can reach up to 70%, even when predicting the delay 30 min into the future. 3. Train interactions will influence the rear train when the intervals between adjacent trains are small, while this influence will disappear when the intervals are large enough, i.e., the forward trains will hardly influence the rear train when the train intervals are sufficient. In comparison to the existing delay prediction methods, the proposed SAGE-Het approach can predict train delays more accurately, assisting railway operators and managers with more precise data and providing more comprehensive information to passengers. Once the prediction model based on SAGE-Het is trained, the dispatcher can invoke the model to predict railway network delays whenever necessary. Invoking the trained model requires only a few seconds; for example, it only takes 3 seconds to run the well-trained SAGE-Het on test datasets with around 4600 graph samples in GZS-Net and CSS-Net. In addition, SAGE-Het, as a network-oriented approach, is capable of enhancing the comprehensive understanding of the delay evolution in railway networks, thus helping dispatchers make a comprehensive adjustment plan. Moreover, SAGE-Het enables nodes to connect with various numbers of heterogeneous nodes by edges, which allows more flexible inputs and therefore does not require missing features to be artificially filled. Interactions between nodes can also be explored by canceling corresponding edges based on SAGE-Het. For instance, the investigation of the impact of train-train interactions can improve understanding of the delay evolution, which can assist in obtaining a more applicable edge structure for SAGE-Het in future work. This research contains some shortcomings that need to be addressed in further studies.
First, delays are updated at stations instead of continuously in this study, which may prevent the actual delays from being obtained in real time. Thus, the predicted delays will lag behind the actual delays. However, once real-time delay records are obtained, we can train a more precise prediction model based on SAGE-Het. In addition, this study only considered train and station attributes when establishing the SAGE-Het model, although the proposed SAGE-Het model still achieved satisfactory performance. However, in the practical delay propagation process, railway network delay evolution results from the interactions of different network entities (e.g., weather, passenger flow, dispatching organization, etc.). Our proposed SAGE-Het model can theoretically gather the influences of all railway network entities. Therefore, in future studies, with the addition of more railway network entities to the SAGE-Het model and the selection of a more proper basic HomoGNN according to the edge characteristics, more precise delay prediction models can be constructed to reveal the evolution mechanism of railway network delay. In addition, in this study, the edges were determined by relying on delay propagation domain knowledge (e.g., adjacent trains are linked regardless of the interval length), while the investigation of the edge-linking method presented in Section 6.4 revealed that canceling some edges will not influence the results when the adjacent train intervals are sufficient. Therefore, in future research, some encoder blocks that can assist in learning the graph edge structure can be applied. Moreover, this HetGNN model can also be extended to other fields. For instance, in terms of the vehicle speed prediction task, the vehicles, pedestrians, and signal systems can be treated as different kinds of nodes, while corresponding edges can represent their relationships. ## Acknowledgment This work was supported by the National Nature Science Foundation of China (grant numbers 71871188 and U1834209), the Research and Development Project of China National Railway Group Co., Ltd. (grant number P2020X016), the Fundamental Research Funds for the Central Universities (grant number 2682021CX051), and the China Scholarship Council (grant number 202007000149). We are grateful for the contributions made by our project partners.
2308.06787
RMP-Loss: Regularizing Membrane Potential Distribution for Spiking Neural Networks
Spiking Neural Networks (SNNs) as one of the biology-inspired models have received much attention recently. They can significantly reduce energy consumption since they quantize the real-valued membrane potentials to 0/1 spikes to transmit information, and thus the multiplications of activations and weights can be replaced by additions when implemented on hardware. However, this quantization mechanism will inevitably introduce quantization error, thus causing catastrophic information loss. To address the quantization error problem, we propose a regularizing membrane potential loss (RMP-Loss) to adjust the distribution, which is directly related to quantization error, to a range close to the spikes. Our method is extremely simple to implement and straightforward to train an SNN. Furthermore, it is shown to consistently outperform previous state-of-the-art methods over different network architectures and datasets.
Yufei Guo, Xiaode Liu, Yuanpei Chen, Liwen Zhang, Weihang Peng, Yuhan Zhang, Xuhui Huang, Zhe Ma
2023-08-13T14:59:27Z
http://arxiv.org/abs/2308.06787v1
# RMP-Loss: Regularizing Membrane Potential Distribution for Spiking Neural Networks ###### Abstract Spiking Neural Networks (SNNs) as one of the biology-inspired models have received much attention recently. They can significantly reduce energy consumption since they quantize the real-valued membrane potentials to 0/1 spikes to transmit information, and thus the multiplications of activations and weights can be replaced by additions when implemented on hardware. However, this quantization mechanism will inevitably introduce quantization error, thus causing catastrophic information loss. To address the quantization error problem, we propose a regularizing membrane potential loss (RMP-Loss) to adjust the distribution, which is directly related to quantization error, to a range close to the spikes. Our method is extremely simple to implement and straightforward to train an SNN. Furthermore, it is shown to consistently outperform previous state-of-the-art methods over different network architectures and datasets. ## 1 Introduction Recently, many efforts have been made to make deep neural networks (DNNs) lightweight, so that they can be deployed on devices where energy consumption is limited. To this end, several approaches have been proposed, including network pruning [65], network quantization [18, 40, 15], knowledge transfer/distillation [49], neural architecture search [69, 42], and spiking neural networks (SNNs) [23, 19, 22, 17, 56, 39, 47, 59, 58, 67, 53, 60, 61]. The SNN provides a special way to reduce energy consumption following the working mechanism of the brain neuron. Its neurons accumulate spikes from previous neurons and present spikes to posterior neurons when the membrane potential exceeds the firing threshold. This information transmission paradigm will convert the computationally expensive multiplications to computationally convenient additions, thus making SNNs energy-efficient when implemented on hardware. Specialized neuromorphic hardware based on an event-driven processing paradigm is currently under various stages of development, _e.g._, SpiNNaker [31], TrueNorth [1], Tianjic [48], and Loihi [9], where SNNs can be efficiently implemented further. Due to the advantage of computational efficiency and the rapid development of neuromorphic hardware, the SNN has gained more and more attention. Although SNNs have been widely studied, their performance is still not comparable with that of DNNs. This performance gap is largely related to the quantization of the real-valued membrane potential to 0/1 spikes for the firing of the SNN in implementation [21]. The excessive information loss induced by the firing activity, which forces all information to only two values, will cause accuracy to decrease. Although this quantization causes information loss, which matters for computer vision tasks, the firing function cannot simply be abandoned, since the essential role of the activation function is to introduce non-linearity for neural networks [46]. Therefore, how to effectively reduce the information loss of membrane potential quantization is of high research importance. However, as far as we know, few studies have focused on directly solving this problem. The quantization error problem also exists in the methods that convert an ANN model to an SNN model [11, 4, 39]. However, these methods solve the problem by changing the activation function in ANNs or increasing the timesteps in SNNs, which do not work for direct SNN training.
InfLoR-SNN [21] adds a membrane potential redistribution function in the spiking neuron to reduce the quantization error via redistributing the membrane potential. However, it will decrease the biological plausibility of the SNN and increase the inference burden. This paper focuses on reducing the quantization error in SNN training directly and aims to introduce no burden for the SNN. Quantization error is smaller when the membrane potential is close to the firing value or the reset value [21]. Hence, to mitigate the information loss, we suggest redistributing the membrane potential so that it is closer to the 0/1 spikes. Then an additional loss term aimed at regularizing the **m**embrane **p**otential, called RMP-Loss, is presented, which can encourage the membrane potentials to gather around the binary spike values during the training phase. The workflow of our method is shown in Fig. 1. Figure 1: The overall workflow of the proposed method. We embed a membrane potential regularization loss in the task loss to redistribute the membrane potential in the training phase to reduce the quantization error. Our main contributions can be concluded as follows: * To our best knowledge, there have been few works noticing the quantization error in the direct training of SNNs. To mitigate the quantization error, we present the RMP-Loss, which is of benefit to training an SNN model that enjoys a narrow gap between the membrane potential and its corresponding 0/1 spike. Furthermore, we also provide theoretical proof to clarify why the RMP-Loss can prevent information loss. * Some existing methods can address information loss too. However, while they achieve comparable performance, more parameters or computation burdens are also introduced in the inference phase. Different from those methods, the RMP-Loss can handle information loss directly without introducing any additional parameters or inference burden. * Extensive experiments on both static and dynamic datasets show that our method performs better than many state-of-the-art SNN models. ## 2 Related Work There are two main problems in training an SNN model [22]. The first is that the firing activity is non-differentiable and thus the SNN model cannot be trained with back-propagation methods directly. One way to mitigate the optimization difficulty is through surrogate functions. This kind of method replaces the non-differentiable firing activity with a differentiable surrogate function to calculate the gradient in the back-propagation [37, 55, 45, 54]. In [2], the derivative of a truncated quadratic function was used to approximate the derivative of the firing activity function. In [63] and [7], the derivatives of a sigmoid [63] and a rectangular function were respectively adopted to construct the surrogate gradient. Furthermore, a dynamic evolutionary surrogate gradient that could maintain accurate gradients during the training was proposed in [41, 20, 5, 6]. Another way to solve the non-differentiable problem is converting an ANN model to an SNN model, known as ANN-SNN conversion methods [27, 26, 39, 32, 3, 29, 28]. The ANN-SNN method trains a high-accuracy ANN with ReLU activation first and then transforms the network parameters into an SNN under the supposition that the activation of the ANN can be approximated by the average firing rates of an SNN. However, conversion methods will introduce conversion errors inevitably. Many efforts were made to tackle this problem, such as a long inference time [14], threshold rescaling [52], soft reset [27], threshold shift [39], and the quantization clip-floor-shift activation function [4]. This paper adopts the surrogate gradient method. The second problem is information loss.
Though transmitting information with binary spikes is more efficient than that with real values, quantizing these real-valued membrane potentials to 0/1 spikes will cause information loss. There have been some works handling this problem indirectly, such as learning the appropriate membrane leak and firing threshold [50, 68]. These are of benefit to finding a preferable quantization choice. Similarly, in [17, 62], some learnable neuron parameters are incorporated into the SNN to optimize the quantization choice. In [24], three regularization losses are introduced to penalize three undesired shifts of the membrane potential distribution to ease the optimization and convergence of the SNN. The regularization is also beneficial to reducing the quantization error. However, all these methods do not handle the information loss problem directly, and most of them require more parameters. InfLoR-SNN [21] suggests adding a membrane potential redistribution function in the spiking neuron to redistribute the membrane potential closer to the spike values, which can be seen as a direct method to reduce the quantization error. However, adding another function in spiking neurons will decrease the biological plausibility of the SNN and increase the inference burden. In this work, we focus on handling the quantization error problem directly without introducing extra parameters or computation burden in the inference phase. ## 3 Preliminary In this section, a brief review of the primary computing element of SNNs is first provided in Sec. 3.1. Then we introduce the surrogate gradient that handles the non-differentiability challenge in Sec. 3.2. Finally, in Sec. 3.3, we describe the threshold-dependent batch normalization technique used in this work. ### Spiking Neuron Model _Spiking neuron_. The primary computing element, _i.e._, the neuron of an SNN, is much different from that of an ANN. The neuron of an ANN plays the role of a nonlinear transformation and can output real values. Meanwhile, the neuron in an SNN accumulates the information from previous neurons into its membrane potential and presents a spike to the following neurons when its membrane potential exceeds the firing threshold. The SNN is more efficient with this special information transmission paradigm while suffering from the information loss problem. A unified form of this spiking neuron model can be formulated as follows, \[u^{(t),\mathrm{pre}}=\tau u^{(t-1)}+x^{(t)}, \tag{1}\] \[o^{(t)}=\left\{\begin{array}{ll}1,&\mathrm{if}\ u^{(t),\mathrm{pre}}\geq V_{\mathrm{th}}\\ 0,&\mathrm{otherwise}\end{array}\right., \tag{2}\] \[u^{(t)}=u^{(t),\mathrm{pre}}(1-o^{(t)}). \tag{3}\] Where \(u^{(t),\mathrm{pre}}\) and \(u^{(t)}\) are the pre-membrane potential and the membrane potential, respectively, \(x^{(t)}\) is the input information, and \(o^{(t)}\) is the output spike at timestep \(t\). The range of the constant leaky factor \(\tau\) is \((0,1)\), and we set \(\tau\) as \(0.25\). \(V_{\mathrm{th}}\) is the firing threshold and is set to \(0.5\) here. _Output layer neuron_. In the general classification task, the output of the last layer is first presented to the Softmax function, and the winner class is then selected.
We make the neuron model in the output layer only accumulate the incoming inputs without any leakage as the final output, as is done in recent practices [50, 66], described by \[u^{(t)}=u^{(t-1)}+x^{(t)}. \tag{4}\] Then the cross-entropy loss is calculated according to the final membrane potential at the last timestep, \(u^{(T)}\). ### Surrogate Gradients of SNNs As shown in Eq. 2, the firing activity of SNNs can be seen as a step function. Its derivative is \(0\) everywhere except at \(V_{\mathrm{th}}\). This non-differentiability problem of the firing activity will cause gradient vanishing or explosion and makes back-propagation unsuitable for training SNNs directly. As mentioned before, many prior works adopted the surrogate function to replace the firing function to obtain a suitable gradient [7, 66, 56]. In InfLoR-SNN [21], an extra membrane potential redistribution function is added before the firing function. It argues that this redistribution function will reduce the information loss in the forward propagation. However, investigating it from a backward-propagation perspective, and considering that the redistribution function sits next to the firing function, their composition can be seen as a more suitable surrogate gradient if we gather the STE gradient of the firing function (used in [21]) and the gradient of the redistribution function together. In this sense, a suitable surrogate gradient can also be seen as a route to reduce the quantization error. Therefore, in this paper, we adopt the gathered redistribution function in InfLoR-SNN as our surrogate function, which indeed performs better than other methods [7, 66, 56] in our experiments, denoted as \[\varphi(u)=\left\{\begin{array}{ll}0,&u{<}0,\\ \frac{1}{2\mathrm{tanh}(3/2)}\mathrm{tanh}(3(u-1/2))+1/2,&0\leq u\leq 1,\\ 1,&u{>}1.\end{array}\right. \tag{5}\] ### Threshold-dependent Batch Normalization Normalization techniques can effectively reduce the training time and alleviate the gradient vanishing or explosion problem for training DNNs. A most widely used technique, called batch normalization (BN) [30], uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance, which are then used to normalize the summed input to that neuron on each training case. This is significantly effective for convolutional neural networks (CNNs). However, directly applying BN to SNNs will ignore the temporal characteristic and cannot achieve the desired effect. To this end, a more suitable normalization technique for SNNs, named threshold-dependent BN (tdBN), was further proposed in [66]. It normalizes feature inputs on both temporal and spatial dimensions. Besides, as its name implies, tdBN makes the normalized variance dependent on the firing threshold, _i.e._, the pre-activations are normalized to \(N(0,(\alpha V_{\mathrm{th}})^{2})\) instead of \(N(0,1)\). This can balance the pre-synaptic input and the threshold to maintain a reasonable firing rate. In this paper, we also adopt the tdBN, as follows, \[\bar{\mathbf{X}}=\alpha V_{\mathrm{th}}\frac{\mathbf{X}-\mathrm{mean}(\mathbf{X})}{\sqrt{\mathrm{mean}((\mathbf{X}-\mathrm{mean}(\mathbf{X}))^{2})+\epsilon}}, \tag{6}\] \[\tilde{\mathbf{X}}=\lambda\bar{\mathbf{X}}+\beta, \tag{7}\] where \(\mathbf{X}\in\mathbb{R}^{T\times B\times C\times H\times W}\) (\(T\): timesteps; \(B\): batch size; \(C\): channel; \((H,W)\): spatial domain) is a 5D tensor and \(\alpha\) is a hyper-parameter set as 1.
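The following is a minimal PyTorch sketch, written here for illustration, of the neuron dynamics of Eqs. (1)-(3) combined with the surrogate gradient derived from Eq. (5); the names are ours, and this is not the authors' released code.

```python
# A hedged sketch of the spiking neuron (Eqs. 1-3) with the surrogate
# gradient of Eq. 5; tau = 0.25 and V_th = 0.5 follow the paper.
import math
import torch

class SpikeAct(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u >= 0.5).float()                 # Eq. 2, V_th = 0.5

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Derivative of Eq. 5 on [0, 1]: 3 / (2 tanh(3/2)) * sech^2(3(u - 1/2));
        # zero outside that interval.
        grad_u = (3.0 / (2.0 * math.tanh(1.5))) \
                 * (1.0 - torch.tanh(3.0 * (u - 0.5)) ** 2)
        grad_u = grad_u * ((u >= 0.0) & (u <= 1.0)).float()
        return grad_output * grad_u

def lif_step(u_prev, x, tau=0.25):
    u_pre = tau * u_prev + x                      # Eq. 1
    o = SpikeAct.apply(u_pre)                     # Eq. 2 (surrogate backward)
    u = u_pre * (1.0 - o)                         # Eq. 3 (hard reset)
    return u, o
```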
## 4 RMP-Loss

As aforementioned, we suggest reducing the quantization error to avoid information loss in supervised training-based SNNs. In this section, we first provide a metric to measure the quantization error in SNNs. Based on this metric, we then introduce our method, a loss term for regularizing the membrane potential, _i.e._, RMP-Loss, which pushes the membrane potentials close to the spike values to reduce the quantization error. Finally, we give theoretical proof to clarify why the RMP-Loss can prevent information loss.

### Quantization Error Estimation

Estimating the quantization error precisely is important for designing a method to reduce it. Hence, before giving the full details of the RMP-Loss, we first formulate the quantization error. As shown in Eq. 2, the output of a neuron depends on the magnitude of the membrane potential. When the membrane potential exceeds the firing threshold, the output is \(1\); otherwise, it is \(0\). That is, these full-precision values are mapped to only two spike values. The difference between the output, \(o\), and the membrane potential, \(u\), can be measured as \[\mathrm{Dis}(o,u)=\left\{\begin{array}{ll}1-u,&\mathrm{if}\ u\geq V_{\mathrm{th}}\\ 0-u,&\mathrm{otherwise}\end{array}\right.. \tag{8}\] Obviously, the quantization error depends on the margin between the membrane potential and its corresponding spike: the closer they are, the smaller the error. We therefore define the quantization error as \[q(u)=|\mathrm{Dis}(o,u)|^{p}, \tag{9}\] where \(p>0\), usually set to \(1\) and/or \(2\), corresponding to the L1-norm and/or L2-norm. Unless otherwise specified, we set it to 2 in this paper.

### Regularizing Membrane Potential Distribution

To mitigate the information loss problem in SNNs, the quantization error should be reduced to some extent. To this end, we propose RMP-Loss to carve the distribution into a range close to the spikes based on \(q(u)\), as \[\mathcal{L}_{\mathrm{RMP}}=\frac{1}{TLBCWH}\sum_{t,l,b,c,w,h}q(u), \tag{10}\] where \(T\), \(L\), \(B\), \(C\), \(W\), and \(H\) denote the number of timesteps, the number of layers, the batch size, the number of channels, and the width and height of the membrane potential map, respectively. Finally, taking the classification loss into consideration, the total loss can be written as \[\mathcal{L}_{\mathrm{CE-RMP}}=\mathcal{L}_{\mathrm{CE}}+\lambda(n)\mathcal{L}_{\mathrm{RMP}}, \tag{11}\] where \(\mathcal{L}_{\mathrm{CE}}\) is the cross-entropy loss and \(\lambda(n)\) is a balance coefficient that changes dynamically with the training epoch. Obviously, \(\mathcal{L}_{\mathrm{RMP}}\) does not take the classification problem at hand into account and adds a new constraint to the network optimization. To give the parameter update enough freedom at the beginning, which is important for training a well-performing network, while still letting training focus on the classification task at the end, we adopt an increase-then-decrease schedule for \(\lambda(n)\), as follows, \[\lambda(n)=\left\{\begin{array}{ll}2k\frac{n}{N},&\mathrm{if}\ n\leq N/2\\ 2k(1-\frac{n}{N}),&\mathrm{otherwise}\end{array}\right., \tag{12}\] where \(n\) is the \(n\)-th epoch and \(k\) is a coefficient that controls the amplitude of the regularization. Unless otherwise specified, we set it to 0.1 in this paper. The final resulting RMP-Loss training method is defined in Algo. 1.
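For illustration, Eqs. 8–12 translate into a few lines of code. This is a hedged sketch of ours, not Algo. 1 itself: the bookkeeping that collects the membrane potentials `u` and spikes `o` from every layer and timestep (e.g., via forward hooks) is omitted, and all names are ours.

```python
import torch
import torch.nn.functional as F

def rmp_loss(u, o, p=2):
    """Eqs. 8-10: mean p-th power of the distance between each membrane
    potential and its corresponding spike value (0 or 1)."""
    return (o - u).abs().pow(p).mean()

def lambda_n(n, N, k=0.1):
    """Eq. 12: the balance coefficient rises linearly during the first half
    of training and then decays linearly back towards zero."""
    return 2 * k * n / N if n <= N / 2 else 2 * k * (1 - n / N)

def total_loss(logits, labels, potentials, spikes, epoch, num_epochs):
    """Eq. 11: cross-entropy plus the scheduled RMP regularizer, averaged
    over all recorded (u, o) pairs (one per layer and timestep)."""
    ce = F.cross_entropy(logits, labels)
    rmp = torch.stack([rmp_loss(u, o) for u, o in zip(potentials, spikes)]).mean()
    return ce + lambda_n(epoch, num_epochs) * rmp
```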
### Analysis and Discussion

In this work, we assume that RMP-Loss helps reduce the quantization loss of SNNs. To verify our assumption, a theoretical analysis is conducted using the concept of information entropy. To mitigate the information loss, the output spike tensor \(\mathbf{O}\) should reflect the information of the membrane potential tensor \(\mathbf{U}\) as much as possible. Since KL-divergence measures the difference between two random variable distributions, we use it to compute the difference between \(\mathbf{O}\) and \(\mathbf{U}\), which reflects the difficulty of reconstructing \(\mathbf{U}\) from \(\mathbf{O}\), _i.e._, the degree of information loss. We provide the theoretical analysis below. Let \(x^{u}\sim\mathcal{U}\), where the samples \(x^{u}\in\mathbf{U}\) and \(\mathcal{U}\) represents the distribution of \(x^{u}\). Similarly, we have \(x^{o}\sim\mathcal{O}\), where \(x^{o}\in\mathbf{O}\). We use \(P_{U}(x)\) and \(P_{O}(x)\) to denote the probability density functions of \(\mathcal{U}\) and \(\mathcal{O}\), respectively. Then the information loss for the spike firing process can be described as \[\mathcal{L}_{KL}(\mathcal{U}||\mathcal{O})=\int_{-\infty}^{\infty}P_{U}(x^{u})\log\frac{P_{U}(x^{u})}{P_{O}(x^{o})}dx. \tag{13}\] Since the output spike \(x^{o}\) is discrete, we can update it as \[\mathcal{L}_{KL}(\mathcal{U}||\mathcal{O})=\int_{o-\epsilon}^{o+\epsilon}P_{U}(x^{u})\log\frac{P_{U}(x^{u})}{P_{O}(x^{o})}dx, \tag{14}\] where \(\epsilon\) is a small constant, \(o\) is the spike value, \(P_{O}(x^{o})\) can be seen as a very large constant in this situation, and \(\int_{o-\epsilon}^{o+\epsilon}P_{O}(x^{o})dx=1\). With RMP-Loss, it becomes \[\mathcal{L}_{KL}(\hat{\mathcal{U}}||\mathcal{O})=\int_{o-\epsilon}^{o+\epsilon}P_{\hat{U}}(x^{\hat{u}})\log\frac{P_{\hat{U}}(x^{\hat{u}})}{P_{O}(x^{o})}dx, \tag{15}\] where \(x^{\hat{u}}\in\hat{\mathcal{U}}\) and \(\hat{\mathcal{U}}\) is the new distribution of \(x^{\hat{u}}\) adjusted by RMP-Loss. Here, we have the following propositions.

**Proposition 1**: \(\frac{d\mathcal{L}_{KL}}{dP_{U}(x^{u})}<0\)_, i.e., \(\mathcal{L}_{KL}\downarrow\) as \(P_{U}(x^{u})\uparrow\)._

**Proof:** When a membrane potential \(u\) is below the firing threshold, it is pushed towards 0 by the RMP-Loss, and towards 1 otherwise. Hence the firing rate of the neuron can be seen as unchanged whether the RMP-Loss is used or not, _i.e._, \(P_{O}(x^{o})\) stays the same for \(\hat{\mathbf{U}}\) and \(\mathbf{U}\). \[\frac{d\mathcal{L}_{KL}}{dP_{U}(x^{u})} =\frac{d\int_{o-\epsilon}^{o+\epsilon}P_{U}(x^{u})\log\frac{P_{U}(x^{u})}{P_{O}(x^{o})}dx}{dP_{U}(x^{u})} \tag{16}\] \[=\int_{o-\epsilon}^{o+\epsilon}(\log\frac{P_{U}(x^{u})}{P_{O}(x^{o})}+\frac{1}{\ln 2})dx. \tag{17}\] Since \(P_{O}(x^{o})\) is much larger than \(P_{U}(x^{u})\), we get \(\int_{o-\epsilon}^{o+\epsilon}(\log\frac{P_{U}(x^{u})}{P_{O}(x^{o})}+\frac{1}{\ln 2})dx<0\), hence \(\frac{d\mathcal{L}_{KL}}{dP_{U}(x^{u})}<0\). \(\blacksquare\)

**Proposition 2**: \(P_{\hat{U}}(x^{\hat{u}})>P_{U}(x^{u})|_{x=o}\)_._

**Proof:** We assume that there are \(n_{e}^{u}\) samples in the interval \((o-\epsilon,o+\epsilon)\) for \(\mathcal{U}\), where \(\epsilon\) is a small constant. Then we get \(P_{U}(x^{u})\approx\frac{n_{e}^{u}}{2\epsilon}\). Based on its effect, the RMP-Loss can be seen as a function that pushes the vanilla \(\mathbf{U}\) to \(\hat{\mathbf{U}}\), which is closer to \(\mathbf{O}\).
Therefore, we can assume that these samples are gathered into a new interval \((o-\epsilon_{l},o+\epsilon_{r})\), where \(\epsilon_{l},\epsilon_{r}<\epsilon\). Then we get \(P_{\hat{U}}(x^{\hat{u}})\approx\frac{n_{e}^{u}}{\epsilon_{l}+\epsilon_{r}}\). Thereby, \(P_{\hat{U}}(x^{\hat{u}})>P_{U}(x^{u})|_{x=o}\). \(\blacksquare\)

Along with **Proposition 1** and **Proposition 2**, we can conclude that \(\mathcal{L}_{KL}(\hat{\mathcal{U}}||\mathcal{O})<\mathcal{L}_{KL}(\mathcal{U}||\mathcal{O})\), _i.e._, our method with RMP-Loss suffers less information loss.

## 5 Experiment

In this section, we first conduct extensive ablation studies comparing SNNs with RMP-Loss against their vanilla counterparts to verify the effectiveness of the method. Next, we fully compare our method with other state-of-the-art (SoTA) methods. Finally, some further experimental visualizations are provided to understand the RMP-Loss.

### Datasets and Settings

**Datasets.** We conducted experiments on four benchmark datasets: CIFAR-10 (100) [33], CIFAR10-DVS [38], and ImageNet (ILSVRC12) [10]. The CIFAR-10 (100) dataset includes 60,000 images in 10 (100) classes with \(32\times 32\) pixels. The numbers of training images and test images are 50,000 and 10,000, respectively. The CIFAR10-DVS dataset is the neuromorphic version of the CIFAR-10 dataset and is composed of 10,000 images in 10 classes. We split it into 9,000 training images and 1,000 test images, similarly to [56]. The ImageNet dataset consists of 1,250,000 training images and 50,000 test images.

**Preprocessing.** We applied data normalization on all static datasets so that input images have zero mean and unit variance. Besides, we conducted random horizontal flipping and cropping on these datasets to avoid overfitting. For CIFAR, AutoAugment [8] and Cutout [13] were also used for data augmentation, as in [24, 20]. For the neuromorphic dataset, we resized the training image frames to \(48\times 48\) as in [66] and adopted random horizontal flips and random rolls within \(5\) pixels for augmentation. The test images were merely resized to \(48\times 48\) without any additional processing.

**Training setup.** For all datasets, the firing threshold \(V_{\mathrm{th}}\) was set to \(0.5\). For static image datasets, the images were encoded into binary spikes using the first layer of the SNN, as in recent works [50, 17, 16]; this is similar to rate coding. For the neuromorphic image dataset, we used the \(0/1\) spike format directly. The neuron models in the output layer accumulated the incoming inputs without generating any spikes as the output, as in [50]. For the CIFAR-10 (100) and CIFAR10-DVS datasets, we used the SGD optimizer with a momentum of \(0.9\) and a learning rate of \(0.01\) with cosine decay [43] to \(0\), as in [21]. All models were trained within \(400\) epochs with the same batch size of \(128\). For the ImageNet dataset, we adopted the SGD optimizer with a momentum of \(0.9\) and a learning rate of \(0.1\) with cosine decay to \(0\). All models were trained within \(320\) epochs.

### Ablation Study for RMP-Loss

We conducted a set of ablation experiments on CIFAR-10 using ResNet20, ResNet19, and VGG16 as backbones. The results are shown in Tab. 1. It can be seen that with the RMP-Loss, these SNNs achieve higher accuracy than their vanilla counterparts. We also show the membrane potential distribution of the first layer of the second block in the ResNet20 with and without RMP-Loss on the test set of CIFAR-10 in Fig. 2.
It can be seen that the models trained with RMP-Loss shrink the range of the membrane potential distribution, which leads to less quantization error.

### Comparison with State-of-the-art Methods

We evaluated the proposed method in terms of accuracy on various widely used static and neuromorphic datasets using spiking ResNet20 [50, 52], VGG16 [50], ResNet18 [16], ResNet19 [66], and ResNet34 [16]. The results, with the mean accuracy and standard deviation of 3 trials, are listed in Tab. 2.

**CIFAR-10.** For CIFAR-10, we tested three different network architectures. The performance of our method is shown in Tab. 2. It can be seen that our method achieves the best results in most of these cases. In particular, using ResNet19 [66], the RMP-Loss achieves 96.10% average top-1 accuracy, a 1.60% absolute improvement over the existing state-of-the-art TET [12]. Though InfLoR-SNN [21] is slightly better than our method for 4 and 6 timesteps, it decreases the biological plausibility of the SNN and increases the inference burden, since a complex transform function is added in its spiking neuron. On ResNet20, our method achieves higher accuracy with only 6 timesteps, while Diet-SNN [50] needs 10 timesteps. On VGG16, the RMP-Loss also shows clear advantages.

**CIFAR-100.** For CIFAR-100, we also experimented with these three different network structures. For all configurations, our method achieves consistent and significant accuracy improvements over prior work. ResNet20-based and VGG16-based RMP-Loss achieve 66.65% and 72.55% top-1 accuracy with only 4 timesteps, outperforming their Diet-SNN counterparts by 2.58% and 2.88% and their Real Spike counterparts by 0.05% and 1.93%, respectively, with fewer timesteps. Notably, our method surpasses TET by 4.26% on ResNet19, which is not easy to achieve in the SNN field. Overall, our results on CIFAR-100 show that, when applied to a more complex dataset, the RMP-Loss achieves an even more favorable performance compared to competing methods.

**CIFAR10-DVS.** We also verified our method on the popular neuromorphic dataset, CIFAR10-DVS, where RMP-Loss again performs strongly. RMP-Loss reaches 76.20% top-1 accuracy with 10 timesteps using ResNet19 as the backbone, outperforming STBP-tdBN by 12.39%, RecDis-SNN by 3.78%, and InfLoR-SNN by 0.70%. With ResNet20 as the backbone, RMP-Loss also achieves strong results.

**ImageNet.** For ImageNet, we conducted experiments with ResNet18 and ResNet34 as backbones. Our results are presented in Tab. 4. Among these normal spiking structures, our method achieves the highest accuracy of the SoTA prior works. However, our method is slightly worse than SEW ResNet [16].
This is because SEW ResNet uses an atypical architecture, which abandons the binary spike form and outputs arbitrary integers to transmit information, thus enjoying higher accuracy at the cost of losing the multiplication-free efficiency of SNNs. Hence, it would be limited in some applications, while our method would not.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Architecture & Type & Timestep & Accuracy & Quantization error \\ \hline \multirow{6}{*}{CIFAR-10} & \multirow{2}{*}{ResNet20} & Without RMP-Loss & 4 & 91.26\% & 0.186 \\ & & With RMP-Loss & 4 & **91.89\%** & 0.121 \\ \cline{2-5} & \multirow{2}{*}{ResNet19} & Without RMP-Loss & 4 & 95.04\% & 0.128 \\ & & With RMP-Loss & 4 & **95.51\%** & 0.104 \\ \cline{2-5} & \multirow{2}{*}{VGG16} & Without RMP-Loss & 4 & 92.80\% & 0.174 \\ & & With RMP-Loss & 4 & **93.33\%** & 0.135 \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation Study for RMP-Loss

Figure 2: The effect of RMP-Loss. The overall original membrane potential distribution (left) and the redistributed membrane potential distribution by RMP-Loss (right) of the first layer of the second block in ResNet20 on CIFAR-10 test sets.

\begin{table} \begin{tabular}{l l l c c c} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{CIFAR-10} & SpikeNorm [52] & ANN2SNN & VGG16 & 2500 & 91.55\% \\ & Hybrid-Train [51] & Hybrid training & VGG16 & 200 & 92.02\% \\ & Spike-Thrift [35] & Hybrid training & VGG16 & 100 & 91.29\% \\ & Spike-basedBP [36] & SNN training & ResNet11 & 100 & 90.95\% \\ & STBP [56] & SNN training & CIFARNet & 12 & 90.53\% \\ & TSSL-BP [64] & SNN training & CIFARNet & 5 & 91.41\% \\ & PLIF [17] & SNN training & PLIFNet & 8 & 93.50\% \\ & DSR [44] & SNN training & ResNet18 & 20 & 95.40\% \\ & Joint A-SNN [23] & SNN training & ResNet18 & 4 & 95.45\% \\ \cline{2-6} & \multirow{4}{*}{SNN training} & VGG16 & \multirow{4}{*}{20} & 5 & 92.70\% \\ & & & & 10 & 93.44\% \\ \cline{3-6} & & ResNet20 & 5 & 91.78\% \\ & & & & 10 & 92.54\% \\ \cline{2-6} & \multirow{4}{*}{RecDis-SNN [24]} & \multirow{4}{*}{SNN training} & \multirow{4}{*}{ResNet19} & 2 & 93.64\% \\ & & & & 6 & 95.55\% \\ \cline{3-6} & & & & 2 & 92.34\% \\ & & & & 4 & 92.92\% \\ & & & & 6 & 93.16\% \\ \hline \multirow{2}{*}{TET [12]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 94.16\% \\ & & & & 4 & 94.44\% \\ & & & & 6 & 94.50\% \\ \hline \multirow{2}{*}{InfLoR-SNN [21]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 94.44\% \\ & & & & 4 & 96.27\% \\ & & & & 6 & 96.49\% \\ \hline \multirow{2}{*}{**RMP-Loss**} & \multirow{2}{*}{SNN training} & ResNet19 & 2 & **95.31\%**\(\pm 0.07\) \\ & & & 4 & **95.51\%**\(\pm 0.08\) \\ & & & & 6 & **96.10\%**\(\pm 0.08\) \\ \cline{3-6} & & & & 2 & **91.89\%**\(\pm 0.05\) \\ & & & & 6 & **92.55\%**\(\pm 0.06\) \\ \cline{3-6} & & & & 6 & **93.33\%**\(\pm 0.07\) \\ \hline \multirow{2}{*}{CIFAR-100} & DSR [44] & SNN training & ResNet18 & 20 & 78.50\% \\ & InfLoR-SNN [21] & SNN training & VGG16 & 5 & 71.56\% \\ & IM-Loss [20] & SNN training & VGG16 & 5 & 70.18\% \\ \cline{2-6} & \multirow{2}{*}{SNN training} & ResNet20 & 5 & 64.07\% \\ & & & VGG16 & 5 & 69.67\% \\ \cline{2-6} & Real Spike [25] & SNN training & ResNet20 & 5 & 66.60\% \\ & & & VGG16 & 5 & 70.62\% \\ \hline \multirow{2}{*}{CIFAR-100} & \multirow{2}{*}{TET [12]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 72.87\% \\ & & & & 4 & 74.47\% \\ & & & & 6 & 74.72\% \\ \cline{3-6} & & & ResNet20 & 4 & **66.65\%**\(\pm 0.10\) \\ \cline{3-6} & & & & 10 & **73.30\%**\(\pm 0.11\) \\ \cline{2-6} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & **74.66\%**\(\pm 0.12\) \\ & & & & 4 & **78.28\%**\(\pm 0.10\) \\ & & & & 6 & **78.98\%**\(\pm 0.08\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with SoTA methods on CIFAR-10/100.

### Visualization

Furthermore, we also provide some experimental visualizations to show the regularizing effects of RMP-Loss. Fig. 3 shows the membrane potential distribution of the first layer of the second block in the ResNet20/19 with and without RMP-Loss on the test set of CIFAR-10. It can be seen that the SNN models trained with RMP-Loss shrink the range of the membrane potential distribution, which leads to less quantization error. On the other hand, comparing the membrane potential distributions of ResNet20 and ResNet19, it can be found that the membrane potential distribution of ResNet19 is thinner than that of ResNet20 too. Considering that ResNet19 achieves higher accuracy than ResNet20, this also shows that reducing the quantization error is effective for improving the accuracy of SNN models, and that our route of improving SNN accuracy by reducing the quantization error is reasonable.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Method & Type & Architecture & Timestep & Spike form & Accuracy \\ \hline Hybrid-Train [51] & Hybrid training & ResNet34 & 250 & Binary & 61.48\% \\ SpikeNorm [52] & ANN2SNN & ResNet34 & 2500 & Binary & 69.96\% \\ STBP-tdBN [66] & SNN training & ResNet34 & 6 & Binary & 63.72\% \\ TET [12] & SNN training & ResNet34 & 6 & Binary & 64.79\% \\ \hline SEW ResNet [16] & SNN training & ResNet18 & 4 & Integer & **63.18\%** \\ & & ResNet34 & 4 & Integer & **67.04\%** \\ \hline Spiking ResNet [16] & SNN training & ResNet18 & 4 & Binary & 62.32\% \\ & & ResNet34 & 4 & Binary & 61.86\% \\ \hline **RMP-Loss** & SNN training & ResNet18 & 4 & Binary & 63.03\%\(\pm 0.07\) \\ & & ResNet34 & 4 & Binary & 65.17\%\(\pm 0.07\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with SoTA methods on ImageNet.

Figure 3: The original membrane potential distribution (left) and the redistributed membrane potential distribution by RMP-Loss (right) of the first layer of the second block in ResNet20/19 on CIFAR-10 test sets.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{CIFAR10-DVS} & Rollout [34] & Rollout & DenseNet & 10 & 66.80\% \\ & STBP-tdBN [66] & SNN training & ResNet19 & 10 & 67.80\% \\ & LIAF [57] & Conv3D & LIAF-Net & 10 & 71.70\% \\ & LIAF [57] & LIAF & LIAF-Net & 10 & 70.40\% \\ & RecDis-SNN [24] & SNN training & ResNet19 & 10 & 72.42\% \\ \cline{2-6} & InfLoR-SNN [21] & SNN training & ResNet19 & 10 & 75.50\% \\ & & ResNet20 & 10 & 75.10\% \\ \cline{2-6} & & & ResNet19 & 10 & **76.20\%\(\pm 0.20\)** \\ \cline{2-6} & & & ResNet20 & 10 & **75.60\%\(\pm 0.30\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with SoTA methods on CIFAR10-DVS.

## 6 Conclusion

This paper aims at addressing the information loss problem caused by the \(0/1\) spike quantization of SNNs. We introduce RMP-Loss to adjust the membrane potential to reduce the quantization error. Different from other methods that reduce the quantization error indirectly or induce more parameters, RMP-Loss handles this problem directly and introduces no extra parameters in the inference phase.
We show that our method outperforms SoTA methods on both static and neuromorphic datasets.

## Acknowledgment

This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
2304.07980
RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks
It is well-known that recurrent neural networks (RNNs), although widely used, are vulnerable to adversarial attacks including one-frame attacks and multi-frame attacks. Though a few certified defenses exist to provide guaranteed robustness against one-frame attacks, we prove that defending against multi-frame attacks remains a challenging problem due to their enormous perturbation space. In this paper, we propose the first certified defense against multi-frame attacks for RNNs called RNN-Guard. To address the above challenge, we adopt the perturb-all-frame strategy to construct perturbation spaces consistent with those in multi-frame attacks. However, the perturb-all-frame strategy causes a precision issue in linear relaxations. To address this issue, we introduce a novel abstract domain called InterZono and design tighter relaxations. We prove that InterZono is more precise than Zonotope yet carries the same time complexity. Experimental evaluations across various datasets and model structures show that the certified robust accuracy calculated by RNN-Guard with InterZono is up to 2.18 times higher than that with Zonotope. In addition, we extend RNN-Guard as the first certified training method against multi-frame attacks to directly enhance RNNs' robustness. The results show that the certified robust accuracy of models trained with RNN-Guard against multi-frame attacks is 15.47 to 67.65 percentage points higher than those with other training methods.
Yunruo Zhang, Tianyu Du, Shouling Ji, Peng Tang, Shanqing Guo
2023-04-17T03:58:54Z
http://arxiv.org/abs/2304.07980v1
# RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks

###### Abstract

It is well-known that recurrent neural networks (RNNs), although widely used, are vulnerable to adversarial attacks, including one-frame attacks and multi-frame attacks. Though a few certified defenses exist to provide guaranteed robustness against one-frame attacks, we prove that defending against multi-frame attacks remains a challenging problem due to their enormous perturbation space. In this paper, we propose the first certified defense against multi-frame attacks for RNNs called RNN-Guard. To address the above challenge, we adopt the perturb-all-frame strategy to construct perturbation spaces consistent with those in multi-frame attacks. However, the perturb-all-frame strategy causes a precision issue in linear relaxations. To address this issue, we introduce a novel abstract domain called InterZono and design tighter relaxations. We prove that InterZono is more precise than Zonotope yet carries the same time complexity. Experimental evaluations across various datasets and model structures show that the certified robust accuracy calculated by RNN-Guard with InterZono is up to 2.18 times higher than that with Zonotope. In addition, we extend RNN-Guard as the first certified training method against multi-frame attacks to directly enhance RNNs' robustness. The results show that the certified robust accuracy of models trained with RNN-Guard against multi-frame attacks is 15.47 to 67.65 percentage points higher than those with other training methods.

## I Introduction

Recurrent neural networks have been widely applied in various applications, including natural language processing [26], automatic speech recognition [22], and video processing [57]. However, due to the lack of robustness, RNNs are vulnerable to adversarial attacks [7, 29, 43, 20], where adversaries add small perturbations to clean inputs to induce the target model to misclassify the perturbed inputs (termed adversarial examples). The existence of adversarial attacks raises concerns about the safety of RNN-based applications. To improve the robustness of RNNs, earlier researchers proposed many empirical defenses based on heuristics (e.g., adversarial training [20, 40]). However, empirical defenses can often be defeated by stronger attacks due to their lack of theoretical guarantees. To provide guaranteed robustness, a few _robustness certification_ methods (a.k.a. _certified defenses_) [12, 25] have recently been proposed for RNNs to formally verify whether a given neighborhood around the clean input contains any adversarial example. Robustness certification methods can be further extended as certified training methods, which utilize the certification results to improve the model's robustness.

**Motivation.** Adversarial attacks on RNNs can be categorized into either _one-frame attacks_, where adversaries can only perturb one frame, or _multi-frame attacks_, where adversaries can perturb multiple frames simultaneously. Previous certified defenses for RNNs focus on one-frame attacks. However, we discover a severe vulnerability in those works: they are vulnerable to multi-frame attacks. As shown in Fig. 1, even if the model is certified to be robust around a clean sample against one-frame attacks, multi-frame adversaries can still find an adversarial example. Thus, defending against multi-frame attacks becomes an important and urgent problem.
**This work: Certified Defense against Multi-frame Attacks.** In this paper, we address this problem and present the first certified defense against multi-frame attacks, called RNN-Guard. To capture all potential adversarial examples in multi-frame attacks, we adopt the perturb-all-frame strategy (i.e., simultaneously perturbing all frames), which results in a much larger perturbation space. However, the perturb-all-frame strategy causes a precision issue. Most certified defenses apply linear relaxations to handle non-linear functions. Generally speaking, relaxations in smaller spaces are more precise because a shorter curve is closer to a straight line than a longer one. Thus, the larger perturbation space results in less precise relaxations. The precision issue can cause imprecise results of robustness certification, i.e., too many robust samples are incorrectly proven to be non-robust. To address this issue, we introduce a new abstract domain called InterZono and design tighter relaxations for non-linear functions in RNN models. InterZono consists of a main domain and a support domain, and it reduces errors in relaxations by computing the intersection of the two domains.

Fig. 1: Examples of the perturbation space in one-frame attacks (the horizontal and vertical orange lines) and multi-frame attacks (the yellow circle), where the black dot is a clean input with two frames (i.e., \(x_{1}\) and \(x_{2}\)) and the red dot is an adversarial example in the multi-frame attack. (a) One-frame attack on \(x_{1}\). (b) One-frame attack on \(x_{2}\). (c) Multi-frame attack on \(x_{1}\) and \(x_{2}\).

We comprehensively evaluate the performance of RNN-Guard across various datasets and model structures. Experimental results show that the certified robust accuracy computed by RNN-Guard with InterZono is up to 2.18 times higher than by RNN-Guard with Zonotope (a classic abstract domain used in many certified defenses), which indicates that InterZono is more precise, while the two share the same efficiency. To further demonstrate RNN-Guard's practicality, we apply RNN-Guard to certify the effectiveness of existing defenses. The results show that those defenses lack robustness against multi-frame attacks, which reaffirms the exigency of this work. Furthermore, we extend RNN-Guard as a certified training method to directly improve RNNs' robustness against multi-frame attacks. The results show that the certified robust accuracy of models trained with RNN-Guard against multi-frame attacks is 15.47 to 67.65 percentage points higher than with other training methods, which indicates that RNN-Guard is the best choice for defending against such attacks. We also conduct adaptive evaluations, whose results reconfirm that RNN-Guard is more effective against multi-frame attacks.

**Our Contributions.** Our main contributions are:

* We discover a severe vulnerability in existing certified defenses for RNNs, i.e., they are challenged by multi-frame attacks due to their limited perturbation space. We theoretically prove that defending against multi-frame attacks is more difficult than defending against one-frame attacks.
* To address this challenge, we propose a novel certified defense with the perturb-all-frame strategy for capturing all potential adversarial examples in multi-frame attacks. To the best of our knowledge, this is the first certified defense that focuses on multi-frame attacks.
* To address the precision issue caused by the perturb-all-frame strategy, we introduce an abstract domain called InterZono and design tighter relaxations. We theoretically prove that InterZono is more precise than Zonotope while carrying the same time complexity.
* Through extensive evaluations, we show that InterZono is more precise than Zonotope while sharing the same efficiency. Moreover, we demonstrate RNN-Guard's superiority in improving RNN models' robustness against multi-frame attacks.

## II Background

### _Recurrent Neural Networks_

In this work, we apply RNNs to sequence classification tasks such as sentiment prediction and toxic content detection. The RNN model receives a sequence of variable length (denoted by \(T\)) composed of frames such as words or tokens and performs classification into \(C\) distinct classes. Let \(\mathbf{X}=[\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots,\mathbf{x}^{(t)},\ldots,\mathbf{x}^{(T)}]\) be the input sequence. As shown in Fig. 2, the RNN model updates a hidden state \(\mathbf{h}^{(t)}\) at each time step according to the current frame \(\mathbf{x}^{(t)}\) and the last hidden state \(\mathbf{h}^{(t-1)}\): \[\mathbf{h}^{(t)}=\text{layer}(\mathbf{x}^{(t)},\mathbf{h}^{(t-1)}),\] where \(t=1,2,\ldots,T\). The initial hidden state \(\mathbf{h}^{(0)}\) is usually set to zero. In particular, LSTM models update an additional hidden state \(\mathbf{c}^{(t)}\) called the cell state. The output vector \(\mathbf{y}\in\mathbb{R}^{C}\) is calculated according to each hidden state: \[\mathbf{y}=\mathbf{W}_{o}\cdot\mathbf{h}^{(1)}+\mathbf{W}_{o}\cdot\mathbf{h}^{(2)}+\cdots+\mathbf{W}_{o}\cdot\mathbf{h}^{(T)}+\mathbf{b}_{o}.\] For simplicity, we replace it with \(\mathbf{y}=\mathbf{W}_{o}\cdot\mathbf{h}^{(T)}+\mathbf{b}_{o}\).

Fig. 2: The recurrent neural network architecture for sequence classification. The cell state is unique to LSTM models.

We mainly consider three kinds of RNN models in this paper: vanilla RNN models, long short-term memory (LSTM) [23] models, and gated recurrent unit (GRU) [9] models. Each kind of model updates its hidden states differently. Vanilla RNN models update \(\mathbf{h}^{(t)}\) according to \[\mathbf{h}^{(t)}=\text{tanh}(\mathbf{W}_{x}\mathbf{x}^{(t)}+\mathbf{b}_{x}+\mathbf{W}_{h}\mathbf{h}^{(t-1)}+\mathbf{b}_{h}).\] LSTM models update \(\mathbf{h}^{(t)}\) and \(\mathbf{c}^{(t)}\) according to \[\mathbf{i}^{(t)} =\sigma(\mathbf{W}_{xi}\mathbf{x}^{(t)}+\mathbf{b}_{xi}+\mathbf{W}_{hi}\mathbf{h}^{(t-1)}+\mathbf{b}_{hi}),\] \[\mathbf{f}^{(t)} =\sigma(\mathbf{W}_{xf}\mathbf{x}^{(t)}+\mathbf{b}_{xf}+\mathbf{W}_{hf}\mathbf{h}^{(t-1)}+\mathbf{b}_{hf}),\] \[\mathbf{g}^{(t)} =\text{tanh}(\mathbf{W}_{xg}\mathbf{x}^{(t)}+\mathbf{b}_{xg}+\mathbf{W}_{hg}\mathbf{h}^{(t-1)}+\mathbf{b}_{hg}),\] \[\mathbf{o}^{(t)} =\sigma(\mathbf{W}_{xo}\mathbf{x}^{(t)}+\mathbf{b}_{xo}+\mathbf{W}_{ho}\mathbf{h}^{(t-1)}+\mathbf{b}_{ho}),\] \[\mathbf{c}^{(t)} =\mathbf{f}^{(t)}\odot\mathbf{c}^{(t-1)}+\mathbf{i}^{(t)}\odot\mathbf{g}^{(t)},\] \[\mathbf{h}^{(t)} =\mathbf{o}^{(t)}\odot\text{tanh}(\mathbf{c}^{(t)}),\] where \(\odot\) is the Hadamard product.
GRU models update \(\mathbf{h}^{(t)}\) according to \[\mathbf{r}^{(t)} =\sigma(\mathbf{W}_{xr}\mathbf{x}^{(t)}+\mathbf{b}_{xr}+\mathbf{W}_{hr}\mathbf{h}^{(t-1)}+\mathbf{b}_{hr}),\] \[\mathbf{z}^{(t)} =\sigma(\mathbf{W}_{xz}\mathbf{x}^{(t)}+\mathbf{b}_{xz}+\mathbf{W}_{hz}\mathbf{h}^{(t-1)}+\mathbf{b}_{hz}),\] \[\mathbf{n}^{(t)} =\text{tanh}(\mathbf{W}_{xn}\mathbf{x}^{(t)}+\mathbf{b}_{xn}+\mathbf{r}^{(t)}\odot(\mathbf{W}_{hn}\mathbf{h}^{(t-1)}+\mathbf{b}_{hn})),\] \[\mathbf{h}^{(t)} =(1-\mathbf{z}^{(t)})\odot\mathbf{n}^{(t)}+\mathbf{z}^{(t)}\odot\mathbf{h}^{(t-1)}.\]

### _Adversarial Attacks on RNNs_

According to the number of frames in the input sequence that adversaries can perturb, adversarial attacks on RNNs are categorized into one-frame attacks and multi-frame attacks [12].

#### Ii-B1 One-frame Attacks

One-frame attacks are adversarial attacks where the adversary can only perturb one frame and keep the others unchanged; these are the attacks considered in previous works on certified defenses for RNNs [12, 25]. For a clean input sequence \(\mathbf{X}_{c}=[\mathbf{x}_{c}^{(1)},\mathbf{x}_{c}^{(2)},\ldots,\mathbf{x}_{c}^{(t)},\ldots,\mathbf{x}_{c}^{(T)}]\), the one-frame adversarial example \(\mathbf{X}_{adv}^{(t)}\) can be any sequence that satisfies: \[\mathbf{X}_{adv}^{(t)}=[\mathbf{x}_{c}^{(1)},\mathbf{x}_{c}^{(2)},\ldots,\mathbf{x}_{c}^{(t-1)},\mathbf{x}_{adv}^{(t)},\mathbf{x}_{c}^{(t+1)},\ldots,\mathbf{x}_{c}^{(T)}],\] \[\|\mathbf{x}_{c}^{(t)}-\mathbf{x}_{adv}^{(t)}\|_{p}\leq\varepsilon,\ \mathbf{f}_{p}(\mathbf{X}_{adv}^{(t)})\neq\mathbf{f}_{p}(\mathbf{X}_{c}),\ t=1,2,\ldots,T,\] where \(\mathbf{f}_{p}\) returns the target model's prediction on the input. However, perturbing one frame is not always enough to achieve a successful attack. Thus, to ensure a high success rate, most attacks in practice (e.g., DeepWordBug [17] and BERT-Attack [27]) usually try to perturb more frames together until an adversarial example is found; these are referred to as multi-frame attacks.

#### Ii-B2 Multi-frame Attacks

Multi-frame attacks are adversarial attacks where the adversary can perturb multiple (partial or all) frames in the input sequence. The perturbation on each perturbed frame is also constrained within a given length. For example, the worst-case adversarial example \(\mathbf{X}_{adv}\), where all frames are perturbed, satisfies: \[\mathbf{X}_{adv}=[\mathbf{x}_{adv}^{(1)},\mathbf{x}_{adv}^{(2)},\ldots,\mathbf{x}_{adv}^{(t-1)},\mathbf{x}_{adv}^{(t)},\mathbf{x}_{adv}^{(t+1)},\ldots,\mathbf{x}_{adv}^{(T)}],\] \[\|\mathbf{x}_{c}^{(t)}-\mathbf{x}_{adv}^{(t)}\|_{p}\leq\varepsilon,\ \mathbf{f}_{p}(\mathbf{X}_{adv})\neq\mathbf{f}_{p}(\mathbf{X}_{c}),\ t=1,2,\ldots,T.\]

### _Zonotope Certification_

This work builds upon the Zonotope [19] abstract domain, which is used in many certified defenses [12, 30].

#### Ii-C1 Zonotope

A Zonotope \(\mathcal{Z}\) that abstracts \(n\) variables consists of a center \(\mathbf{\alpha}_{0}\) and \(N\) error terms: \[\mathcal{Z}=\mathbf{\alpha}_{0}+\sum_{i=1}^{N}\mathbf{\alpha}_{i}\cdot\epsilon_{i},\] where \(\mathbf{\alpha}_{0},\mathbf{\alpha}_{i}\in\mathbb{R}^{n}\) and \(\epsilon_{i}\in[-1,1]\) (\(i=1,2,\ldots,N\)).

Fig. 3: Comparison of relaxations for tanh. Our relaxed area is smaller than others. (a) Box-style relaxation. (b) Parallelogram-style relaxation. (c) Ours.
#### Ii-C2 Concretization

The numerical bounds of the variables can be derived using the following equation: \[\mathbf{l}=\mathbf{\alpha}_{0}-\sum_{i=1}^{N}|\mathbf{\alpha}_{i}|,\ \mathbf{u}=\mathbf{\alpha}_{0}+\sum_{i=1}^{N}|\mathbf{\alpha}_{i}|, \tag{1}\] where \(\mathbf{l}\in\mathbb{R}^{n}\) and \(\mathbf{u}\in\mathbb{R}^{n}\) are the lower and upper bound, respectively.

#### Ii-C3 Affine Abstract Transformer

Given an affine transformation \(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}\), its abstract transformer for \(\mathbf{W}\cdot\mathcal{Z}+\mathbf{b}\) is defined as follows: \[\mathbf{W}\cdot\mathcal{Z}+\mathbf{b}=\mathbf{W}\cdot\mathbf{\alpha}_{0}+\mathbf{b}+\sum_{i=1}^{N}\mathbf{W}\cdot\mathbf{\alpha}_{i}\cdot\epsilon_{i}. \tag{2}\]

#### Ii-C4 Non-linear Abstract Transformer

Abstract transformers for non-linear functions adopt relaxations to over-approximate the output. Take the tanh function as an example. There are different abstract transformers for the tanh function; we describe the one used in Cert-RNN, shown in Fig. 3(b). Given an input Zonotope \(\mathcal{Z}\), the abstract transformer first computes the numerical bounds, \(\mathbf{l}\) and \(\mathbf{u}\), using Equation 1. Then, it computes, for each variable, two parallel bounding lines that minimize their distance. Let \(y=k_{j}x+c_{j}\) and \(y=k_{j}x+d_{j}\) be the upper and lower bounding line of the \(j\)-th variable, where \(k_{j}=(\text{tanh}(u_{j})-\text{tanh}(l_{j}))/(u_{j}-l_{j}),j=1,\ldots,n\). The output domain of the abstract transformer is: \[\text{tanh}(\mathcal{Z})=\mathbf{\alpha}_{0}\odot\mathbf{k}+\frac{\mathbf{c}+\mathbf{d}}{2}+\sum_{i=1}^{N}\mathbf{\alpha}_{i}\odot\mathbf{k}\cdot\epsilon_{i}+\sum_{j=1}^{n}\frac{c_{j}-d_{j}}{2}\cdot\mathbf{e}_{j}\cdot\epsilon_{N+j}, \tag{3}\] where \(\mathbf{k}=(k_{1},k_{2},\ldots,k_{n})^{\mathsf{T}}\), \(\mathbf{c}=(c_{1},c_{2},\ldots,c_{n})^{\mathsf{T}}\), \(\mathbf{d}=(d_{1},d_{2},\ldots,d_{n})^{\mathsf{T}}\), and \(\mathbf{e}_{j}\) is the \(j\)-th standard basis vector. In LSTM and GRU models, there are some unique operations that are more complex, such as the Hadamard product between a sigmoid function and a tanh function. Since the Hadamard product depends on two variables, the corresponding abstract transformer computes two planes instead of lines. The rest of the calculation is similar to that in the tanh abstract transformer. Due to the limited space, we omit the computational details; please refer to [12].

#### Ii-C5 Certification and Certified Training

To certify the target model's robustness in a perturbation space, certification methods first abstract the perturbation space as a Zonotope. Then, they propagate the Zonotope through the model using abstract transformers. Thus, they obtain an output Zonotope that is an over-approximation of the model's all possible outputs. Finally, letting \(y_{t}\) be the output of the correct class and \(y_{f}\) that of an incorrect class, robustness is proven if the lower bound of \(y_{t}-y_{f}\) is positive for every incorrect class. Furthermore, certification methods can be extended as certified training methods by training models with a robustness loss \(l\) computed according to the output Zonotope \(\mathcal{Z}_{o}\). For example, \(l=\max_{\mathbf{y}\in\mathcal{Z}_{o}}l_{ce}(\mathbf{y},l_{t})\), where \(l_{ce}\) is the Cross-Entropy loss and \(l_{t}\) denotes the label of the correct class.
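The Zonotope machinery of Eqs. 1–3 and the final certification check can be summarized in a short NumPy sketch. This is our illustration under simplifying assumptions (dense error-term storage, the chord-slope tanh relaxation described above), not the Cert-RNN implementation; all names are ours.

```python
import numpy as np

class Zonotope:
    """alpha0 + sum_i alphas[i] * eps_i with eps_i in [-1, 1].
    alpha0: (n,) centre; alphas: (N, n) error-term coefficients."""

    def __init__(self, alpha0, alphas):
        self.alpha0, self.alphas = alpha0, alphas

    def concretize(self):
        """Eq. 1: interval bounds of the abstracted variables."""
        r = np.abs(self.alphas).sum(axis=0)
        return self.alpha0 - r, self.alpha0 + r

    def affine(self, W, b):
        """Eq. 2: exact abstract transformer for W x + b."""
        return Zonotope(W @ self.alpha0 + b, self.alphas @ W.T)

    def tanh(self):
        """Eq. 3: parallelogram-style relaxation with chord slope k."""
        l, u = self.concretize()
        k = (np.tanh(u) - np.tanh(l)) / np.maximum(u - l, 1e-12)
        # extrema of tanh(x) - k x on [l, u]: the endpoints, or where tanh'(x) = k
        x_star = np.arctanh(np.sqrt(np.clip(1.0 - k, 0.0, 1.0 - 1e-12)))
        cand = np.stack([l, u, np.clip(x_star, l, u), np.clip(-x_star, l, u)])
        gap = np.tanh(cand) - k * cand
        c, d = gap.max(axis=0), gap.min(axis=0)   # upper / lower line offsets
        return Zonotope(self.alpha0 * k + (c + d) / 2,
                        np.vstack([self.alphas * k, np.diag((c - d) / 2)]))

def is_certified(z_out, t):
    """Robust iff the lower bound of y_t - y_f is positive for every f != t."""
    C = z_out.alpha0.shape[0]
    for f in range(C):
        if f == t:
            continue
        w = np.zeros((1, C)); w[0, t], w[0, f] = 1.0, -1.0
        lo, _ = z_out.affine(w, np.zeros(1)).concretize()
        if lo[0] <= 0:
            return False
    return True
```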
However, certified training methods that improve the model's robustness also cause a decrease in its clean accuracy. To mitigate this problem, existing works [54] usually use a linear combination of the robustness loss and the standard loss.

## III Motivation

In this section, we demonstrate the vulnerability in existing certified defenses [12, 25] by proving that defending against multi-frame attacks is more difficult than defending against one-frame attacks.

### _Defense Strategy_

Let \(\mathcal{S}^{(t)}=\{\mathbf{x}^{(t)}:\|\mathbf{x}^{(t)}-\mathbf{x}_{c}^{(t)}\|_{p}\leq\varepsilon\}\) be the perturbation space (i.e., the neighborhood containing all potential adversarial examples) of the \(t\)-th frame. Previous works focus on one-frame attacks and thus adopt _the perturb-one-frame strategy_, which considers the perturbation space of the one-frame adversary: \[\mathcal{S}_{1}=\bigcup_{t=1}^{T}\mathcal{S}_{1}^{(t)}=\bigcup_{t=1}^{T}\{\mathbf{X}:\mathbf{x}^{(1)}=\mathbf{x}_{c}^{(1)},\ldots,\mathbf{x}^{(t-1)}=\mathbf{x}_{c}^{(t-1)},\mathbf{x}^{(t)}\in\mathcal{S}^{(t)},\mathbf{x}^{(t+1)}=\mathbf{x}_{c}^{(t+1)},\ldots,\mathbf{x}^{(T)}=\mathbf{x}_{c}^{(T)}\}.\] Aiming at defending against multi-frame attacks, we adopt _the perturb-all-frame strategy_, which considers the perturbation space of the worst-case multi-frame adversary: \[\mathcal{S}_{2}=\{\mathbf{X}:\mathbf{x}^{(1)}\in\mathcal{S}^{(1)},\ldots,\mathbf{x}^{(t)}\in\mathcal{S}^{(t)},\ldots,\mathbf{x}^{(T)}\in\mathcal{S}^{(T)}\}.\]

**Lemma 1**: \(\mathcal{S}_{1}\) _is a proper subset of \(\mathcal{S}_{2}\)._

**Proof of Lemma 1**: _On the one hand, according to their definitions, \(\mathcal{S}_{1}^{(t)}\) is a subset of \(\mathcal{S}_{2}\), \(t=1,2,\ldots,T\). Since \(\mathcal{S}_{1}\) is the union of all \(\mathcal{S}_{1}^{(t)}\)s, \(\mathcal{S}_{1}\) is also a subset of \(\mathcal{S}_{2}\). On the other hand, we can find an element \(\mathbf{X}_{adv}=[\mathbf{x}_{adv}^{(1)},\mathbf{x}_{adv}^{(2)},\ldots,\mathbf{x}_{adv}^{(T)}]\) in \(\mathcal{S}_{2}\) but not in \(\mathcal{S}_{1}\), where \(\mathbf{x}_{adv}^{(t)}\in\mathcal{S}^{(t)},\mathbf{x}_{adv}^{(t)}\neq\mathbf{x}_{c}^{(t)},t=1,2,\ldots,T\). In conclusion, \(\mathcal{S}_{1}\) is a proper subset of \(\mathcal{S}_{2}\). \(\square\)_

**Theorem 1**: _A model that is robust against multi-frame attacks around a clean sample \(\mathbf{X}_{c}\) is also robust against one-frame attacks around \(\mathbf{X}_{c}\), but not vice versa._

_Proof of Theorem 1_: _On the one hand, the model being robust against multi-frame attacks around \(\mathbf{X}_{c}\) indicates that all samples in \(\mathcal{S}_{2}\) are robust. Since \(\mathcal{S}_{1}\) is a proper subset of \(\mathcal{S}_{2}\), all samples in \(\mathcal{S}_{1}\) must be robust. Thus, the model is also robust against one-frame attacks around \(\mathbf{X}_{c}\). On the other hand, the model being robust against one-frame attacks around \(\mathbf{X}_{c}\) only indicates that all samples in \(\mathcal{S}_{1}\) are robust. As for samples in \(\mathcal{S}_{2}-\mathcal{S}_{1}\), their robustness is uncertain. Thus, there can be adversarial examples in \(\mathcal{S}_{2}\). \(\square\)_

### _Challenges_

While the perturb-all-frame strategy offers a stronger guarantee of the model's robustness, it also causes a precision issue. Most existing certified defenses apply linear relaxations to handle non-linear functions. As shown in Fig. 4, relaxations in smaller spaces are more precise because a shorter curve is closer to a straight line than a longer one.
As a result, larger perturbation spaces lead to less precise relaxations, which brings an intractable challenge to robustness certification against multi-frame attacks.

Fig. 4: Examples of the perturbation space's effect on the linear relaxations of tanh. The orange and blue areas are the linear relaxations when the perturbation space is \([0,1]\) and \([0,2]\), respectively.

## IV Methodology

In this section, we present RNN-Guard, the first certified defense against multi-frame attacks for RNN models.

### _Overview_

#### Iv-A1 Threat Model

We focus on white-box multi-frame adversarial attacks, which assume the strongest adversary, who has full knowledge about the target model (including its parameters and architecture) and can perturb all frames in its input sequence simultaneously by adding \(\ell_{p}\) (\(p=\infty\)) noises to their embeddings (as introduced in Section II-B).

#### Iv-A2 Robustness Certification

To certify the RNN model's robustness against multi-frame attacks, we first construct an InterZono capturing all potential adversarial examples. Then, we propagate the InterZono through the model using our abstract transformers. The output InterZono we obtain is an over-approximation of the model's all possible outputs. Finally, we certify the model's robustness by confirming that \(y_{t}-y_{f}\) is positive over the whole output InterZono, where \(y_{t}\) and \(y_{f}\) are the correct and incorrect classes' outputs, respectively.

#### Iv-A3 Certified Training

To improve the RNN model's robustness against multi-frame attacks, we first compute the output InterZono \(\mathcal{D}_{o}\) by propagating the perturbation space through the model. Then, we calculate a robustness loss \(l=\max_{\mathbf{y}\in\mathcal{D}_{o}}l_{ce}(\mathbf{y},l_{t})\), where \(l_{ce}\) is the Cross-Entropy loss and \(l_{t}\) denotes the label of the correct class. We combine the robustness loss with the standard loss to improve both the robustness and the clean accuracy of the model. Finally, we update the model's parameters through training according to the combined loss.

### _InterZono_

#### Iv-B1 InterZono

We present a novel abstract domain called InterZono to address the precision issue caused by the perturb-all-frame strategy.

**Definition 1 (InterZono):** An InterZono \(\mathcal{D}\) is an intersection between two Zonotopes, the main domain \(\mathcal{Z}_{m}\) and the support domain \(\mathcal{Z}_{s}\): \[\mathcal{D}=\mathcal{Z}_{m}\cap\mathcal{Z}_{s}.\] The main domain propagates the relations of variables through the model, and the support domain refines the main domain to improve precision.

#### Iv-B2 Concretization

Given an InterZono \(\mathcal{D}\), we first calculate the lower and upper bounds of \(\mathcal{Z}_{m}\) (i.e., \(\mathbf{l}_{m}\) and \(\mathbf{u}_{m}\)) and \(\mathcal{Z}_{s}\) (i.e., \(\mathbf{l}_{s}\) and \(\mathbf{u}_{s}\)) according to Equation 1. Then, the lower and upper bounds of \(\mathcal{D}\), i.e., \(\mathbf{l}\) and \(\mathbf{u}\), are computed as follows: \[\mathbf{l}=\max\{\mathbf{l}_{m},\mathbf{l}_{s}\},\ \mathbf{u}=\min\{\mathbf{u}_{m},\mathbf{u}_{s}\}, \tag{4}\] where the maximum and minimum are taken in an element-wise manner.

#### Iv-B3 Affine Abstract Transformer

The affine abstract transformer of InterZono is simple. We apply the affine abstract transformer of Zonotope to the main domain and the support domain according to Equation 2. The outputs on the main domain and the support domain are the main domain and the support domain of the new InterZono, respectively.
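Continuing the hypothetical `Zonotope` sketch from Section II-C, Definition 1 and Eq. 4 can be sketched as follows (our illustration, not the authors' implementation). The `box` helper anticipates the Eq. 5-style construction used by the non-linear transformers described next.

```python
import numpy as np

class InterZono:
    """Definition 1: the intersection D = Z_m and Z_s of a main Zonotope
    and a support Zonotope."""

    def __init__(self, main, support):
        self.main, self.support = main, support

    def concretize(self):
        """Eq. 4: element-wise tightest bounds across the two domains."""
        lm, um = self.main.concretize()
        ls, us = self.support.concretize()
        return np.maximum(lm, ls), np.minimum(um, us)

    def affine(self, W, b):
        """Apply the Zonotope affine transformer (Eq. 2) to both domains."""
        return InterZono(self.main.affine(W, b), self.support.affine(W, b))

def box(l, u):
    """Box Zonotope over [l, u] with one fresh error term per dimension
    (cf. Eq. 5); used to rebuild the support domain after non-linearities."""
    return Zonotope((u + l) / 2, np.diag((u - l) / 2))
```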
#### Iv-B4 Non-linear Abstract Transformer

Unlike in the affine abstract transformer, both the main domain and the support domain of the new InterZono are computed mainly based on the main domain of the input InterZono. We continue to use the tanh abstract transformer as an example. Given an InterZono \(\mathcal{D}\), we first calculate its lower and upper bounds \(\mathbf{l}\) and \(\mathbf{u}\) according to Equation 4. Then, we apply the parallelogram-style abstract transformer to the main domain \(\mathcal{Z}_{m}\), according to Equation 3; the output Zonotope is denoted by \(\mathcal{Z}_{m}^{\prime}\). Note that we compute the \(\mathbf{k}\) in Equation 3 according to the bounds of \(\mathcal{D}\) rather than those of \(\mathcal{Z}_{m}\). Next, we compute the output's support domain \(\mathcal{Z}_{s}^{\prime}\) as follows (i.e., the box-style abstract transformer): \[\mathcal{Z}_{s}^{\prime}=\frac{\mathbf{u}+\mathbf{l}}{2}+\sum_{i=1}^{n}\frac{u_{i}-l_{i}}{2}\cdot\mathbf{e}_{i}\cdot\epsilon_{i}, \tag{5}\] where \(l_{i}\) and \(u_{i}\) are the \(i\)-th components of \(\mathbf{l}\) and \(\mathbf{u}\), respectively. Finally, the output InterZono \(\mathcal{D}^{\prime}\) is \(\mathcal{Z}_{m}^{\prime}\cap\mathcal{Z}_{s}^{\prime}\).

For complex operations in LSTM and GRU models, e.g., the Hadamard product between a sigmoid function and a tanh function, \(\sigma(x)\odot\text{tanh}(y)\), our abstract transformer is constructed in a similar way. Let \(\mathcal{D}_{x}=\mathcal{Z}_{x,m}\cap\mathcal{Z}_{x,s}\) and \(\mathcal{D}_{y}=\mathcal{Z}_{y,m}\cap\mathcal{Z}_{y,s}\) be two InterZonos. To calculate the output of \(\sigma(\mathcal{D}_{x})\odot\text{tanh}(\mathcal{D}_{y})\), denoted by \(\mathcal{D}_{z}\), we first calculate the lower and upper bounds of \(\mathcal{D}_{x}\) and \(\mathcal{D}_{y}\), denoted by \(\mathbf{l}_{x},\mathbf{u}_{x}\) and \(\mathbf{l}_{y},\mathbf{u}_{y}\), respectively. Then, we apply the abstract transformer for \(\sigma(x)\odot\text{tanh}(y)\) in Cert-RNN to \(\mathcal{Z}_{x,m}\) and \(\mathcal{Z}_{y,m}\), but replace the lower and upper bounds of \(\mathcal{Z}_{x,m}\) and \(\mathcal{Z}_{y,m}\) with the corresponding bounds of \(\mathcal{D}_{x}\) and \(\mathcal{D}_{y}\). Let \(\mathcal{Z}_{z,m}\) denote \(\sigma(\mathcal{Z}_{x,m})\odot\text{tanh}(\mathcal{Z}_{y,m})\). Next, we calculate the support domain of \(\mathcal{D}_{z}\), denoted by \(\mathcal{Z}_{z,s}\), as follows: \[\mathcal{Z}_{z,s}=\frac{\mathbf{u}+\mathbf{l}}{2}+\sum_{i=1}^{n}\frac{u_{i}-l_{i}}{2}\cdot\mathbf{e}_{i}\cdot\epsilon_{i},\] \[\mathbf{l}=\min\{\sigma(\mathbf{l}_{x})\odot\text{tanh}(\mathbf{l}_{y}),\sigma(\mathbf{u}_{x})\odot\text{tanh}(\mathbf{l}_{y})\},\] \[\mathbf{u}=\max\{\sigma(\mathbf{l}_{x})\odot\text{tanh}(\mathbf{u}_{y}),\sigma(\mathbf{u}_{x})\odot\text{tanh}(\mathbf{u}_{y})\},\] where the maximum and minimum are taken in an element-wise manner and \(\mathbf{l},\mathbf{u}\in\mathbb{R}^{n}\). Finally, the output InterZono \(\mathcal{D}_{z}\) is \(\mathcal{Z}_{z,m}\cap\mathcal{Z}_{z,s}\).

#### Iv-B5 Tightness

We prove that InterZono is tighter than Zonotope.

**Theorem 2:** The output InterZono \(\mathcal{D}^{\prime}\) is a tighter over-approximation (super-set) of \(\mathcal{Z}_{m}\)'s image under tanh.

**Proof of Theorem 2:** Let \(\mathcal{G}\) be the exact image of \(\mathcal{Z}_{m}\) under the tanh function. Since the parallelogram-style abstract transformer calculates an over-approximation of \(\mathcal{Z}_{m}\)'s image, \(\mathcal{G}\) is a subset of \(\mathcal{Z}_{m}^{\prime}\).
Meanwhile, since the box-style abstract transformer can be considered a special case of the parallelogram-style abstract transformer (\(\mathbf{k}=0\)), \(\mathcal{G}\) is also a subset of \(\mathcal{Z}_{s}^{\prime}\). Thus, \(\mathcal{G}\) is a subset of \(\mathcal{D}^{\prime}\). Moreover, since \(\mathcal{D}^{\prime}\) is the intersection between \(\mathcal{Z}_{m}^{\prime}\) and \(\mathcal{Z}_{s}^{\prime}\), \(\mathcal{D}^{\prime}\) is tighter than either of them. \(\blacksquare\)

This conclusion also holds for other activation functions such as sigmoid. We provide an example in Fig. 5 to support Theorem 2.

#### Iv-B6 Efficiency

We prove that InterZono is as efficient as Zonotope.

**Theorem 3:** The time complexities of InterZono's abstract transformers are equal to those of Zonotope's.

**Proof of Theorem 3:** Let \(T_{Z}(n)=O(f_{Z}(n))\) be the time complexity of Zonotope's linear abstract transformer. According to the definition of InterZono's linear abstract transformer, its time complexity is \(T_{D}(n)=O(f_{Z}(n))+O(f_{Z}(n))=O(f_{Z}(n))=T_{Z}(n)\). Let \(T^{\prime}_{Z}(n)=O(f^{\prime}_{Z}(n))\) be the time complexity of Zonotope's non-linear abstract transformer. According to the definition of InterZono's non-linear abstract transformer, the time complexity of calculating \(\mathcal{Z}^{\prime}_{m}\), denoted as \(T^{\prime}_{m}(n)\), is equal to \(O(f^{\prime}_{Z}(n))\). Since the box-style abstract transformer is a special case of the parallelogram-style abstract transformer without calculating and multiplying \(\mathbf{k}\), the time complexity of calculating \(\mathcal{Z}^{\prime}_{s}\), denoted as \(T^{\prime}_{s}(n)\), is no larger than \(O(f^{\prime}_{Z}(n))\). Thus, the time complexity of InterZono's non-linear abstract transformer is \(T^{\prime}_{D}(n)=T^{\prime}_{m}(n)+T^{\prime}_{s}(n)=O(f^{\prime}_{Z}(n))=T^{\prime}_{Z}(n)\). \(\blacksquare\)

## V Evaluation

In this section, we evaluate the performance of RNN-Guard on robustness certification against multi-frame attacks. To the best of our knowledge, RNN-Guard is the first robustness certification method against multi-frame attacks. Since there is no existing baseline method, we use RNN-Guard with the classic abstract domain Zonotope as the baseline.

### _Experimental Settings_

#### V-A1 Models

We use LSTM models and GRU models, which consist of 32 and 64 hidden neurons (consistent with those used in previous work [12]). We refer to them as LSTM-32, LSTM-64, GRU-32, and GRU-64 in the following.

#### V-A2 Datasets

We use two datasets in our evaluation. (i) The _Rotten Tomatoes Movie Review (RT)_ dataset [32], which is a benchmark corpus of movie reviews used for sentiment analysis. The RT dataset contains about 39,000 and 4,800 samples in the training set and the testing set, respectively. (ii) The _Yelp Reviews Polarity (Yelp)_ dataset [56], which is a large text classification benchmark. The Yelp dataset contains about 560,000 and 38,000 samples in the training set and the testing set, respectively.

#### V-A3 Hardware

All experiments are conducted on a server with an Intel Core i9-10920X CPU running at 3.50 GHz, 128 GB memory, 4 TB HDD, and a GeForce RTX 3090 GPU card.

#### V-A4 Word Embedding

We use the _GloVe_ model [35] to map the words into embeddings and normalize the word embeddings to reduce their internal covariate shift. For the out-of-vocabulary words, we randomly sample from the uniform distribution in \([-0.1,0.1]\) for initialization. On vanilla RNN models, we use the pre-trained model 'glove.840B.300d'.
On LSTM and GRU models, due to the limited GPU memory, we use the smaller model 'glove.6B.50d'.

#### V-A5 Evaluation Metrics

We consider the following metrics: (i) the _certified robust accuracy_ (C.Acc.) at \(\varepsilon_{e}\) on a dataset is the fraction of samples in the test set that are guaranteed (by robustness certification) to be robust within a neighborhood of radius \(\varepsilon_{e}\), which can be seen as a lower bound of the target model's robustness; (ii) the _running time_ of a method on a dataset is the time the robustness certification method takes to certify the robustness of all samples in the testing set.

### _Results and Analysis_

We compare RNN-Guard with InterZono to RNN-Guard with Zonotope across four models and two datasets to demonstrate the advantages of InterZono.

First, we compare the precision of Zonotope and InterZono. As mentioned earlier, most certified defenses adopt linear relaxations to handle non-linear functions in the model, which causes the precision issue that eventually leads them to mistakenly certify certain robust samples as non-robust. Thus, improving the precision of robustness certification is one of the most important research directions in this area. The precision of a robustness certification method is usually measured by the certified robust accuracy it computes: the more precise a method is, the more samples it can certify the robustness of, and thus the higher the certified robust accuracy it computes. As shown in Tab. I and Fig. 6, the certified robust accuracy computed using InterZono is higher than that computed using Zonotope. For example, for the LSTM-32 model on the RT dataset, RNN-Guard with Zonotope proves that 6.80% of samples are robust, while RNN-Guard with InterZono proves that 14.81% of samples are robust. The reason for InterZono's advantage is that it uses an additional support domain to refine the bounds. Thus, InterZono is more precise than Zonotope.

Fig. 5: An example of different relaxations for tanh. The grey area is the exact image of the input domain under tanh. (a) Input domain. (b) Box-style relaxation's output domain. (c) Parallelogram-style relaxation's output domain. (d) Ours.

Moreover, we observe that the precision improvement of InterZono is more obvious in LSTM models than in GRU models. This is because LSTM models involve more variables and use more non-linear functions, and our advantage comes from the relaxation of non-linear functions. Hence, our method is more precise, especially on complex models with more non-linear functions.

Then, we compare the efficiency of Zonotope and InterZono. As shown in Tab. I, the running time of RNN-Guard with InterZono is very close to that with Zonotope. For example, for the LSTM-64 model on the RT dataset, RNN-Guard with Zonotope takes 73.5 seconds to certify the robustness of all samples in the testing set, while RNN-Guard with InterZono takes 73.6 seconds. This can be explained by Theorem 3. Thus, InterZono is as efficient as Zonotope.

In conclusion, we believe that InterZono is the better choice for certifying the robustness of RNN models.

## VI Application

Robustness certification has many important applications, such as certifying the effectiveness of different defenses [18] and designing new defenses by improving the certified robustness of the target model (i.e., certified training) [30]. In this section, we apply RNN-Guard in the above applications to further demonstrate its practicality and benefits.
### _Certifying Existing Defenses_

In this subsection, we certify the effectiveness of existing defenses, including _standard training_ without any defense, two _adversarial training_ methods (i.e., AT-FGSM [20] and AT-PGD [29]), and two _certified training_ methods (i.e., POPQORN+ [25] and Cert-RNN [12]). Note that Cert-RNN failed to train several large models due to timeout or running out of GPU memory.

#### Vi-A1 Experimental Settings

_Models._ We consider three different kinds of RNN models, i.e., vanilla RNN models, LSTM models, and GRU models. We will refer to them as RNN-32 (R-32), RNN-64 (R-64), LSTM-32 (L-32), LSTM-64 (L-64), GRU-32 (G-32), and GRU-64 (G-64) in the following.

_Datasets._ In addition to the RT and Yelp datasets, we use the _Toxic Comment (TC)_ dataset [10] provided by Kaggle, which is also used in the evaluation of Cert-RNN. To be consistent with Cert-RNN, we also perform binary classification on this dataset by reclassifying the six toxicity categories as one. We randomly sample a balanced subset of this dataset for evaluation.

_Implementation Details._ Though POPQORN [25] was proposed to quantify the robustness of RNN models, we modify it for certified training and refer to it as 'POPQORN+'. The remaining settings are consistent with those in Section V-A.

#### Vi-A2 Results and Analysis

The results are shown in Tab. II and Fig. 7, from which we have the following observations. First, the standard training method achieves nearly zero certified robust accuracy in Tab. II. This indicates that models trained by it lack robustness, which is already widely acknowledged by the security community. Second, adversarial training methods achieve higher, but still limited, certified robust accuracy compared with the standard training method. For example, AT-FGSM and AT-PGD achieve 13.99% and 29.58% certified robust accuracy at \(\varepsilon_{e}=0.03\) for the LSTM-32 model on the Yelp dataset, respectively. The reason for such limited robustness is that they are based on heuristics and thus lack theoretical guarantees. Finally, existing certified training methods achieve relatively low certified robust accuracy. For instance, POPQORN+ and Cert-RNN achieve 0.97% and 0.69% certified robust accuracy at \(\varepsilon_{e}=0.03\) for the LSTM-32 model on the RT dataset, respectively. The reason for such low robustness is that they focus on one-frame attacks, and defending against multi-frame attacks is more difficult.

Fig. 6: Certified robust accuracy of the four models on the RT and Yelp datasets at different \(\varepsilon_{e}\) computed using Zonotope and InterZono. (a) RT LSTM-32. (b) RT LSTM-64. (c) RT GRU-32. (d) RT GRU-64. (e) Yelp LSTM-32. (f) Yelp LSTM-64. (g) Yelp GRU-32. (h) Yelp GRU-64.

In conclusion, multi-frame attacks pose a significant challenge to the effectiveness of existing defenses.

### _Improving RNN's Robustness_

To address the security challenge posed by multi-frame attacks, we extend our certification method, i.e., RNN-Guard, as a _certified training_ method and compare it to existing defenses to demonstrate its advantages.

#### Vi-B1 Experimental Settings

We consider the following metrics: (i) the _certified robust accuracy_, which is explained in Section V-A; (ii) the _empirical robust accuracy_ (E.Acc.)
at \(\varepsilon_{e}\) on a dataset is the fraction of samples in the test set that adversarial attackers fail to attack within a neighborhood of radius \(\varepsilon_{e}\), which can be seen as an upper bound on the target model's robustness; (iii) the _clean accuracy_ (Acc.) on a dataset is the fraction of samples in the test set that are correctly classified; (iv) the _running time_ of a method on a dataset is the time the certified training method takes to train the target model for one epoch on the training set. The rest of the settings are consistent with those in Section VI-A.

#### VI-B2 Results and Analysis

The results are shown in Tab. II, III, IV, V, and Fig. 7, from which we have the following observations.

First, as shown in Tab. II and Fig. 7, RNN-Guard achieves the highest certified robust accuracy across different models and datasets. For example, RNN-Guard achieves \(64.05\%\) certified robust accuracy at \(\varepsilon_{e}=0.03\) for the RNN-64 model on the RT dataset, while the second-best method only reaches \(9.04\%\). RNN-Guard attains better performance than adversarial training methods because it provides theoretical guarantees for RNN models' robustness. RNN-Guard outperforms existing certified training methods because we focus on multi-frame attacks, which are more difficult to defend against. Roughly speaking, let \(|\mathcal{S}^{(t)}|\) be the number of potential adversarial examples in \(\mathcal{S}^{(t)}\) (see details in Section III-A), which is a very large number. Existing certified training methods improve the robustness of samples in \(\mathcal{S}_{1}\), whose number is \(|\mathcal{S}_{1}|=|\mathcal{S}^{(1)}|+|\mathcal{S}^{(2)}|+\cdots+|\mathcal{S}^{(T)}|\). Meanwhile, RNN-Guard improves the robustness of samples in \(\mathcal{S}_{2}\), whose number is \(|\mathcal{S}_{2}|=|\mathcal{S}^{(1)}|\times|\mathcal{S}^{(2)}|\times\cdots\times|\mathcal{S}^{(T)}|\). As we can see, \(|\mathcal{S}_{2}|\) is far larger than \(|\mathcal{S}_{1}|\), which explains why POPQORN+ and Cert-RNN achieve such low certified robust accuracy against multi-frame attacks.

Fig. 7: Certified robust accuracy of RNN models. On the RT dataset, we train two models with \(\varepsilon_{t}=0.03\) for each defense and record their certified robust accuracy for \(0\leq\varepsilon_{e}\leq 0.06\). On the TC dataset, we train two models with \(\varepsilon_{t}=0.05\) for each defense and record their certified robust accuracy for \(0\leq\varepsilon_{e}\leq 0.1\). (a) RT RNN-32. (b) RT RNN-64. (c) TC RNN-32. (d) TC RNN-64. (e) RT LSTM-32. (f) RT LSTM-64. (g) Yelp LSTM-32. (h) Yelp LSTM-64. (i) RT GRU-32. (j) RT GRU-64. (k) Yelp GRU-32. (l) Yelp GRU-64.

Moreover, we further validate RNN-Guard's advantages by recording the certified robust accuracy of different defenses at different radii in Fig. 7. As shown in Fig. 7, RNN-Guard achieves the highest certified robust accuracy at almost all \(\varepsilon_{e}\), which indicates that the advantages of RNN-Guard are general and do not rely on a specific radius. For instance, the certified robust accuracy of RNN-Guard remains the highest as \(\varepsilon_{e}\) increases from \(0.01\) to \(0.06\) for the LSTM-32 model on the RT dataset. Thus, RNN-Guard provides more robustness for RNN models.

Second, as shown in Tab. III, RNN-Guard attains high empirical robust accuracy across different models and datasets.
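To make the gap between \(|\mathcal{S}_{1}|\) and \(|\mathcal{S}_{2}|\) concrete, consider the simplified (purely illustrative) case in which every frame admits the same number \(s\) of perturbed candidates, i.e., \(|\mathcal{S}^{(t)}|=s\) for \(t=1,\ldots,T\). Then

\[|\mathcal{S}_{1}|=\sum_{t=1}^{T}|\mathcal{S}^{(t)}|=Ts,\qquad|\mathcal{S}_{2}|=\prod_{t=1}^{T}|\mathcal{S}^{(t)}|=s^{T},\]

so with, say, \(s=100\) candidates per frame and \(T=10\) frames, \(|\mathcal{S}_{1}|=10^{3}\) while \(|\mathcal{S}_{2}|=10^{20}\).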
For example, RNN-Guard achieves \(67.61\%\) empirical robust accuracy at \(\varepsilon_{e}=0.03\) for the LSTM-32 model on the RT dataset, which is the best result in this setting. The outstanding effectiveness of RNN-Guard against adversarial attacks is guaranteed by its certified robustness results. Though adversarial training methods can improve models' robustness to some extent, they are usually challenged by later and stronger attacks [29]. Models trained with existing certified training methods cannot defend against the PGD attack (multi-frame), which is no surprise because defending against multi-frame attacks is more difficult. In addition, though the empirical robust accuracy of AT-PGD is similar to or even slightly higher than that of RNN-Guard, we argue that, due to the limitations of empirical robust accuracy, this does not mean that models trained by AT-PGD are more robust than those trained by RNN-Guard. We conducted further experiments in which we replace the PGD attack with adaptive attacks to support our argument, which are presented in detail in Section VI-C. Nevertheless, RNN-Guard is an effective defense against adversarial attacks.

Third, as shown in Tab. IV, RNN-Guard maintains relatively high clean accuracy. For instance, the clean accuracy of the LSTM-32 model trained with RNN-Guard on the RT dataset is \(73.94\%\), which is only about \(2\%\) lower than that of the model trained with regular training. The clean accuracy of models trained with RNN-Guard is lower than that of those trained with regular training, but the decline is acceptable. First, as many previous works (on both empirical and certified defenses) [21, 29, 30, 54] have shown, improving a model's robustness can cause a drop in its clean accuracy due to the trade-off between the robustness and clean accuracy of the model. Second, the gap between RNN-Guard and regular training in clean accuracy is similar to those in previous works. For example, adversarial training with PGD caused about a \(13\%\) decline in clean accuracy on the CIFAR-10 dataset [29], certified training with IBP caused about a \(10\%\) decline in clean accuracy on the SVHN dataset [21], and certified training with DiffAI(hSmooth) caused about a \(10\%\) decline in clean accuracy on the F-MNIST dataset [30]. Last but not least, models with high robustness and lower clean accuracy are still useful in the real world. For instance, recent works [31, 50] combine the advantages of models with high robustness but lower clean accuracy and those with high clean accuracy but lower robustness using model aggregation mechanisms to build a new model that achieves both high clean accuracy and high robustness. Though training a model with RNN-Guard can result in lower clean accuracy, we argue that RNN-Guard is still an outstanding defense because it achieves remarkably high robustness against multi-frame attacks.

Finally, as shown in Tab. V, RNN-Guard is more efficient than Cert-RNN. For instance, it only takes about 1 hour for RNN-Guard to train the LSTM-32 model for one epoch on the RT dataset, whereas Cert-RNN takes about 4 hours. Thus, RNN-Guard is more efficient than the current SOTA method Cert-RNN, especially for long input sequences. Moreover, RNN-Guard's training time is acceptable compared to certified training methods for CNNs.
For example, an existing study [54] shows that CROWN-IBP takes 954 to 4173 seconds to train medium-sized models on the MNIST and CIFAR-10 datasets, while convex adversarial polytopes (CAP) takes 6961 to 160764 seconds to train the same models on the same datasets. Note that both adversarial training and certified training consume extra time to compute an additional robustness loss in the training phase, which has _no negative effect_ on the model's efficiency in the _inference phase_.

In conclusion, RNN-Guard achieves the highest certified robustness among the six training methods, its running time is significantly shorter than that of the SOTA certified training method Cert-RNN, and the decrease in accuracy it causes is similar to those caused by previous certified defenses for CNNs. Thus, we believe RNN-Guard is currently the best choice for enhancing the robustness of RNN models.

### _Adaptive Evaluation of Existing Defenses and RNN-Guard_

In this subsection, we further evaluate the defensive effectiveness of existing defenses and RNN-Guard using adaptive attacks, which are considered to be reliable evaluations of adversarial defenses. We use an automatic adaptive attack tool called _AdaptiveAutoAttack_ (A3) [52], which has been experimentally demonstrated to be more effective than traditional manually-designed adaptive attacks.

#### VI-C1 Experimental Settings

_Automatic Adaptive Attack._ In the above section, we empirically evaluated the defensive effectiveness of different defenses using the PGD attack. However, such a one-sided evaluation cannot reveal the true vulnerability of each defense. To comprehensively evaluate the robustness of models trained by different defenses against adversarial attacks, we replace the PGD attack with the adaptive attack, which can exploit weaknesses in the design of a defense to adaptively generate adversarial examples that break through it. In the usual adaptive attack setting, the adversary has _complete knowledge_ of the defense and manually crafts attacks based on that knowledge. However, most common adaptive attack methods, such as replacing the loss function optimized by the attack, are designed for empirical defenses such as adversarial training and are ineffective against certified defenses. Therefore, we follow [52] to design our adaptive attack, which searches for an effective attack over different combinations of common reusable building blocks of existing adaptive attacks (i.e., attack algorithms and parameters, network transformations, and loss functions). In our experiment, we use the AdaptiveAutoAttack (A3) tool with slight customization to adapt it to our input data.

_Evaluation Metrics._ We use the _adaptive robust accuracy_ to quantify the effectiveness of a defense against adaptive attackers, which is the fraction of samples in the test set for which adaptive adversaries fail to find adversarial examples. The rest of the experimental settings are consistent with those in the above sections, which are explained in Sections V-A and VI-A. Due to timeout, results on the Yelp dataset are not presented.

#### VI-C2 Results and Analysis

We compare the adaptive robust accuracy of different defenses, which reveals the defensive effectiveness of a defense against the worst-case adversary who has complete knowledge of that defense. The results are shown in Tab. VI, from which we have the following observations.
First, RNN-Guard achieves the highest adaptive robust accuracy across different models and datasets. For example, RNN-Guard achieves \(61.37\%\) adaptive robust accuracy at \(\varepsilon_{e}=0.03\) for the RNN-32 model on the RT dataset, while the second-best method only reaches \(29.54\%\). The advantages of RNN-Guard in certified robustness provide a solid foundation for its effectiveness against adaptive adversaries. Hence, it is extremely difficult for adaptive adversaries to successfully attack models trained by RNN-Guard.

Second, adversarial training methods are vulnerable to adaptive attacks because they are empirical defenses and lack theoretical guarantees for security. For instance, comparing the results in Tab. III, the RNN-32 model trained by AT-PGD on the RT dataset achieves \(67.00\%\) empirical robust accuracy against the PGD attack, but only \(29.54\%\) adaptive robust accuracy against the adaptive attack. In contrast, the RNN-32 model trained by RNN-Guard on the RT dataset achieves \(65.68\%\) empirical robust accuracy and \(61.37\%\) adaptive robust accuracy, which indicates that RNN-Guard is a more dependable defense against the adaptive attack.

Finally, existing certified training methods cannot defend against the adaptive attack because they focus on one-frame attacks, and defending against multi-frame attacks is beyond their capability. For example, POPQORN+ and Cert-RNN achieve \(2.21\%\) and \(0.60\%\) adaptive robust accuracy for the RNN-32 model on the RT dataset, respectively.

In conclusion, the results of adaptive attacks further confirm that RNN-Guard is the best choice for enhancing the robustness of RNN models against multi-frame attacks.

## VII Discussion

_Evaluation on commercial models/platforms._ With the great success of Machine Learning as a Service (MLaaS), many companies have launched their own models/platforms for NLP tasks, such as Google Cloud NLP, IBM Watson Natural Language Understanding, Microsoft Azure Text Analytics, and Amazon AWS Comprehend. For security and copyright considerations, those companies only provide APIs to their customers without any access to their models' structures or parameters. However, certified defenses (e.g., POPQORN, Cert-RNN, and RNN-Guard) are white-box evaluations, i.e., they require full knowledge of the model. Besides, the attack methods we used in the evaluation (i.e., FGSM, PGD, and adaptive attacks) are white-box attacks, which also require full knowledge of the target model. Thus, without full access to the model, it is nearly impossible to conduct any white-box evaluation on those commercial models/platforms. Nevertheless, the large datasets and complex model structures we used in our evaluations were chosen to match those in real-world applications. Thus, we believe our results have practical significance.

_Extending to SOTA models._ Recently, SOTA models for NLP tasks have shifted from RNNs to Transformers, especially large-scale pre-trained models such as BERT [11] and GPT-3 [5]. For instance, the base BERT model has 12 layers and 110M parameters. Extending existing certified defenses to such models is extremely hard due to their enormous number of parameters. However, we argue that Transformers cannot completely replace RNNs because RNNs are more suitable for simple tasks due to their small size and adequate performance. We will conduct research on extending RNN-Guard to Transformers in the future.
_Extending to other tasks._ In this work, we focus on several common text classification tasks such as sentiment analysis and toxic content detection. Besides classification tasks, there are many other tasks that RNNs can handle, including sequence prediction, machine translation, and question answering. However, the current definition of robustness is proposed for classification tasks, which require a ground-truth label to determine whether a sample is robust or not. Thus, the existing definition of robustness cannot be directly applied to other tasks due to their lack of ground-truth labels. We will conduct research on proposing a general definition of robustness and extending RNN-Guard to general tasks in future work.

_Extending to other threat models._ In this work, we follow the threat model of previous works [12] in which attackers can directly perturb word embeddings, because word embeddings lie in a continuous space and it is easy to express all potential adversarial examples with an abstract domain. Besides this threat model, there exist attacks on other attack surfaces, such as word substitution attacks [1] and character injection attacks [4]. However, words and characters lie in a discrete space, and how to extend abstract domains to discrete spaces remains a challenging problem. We plan to extend RNN-Guard to other threat models in future work.

## VIII Related Work

Adversarial attacks and defenses have been one of the most popular topics in the machine learning area over the past few years.

_Adversarial Attacks._ The existence of adversarial attacks was first discovered on feed-forward networks such as convolutional neural networks (CNNs) [43]. Afterward, many studies [15, 33, 39] showed that RNNs are also vulnerable to adversarial examples. Some attackers perturb the input text by replacing a few characters or words [15]. Other attackers perturb the word embedding using \(\mathcal{L}_{p}\)-bounded attacks [39], which are considered to be more powerful [12]. Thus, in our threat model, we assume that adversaries can directly perturb the word embeddings of textual input sequences.

_Empirical Defenses._ Early defensive works are based on heuristics to defend against adversarial attacks empirically, including _adversarial training_ [20, 40], _model distillation_ [34], and _feature denoising_ [51]. For example, adversarial training forces the target model to memorize adversarial examples by adding them to the training set. However, empirical defenses lack theoretical guarantees and can be defeated by stronger attacks [2, 46]. To end the constant competition between attackers and defenders, recent research has started to focus on certified defenses.

_Certified Defenses._ Certified defenses, or robustness certification, aim to formally verify whether a given neighborhood around a clean input contains any adversarial example, which theoretically guarantees the safety of the target models. Existing certified defenses fall into one of two categories: complete methods and incomplete methods. Complete methods usually model the robustness certification problem as a _satisfiability modulo theories_ (SMT) problem [6, 24, 16] or a _mixed integer linear programming_ (MILP) problem [28, 45, 8]. Though complete methods can derive precise results about a model's robustness, they are constrained to small models with piece-wise linear activation functions.
To certify the robustness of large models with general activation functions, incomplete methods employ relaxed approaches including _linear programming_ (LP) [38, 44, 47], _linear inequalities propagation_ [3, 42, 55], _Zonotopes_ [30, 41], _interval bound propagation_ (IBP) [21], _dual optimization_ [48, 49, 13], and _semi-definite programming_ (SDP) [37, 14, 36]. Due to the relaxation, the certification results of incomplete methods can be inaccurate, i.e., they may prove that a model is non-robust around a clean sample even if it is indeed robust. Nevertheless, incomplete methods are the better choice considering the expensive computational cost of complete methods on large models. Furthermore, robustness certification methods are usually extended as certified training methods [21, 30, 54] to directly improve the robustness of the target model. However, due to the trade-off between the robustness of a model and its accuracy [29, 53], certified training methods can cause a decrease in clean accuracy.

_Certified Defenses for RNN models._ Due to their unique structures and operations, most of the existing certified defenses cannot be applied to RNN models. To the best of our knowledge, there are only two certified defenses [25, 12] that focus on RNNs. POPQORN [25] utilizes IBP to certify RNN models' robustness for the first time. Later, Cert-RNN [12] was proposed, which is based on the Zonotope domain and outperforms POPQORN in precision and efficiency. However, we discover a vulnerability in the above works, i.e., they are challenged by multi-frame attacks because they focus on the weaker one-frame attacks, which motivates this work.

## IX Conclusion

In this paper, we present RNN-Guard, the first certified defense against multi-frame attacks for RNN models. RNN-Guard adopts the perturb-all-frame strategy to construct a larger perturbation space that captures all potential adversarial examples in multi-frame attacks. We also introduce a new abstract domain called InterZono and design tighter relaxations to address the precision issue caused by the perturb-all-frame strategy. We comprehensively evaluate the performance of RNN-Guard across various models and datasets. The results show that InterZono is more precise than Zonotope while carrying the same time complexity. Moreover, we extend RNN-Guard as the first certified training method against multi-frame attacks. The experimental results show that, compared to existing defenses, RNN-Guard is more effective against multi-frame attacks.
2306.17332
Designing Stable Neural Networks using Convex Analysis and ODEs
Motivated by classical work on the numerical integration of ordinary differential equations we present a ResNet-styled neural network architecture that encodes non-expansive (1-Lipschitz) operators, as long as the spectral norms of the weights are appropriately constrained. This is to be contrasted with the ordinary ResNet architecture which, even if the spectral norms of the weights are constrained, has a Lipschitz constant that, in the worst case, grows exponentially with the depth of the network. Further analysis of the proposed architecture shows that the spectral norms of the weights can be further constrained to ensure that the network is an averaged operator, making it a natural candidate for a learned denoiser in Plug-and-Play algorithms. Using a novel adaptive way of enforcing the spectral norm constraints, we show that, even with these constraints, it is possible to train performant networks. The proposed architecture is applied to the problem of adversarially robust image classification, to image denoising, and finally to the inverse problem of deblurring.
Ferdia Sherry, Elena Celledoni, Matthias J. Ehrhardt, Davide Murari, Brynjulf Owren, Carola-Bibiane Schönlieb
2023-06-29T22:59:47Z
http://arxiv.org/abs/2306.17332v2
# Designing Stable Neural Networks using Convex Analysis and ODEs

###### Abstract

Motivated by classical work on the numerical integration of ordinary differential equations we present a ResNet-styled neural network architecture that encodes non-expansive (1-Lipschitz) operators, as long as the spectral norms of the weights are appropriately constrained. This is to be contrasted with the ordinary ResNet architecture which, even if the spectral norms of the weights are constrained, has a Lipschitz constant that, in the worst case, grows exponentially with the depth of the network. Further analysis of the proposed architecture shows that the spectral norms of the weights can be further constrained to ensure that the network is an averaged operator, making it a natural candidate for a learned denoiser in Plug-and-Play algorithms. Using a novel adaptive way of enforcing the spectral norm constraints, we show that, even with these constraints, it is possible to train performant networks. The proposed architecture is applied to the problem of adversarially robust image classification, to image denoising, and finally to the inverse problem of deblurring.

keywords: Deep learning, numerical integration of ODEs, convex analysis, monotone operator theory, inverse problems

## 1 Introduction

The desire to impose Lipschitz conditions on neural networks has come to the forefront in a number of tasks in recent years, especially because there have been serious concerns about the stability of neural networks ever since it was shown that high-performance image classifiers may suffer from adversarial examples [16]. These issues need to be satisfactorily resolved before deep learning methods can be considered suitable for application in safety-critical systems. Another important application of Lipschitz neural networks can be found in generative modelling, in particular in models such as Wasserstein generative adversarial networks (GANs) [2]. In these models, the aim is to minimise the Wasserstein distance between the output of a generator neural network and some target distribution:

\[\min_{\Psi}W_{1}(\Psi\#\mu_{\text{latent}},\mu_{\text{true}}), \tag{1}\]

where \(W_{1}\) is the Wasserstein metric, \(\mu_{\text{latent}}\) is a (simple) distribution of latent variables \(Z\in\mathcal{Z}\), \(\Psi\#\mu_{\text{latent}}\) is its pushforward by the generator neural network \(\Psi:\mathcal{Z}\rightarrow\mathcal{X}\) and \(\mu_{\text{true}}\) is the target distribution of \(X\in\mathcal{X}\). Appealing to the Kantorovich-Rubinstein duality, we know that

\[W_{1}(\mu,\nu)=\sup_{f:\mathcal{X}\rightarrow\mathbf{R},\;1\text{-Lipschitz}}\mathbf{E}_{X\sim\mu}[f(X)]-\mathbf{E}_{Y\sim\nu}[f(Y)],\]

where \(f\) is usually called the critic. With this result, eq. (1) becomes the following saddle-point problem:

\[\min_{\Psi}\sup_{f:\mathcal{X}\rightarrow\mathbf{R},\;1\text{-Lipschitz}}\mathbf{E}_{Z\sim\mu_{\text{latent}}}[f(\Psi(Z))]-\mathbf{E}_{X\sim\mu_{\text{true}}}[f(X)].\]

To solve this problem, we are required to flexibly parametrise 1-Lipschitz critic functions \(f:\mathcal{X}\rightarrow\mathbf{R}\). Another setting in which neural network instabilities may cause considerable problems is the application of deep learning to inverse problems.
A prototypical approach to doing this is the so-called Plug-and-Play approach [38; 8; 9; 34], in which some parts of an iterative optimisation algorithm (usually the denoising steps) for a variational reconstruction problem are replaced by a different component, which may be learned separately. An example of this is shown in algorithm 1 (here \(E_{y}\) is a data discrepancy functional measuring the mismatch between an image and corresponding measurements). In this setting, the denoiser \(\Phi\) may be a denoiser trained on natural images.

```
inputs: measurements y, initial estimate x^0, denoiser Φ
x ← x^0
for i ← 1, ..., it do
    x ← Φ(x − τ ∇E_y(x))
end for
return x
```
**Algorithm 1** Plug-and-Play proximal gradient method

With such algorithms, we may run into divergent behaviour if we do not restrict \(\Phi\) appropriately [33], but there is recent work showing that the iterative method will converge as long as certain Lipschitz conditions are imposed on the denoiser \(\Phi\) [30; 20].

Lipschitz continuity is a standard way to quantify the stability of a function. Let us recall its definition and some associated properties: a function \(f:\mathcal{X}\to\mathcal{Y}\) between metric spaces \((\mathcal{X},d_{\mathcal{X}})\) and \((\mathcal{Y},d_{\mathcal{Y}})\) is said to be \(L\)-Lipschitz for some \(L\geqslant 0\) if \(d_{\mathcal{Y}}(f(x_{1}),f(x_{2}))\leqslant Ld_{\mathcal{X}}(x_{1},x_{2})\) for all \(x_{1},x_{2}\in\mathcal{X}\). This notion of stability plays well with the compositional nature of neural networks: if \(f_{1}:\mathcal{X}\to\mathcal{Y}\) and \(f_{2}:\mathcal{Y}\to\mathcal{Z}\) are \(L_{1}\)-Lipschitz and \(L_{2}\)-Lipschitz respectively, their composition \(f_{2}\circ f_{1}\) is \((L_{1}\cdot L_{2})\)-Lipschitz. If \(\mathcal{X}\) and \(\mathcal{Y}\) are in fact normed spaces, we can furthermore see (by definition) that any bounded linear operator \(A:\mathcal{X}\to\mathcal{Y}\) is \(\|A\|\)-Lipschitz, where the norm is the operator norm. In particular, an ordinary feedforward neural network \(\Psi(x)=A^{K}\sigma(b^{K-1}+A^{K-1}\sigma(\ldots+A^{2}\sigma(b^{1}+A^{1}x)))\) with a \(1\)-Lipschitz activation function \(\sigma\), learnable linear operators \(A^{1},\ldots,A^{K}\) and biases \(b^{1},\ldots,b^{K-1}\) is \(L\)-Lipschitz, where \(L=\prod_{i=1}^{K}\|A^{i}\|\). This naturally gives rise to the idea of spectral normalisation: if an ordinary feedforward neural network with a given Lipschitz constant \(L\) is required for a specific application, this can be achieved by appropriately normalising the linear operators, as applied to GANs in [22]. It is worth remarking here that we are referring to any \(L\) that satisfies the defining inequality for Lipschitz continuity of \(f\) as a Lipschitz constant of \(f\); often the term Lipschitz constant is used instead to refer only to the infimum of such \(L\), which defines a seminorm on vector spaces of Lipschitz functions. We will refer to this infimum as the optimal Lipschitz constant of \(f\), and note that the statements about the composition of Lipschitz functions, when framed in terms of optimal Lipschitz constants, only give upper bounds in general. In this work we are focused on the case where \(\mathcal{X}\) and \(\mathcal{Y}\) are equal to each other, as is the case in many image-to-image tasks.
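Returning to algorithm 1, the following is a minimal Python sketch of the Plug-and-Play proximal gradient loop; the names `phi` and `grad_E` and the calling convention are illustrative assumptions, not code from the paper's repository or any particular package.

```
import torch

def pnp_proximal_gradient(y, x0, phi, grad_E, tau=1.0, it=100):
    """A sketch of algorithm 1 (Plug-and-Play proximal gradient).

    y:      measurements, passed to grad_E
    x0:     initial estimate of the image
    phi:    learned denoiser, assumed (close to) averaged/non-expansive
    grad_E: callable computing the gradient of the data term E_y at x
    tau:    step size for the gradient step on the data term
    it:     number of iterations
    """
    x = x0.clone()
    for _ in range(it):
        # gradient step on the data discrepancy, then a "denoising" step
        x = phi(x - tau * grad_E(x, y))
    return x
```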
Residual networks (ResNets) [19] have proven to be an extremely successful neural network meta-architecture in this setting: a ResNet parametrises a neural network by \(\Psi=(\operatorname{id}+\Psi^{K})\circ\ldots\circ(\operatorname{id}+\Psi^{1})\), where each \(\Psi^{i}\) is a small neural network. Without further constraints, the Lipschitz continuity of such a network may be badly behaved as the depth increases: even if we control each \(\Psi^{i}\) to be \(\varepsilon\)-Lipschitz for some small \(\varepsilon>0\), in the worst case we can not guarantee anything better than that \(\operatorname{id}+\Psi^{i}\) is \((1+\varepsilon)\)-Lipschitz, and that the composition \(\Psi\) is \(L\)-Lipschitz with \(L=(1+\varepsilon)^{K}\), which grows exponentially as \(K\to\infty\) (for instance, with \(\varepsilon=0.1\) and \(K=50\) this bound is \(1.1^{50}\approx 117\)). Nevertheless, we show that it is possible to design ResNets that are provably non-expansive (\(1\)-Lipschitz) by discretising non-expansive continuous flows in a sufficiently careful manner (note that it is not guaranteed in general that a discretisation of a continuous flow preserves its structural properties, such as non-expansiveness).

### Related topics

#### Lipschitz neural networks

As mentioned above, within the deep learning community there have been a number of drivers for research into neural networks with controlled Lipschitz constants, such as the desire to increase robustness to adversarial examples [37], the necessity to model the critic in a Wasserstein GAN as a 1-Lipschitz function [2], and their use in parametrising learnable diffeomorphisms [6]. Spectral normalisation [22] has become a standard approach to constraining the Lipschitz constant of an ordinary feedforward neural network. This approach ensures that the optimal (smallest) Lipschitz constant of a neural network is upper bounded. It is known to be computationally hard to estimate the true optimal Lipschitz constant of a neural network [39], which has prompted further research into refining Lipschitz neural network architectures.

#### Methods based on continuous dynamical systems

Applied mathematicians and physicists have long studied continuous dynamical systems in the form of ODEs and PDEs, giving rise to an extensive body of research on the structural properties of such systems. More recently, insights from these topics have been used to design neural network architectures which share similar structural properties [29; 10; 7]. The adjoint method for computing gradients has gained widespread use in the deep learning community after it was shown in the Neural ODEs paper [11] that it is possible to parametrise the vector field defining an ODE by a neural network and differentiate through the flow to learn the vector field. This work has spawned a plethora of works that use learnable continuous dynamical systems.

#### Convex analysis and monotone operator theory

There is a recent line of work investigating the connections between existing deep learning practice and the topics of convex analysis and monotone operator theory. In particular, many of the standard activation functions that are used in neural networks are averaged (in the sense that we define in section 2.1), and further analysis enables one to use this insight to design neural networks that are averaged [12; 26; 18; 20].

### Our contributions

We describe and analyse a family of ResNet-styled neural network architectures that are guaranteed to be non-expansive.
The effect of these neural networks on input vectors can be thought of as sequentially composing parts of (discretisations of) gradient flows along learnable convex potentials. We show that it is only necessary to control the operator norms of the learnable linear operators contained in these networks to ensure their non-expansiveness. This task is easily achieved in practice using power iteration. The most basic such network takes the simple form described in algorithm 2. For this network, we use convex analysis techniques to show that more fine-grained control of the learnable linear operators ensures that each layer of the network is averaged, and as a result that the overall network is averaged. We demonstrate the use of the proposed architectures by studying their natural application to an image denoising task, focusing on the influence of various tunable aspects of the architectures for this problem, and comparing our approach to a standard approach to the denoising task.

## 2 Methods

Suppose that \(f:\mathbf{R}\times\mathbf{R}^{n}\to\mathbf{R}^{n}\) is a time-dependent vector field and consider the ODE given by the flow along this vector field:

\[\dot{z}(t)=f(t,z(t)). \tag{2}\]

Assuming existence and uniqueness of the solutions to the ODE, we can define the flow map \(\Psi:[0,\infty)\times\mathbf{R}^{n}\to\mathbf{R}^{n}\) by \(\Psi(t,x)=z(t)\), where \(z\) solves eq. (2) with the initial condition \(z(0)=x\). Since the vector fields that we will consider are (globally) Lipschitz continuous, global existence and uniqueness is not an issue by the Picard-Lindelöf theorem [36]. It is natural to ask when this flow map is non-expansive, in the sense that \(\|\Psi(t,x)-\Psi(t,y)\|\leqslant\|x-y\|\) for all \(t,x,y\) (with \(\|x\|:=\sqrt{\langle x,x\rangle}\)). Letting \(t\to 0\), we see that it is necessary that

\[\langle f(t,x)-f(t,y),x-y\rangle\leqslant 0,\]

and conversely, if this condition holds the flow map is non-expansive since

\[\frac{\mathrm{d}}{\mathrm{d}t}\|\Psi(t,x)-\Psi(t,y)\|^{2}=2\langle f(t,x(t))-f(t,y(t)),x(t)-y(t)\rangle\leqslant 0. \tag{3}\]

In practice, most ODEs of interest are not explicitly solvable and it is necessary to resort to numerical methods to approximate the flow map. A very well-studied class of such numerical integrators is the class of Runge-Kutta methods, which can be defined as follows:

**Definition 2.1** (Runge-Kutta method).: If \(m\in\mathbf{N}\) is a positive integer, an \(m\)-stage Runge-Kutta (RK) method is characterised by a matrix \(\mathcal{A}\in\mathbf{R}^{m\times m}\) and two vectors \(b,c\in\mathbf{R}^{m}\). For a step size \(h>0\), the RK method approximates the step from \(y=\Psi(t,x)\) to \(\Psi(t+h,x)\) as follows:

\[\Phi_{h}(t,y,f)=y+h\sum_{i=1}^{m}b_{i}f(t+c_{i}h,Y_{i}),\]

where \(Y=(Y_{1},\ldots,Y_{m})\) (the set of so-called stages of the method) solves the non-linear system of equations

\[Y_{i}=y+h\sum_{j=1}^{m}\mathcal{A}_{ij}f(t+c_{j}h,Y_{j})\quad\text{for}\quad i=1,\ldots,m.\]

If \(\mathcal{A}\) is strictly lower triangular, these equations are solvable in a single pass and the method is called explicit. Otherwise, the method is called an implicit method. To ensure that the method is at least of order one, we require that \(\sum_{j=1}^{m}b_{j}=1\). Furthermore, an RK method will usually satisfy \(c_{i}=\sum_{j=1}^{m}\mathcal{A}_{i,j}\). If \(c_{1},\ldots,c_{m}\) are distinct, the method is called non-confluent, whereas it is confluent otherwise.
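To make definition 2.1 concrete for explicit methods, the following is a minimal NumPy sketch of one step of an explicit RK method under the stated assumption that \(\mathcal{A}\) is strictly lower triangular; it is an illustration, not code from the paper's repository.

```
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit m-stage Runge-Kutta method (definition 2.1).
    A is assumed strictly lower triangular, so the stage values can be
    computed in a single forward pass."""
    m = len(b)
    k = np.zeros((m,) + y.shape)  # k[i] = f evaluated at the i-th stage
    for i in range(m):
        Y_i = y + h * sum(A[i, j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, Y_i)
    return y + h * sum(b[i] * k[i] for i in range(m))

# Example: the forward Euler method (m = 1, A = 0, b = 1, c = 0) applied
# to the gradient flow of phi(z) = ||z||^2 / 2, i.e. f(t, z) = -z.
A = np.zeros((1, 1)); b = np.array([1.0]); c = np.array([0.0])
z = explicit_rk_step(lambda t, z: -z, 0.0, np.ones(4), 0.1, A, b, c)
```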
Since we aim to design neural networks that encode non-expansive operators, it is of particular interest to know whether a given numerical integrator preserves the non-expansiveness of a continuous flow for which eq. (3) holds. This property of a numerical integrator is called BN-stability and has been studied in detail for RK methods in [4]; for these methods, BN-stability is equivalent to algebraic stability, which is defined by a simple algebraic condition on the coefficients of the method. For methods that are non-confluent, these conditions are also equivalent to the condition of AN-stability (which is a priori a simpler condition). A comprehensive overview of stability properties for RK methods is given in [17; 14]. It is well known (see for instance [24]), however, that no explicit RK method can satisfy such an unconditional stability condition. Nevertheless, it was shown in [13] that a conditional stability result can be established for certain explicit RK methods as long as eq. (3) is replaced by an alternative that has the effect of controlling the stiffness of the ODE. To state this result, we require the definition of the circle contractivity property of an RK method.

**Definition 2.2** (Circle contractivity).: Suppose that \(\mathcal{A}\in\mathbf{R}^{m\times m}\) and \(b,c\in\mathbf{R}^{m}\) are the matrix and vectors characterising an RK method as in definition 2.1. We say that this RK method satisfies the \(r\)-circle contractivity condition for a given \(r\in\mathbf{R}\cup\{\infty\}\) if \(|K(\zeta)|\leqslant 1\) for all \(\zeta\in D(r)^{m}\). Here, the function \(K:\mathbf{C}^{m}\to\mathbf{C}\) is defined\({}^{1}\) as:

\[K(\zeta)=1+b^{\top}\operatorname{diag}(\zeta)(\operatorname{id}-\mathcal{A}\operatorname{diag}(\zeta))^{-1}\mathbf{1},\]

and \(D(r)\) is a generalised disk:

\[D(r)=\begin{cases}\{z\in\mathbf{C}\,|\,|z+r|\leqslant r\}&\text{when}\quad r\geqslant 0,\\ \{z\in\mathbf{C}\,|\,\operatorname{Re}(z)\leqslant 0\}&\text{when}\quad r=\infty,\\ \{z\in\mathbf{C}\,|\,|z+r|\geqslant-r\}&\text{when}\quad r<0.\end{cases}\]

Footnote 1: To understand the definition of \(K\), it can be thought of as the action of the RK method on a scalar nonautonomous linear ODE.

**Example 2.1**.: _The most basic nontrivial example of an RK method is the forward (explicit) Euler method, given by \(\Phi_{h}(t,y,f)=y+hf(t,y)\). In the notation of definition 2.1, we have \(m=1\), \(\mathcal{A}=0\) and \(b=1\), so \(K(z)=1+z\). We conclude that the forward Euler method is \(1\)-circle contractive._

**Remark 1**.: It is straightforward to compute the optimal (in the sense that it gives the largest generalised disk) \(r\) for which a given RK method is \(r\)-circle contractive if we know \(\mathcal{A}\) and \(b\): if we define the symmetric matrix \(Q=\operatorname{diag}(b)\mathcal{A}+\mathcal{A}^{\top}\operatorname{diag}(b)-bb^{\top}\), theorem 3.1 from [13] tells us that \(r=-1/\rho\), where \(\rho\) is the largest number such that \(w^{\top}Qw\geqslant\rho w^{\top}\operatorname{diag}(b)w\) for all \(w\in\mathbf{R}^{m}\). Hence, if we can solve the generalised eigenvalue problem \(Qv=\lambda\operatorname{diag}(b)v\), we know that the minimal eigenvalue gives the desired \(\rho\).
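Remark 1 translates directly into a short computation. The following sketch illustrates it, under the additional assumption (not stated in the remark) that all entries of \(b\) are positive, so that \(\operatorname{diag}(b)\) is positive definite and a standard generalised symmetric eigensolver applies.

```
import numpy as np
from scipy.linalg import eigh

def contractivity_radius(A, b):
    """Sketch of Remark 1: the optimal circle contractivity radius is
    r = -1/rho, where rho is the smallest generalised eigenvalue of
    Q v = lambda * diag(b) v. Assumes all entries of b are positive."""
    B = np.diag(b)
    Q = B @ A + A.T @ B - np.outer(b, b)
    rho = eigh(Q, B, eigvals_only=True).min()
    return np.inf if rho == 0 else -1.0 / rho

# Forward Euler: A = [[0]], b = [1] gives Q = [[-1]], rho = -1, so r = 1,
# matching Example 2.1.
print(contractivity_radius(np.zeros((1, 1)), np.array([1.0])))
```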
With this definition, it is now possible to state the conditional stability result that extends to certain explicit methods:

**Theorem 2.3** (Theorem 4.1 from [13]).: _Suppose that \(\Phi_{h}\) is an RK method satisfying the \(r\)-circle contractivity condition, and that \(f\) satisfies the monotonicity condition_

\[\langle f(t,y)-f(t,z),y-z\rangle\leqslant-\nu\|f(t,y)-f(t,z)\|^{2}. \tag{4}\]

_Then, if \(r\neq\infty\) and \(h/r\leqslant 2\nu\), or if \(r=\infty\) and \(\nu\geqslant 0\),_

\[\|\Phi_{h}(t,y,f)-\Phi_{h}(t,z,f)\|\leqslant\|y-z\|.\]

The idea of using this result to design non-expansive neural networks was recently discussed in [5], though in that work no indication was given of how the vector fields should be parametrised. The monotonicity condition given by eq. (4) is reminiscent of the property of co-coercivity, known mainly from the theory of convex optimisation for its use in the Baillon-Haddad theorem:

**Theorem 2.4** (Corollary 18.16 from [3]).: _Suppose that \(\phi:\mathcal{X}\to\mathbf{R}\) is a Frechet-differentiable convex function on a Hilbert space \(\mathcal{X}\). Then \(\phi\) is \(L\)-smooth for some \(L\geqslant 0\) (equivalently, \(\nabla\phi\) is \(L\)-Lipschitz), meaning that_

\[\phi(y)\leqslant\phi(x)+\langle\nabla\phi(x),y-x\rangle+\frac{L}{2}\|y-x\|^{2},\]

_if and only if \(\nabla\phi\) is \(1/L\)-co-coercive, meaning that_

\[\langle\nabla\phi(y)-\nabla\phi(x),y-x\rangle\geqslant\frac{1}{L}\|\nabla\phi(y)-\nabla\phi(x)\|^{2}.\]

Indeed, if \(f(t,x)=-\nabla\phi(x)\) for a \(1/\nu\)-smooth convex potential \(\phi:\mathbf{R}^{n}\to\mathbf{R}\) (so that we have a gradient flow of a smooth convex potential), then eq. (4) is satisfied. This connection has recently been used to demonstrate in [31] that there is an explicit RK method for which the circle contractivity disk degenerates to a point, by constructing a smooth convex potential for which the non-expansiveness of the flow map is not preserved. For the purpose of using this observation and theorem 2.3 to design non-expansive neural networks, note the following result:

**Lemma 2.5**.: _Suppose that \(\sigma:\mathbf{R}\to\mathbf{R}\) is a non-decreasing \(L\)-Lipschitz activation function, \(A\in\mathbf{R}^{n\times k}\) is a matrix and \(b\in\mathbf{R}^{n}\) is a bias vector. The vector field \(f_{A,b}(t,x)=-A^{\top}\sigma(Ax+b)\) (where \(\sigma\) is applied separately to each component) satisfies eq. (4) with \(\nu=1/(\|A\|^{2}L)\)._

Proof.: Since \(\sigma\) is non-decreasing and \(L\)-Lipschitz, the function \(\psi:\mathbf{R}\to\mathbf{R}\) given by \(\psi(t)=\int_{0}^{t}\sigma(s)\,\mathrm{d}s\) is convex and \(L\)-smooth. Hence, \(\phi:\mathbf{R}^{n}\to\mathbf{R}\) given by

\[\phi(x)=\sum_{i=1}^{n}\psi(x_{i})\]

is convex and \(L\)-smooth. The functional \(x\mapsto\phi(Ax+b)\) is convex, by the chain rule it has gradient equal to \(-f_{A,b}\), and it is \(\|A\|^{2}L\)-smooth. By the comments preceding this lemma, the vector field \(f_{A,b}\) satisfies eq. (4) with \(\nu=1/(\|A\|^{2}L)\).
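As a sanity check of lemma 2.5, one can verify the monotonicity condition eq. (4) numerically on randomly drawn points; the following sketch does this for a LeakyReLU activation (which is non-decreasing and 1-Lipschitz). The matrix sizes, the number of trials and the tolerance are arbitrary illustrative choices.

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
L = 1.0  # Lipschitz constant of the activation below

sigma = lambda t: np.maximum(t, 0.01 * t)   # LeakyReLU, non-decreasing
f = lambda x: -A.T @ sigma(A @ x + b)       # the vector field of lemma 2.5
nu = 1.0 / (np.linalg.norm(A, 2) ** 2 * L)  # nu = 1 / (||A||^2 L)

# check eq. (4) on random pairs of points
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.dot(f(x) - f(y), x - y)
    assert lhs <= -nu * np.linalg.norm(f(x) - f(y)) ** 2 + 1e-12
```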
By the previous observations, we can propose a natural non-expansive neural network architecture as follows: given an \(r>0\) such that we have an \(r\)-circle contractive RK method \(\Phi_{h}\) and an \(L\)-Lipschitz increasing activation function \(\sigma\), consider linear operators \(A^{1},\ldots,A^{\texttt{n\_blocks}}\), biases \(b^{1},\ldots,b^{\texttt{n\_blocks}}\) and stepsizes \(h^{1},\ldots,h^{\texttt{n\_blocks}}\) and define the operator \(\Xi\) by

\[\Xi=\Xi^{\texttt{n\_blocks}}\circ\ldots\circ\Xi^{1},\]

where \(\Xi^{i}(x)=\Phi_{h^{i}}(0,x,f_{A^{i},b^{i}})\) is one numerical integration step along the vector field \(f_{A^{i},b^{i}}\) as defined in lemma 2.5. Lemma 2.5 and theorem 2.3 ensure that \(\Xi\) is non-expansive as long as

\[h^{i}\|A^{i}\|^{2}L\leqslant 2r.\]

There are various ways in which this bound can be maintained during training, and the power method can be used to compute the required operator norm: as an example, it is possible to alternate gradient update steps of an optimiser with steps that scale the operators down to satisfy the bounds that are violated after the gradient update. An alternative method, which is more faithful to the dynamical systems interpretation, similarly keeps track of the operator norms of the weights but splits the interval into multiple smaller time steps to guarantee the bound when the gradient updates cause a violation of the bound. This approach, the adaptive approach, is the one that we will take in section 3. For any explicit RK method, the corresponding neural network \(\Xi\) is a residual network. For the forward Euler method, the network takes the particularly simple form shown in algorithm 2.

```
input: vector x
parameters: step sizes h^i > 0, linear operators A^1, ..., A^{n_blocks}
  satisfying h^i ‖A^i‖^2 L ≤ 1 for i = 1, ..., n_blocks,
  and biases b^1, ..., b^{n_blocks}
z^0 ← x
for i ← 1, ..., n_blocks do
    z^i ← z^{i-1} − h^i (A^i)^⊤ σ(A^i z^{i-1} + b^i)
end for
return Ξ(x) = z^{n_blocks}
```
**Algorithm 2** Forward Euler method for non-expansive ODE networks

As mentioned before, we are focused in this chapter on explicit RK methods since they do not require the solution of a (potentially difficult) non-linear equation at each step. It may be interesting to note, however, what can happen when an implicit numerical method is used, such as the backward Euler method. In that case, each update step in algorithm 2 needs to be replaced by solving the equation

\[z^{i}=z^{i-1}+h^{i}f_{A^{i},b^{i}}(z^{i})=z^{i-1}-h^{i}(A^{i})^{\top}\sigma(A^{i}z^{i}+b^{i}).\]

Recalling from the proof of lemma 2.5 that \(-f_{A,b}\) is the gradient of a convex functional \(\phi(A\cdot+b)\), this shows that the update step is given by

\[z^{i}=(\operatorname{id}+h^{i}\nabla\phi(A^{i}\cdot+b^{i}))^{-1}(z^{i-1})=:\operatorname{prox}_{h^{i}\phi(A^{i}\cdot+b^{i})}(z^{i-1}),\]

which is the defining equation of the proximal operator [23], a mathematical object that has been studied in great detail in the field of convex analysis.
Whether considering it from the ODE viewpoint (the backward Euler method is BN-stable [4]) or from the convex analysis and monotone operator theory viewpoint (proximal operators are non-expansive as the resolvents of monotone operators [3, Chapter 23]), proximal operators \(\operatorname{prox}_{h\phi(A\cdot+b)}\) are well-defined and non-expansive regardless of the step size \(h>0\) and the smoothness of \(\phi(A\cdot+b)\). This unconditional stability comes at a cost, though: for general \(A\), computing \(\operatorname{prox}_{h\phi(A\cdot+b)}\) is not easy (and becomes more difficult as the condition number of \(A\) increases). This issue can be overcome by restricting \(A\) to certain special sets of operators (for instance satisfying certain orthogonality properties), in which case the proximal operator may be explicitly computable. This approach is similar to the one taken in [18; 20], though note that it may be difficult to enforce these constraints on convolution-type linear operators. On the other hand, the operator norm constraints that we are required to enforce with explicit numerical integration methods can be easily controlled using power iteration [15]; all we need is the ability to apply the operator and its adjoint to test vectors.

### A more detailed look at the architecture for the forward Euler method

When the numerical integrator used is the forward Euler method, as described in algorithm 2, straightforward computations can be used to establish the same results guaranteed by the machinery of theorem 2.3, and some more nuanced results. Indeed, it is possible to choose the stepsizes in such a way that the resulting neural network is not just non-expansive, but in fact is also averaged:

**Definition 2.6** (Definition 4.23 from [3]).: Suppose that \(A:\mathcal{X}\to\mathcal{X}\) is an operator mapping a Hilbert space \(\mathcal{X}\) into itself and that \(\alpha\in(0,1)\). We call \(A\) an \(\alpha\)-averaged operator if there is a non-expansive \(T:\mathcal{X}\to\mathcal{X}\) such that \(A=(1-\alpha)\operatorname{id}+\alpha T\). We may also leave \(\alpha\) unspecified, in which case we just call \(A\) an averaged operator if there is an \(\alpha\in(0,1)\) such that \(A\) is \(\alpha\)-averaged.

Note that the triangle inequality shows that an averaged operator is non-expansive. In addition, averaged operators allow for convergent fixed point iterations, whereas merely non-expansive operators enjoy no such guarantees. This is of crucial importance in certain applications, such as Plug-and-Play algorithms, where modelling denoisers using non-expansive operators is not enough to prevent divergence, but using averaged operators can ensure convergence [35; 20]. For our analysis here, let us note the following fact:

**Lemma 2.7**.: _Suppose that \(\Xi:\mathbf{R}^{n}\to\mathbf{R}^{n}\) is \(C^{1}\) with symmetric Jacobian \(D\Xi(x)\) everywhere and that \(\alpha\in(0,1)\). Then \(\Xi\) is \(\alpha\)-averaged if and only if_

\[\operatorname{spectrum}(D\Xi(x))\subset[1-2\alpha,1]\]

_for all \(x\in\mathbf{R}^{n}\). Note that the condition that the Jacobian is everywhere symmetric is equivalent to asking that \(\Xi=\nabla f\) for some underlying functional \(f:\mathbf{R}^{n}\to\mathbf{R}\)._

Proof.: Showing that \(\Xi\) is \(\alpha\)-averaged is equivalent to showing that \(\Theta=(\Xi-\operatorname{id})/\alpha+\operatorname{id}\) is non-expansive.
Furthermore \(\operatorname{spectrum}(D\Theta(x))=\operatorname{spectrum}(D\Xi(x))/\alpha+1-1/\alpha\), so if \(\operatorname{spectrum}(D\Xi(x))\subset[1-2\alpha,1]\), then \(\operatorname{spectrum}(D\Theta(x))\subset[-1,1]\). By the fundamental theorem of calculus, we also have that

\[\Theta(x)-\Theta(y)=\Big{[}\int\limits_{0}^{1}D\Theta(y+t(x-y))\,\mathrm{d}t\Big{]}(x-y),\]

so

\[\|\Theta(x)-\Theta(y)\|\leqslant\Big{[}\sup_{z\in\mathbf{R}^{n}}\|D\Theta(z)\|\Big{]}\|x-y\|\leqslant\|x-y\|,\]

as desired. Conversely, suppose that we do not have that \(\operatorname{spectrum}(D\Xi(x))\subset[1-2\alpha,1]\) everywhere. In particular, there is some \(x\in\mathbf{R}^{n}\) such that there is an eigenvector \(v\in\mathbf{R}^{n}\) (assume that \(\|v\|=1\)) of \(D\Theta(x)\) with eigenvalue outside of \([-1,1]\): \(D\Theta(x)v=\lambda v\) with \(|\lambda|>1\). Hence, we have

\[\Theta(x+hv)-\Theta(x)=hD\Theta(x)v+R(h)=\lambda hv+R(h)\]

with \(R(h)\) a remainder term that satisfies \(R(h)=o(h)\) as \(h\to 0\). By the triangle inequality, this implies that

\[\|\Theta(x+hv)-\Theta(x)\|\geqslant|\lambda|\|hv\|-|R(h)|.\]

For small enough \(h\), this inequality tells us that \(\Theta\) is not non-expansive, and by the preceding argument, \(\Xi\) is not \(\alpha\)-averaged.

Recall that a single layer of the proposed architecture is given by \(\Xi(x)=x-hA^{\top}\sigma(Ax+b)\), with the same setting in mind as described in lemma 2.5. There we saw that \(\Xi\) is the gradient of the functional \(x\mapsto\|x\|^{2}/2-h\phi(Ax+b)\), where \(\phi\) is convex and \(L\)-smooth, so that \(\operatorname{spectrum}(D^{2}\phi(x))\subset[0,L]\) for each \(x\in\mathbf{R}^{n}\). Hence, since we have \(D\Xi(x)=\operatorname{id}-hA^{\top}D^{2}\phi(Ax+b)A\), we find that

\[\operatorname{spectrum}(D\Xi(x))\subset[1-h\|A\|^{2}L,1].\]

Combining this with lemma 2.7 immediately gives the following result if the activation function \(\sigma\) is \(C^{1}\). This is not required, however, for the result to be valid; any \(L\)-Lipschitz \(\sigma\), such as \(\sigma(x)=\mathtt{ReLU}(x)=\max\{0,x\}\), will work equally well. The argument for general \(L\)-Lipschitz \(\sigma\) is given below:

**Theorem 2.8**.: _Let \(\sigma,A,b\) be as in lemma 2.5 and let \(\alpha\in(0,1)\). A single layer of the proposed architecture, \(\Xi(x)=x-hA^{\top}\sigma(Ax+b)\), is \(\alpha\)-averaged if_

\[h\|A\|^{2}\leqslant 2\alpha/L. \tag{5}\]

Proof.: The argument given above provides some intuition regarding averaged operators, but requires the activation function \(\sigma\) to be \(C^{1}\). Here, we will show that this is not necessary. Indeed, note that an operator \(\Xi\) is \(\alpha\)-averaged if and only if \((\Xi-\mathrm{id})/\alpha+\mathrm{id}\) is non-expansive. Furthermore, we note that \(\Xi-\mathrm{id}=hf_{A,b}\), where \(-f_{A,b}\) is the gradient of an \(\|A\|^{2}L\)-smooth convex functional, as defined in the proof of lemma 2.5. By theorem 2.4 we have that

\[\frac{1}{\|A\|^{2}L}\|f_{A,b}(x)-f_{A,b}(y)\|^{2}\leqslant\langle-f_{A,b}(x)+f_{A,b}(y),x-y\rangle\leqslant\|A\|^{2}L\|x-y\|^{2},\]

so, if we write \(\Lambda=(\Xi-\mathrm{id})(x)-(\Xi-\mathrm{id})(y)\) to reduce clutter, we have

\[-h\|A\|^{2}L\|x-y\|^{2}\leqslant\langle\Lambda,x-y\rangle\leqslant-\frac{1}{h\|A\|^{2}L}\|\Lambda\|^{2}.\]

In particular, \(\langle\Lambda,x-y\rangle+\|\Lambda\|^{2}/(h\|A\|^{2}L)\leqslant 0\).
Upon expanding the squared norm, we find that

\[\|((\Xi-\mathrm{id})/\alpha+\mathrm{id})(x)-((\Xi-\mathrm{id})/\alpha+\mathrm{id})(y)\|^{2}=\frac{\|\Lambda\|^{2}}{\alpha^{2}}+2\frac{\langle\Lambda,x-y\rangle}{\alpha}+\|x-y\|^{2}\leqslant\|x-y\|^{2}+\frac{2}{\alpha}\Big{(}\langle\Lambda,x-y\rangle+\frac{\|\Lambda\|^{2}}{2\alpha}\Big{)}.\]

By the above comments, we see that \((\Xi-\mathrm{id})/\alpha+\mathrm{id}\) is non-expansive when \(2\alpha\geqslant h\|A\|^{2}L\), which can be rewritten as \(h\|A\|^{2}\leqslant 2\alpha/L\).

Finally, the following result guarantees that the overall network will be averaged as long as each layer is averaged, with a corresponding \(\alpha\) that can be controlled:

**Theorem 2.9** (Proposition 4.32 from [3]).: _Suppose that \(\Xi^{1},\ldots,\Xi^{m}\) are operators \(\Xi^{i}:\mathcal{X}\to\mathcal{X}\) on a Hilbert space \(\mathcal{X}\) and that each \(\Xi^{i}\) is \(\alpha_{i}\)-averaged for some \(\alpha_{i}\in(0,1)\). Then \(\Xi^{m}\circ\ldots\circ\Xi^{1}\) is \(\alpha\)-averaged, where_

\[\alpha=\frac{m}{m-1+\min_{i=1,\ldots,m}(1/\alpha_{i})}.\]

In particular, if we are targeting a certain \(\alpha\in(0,1)\) for which our neural network (\(m\) layers deep) should be \(\alpha\)-averaged, we should ask that each layer is \(\alpha_{i}\)-averaged with \(\alpha_{i}\) at most

\[\alpha_{i}\leqslant\frac{\alpha}{m(1-\alpha)+\alpha}.\]

By theorem 2.8, we see that this implies that we must use a step size \(h=\mathcal{O}(1/m)\) that decreases to \(0\) as the depth \(m\) of the network increases. Alternatively, it is possible to get an averaged operator by appealing to definition 2.6: \((1-\alpha)\,\mathrm{id}+\alpha\Xi\) will be \(\alpha\)-averaged as long as \(\Xi\) is non-expansive, which we have seen can be guaranteed with a step size independent of the depth of the network.

## 3 Experiments

We will study the application of the proposed architectures to tasks where a certain kind of robustness is required: in section 3.2 we study the robustness of image classifiers to adversarial attacks and in section 3.3 we consider the robustness of learned image denoisers. Robustness in the latter setting is of particular importance in downstream tasks, such as when the learned denoisers are used to solve inverse problems, as we will see in section 3.4. First, though, we will describe the methods that we use to train the networks.

### Training methods

We train each network in a supervised manner, by solving a regularised empirical risk minimisation problem. In all experiments, we perform 40,000 iterations of stochastic gradient descent (SGD) with momentum (with momentum parameter \(\beta=0.9\)), with a piecewise linear learning rate schedule that ramps up from a minimum learning rate to a maximum learning rate over the first half of the iterations and then down to 0 over the second half of the iterations. The minimum and maximum learning rates to use are found with the method described in [32]. Every experiment is run with weight decay, with weighting hyperparameters \(\lambda\in\{10^{-5},5\cdot 10^{-5},10^{-4},5\cdot 10^{-4}\}\). The reported results correspond to the setting of the weight decay hyperparameter that performed best in terms of the loss function evaluated on a held-out validation set.
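For concreteness, the following is a small sketch of the piecewise linear learning rate schedule described above; the endpoint values `lr_min` and `lr_max` are placeholders, since in the experiments they are selected with the method of [32].

```
def triangular_lr(step, total_steps=40_000, lr_min=1e-4, lr_max=1e-1):
    """Piecewise linear ("triangular") schedule: ramp lr_min -> lr_max
    over the first half of training, then lr_max -> 0 over the second
    half. lr_min and lr_max here are illustrative placeholders."""
    half = total_steps // 2
    if step <= half:
        return lr_min + (lr_max - lr_min) * step / half
    return lr_max * (total_steps - step) / (total_steps - half)
```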
As mentioned before, to guarantee non-expansiveness or averagedness of the network we need to ensure that a bound such as eq. (5) in theorem 2.8 holds, so we use the power method [22] to compute the spectral norms of each of the learnable linear operators: if \(A:\mathbf{R}^{n}\to\mathbf{R}^{m}\) is a linear operator and we have an initial estimate of the first left singular vector \(v^{0}\in\mathbf{R}^{m}\), we iterate

\[u^{k}\leftarrow\frac{A^{\top}v^{k-1}}{\|A^{\top}v^{k-1}\|},\qquad v^{k}\leftarrow\frac{Au^{k}}{\|Au^{k}\|}.\]

Assuming that \(v^{0}\) is not orthogonal to the first left singular vector (this is guaranteed to hold with probability 1 if \(v^{0}\) is randomly selected from a probability distribution that has a density w.r.t. the Lebesgue measure), \(u^{k}\) and \(v^{k}\) converge to the first singular vectors of \(A\) as \(k\to\infty\) and \((v^{k})^{\top}Au^{k}\to\|A\|\). In addition, it is reasonable to assume that the weights only undergo incremental updates during each training iteration, so that we can warm start the power method using the estimates of the singular vectors from the previous training iteration, followed by just a single iteration of the power method.

The most straightforward way to satisfy the bounds that we require is by alternating gradient update steps with spectral normalisation steps \(A\mapsto A/\|A\|\). In a similar spirit, it is in principle also possible to differentiate through the spectral normalisation step [22], which results in different training dynamics and may avoid the tendency of the alternating steps to counteract each other's effect. Instead of these methods, we find that another approach (which we will refer to as the adaptive approach in what follows) similarly ensures that the bounds of theorem 2.8 hold, with the benefits of empirically resulting in high expressive power, a simple implementation (compared to differentiating through the spectral normalisation) and a close connection to the underlying dynamical system. In the adaptive approach, we still use the power method to keep track of the operator norms, but when the bound in eq. (5) is violated, we split the integration interval into sufficiently many subintervals (each of equal size) to ensure that the bound is satisfied on each subinterval. Let us show an example of what this means for the approach shown in algorithm 2: after updating the weights of each block using SGD and updating the spectral norm estimate of the weights using the power method, the forward propagation shown in algorithm 2 is modified. In a given block with weights \(A^{i}\) and overall step size \(h^{i}\), rather than take a single step \(z^{i}\gets z^{i-1}-h^{i}(A^{i})^{\top}\sigma(A^{i}z^{i-1}+b^{i})\), we compute the number of substeps \(N=\lceil h^{i}\|A^{i}\|^{2}L/(2r)\rceil\) to subdivide the integration into and take \(N\) steps with stepsize \(h^{i}/N\). In fact, since the ceiling defining \(N\) is not generally attained (this can be checked after training to be certain), each block will naturally satisfy the constraint in theorem 2.8, making the network averaged by theorem 2.9. As a result of the adaptive approach, the depths of our networks are not fixed during training; although the number of learnable parameters is fixed, the network will become deeper during training if the weights grow. We do not take any additional measures to ensure that the weights stay bounded, but we find that this is not necessary in practice.
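As a concrete illustration, the following PyTorch sketch combines the two ingredients just described: a warm-started power method step, and the adaptive substepping applied to one block of algorithm 2 (here for a dense layer). This is our own illustrative rendering of the scheme, not the code from the paper's repository; the shapes and the way the norm estimate is passed in are assumptions.

```
import math
import torch

@torch.no_grad()
def power_iteration_step(A, v):
    """One warm-started power method step for a matrix A: returns updated
    singular vector estimates (u, v) and the spectral norm estimate."""
    u = A.T @ v
    u = u / u.norm()
    v = A @ u
    v = v / v.norm()
    return u, v, v @ (A @ u)   # (v^T A u) -> ||A|| as iterations proceed

def adaptive_euler_block(z, A, b, h, sigma, A_norm, L=1.0, r=1.0):
    """One block of algorithm 2 with adaptive substepping: if
    h * ||A||^2 * L exceeds 2r, split the step into
    N = ceil(h * ||A||^2 * L / (2r)) substeps of size h / N."""
    N = max(1, math.ceil(h * A_norm ** 2 * L / (2 * r)))
    for _ in range(N):
        # z <- z - (h/N) A^T sigma(A z + b), written for batched row vectors
        z = z - (h / N) * sigma(z @ A.T + b) @ A
    return z
```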
Furthermore, since the loss functions that we consider are bounded from below and the added weight decay penalty is coercive, we are guaranteed that any minimising sequence of the regularised loss function is bounded. Hence, we do not expect the depths of our networks to blow up during training. To initialise the weights of the non-expansive networks that we study, we first use the default initialisation method in PyTorch [25] to initialise the convolutional filters and biases, and apply 1,000 iterations of the power method to compute the operator norms of the convolution operations. The filters are then normalised to have operator norm 1 and the singular vectors output by the power method are saved for future iterations. In all of the networks that we study in the experiments, we choose the activation function \(\sigma\) to be the LeakyReLU, defined as LeakyReLU\((x)=\max\{x,0.01x\}\). Evidently, LeakyReLU is 1-Lipschitz and corresponds to the gradient of a strongly convex functional. For each ODE block in the non-expansive networks, we fix the overall timestep to be the circle contractivity radius \(r\) of the integrator used. Combining the weight initialisation mentioned above and the Lipschitz constant of LeakyReLU, this ensures that the required constraints are satisfied at initialisation. As previously described, the constraints are maintained during training using the adaptive approach. All experiments have been implemented using PyTorch [25] and were run on a single computational node with a NVIDIA A100-SXM4 GPU with 80 GB of memory. The majority of training runs were finished in fewer than 4 hours (the exceptions being the image denoising experiment with higher order integrators in section 3.3, which took up to 12 hours to complete). The code that we have written to implement the methods and experiments is publicly available at [https://github.com/fsherry/non-expansive-odes](https://github.com/fsherry/non-expansive-odes). ### Adversarial robustness Let us study the robustness of the proposed architecture as opposed to a comparable (but unconstrained) classifier based on a ResNet architecture. We will use the CIFAR-10 dataset [21], which consists of 60,000 colour images of size \(32\times 32\), with the standard split into 50,000 training images and 10,000 testing images. Additionally, we will hold out 2,500 training images to use as a validation set. Each image comes with one of 10 possible labels, which we will simply refer to as numbers in \(\{1,\ldots,10\}\). We scale the images so that each channel only takes values in \([0,1]\), but do not perform any additional normalisation or whitening. The neural networks that we consider implement functions between \(\mathbf{R}^{32\times 32\times 3}\) and \(\mathbf{R}^{10}\), the outputs of which are interpreted as the scores assigned to each label. Finally, given such a function \(\Phi:\mathbf{R}^{32\times 32\times 3}\rightarrow\mathbf{R}^{10}\) its classification of an input image \(x\) is given by \(\operatorname*{argmax}_{i\in\{1,\ldots,10\}}\Phi(x)_{i}\). Oftentimes, the scores output by a classifier are interpreted probabilistically: applying the softmax function, we transform the outputs of the network into probabilities, which can be fit to the true labels by minimising a cross-entropy loss function. In our setting, however, we will not take this approach, instead opting for a loss function that more directly encourages the classifier to have a large margin. 
Recall that the margin of a classifier at an input \(x\) with true label \(y\) is given by \(\Phi(x)_{y}-\max_{i\in\{1,\ldots,10\}\setminus\{y\}}\Phi(x)_{i}\). With this definition, images that are incorrectly classified correspond to a negative margin, while those that are correctly classified correspond to a positive margin. If we can lower bound the margin at a correctly classified image, while upper bounding the Lipschitz constant of \(\Phi\), we can certify that \(\Phi\) assigns the same class to all images in a ball around this image [37]. In particular, maximising the margin of a classifier makes sense when we constrain its Lipschitz constant. In this context, it is natural to use a multi-class hinge loss function: \[\min_{\Phi}\frac{1}{N_{\text{train}}}\sum_{i=1}^{N_{\text{train}}}\sum_{k\neq y_{i}}\max\{0,\mu-\Phi(x_{i})_{y_{i}}+\Phi(x_{i})_{k}\}, \tag{6}\] with \(\mu\) a hyperparameter that we fix to be equal to 0.1 in our experiments. Furthermore, we will use data augmentation, effectively replacing the inner term in eq. (6) by an expectation over the augmentations, and \(x_{i}\) by \(\Delta(x_{i})\) with \(\Delta\) the augmentation. We compose an augmentation that randomly erases pixels, an augmentation that randomly crops out parts of the images and an augmentation that applies a random horizontal flip. For the training of all networks we use minibatches of size 128 in each SGD update. We will consider two comparable network architectures: an architecture we will refer to as ODENet which uses our proposed nonexpansive components, and an architecture we will call ResNet which is similar but unconstrained. In particular, by appropriately setting the weights of the ResNet, we can recover an ODENet. Both classifiers take the form \[\Phi=A_{\text{lin}}\circ\text{pool}_{\text{global}}\circ\bigcirc_{i=1}^{3}\Big[\Xi^{i}\circ\text{pool}_{2\times 2}\circ A_{\text{lift},i}\Big].\] Here \(A_{\text{lin}}\) is a final linear layer, \(A_{\text{lift},i}\) are \(1\times 1\) convolutions increasing the number of channels, \(\text{pool}_{2\times 2}\) are average pooling operations with \(2\times 2\) kernels and a stride of 2 and \(\text{pool}_{\text{global}}\) is a global average pooling operation (which collapses the spatial extent of the remaining image into a single pixel). In the case of the ResNet, each \(\Xi^{i}\) is a composition of 5 simple ResNet blocks of the form \(x\mapsto x+B\sigma(Ax+b)\), whereas in the case of the ODENet, each \(\Xi^{i}\) takes the form shown in algorithm 2 with 5 blocks. With the particular configuration used in our experiments, the ODENet has 254,810 trainable parameters, while the ResNet has 497,290 trainable parameters. Recall from section 3.1 that we use an adaptive training approach for ODENet, meaning that its depth is generally allowed to grow during training, but its number of trainable parameters remains constant throughout training. This is done to enforce the constraints that we require to ensure nonexpansiveness. On the other hand, since we do not enforce constraints on the ResNet, its depth is constant during training. To implement the adversarial attacks, we use the Foolbox package [27], applying a projected gradient descent attack (which in fact alternates gradient _ascent_ steps on the loss function with projections onto a ball around the clean image). More specifically, we consider an attack that uses an \(\ell^{2}\)-norm constraint and perform 10 iterations of the projected gradient descent method.
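As a concrete illustration, a minimal PyTorch sketch of the multi-class hinge loss in eq. (6) might look as follows; the function name and the batch/label conventions are our assumptions.

```python
import torch

def multiclass_hinge_loss(scores, labels, margin=0.1):
    # Multi-class hinge loss of eq. (6): for each wrong class k, penalise
    # max{0, margin - Phi(x)_y + Phi(x)_k}.
    # scores: (batch, 10) raw classifier outputs; labels: (batch,) integers.
    true_score = scores.gather(1, labels.unsqueeze(1))        # Phi(x)_y
    losses = (margin - true_score + scores).clamp(min=0.0)
    losses = losses.scatter(1, labels.unsqueeze(1), 0.0)      # drop k = y term
    return losses.sum(dim=1).mean()
```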
Figure 1: A comparison of the adversarial robustness of the proposed ODENet and a similar ResNet trained to classify CIFAR-10 images. The adversarial attack takes the form of a projected gradient descent attack with 10 iterations, with the accuracy curves computed on the test set.

The result of applying such an attack to the networks described above is shown in fig. 1: we compute attacks of various sizes \(\varepsilon\) (corresponding to the size of the \(\ell^{2}\)-ball within which we search for an attack) for each image in the test set and compute how well both networks continue to perform on the attacked images. Clearly, the unconstrained ResNet has higher clean accuracy (accuracy at \(\varepsilon=0\)), namely \(90.2\%\), compared to the ODENet, which has a clean accuracy of \(84.2\%\). On the other hand, the ODENet is considerably more robust than the ResNet as the perturbation size grows: considering the area under the curve as a measure of overall robustness (which is bounded between \(0\) and \(1\)), we find that the ResNet has an area under the curve of \(0.537\), while the ODENet has an area under the curve of \(0.641\).

### Nonexpansive neural networks for denoising

Let us now proceed by studying the performance of the proposed networks on an image denoising task, comparing to a standard variational denoising approach and an unconstrained deep learning approach. We will use the BSDS500 dataset [1] as training data and test data for these experiments. This dataset consists of \(500\) natural colour images of size \(321\times 481\), split into \(N_{\text{train}}=200\) training images, \(N_{\text{val}}=100\) validation images and \(N_{\text{test}}=200\) test images. In our experiments, we adhere to this standard splitting of the dataset. We scale the images so that each channel only contains values in \([0,1]\) and simulate noisy images \(y\) corresponding to each ground truth image \(x\) by adding Gaussian white noise \(\varepsilon\) with a standard deviation of \(0.15\), corresponding to a high noise level (see figs. 3 and 4). For all of the learned approaches, we train the networks \(\Gamma\) involved to solve the empirical risk minimisation problem \[\min_{\Gamma}\frac{1}{N_{\text{train}}}\sum_{i=1}^{N_{\text{train}}}\|\Gamma(y_{i})-x_{i}\|^{2}, \tag{7}\] and when we test the performance of the various denoisers on the test set, we essentially use the same loss: we report the peak signal-to-noise ratio (PSNR), defined between a reference \(x^{*}\) and an estimate \(\hat{x}\) as \[\text{PSNR}(\hat{x},x^{*})=10\log_{10}\Big(\frac{\max_{i,j,k}|x^{*}_{i,j,k}|^{2}}{\frac{1}{3\cdot 321\cdot 481}\sum_{i,j,k}|x^{*}_{i,j,k}-\hat{x}_{i,j,k}|^{2}}\Big).\] We use the training approach described in section 3.1 and consistently use mini-batches of size \(5\). The networks that we propose for this task are of the form \[\Gamma=A_{\text{project}}\circ\Xi\circ A_{\text{lift}}, \tag{8}\] where \(A_{\text{lift}}\) is a lifting operation taking the \(3\) input channels to \(64\) channels by appending \(61\) channels filled with zeros, \(A_{\text{project}}\) is a projection operator taking \(64\) channels to the \(3\) output channels by simply dropping the last \(61\) channels, and \(\Xi\) is a network as in algorithm 2 with each \(A^{i}\) a convolution taking \(64\) channels to \(64\) channels and each \(b^{i}\in\mathbf{R}^{64}\) (i.e. the biases are spatially constant, as usual). All convolution operators have kernel size \(3\times 3\).
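The lifting and projection operators just described are simple enough to write down explicitly; a sketch (ours) in PyTorch, assuming image batches of shape `(batch, channels, height, width)`:

```python
import torch

def lift(x, channels=64):
    # A_lift: append zero channels to go from 3 to 64 channels.
    pad = torch.zeros(x.shape[0], channels - x.shape[1], *x.shape[2:],
                      device=x.device, dtype=x.dtype)
    return torch.cat([x, pad], dim=1)

def project(x, out_channels=3):
    # A_project: keep the first 3 channels, dropping the last 61.
    return x[:, :out_channels]
```

Note that these maps satisfy `project(lift(x)) == x` and both have operator norm 1, which is exactly what the averagedness argument below relies on.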
This choice of lifting and projection operators ensures that \(\alpha\)-averagedness of \(\Xi\) implies \(\alpha\)-averagedness of \(\Gamma\): we have \(A_{\text{project}}\circ A_{\text{lift}}=\operatorname{id}\) and \(\|A_{\text{lift}}\|=\|A_{\text{project}}\|=1\), so if \(\Xi=(1-\alpha)\operatorname{id}+\alpha T\) is \(\alpha\)-averaged with \(T\) non-expansive, we also have \[\Gamma =A_{\text{project}}\circ((1-\alpha)\operatorname{id}+\alpha T)\circ A_{\text{lift}}=(1-\alpha)A_{\text{project}}\circ A_{\text{lift}}+\alpha A_{\text{project}}\circ T\circ A_{\text{lift}}=(1-\alpha)\operatorname{id}+\alpha A_{\text{project}}\circ T\circ A_{\text{lift}}.\] Since \(A_{\text{project}}\) and \(A_{\text{lift}}\) are non-expansive, this shows that \(\Gamma\) is \(\alpha\)-averaged. We will instantiate this architecture using the forward Euler integrator as in algorithm 2 (with n_blocks = 10) and will simply refer to this network by the name Euler in what follows. Although the concrete architecture obtained when using the forward Euler integrator (as described in algorithm 2) is appealing in its simplicity, the framework laid out in section 2 also allows us to use certain higher order integrators. For instance, consider Heun's method, which is given by \[\Phi_{h}^{\text{Heun}}(t,y,f)=y+\frac{h}{2}\Big(f(t,y)+f(t+h,y+hf(t,y))\Big). \tag{9}\] This is a 2-stage, second-order RK method, with \[\mathcal{A}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix},\quad b=\begin{pmatrix}1/2\\ 1/2\end{pmatrix},\quad\operatorname{diag}(b)\mathcal{A}+\mathcal{A}^{\top}\operatorname{diag}(b)-bb^{\top}=\frac{1}{4}\begin{pmatrix}-1&1\\ 1&-1\end{pmatrix},\] and using remark 1, we conclude that Heun's method is 1-circle contractive, like the forward Euler method. As a result, \(x\mapsto\Phi_{h}^{\text{Heun}}(0,x,f_{A,b})\) is non-expansive as long as \(h\|A\|^{2}\leq 2\), and algorithm 2 can be adapted to use Heun's method, the only change being that the steps \(z^{i}\gets z^{i-1}-h(A^{i})^{\top}\sigma(A^{i}z^{i-1}+b^{i})\) are replaced by steps of the form \(z^{i}\leftarrow\Phi_{h}^{\text{Heun}}(0,z^{i-1},f_{A^{i},b^{i}})\). We will also instantiate the architecture in eq. (8) with this integrator (again using n_blocks = 10) and will refer to this network by the name Heun. As this integrator takes more evaluations of the right hand side of the ODE, it is more costly to compute the output of Heun than of Euler. Similarly, we can consider integrators of yet higher order, such as the fourth-order RK4 integrator \(\Phi_{h}^{\text{RK4}}\), given by definition 2.1 with \[\mathcal{A}=\begin{pmatrix}0&0&0&0\\ 1/2&0&0&0\\ 0&1/2&0&0\\ 0&0&1&0\end{pmatrix},\qquad b=\begin{pmatrix}1/6\\ 1/3\\ 1/3\\ 1/6\end{pmatrix},\] \[\operatorname{diag}(b)\mathcal{A}+\mathcal{A}^{\top}\operatorname{diag}(b)-bb^{\top}=\frac{1}{36}\begin{pmatrix}-1&4&-2&-1\\ 4&-4&2&-2\\ -2&2&-4&4\\ -1&-2&4&-1\end{pmatrix}.\] Again using remark 1, we conclude that the RK4 method is 1-circle contractive, and we can replace the forward Euler method in algorithm 2 by the RK4 method to obtain a non-expansive neural network. We design a denoiser (with n_blocks = 10) as in eq. (8) with the RK4 integrator and refer to it by the name RK4. RK4 is again more costly to use, since the RK4 integrator takes 4 evaluations of the right hand side of the ODE for each step. All of our proposed networks have the same number, 369,280, of trainable parameters.
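A sketch (ours, for dense weights) of a single Heun step for the gradient-flow right-hand side \(f_{A,b}(z)=-A^{\top}\sigma(Az+b)\); swapping this in for the forward Euler update is the only change required to algorithm 2.

```python
import torch

def grad_flow_rhs(z, A, b):
    # Right-hand side f_{A,b}(z) = -A^T sigma(A z + b) of the gradient flow.
    return -(A.T @ torch.nn.functional.leaky_relu(A @ z + b,
                                                  negative_slope=0.01))

def heun_step(z, A, b, h):
    # One step of Heun's method (eq. (9)) for the autonomous gradient flow;
    # non-expansive when h * ||A||^2 <= 2, as discussed in the text.
    k1 = grad_flow_rhs(z, A, b)
    k2 = grad_flow_rhs(z + h * k1, A, b)
    return z + (h / 2) * (k1 + k2)
```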
As a benchmark denoising algorithm using the variational approach, we can consider total variation (TV) denoising [28], which gives the denoised image as \[\hat{u}=\operatorname*{argmin}_{u}\frac{1}{2}\|u-y\|^{2}+\alpha\|\nabla u\|_{1}, \tag{10}\] where we have tuned \(\alpha\) for optimal reconstruction performance on the training set. This approach solves a convex optimisation problem with a hand-crafted regularisation functional that favours reconstruction of piecewise constant images. Solving this problem accurately is more expensive than the deep learning approaches that we will consider, and generally its hand-crafted prior information is not matched as well to reality as the prior information encoded in the learned approaches. On the other hand, this approach can be considered trustworthy for use in downstream tasks: eq. (10) defines the reconstruction map as a proximal operator of a convex functional, so that the reconstruction map is 1/2-averaged. The DnCNN, introduced in [40], has become a standard benchmark for denoising tasks. A natural comparison to make is between our networks \(\Gamma\) with n_blocks = 10, and the DnCNN \(\Gamma_{\text{DnCNN}}=\operatorname{id}-A_{\text{project}}\circ\Xi_{\text{DnCNN}}\circ A_{\text{lift}}\), where \(\Xi_{\text{DnCNN}}\) is an 18-layer convolutional neural network without skip connections, the details of which are described in [40]. \(A_{\text{lift}}\) and \(A_{\text{project}}\) are taken as convolutions that lift the 3 input channels to 64 channels and project the 64 channels back to the 3 output channels, respectively. Note that this is a reasonable version of the DnCNN to compare with our networks, since each block of our architecture contains a convolution and its transpose, whereas the DnCNN uses one convolution per layer. This DnCNN is trained to solve eq. (7), in the same way as our architectures except that no operator norm constraints are enforced on the convolutions. The DnCNN that we consider has 670,531 trainable parameters. The results of comparing all of these approaches are shown in fig. 2: all of the learned approaches perform basically on par with each other, slightly outperforming the variational approach. The gap between the learned approaches and the variational approach is not as large as one might expect, but this is a result of the high noise level considered. Figures 3 and 4 show zoomed-in examples of test images that are relatively more favourable to the learned approaches and to TV, respectively. It is remarkable that Euler, Heun and RK4 perform extremely similarly to each other: although the higher order integrators are more costly to evaluate, and attain a higher order of approximation of the underlying trajectories, this is not borne out in better denoising performance. This can be explained by the fact that (at least in this task) we are only interested in the endpoints of the underlying trajectories. Another point that is worth expanding on is the fact that our proposed networks perform on par with the unconstrained DnCNN. This is perhaps somewhat unexpected: recall from the discussion in section 2 that our approach bears some similarity to the Parseval proximal networks described in [18; 20].
Their approach effectively uses an implicit Euler integrator and restricts the weights to be orthogonal to make the computation of the implicit step computationally feasible, and they observed that it was necessary to scale up their proposed networks by a multiplicative factor and apply residual learning (where the network models the noise instead of the image directly) to obtain networks that perform similarly to DnCNN. These modifications break the desirable non-expansiveness properties of the network, necessitating tricks such as using a so-called "oracle denoiser" to obtain the desired properties. On the other hand, our experiments in this section show that these tricks are not necessary: it is possible to enforce stability constraints while attaining high denoising performance.

Figure 2: Comparison of the denoising performance of the various considered approaches on the test set. The learned approaches all perform similarly and outperform the variational approach that uses TV regularisation.

### Plug-and-Play applications of the learned denoisers

Recall from section 1.2 that one of the downstream applications where provably stable neural networks are of great interest is the Plug-and-Play (PnP) approach to solving ill-posed inverse problems using learned denoisers. With the results from the previous section and the theoretical results of section 2.1, we are now in a position to pursue this further. Indeed, recalling theorem 2.8 and theorem 2.9, we get that the denoiser trained using the Euler integrator in the previous section is an averaged operator, since the adaptive training approach ensures that the condition in theorem 2.8 is satisfied (for an unspecified \(\alpha>0\)) at all times. In particular, it has convergent fixed-point iterations [3, Theorem 5.14]. Let us see what this means in practice: fig. 5 shows what happens if we repeatedly apply the learned DnCNN and Euler denoisers to an input image. We clip the values of all images to lie in \([0,1]\) for display purposes, but it is evident from the figure that repeatedly applying DnCNN results in divergence, whereas repeatedly applying Euler converges. This convergence property is desirable for a denoiser, and is for instance satisfied by any proximal operator of a convex regularisation functional. Let us consider the inverse problem of deblurring: we assume that we are given measurements \(y=Kx+\varepsilon\), where \(Kx=k*x\) is a convolution operation representing a motion blur, \(x\) is the ground truth image that we aim to recover as well as possible and \(\varepsilon\) is Gaussian white noise corrupting the measurements. The ill-posedness of this problem is manifested in the instability of the inverse of the convolution; as a consequence, a naive inversion of the measurements will blow up the noise in the measurements. The PnP approach uses the previously learned denoiser to regularise this inversion. In particular, we will use the simple PnP proximal gradient method of algorithm 1, with \(E_{y}(x)=\|y-Kx\|^{2}/2\) and the denoiser \(\Phi\) equal to the previously learned Euler denoiser. We choose the step size \(\tau=1/\|K\|^{2}\), which, together with the fact that the denoiser is averaged, gives a convergence guarantee [35, Proposition 2]. The asserted convergence is practically observed in fig. 6: the result \(\hat{x}\) obtained as the limit of the PnP iterations trades off the prior information about natural images encoded in the Euler denoiser against consistency with the data.
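A minimal sketch of the PnP proximal gradient iteration described here (algorithm 1 in the text) could read as follows; `K`, `K_adj` and `denoiser` are assumed callables for the blur, its adjoint and the learned Euler denoiser, and the initialisation is our choice.

```python
import torch

def pnp_proximal_gradient(y, K, K_adj, denoiser, tau, iters=100):
    # PnP proximal gradient method: a gradient step on E_y(x) = ||y - Kx||^2/2
    # followed by the learned denoiser in place of a proximal operator.
    # Convergence holds when the denoiser is averaged and tau = 1 / ||K||^2.
    x = K_adj(y)                     # simple initialisation from the data
    for _ in range(iters):
        x = denoiser(x - tau * K_adj(K(x) - y))
    return x
```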
Figure 3: Comparison of some of the denoisers considered, on an image that is favourable to the learned approaches. Note that the learned approaches recover more fine details, whereas TV has a tendency to flatten them out. The numbers in the top right corner of each image are the PSNRs (in dB) relative to the ground truth \(x\).

Figure 4: Comparison of some of the denoisers considered, on an image that is favourable to TV. The true image to be recovered is relatively well-approximated by a piecewise constant image. Note that the finer details in the image (for example the ridge between the top and bottom part of the beak) are still better recovered by the learned approaches. The numbers in the top right corner of each image are the PSNRs (in dB) relative to the ground truth \(x\).

Figure 5: Repeated application of the unconstrained and constrained denoisers to a given input image gives drastically different results: for the unconstrained DnCNN, this sequence diverges, whereas for the averaged Euler denoiser, this sequence converges.

Figure 6: Using the learned Euler denoiser to solve an ill-posed inverse problem (deblurring) in a PnP fashion, with convergence guarantee. The numbers in the top right corner of each image are the PSNRs (in dB) relative to the ground truth \(x\).

## 4 Conclusions and discussion

We have exhibited a family of ResNet architectures for which it is straightforward to enforce non-expansiveness. The proposed architecture is given by compositions of numerical integration steps along gradient flows in convex potentials. For the main example using the forward Euler method, we have used tools from convex analysis to show that the architecture can be used to encode averaged operators. We have demonstrated the use of the proposed architectures on adversarially robust image classification and on image denoising, with the idea of applying the learned denoisers to ill-posed inverse problems. With a novel adaptive training approach, we have shown that it is possible to obtain performant neural networks, even when enforcing desirable stability constraints. Although the basic architecture uses the first order forward Euler method as the numerical integrator, it is possible to use higher order methods. In the tasks considered in this work, using higher order integrators did not come with any benefits: the performance of the learned networks was not improved, while the computational cost was significantly increased. It remains a question of interest whether there are tasks where the higher order integrators provide real benefits over the forward Euler integrator. Future work could also go towards studying the application of these architectures in typical deep learning applications such as GANs, specifically in Wasserstein GANs, which require the use of a 1-Lipschitz critic function. We have seen in practice that the proposed architectures are quite expressive, but an interesting direction for future work would also be to study ways in which more general learnable non-expansive flows can be used to motivate the design of provably stable neural network architectures, and to provide approximation guarantees for them. Finally, the framework shown in this paper essentially depends
on the use of the Euclidean norm, but depending on the application one may want to design neural networks based on the dynamical systems connection that are 1-Lipschitz with respect to different norms.

## Acknowledgments

EC & BO have received support from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860124. MJE acknowledges support from EPSRC (EP/S026045/1, EP/T026693/1, EP/V026259/1) and the Leverhulme Trust (ECF-2019-478). DM was partially supported by a grant from the Simons Foundation. CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute. FS acknowledges support from the EPSRC advanced career fellowship EP/V029428/1.
2308.02293
Outlier-Robust Neural Network Training: Efficient Optimization of Transformed Trimmed Loss with Variation Regularization
In this study, we consider outlier-robust predictive modeling using highly-expressive neural networks. To this end, we employ (1) a transformed trimmed loss (TTL), which is a computationally feasible variant of the classical trimmed loss, and (2) a higher-order variation regularization (HOVR) of the prediction model. Note that training the neural network with TTL alone may leave it vulnerable to outliers, as its high expressive power allows it to fit even the outliers perfectly. However, simultaneously introducing HOVR constrains the effective degrees of freedom, thereby avoiding fitting outliers. We newly provide an efficient stochastic algorithm for optimization and its theoretical convergence guarantee. (*Two authors contributed equally to this work.)
Akifumi Okuno, Shotaro Yagishita
2023-08-04T12:57:13Z
http://arxiv.org/abs/2308.02293v3
# A stochastic optimization approach

###### Abstract

While highly expressive parametric models including deep neural networks have an advantage in modeling complicated concepts, training such highly non-linear models is known to carry a high risk of notorious overfitting. To address this issue, this study considers a \((k,q)\)th order variation regularization (\((k,q)\)-VR), which is defined as the \(q\)th-powered integral of the absolute \(k\)th order derivative of the parametric model to be trained; penalizing the \((k,q)\)-VR is expected to yield a smoother function, which avoids overfitting. In particular, \((k,q)\)-VR encompasses the conventional (general-order) total variation with \(q=1\). While the \((k,q)\)-VR terms applied to general parametric models are computationally intractable due to the integration, this study provides a stochastic optimization algorithm that can efficiently train general models with the \((k,q)\)-VR without conducting explicit numerical integration. The proposed approach can be applied to the training of even deep neural networks whose structure is arbitrary, as it can be implemented with only a simple stochastic gradient descent algorithm and automatic differentiation. Our numerical experiments demonstrate that the neural networks trained with the \((k,q)\)-VR terms are more "resilient" than those with conventional parameter regularization. The proposed algorithm also can be extended to the physics-informed training of neural networks (PINNs).

Figure 1: Non-linear neural network models (defined as a single-hidden-layer perceptron with \(L=200\) hidden units) are trained with \(N=100\) samples that follow a quadratic function. The neural network trained with the (c) \((3,2)\)-VR term with \(\eta_{3}=10^{-5}\) is more "resilient" (i.e., the number of inflection points is smaller) than those with (a) no regularization and (b) \(L_{2}\) regularization. All the optimization settings, including the parameter initialization, decay schedule of the learning rate, minibatch sizes (\(n=m=5\)), and the number of iterations (\(T=2\times 10^{4}\)) for minibatch stochastic gradient descent, are the same. See Section 3 for more details, including the same experiments for linear and cubic functions.

## 1 Introduction

It is needless to emphasize that highly non-linear parametric models including deep neural networks have attracted considerable attention these days, by virtue of their high representation capability (Goodfellow et al., 2016; Sze et al., 2017; Miikkulainen et al., 2019; Samek et al., 2021). They are proven to approximate arbitrary continuous functions (see, e.g., Cybenko (1989) for shallow and Yarotsky (2017) for deep neural networks); such highly non-linear models are thus expected to fit the underlying target functions adaptively. While the high expressive power is a strong merit for capturing complicated concepts (e.g., large language models (Brown et al., 2020; Liu et al., 2023) used to capture the structure of human language), training such highly non-linear models with a relatively small number of samples is known to yield notorious overfitting issues. See Figure 1(a) for a neural network trained without any regularization. The neural network has an excessive number of inflection points, which is not welcome for prediction or interpretation. To improve prediction, various approaches have been proposed to avoid overfitting in the context of neural network training.
To name a few, dropout (Srivastava et al., 2014), early-stopping (see, e.g., references in Yao et al. (2007)), data-augmentation (see, e.g., Hernandez-Garcia and Konig (2018) and Zhang et al. (2021) for the effectiveness), batch-normalization (Luo et al., 2019), and so forth. They are considered to be variants of regularization, and all of the above regularization approaches are compatible with more direct parameter regularization. The \(L_{2}\) parameter regularization is also known as ridge regularization, Tikhonov regularization or weight decay (Kingma and Ba, 2014; Goodfellow et al., 2016), and it is further generalized to \(L_{p}\) regularization including lasso-type (Tibshirani, 1996) and elastic-type (Zou and Hastie, 2005) regularizations. See Figure 1(b) for a neural network trained with \(L_{2}\) parameter regularization. Unfortunately, the neural network trained with \(L_{2}\) penalization still may contain a prodigal number of inflection points as the parameter regularization is not directly related to the output variation. For additive models \(f_{\theta}^{(\text{add.})}(x)=\sum_{j=1}^{N}\theta_{j}\phi_{j}(x)\) defined with user-specified basis functions \(\{\phi_{j}\}\), that includes kernel and spline regression models as special cases, the \(L_{2}\)-type regularization term \(\langle\theta,G\theta\rangle\) for some matrix \(G\) coincides with a function norm \(\|f_{\theta}^{(\text{add.})}\|_{L_{2}}^{2}=\int\{f_{\theta}^{(\text{add.})}( x)\}^{2}\mathrm{d}x\) (see Appendix A). It is further generalized to the \((k,q)\)th order variation regularization (\((k,q)\)-VR) considered in this study: \[C_{k,q}(f_{\theta}):=\frac{1}{2}\int_{\Omega}\Big{|}\frac{\partial^{k}f_{ \theta}(x)}{\partial x^{k}}\Big{|}^{q}\mathrm{d}x.\] \((k,q)\)-VR encompasses a total variation (TV) regularization (Rudin et al., 1992; Engl et al., 1996; Osher et al., 2005), which corresponds to \((k,q)=(1,1)\), and TV has been incorporated into the training of the regression models equipped with splines (Stone, 1994; Stone et al., 1997; Mammen and van de Geer, 1997), kernels (Zou and Hastie, 2005), triograms (Koenker and Mizera, 2004), and Delaunay triangles (Pourya et al., 2023). The 1st order TV (\(k=1,q=1\)) regularization has been further generalized to the TV regularization of 2nd order (\(k=2\)) (Koenker and Mizera, 2004; Hinterberger and Scherzer, 2006; Duan et al., 2015) and general order (\(k\in\mathbb{N}\)) (Bredies and Holler, 2020). While the TV regularization can be simply incorporated into the training of such additive models whose basis functions are user-specified, the optimization techniques cannot be straightforwardly generalized to neural networks which adaptively learn the basis functions in their training. The main problem to introduce the \((k,q)\)-VR to neural network training is the computational intractability of the integral. To avoid the integration, neural splines (Williams et al., 2021) approximate the 2nd order TV terms by their finite approximation variants; a similar idea can be found in a variety of existing studies (see, e.g., Koenker and Mizera (2004) equipped with a theoretical justification shown in Natanson (1974) Theorem IX.4.8), and it can be also regarded as computing numerical integration instead of the exact integration itself. Generally speaking, numerical integration requires high computational complexity to obtain high precision. Another interesting direction is attempted by a purely theoretical work (Unser, 2019). 
Therein, 2nd order TV of the activation function is considered for deep neural network training, and the optimality of a piece-wise linear activation function is proven (see Theorem 4). However, as also noted in Unser (2019) Section 3.4, finding the optimal piece-wise linear function remains practically difficult. The representer theorem is also considered in Banach space (Parhi and Nowak, 2021) with TV terms measured in the Radon domain. Their theories have been further developed mathematically (Parhi and Nowak, 2022; Unser, 2023; Bartolucci et al., 2023). While these works have provided significant progress in the theoretical understanding of neural network behaviors, a practical algorithm for training neural networks with the exact \((k,q)\)-VR, which includes TV regularization as a special case, is still lacking. To address these issues, this study proposes a stochastic optimization algorithm in line with Robbins and Monro (1951) and Ghadimi and Lan (2013), following a similar idea used to optimize intractable likelihoods (Geyer and Thompson, 1992), contrastive divergence (Carreira-Perpinan and Hinton, 2005) and robust divergences (Okuno, 2023). The proposed stochastic optimization algorithm efficiently minimizes the loss function equipped with the \((k,q)\)-VR terms without conducting numerical integration. See Figure 1(c). The neural network trained with \((k,q)\)-VR terms is more "resilient" than those with \(L_{2}\) regularization, making it more suitable for prediction and interpretation purposes. The proposed algorithm can be applied to general parametric models including arbitrary-structured neural networks, as the algorithm is a slight modification of the conventional stochastic gradient descent. Furthermore, the proposed algorithm also can be applied to the physics-informed training of a neural network (PINN; Dissanayake and Phan-Thien, 1994; Berg and Nystrom, 2018; Cuomo et al., 2022). Our approach can minimize the PINN loss function including the integral-type constraint directly, without computing an explicit numerical integration. See Section 4 for a brief discussion, though the PINN extension goes slightly beyond the main scope of this study.

## 2 Higher-order variation regularization

Section 2.1 describes the conventional regression framework and the remaining issues to be discussed in this study. Section 2.2 describes the higher-order variation regularization, and Section 2.3 describes the proposed stochastic algorithm. While this study considers univariate functions for simplicity, the discussions below can be generalized to the multivariate case straightforwardly.

### 2.1 Conventional regression framework

Let \(D_{1},D_{2},N\in\mathbb{N}\), \(\Omega:=[D_{1},D_{2}]\) and let \((x_{i},y_{i})\in\Omega\times\mathbb{R}\) be a pair of observations \((i=1,2,\ldots,N)\). Let \(f_{\theta}:\Omega\rightarrow\mathbb{R}\) be a regression model equipped with the parameter \(\theta\in\Theta\subset\mathbb{R}^{r}\) (\(r\in\mathbb{N}\)). We consider training the regression model by estimating the parameter \(\theta\), so that \(f_{\theta}(x_{i})\) well approximates the observed outcome \(y_{i}\). While this study employs a single-hidden-layer perceptron \[f_{\theta}(x)=\sum_{\ell=1}^{L}a_{\ell}\sigma(b_{\ell}x+c_{\ell})+d,\quad \theta=(\{a_{\ell}\}_{\ell},\{b_{\ell}\}_{\ell},\{c_{\ell}\}_{\ell},d) \tag{1}\] for simplicity, the discussion below can be extended to arbitrary regression models, including deep neural networks and kernel regression models.
\(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) denotes an activation function. Typically, we employ a sigmoid function \(\sigma(z)=1/\{1+\exp(-z)\}\) or a hyperbolic tangent function \(\sigma(z)=\{\exp(z)-\exp(-z)\}/\{\exp(z)+\exp(-z)\}\). We may estimate the parameter \(\theta\) by minimizing a loss function \[A(\theta)=\frac{1}{N}\sum_{i=1}^{N}\nu(y_{i}-f_{\theta}(x_{i}))\] defined with some non-negative function \(\nu:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\). Typically, we employ a quadratic function \(\nu(z)=z^{2}\), and we may replace this loss function with, for instance, Tukey's biweight function (Huber and Ronchetti, 1981), depending on the regression purpose. The single-hidden-layer perceptron (1) is known to have a universal approximation capability. Namely, (1) can approximate arbitrary continuous functions as the number of hidden units \(L\) increases to infinity (see, e.g., Cybenko (1989)), as do more recent deep neural networks (Yarotsky, 2017). While such high expressiveness is greatly welcomed for approximating the underlying functions adaptively, highly expressive models usually contain an excessive number of inflection points, as illustrated in Figure 1(a), which results in overfitting to the training samples. Owing to observational errors, overfitting to only the observed training samples is known to degrade the prediction accuracy for unseen data. To address this overfitting issue, we may consider parameter regularization. Typically, we may employ the \(L_{2}\) regularization (also known as ridge regularization, Tikhonov regularization and weight decay) \[B(\theta)=\frac{1}{2}\|\theta\|_{2}^{2}=\frac{1}{2}[\theta_{1}^{2}+\theta_{2}^{2}+\cdots+\theta_{r}^{2}].\] Then, we may simply minimize the regularized loss function \(\tilde{L}_{\lambda}(\theta)=A(\theta)+\lambda B(\theta)\) to train the regression model \(f_{\theta}\). The minimizer is obtained explicitly for linear regression models, and even for non-linear neural networks, the loss function \(\tilde{L}_{\lambda}(\theta)\) is easily minimized by leveraging gradient-based algorithms. Typically, a stochastic gradient descent algorithm is employed for optimization (see, e.g., the adam optimizer (Kingma and Ba, 2014) for an implementation). While such parameter regularization is known to relieve the overfitting issue, the trained model still contains a number of inflection points, as illustrated in Figure 1(b). While parameter regularization can be regarded as a penalization of the output variation for additive regression models (see Appendix A), the regularization is rather indirect for more highly expressive models such as neural networks; a more direct regularization of the higher-order output variation is desirable.

### 2.2 Higher-order variation regularization

To address the aforementioned issues, this study employs the \((k,q)\)th order variation regularization (\((k,q)\)-VR) of the regression model \(f_{\theta}\) itself: \[C_{k,q}(f_{\theta})=\frac{1}{2}\int_{\Omega}\big|f_{\theta}^{[k]}(x)\big|^{q}\mathrm{d}x. \tag{2}\] Here \(k\in\{0,1,2,\ldots\}\) and \(q>0\) are user-specified parameters, and \(C_{k,q}(f)\) penalizes the \(q\)th-powered variation of the \(k\)th derivative \(f^{[k]}(x)=\partial^{k}f(x)/\partial x^{k}\).
\((1,1)\)-VR is also known as the total variation (Rudin et al., 1992; Engl et al., 1996; Osher et al., 2005), and it has been further generalized to the 2nd order (\(k=2\)) (Koenker and Mizera, 2004; Hinterberger and Scherzer, 2006) and general order (\(k\in\mathbb{N}\)) (Bredies and Holler, 2020). Then, we may train the regression model \(f_{\theta}\) by minimizing the loss function equipped with the variation regularization: \[L_{\lambda,\eta}(\theta):=\underbrace{A(\theta)}_{\text{(loss func.)}}+\underbrace{\lambda B(\theta)}_{\text{(param. reg.)}}+\underbrace{\sum_{k=0}^{K}\eta_{k}C_{k,q}(f_{\theta})}_{\text{(variation reg.)}} \tag{3}\] with user-specified hyperparameters \(\lambda\geq 0\) and \(\eta=(\eta_{0},\eta_{1},\eta_{2},\ldots,\eta_{K})\in\mathbb{R}_{\geq 0}^{K+1}\). One important problem here is the computational intractability of the \((k,q)\)-VR term (2). Generally speaking, the explicit form of the integral term for highly non-linear models (such as the single-hidden-layer perceptron (1) considered in this study) cannot be obtained. Although we could evaluate (2) by numerical integration, attaining high precision requires a large number of evaluation points, and numerical integration is incompatible with iterative algorithms such as gradient descent. While numerical integration would be needed to compute the full-batch gradient, theories on stochastic optimization originating from Robbins and Monro (1951) show that only an unbiased estimate of the full-batch gradient is needed to optimize the loss function. Following a similar idea used in the optimization of intractable likelihoods (Geyer and Thompson, 1992), contrastive divergence (Carreira-Perpinan and Hinton, 2005), and robust divergences (Okuno, 2023), this study proposes a stochastic gradient descent that can be computed efficiently without conducting numerical integration.

### 2.3 A stochastic gradient descent

To minimize the loss function (3) defined with the \((k,q)\)-VR (2), this study proposes the stochastic gradient descent algorithm described herein. Firstly, we define a stochastic gradient, which is an unbiased estimate of the full-batch gradient. At iteration \(t\in\mathbb{N}\), pick \((x_{1}^{(t)},y_{1}^{(t)}),(x_{2}^{(t)},y_{2}^{(t)}),\ldots,(x_{n}^{(t)},y_{n}^{(t)})\) from \(\{(x_{i},y_{i})\}_{i=1}^{N}\) uniformly at random; \(\{(x_{i}^{(t)},y_{i}^{(t)})\}\) is used for computing the gradient of the term \(A(\theta)\). Also pick \(z_{1}^{(t)},z_{2}^{(t)},\ldots,z_{m}^{(t)}\) uniformly at random from \(\Omega=[D_{1},D_{2}]\); \(\{z_{j}^{(t)}\}\) is used for unbiasedly estimating the integral-based \((k,q)\)-VR term (2). Specifically, we define the three terms \[\alpha^{(t,n)}(\theta):=-\frac{1}{n}\sum_{i=1}^{n}\nu^{[1]}(y_{i}^{(t)}-f_{\theta}(x_{i}^{(t)}))\frac{\partial}{\partial\theta}f_{\theta}(x_{i}^{(t)}), \tag{4}\] \[\beta(\theta):=\frac{\partial}{\partial\theta}B(\theta), \tag{5}\] \[\gamma_{k,q}^{(t,m)}(f_{\theta}):=\frac{q}{2}\cdot\frac{D_{2}-D_{1}}{m}\sum_{j=1}^{m}\text{sign}(f_{\theta}^{[k]}(z_{j}^{(t)}))|f_{\theta}^{[k]}(z_{j}^{(t)})|^{q-1}\frac{\partial}{\partial\theta}f_{\theta}^{[k]}(z_{j}^{(t)}), \tag{6}\] where \(\nu^{[1]}\) is the first-order derivative of the function \(\nu\), \(\text{sign}(z)=\mathbb{1}\left(z>0\right)-\mathbb{1}\left(z<0\right)\), and the factor \(q/2\) in (6) arises from differentiating the \(q\)th power in (2).
Then, the terms \(\alpha^{(t,n)}(\theta),\beta(\theta),\gamma_{k,q}^{(t,m)}(f_{\theta})\) are unbiased estimators of the gradients \(\partial A(\theta)/\partial\theta,\partial B(\theta)/\partial\theta,\partial C_{k,q}(f_{\theta})/\partial\theta\), respectively. See Appendix B for the explicit forms of the gradients of the \(k\)th order derivative \(f_{\theta}^{[k]}(x)\) for the single-hidden-layer perceptron (1). Note that the gradient can also be computed by the automatic differentiation implemented for training deep neural networks (see, e.g., Paszke et al. (2017)). Then, it holds for the stochastic gradient \[g_{\lambda,\eta}^{(t,n,m)}(\theta):=\alpha^{(t,n)}(\theta)+\lambda\beta(\theta)+\sum_{k=0}^{K}\eta_{k}\gamma_{k,q}^{(t,m)}(f_{\theta}) \tag{7}\] that \[\mathbb{E}^{(t)}\left(g_{\lambda,\eta}^{(t,n,m)}(\theta)\right)=\frac{\partial}{\partial\theta}A(\theta)+\lambda\frac{\partial}{\partial\theta}B(\theta)+\sum_{k=0}^{K}\eta_{k}\frac{\partial}{\partial\theta}C_{k,q}(f_{\theta})=\frac{\partial}{\partial\theta}L_{\lambda,\eta}(\theta). \tag{8}\] \(\mathbb{E}^{(t)}\) represents the expectation with respect to \(X^{(t)}=\{x_{i}^{(t)}\}\), \(Y^{(t)}=\{y_{i}^{(t)}\}\), \(Z^{(t)}=\{z_{j}^{(t)}\}\). Note that the unbiasedness (8) holds for any \(n,m\in\mathbb{N}\). Using this stochastic gradient (7), we employ the stochastic gradient descent \[\theta^{(t)}=\theta^{(t-1)}-\omega_{t}g_{\lambda,\eta}^{(t-1,n,m)}(\theta^{(t-1)}),\quad(t=1,2,\ldots,T). \tag{9}\] The stochastic gradient descent (9), equipped with the unbiased gradient estimator (7) and a decreasing learning rate \(\omega_{t}\searrow 0\), provably minimizes the loss function \(L_{\lambda,\eta}(\theta)\). See Proposition 1 for a rigorous statement of a sufficient condition for convergence. **Proposition 1** (A simpler version of Ghadimi and Lan (2013) Theorem 2.1 (a)).: Let \(n,m\in\mathbb{N}\) be arbitrarily fixed. Assume that (i) \(f(\theta)=L_{\lambda,\eta}(\theta)\) is smooth, (ii) the gradient of \(f(\theta)\) satisfies the Lipschitz property, i.e., \(\|\partial f(\theta)/\partial\theta-\partial f(\theta^{\prime})/\partial\theta\|\leq L\|\theta-\theta^{\prime}\|\), (iii) \(\mathbb{E}^{(t)}(g_{\lambda,\eta}^{(t,n,m)}(\theta^{(t)}))=\partial f(\theta^{(t)})/\partial\theta\), and (iv) \(\mathbb{E}^{(t)}(\|g_{\lambda,\eta}^{(t,n,m)}(\theta^{(t)})-\partial f(\theta^{(t)})/\partial\theta\|^{2})\leq\sigma^{2}\) for some \(\sigma\geq 0\), for any \(t\in\{1,2,\ldots,T\}\). Assume that the learning rate \(\{\omega_{t}\}_{t=1}^{T}\) satisfies \(\sum_{t=1}^{T}\omega_{t}\to\infty\) and \(\{\sum_{t=1}^{T}\omega_{t}\}^{-1}\sum_{t=1}^{T}\omega_{t}^{2}\to 0\) as \(T\to\infty\). Then, the sequence \(\{\theta^{(t)}\}\) obtained by the stochastic gradient descent (9) satisfies \[\mathbb{E}_{\tau}\left(\left\|\frac{\partial}{\partial\theta}L_{\lambda,\eta}(\theta^{(\tau)})\right\|^{2}\right)\overset{\text{in prob.}}{\to}0\quad(T\to\infty).\] \(\mathbb{E}_{\tau}\) represents the expectation with respect to the step \(\tau\in\{1,2,\ldots,T\}\) randomly chosen with the probability \(\mathbb{P}(\tau=t\mid T)=\{2\omega_{t}-L\omega_{t}^{2}\}/\sum_{s=1}^{T}\{2\omega_{s}-L\omega_{s}^{2}\}\) (\(t=1,2,\ldots,T\)). Proposition 1 proves that the parameter estimated by the proposed stochastic algorithm is expected to reach a (local) minimum of the exact loss function \(L_{\lambda,\eta}(\theta)\), not of a finite approximation of the loss function via numerical integration.
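For illustration, the update (9) can be implemented with automatic differentiation alone: backpropagating through a Monte Carlo estimate of the \((k,q)\)-VR term reproduces the unbiased gradient (7) exactly (for \(\nu(z)=z^{2}\), a single \(k\), and \(\lambda=0\), including the \(q/2\) factor in (6)). The sketch below is ours, not the authors' reference code, and assumes a scalar-input model `f` with parameters `params`.

```python
import torch

def kth_derivative(f, x, k):
    # k-th derivative of a scalar-output model f at points x via autograd.
    x = x.requires_grad_(True)
    out = f(x)
    for _ in range(k):
        (out,) = torch.autograd.grad(out.sum(), x, create_graph=True)
    return out

def sgd_step(f, params, batch, omega, D=(-0.5, 0.5), k=3, q=2.0,
             eta=1e-5, m=5):
    # One update of eq. (9): quadratic data loss (nu(z) = z^2) plus a Monte
    # Carlo estimate of the (k,q)-VR term (2) over z_j ~ Uniform(D1, D2).
    x, y = batch
    z = torch.empty(m).uniform_(*D)
    data_loss = ((y - f(x)) ** 2).mean()
    vr_loss = 0.5 * (D[1] - D[0]) * (kth_derivative(f, z, k).abs() ** q).mean()
    loss = data_loss + eta * vr_loss
    grads = torch.autograd.grad(loss, params)   # unbiased stochastic gradient
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= omega * g

# Hypothetical usage: params = list(model.parameters()); call sgd_step with a
# freshly resampled minibatch (x, y) and learning rate omega_t each iteration.
```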
It is also noted that the convergence holds regardless of the sample sizes \(n,m\in\mathbb{N}\) (namely, there is no need to take the limit \(n,m\to\infty\) to guarantee the convergence; \(n=m=5\) is used in our numerical experiments). As the stochastic algorithm includes randomness in computing the stochastic gradient, stochastic gradient algorithms are believed to be more likely to escape from local minima or saddle points (compared with the full-batch gradient descent); i.e., the stochastic solution can be better than full-batch estimators in some cases. See, e.g., Jin et al. (2021).

\begin{table} \begin{tabular}{l c c c} \hline & \(f_{1}(x)\): linear & \(f_{2}(x)\): quadratic & \(f_{3}(x)\): cubic \\ \hline No regularization & \(0.943\pm 0.0092\) & \(0.938\pm 0.0106\) & \(0.863\pm 0.0271\) \\ Smaller NN (\(L=20\)) & \(0.971\pm 0.0095\) & \(0.916\pm 0.0289\) & \(0.689\pm 0.1890\) \\ \(L_{2}\) (\(\lambda=10^{-1}\)) & \(0.975\pm 0.0054\) & \(0.971\pm 0.0069\) & \(0.928\pm 0.0234\) \\ \(L_{2}\) (\(\lambda=10^{-3}\)) & \(0.933\pm 0.0121\) & \(0.909\pm 0.0248\) & \(0.782\pm 0.0443\) \\ \(L_{2}\) (\(\lambda=10^{-5}\)) & \(0.960\pm 0.0114\) & \(0.930\pm 0.0211\) & \(0.849\pm 0.0528\) \\ \hline (1,2)-VR (\(\eta_{1}=10^{-1}\)) & \(0.996\pm 0.0007\) & \(0.967\pm 0.0271\) & \(0.823\pm 0.1720\) \\ (2,2)-VR (\(\eta_{2}=10^{-3}\)) & \(0.999\pm 0.0010\) & \(0.994\pm 0.0031\) & \(0.842\pm 0.1000\) \\ (3,2)-VR (\(\eta_{3}=10^{-5}\)) & \(0.999\pm 0.0007\) & \(0.996\pm 0.0036\) & \(0.966\pm 0.0301\) \\ \hline \end{tabular} \end{table} Table 2: Predictive correlation for \(N=10^{2}\).

\begin{table} \begin{tabular}{l c c c} \hline & \(f_{1}(x)\): linear & \(f_{2}(x)\): quadratic & \(f_{3}(x)\): cubic \\ \hline No regularization & \(0.943\pm 0.0092\) & \(0.938\pm 0.0106\) & \(0.863\pm 0.0271\) \\ Smaller NN (\(L=20\)) & \(0.971\pm 0.0095\) & \(0.916\pm 0.0289\) & \(0.689\pm 0.1890\) \\ \(L_{2}\) (\(\lambda=10^{-1}\)) & \(0.975\pm 0.0054\) & \(0.971\pm 0.0069\) & \(0.928\pm 0.0234\) \\ \(L_{2}\) (\(\lambda=10^{-3}\)) & \(0.933\pm 0.0121\) & \(0.909\pm 0.0248\) & \(0.782\pm 0.0443\) \\ \(L_{2}\) (\(\lambda=10^{-5}\)) & \(0.960\pm 0.0114\) & \(0.930\pm 0.0211\) & \(0.849\pm 0.0528\) \\ \hline (1,2)-VR (\(\eta_{1}=10^{-1}\)) & \(0.996\pm 0.0007\) & \(0.967\pm 0.0271\) & \(0.823\pm 0.1720\) \\ (2,2)-VR (\(\eta_{2}=10^{-3}\)) & \(0.999\pm 0.0010\) & \(0.994\pm 0.0031\) & \(0.842\pm 0.1000\) \\ (3,2)-VR (\(\eta_{3}=10^{-5}\)) & \(0.999\pm 0.0007\) & \(0.996\pm 0.0036\) & \(0.966\pm 0.0301\) \\ \hline \end{tabular} \end{table} Table 3: Predictive correlation for \(N=10^{3}\).

Figure 2: Neural networks trained with \(N=100\) samples.

## 3 Experiments

This section conducts numerical experiments to demonstrate the proposed approach. Source codes to reproduce the numerical results are provided in [https://github.com/oknakfm/HOVR](https://github.com/oknakfm/HOVR).

### 3.1 Experimental settings

Dataset generation: we generate \(x_{i}\sim U(-1/2,1/2)\) uniformly at random from the interval \(\Omega=[-1/2,1/2]\) and normal random numbers \(y_{i}\sim N(f(x_{i}),\sigma^{2})\) with \(\sigma=0.1\). We consider three settings: (i) linear: \(f(x)=x\), (ii) quadratic: \(f(x)=4x^{2}\), (iii) cubic: \(f(x)=\frac{6q}{7}(x+3/8)x(x-3/8)\). Neural network architecture: we employ a single-hidden-layer perceptron (1) with \(L=200\) hidden units. The hyperbolic tangent function \(\tanh(z)=\{\exp(z)-\exp(-z)\}/\{\exp(z)+\exp(-z)\}\) is employed as the activation function \(\sigma(z)\).
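A minimal PyTorch sketch of the single-hidden-layer perceptron (1) with the tanh activation used in these experiments might look as follows; the class name is ours, and the initialization is simplified relative to the description below (which draws \(b_{\ell}^{(0)}\) from a data-dependent normal distribution).

```python
import torch

class OneHiddenLayerNet(torch.nn.Module):
    # Eq. (1): f_theta(x) = sum_l a_l * tanh(b_l * x + c_l) + d.
    def __init__(self, L=200):
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(L))
        self.b = torch.nn.Parameter(torch.randn(L))
        self.c = torch.nn.Parameter(torch.randn(L))
        self.d = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch,) of scalar inputs, broadcast against the L hidden units.
        h = torch.tanh(self.b * x.unsqueeze(-1) + self.c)
        return h @ self.a + self.d
```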
Neural network training: we compute the stochastic gradient descent described in Section 2.3. We employ \(\nu(z)=z^{2}\) for the loss function \(A(\theta)\) and \(q=2\) for the variation regularization. We randomly pick \(n=m=5\) samples for computing the stochastic gradients \(\alpha^{(t,n)}(\theta)\) and \(\gamma^{(t,m)}_{k,q}(f_{\theta})\). The parameters of the neural network (1) are initialized by normal random numbers: we randomly pick \(a^{(0)}_{\ell},c^{(0)}_{\ell}\) from \(N(0,1)\), \(b^{(0)}_{\ell}\) from \(N(0,1/\hat{\psi}(\{x_{i}\}))\), and set \(d^{(0)}=0\). The learning rate is designed to be cyclic (following a similar idea to Smith (2017)): the learning rate is initialized at \(\omega_{0}=10^{-3}\) and is multiplied by \(0.9\) every \(25\) iterations. Every \(10^{3}\) iterations, the learning rate is reset to \(10^{-3}\). Overall, we compute \(T=2\times 10^{4}\) iterations, meaning that our SGD repeats \(20\) cycles of the decreasing learning rate. Baselines: we train the neural network (i) without regularization, (ii) with a smaller number of hidden units (\(L=20\)), and (iii) with \(L_{2}\) regularization as baselines. Evaluation metric: after training the neural network using the proposed approach and the baselines, we compute the prediction \(\hat{y}_{i}\) for \(x_{i}\) (\(i=1,2,\ldots,10^{4}\)) regularly placed over the interval \([-1/2+0.1,1/2-0.1]\). Using \(y^{*}_{i}=f(x_{i})\), we compute the (predictive) correlation coefficient; larger scores are better. We compute the predictive correlation \(5\) times with different random seeds and summarize the results with the mean and the standard deviation for comparison purposes.

### 3.2 Results

Experimental results are shown in Tables 1-3. Also see Figure 2 for an illustration of the neural networks trained with \(N=10^{2}\) samples. Overall, variation regularization demonstrates the highest predictive correlation for all the settings (\(N=10^{1}\), \(10^{2}\), \(10^{3}\) and \(f_{1},f_{2},f_{3}\)), while all the remaining neural networks are trained with the same optimization settings (except for the type of regularization). There are several observations through these experiments. First, regularizing a higher-order variation yields "smoother" functions. See Figure 2: regularizing the \(3\)rd order derivative (\(\eta_{3}=10^{-5}\)) yields smoother functions than regularizing the \(2\)nd (\(\eta_{2}=10^{-3}\)) and \(1\)st (\(\eta_{1}=10^{-1}\)) order derivatives. In particular, regularizing the \(1\)st order derivative (\(\eta_{1}=10^{-1}\)) forces the estimated functions to be closer to piece-wise constant functions, and regularizing the \(2\)nd order derivative (\(\eta_{2}=10^{-3}\)) forces the functions to be closer to piece-wise linear ones. Second, \(L_{2}\) regularization yields functions that still contain many inflection points, while \((k,q)\)-VR yields smoother functions. Third, reducing the number of hidden units in the NN is effective when the underlying function is simple. For instance, the prediction accuracy is improved by reducing the number of hidden units \(L\) if the underlying function is linear. However, if the underlying function is more complicated, for instance, quadratic or cubic, the prediction accuracy is degraded, as the smaller NN has less expressive power; the smaller NN is not capable of fully approximating the non-linear functions.
## 4 Application to the physics-informed training of a neural network

Besides traditional numerical approaches such as the finite element method (see, e.g., Oden and Reddy (2012)), neural networks have been leveraged to solve user-specified differential equations (Dissanayake and Phan-Thien, 1994; Lagaris et al., 1998), so that the neural network model itself is trained to satisfy the differential equations. In recent years, this approach has been combined with the automatic differentiation widely used in modern deep neural network frameworks (Paszke et al., 2017). The neural network framework that aims to solve differential equations with the aid of automatic differentiation is called the physics-informed training of a neural network (or simply a physics-informed neural network; PINN), and PINN has drawn considerable attention these days (Berg and Nystrom, 2018; Cuomo et al., 2022), especially in the natural sciences. See the comprehensive survey by Cuomo et al. (2022) for more details; herein, we summarize the concept of PINN and its relation to this study. While we describe the ordinary differential equation (ODE) case for simplicity, the concept has been extended to partial differential equations. Let \(D_{1},D_{2}\in\mathbb{N}\), and let \(\Omega:=[D_{1},D_{2}]\) be the support of the function to be considered. Let \(f_{\theta}\) be a neural network and let \(f_{\theta}^{[k]}\) denote its \(k\)th derivative (with respect to the input \(x\)). Let \(\mathcal{L}[f](x)=0\) (\(\forall x\in\Omega\)) be the ODE to be solved (for instance, \(\mathcal{L}[f]=f^{[2]}+\alpha f\) represents a simple harmonic motion). Then, PINN is formulated to minimize the loss function \[\int_{\Omega}\mathcal{L}[f_{\theta}](x)^{2}\mathrm{d}x+\kappa\mathcal{B}[f_{\theta}], \tag{10}\] where \(\mathcal{B}[f_{\theta}]\) represents the loss function for the boundary conditions (e.g., \(\mathcal{B}[f_{\theta}]=\{f_{\theta}(D_{1})-f_{1}\}^{2}+\{f_{\theta}(D_{2})-f_{2}\}^{2}\) for the conditions \(f(D_{1})=f_{1},f(D_{2})=f_{2}\)) and \(\kappa>0\) is a hyperparameter. Here, we can further consider the case where several labels for the underlying function \(f\) are available; i.e., assuming that pairs \((x_{i},y_{i})\) satisfying \(y_{i}\approx f(x_{i})\) are available, we can introduce a label loss into the original form (10): \[\frac{1}{N}\sum_{i=1}^{N}\nu(y_{i}-f_{\theta}(x_{i}))+\frac{\eta}{2}\int_{\Omega}\mathcal{L}[f_{\theta}](x)^{2}\mathrm{d}x+\kappa\mathcal{B}[f_{\theta}]. \tag{11}\] While the original PINN is designed to solve the ODE, minimizing (11) can also be regarded as a regression problem with an ODE-based regularization (for a hyperparameter \(\eta>0\) that is not too large). The \((k,q)\)-VR term \(C_{k,q}(f_{\theta})\) coincides with the second term in the ODE-constrained regression (11) by specifying \(\mathcal{L}[f_{\theta}]=|f_{\theta}^{[k]}|^{q/2}\), so the training of a neural network with the \((k,q)\)-VR is closely related to the PINN formulation. Despite this similarity, our approach is significantly different from the PINN framework from the computational perspective.
In the computation of PINN, the explicit form of the integral term \(\int_{\Omega}\mathcal{L}[f_{\theta}](x)^{2}\mathrm{d}x\) is hard to obtain (reflecting the same computational difficulty as the \((k,q)\)-VR term discussed so far), so existing studies finitely approximate this integral as \[\int_{\Omega}\mathcal{L}[f_{\theta}](x)^{2}\mathrm{d}x\approx\frac{D_{2}-D_{1}}{M}\sum_{j=1}^{M}\mathcal{L}[f_{\theta}](\bar{x}_{j})^{2} \tag{12}\] with \(M\) training points \(\{\bar{x}_{j}\}_{j=1}^{M}\subset\Omega\) regularly picked from the interval \(\Omega=[D_{1},D_{2}]\). See, e.g., Section 2.3 in Cuomo et al. (2022) for the current state of the art in the computation of PINN, including the efficient approximation of the integral (12). Due to the finite approximation, theoretical analyses indicate that the (practically computed) finite approximation of PINN incurs an approximation error depending on the parameter \(M\in\mathbb{N}\) adjusting the precision (Shin et al., 2020; De Ryck et al., 2021). See Section 2.4 in Cuomo et al. (2022) for more details. As long as the training points \(\{\bar{x}_{j}\}_{j=1}^{M}\) are fixed in advance, the approximation error does not vanish even if the finite approximation is minimized by stochastic gradient descent. Our proposed stochastic algorithm can be straightforwardly generalized to this PINN setting. See (13) in Appendix C; the PINN extension of our stochastic gradient is an unbiased estimator of the gradient of the original PINN loss function (11), which contains the integral term. Therefore, stochastic gradient descent using the unbiased gradient (13) directly minimizes the loss function (11) without going through the finite approximation (12). Although the PINN extension goes beyond the main scope of this study, we see great potential in this direction.

## 5 Conclusion and possible future direction

This study considered directly regularizing the \(k\)th order derivative of the parametric models to be trained. While the \((k,q)\)-VR is represented by a computationally intractable integral, the loss function equipped with the \((k,q)\)-VR can be efficiently minimized by the proposed stochastic optimization algorithm. Numerical experiments demonstrated the training of highly non-linear neural networks with the \((k,q)\)-VR terms. Although this study considered only regression problems for simplicity, \((k,q)\)-VR terms can be straightforwardly incorporated into various statistical problems, such as classification. Furthermore, the proposed approach can be applied to general parametric models, including deep neural networks of arbitrary structure. Possible future directions of the proposed approach are diverse. One direction is the application to physics-informed neural networks described in Section 4; the proposed approach is free from the approximation error that originates from the finite approximation of the integral. Another direction is the fusion of expressive neural networks and robust estimation (against outliers). While expressive models such as neural networks may fit both the underlying target function and the outlier distribution, regularization of the higher-order variation may prevent "overfitting" to the outlier distribution.

## Acknowledgement

A. Okuno was supported by JSPS KAKENHI (21K17718, 22H05106). We would like to thank Keisuke Yano for the helpful discussions.
## Appendix A Regularization for additive model

An additive model \(f_{\theta}^{(\text{add.})}(x)=\sum_{j=1}^{J}\theta_{j}\phi_{j}(x)\) with user-specified basis functions \(\{\phi_{j}\}\) encompasses kernel and spline regression models as special cases. With the Gram matrix \(G=(g_{ij}),g_{ij}=\int_{\Omega}\phi_{i}(x)\phi_{j}(x)\mathrm{d}x\), we have \[\|f_{\theta}^{(\text{add.})}\|_{L^{2}(\Omega)}^{2}=\int_{\Omega}f_{\theta}^{(\text{add.})}(x)^{2}\mathrm{d}x=\sum_{i=1}^{J}\sum_{j=1}^{J}\theta_{i}\theta_{j}\underbrace{\int_{\Omega}\phi_{i}(x)\phi_{j}(x)\mathrm{d}x}_{=g_{ij}}=\sum_{i=1}^{J}\sum_{j=1}^{J}\theta_{i}\theta_{j}g_{ij}=\langle\theta,G\theta\rangle.\] See, e.g., Smale and Zhou (2007) for more details on the regularization for kernel regression.

## Appendix B Gradient of \(k\)th order variation

The \(k\)th order derivative of the single-hidden-layer perceptron (1) is obtained as \[\frac{\partial^{k}}{\partial x^{k}}f_{\theta}(x)=\sum_{\ell=1}^{L}a_{\ell}b_{\ell}^{k}\sigma^{[k]}(b_{\ell}x+c_{\ell})+d\,\mathbb{1}_{\{k=0\}},\] where \(\sigma^{[k]}\) denotes the \(k\)th derivative of the activation function and \(\mathbb{1}_{\mathcal{A}}\) denotes the indicator function, which outputs 1 if and only if the event \(\mathcal{A}\) occurs, and 0 otherwise. For the hyperbolic tangent activation function \(\sigma(z)=\tanh(z)=\{\exp(z)-\exp(-z)\}/\{\exp(z)+\exp(-z)\}\), we have \(\sigma^{[1]}(z)=1-\tanh^{2}z\), \(\sigma^{[2]}(z)=-2\tanh z(1-\tanh^{2}z)\), \(\sigma^{[3]}(z)=2(1-\tanh^{2}z)(3\tanh^{2}z-1)\), and \(\sigma^{[4]}(z)=8\tanh(z)(\tanh^{2}z-1)(3\tanh^{2}z-2)\). Then, the gradient of the \(k\)th order derivative with respect to the parameters is obtained as \[\frac{\partial}{\partial a_{\ell^{\prime}}}\frac{\partial^{k}}{\partial x^{k}}f_{\theta}(x)=b_{\ell^{\prime}}^{k}\sigma^{[k]}(b_{\ell^{\prime}}x+c_{\ell^{\prime}}),\] \[\frac{\partial}{\partial b_{\ell^{\prime}}}\frac{\partial^{k}}{\partial x^{k}}f_{\theta}(x)=ka_{\ell^{\prime}}b_{\ell^{\prime}}^{k-1}\sigma^{[k]}(b_{\ell^{\prime}}x+c_{\ell^{\prime}})+a_{\ell^{\prime}}b_{\ell^{\prime}}^{k}x\,\sigma^{[k+1]}(b_{\ell^{\prime}}x+c_{\ell^{\prime}}),\] \[\frac{\partial}{\partial c_{\ell^{\prime}}}\frac{\partial^{k}}{\partial x^{k}}f_{\theta}(x)=a_{\ell^{\prime}}b_{\ell^{\prime}}^{k}\sigma^{[k+1]}(b_{\ell^{\prime}}x+c_{\ell^{\prime}}),\] \[\frac{\partial}{\partial d}\frac{\partial^{k}}{\partial x^{k}}f_{\theta}(x)=\mathbb{1}_{\{k=0\}}.\]

## Appendix C An unbiased stochastic gradient of the PINN loss function

Pick \(\{(x_{i}^{(t)},y_{i}^{(t)})\}_{i=1}^{n}\) from \(\{(x_{i},y_{i})\}_{i=1}^{N}\), and also pick \(\{z_{j}^{(t)}\}_{j=1}^{m}\) uniformly at random from the interval \(\Omega=[D_{1},D_{2}]\). Then, we define \[-\frac{1}{n}\sum_{i=1}^{n}\nu^{[1]}(y_{i}^{(t)}-f_{\theta}(x_{i}^{(t)}))\frac{\partial}{\partial\theta}f_{\theta}(x_{i}^{(t)})+\eta\frac{D_{2}-D_{1}}{m}\sum_{j=1}^{m}\mathcal{L}[f_{\theta}](z_{j}^{(t)})\frac{\partial}{\partial\theta}\mathcal{L}[f_{\theta}](z_{j}^{(t)})+\kappa\frac{\partial}{\partial\theta}\mathcal{B}[f_{\theta}]. \tag{13}\] Equation (13) is an unbiased estimator of the gradient of the PINN loss (11) regardless of the numbers of samples \(n,m\in\mathbb{N}\). The same discussion as in Section 2.3 can be applied to the stochastic gradient descent using (13).
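For illustration, the following sketch performs one update whose autograd gradient equals the estimator (13); it assumes PyTorch, the model `f` and the `ode_residual` helper from the earlier PINN sketch, and the choices \(\nu(z)=z^{2}\) (so \(\nu^{[1]}(z)=2z\)) and illustrative hyperparameters.

```python
import torch

# One stochastic update realizing the unbiased estimator (13); `f`, `ode_residual`,
# D1, D2, eta, and kappa are assumed from the earlier PINN sketch (illustrative).
opt = torch.optim.SGD(f.parameters(), lr=1e-3)

def boundary_loss(f1=0.0, f2=1.0):
    # B[f_theta] for the boundary conditions f(D1) = f1 and f(D2) = f2
    return ((f(torch.tensor([[D1]])) - f1) ** 2
            + (f(torch.tensor([[D2]])) - f2) ** 2).squeeze()

def sgd_step(xs, ys, n=32, m=32):
    idx = torch.randint(len(xs), (n,))            # random minibatch of labeled pairs
    z = D1 + (D2 - D1) * torch.rand(m, 1)         # fresh uniform samples from [D1, D2]
    label = ((ys[idx] - f(xs[idx])) ** 2).mean()  # nu(z) = z^2, so nu^[1](z) = 2z
    resid = (eta / 2) * (D2 - D1) * (ode_residual(z) ** 2).mean()
    loss = label + resid + kappa * boundary_loss()
    opt.zero_grad()
    loss.backward()   # autograd of this loss yields exactly the estimator (13)
    opt.step()
```

Because `z` is resampled at every step, the expected gradient equals the gradient of the full loss (11); no fixed-grid approximation like (12) is involved.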
2302.06086
Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects
With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent defects in DNNs. To assure high reliability against numerical defects, in this paper, we propose the RANUM approach including novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms potential-defect feasibility with failure-exhibiting tests and suggests fixes automatically. Extensive experiments on the benchmarks of 63 real-world DNN architectures show that RANUM outperforms state-of-the-art approaches across the three reliability assurance tasks. In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases, RANUM-generated fixes are equivalent to or even better than human fixes.
Linyi Li, Yuhao Zhang, Luyao Ren, Yingfei Xiong, Tao Xie
2023-02-13T04:09:34Z
http://arxiv.org/abs/2302.06086v3
# Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects ###### Abstract With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent defects in DNNs. To assure high reliability against numerical defects, in this paper, we propose the RANUM approach including novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms potential-defect feasibility with failure-exhibiting tests and suggests fixes automatically. Extensive experiments on the benchmarks of 63 real-world DNN architectures show that RANUM outperforms state-of-the-art approaches across the three reliability assurance tasks. In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases, RANUM-generated fixes are equivalent to or even better than human fixes. neural network, numerical defect, testing, fix ## I Introduction Deep Neural Networks (DNNs) are successfully deployed and show remarkable performance in many challenging applications, including facial recognition [21, 56], game playing [38], and code completion [20, 4]. To develop and deploy DNNs, one needs to obtain a DNN architecture, which is usually encoded by program code as the example shown in Figure 2. First, for training, the user executes the program with the architecture on the given training/validation data, obtains the model weights, and stores them in a weight file. The architecture along with the weights is named a model. Then, for inference, the user loads the weight file to CPU/GPU memory or AI chips, executes the same program with the given inference sample and weights as arguments, and gets the model prediction result as the program output. With the wide deployment of DNN models (resulting from training DNN architectures), reliability issues of DNN-based systems have become a serious concern, where malfunctioning DNN-based systems have led to serious consequences such as fatal traffic accidents [24]. To assure the reliability of DNN-based systems, it is highly critical to detect and fix numerical defects for two main reasons. First, numerical defects widely exist in DNN-based systems. For example, in the DeepStability database [14], over 250 defects are identified in deep learning (DL) algorithms, of which over 60% are numerical defects. Moreover, since numerical defects exist at the architecture level, any model using the architecture naturally inherits these defects. Second, numerical defects can result in serious consequences. Once numerical defects (such as divide-by-zero) are exposed, the faulty DNN model will output NaN or INF instead of producing any meaningful prediction, resulting in numerical failures and system crashes [60, 58]. Thus, numerical defects hinder the application of DNNs in scenarios with high reliability and availability requirements such as threat monitoring in cybersecurity [33] and cloud system controlling [36, 13].
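Concretely, such numerical failures surface as non-finite tensor values; a tiny PyTorch demonstration (the operand values are purely illustrative):

```python
import torch

# How exposed numerical defects surface as INF/NaN outputs.
x = torch.tensor([0.0, -1.0])
print(torch.log(x))                            # tensor([-inf, nan]): invalid Log inputs
print(torch.tensor(1.0) / torch.tensor(0.0))   # tensor(inf): divide-by-zero
print(torch.tensor(0.0) / torch.tensor(0.0))   # tensor(nan)
```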
To address numerical defects in DNN architectures in an actionable manner [53], in this paper, we propose a workflow of reliability assurance (as shown in Figure 1), consisting of three tasks: potential-defect detection, feasibility confirmation, and fix suggestion, along with our proposed approach to support all these three tasks.

Fig. 1: Workflow for reliability assurance against numerical defects in DNN architectures. The left-hand side shows three tasks and the right-hand side shows corresponding examples. RANUM supports all the three tasks, and is the first automatic approach for system test generation and fix suggestion.

_Potential-Defect Detection._ In this task, we detect all potential numerical defects in a DNN architecture, with a focus on operators with numerical defects (in short, defective operators) that potentially exhibit inference-phase numerical failures, for two main reasons, following the literature [61, 54]. First, these defective operators can be exposed after the model is deployed and thus are more devastating than those that potentially exhibit training-phase numerical failures [27, 61]. Second, a defective operator that potentially exhibits training-phase numerical failures can usually be triggered to exhibit inference-phase numerical failures, thus also being detected by our task. For example, the type of training-phase NaN gradient failures is caused by an operator's input that leads to invalid derivatives, and this input also triggers failures in the inference phase [54]. _Feasibility Confirmation_. In this task, we confirm the feasibility of these potential numerical defects by generating failure-exhibiting system tests. As shown in Figure 1, a system test is a tuple of a training example\({}^{1}\) \(\mathbf{x}_{\text{train}}\) and an inference example \(\mathbf{x}\) such that after the training example is used to train the architecture under consideration, applying the resulting model on the inference example exhibits a numerical failure. Footnote 1: In real settings, multiple training examples are used to train an architecture, but generating a single training example to exhibit failures (targeted by our work) is desirable for ease of debugging while being more challenging than generating multiple training examples to exhibit failures. _Fix Suggestion_. In this task, we fix a feasible numerical defect. To determine the fix form, we have inspected the developers' fixes of the numerical defects collected by Zhang et al. [60] by looking at follow-up Stack Overflow posts or GitHub commits. Among the 13 numerical defects whose fixes can be located, 12 fixes can be viewed as explicitly or implicitly imposing interval preconditions on different locations, such as after inputs or weights are loaded and before defective operators are invoked. Thus, imposing an interval precondition, e.g., by clipping (i.e., chopping off the input parts that exceed the specified input range) the input for defective operator(s), is an effective and common strategy for fixing a numerical defect. Given a location (i.e., one related to an operator, input, or weight where users prefer to impose a fix), we suggest a fix for the numerical defect under consideration. To support all the three tasks of the reliability assurance process against DNN numerical defects, we propose the **RANUM** approach in this paper.
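As a concrete (and purely illustrative) picture of such an interval-precondition fix, the snippet below clips the operand of a Log operator in a PyTorch-style forward pass; the bound and the names are our own assumptions for illustration, not the code of Figure 2.

```python
import torch

# Illustrative interval-precondition fix: clamp the operand of log so that the
# Log operator's input can never fall in its invalid range (-inf, U_MIN).
U_MIN = 1e-38  # a tiny positive constant near the float32 lower limit (illustrative)

def forward_fixed(p):
    # Without the clamp, any p <= 0 yields log(p) in {-inf, nan}, a numerical failure.
    p_safe = torch.clamp(p, min=U_MIN)  # imposes the precondition U_MIN <= input of Log
    return torch.log(p_safe)
```

This mirrors the fix form RANUM suggests for the running example later in Section II (\(10^{-38}\leq\) input of the defective Log-like nodes).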
_For task 1 and task 2a, which are already supported by two existing tools (DEBAR [61] and GRIST [54]), RANUM introduces novel extensions and optimizations that substantially improve the effectiveness and efficiency_. (1) DEBAR [61] is the state-of-the-art tool for potential-defect detection; however, DEBAR can handle only static computational graphs and does not support the widely used dynamic graphs in PyTorch programs [28]. RANUM supports dynamic graphs thanks to our novel technique of _backward fine-grained node labeling_. (2) GRIST [54] is the state-of-the-art tool for generating failure-exhibiting unit tests to confirm potential-defect feasibility; however, GRIST conducts gradient back-propagation by using the original inference input and weights as the starting point. Recent studies [23, 8] on DNN adversarial attacks suggest that using a randomized input as the starting point leads to stronger attacks than using the original input. Based on this observation, we combine gradient back-propagation with random initialization in RANUM. _For task 2 and task 3, which are not supported by any existing tool, RANUM is the first automatic approach_. For **feasibility confirmation**, RANUM is the **first** approach that generates failure-exhibiting **system** tests that contain training examples. Doing so is a major step beyond the existing GRIST tool, which generates failure-exhibiting unit tests while ignoring the practicality of the generated model weights. Given that in practice model weights are determined by training examples, we propose the technique of _two-step generation_ for this task. First, we generate a failure-exhibiting unit test. Second, we generate a training example that leads to the model weights in the unit test when used for training. For the second step, we extend the deep-leakage-from-gradient (DLG) attack [63] by incorporating the straight-through gradient estimator [3]. For **fix suggestion**, RANUM is the **first** automatic approach. RANUM is based on the novel technique of _abstraction optimization_. We observe that a defect fix in practice typically imposes interval clipping on some operators such that each later-executed operator (including the defective ones) can never exhibit numerical failures. Therefore, we propose the novel technique of abstraction optimization to push the input range of a defective operator away from the invalid range, falling into which causes numerical failures. For RANUM, we implement a tool\({}^{2}\) and evaluate it on the benchmarks [54] of 63 real-world DNN architectures containing 79 real numerical defects; these benchmarks are, to the best of our knowledge, the largest benchmarks of DNN numerical defects. The evaluation results show that RANUM is both effective and efficient in all the three tasks for DNN reliability assurance. (1) For potential-defect detection, RANUM detects \(>\)60% more true defects than the state-of-the-art DEBAR approach. (2) For feasibility confirmation, RANUM generates failure-exhibiting unit tests that confirm potential numerical defects in the benchmarks with a 100% success rate; in contrast, with a much higher time cost (17.32X), the state-of-the-art GRIST approach generates unit tests that confirm defects with a 96.96% success rate. More importantly, for the first time, RANUM generates failure-exhibiting system tests that confirm defects (with a 92.78% success rate). (3) For fix suggestion, RANUM proposes fix suggestions for numerical defects with a 100% success rate.
In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases, RANUM-generated fixes are equivalent to or even better than human fixes. Footnote 2: Open source at [https://github.com/llyly/RANUM](https://github.com/llyly/RANUM). This paper makes the following main contributions: * We formulate the reliability assurance problem for DNN architectures against numerical defects and elaborate on three important tasks for this problem. * We propose RANUM--the first automatic approach that solves all these three tasks. RANUM includes three novel techniques (backward fine-grained node labeling, two-step test generation, and abstraction optimization) and solves system test generation and fix suggestion for the first time. * We implement RANUM and apply it on 63 real-world DNN architectures, showing the high effectiveness and efficiency of RANUM compared to both the state-of-the-art approaches and developers' fixes. ## II Background and Approach Overview In this section, we introduce the background of DNN numerical defects and failures, and then give an overview of the RANUM approach with a running example. ### _Background_ DL developers define the DNN architecture with code using modern DL libraries such as PyTorch [28] and TensorFlow [1]. The DNN architecture can be expressed by a computational graph. Figures 2 and 3 depict a real-world example. Specifically, the DNN architecture in a DL program can be automatically converted to an ONNX-format computational graph [43]. The computational graph can be viewed as a Directed Acyclic Graph (DAG): \(\mathcal{G}=\langle\mathcal{V},\mathcal{E}\rangle\), where \(\mathcal{V}\) and \(\mathcal{E}\) are sets of nodes and edges, respectively. We call nodes with zero in-degree _initial nodes_; they correspond to input, weight, or constant nodes. Initial nodes provide concrete data for the DNN models resulting from training the DNN architecture. The data from each node is formatted as a tensor, i.e., a multidimensional array, with a specified data type and array shape annotated alongside the node definition. We call nodes with positive in-degree _internal nodes_; they correspond to concrete operators, such as matrix multiplication (MatMul) and addition (Add). During model training, the model weights, i.e., the data from weight nodes, are generated by the training algorithm. Then, in the deployment phase (i.e., model inference), with these trained weights and a user-specified input named the inference example, the output of each operator is computed in topological order. The output of some specific node is used as the prediction result. We let \(\mathbf{x}\) and \(\mathbf{w}\) denote the concatenation of data from all input nodes and data from all weight nodes, respectively\({}^{3}\). For example, in Figure 3, \(\mathbf{x}\) concatenates data from nodes 1 and 11; and \(\mathbf{w}\) concatenates data from nodes 2 and 4. Given specific \(\mathbf{x}\) and \(\mathbf{w}\), the input and output for each node are deterministic\({}^{4}\). We use \(f^{\text{in}}_{n}(\mathbf{x};\mathbf{w})\) and \(f^{\text{out}}_{n}(\mathbf{x};\mathbf{w})\) to express the input and output data of node \(n\), respectively, given \(\mathbf{x}\) and \(\mathbf{w}\). Footnote 3: A bolded alphabet stands for a vector or tensor throughout the paper. Footnote 4: An architecture may contain stochastic nodes. We view these nodes as nodes with randomly sampled data, so the architecture itself is deterministic.
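To make this graph view concrete, the sketch below (using the official `onnx` Python package; the file name is a placeholder) loads an ONNX computational graph and walks its operator nodes, distinguishing data sourced from initial nodes from data produced by internal nodes.

```python
import onnx

# Sketch: inspect an ONNX computational graph as the DAG described above.
# "model.onnx" is a placeholder path; ONNX stores nodes in topological order.
model = onnx.load("model.onnx")
graph = model.graph

initializers = {init.name for init in graph.initializer}  # weight/constant tensors
graph_inputs = {inp.name for inp in graph.input}          # user-facing input nodes

for node in graph.node:                                   # internal nodes (operators)
    sources = ["initial" if s in initializers | graph_inputs else "internal"
               for s in node.input]
    print(node.op_type, list(node.input), "->", list(node.output), sources)
```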
**Numerical Defects in DNN Architecture.** We focus on inference-phase numerical defects. These defects lead to numerical failures when specific operators receive inputs within invalid ranges so that the operators output NaN or INF. **Definition 1**: For the given computational graph \(\mathcal{G}=\langle\mathcal{V},\mathcal{E}\rangle\), if there is a node \(n_{0}\in\mathcal{V}\) such that there exist a valid input and valid weights that can let the input of node \(n_{0}\) fall within the invalid range, we say there is a _numerical defect_ at node \(n_{0}\). Formally, \(\exists\mathbf{x}_{0}\in\mathcal{X}_{\text{valid}},\mathbf{w}_{0}\in\mathcal{W}_{\text{valid}},f^{\text{in}}_{n_{0}}(\mathbf{x}_{0};\mathbf{w}_{0})\in\mathcal{I}_{n_{0},\text{invalid}}\) \(\Longrightarrow\exists\) numerical defect at node \(n_{0}\). In the definition, \(\mathcal{X}_{\text{valid}}\) and \(\mathcal{W}_{\text{valid}}\) are the valid input range and weight range, respectively, which are clear given the deployment scenario. For example, ImageNet Resnet50 models have valid input range \(\mathcal{X}_{\text{valid}}=[0,1]^{3\times 224\times 224}\) since image pixel intensities are within \([0,1]\), and valid weight range \(\mathcal{W}_{\text{valid}}=[-1,1]^{p}\), where \(p\) is the number of parameters, since the weights of well-trained Resnet50 models are typically within \([-1,1]\). The invalid range \(\mathcal{I}_{n_{0},\text{invalid}}\) is determined by \(n_{0}\)'s operator type, with detailed definitions in Suppl. B. For example, for the Log operator, the invalid range is \(\mathcal{I}_{n_{0},\text{invalid}}=(-\infty,U_{\min})\), where \(U_{\min}\) is the smallest positive number of a tensor's data type. ### _Approach Overview_ In Figure 4, we show the overview structure of the RANUM approach. RANUM takes a DNN architecture as the input. Note that although RANUM is mainly designed and illustrated for a DNN architecture, the RANUM approach can also be directly applied to general neural network architectures since they can also be expressed by computational graphs.

Fig. 4: Overview of the RANUM approach. The output of RANUM indicates confirmation and manifestation of numerical defects (that can be feasibly exposed at the system level) for a given DNN architecture and effective fixes for the architecture's confirmed defects.

Fig. 3: Computational graph encoded by the snippet in Figure 2.

First, the DNN static analysis framework (task 1 in Figure 1) in RANUM detects all potential numerical defects in the architecture. Second, the two-step test generation component (task 2 in Figure 1), including unit test generation and training example generation, confirms the feasibility of these potential numerical defects. Third, the abstraction optimization component (task 3 in Figure 1) takes the input/output abstractions produced by the DNN static analysis framework along with the user-specified fix locations, and produces preconditions to fix the confirmed defects. We next go through the whole process in detail, taking the DNN architecture shown in Figure 3 as a running example. **Task 1: Potential-Defect Detection via Static Analysis**. The DNN static analysis framework within RANUM first computes the numerical intervals of possible inputs and outputs for all nodes within the given DNN architecture, and then flags any nodes whose input intervals overlap with their invalid ranges as nodes with potential numerical defects.
In Figure 3, suppose that the user-specified input x-input (node 1) is within (elementwise, same below) range \([(-10,-10)^{\mathrm{T}},(10,10)^{\mathrm{T}}]\), and the weights (node 2) are within range \([-10,10]\).

**Task 3: Fix Suggestion**. A typical fix method is clipping before the defective operator; here, \(\mathcal{V}_{\text{fix}}\) are the nodes with numerical defects (e.g., nodes 9 and 10 in Figure 3). According to the empirical study of developers' fixes in Section I, 12 out of 13 defects are fixed by imposing interval preconditions that clip the inputs of \(\mathcal{V}_{\text{fix}}\). Hence, we suggest interval preconditions, i.e., interval constraints \(\mathbf{l}_{n}\leq f_{n}^{\text{in}}(\mathbf{x};\mathbf{w})\leq\mathbf{u}_{n}\) for nodes \(n\in\mathcal{V}_{\text{fix}}\), as the defect fix in this paper. A fix should satisfy that, when these constraints \(\bigwedge_{n\in\mathcal{V}_{\text{fix}}}(\mathbf{l}_{n}\leq f_{n}^{\text{in}}(\mathbf{x};\mathbf{w})\leq\mathbf{u}_{n})\) are imposed, the input of any node in the computational graph is always valid, i.e., \(f_{n_{0}}^{\text{in}}(\mathbf{x};\mathbf{w})\notin\mathcal{I}_{n_{0},\text{invalid}},\forall n_{0}\in\mathcal{V}\). In RANUM, we formulate the fix suggestion task as a constrained optimization problem, taking the endpoints of the interval abstractions for nodes in \(\mathcal{V}_{\text{fix}}\) as optimizable variables. We then propose the novel technique of abstraction optimization to solve this constrained optimization problem. Back to the Figure 3 example: if users plan to impose a fix on the inference input, RANUM can suggest the fix \(-1\leq\) x-input \(\leq 1\); if users plan to impose a fix on the nodes with numerical defects, RANUM can suggest the fix \(10^{-38}\leq\) node 9 & node 10.input \(\leq+\infty\).

## III The RANUM Approach

In this section, we introduce the three novel techniques in RANUM: backward fine-grained node labeling in Section III-A; two-step test generation in Section III-B; and abstraction optimization in Section III-C.

### _DNN Static Analysis Framework with Backward Fine-Grained Node Labeling for Potential-Defect Detection_

RANUM contains a static analysis framework to enable potential-defect detection and support downstream tasks, as shown in Figure 4. Given a DNN architecture and valid ranges for input and weight nodes, the static analysis framework computes interval abstractions for the possible inputs and outputs of each node. As a result, we can check whether an overlap exists between the interval abstraction and the invalid input ranges for all nodes in the graph to detect potential numerical defects. Then, the defective nodes are fed into the two-step test generation component to confirm the feasibility of potential defects, and the differentiable abstractions are fed into the abstraction optimization component to produce fixes. Formally, for given valid ranges of the inference input and model weights, namely \(\mathcal{X}\) and \(\mathcal{W}\), for each node \(n\in\mathcal{V}\), our framework computes a _sound_ input interval abstraction \([\mathbf{l}_{n},\mathbf{u}_{n}]:=\{\mathbf{x}:\mathbf{l}_{n}\leq\mathbf{x}\leq\mathbf{u}_{n}\}\) such that \([\mathbf{l}_{n},\mathbf{u}_{n}]\) always captures all possible inputs of the node: \([\mathbf{l}_{n},\mathbf{u}_{n}]\supseteq\{f_{n}^{\text{in}}(\mathbf{x},\mathbf{w}):\mathbf{x}\in\mathcal{X},\mathbf{w}\in\mathcal{W}\}\). We also compute output interval abstractions similarly.
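To give a flavor of such sound interval propagation, here is a toy elementwise-interval version (without the tensor partitioning introduced next) for a MatMul node feeding a Log node; the operator set, bounds, and names are illustrative simplifications, not RANUM's actual framework.

```python
import numpy as np

U_MIN = 1e-38  # stand-in for the smallest positive value of the data type

def interval_matmul(a_lo, a_hi, w_lo, w_hi):
    # Sound interval MatMul: each scalar product's range is spanned by the four
    # endpoint products; take per-pair min/max, then sum over the shared axis.
    p = np.stack([a_lo[:, :, None] * w_lo[None, :, :],
                  a_lo[:, :, None] * w_hi[None, :, :],
                  a_hi[:, :, None] * w_lo[None, :, :],
                  a_hi[:, :, None] * w_hi[None, :, :]])
    return p.min(axis=0).sum(axis=1), p.max(axis=0).sum(axis=1)

def overlaps_log_invalid_range(in_lo):
    # Log's invalid input range is (-inf, U_MIN); any element whose lower bound
    # is below U_MIN makes the consuming Log node a potential numerical defect.
    return bool((in_lo < U_MIN).any())

# Mirroring the running example: x within [-10, 10]^(1x2), weights within [-10, 10]^(2x2).
x_lo, x_hi = np.full((1, 2), -10.0), np.full((1, 2), 10.0)
w_lo, w_hi = np.full((2, 2), -10.0), np.full((2, 2), 10.0)
h_lo, h_hi = interval_matmul(x_lo, x_hi, w_lo, w_hi)
print(h_lo, h_hi)                        # [[-200. -200.]] [[200. 200.]]
print(overlaps_log_invalid_range(h_lo))  # True: a downstream Log node would be flagged
```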
Compared with traditional analysis tools for numerical software [10, 39], RANUM's static analysis framework designs abstractions for DNN primitives operating on multi-dimensional tensors that are not supported by traditional tools. Compared with the state-of-the-art DEBAR tool [61], RANUM uses the same abstraction domain (the interval domain with tensor partitioning) but incorporates a novel technique (backward fine-grained node labeling) to improve abstraction precision and support a wider range of DNN architectures. **Abstract Domain: Interval with Tensor Partitioning.** Following DEBAR's design, we use the interval domain with tensor partitioning [61] as the abstraction domain. This abstraction domain partitions a tensor into multiple subblocks and shares the interval abstractions at the block level instead of imposing abstractions at the element level. Therefore, we can compute an abstraction of a smaller size than the original tensor to improve efficiency. **Our Technique: Backward Fine-Grained Node Labeling.** The interval domain with tensor partitioning provides a degree of freedom in the partition granularity, i.e., we can choose the subblock size for each node's abstraction. When the finest granularity, i.e., elementwise abstraction, is chosen, the abstraction interval is the most concrete. When the coarsest granularity (i.e., one scalar summarizing the whole node tensor) is chosen, the abstraction saves the most space and computational cost but loses much precision. _Example._ Suppose that the possible input range of a node is \(([-1,0],[0,1],[1,2],[-1,0])\), where each interval \([l,u]\) specifies the range of the corresponding element in the four-dimensional vector. If we choose the finest granularity, we use \([\mathbf{l}_{n},\mathbf{u}_{n}]=[(-1,0,1,-1),(0,1,2,0)]\) as the input range abstraction. If we choose the coarsest granularity, we use \([\mathbf{l}_{n},\mathbf{u}_{n}]=[-1,2]\) as the abstraction, where the same interval is shared by all elements. As we can see, finer granularity provides tighter abstraction at the expense of larger computational and space costs. In DEBAR, the coarsest granularity is used by default for most operators. However, we find that using the finest instead of the coarsest granularity for some nodes is more beneficial for the overall abstraction precision. For example, control-flow operators (e.g., Loop) benefit from concrete execution to determine the exact control flow in the dynamic graph, and indexing operators (e.g., Slice) and shaping operators (e.g., Reshape) benefit from explicit indexers and shapes to precisely infer the output range. Hence, we propose to use the finest granularity for such nodes (namely, fine-grained-requiring operators) and the coarsest granularity for other nodes during static analysis. To benefit from the finest-granularity abstraction for the required nodes, typically all of their preceding nodes also need the finest granularity. Otherwise, the over-approximated intervals from preceding nodes will be propagated to the required nodes, making the finest abstraction for the required nodes useless. To solve this problem, in RANUM, we back-propagate "fine-grained" labels from these fine-grained-requiring nodes to initial nodes by topologically sorting the graph with _inverted_ edges, and then apply the finest-granularity abstractions on all labeled nodes. In practice, we find that this strategy eliminates the control-flow ambiguity and indexing ambiguity with little loss of efficiency\({}^{6}\).
As a result, RANUM supports dynamic graphs (which are not supported by DEBAR); such graphs comprise 39.2% of the benchmarks proposed by Yan et al. [54]. Footnote 6: Theoretically, using the finest granularity for tensor partitioning cannot fully eliminate the ambiguity, since interval abstraction is intrinsically an over-approximation. Nevertheless, in our evaluation (Section IV), we find that this technique eliminates control-flow and indexing ambiguities on all 63 programs in the benchmarks. Furthermore, when preceding nodes use a finer-grained abstraction granularity, the subsequent nodes should preserve such fine granularity to preserve the analysis precision. In principle, the choice of abstraction granularity should satisfy both tightness (bearing no precision loss compared to elementwise interval abstraction) and minimality (using the minimum number of partitions for high efficiency). To realize these principles, we dynamically determine a node's abstraction granularity based on the granularity of its preceding nodes. The abstraction design for some operators is non-trivial. Omitted details (formulation, illustration, and proofs) about the static analysis framework are in Suppl. C. In summary, the whole static analysis process consists of three steps. (1) Determine the tensor partition granularity of all initial nodes by our technique of backward fine-grained node labeling. (2) Sort all nodes in the graph in topological order. (3) Apply the corresponding abstraction computation algorithm for each node based on the preceding nodes' abstractions. The key insight behind the design of our static analysis framework is the strategic granularity selection for tensor abstraction, maintaining both high efficiency (by selecting the coarse granularity for data-intensive nodes) and high precision (by selecting the fine granularity for critical nodes, such as those with control-flow, indexing, and shaping operators). ### _Two-Step Test Generation for Feasibility Confirmation_ RANUM generates failure-exhibiting system tests for the given DNN to confirm the feasibility of potential numerical defects. Here, we take the DNN architecture as the input. From the static analysis framework, we obtain a list of nodes that have potential numerical defects. For each node \(n_{0}\) within the list, we apply our technique of two-step test generation to produce a failure-exhibiting system test \(\mathbf{t}_{\text{sys}}=\langle\mathbf{x}_{\text{train}},\mathbf{x}_{\text{infer}}\rangle\) as the output. According to Section II-B, the test should satisfy that after the architecture is trained with \(\mathbf{x}_{\text{train}}\), entering \(\mathbf{x}_{\text{infer}}\) in the inference phase results in a numerical failure. We propose the novel technique of two-step test generation: first, generate a failure-exhibiting unit test \(\langle\mathbf{w}_{\text{infer}},\mathbf{x}_{\text{infer}}\rangle\); then, generate a training example \(\mathbf{x}_{\text{train}}\) that leads the model weights to be close to \(\mathbf{w}_{\text{infer}}\) after training. **Step a: Unit Test Generation**. As sketched in Section II-B, we strengthen the state-of-the-art unit test generation approach, GRIST [54], by combining it with random initialization to complete this step. Specifically, GRIST leverages the gradients of the defective node's input with respect to the inference input and weights to iteratively update the inference input and weights to generate failure-exhibiting unit tests.
However, GRIST always conducts updates from the existing inference input and weights, suffering from the local-minima problem [23]. Instead, as the DNN adversarial attack literature [23, 45] suggests, a sufficient number of random starts helps find global minima effectively. Hence, in RANUM, we first conduct uniform sampling 100 times for both the inference input and the weights to trigger the numerical failure. If no failure is triggered, we use the sample that induces the smallest loss as the starting point for gradient optimization. As Section IV-A shows, this strategy substantially boosts the efficiency, achieving a 17.32X speedup. **Step b: Training Example Generation**. For this step, RANUM takes the following inputs: (1) the DNN architecture, (2) the failure-exhibiting unit test \(t_{\text{unit}}=\langle\mathbf{w}_{\text{infer}},\mathbf{x}_{\text{infer}}\rangle\), and (3) the randomly initialized weights \(\mathbf{w}_{0}\). Our goal is to generate a legal training example \(\mathbf{x}_{\text{train}}\) such that the model trained with \(\mathbf{x}_{\text{train}}\) will contain weights close to \(\mathbf{w}_{\text{infer}}\). DNNs are typically trained with gradient-descent-based algorithms such as stochastic gradient descent (SGD). In SGD, in each step \(t\), we sample a mini-batch of samples from the training dataset, compute their gradients with respect to the model weights, and use these gradients to update the weights. We focus on one-step SGD training with a single training example, since generating a single one-step training example to exhibit a failure is more desirable for debugging: in one-step training, the model weights are updated strictly following the direction of the gradients. Therefore, developers can inspect inappropriate weights, easily trace back to nodes with inappropriate gradients, and then fix these nodes. In contrast, in multi-step training, from inappropriate weights, developers cannot trace back to inappropriate gradients because the weights are updated iteratively and the interactions between gradients and weights are complex (even theoretically intractable [18]). In this one-step training case, after training, the model weights \(\mathbf{w}_{\text{infer}}\) satisfy \[\mathbf{w}_{\text{infer}}=\mathbf{w}_{0}-\gamma\nabla_{\mathbf{w}}\mathcal{L}(\mathbf{x}_{\text{train}};\mathbf{w}_{0}), \tag{1}\] where \(\gamma\in\mathbb{R}_{+}\) is a predefined learning rate and \(\mathcal{L}\) is the predefined loss function in the DNN architecture. Hence, our goal becomes finding \(\mathbf{x}_{\text{train}}\) that satisfies \[\nabla_{\mathbf{w}}\mathcal{L}(\mathbf{x}_{\text{train}};\mathbf{w}_{0})=(\mathbf{w}_{0}-\mathbf{w}_{\text{infer}})/\gamma. \tag{2}\] The DLG attack [63] is a technique for generating input data that induce specific weight gradients. The attack is originally designed for recovering training samples from monitored gradient updates. Since the right-hand side (RHS) of Equation (2) is known, our goal here is also to generate an input example \(\mathbf{x}_{\text{train}}\) that induces specific weight gradients. Therefore, we leverage the DLG attack to generate the training example \(\mathbf{x}_{\text{train}}\). **Extending DLG Attack with Straight-Through Estimator**. Directly using the DLG attack suffers from an optimization challenge in our scenario.
Specifically, in the DLG attack, suppose that the target weight gradients are \(\Delta\mathbf{w}_{\text{targ}}\); we use gradient descent over the squared error \(\|\nabla_{\mathbf{w}}\mathcal{L}(\mathbf{x};\mathbf{w}_{0})-\Delta\mathbf{w}_{\text{targ}}\|_{2}^{2}\) to generate \(\mathbf{x}\). In this process, we need meaningful gradient information of this squared-error loss to perform the optimization. However, the gradient of this loss involves second-order derivatives of \(\mathcal{L}(\mathbf{x};\mathbf{w}_{0})\), which could be zero. For example, DNNs with ReLU as the activation function are piecewise linear and have zero second-order derivatives almost everywhere [35]. This optimization challenge is partly addressed in the DLG attack by replacing ReLU with Sigmoid, but doing so changes the DNN architecture (i.e., the system under test) and hence is unsuitable. We leverage the straight-through estimator to mitigate the optimization challenge. Specifically, for a certain operator, such as ReLU, we do not change its forward computation but change its backward gradient computation to provide second-order derivatives within the DLG attack process. For example, for ReLU, in the backward computation we use the gradient of the \(\mathrm{Softplus}\) function, namely \(1-\frac{1}{1+\exp(\mathbf{x})}\) (i.e., the sigmoid function), because \(\mathrm{Softplus}\) is an approximation of ReLU [7] with non-zero second-order derivatives. Note that we modify the computed gradients only within the DLG attack process. After such an \(\mathbf{x}_{\text{train}}\) is generated by the attack, we evaluate whether it triggers a numerical failure using the original architecture and the gradients in Equation (1). Suppl. E lists the hyperparameters used by our implementation. ### _Abstraction Optimization for Fix Suggestion_ In this task, we aim to generate the precondition fix given the imposing locations. The inputs are the DNN architecture, the node \(n_{0}\) with numerical defects, and a node set \(\mathcal{V}_{\text{fix}}\) on which to impose the fix. We would like to generate interval preconditions for the inputs of nodes in \(\mathcal{V}_{\text{fix}}\) so that after these preconditions are imposed, the defect on \(n_{0}\) is fixed. Formally, our task is to find \(\langle l_{n},u_{n}\rangle\) for each \(n\in\mathcal{V}_{\text{fix}}\) (\(l_{n}\) and \(u_{n}\) are scalars, so the same interval bound is applied to all elements of \(n\)'s tensor), such that for any \(\mathbf{x},\mathbf{w}\) satisfying \(f_{n}^{\text{in}}(\mathbf{x};\mathbf{w})\in[l_{n},u_{n}],\forall n\in\mathcal{V}_{\text{fix}}\), for the defective node \(n_{0}\) we have \(f_{n_{0}}^{\text{in}}(\mathbf{x};\mathbf{w})\not\in\mathcal{I}_{n_{0},\text{invalid}}\), where the full list of invalid input ranges \(\mathcal{I}_{n_{0},\text{invalid}}\) is in Suppl. B. There is an infinite number of possible \(\langle l_{n},u_{n}\rangle\) interval candidates since \(l_{n}\) and \(u_{n}\) are floating-point numbers. Hence, we need an effective technique to find, from the exceedingly large search space, a valid solution that incurs a relatively small model utility loss. To achieve this, we formulate a surrogate optimization problem for this task.
\[\underset{s,\,\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}}}{\text{maximize}}\ s\quad\text{s.t.}\quad u_{n}\geq l_{n}+s\,(u_{n}^{\text{valid}}-l_{n}^{\text{valid}}),\ \forall n\in\mathcal{V}_{\text{fix}}, \tag{3}\] \[l_{n}^{\text{valid}}\leq l_{n}\leq u_{n}\leq u_{n}^{\text{valid}},\ \forall n\in\mathcal{V}_{\text{fix}}, \tag{4}\] \[\mathcal{L}_{n_{0}}^{\text{precond}}(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}})<0. \tag{5}\] Here, \(l_{n}^{\text{valid}}\) and \(u_{n}^{\text{valid}}\) are the valid ranges (of node \(n\)'s input), which are fixed and determined by the valid ranges of the input and weights. \(\mathcal{L}_{n_{0}}^{\text{precond}}\) is the node-specific precondition generation loss, i.e., the distance between the furthest endpoint of defective node \(n_{0}\)'s interval abstraction and \(n_{0}\)'s valid input range. Hence, when \(\mathcal{L}_{n_{0}}^{\text{precond}}(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}})\) becomes negative, the solution \(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}}\) is a valid precondition. The optimization variables are the precondition interval endpoints \(l_{n}\) and \(u_{n}\), and the objective is the relative span of these intervals. The larger the span is, the looser the precondition constraints are, and the less they hurt the model's utility. Equation (3) enforces the interval span requirement. Equation (4) assures that the precondition interval is within the valid range. Equation (5) guarantees the validity of the precondition as a fix. For any \(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}}\), thanks to RANUM's static analysis framework, we can compute the induced intervals of defective node \(n_{0}\), and thus compute the loss value \(\mathcal{L}_{n_{0}}^{\text{precond}}\). As shown in Algorithm 1, we propose the technique of **abstraction optimization** to effectively and approximately solve this optimization problem. Our technique works iteratively. In the first iteration, we set the span \(s=1\), and in subsequent iterations, we reduce the span \(s\) exponentially, as shown in Line 13, where the hyperparameter \(\gamma_{s}=0.9\). Inside each iteration, for each node \(n\in\mathcal{V}_{\text{fix}}\) on which to impose a precondition, we use the interval center \(c_{n}=(l_{n}+u_{n})/2\) as the optimizable variable and compute the _sign_ of its gradient: \(\mathrm{sgn}(\nabla_{c_{n}}\text{loss})\). We use this gradient sign to update each \(c_{n}\) toward reducing the loss value in Line 6. Then, we use \(c_{n}\) and the span \(s\) to recover the actual interval in Line 7 and clip \(l_{n}\) and \(u_{n}\) by the valid range \([l_{n}^{\text{valid}},u_{n}^{\text{valid}}]\) in Line 8. At the end of the iteration, for the updated \(l_{n}\) and \(u_{n}\), we compute \(\mathcal{L}_{n_{0}}^{\text{precond}}(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}})\) to check whether the precondition is a fix. If so, we terminate; otherwise, we proceed to the next iteration. We note that _if the algorithm finds a precondition, the precondition is guaranteed to be a valid fix_ by the soundness of our static analysis framework and the definition of \(\mathcal{L}_{n_{0}}^{\text{precond}}\). When no feasible precondition is found within \(\mathsf{maxiter}=1000\) iterations, we terminate the algorithm and report "failed to find the fix".
```
0: DNN architecture \(\mathcal{G}=\langle\mathcal{V},\mathcal{E}\rangle\), defective node \(n_{0}\in\mathcal{V}\), nodes to impose fix \(\mathcal{V}_{\text{fix}}\subseteq\mathcal{V}\)
1: \(s\gets 1,\ \gamma_{s}\gets 0.9,\ \gamma_{c}\gets 0.1,\ \text{minstep}\gets 0.1,\ \text{maxiter}\gets 1000\)
2: \(c_{n}\gets(l_{n}^{\text{valid}}+u_{n}^{\text{valid}})/2,\ l_{n}\gets l_{n}^{\text{valid}},\ u_{n}\gets u_{n}^{\text{valid}},\ \forall n\in\mathcal{V}_{\text{fix}}\)
3: for \(i=1\) to maxiter do
4:   for \(n\in\mathcal{V}_{\text{fix}}\) do
5:     \(\text{loss}\leftarrow\mathcal{L}_{n_{0}}^{\text{precond}}(\{l_{n^{\prime}},u_{n^{\prime}}\}_{n^{\prime}\in\mathcal{V}_{\text{fix}}})\)
6:     \(c_{n}\gets c_{n}-\gamma_{c}\max\{|c_{n}|,\text{minstep}\}\,\mathrm{sgn}(\nabla_{c_{n}}\text{loss})\)
7:     \((l_{n},u_{n})\leftarrow\big(c_{n}-\tfrac{s(u_{n}^{\text{valid}}-l_{n}^{\text{valid}})}{2},\ c_{n}+\tfrac{s(u_{n}^{\text{valid}}-l_{n}^{\text{valid}})}{2}\big)\)
8:     \((l_{n},u_{n})\leftarrow(\max\{l_{n},l_{n}^{\text{valid}}\},\ \min\{u_{n},u_{n}^{\text{valid}}\})\)
9:   endfor
10:  if \(\mathcal{L}_{n_{0}}^{\text{precond}}(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}})<0\) then
11:    return \(\{l_{n},u_{n}\}_{n\in\mathcal{V}_{\text{fix}}}\) // Find precondition fix
12:  endif
13:  \(s\leftarrow\gamma_{s}\cdot s\)
14: endfor
15: return "failed" // Failed to find precondition fix
```
**Algorithm 1** Abstraction Optimization (Section III-C) _Remark_. The key ingredient in the technique is the gradient-sign-based update rule (shown in Line 6), which is much more effective than normal gradient descent for two reasons. (1) Our update rule can get rid of gradient explosion and vanishing problems. For early optimization iterations, the span \(s\) is large and the interval bounds are generally coarse, resulting in too large or too small gradient magnitudes. For example, the input range for Log could be \([1,10^{10}]\), where the gradient can be \(10^{-10}\), resulting in almost negligible gradient updates. In contrast, our update rule leverages the gradient sign, which always points in the correct gradient direction. The update step size in our rule is the maximum of the current magnitude \(|c_{n}|\) and minstep to avoid stagnation. (2) Our update rule mitigates the gradient magnitude discrepancy across different \(c_{n}\). At different locations, the nodes in DNNs can have diverse value magnitudes that are not aligned with their gradient magnitudes, making gradient optimization challenging. Therefore, we use this update rule to solve the challenge, where the update magnitude depends on the value magnitude (\(|c_{n}|\)) instead of the gradient magnitude (\(\nabla_{c_{n}}\text{loss}\)). We empirically compare our technique with standard gradient descent in Section IV-C. ## IV Experimental Evaluation We conduct a systematic experimental evaluation to answer the following research questions. * For tasks already supported by existing state-of-the-art (SOTA) tools (tasks 1 and 2a), how much more effective and efficient is RANUM compared to these SOTA tools? * For feasibility confirmation via _generating failure-exhibiting system tests_ (task 2), how much more effectively and efficiently can RANUM confirm potential numerical defects compared to baseline approaches? * For _suggesting fixes_ (task 3), how much more efficient and effective is RANUM in terms of guarding against numerical failures compared to baseline approaches and developers' fixes, respectively? For RQ1, we compare RANUM with all SOTA tools.
For RQ2 and RQ3, RANUM is the first approach to the best of our knowledge, so we compare RANUM with baseline approaches (constructed by leaving our novel techniques out of RANUM) and developers' fixes. We conduct the evaluation on the GRIST benchmarks [54], which are, to our knowledge, the largest dataset of real-world DNN numerical defects. The benchmarks contain 63 real-world DL programs with numerical defects collected from previous studies and GitHub. Each program contains a DNN architecture, and each architecture has one or more numerical defects. There are 79 real numerical defects in total. We perform our evaluation on a Linux workstation with a 24-core Xeon E5-2650 CPU running at 2.20 GHz. Throughout the evaluation, we stop the execution after reaching a \(30\,\mathrm{min}\) limit, following the evaluation setup of the most recent related work [54]. ### _RQ1: Comparison with SOTA Tools_ For two tasks, existing tools provide automatic support: potential-defect detection (task 1), where the SOTA tool is DEBAR [61], and failure-exhibiting unit test generation (task 2a), where the SOTA tool is GRIST [54]. We compare RANUM with these tools on their respective supported tasks. **Comparison with DEBAR**. RANUM successfully detects all 79 true defects, while DEBAR detects only 48 true defects, according to both our evaluation and the literature [54]. Hence, RANUM detects 64.58% more true defects than DEBAR. In terms of efficiency, DEBAR and RANUM have similar running times, and both finish within \(3\,\mathrm{s}\) per case. We manually inspect the cases where DEBAR fails but RANUM succeeds. They correspond to DL programs written with the PyTorch library, which generates dynamic computational graphs that DEBAR cannot handle. In contrast, RANUM provides effective static analysis support for dynamic computational graphs thanks to our backward fine-grained node labeling technique (Section III-A), which is capable of disambiguating the control flow within dynamic graphs. **Comparison with GRIST**. Results are shown in Table I. Since both RANUM and GRIST have a randomness component, where RANUM uses random initialization and GRIST relies on the DNN's randomly initialized weights, we repeat both approaches for 10 runs and record the total number of times a failure-exhibiting unit test is generated as well as the average execution time per run. RANUM succeeds in _all_ cases and _all_ repeated runs, whereas GRIST fails to generate such a unit test in 24 out of 790 runs (i.e., a 96.96% success rate). RANUM has an average execution time of \(6.66\,\mathrm{s}\) and is 17.32X faster than GRIST. The superior effectiveness and efficiency of RANUM are largely due to the random initialization introduced in Section III-B. We observe that since GRIST always takes the initial model weights and inference input as the starting point to update from, the generated unit test is only slightly changed from the initial weights and input, making it hard to expose the target numerical defect. In contrast, RANUM uses random initialization to explore a much larger space and combines it with gradient-based optimization to locate failure-exhibiting instances within that large space. We also evaluate a pure random strategy that uses only random initialization without gradient-based optimization; this strategy fails in 30 runs, being inferior to both RANUM and GRIST, which implies that both random initialization and gradient-based optimization are important. Among all the 79 cases, RANUM is slower than GRIST in only one case (28a).
For this case, we find that the default inference input loaded by the DNN program (used by GRIST) is not far from a failure-exhibiting one, whereas a randomly sampled inference input (used by RANUM) is usually far from it. Hence, RANUM takes more iterations to find a failure-exhibiting inference input, by the nature of gradient-based optimization. ### _RQ2: Feasibility Confirmation via System Test Generation_ In task 2, RANUM confirms the feasibility of potential numerical defects by generating failure-exhibiting system tests. **Baseline**. Since RANUM is the first approach for this task, we do not compare with existing literature and instead propose a random-based approach (named "Random" hereinafter) as the baseline. In "Random", we first generate a failure-exhibiting unit test with random sampling. If any sample triggers a failure, we stop and keep the inference example part as the desired \(\mathbf{x}_{\text{infer}}\). Then, we generate \(\mathbf{x}_{\text{train}}\), again by random sampling. If any sample, when used for training, induces model weights \(\mathbf{w}\) that cause a numerical failure when using \(\mathbf{x}_{\text{infer}}\) as the inference input, we keep that sample as \(\mathbf{x}_{\text{train}}\) and terminate. If and only if both \(\mathbf{x}_{\text{infer}}\) and \(\mathbf{x}_{\text{train}}\) are found, we count the run as a "success" for "Random". For each defect, due to the randomness of the model's initial weights, we repeat both RANUM and "Random" for \(10\) runs. Both approaches use the same set of random seeds. **Evaluation Result.** Results are in Table II. We observe that RANUM succeeds in \(733/(79\times 10)=92.78\%\) of runs and the baseline "Random" succeeds in \(649/(79\times 10)=82.15\%\) of runs. Moreover, RANUM spends only \(17.31\,\mathrm{s}\) on average per run, a 19.30X speedup compared to "Random". We also observe that RANUM is more reliable across repeated runs: there are only 6 cases with unsuccessful repeated runs for RANUM, but 19 such cases for "Random". Hence, RANUM is substantially more effective, efficient, and reliable for generating system tests than the baseline. **Discussion**. The high effectiveness of RANUM mainly comes from the advantage of gradient-guided search over random search. As described in Section III-B, RANUM leverages both first-order gradients (in step a) and second-order derivatives (in step b) to guide the search for system tests. In contrast, "Random" uses random sampling, hoping that failure-exhibiting training examples emerge after sufficient sampling. Hence, when such training examples are relatively rare in the whole valid input space, "Random" is less effective. We conduct an ablation study (in Suppl. F) showing that RANUM improves over "Random" in both of RANUM's steps. **Failing-Case Analysis**. We study all six defects where RANUM may fail and have the following findings. (1) For four defects (Case IDs 1, 15, 37, and 38), the architecture is challenging for gradient-based optimization, e.g., due to Min/Max/Softmax operators that provide little or no gradient information. We leave solving these cases as future work, likely requiring dynamic detection of operators with vanishing gradients and reconstruction of the gradient flow. (2) Two defects (Case IDs 28a and 28b) are caused by Div operators where only a close-to-zero divisor can trigger a numerical failure.
Hence, for operators with narrow invalid ranges, RANUM may fail to generate failure-exhibiting system tests. ### _RQ3: Fix Suggestion_ In task 3, RANUM suggests fixes for numerical defects. We compare RANUM with fixes generated by baseline approaches and with developers' fixes. #### IV-C1 **Comparison between RANUM and Baselines** RANUM is the first approach for this task, so we propose two baseline approaches to compare with. (1) RANUM-E: this approach changes the abstraction domain of RANUM from the interval domain with tensor partitioning to the standard interval domain. To some degree, RANUM-E represents the methodology of conventional static analysis tools that use the standard interval domain for the abstraction and search of effective fixes. (2) GD: this approach uses standard gradient descent for the optimization instead of the abstraction optimization technique in RANUM. **Evaluation Protocol.** We evaluate whether each approach can generate fixes that eliminate _all_ numerical defects for the DNN architecture under analysis, given the imposing locations. We consider three types of locations: on both weight and input nodes, on only weight nodes, and on only input nodes. In practice, model providers can impose fixes on weight nodes by clipping weights after a model is trained, and users can impose fixes on input nodes by clipping their inputs before loading them into the model. Since all approaches are deterministic, for each case we run only once. We say that a fix eliminates all numerical defects if and only if (1) the RANUM static analysis framework cannot detect any defects in the fixed architecture, and (2) 1,000 random samples cannot trigger any numerical failures after the fix is imposed.
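A hedged sketch of the random-sampling check in criterion (2), assuming a PyTorch module and an elementwise valid input range \([lo, hi]\); the sample count and all names are illustrative.

```python
import torch

# Sketch of criterion (2): no random valid input may trigger NaN/INF after the fix.
def survives_random_check(model, lo, hi, shape, n_samples=1000):
    with torch.no_grad():
        for _ in range(n_samples):
            x = lo + (hi - lo) * torch.rand(shape)   # uniform sample from the valid range
            if not torch.isfinite(model(x)).all():   # NaN or INF means a numerical failure
                return False
    return True
```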
Since model providers can impose fixes on weights and users impose on inputs, this finding implies that fixing numerical defects on the providers' side may be more effective than on the users' side. #### V-B2 **Comparison between RANUM and Developers' Fixes** We conduct an empirical study to compare the fixes generated by RANUM and by the developers. **Evaluation Protocol.** We manually locate GitHub repositories from which the GRIST benchmarks are constructed. Among the 79 cases, we find the repositories for 53 cases on GitHub and we study these cases. We locate the developers' fixes of the numerical defects by looking at issues and follow-up pull requests. Since RANUM suggests different fixes for different imposing locations, for each case we first determine the imposing locations from the developer's fix, and then compare with RANUM's fix for these locations. RANUM fixes are on the computational graph and developers' fixes are in the source code, so we determine to conduct code-centered comparison: RANUM fixes are considered feasible only when the fixes can be easily implemented by code (within 10 lines of code) given that developers' fixes are typically small, usually in 3-5 lines of code. In particular, our comparison is based on two criteria: (1) which fix is sound on any valid input; (2) if both are sound, which fix hurts less to model performance and utility (based on the span of imposed precondition, the larger span the less hurt). Two authors independently classify the comparison results for each case and discuss the results to reach a consensus. **Results.** We categorize the comparison results as below. 1. _(30 cases) Better than developers' fixes or no available developer's fix._ Developers either propose no fixes or use heuristic fixes, such as reducing the learning rate or using the mean value to reduce the variance. These fixes may work in practice but are unsound, i.e., cannot rigorously guarantee the elimination of the numerical defect for any training or inference data. In contrast, RANUM generates better fixes since these fixes rigorously eliminate the defect. 2. _(7 cases) Equivalent to developers' fixes._ Developers and RANUM suggest equivalent or highly similar fixes. 3. _(13 cases) No need to fix._ For these cases, there is no need to fix the numerical defect in the given architecture. There are mainly three reasons. (1) The DNN is used in the whole project with fixed weights or test inputs. As a result, although the architecture contains defects, no system failure can be caused. (2) The architecture is injected a defect as a test case for automatic tools, such as a test architecture in the TensorFlow2[27] repository. (3) The defect can be hardly exposed in practice. For example, the defect is in a Div operator where the divisor needs to be very close to zero to trigger a divide-by-zero failure, but such situation hardly happens in practice since the divisor is randomly initialized. 4. _(3 cases) Inferior than developers' fixes or RANUM-generated fixes are impractical._ In two cases, RANUM-generated fixes are inferior to developers' fixes. Human developers observe that the defective operator is Log, and its input is non-negative. So they propose to add \(10^{-6}\) to the input of Log as the fix. In contrast, RANUM can generate only a clipping-based fix, e.g., clipping the input if it is less than \(10^{-6}\). When the input is small, RANUM's fix interrupts the gradient flow from output to input while the human's fix maintains it. 
As a result, the human fix harms the model's trainability less and is therefore better than RANUM's fix. In another case, the RANUM-generated fix imposes a small span for some model weights (less than \(0.1\) for each component of that weight node). Such a small weight span strongly limits the model's expressiveness and utility. We leave solving these limitations to future work.

From the comparison results, we conclude that, for the 40 cases where numerical defects need to be fixed (excluding case C), RANUM suggests equivalent or better fixes than human developers in 37 cases. Therefore, RANUM is as effective as human developers in terms of suggesting numerical-defect fixes, and it is much more efficient since RANUM is an automatic approach.

**Guidelines for Users**. We discuss two practical questions for RANUM users. (1) _Does RANUM hurt model utility, e.g., inference accuracy?_ If no training or test data ever exposes a numerical defect, RANUM does not confirm a defect, hence no fix is generated and there is no hurt to the utility. If RANUM confirms numerical defects, whether the fix affects the utility depends on the precondition-imposing locations. If the imposing locations can be freely selected, RANUM tends to impose the fix right before the vulnerable operator, and hence the fix does not reduce inference performance. The reason is that the fix changes (by clipping) the input only when the input falls in the invalid range of the vulnerable operator. In practice, if the imposing locations cannot be freely selected and need to follow developers' requirements, our preceding empirical study shows that, in only 3 out of 40 cases, our fixes hurt the inference or training performance of the architecture more than the developers' fixes do. (2) _Should we always apply RANUM to fix any architecture?_ We can always apply RANUM, since RANUM fixes do not visibly alter the utility in most cases. Nonetheless, in deployment, we recommend first using RANUM to confirm defect feasibility. If there is no failure-exhibiting system test, we may not need to fix the architecture; otherwise, we use RANUM to generate fixes.

## V Related Work

**Understanding and Detecting Defects in DNNs**. Discovering and mitigating defects and failures in DNN-based systems is an important research topic [60, 32, 14]. Following the taxonomy in previous work [12], DNN defects sit at four levels, from bottom to top. (1) Platform-level defects. Defects can exist in real-world DL compilers and libraries; approaches exist for understanding, detecting, and testing against these defects [49, 37, 42, 52]. (2) Architecture-level defects. _Our work focuses on numerical defects, which are one type of architecture-level defect_. Automatic detection and localization approaches [50, 19] exist for other architecture-level defects, such as suboptimal structure, activation function, and initialization, as well as shape mismatch [11]. (3) Model-level defects. Once a model is trained, its defects can be viewed as violations of desired properties, as discussed by Zhang et al. [55]. Some example defects are correctness [44, 9], robustness [48], and fairness [57] defects. (4) Interface-level defects. DNN-based systems, when deployed as services, expose interaction interfaces to users where defects may exist, as shown by empirical studies on real-world systems [12, 46, 47].

**Testing and Debugging for DNNs**. A rich body of work exists for testing and debugging DNN defects [55].
Some representatives are DeepXplore [31] and DeepGauge [22]. Recent work enables automatic model debugging and repair via consistency checking [51], log checking [59], spectrum analysis [34], or analyzer-guided synthesis [41].

**DNN Static Analysis**. Another solution for eliminating DNN defects is to conduct static analysis that rigorously guarantees the non-existence of defects [17, 2]. Although DNNs essentially conduct numerical computations, traditional tools for numerical analysis [10, 39] are inefficient for DNN analysis due to their lack of support for multi-dimensional tensor computations. Recently, static analysis tools customized for DNNs have been emerging, mainly focusing on proposing tighter abstractions [6, 26, 29] or incorporating abstractions into training to improve robustness [16, 25, 62]. Besides robustness, static analysis has also been applied to rigorously bound model difference [30]. Our approach includes a static analysis framework customized for numerical-defect detection and fixing.

**Detecting and Exposing Numerical Defects in DNNs**. Despite the widespread existence of numerical defects in real-world DNN-based systems [60, 12, 14], only a few automatic approaches exist for detecting and exposing these defects. To the best of our knowledge, DEBAR [61] and GRIST [54] are the only two such approaches. We discuss and compare RANUM with both approaches extensively in Sections III and IV.

## VI Conclusion

In this paper, we have presented a novel automatic approach named RANUM for the reliability assurance of DNNs against numerical defects. RANUM supports the detection of potential numerical defects, the confirmation of potential-defect feasibility, and the suggestion of defect fixes. RANUM extends and optimizes existing tools and incorporates three novel techniques. Our extensive evaluation on real-world DNN architectures has demonstrated the high effectiveness and efficiency of RANUM compared to both state-of-the-art approaches and developers' fixes.

**Data Availability**. All artifacts, including the tool source code and experiment logs, are available and actively maintained at [https://github.com/llyly/RANUM](https://github.com/llyly/RANUM).

## Acknowledgements

This work is sponsored by the National Natural Science Foundation of China under Grant No. 62161146003, the National Key Research and Development Program of China under Grant No. 2019YFE0198100, the Innovation and Technology Commission of HKSAR under Grant No. MHP/055/19, and the Tencent Foundation/XPLORER PRIZE.
2310.07189
SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition
Event cameras are bio-inspired sensors that respond to local changes in light intensity and feature low latency, high energy efficiency, and high dynamic range. Meanwhile, Spiking Neural Networks (SNNs) have gained significant attention due to their remarkable efficiency and fault tolerance. By synergistically harnessing the energy efficiency inherent in event cameras and the spike-based processing capabilities of SNNs, their integration could enable ultra-low-power application scenarios, such as action recognition tasks. However, existing approaches often entail converting asynchronous events into conventional frames, leading to additional data mapping efforts and a loss of sparsity, contradicting the design concept of SNNs and event cameras. To address this challenge, we propose SpikePoint, a novel end-to-end point-based SNN architecture. SpikePoint excels at processing sparse event cloud data, effectively extracting both global and local features through a singular-stage structure. Leveraging the surrogate training method, SpikePoint achieves high accuracy with few parameters and maintains low power consumption, specifically employing the identity mapping feature extractor on diverse datasets. SpikePoint achieves state-of-the-art (SOTA) performance on four event-based action recognition datasets using only 16 timesteps, surpassing other SNN methods. Moreover, it also achieves SOTA performance across all methods on three datasets, utilizing approximately 0.3\% of the parameters and 0.5\% of power consumption employed by artificial neural networks (ANNs). These results emphasize the significance of Point Cloud and pave the way for many ultra-low-power event-based data processing applications.
Hongwei Ren, Yue Zhou, Yulong Huang, Haotian Fu, Xiaopeng Lin, Jie Song, Bojun Cheng
2023-10-11T04:38:21Z
http://arxiv.org/abs/2310.07189v2
# SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition ###### Abstract Event cameras are bio-inspired sensors that respond to local changes in light intensity and feature low latency, high energy efficiency, and high dynamic range. Meanwhile, Spiking Neural Networks (SNNs) have gained significant attention due to their remarkable efficiency and fault tolerance. By synergistically harnessing the energy efficiency inherent in event cameras and the spike-based processing capabilities of SNNs, their integration could enable ultra-low-power application scenarios, such as action recognition tasks. However, existing approaches often entail converting asynchronous events into conventional frames, leading to additional data mapping efforts and a loss of sparsity, contradicting the design concept of SNNs and event cameras. To address this challenge, we propose SpikePoint, a novel end-to-end point-based SNN architecture. SpikePoint excels at processing sparse event cloud data, effectively extracting both global and local features through a singular-stage structure. Leveraging the surrogate training method, SpikePoint achieves high accuracy with few parameters and maintains low power consumption, specifically employing the identity mapping feature extractor on diverse datasets. SpikePoint achieves state-of-the-art (SOTA) performance on four event-based action recognition datasets using only 16 timesteps, surpassing other SNN methods. Moreover, it also achieves SOTA performance across all methods on three datasets, utilizing approximately 0.3% of the parameters and 0.5% of power consumption employed by artificial neural networks (ANNs). These results emphasize the significance of Point Cloud and pave the way for many ultra-low-power event-based data processing applications.

## 1 Introduction

Event cameras are a recent development in computer vision that is revolutionizing how visual information is captured and processed (Gallego et al., 2020). They are particularly well-suited for detecting fast-moving objects, as they can eliminate redundant information and significantly reduce memory usage and data processing requirements. This is achieved through an innovative pixel design resulting in a sparse data output that is more efficient than that of traditional cameras (Posch et al., 2010; Son et al., 2017). However, most event data processing algorithms rely on complex and deep ANNs, which are not aligned with the low power consumption benefits of event cameras. Instead, combining event-based vision tasks with SNNs has shown great potential thanks to their highly compatible properties, especially in tasks such as action recognition (Liu et al., 2021). Spiking Neural Networks have emerged as a promising alternative that can address the limitations of traditional neural networks, owing to their remarkable biological plausibility, event-driven processing paradigm, and exceptional energy efficiency (Gerstner and Kistler, 2002). The network's asynchronous operations depend on bionic neurons, communicating information via precisely timed discrete spikes (Neftci et al., 2019). The event-driven processing paradigm enables sparse but potent computing capabilities, where a neuron activates only when it receives or generates a spike (Hu et al., 2021). This property gives the network remarkably high energy efficiency and makes it an ideal candidate for processing event-based data.
Nevertheless, existing approaches for combining event cameras with SNNs require converting asynchronous events into conventional frames for downstream processing, resulting in additional data mapping work and a loss of sparsity (Kang et al., 2020; Berlin and John, 2020). Moreover, this process loses detailed temporal information, which is critical for accurate action recognition (Innocenti et al., 2021). Therefore, developing novel SNN-compatible techniques that can operate directly on event data remains a challenge. Point Cloud is a powerful representation of 3D geometry that encodes spatial information as a sparse set of points and eliminates the need for costly image or voxel conversion, making it an efficient and ideal choice for representing event data (Qi et al., 2017a). The sparse and asynchronous event data can be viewed as a compact and informative 3D space-time representation of the scene, bearing a close resemblance to the concept of a Point Cloud (Sekikawa et al., 2019). Still, ANN Point Cloud networks require frequent high-dimensional feature transformations and complex feature extraction operators, which may not function optimally in SNNs due to their binarization and dynamic characteristics. In this paper, we introduce SpikePoint, an end-to-end framework harnessing SNNs for the effective and efficient processing of event data in vision-based tasks. Our contributions are as follows: First, we combine event-based vision tasks with SNNs by treating the input as Point Clouds rather than stacked event frames, preserving fine-grained temporal features and retaining the sparsity of raw events. Second, unlike ANN counterparts with a multi-stage hierarchical structure, we design a singular-stage feature extractor to harmoniously extract local and global features. This lightweight design achieves strong performance when trained with the surrogate back-propagation method. Lastly, we introduce a pioneering encoding approach to handle relative position data containing negative values. This scheme maintains symmetry between positive and negative values, optimizing the information representation. We evaluate SpikePoint on event-based action recognition datasets of varying scales, achieving SOTA results on the Daily DVS (Liu et al., 2021), DVS ACTION (Miao et al., 2019), and HMDB51-DVS (Bi et al., 2020) datasets, surpassing even traditional ANNs. Additionally, we attain the SNN SOTA on the DVS128 Gesture dataset (Amir et al., 2017). Notably, our evaluation encompasses an assessment of network power consumption. In comparison to both SNNs and ANNs with competitive accuracy, our framework consistently exhibits exceptional energy efficiency, both in dynamic and static power consumption, reaffirming the superiority of our network.

## 2 Related Work

### Event-based action recognition

Action recognition is a critical task with diverse applications in fields such as anomaly detection, entertainment, and security. Two primary classes of methods for event-based action recognition are ANNs and SNNs (Ren et al., 2023). The ANN approaches have yielded several notable contributions, including IBM's pioneering end-to-end gesture recognition system (Amir et al., 2017) and Cannici's asynchronous event-based fully convolutional networks (Cannici et al., 2019), while Chadha et al. developed a promising multimodal transfer learning framework for heterogeneous environments (Chadha et al., 2019). Additionally, Bi et al.
proposed a graph-based spatiotemporal feature learning framework and introduced several new datasets (Bi et al., 2020), including HMDB51-DVS. On the other hand, SNN methods have also demonstrated great potential, with Liu et al. presenting a successful model for object classification using address event representation (Liu et al., 2020) and George et al. (George et al., 2020) using multiple convolutional layers and a reservoir to extract spatial and temporal features, respectively. Liu et al. (Liu et al., 2021) further advanced the field by extracting motion information from the asynchronous discrete events captured by event cameras. While these SNNs have improved efficiency, their accuracy still falls short of ANN-based approaches.

### Point Cloud networks in ANN

Point-based methods have revolutionized the direct processing of Point Cloud data as input, with PointNet (Qi et al., 2017a) standing out as a remarkable example. PointNet++ (Qi et al., 2017b) took it a step further by introducing a set abstraction module. While it used a simple MLP as the feature extractor, numerous more advanced feature extractors have recently been developed to elevate the quality of Point Cloud processing (Wu et al., 2019; Zhao et al., 2021; Ma et al., 2021; Dosovitskiy et al., 2020). To apply these methods to the event stream, Wang et al. (Wang et al., 2019) first tackled the temporal information processing challenge while preserving the representation in both the x and y axes, achieving gesture recognition using PointNet++. PAT (Yang et al., 2019) further improved on this model by incorporating self-attention and Gumbel subset sampling, achieving even better recognition performance. Nonetheless, the current performance of point-based models still cannot compete with frame-based methods in terms of accuracy. Here, we propose SpikePoint as a solution that fully leverages the characteristics of event clouds while maintaining high accuracy, a low parameter count, and low power consumption.

## 3 SpikePoint

### Event cloud

Event streams are time-series data that record spatial intensity changes in chronological order. Each event can be represented by \(e_{m}=(x_{m},y_{m},t_{m},p_{m})\), where \(m\) is the event number, \(x_{m}\) and \(y_{m}\) denote the spatial coordinates of the event, \(t_{m}\) indicates its timestamp, and \(p_{m}\) denotes its polarity. To facilitate the effective processing of action recognition data, it is common practice to divide a sample \(AR_{raw}\) into sliding windows \(AR_{clip}\) as follows:

\[AR_{clip}=clip_{i}\left\{e_{k\longrightarrow l}\right\}\mid i\in(1,n_{win})\mid t_{l}-t_{k}=L \tag{1}\]

where \(L\) is the length of the sliding window, \(k\) and \(l\) represent the numbers of the start and end events of the \(i_{th}\) sliding window, and \(n_{win}\) is the number of sliding windows. To apply the Point Cloud method, the four-dimensional events in a \(clip\) have to be normalized and converted into three-dimensional space-time events \(AR_{point}\). A straightforward way to do this is to convert \(t_{m}\) into \(z_{m}\) and ignore \(p_{m}\):

\[AR_{point}=\left\{e_{m}=(x_{m},y_{m},z_{m})\mid m=k,k+1,\ldots,l\right\} \tag{2}\]

where \(z_{m}=\frac{t_{m}-t_{k}}{t_{l}-t_{k}},\ m\in(k,l)\), and \(x_{m},y_{m}\) are also normalized to \([0,1]\).
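As a minimal illustration of Eqs. (1)-(2), the NumPy sketch below converts the raw events of one sliding window into a normalized pseudo-Point Cloud. The function name and the \(N\times 4\) event-array layout are our own assumptions, not part of the SpikePoint codebase.

```python
import numpy as np

def events_to_point_cloud(events, t_start, L, sensor_w, sensor_h):
    """events: (N, 4) array of (x, y, t, p); returns (N', 3) points.

    Implements one window of Eq. (1) and the normalization of Eq. (2);
    polarity p is discarded as described in the text.
    """
    x, y, t, _ = events.T
    mask = (t >= t_start) & (t <= t_start + L)   # events inside the window
    x, y, t = x[mask], y[mask], t[mask]
    z = (t - t.min()) / (t.max() - t.min())      # t_m -> z_m in [0, 1]
    return np.stack([x / sensor_w, y / sensor_h, z], axis=1)
```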
After pre-processing, the event cloud is converted into a pseudo-Point Cloud, which comprises explicit spatial information \((x,y)\) and implicit temporal information \(t\). Through the pseudo-Point Cloud, SpikePoint is capable of learning the spatio-temporal features that are crucial for action recognition.

Figure 1: The overall architecture of SpikePoint. The raw event cloud is segmented by the sliding window. Then, the global Point Cloud is transformed into \(M\) groups by grouping and sampling. The coordinates are converted into spikes by rate coding, and the action recognition results are obtained by the local feature extractor, global feature extractor, and classifier in turn.

### Sampling and Grouping

To unify the number of inputs fed into the network, we randomly sample \(AR_{point}\) to construct the trainset and testset. Then, we group these points \(PN\) by the Farthest Point Sampling (\(FPS\)) method and the \(K\) Nearest Neighbor (\(KNN\)) algorithm. \(FPS\) is responsible for finding the \(Centroid\) of each group, while \(KNN\) is utilized to identify the \(N^{\prime}\) group members. This process can be abstracted as follows:

\[Centroid=FPS(PN)\quad\mathcal{G}=KNN(PN,Centroid,N^{\prime}) \tag{3}\]

The grouping of \(PN\) into \(\mathcal{G}\) transforms the data dimension from \([N,3]\) to \([N^{\prime},M,3]\), where \(N\) is the number of input points, \(M\) is the number of groups, and \(N^{\prime}\) is the number of points in each group. The coordinates \([x,y,z]\) of the points within each group exhibit similarity and lack distinctiveness, which is particularly problematic for SNN rate encoding. The standardization employed to compute the relative positions \([\Delta x,\Delta y,\Delta z]\) of the points with respect to their \(Centroid\) is as follows:

\[[\Delta x,\Delta y,\Delta z]=\frac{\mathcal{G}-Centroid}{SD(\mathcal{G})}\sim N(0,1),\quad SD(\mathcal{G})=\sqrt{\frac{\sum_{i=1}^{n}(g_{i}-\bar{g})^{2}}{n-1}}\quad g_{i}\in\mathcal{G} \tag{4}\]

where \([\Delta x,\Delta y,\Delta z]\) adheres to the standard Gaussian distribution \(N(0,1)\), \(SD\) is the standard deviation of \(\mathcal{G}\), and \(g=[x_{0},y_{0},t_{0},\ldots,x_{n},y_{n},t_{n}]\). Finally, we concatenate the relative positions \([\Delta x,\Delta y,\Delta z]\) and the \(Centroid\) \([x_{c},y_{c},z_{c}]\) as the final grouping result. After grouping, we rate encode the Point Cloud coordinates to meet the SNN's requirement of binary input. It is worth mentioning that the relative distance information \([\Delta x,\Delta y,\Delta z]\) contains both positive and negative values, as shown in Fig. 2. While ANNs can handle such information thanks to their floating-point operations, SNNs employ rate coding and binarization and cannot process negative values. It is thus necessary to develop a method that effectively handles such information in SNNs to achieve accurate results. A straightforward approach is to normalize \([\Delta x,\Delta y,\Delta z]\) to \([0,1]\), but this leads to **asymmetric information for points equidistant from the centroid**, resulting in limited accuracy. A detailed accuracy comparison is summarized in Table 8 and discussed later. Alternatively, we take the absolute value of the numerator of Eq. 4 to obtain \([\Delta|x|,\Delta|y|,\Delta|z|]\), which can undergo the spike transformation.
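The following NumPy sketch illustrates the grouping pipeline of Eqs. (3)-(4) together with the absolute-value trick. It is our simplified reading of the procedure (a brute-force FPS and KNN, with function names of our own choosing), not SpikePoint's released implementation.

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Brute-force FPS: pick m well-spread centroid indices from (N, 3) points."""
    n = points.shape[0]
    idx = np.zeros(m, dtype=int)
    idx[0] = np.random.randint(n)
    dist = np.full(n, np.inf)
    for i in range(1, m):
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[i - 1]], axis=1))
        idx[i] = dist.argmax()
    return idx

def sample_and_group(points, m, k):
    """Eq. (3): FPS centroids + KNN members; Eq. (4): standardized offsets."""
    centroids = points[farthest_point_sampling(points, m)]               # (M, 3)
    d = np.linalg.norm(points[None, :, :] - centroids[:, None, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, :k]                                   # (M, K)
    groups = points[knn]                                                 # (M, K, 3)
    rel = (groups - centroids[:, None, :])
    rel = rel / groups.std(axis=1, keepdims=True, ddof=1)                # Eq. (4)
    return np.abs(rel), centroids  # |dx|,|dy|,|dz| for rate coding, plus centroids
```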
However, after such processing, the direction of the relative distance information is lost, and the distribution of the data changes from the standard normal distribution to a folded normal distribution. The probability density function is:

\[f(x;\mu,\delta^{2})=\frac{1}{\sqrt{2\pi}\delta}e^{-\frac{(x-\mu)^{2}}{2\delta^{2}}}\rightarrow\frac{1}{\sqrt{2\pi}\delta}(e^{-\frac{(x-\mu)^{2}}{2\delta^{2}}}+e^{-\frac{(x+\mu)^{2}}{2\delta^{2}}})\quad(x\geq 0) \tag{5}\]

where \(\mu\) and \(\delta\) are the mean and standard deviation of the Gaussian distribution, respectively. The expectation \(\dot{\mu}\) after taking the absolute value is given in Eq. 6, where \(erf(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-t^{2}}dt\) is the error function.

\[\dot{\mu}=\sqrt{\frac{2}{\pi}}\delta e^{-\frac{\mu^{2}}{2\delta^{2}}}+\mu\cdot[1-2\phi(-\frac{\mu}{\delta})],\quad\phi(x)=\frac{1}{2}[1+erf(\frac{x}{\sqrt{2}})] \tag{6}\]

By Eqs. 4-6, we maintain codability and shift the expectation of the coordinates to the larger value \(\sqrt{\frac{2}{\pi}}\), which is calculated in Appendix A.1. However, the data distribution has also changed and needs to be compensated for. To do so, we replace the input \([\Delta x,\Delta y,\Delta z,x_{c},y_{c},z_{c}]\) mentioned above with \([\Delta|x|,\Delta|y|,\Delta|z|,x_{min},y_{min},z_{min}]\), where \([x_{min},y_{min},z_{min}]\) are the smallest \(x,y,z\) values in a specific group. Hence, the increase in the first three input components is compensated by the decrease in the last three, and the input remains balanced. It is worth noting that the \(Centroid\) of each sphere is important, and \([x_{min},y_{min},z_{min}]\) is not a good indicator of the \(Centroid\); we therefore introduce a separate branch to extract the global information contained in the \(Centroid\) and fuse the two features in the middle of the network.

Figure 2: Visualization of our grouping method. (a) The different spatial positions of \([x_{c},y_{c},z_{c}]\), \([x_{min},y_{min},z_{min}]\) and \([x,y,z]\). (b) The transformation of the distribution after taking the absolute value.

After implementing the aforementioned modifications to the sampling and grouping module, our analysis reveals a significant decrease in both the mean relative error (MRE) of rate coding and the coefficient of variation (CV) of the data. This reduction in MRE and CV explains the efficacy of our proposed methodology, as evidenced by the results. For the step-by-step derivation of the formulas and the validation on the datasets, please consult Appendices A.2 and A.3.

### Singular stage structure

The SpikePoint model is unique in its use of a singular-stage structure, as shown in Fig. 1, in contrast to the hierarchical structure employed by all other ANN-based Point Cloud networks. While this hierarchical paradigm has become the standard design approach for ANNs, it is not readily applicable to SNNs, because spike-based features tend to become sparse and indistinguishable as the stage depth increases, and back-propagation-based training suffers from serious gradient problems. In light of this, we develop **a novel, streamlined network architecture that effectively incorporates the properties of SNNs**, resulting in a simple yet highly efficient model that is adept at abstracting local and global geometric features. The specific dimensional changes and SpikePoint's algorithm can be found in Appendix A.6 and Algorithm A.6.
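To make the singular-stage pipeline concrete before the individual units are introduced, here is a high-level PyTorch skeleton of the forward pass as we read Fig. 1; the module and argument names are our own assumptions rather than SpikePoint's published code.

```python
import torch.nn as nn

class SpikePointSketch(nn.Module):
    """Singular-stage pipeline: rate-coded groups -> local -> global -> classifier."""

    def __init__(self, local_extractor, global_extractor, classifier):
        super().__init__()
        self.local = local_extractor   # intra-group features (Sec. 3.3.2)
        self.glob = global_extractor   # inter-group features (Sec. 3.3.3)
        self.cls = classifier          # voting-based action classifier

    def forward(self, rel_spikes, centroid_spikes):
        # rel_spikes: rate-coded [|dx|, |dy|, |dz|, x_min, y_min, z_min] branch
        # centroid_spikes: rate-coded [x_c, y_c, z_c] branch
        f_local = self.local(rel_spikes, centroid_spikes)
        f_global = self.glob(f_local)
        return self.cls(f_global)
```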
#### 3.3.1 Basic Unit

Next, the basic unit of the feature extractor is introduced. The first component is a discrete representation of LIF neurons. We abstract the spike train into the following equation:

\[S_{j}(t)=\sum_{s\in C_{j}}\gamma(t-s),\quad\gamma(x)=\theta(U(t,x)-V_{th}) \tag{7}\]

where \(S\) represents the input or output spike train, \(C\) is the set of moments when spikes are emitted, \(j\) indexes the \(j^{th}\) input of the current neuron, \(\gamma\) is the spike function, \(\theta\) is the Heaviside step function, and \(V_{th}\) denotes the threshold of the neuron's membrane potential; \(\gamma=1\) if \(U(t,x)-V_{th}\geq 0\) and \(\gamma=0\) otherwise. We define the synaptic summation in the SNN using the following equation:

\[I[n]=e^{-\frac{\Delta t}{\tau_{syn}}}I[n-1]+\sum_{j}W_{j}S_{j}[n] \tag{8}\]

This equation delineates the mechanism of neuronal synaptic summation: \(I\) denotes the current neuron's input, \(\tau_{syn}\) is the synapse time constant, \(\Delta t\) is the simulation timestep, \(n\) is the discrete timestep, and \(j\) indexes the presynaptic neurons.

\[U[n+1]=e^{-\frac{\Delta t}{\tau_{mem}}}U[n]+I[n]-S[n] \tag{9}\]

where \(U\) denotes the neuron's membrane potential, \(\tau_{mem}\) is the membrane time constant, and \(S\) signifies the membrane potential reset after the neuron transmits a spike.

SNNs are prone to gradient explosion and vanishing during training via backpropagation through time (BPTT), owing to the unique properties of their neurons described above. In addition to conventional regularization techniques, such as dropout and batch normalization, we incorporate a residual module to address overfitting and poor training outcomes. We achieve identity mapping by changing the residual module to the following equation, following (Hu et al., 2021; Fang et al., 2021; Feng et al., 2022); the coefficient \(\sigma^{\prime}(I_{i}^{l+m-1}+S_{j}^{l})\) of the corresponding residual term in the error propagation (Eq. 29) is thereby canceled. A detailed derivation of how this connection alleviates the backpropagation problem can be found in Appendix A.4.

\[S^{l}=LIF(I+S^{l-1})\longrightarrow LIF(I)+S^{l-1} \tag{10}\]

We define two basic units, \(ResFB\) and \(ResF\), with and without a bottleneck, respectively, as depicted in Fig. 1, and incorporate them into the overall architecture. Owing to the specificity of Point Cloud networks, feature dimension lifting is usually performed with one-dimensional convolutions. To preserve the advantage of the residual module, we leave the features on the residual connection untouched; as a result, the module's input and output dimensions remain the same.

#### 3.3.2 Local feature Extractor

The local feature extractor plays a crucial role in abstracting features within each Point Cloud group, similar to a convolutional neural network with a small receptive field. As each point is processed, it is imperative to minimize the depth and width of the extractor to ensure SpikePoint's efficiency. To this end, we employ the \(ResFB\) unit with a bottleneck to streamline the network design, as captured by the following equations, resulting in more advanced extracted features.
\[X_{1}=[\Delta|x|,\Delta|y|,\Delta|z|,x_{min},y_{min},z_{min}],\quad X_{2}=[x_{c},y_{c},z_{c}] \tag{11}\]

\[F_{l1}=ResFB(Conv1D(X_{1})) \tag{12}\]

\[F_{l2}=ResFB(Conv1D(X_{2})) \tag{13}\]

\[F_{local}=MaxPool(F_{l1})+F_{l2} \tag{14}\]

We evaluate both the concat and add operations for the two-channel feature fusion in the ablation study section.

#### 3.3.3 Global feature Extractor

The local feature extractor aggregates point-to-point features within each group into intra-group feature vectors, while the global feature extractor further abstracts the relationships between groups into inter-group feature tensors. The dimension of the input to the final classifier is intimately linked with the width of the global feature extractor. Thus, it is crucial to let the feature extractor expand its width as much as possible within a limited depth, while ensuring that the extracted features are highly abstract. The feature extractor can be formulated as follows:

\[L(x)=ResF(Conv1D(x)) \tag{15}\]

\[F_{m}=L_{2}(L_{1}(F_{local})) \tag{16}\]

\[F_{global}=MaxPool(Conv1D(F_{m})) \tag{17}\]

The final extracted features, denoted \(F_{global}\), are passed to the classifier (Appendix A.7.3) for action recognition. To accommodate both small and large datasets, we utilize two distinct ascending dimensionality scales while keeping the architecture consistent; the specific feature dimensions are illustrated in Appendix Fig. 8.

## 4 Experiment

### Dataset

We evaluate SpikePoint on four event-based action recognition datasets of different scales; more details about the datasets are presented in Appendix A.5. These datasets have practical applications and are valuable for research in neuromorphic computing.

### Preprocessing

The sliding window length is set to 0.5 s, 1 s, or 1.5 s, as shown in Table 1. The overlap of adjacent windows is set to 0.25 s for the DVS128 Gesture and DVS Action datasets and 0.5 s for the other datasets. The testset consists of 20% of the total samples, selected randomly.

\begin{table} \begin{tabular}{c c c c} \hline \hline DataSet & Classes & Sensor & Avg length \\ \hline DVS128 Gesture (Amir et al., 2017) & 10/11 & DAVIS128 & 6.52 s \\ Daily DVS (Liu et al., 2021) & 12 & DAVIS128 & 3 s \\ DVS Action (Miao et al., 2019) & 10 & DAVIS346 & 5 s \\ HMDB51-DVS (Bi et al., 2020) & 51 & DAVIS240 & 8 s \\ \hline Train & Test & Sliding Window & Overlap \\ \hline 26796 & 66959 & 0.5 s & 0.25 s \\ 2916 & 720 & 1.5 s & 0.5 s \\ 912 & 116 & 0.5 s & 0.25 s \\ 30720 & 3840 & 0.5 s & 0.5 s \\ \hline \hline \end{tabular} \end{table} Table 1: Specific information on the four datasets.

\begin{table} \begin{tabular}{c c c c} \hline \hline Name & Method & Param & Acc \\ \hline TBR+3D (Innocenti et al., 2021) & ANN & 12.25 M & 99.6\% \\ PointNet++ (Qi et al., 2017b) & ANN & 1.48 M & 95.3\% \\ EV-VGCNN (Deng et al., 2021) & ANN & 0.82 M & 95.7\% \\ VMV-GCN (Xie et al., 2022) & ANN & 0.86 M & 97.5\% \\ \hline SEW-ResNet (Fang et al., 2021) & SNN & - & 97.9\% \\ Deep SNN (16 layers) (Amir et al., 2017) & SNN & - & 91.8\% \\ Deep SNN (Shrestha \& Orchard, 2018) & SNN & - & 93.6\% \\ Conv-RNN SNN (Xing et al., 2020) & SNN & - & 92.0\% \\ Conv+Reservoir SNN (George et al., 2020) & SNN & - & 65.0\% \\ HMAX-based SNN (Liu et al., 2020) & SNN & - & 70.1\% \\ Motion-based SNN (Liu et al., 2021) & SNN & - & 92.7\% \\ \hline **SpikePoint** & **SNN** & **0.58 M** & **98.74\%** \\ \hline \hline \end{tabular} \end{table} Table 2: SpikePoint’s performance on DVS128 Gesture.
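As a small illustration of the preprocessing in Sec. 4.2, the sketch below enumerates overlapping sliding windows over an event stream's timestamps; the helper name and interface are hypothetical, with the 0.5 s/0.25 s setting taken from Table 1.

```python
import numpy as np

def sliding_windows(timestamps, length=0.5, overlap=0.25):
    """Return (start, end) pairs of overlapping windows covering the stream."""
    stride = length - overlap  # adjacent windows share `overlap` seconds
    starts = np.arange(timestamps.min(), timestamps.max() - length + stride, stride)
    return [(s, s + length) for s in starts]
```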
### Network Structure

In this paper, a relatively small network is used for Daily DVS and DVS Action, and a larger network is used for the other two datasets. It should be emphasized that the architectures are identical, with only a shift in dimensionality in the feature extraction process, as presented in Appendix A.7.

### Results

**DVS128 Gesture:** The DVS128 Gesture dataset has been extensively evaluated by numerous algorithms, serving as a benchmark for assessing their efficiency and effectiveness. Compared to the other SNN methods, SpikePoint achieves SOTA results with an accuracy of 98.74%, as shown in Table 2. Among the ANN methods, SpikePoint demonstrates superior accuracy compared to lightweight ANNs such as EV-VGCNN and VMV-GCN, despite the latter employing larger numbers of parameters. This indicates that Point Clouds' rich and informative nature may confer an advantage over voxels in conveying complex and meaningful information. The current ANN SOTA utilizes nearly 21 times more parameters than SpikePoint for a 0.86% increase in performance. Additionally, SpikePoint divides a test event stream into many test subsets through the sliding window. During training, SpikePoint's accuracy on the test subsets is 97.1%; the result for a test stream is obtained by counting and voting over the test subsets of the sequence.

**Daily DVS:** SpikePoint achieves SOTA on this dataset with 97.92% accuracy on the testset and outperforms all ANN and SNN networks. Notably, it uses only 0.3% of the parameters of the high-performing ANN, as shown in Table 3. Upon visualizing the dataset, we find that it has relatively low noise and that the events are evenly distributed in the time domain; the 1024 points obtained by random sampling capture the action description well. With this in mind, we conduct the ablation study on this dataset, including the processing of negative values (absolute value or [0,1] normalization), the utilization of dual-channel inputs, and the techniques employed for feature fusion, which are discussed in detail in the ablation study.

**DVS Action:** SpikePoint also obtains SOTA results on this dataset, as shown in Table 4. In the same way, we visualize the aedat files in the dataset and find that most of the data have heavy background noise, causing us to collect many noise points during random sampling. We therefore preprocess the dataset and finally achieve 90.6% accuracy. This reflects a problem: random sampling will not pick up useful points when there is too much background noise. More reasonable sampling strategies remain to be explored.

**HMDB51-DVS:** Compared with the first three datasets, which have relatively few categories, HMDB51-DVS has 51 different categories. Our experiments show that SpikePoint excels not only on small-scale datasets but also adapts impressively to larger ones. As shown in Table 5, SpikePoint outperforms all ANN methods despite using very few parameters, indicating excellent generalization capabilities.

\begin{table} \begin{tabular}{c c c c} \hline Name & Method & Param & Acc \\ \hline ST-EVNet (Wang et al., 2020) & ANN & 1.6 M & 88.7\% \\ PointNet (Qi et al., 2017a) & ANN & 3.46 M & 75.1\% \\ Deep SNN (Li et al., 2019) & SNN & - & 71.2\% \\ HMAX-based SNN (Xiao et al., 2019) & SNN & - & 55.0\% \\ Motion-based SNN (Liu et al., 2021a) & SNN & - & 78.1\% \\ \hline **SpikePoint** & **SNN** & **0.16 M** & **90.6\%** \\ \hline \end{tabular} \end{table} Table 4: Model’s performance on DVS ACTION.

\begin{table} \begin{tabular}{c c c c} \hline Name & Method & Param & Acc \\ \hline I3D (Carreira and Zisserman, 2017) & ANN & 49.19 M & 96.2\% \\ TAN (Liu et al., 2021b) & ANN & 24.8 M & 96.5\% \\ VMV-GCN (Xie et al., 2022) & ANN & 0.84 M & 94.1\% \\ TimeSformer (Bertasius et al., 2021) & ANN & 121.27 M & 90.6\% \\ HMAX-based SNN (Xiao et al., 2019) & SNN & - & 68.3\% \\ HMAX-based SNN (Liu et al., 2020) & SNN & - & 76.9\% \\ Motion-based SNN (Liu et al., 2021a) & SNN & - & 90.3\% \\ \hline **SpikePoint** & **SNN** & **0.16 M** & **97.92\%** \\ \hline \end{tabular} \end{table} Table 3: SpikePoint’s performance on Daily DVS.

It should be added that the SpikePoint models used for DVS Gesture and HMDB51-DVS are both the larger model, and the difference in parameter counts is only due
It should be added that the SpikePoint used for DVS Gesture and HMDB51-DVS is both the larger model and the number of parameter differences are only due \begin{table} \begin{tabular}{c c c c} \hline Name & Method & Param & Acc \\ \hline ST-EVNet (Wang et al., 2020) & ANN & 1.6 M & 88.7\% \\ PointNet (Qi et al., 2017a) & ANN & 3.46 M & 75.1\% \\ Deep SNN (Iéyes) with (Li et al., 2019) & SNN & - & 71.2\% \\ HMAX-based SNN (Xiao et al., 2019) & SNN & - & 55.0\% \\ Motion-based SNN (Liu et al., 2021a) & SNN & - & 78.1\% \\ \hline **SpikePoint** & **SNN** & **0.16 M** & **90.6\%** \\ \hline \end{tabular} \end{table} Table 4: Model’s performance on DVS ACTION. \begin{table} \begin{tabular}{c c c c} \hline Name & Method & Param & Acc \\ \hline T3D(Carreira and Zisserman, 2017) & ANN & 49.19 M & 96.2\% \\ TAN(Liu et al., 2021b) & ANN & 24.8 M & 96.5\% \\ VMV-GCN (Xie et al., 2022) & ANN & 0.84 M & 94.1\% \\ TimeSformer (Bertsins et al., 2021) & ANN & 121.27 M & 90.6\% \\ HMAX-based SNN (Xiao et al., 2019) & SNN & - & 68.3\% \\ HMAX-based SNN (Liu et al., 2020) & SNN & - & 76.9\% \\ Motion-based SNN (Liu et al., 2021a) & SNN & - & 90.3\% \\ \hline **SpikePoint** & **SNN** & **0.16 M** & **97.92\%** \\ \hline \end{tabular} \end{table} Table 3: SpikePoint’s performance on Daily DVS. \begin{table} \begin{tabular}{c c c} \hline Name & Method & Param & Acc \\ \hline T3D(Carreira and Zisserman, 2017) & ANN & 49.19 M & 96.2\% \\ TAN(Liu et al., 2021b) & ANN & 24.8 M & 96.5\% \\ VMV-GCN (Xie et al., 2022) & ANN & 0.84 M & 94.1\% \\ TimeSformer (Bertsins et al., 2021) & ANN & 121.27 M & 90.6\% \\ HMAX-based SNN (Xiao et al., 2019) & SNN & - & 68.3\% \\ HMAX-based SNN (Liu et al., 2020) & SNN & - & 76.9\% \\ Motion-based SNN (Liu et al., 2021a) & SNN & - & 90.3\% \\ \hline **SpikePoint** & **SNN** & **0.16 M** & **97.92\%** \\ \hline \end{tabular} \end{table} Table 3: SpikePoint’s performance on Daily DVS. to the different categories classified by the final classifier. As the final output layer is a voting layer, the difference in parameter quantities is 0.21 M. ### Power Consumption We assume that multiply-and-accumulate (MAC) and accumulate (AC) operations are implemented on the 45 nm technology node with \(V_{DD}=0.9~{}V\)(Horowitz, 2014), where \(E_{MAC}=4.6~{}pJ\) and \(E_{AC}=0.9~{}pJ\). We calculate the number of synaptic operations (SOP) of the spike before calculating the theoretical dynamic energy consumption by \(SOP=firerate\times T\times FLOPs\), where \(T\) is the timesteps, and FLOPs is float point operations per sample. OPs in Table 6 refer to SOPs in SNNs and FLOPs in ANNs. The dynamic energy is calculated by \(Dynamic=OPs\times E_{MAC\,or\,AC}\) and the static energy is calculated by \(Static=Para.\times spp\times L_{sample}\), where spp is the static power per parameter in SRAM and \(L_{sample}\) is the sample time length, assuming always-on operation. The static power consumption of 1bit SRAM is approximately 12.991 pW in the same technology node (Saun & Kumar, 2019). Spikepoint's power consumption exhibits substantial advantages over ANNs and also surpasses the performance of existing SNNs. ### Ablation Study \(ResF\) **ablation:** To verify the efficacy of the proposed feature extractor \(ResF\), we conduct ablation experiments on the DVS ACTION dataset by varying only the feature extractor variable, keeping the overall architecture, hyperparameters, and datasets consistent. As depicted in Fig. 
3(a), three groups of experiments are conducted: the first group utilizes the residual structure with the same architecture as ANN, the second group uses the SpikePoint model without residual connection, and the third group employs the SpikePoint. The results of experiments are marked with blue, orange, and gray, respectively, in Fig. 3(b). The curves demonstrate the superiority of the \(ResF\) block, and the results zoom in on the accuracy curves at the beginning and stabilization periods of the model. SpikePoint converges fastest at the beginning of training and has the highest accuracy after stabilization. Compared with the model without residual, it can be proved that this module can improve the model's fitting ability. SpikePoint is better in convergence speed and convergence accuracy than copying the ANN scheme. **Structural ablation:** In the structure ablation experiment, we conduct four comparison experiments. The first group involves the standard SpikePoint. The second group is the network with only a local feature extractor. The third group focuses solely on global feature extraction of the \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Model** & **Input** & **Timestep** & **Accuracy(\%)** & **OPN(\%)** & **Dynamic(mJ)** & **Para.(M)** & **Static(mJ)** \\ \hline **SpikePoint** & **Point** & **16** & **98.7** & **0.9** & **0.82** & **0.58** & **0.756** \\ \hline **SPW-ResNet(UBS)(Zheng et al., 2021) & Frame & 40 & 96.9 & 4.79 & 4.56 & 11.7 & 15.305 \\ Spiking(Zheng et al., 2021) & Frame & 16 & **98.3** & 3.72 & 4.26 & 2.6 & 3.401 \\ Spiking(Zhao et al., 2021) & Frame & 16 & 97.9 & 6.33 & 10.75 & 2.6 & 3.401 \\ Deep SNN(6)(Aunz et al., 2017) & Frame & 16 & 91.8 & 2.74 & 2.49 & 1.7 & 2.223 \\ Deep SNN(6)(Sheretha & Orchard, 2018) & Frame & 16 & 93.6 & 2.13 & 1.94 & 1.3 & 1.7 \\ \hline **PILF (Fang et al., 2019)** & Frame & 20 & 97.6 & 2.98 & 2.71 & 17.4 & 22.759 \\ \hline **PILF(Jann Point Cloud set without any grouping or sampling operations, as shown in Fig. 4(a). The fourth group is the SNN version of PointNet, which with an architecture that is essentially identical to the third group except for differing dimensions of \([32,64,128,256]\) and \([64,128,256,512,1024]\), respectively. The results provide evidence of the effectiveness of combining local and global features. As depicted in Fig. 4(b), the gray and orange lines represent architectures with only global and local feature extractors, respectively. Not surprisingly, these models perform poorly on the DVS Action dataset, with an accuracy of only 40%. To compare with PointNet, we adjust the dimensionality of each stage to be consistent with it, and the resulting training curve is indicated by the yellow color. We observe that the model shows relatively better results when the dimensionality reaches 1024. Timestep ablation:The timestep serves as a crucial hyperparameter profoundly influencing the accuracy and power consumption in SNN. The choice of the key parameter is a result of the controlled experiment with varying timestep. We do seven sets of comparison experiments on Daily DVS and DVS Action respectively, and the results are shown as the accuracy of the testset in Table 7. Grouping ablation:To overcome the challenge of rate encoding input with negative values, we introduce five relevant variables, each corresponding to the number of columns in Table 8. We then conducted several experiments to verify their validity. 
The results demonstrate the effectiveness of taking absolute values of the input, which is superior to [0,1] normalization (rows 2 and 5 in Table 8). Moreover, \([x_{min},y_{min},z_{min}]\) has a certain corrective effect (rows 2 and 3, rows 6 and 8). However, the \(centroid\) affects the results more than \([x_{min},y_{min},z_{min}]\) in the single-channel setting (rows 4 and 5); the separate branch compensates for this (rows 4 and 6). The dual-path structure is better than the single path (rows 5 and 6), and the experiments show that the \(Add\) operation is better than the \(Concat\) operation (rows 6 and 7).

## 5 Conclusion

In this paper, we introduce a fully spiking event-based network that effectively matches the data characteristics of event cameras, achieving low power consumption and high accuracy. SpikePoint, with its singular-stage structure, is capable of extracting both global and local features, and it has demonstrated excellent results on four event-based action recognition datasets, trained via back-propagation rather than by converting ANNs to SNNs. Going forward, we aim to extend the applicability of SpikePoint to other fields of event-based research, such as SLAM and multimodality (Ding et al., 2023; Ren et al., 2021).

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Time steps & 2 & 4 & 8 & 12 & **16** & 24 & 32 \\ \hline DailyDVS Acc.(\%) & 92.75 & 95.17 & 96.53 & 97.22 & **97.92** & 96.7 & 96.01 \\ DVS Action Acc.(\%) & 60.93 & 71.88 & 80.09 & 85.65 & **90.6** & 88.39 & 81.03 \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study of SNN’s timesteps.

Figure 4: Structural ablation experiment (a) and the result (b) on DVS ACTION dataset.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline No. & Absolute & \([x_{min}...]\) & \([x_{c}...]\) & Branch & Fusion & Performance \\ \hline 1 & \(\times\) & \(\times\) & ✓ & single & – & 97.22\% \\ 2 & \([0,1]\) & \(\times\) & ✓ & single & – & 96.53\% \\ 3 & \([0,1]\) & ✓ & \(\times\) & single & – & 97.36\% \\ 4 & ✓ & ✓ & \(\times\) & single & – & 97.63\% \\ 5 & ✓ & \(\times\) & ✓ & single & – & 97.78\% \\ \hline 6 & ✓ & ✓ & \(\times\) & **double** & **Add** & **97.92\%** \\ 7 & ✓ & ✓ & \(\times\) & double & Concat & 97.50\% \\ 8 & ✓ & \(\times\) & ✓ & double & Add & 97.50\% \\ 9 & \(\times\) & \(\times\) & ✓ & double & Add & 97.22\% \\ 10 & [0,1] & \(\times\) & ✓ & double & Add & 96.25\% \\ \hline \hline \end{tabular} \end{table} Table 8: Ablation study on grouping in Daily DVS dataset.
2307.12639
Fake News Detection Through Graph-based Neural Networks: A Survey
The popularity of online social networks has enabled rapid dissemination of information. People now can share and consume information much more rapidly than ever before. However, low-quality and/or accidentally/deliberately fake information can also spread rapidly. This can lead to considerable and negative impacts on society. Identifying, labelling and debunking online misinformation as early as possible has become an increasingly urgent problem. Many methods have been proposed to detect fake news including many deep learning and graph-based approaches. In recent years, graph-based methods have yielded strong results, as they can closely model the social context and propagation process of online news. In this paper, we present a systematic review of fake news detection studies based on graph-based and deep learning-based techniques. We classify existing graph-based methods into knowledge-driven methods, propagation-based methods, and heterogeneous social context-based methods, depending on how a graph structure is constructed to model news related information flows. We further discuss the challenges and open problems in graph-based fake news detection and identify future research directions.
Shuzhi Gong, Richard O. Sinnott, Jianzhong Qi, Cecile Paris
2023-07-24T09:30:30Z
http://arxiv.org/abs/2307.12639v1
# Fake News Detection through Graph-based Neural Networks: A Survey ###### Abstract The popularity of online social networks has enabled rapid dissemination of information. People now can share and consume information much more rapidly than ever before. However, low-quality and/or accidentally/deliberately fake information can also spread rapidly. This can lead to considerable and negative impacts on society. Identifying, labelling and debunking online misinformation as early as possible has become an increasingly urgent problem. Many methods have been proposed to detect fake news including many deep learning and graph-based approaches. In recent years, graph-based methods have yielded strong results, as they can closely model the social context and propagation process of online news. In this paper, we present a systematic review of fake news detection studies based on graph-based and deep learning-based techniques. We classify existing graph-based methods into knowledge-driven methods, propagation-based methods, and heterogeneous social context-based methods, depending on how a graph structure is constructed to model news related information flows. We further discuss the challenges and open problems in graph-based fake news detection and identify future research directions. Fake News Detection, Social Media, Propagation Graphs, Graph Neural Networks.

## I Introduction

Online social network platforms, such as Twitter and Reddit, offer immense convenience for users to share and consume content related to their daily lives. However, these platforms also facilitate the rapid, low-cost dissemination of rumors and/or fake news. Maliciously created fake news can have a significant negative impact on society, particularly during major events such as national elections or pandemics. To combat fake news and reduce its detrimental effects, researchers have proposed various methods for automated fake news detection and classification. The advancement of deep learning techniques has ushered in a new era of fake news detection, with a large number of deep learning-based methods, particularly graph-based methods, being proposed. In this paper, we present a systematic review of such methods. Given a news article comprising various content and contextual information, the task of fake news detection involves determining the veracity of the news and ideally automatically classifying it as either fake or real. Different approaches have analysed the news textual content and then expanded to gather information from related entities. This includes not only explicitly related entities such as users who disseminate or comment on/reply to the news but also implicitly related entities, such as other news articles on similar topics. The intricate, non-Euclidean relationships among such different entities related to a given news article can be represented using graph modeling techniques. For example, BiGCN [1] collects all the comments and retweets of a news item on the Twitter platform and constructs a propagation graph to model the news spreading process; it then uses a GCN to encode the graph and classifies the news item as fake or real based on the graph representation. This has led to significant advancements in graph-based fake news detection in recent years. Graph modeling has been shown to offer immense potential in capturing non-Euclidean relationships between entities for fake news detection.
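As a minimal sketch of this propagation-based pipeline (our illustration, not BiGCN's actual implementation), the following PyTorch snippet classifies a single news item from its propagation graph, given node features X for the source post and its comments/retweets and an adjacency matrix A:

```python
import torch
import torch.nn as nn

class PropagationGCN(nn.Module):
    """One GCN layer + mean pooling over a news item's propagation graph."""

    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.w = nn.Linear(in_dim, hid_dim)
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, X, A):
        # Symmetrically normalized adjacency: D^{-1/2} (A + I) D^{-1/2}
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1)
        A_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])
        h = torch.relu(self.w(A_norm @ X))  # one round of neighbour aggregation
        return self.cls(h.mean(dim=0))      # graph-level fake/real logits
```

Here each node could carry, e.g., a text embedding of the corresponding post; BiGCN additionally processes the graph in both top-down and bottom-up directions, which this sketch omits.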
However, despite the emergence and promise of graph-based methods, there has not yet been a systematic review of the background, progress, and developing trends in this area. This survey aims to fill this gap. There are a number of existing surveys on fake news detection. However, the majority of works (e.g., [2, 3, 4, 5]) focus on providing an overview of the entire field of fake news detection, and graph-based deep learning methods are either not mentioned at all [2] or only briefly described [3, 4, 5]. This lack of coverage creates a mismatch given the recent surge in the use of graph-based methods for fake news detection and the impressive results they achieve. One survey [6] exists on fake news detection using Graph Convolutional Networks (GCN), but it only covers a few graph-based studies, since GCN is just one particular graph-based deep learning technique. Existing surveys typically provide a coarse-grained taxonomy of fake news detection studies, e.g., content- or social context-based approaches [2], whilst some methods take a hybrid approach considering both content and context. As such, a more fine-grained and nuanced taxonomy is needed. In this survey, we categorise graph-based fake news detection methods into three groups based on the underlying graph modelling methods: _knowledge-driven methods_, _propagation-based methods_, and _heterogeneous social context-based methods_, as illustrated in Fig. 1. Knowledge-driven methods leverage entities found in news content, e.g., concepts and named entities, to identify fake news. Natural language processing (NLP) techniques can be employed to pre-process the text content and extract such entities. Subsequently, a knowledge graph can be constructed to reflect the relationships among such entities, serving as a computational representation of the news content. This graph is then encoded into graph embeddings using graph modeling techniques, and the embeddings are fed into a classifier to classify a given news article as fake or real. Sometimes, the direct (internal) knowledge extracted from the news content may not be sufficient, however, and hence external knowledge sources (e.g., Wikipedia) may need to be used. Entities in the news content can be linked to such external sources through entity linking [7] to obtain more comprehensive information. Both internal and external knowledge can then be combined for fake news detection. As a result, knowledge-driven methods can be further categorised into internal and external knowledge-based methods. Propagation-based methods focus on the dissemination process of news articles. Throughout this process, many users typically interact by posting, reposting or replying to a given article. These users and their interactions form a tree (or graph) structure. By examining the news propagation structure and the trustworthiness of the users within the propagation network, the potential veracity of a given news article can be inferred. Heterogeneous social context-based methods extract the context of the source news, such as other posts from the same user, together with other news items on the same topic. Such a social context can also be represented with a graph that helps the fake news classification process. One key difference between propagation-based methods and heterogeneous social context-based methods is the process whereby graph modelling is applied.
When a graph is used to model the propagation process of individual news items, it implies a propagation-based method. When a graph is applied to model a larger social context involving multiple news articles and users interacting on different or related news articles, a heterogeneous social context-based method is required. In this case, the veracity of multiple news articles needs to be inferred from the graph. Fig. 2 visualises and summarises the differences among the three types of methods. Overall, the contributions of this survey are as follows: * **A systematic review.** We provide a systematic review of existing graph-based fake news detection methods, describing how they were developed and discussing their strengths and weaknesses. * **A novel taxonomy**. We propose a novel taxonomy focusing on graph-based deep learning methods for fake news detection. * **Future direction landscape.** We discuss open problems and challenges in graph-based fake news detection methods, providing insights on future research directions. The rest of the paper is organised as follows. In Section II, we define the core concepts related to fake news detection and provide a definition of the three categories of fake news detection methods. Next, an overview of the papers reviewed is given in Section III. We describe representative methods of the three categories in Sections IV, V and VI. Open issues and opportunities are discussed in Section VIII, and widely used datasets are presented in Section VII.

## II Preliminaries

We start with some basic concepts and definitions related to graph-based fake news detection. _Fake news_ is a news article that is intentionally or verifiably shown to be false [2]. This definition entails two characteristics of fake news: fake news can be verified to be false, and it often comes with a dishonest intent to mislead readers. _Fake news detection_ is a process used to detect fake news items, e.g., Twitter posts. As noted, in this survey we explore graph-based fake news detection based on the following methods: * **Knowledge-driven methods** construct a knowledge graph for a given news article \(\mathbf{a}\), which is then used to establish the article veracity. The knowledge graph \(\mathbf{K_{g}}\) is typically constructed from entities \(\mathcal{E}_{k}=\{en_{i}\}\) extracted from the textual and potentially visual content of the news article. In external knowledge-based methods, \(\mathcal{E}_{k}\) also includes entities from external knowledge databases. * **Propagation-based methods** model the propagation process of a given news item by constructing an associated _propagation graph_. This graph is then used to assess the article veracity. The propagation graph of a given news item \(\mathbf{a}\) is formed by a set of engagement tuples \(\mathcal{E}=\{e_{it}\}\) that represents the process of how \(\mathbf{a}\) spreads over time \(t\) among \(n\) users \(\mathcal{U}=\{u_{1},u_{2},\ldots,u_{n}\}\), and their corresponding posts \(\mathcal{P}=\{p_{1},p_{2},\ldots,p_{n}\}\) and re-posts/comments. Each engagement \(e_{it}=\langle u_{i},p_{i},t\rangle\) represents a user \(u_{i}\) resharing (spreading) news article \(\mathbf{a}\) at time \(t\), optionally with a comment \(p_{i}\). * **Heterogeneous social context-based methods** consider the broader context of news items for fake news detection.
A graph covering multiple news items \(\mathbf{A}=\{\mathbf{a_{1}},\mathbf{a_{2}},\ldots,\mathbf{a_{l}}\}\) and their engagements is considered, e.g., all news items from the _same user_ who posted the news item being evaluated for fake news, or news articles from other users but on the _same topic_. Since the social context is formed by different types of entities (e.g., users and news articles) and connections (e.g., user-user follower/followee connections, user-news post connections), a heterogeneous social context graph is constructed by the various methods in this category.

Fig. 1: A taxonomy on graph-based fake news detection methods.

We note a related concept of _rumour_. A rumour is unverified information, whose veracity is not yet established. When a rumour is identified as false, the rumour is regarded as fake news. Many studies use the terms "rumour" and "fake news" interchangeably. In this survey, we also cover graph-based rumour detection methods.

## III Overview of Graph-based Fake News Detection Methods

Table I summarises key studies using graph-based methods for fake news detection. It outlines how the graph structures are constructed and encoded. **Graph construction.** To form graph nodes, knowledge-driven methods tend to use sentences, words, topics or knowledge entities from target news articles, while most propagation-based methods use the news articles and associated comments. While news articles and comments also contain words and sentences, the focus of the propagation-based methods is more on the propagation pattern than on the detailed news content. Heterogeneous social context-based methods also consider the propagation patterns, but they emphasise the connections between multiple news items. Thus, besides the news and comments, users and topics (or keywords) also form graph nodes that serve as bridges between news items. To connect nodes in the graph (i.e., to create the edges), a common strategy used by knowledge-driven methods is to simply connect all pairs of nodes. The PMI [15] strategy is a variant, which forms a fully connected graph where the weight of an edge is the similarity between two nodes, e.g., textual similarity. Propagation-based methods, on the other hand, create edges based on propagation events, e.g., adding an edge to connect a comment node with the article node being commented on. Heterogeneous social context-based methods aim to capture every connection in the social context; edges are added among users, news, comments and even topics. **Graph encoding.** After the graph structures have been constructed, they are typically encoded with a deep learning module, i.e., a _graph encoder_ that captures latent information of the graphs for fake news classification. Graph Convolutional Networks (GCN) are a common choice for graph encoders and are used by more than half of the methods surveyed. Several variants of GCN have been proposed [10, 16, 19, 36, 46]. Besides GCN, Graph Attention Networks (GAT) [34] and its variants (e.g., KGAT [13] and TGN [41]) have been adopted for their strong capability of identifying important neighbour nodes in a graph, e.g., nodes that include more important words related to a given news article. **Fake news detection strategies.** As Fig. 1 shows, knowledge-driven methods can be further categorised into _internal knowledge-driven_ [8, 10, 12] and _external knowledge-driven_ [19, 21] methods.
Internal knowledge-driven methods utilise a graph to model the semantic relationships within the news content and social context. This works in a way similar to some text classification methods [55]. In this model, the news article, including comments from other users, is regarded as a text document. Then, connections between different parts of the document are extracted and modelled via a graph, and fake news detection is performed as a graph classification task.

Fig. 2: Abstract representation of knowledge-driven, propagation-based and social context-based methods.

\begin{table}
\begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}}
\hline \hline
**Models** & **Graph Nodes** & **Edge Construction** & **Graph Encoder** & **Core Strategy** \\
\hline \hline
\multicolumn{5}{c}{**Knowledge-driven Methods**} \\
\hline
**GCN-text [8]** & sentences & fully-connected, sliding window [11] & GCN [9] & Capture sentence connections in article. \\
**FinerFact [12]** & news, users, keywords & fully-connected, propagation & KGAT [13] & Capture long-distance semantic dependency. \\
**KMGCN [14]** & visual, textual, external knowledge entities & fully-connected, propagation & GCN & Extract claims from news content and evidence from comments. \\
**KMAGCN [16]** & visual, textual, external knowledge entities & PMI [15] & GCN & Understand the news content through external knowledge. \\
**LOSIRD [17]** & comments, retrieved evidence & & & \\
**CompareNet [19]** & topics, knowledge entities, sentences & & & \\
**InfoSurgeon [21]** & visual, textual, external knowledge entities & & & \\
\hline
\multicolumn{5}{c}{**Propagation-based Methods**} \\
\hline
**RvNN [22]** & claim, comments & responsive propagation & -* & RNN encodes posts in tree-shaped order. \\
**Tree-Transformer [23]** & claim, comments & responsive propagation & - & Transformer encodes posts in tree-shaped order. \\
**TRM-CPM [24]** & claim, comments & responsive propagation & GraphSAGE & Model the responsive propagation by a graph. \\
**Bi-GCN [1]** & claim, comments & responsive propagation & GCN & Model rumour spreading and diffusion in a graph from both top-down and bottom-up directions. \\
**EBGCN [25]** & claim, comments & & & Infer edge weights from node features by Bayes' theorem. \\
**UPSR [26]** & claim, comments & & & \\
**RDEA [27]**, **GACL [28]**, **RDCL [29]**, **CCFD [30]** & claim, comments & & & \\
**GCAN [31]** & users & fully-connected & GCN & Model potential interaction among users. \\
**UPFD [32]** & claim, response users & responsive propagation & GNN & Retrieve user historical posts as user preference. \\
**PSIN [33]** & claim, users, comments & & & \\
**DUCK [35]** & claim, comments, retweets, users & & & \\
**UniPF [36]** & claims, comments, topics & cluster connection, responsive propagation & GAT & Consider multiple types and relations in the news propagation process. \\
**RDLNP [37]** & claim, comments & & & \\
**Dynamic-GCN/GNN [38, 39]** & claim, comments & & & \\
**TGNF [40]** & claim, comments & & & \\
**DDGCN [42]** & claim, comments, knowledge graph & & & \\
**SEAGEN [43]** & claim, comments, retweets & & & \\
\hline
\multicolumn{5}{c}{**Heterogeneous Social Context-based Methods**} \\
\hline
**GLAN [44]** & news, users, retweets & news-retweet-user & CNN & Jointly learn the local (in-news) and global (cross-news) relations. \\
**Monti [45]** & news, users & user-user following, retweeting propagation & GCN & Model news propagation with global user following relationships. \\
**NDG [46]** & news, users, sources, comments, domains & news-domain, source-news & HDGCN [46] & Introduce more social context information. \\
**MFAN [47]** & news, users, comment posts & & & \\
**SureFact [48]** & claim, users, comments, retweet, keywords & & & \\
**TriFN [49]** & publishers, news, spreaders & & & \\
**FANG [50]** & news sources, news, users & user-user following, source-news publication, etc. & GraphSAGE & Exploit social context. \\
**Mehta [51]** & news sources, news, users & FANG-like connections, link inference & R-GNN [52] & Infer implicit connections in social context. \\
**Hetero-SCAN [53]** & publishers, news, spreaders & publisher-news, news-spreader relations & - & Capture multi-level and temporal information in social context. \\
**TR-HGAN [54]** & news, comments, users & authorship, response propagation & hierarchical attention [54] & Mitigate topology imbalance and relation inauthenticity in the heterogeneous social context. \\
\hline \hline
\end{tabular}
*: Does not use graph modelling techniques in the encoding process.
\end{table}
TABLE I: Summary of Graph-based Fake News Detection Methods.

External knowledge-driven methods utilise knowledge from external resources, enabling references to and comparisons with external authoritative knowledge sources. For propagation-based methods, the strategy is to model responsive propagation patterns (i.e., how one user post responds to another) and classify news based on the patterns. Considering that the patterns can contain noise and be modified by fake news spreaders, edge enhancement, adversarial learning, and contrastive learning strategies are often used [26, 27, 28, 29, 30]. There is also a trend to combine more information with the propagation patterns, such as temporal information [37, 38, 39, 40, 41, 42] and user information [32, 33, 35]. Heterogeneous social context-based methods can be seen as a zoomed-out version of propagation-based methods. They use a heterogeneous graph to model the global social context, including connections between different news articles.

In recent years, more and more hybrid models have been proposed which fall into more than one of the aforementioned method categories. For example, DDGCN [42] is a propagation-based method that takes snapshots of both the propagation graph and the knowledge graph. Similarly, some heterogeneous social context-based methods [46, 47] also follow the idea of propagation-based methods to model propagation patterns by GNNs, but with additional information obtained from a larger social context. In the next three sections, we detail the studies in each of the three categories.

## IV Knowledge-driven Methods

Knowledge-driven fake news detection methods, as shown in Fig. 3, extract entities (e.g., nouns or named entities) from the news content and sometimes from reader comments to construct a (knowledge) graph. The graph describes the connections (relationships) among the entities. In most cases, the nodes in the graph represent the entities, while the edges represent the connections. Fake news detection is then done by analysing and performing anomaly detection on the constructed knowledge graph.
As mentioned above, depending on whether external knowledge sources (e.g., YAGO [56] or Probase [57]) are used, knowledge-driven methods can be further divided into internal knowledge-based and external knowledge-based approaches. Different from internal knowledge-driven methods, external knowledge-driven methods collect knowledge entities from external databases based on the news content, using entity linking techniques [7] (cf. Fig. 3). As a result, graphs constructed through external knowledge-based methods contain both entities that are internal and external to the news content, while those constructed by internal knowledge-based methods only contain internal entities.

The graphs constructed by knowledge-driven methods are processed by graph modelling techniques (i.e., graph encoders). The graph encoder outputs are usually fed into fully connected neural networks to perform binary classification, i.e., to classify whether the graph corresponds to fake or real news. Following such a pipeline, existing methods mainly differ in how the knowledge graphs are constructed and encoded.

### _Internal Knowledge-Based Methods_

Internal knowledge-based methods construct a knowledge graph to model the semantics of the contents of a news item. They use graph encoders to encode the knowledge graphs and perform graph classification for fake news detection. Studies following this approach mainly vary in the knowledge graph construction process, including the knowledge entity selection (i.e., using sentences, words, or claims from the news content) and/or the knowledge entity connection (fully-connected or sliding-window based connections).

The internal knowledge-based models are trained in a supervised manner, by propagating the loss calculated from the predicted news veracity and the ground-truth veracity. Cross-entropy loss is often used as the basis for the loss function:

\[\mathcal{L}=-\left[y\log(\hat{y})+(1-y)\log(1-\hat{y})\right]+\lambda_{reg}||\Theta||^{2} \tag{1}\]

where \(y\) is the ground-truth news veracity, \(\hat{y}\) is the prediction result, \(\lambda_{reg}\) is the regularisation coefficient, and \(||\Theta||^{2}\) is the \(L_{2}\) norm of the model parameters, i.e., \(\lambda_{reg}||\Theta||^{2}\) is the \(L_{2}\) regularisation. GCN-text [8], GET [10] and DISCO [58] all utilise this standard cross-entropy loss, i.e., setting \(\lambda_{reg}\) to \(0\) in Equation 1, while FinerFact [12] uses a non-zero value for \(\lambda_{reg}\).
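To make the objective concrete, the following minimal PyTorch sketch implements Equation 1; the function and variable names are illustrative assumptions rather than code from any of the surveyed models.

```python
import torch

def detection_loss(y_hat, y, params, lambda_reg=0.0):
    """Regularised binary cross-entropy (Equation 1).

    y_hat: predicted probability that the news is fake (sigmoid output);
    y: ground-truth label (1 = fake, 0 = real);
    params: iterable of model parameter tensors for L2 regularisation."""
    bce = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))
    l2 = sum((p ** 2).sum() for p in params)
    return bce.mean() + lambda_reg * l2
```

Setting `lambda_reg` to zero recovers the plain cross-entropy used by GCN-text, GET and DISCO.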
We discuss these methods below.

**GCN-text**[8] considers the differences in the context around sentences associated with real news and fake news. It builds a graph where every sentence of a news item is represented as a node, and the sentence interactions are represented by edges. The features of a node are the textual representation of the sentence computed by a _Long Short-term Memory_ (LSTM) network. To consider all possible interactions, the graph is constructed to be fully connected. Then, the fake news detection task is transformed into a graph classification task. The news graph is encoded by a GCN, which corresponds to Equation 2:

\[\mathbf{H}^{(l+1)}=\sigma(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}) \tag{2}\]

Here, \(\mathbf{H}^{(l)}\in\mathbb{R}^{N\times D}\) is the computed node feature matrix at layer \(l\), for the \(N\) nodes in the input graph and \(D\) features per node, where \(\mathbf{H}^{(0)}=\mathbf{X}\) is the input node feature matrix; \(\mathbf{W}^{(l)}\in\mathbb{R}^{D\times D^{\prime}}\) is the weight matrix at layer \(l\), with \(D^{\prime}\) output features per node; \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}_{N}\) is the adjacency matrix of the graph with added self-loops; \(\tilde{\mathbf{D}}\) is the diagonal degree matrix of \(\tilde{\mathbf{A}}\); \(\sigma(\cdot)\) is the activation function, applied element-wise to the resultant matrix; and \(\mathbf{H}^{(l+1)}\in\mathbb{R}^{N\times D^{\prime}}\) is the output feature matrix at layer \(l+1\) with \(D^{\prime}\) features per node. The output graph embedding is processed by a pooling layer, and the pooled representation is then utilised for fake news classification using fully connected layers.
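For concreteness, the propagation rule of Equation 2 can be written in a few lines. The sketch below uses a dense adjacency matrix for readability (practical implementations typically use sparse operations); all tensor names are illustrative.

```python
import torch

def gcn_layer(A, H, W, activation=torch.relu):
    """One GCN layer (Equation 2): H' = sigma(D~^{-1/2} A~ D~^{-1/2} H W).

    A: (N, N) float adjacency matrix; H: (N, D) node features;
    W: (D, D') layer weights."""
    A_tilde = A + torch.eye(A.shape[0])        # add self-loops
    deg = A_tilde.sum(dim=1)                   # degrees of A~ (always >= 1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))     # D~^{-1/2}
    return activation(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)
```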
Fig. 3: Basic idea of internal and external knowledge-driven methods.

**GET**[10] constructs knowledge graphs where every single word of a news article is represented as a node. Edges are built between words (i.e., nodes) within fixed-size sliding windows. In the knowledge entity extraction step (cf. Fig. 3), entities are taken from both the _source news_ (i.e., the news article to be classified) and the comments. Several word-based knowledge graphs are constructed for each source news item and its associated comments. Then, GET follows the same procedure as GCN-text: applying a GCN to the word-based knowledge graphs, feeding the graph representation into fully connected layers with an _attention mechanism_, and classifying the news as fake or real. With the assistance of the attention mechanism, GET is able to tell which words provide fake information (i.e., the _misleading words_). In a similar fashion, **DISCO**[58] also constructs a graph to model word relationships in news content. It only extracts knowledge entities from the source news, and hence only one knowledge graph is constructed for each news article. DISCO reveals misleading words by masking different nodes in the knowledge graph to help observe their contributions to the classification outcome.

By building and analysing knowledge graphs over news content, knowledge-driven methods can provide explanations about why a news article is classified as fake news, thus enabling result interpretability. GET [10] and DISCO [58] offer result interpretability by detecting the misleading words, as mentioned above. With the assistance of the attention mechanism, word entities in the knowledge graphs assigned higher weights are considered candidates for misleading words. FinerFact [12] has a different definition of interpretability: it finds claims in news content that convey incorrect information, together with user comments associated with the news post that may serve as evidence of fake or real news.

**FinerFact**[12] constructs a _claim-evidence graph_, where each node contains a claim from the textual content of a news article together with the most relevant evidence from user comments associated with the news article that might reveal the veracity of that claim. The claim-evidence graph is fully connected to indicate all possible connections between all claims. To extract the most relevant evidence, an additional _evidence graph_ is constructed. The evidence graph contains the users who made the original post and comments, and keywords. The connectivity of the evidence graph shows the interactions of different nodes in the graph, e.g., a user posting a comment, a comment replying to another comment, or a term/keyword contained in a comment. Each node in the evidence graph has a saliency score as its attribute, which indicates how important that node is in the graph. The saliency score is initialised based on meta-data, such as the number of followers that a user has, the number of times that a comment is retweeted, etc. The saliency scores on the evidence graph are propagated iteratively until they converge.

To construct the claim-evidence graph, the topics of the news content are extracted by the LDA algorithm [59], where each topic contains several keywords. The topics are connected to the keywords from the evidence graph to calculate saliency scores of the topics. The top-\(K\) topics with the highest saliency scores are extracted, together with relevant claims and responses (found through the keywords). The claims and evidence of the top-\(K\) topics are then used to form \(K\) nodes of the claim-evidence graph. Each node of this graph is classified as fake or real, while an overall consideration of all claims yields the final veracity of the news.

### _External Knowledge-Based Methods_

External knowledge-based methods draw on external knowledge to assist fake news detection. In many ways, this approach mimics the way humans identify rumours: they seek help from authoritative sources to identify conflicts and disambiguate between fake and real news. To connect news content with external knowledge, entity linking [7] serves as a crucial step in nearly all external knowledge-based methods. Entities from a news article of interest are extracted using Named Entity Recognition (NER) algorithms [60]. The extracted entities are then linked to external knowledge sources, such as open knowledge bases (e.g., YAGO [56], Probase [57], or Wikipedia). These knowledge bases contain a large number of entities (e.g., people, places, concepts and events) with rich information as well as their connections. Entity linking helps fetch such rich information for the entities in the news content. Subsequently, the external knowledge is incorporated into the knowledge graph constructed from the news content, as shown in Fig. 3.

#### IV-B1 Information Fusion

Early methods such as [14, 16, 17] build a knowledge graph for a news article with the help of an external knowledge base. They identify fake news by encoding the knowledge graph and fusing information from both internal and external knowledge in an end-to-end way: (1) internal knowledge entities are extracted and linked to external entities through entity linking; (2) a knowledge graph composed of both internal and external entities is constructed; (3) as with internal knowledge-based methods, the knowledge graph is encoded by a graph encoder; and (4) the graph representation is used for classification with a fully-connected neural network. The loss function shown in Equation 1 is used for model training. We present three methods that adopt this approach.

**KMGCN**[14] performs fake news detection in three steps: knowledge distillation, graph construction and graph encoding.
First, the knowledge distillation step extracts knowledge entities from the news content, including both the news text and the news images. KMGCN uses an entity linking algorithm [61] to link entities in the news text to those in external knowledge bases such as YAGO [56] and Probase [57]. It further uses a pre-trained YOLOv3 detector [62] to detect objects in news images. The detected objects (i.e., their textual representations) are further linked with external knowledge bases as above. Next, the graph construction step creates an undirected knowledge graph with words from the news textual content and the detected objects in the news images. In this graph, the edge weights are computed based on the point-wise mutual information (PMI) between the words corresponding to two nodes, and two nodes are connected if their PMI score is greater than 0 (a sketch of this construction is given below). Lastly, the graph encoding step uses a GCN to learn the representation of the constructed knowledge graph. A max-pooled node representation is fed to a fully-connected neural network to generate a news veracity prediction.

**KMAGCN**[16] resembles KMGCN in that it also extracts knowledge entities and models them with a graph. The difference is that KMAGCN only extracts entities from the textual content of a news article at the start. Following a procedure similar to that of KMGCN, a knowledge graph composed of textual words of the news, knowledge entities and external knowledge entities is constructed and encoded by a GCN. Visual features of a news article are then extracted from images through a pre-trained VGG-19 network [63]. To emphasise the visual features that correlate better with the textual information, the visual features are aligned with the knowledge graph representation using feature-level attention: a weight is computed for each visual feature by comparing it with the knowledge graph representation, and the visual features are re-weighted accordingly. The pooled graph representation and the re-weighted visual representation are then concatenated for the final classification through a fully-connected neural network.

**LOSIRD**[17] uses Wikipedia as the external knowledge source to verify the veracity of news articles. It has an evidence retrieval module (ERM) that is pre-trained to retrieve Wikipedia articles related to news articles of interest. The top-\(K\) most relevant sentences from Wikipedia are regarded as evidence for the news. Then, a star-shaped knowledge graph consisting of the news article of interest and the evidence sentences is constructed: the news article forms the centre node, while the retrieved sentences are nodes connected to the centre node. A tree-shaped graph reflecting the reply relationships in the news propagation process is also constructed. The knowledge graph and the propagation graph are both encoded using the GraphSAGE model [18], and the two graph embeddings are concatenated and used in fake news classification through a fully-connected neural network.

**DDGCN**[42] models the temporal evolution of the knowledge graph by constructing the graph gradually as more users interact with a given news article and more entities are extracted from user comments. DDGCN takes snapshots of the knowledge graph at different time points. The snapshots are then combined with a news propagation graph to infer the veracity of news articles, as detailed in Section V-B.
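The PMI-based edge construction used by KMGCN (described above) can be sketched as follows; the tokenisation, window size and exact counting scheme are illustrative assumptions rather than the authors' released implementation.

```python
import math
from itertools import combinations

def pmi_edges(documents, window=5):
    """Connect two word nodes if their PMI over sliding windows is positive."""
    word_cnt, pair_cnt, total = {}, {}, 0
    for doc in documents:                       # doc: list of word tokens
        for i in range(max(1, len(doc) - window + 1)):
            win = set(doc[i:i + window])        # words co-occurring in a window
            total += 1
            for w in win:
                word_cnt[w] = word_cnt.get(w, 0) + 1
            for a, b in combinations(sorted(win), 2):
                pair_cnt[(a, b)] = pair_cnt.get((a, b), 0) + 1
    edges = {}
    for (a, b), n_ab in pair_cnt.items():
        pmi = math.log((n_ab / total) /
                       ((word_cnt[a] / total) * (word_cnt[b] / total)))
        if pmi > 0:                             # keep positively associated pairs
            edges[(a, b)] = pmi
    return edges
```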
#### IV-B2 Inconsistency Detection

Some studies [19, 21] predict news veracity by detecting inconsistency between the news content and external knowledge. We detail CompareNet [19] as a representative example of studies following this approach.

**CompareNet**[19] extracts knowledge entities from the textual content of news and retrieves descriptions of the entities from Wikipedia. An internal knowledge graph containing the knowledge entities, news sentences and news topics is constructed to model the news internal information. Meanwhile, external knowledge embeddings are obtained by encoding the Wikipedia entity descriptions. By comparing the internal knowledge graph representation and the external Wikipedia representation, inconsistency between the news content and existing external knowledge is captured.

To construct the internal knowledge graph, a news article is first dissected into sentences. Subsequently, knowledge entities are extracted from each sentence using TAGME1. To incorporate topic information and determine the topics conveyed in each sentence, the top-\(K\) topics are identified using an unsupervised LDA algorithm [59]. The internal knowledge graph is then composed of nodes representing sentences, knowledge entities and topics. To establish connections between these nodes, sentences are bidirectionally and fully connected to each other, while each sentence node is also linked to the entities it encompasses and the topics it conveys.

Footnote 1: https://sobigdata.4dscience.org/group/tagme/

Since the graph is heterogeneous (i.e., composed of different types of nodes and edges), a heterogeneous graph convolutional network (Hetero-GCN) is used to model the graph. In Hetero-GCN, the node representation is updated with a node-type-aware procedure as formulated by Equation 3:

\[\mathbf{H}^{(l+1)}=\sigma(\sum_{\tau\in\mathcal{T}}\mathcal{B}_{\tau}\mathbf{H}_{\tau}^{(l)}\mathbf{W}_{\tau}^{(l)}) \tag{3}\]

where \(\sigma(\cdot)\) is an activation function; \(\mathcal{T}=\{\tau_{1},\tau_{2},\tau_{3}\}\) represents the three node types (sentences, topics and entities); \(\mathbf{H}_{\tau}^{(l)}\) is the input node feature matrix of type \(\tau\) at layer \(l\); \(\mathbf{W}_{\tau}^{(l)}\) is the weight matrix of type \(\tau\) at layer \(l\); and \(\mathcal{B}_{\tau}\) is an attention weight matrix whose rows represent the nodes and whose columns represent the neighbouring nodes of type \(\tau\). After graph encoding, the sentence and entity node embeddings are obtained.
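A minimal sketch of the type-aware update in Equation 3, assuming the attention matrices \(\mathcal{B}_{\tau}\) have already been computed upstream; the dictionary-based layout is an illustrative choice.

```python
import torch

def hetero_gcn_layer(B, H, W, activation=torch.relu):
    """Type-aware update (Equation 3).

    B, H, W are dicts keyed by node type tau:
    B[tau]: (N, N_tau) attention weights towards neighbours of type tau;
    H[tau]: (N_tau, D) features of type-tau nodes; W[tau]: (D, D') weights."""
    return activation(sum(B[tau] @ H[tau] @ W[tau] for tau in H))
```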
To retrieve external entity representations, CompareNet searches the Wikipedia page of each entity from the news content and encodes the first paragraph of the retrieved Wikipedia text as the textual representation of that entity. To capture the structural connections between all entities from a news article, TransE [20] is utilised to compute the entity embeddings. Then, each entity's embedding is fused from both textual and structural aspects through a gating function. After obtaining both internal and external entity representations (i.e., embeddings), CompareNet compares them to capture inconsistency, with the assumption that entities from trusted news should be better aligned with the corresponding external information. The comparison is done by calculating a comparison vector \(\mathcal{A}_{i}\) for each entity:

\[\mathcal{A}_{i}=f_{cmp}(W_{e}e_{c},e_{KB}) \tag{4}\]

\[f_{cmp}(x,y)=W_{a}[x-y,x\odot y] \tag{5}\]

Here, \(f_{cmp}(\cdot)\) is a comparison function, \(e_{c}\) is the \(i\)-th entity's internal representation, \(e_{KB}\) is the \(i\)-th entity's external representation, and \(W_{e}\) and \(W_{a}\) are weight matrices. All comparison vectors \(\mathcal{A}_{i}\) are pooled using max-pooling to obtain the overall comparison output, which is further concatenated with the max-pooled sentence outputs of the Hetero-GCN before being fed into a fully-connected neural network to generate the final news veracity prediction.

**InfoSurgeon**[21] leverages the inconsistency between a news article's content (both text and images) and external knowledge; higher inconsistency is considered a stronger hint that the news is fake. The knowledge graph construction in InfoSurgeon is similar to those in KMGCN [14] and KMAGCN [16], where both internal and external knowledge are integrated into one knowledge graph. InfoSurgeon detects which nodes and edges in the knowledge graph carry inconsistent information. Since there is no existing dataset with such labels, InfoSurgeon generates synthetic fake news samples from real news by modifying news details to inject misinformation into the knowledge graph (e.g., swapping entities, adding non-existing relationships, or replacing a sub-graph). The synthetic data is used to train a fake news detector to detect information inconsistency and hence predict the news veracity.

## V Propagation-based Methods

News in social media is directly exposed to the public, and its spread involves many social media users who interact with the news, forming unique propagation patterns. Studies have found that fake news often spreads differently from real news, e.g., news published by official sources vs. rumours disseminated through social media [64, 65]. Researchers have thus proposed propagation-based methods aiming to exploit such differences to identify fake news.

The propagation process involves multiple users and their interactions, e.g., comments and re-posts. Each interaction links two entities, such as users, source posts and comment posts, or comments on earlier comments. These interactions form a graph structure which can be modelled and analysed with graph-based approaches, as shown in Fig. 4. A side benefit of considering the propagation process is that conversations formed by a news article post and its subsequent comments possess the capability to "self-correct" inaccurate information [22], because users often debate and share evidence in the comments. Evidence hinting at the veracity of the news may be found within the comments. This has also led to a series of studies representing the comments using tree structures and encoding them as trees/graphs for fake news detection.

We classify and review propagation-based methods in two sub-categories, _static graph-based_ and _dynamic graph-based_, depending on whether temporal information is considered in the graph construction and encoding process. Beyond propagation modelling, auxiliary information such as user meta-information and advanced machine learning techniques (e.g., adversarial learning and contrastive learning) have also been utilised in propagation-based models.
Propagation-based methods are mainly trained in a supervised way, with the loss calculated from a classification loss function such as the cross-entropy loss (Equation 1). In addition to the classification loss, some models (e.g., RDEA [27], GACL [28]) utilise auxiliary loss functions to enhance the detection, as described later.

### _Static Graph-Based Methods_

To capture the information flow embedded in the text of a source news post and its comments, researchers have constructed graphs using the source posts and comments as nodes, representing the conversation through graph structures. The constructed graphs, known as _propagation graphs_, are modelled using various techniques to extract the latent information, which subsequently aids in fake news detection.

In static graph-based methods, a single propagation graph is constructed for each news item together with its user interactions (e.g., comments and reposts). The graph construction is demonstrated in Fig. 5. For the news article and user interactions shown in Fig. 5a, each interaction, as well as the news article itself, is represented by a node, and the nodes are connected by edges following propagation events. Depending on the edge directions, the propagation graphs can be classified into top-down directed graphs (Fig. 5b), bottom-up directed graphs (Fig. 5c), or undirected graphs (Fig. 5d). The directions of propagation are differentiated in some papers [1, 22, 35] to produce finer-grained propagation modelling, but undirected graphs are the mainstream in static graph-based methods.

Fig. 4: Basic idea of propagation-based methods.

Fig. 5: A news propagation example and different graph modelling of the propagation process: (a) News propagation example, (b) Top-down graph modelling, (c) Bottom-up graph modelling and (d) Undirected graph modelling.

Next, we detail studies on static graph-based methods, starting from vanilla propagation pattern modelling, to more robust propagation modelling, and then to additional feature modelling.

#### V-A1 Vanilla Propagation Pattern Modelling

**RvNN**[22] models news propagation using a tree structure as illustrated in Fig. 5a. A source news post and its associated comments are represented as nodes in the tree, with the source post being the root. The feature vector of each node is initialised as a textual representation of the corresponding post, i.e., a vector of tf-idf values of the words. A GRU-based recursive neural network (RvNN) then encodes the tree by processing nodes in both top-down (Fig. 5b) and bottom-up (Fig. 5c) directions. The veracity of the source news post is classified by applying fully connected neural layers to the max-pooled tree embeddings computed by both RvNNs. It is important to note that the RvNNs for the top-down and bottom-up encoding are two separate models, referred to as TD-RvNN and BU-RvNN, respectively, and TD-RvNN has overall better performance in the experiments. In follow-up work, the authors of RvNN enhanced their model by replacing the GRU-based network with a Transformer encoder, resulting in the **Tree Transformer**[23] model with improved accuracy.

Other studies have used Graph Neural Networks (GNNs) for encoding the propagation graph. For example, **TRM-CPM**[24] encodes the propagation graph using GraphSAGE [66]. Similar to RvNN, **BiGCN**[1] also models news propagation patterns with two graphs in top-down and bottom-up directions.
It employs a GCN on the top-down graph (Fig. 5b) to learn the patterns of rumour propagation, and another GCN on the bottom-up graph (Fig. 5c) to capture the structures associated with rumour dispersal. The embeddings of both graphs are subsequently concatenated to classify the veracity of a news item. BiGCN has been extended in many follow-up studies with advanced GNN models. For example, Zhang et al. [67] use _Graph Attention Networks_ (GAT) [34] to model the propagation graph; others such as [68, 69] use graph auto-encoders and leverage their strong learning capabilities. GACL [28] uses an adversarial contrastive learning method (discussed in Section V-A2), which also follows the bi-directional graph modelling framework.

#### V-A2 Enhanced Propagation Pattern Modelling

Ma et al. [70] argue that propagation graphs harvested directly may be noisy and untrustworthy. Fake news spreaders may be able to influence the graph propagation structure by deleting comments associated with their posts and/or by employing promotional bots. Additionally, news crawlers may fail to capture the entire propagation process due to privacy policies and other platform restrictions. To address these issues, Ma et al. [70] propose a _Generative Adversarial Network_ (GAN)-based method to capture low-frequency but non-trivial patterns to improve fake news detection robustness. In this model, a generator produces uncertain or conflicting comments in the propagation graph to force the discriminator to learn stronger indicative representations of the propagation patterns.

Further issues arise when news items from different domains are considered: their propagation patterns may differ, so a propagation-based model may suffer in its generalisation capability. To address such issues, **EBGCN**[25] performs edge enhancement on the propagation graph: instead of treating all edges equally, it adaptively infers edge weights from node features based on Bayes' theorem. It then uses GCN layers to encode the enhanced propagation graph, and the resultant graph representation is fed into a classifier for news veracity classification. Similarly, **UPSR**[26] enhances graph edges using Gaussian distributions and encodes both the original and the enhanced graphs for fake news detection.

**RDEA**[27] and **GACL**[28] follow Ma et al. [70] and further introduce _contrastive learning_ into graph-based fake news detection, training more robust propagation-based models. RDEA presents a data augmentation strategy that randomly modifies the propagation graph (e.g., dropping edges or masking subgraphs) and then trains a GCN model with contrastive learning. In contrastive learning, positive and negative data samples are fed into an encoder model in a contrastive way to encourage the model to learn data representations that better distinguish dissimilar data samples [71]. In RDEA, the modified propagations and the original propagation are regarded as a set of positive data samples, while negative data samples are obtained by random sampling from the propagation graphs with a different news class label (i.e., fake or real).
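The augment-and-contrast recipe behind RDEA-style training can be sketched as follows; the edge-dropping augmentation and the InfoNCE-style objective below are a generic illustration of the idea, not RDEA's exact loss.

```python
import random
import torch
import torch.nn.functional as F

def drop_edges(edges, drop_rate=0.2, seed=None):
    """Create an augmented 'positive' view of a propagation graph
    by randomly removing a fraction of its edges."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() > drop_rate]

def contrastive_loss(z_orig, z_aug, z_negs, temperature=0.5):
    """Pull the augmented view towards the original graph embedding and
    push away embeddings of graphs with a different veracity label.

    z_orig, z_aug: (d,) graph embeddings; z_negs: (k, d) negative embeddings."""
    pos = F.cosine_similarity(z_orig, z_aug, dim=-1) / temperature
    neg = F.cosine_similarity(z_orig.unsqueeze(0), z_negs, dim=-1) / temperature
    logits = torch.cat([pos.view(1), neg]).unsqueeze(0)
    # the positive pair sits at index 0 of the logits
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```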
GACL also utilises contrastive learning, and it introduces adversarial learning with an Adversarial Feature Transformation (AFT) module. The AFT aims to improve the model's robustness to human-camouflaged propagation samples, in which fake news producers manipulate the propagation to make it closer to real instances. The AFT simulates such malicious manipulations in the model training process and forces the model to learn event-invariant features.

Another study, **RDCL**[29], finds that a minor punctuation change in the propagation graph may cause a prediction flip for existing models. To obtain a more robust model, it employs contrastive learning by adding perturbations. The perturbations come from various aspects, including adding noise to the text representations and tuning the propagation structures as mentioned in RDEA [27]. **CCFD**[30] further introduces _curriculum contrastive learning_ into fake news detection. The core idea is to gradually increase the contrastive difficulty of negative samples (i.e., making them less dissimilar to the propagation graph of interest) to fully exploit the power of contrastive learning.

#### V-A3 Additional Feature Modelling

Further studies attempt to extract more information from the propagation graph beyond the graph structure, e.g., characteristics of the users involved in the propagation, historical engagements of those users, and other propagation features.

**GCAN**[31] collects statistical features (e.g., number of followers, number of posts) of all users involved in a news propagation process and constructs a fully-connected graph of the users, assuming every user has implicit connections to all others. The user features are then encoded within the fully-connected graph using a CNN encoder to serve as features for fake news detection. A limitation of this work is that the propagation graph structure is ignored and the propagation is only modelled as a sequence with RNNs.

**UPFD**[32] focuses on retweet interactions and user attributes. It constructs a graph consisting of the source news article and the users who retweet the article. The user node features are extracted from their historical posts. The final graph model thus captures information related to user credibility from user historical activities through joint content and graph modelling.

**PSIN**[33] considers the _propagation heterogeneity_, i.e., the different types of entities and relationships that may exist in the propagation context of a news item. It proposes a heterogeneous graph to model the propagation process, including user and post nodes, and user-follower and post-reply edges. PSIN breaks the heterogeneous graph into three sub-graphs: a post propagation tree following BiGCN [1], a user social graph formed by user-follower relationships, and a user-post interaction graph showing the authors of the posts. These three graphs are encoded synchronously by three individual neural networks in parallel, and their embeddings are concatenated to help classify the veracity of the news.

**DUCK**[35] considers propagation information from three aspects: the structural patterns of retweets, the structural patterns of comments, and the temporal patterns of comments. Two tree-shaped propagation graphs for comments and retweets are constructed (cf. Fig. 5b), which are encoded by two GATs. The temporal patterns of comments are described in a sequence: comments are listed in chronological order, and the list (i.e., the texts) is encoded by a pre-trained Transformer encoder. The outputs of the GATs and the Transformer encoder are concatenated for fake news classification.

The works above primarily focus on individual propagation graphs while overlooking potential connections between multiple news items. **UniPF**[36] clusters news articles based on the K-means algorithm.
News articles in the same cluster are considered to be on the same topic, and each source news node from the same topic is linked to a shared topic node by an edge. As a result, the propagation graphs of different news items on the same topic are connected through the shared topic node, and information can be exchanged across different propagation graphs. UniPF demonstrates the potential of a hybrid fake news detection method where the connections between multiple news articles are considered, just like in the heterogeneous social context-based methods, which will be detailed in Section VI.

### _Dynamic Graph-Based Methods_

The techniques discussed in the previous subsection assume a complete (static) news propagation graph. In reality, the full propagation structure of news does not emerge instantaneously. Instead, it undergoes an evolution process, expanding from one or two nodes to a potentially much larger propagation graph. Such a temporal process is omitted (or significantly simplified) by static graph-based methods, which misses important patterns for fake news detection. For example, in Fig. 6, two news propagation graphs (where each node represents a post) have identical tree structures, but their propagation patterns differ when temporal information is considered. It has been observed that fake news propagation exhibits a viral pattern with multiple stages reflecting people's attention and reactions, giving rise to a distinct life cycle [72]. Studies exploit such patterns for fake news detection with dynamic propagation graph-based methods.

Fig. 6: Two dynamic propagation processes (a and b) with the same static pattern: from a temporal dynamic perspective, their patterns can be quite different, since (a) propagates almost linearly in time while (b) has a peak at around time \(t_{2}\sim t_{3}\).

**DSTS**[73] emphasises the importance of temporal features and proposes a feature engineering-based method that considers features such as the number of retweets over time when detecting fake news. **MMHM**[74] models the retweeting dynamics using marked Hawkes processes [75], based on the _self-exciting phenomenon_: the occurrence of an event can influence the likelihood of future events. In the social media context, some user interactions (e.g., comments, reposts) can cause a sudden increase or decrease in the propagation of a news item. MMHM demonstrates the potential of fake news detection using only self-exciting phenomenon features and opens up new research opportunities.

**RDLNP**[37] learns the temporal sequential characteristics of the propagation patterns by adding linear sequence learning alongside graph-based structural learning. To simplify the model, only an undirected propagation graph is considered (cf. Fig. 5d). As before, a GCN is used to encode the graph structure, while an LSTM network is used to encode the linear sequence. The graph and sequence embeddings are then integrated using an attention mechanism for fake news classification.

**Dynamic-GCN**[38] and **Dynamic-GNN**[39] capture temporal dynamics by taking snapshots of the propagation graph at different time points. These graph snapshots are encoded by a GNN (e.g., a GCN) to generate graph representations, which are then processed using sequence modelling approaches, such as self-attention encoders. Similarly, **DDGCN**[42] extracts temporal graph snapshots. In addition, it extracts knowledge entities and builds temporal snapshots of the resultant knowledge graphs. The temporal snapshots of both the propagation graphs and the knowledge graphs are encoded simultaneously and interactively for fake news detection.
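Snapshot extraction itself is straightforward. A sketch under the engagement-tuple notation of Section II, with the cut-off times as an illustrative input:

```python
def graph_snapshots(engagements, cut_times):
    """Cumulative propagation snapshots, one per cut-off time.

    engagements: list of (user, post, t) tuples, as defined in Section II."""
    return [[e for e in engagements if e[2] <= t_cut]
            for t_cut in sorted(cut_times)]
```

Each snapshot can then be encoded independently (e.g., by the GCN layer of Equation 2) before sequence modelling over the snapshot embeddings.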
**TGNF**[40] considers temporal information at the graph node level. It utilises a temporal pattern-aware GNN named _Temporal Graph Network_ (TGN) [76] to encode the evolution of propagation graphs. In the node aggregation step of TGN, a node's active time and historical status are considered. The TGN updates node features according to the temporal aspects of node interactions: given an interaction between two nodes, only the two node embeddings are updated using the attention mechanism. TGNF utilises TGN to build a graph structure based on the sequence of user interactions (i.e., replies and retweets on social media). The graph is encoded by a GCN, whose output is then fed into a fully-connected neural network to predict the news veracity.

**UGRN**[77] introduces the concept of _trigger detection_ to identify user interactions (retweets or comments) that yield a marked increase in the propagation of the source news item. Properly identified triggers can be used to help infer the trustworthiness of the source news. After applying a GCN to the propagation graph, the node representations are listed in chronological order. A classifier is then trained at the node level to classify whether a node is a trigger. Meanwhile, UGRN also feeds the graph representation to a fully-connected neural network to predict whether the news is real or fake. The trigger detection and fake news detection models are jointly trained as a multi-task learning task using the following loss function:

\[\mathcal{L}=\mathcal{L}_{t}+\mathcal{L}_{v} \tag{6}\]

where \(\mathcal{L}_{t}\) and \(\mathcal{L}_{v}\) are the cross-entropy losses for trigger classification and news trustworthiness classification, respectively; both are calculated based on Equation 1.

Similarly to UGRN, **SEAGEN**[43] also encodes the propagation process with a graph encoder, and then predicts interaction intensities over the propagation. The intensity here is the likelihood of the source news receiving an interaction at a given time, which reflects the self-exciting phenomenon. SEAGEN is also trained in a joint manner, in which the cross-entropy loss for news veracity and an intensity accuracy loss are combined as the overall model loss. The intensity accuracy loss is calculated by comparing the real propagation intensity with the estimated one. Both UGRN and SEAGEN consider the graph propagation speed. The difference between them is that UGRN aims to find the trigger that invokes a sudden increase in retweets or comments, while SEAGEN aims to model the evolving propagation speed using the self-exciting phenomenon.
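A minimal sketch of the multi-task objective in Equation 6, assuming per-node trigger logits and per-graph veracity logits are produced by upstream encoders (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def joint_loss(trigger_logits, trigger_labels, veracity_logits, veracity_label):
    """Equation 6: trigger classification loss plus veracity classification loss.

    trigger_logits: (N, 2) per-node logits; trigger_labels: (N,) node labels;
    veracity_logits: (1, 2) graph logits; veracity_label: (1,) graph label."""
    loss_t = F.cross_entropy(trigger_logits, trigger_labels)
    loss_v = F.cross_entropy(veracity_logits, veracity_label)
    return loss_t + loss_v
```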
## VI Heterogeneous Social Context-based Methods

Beyond the textual content of news posts and their associated comments, other entities, such as the users making those posts and comments and the topics expressed by the posts, can be considered when establishing the veracity of news articles. Such additional information forms a heterogeneous social context that reveals implicit connections among multiple news items. For example, multiple news items may be posted or spread by the same users. This section focuses on graph-based methods that model such heterogeneous social contexts through _heterogeneous graphs_. A heterogeneous graph is a graph that consists of more than one type of node or edge. After a heterogeneous graph is constructed, graph embedding techniques can be used to encode the graph for fake news classification.

To extract information from heterogeneous graphs, existing methods either extract the sub-graph most relevant to a news post and encode it with a graph encoder such as a GCN [9], or they construct a large graph with representation optimisation and then take the optimised representation for veracity classification. Depending on whether sub-graph-level features or node-level features are utilised in the detection process, we classify the heterogeneous social context-based methods into two sub-categories: _sub-graph-level classification_ and _node-level classification_. Intuitively, node-level methods run graph algorithms on the full heterogeneous graph, while sub-graph-level methods extract a sub-graph and only run a graph encoder on the sub-graphs. Fig. 7 illustrates both types of method.

Fig. 7: Basic idea of heterogeneous social context-based methods.

### _Sub-graph-level Classification_

Sub-graph-level methods construct a large heterogeneous graph composed of all news items, users and user interactions (e.g., comments or user-follower relationships). Then, a sub-graph is extracted from the large graph for each news item by sampling its k-hop neighbours (a sketch of this extraction is given below), to be encoded by a heterogeneous graph encoder. The encoded sub-graph embedding is fed into a fully-connected neural network for news veracity classification. This process is similar to the propagation-based methods described in the previous section. A core difference is that propagation-based methods construct a propagation graph for each news item, neglecting the implicit connections between the news item and other news items. In contrast, the sub-graph-level methods reviewed in this section consider a wider social context where news items are connected to each other, through either user interactions or topic similarity.

Two early methods, **Monti**[45] and **GLAN**[44], construct a large heterogeneous graph over all news articles and users. They model the user-follower relationships and news propagation relationships as edges in the graph. To detect fake news, Monti extracts the sub-graph corresponding to the news article of interest and encodes it with a GCN. The mean pooling of the node representations computed by the GCN is then fed into a classifier for news veracity classification. GLAN follows a similar procedure but uses a CNN and multi-head attention as the graph encoder.

**NDG**[46] uses the nodes of the heterogeneous graph to represent news items, comments, news sources and news domains. The edges in the graph represent users commenting on a news article and news articles belonging to a given domain. There may be too many edges connected to a node in such a graph (e.g., too many comments on the same news article or too many news articles in the same domain), resulting in a "neighbourhood explosion" problem. To mitigate this problem, a sampling strategy that randomly takes a subset of the graph is proposed in NDG to retain a size-limited sub-graph. The rest of the classification process is the same as in Monti and GLAN.
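The k-hop sub-graph extraction described at the start of this subsection reduces to a breadth-first traversal around the news node; a minimal sketch with an adjacency-list representation (names are illustrative):

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    """Return the set of nodes within k hops of a seed news node.

    adj: dict mapping each node to an iterable of its neighbours."""
    visited, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj.get(node, ()):
            if nb not in visited:
                visited.add(nb)
                frontier.append((nb, depth + 1))
    return visited
```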
**MFAN**[47] captures unobserved links in the propagation graph. It models the social propagation context with graphs that contain three types of nodes: the news article node, user comment nodes, and user nodes (i.e., users who made comments). MFAN argues that the links and relationships in the graph structure may be incomplete due to privacy issues or due to manipulation by fake news spreaders. MFAN therefore complements the graph links by inferring hidden links: it compares the cosine similarity between every pair of nodes in the graph and assigns edges to the pairs with high similarity (see the sketch at the end of this subsection). Then, an adaptive GAT is utilised to encode the graph to obtain the graph embedding.

**Gregor**[78] constructs a heterogeneous graph to model news propagation processes. The news article and the users who posted about the article are linked together in a graph that is encoded by a GNN. Similarly, **SureFact**[48] constructs a heterogeneous propagation graph consisting of four types of nodes corresponding to the news article, users, user posts (retweets and replies), and keywords (extracted through topic models such as the LDA algorithm [59]). Then, SureFact uses reinforcement learning to select the most important sub-graphs for fake news detection, since considerable noise and irrelevant information may exist in the full graph. After filtering the irrelevant information, the remaining sub-graphs are modelled by an adapted GAT in a fine-grained manner, to improve the model's discrimination power and explainability. Compared to earlier graph-based methods that capture mainly high-level patterns of the full propagation graph, SureFact provides deeper insights that can be used to understand the propagation patterns of fake news through filtered sub-graphs.

Chandra et al. [79] build a large heterogeneous graph consisting of users and news articles. They detect fake news by analysing the user communities (i.e., user clusters). Their core idea is that a news article's veracity is likely to be similar to that of articles topologically close to it. The veracity of a news article thus may be inferred from other news articles in the same cluster that come with a veracity label.
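MFAN's hidden-link inference described above amounts to a thresholded cosine-similarity scan over node features; the threshold value below is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def infer_hidden_links(X, threshold=0.9):
    """Add an edge between every node pair whose feature cosine
    similarity exceeds a threshold (MFAN-style link completion).

    X: (N, d) node feature tensor; returns a list of (i, j) edges."""
    S = F.cosine_similarity(X.unsqueeze(1), X.unsqueeze(0), dim=-1)  # (N, N)
    N = X.shape[0]
    return [(i, j) for i in range(N) for j in range(i + 1, N)
            if S[i, j] > threshold]
```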
### _Node-level Classification_

Other studies construct a large heterogeneous graph and exploit labelled news article nodes to help classify unlabelled ones. Such methods are referred to as node-level methods. In these methods, all news items and the users participating in the propagation process are represented as nodes in the graph, and the connections among news items are represented by edges. The graph representation is optimised by GNNs during graph construction. The learned embedding of each news node can be utilised for news veracity prediction directly, i.e., no sub-graph extraction is needed.

**TriFN**[49] is the first approach to model multiple news articles in a large heterogeneous graph. In TriFN, news articles are connected to their publishing users as well as spreading users. Such a structure has been adapted in many follow-up studies [46, 47, 50, 51]. Both news articles with and without veracity labels are included in the heterogeneous graph. The graph representation is then optimised through Laplacian matrix decomposition. The unlabelled articles are then classified by a classifier trained on the labelled articles.

**FANG**[50] enhances the heterogeneous graph representation by the node representation proximity loss below, which is optimised together with the news veracity classification loss based on Equation 1:

\[\mathcal{L}_{prox}=-\sum_{r\in G}\left[\sum_{r_{p}\in P_{r}}\log(\sigma(z_{r}^{T}z_{r_{p}}))+Q\sum_{r_{n}\in N_{r}}\log(\sigma(-z_{r}^{T}z_{r_{n}}))\right] \tag{7}\]

Here \(z_{r}\in\mathbb{R}^{d}\) is the representation of entity \(r\) in the heterogeneous graph \(G\), \(P_{r}\) is the set of nearby nodes (the _positive set_) of \(r\), \(N_{r}\) is the set of disparate nodes (the _negative set_) of \(r\), and \(Q\) is a weight balancing the contribution of the negative samples. Set \(P_{r}\) is obtained with a random walk from the entity node \(r\), and \(N_{r}\) is derived using negative sampling over the graph. The proximity loss optimises the graph representation based on the _echo chamber_ effect, in which nodes closely connected within the same community share similar features, while those that are farther apart in distinct communities exhibit different features. Overall, fake news detection in FANG is modelled as a reasoning problem over a graph where the graph representation is refined in parallel. FANG uses the evidence provided by existing knowledge of real and fake content from the training data to assess the authenticity of unknown news (i.e., test data) based on the observed links in the heterogeneous graph.
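A minimal PyTorch sketch of the proximity loss in Equation 7, assuming the positive sets (random-walk neighbours) and negative sets (negative samples) have already been collected:

```python
import torch
import torch.nn.functional as F

def proximity_loss(z, pos_sets, neg_sets, Q=5.0):
    """Equation 7: nearby nodes get similar embeddings, negatively
    sampled distant nodes get dissimilar ones.

    z: (N, d) node embeddings; pos_sets/neg_sets: dict node -> list of nodes."""
    loss = torch.zeros(())
    for r, positives in pos_sets.items():
        for rp in positives:
            loss = loss - F.logsigmoid(z[r] @ z[rp])
        for rn in neg_sets.get(r, []):
            loss = loss - Q * F.logsigmoid(-(z[r] @ z[rn]))
    return loss
```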
Following TriFN and FANG, a series of other methods have been proposed [80, 81] that use advanced machine learning techniques such as adversarial active learning and weakly supervised learning for node-level fake news classification.

Based on the graph representation of FANG, **Mehta**[51] enriches the edges in the graphs. Its idea is similar to MFAN [47] but from a node-level classification perspective. Since there are multiple types of nodes in the heterogeneous graph, different inference operators are used to augment the graph with different relationships beyond those initially seen. Fake news detection in Mehta is performed in a transductive manner using a _Relational Graph Neural Network_ (R-GNN) [52]. The information in the observed data is transferred to the veracity-unknown nodes for veracity prediction.

**Hetero-SCAN**[53] decomposes the heterogeneous graph into two levels. The higher level describes the correlations between news publishers and news articles, while the lower level describes the correlations between news spreaders and news articles. The two-level heterogeneous graph is divided into sub-graphs named _meta-paths_, to model the detailed correlations, e.g., a news article cites another, and a social media user spreads two news articles. During fake news detection, a news article's surrounding meta-paths are extracted, encoded with GRU and attention encoders, and integrated based on the attention mechanism to assess the veracity of the news article.

**TR-HGAN**[54] considers the _topology imbalance_ and _relation inauthenticity_ when inferring the veracity of a news article. Here, topology imbalance means that the topology distribution of labelled news nodes is likely to be asymmetric and uneven. To address this issue, TR-HGAN proposes a smoothing strategy that forces the influence from labelled nodes to decay with the topological distance. Relation inauthenticity refers to the fact that the propagation structure is not always reliable, because it can be manipulated by users. To address this issue, TR-HGAN adapts a hierarchical attention mechanism that weights a node's neighbours based on the nodes and their types (e.g., user nodes, news nodes or comment nodes) to deal with propagation graphs being manipulated by fake news spreaders.

## VII Comparison of Reported Empirical Model Performance Results

Various datasets have been utilised in the experiments of the existing fake news detection studies. Below, we summarise the most commonly used datasets:

* **Twitter15**[82] includes popular source tweets that were highly retweeted or replied to (in 2015), along with the propagation graphs (retweets and replies).
* **Twitter16**[82] shares the same data collection process with Twitter15 but is based on data from 2016.
* **PHEME** has two versions: PHEME-5 [83] and PHEME-9 [84]. PHEME-5 was collected in 2016 from five news events covering five domains, whilst PHEME-9 was collected in 2018 from nine news events (and domains). Only replies are collected in the propagation graphs.
* **Weibo**[64] contains news propagation graphs crawled from the Weibo platform. Similar to Twitter15 and Twitter16, source news, reposts and replies are collected.
* **FakeNewsNet**[72] is composed of two topics: politics and entertainment. Rich social context information is included, such as the news, news sources, news spreaders, retweets and reply propagation graphs. FakeNewsNet is also divided into two smaller datasets, **PolitiFact** and **GossipCop**, depending on the news topics.

Most of these datasets are obtained from Twitter. Social media platforms like Twitter and Weibo have become mainstream data sources for social media research in recent years, due to their scale, adoption and openness of data. The statistics of these datasets are shown in Table II.

The detection performance (i.e., accuracy) of the reviewed models over the datasets above (as reported in the original papers) is summarised in Table III. Note that even when the same datasets are utilised, different papers may run experiments with different subsets. Therefore, we label the size of the datasets in the performance result table with a '#' symbol. For example, \(0.876(\#5802)\) means the model accuracy is 0.876 when experimented on a subset of 5,802 source news items. If two papers utilised subsets of the same size, we consider that they have the same experiment setting and hence are comparable, even though the dataset splits for training, development and testing may still be slightly different.

As seen in Table III, propagation-based methods are the primary approaches utilising the above datasets. Many methods perform experiments with different versions of the datasets. For example, the Weibo [64] dataset has many versions in the table, as the dataset size varies from 4,197 to 9,528 source posts. The different versions come from filtering and re-sampling to make the dataset more suitable for the given experiment. For example, TGNF [40] and Dynamic-GNN [39] exploit temporal features to detect fake news, but some Weibo samples are too short to provide such temporal features and are hence filtered out. In addition, FakeNewsNet (PolitiFact and GossipCop) [72] is released in the form of Twitter post tokens due to privacy concerns. To get the full content, researchers need to crawl Twitter data using these tokens. However, since the contents on social media platforms are dynamic, some posts and users may no longer be available at a later date.
Therefore, the size of FakeNewsNet may be reduced in later papers. In terms of model performance comparison, we observe that models such as LOSIRD [17], UPSR [26], DUCK [35], SureFact [48] and TR-HGAN [54] achieve the best performance over these datasets. LOSIRD uses knowledge from Wikipedia for debunking fake news. UPSR questions the reliability of propagation patterns, because the propagation graph can be deliberately perturbed and imbalanced; it therefore proposes a Gaussian distribution-based edge enhancement to make propagation-based detection more robust. The strength of DUCK is that it considers the propagation pattern from multiple aspects, including linear and non-linear aspects, static and temporal aspects, as well as retweets and comments. SureFact and TR-HGAN both utilise heterogeneous graphs to model the social context of news. SureFact uses reinforcement learning to select informative parts of the social context graph and filter out noise. TR-HGAN improves the robustness of heterogeneous social context-based methods by addressing the graph topology imbalance and relation inauthenticity issues.

**Limitations of existing datasets and experiments.** Though much progress has been achieved, important issues persist:

1. A standard benchmark is currently lacking. Different models are implemented on different datasets and/or different versions, leading to challenges in assessing and comparing existing works.
2. Some of the existing datasets are also outdated. With the exception of FakeNewsNet, the other datasets were released at least five years ago. Considering the constantly changing news interests and sharing patterns, designing models on historic datasets is of limited value. For example, AI models are nowadays able to synthesise fake news, but no dataset exists to train models to detect AI-synthesised misinformation.
3. Accuracy and F1 scores should not be the only evaluation metrics. Other attributes, such as early detection and cross-domain detection capability, should also be considered. Most existing methods are trained on the full time span of the samples. This means that rich information and evidence are provided to the models to judge whether a news item is fake. However, such full-time-span information is not always available. In reality, if some news propagates explosively in a short period of time, the negative impact on society and the individuals involved may already be done. Therefore, models should have the capability to discover fake news from different topics and proactively intervene to stop the spread of fake news early on.

## VIII Challenges and Opportunities

As reviewed above, many automatic graph-based fake news detection methods have been proposed. However, few of them have been deployed in real application systems. For example, with Twitter, humans are still hired as fact checkers to address fake news. The gap between the variety of automatic detection methods and real-world deployment is a major shortfall of existing studies.
\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline **Feature** & **Twitter15** & **Twitter16** & **PHEME-5** & **PHEME-9** & **Weibo** & **PolitiFact** & **GossipCop** \\ \hline \hline Number of source news & 1,490 & 818 & 5,802 & 6,425 & 4,664 & 1,056 & 22,140 \\ \hline Number of users & 276,663 & 173,487 & 49,435 & 50,593 & 2,746,881 & 345,440 & 345,292 \\ \hline Number of posts & 331,612 & 204,820 & 103,212 & 105,354 & 3,805,656 & 564,129 & 1,396,548 \\ \hline Number of classes & 4 & 4 & 2 & 2 & 2 & 2 & 2 \\ \hline Number of fake news & - & - & 1,972 & 3,830 & 2,313 & 432 & 5,323 \\ \hline Number of real news & - & - & 2,402 & 4,023 & 2,351 & 624 & 16,817 \\ \hline Number of non-rumors & 374 & 205 & - & - & - & - & - \\ \hline Number of false rumors & 370 & 205 & - & - & - & - & - \\ \hline Number of real rumors & 372 & 205 & - & - & - & - & - \\ \hline Number of unverified rumors & 374 & 203 & - & - & - & - & - \\ \hline Average time length/news & 1,337 hours & 848 hours & - & - & 2,461 hours & - & - \\ \hline Average number of posts/news & 223 & 251 & - & - & 816 & - & - \\ \hline Maximum number of posts/news & 1,768 & 2,765 & - & - & 59,318 & - & - \\ \hline Minimum number of posts/news & 55 & 81 & - & - & 10 & - & - \\ \hline \hline \end{tabular} \end{table} TABLE II: Statistics of Datasets

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline **Method** & **Twitter15** & **Twitter16** & **PHEME-5** & **PHEME-9** & **Weibo** & **PolitiFact** & **GossipCop** \\ \hline \hline \multicolumn{8}{c}{**Knowledge-driven Methods**} \\ \hline FinerFact [12] & - & - & - & - & - & 0.920 (#815) & 0.862 (#7,162) \\ \hline KMGCN [14] & - & - & 0.876 (#5802) & - & 0.886 (#4664) & - & - \\ \hline KMAGCN [16] & - & - & 0.867 (#5802) & - & 0.944 (#9528) & - & - \\ \hline LOSIRD [17] & - & - & **0.914** (#5802) & **0.925** (#6425) & - & - & - \\ \hline \multicolumn{8}{c}{**Propagation-based Methods**} \\ \hline RvNN [22] & 0.723 (#1490) & 0.737 (#818) & - & - & - & - & - \\ \hline TRM-CPM [24] & - & - & 0.900 (#5802) & 0.919 (#6425) & - & - & - \\ \hline Bi-GCN [1] & 0.886 (#1490) & 0.880 (#818) & - & - & 0.961 (#4664) & - & - \\ \hline RNLPN [37] & - & - & - & 0.919 (#3164) & - & - & - \\ \hline EBGCN [25] & 0.892 (#1490) & 0.915 (#818) & - & 0.715 (#2402) & - & - & - \\ \hline UPSR [26] & - & - & - & - & - & **0.914** (#314) & **0.977** (#5464) \\ \hline EDCA [27] & 0.855 (#1490) & 0.880 (#818) & - & - & - & - & - \\ \hline GACL [28] & 0.901 (#1490) & 0.920 (#818) & - & 0.850 (#6425) & - & - & - \\ \hline RDCL [29] & - & - & 0.871 (#5802) & 0.864 (#6425) & - & - & - \\ \hline CCFD [30] & 0.856 (#1490) & 0.886 (#818) & - & - & 0.975 (#4532) & - & - \\ \hline UPDP [32] & - & - & - & - & - & 0.846 (#314) & 0.972 (#5464) \\ \hline DUCK [35] & 0.900 (#1490) & 0.910 (#818) & - & - & **0.980** (#4664) & - & - \\ \hline UnIPF [36] & 0.959 (#712) & 0.963 (#410) & - & - & - & 0.911 (#314) & 0.966 (#5464) \\ \hline Dynamic-GCN [38] & 0.827 (#1490) & 0.836 (#818) & - & - & 0.936 (#4664) & - & - \\ \hline Dynamic-GNN [39] & - & - & - & - & 0.957 (#4338) & - & - \\ \hline TGNF [40] & - & - & - & - & 0.968 (#4338) & - & - \\ \hline DDGCN [42] & - & - & 0.855 (#4657) & - & 0.948 (#5748) & - & - \\ \hline \multicolumn{8}{c}{**Heterogeneous Social Context-based Methods**} \\ \hline NDG [46] & - & - & - & - & 0.961 (#4197) & - & - \\ \hline SureFact [48] & - & - & - & - & - & **0.9413** (#815) & 0.8797 (#7,612) \\ \hline TR-HGAN [54] & **0.929** (#1490) & **0.932** (#818) & - & - & 0.963 (#4664) & - & - \\ \hline \hline \end{tabular} \end{table} TABLE III: Accuracy of Different Models on the Different Datasets

We consider several key issues from different aspects based on the reviewed graph-based fake news detection methods: _explainability_, _cross-domain detection_, _real-time detection_, and _efficiency and cost_. ### _Explainability_ _Explainability_ in fake news detection refers to the ability to explain why a news article might be considered misinformation or fake news. Some information in the fake news arena is incorrect in a way that can be shown directly by referring to authoritative knowledge sources, whilst other news may be posted and spread by propaganda social media accounts, may contain elements of truth, or may be a small part of a more complex news item used to influence the public. Existing work tends to answer such explainability challenges through model interpretability. Interpretability here refers to the ability to understand and make sense of the internal workings of a machine learning model [85]. It focuses on the model's internal mechanisms, such as the relationships between the input features and the mapped output predictions. By providing model interpretability, humans should be able to comprehend and reason about how and why the model classified a given news item as fake or real. Interpretability has been defined in different ways in the literature. The seminal work [86] shows how high-level statistics about the propagation networks (e.g., community density) are important for fake news detection. Another work [87] defines interpretability as the ability to identify the user comments that hold objections or contrary opinions and can reveal misinformation in the source posts. This is based on the self-correction ability of propagation, i.e., fake news usually receives more objections and alternative opinions than real news. Other studies [10, 58] define the interpretability of news content based on capturing misleading words. Many attempts to offer interpretability for fake news detection have exploited attention mechanisms [88]. The attention mechanism weights elements (e.g., users, posts, or words) in an input sequence or graph as the basis for the model's decision making (i.e., classifying a news item as being fake or not). Though much progress has been made, the gap between model interpretability and real explainability remains substantial. Some models introduce more sophisticated interpretability approaches. SureFact [48] considers the information of sub-graphs in a propagation graph to be imbalanced, meaning that some sub-graphs can contain useful information, such as anomaly patterns, that can be used to reveal the veracity of news, while other sub-graphs cannot. To select the more informative sub-graphs, a reinforcement learning-based selector is designed and aligned with the fake news detection process. By showing anomaly patterns in sub-graphs, SureFact explains why a news item has been classified as fake. FinerFact [12] takes a knowledge-based approach. It extracts potential fake claims from a news article and supporting evidence from user comments. By analysing the saliency of the evidence and through graph modelling, FinerFact explains which claims in the news might be fake and attempts to provide supporting evidence. SureFact [48] and FinerFact [12] offer fake news detection reasoning based on the propagation graph and textual content. Further work is required. On the one hand, existing explainability work is mainly driven by attention mechanism weights.
Words in texts or nodes in propagation graphs with higher weights are assumed to be more important; however, a semantic connection is lacking in this process. Even with a correct veracity prediction and accurate attention weights, this may not be persuasive enough for people to believe the explanation of the result. Witnessing the success of the semantic reasoning of large language models such as ChatGPT, the explainability of fake news detection could also be improved at the semantic level. On the other hand, when people reason about news veracity, the broader social context also needs to be considered. For example, the social context includes what is posted concurrently by other users, what the connections between those users are, what the verified knowledge background is, etc. Reasoning based on the broader social context is still under-explored and an area of future work. ### _Cross-domain Detection_ Fake news detection poses unique challenges for model generalisability. Textual content plays a key role in deciding whether a news item is fake, while the content distribution can vary substantially across different news domains. A model trained for news in one domain may suffer a substantial performance loss (around 30% [33]) when applied to fake news detection in another domain. A key challenge is how to obtain a generalisable fake news detection model. Early studies [1, 22] have shown that fake news and real news exhibit different propagation patterns, and they try to avoid the entity bias of individual domains by learning the specific propagation patterns of fake news. Data augmentation strategies [27] are often used to enhance the learning of propagation patterns and improve model robustness. However, a recent study [33] analysed the propagation of news from different domains and found that the propagation pattern also suffers from entity bias. Therefore, the study utilises multi-task learning to detect fake news and classify the news topic simultaneously. The assumption is that if a model can identify the topic of news, it can capture topic-agnostic features that mitigate the domain dependence of the trained model. Another possible solution to the entity-bias problem is to train models continuously using techniques such as active learning, i.e., keeping the models updated with fresh knowledge from real-time news sources. An initial study [89] explored this idea and demonstrated its feasibility. However, that study only experimented with NLP models, while graph-based methods remain under-explored. At present, cross-domain fake news detection is mainly studied under individual news settings that consider the content and context of a news item. From a broader perspective, it has been observed that different rumours propagate concurrently, even when some rumours have already been shown to be fake. When judging the veracity of a single news item, we can consider information from other concurrent news (e.g., similar information being shared by unverified users) and from historical news (e.g., users who have previously been detected spreading fake news). Such information can be used to improve fake news detection. In summary, capturing a broader social context also helps to understand the overall news context, and can contribute to a better understanding of news from various domains. ### _Real-time Detection_ The need for real-time detection of fake news cannot be overstated. Swift identification and prevention of the dissemination of fake news is crucial.
However, real-time detection remains a significant challenge for current methods due to three limitations: weakness in temporal modelling; the lack of connections between multiple social media platforms; and the fact that any solution has to be accepted and rolled out by the major social media platforms themselves. Note that, for the last point, this may not be in the business interests of the platforms; e.g., fake news is often a source of attraction and engagement for many user communities. It is the case, however, that the majority of models discussed in this survey are built upon static graphs. These include the static propagation-based models discussed in Section V-A, the knowledge-driven models outlined in Section IV, and the heterogeneous social context-based models presented in Section VI. It is worth mentioning that utilising static graphs results in the loss of (current) temporal information and hence may not effectively capture the dynamic nature of news on an ongoing and evolving basis. Existing studies, such as the dynamic propagation-based methods (e.g., Section V-B), tend to treat samples (i.e., news articles) in a fake news dataset as independent and identically distributed. They formulate fake news detection as an inductive classification task: each news sample in a dataset is considered individually and classified only based on information from that sample, e.g., its text content and propagation pattern. In the real world, news items posted on the same topic, in a short time frame, or by closely related users are often correlated. Such correlations are missed if the news items are treated separately. In contrast, transductive methods consider the distribution of all samples in a dataset and how they may be used to address veracity issues. Using such methods, samples distributed closely, e.g., posted by users from the same community, are more likely to be classified into the same class. Some heterogeneous social context-based methods [50, 51] are based on transductive learning, which gives them a capability to incorporate contextual information from related news samples. However, a critical limitation of these methods is that only a limited amount of information is utilised in their models. As a result, these models only focus on a small scope of context, e.g., a subset of users who share news on similar topics. A larger scope including the external context, i.e., the setting in which the news to be classified is created and posted (e.g., recent public opinion or attention beyond the immediate social media context), is still largely unexplored. To incorporate a larger external context, NEP [90] collects a large volume of news from verified outlets (e.g., Huffington Post, NPR, and Daily Mail) to form the external context. When classifying a given news item, the verified news articles posted around the time when the target news item was posted are compared with the target news item. NEP is not a graph-based method, however; it is based on external knowledge retrieval and text comparison. A graph-based method considering the external context and temporal information offers a promising future direction. One challenge here is that the external context is typically not as dynamic as social media news; e.g., official news articles are typically published only after all information sources are checked and verified. ### _Efficiency and Cost_ The computational complexity and scalability of detection algorithms have not been fully explored.
Fake news detection involves analysing large volumes of textual and multimedia data, which can be computationally intensive. Scaling fake news detection methods to handle the vast amount of information generated on social media and online platforms can thus be challenging. Systems must be able to process a high volume of news articles, tweets, posts, and multimedia content in real time. The processing power and resources required for real-time or large-scale analysis are a major challenge, especially for resource-constrained systems. As more and more graph-based algorithms with increasingly complicated graph modelling are proposed, the complexity and scalability demands require further research exploration. Secondly, as mentioned earlier, the existing datasets tend to be outdated given the dynamic news topics and especially the impact of emerging AI techniques. Nowadays, AI has the ability to synthesise text, images and video at very low cost. However, gathering reliable and diverse training data for fake news detection can be time-consuming and costly. The data collection process may involve manual annotation, fact-checking, and verification, which can require human expertise and considerable effort. From a graph-based detection perspective, it would also be necessary to monitor the broader social context, which is challenging, especially for emerging patterns on social media platforms. It also requires access to and use of huge volumes of data. If any solutions were to be used to tackle the current news veracity issues, then no doubt fake news producers would intentionally craft content to bypass such detection algorithms. This might be through adversarial attacks aiming to manipulate the features or propagation of news articles to deceive detection models. Defending against such attacks requires continuous monitoring, updating models, and staying ahead of evolving fake news techniques. Currently, only a few efforts have explored adversarial learning technology [28] and social propagation edge enhancement [51]. Last but not least, deploying and maintaining an automatic fake news detection system requires appropriate infrastructure and ongoing maintenance. This includes hosting resources, storage, monitoring, and periodic updates to adapt to changing news patterns and techniques. This should potentially be the remit of social media platform providers, but they may argue that others, e.g., governments, should take this responsibility. Major platform providers such as Twitter explicitly present themselves as open platforms offering a voice for all in the global town square. Many of the most-followed accounts belong to individuals who have been demonstrably shown to post factually incorrect material; hence it is not in the platforms' business interests to block such users or stop such content from arising.
2307.04042
Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performances in the sense of the $L2$-norm. In contrast, it is difficult for the neural estimator with least-squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.
Masaaki Imaizumi
2023-07-08T20:24:14Z
http://arxiv.org/abs/2307.04042v1
Sup-norm convergence of deep neural network estimator for nonparametric regression by adversarial training ###### Abstract. We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performances in the sense of the \(L^{2}\)-norm. In contrast, it is difficult for the neural estimator with least-squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.

## 1. Introduction

We study the nonparametric regression problem. Suppose we observe \((X_{1},Y_{1}),...,(X_{n},Y_{n})\in[0,1]^{d}\times\mathbb{R}\) with dimension \(d\in\mathbb{N}\) that are independent and identical copies of a \([0,1]^{d}\times\mathbb{R}\)-valued random element \((X,Y)\) which follows the regression model: \[Y=f^{*}(X)+\xi, \tag{1}\] where \(f^{*}:[0,1]^{d}\rightarrow\mathbb{R}\) is an unknown function, \(\xi\) is a random noise variable with zero mean and finite variance that is independent of \(X\), and \(X\) follows a marginal measure \(P_{X}\) on \([0,1]^{d}\). Our interest is to utilize a deep neural network model and develop an estimator \(\widehat{f}\) from the model and the \(n\) observations, then study its estimation risk in terms of the sup-norm, referred to as an \(L^{\infty}\)-risk: \[\sup_{x\in[0,1]^{d}}|\widehat{f}(x)-f^{*}(x)|,\] which implies uniform convergence of the estimator. In this study, we prove that an adversarial training framework can provide an estimator with deep neural networks whose \(L^{\infty}\)-risk converges, then derive a convergence rate of the risk and show the minimax optimality of the rate.

### Background and Question

Deep learning is a data-driven statistical method using deep neural network models [1], which have multiple layers. It has many well-known extensions, such as a deep convolutional network [16], a residual network [11], and an attention mechanism [20]. Owing to the multiple layers and the well-designed training algorithm, deep learning has achieved highly accurate prediction performance in various tasks. The framework of nonparametric regression has been actively used to analyze deep neural networks, and many roles of deep learning have been revealed. A deep neural network is a model of functions \(f:[0,1]^{d}\rightarrow\mathbb{R}\) with multiple layers such that \[f(x)=g_{L}\circ g_{L-1}\circ\cdots\circ g_{1}(x), \tag{2}\] where \(g_{1}(\cdot),...,g_{L}(\cdot)\) are trainable functions given by the \(L\) layers. Deep learning is a method of fitting the function by deep neural networks to observed data; hence it is naturally regarded as a method for the nonparametric regression problem.
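To make the setup concrete, the following sketch draws samples from the regression model (1) and builds a ReLU network of the compositional form (2). It is an illustration only: the target `f_star`, the noise level, and the width and depth are arbitrary choices, not quantities from the paper.

```python
import torch
import torch.nn as nn

d, n = 3, 256  # input dimension and sample size (arbitrary)

def f_star(x):
    # an assumed smooth target function on [0, 1]^d
    return torch.sin(2 * torch.pi * x[:, 0]) * x[:, 1] + x[:, 2] ** 2

X = torch.rand(n, d)                  # covariates X_i drawn from [0, 1]^d
Y = f_star(X) + 0.1 * torch.randn(n)  # Y_i = f*(X_i) + xi_i with Gaussian noise

# f(x) = g_L o ... o g_1(x): affine maps composed with ReLU activations, as in (2)
f_theta = nn.Sequential(
    nn.Linear(d, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
print(f_theta(X).shape)  # (n, 1): network outputs at the sample points
```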
Specifically, in most studies on nonparametric regression with deep neural networks, the following least-squares estimator has been studied: \[\widehat{f}^{\text{LS}}\in\operatorname*{argmin}_{f\in\mathcal{F}}\frac{1}{n} \sum_{i=1}^{n}(Y_{i}-f(X_{i}))^{2},\] where \(\mathcal{F}\) is a set of functions by deep neural networks of the form (2). Further, the performance of the estimator \(\widehat{f}^{\text{LS}}\) has been studied via its \(L^{2}\)-risk \[\left\|\widehat{f}^{\text{LS}}-f^{*}\right\|_{L^{2}}^{2}:=\mathbb{E}\left[( \widehat{f}^{\text{LS}}(X)-f^{*}(X))^{2}\right].\] Using this framework, seminal works [1, 2, 1] show that the multilayer structure of deep neural networks fits an internal structure of the unknown function \(f^{*}\) and that its estimation error achieves a faster convergence. [1, 1, 1, 2] investigate statistical properties of the neural estimators such as asymptotic distribution and robustness. [1, 1, 2, 3] show that the multilayer structure of the neural estimator is effective when the target function \(f^{*}\) has irregular properties such as discontinuity and heterogeneous smoothness. [1, 2, 3, 4, 5] show an adaptive property of the neural estimators to an intrinsic low-dimensionality of the observations, e.g., data concentrating on a low-dimensional manifold in the domain. Studying the sup-norm value of the estimation error has been an important interest in nonparametric regression problems. The sup-norm value, referred to as an \(L^{\infty}\)-risk, is a sharper measure of the accuracy and sensitivity of estimators than the \(L^{2}\)-risk. Furthermore, the sup-norm convergence of errors is useful for statistical inference, such as a uniform confidence band, and is effective under covariate shift in transfer learning [5]. For several conventional (non-deep) nonparametric estimators of \(f^{*}\), sup-norm convergence has been actively studied. Classically, the convergence of kernel methods [1, 2, 3, 4, 5] and series methods [1, 2, 1, 1, 1] has been investigated. More recently, the convergence of wavelet methods [1, 2], methods with reproducing kernel Hilbert spaces [2], and Gaussian process methods [1, 2, 3] has been clarified. Roughly speaking, when studying the sup-norm convergence of these non-deep estimators \(\widehat{f}^{\text{ND}}\), the following linear-in-basis form plays an effective role: \[\widehat{f}^{\text{ND}}=\sum_{j\in J}w_{j}\psi_{j}(\cdot), \tag{3}\] where \(J\) is an index set, \(\{w_{j}\}_{j\in J}\) is a set of weights in \(\mathbb{R}\) trained by the least-squares approach, and \(\{\psi_{j}(\cdot)\}_{j\in J}\) is a family of basis functions (possibly depending on covariates) such as wavelets or kernels. Since the non-deep estimators have this linear form, it is possible to control the \(L^{\infty}\)-risk effectively and show its convergence, with the exception of a general result by [5]. Our interest is to evaluate the \(L^{\infty}\)-risk (5) of an estimator using deep neural networks. Since the deep neural network model (2) does not have the linear-in-basis form (3) of the non-deep methods, the existing analyses cannot handle the \(L^{\infty}\)-risk of deep neural networks.
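The gap between the two risks can also be seen numerically. The following sketch (again illustrative; the estimator `f_hat` is a hypothetical stand-in with a localized defect) approximates the \(L^{2}\)- and \(L^{\infty}\)-risks on a fine grid for \(d=1\):

```python
import numpy as np

f_star = lambda x: np.sin(2 * np.pi * x)  # true function on [0, 1]
# f_hat matches f_star except for a narrow spike near x = 0.5
f_hat = lambda x: f_star(x) + 0.3 * (np.abs(x - 0.5) < 0.01)

grid = np.linspace(0.0, 1.0, 100_001)
err = np.abs(f_hat(grid) - f_star(grid))

l2_risk = np.sqrt(np.mean(err ** 2))  # small: the spike carries little L2 mass
sup_risk = err.max()                  # large: the sup-norm detects the spike
print(l2_risk, sup_risk)              # roughly 0.04 versus 0.3
```

The spike is nearly invisible to the \(L^{2}\)-risk but dominates the \(L^{\infty}\)-risk, which is exactly the sensitivity that sup-norm guarantees are meant to capture.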
Based on the background, we have the following questions: _Is it possible to achieve an estimator of \(f^{*}\) by deep neural networks whose \(L^{\infty}\)-risk converges?_ _If so, is it possible to show the optimality of a convergence rate of the \(L^{\infty}\)-risk?_

### Introduction to Adversarial Training

_Adversarial training_ is a training scheme for deep neural networks, developed to deal with _adversarial attacks_ on predictions by neural networks. An adversarial attack is a methodology to mislead deep neural networks in their predictions by putting a tiny perturbation into a covariate of a trained deep neural network. Since functions given by trained deep neural networks are unstable, the perturbed samples, called adversarial samples, vary the outputs of deep neural networks drastically. [1] reported the phenomenon by introducing a case in which a deep neural network misclassified an image of a panda as an image of a gibbon after very fine noise was added to the image. After this finding, many adversarial attack methods were developed [1, 2, 3, 4, 5], threatening the robustness of neural networks. A standard approach to adversarial training is to minimize a robustified empirical risk, which is measured by adding perturbations to the observed input variable [1, 2, 3]. Rigorously, an estimator by adversarial training for regression is defined as the minimizer of the following empirical risk: \[\min_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}:\|x^{\prime}- X_{i}\|_{\infty}\leq h}(Y_{i}-f(x^{\prime}))^{2}, \tag{4}\] with some \(h>0\). The outer minimization is solved by gradient descent, as with the usual least-squares loss, and the inner maximization is solved by gradient ascent. Several efficient algorithms have been proposed to solve this problem effectively [2, 3, 4], such as the fast gradient sign method [1, 1]. The optimization process is summarized as follows (a minimal code sketch of this loop is given at the end of this subsection):

(i) Initialize \(f\in\mathcal{F}\) and repeat the following steps (ii) and (iii).
(ii) For each \((Y_{i},X_{i})\), find \(x_{i}^{*}=\operatorname*{argmax}_{x^{\prime}\in\{x:\|x-X_{i}\|_{\infty}\leq h\}}(Y_{i}-f(x^{\prime}))^{2}\).
(iii) Update the function \(f\gets f-\eta\nabla(n^{-1}\sum_{i=1}^{n}(Y_{i}-f(x_{i}^{*}))^{2})\), where \(\eta>0\) is a learning rate and \(\nabla\) denotes a derivative with respect to the neural network parameters of \(f\).

Note that the efficiency of the algorithm is not a primary interest of this study; hence we focus on the estimation error of the global minimizer of the adversarial risk. Several works actively pursue a theoretical understanding of adversarial training. One of the most significant issues is a trade-off between the robustness and accuracy of adversarial training, which concerns the possibility of balancing the predictive performance of deep neural networks with their ability to defend against adversarial samples. A risk bound and the sample complexity of adversarial training in general settings are widely examined [1, 1, 2, 3, 5, 6]. The predictive performance of adversarial training has also been studied, particularly in linear regression models with over-parameterization [1, 2, 3, 4, 5].
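As a concrete illustration of steps (i)–(iii), the following sketch implements the minimax problem (4) with projected gradient ascent for the inner maximization (in the spirit of the fast gradient sign method). It reuses `f_theta`, `X`, and `Y` from the earlier sketch; the step sizes and iteration counts are arbitrary, and this is not the paper's implementation.

```python
import torch

def adversarial_risk(f, X, Y, h=0.05, ascent_steps=10, step=0.01):
    """Approximate the ordinary adversarial risk (4): for each (X_i, Y_i),
    search x' with ||x' - X_i||_inf <= h maximizing (Y_i - f(x'))^2."""
    x_adv = X.clone().requires_grad_(True)
    for _ in range(ascent_steps):
        inner = ((Y - f(x_adv).squeeze(-1)) ** 2).sum()
        (grad,) = torch.autograd.grad(inner, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                 # ascent step (step ii)
            x_adv = torch.max(torch.min(x_adv, X + h), X - h)  # project onto the ball
            x_adv = x_adv.clamp(0.0, 1.0)                      # stay inside [0, 1]^d
        x_adv.requires_grad_(True)
    return ((Y - f(x_adv).squeeze(-1)) ** 2).mean()

# outer minimization over the network parameters (step iii)
optimizer = torch.optim.SGD(f_theta.parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    risk = adversarial_risk(f_theta, X, Y)
    risk.backward()
    optimizer.step()
```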
### This Study

The purpose of this study is to investigate the sup-norm convergence of an error by deep neural networks using the adversarial training scheme. For this aim, we develop a novel formulation of adversarial training and study its efficiency. Specifically, our formulation first applies a preprocessing that smooths the output variable, and then defines a neural estimator as a minimizer of an empirical adversarial risk associated with the preprocessing. The preprocessing serves to reduce the bias induced on the estimator by the perturbations of the adversarial training scheme. As a specific form of preprocessing, we can employ several nonparametric estimators, including the nearest neighbor method and the kernel method. As a result, we derive an upper bound on the \(L^{\infty}\)-risk of the estimator with deep neural networks using our adversarial training scheme, and then reveal several properties of its convergence rate. Specifically, our contributions are summarized as follows.

1. We derive a convergence rate of the \(L^{\infty}\)-risk of the estimator when the true function \(f^{*}\) belongs to the Holder space. The derived rate achieves the minimax optimal rate with an appropriately designed preprocessing.
2. We show the inconsistency of the ordinary adversarial training without preprocessing. This is due to the inability of an output variable in the regression problem to accommodate perturbations of the adversarial training.
3. Our approach applies not only to adversarial training with a squared loss but also to a general convex loss. Specifically, we study the \(L^{\infty}\)-risk of the regression problem with general losses, which is useful for handling data with heavy-tailed noise.
4. We additionally study the \(L^{\infty}\)-risk when the true function \(f^{*}\) has a heterogeneous smoothness, i.e., it belongs to the Besov space. Our analysis shows the minimax optimality of the convergence rate of the \(L^{\infty}\)-risk in this case as well.
5. Our result is applicable to a wide range of architectures of deep neural networks, such as fully-connected dense layers. Also, it allows both finite-depth networks and finite-width networks.

We conduct numerical experiments and confirm that the experimental results are consistent with our theory. Our results provide new implications for the understanding of adversarial training, regarding the trade-off between robustness and accuracy of prediction by adversarial training. Along this line, we show that (i) ordinary adversarial training is not consistent in the regression problem in the first place, (ii) the robustness obtained by adversarial training is described by sup-norm convergence of the estimation error, and (iii) adversarial training achieves the optimal rate with appropriate preprocessing. Technical contributions in our proof are summarized as follows. First, we derive an upper bound on the sup-norm of the estimation error by the adversarial risk up to constants. This bound uses the volume of a neighborhood set of an input variable, which is utilized to design the adversarial perturbation. Second, we develop an empirical process technique for the evaluation of the preprocessing. To control the effects of the preprocessing and the adversarial training simultaneously, we involve two levels of evaluation of biases and variances as appropriate.

### Organization

The rest of this paper is organized as follows. Section 2 gives a setup for the nonparametric regression problem and the definition of deep neural networks. Section 3 gives a general formulation of adversarial training and an overview of its analysis. Furthermore, the section shows that naive adversarial training does not give a consistent estimator.
In Section 4, as a main result, we derive an upper bound on a sup-norm of the estimation error of the developed estimator. Section 5 gives extensions and applications. Section 6 gives numerical simulations, and Section 7 concludes.

### Notation

For \(n\in\mathbb{N}\), \([n]:=\{1,2,...,n\}\) is a set of natural numbers no more than \(n\). For \(a,a^{\prime}\in\mathbb{R}\), \(a\lor a^{\prime}:=\max\{a,a^{\prime}\}\) is the maximum. \(\lfloor a\rfloor\) denotes the largest integer which is no more than \(a\). The Euclidean norm of a vector \(b\in\mathbb{R}^{d}\) is denoted by \(\|b\|_{2}:=\sqrt{b^{\top}b}\). Let \(C_{w}\) be a positive finite constant depending on a variable \(w\). \(\mathbf{1}\{E\}\) denotes the indicator function; it is \(1\) if the event \(E\) holds and \(0\) otherwise. For a matrix \(A\in\mathbb{R}^{N\times N}\), \(A_{i,j}\) denotes the \((i,j)\)-th element of \(A\) for \(i,j=1,...,N\). For a measurable function \(f:\Omega\rightarrow\mathbb{R}\) on a set \(\Omega\subset\mathbb{R}^{d}\), \(\|f\|_{L^{p}(\mu)}:=(\int|f(x)|^{p}d\mu(x))^{1/p}\) denotes the \(L^{p}\)-norm for \(p\in[1,\infty)\) with a measure \(\mu\), and \(\|f\|_{L^{\infty}}:=\sup_{x\in\Omega}|f(x)|\) denotes the sup-norm. Also, \(L^{p}(\Omega)\) denotes a set of measurable functions such that \(\|f\|_{L^{p}(\lambda)}<\infty\) with the Lebesgue measure \(\lambda\). For \(x\in\mathbb{R}^{d}\), \(\delta_{x}\) denotes the Dirac measure at \(x\). For a function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) with a multivariate input \((x_{1},...,x_{d})\in\mathbb{R}^{d}\) and a multi-index \(a=(a_{1},...,a_{d})\in\mathbb{N}^{d}\), \(\partial^{a}f(x_{1},...,x_{d}):=\partial_{x_{1}}^{a_{1}}\partial_{x_{2}}^{a_{ 2}}\cdots\partial_{x_{d}}^{a_{d}}f(x_{1},...,x_{d})\) denotes a partial derivative with the multi-index. For a variable \(x\), \(C_{x}\) denotes some positive finite constant that polynomially depends on \(x\), and it can have different values in different places. For sequences of reals \(\{a_{n}\}_{n\in\mathbb{N}}\) and \(\{b_{n}\}_{n\in\mathbb{N}}\), \(a_{n}\asymp b_{n}\) denotes \(a_{n}/b_{n}\to c\) as \(n\rightarrow\infty\) with some \(c\in(0,\infty)\), \(a_{n}=O(b_{n})\) denotes \(|a_{n}|\leq M|b_{n}|\), and \(a_{n}=\Omega(b_{n})\) denotes \(|a_{n}|\geq M|b_{n}|\) with some \(M>0\) for all sufficiently large \(n\). \(a_{n}=o(b_{n})\) denotes \(|a_{n}|\leq M|b_{n}|\) for any \(M>0\) and all sufficiently large \(n\). \(\widetilde{O}(\cdot)\) and \(\widetilde{\Omega}(\cdot)\) are the notations \(O(\cdot)\) and \(\Omega(\cdot)\) ignoring multiplied polynomials of \(\log(n)\), respectively. For a sequence of random variables \(\{X_{n}\}_{n\in\mathbb{N}}\), \(X_{n}=O_{P}(a_{n})\) denotes \(\Pr(|X_{n}/a_{n}|>M)\leq\varepsilon\) for any \(\varepsilon>0\) and some \(M>0\) for all sufficiently large \(n\), and \(X_{n}=o_{P}(a_{n})\) denotes \(\lim_{n\rightarrow\infty}\Pr(|X_{n}/a_{n}|>\varepsilon)=0\) for any \(\varepsilon>0\).

## 2. Problem Setting and Preliminaries

### Nonparametric Regression and \(L^{\infty}\)-Risk

#### 2.1.1. Model and Observations

For the nonparametric regression, suppose that we have \(n\) observations \((X_{1},Y_{1}),...,(X_{n},Y_{n})\in[0,1]^{d}\times\mathbb{R}\) that are independent and identical copies of a random variable \((X,Y)\) which follows the regression model (1). Note that the model is characterized by the unknown function \(f^{*}\) and the noise variable \(\xi\). Let \(P_{X}\) be the marginal measure of \(X\).

#### 2.1.2. Basic Assumption
We introduce a standard assumption on the regression model. **Assumption 1**.: \(P_{X}\) _has a density function that is uniformly lower bounded by \(C_{P_{X}}>0\) on \([0,1]^{d}\)._ Assumption 1 is important for estimating \(f^{*}\) on the entire domain \([0,1]^{d}\). Such assumptions are commonly introduced in nonparametric regression for neural networks [1, 10]. We suppose that \(f^{*}\) belongs to a function class with Holder smoothness with an index \(\beta>0\). To this end, we define a ball of the Holder space with \(\beta>0\) as \[\mathcal{H}^{\beta}([0,1]^{d}):=\Bigg\{f:[0,1]^{d}\rightarrow\mathbb{R}\ |\\ \sum_{b\in\mathbb{N}^{d}:\|b\|_{1}<\lfloor\beta\rfloor}\|\partial^{b}f\|_{L^{ \infty}}+\sum_{b\in\mathbb{N}^{d}:\|b\|_{1}=\lfloor\beta\rfloor}\sup_{x,x^{\prime}\in[0,1 ]^{d},x\neq x^{\prime}}\frac{|\partial^{b}f(x)-\partial^{b}f(x^{\prime})|}{\| x-x^{\prime}\|_{\infty}^{\beta-\lfloor\beta\rfloor}}\leq B\Bigg\},\] with its radius \(B\geq 1\). Intuitively, \(\mathcal{H}^{\beta}([0,1]^{d})\) is a set of functions on \([0,1]^{d}\) that are \(\lfloor\beta\rfloor\) times partially differentiable and whose derivatives are \((\beta-\lfloor\beta\rfloor)\)-Holder continuous. **Assumption 2**.: _There exists \(\beta>0\) such that \(f^{*}\in\mathcal{H}^{\beta^{\prime}}([0,1]^{d})\) holds for all \(\beta^{\prime}\in(0,\beta]\)._ Imposing differentiability on \(f^{*}\) is the usual setting for nonparametric regression (see [13], for example). Further, in statistical studies on deep neural networks, the estimation of functions with more complex structures has also been studied [1, 1, 10, 11]. We will discuss an extension of this assumption in Section 5.

#### 2.1.3. Goal: Sup-norm Convergence

Our goal is to estimate the true function \(f^{*}\) in the model (1) and study the estimation error of an estimator in terms of the sup-norm \(\|\cdot\|_{L^{\infty}}\). Rigorously, we will develop an estimator \(\widehat{f}\) and study its \(L^{\infty}\)-risk defined as follows: \[\|\widehat{f}-f^{*}\|_{L^{\infty}}:=\sup_{x\in[0,1]^{d}}|\widehat{f}(x)-f^{*} (x)|. \tag{5}\] The \(L^{\infty}\)-risk is a sharp measure for the robustness of estimators and is applied to statistical inference such as a uniform confidence band. To understand this point, we discuss its relation to the commonly used \(L^{2}\)-risk, which is the special case \(p=2\) of the following \(L^{p}\)-norm (\(p\in[1,\infty)\)): \[\|\widehat{f}-f^{*}\|_{L^{p}(P_{X})}^{p}:=\mathbb{E}_{X}\left[|\widehat{f}(X)- f^{*}(X)|^{p}\right].\] Since the \(L^{\infty}\)-risk bounds the \(L^{p}\)-risk, i.e., \(\|\widehat{f}-f^{*}\|_{L^{\infty}}\geq\|\widehat{f}-f^{*}\|_{L^{p}(P_{X})}\) holds for every \(p\geq 1\), the \(L^{\infty}\)-risk implies stronger convergence. Figure 1 illustrates the difference between convergence in the \(L^{2}\)-norm and in the sup-norm. In the related studies with neural networks (e.g. [1, 10]), the \(L^{2}\)-risk has been mainly studied, but the \(L^{\infty}\)-risk of neural network estimators has not been proved to converge.

Figure 1. Examples of the \(L^{2}\)- and \(L^{\infty}\)-risks. The black curve is the true function \(f^{*}\) and the red curve is an estimator. On the left, the \(L^{2}\)-risk is determined by the volume of the difference between the two functions; even if the estimator is unstable for a particular input, the \(L^{2}\)-risk remains small. On the right, the \(L^{\infty}\)-risk is determined by the maximum distance between the two functions at a given input point.
When the \(L^{\infty}\)-risk is small, the two functions have similar shapes.

### Deep Neural Network Model

We define a deep neural network, which is a model of functions with multiple layers. Specifically, we consider deep neural networks with fully-connected layers and the rectified linear unit (ReLU) activation function, one of the most commonly used activations. Let \(L\in\mathbb{N}\) be a number of layers, and \(\mathcal{W}=(W_{1},...,W_{L+1})\in\mathbb{N}^{L+1}\) be a tuple of width parameters, where \(W_{\ell}\) denotes the width of the \(\ell\)-th layer. Deep neural networks have a weight matrix \(A_{\ell}\in\mathbb{R}^{W_{\ell+1}\times W_{\ell}}\) and a bias vector \(b_{\ell}\in\mathbb{R}^{W_{\ell+1}}\) for each \(\ell\in[L]\). For each \(d\in\mathbb{N}\), we introduce a ReLU activation function \(\sigma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) such that \(\sigma(z)=((z_{1}\lor 0),(z_{2}\lor 0),...,(z_{d}\lor 0))^{\top}\) for \(z=(z_{1},...,z_{d})\in\mathbb{R}^{d}\). For each \(\ell\in[L-1]\), we define a map \(g_{\ell}:\mathbb{R}^{W_{\ell}}\rightarrow\mathbb{R}^{W_{\ell+1}}\) by the \(\ell\)-th layer as \[g_{\ell}(z)=\sigma\left(A_{\ell}z+b_{\ell}\right),\ z\in\mathbb{R}^{W_{\ell}}.\] For the last \(L\)-th layer, we define \(g_{L}(z)=A_{L}z+b_{L}\) with \(z\in\mathbb{R}^{W_{L}}\). For \(L\) and \(\mathcal{W}\), we define a parameter space \(\Theta_{L,W}:=(\mathbb{R}^{W_{2}\times W_{1}}\times\mathbb{R}^{W_{2}})\times (\mathbb{R}^{W_{3}\times W_{2}}\times\mathbb{R}^{W_{3}})\times\cdots\times( \mathbb{R}^{W_{L+1}\times W_{L}}\times\mathbb{R}^{W_{L+1}})\) whose elements are \(\theta=((A_{1},b_{1}),\)\((A_{2},b_{2}),...,(A_{L},b_{L}))\); then we define a function \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) by a deep neural network with \(d=W_{1}\) and \(W_{L+1}=1\) as \[f_{\theta}(x)=g_{L}\circ g_{L-1}\circ\cdots\circ g_{1}(x),\ x\in[0,1]^{d}. \tag{6}\] Intuitively, \(f_{\theta}(x)\) is constituted by compositions of \(L\) maps by the multiple layers with the maximum width \(\|\mathcal{W}\|_{\infty}=\max_{\ell\in[L+1]}W_{\ell}\). There are at most \(\sum_{\ell=1}^{L}(W_{\ell}+1)W_{\ell+1}\leq L(\|\mathcal{W}\|_{\infty}+1)^{2}\) parameters in the deep neural network model. We introduce a set of functions by deep neural networks with \(L\) layers and maximum width \(W\). With a tuple \((L,W)\in\mathbb{N}^{2}\) and an upper bound \(B\geq 1\), we define the set of functions by deep neural networks as \[\mathcal{F}(L,W):=\Big{\{}f_{\theta}\text{ as in }(6)\ |\ \|f_{\theta}\|_{L^{\infty}}\leq B,\theta\in\Theta_{L,W},\|\mathcal{W}\|_{ \infty}\leq W\Big{\}}. \tag{7}\] The condition on the upper bound \(B\) can be satisfied by a clipping operation using the ReLU activation function [10]. This definition of deep neural networks includes several variations of neural networks. If the parameter matrices \(A_{\ell}\) are not sparse, the defined neural network is a fully-connected neural network. If the matrices \(A_{\ell}\) are constrained to be sparse with some structure, it is equivalent to a convolutional neural network [14] or a residual network [17]. **Remark 1**.: One advantage of the definition (7) is that it controls the width \(W\) and depth \(L\) of neural networks, values that can be easily specified when designing neural network models. This is in contrast to manipulating the number of nonzero parameters and the maximum parameter value, which are difficult to control in practice (for example, see [1]).
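As an illustration of the class \(\mathcal{F}(L,W)\), the sup-norm bound \(\|f_{\theta}\|_{L^{\infty}}\leq B\) can be enforced by clipping the network output; since \(\mathrm{clip}(y;-B,B)=\sigma(y+B)-\sigma(y-B)-B\), the clipped function remains a ReLU network. The sketch below is a hypothetical construction with arbitrary hyperparameters, not code from the paper.

```python
import torch
import torch.nn as nn

class ClippedReLUNet(nn.Module):
    """A depth-L, width-W ReLU network whose output is clipped to [-B, B],
    so the sup-norm condition in the definition of F(L, W) holds."""
    def __init__(self, d, L=4, W=64, B=1.0):
        super().__init__()
        widths = [d] + [W] * (L - 1)
        layers = []
        for w_in, w_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Linear(w_in, w_out), nn.ReLU()]
        layers.append(nn.Linear(widths[-1], 1))  # last layer: affine, no activation
        self.net = nn.Sequential(*layers)
        self.B = B

    def forward(self, x):
        # clamp(y, -B, B) equals relu(y + B) - relu(y - B) - B, so clipping
        # keeps the function inside the ReLU network class
        return torch.clamp(self.net(x), -self.B, self.B)

f_theta = ClippedReLUNet(d=3)  # an element of F(L=4, W=64) with B = 1
```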
## 3. Adversarial Training Estimator for Regression

### Ordinary Adversarial Training and its Inconsistency

We introduce a framework of adversarial training. The adversarial training framework defines its loss using an input point in the neighborhood of a data point that maximizes the loss, as reviewed in (4). Rigorously, with a scale multiplier \(h\in(\underline{h},1)\) with \(\underline{h}>0\), we consider a neighbourhood of \(x\in[0,1]^{d}\) as \[\Delta_{h}^{p}(x)=\{x^{\prime}\in[0,1]^{d}\ |\ \|x-x^{\prime}\|_{p}\leq h\} \subset[0,1]^{d}.\] Then, we consider the following empirical adversarial risk with a function \(f:[0,1]^{d}\to\mathbb{R}\) and \(p\geq 1\): \[R_{n}^{\mathrm{o}}(f):=\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{\prime}\in\Delta_{h}^{ p}(X_{i})}(Y_{i}-f(x^{\prime}))^{2}. \tag{8}\] We can define an estimator of \(f^{*}\) as the minimizer of this empirical adversarial risk: \[\tilde{f}:=\operatorname*{argmin}_{f\in\mathcal{F}(L,W)}R_{n}^{\mathrm{o} }(f).\] The minimax optimization in the problem (8) is solved by various algorithms [1, 1, 1, 2].

#### 3.1.1. Inconsistency of Ordinary Adversarial Training

In this section, we show the inconsistency of \(\tilde{f}\) by ordinary adversarial training. Specifically, we obtain the following result. **Proposition 1**.: _Suppose \(n\geq 3\). There exist a sub-Gaussian noise \(\xi_{i}\), \(f^{*}\in\mathcal{H}^{1}([0,1]^{d})\), \(P_{X}\), and \(h\in(0,1)\) such that the estimator \(\tilde{f}\) minimizing (8) satisfies the following inequality with some constant \(c^{*}>0\), with probability at least 0.5:_ \[\|\tilde{f}-f^{*}\|_{L^{2}(P_{X})}^{2}\geq c^{*}.\] This result shows that the \(L^{2}\)-risk of \(\tilde{f}\) does not converge to zero under ordinary adversarial training, regardless of the sample size \(n\) and the neural network architecture. Since the \(L^{\infty}\)-risk is bounded below by the \(L^{2}\)-risk, the ordinary adversarial training also yields an inconsistent estimator in the sense of the sup-norm. This result is not limited to the choice of model used for the estimator; hence it occurs with methods other than neural networks. Intuitively, ordinary adversarial training produces a bias by the design of perturbations on inputs (see the middle panel of Figure 2). This is because the perturbation makes \(\tilde{f}\) fit the output \(Y_{i}\) at a shifted input \(x^{\prime}=X_{i}+\varsigma\), which creates the inconsistency. Hence, we need to correct the bias of the ordinary adversarial training in the regression problem.

### Proposed Framework of Adversarial Training

We introduce an empirical risk function for adversarial training based on a quadratic loss. We develop a random map \(\widehat{Y}:[0,1]^{d}\to\mathbb{R}\) for surrogate outputs, which is referred to as a _preprocessed output_. This notion is a general expression covering several methods, and its specific configurations will be given later. With \(\widehat{Y}\), we define an empirical preprocessed adversarial risk as \[R_{n}(f):=\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}( \widehat{Y}(x^{\prime})-f(x^{\prime}))^{2}, \tag{9}\] for a function \(f\in L^{2}([0,1]^{d})\). This risk function is a generalized version of the ordinary adversarial risk (8) with the preprocessing \(\widehat{Y}\). Using this notion, we define an estimator as the minimizer of the empirical risk: \[\widehat{f}\in\operatorname*{argmin}_{f\in\mathcal{F}(L,W)}R_{n}(f). \tag{10}\] This framework intends to perturb the output variable in response to the perturbation on the input \(X_{i}\).
That is, when the input point \(X_{i}\) is shifted by \(\varsigma=x^{\prime}-X_{i}\) by the adversarial training, we also shift the output side by \(\varsigma\); the observed outputs, however, cannot accommodate such a shift. To address this issue, we prepare the corresponding outputs using a preprocessing approach, such as the nearest neighbor method. Figure 2 illustrates the differences between the least-squares estimator \(\widehat{f}^{\text{LS}}\), the ordinary adversarial training \(\tilde{f}\), and our proposed estimator by the adversarial training with preprocessing \(\widehat{f}\).

Figure 2. Comparison of the estimators. The left is the least-squares estimator, which measures the difference between \(Y_{i}\) and \(f(X_{i})\). The middle is the ordinary adversarial training, measuring the difference between \(Y_{i}\) and \(f(x^{\prime})\), where \(x^{\prime}\) is chosen from a neighborhood \(\Delta_{h}^{p}(X_{i})\); the input is shifted, which causes the inconsistency. The right is the adversarial training with preprocessing: for \(x^{\prime}\in\Delta_{h}^{p}(X_{i})\), it constructs the corresponding preprocessed output \(\widehat{Y}(x^{\prime})\) and measures the difference between \(\widehat{Y}(x^{\prime})\) and \(f(x^{\prime})\).

#### 3.2.1. Preprocessing Design

We impose the following assumptions on the preprocessing. **Assumption 3** (Preprocessing).: \(\widehat{Y}(x)\) _is continuous and \(\mathbb{E}[\|\widehat{Y}\|_{L^{\infty}}^{2}]\leq V^{2}\) with some \(V>0\). Also, there exists a non-negative sequence \(\{\zeta_{n}\}_{n\in\mathbb{N}}\) with \(\zeta_{n}\to 0\) as \(n\to\infty\) such that the following holds for all \(n\in\mathbb{N}\):_ \[\zeta_{n}^{2}\geq\mathbb{E}\left[\|\widehat{Y}-f^{*}\|_{L^{\infty}}^{2}\right].\] The sequence \(\{\zeta_{n}\}_{n\in\mathbb{N}}\) represents a convergence rate of the preprocessed output \(\widehat{Y}\) to \(f^{*}\). Importantly, the data used to construct the preprocessed output \(\widehat{Y}\) may overlap with the data used for the estimator in (10). There are several examples of preprocessing, as follows. **Example 1** (Nearest neighbour).: First, we consider the \(k\)-nearest neighbor method. For \(k\in\mathbb{N}\) and \(x\in[0,1]^{d}\), we define a ball \(B_{x}(r):=\{x^{\prime}\in[0,1]^{d}\mid\|x-x^{\prime}\|_{2}\leq r\}\) with radius \(r>0\), the \(k\)-nearest neighbour radius \(r_{k}(x):=\inf\{r>0\mid|B_{x}(r)\cap\mathcal{D}|\geq k\}\) with the set of covariates \(\mathcal{D}:=\{X_{1},...,X_{n}\}\), and the corresponding dataset \(N_{k}(x):=B_{x}(r_{k}(x))\cap\mathcal{D}\). With this notion, we define the \(k\)-nearest neighbor preprocessing as \[\widehat{Y}(x)=\frac{1}{|N_{k}(x)|}\sum_{i=1}^{n}Y_{i}\mathbf{1}\{X_{i}\in N_{ k}(x)\}.\] In this example, if Assumption 2 holds with \(\beta\in(0,1]\), we have \(\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log n)\) with \(k\asymp n^{2\beta/(2\beta+d)}\) by Theorem 1 in [11].
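A minimal sketch of the \(k\)-nearest-neighbour preprocessing in Example 1 is given below. It is illustrative only: the smoothness guess \(\beta=1\) and the resulting choice \(k\approx n^{2\beta/(2\beta+d)}\) are hard-coded assumptions, and the brute-force distance search stands in for a proper nearest-neighbour structure.

```python
import numpy as np

def knn_preprocess(X, Y, k):
    """Return Y_hat(x): the average of Y_i over the k nearest covariates X_i."""
    def Y_hat(x):
        dist = np.linalg.norm(X - x, axis=1)  # Euclidean distances to x
        idx = np.argsort(dist)[:k]            # indices of the k nearest points
        return Y[idx].mean()
    return Y_hat

rng = np.random.default_rng(0)
n, d, beta = 512, 2, 1.0
X = rng.random((n, d))
Y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

k = max(1, round(n ** (2 * beta / (2 * beta + d))))  # k ~ n^{2*beta/(2*beta+d)}
Y_hat = knn_preprocess(X, Y, k)
print(Y_hat(np.array([0.3, 0.7])))  # smoothed surrogate output at a query point
```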
**Example 2** (Posterior mean by Bayesian method).: We consider the mean of a posterior distribution given by a prior distribution on functions. The method considers a B-spline series (see [10] for an overview and specific constructions). With some tuple of numbers of basis functions \((J_{1},...,J_{d})\in\mathbb{N}^{d}\) and orders \((q_{1},...,q_{d})\in\mathbb{N}^{d}\), we consider parameters \(\{\theta_{j_{1},...,j_{d}}\}_{j_{1},...,j_{d}=1}^{J_{1},...,J_{d}}\) and the B-spline series \(\{B_{j_{k},q_{k}}(x)\}_{j_{k}=1}^{J_{k}}\) for \(k=1,...,d\). Then, the method constructs a prior distribution on a function \(f\) of the form \[f(x)=\sum_{j_{1}=1}^{J_{1}}\cdots\sum_{j_{d}=1}^{J_{d}}\theta_{j_{1},...,j_{d}} \prod_{k=1}^{d}B_{j_{k},q_{k}}(x_{k}),\] by putting a Gaussian prior on the parameters \(\theta_{j_{1},...,j_{d}}\). If Assumption 2 holds with \(\beta>0\), Theorem 4.4 in [13] shows that \(\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log^{2\beta/(2\beta+d)}n)\), which is implied by the contraction of the posterior shown in the theorem. We can pick other methods for the preprocessing; the required property is that the error in estimating a smooth function converges in the sup-norm sense.

## 4. Main Result: \(L^{\infty}\)-Risk Analysis

We present our main results on the consistency of the estimator and a non-asymptotic upper bound on the estimation error with its convergence rate in \(n\). We further discuss the minimax optimality of the obtained convergence rate. To achieve optimality, we need to discuss the design of the preprocessing \(\widehat{Y}\) and the architecture of deep neural networks.

### Consistency

We present an upper bound on the expectation of the \(L^{\infty}\)-risk of the estimator. The first result is consistency in the sense of the \(L^{\infty}\)-risk. In an asymptotic analysis with \(n\to\infty\), the product of the depth and width of deep neural networks should also increase in \(n\). **Theorem 2**.: _Consider the regression model (1) and the adversarial estimator \(\widehat{f}\) in (10) with the function class by deep neural networks with a tuple \((L,W)\). Suppose Assumptions 1 and 3 hold and \(f^{*}\) is continuous. Then, there exists a tuple \((L,W)\) with \(LW=o(n)\) such that_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\to 0,\] _as \(n\to\infty\)._ The result shows that, under divergent widths and depths and appropriate preprocessing, we obtain consistency in the sup-norm sense. Note that \(f^{*}\) needs only be continuous, and conditions on derivatives are not necessary. Also, it provides the following important implications: (i) we can control the \(L^{\infty}\)-risk even though the deep neural network model does not have the linear-in-basis structure, and (ii) the preprocessing solves the problem of inconsistency in adversarial training presented in Section 3.1.1. Its proof is based on the procedure in Section 4.3. We note the importance of sup-norm convergence in the context of estimation. In the theory of approximation, the sup-norm convergence by neural networks has been an important topic, that is, \(\inf_{f\in\mathcal{F}(L,W)}\|f-f^{*}\|_{L^{\infty}}\to 0\) as \(L\to\infty\) or \(W\to\infty\), and numerous studies have investigated this problem, e.g. [12, 13, 14]. Conversely, in the nonparametric regression problem, sup-norm convergence has been difficult due to the noise in observations. Theorem 2 shows that the adversarial training with preprocessing enables convergence in the sup-norm.

### Non-Asymptotic Bound and Convergence Rate

As a more rigorous error evaluation, we derive a non-asymptotic upper bound for the \(L^{\infty}\)-risk of the estimator with the adversarial training. This result is also useful for studying convergence rates of the risk and discussing its optimality. **Theorem 3**.: _Consider the regression model (1) and the adversarial estimator \(\widehat{f}\) in (10) with the function class \(\mathcal{F}(L,W)\) by deep neural networks. Suppose Assumptions 1, 2, and 3 hold for some \(\beta>0\)._
_Then we have_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\leq C_{P_{X},p,B,d,\beta}h^{-d}\left(\frac{(WL)^{2}\log(WL)\log n}{n}+(WL)^{-4\beta/d}+h^{-d} \zeta_{n}^{2}\right),\] _for every \(n\geq\bar{n}\) with some \(\bar{n}\in\mathbb{N}\)._ This result gives some implications: (i) we develop an upper bound on the \(L^{\infty}\)-risk of the estimator, and (ii) the bound is proportional to \(h^{-d}\), which appears when evaluating the \(L^{\infty}\)-risk using the adversarial loss. Note that we can select \(h\) to be a strictly positive constant, and thus it does not affect the order of the bound in \(n\). More precisely, this upper bound consists of three terms: the first term \(O((WL)^{2}\log(WL)/n)\) is the complexity error, the second term \(O((WL)^{-4\beta/d})\) is the approximation error of the deep neural network, and the third term \(O(\zeta_{n}^{2})\) is the error of the preprocessing. The complexity and approximation errors also appear in several risk bounds on the \(L^{2}\)-risk of deep neural networks (e.g., Theorem 4.3 in [11]). In contrast, the preprocessing error term is a new term needed to derive an upper bound on the \(L^{\infty}\)-risk. We derive the convergence rate of the \(L^{\infty}\)-risk with respect to \(n\). Specifically, we select the width and depth of deep neural networks in order to balance the trade-off among the error terms presented in Theorem 3. **Corollary 4**.: _Consider the setting in Theorem 3. Further, suppose that \(\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log^{\beta^{*}}n)\) for some \(\beta^{*}>0\). We set \(L\) and \(W\) such that \(LW\asymp n^{d/(2(2\beta+d))}\), which balances the complexity and approximation errors. Then, we obtain the following as \(n\to\infty\):_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]=O\left(n^{-2\beta /(2\beta+d)}\log^{2\vee\beta^{*}}n\right).\] The rate obtained in Corollary 4 is identical to the minimax optimal rate of risk measured in the sup-norm for the problem of estimating a function in \(\mathcal{H}^{\beta}([0,1]^{d})\)[12, 13]. Specifically, the derived rate corresponds to the following lower bound: \[\inf_{\tilde{f}_{n}}\sup_{f^{*}\in\mathcal{H}^{\beta}([0,1]^{d})}\mathbb{E} \left[\|\tilde{f}_{n}-f^{*}\|_{L^{\infty}}^{2}\right]=\widetilde{\Omega}\left( n^{-2\beta/(2\beta+d)}\right),\ (n\to\infty),\] where \(\tilde{f}_{n}\) is taken from all estimators depending on the \(n\) observations. Since the derived rate coincides with the lower bound, the adversarial training estimator achieves the minimax optimal rate.

### Proof Overview

We give an overview of the proof of the main theorem. As preparation, we introduce several notations related to adversarial training. With \(h\), an order \(p\), and a base measure \(P\), we define an adversarial (pseudo-)norm of \(f:[0,1]^{d}\to\mathbb{R}\) and its empirical analogue: \[\|f\|_{P,\Delta}^{2}:=\mathbb{E}_{X\sim P}\left[\max_{x^{\prime}\in\Delta_{h} ^{p}(X)}|f(x^{\prime})|^{2}\right],\ \text{and}\ \|f\|_{n,\Delta}^{2}:=n^{-1}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p} (X_{i})}|f(x^{\prime})|^{2}. \tag{11}\] These norms correspond to the adversarial risks with a squared loss for the regression problem ([1]).
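For intuition, the empirical adversarial norm in (11) can be approximated by a brute-force maximization over a grid in each neighbourhood. The sketch below does this for \(d=1\) and \(p=\infty\); the grid resolution and the residual function `g` are arbitrary illustrative choices.

```python
import numpy as np

def empirical_adv_norm_sq(f, X, h=0.05, grid_size=51):
    """Approximate ||f||_{n,Delta}^2 = (1/n) * sum_i max_{|x'-X_i|<=h} f(x')^2
    for scalar inputs, with the inner max taken over a grid."""
    total = 0.0
    for x_i in X:
        grid = np.clip(np.linspace(x_i - h, x_i + h, grid_size), 0.0, 1.0)
        total += np.max(f(grid) ** 2)
    return total / len(X)

g = lambda x: x - 0.5  # an example residual function f_hat - f*
X = np.random.default_rng(1).random(100)
print(empirical_adv_norm_sq(g, X))  # at least the plain empirical L2 norm of g
```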
We further use a uniform covering number of \(\mathcal{F}(L,W)\). Let \(Q_{n}\) be an empirical measure with \(n\) samples. Given \(\delta\in(0,1]\), we consider a \(\delta\)-covering set \(\{f_{1},...,f_{N}\}\subset\mathcal{F}(L,W)\) and define the uniform covering number from empirical process theory (e.g., [21]): \[N_{L,W}(\delta):=\sup_{Q_{n}}N(\delta,\mathcal{F}(L,W),\|\cdot\|_{L^{2}(Q_{n}) }),\] where the supremum is taken over all possible empirical measures \(Q_{n}\). This notion is useful for evaluating the complexity of the set of deep neural networks, because it gives an upper bound without requiring boundedness or sparsity of the parameters of the neural networks (see Lemma 16, for example). Our proof consists of three main elements: (i) deriving an upper bound on the adversarial norm of the estimation error, (ii) developing an upper bound on the \(L^{\infty}\)-norm of the estimation error in terms of the adversarial norm, and (iii) combining the above results using the localization technique. Each of these is described below. In the first step, we derive an upper bound for the adversarial norm of the estimation error. Rigorously, Lemma 10 will state the following upper bound \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{P_{X},\Delta}^{2}\right] \leq C\left\{\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2} \right]+\frac{B^{2}(\log N_{L,W}(\delta)+1)}{n}+\delta B+\delta^{2}\right\},\] for any \(\delta\in(0,1)\) with some universal constant \(C>0\). Furthermore, Proposition 12 will bound the empirical adversarial norm \(\mathbb{E}[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2}]\) as \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2}\right] \leq C\left\{\left(\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2} \right]^{1/2}+\delta\right)\left(\frac{\log N_{L,W}(\delta)}{n}+\zeta_{n} \right)^{1/2}+(\Phi_{L,W}+\zeta_{n})^{2}\right\}.\] We achieve these bounds by extending the empirical process technique of [11] to the adversarial norm. Several points are worth noting: (i) the term \(\Phi_{L,W}\) represents a bias and the term \(O(\log N_{L,W}(\delta)/n)\) represents a variance of the estimator, similarly to the least squares estimator, (ii) the variance term is described by the uniform covering number, which is useful for studying neural networks whose parameters are unbounded and non-sparse, and (iii) there is a term \(\zeta_{n}\), which represents the error from the preprocessing, unlike in the case of the least squares estimator. In the second step, we construct an upper bound on the sup-norm in terms of the adversarial norm. That is, we develop the following statement: **Lemma 5**.: _Consider the estimator \(\widehat{f}\) in (10) and the adversarial norm in (11). Suppose \(P_{X}\) satisfies Assumption 1. Then, we have_ \[\|\widehat{f}-f^{*}\|_{P_{X},\Delta}^{2}\geq C_{P_{X},p,d}h^{d}\|\widehat{f}- f^{*}\|_{L^{\infty}}^{2}.\] Intuitively, we utilize the similarity between the adversarial norm and the sup-norm to achieve this result: the maximization over \(\Delta_{h}^{p}\) in the adversarial norm has a property similar to that of the sup-norm. Using this property, we lower-bound the adversarial norm by the sup-norm while taking into account the volume of the neighbourhood \(\Delta_{h}^{p}\). We give a generalized version of this result as Lemma 15 in the appendix. In the last step, we combine these results and derive the main statement of Theorem 3. Here we apply the peeling argument to obtain convergence rates. Note that a simple combination of the above results would lose optimality. To obtain the minimax optimal rate, we evaluate the approximation error and the uniform covering number based on the localization technique. 
## 5. Applications ### Extension to General Loss Function #### 5.1.1. Motivation and Setting We can extend our adversarial training results to the case of non-squared loss functions. Specifically, we can handle loss functions such as the absolute value loss, the quantile loss, and the Huber loss, which are used in the presence of heavy-tailed noise. This setting with deep neural networks is studied in [10]. We introduce a generic loss function, which satisfies the following assumption: **Assumption 4**.: _A loss function \(\ell:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is symmetric and \(\ell(x,y)\) is Lipschitz-continuous in each of \(x\) and \(y\) with Lipschitz constant \(C_{\ell}>0\). Further, \(\ell(y,x)=0\) holds if and only if \(y=x\), and there exist constants \(c_{\ell}>0\) and \(q\geq 1\) such that_ \[\ell(y,x)\geq c_{\ell}|y-x|^{q},\forall x,y\in\mathbb{R}.\] The class of loss functions satisfying Assumption 4 includes several representative losses, e.g., the absolute loss \(\ell(y,x)=|y-x|\), the quantile loss \(\ell(y,x)=(\mathbf{1}\{y\geq x\}\tau+\mathbf{1}\{y\leq x\}(\tau-1))(y-x)\) for \(\tau\in(0,1)\), and the Cauchy loss \(\ell(y,x)=\log(1+\kappa^{2}(y-x)^{2})\) for \(\kappa>0\). We introduce an empirical risk function for adversarial training based on \(\ell\). Using the neighbourhood set \(\Delta_{h}^{p}(x)\) and the preprocessing \(\widetilde{Y}\), we define an empirical risk function as \[\widetilde{R}_{n}(f):=\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{\prime}\in\Delta_{h}^{ P}(X_{i})}\ell(\widetilde{Y}(x^{\prime}),f(x^{\prime})).\] This risk function is a generalized version of the ordinary loss for adversarial training (9). Using this notion, we define its minimizer as \[\widetilde{f}\in\operatorname*{argmin}_{f\in\mathcal{F}(L,W)}\widetilde{R}_{ n}(f). \tag{13}\] #### 5.1.2. Error Analysis We study the \(L^{\infty}\)-risk of this estimator by deriving a non-asymptotic upper bound. The proof differs from that of Theorem 3, requiring a more general treatment of the loss combined with adversarial training. **Proposition 6**.: _Consider the regression model (1) and the adversarial estimator \(\widetilde{f}\) in (13) with the function class by deep neural networks with a tuple \((L,W)\) and \(h\in(0,1)\). Suppose Assumptions 1 and 2 hold for \(\beta>0\), Assumption 3 holds with \(\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log^{\beta^{*}}n)\) for some \(\beta^{*}>0\) and \(\widetilde{Y}\) independent of \(\{(X_{i},Y_{i})\}_{i=1}^{n}\), and Assumption 4 holds with \(q\in[1,\infty)\). Then, we have the following as \(n\to\infty\):_ \[\mathbb{E}\left[\|\widetilde{f}-f^{*}\|_{L^{\infty}}^{2}\right]=O\left(h^{-2d /q}n^{-\beta/(q(\beta+d))}\log^{(2/q)\vee\beta^{*}}n\right).\] This result shows that the \(L^{\infty}\)-risk is bounded under the setup with general loss functions. The convergence rate of the \(L^{\infty}\)-risk in Proposition 6 corresponds to the convergence rate of excess risks derived in Theorem 4.2 of [10] under general losses. The key to this result is the bound \(V\) on \(\mathbb{E}[\|\widetilde{Y}\|_{L^{\infty}}^{2}]\) given in Assumption 3. The independence of the preprocessing \(\widetilde{Y}\) is imposed for a technical reason; however, it is easy to satisfy. For example, we can randomly split the observed data into two halves and conduct the preprocessing using one of them. 
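To make the losses covered by Assumption 4 concrete, here is a minimal Python sketch of the three examples above, together with a numerical sanity check of the lower bound \(\ell(y,x)\geq c_{\ell}|y-x|^{q}\) on a bounded range; the constants \(\tau=0.3\), \(\kappa=2\), and the checked range are illustrative choices, not requirements of the theory.

```python
import numpy as np

def absolute_loss(y, x):
    return np.abs(y - x)

def quantile_loss(y, x, tau=0.3):
    # pinball loss: (1{y >= x} tau + 1{y <= x} (tau - 1)) (y - x)
    return np.where(y >= x, tau, tau - 1.0) * (y - x)

def cauchy_loss(y, x, kappa=2.0):
    return np.log1p(kappa ** 2 * (y - x) ** 2)

# Check ell(y, x) >= c |y - x|^q on the bounded range |y - x| <= 2:
u = np.linspace(-2.0, 2.0, 2001)
print(np.all(quantile_loss(u, 0.0) >= 0.3 * np.abs(u)))      # q = 1, c = tau
print(np.all(cauchy_loss(u, 0.0) >= 0.5 * np.abs(u) ** 2))   # q = 2 on this range
```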
The technical derivation is similar to that of Theorem 3. First, we define an expected value of the adversarial risk with the general loss and the preprocessing: for \(f\in\mathcal{F}(L,W)\), we define \[\widetilde{R}(f):=\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)} \ell(f(x^{\prime}),\widetilde{Y}(x^{\prime}))\right]. \tag{14}\] Then, we derive an upper bound on the excess risk \(\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*})\) in Proposition 13. Next, we bound the \(L^{\infty}\)-risk by properties of the expected adversarial risk as \[\|\widetilde{f}-f^{*}\|_{L^{\infty}}^{q}=O\left(h^{-d}\left(\widetilde{R}( \widetilde{f})-\widetilde{R}(f^{*})+\|\widetilde{Y}-f^{*}\|_{L^{\infty}} \right)\right).\] in Lemma 14. This result is an extension of the bound on the \(L^{\infty}\)-risk by the adversarial norm shown in Lemma 5. Combining the results, we obtain the statement of Proposition 6. ### Adaptation to Heterogeneous Smoothness with Besov Space #### 5.2.1. Motivation and Setting In this section, we show that our proposed method can be adapted to estimate functions with heterogeneous smoothness, that is, we study the case in which the true function \(f^{*}\) is an element of the Besov space (see [14] for an introduction). The Besov space has the interesting property that linear estimators, a certain type of non-deep estimator, cannot estimate its elements with the optimal convergence rate. First, we give the definition of the Besov space following [13, 14]. Note that there are several equivalent definitions of Besov spaces, and the following is based on the notion of differences of functions. Consider parameters \(p,q\in(0,\infty]\) and \(\beta>0\). For \(r\in\mathbb{N}\), \(h\in\mathbb{R}^{d}\), and \(f:[0,1]^{d}\to\mathbb{R}\), we define the \(r\)-th difference of \(f\) at \(x\in[0,1]^{d}\) as \[\Delta_{h}^{r}[f](x)=\mathbf{1}\{x+rh\in[0,1]^{d}\}\sum_{j=0}^{r}\binom{r}{j} (-1)^{r-j}f(x+jh).\] We also define the \(r\)-th modulus of smoothness of \(f\) with \(u>0\) as \[\omega_{r,p}(f,u)=\sup_{\|h\|_{2}\leq u}\|\Delta_{h}^{r}[f]\|_{L^{p}(\lambda)}.\] Recall that \(\|\cdot\|_{L^{p}(\lambda)}\) denotes the \(L^{p}\)-norm with the Lebesgue measure \(\lambda\). Using these notions, we define a ball in the Besov space as follows. **Definition 1** (Besov space).: With \(r\in\mathbb{N}\) such that \(r>\beta\), we define a semi-norm of \(f:[0,1]^{d}\to\mathbb{R}\) as \[\|f\|_{\mathcal{B}_{p,q}^{\beta}}:=\begin{cases}\left(\int_{0}^{\infty}(u^{-\beta} \omega_{r,p}(f,u))^{q}u^{-1}du\right)^{1/q}&\text{ if }q<\infty\\ \sup_{u>0}u^{-\beta}\omega_{r,p}(f,u)&\text{ if }q=\infty.\end{cases}\] Then, we define a ball of the Besov space with radius \(B\geq 1\) as \[\mathcal{B}^{\beta}_{p,q}:=\left\{f:[0,1]^{d}\to\mathbb{R}\mid\|f\|_{L^{p}( \lambda)}+\|f\|_{\mathcal{B}^{\beta}_{p,q}}\leq B\right\}\,.\] The Besov space can represent functions with discontinuity and heterogeneous smoothness, which means that the degree of smoothness of the functions varies depending on \(x\). These properties are related to the fact that the space of functions with bounded total variation is sandwiched between \(\mathcal{B}^{1}_{1,1}\) and \(\mathcal{B}^{1}_{1,\infty}\) [10]. An important consequence of heterogeneous smoothness is that deep estimators, such as deep neural networks, tend to have an advantage in estimating such functions. Specifically, a linear estimator, a certain family of non-deep estimators [13], becomes sub-optimal when estimating elements of the Besov space. 
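As a concrete illustration of the discontinuity permitted in such balls (a standard example, stated here with \(d=1\) and \(r=2\) for the definition above), consider the step function \(f=\mathbf{1}_{[1/2,1]}\) on \([0,1]\). The second difference \(\Delta_{h}^{2}[f](x)=f(x+2h)-2f(x+h)+f(x)\) vanishes unless the jump point \(1/2\) lies in \((x,x+2h]\), a set of Lebesgue measure at most \(2|h|\), on which \(|\Delta_{h}^{2}[f]|\leq 2\). Hence \[\omega_{2,1}(f,u)=\sup_{\|h\|_{2}\leq u}\|\Delta_{h}^{2}[f]\|_{L^{1}(\lambda)}\leq 4u,\] so \(\sup_{u>0}u^{-1}\omega_{2,1}(f,u)\leq 4<\infty\) and \(f\) belongs to \(\mathcal{B}^{1}_{1,\infty}\) (with a suitable radius \(B\)), even though \(f\) is discontinuous and therefore belongs to no Hölder ball \(\mathcal{H}^{\beta}([0,1])\).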
The linear estimator has the form \(\widehat{f}^{\text{lin}}(\cdot)=\sum_{i=1}^{n}\Psi_{i}(\cdot;X_{1},...,X_{n})Y_{i}\) with arbitrary measurable maps \(\Psi_{i}\), and includes major estimators such as the kernel ridge estimator. Then, Theorem 1 in [11] implies the following minimax lower bound in the case \(d=1\): \[\min_{\widehat{f}^{\text{lin}}}\max_{f^{*}\in\mathcal{B}^{\beta}_{p,q}} \mathbb{E}\left[\|\widehat{f}^{\text{lin}}-f^{*}\|_{L^{2}(\lambda)}^{2}\right] \geq Cn^{-2\beta^{\prime}/(2\beta^{\prime}+d)},\] with some \(C>0\) and \(\beta^{\prime}=\beta+1/2-1/p\). In the case \(p<2\), the linear estimator is therefore sub-optimal: its rate is slower than the minimax optimal rate \(\widetilde{O}(n^{-2\beta/(2\beta+d)})\). Several studies [1, 20, 21] show similar statements. Therefore, it is important to estimate functions in the Besov space with deep neural networks, since they overcome this limitation of linear estimators. #### 5.2.2. Error Analysis We give a convergence rate of the adversarial estimator with deep neural networks and the preprocessing in (10). Note that we consider the adversarial risk (9) based on the squared loss function. We first give the following assumption. **Assumption 5**.: _There exists \(\beta>0\) such that \(f^{*}\in\mathcal{B}^{\beta^{\prime}}_{p,q}\) holds for every \(\beta^{\prime}\in(0,\beta]\)._ To estimate functions in the Besov space, we have to restrict the set of neural network functions. Let \(\overline{\mathcal{F}}(L,W,S,\bar{B})\) be a set of neural network functions (7) with \(S\in\mathbb{N}\) non-zero parameters, each of whose values is included in \([-\bar{B},\bar{B}]\) with \(\bar{B}\geq 1\); we then consider the minimizer of the empirical preprocessed adversarial risk (9) over \(\overline{\mathcal{F}}(L,W,S,\bar{B})\): \[\widehat{f}\in\operatorname*{argmin}_{f\in\overline{\mathcal{F}}(L,W,S,\bar{ B})}R_{n}(f). \tag{15}\] Then, we give the convergence rate of the estimator, which corresponds to the minimax optimal rate \(\widetilde{O}(n^{-2\beta/(2\beta+d)})\) [1]. Note that this rate is valid regardless of the values of \(p\) and \(q\). **Proposition 7**.: _Fix \(p,q\in(0,\infty]\). Consider the regression model (1) and the adversarial estimator \(\widehat{f}\) in (15) with the function class \(\overline{\mathcal{F}}(L,W,S,\bar{B})\) by deep neural networks. Suppose that Assumptions 1 and 5 hold with \(\beta>d/p\). Further, suppose that \(\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log^{\beta^{*}}n)\) for some \(\beta^{*}>0\). We set \(L\geq C_{d,p,\beta,B}\log n\), \(S\asymp W\asymp n^{d/(2\beta+d)}\), and \(\bar{B}=O(n^{a})\) with some \(a>0\). Then, we obtain the following as \(n\to\infty\):_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]=O\left(n^{-2 \beta/(2\beta+d)}\log^{3\lor\beta^{*}}n\right).\] The result shows that our estimator with deep neural networks inherits the advantages of both deep and non-deep estimators. First, it achieves the minimax optimal rate up to \(\log\) factors. This optimality is not achieved by linear estimators and is one of the advantages of using deep neural networks. Second, the errors converge in the sup-norm sense. This is not shown for deep neural network estimators trained with least squares, and is achieved here by adversarial training with preprocessing. Note that the requirement on the preprocessing is satisfied by, for example, the wavelet estimator with \(\beta^{*}=2\beta/(2\beta+d)\) [13, 14]. The proof of this proposition is a slight modification of the proof of Proposition 8 in the Appendix. The main update is an analysis of the approximation error of deep neural networks for a function in the Besov space. Here, we apply the seminal result by [16] on the approximation error in the sup-norm. 
## 6. Simulations In this section, we conduct simulation experiments to support the theoretical results. Specifically, we generate data from a known function and then numerically compute the \(L^{\infty}\)-risk of the proposed estimator and other standard methods. We generate \(n\) samples from the regression model (1) with the sample size \(n\in\{400,800,1200,1600\}\) and the noise variance \(\sigma^{2}\in\{0.0001,0.01,1.0\}\). We consider the following three cases as values of \(f^{*}\) on \([0,1]^{d}\). In Case 1, we set \(d=1\) and \(f^{*}(x)=0.3\sin(4\pi x)-x+0.5\). In Case 2, we set \(d=2\) and \(f^{*}(x_{1},x_{2})=\sin(4\pi x_{1})+\cos(2\pi x_{2})\). In Case 3, we set \(d=7\) and \(f^{*}(x_{1},x_{2},...,x_{7})=\frac{2}{x_{1}+0.01}+3\log(x_{7}^{2}x_{3}+0.1)x_ {4}+0.1x_{5}^{4}x_{6}^{2}x_{7}\). For estimation, we use a three-layer fully-connected neural network with the ReLU activation function; the width of each layer is \(40\). For training, we use three methods: (i) adversarial training without preprocessing, (ii) adversarial training with preprocessing (our proposal), and (iii) ordinary least squares. In the adversarial training cases (i) and (ii), the value of \(h\) is set to \(2^{-3}\). For the adversarial training, we employ the projected gradient descent algorithm [12]. For the preprocessing, we employ the \(k\)-nearest-neighbor estimator with \(k=3\). To measure the \(L^{\infty}\)-risk, we generate \(10,000\) uniform random points on the support \([0,1]^{d}\) and use the maximum error over these points to approximate the risk. Figure 3 shows the measured \(L^{\infty}\)-risk against the sample size \(n\). We have three main findings: (i) in almost all cases, our proposed estimator from adversarial training with preprocessing monotonically reduces the \(L^{\infty}\)-risk in \(n\); (ii) the adversarial estimators without preprocessing may or may not be as good as those with preprocessing, which implies that the magnitude of the bias from adversarial training depends on the shape of the true function \(f^{*}\); (iii) the \(L^{\infty}\)-risk of the least squares estimator generally decreases at a slower rate, or does not decrease, in all cases. This supports the possibility that training a deep neural network with least squares may have difficulty in reducing the \(L^{\infty}\)-risk. Figure 3. \(L^{\infty}\)-risk against the sample size \(n\). The mean and standard deviation of the 10 repetitions are plotted. 
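For concreteness, the following is a minimal Python sketch of this experimental pipeline for Case 1. The random grid search inside the \(L^{\infty}\)-ball (in place of full projected gradient descent), the training schedule, and all numerical choices other than those stated above are illustrative assumptions, not the exact implementation behind Figure 3.

```python
import numpy as np
import torch
import torch.nn as nn

def f_star(x):                                   # Case 1: d = 1
    return 0.3 * np.sin(4 * np.pi * x) - x + 0.5

def knn_preprocess(X, Y, k=3):
    """k-nearest-neighbor preprocessing: x -> average of Y over the k
    nearest design points (d = 1 for simplicity)."""
    def Y_tilde(x):
        idx = np.argsort(np.abs(X[None, :] - x[:, None]), axis=1)[:, :k]
        return Y[idx].mean(axis=1)
    return Y_tilde

n, h, sigma = 800, 2.0 ** -3, 0.1
rng = np.random.default_rng(0)
X = rng.uniform(size=n)
Y = f_star(X) + sigma * rng.normal(size=n)
Y_tilde = knn_preprocess(X, Y)

net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(),
                    nn.Linear(40, 40), nn.ReLU(),
                    nn.Linear(40, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    # inner maximization over Delta_h(X_i): random grid search in the
    # L^infinity ball of radius h (a cheap stand-in for PGD)
    offsets = rng.uniform(-h, h, size=(n, 8))
    X_adv = np.clip(X[:, None] + offsets, 0.0, 1.0)
    inputs = torch.tensor(X_adv.reshape(-1, 1), dtype=torch.float32)
    targets = torch.tensor(Y_tilde(X_adv.reshape(-1)), dtype=torch.float32)
    sq_err = (net(inputs).squeeze(1) - targets) ** 2
    loss = sq_err.reshape(n, 8).max(dim=1).values.mean()  # preprocessed adversarial risk
    opt.zero_grad(); loss.backward(); opt.step()

grid = np.linspace(0.0, 1.0, 10_000)
with torch.no_grad():
    pred = net(torch.tensor(grid[:, None], dtype=torch.float32)).squeeze(1).numpy()
print("approximate L^inf-risk:", np.max(np.abs(pred - f_star(grid))) ** 2)
```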
## 7. Conclusion and Discussion We consider a nonparametric function estimator by deep neural networks that converges in the sense of the sup-norm, i.e., the \(L^{\infty}\)-norm. Since deep neural networks do not have a tractable structure, such as the linear-in-basis form of conventional non-deep estimators, they are not guaranteed to converge in the sup-norm sense. In this study, we tackle this problem by considering an estimator based on adversarial training. For the bias due to adversarial training, we solve the problem by introducing a preprocessing of the data. As a result, our proposed corrected adversarial training estimator converges to the smooth true function at the minimax optimal rate in the sup-norm sense. Our approach also remains valid for general losses and for true functions with heterogeneous smoothness. The experiments support our theoretical results. Future research directions include sup-norm convergence for estimating non-smooth functions. Although we expect significant obstacles to the sup-norm convergence of estimators for non-smooth functions, it is interesting to ask how far the conditions for estimating such functions can be relaxed. Another direction is the construction of uniform confidence bands for functions: our sup-norm convergence is useful for studying the uncertainty of neural network estimators and for constructing such bands. These directions may be a step toward statistical inference with deep neural networks. ## Appendix A Proof for Main Result in Section 4 ### Overview We first develop a general theorem with arbitrary preprocessing, and then apply the result to prove the statements in Section 4. For a preprocessed output \(\widehat{Y}\) (the preprocessing \(\widetilde{Y}\) of the main text), we define its residual as \[\Xi(x):=\widehat{Y}(x)-f^{*}(x),\ x\in[0,1]^{d}.\] This notion expresses the error in estimating the true function \(f^{*}\) by the preprocessing \(\widehat{Y}\). **Proposition 8**.: _Consider the regression model (1) and the corrected adversarial estimator \(\widehat{f}\) as (10) with the function class \(\mathcal{F}(L,W)\) by deep neural networks. Suppose that Assumptions 1 and 2 hold. Then, we obtain_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\] \[\leq C_{P_{X},p,d,B}h^{-d}\left(\frac{W^{2}L^{2}\log(WL)\log n}{n} +\Phi_{L,W}^{2}+\mathbb{E}[\|\Xi\|_{L^{\infty}}]\Phi_{L,W}+h^{-d}\mathbb{E} \left[\|\Xi\|_{L^{\infty}}^{2}\right]\right).\] Proof of Proposition 8.: We apply Lemma 9 to bound the sup-norm as \[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\leq 2(C_{P_{X},p,d}h^{d})^{-1}\| \widehat{f}-f^{*}\|_{P_{X},\Delta}^{2} \tag{16}\] Note that any \(f\in\mathcal{F}(L,W)\) is continuous, since it has the form of a deep neural network with the continuous ReLU activation. We then take an expectation of the bound and apply Lemma 10 and Proposition 12 to obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{P_{X},\Delta}^{2}\right]\] \[\leq 4\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2} \right]+\frac{800B^{2}\log N_{L,W}(\delta)+4118B^{2}}{n}+32\delta B+8\delta^{2}\] \[\leq\left(16\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2 }\right]^{1/2}+40\delta\right)\left(\frac{\log N_{L,W}(\delta)}{n}+\mathbb{E} \left[\|\Xi\|_{L^{\infty}}^{2}\right]\right)^{1/2}\] \[\quad+\frac{800B^{2}\log N_{L,W}(\delta)+4118B^{2}}{n}+32\delta B +8\delta^{2}+4\Phi_{L,W}^{2}+8\mathbb{E}[\|\Xi\|_{L^{\infty}}]\Phi_{L,W}+2 \mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right],\] for \(\delta\in(0,1]\). Note that since both \(f\in\mathcal{F}(L,W)\) and \(f^{*}\) are bounded, the expectations are guaranteed to exist. We combine this inequality with (16) and obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\] \[\leq C_{P_{X},p,d}h^{-d}\left(\mathbb{E}\left[\|\widehat{f}-f^{*} \|_{L^{\infty}}^{2}\right]^{1/2}+\delta\right)\left(\frac{\log N_{L,W}(\delta )}{n}+\mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right]\right)^{1/2}\] \[\quad+C_{P_{X},p,d}h^{-d}\left(\frac{B^{2}\log N_{L,W}(\delta)+B ^{2}}{n}+\delta B+\Phi_{L,W}^{2}+\mathbb{E}[\|\Xi\|_{L^{\infty}}]\Phi_{L,W}+ \mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right]\right),\] by setting \(\delta\leq B\vee\Phi_{L,W}\), which will be verified later. We arrange the terms in the above inequality. 
For \(a,b\geq 0\) and \(z\in\mathbb{R}\), the inequality \(z^{2}\leq az+b\) implies \(z^{2}\leq 3a^{2}+2b\); indeed, \(az\leq z^{2}/2+a^{2}/2\) yields \(z^{2}\leq a^{2}+2b\leq 3a^{2}+2b\). We apply this fact with \(z=\mathbb{E}[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}]^{1/2}\) and obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\] \[\leq C_{P_{X},p,d,B}h^{-d}\Bigg\{\frac{\log N_{L,W}(\delta)}{n }+\delta+\Phi_{L,W}^{2}+\mathbb{E}[\|\Xi\|_{L^{\infty}}]\Phi_{L,W}+h^{-d} \mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right]\] \[\qquad\qquad\qquad\qquad+\left(\frac{\log N_{L,W}(\delta)}{n}+ \mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right]\right)^{1/2}\delta\Bigg\}. \tag{17}\] Further, we set \(\delta=1/n\); then Lemma 16 shows \[\log N_{L,W}(1/n)=\log\sup_{Q_{n}}N(1/n,\mathcal{F}(L,W),\|\cdot\|_{L^{2}(Q_{n})} )\leq CW^{2}L^{2}\log(WL)\log(Bn^{2}).\] We substitute these results and obtain the statement. **Lemma 9**.: _Suppose \(P_{X}\) satisfies Assumption 1 and \(f^{*}\) is continuous. For any bounded and continuous \(f:[0,1]^{d}\rightarrow\mathbb{R}\), we have_ \[\|f-f^{*}\|_{P_{X},\Delta}^{2}\geq C_{P_{X},p,d}h^{d}\|f-f^{*}\|_{L^{\infty}}^ {2}.\] Proof of Lemma 9.: We apply Lemma 15 to achieve the statement. To apply the lemma, we verify that the map \(x^{\prime}\mapsto(f(x^{\prime})-f^{*}(x^{\prime}))^{2}\) is bounded and continuous, which follows from the compactness of the domain \([0,1]^{d}\) and the assumptions. Then, we have \[\|f-f^{*}\|_{P_{X},\Delta}^{2}\geq C_{P_{X},p,d}h^{d}\sup_{x^{\prime}\in[0,1]^ {d}}(f(x^{\prime})-f^{*}(x^{\prime}))^{2}=C_{P_{X},p,d}h^{d}\|f-f^{*}\|_{L^{ \infty}}^{2}.\] The inequality follows from Lemma 15 with \(g(\cdot)=(f(\cdot)-f^{*}(\cdot))^{2}\). **Lemma 10**.: _Suppose that every \(f\in\mathcal{F}\) is continuous, and that \(f^{*}\) is continuous with \(\|f^{*}\|_{L^{\infty}}\leq B\). Then, for any \(\delta>0\), we have_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{P_{X},\Delta}^{2}\right]\] \[\leq 4\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2} \right]+\frac{800B^{2}\log N_{L,W}(\delta)+4118B^{2}}{n}+32\delta B+8\delta^{ 2}.\] Proof of Lemma 10.: Without loss of generality, we assume that \(N_{L,W}(\delta)\geq 3\) and \(\log N_{L,W}(\delta)\leq n\). Also, we define the element of the covering set nearest to \(\widehat{f}\), that is, \(\widehat{j}:=\operatorname*{argmin}_{j^{\prime}=1,...,N}\sup_{Q_{n}}\|f_{j^{ \prime}}-\widehat{f}\|_{L^{2}(Q_{n})}\). Let \(X_{i}^{\prime}\), \(i=1,...,n\), be i.i.d. samples from \(P_{X}\). Note that \(\widehat{Y}\) depends on \(X_{1},...,X_{n}\). 
We give a bound on the following difference as \[\left|\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{P_{X},\Delta}^{2} \right]-\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2}\right]\right|\] \[=\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{\prime} \in\Delta_{h}^{P}(X_{i}^{\prime})}(\widehat{f}(x^{\prime})-f^{*}(x^{\prime})) ^{2}-\sup_{x^{\prime}\in\Delta_{h}^{P}(X_{i})}(\widehat{f}(x^{\prime})-f^{*}( x^{\prime}))^{2}\right]\right|\] \[\leq\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\underbrace{ \sup_{x^{\prime}\in\Delta_{h}^{P}(X_{i}^{\prime})}(f_{\widehat{j}}(x^{\prime })-f^{*}(x^{\prime}))^{2}-\sup_{x^{\prime}\in\Delta_{h}^{P}(X_{i})}(f_{ \widehat{j}}(x^{\prime})-f^{*}(x^{\prime}))^{2}}_{=:g_{\widehat{j}}(X_{i},X_{ i}^{\prime})}\right]\right|\] \[\quad+2\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{ \prime}\in\Delta_{h}^{P}(X_{i})}(\widehat{f}(x^{\prime})-f_{\widehat{j}}(x^{ \prime})+f_{\widehat{j}}(x^{\prime})-f^{*}(x^{\prime}))^{2}-\sup_{x^{\prime} \in\Delta_{h}^{P}(X_{i})}(f_{\widehat{j}}(x^{\prime})-f^{*}(x^{\prime}))^{2} \right]\right|\] \[\leq\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}g_{\widehat{j} }(X_{i},X_{i}^{\prime})\right]\right|+4\mathbb{E}\left[\sup_{Q_{n}}\|\widehat {f}-f_{\widehat{j}}\|_{L^{2}(Q_{n})}^{2}\right]^{1/2}\mathbb{E}\left[\sup_{Q_{ n}}\|f_{\widehat{j}}-f^{*}\|_{L^{2}(Q_{n})}^{2}\right]^{1/2}\] \[+2\mathbb{E}\left[\sup_{Q_{n}}\left\|\widehat{f}-f_{\widehat{j}} \right\|_{L^{2}(Q_{n})}^{2}\right]\] \[\leq\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}g_{\widehat{j}} (X_{i},X_{i}^{\prime})\right]\right|+4\delta\mathbb{E}\left[\left\|f_{ \widehat{j}}-f^{*}\right\|_{L^{\infty}}^{2}\right]^{1/2}+2\delta^{2}\] \[\leq\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}g_{\widehat{j} }(X_{i},X_{i}^{\prime})\right]\right|+8\delta B+2\delta^{2}. \tag{18}\] Here, the second last inequality follows Lemma 17 using the continuity of \(f^{*}\) and the \(f\in\mathcal{F}\). The last inequality follows the definition of \(\widehat{j}\) and the boundedness of \(f\in\mathcal{F}\) and \(f^{*}\) by \(B\). We further study the first term of the bound (18). As preparation, we define \[r_{j}=B\max\left\{\mathbb{E}\left[\left\|f_{j}-f^{*}\right\|_{P_{X},\Delta}^{ 2}\right]^{1/2},\left(n^{-1}\log N_{L,W}(\delta)\right)^{1/2}\right\},\] for \(j=1,...,N\), and it yields \[r_{\widehat{j}} \leq B\mathbb{E}_{X|X_{1:n},Y_{1:n}}\left[\sup_{x^{\prime}\in \Delta_{h}^{P}(X)}(f_{\widehat{j}}(x^{\prime})-f^{*}(x^{\prime}))^{2}\right]^ {1/2}+B(n^{-1}\log N_{L,W}(\delta))^{1/2}\] \[\leq B\mathbb{E}_{X|X_{1:n},Y_{1:n}}\left[\sup_{x^{\prime}\in \Delta_{h}^{P}(X)}(\widehat{f}(x^{\prime})-f^{*}(x^{\prime}))^{2}\right]^{1/2 }+B(n^{-1}\log N_{L,W}(\delta))^{1/2}+B\delta. \tag{19}\] Here, \(\mathbb{E}_{X|X_{1:n},Y_{1:n}}[\cdot]\) denotes a conditional expectation with given \(X_{1},...,X_{n}\) and \(Y_{1},...,Y_{n}\). 
By the law of iterated expectation, the first term of the bound is decomposed as \[\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}g_{\widehat{j}}(X_ {i},X_{i}^{\prime})\right]\right|\] \[=\frac{1}{n}\Bigg{|}\mathbb{E}\Bigg{[}\sum_{i=1}^{n}\underbrace{ \frac{g_{\widehat{j}}(X_{i},X_{i}^{\prime})}{r_{\widehat{j}}}}_{=:\widehat{g }_{\widehat{j}}(X_{i},X_{i}^{\prime})}r_{\widehat{j}}\Bigg{]}\Bigg{|}\] \[\leq\frac{1}{n}\Bigg{|}\mathbb{E}\Bigg{[}\sum_{i=1}^{n}\widetilde {g}_{\widehat{j}}(X_{i},X_{i}^{\prime})\left(B\mathbb{E}_{X|X_{1:n},Y_{1:n}} \left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}(\widehat{f}(x^{\prime})-f^{*}(x^{ \prime}))^{2}\right]^{1/2}+B(n^{-1}\log N_{L,W}(\delta))^{1/2}+B\delta\right) \Bigg{]}\Bigg{|}\] \[\leq\frac{1}{n}\Bigg{|}\mathbb{E}\Bigg{[}\max_{j=1,...,N_{L,W}( \delta)}\sum_{i=1}^{n}\widetilde{g}_{j}(X_{i},X_{i}^{\prime})\left(B\mathbb{ E}_{X|X_{1:n},Y_{1:n}}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}(\widehat{f}(x^{ \prime})-f^{*}(x^{\prime}))^{2}\right]^{1/2}\right)\Bigg{]}\Bigg{|}\] \[\leq\frac{B}{n}\Bigg{|}\mathbb{E}\Bigg{[}\left(\max_{j=1,...,N_{L,W}(\delta)}\sum_{i=1}^{n}\widetilde{g}_{j}(X_{i},X_{i}^{\prime})\right)^{2} \Bigg{]}^{1/2}\mathbb{E}\left[\left\|\widehat{f}-f^{*}\right\|_{P_{X},\Delta}^{ 2}\right]^{1/2}\Bigg{|}\] \[\leq 2r_{j}^{-2}\mathbb{E}\left[\left(\sup_{x^{\prime}\in\Delta_{h}^{P} (X_{1})}(f_{j}(x^{\prime})-f^{*}(x^{\prime}))^{2}\right)^{2}\right]\] \[\leq 8r_{j}^{-2}\mathbb{E}\left[\left\|f_{j}-f^{*}\right\|_{P_{X}, \Delta}^{2}\right]B^{2}\] \[\leq 8.\] The second inequality follows Holder's inequality. Using the bounds above, we apply the Bernstein inequality as \[\mathbb{P}\left(\sum_{i=1}^{n}\widetilde{g}_{j}(X_{i},X_{i}^{\prime})\geq t \right)\leq\exp\left(-\frac{t^{2}}{2tM/3+2n\operatorname{Var}(\widetilde{g}_{j }(X_{1},X_{1}^{\prime}))}\right)\] \[\leq\exp\left(-\frac{t^{2}}{8tn^{1/2}(\log N_{L,W}(\delta))^{-1/2}/3+1 6n}\right)\] \[\leq\exp\left(-\frac{t^{2}}{16tn^{1/2}(\log N_{L,W}(\delta))^{-1/2} /3}\right)\] \[=\exp\left(-\frac{3t(\log N_{L,W}(\delta))^{1/2}}{16n^{1/2}} \right), \tag{21}\] for \(t\geq 6(n\log N_{L,W}(\delta))^{1/2}\). The last inequality follows \(8tn^{1/2}(\log N_{L,W}(\delta))^{-1/2}/3\geq 16n\) for \(t\) larger than the threshold \(6(n\log N)^{1/2}\). Using the result (21) associated with \(t\geq 6(n\log N_{L,W}(\delta))^{1/2}\), we bound the following expectation: \[\mathbb{E}\left[\max_{j=1,\ldots,N_{L,W}(\delta)}\sum_{i=1}^{n} \overline{g}_{j}(X_{i},X_{i}^{\prime})\right]\] \[=\int_{0}^{\infty}\mathbb{P}\left(\max_{j=1,\ldots,N_{L,W}( \delta)}\sum_{i=1}^{n}\overline{g}_{j}(X_{i},X_{i}^{\prime})\geq t\right)dt\] \[\leq 6(n\log N_{L,W}(\delta))^{1/2}+2N_{L,W}(\delta)\int_{6(n\log N _{L,W}(\delta))^{1/2}}^{\infty}\max_{j=1,\ldots,N_{L,W}(\delta)}\mathbb{P} \left(\sum_{i=1}^{n}\overline{g}_{j}(X_{i},X_{i}^{\prime})\geq t\right)dt\] \[\leq 6(n\log N_{L,W}(\delta))^{1/2}+2N_{L,W}(\delta)\int_{6(n\log N _{L,W}(\delta))^{1/2}}^{\infty}\exp\left(-\frac{3t(\log N_{L,W}(\delta))^{1/2} }{16n^{1/2}}\right)dt\] \[\leq 6(n\log N_{L,W}(\delta))^{1/2}+\frac{32n^{1/2}}{3(\log N_{L,W }(\delta))^{1/2}}.\] Then, the first statement is proved. 
For the second statement, we similarly apply (21) with \(t\geq 6(n\log N_{L,W}(\delta))^{1/2}\) and bound the following expectation: \[\mathbb{E}\left[\left(\max_{j=1,\ldots,N_{L,W}(\delta)}\sum_{i=1 }^{n}\overline{g}_{j}(X_{i},X_{i}^{\prime})\right)^{2}\right]\] \[=\int_{0}^{\infty}\mathbb{P}\left(\max_{j=1,\ldots,N_{L,W}(\delta )}\sum_{i=1}^{n}\overline{g}_{j}(X_{i},X_{i}^{\prime})\geq t^{1/2}\right)dt\] \[\leq 36n\log N_{L,W}(\delta)+2N_{L,W}(\delta)\int_{36n\log N_{L,W}( \delta)}^{\infty}\max_{j=1,\ldots,N_{L,W}(\delta)}\mathbb{P}\left(\sum_{i=1}^ {n}\overline{g}_{j}(X_{i},X_{i}^{\prime})\geq t^{1/2}\right)dt\] \[\leq 36n\log N_{L,W}(\delta)+2N_{L,W}(\delta)\int_{36n\log N_{L,W}( \delta)}^{\infty}\exp\left(-\frac{3t^{1/2}(\log N_{L,W}(\delta))^{1/2}}{16n^{1/ 2}}\right)dt\] \[\leq 36n\log N_{L,W}(\delta)+256n.\] Then, the second statement is also proved. **Proposition 12**.: _Consider the setting in Theorem 3. Then, for any \(\delta\in(0,1]\), we have_ \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{n,\Delta}^{2}\right] \leq\left(4\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2} \right]^{1/2}+10\delta\right)\left(\frac{\log N_{L,W}(\delta)}{n}+\mathbb{E} \left[\|\Xi\|_{L^{\infty}}^{2}\right]\right)^{1/2}\] \[\quad+\Phi_{L,W}^{2}+2\mathbb{E}[\|\Xi\|_{L^{\infty}}]\Phi_{L,W}+ 2\mathbb{E}[\|\Xi\|_{L^{\infty}}^{2}].\] Proof of Proposition 12.: By the definition of the minimization problem (10), \(R_{n}(\widehat{f})\leq R_{n}(f)\) holds for any \(f\in\mathcal{F}(L,W)\), hence we have the following basic inequality \[\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}(\widehat{Y }(x^{\prime})-\widehat{f}(x^{\prime}))^{2}\leq\frac{1}{n}\sum_{i=1}^{n}\max_{ x^{\prime}\in\Delta_{h}^{p}(X_{i})}(\widehat{Y}(x^{\prime})-f(x^{\prime}))^{2},\] which can be rewritten as \[\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}(f^{*}(x^{ \prime})+\Xi(x^{\prime})-\widehat{f}(x^{\prime}))^{2}\leq\frac{1}{n}\sum_{i=1 }^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}(f^{*}(x^{\prime})+\Xi(x^{\prime })-f(x^{\prime}))^{2}. \tag{22}\] We bound both sides of (22). The left-hand side (LHS) of (22) is lower bounded as \[\text{LHS of (22)} =\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i} )}\left\{(f^{*}(x^{\prime})-\widehat{f}(x^{\prime}))^{2}+\Xi(x^{\prime})^{2}+ 2\Xi(x^{\prime})(f^{*}(x^{\prime})-\widehat{f}(x^{\prime}))\right\}\] \[\geq\|f^{*}-\widehat{f}\|_{n,\Delta}^{2}-\|\Xi\|_{n,\Delta}^{2}- \frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}|\Xi(x^{ \prime})(f^{*}(x^{\prime})-\widehat{f}(x^{\prime}))|, \tag{23}\] by applying Lemma 18. Similarly, we bound the right-hand side of (22) as \[\text{RHS of (22)} =\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i })}\left\{(f^{*}(x^{\prime})-f(x^{\prime}))^{2}+\Xi(x^{\prime})^{2}+2\Xi(x^{ \prime})(f^{*}(x^{\prime})-f(x^{\prime}))\right\}\] \[\leq\|f^{*}-f\|_{n,\Delta}^{2}+\|\Xi\|_{n,\Delta}^{2}+\frac{2}{n} \sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}|\Xi(x^{\prime})(f^{*} (x^{\prime})-f(x^{\prime}))|. \tag{24}\] 
Combining (23) and (24) with (22), we obtain \[\|f^{*}-\widehat{f}\|_{n,\Delta}^{2} \leq\|f^{*}-f\|_{n,\Delta}^{2}+2\|\Xi\|_{n,\Delta}^{2}+\underbrace {\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p}(X_{i})}|\Xi(x^{ \prime})(f^{*}(x^{\prime})-\widehat{f}(x^{\prime}))|}_{=:T_{1}}\] \[\quad+\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_{h}^{p} (X_{i})}|\Xi(x^{\prime})(f^{*}(x^{\prime})-f(x^{\prime}))|\] \[\leq\Phi_{L,W}^{2}+2\|\Xi\|_{L^{\infty}}^{2}+T_{1}+2\|\Xi\|_{L^{ \infty}}\Phi_{L,W}, \tag{25}\] by the definition of \(\Phi_{L,W}\) in (12). We will bound the expectations of these terms. Note that the expectations are guaranteed to exist by the boundedness of \(f^{*}\), \(\widehat{f}\), \(f\in\mathcal{F}(L,W)\), and \(\widehat{Y}\). We bound \(\mathbb{E}[T_{1}]\). We define the element of the covering set nearest to \(\widehat{f}\), that is, \(\widehat{j}:=\operatorname*{argmin}_{j^{\prime}=1,\ldots,N}\sup_{Q_{n}}\|f_{j ^{\prime}}-\widehat{f}\|_{L^{2}(Q_{n})}\). Then, we bound \(\mathbb{E}[T_{1}]\) as \[\mathbb{E}[T_{1}]=\mathbb{E}\left[\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime} \in\Delta_{h}(X_{i})}|\Xi(x^{\prime})(f^{*}(x^{\prime})-f_{\widehat{j}}(x^{ \prime})+f_{\widehat{j}}(x^{\prime})-\widehat{f}(x^{\prime}))|\right]\] \[\leq\mathbb{E}\left[\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in \Delta_{h}(X_{i})}|\Xi(x^{\prime})(f^{*}(x^{\prime})-f_{\widehat{j}}(x^{\prime }))|\right]+\mathbb{E}\left[\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in\Delta_ {h}(X_{i})}|\Xi(x^{\prime})(f_{\widehat{j}}(x^{\prime})-\widehat{f}(x^{\prime }))|\right]\] \[\leq\mathbb{E}\left[\frac{2}{n}\sum_{i=1}^{n}\max_{x^{\prime}\in \Delta_{h}(X_{i})}|\Xi(x^{\prime})(f^{*}(x^{\prime})-f_{\widehat{j}}(x^{\prime }))|\frac{\|\widehat{f}-f^{*}\|_{L^{\infty}}+\delta}{\|f_{\widehat{j}}-f^{*}\| _{L^{\infty}}}\right]\] \[\quad+2\mathbb{E}\left[\sup_{Q_{n}}\|\Xi\|_{L^{2}(Q_{n})}^{2}\right]^{1/2} \mathbb{E}\left[\sup_{Q_{n}}\|f_{\widehat{j}}-\widehat{f}\|_{L^{2}(Q_{n})}^{2} \right]^{1/2}\] \[\leq\mathbb{E}\Bigg[\left(\|\widehat{f}-f^{*}\|_{L^{\infty}}+ \delta\right)\underbrace{\frac{2}{n}\sum_{i=1}^{n}\frac{\max_{x^{\prime}\in \Delta_{h}(X_{i})}|\Xi(x^{\prime})(f^{*}(x^{\prime})-f_{\widehat{j}}(x^{\prime }))|}{\|f_{\widehat{j}}-f^{*}\|_{L^{\infty}}}}_{=:Z_{\widehat{j}}}\Bigg]+2 \mathbb{E}\left[\|\Xi\|_{L^{\infty}}^{2}\right]^{1/2}\delta.\] Since we have \[|Z_{j}|\leq\frac{2}{n}\sum_{i=1}^{n}\left|\frac{\max_{x^{\prime}\in\Delta_{h}( X_{i})}\{|\Xi(x^{\prime})||(f^{*}(x^{\prime})-f_{j}(x^{\prime}))|\}}{\|f_{j}-f^{*} \|_{L^{\infty}}}\right|\leq 2\|\Xi\|_{L^{\infty}},\] for any \(j=1,...,N\), the Cauchy–Schwarz inequality yields \[\mathbb{E}\left[\left(\|\widehat{f}-f^{*}\|_{L^{\infty}}+\delta \right)Z_{\widehat{j}}\right] \leq\mathbb{E}\left[\left(\|\widehat{f}-f^{*}\|_{L^{\infty}}+ \delta\right)^{2}\right]^{1/2}\mathbb{E}\left[Z_{\widehat{j}}^{2}\right]^{1/2}\] \[\leq 2\left(\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2 }\right]^{1/2}+\delta\right)\mathbb{E}\left[\max_{j=1,...,N_{L,W}(\delta)}Z_{ j}^{2}\right]^{1/2}\] \[\leq 4\left(\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2 }\right]^{1/2}+\delta\right)\left(\frac{\log N_{L,W}(\delta)+\mathbb{E}\left[ \|\Xi\|_{L^{\infty}}^{2}\right]}{n}\right)^{1/2}.\] The last inequality follows from the maximal inequality (Theorem 3.1.10 in [1]) for bounded random processes. 
Using this result, we obtain \[\mathbb{E}[T_{1}] \leq 4\left(\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2 }\right]^{1/2}+\delta\right)\left(\frac{\log N_{L,W}(\delta)+\mathbb{E}\left[ \|\Xi\|_{L^{\infty}}^{2}\right]}{n}\right)^{1/2}+2\mathbb{E}\left[\|\Xi\|_{L^ {\infty}}^{2}\right]^{1/2}\delta\] \[\leq\left(4\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2 }\right]^{1/2}+10\delta\right)\left(\frac{\log N_{L,W}(\delta)}{n}+\mathbb{E} \left[\|\Xi\|_{L^{\infty}}^{2}\right]\right)^{1/2}. \tag{26}\] We substitute the bound (26) into the expectation of (25) and obtain the statement. Proof of Theorem 2.: Fix \(\varepsilon>0\) arbitrarily. Also, we fix \(C_{*}=C_{P_{X},p,d,B}\) as used in the statement of Proposition 8. By the universal approximation theorem (e.g., Theorem 1 in [1]) associated with the continuity of \(f^{*}\), there exists a tuple \((L^{\prime},W^{\prime})\) such that \[\Phi_{L^{\prime},W^{\prime}}\leq\sqrt{\varepsilon h^{d}/(4C_{*})}.\] Further, by Assumption 3, there exists \(\bar{n}\in\mathbb{N}\) such that \[\mathbb{E}[\|\Xi\|_{L^{\infty}}^{2}]\leq\sqrt{\varepsilon h^{2d}/(4C_{*})}.\] Then, for all \(n\geq\bar{n}\), Proposition 8 yields that \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\leq C_{*}h^{-d} \frac{(W^{\prime}L^{\prime})^{2}\log(W^{\prime}L^{\prime})\log n}{n}+\frac{3 \varepsilon}{4}.\] Then, for any \(n\geq\bar{n}\vee(4C_{*}(W^{\prime}L^{\prime})^{2}\log(W^{\prime}L^{\prime })h^{-d}\varepsilon^{-1})\), we have \(\mathbb{E}[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}]\leq\varepsilon/4+3 \varepsilon/4=\varepsilon\), which shows the statement. Proof of Theorem 3.: As preparation, Lemma 20 gives the following bound \[\Phi_{L,W}\leq C_{d,\beta}(LW)^{-2\beta/d}.\] With this bound on \(\Phi_{L,W}\), we apply Proposition 8 and obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right] \tag{27}\] \[\leq C_{P_{X},p,d,B,\beta}h^{-d}\left(\frac{(WL)^{2}\log(WL) \log n}{n}+(LW)^{-4\beta/d}+\mathbb{E}[\|\Xi\|_{L^{\infty}}](LW)^{-2\beta/d}+h^{- d}\mathbb{E}[\|\Xi\|_{L^{\infty}}^{2}]\right).\] Further, we have \[(LW)^{-4\beta/d}+\mathbb{E}[\|\Xi\|_{L^{\infty}}](LW)^{-2\beta/d}+h^{-d}\mathbb{E }[\|\Xi\|_{L^{\infty}}^{2}]\leq\left\{(LW)^{-2\beta/d}+h^{-d/2}\mathbb{E}[\| \Xi\|_{L^{\infty}}^{2}]^{1/2}\right\}^{2},\] by applying Jensen's inequality. Arranging the terms, we obtain the statement. Proof of Corollary 4.: We start with the inequality (27) in the proof of Theorem 3 and obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right]\] \[\leq C_{P_{X},p,d,B,\beta}h^{-d}\left(n^{-2\beta/(2\beta+d)}( \log^{2}n+1)+\mathbb{E}[\|\Xi\|_{L^{\infty}}]n^{-\beta/(2\beta+d)}+h^{-d} \mathbb{E}[\|\Xi\|_{L^{\infty}}^{2}]\right)\] by the setting \(WL\asymp n^{d/(4\beta+2d)}\). Substituting \(\mathbb{E}[\|\Xi\|_{L^{\infty}}^{2}]\leq\zeta_{n}^{2}=O(n^{-2\beta/(2\beta+d)}\log^{\beta^{*}}n)\) and \(\mathbb{E}[\|\Xi\|_{L^{\infty}}]\leq\zeta_{n}\) by Jensen's inequality, we obtain the statement. ## Appendix B Proof for Applications ### Proof for General Loss Setting We give proofs of the results in Section 5. **Proposition 13**.: _Consider the setting in Proposition 6. Then, for \(n\) such that \(\log N_{L,W}(1/n)\geq 1\), we have:_ \[\mathbb{E}\left[\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*})\right]\leq \frac{C_{\ell,B}(\log N_{L,W}(1/n)+V^{2})}{n^{1/2}}+C_{\ell}(\Phi_{L,W}+ \mathbb{E}[\|\Xi\|_{L^{\infty}}]).\] This proof is similar to that of Lemma 3.1 in [11]. A difference is that, in our setting, a property of the loss depends on \(f\), so we have to modify the argument. Hence, we write down the proof. 
Proof of Proposition 13.: We develop the proof in the following four steps: (i) a basic decomposition, (ii) bounding the variance, (iii) bounding the bias, and (iv) combining the bounds. _Step 1: Basic decomposition._ We define i.i.d. copies of the observations \(D:=\{(X_{i},Y_{i})_{i=1}^{n}\}\) as \(D^{\prime}:=\{(X_{i}^{\prime},Y_{i}^{\prime})_{i=1}^{n}\}\), and also define an excess loss as \[g(x,\widehat{Y},f)=\sup_{x^{\prime}\in\Delta_{h}^{P}(x)}\ell(f(x^{ \prime}),\widehat{Y}(x^{\prime}))-\sup_{x^{\prime}\in\Delta_{h}^{P}(x)}\ell(f^{ *}(x^{\prime}),\widehat{Y}(x^{\prime})) \tag{28}\] We further define empirical means of the excess loss as \(G_{n}(f):=n^{-1}\sum_{i=1}^{n}g(X_{i},\widehat{Y},f)\) with the observations \(D\), and \(G_{n}^{\prime}(f):=n^{-1}\sum_{i=1}^{n}g(X_{i}^{\prime},\widehat{Y},f)\) with the copies \(D^{\prime}\). Since \(\widetilde{f}\) is independent of \(D^{\prime}\), we can rewrite the expected excess risk as \[\mathbb{E}\left[\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*})\right]= \mathbb{E}\left[\mathbb{E}_{D^{\prime}}\left[G_{n}^{\prime}(\widetilde{f}) \right]\right].\] Since \(\widetilde{f}\) is the minimizer of the empirical risk and the loss is bounded, we obtain the following inequality of expectations: \[\mathbb{E}\left[G_{n}(\widetilde{f})\right]\leq\mathbb{E}\left[G_{ n}(f)\right], \tag{29}\] for any \(f\in\mathcal{F}(L,W)\). We take \(\bar{f}\) such that \(\|\bar{f}-f^{*}\|_{L^{\infty}}=\inf_{f\in\mathcal{F}(L,W)}\|f-f^{*}\|_{L^{ \infty}}\). Using these facts, we decompose the excess risk as \[\mathbb{E}\left[\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*}) \right]=\mathbb{E}\left[\mathbb{E}_{D^{\prime}}\left[G_{n}^{\prime}(\widetilde{ f})\right]\right]\leq\mathbb{E}\underbrace{\left[-2G_{n}(\widetilde{f})+\mathbb{E}_{D^{ \prime}}\left[G_{n}^{\prime}(\widetilde{f})\right]\right]}_{=:\mathcal{V}}+2 \mathbb{E}\left[\underbrace{G_{n}(\bar{f})}_{=:\mathcal{B}}\right]. \tag{30}\] The inequality follows from (29). _Step 2: Bound the variance \(\mathbb{E}[\mathcal{V}]\)._ We bound the expectation of the term \(\mathcal{V}\). By the boundedness of both \(\widehat{Y}\) and \(\widetilde{f}\) from Assumption 3 and (7), the expectation \(\mathbb{E}[\mathcal{V}]\) exists. We prepare additional notations. Fix \(\delta\in(0,1]\). We consider a covering set \(\{f_{j}\}_{j=1}^{N_{L,W}(\delta)}\subset\mathcal{F}(L,W)\), and we pick \(f_{j}\) from the set such that \(\sup_{Q_{n}}\|f_{j}-\widetilde{f}\|_{L^{2}(Q_{n})}\leq\delta\). We define a term \(\widetilde{g}(X_{i},\widehat{Y},\widetilde{f})\) by the following rearrangement of \(\mathcal{V}\): \[\mathcal{V}=\frac{1}{n}\sum_{i=1}^{n}\left\{\mathbb{E}_{D^{\prime }}\left[G_{n}^{\prime}(\widetilde{f})\right]-2g(X_{i},\widehat{Y}, \widetilde{f})\right\}=:\frac{1}{n}\sum_{i=1}^{n}\widetilde{g}(X_{i},\widehat {Y},\widetilde{f}),\] which yields the following form \[\mathbb{E}[\mathcal{V}] =\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\widetilde{g}(X_{i}, \widehat{Y},\widetilde{f})\right]\] \[=\mathbb{E}\underbrace{\left[\frac{1}{n}\sum_{i=1}^{n}\widetilde{ g}(X_{i},\widehat{Y},f_{j})\right]}_{:=\mathcal{V}_{1}}+\mathbb{E}\underbrace{ \left[\frac{1}{n}\sum_{i=1}^{n}\widetilde{g}(X_{i},\widehat{Y},\widetilde{f})- \frac{1}{n}\sum_{i=1}^{n}\widetilde{g}(X_{i},\widehat{Y},f_{j})\right]}_{=: \mathcal{V}_{2}}. \tag{31}\] We bound \(\mathbb{E}[\mathcal{V}_{1}]\) and \(\mathbb{E}[\mathcal{V}_{2}]\) separately, starting with \(\mathbb{E}[\mathcal{V}_{2}]\). 
Since \(g\) in (28) is Lipschitz continuous in \(f\) with its Lipschitz constant \(C_{\ell}\) by Lemma 19, we easily see that \(\widetilde{g}\) is Lipschitz continuous in \(f\) with its Lipschitz constant \(6C_{\ell}\) Thus, we obtain that \[\mathbb{E}[\mathcal{V}_{2}]\leq\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n} \widetilde{g}(X_{i},\widetilde{Y},\widetilde{f})\right]-\mathbb{E}\left[\frac{1 }{n}\sum_{i=1}^{n}\widetilde{g}(X_{i},\widetilde{Y},f_{j})\right]\right|\leq 6C_ {\ell}\delta. \tag{32}\] Next, we bound the term \(\mathbb{E}[\mathcal{V}_{1}]\). Here, we need to consider a uniformly bounded function \(y:[0,1]^{d}\rightarrow[-B,B]\) For each \(f_{j}\) in the covering set, \(t>0\), and the bounded function \(y\), we use the Bernstein inequality to derive its stochastic upper bound. As preparation, we consider a threshold \(B_{n}\geq 1\) depending on \(n\) and define a clipped preprocessing \(\widetilde{Y}_{B_{n}}(\cdot):=\max\{\min\{\widetilde{Y}(\cdot),B_{n}\},-B_{n}\}\). We firstly approximate \(\mathbb{E}[\mathcal{V}_{1}]\) by the Lipschitz continuity of \(\ell\) as \[\mathbb{E}[\mathcal{V}_{1}]\leq\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n} \widetilde{g}(X_{i},\widetilde{Y}_{B_{n}},f_{j})\right]+6C_{\ell}\mathbb{E} \left[\|\widehat{Y}-\widetilde{Y}_{B_{n}}\|_{L^{\infty}}\right]. \tag{33}\] Since \(|\widetilde{Y}(x)-\widehat{Y}_{B_{n}}(x)|=|\widehat{Y}(x)|\mathbf{1}\{| \widetilde{Y}(x)|\geq B_{n}\}\) holds, we can bound the expectation in the second term of the right-hand side as \[\mathbb{E}\left[\|\widehat{Y}-\widetilde{Y}_{B_{n}}\|_{L^{\infty}}\right] =\mathbb{E}\left[\sup_{x\in[0,1]^{d}}|\widehat{Y}(x)|\mathbf{1}\{| \widetilde{Y}(x)|\geq B_{n}\}|\right]\] \[\leq\mathbb{E}\left[\sup_{x\in[0,1]^{d}}|\widehat{Y}(x)|\sup_{x \in[0,1]^{d}}\mathbf{1}\{|\widetilde{Y}(x)|\geq B_{n}\}|\right]\] \[\leq\mathbb{E}\left[\|\widehat{Y}\|_{L^{\infty}}^{2}/B_{n}\right].\] The last inequality follows \(\mathbf{1}\{x\geq 1\}\leq x\) for any \(x\geq 0\). The existence of the second moment is guaranteed by Assumption 3. We put this result to (33) and obtain \[\mathbb{E}[\mathcal{V}_{1}]\leq\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n} \widetilde{g}(X_{i},\widetilde{Y}_{B_{n}},f_{j})\right]+6C_{\ell}\mathbb{E} \left[\|\widetilde{Y}\|_{L^{\infty}}^{2}/B_{n}\right]. \tag{34}\] Then, we bound the first term \(\mathbb{E}[n^{-1}\sum_{i=1}^{n}\widetilde{g}(X_{i},\widetilde{Y}_{B_{n}},f_{j })]\). 
Since we have \(|g(x,\widetilde{Y}_{B_{n}},f)|\leq C_{\ell}(B_{n}\lor B)\) for any \(x\in[0,1]^{d}\) and \(f:\|f\|_{L^{\infty}}\leq B\), we obtain the following inequality with fixed \(\widetilde{Y}_{B_{n}}\): \[\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\widetilde{g}(X_{i}, \widetilde{Y}_{B_{n}},f_{j})>t\right)\] \[=\mathbb{P}\left(\mathbb{E}_{D^{\prime}}\left[g(X_{i}^{\prime}, \widetilde{Y}_{B_{n}},f_{j})\right]-\frac{2}{n}\sum_{i=1}^{n}g(X_{i},\widetilde {Y}_{B_{n}},f_{j})>t\right)\] \[=\mathbb{P}\left(\mathbb{E}_{D^{\prime}}\left[g(X_{i}^{\prime}, \widetilde{Y}_{B_{n}},f_{j})\right]-\frac{1}{n}\sum_{i=1}^{n}g(X_{i},\widetilde {Y}_{B_{n}},f_{j})>\frac{t}{2}+\frac{1}{2}\mathbb{E}_{D^{\prime}}\left[g(X_{i }^{\prime},\widetilde{Y}_{B_{n}},f_{j})\right]\right)\] \[\leq\mathbb{P}\left(\mathbb{E}_{D^{\prime}}\left[g(X_{i}^{\prime}, \widetilde{Y}_{B_{n}},f_{j})\right]-\frac{1}{n}\sum_{i=1}^{n}g(X_{i},\widetilde {Y}_{B_{n}},f_{j})>\frac{t}{2}+\frac{1}{2}\frac{\operatorname{Var}_{D^{\prime}}( g(X_{i},\widetilde{Y}_{B_{n}},f_{j}))}{4C_{\ell}B_{n}}\right)\] \[\leq\exp\left(-\frac{n(t^{\prime})^{2}}{2\operatorname{Var}_{D^{\prime}}(g(X_{i },\widehat{Y}_{B_{n}},f_{j}))+16C_{\ell}(B_{n}\lor B)t^{\prime}/3}\right)\] \[\leq\exp\left(-\frac{n(t^{\prime})^{2}}{2t^{\prime}C_{\ell}(B_{n} \lor B)+C_{\ell}(B_{n}\lor B)t^{\prime}/3}\right)\] \[\leq\exp\left(-\frac{n(t^{\prime})^{2}}{16t^{\prime}C_{\ell}(B_{n} \lor B)+16C_{\ell}(B_{n}\lor B)t^{\prime}/3}\right)\] \[\leq\exp\left(-\frac{3nt^{\prime}}{64C_{\ell}(B_{n}\lor B)}\right)\] \[\leq\exp\left(-\frac{3nt}{128C_{\ell}(B_{n}\lor B)}\right).\] The first and third inequalities follow \(\operatorname{Var}_{D^{\prime}}(g(X_{i},\widehat{Y}_{B_{n}},f_{j}))\leq 4C_{ \ell}B_{n}\mathbb{E}_{D^{\prime}}[g(X_{i},\widehat{Y}_{B_{n}},f_{j})]\), and the second and last inequalities follows a setting \(t^{\prime}=t/2+\operatorname{Var}_{D^{\prime}}(g(X_{i},\widehat{Y}_{B_{n}},f _{j}))/(8C_{\ell}(B\lor B_{n}))\). Using this inequality for a uniform bound in terms of the covering set \(\{f_{j}\}_{j=1}^{N_{L,W}(\delta)}\) and the independent random functions \(\widehat{Y}\) and \(\widehat{Y}_{B_{n}}\), we obtain \[\mathbb{P}\left(\max_{j=1,\ldots,N_{L,W}(\delta)}\frac{1}{n}\sum_{i=1}^{n} \overline{g}(X_{i},\widehat{Y}_{B_{n}},f_{j})>t\right)\leq N_{L,W}(\delta)\exp \left(-\frac{3nt}{128C_{\ell}(B_{n}\lor B)t}\right).\] Then, by the maximal inequality (Corollary 2.2.8 in [10]), for any \(\eta>0\), we have \[\mathbb{E}\left[\max_{j=1,\ldots,N_{L,W}(\delta)}\mathbb{E}\left[ \frac{1}{n}\sum_{i=1}^{n}\overline{g}(X_{i},\widehat{Y}_{B_{n}},f_{j})\right]\right]\] \[\leq\eta+\int_{\eta}^{\infty}\mathbb{P}\left(\max_{j=1,\ldots,N_{ L,W}(\delta)}\frac{1}{n}\sum_{i=1}^{n}\overline{g}(X_{i},\widehat{Y}_{B_{n}},f _{j})>t\right)dt\] \[\leq\eta+\int_{\eta}^{\infty}N_{L,W}(\delta)\exp\left(-\frac{3nt }{128C_{\ell}(B_{n}\lor B)t}\right)dt\] \[\leq\eta+\frac{N_{L,W}(\delta)(128C_{\ell}(B_{n}\lor B))}{3n}\exp \left(-\frac{3m\eta}{128C_{\ell}(B_{n}\lor B)}\right).\] We set \(B_{n}=n^{1/2}\), hence we have \((B\lor B_{n})\leq C_{B}n^{1/2}\). Also, we set \(\eta=(128C_{B,\ell}n^{1/2})\log N_{L,W}(\delta)/(3n)\) and put this result into (34), we obtain \[\mathbb{E}[\mathcal{V}_{1}]\leq\mathbb{E}\left[\max_{j=1,\ldots,N}\mathbb{E} \left[\frac{1}{n}\sum_{i=1}^{n}\overline{g}(X_{i},\widehat{Y},f_{j})\right] \right]\leq\frac{C_{\ell,B}(\log N_{L,W}(\delta)+\mathbb{E}[\|\widehat{Y}\|_ {L^{\infty}}^{2}])}{n^{1/2}}. 
\tag{35}\] Combining the inequalities (32) and (35) into (31) and setting \(\delta=1/n\), we obtain \[\mathbb{E}[\mathcal{V}]\leq\frac{C_{\ell,B}(\log N_{L,W }(1/n)+\mathbb{E}[\|\widehat{Y}\|_{L^{\infty}}^{2}])}{n^{1/2}}.\] _Step 3: Bound the bias \(\mathbb{E}[\mathcal{B}]\)._ By the Lipschitz continuity of the loss \(\ell\) from Assumption 4, we have \[\mathbb{E}[\mathcal{B}] =\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\sup_{x^{\prime}\in \Delta_{h}^{p}(X_{i})}\ell(\bar{f}(x^{\prime}),\widehat{Y}(x^{\prime}))\right]\] \[\leq\mathbb{E}\left[\sup_{x\in[0,1]^{d}}\ell(\bar{f}(x),\widehat{ Y}(x))\right]\] \[\leq\mathbb{E}\left[\sup_{x\in[0,1]^{d}}C_{\ell}|\bar{f }(x)-\widehat{Y}(x)|+\ell(\widehat{Y}(x),\widehat{Y}(x))\right]\] \[\leq C_{\ell}\mathbb{E}\left[\|\bar{f}-\widehat{Y}\|_{L^{\infty}}\right]\] \[\leq C_{\ell}(\|\bar{f}-f^{*}\|_{L^{\infty}}+\mathbb{E}[\|f^{*}- \widehat{Y}\|_{L^{\infty}}])\] \[\leq C_{\ell}(\Phi_{L,W}+\mathbb{E}[\|\Xi\|_{L^{\infty}}]).\] The last inequality holds by the choice of \(\bar{f}\) with \(\|\bar{f}-f^{*}\|_{L^{\infty}}=\inf_{f\in\mathcal{F}(L,W)}\|f-f^{*}\|_{L^{ \infty}}=\Phi_{L,W}\). _Step 4: Combining the bounds._ We combine the results in Steps 2 and 3 with the decomposition (30) and obtain the statement. **Lemma 14**.: _Consider the expected adversarial risk \(\widetilde{R}(\cdot)\) with general losses as (14). Then, for the estimator \(\widetilde{f}\) as (13) and \(q\in[1,\infty)\), we have_ \[\|f^{*}-\widetilde{f}\|_{L^{\infty}}^{q}\leq C_{P_{X},p,d,\ell,q}h^{-d}\left( \widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*})+\|\Xi\|_{L^{\infty}}^{q} \vee\|\Xi\|_{L^{\infty}}\right).\] Proof of Lemma 14.: We develop a lower bound of \(\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*})\) as \[\widetilde{R}(\widetilde{f})-\widetilde{R}(f^{*}) =\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{p}(X)}\ell( \widehat{Y}(x^{\prime}),\widetilde{f}(x^{\prime}))-\sup_{x^{\prime}\in\Delta_ {h}^{p}(X)}\ell(\widehat{Y}(x^{\prime}),f^{*}(x^{\prime}))\right]\] \[\geq C_{P_{X},p,d,\ell}h^{d}\sup_{x\in[0,1]^{d}}|\ell(\widehat{ Y}(x),\widetilde{f}(x))|-C_{\ell}\|\widehat{Y}-f^{*}\|_{L^{\infty}}\] \[\geq C_{P_{X},p,d,\ell}h^{d}\|\widehat{Y}-\widetilde{f}\|_{L^{ \infty}}^{q}-C_{\ell}\|\Xi\|_{L^{\infty}}\] \[\geq C_{P_{X},p,d,\ell,q}h^{d}\left(\|f^{*}-\widetilde{f}\|_{L^{ \infty}}^{q}-\|\Xi\|_{L^{\infty}}^{q}\right)-C_{\ell}\|\Xi\|_{L^{\infty}}.\] Here, the first inequality follows from Lemma 15 and the Lipschitz continuity of \(\ell\) from Assumption 4, the second inequality follows from the lower bound \(\ell(y,x)\geq c_{\ell}|y-x|^{q}\) in Assumption 4, and the last inequality follows from \((a+b)^{q}\leq C_{q}(a^{q}+b^{q})\) for \(q\in[1,\infty)\) and \(a,b\geq 0\). Proof of Proposition 6.: By Proposition 13 and Lemma 14, we have \[\mathbb{E}\left[\|f^{*}-\widetilde{f}\|_{L^{\infty}}^{2}\right] \leq C_{P_{X},p,d,\ell,q}h^{-2d/q}\left(\mathbb{E}[(\widetilde{R} (\widetilde{f})-\widetilde{R}(f^{*}))^{2/q}]+\mathbb{E}[\|\Xi\|_{L^{\infty}}^ {2}]\right)\] \[\leq C_{P_{X},B,p,d,\ell,q}h^{-2d/q}\left\{\left(\frac{\log N_{L, W}(1/n)}{n^{1/2}}\right)^{2/q}+\Phi_{L,W}^{2/q}+\zeta_{n}^{2}\right\}\] \[\leq C_{P_{X},B,p,d,\ell,q,V}h^{-2d/q}\left\{\left(\frac{W^{2}L^{ 2}\log(WL)\log n}{n^{1/2}}\right)^{2/q}+\Phi_{L,W}^{2/q}+\zeta_{n}^{2}\right\}.\] The last inequality follows from Lemma 16. We set \(WL\asymp n^{d/(4\beta+4d)}\) and obtain the statement. ### Proof of Adaptation to Besov Space We give the proof of the result in Section 5.2. Proof of Proposition 7.: To show the statement, we slightly modify the proof of Proposition 8. 
We start from the inequality (17) with \(\delta=1/n\). Since we use \(\overline{\mathcal{F}}(L,W,S,\bar{B})\) as the set of candidate functions instead of \(\mathcal{F}(L,W)\), we obtain the following updated version of (17): \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right] \leq C_{P_{X},p,d,B}h^{-d}\left\{\frac{\log\widetilde{N}_{L,W,S,\bar{B}}(1/n) }{n}+\widetilde{\Phi}_{L,W,S,\bar{B}}^{2}+\zeta_{n}^{2}\right\}, \tag{36}\] which replaces \(N_{L,W}(1/n)\) by \(\widetilde{N}_{L,W,S,\bar{B}}(1/n):=\sup_{Q_{n}}N(1/n,\overline{\mathcal{F}}(L,W,S,\bar{B}),\|\cdot\|_{L^{2}(Q_{n})})\) and \(\Phi_{L,W}\) by \(\widetilde{\Phi}_{L,W,S,\bar{B}}:=\inf_{f\in\overline{\mathcal{F}}(L,W,S,\bar {B})}\|f-f^{*}\|_{L^{\infty}}\). We study the terms \(\widetilde{N}_{L,W,S,\bar{B}}(1/n)\) and \(\widetilde{\Phi}_{L,W,S,\bar{B}}\). For the approximation error term \(\widetilde{\Phi}_{L,W,S,\bar{B}}\), we apply Lemma 21 with \(r=\infty\) and obtain \[\widetilde{\Phi}_{L,W,S,\bar{B}}\leq C_{d,\beta}N^{-\beta/d}, \tag{37}\] for sufficiently large \(N\) such that \(L\geq C_{d,p,\beta,B}\log(N)\), \(W=C_{d,\beta}N\), and \(S=(L-1)C_{d,\beta}N+N\). For the entropy term \(\widetilde{N}_{L,W,S,\bar{B}}(1/n)\), we apply Lemma 22 and obtain \[\log\widetilde{N}_{L,W,S,\bar{B}}(1/n) \leq\log N(1/n,\overline{\mathcal{F}}(L,W,S,\bar{B}),\|\cdot\|_{ L^{\infty}})\] \[\leq LS\log(nL\bar{B}(1+S))\] \[\leq C_{d,\beta}L^{2}N\log(nL^{2}\bar{B}N)\] \[\leq C_{d,p,\beta,B}N\log^{2}(N)\log(nN\log^{2}(N)), \tag{38}\] by substituting the setup of \(L\), \(S\), \(W\), and \(\bar{B}\). We substitute (37) and (38) into (36) and obtain \[\mathbb{E}\left[\|\widehat{f}-f^{*}\|_{L^{\infty}}^{2}\right] \leq C_{P_{X},p,d,B,\beta}h^{-d}\left\{\frac{N\log^{2}(N)\log(nN\log^{2}(N)) }{n}+N^{-2\beta/d}+\zeta_{n}^{2}\right\}.\] We set \(N\asymp n^{d/(2\beta+d)}\) and obtain the statement. ## Appendix C Supportive Result **Lemma 15**.: _Consider a non-negative bounded continuous function \(g:[0,1]^{d}\to\mathbb{R}_{+}\). Then, we have_ \[\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}g(x^{ \prime})\right]\geq\|g\|_{L^{\infty}}P_{X}(\Delta_{h}^{p}(x^{*})),\] _with \(x^{*}\in\operatorname*{argmax}_{x\in[0,1]^{d}}g(x)\). Further, if Assumption 1 holds, then we have_ \[\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}g(x^{ \prime})\right]\geq\|g\|_{L^{\infty}}h^{d}C_{P_{X},p,d}.\] Proof of Lemma 15.: Let \(A:=\{x\in[0,1]^{d}\mid g(x)=\max_{x^{\prime}\in[0,1]^{d}}g(x^{\prime})\}\) be the set of maximizers of \(g\), which is non-empty because of the compactness of \([0,1]^{d}\) and the boundedness/continuity of \(g\). Also, we define a union \(\Delta_{A}:=\cup_{x\in A}\Delta_{h}^{P}(\{x\})\). By the non-negativity of \(g\) and the symmetry of the \(L^{p}\)-ball, which yields \(\Delta_{h}^{P}(X)\cap A\neq\emptyset\) whenever \(X\in\Delta_{A}\), we obtain \[\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}g(x^{ \prime})\right] \geq\mathbb{E}_{X}\left[\sup_{x^{\prime}\in\Delta_{h}^{P}(X)}g(x^{ \prime})\mathbf{1}\{X\in\Delta_{A}\}\right]\] \[=\mathbb{E}_{X}\left[\sup_{x\in[0,1]^{d}}g(x)\mathbf{1}\{X\in \Delta_{A}\}\right]\] \[=\|g\|_{L^{\infty}}P_{X}(\Delta_{A}).\] Hence, we obtain the first statement. We now consider the case in which Assumption 1 holds. We develop a lower bound of \(P_{X}(\Delta_{A})\) as \[P_{X}(\Delta_{A})\geq\inf_{x\in A}P_{X}(\Delta_{h}^{P}(\{x\}))\geq C_{P_{X}} \inf_{x\in A}\lambda(\Delta_{h}^{P}(\{x\}))\geq C_{P_{X}}\inf_{x\in[0,1]^{d}} \lambda(\Delta_{h}^{P}(\{x\})),\] where \(C_{P_{X}}\) is the lower bound of the density function of \(P_{X}\) from Assumption 1, and \(\lambda(\cdot)\) is the Lebesgue measure. Since the Lebesgue measure of the \(L^{p}\)-ball is known in closed form, we obtain \[\inf_{x\in[0,1]^{d}}\lambda(\Delta_{h}^{P}(\{x\}))=\frac{\Gamma(1/p+1)^{d}}{ \Gamma(d/p+1)}h^{d},\] where \(\Gamma(\cdot)\) is the Gamma function; the infimum is attained at a corner of \([0,1]^{d}\), where only the positive-orthant portion of the ball remains inside the domain. Then, we obtain the second statement. 
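As a quick numerical check of this closed-form volume (an illustration only, for finite \(p\); it is not used in the proofs), one can estimate the corner volume by Monte Carlo:

```python
import numpy as np
from math import gamma

def corner_volume_mc(d, p, h, n=200_000, seed=0):
    """Monte Carlo estimate of lambda({x in [0,1]^d : ||x||_p <= h}), i.e. the
    portion of the L^p-ball of radius h remaining at the corner x = 0."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, h, size=(n, d))      # uniform samples of the cube [0, h]^d
    inside = (u ** p).sum(axis=1) <= h ** p
    return (h ** d) * inside.mean()

d, p, h = 3, 2, 0.25
exact = gamma(1 / p + 1) ** d / gamma(d / p + 1) * h ** d
print(exact, corner_volume_mc(d, p, h))       # the two values agree closely
```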
We next develop a covering number bound. The following lemma holds immediately from [1] and [10]. **Lemma 16**.: _Consider the set of deep neural networks as (7) with the depth \(L\), the width \(W\), and the upper bound \(B\). For any \(\delta>0\) and sufficiently large \(n\), we have_ \[\log N(\delta,\mathcal{F}(L,W),\|\cdot\|_{L^{2}(P_{n})})\leq CW^{2}L^{2}\log( WL)\log(Bn/\delta).\] Proof of Lemma 16.: Let \(D\) be the VC-dimension of \(\mathcal{F}\), and \(S(\leq W^{2}L)\) be the number of parameters in \(\mathcal{F}\). By Theorem 3 in [10], we bound the VC-dimension as \(D=O(SL\log(S))\leq O(W^{2}L^{2}\log(WL))\). Using this inequality and Theorem 12.2 in [1], we have \[\log N(\delta,\mathcal{F}(L,W),\|\cdot\|_{L^{2}(P_{n})})\leq D\log\left(\frac {enB}{\delta D}\right)\leq CW^{2}L^{2}\log(WL)\log(Bn/\delta),\] for \(n=\Omega(W^{2}L^{2}\log(WL))\). **Lemma 17**.: _Consider a non-empty compact set \(T\subset\mathbb{R}^{d}\) with some \(d\) and continuous bounded functions \(f,f^{\prime}:T\to\mathbb{R}\). Then, we have_ \[\left|\sup_{t\in T}(f(t)+f^{\prime}(t))^{2}-\sup_{t\in T}f(t)^{2}\right|\leq 2\|f\|_{ L^{\infty}}\|f^{\prime}\|_{L^{\infty}}+\|f^{\prime}\|_{L^{\infty}}^{2}.\] Proof of Lemma 17.: We define the optimal values \(t^{*}\in T\) and \(t^{\dagger}\in T\) such that \(\sup_{t\in T}(f(t)+f^{\prime}(t))^{2}=(f(t^{*})+f^{\prime}(t^{*}))^{2}\) and \(\sup_{t\in T}f(t)^{2}=f(t^{\dagger})^{2}\). Note that such \(t^{*}\in T\) and \(t^{\dagger}\in T\) exist by the compactness of \(T\) and the continuity of \(f\) and \(f^{\prime}\). We first derive the following inequality \[\sup_{t\in T}(f(t)+f^{\prime}(t))^{2}-\sup_{t\in T}f(t)^{2} \leq f(t^{*})^{2}+2f(t^{*})f^{\prime}(t^{*})+f^{\prime}(t^{*})^{2 }-f(t^{*})^{2}\] \[\leq 2\|f\|_{L^{\infty}}\|f^{\prime}\|_{L^{\infty}}+\|f^{\prime}\|_ {L^{\infty}}^{2}.\] Second, we develop a bound for the opposite side as \[\sup_{t\in T}f(t)^{2}-\sup_{t\in T}(f(t)+f^{\prime}(t))^{2} \leq f(t^{\dagger})^{2}-(f(t^{\dagger})+f^{\prime}(t^{\dagger}))^{2}\] \[=-2f(t^{\dagger})f^{\prime}(t^{\dagger})-f^{\prime}(t^{\dagger}) ^{2}\] \[\leq 2\|f\|_{L^{\infty}}\|f^{\prime}\|_{L^{\infty}}+\|f^{\prime}\|_ {L^{\infty}}^{2}.\] Then, we obtain the statement. **Lemma 18**.: _For any continuous and bounded functions \(f,g\) on a compact set \(I\), we have_ \[\max_{t\in I}(f(t)+g(t))\geq\max_{t\in I}f(t)-\max_{t\in I}|g(t)|.\] Proof of Lemma 18.: Let \(t^{\prime}\in I\) attain \(\max_{t\in I}(f(t)+g(t))\) and \(t^{*}\in I\) attain \(\max_{t\in I}f(t)\); both exist by the compactness of \(I\) and the boundedness/continuity of \(f,g\). The statement follows from \[\max_{t\in I}(f(t)+g(t))=f(t^{\prime})+g(t^{\prime})\geq f(t^{*})+g(t^{*}) \geq\max_{t\in I}f(t)-\max_{t\in I}|g(t)|.\] **Lemma 19**.: _Consider functions \(f,f^{\prime},y:[0,1]^{d}\to[-B,B]\), and a loss function \(\ell\) satisfying Assumption 4. Also, consider the function \(g\) as (28). For any \(x\in[0,1]^{d}\), we have_ \[g(x,y,f)-g(x,y,f^{\prime})\leq C_{\ell}|f(\bar{x})-f^{\prime}(\bar{x})|,\] _for some \(\bar{x}\in[0,1]^{d}\)._ Proof of Lemma 19.: We define \(x^{*}\in\Delta_{h}^{p}(x)\) such that \(\ell(y(x^{*}),f(x^{*}))=\max_{x^{\prime}\in\Delta_{h}^{p}(x)}\ell(y(x^{\prime}),f(x^{ \prime}))\). 
Its existence follows from the continuity of \(f,f^{\prime},y\), and \(\ell\). For \(f,f^{\prime}\in L^{2}([0,1]^{d})\), we have \[g(x,y,f)-g(x,y,f^{\prime}) =\max_{x^{\prime}\in\Delta_{h}^{p}(x)}\ell(y(x^{\prime}),f(x^{ \prime}))-\max_{x^{\prime}\in\Delta_{h}^{p}(x)}\ell(y(x^{\prime}),f^{\prime}( x^{\prime}))\] \[\leq\ell(y(x^{*}),f(x^{*}))-\ell(y(x^{*}),f^{\prime}(x^{*}))\] \[\leq C_{\ell}|f(x^{*})-f^{\prime}(x^{*})|.\] The first inequality follows from \(\max_{x^{\prime}\in\Delta_{h}^{p}(x)}\ell(y(x^{\prime}),f(x^{\prime}))=\ell(y (x^{*}),f(x^{*}))\), and the second inequality follows from the Lipschitz continuity of \(\ell\) in the second argument from Assumption 4. Thus, we obtain the statement. **Lemma 20** (Theorem 1.1 in [10]).: _Fix \(N,M\in\mathbb{N}\) arbitrarily. If \(\mathcal{F}(L,W)\) is a set of functions with \(W=C_{d}(N+2)\log_{2}(8N)\) and \(L=C_{s}(M+2)\log_{2}(4M)+2d\), we have_ \[\inf_{f\in\mathcal{F}}\sup_{f^{*}\in C_{1}^{s}([0,1]^{d})}\|f-f^{*}\|_{L^{ \infty}}\leq C_{d,s}N^{-2s/d}M^{-2s/d}.\] **Lemma 21** (Proposition 1 in [10]).: _Fix \(p,q,r\in(0,\infty]\) and \(\beta\in(0,\infty)\). Suppose that \(\beta>d\max\{1/p-1/r,0\}\) holds. Let \(\overline{\mathcal{F}}(L,W,S,\bar{B})\) be a set of neural network functions (7) such that there are \(S\in\mathbb{N}\) non-zero parameters and each value is included in \([-\bar{B},\bar{B}]\) with \(\bar{B}\geq 1\). Let \(N\) be a sufficiently large number and set \(L\geq C_{d,p,\beta,B}\log(N)\), \(W=C_{d,\beta}N\), \(S=(L-1)C_{d,\beta}N+N\), and let \(\bar{B}\) be polynomially increasing in \(N\). Then, we have_ \[\sup_{f^{0}\in\mathcal{B}_{p,q}^{\beta}}\inf_{f\in\overline{\mathcal{F}}(L,W,S, \bar{B})}\|f^{0}-f\|_{L^{r}(\lambda)}\leq CN^{-\beta/d},\] _with some constant \(C>0\) independent of \(N\)._ **Lemma 22** (Lemma 21 in [11]).: _For \(\varepsilon\in(0,1]\), we obtain_ \[\log N(\varepsilon,\overline{\mathcal{F}}(L,W,S,\bar{B}),\|\cdot\|_{L^{\infty}})\leq LS\log( \varepsilon^{-1}L\bar{B}(1+S)).\] ## Appendix D Proof of Inconsistency Proof of Proposition 1.: We first specify the coordinates of the setting. We consider two points \(\bar{x}=(0.3,0.5,0.5,...,0.5),\bar{x}^{\prime}=(0.7,0.5,0.5,...,0.5)\in[0,1]^{d}\), and a marginal measure given by a mixture of Dirac measures on these points; \(P_{X}=0.5\delta_{\{\bar{x}\}}+0.5\delta_{\{\bar{x}^{\prime}\}}\). We also specify the true function with an input \(x=(x_{1},...,x_{d})\in[0,1]^{d}\) as \(f^{*}(x)=-\mathbf{1}\{x_{1}<0.4\}+10(x_{1}-0.5)\mathbf{1}\{0.4\leq x_{1}\leq 0.6\}+\mathbf{1}\{x_{1}>0.6\}\), and the noise variable \(\xi_{i}\) as a uniform random variable on \([-0.1,0.1]\). For the adversarial training, we set \(p=\infty\) and \(h=0.5\). We study an empirical risk minimizer in this setting. Since the inputs \(X_{1},...,X_{n}\) are each either \(\bar{x}\) or \(\bar{x}^{\prime}\), we set \(n_{1}:=|\{i:X_{i}=\bar{x}\}|\) and \(n_{2}:=|\{i:X_{i}=\bar{x}^{\prime}\}|\) such that \(n=n_{1}+n_{2}\). 
With the specified coordinates above, we rewrite an empirical risk of \(f:[0,1]^{d}\to\mathbb{R}\) with the adversarial training as \[\frac{1}{n}\sum_{i=1}^{n}\max_{x\in\Delta_{h}^{p}(X_{i})}(Y_{i}- f(x))^{2}\] \[=\frac{1}{n}\sum_{i:X_{i}=\bar{x}}\max_{x\in\Delta_{h}^{p}(X_{i}) }(f^{*}(X_{i})+\xi_{i}-f(x))^{2}+\frac{1}{n}\sum_{i:X_{i}=\bar{x}^{\prime}} \max_{x\in\Delta_{h}^{p}(X_{i})}(f^{*}(X_{i})+\xi_{i}-f(x))^{2}\] \[=\frac{1}{n}\sum_{i:X_{i}=\bar{x}}\max_{x\in[0,1]^{d}:x_{1}\in[0,0.8]}(-1+\xi_{i}-f(x))^{2}+\frac{1}{n}\sum_{i:X_{i}=\bar{x}^{\prime}}\max_{x\in[ 0,1]^{d}:x_{1}\in[0.2,1]}(1+\xi_{i}-f(x))^{2}, \tag{39}\] which follows from \(f^{*}(\bar{x})=-1\) and \(f^{*}(\bar{x}^{\prime})=1\). To minimize this empirical risk in terms of \(f\), we restrict the class of \(f\). Specifically, we set \(f\) with an input \(x=(x_{1},...,x_{d})\) as having the form \(f(x)=c_{1}\mathbf{1}\{x_{1}\leq 0.2\}+c_{2}\mathbf{1}\{0.2<x_{1}<0.8\}+c_{3}\mathbf{1}\{0.8 \leq x_{1}\}\) with some constants \(c_{1},c_{2},c_{3}\in\mathbb{R}\). This form of \(f\) can minimize the risk, since the risk depends only on the values of \(f\) on each region. Then, we rewrite the risk as \[(39)=\frac{1}{n}\sum_{i:X_{i}=\bar{x}}\max\{(-1+\xi_{i}-c_ {1})^{2},(-1+\xi_{i}-c_{2})^{2}\}+\frac{1}{n}\sum_{i:X_{i}=\bar{x}^{\prime}}\max \{(1+\xi_{i}-c_{2})^{2},(1+\xi_{i}-c_{3})^{2}\}. \tag{40}\] Here, we consider an event \(|n_{1}-n/2|\leq 1\), which appears with probability \(1-2\exp(-2/n)\geq 0.5\) with \(n\geq 3\), by Hoeffding's inequality. In this case, a simple calculation yields that the minimizer of (40) satisfies \(c_{2}\in[-0.2,0.2]\), since this prevents quadratic growth of the risk in terms of \(c_{2}\), which gives \((-1+\xi_{i}-c_{1})^{2}<(-1+\xi_{i}-c_{2})^{2}\) and \((1+\xi_{i}-c_{2})^{2}>(1+\xi_{i}-c_{3})^{2}\). Then, we rewrite the risk (40) as \[(40)=\frac{1}{n}\sum_{i:X_{i}=\bar{x}}(-1+\xi_{i}-c_{2})^{2}+\frac{1}{n }\sum_{i:X_{i}=\bar{x}^{\prime}}(1+\xi_{i}-c_{2})^{2},\] and the minimization of it over \(c_{2}\) yields the following optimal choice \[c_{2}^{*}=\frac{n_{2}-n_{1}}{n}+\frac{1}{n}\sum_{i=1}^{n}\xi_{i}.\] Then, we have that the original risk (39) is minimized by the following function \[\tilde{f}(x):=c_{1}^{*}\mathbf{1}\{x_{1}\leq 0.2\}+c_{2}^{*}\mathbf{1}\{0.2<x_{1}<0.8\}+c_{ 3}^{*}\mathbf{1}\{0.8\leq x_{1}\},\] where \(c_{1}^{*}=-1+n_{1}^{-1}\sum_{i:X_{i}=\bar{x}}\xi_{i}\) and \(c_{3}^{*}=1+n_{2}^{-1}\sum_{i:X_{i}=\bar{x}^{\prime}}\xi_{i}\). Finally, we bound the \(L^{\infty}\)-risk of \(\tilde{f}\) from below. We have \[\|\tilde{f}-f^{*}\|_{L^{\infty}}^{2} \geq\|\tilde{f}-f^{*}\|_{L^{2}(P_{X})}^{2}\] \[=\mathbb{E}_{X\sim P_{X}}\left[(\tilde{f}(X)-f^{*}(X))^{2}\right]\] \[=\frac{1}{2}\left\{(\tilde{f}(\bar{x})-f^{*}(\bar{x}))^{2}+( \tilde{f}(\bar{x}^{\prime})-f^{*}(\bar{x}^{\prime}))^{2}\right\}\] \[=\frac{1}{2}\left\{(c_{2}^{*}+1)^{2}+(c_{2}^{*}-1)^{2}\right\}\] \[=1+(c_{2}^{*})^{2}\] \[\geq 1.\] Hence, we show the statement of Proposition 1.
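For illustration, the inconsistency above can be checked numerically. The following is a minimal sketch (our own illustration under the setting of Proposition 1, not part of the original proof): it simulates the minimizer's value \(c_{2}^{*}=(n_{2}-n_{1})/n+n^{-1}\sum_{i}\xi_{i}\) on the overlap region and confirms that the squared \(L^{\infty}\)-risk stays bounded below by \(1\) as \(n\) grows.

```python
# Minimal numerical check of the inconsistency construction above
# (an illustrative sketch, not part of the original proof).
import numpy as np

rng = np.random.default_rng(0)
for n in [10, 100, 1000, 10000]:
    labels = rng.integers(0, 2, size=n)      # 0 -> x_bar, 1 -> x_bar'
    xi = rng.uniform(-0.1, 0.1, size=n)      # noise xi_i ~ Uniform[-0.1, 0.1]
    n1, n2 = np.sum(labels == 0), np.sum(labels == 1)
    c2_star = (n2 - n1) / n + xi.mean()      # value of f~ on the overlap region
    risk_sq = 1.0 + c2_star ** 2             # lower bound on ||f~ - f*||_inf^2
    print(f"n={n:6d}  c2*={c2_star:+.4f}  sup-norm risk^2 >= {risk_sq:.4f}")
```

The printed lower bound never drops below \(1\), matching the conclusion that the adversarially trained estimator is inconsistent in the sup-norm in this setting.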
2304.06305
Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution
This paper proposes a novel module called middle spectrum grouped convolution (MSGC) for efficient deep convolutional neural networks (DCNNs) with the mechanism of grouped convolution. It explores the broad "middle spectrum" area between channel pruning and conventional grouped convolution. Compared with channel pruning, MSGC can retain most of the information from the input feature maps due to the group mechanism; compared with grouped convolution, MSGC benefits from the learnability, the core of channel pruning, for constructing its group topology, leading to better channel division. The middle spectrum area is unfolded along four dimensions: group-wise, layer-wise, sample-wise, and attention-wise, making it possible to reveal more powerful and interpretable structures. As a result, the proposed module acts as a booster that can reduce the computational cost of the host backbones for general image recognition with even improved predictive accuracy. For example, in the experiments on ImageNet dataset for image classification, MSGC can reduce the multiply-accumulates (MACs) of ResNet-18 and ResNet-50 by half but still increase the Top-1 accuracy by more than 1%. With 35% reduction of MACs, MSGC can also increase the Top-1 accuracy of the MobileNetV2 backbone. Results on MS COCO dataset for object detection show similar observations. Our code and trained models are available at https://github.com/hellozhuo/msgc.
Zhuo Su, Jiehua Zhang, Tianpeng Liu, Zhen Liu, Shuanghui Zhang, Matti Pietikäinen, Li Liu
2023-04-13T07:31:41Z
http://arxiv.org/abs/2304.06305v1
# Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution ###### Abstract This paper proposes a novel module called middle spectrum grouped convolution (MSGC) for efficient deep convolutional neural networks (DCNNs) with the mechanism of grouped convolution. It explores the broad "middle spectrum" area between channel pruning and conventional grouped convolution. Compared with channel pruning, MSGC can retain most of the information from the input feature maps due to the group mechanism; compared with grouped convolution, MSGC benefits from the learnability, the core of channel pruning, for constructing its group topology, leading to better channel division. The middle spectrum area is unfolded along four dimensions: group-wise, layer-wise, sample-wise, and attention-wise, making it possible to reveal more powerful and interpretable structures. As a result, the proposed module acts as a booster that can reduce the computational cost of the host backbones for general image recognition with even improved predictive accuracy. For example, in the experiments on ImageNet dataset for image classification, MSGC can reduce the multiply-accumulates (MACs) of ResNet-18 and ResNet-50 by half but still increase the Top-1 accuracy by more than \(1\%\). With \(35\%\) reduction of MACs, MSGC can also increase the Top-1 accuracy of the MobileNetV2 backbone. Results on MS COCO dataset for object detection show similar observations. Our code and trained models are available at [https://github.com/hellozhuo/msgc](https://github.com/hellozhuo/msgc). Efficient networks, grouped convolution, network pruning, image recognition ## I Introduction DCNN has revolutionized the computer vision community in many applications, from preliminary tasks like salient object detection [1] and edge detection [2], to semantically more sophisticated tasks like image classification [3], object detection [4], and human pose estimation [5]. The increasing prediction accuracy is usually at the cost of considerable consumed energies, with large computational cost by deep models [6]. How to reduce the computational cost of DCNNs without sacrificing accuracy has been a pressing topic, especially in the era of edge computing, since deep models are moving to resource constrained devices like smart phones and IoTs. In recent years, numerous efforts have been made in the community to tackle this issue, such as network pruning [7, 8], compact and lighter network design [9, 10], network quantization [11, 12], _etc._ Among these attempts, network pruning [7, 13, 14, 15, 16] and grouped convolution [17, 18, 19, 20, 21] have attracted tremendous research interests. The former aims to prune the unnecessary redundant parts of deep models to make them lighter and more efficient to run, while the latter focuses on constructing compact structures by splitting computational operations into groups. It is not surprising that many research works considered either of these two methods alone, since they are structurally independent. In this paper, we give a novel view by regarding them as two poles of a network designing spectrum (Fig. 1), inside which we find there are a big variety of structural possibilities that incorporate these two paradigms. Based on that, we further build our module that can effectively reveal powerful structures within the spectrum, outperforming both the previous channel pruning and grouped convolution based counterparts in both accuracy and computational cost. 
To make our motivation clearer, we start by giving a brief introduction on both methods below. Without loss of generality, supposing a convolutional layer takes the input tensor \(\mathbf{X}\) with \(C\) channels and generates the output tensor \(\mathbf{Y}\), the convolutional operation can be formulated as: \[\mathbf{Y}=[f^{1}(\pi^{1}(\mathbf{X})),f^{2}(\pi^{2}(\mathbf{X})),...,f^{G}( \pi^{G}(\mathbf{X}))], \tag{1}\] where \([,]\) represents the concatenation operation, \(G\) is the number of groups, \(f^{i}\) is a standard convolutional function that generates a \(1/G\) part of \(\mathbf{Y}\), and \(\pi^{i}\) is a selecting function extracting a sub-tensor from \(\mathbf{X}\) with possibly fewer channels. When \(G=1\) and \(\pi\equiv\mathbf{1}\in\left\{1\right\}^{1\times C}\), which means all the input channels are kept as they are, the formulation reduces to the standard convolution. As the network often contains feature redundancy, it might be unnecessary to keep all the input channels. This is essentially the design spirit of many channel pruning methods, which focus on tuning \(\pi\).

Fig. 1: MSGC reveals a powerful network structure in the “middle-spectrum” area.

The grouped convolution is formulated when we set \(1<G\leq C\) and the reduction of feature redundancy is neatly organized in a predefined way. For example, \(\{\pi^{i}(\mathbf{X})|i=1,2,..,G\}\) are a series of regularly partitioned segments of the input tensor in the channel extent. This inspired a number of classical approaches in recent years, such as Deep Roots [20], ResNeXt [19], ShuffleNet [22], the IGC series [21, 23, 24], UGConvs [25], ChannelNets [26], SCGNet [27], sharing grouped convolution [17], and the extreme case of depthwise separable convolution used in the Inception modules [28, 29], MobileNet [9], and Xception [30], where \(G=C\). One main focus of these methods is to design the correlation between groups to make the convolution complementary [21] (meaning that each output channel has at least one path to any of the input channels in the connection topology). Such encouragement of inter-group communication plays an important role in breaking the intrinsic order of channels [31] to maximally preserve the prediction accuracy. However, we may rethink the relationship between the grouped convolution and channel pruning methods, since both aim to reduce the network redundancy along the channel extent. The design strategies behind them help us derive the method proposed in this paper. The two frameworks are illustrated in Fig. 2. On the one hand, channel pruning [13, 14, 15, 32, 33] attempts to learn the most important feature maps that contribute to the final prediction accuracy. Such **learnability** helps channel pruning to identify the unimportant channels that can be pruned without degrading the network performance significantly. However, since it is not easy to guarantee the correctness of such identification, or due to the representation capacity being bounded by a pruning rate, there might always be some pruned channels that contain more specific, useful, and meaningful information than other channels. This is evidenced by the fact that a certain pruning rate usually causes a performance drop. On the other hand, grouped convolution preserves all the input channels, which may, to some extent, avoid the above issue of information loss. 
In effect, different from channel pruning, grouped convolution hypothesizes that the grouped network can still learn enough information to give comparable prediction accuracy to the original network, by regularly sparsifying the connections but keeping all the input channels intact. In this way, the effort of finding the most important channels can be circumvented. In other words, grouped convolution is powered by **channel preservation**. Generally, learning to prune and learning to group are manners that both lead to efficient and accurate networks. Previous works tend to consider them separately, which restricts how far their methods can go. We believe there is a better balance between information preservation and learnability that can be achieved by integrating both worlds toward building more powerful network structures. To achieve this, we propose our module named MSGC (middle spectrum grouped convolution), which enables the learning process to find a structure in between, by unfolding the spectrum along four dimensions: 1. Group-wise: injecting learnability for each group to learn how to segment channels; 2. Layer-wise: allowing layer-dependent pruning ratios; 3. Sample-wise: dynamically building grouped connection topology for individual samples (middle of Fig. 2); 4. Attention-wise: decoupling channel gating and attention for individual groups. We conduct extensive experiments including image classification and object detection on large-scale datasets with the ResNet [34], MobileNetV2 [10], and DenseNet [35] backbones, which consistently verify the superiority of MSGC compared with prior state-of-the-art methods. Those methods include both existing pruning-based and grouped convolution-based ones. Notably, we achieve not only the MAC reduction, but enhanced accuracy as well, due to the flexibility of forming groups. For example, on the ImageNet [36] dataset for image classification, MSGC reduces the computational cost of the ResNet backbones by 50% and the MobileNetV2 backbone by 35% with even improved accuracy. On the MS COCO [37] dataset for object detection, MSGC can also effectively slim the backbones with negligible performance drop. In addition, MSGC can also be used to simply improve the prediction accuracy of the original networks with a relatively small pruning rate. In this case, the main role of MSGC transfers from a "computation booster" to a strong "accuracy booster". The rest of this article is organized as follows. In Sec. II, we review the related works. Following that, we illustrate our method in detail in Sec. III. A comprehensive experimental comparison with state-of-the-art methods is provided in Sec. IV, with detailed ablation studies. Finally, we conclude our paper in Sec. V.

Fig. 2: Illustration of the three paradigms.

## II Related work

**Network pruning** aims to prune the network weights or feature maps in convolutional layers to make the network less redundant, so that less computational cost or memory storage is needed to achieve accuracy similar to that before pruning. According to the pruning granularity, network pruning is generally categorized into structured pruning and unstructured pruning. The latter prunes the network by removing individual weights, leading to irregular structures after pruning [16], while the former prunes groups of weights which belong to whole filters, channels, or kernel blocks. Structured pruning keeps the pruned network in regular structures which can be more easily accelerated with existing
deep learning frameworks on hardware [14]. Therefore, it has obtained more research attention than unstructured pruning in recent years. Among different methods, the core is how to evaluate the importance of network weights, thereby guiding the pruning process. Many kinds of criteria have been developed over the past years, such as reconstruction error [14, 15], \(\ell_{1}\)-norm and \(\ell_{2}\)-norm [8, 38, 39], geometric median [33], discrimination [40, 41], channel sensitivity [42], category-aware discrimination ability [13], _etc._ The way of pruning can also be searched [7, 32, 43, 44] or learnt with a slimmable structure [45, 46]. In addition, for dynamic pruning, which our method is most related to, the importance scores can be directly generated [47, 48]. The proposed MSGC belongs to structured pruning, more specifically, channel pruning, where the pruned weights are the complete kernels connected to certain channels. The importance scores of the network weights are data-dependent, and are calculated on-the-fly with dynamic execution. **Grouped convolution** dates back to AlexNet [49], if not earlier, where the convolutional filters were put on two GPUs to suit the network's training memory. However, the by-product of these "filter groups" inspired a lot of subsequent grouped convolution-based methods like [19, 20, 22] that improve the performance and reduce the MACs of DCNNs. The main spirit of grouped convolution has been introduced in the previous section. **MSGC-like methods**: Some previous works share certain similarities with our MSGC. Specifically, methods like LGC [31], FLGC [50], and DGC [18] can also preserve most or all of the input channels in a convolutional layer due to the group mechanism and at the same time benefit from the learnability for channel partition. However, these methods suffer from their relatively narrow exploration scopes, as shown in Fig. 1. For example, DGC [18] generates dynamic connection topology but is constrained by a predefined pruning rate in each layer. FBS [47] is a common dynamic channel pruning method without the consideration of grouped convolution. FLGC [50] and DGConv [51] actually shuffle the input channels (_i.e._, restrict each input channel to belong to exactly one group), and give fixed group topology. LGC [31] and SG-CNN [52] are limited to the first dimension only: learning how to divide the channels into groups. In contrast, our method searches for a highly optimal structure along all four dimensions, which leads to better efficiency and predictive performance compared with both channel pruning- and grouped convolution-based methods. **Dynamic network** or conditional network tunes its network topology or filters on the fly depending on the current input data, which can better increase its representation power and achieve a desired balance between accuracy and efficiency. The runtime execution of DCNNs can be implemented by dynamically skipping the layers [53, 54], re-weighting the filters [55, 56], pruning the channels or pixels [57, 47, 58], decomposing the kernels [59], adjusting the nonlinear activations [60], routing the inference paths [61], or cascading multiple networks [62]. A more comprehensive review can be found in [63]. The proposed MSGC can be seen as a dynamic routing method that adjusts its connection topology among channels at runtime, a dynamic pruning method on the channel level, and a dynamic filter re-weighting method, in which we can use data-dependent attention for the selected channels. 
## III Methodology

### _Criteria_

To unfold the spectrum, namely, the possible structure candidates, along the four dimensions mentioned in Sec. I, we aim to build powerful structures that conform to the following criteria: 1. **Group-wise**: (Fig. 3 (a)) We inject learnability for each group to learn how to segment channels. It indicates a soft assignment of the input channels to each group, so a more flexible group topology is learned during network training. It should be noted that our algorithm does not restrict each input channel to be assigned to only one group; it may be assigned to zero, one, or multiple groups. 2. **Layer-wise**: (Fig. 3 (b)) Different layers are enabled to have varying pruning ratios, due to the fact that each layer has its own property. For example, shallower and deeper layers may extract image abstractions at different semantic levels. Allowing layer-wise pruning ratios is expected to give better performance. 3. **Sample-wise**: (Fig. 3 (c)) The connection topology (_e.g._, how the input and output channels are connected with each other) in groups is dynamically determined depending on the individual input samples. It helps improve the adaptiveness of our network, _i.e._, enabling the network to adapt to the varying characteristics of individual samples. The dynamic execution can also lead to a better optimization of resource allocation, as MSGC automatically learns to pay more computation for harder samples and vice versa, while keeping the average computation at a low level. 4. **Attention-wise**: In order to prune or select particular channels (_i.e._, gating), one typical way is to first generate a saliency vector/matrix with each element corresponding to a channel, and then use the _Top-K_ function [18, 47] or thresholds [64, 50] to select the top-ranking channels. After gating, the saliency values can be reused as the attention of the associated surviving channels1 [18, 47]. In this way, the functionality of gating and attention is essentially coupled via the same saliency values. Instead, we decouple them in MSGC as two separate sub-modules to diversify the gating process. Footnote 1: By element-wise multiplying the feature map with the saliency value.

Fig. 3: MSGC unfolds network structures along four dimensions during the learning process: (a) group-wise: input channels are assigned to certain group(s) in a learnable way; (b) layer-wise: the pruning rate automatically varies in different layers; (c) the connection topology varies with different samples even in the same layer.

### _Detailed design of MSGC_

The overview of MSGC, which can be plugged into existing general convolutional backbones, is shown in Fig. 4. 
Assuming we have a building block (_e.g._, a Bottleneck block in ResNet, an Inverted Bottleneck block in MobileNetV2, _etc._) consisting of \(M\) consecutive convolutional layers \(\{L_{i}|i=1,2,...,M\}\), where the input and output tensors of \(L_{i}\) have \(C_{i}\) and \(C_{i+1}\) channels, respectively (it should be noted that \(L_{1}\) denotes the first layer of the block, not of the network), and supposing the group number for each layer is \(\{G_{i}|i=1,2,...,M\}\), the pipeline of building MSGC is detailed as follows:

**Step 1: Generating binary masks.** To segment the input channels into groups for each layer in a learnable way, we use binary gating masks \(\{\mathbf{B}_{i}^{G_{i}\times C_{i}}|i=1,2,...,M\}\), where each row in \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\) only has 0/1 values representing which channels are selected (1) or not selected (0) for a certain group in layer \(i\). For example, \(\mathbf{B}_{3}^{2,5}=0\) means that for the 3rd layer \(L_{3}\), the 5th input channel is not assigned to the 2nd group. We build a mask generator to create the gating masks, where we simply take the input of the block (which is also the input of \(L_{1}\)) as the input of the generator. Since each channel is an \(H\times W\) feature map, we use global average pooling (GAP) to first downsample the input to \(\mathbb{R}^{1\times C_{1}}\), followed by \(M\) light multi-layer perceptrons (MLPs) to generate \(M\) saliency tensors \(\{\mathbf{S}_{i}^{G_{i}\times C_{i}}|i=1,2,...,M\}\), one for each layer (including \(L_{1}\)), where \(\mathbf{S}_{i}^{G_{i}\times C_{i}}\in\mathbb{R}^{G_{i}\times C_{i}}\). Particularly, inspired by [55], each MLP consists of two mapping matrices \(W_{1}\in\mathbb{R}^{C_{1}\times\frac{C_{1}}{R}}\) and \(W_{2}\in\mathbb{R}^{\frac{C_{1}}{R}\times G_{i}C_{i}}\), where \(R\) is the reduction rate. The output of the MLP is then reshaped to \(\mathbb{R}^{G_{i}\times C_{i}}\). We also insert a BN [65] and ReLU layer between the two mappings to make the MLP a nonlinear transformation. Then, the Sign function (\(\text{Sign}(x)=1\) if \(x\geq 0\) and \(=0\) otherwise) is used to convert each \(\mathbf{S}_{i}^{G_{i}\times C_{i}}\) to \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\). The whole process is depicted in Fig. 5.

Fig. 4: The exemplary illustration of an MSGC based network block with \(M=2\) convolutional layers \(L_{1}\) and \(L_{2}\), where \(G_{1}=G_{2}=2\), \(C_{1}=C_{2}=8\). Here, the subscript “\(1\)” in \(L_{1}\) or \(G_{1}\) indicates the 1st layer in the block (which is actually “Layer \(i\)” in the network). Please refer to the text for the meaning of the notations. Best viewed in color.

Fig. 5: Structure of the mask generator in MSGC. Best viewed in color.

**Step 2: Matching sub-filters and sub-input tensors for each group to conduct convolutions.** Specifically, the complete filters of layer \(L_{i}\) are \(W\in\mathbb{R}^{k\times k\times C_{i}\times C_{i+1}}\), where \(k\) is the kernel size. Based on the gating mask \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\), we take the sub-filters along the "\(C_{i}\)" dimension, followed by the convolution with the correspondingly selected sub-input tensor. For example, supposing \(L_{i}\) has 4 input and 8 output channels respectively, there are two groups in this layer, and the 2nd row of \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\) is \([1,0,0,1]\), we then select the 1st and 4th channels as the input for group 2. 
Meanwhile, the 5th-8th output channels also belong to group 2 (output channels are regularly divided among groups). Thereby, the sub-filters that connect the 1st and 4th input channels to the 5th-8th output channels are selected to conduct the convolution for group 2. Since the connection topology varies with different samples, all the filters are kept in memory to be selected and learnt with gradient descent. Therefore, we are able to preserve the whole capacity of the original network. It is worth mentioning that the above convolution process is different from the normal convolution, the standard grouped convolution, and network pruning. First, compared with the normal convolution where the 5th-8th output channels are generated from all the input channels, the convolution in MSGC only chooses a part of the input channels, which saves computation. Second, instead of choosing the input channels in a predefined way as in the standard grouped convolution (_e.g._, the last two input channels would be chosen as the input for group 2 in standard grouped convolution), MSGC applies a mask generator to dynamically choose the channels, which brings more learnability. Finally, compared with network pruning, MSGC preserves more input channels. In other words, the input channels that are not selected (or pruned) by one group may be selected by other groups. We illustrate the specific chosen input channels for a random image sample in Fig. 6, where most of the channels are chosen by at least one group. Therefore, the channels are not easily pruned (please also see the following discussion). **Step 3: Decoupling gating and attention (Optional).** We can attach a parallel MLP to generate another saliency tensor \(\mathbf{A}_{i}^{G_{i}\times C_{i}}\in\mathbb{R}^{G_{i}\times C_{i}}\) as the attention tensor for layer \(L_{i}\). The input feature maps in the selected channels will then be rescaled with the associated attention values. **End-to-end learning.** We may notice that the only non-differentiable part is the Sign function, which converts the continuous saliency tensors to binary gating masks. However, we can use gradient approximation techniques like the straight-through estimator (STE) [66] or the Gumbel-Softmax reparameterization technique [67] such that the whole pipeline can be optimized by gradient descent in an end-to-end manner. Once the gating masks are learnable with no restrictions _w.r.t._ groups, layers, and samples, the "middle-spectrum" space is freely unfolded.

### _Discussions_

#### Iii-C1 Pyramid structure of feature maps

It is interesting to find that the group mechanism helps to rank the input feature maps in a softer way. Specifically, channel pruning only learns to select important channels once; _i.e._, once channels have been pruned, they are totally discarded. Grouped convolution preserves all channels but reduces the computation by conducting convolution in groups. When channel pruning and grouped convolution come together in MSGC, channels are not easily discarded because every group has an independent selection process. As shown in Fig. 6, we can identify those channels that are selected by all groups (canonical channels), channels selected by part of the groups (less important but still valuable channels), and finally the discarded channels (meaningless channels). The importance scores of channels become more reliable, leading to better performance compared with channel pruning-based methods (Tab. II). 
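To make Steps 1 and 2 concrete, below is a simplified, self-contained PyTorch sketch (with hypothetical class and variable names; the released code at https://github.com/hellozhuo/msgc is the reference implementation). For readability, it realizes the group-wise channel selection by zeroing out unselected input channels rather than physically slicing sub-filters, so it reproduces the functional behavior of Eq. (1) with learned \(\pi^{i}\) but does not by itself deliver the MAC savings; the Sign function is trained with a straight-through estimator as mentioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Binarize saliencies S into gates B; pass gradients straight through."""
    @staticmethod
    def forward(ctx, s):
        return (s >= 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

class MSGCLayerSketch(nn.Module):
    def __init__(self, c_in, c_out, groups=2, k=3, reduction=4):
        super().__init__()
        self.groups = groups
        # one full-input convolution per group, each producing c_out // groups maps
        self.convs = nn.ModuleList(
            nn.Conv2d(c_in, c_out // groups, k, padding=k // 2, bias=False)
            for _ in range(groups))
        # mask generator: GAP -> W1 -> BN/ReLU -> W2 -> (groups x c_in) saliencies
        self.mlp = nn.Sequential(
            nn.Linear(c_in, c_in // reduction),
            nn.BatchNorm1d(c_in // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(c_in // reduction, groups * c_in))

    def forward(self, x):
        b, c = x.shape[0], x.shape[1]
        s = self.mlp(F.adaptive_avg_pool2d(x, 1).flatten(1))   # saliency tensor S
        gate = SignSTE.apply(s.view(b, self.groups, c))        # binary mask B
        outs = [conv(x * gate[:, g].view(b, c, 1, 1))          # select sub-input
                for g, conv in enumerate(self.convs)]
        return torch.cat(outs, dim=1)                          # concatenate groups
```

Replacing `SignSTE` with hard Gumbel-Softmax sampling, the alternative mentioned above and used in the experiments, would be a drop-in change.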
#### Iii-C2 Discussion about plug-in

MSGC serves as a plug-in module for a backbone, reducing its computational cost while retaining its predictive performance to a large extent. In the implementation, the filters of a normal convolutional layer \(W\in\mathbb{R}^{k\times k\times C_{i}\times C_{i+1}}\) can be divided into \(\{W_{g}\in\mathbb{R}^{k\times k\times C_{i}\times\frac{C_{i+1}}{G_{i}}}|g=1,2,...,G_{i}\}\) as the initial state of the filters in each group when the module is plugged in.

#### Iii-C3 Discussion about computational cost

The reduction of the computational cost of a network block, where MSGC is applied, mainly comes from the sparse connections in the groups. It is also attributed to the fact that the gating masks for all layers are generated using the input of \(L_{1}\) of the block. In other words, the group topology (formulated as \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\)) for the whole block is already determined given the input of the first layer. If a certain channel is not selected by any group in a later layer in the block, we can directly skip the calculation of this channel in the previous layer. For example, if the 2nd input channel of \(L_{3}\) is not selected by any group in \(L_{3}\), we can directly skip the convolutions in the previous layer \(L_{2}\) that generate this channel. Besides, the MACs of the light MLPs are negligible compared with the original model. For example, in ResNet-18, the MLPs take 1.8 million MACs, which is less than \(0.1\%\) of that of the original model that has 1.8 billion MACs.

Fig. 6: MSGC is different from channel pruning, where the rank of channels is simply described by “pruned” and “unpruned”. The group mechanism allows creating a richer pyramid structure: on the bottom are the canonical or fundamental channels that are selected by all groups, then the less general but potentially more specific channels for particular groups when looking upward, and finally the discarded channels on the top that are completely pruned for the current image sample.

### _Loss function_

The MACs of the network can be calculated based on the binary gating masks \(\mathbf{B}_{i}^{G_{i}\times C_{i}}\). A zero value in the gating masks indicates that a certain part of the convolutions is skipped. Taking the same example as in Step 1 of MSGC, \(\mathbf{B}_{3}^{2,5}=0\) indicates that the 5th input channel of \(L_{3}\) is not considered as an input for calculating the 2nd group of output channels, so its associated MACs (=\(k\times k\times\frac{C_{4}}{G_{3}}\), where \(k\) is the kernel size) are not included in the final MACs. To control both the final MACs and the accuracy of the model during training, we adopt a budget loss \(\mathcal{L}_{bgt}\) and the standard task-specific loss \(\mathcal{L}_{task}\) (_e.g._, cross entropy loss for image classification). Specifically, the budget loss is formulated as: \[\mathcal{L}_{bgt}=\text{max}(\lambda(\frac{\mathcal{M}_{b}}{\mathcal{M}_{ori} }-\tau),0), \tag{2}\] where \(\mathcal{M}_{b}\) is the mean running MACs over the current mini-batch, \(\mathcal{M}_{ori}\) is the MACs of the original model, \(\lambda\) is a controlling parameter, and \(\tau\) is a target remaining rate. A more detailed description of \(\lambda\) and \(\tau\) can be found in the next section. The final loss is \(\mathcal{L}=\mathcal{L}_{bgt}+\mathcal{L}_{task}\). 
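A short sketch of Eq. (2) and of the \(\tau\) annealing described in the next section is given below (the helper names are our own, and the per-layer MAC bookkeeping, which is backbone-specific, is abstracted into per-gate MAC weights).

```python
import torch

def budget_loss(gates, macs_per_gate, macs_original, tau, lam=30.0):
    """Eq. (2): penalize mean mini-batch MACs exceeding tau * M_ori.
    gates: list of (batch, G_i, C_i) gate tensors, one per layer.
    macs_per_gate: list of scalars, MACs contributed by one active gate entry."""
    macs_b = sum(g.sum(dim=(1, 2)) * m for g, m in zip(gates, macs_per_gate))
    ratio = macs_b.mean() / macs_original          # M_b / M_ori
    return torch.clamp(lam * (ratio - tau), min=0.0)

def tau_schedule(epoch, total_epochs, tau_end):
    """Decay tau linearly from 1.0 to tau_end over the first half of training."""
    progress = min(1.0, epoch / (total_epochs / 2))
    return 1.0 - (1.0 - tau_end) * progress

# total objective per mini-batch: loss = task_loss + budget_loss(...)
```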
## IV Experiments

To demonstrate the effectiveness of the proposed method, we compare it with prior state-of-the-art (SOTA) methods, including both channel pruning- and grouped convolution-based ones. The generalizability of our method is also validated on the task of object detection under both one- and two-stage frameworks. Model analysis and ablation studies are given in the last part of this section.

### _Image classification_

**Dataset**: We use the large-scale ImageNet dataset [36] to validate MSGC. There are 1.2 million images for training and 50K images for validation with 1K classes. During training, the data is augmented with random cropping (to size \(224\times 224\)) and random horizontal flipping. The Top-1 and Top-5 accuracy on the validation set are used for comparison. **Training setup**: As a plug-in module, MSGC has the advantage of directly pruning and boosting pretrained models. We start by setting \(\tau\) (in Eq. (2)) to \(1.0\), then progressively decrease it to a target value \(\tau_{end}\) during the first half of the training process. Precisely, if we want to reduce the MACs of the network by half, we set \(\tau_{end}=0.5\). By using a big \(\lambda\) (_i.e._, \(\lambda=30\) in the experiments), we gradually reduce the computational cost to a target budget (_i.e._, \(\tau_{end}\cdot\mathcal{M}_{ori}\)). The second half of training can be seen as a fine-tuning process. Following [18], we train ResNet-18/50 and CondenseNet for 120 epochs and MobileNetV2 for 150 epochs, with a batch size of 256. The initial learning rate is set to 0.075 for the MLPs, and 0.015 for the pretrained weights in the backbone, both decayed to 0 with cosine-shape annealing [31]. Stochastic gradient descent (SGD) with momentum of 0.9 is used as the optimizer; weight decay is set to \(10^{-4}\) and only applied to the backbone weights. We use Gumbel-Softmax [67] when binarizing the saliency vectors (please refer to appendix A for more details). The whole process is implemented with the Pytorch library [68] on two NVIDIA A100 GPUs. **Backbones**: To make a comprehensive comparison with prior approaches, we choose the widely adopted backbones, namely, ResNet-18/50, DenseNet, and MobileNetV2. Our MSGC module is plugged into those backbones with the configurations illustrated in Tab. I. In Tab. II and Tab. III, we compare MSGC with the prior SOTA works based on channel pruning and grouped convolution, respectively, on both the ResNet and MobileNetV2 backbones. Due to the relative lack of works on DenseNet, we compare MSGC with the previous methods that aimed to build light DenseNet-like structures, namely, the CondenseNet [31] variants, by replacing the LGC [31] module in CondenseNet with MSGC (the same for other modules in other methods). The results of different methods based on CondenseNet are given in Tab. V. Unlike prior methods, MSGC maintains accuracy with a more aggressive MAC reduction, whereas prior methods often cause an accuracy drop. Notably, we even obtain accuracy gains for ResNet and MobileNetV2 when the MACs are considerably reduced by 50% and 35%, respectively. Compared with the latest method FFP [7], our method improves ResNet-18 by 1.75% with 51.3% MAC reduction, while FFP decreases the accuracy by 0.60% with a smaller MAC reduction rate (40.0%). On ResNet-50, our method increases the accuracy by 0.63% with a considerable 53.8% MAC reduction, while FFP only has a 0.05% accuracy increase, even with a smaller MAC reduction rate (39.5%). 
We conjecture that the phenomenon may result from the reorganization of channel usage from two perspectives: first, the effective removal of redundant parameters; second, the more reliable ranking of input channels as shown in Fig. 6 (also see our analysis in Sec. IV-C). We also completely remove the attention MLPs in our models, denoted as "MSGC w/o Attention" in the tables, to see the influence of our attention-wise design. Generally, the attention-wise design works well on relatively light backbones like ResNet-18 and MobileNetV2, but may give diminishing or negative returns for large backbones like DenseNet and ResNet-50, meaning that a saturated status is reached under current configurations before applying attentions. An illustration of training dynamics is shown in Fig. 7, where we could see the computational cost stably drops to the budget with a slowly growing accuracy in the first half of the training, with the budget loss keeping around 0. In the second half of training, the accuracy is significantly compensated. ### _Object detection_ **Dataset**: We conduct our experiments on the COCO 2017 detection dataset [37], which has 118K and 5K images for training and validation. During training, the images are resized with the longer edge \(=\) 1333, combined with random horizontal flipping. During validation, the images are resized such that the shorter edge \(\leq 800\), and the longer edge \(\leq 1333\). **Backbones**: We evaluate MSGC on ResNet-50 and MobileNetV2 under Faster R-CNN [77] (2-stage) and RetinaNet [78] (1-stage) frameworks. The configurations of the backbones are the same as in Sec. IV-A. Observed from the previous section on image classification, we attach the attention MLPs to MobileNetV2 and not to ResNet-50. **Training setup**: The training scheme mostly follows the experiments on ImageNet. The differences are illustrated as follows. All the models are trained for 90,000 iterations with 16 images in a mini-batch. The basic initial learning rate is set as 0.02 for Faster-RCNN and 0.01 for RetinaNet. For the weights in the backbone (including the MLPs in the MSGC module), the initial learning rate is further multiplied with 0.5. Learning rate is decayed with a factor of 0.1 at iteration 60,000 and 80,000. For others like batch size per image, we adopted the default settings in the Detectron2 library [79]. The whole process is implemented with the Pytorch library [68] on two NVIDIA A100 GPUs. The results are shown in Tab. IV. For ResNet-50, MSGC can effectively cut the MACs of the backbone by \(54\%\) with only a slight or negligible accuracy drop. For MobileNetV2, MSGC outperforms MobileNetV2-0.75 by a large margin with a similar computational overhead on both frameworks. Notably, on the RetinaNet framework, the performance of MobileNetV2 can also be improved with the plugged MSGC, from 30.7% to 31.3% in mAP, with a \(28.5\%\) reduction on MACs. The comparison shows MSGC has a good generalizability on the object detection task. ### _Model analysis and ablation study_ In this section, we give our analysis from the dimensions of group-wise, layer-wise, sample-wise, and attention-wise, respectively. Experiments are based on Sec. IV-A. More exploratory studies are given at the end. The attention MLPs are not applied in MSGC for simplicity if not specified. **Group-wise**: Configuration of a MSGC module can be varied. Though in Sec. 
IV-A, we empirically set the group numbers \(\{G_{i}\}\), it is definitely possible to seek more advanced techniques like neural architecture search (NAS) [80] to give a better structural setting. Here, we changed the group numbers on ResNet-18 (more specifically, we change the group number for the 2nd layer of the Basic block) and MobileNetV2 under similar MACs. The results are shown in Tab. VI. Group \(=1\) reduces MSGC to a dynamic channel pruning module. Like [47], the dynamic execution helps the network to prune redundant information specific to the input samples, potentially leading to better discrimination ability. As expected, using groups achieves an increasing improvement. It peaks at group \(=4\), since a bigger group number may bring extra learning complexity [18]. We dig deeper into the pyramid-style reuse of the feature channels, as illustrated in Fig. 8. MSGC can detect the canonical, general, and meaningless channels automatically, leading to a more reliable channel ranking. It is worth mentioning that the curves in shallower layers look steeper (meaning that a major proportion of samples selected the same channels), which is in line with the fact that feature maps in shallow layers usually contain low-level or fundamental image information shared by most samples, while in deeper layers, samples become more distinguishable due to their varying semantic characteristics.

Fig. 7: Curves of validation accuracy, computational cost, and training losses during the training process.

**Sample-wise**: As mentioned in Sec. III, MSGC gives an adaptive resource allocation due to its dynamic property. This is shown in Fig. 10. It is found that both the original model and the MSGC equipped model can obtain good prediction accuracies on the "easy" samples. However, MSGC can automatically spend more computation on the "harder" samples, leading to a better prediction (remembering that the original model spends much more MACs on every sample equally). Therefore, the overall performance can be enhanced with MSGC. Interestingly, the "easy" and "hard" samples determined by MSGC are quite intuitive for humans, as partially illustrated at the bottom of Fig. 10. For example, "easy" samples have less cluttered backgrounds and are easy to recognize by human eyes, while the "hard" samples show the opposite (_e.g._, the rightmost samples even tend to be in a black and white style and are more confusing). **Attention-wise**: Consistent with Fig. 8, in Fig. 11 (a), shallower layers show a relatively fixed gating pattern, _i.e._, the gating probabilities (the probabilities of being selected) in shallow layers are either close to zero or close to one. The close-to-one values indicate the associated channels (the yellow cells) were selected by a major part of the samples, while deeper layers have larger variations. It is interesting to find that the visualized attention matrix in Fig. 11 (b) shows a sparse pattern similar to the gating matrix. In other words, the channels with close-to-zero gating probabilities also have close-to-zero attentions. However, different from the gating probabilities in shallow layers, the attention scores in shallow layers have more middle-range values, which inject extra variations that can enrich the representation of those layers. **Influence of the controlling parameter \(\lambda\)**: We conduct experiments with ResNet-18 on the ImageNet dataset to show how the controlling parameter \(\lambda\), which controls the weight of the budget loss (Eq. 
(2)) during training, influences the performance. The results are shown in Tab. VII. We can see that a too small \(\lambda\) may cause a larger model than expected. When \(\lambda\) is big enough (_i.e._, \(\geq 30\)), we can precisely control the target MACs and the method is quite stable. **From MAC reducer to accuracy enhancer**: Setting different values of \(\tau_{end}\) controls the computational overhead of the final model. For example, it is demonstrated in the previous sections that \(\tau_{end}=0.5\) leads to a computational reduction of 50% on the ResNet structures without sacrificing accuracy. In this study, we give a more detailed exploration of how MSGC balances accuracy and MACs by changing \(\tau_{end}\). The results are shown in Tab. VIII, where ResNet-18 is used. Since \(\tau_{end}<1\), MSGC always acts as a "MAC reducer" for the original baseline architecture. A small \(\tau_{end}\) results in a good computation reduction, but at the same time a degraded prediction accuracy. However, when computation reduction is not our focus, we can simply apply MSGC as an effective accuracy enhancer. Surprisingly, when \(\tau_{end}=0.9\), the model achieves an absolute 2.57% Top-1 accuracy gain, from 69.76% to 72.33%. Meanwhile, further increasing \(\tau_{end}\) to 0.99 does not bring additional accuracy gain, implying that the necessary removal of redundant or meaningless features is important for achieving optimal performance. **Discussion on hardware implementation**: When our focus is to reduce computation, one factor we may consider is the actual hardware implementation. It is indeed a common

Fig. 11: The visualization is based on ResNet-18, where each row represents a group in a certain layer, and every four rows represent a convolutional layer. Therefore, each element in the visualized matrices indicates an input channel for a certain group in a certain layer. Left: the gating probability of the channels, calculated based on the associated saliency values over the ImageNet validation set. Right: the attached average attention values.
2309.00788
Spectral Barron space and deep neural network approximation
We prove the sharp embedding between the spectral Barron space and the Besov space. Given the spectral Barron space as the target function space, we prove a dimension-free result that if the neural network contains $L$ hidden layers with $N$ units per layer, then the upper and lower bounds of the $L^2$-approximation error are $\mathcal{O}(N^{-sL})$ with $0 < sL\le 1/2$, where $s$ is the smoothness index of the spectral Barron space.
Yulei Liao, Pingbing Ming
2023-09-02T01:43:12Z
http://arxiv.org/abs/2309.00788v1
# Spectral Barron space and deep neural network approximation ###### Abstract. We prove the sharp embedding between the spectral Barron space and the Besov space. Given the spectral Barron space as the target function space, we prove a dimension-free result that if the neural network contains \(L\) hidden layers with \(N\) units per layer, then the upper and lower bounds of the \(L^{2}\)-approximation error are \(\mathcal{O}(N^{-sL})\) with \(0<sL\leq 1/2\), where \(s\) is the smoothness index of the spectral Barron space. Key words and phrases:Spectral Barron space; Deep neural network; Approximation theory 2020 Mathematics Subject Classification: 32C22, 32K05, 33C20, 41A25, 41A46, 42A38, 68T07 The authors thank Professor Renjin Jiang in Capital Normal University for the helpful discussion. The work of Liao and Ming were supported by National Natural Science Foundation of China through Grants No. 11971467 and No. 12371438. For functions in \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\), the Fourier transform and the pointwise Fourier inversion are valid. Unfortunately, we shall prove in Lemma 2.3 that \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) equipped with the norm \(\upsilon_{f,0}+\upsilon_{f,s}\) is not complete. Therefore, it does not qualify as a Banach space. To address this issue, an alternative class of function spaces has been proposed, which can be traced back to the work of Hormander [11]. It is defined as follows. \[\mathscr{F}L^{s}_{p}(\mathbb{R}^{d}):=\Big{\{}\,f\in\mathscr{S}^{\prime}( \mathbb{R}^{d})\,\mid\,(1+|\xi|^{s})\widehat{f}(\xi)\in L^{p}(\mathbb{R}^{d}) \,\Big{\}}\] for \(1\leq p\leq\infty\) and \(s\geq 0\). This space has been studied extensively and may be referred to by different names. Some call it the Hormander space, as mentioned in works such as [11, 12, 13]; Others refer to it as the Fourier Lebesgue space, as seen in [1, 1, 13]. We are interested in \(p=1\), and call it the spectral Barron space: \[\mathscr{B}^{s}(\mathbb{R}^{d}):=\big{\{}\,f\in\mathscr{S}^{\prime}(\mathbb{R }^{d})\,\mid\,\upsilon_{f,0}+\upsilon_{f,s}<\infty\,\big{\}}\,,\] which is equipped with the norm \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}:=\upsilon_{f,0}+\upsilon_{f,s}= \int_{\mathbb{R}^{d}}(1+|\xi|^{s})|\widehat{f}(\xi)|\mathrm{d}\xi.\] We show in Lemma 2.1 that the pointwise Fourier inversion is valid for functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\) with a nonnegative \(s\). Some authors also refer to \(\mathscr{B}^{s}(\mathbb{R}^{d})\) as the Fourier algebra or Wiener algebra, whose algebraic properties, such as the Wiener-Levy theorem [14, 15, 16], have been extensively studied in [17, 18]. Another popular space for analyzing shallow neural networks is the Barron space [1, 13], which can be viewed as shallow neural networks with infinite width. The authors in [13, 14] claimed that the spectral Barron space is much smaller than the Barron space. As observed in [10], this statement is not accurate because they have not discriminated the smoothness index \(s\) in \(\mathscr{B}^{s}(\mathbb{R}^{d})\). In addition, the variation space, introduced in [1], has been studied in relation to the spectral Barron space \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and the Barron space in [12, 13]. These spaces have been exploited to study the regularity of partial differential equations [16, 17, 18, 19]. 
Recently, a new space [20], originating from the variational spline theory and closely related to the variation space, has also been exploited as the target function space for neural network approximation [20]. The first objective of the present work concerns the analytical properties of \(\mathscr{B}^{s}(\mathbb{R}^{d})\). In Lemma 2.2, we show that \(\mathscr{B}^{s}(\mathbb{R}^{d})\) is complete, while Lemma 2.3 shows that \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) is not complete. This distinction highlights a key difference between these two spaces. Furthermore, Lemma 2.5 provides an example illustrating that functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\) may decay arbitrarily slowly. This example, constructed elegantly using the generalized Hypergeometric function, reveals interesting relationships between the Fourier transform and the decay rate of the functions. Additionally, we study the relations between \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and some classical function spaces. In Theorem 2.9, we establish the connections between \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and the Besov space. Furthermore, in Corollary 2.12, we establish the connections between \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and the Sobolev spaces. Notably, we prove the embedding relation \[B_{2,1}^{s+d/2}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d}) \hookrightarrow B_{\infty,1}^{s}(\mathbb{R}^{d}),\] which is an optimal result that appears to be missing in the existing literature. This embedding may serve as a bridge to study how the Barron space, the variation space, and the space in [20] are related to the classical function spaces such as the Besov space, which seems missing in the literature; cf. [20, SS 5]. The second objective of the present work is to explore the neural network approximation on a bounded domain. Building upon Barron's seminal works on approximating functions in \(\mathscr{B}^{1}(\mathbb{R}^{d})\) in the \(L^{2}\)-norm, recent studies extended the approximation to functions in \(\mathscr{B}^{k+1}(\mathbb{R}^{d})\) in the \(H^{k}\)-norm, as demonstrated in [19, 21]. Furthermore, improved approximation rates have been achieved for functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\) with large \(s\) in works such as [1, 21, 22]. These advancements contribute to a deeper understanding of the approximation capabilities of neural networks. The distinction between deep ReLU networks and shallow networks has been highlighted in the separation theorems presented in [14, 14, 23]. These theorems provide examples that can be well approximated by three-layer ReLU neural networks but not by two-layer ReLU neural networks with a width that grows polynomially with the dimension. This sheds light on the differences in expressive power between shallow and deep networks. Moreover, the approximation rates for neural networks targeting mixed derivative Besov/Sobolev spaces, spectral Barron spaces, and Holder spaces have also been investigated. These studies contribute to a broader understanding of the approximation capabilities of neural networks in various function spaces, as in [1, 15, 16, 17]. We focus on the \(L^{2}\)-approximation properties for functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\) when \(s\) is small. In Theorem 3.9, we establish that a neural network with \(L\) hidden layers and \(N\) units in each layer can approximate functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\) with a convergence rate of \(\mathcal{O}(N^{-sL})\) when \(0<sL\leq 1/2\). 
This bound is sharp, as demonstrated in Theorem 3.11. Importantly, our results provide optimal convergence rates compared to existing literature. For deep neural networks, a similar result has been presented in [15] with a convergence rate of \(\mathcal{O}(N^{-sL/2})\). For shallow neural network; i.e., \(L=1\), convergence rates of \(\mathcal{O}(N^{-1/2})\) have been established in [14, 19] when \(s=1/2\). However, it is worth noting that the constants in their estimates depend on the dimension at least polynomially, or even exponentially, and require other bounded norms besides \(\upsilon_{f,s}\). Our results provide a significant advancement by achieving optimal convergence rates without the additional dependency on dimension or other bounded norms. The remaining part of the paper is structured as follows. In Section 2, we demonstrate that the spectral Barron space is a Banach space and examine its relationship with other function spaces. This analysis provides a foundation for understanding the properties of the spectral Barron space. In Section 3, we delve into the error estimation for approximating functions in the spectral Barron space using deep neural networks with finite depth and infinite width. By investigating the convergence properties of these networks, we gain insights into their approximation capabilities and provide error bounds for their performance. Finally, in Section 4, we conclude our work by summarizing the key findings and contributions of this study. We also discuss potential avenues for future research and highlight the significance of our results in the broader context of function approximation using neural networks. Certain technical results are postponed to the Appendix. ## 2. Completeness of \(\mathscr{B}^{s}\) and its relation to other function spaces This part discusses the completeness of the spectral Barron space and embedding relations to other classical function spaces. Firstly, we fix some notations. Let \(\mathscr{S}\) be the Schwartz space and let \(\mathscr{S}^{\prime}\) be its topological dual space, i.e., the space of tempered distribution. The Gamma function \[\Gamma(s)\colon=\int_{0}^{\infty}t^{s-1}e^{-t}\mathrm{d}t,\qquad s>0.\] Denoting the surface area of the unit sphere \(\mathbb{S}^{d-1}\) by \(\omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)\). The volume of the unit ball is \(\nu_{d}=\omega_{d-1}/d\). The Beta function \[B(\alpha,\beta)\colon=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1}\mathrm{d}t= \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\qquad\alpha,\beta>0.\] The series formulation of the first kind of Bessel function is defined as \[J_{\nu}(x)\colon=(x/2)^{\nu}\sum_{k=0}^{\infty}(-1)^{k}\frac{(x/2)^{2k}}{ \Gamma(\nu+k+1)k!}.\] This definition may be found in [10, SS 1.4.1, Eq. (1)]. For \(f\in L^{1}(\mathbb{R}^{d})\), its Fourier transform of \(f\) is defined as \[\widehat{f}(\xi)\colon=\int_{\mathbb{R}^{d}}f(x)e^{-2\pi ix\cdot\xi}\mathrm{d}x,\] and the inverse Fourier transform is defined as \[f^{\vee}(x)\colon=\int_{\mathbb{R}^{d}}f(\xi)e^{2\pi ix\cdot\xi}\mathrm{d}\xi.\] If \(f\in\mathscr{S}^{\prime}(\mathbb{R}^{d})\), then the Fourier transform in the sense of distribution means \[\langle\,\widehat{f},\varphi\rangle=\langle\,f,\widehat{\varphi}\rangle \qquad\text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^{d})\subset L^{1}( \mathbb{R}^{d}).\] We shall frequently use the following Hausdorff-Young inequality. 
Let \(1\leq p\leq 2\) and \(f\in L^{p}(\mathbb{R}^{d})\); then \[\|\,\widehat{f}\,\|_{L^{p^{\prime}}(\mathbb{R}^{d})}\leq\|\,f\,\|_{L^{p}(\mathbb{R}^{d})}, \tag{2.1}\] where \(p^{\prime}\) is the conjugate exponent of \(p\); i.e., \(1/p+1/p^{\prime}=1\). We shall use the following pointwise Fourier inversion theorem.

**Lemma 2.1**.: _Let \(g\in L^{1}(\mathbb{R}^{d})\), then \(\widehat{g^{\vee}}=g\) in \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\). Furthermore, let \(f\in\mathscr{S}^{\prime}(\mathbb{R}^{d})\) and \(\widehat{f}\in L^{1}(\mathbb{R}^{d})\), then \((\widehat{f})^{\vee}=f\), a.e. on \(\mathbb{R}^{d}\)._

Proof.: By definition, there holds \[\langle\,\widehat{g^{\vee}},\varphi\rangle=\langle\,g^{\vee},\widehat{\varphi}\rangle=\langle\,g,\varphi\rangle\qquad\text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^{d}).\] Therefore, \(\widehat{g^{\vee}}=g\) in \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\). Since \(\widehat{f}\in L^{1}(\mathbb{R}^{d})\), \[\langle\,(\widehat{f})^{\vee},\varphi\rangle=\langle\,\widehat{f},\varphi^{\vee}\rangle=\langle\,f,\varphi\rangle\qquad\text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^{d}).\] By the Hausdorff-Young inequality (2.1), \[\|\,(\widehat{f})^{\vee}\,\|_{L^{\infty}(\mathbb{R}^{d})}\leq\|\,\widehat{f}\,\|_{L^{1}(\mathbb{R}^{d})}.\] Therefore, \(f\) is a bounded linear functional on \(L^{1}(\mathbb{R}^{d})\); i.e., \(f\in[L^{1}(\mathbb{R}^{d})]^{*}=L^{\infty}(\mathbb{R}^{d})\), since \(\mathscr{S}(\mathbb{R}^{d})\) is dense in \(L^{1}(\mathbb{R}^{d})\) and \[|\langle\,f,\varphi\rangle|=|\langle\,(\widehat{f})^{\vee},\varphi\rangle|\leq\|\,(\widehat{f})^{\vee}\,\|_{L^{\infty}(\mathbb{R}^{d})}\|\,\varphi\,\|_{L^{1}(\mathbb{R}^{d})}\leq\|\,\widehat{f}\,\|_{L^{1}(\mathbb{R}^{d})}\|\,\varphi\,\|_{L^{1}(\mathbb{R}^{d})}.\] Hence, \((\widehat{f})^{\vee}=f\), a.e. on \(\mathbb{R}^{d}\) because \((\widehat{f})^{\vee}-f\in L^{\infty}(\mathbb{R}^{d})\) [1, Corollary 4.24].

A direct consequence of Lemma 2.1 is that the pointwise Fourier inversion is valid for functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\). We shall frequently use this fact later on.

### Completeness of the spectral Barron space

**Lemma 2.2**.: 1. \(\mathscr{B}^{s}(\mathbb{R}^{d})\) _is a Banach space._ 2. _When_ \(s>0\)_,_ \(\mathscr{B}^{s}(\mathbb{R}^{d})\) _is not a Banach space if the norm_ \(\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\) _is replaced by_ \(\upsilon_{f,s}\)_._

Proof.: We give a brief proof of the first claim for the reader's convenience, which has been stated in [10, Theorem 2.2.1]. It is sufficient to check the completeness of \(\mathscr{B}^{s}(\mathbb{R}^{d})\). For any Cauchy sequence \(\{f_{k}\}_{k=1}^{\infty}\subset\mathscr{B}^{s}(\mathbb{R}^{d})\), there exists \(g\in L^{1}(\mathbb{R}^{d})\) such that \(\widehat{f}_{k}\to g\) in \(L^{1}(\mathbb{R}^{d})\). Therefore, there exists a subsequence of \(\{f_{k}\}_{k=1}^{\infty}\) (still denoted by \(f_{k}\)) such that \(\widehat{f}_{k}\to g\) a.e. on \(\mathbb{R}^{d}\). Define the measure \(\mu\) by setting, for any measurable set \(E\subset\mathbb{R}^{d}\), \[\mu(E)\colon=\int_{E}|\xi|^{s}\mathrm{d}\xi.\] Then \(\{\widehat{f}_{k}\}_{k=1}^{\infty}\) is a Cauchy sequence in \(L^{1}(\mathbb{R}^{d},\mu)\) and there exists \(h\in L^{1}(\mathbb{R}^{d},\mu)\) such that \(\widehat{f}_{k}\to h\) in \(L^{1}(\mathbb{R}^{d},\mu)\). Therefore, there exists a subsequence of \(\{f_{k}\}_{k=1}^{\infty}\) (still denoted by \(f_{k}\)) such that \(\widehat{f}_{k}\to h\) \(\mu\)-a.e. on \(\mathbb{R}^{d}\).
Note that for any measurable set \(E\subset\mathbb{R}^{d}\), \(\mu(E)=0\) is equivalent to \(|E|=0\). Therefore, \(\widehat{f}_{k}\to h\) a.e. on \(\mathbb{R}^{d}\). By the uniqueness of limits, \(h=g\), a.e. on \(\mathbb{R}^{d}\). Define \(f=g^{\vee}\). Lemma 2.1 shows that \(\widehat{f}=g\) in \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\). Therefore \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\) and \(f_{k}\to f\) in \(\mathscr{B}^{s}(\mathbb{R}^{d})\). Hence \(\mathscr{B}^{s}(\mathbb{R}^{d})\) is complete and it is a Banach space.

The proof of (2) is a _reductio ad absurdum_. Suppose the claim does not hold; then there exists \(C\) depending only on \(s\) and \(d\) such that for any \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\), \[\upsilon_{f,0}\leq C\upsilon_{f,s}. \tag{2.2}\] We shall show this is false by the following example. For some \(\delta>-1\), let \[f_{n}(x)=\left(\sum_{k=1}^{n}2^{kd}(1-2^{2k}|\xi|^{2})_{+}^{\delta}\right)^{\vee}(x).\] To bound \(\upsilon_{f_{n},0}\) and \(\upsilon_{f_{n},s}\), we introduce the Bochner-Riesz multipliers \[\phi_{R}=\left(\left(1-\frac{|\xi|^{2}}{R^{2}}\right)_{+}^{\delta}\right)^{\vee},\qquad\delta>-1.\] We claim \[\phi_{R}(x)=\frac{\Gamma(\delta+1)}{\pi^{\delta}|x|^{\delta+d/2}}R^{-\delta+d/2}J_{\delta+d/2}(2\pi|x|R), \tag{2.3}\] and \[\upsilon_{\phi_{R},s}=\frac{\omega_{d-1}}{2}B\left(\frac{s+d}{2},\delta+1\right)R^{s+d}. \tag{2.4}\] The proof is postponed to Appendix A.1. It follows from (2.3) that \[f_{n}(x)=\frac{\Gamma(\delta+1)}{\pi^{\delta}|x|^{\delta+d/2}}\sum_{k=1}^{n}2^{k(\delta+d/2)}J_{\delta+d/2}(2^{1-k}\pi|x|), \tag{2.5}\] and \(f_{n}\in\mathscr{B}^{s}(\mathbb{R}^{d})\) with \[\upsilon_{f_{n},s}=\sum_{k=1}^{n}2^{kd}\upsilon_{\phi_{2^{-k}},s}=\frac{1-2^{-ns}}{2^{s+1}-2}\omega_{d-1}B\left(\frac{s+d}{2},\delta+1\right),\] and \[\upsilon_{f_{n},0}=\sum_{k=1}^{n}2^{kd}\upsilon_{\phi_{2^{-k}},0}=\frac{\omega_{d-1}}{2}B\left(\frac{d}{2},\delta+1\right)n,\] where we have used (2.4). It is clear that \[\frac{\omega_{d-1}}{2^{s+1}}B\left(\frac{s+d}{2},\delta+1\right)\leq\upsilon_{f_{n},s}\leq\frac{\omega_{d-1}}{2^{s+1}-2}B\left(\frac{s+d}{2},\delta+1\right).\] Hence \(\upsilon_{f_{n},0}\simeq\mathcal{O}(n)\) while \(\upsilon_{f_{n},s}\simeq\mathcal{O}(1)\). This shows that (2.2) fails for sufficiently large \(n\), which proves the second claim.

Similar to \(\mathscr{B}^{s}(\mathbb{R}^{d})\), the space \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) defined in (1.1) has been exploited as the target space for neural network approximation by several authors [17, 18, 19, 20, 21]. The advantage of this space is that the Fourier transform is well-defined and the pointwise Fourier inversion is true for functions belonging to \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\). Unfortunately, \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) is not a Banach space, as we shall show below.

**Lemma 2.3**.: _The space \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) defined in (1.1) equipped with the norm \(\upsilon_{f,0}+\upsilon_{f,s}\) is not a Banach space._

To prove Lemma 2.3, we recall the Barron spectrum space defined by Meng and Ming in [14]: for \(s\in\mathbb{R}\) and \(1\leq p\leq 2\), \[\mathscr{B}^{s}_{p}(\mathbb{R}^{d}):=\big{\{}\,f\in L^{p}(\mathbb{R}^{d})\,\mid\,\|\,f\,\|_{L^{p}(\mathbb{R}^{d})}+\upsilon_{f,s}<\infty\,\big{\}} \tag{2.6}\] equipped with the norm \(\|\,f\,\|_{\mathscr{B}^{s}_{p}(\mathbb{R}^{d})}\colon=\|\,f\,\|_{L^{p}(\mathbb{R}^{d})}+\upsilon_{f,s}\).
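The closed form (2.4) and the divergence rates above admit a quick numerical sanity check. The following sketch (an illustration only, assuming NumPy and SciPy are available; the parameter values are arbitrary) verifies (2.4) by radial quadrature and reproduces the growth \(\upsilon_{f_{n},0}=\mathcal{O}(n)\) against \(\upsilon_{f_{n},s}=\mathcal{O}(1)\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, gamma

# Check of (2.4): v_{phi_R,s} = (omega_{d-1}/2) B((s+d)/2, delta+1) R^{s+d}.
# The multiplier has the nonnegative Fourier transform (1 - |xi|^2/R^2)_+^delta,
# so the spectral norm reduces to a one-dimensional radial integral.
d, s, delta, R = 3, 0.5, 1.2, 0.25
omega = 2 * np.pi ** (d / 2) / gamma(d / 2)          # surface area of S^{d-1}

numeric = omega * quad(lambda r: r ** (s + d - 1) * (1 - (r / R) ** 2) ** delta, 0, R)[0]
closed = 0.5 * omega * beta((s + d) / 2, delta + 1) * R ** (s + d)
assert abs(numeric - closed) < 1e-6 * closed

# The counterexample f_n sums 2^{kd} phi_{2^{-k}}; by (2.4) its spectral norms
# then satisfy v_{f_n,0} = O(n) while v_{f_n,s} = O(1), defeating (2.2).
for n in (5, 10, 20):
    v0 = sum(2 ** (k * d) * 0.5 * omega * beta(d / 2, delta + 1) * 2 ** (-k * d)
             for k in range(1, n + 1))
    vs = sum(2 ** (k * d) * 0.5 * omega * beta((s + d) / 2, delta + 1) * 2 ** (-k * (s + d))
             for k in range(1, n + 1))
    print(f"n={n:2d}:  v_(f_n,0)={v0:8.3f} (grows)   v_(f_n,s)={vs:.3f} (bounded)")
```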
A useful interpolation inequality that compares the spectral norm of different orders has been proved in [14, Lemma 2.1]. For \(1\leq p\leq 2\) and \(-d/p<s_{1}<s_{2}\), there exists \(C\) depends on \(s_{1},s_{2},d\) and \(p\) such that \[\upsilon_{f,s_{1}}\leq C\|\,f\,\|_{L^{p}(\mathbb{R}^{d})}^{\gamma}\upsilon_{f,s_{2}}^{1-\gamma}, \tag{2.7}\] where \(\gamma=(s_{2}-s_{1})/(s_{2}+d/p)\). For any \(\varepsilon>0\), using the fact \[\upsilon_{f_{\varepsilon},s}=\varepsilon^{-s}\upsilon_{f,s},\] we observe that the inequality (2.7) is dilation invariant because it is invariant if we replace \(f\) by \(f_{\varepsilon}\colon=f(x/\varepsilon)\). Proof of Lemma 2.3.: The authors in [14] have proved that \(\mathscr{B}^{s}_{p}(\mathbb{R}^{d})\) equipped with the norm \(\|\,f\,\|_{\mathscr{B}^{s}_{p}(\mathbb{R}^{d})}\) is a Banach space. For any \(f\in\mathscr{B}^{s}_{1}(\mathbb{R}^{d})\), taking \(s_{1}=0,s_{2}=s\) and \(p=1\) in (2.7), we obtain, there exists \(C\) depending only on \(d\) and \(s\) such that \[\upsilon_{f,0}\leq C\|\,f\,\|_{L^{1}(\mathbb{R}^{d})}^{\gamma}\upsilon_{f,s}^ {1-\gamma}\leq C\|\,f\,\|_{\mathscr{B}^{s}_{1}(\mathbb{R}^{d})},\] where \(\gamma=s/(s+d)\). We prove the assertion by _reductio ad absurdum_. Suppose that \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) equipped with the norm \(\upsilon_{f,0}+\upsilon_{f,s}\) is also a Banach space, then by the bounded inverse theorem and the above interpolation inequality, we get, there exists \(C\) depending only on \(s\) and \(d\) such that for any \(f\in\mathscr{B}^{s}_{1}(\mathbb{R}^{d})\), \[\|\,f\,\|_{L^{1}(\mathbb{R}^{d})}\leq C(\upsilon_{f,0}+\upsilon_{f,s}). \tag{2.8}\] We shall show this is not the case by the following example. For some \(\delta>(d-1)/2\), we define \[f_{n}(x)\colon=\left(\sum_{k=1}^{n}(1-2^{2k}|\xi|^{2})_{+}^{\delta}\right)^{ \vee}(x).\] Using (2.3) and noting \(f_{n}=\sum_{k=1}^{n}\phi_{2^{-k}}\), we have the explicit form of \(f_{n}\) as \[f_{n}(x)=\frac{\Gamma(\delta+1)}{\pi^{\delta}|x|^{\delta+d/2}}\sum_{k=1}^{n}2^{ k(\delta-d/2)}J_{\delta+d/2}(2^{1-k}\pi|x|). \tag{2.9}\] Using (2.4), we get \[\upsilon_{f_{n},s}=\sum_{k=1}^{n}\upsilon_{\phi_{2^{-k}},s}=\frac{1-2^{-n(s+d) }}{2^{s+d+1}-2}\omega_{d-1}B\left(\frac{s+d}{2},\delta+1\right),\] and \[\frac{\omega_{d-1}}{2^{s+d+1}}B\left(\frac{s+d}{2},\delta+1\right)\leq\upsilon _{f_{n},s}\leq\frac{\omega_{d-1}}{2^{s+d+1}-2}B\left(\frac{s+d}{2},\delta+1 \right).\] Proceeding along the same line, we obtain \[\upsilon_{f_{n},0}=\sum_{k=1}^{n}\upsilon_{\phi_{2^{-k}},0}=\frac{1-2^{-nd}}{2 ^{d+1}-2}\omega_{d-1}B\left(\frac{d}{2},\delta+1\right),\] and \[\frac{\omega_{d-1}}{2^{d+1}}B\left(\frac{d}{2},\delta+1\right)\leq\upsilon_{f _{n},0}\leq\frac{\omega_{d-1}}{2^{d+1}-2}B\left(\frac{d}{2},\delta+1\right).\] Hence, \[\upsilon_{f_{n},0}+\upsilon_{f_{n},s}\leq\frac{\omega_{d-1}}{2}\left(\frac{B( d/2,\delta+1)}{2^{d}-1}+\frac{B((s+d)/2,\delta+1)}{2^{s+d}-1}\right). 
\tag{2.10}\]

By (2.3) and a change of variables, a direct calculation gives \[\|\,\phi_{R}\,\|_{L^{1}(\mathbb{R}^{d})}=\frac{\Gamma(\delta+1)}{\pi^{\delta}R^{\delta-d/2}}\int_{\mathbb{R}^{d}}\frac{|J_{\delta+d/2}(2\pi|x|R)|}{|x|^{\delta+d/2}}\mathrm{d}x=\frac{2^{\delta-d/2}\Gamma(\delta+1)}{\pi^{d/2}}\int_{\mathbb{R}^{d}}\frac{|J_{\delta+d/2}(|x|)|}{|x|^{\delta+d/2}}\mathrm{d}x.\] Invoking [1, Appendix B.6, B.7], there exists \(C\) depending only on \(\nu\) such that \[|J_{\nu}(x)|\leq C\begin{cases}|x|^{\nu}&|x|\leq 1,\\ |x|^{-1/2}&|x|>1.\end{cases}\] We deduce that there exists \(C\) depending only on \(d\) and \(\delta\) such that \[\|\,\phi_{R}\,\|_{L^{1}(\mathbb{R}^{d})}=\frac{2^{\delta-d/2}\Gamma(\delta+1)}{\pi^{d/2}}\left(\int_{|x|\leq 1}\frac{|J_{\delta+d/2}(|x|)|}{|x|^{\delta+d/2}}\mathrm{d}x+\int_{|x|>1}\frac{|J_{\delta+d/2}(|x|)|}{|x|^{\delta+d/2}}\mathrm{d}x\right)\leq C\left(\int_{|x|\leq 1}\mathrm{d}x+\int_{|x|>1}|x|^{-1/2-\delta-d/2}\mathrm{d}x\right)\leq C\left(1+\frac{1}{\delta-(d-1)/2}\right),\] where we have used the fact \(\delta>(d-1)/2\) in the last step. Therefore, \(\|\,\phi_{R}\,\|_{L^{1}(\mathbb{R}^{d})}\) is bounded by a constant that depends only on \(\delta\) and \(d\) but is independent of \(R\). Moreover, \[\|\,f_{n}\,\|_{L^{1}(\mathbb{R}^{d})}\leq\sum_{k=1}^{n}\|\,\phi_{2^{-k}}\,\|_{L^{1}(\mathbb{R}^{d})}\leq n\|\,\phi_{1}\,\|_{L^{1}(\mathbb{R}^{d})},\] and by the Hausdorff-Young inequality (2.1), \[\|\,f_{n}\,\|_{L^{1}(\mathbb{R}^{d})}\geq\|\,\widehat{f}_{n}\,\|_{L^{\infty}(\mathbb{R}^{d})}=\widehat{f}_{n}(0)=n.\] This means that \(\|\,f_{n}\,\|_{L^{1}(\mathbb{R}^{d})}=\mathcal{O}(n)\), which together with (2.10) immediately shows that the inequality (2.8) cannot be true for sufficiently large \(n\). Hence, we conclude that \(\widehat{\mathscr{B}}^{s}(\mathbb{R}^{d})\) is not a Banach space.

### Embedding relations of the spectral Barron spaces

In this part we discuss the embedding of the spectral Barron spaces.

**Lemma 2.4**.: 1. _Interpolation inequality: For any_ \(0\leq s_{1}\leq s\leq s_{2}\) _satisfying_ \(s=\alpha s_{1}+(1-\alpha)s_{2}\) _with_ \(0\leq\alpha\leq 1\)_, and_ \(f\in\mathscr{B}^{s_{2}}(\mathbb{R}^{d})\)_, there holds_ (2.11) \[\upsilon_{f,s}\leq\upsilon_{f,s_{1}}^{\alpha}\upsilon_{f,s_{2}}^{1-\alpha},\] _and_ (2.12) \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\leq\|\,f\,\|_{\mathscr{B}^{s_{1}}(\mathbb{R}^{d})}^{\alpha}\|\,f\,\|_{\mathscr{B}^{s_{2}}(\mathbb{R}^{d})}^{1-\alpha}.\] 2. _Let_ \(0\leq s_{1}\leq s_{2}\)_, there holds_ \(\mathscr{B}^{s_{2}}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s_{1}}(\mathbb{R}^{d})\) _with_ (2.13) \[\|\,f\,\|_{\mathscr{B}^{s_{1}}(\mathbb{R}^{d})}\leq\left(2-\frac{s_{1}}{s_{2}}\right)\|\,f\,\|_{\mathscr{B}^{s_{2}}(\mathbb{R}^{d})}\qquad\forall f\in\mathscr{B}^{s_{2}}(\mathbb{R}^{d}).\] The embedding (2.13) has been stated in [10, Theorem 2.2.2.2] without tracing the embedding constant.

Proof.: We start with the interpolation inequality (2.11) for the spectral norm. For any \(0\leq s_{1}\leq s\leq s_{2}\) with \(s=\alpha s_{1}+(1-\alpha)s_{2}\), using Hölder's inequality, we obtain \[\upsilon_{f,s}=\int_{\mathbb{R}^{d}}\left(|\xi|^{s_{1}}|\widehat{f}(\xi)|\right)^{\alpha}\left(|\xi|^{s_{2}}|\widehat{f}(\xi)|\right)^{1-\alpha}\mathrm{d}\xi\leq\upsilon_{f,s_{1}}^{\alpha}\upsilon_{f,s_{2}}^{1-\alpha}.\] This gives (2.11).
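The inequality (2.11) can also be tested numerically; the sketch below (assuming NumPy and SciPy; the Gaussian profile is our illustrative choice, not part of the proof) uses \(\widehat{f}(\xi)=e^{-\pi|\xi|^{2}}\), for which \(\upsilon_{f,s}=\Gamma((s+d)/2)/(\Gamma(d/2)\pi^{s/2})\).

```python
import numpy as np
from scipy.special import gamma

# Test of the interpolation inequality (2.11) on the Gaussian f with Fourier
# transform exp(-pi |xi|^2), whose spectral norm has the closed form
# v_{f,s} = Gamma((s+d)/2) / (Gamma(d/2) * pi^{s/2}).
def v(s, d):
    return gamma((s + d) / 2) / (gamma(d / 2) * np.pi ** (s / 2))

d = 4
rng = np.random.default_rng(0)
for _ in range(1000):
    s1, s2 = np.sort(rng.uniform(0.0, 5.0, size=2))
    a = rng.uniform(0.0, 1.0)                  # interpolation weight alpha
    s = a * s1 + (1 - a) * s2
    assert v(s, d) <= v(s1, d) ** a * v(s2, d) ** (1 - a) * (1 + 1e-12)
print("(2.11) holds on all sampled triples (s1, s, s2).")
```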
Next, for \(a,b,c>0\), by Young's inequality, we have \[\frac{a+b^{\alpha}c^{1-\alpha}}{(a+b)^{\alpha}(a+c)^{1-\alpha}}=\left(\frac{a}{a+b}\right)^{\alpha}\left(\frac{a}{a+c}\right)^{1-\alpha}+\left(\frac{b}{a+b}\right)^{\alpha}\left(\frac{c}{a+c}\right)^{1-\alpha}\leq\alpha\frac{a}{a+b}+(1-\alpha)\frac{a}{a+c}+\alpha\frac{b}{a+b}+(1-\alpha)\frac{c}{a+c}=1.\] This yields \[a+b^{\alpha}c^{1-\alpha}\leq(a+b)^{\alpha}(a+c)^{1-\alpha}.\] Letting \(a=\upsilon_{f,0}\), \(b=\upsilon_{f,s_{1}}\) and \(c=\upsilon_{f,s_{2}}\), we obtain \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}=\upsilon_{f,0}+\upsilon_{f,s}\leq\upsilon_{f,0}+\upsilon_{f,s_{1}}^{\alpha}\upsilon_{f,s_{2}}^{1-\alpha}\leq\|\,f\,\|_{\mathscr{B}^{s_{1}}(\mathbb{R}^{d})}^{\alpha}\|\,f\,\|_{\mathscr{B}^{s_{2}}(\mathbb{R}^{d})}^{1-\alpha}.\] This implies (2.12). Next, if we take \(s_{1}=0\) in (2.11) and \(s=(1-\alpha)s_{2}\) with \(\alpha=1-s/s_{2}\), then \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\leq\upsilon_{f,0}+\upsilon_{f,0}^{\alpha}\upsilon_{f,s_{2}}^{1-\alpha}\leq(1+\alpha)\upsilon_{f,0}+(1-\alpha)\upsilon_{f,s_{2}}\leq(1+\alpha)\|\,f\,\|_{\mathscr{B}^{s_{2}}(\mathbb{R}^{d})}.\] This leads to (2.13) and completes the proof.

The next lemma shows that \(\mathscr{B}^{s}_{p}(\mathbb{R}^{d})\) is a proper subspace of \(\mathscr{B}^{s}(\mathbb{R}^{d})\).

**Lemma 2.5**.: _For \(s\geq 0\) and \(1\leq p\leq 2\), there holds \(\mathscr{B}^{s}_{p}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\), and the inclusion is proper in the sense that for any \(1\leq p<\infty\), there exists \(f_{p}\in\mathscr{B}^{s}(\mathbb{R}^{d})\) with \(f_{p}\not\in L^{p}(\mathbb{R}^{d})\)._

Proof.: It follows from the interpolation inequality (2.7) that \(\upsilon_{f,0}\leq C\|\,f\,\|_{\mathscr{B}^{s}_{p}(\mathbb{R}^{d})}\). Hence \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\leq C\|\,f\,\|_{\mathscr{B}^{s}_{p}(\mathbb{R}^{d})}.\] This implies \(\mathscr{B}^{s}_{p}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\) for any \(s\geq 0\) and \(1\leq p\leq 2\). We shall show below that the inclusion is proper. Let \[f_{p}(x)\colon=\left(|\xi|^{-d/p^{\prime}}\chi_{[0,1)}(|\xi|)\right)^{\vee}(x),\] where \(\chi_{\Omega}(t)\) is the characteristic function on \(\mathbb{R}\) that equals one if \(t\in\Omega\) and zero otherwise. It is straightforward to verify that \(f_{p}\in\mathscr{B}^{s}(\mathbb{R}^{d})\). We shall show below that \(f_{p}\notin L^{p}(\mathbb{R}^{d})\), which is based on the following explicit formula for \(f_{p}\), proved in Appendix A.2: \[f_{p}(x)={}_{1}F_{2}(d/(2p);1+d/(2p),d/2;-\pi^{2}|x|^{2})p\nu_{d}, \tag{2.14}\] where the generalized Hypergeometric function \({}_{n}F_{m}\) is defined as follows. For nonnegative integers \(n,m\) such that none of the parameters \(\{\beta_{j}\}_{j=1}^{m}\) is a negative integer or zero, \[{}_{n}F_{m}(\alpha_{1},\ldots,\alpha_{n};\beta_{1},\ldots,\beta_{m};x)\colon=\sum_{k=0}^{\infty}\frac{\prod_{j=1}^{n}(\alpha_{j})_{k}}{\prod_{j=1}^{m}(\beta_{j})_{k}}\frac{x^{k}}{k!}.\] The generalized Hypergeometric function \({}_{n}F_{m}\) converges for all finite \(x\) if \(n\leq m\); in particular, \({}_{n}F_{m}(\alpha_{1},\ldots,\alpha_{n};\beta_{1},\ldots,\beta_{m};0)=1\). Hence \(f_{p}(x)\) is finite for any \(x\).
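In dimension one, the representation (2.14) can be validated directly against the defining Fourier integral. The following sketch (assuming the mpmath library; the sample points are arbitrary) compares the hypergeometric evaluation with numerical quadrature.

```python
import mpmath as mp

# In d = 1, formula (2.14) reads f_p(x) = 2p * 1F2(1/(2p); 1 + 1/(2p), 1/2; -pi^2 x^2),
# while the defining integral is f_p(x) = 2 * int_0^1 xi^{-1/p'} cos(2 pi x xi) d xi.
mp.mp.dps = 30
for p in (1.0, 1.5, 2.0):
    q = 1 - 1 / p            # 1/p', the conjugate-exponent reciprocal
    for x in (0.0, 0.3, 1.7, 4.0):
        hyper = 2 * p * mp.hyp1f2(1 / (2 * p), 1 + 1 / (2 * p), 0.5, -(mp.pi * x) ** 2)
        integral = 2 * mp.quad(lambda t: t ** (-q) * mp.cos(2 * mp.pi * x * t), [0, 1])
        assert abs(hyper - integral) < mp.mpf("1e-15")
print("Hypergeometric representation (2.14) matches the Fourier integral in d = 1.")
```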
Using [33, Appendix], we obtain \[{}_{1}F_{2}(\alpha;\beta,\gamma;-x^{2}/4)\simeq\mathcal{O}(|x|^{\alpha-\beta-\gamma+1/2}+|x|^{-2\alpha})\qquad\text{when}\quad|x|\to\infty.\] Therefore, \[f_{p}(x)\simeq\mathcal{O}(|x|^{-(d+1)/2}+|x|^{-d/p})\qquad\text{when}\quad|x|\to\infty.\] This immediately implies \(f_{p}\not\in L^{p}(\mathbb{R}^{d})\).

_Remark 2.6_.: The representation (2.14) is rather complicated, so we give explicit formulas for certain special cases. When \(d=1\), \[f_{1}(x)=\frac{\sin(2\pi x)}{\pi x},\qquad f_{2}(x)=\frac{2}{\sqrt{|x|}}C(2\sqrt{|x|}),\] where \(C\) is the Fresnel Cosine integral given by \[C(x)=\int_{0}^{x}\cos(\pi t^{2}/2)\mathrm{d}t\to\frac{1}{2}\qquad\text{when}\quad x\to\infty.\] Indeed, for \(d=p=1\), using the relation [11, § 6.2.1, Eq. (10)] \[\sin x={}_{0}F_{1}(;3/2;-x^{2}/4)x,\] we obtain \[f_{1}(x)=2\,{}_{1}F_{2}(1/2;3/2,1/2;-\pi^{2}x^{2})=2\,{}_{0}F_{1}(;3/2;-\pi^{2}x^{2})=\frac{\sin(2\pi x)}{\pi x}.\] When \(p=2\), using the identity [11, § 6.2.11, Eq. (41)] \[C(\sqrt{2x/\pi})=\sqrt{\frac{2x}{\pi}}\,{}_{1}F_{2}(1/4;5/4,1/2;-x^{2}/4)\qquad\text{when}\quad x>0,\] we obtain \[f_{2}(x)=4\,{}_{1}F_{2}(1/4;5/4,1/2;-\pi^{2}x^{2})=\frac{2}{\sqrt{|x|}}C(2\sqrt{|x|}).\]

### Relations to some classical function spaces

In this part, we establish the embedding between the spectral Barron space \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and the Besov space, and hence we bridge \(\mathscr{B}^{s}(\mathbb{R}^{d})\) and the Sobolev space as in [12]. We first recall the definition of the Besov space.

**Definition 2.7** (Besov space).: Let \(\{\varphi_{j}\}_{j=0}^{\infty}\subset\mathscr{S}(\mathbb{R}^{d})\) satisfy \(0\leq\varphi_{j}\leq 1\) and \[\begin{cases}\operatorname{supp}(\varphi_{0})\subset\Gamma_{0}\colon=\big{\{}\,x\in\mathbb{R}^{d}\,\mid\,|x|\leq 2\,\big{\}}\,,\\ \operatorname{supp}(\varphi_{j})\subset\Gamma_{j}\colon=\big{\{}\,x\in\mathbb{R}^{d}\,\mid\,2^{j-1}\leq|x|\leq 2^{j+1}\,\big{\}}\,,\qquad j=1,2,\ldots.\end{cases}\] For every multi-index \(\alpha\), there exists a positive number \(c_{\alpha}\) such that \[2^{j|\alpha|}|\nabla^{\alpha}\varphi_{j}(x)|\leq c_{\alpha}\qquad\text{for all}\quad j=0,1,\ldots\quad\text{and all}\quad x\in\mathbb{R}^{d},\] and \[\sum_{j=0}^{\infty}\varphi_{j}(x)=1\quad\text{for every}\quad x\in\mathbb{R}^{d}.\] Let \(\alpha\in\mathbb{R}\) and \(1\leq p,q\leq\infty\). Define the _Besov space_ \[B_{p,q}^{\alpha}(\mathbb{R}^{d})\colon=\Big{\{}\,f\in\mathscr{S}^{\prime}(\mathbb{R}^{d})\,\mid\,\|\,f\,\|_{B_{p,q}^{\alpha}(\mathbb{R}^{d})}<\infty\,\Big{\}}\] equipped with the norm \[\|\,f\,\|_{B_{p,q}^{\alpha}(\mathbb{R}^{d})}\colon=\left(\sum_{j=0}^{\infty}2^{\alpha qj}\|\,(\varphi_{j}\widehat{f})^{\vee}\,\|_{L^{p}(\mathbb{R}^{d})}^{q}\right)^{1/q}\qquad\text{when}\quad q<\infty,\] and \[\|\,f\,\|_{B_{p,\infty}^{\alpha}(\mathbb{R}^{d})}\colon=\sup_{j\geq 0}2^{\alpha j}\|\,(\varphi_{j}\widehat{f})^{\vee}\,\|_{L^{p}(\mathbb{R}^{d})}.\]

We first recall the following embedding for the Besov space, which was first proved in the series of works of Taibleson [18, 19]. We include the proof in Appendix A.3 for the reader's convenience.

**Lemma 2.8**.: _There holds \(B_{p_{1},q_{1}}^{\alpha_{1}}(\mathbb{R}^{d})\hookrightarrow B_{p_{2},q_{2}}^{\alpha_{2}}(\mathbb{R}^{d})\) if and only if \(p_{1}\leq p_{2}\) and one of the following conditions holds:_ 1. \(\alpha_{1}-d/p_{1}>\alpha_{2}-d/p_{2}\) _and_ \(q_{1},q_{2}\) _are arbitrary;_ 2.
\(\alpha_{1}-d/p_{1}=\alpha_{2}-d/p_{2}\) _and_ \(q_{1}\leq q_{2}\)_._

The main result of the embedding is:

**Theorem 2.9**.: 1. _There holds_ (2.15) \[B_{2,1}^{s+d/2}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B_{\infty,1}^{s}(\mathbb{R}^{d}).\] 2. _The above embedding is optimal in the sense that_ \(B_{2,1}^{s+d/2}(\mathbb{R}^{d})\) _is the biggest one of all_ \(B_{p,q}^{\alpha}(\mathbb{R}^{d})\) _satisfying_ \(B_{p,q}^{\alpha}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\)_, and_ \(B_{\infty,1}^{s}(\mathbb{R}^{d})\) _is the smallest one of all_ \(B_{p,q}^{\alpha}(\mathbb{R}^{d})\) _satisfying_ \(\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B_{p,q}^{\alpha}(\mathbb{R}^{d})\)_._

Proof.: To prove (1), first note that for any \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\), since \(\sum_{j=0}^{\infty}\varphi_{j}=1\), \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}=\sum_{j=0}^{\infty}\int_{\mathbb{R}^{d}}(1+|\xi|^{s})\varphi_{j}(\xi)|\widehat{f}(\xi)|\mathrm{d}\xi\leq\sum_{j=0}^{\infty}\left(\int_{\operatorname{supp}\,\varphi_{j}}(1+|\xi|^{s})^{2}\mathrm{d}\xi\right)^{1/2}\|\,\varphi_{j}\widehat{f}\,\|_{L^{2}(\mathbb{R}^{d})}.\] A direct calculation gives, for \(j=0,1,\ldots\), \[\int_{\operatorname{supp}\,\varphi_{j}}(1+|\xi|^{s})^{2}\mathrm{d}\xi\leq\int_{0\leq|\xi|\leq 2^{j+1}}(1+|\xi|^{s})^{2}\mathrm{d}\xi=\omega_{d-1}\int_{0}^{2^{j+1}}(1+r^{s})^{2}r^{d-1}\mathrm{d}r\leq 2\omega_{d-1}\int_{0}^{2^{j+1}}(1+r^{2s})r^{d-1}\mathrm{d}r\leq 2\omega_{d-1}\left(\frac{2^{(j+1)d}}{d}+\frac{2^{(j+1)(2s+d)}}{2s+d}\right)\leq 4\nu_{d}2^{(j+1)(2s+d)}.\] Using Plancherel's theorem, we get \[\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\leq 2^{s+1+d/2}\sqrt{\nu_{d}}\sum_{j=0}^{\infty}2^{j(s+d/2)}\|\,\varphi_{j}\widehat{f}\,\|_{L^{2}(\mathbb{R}^{d})}=2^{s+1+d/2}\sqrt{\nu_{d}}\sum_{j=0}^{\infty}2^{j(s+d/2)}\|\,(\varphi_{j}\widehat{f})^{\vee}\,\|_{L^{2}(\mathbb{R}^{d})}=2^{s+1+d/2}\sqrt{\nu_{d}}\|\,f\,\|_{B_{2,1}^{s+d/2}(\mathbb{R}^{d})}.\] Next, for any \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\), by Lemma 2.1 we have \(\varphi_{j}\widehat{f}\in L^{1}(\mathbb{R}^{d})\); using the Hausdorff-Young inequality (2.1), we obtain \[\|\,f\,\|_{B^{s}_{\infty,1}(\mathbb{R}^{d})}=\sum_{j=0}^{\infty}2^{sj}\|\,(\varphi_{j}\widehat{f})^{\vee}\,\|_{L^{\infty}(\mathbb{R}^{d})}\leq\sum_{j=0}^{\infty}2^{sj}\|\,\varphi_{j}\widehat{f}\,\|_{L^{1}(\mathbb{R}^{d})}\leq\|\,\varphi_{0}\widehat{f}\,\|_{L^{1}(\mathbb{R}^{d})}+2^{s}\sum_{j=1}^{\infty}\int_{\mathbb{R}^{d}}\varphi_{j}(\xi)|\xi|^{s}|\widehat{f}(\xi)|\mathrm{d}\xi\leq\upsilon_{f,0}+2^{s}\upsilon_{f,s}.\] Therefore, \(\|\,f\,\|_{B^{s}_{\infty,1}(\mathbb{R}^{d})}\leq 2^{s}\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\). This proves (2.15) with \[2^{-s}\|\,f\,\|_{B^{s}_{\infty,1}(\mathbb{R}^{d})}\leq\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\leq 2^{s+1+d/2}\sqrt{\nu_{d}}\|\,f\,\|_{B^{s+d/2}_{2,1}(\mathbb{R}^{d})}.\] It remains to show that the embedding (2.15) is optimal. Suppose that there exists \(B^{\alpha}_{p,q}(\mathbb{R}^{d})\) such that \[B^{s+d/2}_{2,1}(\mathbb{R}^{d})\hookrightarrow B^{\alpha}_{p,q}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B^{s}_{\infty,1}(\mathbb{R}^{d});\] using Lemma 2.8, we would have \(2\leq p\leq\infty\), \(\alpha=s+d/p\) and \(q=1\). In what follows, we shall exploit an example adapted from [10, Ch. 5, Ex. 9] to show that \(B^{\alpha}_{p,1}(\mathbb{R}^{d})\not\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\) when \(2<p<\infty\) and \(\alpha>0\).
Therefore, since \(B^{s+d/p}_{p,1}(\mathbb{R}^{d})\hookrightarrow B^{s}_{\infty,1}(\mathbb{R}^{d})\) by Lemma 2.8, it will follow that \(B^{s+d/p}_{p,1}(\mathbb{R}^{d})\not\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\) for any \(2<p\leq\infty\). Let \[\psi_{n}(x)=(1+in)^{-d/2}e^{-\pi|x|^{2}/(1+in)}.\] A direct calculation gives \(\widehat{\psi}_{n}(\xi)=e^{-\pi(1+in)|\xi|^{2}}\). Hence \(|\widehat{\psi}_{n}(\xi)|=e^{-\pi|\xi|^{2}}\in\mathscr{S}(\mathbb{R}^{d})\) and \[\|\,\psi_{n}\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}=1+\frac{\Gamma((s+d)/2)}{\Gamma(d/2)\pi^{s/2}},\] which is independent of \(n\). We shall prove in Appendix A.4 that when \(1\leq p<\infty\) and \(\alpha>0\), \[\|\,\psi_{n}\,\|_{B^{\alpha}_{p,1}(\mathbb{R}^{d})}\leq C(1+n^{2})^{-d(p-2)/(4p)}. \tag{2.16}\] Therefore \(\|\,\psi_{n}\,\|_{B^{\alpha}_{p,1}(\mathbb{R}^{d})}\to 0\) when \(p>2\) and \(n\to\infty\). On the other hand, we cannot expect that there exists some \(p<\infty\) such that \(\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B^{s+d/p}_{p,1}(\mathbb{R}^{d})\). Otherwise, we would have \[\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B^{s+d/p}_{p,1}(\mathbb{R}^{d})\hookrightarrow L^{p}(\mathbb{R}^{d})\] because of Lemma 2.8 and [14, § 2.5.7, Proposition]. This contradicts the fact that \(\mathscr{B}^{s}(\mathbb{R}^{d})\not\hookrightarrow L^{p}(\mathbb{R}^{d})\), which has been proved in Lemma 2.5.

As a consequence of Theorem 2.9 and Lemma 2.5, we establish the embedding between the spectral Barron space and the Sobolev spaces.

**Definition 2.10** (Fractional Sobolev space).: Let \(1\leq p<\infty\) and non-integer \(\alpha>0\); then the fractional Sobolev space is \[W_{p}^{\alpha}(\mathbb{R}^{d})\colon=\left\{\,f\in W_{p}^{\lfloor\alpha\rfloor}(\mathbb{R}^{d})\,\mid\,\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\frac{|\nabla^{\lfloor\alpha\rfloor}f(x)-\nabla^{\lfloor\alpha\rfloor}f(y)|^{p}}{|x-y|^{d+(\alpha-\lfloor\alpha\rfloor)p}}\mathrm{d}x\mathrm{d}y<\infty\,\right\}\] equipped with the norm \[\|\,f\,\|_{W_{p}^{\alpha}(\mathbb{R}^{d})}\colon=\|\,f\,\|_{W_{p}^{\lfloor\alpha\rfloor}(\mathbb{R}^{d})}+\left(\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\frac{|\nabla^{\lfloor\alpha\rfloor}f(x)-\nabla^{\lfloor\alpha\rfloor}f(y)|^{p}}{|x-y|^{d+(\alpha-\lfloor\alpha\rfloor)p}}\mathrm{d}x\mathrm{d}y\right)^{1/p}.\]

We first recall the relation between the Sobolev space and \(\mathscr{B}_{p}^{s}(\mathbb{R}^{d})\), which has been proved in [13, Theorem 4.3].

**Lemma 2.11** ([13, Theorem 4.3]).: 1. _If_ \(1\leq p\leq 2\) _and_ \(\alpha>s+d/p>0\)_, then_ \[W_{p}^{\alpha}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}_{p}^{s}(\mathbb{R}^{d}).\] 2. _If_ \(s>-d\) _is not an integer or_ \(s>-d\) _is an integer and_ \(d\geq 2\)_, then_ \[W_{1}^{s+d}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}_{1}^{s}(\mathbb{R}^{d}).\]

It follows from the above lemma and Lemma 2.5 that

**Corollary 2.12**.: 1. _If_ \(1\leq p\leq 2\) _and_ \(\alpha>s+d/p\)_, there holds_ (2.17) \[W_{p}^{\alpha}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow C^{s}(\mathbb{R}^{d}).\] 2. _If_ \(s\) _is not an integer or_ \(s\) _is an integer and_ \(d\geq 2\)_, then_ \[W_{1}^{s+d}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d}).\] The first embedding with \(p=2\) and \(s=1\) is implicit in [1, § II, Para. 7; § IX, 15].
Proof.: By Lemma 2.11 and Lemma 2.5, when \(\alpha>s+d/p\) and \(1\leq p\leq 2\), we have \[W_{p}^{\alpha}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}_{p}^{s}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d}).\] When \(s\) is not an integer or \(s\) is an integer and \(d\geq 2\), there holds \[W_{1}^{s+d}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}_{1}^{s}(\mathbb{R}^{d})\hookrightarrow\mathscr{B}^{s}(\mathbb{R}^{d}).\] It remains to prove the right-hand side of (2.17), which follows from \[\mathscr{B}^{s}(\mathbb{R}^{d})\hookrightarrow B_{\infty,1}^{s}(\mathbb{R}^{d})\hookrightarrow C^{s}(\mathbb{R}^{d})\] due to Theorem 2.9, Lemma 2.8 and [12, § 2.3.5, Eq. (1); § 2.5.7, Eq. (2), (9), (11)].

## 3. Application to deep neural network approximation

The embedding results proved in Theorem 2.9 and Corollary 2.12 indicate that \(s\) is a smoothness index. Consequently, we are interested in exploring the approximation rate when \(s\) is small, with \(\mathscr{B}^{s}\) as the target function space. To facilitate our analysis, we shall focus on the hypercube \(\Omega\colon=[0,1]^{d}\), and the spectral norm of a function \(f\) defined on \(\Omega\) is \[\upsilon_{f,s,\Omega}=\inf_{Ef|_{\Omega}=f}\int_{\mathbb{R}^{d}}\|\,\xi\,\|_{1}^{s}|\widehat{Ef}(\xi)|\mathrm{d}\xi,\] where the infimum is taken over all extensions \(Ef\) of \(f\) to \(\mathbb{R}^{d}\). To simplify notation, we write \(f\) for \(Ef\) in what follows. We replace \(|\xi|\) by \(\|\,\xi\,\|_{1}\) in the definition of \(\upsilon_{f,s,\Omega}\); the latter is more natural for studying the approximation over the hypercube, as suggested by [1, § V].

**Definition 3.1**.: A sigmoidal function is a bounded function \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) such that \[\lim_{t\to-\infty}\sigma(t)=0,\qquad\lim_{t\to+\infty}\sigma(t)=1.\] For example, the Heaviside function \(\chi_{[0,\infty)}\) is a sigmoidal function.

A classical idea for bounding the approximation error of neural networks with a sigmoidal activation function \(\sigma\) is to use the Heaviside function \(\chi_{[0,\infty)}\) as a transition. Caragea et al. [4] pointed out that the gap between a sigmoidal function \(\sigma\) and the Heaviside function \(\chi_{[0,\infty)}\) cannot be dismissed in \(L^{\infty}(\Omega)\), while this gap vanishes in \(L^{2}(\Omega)\), as the following lemma shows.

**Lemma 3.2**.: _For fixed \(\omega\in\mathbb{R}^{d}\backslash\{0\}\) and \(b\in\mathbb{R}\),_ \[\lim_{\tau\to\infty}\|\,\sigma(\tau(\omega\cdot x+b))-\chi_{[0,\infty)}(\omega\cdot x+b)\,\|_{L^{2}(\Omega)}=0.\]

Proof.: Note that \[\lim_{t\to\pm\infty}\lvert\sigma(t)-\chi_{[0,\infty)}(t)\rvert=0.\] We divide the cube \(\Omega\) into \(\Omega_{1}\colon=\{\,x\in\Omega\,\mid\,\lvert\tau(\omega\cdot x+b)\rvert<\delta\,\}\) and \(\Omega_{2}\colon=\Omega\setminus\Omega_{1}\). With a proper choice of \(\delta>0\) and \(\tau>0\) large enough, the \(L^{2}\)-distance between \(\sigma(\tau(\omega\cdot x+b))\) and \(\chi_{[0,\infty)}(\omega\cdot x+b)\) becomes arbitrarily small.
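Lemma 3.2 is easy to visualize numerically. The sketch below (a Monte-Carlo illustration assuming NumPy and SciPy; the logistic choice of \(\sigma\), the weights and the sample size are illustrative assumptions) estimates the \(L^{2}(\Omega)\)-gap for increasing \(\tau\).

```python
import numpy as np
from scipy.special import expit   # numerically stable logistic sigmoid

# Illustration of Lemma 3.2: the L^2(Omega)-distance between the scaled logistic
# sigmoid sigma(tau(w.x + b)) and the Heaviside step chi_{[0,inf)}(w.x + b)
# shrinks as tau grows.
rng = np.random.default_rng(1)
d = 3
w, b = np.array([1.0, -2.0, 0.5]), -0.2
x = rng.uniform(0.0, 1.0, size=(200_000, d))   # uniform samples of Omega = [0,1]^3
t = x @ w + b

for tau in (1, 10, 100, 1000):
    gap = expit(tau * t) - (t >= 0.0)
    print(f"tau = {tau:4d}:  ||sigma(tau .) - step||_L2 ~ {np.sqrt(np.mean(gap**2)):.4f}")
```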
For shallow neural networks, the following lemma in [1] is proved for real-valued functions; it is straightforward to extend the proof to complex-valued functions.

**Lemma 3.3** ([1, Theorem 1]).: _Let \(f\in\mathscr{B}^{1}(\mathbb{R}^{d})\), there exists_ \[f_{N}(x)=\sum_{i=1}^{N}c_{i}\sigma(\omega_{i}\cdot x+b_{i}) \tag{3.1}\] _with \(\omega_{i}\in\mathbb{R}^{d},b_{i}\in\mathbb{R}\) and \(c_{i}\in\mathbb{C}\) such that_ \[\|\,f-f_{N}\,\|_{L^{2}(\Omega)}\leq\frac{2\upsilon_{f,1,\Omega}}{\sqrt{N}}.\]

In this part, we shall establish the approximation error for deep neural networks. We use the term \((L,N)\)-network to describe a neural network with \(L\) hidden layers and at most \(N\) units per layer. Here \(L\) denotes the number of hidden layers; e.g., the shallow neural network expressed in (3.1) is a \((1,N)\)-network.

**Definition 3.4** (\((L,N)\)-network).: An \((L,N)\)-network represents a neural network with \(L\) hidden layers and at most \(N\) units per layer. The activation functions of the first \(L-1\) layers are all ReLU and the activation function of the last layer is the sigmoidal function. The connection weights between the input layer and the hidden layer, and between the hidden layer and the hidden layer, are all real numbers. The connection weights between the last hidden layer and the output layer are complex numbers.

Here we make some preparations for the rest of this section. The analysis in this part owes much to [1], with certain improvements that will be detailed later on. For any function \(g\) defined on \([0,1]\) that is symmetric about \(t=1/2\), we use the notation \(g_{,n}\) to denote the \(n\)-fold periodic repetition of \(g\) on \([0,1]\); i.e., \[g_{,n}(t)=g(nt-j),\quad j=0,\ldots,n-1,\quad 0\leq nt-j\leq 1. \tag{3.2}\] Define \[\beta(t)=\operatorname{ReLU}(2t)-2\operatorname{ReLU}(2t-1)+\operatorname{ReLU}(2t-2)=\begin{cases}2t,&0\leq t\leq 1/2,\\ 2-2t,&1/2\leq t\leq 1,\\ 0,&\text{otherwise}.\end{cases}\] By definition (3.2), \(\beta_{,n}\) represents a triangle function with \(n\) peaks and can be represented by \(3n\) ReLUs: \[\beta_{,n}(t)=\sum_{j=0}^{n-1}\beta(nt-j),\quad 0\leq t\leq 1.\]

**Lemma 3.5**.: _Let \(g\) be a function defined on \([0,1]\) and symmetric about \(t=1/2\), then \(g_{,n_{2}}\circ\beta_{,n_{1}}=g_{,2n_{1}n_{2}}\) on \([0,1]\)._

The above lemma is a rigorous statement of [14, Proposition 5.1]. A key example is \(\cos(2\pi n_{2}\beta_{,n_{1}}(t))=\cos(4\pi n_{1}n_{2}t)\) for \(t\in[0,1]\), illustrated by the numerical sketch below. A geometrical explanation may be found in [1, Figure 3]. We postpone the rigorous proof to Appendix A.5. For \(r\in(0,1)\), we define \[\alpha(t,r)=\chi_{[0,\infty)}(t-r/2)-\chi_{[0,\infty)}(t-(1-r)/2)=\begin{cases}\chi_{[r/2,(1-r)/2]}(t),&0<r\leq 1/2,\\ -\chi_{[(1-r)/2,r/2]}(t),&1/2\leq r<1,\end{cases}\] so that \(\operatorname{supp}(\alpha(\cdot,r))\subset(0,1/2)\) and \(\alpha(t,r)\) is symmetric about \(t=1/4\).
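The following sketch (assuming NumPy; the pairs \((n_{1},n_{2})\) are arbitrary) builds \(\beta_{,n}\) literally from \(3n\) ReLUs and confirms the key example of Lemma 3.5 before we turn to the step-function analogue \(\gamma\).

```python
import numpy as np

# The sawtooth block beta_{,n} of (3.2) built from 3n ReLUs, and a check of
# Lemma 3.5 in the key case g = cos(2 pi .):
#   cos(2 pi n2 * beta_{,n1}(t)) = cos(4 pi n1 n2 t) on [0,1].
relu = lambda z: np.maximum(z, 0.0)

def beta_n(t, n):
    # beta(nt - j) = ReLU(2(nt-j)) - 2 ReLU(2(nt-j)-1) + ReLU(2(nt-j)-2)
    j = np.arange(n)[:, None]
    u = n * t[None, :] - j
    return (relu(2 * u) - 2 * relu(2 * u - 1) + relu(2 * u - 2)).sum(axis=0)

t = np.linspace(0.0, 1.0, 10_001)
for n1, n2 in [(2, 3), (4, 5)]:
    lhs = np.cos(2 * np.pi * n2 * beta_n(t, n1))
    rhs = np.cos(4 * np.pi * n1 * n2 * t)
    assert np.max(np.abs(lhs - rhs)) < 1e-9
print("cos(2 pi n2 beta_{,n1}) == cos(4 pi n1 n2 t) on [0,1], as in Lemma 3.5.")
```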
Define \[\gamma(t,r)=\alpha(t+1/4,r)-\alpha(t-1/4,r)+\alpha(t-3/4,r).\] Then \(\gamma(t,r)\) is symmetric about \(t=1/2\) because \[\gamma(1-t,r)=\alpha(5/4-t,r)-\alpha(3/4-t,r)+\alpha(1/4-t,r)=\alpha(t-3/4,r)-\alpha(t-1/4,r)+\alpha(t+1/4,r)=\gamma(t,r).\] By definition (3.2), \(\gamma_{,n}(\cdot,r)\) is well defined on \([0,1]\) and \[\gamma_{,n}(t,r)=\begin{cases}\alpha(nt-j+1/4,r),&0\leq nt-j\leq 1/4,\\ -\alpha(nt-j-1/4,r),&1/4\leq nt-j\leq 3/4,\end{cases}\qquad j=0,\dots,n-1\] \[=\begin{cases}\alpha(nt-j+1/4,r),&-1/4\leq nt-j\leq 1/4,\\ -\alpha(nt-j-1/4,r),&1/4\leq nt-j\leq 3/4,\end{cases}\qquad j=0,\dots,n.\] Moreover, \(\gamma_{,n}(\cdot,r)\) on \([0,1]\) can be represented by \(4n\) Heaviside functions \(\chi_{[0,\infty)}\), since \(\alpha(nt+1/4,r)\) and \(\alpha(nt-n+1/4,r)\) each require only one Heaviside function: \[\gamma_{,n}(t,r)=\sum_{j=0}^{n}\alpha(nt-j+1/4,r)-\sum_{j=0}^{n-1}\alpha(nt-j-1/4,r).\] A direct consequence of the above construction is

**Lemma 3.6**.: _For \(t\in[0,1]\), there holds_ \[\frac{\pi}{2}\int_{0}^{1}\cos(\pi r)\gamma_{,n}(t,r)\mathrm{d}r=\cos(2\pi nt). \tag{3.3}\]

Proof.: For any \(t\in[0,1/2]\), a direct calculation gives \[\frac{\pi}{2}\int_{0}^{1}\cos(\pi r)\alpha(t,r)\mathrm{d}r=\pi\int_{0}^{2t}\cos(\pi r)\mathrm{d}r=\sin(2\pi t).\] Fix \(t\in[0,1]\). If there exists an integer \(j\) satisfying \(0\leq j\leq n\) and \(-1/4\leq nt-j\leq 1/4\), then \(\gamma_{,n}(t,r)=\alpha(nt-j+1/4,r)\) and \[\frac{\pi}{2}\int_{0}^{1}\cos(\pi r)\alpha(nt-j+1/4,r)\mathrm{d}r=\sin(2\pi(nt-j+1/4))=\cos(2\pi nt).\] Otherwise there exists an integer \(j\) satisfying \(0\leq j\leq n-1\) and \(1/4\leq nt-j\leq 3/4\). Then \(\gamma_{,n}(t,r)=-\alpha(nt-j-1/4,r)\) and \[-\frac{\pi}{2}\int_{0}^{1}\cos(\pi r)\alpha(nt-j-1/4,r)\mathrm{d}r=-\sin(2\pi(nt-j-1/4))=\cos(2\pi nt).\] This completes the proof of (3.3).

Now we are ready to give an approximation result for deep neural networks, which follows the framework of [1], while we achieve a higher-order convergence rate and a dimension-free prefactor.

**Lemma 3.7**.: _Let \(L\) be a positive integer and let \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\) with \(0<sL\leq 1/2\) and \(\operatorname{supp}\widehat{f}\subset\big{\{}\,\xi\in\mathbb{R}^{d}\,\mid\,\|\,\xi\,\|_{1}\geq 1\,\big{\}}\). For any positive integer \(N\) there exists an \((L,N)\)-network \(f_{N}\) such that_ \[\|\,f-f_{N}\,\|_{L^{2}(\Omega)}\leq\frac{22\upsilon_{f,s,\Omega}}{N^{sL}}. \tag{3.4}\]

Proof.: By Lemma 2.1, for \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\), assuming first that \(f\) is real-valued, we have \[f(x)=\int_{\mathbb{R}^{d}}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi=\int_{\mathbb{R}^{d}}|\widehat{f}(\xi)|\cos(2\pi(\xi\cdot x+\theta(\xi)))\mathrm{d}\xi,\] with a proper choice of \(\theta(\xi)\) such that \(0\leq\xi\cdot x+\theta(\xi)\leq\|\,\xi\,\|_{1}+1\).
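The identity (3.3), which drives the construction below, can also be checked numerically. The following sketch (an illustration assuming NumPy; sample points and quadrature resolution are arbitrary choices) periodizes \(\gamma\) and verifies (3.3) by midpoint quadrature.

```python
import numpy as np

def heav(z):                     # Heaviside chi_{[0, inf)}
    return (z >= 0.0).astype(float)

def alpha(t, r):                 # alpha(t, r) of the construction above
    return heav(t - r / 2) - heav(t - (1 - r) / 2)

def gamma_n(t, r, n):            # gamma_{,n}(t, r): periodize gamma over [0,1]
    u = n * t - np.floor(n * t)  # fractional part, so gamma is evaluated on [0,1]
    return alpha(u + 0.25, r) - alpha(u - 0.25, r) + alpha(u - 0.75, r)

# Midpoint quadrature of (pi/2) * int_0^1 cos(pi r) gamma_{,n}(t, r) dr.
r = (np.arange(20_000) + 0.5) / 20_000
for n in (1, 3, 8):
    for t in (0.0, 0.13, 0.5, 0.77):
        lhs = (np.pi / 2) * np.mean(np.cos(np.pi * r) * gamma_n(t, r, n))
        assert abs(lhs - np.cos(2 * np.pi * n * t)) < 1e-3
print("Identity (3.3) verified by quadrature at sample points.")
```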
For fixed \(\xi\), choose \(n_{\xi}=2^{L-1}\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil^{L}\) and \(t_{\xi}(x)=(\xi\cdot x+\theta(\xi))/n_{\xi}\); then \(0\leq t_{\xi}(x)\leq 1\) and, by Lemma 3.6, \[f(x)=\int_{\mathbb{R}^{d}}|\widehat{f}(\xi)|\cos(2\pi n_{\xi}t_{\xi}(x))\mathrm{d}\xi=\frac{\pi}{2}\int_{\mathbb{R}^{d}}|\widehat{f}(\xi)|\mathrm{d}\xi\int_{0}^{1}\cos(\pi r)\gamma_{,n_{\xi}}(t_{\xi}(x),r)\mathrm{d}r.\] Define the probability measure \[\mu(\mathrm{d}\xi,\mathrm{d}r)=\frac{1}{Q}\|\,\xi\,\|_{1}^{-s}|\widehat{f}(\xi)|\chi_{(0,1)}(r)\mathrm{d}\xi\mathrm{d}r, \tag{3.5}\] where \(Q\) is the normalizing factor \[Q=\int_{\mathbb{R}^{d}}\|\,\xi\,\|_{1}^{-s}|\widehat{f}(\xi)|\mathrm{d}\xi\int_{0}^{1}\mathrm{d}r\leq\upsilon_{f,s,\Omega}.\] Therefore \(f(x)=\mathbb{E}_{(\xi,r)\sim\mu}F(x,\xi,r)\) with \[F(x,\xi,r)=\frac{\pi Q}{2}\|\,\xi\,\|_{1}^{s}\cos(\pi r)\gamma_{,n_{\xi}}(t_{\xi}(x),r).\] If \(\{\xi_{i},r_{i}\}_{i=1}^{m}\) is an i.i.d. sequence of random samples from \(\mu\), and \[\tilde{f}=\frac{1}{m}\sum_{i=1}^{m}F(x,\xi_{i},r_{i}),\] then using Fubini's theorem, we obtain \[\mathbb{E}_{(\xi_{i},r_{i})\sim\mu}\|\,f-\tilde{f}\,\|_{L^{2}(\Omega)}^{2}=\int_{\Omega}\mathbb{E}_{(\xi_{i},r_{i})\sim\mu}|\mathbb{E}_{(\xi,r)\sim\mu}F(x,\xi,r)-\tilde{f}(x)|^{2}\mathrm{d}x=\frac{1}{m}\int_{\Omega}\mathrm{Var}_{(\xi,r)\sim\mu}F(x,\xi,r)\mathrm{d}x\leq\frac{1}{m}\mathbb{E}_{(\xi,r)\sim\mu}\|\,F(\cdot,\xi,r)\,\|_{L^{\infty}(\Omega)}^{2}.\] Noting that \[\|\,F(\cdot,\xi,r)\,\|_{L^{\infty}(\Omega)}\leq\frac{\pi Q}{2}\|\,\xi\,\|_{1}^{s},\] we obtain \[\mathbb{E}_{(\xi_{i},r_{i})\sim\mu}\|\,f-\tilde{f}\,\|_{L^{2}(\Omega)}^{2}\leq\frac{1}{m}\mathbb{E}_{(\xi,r)\sim\mu}\|\,F(\cdot,\xi,r)\,\|_{L^{\infty}(\Omega)}^{2}\leq\frac{\pi^{2}Q\upsilon_{f,s,\Omega}}{4m}.\] By Markov's inequality, with probability at least \((1+\varepsilon)/(2+\varepsilon)\), for some \(\varepsilon>0\) to be chosen later on, we obtain \[\|\,f-\tilde{f}\,\|_{L^{2}(\Omega)}^{2}\leq\frac{(2+\varepsilon)\pi^{2}Q\upsilon_{f,s,\Omega}}{4m}. \tag{3.6}\] It remains to count the number of units in each layer. For each \(\gamma_{,n_{\xi}}(t_{\xi}(x),r)\), choose \(n_{1}=\cdots=n_{L}=\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil\); then \(n_{\xi}=2^{L-1}n_{1}\ldots n_{L}\), and by Lemma 3.5, \(\gamma_{,n_{\xi}}(\cdot,r)=\gamma_{,n_{L}}(\cdot,r)\circ\beta_{,n_{L-1}}\circ\cdots\circ\beta_{,n_{1}}\) on \([0,1]\). Lemma 3.2 shows that the Heaviside function \(\chi_{[0,\infty)}\) can be approximated by \(\sigma\), and we need at most \[\max\{3n_{1},\ldots,3n_{L-1},4n_{L}\}\leq 4\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil\leq 12\|\,\xi\,\|_{1}^{1/L}\] units in each layer to represent \(\gamma_{,n_{\xi}}(t_{\xi}(x),r)\). Denote by \(N\) the total number of units in each layer; then \(N\leq 12\sum_{i=1}^{m}\|\,\xi_{i}\,\|_{1}^{1/L}\) and, since \(2sL\leq 1\), \[\mathbb{E}_{(\xi_{i},r_{i})\sim\mu}N^{2sL}\leq 12\sum_{i=1}^{m}\mathbb{E}_{(\xi_{i},r_{i})\sim\mu}\|\,\xi_{i}\,\|_{1}^{2s}\leq\frac{12m\upsilon_{f,s,\Omega}}{Q}.\] Again, by Markov's inequality, with probability at least \((1+\varepsilon)/(2+\varepsilon)\), we obtain \[\frac{Q}{m}\leq\frac{12(2+\varepsilon)\upsilon_{f,s,\Omega}}{N^{2sL}}. \tag{3.7}\] Combining (3.6) and (3.7), with probability at least \(\varepsilon/(2+\varepsilon)\), there exists an \((L,N)\)-network \(f_{N}\) such that \[\|\,f-f_{N}\,\|_{L^{2}(\Omega)}\leq\frac{\sqrt{3}(2+\varepsilon)\pi\upsilon_{f,s,\Omega}}{N^{sL}}\leq\frac{11\upsilon_{f,s,\Omega}}{N^{sL}},\] where the last step holds with a proper choice of \(\varepsilon\), since \(\sqrt{3}\pi(2+\varepsilon)\leq 11\) for \(\varepsilon\leq 1/50\).
Finally, if \(f\) is complex-valued, we approximate the real and imaginary parts of the function separately to obtain (3.4).

_Remark 3.8_.: We assume \(\operatorname{supp}\widehat{f}\subset\big{\{}\,\xi\in\mathbb{R}^{d}\,\mid\,\|\,\xi\,\|_{1}\geq 1\,\big{\}}\) in Lemma 3.7 because we want to obtain an upper bound depending only on \(\upsilon_{f,s,\Omega}\). If we give up this condition, then the upper bound in (3.4) changes to \(C\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}/N^{sL}\) for some dimension-free constant \(C\). The proof is essentially the same provided that the probability measure (3.5) is replaced by \[\mu(\mathrm{d}\xi,\mathrm{d}r)=\frac{1}{Q}(1+\|\,\xi\,\|_{1})^{-s}|\widehat{f}(\xi)|\chi_{(0,1)}(r)\mathrm{d}\xi\mathrm{d}r.\] We leave the details to the interested reader.

There is relatively little work on the approximation rate of deep neural networks that employs the spectral Barron space as the target space. For deep ReLU networks, [1] has proven approximation results of \((sL/2)\)-order. We shall show below that this may be improved to \(sL\)-order, at the cost of \(\upsilon_{f,s,\Omega}\) appearing in the estimate.

**Theorem 3.9**.: _Let \(L\) be a positive integer and let \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\) with \(0<sL\leq 1/2\). For any positive integer \(N\) there exists an \((L,N+2)\)-network \(f_{N}\) such that_ \[\|\,f-f_{N}\,\|_{L^{2}(\Omega)}\leq\frac{29\upsilon_{f,s,\Omega}}{N^{sL}}. \tag{3.8}\] _Moreover, if \(f\) is a real-valued function, then the connection weights in \(f_{N}\) are all real._

Proof.: We may write \(f=f_{1}+f_{2}\) with \[f_{1}(x)=\int_{\|\,\xi\,\|_{1}<1}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi,\qquad f_{2}(x)=\int_{\|\,\xi\,\|_{1}\geq 1}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi.\] Then \(\upsilon_{f_{1},1,\Omega}\leq\upsilon_{f,s,\Omega}\) and \(\upsilon_{f_{2},s,\Omega}\leq\upsilon_{f,s,\Omega}\) because \[\widehat{f}_{1}(\xi)=\widehat{f}(\xi)\chi_{[0,1)}(\|\,\xi\,\|_{1})\qquad\text{and}\qquad\widehat{f}_{2}(\xi)=\widehat{f}(\xi)\chi_{[1,\infty)}(\|\,\xi\,\|_{1}).\] We approximate \(f_{1}\) with a \((1,n_{1})\)-network with \(n_{1}=\lceil N/6\rceil\). Applying Lemma 3.3, we obtain a \((1,n_{1})\)-network \(f_{1,n_{1}}\) such that \[\|\,f_{1}-f_{1,n_{1}}\,\|_{L^{2}(\Omega)}\leq\frac{2\upsilon_{f_{1},1,\Omega}}{n_{1}^{1/2}}\leq\frac{2\sqrt{6}\upsilon_{f,s,\Omega}}{N^{sL}}.\] Note that a \((1,n_{1})\)-network can be represented by an \((L,n_{1})\)-network: we just need to fill the remaining hidden layers with the identity map \[t=\begin{cases}\mathrm{ReLU}(t),&t\geq 0,\\ -\mathrm{ReLU}(-t),&t<0.\end{cases}\] Meanwhile, we approximate \(f_{2}\) with an \((L,n_{2})\)-network with \(n_{2}=\lceil 5N/6\rceil\). Applying Lemma 3.7, we obtain an \((L,n_{2})\)-network \(f_{2,n_{2}}\) such that \[\|\,f_{2}-f_{2,n_{2}}\,\|_{L^{2}(\Omega)}\leq\frac{22\upsilon_{f_{2},s,\Omega}}{n_{2}^{sL}}\leq\frac{22\sqrt{6/5}\,\upsilon_{f,s,\Omega}}{N^{sL}}.\] These estimates together with the triangle inequality give (3.8), and the total number of units in each layer is \[n_{1}+n_{2}=\lceil N/6\rceil+\lceil 5N/6\rceil\leq N+2.\] If \(f\) is a real-valued function, then we let \(f_{N}=\mathrm{Re}(f_{1,n_{1}}+f_{2,n_{2}})\), and the upper bound (3.8) still holds.

As far as we know, the above theorem is the best available in the literature so far.
For shallow neural networks (\(L=1\)), the authors in [14] have proven the \(1/2\)-convergence rate with \(\mathscr{B}_{p}^{1/2}(\mathbb{R}^{d})\) as the target function space, which is smaller than \(\mathscr{B}^{1/2}(\mathbb{R}^{d})\), and their estimate depends on the dimension as \(d^{1/4}\). The upper bound in [11] depends on \(\upsilon_{f,0}+\upsilon_{f,1/2}\), while (2.5) exemplifies that \(\upsilon_{f,0}\) may be much larger than \(\upsilon_{f,s}\) for some functions in \(\mathscr{B}^{s}(\mathbb{R}^{d})\), and the estimate depends upon the dimension exponentially. In contrast to these two results, the upper bound in Theorem 3.9 depends only on \(\upsilon_{f,s,\Omega}\) and is independent of the dimension. For deep neural networks, a similar result for ReLU has been proven in [1] with \((sL/2)\)-order, which is not optimal compared with our estimate. At first glance, our result may seem to contradict [1, Theorem 2]. This is not the case because the upper bound therein is \(\sqrt{\upsilon_{f,0}\upsilon_{f,s}}+\upsilon_{f,0}\), which requires \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\), but is usually smaller than \(\|\,f\,\|_{\mathscr{B}^{s}(\mathbb{R}^{d})}\) for oscillatory functions; cf. Lemma A.1.

_Remark 3.10_.: The activation function of the last hidden layer of the \((L,N)\)-network in Theorem 3.9 may be replaced by many other familiar activation functions such as the Hyperbolic tangent, SoftPlus, ELU, Leaky ReLU, ReLU\({}^{k}\) and so on, because all these activation functions can be reduced to sigmoidal functions by a suitable shifting and scaling argument; e.g., for SoftPlus, we observe that SoftPlus\((t)-\)SoftPlus\((t-1)\) is a sigmoidal function. Unfortunately, it is not easy to replace the ReLU activations of the first \(L-1\) hidden layers with other activation functions.

In what follows, we shall show that Theorem 3.9 is sharp if the activation function of the last hidden layer is the Heaviside function. The example is adapted from [1]. We state it briefly to ensure the completeness of our work and postpone the proof to Appendix A.6.

**Theorem 3.11**.: _For any fixed positive integers \(L,N\) and real numbers \(\varepsilon,s\) with \(0<\varepsilon,sL\leq 1/2\), there exists \(f\in\mathscr{B}^{s}(\mathbb{R}^{d})\) satisfying \(\upsilon_{f,s,\Omega}\leq 1+\varepsilon\) such that for any \((L,N)\)-network \(f_{N}\) whose activation function \(\sigma\) in the last layer is the Heaviside function \(\chi_{[0,\infty)}\), there holds_ \[\|\,f-f_{N}\,\|_{L^{2}(\Omega)}\geq\frac{1-\varepsilon}{8N^{sL}}. \tag{3.9}\]

## 4. Conclusion

We have discussed the analytical properties of the spectral Barron space. Sharp embeddings between the spectral Barron spaces and various classical function spaces have been established, and an approximation rate has been proved for deep ReLU neural networks when the spectral Barron space with a small smoothness index is employed as the target function space. Several problems remain open, such as the sup-norm error, higher-order convergence results for larger \(s\), and the relations among Barron-type spaces, the variation space and the Radon bounded-variation space, as well as understanding how these spaces are related to the classical function spaces; these will be pursued in subsequent works.
2304.09750
Application of Tensor Neural Networks to Pricing Bermudan Swaptions
The Cheyette model is a quasi-Gaussian volatility interest rate model widely used to price interest rate derivatives such as European and Bermudan Swaptions for which Monte Carlo simulation has become the industry standard. In low dimensions, these approaches provide accurate and robust prices for European Swaptions but, even in this computationally simple setting, they are known to underestimate the value of Bermudan Swaptions when using the state variables as regressors. This is mainly due to the use of a finite number of predetermined basis functions in the regression. Moreover, in high-dimensional settings, these approaches succumb to the Curse of Dimensionality. To address these issues, Deep-learning techniques have been used to solve the backward Stochastic Differential Equation associated with the value process for European and Bermudan Swaptions; however, these methods are constrained by training time and memory. To overcome these limitations, we propose leveraging Tensor Neural Networks as they can provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks. In this paper we rigorously benchmark the performance of Tensor Neural Networks and Dense Neural Networks for pricing European and Bermudan Swaptions, and we show that Tensor Neural Networks can be trained faster than Dense Neural Networks and provide more accurate and robust prices than their Dense counterparts.
Raj G. Patel, Tomas Dominguez, Mohammad Dib, Samuel Palmer, Andrea Cadarso, Fernando De Lope Contreras, Abdelkader Ratnani, Francisco Gomez Casanova, Senaida Hernández-Santana, Álvaro Díaz-Fernández, Eva Andrés, Jorge Luis-Hita, Escolástico Sánchez-Martínez, Samuel Mugel, Roman Orus
2023-04-18T07:41:04Z
http://arxiv.org/abs/2304.09750v2
# Application of Tensor Neural Networks to Pricing Bermudan Swaptions

###### Abstract

The Cheyette model is a quasi-Gaussian volatility interest rate model widely used to price interest rate derivatives such as European and Bermudan Swaptions for which Monte Carlo simulation has become the industry standard. In low dimensions, these approaches provide accurate and robust prices for European Swaptions but, even in this computationally simple setting, they are known to underestimate the value of Bermudan Swaptions when using the state variables as regressors. This is mainly due to the use of a finite number of pre-determined basis functions in the regression. Moreover, in high-dimensional settings, these approaches succumb to the Curse of Dimensionality. To address these issues, Deep-learning techniques have been used to solve the backward Stochastic Differential Equation associated with the value process for European and Bermudan Swaptions; however, these methods are constrained by training time and memory. To overcome these limitations, we propose leveraging Tensor Neural Networks as they can provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks. In this paper we rigorously benchmark the performance of Tensor Neural Networks and Dense Neural Networks for pricing European and Bermudan Swaptions, and we show that Tensor Neural Networks can be trained faster than Dense Neural Networks and provide more accurate and robust prices than their Dense counterparts.

## 1 Introduction

Partial Differential Equations (PDE) are a crucial tool for modeling a variety of problems in Quantitative Finance. Many of these problems, such as the option pricing problem, can be recast as solving a parabolic PDE. Classical methods for solving such PDE involve mesh-based or Monte-Carlo approaches. Unfortunately, these techniques fail to scale to high-dimensional settings due to their dependence on spatio-temporal grids and an exorbitant number of sample paths. Recent advances in the field of Deep Learning [1] have made it possible to address these issues by approximating the unknown solution to a PDE using a Dense Neural Network (DNN) [2, 3, 4, 5, 6, 7]. This breakthrough has had deep implications for the field of Quantitative Finance, where it has made it possible to consider models with a previously computationally intractable number of assets or agents without making any provisional assumptions on their correlation or interaction structure. The flexibility and omnipresence of PDE make these advances equally relevant for the domains of high-dimensional Stochastic Optimal Control and Forward-Backward Stochastic Differential Equations, both of which can be recast in the language of PDE [8]. In particular, these Deep Learning techniques can be leveraged to address problems such as option pricing, optimal liquidation, portfolio optimization and systemic risk in quantitative finance [9, 10], but they can also be extended to industries as diverse as energy management or supply chain and inventory control.

This paper explores the problem of pricing European and Bermudan Swaptions [11, 12, 13] in the Cheyette model [14], a Markovian quasi-Gaussian volatility interest rate model. The importance of this problem stems from the considerable trading volume of interest rate derivatives, including European and Bermudan Swaptions, and the widespread use of the Cheyette model in industry. Classically, these interest rate derivatives are priced by Monte-Carlo methods.
In the case of Bermudan Swaptions, these Monte-Carlo approaches rely on the regression-based Longstaff-Schwartz method [15], which becomes computationally intractable in high dimensions when taking the factors as the state variables in the regression. In addition to suffering from the Curse of Dimensionality, this regression-based method is widely known to underestimate the true price of the option [16]. There have been industrial attempts to leverage Principal Component Analysis to first extract relevant factors, and then solve their associated PDE; however, with five or more factors, the PDE cannot be solved by traditional approaches such as finite difference or finite element methods. To circumvent these problems, in this paper, the problem of pricing European and Bermudan Swaptions is recast as the problem of solving a system of forward-backward Stochastic Differential Equations (SDE) [8]. Efficient methods for approximating the solutions to forward-backward SDE using Dense Neural Networks have recently been proposed [17, 18, 19]; however, in spite of their apparent success, Dense Neural Network approaches are computationally expensive and limited by memory [20, 21, 22]. Furthermore, there have been recent attempts to address the problem of solving PDE using the advances in Quantum Computing [23, 24, 25]. While promising, given the current limitations of quantum algorithms such as the limited number of qubits and high error rates, these approaches require important hardware advances before becoming a viable tool to solve high-dimensional PDE.

In this paper we take a quantum-inspired approach by combining Tensor Networks with Dense Neural Networks to obtain Tensor Neural Networks (TNN). Originally developed in the field of physics to describe strongly-correlated structures, Tensor Networks [26, 27, 28] have gained popularity in Machine Learning due to their efficiency in describing high-dimensional vectors and operators. Tensor Networks have been successfully applied to various Machine Learning tasks [29, 30] such as classification [31, 32, 33, 34, 35, 36, 37], generative modeling [38, 39, 40], sequence modeling [41], quantum chemistry [42], dimensionality reduction [43] and subspace learning [44], to mention just a few. Inspired by recent works, we propose this transformation from a Dense Neural Network into a Tensor Neural Network for the case of Swaptions, resulting in improved training performance compared with the classical approaches and reduced memory consumption. To back this claim, a detailed comparison of Tensor Neural Networks against the best-performing Dense Neural Network with the same number of parameters is performed. This is supported by contrasting a given Tensor Neural Network with the Dense Neural Network with the same number of neurons, and therefore considerably more parameters [7, 29]. It should be noted that the strategy proposed to price European Swaptions differs from that put forth to price Bermudan Swaptions. In the European setting, a forward methodology leveraging a single Neural Network to solve the SDE associated with the Swaption value is used. On the other hand, in the Bermudan setting, a stacked sequence of Neural Networks, one for each exercise time, is exploited. This requires a careful handling of the early-exercise nature of the option, which is done by starting at the terminal time and recursively propagating the terminal condition backwards in time while simultaneously learning the optimal decision boundary.
Despite these deep and fundamental strategic differences, both of these problems benefit from the computational speed-up associated with Tensor Neural Networks.

This paper is organized as follows. In Section 2, the concept of European and Bermudan Swaptions is briefly reviewed. Following this, the Cheyette Model is defined, and then the pricing of European and Bermudan Swaptions in the Cheyette Model is discussed. Section 3 focuses on the Neural Network setup required to price European and Bermudan Swaptions in the Cheyette model. The notion of tensorizing a Neural Network is then briefly reviewed in Section 4. The results of numerical experiments using the methods introduced in this paper to price European and Bermudan Swaptions are presented and discussed in Section 5, where it is shown that Tensor Neural Networks outperform Dense Neural Networks. Finally, in Section 6, conclusions and further areas of investigation are put forth.

## 2 Problem Formulation and Mathematical Framework

In this section we define the two financial instruments, the _European Swaption_ and the _Bermudan Swaption_, whose no-arbitrage price we determine in this paper. We also introduce the stochastic interest rate model that will drive the evolution of these financial derivatives, and discuss how to price these instruments under said model.

### European and Bermudan Swaptions

A _European Swaption_ is a financial contract that gives the holder the right, but not the obligation, to enter the _payer leg_ of an _interest rate swap_ with a pre-determined _tenor structure_ \(0<T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\geq 0\) at the future time \(T_{0}\). A _Bermudan Swaption_ is a financial contract that gives the holder the right, but not the obligation, to enter the _payer leg_ of an _interest rate swap_ with a pre-determined _tenor structure_ \(0<T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\geq 0\) at any future tenor date \(T_{m}\) with \(0\leq m\leq n\). An analogous discussion to what follows can be developed in the case that the holder of the Swaption enters the _receiver leg_ of the interest rate swap, but, for brevity, we focus exclusively on payer Swaptions. An _interest rate swap_ with tenor structure \(0<T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\geq 0\) is a financial contract in which one party, the _payer_, pays another party, the _receiver_, the fixed cash flow \(K\) in exchange for a floating cash flow \(\ell_{m}\) at each future time \(T_{m}\) in the tenor structure. The floating rate \(\ell_{m}\) between times \(T_{m-1}\) and \(T_{m}\) can be expressed in terms of the overnight risk-free rate at \(T_{m}\) and the value \(P(T_{m-1},T_{m})\) of a _zero-coupon bond_ with maturity \(T_{m}\) at time \(T_{m-1}\). Recall that a _zero-coupon bond_ with maturity \(T>0\) is a contract which guarantees the holder one dollar to be paid at time \(T>0\), and its price is the stochastic process denoted by \((P(t,T))_{t\in[0,T]}\). Zero-coupon bonds are the building blocks of any interest rate theory, and their stochastic evolution will be the driver for the value of the European and Bermudan Swaptions that we price.

### The Cheyette model

Throughout this paper we will place ourselves in the context of the _Cheyette model_ [14]. The Cheyette model is a special case of the _Heath-Jarrow-Morton_ (HJM) stochastic interest rate model, so, instead of directly modeling the evolution of zero-coupon bonds, we model the evolution of the forward curve \[f(t,T):=-\frac{\partial}{\partial T}\log P(t,T).
More specifically, we assume that the _initial forward curve_ \(T\mapsto f(0,T)\) is known and evolves according to the real-world dynamics \[\mathrm{d}f(t,T)=\mu(t,T)\,\mathrm{d}t+\sigma(t,T)\cdot\,\mathrm{d}W(t) \tag{2}\] for some drift \(\mu:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}\), some volatility \(\sigma:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}^{d}\) and some \(d\)-dimensional Brownian motion \(W\). Notice that the evolution of zero-coupon bonds may be recovered from the evolution of the forward curve, \[P(t,T)=\exp\bigg{(}-\int_{t}^{T}f(t,s)\,\mathrm{d}s\bigg{)}. \tag{3}\] Combining Itô's lemma with the Girsanov theorem, under appropriate regularity assumptions, it is possible to find a unique risk-neutral measure \(\mathbb{Q}\) under which all discounted zero-coupon bonds are martingales. Here the discounting is done relative to the _short rate_ \[r(t):=f(t,t). \tag{4}\] Under this risk-neutral measure, the initial forward curve evolves according to the stochastic dynamics \[\mathrm{d}f(t,T)=\sigma(t,T)\cdot\sigma^{*}(t,T)\,\mathrm{d}t+\sigma(t,T)\cdot\,\mathrm{d}W^{\mathbb{Q}}(t) \tag{5}\] for some \(d\)-dimensional \(\mathbb{Q}\)-Brownian motion \(W^{\mathbb{Q}}\) and the functions \[\mu^{*}(t,T):=\int_{t}^{T}\mu(t,s)\,\mathrm{d}s\quad\text{and}\quad\sigma^{*}(t,T):=\int_{t}^{T}\sigma(t,s)\,\mathrm{d}s. \tag{6}\] Since we are concerned with the _pricing_ of financial derivatives, by the HJM model we will mean the stochastic evolution (5) of the forward curve under the risk-neutral measure \(\mathbb{Q}\). One of the main difficulties in analyzing the HJM model is the path dependence of the process \(\sigma^{*}\) in (6), which makes the forward curve non-Markovian. A classical way to overcome this difficulty is to impose that the volatility process \(\sigma=(\sigma_{i})_{i\leq d}\) be separable, \[\sigma_{i}(t,T):=h_{i}(t,X_{i}(t))g_{i}(T) \tag{7}\] for some deterministic functions \(h=(h_{i})_{i\leq d}\) and \(g=(g_{i})_{i\leq d}\) and some adapted process \(X=(X_{i})_{i\leq d}\). A direct calculation reveals that under this assumption the forward curve is Markovian, \[f(t,T)=f(0,T)+\sum_{i\leq d}\frac{g_{i}(T)}{g_{i}(t)}\bigg{(}X_{i}(t)+Y_{i}(t)\int_{t}^{T}\frac{g_{i}(v)}{g_{i}(t)}\,\mathrm{d}v\bigg{)} \tag{8}\] for the stochastic processes \(X=(X_{i})_{i\leq d}\) and \(Y=(Y_{i})_{i\leq d}\) defined by \[X_{i}(t):=g_{i}(t)\int_{0}^{t}h_{i}^{2}(s,X_{i}(s))\int_{s}^{t}g_{i}(v)\,\mathrm{d}v\,\mathrm{d}s+g_{i}(t)\int_{0}^{t}h_{i}(s,X_{i}(s))\,\mathrm{d}W_{i}^{\mathbb{Q}}(s), \tag{9}\] \[Y_{i}(t):=g_{i}(t)^{2}\int_{0}^{t}h_{i}^{2}(s,X_{i}(s))\,\mathrm{d}s. \tag{10}\] Writing \(\odot\) for the Hadamard product on the space of matrices and applying Itô's lemma shows that the factor processes \(X\) and \(Y\) evolve according to the system of SDE \[\left\{\begin{aligned} \mathrm{d}X(t)&=\big{(}Y(t)-\kappa(t)\odot X(t)\big{)}\,\mathrm{d}t+\eta(t,X(t))\odot\,\mathrm{d}W^{\mathbb{Q}}(t)\\ \mathrm{d}Y(t)&=\big{(}\eta^{\odot 2}(t,X(t))-2\kappa(t)\odot Y(t)\big{)}\,\mathrm{d}t\end{aligned}\right. \tag{11}\] subject to the initial conditions \(X_{0}=Y_{0}=0\), where we have introduced the quantities \(\kappa,\eta\in\mathbb{R}^{d}\) defined by \[\kappa_{i}(t):=-\frac{g_{i}^{\prime}(t)}{g_{i}(t)}\quad\text{and}\quad\eta_{i}(t,X_{i}(t)):=h_{i}(t,X_{i}(t))g_{i}(t). \tag{12}\]
The following result, whose proof is a direct computation leveraging (3) and (8), shows that modeling the factor processes \(X\) and \(Y\) is equivalent to modeling zero-coupon bonds. By the _Cheyette model_ we will therefore mean the stochastic evolution (11) of the factor processes \(X\) and \(Y\). **Proposition 1**.: _The value of a zero-coupon bond with maturity \(T\) at time \(t\) in the Cheyette model is given by_ \[P(t,T)=\frac{P(0,T)}{P(0,t)}\exp\bigg{(}-X(t)\cdot G(t,T)-\frac{1}{2}Y(t)\cdot G^{\odot 2}(t,T)\bigg{)} \tag{13}\] _for the deterministic vector-valued function_ \(G(t,T)=(G_{i}(t,T))_{i\leq d}\) _defined by_ \[G_{i}(t,T)=\int_{t}^{T}\exp\bigg{(}-\int_{t}^{s}\kappa_{i}(u)\,\mathrm{d}u\bigg{)}\,\mathrm{d}s. \tag{14}\] We now discuss how this result can be leveraged to price European and Bermudan Swaptions in the Cheyette model.

### Pricing European Swaptions in the Cheyette model

A straightforward no-arbitrage argument reveals that a European Swaption with tenor structure \(T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\) is a contingent claim with payoff at the first tenor date \(T_{0}\) given by \[\phi^{\mathrm{EUR}}(X(T_{0}),Y(T_{0})):=\big{(}1-P(T_{0},T_{n})-A_{T_{0}}K\big{)}_{+}\quad\text{where}\quad A_{T_{0}}:=\sum_{m=1}^{n}P(T_{0},T_{m})\Delta T_{m} \tag{15}\] is the _annuity process_ evaluated at the maturity \(T_{0}\) of the Swaption. On the other hand, a direct application of the Feynman-Kac theorem shows that, in the context of the Cheyette model, the value process \[V_{t}:=V(t,X(t),Y(t)) \tag{16}\] of a contingent claim with payoff \(V_{T}:=\phi(X(T),Y(T))\) at its maturity \(T>0\) evolves according to the stochastic dynamics \[\mathrm{d}V_{t}=r_{t}V_{t}\,\mathrm{d}t+\nabla_{x}V(t,X(t),Y(t))\eta(t,X(t))\odot\,\mathrm{d}W^{\mathbb{Q}}(t) \tag{17}\] subject to the terminal condition \(V_{T}:=\phi(X(T),Y(T))\). Together with Proposition 1 and the observation from (8) that \[r_{t}=f(0,t)+\sum_{i\leq d}X_{i}(t), \tag{18}\] this insight will allow us to determine the value of a European Swaption using Neural Networks. More details will be provided in Section 3.1.

### Pricing Bermudan Swaptions in the Cheyette model

Pricing a Bermudan Swaption requires more care. A no-arbitrage argument analogous to that for a European Swaption shows that the _exercise value_, or the immediate profit, made by exercising the Bermudan Swaption at exercise time \(T_{m}\) is \[\phi^{\mathrm{BER},m}(X(T_{m}),Y(T_{m})):=\big{(}1-P(T_{m},T_{n})-A_{T_{m}}K\big{)}_{+}\quad\text{where}\quad A_{T_{m}}:=\sum_{\ell=m+1}^{n}P(T_{m},T_{\ell})\Delta T_{\ell} \tag{19}\] is the annuity process evaluated at the exercise date \(T_{m}\). To deduce from this the value \(V_{T_{m}}\) of the Swaption at each exercise date \(T_{m}\), and hence determine its initial value \(V_{0}\), the idea will be to proceed iteratively and backwards in time. More specifically, the strategy will be to compare the exercise value of the Swaption to its _continuation value_ at each exercise date \(T_{m}\); the larger of the two being the value of the Swaption at that time. To begin with, notice that the terminal value of the Bermudan Swaption is \[V_{T_{n}}:=\phi^{\mathrm{BER},n}(X(T_{n}),Y(T_{n})), \tag{20}\] and observe that the continuation value \(C^{m}=(C^{m}_{t})_{t\in[T_{m-1},T_{m})}\) of the Bermudan Swaption between exercise times \(T_{m-1}\) and \(T_{m}\) coincides with the value of a contingent claim having payoff \(V_{T_{m}}\) at time \(T_{m}\).
It therefore evolves according to the stochastic dynamics \[\mathrm{d}C^{m}_{t}=r_{t}C^{m}_{t}\,\mathrm{d}t+\nabla_{x}C^{m}_{t}(t,X(t),Y(t))\eta(t,X(t))\odot\,\mathrm{d}W^{\mathbb{Q}}(t)\quad\text{for }t\in[T_{m-1},T_{m}) \tag{21}\] subject to the terminal condition \(C^{m}_{T_{m}}:=V_{T_{m}}\). With the continuation value \(C^{m}_{T_{m-1}}\) at hand, the value \(V_{T_{m-1}}\) of the Bermudan Swaption at exercise time \(T_{m-1}\) becomes \[V_{T_{m-1}}:=\max\big{(}C^{m}_{T_{m-1}},\phi^{\mathrm{BER},m-1}\big{)}. \tag{22}\] This well-defined backwards iterative procedure yields the value of the Bermudan Swaption at each tenor date \(T_{m}\). The initial value \(V_{0}\) of the Bermudan Swaption is now the continuation value \(C^{0}_{0}\) obtained by evolving the SDE (21) backwards in time from \(T_{0}\) to \(0\) subject to the terminal condition \(C^{0}_{T_{0}}:=V_{T_{0}}\). This backward iterative procedure will be implemented using a sequence of Neural Networks as detailed in Section 3.2.

## 3 Neural Networks for Swaption Pricing

In this section we describe a Neural Network approach to price European and Bermudan Swaptions in the Cheyette stochastic interest rate model. The strategy will be to define Neural Networks that learn the solution to the SDE (17) with an appropriate terminal condition at all points along a partition of the relevant time interval. We also discuss how these Neural Networks can be tensorized using quantum-inspired ideas to yield Tensor Neural Networks.

### A Neural Network for the European Swaption

To price a European Swaption with tenor structure \(T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\geq 0\), we will use a single Neural Network that learns the solution \(V\) to the SDE (17) along a partition of the interval \([0,T_{0}]\) subject to the terminal condition \(\phi^{\mathrm{EUR}}\) defined in (15). To be more specific, we partition the interval \([0,T_{n}]\) into \(N\) sub-intervals of width \(\Delta t=T_{n}/N\) with endpoints at \(0=:t_{0}<t_{1}<\ldots<t_{N}:=T_{n}\), and we strive to learn the value \(V_{t_{k}}\) of the solution to (17) at each of the partition points \(t_{k}\) in the interval \([0,T_{0}]\). For simplicity, we will assume that \(N\) is chosen in such a way that \(T_{0}=t_{k_{0}}\) for some \(0\leq k_{0}\leq N\); otherwise, in everything that follows, \(T_{0}\) would have to be replaced by its nearest neighbor \(t_{k_{0}}\) in the partition of \([0,T_{n}]\). Writing \(M\) for the batch size, we proceed in \(3\) steps.

#### 3.1.1 The forward simulation

To begin with, we discretize the system of SDE (11) describing the evolution of the factor processes \(X\) and \(Y\) using an Euler scheme. This yields \(M\) simulated paths \((X^{j})_{j\leq M}\) and \((Y^{j})_{j\leq M}\) of each factor process. The \(k^{\mathrm{th}}\) coordinate in the vectors \(X^{j}=(X^{j,k})_{k\leq N}\) and \(Y^{j}=(Y^{j,k})_{k\leq N}\) is found from the system of discrete differences \[\begin{cases}X^{j,k+1}&:=X^{j,k}+\big{(}Y^{j,k}-\kappa(t_{k})\odot X^{j,k}\big{)}\Delta t+\eta(t_{k},X^{j,k})\odot\Delta W^{j,k}\\ Y^{j,k+1}&:=Y^{j,k}+\big{(}\eta^{\odot 2}(t_{k},X^{j,k})-2\kappa(t_{k})\odot Y^{j,k}\big{)}\Delta t\end{cases} \tag{23}\] subject to the initial condition \(X^{j,0}=Y^{j,0}=0\), and it corresponds to a sample of the random variable \(X_{t_{k}}\) or \(Y_{t_{k}}\). Here \(\Delta W^{j,k}:=\sqrt{\Delta t}Z^{j,k}\) for a family of independent and identically distributed standard Gaussian random variables \((Z^{j,k})\).
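As an illustration, the Euler scheme (23) can be implemented in a few lines of NumPy. The sketch below is a minimal reading of the scheme; the callables for \(\kappa\) and \(\eta\) and the array layout are chosen for exposition rather than taken from the paper's code.

```python
import numpy as np

def simulate_factors(kappa, eta, T, N, M, d, rng=None):
    """Euler discretization (23) of the factor dynamics (11) on [0, T].

    kappa: callable t -> array of shape (d,)
    eta:   callable (t, x) -> array of shape (M, d), applied path-wise
    Returns arrays X, Y of shape (M, N + 1, d). Y is deterministic, but a
    copy per path mirrors the batched notation used for the networks.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    X = np.zeros((M, N + 1, d))
    Y = np.zeros((M, N + 1, d))
    for k in range(N):
        t = k * dt
        dW = np.sqrt(dt) * rng.standard_normal((M, d))   # Delta W^{j,k}
        e = eta(t, X[:, k])                              # eta(t_k, X^{j,k})
        X[:, k + 1] = X[:, k] + (Y[:, k] - kappa(t) * X[:, k]) * dt + e * dW
        Y[:, k + 1] = Y[:, k] + (e ** 2 - 2.0 * kappa(t) * Y[:, k]) * dt
    return X, Y
```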
Notice that the vectors \(Y^{j}\) are all identical since \(Y\) is deterministic; nonetheless, we artificially take \(M\) of them as this will lighten the notation required to describe the Neural Network.

#### 3.1.2 The European terminal condition

The forward simulation yields \(NM\) vectors of triples \((X^{j,k},Y^{j,k},t_{k})\), which may be combined with Proposition 1 to estimate the zero-coupon bond prices \(P(T_{0},T_{m})\) required to determine the terminal condition \(\phi^{\mathrm{EUR}}\) defined in (15). The \(m^{\mathrm{th}}\) coordinate in the vector \(P^{j}=(P^{j,m})_{1\leq m\leq n}\) estimates the zero-coupon bond price \(P(T_{0},T_{m})\), and is defined by \[P^{j,m}:=\frac{P(0,T_{m})}{P(0,T_{0})}\exp\bigg{(}-X^{j,k_{0}}\cdot G(T_{0},T_{m})-\frac{1}{2}Y^{j,k_{0}}\cdot G^{\odot 2}(T_{0},T_{m})\bigg{)}, \tag{24}\] where the deterministic function \(G\) is given by (14) and \(k_{0}\) is the unique index with \(T_{0}=t_{k_{0}}\). From these zero-coupon bond estimates, it is possible to approximate the \(M\) terminal conditions \[\phi^{\mathrm{EUR},j}:=\bigg{(}1-P^{j,n}-\sum_{m=1}^{n}KP^{j,m}\Delta T_{m}\bigg{)}_{+} \tag{25}\] required to price the European Swaption.

#### 3.1.3 The European Neural Network

The \(NM\) vectors of triples \((X^{j,k},Y^{j,k},t_{k})\) and the \(M\) terminal conditions \((\phi^{\mathrm{EUR},j})\) now become the inputs of the Neural Network tasked with learning the solution to the SDE (17) subject to the terminal condition \(\phi^{\mathrm{EUR}}\) at the points \(t_{k}\) in the interval \([0,T_{0}]\). The outputs of this Neural Network are the \(M\) paths \[\widehat{V}^{j}(\theta)=\big{(}\widehat{V}^{j,k}(\theta)\big{)}_{k\leq k_{0}} \tag{26}\] with \(\widehat{V}^{j,k}(\theta)\) being an estimator for \(V_{t_{k}}\). This output vector can be used to estimate the gradient \(\widehat{\nabla}_{X}V^{j,k}\) by means of automatic differentiation, and this estimated gradient can be leveraged to obtain another approximation of \(V_{t_{k}}\) through an Euler discretization of the backward SDE (17), \[\widetilde{V}^{j,k+1}(\theta):=\widehat{V}^{j,k}(\theta)+r^{j,k}\widehat{V}^{j,k}(\theta)\Delta t+\widehat{\nabla}_{X}V^{j,k}\eta(t_{k},X^{j,k})\odot\Delta W^{j,k}, \tag{27}\] where \[r^{j,k}:=f(0,t_{k})+\sum_{i\leq d}X^{j,k} \tag{28}\] is an approximation of the short rate (18) at \(t_{k}\). The Neural Network is now trained by minimizing the squared difference between the estimators \(\widehat{V}^{j}\) and \(\widetilde{V}^{j}\) while matching the terminal conditions \(\phi^{\mathrm{EUR},j}\) across batches. In other words, its associated loss function is defined by \[\mathcal{L}(\theta):=\sum_{j=1}^{M}\sum_{k=1}^{k_{0}}\big{(}\widehat{V}^{j,k}(\theta)-\widetilde{V}^{j,k}(\theta)\big{)}^{2}+\sum_{j=1}^{M}\big{(}\widehat{V}^{j,k_{0}}(\theta)-\phi^{\mathrm{EUR},j}\big{)}^{2}. \tag{29}\] Notice that we could have included a term comparing \(\widetilde{V}^{j,k_{0}}(\theta)\) and \(\phi^{\mathrm{EUR},j}\), but this is not necessary as the triangle inequality already takes care of it.

Figure 1: Learning pipeline for the European Swaption Neural Network.

### A Sequence of Neural Networks for the Bermudan Swaption

To price a Bermudan Swaption with tenor structure \(T_{0}<T_{1}<\ldots<T_{n}\) and fixed rate \(K\geq 0\), we will use a backward sequence of \(n+1\) Neural Networks.
The \(m^{\rm th}\) of these Neural Networks will learn the solution \(C^{m}\) to the SDE (21) along a partition of the interval \([T_{m-1},T_{m}]\) subject to a terminal condition depending on the payoff \(\phi^{\rm BER,m}\) defined in (19) and the output of the previous Neural Network. Here and henceforth we adopt the convention that \(T_{-1}:=0\). Just like we did for European Swaptions, we partition the interval \([0,T_{n}]\) into \(N\) sub-intervals of width \(\Delta t=T_{n}/N\) with endpoints at \(0=:t_{0}<t_{1}<\ldots<t_{N}:=T_{n}\), and we assume that \(N\) is chosen in such a way that for every \(0\leq m\leq n\), we have \(T_{m}=t_{k_{m}}\) for some \(0\leq k_{m}\leq N\); otherwise, in everything that follows, \(T_{m}\) would have to be replaced by its nearest neighbour \(t_{k_{m}}\) in the partition of \([0,T_{n}]\). Moving forward we adopt the convention that \(t_{k_{-1}}:=0\). Writing \(M\) for the batch size, we once again proceed in \(3\) steps.

#### 3.2.1 The forward simulation

The forward simulation is identical to that described in Section 3.1.1 for European Swaptions, and again yields \(NM\) vectors of triples \((X^{j,k},Y^{j,k},t_{k})\).

#### 3.2.2 The Bermudan payoffs

To price a Bermudan Swaption, we need to estimate the \(n+1\) payoffs \(\phi^{\rm BER,m}\) defined in (19). To do this, we require approximations \(P^{j,m,\ell}\) to the zero-coupon bond prices \(P(T_{m},T_{\ell})\) for \(0\leq m<n\) and \(m<\ell\leq n\). These may be obtained from Proposition 1 by setting \[P^{j,m,\ell}:=\frac{P(0,T_{\ell})}{P(0,T_{m})}\exp\bigg{(}-X^{j,k_{m}}\cdot G(T_{m},T_{\ell})-\frac{1}{2}Y^{j,k_{m}}\cdot G^{\odot 2}(T_{m},T_{\ell})\bigg{)}, \tag{30}\] where the deterministic function \(G\) is given by (14) and \(k_{m}\) is the unique index with \(T_{m}=t_{k_{m}}\). From these zero-coupon bond estimates, for each of the \(M\) batches, it is possible to approximate the \(m^{\rm th}\) payoff \[\phi^{\rm BER,j,m}:=\bigg{(}1-P^{j,m,n}-\sum_{\ell=m+1}^{n}KP^{j,m,\ell}\Delta T_{\ell}\bigg{)}_{+} \tag{31}\] required to price the Bermudan Swaption.

#### 3.2.3 The sequence of Bermudan Neural Networks

The \(NM\) vectors of triples \((X^{j,k},Y^{j,k},t_{k})\) with \(k_{n-1}\leq k\leq k_{n}\) and the \(M\) payoffs \((\phi^{\rm BER,j,n})\) now become the inputs of the \(n^{\rm th}\) Neural Network tasked with learning the solution to the SDE (21) subject to the terminal condition \(\phi^{\rm BER,n}\) at the points \(t_{k}\) in the interval \([T_{n-1},T_{n}]\). The outputs of this Neural Network are the \(M\) continuation values \[\widehat{C}^{j,n}(\theta_{n})=\big{(}\widehat{C}^{j,n,k}(\theta_{n})\big{)}_{k_{n-1}\leq k\leq k_{n}} \tag{32}\] with \(\widehat{C}^{j,n,k}(\theta_{n})\) being an estimator for \(C^{n}_{t_{k}}\). Before discussing how the \(m^{\rm th}\) Neural Network learns, let us describe the inputs and outputs of the \(m^{\rm th}\) Neural Network in this stacked sequence of Neural Networks for \(0\leq m<n\). The \(NM\) vectors of triples \((X^{j,k},Y^{j,k},t_{k})\) with \(k_{m-1}\leq k\leq k_{m}\) and the \(M\) terminal conditions \[V^{j,m}:=\max\big{(}\phi^{\rm BER,j,m},\widehat{C}^{j,m+1,k_{m}}(\theta_{m+1})\big{)} \tag{33}\] are the inputs of the \(m^{\rm th}\) Neural Network tasked with learning the solution to the SDE (21) subject to the terminal condition \(V_{T_{m}}\) defined in (22) at the points \(t_{k}\) in the interval \([T_{m-1},T_{m}]\).
The outputs of the \(m^{\rm th}\) Neural Network are the \(M\) continuation values \[\widehat{C}^{j,m}(\theta_{m})=\big{(}\widehat{C}^{j,m,k}(\theta_{m})\big{)}_{k_{m-1}\leq k\leq k_{m}} \tag{34}\] with \(\widehat{C}^{j,m,k}(\theta_{m})\) being an estimator for \(C^{m}_{t_{k}}\). The value of the Bermudan Swaption is then obtained from the continuation values \(V^{j}_{0}:=\widehat{C}^{j,0,0}(\theta_{0})\). Having described the stacked Neural Network structure, let us delve into the learning procedure for the \(m^{\rm th}\) Neural Network. This will be identical for \(0\leq m\leq n\) and will very closely resemble the procedure described for the European Neural Network in Section 3.1.3. The output \(\widehat{C}^{j,m,k}(\theta_{m})\) of the \(m^{\rm th}\) Neural Network can be used to estimate the gradient \(\widehat{\nabla}_{X}C^{j,m,k}\) by means of automatic differentiation, and this estimated gradient can be leveraged to obtain another approximation of the continuation value \(C^{m}_{t_{k}}\) through the Euler discretization of the backward SDE (21), \[\widetilde{C}^{j,m,k+1}(\theta_{m})=\widehat{C}^{j,m,k}(\theta_{m})+r^{j,k}\widehat{C}^{j,m,k}(\theta_{m})\Delta t+\widehat{\nabla}_{X}C^{j,m,k}\eta(t_{k},X^{j,k})\odot\Delta W^{j,k}, \tag{35}\] where \(r^{j,k}\) is an approximation of the short rate (18) at \(t_{k}\). The \(m^{\rm th}\) Neural Network is now trained by minimizing the loss function \[\mathcal{L}_{m}(\theta_{m}):=\sum_{j=1}^{M}\sum_{k=k_{m-1}}^{k_{m}}\left(\widehat{C}^{j,m,k}(\theta_{m})-\widetilde{C}^{j,m,k}(\theta_{m})\right)^{2}+\sum_{j=1}^{M}\left(\widehat{C}^{j,m,k_{m}}(\theta_{m})-V^{j,m}\right)^{2} \tag{36}\] which strives to match the estimators \(\widehat{C}^{j,m}\) and \(\widetilde{C}^{j,m}\) as well as the terminal conditions \(V^{j,m}\) across batches.

## 4 Tensorizing the Neural Networks

To address the shortcomings of dense architectures, we transform classical fully-connected Dense Neural Networks into what we call Tensor Neural Networks. This has the purpose of enhancing training performance and reducing memory consumption [7, 29]. The way we tensorize the Neural Networks is to replace the weight matrix of a dense layer by a Tensor Network [26]. In particular, we choose a Matrix Product Operator (MPO) representation [26] of the weight matrix that is analogous to the Tensor-Train format [20], and we call this layer a _TN layer_. This representation, however, is not unique, and is determined by two additional parameters: the MPO bond dimension, and the number of tensors in the MPO. In the simplest case, the MPO may consist of only two tensors, \(\mathbf{W_{1}}\) and \(\mathbf{W_{2}}\), as shown in Figure 3. The MPO in the figure has bond dimension \(\chi\) and physical dimension \(d\) as the input and output dimension. The TN layer with such an MPO can be initialized in the same manner as a weight matrix of a dense layer. In the forward pass of the TN layer, we first contract the MPO along the bond index and then reshape the resulting rank-4 tensor into a matrix as shown in Figure 3. This matrix is the corresponding weight matrix of a TN layer. The weight matrix can then be multiplied with the input vector. We apply an activation function to the resulting output vector, thereby finishing the forward pass.
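A minimal sketch of this forward pass, under one possible index convention for the two tensors, is given below; the sum-of-Kronecker-products structure it realizes is made explicit in (37) just after.

```python
import numpy as np

def tn_layer_forward(W1, W2, x, activation=np.tanh):
    """Forward pass of a 2-node TN layer: contract the MPO along its bond
    index, reshape the resulting rank-4 tensor into a d^2 x d^2 matrix,
    and apply it to the input vector (shapes follow Figure 3).

    W1: rank-3 tensor of shape (d, chi, d); W2: shape (chi, d, d).
    x:  input vector of length d**2. The activation is a placeholder,
    as the text does not fix a particular activation function.
    """
    d = W1.shape[0]
    # Sum over the bond index alpha: W[(i,k),(j,l)] = sum_a A_a[i,j] B_a[k,l].
    W4 = np.einsum('iaj,akl->ikjl', W1, W2)   # rank-4 tensor (d, d, d, d)
    W = W4.reshape(d * d, d * d)              # effective weight matrix
    return activation(W @ x)
```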
The weight matrix takes the form \[\mathbf{W}=\sum_{\alpha=1}^{\chi}(\mathbf{A}_{\alpha}\otimes\mathbf{B}_{\alpha}),\ \ \ \mathbf{W}\in\mathbb{R}^{d^{2}\times d^{2}}, \tag{37}\] where \(\mathbf{W}_{1}=[\mathbf{A}_{1},\mathbf{A}_{2},\cdots,\mathbf{A}_{\chi}],\mathbf{A}_{\alpha}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{2}=[\mathbf{B}_{1},\mathbf{B}_{2},\cdots,\mathbf{B}_{\chi}],\mathbf{B}_{\alpha}\in\mathbb{R}^{d\times d}\) are the two rank-3 weight tensors connected by a virtual bond \(\alpha\) of dimension \(\chi\). The resulting weight matrix \(\mathbf{W}\) is of dimension \(d^{2}\times d^{2}\), so it contains \(d^{4}\) elements. Notice that these elements _are not independent_ due to the Kronecker product in (37). Indeed, the weights come from the TN structure with \(2\chi d^{2}\) trainable parameters, and they are a sum of products of elements of the tensors \(\mathbf{A}\) and \(\mathbf{B}\), thus leading to a correlation structure. It is worth noting that the parameters to be trained are not the matrix elements of the weight matrix, but the elements of the individual tensors of the MPO. This can lead to interesting training behaviour and can result in faster convergence of the loss function. Moreover, any choice of \(\chi<d^{2}/2\) will result in \(d^{4}-2\chi d^{2}\) fewer trainable parameters than an equivalent dense layer, thus allowing for potential parameter savings. In principle, when \(\chi=d^{2}\), we have sufficient degrees of freedom to be able to construct an arbitrary \(d^{2}\times d^{2}\) matrix. By increasing the bond dimension, we therefore expect the TN layer to behave increasingly like a dense layer [29].

Figure 2: Learning pipeline for the Bermudan Swaption stacked Neural Network.

Figure 3: The process of contracting a 2-node MPO and reshaping it into the weight matrix \(\mathbf{W}\) in each forward pass.

## 5 Swaption Pricing Results

We now apply the methodology described in Sections 3.1, 3.2 and 4 to benchmark the performance of Dense Neural Networks and Tensor Neural Networks in pricing European and Bermudan Swaptions. For benchmarking, we consider the Cheyette model with constant parameters and identity correlation matrix. Under this parametrization, the Cheyette model is equivalent to a one-factor Hull and White model. It is worth noting, however, that our approach and code can be extended to the generalized version of the Cheyette model.

### European Swaptions

To price European Swaptions we place ourselves in the context of the 3-factor Cheyette model with \(X_{i}(0)=Y_{i}(0)=0\) for \(1\leq i\leq 3\), we use the base parameters \(\kappa=-0.02\) and \(\eta=0.0065\), the tenor structure \((T_{0},T_{1},T_{2},T_{3},T_{4})=(1,2,3,4,5)\), the fixed rates \(K=0.00\) or \(K=0.01\), and the initial forward curve \(T\mapsto f(0,T)\) implied from (1) and the zero-coupon bond prices \(P(0,T)\) given in Table 1. Moreover, we partition the time interval \([0,T_{4}]\) into \(N=500\) sub-intervals of width \(\Delta t=0.01\). For our use case, we only use Neural Networks with a 2-hidden-layer architecture. Furthermore, for Tensor Neural Networks, we only construct TN layers that are symmetric in each input and each output dimension. As a result, we choose the first layer in our Tensor Neural Network to be a dense layer with neurons that match the input shape of the second TN layer. For convenience of notation, we will write \(\text{DNN}(x,y)\) to mean a two-layer Dense Neural Network with \(x\) neurons in the first layer and \(y\) neurons in the second layer.
Similarly, we will write \(\text{TNN}(x,y)\) to mean a two-layer Tensor Neural Network with a dense first layer having \(x\) neurons and a TN second layer having \(y\) neurons, where we must have \(x=y\). To optimize the Neural Network weights, we use the Adam Optimizer with batches of size 100 and a piecewise learning rate of \([10^{-2},10^{-3},10^{-4},10^{-5}]\). This means that for the \(i\)-th quarter of epochs, we use a learning rate of \(10^{-(i+1)}\) in the Adam Optimizer. Having described the model and Neural Network specifications for our numerical experiments, we now turn our attention to the numerical results for the initial price and loss evolution of European Swaptions with fixed rate \(K=0.00\) and \(K=0.01\). We focus on three architectures, TNN(64, 64), DNN(64, 64) and DNN(24,27), and we compare and contrast them to the Monte-Carlo (MC) price obtained by approximating the discounted payoff \[\mathbb{E}\bigg{[}\exp\bigg{(}-\int_{0}^{T_{0}}r_{s}\,\mathrm{d}s\bigg{)}\phi^{\text{EUR}}(X(T_{0}),Y(T_{0}))\bigg{]} \tag{38}\] by its sample average. In the context of European Swaptions, we will assume this MC price to be the "true" option price. We start with DNN(64, 64) as it is the smallest Dense Neural Network that agrees with the MC price for one of the two fixed rates \(K\) that we consider. To compare a Tensor Neural Network with this DNN(64, 64), we select the architecture TNN(64, 64) as it has a comparable number of neurons. However, it is worth noting that TNN(64, 64) does not have the same number of parameters as DNN(64, 64). In our experiments, we use a bond dimension \(\chi=2\), so, as discussed in Section 4, the architecture TNN(64,64) would have \(d^{4}-2\chi d^{2}=8^{4}-2\cdot 2\cdot 8^{2}=3840\) fewer parameters than DNN(64,64). Notice that \(d=8\) as \(64=8^{2}\) (see Figure 3). In this particular case, DNN(64, 64) has \(4737\) parameters whereas TNN(64, 64) has \(897\) parameters. Moreover, to have a fair comparison, we also contrast TNN(64, 64) to DNN(24, 27) as it is the best performing Dense Neural Network architecture with a number of parameters comparable to TNN(64,64).

\begin{table} \begin{tabular}{|c|c||c|c||c|c|} \hline Maturity & Bond price & Maturity & Bond price & Maturity & Bond price \\ \hline 0 & 1.00000 & 5 & 0.88232 & 18 & 0.60911 \\ 1 & 0.99005 & 6 & 0.83500 & 23 & 0.53693 \\ 2 & 0.97528 & 7 & 0.78240 & 28 & 0.49611 \\ 3 & 0.95596 & 8 & 0.77064 & 33 & 0.47940 \\ 4 & 0.91376 & 13 & 0.67661 & 38 & 0.46721 \\ \hline \end{tabular} \end{table} Table 1: Zero-coupon bond prices \(T\mapsto P(0,T)\) used to determine the initial forward curve \(T\mapsto f(0,T)\).

The results in Figure 4 show that the Tensor Neural Network (64, 64) outperforms the best performing Dense Neural Network with similar parameter count for both fixed rates \(K\) considered. This is evident from the option price evolution, with the Tensor Neural Network approaching the MC price faster and to a much greater extent while exhibiting a significantly smaller confidence interval than the DNN(24, 27) architecture, which does not even converge to the MC price. This is supported by the loss function evolution, with the Tensor Neural Network loss function becoming considerably smaller than the DNN(24,27) loss function. Upon comparing the TNN(64, 64) architecture to a dense architecture with similar neuron count but significantly higher parameter count, we see that the Tensor Neural Network again outperforms its dense counterpart despite an \(81\%\) compression in parameter count.
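The parameter counts quoted above can be reproduced with elementary arithmetic. In the sketch below, the input dimension of 7 (the three coordinates of \(X\), the three coordinates of \(Y\), and time) is our assumption about the networks' inputs; it is consistent with the quoted totals of 4737 and 897 but is not stated explicitly in the text.

```python
def dense_params(widths):
    """Weights plus biases of a fully-connected network with layer widths."""
    return sum(m * n + n for m, n in zip(widths[:-1], widths[1:]))

def tnn_params(d, chi, in_dim=7, out_dim=1):
    """TNN(d^2, d^2): dense input layer, one 2-node TN layer with bond
    dimension chi (2*chi*d^2 weights plus d^2 biases), dense output layer.
    The input dimension in_dim=7 is an assumption, see the lead-in above."""
    return (in_dim * d**2 + d**2) + (2 * chi * d**2 + d**2) \
        + (d**2 * out_dim + out_dim)

print(dense_params([7, 64, 64, 1]))   # 4737, matching DNN(64, 64)
print(tnn_params(d=8, chi=2))         # 897, matching TNN(64, 64)
```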
It is worth pointing out that while the DNN(64,64) architecture does result in convergence to the MC price for the fixed rate \(K=0.01\), it fails to converge to this benchmark price for the fixed rate \(K=0.00\). On the other hand, the Tensor Neural Network readily converges to this "true" price for both fixed rates. In summary, the Tensor Neural Network outperforms both the best Dense Neural Network with a comparable number of parameters and the Dense Neural Network with similar neuron count but considerably more parameters. In light of the fact that the two-layer Dense Neural Networks in Figure 4 fail to converge, to compare the convergence time of Tensor Neural Networks and Dense Neural Networks, we consider four-layer architectures. The three four-layer architectures shown in Figure 5 all converge to the true price, with the Tensor Neural Network doing so considerably faster than its dense counterparts with comparable number of neurons or parameters.

Figure 4: **(top panel)** Initial option price evolution for fixed rate \(K=0.00\) (left) and fixed rate \(K=0.01\) (right) for TNN(64, 64) with a bond dimension 2 (red), the corresponding DNN(64,64) with similar neuron count (green) and the best DNN(24,27) with equivalent parameter count (blue). The plots display the mean and \(95\%\) confidence interval for the runs. To benchmark the results, the dotted line (black) indicates the MC price from \(10^{5}\) runs while the grey shaded region indicates its associated \(95\%\) confidence interval. (**bottom panel**) Training loss evolution for fixed rate \(K=0.00\) (left) and fixed rate \(K=0.01\) (right) for TNN(64, 64) with a bond dimension 2 (red), the corresponding DNN(64,64) with similar neuron count (green) and the best DNN(24,27) with equivalent parameter count (blue). The plots display the mean and \(95\%\) confidence interval for the runs.

### Bermudan Swaptions

To price Bermudan Swaptions, we consider the 3-factor Cheyette model with \(X_{i}(0)=Y_{i}(0)=0\) for \(1\leq i\leq 3\), we use the base parameters \(\kappa=-0.02\) and \(\eta=0.0065\), the tenor structure \((T_{0},T_{1},T_{2},T_{3},T_{4})=(1,2,3,4,5)\), the fixed rates \(K=0.00\) or \(K=0.01\), and the initial forward curve \(T\mapsto f(0,T)\) implied from (1) and the zero-coupon bond prices \(P(0,T)\) given in Table 1. Moreover, we partition the time interval \([0,T_{4}]\) into \(N=500\) sub-intervals of width \(\Delta t=0.01\), and we assume that the exercise dates of the Bermudan Swaption coincide with its tenor structure. The plot in Figure 6 gives the Bermudan Swaption price for a collection of different Neural Network architectures both for Tensor Neural Networks and Dense Neural Networks. We have chosen to compare a given Tensor Neural Network architecture to the Dense Neural Network architecture with the same number of neurons, and therefore a considerably higher parameter count. The differences in parameter count between comparable Tensor Neural Networks and Dense Neural Networks are summarized in Table 2.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Layers & Neurons & Dense Neural Networks & Tensor Neural Networks \\ \hline 2 & 16 & 417 & 225 \\ 2 & 64 & 4,737 & 897 \\ 4 & 64 & 13,057 & 1,537 \\ 10 & 64 & 38,017 & 3,457 \\ 20 & 256 & 1,252,353 & 26,625 \\ \hline \end{tabular} \end{table} Table 2: Parameter Count for Dense Neural Networks and Tensor Neural Networks compared in Figure 6.

Figure 5: European Swaption Price for different architectures in 3-factor Cheyette Model with fixed rate \(K=0.00\).
Figure 6: Bermudan Swaption Price for different architectures in 3-factor Cheyette Model with fixed rate \(K=0.00\).

To benchmark the Bermudan Swaption prices given by Tensor Neural Networks and Dense Neural Networks, Figure 6 also displays the Bermudan Swaption price provided by the classical Longstaff-Schwartz (LS) approach. We have included the LS price when the regressors are of degree 1, which turned out to be $0.1090, and also the price when the regressors are of degree 2, which turned out to be $0.1096. It is well-known that the LS approach leads to an under-estimate of the option price [16] and that the LS price approaches the "true" option price when the degree of the regressors tends to infinity [15]. The first of these heuristics is supported by our numerical results, and the second suggests that the "true" option price should be higher than the $0.1096 given by the LS with regressors of degree 2. It is worth noting that such a price will also be higher than the "true" price of $0.1079 for the European Swaption with the same parameters displayed in Figure 4. This is in accordance with the fact that the price of a Bermudan Swaption should always be lower bounded by the price of its European counterpart. Having understood how we will benchmark the Tensor Neural Network and Dense Neural Network prices, and emphasising that this benchmark is not as reliable as it was in the setting of European Swaptions, let us turn our attention to the Neural Network prices for various architectures. To begin with, we used a 2-layer architecture with 16 neurons in each layer for each of the 5 Neural Networks (one for each exercise date), and trained each Neural Network for 500 epochs. The prices for such a Tensor Neural Network and Dense Neural Network are displayed on the far right of Figure 6. It is clear that these prices are under-estimates resulting from the Neural Networks under-fitting the data. This led us to consider the same configuration of Neural Networks with 64 neurons instead of just 16 in each of the 2 layers. We observe a marked improvement in both the price and the stability (the bars around the estimate indicate a 95% confidence interval for 10 runs under the same configuration) of the resulting Bermudan Swaption prices, particularly in the Tensor Neural Network setting, where the price goes above the LS price as expected of the "true" option price. Upon increasing the complexity of the architecture to a 4-layer Neural Network with 64 neurons in each layer, the improvements are evident, with both the Tensor Neural Network and the Dense Neural Network above the LS price, thereby making the Neural Network advantage clear. This phenomenon of Neural Networks outperforming classical regression should come as no surprise. Indeed, the finite number of regressors chosen in the classical LS approach cannot fully represent the conditional payoff they strive to approximate. This limitation could be mitigated by increasing the degree of the regressors, but this quickly becomes infeasible from an accuracy and time perspective, leading both to over-fitting and a lack of convergence. The advantage of Neural Networks is that they allow us to learn the relevant regressors instead of selecting them a priori, thus being more amenable to high-dimensional settings. Once the Tensor Neural Networks and Dense Neural Networks both start to outperform the LS approach, in the sense that they yield higher prices, we increase the number of layers to 10 and see a further increase in price to $0.1230.
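As a point of reference for these comparisons, the regression step of the Longstaff-Schwartz benchmark can be sketched as follows. This is a minimal single-state version; the array layout, helper name and omission of in-the-money filtering are simplifications for exposition, not the exact benchmark implementation.

```python
import numpy as np

def longstaff_schwartz_price(payoffs, discounts, states, degree=2):
    """Regress discounted continuation values on polynomial features of a
    simulated state, and exercise when the immediate payoff exceeds the
    regressed continuation value.

    payoffs:   array (M, n+1), exercise values at the tenor dates
    discounts: array (M, n),   one-period discount factors between dates
    states:    array (M, n+1), regression state at the tenor dates
    Returns the sample-average value at the first tenor date (discounting
    back to time 0 is omitted for brevity).
    """
    value = payoffs[:, -1].copy()
    for m in range(payoffs.shape[1] - 2, -1, -1):
        value = discounts[:, m] * value                 # discount to T_m
        basis = np.vander(states[:, m], degree + 1)     # s^degree, ..., s, 1
        coef, *_ = np.linalg.lstsq(basis, value, rcond=None)
        continuation = basis @ coef
        exercise = payoffs[:, m] > continuation
        value[exercise] = payoffs[exercise, m]
    return value.mean()
```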
Upon increasing the number of layers to 20, we observe that the Tensor Neural Network price is almost identical to that given in the 10-layer setting. On the other hand, the Dense Neural Network price for 20 layers suffers from over-fitting and is considerably worse than the price observed in the 10-layer setting both in terms of accuracy and stability. Besides exhibiting over-fitting, as shown in Figure 7, Dense Neural Networks also require a larger number of parameters than Tensor Neural Networks to converge above the threshold prices of $0.110 and $0.120, which appear to be lower bounds for the true price. This suggests that Tensor Neural Networks provide better and more scalable prices than both LS and Dense Neural Networks, which are more prone to over-fitting if not properly fine-tuned.

Figure 7: DNN and TNN parameter comparison for Bermudan Swaption price in \(3\)-factor Cheyette model with \(K=0.00\) to cross $0.110 and $0.120 price threshold.

Before closing this section, we would like to delve deeper into the 150bps difference between the degree 2 LS price and the 20-layer Tensor Neural Network price shown in Figure 6, as this might seem overly large. We hypothesise that this difference is a consequence of using low-degree regressors in the LS approach, and that the LS price would approach the Tensor Neural Network price if we increased the degree of the regressors. Due to computational limitations, instead of increasing the degree of the regressors, we decrease the complexity of the model by considering the 1-factor Cheyette model as opposed to the 3-factor model. The hypothesis that the 150bps difference is being driven by an under-fitting of the regressors is supported by Figure 8, which shows a smaller 30bps difference between the degree 2 LS price and the 4-layer Tensor Neural Network price. Furthermore, in this simpler setting, the Tensor Neural Network and the Dense Neural Network prices both tend to the same value with almost no standard error. To further display the advantage provided by Tensor Neural Networks, let us compare the loss plots, parameter count and time to convergence for Tensor Neural Networks and Dense Neural Networks. From the premise that smaller losses imply better results, Figure 9 shows that the Tensor Neural Network outperforms the Dense Neural Network with similar neuron count and also the best Dense Neural Network with comparable parameter count. This corroborates the observation from Figures 6 and 8 that Tensor Neural Networks provide more reliable prices than Dense Neural Networks.

Figure 8: Bermudan Swaption Price for different architectures in 1-factor Cheyette Model with fixed rate \(K=0.00\).

Figure 9: Bermudan Swaption loss evolution in \(3\)-factor Cheyette model with fixed rates \(K=0.00\) and \(K=0.01\).

## 6 Conclusions and Outlook

In this paper we have shown how Tensor Neural Networks can be leveraged to price European Swaptions in the multi-factor Cheyette model. We have also extended this approach to price early-exercise Swaptions under this stochastic interest rate model by stacking \(n\) feed-forward Neural Networks to price a Bermudan Swaption with \(n\) exercise dates. To quantify the considerable advantages of Tensor Neural Networks, we have performed rigorous empirical benchmarking. In doing so, we have demonstrated that Tensor Neural Networks provide significant parameter savings relative to Dense Neural Networks with comparable number of neurons or parameter count while attaining more reliable prices with smaller variance.
We have further shown that Tensor Neural Networks achieve a computational speed-up in training when contrasted with Dense Neural Networks having a comparable number of neurons or a comparable parameter count. Although we have performed numerical experiments in the 3-factor setting, our approach, unlike traditional Monte-Carlo-based methods which succumb to the Curse of Dimensionality, can be scaled to a significantly higher number of factors. Despite an absence of theoretical bounds, which can be an area of further investigation, the Tensor Neural Network approach described in this paper can be used to improve training from a memory and speed perspective for a wide variety of problems in Machine Learning. Extending this approach to yield a parametric Tensor Neural Network, able to instantaneously derive prices for a wide variety of different model parameters without having to retrain the Neural Network, would be an interesting and fruitful avenue of further investigation. **Acknowledgements -** We acknowledge the regular fruitful discussions with the technical teams both at BBVA and Multiverse Computing. **Disclaimer -** This paper is purely scientific and informative in nature and is not a product of BBVA SA or any of its subsidiaries. Neither BBVA nor such subsidiaries are aware of or necessarily share the premises, conclusions or contents in general of this document. Consequently, the responsibility for its originality, accuracy, reliability or for any other reason lies exclusively with the authors. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.
2310.07190
Neural networks: deep, shallow, or in between?
We give estimates from below for the error of approximation of a compact subset from a Banach space by the outputs of feed-forward neural networks with width W, depth l and Lipschitz activation functions. We show that, modulo logarithmic factors, rates better than entropy numbers' rates are possibly attainable only for neural networks for which the depth l goes to infinity, and that there is no gain if we fix the depth and let the width W go to infinity.
Guergana Petrova, Przemyslaw Wojtaszczyk
2023-10-11T04:50:28Z
http://arxiv.org/abs/2310.07190v1
# Neural networks: deep, shallow, or in between?

###### Abstract

We give estimates from below for the error of approximation of a compact subset from a Banach space by the outputs of feed-forward neural networks with width \(W\), depth \(\ell\) and Lipschitz activation functions. We show that, modulo logarithmic factors, rates better than entropy numbers' rates are possibly attainable only for neural networks for which the depth \(\ell\to\infty\), and that there is no gain if we fix the depth and let the width \(W\to\infty\).

## 1 Introduction

The fascinating new developments in the area of Artificial Intelligence (AI) and other important applications of neural networks prompt the need for a theoretical mathematical study of their potential to reliably approximate complicated objects. Various network architectures have been used in different applications with substantial success rates without significant theoretical backing of the choices made. Thus, a natural question to ask is whether and how the architecture chosen affects the approximation power of the outputs of the resulting neural network. In this paper, we attempt to clarify how the width and the depth of a feed-forward neural network affect its worst performance. More precisely, we provide estimates from below for the error of approximation of a compact subset \(\mathcal{K}\subset X\) of a Banach space \(X\) by the outputs of feed-forward neural networks (NNs) with width \(W\), depth \(\ell\), bound \(w(W,\ell)\) on their parameters, and Lipschitz activation functions. Note that the ReLU function is included in our investigation since it is a Lipschitz function with a Lipschitz constant \(L=1\). To prove our results, we assume that we know lower bounds on the entropy numbers of the compact sets \(\mathcal{K}\) that we approximate by the outputs of feed-forward NNs. Such bounds are known for a wide range of classical and novel classes \(\mathcal{K}\) and Banach spaces \(X\), and are usually of the form \(n^{-\alpha}[\log n]^{\beta}\), \(\alpha>0\), \(\beta\in\mathbb{R}\). We refer the reader to [8, Chapters 3,4], [10, Chapter 15], [5, Section 5], [18, Theorem 9], or [6, 9], where such examples are provided. It is a well-known fact that the number \(n\) of parameters of a feed-forward NN with width \(W\) and depth \(\ell\) is \[n\asymp\begin{cases}W^{2}\ell,&\text{when}\quad\ell>1,\\ W,&\text{when}\quad\ell=1.\end{cases} \tag{1}\] Let us denote by \(\Sigma(W,\ell,\sigma;w)\) the set of functions that are outputs of such a NN with bounds \(w=w(W,\ell)\) on its parameters and with a Lipschitz activation function. We prove estimates from below for the error \(E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\) of approximation of a class \(\mathcal{K}\) by the functions from \(\Sigma(W,\ell,\sigma;w)\), see Theorem 4.1. Our conclusion is that under a moderate growth of the bound \(w\asymp n^{\delta}\), \(\delta\geq 0\), one can possibly obtain rates of approximation that are better than the corresponding entropy numbers' rates only when the depth of the NN is allowed to grow. If the rate of approximation of \(\mathcal{K}\) by outputs of feed-forward NNs is better than the decay rate of its entropy numbers, then we say that we have super convergence. In fact, since we only obtain estimates from below, we claim that super convergence is possibly attainable in such cases.
If the depth \(\ell\) is fixed, then the rates of decay of \(E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\) cannot be better (modulo logarithmic factors) than the rates of the entropy numbers of \(\mathcal{K}\). If both the width \(W\) and depth \(\ell\) are allowed to grow, then an improvement of the rates of decay of \(E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\) in comparison to the entropy numbers' decay is possible. Of course, the bound \(w\) on the NN's parameters also has an effect and a fast-growing bound, for example \(w\asymp 2^{n}\), could lead to improved convergence in all cases. However, one needs to be aware of the fact that NNs with such bounds are computationally infeasible. We show that the mapping assigning to each choice of neural network parameters the function that is an output of a feed-forward NN with these parameters is a Lipschitz mapping, see Theorem 3.1. This allows us to study the approximation properties of such NNs via the recently introduced Lipschitz widths, see [14, 15]. We have utilized this approach in [15] to discuss deep (\(W=W_{0}\) is fixed and \(\ell\to\infty\)) and shallow (\(W\to\infty\) and \(\ell=1\)) NNs with bounded Lipschitz or ReLU activation functions and their limitations in approximating compact sets \(\mathcal{K}\). Here, we implement the developed technique to treat NNs for which both \(W,\ell\to\infty\). Results in this direction are available for shallow and deep NNs, and we refer the reader to the series of works [19, 2, 22, 20, 21, 16, 1, 7, 12, 13], where various estimates from below are given for the error of approximation for particular classes \(\mathcal{K}\) and Banach spaces \(X\). The paper is organized as follows. In §2, we introduce our notation, recall the definitions of NNs, entropy numbers and Lipschitz widths, and state some known results about them. We show in §3 that feed-forward NNs are Lipschitz mappings. Finally, in §4, we use results for Lipschitz widths to derive estimates from below for the error of neural network approximation for a compact class \(\mathcal{K}\).

## 2 Preliminaries

In this section, we introduce our notation and recall some known facts about NNs, Lipschitz widths and entropy numbers. In what follows, we will denote by \(A\gtrsim B\) the fact that there is an absolute constant \(c>0\) such that \(A\geq cB\), where \(A,B\) are some expressions that depend on some variable which tends to infinity. Note that the value of \(c\) may change from line to line, but is always independent of that variable. Similarly, we use the notation \(A\lesssim B\) (defined in an analogous way) and \(A\asymp B\) if \(A\gtrsim B\) and \(A\lesssim B\). We also write \(A=A(B)\) to stress the fact that the quantity \(A\) depends on \(B\). For example, if \(C\) is a constant, the expression \(C=C(d,\sigma)\) means that \(C\) depends on \(d\) and \(\sigma\).

### Entropy numbers

We recall, see e.g. [3, 4, 10], that the _entropy numbers_ \(\epsilon_{n}(\mathcal{K})_{X}\), \(n\geq 0\), of a compact set \(\mathcal{K}\subset X\) are defined as the infimum of all \(\epsilon>0\) for which \(2^{n}\) balls with centers from \(X\) and radius \(\epsilon\) cover \(\mathcal{K}\).
Formally, we write \[\epsilon_{n}(\mathcal{K})_{X}=\inf\{\epsilon>0\ :\ \mathcal{K}\subset\bigcup_{j=1}^{2^{n}}B(g_{j},\epsilon),\ g_{j}\in X,\ j=1,\ldots,2^{n}\}.\]

### Lipschitz widths

We denote by \((\mathbb{R}^{n},\|\cdot\|_{Y_{n}})\), \(n\in\mathbb{N}\), the \(n\)-dimensional Banach space with a fixed norm \(\|\cdot\|_{Y_{n}}\), by \[B_{Y_{n}}(r):=\{y\in\mathbb{R}^{n}:\ \|y\|_{Y_{n}}\leq r\},\] its ball with radius \(r\), and by \[\|y\|_{\ell^{n}_{\infty}}:=\max_{j=1,\ldots,n}|y_{j}|,\] the \(\ell_{\infty}\) norm of \(y=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\). The Lipschitz width \(d^{\gamma}_{n}(\mathcal{K})_{X}\) of the compact set \(\mathcal{K}\) with respect to the norm \(\|\cdot\|_{X}\) is defined as \[d^{\gamma}_{n}(\mathcal{K})_{X}:=\inf_{\mathcal{L}_{n},\,r>0,\,\|\cdot\|_{Y_{n}}}\ \sup_{f\in\mathcal{K}}\ \inf_{y\in B_{Y_{n}}(r)}\|f-\mathcal{L}_{n}(y)\|_{X}, \tag{2}\] where the infimum is taken over all \(\gamma/r\)-Lipschitz maps \(\mathcal{L}_{n}:(B_{Y_{n}}(r),\|\cdot\|_{Y_{n}})\to X\), all \(r>0\), and all norms \(\|\cdot\|_{Y_{n}}\) in \(\mathbb{R}^{n}\). We have proven, see Theorem 9 in [15], the following result, which relates the behavior of the entropy numbers of \(\mathcal{K}\) and its Lipschitz widths with a Lipschitz constant \(\gamma=2^{\varphi(n)}\). **Theorem 2.1**.: _For any compact set \(\mathcal{K}\subset X\), we consider the Lipschitz width \(d^{\gamma_{n}}_{n}(\mathcal{K})_{X}\) with Lipschitz constant \(\gamma_{n}=2^{\varphi(n)}\), where \(\varphi(n)\geq c\log_{2}n\) for some fixed constant \(c>0\). Let \(\alpha>0\) and \(\beta\in\mathbb{R}\). Then the following holds:_ \[\text{(i)}\ \epsilon_{n}(\mathcal{K})_{X}\gtrsim\frac{(\log_{2}n)^{\beta}}{n^{\alpha}},\quad n\in\mathbb{N}\quad\Rightarrow\quad d^{\gamma_{n}}_{n}(\mathcal{K})_{X}\gtrsim\frac{[\log_{2}(n\varphi(n))]^{\beta}}{[n\varphi(n)]^{\alpha}},\quad n\in\mathbb{N}; \tag{3}\] \[\text{(ii)}\ \epsilon_{n}(\mathcal{K})_{X}\gtrsim[\log_{2}n]^{-\alpha},\quad n\in\mathbb{N}\quad\Rightarrow\quad d^{\gamma_{n}}_{n}(\mathcal{K})_{X}\gtrsim[\log_{2}(n\varphi(n))]^{-\alpha},\quad n\in\mathbb{N}. \tag{4}\]

### Neural networks

Let us denote by \(C(\Omega)\) the set of continuous functions defined on the compact set \(\Omega\subset\mathbb{R}^{d}\), equipped with the uniform norm. A feed-forward NN with activation function \(\sigma:\mathbb{R}\to\mathbb{R}\), width \(W\), depth \(\ell\) and bound \(w=w(W,\ell)\) on its parameters generates a family \(\Sigma(W,\ell,\sigma;w)\) of continuous functions \[\Sigma(W,\ell,\sigma;w):=\{\Phi^{W,\ell}_{\sigma}(y):\ y\in\mathbb{R}^{n}\}\subset C(\Omega),\quad\Omega\subset\mathbb{R}^{d},\] where the number of parameters \(n\) satisfies (1). Each \(y\in\mathbb{R}^{n}\) with \(\|y\|_{\ell^{n}_{\infty}}\leq w\) determines a continuous function \(\Phi^{W,\ell}_{\sigma}(y)\in\Sigma(W,\ell,\sigma;w)\), defined on \(\Omega\), of the form \[\Phi^{W,\ell}_{\sigma}(y):=A^{(\ell)}\circ\bar{\sigma}\circ A^{(\ell-1)}\circ\ldots\circ\bar{\sigma}\circ A^{(0)}, \tag{5}\] where \(\bar{\sigma}:\mathbb{R}^{W}\to\mathbb{R}^{W}\) is given by \[\bar{\sigma}(z_{1},\ldots,z_{W})=(\sigma(z_{1}),\ldots,\sigma(z_{W})), \tag{6}\] and \(A^{(0)}:\mathbb{R}^{d}\to\mathbb{R}^{W}\), \(A^{(j)}:\mathbb{R}^{W}\to\mathbb{R}^{W}\), \(j=1,\ldots,\ell-1\), and \(A^{(\ell)}:\mathbb{R}^{W}\to\mathbb{R}\) are affine mappings. The coordinates of \(y\in\mathbb{R}^{n}\) are the entries of the matrices and offset vectors (biases) of the affine mappings \(A^{(j)}\), \(j=0,\ldots,\ell\), taken in a pre-assigned order.
The entries of \(A^{(j)}\) appear before those of \(A^{(j+1)}\) and the ordering for each \(A^{(j)}\) is done in the same way. We refer the reader to [7] and the references therein for a detailed study of such NNs with fixed width \(W=W_{0}\) and depth \(\ell\to\infty\). We view a feed-forward NN as a mapping that to each vector of parameters \(y\in\mathbb{R}^{n}\) assigns the output \(\Phi^{W,\ell}_{\sigma}(y)\in\Sigma(W,\ell,\sigma;w)\) of this network, \[y\to\Phi^{W,\ell}_{\sigma}(y), \tag{7}\] where all parameters (entries of the matrices and biases) are bounded by \(w(W,\ell)\), namely \[\Sigma(W,\ell,\sigma;w)=\Phi^{W,\ell}_{\sigma}(B_{\ell^{n}_{\infty}}(w(W,\ell))),\] with \(\Phi^{W,\ell}_{\sigma}\) being defined in (5). Lower bounds for the error of approximation of a class \({\cal K}\subset X\) by the outputs of DNNs (when \(W=W_{0}\) for a fixed \(W_{0}\) and \(\ell\to\infty\), in which \(n\asymp\ell\)) and SNNs (when \(\ell=1\) and \(W\to\infty\), in which \(n\asymp W\)) have been discussed in [15] in the case of bounded Lipschitz or ReLU activation functions. In this paper, we state similar results for any feed-forward NN with a general Lipschitz activation function. We use the approach from [15] and first show that the mapping (7) is a Lipschitz mapping.

## 3 Feed-forward NNs are Lipschitz mappings

Let us denote by \[L:=\max\{L^{\prime},|\sigma(0)|\}, \tag{8}\] where \(L^{\prime}\) is the Lipschitz constant of \(\sigma\). Then the following theorem is a generalization of Theorems 3 and 5 from [15] to the case of any feed-forward NN. **Theorem 3.1**.: _Let \(X\) be a Banach space such that \(C([0,1]^{d})\) is continuously embedded in \(X\). Then the mapping \(\Phi^{W,\ell}_{\sigma}:(B_{\ell^{n}_{\infty}}(w(W,\ell)),\|\cdot\|_{\ell^{n}_{\infty}})\to X\), defined in (5) with a Lipschitz function \(\sigma\), is an \(L_{n}\)-Lipschitz mapping, that is,_ \[\|\Phi^{W,\ell}_{\sigma}(y)-\Phi^{W,\ell}_{\sigma}(y^{\prime})\|_{X}\leq L_{n}\|y-y^{\prime}\|_{\ell^{n}_{\infty}},\quad y,y^{\prime}\in B_{\ell^{n}_{\infty}}(w(W,\ell)).\] _Moreover, there are constants \(c_{1},c_{2}>0\) such that_ \[2^{c_{1}\ell\log_{2}(W(w+1))}<L_{n}<2^{c_{2}\ell\log_{2}(W(w+1))},\quad w=w(W,\ell),\] _provided \(LW\geq 2\)._ **Proof:** Let us first set up the notation \(\|g\|:=\max\limits_{1\leq i\leq W}\|g_{i}\|_{C(\Omega)}\), where \(g\) is the vector function \(g=(g_{1},\ldots,g_{W})^{T}\) whose coordinates \(g_{i}\in C(\Omega)\). We will also use \[w:=w(W,\ell),\quad\mbox{and}\quad\tilde{w}:=w+1.\] Let \(y,y^{\prime}\) be two parameters from \(B_{\ell^{n}_{\infty}}(w(W,\ell))\) that determine the continuous functions \(\Phi^{W,\ell}_{\sigma}(y)\), \(\Phi^{W,\ell}_{\sigma}(y^{\prime})\in\Sigma(W,\ell,\sigma;w)\). We fix \(x\in\Omega\) and denote by \[\eta^{(0)}(x):=\overline{\sigma}(A_{0}x+b^{(0)}),\quad\eta^{\prime(0)}(x):=\overline{\sigma}(A^{\prime}_{0}x+b^{\prime(0)}),\] \[\eta^{(j)}:=\overline{\sigma}(A_{j}\eta^{(j-1)}+b^{(j)}),\quad\eta^{\prime(j)}:=\overline{\sigma}(A^{\prime}_{j}\eta^{\prime(j-1)}+b^{\prime(j)}),\quad j=1,\ldots,\ell-1,\] \[\eta^{(\ell)}:=A_{\ell}\eta^{(\ell-1)}+b^{(\ell)},\quad\eta^{\prime(\ell)}:=A^{\prime}_{\ell}\eta^{\prime(\ell-1)}+b^{\prime(\ell)}.\] Note that \(A_{0},A^{\prime}_{0}\in\mathbb{R}^{W\times d}\), \(A_{j},A^{\prime}_{j}\in\mathbb{R}^{W\times W}\), \(b^{(j)},b^{\prime(j)}\in\mathbb{R}^{W}\), for \(j=0,\ldots,\ell-1\), while \(A_{\ell},A^{\prime}_{\ell}\in\mathbb{R}^{1\times W}\), and \(b^{(\ell)},b^{\prime(\ell)}\in\mathbb{R}\).
Each of the \(\eta^{(j)},\eta^{\prime(j)}\), \(j=0,\ldots,\ell-1\), is a continuous vector function with \(W\) coordinates, while \(\eta^{(\ell)},\eta^{\prime(\ell)}\) are the outputs of the NN with activation function \(\sigma\) and parameters \(y,y^{\prime}\), respectively. Since, see (8), \[|\sigma(t)|\leq|\sigma(t)-\sigma(0)|+|\sigma(0)|\leq L(|t|+1),\quad|\sigma(t_{1})-\sigma(t_{2})|\leq L|t_{1}-t_{2}|,\quad t_{1},t_{2}\in\mathbb{R},\] it follows that for any \(m\), vectors \(\bar{y},\hat{y},\eta\in\mathbb{R}^{m}\) and numbers \(y_{0},\hat{y}_{0}\in\mathbb{R}\), where \(\bar{y},y_{0}\) and \(\hat{y},\hat{y}_{0}\) are subsets of the coordinates of \(y,y^{\prime}\in\mathbb{R}^{n}\), respectively, we have \[|\sigma(\bar{y}\cdot\eta+y_{0})|\leq L(|\bar{y}\cdot\eta+y_{0}|+1)\leq L(m\|\eta\|_{\ell^{m}_{\infty}}+1)\|y\|_{\ell^{n}_{\infty}}+L\leq L(m\|\eta\|_{\ell^{m}_{\infty}}+1)w+L<L\tilde{w}m\|\eta\|_{\ell^{m}_{\infty}}+L\tilde{w}, \tag{9}\] \[|\sigma(\bar{y}\cdot\eta+y_{0})-\sigma(\hat{y}\cdot\eta+\hat{y}_{0})|\leq L(m\|\eta\|_{\ell^{m}_{\infty}}+1)\|y-y^{\prime}\|_{\ell^{n}_{\infty}}. \tag{10}\] Then we have \(\|\eta^{\prime(0)}\|<L\tilde{w}d+L\tilde{w}\) (when \(m=d\) and \(\eta=x\)) and \[\|\eta^{\prime(j)}\|<LW\tilde{w}\|\eta^{\prime(j-1)}\|+L\tilde{w},\quad j=1,\ldots,\ell,\] (when \(m=W\) and \(\eta=\eta^{\prime(j-1)}\)). One can show by induction that for \(j=1,\ldots,\ell\), \[\|\eta^{\prime(j)}\|\leq dW^{j}[L\tilde{w}]^{j+1}+L\tilde{w}\sum_{i=0}^{j}[LW\tilde{w}]^{i}.\] Therefore, we have that \[\|\eta^{\prime(j)}\|\leq dW^{j}[L\tilde{w}]^{j+1}+2L\tilde{w}[LW\tilde{w}]^{j}=(d+2)L\tilde{w}[LW\tilde{w}]^{j}, \tag{11}\] since \(LW\tilde{w}>LW\geq 2\). The above inequality also holds for \(j=0\). Clearly, we have \[\|\eta^{(0)}-\eta^{\prime(0)}\|\leq L(d+1)\|y-y^{\prime}\|_{\ell^{n}_{\infty}}=:C_{0}\|y-y^{\prime}\|_{\ell^{n}_{\infty}}.\] Suppose we have proved the inequality \[\|\eta^{(j-1)}-\eta^{\prime(j-1)}\|\leq C_{j-1}\|y-y^{\prime}\|_{\ell^{n}_{\infty}},\] for some constant \(C_{j-1}\). Then we derive that \[\|\eta^{(j)}-\eta^{\prime(j)}\| \leq L\|A_{j}\eta^{(j-1)}+b^{(j)}-A_{j}^{\prime}\eta^{\prime(j-1)}-b^{\prime(j)}\|\] \[\leq L\|A_{j}(\eta^{(j-1)}-\eta^{\prime(j-1)})\|+L\|(A_{j}-A_{j}^{\prime})\eta^{\prime(j-1)}\|+L\|b^{(j)}-b^{\prime(j)}\|\] \[\leq LW\|y\|_{\ell^{n}_{\infty}}\|\eta^{(j-1)}-\eta^{\prime(j-1)}\|+LW\|y-y^{\prime}\|_{\ell^{n}_{\infty}}\|\eta^{\prime(j-1)}\|+L\|y-y^{\prime}\|_{\ell^{n}_{\infty}}\] \[\leq (LW\tilde{w}C_{j-1}+LW(d+2)L\tilde{w}[LW\tilde{w}]^{j-1}+L)\|y-y^{\prime}\|_{\ell^{n}_{\infty}}\] \[= L(W\tilde{w}C_{j-1}+(d+2)[LW\tilde{w}]^{j}+1)\|y-y^{\prime}\|_{\ell^{n}_{\infty}}\] \[=: C_{j}\|y-y^{\prime}\|_{\ell^{n}_{\infty}},\] where we have used that \(\|y\|_{\ell^{n}_{\infty}}\leq w\), the bound (11), and the induction hypothesis.
The relation between \(C_{j}\) and \(C_{j-1}\) can be written as \[C_{0}=L(d+1),\quad C_{j}=L(W\tilde{w}C_{j-1}+(d+2)[LW\tilde{w}]^{j}+1),\quad j =1,\ldots,\ell.\] Clearly, \[C_{1}=L((d+1)LW\tilde{w}+(d+2)LW\tilde{w}+1)<(d+2)L(2LW\tilde{w}+1),\] and we obtain by induction that \[C_{\ell} < (d+2)L\left(\ell[LW\tilde{w}]^{\ell}+\sum_{i=0}^{\ell}[LW\tilde{ w}]^{i}\right).\] If we use the fact \(2\leq LW<LW\tilde{w}\), we derive the inequality \[C_{\ell}<(d+2)L(\ell+2)[LW\tilde{w}]^{\ell}.\] Finally, we have \[\|\Phi_{\sigma}^{W,\ell}(y)-\Phi_{\sigma}^{W,\ell}(y^{\prime})\|_{C( \Omega)} = \|\eta^{(\ell)}-\eta^{\prime(\ell)}\|\leq C_{\ell}\|y-y^{\prime}\|_ {\ell_{\infty}^{n}}\] \[< (d+2)L(\ell+2)[LW\tilde{w}]^{\ell}\|y-y^{\prime}\|_{\ell_{\infty}^ {n}},\] and therefore \[\|\Phi_{\sigma}^{W,\ell}(y)-\Phi_{\sigma}^{W,\ell}(y^{\prime})\|_{X}\leq c_{0} \|\Phi_{\sigma}^{W,\ell}(y)-\Phi_{\sigma}^{W,\ell}(y^{\prime})\|_{C(\Omega)} \leq\tilde{C}\ell[LW\tilde{w}]^{\ell}\|y-y^{\prime}\|_{\ell_{\infty}^{n}},\] where \(\tilde{C}=\tilde{C}(d,\sigma)\). Clearly, the Lipschitz constant \(L_{n}:=\tilde{C}\ell[LW\tilde{w}]^{\ell}\) is such that \(2^{c_{1}\ell\log_{2}(W(w+1))}<L_{n}<2^{c_{2}\ell\log_{2}(W(w+1))}\) for some \(c_{1},c_{2}>0\), and the proof is completed. \(\Box\) **Remark 3.2**.: _Note that the proof of Theorem 3.1 holds also in the case when every coordinate of \(\bar{\sigma}\), see (6), is chosen to be a different Lipschitz function \(\sigma\) as long as \(LW\geq 2\), where \(L\) is defined via (8)._ ## 4 Estimates from below for neural network approximation In this section, we consider Banach spaces \(X\) such that \(C([0,1]^{d})\) is continuously embedded in \(X\). Let us denote by \[E(f,\Sigma(W,\ell,\sigma;w))_{X}:=\inf_{y\in B^{n}_{\ell_{\infty}}(w)}\|f- \Phi_{\sigma}^{W,\ell}(y)\|_{X},\] the error of approximation in the norm \(\|\cdot\|_{X}\) of the element \(f\in\mathcal{K}\) by the set of outputs \(\Sigma(W,\ell,\sigma;w)\) of a feed-forward NN with width \(W\), depth \(\ell\), activation function \(\sigma\), and a bound \(w\) on its parameters \(y\), that is \(\|y\|_{\ell_{\infty}^{n}}\leq w\). We also denote by \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}:=\sup_{f\in\mathcal{K}}\;E(f, \Sigma(W,\ell,\sigma;w))_{X},\] the error for the class \(\mathcal{K}\subset X\). It follows from Theorem 3.1 that \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\geq d_{n}^{\gamma_{n}}(\mathcal{K} )_{X},\quad\text{with}\quad\gamma_{n}=2^{c\ell\log_{2}(W(w+1))}=:2^{\varphi(n )}, \tag{12}\] for some \(c>0\). Therefore, see (1), \[n\varphi(n)=\begin{cases}cn\ell\log_{2}(W(w+1)),\quad n\asymp W^{2}\ell,\quad \ell>1,\\ cn\log_{2}(n(w+1)),\quad n\asymp W,\quad\ell=1,\end{cases}\] and we can state the following corollary of (12) and Theorem 2.1. **Theorem 4.1**.: _Let \(\Sigma(W,\ell,\sigma;w)\) be the set of outputs of an \(n\) parameter NN with width \(W\), depth \(\ell\), Lipschitz activation function \(\sigma\) and weights and biases bounded by \(w\), where \(LW\geq 2\). 
Then, the error of approximation \(E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\) of a compact subset \(\mathcal{K}\) of a Banach space \(X\) by \(\Sigma(W,\ell,\sigma;w)\) satisfies the following estimates from below, provided we know the following information about the entropy numbers \(\epsilon_{n}(\mathcal{K})_{X}\) of \(\mathcal{K}\):_

* _if for_ \(\alpha>0\) _and_ \(\beta\in\mathbb{R}\) _we have_ \[\epsilon_{n}(\mathcal{K})_{X}\gtrsim\frac{[\log_{2}n]^{\beta}}{n^{\alpha}},\,n\in\mathbb{N},\] _then_ \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\gtrsim\begin{cases}\frac{1}{n^{\alpha}\ell^{\alpha}}\cdot\frac{[\log_{2}(n\ell\log_{2}(W(w+1)))]^{\beta}}{[\log_{2}(W(w+1))]^{\alpha}},\quad n\asymp W^{2}\ell,\quad\ell>1,\\ \frac{1}{n^{\alpha}}\cdot\frac{[\log_{2}(n\log_{2}(n(w+1)))]^{\beta}}{[\log_{2}(n(w+1))]^{\alpha}},\qquad\qquad\quad n\asymp W,\quad\ell=1.\end{cases}\]
* _if for_ \(\alpha>0\) _we have_ \[\epsilon_{n}(\mathcal{K})_{X}\gtrsim\left[\log_{2}n\right]^{-\alpha},\,n\in\mathbb{N},\] _then_ \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\gtrsim\begin{cases}[\log_{2}(n\ell\log_{2}(W(w+1)))]^{-\alpha},\quad n\asymp W^{2}\ell,\quad\ell>1,\\ \\ [\log_{2}(n\log_{2}(n(w+1)))]^{-\alpha},\quad\quad\quad n\asymp W,\quad\ell=1.\end{cases}\]

**Proof:** The proof follows directly from (12) and Theorem 2.1. \(\Box\) **Remark 4.2**.: _Theorem 4.1 gives various estimates from below depending on the behavior of the bound \(w=w(W,\ell)\) on the absolute values of the parameters of the NN. Here we state only one particular case. Under the conditions of Theorem 4.1 with \(w=w(W,\ell)=\mathrm{const}\), we have:_

* _if for_ \(\alpha>0\) _and_ \(\beta\in\mathbb{R}\) _we have_ \[\epsilon_{n}(\mathcal{K})_{X}\gtrsim\frac{[\log_{2}n]^{\beta}}{n^{\alpha}},\,n\in\mathbb{N},\] _then_ \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\gtrsim\begin{cases}\frac{1}{n^{\alpha}\ell^{\alpha}}\cdot\frac{[\log_{2}(n\ell\log_{2}W)]^{\beta}}{[\log_{2}W]^{\alpha}},\quad n\asymp W^{2}\ell,\quad\ell>1,\\ \\ \frac{1}{n^{\alpha}}\cdot[\log_{2}n]^{\beta-\alpha},\quad\quad\quad\quad\quad n\asymp W,\quad\ell=1.\end{cases}\]
* _if for_ \(\alpha>0\) _we have_ \[\epsilon_{n}(\mathcal{K})_{X}\gtrsim[\log_{2}n]^{-\alpha},\,n\in\mathbb{N},\] _then_ \[E(\mathcal{K},\Sigma(W,\ell,\sigma;w))_{X}\gtrsim\begin{cases}[\log_{2}(n\ell\log_{2}W)]^{-\alpha},\quad n\asymp W^{2}\ell,\quad\ell>1,\\ \\ [\log_{2}n]^{-\alpha},\quad\quad\quad\quad\quad\quad n\asymp W,\quad\ell=1.\end{cases}\]

**Acknowledgments:** G.P. was supported by the NSF Grant DMS 2134077 and ONR Contract N00014-20-1-278.
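To make the parameter-to-function mapping (7) concrete, the following minimal sketch evaluates a feed-forward NN as a function of its flat parameter vector \(y\) and estimates the Lipschitz ratio of Theorem 3.1 empirically. It assumes a tanh activation and approximates the sup-norm on a finite grid, so it is an illustration of the statement, not part of the proof; all sizes are arbitrary examples.

```python
import numpy as np

def phi(y, x, W=8, ell=3, d=2, sigma=np.tanh):
    """Evaluate the NN output Phi_sigma^{W,ell}(y) at points x in [0,1]^d.

    y : flat parameter vector (all weights and biases, as in (5)-(7));
    x : array of shape (num_points, d).
    """
    idx = 0
    def take(shape):
        nonlocal idx
        size = int(np.prod(shape))
        block = y[idx:idx + size].reshape(shape)
        idx += size
        return block
    eta = sigma(x @ take((d, W)) + take((W,)))        # eta^(0)
    for _ in range(ell - 1):                          # eta^(1), ..., eta^(ell-1)
        eta = sigma(eta @ take((W, W)) + take((W,)))
    return (eta @ take((W, 1)) + take((1,))).ravel()  # eta^(ell): scalar output

rng = np.random.default_rng(0)
W, ell, d, w = 8, 3, 2, 1.0
n = d * W + W + (ell - 1) * (W * W + W) + W + 1       # total parameter count
grid = rng.uniform(0.0, 1.0, size=(2000, d))          # sup-norm estimated on this grid

y = rng.uniform(-w, w, size=n)                        # parameters in B_{l_inf^n}(w)
yp = y + rng.uniform(-1e-4, 1e-4, size=n)             # a nearby parameter vector
ratio = np.max(np.abs(phi(y, grid) - phi(yp, grid))) / np.max(np.abs(y - yp))
print(f"empirical Lipschitz ratio: {ratio:.2f}")      # finite, as Theorem 3.1 predicts
```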
2308.11590
Vision-Based Intelligent Robot Grasping Using Sparse Neural Network
In the modern era of Deep Learning, network parameters play a vital role in model efficiency, but they come with limitations such as extensive computation and memory requirements, which may not be suitable for real-time intelligent robot grasping tasks. Current research focuses on how model efficiency can be maintained by introducing sparsity without compromising the accuracy of the model in the robot grasping domain. More specifically, in this research two light-weight neural networks have been introduced, namely Sparse-GRConvNet and Sparse-GINNet, which leverage sparsity in the robotic grasping domain for grasp pose generation by integrating the Edge-PopUp algorithm. This algorithm facilitates the identification of the top K% of edges by considering their respective score values. Both the Sparse-GRConvNet and Sparse-GINNet models are designed to generate high-quality grasp poses in real-time at every pixel location, enabling robots to effectively manipulate unfamiliar objects. We extensively trained our models using two benchmark datasets: Cornell Grasping Dataset (CGD) and Jacquard Grasping Dataset (JGD). Both Sparse-GRConvNet and Sparse-GINNet models outperform the current state-of-the-art methods in terms of performance, achieving an impressive accuracy of 97.75% with only 10% of the weights of GR-ConvNet and 50% of the weights of GI-NNet, respectively, on CGD. Additionally, Sparse-GRConvNet achieves an accuracy of 85.77% with 30% of the weights of GR-ConvNet, and Sparse-GINNet achieves an accuracy of 81.11% with 10% of the weights of GI-NNet on JGD. To validate the performance of our proposed models, we conducted extensive experiments using the Anukul (Baxter) hardware cobot.
Priya Shukla, Vandana Kushwaha, G C Nandi
2023-08-22T17:36:26Z
http://arxiv.org/abs/2308.11590v1
# Vision-Based Intelligent Robot Grasping Using Sparse Neural Network

###### Abstract

In the modern era of Deep Learning, network parameters play a vital role in model efficiency, but they come with limitations such as extensive computation and memory requirements, which may not be suitable for real-time intelligent robot grasping tasks. Current research focuses on how model efficiency can be maintained by introducing sparsity without compromising the accuracy of the model in the robot grasping domain. More specifically, in this research two light-weight neural networks have been introduced, namely Sparse-GRConvNet and Sparse-GINNet, which leverage sparsity in the robotic grasping domain for grasp pose generation by integrating the Edge-PopUp algorithm. This algorithm facilitates the identification of the top K% of edges by considering their respective score values. Both the Sparse-GRConvNet and Sparse-GINNet models are designed to generate high-quality grasp poses in real-time at every pixel location, enabling robots to effectively manipulate unfamiliar objects. We extensively trained our models using two benchmark datasets: Cornell Grasping Dataset (CGD) and Jacquard Grasping Dataset (JGD). Both Sparse-GRConvNet and Sparse-GINNet models outperform the current state-of-the-art methods in terms of performance, achieving an impressive accuracy of 97.75% with only 10% of the weights of GR-ConvNet and 50% of the weights of GI-NNet, respectively, on CGD. Additionally, Sparse-GRConvNet achieves an accuracy of 85.77% with 30% of the weights of GR-ConvNet, and Sparse-GINNet achieves an accuracy of 81.11% with 10% of the weights of GI-NNet on JGD. To validate the performance of our proposed models, we conducted extensive experiments using the Anukul (Baxter) hardware cobot.

Sparse Networks, GR-ConvNet, GI-NNet, Robotic Grasping, Edge-PopUp Algorithm

## I **Introduction**

Robotic grasping is essential for effective interactions between robots and the physical world. It involves the ability of robots to grasp objects accurately in dynamic and unstructured environments. Real-time grasping, with applications in industry, household robotics, and healthcare, has gained attention. However, intelligent grasping is complex, akin to human development, where we learn skilled manipulation through hand-eye coordination. Recent advancements in machine learning, computer vision, and deep learning offer the potential to create intelligent robot graspers. These developments could lead to autonomous robots effectively interacting with the world. Challenges remain in creating efficient computational learning architectures [10] with minimal trainable parameters [23, 7]. Learning in the human brain [22], driven by neuro-plasticity that reshapes neural connections through experience, provides useful insights. Unlike fixed artificial neural networks, the brain adapts continuously while consuming minimal energy. While back-propagation remains the dominant learning mechanism, the role of network structure deserves equal emphasis, which we support with rigorous experiments on predicting grasping rectangles for robots. In this study, we have introduced a novel approach that incorporates the Edge-PopUp algorithm [23] into grasp generation models such as GR-ConvNet [14] and GI-NNet [27]. This integration has resulted in our proposed models, namely Sparse-GRConvNet and Sparse-GINNet, which effectively address the challenge of real-time grasping of unfamiliar objects.
Our proposed Sparse-GRConvNet and Sparse-GINNet models are characterized by their lightweight nature, as they have a significantly reduced number of parameters. Despite their lightweight design, these models achieve accuracy levels comparable to state-of-the-art models, making them suitable for real-time applications. Overall, our work presents a practical solution that combines sparsity-based techniques, reduced parameterization, and high accuracy, enabling efficient real-time grasping of novel items. Fig. 1 shows the overview of grasp pose generation through the proposed models. In the Edge-PopUp algorithm, as illustrated in Fig. 2, each edge in the network is assigned a positive real value known as a score, in addition to its weight. These scores play a crucial role in the selection of a subnetwork. Specifically, we identify the top K% of edges with the highest scores and utilize their associated weights to construct the subnetwork. During the backward pass, a gradient estimator is employed to update the scores of all edges. This dynamic update process enables previously inactive or "dead" edges to become active again and rejoin the subnetwork. Importantly, only the score corresponding to each weight is updated during this process, while all other weights in the network remain unchanged. We have evaluated the performance of our models on both benchmark datasets: CGD [17] and JGD [5]. Both Sparse-GRConvNet and Sparse-GINNet models outperform the current state-of-the-art methods in terms of performance, achieving an impressive accuracy of 97.75% with only 10% of the weights of GR-ConvNet and 50% of the weights of GI-NNet, respectively, on CGD. Additionally, Sparse-GRConvNet achieves an accuracy of 85.77% with 30% of the weights of GR-ConvNet, and Sparse-GINNet achieves an accuracy of 81.11% with 10% of the weights of GI-NNet on the JGD. To validate the performance of our proposed models, we conducted extensive experiments using the Anukul (Baxter) hardware robot with our test object set. Here is a summary of our paper's significant contributions:

* We have introduced the integration of sparsity into the grasp rectangle generation networks GR-ConvNet and GI-NNet, and named the results Sparse-GRConvNet and Sparse-GINNet, respectively. To the best of our knowledge, this is the first time the concept of sparsity has been leveraged for grasp pose prediction.
* The Edge-PopUp algorithm is employed to choose the significant edges of GR-ConvNet and GI-NNet based on their assigned scores, leading to the creation of the more streamlined networks Sparse-GRConvNet and Sparse-GINNet.
* Our Sparse-GRConvNet and Sparse-GINNet models demonstrate a significant reduction in parameters. Despite their lightweight architecture, they achieve impressive levels of accuracy.
* To evaluate the effectiveness of our proposed models, we conducted comprehensive experiments utilizing the Anukul (Baxter) hardware robot in conjunction with our designated test object set.

## II **Related Work**

There is a lot of research being done to address the robot grasping problem. Finding a universal answer to this problem, however, is difficult, since it calls for study in a variety of fields, and the existing limitations of grasping hardware make it problematic to evaluate the suggested solutions. The two main kinds of grasping methods are described by Bohg et al. [3] in more detail. The first is analytical methods, which demand accurate modeling of the gripper and the item.
For instance, [25] extends the two-finger caging notion to multi-finger grippers to ensure item entrapment. When the initial guess for the grasping locations and the 3-D object model is supplied, Krug et al. [13] describe an effective approach for determining the contact areas. The goal of data-driven techniques, in contrast, is to develop the ability to suggest effective grasping configurations for any type of item by using either external user input [9] or annotated training data [11]. The best grasping candidate from the suggested choices is chosen using some heuristics or metrics. According to [9], the best-matched object with a known grasp configuration is used to synthesize the grasping configurations for unknown things. By using kinesthetic training, the robot is taught how to grasp familiar things. Utilizing the knowledge from earlier grasp attempts, the grasp choice gets better with time. In [11], Jiang et al. suggest representing grasping rectangles on the camera image plane to help detect suitable grasping poses. They provide a two-step approach based on this model to automatically locate the ideal oriented grasping rectangle for every given item. Deep learning-based approaches have recently been developed to address the grasping problem [4, 6]. Some of these methods make use of deep learning to forecast the optimal grasping pose based on sensory information. Annotated images are used by Lenz et al. [17] to identify the ideal grasping pose. Their method involves rotating rectangles to identify the appropriate and inappropriate grasping poses for each object in the image. To serve as a comparison point for related strategies, this data is made available as the CGD. With the use of this data, they train a two-stage deep neural network that generates the grasping rectangle with the highest score. The network's initial stage develops the ability to suggest qualified grasping candidates. In contrast, the second step picks out the best grasping rectangle by learning how to improve the candidates that were acquired. Similar to this, Redmon et al. [24] use the same dataset and suggest three distinct deep neural network designs to find strong grasp candidates. Their results outperform the strategy described in [17]. However, they don't offer any outcomes using a real robot. Using ResNet-50, a recent CNN, as their foundation, Kumra et al. [15] describe the most recent results on the CGD. On top of ResNet-50, they have suggested several shallow topologies for unimodal (RGB only) and multimodal (RGB-D) data. Their findings demonstrate that, on this benchmark dataset, the suggested unimodal and multimodal designs produce state-of-the-art outcomes. A deep neural network design for recognizing grasping rectangles from photos is also proposed by Guo et al. [8]. However, their method can only suggest rectangles that are horizontally aligned, which restricts its applicability to rotated objects.

Fig. 1: Overview of Grasp Pose Generation Process

An approach that forecasts the grasp candidates' robustness is put forth by Mahler et al. [19]. Three parameters are used to establish a grasp position: the grasp center coordinates, the vertical gripper angle, and the approach direction. They build a deep neural network to assess the robustness of a grasp candidate from depth photos using millions of artificial point clouds. Their approach makes use of parallel grippers and presupposes knowledge of the gripper's 3-D model.
They tested their method on a real robot and had over 90% success in grasping. In [20, 21], Morrison et al. propose GG-CNN and GG-CNN2, fully convolutional neural networks that predict the antipodal grasp pose for each pixel. Building upon this, Kumra et al. [14] further enhance the model's capabilities by incorporating residual modules and name their model GR-ConvNet. Although GG-CNN has a significantly smaller number of parameters compared to GR-ConvNet, it is important to note that GR-ConvNet outperforms GG-CNN in terms of accuracy on both benchmark datasets. Priya et al. in [27] proposed a lightweight model called GI-NNet, which is based on GG-CNN. They incorporated an inception module into the architecture of the model. The GI-NNet model achieved higher accuracy compared to GR-ConvNet on the CGD while also reducing the number of parameters required. In other recent work [18, 26], the researchers proposed a novel discriminative-generative model that combines the representation quality of VQ-VAE with GG-CNN and GG-CNN2, named RGGCNN and RGGCNN2, respectively. This integrated model has improved grasp pose prediction, especially in scenarios with limited labeled data. By leveraging the strengths of these different components, the model offers enhanced performance and robustness in grasp pose estimation. To address the challenge of the limited availability of labeled grasping datasets, a generative-based model has been proposed [16] that aims to generate grasp poses for both seen and unseen objects. Unlike previous approaches, our research introduces the concept of sparsity to grasp pose prediction models such as GR-ConvNet and GI-NNet. This novel approach allows us to develop lightweight networks that achieve comparable or even superior accuracy compared to the current state-of-the-art methods.

## III **Problem Formulation**

In this study, the problem of robotic grasping is defined as the prediction of a grasp for objects in a given scene. The grasp pose in the robot's frame is denoted by (1): \[G_{r}=(P,\theta_{r},W_{r},Q) \tag{1}\] where \(P=(x,y,z)\) refers to the end-effector's center position, \(\theta_{r}\) represents its rotation around the z-axis, \(W_{r}\) represents the required width for it, and \(Q\) denotes the quality score for the grasp. The predicted grasp pose for an n-channel image \(I\in\mathbb{R}^{n\times h\times w}\) with a height of \(h\) and width of \(w\) is denoted as (2): \[G_{i}=(x,y,\theta_{i},W_{i},Q) \tag{2}\] Here, \((x,y)\) is the grasp's center and \(W_{i}\) the required width, both in the image's frame, \(\theta_{i}\) represents the rotation in the camera's frame of reference, and \(Q\) remains the same scalar as mentioned above. To transform the predicted grasp pose (\(G_{i}\)), which is in the image coordinate plane, into the robot coordinate frame (\(G_{r}\)), we apply a series of transformations using (3): \[G_{r}=T_{c}^{r}\Big{(}T_{i}^{c}(G_{i})\Big{)} \tag{3}\] where a grasp pose \(G_{i}\) in image space is transformed using \(T_{i}^{c}\) into the camera's 3D space, and then the camera space is transformed into the robot space using \(T_{c}^{r}\). Our aim is to create a lightweight network that has the capability to predict the optimal grasp pose for unfamiliar objects based on an n-channel image of the scene.

## IV **Methodology**

In this section, we explain our methodology, starting with a quick summary.
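As a concrete anchor for the grasp representation in (2), the following minimal sketch shows how a single best grasp can be read off from pixel-wise quality, angle, and width maps of the kind the models described next produce; all function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def decode_grasp(q_map, cos_map, sin_map, width_map):
    """Read off the best pixel-wise grasp G_i = (x, y, theta_i, W_i, Q) from the
    four output maps: grasp quality, cos(2*theta), sin(2*theta), and width.

    All maps are assumed to share the shape (H, W)."""
    y, x = np.unravel_index(np.argmax(q_map), q_map.shape)  # highest-quality pixel
    theta = 0.5 * np.arctan2(sin_map[y, x], cos_map[y, x])  # recover theta from 2*theta
    return x, y, theta, width_map[y, x], q_map[y, x]
```

Encoding the angle as \(\cos 2\theta\) and \(\sin 2\theta\) avoids the discontinuity of a raw angle regression, since an antipodal grasp is symmetric under a rotation by \(\pi\).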
### **Preliminaries**

In our study, we have adopted GR-ConvNet as the base architecture for Sparse-GRConvNet and GI-NNet as the base architecture for Sparse-GINNet. Furthermore, we have integrated the Edge-PopUp algorithm into our framework to enhance the grasp generation process. In this section, we will provide a comprehensive description of the architecture of GR-ConvNet, GI-NNet, and the Edge-PopUp algorithm.

Fig. 2: Edge-PopUp Algorithm [23]

#### IV-A1 **GR-ConvNet and GI-NNet**

GR-ConvNet and GI-NNet are generative grasping networks specifically designed to generate grasp poses for each pixel in an input image captured by a camera. These networks are designed to accommodate input images with dimensions of \(n\times 224\times 224\). Notably, the input modality is not limited to a specific type, such as depth-only or RGB-only images. Instead, GR-ConvNet and GI-NNet can effectively process input images with an arbitrary number of channels, making them suitable for various input modalities. This flexibility enables the models to be versatile and capable of handling different types of input data. The GR-ConvNet architecture consists of three convolutional layers, five residual layers, and transposed convolution layers. The convolutional layers are responsible for extracting relevant features from the input image, allowing the network to identify important patterns. The residual layers further enhance the network's ability to capture fine details and comprehend complex patterns in the input image. Finally, the transposed convolution layers upsample the features and generate the final output images. The GI-NNet architecture, in turn, comprises three 2D convolutional layers for feature extraction, followed by five inception blocks that apply different filter sizes in parallel to reduce computation. Transposed convolution layers are then used for upsampling, and a final convolution operation generates the desired output images. This architecture efficiently captures contextual information, reduces complexity, and produces accurate grasp pose predictions. The output of both GR-ConvNet and GI-NNet consists of four images, each representing specific aspects of the grasp pose for every pixel in the input image. These images correspond to grasp quality, angle (represented as \(\sin 2\theta\) and \(\cos 2\theta\)), and grasp width. Each pixel in these output images contains information about the grasp quality, angle, and width at that particular location in the input image. By utilizing the generated output images, both networks can predict the grasp pose for each pixel, providing a detailed representation of the grasp pose for the object present within the input image. These predicted grasp poses offer valuable information for robotic manipulation and can be used to guide robots in effectively grasping objects. However, the large number of parameters in GR-ConvNet (1,900,900) makes it less suitable for real-time applications. While GI-NNet offers a reduced parameter count of 592,300, it is still relatively high compared to others [20, 21], which have significantly fewer parameters (approximately 62K and 66K, respectively).

#### IV-A2 **Edge-PopUp Algorithm**

In [23], researchers have presented a concept suggesting the presence of potential sub-networks within an over-parameterized neural network that achieve comparable performance. They have developed an algorithm named Edge-PopUp for selecting the relevant edges to create a subnetwork.
```
0: Image I, score \(S_{uv}\) and weight \(W_{uv}\) for each edge \((u,v)\) of the network
0: After training, a potential subnetwork is obtained consisting of the edges with top-K% scores.
Procedure:
1: Initialize the network parameters (scores, weights, and biases) for each layer.
2: In the forward pass, use in each layer only the edges corresponding to the top K% of scores and compute the gradient.
3: In the backward pass, update all the scores with the gradient estimator
\[S_{uv}\gets S_{uv}-\alpha\frac{\partial L}{\partial I_{v}}W_{uv}Z_{u}\] (4)
where \(W_{uv}Z_{u}\) denotes the weighted output of neuron \(u\), and \(I_{v}\) denotes the input of neuron \(v\).
4: Repeat until the accuracy converges.
```

**Algorithm 1** Edge-PopUp Algorithm

In Algorithm 1, the steps of the Edge-PopUp algorithm are described. Initially, the neural network parameters, specifically the weights, are initialized along with an additional parameter called the "score" for each edge in every layer. During the forward pass, only the top K% of edges are selected based on their score values. Subsequently, during the backward pass, all score values are updated using the gradient estimator. Through this dynamic update process, previously inactive or "dead" edges have the opportunity to become active once again and rejoin the subnetwork. It is important to note that only the score associated with each weight is updated, while all other weights in the network remain unchanged. After the training process, a potential subnetwork, also known as a sparse network, is obtained that demonstrates a comparable performance to the original fully connected network.

Fig. 3: Architecture of Sparse-GRConvNet

### **Proposed Approach**

Here, we have explored the integration of the Edge-PopUp algorithm with GR-ConvNet and GI-NNet to develop our proposed models, Sparse-GRConvNet and Sparse-GINNet. This integration allows us to leverage the benefits of sparsity, resulting in lightweight networks with reduced computational requirements.

#### IV-B1 **Architecture of Sparse-GRConvNet and Sparse-GINNet**

For visual representations of the proposed models, namely Sparse-GRConvNet and Sparse-GINNet, refer to Fig. 3 and Fig. 4, respectively. The key modification involves replacing the convolutional layers with mask-convolutional layers and substituting transposed convolution layers with transposed mask-convolutional layers within their respective base architectures. Furthermore, the original batch normalization is replaced with non-affine batch normalization. The mask-convolutional layers and transposed mask-convolutional layers serve as wrapper classes for the corresponding convolutional layers, incorporating the Edge-PopUp algorithm. This algorithm is responsible for selecting the most relevant edges based on their top score values, enhancing the sparsity and computational efficiency of both models. Fig. 5 shows the inception block of Sparse-GINNet. Both the Sparse-GRConvNet and Sparse-GINNet models take an input image of size \(224\times 224\) with n channels and produce four output images: a quality image, two angle images (for \(\sin 2\theta\) and \(\cos 2\theta\)), and a width image, all representing pixel-wise grasp information. By utilizing these output images, the models predict the grasp pose for the object. The overall parameter count of Sparse-GRConvNet and Sparse-GINNet varies depending on the chosen sparsity value (K). Different sparsity values lead to changes in the number of parameters throughout the model.
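A minimal PyTorch-style sketch of such a mask-convolutional layer is given below. It assumes a straight-through gradient for the top-K% mask, so that only the scores receive updates in the spirit of Eq. (4) while the weights stay frozen; the class and parameter names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMask(torch.autograd.Function):
    """Build a 0/1 mask keeping the top K% of edges by score; pass gradients
    straight through so the scores of *all* edges keep being updated."""
    @staticmethod
    def forward(ctx, scores, k):
        mask = torch.zeros_like(scores)
        flat = scores.flatten()
        kept = max(1, int(k * flat.numel()))
        mask.flatten()[torch.topk(flat, kept).indices] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through estimator

class MaskConv2d(nn.Conv2d):
    """Convolution whose weights are fixed; only per-edge scores are trained."""
    def __init__(self, *args, sparsity=0.1, **kwargs):
        super().__init__(*args, **kwargs)
        self.scores = nn.Parameter(torch.rand_like(self.weight))
        self.weight.requires_grad = False   # weights remain unchanged
        if self.bias is not None:
            self.bias.requires_grad = False
        self.sparsity = sparsity

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.sparsity)
        return F.conv2d(x, self.weight * mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Example: a layer that keeps only 10% of its edges active.
layer = MaskConv2d(3, 32, kernel_size=3, padding=1, sparsity=0.1)
```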
#### IV-B2 **Training Details**

During the training process, RGB-D images are utilized, and various train-test splits of the dataset are employed for different sparsity values. The training is conducted with a batch size of 8, using the Adam optimizer with a learning rate of 0.001. The Sparse-GRConvNet model is trained for a total of 50 epochs, while the Sparse-GINNet model is trained for 30 epochs.

## V **Performance Evaluation**

### **Datasets**

For both the training and testing of our models, we utilized two benchmark datasets: the Cornell Grasping Dataset and the Jacquard Grasping Dataset.

#### V-A1 **Cornell Grasping Dataset (CGD)**

The Cornell Grasping Dataset [17] is a comprehensive dataset that contains RGB-D images of various real objects. It consists of 885 RGB-D photos, capturing 240 different objects. The dataset provides annotations for positive and negative grasps, with a total of 5,110 positive grasps and 2,909 negative grasps. The annotations are represented as rectangular bounding boxes with pixel-level coordinates, depicting antipodal grasps. To augment the dataset, we employed random cropping, zooming, and rotation techniques, resulting in approximately 51,000 grasps. During training, we only considered positively labeled grasps from the dataset.

#### V-A2 **Jacquard Grasping Dataset (JGD)**

The JGD [5] is created using a portion of ShapeNet, a substantial dataset of CAD models. This dataset focuses on effective gripping positions and includes annotations generated from grasp attempts conducted in a simulated environment. It consists of 54,000 RGB-D photos, each accompanied by annotations indicating the locations of effective gripping positions on the objects. These annotations were derived from the grasp attempts performed within the simulated setting, resulting in a total of 1.1 million occurrences of successful grasps. With such a significant number of grip occurrences and a large collection of RGB-D photos, this dataset offers comprehensive coverage of grasping scenarios and object variations, so no data augmentation is needed during model training.

### **Grasp Detection Metric**

To ensure a fair comparison of our models' performance, we have used the rectangle metric [11] proposed by Jiang et al. As per the rectangle metric, a grasp is considered good if it satisfies the following two criteria:

* The intersection over union (IoU) score between the predicted and the ground truth grasp rectangle should be higher than 25%.
* The orientation offset between the ground truth and predicted grasping rectangles should be less than \(30^{\circ}\).

### **Implementation**

#### V-C1 **Setup**

Our trial setup consists of Anukul, a Baxter cobot built by Rethink Robotics, equipped with two arms that have seven degrees of freedom. Additionally, an externally attached high-resolution stereo camera (Intel RealSense D435) is positioned at the torso of Anukul, as depicted in Fig. 6. For our real-time grasping experiments, we have employed a parallel-plate gripper with two fingers, allowing Anukul to grasp objects effectively.

#### V-C2 **Procedure to perform robotic grasping**

First, the RGB-D data captured by the external camera serve as input to the trained Sparse-GRConvNet and Sparse-GINNet models, which predict the grasp pose for the object in the scene. The highest-scoring grasp pose is selected as the target grasping point for execution. Then, the selected grasp pose (\(G_{i}\)), which is in the image coordinate plane, is transformed to the robot coordinate frame using (3) to obtain (\(G_{r}\)).
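A minimal sketch of this transformation (3), assuming a pinhole camera with intrinsic matrix \(K\) and a calibrated \(4\times 4\) camera-to-robot homogeneous transform; the helper name and arguments are illustrative:

```python
import numpy as np

def image_to_robot(x, y, depth, K, T_cr):
    """Apply G_r = T_c^r(T_i^c(G_i)): back-project the image-plane grasp
    center (x, y) with its measured depth into the camera frame, then map
    the camera-frame point into the robot frame via the extrinsics."""
    # T_i^c: pixel + depth -> camera frame (pinhole model, intrinsics K)
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(x - cx) * depth / fx, (y - cy) * depth / fy, depth, 1.0])
    # T_c^r: camera frame -> robot frame (4x4 homogeneous matrix)
    return (T_cr @ p_cam)[:3]
```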
After computing \(G_{r}\) in the robot coordinate frame, the joint angles for Anukul's arm are computed using an inverse kinematics solution and passed to the control system of Anukul to execute the grasping.

### **Results**

In this section, we discuss the results of our experiments. We assessed the effectiveness of Sparse-GRConvNet and Sparse-GINNet on two benchmark datasets: CGD and JGD. Additionally, we evaluated our models' performance on a test object set, as depicted in Fig. 7.

#### V-D1 **Cornell Grasping Dataset (CGD)**

We have conducted a performance evaluation of Sparse-GRConvNet and Sparse-GINNet on the CGD, considering various sparsity (K) values (10%, 30%, 50%, 70%, 90%) for different dataset split ratios (10-90, 30-70, 50-50, 70-30, 90-10). TABLE I summarizes the accuracy achieved by the Sparse-GRConvNet model across different train/test split ratios and sparsity values. The accuracy achieved by the Sparse-GINNet model for different train/test split ratios and sparsity values is tabulated in TABLE II.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Train-Test Split** & **GR-ConvNet (base)** & **Sparse-GRConvNet (ours)** & **GI-NNet (base)** & **Sparse-GINNet (ours)** \\ \hline 10-90 & \(87.64\) & \(81.30\) & \(89.88\) & \(87.57\) \\ \hline 30-70 & \(89.88\) & \(91.61\) & \(95.50\) & \(91.77\) \\ \hline 50-50 & \(96.62\) & \(93.67\) & \(96.62\) & \(93.22\) \\ \hline 70-30 & \(95.50\) & \(92.85\) & \(98.87\) & \(93.60\) \\ \hline 90-10 & \(97.75\) & \(97.75\) & \(98.87\) & \(97.75\) \\ \hline \end{tabular} \end{table} TABLE IV: Comparative study of proposed models with their base models on CGD for varied train-test splits.

Fig. 8: Performance of Sparse-GRConvNet on CGD for different sparsity values on different split ratios.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Model \(\rightarrow\)** & \multicolumn{2}{c|}{**Sparse-GRConvNet**} & \multicolumn{2}{c|}{**Sparse-GINNet**} \\ \hline **Sparsity (K) \(\downarrow\)** & **Parameters** & **Accuracy (\%)** & **Parameters** & **Accuracy (\%)** \\ \hline K=10\% & \(190,090\) & **97.75** & \(59,230\) & \(95.50\) \\ \hline K=30\% & \(570,270\) & **97.75** & \(177,690\) & \(96.62\) \\ \hline K=50\% & \(950,450\) & \(96.62\) & \(296,150\) & **97.75** \\ \hline K=70\% & \(1,330,630\) & \(92.13\) & \(414,610\) & \(93.60\) \\ \hline K=90\% & \(1,710,810\) & \(62.92\) & \(533,070\) & \(62.92\) \\ \hline \end{tabular} \end{table} TABLE III: Comparison of our proposed models on CGD at different sparsity values.

Fig. 9: Performance of Sparse-GINNet on CGD for different sparsity values on different split ratios.

As shown in Fig. 8 and Fig. 9, the proposed models achieve comparable accuracy with a smaller number of weights in most of the cases. TABLE III shows the comparison of both proposed models' parameters and the best accuracy achieved by them for different sparsity values on the CGD, while TABLE IV shows the overall performance of the proposed models in comparison to their base models, which shows that our models have achieved remarkable accuracy with a reduced number of network parameters. As per TABLE V, Sparse-GRConvNet has attained an accuracy of 97.75%, equivalent to its base model, but with a reduced parameter count of 190,090, which is only 10% of the weights of the base model. Likewise, Sparse-GINNet has achieved an accuracy of 97.75%, comparable to its base model, with only 296,150 parameters, which is 50% of the weights of the base model.
From the obtained results, it is evident that our models are lightweight and appropriate for real-time applications. In TABLE VI, we have compared the performance of our proposed models with existing state-of-the-art methods on the CGD. Fig. 10 shows the performance visualization of both proposed models for some input images from the CGD.

#### V-D2 **Jacquard Grasping Dataset (JGD)**

For the JGD, we likewise compared the accuracy and the number of parameters between the proposed models and other existing methods. Fig. 11 shows the performance visualization of both proposed models on input images from the JGD.

#### V-D3 **Results on Test Object Set**

Fig. 12 shows the performance visualization of both proposed models on the test object set.

## VI Conclusion

Designing a slim yet highly accurate deep neural network architecture is crucial, especially for real-time applications like robot grasping. We argued that searching for an appropriate neural network structure is as important as deriving an ad hoc architecture based solely on empirical insights and attempting to determine weight parameters using the back-propagation algorithm. We have introduced the Sparse-GRConvNet and Sparse-GINNet models by integrating an edge-selecting algorithm with grasp generation models such as GR-ConvNet and GI-NNet to harness the benefits of sparsity. These models exhibit a remarkable reduction in parameters while maintaining comparable accuracy. We evaluated the performance of our models on two benchmark datasets, namely CGD and JGD, and achieved significant accuracy. To explore the impact of sparsity, we trained our models using different data split ratios. Both Sparse-GRConvNet and Sparse-GINNet outperform the current state-of-the-art methods, achieving an impressive accuracy of 97.75% with only 10% of the weights of GR-ConvNet and 50% of the weights of GI-NNet, respectively, on CGD. Additionally, Sparse-GRConvNet achieves an accuracy of 85.77% with 30% of the weights of GR-ConvNet, and Sparse-GINNet achieves an accuracy of 81.11% with 10% of the weights of GI-NNet on the JGD, which is a remarkable improvement. Future endeavors should aim to either eliminate or minimize the reliance on the back-propagation algorithm. This could involve replacing it with suitable structures and conducting parameter tuning through the application of associative rules, such as those rooted in Hebb's law.

## Acknowledgements

The present research is partially funded by the I-Hub Foundation for Cobotics (Technology Innovation Hub of IIT-Delhi set up by the Department of Science and Technology, Govt. of India). Some of the experimental results presented here were obtained by our undergraduate and dual-degree students, including Prasanth Kota, Mridul Mahajan and others, during their semester-long projects and otherwise.
2303.07080
Bag of Tricks with Quantized Convolutional Neural Networks for image classification
Deep neural networks have been proven effective in a wide range of tasks. However, their high computational and memory costs make them impractical to deploy on resource-constrained devices. To address this issue, quantization schemes have been proposed to reduce the memory footprint and improve inference speed. While numerous quantization methods have been proposed, they lack a systematic analysis of their effectiveness. To bridge this gap, we collect and improve existing quantization methods and propose a gold guideline for post-training quantization. We evaluate the effectiveness of our proposed method with two popular models, ResNet50 and MobileNetV2, on the ImageNet dataset. By following our guidelines, no accuracy degradation occurs even after directly quantizing the model to 8 bits without additional training. A quantization-aware training based on the guidelines can further improve the accuracy in lower-bit quantization. Moreover, we have integrated a multi-stage fine-tuning strategy that works harmoniously with existing pruning techniques to reduce costs even further. Remarkably, our results reveal that a quantized MobileNetV2 with 30\% sparsity actually surpasses the performance of the equivalent full-precision model, underscoring the effectiveness and resilience of our proposed scheme.
Jie Hu, Mengze Zeng, Enhua Wu
2023-03-13T13:05:33Z
http://arxiv.org/abs/2303.07080v1
# Bag of tricks with quantized convolutional neural networks for image classification

###### Abstract

Deep neural networks have been proven effective in a wide range of tasks. However, their high computational and memory costs make them impractical to deploy on resource-constrained devices. To address this issue, quantization schemes have been proposed to reduce the memory footprint and improve inference speed. While numerous quantization methods have been proposed, they lack a systematic analysis of their effectiveness. To bridge this gap, we collect and improve existing quantization methods and propose a gold guideline for post-training quantization. We evaluate the effectiveness of our proposed method with two popular models, ResNet50 and MobileNetV2, on the ImageNet dataset. By following our guidelines, no accuracy degradation occurs even after directly quantizing the model to 8 bits without additional training. A quantization-aware training based on the guidelines can further improve the accuracy in lower-bit quantization. Moreover, we have integrated a multi-stage fine-tuning strategy that works harmoniously with existing pruning techniques to reduce costs even further. Remarkably, our results reveal that a quantized MobileNetV2 with 30% sparsity actually surpasses the performance of the equivalent full-precision model, underscoring the effectiveness and resilience of our proposed scheme.

Jie Hu\({}^{1,2*}\), Mengze Zeng\({}^{3*}\), Enhua Wu\({}^{1,2,4\dagger}\)
\({}^{1}\)State Key Laboratory of Computer Science, ISCAS \({}^{2}\)University of Chinese Academy of Sciences \({}^{3}\)ByteDance Inc. \({}^{4}\)University of Macau
[email protected], [email protected], [email protected]

Model Quantization, Acceleration, Convolutional Neural Networks, Image Classification

Footnote \(\dagger\): Corresponding author. This work is supported in part by NSFC Grants (62072449).

## 1 Introduction

Since the introduction of AlexNet [1], there has been an exponential increase in the number of exceptional convolutional neural networks proposed, resulting in promising outcomes for a variety of visual tasks [1, 2, 3, 4, 5]. Despite the remarkable results, deploying CNN models on embedded or mobile devices proves challenging, as it poses an immense burden on computation and memory storage. To address this issue, a significant amount of research has been dedicated to reducing the associated costs, thereby making CNN models more practical for real-world applications. Broadly speaking, this line of research can be categorized into three distinct areas: efficient structure design, network pruning, and network quantization.

Efficient structure design is an active research challenge, with the separable convolution [6] proposed as an effective technique. This method factorizes the standard convolution into a depthwise and a pointwise convolution, reducing computation. Successful examples of its use in efficient networks include MobileNets [6, 7, 8] and ShuffleNets [9, 10]. These networks are widely used on resource-constrained devices and have shown promise in practical applications. Besides that, various pruning strategies [11] have also been proposed to reduce both the computational and storage burdens. However, these methods often incur accuracy degradation, making them less attractive for practical applications. As such, the focus has shifted towards developing efficient structural design methods that can provide high accuracy without compromising on computational efficiency.
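As a quick illustration of the factorization just described, the following sketch compares the parameter counts of a standard convolution and its depthwise-plus-pointwise replacement; the channel sizes are arbitrary examples.

```python
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

cin, cout, k = 64, 128, 3
standard = nn.Conv2d(cin, cout, k, padding=1, bias=False)
separable = nn.Sequential(                                      # factorized convolution
    nn.Conv2d(cin, cin, k, padding=1, groups=cin, bias=False),  # depthwise: per-channel 3x3
    nn.Conv2d(cin, cout, 1, bias=False),                        # pointwise: 1x1 channel mixing
)
print(count_params(standard), count_params(separable))  # 73728 vs 8768
```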
Quantization is a highly efficient method for deploying models. Its effectiveness has been demonstrated in recent work [12, 13, 14, 15, 16], and its hardware-friendly properties, which can significantly reduce computational and memory costs, have made it increasingly popular in the industry. However, despite its advantages, there exists a significant accuracy gap between a full-precision model and its quantized counterpart. This gap is especially pronounced in low-bitwidth quantization (e.g., 4 bits). Nevertheless, researchers are actively working on closing this gap and making quantization even more effective. This paper presents a systematic exploration of the effects of each quantization factor on convolutional neural networks, forming a gold guideline for post-training quantization. Additionally, our proposed multi-stage fine-tuning strategy enables quantization to work in conjunction with existing pruning strategies. Exhaustive experiments with ResNet-50 and MobileNetV2, quantized using different bitwidths on ImageNet, showcase the effectiveness of our proposed method.
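To ground the terminology, here is a minimal sketch of symmetric per-channel post-training weight quantization: a simplified quantize/dequantize round trip, not the paper's full guideline; the bit-width and tensor shapes are illustrative.

```python
import torch

def quantize_weight_per_channel(w, bits=8):
    """Symmetric per-output-channel weight quantization: map each channel's
    weights to integers in [-(2^(b-1)-1), 2^(b-1)-1], then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    # one scale per output channel, from the channel's max absolute weight
    scale = w.abs().amax(dim=(1, 2, 3), keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale, scale  # dequantized weights and per-channel scales

w = torch.randn(128, 64, 3, 3)                  # a conv weight tensor
w_q, s = quantize_weight_per_channel(w, bits=8)
print((w - w_q).abs().max().item())             # error bounded by scale / 2
```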
2307.03711
Error-tolerant quantum convolutional neural networks for symmetry-protected topological phases
The analysis of noisy quantum states prepared on current quantum computers is getting beyond the capabilities of classical computing. Quantum neural networks based on parametrized quantum circuits, measurements and feed-forward can process large amounts of quantum data to reduce measurement and computational costs of detecting non-local quantum correlations. The tolerance of errors due to decoherence and gate infidelities is a key requirement for the application of quantum neural networks on near-term quantum computers. Here we construct quantum convolutional neural networks (QCNNs) that can, in the presence of incoherent errors, recognize different symmetry-protected topological phases of generalized cluster-Ising Hamiltonians from one another as well as from topologically trivial phases. Using matrix product state simulations, we show that the QCNN output is robust against symmetry-breaking errors below a threshold error probability and against all symmetry-preserving errors provided the error channel is invertible. This is in contrast to string order parameters and the output of previously designed QCNNs, which vanish in the presence of any symmetry-breaking errors. To facilitate the implementation of the QCNNs on near-term quantum computers, the QCNN circuits can be shortened from logarithmic to constant depth in system size by performing a large part of the computation in classical post-processing. These constant-depth QCNNs reduce sample complexity exponentially with system size in comparison to the direct sampling using local Pauli measurements.
Petr Zapletal, Nathan A. McMahon, Michael J. Hartmann
2023-07-07T16:47:02Z
http://arxiv.org/abs/2307.03711v2
# Error-tolerant quantum convolutional neural networks for symmetry-protected topological phases ###### Abstract The analysis of noisy quantum states prepared on current quantum computers is getting beyond the capabilities of classical computing. Quantum neural networks based on parametrized quantum circuits, measurements and feed-forward can process large amounts of quantum data to reduce measurement and computational costs of detecting non-local quantum correlations. The tolerance of errors due to decoherence and gate infidelities is a key requirement for the application of quantum neural networks on near-term quantum computers. Here we construct quantum convolutional neural networks (QCNNs) that can, in the presence of incoherent errors, recognize different symmetry-protected topological phases of generalized cluster-Ising Hamiltonians from one another as well as from topologically trivial phases. Using matrix product state simulations, we show that the QCNN output is robust against symmetry-breaking errors below a threshold error probability and against all symmetry-preserving errors provided the error channel is invertible. This is in contrast to string order parameters and the output of previously designed QCNNs, which vanish in the presence of any symmetry-breaking errors. To facilitate the implementation of the QCNNs on near-term quantum computers, the QCNN circuits can be shortened from logarithmic to constant depth in system size by performing a large part of the computation in classical post-processing. These constant-depth QCNNs reduce sample complexity exponentially with system size in comparison to the direct sampling using local Pauli measurements. ## I Introduction Existing noisy intermediate-scale quantum (NISQ) computers can perform computations that are challenging for classical computers [1]. However, quantum computing hardware and quantum algorithms need to be further developed to enable the exploitation of quantum computers in areas such as the simulation of many-body systems [2; 3] and machine learning [4]. One of the major challenges in developing scalable quantum computers is the characterization of noisy quantum data produced by near-term quantum hardware. With increasing system size, standard characterization techniques using direct measurements and classical post-processing become prohibitively demanding due to large measurement counts and computational efforts. While many local properties can be efficiently determined using randomized measurements [5], global properties of quantum states are typically hard to estimate. Quantum machine learning techniques based on the direct processing of quantum data on quantum processors can substantially reduce the measurement costs, including quantum principal component analysis [6], quantum autoencoders [7; 8; 9], certification of Hamiltonian dynamics [10; 11], and quantum reservoir processing [12]. Moreover, quantum neural networks based on parametrized quantum circuits, measurements and feed-forward can process large amounts of quantum data to detect non-local quantum correlations with reduced measurement and computational efforts compared to standard characterization techniques [13; 14; 15; 16; 17]. A key requirement for employing quantum neural networks to characterize noisy quantum data produced by near-term quantum hardware is the tolerance to errors due to decoherence and gate infidelities. The characterization of non-local correlations in quantum states is of key importance to condensed matter physics.
It is required for the classification of topological quantum phases of matter [18; 19] and for understanding new strongly correlated materials [20] such as high-temperature superconductors [21]. Classical machine learning tools for the recognition of topological phases of matter have recently been studied, uncovering phase diagrams from data produced by numerical simulations [22; 23; 24] and measured in experiments [25; 26; 27; 28]. Moreover, quantum many-body states belonging to topological quantum phases have been prepared on quantum computers using exact matrix product state representations [29], unitary quantum circuits [30], and measurement and feed-forward [31]. Properties of topological phases have been probed on quantum computers by measuring characteristic quantities [29; 32] such as string order parameters (SOPs) [33; 34]. Classical machine learning algorithms have been shown to classify topological quantum phases from classical shadows formed by randomized measurements [35]. However, the rapidly increasing sample complexity with system size remains an outstanding problem for such approaches. In Ref. [14], quantum convolutional neural networks (QCNNs) have been proposed to recognize symmetry-protected topological (SPT) phases [18; 19] with reduced sample complexity compared to the direct measurement of SOPs. Such QCNNs can be trained to identify characteristics of SPT phases from training data [14; 36; 37]. Alternatively, QCNNs can be analytically constructed to mimic renormalization-group flow [14; 38], a method for classifying quantum phases [20]. A shallow QCNN has been implemented on a 7-qubit superconducting quantum processor in Ref. [39]. This QCNN has exhibited robustness against incoherent errors on the NISQ device, which allowed for the recognition of an SPT phase with a higher fidelity than the direct measurement of SOPs. However, the propagation of errors leads to a rapid growth of error density in deeper QCNNs due to the reduction of qubit number from one QCNN layer to the next, which represents a central problem. Here we overcome this problem by constructing QCNNs that can recognize SPT phases of a generalized cluster-Ising model in the presence of incoherent errors. Apart from recognizing SPT phases from topologically trivial phases as previously shown in Refs. [14; 37; 38; 39], we newly demonstrate that QCNNs constructed here can distinguish two SPT phases from one another. Using matrix product state (MPS) simulations, we show that the QCNN output is robust against symmetry-breaking errors below a threshold error probability. This enables new quantum phase recognition capabilities for QCNNs in scenarios where SOPs and previous QCNN designs [14] are impractical. SOPs rapidly vanish with an increasing length for any probability of symmetry-breaking errors [40], whereas the QCNN proposed in Ref. [14] rapidly concentrates symmetry-breaking errors with increasing depth, leading to a vanishing output for any error probability. In addition to the tolerance to symmetry-breaking errors, the QCNNs constructed here tolerate all symmetry-preserving errors if the error channel is invertible. The error tolerance is limited close to phase boundaries due to diverging correlation lengths. Nonetheless, a sharp change in the QCNN output at the phase boundaries allows us to precisely determine critical values of Hamiltonian parameters.
To facilitate the implementation of QCNNs on near-term quantum computers, we show that the QCNN circuits constructed here can be shortened from logarithmic to constant depth in system size by efficiently performing a large part of the computation in classical post-processing. The output of the QCNNs corresponds to the expectation value of a multiscale SOP, which is a sum of products of individual SOPs. The multiscale SOP can, in principle, be determined using direct Pauli measurements on the input state without using any quantum circuit. However, the constant-depth QCNN circuits we derive here reduce the sample complexity of measuring the multiscale SOP exponentially with system size in comparison to direct Pauli measurements. The remainder of this manuscript is structured as follows. In Sec. II, we introduce the generalized cluster-Ising model we consider before describing the construction of the QCNNs to analyze it in Sec. III. We investigate the robustness of the QCNN output against incoherent symmetry-preserving errors in Sec. IV and show how to design QCNNs that tolerate symmetry-breaking errors in Sec. V. We investigate the phase transition between two SPT phases in Sec. VI and study the tolerance to incoherent errors close to phase boundaries in Sec. VII. In Sec. VIII, we compare the sample complexity of QCNNs to the direct Pauli measurement of the input state before presenting concluding remarks and possible applications of error-tolerant QCNNs in Sec. IX. ## II Generalized cluster-Ising model We consider a one-dimensional chain of \(N\) qubits with open boundary conditions described by the generalized cluster-Ising Hamiltonian \[H= -J_{1}\sum_{j=2}^{N-1}\,C_{j}-J_{2}\sum_{j=3}^{N-2}D_{j}\] \[-h_{1}\sum_{j=1}^{N}\,X_{j}-h_{2}\sum_{j=1}^{N-1}\,X_{j}X_{j+1}, \tag{1}\] where \(C_{j}=Z_{j-1}X_{j}Z_{j+1}\), \(D_{j}=Z_{j-2}X_{j-1}X_{j}X_{j+1}Z_{j+2}\), and \(X_{j}\) as well as \(Z_{j}\) are Pauli operators on qubit \(j\). The Hamiltonian exhibits a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry generated by \(P_{e/o}=\prod_{j=1}^{N/2}X_{2j/2j-1}\). The ground states of the Hamiltonian belong to one of four phases: a paramagnetic phase, an antiferromagnetic phase, a '\(ZXZ\)' SPT phase and a '\(ZXXXZ\)' SPT phase [41]. The '\(ZXZ\)' ('\(ZXXXZ\)') SPT phase contains the '\(ZXZ\)' ('\(ZXXXZ\)') cluster state, which is a stabilizer state with stabilizer elements \(C_{j}\) (\(D_{j}\)) and thus the ground state for \(J_{2}=h_{1}=h_{2}=0\) (\(J_{1}=h_{1}=h_{2}=0\)). SPT phases are characterized by SOPs [33; 34]. In particular, the SOPs \[S_{jk} =Z_{j}\left(\prod_{i=1}^{(k-j)/2}X_{j+2i-1}\right)Z_{k}, \tag{2}\] \[T_{jk} =Z_{j}X_{j+1}Y_{j+2}\left(\prod_{i=2}^{(k-j)/2-2}X_{j+2i}\right)Y_{k-2}X_{k-1}Z_{k}, \tag{3}\] attain non-vanishing values in the '\(ZXZ\)' SPT phase and the '\(ZXXXZ\)' SPT phase, respectively. ## III Quantum convolutional neural networks Our goal is to design QCNNs that detect the SPT phases of the generalized cluster-Ising model via quantum phase recognition, a process that identifies whether the ground states \(|\psi\rangle\) of the Hamiltonian (1) belong to a given quantum phase. To perform quantum phase recognition, we process the ground states \(|\psi\rangle\) with the QCNN depicted in Fig. 1a consisting of \(d\) convolutional layers, \(d\) pooling layers and a final fully connected layer. In each convolutional layer \(f=1,2,...,d\), a translationally invariant unitary \(V_{f}\) is applied.
In a pooling layer, the system size is reduced by measuring a fraction of qubits and applying feed-forward gates \(W_{f}\) conditioned on the measurement outcomes on the remaining qubits. In this work, we consider the reduction of system size by a factor of three in each pooling layer. As a result, the maximal depth \(d=\lfloor\log_{3}N\rfloor\) of the QCNN is logarithmic in system size \(N\). In the fully connected layer, a general unitary \(V_{\text{FC}}\) is performed on all remaining qubits and the qubits are read out, labeling whether the ground state \(|\psi\rangle\) belongs to a given SPT phase or not. For each SPT phase, we construct the QCNN depicted in Fig. 1b by generalizing the procedure proposed in Ref. [14]. First, we identify a characteristic state belonging to each SPT phase. For the '\(ZXZ\)' ('\(ZXXXZ\)') SPT phase this is the '\(ZXZ\)' ('\(ZXXXZ\)') cluster state, which can be mapped onto a product state by a disentangling unitary \(U^{\dagger}\) consisting of two (four) layers of two-qubit gates between neighboring qubits, see Appendix B for details. The convolutional layers of the QCNN consist of the disentangling unitary \(U^{\dagger}_{N/3^{f-1}}\) mapping the corresponding cluster state on \(N/3^{f-1}\) qubits onto a product state and the entangling unitary \(U_{N/3^{f}}\) mapping the product state on a sublattice with \(N/3^{f}\) qubits onto the cluster state. As a result, we obtain the cluster state for a reduced system size after the measurement of the remaining qubits in each pooling layer. By construction, the cluster state is a fixed point of the QCNN circuit. Next, we make all states belonging to the '\(ZXZ\)' ('\(ZXXXZ\)') SPT phase flow towards the '\(ZXZ\)' ('\(ZXXXZ\)') cluster state with the increasing depth of the QCNN. To this end, we implement in pooling layers a procedure that is analogous to quantum error correction (QEC), identifying perturbations away from the cluster state as errors. These errors are detected by measurements in the pooling layers and corrected by feed-forward gates \(W_{f}\) on the remaining qubits which are conditioned on the measurement outcomes. A measurement and a feed-forward gate can be replaced by an entangling gate and tracing out of the "measured" qubits. Using this equivalence, we represent the QEC procedure in each pooling layer \(f\) as a unitary \(\text{QEC}_{f}\) as depicted in Fig. 1b. It has been shown in Ref. [14] that by correcting \(X_{j}\) and \(X_{j}X_{j+1}\) errors one can make all pure ground states of the cluster-Ising Hamiltonian belonging to the '\(ZXZ\)' SPT phase (for \(J_{2}=0\)) flow towards the '\(ZXZ\)' cluster state. In this way, the QCNN mimics a renormalization-group flow [20]. In the fully connected layer, we measure stabilizer elements, i.e. either \(C_{j}\) or \(D_{j}\) for the '\(ZXZ\)' phase or the '\(ZXXXZ\)' SPT phase, respectively. This measurement is performed by applying the disentangling unitary \(U^{\dagger}_{N/3^{d}}\) and reading out all remaining qubits in the \(X\) basis. For system size \(N\) and depth \(d\), we have \(m=\lfloor N/3^{d}\rfloor\) output qubits. The QCNN output \[y=\frac{1}{m}\sum_{j=-(m-1)/2}^{(m-1)/2}\langle X_{\frac{N+1}{2}+j\cdot 3^{d}}\rangle \tag{4}\] is thus the expectation value of \(X\) averaged over the \(m\) output qubits. Before discussing the performance of the constructed QCNNs in the presence of noise due to decoherence and gate infidelities on NISQ devices, we make a crucial observation allowing for a substantial shortening of the QCNN circuits.

Figure 1: Quantum convolutional neural network (QCNN).
A large part of the QCNN circuits depicted in Fig. 1b can be efficiently implemented in classical post-processing if the QEC procedures \(\widetilde{\text{QEC}}_{f}=U_{N/3^{f}}^{\dagger}\text{QEC}_{f}U_{N/3^{f}}\) transformed by the entangling unitaries \(U_{N/3^{f}}\) map \(X\)-basis eigenstates \(\ket{x}\) onto other \(X\)-basis eigenstates \(\ket{x^{\prime}}\), \[\widetilde{\text{QEC}}_{f}\ket{x}\propto\ket{x^{\prime}}. \tag{5}\] In this case, the QCNNs are equivalent to a constant-depth quantum circuit consisting of the disentangling unitary \(U_{N}^{\dagger}\), the measurement of all qubits in the \(X\) basis and classical post-processing as depicted in Fig. 1c. See Appendix B for the derivation of these equivalent QCNN circuits. In these equivalent QCNN circuits, only the first convolutional layer is implemented on a quantum computer. The remaining convolutional layers, all pooling layers and the fully connected layer are implemented after the measurement of all qubits in classical post-processing as a bit-string-valued Boolean function \(G(x)=x^{\prime}\) of the measured bit strings \(x=x_{1}x_{2}\ldots x_{N}\), where \(x_{j}=0,1\) corresponds to measuring \(X_{j}=+1,-1\). Errors perturbing the cluster states lead to flipped measurement outcomes after the disentangling unitary \(U_{N}^{\dagger}\). These error syndromes are then corrected in classical post-processing. Note that the QCNN proposed in Ref. [14] satisfies the condition (5) and its equivalent QCNN circuit consisting of a constant-depth quantum circuit, measurement and classical post-processing has been developed and experimentally realized in Ref. [39].

In this work, we consider the equivalent QCNN circuits depicted in Fig. 1c. First, we numerically obtain the ground states of the Hamiltonian (1) in the thermodynamic limit using the infinite density matrix renormalization group (iDMRG) algorithm [42], see Appendix A for details. Next, we perform the constant-depth quantum circuit on the infinite matrix product states (MPSs) by sequentially applying two-qubit gates between neighboring qubits. Then, we sample \(M_{S}\) outcomes of the measurement of \(N\) qubits from the infinite MPSs. Finally, we determine the QCNN output \(y\) from the measured bit strings \(x\) using the Boolean function \(G(x)\) as \[y=\frac{1}{m}\frac{1}{M_{S}}\sum_{j=-(m-1)/2}^{(m-1)/2}\sum_{x}\left[1-2\,G(x)_{\frac{N+1}{2}+j\cdot 3^{d}}\right]. \tag{6}\]
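In code, this Monte Carlo estimate takes only a few lines; the sketch below is our own illustration (0-based indexing; the function \(G\) stands for the logic circuits of Figs. 2 and 4, is not defined here, and is assumed to return a length-\(N\) bit array):

```python
import numpy as np

def qcnn_output(samples, G, d):
    """Monte Carlo estimate of the QCNN output y, Eq. (6).

    samples : (M_S, N) array of 0/1 outcomes of the X-basis measurement,
              with x_j = 0, 1 encoding X_j = +1, -1
    G       : Boolean post-processing function mapping a length-N bit
              string to a length-N bit string (placeholder for the logic
              circuits of Figs. 2 and 4)
    d       : depth of the QCNN
    """
    M_S, N = samples.shape
    m = N // 3**d                              # number of output qubits
    center = (N + 1) // 2 - 1                  # qubit (N+1)/2 in 0-based labels
    sites = [center + j * 3**d for j in range(-(m - 1) // 2, (m - 1) // 2 + 1)]
    bits = np.array([np.asarray(G(x))[sites] for x in samples])
    return np.mean(1 - 2 * bits)               # average over sites and samples
```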
## IV Tolerance to symmetry-preserving errors

NISQ computers operate in the presence of noise due to decoherence and gate infidelities. To enable the exploitation of QCNNs as a characterization tool for NISQ computers, it is thus crucial to investigate the effects of noise on the performance of QCNNs and to construct QCNNs whose output is robust against noise. We expect that the preparation of typical many-body ground states \(\ket{\psi}\) will require substantially deeper quantum circuits than the QCNNs considered in this work, which can be implemented in very short constant depth as discussed above. We thus focus on the robustness of QCNNs against errors that occur during the preparation of many-body ground states \(\ket{\psi}\) on NISQ devices and neglect errors occurring during the QCNN circuits. To simulate the preparation errors, we consider an error channel \[\rho=\mathcal{E}(\ket{\psi}\bra{\psi})=\sum_{l}K_{l}\ket{\psi}\bra{\psi}K_{l}^{\dagger}, \tag{7}\] where \(K_{l}\in\{\sqrt{p_{\mathbb{1}}}\,\mathbb{1},\sqrt{p_{X}}X,\sqrt{p_{Y}}Y,\sqrt{p_{Z}}Z\}^{\otimes N}\) are Kraus operators, \(p_{E}\) are probabilities of Pauli errors \(E=X,Y,Z\) and \(p_{\mathbb{1}}+p_{X}+p_{Y}+p_{Z}=1\). For \(p_{X}=p_{Y}=p_{Z}\), this error channel describes single-qubit depolarizing noise. We formulate quantum phase recognition on NISQ devices as a task to identify whether the exact ground state \(\ket{\psi}\) belongs to a given quantum phase provided access only to the noisy state \(\rho\), which approximates \(\ket{\psi}\).

We now discuss how to design QCNNs tolerating different types of errors and their ability to recognize an SPT phase for the example of the '\(ZXZ\)' phase. We start by investigating the performance of the QCNN proposed in Ref. [14] in the presence of incoherent \(X\) errors described by the error channel (7) with \(p_{Y}=p_{Z}=0\). We use the compact implementation as a quantum circuit consisting of the disentangling unitary \(U_{N}^{\dagger}\), the measurement of all qubits in the \(X\) basis and classical post-processing. We show the QCNN circuit in Fig. 2. The disentangling unitary \(U_{N}^{\dagger}\) consists of controlled \(Z\) gates between neighboring qubits. The outcomes \(x\) of the measurement in the \(X\) basis are processed by the Boolean function \(G(x)\), which is expressed as a logic circuit in terms of AND and XOR gates, see Fig. 2.

Figure 2: QCNN with \(X\)-error correcting layers detecting the '\(ZXZ\)' phase. The QCNN circuit consists of a constant-depth quantum circuit, the measurement of all qubits in the \(X\) basis and classical post-processing. The quantum circuit performs the disentangling unitary \(U_{N}^{\dagger}\) consisting of controlled \(Z\) gates between neighboring qubits. The outcomes \(x\) of the measurement in the \(X\) basis are processed by the Boolean function \(G(x)\), expressed as a logic circuit in terms of AND and XOR gates. The logic circuit is composed of \(d\) layers correcting the syndromes of \(X\) errors. Red and purple lines show the propagation of \(X\) errors and \(Z\) errors, respectively, through the QCNN circuit. The \(X\)-error syndrome is corrected by the XOR gate marked in red.
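The damping factors used in the following follow from the single-qubit action of the adjoint channel \(\mathcal{E}^{\dagger}\); a quick numerical check (our own sketch, not from the original numerics):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def adjoint_channel(P, pX, pY, pZ):
    """Single-qubit adjoint channel E†(P) = Σ_l K_l† P K_l for the Kraus
    operators of Eq. (7)."""
    ps = [1 - pX - pY - pZ, pX, pY, pZ]
    return sum(p * K @ P @ K for p, K in zip(ps, [I2, X, Y, Z]))

pX, pZ = 0.1, 0.1
print(np.allclose(adjoint_channel(Z, pX, 0, 0), (1 - 2 * pX) * Z))  # True
print(np.allclose(adjoint_channel(X, 0, 0, pZ), (1 - 2 * pZ) * X))  # True
# Each Z (X) factor of a Pauli string is damped by (1 - 2 p_X) ((1 - 2 p_Z)),
# which yields Eqs. (8) and (9) below for the string order parameters S_jk.
```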
The key feature of the QCNN is that it identifies and corrects perturbations away from the '\(ZXZ\)' cluster state. In particular, it corrects coherent \(X_{j}\) and \(X_{j}X_{j+1}\) errors which drive perturbations away from the cluster state to other ground states of the Hamiltonian (1) [14]. The logic circuit is composed of \(d\) layers \(f=1,2,...,d\), which correspond to the \(X\)-error correcting \(\widetilde{\text{QEC}}_{f}\) procedures transformed by the disentangling unitary \(U_{N}^{\dagger}\). We plot in Fig. 3 the QCNN output across a cut through the phase diagram as a function of \(h_{2}/J_{1}\) for fixed \(h_{1}/J_{1}=0.5\) and \(J_{2}=0\) for different depths \(d\) of the QCNN. We can see that the QCNN output converges to unity with the increasing depth of the QCNN in the '\(ZXZ\)' SPT phase but vanishes with the increasing depth outside of the SPT phase.

Figure 3: QCNN consisting of \(X\)-error correcting layers for the ground states of the cluster-Ising Hamiltonian (1) perturbed by incoherent \(X\) errors. The QCNN output as a function of \(h_{2}/J_{1}\) and different depths \(d\) of the QCNN. The orange region denotes the '\(ZXZ\)' phase. (Parameters: \(h_{1}/J_{1}=0.5\), \(J_{2}=0\), \(N=1215\), \(p_{X}=0.1\), \(p_{Y}=p_{Z}=0\), \(M_{S}=10^{4}\))

This demonstrates our first observation that QCNNs can tolerate incoherent \(X\) errors since their QEC procedures can correct not only coherent perturbations, which transform the cluster state to another ground state in the SPT phase, but also incoherent errors. The QCNN output converges to ideal noise-free values with increasing depth \(d\) for any probability \(p_{X}\neq 0.5\) of incoherent \(X\) errors, since the error channel is invertible for these cases. For \(p_{X}=0.5\) the situation is qualitatively different as the error channel (7) is not invertible. Invertible symmetry-preserving error channels for \(p_{X}\neq 0.5\) preserve SPT order [40]. In contrast, the non-invertible error channel for \(p_{X}=0.5\) completely washes out SPT order as \[\mathcal{E}^{\dagger}(S_{jk})=(1-2p_{X})^{2}S_{jk}=0, \tag{8}\] for all \(j\) and \(k\), where \(\mathcal{E}^{\dagger}(\cdot)=\sum_{l}K_{l}^{\dagger}\cdot K_{l}\) is the adjoint channel to Eq. (7). As a result, also the QCNN output vanishes for any input ground state and any depth \(d\). We conclude that QCNNs recognizing the '\(ZXZ\)' SPT phase can tolerate symmetry-preserving \(X\) errors, provided that the error channel is invertible.

## V Tolerance to symmetry-breaking errors

Since noise in NISQ devices typically does not preserve the symmetries of problem Hamiltonians, it is important to investigate the robustness of QCNNs against symmetry-breaking errors. While coherent and incoherent \(X\) errors are tolerated by the QCNN designed in Ref. [14], the situation is fundamentally different for incoherent \(Z\) errors as they break the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry of the Hamiltonian (1). \(Z\) errors described by the error channel (7) with \(p_{X}=p_{Y}=0\) lead to a decrease of the SOPs, \[\mathcal{E}^{\dagger}(S_{jk})=(1-2p_{Z})^{\frac{L-1}{2}}S_{jk}, \tag{9}\] which scales exponentially with their length \(L=k-j+1\). As a result, the SOPs rapidly vanish with the increasing length \(L\) for any finite (non-unity) \(Z\)-error probability \(p_{Z}\neq 0,1\). Similarly to SOPs, the original design [14] of the QCNN depicted in Fig. 2 is substantially affected by \(Z\) errors. The syndrome of a \(Z_{j}\) error, i.e., the flipped outcome \(x_{j}\) of the measurement in the \(X\) basis, is denoted in Fig. 2 by a purple line. In contrast to \(X\)-error syndromes that are corrected (see red lines in Fig. 2), \(Z\)-error syndromes (purple line) propagate through the QCNN circuit, see Appendix C for more details. As the system size is reduced by a factor of three in each layer, the density of \(Z\)-error syndromes increases with the increasing depth of the QCNN.
As a result, the output of the QCNN rapidly decreases with the depth \(d\) both in the '\(ZXZ\)' SPT phase and outside of the phase for any finite probability \(p_{Z}\neq 0,1\). To perform quantum phase recognition on NISQ devices, QCNNs thus need to be robust against symmetry-breaking errors. To this end, we construct a new QCNN depicted in Fig. 4 by alternating the original \(X\)-error correcting layers with new \(Z\)-error correcting layers. The \(Z\)-error correcting layer \(f\) consists of a new QEC procedure that can be efficiently implemented in classical post-processing as the majority function \[M(x_{j-7\cdot 3^{f-1}},x_{j},x_{j+7\cdot 3^{f-1}})=\left[(x_{j-7\cdot 3^{f-1}}\oplus x_{j})\wedge(x_{j}\oplus x_{j+7\cdot 3^{f-1}})\right]\oplus x_{j}, \tag{10}\] where \(\wedge\) is the AND gate and \(\oplus\) is the XOR gate, see Fig. 4. The majority function \(M(x_{j-7\cdot 3^{f-1}},x_{j},x_{j+7\cdot 3^{f-1}})\) returns the value of the majority of the three bits \(x_{j-7\cdot 3^{f-1}}\), \(x_{j}\), and \(x_{j+7\cdot 3^{f-1}}\). It thus removes isolated error syndromes, see purple lines in Fig. 4 and Appendix C for more details. The corresponding QEC unitary is described in Appendix B.

Figure 4: QCNN with alternating layers correcting \(X\) errors and \(Z\) errors for detecting the '\(ZXZ\)' phase. The QCNN circuit consists of a constant-depth quantum circuit, the measurement of all qubits in the \(X\) basis and classical post-processing. The quantum circuit performs the disentangling unitary \(U_{N}^{\dagger}\) consisting of controlled \(Z\) gates between neighboring qubits. The outcomes \(x\) of the measurement in the \(X\) basis are processed by the Boolean function \(G(x)\) expressed as a logic circuit in terms of AND and XOR gates as well as of the majority function \(M\). The decomposition of the majority function \(M\) into AND and XOR gates is shown on the right. Red and purple lines show the propagation of \(X\) errors and \(Z\) errors, respectively, through the QCNN circuit. The logic circuit consists of \(d\) layers \(f=1,2,...,d\). Odd layers \(f\) correct the syndromes of \(X\) errors (see the XOR gate marked in red) and even layers \(f\) correct the syndromes of \(Z\) errors (see the majority function \(M\) marked in purple).
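As a sanity check, Eq. (10) can be verified exhaustively; the sketch below (our own illustration) confirms that the AND/XOR decomposition computes the majority of its three inputs:

```python
def majority(a, b, c):
    """Majority function M of Eq. (10): [(a XOR b) AND (b XOR c)] XOR b."""
    return ((a ^ b) & (b ^ c)) ^ b

# M returns the value held by at least two of the three syndrome bits,
# so an isolated flipped bit b is outvoted by its two neighbors a and c.
assert all(majority(a, b, c) == int(a + b + c >= 2)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```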
We start by investigating the QCNN with alternating \(X\)- and \(Z\)-error correcting layers for the '\(ZXZ\)' cluster state perturbed by incoherent \(Z\) errors as the input state. We plot in Fig. 5 the QCNN output as a function of the \(Z\)-error probability \(p_{Z}\). We can see an alternating QCNN output after odd and even layers. \(Z\) errors propagate through odd, \(X\)-error correcting layers and, as the system size is reduced by a factor of three, the density of \(Z\) errors increases. This error concentration leads to the decrease of the QCNN output after odd layers, compare blue and red lines in Fig. 5. In contrast, even layers correct \(Z\) errors leading to the decrease of their density and the increase in the QCNN output. We find that the QCNN can tolerate \(Z\) errors for error probabilities \(p_{Z}\) below a threshold \(p_{\rm th}=0.054\) as the error correction in even layers dominates over the error concentration in odd layers leading to a net increase of the QCNN output after every two layers, see blue lines in Fig. 5. On the other hand, \(Z\) errors cannot be tolerated above the threshold since the error concentration dominates over the error correction leading to a net decrease of the QCNN output. See Appendix C for the derivation of the threshold error probability \(p_{\rm th}\). This shows that implementing a \(Z\)-error correcting layer after each \(X\)-error correcting layer prevents the concentration of symmetry-breaking \(Z\) errors below the threshold error probability.

Figure 5: QCNN consisting of alternating layers correcting \(X\) errors and \(Z\) errors for the '\(ZXZ\)' cluster state perturbed by symmetry-breaking \(Z\) errors. The QCNN output as a function of the error probability \(p_{Z}\) of \(Z\) errors for different depths \(d\). The black dashed line shows the threshold error probability \(p_{\rm th}=0.054\). (Parameters: \(N=1215\), \(M_{S}=10^{4}\), \(p_{X}=p_{Y}=0\))

We now study the error tolerance of QCNNs with alternating layers for different ground states of the cluster-Ising Hamiltonian. We consider a depolarizing channel with \(p_{X}=p_{Y}=p_{Z}\) describing the presence of \(X\) errors, \(Z\) errors and their simultaneous appearance \(Y=iXZ\), representing a typical situation for NISQ devices. We plot in Fig. 6a the QCNN output for different ground states as a function of \(h_{2}/J_{1}\) and different depths \(d\) of the QCNN. We can see that the QCNN tolerates the incoherent errors as the QCNN output converges to unity with the increasing depth \(d\) in the SPT phase and it vanishes outside of the SPT phase. The two types of layers in the QCNN play complementary roles. The \(X\)-error correcting layers implement renormalization-group flow with states belonging to the '\(ZXZ\)' phase flowing towards the '\(ZXZ\)' cluster state and states outside of the '\(ZXZ\)' phase diverging from it. The \(X\)-error correcting layers are thus crucial for recognizing the '\(ZXZ\)' phase. In contrast, the \(Z\)-error correcting layers reduce the density of error syndromes by removing syndromes due to symmetry-breaking errors. As a result, the \(Z\)-error correcting layers equip the QCNNs with the tolerance to symmetry-breaking errors, see Fig. 6a.

We now show that the QCNNs with alternating \(X\)- and \(Z\)-error correcting layers can perform phase recognition provided that the probability of errors in the prepared states is below the threshold \(p_{\rm th}\). To this end we plot the QCNN output for the depth \(d=4\) as a function of \(h_{1}/J_{1}\) and \(h_{2}/J_{1}\) in Fig. 6b in the presence of depolarizing noise. We can see that the QCNN output attains near-unity values in the '\(ZXZ\)' phase and vanishing values outside of the '\(ZXZ\)' phase. The abrupt change of the QCNN output from near-unity values to vanishing values coincides with the phase boundary (black crosses) determined by iDMRG simulations, see Appendix A for more details. In contrast to the QCNN, SOPs \(S_{jk}\) are significantly suppressed in the presence of incoherent errors. We plot in Fig. 6c SOPs \(S_{jk}\) as a function of \(h_{2}/J_{1}\) for different lengths \(L\) in the presence of depolarizing noise. We can see that the SOPs rapidly vanish both in the '\(ZXZ\)' SPT phase and outside of the phase with the increasing length \(L\).
In conclusion, the QCNN constructed here recognizes the '\(ZXZ\)' SPT phase in the presence of symmetry-breaking errors below the threshold error probability \(p_{\rm th}\). In contrast, SOPs rapidly vanish with the increasing length for any finite probability of symmetry-breaking errors. The previously considered QCNN of Ref. [14] cannot tolerate symmetry-breaking errors either as its output decreases with the increasing depth for any error probability. As a result, it cannot recognize the SPT phase in the presence of symmetry-breaking errors.

## VI '\(ZXXXZ\)' symmetry-protected topological phase

We now discuss the extension of the phase recognition capabilities of the error-tolerant QCNNs we introduced to distinguish the '\(ZXXXZ\)' SPT phase from topologically trivial phases as well as the '\(ZXZ\)' and '\(ZXXXZ\)' SPT phases from one another. Similarly to the '\(ZXZ\)' SPT phase, we construct a QCNN which detects the '\(ZXXXZ\)' phase from the topologically trivial paramagnetic and antiferromagnetic phases. Now the convolutional layer consists of a disentangling unitary \(\tilde{U}_{N}^{\dagger}\) mapping the '\(ZXXXZ\)' cluster state onto a product state. The QEC procedures are amended to correct \(X_{j}\) and \(X_{j}X_{j+1}\) errors perturbing the '\(ZXXXZ\)' cluster state, see Appendix D for more details about the QCNN for the '\(ZXXXZ\)' phase. To equip the QCNN with the tolerance to state preparation errors, we again employ the procedure correcting \(Z_{j}\) errors based on the majority function of Eq. (10). In contrast to the disentangling circuit \(U_{N}^{\dagger}\) for the '\(ZXZ\)' cluster state, which commutes with \(Z_{j}\) errors, the disentangling unitary \(\tilde{U}_{N}^{\dagger}\) for the '\(ZXXXZ\)' cluster state maps \(Z_{j}\) errors onto the errors \(\tilde{U}_{N}^{\dagger}Z_{j}\tilde{U}_{N}=Y_{j-1}Z_{j}Y_{j+1}\), which flip the measurement outcomes \(x_{j-1}\), \(x_{j}\) and \(x_{j+1}\) on the three qubits \(j-1\), \(j\) and \(j+1\). Due to this multiplication of symmetry-breaking errors, the threshold probability \(\tilde{p}_{\rm th}=0.018\) is reduced compared to \(p_{\rm th}=0.054\) for the '\(ZXZ\)' SPT phase.

We now investigate QCNNs capable of distinguishing the '\(ZXZ\)' phase and the '\(ZXXXZ\)' phase from one another. We consider ground states of the cluster-Ising Hamiltonian (1) for non-vanishing \(J_{1}\) and \(J_{2}\). We choose \(h_{1}=0\) and \(h_{2}/J_{2}=0.1\), which represent generic values of the Hamiltonian parameters for which the model cannot be mapped onto non-interacting fermions [41]. We determine the phase boundary between the '\(ZXZ\)' phase and the '\(ZXXXZ\)' phase to be located at \(J_{1}/J_{2}=0.95\) via iDMRG simulations. We start with a QCNN recognizing the '\(ZXZ\)' phase from the '\(ZXXXZ\)' phase. Before showing the results, we explain the construction of this QCNN, which requires identifying the perturbations driving the ground states for non-vanishing \(h_{2}\) and \(J_{2}\) away from the characteristic '\(ZXZ\)' cluster state.

Figure 6: Recognition of the '\(ZXZ\)' SPT phase by the QCNN consisting of alternating layers correcting \(X\) errors and \(Z\) errors for ground states of the Hamiltonian (1) perturbed by depolarizing noise. (a) The QCNN output \(y\) on the cut through the phase diagram as a function of \(h_{2}/J_{1}\) for different depths \(d\) of the QCNN. (b) The QCNN output \(y\) as a function of \(h_{1}/J_{1}\) and \(h_{2}/J_{1}\) for the depth \(d=4\). Black crosses show the phase boundary identified using iDMRG simulations. (c) String order parameters (SOPs) \(S_{jk}\) on the cut through the phase diagram as a function of \(h_{2}/J_{1}\) for different lengths \(L=k-j+1\). The orange regions denote the '\(ZXZ\)' phase. [Parameters: \(p_{X}=p_{Y}=p_{Z}=0.015\), \(M_{S}=10^{4}\), \(J_{2}=0\); (a) \(N=1215\), \(h_{1}/J_{1}=0.5\); (b) \(N=135\); (c) \(h_{1}/J_{1}=0.5\)]
These perturbations include the \(X_{j}X_{j+1}\) interactions and the stabilizer elements \(D_{j}\). The \(X_{j}X_{j+1}\) interactions are corrected by the original \(X\)-error correcting procedure. The \(D_{j}\) stabilizer elements are mapped by the disentangling unitary onto \(U_{N}^{\dagger}D_{j}U_{N}=-Y_{j-1}X_{j}Y_{j+1}\), which lead to the same syndromes after the measurement of all qubits in the \(X\) basis (flipped measurement outcomes at qubits \(j-1\) and \(j+1\)) as \(X_{j}\) perturbations, for which \(U_{N}^{\dagger}X_{j}U_{N}=Z_{j-1}X_{j}Z_{j+1}\). As a result, the QCNN depicted in Fig. 2, constructed in the previous sections for correcting \(X_{j}\) and \(X_{j}X_{j+1}\) perturbations, corrects \(D_{j}\) perturbations as well and can be readily used to recognize the '\(ZXZ\)' phase from the '\(ZXXXZ\)' phase. To achieve tolerance to state preparation errors on NISQ devices, we can thus alternate the \(X\)-error correcting layers with \(Z\)-error correcting layers in the same way as depicted in Fig. 4. We plot the QCNN output (blue lines) as a function of \(J_{1}/J_{2}\) in Fig. 7 for different depths of the QCNN in the presence of depolarizing noise. We can see that the QCNN detects the '\(ZXZ\)' phase as its output converges to unity in the phase (\(J_{1}/J_{2}>0.95\)) and vanishes in the '\(ZXXXZ\)' phase (\(J_{1}/J_{2}<0.95\)).

We now discuss the construction of a QCNN recognizing the '\(ZXXXZ\)' phase from the '\(ZXZ\)' phase. Here, the stabilizer elements \(C_{j}\) and \(X_{j}X_{j+1}\) interactions play the role of perturbations away from the '\(ZXXXZ\)' cluster state. The disentangling unitary \(\tilde{U}_{N}^{\dagger}\) for the '\(ZXXXZ\)' cluster state maps the \(C_{j}\) perturbations onto \(\tilde{U}_{N}^{\dagger}C_{j}\tilde{U}_{N}\propto Y_{j-1}X_{j}Y_{j+1}\). The \(C_{j}\) perturbations have different syndromes after the measurement of all qubits in the \(X\) basis than \(X_{j}\) and \(X_{j}X_{j+1}\) perturbations, for which \(\tilde{U}_{N}^{\dagger}X_{j}\tilde{U}_{N}=Y_{j-2}X_{j-1}X_{j}X_{j+1}Y_{j+2}\) and \(\tilde{U}_{N}^{\dagger}X_{j}X_{j+1}\tilde{U}_{N}=Y_{j-2}Z_{j-1}Z_{j+2}Y_{j+3}\). As a result, we need to amend the QEC procedures to correct the \(C_{j}\) perturbations, see Appendix B for more details about this procedure. The \(C\)-error correcting procedure also corrects \(X_{j}X_{j+1}\) perturbations and \(X_{j}\) perturbations, where the latter now come about only due to noise on NISQ devices, see Appendix D for more details. To achieve tolerance to incoherent \(Z_{j}\) errors, we alternate the \(C\)-error correcting layers with \(Z\)-error correcting layers. We plot the resulting QCNN output (brown lines) as a function of \(J_{1}/J_{2}\) in Fig. 7 for different depths of the QCNN in the presence of depolarizing noise. We can see that the QCNN detects the '\(ZXXXZ\)' phase as its output converges to unity in the phase \(J_{1}/J_{2}<0.95\) and vanishes in the '\(ZXZ\)' phase \(J_{1}/J_{2}>0.95\).
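The conjugation relations underlying these syndromes can be checked directly for small systems. The following sketch (our own, with 0-based sites) builds the CZ-based entangler \(U_{N}\) of the '\(ZXZ\)' phase and verifies \(U_{N}^{\dagger}X_{j}U_{N}=Z_{j-1}X_{j}Z_{j+1}\) and \(U_{N}^{\dagger}D_{j}U_{N}=-Y_{j-1}X_{j}Y_{j+1}\):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
P0, P1 = np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)

def op(ops, N):
    """Tensor product acting with ops[site] on the given sites, identity elsewhere."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(N)])

def cz(j, N):
    """Controlled-Z gate between neighboring qubits j and j+1."""
    return op({j: P0}, N) + op({j: P1, j + 1: Z}, N)

N, j = 5, 2
U = reduce(np.matmul, [cz(q, N) for q in range(N - 1)])   # entangler U_N
# An X_j perturbation picks up Z's on its neighbors and flips x_{j±1}:
print(np.allclose(U.conj().T @ op({j: X}, N) @ U,
                  op({j - 1: Z, j: X, j + 1: Z}, N)))      # True
# A D_j perturbation maps to -Y_{j-1} X_j Y_{j+1}, sharing the X_j syndrome:
D = op({j - 2: Z, j - 1: X, j: X, j + 1: X, j + 2: Z}, N)
print(np.allclose(U.conj().T @ D @ U,
                  -op({j - 1: Y, j: X, j + 1: Y}, N)))     # True
```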
We have thus demonstrated that the error-tolerant QCNNs we introduced can recognize not only topological phases from topologically trivial phases but also two topological phases from one another. To this end, the QCNN for the '\(ZXXXZ\)' phase needed to be amended to correct \(C_{j}\) perturbations whereas the original QCNN for the '\(ZXZ\)' phase was already capable of correcting \(D_{j}\) perturbations.

Figure 7: Distinguishing the '\(ZXZ\)' SPT phase and the '\(ZXXXZ\)' SPT phase. The QCNN output as a function of \(J_{1}/J_{2}\) for different depths \(d\) of the QCNN for ground states perturbed by depolarizing noise. The QCNN with alternating \(X\)-error correcting layers and \(Z\)-error correcting layers (blue lines) as well as the QCNN with alternating \(C\)-error and \(Z\)-error correcting layers (brown lines). (Parameters: \(N=1215\), \(M_{S}=10^{3}\), \(h_{1}/J_{2}=0\), \(h_{2}/J_{2}=0.1\), \(p_{X}=p_{Y}=p_{Z}=0.01\))

## VII Phase boundary

So far, we have shown that the QCNNs we consider can recognize SPT phases in the presence of incoherent errors. We now investigate the tolerance to incoherent errors close to phase boundaries. Precisely detecting phase boundaries is one of the major challenges of many-body physics due to diverging correlation lengths and the rapid growth of entanglement in their vicinity [20; 43]. In Fig. 8a we plot the output of the QCNN for the '\(ZXZ\)' phase close to a phase boundary between the '\(ZXZ\)' phase and the paramagnetic phase in the presence of depolarizing noise. We can see that the QCNN tolerates incoherent errors well in the SPT phase as its output converges to unity with the increasing depth \(d\). On the other hand, close to the phase boundary the QCNN does not tolerate incoherent errors as its output decreases with the increasing depth \(d\). We thus observe that while symmetry-preserving \(X\) errors can be tolerated for any ground state, the tolerance to symmetry-breaking errors is limited close to phase boundaries.

To quantify the behavior close to the phase boundary further, we investigate the probability \(p_{Z}\) of symmetry-breaking \(Z\) errors which can be tolerated for each ground state belonging to the '\(ZXZ\)' phase as we approach the phase boundary with the paramagnetic phase. To do so, we determine the threshold error probability \(p_{\rm th}\) below which the QCNN output converges to unity and above which the output decreases. We plot in Fig. 9 the threshold error probability \(p_{\rm th}\) (red dots) as a function of the correlation length \(\xi\) of the ground states [44]. We can see that the threshold error probability decreases with the correlation length. We fit this decrease by the exponential function \(p_{\rm th}=p_{\rm th}^{0}\exp(-\xi/\bar{\xi})\) with the fitted parameters \(p_{\rm th}^{0}\) and \(\bar{\xi}\) in Tab. 1. We can see in Fig. 9 a similar exponential decrease of the threshold probability \(p_{\rm th}\) close to the phase boundaries between the '\(ZXXXZ\)' phase and the paramagnetic phase (gray diamonds) as well as between the '\(ZXXXZ\)' phase and the '\(ZXZ\)' phase (green triangles). The error tolerance is thus strongly suppressed close to all phase boundaries when the correlation length exceeds the characteristic value \(\bar{\xi}\approx 26\), c.f. Tab. 1.

To identify the phase boundary between the '\(ZXZ\)' phase and the paramagnetic phase in the presence of symmetry-breaking errors, we plot in Fig. 8b the slope \(\partial y/\partial h_{2}\) of the QCNN output \(y\) with respect to the Hamiltonian parameter \(h_{2}\) for different depths \(d\) of the QCNN.
We can see a sharp dip in the slope of the QCNN output precisely located at the phase boundary. The dip becomes more pronounced with the increasing depth \(d\) of the QCNN. This shows that while the QCNN output decreases with the increasing depth \(d\) close to the phase boundary, see Fig. 8a, we can still precisely identify the phase boundary as a sharp dip in the slope of the QCNN output. We can see in Fig. 6b that the abrupt change of the QCNN output coincides with the phase boundary (black crosses) determined using iDMRG in the entire phase diagram (\(J_{2}=0\)). We also observe that the slope of the QCNN output exhibits a sharp dip (or peak) at all other phase boundaries of the generalized cluster-Ising model also for \(J_{2}\neq 0\) (not shown here).

In stark contrast to these characteristics, individual SOPs and their slopes are largely suppressed for any finite probability of symmetry-breaking errors. We plot in Fig. 8c the slope \(\partial S_{jk}/\partial h_{2}\) of the SOPs with respect to the Hamiltonian parameter \(h_{2}\) for different lengths \(L=k-j+1\). We can see that the slope of the SOPs cannot be distinguished from sampling noise for the number of samples \(M_{S}=10^{4}\). Crucially, the slope of the SOPs does not become more pronounced with increasing length \(L\). As a result, one cannot use the SOPs to determine the phase boundary in the presence of symmetry-breaking noise.

Figure 8: Detecting the phase boundary between the '\(ZXZ\)' phase and the paramagnetic phase in the presence of depolarizing noise. (a) The output of the QCNN with alternating layers correcting \(X\) errors and \(Z\) errors close to the phase boundary as a function of \(h_{2}/J_{1}\) for different depths \(d\). (b) Slope of the QCNN output \(\partial y/\partial h_{2}\) with respect to the Hamiltonian parameter \(h_{2}\). (c) Slope of SOPs \(\partial S_{jk}/\partial h_{2}\) with respect to the Hamiltonian parameter \(h_{2}\) close to the phase boundary as a function of \(h_{2}/J_{1}\) for different lengths \(L=k-j+1\). The orange regions denote the '\(ZXZ\)' phase. [Parameters: \(M_{S}=10^{4}\), \(h_{1}/J_{1}=0.5\), \(J_{2}=0\), \(p_{X}=p_{Y}=p_{Z}=0.015\); (a) and (b) \(N=1215\)]

Figure 9: Tolerance to incoherent errors close to phase boundaries. The plot shows the threshold probability \(p_{\rm th}\) of symmetry-breaking errors as a function of the correlation length \(\xi\) in the vicinity of phase boundaries between the '\(ZXZ\)' phase and the paramagnetic phase (red dots), the '\(ZXXXZ\)' phase and the paramagnetic phase (gray diamonds), and the '\(ZXXXZ\)' phase and the '\(ZXZ\)' phase (green triangles). Solid lines show fitted exponential functions and dashed lines show threshold error probabilities of corresponding cluster states. [Parameters: \(N=1215\), \(M_{S}=10^{5}\), \(h_{1}/J_{1}=0.5\) and \(J_{2}=0\) (red dots), \(h_{1}/J_{2}=0.5\) and \(J_{1}=0\) (gray diamonds), \(h_{1}=0\) and \(h_{2}/J_{2}=0.1\) (green triangles)]

In conclusion, the tolerance of the considered QCNNs to symmetry-breaking errors is limited close to phase boundaries due to diverging correlation lengths. Nonetheless, we can precisely determine critical values of Hamiltonian parameters as a dip in the slope of the QCNN output.
This is in stark contrast to SOPs and their slopes, which rapidly vanish for any finite probability of symmetry-breaking errors and thus cannot be used to identify phase boundaries in the presence of symmetry-breaking errors.

## VIII Sample complexity

We now compare QCNNs to the direct measurement of the input state. We focus on sample complexity, which quantifies the number of projective measurements required to identify to which quantum phase the input state belongs. In the absence of noise, QCNNs substantially reduce sample complexity compared to the direct measurement of SOPs [14]. In the presence of symmetry-breaking noise, SOPs vanish and thus they cannot detect the SPT phase. Instead, one could sample the observable measured by a QCNN, which differs from any single SOP and is robust against symmetry-breaking noise, directly from the input state without using any quantum circuit. In this section, we discuss this observable for the QCNN detecting the '\(ZXZ\)' phase and the cost of directly sampling it from the input state in comparison to sampling the QCNN output.

To determine the observable measured by the QCNN with alternating \(X\)- and \(Z\)-error correcting layers, we represent the QCNN circuit as a unitary \(U_{\text{QCNN}}\), see Appendix E for details. The measurement of the Pauli \(X_{\frac{N+1}{2}}\) operator at the end of the QCNN circuit corresponds to the measurement of a multiscale SOP \[S_{\text{M}}=U_{\text{QCNN}}^{\dagger}X_{\frac{N+1}{2}}U_{\text{QCNN}}=\sum_{ij}\eta_{ij}^{(1)}S_{ij}+\sum_{ijkl}\eta_{ijkl}^{(2)}S_{ij}S_{kl}+..., \tag{11}\] on the input state. The multiscale SOP \(S_{\text{M}}\) is a sum of products of SOPs \(S_{ij}\) at different lengths \(L=j-i+1\). The length of the SOPs, \(L\sim 3^{d}\), increases exponentially with the depth \(d\) of the QCNN. Compared to the QCNN of Ref. [14], the change in the coefficients \(\eta^{(n)}\) due to our construction equips the QCNN with error tolerance. The multiscale SOP in Eq. (11) involves at least \(2^{3^{d-2}}\) products of SOPs.

As an alternative to executing the QCNN, we can determine the expectation value of the multiscale SOP using direct measurements of the input state without performing any quantum circuit. Assuming that only measurements in local Pauli bases can be directly performed, which is the case for most devices, we show in Appendix F that the multiscale SOP involves at least \(3^{3^{d-4}}\) products of SOPs which cannot be simultaneously measured via local Pauli measurements, as they require sampling in mutually incompatible Pauli bases. As a result, the sample complexity of the direct Pauli measurement scales double exponentially with the depth of the QCNN, corresponding to an exponential scaling with system size for the maximal depth \(d=\lfloor\log_{3}N\rfloor\). In contrast, the QCNN determines the expectation value of the multiscale SOP with a constant sample complexity in system size (and depth of the QCNN), which exponentially reduces the sample complexity compared to direct Pauli measurements. Importantly, the equivalent QCNN circuit depicted in Fig. 4, which is based on a constant-depth quantum circuit, measurement and classical post-processing, measures the multiscale SOP with the same sample complexity as the full quantum QCNN circuit. The constant-depth quantum circuit allows us to simultaneously measure all stabilizer elements \(C_{j}\).
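Concretely, since \(U_{N}^{\dagger}C_{j}U_{N}=X_{j}\) for bulk qubits, every bit of every \(X\)-basis sample taken after the disentangler contributes to one stabilizer estimate; a minimal sketch of this single-setting estimator (our own illustration):

```python
import numpy as np

def stabilizer_expectations(samples):
    """Estimate all <C_j> at once from X-basis samples taken after U_N†.

    Because U_N† C_j U_N = X_j for bulk qubits j, the j-th bit of every
    sample (x_j = 0, 1 encoding X_j = +1, -1) is a measurement of C_j, so
    a single measurement setting estimates every bulk stabilizer element.
    samples : (M_S, N) array of 0/1 outcomes
    """
    return 1 - 2 * samples.mean(axis=0)   # estimates of <C_j>, one per qubit
```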
From measured bit strings, we then determine the expectation value of the multiscale SOP in classical post-processing with the same sample complexity as for the full quantum QCNN circuit. QCNNs detecting the '\(ZXXXZ\)' phase also measure multiscale SOPs which are sums of double exponentially many products of SOPs \(T_{jk}\). Similarly to the QCNN for the '\(ZXZ\)' phase, these QCNNs also reduce sample complexity exponentially compared to direct local Pauli measurements.

## IX Conclusions

We constructed QCNNs that tolerate incoherent errors due to decoherence and gate infidelities during the preparation of their input states. These QCNNs tolerate symmetry-breaking errors below a threshold error probability in contrast to previous QCNN designs and SOPs, which are significantly suppressed for any non-vanishing error probability. Moreover, their output is robust against invertible symmetry-preserving error channels. The error tolerance is limited close to phase boundaries due to diverging correlation lengths. However, a steep gradient of the QCNN output at phase boundaries between SPT phases and topologically trivial phases as well as between two SPT phases allows us to precisely determine critical values of the Hamiltonian parameters.

The QCNN quantum circuits constructed here can be shortened from logarithmic depth in input size to short, constant depth by performing a large part of the computation in classical post-processing after the measurement of all qubits. This substantially improves the performance of QCNNs under NISQ conditions by reducing the number of finite-fidelity quantum gates. The classical post-processing part of QCNNs consists of logic circuits with at most logarithmic depth in input size. The QCNNs we constructed reduce sample complexity exponentially in input size in comparison to the direct sampling of the QCNN output using local Pauli measurements.

\begin{table}
\begin{tabular}{|l|c|c|}
\hline
Phase boundary & \(p_{\text{th}}^{0}\) & \(\bar{\xi}\) \\
\hline
'\(ZXZ\)' \(\rightarrow\) paramagnetic & 0.058 & 26.09 \\
'\(ZXXXZ\)' \(\rightarrow\) paramagnetic & 0.025 & 28.12 \\
'\(ZXXXZ\)' \(\rightarrow\) '\(ZXZ\)' & 0.055 & 24.62 \\
\hline
\end{tabular}
\end{table}
Table 1: Fitted parameters of the threshold probability \(p_{\text{th}}=p_{\text{th}}^{0}\exp(-\xi/\bar{\xi})\) of symmetry-breaking errors as a function of the correlation length \(\xi\) close to different phase boundaries.

Our work provides new insights into SPT order in open quantum systems, which are subject to decoherence and dissipation. Apart from NISQ computers, the error channel we consider, see Eq. (7), describes typical open quantum systems [40]. On the one hand, SOPs rapidly vanish with an increasing length for any symmetry-breaking error channel as shown in Ref. [40]. On the other hand, our results show that SPT order is not completely washed out for probabilities of symmetry-breaking errors below a finite threshold. This distinction emerges because the multiscale SOPs, which are efficiently measured by the QCNNs we introduce, exploit information about SPT order at different length scales to detect SPT phases in the presence of symmetry-breaking noise.

Due to the tolerance of errors and the short depth of their quantum circuits, the QCNNs constructed here can be readily realized on current NISQ computers to efficiently measure characteristic non-local observables of SPT phases. This will facilitate the investigation of topological quantum phases of matter on quantum computers.
Interesting future directions include QCNNs for two- and higher-dimensional systems detecting intrinsic topological order [45; 30], and less understood topological phases such as anyonic chains [46] and quantum spin liquids [47]. Another promising direction is the training of QCNNs based on parametrized quantum circuits to identify non-local observables characterizing topological phases from training data [36; 37; 48; 14]. The QCNNs we constructed open the way for efficiently characterizing noisy quantum data produced by near-term quantum hardware. In addition to the recognition of topological phases, reducing the sample complexity of non-local observables will substantially speed up other quantum algorithms. A prominent example is the variational quantum eigensolver for quantum chemistry problems, which involves many repetitions of demanding measurements of molecular Hamiltonians [49; 50].

## X Acknowledgements

We thank R. Mansuroglu for insightful discussions. This work was supported by the EU program H2020-FETOPEN project 828826 Quromorphic and is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. NAM is funded by the Alexander von Humboldt foundation.

## Appendix A Numerical simulations

Our main results are based on MPS simulations implemented using the Python library TeNPy [42]. Using iDMRG with the maximal bond dimension \(\chi=150\), we obtain numerically exact ground states \(\ket{\psi}\) of the Hamiltonian (1) in the thermodynamic limit to avoid finite-size effects. First, we identify phase boundaries as sharp peaks in the second derivative of the ground state energy with respect to \(h_{2}/J_{1}\) for constant \(h_{1}/J_{1}\) and \(J_{2}=0\) in Figs. 3, 6, and 8, with respect to \(J_{1}/J_{2}\) for constant \(h_{1}=0\) and \(h_{2}/J_{2}=0.1\) in Fig. 7, as well as with respect to \(h_{2}/J_{2}\) for constant \(h_{1}/J_{2}\) and \(J_{1}=0\) in Fig. 12.

We implement the QCNN circuits depicted in Fig. 1c consisting of a constant-depth quantum circuit, the measurement of all qubits in the \(X\) basis and classical post-processing. The constant-depth quantum circuit performs the disentangling unitary \(U_{N}^{\dagger}\) consisting of nearest-neighbor two-qubit gates which can be efficiently applied on the MPSs \(\ket{\psi}\) obtained using iDMRG. We simulate the measurement outcomes of \(N\) qubits by sampling spin configurations \(x\) in the \(X\) basis from their probability distribution \(P_{x}=\mathrm{Tr}[|x\rangle\langle x|U_{N}^{\dagger}|\psi\rangle\langle\psi|U_{N}]\) corresponding to the MPS \(U_{N}^{\dagger}|\psi\rangle\) after having performed the disentangling unitary \(U_{N}^{\dagger}\). QCNN outputs are determined from the sampled bit strings \(x\) as a Boolean function which is expressed as a logic circuit, see Figs. 2 and 4. To explore incoherent errors using the error channel (7), we implement the error channel \(\mathcal{E}\) by sampling error events \(E_{l}=K_{l}/\sqrt{p_{l}}\) from their probability distribution \(p_{l}=\mathrm{Tr}[K_{l}^{\dagger}K_{l}]/2^{N}\). The error events \(E_{l}\) are products of Pauli operators, which can be efficiently implemented on the MPSs \(\ket{\psi}\). We then sample bit strings \(x\) from the joint probability distribution \[P_{x}=\mathrm{Tr}\left[|x\rangle\langle x|U_{N}^{\dagger}\mathcal{E}(\ket{\psi}\bra{\psi})U_{N}\right]=\sum_{l}p_{l}\,\mathrm{Tr}\left[|x\rangle\langle x|U_{N}^{\dagger}E_{l}\ket{\psi}\bra{\psi}E_{l}^{\dagger}U_{N}\right], \tag{A1}\] which correspond to the measurement outcomes for the noisy state \(\mathcal{E}(\ket{\psi}\bra{\psi})\) after having performed the disentangling unitary \(U_{N}^{\dagger}\).
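For the product channel of Eq. (7), the probability \(p_{l}\) factorizes over qubits, so an error event can be drawn qubit by qubit; a minimal sketch of this sampling step (our own illustration; applying the resulting Pauli string to the MPS is library-specific and omitted):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_error_event(N, pX, pY, pZ):
    """Draw one Pauli error event E_l of the channel in Eq. (7).

    For the product channel, p_l = Tr[K_l† K_l]/2^N factorizes over
    qubits, so each qubit independently receives 1, X, Y or Z with
    probabilities (1 - pX - pY - pZ, pX, pY, pZ).
    """
    return rng.choice(np.array(['1', 'X', 'Y', 'Z']), size=N,
                      p=[1 - pX - pY - pZ, pX, pY, pZ])

# Averaging measurement statistics over many sampled events realizes E(|psi><psi|).
print(''.join(sample_error_event(12, 0.015, 0.015, 0.015)))
```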
Increasing the bond dimension to \(\chi=200\) does not lead to a visible change in our findings, showing that the MPSs accurately describe the ground states of the Hamiltonian (1) and their processing with the QCNNs.

## Appendix B QCNN circuits

In this appendix, we describe in detail the QCNN circuits used in this work. We first discuss the QCNN detecting the '\(ZXZ\)' phase. Then we describe the QCNN detecting the '\(ZXXXZ\)' phase. Finally, we discuss the QCNNs distinguishing the two SPT phases from one another.

All QCNNs considered in this work consist of \(d\) convolutional layers, \(d\) pooling layers and a fully connected layer as depicted in Fig. 1a. Each convolutional layer \(f=1,2,...,d\) consists of a disentangling unitary \(U_{N/3^{f-1}}^{\dagger}\) on \(N/3^{f-1}\) qubits followed by an entangling unitary \(U_{N/3^{f}}\) on a sublattice with \(N/3^{f}\) qubits as depicted in Fig. 1b. Each pooling layer involves a QEC procedure \(\mathrm{QEC}_{f}\). In the fully connected layer, the disentangling unitary \(U_{N/3^{d}}^{\dagger}\) is applied and all \(\lfloor N/3^{d}\rfloor\) remaining qubits are measured in the \(X\) basis. Note that each \(\mathrm{QEC}_{f}\) procedure is preceded by the entangling unitary \(U_{N/3^{f}}\), which is implemented in the preceding convolutional layer, and followed by the disentangling unitary \(U_{N/3^{f}}^{\dagger}\), which is implemented in the following convolutional layer for \(f<d\) and in the fully connected layer for \(f=d\), see Fig. 1b. The QCNN circuit is thus equivalent to a single convolutional layer followed by \(d\) pooling layers as depicted in Fig. 10a. The convolutional layer performs the disentangling unitary \(U_{N}^{\dagger}\). The pooling layer \(f\) involves the QEC unitary \(\widetilde{\mathrm{QEC}}_{f}=U_{N/3^{f}}^{\dagger}\mathrm{QEC}_{f}U_{N/3^{f}}\) transformed by the entangling unitary \(U_{N/3^{f}}\). Note that in this equivalent quantum circuit, convolutional layers for \(f>1\) are absorbed into the \(\widetilde{\mathrm{QEC}}_{f}\) unitaries in the pooling layers. The disentangling unitary \(U_{N/3^{d}}^{\dagger}\) from the fully connected layer is also absorbed into the \(\widetilde{\mathrm{QEC}}_{d}\) unitary. The fully connected layer in this equivalent quantum circuit thus consists only of the measurement of the remaining qubits in the \(X\) basis. For conciseness, we focus here on the equivalent quantum circuits depicted in Fig. 10a as the \(\widetilde{\mathrm{QEC}}_{f}\) procedures transformed by the entangling unitary consist of fewer gates than the bare \(\mathrm{QEC}_{f}\) procedures.

We first discuss the QCNN detecting the '\(ZXZ\)' phase. We start with the QCNN consisting of \(X\)-error correcting layers depicted in Fig. 10b. The disentangling unitary \(U_{N}^{\dagger}\) consists of controlled \(Z\) gates between neighboring qubits. The transformed QEC procedure \(\widetilde{\mathrm{QEC}}_{f}^{X}\) consists of controlled-controlled \(Z\) gates \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Z}\), controlled \(Z\) gates \(\mathrm{C}_{x}\mathrm{Z}\) and controlled-controlled NOT gates \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{NOT}\) with all controls in the \(X\) basis. The error-tolerant design of the QCNN depicted in Fig. 10c consists of alternating layers correcting \(X\) errors and \(Z\) errors. The new \(Z\)-error correcting procedure \(\widetilde{\mathrm{QEC}}_{f}^{Z}\) involves SWAP, \(\mathrm{C}_{x}\mathrm{Z}\) and \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Z}\) gates.

Figure 10: QCNN quantum circuits. (a) QCNN circuit equivalent to Fig. 1b consisting of a single convolutional layer, \(d\) pooling layers and a fully connected (FC) layer. The convolutional layer performs the disentangling unitary \(U_{N}^{\dagger}\). Each pooling layer \(f=1,2,...,d\) performs the \(\widetilde{\mathrm{QEC}}_{f}\) procedure transformed by the entangling unitary \(U_{N/3^{f}}\) on a sublattice with \(N/3^{f-1}\) qubits. The fully connected layer involves the measurement of \(\lfloor N/3^{d}\rfloor\) qubits in the \(X\) basis. (b) QCNN detecting the '\(ZXZ\)' phase with \(X\)-error correcting procedures \(\widetilde{\mathrm{QEC}}_{f}^{X}\). (c) QCNN detecting the '\(ZXZ\)' phase with alternating procedures \(\widetilde{\mathrm{QEC}}_{f}^{X}\) and \(\widetilde{\mathrm{QEC}}_{f+1}^{Z}\) correcting \(X\) errors and \(Z\) errors, respectively. The disentangling unitary \(U_{N}^{\dagger}\) involves controlled \(Z\) gates with controls in the computational basis. The \(\widetilde{\mathrm{QEC}}_{f}^{X}\) procedure consists of controlled-controlled \(Z\) gates \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Z}\), controlled \(Z\) gates \(\mathrm{C}_{x}\mathrm{Z}\) and controlled-controlled NOT gates \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{NOT}\) with all controls in the \(X\) basis. The \(\widetilde{\mathrm{QEC}}_{f+1}^{Z}\) procedure consists of SWAP, \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Z}\) and \(\mathrm{C}_{x}\mathrm{Z}\) gates.
Since all gates are controlled in the \(X\) basis and they implement either Pauli \(X\) or Pauli \(Z\) operations on the target qubit, the \(\widetilde{\mathrm{QEC}}_{f}^{X}\) and \(\widetilde{\mathrm{QEC}}_{f}^{Z}\) procedures map \(X\)-basis eigenstates \(\ket{x}\) onto other \(X\)-basis eigenstates \(\pm\ket{g_{f}(x)}\), where \(g_{f}:\{0,1\}^{N}\rightarrow\{0,1\}^{N}\) is a Boolean function. As we also measure in the \(X\) basis in the fully connected layer, the processing of a quantum state \(\rho\) by the QCNN can be implemented in classical post-processing as a Boolean function \(G(x)=(g_{d}\circ g_{d-1}\circ...\circ g_{1})(x)\) after measuring all qubits in the \(X\) basis.
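In code, this classical part is plain function composition; a minimal sketch (our own illustration; \(g_{X}\) and \(g_{Z}\) stand for the per-layer logic circuits of Fig. 4 and are not defined here):

```python
from functools import reduce

def compose_layers(layers):
    """Compose per-layer Boolean functions g_f into G = g_d ∘ ... ∘ g_1,
    applied to a measured bit string x with g_1 acting first."""
    return lambda x: reduce(lambda bits, g: g(bits), layers, x)

# e.g. an alternating QCNN of depth d (g_X, g_Z are placeholder layer functions):
# G = compose_layers([g_X, g_Z] * (d // 2))
```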
As a result, the Boolean function \(G\) corresponding to the QCNN with \(d\) pooling layers can be implemented as a logic circuit with a depth proportional to \(d\) which can be at most \(d=\lfloor\log_{3}N\rfloor\) logarithmic in system size \(N\). We now describe QCNNs detecting the '\(ZXXX\)' phase. The QCNN consisting of alternating layers correcting \(X\) errors and \(Z\) errors is depicted in Fig. 11a. The disentangling unitary \(\tilde{U}^{\dagger}\) consists of controlled \(Z\) gates CZ between all neighboring qubits controlled in the computational basis, controlled \(Y\) gates \(\mathrm{C}_{y}\mathrm{Y}\) between all neighboring qubits controlled in the \(Y\) basis and \(Z\) gates. The \(X\)-error correcting procedure \(\widetilde{\mathrm{QEC}}_{f}^{X}\) consists of controlled \(Y\) gates \(\mathrm{C}_{x}\mathrm{Y}\) and controlled-controlled \(Y\) gates \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Y}\) with controls in the \(X\) basis. The \(Z\)-error correcting procedure \(\widetilde{\mathrm{QEC}}_{f}^{Z}\) is the same as for the '\(ZXZ\)' phase involving SWAP, \(\mathrm{C}_{x}\mathrm{Z}\) and \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Z}\) gates. The QCNN consisting of alternating layers correcting \(C\) errors and \(Z\) errors is depicted in Fig. 11b. The \(C\)-error correcting procedure \(\widetilde{\mathrm{QEC}}_{f}^{C}\) consists of \(\mathrm{C}_{x}\mathrm{Y}\) and \(\mathrm{C}_{x}\mathrm{C}_{x}\mathrm{Y}\) gates. As all \(\widetilde{\mathrm{QEC}}_{f}\) procedures consist of gates controlled in the \(X\) basis implementing the Pauli \(Y\) operation or the Pauli \(Z\) operation on the target qubit, they satisfy the condition (5) and thus they can be implemented in classical post-processing as a Boolean function \(g_{f}(x)=x^{\prime}\). The output \[\langle X_{j}\rangle=\mathrm{Tr}\left[X_{j}Q_{d}\tilde{U}_{N}^{\dagger}\rho \tilde{U}_{N}Q_{d}^{\dagger}\right]=\sum_{x}\tilde{P}_{x}(1-2G(x)_{j}), \tag{12}\] of qubit \(j\) measured in the fully-connected layer of the full quantum QCNN circuit can be determined from bit strings \(x\) measured after the first convolutional layer \(\tilde{U}_{N}^{\dagger}\) by using the \(j\)th element of the output of the Boolean function \(G(x)\), where \(\tilde{P}_{x}=\mathrm{Tr}[|x\rangle\langle x|\tilde{U}_{N}^{\dagger}\rho \tilde{U}_{N}]\). Note that the Boolean function corresponding to the \(C\)-error correcting procedure for the '\(ZXXXZ\)' phase is the same as the Boolean function performing the \(X\)-error correcting procedure for the '\(ZXZ\)' phase depicted in Fig. 2. ## Appendix C Error propagation in QCNN circuits In this appendix, we discuss the propagation of errors in QCNNs detecting the '\(ZXZ\)' phase. We exploit that the QCNNs can be implemented as the constant-depth quantum circuit \(U_{N}^{\dagger}\), measurements in the \(X\) basis and classical post-processing. We focus on the propagation of errors in the constant-depth quantum circuit \(U_{N}^{\dagger}\) and the logic circuits implemented in classical post-processing. The '\(ZXZ\)' cluster state \(\ket{C}\) is mapped by the disentangling unitary \(U_{N}^{\dagger}\) onto the product state \(\ket{+}^{\otimes N}\), where \(\ket{+}\) is the \(+1\) eigenstate of the Pauli \(X\) operator. The subsequent measurement thus deterministically yields the outcome \(x_{j}=0\) corresponding to \(X_{j}=+1\) for all qubits \(j\). 
A single \(X_{j}\) error perturbing the cluster state, \(X_{j}\ket{C}\), is mapped onto \(U_{N}^{\dagger}X_{j}U_{N}=Z_{j-1}X_{j}Z_{j+1}\) by the disentangling unitary \(U_{N}^{\dagger}\), leading to the flip of two measurement outcomes, \(x_{j\pm 1}=1\), see red lines in Fig. 2. This \(X\)-error syndrome is corrected by the \(X\)-error correcting procedure such that \(g(x)_{k}=0\) for all \(N/3\) classical bits \(k\) propagating to the next layer, see Fig. 2. The other \(2\cdot N/3\) bits are discarded. Similarly, the syndrome \(x_{j}=x_{j\pm 1}=x_{j+2}=1\) of an \(X_{j}X_{j+1}\) error is corrected by the \(X\)-error correcting procedure. As a result, for a low density of \(X_{j}\) and \(X_{j}X_{j+1}\) errors, the QCNN output converges to unity. As shown in Fig. 3, the QCNN output converges to unity for all states in the '\(ZXZ\)' phase. The phase boundary coincides with a threshold density of coherent \(X\) errors perturbing the cluster state [14; 38]. Above the threshold density, \(X\)-error syndromes are concentrated in the QCNN circuit and the QCNN output vanishes with increasing depth \(d\). Incoherent \(X\) errors can be tolerated for any probability \(p_{X}\neq 0.5\) as discussed in the main text.

The situation is more complicated for \(Z\) errors. A single \(Z_{j}\) error perturbing the cluster state, \(Z_{j}\ket{C}\), leads to the flip of a single measurement outcome, \(x_{j}=1\), as the error \(Z_{j}\) commutes with the disentangling unitary \(U_{N}^{\dagger}\), see Fig. 2. This syndrome of the \(Z_{j}\) error propagates through the \(X\)-error correcting layer such that \(g(x)_{k}=1\) for bits \(k\) on the sublattice with \(N/3\) bits in the next layer if \(k=j-2\), \(k=j\), or \(k=j+2\). As \(2\cdot N/3\) bits are discarded in each layer, the density of error syndromes increases. This leads to a decreasing QCNN output for any probability \(p_{Z}\neq 0,1\) of \(Z\) errors.

To correct \(Z\) errors, we construct a new \(Z\)-error correcting \(\widetilde{\mathrm{QEC}}_{f}^{Z}\) procedure, depicted in Fig. 10c, with a corresponding logic circuit depicted in Fig. 4. This logic circuit consists of the majority function, see Eq. (10). The majority function \(M(x_{j-7\cdot 3^{f-1}},x_{j},x_{j+7\cdot 3^{f-1}})\) in layer \(f\) returns the value of the majority of the three bits \(x_{j-7\cdot 3^{f-1}}\), \(x_{j}\), and \(x_{j+7\cdot 3^{f-1}}\). It thus removes isolated syndromes \(x_{j}=1\) of \(Z_{j}\) errors, see Fig. 4. Provided that the initial density of \(Z_{j}\) errors is small enough, the majority vote further decreases the density of error syndromes, preventing their concentration in the QCNN circuit.

We now investigate the propagation of \(Z\) errors in the QCNN with alternating \(X\)-error and \(Z\)-error correcting layers for the cluster state \(\rho=\mathcal{E}\left(\ket{C}\bra{C}\right)\) with the probability \(p_{Z}\) of \(Z\) errors described by the error channel (7) with \(p_{X}=p_{Y}=0\). As \(Z\) errors commute with the disentangling unitary \(U_{N}^{\dagger}\), we measure \(x_{j}=1\) with the uniform probability \(p_{Z}\) at all qubits \(j\). Moreover, the probabilities of measuring the values \(x_{j}=1\) and \(x_{k}=1\) on different qubits \(j\) and \(k\) are not correlated. We will now describe how the probability \(p_{f}\) of \(Z\)-error syndromes evolves in each layer \(f\) of the QCNN. We first note that the probability \(p_{f}\) remains uniform in each layer, i.e. the same for all qubits \(j\), as the QCNN circuit is translationally invariant.
We also neglect correlations between error syndromes that build up in the logic circuit, assuming that the probabilities of \(Z\)-error syndromes on different qubits remain uncorrelated. This assumption is well justified by the agreement with our numerical simulations. We start with the probability \(p_{0}=p_{Z}\) of measuring \(x_{j}=1\) at each qubit \(j\) after the disentangling unitary \(U_{N}^{\dagger}\). The measured bit strings \(x\) are now processed in the \(X\)-error correcting layers for \(f\) odd and in the \(Z\)-error correcting layers for \(f\) even. A bit \(x_{j}\) at the output of the \(X\)-error correcting layer \(f\) depends on five bits \(x_{j-4\cdot 3^{f-1}}\), \(x_{j-2\cdot 3^{f-1}}\), \(x_{j}\), \(x_{j+2\cdot 3^{f-1}}\), and \(x_{j+4\cdot 3^{f-1}}\) at the input of this layer, see Fig. 4, each of which has the value \(1\) with the probability \(p_{f-1}\). Using a truth table for the output of the \(X\)-error correcting layer, we determine that the output value \(x_{j}=1\) occurs with the probability \[p_{f}=f_{X}(p_{f-1})=p_{f-1}^{3}+p_{f-1}(1-p_{f-1})^{2}\left(3-2p_{f-1}+4p_{f-1}^{2}\right). \tag{C1}\] The probability \(p_{f}\) after each \(X\)-error correcting layer increases, i.e., \(p_{f}>p_{f-1}\) for \(0<p_{f-1}<0.5\), resulting in a decreased QCNN output. This can also be seen in Fig. 5, where after each \(X\)-error correcting layer, the QCNN output decreases, compare red and blue lines.

A bit \(x_{j}\) at the output of the \(Z\)-error correcting layer \(f\) depends on three bits \(x_{j-7\cdot 3^{f-1}}\), \(x_{j}\), and \(x_{j+7\cdot 3^{f-1}}\) at the input of this layer, see Fig. 4, each of which has the value \(1\) with the probability \(p_{f-1}\). Using a truth table for the output of the \(Z\)-error correcting layer, we determine that the output value \(x_{j}=1\) occurs with the probability \[p_{f}=f_{Z}(p_{f-1})=p_{f-1}^{2}(3-2p_{f-1}). \tag{C2}\] The probability \(p_{f}\) after each \(Z\)-error correcting layer decreases, i.e., \(p_{f}<p_{f-1}\) for \(0<p_{f-1}<0.5\), resulting in an increased QCNN output. This can also be seen in Fig. 5, where after each \(Z\)-error correcting layer the QCNN output increases, compare red and blue lines.

We identify two distinct regimes depending on the initial error probability \(p_{0}=p_{Z}\). For error probabilities below the threshold, \(p_{Z}<p_{\rm th}\), error correction in even (\(Z\)-error correcting) layers dominates over error concentration in odd (\(X\)-error correcting) layers, resulting in a net reduction of errors after two subsequent layers. For error probabilities above the threshold, \(p_{Z}>p_{\rm th}\), error concentration in odd layers dominates over error correction in even layers, resulting in a net concentration of errors after two subsequent layers. We determine the threshold probability \(p_{\rm th}=0.054\) as the fixed point of the recursion relation \(p_{f}=f_{Z}(f_{X}(p_{f-2}))\).
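This fixed point is easy to reproduce numerically from Eqs. (C1) and (C2); a minimal sketch (our own illustration) using bisection on \(p=f_{Z}(f_{X}(p))\):

```python
def f_X(p):   # Eq. (C1): syndrome probability after an X-error correcting layer
    return p**3 + p * (1 - p)**2 * (3 - 2*p + 4*p**2)

def f_Z(p):   # Eq. (C2): syndrome probability after a Z-error correcting layer
    return p**2 * (3 - 2*p)

# The threshold is the nontrivial fixed point of two subsequent layers:
# below it the syndrome density shrinks, above it the density grows.
lo, hi = 1e-6, 0.4
for _ in range(60):                       # bisection on g(p) = f_Z(f_X(p)) - p
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_Z(f_X(mid)) < mid else (lo, mid)
print(0.5 * (lo + hi))                    # numerical estimate of p_th (text quotes 0.054)
```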
## Appendix D '\(ZXXXZ\)' SPT phase

In this appendix, we discuss QCNNs recognizing the '\(ZXXXZ\)' SPT phase and their tolerance to different types of errors. Similarly to the '\(ZXZ\)' SPT phase, we construct a QCNN to recognize the '\(ZXXXZ\)' SPT phase from the paramagnetic phase and the antiferromagnetic phase. The \(X\)-error correcting procedure \(\widetilde{\rm QEC}^{X}_{f}\) is depicted in Fig. 11a. The '\(ZXXXZ\)' cluster state \(\ket{D}\) is mapped by the disentangling unitary \(\tilde{U}^{\dagger}_{N}\) in the first convolutional layer onto the product state \(\ket{+}^{\otimes N}\). The subsequent measurement thus deterministically yields the outcome \(x_{j}=0\) for all qubits \(j\).

A single \(X_{j}\) error perturbing the cluster state \(X_{j}\ket{D}\) is mapped onto \(\tilde{U}^{\dagger}_{N}X_{j}\tilde{U}_{N}=Y_{j-2}X_{j-1}X_{j}X_{j+1}Y_{j+2}\) by the disentangling unitary \(\tilde{U}^{\dagger}_{N}\), leading to the flip of two measurement outcomes \(x_{j\pm 2}=1\). This error is corrected by the \(X\)-error correcting QEC procedure such that \(g(x)_{k}=0\) for all \(N/3\) classical bits \(k\) propagating to the next layer. Similarly, the syndrome \(x_{j\pm 2}=x_{j-1}=x_{j+3}=1\) of an \(X_{j}X_{j+1}\) error is corrected by the \(X\)-error correcting procedure.

We now investigate the QCNN with \(X\)-error correcting layers. We plot in Fig. 12a the QCNN output across a cut through the phase diagram as a function of \(h_{2}/J_{2}\) for \(J_{1}=0\) and different depths \(d\) of the QCNN in the presence of incoherent \(X\) errors. We can see that the QCNN output converges to unity with the increasing depth of the QCNN in the '\(ZXXXZ\)' SPT phase (gray region). On the other hand, the QCNN output vanishes with the increasing depth in the topologically trivial phases (white regions). This shows that the QCNN can recognize the '\(ZXXXZ\)' SPT phase from topologically trivial phases in the presence of incoherent \(X\) errors. Incoherent \(X\) errors can be tolerated for any probability \(p_{X}\neq 0.5\).

To equip the QCNN with the tolerance to symmetry-breaking \(Z\) errors, we alternate \(X\)-error correcting layers with \(Z\)-error correcting layers, see Fig. 11a. The \(\widetilde{\rm QEC}^{Z}_{f}\) procedure, correcting \(Z\) errors, is the same as that for the '\(ZXZ\)' phase, cf. Fig. 10c, which can be implemented in classical post-processing as the majority function (10), depicted in Fig. 4. We start by investigating the QCNN with alternating layers for the '\(ZXXXZ\)' cluster state perturbed by incoherent \(Z\) errors as the input state. We plot in Fig. 12b the QCNN output as a function of the \(Z\)-error probability. We can see an alternating QCNN output after odd and even layers. \(Z\) errors propagate through odd \(X\)-error correcting layers and, as the system size is reduced by a factor of three, the density of \(Z\) errors increases. This error concentration leads to the decrease of the QCNN output after odd layers, compare brown and yellow lines in Fig. 12b.

Figure 12: QCNN recognizing the '\(ZXXXZ\)' phase from the paramagnetic phase and the antiferromagnetic phase. (a) The output of the QCNN consisting of \(X\)-error correcting layers for ground states of the cluster-Ising Hamiltonian (1) perturbed by incoherent \(X\) errors as a function of \(h_{2}/J_{2}\) for different depths \(d\) of the QCNN. (b) The output of the QCNN consisting of alternating layers correcting \(X\) errors and \(Z\) errors as a function of the \(Z\)-error probability \(p_{Z}\) for the '\(ZXXXZ\)' cluster state for different depths \(d\). The black dashed line shows the threshold error probability \(\tilde{p}_{\rm th}=0.018\). (c) The output of the QCNN consisting of alternating layers correcting \(X\) errors and \(Z\) errors for ground states of the cluster-Ising Hamiltonian (1) perturbed by depolarizing noise as a function of \(h_{2}/J_{2}\) for different depths \(d\). The gray regions denote the '\(ZXXXZ\)' phase. [Parameters: \(N=1215\), \(M_{S}=10^{4}\); (a) \(p_{X}=0.1\), \(p_{Y}=p_{Z}=0\), \(h_{1}/J_{2}=0.5\), \(J_{1}=0\); (b) \(p_{X}=p_{Y}=0\); (c) \(p_{X}=p_{Y}=p_{Z}=0.005\), \(h_{1}/J_{2}=0.5\), \(J_{1}=0\)]
In contrast, even layers correct \(Z\) errors, leading to the decrease of their density and the increase of the QCNN output. The QCNN can tolerate \(Z\) errors below the threshold error probability \(\tilde{p}_{\rm th}=0.018\) as the error correction in even layers dominates over the error concentration in odd layers. This shows that implementing the \(Z\)-error correcting layer after each \(X\)-error correcting layer prevents the concentration of symmetry-breaking \(Z\) errors for small error probabilities.

The threshold probability \(\tilde{p}_{\rm th}=0.018\) for the '\(ZXXXZ\)' phase is smaller than the threshold probability \(p_{\rm th}=0.054\) for the '\(ZXZ\)' phase. This decrease in the tolerated error probabilities can be understood by investigating the propagation of \(Z\) errors in the QCNN circuit. In contrast to the disentangling circuit \(U_{N}^{\dagger}\) for the '\(ZXZ\)' phase, which commutes with \(Z_{j}\) errors, the disentangling unitary \(\tilde{U}_{N}^{\dagger}\) for the '\(ZXXXZ\)' phase maps \(Z_{j}\) errors onto three errors, \(\tilde{U}^{\dagger}_{N}Z_{j}\tilde{U}_{N}=Y_{j-1}Z_{j}Y_{j+1}\), which flip the measurement outcomes \(x_{j}=x_{j\pm 1}=1\) on qubits \(j-1\), \(j\), and \(j+1\). This error syndrome is corrected by the \(Z\)-error correcting procedure provided that the \(Z_{j}\) error is isolated. However, the threshold error probability \(\tilde{p}_{\rm th}\) is smaller than for the '\(ZXZ\)' phase due to the multiplication of symmetry-breaking errors by the disentangling unitary.

We now study the error tolerance of the QCNN with alternating layers for different ground states of the cluster-Ising Hamiltonian (1) perturbed by depolarizing noise. We plot in Fig. 12c the QCNN output as a function of \(h_{2}/J_{2}\) for \(J_{1}=0\) and different depths \(d\) of the QCNN. We can see that the QCNN tolerates the incoherent errors due to depolarizing noise as its output converges to unity with the increasing depth \(d\) in the '\(ZXXXZ\)' phase (gray region) and vanishes in the topologically trivial phases (white regions).

In conclusion, we constructed a QCNN for the '\(ZXXXZ\)' phase that tolerates symmetry-preserving \(X\) errors if the error channel is invertible and symmetry-breaking \(Z\) errors for small error probabilities. The QCNN is constructed similarly as for the '\(ZXZ\)' phase by amending the QEC procedures to correct \(X_{j}\) and \(X_{j}X_{j+1}\) errors perturbing the '\(ZXXXZ\)' cluster state. As the disentangling unitary \(\tilde{U}_{N}^{\dagger}\) for the '\(ZXXXZ\)' phase maps \(Z_{j}\) errors onto three errors, \(\tilde{U}^{\dagger}_{N}Z_{j}\tilde{U}_{N}=Y_{j-1}Z_{j}Y_{j+1}\), the threshold probability \(\tilde{p}_{\rm th}=0.018\) of \(Z\) errors is reduced compared to the QCNN for the '\(ZXZ\)' phase.

We finally discuss the propagation of errors in the QCNN detecting the '\(ZXXXZ\)' phase from the '\(ZXZ\)' phase. This QCNN consists of alternating \(C\)-error and \(Z\)-error correcting layers as depicted in Fig. 11b and discussed in Sec. VI of the main text. The \(C\)-error correcting layers are essential for the detection of the '\(ZXXXZ\)' phase while the \(Z\)-error correcting layers equip the QCNN with error tolerance.
A single \(C_{j}\) error is transformed by the disentangling unitary \(\tilde{U}_{N}^{\dagger}\) as \(\tilde{U}_{N}^{\dagger}C_{j}\tilde{U}_{N}\propto Y_{j-1}X_{j}Y_{j+1}\) and thus leads to the error syndrome \(x_{j\pm 1}=1\). This error syndrome is corrected by the \(C\)-error correcting procedure. \(X_{j}\) and \(X_{j}X_{j+1}\) errors are mapped onto \(\tilde{U}_{N}^{\dagger}X_{j}\tilde{U}_{N}=Y_{j-2}X_{j-1}X_{j}X_{j+1}Y_{j+2}\) and \(\tilde{U}_{N}^{\dagger}X_{j}X_{j+1}\tilde{U}_{N}=Y_{j-2}Z_{j-1}Z_{j+2}Y_{j+3}\) with the corresponding error syndromes \(x_{j\pm 2}=1\) and \(x_{j\pm 2}=x_{j-1}=x_{j+3}=1\), respectively. The syndrome \(x_{j\pm 2}=1\) of the \(X_{j}\) error is corrected by the \(C\)-error correcting layer only if bit \(j\) propagates to the next layer. If bit \(j\) is discarded, the \(X_{j}\)-error syndrome is transformed into either \(x_{j-2}=x_{j+4}=1\) or into \(x_{j-4}=x_{j+2}=1\). On the sublattice with \(N/3\) bits in the next layer, this corresponds in both cases to the \(C_{k}\)-error syndrome \(x_{k\pm 3}=1\). In the subsequent \(Z\)-error correcting layer, we take the majority value \(M(x_{k-7\cdot 3},x_{k},x_{k+7\cdot 3})\) of every triple of bits \(x_{k-7\cdot 3}\), \(x_{k}\), and \(x_{k+7\cdot 3}\). As these bits are separated by the distance \(7\cdot 3\), the single \(C_{k}\)-error syndrome does not change any of the majority values and is thus removed. Similarly, \(X_{j}X_{j+1}\) error syndromes are also removed by two subsequent layers. The QCNN can thus distinguish the '\(ZXXXZ\)' phase from the '\(ZXZ\)' phase, see Fig. 7.

Note that the QCNN consisting of only \(C\)-error correcting layers also corrects \(X_{j}\) and \(X_{j}X_{j+1}\) errors. As we discussed above, the syndrome of the \(X_{j}\) error is transformed by the \(C\)-error correcting layer into the \(C_{k}\)-error syndrome on the sublattice with \(N/3\) bits. This \(C_{k}\)-error syndrome is corrected in the subsequent \(C\)-error correcting layer. Similarly, the syndrome of a single \(X_{j}X_{j+1}\) error is also corrected by two subsequent \(C\)-error correcting layers.

## Appendix E Multiscale string order parameter

In this appendix, we describe the multiscale SOP \(S_{\rm M}\), see Eq. (11), that is measured by the QCNNs considered in this work. First, we show that \(S_{\rm M}\) is a sum of products of SOPs \(S_{jk}\). Then, we demonstrate that the length of the SOPs involved in \(S_{\rm M}\) increases exponentially with the depth \(d\) of the QCNN. Finally, we determine a lower bound for the number of products of SOPs in \(S_{\rm M}\).

We focus here on the QCNN detecting the '\(ZXZ\)' phase, consisting of alternating \(X\)-error and \(Z\)-error correcting layers. We consider the form of the QCNN depicted in Fig. 10c with all convolutional layers for \(f>1\) and the fully connected layer absorbed into the \(\widetilde{\rm QEC}_{f}\) procedures in pooling layers. The QCNN circuit thus performs the unitary \[U_{\rm QCNN}=\widetilde{\rm QEC}_{d}^{X}\widetilde{\rm QEC}_{d-1}^{Z}...\widetilde{\rm QEC}_{2}^{Z}\widetilde{\rm QEC}_{1}^{X}U_{N}^{\dagger} \tag{12}\] consisting of the disentangling unitary \(U_{N}^{\dagger}\) and \(d\) pooling layers \(\widetilde{\rm QEC}_{f}\), where \(f=1,2,...,d\). For odd (even) \(f\), the pooling layers perform the \(X\)-error (\(Z\)-error) correcting procedure \(\widetilde{\rm QEC}_{f}^{X}\) (\(\widetilde{\rm QEC}_{f}^{Z}\)), see Fig. 10c. We also assume that the QCNN has an odd number \(d\) of layers.
**Sum of products of string order parameters.** We first show that the observable measured by the QCNN corresponds to the multiscale SOP \(S_{\rm M}\), which is a sum of products of SOPs, cf. Eq. (11). The measurement of the Pauli operator \[\begin{split}\langle X_{\frac{N+1}{2}}\rangle&={\rm Tr}[X_{\frac{N+1}{2}}U_{\rm QCNN}\rho U_{\rm QCNN}^{\dagger}]\\ &={\rm Tr}[U_{\rm QCNN}^{\dagger}X_{\frac{N+1}{2}}U_{\rm QCNN}\rho]\end{split} \tag{12}\] at the end of the QCNN circuit corresponds to the measurement of the observable \(U_{\rm QCNN}^{\dagger}X_{\frac{N+1}{2}}U_{\rm QCNN}\) on the input state \(\rho\). We used the cyclic property of the trace in the second equality in Eq. (12). We backpropagate the measured observable through the QCNN circuit to the input state (zeroth layer). To this end, we use the recursion relations \[\begin{split}\widetilde{\rm QEC}_{f}^{X\,\dagger}G_{jk}^{(f)}\widetilde{\rm QEC}_{f}^{X}&=\frac{1}{4}\Biggl[\sum_{\alpha\beta}G_{(j-\alpha)(k+\beta)}^{(f-1)}\\ &-\sum_{\alpha}G_{(j-\alpha)k}^{(f-1)}G_{(k+\gamma)(k+\gamma)}^{(f-1)}-\sum_{\alpha}G_{(j-\gamma)(j-\gamma)}^{(f-1)}G_{j(k+\alpha)}^{(f-1)}\\ &+G_{(j-\gamma)(j-\gamma)}^{(f-1)}G_{jk}^{(f-1)}G_{(k+\gamma)(k+\gamma)}^{(f-1)}\Biggr],\end{split} \tag{13}\] \[\widetilde{\rm QEC}_{f}^{Z\,\dagger}G_{jk}^{(f)}\widetilde{\rm QEC}_{f}^{Z}=\frac{1}{2^{l_{f}}}\prod_{\delta}\left[X_{\delta-\epsilon}+X_{\delta}+X_{\delta+\epsilon}-X_{\delta-\epsilon}X_{\delta}X_{\delta+\epsilon}\right], \tag{14}\] for Pauli strings \[G_{jk}^{(f)}=X_{j}X_{j+2\cdot 3^{f}}\cdots X_{k}, \tag{15}\] where \(\alpha,\beta\in\left\{0,2\cdot 3^{f-1},4\cdot 3^{f-1}\right\}\), \(\gamma=4\cdot 3^{f-1}\), \(\delta\in\left\{j,j+2\cdot 3^{f},...,k\right\}\), and \(\epsilon=7\cdot 3^{f-1}\). The length of the Pauli strings \(G_{jk}^{(f)}\) is defined as \(l_{f}=(k-j)/(2\cdot 3^{f})+1\). The backpropagation of the measured observable is summarized in Tab. 2.

The Pauli operator \(X_{\frac{N+1}{2}}\) measured at the end of the QCNN circuit corresponds to the Pauli string \(G_{\frac{N+1}{2},\frac{N+1}{2}}^{(d)}=X_{\frac{N+1}{2}}\) with the minimal length \(l_{d}=1\). The recursion relation (13) dictates that backpropagating this operator through the \(X\)-error correcting layer \(\widetilde{\rm QEC}_{d}^{X}\) gives rise to a sum of 16 terms including nine Pauli strings \(G_{jk}^{(d-1)}\), six products of two Pauli strings \(G_{jk}^{(d-1)}\), and a single product of three Pauli strings \(G_{jk}^{(d-1)}\) at layer \(f=d-1\), see Tab. 2. Next, we backpropagate these Pauli strings and the products of Pauli strings through the layer \(\widetilde{\rm QEC}_{d-1}^{Z}\) performing the \(Z\)-error correcting procedure. Due to the linearity of the unitary \(\widetilde{\rm QEC}_{d-1}^{Z}\), we can separately backpropagate each product in the sum. Each Pauli string \(G_{jk}^{(d-1)}\) gives rise to \(4^{l_{d-1}}\) products of Pauli \(X_{i}\) operators at layer \(f=d-2\), see Eq. (14). These products can be expressed in terms of Pauli strings \(G_{jk}^{(d-2)}\) by using Eq. (15). We thus again obtain a sum of products of Pauli strings \(G_{jk}^{(d-2)}\) at layer \(f=d-2\), see Tab. 2. We continue backpropagating these products of Pauli strings towards the input state at layer \(f=0\). Backpropagating the Pauli string \(G_{jk}^{(f)}\) through the \(X\)-error correcting layer \(\widetilde{\rm QEC}_{f}^{X}\) gives rise to 16 products of Pauli strings \(G_{jk}^{(f-1)}\), see Eq. (13).
In every \(X\)-error correcting layer as well as in every \(Z\)-error correcting layer, we again obtain a sum of products of Pauli strings \(G_{jk}^{(f)}\). At layer \(f=0\), Pauli strings \(G_{jk}^{(0)}\) are mapped by the disentangling unitary \(U_{N}^{\dagger}\) onto SOPs, \[U_{N}G_{jk}^{(0)}U_{N}^{\dagger}=S_{(j-1)(k+1)}, \tag{16}\] see Tab. 2. As a result, we measure on the input state a sum of products of SOPs, i.e., the multiscale SOP \(S_{\rm M}\) of Eq. (11).

**Length of string order parameters.** The backpropagation of all Pauli strings and their products is intractable due to their rapidly increasing number with the depth of the QCNN, see Tab. 2. However, we now show that the multiscale SOP involves a SOP whose length increases exponentially with the depth \(d\) of the QCNN. To this end, we focus on the product \[H^{(f)}_{jk}=\mathcal{L}^{(f)}_{j}\,G^{(f)}_{jk}\,G^{(f)}_{(j+3^{f})(k-3^{f})}\,\mathcal{R}^{(f)}_{k} \tag{100}\] of the Pauli strings \(G^{(f)}_{jk}\), \(G^{(f)}_{(j+3^{f})(k-3^{f})}\), \(\mathcal{L}^{(f)}_{j}\), and \(\mathcal{R}^{(f)}_{k}\). The Pauli strings \(\mathcal{L}^{(d-2)}_{j}=\mathcal{R}^{(d-2)}_{k}=\mathbb{1}\) reduce to the identity operator at layer \(f=d-2\), and they are defined recursively for \(f<d-2\) by the relations \[\mathcal{L}^{(f)}_{j}=\mathcal{L}^{(f+1)}_{j+2\cdot 3^{f}}, \tag{101}\] \[\mathcal{R}^{(f)}_{k}=\mathcal{R}^{(f+1)}_{k-2\cdot 3^{f}}, \tag{102}\] for even \(f\) and \[\mathcal{L}^{(f)}_{j}=\mathcal{L}^{(f+1)}_{j-5\cdot 3^{f}}X_{j-6\cdot 3^{f}}X_{j-3\cdot 3^{f}}, \tag{103}\] \[\mathcal{R}^{(f)}_{k}=X_{k+3\cdot 3^{f}}X_{k+6\cdot 3^{f}}\mathcal{R}^{(f+1)}_{k+5\cdot 3^{f}}, \tag{104}\] for odd \(f\). We show in the Supplementary Material that the product \(H^{(f)}_{jk}\) appears at every layer \(f\leq d-2\). The backpropagation of the products \(H^{(f)}_{jk}\) is summarized in Tab. 3. The first product \(H^{(d-2)}_{(\frac{N+1}{2}-3^{d-2})(\frac{N+1}{2}+3^{d-2})}\) appears at layer \(f=d-2\), see Tab. 2. The product \(H^{(f)}_{jk}\) recursively appears at every layer \(f<d-2\). After the disentangling unitary \(U^{\dagger}_{N}\) at layer \(f=0\), the product \(H^{(0)}_{jk}\) gives rise to a product of SOPs, see Tab. 3.

The Pauli string \(G^{(0)}_{(\frac{N+1}{2}-l_{0}+1)(\frac{N+1}{2}+l_{0}-1)}\) in the product \(H^{(0)}_{(\frac{N+1}{2}-l_{0})(\frac{N+1}{2}+l_{0})}\) at layer \(f=0\) attains the length \(l_{0}=\frac{3^{d}+13}{8}\), see Tab. 3. The Pauli string \(G^{(0)}_{(\frac{N+1}{2}-l_{0}+1)(\frac{N+1}{2}+l_{0}-1)}\) is mapped by the disentangling unitary \(U^{\dagger}_{N}\) onto the SOP \(S_{(\frac{N+1}{2}-l_{0})(\frac{N+1}{2}+l_{0})}\) with the length \[L=2l_{0}+1=\frac{3^{d}+17}{4}\sim 3^{d}. \tag{105}\] This shows that the multiscale SOP (11) involves a SOP whose length increases exponentially for large depths \(d\) of the QCNN. For the depth \(d=\log_{3}N\), this SOP exhibits the length \(L\approx N/4\), comparable to the system size \(N\). By extending the analysis presented here, it can be shown that the multiscale SOP \(S_{\text{M}}\) involves also other SOPs with exponentially increasing lengths \(L\sim 3^{d}\) as well as SOPs at all length scales between \(L=1\) and \(L\sim 3^{d}\).

**Number of products of string order parameters.** Finally, we determine a lower bound for the number of products of SOPs in the multiscale SOP \(S_{\text{M}}\). To this end, we focus on products of Pauli strings displayed in Tab. 4.

\begin{table} \begin{tabular}{c|c|c|c} \(f\) & Product of Pauli strings & \(l_{f}\) & Unitary \\ \hline \(d-2\) & \(H^{(d-2)}_{(i-3^{d-2})(i+3^{d-2})}\) & \(1\) & \(\widetilde{\text{QEC}}^{X}_{d-2}\) \\ & \(\vdots\) & & \\ \(f\) even & \(H^{(f)}_{(i-l_{f}\cdot 3^{f})(i+l_{f}\cdot 3^{f})}\) & \(\frac{3^{d-f}+13}{8}\) & \(\widetilde{\text{QEC}}^{Z}_{f}\) \\ \(f\) odd & \(H^{(f)}_{(i-l_{f}\cdot 3^{f})(i+l_{f}\cdot 3^{f})}\) & \(\frac{3^{d-f}-1}{8}\) & \(\widetilde{\text{QEC}}^{X}_{f}\) \\ & \(\vdots\) & & \\ \(0\) & \(H^{(0)}_{(i-l_{0})(i+l_{0})}\) & \(\frac{3^{d}+13}{8}\) & \(U^{\dagger}_{N}\) \\ \hline input & \(\tilde{\mathcal{L}}S_{(i-l_{0}-1)(i+l_{0}+1)}S_{(i-l_{0})(i+l_{0})}\tilde{\mathcal{R}}\) & \(L=\frac{3^{d}+17}{4}\) & — \\ \end{tabular} \end{table} Table 3: Backpropagation of products \(H^{(f)}_{jk}\) of Pauli strings through the QCNN circuit with the depth \(d\). We focus on a single product at each layer \(f=1,2,...,d-2\). The table displays the length \(l_{f}\) of the Pauli string \(G^{(f)}_{(j+3^{f})(k-3^{f})}\) in the product \(H^{(f)}_{jk}\) and the unitary performed at layer \(f\). The corresponding product of SOPs measured on the input state and the length \(L\) of the SOP \(S_{(i-l_{0})(i+l_{0})}\) are displayed in the last row. Operators \(\tilde{\mathcal{L}}\) and \(\tilde{\mathcal{R}}\) are defined in the Supplementary Material and \(i=\frac{N+1}{2}\).
We start with the product \(H^{(2)}_{(\frac{N+1}{2}-9l_{2})(\frac{N+1}{2}+9l_{2})}\), which appears at layer \(f=2\), see Tab. 3. The recursion relation (100) dictates that this product appears at layer \(f=1\) as well. In contrast to the discussion above, we now focus on the product \(H^{(2)}_{(\frac{N+1}{2}-9l_{2})(\frac{N+1}{2}+9l_{2})}\) at layer \(f=1\). The Pauli strings \(G^{(2)}_{[\frac{N+1}{2}-9(l_{2}-1)][\frac{N+1}{2}+9(l_{2}-1)]}\) and \(G^{(2)}_{(\frac{N+1}{2}-9l_{2})(\frac{N+1}{2}+9l_{2})}\) in this product have the lengths \(l_{2}=\frac{3^{d-2}+13}{8}\) and \(l_{2}+1\). We backpropagate this product through the \(X\)-error correcting layer \(\widetilde{\text{QEC}}^{X}_{1}\) and the disentangling unitary \(U_{N}^{\dagger}\), \[U_{N}\widetilde{\text{QEC}}_{1}^{X\,\dagger}H_{(i-9l_{2})(i+9l_{2})}^{(2)}\widetilde{\text{QEC}}_{1}^{X}U_{N}^{\dagger}=\bar{\mathcal{L}}\left(\prod_{\zeta}A_{\zeta}C_{\zeta}B_{\zeta}\right)\bar{\mathcal{R}}, \tag{101}\] where \(\zeta\in\{\frac{N+1}{2}-9l_{2},\frac{N+1}{2}-9(l_{2}-1),...,\frac{N+1}{2}+9l_{2}\}\), \(C_{\zeta}=S_{(\zeta-1)(\zeta+1)}\) are stabilizer elements as defined in the main text, and \[A_{\zeta}=\frac{1}{2}\left(C_{\zeta-4}C_{\zeta-2}-C_{\zeta-4}+C_{\zeta-2}+\mathbb{1}\right), \tag{102}\] \[B_{\zeta}=\frac{1}{2}\left(\mathbb{1}+C_{\zeta+2}-C_{\zeta+4}+C_{\zeta+2}C_{\zeta+4}\right). \tag{103}\] In Eq. (101), we used the recursion relation (13) as well as Eq. (16), see the Supplementary Material for details.

The product (101) involves \(2l_{2}+1=\frac{3^{d-2}+17}{4}>\frac{3^{d-2}}{4}\) pairs of operators \(A_{\zeta}\) and \(B_{\zeta}\). By distributing the parentheses in all operators \(A_{\zeta}\) and \(B_{\zeta}\), we obtain a sum of \(16^{2l_{2}+1}>2^{3^{d-2}}\) products of SOPs \(S_{jk}\). Note that these products of SOPs emerge from only the single product \(H^{(2)}_{(\frac{N+1}{2}-9l_{2})(\frac{N+1}{2}+9l_{2})}\) at layer \(f=1\). This places the lower bound \(2^{3^{d-2}}\) on the total number of products of SOPs in the multiscale SOP \(S_{\text{M}}\), which involves also many other products of SOPs.
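This counting can be sanity-checked with a few lines of arithmetic. The sketch below (assuming the closed forms for \(l_{2}\) and \(L\) quoted in this appendix, and the pair counting as reconstructed here) compares base-2 exponents, so the astronomically large numbers themselves are never formed:

```python
# Arithmetic check of the lower-bound counting for odd QCNN depths d.
for d in [5, 7, 9, 11, 13]:
    l2 = (3**(d - 2) + 13) // 8      # length l_2 at layer f = 2
    L = (3**d + 17) // 4             # longest SOP in the multiscale SOP
    pairs = 2 * l2 + 1               # pairs (A_zeta, B_zeta) in the product
    # 16 summands per pair: 16**pairs = 2**(4*pairs) > 2**(3**(d-2))
    assert 4 * pairs > 3**(d - 2)
    print(f"d={d}: l_2={l2}, L={L}, products >= 16**{pairs}")
```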
In summary, we showed in this appendix that the QCNN with alternating \(X\)-error and \(Z\)-error correcting layers detecting the '\(ZXZ\)' phase measures the multiscale SOP \(S_{\text{M}}\), see Eq. (11). This multiscale SOP is a sum of products of SOPs \(S_{jk}\) whose length \(L\sim 3^{d}\) increases exponentially with the depth \(d\) of the QCNN. The lower bound for the number of products of SOPs in the sum is \(2^{3^{d-2}}\).

## Appendix F Sample complexity of the multiscale string order parameter

In this appendix, we discuss the sample complexity of directly sampling the multiscale SOP of Eq. (11) from the input state via local Pauli measurements without using any quantum circuit. A local Pauli measurement consists of simultaneously reading out all qubits \(j\) in the basis of Pauli operators \(\sigma_{j}=X_{j},Y_{j},Z_{j}\). We thus measure in the basis of the tensor product \(\mathcal{B}=\bigotimes_{j=1}^{N}\sigma_{j}\) of the Pauli operators \(\sigma_{j}\). A product of SOPs can be sampled via the local Pauli measurement in any basis \(\mathcal{B}\) in which it is diagonal. Several products of SOPs that are diagonal in the same tensor product basis can be simultaneously sampled in this basis. We can express the multiscale SOP \[S_{\text{M}}=\sum_{m=1}^{b}O_{m} \tag{104}\] in terms of operators \(O_{m}\) which are sums of products of SOPs that are diagonal in the tensor product basis \(\mathcal{B}_{m}\). To determine the expectation value of the multiscale SOP, we can individually measure each operator \(O_{m}\) using local Pauli measurements. While the sample complexity of this measurement depends on the variance \(\langle O_{m}^{2}\rangle-\langle O_{m}\rangle^{2}\) of the operators \(O_{m}\), the number \(b\) of different bases \(\mathcal{B}_{m}\) in which we need to measure places a lower bound on the sample complexity. Note that the decomposition (104) of the multiscale SOP is not unique, as a product of SOPs can be diagonal in several bases \(\mathcal{B}_{m}\).

To determine the lower bound for the sample complexity of the multiscale SOP, we now investigate the minimal number \(B\) of bases in which we need to measure. To this end, we focus on the products (101) of SOPs. We rewrite the products (101) as \[\bar{\mathcal{L}}\left(\prod_{\zeta}A_{\zeta}C_{\zeta}B_{\zeta}\right)\bar{\mathcal{R}}=\bar{\mathcal{L}}A_{\frac{N+1}{2}-9l_{2}}\left(\prod_{\zeta}C_{\zeta}\right)\left(\prod_{\zeta^{\prime}}K_{\zeta^{\prime}}\right)B_{\frac{N+1}{2}+9l_{2}}\bar{\mathcal{R}}, \tag{105}\] where \(\zeta^{\prime}\in\{\frac{N+1}{2}-9l_{2}+4,\frac{N+1}{2}-9l_{2}+13,...,\frac{N+1}{2}+9l_{2}-5\}\) and \[K_{\zeta^{\prime}}=\frac{1}{4}\left[\left(C_{\zeta^{\prime}-2}+\mathbb{1}\right)\left(C_{\zeta^{\prime}+3}+\mathbb{1}\right)\right. \tag{106}\] \[+\left(C_{\zeta^{\prime}-2}-\mathbb{1}\right)C_{\zeta^{\prime}}\left(C_{\zeta^{\prime}+3}+\mathbb{1}\right) \tag{107}\] \[+\left(C_{\zeta^{\prime}-2}+\mathbb{1}\right)C_{\zeta^{\prime}+1}\left(C_{\zeta^{\prime}+3}-\mathbb{1}\right) \tag{108}\] \[+\left.\left(C_{\zeta^{\prime}-2}-\mathbb{1}\right)C_{\zeta^{\prime}}C_{\zeta^{\prime}+1}\left(C_{\zeta^{\prime}+3}-\mathbb{1}\right)\right]. \tag{109}\] Crucially, each operator \(K_{\zeta^{\prime}}\) involves terms that need to be measured in three different tensor product bases.
Recalling that \(C_{\zeta^{\prime}}=Z_{\zeta^{\prime}-1}X_{\zeta^{\prime}}Z_{\zeta^{\prime}+1}\), the terms in the line (107) are diagonal in the \(X\) basis on qubit \(\zeta^{\prime}\) and in the computational basis on qubit \(\zeta^{\prime}+1\). The terms in the line (108) are diagonal in the computational basis on qubit \(\zeta^{\prime}\) and in the \(X\) basis on qubit \(\zeta^{\prime}+1\). The terms in the line (109) are diagonal in the \(Y\) basis on qubits \(\zeta^{\prime}\) and \(\zeta^{\prime}+1\). As a result, we need to measure in three different bases \(X_{\zeta^{\prime}}Z_{\zeta^{\prime}+1}\), \(Z_{\zeta^{\prime}}X_{\zeta^{\prime}+1}\), and \(Y_{\zeta^{\prime}}Y_{\zeta^{\prime}+1}\) on qubits \(\zeta^{\prime}\) and \(\zeta^{\prime}+1\). The terms in the line (106) are diagonal in any of these three bases as they act as the identity operator on qubits \(\zeta^{\prime}\) and \(\zeta^{\prime}+1\).

The product of SOPs (105) involves \(2\,l_{2}=\frac{3^{d-2}+13}{4}>3^{d-4}\) operators \(K_{\zeta^{\prime}}\). Distributing the parentheses in all operators \(K_{\zeta^{\prime}}\) on the right-hand side of Eq. (105) gives rise to products of SOPs with mutually incompatible bases \(\bigotimes_{\zeta^{\prime}}\{X_{\zeta^{\prime}}Z_{\zeta^{\prime}+1},Z_{\zeta^{\prime}}X_{\zeta^{\prime}+1},Y_{\zeta^{\prime}}Y_{\zeta^{\prime}+1}\}\) on qubits \(\zeta^{\prime}\) and \(\zeta^{\prime}+1\). As a result, we need to measure in \(3^{2l_{2}}>3^{3^{d-4}}\) different tensor product bases \(\mathcal{B}_{m}\). This places the lower bound \(3^{3^{d-4}}\) on the sample complexity of measuring the multiscale SOP via local Pauli measurements.
2304.07741
Canvas: End-to-End Kernel Architecture Search in Neural Networks
The demands for higher performance and accuracy in neural networks (NNs) never end. Existing tensor compilation and Neural Architecture Search (NAS) techniques orthogonally optimize the two goals but actually share many similarities in their concrete strategies. We exploit such opportunities by combining the two into one and make a case for Kernel Architecture Search (KAS). KAS reviews NAS from a system perspective and zooms into a more fine-grained level to generate neural kernels with both high performance and good accuracy. To demonstrate the potential of KAS, we build an end-to-end framework, Canvas, to find high-quality kernels as convolution replacements. Canvas samples from a rich set of fine-grained primitives to stochastically and iteratively construct new kernels and evaluate them according to user-specified constraints. Canvas supports freely adjustable tensor dimension sizes inside the kernel and uses two levels of solvers to satisfy structural legality and fully utilize model budgets. The evaluation shows that by replacing standard convolutions with generated new kernels in common NNs, Canvas achieves average 1.5x speedups compared to the previous state-of-the-art with acceptable accuracy loss and search efficiency. Canvas verifies the practicability of KAS by rediscovering many manually designed kernels in the past and producing new structures that may inspire future machine learning innovations. For source code and implementation, we open-sourced Canvas at https://github.com/tsinghua-ideal/Canvas.
Chenggang Zhao, Genghan Zhang, Mingyu Gao
2023-04-16T10:05:42Z
http://arxiv.org/abs/2304.07741v2
# Canvas: End-to-End Kernel Architecture Search in Neural Networks ###### Abstract The demands for higher performance and accuracy in neural networks (NNs) never end. Existing tensor compilation and Neural Architecture Search (NAS) techniques orthogonally optimize the two goals but actually share many similarities in their concrete strategies. We exploit such opportunities by combining the two into one and make a case for Kernel Architecture Search (KAS). KAS reviews NAS from a system perspective and zooms into a more fine-grained level to generate neural kernels with both high performance and good accuracy. To demonstrate the potential of KAS, we build an end-to-end framework, Canvas, to find high-quality kernels as convolution replacements. Canvas samples from a rich set of fine-grained primitives to stochastically and iteratively construct new kernels and evaluate them according to user-specified constraints. Canvas supports freely adjustable tensor dimension sizes inside the kernel and uses two levels of solvers to satisfy structural legality and fully utilize model budgets. The evaluation shows that by replacing standard convolutions with generated new kernels in common NNs, Canvas achieves average \(1.5\times\) speedups compared to the previous state-of-the-art with acceptable accuracy loss and search efficiency. Canvas verifies the practicability of KAS by rediscovering many manually designed kernels in the past and producing new structures that may inspire future machine learning innovations. For source code and implementation, we open-sourced Canvas at [https://github.com/tsinghua-ideal/Canvas](https://github.com/tsinghua-ideal/Canvas).

**Keywords:** Machine Learning, Tensor Compilers, Neural Architecture Search

## 1 Introduction

Many emerging techniques backed by neural networks (NNs), including computer vision, natural language processing, and robotics, have boomed in recent years and made tremendous progress. However, NNs remain complex algorithms that not only consume significant computation resources to achieve acceptable (e.g., real-time) performance but also lack an effective theoretical methodology for designing models with high accuracy. The demands for high performance and high accuracy only keep increasing when NNs are applied to more scenarios in the real world.

System and machine learning communities have adopted orthogonal approaches to satisfy the high demands above. From a system perspective, NNs are represented as tensor programs, and specialized tensor compilers and frameworks have been developed to realize high-performance execution on different hardware platforms. On the other hand, NN algorithm researchers have started to use automatic methods to design better model architectures with improved accuracy, known as Neural Architecture Search (NAS). Despite the different goals, these two directions share many common underlying techniques. Both attempt to reorganize the NN structures and redistribute the computations by transforming their basic blocks in the rich design spaces; both require timely evaluation as feedback to guide the exploration. Consequently, recent efforts have started to exploit these similarities and simultaneously conduct performance and accuracy exploration with a careful tradeoff between the two. Examples include giving up mathematical equivalence in tensor compilers to unleash higher performance [1, 2], and making NAS aware of performance besides the accuracy goal [3, 4, 5, 6].
In this paper, we take these opportunities one step further and make a case for a new paradigm of _Kernel Architecture Search_ (KAS). To maximize runtime performance while pursuing high model accuracy, KAS searches for an efficient _kernel architecture_ from a lower-level system perspective. It then uses the generated kernels to replace the standard layers (e.g., convolutions) in existing NNs. KAS _stochastically and iteratively_ builds candidate kernels using a rich set of _fine-grained primitives_. It treats _runtime performance as first-priority constraints_ and searches for kernels that achieve the best accuracy within the given performance limit. By completely discarding mathematical equivalence and searching for new designs, KAS enables higher performance than tensor compilers. By primarily focusing on runtime and composing kernels in a fine-grained way, KAS complements NAS from a system perspective while retaining similar levels of accuracy.

To demonstrate the promising potential of KAS, we further build an end-to-end, automated, and efficient framework, Canvas, that can find new high-quality kernels to replace traditional convolutions. Canvas relies on a **fine-grained primitive library** as the building blocks for kernel construction, following the philosophy of _decoupling data rearrangements and arithmetic computations_. The primitive designs are inspired by the decoupling concept in system research while at the same time having great expressibility to realize complicated mathematical compositions in machine learning. From such a primitive library, Canvas uses a **random sampler** to _stochastically and iteratively generate candidate kernels_ from a large search space. The sampler carefully adjusts the sampling probabilities of different primitive types to ensure fair and legitimate kernel construction. It also applies various pruning rules. Then Canvas evaluates the candidate kernels using two classes of **user-specified constraints and optimization goals**, including those that can be _analytically_ modeled and calculated (e.g., numbers of operations and parameters), and those that need _experimental_ measurements (e.g., runtime and accuracy). Finally, a key innovation of Canvas is the use of free _dynamic variables_ on tensor dimension sizes during kernel generation. Dynamic variables greatly enrich the search space but must eventually be substituted with reasonable values that satisfy all structural legality and fully utilize allowed model budgets. So we design **two levels of dynamic variable solvers** in Canvas to handle the two requirements with provable correctness and high efficiency.

On top of several popular backbone NNs, we use Canvas to search for the best kernels to replace standard convolutions. Compared with the original models optimized by TVM Ansor [7], Canvas achieves on average \(1.5\times\) and up to \(4.6\times\) speedups with acceptable accuracy loss (\(\sim 1\%\)). Models are also reduced to \(0.1\sim 0.6\times\) of the original sizes. Canvas also outperforms a previous work that searches for efficient kernel implementations in a network-independent way and then applies manual layer-wise optimizations [2], running \(1.4\sim 1.6\times\) faster. We also conduct detailed case studies to examine what kinds of kernels Canvas discovers. Interestingly, Canvas rediscovers many previously proposed manual kernel designs and produces previously unexplored structures that may inspire future NN design automation and innovations.
## 2 Background

To continuously improve the efficiency and effectiveness of neural networks (NNs), the two research communities of computer systems and machine learning have taken different but complementary approaches in the past years. From the system perspective, there is a large design space for how to carry out the computations specified by the mathematical representation of an NN, resulting in potential orders-of-magnitude runtime performance differences on real hardware platforms like CPUs, GPUs, and specialized chips. To ease the efforts of programming and optimization, many frameworks, including PyTorch [8], TensorFlow [9], and TVM [10, 11], have been proposed. They typically represent NNs as _tensor programs_ and apply fine-grained optimizations at multiple levels, from the computation graph [12, 13, 1, 14] to the loop nest of an individual operator kernel [10, 7].

Meanwhile, _Neural Architecture Search_ (NAS) has become an increasingly popular methodology for designing effective NN architectures. Indeed, a large part of the recently proposed state-of-the-art NNs are automatically discovered by NAS rather than composed manually [15, 16, 17, 18]. NAS typically defines a highly modular search space by dividing the backbone network topology into basic units or cells of various sizes. Then it searches for how to build each cell by connecting basic layers like convolution and pooling. During the search, the accuracy levels of the candidate cell structures are continuously evaluated using statistical metrics or by training on sample datasets. In some sense, NAS also organizes NNs towards a better evaluation objective, but the objective is model accuracy rather than runtime performance.

Regardless of the research perspective, the essence of the problem lies in obtaining reliable accuracy with less computation. It is only the overall research context that leads to the two communities of systems and machine learning focusing on different approaches: the system side leverages fine-grained performance optimizations under the rigid constraint of mathematical equivalence; in contrast, the ML side is free to transform the mathematical form of the NN at a coarse-grained level (i.e., layer-wise operators) to improve accuracy. Turner et al. [2] first captured this new opportunity. Specifically, they introduced two NAS-style transformations into the current TVM [10] compilation framework: bottlenecking (reducing channel counts) and grouping (dividing channels into groups). The new transformations gave up traditional compiler transformation equivalence. However, this work was still preliminary, as it only changed the loop ranges while retaining the original loop structure of traditional convolutions, leaving abundant opportunities unexplored. Also, the workflow was not end-to-end and required manual post-search fine-tuning. Therefore, in this work, we attempt to comprehensively study this new direction by searching for novel neural structures from a larger design space and at a finer granularity, without the limitation of transformation equivalence, and to deliver a user-friendly end-to-end system.

## 3 Kernel Architecture Search

The similarity between tensor program optimizations and NAS techniques motivates us to take one step further along the path. Specifically, we propose _Kernel Architecture Search_ (KAS). KAS is a new paradigm to construct effective and efficient tensor computation kernels, the basic building blocks in NNs.
The main innovation of KAS is to _use system techniques to search for novel kernel designs with high accuracy and high performance_. We view it as neither a compiler nor a pruner/tuner but an automated explorer for new kernels, because we are not transforming existing kernels but constructing new kernels from scratch. In the long term, we aim at automatic algorithmic exploration from the system perspective under the concept of KAS.

KAS treats the computation of a kernel as a micro directed acyclic graph (micro-DAG) \(G=(V,E)\), sketched in code at the end of this section. Each node \(v_{i}\in V\) represents the current tensor shape, and each edge \(e_{ij}\in E\) represents a fine-grained primitive that transforms tensor \(v_{i}\) into the output tensor \(v_{j}\). Fine-grained primitives offer a lower-level representation than traditional coarse-grained kernels like convolutions and matrix/tensor multiplications. They enable us to replace a monolithic and heavy kernel with a DAG of cheap primitives with a smaller total cost. In KAS, we apply a new perspective to balance performance and accuracy by _treating runtime performance as first-priority constraints_ and searching for kernels that achieve the best accuracy within the given performance limit. Such a "performance-first" philosophy essentially flips the conventional workflow, which first designs NN models with certain accuracy levels and then uses tensor compilers to optimize performance. The new approach allows us to balance the two objectives better with a smoother flow in our system.

**Design challenges.** Nevertheless, realizing a practical KAS framework to automatically and efficiently explore the huge design space is still challenging. We summarize the key questions below, which our proposed system has to address.

* What are the necessary fine-grained primitives KAS must incorporate to build high-quality kernels, allowing for both rich expressibility and flexible construction?
* What sampling and search algorithms should KAS use to construct candidate kernels? How to balance the use of different primitives?
* How to effectively determine the legality of generated kernels with complex topologies, particularly when multiple branches interact and require matching dimension sizes?
* What interface and techniques should KAS use to satisfy various user-specified constraints, including FLOPs, parameter numbers, runtime, and accuracy?
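To make the micro-DAG abstraction concrete, the following minimal Python sketch (hypothetical class and primitive names, not the actual Canvas data structures) encodes symbolic tensor shapes at nodes and fine-grained primitives on edges:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    shape: tuple          # symbolic dimension sizes, e.g. ("C", "H", "W")

@dataclass
class Edge:
    primitive: str        # e.g. "unfold", "group", "fc", "broadcast_add"
    srcs: list            # one input node, or two for blending primitives
    dst: Node

@dataclass
class MicroDAG:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add(self, primitive, srcs, out_shape):
        v = Node(out_shape)
        self.nodes.append(v)
        self.edges.append(Edge(primitive, srcs, v))
        return v

# A toy kernel: unfold spatial neighbors, then an FC remap introducing a
# dynamic variable "x1" as the new channel dimension.
g = MicroDAG()
v0 = Node(("C", "H", "W")); g.nodes.append(v0)
v1 = g.add("unfold", [v0], ("C", "K_H", "K_W", "H", "W"))
v2 = g.add("fc", [v1], ("x1", "H", "W"))
```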
## 4 Canvas Design Overview

As a concrete realization and an early milestone of KAS, we design _Canvas_ (CONVolution-like Architecture Search), an end-to-end, automated, and efficient framework. Figure 1 illustrates an overview of the system components in Canvas and their workflow.

Figure 1: System overview of Canvas and its workflow. The micro-DAG random sampler generates a kernel that contains two dynamic variables, \(x_{1}\) and \(x_{2}\). The solver by shape matching sets the value of \(x_{2}\) when merging the two branches into one. The solver by constraints assigns concrete values to the remaining \(x_{1}\) for each replacement target. The code generator implements the resultant kernels. Finally, the evaluator selects the best candidate according to accuracy and runtime measurements.

**Main focus: convolutions.** Generally, KAS can be made to generate any type of kernels and integrate them into any NN backbone topology. In Canvas, we mainly focus on _Convolutional_ Neural Networks (CNNs), which are the state-of-the-art solutions in many real-world application scenarios [19, 20, 21]. A convolution kernel takes a tensor of shape \([C_{\text{in}},H,W]\) as input, uses a set of \(K_{H}\times K_{W}\) filters to aggregate information from neighbor pixels on multiple channels, and produces a \([C_{\text{out}},H,W]\) output tensor. With Canvas, our goal is to generate new kernels with the same input and output shapes but with entirely different computational behaviors that improve performance and/or accuracy. Then we can replace standard convolutions with the new kernels in the same backbone NN topology. For simplicity, Canvas only searches for _a single kernel template_, which takes an input tensor of \([C,H,W]\) and produces an output tensor of the same shape \([C,H,W]\). For a general convolution of \([C_{\text{in}},H,W]\Rightarrow[C_{\text{out}},H,W]\), we observe that usually one of \(C_{\text{in}}\) and \(C_{\text{out}}\) is a multiple of the other, so it can be composed using the kernel template as shown in Figure 2. This allows Canvas to focus on optimizing a single kernel template shaped \([C,H,W]\Rightarrow[C,H,W]\), amortizing the expensive search cost.

Figure 2: A single kernel template of \([C,H,W]\Rightarrow[C,H,W]\) is applied to general convolutions with different \(C_{\text{in}}\) and \(C_{\text{out}}\) values.

**Interface and functionality.** With a simple Python interface canvas.sample(nn, budget), users specify a backbone nn and designate Canvas to sample a new kernel under the system budget (a usage sketch appears at the end of this section). The user can specify budget using a variety of constraints and optimization goals, which we put into two categories. The first category includes constraints that can be _analytically calculated_, e.g., numbers of FLOPs and model parameters. The other category covers the constraints and goals that require _experimental measurements_, e.g., latency and accuracy.

**Workflow.** Following Figure 1, we briefly introduce the end-to-end workflow of Canvas. Canvas composes a library of various _fine-grained primitives_ as the building blocks (Section 5) and uses a _micro-DAG random sampler_ to build candidate kernel implementations. Starting from the input tensor, it extends the current micro-DAG by randomly sampling from the primitive library (Section 6.1). The sampler can also grow the micro-DAG into multiple branches. Finally, it merges the branches back because of the hard shape constraint of the single output tensor \([C,H,W]\) (note that the kernel in Figure 1 is unfinished; a complete kernel after sampling should have exactly one output tensor shaped \([C,H,W]\)). However, some primitives (e.g., fully connected) may introduce intermediate tensors with arbitrary shapes. We use _dynamic variables_ (denoted as \(x_{i}\)) to represent such free dimensions and use a _solver by shape matching_ to coordinate the free variables in different branches so that they can match (Section 6.2). There could still be some dynamic variables left undetermined after the kernel structure is finalized. We use a _solver by constraints_ (Section 6.3) to solve their values according to the analytical constraints, e.g., numbers of FLOPs and parameters. After all its dynamic variables are set, we apply each kernel candidate to the backbone NN. The code generator generates optimized code, and the evaluator uses a distributed platform to experimentally measure the remaining constraints and goals.
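A minimal usage sketch of this interface follows; the exact structure of the budget argument shown here is illustrative rather than the actual accepted options, which are documented in the open-source repository:

```python
import torchvision
import canvas  # open-sourced at https://github.com/tsinghua-ideal/Canvas

# Backbone whose standard convolutions are the replacement targets.
backbone = torchvision.models.resnet18(num_classes=10)

# Sample a replacement kernel under user-specified budgets. The keys
# below (relative FLOPs and parameter ratios) are hypothetical; latency
# and accuracy are measured experimentally by the evaluator afterwards.
kernel = canvas.sample(backbone, budget={"flops": 0.5, "params": 0.6})
```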
## 5 Fine-Grained Primitives

Canvas uses a set of fine-grained primitives as the edges of the micro-DAG. It is critical to designate the appropriate granularity for these primitives. However, neither the techniques from current system research nor those from machine learning research satisfy this demand. The instruction and loop levels in tensor compilers are overly elaborate and would make the design space of constructing a kernel from scratch too large to tackle; the existing NAS methods only work in a coarse-grained manner and offer limited flexibility to fine-tune performance and accuracy.

Instead, we observe that most neural operators, especially those applied on multi-dimensional tensors to aggregate neighbor information, can be decomposed into _data rearrangements_ and _actual arithmetic computations_ (made concrete in the code sketch after Table 1). For example, a standard convolution first _unfolds_ the input \(I\) of shape \([C,H,W]\) using a receptive view \([K_{H},K_{W}]\), and produces a tensor \(U\) of shape \([C,K_{H},K_{W},H,W]\). This step does not actually copy neighbor pixels but rearranges data through shallow indexing: \(U(c,kh,kw,h,w)=I(c,h+kh,w+kw)\) (lowercase variables represent iterators while uppercase ones represent constants). Then, the actual computation is done by multiplying tensor \(U\) with a weight tensor to obtain the result. This decoupling has existed since convolutions first appeared, known as im2col [22]. It reflects the semantic understanding gap between system programmers and machine learning researchers. Similar reasoning can also be found in many other operators, especially those designed for lightweight computations: group convolution uses a rearrangement like \(U(g,c,kh,kw,h,w)=I(g\times\text{GROUP\_SIZE}+c,h+kh,w+kw)\); shift convolution applies an offset on a spatial dimension, \(U(c,h,w)=I(c,h+1,w)\).

Following the above idea of decomposition, we conclude that the granularity of Canvas needs to follow this perspective of _rearrangement and computation decoupling_, which is more efficient than compiler primitives and more flexible than NAS building blocks. Moreover, to support complex topologies in a micro-DAG, primitives are needed to blend (i.e., merge) multiple branches. Consequently, we design our fine-grained primitive library in Canvas, which can be categorized into the three classes summarized in Table 1.

**Rearrangement primitives.** The primitives in this class generalize the first type of operations in the above example. These primitives enable flexible data rearrangement across different channels and spatial dimensions in the tensor. For example, along a channel dimension, we could _group_ the data into \(G\) separate groups to apply subsequent intra-group transformations. With the \([H,W]\) spatial pixels, we could _shift_ the pixels to manipulate neighbor information. Finally, the _unfold_ primitive extracts spatial neighborhood information into the channel space.

**Arithmetic primitives.** This primitive class includes the arithmetic operators commonly used in NNs, which change the numerical values of a tensor. The most common one is _fully-connected_ (FC). It remaps all the channel dimensions into a new dimension with an arbitrary size, denoted by a dynamic variable \(x\) as the number of output channels. In our design, FC is the only primitive that contains learned parameters, a.k.a. weights. It is an essential primitive for NN functionality, but it also incurs a higher cost than others, as well as the need to handle dynamic variables. Besides, we have _element-wise/activation_ functions (ReLU, abs, etc.), _folding_ (average pooling at a certain dimension), and _softmax_. All of them are simple and cheap operators but can be helpful to introduce non-linearity and other properties to the data.

\begin{table} \begin{tabular}{c c l} \hline \hline **Class** & **Primitive** & **Description and Shape Changes** \\ \hline Rearrangement & Group & Group a channel dim \(X\) by a factor \(G\), or make each individual channel a group: \([\cdots,X,\cdots]\Rightarrow[\cdots,G,\frac{X}{G},\cdots]\) or \([\cdots,X,1,\cdots]\) \\ & Shift & Shift by 1 pixel at a spatial dim: \([\cdots,X,\cdots]\Rightarrow[\cdots,X_{+1},\cdots]\) \\ & Unfold & Unfold \(K\) neighbors of spatial dim \(X\) to a channel dim: \([\cdots,X,\cdots]\Rightarrow[\cdots,K_{X},\cdots,X,\cdots]\) \\ \hline Arithmetic & Fully-connected & Remap values at all channels to a new dim \(x\): \([\cdots_{\text{channel}},\cdots]\Rightarrow[x,\cdots]\) \\ & Element-wise & ReLU, abs, \(\sin\), \(\exp\), etc.: \([\cdots]\Rightarrow[\cdots]\) \\ & Folding & Average/max pooling at any dim \(X\): \([\cdots,X,\cdots]\Rightarrow[\cdots,\cdots]\) \\ & Softmax & Softmax at any continuous dims: \([\cdots,\cdots]\Rightarrow[\cdots,\cdots]\) \\ \hline Blending & Broadcast & Broadcasting add/sub/mul/min/max from LHS to RHS: \([\cdots]_{\text{LHS}},[\cdots]_{\text{RHS}}\Rightarrow[\cdots]_{\text{RHS}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Three classes of fine-grained primitives used in Canvas.
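The rearrangement/computation decoupling can be illustrated with a few lines of NumPy (a sketch of the textbook im2col view, not Canvas code): the unfold step is pure indexing, and all arithmetic is deferred to a single matrix multiplication, i.e., an FC remap over the unfolded channels.

```python
import numpy as np

def unfold(I, KH, KW):
    # Rearrangement only: U[c, kh, kw, h, w] = I[c, h + kh, w + kw]
    # (valid padding for brevity); no arithmetic happens here.
    C, H, W = I.shape
    OH, OW = H - KH + 1, W - KW + 1
    U = np.empty((C, KH, KW, OH, OW), dtype=I.dtype)
    for kh in range(KH):
        for kw in range(KW):
            U[:, kh, kw] = I[:, kh:kh + OH, kw:kw + OW]
    return U

def conv_as_fc(I, weight):
    # Arithmetic only: the convolution becomes one matrix multiplication
    # between the unfolded input and the flattened filters.
    C_out, C, KH, KW = weight.shape
    U = unfold(I, KH, KW)                    # [C, KH, KW, OH, OW]
    OH, OW = U.shape[3], U.shape[4]
    cols = U.reshape(C * KH * KW, OH * OW)   # im2col matrix
    out = weight.reshape(C_out, -1) @ cols   # fully-connected remap
    return out.reshape(C_out, OH, OW)

I = np.random.randn(4, 8, 8)
W_ = np.random.randn(6, 4, 3, 3)
print(conv_as_fc(I, W_).shape)  # (6, 6, 6)
```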
**Blending primitives.** To eventually produce a single output tensor, the kernel micro-DAG must merge the multiple branches which may grow during the random sampling process (Section 6.1). Having such a capability allows us to support advanced complex connections, such as residual blocks (broadcasting addition) [23]. To blend two tensors into one, we introduce the _broadcast_ primitive. It takes two tensors \(\mathrm{LHS}\) and \(\mathrm{RHS}\) as inputs and produces an output tensor with the same shape as \(\mathrm{RHS}\). To do so, we denote the two input shapes in the form of \([\cdots\text{ common prefix},\cdots\text{LHS},\cdots\text{ common suffix}]\) and \([\cdots\text{ common prefix},\cdots\text{RHS},\cdots\text{ common suffix}]\). The data on the dimensions \([\cdots\text{LHS}]\) are broadcast to \([\cdots\text{RHS}]\), i.e., replicated multiple times and applied to \(\mathrm{RHS}\) through a binary operation like addition, subtraction, or multiplication. This requires that the total size of the \([\cdots\text{LHS}]\) dimensions be a factor of that of \([\cdots\text{RHS}]\). For example, broadcasting \([G,\frac{x_{1}}{G},H,W]\) to \([G,\frac{C}{G},K_{H},H,W]\) expects that \(\frac{x_{1}}{G}\) is a factor of \(\frac{C}{G}\times K_{H}\). Such dimension matching introduces a new challenge of dynamic variable substitution, which we resolve in Section 6.2.

**Expressibility of our primitive library.** The fine-grained primitive library is sufficiently expressive to explore a large design space of kernel construction, including complex manual designs from previous works. Take Involution [24] for a detailed illustration, which is embedded in Figure 1. The central part is a broadcast multiplication between two tensors \([G,K,K,H,W]\) and \([G,C/G,K,K,H,W]\). We can first generate them from the input tensor using group and unfold primitives, temporarily as \([G,x_{2}/G,H,W]\) and \([G,C/G,K,K,H,W]\). Then we try to blend them with broadcasting and substitute \(x_{2}=GKK\).
In the end, we still have a free variable \(x_{1}\), which can be easily scaled to realize different FLOPs and model sizes (Section 6.3). Note that this is just one specific sample in the large design space. In our experiments, we indeed observe that almost all manual designs appear during the search or in the final results, among other previously unexplored constructions.

## 6 Kernel Generation

In Canvas, the micro-DAG random sampler stochastically and iteratively generates a large number of candidate kernels by sampling from the primitive library. Section 6.1 describes the sampling algorithm and our pruning techniques. A key challenge is to assign values to dynamic variables to ensure legality. We first propose a variable solver in Section 6.2 to address the problem in the sampling process. Finally, Section 6.3 resolves any remaining variables after primitive sampling according to the analytical constraints.

### 6.1 Sampling Algorithm

Given the high flexibility of constructing complex micro-DAGs from the rich set of primitives, Canvas uses random sampling. Canvas builds the micro-DAG from a single node with the shape of the input tensor, \([C,H,W]\), and iteratively grows it to \(N\) nodes, where \(N\) is a hyperparameter. At step \(t\), the current micro-DAG \(g_{t}\) has \(n_{t}=|V_{t}|\) nodes. We calculate all possible single-input primitives \(\{p_{t}^{i}\}\) for each node \(v_{t}^{i}\) and blending primitives \(\{p_{t}^{i,j}\}\) for each pair of nodes \((v_{t}^{i},v_{t}^{j})\). Then one primitive \(e_{t}\) is selected from \(\tilde{E}_{t}=\{\{p_{t}^{1}\},\{p_{t}^{2}\},...,\{p_{t}^{n_{t}}\},\{p_{t}^{1,2}\},\{p_{t}^{1,3}\},...,\{p_{t}^{n_{t}-1,n_{t}}\}\}\), giving \(E_{t+1}=E_{t}\cup\{e_{t}\}\). After selecting \(e_{t}\), we add to \(g_{t}\) the new node derived from the input node(s) of \(e_{t}\) and obtain \(g_{t+1}\). A condensed code sketch of this loop follows at the end of this subsection.

**Sampling probability adjustment.** A key innovation in the random sampler is the need to carefully adjust the sampling probability of each primitive. There are two reasons. First, notice that the numbers of choices from \(\{p_{t}^{i}\}\) and \(\{p_{t}^{i,j}\}\) are drastically different, i.e., \(O(n)\) vs. \(O(n^{2})\). Uniformly sampling from \(\tilde{E}_{t}\) would result in a large bias towards the blending primitives. We hence re-scale the probability of each primitive so that the sampling is uniform w.r.t. each primitive type, regardless of \(n_{t}\).

**Topology heuristics.** Moreover, the micro-DAG is constrained by certain topology properties. We must ensure that \(n_{T}=N\) for the final step \(T\), and \(n_{T}=n_{T-1}+1\). Therefore, we need to heuristically restrict the number of leaf nodes in \(g_{t}\), which we term \(W(g_{t})\). During construction, we examine the number of remaining primitives \(N-n_{t}\) to see whether we are allowed to further extend more branches or must start to merge. For example, if \(W(g_{t})=3\) and \(N-n_{t}=2\), we must combine two tensors. We control such behaviors by modifying the sampling probability of primitives, e.g., by setting all probabilities to 0 except for blending primitives.

**Pruning techniques.** Random sampling is an expensive process. We apply several pruning heuristics to improve its efficiency. We use an approximate graph isomorphism hash function similar to [25] to deduplicate generated kernels. Besides, we eliminate obviously sub-optimal constructions with redundant or illegal components, such as consecutive ReLUs and subtracting two equal tensors. With such pruning, Canvas is able to sample a legal kernel candidate in the huge search space within a few milliseconds.
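The sampling loop can be condensed as follows (hypothetical helper names; the candidate enumeration, probability adjustment, and pruning are far more elaborate in the real system):

```python
import random

def sample_micro_dag(input_shape, N, enum_unary, enum_blend, must_merge):
    # Grow a micro-DAG from the input tensor shape to N nodes.
    # enum_unary(v) and enum_blend(u, v) are assumed to enumerate all
    # applicable primitives as (type, out_shape) candidates; must_merge
    # implements the topology heuristics on the remaining primitive count.
    nodes, edges = [input_shape], []
    while len(nodes) < N:
        cands = [(t, [u], s) for u in nodes for (t, s) in enum_unary(u)]
        cands += [(t, [u, v], s)
                  for i, u in enumerate(nodes)
                  for v in nodes[i + 1:]
                  for (t, s) in enum_blend(u, v)]
        if must_merge(nodes, N - len(nodes)):
            # force blending so all branches can merge in time
            cands = [c for c in cands if c[0] == "broadcast"]
        # Sample a primitive type uniformly first, then a concrete
        # candidate, so the O(n^2) blending candidates do not dominate
        # the O(n) single-input ones.
        by_type = {}
        for c in cands:
            by_type.setdefault(c[0], []).append(c)
        typ = random.choice(sorted(by_type))
        prim, srcs, out = random.choice(by_type[typ])
        nodes.append(out)
        edges.append((prim, srcs, out))
    return nodes, edges
```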
### 6.2 Dynamic Variable Solver by Shape Matching

During the sampling process mentioned above, the FC primitives in our candidate micro-DAG produce free dynamic variables. These dynamic variables further enlarge the search space and provide more flexibility to tune the numbers of FLOPs and parameters in our kernel for performance-accuracy tradeoffs. Recall that an FC primitive uses weights to remap \([\cdots_{\text{channel}},\cdots_{\text{spatial}}]\) into \([x,\cdots_{\text{spatial}}]\), where \(x\) indicates that the output channel dimension can have any size. Other primitives typically have deterministic dimensions without introducing variables. The concrete values of all dynamic variables will eventually be assigned according to constraints such as FLOPs and model sizes (Section 6.3). However, not all variables can be independently set in the micro-DAG. In particular, the blending primitive that combines two tensors requires matched dimension sizes of the two inputs. For example, when a tensor \([x_{1},H,W]\) is broadcast to another tensor \([C,K_{H},H,W]\), \(x_{1}\) must be a factor of \(CK_{H}\). By default, the group number \(G\) is a factor of \(C\). So \(x_{1}\) can only take a value from the set \(\{C,K_{H},G,\frac{C}{G},CK_{H},GK_{H},\frac{C}{G}\times K_{H}\}\), which is an additional constraint that must be satisfied.

To handle such constraints among dynamic variables in a general way, we first prove a simple theorem.

**Theorem 1**: _For any tensor constructed in Canvas where all its dimensions are in the form of \(D=\frac{\text{numerator}}{\text{denominator}}\) (e.g., \(D=\frac{C}{G}\) or \(D=CK_{H}\)), there is_

* _no dynamic variable in any spatial dimension (i.e.,_ \([C,H,x_{0}]\) _does not exist);_
* _no dynamic variable in the denominator of a channel dimension (i.e.,_ \(\frac{C}{x_{0}}\) _does not exist);_
* _at most one dynamic variable in the numerator of a channel dimension (i.e.,_ \(\frac{x_{0}x_{1}}{K_{H}}\) _does not exist);_
* _at most one dynamic variable across all dimensions (i.e.,_ \([x_{0},x_{1},H,W]\) _does not exist)._

**Proof sketch.** We prove this by induction. For the input tensor shaped \([C,H,W]\), the theorem naturally holds. For all primitive types except FC, the shape transformation neither introduces new dynamic variables nor moves an existing dynamic variable to a denominator. For an FC primitive, all channel dimensions are remapped to a new dimension with a single dynamic variable \(x\). With the fact that no primitives introduce dynamic variables into spatial dimensions, the theorem holds. \(\square\)

Theorem 1 simplifies the design to substitute dynamic variables for broadcast primitives in Canvas. For two tensors' shapes denoted as \([\cdots_{\text{common prefix}},\cdots_{\text{LHS}},\cdots_{\text{common suffix}}]\) and \([\cdots_{\text{common prefix}},\cdots_{\text{RHS}},\cdots_{\text{common suffix}}]\), we first detect and strip the common prefix and suffix and focus on the unmatched parts \(\{\cdots_{\text{LHS}}\}\) and \(\{\cdots_{\text{RHS}}\}\). We calculate their total sizes as \(\#\{\cdots_{\text{LHS}}\}=\frac{\prod_{i}\text{numerator}_{\text{LHS},i}}{\prod_{i}\text{denominator}_{\text{LHS},i}}\), and similarly for \(\#\{\cdots_{\text{RHS}}\}\). Theorem 1 says that only one dynamic variable can exist in the numerator of each, which we use \(k_{\{\text{LHS},\text{RHS}\}}\in\{0,1\}\) to indicate.
### Dynamic Variable Solver by Analytical Constraints
After some of the dynamic variables are solved in the generation process, the remaining variables should still be carefully considered and assigned according to all _analytical constraints_, e.g., FLOPs and parameter sizes. Generally speaking, the generated kernel template needs to substitute multiple standard convolutions in an NN. Each individual convolution has a specific assignment of \(C\), \(K_{H}\), \(K_{W}\), \(H\), \(W\), \(G\), as well as the dynamic variables \(x\). For these remaining dynamic variables, we denote them as \(x_{i,j}\), i.e., the \(j\)-th variable in the \(i\)-th convolution target. The constraint solver derives all values of \(x_{i,j}\) using a heuristic algorithm to maximally utilize the available budget of the constraints that can be analytically modeled, e.g., FLOPs and parameter sizes. Before dealing with \(x_{i,j}\), we first specially handle the group number \(G\). We observe that most modern NN designs that adopt group convolutions [26, 27] usually apply the same number of groups in almost all their layers. Therefore we enforce a global value \(G\) for all the group primitives across _all_ targets in the NN. \(G\) must be a common factor of the original channel numbers \(C_{i}\) of these replaced convolutions, i.e., \(G\) is a factor of \(\gcd(C_{i})\). For example, with two targets of \(C_{1}=32\) and \(C_{2}=48\), \(G\) should be sampled from the set \(\{2,4,8,16\}\). This simplifies our dynamic variable solver by excluding \(G\) from the free variables. Now, all remaining \(x_{i,j}\) variables were generated by FC primitives as the output channel dimensions. They directly contribute to the FLOPs, the number of parameters, and model accuracy. Our heuristic algorithm sets their values in two steps. First, as each \(x_{i,j}\) denotes a channel dimension, it should roughly match the channel number of the input/output tensor at the target. The solver thus derives the base (i.e., minimum) value of each \(x_{i,j}\) that satisfies structural legality and is proportional to the input channel number \(C_{i}\).
Second, more channels (thus higher FLOPs and more parameters) usually help improve model accuracy. So the solver tries to fully utilize the allowed constraint budget by using the maximum value for each \(x_{i,j}\), which is a multiple of the above base \(x_{i,j}\). Specifically, we should first satisfy the legality of individual dimension sizes and broadcast primitives, similar to Section 6.2. Recall from Theorem 1 that any \(x_{i,j}\) only appears in numerators, i.e., the dimension size is in the form of \(\frac{p\cdot x_{i,j}}{q}\). Each variable may appear in more than one primitive in each kernel; so for each \(x_{i,j}\), there may exist multiple pairs of \((p,q)\) (fully reduced, \(\gcd(p,q)=1\)). To ensure integer dimension sizes, we must have: \[x_{i,j}=k_{i,j}\times\mathrm{lcm}(q_{1},q_{2},\cdots)\stackrel{{\mathrm{def}}}{{=}}k_{i,j}\times\mathrm{lcm}_{i,j},\quad k_{i,j}\in\mathbb{Z}^{+}\] where \(\mathrm{lcm}_{i,j}\) is the least common multiple of all \(q\)s. To make each \(x_{i,j}\) proportional to the corresponding \(C_{i}\), without loss of generality, we assume \(C_{1}=\min_{i}(C_{i})\). We require approximately that: \[\frac{x_{i,j}}{x_{1,j}}=\frac{k_{i,j}\times\mathrm{lcm}_{i,j}}{k_{1,j}\times\mathrm{lcm}_{1,j}}\approx\frac{C_{i}}{C_{1}}\] By setting \(k_{1,j}=1\), we obtain a series of base \(x_{i,j}\) values that satisfy legality and heuristically retain channel scaling throughout the network: \[k_{i,j}=\left\lceil\frac{C_{i}\times\mathrm{lcm}_{1,j}}{C_{1}\times\mathrm{lcm}_{i,j}}\right\rceil,\qquad x_{i,j}=k_{i,j}\times\mathrm{lcm}_{i,j}\] In the second step, we fully utilize the budget of each analytical constraint. Assume a constraint \(\text{Cstr}(\text{net}(\{x_{i,j}\}))\leq\text{budget}\), where Cstr could be FLOPs, parameter sizes, or other constraint functions. If even the previously solved base \(x_{i,j}\) values violate the constraint, we discard this kernel and sample the next. Otherwise, we try to increase these variables according to their utilization sensitivities. Specifically, we calculate each \(\Delta_{i,j}=\text{Cstr}(\text{net}(\cdots,2x_{i,j},\cdots))-\text{Cstr}(\text{net}(\cdots,x_{i,j},\cdots))\), and double the \(x_{i,j}\) in ascending order of \(\Delta_{i,j}\) at each iteration, until we cannot further increase any \(x_{i,j}\) without exceeding the budget. Figure 3 illustrates an example, where the broadcast primitive between \([GK_{H},H,W]\) and \([x_{1},H,W]\) requires that \(x_{1}\) be a multiple of \(GK_{H}\). Applying it to the two layers gives \(\mathrm{lcm}_{1,1}=GK_{1,H}=12\) and \(\mathrm{lcm}_{i,1}=GK_{i,H}=20\). Scaling by \(C\), we have \(x_{1,1}=12\) and \(x_{i,1}=\lceil\frac{C_{i}\,\mathrm{lcm}_{1,1}}{C_{1}\,\mathrm{lcm}_{i,1}}\rceil\,\mathrm{lcm}_{i,1}=60\). Finally, to fully utilize the budget, we double the values two times and get \(x_{1,1}=48\) and \(x_{i,1}=240\).
Figure 3: An example of solving dynamic variables for different convolution targets by the constraint solver.
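The two-step assignment can be sketched as follows (our reconstruction, not the actual Canvas solver; `cost` stands in for any analytically modeled Cstr such as FLOPs, and the channel counts in the demo are made up so that the base values reproduce the numbers of the Figure 3 example):
```python
import math

def base_values(C, lcm):
    """Step 1: per-target base values, proportional to the channel counts C[i];
    C[0] is assumed to be the smallest (reference) channel count."""
    k = [max(1, math.ceil(C[i] * lcm[0] / (C[0] * lcm[i]))) for i in range(len(C))]
    return [k[i] * lcm[i] for i in range(len(C))]

def fill_budget(x, cost, budget):
    """Step 2: repeatedly double the variable with the smallest constraint
    increase (Delta) that still fits the budget; stop when nothing fits."""
    if cost(x) > budget:
        return None  # even the base values violate the constraint
    while True:
        deltas = sorted((cost(x[:i] + [2 * x[i]] + x[i + 1:]) - cost(x), i)
                        for i in range(len(x)))
        for _, i in deltas:
            trial = x[:i] + [2 * x[i]] + x[i + 1:]
            if cost(trial) <= budget:
                x = trial
                break
        else:
            return x

print(base_values([16, 80], [12, 20]))  # [12, 60], as in the Figure 3 example
print(fill_budget([12, 60], lambda v: 5 * v[0] + v[1], budget=480))
# -> [48, 240]: each value doubled twice under this toy cost, as in Figure 3
```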
## 7 Kernel Evaluation
After substituting all variables, we now have a concrete kernel instance for each replacement target. Canvas next generates code implementation and measures the runtime and accuracy on real hardware platforms.
**Code generation and runtime evaluation.** Canvas does code generation for PyTorch and TVM separately. It translates the kernels to nn.Module classes in PyTorch and uses them to build the entire model for training. We also translate kernels into the TVM Tensor Expression language [11] and use TVM Ansor [7] for performance-oriented machine code generation and tuning. The code generation also includes several optimization passes, e.g., adding normalization for numeric stability and other common compiler optimizations.
**Model accuracy evaluation.** Fully training a typical NN usually takes hours to days, depending on the model size. Existing NAS techniques reduce such overheads by using proxy datasets [15], fewer epochs [28, 15], zero-cost estimation [29, 30], or other early-pruning techniques. All these solutions could be directly incorporated into Canvas. We also developed a new pruning strategy that we empirically find very effective. We record the accuracy curve (accuracy vs. epochs) of the best accuracy result achieved so far. When training a new candidate, we require the accuracy at each epoch to be at least a given fraction of the accuracy of the recorded best result at the same epoch, i.e., \(\text{accuracy}_{\text{ker}}(\text{epoch})\geq\lambda\cdot\text{accuracy}_{\max}(\text{epoch})\), where \(\lambda=f(\text{epoch})\) and \(f(x)=\theta+(1-\theta)x\), with the epoch index normalized to \([0,1]\). Basically, \(\lambda\) is small at early epochs and gets larger later, so we allow a kernel to perform less effectively at an early stage, but it should eventually approach the best result. \(\theta\) adjusts the pruning strictness and is typically set to 0.5.
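A minimal sketch of this pruning rule (ours, not Canvas's code), assuming as above that the epoch index is normalized so that \(\lambda\) ramps from \(\theta\) to 1:
```python
def keep_training(acc_candidate, acc_best, epoch, total_epochs, theta=0.5):
    """Early-pruning test: the candidate's accuracy at this epoch must reach
    a fraction lambda of the best-so-far accuracy at the same epoch."""
    lam = theta + (1.0 - theta) * (epoch / total_epochs)
    return acc_candidate >= lam * acc_best

# A candidate stuck at 40% accuracy survives early on but is pruned later,
# when the best-so-far curve had already reached 70% at the same epoch:
print(keep_training(0.40, 0.70, epoch=30, total_epochs=300))   # True  (lambda = 0.55)
print(keep_training(0.40, 0.70, epoch=270, total_epochs=300))  # False (lambda = 0.95)
```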
**Implementation: distributed evaluation infrastructure.** We develop a platform to automatically dispatch runtime/accuracy evaluation tasks to a distributed pool of workers, e.g., GPUs for accuracy tasks and CPU cores for runtime tasks. Due to the high parallelism in our random sampling algorithm and the complete independence of all tasks, this infrastructure has superior scalability. This is desirable as it offers a nice benefit vs. cost tradeoff: the more computing resources you use, the higher rewards in terms of better kernel implementations you may get. In contrast, more advanced algorithms such as evolutionary search [31] may suffer from the inefficiency caused by heavy task dependency and cannot scale well. We implemented all the procedures as a distributed and end-to-end framework with 8,000 lines of C++ and 6,000 lines of Python.
## 8 Results
### Experimental Setups
**Hardware configurations.** We use a cluster of four nodes, each with two sockets of 20-core Intel(r) Xeon(r) Gold 5218R processors, 256 GB of DRAM, and four RTX(tm) 3090 GPUs.
**Workloads.** We choose 8 commonly used NNs: ResNet-18 [23], ResNet-34 [23], VGG-16 [32], DenseNet-161 [33], MobileNet-V2 [34], ResNeXt-29-2x64D [26], RegNet-X-800MF [27], and MNASNet-1-0 [4]. We set the replacement targets to be all standard convolutions in these NNs.
**Baselines.** We use two baselines for comparison. TVM Ansor [7] is a state-of-the-art tensor compiler that preserves mathematical equivalence during optimizations and provides the performance of the original models. Turner et al. [2] (labeled as NAS-PTE, as mentioned in Section 2) is the first work that introduced NAS-style transformations into tensor program optimizations, but at a preliminary stage that only involves simple loop range changes and remains within the traditional scope of convolution semantics.
**Datasets and training configurations.** ImageNet [35] has been the standard benchmark for CNNs, but it is not suitable for direct searching because of its large size. We use the smaller but still relatively challenging CIFAR-100 [36] as the proxy dataset. Specifically, NN models are fully trained for 300 epochs on CIFAR-100 (possibly stopping early due to the pruning in Section 7) using stochastic gradient descent (SGD) [37]. The selected best kernels under CIFAR-100 are then fully trained on the ImageNet dataset for 90 epochs, also using SGD, for accuracy and performance evaluation. We scale the CIFAR-100 images to the same size as ImageNet to ensure the same inference performance.
### End-to-End Performance
We first compare the end-to-end performance between Canvas and the two baselines. Figure 4 shows the performance results among all the workloads.
Figure 4: End-to-end performance comparison between Canvas and the baselines.
In Canvas, we keep reducing the specified FLOPs budget and see how far we can go in terms of actual runtime performance with no more than 1% loss in CIFAR-100 accuracy. NAS-PTE only searched for network-independent efficient kernel designs and later applied manual layer-wise optimizations to put them into an NN. Due to the manual optimizations and code unavailability of NAS-PTE, we only report the results that can be derived from their published search results; the other NNs with missing numbers in Figure 4 are not evaluated. We observe that Canvas achieves up to \(4.6\times\) speedup and obtains an average (geomean) speedup of \(1.5\times\) across all workloads compared to the Ansor-compiled baselines. For early NNs like ResNet-18, ResNet-34, and VGG-16, Canvas performs \(3\times\) better due to the discovered novel kernels. Even for relatively new and highly optimized NNs, Canvas is still able to achieve \(1.05\sim 2\times\) gains. Compared to NAS-PTE, we also achieve \(1.4\times\) and \(1.6\times\) speedups on ResNet-34 and ResNeXt-29, even considering that NAS-PTE incorporated manual analysis and tuning while Canvas is fully automated. DenseNet-161 uses a large number of convolutions with an irregular matching of input and output channel numbers, which does not satisfy Canvas's assumption that one is a multiple of the other. So we can only replace \(60\%\) of the FLOPs, which bounds the ideal speedup at \(2.5\times\), while we achieve \(1.56\times\). All Canvas-discovered NNs have less than 1% accuracy loss on CIFAR-100. When retrained on ImageNet, they exhibit approximately 4% accuracy loss, which is acceptable and comparable to the state-of-the-art numeric pruning techniques summarized in [38] (reducing \(\geq 50\%\) FLOPs or parameters) from the machine learning community.
**Search efficiency.** With the efficient sampler, solver, and pruning designs, Canvas, as a KAS implementation, has reasonable search speed. During two weeks of experiments on our cluster of 16 GPUs, over 300,000 kernels were evaluated, translating to 0.01 GPU hours per kernel.
### Model Size Reduction
We also present the model size comparison in terms of parameter numbers in Figure 5.
Figure 5: Model size comparison between the original size and that discovered by Canvas, for both the entire network and the feature extraction front-end excluding the final classifiers. Each number indicates the ratio of the new model size compared to the original size.
Almost all models benefit from the Canvas design, with the new model being only \(0.1\times\) to \(0.6\times\) of the original size. VGG-16 contains a significant amount (\(89\%\)) of model parameters in its final classifier layer. As Canvas only focuses on convolution replacements, we are not able to reduce the final classifier layer. Excluding the classifier brings the model size ratio with the replaced kernel to \(0.13\times\). For classical NN designs like ResNet-18, ResNet-34, or VGG-16, we are able to compress the model size to \(\sim 0.1\times\). Even for the relatively new models, which are specifically targeted for small model sizes _by design_, Canvas can still reduce the sizes to around \(0.5\times\).
### Kernel-Level Analysis of ResNet-34
Figure 6 shows the best result of our search on ResNet-34.
Figure 6: Best kernel found for ResNet-34.
Instead of collecting information from all neighbors by unfolding, this kernel uses a combination of shift and addition to extract features from only one adjacent pixel in order to significantly reduce FLOPs and parameters. There is also another branch that uses a lightweight depth-wise convolution (FC within each channel separately, i.e., grouping and FC) to maintain expressive power and generalization. Figure 7 summarizes the individual layer performance on ResNet-34, compared to TVM Ansor and all three kernel structures discovered by NAS-PTE.
Figure 7: ResNet-34 layer-wise performance comparison between TVM Ansor, NAS-PTE, and Canvas.
Our kernel outperforms the traditional convolution by up to \(9.45\times\). Compared to the optimized kernels from NAS-PTE, Canvas is on average \(2.2\times\), \(1.7\times\), and \(1.2\times\) faster, respectively. These improvements are achieved without any layer-wise tuning, by a fully automated workflow in Canvas.
### Case Study: Machine Learning Kernels
To understand what kernels Canvas can find, we study its discoveries from both historical and future views.
**Rediscovering designs in the past.** Interestingly, Canvas can rediscover several convolution variants that researchers proposed previously. For example, on VGG [32], a very early linear network, the top kernels discovered by Canvas resemble residual connections [23], using Canvas's broadcast primitives. On ResNet itself, Canvas finds depth-wise separable [39] and spatially separable [40] convolutions. Finally, for MobileNet [39], the resultant kernels include optimizations of pooling and folding, very similar to Squeeze-and-Excitation convolutions [41] and Involution [24]. Canvas not only finds these designs but also cleverly combines them to best exploit their properties. For example, Canvas expresses the aggregation of neighborhood information in one dimension and combines it with a shift primitive in another dimension to achieve lightweight but comprehensive feature extraction. In Figure 8, each unfold primitive can extract the information of 3 surrounding pixels. Applying unfold on both dimensions (bottom left) covers all 9 neighboring pixels, with 9 parameters and 9 FLOPs. Instead, if we use an unfold and a shifted residual connection on the two dimensions, respectively (bottom right), we can use information from 6 neighboring pixels but reduce the cost to only 3 parameters and 4 FLOPs.
Figure 8: Fine-grained combination of unfold and shift.
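To make the Figure 8 trade-off concrete, here is a short PyTorch sketch (our reconstruction; the shapes and the per-channel shared weights are illustrative, not Canvas-generated code). Unfolding both spatial dimensions needs a 9-tap weight and 9 multiply-adds per pixel, while unfolding one dimension plus a one-pixel shift of the input along the other needs only a 3-tap weight and about 4 ops per pixel:
```python
import torch
import torch.nn.functional as F

def unfold_both(x, w9):
    """3x3 neighborhood via unfold on H and W: 9 weights, 9 MACs per pixel."""
    n, c, h, wd = x.shape
    patches = F.unfold(x, kernel_size=3, padding=1)        # [N, C*9, H*W]
    patches = patches.view(n, c, 9, h, wd)
    return (patches * w9.view(1, 1, 9, 1, 1)).sum(dim=2)

def unfold_plus_shift(x, w3):
    """Unfold along H only (3 weights) plus a shifted residual along W:
    6 neighbors reached with 3 parameters and ~4 ops per pixel."""
    n, c, h, wd = x.shape
    col = F.unfold(x, kernel_size=(3, 1), padding=(1, 0))  # [N, C*3, H*W]
    col = col.view(n, c, 3, h, wd)
    vertical = (col * w3.view(1, 1, 3, 1, 1)).sum(dim=2)   # 3 MACs per pixel
    shifted = torch.roll(x, shifts=1, dims=3)              # one-pixel shift along W (no parameters)
    return vertical + shifted                              # 1 extra add per pixel

x = torch.randn(1, 8, 16, 16)
print(unfold_both(x, torch.randn(9)).shape)        # torch.Size([1, 8, 16, 16])
print(unfold_plus_shift(x, torch.randn(3)).shape)  # torch.Size([1, 8, 16, 16])
```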
**Motivating the future.** Canvas finds many interesting and previously unexplored kernel designs with high accuracy and performance. On ResNet, Canvas finds a design with spatial aggregation and kernel spanning, shown as kernel A in Figure 9. Traditional convolutions only operate on neighboring pixels. Kernel A first aggregates \([C,H,W]\) into an overall channel vector \([C]\) by pooling and then generates an FC primitive with shape \([H\times W]\). It finally multiplies this tensor with the input to get the output, which is unexpectedly similar to the popular attention mechanism [42].
Figure 9: Two new kernel patterns discovered by Canvas. The first shows that the dynamic variable system enables irregular designs; the second shows an out-of-kernel opportunity.
With Canvas's variable system and shape-matching rules, such irregularities are extremely abundant in the generated kernels. A surprising finding is that, although we search for individual kernels, we end up with some primitives that do not materially affect the current kernel but are an integral part of the overall design. For example, in kernel B in Figure 9, all primitives inside the kernel only operate on channel dimensions; any spatial shift does not change the numerical results but only rearranges the pixel positions. However, when this kernel is placed in a residual connection, the shift primitive contributes to the output of the whole residual connection. This is a good demonstration of Canvas's ability to search for customized kernels in an end-to-end manner.
## 9 Conclusions
We make a case for a new paradigm of Kernel Architecture Search, which stochastically explores new kernel constructions. To demonstrate this potential, we further build an end-to-end system, Canvas. Canvas has a random sampler to construct kernel structures from a library of fine-grained primitives, two solvers to address tensor dimension flexibility, and an evaluation system to handle analytical and experimental constraints. Our results show Canvas achieves on average \(1.5\times\) and up to \(4.6\times\) performance improvements.
2307.05374
Multi-Task Learning to Enhance Generalizability of Neural Network Equalizers in Coherent Optical Systems
For the first time, multi-task learning is proposed to improve the flexibility of NN-based equalizers in coherent systems. A "single" NN-based equalizer improves Q-factor by up to 4 dB compared to CDC, without re-training, even with variations in launch power, symbol rate, or transmission distance.
Sasipim Srivallapanondh, Pedro J. Freire, Ashraful Alam, Nelson Costa, Bernhard Spinnler, Antonio Napoli, Egor Sedov, Sergei K. Turitsyn, Jaroslaw E. Prilepsky
2023-07-04T08:56:09Z
http://arxiv.org/abs/2307.05374v3
# Multi-Task Learning to Enhance Generalizability of Neural Network Equalizers in Coherent Optical Systems
###### Abstract
_For the first time, multi-task learning is proposed to improve the flexibility of NN-based equalizers in coherent systems. A "single" NN-based equalizer improves the Q-factor by up to 4 dB compared to CDC, without re-training, even with variations in launch power, symbol rate, or transmission distance._
## Introduction
The demand for high-speed data transmission keeps increasing due to upcoming technologies (6G[1], etc.). Coherent optical systems have emerged as a key solution to meet this demand. Nonetheless, the presence of linear and especially nonlinear distortions in fiber-optic systems limits the achievable information rates[2, 3, 4]. Various digital signal processing (DSP) techniques have been proposed for nonlinear effects mitigation in long-haul systems[3]. Neural networks (NNs) have recently emerged as an effective alternative for channel equalization: NNs have demonstrated an excellent capability to approximate the inverse of the optical channel transfer function, potentially outperforming conventional DSP approaches[5, 6, 7]. However, generalizability remains one of the main challenges of NN-based equalizers and attracts increasing attention[8, 9, 10]. Due to different values of accumulated chromatic dispersion (CD)[11], or the presence of channel distortion, the equalizers in the receiver or transmitter require reconfiguration and must be adjustable to compensate for the variation of impairments as the channel characteristics change. In this work, multi-task learning (MTL)[12] is proposed to calibrate the NN-based equalizer used for different transmission conditions in coherent systems. MTL leverages shared representations to enhance the adaptability of NN-based equalizers across different system configurations and optical impairments. This approach does not require re-training or additional data when the channel conditions change. Our results demonstrate the effectiveness of an MTL-based NN equalizer, which not only improves the equalization performance but also works efficiently in different transmission regimes and scenarios, leading to more generalizable and flexible solutions for NN-based nonlinear transmission effect mitigation.
## Multi-Task Learning for NN-based Equalizers
Single Task Learning (STL) is a commonly used approach to train NNs. STL refers to training in which the NN learns the representation of the function to provide the output of a "specific" task[12]. One advantage of STL is that it allows the NN to focus solely on a specific task, usually leading to very good performance in that task. However, the NN may behave poorly when applied to different tasks (e.g., when the transmission scenario of interest is not included in the initial training dataset). As shown in Fig. 1b, if STL is used for channel equalization in different transmission scenarios, multiple NN models are usually required to provide acceptable performance. In MTL, the NN is trained with multiple datasets from multiple related tasks. In this case, the common representations learned from different but related tasks are shared[12, 13]. As depicted in Fig. 1c, MTL enables a single NN to equalize the signal in different ranges of launch power, symbol rate, and transmission distance through joint training on datasets from different transmission scenarios. MTL allows the NN to generalize better by using the domain-specific information contained in the different related tasks[12].
Besides the generalization feature enabled by MTL, it reduces hardware costs. In fact, the shared weights are fixed, which results in the simplification of the multipliers[11]. However, MTL can also lead to some disadvantages compared to STL. Firstly, there is a trade-off between the performance of individual tasks and the overall performance of the equalizer. Secondly, the degree of information sharing between tasks has to be carefully controlled. Too much sharing can cause negative information transfer, resulting in performance degradation for each task[13]. In this work, we investigate the performance of NN-based equalizers using MTL, where a single NN, without re-training, is potentially capable of recovering the transmitted symbol independently of the specific parameters of the transmission system. The considered transmission setup is altered by changing the symbol rate (\(R_{S}\)) and launch power (\(P\)) of data channels and the transmission distance (number of spans, \(N_{Span}\)). For the MTL, the NN is trained with different datasets resulting from the combination of different transmission setups (to share the weights and biases).
### Numerical Setup
The dataset was obtained by numerical simulation assuming the transmission of a single 16-QAM dual-polarization channel along a standard single-mode fiber (SSMF). The signal propagation through the fiber was represented by a generalized Manakov equation using the GPU-accelerated split-step Fourier method[14]. The SSMF is characterized by the effective nonlinearity coefficient \(\gamma\) = 1.2 (W\(\cdot\) km)\({}^{-1}\), chromatic dispersion coefficient \(D\) = 16.8 ps/(nm\(\cdot\)km), and attenuation parameter \(\alpha\) = 0.21 dB/km. At the end of each fiber span, the optical fiber losses were compensated by an erbium-doped fiber amplifier with a noise figure of 4.5 dB. Downsampling and CD compensation (CDC) were performed at the receiver end. Afterwards, the received symbols were normalized and used as inputs of the NN.
### Methodology
The NN architecture, depicted in Fig. 1a, contains a stack of four bidirectional Long Short-Term Memory (biLSTM) layers with 100 hidden units in each layer, coupled with a dense output layer of 2 neurons to deliver the real and imaginary values for the \(X\)-polarization. The biLSTM was selected because it outperformed other types of NNs when used for nonlinear compensation[7, 15]. The model took four input features resulting from the in-phase and quadrature components of the complex signal (\(X_{I},X_{Q},Y_{I}\), and \(Y_{Q}\)), where \(X_{I}+jX_{Q}\) and \(Y_{I}+jY_{Q}\) were the signals in the \(X\) and \(Y\) polarizations, respectively. A set of 141 input symbols was fed to the NN to recover one symbol at the output. A new set of synthetic data of size \(2^{18}\) was randomly created with different system parameters and used in each training epoch to allow the model to learn different transmission scenarios. The entire training was carried out with a mini-batch size of 2000 and a learning rate of 0.001. The mean square error (MSE) loss estimator and the classical Adam algorithm[16] were applied when training the weights and biases.
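A minimal PyTorch sketch of this equalizer (our reconstruction based on the description above; the paper does not specify which time step of the biLSTM output feeds the dense layer, so taking the central symbol of the 141-symbol window is an assumption):
```python
import torch
import torch.nn as nn

class BiLSTMEqualizer(nn.Module):
    """Four stacked biLSTM layers (100 hidden units each) followed by a dense
    layer producing the real and imaginary parts of the recovered X-pol symbol."""
    def __init__(self, n_features=4, hidden=100, n_layers=4, window=141):
        super().__init__()
        self.window = window
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            num_layers=n_layers, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)     # -> [X_I, X_Q]

    def forward(self, x):                        # x: [batch, 141, 4]
        out, _ = self.lstm(x)                    # [batch, 141, 200]
        center = out[:, self.window // 2, :]     # central time step (assumption)
        return self.head(center)                 # [batch, 2]

model = BiLSTMEqualizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr = 0.001 as in the paper
x = torch.randn(16, 141, 4)    # toy batch (the paper uses mini-batches of 2000)
y = torch.randn(16, 2)
loss = nn.functional.mse_loss(model(x), y)           # MSE loss as in the paper
loss.backward(); opt.step()
```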
The transmission scenarios include \(R_{S}\) ranging from 30 to 70 GBd, the number of spans ranging between 10 and 50 (with a fixed 50 km span length), and the launch power ranging between -1 and 5 dBm. The NNs were trained with MTL or STL as follows:
1. MTL trained for 1000 epochs with datasets including different \(N_{Span}\), but fixed \(R_{S}\) = 40 GBd and \(P\) = 5 dBm.
2. MTL trained for 1000 epochs with datasets including different \(P\), but fixed \(N_{Span}\) = 50 and \(R_{S}\) = 40 GBd\({}^{1}\).
3. MTL trained for 1000 epochs with datasets including different \(R_{S}\), but fixed \(N_{Span}\) = 50 and \(P\) = 5 dBm.
4. MTL trained for 1200 epochs with datasets including different combinations of \(N_{Span}\), \(R_{S}\), and \(P\). This NN is referred to as the "Universal model"\({}^{2}\).
5. STL (without MTL) trained for 1000 epochs with fixed parameters: \(R_{S}\) = 40 GBd, \(N_{Span}\) = 50 and \(P\) = 5 dBm.
Footnote 1: This model has one extra input feature, which is the launch power. The model learns the data during the training using a normalized launch power. Therefore, it could not learn to generalize well without knowing the actual launch power.
Fig. 1: a) Equalizer architecture with 4-layer biLSTM and a dense layer; b) STL: multiple models are required for multiple transmission scenarios; c) MTL: only one model is required for multiple transmission scenarios.
### Results and Discussion
We considered MTL for multiple symbol rates, transmission distances, and launch powers. To evaluate equalization performance and generalizability, the MTL models were compared to CDC and the STL model trained with a fixed dataset.
Variation of transmission distance: Fig. 2(a) shows the optical performance for different reaches considering a fixed launch power of 5 dBm and a signal baud rate of 40 GBd. The STL model performed the best when \(N_{Span}\) was 50 (because it was trained for this specific transmission scenario), significantly outperforming the remaining approaches. However, its performance was significantly impacted at the shorter reaches, as it could not generalize. On the other hand, the MTL trained with different \(N_{Span}\) showed much better performance than STL for the shorter reaches, achieving a better Q-factor (about 3 dB Q-factor improvement) than CDC alone for all considered scenarios. The universal MTL model also showed better performance than CDC alone, leading to a maximum Q-factor improvement of about 2.5 dB at 50\(\times\)50 km.
Variation of launch powers: Fig. 2(b) depicts the Q-factor as a function of the launch power for a fixed \(R_{S}\) of 40 GBd and a transmission distance of 50\(\times\)50 km. Again, the STL model showed the best gain for launch powers close to the one it was trained with (5 dBm), but revealed quite poor results for the remaining launch powers. In contrast, the universal MTL model enabled a Q-factor improvement exceeding 2 dB for the most relevant launch powers. The MTL, trained with various \(P\) but fixed \(N_{Span}\) and \(R_{S}\), revealed the best performance, enabling a Q-factor improvement exceeding 4 dB for the most relevant launch powers. Interestingly, we can see that, at 5 dBm, the MTL outperformed STL. The reason for this may be that the STL is overfitting and cannot adapt to the unseen test data as effectively as the MTL model, which is more generalized. Ref.[17] supports the claim that a more generalized model can perform better.
Variation of symbol rates: Fig. 2(c) illustrates the Q-factor as a function of the data signal baud rate for a fixed transmission distance and launch power of 50\(\times\)50 km and 5 dBm, respectively. STL led to very good results for the 40 GBd transmission scenario (the training scenario) but showed very poor generalization capability.
The MTL, trained with multiple \(R_{S}\) but fixed \(N_{Span}\) and \(P\), enabled a Q-factor improvement of up to 4.5 dB with respect to CDC alone, whereas the universal MTL model showed up to 2.5 dB improvement. The MTL provided a good gain in most cases. The aforementioned results show that, although STL may lead to outstanding performance in specific transmission conditions, it is not suitable for real-world system application because it lacks the adaptability to dynamic optical network parameters. MTL overcomes this limitation, allowing the equalizer to be more flexible, but at the cost of a small performance degradation compared to models trained only for a specific task.
## Conclusions
Multi-task learning is proposed to allow a "single" NN-based equalizer, without re-training, to recover received symbols when the transmission scenarios change. The results showed that the MTL can provide up to 4 dB improvement in Q-factor with respect to CDC alone even if the transmission distance, launch power, and symbol rate vary, thus highlighting the adaptability of the MTL NN-based equalizer to real-world dynamic optical networks.
## Acknowledgements
This work is supported by the EU H2020 Marie Sklodowska-Curie Action project MENTOR (No. 956713), the SMARTNET EMJMD programme (Grant 586686-EPP-1-2071-1-107-LE-EPPARA-1-MDMOB), the EPSRC project TRANSNET (EP/R035342/1), and the Horizon Europe project ALLEGRO (GA No. 101092766).
Figure 2: Q-factor resulting from using the MTL (orange and red) and STL (blue) models in the following test cases: a) when the transmission distance changes but the launch power and symbol rate are set to 5 dBm and 40 GBd, respectively; b) when the launch power changes but the number of spans and symbol rate are set to 50 and 40 GBd, respectively; c) when the symbol rate changes but the number of spans and launch power are set to 50 and 5 dBm, respectively.
2305.06789
Integrating Nearest Neighbors with Neural Network Models for Treatment Effect Estimation
Treatment effect estimation is of high importance for both researchers and practitioners across many scientific and industrial domains. The abundance of observational data makes them increasingly used by researchers for the estimation of causal effects. However, these data suffer from several weaknesses, leading to inaccurate causal effect estimations, if not handled properly. Therefore, several machine learning techniques have been proposed, most of them focusing on leveraging the predictive power of neural network models to attain more precise estimation of causal effects. In this work, we propose a new methodology, named Nearest Neighboring Information for Causal Inference (NNCI), for integrating valuable nearest neighboring information on neural network-based models for estimating treatment effects. The proposed NNCI methodology is applied to some of the most well established neural network-based models for treatment effect estimation with the use of observational data. Numerical experiments and analysis provide empirical and statistical evidence that the integration of NNCI with state-of-the-art neural network models leads to considerably improved treatment effect estimations on a variety of well-known challenging benchmarks.
Niki Kiriakidou, Christos Diou
2023-05-11T13:24:10Z
http://arxiv.org/abs/2305.06789v2
# Integrating Nearest Neighbors with Neural Network Models for Treatment Effect Estimation
###### Abstract
Treatment effect estimation is of high importance for both researchers and practitioners across many scientific and industrial domains. The abundance of observational data makes them increasingly used by researchers for the estimation of causal effects. However, these data suffer from several weaknesses, leading to inaccurate causal effect estimations, if not handled properly. Therefore, several machine learning techniques have been proposed, most of them focusing on leveraging the predictive power of neural network models to attain more precise estimation of causal effects. In this work, we propose a new methodology, named Nearest Neighboring Information for Causal Inference (NNCI), for integrating valuable nearest neighboring information on neural network-based models for estimating treatment effects. The proposed NNCI methodology is applied to some of the most well established neural network-based models for treatment effect estimation with the use of observational data. Numerical experiments and analysis provide empirical and statistical evidence that the integration of NNCI with state-of-the-art neural network models leads to considerably improved treatment effect estimations on a variety of well-known challenging benchmarks.
_Keywords:_ causal inference; treatment effects; performance profiles; non-parametric tests; post-hoc tests
## 1 Introduction
Causal effect estimation is often the central research data analysis objective across many scientific disciplines. In particular, it is the process of inferring a causal relationship between an intervention, or treatment, and its effect on an outcome variable of interest. It involves quantifying the extent to which a change in a given intervention or treatment would influence the variable of interest. Among others, treatment effect estimation is used in healthcare Schneeweiss et al. (2009); Zheng et al. (2022), education Oreopoulos (2006) and advertising Zantedeschi et al. (2017). Some examples of causal questions in each one of these fields include "_What is the effect of a specific drug on a patient's blood pressure levels?_", "_What is the effect of studying two more hours on a student's final test performance?_", and "_What is the effect of an advertisement on social media on the product's sales?_", respectively. The gold standard for answering these questions is the conduction of Randomized Controlled Trials (RCTs), in which subjects are randomly assigned to two groups: the _treatment group_ (also known as the intervention or experimental group) and the _control group_. The intervention of interest is applied to the members of the treatment group, while no intervention is applied to the members of the control group. Due to randomization, RCTs enable the calculation of the real treatment effect. Nevertheless, in most cases it is impossible to conduct an RCT due to financial and/or ethical issues, or due to the large number of combinations of variables that need to be evaluated. Therefore, researchers from various high-impact scientific areas could benefit from the plethora of readily available observational data for the estimation of treatment effects. Nevertheless, handling observational data entails several complications Kuang et al. (2020); Hammerton and Munafo (2021), since actions and outcomes are observed retrospectively and the mechanism that caused the action is unknown.
Additionally, observational data may include possible (hidden or observed) confounding variables, which lead to incorrect estimation of treatment effects. Hence, there has been a growing interest in using machine learning models for causal effect estimation, as these models can leverage observational data, capture complex relationships between variables, and provide accurate predictions. In this work, we propose a new methodology for integrating information from neighboring samples in neural network architectures for treatment effect estimation. The proposed methodology, named Nearest Neighboring Information for Causal Inference (NNCI), is implemented on the most effective neural network-based models, which provide predictions of the outcomes from the control and treatment groups as well as of the propensity score, i.e., the probability of a subject being assigned to the treatment group. The motivation of our approach consists in enriching the models' inputs with valuable information from the control and treatment groups along with the covariates in order to improve the estimation of treatment effects. The proposed methodology identifies the nearest neighbor instances in the control group and the treatment group for each instance and then calculates the average outcomes of the identified instances contained in both groups. This information is integrated with the covariates into the model's inputs in order to increase the prediction accuracy and reduce bias. By using the outcomes of the nearest neighbors as input features, the methodology aims to capture the causal effects of the treatment more efficiently. The neural network models can learn to weight these new input features appropriately, and better capture complex relationships between the features and the treatment effect. Summarizing, the main contributions of this work are:
* the NNCI methodology for integrating information from neighboring instances in neural network-based causal inference models for treatment effect estimation.
* the modification of the architecture of three state-of-the-art neural network models, i.e., Dragonnet Shi et al. (2019), TARnet Shalit et al. (2017) and NEDnet Shi et al. (2019), for incorporating the information provided by NNCI. The developed models use a vector of features extracted from the outcomes of the nearest neighbors of each sample, separately for the treatment and control groups, as inputs, to improve treatment effect estimation.
* a comprehensive experimental analysis based on Dolan and More (2002) performance profiles as well as on post-hoc and non-parametric statistical tests Finner (1993); Hodges Jr and Lehmann (1962). The conducted analysis demonstrates that in most cases the proposed methodology leads to considerable improvement in the estimation of treatment effects, while in a minority of cases, it exhibits similar performance to the corresponding baseline model.
The remainder of this paper is organized as follows: Section 2 presents a brief review of the state-of-the-art baseline models for treatment effect estimation. Section 3 presents a detailed description of the proposed methodology and the proposed neural network-based models in which the NNCI methodology is adopted. Section 4 presents the used datasets and the evaluation methodology. Section 5 presents the numerical experiments, focusing on the evaluation of the proposed neural network-based causal inference models against the corresponding baseline models.
Section 6 discusses the proposed framework as well as the experimental results and outlines the findings of this work. Finally, Section 7 summarizes the main findings and conclusions of this work as well as some interesting directions for future work.
## 2 Related Work
In recent years, researchers have aimed at leveraging machine learning models for treatment effect estimation from observational data. Neural networks have demonstrated that they offer high capacity, while at the same time avoiding overfitting, for a range of applications, including representation learning. Therefore, there has been considerable interest in using them for the estimation of causal effects at both the individual and population level. Next, we briefly describe the most notable ones. Yoon et al. Yoon et al. (2018) proposed a new causal inference model, named Generative Adversarial Nets for the inference of Individualized Treatment Effects (GANITE), for estimating individual treatment effects. The rationale of their approach was the simulation of uncertainty regarding the counterfactual distributions, which was achieved by learning these distributions using a GAN model. The presented numerical experiments revealed that GANITE outperformed S-learner machine learning models Kunzel et al. (2019) as well as some tree-based models (BART Chipman et al. (2010), R-Forest Breiman (2001) and C-Forest Wager and Athey (2018)) on three benchmarks. Louizos et al. (2017) proposed the Causal Effect Variational Auto-Encoder (CEVAE) for leveraging proxy variables for the accurate estimation of treatment effects. The main advantage of CEVAE is that it requires substantially weaker assumptions about the structure of the hidden confounders as well as about the data generating process. The authors provided a comprehensive experimental analysis, which showed that CEVAE exhibited better performance compared to traditional causal inference models and presented more robust behavior against hidden confounders in the case of noisy proxies. Shalit et al. (2017) proposed a new framework, named Counterfactual Regression (CFR), which focuses on learning a balanced representation of the treatment and control groups using a prediction model. The authors utilized two distances, namely Maximum Mean Discrepancy (MMD) Gretton et al. (2012) and the Wasserstein distance (Wass) Villani (2009), to measure the distances between the treatment and control groups' distributions. In addition, they proposed the Treatment Agnostic Representation Network (TARnet) neural network model, which constitutes a variant of CFR without balance regularization. The experimental analysis revealed the superiority of TARnet and CFR (Wass) over S-learner and tree-based causal inference models. Recently, Shi et al. (2019) proposed a three-head neural network model, named Dragonnet, for the estimation of the conditional outcomes as well as the propensity score, together with a new loss function, named _targeted regularization_, for further improving the estimation of causal effects and reducing the bias of the estimator. Furthermore, the authors proposed a modification of Dragonnet, named NEDnet, which is trained with a multi-stage procedure instead of an end-to-end one. The main difference is that NEDnet is first trained using a pure treatment prediction objective, which is then replaced with an outcome-prediction head, similar to the one used by Dragonnet. The representation layers are then frozen and the outcome-prediction neural network is trained on the pure outcome prediction task.
The reported experimental results showed that both Dragonnet and NEDnet achieved better estimations than the state-of-the-art models, and concluded that both models along with targeted regularization substantially improve estimation quality. Kiriakidou and Diou (2022a) proposed a neural network causal inference model, named modified Dragonnet. The proposed model captures information not only from the covariates, but also from the average outcomes of neighboring instances from both the treatment and control groups. For evaluating the efficiency of their approach, they used the semi-synthetic IHDP dataset collection Hill (2011). In their experiments, the proposed model was implemented with three different Minkowski distance metrics for the calculation of neighboring instances. The presented experimental analysis revealed that modified Dragonnet constitutes a better estimator than Dragonnet for all utilized metrics, while simultaneously being able to predict treatment effects with high accuracy. Nevertheless, the limitation of this approach was that the authors adopted the proposed approach on only one neural network-based causal inference model and that the evaluation was based only on one benchmark. In this research, we present an extension of our previous work Kiriakidou and Diou (2022a) by proposing a new methodology, named Nearest Neighboring Information for Causal Inference (NNCI), based on the exploitation of the valuable information provided by the nearest neighboring instances of each sample. NNCI is applied on the most well-established neural network-based causal inference models proposed in the literature, namely Dragonnet, TARnet and NEDnet, for capturing the causal effect estimations more accurately. Notice that these models were selected due to their special architecture design and the fact that they constitute the only neural network-based models to provide estimations for the propensity score as well as the conditional outcomes for the control and treatment groups. The major difference between the presented works and the proposed approach is that the former ignore valuable information contained in the outcomes of the training instances, while the latter enriches the models' inputs with information from the covariates as well as from the outcomes of the control group and the treatment group. The main idea of the proposed approach is that the neural network causal inference models, through the adoption of NNCI, are able to better capture complex causal relationships between the features and the treatment effect by appropriately weighting the advanced input features. Furthermore, in contrast to the usual approach for the performance evaluation of models for treatment effect estimation Yao et al. (2018); Shi et al. (2019); Johansson et al. (2016); Shalit et al. (2017); Louizos et al. (2017), we provide concrete and empirical evidence about the superiority of our approach, by using Dolan and More's Dolan and More (2002) performance profiles and a detailed statistical analysis based on non-parametric and post-hoc statistical tests Hodges Jr and Lehmann (1962); Finner (1993).
## 3 Methodology
In this section, we provide a detailed presentation of the proposed methodology for treatment effect estimation. We recall that the rationale behind our approach is to exploit the wealth of information from the nearest neighboring instances in the training data in order to obtain more accurate estimations of average and individual treatment effects.
This is achieved through the enrichment of the model's inputs with the average outcomes of the nearest neighbors from the control and treatment groups of each instance contained in the training data.
### Problem setup
In this work, we rely on the potential outcome framework of Neyman-Rubin Rubin (2005). We consider a \(d\)-dimensional space of covariates \(X\) and a joint distribution \(\Pi\) on \(\mathcal{X}\times\mathcal{T}\times\mathcal{Y}\), where \(\mathcal{X}\), \(\mathcal{T}\) and \(\mathcal{Y}\) are the domains of the random variables corresponding to the covariates, the treatment and the outcome of a sample, respectively. According to the potential outcome framework, we can learn causal effects given a set of treatment-outcome pairs \((T,Y)\). Throughout the paper, we consider the case of binary treatment, which means that \(T\in\{0,1\}\), where \(T=0\) and \(T=1\) correspond to the samples belonging to the control and treatment group, respectively. Notice that for every instance \(i\), there is a potential outcome \(y_{i}^{(T)}\): \(y_{i}^{(0)}\) for the samples belonging to the control group and \(y_{i}^{(1)}\) for the samples belonging to the treatment group. It is worth mentioning that the fundamental problem of causal inference is that only one of the potential outcomes can be observed for each instance. In case the sample belongs to the treatment group, then \(y_{i}^{(1)}\) is the factual outcome and \(y_{i}^{(0)}\) is the counterfactual outcome, and vice versa for samples belonging to the control group. Treatment effects can be defined using the potential outcomes \(y_{i}^{(0)}\) and \(y_{i}^{(1)}\). One of the causal effects we are interested in estimating is the _Individual Treatment Effect_ (ITE), which measures the effect of the treatment on a single sample: \[\text{ITE}_{i}=y_{i}^{(1)}-y_{i}^{(0)} \tag{1}\] To measure the causal effect of a treatment over the whole population, we use the _Average Treatment Effect_ (ATE): \[\text{ATE}=E[Y\,|\,T=1]-E[Y\,|\,T=0] \tag{2}\] Finally, for measuring the treatment effect for a specific subgroup of the population, i.e., samples with a specific value of the covariates \(X\), we estimate the _Conditional Average Treatment Effect_ (CATE): \[\text{CATE}(\mathbf{x})=E[Y\,|\,X=\mathbf{x},T=1]-E[Y\,|\,X=\mathbf{x},T=0] \tag{3}\] A standard dataset for inferring treatment effects consists of the covariate matrix \(\mathbf{X}\), the treatment vector \(\mathbf{t}\) and the outcome vector \(\mathbf{y}\).
### Nearest neighboring information for causal inference
The proposed Nearest Neighboring Information for Causal Inference (NNCI) methodology aims to enrich the inputs of neural network models with information from nearest neighboring instances, to estimate treatment effects from observational data. More specifically, for each instance \(\mathbf{x}_{i}\), NNCI identifies its \(k\) nearest neighboring instances in the control and treatment groups. Then, we calculate the average outcomes of the identified instances from both groups. These quantities, denoted as \(\overline{y}_{i}^{(0)}\) and \(\overline{y}_{i}^{(1)}\), are incorporated as additional input features to the neural network models. By using the outcomes of the nearest neighbors \(\overline{y}_{i}^{(0)}\) and \(\overline{y}_{i}^{(1)}\) as input features, the NNCI methodology aims to capture the effect of the intervention more efficiently.
The neural network models can learn to weight these new input features appropriately and, hence, better capture complex relationships between the features and the treatment effect. NNCI is integrated into the most effective neural network-based causal inference models, i.e., Dragonnet, TARnet and NEDnet, due to their special architecture design. More analytically, these models use a deep net to produce a representation layer, which is then processed independently for predicting the outcomes for both the treatment and control groups. The adoption of NNCI by these models has the result of using as inputs the instance \(\mathbf{x}_{i}=(\mathbf{x}_{i_{1}},\mathbf{x}_{i_{2}},\cdots,\mathbf{x}_{i_{n}})\) contained in the covariate matrix \(\mathbf{X}\) along with the average outcomes of the neighboring instances from the treatment and control groups, \(\overline{y}_{i}^{(1)}\) and \(\overline{y}_{i}^{(0)}\), respectively, as presented in Figure 1.
Figure 1: Example of the application of the NNCI methodology to a neural network model.
Algorithm 1 presents the pseudocode of the NNCI methodology for calculating information from the outcomes of the instances contained in the training data. The inputs are the selected number of nearest neighbors \(k\), the covariate matrix \(\mathbf{X}\), the binary vector of treatment values \(\mathbf{t}\) and the vector with the outcome values \(\mathbf{y}\). The outputs are the vectors \(\overline{\mathbf{y}}^{(0)}\) and \(\overline{\mathbf{y}}^{(1)}\), containing the average values of the neighboring outcomes of each sample from the control and treatment group, respectively. In Step 1, \(\overline{\mathbf{y}}^{(0)}\) and \(\overline{\mathbf{y}}^{(1)}\) are set to \(\mathbf{0}\). For every instance \(\mathbf{x}_{i}\), NNCI calculates the average values of the neighboring outcomes from the control and treated groups (Steps 2-7). In more detail, in Step 4, NNCI calculates the \(k\)-nearest neighbors of instance \(\mathbf{x}_{i}\) that belong to the control group (i.e., \(T=0\)) and stores their corresponding indices in the index set \(S_{0}\). In Step 5, NNCI calculates the mean value of these neighboring outcomes, \(\overline{y}_{i}^{(0)}=\frac{1}{k}\sum_{j\in S_{0}}y_{j}\). Similarly, in Steps 6-7, NNCI calculates the mean value of the \(k\)-nearest neighbors' outcomes in the treatment group (i.e., \(T=1\)), namely \(\overline{y}_{i}^{(1)}=\frac{1}{k}\sum_{j\in S_{1}}y_{j}\).
```
Inputs:  k - number of nearest neighbors
         X - covariate matrix
         t - vector of treatment values
         y - vector of outcome values
Outputs: ybar0 - vector with the average of the k-nearest outcomes from the control group for each sample
         ybar1 - vector with the average of the k-nearest outcomes from the treatment group for each sample

Step 1: Set ybar0 = 0 and ybar1 = 0
Step 2: for i = 1 to n do
Step 3:   x_i = X[i, :]
Step 4:   Compute the index set S0 of the k-nearest neighbors of x_i with T = 0
Step 5:   ybar0[i] = (1/k) * sum_{j in S0} y[j]
Step 6:   Compute the index set S1 of the k-nearest neighbors of x_i with T = 1
Step 7:   ybar1[i] = (1/k) * sum_{j in S1} y[j]
Step 8: end
Step 9: Return ybar0, ybar1
```
**Algorithm 1**: NNCI
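A compact Python transcription of Algorithm 1 (our sketch; it uses scikit-learn's `NearestNeighbors` with the default Euclidean metric, and it does not exclude a sample's own outcome when the sample belongs to the queried group, a detail the pseudocode leaves open):
```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nnci_features(X, t, y, k=10):
    """Per-sample averages of the outcomes of the k nearest neighbors in the
    control (T=0) and treatment (T=1) groups, as in Algorithm 1."""
    ybar = np.zeros((len(X), 2))
    for group in (0, 1):
        mask = (t == group)
        nn = NearestNeighbors(n_neighbors=k).fit(X[mask])
        _, idx = nn.kneighbors(X)        # neighbor indices within this group
        ybar[:, group] = y[mask][idx].mean(axis=1)
    return ybar[:, 0], ybar[:, 1]        # ybar0, ybar1

# Toy usage:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
t = rng.integers(0, 2, size=200)
y = rng.normal(size=200)
ybar0, ybar1 = nnci_features(X, t, y, k=10)
print(ybar0.shape, ybar1.shape)          # (200,) (200,)
```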
Based on this iterative process, NNCI calculates the average outcomes from the control and treated groups and stores these quantities in \(\overline{\mathbf{y}}^{(0)}\) and \(\overline{\mathbf{y}}^{(1)}\), respectively. These vectors are then used in conjunction with the Dragonnet, TARnet and NEDnet models for the prediction of the conditional outcomes \(Q(t,\mathbf{x})=E(Y\,|\,X=\mathbf{x},T=t)\) and the propensity score \(g(x)=P(T=1\,|\,X=\mathbf{x})\), which measures the probability of a subject belonging to the treatment group, based on its characteristics. The computational complexity for calculating the vectors \(\overline{\mathbf{y}}^{(0)}\) and \(\overline{\mathbf{y}}^{(1)}\) is \(O(n^{2})\), where \(n\) is the number of training instances. This implies that for large datasets, the computational cost may generally be considered high. In these cases, one can use a sub-sampling strategy, where only a randomly selected subset of the training set is considered for the estimation of the vectors \(\overline{\mathbf{y}}^{(0)}\) and \(\overline{\mathbf{y}}^{(1)}\), and the cost would be \(O(m\cdot n)\), where \(m<<n\) is the number of samples used for the estimation.
#### 3.2.1 NN-Dragonnet
Next, we present the NN-Dragonnet model, which is based on the adoption of the NNCI methodology in the state-of-the-art Dragonnet model Shi et al. (2019). The inputs of the NN-Dragonnet model are the instance \(\mathbf{x}_{i}\), the average outcomes of its \(k\) neighboring instances from the control group, \(\overline{y}_{i}^{(0)}\), and the average outcomes of its \(k\) neighboring instances from the treatment group, \(\overline{y}_{i}^{(1)}\). The outputs of the proposed model are the predictions of the conditional outcomes \(\hat{Q}(0,\mathbf{x}_{i},\overline{y}_{i}^{(0)};\theta)\) and \(\hat{Q}(1,\mathbf{x}_{i},\overline{y}_{i}^{(1)};\theta)\) and the prediction of the propensity score \(\hat{g}(\mathbf{x}_{i};\theta)\), where \(\theta\) is the vector with the network's parameters. These are computed from the model's three-head architecture. Figure 2 illustrates the architecture of the proposed NN-Dragonnet model. Initially, the instance \(\mathbf{x}_{i}\) is processed by three dense layers of 200 neurons each, with the Exponential Linear Unit (ELU) activation function, for producing the representation layer \(Z(\mathbf{x}_{i})\in\mathbb{R}^{p}\). Then, \(Z(\mathbf{x}_{i})\) is concatenated with \(\overline{y}_{i}^{(0)}\) and processed by two dense layers of 100 neurons with the ELU activation function and a kernel regularizer of \(10^{-2}\). Next, an output layer of one neuron with linear activation provides the prediction of the outcome \(\hat{Q}(0,\mathbf{x}_{i},\overline{y}_{i}^{(0)};\theta)\). Similarly, \(Z(\mathbf{x}_{i})\) is concatenated with \(\overline{y}_{i}^{(1)}\) and processed by two dense layers of 100 neurons with the ELU activation function and a kernel regularizer of \(10^{-2}\), followed by a linear output, for providing the prediction of the outcome \(\hat{Q}(1,\mathbf{x}_{i},\overline{y}_{i}^{(1)};\theta)\). Furthermore, \(Z(\mathbf{x}_{i})\) is used for providing the prediction of the propensity score \(\hat{g}(\mathbf{x}_{i};\theta)\), using a linear activation followed by a sigmoid.
Figure 2: NN-Dragonnet architecture.
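A PyTorch sketch of this three-head architecture (ours; the original description is Keras-style, and its \(10^{-2}\) kernel regularizer on the head layers would correspond to L2 weight decay, which we omit here for brevity):
```python
import torch
import torch.nn as nn

class NNDragonnet(nn.Module):
    """Sketch of NN-Dragonnet: a shared representation, two outcome heads that
    also see the neighboring-outcome averages, and a propensity head."""
    def __init__(self, n_cov, rep=200, head=100):
        super().__init__()
        self.rep = nn.Sequential(nn.Linear(n_cov, rep), nn.ELU(),
                                 nn.Linear(rep, rep), nn.ELU(),
                                 nn.Linear(rep, rep), nn.ELU())
        def outcome_head():
            return nn.Sequential(nn.Linear(rep + 1, head), nn.ELU(),
                                 nn.Linear(head, head), nn.ELU(),
                                 nn.Linear(head, 1))       # linear output
        self.q0, self.q1 = outcome_head(), outcome_head()
        self.g = nn.Linear(rep, 1)                         # propensity: linear + sigmoid

    def forward(self, x, ybar0, ybar1):
        z = self.rep(x)
        q0 = self.q0(torch.cat([z, ybar0], dim=1))
        q1 = self.q1(torch.cat([z, ybar1], dim=1))
        g = torch.sigmoid(self.g(z))
        return q0, q1, g

net = NNDragonnet(n_cov=25)
x = torch.randn(8, 25); yb0 = torch.randn(8, 1); yb1 = torch.randn(8, 1)
q0, q1, g = net(x, yb0, yb1)
print(q0.shape, q1.shape, g.shape)   # three [8, 1] tensors
```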
The NN-Dragonnet model is trained using the following loss, which is a modification of _targeted regularization_ Shi et al. (2019), namely:

\[\hat{\theta},\hat{\epsilon}=\underset{\theta,\epsilon}{\arg\min}\left[\hat{R}\left(\theta;\mathbf{X},\overline{\mathbf{y}}^{(0)},\overline{\mathbf{y}}^{(1)}\right)+\beta\frac{1}{n}\sum_{i}\gamma\left(y_{i},t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta,\epsilon\right)\right] \tag{4}\]

where \(\alpha,\beta>0\) are hyper-parameters, \(\epsilon\) is a trainable scalar, \(f\) denotes the cross-entropy loss on the propensity score, and

\[\hat{R}(\theta;\mathbf{X},\overline{\mathbf{y}}^{(0)},\overline{\mathbf{y}}^{(1)})=\frac{1}{n}\sum_{i}\left[\left(\hat{Q}\left(t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta\right)-y_{i}\right)^{2}+\alpha f(\hat{g}(\mathbf{x}_{i};\theta),t_{i})\right] \tag{5}\]

and

\[\gamma(y_{i},t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta,\epsilon)=\left(y_{i}-\tilde{Q}\left(t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta,\epsilon\right)\right)^{2}\]
\[\tilde{Q}(t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta,\epsilon)=\hat{Q}\left(t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta\right)+\epsilon\left[\frac{t_{i}}{\hat{g}(\mathbf{x}_{i};\theta)}-\frac{1-t_{i}}{1-\hat{g}(\mathbf{x}_{i};\theta)}\right]\]
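A NumPy sketch of this objective is given below; it assumes \(f\) is the binary cross-entropy and treats \(\epsilon\) as a given scalar, whereas in the actual model \(\epsilon\) is optimized jointly with \(\theta\) via gradient descent. All names are illustrative.

```python
import numpy as np

def targeted_reg_loss(y, t, q0_hat, q1_hat, g_hat, eps, alpha=1.0, beta=1.0):
    """NumPy sketch of the objective in Eqs. (4)-(5)."""
    q_hat = t * q1_hat + (1 - t) * q0_hat                   # factual prediction
    f = -(t * np.log(g_hat) + (1 - t) * np.log(1 - g_hat))  # cross-entropy term
    r_hat = np.mean((q_hat - y) ** 2 + alpha * f)           # Eq. (5)
    q_tilde = q_hat + eps * (t / g_hat - (1 - t) / (1 - g_hat))
    gamma = (y - q_tilde) ** 2                              # gamma term of Eq. (4)
    return r_hat + beta * np.mean(gamma)
```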
#### 3.2.2 NN-TARnet

We present the NN-TARnet model, which is based on the adoption of the NNCI methodology in the TARnet model. The inputs are the instance \(\mathbf{x}_{i}\), the average outcomes of its \(k\) neighboring instances from the control group, \(\overline{y}_{i}^{(0)}\), and the average outcomes of its \(k\) neighboring instances from the treatment group, \(\overline{y}_{i}^{(1)}\), while the outputs are the conditional outcomes for the two different groups of subjects, namely the control group, \(\hat{Q}(0,\mathbf{x}_{i},\overline{y}_{i}^{(0)};\theta)\), and the treatment group, \(\hat{Q}(1,\mathbf{x}_{i},\overline{y}_{i}^{(1)};\theta)\), where \(\theta\) is the vector of the network's parameters. Figure 3 presents the architecture of the NN-TARnet neural network model. In NN-TARnet, \(\mathbf{x}_{i}\) is processed by three dense layers of 200 neurons with the ELU activation function, producing the representation layer \(Z(\mathbf{x}_{i})\in\mathbb{R}^{p}\). Next, \(Z(\mathbf{x}_{i})\) is concatenated with \(\overline{y}_{i}^{(0)}\) and processed by two dense layers of 100 neurons with the ELU activation function and a kernel regularizer of \(10^{-2}\); an output layer of one neuron with linear activation then provides the prediction of the outcome \(\hat{Q}(0,\mathbf{x}_{i},\overline{y}_{i}^{(0)};\theta)\). Likewise, \(Z(\mathbf{x}_{i})\) is concatenated with \(\overline{y}_{i}^{(1)}\) and processed by two dense layers of 100 neurons with the ELU activation function and a kernel regularizer of \(10^{-2}\), followed by a linear output providing the prediction of the outcome \(\hat{Q}(1,\mathbf{x}_{i},\overline{y}_{i}^{(1)};\theta)\). Essentially, NN-TARnet uses the same architecture as NN-Dragonnet for the estimation of conditional outcomes, with the difference that NN-Dragonnet additionally uses the propensity score for targeted regularization.

Figure 3: NN-TARnet architecture

#### 3.2.3 NN-NEDnet

Next, we present NN-NEDnet, a model which is based on the adoption of the NNCI methodology by NEDnet. NN-NEDnet shares the same architecture with NN-Dragonnet, with the difference that it is not based on an end-to-end philosophy, but on a multi-stage procedure. Initially, NN-NEDnet is trained on a treatment prediction task (Figure 4(a)). In this stage, the representation layer is trained using the cross-entropy loss on the propensity score \(\hat{g}(\mathbf{x}_{i};\theta)\). Then, the conditional outcome head is removed and substituted by an outcome prediction network, similar to the one used by NN-Dragonnet (Figure 4(b)). The difference is that the representation layers are frozen and concatenated with the inputs \(\overline{y}_{i}^{(0)}\) and \(\overline{y}_{i}^{(1)}\) for estimating the conditional outcomes \(\hat{Q}(0,\mathbf{x}_{i},\overline{y}_{i}^{(0)};\theta)\) and \(\hat{Q}(1,\mathbf{x}_{i},\overline{y}_{i}^{(1)};\theta)\), respectively, where \(\theta\) is the vector of NN-NEDnet's parameters. In this stage, the representation layer is kept frozen and the remaining layers are trained using the mean squared error loss on the factual outcomes, i.e.,

\[\hat{\mathcal{L}}(\theta;\mathbf{X},\overline{\mathbf{y}}^{(0)},\overline{\mathbf{y}}^{(1)})=\frac{1}{n}\sum_{i}\left(\hat{Q}\left(t_{i},\mathbf{x}_{i},\overline{\mathbf{y}}_{i}^{(0)},\overline{\mathbf{y}}_{i}^{(1)};\theta\right)-y_{i}\right)^{2}\]

Figure 4: NN-NEDnet architecture. (a) Treatment prediction task, used for training. (b) Outcome prediction, with frozen representation layers.

## 4 Evaluation Methodology & Datasets

In this section, we comprehensively describe the tools for the evaluation of causal inference models.

### Performance Profiles & Statistical Analysis

The most commonly used performance metrics for evaluating the estimation of treatment effects include the absolute error in the estimation of the ATE and the error of Precision in Estimation of Heterogeneous Effect (PEHE) Johansson et al. (2016); Louizos et al. (2017); Shalit et al. (2017); Yoon et al. (2018); Kiriakidou and Diou (2022a), which are respectively defined as follows:

\[\left|\epsilon_{ATE}\right|=\left|\frac{1}{n}\sum_{i=1}^{n}(y_{i}^{(1)}-y_{i}^{(0)})-\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}^{(1)}-\hat{y}_{i}^{(0)})\right| \tag{6}\]

and

\[\epsilon_{PEHE}=\frac{1}{n}\sum_{i=1}^{n}\left((y_{i}^{(1)}-y_{i}^{(0)})-(\hat{y}_{i}^{(1)}-\hat{y}_{i}^{(0)})\right)^{2} \tag{7}\]

where \(\hat{y}_{i}^{(t)}\) indicates the model's outcome prediction for the \(i\)-th sample under treatment \(t\). Notice that \(\epsilon_{ATE}\) and \(\epsilon_{PEHE}\) concern the evaluation of the estimation of average (ATE) and individual (ITE) treatment effects, respectively.
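A direct NumPy translation of Eqs. (6) and (7) is given below; the function name is illustrative.

```python
import numpy as np

def eval_metrics(y1, y0, y1_hat, y0_hat):
    """Compute |eps_ATE| and eps_PEHE as defined in Eqs. (6) and (7)."""
    ate_true = np.mean(y1 - y0)
    ate_pred = np.mean(y1_hat - y0_hat)
    abs_eps_ate = np.abs(ate_true - ate_pred)                 # Eq. (6)
    eps_pehe = np.mean(((y1 - y0) - (y1_hat - y0_hat)) ** 2)  # Eq. (7)
    return abs_eps_ate, eps_pehe
```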
In the literature, it is common to use ensembles of experiments for the evaluation of the models' effectiveness, which is usually carried out by calculating the average \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\) across the experiments. Nonetheless, this approach may be misleading in some cases, since all problems are considered equally in the models' evaluation and a small number of them tends to dominate the results Kiriakidou and Diou (2022b,a). To address this problem, we adopt the evaluation framework for causal inference models proposed in Kiriakidou and Diou (2022b), using the performance profiles of Dolan and Moré Dolan and Moré (2002) and a comprehensive statistical analysis based on nonparametric and post-hoc tests. The performance profiles provide information such as probability of success, effectiveness and robustness in compact form Livieris (2018); Livieris et al. (2020). Each profile plots the fraction \(P\) of experiments for which any given model is within a factor \(\tau\) of the best model.

Statistical analysis is conducted to examine the hypothesis that a pair of models perform equally well and to provide statistical evidence about the superiority of the proposed model Fernandez et al. (2017); Livieris et al. (2019); Livieris and Pintelas (2020); Vuttipittayamongkol and Elyan (2020); Tampakas et al. (2019). Firstly, we apply the non-parametric Friedman Aligned-Ranks (FAR) test Hodges Jr and Lehmann (1962) to rank the evaluated models from the best to the worst performance, and then the Finner post-hoc test Finner (1993) to examine whether there are statistically significant differences among the evaluated models' performance.

### Datasets

The fundamental problem of causal inference is that, in practice, we only have access to the factual outcomes, which prohibits model evaluation. To address this issue, we used simulated or semi-synthetic datasets, which include both outcomes for each sample. It is worth mentioning that the selected datasets are the most widely used in the field of causal inference and have been chosen by various researchers Yao et al. (2018); Shi et al. (2019); Johansson et al. (2016); Shalit et al. (2017); Louizos et al. (2017) for the evaluation of their proposed models.

**IHDP dataset.** This is a collection of semi-simulated datasets constructed from the Infant Health and Development Program Hill (2011). Each dataset is composed of 608 units in the control group and 139 units in the treatment group, while the 25 covariates are collected from a real-world randomized experiment. The program studied the effect of home visits by specialists on future cognitive test scores. In our experiments, we used setting "A" of the NPCI package Dorie, composed of 1000 realizations. Notice that 80% of the data were used for training the models, while the remaining 20% was used for testing.

**Synthetic dataset**. This is a collection of toy datasets originally introduced by Louizos et al. Louizos et al. (2017). Its generation is based on the hidden confounder variable \(W\), using the following process:

\[w_{i}\sim\text{Bern}(0.5)\]
\[t_{i}\,|\,w_{i}\sim\text{Bern}(0.75w_{i}+0.25(1-w_{i}))\]
\[x_{i}\,|\,w_{i}\sim\mathcal{N}(w_{i},\,\sigma_{z_{1}}^{2}w_{i}+\sigma_{z_{0}}^{2}(1-w_{i}))\]
\[y_{i}\,|\,t_{i},w_{i}\sim\text{Bern}(\text{Sigmoid}(3(w_{i}+2(2t_{i}-1))))\]

where Sigmoid is the logistic sigmoid function and \(\sigma_{z_{0}}=3\), \(\sigma_{z_{1}}=5\). Notice that the treatment variable \(T\) follows a Bernoulli distribution, while the proxy to the confounder \(X\) follows a Gaussian mixture. Finally, 90% of the data were used for training the models, while the remaining 10% was used for testing.
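Under the stated distributions, the Synthetic dataset can be reproduced with a few lines of NumPy; the seeding and function name below are assumptions.

```python
import numpy as np

def sample_synthetic(n, sigma_z0=3.0, sigma_z1=5.0, seed=0):
    """Sketch of the Synthetic data-generating process described above."""
    rng = np.random.default_rng(seed)
    w = rng.binomial(1, 0.5, size=n)                   # hidden confounder W
    t = rng.binomial(1, 0.75 * w + 0.25 * (1 - w))     # treatment assignment
    # x ~ N(w, sigma_z1^2 * w + sigma_z0^2 * (1 - w)); scale is the std dev.
    x = rng.normal(w, np.sqrt(sigma_z1**2 * w + sigma_z0**2 * (1 - w)))
    p = 1.0 / (1.0 + np.exp(-3.0 * (w + 2 * (2 * t - 1))))  # Sigmoid(3(w+2(2t-1)))
    y = rng.binomial(1, p)                             # binary outcome
    return x, t, y
```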
**ACIC dataset**. This dataset was developed for the _2018 Atlantic Causal Inference Conference competition_ Yishai et al. (2018). ACIC is a collection of semi-synthetic datasets derived from the linked birth and infant death data MacDorman and Atkinson (1998). Each competition dataset is sampled from a distinct distribution, generated by a data-generating process with its own treatment selection and outcome functions. Following Shi et al. (2019), for each data-generating-process setting, four datasets of size 5000 and eight datasets of size 10000 were randomly selected. Notice that 80% of the data were used for training the models, while the remaining 20% was used for testing.

## 5 Numerical Experiments

In this section, we evaluate the prediction performance of the neural network models that integrate the proposed NNCI methodology against the corresponding baseline models on the IHDP, Synthetic and ACIC datasets. More specifically, we evaluate the performance of the proposed NN-Dragonnet, NN-TARnet and NN-NEDnet against Dragonnet, TARnet and NEDnet, respectively. The implementation code was written in Python 3.7 using the TensorFlow Keras library Gulli and Pal (2017) and run on a PC (3.2GHz Quad-Core processor, 16GB RAM) with the Windows operating system. Notice that Dragonnet, TARnet and NEDnet were used with their default optimized parameter settings, while for the proposed models the number of neighbors was set to \(k=11\) and targeted regularization was implemented with \(\alpha=1\) and \(\beta=1\) Shalit et al. (2017). The curves in the following figures have the following meaning:

* "Dragonnet" stands for the state-of-the-art neural network-based model Dragonnet Shi et al. (2019).
* "TARnet" stands for the neural network-based model TARnet Shalit et al. (2017).
* "NEDnet" stands for the two-stage neural network-based model NEDnet Shi et al. (2019).
* "NN-Dragonnet" stands for the proposed NN-Dragonnet model, which modifies Dragonnet using the NNCI methodology.
* "NN-TARnet" stands for the NN-TARnet model, which modifies TARnet using the NNCI methodology.
* "NN-NEDnet" stands for the NN-NEDnet model, which modifies NEDnet using the NNCI methodology.

Additionally, it is worth mentioning that the proposed models were implemented using three different distance metrics, i.e., Euclidean, Manhattan and Chebyshev, as in Kiriakidou and Diou (2022). These distances belong to the class of Minkowski distances and constitute the most widely used distances in the literature Pandit et al. (2011); Singh et al. (2013).

### Evaluation on IHDP dataset

Figure 5 presents the performance profiles of NN-Dragonnet, NN-TARnet, NN-NEDnet and their corresponding baseline models Dragonnet, TARnet and NEDnet, based on the performance metrics \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\). All versions of NN-Dragonnet exhibit performance similar to the state-of-the-art Dragonnet in terms of \(|\epsilon_{ATE}|\), while they considerably outperform Dragonnet in terms of \(\epsilon_{PEHE}\). Additionally, the adoption of the NNCI methodology considerably improved the performance of the TARnet model with respect to both performance metrics, since the curves of all versions of NN-TARnet lie on top. NN-NEDnet solves almost the same percentage of benchmarks with the best (lowest) \(|\epsilon_{ATE}|\) for any distance metric used, and outperforms the baseline NEDnet model in terms of robustness. Finally, it is worth mentioning that all proposed models exhibited similar performance with any distance metric; in detail, all versions of NN-NEDnet outperform the baseline model, since their curves lie on top.

Tables 1 and 2 present the statistical analysis of the proposed causal inference models and their corresponding baseline models for the IHDP dataset, in terms of \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), respectively. As regards \(|\epsilon_{ATE}|\), the Dragonnet model reports the best rank according to the FAR test. However, the Finner post-hoc test suggests that there are no statistically significant differences in performance between Dragonnet and all versions of the NN-Dragonnet model, which implies that all models perform similarly.
NN-TARnet and NN-NEDnet report the highest probability-based ranking, outperforming TARnet and NEDnet, respectively. Additionally, the Finner post-hoc test suggests that there are statistically significant differences in performance between all versions of both NN-TARnet and NN-NEDnet and their corresponding baseline models. As regards the \(\epsilon_{PEHE}\) performance metric, the interpretation of Table 2 reveals that NN-Dragonnet, NN-TARnet and NN-NEDnet are the top-ranking models, outperforming the baseline models Dragonnet, TARnet and NEDnet, respectively. Therefore, we are able to conclude that we obtained strong statistical evidence that the adoption of the NNCI methodology considerably improved the performance of all baseline causal inference models.

Figure 5: Log\({}_{10}\) performance profiles of the models, based on \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), for the IHDP dataset

### Evaluation on Synthetic dataset

Figure 6 demonstrates the performance profiles of the proposed NN-Dragonnet, NN-TARnet and NN-NEDnet and their corresponding state-of-the-art models Dragonnet, TARnet and NEDnet, in terms of the \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\) performance metrics. Firstly, relative to \(|\epsilon_{ATE}|\), all versions of the proposed models report almost identical performance, with the Euclidean distance presenting slightly better performance. The NN-Dragonnet versions exhibited performance similar to Dragonnet, while the NN-TARnet and NN-NEDnet versions slightly outperform the traditional TARnet and NEDnet, respectively, in terms of efficiency. In the case of \(\epsilon_{PEHE}\), NN-Dragonnet and NN-NEDnet outperformed Dragonnet and NEDnet, respectively, using any distance metric, since their curves lie on top. In detail, all versions of both NN-Dragonnet and NN-NEDnet presented the highest percentage of simulations with the best (lowest) \(\epsilon_{PEHE}\). In contrast, NN-TARnet and TARnet reported almost identical performance, independently of the utilized distance metric.

Table 3 presents the statistical analysis for the proposed causal inference models and their corresponding baseline models for the Synthetic dataset, in terms of \(|\epsilon_{ATE}|\). NN-Dragonnet (Manhattan) is the top-ranking model, since it presents the best (lowest) FAR score. Nevertheless, the Finner post-hoc test suggests that there are no statistically significant differences in the performance of the evaluated models. NN-TARnet and NN-NEDnet present the highest FAR ranking using the Euclidean distance. Additionally, the Finner post-hoc test suggests that there exist significant differences between the performance of the proposed NN-TARnet and NN-NEDnet and the corresponding baseline models TARnet and NEDnet, independently of the utilized distance metric. This implies that the adoption of the proposed NNCI methodology benefits TARnet and NEDnet in terms of \(|\epsilon_{ATE}|\).
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{FAR} & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline Dragonnet & 1840.19 & - & - \\ NN-Dragonnet (Manhattan) & 1894.39 & 0.282160 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 1935.42 & 0.094191 & Fail to reject \\ NN-Dragonnet (Euclidean) & 1947.98 & 0.094191 & Fail to reject \\ \hline NN-TARnet (Manhattan) & 1827.99 & - & - \\ NN-TARnet (Chebyshev) & 1831.38 & 0.947642 & Fail to reject \\ NN-TARnet (Euclidean) & 1872.75 & 0.519053 & Fail to reject \\ TARnet & 2469.86 & 0.000000 & Reject \\ \hline NN-NEDnet (Manhattan) & 1839.56 & - & - \\ NN-NEDnet (Chebyshev) & 1845.91 & 0.938142 & Fail to reject \\ NN-NEDnet (Euclidean) & 1849.75 & 0.938142 & Fail to reject \\ NEDnet & 2457.28 & 0.000000 & Reject \\ \hline \hline \end{tabular} \end{table}
Table 1: FAR test and Finner post-hoc test based on \(|\epsilon_{ATE}|\) for the IHDP dataset

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{FAR} & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline NN-Dragonnet (Euclidean) & 1628.32 & - & - \\ NN-Dragonnet (Manhattan) & 1653.07 & 0.623290 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 1670.39 & 0.539643 & Fail to reject \\ Dragonnet & 2666.19 & 0.000000 & Reject \\ \hline NN-TARnet (Euclidean) & 1609.48 & - & - \\ NN-TARnet (Manhattan) & 1624.02 & 0.86625 & Fail to reject \\ NN-TARnet (Chebyshev) & 1626.72 & 0.86625 & Fail to reject \\ TARnet & 3141.77 & 0.000000 & Reject \\ \hline NN-NEDnet (Euclidean) & 1908.98 & - & - \\ NN-NEDnet (Manhattan) & 1937.97 & 0.574488 & Fail to reject \\ NN-NEDnet (Chebyshev) & 2001.56 & 0.107503 & Fail to reject \\ NEDnet & 2150.38 & 0.000000 & Reject \\ \hline \hline \end{tabular} \end{table}
Table 2: FAR test and Finner post-hoc test based on \(\epsilon_{PEHE}\) for the IHDP dataset

Table 4 presents the statistical analysis for the proposed causal inference models and their corresponding baseline models for the Synthetic dataset, in terms of \(\epsilon_{PEHE}\). Both statistical tests provide statistical evidence that the proposed NN-Dragonnet and NN-NEDnet outperformed Dragonnet and NEDnet, respectively, and are able to produce more reliable predictions. In contrast, although NN-TARnet presents a higher ranking than TARnet, there are no statistically significant differences in their performance relative to the \(\epsilon_{PEHE}\) metric.

### Evaluation on ACIC dataset

Figure 7 presents the performance profiles of NN-Dragonnet, NN-TARnet, NN-NEDnet and their corresponding baseline models, for the \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\) performance metrics. NN-Dragonnet outperforms the state-of-the-art Dragonnet model, independently of the distance metric used, with respect to both performance metrics. Additionally, it reported slightly better performance when the Manhattan distance metric is used for the calculation of the neighboring instances. The NN-TARnet (Chebyshev) model exhibits the best overall performance, slightly outperforming the rest of the models relative to \(|\epsilon_{ATE}|\), while NN-TARnet (Manhattan) reports the best performance relative to \(\epsilon_{PEHE}\). NN-NEDnet (Chebyshev) exhibits the highest probability of being the optimal model in terms of effectiveness and robustness, since its curves lie on top for both performance metrics.
More specifically, NN-NEDnet solved 50% and 66% of the simulations with the best (lowest) \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\) scores, respectively, while NEDnet reported only 23% and 24% in the same cases.

Tables 5 and 6 present the statistical analysis for the proposed causal inference models and their corresponding baseline models for the ACIC dataset, in terms of \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), respectively. NN-Dragonnet (Manhattan) is the top-ranking model relative to both performance metrics. The FAR and Finner post-hoc tests suggest that NN-Dragonnet outperforms the baseline Dragonnet and that there exist statistically significant differences in their performance. Additionally, NN-Dragonnet presents the best performance when the Manhattan distance is utilized.

Figure 6: Log\({}_{10}\) performance profiles of the models, based on \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), for the Synthetic dataset

The non-parametric FAR statistical test shows that NN-TARnet presents a higher ranking than TARnet relative to both performance metrics; nevertheless, the Finner post-hoc test suggests that, for all versions of the NN-TARnet model, there are no statistically significant differences in performance. Finally, the interpretation of Tables 5 and 6 suggests that NN-NEDnet presented a slightly higher ranking than the baseline model NEDnet. Summarizing, we point out that the proposed methodology significantly improved the performance of the baseline models in the case of imbalanced datasets (IHDP), which indicates that the proposed approach is more beneficial for challenging and complex benchmarks. This issue should be further investigated in order to study why and how the proposed NNCI methodology improves prediction accuracy, especially in the case of imbalanced problems.

## 6 Discussion

In this section, we provide a comprehensive discussion of the motivation and contribution of this work, as well as of the advantages of the proposed NNCI methodology and its limitations. In a recent work, Kiriakidou and Diou (2022) proposed a new approach for modifying the state-of-the-art causal inference model Dragonnet by enriching the model's inputs with the average outcomes of the \(k\)-nearest neighbor samples from the control and treatment groups.
\begin{table} \begin{tabular}{l c c c} \hline \hline Model & FAR & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline NN-Dragonnet (Manhattan) & 173.37 & - & - \\ NN-Dragonnet (Euclidean) & 177.45 & 0.802946 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 193.46 & 0.310031 & Fail to reject \\ Dragonnet & 257.72 & 0.000000 & Reject \\ \hline NN-TARnet (Manhattan) & 190.71 & - & - \\ NN-TARnet (Euclidean) & 199.64 & 0.630507 & Fail to reject \\ TARnet & 202.13 & 0.630507 & Fail to reject \\ NN-TARnet (Chebyshev) & 209.51 & 0.578490 & Fail to reject \\ \hline NN-NEDnet (Chebyshev) & 155.34 & - & - \\ NN-NEDnet (Euclidean) & 172.58 & 0.291693 & Fail to reject \\ NN-NEDnet (Manhattan) & 190.71 & 0.0045430 & Reject \\ NEDnet & 283.37 & 0.000000 & Reject \\ \hline \hline \end{tabular} \end{table}
Table 4: FAR test and Finner post-hoc test based on \(\epsilon_{PEHE}\) for the Synthetic dataset

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & FAR & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline NN-Dragonnet (Manhattan) & 198.31 & - & - \\ Dragonnet & 200.89 & 0.994708 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 200.89 & 0.994708 & Fail to reject \\ NN-Dragonnet (Euclidean) & 201.91 & 0.994708 & Fail to reject \\ \hline NN-TARnet (Euclidean) & 186.43 & - & - \\ NN-TARnet (Manhattan) & 190.16 & 0.841249 & Fail to reject \\ NN-TARnet (Chebyshev) & 192.58 & 0.841249 & Fail to reject \\ TARnet & 232.83 & 0.013568 & Reject \\ \hline NN-NEDnet (Euclidean) & 175.99 & - & - \\ NN-NEDnet (Chebyshev) & 182.04 & 0.711365 & Fail to reject \\ NN-NEDnet (Manhattan) & 191.46 & 0.468763 & Fail to reject \\ NEDnet & 252.51 & 0.000009 & Reject \\ \hline \hline \end{tabular} \end{table}
Table 3: FAR test and Finner post-hoc test based on \(|\epsilon_{ATE}|\) for the Synthetic dataset

Furthermore, the authors presented some promising numerical experiments using the IHDP dataset, which revealed the efficiency of their approach. However, the major drawbacks of that work were that the authors adopted the proposed approach only on Dragonnet and evaluated the performance of the proposed model on only one causal inference benchmark. Motivated by the efficiency of their approach and the promising experimental results, we extend the work conducted in Kiriakidou and Diou (2022a) and address these drawbacks. In this research, we propose the NNCI methodology for integrating valuable information from the nearest neighboring samples from the control and treatment groups in the training data, which is then utilized as input to the neural network models to provide accurate predictions of average and individual treatment effects. The proposed NNCI methodology is then integrated into the most effective neural network-based causal inference models, i.e., Dragonnet, TARnet and NEDnet. These models use a deep net to produce a representation layer, which is then processed independently for predicting the outcomes for the treatment and control groups. The presented numerical experiments reveal that the application of NNCI considerably improves the performance of the state-of-the-art neural network models, achieving better estimation of both average and individual treatment effects. The evaluation was performed using three of the most widely used causal inference benchmarks, i.e., IHDP, Synthetic and ACIC, which are characterized by high complexity and difficulty.
The performance profiles as well as the Friedman Aligned-Ranks and Finner post-hoc tests provide strong statistical evidence about the effectiveness of the proposed approach. Therefore, based on this experimental analysis, we conclude that the proposed NNCI methodology, in conjunction with the modified versions of Dragonnet, TARnet and NEDnet, is able to accurately estimate treatment effects. In more detail, as regards the IHDP dataset, the proposed causal inference models outperformed the corresponding traditional ones for both \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), independently of the utilized distance metric. It is worth mentioning that IHDP is an imbalanced dataset, which indicates that the NNCI methodology proved beneficial for challenging and complex benchmarks.

Figure 7: Log\({}_{10}\) performance profiles of the models, based on \(|\epsilon_{ATE}|\) and \(\epsilon_{PEHE}\), for the ACIC dataset

Concerning the Synthetic dataset, the experimental analysis reveals that the proposed models presented the best overall performance when the Manhattan distance was used. Finally, regarding the ACIC dataset, the adoption of the proposed NNCI methodology by the state-of-the-art causal inference models improved the average treatment effect estimation. It is worth mentioning that the NNCI methodology can be adopted only by the selected neural network-based causal inference models, due to their special architecture design. More specifically, NNCI assumes the existence of a representation layer prior to the final treatment effect estimation layers. This can be considered a limitation of the proposed work. Therefore, the adoption of the proposed methodology by other causal inference models is an interesting research direction for future work.

## 7 Conclusion

In this work, we proposed a new methodology, named NNCI, which is applied to well-established neural network-based models for the estimation of individual (ITE) and average (ATE) treatment effects. An advantage of the proposed methodology is the exploitation of valuable information, not only from the covariates, but also from the outcomes of the nearest neighboring instances contained in the training data, from both the treatment and control groups. The proposed NNCI methodology is applied to the state-of-the-art neural network models Dragonnet, NEDnet and TARnet, aiming to increase the models' prediction accuracy and reduce bias.
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{FAR} & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline NN-Dragonnet (Manhattan) & 19.25 & - & - \\ NN-Dragonnet (Euclidean) & 21.75 & 0.661815 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 23.58 & 0.590268 & Fail to reject \\ Dragonnet & 33.41 & 0.039045 & Reject \\ \hline NN-TARnet (Manhattan) & 20.75 & - & - \\ NN-TARnet (Euclidean) & 22.41 & 0.770588 & Fail to reject \\ NN-TARnet (Chebyshev) & 24.58 & 0.649006 & Fail to reject \\ TARnet & 30.25 & 0.262418 & Fail to reject \\ \hline NN-NEDnet (Chebyshev) & 16.66 & - & - \\ NN-NEDnet (Euclidean) & 23.58 & 0.226216 & Fail to reject \\ NEDnet & 28.16 & 0.069785 & Fail to reject \\ NN-NEDnet (Manhattan) & 29.58 & 0.069785 & Fail to reject \\ \hline \hline \end{tabular} \end{table}
Table 6: FAR test and Finner post-hoc test based on \(\epsilon_{PEHE}\) for the ACIC dataset

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{FAR} & \multicolumn{2}{c}{Finner post-hoc test} \\ \cline{3-4} & & \(p_{F}\)-value & \(H_{0}\) \\ \hline NN-Dragonnet (Manhattan) & 18.25 & - & - \\ NN-Dragonnet (Euclidean) & 22.33 & 0.474959 & Fail to reject \\ NN-Dragonnet (Chebyshev) & 26.58 & 0.209181 & Fail to reject \\ Dragonnet & 30.83 & 0.080796 & Reject \\ \hline NN-TARnet (Chebyshev) & 20.91 & - & - \\ NN-TARnet (Manhattan) & 22.75 & 0.812511 & Fail to reject \\ NN-TARnet (Euclidean) & 23.33 & 0.812511 & Fail to reject \\ TARnet & 31.00 & 0.215446 & Fail to reject \\ \hline NN-NEDnet (Chebyshev) & 22.16 & - & - \\ NN-NEDnet (Euclidean) & 23.25 & 0.849667 & Fail to reject \\ NN-NEDnet (Manhattan) & 25.41 & 0.763596 & Fail to reject \\ NEDnet & 27.16 & 0.763596 & Fail to reject \\ \hline \hline \end{tabular} \end{table}
Table 5: FAR test and Finner post-hoc test based on \(|\epsilon_{ATE}|\) for the ACIC dataset

The experimental analysis on three widely used datasets in the field of causal inference illustrated that the proposed approach improved the performance of the traditional neural network-based models regarding the estimation of causal effects. This is confirmed by the performance profiles of Dolan and Moré as well as by the nonparametric FAR test and the Finner post-hoc test. It is worth highlighting that in all cases the proposed methodology leads to considerable improvement in the estimation of treatment effects, in terms of effectiveness and robustness. Nevertheless, a limitation of the proposed work is the selection of the distance metric used for calculating the nearest neighbors, as well as of the optimal value of the parameter \(k\). An evaluation study on the effectiveness and sensitivity of different values of the parameter \(k\) and of different distance metrics is left as future work. Another promising research direction is the use of dynamic ensemble learning algorithms Alam et al. (2020); Pintelas and Livieris (2020); Nandi et al. (2022); Ortiz et al. (2016), finite element machine learning Amezquita-Sanchez and Valtierra-Rodriguez (2020); Pereira et al. (2020) and self-supervised learning Rafiei et al. (2022); Hua et al. (2022) for further improving the prediction performance. Finally, an interesting idea for achieving more accurate predictions of causal effects is the development of new models for causal inference based on the architecture of augmented machine learning models Bhattacharya et al. (2022); Wu et al. (2022); Zhang et al. (2022a,b).
## Acknowledgements

The work leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 965231, project REBECCA (REsearch on BrEast Cancer induced chronic conditions supported by Causal Analysis of multi-source data).
2310.05022
Fully Spiking Neural Network for Legged Robots
Recent advancements in legged robots using deep reinforcement learning have led to significant progress. Quadruped robots can perform complex tasks in challenging environments, while bipedal and humanoid robots have also achieved breakthroughs. Current reinforcement learning methods leverage diverse robot bodies and historical information to perform actions, but previous research has not emphasized the speed and energy consumption of network inference or the biological significance of neural networks. Most networks are traditional artificial neural networks that utilize multilayer perceptrons (MLP). This paper presents a novel Spiking Neural Network (SNN) for legged robots, showing exceptional performance in various simulated terrains. SNNs provide natural advantages in inference speed and energy consumption, and their pulse-form processing enhances biological interpretability. This study presents a highly efficient SNN for legged robots that can be seamlessly integrated into other learning models.
Xiaoyang Jiang, Qiang Zhang, Jingkai Sun, Jiahang Cao, Jingtong Ma, Renjing Xu
2023-10-08T05:48:30Z
http://arxiv.org/abs/2310.05022v3
# Fully Spiking Neural Network for Legged Robots

###### Abstract

In recent years, legged robots based on deep reinforcement learning have made remarkable progress. Quadruped robots have demonstrated the ability to complete challenging tasks in complex environments and have been deployed in real-world scenarios to assist humans. Simultaneously, bipedal and humanoid robots have achieved breakthroughs in various demanding tasks. Current reinforcement learning methods can utilize diverse robot bodies and historical information to perform actions. However, prior research has not emphasized the speed and energy consumption of network inference, nor the biological significance of the neural networks themselves. Most of the networks employed are traditional artificial neural networks that utilize multilayer perceptrons (MLP). In this paper, we successfully apply a novel Spiking Neural Network (SNN) to the control of legged robots, achieving outstanding results across a range of simulated terrains. SNNs hold a natural advantage over traditional neural networks in terms of inference speed and energy consumption, and their pulse-form processing of body perception signals offers improved biological interpretability. To the best of our knowledge, this is the first work to implement an SNN on legged robots.

## I Introduction

The increasing adoption of mobile robots, which are equipped with continuous high-dimensional observation and action spaces, to tackle a wide range of intricate tasks in real-world situations emphasizes the critical importance of exceptional control algorithms. Currently, the limited on-board energy resources of most robots pose a significant challenge, as this constraint hinders their ability to operate continuously and cost-effectively. Consequently, there is an immediate demand for developing energy-efficient solutions for the seamless control of these autonomous machines. Deep reinforcement learning (DRL) employs deep neural networks (DNNs) as potent function approximators for learning optimal control strategies for intricate tasks [1, 2], by directly mapping the original state space to the action space [3, 4]. Nonetheless, the remarkable performance of DRL frequently comes at the expense of substantial energy consumption and slower execution speeds, making it unsuitable for various applications. Additionally, the execution speed of control strategies employing DNNs tends to be slower in comparison to the operational speed of the motion units. This discrepancy often results in step-like behavior in the control signals, negatively impacting the performance of the system. Spiking neural networks (SNNs), also referred to as third-generation neural networks, present a promising alternative for energy-efficient and high-speed deep networks. These emerging SNNs operate based on the principles of neuromorphic computing, wherein the integration of memory and computation is seamless, and neurons engage in asynchronous, event-based communication and computation [5]. The biological plausibility, the significant increase in energy efficiency (particularly when deployed on neuromorphic chips [6]), and the high-speed, real-time processing capability for high-dimensional data (especially from asynchronous sensors like event-based cameras [7]) contribute to the advantages that SNNs possess over ANNs in specific applications. These advantages render the utilization of SNNs not only feasible but also highly advantageous in lieu of ANNs for performing effective calculations.
A mounting body of research illustrates that SNNs can function as energy-efficient and high-speed solutions for effectively managing robot control in scenarios where there are limitations on onboard energy resources [8, 9, 10]. To address the limitations of SNNs in tackling high-dimensional control problems, a natural approach involves combining the energy efficiency of SNNs with the optimality of DRL, which has proven effective in various control tasks [11]. Because rewards serve as training guides in reinforcement learning (RL), some studies utilize a three-factor learning rule [12] to implement reward learning. Although these rules exhibit strong performance in low-dimensional tasks, they often struggle to handle complex problems, and the optimization process becomes challenging in the absence of a global loss function [13]. Recently, [14] proposed a policy gradient-based algorithm to train an SNN for learning stochastic policies. However, this algorithm is designed for discrete action spaces, and its practical applications are somewhat limited when tackling high-dimensional continuous control problems. The recent conceptualization of the brain's topology and computational principles has ignited advancements in SNNs, exhibiting both human-like behavior [15] and superior performance [16]. A pivotal attribute associated with effective computation in the brain is the employment of neuronal populations for encoding and representing information, encompassing the entire spectrum from sensory input to output signals. In this scenario, each neuron within a population has a receptive field that captures a specific segment of the encoded signal [17]. Notably, initial investigations into this population coding scheme have shown its enhanced capability to represent stimuli [18], contributing to recent triumphs in training SNNs for complex, high-dimensional supervised learning tasks [19, 20]. The efficacy of population coding presents a promising pathway for the advancement of efficient SNNs. Such networks possess the potential to acquire optimal solutions for complex high-dimensional continuous control tasks, paving the way for significant progress in this field.

The main contributions of this paper can be summarized as follows:

* For the first time, we have implemented SNNs as the policy network of legged robots simulated in Isaac Gym [21], enabling the encoding of each dimension of the observation and action space within a single population of neurons using a learnable receptive field. Furthermore, we have successfully integrated this method and achieved successful training outcomes using techniques such as imitation learning and trajectory history.
* We have decreased the decimation time to allow for more frequent updates during the learning process. Numerous experiments have consistently shown that SNNs perform exceptionally well even at ultra-high control frequencies.
* We achieved successful training on three typical legged robots: the quadruped robot A1, the bipedal robot Cassie, and the MIT Humanoid, which fully demonstrates the feasibility and superiority of our method for application on legged robots (Fig. 1).

## II Related work

### _Reinforcement Learning for Legged Robot Locomotion_

In recent years, there has been rapid development of reinforcement learning techniques in the realm of achieving fast, stable, and adaptive locomotion for legged robots.
[22] bridges the gap between simulation and reality by modeling the actuator with neural networks. [23] utilizes the teacher-student training framework to integrate privileged environmental parameters as priors, which enables quadruped robots to adapt to complex terrains. [24] introduces a large number of auxiliary rewards and gait parameter control to implement multiple gaits with a single policy; this work offers considerable inspiration for the design of quadruped rewards. [25] combines reinforcement learning with computer vision to perform simultaneous locomotion and manipulation. Unlike traditional reinforcement learning, generative adversarial imitation learning (GAIL) is an approach for imitating behaviors from reference datasets by integrating a generative adversarial network. Building on GAIL, Adversarial Motion Priors (AMP) combines a task reward with an imitation reward so that the agent completes its tasks while keeping its actions similar to the reference dataset. For learning from unlabeled reference datasets, [26] additionally uses a skill discriminator, enabling the quadruped to distinguish and master multiple gaits and to perform backflips not found in the dataset. [27] combines RMA and AMP, enabling the quadruped to traverse challenging terrains rapidly. However, the above methods are all realized with ANNs and therefore cannot exploit the high-frequency and energy-saving advantages of SNNs.

### _Spiking Neural Networks_

Recently, many works have emerged that introduce SNNs into RL algorithms [12, 28, 29, 30, 31]. SNNs offer advantages in processing temporal information, energy efficiency, event-driven processing, robustness to noise, plasticity, and biological plausibility. Some methods convert trained ANNs into corresponding SNNs with little accuracy loss by matching the firing rates of spiking neurons with the graded activations of analog neurons [32]. Yet, following the surrogate gradient method [33], the spike-based backpropagation (BP) algorithm has quickly become the mainstream solution for training multi-layer SNNs [34]. As shown in several open-source SNN frameworks [35, 36], the membrane voltage of non-spiking neurons can represent a continuous value in a spike-based BP method.

## III SNN-based Locomotion in Isaac Gym

We chose to train and test the performance of our algorithm on Isaac Gym [21]. Isaac Gym is a simulation platform designed for robotics. It offers realistic physics simulation, specialized support for legged robots, integration with NVIDIA technologies, customization options, and an active community.

### _SNN-based Policy Network_

We utilize a population-coded spiking actor-network (PopSAN) [37] that is trained in conjunction with a deep critic network using DRL algorithms (including RMA and AMP). During training, PopSAN generates an action \(\alpha\in\mathbb{R}^{N}\) for a given observation \(s\), and the deep critic network predicts the associated state value \(V(s)\) or action-value \(Q(s,\alpha)\), which in turn optimizes PopSAN in accordance with the chosen DRL method (Fig. 2). In the PopSAN architecture, the encoder module encodes each dimension of the observation into the activity of a specific neuron population. During forward propagation, these input populations stimulate a multi-layer fully-connected SNN. The SNN then produces activity patterns in the output populations.
At the end of every \(T\) timesteps, these activity patterns are decoded to determine the corresponding action dimensions (as outlined in Algorithm 1). To construct the SNN, the current-based leaky-integrate-and-fire (LIF) model of a spiking neuron is employed. The dynamics of the LIF neurons are governed by a two-step model, as elaborated in Algorithm 1: i) integrating the presynaptic spikes \(o\) into the current \(c\); and ii) integrating the current \(c\) into the membrane voltage \(v\); \(d_{c}\) and \(d_{v}\) are the current and voltage decay factors. In this implementation, a neuron fires a spike when its membrane potential surpasses a certain threshold. We adopted the hard-reset model, which means that upon spiking, the membrane potential is instantly reset to the resting potential. The resulting spikes are then transmitted to the post-synaptic neurons within the same inference timestep, assuming zero propagation delay. This approach enables efficient and synchronous transmission of information within the SNN.

Fig. 2: First, the observations are encoded by the encoder as \(n\) independent distributions, uniformly distributed over the range of the observation. These encoded distributions are then further processed by the population, resulting in the generation of the corresponding spikes. The neurons in the input populations are responsible for encoding each dimension of the observation, and they drive a multi-layered, fully connected SNN. During the forward timesteps, the activities of each output population in PopSAN are decoded to determine the corresponding action dimension. In other words, the network takes in observations, processes them through the SNN, and then decodes the resulting activities to generate the appropriate action for the given situation.
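The encoding and neuron dynamics described above can be summarized in the following NumPy sketch; the decay factors, threshold, and function names are illustrative assumptions rather than the exact PopSAN implementation.

```python
import numpy as np

def encode_population(s, mu, sigma):
    """Gaussian receptive fields: observation dimension s[i] drives a
    population of neurons with means mu[i, :] and widths sigma[i, :]."""
    return np.exp(-0.5 * ((s[:, None] - mu) / sigma) ** 2)

def lif_step(spikes_in, W, b, c, v, d_c=0.5, d_v=0.75, v_th=0.5):
    """One two-step current-based LIF update with a hard reset."""
    c = d_c * c + W @ spikes_in + b         # i) presynaptic spikes -> current
    v = d_v * v + c                         # ii) current -> membrane voltage
    spikes_out = (v > v_th).astype(float)   # fire when the threshold is crossed
    v = v * (1.0 - spikes_out)              # hard reset to the resting potential
    return spikes_out, c, v

def decode_population(spike_counts, T, W_d, b_d):
    """Decode an output population's firing rate over T timesteps
    into a single action dimension: a = W_d . fr + b_d."""
    fr = spike_counts / T
    return W_d @ fr + b_d
```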
Next, we combine the strengths of SNNs with the recently advanced RMA algorithm. Figure 3 illustrates that the RMA system comprises two interconnected subsystems: the base policy \(\pi\) and the adaptation module \(\phi\), which collaborate harmoniously to facilitate continuous adaptation across a wide variety of environmental setups, thereby enabling smooth and uninterrupted online operation. The base policy is trained through reinforcement learning in simulation, utilizing privileged information about the environment configuration \(e_{t}\), which includes factors like friction, payload, and more. By leveraging knowledge of the vector \(e_{t}\), the base policy can effectively adapt to the specific characteristics of the given environment. The process begins by encoding the environment configuration vector \(e_{t}\) into a latent feature space \(z_{t}\) through an encoder network \(\mu\). This latent vector, referred to as the extrinsics, is then combined with the current state \(x_{t}\) and the previous action \(\alpha_{t-1}\) as input to the base policy. The base policy then generates predictions for the desired joint positions of the robot, denoted as \(a_{t}\). The policy \(\pi\) and the environmental factor encoder \(\mu\) undergo a collaborative training process using RL within a simulated environment. Regrettably, direct deployment of this policy is not feasible in the real world due to the unavailability of \(e_{t}\). To overcome this challenge, we need to estimate the extrinsics at runtime, a task performed by the adaptation module \(\phi\). The key insight here is that when we instruct the robot joints to perform a specific movement, the actual movement executed deviates from the intended movement, and this deviation is influenced by the extrinsics. Rather than relying on privileged information, we can leverage the recent history of the agent's states to estimate the extrinsics vector. Specifically, the purpose of \(\phi\) is to estimate the extrinsics vector \(z_{t}\) solely based on the recent state and action history of the robot, without any direct access to \(e_{t}\). At training time, since both the state history and the extrinsics vector \(z_{t}\) can be computed in simulation, we can train this module using supervised learning techniques.

Fig. 3: RMA is a system composed of two subsystems: the base policy \(\pi\) and the adaptation module \(\phi\). The training of RMA consists of two phases. **Training the Base Policy (Phase 1):** In the first phase, the base policy \(\pi\) is trained using PopSAN. It takes as input the current state \(x_{t}\), the previous action \(\alpha_{t-1}\), and the privileged environmental factors \(e_{t}\). These environmental factors are encoded into a latent extrinsics vector \(z_{t}\) using the environmental factor encoder \(\mu\). **Training the Adaptation Module (Phase 2):** In the second phase, the adaptation module \(\phi\) is trained to predict the extrinsics \(\widehat{z}_{t}\) based on the history of states and actions. This training is done using supervised learning with on-policy data. The adaptation module learns to capture the relationship between the state-action history and the corresponding extrinsics. By training the base policy and the adaptation module in these two phases, RMA is able to learn and adapt to the environment more effectively.

In addition, we have successfully combined the SNN with AMP and achieved performance similar to an ANN on legged robots. Figure 4 provides a schematic overview of the system. The motion dataset \(M\) consists of a collection of reference motions, where each motion \(m^{i}=\widehat{q}_{t}^{i}\) is represented as a sequence of poses \(\widehat{q}_{t}^{i}\). The motion clips can be obtained from a variety of sources, including motion capture (mocap) recordings of real-life actors or artist-authored keyframe animations. The motion of the simulated robot is controlled by a policy \(\pi(\alpha_{t}|s_{t},g)\) that maps the state of the character \(s_{t}\) and a given goal \(g\) to a distribution over actions \(\alpha_{t}\). The actions generated by the policy dictate the desired target positions for proportional-derivative (PD) controllers, which are located at each joint of the robot. These controllers then generate control forces that propel the robot's motion in accordance with the specified target positions. The goal \(g\) specifies a task reward function \(r_{t}^{G}=r^{G}(s_{t},\alpha_{t},s_{t+1},g)\), which defines high-level objectives that the robot must fulfill. These objectives can include tasks such as walking in a specific direction or executing a punch towards a designated target. The style objective \(r_{t}^{S}=r^{S}(s_{t},s_{t+1})\) is specified by an adversarial discriminator, trained to discern between motions captured in the dataset and motions generated by the robot itself. The style objective serves as a task-agnostic motion prior, offering an a-priori estimate of the naturalness or style of a given motion, regardless of the specific task. By doing so, the style objective motivates the policy to generate motions that closely resemble the behaviors observed in the dataset.

Fig. 4: By leveraging Adversarial Motion Priors and employing PopSAN as a replacement for the policy network during training, the agent is able to generate behaviors that capture the essence of the motion capture dataset.

### _Training of Legged Robots using SNNs on Isaac Gym_

In our study, we employed gradient descent to update the PopSAN parameters, where the exact loss function varies depending on the chosen algorithm (RMA or AMP). To train the parameters of PopSAN, we utilize the gradient of the loss with respect to the computed action, denoted as \(\nabla_{a}L\). The parameters of each output population \(i,i\in\{1,...,M\}\) are updated independently as follows:

\[\nabla_{\mathbf{W}_{d}^{(i)}}L=\nabla_{\alpha_{i}}L\cdot\mathbf{fr}^{(i)},\qquad\nabla_{b_{d}^{(i)}}L=\nabla_{\alpha_{i}}L \tag{1}\]

where \(\mathbf{fr}^{(i)}\) denotes the firing rates of the \(i\)-th output population, and \(\mathbf{W}_{d}^{(i)}\) and \(b_{d}^{(i)}\) are its decoding weights and bias.
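A minimal sketch of the updates in Eq. (1), assuming (as in PopSAN) that each action dimension is decoded as \(a_{i}=\mathbf{W}_{d}^{(i)}\cdot\mathbf{fr}^{(i)}+b_{d}^{(i)}\); the helper name is hypothetical.

```python
import numpy as np

def decoder_grads(grad_a_i, fr_i, W_d_i):
    """Per-population gradients of Eq. (1) for the linear decoder."""
    grad_W_d = grad_a_i * fr_i    # dL/dW_d^(i) = dL/da_i * fr^(i)
    grad_b_d = grad_a_i           # dL/db_d^(i) = dL/da_i
    grad_fr = grad_a_i * W_d_i    # gradient backpropagated into the SNN
    return grad_W_d, grad_b_d, grad_fr
```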
The SNN parameters are updated using the extended spatiotemporal backpropagation introduced in [38]. We used the rectangular function \(z(v)\), defined in [39], to approximate the gradient of a spike. The gradients of the loss with respect to the SNN parameters of each layer \(k\) are computed by collecting the gradients backpropagated from all the timesteps:

\[\nabla_{\boldsymbol{W}^{(k)}}L=\sum_{t=1}^{T}\boldsymbol{o}^{(t)(k-1)}\cdot\nabla_{\mathbf{c}^{(t)(k)}}L,\qquad\nabla_{\mathbf{b}^{(k)}}L=\sum_{t=1}^{T}\nabla_{\mathbf{c}^{(t)(k)}}L \tag{2}\]

Lastly, we updated the parameters independently for each input population \(i,i\in\{1,...,N\}\), as follows:

\[\nabla_{\boldsymbol{\mu}^{(i)}}L=\sum_{t=1}^{T}\nabla_{\boldsymbol{\alpha}_{i}^{(t)(o)}}L\cdot\boldsymbol{A}_{\boldsymbol{E}}^{(i)}\cdot\frac{s_{i}-\boldsymbol{\mu}^{(i)}}{\boldsymbol{\sigma}^{(i)^{2}}},\qquad\nabla_{\boldsymbol{\sigma}^{(i)}}L=\sum_{t=1}^{T}\nabla_{\boldsymbol{\alpha}_{i}^{(t)(o)}}L\cdot\boldsymbol{A}_{\boldsymbol{E}}^{(i)}\cdot\frac{\left(s_{i}-\boldsymbol{\mu}^{(i)}\right)^{2}}{\boldsymbol{\sigma}^{(i)^{3}}} \tag{3}\]

## IV Experiments

The goals of our experiments are the following: i) to validate the feasibility of SNNs on robots with high-dimensional, complex environments and complex dynamics models; ii) to verify the advantages of SNNs over ANNs in terms of ultra-high-frequency control; iii) to identify the rewards on which SNNs excel in RL tasks and to improve those on which they perform poorly. We evaluated our method on Isaac Gym, with its accurate physics simulation and flexible, customizable framework for robot models and environments. We mainly tested the performance of the following robots: A1, Cassie, and the MIT Humanoid, where each SNN is trained for up to 1,500,000 iterations until convergence. The final visualization and metrics such as the base velocities in x, y and yaw, the mean episode length, and the position torque are used for evaluation.

### _Simulation Setup_

To create a diverse range of environments, we imported several URDF files from recent works. These files include models such as A1, Anymal-b, Anymal-c, Cassie, MIT Humanoid, and more. Once imported, we leverage the capabilities of the Isaac Gym simulator to create new environments for each of these models. To add variety, we make use of the built-in fractal terrain generator provided by the simulator. This terrain generator enables us to generate different types of terrain, ranging from plain terrains to uneven terrains with various topographical features.
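As an illustration, per-environment domain randomization over the training ranges later listed in Table I might look as follows; the dictionary keys, units, and the uniform sampler are assumptions rather than the exact implementation.

```python
import numpy as np

def sample_env_params(rng):
    """Illustrative per-environment randomization over the training ranges of Table I."""
    return {
        "friction":       rng.uniform(0.005, 4.5),
        "kp":             rng.uniform(50.0, 60.0),   # PD proportional gain
        "kd":             rng.uniform(0.4, 0.8),     # PD derivative gain
        "payload_kg":     rng.uniform(0.0, 6.0),
        "com_offset":     rng.uniform(-0.15, 0.15),  # center-of-mass shift (cm, per Table I)
        "motor_strength": rng.uniform(0.90, 1.10),
    }

# Example: at each step, resample an environment's parameters with probability 0.004.
rng = np.random.default_rng(0)
params = sample_env_params(rng)
if rng.random() < 0.004:
    params = sample_env_params(rng)
```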
Our policy operates at a control frequency of up to 500Hz, thanks to our SNN-based approach. This allows us to achieve high-frequency control, resulting in fast and accurate adjustments to the system. Table I lists the environmental variations and their ranges.

### _Performance of High-Frequency Control using SNNs_

We conducted tests on the robots mentioned above, specifically for linear and angular velocity tracking tasks. In these tests, we considered tracking velocity (linear and angular) as the primary positive reward. Additionally, we incorporated penalties for a base height that is too low, excessive acceleration, instances where the robot falls, etc.

#### IV-B1 A1

For A1, we conducted training and testing in several terrain environments, including pyramid stairs resembling terraces (upstairs and downstairs), pyramids with sloping surfaces, hilly terrains, terrains with discrete obstacles, and terrains covered with stepping stones. Cassie, on the other hand, is trained solely in a trapezoidal pyramid environment, and the MIT Humanoid on plain terrain. To further investigate the advantages of SNNs in high-frequency control scenarios, we deliberately increased the simulation environment's update rate to 2.5 times that of the original default ANN task, resulting in a control frequency of 500Hz. By comparing the performance of the SNN with the ANN under these conditions, our aim is to establish whether the SNN outperforms the ANN in real-world environments that require high-frequency control. The rationale behind this investigation stems from the fact that robots using ANN control algorithms often face limitations due to their onboard energy constraints. As a result, these robots can typically achieve only a policy inference frequency of 100Hz, which falls significantly behind the execution frequency of the motors. Conversely, the SNN offers the potential to enhance the quality of policy inference and deliver more real-time control performance, due to its energy-efficient nature and its deployment on specialized processors. If the SNN can achieve similar or even superior performance compared to the ANN under high-frequency control, it would indicate the superiority of the SNN in real-world environments.

Fig. 5: The four graphs depicting the robot's x-axis linear velocity, y-axis linear velocity, yaw-axis angular velocity, and DOF position vividly demonstrate the successful accomplishment of the task through the implementation of our method.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Parameters** & **Training range** & **Testing range** \\ \hline Friction & [0.005, 4.5] & [0.004, 6.0] \\ \hline \(K_{p}\) & [50, 60] & [45, 65] \\ \hline \(K_{d}\) & [0.4, 0.8] & [0.3, 0.9] \\ \hline Payload(Kg) & [0, 6] & [0, 7] \\ \hline Center of Mass(cm) & [-0.15, 0.15] & [-0.18, 0.18] \\ \hline Motor Strength & [0.90, 1.10] & [0.88, 1.22] \\ \hline Re-sample Probability & 0.004 & 0.01 \\ \hline \end{tabular} \end{table}
TABLE I: Ranges of the environmental parameters

In Figure 5, the effectiveness of our method under high-frequency control is clearly demonstrated. The performance of the A1 robot in tracking velocity x is highly commendable, showcasing its ability to accurately follow the desired trajectory. Moreover, when considering the impact of the intricate terrain environment, the fluctuation range of the linear velocity in the y-axis direction remains within an acceptable range. This further highlights the robustness of our method in tackling challenging terrains.
Furthermore, the results obtained in the yaw-axis angular velocity tracking are also notable. Although occasional spikes occur, indicating moments when the robot deviates from the existing policy framework, these instances can be attributed to the robot's inclination to explore alternative approaches when faced with difficult terrain. This adaptive behavior showcases the robot's ability to go beyond pre-existing policies and dynamically adapt its actions to overcome obstacles. #### Iv-B2 Cassie The training results of the Cassie robot have proven to be highly successful, as evidenced by the findings depicted in Figure 6. The stability observed in the angular velocity tracking along the yaw axis is particularly noteworthy. This stability is crucial in ensuring effective control and balance while navigating diverse terrains. Furthermore, the robot's impressive ability to reach the highest level of terrain height, as indicated by a reward score of up to \(6\) (with each unit representing a layer in the pyramid ladder), showcases its adaptability in conquering rugged landscapes. These findings highlight the Cassie robot's capacity to successfully traverse challenging terrains under the high control frequency of SNNs while maintaining stability and elegance. Consequently, the feasibility of our approach is substantiated, affirming its potential deployment in complex and ever-changing real-world scenarios. #### Iv-B3 MIT Humanoid The training of the MIT Humanoid also proved to be highly successful, showcasing the effectiveness of our spike-based approach. While it did take slightly longer to train compared to traditional ANNs, the results obtained are equally impressive. In fact, they even surpassed the ANN in certain individual metrics, as clearly depicted in Figure 7. These findings strongly suggest that the SNN possesses inherent advantages when it comes to control robustness. Furthermore, it enables agents to not only thrive but also endure in their environment for an extended duration. This highlights the potential of SNN as a powerful tool in enhancing agent performance and longevity. In the videos presented, the robots trained using SNN-based policies exhibit remarkable characteristics, including smooth, stable, and natural gaits. Whether it is the agile traversal of challenging terrains by A1 and Cassie or the unrestricted running of the MIT Humanoid, the performance showcased by our approach is undeniably superior. Notably, the low energy consumption associated with SNN further positions it as a promising alternative to ANN. With its ability to achieve exceptional performance and energy efficiency, SNN holds great potential in the field of robotics. ## V Conclusion and Limitation This study presents the integration of PopSAN with two cutting-edge reinforcement learning techniques, namely history-trajectory and imitation learning. This innovative approach enables SNNs to achieve performance comparable to ANNs. The successful training of SNN-based policy networks using these methods highlights the versatility of SNNs in policy gradient-based DRL algorithms, thereby opening up new horizons for their application in various reinforcement learning tasks, including continuous control. Additionally, the study explores the suitability of SNNs for high-frequency control tasks, demonstrating that agents powered by SNNs can devise exceptional strategies even at an elevated inference frequency of 500Hz. 
These findings emphasize the advantages of SNN-based control methods over ANNs, particularly in scenarios with limited computational resources. They showcase the remarkable energy efficiency and robustness of SNNs, highlighting their potential in various applications. However, due to the difficulty in obtaining neuromorphic chips, our experiments could not be conducted on a physical machine, consequently slightly diminishing the strength of our argument regarding the energy efficiency of SNNs. By embracing SNNs, we unlock a realm of possibilities for future advancements in intelligent control systems, transcending traditional computational paradigms. ## Acknowledgment The authors would like to express their gratitude to Jingtong Ma and Chao Chen for their valuable contributions and insightful discussions. Fig. 6: The first image showcases Cassie’s remarkable capability to conquer complex terrain, as indicated by the terrain level value nearing 6. Additionally, the second figure demonstrates Cassie’s impeccable tracking of the angular velocity of the yaw axis, exemplifying its capacity to maintain body stability while traversing complex terrain. Fig. 7: In experiments conducted on the MIT Humanoid, the SNN achieves a level of performance comparable to the ANN on multiple evaluation metrics and even surpasses it on some. Although the SNN takes longer to train, it outperforms the ANN in terms of mean episode length after training convergence, providing strong evidence for the exceptional robustness of our method in whole-body control.
2303.06375
About the de Almeida-Thouless line in neural networks
In this work we present a rigorous and straightforward method to detect the onset of the instability of replica-symmetric theories in information processing systems, which does not require a full replica analysis as in the method originally proposed by de Almeida and Thouless for spin glasses. The method is based on an expansion of the free-energy obtained within one-step of replica symmetry breaking (RSB) around the RS value. As such, it requires solely continuity and differentiability of the free-energy and it is robust to be applied broadly to systems with quenched disorder. We apply the method to the Hopfield model and to neural networks with multi-node Hebbian interactions, as case studies. In the appendices we test the method on the Sherrington-Kirkpatrick and the Ising P-spin models, recovering the AT lines known in the literature for these models, as a special limit, which corresponds to assuming that the transition from the RS to the RSB phase can be obtained by varying continuously the order parameters. Our method provides a generalization of the AT approach, which does not rely on this limit and can be applied to systems with discontinuous phase transitions, as we show explicitly for the spherical P-spin model, recovering the known RS instability line.
Linda Albanese, Andrea Alessandrelli, Adriano Barra, Alessia Annibale
2023-03-11T10:54:13Z
http://arxiv.org/abs/2303.06375v4
# About the de Almeida-Thouless line in neural networks ###### Abstract In this work we present a rigorous and straightforward method to detect the onset of the instability of replica-symmetric theories in information processing systems, which does not require a full replica analysis as in the method originally proposed by de Almeida and Thouless for spin glasses. The method is based on an expansion of the free-energy obtained within one step of the replica symmetry breaking (RSB) scheme around the replica-symmetric (RS) value. As such, it requires solely continuity and differentiability of the free-energy and it is robust enough to be applied broadly to systems with quenched disorder. We apply the method to the Hopfield model and to neural networks with multi-node Hebbian interactions, as case studies. In the appendix we test the method on the Sherrington-Kirkpatrick, the Ising \(P\)-spin and the spherical \(P\)-spin models, recovering the AT lines known in the literature for these models, as a special limit, which corresponds to assuming that the transition from the RS to the RSB phase can be obtained by varying continuously the order parameters. Therefore, our method provides a generalization of the AT approach, which can be applied to systems with a discontinuous transition. ###### Contents * 1 Introduction * 2 The Hopfield model * 3 Hebbian networks with \(P\)-node interactions * 4 Discussion * A Applications to spin-glass models * A.1 Sherrington-Kirkpatrick Model * A.2 The Ising P-spin model * A.3 P-spin spherical model * B Contributions to sub-leading orders ## 1 Introduction Replica symmetry breaking in neural networks has attracted increasing attention in recent years [4; 6; 7; 22; 32]; however, there is, as yet, no general broken replica-symmetry theory for these systems and no simple method to systematically detect their transition from the replica-symmetric (RS) to the replica-symmetry-broken (RSB) phase. While the first point is still out of reach, the second question can be addressed by adapting approaches originally developed for spin glasses. Indeed, the instability line of the RS phase in the Sherrington-Kirkpatrick (SK) spin glass model was derived by de Almeida and Thouless (AT) many decades ago [17], using a method based on replicas. Since their seminal work, rigorous techniques have been developed and tested in archetypical mean-field as well as short-ranged spin glass models, by many researchers (see e.g. [8; 11; 12; 21; 23; 25; 29; 30]). As neural networks are particular realizations of spin glasses, it is quite natural to ask if we can devise a systematic method to derive the RS instability line also for these systems. In this work we answer this question affirmatively, using the Hopfield model of neural networks and a model of dense associative memory, which extends Hebbian learning to multi-node interactions, as case studies. To this purpose, we devise a method inspired by the approach proposed by Toninelli in [31], which builds on Guerra's work on broken replica-symmetry bounds [20]. As a technical note, we remark that, in contrast to conventional spin glasses, here we focus on the RS instability in the parameter space \((\alpha,T)\) where \(\alpha\) is the storage load of the network and \(T\) is the noise level, rather than in the space \((h,T)\) (i.e. magnetic field, temperature) conventionally used in spin glasses. 
For the Hopfield model, our method recovers the instability line obtained by Coolen [13] using the AT approach, as a special limit, which corresponds to assuming a continuous transition from the RS to the RSB phase in the order parameters. Therefore, our method provides a generalization of the AT approach, which can be applied to systems with a discontinuous transition from the RS to the 1RSB phase. Another advantage of our method, when compared to the involved calculations of the AT method in the Hopfield model [13], is its remarkable simplicity. This allows for straightforward application to more complex neural network models, such as dense associative memories with \(P\)-node interactions. We supplement the results in the main text with an appendix where we show our method at work on conventional spin-glass models, namely the Sherrington-Kirkpatrick model, the Ising \(P\)-spin and the spherical \(P\)-spin model, the latter providing an example of a system which exhibits a discontinuous phase transition from the paramagnetic to the spin-glass phase. In all cases, we retrieve the AT lines known for these models [13, 16, 17, 18] in a specific limit, confirming the validity of our approach as a generalization of the AT method. As expected from the decomposition theorem of multi-node Hebbian networks proved in [4], for dense associative memories with \(P\)-node interactions we retrieve the instability line of the Ising \(P\)-spin model derived in the appendix. ## 2 The Hopfield model In this section we illustrate the method for the Hopfield model with \(N\) Ising neurons \(\sigma_{i}\in\{1,-1\},\ i=1,\ldots,N\) and \(K=\alpha N\) stored patterns \(\mathbf{\xi}^{\mu}\), \(\mu=1,\ldots,K\). Each pattern \(\mathbf{\xi}^{\mu}\) is a sequence of \(N\) Rademacher entries (i.e. Bernoulli variables) \(\xi_{i}^{\mu}\), \(i=1,\ldots,N\), with distribution \[\mathbb{P}(\xi_{i}^{\mu})=\frac{1}{2}\left(\delta_{\xi_{i}^{\mu},+1}+\delta_{\xi_{i}^{\mu},-1}\right). \tag{1}\] The Hamiltonian of the model is \[H_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{1}{N}\sum_{i,j=1,1}^{N,N}\sum_{\mu=1}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}\sigma_{i}\sigma_{j}. \tag{2}\] In the so-called 'retrieval' phase, the equilibrium local configurations are correlated only with a single pattern, say \(\nu\). As the couplings \(J_{ij}=\sum_{\mu=1}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}\) are symmetric w.r.t. permutations of the patterns, it is assumed without loss of generality that \(\nu=1\). It is thus convenient to define the following order parameters: \[m\!:=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{1}\sigma_{i}\qquad q\!:=\frac{1}{N}\sum_{i=1}^{N}\sigma_{i}^{(1)}\sigma_{i}^{(2)} \tag{3}\] where the first one, denoted as the _Mattis magnetization_, quantifies the alignment of the network configuration with the retrieval pattern \(\mathbf{\xi}^{1}\), and \(q\) is the standard _two-replica overlap_ quantifying the correlations between two configurations of the network obtained with the same realization of the couplings. 
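As a quick numerical illustration of the order parameters in (3) (our own sketch, not part of the paper), both are plain inner products of \(\pm 1\) configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 50
xi = rng.choice([-1, 1], size=(K, N))                  # K Rademacher patterns
# two "replicas": noisy configurations correlated with pattern 1
sigma_a = np.sign(xi[0] + 0.5 * rng.standard_normal(N))
sigma_b = np.sign(xi[0] + 0.5 * rng.standard_normal(N))

m = np.mean(xi[0] * sigma_a)     # Mattis magnetization, Eq. (3)
q = np.mean(sigma_a * sigma_b)   # two-replica overlap, Eq. (3)
print(f"m = {m:.3f}, q = {q:.3f}")
```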
Denoting their equilibrium values with \[\bar{m}\!:=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{1}\langle\sigma_{i}\rangle\qquad \bar{q}\!:=\frac{1}{N}\sum_{i=1}^{N}\langle\sigma_{i}^{(1)}\rangle\langle \sigma_{i}^{(2)}\rangle\] where \(\langle\,\cdot\,\rangle\) is the average with respect to the Boltzmann distribution with Hamiltonian (2) and noise level \(T=1/\beta\), the RS analysis assumes that the order parameters \(q\) and \(m\) self-average around their equilibrium values \(\bar{q}\) and \(\bar{m}\), in the thermodynamic limit, namely \[\lim_{N\rightarrow+\infty}P_{N}^{\prime}(q)= \delta(q-\bar{q}), \tag{4}\] \[\lim_{N\rightarrow+\infty}P_{N}^{\prime}(m)= \delta(m-\bar{m}). \tag{5}\] Under this assumption, the free-energy, averaged over the pattern distribution (or 'quenched' disorder), \(f\), is given by (see [5]) \[-\beta f_{RS}(\beta,\alpha,\bar{q})= \ln 2-\frac{\alpha}{2}\ln(1-\beta(1-\bar{q}))-\frac{\beta^{2}}{2} \bar{m}^{2}+\frac{\alpha\beta\bar{q}}{2(1-\beta(1-\bar{q}))}\] \[+\frac{\alpha\beta^{2}}{2}\frac{\bar{q}^{2}}{(1-\beta(1-\bar{q}) )}+\mathbb{E}\ln\cosh\left(\beta z\sqrt{\frac{\alpha\bar{q}}{(1-\beta(1-\bar{q} ))^{2}}}+\beta\bar{m}\right), \tag{6}\] where \(z\) is a random Gaussian variable with zero average and unit variance, \(\mathbb{E}\) denotes the average over \(z\) and \(\alpha\) is the load capacity of the network. In this limit, the order parameters \(\bar{q}\) and \(\bar{m}\) fulfill the celebrated Amit-Gutfreund-Sompolinsky self-consistency equations [5; 14]: \[\bar{q} =\mathbb{E}\tanh^{2}\left(\beta\bar{m}+\beta J\sqrt{\frac{\alpha \bar{q}}{(1-\beta(1-\bar{q}))^{2}}}z\right), \tag{7}\] \[\bar{m} =\mathbb{E}\tanh\left(\beta\bar{m}+\beta J\sqrt{\frac{\alpha \bar{q}}{(1-\beta(1-\bar{q}))^{2}}}z\right). \tag{8}\] On the other hand, within one step of the replica-symmetry breaking (1RSB) scheme [15; 28; 1] it is assumed that the distribution of the two-replica overlap \(q\), in the thermodynamic limit, displays two delta-peaks at the equilibrium values \(\bar{q}_{0}\) and \(\bar{q}_{1}>\bar{q}_{0}\) and the concentration on these two values is ruled by the parameter \(\theta\in[0,1]\), while \(m\) self-averages as in the RS case: \[\lim_{N\to+\infty}P^{\prime}_{N}(q) =\theta\delta(q-\bar{q}_{0})+(1-\theta)\delta(q-\bar{q}_{1}), \tag{9}\] \[\lim_{N\to+\infty}P^{\prime}_{N}(m) =\delta(m-\bar{m}). \tag{10}\] Within this assumption, the disorder-averaged free-energy is given by (see e.g. 
[1]) \[-\beta f_{1RSB}(\beta,\alpha,\theta,\bar{m},\bar{q}_{1},\bar{q}_ {0}) =\ln 2-\frac{\alpha}{2}\ln\left(\Delta_{1}(\bar{q}_{1})\right)+ \frac{\alpha}{2\theta}\ln\left(\frac{\Delta_{1}(\bar{q}_{1})}{\Delta_{2}(\bar {q}_{0},\bar{q}_{1})}\right)+\frac{\alpha\beta\bar{q}_{0}}{2\Delta_{2}(\bar{q }_{0},\bar{q}_{1})}\] \[-\frac{\beta}{2}\bar{m}^{2}+\frac{1}{2}\alpha\beta^{2}\theta \frac{\bar{q}_{0}^{2}}{\Delta_{2}^{2}(\bar{q}_{0},\bar{q}_{1})}+\frac{1}{ \theta}\mathbb{E}_{1}\ln\mathbb{E}_{2}\cosh^{\theta}g_{\theta}(\beta,\alpha, \bar{m},\bar{q}_{0},\bar{q}_{1})\] \[-\frac{1}{2}\alpha\beta^{2}\theta\bar{q}_{1}\bigg{(}\frac{\bar{q }_{0}}{\Delta_{2}^{2}(\bar{q}_{0},\bar{q}_{1})}+\frac{\bar{q}_{1}-\bar{q}_{0}} {\Delta_{1}(\bar{q}_{1})\Delta_{2}(\bar{q}_{0},\bar{q}_{1})}\bigg{)}\] \[-\frac{1}{2}\alpha\beta^{2}(1\!-\!\bar{q}_{1})\bigg{(}\frac{\bar{q }_{0}}{\Delta_{2}^{2}(\bar{q}_{0},\bar{q}_{1})}\!+\!\frac{\bar{q}_{1}-\bar{q}_ {0}}{\Delta_{1}(\bar{q}_{1})\Delta_{2}(\bar{q}_{0},\bar{q}_{1})}\bigg{)} \tag{11}\] where, for mathematical convenience, we defined \[\Delta_{1}(\beta,\bar{q}_{1}):=1-\beta(1-\bar{q}_{1}) \tag{12}\] \[\Delta_{2}(\beta,\theta,\bar{q}_{0},\bar{q}_{1}):=1-\beta(1- \bar{q}_{1})-\beta\theta(\bar{q}_{1}-\bar{q}_{0})\] (13) \[g_{\theta}(\beta,\alpha,\bar{m},\bar{q}_{0},\bar{q}_{1}):=\beta \bar{m}+\frac{\beta z^{(1)}\sqrt{\alpha\bar{q}_{0}}}{\Delta_{2}(\theta,\bar{q }_{0},\bar{q}_{1})}+\beta z^{(2)}\sqrt{\frac{\alpha(\bar{q}_{1}-\bar{q}_{0})} {\Delta_{1}(\bar{q}_{1})\Delta_{2}(\theta,\bar{q}_{0},\bar{q}_{1})}} \tag{14}\] and we have denoted with \(\mathbb{E}_{1}\), \(\mathbb{E}_{2}\) the averages w.r.t. the standard normal variables \(z^{(1)}\) and \(z^{(2)}\), respectively. From now on, we imply the dependence of the functions on \(\bar{m},\ \beta,\ \alpha\). By extremizing the 1RSB free-energy w.r.t. its order parameters \(\bar{q}_{0},\ \bar{q}_{1},\ \bar{m}\), it is possible to show that the latter fulfill the following self-consistency equations \[\bar{m} = \mathbb{E}_{1}\left[\frac{\mathbb{E}_{2}\cosh^{\theta}g_{\theta}( \bar{q}_{0},\bar{q}_{1})\tanh g_{\theta}(\bar{q}_{0},\bar{q}_{1})}{\mathbb{E}_{ 2}\cosh^{\theta}g_{\theta}(\bar{q}_{0},\bar{q}_{1})}\right],\] \[\bar{q}_{1} = \mathbb{E}_{1}\left[\frac{\mathbb{E}_{2}\cosh^{\theta}g_{\theta}( \bar{q}_{0},\bar{q}_{1})\tanh^{2}g_{\theta}(\bar{q}_{0},\bar{q}_{1})}{\mathbb{ E}_{2}\cosh^{\theta}g_{\theta}(\bar{q}_{0},\bar{q}_{1})}\right], \tag{15}\] \[\bar{q}_{0} = \mathbb{E}_{1}\left[\frac{\mathbb{E}_{2}\cosh^{\theta}g_{\theta} (\bar{q}_{0},\bar{q}_{1})\tanh g_{\theta}(\bar{q}_{0},\bar{q}_{1})}{\mathbb{E} _{2}\cosh^{\theta}g_{\theta}(\bar{q}_{0},\bar{q}_{1})}\right]^{2}.\] The key idea of our method is to assume that at the onset of the RS instability, one of the two delta-peaks in equation (9) has vanishing weight, i.e. either \(\theta\!\rightarrow\!0\) or \(\theta\!\rightarrow\!1\). Consistency with the RS theory then requires the dominating peak to be located at the value \(\bar{q}\) of the RS order parameter, so either \(\lim_{\theta\to 0}\bar{q}_{1}\!=\!\bar{q}\) or \(\lim_{\theta\to 1}\bar{q}_{0}\!=\!\bar{q}\). As \(q_{1}>q_{0}\geq 0\), the scenario \(\theta\!=\!0\) and \(q_{1}(\theta=0)\!=\!\bar{q}\) can only apply to systems where the RS instability manifests at a temperature \(T_{c}\) below the critical temperature \(T^{\star}\) at which \(\bar{q}\) becomes non-zero (within the RS theory). 
On the other hand, in systems where \(T_{c}>T^{\star}\), at the onset of RS instability \(\bar{q}=0\), so only the second scenario (\(\theta=1\) and \(q_{0}(\theta=1)=\bar{q}\)) can apply. For the Hopfield model, one can easily check using (15) that \(\lim_{\theta\to 1}\bar{q}_{0}\neq\bar{q}\), while \[\lim_{\theta\to 0}\bar{q}_{1} = \lim_{\theta\to 0}\mathbb{E}_{1}\mathbb{E}_{2}\tanh^{2}\left(\beta\bar{m}+\beta z^{(1)}\frac{\sqrt{\alpha\bar{q}_{0}}}{\Delta_{2}(\theta,\bar{q}_{0},\bar{q}_{1})}\,+\,\beta z^{(2)}\sqrt{\alpha\frac{\bar{q}_{1}-\bar{q}_{0}}{\Delta_{1}(\bar{q}_{1})\Delta_{2}(\theta,\bar{q}_{0},\bar{q}_{1})}}\right) \tag{16}\] \[= \mathbb{E}_{1}\mathbb{E}_{2}\tanh^{2}\left(\beta\bar{m}+\beta z^{(1)}\frac{\sqrt{\alpha\bar{q}_{0}}}{\Delta_{1}(\bar{q}_{1})}+\beta z^{(2)}\frac{\sqrt{\alpha(\bar{q}_{1}-\bar{q}_{0})}}{\Delta_{1}(\bar{q}_{1})}\right)\] \[= \mathbb{E}\tanh^{2}\left(\beta\bar{m}+\beta J\sqrt{\frac{\alpha\bar{q}_{1}}{(1-\beta(1-\bar{q}_{1}))^{2}}}z\right)\] where we have used that for \(\theta=0\), \(\Delta_{2}(0,\bar{q}_{0},\bar{q}_{1})=\Delta_{1}(\bar{q}_{1})\) and the relation \[\mathbb{E}_{\lambda,Y}[F(a_{1}+\lambda a_{2}+Ya_{3})]=\mathbb{E}_{Z}\left[F\left(a_{1}+Z\sqrt{a_{2}^{2}+a_{3}^{2}}\right)\right], \tag{17}\] with \(F\) any smooth function, \(a_{1},\ a_{2},\ a_{3}\in\mathbb{R}\), and \(\lambda\), \(Y\) and \(Z\) i.i.d. standard normal random variables. As (16) is identical to (7), in the limit \(\theta\to 0\), \(\bar{q}_{1}\) is equal to the RS order parameter \(\bar{q}\). Furthermore, one can easily verify that \(f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})|_{\theta=0}=f_{RS}(\bar{q})\), as expected from the fact that, for \(\theta=0\), eq. (9) reduces to (4) and one retrieves the RS scheme. Our purpose is then to prove that for small but finite values of \(\theta\) the 1RSB expression of the quenched free-energy is smaller than the RS expression, i.e. \(f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})<f_{RS}(\bar{q})\), below a critical line in the parameter space \((\alpha,\beta)\). To this purpose, we expand the 1RSB quenched free-energy around \(\theta=0\), namely around the replica-symmetric expression, to the first order, to write \[f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})= f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})|_{\theta=0}+\theta\partial_{\theta}f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})|_{\theta=0}, \tag{18}\] where \(f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})|_{\theta=0}=f_{RS}(\bar{q})\). To determine when the RS solution becomes unstable, i.e. \(f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})<f_{RS}(\bar{q})\), we inspect the sign of \(\partial_{\theta}f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})|_{\theta=0}\). To evaluate the latter, we need to expand the self-consistency equations for \(\bar{q}_{0}\) and \(\bar{q}_{1}\) to linear orders in \(\theta\). Using (16) and denoting \[g_{0}(\bar{q}_{1},\bar{q}_{0})=\beta\bar{m}+\beta z^{(1)}\frac{\sqrt{\alpha\bar{q}_{0}}}{\Delta_{1}(\bar{q}_{1})}+\beta z^{(2)}\frac{\sqrt{\alpha(\bar{q}_{1}-\bar{q}_{0})}}{\Delta_{1}(\bar{q}_{1})}, \tag{19}\] we obtain \[\bar{q}_{1}=\mathbb{E}_{1}\mathbb{E}_{2}\tanh^{2}g_{0}(\bar{q}_{1},\bar{q}_{0})+\theta A(\bar{q}_{1},\bar{q}_{0}) \tag{20}\] where \(A(\bar{q}_{1},\bar{q}_{0})\) is a function of \(\bar{q}_{1}\) and \(\bar{q}_{0}\) that will drop out of the calculation, whose expression is provided in (16). 
It follows from (16) that to \(\mathcal{O}(\theta^{0})\), \(\bar{q}_{1}\) is equal to the RS order parameter \(\bar{q}\) so we can rewrite (20) as \[\bar{q}_{1}=\bar{q}+\theta A(\bar{q},\bar{q}_{0}). \tag{21}\] Following the same path for \(\bar{q}_{0}\), and using (21), we have \[\bar{q}_{0}=:\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g_{0}(\bar{q}_{0},\bar{q} )\right)^{2}+\theta B(\bar{q}_{1},\bar{q}_{0}) \tag{22}\] where \(B(\bar{q}_{1},\bar{q}_{0})\) is provided in (16) and will drop out of the calculation. For \(\theta=0\), we have \[\bar{q}_{0}=\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g_{0}(\bar{q}_{0},\bar{q} )\right)^{2} \tag{23}\] which is a self-consistency equation for \(\bar{q}_{0}\), that depends only on \(\bar{q}\). Denoting with \(\tilde{q}_{0}(\bar{q})\) its solution, we can then write (22) as \[\bar{q}_{0}=\tilde{q}_{0}(\bar{q})+\theta B(\bar{q},\tilde{q}_{0}(\bar{q})) \tag{24}\] and, finally, \[\bar{q}_{1}=\bar{q}+\theta A(\bar{q},\tilde{q}_{0}(\bar{q})). \tag{25}\] Using (25) and (24) to evaluate the derivative of \(f_{1RSB}(\theta,\bar{q}_{1},\bar{q}_{0})\) w.r.t. \(\theta\) and finally setting \(\theta=0\), we obtain: \[K(\bar{q},\tilde{q}_{0}(\bar{q})):=\partial_{\theta}(-\beta f_{ 1RSB}(\theta,\bar{q}_{1},\tilde{q}_{0}))|_{\theta=0}\] \[=-\frac{\alpha\beta^{2}(\bar{q}^{2}-\tilde{q}_{0}(\bar{q})^{2})} {4\Delta_{1}(\bar{q})^{2}}+\frac{1}{2}\mathbb{E}_{1}\mathbb{E}_{2}\ln^{2} \cosh g_{0}(\bar{q},\tilde{q}_{0}(\bar{q}))-\frac{1}{2}\mathbb{E}_{1}\left( \mathbb{E}_{2}\ln\cosh g_{0}(\bar{q},\tilde{q}_{0}(\bar{q}))\right)^{2} \tag{26}\] Next, we study the sign of (26), where \(\bar{q}\) and \(\tilde{q}_{0}(\bar{q})\) are the solutions of the self-consistency equations (7) and (23), respectively. To this purpose, it is useful to study the behaviour of the function \(K(\bar{q},x)\) for \(x\in[0,\bar{q}]\). For \(x=\bar{q}\), we have \(K(\bar{q},\bar{q})=0\), while the extremum of \(K(\bar{q},x)\) is found from \[\partial_{x}K(\bar{q},x)=\frac{\beta^{2}\alpha x}{2\Delta_{1}(\bar{q})^{2}} \left[x-\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g_{0}(\bar{q},x)\right)^{2} \right]=0\] as \[x=\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g_{0}(\bar{q},x)\right)^{2}\equiv \tilde{q}_{0}(\bar{q}), \tag{27}\] from eq. (23). Given that \(K(\bar{q},x)\) vanishes for \(x=\bar{q}\), if the extremum \(x=\tilde{q}_{0}(\bar{q})\) is global in the domain considered, we must have that \(K(\bar{q},\tilde{q}_{0}(\bar{q}))>0\) if \(x=\tilde{q}_{0}(\bar{q})\) is a maximum and \(K(\bar{q},\tilde{q}_{0}(\bar{q}))<0\) if \(x=\tilde{q}_{0}(\bar{q})\) is a minimum. Therefore, if \[\partial_{x}^{2}K(\bar{q},x)|_{x=\tilde{q}_{0}(\bar{q})}=\frac{\beta^{2}\alpha }{2\Delta_{1}(\bar{q})^{2}}\left\{1-\frac{\beta^{2}\alpha}{\Delta_{1}(\bar{q})^ {2}}\mathbb{E}_{1}\left\{\mathbb{E}_{2}\left[\frac{1}{\cosh^{2}g_{0}(\bar{q} _{0}(\bar{q}),\bar{q})}\right]\right\}^{2}\right\} \tag{28}\] is negative, \(K(\bar{q},\tilde{q}_{0}(\bar{q}))\) is positive and \[f_{1RSB}(\theta,\bar{q},\tilde{q}_{0}(\bar{q}))=f_{RS}(\bar{q})-\theta\frac{K (\bar{q},\tilde{q}_{0}(\bar{q}))}{\beta}<f_{RS}(\bar{q}), \tag{29}\] hence the RS theory becomes unstable when the expression in the curly brackets in (28) becomes negative i.e. 
for \[(1-\beta(1-\bar{q}))^{2}<\beta^{2}\alpha\mathbb{E}_{1}\left\{\mathbb{E}_{2}\left[ \frac{1}{\cosh^{2}g_{0}(\bar{q},\tilde{q}_{0}(\bar{q}))}\right]\right\}^{2} \tag{30}\] This expression recovers the result found by Coolen in [13] using the Almeida and Thouless approach [17], in the limit \(\tilde{q}_{0}(\bar{q})\rightarrow\bar{q}\), where (30) reduces to \[(1-\beta(1-\bar{q}))^{2}<\alpha\beta^{2}\,\mathbb{E}\cosh^{-4}\left[\beta\bar{ m}+\beta z\frac{\sqrt{\alpha\bar{q}}}{1-\beta(1-\bar{q})}\right] \tag{31}\] While this limit is _a priori_ unjustified, as \(\bar{q}\) and \(\tilde{q}_{0}(\bar{q})\) should be solved from the self-consistency equations (7) and (23), respectively, one can check numerically that the solutions of these equations are indeed very close for any temperature. We anticipate, however, that this will not be the case in Hebbian networks with \(P\)-node interactions, that we will analyse in the next section. In conclusion, our method provides a more general result than the AT approach and it applies to systems where \(\tilde{q}_{0}(\bar{q})\) and \(\bar{q}\) are different. In particular, it carries over to systems with discontinuous phase transition, where \(\bar{q}_{1}\) differs from \(\bar{q}_{0}\) even at the onset of the RS instability, which implies \(\tilde{q}_{0}(\bar{q}_{0})\neq\bar{q}\). ## 3 Hebbian networks with \(P\)-node interactions In this section we consider generalizations of the Hopfield model, where neurons interact in P-tuples of even \(P\geq 4\) (rather than pairwise, i.e. \(P=2\)). Such networks were shown to store many more patterns than the number of their nodes, so they work as _dense_ associative memories [24]. They are also known to be dual to deep neural networks [3; 4] and to exhibit information processing capabilities that are forbidden in shallow networks, such as the existence of a region in the parameter space where they can retrieve patterns although these are overshadowed by the noise [2]. As before, we consider a network of \(N\) Ising neurons \(\sigma_{i}\in\{1,-1\}\) with \(K\) stored patterns \(\mathbf{\xi}^{\mathbf{\mu}}\), which are now vectors of \(N^{P/2}\) Rademacher entries. The Hamiltonian of this model is \[H_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{N^{1-P}}{P!}\sum_{\mu=1}^{K}\!\left(\sum_{ i_{1},\ldots,i_{P/2}}\xi_{i_{1}}^{\mu}\ldots\xi_{i_{P/2}}^{\mu}\sigma_{i_{1}} \ldots\sigma_{i_{P/2}}\!\right)^{\!2} \tag{32}\] The order parameters are still the Mattis magnetization \(m\) and the two-replicas overlap \(q\), as introduced in (3), with their RS distributions given in (4) and (5) and their 1RSB generalizations given in (9) and (10). The quenched free-energy in RS assumption is (see [19]) \[-\beta^{{}^{\prime}}f_{RS}(\beta^{{}^{\prime}},\alpha,\bar{q})= \ln 2-\frac{{\beta^{{}^{\prime}}}^{2}}{2}(P-1)\bar{m}^{P}+\frac{{ \beta^{{}^{\prime}}}^{2}\alpha}{4}(1-\bar{q}^{P})-\frac{{\beta^{{}^{\prime}}}^ {2}\alpha P}{4}\bar{q}^{P-1}(1-\bar{q})\] \[+\mathbb{E}\ln\cosh\left(\beta^{{}^{\prime}}\frac{P}{2}\bar{m}^{ P-1}+\beta^{{}^{\prime}}z\sqrt{\alpha\frac{P}{2}\bar{q}^{P-1}}\right) \tag{33}\] with \(\beta^{{}^{\prime}}:=2\beta/P!\), where \(\beta\) is the inverse temperature and \(\alpha=\lim_{N\rightarrow\infty}K/N^{P-1}\) is the network load. \(\mathbb{E}\) is the average w.r.t. 
the standard Gaussian random variable \(z\), and \(\bar{q}\) and \(\bar{m}\) satisfy the self-consistency equations: \[\bar{m}=\mathbb{E}\tanh\left(\beta^{{}^{\prime}}\frac{P}{2}\bar{m}^{P-1}+\beta^{{}^{\prime}}J\sqrt{\alpha\frac{P}{2}\bar{q}^{P-1}}z\right),\] \[\bar{q}=\mathbb{E}\tanh^{2}\left(\beta^{{}^{\prime}}\frac{P}{2}\bar{m}^{P-1}+\beta^{{}^{\prime}}J\sqrt{\alpha\frac{P}{2}\bar{q}^{P-1}}z\right). \tag{11}\] On the other hand, the quenched free-energy within the 1RSB approximation (see [4]) reads as \[-\beta^{{}^{\prime}}f_{1RSB}(\beta^{{}^{\prime}},\alpha,\theta,\bar{m},\bar{q}_{0},\bar{q}_{1})= \ln 2-\frac{\beta^{{}^{\prime}}{}^{2}}{2}(P-1)\bar{m}^{P}+\frac{\beta^{{}^{\prime}}{}^{2}\alpha}{4}\left[1-\theta\bar{q}_{0}^{P}+(\theta-1)\bar{q}_{1}^{P}\right]\] \[+\frac{\beta^{{}^{\prime}}{}^{2}\alpha P}{4}\bar{q}_{1}^{P-1}-\frac{\beta^{{}^{\prime}}{}^{2}}{4}P\alpha\left[(\theta-1)\bar{q}_{1}^{P}-\theta\bar{q}_{0}^{P}\right]\] \[+\frac{1}{\theta}\mathbb{E}_{1}\ln\mathbb{E}_{2}\cosh^{\theta}g(\beta^{{}^{\prime}},\alpha,\bar{m},\bar{q}_{0},\bar{q}_{1}) \tag{12}\] where \(\mathbb{E}_{1}\), \(\mathbb{E}_{2}\) are the averages w.r.t. the standard normal random variables \(z^{(1)}\) and \(z^{(2)}\), respectively, and \[g(\beta^{{}^{\prime}},\alpha,\bar{m},\bar{q}_{0},\bar{q}_{1})= \frac{\beta^{{}^{\prime}}P}{2}\bar{m}^{P-1}+\beta^{{}^{\prime}}z^{(1)}\sqrt{\frac{P}{2}\alpha\bar{q}_{0}^{P-1}}+\beta^{{}^{\prime}}z^{(2)}\sqrt{\frac{P}{2}\alpha\left(\bar{q}_{1}^{P-1}-\bar{q}_{0}^{P-1}\right)}. \tag{13}\] From now on, we imply the dependence of the functions on \(\beta^{{}^{\prime}},\ \alpha\) and \(\bar{m}\). In this approximation the self-consistency equations for the order parameters \(\bar{q}_{1}\), \(\bar{q}_{0}\) and \(\bar{m}\) are as in (15), with the argument of the hyperbolic cosine and tangent replaced by (13). As before, for \(\theta=0\), \(\bar{q}_{1}=\bar{q}\) and the 1RSB expression for the quenched free-energy reduces to the RS one. Our objective is to prove that the 1RSB quenched free-energy is smaller than its replica-symmetric counterpart, i.e. \(f_{1RSB}(\beta^{{}^{\prime}},\alpha,\theta)<f_{RS}(\beta^{{}^{\prime}},\alpha)\), above a critical value of the effective parameter \(\sqrt{\alpha}\beta^{\prime}\). To this purpose, we proceed as in the Hopfield model: we expand, to the leading order in \(\theta\), the 1RSB quenched free-energy around its RS expression, as shown in (18). Since the self-consistency equations also depend on \(\theta\), we need to expand them too. Following the same steps as in the Hopfield model, we can write \(\bar{q}_{1}\) as in (25), with \(A(\bar{q},\tilde{q}_{0}(\bar{q}))\) given in (10), and \(\bar{q}_{0}\) as given in (24), where \(\tilde{q}_{0}(\bar{q})\) is the solution of the self-consistency equation \[\bar{q}_{0}=\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g(\bar{q},\bar{q}_{0})\right)^{2} \tag{14}\] and \(B(\bar{q},\tilde{q}_{0}(\bar{q}))\) is given in (10). With the above expressions in hand, we can now calculate the derivative of \(f_{1RSB}\) w.r.t. \(\theta\) when \(\theta=0\), as needed in (18): \[K(\bar{q},\tilde{q}_{0}(\bar{q})):=\partial_{\theta}(-\beta^{{}^{\prime}}f_{1\text{RSB}}(\theta,\bar{q}_{1},\bar{q}_{0}))|_{\theta=0}\] \[=-\frac{\alpha\beta^{{}^{\prime}}{}^{2}}{4}(P-1)(\bar{q}^{P}-\tilde{q}_{0}^{P}(\bar{q}))+\frac{1}{2}\mathbb{E}_{1}\mathbb{E}_{2}\ln^{2}\cosh g(\bar{q},\tilde{q}_{0}(\bar{q}))-\frac{1}{2}\mathbb{E}_{1}\left(\mathbb{E}_{2}\ln\cosh g(\bar{q},\tilde{q}_{0}(\bar{q}))\right)^{2} \tag{15}\] Again, we have that \(K(\bar{q},\bar{q})=0\) (this follows from the fact that for \(\theta=0\), \(\bar{q}\) is an extremum of the free-energy). Next, we inspect the sign of \(K(\bar{q},\tilde{q}_{0}(\bar{q}))\). To this purpose, we study \(K(\bar{q},x)\) for \(x\in[0,\bar{q}]\) and locate its extrema, which are found from \[\partial_{x}K(\bar{q},x)= \frac{\beta^{{}^{\prime}}{}^{2}\alpha P(P-1)}{4}x^{P-2}\left[x-\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g(\bar{q},x)\right)^{2}\right]=0 \tag{16}\] as \[x=\mathbb{E}_{1}\left(\mathbb{E}_{2}\tanh g(\bar{q},x)\right)^{2}\equiv\tilde{q}_{0}(\bar{q}) \tag{11}\] where the last equality follows from (10). Under the assumption that the extremum \(x=\tilde{q}_{0}(\bar{q})\) is global in the domain considered and reasoning as in the Hopfield case, we have that \(K(\bar{q},\tilde{q}_{0}(\bar{q}))>0\) if \(x=\tilde{q}_{0}(\bar{q})\) is a maximum and \(K(\bar{q},\tilde{q}_{0}(\bar{q}))<0\) if it is a minimum. In particular, if \[\partial_{x}^{2}K(\bar{q},x)|_{x=\tilde{q}_{0}(\bar{q})}=-\frac{\beta^{{}^{\prime}}{}^{2}\alpha P(P-1)}{4}\tilde{q}_{0}(\bar{q})^{P-2}\left\{1-\frac{\beta^{{}^{\prime}}{}^{2}\alpha P(P-1)\tilde{q}_{0}^{P-2}}{2}\mathbb{E}_{1}\left[\mathbb{E}_{2}\frac{1}{\cosh^{2}g(\bar{q},\tilde{q}_{0}(\bar{q}))}\right]^{2}\right\} \tag{12}\] is negative, \(K(\bar{q},\tilde{q}_{0}(\bar{q}))>0\) and \(f_{1RSB}<f_{RS}\). This happens when the expression in the curly brackets of the equation above is negative, i.e. when the parameter \(\alpha\beta^{{}^{\prime}}{}^{2}\) satisfies the inequality \[\frac{\alpha\beta^{{}^{\prime}}{}^{2}P(P-1)\tilde{q}_{0}^{P-2}}{2}\mathbb{E}_{1}\left\{\mathbb{E}_{2}\left[\frac{1}{\cosh^{2}g(\bar{q},\tilde{q}_{0})}\right]\right\}^{2}>1. \tag{13}\] As noted in [4; 9], Hebbian networks with \(P\)-node interactions are equivalent to Ising \(P\)-spin models under a suitable rescaling of the temperature, \(\beta^{{}^{\prime}}\sqrt{\alpha}\to\beta^{\prime}\). With this rescaling, (13) indeed retrieves the RS instability line of the Ising P-spin model, that we have for completeness derived in the Appendix using our method (see eq. (102)). In the limit \(\tilde{q}_{0}(\bar{q})\to\bar{q}\), (102) retrieves the AT line of the Ising \(P\)-spin model [18]; however, we note that for the Ising \(P\)-spin model and for Hebbian networks with \(P\)-node interactions, \(\tilde{q}_{0}(\bar{q})\) differs from \(\bar{q}\), with deviations getting more pronounced as \(P\) is increased, hence this limit cannot be justified. In Fig. 1 (top panels) we plot the difference between \(\tilde{q}_{0}(\bar{q})\) and \(\bar{q}\) (obtained solving numerically the self-consistency equations (23) and (7) for the Hopfield model, and equations (10) and (11) for Hebbian networks with multi-node interactions), as a function of the scaled parameter \(T/\sqrt{\alpha}\), for different values of \(\alpha\). In networks with multi-node interactions, differences can be appreciated at the onset of the RS instability for \(P=4\) and become significant for higher values of \(P\). 
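For readers who wish to reproduce curves of this kind, the fixed points of (7) and (23) and the criterion (30) for the Hopfield model can be evaluated with elementary numerics. The sketch below is our own (not the authors' code): it uses Gauss-Hermite quadrature for the Gaussian averages, damped fixed-point iteration, and treats \(\bar{m}\) as given (e.g. \(\bar{m}=0\) in the spin-glass regime):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

z, w = hermegauss(80)
w = w / w.sum()                          # weights for E[f(z)], z ~ N(0,1)
Z1, Z2 = z[:, None], z[None, :]          # grids for the double average E1 E2

def solve_q(alpha, beta, m=0.0, iters=5000, damp=0.5):
    """RS overlap q-bar from Eq. (7); the damping keeps the iteration stable."""
    q = 0.999
    for _ in range(iters):
        h = beta * m + beta * np.sqrt(alpha * q) / (1 - beta * (1 - q)) * z
        q = damp * np.dot(w, np.tanh(h) ** 2) + (1 - damp) * q
    return q

def g0_grid(alpha, beta, m, qbar, q0):
    """g0 of Eq. (19) on the (z1, z2) quadrature grid."""
    D1 = 1 - beta * (1 - qbar)
    return (beta * m + beta * np.sqrt(alpha * q0) / D1 * Z1
            + beta * np.sqrt(alpha * max(qbar - q0, 0.0)) / D1 * Z2)

def solve_q0(alpha, beta, qbar, m=0.0, iters=5000, damp=0.5):
    """q0-tilde(q-bar) from Eq. (23)."""
    q0 = 0.5 * qbar
    for _ in range(iters):
        inner = np.tanh(g0_grid(alpha, beta, m, qbar, q0)) @ w  # E2 tanh g0
        q0 = damp * np.dot(w, inner ** 2) + (1 - damp) * q0
    return q0

def rs_unstable(alpha, beta, m=0.0):
    """RS instability criterion, Eq. (30)."""
    qbar = solve_q(alpha, beta, m)
    q0 = solve_q0(alpha, beta, qbar, m)
    inner = (np.cosh(g0_grid(alpha, beta, m, qbar, q0)) ** -2) @ w
    return (1 - beta * (1 - qbar)) ** 2 < beta ** 2 * alpha * np.dot(w, inner ** 2)

# example: rs_unstable(alpha=0.05, beta=1/0.5)
```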
In the bottom panels we show the RS instability lines resulting from our method and the AT lines obtained in the limit \(\tilde{q}_{0}(\bar{q})\to\bar{q}\). ## 4 Discussion In this work we proposed a simple and systematic method to derive the critical line in the parameter space \((\alpha,\beta)\), below which the 1RSB expression for the free-energy is smaller than the RS expression, in Hebbian neural networks. The same analysis for spin-glass models is carried out in the appendix. For the Hopfield model, our approach recovers the critical line obtained by Coolen using the AT approach [13] as a special limit. Similarly, we recover the known AT lines of all the spin-glass models considered in the appendix, in the same limit, showing that our method provides a generalization of the approach originally devised by de Almeida and Thouless. Owing to its simplicity, our method allows for straightforward application to Hebbian networks with multi-node interactions, for which the AT line was unknown. The key idea of our method is to regard the 1RSB theory, which assumes two delta-peaks in the overlap distribution \(P(q)\), located at \(\bar{q}_{1}\) and \(\bar{q}_{0}<\bar{q}_{1}\), with weights \(1-\theta\) and \(\theta\), respectively, as departing continuously from the RS theory, which assumes only one peak at \(\bar{q}\). This leads us to assume that at the onset of the RS instability, the 1RSB overlap distribution is dominated by one peak, so that either \(\theta\to 0\) or \(\theta\to 1\). Then, consistency with the RS theory requires either \(\lim_{\theta\to 0}\bar{q}_{1}\!=\!\bar{q}\) or \(\lim_{\theta\to 1}\bar{q}_{0}\!=\!\bar{q}\). As \(\bar{q}_{1}\) and \(\bar{q}_{0}\) are determined from the self-consistency equations of the 1RSB theory, for each value of \(\theta\), it is straightforward to determine which scenario applies. Noting that for the Hopfield model and Hebbian networks with \(P\)-node interactions, one has \(\bar{q}_{1}\!=\!\bar{q}\) for \(\theta\!=\!0\), we regard the 1RSB theory as a continuous variation of the RS theory, when the parameter \(\theta\) is increased from zero. Crucially, we do not make any assumption on the location of the peaks of the 1RSB theory, which are fixed by the 1RSB self-consistency equations. We then compare the 1RSB and the RS free-energies when \(\theta\) (the only free parameter in our analysis) is close to zero, by performing simple expansions to linear orders in \(\theta\). In doing so, we solely require that \(f_{1RSB},\bar{q}_{1},\bar{q}_{0}\) are differentiable up to the first order in a neighborhood of \(\theta\!=\!0\) and that the derivative of \(f_{1RSB}\) exists at \(\theta=0\). Although our method is similar in spirit to the one introduced by Toninelli in [31], there is a crucial difference, in that the latter relies on the assumption \(\bar{q}_{0}\to\bar{q}\), which is, in our view, unjustified a priori. In fact, while \(\bar{q}_{1}=\bar{q}\) for \(\theta=0\), \(\bar{q}_{0}\) may differ from \(\bar{q}\), even in the limit \(\theta\to 0\). This consideration also leads to a departure of our approach from the method originally devised by de Almeida and Thouless, which relies on a variation of the RS free-energy as the _order parameters_ are varied continuously around their RS values. In contrast, we study the variation of the RS free-energy as the _statistical weight_ of the order parameters is varied continuously (rather than the actual value of the order parameters). 
This approach allows us to determine the instability line of the RS theory also in spin-glass models which exhibit a discontinuous phase transition, for which \(\bar{q}_{1}\!\neq\!\bar{q}_{0}\) even at \(T_{c}\). As a prototypical example of this class of models, we consider in the Appendix the spherical \(P\)-spin model [15]. For this model, the 1RSB self-consistency equations show that \(\lim_{\theta\to 1}\bar{q}_{0}\!=\!\bar{q}\) and the RS instability line can be found by comparing the 1RSB and the RS free-energies when \(\theta\) is close to one. Although \(\lim_{\theta\to 0}\bar{q}_{1}\!=\!\bar{q}\)_as well_ in this model, one can show that a similar analysis for \(\theta\) close to zero would give a lower temperature for the RS instability, hence the instability that occurs at the higher temperature (i.e. for \(\theta\!\simeq\!1\)) is the physical one. Figure 1: Top panels: Difference between \(\tilde{q}_{0}(\bar{q})\) and \(\bar{q}\) versus the scaled parameter \(T/\sqrt{\alpha}\), for different values of \(\alpha\) (as shown in the legend), for the Hopfield model (left) and Hebbian networks with \(P\)-node interactions, for \(P=4\) (mid) and \(P=6\) (right). Dotted lines show the onset of the RS instability. Bottom panels: The corresponding RS instability line (i.e. \(T_{c}\) versus \(\alpha\)) obtained via our method (red curve) and the AT line obtained in the limit \(\tilde{q}_{0}(\bar{q})\to\bar{q}\) (blue curve), for the same models. Discrepancies increase with \(P=2,4,6\) from left to right. The black curves show the critical temperature \(T^{\star}\) at which \(\bar{q}\) becomes non-zero within the RS theory, i.e. the spin-glass (SG) transition within the RS theory. This work is supported by Ministero degli Affari Esteri e della Cooperazione Internazionale (MAECI) via the BULBUL grant (Italy-Israel), CUP Project n. F85F21006230001. L.A. acknowledges the E. Zegna Founder's Scholarship and UMI (Unione Matematica Italiana) for financial support and the Department of Mathematics at King's College London for the kind hospitality. L.A. and A.B. acknowledge INDAM (Istituto Nazionale d'Alta Matematica) for support. All the authors acknowledge the stimulating research environment provided by the Alan Turing Institute's Theory and Methods Challenge Fortnights event "Physics-informed Machine Learning".
2305.01794
MISNN: Multiple Imputation via Semi-parametric Neural Networks
Multiple imputation (MI) has been widely applied to missing value problems in biomedical, social and econometric research, in order to avoid improper inference in the downstream data analysis. In the presence of high-dimensional data, imputation models that include feature selection, especially $\ell_1$ regularized regression (such as Lasso, adaptive Lasso, and Elastic Net), are common choices to prevent the model from underdetermination. However, conducting MI with feature selection is difficult: existing methods are often computationally inefficient and poor in performance. We propose MISNN, a novel and efficient algorithm that incorporates feature selection for MI. Leveraging the approximation power of neural networks, MISNN is a general and flexible framework, compatible with any feature selection method, any neural network architecture, high/low-dimensional data and general missing patterns. Through empirical experiments, MISNN has demonstrated great advantages over state-of-the-art imputation methods (e.g. Bayesian Lasso and matrix completion), in terms of imputation accuracy, statistical consistency and computation speed.
Zhiqi Bu, Zongyu Dai, Yiliang Zhang, Qi Long
2023-05-02T21:45:36Z
http://arxiv.org/abs/2305.01794v1
# MISNN: Multiple Imputation via Semi-parametric Neural Networks ###### Abstract Multiple imputation (MI) has been widely applied to missing value problems in biomedical, social and econometric research, in order to avoid improper inference in the downstream data analysis. In the presence of high-dimensional data, imputation models that include feature selection, especially \(\ell_{1}\) regularized regression (such as Lasso, adaptive Lasso, and Elastic Net), are common choices to prevent the model from underdetermination. However, conducting MI with feature selection is difficult: existing methods are often computationally inefficient and poor in performance. We propose MISNN, a novel and efficient algorithm that incorporates feature selection for MI. Leveraging the approximation power of neural networks, MISNN is a general and flexible framework, compatible with any feature selection method, any neural network architecture, high/low-dimensional data and general missing patterns. Through empirical experiments, MISNN has demonstrated great advantages over state-of-the-art imputation methods (e.g. Bayesian Lasso and matrix completion), in terms of imputation accuracy, statistical consistency and computation speed. Keywords:Missing value, Imputation, Semi-supervised Learning ## 1 Introduction ### Missing Value Mechanisms and Imputation Missing data are commonly encountered in data analyses. It is well-known that inadequate handling of missing data can lead to biased findings, improper statistical inference [11, 37] and poor prediction performance. One of the effective remedies is missing data imputation. Existing imputation methods can be mainly classified as single imputation (SI) and multiple imputation (MI) [26]. The former imputes missing values only once while the latter generates imputation values multiple times from some distribution. In fields such as finance and medical research, linear models are often preferred as it is important to not only predict accurately but also explain the uncertainty of the prediction and the effect of features. In the interest of statistical inference, MI methods, including MISNN proposed in this paper, are more suitable as they adequately account for imputation uncertainty and provide proper inference. In general, performances of imputation are highly related to the mechanisms that generate missing values, which can be categorized into three types: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). Missing data are said to be MCAR if the probability of being missing is the same for all entries; MAR means that the missing probability only depends on the observed values; MNAR means that the missing probability depends on the unobserved missing values. Intuitively, imputation is easier under MCAR mechanisms as the missing probability is only a (unknown) constant, and therefore most methods are designed to work under MCAR. However, MAR and MNAR are usually more difficult and fewer methods perform well on these problems. ### Feature Selection in Imputation Models In many applications including gene expression and financial time series research, we need to analyze high dimensional data with number of features being much larger than number of samples. In such cases, multiple imputation, which estimates the (conditional) distribution of missing data, can be inaccurate due to the overwhelming amount of features. 
Existing works [11, 37] propose to use regularized linear models for feature selection before building the imputation model. Some representative models include Lasso [31], SLOPE [4], Elastic Net [39], Adaptive Lasso [38], Sparse Group Lasso [13, 27], etc. While the regularized linear models successfully reduce the number of features, they often fail to capture the true distribution of missing data, due to the linear dependence on the selected features and the information loss in the unselected features when building the imputation model. Hence, the corresponding inference can be significantly biased. MISNN, proposed in this paper, overcomes this shortcoming via semi-parametric neural networks. At a high level, MISNN is a semi-parametric model based on neural networks, which divides predictors into two sets: the first set is used to build a linear model and the other is used to build neural networks, which are often regarded as non-parametric models. We highlight that the strong performance of MISNN is driven by both its neural network and its linear parts. The neural networks effectively capture the non-linear relationship in the imputation model, and the linear model, in addition to capturing the linear relationships, allows efficient MI through maximum likelihood estimation for the regression parameters. ### 1.3 Our Contribution This paper makes two contributions. Firstly, we propose MISNN, a novel imputation method that outperforms state-of-the-art imputation methods in terms of imputation accuracy, statistical consistency, and computation speed. MISNN is easy to tune, interpretable, and robust to high missing rates and high-dimensional features. Secondly, MISNN is a flexible imputation framework that can be used with any appropriate feature selection method, such as Lasso and forward selection. Additionally, MISNN is compatible with any neural network, including under- or over-parameterized networks, CNN, ResNet, dropout, and more. ## 2 Related Work Regarding missing data imputation, SI methods have a long history predating the concept of MI [26]; one representative approach is mean imputation. Recent work in SI includes matrix completion approaches that translate the imputation into an optimization problem. Existing methods such as SoftImpute [23] and MMMF (Maximum-Margin Matrix Factorization) [28] provably work under MCAR mechanisms. Meanwhile, an increasing number of MI methods are studied: MICE [32, 5] imputes missing values through the chained equations; MissForest [30] imputes missing values via bootstrap aggregation of multiple trees. Deep generative models [34, 14, 22, 19, 9], including Generative Adversarial Imputation Nets (GAIN), have also been proposed for imputation. We remark that most of the existing methods only provably work under MCAR (though some methods empirically work well under MAR). Regularized linear models have been proposed for MI in high-dimensional data. Bayesian Lasso [24, 16] estimates the posterior distribution of coefficients, while alternative approaches [37, 11] de-bias the estimator from the regularized linear regression, namely the direct use of regularized regression (DURR) and the indirect use of regularized regression (IURR). However, linear imputation models fail to capture the potential non-linear relations in the conditional distribution of missing data. MISNN falls into this line of research, is computationally more efficient than Bayesian Lasso, and captures non-linear relations during imputation. Figure 1: MISNN framework. 
Recent work has highlighted the importance of trustworthiness in missing data imputation, with privacy-preserving [18, 10, 8] and fairness-aware [21, 36, 6, 35] imputation models drawing attention. MISNN has strong interpretability, allowing for better understanding of the imputation process and greater trust in the results. ## 3 Data Setup Denote the data matrix by \(\mathbf{D}\in\mathbb{R}^{n\times p}\), where \(n\) is the number of samples/cases and \(p\) is the number of features/variables. We define the \(j\)-th feature by \(\mathbf{D}_{j}\) and its complement features by \(\mathbf{D}_{-j}\) for \(j\in[p]\) (e.g. \(\mathbf{D}_{-1}=\mathbf{D}_{2:p}\)). In the presence of missing data, \(\mathbf{D}\) can be separated into two submatrices \(\mathbf{D}_{\mathrm{cc}}\) and \(\mathbf{D}_{\mathrm{ic}}\), where \(\mathbf{D}_{\mathrm{cc}}\) denotes all complete cases (i.e. all features are observed) and \(\mathbf{D}_{\mathrm{ic}}\) denotes all incomplete cases. We let \(\mathbf{D}_{\mathrm{cc},j}\) and \(\mathbf{D}_{\mathrm{ic},j}\) denote the \(j\)-th feature of complete cases and incomplete cases, respectively. We also define \(\mathbf{D}_{\mathrm{miss}}\), the set of missing features in \(\mathbf{D}\), and \(\mathbf{D}_{\mathrm{obs}}\), the set of observed features for samples in \(\mathbf{D}\). Briefly speaking, to impute the missing values, we fit an imputation model \(g\) using \(\mathbf{D}_{\mathrm{obs}}\), and use \(\mathbf{D}_{\mathrm{ic},\mathrm{obs}}\) as input to give the imputation result \(\hat{\mathbf{D}}_{\mathrm{miss}}\). For ease of presentation, we start with a single feature missing, in which case only the first column in \(\mathbf{D}\) (i.e., \(\mathbf{D}_{1}\)) contains missing values. We then move on to the general missing pattern with multiple features missing in Section 4.4. ### 3.1 A Framework for Multiple Imputation Here we provide a brief discussion of a general framework for multiple imputation, which is also adopted in MISNN. Under the above data setting, MI methods estimate the conditional distribution \(\rho(\mathbf{D}_{\mathrm{miss}}|\mathbf{D}_{\mathrm{obs}})\) and sample imputed values from it multiple times. Assuming the distribution of \(\mathbf{D}\) is characterized by unknown parameters \(\boldsymbol{\xi}\), then \[\rho(\mathbf{D}_{\mathrm{miss}}|\mathbf{D}_{\mathrm{obs}})=\int\rho_{1}(\mathbf{D}_{\mathrm{miss}}|\mathbf{D}_{\mathrm{obs}},\boldsymbol{\xi})\rho_{2}(\boldsymbol{\xi}|\mathbf{D}_{\mathrm{obs}})d\boldsymbol{\xi}\] in which \(\rho,\rho_{1},\rho_{2}\) are three conditional distributions. For the \(m\)-th imputation, we randomly sample \(\boldsymbol{\xi}^{(m)}\) from the posterior distribution of \(\boldsymbol{\xi}\), i.e. \(\rho_{2}(\boldsymbol{\xi}|\mathbf{D}_{\mathrm{obs}})\); we then generate the \(m\)-th imputed data \(\mathbf{D}_{\mathrm{miss}}^{(m)}\) from the predictive distribution \(\rho_{1}(\mathbf{D}_{\mathrm{miss}}|\mathbf{D}_{\mathrm{obs}},\boldsymbol{\xi}^{(m)})\). With multiple imputed datasets, further analysis and inference can be conducted with the help of Rubin's rule [20, 26]. A detailed introduction of Rubin's rule is provided in Appendix 0.A. ## 4 Multiple Imputation with Semi-parametric Neural Network (MISNN) At a high level, MISNN imputes the missing data in each column through a partial linear model (PLM), which takes the form \[\hat{\mathbf{D}}_{1}=\mathbf{X}\hat{\beta}+\hat{f}(\mathbf{T})\] where \((\mathbf{X},\mathbf{T})\), determined through feature selection, is a partition of the remaining \(p-1\) columns. 
While \(\hat{\beta}\) and \(\hat{f}(\mathbf{T})\) can be chosen in an arbitrary manner, we adopt a partialling-out approach [25] (also known as orthogonalization in [7]) that can provide consistent parameter estimation if the true model takes the form \(\mathbf{D}_{1}=\mathbf{X}\beta+f(\mathbf{T})+\boldsymbol{\epsilon}\). To do so, we take the conditional expectation on \(\mathbf{T}\), assuming \(\mathbb{E}(\boldsymbol{\epsilon}|\mathbf{T})=0\): \[\begin{split}\mathbf{D}_{1}&=\mathbf{X}\boldsymbol{\beta}+f(\mathbf{T})+\boldsymbol{\epsilon}\\ \mathbb{E}(\mathbf{D}_{1}|\mathbf{T})&=\mathbb{E}(\mathbf{X}|\mathbf{T})\boldsymbol{\beta}+f(\mathbf{T})\\ \mathbf{D}_{1}-\mathbb{E}(\mathbf{D}_{1}|\mathbf{T})&=\left(\mathbf{X}-\mathbb{E}(\mathbf{X}|\mathbf{T})\right)\boldsymbol{\beta}+\boldsymbol{\epsilon}\end{split} \tag{1}\] Let \(\mathcal{S}\) denote the set of selected features. Notice that \(\mathbf{T}:=\mathbf{D}_{-1}\setminus\mathbf{D}_{\mathcal{S}}\) is explicitly removed in the last equation. Therefore, if the number of selected features can be controlled (i.e., \(|\mathcal{S}|\) is small), we are left with a low-dimensional linear model (as \(\mathbf{X}-\mathbb{E}(\mathbf{X}|\mathbf{T})\in\mathbb{R}^{n\times|\mathcal{S}|}\)), as long as we can estimate the mappings \(\mathbb{E}(\mathbf{D}_{1}|\mathbf{T})\) and \(\mathbb{E}(\mathbf{X}|\mathbf{T})\) properly. To realize the above approach, the MISNN algorithm takes three key steps: * **Feature Selection:** During imputation of each missing feature, MISNN conducts feature selection to select at most \(n\) features. The selected features \(\mathbf{X}\) are expected to have significant linear correlation with the missing feature, which later will be fitted in a linear model (e.g., least squares). * **Fitting a Partially Linear Model:** Suppose the remaining features after the selection are denoted by \(\mathbf{T}\); MISNN fits two neural networks to learn \(\mathbb{E}(\mathbf{D}_{\text{miss}}|\mathbf{T})\) and \(\mathbb{E}(\mathbf{X}|\mathbf{T})\), so as to derive a low-dimensional ordinary linear model (1); * **Multiple Imputation:** MISNN uses maximum likelihood to estimate parameters in (1), then draws \(M\) times from the posterior distribution of \(\hat{\boldsymbol{\beta}}\) and further draws \(\widehat{\mathbf{D}}_{\text{miss}}\) from the predictive distribution. Note that the first two steps in combination are closely related to DebiNet [33], though we do not restrict ourselves to over-parameterized neural networks, and we utilize two neural networks to learn \((\mathbb{E}(\mathbf{D}_{\text{miss}}|\mathbf{T}),\mathbb{E}(\mathbf{X}|\mathbf{T}))\). In the following, we introduce MISNN in Algorithm 1 and validate the procedure of MISNN rigorously. Here we assume the missing feature is continuous; for non-continuous features, some modifications to the algorithm should be made (see details in Appendix 0.B). **Remark 1**: _If one only focuses on the prediction, not the inference, single imputation can be conducted in Algorithm 1. 
In particular, OLS can solve the linear model in step (4) and we impute by \[\widehat{\mathbf{D}}_{\mathrm{ic},1}=\left(\mathbf{X}_{\mathrm{ic}}-\mathbb{E}(\mathbf{X}_{\mathrm{ic}}|\mathbf{T}_{\mathrm{ic}})\right)\hat{\boldsymbol{\beta}}+\mathbb{E}(\mathbf{D}_{\mathrm{ic},1}|\mathbf{T}_{\mathrm{ic}})\] _We name the imputation algorithm as SISNN (see Algorithm 4 in Appendix 0.B)._ ### 4.1 Sampling from Posterior and Predictive Distributions To conduct multiple imputation in MISNN, we need to sample the parameters from the posterior distribution \(\rho_{2}\left(\boldsymbol{\beta},\sigma^{2}\Big{|}\mathbf{D}_{\text{obs},1},\mathbf{X}_{\text{obs}},\mathbf{T}_{\text{obs}}\right)\) and the predictive distribution \(\rho_{1}\left(\mathbf{D}_{\mathrm{miss},1}|\mathbf{X}_{\mathrm{miss}},\mathbf{T}_{\mathrm{miss}},\hat{\boldsymbol{\beta}}^{(m)},\hat{\boldsymbol{\sigma}}^{(m)^{2}}\right)\) in MISNN (c.f. Algorithm 1). With the partialling out, we fit a linear regression at step (4), \[\mathbf{D}_{\mathrm{obs},1}-\mathbb{E}(\mathbf{D}_{\mathrm{obs},1}|\mathbf{T}_{\mathrm{obs}})=\left(\mathbf{X}_{\mathrm{obs}}-\mathbb{E}(\mathbf{X}_{\mathrm{obs}}|\mathbf{T}_{\mathrm{obs}})\right)\boldsymbol{\beta}+\boldsymbol{\epsilon}\] We approximate the posterior distribution of \(\boldsymbol{\beta},\sigma\) using \[\rho_{2}\left(\boldsymbol{\beta},\sigma^{2}\Big{|}\mathbf{D}_{\mathrm{obs},1},\mathbf{X}_{\mathrm{obs}},\mathbf{T}_{\mathrm{obs}}\right)=f_{1}\left(\boldsymbol{\beta}\Big{|}\mathbf{D}_{\mathrm{obs},1},\mathbf{X}_{\mathrm{obs}},\mathbf{T}_{\mathrm{obs}}\right)\times f_{2}\left(\sigma^{2}\Big{|}\mathbf{D}_{\mathrm{obs},1},\mathbf{X}_{\mathrm{obs}},\mathbf{T}_{\mathrm{obs}}\right)\] Suppose the OLS estimate for \(\boldsymbol{\beta}\) and its variance are \(\bar{\boldsymbol{\beta}}\) and \(\Sigma_{\boldsymbol{\beta}}\), respectively. We can approximate the distribution of \(\boldsymbol{\beta}\) by a normal distribution: \[f_{1}\left(\boldsymbol{\beta}\Big{|}\mathbf{D}_{\mathrm{obs},1},\mathbf{X}_{\mathrm{obs}},\mathbf{T}_{\mathrm{obs}}\right)\sim\mathcal{N}\left(\bar{\boldsymbol{\beta}},\Sigma_{\boldsymbol{\beta}}\right)\] where the parameters are defined as: \[\bar{\boldsymbol{\beta}}=\mathrm{argmin}_{\mathbf{b}}\|\mathbf{D}_{\mathrm{obs},1}-\eta_{D}(\mathbf{T}_{\mathrm{obs}})-[\mathbf{X}_{\mathrm{obs}}-\eta_{X}(\mathbf{T}_{\mathrm{obs}})]\mathbf{b}\|^{2}\] \[\Sigma_{\boldsymbol{\beta}}=\bar{\sigma}^{2}\left((\mathbf{X}_{\mathrm{obs}}-\eta_{X}(\mathbf{T}_{\mathrm{obs}}))^{\top}(\mathbf{X}_{\mathrm{obs}}-\eta_{X}(\mathbf{T}_{\mathrm{obs}}))\right)^{-1}\] Here \(f_{2}\) is approximated by a point mass at \(\bar{\sigma}^{2}\), the mean of squared residuals: \[\bar{\sigma}^{2}=\left\|\mathbf{D}_{\mathrm{obs},1}-\eta_{D}(\mathbf{T}_{\mathrm{obs}})-(\mathbf{X}_{\mathrm{obs}}-\eta_{X}(\mathbf{T}_{\mathrm{obs}}))\bar{\boldsymbol{\beta}}\right\|^{2}/n_{\mathrm{obs}}\] As for drawing from the predictive distribution, we calculate \(\hat{\sigma}^{(m)}\) from the same formula (with \(\bar{\boldsymbol{\beta}}\) substituted by \(\hat{\boldsymbol{\beta}}^{(m)}\)). 
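To make the sampling scheme concrete, here is a minimal numpy sketch of one multiple-imputation round; it is our illustration only, with `eta_D` and `eta_X` standing for the two fitted networks and all other names ours:

```python
import numpy as np

def mi_draw(D1_obs, X_obs, T_obs, X_mis, T_mis, eta_D, eta_X, rng):
    """One MI draw: posterior beta, plug-in sigma, then predictive imputation."""
    r_y = D1_obs - eta_D(T_obs)            # residualized response
    r_X = X_obs - eta_X(T_obs)             # residualized selected features
    beta_bar, *_ = np.linalg.lstsq(r_X, r_y, rcond=None)   # OLS on residuals
    sig2_bar = np.mean((r_y - r_X @ beta_bar) ** 2)        # mean squared residuals
    Sigma = sig2_bar * np.linalg.inv(r_X.T @ r_X)          # posterior covariance
    beta_m = rng.multivariate_normal(beta_bar, Sigma)      # draw beta^(m) from f1
    sig2_m = np.mean((r_y - r_X @ beta_m) ** 2)            # sigma^(m), beta^(m) plugged in
    mean_mis = eta_D(T_mis) + (X_mis - eta_X(T_mis)) @ beta_m
    return mean_mis + rng.normal(0.0, np.sqrt(sig2_m), size=len(mean_mis))
```

Repeating `mi_draw` \(M\) times produces the \(M\) imputed datasets that are subsequently pooled by Rubin's rule.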
At last, we can draw \(\widehat{\mathbf{D}}_{\mathrm{miss},1}^{(m)}\) from \[\rho_{1}\left(\mathbf{D}_{\mathrm{miss},1}|\mathbf{X}_{\mathrm{miss}},\mathbf{T}_{\mathrm{miss}},\hat{\boldsymbol{\beta}}^{(m)},\hat{\sigma}^{(m)2}\right)=\eta_{D}(\mathbf{T}_{\mathrm{miss}})+(\mathbf{X}_{\mathrm{miss}}-\eta_{X}(\mathbf{T}_{\mathrm{miss}}))\hat{\boldsymbol{\beta}}^{(m)}+\mathcal{N}(0,\hat{\sigma}^{(m)2})\] ### Flexibility of MISNN Framework Again, we highlight that the MISNN framework is flexible in two ways: it can incorporate an arbitrary feature selection method and arbitrary neural network models during imputation. MISNN can incorporate an arbitrary feature selection method. Here, we adopt Lasso to select features \(\mathbf{X}=\mathbf{D}_{\mathcal{S}}\) and \(\mathbf{T}=\mathbf{D}_{-1}\setminus\mathbf{D}_{\mathcal{S}}\), where \(\mathcal{S}=\{i>0:\hat{\alpha}_{i}\neq 0\}\) comes from the non-zero part of the Lasso estimate \[(\hat{\boldsymbol{\alpha}},\hat{\alpha}_{0})=\mathrm{argmin}_{\mathbf{a},a_{0}}\frac{1}{2}\|\mathbf{D}_{\mathrm{cc},1}-\mathbf{D}_{\mathrm{cc},-1}\mathbf{a}-a_{0}\|_{2}^{2}+\lambda\|\mathbf{a}\|_{1}\] MISNN works compatibly with all types of networks. In particular, when equipped with over-parameterized neural networks, MISNN can borrow the results from DebiNet [33, Theorem 1&2] to claim \(\sqrt{n}\)-consistency and exponentially fast convergence. In practice, MISNN can work with a much richer class of neural networks than those theoretically supported in the neural tangent kernel regime [12, 1]. This includes under-parameterized, moderately wide, and deep neural networks. Empirical experiments show that PLMs learned by such neural networks exhibit strong prediction accuracy as well as valid post-selection inference (see Table 2). ### Other Properties of MISNN Here we discuss some properties that MISNN enjoys, besides the flexibility of the framework, the consistent estimation of \(\boldsymbol{\beta}\), and the fast training of the PLM mentioned above. Numerical evidence can be found in Section 5. **Trainability:** MISNN can be trained by existing optimizers in an efficient manner, in comparison to Bayesian Lasso (which may require an expensive burn-in period, see Table 3), bootstrap methods (e.g., DURR, which needs many bootstrapped subsamples to be accurate) or MICE (which fits each feature iteratively and may be slow in high dimensions). **Robustness:** Empirically, MISNN is robust to hyper-parameter tuning (e.g., the width of hidden layers does not affect the performance much). From the data perspective, MISNN still works reasonably well under high feature dimension and high missing rate (e.g., when compared to DURR, IURR and GAIN). ### MISNN for General Missing Patterns The imputation procedure can be naturally extended to the case of general missing patterns, for which the pseudo-code is provided in Algorithm 3 in Appendix 0.D. Suppose the first \(K\) columns are missing in \(\mathbf{D}\), denoted as \(\mathbf{D}_{\mathrm{full},[K]}\), and the \(k\)-th column is denoted by \(\mathbf{D}_{\mathrm{full},k}\). The set \(-[K]\) represents all other columns except those in \([K]\). Similar to the case of single-column missingness, to construct a partially linear model we need to partition the data into \(\mathbf{X}\) and \(\mathbf{T}\). We fit a regularized linear regression for each of the \(K\) columns that have missing values and obtain \(K\) active sets (a minimal sketch is given below).
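As a rough illustration of this per-column selection (scikit-learn's Lasso on the complete cases; the column bookkeeping is simplified and all names are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

def active_sets(D_cc, K, lam=0.1):
    """Regress each of the K missing columns on the remaining columns
    (complete cases only) with Lasso and record its active set."""
    sets = []
    for k in range(K):
        y = D_cc[:, k]
        Z = np.delete(D_cc, k, axis=1)          # all other columns
        coef = Lasso(alpha=lam).fit(Z, y).coef_
        cols = [j for j in range(D_cc.shape[1]) if j != k]
        sets.append({cols[j] for j in np.flatnonzero(coef)})
    return sets  # merged via set.union(*sets) or set.intersection(*sets)
```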
Then we propose to use either intersection or union to combine the sets into a single one, which will be treated as \(\mathbf{X}\). To estimate the parameters \(\boldsymbol{\beta}\), during each imputation we consider, for the \(k\)-th column, an OLS model that uses \(\mathbf{D}_{\mathrm{full},[K]}\) as regressors and the \(k\)-th column as response. Maximum likelihood techniques are adopted to generate the regression coefficients \(\boldsymbol{\beta}_{k}\). We remark that other proper feature selection methods and set-merging rules can be adopted in place of the ones we use. It is also possible to use an iterative approach, following the idea of MICE, to conduct column-wise imputation. Generalization to the case of discrete missing values can be realized with the help of GPLMs, similar to the discussion in Appendix 0.B. ## 5 Numerical Results We compared MISNN with other state-of-the-art methods on various synthetic and real-world datasets. To establish baselines, we included complete data analysis, complete case analysis, and column mean imputation. We also evaluated two MI methods that incorporate regularized linear models for feature selection in high-dimensional settings: MICE-DURR and MICE-IURR. Additionally, we included MissForest, a MICE approach that uses random forest as the regression model, as well as GAIN, a deep-learning-based imputation method, and two matrix completion methods: SoftImpute and MMMF. More details about our experimental setup and results can be found in Appendix 0.C. In addition to imputation accuracy, we evaluate the performance of imputation models in statistical inference based on the imputed data. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Method & Style & Bias & Imp MSE & Coverage & Seconds & SE & SD \\ \hline Complete Data & - & **0.0027** & - & **0.954** & - & 0.1126 & 0.1150 \\ Complete Case & - & 0.1333 & - & 0.854 & - & 0.1556 & 0.1605 \\ Mean-Impute & SI & 0.1508 & 12.6215 & **0.994** & **0.005** & 0.3268 & 0.1933 \\ \hline MISNN-wide (Lasso) & MI & **-0.0184** & **4.2382** & **0.902** & **0.324** & 0.1438 & 0.1713 \\ MISNN-wide (ElasticNet) & MI & **-0.0134** & **4.2191** & **0.924** & **0.286** & 0.1431 & 0.1641 \\ MISNN-narrow (Lasso) & MI & **-0.0251** & 6.2666 & **0.944** & **0.370** & 0.1816 & 0.1755 \\ MISNN-narrow (ElasticNet) & MI & **-0.0246** & 6.2550 & **0.956** & **0.344** & 0.1818 & 0.1647 \\ MICE-DURR (Lasso) & MI & 0.1815 & 12.6704 & **0.978** & 1.266 & 0.2275 & 0.1196 \\ MICE-DURR (ElasticNet) & MI & 0.1314 & 10.8060 & **0.990** & 0.633 & 0.2241 & 0.1219 \\ MICE-IURR (Lasso) & MI & 0.2527 & 15.7803 & 0.886 & 1.483 & 0.2136 & 0.1150 \\ MICE-IURR (ElasticNet) & MI & 0.2445 & 15.3266 & 0.892 & 0.566 & 0.2153 & 0.1399 \\ MissForest & MI & 0.0579 & 9.6174 & **0.962** & 69.948 & 0.2851 & 0.2609 \\ GAIN & SI & 0.7578 & 27.3505 & 0.289 & 14.812 & 0.2869 & 0.4314 \\ SoftImpute & SI & -0.1432 & **4.6206** & 0.842 & **0.019** & 0.1804 & 0.2005 \\ MMMF & SI & -0.1239 & **4.0956** & 0.782 & 3.385 & 0.1491 & 0.1869 \\ \hline \end{tabular} \end{table} Table 1: Multi-feature missing pattern in synthetic data over 500 Monte Carlo datasets. Bias: mean bias \(\hat{\beta}_{1}-\beta_{1}\); Imp MSE: \(\|\widehat{\mathbf{D}}_{\mathrm{miss},1:3}-\mathbf{D}_{\mathrm{miss},1:3}\|^{2}/n_{\mathrm{miss}}\); Coverage: coverage probability of the 95% confidence interval for \(\beta_{1}\); Seconds: wall-clock imputation time; SE: mean standard error of \(\hat{\beta}_{1}\); SD: Monte Carlo standard deviation of \(\hat{\beta}_{1}\).
Model settings are in Section 5.1 and data generation is left in Appendix 0.C.2. In the experiments, we specify a set of predictors and a response in the data matrix \(\mathbf{D}=(\mathbf{Z},y)\). A linear regression \(\hat{y}=\mathbf{Z}\hat{\boldsymbol{\theta}}\) is fitted using the imputed dataset to predict \(y\), and we record the regression parameters \(\hat{\boldsymbol{\theta}}\). In synthetic datasets, we have access to the ground truth \(\boldsymbol{\theta}\), so we focus on inference performance. In real data analysis, we no longer have access to the true \(\boldsymbol{\theta}\) and focus on the prediction error instead. ### Viewpoint of Statistical Inference In terms of statistical inference, we consider four statistical quantities: bias of \(\hat{\boldsymbol{\theta}}\), coverage rate of the 95% confidence interval (CR) for \(\boldsymbol{\theta}\), mean standard error (SE) for \(\hat{\boldsymbol{\theta}}\), and Monte Carlo standard deviation (SD) of \(\hat{\boldsymbol{\theta}}\). Imputation mean squared error (MSE) is also compared. We study the performance of MISNN under general missing patterns, in which multiple columns (features) in the dataset can contain missing values. We adopt a similar experiment setting to that in [11] and evaluate performance over 500 Monte Carlo datasets. A detailed experiment description can be found in Appendix 0.C. Potentially, one can combine MICE with MISNN for single-column missingness as well. Nevertheless, we avoid doing so by proposing Algorithm 3 in Appendix 0.D, which deals with general missing patterns differently, in a parallel computing fashion. During the experiments, we use different network structures at step (3) of Algorithm 3: MISNN-wide uses two hidden layers with width 500, each followed by ReLU activation, a Batch Normalization layer [17] and a Dropout layer [29] at rate 0.1. The neural networks in MISNN-narrow are the same as in MISNN-wide, except that the hidden layers have width 50 instead. The results are summarized in Table 1. We highlight that all MISNN variants give the smallest estimation bias among the imputation methods. MISNN also achieves satisfactory imputation MSE, statistical coverage, and computation speed. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Method & Style & Estimator & Imp MSE & Seconds & SE & Pred MSE \\ \hline Complete Data & - & **0.0532** & - & - & 0.0676 & **0.8695** \\ Complete Case & - & 0.1278 & - & - & 0.1392 & 1.3376 \\ Mean-Impute & SI & -0.0374 & 1.3464 & **0.006** & 0.0686 & 0.8938 \\ \hline MISNN (Lasso) & MI & **0.0545** & 0.6620 & **1.501** & 0.0681 & **0.8780** \\ MISNN (ElasticNet) & MI & **0.0521** & **0.5140** & **0.861** & 0.0716 & **0.8789** \\ MICE-DURR (Lasso) & MI & **0.0504** & 1.8256 & 3.946 & 0.0508 & **0.8755** \\ MICE-DURR (ElasticNet) & MI & 0.0426 & 1.6998 & 2.709 & 0.0552 & 0.8817 \\ MICE-IURR (Lasso) & MI & 0.0474 & 2.0404 & 4.093 & 0.0476 & **0.8747** \\ MICE-IURR (ElasticNet) & MI & 0.0318 & 2.0219 & 2.620 & 0.0484 & 0.8803 \\ GAIN & SI & 0.0304 & 0.9902 & 67.432 & 0.0504 & **0.8749** \\ SoftImpute & SI & **0.0533** & 0.6667 & **0.0344** & 0.0763 & 0.8808 \\ MMMF & SI & 0.0833 & **0.3051** & 5.0261 & 0.0838 & **0.8755** \\ \hline \end{tabular} \end{table} Table 2: Multi-feature missing pattern in ADNI dataset over 100 repeats.
Estimator: estimated \(\hat{\beta}_{1}\) through OLS using the first 5 features as regressors; Imp MSE: imputation mean squared error \(\|\widehat{\mathbf{D}}_{\text{miss},1:3}-\mathbf{D}_{\text{miss},1:3}\|^{2}/n_{\text{miss}}\); Seconds: wall-clock imputation time; SE: mean standard error of \(\hat{\beta}_{1}\); Pred MSE: mean squared error between \(\mathbf{Z}\hat{\boldsymbol{\theta}}\) and \(\mathbf{y}\). Model settings are in Section 5.2 and data generation is left in Appendix 0.C.3. MissForest is too slow (more than 5 min per dataset) to be considered. In comparison, the two matrix completion methods achieve comparable imputation MSE, but their coverage is much worse than that of the MI methods. It is interesting to note that MISNN-wide tends to have smaller imputation MSE and estimation bias than MISNN-narrow. However, the coverage of the former is not as good as that of the latter, mainly due to its small SE. We suggest that in practice, if the accuracy of imputation or the parameter estimation is of main interest, MISNN with wide hidden layers should be adopted. If statistical inference on the parameters of interest is emphasized, then MISNN should be equipped with narrow hidden layers. ### Viewpoint of Prediction We applied MISNN to the Alzheimer's Disease Neuroimaging Initiative (ADNI) gene dataset *, which includes over 19k genomic features for 649 patients and a response, VBM right hippocampal volume, ranging between [0.4,0.6]. We selected the top 1000 features with the largest correlations with the response, and focused on the linear analysis model between the response and the top 5 features. Since we did not have access to the true coefficients in the linear model, we studied the difference between the estimated coefficients from complete data analysis and the ones from imputed datasets. We artificially generated missing values under MAR in the top 3 features that had the largest correlations with the response, with a missing rate of approximately 65%. We used MISNN, containing a single hidden layer with width 500 and a Batch Normalization layer, and fit a linear regression between the response \(\mathbf{y}\) and the top five features \(\mathbf{D}_{1}\sim\mathbf{D}_{5}\) for downstream prediction. Footnote *: The complete ADNI Acknowledgement is available at [http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf](http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf). Our results, summarized in Table 2, show that MISNN achieved small imputation and prediction MSEs in a computationally efficient manner, particularly when compared to other MI methods. Additionally, the estimators by MISNN (as well as SoftImpute) were closest to the gold standard from the complete data analysis. Further experiment details can be found in Appendix 0.C. ## 6 Discussion In this work, we propose MISNN, a novel deep-learning-based method for multiple imputation of missing values in tabular/matrix data. We demonstrate that MISNN can flexibly work with any feature selection method and any neural network architecture. MISNN can be trained with off-the-shelf optimizers at high computation speed, provides interpretability for the imputation model, and is robust to the data dimension and missing rate. Various experiments with synthetic and real-world datasets illustrate that MISNN significantly outperforms state-of-the-art imputation models.
While MISNN works for a wide range of analysis models, we have only discussed the case of continuous missing values using partialling out. We can easily extend MISNN to discrete missing value problems by considering generalized partially linear models (GPLMs; see Section 4.1 for details). However, the partialling-out technique is generally invalid for GPLMs. Therefore, iterative methods such as backfitting, which can be slow, may be required to learn MISNN. ## 7 Acknowledgement This work was supported in part by National Institutes of Health grant R01GM124111. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
2307.12807
Comprehending Semantic Types in JSON Data with Graph Neural Networks
Semantic types are a more powerful and detailed way of describing data than atomic types such as strings or integers. They establish connections between columns and concepts from the real world, providing more nuanced and fine-grained information that can be useful for tasks such as automated data cleaning, schema matching, and data discovery. Existing deep learning models trained on large text corpora have been successful at performing single-column semantic type prediction for relational data. However, in this work, we propose an extension of the semantic type prediction problem to JSON data, labeling the types based on JSON Paths. Similar to columns in relational data, JSON Path is a query language that enables the navigation of complex JSON data structures by specifying the location and content of the elements. We use a graph neural network to comprehend the structural information within collections of JSON documents. Our model outperforms a state-of-the-art existing model in several cases. These results demonstrate the ability of our model to understand complex JSON data and its potential usage for JSON-related data processing tasks.
Shuang Wei, Michael J. Mior
2023-07-24T13:58:15Z
http://arxiv.org/abs/2307.12807v1
# Comprehending Semantic Types in JSON Data with Graph Neural Networks ###### Abstract. Semantic types are a more powerful and detailed way of describing data than atomic types such as strings or integers. They establish connections between columns and concepts from the real world, providing more nuanced and fine-grained information that can be useful for tasks such as automated data cleaning, schema matching, and data discovery. Existing deep learning models trained on large text corpora have been successful at performing single-column semantic type prediction for relational data. However, in this work, we propose an extension of the semantic type prediction problem to JSON data, labeling the types based on JSON Paths. Similar to columns in relational data, JSON Path is a query language that enables the navigation of complex JSON data structures by specifying the location and content of the elements. We use a graph neural network to comprehend the structural information within collections of JSON documents. Our model outperforms a state-of-the-art existing model in several cases. These results demonstrate the ability of our model to understand complex JSON data and its potential usage for JSON-related data processing tasks. JSON data, graph neural networks, semantic type detection, deep learning
For example, $.user.username refers to the "username" key within the top-level object nested under the key "user". ## 3. Feature Selection In our work, we use the same set of features as Sherlock, a single-column prediction model that takes all values from a single column as input and outputs the predicted semantic type for the corresponding column.
The extracted feature vectors are used to train a neural network for the detection of semantic types. In Sherlock, a total of 1,587 features are extracted from the column values across four different dimensions. These dimensions are described below. 1. **Global statistics.** This category is a set of 27 hand-crafted features, typically high-level statistical characteristics of the column. For example, column entropy describes the uniformity of the distribution of column values. Another example is the number of values, which measures the number of unique values recorded in the column. 2. **Character-level distributions.** This category contains simple statistical features of character distributions. Specifically, 10 statistical functions, i.e., any, all, mean, variance, min, max, median, sum, kurtosis, and skewness, are applied to all 96 ASCII-printable characters plus the form feed character, resulting in 960 features. For example, the any function checks if any column value contains a specific character, and all checks if all column values contain a character. Other examples of features are the maximum number of appearances of a character in a single column value and the median number of appearances of a character across all column values. 3. **Word embeddings.** Sherlock uses a pre-trained GloVe embedding (Haff et al., 2015) to characterize the semantic content of column values. The GloVe model contains a 50-dimensional embedding for 400,000 English words aggregated from 6,000,000,000 tokens. Similar to word2vec (Haff et al., 2015), GloVe embeddings can be used to measure semantic similarity between words. The advantage of GloVe compared to word2vec is that it does not rely simply on local information of words; it also incorporates global statistics such as word co-occurrence. By calculating the mean, mode, median, and variance on the 50-dimensional GloVe feature vector for all column values, a 200-dimensional feature vector is produced in this category. 4. **Paragraph vectors.** A distributed bag-of-words version of the paragraph vector (PV-DBOW), or the doc2vec model (Haff et al., 2015), is implemented to capture features at the "topic" level of the column. The PV-DBOW model is trained to predict words randomly sampled from the paragraph as output, while ignoring context words in the input. The model is pre-trained in Sherlock using the Gensim library to extract a 400-dimensional paragraph feature. ## 4. Proposed Graph Model Raw JSON data is annotated by treating each key-value pair as a data point. For instance, from the user.json file, four data points can be extracted as illustrated in Figure 1. For each key-value pair, the label is determined by annotating the JSON Path to represent the semantic meaning, while the features are extracted using Sherlock from the corresponding values at each path. As an example, consider the key-value pair "user": {"id":9171087, "id_str": "9171087", "name": "ud83c"}. Here, the label assigned is user, and the features are extracted from {"id":9171087, "id_str": "9171087", "name": "ud83c"}. At this stage, we can proceed to apply Sherlock and evaluate its performance for semantic type detection, using it as a baseline result. The structure of our model is illustrated in Figure 2. Each JSON file is processed by first obtaining all key-value pairs, as described in the preceding section. Using the same example as in Figure 1, we can obtain four key-value pairs (a sketch of this extraction is given below).
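A minimal sketch of this key-value extraction in pure Python; the path syntax follows the $.a.b convention above, and the function name is illustrative:

```python
def key_value_pairs(obj, path="$"):
    """Enumerate (JSON Path, value) pairs from a parsed JSON document,
    treating every key as one data point (cf. Figure 1)."""
    pairs = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            child = f"{path}.{key}"
            pairs.append((child, value))
            pairs.extend(key_value_pairs(value, child))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            pairs.extend(key_value_pairs(value, f"{path}[{i}]"))
    return pairs

# e.g. key_value_pairs({"user": {"id": 9171087, "name": "x"}}) yields
# [("$.user", {...}), ("$.user.id", 9171087), ("$.user.name", "x")]
```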
Subsequently, the features for each key-value pair are computed based on their respective values, yielding features \(f_{1}\), \(f_{2}\), \(f_{3}\), and \(f_{4}\). Following this, four graphs are generated, labeled "id", "id_str", "name", and "user". The first three graphs, \(G_{1}\), \(G_{2}\), and \(G_{3}\), each consist of a single node, with node features \(f_{1}\), \(f_{2}\), and \(f_{3}\), respectively. The fourth graph, \(G_{4}\), is a multi-node graph, with a root node having node features \(f_{4}\) and three edges connecting to three other nodes with node features \(f_{1}\), \(f_{2}\), and \(f_{3}\). Once we obtain the graph representations, we use graph neural networks (GNNs)(Krizhevsky et al., 2015) to perform the classification task. Graph neural networks are well suited to our problem, as JSON documents inherently possess a tree-like structure that can be represented as a graph. GNNs capture complex dependencies within graphs by leveraging the structural information encoded in edges and node features, allowing us to exploit the hierarchical relationships present in JSON documents for tasks such as semantic type prediction. In our study, we used the Spektral library (Serbia et al., 2017) to implement our GNN. Specifically, we employed a two-layer GCN model, where the input graph was first fed into a GCN layer with 256 hidden units, followed by graph pooling and a dropout layer. The output was then forwarded to another GCN layer with 64 hidden units, before being connected to a dense layer for multi-class classification. To optimize the model, we adopted Adam [(10)] as our optimizer and used categorical cross-entropy as our loss function. The learning rate was set to \(2\times 10^{-4}\). Figure 1. Example of JSON Data ## 5. Dataset Our study uses Twitter data and Meetup data available on pushshift.io [(1)], a large-scale archive of social media content. Due to the immense size of the dataset, we selected a representative subset of each dataset for our research. Specifically, we extracted all available data from a specific date, resulting in a total of 30,000 distinct JSON objects for Twitter data and 20,000 distinct JSON objects for Meetup data. Figure 3 shows an example of the Twitter dataset used in this study. The example contains labels for single-node graphs, including created_at, id, and screen_name. The profile_link_color, profile_sidebar_border_color, profile_sidebar_fill_color, and profile_text_color also belong to single-node graphs with label color. Additionally, the example comprises multi-node graphs with labels such as bounding_box and user_mentions, which are also illustrated in the figure. The above-mentioned nodes have a depth of 1 in the graph, while nodes such as type and coordinate in this example have a depth of 2. Table 1 presents the number of examples corresponding to each depth (level of nesting) for our two datasets. All key-value pairs with a null value or a Boolean type are ignored, since these values are highly repetitive in our dataset and do not contain useful semantic information.
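Returning to the classification model of Section 4, the sketch below outlines the two-layer GCN using Spektral. The dropout rate is an assumption (the text does not give one), and the graph pooling is placed after the second GCN layer so the sketch produces a single graph-level vector, which differs slightly from the ordering described in the text:

```python
import tensorflow as tf
from spektral.layers import GCNConv, GlobalSumPool  # Spektral layer names assumed

class JsonGCN(tf.keras.Model):
    """Two-layer GCN for graph-level semantic-type classification."""
    def __init__(self, n_classes):
        super().__init__()
        self.gcn1 = GCNConv(256, activation="relu")
        self.drop = tf.keras.layers.Dropout(0.1)    # rate is an assumption
        self.gcn2 = GCNConv(64, activation="relu")
        self.pool = GlobalSumPool()                 # graph pooling
        self.out = tf.keras.layers.Dense(n_classes, activation="softmax")

    def call(self, inputs):
        x, a = inputs                               # node features, adjacency
        x = self.drop(self.gcn1([x, a]))
        x = self.gcn2([x, a])
        return self.out(self.pool(x))

model = JsonGCN(n_classes=43)                       # 43 classes for Twitter
model.compile(optimizer=tf.keras.optimizers.Adam(2e-4),
              loss="categorical_crossentropy")
```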
We then annotated each JSON Path with a class label, as discussed in the previous section, resulting in a label set comprising 43 distinct classes for the Twitter dataset and 32 distinct classes for the Meetup dataset. To prepare the data for use in our model, we processed each JSON file into a graph structure, resulting in a total of around 110,000 distinct graphs for the Twitter dataset and 600,000 distinct graphs for the Meetup dataset. Each graph in our dataset is accompanied by a set of node features, an adjacency matrix, and a class label encoded in one-hot format. The dataset is partitioned into training, validation, and test sets in a 7:3:3 ratio. ## 6. Experiment Method In our experiment, we initially preprocess the JSON file by extracting key-value pairs based on their corresponding JSON paths. \begin{table} \begin{tabular}{c c c} \hline Depth & \multicolumn{2}{c}{Number of examples} \\ \cline{2-3} & Twitter & Meetup \\ \hline 1 & 1,205 & 191,884 \\ 2 & 32,158 & 304,750 \\ 3 & 74,187 & 101,930 \\ 4 & 3,907 & 0 \\ 5 & 124 & 0 \\ \hline \end{tabular} \end{table} Table 1. Number of examples for each depth Figure 3. Example of Twitter JSON data Figure 2. Proposed model architecture Figure 4 demonstrates the conversion of a JSON document into relational tables, where each key-value pair is represented as a separate column. Subsequently, a feature extraction process is applied to these values to obtain the feature vectors. This step enables us to utilize the Sherlock model for classification, allowing us to establish a baseline performance measure. We then proceed to process the data into graphs, following the procedures outlined in Figure 2. The classification task is then performed using a graph neural network model. We summarize the time spent on each preprocessing step in Table 2, providing an overview of the time requirements for these tasks on our two different datasets. The key-value pair extraction step takes 1,205s and 3,525s for the Twitter and Meetup datasets respectively, which is the most time-consuming preprocessing step. The feature extraction and graph processing steps take less time than the key-value pair extraction. The feature extraction step is the only preprocessing step required to train the Sherlock model. ## 7. Experiment Results Table 3 provides a comprehensive comparison between the performance of Sherlock and the proposed model in terms of the F1 score and accuracy metrics. The evaluation is conducted for all semantic classes, which are categorized into single-node and multi-node based on the number of nodes in the graph. The table presents some selected examples from each category, including screen_name, country_code, timestamp_ms, color, and description for single-node, and bounding_box, user_mentions, retweet_status, hashtag, and full_name for multi-node. The average performance result is also presented at the bottom of the table. In the single-node setting, the proposed model outperforms Sherlock on some labels, for example, with F1 scores of 0.95 for screen_name, 0.92 for country_code, compared to 0.92 and 0.80, respectively, for Sherlock. However, Sherlock achieves a perfect F1 score of 1.00 for the timestamp_ms label, while the proposed model only achieves a score of 0.97. The Sherlock model also exhibits higher accuracy levels for the semantic types of country_code and timestamp_ms. Sherlock also performs slightly better for the description label, with an F1 score of 0.75 compared to 0.67 for the proposed model. 
Based on the results, it can be suggested that the base Sherlock model might exhibit better performance when applied to single-node scenarios, which may be attributed to the fact that the Sherlock model utilizes a more complex neural network architecture than our network. In the multi-node setting, the proposed model achieves a perfect F1 score of 1.00 for the bounding_box label, while Sherlock achieves a score of 0.83. The proposed model also outperforms Sherlock for the user_mentions and retweet_status labels, with F1 scores of 0.84 and 0.82 compared to 0.59 and 0.57, respectively. For the hashtag label, the F1 scores remain low for both models (0.34 for Sherlock and 0.40 for the proposed model). The proposed model achieves a higher F1 score of 0.22 for the full_name label compared to 0.00 for Sherlock. In terms of accuracy, our proposed model significantly outperforms Sherlock. Specifically, for the hashtag semantic type, our model achieves an accuracy of 0.80, while Sherlock only achieves an accuracy of 0.23. Similarly, for the full_name semantic type, our model achieves an accuracy of 0.50, while Sherlock fails to classify any instance correctly. These findings demonstrate that our proposed model is more adept at predicting complex structures, thus providing greater utility for practical applications. Table 4 presents the comparison between Sherlock and our proposed model on the Meetup dataset. The JSON files within the Meetup dataset exhibit highly similar structures, resulting in higher overall prediction accuracy and F1 scores. In cases involving multiple nodes, our model achieves a perfect accuracy of 1.00, while Sherlock performs equally well on classes such as event and category, with F1 scores of 0.99 and 0.97 for group and group_photo, respectively. For single-node classes, both models demonstrate similar performance levels. The Meetup dataset consists of numerous homogeneous JSON files, many of which exhibit similar hierarchical structures. As a result, the base model Sherlock can achieve good performance in the multi-node setup, and our model does not outperform Sherlock on this dataset. The reason could be that the Meetup dataset exhibits a higher level of homogeneity and a less complex hierarchical structure for our model to take advantage of. Table 5 shows a comparison between the training time and model size of Sherlock and the proposed model. Our proposed model takes significantly longer to train but produces a much smaller model. \begin{table} \begin{tabular}{c c c} \hline \hline Preprocessing steps & Twitter & Meetup \\ \hline Key-value pair extraction & 1,205s & 3,525s \\ Feature extraction & 785s & 2,100s \\ Graph processing & 152s & 450s \\ \hline Total time & 2,142s & 7,075s \\ \hline \hline \end{tabular} \end{table} Table 2.
Time spent on preprocessing steps \begin{table} \begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{Sherlock} & \multicolumn{2}{c}{Proposed model} \\ \cline{3-6} & Label & F1 score & Accuracy & F1 score & Accuracy \\ \hline Single-Node & screen\_name & 0.92 & 0.93 & **0.95** & 0.93 \\ & country\_code & 0.80 & **1.00** & **0.92** & 0.85 \\ & timestamp\_ms & **1.00** & **1.00** & 0.97 & 0.96 \\ & color & **0.99** & 0.99 & 0.98 & 0.99 \\ & description & **0.75** & **0.77** & 0.67 & 0.60 \\ \hline Multi-Node & bounding\_box & 0.83 & 1.00 & **1.00** & 1.00 \\ & user\_mentions & 0.59 & 0.41 & **0.84** & **0.97** \\ & retweet\_status & 0.57 & 0.48 & **0.82** & **0.86** \\ & hashtag & 0.34 & 0.23 & **0.40** & **0.80** \\ & full\_name & 0.00 & 0.00 & **0.22** & **0.97** \\ \hline Average & & 0.82 & 0.84 & **0.85** & **0.85** \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison of Sherlock and the proposed model on the Twitter dataset We expect that advances in graph neural network training will apply to our setting and further reduce the training time (Beng et al., 2017; Chen et al., 2018). Our proposed model exhibits a notable reduction in model size in comparison to Sherlock. This is mainly attributed to the fact that our model incorporates a smaller number of neural network layers. This disparity may also account for certain suboptimal predictions made by our model in comparison to Sherlock, particularly in single-node predictions. Nevertheless, our proposed model achieves a higher average F1 score than Sherlock. These results suggest that our proposed model has a superior ability to learn structural information from the data. ## 8. Future Work In our ongoing research, we aim to explore alternative subgraph representations to enhance the prediction of semantic types in our proposed model. The current subgraphs we utilize do not incorporate the parent node and its sibling nodes, thus missing out on valuable information present in those nodes. Sato (Sato, 2010) is a model that utilizes neighbor columns and table-level information for semantic prediction; however, it exhibits limited scalability in terms of training time. To address this limitation, we plan to construct subgraphs that include the parent and sibling nodes of the target node. In addition, we will assign edge weights to the graph, allowing us to emphasize the importance of the target node for prediction. By enabling additional edge features within each subgraph, we can leverage neural networks such as edge-conditioned GCNs (Kipf and Welling, 2015) that are capable of utilizing edge weight information. Subsequently, we intend to combine the outputs obtained from different subgraphs using a transformer (Kipf and Welling, 2015). The transformer architecture is suitable for this purpose because of its ability to capture global dependencies and effectively model the relationships between the outputs of different subgraphs. In addition, we plan to extend our experimentation beyond the current dataset in order to enhance the robustness and generalizability of our model. Specifically, we will train and test our model on additional datasets that are representative of real-world scenarios. Our goal is to evaluate the effectiveness of the model's predictive capabilities on JSON structures that it has not yet encountered.
Moreover, we aim to examine the impact of training set size on the performance of our model, with particular attention to the detection of less frequent semantic types. By conducting such analyses, we aim to better understand the limitations and strengths of our model and further refine it for real-world applications. ## 9. Conclusion In this paper, we proposed a model for predicting semantic types in nested JSON data that can be used for various automated data processing tasks. Existing models either predict for a single set of values at a time or are limited to non-nested relational data. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{Sherlock} & \multicolumn{2}{c}{Proposed model} \\ \cline{3-6} & Label & F1 score & Accuracy & F1 score & Accuracy \\ \hline Single-Node & event\_id & 0.66 & **0.64** & **0.68** & 0.63 \\ & id & 0.86 & 0.85 & **0.91** & **0.90** \\ & member\_name & **0.95** & **0.95** & **0.95** & 0.94 \\ & shortname & **0.99** & 0.99 & 0.98 & **1.00** \\ \hline Multi-Node & event & **1.00** & **1.00** & **1.00** & **1.00** \\ & category & **1.00** & **1.00** & **1.00** & **1.00** \\ & group & 0.99 & 0.97 & **1.00** & **1.00** \\ & group\_photo & 0.97 & 0.99 & **1.00** & **1.00** \\ \hline Average & & 0.89 & 0.89 & **0.92** & **0.90** \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of Sherlock and the proposed model on the Meetup dataset Figure 4. Transformation of a JSON document to a relational table \begin{table} \begin{tabular}{c c c c} \hline \hline Model & \multicolumn{2}{c}{Training time} & Model size \\ \cline{2-4} & Twitter & Meetup & \\ \hline Sherlock & 252s & 690s & 5.9MB \\ Proposed model & 1,230s & 3,051s & 1.9MB \\ \hline \hline \end{tabular} \end{table} Table 5. Training time and model size comparison To address this limitation, we proposed an extension of the semantic type prediction problem to semi-structured JSON data with types labeled based on JSON Paths. Our proposed model annotates the semantic type of JSON data with its hierarchical structure and employs a graph neural network to predict the semantic type using the same set of features extracted by Sherlock. We demonstrate several cases where our model outperforms Sherlock, indicating its ability to comprehend complex JSON data and its potential for semi-structured data processing tasks. Our ongoing research focuses on enhancing the prediction of semantic types in our proposed model through alternative subgraph representations. By incorporating the parent and sibling nodes into subgraphs and assigning edge weights, we can capture valuable information for accurate predictions. Leveraging additional edge features will further improve our model's performance. We plan to validate our approach on diverse datasets, ensuring its robustness and generalizability to real-world scenarios. Furthermore, we plan to investigate the impact of the size of the training set on model performance, especially for detecting less frequent semantic types. In conclusion, our work contributes to the development of deep learning models for predicting semantic types in JSON data. Our proposed model offers a better understanding of complex data and improved performance compared to Sherlock on semi-structured data.
2308.11235
Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks
Artificial Intelligence (AI) has found wide application, but also poses risks due to unintentional or malicious tampering during deployment. Regular checks are therefore necessary to detect and prevent such risks. Fragile watermarking is a technique used to identify tampering in AI models. However, previous methods have faced challenges including risks of omission, additional information transmission, and inability to locate tampering precisely. In this paper, we propose a method for detecting tampered parameters and bits, which can be used to detect, locate, and restore parameters that have been tampered with. We also propose an adaptive embedding method that maximizes information capacity while maintaining model accuracy. Our approach was tested on multiple neural networks subjected to attacks that modified weight parameters, and our results demonstrate that our method achieved great recovery performance when the modification rate was below 20%. Furthermore, for models where watermarking significantly affected accuracy, we utilized an adaptive bit technique to recover more than 15% of the accuracy loss of the model.
Zhenzhe Gao, Zhaoxia Yin, Hongjian Zhan, Heng Yin, Yue Lu
2023-08-22T07:21:06Z
http://arxiv.org/abs/2308.11235v1
# Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks ###### Abstract Artificial Intelligence (AI) has found wide application, but also poses risks due to unintentional or malicious tampering during deployment. Regular checks are therefore necessary to detect and prevent such risks. Fragile watermarking is a technique used to identify tampering in AI models. However, previous methods have faced challenges including risks of omission, additional information transmission, and inability to locate tampering precisely. In this paper, we propose a method for detecting tampered parameters and bits, which can be used to detect, locate, and restore parameters that have been tampered with. We also propose an adaptive embedding method that maximizes information capacity while maintaining model accuracy. Our approach was tested on multiple neural networks subjected to attacks that modified weight parameters, and our results demonstrate that our method achieved great recovery performance when the modification rate was below 20%. Furthermore, for models where watermarking significantly affected accuracy, we utilized an adaptive bit technique to recover more than 15% of the accuracy loss of the model. Keywords: Deep learning, Fragile watermarking, Integrity protection. ## 1 Introduction Deep neural networks (DNNs) are often deployed in various fields, such as image classification [8] and natural language processing [13]. Due to the varying sizes of neural network models, we deploy artificial intelligence models on the cloud [20] or on embedded devices [16]. Regardless of the deployment method, it is challenging for users to ensure that the model is fully deployed as intended by the owner. The model can be subjected to quantization or pruning to reduce server load, and it may also be vulnerable to attacks that modify the model parameters, such as backdoor attacks or poisoning attacks [12, 3]. Embedding watermark information into the parameters creates a fragile barrier for the model parameters, allowing us to determine whether the parameters have been tampered with by examining the parameters themselves, as shown in Figure 1. Although standard methods for data integrity checks, such as SHA-256 [5] and CRC [19], exist, the checksum computation must be adapted to different model frameworks. Additionally, because of the characteristics of neural network models and hash functions, such checks cannot locate or recover tampering. Model watermarking [18] is a technique, built on the characteristics of neural network models, used for protecting model intellectual property and model integrity. Intellectual property was the first application of watermarking when neural network watermarking was proposed in [21]. Watermarks for integrity protection are often referred to as fragile watermarks and are currently roughly divided into two directions: black-box fragile watermarking and white-box fragile watermarking. Fragile watermarking refers to the ability of the watermark to reflect any modifications made to the model, thereby determining its integrity status. Black-box fragile watermarking assumes that the model can only be queried through its input and output interfaces, and by testing specific inputs (triggers, also known as sensitive samples), it is possible to determine if the model has been tampered with. There have been many previous works in this field, including the most representative work by He et al.
[9] from Princeton University, who used a Taylor series expansion of the neural network to describe the formula for attacking the neural network, and searched for the sample most sensitive to small changes in the network as the sensitive sample. Kuttichira et al. [14] searched for specific triggers by building an optimizer suitable for Bayesian algorithms and achieved detection against arbitrary attacks in experiments, but the detection efficiency was not high. O. Aramoon et al. [1] believed that triggers that fall on the classification boundary are the required triggers for classification tasks, but for other tasks the model's decision boundary is not as easily constructed from output probabilities as in classification tasks. Figure 1: Model parameters can be tampered with, and fragile watermarks can establish a fragile barrier that allows users or model owners to check the status of model parameters through tokens. Yin et al. [23] used generative adversarial nets [6] to learn the model's boundaries and generate sensitive samples autonomously. The aforementioned black-box model watermarking techniques are limited in their detection capabilities due to their pre-defined API-based approach. Due to the opacity of neural networks, it is challenging to be certain that black-box methods can detect all potential attacks with 100% accuracy. Furthermore, it is difficult to achieve localization and recovery. Therefore, white-box watermarking is necessary as a more rigorous approach for neural networks. White-box fragile watermarks allow viewing of the model's internal parameters. However, this does not mean that one can easily obtain the true original model for comparison: on the cloud, it is difficult to distinguish between the original model and a tampered copy, and for offline devices, online comparison is even more difficult. Previous work on white-box fragile watermarks includes Li et al. [17], who studied the attack patterns of the PBFA algorithm for specific neural networks and placed carefully designed model parameter check bits in separate memory to detect model integrity at runtime. Additionally, they leveraged the technique of setting erroneous block parameters to zero in order to restore model performance. Botta et al. [2] achieved block-level positioning by using KL transforms and genetic algorithms to set the least significant bits (LSBs) of the parameters as watermark bits, but this still resulted in model performance degradation and caused detection omissions. Similarly, Zhao et al. [24] from the University of Shanghai for Science and Technology introduced the self-embedding technique used in image fragile watermarking to DNNs, setting the 12 LSBs of the neural network parameters as watermark bits, achieving 100% detection of neural network tampering, block-level positioning, and partial recovery of neural network performance, similar to recovery in the image domain. In our approach, we scramble the parameters using a specific permutation and place the important information of the previous parameter in the position of the unimportant information of the subsequent parameter. Meanwhile, we use a modulo operation on each parameter itself to achieve precise detection, accurate localization, and precise recovery at the parameter level. Parameters of neural networks differ from those of images in several ways.
For instance, high-frequency features in images are sensitive to human perception and therefore need to be protected when embedding watermarks. In neural networks, however, the importance of parameters to the results is related to the gradients, magnitude, and position of tensors. As neural networks become deeper, even small changes to individual parameters can have a significant impact on the final output, rendering traditional image watermarking inapplicable. Similarly, the Peak Signal-to-Noise Ratio (PSNR) [11], commonly used in the image field to indicate the similarity between images, is not a suitable indicator of model variation in neural networks. Moreover, for white-box watermarks, we often need to replace the least significant bits (LSBs), but watermarks that have little impact on small models can lead to a significant performance decrease when placed in deep models. Therefore, we have developed an adaptive bit adjustment technique that achieves a watermark embedding capacity far greater than that of previous works. Our contributions can be summarized as follows: * We propose a method for generating adaptive bits based on gradient descent, which provides a way to recover up to 15% of the model's performance when adjusting the LSBs of the model. * Our watermarking algorithm combines relationships between parameters with relationships among each parameter's own bits, achieving 100% detection of model modification, parameter-level positioning of tampered regions, and recovery of model performance for modifications below 20%. * We conduct a comparative analysis with previous integrity verification methods, demonstrating that our approach is the first to achieve precise parameter-level localization while preserving the original performance of the model. ## 2 Adaptive Watermarking ### Problem Formulation For methods that require replacing LSBs to embed watermarks, it is desirable to minimize the change in model performance caused by the LSBs of each parameter \(W_{ij}\) in the neural network. In the field of image processing, PSNR is often used to describe the differences between images. However, in the field of neural networks, [22] have shown that even a small number of parameters can have a significant impact on model performance, and PSNR may not be suitable for neural networks. In this case, accuracy is used to describe the performance of the neural network after embedding the watermark, and designers aim to minimize the change in performance as much as possible. So the objective can be described as \(\mathrm{maximize}\ Acc\big{(}f(X_{test},W^{\prime}),Y\big{)}\). Here, \(f(\cdot)\) denotes model inference, and \(X_{test}\) and \(Y\) represent the test set and the set of labels, respectively. \(Acc\) represents the accuracy of the inference, and \(W^{\prime}\) denotes the parameters with the embedded watermark. As the amount of embedded watermark content and the depth of the model increase, the method of adjusting some LSBs may also have a greater impact on the model. ### Adaptive Method Existing neural network frameworks, such as PyTorch and TensorFlow, adopt default parameters that comply with the IEEE 754 standard [10] for floating-point numbers. Each floating-point number consists of 32 bits.
For ease of description, we use \(b_{0},b_{1},b_{2}...b_{31}\) to represent the 32 bits, where \(b_{0}\) is the sign bit, \(b_{1}-b_{8}\) are the exponent bits used to control the position of the decimal point, and the remaining bits are referred to as the fraction bits, which form the significant digits. Obviously, the value of a number is mainly influenced by the sign bit, the exponent bits, and the leading fraction bits; Figure 2 gives a more intuitive picture. In watermark embedding methods that replace the least significant bits (LSBs), we often replace the trailing fraction bits to embed the watermark, which unavoidably causes slight changes in the original numerical values. As the neural network makes inferences layer by layer, the final results may deviate from those of the original model. To address this issue, we propose an adaptive watermark embedding method. In other words, we obtain a performance correction by training one bit of the parameters. Taking our watermarking method as an example, for each floating-point parameter we need to replace the 19 least significant bits (LSBs), and therefore we train the 21st bit from the end to restore performance. Intuitively, for the fraction part, the bits closer to the front have a greater impact on the value. Thus, we can correct the previous impact by influencing \(b_{11}\), as shown in Figure 3. The generation process is described in detail in Algorithm 1. We iterate through each layer of the neural network, conduct \(\alpha\) training iterations for each layer, and obtain the gradient and accuracy using the training and test sets, respectively. After adjusting the watermark bits, we check whether the accuracy has improved and save the better adjustment. We aim to move each parameter in the direction opposite to its gradient to achieve a decrease in the loss function, as shown in Table 1. For instance, if the gradient is positive for a positive parameter, we want the tensor value to be smaller, so we set \(b_{11}\) to \(0\). Similarly, if the gradient is positive for a negative parameter, we also want the tensor value to be smaller, but due to the sign, we need to make the absolute value of the tensor larger, leading to a smaller value; we set \(b_{11}\) to \(1\). \begin{table} \begin{tabular}{c c c} \hline **Tensor** & **Grad** & \(\mathbf{b_{11}}\) \\ \hline - & - & 0 \\ \hline - & + & 1 \\ \hline + & - & 1 \\ \hline + & + & 0 \\ \hline \end{tabular} \end{table} Table 1: Adjusting the adaptive bit in the four different situations. Figure 2: The sign field in IEEE 754 floating-point numbers determines the sign of the floating-point number, the exponent field determines the position of the decimal point, and the fraction field determines the significant digits.
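A minimal sketch of the rule in Table 1, operating directly on the IEEE 754 bit pattern (\(b_{0}\) is the sign bit in MSB-first order, so \(b_{11}\) corresponds to mask `1 << 20`; the function name is illustrative):

```python
import struct

def set_adaptive_bit(w: float, grad: float) -> float:
    """Set bit b11 of a float32 weight per Table 1, nudging the value
    against the gradient: b11 = 0 when weight and gradient share a sign,
    b11 = 1 when they differ."""
    bits = struct.unpack("<I", struct.pack("<f", w))[0]
    mask = 1 << (31 - 11)                      # bit b11 in MSB-first indexing
    if (w >= 0) == (grad >= 0):                # same sign -> shrink magnitude
        bits &= ~mask & 0xFFFFFFFF             # b11 = 0
    else:                                      # opposite sign -> grow magnitude
        bits |= mask                           # b11 = 1
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```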
## 3 Self-Mutual Parameter Check ### Two Simple Assumptions Validation bits need to have strong sensitivity to any changes made to the model [4], and this sensitivity must be related to the model itself. Although validation bits are still part of the model, the watermark should associate the validation bits with all of the model's content so that the model and watermark are truly and completely coupled. Consider a scenario in which all parameters' least significant bits (LSBs) are set to 1. In this case, even a tiny adjustment to the model, such as setting any one of millions of parameters to zero (or any other value), would change an LSB of the model and be detected. However, since this constant LSB is independent of the other parameters of the model, an attacker can easily set all LSBs back to 1, making the model's fragile watermark completely ineffective. We can also consider another protection method: parameter backup. We select 16 bits from a 32-bit floating-point number to carry the information and another 16 bits that are exactly the same as the first 16 bits to back up and check the information bits, ensuring that any modification to the parameter causes the two 16-bit halves to mismatch and be detected. The parameter cannot be recovered, however, because it is impossible to determine which half is incorrect. Moreover, an attacker can adjust the first 16 bits and the last 16 bits to be consistent after attacking one parameter, achieving a covert attack without being detected. Figure 3: Construction of the parameters: the adaptive bit (\(b_{11}\)) lies between the information bits and the mutual-self check bits. Although these two examples are relatively simple, they demonstrate that a fragile watermark must be strongly associated with the information itself and must establish a certain correlation between parameters, so that an attacker who modifies any parameter must tamper with all parameters in order to preserve the original characteristics of the watermark; this is what prevents attacks. Finally, it is best to add a key attribute to the fragile watermark to ensure that the watermark information can only be obtained through the secret key. ### Constructing Self-Mutual Check Parameters For white-box watermarking, we aim to achieve a 100% success rate in detecting tampering, locate the position of the tampering, and restore a certain amount of tampering. To achieve this, our design ensures the coupling of information between parameters and within individual parameters. Specifically, we design the watermark using the following method. The process of generating and adding the watermark is done layer by layer on a neural network. For each layer of the network, we permute its parameters with a secret key (random seed) and record the scrambling sequence for detection and restoration. After permuting, we concatenate the first and last parameters to obtain a circular sequence, in which each parameter has a parameter before and after it. To obtain the check bits, we select the first 11 bits for protection (one sign bit, eight exponent bits, and two fraction bits), the next bit (\(b_{11}\)) as the adaptive bit, and the first eight of the remaining 20 bits as the mutual check bits, which are determined through computation. The computation involves XOR operations between the information bits of the previous parameter, the information bits of this parameter, and the secret key (one could also increase the complexity of the reversible calculations to make them more difficult to crack).
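A minimal sketch of the check-bit construction just described, following the bit layout of Figure 3. Folding the 11 information bits down to an 8-bit XOR check and the particular key value are illustrative assumptions, not the paper's exact scheme; the nine self-check bits use the modulo-512 hash specified in the next paragraph.

```python
import struct

KEY = 0xA5  # illustrative 8-bit secret key

def word_of(x: float) -> int:
    """32-bit integer view of a float32 (b0 is the most significant bit)."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def info_bits(u: int) -> int:
    """b0-b10: sign bit, eight exponent bits, and two leading fraction bits."""
    return u >> 21

def fold8(v: int) -> int:
    """Fold an 11-bit value into 8 bits (an illustrative choice)."""
    return (v ^ (v >> 8)) & 0xFF

def pack_checks(u_prev: int, u_cur: int) -> int:
    """Write the mutual check (b12-b19) and self check (b23-b31) into u_cur."""
    mutual = fold8(info_bits(u_prev) ^ info_bits(u_cur)) ^ KEY
    u = (u_cur & ~(0xFF << 12)) | ((mutual & 0xFF) << 12)  # b12-b19
    self_chk = (u >> 9) % 512                              # b0-b22 mod 512
    return (u & ~0x1FF) | self_chk                         # b23-b31

def verify(u_prev: int, u_cur: int) -> bool:
    mutual_ok = ((u_cur >> 12) & 0xFF) == (fold8(info_bits(u_prev) ^ info_bits(u_cur)) ^ KEY)
    self_ok = (u_cur & 0x1FF) == ((u_cur >> 9) % 512)
    return mutual_ok and self_ok

u1, u2 = word_of(0.25), word_of(-1.5)
u2 = pack_checks(u1, u2)
print(verify(u1, u2))  # True; flipping any protected bit of u2 makes this False
```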
Without further discussion of cryptography, and only to explain our white-box watermarking method, we take the remaining nine bits to be the previous 23 bits modulo 512 (any other hash function could be used); these form the self-check bits. The construction is illustrated in Figure 3. The self-check focuses on detecting whether the current parameter itself has been tampered with. If no error is found in the self and mutual checks, the probability of a misjudgment can be calculated as \(P=\frac{1}{512}\times\frac{2^{23-11}}{2^{23}}=\frac{1}{2^{20}}\approx\frac{1}{10^{6}}\). This approach ensures that when one parameter is damaged, we can choose another parameter for restoration (without the self-check, it would be impossible to determine which parameter is damaged when checking between parameters). \begin{table} \begin{tabular}{c c c c c c} \hline \hline Schemes & Object & Localization accuracy & Recoverability & Capacity [7] & Embedding method \\ \hline ACM-[21] & Copyright & - & ✗ & Small & Regularization \\ ACM-[7] & Integrity & - & ✗ & Medium & Histogram shift \\ INS-[2] & Integrity & Block & ✗ & Large & LSB Substitution \\ PRL-[24] & Integrity & Block & ✓ & Large & LSB Substitution \\ **Ours** & **Integrity** & **Parameters** & ✓ & **Large** & **LSB Substitution** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of our watermarking method with previous model watermarking methods in terms of objective, localization accuracy, recoverability, embedding capacity, and embedding method. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline Schemes & \multicolumn{2}{c|}{Embedding} & \multicolumn{2}{c|}{Detection} & \multicolumn{2}{c}{Characteristic} \\ \hline & Contact & Training & Positioning & Validator & Fidelity & Type \\ CVF-[9] & ✗ & ✗ & ✗ & Trigger & ✓ & Black-box \\ KS-[25] & ✓ & ✓ & ✗ & Trigger & ✗ & Black-box \\ AAAI-[15] & ✓ & ✓ & ✗ & Trigger & ✗ & Black-box \\ ICIP-[23] & ✗ & ✗ & ✗ & Trigger & ✓ & Score-based Black-box \\ ACM-[7] & ✓ & ✗ & ✗ & Hash & ✗ & White-box \\ INS-[2] & ✓ & ✗ & ✓ & Hash & ✗ & White-box \\ PRL-[24] & ✓ & ✗ & ✓ & Hash & ✗ & White-box \\ **Ours** & ✓ & ✓ & ✓ & **Hash** & ✓ & **White-box** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of our method with previous **fragile watermarking** methods. In the embedding stage, "Contact" indicates whether parameters are modified and "Training" indicates whether the modification is related to training. "Fidelity" indicates whether the performance of the model remains unchanged before and after modification. We compared our white-box watermarking with existing watermarking methods and found that only our method achieves parameter-level positioning and restoration, while also achieving the best performance in fidelity and lossless watermarking, as shown in Table 2 and Table 3. ## 4 Experiment In this section, we select four classic DNN models (LeNet, AlexNet, ResNet18, and ResNet50) as experimental subjects. These DNN models are increasingly larger and deeper, demonstrating the impact of watermark information on different models and also proving the effectiveness of our adaptive method. We also select datasets that match the models to make the watermark experiments more practical: LeNet is evaluated on MNIST, while AlexNet, ResNet18, and ResNet50 are evaluated on CIFAR-10. The random seed was set to 1234 for all experiments. In Section 4.1, we demonstrate the effectiveness of our proposed adaptive method, and in Section 4.2, we demonstrate the effectiveness of our method against random parameter attacks.
### Adaptive Ability In our experiments, we compare our work with a similar method [24], as shown in Table 4. Although that method replaces only 12 bits per parameter, it still has some impact on deeper models, whereas our method replaces 20 bits per parameter and still preserves model performance through adaptation. We divide model accuracy into three stages: the clean model, and the watermarked model before and after adaptive adjustment. The results are shown in Table 5. It can be seen that the decrease in accuracy for LeNet and AlexNet after adding the watermark is relatively small, while the decrease for ResNet is relatively large. In particular, for the deeper ResNet50, the impact of the watermark is even more significant, reaching 15.66% (from 73.93% down to 58.27%). We believe this is because the small influence of the watermark on the parameters is amplified layer by layer in models with more layers, leading to a significant performance drop in the end. However, this also confirms the feasibility of our method. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Layers} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Resize} & \multicolumn{3}{c}{Accuracy(\%)} \\ \cline{5-7} & & & & Clean & PRL2022 & **Ours** \\ \hline LeNet & 7 & MNIST & \(1\times 28\times 28\) & 97.65 & 97.65 & **97.82** \\ AlexNet & 8 & CIFAR-10 & \(3\times 256\times 256\) & 97.49 & 97.49 & **98.63** \\ ResNet18 & 18 & CIFAR-10 & \(3\times 32\times 32\) & 71.89 & 71.88 & **72.06** \\ ResNet50 & 50 & CIFAR-10 & \(3\times 32\times 32\) & 73.93 & 73.62 & **74.01** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of our watermarking method with [24]'s method across four models. We successfully restored the performance of each model to its original level, even with slight improvements. However, it is unrealistic to expect the adaptive method to improve model performance significantly beyond the original level. ### Performance Recovery It has been observed that even small variations in model parameters can have a significant impact on the overall performance of the model [22], rendering traditional image-restoration-style methods inapplicable. Therefore, we aim to restore the model's original parameters as much as possible. Figure 4: Four models' recovery performance under arbitrary parameter attacks. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Accuracy(\%)} \\ \cline{2-5} & Clean Model & Before Adaptive & After Adaptive & **Improvement** \\ \hline LeNet & 97.65 & 97.54 & 97.82 & **0.28** \\ AlexNet & 97.49 & 97.63 & 98.63 & **1.00** \\ ResNet18 & 71.89 & 64.16 & 72.06 & **7.90** \\ ResNet50 & 73.93 & 58.27 & 74.01 & **15.74** \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy of four models at three stages. We assume an attack that randomly selects model parameters and overwrites them with random numbers; the randomly set numbers do not exceed the range of the original parameters. In order to compare the performance difference between attack and recovery, we select the first layer for testing. It can be seen that for the smaller LeNet model, performance under our recovery method declines gradually until the fraction of attacked parameters reaches 90%, after which it drops quickly. For the other, larger models, we achieve full performance recovery for attacks on up to 20% of the parameters. The specific results are shown in Figure 4.
## 5 Conclusion In this paper, we advocate integrating neural network watermarking with the characteristics of neural networks. To this end, we propose applying gradient descent within neural network watermarking, introducing adaptive watermarks. Additionally, we tightly associate each parameter's watermark with the important information of the parameters themselves, proposing self-mutual check parameters that enable precise verification and recovery. We combine these two methods and conduct experiments on multiple networks, demonstrating the effectiveness of our approach. The adaptive technique also achieves a significant increase in watermark capacity, allowing more watermark information to be embedded under lossless conditions in future work.
2302.02560
Causal Estimation of Exposure Shifts with Neural Networks
A fundamental task in causal inference is estimating the effect of distribution shift in the treatment variable. We refer to this problem as shift-response function (SRF) estimation. Existing neural network methods for causal inference lack theoretical guarantees and practical implementations for SRF estimation. In this paper, we introduce Targeted Regularization for Exposure Shifts with Neural Networks (TRESNET), a method to estimate SRFs with robustness and efficiency guarantees. Our contributions are twofold. First, we propose a targeted regularization loss for neural networks with theoretical properties that ensure double robustness and asymptotic efficiency specific to SRF estimation. Second, we extend targeted regularization to support loss functions from the exponential family to accommodate non-continuous outcome distributions (e.g., discrete counts). We conduct benchmark experiments demonstrating TRESNET's broad applicability and competitiveness. We then apply our method to a key policy question in public health to estimate the causal effect of revising the US National Ambient Air Quality Standards (NAAQS) for PM 2.5 from 12 ${\mu}g/m^3$ to 9 ${\mu}g/m^3$. This change has been recently proposed by the US Environmental Protection Agency (EPA). Our goal is to estimate the reduction in deaths that would result from this anticipated revision using data consisting of 68 million individuals across the U.S.
Mauricio Tec, Kevin Josey, Oladimeji Mudele, Francesca Dominici
2023-02-06T04:35:08Z
http://arxiv.org/abs/2302.02560v4
# Causal Shift-Response Functions with Neural Networks ###### Abstract Policymakers are required to evaluate the health benefits of reducing the National Ambient Air Quality Standards (NAAQS; i.e., the safety standards) for fine particulate matter (PM\({}_{2.5}\)) before implementing new policies. We formulate this objective as a _shift-response function_ (SRF) and analyze the problem using causal inference methods, specifically under the _stochastic interventions_ framework. SRFs model the average change in an outcome of interest resulting from a hypothetical shift in the observed exposure distribution. We propose a new, broadly applicable doubly-robust method to learn SRFs using targeted regularization with neural networks. We evaluate our proposed method under various benchmarks designed for marginal estimates as a function of a continuous exposure. Finally, we implement our estimator in the motivating application that considers the potential reduction in deaths from lowering the NAAQS from the current level of \(12~{}\upmu\mathrm{g}/\mathrm{m}^{3}\) to the levels recently proposed by the Environmental Protection Agency in the US (10, 9, and 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\)). ## 1 Introduction On January 6, 2023, the US Environmental Protection Agency (EPA) asked for public comment on a proposal to lower the National Ambient Air Quality Standard (NAAQS) for annual PM\({}_{2.5}\) concentrations from 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) to 9 or 10 \(\upmu\mathrm{g}/\mathrm{m}^{3}\), considering lowering to as low as 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\), or as high as 11 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). The EPA estimated that lower standards could result in as much as \(\$43\) billion in net health benefits in 2032 (The New York Times, 2023). The final decision on the new NAAQS will be made later this year. The EPA's inquiry into lowering PM\({}_{2.5}\) concentrations remains contentious when considering all possible economic, social, and health consequences. We will focus on the potential health benefits gained from decreasing PM\({}_{2.5}\) concentrations and lowering the enforced limits. We formulate our analysis by introducing a general-purpose framework for neural networks that learns a _shift-response function_ (SRF). An SRF characterizes the causal effect for an outcome of interest (e.g., mortality) that can be attributed to a shift in the distribution of an exposure variable (e.g., a reduction of PM\({}_{2.5}\)). SRFs can be formally defined using _stochastic interventions_, a causal inference framework that is useful for evaluating changes to a continuous exposure like PM\({}_{2.5}\) (Munoz & Van Der Laan, 2012). To study the effects of lowering the NAAQS in the US, this paper proposes **T**argeted **R**egularization for **E**xposure **S**hifts with **N**eural **N**etworks (tresnet), a novel method for estimating SRFs. tresnet adapts previous work on targeted regularization for estimating discrete treatment effects and average dose-response curves with neural networks, incorporating ideas from targeted maximum likelihood estimation. tresnet uses a neural network with distinct prediction heads for the outcome and an importance-sampled density ratio of the observed and shifted exposure distributions, while satisfying an estimating equation that ensures various desirable properties. A key property of our estimator is double-robustness.
A doubly-robust estimator of the SRF yields unbiased estimates when either the outcome model or the density ratio of the observed and shifted exposures is correctly specified. In addition, the doubly-robust SRF estimator achieves the semiparametric efficiency bound when both models are correct (Munoz and Van Der Laan, 2012; Kennedy, 2016). Figure 1: Estimated reduction in early deaths among 68.5 million elderly adults enrolled in Medicare in the US for different NAAQS thresholds using tresnet. The curve shows the effect of implementing a cutoff exposure shift that reduces all observations above the NAAQS to the proposed upper limit. Currently, there is limited evidence that can directly inform the new NAAQS rule, even though numerous studies report adverse health outcomes attributable to high levels of PM\({}_{2.5}\) (Wu et al., 2020). Only a few peer-reviewed articles have taken a causal perspective on non-linear effect estimation for changing PM\({}_{2.5}\) concentrations. Among these reports, none directly analyze policy interventions by supposing that the changes manifest as an exposure shift. Instead, most analyses of the associations between PM\({}_{2.5}\) and adverse health outcomes focus on estimating exposure-response functions (ERFs). While ERFs are useful for a variety of purposes (e.g., identifying potential thresholds), analyzing exposure shifts more concisely evaluates the potential costs or benefits of proposed interventions, as this paper highlights concerning the EPA's desire to implement new ambient air standards. While tresnet is motivated by this application, it generalizes to many settings with different types of exposure shifts, particularly within the domains of environmental health and epidemiology. A preview of the results is shown in Figure 1, which shows a steep decrease in deaths under a lower standard. Details are provided in Section 7. **Contributions** This paper contributes to causal machine-learning methods and the public health domain, summarized as follows: (1) We use state-of-the-art neural network methods to take an important step toward answering an open public health question in the US. Our results clearly demonstrate the health benefits of lowering the acceptable air quality standards. Further, they show evidence of increasing marginal benefits as the threshold is further reduced. (2) To our knowledge, we provide the first causal inference method using neural networks for continuous stochastic interventions or count data. The proposed methods are highly competitive and often outperform others in our simulation studies with synthetic/semi-synthetic datasets. ## 2 Background on Stochastic Interventions and Estimands of Interest **Notation** The exposure1 and outcome of interest are denoted as \(A_{i}\in\mathcal{A}\) and \(Y_{i}\in\mathcal{Y}\), respectively. We suppose that \(A_{i}\) is continuous. Each unit \(i\in\mathcal{I}\) is described by a set of covariates \(\mathbf{X}_{i}\in\mathcal{X}\). We denote \(\mathbb{P}^{X}\) as the marginal distribution of \(\mathbf{X}_{i}\). The observed exposure density function is denoted as \(\pi_{0}(a\mid\mathbf{x})\triangleq\Pr(A_{i}=a\mid\mathbf{X}_{i}=\mathbf{x})\). This quantity is widely known as the _generalized propensity score_, which has broad utility for causal inference (Hirano and Imbens, 2005; Imai and van Dyk, 2004). Throughout we assume a sample \((\mathbf{X}_{i},A_{i},Y_{i})_{i=1}^{n}\) of size \(n\). Footnote 1: The terms exposure, intervention, and treatment are used interchangeably throughout this manuscript.
### Potential outcomes and assumptions We adopt the potential outcomes framework, also known as the Rubin causal model, to describe the assumptions necessary to identify the target SRF parameter. Let \(Y_{i}^{a}\) denote the _potential outcome_ of unit \(i\) under a hypothetical exposure \(a\in\mathcal{A}\) (Imbens and Rubin, 2015). Potential outcomes require the stable-unit treatment value assumption (SUTVA), which has two conditions. The first condition is _consistency_, which entails \(Y_{i}^{A_{i}}=Y_{i}\), meaning that the potential outcome of the factual treatment corresponds with the observed outcome. The second condition is _no interference_, which states that \(Y^{A_{i}}\) does not depend on the treatment of other units. **Assumption 2.1** (SUTVA).: (a) \(Y_{i}^{A_{i}}=Y_{i}\) (consistency); (b) \(Y_{i}^{\mathbf{A}}=Y_{i}^{A_{i}}\) where \(\mathbf{A}=(A_{i})_{i=1}^{n}\) is the vector of all treatment assignments (no interference). SUTVA is widely assumed in most causal inference applications, particularly in previously proposed methods for exploring the effects of air pollution on human health (Dominici and Zigler, 2017). We also assume that, conditional on all the measured confounders included in \(\mathbf{X}_{i}\), there are no unmeasured confounders (variables correlated with both the treatment and outcome); more formally: **Assumption 2.2** (unconfoundedness).: \(\mathbf{X}_{i}\) consists of pre-treatment covariates only (it is not caused by the treatment or outcome) and \(A_{i}\perp(Y_{i}^{a})_{a\in\mathcal{A}}\mid\mathbf{X}_{i}\), where \(\perp\) denotes conditional independence. ### Stochastic interventions and estimand The _stochastic intervention_ (SI) framework (Hubbard and Van der Laan, 2008) encompasses methods for estimating an expected potential outcome value when replacing the observed exposure distribution, \(\pi_{0}\), with a counterfactual exposure described by \(\pi_{d}\). The intervention is stochastic since \(\pi_{d}\) is a distribution rather than a constant. In this work, we assume that \(\pi_{d}\) is defined implicitly by a transformation of \(A_{i}\), denoted by \(A_{i}^{d}\). We let \(\mathbf{O}_{i}^{d}=(\mathbf{X}_{i},A_{i},A_{i}^{d},Y_{i})\) denote the augmented data and \(\mathbb{P}_{d}\) denote its distribution. Table 1 shows common examples of relevant exposure shifts, although a specific exposure shift should ideally be directed by the application itself and by input from domain experts. For instance, one way to approach our motivating application is to have \(d\) represent the new NAAQS threshold (e.g., 12, 10, 9) using a cutoff SI. Alternatively, we could use an expert-defined hypothetical exposure distribution that uses ZIP-code-level information not captured by \(\mathbf{X}_{i}\). We identify various possible unit-level transforms in the table. In general, such \(\pi_{d}\) cannot be computed as a simple closed-form expression in terms of \(\pi_{0}\). The causal estimand of interest corresponding with an SRF can be defined as follows. First, define the expected potential outcome and its average under the shift: \[\begin{split} q(\mathbf{x},a)&\triangleq\mathbb{E}[Y_{i}^{a}\mid\mathbf{X}_{i}=\mathbf{x},A_{i}=a],\\ \mu_{d}&\triangleq\mathbb{E}_{\mathbb{P}_{d}}\left[q(\mathbf{X}_{i},A_{i}^{d})\right].\end{split} \tag{1}\] The _shift-response function_ (SRF) is then the mapping \(\pi_{d}\mapsto\mu_{d}\).
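To make the shifts of Table 1 concrete, here is a minimal NumPy sketch of the unit-level transformations \(A_{i}^{d}\); the synthetic exposures are only for illustration.

```python
import numpy as np

def cutoff_shift(a: np.ndarray, d: float) -> np.ndarray:
    """A_i^d = min(A_i, d): clip all exposures above the proposed limit d."""
    return np.minimum(a, d)

def additive_shift(a: np.ndarray, d: float) -> np.ndarray:
    """A_i^d = A_i - d: subtract a constant amount."""
    return a - d

def percent_shift(a: np.ndarray, d: float) -> np.ndarray:
    """A_i^d = A_i * (1 - d): an overall percent reduction."""
    return a * (1.0 - d)

rng = np.random.default_rng(0)
a = rng.gamma(shape=8.0, scale=1.2, size=1000)  # synthetic PM2.5-like exposures
print(cutoff_shift(a, 9.0).max())               # no value exceeds 9
print(percent_shift(a, 0.1).mean() / a.mean())  # mean reduced by exactly 10%
```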
It is useful to contrast SRFs with ERFs. An ERF represents the expected potential outcome at the same fixed treatment level averaged over all units. An ERF can be seen as a limiting case of an SRF where \(\pi_{d}\) is a point mass distribution (though such an intervention is no longer stochastic). Figure 2 illustrates the above concepts in a hypothetical air pollution example. Figure 2a shows the cutoff SI (inspired by our application), and its corresponding SRF is plotted in Figure 2b. The SRF is highly informative of the effect of the cutoff. It shows that reducing the cutoff from 12 to 8 produces a sharp decline in deaths, but the curve flattens out below 6 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). For additional demonstration, Figure 2c contains an alternative example of an SI for the same data, reducing every observed value by 10%. Lastly, Figure 2d shows a comparison with the exposure-response function (ERF). Since every point of the ERF implies the same hypothetical exposure for all units, it is less informative of the causal effect of a shift, particularly under non-linear effects. The estimand associated with the ERF has the form \(\xi(a)\triangleq\mathbb{E}_{\mathbf{X}_{i}\sim\mathbb{P}^{X}}[q(\mathbf{X}_{i},a)]\), yielding a curve \(a\mapsto\xi(a)\). Observe that the expectation defining \(q\) in Equation (1) requires the true causal model of the conditional distribution of the potential outcomes \(Y_{i}^{a}\). This distribution is unknown and cannot be observed from the data in general. However, the following well-known result establishes conditions for its identifiability from the observed data (Imbens and Rubin, 2015): **Proposition 2.3**.: _Suppose Assumptions 2.1 and 2.2 hold. Then \(q(\mathbf{x},a)=\mathbb{E}[Y_{i}\mid\mathbf{X}_{i}=\mathbf{x},A_{i}=a]\). The right-hand side expression does not involve the potential outcomes._ **Importance sampling / inverse weighting** Define the density ratio \(w_{d}(a|\mathbf{x})\triangleq\pi_{d}(a|\mathbf{x})/\pi_{0}(a|\mathbf{x})\), which is a measure of the domain shift. Given an estimator \(\hat{w}_{d}\), the simplest estimator of \(\mu_{d}\) is based on importance sampling (Munoz and Van Der Laan, 2012). It generalizes the well-known inverse probability weighting (IPW) estimator: \[\hat{\mu}_{d}^{\text{ipw}}=n^{-1}{\sum}_{i=1}^{n}\hat{w}_{d}(A_{i}|\mathbf{X}_{i})Y_{i}. \tag{2}\] When \(w_{d}(A_{i}|\mathbf{X}_{i})\) is correctly estimated, \(\hat{\mu}_{d}^{\text{ipw}}\) is unbiased. However, this estimator is known to suffer from high variance (Munoz and Van Der Laan, 2012). This motivates the discussion of double robustness in the next section.
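Before moving on, a minimal sketch of the importance-sampling estimator in Equation (2), assuming NumPy and SciPy; covariates are omitted, and we use the true density ratio of an additive shift under a standard normal exposure so the answer can be checked in closed form.

```python
import numpy as np
from scipy.stats import norm

def ipw_estimate(y: np.ndarray, w_hat: np.ndarray) -> float:
    """Equation (2): average the outcomes reweighted by w_d = pi_d / pi_0."""
    return float(np.mean(w_hat * y))

rng = np.random.default_rng(0)
a = rng.normal(size=50_000)           # observed exposures, pi_0 = N(0, 1)
y = a**2 + rng.normal(size=a.size)    # toy outcome with q(a) = a^2
d = 0.5                               # additive shift A^d = A - d
w = norm.pdf(a + d) / norm.pdf(a)     # pi_d(a) = pi_0(a + d), so w = pi_0(a+d)/pi_0(a)
print(ipw_estimate(y, w))             # approx E[(A - d)^2] = 1 + d^2 = 1.25
```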
### Doubly-robust inference of SIs Doubly-robust estimation requires estimating two underlying quantities: the importance-sampling ratio and the regression function (Munoz and Van Der Laan, 2012). A doubly-robust stochastic intervention effect estimator is consistent if either of the respective nuisance parameters is consistently estimated (Kennedy, 2016). The doubly-robust estimator also achieves local efficiency when both models are consistently estimated, enabling the use of flexible machine-learning methods. We formulate this section for the case of a single \(d\). Indeed, in many applications of interest, including the one motivating this paper, only a few finite values of \(d\) are relevant (for example, the relevant NAAQS levels are 8, 9, 10, 11, and 12). Results for a single value generally extend to a small finite number of values (Nie et al., 2021). Let \(\hat{w}_{d}\), \(\hat{q}\), and \(\hat{\mu}_{d}\) be estimators of \(w_{d}\), \(q\), and \(\mu_{d}\), respectively. \begin{table} \begin{tabular}{l|l|l} \hline shift & \(A_{i}^{d}\) & \(\pi_{d}\) \\ \hline Cutoff & \(\min(A_{i},d)\) & \(\mathbb{1}\{a<d\}\,\pi_{0}(a|\mathbf{x})+\Pr(A_{i}\geq d\mid\mathbf{x})\,\delta_{d}(a)\) \\ \hline Additive & \(A_{i}-d\) & \(\pi_{0}(a+d|\mathbf{x})\) \\ \hline Percent & \(A_{i}(1-d)\) & \(\frac{1}{1-d}\pi_{0}(a/(1-d)|\mathbf{x})\) \\ \hline Bijection & \(d(A_{i},\mathbf{x})\) & \(\pi_{0}(d^{-1}(a,\mathbf{x})|\mathbf{x})\left|\frac{\partial}{\partial a}d^{-1}(a,\mathbf{x})\right|\) \\ \hline Unit-level & \(d(A_{i},\mathbf{x},i)\) & closed form not available in general \\ \hline \end{tabular} \end{table} Table 1: Common examples of exposure shifts. \(\delta_{d}(a)\) denotes a point mass distribution at \(a=d\). Figure 2: **(a)** Example stochastic intervention (SI) setting an upper cutoff at \(d=9\). **(b)** The resulting shift-response function (SRF) for varying cutoffs \(d\). **(c)** Another example of SI where the shift is a 10% reduction. **(d)** The exposure-response function (ERF) for comparison with panel (a); a point in the curve implies the same hypothetical exposure for all units, being less informative of the causal effect of a shift. In short, doubly-robust estimation of an SRF is achieved by requiring the estimating equation \[\mathbb{E}_{\mathbb{P}_{d}}[\Psi_{\pi_{d}}(\mathbf{O}_{i}^{d},\hat{w}_{d},\hat{q},\hat{\mu}_{d})]=0. \tag{3}\] Here, \(\Psi_{\pi_{d}}\) is the _efficient influence function_ (EIF) of \(\mu_{d}\) (Kennedy, 2016). For additional background, Hines et al. (2022) provide an excellent introduction to semi-parametric inference and the relevance of EIFs in causal analyses. The following two results are taken from Munoz and Van Der Laan (2012) and characterize the estimating equation for stochastic interventions. We refer to this article for proofs of the statements in this section. **Proposition 2.4** (EIF).: _The efficient influence function (EIF) of \(\mu_{d}\) is_ \[\Psi_{\pi_{d}}(\mathbf{O}_{i}^{d},w_{d},q,\mu_{d})\triangleq w_{d}(A_{i}|\mathbf{X}_{i})(Y_{i}-q(\mathbf{X}_{i},A_{i}))+q(\mathbf{X}_{i},A_{i}^{d})-\mu_{d}. \tag{4}\] Solving the estimating equation yields an unbiased estimator of \(\mu_{d}\). To see this, notice that if \(\mathbb{E}_{\mathbb{P}_{d}}[\Psi_{\pi_{d}}(\mathbf{O}_{i}^{d},\hat{w}_{d},\hat{q},\mu_{d})]=0\) holds (where \(\mu_{d}\) is the true parameter), then we can rearrange the terms so that \[\mu_{d}=\mathbb{E}[\hat{q}(\mathbf{X}_{i},A_{i}^{d})]+\mathbb{E}[\hat{w}_{d}(A_{i}|\mathbf{X}_{i})(Y_{i}-\hat{q}(\mathbf{X}_{i},A_{i}))]. \tag{5}\] The following result shows that the estimating equation is solved when either the density ratio or the outcome model is correctly specified, and that the finite-sample contraction rate is controlled by the composite contraction rate of these two estimators. **Proposition 2.5** (Doubly-robust estimation).: _Suppose Assumptions 2.1 and 2.2 hold, and that \(w_{d}(a|\mathbf{x})<\infty\) for all \(a\in\mathcal{A}\) and \(\mathbf{x}\in\mathcal{X}\). Then \(\mathbb{E}_{\mathbb{P}_{d}}[\Psi_{\pi_{d}}(\mathbf{O}_{i}^{d},\hat{w}_{d},\hat{q},\mu_{d})]=0\) if either \(\hat{q}=q\) or \(\hat{w}_{d}=w_{d}\)._ **Augmented IPW** The most straightforward way to construct a doubly-robust estimator so that the estimating equation holds is by plugging the corresponding estimates into the unknown quantities of Equation (5).
This estimator is known as _augmented inverse probability weighting_ (AIPW; Munoz and Van Der Laan, 2012): \[\hat{\mu}_{d}^{\text{aipw}}=n^{-1}{\sum}_{i=1}^{n}\left[\hat{w}_{d}(A_{i}|\mathbf{X}_{i})(Y_{i}-\hat{q}(\mathbf{X}_{i},A_{i}))+\hat{q}(\mathbf{X}_{i},A_{i}^{d})\right]. \tag{6}\] It has been argued by Shi et al. (2019) and Nie et al. (2021) that AIPW-type estimators may be unstable in small samples, despite being asymptotically efficient, due to their dependence on the bias term. This motivates the introduction of targeted regularization. ## 3 Tresnet: Targeted Regularization for Exposure Shifts with Neural Networks **Overview** We now explain the key idea behind tresnet; we summarize it here and provide details in the following subsections. Equation (5) shows that the estimating equation can be interpreted as a bias correction of the plug-in estimator due to the domain shift induced by the new exposure distribution. Targeted regularization was proposed to mitigate the unreliability of the bias term in the context of estimating the average treatment effect of a binary intervention (Shi et al., 2019; Nie et al., 2021). The strategy behind targeted regularization consists of learning an outcome model highly predictive of the observed data, then learning a small perturbation so that the bias term vanishes. In this way, the asymptotic and double robustness properties are preserved while achieving strong performance in finite samples (Shi et al., 2019). ### Targeted regularization Targeted regularization (TR) begins with two models, \(q_{\theta}\) and \(w_{d,\theta}\), parameterized by neural networks, and introduces a perturbation parameter \(\varepsilon_{d}\) to be optimized along with these models. The TR loss is given by \[\begin{split}\mathcal{L}_{\text{TR}}(q_{\theta},w_{d,\theta},\varepsilon_{d})&\triangleq\frac{1}{2n}\sum_{i=1}^{n}w_{d,\theta}(A_{i}|\mathbf{X}_{i})\left(Y_{i}-\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i})\right)^{2},\\ \tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i})&\triangleq q_{\theta}(\mathbf{X}_{i},A_{i})+\varepsilon_{d}.\end{split} \tag{7}\] The key observation is that \(\frac{\partial}{\partial\varepsilon_{d}}\mathcal{L}_{\text{TR}}(q_{\theta},w_{d,\theta},\varepsilon_{d})=0\) if and only if the bias term vanishes, i.e., \(\frac{1}{n}\sum_{i=1}^{n}w_{d,\theta}(A_{i}|\mathbf{X}_{i})(Y_{i}-\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i}))=0\). Upon successful optimization, we can use the perturbed estimator \[\tilde{\mu}_{d,\theta}=\frac{1}{n}\sum_{i=1}^{n}\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i}^{d}), \tag{8}\] which satisfies the finite-sample estimating equation \(\frac{1}{n}\sum_{i=1}^{n}\Psi_{\pi_{d}}(\mathbf{O}_{i}^{d},w_{d,\theta},\tilde{q}_{d,\theta},\tilde{\mu}_{d,\theta})=0\). When learning an SRF for a finite set of values \(\mathcal{D}\), we can use the integrated loss \(\mathcal{L}_{\text{TR}}^{\mathcal{D}}(q_{\theta},w_{d,\theta},\varepsilon_{\mathcal{D}})=\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\mathcal{L}_{\text{TR}}(q_{\theta},w_{d,\theta},\varepsilon_{d})\) with \(\varepsilon_{\mathcal{D}}=(\varepsilon_{d})_{d\in\mathcal{D}}\). When \(\mathcal{D}\) is infinite or very large, additional analytical considerations are required in general (Nie et al., 2021). Our focus here is a small finite set of values, as required in our application of interest and in many realistic settings.
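To illustrate Equations (7) and (8) for a single shift \(d\), here is a minimal PyTorch sketch in which fixed tensors stand in for the network's outcome and density-ratio heads; in the full method, \(q_\theta\) and \(w_{d,\theta}\) are trained jointly through the combined loss given next.

```python
import torch

def targeted_loss(y, q_pred, w_hat, eps):
    """Equation (7): weighted squared error of the perturbed model q + eps.
    At a stationary point in eps, the bias term mean(w * (y - q - eps)) = 0."""
    return 0.5 * torch.mean(w_hat * (y - (q_pred + eps)) ** 2)

torch.manual_seed(0)
y, q_pred = torch.randn(256), torch.randn(256)  # stand-ins for Y_i and q_theta(X_i, A_i)
q_shift = torch.randn(256)                      # stand-in for q_theta at (X_i, A_i^d)
w_hat = torch.rand(256) + 0.5                   # stand-in for w_d(A_i | X_i)

eps = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([eps], lr=5e-2)
for _ in range(1000):
    opt.zero_grad()
    targeted_loss(y, q_pred, w_hat, eps).backward()
    opt.step()

with torch.no_grad():
    bias = torch.mean(w_hat * (y - q_pred - eps))  # approx 0 at convergence
    mu_tilde = torch.mean(q_shift + eps)           # Equation (8)
```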
The targeted loss is learned along with the empirical risk losses of \(q_{\theta}\) and \(w_{d,\theta}\): \[\mathcal{L}(q_{\theta},w_{d,\theta},\varepsilon_{d})=\mathcal{L}_{Y}(q_{\theta})+\mathcal{L}_{A}(w_{d,\theta})+\beta\,\mathcal{L}_{\text{TR}}(q_{\theta},w_{d,\theta},\varepsilon_{d}). \tag{9}\] In what follows, we explain the empirical risk losses in more detail. Recall that the idea is to calibrate \(q_{\theta}\) to the observed data so that \(\varepsilon_{d}\) is only a small perturbation. For generic applications, we use the quadratic loss \(\mathcal{L}_{Y}(q_{\theta})=\frac{1}{2n}\sum_{i=1}^{n}(Y_{i}-q_{\theta}(\mathbf{X}_{i},A_{i}))^{2}\). However, we will use a more appropriate loss for count data in our motivating application. Since specifying \(\mathcal{L}_{A}(w_{d,\theta})\) requires considering different cases, we explain it in the next section. ### Density ratio learning We discuss three strategies for density ratio learning. The first, and most obvious, is to leverage the fact that the shifted exposure distribution can be written as a transformation \(\pi_{d}=F(\pi_{0})\) (see Table 1). Accordingly, we can let \(\pi_{\theta}\) be a neural network meant to approximate \(\pi_{0}\), use the average negative log-likelihood loss \(\mathcal{L}_{A}^{\textsc{gps}}(\pi_{\theta})=-\frac{1}{n}\sum_{i=1}^{n}\log(\pi_{\theta}(A_{i}|\mathbf{X}_{i}))\), and then set \(w_{d,\theta}=F(\pi_{\theta})/\pi_{\theta}\). Here gps stands for the generalized propensity score. However, it has been noted that direct density ratio estimation may be easier than learning a density (Sugiyama et al., 2012). Furthermore, there may be cases where \(A_{i}^{d}\) is defined in such a way that \(F\) is not available. To develop a method applicable to these cases, we depart from the following known result. **Lemma 3.1**.: _Let \(Z_{i}\sim_{\text{\sc iid}}\operatorname{Bernoulli}(1/2)\), \(A_{i}^{1}\sim_{\text{\sc iid}}\pi_{1}\) and \(A_{i}^{2}\sim_{\text{\sc iid}}\pi_{2}\), and define the auxiliary variable \(\bar{A}_{i}\triangleq Z_{i}A_{i}^{1}+(1-Z_{i})A_{i}^{2}\). Then \(\Pr(Z_{i}=1|\bar{A}_{i})=\sigma\big(\log(\pi_{1}(\bar{A}_{i})/\pi_{2}(\bar{A}_{i}))\big)\), where \(\sigma(l)=(1+e^{-l})^{-1}\) is the sigmoid function._ See Sugiyama et al. (2012) for the proof of this fact. According to the lemma, we can learn the log-density ratio by learning a classifier between samples \(A_{i}^{d}\) and \(A_{i}\). For convenience, denote \(p_{d,\theta}(a|\mathbf{x})=\sigma(\log(w_{d,\theta}(a|\mathbf{x})))\). Then we can use the classification loss \[\mathcal{L}_{A}(w_{d,\theta})=-\frac{1}{2n}\sum_{i=1}^{n}\left(\log(p_{d,\theta}(A_{i}^{d}|\mathbf{X}_{i}))+\log(1-p_{d,\theta}(A_{i}|\mathbf{X}_{i}))\right).\] Even when a transformation of \(\pi_{0}\) is known, it is still possible to learn the density ratio entirely with the above classification loss and the parameterization \(w_{d,\theta}=F(\pi_{\theta})/\pi_{\theta}\). We call this the hybrid-classifier (hybrid-class) strategy. We found it to be very competitive with the log-likelihood loss in our experiments. We suspect it is less prone to overfitting, particularly when combined with label smoothing. Finally, the full-classifier (full-class) case is when we must model \(w_{d,\theta}\) directly because it cannot be expressed in terms of \(\pi_{0}\). In the next section we specify a suitable architecture for this case.
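A minimal self-contained sketch of the classification approach of Lemma 3.1 (the full-class strategy), assuming PyTorch: a one-dimensional exposure with no covariates and a 10% percent-reduction shift. The trained classifier's logit is the estimated log-density ratio.

```python
import torch
import torch.nn as nn

class LogRatioNet(nn.Module):
    """Direct model of log w_d(a) = log(pi_d(a) / pi_0(a)); covariates omitted."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, a):
        return self.net(a.unsqueeze(-1)).squeeze(-1)

def ratio_loss(model, a_obs, a_shift):
    """Classify shifted (label 1) vs. observed (label 0) samples; by Lemma 3.1,
    the optimal logit equals the log-density ratio."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return 0.5 * (bce(model(a_shift), torch.ones_like(a_shift))
                  + bce(model(a_obs), torch.zeros_like(a_obs)))

torch.manual_seed(0)
a_obs = torch.randn(4096).abs() + 5.0   # synthetic positive exposures
a_shift = a_obs * 0.9                   # 10% percent-reduction shift
model = LogRatioNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    ratio_loss(model, a_obs, a_shift).backward()
    opt.step()
with torch.no_grad():
    w_hat = model(a_obs).exp()          # estimated density ratio at the data
```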
### Neural Network Architecture When \(\pi_{d}\) can be written in terms of \(\pi_{0}\), our choice of neural network architecture is identical to vcnet (Nie et al., 2021). However, we remark that the loss functions used to train the network differ from the ones used in vcnet, since the latter concerns the estimation of ERFs. We provide a brief overview of this architecture for completeness and explain the changes necessary for the case when \(\pi_{d}\) cannot be written in terms of \(\pi_{0}\). Nie et al. (2021) also provide a visual representation of the architecture. **vcnet backbone** The initial part of the network learns a representation of the covariates \(\mathbf{Z}_{i}=f_{\theta}(\mathbf{X}_{i})\) using a feed-forward network. Then, the outcome model takes the representation \(\mathbf{Z}_{i}\) as an input and passes it through a varying-coefficient feed-forward network, where the weights and biases are functions of the exposure. This effect is achieved by parameterizing the weights and biases as learnable linear combinations of a spline basis. For the generalized propensity score, Nie et al. (2021) propose to discretize the exposure domain into \(k\) subintervals and use a single linear layer taking \(\mathbf{Z}_{i}\) as input and having \(k\) outputs, where each output represents the conditional probability of the corresponding subinterval. They focus on the case when \(\mathcal{A}=(0,1)\), but any bounded interval could be considered. To reduce the effect of discretization, they propose to use interpolation; we refer to the original vcnet paper for details. As the authors remark, alternatives like mixture density networks (Bishop, 1994) or normalizing flows (Rezende and Mohamed, 2015) could be considered. **Ratio model** Motivated by this architecture, we propose the following strategy for the case when \(w_{d,\theta}\) needs to be estimated directly. First, if only one value of \(d\) is of interest, we can use a feed-forward network with \(k\) outputs, each giving the value of \(\log w_{d,\theta}\) in a subinterval of \(\mathcal{A}\). Instead, when multiple values of \(d\) are considered, we can use a varying-coefficient feed-forward network parameterized by \(d\). This ensures the ability to learn for various values of \(d\) without significantly increasing the number of parameters. **Perturbation parameters** We parameterize each \(\varepsilon_{d}\) as an independent free parameter, in contrast to vcnet, which uses a spline parameterization. We found that independent parameters work better when considering only a small number of possible values of \(d\), as in our applications.
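To illustrate the varying-coefficient backbone just described, here is a minimal PyTorch sketch of one exposure-dependent layer; a truncated polynomial basis stands in for the spline basis used by vcnet, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class VCLinear(nn.Module):
    """Linear layer whose weights depend on the exposure a through a basis
    expansion: W(a) = sum_k phi_k(a) * W_k (polynomial basis for simplicity)."""
    def __init__(self, d_in: int, d_out: int, n_basis: int = 4):
        super().__init__()
        self.n_basis = n_basis
        self.weight = nn.Parameter(0.1 * torch.randn(n_basis, d_out, d_in))
        self.bias = nn.Parameter(torch.zeros(n_basis, d_out))

    def forward(self, z, a):
        phi = torch.stack([a**k for k in range(self.n_basis)], dim=-1)  # (B, K)
        W = torch.einsum("bk,koi->boi", phi, self.weight)               # (B, out, in)
        b = torch.einsum("bk,ko->bo", phi, self.bias)                   # (B, out)
        return torch.einsum("boi,bi->bo", W, z) + b

layer = VCLinear(d_in=16, d_out=8)
z = torch.randn(32, 16)   # covariate representation Z_i
a = torch.rand(32)        # exposures rescaled to (0, 1)
out = layer(z, a)         # features whose mapping varies smoothly with a
```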
## 4 Targeted Regularization For Count Data The targeted loss needs to be modified to be useful with count data. Count data are very common within public health and epidemiology studies, for instance, to represent the number of deaths, infections, or hospitalizations. A common modeling choice for regression with a count outcome is to assume a Poisson distribution of the count variable. For causal inference, this would imply that \(Y_{i}^{a}\sim\operatorname{Poisson}(q(\mathbf{X}_{i},a))\). Its most important property is that the variance equals the mean, \(\operatorname{Var}[Y_{i}^{a}]=q(\mathbf{X}_{i},a)\), so the error scale grows with the mean. This property is often referred to as the _multiplicative errors_ property. When the true data has this property, regressing with the quadratic loss (which assumes homoscedastic errors) will result in a misspecified model that is very sensitive to large values of \(Y_{i}\). In this section, we demonstrate how to adapt our proposed targeted regularization to deal with multiplicative errors, focusing on the Poisson model. The first step is to adapt the outcome model for Poisson regression. For this purpose, we replace the mean-squared error (Gaussian) loss with the Poisson (negative log-likelihood) loss. Up to a constant, it can be written as \[\begin{split}\mathcal{L}_{Y}^{\text{Poi}}(q_{\theta})&\triangleq\frac{1}{n}\sum_{i=1}^{n}\left(q_{\theta}(\mathbf{X}_{i},A_{i})-Y_{i}\log(q_{\theta}(\mathbf{X}_{i},A_{i}))\right),\\ q_{\theta}(\mathbf{X}_{i},A_{i})&\triangleq e^{f_{\theta}(\mathbf{X}_{i},A_{i})}.\end{split} \tag{10}\] Here \(f_{\theta}\) is the neural network model, and we are using the canonical exp/log link function. We now express the targeted loss for Poisson regression. The two key differences are the use of a multiplicative perturbation and the reweighted negative log-likelihood loss: \[\begin{split}\mathcal{L}_{\text{TR}}^{\text{Poi}}(q_{\theta},w_{d,\theta},\varepsilon_{d})&\triangleq\frac{1}{n}\sum_{i=1}^{n}w_{d,\theta}(A_{i}|\mathbf{X}_{i})\left(\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i})-Y_{i}\log(\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i}))\right),\\ \tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i})&\triangleq e^{\varepsilon_{d}}q_{\theta}(\mathbf{X}_{i},A_{i}).\end{split} \tag{11}\] It holds that \(\frac{\partial}{\partial\varepsilon_{d}}\mathcal{L}_{\text{TR}}^{\text{Poi}}(q_{\theta},w_{d,\theta},\varepsilon_{d})=0\) if and only if the bias term in the estimating equation vanishes, i.e., \(\frac{1}{n}\sum_{i=1}^{n}w_{d,\theta}(A_{i}|\mathbf{X}_{i})(Y_{i}-\tilde{q}_{d,\theta}(\mathbf{X}_{i},A_{i}))=0\). Finally, we remark that count data regression often requires an _offset_ term \(N_{i}\). For example, if unit \(i\) represents a geographic area with a population (the offset) \(N_{i}\), then by defining \(q_{\theta}(\mathbf{X}_{i},A_{i})\triangleq N_{i}e^{f_{\theta}(\mathbf{X}_{i},A_{i})}\) we can interpret the network output as the (logarithmic) unit rate of occurrence. In our application, the offset is the population enrolled in Medicare for a given county for a given year.
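A minimal sketch of the Poisson losses in Equations (10) and (11), assuming PyTorch; `f_out` stands in for the network output \(f_\theta(\mathbf{X}_i,A_i)\) and the offset plays the role of the population at risk. In one dimension, the stationary perturbation even has the closed form \(e^{\varepsilon_d}=\sum_i w_i Y_i / \sum_i w_i q_i\), which the last lines verify numerically.

```python
import torch

def poisson_loss(f_out, y, offset):
    """Equation (10): Poisson negative log-likelihood with log link and offset,
    up to a constant; q = offset * exp(f_out)."""
    q = offset * torch.exp(f_out)
    return torch.mean(q - y * torch.log(q))

def poisson_targeted_loss(f_out, y, w_hat, eps, offset):
    """Equation (11): reweighted Poisson loss of the multiplicatively perturbed
    model q_tilde = exp(eps) * q; stationarity in eps forces
    mean(w * (y - q_tilde)) = 0, i.e., the bias term vanishes."""
    q_tilde = torch.exp(eps) * offset * torch.exp(f_out)
    return torch.mean(w_hat * (q_tilde - y * torch.log(q_tilde)))

torch.manual_seed(0)
f_out = 0.3 * torch.randn(512)
offset = torch.randint(100, 1000, (512,)).float()  # population at risk N_i
y = torch.poisson(offset * torch.exp(f_out))       # synthetic death counts
w_hat = torch.rand(512) + 0.5

q = offset * torch.exp(f_out)
eps = (torch.sum(w_hat * y) / torch.sum(w_hat * q)).log()       # closed form
print(poisson_targeted_loss(f_out, y, w_hat, eps, offset))
print(torch.mean(w_hat * (y - torch.exp(eps) * q)))             # ~0: bias vanishes
```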
## 5 Related Work **Neural networks for estimating causal effects with continuous treatments** tresnet contributes to the growing neural network literature on causal inference with continuous treatments. This literature is mainly divided among works aiming to estimate individualized effects (e.g., Bica et al., 2020; Yoon et al., 2018) and those estimating ERFs, also known as average dose-response functions (ADRFs). tresnet is more closely related to the latter group. In particular, we implement tresnet on top of the architectures proposed by Nie et al. (2021) and Schwab et al. (2020). **Doubly-robust estimation for stochastic interventions** EIFs have been widely studied for constructing asymptotically efficient estimators (Bickel et al., 1993; Kennedy, 2016; Van der Laan et al., 2011). See Hines et al. (2022) for an accessible review. Targeted regularization is a special case. It was originally proposed for estimation with binary treatments by Shi et al. (2019) and extended to ERF estimation by Nie et al. (2021). We have adapted this framework to the case of SIs with continuous exposure. Notably, Duong et al. (2021) use neural network methods for SIs with binary treatments. To our knowledge, no previous work has explored the estimation of SIs for continuous exposures with neural networks. Our derivation of doubly-robust inference for SIs closely follows Munoz and Van Der Laan (2012). ## 6 Synthetic Data Experiments We conducted a series of experiments to evaluate the performance of tresnet. We use semi-synthetic datasets since evaluating causal inference requires access to the true counterfactuals, which are rarely available in real data. The semi-synthetic datasets are ihdp (Hill, 2011) and news (Newman, 2008), which are widely used as causal inference benchmarks. The continuous-treatment-adapted versions of these datasets, which we use, were proposed by Nie et al. (2021). We also use the fully synthetic dataset (sim) proposed in that study. We explain the main experiment here and refer to the Appendix for additional ones. These experiments include an ablation on the density ratio strategies, showing that all of them are fairly comparable (surprising to us), and a variant for the Poisson regression, showing it outperforms the Gaussian case for count data. For the tresnet experiments presented here, we use the hybrid-class loss introduced in Section 3.2. **Metrics** For each dataset, we use the counterfactuals to generate the true SRF corresponding to a percent-reduction exposure shift. We use 20 uniformly spaced reductions \(\{d_{k}\}_{k=1}^{20}\subset(0,0.5)\). We evaluate performance using the average bias (bias) and root-mean-squared error (rmse) along the SRF. Letting \(\hat{\mu}_{d}\) be the estimate of \(\mu_{d}\), these quantities are defined as \(\text{bias}\triangleq 20^{-1}{\sum_{k=1}^{20}}(\hat{\mu}_{d_{k}}-\mu_{d_{k}})\) and \(\text{rmse}\triangleq(20^{-1}{\sum_{k=1}^{20}}(\hat{\mu}_{d_{k}}-\mu_{d_{k}})^{2})^{1/2}\). We perform multiple simulations from 100 random seeds and report the average and standard deviation of these metrics. Each simulation generates a different train-test split, and we only report the metric values on the test set. We provide additional details in the appendix. **Baselines** We evaluate ipw and aipw (Equations (2) and (6)). For the latter, we use the same architecture but without targeted regularization; we call this baseline vcnet-aipw. In addition, we compare with the outcome models (no regularization) of vcnet (Nie et al., 2021) and drnet (Schwab et al., 2020). We compare these baselines against tresnet, implemented with the hybrid density ratio estimator and the vcnet backbone for the outcome. The hyperparameter \(\beta\) was fixed at \(0.1\) without any tuning. Two variants of tresnet are also considered: one in which the estimator is fed as an input to Equation (6) (tresnet-aipw), and one in which we replace the vcnet-based outcome backbone with drnet (tresnet-drnet). **Results** Table 2 shows the results. We observe that tresnet consistently outperforms ipw, aipw, and vcnet in all settings. Contrary to our expectations, vanilla drnet was surprisingly strong on the ihdp dataset. Combining it with tresnet improved the bias but worsened the rmse. Overall, we suspect that these benchmarks are fairly easy for the outcome model, and validation in more complex scenarios is required.
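For reference, the evaluation metrics reported in Table 2 reduce to a few lines; a minimal NumPy sketch over the grid of 20 percent-reduction shifts:

```python
import numpy as np

def srf_metrics(mu_hat: np.ndarray, mu_true: np.ndarray):
    """Average bias and rmse of an estimated SRF along the grid of shifts d_k."""
    bias = float(np.mean(mu_hat - mu_true))
    rmse = float(np.sqrt(np.mean((mu_hat - mu_true) ** 2)))
    return bias, rmse

d_grid = np.linspace(0.0, 0.5, 22)[1:-1]  # 20 points strictly inside (0, 0.5)
```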
## 7 The Health Benefits of Stricter Air Pollution Regulation in the US This section uses tresnet to consider the EPA petition for public comment regarding its proposal to lower the National Ambient Air Quality Standard (NAAQS) for annual PM\({}_{2.5}\) pollution from the current value of 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). The newly proposed standards suggest values of 9-10 \(\upmu\mathrm{g}/\mathrm{m}^{3}\), but the EPA could consider going as low as 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). **Overview** To reflect the EPA guidelines, we consider two types of exposure shifts from Table 1: a cutoff intervention and percent reductions, each providing a different perspective and valuable information. An explanation and visual intuition for these exposure shifts were provided in Table 1 and Figure 2. For each case, we estimate the corresponding SRF using tresnet for Poisson regression (Section 4). The results are shown in Figures 1 and 4. The exposure shifts can be described as follows: 1. A cutoff shift \(A_{i}^{d}=\min(A_{i},d)\), where \(d\) goes from 15 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) down to 6 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). We expect that at 15 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) there will be no reduction in deaths, since more than 99% of observations fall below that level. Analyzing this exposure shift sheds light on a counterfactual world where pollution exposures stay as they are at all levels below the standard, while only high values are reduced. 2. An exposure shift \(A_{i}^{d}=A_{i}(1-d)\) implementing an overall percent reduction. We take \(d\) in the \(0-50\%\) range in the simulations. We can interpret these shifts in terms of the NAAQS by mapping each percent reduction to percentile and quantile metrics. **Data** The application dataset comes from a previous study by Wu et al. (2020) assessing the effect of PM\({}_{2.5}\) on mortality among the elderly. We provide an overview here and refer to their article for details. The dataset includes Medicare data from the 2000-2016 cohort3. This cohort comprises 68,503,979 individuals and 27,106,639 early deaths. Measurements include age, sex, race/ethnicity, residential ZIP code, Medicaid eligibility, and date of death. The records are aggregated at the annual ZIP code level, resulting in 573,370,257 ZIP code years. The PM\({}_{2.5}\) exposure measurements are derived from the ensemble prediction model of Di et al. (2019). The ensemble was trained on daily PM\({}_{2.5}\) concentrations measured at 2,156 US EPA monitoring sites, using predictor variables derived from satellite data, land-use data, and atmospheric measurements. It is then averaged at the annual ZIP code level. Finally, these sources are combined with demographic, socio-economic, and environmental and weather confounding factors (Tec et al., 2023) derived from the US Census Bureau, gridMET (Abatzoglou, 2013), and risk factors from the US Centers for Disease Control and Prevention. Footnote 3: Access to Medicare data is provided by the Centers for Medicare and Medicaid Services under a restricted usage agreement that protects sensitive information. The authors have completed the required IRB and ethical training to work with these data. **Model and architecture** We conduct a Poisson regression for count data and apply targeted regularization for Poisson regression (Equation (11)). Each unit in the regression dataset is a ZIP code year. The year is included as an additional covariate. We use the same neural network architecture and hyperparameters as in the simulation study. For the cutoff shift, we use the classifier density ratio loss; we prefer it because it provides a smoothing effect over the mass point in the shifted density (see Table 1). For the percent reduction, we use the parameterized density ratio loss.
We fit the model using an 80-20% train-test split, taking the model that performed best on the reweighted validation error. We repeat the procedure for 100 runs with different seeds and data splits. \begin{table} \begin{tabular}{c c|c c c c|c c c} \hline \hline Dataset & Metric & ipw & vcnet-aipw & vcnet & drnet & **tresnet** & **tresnet-aipw** & **tresnet-drnet** \\ \hline \multirow{2}{*}{ihdp} & bias & \(-0.38\pm 0.08\) & \(-0.30\pm 0.24\) & \(-0.30\pm 0.24\) & \(-0.22\pm 0.10\) & \(-0.23\pm 0.26\) & \(-0.23\pm 0.26\) & \(-0.20\pm 0.1\) \\ & rmse & \(1.02\pm 0.07\) & \(0.46\pm 0.25\) & \(0.47\pm 0.25\) & \(\mathbf{0.34\pm 0.06}\) & \(0.40\pm 0.24\) & \(0.40\pm 0.24\) & \(0.38\pm 0.09\) \\ \hline \multirow{2}{*}{news} & bias & \(-0.02\pm 0.03\) & \(\mathbf{0.00\pm 0.03}\) & \(\mathbf{0.00\pm 0.03}\) & \(\mathbf{0.00\pm 0.03}\) & \(\mathbf{0.00\pm 0.03}\) & \(0.00\pm 0.03\) & \(0.02\pm 0.03\) \\ & rmse & \(\mathbf{0.03\pm 0.02}\) & \(\mathbf{0.03\pm 0.02}\) & \(\mathbf{0.03\pm 0.01}\) & \(\mathbf{0.03\pm 0.01}\) & \(\mathbf{0.03\pm 0.02}\) & \(\mathbf{0.03\pm 0.02}\) & \(\mathbf{0.03\pm 0.01}\) \\ \hline \multirow{2}{*}{sim} & bias & \(0.10\pm 0.06\) & \(0.06\pm 0.06\) & \(0.06\pm 0.06\) & \(0.07\pm 0.06\) & \(\mathbf{0.04\pm 0.07}\) & \(\mathbf{0.04\pm 0.07}\) & \(0.07\pm 0.06\) \\ & rmse & \(0.12\pm 0.05\) & \(0.08\pm 0.04\) & \(0.08\pm 0.04\) & \(0.10\pm 0.04\) & \(\mathbf{0.07\pm 0.04}\) & \(\mathbf{0.07\pm 0.05}\) & \(0.10\pm 0.04\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison on benchmarks for continuous treatments. The task is to estimate an SRF for a percent reduction of up to 50%. We found no evidence of overfitting with the selected models. We approximately quantify uncertainty by taking the IQR (25% and 75% quantiles) over the test-set predictions of the multiple re-runs. While this is not a formal estimation procedure, ensembling neural networks gives reasonable results (Izmailov et al., 2021). **Results** Figure 1 presents the effect of the cutoff intervention on deaths; it shows a decrease in deaths as the NAAQS cutoff is reduced. The slope becomes steeper as the cutoff is reduced, likely because lower cutoffs affect a larger fraction of the observed population. For instance, the figure shows that by enforcing 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\), currently observed death counts would decrease by around 1%. In addition, if the cutoff is lowered to 9 \(\upmu\mathrm{g}/\mathrm{m}^{3}\), deaths could fall by up to around 4%. The slope is increasingly steep, strongly suggesting increasing marginal benefits of lowering the standard. For instance, this means that there is as much or more gain from lowering the standard from 10 to 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) as there is from lowering it from 12 to 10 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). The results for the percent-reduction exposure shift are presented in Figure 4. A striking result is that the decrease in deaths is linear with respect to the percent decrease in pollution. The model does not enforce this result; in fact, the simulation study shows its ability to fit highly non-linear SRFs. The SRF shows an approximate 0.5% decrease in deaths resulting from a 10% decrease in \(\mathrm{PM}_{2.5}\). If we take as reference the 75% quantile of exposures, currently at 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) (Figure 3), we can identify interesting points along the SRF. Lowering to 10, 9, and 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) requires a 15, 25, and 35 percent reduction in \(\mathrm{PM}_{2.5}\), respectively.
These reductions correspond in turn to 0.7%, 1.1%, and 1.5% decreases in mortality. Furthermore, Figure 3 shows that under a 20% overall reduction, about 30% of the ZIP code years would still exceed a 9 \(\upmu\mathrm{g}/\mathrm{m}^{3}\) threshold. It also shows that 25% of factual observations were above the current standard of 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). In addition, the quantile view presented in the same figure shows that a 32% overall reduction would be required to bring the 75% quantile down from 12 to 8 \(\upmu\mathrm{g}/\mathrm{m}^{3}\). Comparing the results of the two exposure shifts suggests that further gains are obtained by enforcing the standard as a hard limit rather than as a homogeneous reduction that allows a few locations to deviate. ## 8 Discussion and Ethics Statement We have taken an important step toward answering the open public health question concerning the health benefits of lowering the air quality safety standards in the US. To answer this question, we have provided the first causal inference method using neural networks for continuous stochastic interventions. Moreover, we extended this method to deal with count data, as necessitated by our application, and perhaps many applications in the public health and epidemiology domains. There are several key limitations worth mentioning. First, we only assessed uncertainty using an approximate cross-validation procedure. Thus, we advise against interpreting the statistical significance of the results based on the current confidence bounds, particularly for actionable policy considerations. Future work could explore combining tresnet with Bayesian methods. Second, our application considers hypothetical exposure shifts (representative of the EPA rule) but does not address the compliance question of estimating the most likely exposure shift produced by the new rule. Follow-up work will focus on this part of the application, which can also use tools found in the causal inference literature. Third, the annual average \(\mathrm{PM}_{2.5}\) assessments evaluated at the ZIP-code level are predictions rather than the true observed values, making our analysis subject to attenuation bias caused by measurement error. However, several experiments on measurement error with clustered air pollution exposures have demonstrated that the attenuation pulls the causal effect towards a null result, implying that the benefits of reducing the NAAQS may be greater than what we report (Josey et al., 2022; Wei et al., 2022). Finally, a potential disadvantage (and advantage) of the SI framework is that it places an additional burden on the analyst designing the exposure shift. Thus, SRF estimates have a propensity for misuse if the assumptions behind the exposure shift considered are not clearly and carefully stated. Figure 4: Estimated SRF of the total deaths (%) for different overall percent reductions. Figure 3: _(Left)_ Fraction (%) of observed units remaining above a \(\mathrm{PM}_{2.5}\) limit as a function of the reduction (%), considering different NAAQS (the current NAAQS is 12 \(\upmu\mathrm{g}/\mathrm{m}^{3}\)). _(Right)_ Quantiles of the annual \(\mathrm{PM}_{2.5}\) after the percent reduction. ## Acknowledgements The work was supported by the National Institutes of Health (R01AG066793, R01ES030616, R01ES034373), the Alfred P. Sloan Foundation (G-2020-13946), and the Ren Che Foundation supporting the climate-smart public health project in the Golden Lab.
Computations ran on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
2303.06349
Resurrecting Recurrent Neural Networks for Long Sequences
Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train. Deep state-space models (SSMs) have recently been shown to perform remarkably well on long sequence modeling tasks, and have the added benefits of fast parallelizable training and RNN-like fast inference. However, while SSMs are superficially similar to RNNs, there are important differences that make it unclear where their performance boost over RNNs comes from. In this paper, we show that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks, while also matching their training speed. To achieve this, we analyze and ablate a series of changes to standard RNNs including linearizing and diagonalizing the recurrence, using better parameterizations and initializations, and ensuring proper normalization of the forward pass. Our results provide new insights on the origins of the impressive performance of deep SSMs, while also introducing an RNN block called the Linear Recurrent Unit that matches both their performance on the Long Range Arena benchmark and their computational efficiency.
Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, Soham De
2023-03-11T08:53:11Z
http://arxiv.org/abs/2303.06349v1
# Resurrecting Recurrent Neural Networks for Long Sequences ###### Abstract Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train. Deep state-space models (SSMs) have recently been shown to perform remarkably well on long sequence modeling tasks, and have the added benefits of fast parallelizable training and RNN-like fast inference. However, while SSMs are superficially similar to RNNs, there are important differences that make it unclear where their performance boost over RNNs comes from. In this paper, we show that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks, while also matching their training speed. To achieve this, we analyze and ablate a series of changes to standard RNNs including linearizing and diagonalizing the recurrence, using better parameterizations and initializations, and ensuring proper normalization of the forward pass. Our results provide new insights on the origins of the impressive performance of deep SSMs, while also introducing an RNN block called the Linear Recurrent Unit that matches both their performance on the _Long Range Arena_ benchmark and their computational efficiency. ## 1 Introduction Recurrent neural networks (RNNs) have played a central role since the early days of deep learning, and are a natural choice when modelling sequential data (Elman, 1990; Hopfield, 1982; McCulloch and Pitts, 1943; Rumelhart et al., 1985). However, while these networks have strong theoretical properties, such as Turing completeness (Chung and Siegelmann, 2021; Kilian and Siegelmann, 1996), it is well-known that they can be hard to train in practice. In particular, RNNs suffer from the vanishing and exploding gradient problem (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2013), which makes it difficult for these models to learn about the long-range dependencies in the data. Several techniques were developed that attempt to mitigate this issue, including orthogonal/unitary RNNs (Arjovsky et al., 2016; Helfrich et al., 2018), and gating mechanisms such as long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRUs) (Cho et al., 2014). Nonetheless, these models are still slow to optimize due to the inherently sequential nature of their computation (Kalchbrenner et al., 2016), and are therefore hard to scale. In recent years, Transformers (Vaswani et al., 2017) have gained increasing prominence for sequence modelling tasks, achieving remarkable success in a wide range of applications (Brown et al., 2020; Dosovitskiy et al., 2020; Jumper et al., 2021). Compared to RNNs, attention layers are easier to scale and parallelize during training, and crucially they do not suffer from the vanishing gradient problem, since the interaction between any two tokens in the sequence is modeled by direct edges in the network. A key issue with attention layers however is that their computational and memory costs scale quadratically as \(O(L^{2})\) with the sequence length \(L\). Transformers can therefore be especially expensive to deploy on long sequences. RNNs, which scale linearly with the sequence length, are therefore typically faster than transformers at inference time even for modest sequence lengths (Liu et al., 2019). Motivated by these problems, Gu et al. 
(2021) recently introduced the S4 model, a carefully designed deep state-space model (SSM) achieving remarkable performance on tasks from the Long Range Arena (LRA) (Tay et al., 2020), a benchmark explicitly designed to require very long-ranged reasoning. S4 is theoretically principled and inspired by continuous-time linear SSMs; well-established components of modern control systems. More importantly, the S4 layer and its variants (DSS, S4D, S5, etc.) (Gu et al., 2022; Gupta et al., 2022; Smith et al., 2022) overcome the \(O(L^{2})\) bottleneck of attention layers by modeling interactions between tokens using a hidden state (like RNNs) under proper discretization techniques. These models can be made very efficient at inference time by simply unrolling the layer like an RNN. Furthermore, since SSMs are linear in the temporal dimension, they are easily parallelizable during training, in contrast to the slow sequential nature of training a typical RNN. This makes them very computationally efficient on long sequences. While the S4 model is equivalent to an RNN during inference, it has a number of unique characteristics during training. For example, S4 is parameterized as a discretization of a latent continuous-time system of differential equations. S4 also uses specific initializations of the state matrices motivated from the theory of polynomial projections (Gu et al., 2020). While these characteristics might seem to motivate the impressive performance of these models, later works (Gu et al., 2022; Gupta et al., 2022; Smith et al., 2022) have suggested that the specific initialization used by S4 is often not crucial for performance, and that the discretization rules which achieve best performance may deviate from theory (Smith et al., 2022). It is therefore unclear what these unique characteristics of the deep SSMs are doing mechanistically, and how they can be simplified. Motivated by the striking similarities between RNNs and deep SSMs, and in an attempt to better understand the underlying mechanism driving the performance of these models, we study the power and limitations of RNNs when used as core components of deep architectures for long-range reasoning. Our main goal is to answer the question: _"Can we match the performance and efficiency of deep continuous-time SSMs using deep RNNs?"_ We give a _positive answer_ to this question. We show that the performance boost provided by deep SSMs like S4 can also be achieved via a series of small changes to a vanilla deep RNN. With these changes, we can recover the performance and efficiency of these deep SSMs on the Long Range Arena (LRA) benchmark (Tay et al., 2020). We call this new RNN model the Linear Recurrent Unit (or LRU for short). Main Steps.We outline here the main steps needed towards crafting performant and efficient RNN models. Note that while some of these observations have been made in prior works (see §B), we provide novel perspectives and careful ablations leading to new insights. Figure 1: **(Left)** _Deep Linear Recurrent Unit (LRU) architecture introduced in this paper, inspired by S4 (Gu et al., 2021). The model is a stack of LRU blocks, with nonlinear projections in between, and also uses skip connections and normalization methods like batch/layer normalization. We expand on the details in §D and provide pseudocode in §A.
We also use the same architecture structure (Norm-Recurrence-GLU-Skip) for every variant of the recurrent module in our study (\(\tanh\) dense, linear dense, etc.). (Right) Summary of effects for the main steps outlined in the introduction towards designing LRUs starting from \(\tanh\) RNNs. Shown is the average performance (3 seeds) of the recurrent module at each step on the Long Range Arena (LRA), compared to average performance of deep SSMs. For all LRA tasks, we match the performance of deep SSMs like S4/S4D/S5 with LRUs. Detailed results in §3._ Each step presented in this paper unveils a specific property of recurrent networks, and showcases the challenges and best practices in training and initializing deep RNNs. * **Linear Recurrences.** When replacing SSM layers in a deep architecture with vanilla RNN layers using tanh or ReLU activations, the performance on Long Range Arena (LRA) drops significantly. Surprisingly, in §3.1 we find that simply _removing the nonlinearities in the recurrence of the RNN_ (i.e., using linear recurrences) gives a substantial boost in test accuracy. We motivate this effect in §E.1 by showing that stacking linear RNN layers and nonlinear MLP blocks (Fig.1) can indeed model complex nonlinear sequence-to-sequence maps without the need for nonlinearities in the recurrence. While dropping the nonlinearity does not seem to harm expressivity, it leads to several advantages, from the ability to directly control how quickly the gradients might vanish or explode, to allowing us to parallelize training. Our findings also partially motivate the success of deep SSMs, where the recurrence is also linear. * **Complex Diagonal Recurrent Matrices.** Dense linear RNN layers can be reparameterized to a complex diagonal form without affecting the expressivity of the network or the features at initialization (§3.2). Diagonal linear RNN layers additionally allow for a _highly parallelizable unrolling_ of the recurrence using parallel scans to substantially improve training speeds (Martin and Cundy, 2017). We validate that these observations, which have been leveraged by prior SSMs (Gupta et al., 2022; Smith et al., 2022), also provide important efficiency improvements for linear RNN layers. * **Stable Exponential Parameterization.** In §3.3 we show that using an exponential parameterization for the diagonal recurrent matrix has important benefits. Crucially, this enables us to easily enforce stability during training, which in turn allows us to modify the initialization distribution to facilitate long-range reasoning and improve performance. Our results indicate that rather than the specific deterministic initializations used by several recent SSMs, it is the eigenvalue distribution of the recurrent layer at initialization that determines if the model can capture long-range reasoning. * **Normalization.** In §3.4 we show that normalizing the hidden activations on the forward pass is important when learning tasks with very long-range dependencies. With this final modification, our RNNs can match the performance of deep SSMs on all tasks in the LRA benchmark. Connecting back to state-space models, we show in §4 how our normalization can be linked to the discretization structure in S4. We summarize the deep Linear Recurrent Unit (LRU) architecture used in this paper, and the effect of each of the above steps on performance in Fig.1.
We emphasize that the main purpose of our work is not to surpass the performance of S4-based models, but rather to demonstrate that simple RNNs can also achieve strong performance on long range reasoning tasks when properly initialized and parameterized. We believe the insights derived in this paper can be useful to design future architectures, and to simplify existing ones. ## 2 Preliminaries In this section, we compare the key architectural components (RNNs and SSMs) studied in this work, and also describe our methodology and experimental setup. For a more thorough discussion of related architectures, the reader can check our related work section §B. ### Recap of recurrent block structures We give an overview of the main architectural components considered in this paper, focusing on the major difference between Vanilla RNNs and recent S4-like deep SSMs (Gu et al., 2021, 2022; Gupta et al., 2022; Smith et al., 2022). RNN Layer.Let \((u_{1},u_{2},\ldots,u_{L})\) be a sequence of \(H_{\text{in}}\)-dimensional inputs, which can be thought of as either the result of intermediate layer computations (which keep the sequential structure) or as the initial input. An RNN layer with \(N\)-dimensional hidden state computes a sequence of \(H_{\text{out}}\)-dimensional outputs \((y_{1},y_{2},\ldots,y_{L})\) through a recurrent computation1 using learnable parameters \(A\in\mathbb{R}^{N\times N},B\in\mathbb{R}^{N\times H_{\text{in}}},C\in\mathbb{ R}^{H_{\text{out}}\times N},D\in\mathbb{R}^{H_{\text{out}}\times H_{\text{in}}}\): Footnote 1: We do not use bias parameters as they can be incorporated into the MLP blocks preceding and following the RNN block. Classical RNNs also included a nonlinearity on the output \(y_{k}=\sigma_{\text{out}}(Cx_{k}+b)\) with \(D=0\). Having \(D\neq 0\) basically introduces a skip connection (standard in modern architectures), and the \(\sigma_{\text{out}}\) can be thought of as part of the MLP following the RNN. \[x_{k}=\sigma(Ax_{k-1}+Bu_{k}),\quad y_{k}=Cx_{k}+Du_{k}, \tag{1}\] starting from \(x_{0}=0\in\mathbb{R}^{N}\). \(\sigma\) here denotes a nonlinearity, often chosen to be a \(\tanh\) or sigmoid activation. If \(\sigma\) is the identity function, then we say the RNN layer is _linear_. S4-like recurrent layer.We present a simplified2 version of the S4 recurrence introduced in Gu et al. (2021). The input \((u_{0},u_{1},\ldots,u_{L-1})\) is now seen as the result of sampling a latent continuous-time signal \(u_{\text{ct}}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{H_{\text{in}}}\) at multiples of a stepsize \(\Delta>0\): i.e. \(u_{\text{ct}}(\Delta k):=u_{k}\) for all \(k\in 0,\ldots,L-1\). The output sequence \((y_{0},y_{1},\ldots,y_{L-1})\) is then sampled, again with stepsize \(\Delta\), from the signal \(y_{\text{ct}}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{H_{\text{out}}}\) computed by the following continuous-time state-space model, initialized at \(x_{\text{ct}}(0)=0\): Footnote 2: This version is most similar to S5 (Smith et al., 2022), but is here presented for ease of reasoning for a single discretization parameter \(\Delta\), shared across input dimensions. For more details, see §B.
\[\frac{d}{dt}x_{\text{ct}}(t)=\tilde{A}x_{\text{ct}}(t)+\tilde{B}u _{\text{ct}}(t),\] \[y_{\text{ct}}(t)=\Re\left[\tilde{C}x_{\text{ct}}(t)\right]+ \tilde{D}u_{\text{ct}}(t), \tag{2}\] where \(\Re\left(p\right)\) denotes the real part of a complex-valued vector \(p\), \(\tilde{A}=\text{diag}(\tilde{a})\) with \(\tilde{a}\in\mathbb{C}^{N}\), \(\tilde{B}\in\mathbb{C}^{N\times H_{\text{in}}}\), \(\tilde{C}\in\mathbb{C}^{H_{\text{out}}\times N}\) and \(\tilde{D}\in\mathbb{R}^{H_{\text{out}}\times H_{\text{in}}}\). Ignoring the continuous-time nature of this model, the most striking differences compared to Eq.(1) are that (a) the computation on the right-hand-side is _linear_ in the hidden state and in the input, and (b) most parameters are _complex_ valued, with \(\tilde{A}\) being diagonal. While \(\tilde{B},\tilde{C},\tilde{D}\) follow complex random or uniform initialization, the transition matrix \(\tilde{A}\) is _structured_, i.e., initialized _deterministically_ through HiPPO theory (Gu et al., 2020) in diagonal form. Common choices (Gu et al., 2022) are \(\tilde{a}_{n}=-\frac{1}{2}+i\pi n\) (S4D-Lin) and \(\tilde{a}_{n}=-\frac{1}{2}+i\frac{N}{\pi}\left(\frac{N}{n+1}-1\right)\) (S4D-Inv), for \(n=1,2,\ldots,N\). For training and inference, the continuous-time system in Eq.(2) is discretized at stepsize \(\Delta\) through a high-accuracy Zero-Order-Hold (ZOH) or Bilinear method. The ZOH method gives \[x_{k}=Ax_{k-1}+Bu_{k},\quad y_{k}=Cx_{k}+Du_{k}, \tag{3}\] where \(x_{-1}=0\), \(A=\exp(\Delta\tilde{A})\), \(B=(A-I)\tilde{A}^{-1}\tilde{B}\), \(C=\tilde{C}\) and \(D=\tilde{D}\), and \(\exp\) denotes the matrix exponential. Under the assumption that \(u_{\text{ct}}\) is constant in between timestamps (which can be thought of as a modeling assumption), this numerical integration is _exact_ (Jacquot, 2019). Moreover, note that all these discretization operations can be quickly performed element-wise since \(\tilde{A}\) is diagonal. Some key differences.It is worth pointing out a few structural and computational properties, to highlight some crucial differences between RNNs and SSMs: * Since Eq.(3) is linear, it can be efficiently parallelized until \(k=L-1\) using parallel scans (Martin and Cundy, 2017; Smith et al., 2022), unlike a nonlinear RNN where the computation has to be performed sequentially. * While Eq.(3) is similar to the linear RNN computation, it is crucial to note that (a) \(A\) and \(B\) are parameterized in a peculiar way, prescribed by discretization, and (b) these matrices share parameters; in particular \(\Delta\) affects both \(A\) and \(B\). These differences are critical as in SSMs learning is performed on the continuous-time parameters \(\tilde{A},\tilde{B},\tilde{C},\tilde{D},\Delta\); hence parameterization choices directly affect optimization. * Unlike vanilla RNNs, most SSMs use complex-valued diagonal recurrent matrices that are initialized deterministically using HiPPO theory, and the literature attributes much of the success of SSMs to the specific initialization used (Gu et al., 2021, 2022; Gupta et al., 2022). The points above motivate our investigation: in this paper we consider the same architecture as Gu et al. (2021, 2022); Smith et al. (2022), but replace the SSM layer in the recurrent core by an RNN. We then study which steps need to be taken to gradually retrieve S4-like performance on LRA (Tay et al., 2020) tasks.
The effectiveness of each of our steps is supported by empirical evidence and theoretical considerations, and leads to the architecture presented in Fig.1. ### Experimental setup In this paper, we consider the Long Range Arena benchmark (Tay et al., 2020), a set of tasks designed to test the ability of models to do long-range sequence modelling (except we use coloured images instead of grayscale images for the sequential CIFAR-10 classification task). Transformers fail to perform well on most of these tasks, while deep SSMs have shown remarkable performance on these tasks (Dao et al., 2022; Gu et al., 2021). This makes it an appropriate benchmark to explore the long-range modelling capabilities of deep RNNs. For all our experiments, we use a network of 6 layers with residual connections and layer/batch normalization (Ba et al., 2016; Ioffe and Szegedy, 2015) similar to Gu et al. (2021) (Fig.1), and we replace the SSM layers with RNN layers, building up to our LRU recurrence in a sequence of steps (see §3). All experiments are repeated three times, and we report the mean and standard error. Networks are trained using the AdamW optimizer (Loshchilov and Hutter, 2017). We use a smaller learning rate and no weight decay on the recurrent parameters, as suggested by Gu et al. (2021); Steil (2004). We tune hyperparameters such as learning rates for all models on a logarithmic grid for best accuracy. See §D for more details on our experimental setup. ## 3 Designing Performant Deep RNNs In this section, we discuss the fundamental steps needed for designing RNNs to reach the impressive performance of deep SSMs on the LRA benchmark. We present these steps, already outlined in the introduction, in logical order, and support each claim with experimental evidence and theoretical considerations, expanded in §E. We consider the architecture of Fig.1, where the recurrent computation is gradually modified starting from a vanilla RNN. We start by showcasing the advantage of using linear recurrences in §3.1; then, in §3.2, we show how to speed-up training and inference without affecting expressivity and initialization distribution. In §3.3, we discuss how (and why) changing the parameterization and initialization distribution enables us to make the RNN stable and improve long-range modeling. Finally, in §3.4, we finalize the LRU architecture by proposing a normalization strategy for the hidden activations that results in a close match in performance with deep SSMs. ### Linear RNN layers are performant One of the main findings of our work is that linear RNN layers can be surprisingly expressive when coupled with nonlinear MLP or GLU (Dauphin et al., 2017) blocks, outperforming tuned nonlinear RNN variants in the same architecture. In Tb.1, we show that simply removing3 the nonlinearity, and therefore computing the next state as \(x_{k}=Ax_{k-1}+Bu_{k}\), is able to improve test accuracy on most LRA tasks. While the boost provided by vanilla linear RNN blocks leads to performance which is still far behind S4 on some tasks (sCIFAR, PathFinder and PathX), this first finding motivates us to drop nonlinearities in the recurrence for the rest of this paper. In later sections, we leverage the linearity of the recurrence to significantly speed up training as well as derive principled initialization and normalization principles to learn long-range dependencies.
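To make Eq.(1) and the effect of dropping the nonlinearity concrete, here is a minimal NumPy sketch of a single recurrent layer (the shapes and Glorot-style scalings are our own illustrative choices, not the paper's exact setup); setting \(\sigma\) to the identity yields the linear variant studied in this section.

```python
import numpy as np

def rnn_forward(u, A, B, C, D, sigma=np.tanh):
    """Eq. (1): x_k = sigma(A x_{k-1} + B u_k), y_k = C x_k + D u_k, with x_0 = 0."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = sigma(A @ x + B @ u_k)
        ys.append(C @ x + D @ u_k)
    return np.stack(ys)

rng = np.random.default_rng(0)
H_in, H_out, N, L = 4, 4, 16, 32
A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))        # Glorot-style scaling
B = rng.normal(0.0, 1.0 / np.sqrt(H_in), (N, H_in))
C = rng.normal(0.0, 1.0 / np.sqrt(N), (H_out, N))
D = rng.normal(0.0, 1.0 / np.sqrt(H_in), (H_out, H_in))
u = rng.normal(size=(L, H_in))

y_vanilla = rnn_forward(u, A, B, C, D, sigma=np.tanh)      # nonlinear recurrence
y_linear = rnn_forward(u, A, B, C, D, sigma=lambda z: z)   # linear recurrence (sigma = identity)
```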
We note that, on the Text and Retrieval tasks, performance using vanilla RNNs already matches performance of deep SSMs (see Tb.3 for the performance of S4D/S5 on these tasks). Footnote 3: All other settings in the recurrent block are kept the same as in the Vanilla RNN module of Haiku (Hennigan et al., 2020). That is, all matrices have Glorot (Glorot and Bengio, 2010) initialization. The rest of the architecture is kept as in Fig.1, where the LRU block is replaced by an RNN. Table 1 \(|\)_The effect of removing the nonlinearity from the recurrent unit on test accuracy (§3.1). We show here results only for the sCIFAR, ListOps, Text and Retrieval tasks in LRA as these models did not exceed random guessing on PathFinder/PathX (further improvements in Tb.2 and 3). Performance of deep SSMs shown in Tb.3._ The empirical result in Tb.1 is _surprising_, since recurrent nonlinearities are believed to be a key component for the success of RNNs -- both in the theory and in practice (Erichson et al., 2021; Pascanu et al., 2013; Siegelmann, 2012). Indeed, a strong property of single-layer sigmoidal and tanh RNNs is Turing completeness, which cannot be achieved by the linear variant (Chung and Siegelmann, 2021). However, the architecture we use (Fig.1) is deeper than a standard RNN and includes nonlinearities, placed position-wise _after_ each RNN block. In §E.1, we investigate how the expressivity and trainability of deep models is affected by recurrent nonlinearities. Leveraging a spectral analysis and Koopman operator theory (Koopman and Neumann, 1932), we discuss how interleaving linear RNN layers with nonlinear feedforward blocks is sufficient to approximate highly nonlinear systems. A key observation in our analysis is that position-wise nonlinearities effectively transfer signal information to higher frequencies, enabling the system to go beyond linearity in the spectral domain and increasing the layer capacity. To further strengthen our claim on the advantage of linear recurrences, in §E.2 we show that, while linear and nonlinear RNNs share an important class of approximating functionals (linear operators, see Wang et al. (2022)), nonlinear activations can potentially slow down training. ### Using complex diagonal recurrent matrices is efficient We now show that we can significantly speed up training and inference for deep linear RNNs without losing performance by using complex-valued diagonal recurrent matrices. While the idea of diagonalizing linear systems for computational efficiency is a dominating feature of all deep SSMs since the introduction of DSS by Gupta et al. (2022), in this section we construct our diagonalized version to exactly match the initialization spectrum (see §3.2.1) of the Glorot-initialized deep linear RNN in Tb.1. Our main purpose with this approach is to _disentangle the effects of initialization and diagonalization on performance_ (cf. Tb.2 and Tb.3). We start in §3.2.1 by recalling some useful linear algebra elements, and then proceed in §3.2.2 with a discussion on how to diagonalize the recurrence while preserving the eigenvalue spectrum at initialization. #### 3.2.1 Linear RNN eigendecomposition The recurrence \(x_{k}=Ax_{k-1}+Bu_{k}\) can be unrolled easily using the assumption that \(x_{-1}=0\in\mathbb{R}^{N}\): \[x_{0}=Bu_{0},\quad x_{1}=ABu_{0}+Bu_{1},\quad x_{2}=A^{2}Bu_{0}+ABu_{1}+Bu_{2}, \quad\ldots\quad\Longrightarrow\quad x_{k}=\sum_{j=0}^{k-1}A^{j}Bu_{k-j}.
\tag{4}\] Exponentiations of the matrix \(A\) in the equation above are the source of the well-known _vanishing/exploding gradient_ issue in RNNs (Bengio et al., 1994; Pascanu et al., 2013). While in nonlinear RNNs the state \(x_{k}\) is forced to live on the compact image of the activation function, the hidden-state of our linear variant can potentially explode or vanish exponentially as \(k\) increases. This phenomenon can be better understood by leveraging an eigenvalue (a.k.a. spectral) analysis: up to an arbitrarily small perturbation of the entries, every matrix \(A\in\mathbb{R}^{N\times N}\) is diagonalizable4 (Axler, 1997), i.e. one can write \(A=P\Lambda P^{-1}\), where \(P\in\mathbb{C}^{N\times N}\) is an invertible matrix and \(\Lambda=\text{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})\in\mathbb{C}^{ N\times N}\). It is essential to note that, unlike the symmetric setting where eigenvalues and eigenvectors are real, in the non-symmetric case5 one has to allow for _complex_ entries to achieve full equivalence. Plugging the decomposition \(A=P\Lambda P^{-1}\) into Eq.(4) and multiplying both sides by \(P^{-1}\), we get \(\tilde{x}_{k}=\sum_{j=0}^{k-1}\Lambda^{j}\tilde{B}u_{k-j}\), where \(\tilde{x}_{k}:=P^{-1}x_{k}\), \(\tilde{B}:=P^{-1}B\). The output can then be computed as \(y_{k}=\Re[\tilde{C}\tilde{x}_{k}]+Du_{k}\in\mathbb{R}^{H_{\text{out}}}\), where \(\tilde{C}=CP\), and we take the real part of \(\tilde{C}\tilde{x}_{k}\). Therefore, instead of learning \((A,B,C,D)\), one can equivalently learn \((\Lambda,\tilde{B},\tilde{C},D)\), where \(\Lambda,\tilde{B},\tilde{C}\) are complex valued, and \(\Lambda\) is a diagonal matrix. Footnote 4: In other words, the set of non-diagonalizable matrices has measure zero, see e.g. Zhinan (2002) for a proof idea. Footnote 5: Take e.g. \(A=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\). The solution to the standard eigenvalue equation gives \(\lambda=\pm i\), where \(i\) is the imaginary unit. Are complex numbers really necessary?We adopt complex numbers since they provide a convenient and compact representation of non-symmetric matrices in diagonal form. However this is not the only option - one could work (almost) as efficiently using real numbers. We discuss how this can be achieved in §E.3. Stability.Since \(\tilde{x}_{k}=\sum_{j=0}^{k-1}\Lambda^{j}\tilde{B}u_{k-j}\), the norm of component \(j\) of \(\tilde{x}\) at timestamp \(k\) evolves such that \(|x_{k,j}|=O(|\tilde{x}_{k,j}|)=O(|\lambda_{j}|^{k})\). Therefore, a sufficient condition to ensure stability (i.e. \(x_{k}\) does not explode) is \(|\lambda_{j}|<1\) for all \(j\) (Gu et al., 2021). #### 3.2.2 Learning in the diagonalized space Learning recurrent linear systems in diagonal form provides substantial computational speedups both for training and inference. For example, in our implementation of sCIFAR, we found diagonal linear RNNs to be \(\sim\)8 times faster to train than a dense RNN with ReLUs, matching the speed of our implementations of S4D and S5. The main reasons for this computational benefit are that (a) taking powers of diagonal matrices is trivial (speeding up both training and inference), while exponentiating dense matrices is computationally expensive, and (b) while nonlinear recurrences must be computed sequentially, unrolling a linear recurrence can be parallelized using associative scans resulting in faster training (Gupta et al., 2022; Smith et al., 2022).
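The reparameterization above is easy to verify numerically. The following sketch (our own illustration, with arbitrary small shapes) diagonalizes a dense Glorot-initialized \(A\) with `np.linalg.eig` and checks that the diagonal recurrence reproduces the dense outputs, while costing \(O(N)\) instead of \(O(N^2)\) per step:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, L = 16, 4, 32
A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # dense, Glorot-initialized recurrence
B = rng.normal(size=(N, H))
C = rng.normal(size=(H, N))
u = rng.normal(size=(L, H))

lam, P = np.linalg.eig(A)       # A = P diag(lam) P^{-1}, complex since A is non-symmetric
B_t = np.linalg.solve(P, B)     # B~ = P^{-1} B
C_t = C @ P                     # C~ = C P, so that C~ x~_k = C x_k

x = np.zeros(N)
x_t = np.zeros(N, dtype=complex)
for k in range(L):
    x = A @ x + B @ u[k]              # dense step: O(N^2)
    x_t = lam * x_t + B_t @ u[k]      # diagonal step: elementwise, O(N) plus input projection
    assert np.allclose(C @ x, (C_t @ x_t).real, atol=1e-6)   # identical outputs up to rounding
```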
Equivalent initialization.To disentangle the benefits of diagonal linear systems from the role of initialization, we seek an initialization for the diagonal system which keeps the eigenvalue spectrum of the recurrence unchanged when comparing our diagonal system with the dense linear RNN in §3.1, where \(A\) followed Glorot initialization. Fortunately, we can use a classical result from random matrix theory (Ginibre, 1965). **Theorem 3.1** (Strong circular law).: _Let \(\mu_{N}\) be the empirical spectral measure of \(A_{N}\), where \(A_{N}\) is a real \(N\times N\) matrix with i.i.d. Gaussian entries, each with zero mean and variance \(1/N\). Then, \(\mu_{N}\) converges weakly almost surely as \(N\to\infty\) to the uniform probability measure on \(\{|z|\leq 1\}\subseteq\mathbb{C}\)._ The theorem above, illustrated in Fig.2, shows that under Glorot initialization the spectrum of \(A\) is _de-facto_ sampled from the unit disk in \(\mathbb{C}\). This result motivates the strong performance of linear RNNs in §3.1, since it implies Glorot initialization provides an approximately stable initialization (see definition in §3.2.1).6 Moreover, from Theorem 3.1, an _equivalent spectral initialization_ follows for the diagonal system, which holds exactly for the large width limit: \(\Lambda\) should be diagonal with entries sampled uniformly on the unit disk. Using the definition of exponential of a complex number: \(\exp(-\nu+i\theta):=e^{-\nu}(\cos(\theta)+i\sin(\theta))\), we adopt a simple scheme for sampling uniformly on a ring in between circles with radii \(r_{\min}\) and \(r_{\max}\) in \(\mathbb{C}\). Footnote 6: Later in training, the system is less likely to become unstable if the learning rate is small enough. **Lemma 3.2**.: _Let \(u_{1},u_{2}\) be independent uniform random variables on the interval \([0,1]\). Let \(0\leq r_{\min}\leq r_{\max}\leq 1\). Compute \(\nu=-\frac{1}{2}\log\left(u_{1}(r_{\max}^{2}-r_{\min}^{2})+r_{\min}^{2}\right)\) and \(\theta=2\pi u_{2}\). Then \(\exp(-\nu+i\theta)\) is uniformly distributed on the ring in \(\mathbb{C}\) between circles of radii \(r_{\min}\) and \(r_{\max}\)._ We recover the spectrum of Glorot-initialization (in the limit of infinite width) by setting \(r_{\min}=0\) and \(r_{\max}=1\) (we will explore tuning these hyper-parameters in §3.3). Tb.2 (first two rows) shows the results of learning deep linear RNNs in complex diagonal form,7 where each diagonal entry of \(\Lambda\) is initialized uniformly on the unit disk in \(\mathbb{C}\) using Lemma 3.2 with \([r_{\min},r_{\max}]=[0,1]\). In our experiments, \(\tilde{B},\tilde{C}\) (which we rename for convenience back to \(B\) and \(C\)) follow Glorot initialization for both real and imaginary parts (parameterized separately), with halved variance in each component to preserve lengths on the input-output projections (Glorot and Bengio, 2010). Finally, after the SSM computation, the real part of the signal is kept and the imaginary discarded (as in Gu et al. (2022); Gupta et al. (2022)). Footnote 7: To avoid issues with backpropagation on complex variables, each complex parameter in the network is stored and learned as a pair of floats encoding real and imaginary parts. Our results in Tb.2 show that diagonalizing the recurrence surprisingly improves accuracy on tasks like ListOps and sCIFAR. More importantly, it drastically reduces training and inference time on all LRA tasks (see Tb.4 in §C.1 for training speed comparisons), and makes the RNN just as fast to train as deep SSMs like S4D and S5.
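For concreteness, here is a small sketch of the sampling scheme of Lemma 3.2 (sizes illustrative): with \([r_{\min},r_{\max}]=[0,1]\), the sampled eigenvalues follow the circular-law spectrum of a Glorot-initialized dense matrix, which can be sanity-checked by comparing the fraction of eigenvalues inside a smaller disk.

```python
import numpy as np

def ring_init(N, r_min=0.0, r_max=1.0, rng=None):
    """Lemma 3.2: N samples uniformly distributed on the ring r_min <= |lambda| <= r_max in C."""
    rng = rng if rng is not None else np.random.default_rng()
    u1, u2 = rng.uniform(size=N), rng.uniform(size=N)
    nu = -0.5 * np.log(u1 * (r_max**2 - r_min**2) + r_min**2)
    theta = 2 * np.pi * u2
    return np.exp(-nu + 1j * theta)

rng = np.random.default_rng(0)
N = 512
lam = ring_init(N, 0.0, 1.0, rng)                                    # uniform on the unit disk
eigs = np.linalg.eigvals(rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)))  # Glorot spectrum (Thm. 3.1)
# Uniformity on the disk means P(|lambda| <= 0.5) ~ 0.25; both empirical fractions agree:
print(np.mean(np.abs(lam) <= 0.5), np.mean(np.abs(eigs) <= 0.5))
```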
### Benefits of stable exponential parameterization In §3.2 we showed that moving to complex diagonal recurrences is computationally efficient. However, we also observed that learning the diagonal model can be more unstable than learning the dense model in some experiments. To learn long-range dependencies and avoid quickly vanishing gradients, eigenvalues in the recurrence need to have magnitude close to 1 (Gu et al., 2022; Gupta et al., 2022); however, these eigenvalues are also likely to make the system unstable during training. In this section, we show the benefits of a stable parameterization of the RNN, and of tuning \(r_{\text{min}}\) and \(r_{\text{max}}\) (see Lemma 3.2). Optimization under exponential parameterization.Lemma 3.2 suggests a natural parameterization of the diagonalized RNN as \(\Lambda=\text{diag}(\exp(-\nu+i\theta))\) with \(\nu\in\mathbb{R}^{N}\) and \(\theta\in\mathbb{R}^{N}\) as the learnable parameters (instead of the real and imaginary parts of \(\Lambda\)). As we explain in §E.2, leveraging an easy-to-visualize 2-dimensional example (see Fig.8), this choice decouples magnitude and oscillation frequencies, making optimization with Adam easier. The positive effects of this exponential parametrization, which resembles some features of ZOH discretization (see §2 and §4) and notably takes the performance of PathFinder above random chance, can be observed in the third row of Tb.2. Enforcing stability.An important benefit of the exponential parameterization is that it makes it simple to enforce stability on the eigenvalues. To see this, note that at initialization, \(|\lambda_{j}|=|\exp(-\nu_{j})|\leq 1\) since \(\nu_{j}>0\). Therefore, to preserve stability during training, we can use an exponential or another positive nonlinearity: \(\lambda_{j}:=\exp(-\exp(\nu_{j}^{\text{log}})+i\theta_{j})\), where \(\nu^{\text{log}}\in\mathbb{R}^{N}\) is the parameter we optimize, and we set \(\nu_{j}^{\text{log}}:=\log(\nu_{j})\) at initialization. Note that a similar idea is used in deep SSMs (Gu et al., 2021) in the context of discretization. We choose an exponential non-linearity over a simple ReLU nonlinearity to increase granularity around \(|\lambda|=1\), achieved at \(\nu^{\text{log}}=-\infty\) (while \(|\lambda|=0\) is achieved at \(\nu^{\text{log}}=\infty\)). Stable parameterization helps on most LRA tasks. In the fourth row of Tb.2, we show its effects on sCIFAR, ListOps and Pathfinder. We observe the most drastic improvement on Pathfinder, one of the harder long-range dependency tasks in LRA, where performance now reaches above 93%. The benefits of the stable parameterization become more apparent when we explore the idea of initializing the eigenvalues of \(\Lambda\) on a ring closer to the unit disk (increasing \(r_{\text{min}}\) closer to 1 in Lemma 3.2) to bias the network towards longer-range interactions and avoid vanishing gradients. Indeed, as discussed in detail in Gu et al. (2022); Gupta et al. (2022), for reasoning tasks requiring consideration of interactions between distant tokens, eigenvalues in the recurrence need to have magnitude close to 1. Otherwise, as clear from the diagonal version of Eq.(4), when taking powers of eigenvalues close to the origin, the signal from past tokens quickly dies out (see §3.2.1). As we show in the last row of Tb.5 in §C, without enforcing stability, performance starts to degrade as we increase \(r_{\text{max}}\) past 0.9 in the sCIFAR task.
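A minimal sketch of the stable parameterization just described (values illustrative, our own example): storing \(\nu^{\log}\) and exponentiating twice guarantees \(|\lambda_j|<1\) for any real parameter value the optimizer reaches.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Target magnitudes on a ring near the unit circle, e.g. |lambda| in [0.9, 0.999].
r = np.sqrt(rng.uniform(0.9**2, 0.999**2, size=N))
nu = -np.log(r)                    # |lambda| = exp(-nu) with nu > 0 at initialization
nu_log = np.log(nu)                # the parameter actually optimized lives in all of R
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

lam = np.exp(-np.exp(nu_log) + 1j * theta)   # lambda_j = exp(-exp(nu_log_j) + i theta_j)
assert np.all(np.abs(lam) < 1.0)             # stability holds for *any* real value of nu_log
```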
With stability enforced, we can increase \(r_{\text{max}}\) up to 0.99 and improve performance. We see similar benefits on the other tasks where we sweep different values of \(r_{\text{min}}\) and \(r_{\text{max}}\) (Tbs.7 & 8 have more details). Finally, note that while here we explore changing the magnitude of the eigenvalues of \(\Lambda\), in §3.4 we also show the benefits of initializing the eigenvalues to have a small phase to learn more global patterns, useful for particularly long-range reasoning tasks. \begin{table} \begin{tabular}{|l||c|c|c|} \hline & sCIFAR & ListOps & Pathfinder \\ \hline Dense \(A\) & 72.2 (0.2) & 50.4 (0.2) & ✗ \\ \(\Lambda\) Real + Im & 86.5 (0.1) & 58.8 (0.3) & ✗ \\ \(\Lambda\) Exp & 85.4 (0.7) & **60.5** (0.3) & 65.4 (9.0) \\ \(\Lambda\) Stable Exp & 87.2 (0.4) & 59.4 (0.3) & 93.5 (0.5) \\ + Ring Init & **88.1** (0.0) & 59.4 (0.3) & **94.4** (0.3) \\ \hline \end{tabular} \end{table} Table 2: _Test accuracy of linear diagonal complex RNNs under different parametrizations of the transition matrix (see §3.2). Performance directly improves the results in Tb.1, and showcases the advantage of exponential (polar) representation of \(\Lambda\). In bold font is the best parametrization option for linear RNN blocks. Ring Init denotes a changed initialization where \(r_{\text{min}}\) and \(r_{\text{max}}\) are tuned. Performance on the Text and Retrieval tasks is not shown as linear RNNs already align with S4 results (cf. Tb.1 with Tb.3). These models cannot solve PathX yet, and require normalizing the hidden activations and initializing the eigenvalues of \(\Lambda\) with small phase (see Tb.3)._ ### Additional considerations for long-range reasoning tasks Up to this point, our model did not succeed in learning PathX -- the hardest dataset in our benchmark, with a sequence length of 16k tokens. In this section, we discuss the additional modifications we need to make to improve our model's ability to learn very long-range dependencies and finalize our LRU model. Normalization.In §3.3, we initialized the eigenvalues of \(\Lambda\) close to the unit disk for better performance on long-range tasks. However, we observed that as we moved \(r_{\text{min}}\) and \(r_{\text{max}}\) closer to 1, the training loss also started to blow up at initialization (see Fig.5). In this section, we first present a result explaining this phenomenon, before deriving a practical normalization scheme for the hidden activations to tackle this problem and further improve performance. **Proposition 3.3** (Forward-pass blow-up).: _Let \(\Lambda\) be diagonal with eigenvalues sampled uniformly on the ring in \(\mathbb{C}\) between circles of radii \(r_{\text{min}}<r_{\text{max}}<1\)._
Then, under constant or white-noise input and Glorot input projection, we have that the squared norm of the state \(x_{k}\) converges as \(k\to\infty\) to the following quantity._ \[\mathbb{E}[\|x_{\infty}\|_{2}^{2}]=\frac{1}{r_{\text{max}}^{2}-r_{\text{min}}^ {2}}\log\left(\frac{1-r_{\text{min}}^{2}}{1-r_{\text{max}}^{2}}\right)\mathbb{ E}[\|Bu\|_{2}^{2}].\] This result has the following intuitive form if \(r_{\text{min}}=r_{\text{max}}=r\): if we initialize \(\rho\)-close to the unit disk, the forward pass blows up by a factor \(1/\rho\) (since the contributions from previous states take longer to decay): let \(\epsilon=r_{\text{max}}^{2}-r_{\text{min}}^{2}\) and \(\rho=1-r_{\text{max}}^{2}\), then: \[\lim_{\epsilon\to 0}\frac{\mathbb{E}[\|x_{\infty}\|_{2}^{2}]}{\mathbb{E}[\|Bu\|_{ 2}^{2}]}=\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon}\log\left(1+\frac{ \epsilon}{\rho}\right)\right]=\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon} \left(\frac{\epsilon}{\rho}+O(\epsilon^{2})\right)\right]=\frac{1}{\rho}= \frac{1}{1-r^{2}}. \tag{5}\] Towards the derivation of an effective normalization scheme for the forward pass, we present a simplified derivation of the \(1/\rho\) gain formula for the one-dimensional setting under white-noise input8: let \(\Lambda=\lambda\in\mathbb{C}\), and \(B=1\). Letting \(p^{*}\) denote the conjugate of \(p\in\mathbb{C}\), we have that \(|p|^{2}=p^{*}p\) and, in expectation over the input, using Eq.(4) and the fact that \(\mathbb{E}[u_{k-i}u_{k-j}]=0\) for \(i\neq j\): Footnote 8: We use the random input assumption for our normalization scheme as we found it to work well in practice. \[\mathbb{E}|x_{k}|^{2}=\mathbb{E}\left[\left(\sum_{i=0}^{k-1}\lambda^{i}u_{k-i}\right)\left(\sum_{j=0}^{k-1}\lambda^{j}u_{k-j}\right)^{*}\right]=\sum_{i,j=0}^{k-1}\lambda^{i}(\lambda^{j})^{*}\mathbb{E}[u_{k-i}u_{k-j}]=\sum_{i=0}^{k-1}|\lambda|^{2i}\overset{k\to\infty}{\longrightarrow}\frac{1}{1-|\lambda|^{2}}. \tag{6}\] Since the formula above holds for every Euclidean direction in our recurrence (\(\Lambda\) is diagonal), we can add a normalization parameter that is initialized element-wise. Additionally, note that as \(\lambda\) approaches \(1\), \(1-|\lambda|^{2}\) approaches \(0\), making further adaptations with SGD of this parameter hard. Therefore, we use a normalization parameter \(\gamma^{\log}\in\mathbb{R}^{N}\), initialized element-wise as \(\gamma_{i}^{\log}\leftarrow\log(\sqrt{1-|\lambda_{i}|^{2}})\),9 and modify the recurrence as: Footnote 9: We also tried setting \(\gamma_{i}\) to \(\sqrt{1-|\lambda_{i}|^{2}}\) in each training iteration, and found it to work similarly in practice to a trainable \(\gamma\). \[x_{k}=\Lambda x_{k-1}+\exp(\gamma^{\log})\odot(Bu_{k}), \tag{7}\] where \(\odot\) denotes the element-wise product. The \(\gamma\) parameter allows the RNN to adaptively scale the input fed into the corresponding eigendirection. We found the \(\gamma\) normalization to consistently improve performance on tasks that benefit from initializing close to the unit disk, such as sCIFAR and Pathfinder, as shown in Tb.3. Reducing Eigenvalue Phase at Initialization.In the context of the diagonalized recurrence, we have \(\Lambda=\operatorname{diag}(\exp(-\exp(\nu^{\log})+i\theta))\), where \(\nu^{\log}\in\mathbb{R}^{N}\) is the vector of log eigenvalue magnitudes and \(\theta\in\mathbb{R}^{N}\) the vector of eigenvalue _phases_. While \(\nu^{\log}\) encodes the distance to the origin, \(\theta\) is the angle from the vector \(1+0i\).
_For long sequences_, initializing uniformly \(\theta\sim[0,2\pi]\) implies that most state entries will exhibit an overall large number of oscillations at initialization, see upper panel in Fig.4. Equivalently, in this setting, most state dimensions are the result of _convolutions_10 capturing an average of _local oscillation patterns_. This behavior is independent of the ability to capture long-range dependencies (controlled by \(\nu^{\log}\)), but pertains to the nature of the information stored by the RNN. Therefore, we claim that initializing \(\Lambda\) with uniform phase on long sequence data inherently biases the network towards learning spurious features in the input sequence. The model cannot recover from this suboptimal initialization: we indeed observe that, for our best so far model on PathX, the training loss after a few iterations converges to a highly suboptimal minimizer which leads to random chance test performance (see Fig.5). Figure 4: _Evolution of \(x\in\mathbb{R}^{3}\) under impulse input \(u=(1,0,0,\ldots,0)\in\mathbb{R}^{16k}\). Plotted in different colors are the 3 components of \(x\). \(\Lambda\) has parameters \(\nu_{j}=0.00005\) and \(\theta_{j}\) sampled uniformly in \([0,2\pi]\) or with small phase \([0,\pi/50]\). For small sequences, such as \(L=1024\) (PathFinder, sCIFAR), \([0,2\pi]\) produces kernels with acceptable overall number of oscillations: information about \(u_{0}\) is recalled only a few times in the overall state history. Instead, for high \(L\), the range of the imaginary part at initialization has to be smaller to obtain a similar effect._ Figure 5: _Effect of normalization and using a small phase at initialization on the PathX task. For each setting, we show mean and standard errors over three independent runs for 100k iterations. Without normalization, the model presents higher loss values at initialization and quickly converges to a suboptimal value, where train and test accuracy are both at random chance. Adding normalization helps: the train loss is lower at initialization, and the optimizer is able to escape the suboptimal region and train accuracy also increases. Interestingly, this model still fails to generalize at all. Finally, reducing initialization phase (i.e. tuning the range of \(\theta\)) dramatically improves convergence on the training set, while also generalizing to the test set._ To fix this issue, we found it sufficient to restrict the range of \(\theta\) to a thin slice around \(0\), biasing the model towards learning more global features. Since the optimal values of \(\theta\) are small, we parameterize the phase logarithmically: \(\theta=\exp(\theta^{\log})\), where \(\theta^{\log}\) is optimized, to aid optimization. Restricting the range of the phase at initialization to be \([0,\pi/10]\), our LRU achieved 94.2% _on PathX_, aligning with state-of-the-art deep SSMs. We did not explore using a smaller phase at initialization for the other LRA tasks, although we believe this might further improve performance on other tasks as well. Note that using both \(\gamma\) normalization and restricting the eigenvalue phase at initialization were crucial to solving PathX. We were unable to learn when using restricted phase at initialization without also introducing \(\gamma\) normalization. With all the components of §3 taken together, we name this new model the **Linear Recurrent Unit** (or **LRU** for short).
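Putting the pieces together, the following is a compact forward-pass sketch of the LRU recurrence at initialization (our own illustration; the \([r_{\min},r_{\max}]=[0.9,0.999]\) ring and the \([0,\pi/10]\) phase range follow the discussion above, the remaining values are assumed for the example, and training code is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, L = 64, 8, 1024
r_min, r_max, max_phase = 0.9, 0.999, np.pi / 10

# Eigenvalues on a thin ring near the unit circle (Lemma 3.2), with restricted phase.
u1 = rng.uniform(size=N)
nu_log = np.log(-0.5 * np.log(u1 * (r_max**2 - r_min**2) + r_min**2))
theta_log = np.log(max_phase * rng.uniform(size=N))
lam = np.exp(-np.exp(nu_log) + 1j * np.exp(theta_log))

# gamma normalization of Eq. (7), initialized element-wise from |lambda|.
gamma_log = np.log(np.sqrt(1.0 - np.abs(lam) ** 2))

# Complex input/output projections; real and imaginary parts with halved variance.
B = (rng.normal(size=(N, H)) + 1j * rng.normal(size=(N, H))) / np.sqrt(2 * H)
C = (rng.normal(size=(H, N)) + 1j * rng.normal(size=(H, N))) / np.sqrt(2 * N)

x = np.zeros(N, dtype=complex)
u = rng.normal(size=(L, H))
for k in range(L):
    x = lam * x + np.exp(gamma_log) * (B @ u[k])   # x_k = Lambda x_{k-1} + exp(gamma^log) ⊙ (B u_k)
y = (C @ x).real                                   # keep the real part of the output projection
```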
It provides a flexible, interpretable, and principled framework for initializing and learning deep RNNs efficiently, and matches performance and efficiency of deep SSMs across all LRA tasks as shown in Tb.3. ## 4 Insights on S4 and Variants We believe our ablations in §3 explain the underlying mechanisms driving the success of deep SSMs. Hence, to conclude the paper, in this section, we inspect in detail the main similarities and differences between our LRU model and diagonal SSMs, and elaborate a few insights. As in §2, to avoid technicalities, we provide a simplified discussion capturing the main features of models stemming from the original S4 paper. For a comparison of different models, we defer the reader to §B. As detailed in §2, diagonal SSMs (DSS, S4D, S5) are instantiated and parameterized through _discretization_ of a latent continuous-time model \(\dot{x}_{\mathrm{ct}}(t)=\bar{A}x_{\mathrm{ct}}(t)+\bar{B}u_{\mathrm{ct}}(t)\), where \(\bar{A}=\mathrm{diag}(\bar{a})\) is initialized with complex entries, often prescribed or inspired by HiPPO theory (Gu et al., 2020). Zero-Order-Hold (ZOH) discretization with stepsize \(\Delta\) leads to the recurrence \(x_{k}=\exp(\Delta\bar{A})x_{k-1}+(\exp(\Delta\bar{A})-I)\bar{A}^{-1}\bar{B}u_{k}\). This formula, while arguably complex compared to our Eq.(7), relates to it as outlined in the next paragraphs. Matrix exponentials make training easier.The exponential in the ZOH formula is due to exact integration of \(\dot{x}_{\mathrm{ct}}(t)=\bar{A}x_{\mathrm{ct}}(t)\), which leads to \(x_{\mathrm{ct}}(\Delta k)=\exp(\Delta\bar{A})x_{\mathrm{ct}}(\Delta(k-1))\). In addition, to enforce stability, in models inspired by S4 the real part of \(\bar{A}\) is often fed into a positive nonlinearity, as we also do in §3.3. From our results in §3.3 and our discussion on optimization advantages (see also §E.2), we claim that the power of exponential parameterization is not necessarily attributable to accurate integration (which is not present in our system), but is more fundamentally rooted in a magnitude-phase decoupling on the recurrence (this makes training with Adam easier, see Fig.8), as well as in the overall advantage of learning in diagonalized space (see Tb.2). We also note that stabilizing the recurrence by adding a nonlinearity was also beneficial in our experiments, although this is not prescribed by the theory underlying S4. Structured initialization is not necessary.While Gu et al. (2022); Gupta et al. (2022); Smith et al. (2022) also discuss initializations for \(\bar{A}\) deviating from the HiPPO structure (see §2 and §B), to the best of our knowledge we are the first to show that simple uniform initialization on a slice of the unit disk, combined with proper normalization, is able to also solve the hardest task in LRA: PathX.11 We also show (Tb.2) that uniform initialization on the disk, which is simply the diagonalized version of Glorot initialization (Thm. 3.1), is sufficient to achieve performance close to more complex deep state-space models on the remaining LRA tasks. Our results ultimately suggest that HiPPO theory, while fundamental for the development of this field, should not be thought of as the main source of S4 success. Footnote 11: Among the models in (Gu et al., 2022), only S4D-Inv and S4D-LegS (options heavily inspired by the HiPPO theory) perform beyond random guessing on PathX. In S5, the skew-symmetric component of the HiPPO matrix is used for initialization.
Discretization changes initialization spectrum.For simplicity, let us restrict our attention to S4D-Lin, for which \(\bar{A}=\mathrm{diag}(\bar{a})\) with \(\bar{a}_{n}=-\frac{1}{2}+i\pi n\), yielding a diagonal transition matrix with elements (i.e. eigenvalues) initialized at \(\exp(-\Delta/2+i\pi\Delta n)\). Under typical choices, e.g. \(\Delta=10^{-3},N=128\), the SSM eigenvalues have magnitude \(\exp(-\Delta/2)\approx 0.9995\), and phase \(\theta=\pi\Delta n\in[0,\pi\Delta N]\approx[0,\pi/8]\) -- i.e. initialization is performed on a ring12 close to the unit circle in \(\mathbb{C}\), with restricted phase connected to the eigenvalue magnitude. As is clear from the results in §3.3 and §3.4, linking the eigenvalue phase and magnitude is not necessary to achieve good performance: indeed, as can be seen in Tb.3, test accuracy on the Long Range Arena (except PathX) can be recovered by using a more natural magnitude-independent initialization on the complete ring. As we discussed in §3.4, changing the initialization phase to a small range around 0 can be motivated by first principles, yet is only needed for extremely long sequences: this modification is already hard-coded in S4, where choosing a small \(\Delta\) also shrinks the phase.13 However, our results clearly show that connecting real and imaginary parts during training through the \(\Delta\) parameter is not necessary to achieve good performance, even on PathX. Footnote 13: This is a useful effect of having a latent continuous-time model: choosing eigenvalues close to the unit circle (i.e. small \(\Delta\)) changes the oscillation frequencies in the discretized system. Discretization performs normalization.The most striking visual difference between ours and the ZOH-discretized S4 recurrence is in the matrix multiplier for \(u_{k}\): \((\exp(\Delta\bar{A})-I)\bar{A}^{-1}\bar{B}\). After conducting experiments on S4D, we found that simply replacing this multiplier with its first-order expansion in \(\Delta\), i.e. \(\Delta\bar{B}\), yields a close match in performance. For input dimension \(H=1\) and unit \(\bar{B}\in\mathbb{R}^{N\times 1}\) (to keep reasoning simple), the corresponding recurrence is \(x_{k}=\exp(\Delta\bar{a})\odot x_{k-1}+\Delta 1_{N}u_{k}\). Elementwise unrolling of this recurrence - without the \(\Delta\) in front of \(u\) - yields \(|x_{k,i}|\leq\sum_{j=0}^{k}|\exp(\Delta\bar{a}_{i})|^{j}|u_{k-j,i}|\), which in the limit \(k\to\infty\) gives \(O(\Delta^{-1})\). Therefore, the \(\Delta\) multiplier in front of \(\bar{B}\) effectively scales the recurrence to avoid blow-ups -- similar to our \(\gamma\) normalization factor. Parameter sharing is not necessary.As a result of discretization, the \(\Delta\) parameter multiplying both \(\bar{A}\) and \(\bar{B}\) couples the recurrence formula with the input projection during training. In our S4 ablations, we found that decoupling these into two separate parameters -- keeping the same initialization to guarantee no blow-ups (see last paragraph) -- does not decrease performance, suggesting that the ODE discretization viewpoint (which induces parameter sharing) is not necessary to achieve S4 performance. From this discussion, we conclude that the success of (diagonal) state-space models is attributable to the use of linear recurrences and complex diagonal exponential matrices, combined with the normalization and initialization induced by discretization. On the other hand, other artifacts of discretization such as parameter sharing or the continuous-time interpretation do not necessarily contribute to its performance.
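These two observations are easy to check numerically. The toy computation below (our own sketch, using an S4D-Lin-style eigenvalue) compares the exact ZOH input multiplier with its first-order expansion \(\Delta\), and shows the \(O(\Delta^{-1})\) gain of the unnormalized recurrence that this multiplier cancels:

```python
import numpy as np

Delta, n = 1e-3, 7
a = -0.5 + 1j * np.pi * n                 # S4D-Lin eigenvalue a_n = -1/2 + i*pi*n
zoh = (np.exp(Delta * a) - 1) / a         # exact ZOH input multiplier (scalar B = 1)
first_order = Delta                       # its first-order expansion in Delta
print(abs(zoh - first_order) / first_order)   # ~1e-2: a close match for small Delta

# Without the Delta factor, the recurrence gain sum_j |exp(Delta*a)|^j is O(1/Delta):
print(1.0 / (1.0 - abs(np.exp(Delta * a))))   # ~2/Delta = 2000, hence the need to rescale
```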
## 5 Conclusion In this paper, we introduce a new RNN layer called the Linear Recurrent Unit or LRU and show how it can be effectively and efficiently used as the core layer of deep sequence models. We provide theoretical insights and extensive ablations on a series of step-by-step modifications of a vanilla RNN--linearization, diagonalization, stable exponential parameterization and normalization--that substantially improve performance, especially on tasks requiring long range reasoning. While our recurrence shares similarities with modern deep SSMs, our design does not rely on discretization of a latent continuous-time system or on structured transition matrices. Instead, our improvements directly follow from initialization and forward-pass analysis arguments standard in the deep learning community, starting from a Glorot-initialized RNN. Our final model matches the performance of modern deep state-space models (e.g. S4 or S5) on all LRA tasks. ## Acknowledgements The authors would like to thank Michalis Titsias, Aleksandar Botev, James Martens and Yee Whye Teh for the interesting discussions and perspectives on our work.
2307.04526
Self-Expanding Neural Networks
The results of training a neural network are heavily dependent on the architecture chosen; and even a modification of only its size, however small, typically involves restarting the training process. In contrast to this, we begin training with a small architecture, only increase its capacity as necessary for the problem, and avoid interfering with previous optimization while doing so. We thereby introduce a natural gradient based approach which intuitively expands both the width and depth of a neural network when this is likely to substantially reduce the hypothetical converged training loss. We prove an upper bound on the ``rate'' at which neurons are added, and a computationally cheap lower bound on the expansion score. We illustrate the benefits of such Self-Expanding Neural Networks with full connectivity and convolutions in both classification and regression problems, including those where the appropriate architecture size is substantially uncertain a priori.
Rupert Mitchell, Robin Menzenbach, Kristian Kersting, Martin Mundt
2023-07-10T12:49:59Z
http://arxiv.org/abs/2307.04526v3
# Self-Expanding Neural Networks ###### Abstract The results of training a neural network are heavily dependent on the architecture chosen; and even a modification of only the size of the network, however small, typically involves restarting the training process. In contrast to this, we begin training with a small architecture, only increase its capacity as necessary for the problem, and avoid interfering with previous optimization while doing so. We thereby introduce a natural gradient based approach which intuitively expands both the width and depth of a neural network when this is likely to substantially reduce the hypothetical converged training loss. We prove an upper bound on the "rate" at which neurons are added, and a computationally cheap lower bound on the expansion score. We illustrate the benefits of such Self-Expanding Neural Networks in both classification and regression problems, including those where the appropriate architecture size is substantially uncertain a priori. ## 1 Introduction Correctly tailoring a model's capacity to an arbitrary task is extremely challenging, especially when the latter is not yet well studied. This challenge can be sidestepped by choosing an architecture which is so large that a poor solution is nevertheless unlikely to occur [19], e.g. due to the double-descent phenomenon. However, since it is hard to predict what size would be large enough, this will often in practice entail using a massively overparameterized network [22][12][11]. Surely it is possible to detect that the existing capacity of the network is insufficient and add more neurons when and where they are needed? In fact, biological neural networks are grown by adding new neurons to the existing network through the process of neurogenesis. The popular review [9] discusses the relatively recent discovery that this process is still active in the adult mammalian brain [23], and [13][5] identify it as a key ability underpinning lifelong learning. Thus inspired, we propose an analogous process for adding both neurons and layers to an artificial neural network during training, based on a local notion of "sufficient capacity" derived from first principles in close relation to the natural gradient [1][17]. Any method for artificial neurogenesis must answer three questions to avoid the problem of locally insufficient capacity [6]. It must determine **when** the current capacity is insufficient and that neuron(s) must therefore be added. It must identify **where** these neurons should be introduced. Finally, it must choose **what** initialization is appropriate for these neurons. These questions, if they are addressed at all in the literature, are normally addressed piecemeal or in ad-hoc ways. For example, very few methods address the question of **what** [6][26]. **When** is answered either by assuming predetermined schedules [26][21], or by waiting for the training loss to converge [27][25], neither of which are informative about **where**. "Whenever you parry, hit, spring,..., you must cut the enemy in the same movement." 1 Our metaphorical enemy is not a loss which is momentarily poor, or even one which is converging to a poor value: it is a deficiency in our parameterization such that the optimizer cannot make progress. We argue that by inspecting the degrees of freedom of the optimizer in function space, one may not only strike faster in answer to **when**, but answer **where** and **what** in the same stroke.
From a mathematical perspective, these degrees of freedom available to the optimizer are given by the image of the parameter space under the Jacobian, and the derivative of the loss in function space will not in general lie in this subspace. It is however possible to project this derivative onto that subspace, and the natural gradient, \(\mathbf{F}^{-1}\mathbf{g}\), is exactly the change in parameters which changes the function according to this projection. In order to measure the size of that projection for a given parameterization, we introduce the natural expansion score \(\eta=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}\). Specifically, the capacity of a neural network is locally insufficient when this score is small for the current parameterization. We therefore add neurons **when** this substantially increases \(\eta\), **where** they will maximally increase \(\eta\), and choose **what** initialization to use for the new parameters according to how it increases \(\eta\). To summarize, our **contributions** are: 1. We introduce the _natural expansion score_ which measures the increase in rate of loss reduction under natural gradient descent when width or depth is added to a neural network. 2. We show how such additions may be made during training without altering the function represented by the network. Our neurogenesis inspired Self-Expanding Neural Networks (SENN) thus avoid interfering with previous optimization or requiring restarts of training. 3. We prove that the number of neurons added simultaneously in SENN is bounded. We further introduce a computationally efficient approximation as a provable lower bound to increases in natural expansion score resulting from additions. 4. We demonstrate SENN's effectiveness for regression and classification. In the remainder of this paper, we proceed as follows: In section 2 we summarize existing growth methods, in section 3 we then describe SENN, and in section 4 we illustrate its operation in practice. ## 2 Related Methods for Growing Neural Networks The problem of adding nodes to neural networks during training has been under consideration for over 30 years (e.g. Dynamic Node Creation (DyNC) [2]), but remains substantially unsolved. There does not seem to exist a unified answer to **when**, **where** and **what**, as we summarize in table 1. Most methods cannot add depth and sideline at least one of these questions. Inspired by neurogenesis like SENN, Draelos _et al._ [5] examine the case of representational learning with stacked autoencoders, where they exploit local reconstruction error to determine **when** and **where** to add neurons. Due to their more general setting, DyNC, Progressive NNs (PrNNs) [21] and Dynamically Expandable NNs (DENNs) [27] use simple training loss convergence or even task boundaries to answer **when**, but must then fall back on ad-hoc preset decisions for **where**. (However, DENNs use subsequent pruning to mitigate the excess capacity introduced by the preset.) All four methods freeze old neurons or continue training from their present values, but randomly initialize new neurons in answer to **what**. 
While ActiveNAS [7] can add both width and depth, it does so by completely restarting training with a fresh initialization of the whole network after every modification. It then waits for convergence, and uses preset answers to **where** similar to the previous methods. The final cluster of three methods all aim to improve on random initialization as an answer to **what**. Splitting Steepest Descent (SSD) [25] and Firefly [26] make small changes to the existing function and answer **what** by optimizing the consequent loss reduction. The former answers **when** by waiting for convergence and examining the loss, whereas the latter simply adds more capacity every \(N\) epochs. GradMax [6] is the closest to SENN in spirit, but is based on vanilla rather than natural gradient descent. More importantly, potential extensions of the method to the **where** and **when** questions are mentioned briefly and their investigation deferred to future work. All three of these latter methods are only able to avoid redundancy of added neurons with existing neurons to the extent that the network is already converged. Of these three, only GradMax completely avoids changing the overall function. In contrast, SENN provides a monolithic answer to all three questions via the natural expansion score. \begin{table} \begin{tabular}{l l l l l} **METHOD** & **WHEN** & **WHERE** & **WHAT** & **DEPTH?** \\ \hline ActiveNAS [7] & end of training & argmax on presets & reinitialize & Yes \\ DyNC [2] & converged loss & preset & random & No \\ NeurDL [5] & recon error & recon error & random & No \\ DENNs [27] & converged loss & preset then prune & random & No \\ PrNNs [21] & at new task & preset & random & No \\ \hline GradMax [6] & future work & future work & vanilla gradient & No \\ SSD [25] & converged loss & loss reduction & loss reduction & No \\ Firefly [26] & N epochs & vanilla gradient & loss reduction & No \\ \hline SENN (ours) & natural gradient & natural gradient & natural gradient & Yes \\ \end{tabular} \end{table} Table 1: Existing expansion methods’ answer to when, where, what and whether they consider depth. SENN is the only approach to provide a cohesive answer to all three based on natural gradients. ## 3 Self-Expanding Neural Networks To provide a cohesive answer to **when**, **where**, **what** with Self-Expanding Neural Networks, we start with the definition of the natural expansion score as the foundation: **Definition 1**.: _The natural expansion score \(\eta=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}\) is given by the inner product of the natural gradient \(\mathbf{F}^{-1}\mathbf{g}\) with the gradient \(\mathbf{g}\)._ With this definition we will describe **how** we add capacity without interfering with the existing optimized parameters in section 3.1. We then in section 3.2 give an intuitive account of what our score \(\eta\) measures, and why we use this to decide **when** to add capacity. Section 3.3 gives a more mathematically precise account of the meaning of \(\eta\), and what this says about **what** initializations should be used for new capacity. Section 3.4 extends the argument of 3.3 to deciding **where** new capacity should be added and whether it should be depth or width, allowing us to put the ingredients of SENN together and summarize this combination. Finally, sections 3.5 and 3.6 cover the practical questions of convergence guarantees and computational efficiency respectively. 
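To make Definition 1 concrete before the details that follow, here is a minimal numpy sketch of the score on a toy model (ours, not the authors' code); the Jacobian, the dimensions, and the damping term \(\epsilon\) are illustrative assumptions.

```python
# Minimal sketch (ours) of Definition 1: eta = g^T F^{-1} g, with the
# empirical Fisher F = J^T J / N built from a per-example Jacobian J
# and damped by eps*I so that the inverse exists.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                       # training examples, parameters
J = rng.normal(size=(N, P))         # per-example Jacobian d y_i / d theta
g_y = rng.normal(size=N)            # loss gradient in output space

g = J.T @ g_y / N                   # average parameter gradient
F = J.T @ J / N                     # empirical Fisher information
eps = 1e-6                          # damping for invertibility
nat_grad = np.linalg.solve(F + eps * np.eye(P), g)
eta = g @ nat_grad                  # natural expansion score
print(f"natural expansion score eta = {eta:.4f}")
```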
### How to add: expanding without changing the overall function In order to explain how to add without changing the overall function, we will consider the illustration in figure 1. This shows a perceptron with two hidden layers, each with three neurons. The number of neurons in a hidden layer may be increased by introducing a new copy of the activation function \(\sigma_{p}\) and connecting it to the neurons of the preceding layer with some linear transform \(\mathbf{W}_{p}\). As shown on the left of the figure, we connect the new neuron to the subsequent layer (in this case the output \(\mathbf{y}\)) with a linear transform initialized to zero. In doing so, we guarantee that we will not perturb the function specified by the existing parameters. Although \(\mathbf{W}_{p}\) will initially receive zeroed gradients since the output transform is zero, this latter transform will immediately receive non-zero gradients and thereby become non-zero. The new neuron may thus be used in future optimization. Figure 1: **SENN is able to add width (orange, left side) and depth (green, right side) to a neural network without changing the overall function. In the diagram, the model’s component functions are composed vertically, e.g. \(\sigma_{1}(\mathbf{W}_{1}\mathbf{x})\) gives the first set of hidden activations. \(\uplus\) indicates concatenation along a hidden dimension, i.e. width addition. \(\mathbf{W}_{2}\mathbf{W}_{p}^{-1}\) indicates matrix multiplication, used in depth expansion via the insertion of an identity function. (Best viewed in color.)** In addition to width expansion, we now consider inserting an entirely new layer, as shown on the right of figure 1. In essence, a particular linear transform, \(\mathbf{W}_{2}\) in the figure, is replaced with a single layer perceptron. To this end, we assume our nonlinearity \(\sigma_{p}\) to be parameterized, and there to exist a choice of those parameters such that \(\sigma_{p}=\mathbf{I}\) is the identity. If we require the initial linear transform \(\mathbf{W}_{p}\) of the inserted perceptron to be invertible (but otherwise arbitrary), then we may choose the output linear transform of the perceptron to be the matrix product \(\mathbf{W}_{2}\mathbf{W}_{p}^{-1}\). With these choices made, the inserted perceptron is equivalent to the linear transform \(\mathbf{W}_{2}\) it replaces, and the overall parameterized function once again remains unchanged. We thus have the first ingredient of SENN: **SENN Ingredient 1: How to add more capacity without changing the overall function.** We add proposed neurons \(p\) to layer \(i\) by concatenation along the \(i\)th hidden dimension \((0\!\uplus\mathbf{W}_{i+1})\circ(\sigma_{p}\!\uplus\sigma_{i})\circ(\mathbf{W}_{p} \!\uplus\mathbf{W}_{i})=\mathbf{W}_{i+1}\circ\sigma_{i}\circ\mathbf{W}_{i}\), and initialize the output weights of \(p\) to zero. We insert a new layer \(q\) by replacing some linear transform \(\mathbf{W}_{i}\) with the composition \((\mathbf{W}_{i}\mathbf{W}_{q}^{-1})\circ(\sigma_{q}=\mathbf{I})\circ\mathbf{W}_{q}\), where \(\mathbf{W}_{q}\) is invertible and \(\sigma_{q}\) is initialized to be the identity. We must therefore choose a suitable parameterized activation function. Rational activation functions satisfy the above conditions and were shown to obtain good real-world performance [18]. We use the simplified parameterization \(\sigma_{\mathbf{\theta}}(x)=\alpha x+(\beta+\gamma x)/(1+x^{2})\), where \(\mathbf{\theta}=\{\alpha,\beta,\gamma\}\) are the three parameters of \(\sigma\), and setting \(\mathbf{\theta}=\{1,0,0\}\) results in the identity function, as required. Since this parameter count is small, we do not share the activation function weights within our layers. 
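The function-preserving property of Ingredient 1 is easy to verify numerically; the sketch below (ours; the layer sizes, hidden-activation parameters, and row-vector convention are assumptions) checks both the width and the depth expansions, using the rational activation above.

```python
# Sketch (ours) of Ingredient 1: the rational activation is the
# identity at theta = (1, 0, 0); a new neuron with zero output weights,
# or an inserted layer built from an invertible W_q followed by
# W_q^{-1} W_2, leaves the network function unchanged.
import numpy as np

def rational(x, a=1.0, b=0.0, c=0.0):
    # sigma_theta(x) = a*x + (b + c*x)/(1 + x^2); identity at (1, 0, 0)
    return a * x + (b + c * x) / (1.0 + x ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                      # small input batch
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 2))
h = rational(x @ W1, 1.0, 0.5, 0.3)              # non-identity hidden act
y_before = h @ W2

# Width: concatenate a new hidden neuron and zero its output weights.
W1_wide = np.concatenate([W1, rng.normal(size=(3, 1))], axis=1)
W2_wide = np.concatenate([W2, np.zeros((1, 2))], axis=0)
assert np.allclose(y_before, rational(x @ W1_wide, 1.0, 0.5, 0.3) @ W2_wide)

# Depth: replace W2 by an invertible W_q, an identity-initialized
# activation, and the product W_q^{-1} W_2 (row-vector convention).
W_q = rng.normal(size=(5, 5))                    # invertible a.s.
y_deep = rational(h @ W_q) @ (np.linalg.inv(W_q) @ W2)
assert np.allclose(y_before, y_deep)
```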
### When to add: deciding whether more capacity is useful Having decided how to add, perhaps the most natural way to evaluate the utility of making some change to the parameterization is to ask what immediate effect this has on the total loss. However, we cannot do this as we have assumed the overall function to remain unaltered. We must therefore consider additional information such as the gradients of the function. Specifically, one can favor adding neurons which maximally increase the euclidean norm of the gradients \(||\mathbf{g}||_{2}\). As found in Evci _et al._ [6], this norm functions well for selecting which neurons to add when the network is close to convergence since it is a direct measure of the rate at which gradient descent will decrease the loss. Unfortunately, comparing the gradient norms \(||\mathbf{g}||_{2}^{2}\) and \(||\mathbf{g}^{\prime}||_{2}^{2}\) for the current parameterization \(\mathbf{\theta}\) and some new expanded parameterization \(\mathbf{\theta}^{\prime}\) is insufficient to determine whether or not more capacity is needed in the first place. This is primarily because it does not account for redundancy in the parameterization: if there is some neuron \(a\) such that the gradients of the linear weights in the next layer "listening" to it have some large norm \(||\mathbf{g}_{a}||_{2}\), then we could introduce an exact copy of this neuron \(a^{\prime}\) for which the corresponding norm would also be \(||\mathbf{g}_{a^{\prime}}||_{2}=||\mathbf{g}_{a}||_{2}\). Since the squared euclidean norm is additive across parameters, we could unboundedly increase \(||\mathbf{g}||_{2}^{2}\) just by adding very many copies of this one neuron \(a\). 2 Footnote 2: More generally, the same problem would occur when considering a new neuron \(c\) whose activations were some linear combination of those of some existing neurons \(a\) and \(b\). In SENN, we avoid this problem with the following simple notion of redundancy. We are using our parameters to express a point in function space. At some point in optimization we are therefore also using them to express a small change in function space. There is some direction that our optimizer "wants" to move in (i.e. the direction in function space which most quickly reduces the loss). We can define new parameters as being useful in a way which is non-redundant with the old parameters to the extent that they allow the optimizer to better express the direction in function space it "wants" to move in. Our natural expansion score \(\eta=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}\) captures this combined sense of usefulness and non-redundancy in a way which will be made more mathematically precise in the next section. This description of its function is sufficient, however, to justify our answer to **when**: **SENN Ingredient 2: When to add more capacity.** A new neuron or layer will be helpful and non-redundant if it provides a fractional increase in \(\eta=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}\) greater than some threshold \(\tau\). **When** we find a potential new neuron or layer for which this is true, we add it. 
We defer specific choices for \(\tau\) to section 3.4, at which point we may draw on the derivation of \(\eta\). ### What to add: determining the initial value of new neurons The reader may at this point be expecting us to tackle the question of **where** additional capacity is most useful, but this would put the cart before the horse. Additional capacity is useful to the extent that it can be initialized in a way which is useful, which we now consider. To simplify mathematical notation in this section, we consider the output \(\mathbf{y}\) to be concatenated over the entire training dataset. While the gradient of the loss with respect to the output \(\mathbf{g_{y}}\) tells us how the loss changes for arbitrary changes in \(\mathbf{y}\), the only changes in \(\mathbf{y}\) we can actually achieve with some parameterization \(\Theta\) are given by the Jacobian product \(\mathbf{J}\mathbf{t}\) for some small parameter change \(\mathbf{t}\in\Theta\). Let \(\mathbf{P}_{\Theta}\) be the orthogonal projection onto this space of directions in output space. The vector \(\mathbf{P}_{\Theta}\mathbf{g_{y}}\) is then the portion of \(\mathbf{g_{y}}\) which lies in the space of achievable output changes, and its squared norm \(||\mathbf{P}_{\Theta}\mathbf{g_{y}}||_{2}^{2}\) is a scalar measure of how much of \(\mathbf{g_{y}}\) this portion is. The vector \(\mathbf{P}_{\Theta}\mathbf{g_{y}}\) is the image \(\mathbf{J}\mathbf{t}\) under the Jacobian of some tangent vector \(\mathbf{t}\) in the parameter space. By the definition of orthogonal projection, \(\mathbf{t}\) minimizes \(||\mathbf{J}\mathbf{t}-\mathbf{g_{y}}||_{2}\), but if there are redundant directions in \(\Theta\) then there may exist multiple such \(\mathbf{t}\). There is however a unique \(\mathbf{t}_{*}\) which minimizes \(||\mathbf{t}||_{2}\) among those \(\mathbf{t}\) which minimize \(||\mathbf{J}\mathbf{t}-\mathbf{g_{y}}||_{2}\). The Moore-Penrose inverse, \(\mathbf{J}^{+}\), of \(\mathbf{J}\) is the unique matrix such that \(\mathbf{t}_{*}=\mathbf{J}^{+}\mathbf{g_{y}}\) for arbitrary \(\mathbf{g_{y}}\). However, \(\mathbf{J}\) is a map from parameter space to total output space, which depends on dataset size \(N\). This dependency can be avoided by working with maps from the parameter space to itself, such as the following average over the dataset \(\mathbf{F}=\frac{1}{N}\mathbf{J}^{T}\mathbf{J}\), known as the Fisher information matrix. The natural gradient is then given by \(\hat{\mathbf{F}}^{-1}\mathbf{g}\), where \(\mathbf{g}=\frac{1}{N}\mathbf{J}^{T}\mathbf{g_{y}}\) is the gradient of the loss with respect to the parameters averaged over the training set, and existence of \(\hat{\mathbf{F}}=\mathbf{F}+\epsilon\mathbf{I}\) is guaranteed by the addition of a small multiple \(\epsilon\) of the identity. In the limit of small \(\epsilon\) this is exactly our \(\mathbf{t}_{*}\).3 Footnote 3: In fact, an alternative definition of the Moore-Penrose inverse is: \(\mathbf{J}^{+}:=\lim_{\epsilon\to 0}(\mathbf{J}^{T}\mathbf{J}+\epsilon\mathbf{I})^{-1}\mathbf{J}^{T}\) We are now able to rewrite the squared norm \(||\mathbf{P}_{\Theta}\mathbf{g_{y}}||_{2}^{2}\) in the familiar form of definition 1: \[||\mathbf{P}_{\Theta}\mathbf{g_{y}}||_{2}^{2}=\mathbf{t}_{*}^{T}\mathbf{J}^{T}\mathbf{J}\mathbf{t}_{*}=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{J}^{T}\mathbf{J}\mathbf{F}^{-1}\mathbf{g}=N\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}=N\eta\quad. 
\tag{1}\] Here, the factor of the dataset size \(N\) appears because the average gradient \(\mathbf{g}\) and our \(\eta\) are normalized according to the training set size. With this formula, we have now derived \(\eta\) from first principles and may use it to choose between specific initializations, yielding our third SENN ingredient: **SENN Ingredient 3: What Initialization to Use.** If \(\mathbf{\theta}^{\prime}\in\Theta^{\prime}\) is an initialization of an expanded parameterization \(\Theta^{\prime}\) such that the overall function remains unchanged (see section 3.1), then the best such initialization \(\mathbf{\theta}^{\prime}_{*}\) is given by \(\operatorname*{arg\,max}_{\mathbf{\theta}^{\prime}}(\eta^{\prime})\). When we add new neurons or layers, we choose **what** initialization to use by this method. 
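Equation (1) can be checked numerically on a toy linear model; the sketch below (ours; the dimensions are arbitrary) compares the squared projection of \(\mathbf{g_{y}}\) onto the image of the Jacobian with \(N\eta\).

```python
# Numerical check of Eq. (1) (ours): the squared projection of g_y
# onto the image of the Jacobian equals N * g^T F^{-1} g.
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 4
J = rng.normal(size=(N, P))          # Jacobian: parameter -> output space
g_y = rng.normal(size=N)             # loss gradient over all outputs

proj = J @ np.linalg.pinv(J)         # orthogonal projector onto im(J)
lhs = np.linalg.norm(proj @ g_y) ** 2

g = J.T @ g_y / N
F = J.T @ J / N
eta = g @ np.linalg.solve(F, g)
assert np.isclose(lhs, N * eta)
print(f"||P_Theta g_y||^2 = {lhs:.4f} = N * eta")
```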
### Where to add: completing the algorithm Much as the euclidean norm \(||\mathbf{g}||_{2}\) measures the rate of loss reduction according to vanilla gradient descent, our \(\eta\) measures the rate of loss reduction according to natural gradient descent. This gives a uniform way of comparing the effect of new capacity no matter where it is added in the network or whether it takes the form of new neurons in an existing layer or a new layer. In particular, one may compare the \(\eta\) values of the best initializations (see section 3.3) for each such variety of addition. 4 Footnote 4: In general one can also adjust for the “size” of each addition in some relevant sense. We found it sufficient to just penalize adding entire new layers versus single new neurons by some constant factor. **SENN Ingredient 4: Where to Add.** A choice of whether to add width or depth, and **where** in the network the new neuron/layer will be added, specifies a particular extension of the current parameter space \(\Theta^{\prime}\). We make those choices which correspond to the extension \(\Theta^{\prime}_{*}=\operatorname*{arg\,max}_{\Theta^{\prime}}\operatorname*{arg\,max}_{\mathbf{\theta}^{\prime}}(\eta^{\prime})\) for which the best initialization is possible. With our newfound knowledge of \(\eta\) as a rate of loss reduction in hand, we return to the question of specifying the expansion threshold \(\tau\), which we deferred from section 3.2 in our previous answer to **when**. An increase from the current natural expansion score \(\eta_{c}\) to a new score \(\eta_{p}\) due to some proposed expansion \(p\) corresponds to an increase in the rate of loss reduction by natural gradient descent. We define this increase to be "sufficient" when it corresponds to a relative increase \(\eta_{p}/\eta_{c}>\tau\) in loss reduction rate greater than the **expansion threshold** \(\tau\). For example, with the intuitive choice \(\tau=2\), each addition must at least double the rate of loss reduction. Since in practice a network does not converge exactly unless the learning rate is set to zero, it is generally considered to have converged once changes in loss become sufficiently small. In analogy to monitoring plateaus in loss, we further require the increase in loss reduction resulting from new capacity to surpass an absolute **stopping criterion** \(\alpha\). While we answer **when**, **where** and **what** cohesively with \(\eta\) during training, we thus concur with all prior works on terminating training. Overall, we may now summarize all ingredients of SENN on the basis of the natural expansion score: **SENN: Summary.** When we add width or depth we do so without changing the overall function. We add new capacity **when** this produces a relative increase in score \(\eta_{p}/\eta_{c}>\tau\) larger than the expansion threshold \(\tau\). We add new capacity **where** it would most increase \(\eta\), and choose **what** initialization to use in order to maximize \(\eta\). We ensure the addition process terminates by additionally comparing each \(\Delta\eta\) to the absolute stopping criterion \(\alpha\), and not adding capacity **when** \(\eta_{p}-\eta_{c}\leq\alpha\). ### Bounds on convergence of expansion Consider repeatedly running our addition algorithm for a network with initial expansion score \(\eta_{0}\). The expansion threshold \(\tau\) guarantees that \(\eta_{i}>\tau\eta_{i-1}\) after the \(i\)-th addition. Since \(\eta=\mathbf{g}^{T}\mathbf{F}^{-1}\mathbf{g}\) is, up to the normalization by \(N\), the squared length of the projected gradient in output space \(||\mathbf{P}_{\Theta}\mathbf{g_{y}}||_{2}^{2}\), it is non-negative and bounded above by \(\eta\leq\lambda=||\mathbf{g_{y}}||_{2}^{2}\). Since \(\eta_{i}\) grows exponentially with \(i\) and is bounded above by \(\lambda\), the maximum number of sequential additions \(i<N_{s}\) increases logarithmically with \(\lambda\). Specifically, \(N_{s}<(\ln\lambda-\ln\eta_{0})/\ln\tau\). This bound becomes large when \(\eta_{0}\) is small, but we also know that \(\eta_{1}>\alpha\) from the stopping criterion \(\alpha\). **Theorem 1** (Upper bound on the "rate" of neuron addition).: _The maximum number of additions \(N_{s}\) from repeatedly running the expansion algorithm is bounded: \(N_{s}<1+(\ln\lambda-\ln\alpha)/\ln\tau\)._ (Proof in supplementary material.) For example, if \(\tau=2\) and \(\alpha/\lambda>10^{-3}\) then \(N_{s}<1+\frac{3\ln 10}{\ln 2}<11\). Note that exponentially large ratios between \(\alpha\) and \(\lambda\) produce only linearly large bounds on \(N_{s}\). We now consider the number of additions \(N_{T}\) made over the course of training with natural gradient descent. Intuitively, \(\lambda\) is the total possible loss reduction and \(\alpha\) is the minimum reduction which justifies expanding the network. If every time we expand the network it only achieves this minimum reduction then we must expand a total of roughly \(N_{T}\approx\lambda/\alpha\) times. If the loss function has constant curvature equal to the Fisher \(\mathbf{F}\), then the total loss reduction possible with the current parameters is given by \(\frac{1}{2}\eta\) and we have \(N_{T}<\lambda/\alpha\) exactly. More generally, we expect that when \(\mathbf{F}\) is an underestimate of the true curvature, \(\eta\) will overestimate the usefulness of new neurons causing \(N_{T}\) to be larger, and vice versa for \(\mathbf{F}\) an overestimate. See the supplementary material for a more in-depth discussion. ### Efficiently computing a lower bound on score increase Recall that the natural expansion score \(\eta\) is given by the inner product of the gradient \(\mathbf{g}\) with the natural gradient \(\mathbf{F}^{-1}\mathbf{g}\). Since working with the natural gradient can be challenging due to the matrix inverse \(\mathbf{F}^{-1}\), we will make use of established approximation techniques. 
Specifically, when we need the natural gradient for the whole network we will use the iterative conjugate gradient method, as suggested for the Hessian in Martens [16], performing Fisher-vector multiplication cheaply via auto-differentiation. When we require the inverse Fisher \(\mathbf{F}_{l}^{-1}\) for the linear transform in some layer \(l\) considered in isolation, we approximate \(\mathbf{F}_{l}\) by the Kronecker product \(\mathbf{F}_{l}\approx\tilde{\mathbf{F}}_{l}=\mathbf{S}_{l}\otimes\mathbf{A}_{l}\), where \(\mathbf{A}_{l}\) is the second moment of the activations at the input of the linear transform, and \(\mathbf{S}_{l}\) is given by the second moment of some distribution of gradients with respect to the output of the linear transform. The relevant gradient distribution is determined by the choice of metric on the output space implicit in the exact definition of \(\mathbf{F}\) one is using, which for us is the euclidean metric. The advantage of this Kronecker factorization is that \(\tilde{\mathbf{F}}_{l}\) may be inverted by inverting \(\mathbf{A}_{l}\) and \(\mathbf{S}_{l}\) separately: \(\tilde{\mathbf{F}}_{l}^{-1}=\mathbf{A}_{l}^{-1}\otimes\mathbf{S}_{l}^{-1}\), which is much cheaper. If \(\partial\mathbf{W}\) is the gradient with respect to the weights as a matrix, then the natural gradient is given by \(\mathbf{S}^{-1}\partial\mathbf{W}\mathbf{A}^{-1}\) [15]. The natural expansion score \(\eta\) is given by the inner product of the gradient with the natural gradient as vectors, which in this matrix form becomes the elementwise inner product \(\eta=\sum_{i,j}\partial\mathbf{W}_{ij}(\mathbf{S}^{-1}\partial\mathbf{W}\mathbf{A}^{-1})_{ij}\), which can also be expressed as a trace: \(\eta=\operatorname{Tr}[\partial\mathbf{W}^{T}\mathbf{S}^{-1}\partial\mathbf{W}\mathbf{A}^{-1}]\). The trace formula for \(\eta\) is reminiscent of the definition of the Pearson correlation coefficient \(r^{2}=\operatorname{\mathbb{E}}\left[xy\right]^{2}/(\operatorname{\mathbb{E }}\left[xx\right]\operatorname{\mathbb{E}}\left[yy\right])\). The gradient for \(\mathbf{W}\) is given by the expectation \(\partial\mathbf{W}=\operatorname{\mathbb{E}}\left[\mathbf{a}\mathbf{g}^{T}\right]\), where \(\mathbf{a}\) is the input activation vector, \(\mathbf{g}\) is the derivative of the loss with respect to the outputs, and the expectation is over the dataset. Let the residual gradient \(\mathbf{g}_{r}=\mathbf{g}-\operatorname{\mathbb{E}}\left[\mathbf{g}\mathbf{a}^{T}\right]\mathbf{ A}^{-1}\mathbf{a}\) be the part of the gradient not predicted by the current activations \(\mathbf{a}\). 
Then if \(\mathbf{a}_{p}\) is the activation vector of a set of proposed neurons, and \(\mathbf{A}_{p}\) is their second moment, then the "correlation coefficient" of the new activations with the residual gradients is a lower bound \(\Delta\eta^{\prime}\) on the improvement \(\Delta\eta\) in natural expansion score (proof in appendix via block LDU decomposition of joint activation covariance): **Theorem 2** (Computationally cheap lower bound on increase in natural expansion score).: \(\Delta\eta^{\prime}:=\operatorname{Tr}[\mathbf{A}_{p}^{-1}\operatorname{\mathbb{ E}}\left[\mathbf{a}_{p}\mathbf{g}_{r}^{T}\right]\mathbf{G}_{l}^{-1}\operatorname{\mathbb{E}} \left[\mathbf{g}_{r}\mathbf{a}_{p}^{T}\right]]\) _is a lower bound \(\Delta\eta^{\prime}\leq\Delta\eta=\eta_{p}-\eta_{c}\) on the improvement in natural expansion score due to some proposed addition of neurons \(p\) to a layer \(l\)._ Intuitively, \(\Delta\eta^{\prime}\) is the fraction of variance in residual gradients "explained" by the output of our new neuron(s). This result holds for adding an arbitrary number of new neurons to an existing layer. If a layer was inserted while retaining residual connections around it, then the same result would hold if we treated the activations of the new layer as "new neurons" in the old layer to calculate \(\Delta\eta^{\prime}\). Because our activation function can represent the identity, we will automatically add these connections if they are in fact necessary, so we use this same method for evaluating our actual layer insertions. The bound \(\Delta\eta^{\prime}\) can be computed for an arbitrary proposal \(p\) of additional neurons using only those intermediate activations and gradients which it would be necessary to cache in order to calculate the gradient \(\mathbf{g}\) and (Kronecker factored approximate) natural gradient \(\tilde{\mathbf{F}}^{-1}\mathbf{g}\) via backpropagation. Therefore, if we have an outer optimizer which computes \(\mathbf{g}\) and \(\tilde{\mathbf{F}}^{-1}\), then we may optimize arbitrarily many proposals \(p\) for arbitrarily many steps with an inner optimizer without incurring any additional costs related to the evaluation of the existing network. The costs of this inner optimizer instead scale with the size of the (very small) networks whose addition to the existing network is being considered. 
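Both the Kronecker-factored score and the Theorem-2 bound reduce to a few matrix products; the sketch below is illustrative and ours, with random stand-ins for activations and gradients, and it takes \(\mathbf{G}_{l}\) to be the second moment of the residual gradients, which is our own assumption rather than something stated in the text above.

```python
# Illustrative sketch (ours, not the authors' code) of the
# Kronecker-factored score eta = Tr[dW^T S^{-1} dW A^{-1}] and the
# Theorem-2 lower bound; G_l is assumed to be E[g_r g_r^T].
import numpy as np

rng = np.random.default_rng(2)
N, d_in, d_out, d_p = 500, 6, 3, 2
a = rng.normal(size=(N, d_in))       # inputs to the linear transform
g = rng.normal(size=(N, d_out))      # gradients w.r.t. its outputs
a_p = rng.normal(size=(N, d_p))      # activations of proposed neurons

A = a.T @ a / N                      # activation second moment A_l
S = g.T @ g / N                      # gradient second moment S_l
dW = g.T @ a / N                     # weight gradient as a matrix
A_inv, S_inv = np.linalg.inv(A), np.linalg.inv(S)
eta = np.trace(dW.T @ S_inv @ dW @ A_inv)

# Residual gradients: the part of g not predicted by the activations.
g_r = g - a @ A_inv @ (a.T @ g / N)
A_p = a_p.T @ a_p / N
C = a_p.T @ g_r / N                  # E[a_p g_r^T]
G = g_r.T @ g_r / N                  # assumed stand-in for G_l
delta_eta_lb = np.trace(np.linalg.inv(A_p) @ C @ np.linalg.inv(G) @ C.T)
print(f"eta = {eta:.4f}, lower bound on increase = {delta_eta_lb:.4f}")
```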
## 4 Experiments We now apply Self-Expanding Neural Networks to regression and classification, to illustrate the behavior of the natural expansion score and demonstrate SENN's efficacy. ### Width Addition in Least-Squares Regression We first show that the evolution over training of the possible improvements \(\Delta\eta^{\prime}\) in natural expansion score due to potential width expansions is meaningful. In order to do so we consider the application of a single layer SENN to a one dimensional least squares regression task as shown in figure 2, i.e. SENN with depth addition deliberately disabled. The reason to have only one hidden layer is that this is effectively least squares regression with basis functions given by the neurons of that layer. We can therefore plot the normalized score increase \(\Delta\eta^{\prime}/\eta_{c}\) of the best neuron for each basis function location and length scale. Where \(\Delta\eta^{\prime}/\eta_{c}>1\) there exists an acceptable proposal. Accepted/rejected proposed neurons are shown on this landscape in red/black at key points in training. We see in the leftmost figure that the best such proposal is accepted because it achieves a large improvement in \(\eta\), and it corresponds to a basis function location close to datapoints with visibly large prediction error which we have been unable to reduce using the existing neurons. The next figure to the right shows the same landscape after the new neuron is introduced, and it can be seen that the \(\Delta\eta^{\prime}/\eta_{c}\) values for neurons with similar locations to it have been dramatically reduced since they would be redundant. The second figure from the right shows the result of optimizing the new expanded parameters until the point at which the next neuron would be added. It can be seen that the prediction errors in the region of the previously introduced neuron are now practically invisible, and that the next neuron is to be introduced in a different region in which errors remain. The rightmost figure shows the function approximation at the conclusion of training, and it can be seen that the prediction errors are negligible and proposals with large relative increase in \(\eta\) are not to be found in the region considered. The reader may note that there are some possible new neurons with small length scales which would surpass the expansion threshold but which we do not find; we could deliberately try optimizing initializations at this length scale to find these, but this would likely result in overfitting. Overall, SENN thus identifies regions of locally insufficient capacity in our parameterization, targets these regions precisely with new added neurons, and uses this selectively added capacity to achieve a good final fit. Figure 2: A single layer SENN (black, solid) is trained to approximate a target function (red, dashed) via non-linear least-squares regression on samples (blue, markers). The location of existing neurons is shown by vertical lines. The lower figures show \(\Delta\eta^{\prime}/\eta_{c}\) as a function of the location and scale of the nonlinearity introduced by a new neuron. Accepted and rejected proposals are marked in red and black respectively. From left to right we see the landscape before and immediately after the fourth neuron is added, before the fifth neuron is added, and at the end of training. SENN adds neurons where they are relevant in order to achieve a good final fit. (Best viewed in color.) ### Layer Addition in Classification We now highlight SENN's depth expansion in the context of classification. Specifically, we consider two-dimensional inputs from the half-moons dataset [20]. In figure 3 we plot \(\Delta\eta^{\prime}/\eta_{c}\) for the best layer addition proposals as a function of overall optimizer steps. Visualizations of the learned decision boundary at initialization and just before layer additions are shown. We can observe that \(\Delta\eta^{\prime}/\eta_{c}\) increases approximately monotonically during three phases, punctuated by large drops when layers are added. In the initial phase the network has zero hidden layers (i.e. is linear), and the simplicity of the decision boundary at the end of this phase reflects this. Since the datapoints are not linearly separable, the large \(\Delta\eta^{\prime}/\eta_{c}\) value correctly indicates that the introduction of a hidden layer is necessary in order to further reduce loss. The visible increase in decision boundary complexity and accuracy over the course of the second phase confirms this. The beginning of the third phase marks the introduction of a second hidden layer and we wait until \(\Delta\eta^{\prime}/\eta_{c}\) rises again, indicating an exhaustion of this new capacity, before reexamining the decision boundary. The increase in boundary complexity is less visible this time, but close inspection reveals that the boundary has become narrower and more rounded. In sum, we have intentionally constructed a scenario where depth addition is necessary for a good fit to lie in the space of solutions, and seen that SENN inserts new layers when this is necessary for global expressivity. Figure 3: Classification is performed with SENN on the half-moons [20] dataset. The normalized layer addition score \(\Delta\eta^{\prime}/\eta_{c}\) is shown as a function of optimization steps; the horizontal bar shows the point above which a layer will be added. The score increases during three phases, during which the SENN has initially zero, one, and then two hidden layers. The respective decision boundary is shown at the beginning and end of these. These layer additions allow SENN to represent more complex decision boundaries when required for global expressivity. (Best viewed in color.) ### Dynamic Selection of Appropriate Architecture Size in Image Classification Finally, we examine the ability of self-expanding neural networks to choose an appropriate size when classifying MNIST [4] images. The leftmost plots of figure 4 show SENN's total hidden size and validation accuracy during training on the full dataset as a function of total batches seen. This use of mini-batching is not strictly necessary for MNIST but we use it to better reflect the realities of training modern neural networks. Our SENN is initialized with a single hidden layer of size 10, and promptly adds a second hidden layer, also of size 10. All five seeds considered then proceed to consistently add width to these layers at a moderate rate until a total hidden size of around 40 is reached, at which point far fewer productive extensions of the network are found and addition slows dramatically. It can be seen that this results in respectable validation performance (>97%) by the end of training with very modest hidden neuron counts (50-60). It is of particular note that our method produces strong **anytime** performance: we are able to continually expand size, and even insert layers, during training without any attendant drops in validation accuracy. Indeed, our method exhibits mostly monotonic improvement up to stochasticity from batching, a property not shared by methods which rely on reinitializing a new network, e.g. [7]. This makes SENN a perfect fit for prospective applications in, e.g., active or continual learning, in the spirit of our original neurogenesis inspiration. Having verified sensible performance of SENN on the full MNIST dataset, we now examine the way in which it adapts its final converged size to the amount of information in the dataset. To this end, we take class-balanced subsets of MNIST of varying sizes and train SENNs to convergence. To maximize clarity in our examination of this relationship, we restrict the SENN to width addition. The converged hidden sizes are shown together with the standard error across five seeds in the rightmost plots of figure 4. The first of these shows log width against linear subset size for ease of comparison to the leftmost panel. It can be seen that the final width tails off rapidly with subset size. 
The rightmost plot shows instead linear width against logarithmic subset size, in which we can now distinguish three regimes. For the smallest subsets, the initial hidden size of 10 is sufficient. For subsets between 10% and 60% of the standard training set, the final hidden size increases logarithmically, but past that point further increases in subset size do not similarly increase the final network size. We posit that this is due to substantial redundancy within the MNIST training set, making further capacity growth unnecessary. Thus, SENN not only provides desirable anytime performance, but also tailors its size suitably to the available data. Figure 4: SENN shows reasonable and reproducible hidden layer growth on MNIST at appealing anytime validation accuracy without intermittent perturbations (left pair of panels). SENN features appropriate scaling with respect to data complexity in its chosen network sizes (right pair of panels). ## 5 Conclusion We have introduced the natural expansion score \(\eta\) and shown how it may be used to cohesively answer the three key questions **when**, **where** and **what** of growing neural networks. We have demonstrated its ability to capture redundancy of new neurons with old and thereby make sensible expansion decisions across time and tasks. While we have focused on providing a thorough mathematical grounding of the natural expansion score in this work, we acknowledge that the multilayer perceptrons on which it was demonstrated differ in scale and complexity from many of the architectures in active use for deep learning in the modern big data regime. Dually, however, prospects for further development are promising, as our theoretical results regarding \(\eta\) apply for arbitrary expansions of parameterized models, and our method of expansion would extend naturally to, for example, convolutional neural networks or normalizing flows where layers may be initialized invertibly. ## Acknowledgments This work was supported by the project "safeFBDC - Financial Big Data Cluster" (FKZ: 01MK21002K), funded by the German Federal Ministry for Economic Affairs and Energy as part of the GAIA-x initiative, and the Hessian research priority programme LOEWE within the project "WhiteBox". It benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK; projects "The Third Wave of AI" and "The Adaptive Mind").
2308.05903
Comparing the quality of neural network uncertainty estimates for classification problems
Traditional deep learning (DL) models are powerful classifiers, but many approaches do not provide uncertainties for their estimates. Uncertainty quantification (UQ) methods for DL models have received increased attention in the literature due to their usefulness in decision making, particularly for high-consequence decisions. However, there has been little research done on how to evaluate the quality of such methods. We use statistical methods of frequentist interval coverage and interval width to evaluate the quality of credible intervals, and expected calibration error to evaluate classification predicted confidence. These metrics are evaluated on Bayesian neural networks (BNN) fit using Markov Chain Monte Carlo (MCMC) and variational inference (VI), bootstrapped neural networks (NN), Deep Ensembles (DE), and Monte Carlo (MC) dropout. We apply these different UQ for DL methods to a hyperspectral image target detection problem and show the inconsistency of the different methods' results and the necessity of a UQ quality metric. To reconcile these differences and choose a UQ method that appropriately quantifies the uncertainty, we create a simulated data set with fully parameterized probability distribution for a two-class classification problem. The gold standard MCMC performs the best overall, and the bootstrapped NN is a close second, requiring the same computational expense as DE. Through this comparison, we demonstrate that, for a given data set, different models can produce uncertainty estimates of markedly different quality. This in turn points to a great need for principled assessment methods of UQ quality in DL applications.
Daniel Ries, Joshua Michalenko, Tyler Ganter, Rashad Imad-Fayez Baiyasi, Jason Adams
2023-08-11T01:55:14Z
http://arxiv.org/abs/2308.05903v1
# Comparing the quality of neural network uncertainty estimates for classification problems ###### Abstract Traditional deep learning (DL) models are powerful classifiers, but many approaches do not provide uncertainties for their estimates. Uncertainty quantification (UQ) methods for DL models have received increased attention in the literature due to their usefulness in decision making, particularly for high-consequence decisions. However, there has been little research done on how to evaluate the quality of such methods. We use statistical methods of frequentist interval coverage and interval width to evaluate the quality of credible intervals, and expected calibration error to evaluate classification predicted confidence. These metrics are evaluated on Bayesian neural networks (BNN) fit using Markov Chain Monte Carlo (MCMC) and variational inference (VI), bootstrapped neural networks (NN), Deep Ensembles (DE), and Monte Carlo (MC) dropout. We apply these different UQ for DL methods to a hyperspectral image target detection problem and show the inconsistency of the different methods' results and the necessity of a UQ quality metric. To reconcile these differences and choose a UQ method that appropriately quantifies the uncertainty, we create a simulated data set with fully parameterized probability distribution for a two-class classification problem. The gold standard MCMC performs the best overall, and the bootstrapped NN is a close second, requiring the same computational expense as DE. Through this comparison, we demonstrate that, for a given data set, different models can produce uncertainty estimates of markedly different quality. This in turn points to a great need for principled assessment methods of UQ quality in DL applications. Bayesian neural network, Deep Ensembles, uncertainty quantification, deep learning © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This is the accepted version of the article, available at [https://doi.org/10.1109/ICMLA55696.2022.00039](https://doi.org/10.1109/ICMLA55696.2022.00039) ## I Introduction Traditional deep learning (DL) models are powerful predictors in both regression and classification problems (LeCun et al. (2015)), but many do not provide uncertainties for their predictions or estimates. The usefulness of uncertainty quantification (UQ) in DL models is being recognized, especially for applications that are high-consequence, including nuclear stockpile stewardship and safety (Stracuzzi et al. (2018), Trucano (2004)), nuclear energy (Stevens et al. (2016)), national security problems (Gray et al. (2022), Ries et al. (2022)), and medical diagnoses (Begoli et al. (2019), Kompa et al. (2021)). For example, Kompa et al. (2021) explains the benefit of using UQ in medical decision making, including models that can report "I don't know" to ensure human experts will further evaluate results. ### _High Consequence Application_ Hyperspectral images (HSI) contain information across hundreds of spectral bands over a surface. These spectral bands provide crucial information about what is in the scene, revealing significantly more than the human eye can detect. 
A common application of HSI is target detection, where an observer is trying to determine if an object of interest is in the image (Anderson et al. (2019), Nasrabadi (2013), Poojary et al. (2015)). Of particular interest for national security problems is finding rare or hidden targets. Past work (Anderson et al. (2019), Gray et al. (2022)) has shown the ability to detect targets at the sub-pixel level. However, the high-consequence nature of target detection applications means an extremely high cost for false positives, making the need for trustworthy algorithms paramount. Uncertainty quantification of model predictions is becoming a necessity in high consequence problems (Begoli et al. (2019), Trucano (2004)) to help alleviate this problem. Traditional DL methods only provide a best estimate, and do not provide an estimate of the model's confidence in its predictions. Ries et al. (2022) applied Bayesian neural networks (BNN) to an HSI target detection problem and proposed High Confidence Sets (HCS) as a way to operationalize UQ output. There are many ways (other than BNNs) to quantify model uncertainty, and the decision maker must determine which UQ approach is most representative of the true uncertainty. Comparing different UQ methods on this application, we clearly demonstrate that, for a given data set, different models can produce uncertainty estimates of markedly different quality. This in turn points to a great need for principled methods to assess UQ quality in DL applications. ### _UQ Methods for DL Models_ Bayesian neural networks were first popularized by David MacKay (MacKay (1992, 1995)) and his student Radford Neal (Neal (1996)). Neal's dissertation introduced Hamiltonian Monte Carlo (HMC) as a way to sample the posterior distribution of a BNN, providing a practical way of training using Markov Chain Monte Carlo (MCMC). To this day, HMC is considered the gold standard for BNN training due to its theoretical backing and lack of approximations. Interested readers should consult Gelman et al. (2013) for more details and references about MCMC and HMC. Variational inference is the most popular method of Bayesian inference for neural networks (NN) (Graves (2011)). Blei et al. (2017) gives an extensive review of VI methods. Although VI is computationally much cheaper than MCMC, a common criticism of standard implementations of VI is the mean-field assumption (assuming posterior independence of all parameters). Put simply, VI is an approximation to the posterior distribution using optimization that improves as the sample size increases, compared to MCMC which is an approximation to the posterior distribution using sampling that improves as the number of Monte Carlo (MC) samples increases. Therefore, VI is constrained by data, and MCMC is constrained by computation time. The bootstrap is a simulation-based method that treats the training data as the population and samples new data sets with replacement from the original training set. Uncertainty is measured by creating a large number of these new data sets and then using the distribution of estimates or predictions to quantify uncertainty (Gray et al. (2022)). Deep Ensembles (DE) (Lakshminarayanan et al. (2017)) follow a similar idea to the bootstrap except no resampling is done; the only difference for each model in the ensemble is the set of starting values for the model optimizer. Monte Carlo Dropout, proposed by Gal and Ghahramani (2016), is an extension of dropout regularization (Srivastava et al. 
(2014)) that understands dropout as a sampling method that approximates a deep Gaussian process (GP). Unlike traditional dropout regularization, which is only applied during training, MC Dropout includes dropout during inference. In this way, an ensemble of predictions can be obtained from a single trained NN, allowing for uncertainty to be estimated. Comprehensive reviews of UQ methods in DL can be found in Kabir et al. (2018) and Moloud et al. (2021). ### _Review of assessing quality of UQ in DL_ Unlike evaluating a DL model's predictive performance using metrics like mean squared error (MSE) or accuracy, a commonly accepted UQ quality metric does not exist, but some previous work has sought to address this problem. Kabir et al. (2018) reviews the ideas of frequentist coverage and interval width as tools for UQ evaluation and cites several examples. Yao et al. (2019) evaluates the predictive uncertainty for several BNN training methods and ensembles. The authors found ensembles do not provide the UQ that users believe it provides, and emphasize calibration metrics are not good indicators of posterior approximation. The authors concluded a new metric for assessing predictive uncertainty is needed. Ovadia et al. (2019) gives a large-scale benchmark of current UQ for DL methods using metrics such as negative log likelihood, Brier score, and expected calibration error (ECE). The authors find many methods have trouble with out of distribution (OOD) situations or with dataset shift. Stahl et al. (2020) evaluated several UQ for DL methods, including BNN and DE, and found they captured the uncertainty differently and correlations between the methods' quantifications were low. Kompa et al. (2021) checked empirical frequentist coverage and interval widths for several DL methods. The authors found MC dropout and ensembling to have low interval coverages and high variability in results on a regression example. In comparison, BNN and GP provided the expected coverages and low variability in the results. For classification, all methods gave adequate coverages for independent and identically distributed (i.i.d.) data, but methods generally performed poorly in terms of coverage when dataset shift was added. Naeini et al. (2015) developed the Expected Calibration Error (ECE) metric for classification models which assesses the agreement of predicted confidences and model accuracy. Nado et al. (2021) baselines UQ quality using ECE and creates a user-friendly framework for assessing performance across multiple UQ methods and model architectures, but the authors do not address epistemic uncertainty. A desired metric to compare and assess uncertainty estimates should consider both aleatoric and epistemic uncertainties. In brief, aleatoric uncertainty is the variability due to randomness or noise in the process or measurement. This type of uncertainty is always present and can only be reduced by an improvement in the process of measurement, not by increasing the sample size. Epistemic uncertainty is the uncertainty resulting from imperfect knowledge of the model. Examples of this include uncertainty during model selection and parameter uncertainty during training. Increasing sample sizes will help reduce epistemic uncertainty by either further understanding the mechanism and creating better model architectures, estimating model parameters more precisely, or both. A comprehensive introduction to the two types of uncertainties in the context of machine learning is given by Hüllermeier and Waegeman (2021). 
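To fix ideas, the ensemble-style methods reviewed in Section I-B differ mainly in what is randomized; the sketch below is ours, not the authors' code, and the toy model, data, and training hyperparameters are illustrative assumptions.

```python
# Sketch (ours) contrasting three ensemble-style UQ strategies on a
# toy classifier: bootstrap resamples the training data, Deep
# Ensembles reseed only the initialization, and MC dropout keeps
# dropout active at inference time on a single trained model.
import torch
import torch.nn as nn

def make_net(p, dropout=0.0):
    return nn.Sequential(
        nn.Linear(p, 10), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(10, 1), nn.Sigmoid(),
    )

def train(net, X, y, steps=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(net(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    return net

torch.manual_seed(0)
X = torch.randn(500, 4)
y = (X[:, 0] + 0.5 * torch.randn(500) > 0).float()

# Deep Ensemble: same data, different random initializations.
de_preds = []
for seed in range(10):
    torch.manual_seed(seed)
    de_preds.append(train(make_net(4), X, y)(X).detach())

# Bootstrap: refit on data resampled with replacement.
boot_preds = []
for seed in range(10):
    torch.manual_seed(seed)
    idx = torch.randint(0, 500, (500,))
    boot_preds.append(train(make_net(4), X[idx], y[idx])(X).detach())

# MC dropout: one model, dropout left on at inference.
mc_net = train(make_net(4, dropout=0.2), X, y)
mc_net.train()                       # keep dropout active
mc_preds = [mc_net(X).detach() for _ in range(10)]
```

In each case the spread of the collected predictions is what the corresponding method reports as uncertainty, which is exactly what the metrics of Section III are designed to judge.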
This paper is organized as follows: Section 2 introduces the motivating application and presents results which necessitate further exploration. Section 3 introduces interval coverage, interval width and ECE, the UQ metrics used in this paper to assess UQ quality. Section 4 applies the metrics in Section 3 on DL models to a simulated classification data set. Finally, Sections 5 and 6 discuss the results and provide conclusions, respectively. ## II Motivating Application Our interest in the quality of the UQ given by a model stems from a target detection problem in a high-consequence decision space described in this section. ### _Data_ The synthetic dataset Megascene (Ientilucci and Brown [2003]) is a high-fidelity HSI simulation scene representing a suburban area of Rochester, NY. The scene contains both natural and man-made objects. Figure 1 shows a pseudo color rendering of the entire scene, designated as MLS-1200; roads, houses, trees, and even a track can be seen in the image. Fig. 1: Pseudo color render of Megascene MLS-1200 at R=670 nm, G=540 nm, B=480 nm. Image reproduced from Anderson et al. [2019]. The simulator uses an AVIRIS-like sensor measuring 211 spectral bands ranging from 0.4 to 2.5 \(\mu m\). The images were created such that the scene is being observed at an elevation of 4 km, giving a pixel size of 1 m\({}^{2}\). Therefore at each pixel, we have the complete spectrum from 0.4 to 2.5 \(\mu m\), and we know exactly the contents of that pixel which make up the spectrum. Details about the radiance to reflectance conversion, in addition to other specifics, can be found in Anderson et al. [2019]. We are interested in detecting small targets hidden within a scene. We manually inserted green discs (with a known spectrum) randomly through the scene to represent targets to detect. In total, the scene contains 125 discs ranging from 0.1 to 4 m in radius. Given the pixel size of 1 \(m^{2}\), some targets fill multiple pixels while others fill just a fraction of a pixel. To make the targets more realistic, some of the discs were partially hidden beneath foliage. Figure 2 (Figure 6 in Anderson et al. [2019]) shows a subset of Megascene with several different sized green target discs. The image on the right shows an example of a disc partially hidden by foliage. Fig. 2: Left: Subset of Megascene showing inserted target green discs. Right: Example of a green disc partially hidden by foliage. Image reproduced from Anderson et al. [2019]. ### _Training_ Several methods were described in Section I-B which provide the necessary UQ for high consequence applications, any of which would be valid for this application. The architecture of the neural networks is 2 hidden layers with 10 nodes each. The left half of MLS-1200 was used for training, and the right half was used for testing. Let \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) be the training data set, where \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) and \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{T}\). Let \(y_{i}\in\{0,1\}\), denoting non-target or target, and \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) be a \(p\)-dimensional vector of features corresponding to response \(y_{i}\). Let \(\boldsymbol{\theta}\) denote all the weights and biases of the neural network. The neural network \(\pi:\mathbb{R}^{p}\rightarrow(0,1)\) estimates the probability that pixel \(i\) contains a target as \(\pi_{i}=P(y_{i}=1|\mathbf{x}_{i},\boldsymbol{\theta})\). 
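As a point of reference, the classifier just described can be written in a few lines; the sketch below is a generic stand-in, not the authors' training code, and the choice of ReLU activations is our assumption.

```python
# Generic stand-in (ours) for the classifier described above: two
# hidden layers of 10 nodes mapping a 211-band spectrum x_i to
# pi_i = P(y_i = 1 | x_i, theta).
import torch
import torch.nn as nn

p = 211                               # Megascene spectral bands
net = nn.Sequential(
    nn.Linear(p, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 1), nn.Sigmoid(),
)

x = torch.randn(4, p)                 # a small batch of pixel spectra
pi_hat = net(x).squeeze(-1)           # estimated target probabilities
```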
### _Quantifying Uncertainty_ Uncertainty on the neural network is measured on its estimates \(\hat{\pi}_{i}\), which use the trained models' weights and biases \(\hat{\theta}\). The uncertainty of \(\hat{\pi}_{i}\) is obtained in the form of \((1-\alpha)\%\) credible intervals (CIs), denoted by \(\mathcal{B}_{\pi_{i}}(\alpha)\). In order to reduce analyst burden through automation, we want to know where the model believes, with high confidence, whether or not a pixel contains a target. The High Confidence Sets (HCS) proposed in Ries et al. [2022] provide a means to operationalize such a process. Formally, the HCS \(\Omega\) is the set of pixels such that: \[\Omega=\{i:(\mathcal{B}_{\pi_{i}}(\alpha)_{LB}>1-\delta\cup\mathcal{B}_{\pi_{ i}}(\alpha)_{UB}<\delta)\} \tag{1}\] where \(\mathcal{B}_{\pi_{i}}(\alpha)_{LB}\) and \(\mathcal{B}_{\pi_{i}}(\alpha)_{UB}\) are the lower and upper bounds of a \((1-\alpha)\%\) CI for \(\pi_{i}\), respectively; \(\delta\) is a probability threshold which defines an estimated probability as close to zero. Both \(\alpha\) and \(\delta\) are user-chosen and should reflect the users' risk preferences. We choose \(\alpha=\delta=0.2\). Table I shows the proportion of pixels from the test set which were included in the respective HCS. \begin{table} \begin{tabular}{l|c} \hline Method & Proportion of Pixels in HC Set \\ \hline BNN-MCMC & 0.81 \\ BNN-VI & 0.27 \\ DE & 0.71 \\ Bootstrap & 0.78 \\ MC Dropout & 0.74 \\ \hline \end{tabular} \end{table} TABLE I: Proportion of test set pixels in HCS for Megascene for each model. There are clear differences in the results, begging the question: which UQ method should the decision maker rely on? In the test scene with over 1.5 million pixels, the 10% difference between BNN-MCMC and DE corresponds to a difference in HCS size of 150,000 pixels. This difference can have a large effect on analysts, but the UQ method with the largest HCS should not automatically be relied on since it could be overconfident. A UQ quality assessment is needed. ## III Uncertainty Quantification Quality Metrics This section introduces UQ metrics that can be used to evaluate UQ model performance and help answer the question posed at the end of the previous section. Some of these metrics require knowing the complete probabilistic data generating mechanism, which in real data problems is not generally known. Therefore the simulation study in Section IV is needed to evaluate the different UQ methods used in Section II. ### _Frequentist Interval Coverage_ Credible intervals contain a set of plausible class probability estimates, where plausible is defined by the _nominal_ rate of the interval itself, typically denoted as \((1-\alpha)\%\). A \((1-\alpha)\%\) CI for an estimate should contain the true population parameter about \((1-\alpha)\%\) of the time if the experiment was redone. Frequentist coverage (coverage, from here on) is the _actual_ rate at which the population parameter is contained in the interval, averaged over all observations. \[\text{CI Coverage}=\frac{1}{n}\sum_{i=1}^{n}\mathds{1}\left(\pi_{i}\in \mathcal{B}_{\pi_{i}}(\alpha)\right) \tag{2}\] This empirical value should be as close as possible to the nominal rate of \((1-\alpha)\%\). Going under or over this value is an indication of poor UQ quality, e.g. a 90% CI with 70% coverage indicates the interval is overly optimistic and not accounting for enough uncertainty. Conversely, a 90% interval with 99% coverage is overly conservative. Note that Equation (2) requires knowing the _true_ value of the model parameter. ### _Interval Width_ Intervals contain values that are plausible estimates for a quantity of interest, therefore it would make sense that there is less variability in the data generating mechanism if the interval is smaller. However, it is not quite this simple. 
The highest UQ quality is given to models that minimize interval width _and_ match coverage with the nominal rate. The width of intervals is given in Equation (3) by \[\text{Interval Width}=\frac{1}{n}\sum_{i=1}^{n}(\mathcal{B}_{\pi_{i}}(\alpha)_{UB}-\mathcal{B}_{\pi_{i}}(\alpha)_{LB}). \tag{3}\] ### _Expected Calibration Error_ Naeini et al. [2015] proposed ECE as a metric to check whether a machine learning classifier's confidence scores are calibrated to true probabilities of correctness. Here we use the broader term _predicted confidence_ defined as \(\hat{\pi}_{i}\equiv\pi(\mathbf{x}_{i},\hat{\theta})\in[0,1]\), or estimated class probabilities. However, we make no claim that all models are expected to estimate the true probability. For classification BNNs, the uncertainty of interest is on the estimated class probabilities (predicted confidences). Consider a binary decision rule, \(\tau(\cdot)\), that generates predictions \(\tau(\hat{\pi}_{i})=\hat{\gamma}_{i}\in\{0,1\}\). Provided a set of true and predicted responses, the accuracy is computed as: \[acc(\mathbf{y},\mathbf{\hat{y}})=\frac{1}{n}\sum_{i=1}^{n}\mathds{1}\left(\hat{\gamma}_{i}=y_{i}\right). \tag{4}\] The average confidence of the set is \[conf(\mathbf{\hat{\pi}})=\frac{1}{n}\sum_{i=1}^{n}\hat{\pi}_{i}. \tag{5}\] ECE discretizes the interval [0, 1] into equally spaced bins and assigns each predicted confidence to the bin that encompasses it. The calibration error of a bin is the difference between the accuracy and average confidence of the samples assigned to that bin. In other words, calibration error treats predicted confidences as estimated probabilities and measures the disagreement between the estimated and true probability of correctness. ECE is a weighted average across all bins: \[ECE(\mathbf{y},\mathbf{\hat{\pi}})=\sum_{b=1}^{B}\frac{n_{b}}{n}\Big{|}acc(\mathbf{y}_{b},\tau(\mathbf{\hat{\pi}}_{b}))-conf(\mathbf{\hat{\pi}}_{b})\Big{|}. \tag{6}\] where \(B\) is the number of bins, \((\mathbf{y}_{b},\mathbf{\hat{\pi}}_{b})\) is the subset of \((\mathbf{y},\mathbf{\hat{\pi}})\) in the \(b^{th}\) bin, and \(n_{b}\) is the number of predictions in bin \(b\), i.e. the number of elements of \(\mathbf{\hat{\pi}}_{b}\). Calibration informs us of the probability of correctness, regardless of cause. However, as model accuracy approaches the limit of irreducible error, calibrated confidences will approach the true probability of the most probable class. As such, calibration error can effectively assess the quality of aleatoric uncertainty estimation. On the other hand, interval coverage and width provide an assessment of epistemic uncertainty in classification problems because the credible intervals should converge to point predictions as estimates of the class probabilities approach the true probabilities. ## IV Simulation Study In this section we evaluate the UQ metrics of Section III on a simulated two-class classification (TCC) dataset to compare different UQ in DL methods, including BNN trained via MCMC, BNN trained via VI, bootstrapped NN, DE, and MC dropout. For comparison against a non-DL model, we also train a GP with MCMC.
\begin{table} \begin{tabular}{l|c} \hline Method & Proportion of Pixels in HC Set \\ \hline BNN-MCMC & 0.81 \\ BNN-VI & 0.27 \\ DE & 0.71 \\ Bootstrap & 0.78 \\ MC Dropout & 0.74 \\ \hline \end{tabular} \end{table} TABLE I: Proportion of test set pixels in HCS for Megascene for each model.
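Before turning to the simulation results, the remaining Section III metrics, Eqs. (3) and (6), can be written just as compactly. The binning and the thresholding decision rule in the ECE sketch below are our assumptions about reasonable defaults, not a transcription of the authors' code:

```python
import numpy as np

def interval_width(lb, ub):
    """Eq. (3): mean credible interval width."""
    return np.mean(ub - lb)

def ece(y_true, pi_hat, n_bins=10, threshold=0.5):
    """Eq. (6): expected calibration error over equally spaced bins,
    with the decision rule tau taken to be thresholding at 0.5."""
    y_pred = (pi_hat >= threshold).astype(int)
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]  # interior bin edges
    bin_idx = np.digitize(pi_hat, edges)
    err = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            acc = np.mean(y_pred[mask] == y_true[mask])  # accuracy in bin
            conf = np.mean(pi_hat[mask])                 # mean confidence
            err += mask.mean() * abs(acc - conf)
    return err
```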
The TCC dataset is a fully parameterized generative model with a joint probability that allows direct evaluation of CI coverage. A full probability distribution is needed in classification problems to check CI coverage. The underlying model is a 2-D Gaussian Mixture Model (GMM) with two equally proportioned clusters that undergo a series of transformations and scalings. The result is a data model that can easily generate a large variety of data classification scenarios that arise in quantifying UQ. Figure 3 shows one simulated TCC data set and densities.
Fig. 3: TCC transformed space with 10% contours for \(P(Y=y|x_{1},x_{2})\).
In all, 100 data sets are generated from the same TCC simulator, and each of the UQ methods is fit to each of the 100 simulated data sets. Coverage and widths of 90% credible intervals are computed for each data set for each method, then averaged over the 100 simulations. For the ensemble methods (DE, bootstrap, MC dropout), ensembles of 100 members were used. The architecture for the DL models was a two-layer fully connected NN with 10 nodes per layer. Table II shows mean coverage, width, and ECE for each method with its MC standard error in parentheses. Bolded terms show the best metric in each column.
\begin{table} \begin{tabular}{l|c c c} \hline \hline Method & Coverage & Width & ECE \\ \hline BNN-MCMC & **0.91 (0.04)** & **0.22 (0.01)\({}^{*}\)** & **0.04 (0.01)** \\ BNN-VI & 0.59 (0.17) & 0.38 (0.07) & 0.08 (0.02) \\ DE & 0.48 (0.09) & 0.09 (0.01) & **0.04 (0.01)** \\ Bootstrap & 0.84 (0.06) & 0.25 (0.02) & **0.04 (0.01)** \\ MC Dropout & 0.67 (0.08) & 0.15 (0.02) & **0.04 (0.01)** \\ GP & 0.98 (0.02) & 0.36 (0.02) & 0.05 (0.01) \\ \hline \hline \end{tabular} \end{table} TABLE II: TCC Simulation results. Bolded values indicate best metric in each column. The asterisk indicates the best interval width, given the nominal coverage was met (nominal rate = 0.9).
Overall, BNN-MCMC does the best since it is the only method to correctly capture the nominal coverage of 0.9. Bootstrap is a close second since it only slightly undercovers the nominal rate and has wider intervals than BNN-MCMC. Interestingly, while DE has a coverage rate much less than nominal, its ECE is comparable with BNN-MCMC and bootstrap. This could lead to an erroneous conclusion that DE's UQ is high quality, when in fact it is only _calibrated_, meaning its aleatoric uncertainty is accurate, but based on coverage, its epistemic uncertainty is not. MC Dropout appears to help the ensemble, but it still doesn't achieve nominal coverage. Figure 4 shows the prediction surface for one simulated TCC data set for each model. Figure 5 shows the width of a 90% credible interval for one simulated TCC data set for each model. The estimation surfaces for all methods except the GP are similar. The GP appears to also be measuring the density of the domain as well as class probabilities, potentially giving it an OOD measure. The interval widths among all the methods except GP are also similar. The main difference among the DL models is that the DE and MC dropout uncertainty doesn't fan out as quickly as one departs from the training data. This behavior is expected since DE does not account for sampling variation. The MCMC and bootstrap plots look similar, and based on the metrics in Table II, they are the most reliable NN models. ## V Discussion There are several results from the simulations that are worth further discussion. First, DE failed to provide an accurate measure of the full uncertainty in the simulation. Although the model was well calibrated (as measured by ECE) compared to other models, its credible intervals undercovered the nominal rate, indicating it is not measuring epistemic uncertainty correctly. This is not surprising since DE creates an ensemble by simply using different starting values for each model in the ensemble. Practically, this means the uncertainty the ensemble is capturing is the optimization uncertainty. Although this may be of interest in some scenarios, we do not believe this is the case for most users. However, DE is a simple way to understand the complexity of the training procedure. In Lakshminarayanan et al. (2017), the authors state that there is little difference between DE and bootstrap when training sets are large.
However, many high-consequence national security problems are not data-rich, and we do not have the luxury of an abundance of data. Therefore, for high-consequence problems, we recommend proceeding with caution when using DE, and urge users to understand theoretically which types of uncertainty DE will measure, and which it will not. Simply resampling data with replacement (bootstrap) for each model in the ensemble gives a theoretically plausible solution to the simplicity of DE. The bootstrap performed only slightly worse than BNN-MCMC, giving reasonable coverage with relatively narrow interval widths and comparable ECE. This additional step requires no additional computational burden compared to DE. Bayesian neural networks fit using MCMC significantly outperformed BNN fit using VI. Although MCMC is the gold standard for Bayesian estimation, we hoped VI would have given better results given the theoretical guarantees it has. We do note that fitting a BNN with VI is still a difficult process, and we believe it is possible that better results could be obtained using different software or VI algorithms. But in light of this, we recommend caution for non-experts using BNN fit via VI. VI provides a significant speedup that should not be ignored; therefore, future work should continue to develop VI algorithms and continue to make them more user-friendly. More research and applications of BNN fit using VI will help understanding of how to diagnose common training issues. We can now tie these low dimensional, oracle-like evaluations of UQ quality back to our original high consequence application in Section II. Although not a causal relationship, poor results in low-dimensional simplistic examples are often an indicator that algorithms will not improve as the data and models become more complex. The originally proposed HCSs are predicated upon the notion of good quality UQ estimates, yet our simulation results indicate low-quality UQ estimates from DE and VI (and MC dropout to a lesser degree). This would suggest that for more complex modeling tasks, VI and DE UQ estimates are likely to be of lower quality than a model such as the MCMC BNN or bootstrap NN. There are ample opportunities for future work in the assessment of the quality of UQ for DL models. New metrics should be created that assess the quality of UQ given by DL models, preferably ones that are better suited to the DL framework. Although the traditional statistical metrics used in this paper are adequate, there are certainly better approaches. We also argue for metrics beyond simple combinations of coverage and width, such as the coverage width criterion of Khosravi et al.
(2011) or evaluating coverages at a large number of nominal rates, such as with the continuous ranked probability score from Zamo and Naveau (2018). We recognize these metrics are useful in evaluation too, but they still require knowing the underlying _true_ probability distribution, which for classification problems is only possible with simulated data. New metrics should be usable on real data to compare which UQ method to use for that specific data set, much like model selection is currently done (where selection only considers predictive performance of the model). A metric analogous to the AIC, which allows simple comparison of model fits, is desired to measure the quality of UQ. ## VI Conclusion Uncertainty quantification of DL models is an active area of research, since researchers and users of DL models have realized point predictions are not always enough, especially in high consequence problems. Many different approaches to UQ for DL models have been proposed; however, there has been little research into the _quality_ of those UQ methods. We fit several UQ-for-DL models on a target detection application, but looking only at the predictions and uncertainties from the models does not tell a decision maker which model best captures the underlying uncertainty. In fact, it introduces more questions than answers. In an attempt to answer these questions, this paper explores the quality of UQ given by several probabilistic UQ models, including BNN, GP, DE, MC dropout, and bootstrapped NN, using traditional statistical metrics of frequentist coverage and CI width, as well as ECE. A two-class classification data set, for which the data generating mechanism was completely known, was used to quantitatively assess the UQ qualities. BNN trained via MCMC was the clear winner, but this comes with a heavy computational cost. The bootstrap came in a close second and may be more practical to use. It requires the same computation as the popular DE, but appears to provide higher quality UQ. However, this paper only explores two specific cases and therefore more research in this area is needed, and better UQ metrics need to be developed to definitively compare UQ in DL methods.
Fig. 4: Prediction surfaces for each model on one TCC simulation. Training data is overlaid.
## Acknowledgements This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND2022-8993 C. The authors would like to thank Michael Darling and Lekha Patel for their contributions in reviewing and improving this paper.
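As a closing illustration of the recommended bootstrap alternative to DE: each ensemble member is trained on a resample of the data drawn with replacement, so the ensemble spread reflects sampling variability rather than only optimization randomness. A generic sketch follows, with `fit_fn` standing in for whatever NN training routine is in use (our placeholder, not the paper's code):

```python
import numpy as np

def bootstrap_ensemble(X, y, fit_fn, n_members=100, seed=0):
    """Train n_members models, each on a bootstrap resample of (X, y).

    fit_fn(X, y) -> fitted model; any training routine can be plugged in.
    Prediction intervals then come from percentiles of member outputs.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        members.append(fit_fn(X[idx], y[idx]))
    return members
```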
2302.00834
Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
We study the interpolation power of deep ReLU neural networks. Specifically, we consider the question of how efficiently, in terms of the number of parameters, deep ReLU networks can interpolate values at $N$ datapoints in the unit ball which are separated by a distance $\delta$. We show that $\Omega(N)$ parameters are required in the regime where $\delta$ is exponentially small in $N$, which gives the sharp result in this regime since $O(N)$ parameters are always sufficient. This also shows that the bit-extraction technique used to prove lower bounds on the VC dimension cannot be applied to irregularly spaced datapoints. Finally, as an application we give a lower bound on the approximation rates that deep ReLU neural networks can achieve for Sobolev spaces at the embedding endpoint.
Jonathan W. Siegel
2023-02-02T02:46:20Z
http://arxiv.org/abs/2302.00834v2
# Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data ###### Abstract We study the interpolation, or memorization, power of deep ReLU neural networks. Specifically, we consider the question of how efficiently, in terms of the number of parameters, deep ReLU networks can interpolate values at \(N\) datapoints in the unit ball which are separated by a distance \(\delta\). We show that \(\Omega(N)\) parameters are required in the regime where \(\delta\) is exponentially small in \(N\), which gives the sharp result in this regime since \(O(N)\) parameters are always sufficient. This also shows that the bit-extraction technique used to prove lower bounds on the VC dimension cannot be applied to irregularly spaced datapoints. ## 1 Introduction We study the problem of how efficiently, in terms of the number of parameters, deep neural networks can interpolate prescribed values at \(N\) datapoints. Let \(x_{1},...,x_{N}\in\mathbb{R}^{d}\) be distinct points and suppose that we are given values \(y_{1},...,y_{N}\in C\subset\mathbb{R}\). Here the set \(C\) is the set of classes and the \(y_{i}\) are the class labels corresponding to the data point \(x_{i}\). We wish to find a neural network \(f\) with as few parameters as possible such that \(f(x_{i})=y_{i}\) for \(i=1,...,N\). To be specific, let us give some notation describing the class of deep (feedforward) neural networks [11] we consider. Let \(\sigma\) be a piecewise polynomial activation function. We will mainly consider the important special case of \(\sigma=\max(0,x)\), i.e. the rectified linear unit (ReLU) activation function [16]. Let \(L\geq 1\) be an integer denoting the depth of the network and denote by \(\mathbf{W}=(w_{1},...,w_{L})\) a vector with positive integer values which contains the widths of the intermediate layers. We write \(A_{\mathbf{M},b}\) to denote the affine map with weight matrix \(\mathbf{M}\) and offset, or bias, \(b\), i.e. \[A_{\mathbf{M},b}(x)=\mathbf{M}x+b. \tag{1.1}\] When the weight matrix \(\mathbf{M}\in\mathbb{R}^{k\times n}\) and the bias \(b\in\mathbb{R}^{k}\), the function \(A_{\mathbf{M},b}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\) maps \(\mathbb{R}^{n}\) to \(\mathbb{R}^{k}\). We denote the set of deep neural networks with activation function \(\sigma\) and architecture defined by the vector \(\mathbf{W}\) by \[\Upsilon^{\mathbf{W}}(\mathbb{R}^{d}):=\{A_{\mathbf{M}_{L},b_{L}}\circ\sigma\circ A_{\mathbf{M}_{L-1},b_{L-1}}\circ\sigma\circ\cdots\circ\sigma\circ A_{\mathbf{M}_{1},b_{1}}\circ\sigma\circ A_{\mathbf{M}_{0},b_{0}}\}, \tag{1.2}\] where the weight matrices satisfy \(\mathbf{M}_{L}\in\mathbb{R}^{1\times w_{L}}\), \(\mathbf{M}_{0}\in\mathbb{R}^{w_{1}\times d}\), and \(\mathbf{M}_{i}\in\mathbb{R}^{w_{i+1}\times w_{i}}\) for \(i=1,...,L-1\), and the biases satisfy \(b_{i}\in\mathbb{R}^{w_{i+1}}\) for \(i=0,...,L-1\) and \(b_{L}\in\mathbb{R}\). For such a neural network architecture, the total number of free parameters is \[P=P(\mathbf{W}):=w_{L}+dw_{1}+1+\sum_{i=1}^{L-1}w_{i+1}w_{i}+\sum_{i=1}^{L}w_{i}. \tag{1.3}\] Let \[\mathcal{X}_{N}\subset(\mathbb{R}^{d})^{\neq N}:=\{(x_{1},...,x_{N})\in(\mathbb{R}^{d})^{N},\;x_{i}\neq x_{j}\;\text{for}\;i\neq j\}\] be a set of possible configurations of distinct data points and let \(C\subset\mathbb{R}\) be a set of labels. We wish to find a neural network architecture \(\mathbf{W}\) with the minimal number of parameters \(P(\mathbf{W})\) which can interpolate labels from the set \(C\) at any datapoints in \(\mathcal{X}_{N}\), i.e.
such that for any \((x_{1},...,x_{N})\in\mathcal{X}_{N}\) and \(y_{1},...,y_{N}\in C\) there exists an \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) with \(f(x_{i})=y_{i}\). This has been a central and well-studied problem in the theory of the expressive power of neural networks (see, for instance, [3, 7, 8, 12, 13, 18, 19, 20, 30]) alongside other problems related to expressiveness, such as universal approximation [5, 14] and the benefits of depth [6, 9, 21, 22, 27]. The problem of interpolation, also called memorization, has recently been connected to the phenomenon of double descent [4, 17]. We remark that it is easy to see that \(O(N)\) parameters suffice to interpolate arbitrary values at an arbitrary set of \(N\) distinct points, i.e. with \(O(N)\) parameters one can interpolate with \(\mathcal{X}_{N}=(\mathbb{R}^{d})^{\neq N}\) and \(C=\mathbb{R}\). Indeed, there exists a direction \(v\in\mathbb{R}^{d}\) such that \(v\cdot x_{i}\) are all distinct, and any piecewise linear function with \(N\) pieces can be represented using a (deep or shallow) ReLU neural network with \(O(N)\) parameters. Note that by throwing away some of the coordinates, we can always assume that \(d<N\) since the pairwise differences \(x_{i}-x_{j}\) lie in an \(N-1\) dimensional subspace, so that we can restrict to some set of \(N-1\) coordinates while keeping the datapoints distinct. Using the VC-dimension [28], it can be proved that if the set of possible labels \(C\) is infinite and the architecture \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) can interpolate, then \(P(\mathbf{W})\geq cN\) for a constant \(c\), i.e. if the label set is infinite, then a neural network with piecewise polynomial activation function requires \(\Omega(N)\) parameters to interpolate at \(N\) values (see [29], Lemma 4.1 or [25], Theorem 5). In addition, it has been shown that if \(\mathcal{X}_{N}=(\mathbb{R}^{d})^{\neq N}\) and \(|C|>1\), then if \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) interpolates it must hold that \(P(\mathbf{W})\geq(N-1)/2\), i.e. that a neural network with piecewise polynomial activation function requires \(\Omega(N)\) parameters to interpolate at _arbitrary_ distinct datapoints [26]. Of particular interest are situations where interpolation is possible with \(P(\mathbf{W})=o(N)\) parameters. By the preceding remarks, this requires additional assumptions on the allowed datasets \(\mathcal{X}_{N}\) and the label set \(C\). A typical assumption is that the label set \(C\) is finite (independent of \(N\)) and that the datapoints are well separated [19, 29, 30]. Specifically, we consider datasets of the form \[\mathcal{X}_{N}=\{(x_{1},...,x_{N})\subset(\mathbb{R}^{d})^{N},\ |x_{i}|\leq 1,\ |x_{i}-x_{j}|\geq\delta(N)\text{ if }i\neq j\}. \tag{1.4}\] The assumption (1.4) means that the datapoints lie in the unit ball and are separated by at least a distance \(\delta(N)\) (clearly the separation distance \(\delta\) must depend upon \(N\)). This situation was recently analyzed in [19, 29], and in [29] (building upon the work in [19]) it was proved that there exists a deep ReLU architecture \(\mathbf{W}\) which can interpolate labels from a finite set \(C\) at datasets \(\mathcal{X}_{N}\) satisfying the separation condition (1.4) using only \[P(\mathbf{W})=O\left(\sqrt{N\log N}+\sqrt{\frac{N}{\log N}}\max(\log(R),\log(|C|))\right) \tag{1.5}\] parameters, where \(R=10N^{2}\delta^{-1}\sqrt{\pi d}\).
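Two of the remarks above are easy to make concrete. First, the parameter count of Eq. (1.3) is mechanical to compute. Second, the observation that \(O(N)\) parameters always suffice can be realized by projecting the data onto a direction with distinct projections and building a continuous piecewise-linear interpolant, which a one-hidden-layer ReLU network with \(N-1\) units represents exactly. The sketch below is our own illustration of this standard construction (via differences of consecutive slopes), not code from the paper:

```python
import numpy as np

def num_parameters(widths, d):
    """P(W) of Eq. (1.3) for hidden widths (w_1, ..., w_L), input
    dimension d, and scalar output."""
    L = len(widths)
    weights = d * widths[0] + widths[-1]              # entries of M_0 and M_L
    weights += sum(widths[i + 1] * widths[i] for i in range(L - 1))
    biases = sum(widths) + 1                          # b_0, ..., b_{L-1} and b_L
    return weights + biases

def relu_interpolant(ts, ys):
    """Knots and coefficients of f(t) = ys[0] + sum_k c_k relu(t - t_k)
    interpolating values ys at sorted, distinct 1-D points ts."""
    slopes = np.diff(ys) / np.diff(ts)
    c = np.concatenate([[slopes[0]], np.diff(slopes)])
    return ts[:-1], c                                 # N - 1 hidden units

def evaluate(t, y0, knots, c):
    return y0 + np.maximum(0.0, t[:, None] - knots[None, :]) @ c

rng = np.random.default_rng(0)
ts = np.sort(rng.uniform(-1.0, 1.0, 8))
ys = rng.normal(size=8)
knots, c = relu_interpolant(ts, ys)
print(np.allclose(evaluate(ts, ys[0], knots, c), ys))  # True: exact interpolation
print(num_parameters([len(knots)], d=1))               # O(N) parameters
```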
This remarkable result shows that if \(\delta^{-1}\) is polynomial in \(N\), then deep ReLU neural networks can interpolate binary labels using only \(\sqrt{N}\) parameters (up to logarithmic factors). This is also tight up to logarithmic factors, which follows from bounds on the VC-dimension of deep neural networks with polynomial activation function [1, 2, 10]. An interesting open question is to determine the precise dependence of the number of parameters \(P(\mathbf{W})\) on both the number of datapoints \(N\) and the separation distance \(\delta(N)\). The result (1.5) from [29] gives a nearly complete solution to this problem when \(\delta(N)\) depends polynomially on \(N\); only a logarithmic gap in \(N\) remains. In this work, we consider the opposite regime, where \(\delta(N)\) is exponentially small in \(N\), and prove that in this case \(\Omega(N)\) parameters are required. Since, as remarked above, \(O(N)\) parameters are always sufficient, this provides a complete solution in the regime where \(\delta(N)\) is exponentially small in \(N\). To explain this result, we first recall the notions of shattering and VC-dimension [28]. Let \(X=\{x_{1},...,x_{N}\}\subset\mathbb{R}^{d}\) be a finite set of points and \(\mathcal{F}\) a class of real-valued functions on \(\mathbb{R}^{d}\). The class \(\mathcal{F}\) is said to shatter [23, 24] the point set \(X\) if given any signs \(\epsilon_{1},...,\epsilon_{N}\in\{\pm 1\}\) there exists an \(f\in\mathcal{F}\) such that \(\operatorname{sgn}(f(x_{i}))=\epsilon_{i}\). Here we are using the convention that \[\operatorname{sgn}(x)=\begin{cases}-1&x<0\\ 1&x\geq 0.\end{cases}\] The VC-dimension of the class \(\mathcal{F}\) is the size of the largest set of points that \(\mathcal{F}\) shatters, i.e. \[\text{VC-dim}(\mathcal{F})=\sup\{N:\ \exists X=\{x_{1},...,x_{N}\}\subset\mathbb{R}^{d},\ X\text{ is shattered by }\mathcal{F}\}. \tag{1.6}\] In [10], the VC-dimension of a neural network architecture \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) is shown to be bounded by \(P(\mathbf{W})^{2}\) for any piecewise polynomial activation function \(\sigma\). Using the 'bit-extraction' technique, this bound is shown to be optimal for very deep ReLU neural networks [1, 2]. Note that the VC-dimension only requires that a single set of \(N\) points be shattered by the class \(\mathcal{F}\). If we instead require that _all_ sets of \(N\) distinct points be shattered by a neural network architecture \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\), then it is shown in [26] that the number of parameters must be at least \(P(\mathbf{W})\geq(N-1)/2\). Indeed, Theorem 1 in [26] implies that if \(P(\mathbf{W})<(N-1)/2\), then the set of shattered point sets is not dense in \((\mathbb{R}^{d})^{N}\). This implies that if \(\mathcal{X}_{N}\) is a dense subset of \((\mathbb{R}^{d})^{N}\) and a neural network architecture \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) can interpolate binary class labels for all datasets in \(\mathcal{X}_{N}\), then \(P(\mathbf{W})\geq(N-1)/2\). However, note that the collection of \(\delta\)-separated datasets defined by (1.4) is not dense in \((\mathbb{R}^{d})^{N}\) for any \(\delta>0\), and so this result gives no indication of how small \(\delta(N)\) needs to be to rule out interpolation with \(o(N)\) parameters.
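The shattering definition can be checked by brute force for small point sets and simple function classes. The toy below is our own illustration (using the \(\operatorname{sgn}(0)=1\) convention above): the class of shifted identities \(f_{t}(x)=x-t\) shatters any single point but no pair of points, since it can only realize "monotone" sign patterns.

```python
import numpy as np

def shatters(points, functions):
    """Brute-force shattering check: does some f in `functions`
    realize every sign pattern on `points`?"""
    def sgn(v):                       # the paper's convention: sgn(0) = +1
        return 1 if v >= 0 else -1
    patterns = {tuple(sgn(f(x)) for x in points) for f in functions}
    return len(patterns) == 2 ** len(points)

# Thresholds f_t(x) = x - t realize only monotone sign patterns.
thresholds = [lambda x, t=t: x - t for t in np.linspace(-2, 2, 401)]
print(shatters([0.0], thresholds))         # True
print(shatters([-1.0, 1.0], thresholds))   # False: (+1, -1) is unreachable
```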
Our main result is a quantitative version of Theorem 1 in [26] which answers precisely this question and shows that \(\delta(N)\) need only be exponentially small in \(N\) to rule out interpolation with \(o(N)\) parameters using deep ReLU neural networks. **Theorem 1**.: _Let \(\sigma\) be a piecewise polynomial activation function and \(\mathbf{W}=(w_{1},...,w_{L})\) be a vector representing the widths of each layer of a deep neural network architecture with activation function \(\sigma\). Let \(\delta>0\) and suppose that \(\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) shatters **every** subset \(X=\{x_{1},...,x_{N}\}\subset\mathbb{R}^{d}\) which satisfies \(|x_{i}|\leq 1\) for all \(i\) and_ \[|x_{i}-x_{j}|\geq\delta\text{ for }i\neq j.\] _There exists a constant \(c\) depending only on \(\sigma\) such that if_ \[\delta<\begin{cases}e^{-cN^{2}}&\sigma\text{ is a general piecewise polynomial function}\\ e^{-cN}&\sigma\text{ is a piecewise linear function},\end{cases} \tag{1.7}\] _we must have_ \[P(\mathbf{W})\geq N/6. \tag{1.8}\] We apply this to the case of interpolation with deep ReLU neural networks to get the following Corollary, which shows that if the separation distance is exponentially small in \(N\), then deep ReLU neural networks require \(\Omega(N)\) parameters to interpolate. **Corollary 1**.: _Let \(\mathcal{X}_{N}\) be defined by (1.4) for a separation distance \(\delta(N)\) and suppose that the number of classes \(|C|>1\). Let \(\mathbf{W}=(w_{1},...,w_{L})\) describe a deep neural network architecture with ReLU activation function._ _There exists a constant \(c<\infty\) such that if \(\delta(N)<e^{-cN}\) and for any \((x_{1},...,x_{N})\in\mathcal{X}_{N}\) and \(y_{1},...,y_{N}\in C\) there exists an \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R}^{d})\) which satisfies_ \[f(x_{i})=y_{i}, \tag{1.9}\] _then necessarily \(P(\mathbf{W})\geq N/6\)._ Proof.: By translating the output, we may assume there exists \(c_{1},c_{2}\in C\) with \(c_{1}<0<c_{2}\). If arbitrary values from \(C\) can be interpolated, then every dataset in \(\mathcal{X}_{N}\) can be shattered. We now apply Theorem 1 with \(\sigma=\max(0,x)\), which is piecewise linear, to get the desired result. ## 2 Proof of the Main Result In this section we prove Theorem 1. Observe that it suffices to prove the Theorem in the case \(d=1\). This follows by restricting the input of the network to the \(x\)-axis, which can only reduce the number of parameters. If every subset of \(\mathbb{R}^{d}\) satisfying the assumptions of Theorem 1 is shattered, then so is every subset of the \(x\)-axis, and the one-dimensional result gives the desired lower bound. The first ingredient is a Lemma which restricts the number of sign changes that a function \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R})\) can have. The Lemma and proof follow closely the ideas of [27] and of Theorem 14 in [2]. **Lemma 1**.: _Let \(\sigma\) be a piecewise polynomial activation function with \(p\) pieces and degree at most \(d\) in each piece. Suppose that \(\mathbf{W}=(w_{1},...,w_{L})\) is a vector representing the widths of each layer of a deep neural network architecture. Let \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R})\) and \(x_{0}<x_{1}<\cdots<x_{M}\) be a sequence of real numbers such that \(\operatorname{sgn}(f(x_{i}))\neq\operatorname{sgn}(f(x_{i+1}))\) for \(i=0,...,M-1\). Then_ \[M\leq M^{*}(\mathbf{W},\sigma):=(d^{L}+2)p^{L}d^{L(L+1)/2}\prod_{i=1}^{L}w_{i}. \tag{2.1}\] Proof.: The proof essentially follows from repeated application of Lemma 13 in [2].
Specifically, consider the output of the network after the \(k\)-th layer, which is given by \[f_{k}(x):=A_{\mathbf{M}_{k},b_{k}}\circ\sigma\circ A_{\mathbf{M}_{k-1},b_{k-1}}\circ\sigma\circ\cdots\circ\sigma\circ A_{\mathbf{M}_{1},b_{1}}\circ\sigma\circ A_{\mathbf{M}_{0},b_{0}}(x). \tag{2.2}\] We prove by induction that each component of \(f_{k}\) is a piecewise polynomial function with at most \(P_{k}\) pieces, each of degree at most \(D_{k}\), where \[P_{k}=p^{k}d^{k(k+1)/2}\prod_{i=1}^{k}w_{i},\ D_{k}=d^{k}. \tag{2.3}\] This evidently holds when \(k=0\), since each component of \(f_{0}\) is an affine linear function, which has one piece of degree one. For the inductive step, we first note that if the \(i\)-th component \(f_{k}^{i}\) has \(\leq P_{k}\) pieces of degree \(\leq D_{k}\), then \(\sigma(f_{k}^{i})\) has at most \(pP_{k}D_{k}\) pieces of degree \(\leq dD_{k}\). Indeed, let \(b_{1},...,b_{p-1}\) denote the breakpoints of \(\sigma\). On each of its at most \(P_{k}\) pieces, \(f_{k}^{i}\) is a polynomial of degree at most \(D_{k}\). This means that there can be at most \(D_{k}\) solutions to \(f_{k}^{i}(x)=b_{q}\) for \(q=1,...,p-1\) on each of these pieces. Hence applying \(\sigma\) divides each piece of \(f_{k}^{i}\) into at most \(pD_{k}\) further sub-pieces. On each of these sub-pieces \(\sigma(f_{k}^{i})\) is the composition of a polynomial of degree \(\leq D_{k}\) with a polynomial of degree at most \(d\) and so has degree at most \(dD_{k}\). Thus each component of \(f_{k+1}\) is a linear combination of at most \(w_{k+1}\) piecewise polynomials with \(\leq pP_{k}D_{k}\) pieces of degree \(\leq dD_{k}\). Taking the union of their breakpoints, we see that this can have at most \(w_{k+1}pP_{k}D_{k}\) pieces of degree \(\leq dD_{k}\). This completes the inductive step and implies that \(f\) has at most \(P\) pieces of degree at most \(D\), where \[P=p^{L}d^{L(L+1)/2}\prod_{i=1}^{L}w_{i},\ D=d^{L}. \tag{2.4}\] On each of these pieces, the function \(f\) is a polynomial of degree \(D\) and thus can switch sign at most \(D+1\) times. In addition, moving from one piece to the next the sign can switch at most an additional \(P\) times (note that this only needs to be counted if \(\sigma\) is not continuous). This gives a total number of sign changes which is at most \[M\leq P(D+1)+P=(d^{L}+2)p^{L}d^{L(L+1)/2}\prod_{i=1}^{L}w_{i}. \tag{2.5}\] Note that in the important special case where \(\sigma=\max(0,x)\) is the ReLU activation function [16] the bound in Lemma 1 reduces to \[M^{*}(\mathbf{W},\sigma)\leq 3\prod_{i=1}^{L}(2w_{i}),\] which has been shown to be essentially sharp in [27]. In particular, in [27] a network with small fixed width \(w\) and depth \(L=O(m)\) is constructed which represents the function \(f(x)=x\pmod{2}\) for \(x=0,...,2^{m}-1\). Thus, for this family of networks we have \[\log(M)\geq c\log(M^{*}(\mathbf{W},\sigma))\] for a fixed constant \(c\). Proof of Theorem 1.: Assume without loss of generality that \(N\) is even. Let \(T\geq 2N\) be an even integer and consider the set of points \[X_{T}=\{i/T,\ i=0,...,T-1\}\subset[0,1]. \tag{2.6}\] Note that every \(N\)-element subset of \(X_{T}\) satisfies the assumptions of Theorem 1 with \(\delta=T^{-1}\). Let \(S(\mathbf{W},T)\) denote the set of sign patterns that \(\Upsilon^{\mathbf{W}}(\mathbb{R})\) achieves on the large set \(X_{T}\), i.e. \[S(\mathbf{W},T):=\{(\epsilon_{i})_{i=0}^{T-1},\ \exists f\in\Upsilon^{\mathbf{W}}(\mathbb{R})\ \text{s.t.}\ \operatorname{sgn}(f(i/T))=\epsilon_{i}\}.
\tag{2.7}\] Suppose that \(\Upsilon^{\mathbf{W}}(\mathbb{R})\) shatters every subset \(X\subset X_{T}\) with \(|X|=N\). We claim that this implies \[|S(\mathbf{W},T)|\geq\left(\frac{T}{4M^{*}(\mathbf{W},\sigma)}\right)^{N/2}. \tag{2.8}\] Indeed, let \(I\subset\{0,...,T/2-1\}\) be an arbitrary subset of size \(|I|=N/2\) and consider the set \[X_{I}=\left\{\frac{2i}{T}\right\}_{i\in I}\cup\left\{\frac{2i+1}{T}\right\}_{i\in I}\] and the sign pattern on \(X_{I}\) given by \[\varepsilon_{I}\left(\frac{2i}{T}\right)=1,\ \varepsilon_{I}\left(\frac{2i+1}{T}\right)=-1 \tag{2.9}\] for \(i\in I\). Since the set \(X_{I}\) is shattered by \(\Upsilon^{\mathbf{W}}(\mathbb{R})\), the sign pattern \(\varepsilon_{I}\) can be matched by an \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R})\), so there must exist an \(\varepsilon\in S(\mathbf{W},T)\) such that \[\varepsilon(2i)=1,\ \varepsilon(2i+1)=-1 \tag{2.10}\] for all \(i\in I\). For each fixed sign pattern \(\varepsilon\in S(\mathbf{W},T)\) let \(J_{\varepsilon}\) denote the set of indices for which \(\varepsilon(2i)=1\) and \(\varepsilon(2i+1)=-1\), i.e. \[J_{\varepsilon}=\{i:\ \varepsilon(2i)=1\ \text{and}\ \varepsilon(2i+1)=-1\}\subset\{0,...,T/2-1\}. \tag{2.11}\] Note that any subset \(I\subset\{0,...,T/2-1\}\) for which \(\varepsilon\) satisfies (2.10) must satisfy \(I\subset J_{\varepsilon}\). On the other hand, Lemma 1 implies that \(|J_{\varepsilon}|\leq M^{*}(\mathbf{W},\sigma)\). This means that the number of subsets \(I\) of size \(|I|=N/2\) for which \(\varepsilon\) satisfies (2.10) is bounded by \[\binom{M^{*}(\mathbf{W},\sigma)}{N/2}\leq\frac{M^{*}(\mathbf{W},\sigma)^{N/2}}{(N/2)!}. \tag{2.12}\] On the other hand, the total number of subsets \(I\subset\{0,...,T/2-1\}\) of size \(|I|=N/2\) is \[\binom{T/2}{N/2}\geq\frac{T^{N/2}}{4^{N/2}(N/2)!}, \tag{2.13}\] since \(T\geq 2N\). Since by assumption \(\varepsilon_{I}\) is matched by an \(f\in\Upsilon^{\mathbf{W}}(\mathbb{R})\) for every subset \(I\subset\{0,...,T/2-1\}\) of size \(N/2\) and the number of \(\varepsilon_{I}\) that each \(\varepsilon\in S(\mathbf{W},T)\) can match is bounded by (2.12), we get the lower bound (2.8). Next, we upper bound \(|S(\mathbf{W},T)|\) using Warren's Theorem [31]. Let \(b_{1}<b_{2}<\cdots<b_{p-1}\) denote the break points of the activation function \(\sigma\) and \(q_{1},...,q_{p}\) be the polynomials in each piece. Specifically, setting \(b_{0}=-\infty\) and \(b_{p}=\infty\), \(\sigma\) is given by \[\sigma(x)=q_{i}(x)\quad\text{for }b_{i-1}\leq x<b_{i}. \tag{2.14}\] Note that here we assume that \(\sigma\) is right continuous, but changing the value of \(\sigma\) at the break points \(b_{i}\) (if it is discontinuous) doesn't significantly change the proof. For \(i=1,...,L\), let \(\mathbf{t}_{i}\in\{1,...,p\}^{w_{i}}\) and define a function \(\mathbf{t}_{i}:\mathbb{R}^{w_{i}}\rightarrow\mathbb{R}^{w_{i}}\) by \[\mathbf{t}_{i}(\mathbf{x})_{j}=q_{(\mathbf{t}_{i})_{j}}(\mathbf{x}_{j}) \tag{2.15}\] for \(j=1,...,w_{i}\). For indices \(\mathbf{t}_{i}\), the function \(\mathbf{t}_{i}\) applies the pieces of \(\sigma\) indicated by \(\mathbf{t}_{i}\) to the corresponding entries of the input \(\mathbf{x}\in\mathbb{R}^{w_{i}}\).
Given an input \(x\in X_{T}\), consider the signs of the following quantities \[\begin{split}(A_{\mathbf{M}_{0},b_{0}}(x))_{j}-b_{q},\,j=1,...,w_{1},\ q=1,...,p-1\\ (A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x))_{j}-b_{q},\,j=1,...,w_{2},\ q=1,...,p-1\\ (A_{\mathbf{M}_{2},b_{2}}\circ\mathbf{t}_{2}\circ A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x))_{j}-b_{q},\,j=1,...,w_{3},\ q=1,...,p-1\\ \vdots\\ (A_{\mathbf{M}_{L-1},b_{L-1}}\circ\mathbf{t}_{L-1}\circ\cdots\circ\mathbf{t}_{2}\circ A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x))_{j}-b_{q},\,j=1,...,w_{L},\ q=1,...,p-1\\ A_{\mathbf{M}_{L},b_{L}}\circ\mathbf{t}_{L}\circ A_{\mathbf{M}_{L-1},b_{L-1}}\circ\mathbf{t}_{L-1}\circ\cdots\circ\mathbf{t}_{2}\circ A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x),\end{split} \tag{2.16}\] where the \(\mathbf{t}_{i}\) range over all elements of \(\{1,...,p\}^{w_{i}}\) for \(i=1,...,L\). Given \(x\in X_{T}\) and parameters \(\mathbf{P}:=\{\mathbf{M}_{0},...,\mathbf{M}_{L},b_{0},...,b_{L}\}\) the sign of \(f_{\mathbf{P}}(x)\), where \(f_{\mathbf{P}}\) is the neural network function defined by the parameters \(\mathbf{P}\), is uniquely determined by the signs of the quantities in (2.16). Indeed, we recursively set \(\mathbf{t}_{i}\) to index the pieces of \(\sigma\) which contain the outputs of the neurons at layer \(i\), i.e. we set (recursively in \(i\)) \[(\mathbf{t}_{i})_{j}=\min\{q:\text{sgn}((A_{\mathbf{M}_{i-1},b_{i-1}}\circ\mathbf{t}_{i-1}\circ\cdots\circ\mathbf{t}_{2}\circ A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x))_{j}-b_{q})=-1\}\cup\{p\} \tag{2.17}\] for \(j=1,...,w_{i}\). Then \[\text{sgn}(f_{\mathbf{P}}(x))=\text{sgn}(A_{\mathbf{M}_{L},b_{L}}\circ\mathbf{t}_{L}\circ A_{\mathbf{M}_{L-1},b_{L-1}}\circ\mathbf{t}_{L-1}\circ\cdots\circ\mathbf{t}_{2}\circ A_{\mathbf{M}_{1},b_{1}}\circ\mathbf{t}_{1}\circ A_{\mathbf{M}_{0},b_{0}}(x)). \tag{2.18}\] Now, for each fixed \(x\in X_{T}\) and set of \(\mathbf{t}_{i}\in\{1,...,p\}^{w_{i}}\), each of the quantities in (2.16) is a polynomial of degree at most \(d^{L}\), where \(d=\max\{\text{deg}(q_{i})\}_{i=1}^{p}\), in the parameters \(\mathbf{P}\). Ranging over all \(x\in X_{T}\), \(\mathbf{t}_{i}\in\{1,...,p\}^{w_{i}}\), and all quantities in (2.16), we see that the sign pattern \(\text{sgn}(f_{\mathbf{P}}(x))_{x\in X_{T}}\) is uniquely determined by the signs of (setting \(V=\sum_{i=1}^{L}w_{i}\)) \[Tp^{V}\left(1+(p-1)V\right)\] polynomials of degree \(d^{L}\) in the \(P=P(\mathbf{W})\) parameters \(\mathbf{P}\). We now apply Warren's Theorem (Theorem 3 in [31]) to see that the number of such sign patterns, and thus \(|S(\mathbf{W},T)|\), is bounded by \[|S(\mathbf{W},T)|\leq\left(\frac{4ed^{L}Tp^{V}\left(1+(p-1)V\right)}{P}\right)^{P}\leq\left(4ed^{L}Tp^{V}\left(1+(p-1)V\right)\right)^{P}. \tag{2.19}\] We take logarithms and compare with the lower bound (2.8) to get \[\frac{N}{2}(\log(T)-\log(4M^{*}(\mathbf{W},\sigma)))\leq P(\log(T)+\log(4ed^{L}p^{V}(1+(p-1)V))). \tag{2.20}\] We complete the proof of Theorem 1 by means of a contradiction. Suppose that \(P<N/6\) and observe that by Lemma 1 \[\log(4M^{*}(\mathbf{W},\sigma))\leq C\left(\sum_{i=1}^{L}w_{i}+L^{2}\log(d)+L\log(p)\right)\leq C(\sigma)P^{2}, \tag{2.21}\] since we clearly have that \(\sum_{i=1}^{L}w_{i}\leq P\) and \(L\leq P\). In addition, we easily calculate that \[\log(4ed^{L}p^{V}(1+(p-1)V))\leq C(\sigma)P.
\tag{2.22}\] This means that if we choose \(T>e^{cN^{2}}>e^{cP^{2}}\) (recall that by assumption \(P<N/6\)) for a constant \(c\) depending upon \(\sigma\), then we can ensure that \[\log(T)\geq 2\max(\log(4M^{*}(\mathbf{W},\sigma)),\ \log(4ed^{L}p^{V}(1+(p-1)V))). \tag{2.23}\] But then (2.20) becomes \[\frac{N}{4}\log(T)\leq\frac{3P}{2}\log(T), \tag{2.24}\] which implies \(P\geq N/6\), contradicting our assumption. If \(\sigma\) is piecewise linear, then the bound (2.21) can be improved to (since \(d=1\)) \[\log(4M^{*}(\mathbf{W},\sigma))\leq C(\sigma)P, \tag{2.25}\] so that we can take \(T>e^{cN}>e^{cP}\) in the above argument. Since \(\delta=T^{-1}\), this completes the proof. ## 3 Acknowledgements We would like to thank Ronald DeVore, Jinchao Xu, and Juncai He for helpful discussions. This work was supported by the National Science Foundation (DMS-2111387 and CCF-2205004).
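As a quick numerical illustration of the sign-change bound used in the proof (Lemma 1 specializes to \(M^{*}\leq 3\prod_{i}(2w_{i})\) for ReLU), one can evaluate a randomly initialized ReLU network on a fine one-dimensional grid and count sign changes; grid counting only lower-bounds the true number of sign changes, and the construction below is our own illustrative check, not part of the paper.

```python
import numpy as np

def random_relu_net(widths, d=1, seed=0):
    """Random weights/biases for a ReLU net with the given hidden widths."""
    rng = np.random.default_rng(seed)
    dims = [d] + list(widths) + [1]
    return [(rng.normal(size=(dims[i + 1], dims[i])),
             rng.normal(size=dims[i + 1])) for i in range(len(dims) - 1)]

def forward(params, x):
    """Evaluate the net on a batch x of shape (d, m); returns shape (m,)."""
    h = x
    for i, (W, b) in enumerate(params):
        h = W @ h + b[:, None]
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)        # ReLU on hidden layers only
    return h[0]

widths = (8, 8)
params = random_relu_net(widths)
xs = np.linspace(-5.0, 5.0, 200_001)[None, :]
signs = np.sign(forward(params, xs))
changes = np.count_nonzero(np.diff(signs))
bound = 3 * int(np.prod([2 * w for w in widths]))
print(changes, "<=", bound)               # Lemma 1, ReLU specialization
```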
2305.03342
Finite-size analysis in neural network classification of critical phenomena
We analyze the problem of supervised learning of ferromagnetic phase transitions from the statistical physics perspective. We consider two systems in two universality classes, the two-dimensional Ising model and two-dimensional Baxter-Wu model, and perform careful finite-size analysis of the results of the supervised learning of the phases of each model. We find that the variance of the neural network (NN) output function (VOF) as a function of temperature has a peak in the critical region. Qualitatively, the VOF is related to the classification rate of the NN. We find that the width of the VOF peak displays the finite-size scaling governed by the correlation length exponent, $\nu$, of the universality class of the model. We check this conclusion using several NN architectures -- a fully connected NN, a convolutional NN and several members of the ResNet family -- and discuss the accuracy of the extracted critical exponents $\nu$.
Vladislav Chertenkov, Evgeni Burovski, Lev Shchur
2023-05-05T07:52:51Z
http://arxiv.org/abs/2305.03342v2
# Finite-size analysis in neural network classification of critical phenomena ###### Abstract We analyze the problem of supervised learning of ferromagnetic phase transitions from the statistical physics perspective. We consider two systems in two universality classes, the two-dimensional Ising model and two-dimensional Baxter-Wu model, and perform careful finite-size analysis of the results of the supervised learning of the phases of each model. We find that the variance of the neural network (NN) output function (VOF) as a function of temperature has a peak in the critical region. Qualitatively, the VOF is related to the classification rate of the NN. We find that the width of the VOF peak displays the finite-size scaling governed by the correlation length exponent, \(\nu\), of the universality class of the model. We check this conclusion using several NN architectures--a fully connected NN, a convolutional NN and several members of the ResNet family--and discuss the accuracy of the extracted critical exponents \(\nu\). _Introduction.--_ Deep learning has recently emerged as a promising tool for studying phase transitions and critical phenomena. The pioneering observation of Ref. [1] is that training a neural network (NN) to perform a binary classification of microscopic spin states of a two-dimensional (2D) Ising model reproduces the critical temperature of the ferromagnetic phase transition, known from the exact solution [2]. Following the seminal work, a variety of approaches are being explored to test deep learning techniques in application to several models, including the Ising and \(q\)-state Potts models, percolation, XY- and clock models [3; 4; 5; 6; 7; 8; 9]. It is becoming clear that a neural network (NN) trained on an equilibrium ensemble of microscopic states can learn and predict phase transitions between macroscopic states, _in many situations_. This gives rise to a series of fundamental questions: How to interpret NN results from the physics perspective--specifically, does a NN learn the critical behavior of a universality class of a transition? What are relevant NN observables? How general is the NN approach and what are its failure modes? What limits the reliability and accuracy of these predictions? What is the role of the NN architecture? In this Letter, we address these questions by considering two exactly solvable models in 2D, the Ising model [2] and the Baxter-Wu (BW) model [10; 11]. We train NNs to perform binary classification of microscopic spin configurations, and perform a careful finite-size scaling analysis of the classification results. We show that the second moment of the NN output displays finite-size scaling governed by the correlation length exponent, \(\nu\), of the universality class of the model. We compare predictions of several network architectures--fully connected networks (FCNN), shallow convolutional networks (CNN) and several members of the ResNet family. We note that using the BW model turns out to be essential to be able to distinguish the critical scaling, \(\sim 1/L^{1/\nu}\), from regular, analytic corrections, \(\sim 1/L\), to thermodynamic limit behavior of systems with finite linear size \(L\). While for the Ising model the correlation length exponent \(\nu=1\), the BW model belongs to the 4-state Potts universality class with \(\nu=2/3\)--thus making the critical scaling clearly distinguishable from analytic corrections.
We note that the BW model, unlike other models in the same universality class, does not show any logarithmic corrections [10], which allows us to simplify the finite-size analysis. _Models and methods.--_ We consider two classical, exactly solved models, formulated in terms of Ising spins, \(\sigma_{i}=\pm 1\), on \(L\times L\) lattices. The Ising model [2] is defined by the Hamiltonian \(H_{\rm Is}=-J\sum_{\langle ij\rangle}\sigma_{i}\sigma_{j}\), where \(J\) is the coupling constant, and the summation runs over the pairs of nearest neighbors of the _square_ lattice with periodic boundary conditions. The BW model [10; 11] is defined on a triangular lattice, and contains three-spin interactions \(H_{\rm BW}=-J\sum_{\langle ijk\rangle}\sigma_{i}\sigma_{j}\sigma_{k}\), where the summation runs over triplets of spins which form triangular plaquettes of a triangular lattice with periodic boundary conditions. To generate data sets for NN training and validation, we use the standard Monte Carlo (MC) simulations with Metropolis single spin flip updates [12]. We perform simulations for system sizes with \(L=48,72,96,144,\) and \(216\) for the Ising model, and \(L=48,72,96,144,\) and \(243\) for the BW model. For each system size, we perform simulations for \(N_{t}=114\) values of the temperatures between \([T_{c}-0.4;T_{c}+0.4]\), using the value of the critical temperature \(T_{c}\) from the exact solution of the corresponding model. For each system size and for each value of the temperature, we collect \(N_{s}=1500\) "snapshots" of spin configurations generated by the MC process (here by a "snapshot" we mean a collection of \(L^{2}\) spin values, \(\pm 1\)). To make sure that snapshots are uncorrelated, we skip at least \(2\,\tau_{\rm corr}\) Monte Carlo steps between snapshots, where \(\tau_{\rm corr}\) is the integrated autocorrelation time for the magnetization [13]. For each simulation, we allow at least \(20\,\tau_{\rm corr}\) MC steps for equilibration (see Ref. [14] for a detailed discussion of our MC simulations). _NN training.--_ We train a NN to perform binary classification of snapshots for a given system size \(L\) into two classes, ferromagnetic (FM, \(T<T_{c}\)) or paramagnetic (PM, \(T>T_{c}\)), separately for the Ising model and the BW model. A NN takes as input a "snapshot" of size \(L\times L\), and outputs the class scores for the FM and PM classes. We interpret the class scores as probabilities, since their sum equals unity. We use three different network architectures: a convolutional neural network (CNN) [15], a fully connected neural network (FCNN) [16], and deep convolutional residual networks (ResNet) [17]. In the ResNet family we use networks with 10, 18, 34, and 50 layers. The detailed parameters of the networks and our training protocol can be found in Supplementary Materials. _Analysis of NN outputs.--_ Once a NN is trained, we feed it with \(N\) snapshots from the testing dataset to perform the classification. In what follows we denote by \(f_{i}^{T}\) the FM class prediction for the \(i\)-th snapshot at temperature \(T\). Averaging over the testing dataset, we define the average prediction, \(F^{T}\), \[F^{T}=\frac{1}{N}\sum_{i=1}^{N}f_{i}^{T} \tag{1}\] and its variance, \(V^{T}\), \[V^{T}=\frac{1}{N}\sum_{i=1}^{N}\left(f_{i}^{T}\right)^{2}-\left(\frac{1}{N}\sum_{i=1}^{N}f_{i}^{T}\right)^{2}. \tag{2}\] Fig. 1 shows the temperature dependence of the FM class prediction for the Ising model (left) and the BW model (right) with the CNN architecture.
Other NN architectures give similar results. Here we only show the FM class prediction, because the PM class prediction is given by \(1-F^{T}\). The network output, \(F^{T}\), for both models, is clearly similar to the observation of Ref. [1]: for low temperatures, \(F^{T}\approx 1\), for high temperatures, \(F^{T}\approx 0\), and the transition region clearly shrinks on increasing the system size \(L\), thus developing a step function for \(L\gg 1\). This behavior is qualitatively similar for all network architectures we considered. According to Ref. [1], for the Ising model the FM prediction, \(F^{T}\), approaches the value of 0.5 for all values of the system size \(L\) at the exact value of the critical temperature, \(T_{c}=2/\ln(1+\sqrt{2})\) [2]. Since the PM prediction is simply \(1-F^{T}\), a straightforward interpretation would be that at \(T=T_{c}\), NNs are equally likely to classify a snapshot as either ferromagnetic or paramagnetic _for finite system sizes, \(L\)_. However, our simulations of the Ising model and the BW model, Fig. 1, show that this interpretation is not entirely correct. For some lattice sizes, both for the Ising model and for the BW model, the "equal prediction" point, \(F^{T}=1/2\), is shifted away from the value of \(T_{c}=2/\ln(1+\sqrt{2})\) known from the exact solution [10]. For the FCNN architecture, the point \(F^{T}=1/2\) is shifted away to the paramagnetic phase for all lattice sizes both for the Ising and the BW models (see Fig. 1 of Supplementary Materials). Non-systematic shifts can be observed for the Ising and the BW models for different system sizes in the networks of the ResNet family. For the ResNet-50 (Fig. 5 of Supplementary Materials) for the Ising model, large system sizes (96, 144, 216) are shifted to the ferromagnetic phase, while small sizes (48, 72) are shifted to the opposite side, to the paramagnetic phase. We thus conclude that \(F^{T}=1/2\) is not a reliable finite-size estimate of the critical temperature \(T_{c}\). For the Ising model, Ref. [1] considered system sizes of up to \(L=60\) and observed that the \(F^{T}\) curves display data collapse with respect to the "scaling variable", \(tL^{1/\nu}\), where the reduced temperature \(t=(T-T_{c})/T\) is scaled by the critical exponent \(\nu\). The data collapse estimate of Ref. [1] for \(L\) up to \(L=60\) produces the values \(T_{c}=2.266\pm 0.002\) and \(\nu=1.0\pm 0.2\), consistent with the exact values of the critical temperature and the correlation length exponent for the 2D Ising universality class, \(\nu=1\) [10]. Our numerical experiments show that data collapse is visually observed in a wide range of values of the critical exponent \(\nu\in[0.75,1.5]\), depending on the network architecture (see Figs. 13-24 of Supplementary Materials for details). We stress that simply including larger system sizes does not improve correlation length exponent and critical temperature estimates due to increasing error bars of the NN output in the critical region, cf. Fig. 1. We note, however, that the increase of the error bars of \(F^{T}\), Eq. (1)--equivalently, the variance \(V^{T}\), Eq. (2)--around \(T=T_{c}\) is similar to the expected behavior of thermodynamic functions in the critical region, where second moments of observables are related to temperature derivatives of corresponding thermodynamic functions. In this spirit, we consider the second moment of the NN prediction of the FM class, Eq. (2), and hypothesize that the variance of the NN output, Eq. (2), is singular in the thermodynamic limit.
This way, the observed increase of the error bars of \(F^{T}\) around \(T\approx T_{c}\) is in fact nothing but a finite-size rounding of this divergence, governed by the correlation length exponent \(\nu\). Fig. 2 displays the temperature dependence of \(V^{T}\), which indeed shows a drastic increase around \(T=T_{c}\), and a characteristic Gaussian-like bell shape for both Ising and BW models and all network architectures. Furthermore, the widths of the bell-shaped curves decrease with increasing system size, which is consistent with scaling behavior. To test this hypothesis, we study the \(L\) dependence of the width of the peak of \(V^{T}\), Eq. (2). Specifically, for each value of \(L\), we fit \(V^{T}\) vs. \(T\) with an unnormalized Gaussian-like Ansatz, \(V^{T}\sim\exp\left(-(T-T_{*})^{2}/2\sigma^{2}\right)\), with \(\sigma\) and \(T_{*}\) being fit parameters, and extract the dependence of the width \(\sigma\) on \(L\). Since there is no _a priori_ requirement that the profile is strictly Gaussian, we also perform separate single-parameter fits to the left-hand (\(T<T_{*}\)) and the right-hand (\(T>T_{*}\)) parts of the \(V^{T}\) curves. In this procedure, \(T_{*}\) is simply the location of the maximum of \(V^{T}\), and \(\sigma\) is the (only) fit parameter. For both fitting protocols, we then fit the resulting widths, \(\sigma(L)\), to a power-law Ansatz, \(\sigma(L)\sim 1/L^{1/\nu_{\sigma}}\). We perform this procedure for the Ising and the BW models and for all network architectures, and the results are summarized in Tables 1 and 2.
Figure 1: \(F^{T}\) ferromagnetic phase predictions for the Ising model (left) and the Baxter-Wu model (right) with FCNN for various lattice sizes. The error bars correspond to the variance \(V^{T}\) of the NN prediction. The black vertical dashed line is the position of the critical temperature \(T_{c}=2/\ln(1+\sqrt{2})\), which happens to be the same for both models.
Figure 2: \(V^{T}\) variance for the Ising model (left) and the Baxter-Wu model (right) with FCNN for different lattice sizes. The black vertical dashed line is the position of the critical temperature. The solid lines are limited to the area where the Gaussian approximation was applied to extract the width \(\sigma\) for each lattice size \(L\).
For the Ising model, Table 1, the first observation is that the resulting values of the scaling exponent (both two-sided \(1/\nu_{\sigma}\) and one-sided \(1/\nu_{\sigma^{\pm}}\)) are consistent with the correlation length exponent for the Ising universality class, \(\nu=1\). One notable exception is the ResNet 10- and 34-layer architectures, which show vastly different values for the exponents \(1/\nu_{\sigma^{\pm}}\) and \(1/\nu_{\sigma}\), and the resulting values are barely within 4 standard deviations of the exact result, \(\nu=1\). For the BW model, Table 2, the striking observation is that the scaling exponents, \(1/\nu_{\sigma}\), estimated from the width of \(V^{T}\), are consistent with the exact value of the correlation length exponent for the universality class of the BW model, \(\nu=2/3\). The accuracy of the fit results, Table 2, allows us to conclusively distinguish this value from regular, non-singular corrections, \(\sim L^{-1}\). This is the major advantage of considering the BW model in addition to the Ising model, where \(\nu=1\). We also note that the shape of \(V^{T}\) is in fact not symmetric around the maximum--for both Ising and BW models.
Allowing for different widths, \(\sigma^{+}\) and \(\sigma^{-}\) for \(T>T_{*}\) and \(T<T_{*}\), respectively, produces closer fits of \(V^{T}\). Moreover, the scaling exponents \(1/\nu_{\sigma^{+}}\) and \(1/\nu_{\sigma^{-}}\) are different--the low-temperature exponent, \(1/\nu_{\sigma^{-}}\), is consistently larger than the high-temperature exponent, \(1/\nu_{\sigma^{+}}\)--again, for both Ising and BW models. It is clear from Tables 1 and 2 that the values of the critical exponents extracted from NN data are largely independent of the NN architecture, and that increasing the depth of an NN does not bring drastic improvements in the accuracy of the exponent estimates. For networks of the ResNet family, both for the Ising model and for the BW model, some of the scaling exponents have larger errors than similar ones for the simpler FCNN and CNN architectures. We thus conclude that the width of the \(V^{T}\) peak displays finite-size scaling consistent with the universality class of a model, and that simple convolutional networks, CNN, or fully connected networks, FCNN, are more appropriate for studying this class of problems, and that increasing the network depth does not automatically translate into better reliability or accuracy of the estimates--this is consistent with the conclusion of Ref. [3]. Given that the width of the \(V^{T}\) peak displays finite-size scaling with the correlation length exponent, it is natural to study the \(L\)-dependence of other properties of the peak: its maximum value, \(V^{T}_{\rm max}\), and the shift of the maximum from the thermodynamic limit value of \(T_{c}\). Our numerical experiments show that both the maximum height and the peak shift are NN architecture dependent and do not display meaningful convergence with \(L\to\infty\). This behavior must be contrasted with the behavior of more traditional thermodynamic observables. It is well known [18] that the position of the specific heat maximum \(T^{*}\) shifts from the critical point \(T_{C}\) with the correlation length exponent, \(T^{*}-T_{C}\propto 1/L^{1/\nu}\), and the same behavior is found for other thermodynamic quantities due to the fluctuation cutoff, when the correlation length becomes comparable with the dimensions of the system, similar, e.g., to the rounding of the magnetic susceptibility at a temperature close to the critical one [19]. We tested the shift of the VOF maximum for both models and six networks, and the results are placed in Table 3 for the Ising model and Table 4 for the Baxter-Wu model. Note that the critical temperature values are coincidentally the same for the two models, but \(1/\nu\) is different--it is 1 for the Ising model and 1.5 for the Baxter-Wu model--and we use these values when analyzing the VOF data. A demonstration of the fits can be found in the Supplemental Materials. The results of the fitting are in most cases consistent within no more than five standard deviations and follow the assumption that the shift of the VOF maximum follows the Ferdinand-Fisher law with an exactly known exponent. The testing of the Ising model with the ResNet-50 network is the worst, and at the same time the \(T^{*}\) estimates for the largest systems are very close to the critical temperature \(T_{C}\), as can be seen from Fig. 23 in the Supplemental Materials. Surprisingly, the values of \(T^{*}\) change more regularly with \(L^{-1/\nu}\) for the Baxter-Wu model than for the Ising model. This may be due to weaker corrections to scaling for the Baxter-Wu model (see Ref. [20] for a discussion).
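Before the conclusions, a compact sketch of the analysis pipeline described above may be useful: Eq. (2) gives the variance of the FM scores at each temperature, an unnormalized Gaussian fit gives the peak position \(T_{*}(L)\) and width \(\sigma(L)\), and a log-log fit of \(\sigma(L)\sim L^{-1/\nu_{\sigma}}\) gives the exponent. This is our illustrative reconstruction (numpy/scipy), not the authors' code; all function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def variance_of_output(fm_scores):
    """V^T of Eq. (2): variance of per-snapshot FM scores f_i^T."""
    f = np.asarray(fm_scores, dtype=float)
    return np.mean(f ** 2) - np.mean(f) ** 2

def gaussian(T, a, T_star, sigma):
    """Unnormalized Gaussian-like Ansatz for the V^T peak."""
    return a * np.exp(-(T - T_star) ** 2 / (2.0 * sigma ** 2))

def peak_parameters(temps, v_of_T):
    """Fit V^T(T); returns the peak position T_* and width sigma."""
    p0 = [v_of_T.max(), temps[np.argmax(v_of_T)], 0.1]
    (_, T_star, sigma), _ = curve_fit(gaussian, temps, v_of_T, p0=p0)
    return T_star, abs(sigma)

def exponent_from_widths(Ls, sigmas):
    """Fit sigma(L) ~ L**(-1/nu_sigma) in log-log coordinates."""
    slope, _ = np.polyfit(np.log(Ls), np.log(sigmas), 1)
    return -1.0 / slope

# Synthetic check: widths generated with nu = 2/3 are recovered.
Ls = np.array([48, 72, 96, 144, 243])
sigmas = 5.0 * Ls ** (-1.5)
print(exponent_from_widths(Ls, sigmas))   # ~0.6667
```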
_Conclusion.--_ The main result of the presented analysis is that the most reliable information on the classification of snapshots of the spin configurations of statistical mechanics systems undergoing second-order phase transitions is contained in the variance of the output function (VOF) of the neural networks. Namely, the VOF contains information about the critical temperature and the correlation length exponent.
\begin{table} \begin{tabular}{|l|r|r|r|} \hline \hline NN & \(1/\nu_{\sigma}\) & \(1/\nu_{\sigma^{-}}\) & \(1/\nu_{\sigma^{+}}\) \\ \hline FCNN & 1.01(1) & 1.02(13) & 0.98(4) \\ CNN & 1.06(3) & 1.11(5) & 1.07(2) \\ ResNet-10 & 1.25(3) & 1.24(7) & 1.24(3) \\ ResNet-18 & 1.17(11) & 1.41(6) & 1.08(10) \\ ResNet-34 & 1.15(16) & 1.26(7) & 1.12(24) \\ ResNet-50 & 1.20(5) & 1.21(5) & 1.31(6) \\ \hline \hline \end{tabular} \end{table} Table 1: Peak widths for the Ising model. Here \(\nu_{\sigma}\) is the estimate from fitting the Gaussian profile to the \(V^{T}\). \(\nu_{\sigma}^{+}\) and \(\nu_{\sigma}^{-}\) are similar estimates where we only fit the right-hand side (resp., left-hand side) of the \(V^{T}\) curves. See the text for discussion.
We present a VOF analysis method and extract estimates for the critical temperature and the correlation length exponent of two systems in two universality classes. The results are stable when using three different deep NN architectures--CNN, FCNN, and ResNet with four configurations. We do not have a theory describing the network output function as a thermodynamic function in the same ensemble as the statistical mechanics model which we tested with the neural network. At the same time, we found evidence that the VOF width scales with the correlation length exponent \(\nu\), and demonstrated that clearly for two universality classes. This means that the output function \(F(T)\) is somehow connected to the fluctuations of the physical quantities of the model, although a direct connection has not been established [24]. We find no evidence that the network output function \(F(T)\) should be equal to \(1/2\) at the critical point, as stated in the pioneering work [1]--our claim is based on careful analysis using different network architectures. Instead, we show that the shift of the VOF maximum does not contradict the Ferdinand-Fisher picture and can be used to estimate the critical temperature. This estimate does not yet reach the desired accuracy, and more work needs to be done on a sound methodology. We would like to emphasize that the dependence of the VOF width on the system size is a good candidate for extracting the correlation length exponent \(\nu\) and gives better accuracy than the approach proposed in Ref. [1] using data collapse of \(F(T)\). We should note again that more research is needed to find a reliable way to estimate \(\nu\) from the VOF width, since not all network architectures produce \(\nu\) with the desired precision. Research supported by the grant 22-11-00259 of the Russian Science Foundation. The simulations were done using the computational resources of HPC facilities at HSE University [21].
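For completeness, a minimal sketch of the snapshot generation described in _Models and methods_, for the square-lattice Ising case: one Metropolis sweep consists of \(L^{2}\) single-spin-flip attempts accepted with probability \(\min(1,e^{-\Delta E/T})\). This is a generic textbook implementation under assumed conventions (\(J=1\), \(k_{B}=1\)), not the authors' production code, and it omits the autocorrelation-time bookkeeping used in the paper.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep (L*L single-spin-flip attempts) for the 2-D
    ferromagnetic Ising model with J = 1 and periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn_sum        # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]
    return spins

rng = np.random.default_rng(0)
L, T = 16, 2.27                                # toy size near T_c
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(50):                            # equilibration sweeps (toy)
    metropolis_sweep(spins, 1.0 / T, rng)
snapshot = spins.copy()                        # one training "snapshot"
```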