# Can graph neural network-based detection mitigate the impact of hardware imperfections?

**Authors:** Lamprini Mitsiou, Stylianos Trevlakis, Argiris Tsiolas, Dimitrios J. Vergados, Angelos Michalas, Alexandros-Apostolos A. Boulogeorgos

**Published:** 2023-05-08T10:46:11Z | **Link:** http://arxiv.org/abs/2305.04612v1
###### Abstract
Until recently, researchers used machine learning methods to compensate for hardware imperfections at the symbol level, indicating that optimum radio-frequency transceiver performance is possible. Nevertheless, such approaches neglect the error correcting codes used in wireless networks, which inspires machine learning (ML)-approaches that learn and minimise hardware imperfections at the bit level. In the present work, we evaluate a graph neural network (GNN)-based intelligent detector's in-phase and quadrature imbalance (IQI) mitigation capabilities. We focus on a high-frequency, high-directional wireless system where IQI affects both the transmitter (TX) and the receiver (RX). The TX uses a linear error correcting code, whilst the RX uses a GNN-based decoder. The bit error rate (BER) is computed using appropriate Monte Carlo simulations to quantify performance. Finally, the outcomes are compared to both traditional systems using conventional detectors and wireless systems using belief propagation based detectors. Due to the utilization of graph neural networks, the proposed algorithm is highly scalable with few training parameters and is able to adapt to various code parameters.
Belief propagation, bit error rate, graph neural networks, hardware imperfection mitigation, in-phase and quadrature imbalance, machine learning.
## I Introduction
As the wireless world searches for unexploited resources in higher frequency bands, such as millimeter wave and terahertz, new challenges are identified that call for novel solutions [1, 2, 3, 4, 5]. One of the most important challenges is dealing with the impact of transceiver hardware imperfections. As discussed in [6, 7, 8, 9, 10, 11, 12], hardware imperfections, such as the local oscillators' phase noise, the amplifiers' non-linearity, and especially the up- and down-converters' in-phase and quadrature imbalance (IQI), significantly limit the reliability of high-frequency wireless systems. Note that, as described in [13], the hardware imperfections of wireless systems in the higher frequency range cannot be completely avoided, even with new technological solutions supported by integrated microwave photonics.
Motivated by this, several researchers have presented hardware-imperfection mitigation solutions [14, 15, 16, 17, 18]. In particular, in [14], the authors reported a widely linear IQI calibration structure that estimates the IQI parameters using either second-order statistics or least-square-based model fitting. The authors of [15] used higher-order statistics-based approaches in order to estimate the amplifier non-linearity in the presence of IQI and documented a maximum-likelihood estimation approach for the IQI parameters. The aforementioned approaches are two-step processes that are usually energy consuming.
To counterbalance this, the authors of [16] presented a real-valued time-delay neural network that is used as a one-step mitigation process, thus simplifying the compensation process. In [17], a shortcut real-valued time-delay neural network for compensating IQI and amplifier non-linearity was introduced. Finally, in [18], a neural network-based digital predistortion was presented as a solution to counter the impact of cross-talk, amplifier non-linearity, IQI, and direct-current offset.
In other words, the authors of [15, 17, 18] aimed to employ machine learning methodologies in order to compensate for the impact of hardware imperfections at the symbol level, proving that with such approaches the ideal radio-frequency (RF) transceiver performance is reachable. However, following such an approach, it is impossible to exploit the characteristics of the error correction codes that are employed in today's wireless systems. This observation motivates the design of machine learning (ML)-approaches that learn and de-emphasize the impact of hardware imperfections at the bit level. These approaches should be scalable and have a relatively low number of training parameters in order to adapt to different code parameters. Motivated by this, as well as by the close relation between Tanner graphs, which can be used to represent codes, and graph neural networks (GNNs), in this paper we assess the IQI mitigation capabilities of an intelligent detector that employs a GNN. In particular, we consider a high-frequency and high-directional wireless system in which both the transmitter (TX) and the receiver (RX) suffer from IQI. A linear error correction code is employed by the TX, while a GNN-based decoder is used by the RX. To quantify the performance, the bit error rate (BER) is derived through respective Monte Carlo simulations. The results are benchmarked against conventional systems that employ traditional detectors, as well as wireless systems that use belief propagation (BP) based detectors.
The organization of the rest of the paper is as follows: The system model is described in Section II. Section III reports the operation and training procedures of the intelligent detectors.
Results and related discussions are documented in Section IV. Finally, the conclusions and the main message of this contribution are summarized in Section V.
_Notations:_ In what follows, \(\left[\cdot||\cdot\right]\) stands for the concatenation operator. Moreover, \(\oplus\) represents the message aggregation function, which, for this contribution is the mean value. The logarithm to base \(2\) is denoted as \(\log_{2}\left(\cdot\right)\). The sum of \(x_{i}\) for \(i\in\left[1,N\right]\) is represented as \(\sum_{i=1}^{N}x_{i}\), while \(\prod_{i=1}^{N}x_{i}\) is the product of \(x_{i}\) for \(i\in\left[1,N\right]\). Finally, \(\Pr\left(\mathcal{E}\right)\) is the probability of the event \(\mathcal{E}\).
## II System model
As illustrated in Fig. 1, we consider a high-directional wireless system that consists of a TX and an RX. Both the TX and the RX employ analog beamforming, and their beams are assumed to be perfectly aligned.
The TX consists of a bit source that outputs a bit tuple \(\mathbf{b}\), i.e., the information word, which is the input of an \((N,K,L)\) linear encoder, where \(N\) is the codeword length, while \(K\) and \(L\) are respectively the number of ones in each column and row of the parity check matrix, \(\mathbf{P}\). The linear encoder uses zero-padding in order to be able to support odd codeword lengths and outputs a bit tuple \(\mathbf{c}\), i.e., the codeword. Let \(\mathcal{L}\) be the function that describes the operation of the linear encoder; then
\[\mathbf{c}=\mathcal{L}\{\mathbf{b}\}. \tag{1}\]
The output of the linear encoder is in turn inputted in a quadrature amplitude modulation (QAM) mapper. Let \(\mathcal{M}\{\cdot\}\) be the function that models the operation of the QAM mapper. Then, the output of the QAM mapper can be described as
\[\mathbf{x}=\mathcal{M}\{\mathbf{c}\}. \tag{2}\]
The symbol vector, \(\mathbf{x}\), is forwarded to the up-converter. We assume that the up-converter suffers from in-phase and quadrature imbalance. As a consequence, the baseband equivalent signal at the up-converter's output can be expressed as [19]
\[\mathbf{s}=K_{1}^{t}\,\mathbf{x}+K_{2}^{t}\,\mathbf{x}^{*}, \tag{3}\]
where \(K_{1}^{t}\) and \(K_{2}^{t}\) are the IQI coefficients that, based on [20], can be written as [21]
\[K_{1}^{t}=\frac{1+g_{t}\,\exp\left(j\,\theta_{t}\right)}{2} \tag{4}\]
and
\[K_{2}^{t}=\frac{1-g_{t}\,\exp\left(-j\,\theta_{t}\right)}{2}, \tag{5}\]
with \(g_{t}\) and \(\theta_{t}\) denoting the IQI-induced amplitude and phase mismatches, respectively. Notice that
\[K_{1}^{t}=1-\left(K_{2}^{t}\right)^{*}. \tag{6}\]
Moreover, the TX image rejection ratio (IRR) can be obtained as
\[I_{t}=\frac{\left|K_{1}^{t}\right|^{2}}{\left|K_{2}^{t}\right|^{2}}. \tag{7}\]
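As a concrete illustration of (3)-(7), the following sketch builds the TX IQI coefficients, verifies the identity (6), and reports the resulting IRR; the 5% amplitude and \(5^{\circ}\) phase mismatches are assumed values for demonstration only.

```python
import numpy as np

def tx_iqi_coefficients(g_t, theta_t):
    """TX IQI coefficients per (4)-(5); theta_t in radians."""
    k1 = (1 + g_t * np.exp(1j * theta_t)) / 2
    k2 = (1 - g_t * np.exp(-1j * theta_t)) / 2
    return k1, k2

g_t, theta_t = 1.05, np.deg2rad(5)          # assumed 5% amplitude, 5 degree phase mismatch
k1, k2 = tx_iqi_coefficients(g_t, theta_t)

assert np.isclose(k1, 1 - np.conj(k2))      # identity (6)

irr_db = 10 * np.log10(abs(k1) ** 2 / abs(k2) ** 2)
print(f"TX IRR = {irr_db:.1f} dB")          # image rejection ratio (7)

# IQI-distorted baseband signal per (3)
x = (np.array([1, -1, 1, 1]) + 1j * np.array([1, 1, -1, 1])) / np.sqrt(2)
s = k1 * x + k2 * np.conj(x)
```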
The up-converter is followed by the analog beamformer. The baseband equivalent at the output of the TX can be written as
\[\mathbf{s}_{b}=\mathbf{u}\,\mathbf{s}, \tag{8}\]
where \(\mathbf{u}\) stands for the TX beamforming vector.
The baseband equivalent signal at the output of the RX beamformer can be expressed as
\[\mathbf{r}_{b}=\mathbf{v}\,\mathbf{H}\,\mathbf{s}_{b}+\mathbf{n}, \tag{9}\]
where \(\mathbf{v}\) and \(\mathbf{H}\) stand for the RX beam-vector and the channel matrix, respectively, while \(\mathbf{n}\) is an additive white Gaussian noise vector. Each element of \(\mathbf{n}\) is modeled as a zero-mean complex Gaussian process of variance \(N_{o}\). Additionally, \(\mathbb{E}[n_{i}\,n_{j}]=0\), for \(i\neq j\).
With the aid of (8), (9) can be rewritten as
\[\mathbf{r}_{b}=\mathbf{v}\,\mathbf{H}\,\mathbf{u}\,\mathbf{s}+\mathbf{n}. \tag{10}\]
As reported in [22], since the TX and RX beams are perfectly aligned,
\[\mathbf{v}\,\mathbf{H}\,\mathbf{u}=h, \tag{11}\]
where \(h\) is a scalar that represents the channel coefficient. As a consequence, (10) yields
\[\mathbf{r}_{b}=h\,\mathbf{s}+\mathbf{n}. \tag{12}\]
Notice that the impact of multi-path fading is relatively low. Thus, the channel coefficient models only the deterministic path-gain.
The output of the RX beamformer is connected to a down-converter that suffers from IQI; as a result, the baseband equivalent signal at the output of the down-converter can be expressed as [20]
\[\mathbf{r}=K_{1}^{r}\,\mathbf{r}_{b}+K_{2}^{r}\,\mathbf{r}_{b}^{*}, \tag{13}\]
where
\[K_{1}^{r}=\frac{1+g_{r}\,\exp\left(-j\,\theta_{r}\right)}{2} \tag{14}\]
and
\[K_{2}^{r}=\frac{1-g_{r}\,\exp\left(j\,\theta_{r}\right)}{2}. \tag{15}\]
In (14) and (15), \(g_{r}\) and \(\theta_{r}\) are respectively the RX amplitude and phase mismatches. The RX IRR can be written as
\[\mathcal{I}_{r}=\frac{\left|K_{1}^{r}\right|^{2}}{\left|K_{2}^{r}\right|^{2}}. \tag{16}\]
From (12), (13) can be expressed as
\[\mathbf{r}=K_{1}^{r}\,\left(h\,\mathbf{s}+\mathbf{n}\right)+K_{2}^{r}\,\left( h\,\mathbf{s}+\mathbf{n}\right)^{*} \tag{17}\]
or
\[\mathbf{r}=K_{1}^{r}\,h\,\mathbf{s}+K_{2}^{r}\,h\,\mathbf{s}^{*}+K_{1}^{r}\, \mathbf{n}+K_{2}^{r}\mathbf{n}^{*}. \tag{18}\]
By applying (3) to (18), the baseband equivalent received signal at the output of the down-converter can be expressed as
\[\mathbf{r}=K_{1}^{r}\,h\left(K_{1}^{t}\,\mathbf{x}+K_{2}^{t}\,\mathbf{x}^{*}\right)+K_{2}^{r}\,h\left(K_{1}^{t}\,\mathbf{x}+K_{2}^{t}\,\mathbf{x}^{*}\right)^{*}+K_{1}^{r}\,\mathbf{n}+K_{2}^{r}\,\mathbf{n}^{*}, \tag{19}\]
or equivalently
\[\mathbf{r}=\left(K_{1}^{r}\,K_{1}^{t}+K_{2}^{r}\left(K_{2}^{t}\right)^{*}\right)h\,\mathbf{x}+\left(K_{1}^{r}\,K_{2}^{t}+K_{2}^{r}\left(K_{1}^{t}\right)^{*}\right)h\,\mathbf{x}^{*}+K_{1}^{r}\,\mathbf{n}+K_{2}^{r}\,\mathbf{n}^{*}. \tag{20}\]
Thus, the received signal-to-distortion-plus-noise-ratio (SDNR) is given by
\[\gamma=\frac{\left|K_{1}^{r}\,K_{1}^{t}+K_{2}^{r}\left(K_{2}^{t}\right)^{*}\right|^{2}\rho}{\left|K_{1}^{r}\,K_{2}^{t}+K_{2}^{r}\left(K_{1}^{t}\right)^{*}\right|^{2}\rho+\left|K_{1}^{r}\right|^{2}+\left|K_{2}^{r}\right|^{2}}, \tag{21}\]
where \(\rho\) stands for the signal-to-noise-ratio (SNR) of the ideal wireless system, i.e., the one that does not suffer from IQI, and can be expressed as
\[\rho=\frac{h^{2}\,P_{x}}{N_{o}}. \tag{22}\]
In (22), \(P_{x}\) stands for the average transmission power.
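The SDNR of (21) makes the IQI-induced error floor explicit: as \(\rho\to\infty\), \(\gamma\) saturates at the ratio of the signal and distortion coefficients. A minimal sketch evaluating this behavior, with illustrative mismatch values of our choosing, follows.

```python
import numpy as np

def sdnr(k1t, k2t, k1r, k2r, rho):
    """Received SDNR per (21); rho is the ideal-system SNR of (22), linear scale."""
    sig = abs(k1r * k1t + k2r * np.conj(k2t)) ** 2 * rho
    dist = abs(k1r * k2t + k2r * np.conj(k1t)) ** 2 * rho
    noise = abs(k1r) ** 2 + abs(k2r) ** 2
    return sig / (dist + noise)

g, th = 1.05, np.deg2rad(5)                     # illustrative mismatches
k1t, k2t = (1 + g * np.exp(1j * th)) / 2, (1 - g * np.exp(-1j * th)) / 2
k1r, k2r = (1 + g * np.exp(-1j * th)) / 2, (1 - g * np.exp(1j * th)) / 2

for snr_db in (0, 10, 20, 30, 40):
    rho = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", 10 * np.log10(sdnr(k1t, k2t, k1r, k2r, rho)), "dB")
```

Note that the printed SDNR stops growing at high SNR, which is precisely the saturation behavior observed for conventional detectors in Section IV.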
The output of the down-converter is inserted to the QAM demapper, which returns the log-likelihood ratios (LLRs) of the received signals. Let \(\mathcal{D}\{\cdot\}\) represent the QAM demapper's operation. Then, the LLRs at the output of the QAM demapper can be written as
\[\mathbf{l}=\mathcal{D}\{\mathbf{r}\}, \tag{23}\]
where the \(k-\)th value of \(\mathbf{l}\) can be obtained as
\[l_{k,m}=\log_{2}\frac{\Pr\left(c_{k}=1\left|r_{m}\right.\right)}{\Pr\left(c_ {k}=0\left|r_{m}\right.\right)}. \tag{24}\]
Note that \(r_{m}\) stands for the \(m-\)th element of \(\mathbf{r}\) that carries the bit \(c_{k}\).
The LLR vector is inputted to the zero-padding remover, which outputs only the LLR elements that correspond to the coded message. In turn, the output of the zero-padding remover is inserted in the decoder, which provides an estimation of the transmitted codeword.
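For the quadrature phase shift keying case used in Section IV, the demapping rule (24) admits a closed form; the sketch below (the bit-to-symbol sign convention is an assumption on our part) computes exact per-bit LLRs for Gray-mapped QPSK over a scalar channel.

```python
import numpy as np

def qpsk_llrs(r, h, n0):
    """Exact per-bit LLRs log Pr(c=1|r)/Pr(c=0|r) for Gray-mapped QPSK with
    symbols ((1-2*b0) + 1j*(1-2*b1))/sqrt(2), AWGN variance n0, and a scalar
    channel h (cf. (12) and (24)); bit = 1 maps to the negative coordinate."""
    y = np.conj(h) * r                       # matched filter
    llr_i = -2 * np.sqrt(2) * y.real / n0    # LLR of the in-phase bit b0
    llr_q = -2 * np.sqrt(2) * y.imag / n0    # LLR of the quadrature bit b1
    return np.stack([llr_i, llr_q], axis=-1).reshape(-1)
```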
## III Intelligent detectors
This section focuses on presenting intelligent detectors. Specifically, Section III-A presents a BP-based detector, while Section III-B reports a graph neural network-based detector.
### _Belief propagation_
Let \(G_{P}=(\mathcal{V}\cup\mathcal{C},\mathcal{E})\), where \(\mathcal{V}\), \(\mathcal{C}\), and \(\mathcal{E}\) respectively stand for the variable nodes (VNs), check nodes (CNs), and edges of the Tanner graph \(G_{P}\). Note that each row of \(\mathbf{P}\) stands for a CN and each column for a VN. The Tanner graph can be seen as a deep neural network, in which the input layer receives the LLRs. The nodes in the hidden layers represent processing nodes. Each processing node is connected with a number of edges of the Tanner graph. As a consequence, each hidden layer consists of \(E\) nodes, where \(E\) is the cardinality of \(\mathcal{E}\). The output layer has \(N\) processing elements, and its responsibility is to provide an estimation of the transmitted codeword.
If the number of iterations is set to \(L\), then the number of hidden layers is \(2L\). The processing element \(p_{k}\) of the hidden layer \(k\) is associated with the VN, \(v_{k}\), and CN, \(c_{k}\), and outputs [23]

\[t_{k,e_{k}}=\left\{\begin{array}{ll}l_{v_{k}}+\sum_{e^{\prime},\,c_{k-1}\neq c_{k}}t_{k-1,e^{\prime}},&\text{for $k$ odd}\\ 2\,\tanh^{-1}\left(\prod_{e^{\prime\prime},\,v_{k-1}\neq v_{k}}\tanh\left(\frac{t_{k-1,e^{\prime\prime}}}{2}\right)\right),&\text{for $k$ even}\end{array}\right. \tag{25}\]
where \(e^{{}^{\prime}}=(v_{k},c_{k-1})\), \(e^{{}^{\prime\prime}}=(v_{k-1},c_{k})\) and \(l_{v_{k}}\) is the self LLR message of \(v_{k}\). The \(k-\)th node of the output layer reports
\[o_{k}=l_{v_{2L}}+\sum_{e^{{}^{\prime}}}t_{2L,e^{{}^{\prime}}}. \tag{26}\]
Fig. 1: System model.
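A minimal sketch of this message-passing schedule on a toy parity-check matrix is given below. It adopts the classical sign convention \(\log\Pr(c=0|r)/\Pr(c=1|r)\), the opposite of (24), so bits are decided as \(1\) on a negative output LLR.

```python
import numpy as np

def bp_decode(H, llr, n_iter=20):
    """Flooding sum-product decoding on the Tanner graph of H (cf. (25)-(26))."""
    v2c = H * llr[None, :].astype(float)          # VN -> CN messages, init with channel LLRs
    c2v = np.zeros_like(v2c)
    for _ in range(n_iter):
        # CN update: 2 atanh( product of tanh(./2) over the other incident edges )
        t = np.tanh(np.clip(v2c, -30, 30) / 2)
        t[H == 0] = 1.0
        prod = np.prod(t, axis=1, keepdims=True)
        extr = prod / np.where(np.abs(t) < 1e-12, 1e-12, t)
        c2v = 2 * np.arctanh(np.clip(extr, -1 + 1e-12, 1 - 1e-12)) * H
        # VN update: channel LLR plus the other incoming CN messages
        total = llr[None, :] + c2v.sum(axis=0, keepdims=True)
        v2c = (total - c2v) * H
    return (llr + c2v.sum(axis=0) < 0).astype(int)

# toy (7,4) Hamming parity-check matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(bp_decode(H, llr=np.array([2.5, 1.1, -0.8, 3.0, 1.7, 0.9, 2.2])))
```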
### _Graph neural network_
Similar to the BP approach, we consider a Tanner graph \(G_{g}=(\mathcal{V}_{g}\cup\mathcal{F}_{g},\mathcal{E}_{g})\), where \(\mathcal{V}_{g}\) is the set of the VNs. Each VN stands for a specific element of \(\mathbf{c}\). \(\mathcal{F}_{g}\) represents the set of the CNs, and \(\mathcal{E}_{g}\) is the set of edges. If \(P_{i,j}=1\), then \(v_{g,i}\) is connected to \(f_{g,j}\), where \(P_{i,j}\) is the \((i,j)\) element of \(\mathbf{P}\), and \(v_{g,i}\) and \(f_{g,j}\) are the \(i\)-th and \(j\)-th elements of the sets \(\mathcal{V}_{g}\) and \(\mathcal{F}_{g}\), respectively. To denote the set of all CNs that are connected to \(v_{g,i}\), we use \(\mathcal{V}_{g}\left(v_{g,i}\right)\). Similarly, the set of all VNs that are connected to \(f_{g,j}\) is represented by \(\mathcal{F}_{g}\left(f_{g,j}\right)\).
To train the graph neural network, we use one function that updates the edge messages and another one that updates the nodes. Let \(\mathbf{m}_{v,i,j}\) be the updated message from the VN \(v_{g,i}\) to the CN \(f_{g,j}\); then

\[\mathbf{m}_{v,i,j}=g^{m}\left(\left[\mathbf{w}_{v_{g,i}}||\mathbf{w}_{f_{g,j}}||\mathbf{g}_{v_{g,i},f_{g,j}}\right],\mathbf{a}_{m_{v,f}}\right), \tag{27}\]

where \(\mathbf{w}_{v_{g,i}}\) and \(\mathbf{w}_{f_{g,j}}\) are feature vectors computed for the VN \(v_{g,i}\) and the CN \(f_{g,j}\), respectively. Moreover, \(\mathbf{a}_{m_{v,f}}\) stands for the trainable parameters, while \(g^{m}\left(\cdot\right)\) represents the parametrized message function. Let \(\mathbf{m}_{f,i,j}\) be the updated message from the CN \(f_{g,j}\) to the VN \(v_{g,i}\); it can be obtained as

\[\mathbf{m}_{f,i,j}=g^{m}\left(\left[\mathbf{w}_{f_{g,j}}||\mathbf{w}_{v_{g,i}}||\mathbf{g}_{f_{g,j},v_{g,i}}\right],\mathbf{a}_{m_{f,v}}\right). \tag{28}\]
To evaluate the updated value of \(\mathbf{w}_{v_{g,i}}\), we apply
\[\mathbf{w}_{v_{g,i}}^{\prime}=g^{n}\left(\left[\mathbf{w}_{v_{g,i}}\,||\oplus_{f_{g,j}\in\mathcal{V}_{g}(v_{g,i})}\mathbf{m}_{f,i,j}\,||\,\mathbf{g}_{v_{g,i}}\right],\mathbf{a}_{v}\right), \tag{29}\]
where \(\mathbf{a}_{v}\) stands for the trainable parameters of the VN. Following a similar approach, the CN values can be updated as
\[\mathbf{w}_{f_{g,j}}^{\prime}=g^{n}\left(\left[\mathbf{w}_{f_{g,j}}\,||\oplus_{v_{g,i}\in\mathcal{F}_{g}(f_{g,j})}\mathbf{m}_{v,i,j}\,||\,\mathbf{g}_{f_{g,j}}\right],\mathbf{a}_{f}\right), \tag{30}\]
where \(\mathbf{a}_{f}\) stands for the trainable parameters of the CNs.
The training process consists of two phases: i) initialization, and ii) iterative optimization. In this paper, we employ the Glorot uniform initializer [24] to find the initial values of the trainable parameters, and the Adam optimizer to find their (sub)optimal values. As a loss function, the binary cross-entropy is applied.
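To make the update rules (27)-(30) concrete, a minimal PyTorch sketch of one message-passing round is included below; the feature width, the depth of the parametrized functions \(g^{m}\) and \(g^{n}\), the sharing of parameters across the two message directions, and the omission of explicit edge attributes \(\mathbf{g}\) are our simplifications, not the exact architecture of the evaluated detector.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])            # no activation after the last layer

def mean_aggregate(msgs, index, n_nodes):
    # the \oplus of the Notations section: mean of incoming edge messages
    s = torch.zeros(n_nodes, msgs.shape[-1]).index_add_(0, index, msgs)
    cnt = torch.zeros(n_nodes).index_add_(0, index, torch.ones(len(index))).clamp(min=1)
    return s / cnt[:, None]

class TannerGNNRound(nn.Module):
    """One VN/CN update round on the Tanner graph of H, following (27)-(30)."""
    def __init__(self, H, d=16):
        super().__init__()
        vi, ci = torch.nonzero(torch.as_tensor(H).T, as_tuple=True)   # edges (VN, CN)
        self.vi, self.ci = vi, ci
        self.n_v, self.n_c = H.shape[1], H.shape[0]
        self.g_m = mlp([2 * d, d, d])     # edge-message function g^m, (27)-(28)
        self.g_n = mlp([2 * d, d, d])     # node-update function g^n, (29)-(30)

    def forward(self, w_v, w_c):
        m_vc = self.g_m(torch.cat([w_v[self.vi], w_c[self.ci]], dim=-1))
        m_cv = self.g_m(torch.cat([w_c[self.ci], w_v[self.vi]], dim=-1))
        w_c = self.g_n(torch.cat([w_c, mean_aggregate(m_vc, self.ci, self.n_c)], dim=-1))
        w_v = self.g_n(torch.cat([w_v, mean_aggregate(m_cv, self.vi, self.n_v)], dim=-1))
        return w_v, w_c
```

Training then follows the text: Glorot initialization corresponds to `nn.init.xavier_uniform_`, optimization to `torch.optim.Adam`, and the binary cross-entropy loss to `nn.BCEWithLogitsLoss` applied to the final per-VN logits.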
## IV Results & Discussion
This section presents Monte Carlo simulations that reveal the effectiveness of the ML-based detection approaches in mitigating the impact of IQI and benchmarks GNN against BP and conventional approaches. The following scenario is considered: a wireless system that operates in the \(120\,\mathrm{GHz}\) band and employs a low-density parity-check (LDPC) code with code rate equal to \(0.714\); the parity-check matrix has size \(63\times 45\). The zero-padding adds \(1\) bit if and only if the length of the codeword is odd. A quadrature phase shift keying modulator is used by the TX and the corresponding demodulator by the RX. Both the TX and the RX suffer from IQI with phase error equal to \(5^{\circ}\). The BP- and GNN-based detectors respectively perform \(20\) and \(8\) iterations.
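An uncoded skeleton of this Monte Carlo procedure is sketched below; the full pipeline of Fig. 1 additionally places the LDPC encoder, zero-padding, and the BP- or GNN-based detector between the QPSK mapper and the bit decisions, so the BER values it produces differ from those reported here.

```python
import numpy as np

rng = np.random.default_rng(0)
g, th = 1.0, np.deg2rad(5)                                   # 5 degree phase error, as above
k1t, k2t = (1 + g * np.exp(1j * th)) / 2, (1 - g * np.exp(-1j * th)) / 2
k1r, k2r = (1 + g * np.exp(-1j * th)) / 2, (1 - g * np.exp(1j * th)) / 2

def ber_at(snr_db, n_sym=200_000):
    n0 = 10 ** (-snr_db / 10)                                # unit-energy QPSK
    bits = rng.integers(0, 2, (n_sym, 2))
    x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    s = k1t * x + k2t * np.conj(x)                           # TX IQI, eq. (3)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    r = k1r * (s + noise) + k2r * np.conj(s + noise)         # RX IQI, eq. (13), h = 1
    hard = np.stack([r.real < 0, r.imag < 0], axis=1)        # conventional detector
    return np.mean(hard != bits)

for snr_db in (0, 4, 8, 12):
    print(snr_db, "dB:", ber_at(snr_db))
```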
Figure 2 depicts the BER as a function of the SNR for different IQI levels and coding/decoding schemes. As benchmarks, the cases of conventional detectors and an ideal RF front-end are considered. As expected, for a given detector and level of IQI, the error performance improves as the SNR increases. For example, for the ideal RF front-end with a conventional detector, as the SNR increases from \(7\) to \(9\,\mathrm{dB}\), the BER decreases by more than one order of magnitude. For the same SNR variation and for the case in which a conventional detector is employed but both the TX and RX suffer from IQI with IRR equal to \(20\,\mathrm{dB}\), the BER decreases by about two orders of magnitude. Moreover, for conventional detectors and an SNR beyond \(3\,\mathrm{dB}\), as the level of IQI increases, i.e., as the IRR decreases, the error performance degrades. For instance, for an SNR equal to \(7\,\mathrm{dB}\), as the IRR increases from \(20\) to \(30\,\mathrm{dB}\), the BER decreases from \(1.17\times 10^{-3}\) to \(7.88\times 10^{-4}\). On the other hand, for either BP- or GNN-based detectors and a fixed SNR, an error performance improvement is observed as the level of IQI increases. For example, for the BP-based detector and an SNR equal to \(7\,\mathrm{dB}\), the BER decreases from \(1.53\times 10^{-4}\) to \(4.45\times 10^{-5}\) as the IRR decreases from \(30\) to \(20\,\mathrm{dB}\). For the same SNR, but with the GNN-based detector, the BER decreases from \(5.15\times 10^{-5}\) to \(3.52\times 10^{-6}\) as the IRR decreases from \(30\) to \(20\,\mathrm{dB}\). As explained in [25], this is due to the TX IQI-induced diversity order that can be exploited by the intelligent detectors. Additionally, from this figure, we observe that for the ideal RF front-end and any given SNR beyond \(4.5\,\mathrm{dB}\), the GNN-based detector outperforms both the BP-based and conventional detectors. On the other hand, in wireless systems whose transceivers suffer from IQI, the GNN-based detector outperforms both the BP-based and conventional detectors for any given SNR. For instance, for an SNR equal to \(7\,\mathrm{dB}\) and TX and RX IRRs equal to \(20\,\mathrm{dB}\), the GNN-based detector achieves a BER equal to \(3.52\times 10^{-6}\), while, for the same SNR and IRR, the BP-based detector achieves a BER equal to \(4.65\times 10^{-5}\), and the conventional detector achieves a BER equal to \(1.17\times 10^{-3}\). Notice that the GNN-based detector uses \(8\) iterations, while the BP-based one uses \(20\). In other words, the GNN-based detector achieves better performance than the BP-based detector with fewer iterations.
Fig. 2: BER vs SNR for different coding schemes and levels of IQI.
## V Conclusions
In this paper, we presented a GNN-based intelligent detector and demonstrated its ability to mitigate IQI. Specifically, we considered a high-frequency, high-directional wireless system in which both the TX and the RX suffer from IQI. The TX uses a linear error correction code, while the RX employs a GNN-based decoder. The BER is calculated via respective Monte Carlo simulations in order to quantify the system's performance. The results are compared against those of wireless systems that rely on conventional and BP-based detectors, and illustrate the performance improvements that can be achieved when employing the proposed GNN-based detector.
## Acknowledgement
This work has received funding from the European Union's Horizon-CL4-2021 research and innovation programme under grant agreement No. 101070181 (TALON).
---

# Emergence of the SVD as an interpretable factorization in deep learning for inverse problems

**Authors:** Shashank Sule, Richard G. Spencer, Wojciech Czaja

**Published:** 2023-01-18T23:16:53Z | **Link:** http://arxiv.org/abs/2301.07820v1 (later revised as "On the limits of neural network explainability via descrambling", v3)
###### Abstract.
We demonstrate the emergence of weight matrix singular value decomposition (SVD) in interpreting neural networks (NNs) for parameter estimation from noisy signals. The SVD appears naturally as a consequence of initial application of a descrambling transform - a recently-developed technique for addressing interpretability in NNs [1]. We find that within the class of noisy parameter estimation problems, the SVD may be the means by which networks memorize the signal model. We substantiate our theoretical findings with empirical evidence from both linear and non-linear settings. Our results also illuminate the connections between a mathematical theory of semantic development [2] and neural network interpretability.
Interpreting the nature of the mapping between inputs and outputs of trained neural networks (NNs) remains one of the major unsolved problems in machine learning and is the focus of significant research effort [3, 4, 5, 6, 7, 8, 9, 10]. More recently, such efforts have succeeded in interpreting NNs in image classification by applying interpretation heuristics to the singular value decompositions (SVDs) of trained weights [1, 11, 12, 13]. The SVD has innumerable applications in statistics [14, 15], dimensionality reduction [16, 17, 18], and geometric data science [19, 20]. It has also found previous application in specific deep learning settings, including sensing neural network topology [21], isotropic pattern recognition in control charts [22], and neural network weight compression [23]. The key idea behind using the SVD for NN interpretation is that the singular subspaces of NN-associated matrices, such as weight and data correlation matrices, encode the learning that occurs during training. This is particularly appealing when interpretation relies upon recognition of readily-identifiable patterns, such as dog or cat labels. Consequently, the hidden semantics of a trained NN may be uncovered by taking the SVD as a starting point and then applying a (possibly complicated) heuristic such as intertwiner groups [12] or hypergraph rearrangements [13] of these SVDs.
In this paper we show that this link between singular spaces and NN interpretation is more fundamental: an interpretation heuristic can be the starting point and the singular value decomposition can naturally emerge as an interpretation. We specifically exhibit this phenomenon for _smoothness descrambling transformations_ - or simply, descrambling transformations - for interpretations of NNs used in signal processing [1]. Neural networks are routinely used for such regression problems in the sciences [24, 25, 26, 27] where network outputs are no longer simple class labels but are objects in a desired function space, and the network is trained to learn the underlying mapping between the input and output function classes. While these networks often match the prior state-of-the-art, their interpretability in the context of these problems remains an open question since the applicability of NN interpretation techniques from image classification is not established yet. To that end, we have focused on the SVD and the latent orthogonal transformations learned by the network motivated by the observation that many well-known transformations, such as the Fourier transform, are unitary and thus serve as a standard for human readability. This approach is quite promising: an analysis of linear two-layer neural networks [2] showed that the first and last weight matrices interact with the SVD of the covariance matrices from the distributions of training data. In particular, if the network structure is given by \(f(x)=W_{2}W_{1}x\), then the weights \(W_{1}\) and \(W_{2}\) in the large data limit obey the continuous-time evolution law:
\[W_{2}(t)=UA(\Lambda,t)Q^{-1},\quad W_{1}(t)=QA(\Lambda,t)V^{\top}. \tag{1}\]
Here \(U\) and \(V\) are the left and right singular vectors of the input-output correlation matrix \(\Sigma_{yx}=\mathbb{E}[yx^{\top}]=U\Lambda V^{\top}\). While this simple model shows that the singular vectors closest to both input and output data learn semantics from the data distribution, an understanding of the intermediate weight matrices (matrices of the type \(Q\) in (1)), and of the case of nonlinear and deep networks, remains unresolved. A major step in addressing this problem was taken in [28], in the context of digital signal processing, where a fully-connected network
and biexponential parameter estimation, noting that the right singular vectors of the first weight matrix closely resemble the training data model independently of the descrambling.
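A small numerical check of the dynamics (1), under a synthetic setup of our own choosing (a linear teacher \(y=Ax\), white inputs, plain gradient descent), is sketched below: the top singular directions of the trained product \(W_{2}W_{1}\) should align with those of \(\Sigma_{yx}\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, N = 20, 10, 5000
A = rng.standard_normal((d, d))
X = rng.standard_normal((d, N))                  # isotropic inputs
Y = A @ X                                        # linear teacher

W1 = 0.01 * rng.standard_normal((h, d))
W2 = 0.01 * rng.standard_normal((d, h))
lr = 1e-2
for _ in range(5000):                            # gradient descent on 0.5*||W2 W1 X - Y||^2 / N
    E = W2 @ W1 @ X - Y
    W2 -= lr * (E @ (W1 @ X).T) / N
    W1 -= lr * (W2.T @ E @ X.T) / N

U, S, Vt = np.linalg.svd(Y @ X.T / N)            # SVD of the input-output correlation
Up, Sp, Vpt = np.linalg.svd(W2 @ W1)
# cosines near 1 indicate that the network's top singular directions match U
print(np.abs(np.sum(U[:, :h] * Up[:, :h], axis=0)))
```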
## 1. Smoothness Criterion Descrambling
In this section we formulate the main mathematical problem that we address in this paper. Let \(\eta\), the objective function, be the descrambling criterion, and let \(\widehat{P}\) denote the descrambler. The goal of this procedure is to design \(\eta\) so that it yields a simple, human-readable explanation of the descrambled weight matrices \(\widehat{P}(k,X)W_{k}\). In this context the _smoothness criterion_
\[\eta(P)=\left\|D^{2}\,P\,\left(W_{k}\circ\sigma\circ\cdots\circ W_{1}\right)(X)\right\|_{F}^{2} \tag{4}\]
is well suited for interpretation of linear layers of feedforward networks. The motivation behind this criterion is that while the network's incoming and outgoing data is usually smooth and has an intelligible time-ordered structure, the intermediate data often loses this structure. As such, the role of the descrambler is to change basis so that the intermediate signal is itself smooth -- and thus ordered with respect to the indices of the output or input dimension. This strategy was used successfully to interpret the first layer of DEERNet [1, 28]. Our goal is to study the minimizers
\[\widehat{P}(k,X):=\operatorname*{argmin}_{P^{\top}P=I}\,\|DPf_{k}(X)\|_{F}^{2}. \tag{5}\]
Here \(X\) denotes the matrix of the training data, which we assume is drawn from a context-dependent prior distribution. Different choices of this prior will lead to different solutions of the descrambling problem (3). However, for any general prior it is difficult to say anything about the structure of \(\widehat{P}(k,X)\). As a consequence we focus on the case where \(X\) models data used in inverse problems, the original use-case of descrambling transformations. Thus each column of \(X\) can be written as a noisy measurement \(s(z)+\alpha^{-1}y\), where \(z\) is a to-be-recovered variable, \(y\) is noise, and \(\alpha^{-1}\) is the noise level, so that \(\alpha\) plays the role of the SNR. We will first understand the case when the network is linear and the input is only noise. Then we will generalize to the case where the input contains an underlying signal, and finally to the case where the network is non-linear. All proofs are outlined in the SI appendix.
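For the linear case \(k=1\), problem (5) is a Brockett-type trace minimization: \(\|DPS\|_{F}^{2}=\operatorname{Tr}(P^{\top}D^{\top}DP\,SS^{\top})\) with \(S=W_{1}X\), whose orthogonal minimizer pairs the small eigenvalues of \(D^{\top}D\) with the large eigenvalues of \(SS^{\top}\). A sketch follows; sign flips and degenerate eigenvalues make the minimizer non-unique, and the first-difference stencil is a stand-in for the \(D\) used in practice.

```python
import numpy as np

def descramble(D, S):
    """Closed-form minimizer of ||D P S||_F^2 over orthogonal P:
    pair eigenvectors of D^T D (ascending) with those of S S^T (descending)."""
    _, Qa = np.linalg.eigh(D.T @ D)     # eigh returns ascending eigenvalues
    _, Qb = np.linalg.eigh(S @ S.T)
    Qb = Qb[:, ::-1]                    # descending order for the data covariance
    return Qa @ Qb.T

rng = np.random.default_rng(1)
m, d, N = 64, 32, 1000
D = np.diff(np.eye(m), axis=0)          # first-difference stencil as a stand-in for D
W = rng.standard_normal((m, d))
X = rng.standard_normal((d, N))
P_hat = descramble(D, W @ X)
print(np.linalg.norm(D @ P_hat @ W @ X), "<=", np.linalg.norm(D @ W @ X))
```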
## 2. Results
We proceed to characterize the minimizers of (5), the smoothness criterion descrambling problem. As stated above, the minimizer \(\widehat{P}(k,X)\) depends on the matrix \(X\) and the wiretapped network \(f_{k}\). Our main mathematical innovation is to separate the cases \(k=1\) (when \(f_{k}\) is affine) and \(k\geq 2\), and to consider \(X\) as sampled from a distribution. This enables us to model the real-life settings in which neural networks are trained and evaluated. With these assumptions in mind, we state our main theoretical results below.
**Linear Network, Only Noise.** We first address the case when \(k=1\) in (5). In this case, \(f_{1}(x)=W_{1}x\), so the wiretapped network \(f_{1}\) is linear. Additionally, to approach the data models present in inverse problems, we first consider the case where \(X\) is pure noise, i.e., \(x_{i}\sim\xi\) where \(\xi\) is some isotropic random variable. The canonical example of such a random variable is \(\xi\sim N(0,I_{d})\). Our strategy to understand the minimizers of (5) is to study the convergence of \(\widehat{P}\) as \(N\to\infty\). As such, we write \(\widehat{P}(N):=\widehat{P}(k,X_{N})\) and \(X_{N}:=X\) to keep track of the number of data points, and implicitly assume \(k=1\). We shall see that these minimizers \(\widehat{P}(N)\) have a limit \(\mathcal{P}\) given by the (strong) law of large numbers (SLLN). However, to use the SLLN to swap a limit and a minimum we need to ensure the stability of the singular spaces of the random matrices \(WX_{N}\). Bearing this in mind we first introduce the following definition.
_Definition 1_.: Let \(A_{n,m}\subseteq M_{n,m}(\mathbb{R})\) be the set of matrices with distinct non-zero singular values. Let \(A=\bigcup_{n,m}A_{n,m}\) be the set of _admissible matrices_. A fully-connected feedforward NN with weights \(W_{k}\in A\) is termed an _admissible neural network_.
Using the definition of admissibility, the following lemma can be proved using the continuity of the simple eigenvalues of a matrix in its entries [29]:
_Lemma 1_.: Let \(W_{n}\) be a sequence of matrices such that \(W_{n}\to W\) in the Frobenius norm, where \(W\in A\). Then, the left and right singular vectors \(\{u_{i}^{n}\}\) and \(\{v_{i}^{n}\}\) of \(W_{n}\) converge to the left and right singular vectors of \(W\), respectively.
Henceforth, we shall only work with admissible neural networks. Note that since \(A_{n,m}\) is a dense subset of \(M_{n,m}\), admissible neural networks are dense in the set of neural networks. Note that for \(k=1\), we can simplify (5) to
\[\widehat{P}(N)=\operatorname{argmin}\|DPW_{1}X_{N}\|_{F}^{2}\qquad\text{w.r.t }P^{\top}P=I. \tag{6}\]
We make the following assumptions modelling the empirical setup in [1]:
1. \(D\) is a Fourier Differentiation Stencil
2. \(X\) is a random \(d\times N\) matrix such that each column \(x_{i}\) is drawn independently from a zero-mean isotropic density.
3. \(T\) is the \(m\times m\) matrix such that every \(k\)th column has \(l\)th entry \(t_{k}(l)\) given by \[t_{k}(l)=\begin{cases}\cos\frac{\pi lk}{m}&k=2n\\ \sin\frac{\pi l(k+1)}{m}&k=2n-1.\end{cases}\]
With these assumptions we can show that as the number of points \(N\) tends to infinity, \(\widehat{P}(N)\) can be expressed in terms of the matrix \(T\) and the SVD of \(W_{1}\):
_Theorem 1_.: Let \(f\) be an admissible NN with \(W_{1}\) as its first layer weight matrix, with rank-\(r\) SVD \(W_{1}=U_{r}\Sigma_{r}V_{r}^{\top}\). Additionally, let \(T_{r}\) be the matrix formed by selecting the first \(r\) columns of \(T\). Then,

\[\widehat{P}(N)\longrightarrow\widehat{\mathcal{P}}:=T_{r}U_{r}^{\top}\ \text{a.s. as}\ N\to\infty. \tag{7}\]

In particular, the descrambled weight matrix converges to \(T_{r}\Sigma_{r}V_{r}^{\top}\) almost surely in the measure induced by \(x\).
Thus, under certain geometric assumptions on the data \(X\) (isotropic, finite second moment), weight matrices descrambled for smooth propagation of data choose to propagate that data in a sinusoidal basis. Here this choice of \(X\) corresponds to the noise-only setting considered in many signal processing applications. In particular, if the SNR \(\alpha\) "equals" zero, then the smoothness descramblers \(\widehat{P}(N)\) tend to \(T_{r}U_{1}^{\top}\). This limit suggests the rescaling \(P\mapsto T_{r}^{\top}P\,U_{1}\), under which the minimizers \(\widehat{P}(N)\) are scaled to \(I\) as \(N\to\infty\). This rescaling will be used to justify the use of Fourier domain visualizations in [1]. We provide a more general version of the theorem in the Supplementary Information; here the choice of \(D\) as a finite difference stencil was taken to align the result with the experimental setup used in [1].
In more realistic applied settings, training data is usually a mixture of signal and noise; we extend Theorem 1 to this case below. We recall that \(k=1\), so it will be dropped from the notation for clarity.
Figure 1. We sampled \(X\) in Equation 5 from a standard Gaussian ensemble and computed the left singular vectors of the first layer pre- and post-descrambling of the network in [30]. Note that the left singular vectors of the descrambled weight matrix are perfectly oscillating: this is because \(X\) as a standard Gaussian ensemble together with a linear wiretapped layer yields a descrambler \(\widehat{P}(N)\approx T_{r}U^{\top}\), which results in \(\widehat{P}W\approx T_{r}\Sigma V^{\top}\) so that the left singular vectors of the descrambled weight matrix are given by \(T_{r}\)
**Linear Network, Noise and Signal.** We now consider the case when each training sample is \(x=s(z)+\alpha^{-1}y\). First we observe that, given unbiased noise, we have

\[\eta(P)\to\mathbb{E}[\|DPW(s(z))\|_{2}^{2}]+\alpha^{-2}\,\mathbb{E}[\|DPWy\|_{2}^{2}] \tag{8}\]

as \(N\to\infty\) (see SI appendix). Thus, the objective function \(\eta\) _splits_ into the sum of the _signal term_ \(\mathbb{E}[\|DPW(s(z))\|_{2}^{2}]\) and a _noise term_ \(\mathbb{E}[\|DPWy\|_{2}^{2}]\) weighted by \(\alpha^{-2}\). Using (8) we show the following result for a noisy signal model.
_Theorem 2_.: Borrowing notation from Theorem 1, let \(X\) be a \(d\times N\) matrix of training data where each column is sampled from \(x=s(z)+\alpha^{-1}y\). Denote \(\widehat{\mathcal{P}}(\alpha):=\lim_{N\to\infty}\widehat{P}(N)\) and let \(\widehat{\mathcal{U}}(\alpha)\) be the matrix of left singular vectors of the descrambled weight matrix \(\widehat{\mathcal{P}}(\alpha)W\). Then \(\widehat{\mathcal{U}}(\alpha)\) is continuous in \(\alpha\) and, as \(\alpha\to 0\),
\[\|\widehat{\mathcal{U}}(\alpha)-T_{r}\|_{F}\to 0. \tag{9}\]
Theorem 2 shows that the left singular vector basis of the descrambled matrix is "close" to a trigonometric basis, and the extent of this proximity is controlled by \(\alpha\). As \(\alpha\to 0\), i.e., as the noise dominates, this basis converges to the trigonometric basis (which is consistent with Theorem 1).
**Non-linear network.** Finally, we proceed to discuss the non-linear case when \(k\geq 2\). In this situation we choose to deal with the non-linearities using the Taylor expansion at the sample mean \(\overline{X}\):
\[\widehat{P}(k,N)\approx\operatorname*{argmin}_{P^{\top}P=I}\mathbb{E}\left[\|DP\left(f_{k}(\overline{X})+Jf_{k}(\overline{X})^{\top}(X-\overline{X})\right)\|_{2}^{2}\right] \tag{10}\]
Now the RHS can be characterized through Theorems 1 and 2 applied to \(Jf_{k}(\overline{X})\). We also provide empirical evidence for the quality of the Jacobian approximation to the descrambling problem in Figure 2.
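A sketch of this linearization step (ours, using `torch.autograd.functional.jacobian`, with the expansion point taken at the sample mean as in (10)) is given below; the descrambler of the linearized data can then be computed exactly as in the linear case, with \(W\) replaced by the Jacobian.

```python
import torch

def linearized_hidden_data(f_k, X):
    """First-order surrogate of the wiretapped data, per (10):
    f_k(x) ~ f_k(x_bar) + J f_k(x_bar) (x - x_bar), with x_bar the sample mean.
    f_k maps a vector of dimension d to a vector; X has shape (d, N)."""
    x_bar = X.mean(dim=1)
    J = torch.autograd.functional.jacobian(f_k, x_bar)   # shape (out_dim, d)
    return f_k(x_bar)[:, None] + J @ (X - x_bar[:, None])
```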
## 3. Applications
**Interpretable neural networks.** We use the results from the preceding discussion to study two cases in which descrambling reveals information already available to us through some other well-understood transformation of the weights. The first example comes from convolutional neural networks (CNNs).
_Corollary 1_.: Adopting the assumptions of Theorem 1 with samples of noise \(X\) and \(k=1\), let \(f\) be a 1-D CNN with stride 1. Then \(\widehat{\mathcal{P}}(k)W=W\), i.e., descrambling acts as identity transformation on the weight matrix.
Corollary 1 presents a simple case where descrambling becomes ineffective. This sheds light on the limitations of this method when it comes to network interpretation. More pertinently, however, it shows that every choice of \(\eta\), \(G\), and \(X\) from (3) defines a class of \(k\)-_interpretable_ weights \(\mathcal{I}_{k}\) such that the minimizers \(\widehat{P}(\eta,G,k,N)\) converge to \(I\) in the large data limit whenever \(W_{k}\in\mathcal{I}_{k}\) (here we change notation slightly
Figure 2. We compare the descramblers computed for \(k=2\) in DEERnet with \(f_{2}(X)\) (termed full) and with the approximation in (10) (termed jacobian). We find that the overall descrambled matrices are visually similar (albeit not very interpretable). We also see, via the action of \(\widehat{P}\) on a singular vector, that the Jacobian descramblers \(\widehat{P}^{j}\) are an intermediate between full descrambling and no descrambling (bottom right).
from (3) to include the effect of group \(G\) and criterion \(\eta\)). We formalize this intuition by defining a class of _interpretable_ neural networks. In what follows we assume that \(\mathcal{N}\) is the set of fully connected neural networks.
_Definition 2_.: An interpretability class \(\mathcal{I}(\eta,G,k,x\sim\mu)\subset\mathcal{N}\) is the set of neural networks such that \(f\in\mathcal{I}(\eta,G,k,x\sim\mu)\) if, for the \(k\)-th layer, a group \(G\), criterion \(\eta\), and data \(X\) with \(N\) i.i.d. columns sampled from \(\mu\), the solution to (3) is \(\widehat{\mathcal{P}}(\eta,G,k,x\sim\mu)=I\) as \(N\to\infty\). If \(f\in\mathcal{I}(\eta,G,k,x)\) then \(f\) is said to be \(\mathcal{I}(\eta,G,k,x)\)-interpretable, or simply \(\mathcal{I}\)-interpretable if \(\eta\), \(G\), \(k\), and \(x\) are clear from context.
Interpretability classes quantify those neural networks which are interpretable after training and do not require any descrambling to be intelligible to humans/experts. In describing such neural networks, \(\mathcal{I}\)-classes account for the underlying problem (such as signal recovery/image classification) by tracking the distribution of the training data. Furthermore, Definition 2 models a simple but natural setting used for training various types of neural networks: the training inputs are an incoming stream of samples of \(x\) forming the matrix of samples \(X\). Under this training data regime, \(\mathcal{I}\)-interpretability records the large-data behaviour of the descrambling matrix \(\widehat{P}\) for a specific choice of wiretapped layer and descrambling criterion. _A priori_, interpretability classes have a rather complicated structure and it is a mathematically interesting problem to provide some better understanding of different cases. In fact, we can reformulate Corollary 1 to give an explicit presentation of \(\mathcal{I}(\eta,O(n),1,x\sim N(0,\alpha^{-2}I))\) as the class of 1-D neural networks which are convolutional in the first layer.
_Proposition_.: Let \(k=1\), \(G=O(n)\), let \(\eta\) be the smoothness criterion descrambling functional (5), and let \(x\sim N(0,\alpha^{-2}I)\), the multivariate zero-mean normal distribution with covariance \(\alpha^{-2}I\). Let \(\mathcal{C}_{1}^{n}\subset\mathcal{N}\) be the subclass of networks with a convolutional first layer and \(n\) nodes in the first hidden layer, where \(\mathcal{N}\) is the class of 1-D NNs in \(C(\mathbb{R}^{n})^{m}\). Then, we have

\[\mathcal{I}(\eta,O(n),1,x\sim N(0,\alpha^{-2}I))=\mathcal{C}_{1}^{n}. \tag{11}\]
As such, characterizing \(\mathcal{I}(\eta,O(n),1,x\sim N(0,\alpha^{-2}I))\) can seem unrealistic because most real-life instances of neural network training do not utilize purely noisy data. Recalling the context of inverse problems, it is of interest to know whether we can characterize any interpretability class when the input data is in the form of noisy measurements of a parameter. It turns out, for instance, that a simple phase identification problem in oscillatory data analysis (ODA) can be the right setting for such a question. A central problem in ODA is to identify the phase \(\phi(t)\) and the trend \(T(t)\) from measurements of the noisy signal
\[f(t)=\exp\left(2\pi i\,\phi(t)\right)+T(t)+y. \tag{12}\]
The signal model in (12) is ubiquitous in many applications of science and engineering, including clinical, seismic, and climate data, and investigative art [31, 32, 33, 34, 35, 36, 37, 38]. Unsurprisingly, this type of inverse problem has attracted neural network approaches [39]. Here we show that in the simple case when the trend \(T=0\) and the phase \(\phi=z\) is a constant to be estimated under a uniform prior, the oscillatory data modeled by (12) defines a setting where we can characterize an interpretability class.
_Corollary 2_.: Let \(z\sim\text{Unif}[-uN,vN]\) be the phase parameter with uniform prior for \(u,v\in\mathbb{Z}\), and let the oscillating signal \(s(z)=\left(\exp\left(2\pi i\,kz/N\right)\right)_{k=0}^{N-1}\) be measured as \(x=s(z)+\alpha^{-1}y\), where \(y\) is a standard normal variable. If \(\eta\) is the smoothness descrambling criterion, then
\[\mathcal{I}(\eta,O(n),1,x)=\mathcal{C}_{1}^{n}. \tag{13}\]
Here \(\mathcal{C}_{1}^{n}\) is the class of neural networks with a convolutional first layer and \(n\) nodes in the first hidden layer.
Corollary 2 has multiple implications. For example, it shows that convolutional neural networks reside in the class of interpretable networks for the problem of recovering the phase from equispaced samples of an oscillating signal in time. Second, it shows that descrambling the first layer of a network trained to solve this inverse problem might be inconclusive in yielding interpretations, because the descrambler transformation converges to the identity matrix. Thus, using smoothness descrambling for network interpretation reveals to us only the information already available in the raw weights of the first layer.
**DEERNet.** We use our theoretical analysis to furnish quantitative justifications for the visual descrambling analysis of DEERNet [1], the neural network on which descrambling was first tested. This theoretical perspective will illuminate why the SVD emerges as an "interpretable" factorization. DEERNet is a feedforward neural net trained to solve a noisy signal recovery problem in double electron-electron resonance (DEER) spectroscopy. In particular, it solves the following Fredholm equation of the first kind:
\[\Gamma(t)=\int_{\Omega}p(r)\gamma(r,t)\,dr+\xi. \tag{14}\]
Here \(\gamma(r,t)\) is the DEER kernel given by
\[\gamma(r,t):=\sqrt{\frac{\pi r^{3}}{6Dt}}\left[\cos\left(Dt\right)FrC\!\left[\sqrt{\frac{6Dt}{\pi}}\right]+\sin\left(Dt\right)FrS\!\left[\sqrt{\frac{6Dt}{\pi}}\right]\right] \tag{15}\]

\[D:=\frac{\mu_{0}}{4\pi}\frac{\gamma_{1}\gamma_{2}h}{r^{3}};\quad FrC(x)=\int_{0}^{x}\cos(t^{2})\,dt;\quad FrS(x)=\int_{0}^{x}\sin(t^{2})\,dt. \tag{16}\]
The training inputs and outputs are of the form \(\{\Gamma_{i},p_{i}\}_{i=1}^{N}\), where \(p_{i}\) are distance distributions and \(\Gamma_{i}\) is the DEER trace sampled at times \(\{t_{j}\}_{j=1}^{256}\), with Gaussian noise. DEERNet obtains \(p_{i}\) from \(\Gamma_{i}\) and hence can be formulated as a network that solves the problem of recovering \(z\) from noisy observations of \(x=s(z)+\alpha^{-1}y\). Here \(s\) represents the integral operator corresponding to integration against the DEER kernel and \(z\) is the input probability distribution.
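A numerical sketch of the kernel (15)-(16) follows, with two caveats: `scipy.special.fresnel` uses the \(\sin(\pi t^{2}/2)\) convention, so a change of variables is needed to recover \(FrC\) and \(FrS\); and we normalize the prefactor to the dimensionless \(\sqrt{\pi/(6Dt)}\), an assumption on our part that recovers \(\gamma\to 1\) as \(t\to 0\).

```python
import numpy as np
from scipy.special import fresnel

def frc(x):
    """FrC(x) = int_0^x cos(t^2) dt via scipy's Fresnel C (pi t^2/2 convention)."""
    _, c = fresnel(x * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * c

def frs(x):
    """FrS(x) = int_0^x sin(t^2) dt via scipy's Fresnel S."""
    s, _ = fresnel(x * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * s

def deer_kernel(dt):
    """Powder-averaged DEER kernel as a function of the dimensionless product
    Dt in (15)-(16); the distance r enters only through D ~ 1/r^3."""
    arg = np.sqrt(6 * dt / np.pi)
    return np.sqrt(np.pi / (6 * dt)) * (np.cos(dt) * frc(arg) + np.sin(dt) * frs(arg))

# sanity check: the kernel tends to 1 as Dt -> 0 and decays with oscillation
print(deer_kernel(np.array([1e-8, 0.5, 2.0, 10.0])))
```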
**Why Fourier domain visualization works.** In [1], a descrambling analysis of the first layer weights of a shallow 2-layer DEERNet uncovered a notch filter and a bandpass filter in the first weight matrix after visualizing the 2-D DFT \(\mathcal{F}\widehat{P}(1,N)W_{1}\) (Figure 3). Fourier domain visualization in [1] was justified experimentally because of the interlocking wave patterns seen in the descrambled weights \(\widehat{P}(1,N)W_{1}\). We now provide a more quantitative justification for this heuristic based on Theorem 1, and on the correspondence between
Figure 3. Left: Descrambling the first layer of a two-layer DEERNet reveals a notch and a bandpass filter. A powerful plausibility argument is mounted in [1] to indicate that this notch resembles a cube root on the time axis/input dimension. Center: Visualizing the Fourier transform of \(\Sigma_{1}V_{1}^{\top}\) reveals repeating streaks along the singular vectors; in fact, along the time axis these streaks are arranged in the shape of a cubic. Right: We visualize the Fourier transform of the right singular vectors of the integral kernel \(K\) scaled by the singular values \(\Sigma_{K}\) and observe cubic streaks similar to the central panel. This provides a more quantitative justification for the first layer's function as inverting the integral operator.
the SNR \(\alpha\) and the interpretation matrix \(\widehat{P}(1,N)\). When the descrambling data \(X\) is purely noisy, \(\widehat{P}(N)\to T_{r}U_{1}^{\top}\iff T_{r}^{\top}\widehat{P}(N)U_{1}\to I\) due to Theorem 1. Thus, the group homomorphism
\[P\to\varphi(P)=T_{r}^{\top}PU_{1} \tag{17}\]
provides an appropriate rescaling for the limiting descrambling matrix \(\widehat{\mathcal{P}}(1)\). For example, when \(\alpha\to 0\), \(\varphi(\widehat{P}(N))\to I\), so that the group homomorphism (17) simplifies the correspondence between the SNR and the rescaled large-limit descrambler \(\varphi(\widehat{P}(N))\). In this case specifically we have
\[\text{SNR}=0\iff\varphi(\widehat{\mathcal{P}})=I. \tag{18}\]
As a consequence, post-descrambling visualization of the 2D DFT of \(W_{1}\) approximates \(\varphi(\widehat{\mathcal{P}})\Sigma_{1}(\mathcal{F}V_{1})^{\top}\). In fact, in the case when \(\varphi(\mathcal{P})=I\), \(\mathcal{F}\widehat{P}_{1}W_{1}\approx\Sigma_{1}(\mathcal{F}V_{1})^{\top}\) - the Fourier transform of the singular vectors rescaled by the singular values. Thus, visualizing the matrix by setting \(\varphi(P)=I\) is quite informative: the result is \(\Sigma_{1}V_{1}^{\top}\) and it reveals a repeating pattern of cubic streaks, uncovering both the notch filter and the distance cube root as found in the integral kernel from (14). This shows that descrambling uncovers - to a large extent - the information within the SVD and that multiplication by \(\widehat{P}_{1}\) moves \(W_{1}\) closer to the integral kernel. Singular vectors themselves can be interpretable without necessitating any additional processing such as descrambling, intertwiner groups, or hypergraph arrangements [13, 12] in the context of noisy signal estimation.
**Connection to semantic development.** The emergence of the DEER kernel in the right singular vector matrix \(V_{1}\) is not entirely surprising due to a heuristic motivated by (1). Indeed, if the network did not have a non-linearity, then \(W_{1}(t)=QA(\Lambda,t)V^{\top}\) where \(V\) is the matrix of the right singular vectors of \(\Sigma_{yx}=\mathbb{E}[yx^{\top}]\), the output-input covariance matrix. But in the signal estimation case where \(y=s(x)+\alpha^{-1}\xi\), this covariance matrix has a very simple form, especially when \(s\) is a linear map \(K\). In this case, we have \(\Sigma_{yx}=\mathbb{E}[(Kx+\alpha^{-1}\xi)x^{\top}]=K\,\mathbb{E}[xx^{\top}]\). With the assumption that \(\mathbb{E}[xx^{\top}]=I\), we have \(\Sigma_{yx}=K\), so the matrix \(V\) corresponds to the right singular vectors of the kernel \(K\). We confirm this heuristic in the right panel of Figure 3.
**Noise isn't always meaningless.** Our analysis of DEERNet shows that descrambling goes hand-in-hand with semantic development in the SVD, particularly when a network is trained to learn a signal from noisy measurements. However, this interplay with semantic development is only approximate. This is because the linear network training dynamics (1) merely approximate the real training dynamics of nonlinear networks. This necessitates the discovery of nonlinear features via descrambling. For example, the notch filter in the first layer of DEERNet cannot be discovered by noise-only descrambling, and we require the presence of the forward map \(s\) in the propagating data to discover a notch that is latent in \(W_{1}\).
Here we describe a case where noise descrambling, i.e., neural network interpretation via its singular vectors, is useful in and of itself. We demonstrate this phenomenon for a NN in which the forward map \(s\) is non-linear, so that the nature of the matrices \(U\) and \(V\) in (1) is entirely unspecified. In particular, following [30] we train a four-layer fully-connected NN to learn the exponential parameters \((T_{2,1},T_{2,2})\) generating a biexponential model:
\[y(t)=0.6\exp(-t/T_{2,1})+0.4\exp(-t/T_{2,2}), \tag{19}\]
corrupted by noise. The recovery of the exponential parameters \(T_{2,1},T_{2,2}\) from a noisy decay curve is a central problem in magnetic resonance relaxometry [40], and is well-known to be ill-posed, with parameter estimates strongly dependent on the noise. This problem has been investigated with a number of neural network-based approaches [41, 42]; the novelty of the network in [30] is that it is trained to solve this problem on both noisy and smooth forms of the same data, as a form of input data transformation to incorporate high-fidelity, high-stability, and generalizability characteristics into the solutions. To achieve this, the noisy input data is first processed with regularized non-linear least squares parameter estimation, with these estimates used to generate smooth decay curves. These smooth curves are concatenated with the noisy samples to form a single input sample for presentation to the NN. The NN with just the native input, concatenated with itself, is termed (ND, ND), where ND indicates noisy decay. The NN with the concatenated native and smoothed versions of the decay curve is termed (ND, Reg), with Reg indicating the smooth decay generated by the regularized nonlinear least squares analysis. This strategy of training on both noisy and smooth data is termed _input layer regularization_, and improves parameter estimation by 5-10 percent as compared to
the more conventional NN estimation of parameters from noisy decay curves [30]. We find that the right singular vectors corresponding to the largest singular values of the first layer are biexponential curves, so that the network learns an input signal library in the class of its training data. Most notably, the (ND, Reg) network learns two very different shapes of biexponentials for the noisy and smooth cases; we attribute its higher test accuracy as compared to (ND, ND) to this result, indicating that (ND, Reg) learns a larger set of functions within the signal model class of biexponential functions. Thus, in the regression setting descrambling guides us to finding the location of NN learning, namely within the SVD. In addition to explaining generalization, learning of the model in the SVD may have adversarial implications, since we are able to learn samples of the input data model directly from a trained network. While the SVD has been used as a compression technique in feedforward neural networks [23] and for deep layer interpretation in classification [11, 13], our results appear to be the first demonstration that data model learning occurs in singular vectors for nonlinear networks.
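A sketch of the training-data generator implied by (19) follows; the \(T_{2}\) ranges, time grid, and SNR are illustrative placeholders rather than the exact training distribution of [30].

```python
import numpy as np

def biexponential_batch(n, t, snr, rng=None):
    """Noisy biexponential decays per (19), with targets (T_{2,1}, T_{2,2})."""
    rng = rng or np.random.default_rng()
    T21 = rng.uniform(20.0, 80.0, size=n)      # illustrative range, e.g. in ms
    T22 = rng.uniform(80.0, 300.0, size=n)
    clean = (0.6 * np.exp(-t[None, :] / T21[:, None])
             + 0.4 * np.exp(-t[None, :] / T22[:, None]))
    noisy = clean + rng.standard_normal(clean.shape) / snr
    return noisy, np.stack([T21, T22], axis=1)

t = np.linspace(0.0, 512.0, 64)
X_noisy, params = biexponential_batch(1000, t, snr=100.0)
# the (ND, Reg) variant would concatenate X_noisy with a regularized fit of each row
```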
## 4. Conclusion
A key novelty of descrambling is that it leverages problem-dependent interpretation, akin to the manner in which interpretability methods in image classification concern the most influential pixels, image substructures, and decision boundaries. We take this concept further: our characterization of solutions to (5) and the subsequent theoretical explanations of the experimental results in [1] show that the interpretation matrices \(\widehat{P}\) can admit simple large-data limits that depend on the underlying problem. In fact, we observe that the mathematical formalism of inverse problems allows us to obtain strong characterizations of these limits with practical implications. For example, descrambling implicitly defines problem-dependent classes of neural networks, and these classes can be explicitly characterized, for instance, in a simple problem from oscillatory data analysis. Furthermore, while [1] interprets DEERNet through the SVD of the descrambled weights, we show that even the SVDs of the _scrambled_, i.e., raw, weights can themselves be informative. We give an explanation for this phenomenon: if the data is highly noisy, the Fourier domain visualization of the
Figure 4. We visualize the singular vectors of the first layer of ILR networks trained on concatenations of noisy time-series data with the functional form \(c_{1}\exp(-t/T_{2,1})+c_{2}\exp(-t/T_{2,2})\) for \(c_{i}\geq 0\), discovering that these singular vectors themselves can be fit by curves of the same algebraic form as the noise-less version of the training data, allowing for the \(c_{i}\)'s to be negative. Note that this phenomenon persists for a variety of SNRs and for both classes of networks, (ND, ND) and (ND, Reg). Here (ND, Reg) refers to networks trained on data points given by a concatenation of noisy and Tikhonov-regularized copies of the same data (right panel).
descrambled weights approximates the Fourier domain visualization of the orthogonal row basis vectors provided by the right singular vectors, through the rescaling formula motivated by Theorem 1. We remark, however, that this does not imply that descrambling is equivalent to the SVD; indeed, in the cases when the data \(X\) follows the more realistic distribution of a signal under noisy measurements and the network is wiretapped at a higher layer (\(k\geq 2\)), we can no longer exactly characterize the solutions to (5), and the minimizers do indeed reflect the latent transformations in the underlying weights \(W_{k}\). We find, at least empirically, that the action of descrambler matrices can be approximated through descrambling of the network linearized via its Jacobian at the empirical average. This does not mean the choice of the Jacobian as a linear approximation is canonical; there are many different linear models for NNs [43, 44, 45] and it is as yet unclear which approximation serves best as a surrogate for modeling the action of descrambling transformations.
A surprising and highly significant aspect of our results is that when the SVD is uncovered indirectly by descrambling, it _still_ remains informative of the transformations in the data. This observation closely resembles examples [13, 12, 46] where SVDs of weights in CNNs are used to interpret features of an image classification network. Unlike these approaches, which require highly non-trivial post-processing of the singular vectors, we demonstrate that a simple visual analysis of the singular vectors themselves can inform the post hoc analysis of the network's performance. Moreover, we show that in noisy estimation problems it is in the SVD that the network memorizes the underlying function class of the problem. This demonstrates not only that the training dynamics given by (1) for linear models can approximate non-linear settings, but also that the SVD by itself can illuminate the network's generalizability and interpretability.
## Materials and Methods
The proofs of all our results and supporting figures are provided in the SI document. Code for the figures and additional experiments can be found at [https://github.com/ShashankSule/descrambling-NN](https://github.com/ShashankSule/descrambling-NN).
## Acknowledgements
This work was supported in part by the Intramural Research Program of the National Institute on Aging of the National Institutes of Health (NIH).
## Appendix
Theoretical Results
Here we provide proofs for the results in our submitted article.
**Lemma 1**.: Let \(W_{n}\) be a sequence of matrices such that \(W_{n}\to W\) in the Frobenius norm, where \(W\in A\) and \(A\) is the set of matrices with distinct non-zero singular values. Then the left and right singular vectors \(\{u_{i}^{n}\}\) and \(\{v_{i}^{n}\}\) of \(W_{n}\) converge to the left (resp. right) singular vectors of \(W\).
Proof.: Since \(u_{i}^{n}\) and \(v_{i}^{n}\) are the eigenvectors of \(W_{n}W_{n}^{\top}\) and \(W_{n}^{\top}W_{n}\) respectively, we may reduce, without loss of generality, to the case where \(W\) is symmetric, so that it has real eigenvalues. But from [29, Theorem 8, pp. 130], the eigenvectors of a matrix with simple eigenvalues are differentiable in its entries. Consequently, if \(u_{i}^{n}\) is the \(i\)th eigenvector of \(W_{n}\), then \(u_{i}^{n}\to u_{i}\), where \(u_{i}\) is the \(i\)th eigenvector of \(W\).
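As an informal numerical illustration of Lemma 1 (our own sketch, not part of the original results), the following Python snippet perturbs a matrix with distinct singular values and checks that the singular vectors converge as the perturbation vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.diag([3.0, 2.0, 1.0])       # distinct non-zero singular values
U, _, _ = np.linalg.svd(W)         # here U is the identity

for eps in [1e-1, 1e-2, 1e-3]:
    W_n = W + eps * rng.standard_normal((3, 3))
    U_n, _, _ = np.linalg.svd(W_n)
    # |diag(U^T U_n)| -> 1 as eps -> 0 (singular vectors match up to sign)
    print(eps, np.abs(np.diag(U.T @ U_n)).min())
```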
**Theorem 1**.: For the reader's convenience we repeat the statement of the theorem here. Let
1. \(D\) be a Fourier/finite difference stencil [47]
2. \(X_{N}\) a random \(d\times N\) matrix such that each column \(x_{i}\) is drawn independently from an isotropic density.
3. \(W\) any \(m\times d\) weight matrix with ordered rank \(r\) SVD \(W=U\Sigma V^{\top}\)
4. \(T\) the \(m\times m\) matrix such that every \(k\)th column has \(l\)th entry \(t_{k}(l)\) given by \[t_{k}(l)=\begin{cases}\cos\frac{\pi lk}{m}&k\text{ even}\\ \sin\frac{\pi l(k+1)}{m}&k\text{ odd}\end{cases}\tag{20}\]
5. \(T_{r}\) the submatrix of \(T\) given by picking the last \(r\) columns.
Then we have that
\[\widehat{P}_{N}=\underset{P^{\top}P=I}{\operatorname{argmin}}\left\|DPWX_{N}\right\|_{F}^{2}\longrightarrow T_{r}U^{\top}\]
almost surely as \(N\to\infty\). In particular the descrambled weight matrix converges to \(T_{r}\Sigma V^{\top}\) almost surely.
Proof.: First, to clean up notation, we set \(S_{N}:=\frac{1}{\sqrt{N}}WX_{N}\). Now note that
\[\underset{P^{\top}P=I}{\operatorname{argmin}}\|DPWX_{N}\|_{F}^{2}=\underset{P^{\top}P=I}{\operatorname{argmin}}\frac{1}{N}\|DPWX_{N}\|_{F}^{2},\]
so we can switch to analyzing the minimum of \(\frac{1}{N}\|DPWX_{N}\|_{F}^{2}\). But now,
\[\frac{1}{N}\|DPWX_{N}\|_{F}^{2}=\frac{1}{N}\operatorname{Tr}\left(DPWX_{N}(DPWX_{N})^{\top}\right)\tag{21}\]
\[=\frac{1}{N}\operatorname{Tr}\left(DPWX_{N}X_{N}^{\top}W^{\top}P^{\top}D^{\top}\right)\tag{22}\]
\[=\operatorname{Tr}\left(DPW\left(\tfrac{1}{N}X_{N}X_{N}^{\top}\right)W^{\top}P^{\top}D^{\top}\right)\tag{23}\]
\[=\operatorname{Tr}\left(DPS_{N}S_{N}^{\top}P^{\top}D^{\top}\right)\tag{24}\]
\[=\operatorname{Tr}\left(S_{N}^{\top}P^{\top}D^{\top}DPS_{N}\right)\tag{25}\]
To find the minimizer of (25) w.r.t. \(P^{\top}P=I\) we make an interesting change of variable: let \(S_{N}=U_{N}\Sigma_{N}V_{N}^{\top}\) be the SVD of \(S_{N}\). Then we let \(Y_{N}=PS_{N}V_{N}\). Note that this means \(Y_{N}^{\top}Y_{N}=\Sigma_{N}^{2}\) given the constraint \(P^{\top}P=I\). With this change of variables, (25) can be written as
\[\frac{1}{N}\|DPWX_{N}\|_{F}^{2}=\operatorname{Tr}(Y_{N}^{\top}D^{\top}DY_{N}) \tag{26}\]
Consequently, the smoothness descrambling problem
\[\min\|DPWX_{N}\|_{F}^{2}\qquad\text{w.r.t.}\;P^{\top}P=I \tag{27}\]
can be relaxed to
\[\min\operatorname{Tr}(Y_{N}^{\top}D^{\top}DY_{N})\qquad\text{w.r.t.}\;Y_{N}^{\top}Y_{N}=\Sigma_{N}^{2}. \tag{28}\]
Note that the optimization problem (28) is a relaxation of (27) because of the transformation \(P\to PWV\). But now the solution to the relaxed problem (28) is well-known: it is a generalized eigenvalue problem whose solutions correspond to the first \(R_{N}=\min(r_{N},d-1)\) eigenvectors of \(D^{\top}D\), where \(r_{N}\) is the number of non-zero diagonal entries in \(\Sigma_{N}^{2}\) (this is because, notably, any finite difference stencil has rank \(d-1\)). Since \(D^{\top}D\) is diagonalized by \(T\), we pick the largest \(R_{N}\) eigenvectors (indexed by the matrix \(T_{R_{N}}\)) and scale them by \(\Sigma_{N}\) to get \(Y_{N}\). Thus, \(Y_{N}=T_{R_{N}}\Sigma_{R_{N}}\). Now using the change of variables \(Y_{N}=PS_{N}V_{R_{N}}\) we get \(PS_{N}=T_{R_{N}}\Sigma_{R_{N}}V_{R_{N}}^{\top}\), so \(S_{N}=P^{\top}T_{R_{N}}\Sigma_{R_{N}}V_{R_{N}}^{\top}\). This is a singular value decomposition for \(S_{N}\), so as long as the non-zero singular values of \(S_{N}\) are distinct we get from the uniqueness of the SVD that \(\widehat{P}_{N}^{\top}T_{R_{N}}=(U_{N})_{R_{N}}\iff\widehat{P}_{N}=T_{R_{N}}(U_{N})_{R_{N}}^{\top}\). Now this is where the assumption that \(W\in A\) comes in: by the strong law of large numbers we have that \(S_{N}S_{N}^{\top}\to WW^{\top}\) almost surely. Since the singular values of \(W\) are distinct, the singular values of \(S_{N}\) are eventually distinct and ordered according to the ordering of \(W\), so the expression for \(\widehat{P}_{N}\) is valid; moreover, from Lemma 1 we get that the matrices \((U_{N})_{R_{N}}\) converge to the matrix \(U\) of left singular vectors of \(W\). This proves our result.
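To make the limiting argument concrete, here is a minimal numpy sketch (our own illustration, with an assumed dimension and sample size) checking the key step of the proof: the eigenvectors \(U_{N}\) of \(S_{N}S_{N}^{\top}=W\left(\frac{1}{N}X_{N}X_{N}^{\top}\right)W^{\top}\) align with the left singular vectors \(U\) of \(W\) as \(N\) grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 32, 200_000                       # assumed dimensions for the demo

W = rng.standard_normal((d, d))          # almost surely distinct singular values
U, _, _ = np.linalg.svd(W)

X = rng.standard_normal((d, N))          # isotropic columns x_i
M = W @ (X @ X.T / N) @ W.T              # S_N S_N^T, which -> W W^T a.s.

_, U_N = np.linalg.eigh(M)
U_N = U_N[:, ::-1]                       # reorder to decreasing eigenvalues

# Alignment |diag(U^T U_N)| approaches 1 (up to sign) for large N.
print(np.abs(np.diag(U.T @ U_N)).min())
```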
**Equation [10]**.: Let \(x=s(z)+\alpha^{-1}y\) where \(y\) is a zero-mean isotropic noise vector with finite second moment and \(z\) has some prior distribution independent of the noise \(y\). We stated the following convergence result:
\[\eta(P)\to\mathbb{E}[\|DPWs(z)\|_{2}^{2}]+\alpha^{-2}\mathbb{E}[\|DPWy\|_{2}^{2}] \tag{29}\]
The proof of (8) is as follows: Note first that because of the law of large numbers,
\[\frac{1}{N}\|DPWX\|_{F}^{2}=\frac{1}{N}\sum_{i=1}^{N}\|DPWx_{i}\|_{2}^{2}\to\mathbb{E}_{z,y}[\|DPWx\|_{2}^{2}]\]
We directly switch to analyzing the LLN limit of \(\eta\) and write
\[\eta(P)=\mathbb{E}[\|DPWx\|_{2}^{2}]=\mathbb{E}[\mathbb{E}[\|DPWx\|_{2}^{2} \mid z]]=\mathbb{E}[\mathbb{E}[\|DPW(s(z)+\alpha^{-1}y)\|_{2}^{2}\mid z]]\]
Now setting \(A=DPW\) and using the zero-mean property of the additive noise we get
\[\begin{split}\eta(P)&=\mathbb{E}\Big{[}\mathbb{E}[\|As(z)\|_{2}^{2}\mid z]+\mathbb{E}[\|\alpha^{-1}Ay\|_{2}^{2}\mid z]\\&\qquad+2\alpha^{-1}\mathbb{E}[\langle A^{\top}As(z),y\rangle\mid z]\Big{]}\\&=\mathbb{E}\Big{[}\mathbb{E}[\|As(z)\|_{2}^{2}\mid z]+\mathbb{E}[\|\alpha^{-1}Ay\|_{2}^{2}\mid z]\Big{]}\\&=\mathbb{E}[\|As(z)\|_{2}^{2}]+\alpha^{-2}\mathbb{E}[\|Ay\|_{2}^{2}]\\&=\mathbb{E}[\|DPWs(z)\|_{2}^{2}]+\alpha^{-2}\mathbb{E}[\|DPWy\|_{2}^{2}]\end{split}\tag{30}\]
Now we state Theorem 2:
**Theorem 2**.: Borrowing notation from Theorem 1, let \(X\) be a \(d\times N\) matrix of training data where each column is sampled from \(x=s(z)+\alpha^{-1}y\). Denote \(\widehat{\mathcal{P}}(\alpha):=\lim_{N\to\infty}\widehat{P}_{N}\) and let \(\widehat{\mathcal{U}}(\alpha)\) be the matrix of left singular vectors of the descrambled weight matrix \(\widehat{\mathcal{P}}(\alpha)W\). Then \(\widehat{\mathcal{U}}(\alpha)\) is continuous in \(\alpha\) and as \(\alpha\to 0\)
\[\|\widehat{\mathcal{U}}(\alpha)-T_{r}\|_{F}\to 0. \tag{31}\]
Proof.: We can use the same analysis as in the proof of Theorem 1 to conclude first that \(\widehat{P}_{N}=T_{R_{N}}(U_{N})_{R_{N}}^{\top}\); here \(U_{N}\) is the matrix of left singular vectors of \(W\left(\frac{1}{N}X_{N}X_{N}^{\top}\right)W^{\top}\). Then \(\frac{1}{N}X_{N}X_{N}^{\top}\to\mathbb{E}_{X}[xx^{\top}]\), the autocorrelation matrix of \(X\). But as \(\alpha\to 0\), this autocorrelation matrix converges, up to rescaling, to \(I\). Once again using the continuity of eigenvectors in the entries of the matrix, we get that \(U(\alpha)\to U\), where \(U(\alpha)\) is the matrix of singular vectors of \(W\mathbb{E}[xx^{\top}]W^{\top}\) and \(U\) is the matrix of singular vectors of \(W\).
Finally, we state and prove Corollary 1 and 2 pertaining to convolutional neural networks and oscillatory data analysis:
**Corollary 1**.: Adopting the assumptions of Theorem 1 with samples of noise \(X\) and \(k=1\), let \(f\) be a 1-D CNN with stride 1. Then \(\widehat{\mathcal{P}}W=W\), i.e., descrambling acts as the identity transformation on the weight matrix.
Proof.: If \(f\) is a 1-D CNN where the first filter \(W\) has stride 1, then \(W\) is symmetric and circulant, so \(W=T\Sigma T^{\top}\), where \(T\) is the matrix of samples of a trigonometric basis from Theorem 1. Then \(\widehat{P}=T_{r}U^{\top}=T_{r}T_{r}^{\top}\), which acts as the identity on the range of \(W\), so \(\widehat{\mathcal{P}}W=W\).
**Corollary 2**.: Adopting the assumptions of Theorem 1, let \(z\sim\text{Unif}[-uN,vN]\) for \(u,v\in\mathbb{Z}\) and \(s(z)=(\exp(2\pi ikz/N))_{k=0}^{N-1}\). Then \(\widehat{P}_{N}\to\widehat{\mathcal{P}}=T_{r}U^{\top}\), where \(T_{r}\) is the trigonometric basis from Theorem 1 and \(U\) is the left singular vector matrix of the weights \(W\).
Proof.: We show that the signal term and noise term reduce to the same minimization problem. Let \(A:=DPW\). Then we have
\[\mathbb{E}_{z}[\|DPWs(z)\|_{2}^{2}]=\mathbb{E}_{z}[\|As(z)\|_{2}^{2}]=\frac{1}{N(u+v)}\int_{-uN}^{vN}\|As(z)\|_{2}^{2}\,dz\]
But now, \((As(z))_{j}\overline{(As(z))_{j}}=\sum_{k,l}a_{jk}a_{jl}\exp(2\pi i(k-l)z/N)\), and
\[\frac{1}{N(u+v)}\int_{-uN}^{vN}\exp(2\pi i(k-l)z/N)\,dz=\delta_{l,k}\]
Here \(\delta_{k,l}\) is the Kronecker delta. Now,
\[\begin{split}\frac{1}{N(u+v)}\int_{-uN}^{vN}\|As(z)\|_{2}^{2}\,dz&=\frac{1}{N(u+v)}\int_{-uN}^{vN}\sum_{j}(As(z))_{j}\overline{(As(z))_{j}}\,dz\\&=\sum_{j}\frac{1}{N(u+v)}\int_{-uN}^{vN}(As(z))_{j}\overline{(As(z))_{j}}\,dz\\&=\sum_{j}\sum_{k}|a_{j,k}|^{2}\\&=\|A\|_{F}^{2}=\|DPW\|_{F}^{2}\end{split}\]
Putting the above calculation together with the calculation for isotropic noise from Theorem 1 and using Equation (8) we get
\[\begin{split}\eta(P)&=\mathbb{E}_{z}[\|DPWs(z)\|_{2}^{2}]+\alpha^{-2}\mathbb{E}[\|DPWy\|_{2}^{2}]\\&=\|DPW\|_{F}^{2}+\alpha^{-2}\|DPW\|_{F}^{2}\end{split}\]
Minimizing the above function over orthogonal \(P\) leads to \(\widehat{P}=T_{r}U^{\top}\) (just like Theorem 1).
**MDS Criterion.** The _maximum diagonal sum_ (MDS) criterion is suggested in [1] for NN interpretation of frequency domain data:
\[\widehat{P}_{MDS}=\underset{P^{\top}P=I}{\operatorname{argmax}}\,\operatorname{Tr}(PW)\]
We show that for this criterion \(\widehat{P}\) can be given explicitly in terms of the SVD of weights \(W\):
_Proposition_.: Let \(W\in M_{n}(\mathbb{R})\). Then \(\underset{P^{\top}P=I}{\operatorname{argmax}}Tr(PW)=VU^{\top}\) where \(W=U\Sigma V^{\top}\) is the SVD of \(W\).
Proof.: Let \(W=\sum_{i=1}^{r}\sigma_{i}u_{i}v_{i}^{\top}\) be the SVD of \(W\) and let \(P\) be orthogonal. Then
\[Tr(PW)=\sum_{i=1}^{r}\sigma_{i}Tr(Pu_{i}v_{i}^{\top})\]
But note that \(Tr(xy^{\top})=\langle x,y\rangle\), so \(Tr(Pu_{i}v_{i}^{\top})=\langle Pu_{i},v_{i}\rangle\leq\|Pu_{i}\|\,\|v_{i}\|=1\) by Cauchy-Schwarz. So
\[Tr(PW)=\sum_{i=1}^{r}\sigma_{i}Tr(Pu_{i}v_{i}^{\top})\leq\sum_{i=1}^{r}\sigma_{i}= Tr(\sqrt{W^{\top}W})\]
Thus \(\widehat{P}=VU^{\top}\) yields the conclusion.
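In code, the MDS descrambler therefore reduces to a single SVD. A minimal numpy sketch (our own illustration, not from [1]) verifying that \(\widehat{P}=VU^{\top}\) attains the nuclear-norm bound:

```python
import numpy as np

def mds_descrambler(W: np.ndarray) -> np.ndarray:
    """Orthogonal maximizer of Tr(PW): P = V U^T from the SVD W = U S V^T."""
    U, _, Vt = np.linalg.svd(W)
    return Vt.T @ U.T

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))
P = mds_descrambler(W)
# Tr(PW) should equal the nuclear norm Tr(sqrt(W^T W)), i.e., the sum of
# singular values, matching the bound in the proof above.
print(np.trace(P @ W), np.linalg.svd(W, compute_uv=False).sum())
```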
**DEERNet.** We descrambled DEERNet according to the specifications in [1]. In particular, we used the shallow architecture Input \(\rightarrow\) Fully connected \(256\times 80\rightarrow\) tanh activation \(\rightarrow\) Fully connected \(80\times 256\rightarrow\) sigmoid activation \(\rightarrow\) renormalization \(\rightarrow\) Output. Additionally, we made use of the MATLAB modules in the Spinach package to train the network and descramble it on a Quadro GV100 GPU to reproduce the experiments in [1] as closely as possible. We were able to reproduce these experiments only up to an amount of noise: the recovery of the cubic notch filter is much clearer in [1] and may have contributed to the interpretation of the first layer as notch filter/baseline elimination. Nonetheless, in our main article we have also given a quantitative explanation for why the first layer represents the DEER kernel, based on a rescaling analysis motivated by Theorem 1. We also confirmed the existence of cubic conversion by changing the cubic factor in the DEER kernel to a quartic factor and finding a narrowing in the notch, presumably to account for the fact that a quartic curve is flatter around the origin than a cubic. This experiment is provided in our data repository [https://github.com/ShashankSule/descrambling-NN](https://github.com/ShashankSule/descrambling-NN).
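For readers without access to Spinach, the following is a PyTorch re-sketch of the shallow architecture described above; the exact form of the renormalization step is our assumption (here, rescaling each output trace to unit maximum), and training details are omitted:

```python
import torch
import torch.nn as nn

class ShallowDEERNet(nn.Module):
    """Sketch: FC 256 -> 80, tanh, FC 80 -> 256, sigmoid, renormalization."""

    def __init__(self, n_in: int = 256, n_hidden: int = 80):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.fc1(x))
        y = torch.sigmoid(self.fc2(h))
        # Renormalization: assumed here to rescale each output to unit
        # maximum; the exact normalization used in [1] may differ.
        return y / y.amax(dim=-1, keepdim=True)

net = ShallowDEERNet()
print(net(torch.randn(4, 256)).shape)  # -> torch.Size([4, 256])
```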
**ILR network.** We trained neural networks with a \(128\to 32\to 256\to 256\to 2\) input layer regularization structure with ReLU non-linearities to solve the problem of recovering \((T_{2,1},T_{2,2})\) from 64 noisy time-equispaced measurements of the signal
\[0.6\exp(-t/T_{2,1})+0.4\exp(-t/T_{2,2}) \tag{32}\]
The raw noisy data track (termed ND) was processed through a non-linear least squares problem and the resulting NLLS-recovered parameters \((T_{2,1}^{NLLS},T_{2,2}^{NLLS})\) were used to generate a new track (termed Reg) using equispaced time samples. The concatenated data, now of 128 measurements, was fed through the network with RMSE loss to recover the original \((T_{2,1},T_{2,2})\). Following the recommendations in [30] we used a Tikhonov regularization procedure in the NLLS pre-processing step with \(\lambda=1.6\times 10^{-4}\). We also trained the same architecture by concatenating the same sample of noisy data into a size 128 vector; this architecture is termed (ND,ND) and it represents the traditional neural-network approach of recovering parameters from noisy samples of a signal. Post-training we compute the singular vectors of the weight matrices \(W_{1}\), finding that these singular vectors themselves are noisy samples of biexponential curves. The caveat here, however, is that we required biexponential functions of the form \(c_{1}\exp\left(-t/T_{2,1}\right)+c_{2}\exp\left(-t/T_{2,2}\right)\) where \(c_{1},c_{2},T_{2,1},T_{2,2}\) could be any real number. As a consequence, these singular vector curves cannot necessarily fit a _decaying_ biexponential model. We infer from this that the first layer seems to learn the general signal model rather than training examples themselves.
Figure 5. Descrambling the first layer of DEERNet reveals interlocking wave patterns, hinting at the transformation underlying this weight matrix.
Figure 6. Visualizing the three right singular vectors corresponding to the largest singular values for the first weight matrix in (ND, Reg) networks for SNR = 100, 30, 1. We find that these vectors can be fit with a (not necessarily decaying) biexponential model. In fact, the singular vectors themselves are split along the 64th entry (akin to the training data), with a different biexponential curve being learned for each half. The curves learned for the noisy half vs. the smooth half are markedly different in shape, enabling us to better understand why (ND, Reg) generalizes better than (ND, ND).
Figure 7. Visualizing the three right singular vectors corresponding to the largest singular values for the first weight matrix in (ND, ND) networks for SNR = 100, 30, 1. We witness a similar pattern to the (ND, Reg) networks, where the singular vectors can be fit with a biexponential model, concluding that singular vector model learning is truly a consequence of the data and not the concatenation procedure. Clearly, since both halves of the data are the same, the singular vectors on both halves of the data are also nearly identical. Thus the signal model library learned by the weights of the NN is less diverse, and may be responsible for the lower test accuracy of the NN.
Figure 8. We also descrambled the first layer of an SNR 10, (ND, Reg) network with its input data. However, unlike what was done for DEERNet, we were not able to discern the underlying transformation of this layer from its Fourier signature, or from the descrambled singular vectors. In fact, after descrambling with data only from the noise track we found the descrambled matrices much more interpretable; Theorem 1 explains why: we were actually visualizing the right singular vectors, which turn out to be samples of a biexponential model.
2301.02780 | Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph
Matching | The success of graph neural networks (GNNs) provokes the question about
explainability: ``Which fraction of the input graph is the most determinant of
the prediction?'' Particularly, parametric explainers prevail in existing
approaches because of their more robust capability to decipher the black-box
(i.e., target GNNs). In this paper, based on the observation that graphs
typically share some common motif patterns, we propose a novel non-parametric
subgraph matching framework, dubbed MatchExplainer, to explore explanatory
subgraphs. It couples the target graph with other counterpart instances and
identifies the most crucial joint substructure by minimizing the node
corresponding-based distance. Moreover, we note that present graph sampling or
node-dropping methods usually suffer from the false positive sampling problem.
To alleviate this issue, we designed a new augmentation paradigm named
MatchDrop. It takes advantage of MatchExplainer to fix the most informative
portion of the graph and merely operates graph augmentations on the rest less
informative part. Extensive experiments on synthetic and real-world datasets
show the effectiveness of our MatchExplainer by outperforming all
state-of-the-art parametric baselines with significant margins. Results also
demonstrate that MatchDrop is a general scheme to be equipped with GNNs for
enhanced performance. The code is available at:
https://github.com/smiles724/MatchExplainer. | Fang Wu, Siyuan Li, Xurui Jin, Yinghui Jiang, Dragomir Radev, Zhangming Niu, Stan Z. Li | 2023-01-07T05:14:45Z | http://arxiv.org/abs/2301.02780v2 | # Explaining Graph Neural Networks via Non-parametric Subgraph Matching
###### Abstract
The great success of graph neural networks (GNNs) provokes the question about explainability: _"Which fraction of the input graph is the most determinant to the prediction?"_ Particularly, parametric explainers prevail in existing approaches because of their stronger capability to decipher the black-box (i.e., the target GNN). In this paper, based on the observation that graphs typically share some joint motif patterns, we propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs. It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node corresponding-based distance. Moreover, we note that present graph sampling or node-dropping methods usually suffer from the false positive sampling problem. To ameliorate that issue, we design a new augmentation paradigm named MatchDrop. It takes advantage of MatchExplainer to fix the most informative portion of the graph and merely operates graph augmentations on the remaining, less informative part. We conduct extensive experiments on both synthetic and real-world datasets and show the effectiveness of our MatchExplainer by outperforming all parametric baselines with significant margins. Additional results also demonstrate that our MatchDrop is a general scheme to be equipped with GNNs for enhanced performance.
## 1 Introduction
A key observation is that some subgraph patterns are shared by different groups of graphs, which can be the key to deciphering the decisions of GNNs. These frequently occurring motifs contain rich semantic meanings and indicate the characteristics of the whole graph instance (Henderson et al., 2012; Zhang et al., 2020; Banjade et al., 2021; Wu et al., 2023). For example, the hydroxide group (-OH) in small molecules typically results in higher water solubility, and a carboxyl group (-COOH) usually contributes to better stability and higher boiling points. Besides that, the pivotal role of functional groups has also been proven in protein structure prediction (Senior et al., 2020).
Inspired by this observation, we propose to mine the explanatory motif in a subgraph matching manner and design a novel non-parametric algorithm dubbed MatchExplainer, whose workflow is depicted in Fig. 1. For each pair of graphs, our MatchExplainer endeavors to explore the most crucial joint substructure by minimizing their node corresponding-based distance in the high-dimensional feature space. It then pairs the target graph iteratively with counterpart graphs from the reference set to seek potential explanatory subgraphs. Consequently, unlike traditional explainers, the candidate explanation produced by MatchExplainer can be non-unique for the same target graph instance.
Taking a step further, we leverage the metric of mutual information from information theory to analyze the working principle of our MatchExplainer. To be specific, we define the explanation that contains all shared information between paired graphs as the _sufficient explanation_, while the explanation that contains the shared and eliminates the non-shared information is the _minimal sufficient explanation_. We prove that the minimal sufficient explanation can be used to approximate the desired ground truth explanation with a theoretical guarantee. This strong relationship also provides a perspective for us to single out the best-case substructure from all candidate explanatory subgraphs. To be precise, we propose to optimize the final candidate explanations by maximizing the difference in the prediction after the explanatory subgraph is removed from the original graph.
Last but not least, we exhibit a bonus application of MatchExplainer: enhancing traditional graph augmentation methods. Though exhibiting strong power in preventing over-fitting and over-smoothing, present graph sampling or node-dropping mechanisms suffer from the false positive sampling problem. That is, nodes or edges of the most informative substructure are accidentally dropped or erased, but the model is still required to forecast the original property, which can be misleading. To alleviate this obstacle, we take advantage of MatchExplainer and introduce a simple technique called MatchDrop. Specifically, it first digs out the explanatory subgraph by means of MatchExplainer and keeps this part unchanged. Then the graph sampling or node dropping is implemented solely on the remaining less informative part. As a consequence, the core fraction of the input graph that reveals the label information is not affected, and the false positive sampling issue is effectively mitigated.
To summarize, to the best of our knowledge, we are the first to investigate the explainability of GNNs from the perspective of non-parametric subgraph matching. Extensive experiments on synthetic and real-world applications demonstrate that our MatchExplainer can find explanatory subgraphs quickly and accurately with state-of-the-art performance. Additionally, we empirically show that our MatchDrop, a pragmatic application of MatchExplainer, can serve as an efficient way to improve conventional graph augmentation methods.
## 2 Preliminary and Task Description
In this section, we begin with the description of the task of GNN explanation and then briefly review the relevant background of graph matching and graph similarity learning (GSL). Throughout this paper, an upper-case letter like \(\mathcal{G}\) denotes random variables, while lower-case letters like \(g\) denote deterministic values of variables.
Explanations for GNNs.Let \(h_{Y}:\mathcal{G}\rightarrow\mathcal{Y}\) denote the well-trained GNN to be explained, which gives the prediction \(\hat{Y}\) to approximate the ground truth \(Y\). Without loss of generality, we consider the problem of explaining a graph classification task. Our goal is to find an explainer \(h_{S}:\mathcal{G}\rightarrow\mathcal{G}_{\mathcal{S}}\) that discovers the subgraph \(\mathcal{G}_{S}\) from input graph \(\mathcal{G}\) as:
\[\min_{h_{S}}\mathcal{R}(h_{Y}\circ h_{S}(\mathcal{G}),\hat{Y}),\;\text{s.t.}\;|h_{S}(\mathcal{G})|\leq K, \tag{1}\]
where \(\mathcal{R}(.)\) is the risk function such as a cross-entropy loss or a mean squared error (MSE) loss, and \(K\) is a constraint on the size of \(\mathcal{G}_{S}\) to attain a compact explanation. That is, \(\mathcal{G}_{S}\) has at most \(K\) nodes.
Figure 1: The illustration of our proposed MatchExplainer. The explanation \(\mathcal{G}_{S}\) is attained via subgraph matching between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\), where we minimize the accumulated node-to-node distance in the high-dimensional feature space in a greedy search manner. Since several \(\mathcal{G}_{S}\) can be obtained by matching \(\mathcal{G}\) to different counterpart graphs \(\mathcal{G}^{\prime}\) from the reference set \(\mathcal{D}_{\mathcal{G}}\), we seek to find the optimal one by maximizing Equ. 10.
Graph matching.As a classic combinatorial problem, graph matching is known to be NP-hard in general (Loiola et al., 2007). It typically requires expensive, complex, and impractical solvers, leading to inexact solutions (Wang et al., 2020). Given two different graphs \(\mathcal{G}_{1}=(\mathcal{V}_{1},\mathcal{E}_{1})\) and \(\mathcal{G}_{2}=(\mathcal{V}_{2},\mathcal{E}_{2})\) with \(N_{1}\) and \(N_{2}\) nodes respectively, the matching between them can be generally expressed in the quadratic assignment programming (QAP) form as (Wang et al., 2019):
\[\min_{\mathbf{T}\in\{0,1\}^{N_{1}\times N_{2}}}\operatorname{vec}(\mathbf{T})^{\top}\mathbf{K}\operatorname{vec}(\mathbf{T}),\;\text{s.t.}\;\mathbf{T}\mathbf{1}=\mathbf{1},\,\mathbf{T}^{\top}\mathbf{1}=\mathbf{1}, \tag{2}\]
where \(\mathbf{T}\) is a binary permutation matrix encoding the node correspondence, and \(\mathbf{1}\) denotes a column vector with all elements to be one. \(\mathbf{K}\) is the so-called affinity matrix (Leordeanu & Hebert, 2005), whose elements encode the node-to-node and edge-to-edge affinity between \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\).
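To make the QAP form in Equ. 2 concrete, here is a minimal Python sketch (our own illustration) that solves it by brute force over permutation matrices; it is feasible only for tiny graphs, which is precisely why practical graph matching relies on approximate solvers:

```python
import itertools
import numpy as np

def qap_brute_force(K: np.ndarray, n: int):
    """Exhaustively minimize vec(T)^T K vec(T) over n x n permutation
    matrices T (so N1 = N2 = n here); O(n!) and thus tiny-n only."""
    best_perm, best_val = None, np.inf
    for perm in itertools.permutations(range(n)):
        T = np.eye(n)[list(perm)]          # permutation matrix for `perm`
        v = T.flatten(order="F")           # column-major vectorization vec(T)
        val = float(v @ K @ v)
        if val < best_val:
            best_perm, best_val = perm, val
    return best_perm, best_val

# Toy usage: a random symmetric affinity matrix for n = 4 (K is 16 x 16).
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n * n, n * n))
print(qap_brute_force(A + A.T, n))
```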
Graph similarity learning.GSL is a general framework for graph representation learning that requires reasoning about the structures and semantics of graphs (Li et al., 2019). The goal is to produce a similarity score \(s(\mathcal{G}_{1},\mathcal{G}_{2})\) between two graphs. This similarity \(s(.,.)\) is typically defined by either exact matches for full-graph or sub-graph isomorphism (Berretti et al., 2001; Shasha et al., 2002), or some measure of structural similarity such as the graph edit distance (Willett et al., 1998; Raymond et al., 2002). In our setting, \(s(.,.)\) depends entirely on whether the two graphs belong to the same category or share very close properties. Then, for \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) of the same type, GSL seeks to maximize the mutual information between their representations with the joint distribution \(p(\mathcal{G}_{1},\mathcal{G}_{2})\) as:
\[\max_{f_{1},f_{2}}I(f_{1}(\mathcal{G}_{1}),f_{2}(\mathcal{G}_{2}),T), \tag{3}\]
where \(f_{1}\) and \(f_{2}\) are encoding functions. They can share the same parameters (i.e., \(f_{1}=f_{2}\)) or be combined into one architecture. \(T\) is the random variable that stands for the information required for a specific task, which is independent of the model selection.
## 3 The MatchExplainer Approach
The majority of recent approaches lean on parametric networks to interpret GNNs, and some early methods for GNN explanations are based on local explainability and a single-graph view (Ying et al., 2019; Baldassarre & Azizpour, 2019; Pope et al., 2019; Schwab & Karlen, 2019). Despite this inclination, we argue that a non-parametric, graph-to-graph fashion can also excavate important subgraphs and may lead to better explainability. In this work, we introduce MatchExplainer to explain GNNs by identifying the jointly essential substructures by means of subgraph matching (see Algorithm 1).
### Theoretical Analysis of MatchExplainer
From the perspective of probability theory and information theory, Equ. 1 is equivalent to maximizing the mutual information between the input graph \(\mathcal{G}\) and the subgraph \(\mathcal{G}_{S}\) in the context of \(h_{Y}\). Namely, the goal of an explainer is to derive a small subgraph \(\mathcal{G}_{S}\) such that:
\[\max_{\mathcal{G}_{S}\subset\mathcal{G},|\mathcal{G}_{S}|\leq K}I(\mathcal{G }_{S},T_{h}), \tag{4}\]
where \(I(.)\) refers to the Shannon mutual information of two random variables. Unlike \(T\) which is model-agnostic, \(T_{h}\) represents the knowledge learned by the GNN predictor \(h_{Y}\) in a concrete downstream task. Notably, instead of merely optimizing the information hidden in \(\mathcal{G}_{S}\), another line of research (Yuan et al., 2021) seeks to reduce the mutual information between the remaining subgraph \(\mathcal{G}-\mathcal{G}_{S}\) and the original one \(\mathcal{G}\) as:
\[\min_{\mathcal{G}_{S}\subset\mathcal{G},|\mathcal{G}_{S}|\leq K}I(\mathcal{G }-\mathcal{G}_{S},T_{h}). \tag{5}\]
As an approximation of directly optimizing Equ. 4, the core idea of MatchExplainer is to fetch another graph \(\mathcal{G}^{\prime}\) that shares the same predicted property as \(\mathcal{G}\) (_i.e._, \(h_{Y}(\mathcal{G})=h_{Y}(\mathcal{G}^{\prime})\)) and then extract the most relevant part between them as the explanations. To be specific, we aim to search for the best counterpart \(\mathcal{G}^{\prime}\) so that the mutual information between the input graph \(\mathcal{G}\) and the subgraph \(\mathcal{G}_{S}\) is maximized as:
\[\max_{\mathcal{G}^{\prime}\in\mathcal{D}_{\mathcal{G}},\mathcal{G}^{\prime}\neq\mathcal{G}}\left[\max_{\mathcal{G}_{S}\subset\mathcal{G},|\mathcal{G}_{S}|\leq K}I(\mathcal{G}_{S},\mathcal{G}^{\prime},T_{h})\right], \tag{6}\]
where \(\mathcal{D}_{\mathcal{G}}\) denotes the reference set consisting of all available graphs, and \(\mathcal{G}_{S}\) is obtained by subgraph matching between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\). Similar to the information bottleneck theory (Tishby & Zaslavsky, 2015; Achille & Soatto, 2018) in supervised learning, we can define the sufficient explanation and minimal sufficient explanation of \(\mathcal{G}\) with its counterpart \(\mathcal{G}^{\prime}\neq\mathcal{G}\) in the context of subgraph matching.
**Definition 3.1** (Sufficient Explanation).: Given \(\mathcal{G}^{\prime}\), the explanation \(\mathcal{G}_{S}^{suf}\) of \(\mathcal{G}\) is sufficient if and only if \(I(\mathcal{G}_{S}^{suf},\mathcal{G}^{\prime},T_{h})=I(\mathcal{G},\mathcal{G}^{\prime},T_{h})\).
The sufficient explanation \(\mathcal{G}_{S}^{suf}\) of \(\mathcal{G}\) keeps all joint information with \(\mathcal{G}^{\prime}\) related to the learned information \(T_{h}\). In other words, \(\mathcal{G}_{S}^{suf}\) contains all the shared information between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\). Symmetrically, the sufficient explanation \(\mathcal{G}_{S}^{\prime\,suf}\) of \(\mathcal{G}^{\prime}\) satisfies \(I(\mathcal{G}_{S}^{\prime\,suf},\mathcal{G},T_{h})=I(\mathcal{G},\mathcal{G}^{\prime},T_{h})\).
**Definition 3.2** (Minimal Sufficient Explanation).: Given \(\mathcal{G}^{\prime}\), the sufficient explanation \(\mathcal{G}_{S}^{min}\) of \(\mathcal{G}\) is minimal if and only if \(I(\mathcal{G}_{S}^{min},\mathcal{G},T_{h})\leq I(\mathcal{G}_{S}^{suf},\mathcal{G},T_{h})\) for any sufficient explanation \(\mathcal{G}_{S}^{suf}\).
Among all sufficient explanations, the minimal sufficient explanation \(\mathcal{G}_{S}^{min}\) contains the least information about \(\mathcal{G}\) with regard to the learned knowledge \(T_{h}\). Normally, it is assumed that \(\mathcal{G}_{S}^{min}\) only maintains the shared information between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) and eliminates the other, non-shared information, i.e., \(I(\mathcal{G}_{S}^{min},\mathcal{G}|\mathcal{G}^{\prime})=0\).
**Theorem 3.3** (Task Relevant Information in Explanations).: _(Wang et al., 2022) Given \(\mathcal{G}^{\prime}\), the minimal sufficient explanation \(\mathcal{G}_{S}^{min}\) contains less task-relevant information learned by \(h_{Y}\) from input \(\mathcal{G}\) than any other sufficient explanation \(\mathcal{G}_{S}^{suf}\). Formally, we have:_
\[\begin{split} I(\mathcal{G},T_{h})&=I(\mathcal{G}_ {S}^{min},T_{h})+I(\mathcal{G},T_{h}|\mathcal{G}^{\prime})\\ &\geq I(\mathcal{G}_{S}^{suf},T_{h})=I(\mathcal{G}_{S}^{min},T_{ h})+I(\mathcal{G}_{S}^{suf},\mathcal{G},T_{h}|\mathcal{G}^{\prime})\\ &\geq I(\mathcal{G}_{S}^{min},T_{h}).\end{split} \tag{7}\]
Theorem 3.3 indicates that the mutual information between \(\mathcal{G}\) and \(T_{h}\) can be divided into two fractions. One is \(I(\mathcal{G}_{S}^{min},T_{h})\), determined by the interaction between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) associated with the learned knowledge \(T_{h}\). The other is determined by the disjoint structure of \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) with respect to the learned information \(T_{h}\). Our subgraph matching is committed to maximizing \(I(\mathcal{G}_{S}^{min},T_{h})\), which is the lower bound of \(I(\mathcal{G},T_{h})\). Notably, \(I(\mathcal{G},T_{h}|\mathcal{G}^{\prime})\) is not completely independent of \(I(\mathcal{G}_{S}^{min},T_{h})\), but is instead the offset of \(I(\mathcal{G}_{S}^{min},T_{h})\) to \(I(\mathcal{G},T_{h})\). Hence, if we increase \(I(\mathcal{G}_{S}^{min},T_{h})\), then \(I(\mathcal{G},T_{h}|\mathcal{G}^{\prime})\) decreases simultaneously. Consequently, \(I(\mathcal{G}_{S}^{min},T_{h})\) can be used not only to improve the lower bound of \(I(\mathcal{G},T_{h})\) but also to approximate \(I(\mathcal{G},T_{h})\), which is exactly our final explanatory objective. This provides a firm theoretical foundation for our MatchExplainer to mine the most explanatory substructure via the subgraph matching approach.
### Non-parametric Subgraph Exploration
Preamble.It is remarkable that our excavation of explanations through subgraph matching differs significantly from either graph matching or GSL. On the one hand, graph matching algorithms (Zanfir and Sminchisescu, 2018; Sarlin et al., 2020; Wang et al., 2020, 2021) typically establish node correspondence from a whole graph \(\mathcal{G}_{1}\) to another whole graph \(\mathcal{G}_{2}\). However, we seek to construct a partial node correspondence between a subgraph of \(\mathcal{G}_{1}\) and a subgraph of \(\mathcal{G}_{2}\). On the other hand, GSL concentrates on the graph representations encoded by \(f_{1}\) and \(f_{2}\), as well as the ground truth information \(T\), rather than the information \(T_{h}\) learned by the GNN predictor \(h_{Y}\).
Besides, most existing graph matching architectures (Zanfir and Sminchisescu, 2018; Li et al., 2019; Wang et al., 2020; Papakis et al., 2020; Liu et al., 2021) are deep learning-based. They utilize a network to forecast the relationship between nodes or graphs, which has several flaws. For instance, the network needs tremendous computational resources to be trained. More importantly, its effectiveness is unreliable and may fail in certain circumstances if the network is not delicately designed. To overcome these limitations, we employ a non-parametric subgraph matching paradigm, which is totally training-free and fast in exploring the most informative joint substructure shared by any pair of input instances.
Subgraph matching framework.We break the target GNN \(h_{Y}\) into two consecutive parts: \(h_{Y}=\phi_{G}\circ\phi_{X}\), where \(\phi_{G}\) is the aggregator to compute the graph-level representation and predict the properties, and \(\phi_{X}\) is the feature function to update both the node and edge features. For a given graph \(\mathcal{G}\) with node features \(\mathbf{h}_{i}\in\mathbb{R}^{\psi_{v}},\forall i\in\mathcal{V}\) and edge features \(\mathbf{e}_{ij}\in\mathbb{R}^{\psi_{e}},\forall(i,j)\in\mathcal{E}\), the renewed output is calculated as \(\{\mathbf{h}_{i}^{\prime}\}_{i\in\mathcal{V}},\{\mathbf{e}_{ij}^{\prime}\}_{(i, j)\in\mathcal{E}}=\phi_{X}\left(\{\mathbf{h}_{i}\}_{i\in\mathcal{V}},\{\mathbf{e}_{ij}\}_{(i, j)\in\mathcal{E}}\right)\), which is forwarded into \(\phi_{G}\) afterwards.
Our target is to find subgraphs \(\mathcal{G}_{S}\subset\mathcal{G}\) and \(\mathcal{G}_{S}^{\prime}\subset\mathcal{G}^{\prime}\) both with \(K\) nodes to maximize \(I(\mathcal{G}_{S},\mathcal{G}_{S}^{\prime},T_{h})\). There we utilize the node correspondence-based distance \(d_{G}\) as a substitution for measuring \(I(\mathcal{G}_{S},\mathcal{G}_{S}^{\prime},T_{h})\), the shared learned information between \(\mathcal{G}_{S}\) and \(\mathcal{G}_{S}^{\prime}\). Then given a pair of \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\), \(d_{G}\) is defined and minimized as follows:
\[\begin{split}\min_{\mathcal{G}_{S}\subset\mathcal{G},\mathcal{G}_{ S}^{\prime}\subset\mathcal{G}^{\prime}}d_{G}(\mathcal{G}_{S},\mathcal{G}_{S}^{ \prime})=\\ \min_{\mathcal{G}_{S}\subset\mathcal{G},\mathcal{G}_{S}^{\prime} \subset\mathcal{G}^{\prime}}\left(\min_{\mathbf{T}\in\Pi(\mathcal{G}_{S}, \mathcal{G}_{S}^{\prime})}\left\langle\mathbf{T},\mathbf{D}^{\phi_{X}}\right \rangle\right),\end{split} \tag{8}\]
where \(\mathbf{D}^{\phi_{X}}\) is the matrix of all pairwise distances between node features of \(\mathcal{G}_{S}\) and \(\mathcal{G}_{S}^{\prime}\). Its element is calculated as \(\mathbf{D}_{ij}^{\phi_{X}}=d_{X}(\mathbf{h}_{i}^{\prime},\mathbf{h}_{j}^{ \prime})\ \forall i\in\mathcal{V},j\in\mathcal{V}^{\prime}\), where \(d_{X}\) is the standard vector space similarity such as the Euclidean distance and the Hamming distance. The inner optimization is conducted over \(\Pi(.,.)\), which is the set of all matrices with prescribed margins defined as:
\[\Pi(\mathcal{G}_{S},\mathcal{G}_{S}^{\prime})=\left\{\mathbf{T}\in\{0,1\}^{K \times K}\,|\,\mathbf{T}\mathbf{1}=\mathbf{1},\,\mathbf{T}^{T}\mathbf{1}= \mathbf{1}\right\}. \tag{9}\]
Due to the NP-hard nature of graph matching (Loiola et al., 2007), we adopt a greedy strategy to optimize \(d_{G}(\mathcal{G}_{S},\mathcal{G}_{S}^{\prime})\) and attain the subgraph \(\mathcal{G}_{S}\). It is worth noting that the greedy algorithm is not guaranteed to reach the globally optimal solution (Bang-Jensen et al., 2004), but it can yield locally optimal solutions in a reasonable amount of time with a complexity of \(O(K)\).
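A minimal numpy sketch of this greedy step is shown below; it performs a global greedy one-to-one assignment on the feature distance matrix \(\mathbf{D}^{\phi_{X}}\), which is one plausible instantiation of the greedy strategy rather than the exact procedure of Algorithm 1:

```python
import numpy as np

def greedy_match(H: np.ndarray, H_prime: np.ndarray, K: int):
    """Greedy one-to-one node matching between two graphs, approximately
    minimizing the accumulated Euclidean distance of matched node features."""
    # Pairwise feature distances D^{phi_X}: shape (n1, n2).
    D = np.linalg.norm(H[:, None, :] - H_prime[None, :, :], axis=-1)
    pairs = []
    for _ in range(K):
        i, j = np.unravel_index(np.argmin(D), D.shape)
        pairs.append((int(i), int(j)))
        D[i, :] = np.inf   # each node is matched at most once
        D[:, j] = np.inf
    return pairs

# Toy usage with random 16-dim node features from two graphs.
rng = np.random.default_rng(0)
print(greedy_match(rng.normal(size=(10, 16)), rng.normal(size=(12, 16)), K=5))
```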
After that, we feed \(\mathcal{G}_{S}\) into \(h_{Y}\) and examine its correctness. If \(h_{Y}(\mathcal{G}_{S})=h_{Y}(\mathcal{G})\), then \(\mathcal{G}_{S}\) is regarded as the candidate explanation. Otherwise, \(\mathcal{G}_{S}\) is abandoned since it cannot recover the information required by \(h_{Y}\) to predict \(\mathcal{G}\).
Non-uniqueness of GNN explanations.Unlike prior learning-based GNN explanation methods (Vu and Thai, 2020; Wang et al., 2021, 2022) that generate a unique subgraph \(\mathcal{G}_{S}\) for \(\mathcal{G}\), our selection of \(\mathcal{G}_{S}\) varies according to
the choice of the counterpart \(\mathcal{G}^{\prime}\in\mathcal{D}_{\mathcal{G}}\). Therefore, MatchExplainer can provide many-to-one explanations for a single graph \(\mathcal{G}\) once a set of counterparts is given. This offers a new understanding that the determinants of GNNs' predictions are non-unique, and GNNs can gain correct predictions based on several different explanatory subgraphs of the same size.
Optimization of GNN Explanations.Since our MatchExplainer is able to discover a variety of possible explanatory subgraphs, how to screen out the most informative one becomes a critical issue. As indicated in Theorem 3.3, \(I(\mathcal{G}^{min}_{S},T_{h})\) is the lower bound of \(I(\mathcal{G},T_{h})\), and their difference \(I(\mathcal{G},T_{h}|\mathcal{G}^{\prime})\) entirely depends on the selection of the matching counterpart \(\mathcal{G}^{\prime}\). Ideally, \(\mathcal{G}^{\prime}\) ought to share exactly the same explanatory substructure with \(\mathcal{G}\), i.e., \(\mathcal{G}_{S}=\mathcal{G}^{\prime}_{S}\). Meanwhile, \(\mathcal{G}\) conditioned on \(\mathcal{G}^{\prime}\) should be independent of the learned knowledge \(T_{h}\), i.e., \(I(\mathcal{G},T_{h}|\mathcal{G}^{\prime})=0\). Therefore, there are two distinct principles for selecting the counterpart graphs.
The first principle is to seek \(\mathcal{G}^{\prime}\) whose explanatory subgraph is as close as possible to that of \(\mathcal{G}\). The second is to ensure that \(\mathcal{G}\) conditioned on \(\mathcal{G}^{\prime}\) maintains little information relevant to the learned information \(T_{h}\). Nevertheless, without sufficient domain knowledge regarding which substructure is majorly responsible for the graph property, it would be impossible for us to manually select the counterpart graph \(\mathcal{G}^{\prime}\) that satisfies \(\mathcal{G}_{S}\approx\mathcal{G}^{\prime}_{S}\).
As a remedy, we consider optimizing an opposite objective described in Equ. 5. That is, we desire to minimize the intersection between \(\mathcal{G}-\mathcal{G}_{S}\) and \(T_{h}\), _i.e._, \(I(\mathcal{G}-\mathcal{G}_{S},T_{h})\). Towards this goal, we remove the extracted subgraph \(\mathcal{G}_{S}\) from \(\mathcal{G}\) and aspire to confuse GNNs' predictions on the remaining part \(\mathcal{G}-\mathcal{G}_{S}\). Mathematically, the optimal \(\mathcal{G}^{\prime}\) maximizes the difference between the prediction of the whole graph and the prediction of the graph that is subtracted by \(\mathcal{G}_{S}\). In other words, we wish to retrieve the best explanation \(\mathcal{G}^{+}_{S}\) via:
\[\max_{\mathcal{G}^{\prime}\in\mathcal{D}_{S},\mathcal{G}^{\prime} \neq\mathcal{G}} \Delta_{\mathcal{G}}(\mathcal{G}^{\prime},h_{Y})= \tag{10}\] \[\max_{\mathcal{G}^{\prime}\in\mathcal{D}_{S},\mathcal{G}^{\prime} \neq\mathcal{G}}\left[h_{Y}^{c^{*}}(\mathcal{G})-h_{Y}^{c^{*}}(\mathcal{G}- \mathcal{G}_{S})\right],\]
where \(c^{*}\) is the ground truth class of \(\mathcal{G}\) and \(\mathcal{G}_{S}\) is the substructure via subgraph matching with \(\mathcal{G}^{\prime}\). \(\mathcal{D}_{\mathcal{S}}\) is the candidate subgraph set.
To summarize, given any graph \(\mathcal{G}\) and a reference graph set \(\mathcal{D}_{\mathcal{G}}\), we first acquire all possible subgraphs via matching \(\mathcal{G}\) to available counterparts in \(\mathcal{D}_{\mathcal{G}}\). After the pairwise subgraph matching, we calculate their corresponding \(\Delta_{\mathcal{G}}(.,h_{Y})\) and pick the one that leads to the largest \(\Delta_{\mathcal{G}}(.,h_{Y})\) as the optimal counterpart graph. Notably, not all graphs in \(\mathcal{D}_{\mathcal{G}}\) are qualified counterparts, and there are several intuitive conditions that \(\mathcal{G}^{\prime}\) has to satisfy. First, \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) should belong to the same category predicted by \(h_{Y}\). Besides, \(\mathcal{G}^{\prime}\) needs to have at least \(K\) nodes; otherwise, \(\mathcal{G}_{S}\) would be smaller than the given constrained size.
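To make this selection loop concrete, the following is a minimal Python sketch of the procedure above (Equ. 10). The helpers `predict_proba`, `subgraph_match`, `remove`, and `num_nodes` are hypothetical placeholders standing in for the target GNN \(h_{Y}\) and the matching backend; they are not part of the original method's code.

```python
# Minimal sketch of counterpart selection, under the stated assumptions.
def best_explanation(G, reference_set, c_star, K,
                     predict_proba, subgraph_match, remove, num_nodes):
    best_G_S, best_gap = None, float("-inf")
    for G_prime in reference_set:
        # G' must be a different graph with at least K nodes
        if G_prime is G or num_nodes(G_prime) < K:
            continue
        G_S = subgraph_match(G, G_prime, K)
        # Equ. (10): gap between the class-c* probability on the full graph
        # and on the graph with the candidate explanation removed
        gap = predict_proba(G)[c_star] - predict_proba(remove(G, G_S))[c_star]
        if gap > best_gap:
            best_G_S, best_gap = G_S, gap
    return best_G_S
```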
Effectiveness vs. efficiency. Time complexity is an important consideration when evaluating the practicability of explainers. For our MatchExplainer, the size of the reference set, _i.e._, \(|\mathcal{D}_{\mathcal{G}}|\), plays a vital role in determining the time cost, since the total time cost is \(O(K|\mathcal{D}_{\mathcal{G}}|)\). However, a limited number of counterpart graphs can also prevent it from exploring better explanatory subgraphs. Thus, it is non-trivial to balance the effectiveness and efficiency of MatchExplainer by choosing an appropriate size of \(\mathcal{D}_{\mathcal{G}}\).
## 4 The MatchDrop Methodology
Preventing the false positive sampling. Deep graph learning faces unique challenges such as feature data incompleteness, structural data sparsity, and over-smoothing. To address these issues, a growing number of data augmentation techniques (Hamilton et al., 2017; Rong et al., 2019) have been proposed in the graph domain and have shown promising outcomes. Among them, graph sampling and node dropping (Feng et al., 2020; Xu et al., 2021) are two commonly used mechanisms. However, most previous approaches are completely randomized, resulting in false positive sampling and injecting spurious information into the training process. For instance, _1,3-dinitrobenzene_ (C\({}_{6}\)H\({}_{4}\)N\({}_{2}\)O\({}_{4}\)) is a mutagenic molecule whose explanation is the NO\({}_{2}\) groups (Debnath et al., 1991). If any edge or node of an NO\({}_{2}\) group is accidentally dropped or destroyed, the mutagenicity property no longer holds, and assigning the original label to the sampled molecular graph will misguide the GNN.
To tackle this drawback, recall that our MatchExplainer offers a convenient way to discover the most essential part of a given graph. It is natural to keep this crucial portion unchanged and only drop nodes or edges in the remaining portion. Based on this idea, we propose a simple but effective method dubbed MatchDrop, which keeps the most informative part of graphs found by our MatchExplainer and alters the less informative part (see Figure 2).
The procedure of our MatchDrop is as follows. To begin with, we train a GNN \(h_{Y}\) for several epochs until it converges to an acceptable accuracy, which guarantees the effectiveness of the subsequent subgraph selection. Then, for each graph \(\mathcal{G}\) in the training set \(\mathcal{D}_{\text{train}}\), we randomly select another graph \(\mathcal{G}^{\prime}\in\mathcal{D}_{\text{train}}\) of the same class as the counterpart graph. Afterwards, we extract its subgraph \(\mathcal{G}_{S}\) via MatchExplainer with a retaining ratio \(\rho\) (i.e., \(|\mathcal{G}_{S}|=\rho|\mathcal{G}|\)) and use it as the model input to train \(h_{Y}\).
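A minimal sketch of one MatchDrop training epoch is given below. The helpers `match_explainer` and `train_step` are hypothetical placeholders for the subgraph matching step and a single supervised update with the cross-entropy loss described below.

```python
# Minimal sketch of a MatchDrop epoch, under the stated assumptions.
import random
from collections import defaultdict

def matchdrop_epoch(model, train_set, rho, match_explainer, train_step):
    by_class = defaultdict(list)
    for G, y in train_set:
        by_class[y].append(G)
    for G, y in train_set:
        candidates = [g for g in by_class[y] if g is not G]
        if not candidates:
            continue  # no same-class counterpart available
        G_prime = random.choice(candidates)            # same-class counterpart
        G_S = match_explainer(model, G, G_prime, rho)  # keep informative part
        train_step(model, G_S, y)                      # update on the kept subgraph
```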
Notably, similar to typical image augmentation techniques such as rotation and flipping (Shorten and Khoshgoftaar, 2019), MatchDrop is a novel data augmentation technique for GNN training. However, instead of augmenting \(\mathcal{G}\) randomly, MatchDrop preserves the most informative part and only alters the less important substructure. This significantly reduces the possibility of false positive sampling. Additionally, unlike other learnable mechanisms for inspecting subgraphs, our MatchDrop is entirely parameter-free and can therefore be deployed at any stage of the training period.
Training objective. The training of GNNs is supervised by the cross-entropy (CE) loss. Suppose there are \(M\) classes in total; then the loss takes the following form:
\[\mathcal{L}_{S}=-\frac{1}{|\mathcal{D}_{\text{train}}|}\sum_{\mathcal{G}\in\mathcal{D}_{\text{train}}}\sum_{c=1}^{M}Y_{\mathcal{G}}^{c}\log\left(h_{Y}^{c}\left(h_{S}(\mathcal{G},\rho)\right)\right), \tag{11}\]
where \(h_{Y}^{c}(.)\) indicates the predicted probability of \(\mathcal{G}_{S}\) being of class \(c\) and \(Y_{\mathcal{G}}^{c}\) is the ground truth indicator. \(h_{S}\) employs MatchExplainer to mine the subgraph \(\mathcal{G}_{S}\) by matching \(\mathcal{G}\) to a randomly selected counterpart graph \(\mathcal{G}^{\prime}\) in the training set \(\mathcal{D}_{\text{train}}\) with a pre-defined ratio \(\rho\).
## 5 Experimental Analysis
### Datasets and Experimental Settings
Following Wang et al. (2021), we use four standard datasets with various target GNNs.
* **Molecule graph classification**: MUTAG (Debnath et al., 1991; Kazius et al., 2005) is a molecular dataset for the graph classification problem. Each graph stands for a molecule, with nodes for atoms and edges for bonds. The labels are determined by the molecules' mutagenic effect on a bacterium. A well-trained Graph Isomorphism Network (GIN) (Xu et al., 2018) achieves approximately 82% testing accuracy.
* **Motif graph classification**: Wang et al. (2021) create a synthetic dataset, BA-3Motif, with 3000 graphs. They take advantage of Barabasi-Albert (BA) graphs as the base, and attach one of three motifs (house, cycle, grid) to each base. We train an ASAP model (Ranjan et al., 2020) that achieves 99.75% testing accuracy.
* **Handwriting graph classification**: Knyazev et al. (2019) transform the MNIST images into 70K superpixel graphs with at most 75 nodes per graph. The nodes are superpixels, and the edges are the spatial distances between them. There are 10 types of digits as labels. We adopt a Spline-based GNN (Fey et al., 2018) that achieves around 98% testing accuracy.
* **Scene graph classification**: Wang et al. (2021) select 4443 pairs of images and scene graphs from Visual Genome (Krishna et al., 2017) to construct the VG-5 dataset (Pope et al., 2019). Each graph is labeled with one of five categories: stadium, street, farm, surfing, and forest. The regions of objects are represented as nodes, while edges indicate the relationships between object nodes. We train an APPNP model (Klicpera et al., 2018) that reaches 61.9% testing accuracy.
We compare our MatchExplainer with several state-of-the-art and popular explanation baselines, which are listed below:
* **SA**(Baldassarre and Azizpour, 2019) directly uses the gradients of the model prediction with respect to the adjacency matrix of the input graph as the importance of edges.
* **Grad-CAM**(Selvaraju et al., 2017; Pope et al., 2019) uses the gradients of any target concept such as the motif in a graph flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the graph for predicting the concept.
* **GNNExplainer**(Ying et al., 2019) optimizes soft masks for edges and node features to maximize the mutual information between the original predictions and new predictions.
* **PGExplainer**(Luo et al., 2020) employs a parameterized model to decide whether an edge is important, which is trained over multiple explained instances with all edges.
* **PGM-Explainer**(Vu and Thai, 2020) collects the prediction change on the random node perturbations, and then learns a Bayesian network from these perturbation-prediction observations, so as to capture the dependencies among the nodes and the prediction.
* **ReFine**(Wang et al., 2021) exploits the pre-training and fine-tuning idea to develop a multi-grained GNN explainer. It has both a global understanding of model workings and local insights on specific instances.

Figure 2: Comparison between graph augmentation with and without MatchDrop.
As the ground-truth explanations are usually unknown, it is difficult to quantitatively evaluate the quality of explanations. Therefore, we follow Wang et al. (2021) and employ **the predictive accuracy (ACC@\(\mathbf{\rho}\))** and **Recall@\(\mathbf{N}\)** as the metrics. Specifically, ACC@\(\rho\) measures the fidelity of the explanatory subgraphs by forwarding them into the target model and examining how well it recovers the target prediction. ACC-AUC is reported as the area under the ACC curve over different selection ratios \(\rho\in\{0.1,0.2,...,1.0\}\). Recall@\(N\) is computed as \(\mathbb{E}_{\mathcal{G}}\left[\left|\mathcal{G}_{S}\cap\mathcal{G}_{S}^{*}\right|/\left|\mathcal{G}_{S}^{*}\right|\right]\), where \(\mathcal{G}_{S}^{*}\) is the ground-truth explanatory subgraph. Remarkably, Recall@\(N\) is only suitable for BA3-Motif, since this dataset is synthetic and the motifs are known in advance.
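For clarity, the two metrics can be sketched in a few lines of Python; `predict` and `explain` are hypothetical stand-ins for the target GNN and the explainer, and subgraphs are represented by their node sets.

```python
# Minimal sketch of ACC-AUC and Recall@N, under the stated assumptions.
def acc_auc(predict, explain, graphs):
    ratios = [r / 10 for r in range(1, 11)]        # rho in {0.1, ..., 1.0}
    accs = []
    for rho in ratios:
        hits = [predict(explain(G, rho)) == predict(G) for G in graphs]
        accs.append(sum(hits) / len(hits))         # ACC@rho: fidelity of G_S
    return sum(accs) / len(accs)                   # area under the ACC curve

def recall_at_n(pred_nodes, truth_nodes):
    # Recall@N = |G_S ∩ G_S*| / |G_S*| for a single graph
    return len(set(pred_nodes) & set(truth_nodes)) / len(truth_nodes)
```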
### Can MatchExplainer Find Better Explanatory Subgraphs?
Quantitative results. To investigate the effectiveness of MatchExplainer, we conduct broad experiments on four datasets, and the comparisons are reported in Table 1. For MUTAG, VG-5, and BA3-Motif, we use the whole training and validation data as the reference set. For MNIST, we randomly select 10% of the available samples as the reference set to speed up matching. It can be seen that MatchExplainer outperforms every baseline in all cases. Particularly, previous explainers fail to explain GNNs well on MNIST, with ACC-AUCs lower than 65%, but MatchExplainer can reach as high as 93.8%. If we instead use the whole training and validation data of MNIST as the reference, its ACC-AUC increases to 97.2%. This phenomenon demonstrates the advantage of subgraph matching in explaining GNNs when the dataset has clear patterns of explanatory subgraphs. Additionally, MatchExplainer also achieves significant relative improvements over the strongest baseline: 8.6% on VG-5 and 8.1% on BA3-Motif.
Furthermore, it is also worth noting that MatchExplainer achieves nearly 100% ACC-AUC on every task but BA-3Motif. For BA-3Motif, we find that its predictive accuracies are \([0.31,0.31,0.31,0.34,0.49,0.71,0.97,1.0,1.0,1.0]\) at the different selection ratios. This aligns with the fact that most motifs in this task occupy a large fraction of the whole graph. Once the selection ratio is greater than 0.7, MatchExplainer is capable of figuring out the correct explanatory subgraph.
Visualization. In addition, we visualize the explanations of MatchExplainer on MUTAG in Appendix B for qualitative evaluation. We also compare the efficiency of our MatchExplainer with other parametric methods in the appendix. MatchExplainer enjoys a competitively fast inference speed with no additional training cost, making large-scale deployment possible.
### Can MatchDrop Generally Improve the Performance of GNNs?
Implementations. We consider two backbones: GCN (Kipf and Welling, 2016) and GIN (Xu et al., 2018) with a depth of 6. Similar to Rong et al. (2019), we adopt a random hyper-parameter search for each architecture to enable more robust comparisons. Here, _DropNode_ stands for randomly sampling subgraphs, which can also be treated as a specific form of node dropping. False-positive drop (_FPDrop_) is the opposite operation of our MatchDrop: the subgraph sampling or node dropping is only performed in the explanatory subgraphs while the rest remains the same. We add FPDrop as a baseline to help unravel the reason why MatchDrop works. _PGDrop_ is similar to MatchDrop, but uses a fixed PGExplainer (Luo et al., 2020) to explore the informative substructure. The selection ratios \(\rho\) for FPDrop, PGDrop, and MatchDrop are all set to 0.95.

| Method | MUTAG (ACC-AUC) | VG-5 (ACC-AUC) | MNIST (ACC-AUC) | BA-3Motif (ACC-AUC) | BA-3Motif (Recall@\(N\)) |
|---|---|---|---|---|---|
| SA | 0.769 | 0.769 | 0.559 | 0.518 | 0.243 |
| Grad-CAM | 0.786 ± 0.011 | 0.909 ± 0.005 | 0.581 ± 0.009 | 0.533 ± 0.003 | 0.212 ± 0.002 |
| GNNExplainer | 0.895 ± 0.010 | 0.895 ± 0.003 | 0.535 ± 0.013 | 0.528 ± 0.005 | 0.157 ± 0.002 |
| PG-Explainer | 0.631 ± 0.008 | 0.790 ± 0.004 | 0.504 ± 0.010 | 0.586 ± 0.004 | 0.293 ± 0.001 |
| PGM-Explainer | 0.714 ± 0.007 | 0.792 ± 0.001 | 0.615 ± 0.003 | 0.575 ± 0.002 | 0.250 ± 0.000 |
| ReFine | 0.955 ± 0.005 | 0.914 ± 0.001 | 0.636 ± 0.003 | 0.576 ± 0.013 | 0.297 ± 0.000 |
| **MatchExplainer** | **0.997** | **0.993** | **0.938** | **0.634** | **0.305** |
| Relative Impro. | 4.5% | 8.6% | 48.9% | 8.1% | 2.6% |

Table 1: Comparisons of our MatchExplainer with other baseline explainers.
Overall results. Table 2 documents the performance on all datasets except BA-3Motif, since its testing accuracy has already approached 100%. It can be observed that MatchDrop consistently improves the testing accuracy in all cases. In contrast, FPDrop imposes a negative impact on the performance of GNNs. This indicates that false positive sampling harms conventional graph augmentation methods, which can be effectively overcome by our MatchDrop. On the other hand, PGDrop also leads to a decrease in accuracy. One possible reason is that parameterized explainers like PGExplainer are trained on samples that GNNs predict correctly, so they are incapable of exploring explanatory subgraphs on unseen graphs that GNNs predict incorrectly.
## 6 Related Work
### Explainability of GNNs
Interpretability and feature selection have attracted growing significance in demystifying complicated deep learning models, and increasing interest has arisen in explaining GNNs. Despite fruitful progress, the study in this area is still insufficient compared to the domains of images and natural language. Generally, there are two mainstream lines of research. The widely adopted one nowadays is the parametric explanation methods, which run a parameterized model to dig out informative substructures or generate saliency maps. For example, GNNExplainer (Ying et al., 2019) learns soft masks for each instance and applies them to the adjacency matrix. PGExplainer (Luo et al., 2020) collectively explains multiple instances with a probabilistic graph generative model. XGNN (Yuan et al., 2020) utilizes a graph generator to output class-wise graph patterns to explain GNNs for each class. PGM-Explainer (Vu and Thai, 2020) proposes a Bayesian network on pairs of graph perturbations and prediction changes. The other line is the non-parametric explanation methods, which do not involve any additional trainable models. They employ heuristics like gradient-based scores obtained by backpropagation as the feature contributions for a specific instance (Baldassarre and Azizpour, 2019; Pope et al., 2019; Schnake et al., 2020). As mentioned previously, the latter are usually less favored because their performance is much poorer than that of parametric methods. In contrast, our MatchExplainer achieves state-of-the-art results.
### Graph Augmentations
Data augmentation has recently attracted growing attention in graph representation learning to counter issues like data noise and data scarcity (Zhao et al., 2022). The related work can be roughly broken down into _feature-wise_ (Zhang et al., 2017; Liu et al., 2021; Taguchi et al., 2021), _structure-wise_ (You et al., 2020; Zhao et al., 2021), and _label-wise_ (Verma et al., 2019) categories based on the augmentation modality (Ding et al., 2022). Among them, many efforts have been made to augment graph structures. Compared with adding or deleting edges (Xu et al., 2022), augmentation operations on node sets are more complicated. A typical application is to promote the propagation of the whole graph by inserting a supernode (Gilmer et al., 2017), while Zhao et al. (2021) interpolate nodes to enrich the minority classes. Conversely, some implement graph or subgraph sampling by dropping nodes for different purposes, such as scaling up GNNs (Hamilton et al., 2017), enabling contrastive learning (Qiu et al., 2020), and preventing over-fitting and over-smoothing (Rong et al., 2019). Nonetheless, few of these graph sampling or node dropping approaches manage to find augmented graph instances that best preserve the original properties of the input graph.
## 7 Conclusion
This paper proposes a promising subgraph matching technique called MatchExplainer for GNN explanations. Distinct from the popular trend of using a parameterized network that lacks interpretability, we design a non-parametric algorithm to search for the most informative joint subgraph between a pair of graphs. Furthermore, we combine MatchExplainer with the classic graph augmentation method and show its great capacity in ameliorating the false positive sampling challenge. Experiments convincingly demonstrate the efficacy of our MatchExplainer, which outperforms parametric approaches by significant margins. Our work hopes to shed light on pushing the frontier of non-parametric methods to explain deep learning models.

| Dataset | Backbone | Original | FPDrop | DropNode | PGDrop | MatchDrop |
|---|---|---|---|---|---|---|
| MUTAG | GCN | 0.828 ± 0.004 | 0.803 ± 0.017 | 0.832 ± 0.008 | 0.825 ± 0.02 | **0.844 ± 0.006** |
| MUTAG | GIN | 0.832 ± 0.003 | 0.806 ± 0.020 | 0.835 ± 0.009 | 0.828 ± 0.01 | **0.845 ± 0.007** |
| VG-5 | GCN | 0.619 ± 0.003 | 0.587 ± 0.014 | 0.623 ± 0.007 | 0.604 ± 0.002 | **0.638 ± 0.008** |
| VG-5 | GIN | 0.621 ± 0.004 | 0.593 ± 0.018 | 0.622 ± 0.006 | 0.600 ± 0.004 | **0.630 ± 0.003** |
| MNIST | GCN | 0.982 ± 0.001 | 0.955 ± 0.008 | 0.982 ± 0.002 | 0.975 ± 0.003 | **0.986 ± 0.002** |
| MNIST | GIN | 0.988 ± 0.001 | 0.959 ± 0.005 | 0.989 ± 0.001 | 0.979 ± 0.002 | **0.990 ± 0.001** |

Table 2: Testing accuracy comparisons on different backbones with and without MatchDrop.
|
2305.16202 | DP-SGD Without Clipping: The Lipschitz Neural Network Way | State-of-the-art approaches for training Differentially Private (DP) Deep
Neural Networks (DNN) face difficulties to estimate tight bounds on the
sensitivity of the network's layers, and instead rely on a process of
per-sample gradient clipping. This clipping process not only biases the
direction of gradients but also proves costly both in memory consumption and in
computation. To provide sensitivity bounds and bypass the drawbacks of the
clipping process, we propose to rely on Lipschitz constrained networks. Our
theoretical analysis reveals an unexplored link between the Lipschitz constant
with respect to their input and the one with respect to their parameters. By
bounding the Lipschitz constant of each layer with respect to its parameters,
we prove that we can train these networks with privacy guarantees. Our analysis
not only allows the computation of the aforementioned sensitivities at scale,
but also provides guidance on how to maximize the gradient-to-noise ratio for
fixed privacy guarantees. The code has been released as a Python package
available at https://github.com/Algue-Rythme/lip-dp | Louis Bethune, Thomas Massena, Thibaut Boissin, Yannick Prudent, Corentin Friedrich, Franck Mamalet, Aurelien Bellet, Mathieu Serrurier, David Vigouroux | 2023-05-25T16:05:46Z | http://arxiv.org/abs/2305.16202v2 | # DP-SGD Without Clipping: The Lipschitz Neural Network Way
###### Abstract
State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNN) face difficulties to estimate tight bounds on the sensitivity of the network's layers, and instead rely on a process of per-sample gradient clipping. This clipping process not only biases the direction of gradients but also proves costly both in memory consumption and in computation. To provide sensitivity bounds and bypass the drawbacks of the clipping process, our theoretical analysis of Lipschitz constrained networks reveals an unexplored link between the Lipschitz constant with respect to their input and the one with respect to their parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we guarantee DP training of these networks. This analysis not only allows the computation of the aforementioned sensitivities at scale but also provides leads on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. To facilitate the application of Lipschitz networks and foster robust and certifiable learning under privacy guarantees, we provide a Python package that implements building blocks allowing the construction and private training of such networks.
## 1 Introduction
Machine learning relies more than ever on foundation models, and such practices raise questions about privacy. Differential privacy makes it possible to develop methods for training models that preserve the privacy of individual data points in the training set. The field seeks to enable deep learning on sensitive data, while ensuring that models do not inadvertently memorize or reveal specific details about individual samples in their weights. This involves incorporating privacy-preserving mechanisms into the design of deep learning architectures and training algorithms, the most popular example of which is Differentially Private Stochastic Gradient Descent (DP-SGD) [1]. One main drawback of classical DP-SGD methods is that they require costly per-sample backward processing and gradient clipping. In this paper, we offer a new method that unlocks fast differentially private training through the use of Lipschitz constrained neural networks. Additionally, this method offers new opportunities for practitioners who wish to easily "DP-fy" [2] the training procedure of a deep neural network.
**Differential privacy fundamentals.** Informally, differential privacy is a _definition_ that quantifies how much the change of a single sample in a dataset affects the range of a stochastic function (here the DP training), called _mechanism_ in this context. This quantity can be bounded in an inequality involving
two parameters \(\epsilon\) and \(\delta\). A mechanism fulfilling such an inequality is said to be \((\epsilon,\delta)\)-DP (see Definition 1). This definition is universally accepted as a strong guarantee against privacy leakages under various scenarios, including data aggregation or post-processing [3]. A popular rule of thumb suggests using \(\epsilon\leq 10\) and \(\delta<\frac{1}{N}\) with \(N\) the number of records [2] for mild guarantees. In practice, most classic algorithmic procedures (called _queries_ in this context) do not readily fulfill the definition for useful values of \((\epsilon,\delta)\), in particular the deterministic ones: randomization is mandatory. This randomization comes at the expense of "utility", i.e the usefulness of the output for downstream tasks [4]. The goal is then to strike a balance between privacy and utility, ensuring that the released information remains useful and informative for the intended purpose while minimizing the risk of privacy breaches. The privacy/utility trade-off yields a Pareto front, materialized by plotting \(\epsilon\) against a measurement of utility, such as validation accuracy for a classification task.
**Private gradient descent.** The SGD algorithm consists of a sequence of queries that (i) take the dataset in input, sample a minibatch from it, and return the gradient of the loss evaluated on the minibatch, before (ii) performing a descent step following the gradient direction. The sensitivity (see Definition 2) of SGD queries is proportional to the norm of the per-sample gradients. DP-SGD turns each query into a Gaussian mechanism by perturbing the gradients with a noise \(\zeta\). The upper bound on gradient norms is generally unknown in advance, which leads practitioners to clip it to \(C>0\), in order to bound the sensitivity manually. This is problematic for several reasons: **1.** Hyper-parameter search on the broad-range clipping value \(C\) is required to train models with good privacy/utility trade-offs [5], **2.** The computation of per-sample gradients is expensive: DP-SGD is usually slower and consumes more memory than vanilla SGD, in particular for the large batch sizes often used in private training [6], **3.** Clipping the per-sample gradients biases their average [7]. This is problematic as the average direction is mainly driven by misclassified examples, that carry the most useful information for future progress.
**An unexplored approach: Lipschitz constrained networks.** We propose to train neural networks for which the parameter-wise gradients are provably and analytically bounded during the whole training procedure, in order to get rid of the clipping process. This allows for rapid training of models without a need for tedious hyper-parameter optimization.
The main reason why this approach has not been experimented with much in the past is that upper bounding the gradient of neural networks is often intractable. However, by leveraging the literature on Lipschitz constrained networks [8], we show that these networks allow estimating their gradient bounds. This yields tight bounds on the sensitivity of SGD steps, making their transformation into Gaussian mechanisms inexpensive - hence the name **Clipless DP-SGD**.
Informally, the Lipschitz constant quantifies the rate at which the function's output varies with respect to changes in its input. A Lipschitz constrained network is one in which its weights and activations are constrained such that it can only represent \(l\)-Lipschitz functions. In this work, we will focus our
Figure 1: **An example of usage of our framework, illustrating how to create a small Lipschitz VGG and how to train it under \((\epsilon,\delta)\)-DP guarantees while reporting \((\epsilon,\delta)\) values.**
attention on feed-forward networks (refer to Definition 3). Note that the most common architectures, such as Convolutional Neural Networks (CNNs), Fully Connected Networks (FCNs), Residual Networks (ResNets), or patch-based classifiers (like MLP-Mixers), all fall under the category of feed-forward networks. We will also tackle the particular case of Gradient Norm Preserving (GNP) networks, a subset of Lipschitz networks that enjoy tighter bounds (see appendix).
**Contributions**
While the properties of Lipschitz constrained networks regarding their inputs are well explored, the properties with respect to its parameters remain non-trivial. This work provides a first step to fill this gap: our analysis shows that under appropriate architectural constraints, a \(l\)-Lipschitz network has a tractable, finite Lipschitz constant with respect to its parameters. We prove that this Lipschitz constant allows for easy estimation of the sensitivity of the gradient computation queries. The prerequisite and details of the method to compute the sensitivities are explained in Section 2.
Our contributions are the following:
1. We extend the field of applications of Lipschitz constrained neural networks. So far the literature focused on Lipschitzness with respect to the _inputs_: we extend the framework to **compute the Lipschitzness with respect to the parameters**. This is exposed in Section 2.
2. We propose a **general framework to handle layer gradient steps as Gaussian mechanisms** that depends on the loss and the model structure. Our framework covers widely used architectures, including VGG and ResNets.
3. We show that SGD training of deep neural networks can be achieved **without gradient clipping** using Lipschitz layers. This allows the use of larger networks and larger batch sizes, as illustrated by our experiments in Section 4.
4. We establish connections between **Gradient Norm Preserving** (GNP) networks and **improved privacy/utility trade-offs** (Section 3.1).
5. Finally, a **Python package5** companions the project, with pre-computed Lipschitz constant and noise for each layer type, ready to be forked on any problem of interest (Section 3.2).
Footnote 5: Code and documentation are given as supplementary material during review process.
### 1.1 Differential Privacy and Lipschitz Networks
The definition of DP relies on the notion of neighboring datasets, i.e datasets that vary by at most one example. We highlight below the central tools related to the field, inspired by [9].
**Definition 1** (\((\epsilon,\delta)\)-Differential Privacy).: _A labeled dataset \(\mathcal{D}\) is a finite collection of input/label pairs \(\mathcal{D}=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{N},y_{N})\}\). Two datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) are said to be neighboring for the "replace-one" relation if they differ by at most one sample: \(\mathcal{D}^{\prime}=\mathcal{D}\cup\{(x^{\prime}_{i},y^{\prime}_{i})\}\setminus \{(x_{i},y_{i})\}\). Let \(\epsilon\) and \(\delta\) be two non-negative scalars. A mechanism \(\mathcal{A}\) is \((\epsilon,\delta)\)-DP if for any two neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), and for any \(S\subseteq\text{range}(\mathcal{A})\):_
\[\mathbb{P}[\mathcal{A}(\mathcal{D})\in S]\leq e^{\epsilon}\times\mathbb{P}[ \mathcal{A}(\mathcal{D}^{\prime})\in S]+\delta. \tag{1}\]
A cookbook to create a \((\epsilon,\delta)\)-DP mechanism from a query is to compute its _sensitivity_\(\Delta\) (see Definition 2), and to perturb its output by adding a Gaussian noise of predefined variance \(\zeta^{2}=\Delta^{2}\sigma^{2}\), where the \((\epsilon,\delta)\)-DP guarantees depends on \(\sigma\). This yields what is called a _Gaussian mechanism_[3].
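As an illustration, a minimal numpy sketch of such a Gaussian mechanism is given below; mapping the noise multiplier \(\sigma\) to concrete \((\epsilon,\delta)\) guarantees is left to an accountant, as discussed later.

```python
# Minimal sketch: perturb a query output of known l2-sensitivity with
# Gaussian noise of standard deviation sigma * sensitivity.
import numpy as np

def gaussian_mechanism(query_output, sensitivity, sigma, rng=None):
    rng = rng or np.random.default_rng()
    zeta = sigma * sensitivity                 # effective noise strength
    return query_output + rng.normal(0.0, zeta, size=query_output.shape)
```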
**Definition 2** (\(l_{2}\)-sensitivity).: _Let \(\mathcal{M}\) be a query mapping from the space of the datasets to \(\mathbb{R}^{p}\). Let \(\mathcal{N}\) be the set of all possible pairs of neighboring datasets \(\mathcal{D},\mathcal{D}^{\prime}\). The \(l_{2}\) sensitivity of \(\mathcal{M}\) is defined by:_
\[\Delta(\mathcal{M})=\max_{\mathcal{D},\mathcal{D}^{\prime}\in\mathcal{N}}\lVert\mathcal{M}(\mathcal{D})-\mathcal{M}(\mathcal{D}^{\prime})\rVert_{2}. \tag{2}\]
**Differentially Private SGD.** The classical algorithm keeps track of \((\epsilon,\delta)\)-DP values with a _moments accountant_ [1], which allows keeping track of privacy guarantees at each epoch by composing different sub-mechanisms. For a dataset with \(N\) records and a batch size \(b\), it relies on two parameters: the sampling ratio \(p=\frac{b}{N}\) and the "noise multiplier" \(\sigma\) defined as the ratio between the effective noise strength \(\zeta\) and the sensitivity \(\Delta\). Bounds on the gradient norm can be turned into bounds on the sensitivity of SGD queries. In the "replace-one" policy for \((\epsilon,\delta)\)-DP accounting, if the gradients are bounded by \(K>0\), the sensitivity of the gradients averaged over a minibatch of size \(b\) is \(\Delta=2K/b\).
Crucially, the algorithm requires a bound on \(\|\nabla_{\theta}\mathcal{L}(\hat{y},y)\|_{2}\leq K\). The whole difficulty lies in bounding tightly this value in advance for neural networks. Currently, gradient clipping serves as a patch to circumvent the issue [1]. Unfortunately, clipping individual gradients in the batch is costly and will bias the direction of their average, which may induce underfitting [7].
**Lipschitz constrained networks.** Our proposed solution comes from the observation that the norm of the gradient and the Lipschitz constant are two sides of the same coin. The function \(f:\mathbb{R}^{m}\to\mathbb{R}^{n}\) is said to be \(l\)-Lipschitz for the \(l_{2}\) norm if for every \(x,y\in\mathbb{R}^{m}\) we have \(\|f(x)-f(y)\|_{2}\leq l\|x-y\|_{2}\). Per Rademacher's theorem [10], its gradient is bounded: \(\|\nabla_{x}f\|\leq l\). Reciprocally, continuous functions with gradient bounded by \(l\) are \(l\)-Lipschitz.
In Lipschitz networks, the literature has predominantly concentrated on investigating the control of Lipschitzness with respect to the inputs (i.e bounding \(\nabla_{x}f\)), primarily motivated by concerns of robustness [11]. However, in this work, we will demonstrate that it is also possible to control Lipschitzness with respect to parameters (i.e bounding \(\nabla_{\theta}f\)), which is essential for ensuring privacy. Our first contribution will point out the tight link that exists between those two quantities.
**Definition 3** (Lipschitz feed-forward neural network).: _A feedforward neural network of depth \(D\), with input space \(\mathcal{X}\subset\mathbb{R}^{n}\), output space \(\mathcal{Y}\subset\mathbb{R}^{K}\) (e.g logits), and parameter space \(\Theta\subset\mathbb{R}^{p}\), is a parameterized function \(f:\Theta\times\mathcal{X}\to\mathcal{Y}\) defined by the sequential composition of layers \(f_{d}\):_
\[f(\theta,x):=\left(f_{D}(\theta_{D})\circ\ldots\circ f_{2}(\theta_{2})\circ f_{1}(\theta_{1})\right)(x). \tag{3}\]
_The parameters of the layers are denoted by \(\theta=(\theta_{d})_{1\leq d\leq D}\in\Theta\). For affine layers, it corresponds to the bias and weight matrix: \(\theta_{d}=(W_{d},b_{d})\). For activation functions, there are no parameters: \(\theta_{d}=\varnothing\)._
_Lipschitz networks are feed-forward networks, with the additional constraint that each layer \(x_{d}\mapsto f_{d}(\theta_{d},x_{d}):=y_{d}\) is \(l_{d}\)-Lipschitz for all \(\theta_{d}\). Consequently, the function \(x\mapsto f(\theta,x)\) is \(l\)-Lipschitz with \(l=l_{1}\times\ldots\times l_{D}\) for all \(\theta\in\Theta\)._
In practice, this is enforced by using activations with Lipschitz constant \(l_{d}\), and by applying a constraint \(\Pi:\mathbb{R}^{p}\to\Theta\) on the weights of affine layers. This corresponds to spectrally normalized matrices [12; 13], since for affine layers we have \(l_{d}=\|W_{d}\|_{2}:=\max\limits_{\|x\|\leq 1}\|W_{d}x\|_{2}\), hence \(\Theta=\{\|W_{d}\|\leq l_{d}\}\).
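As an illustration, the projection \(\Pi\) for a dense layer can be sketched with a few lines of numpy: estimate \(\|W\|_{2}\) by power iteration, then rescale the matrix whenever the constraint is violated. This is a simplified sketch; tighter, convolution-specific estimators exist, as noted later.

```python
# Minimal sketch of a spectral-norm projection for a dense weight matrix.
import numpy as np

def project_spectral(W, l_d=1.0, iters=20, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(W.shape[1])
    for _ in range(iters):                     # power iteration on W^T W
        u = W.T @ (W @ u)
        u /= np.linalg.norm(u)
    sigma = np.linalg.norm(W @ u)              # top singular value estimate
    return W if sigma <= l_d else W * (l_d / sigma)
```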
The seminal work of [8] proved that universal approximation in the set of \(l\)-Lipschitz functions was achievable by this family of architectures. Concurrent approaches are based on regularization (like in [14; 15; 16]) but they fail to produce formal guarantees. While they have primarily been studied in the context of adversarial robustness [11; 17], recent works have revealed additional properties of these networks, such as improved generalization [13; 18]. However, the properties of their parameter gradient \(\nabla_{\theta}f(\theta,x)\) remain largely unexplored.
## 2 Clipless DP-SGD with \(l\)-Lipschitz networks
Our framework consists of **1.** a method that computes the maximum gradient norm of a network with respect to its parameters to obtain a _per-layer_ sensitivity \(\Delta_{d}\), **2.** a moments accountant that relies on the per-layer sensitivities to compute \((\epsilon,\delta)\)-DP guarantees. Method **1.** is based on the recursive formulation of the chain rule involved in backpropagation, while **2.** keeps track of \((\epsilon,\delta)\)-DP values with RDP accounting. It requires some natural assumptions that we highlight below.
**Requirement 1** (Lipschitz loss.).: _The loss function \(\hat{y}\mapsto\mathcal{L}(\hat{y},y)\) must be \(L\)-Lipschitz with respect to the logits \(\hat{y}\) for all ground truths \(y\in\mathcal{Y}\). This is notably the case of Categorical Softmax-Crossentropy._
The Lipschitz constants of common classification losses can be found in the appendix.
**Requirement 2** (Bounded input).: _There exists \(X_{0}>0\) such that for all \(x\in\mathcal{X}\) we have \(\|x\|\leq X_{0}\)._
While there exist numerous approaches for the parametrization of Lipschitz networks (e.g differentiable re-parametrization [19; 8], optimization over matrix manifolds [20] or projections [21]), our framework only provides sensitivity bounds for projection-based algorithms (see appendix).
**Requirement 3** (Lipschitz projection).: _The Lipschitz constraints must be enforced with a projection operator \(\Pi:\mathbb{R}^{p}\to\Theta\). This corresponds to Tensorflow [22] constraints and Pytorch [23] hooks. Projection is a post-processing of private gradients: it induces no privacy leakage [3]._
To compute the per-layer sensitivities, our framework mimics the backpropagation algorithm, where _Vector-Jacobian_ products (VJP) are replaced by _Scalar-Scalar_ products of element-wise bounds. For an arbitrary layer \(x_{d}\mapsto f_{d}(\theta_{d},x_{d}):=y_{d}\) the operation is sketched below:
\[\underbrace{\nabla_{x_{d}}\mathcal{L}:=(\nabla_{y_{d}}\mathcal{L})\frac{ \partial f_{d}}{\partial x_{d}}}_{\text{Vector-Jacobian product: backpropagate gradients}}\implies\underbrace{\|\nabla_{x_{d}}\mathcal{L}\|_{2}\leq\|\nabla_{y_{d}} \mathcal{L}\|_{2}\times\left\|\frac{\partial f_{d}}{\partial x_{d}}\right\|_{ 2}}_{\text{Scalar-Scalar product: backpropagate bounds}}. \tag{4}\]
The notation \(\|\cdot\|_{2}\) must be understood as the spectral norm for Jacobian matrices, and the Euclidean norm for gradient vectors. The scalar-scalar product is inexpensive. For Lipschitz layers the spectral norm of the Jacobian \(\|\frac{\partial f}{\partial x}\|\) is kept constant during training with projection operator \(\Pi\). The bound of the gradient with respect to the parameters then takes a simple form:
\[\|\nabla_{\theta_{d}}\mathcal{L}\|_{2}\leq\|\nabla_{y_{d}}\mathcal{L}\|_{2}\times\left\|\frac{\partial f_{d}}{\partial\theta_{d}}\right\|_{2}. \tag{5}\]
Once again the operation is inexpensive. The upper bound \(\left\|\frac{\partial f}{\partial\theta}\right\|_{2}\) typically depends on the supremum of \(\|x_{d}\|_{2}\), that can also be analytically bounded, as exposed in the following section.
### Backpropagation for bounds
The pseudo-code of **Clipless DP-SGD** is sketched in Algorithm 2. The algorithm avoids clipping by computing a _per-layer_ bound on the element-wise gradient norm. The computation of this _per-layer_ bound is described by Algorithm 1 (graphically explained in Figure 2). Crucially, it requires computing the spectral norm of the Jacobian of each layer with respect to its input and its parameters.
Input bound propagation (line 2). We compute \(X_{d}=\max_{\|x\|\leq X_{d-1}}\|f_{d}(x)\|_{2}\). For activation functions it depends on their range. For linear layers, it depends on the spectral norm of the operator itself. This quantity can be computed with SVD or Power Iteration [24, 19], and constrained during training using the projection operator \(\Pi\). In particular, it covers the case of convolutions, for which tight bounds are known [25]. For affine layers, it additionally depends on the amplitude of the bias \(\|b_{d}\|\).
Backpropagate cotangent vector bounds (line 7). We bound the Jacobian \(\frac{\partial f_{d}(\theta_{d},x)}{\partial x}\). For activation functions this value can be hard-coded, while for affine layers it is the spectral norm of the linear operator. As before, this value is constrained with the projection operator \(\Pi\).
```
Require: Feed-forward architecture f(θ, ·) = f_D(θ_D, ·) ∘ … ∘ f_1(θ_1, ·)
Require: Weights θ = (θ_1, θ_2, …, θ_D), input bound X_0
1: for all layers 1 ≤ d ≤ D do
2:     X_d ← max_{‖x‖ ≤ X_{d-1}} ‖f_d(θ_d, x)‖_2              ▷ Input bounds propagation
3: end for
4: G ← L/b                                                   ▷ Lipschitz constant of the loss for batch size b
5: for all layers D ≥ d ≥ 1 do
6:     Δ_d ← G · max_{‖x‖ ≤ X_{d-1}} ‖∂f_d(θ_d, x)/∂θ_d‖_2    ▷ Compute sensitivity from gradient norm
7:     G ← G · max_{‖x‖ ≤ X_{d-1}} ‖∂f_d(θ_d, x)/∂x‖_2 = G · l_d   ▷ Backpropagate cotangent vector bounds
8: end for
9: return sensitivities Δ_1, Δ_2, …, Δ_D
```
**Algorithm 1** Backpropagation for \(\text{Bounds}(f,X)\)
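A minimal Python transcription of Algorithm 1 is sketched below. Each layer is assumed to expose three hypothetical quantities: `output_bound` (line 2), `param_grad_bound` for \(\max_{\|x\|\leq X_{d-1}}\|\partial f_{d}/\partial\theta_{d}\|_{2}\) (line 6), and `lip` for its Lipschitz constant \(l_{d}\) (line 7); these names are ours, not the lip-dp API.

```python
# Minimal transcription of Algorithm 1, under the stated assumptions.
def backpropagate_bounds(layers, X0, L_loss, batch_size):
    # forward pass: input bound propagation (lines 1-3)
    bounds = [X0]
    for layer in layers:
        bounds.append(layer.output_bound(bounds[-1]))
    # backward pass: scalar-scalar products replace vector-Jacobian products
    G = L_loss / batch_size                    # line 4
    sensitivities = [0.0] * len(layers)
    for d in reversed(range(len(layers))):
        sensitivities[d] = G * layers[d].param_grad_bound(bounds[d])  # line 6
        G = G * layers[d].lip                  # line 7: backpropagate bound
    return sensitivities
```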
### 2.2 Privacy accounting for Clipless DP-SGD
Two strategies are available to keep track of \((\epsilon,\delta)\) values as the training progresses, based on accounting either a per-layer "local" sensitivity, either by aggregating them into a "global" sensitivity.
**The "global" strategy.** Illustrated in the appendix,this strategy simply aggregates the individual sensitivities \(\Delta_{d}\) of each layer to obtain the global sensitivity of the whole gradient vector \(\Delta=\sqrt{\sum_{d}\Delta_{d}^{2}}\). The origin of the clipping-based version of this strategy can be traced back to [30]. With noise variance \(\sigma^{2}\Delta^{2}\) we recover the accountant that comes with DP-SGD. It tends to overestimate the true sensitivity (in particular for deep networks), but its implementation is straightforward with existing tools.
**The "local" strategy.** Recall that we are able to characterize the sensitivity \(\Delta_{d}\) of every layer of the network. Hence, we can apply a different noise to each of the gradients. We dissect the whole training procedure in Figure 3. At same noise multiplier \(\sigma\), it tends to produce a higher value of \(\epsilon\) per epoch than "global" strategy, but has the advantage over the latter to add smaller effective noise \(\zeta\) to each weight.
We rely on the autodp6 library [32, 33, 34] as it uses the Renyi Differential Privacy (RDP) adaptive composition theorem [35, 36], that ensures tighter bounds than naive DP composition.
Footnote 6: [https://github.com/yuxiangw/autodp](https://github.com/yuxiangw/autodp) distributed under Apache License 2.0.
## 3 From theory to practice
Beyond the application of Algorithms 1 and 2, our framework provides numerous opportunities to enhance our understanding of prevalent techniques identified in the literature. An in-depth exploration of these is beyond the scope of this work, so we focus on giving insights on promising tracks based on our theoretical analysis. In particular, we discuss how the tightness of the bound provided by Algorithm 1 can be influenced by working on the architecture, the input pre-processing and the loss post-processing.
### Gradient Norm Preserving networks
We can manually derive the bounds obtained from Algorithm 2 across diverse configurations. Below, we conduct a sensitivity analysis on \(l\)-Lipschitz networks.
**Theorem (informal) 1. Gradient Norm of Lipschitz Networks.**_Assume that every layer \(f_{d}\) is \(K\)-Lipschitz, i.e \(l_{1}=\cdots=l_{D}=K\). Assume that every bias is bounded by \(B\). We further assume that each activation is centered in zero (e.g ReLU, tanh, GroupSort). We recall that \(\theta=[\theta_{1},\theta_{2},\ldots\theta_{D}]\). Then the global upper bound of Algorithm 2 can be expanded analytically._
_1. If \(K<1\) we have:_\(\|\nabla_{\theta}\mathcal{L}(f(\theta,x),y)\|_{2}=\mathcal{O}\left(L\left(K^{D} (X_{0}+B)+1\right)\right).\)
_Due to the \(K^{D}\ll 1\) term this corresponds to a vanishing gradient phenomenon [37]. The output of the network is essentially independent of its input, and the training is nearly impossible._
_2. If \(K>1\) we have:_\(\|\nabla_{\theta}\mathcal{L}(f(\theta,x),y)\|_{2}=\mathcal{O}\left(LK^{D} \left(X_{0}+B+1\right)\right).\)
_Due to the \(K^{D}\gg 1\) term this corresponds to an exploding gradient phenomenon [38]. The upper bound becomes vacuous for deep networks: the added noise \(\zeta\) is at risk of being too high._
_3. If \(K=1\) we have:_\(\|\nabla_{\theta}\mathcal{L}(f(\theta,x),y)\|_{2}=\mathcal{O}\left(L\left( \sqrt{D}+X_{0}\sqrt{D}+\sqrt{BX_{0}}D+BD^{3/2}\right)\right),\)
_which for linear layers without biases further simplify to \(\mathcal{O}(L\sqrt{D}(1+X_{0}))\)._
The formal statement can be found in the appendix. From Theorem 1, we see that the most favorable bounds are achieved by 1-Lipschitz neural networks with 1-Lipschitz layers. In classification tasks, they are no less expressive than conventional networks [18]. Hence, this choice of architecture does not come at the expense of utility. Moreover, an accuracy/robustness trade-off exists, determined by the choice of loss function [18]. However, setting \(K=1\) merely ensures that \(\|\nabla_{x}f\|\leq 1\), and in the worst-case scenario we have \(\|\nabla_{x}f\|<1\) almost everywhere. This could result in a situation where the bound of case 3 in Theorem 1 is not tight, leading to an underfitting regime as in the case \(K<1\). With Gradient Norm Preserving (GNP) networks [17], we expect to mitigate this issue.
**Controlling \(K\) with Gradient Norm Preserving (GNP) networks.** GNP networks are 1-Lipschitz neural networks with the additional constraint that the Jacobians of the layers consist of orthogonal matrices. They fulfill the Eikonal equation \(\left\|\frac{\partial f_{d}(\theta_{d},x_{d})}{\partial x_{d}}\right\|_{2}=1\) for any intermediate activation \(f_{d}(\theta_{d},x_{d})\). Without biases, these networks are also norm preserving: \(\|f(\theta,x)\|=\|x\|\).
Figure 3: **Accountant for locally enforced differential privacy.****(i)** The gradient query for each layer is turned into a Gaussian mechanism [9], **(ii)** their composition at the scale of the whole network is a non isotropic Gaussian mechanism, **(iii)** that benefits from amplification via sub-sampling [31], **(iv)** the train steps are composed over the course of training.
As a consequence, the gradient of the loss with respect to the parameters is easily bounded by

\[\|\nabla_{\theta_{d}}\mathcal{L}\|\leq\|\nabla_{y_{d}}\mathcal{L}\|\times\left\|\frac{\partial f_{d}(\theta_{d},x_{d})}{\partial\theta_{d}}\right\|, \tag{7}\]

which for weight matrices \(W_{d}\) further simplifies to \(\|\nabla_{W_{d}}\mathcal{L}\|\leq\|\nabla_{y_{d}}\mathcal{L}\|\times\|f_{d-1}(\theta_{d-1},x_{d-1})\|\). We see that this upper bound crucially depends on two terms that can be analyzed separately. On one hand, \(\|f_{d-1}(\theta_{d-1},x_{d-1})\|\) depends on the scale of the input. On the other, \(\|\nabla_{y_{d}}\mathcal{L}\|\) depends on the loss, the predictions, and the training stage. We show below how to intervene on these two quantities.
**Remark 2** (Implementation of GNP Networks).: _In practice, GNP are parametrized with GroupSort activation [8; 39], Householder activation [40], and orthogonal weight matrices [17; 41]. Strict orthogonality is challenging to enforce, especially for convolutions for which it is still an active research area (see [42; 43; 44; 45; 46] and references therein). Our line of work traces an additional motivation for the development of GNP and the bounds will strengthen as the field progresses._
**Controlling \(X_{0}\) with input pre-processing.** The weight gradient norm \(\|\nabla_{\theta_{d}}\mathcal{L}\|\) indirectly depends on the norm of the inputs. This observation implies that the pre-processing of input data significantly influences the sensitivity bound. Multiple strategies are available to keep the input norm under control: projection onto the ball ("norm clipping"), or projection onto the sphere ("normalization"). In the domain of natural images, for instance, this result sheds light on the importance of color spaces such as RGB, HSV, YIQ, YUV or Grayscale. These strategies are natively handled by our library.
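Both options can be sketched in a few lines of numpy (assuming flattened input vectors and a target bound \(X_{0}\)):

```python
# Minimal sketch of the two input pre-processing strategies.
import numpy as np

def norm_clip(x, X0):
    # projection onto the l2 ball of radius X0 ("norm clipping")
    n = np.linalg.norm(x)
    return x if n <= X0 else x * (X0 / n)

def normalize(x, X0):
    # projection onto the sphere of radius X0 (assumes x is nonzero)
    return x * (X0 / np.linalg.norm(x))
```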
**Controlling \(L\) with the hybrid approach: loss gradient clipping.** As training progresses, the magnitude of \(\|\nabla_{f}\mathcal{L}\|\) tends to diminish when approaching a local minimum, quickly falling below the upper bound and diminishing the gradient-to-noise ratio. To circumvent the issue, the gradient clipping strategy is still available in our framework. Crucially, instead of clipping the parameter gradient \(\nabla_{\theta}\mathcal{L}\), any intermediate gradient \(\nabla_{f_{d}}\mathcal{L}\) can be clipped during backpropagation. This can be achieved with a special "_clipping layer_" that behaves like the identity function in the forward pass, and clips the gradient during the backward pass. The resulting cotangent vector is no longer a true gradient, but rather a descent direction [47]. In vanilla DP-SGD the clipping is applied to the batched gradient \(\nabla_{W_{d}}\mathcal{L}\) of size \(b\times h^{2}\) for a weight matrix \(W_{d}\in\mathbb{R}^{h\times h}\), and clipping this vector can cause memory issues or slowdowns [6]. In our case, \(\nabla_{y_{D}}\mathcal{L}\) is of size \(b\times h\), which reduces overhead.
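A minimal TensorFlow sketch of such a clipping layer is given below; the threshold `C` and the assumption that intermediate activations have shape `(batch, h)` are choices made for illustration, not a prescription of the lip-dp API.

```python
# Minimal sketch: identity in the forward pass, per-sample l2 clipping of the
# cotangent vector in the backward pass.
import tensorflow as tf

def make_loss_gradient_clipping_layer(C):
    @tf.custom_gradient
    def clip_identity(y):
        def grad(upstream):
            # clip each sample's backpropagated row to norm at most C
            return tf.clip_by_norm(upstream, C, axes=[1])
        return tf.identity(y), grad
    return clip_identity

# usage sketch: logits = make_loss_gradient_clipping_layer(1.0)(logits)
```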
### 3.2 Lip-dp library
To foster accessibility, we provide an open-source TensorFlow library for Clipless DP-SGD training, named lip-dp. It exposes a Keras API for seamless usability and is implemented as a wrapper over the Lipschitz layers of the deel-lip7 library [48]. Its usage is illustrated in Figure 1.
Footnote 7: [https://github.com/deel-ai/deel-lip](https://github.com/deel-ai/deel-lip) distributed under MIT License (MIT).
## 4 Experimental results
We validate our implementation with a speed benchmark against competing approaches, and we present the privacy/utility Pareto front that can be obtained with GNP networks.
Speed and memory consumption. We benchmarked the median runtime per epoch of vanilla DP-SGD against that of Clipless DP-SGD, on a CNN architecture and its Lipschitz equivalent respectively. The experiment was run on a GPU with 48GB of video memory. We compare against the implementations of tf_privacy, opacus and optax. To allow a fair comparison when evaluating Opacus, we reported the runtime with respect to the logical batch size, while capping the physical batch size to avoid Out Of Memory errors (OOM). Although our library does not implement logical batching yet, it is fully compatible with this feature.
Figure 4: **Our approach outperforms concurrent frameworks in terms of runtime and memory:** we trained CNNs (ranging from 130K to 2M parameters) on CIFAR-10, and report the median batch processing time (including noise, and constraints application \(\Pi\) or gradient clipping).
An advantage of the projection \(\Pi\) over per-sample gradient clipping is that the projection cost is independent of the batch size. Figure 4 validates that our method scales much better than vanilla DP-SGD and is compatible with large batch sizes. This offers several advantages: firstly, a larger batch size contributes to a decrease of the sensitivity \(\Delta\propto 1/b\), which diminishes the ratio between noise and gradient norm. Secondly, as the batch size \(b\) increases, the variance decreases at the parametric rate \(\mathcal{O}(1/\sqrt{b})\) (as demonstrated in the appendix), aligning with expectations. This observation does not apply to DP-SGD: gradient clipping biases the direction of the average gradient, as noticed by [7].
Pareto front of privacy/utility trade-off. We performed a search over a broad range of hyper-parameter values to cover the Pareto front between utility and privacy. Results are reported in Figure 5. We emphasize that our experiments did not use the elements behind the success of most recent papers (pre-training, data preparation, or handcrafted features are examples). Hence, our results are more representative of the typical performance that can be obtained in an "out of the box" setting. Future endeavors or domain-specific engineering can enhance the performance even further, but such improvements currently lie beyond the scope of our work. We also benchmarked architectures inspired from VGG [52], ResNet [53] and MLP-Mixers [54]; see the appendix for more details. Following standard practices of the community [2], we used _sampling without replacement_ at each epoch (by shuffling examples), but we reported \(\epsilon\) assuming _Poisson sampling_ to benefit from privacy amplification [31]. We also ignore the privacy loss that may be induced by the hyper-parameter search, which is a limitation per recent studies [5], but is common practice.
## 5 Limitations and future work
Although this framework offers a novel approach to differentially private training, it introduces new challenges. We primarily rely on GNP networks, whose high-performing architectures are quite different from the usual CNN architectures. As emphasized in Remark 2, we anticipate that progress in these areas will greatly enhance the effectiveness of our approach. Additionally, to meet Requirement 3, we rely on projections, necessitating additional efforts to incorporate recent advancements associated with differentiable reparametrizations [42; 43]. It is worth noting that our methodology is applicable to most layers. Another limitation of our approach is the accurate computation of the sensitivity \(\Delta\), which is challenging due to the non-associativity of floating-point arithmetic and its impact on numerical stability [55]. This challenge is exacerbated on GPUs, where operations are inherently non-deterministic [56]. Finally, as mentioned in Remark 1, our bound propagation method can be refined.
Figure 5: **Our framework paints a clearer picture of the privacy/utility trade-off. We trained models in an “out of the box setting” (no pre-training, no data augmentation and no handcrafted features) on multiple tasks. While our results align with the baselines presented in other frameworks, we recognize the importance of domain-specific engineering. In this regard, we find the innovations introduced in [49; 50; 51] and references therein highly relevant. These advancements demonstrate compatibility with our framework and hold potential for future integration.**
## 6 Concluding remarks and broader impact
Besides its main focus on differential privacy, our work provides **(1) a motivation to further develop Gradient Norm Preserving architectures**; furthermore, the development of networks with known Lipschitz constants with respect to their parameters is a question of independent interest; and **(2) a useful tool for the study of the optimization dynamics** in neural networks. Finally, Lipschitz networks are known to enjoy certificates against adversarial attacks [17; 57] and to benefit from generalization guarantees [13], without cost in accuracy [18]. We advocate for spreading their use in the context of robust and certifiable learning.
#### Acknowledgments and Disclosure of Funding
This work has benefited from the AI Interdisciplinary Institute ANITI, which is funded by the French "Investing for the Future - PIA3" program under the Grant agreement ANR-19-P3IA-0004. The authors gratefully acknowledge the support of the DEEL project.8
Footnote 8: [https://www.deel.ai/](https://www.deel.ai/) |
2305.03627 | A surface-normal photodetector as nonlinear activation function in
diffractive optical neural networks | Optical neural networks (ONNs) enable high speed parallel and energy
efficient processing compared to conventional digital electronic counterparts.
However, realizing large scale systems is an open problem. Among various
integrated and non-integrated ONNs, free-space diffractive ONNs benefit from a
large number of pixels of spatial light modulators to realize millions of
neurons. However, a significant fraction of computation time and energy is
consumed by the nonlinear activation function that is typically implemented
using a camera sensor. Here, we propose a novel surface-normal photodetector
(SNPD) with a nonlinear response to replace the camera sensor that enables
about three orders of magnitude faster (5.7 us response time) and more energy
efficient (less than 10 nW/pixel) response. Direct efficient vertical optical
coupling, polarization insensitivity, inherent nonlinearity with no control
electronics, low optical power requirements, and the possibility of
implementing large scale arrays make the SNPD a promising nonlinear activation
function for diffractive ONNs. To show the applicability, successful
classification simulation of MNIST and Fashion MNIST datasets using the
measured response of SNPD with accuracy comparable to that of an ideal ReLU
function are demonstrated. | Farshid Ashtiani, Mohamad Hossein Idjadi, Ting-Chen Hu, Stefano Grillanda, David Neilson, Mark Earnshaw, Mark Cappuzzo, Rose Kopf, Alaric Tate, Andrea Blanco-Redondo | 2023-05-05T15:38:57Z | http://arxiv.org/abs/2305.03627v1 | A surface-normal photodetector as nonlinear activation function in diffractive optical neural networks
###### Abstract
Optical neural networks (ONNs) enable high speed parallel and energy efficient processing compared to conventional digital electronic counterparts. However, realizing large scale systems is an open problem. Among various integrated and non-integrated ONNs, free-space diffractive ONNs benefit from a large number of pixels of spatial light modulators to realize millions of neurons. However, a significant fraction of computation time and energy is consumed by the nonlinear activation function that is typically implemented using a camera sensor. Here, we propose a novel surface-normal photodetector (SNPD) with a nonlinear response to replace the camera sensor that enables about three orders of magnitude faster (5.7 \(\mu\)s response time) and more energy efficient (less than 10 nW/pixel) response. Direct efficient vertical optical coupling, polarization insensitivity, inherent nonlinearity with no control electronics, low optical power requirements, and the possibility of implementing large scale arrays make the SNPD a promising nonlinear activation function for diffractive ONNs. To show the applicability, successful classification simulation of MNIST and Fashion MNIST datasets using the measured response of SNPD with accuracy comparable to that of an ideal ReLU function are demonstrated.
## 1 Introduction
As artificial neural networks are more widely utilized in a variety of applications from pattern recognition [1, 2] to medical diagnosis [3, 4], there is an increasing need for faster and more energy efficient hardware platforms. Optical neural networks (ONNs) benefit from massive parallelism and different multiplexing schemes, such as wavelength, mode, time, and polarization, to enable processing with high energy efficiency at the speed of light [5]. Hence, various ONN implementations have been demonstrated both using bench-top setups [6, 7, 8] as well as integrated platforms that enable smaller size and higher energy efficiency [9, 10, 11].
Despite the significant progress, scaling ONNs to thousands or millions of neurons and multiple layers to perform more complex tasks is one of the main issues that integrated ONNs face [6]. Complex and area-consuming photonic routing in commercially available platforms, larger on-chip propagation loss, and intricate electronic control circuitry to compensate for fabrication-induced errors result in lower energy efficiency, packaging complexities, and impractically large integrated systems.
Free-space diffractive ONNs, on the other hand, enable an orders-of-magnitude larger number of neurons compared to integrated ONNs, as well as more flexibility to implement different network configurations [6, 7]. Such systems are especially useful for image and video processing and classification, as they directly process input pictures or video frames with a large number of pixels. Figure 1(a) shows the conceptual schematic of a feed-forward neural network with multiple layers of neurons, where each neuron performs linear (weight and sum) and nonlinear (activation function) computations on its inputs. Correspondingly, a diffractive ONN architecture
that performs the linear and nonlinear computations is shown in Fig. 1(b). A laser source illuminates a digitally controlled micro-mirror device (DMD) that modulates the intensity of the incoming light with the input data to the network. A spatial light modulator (SLM) is used to implement linear weights. The large number of pixels of commercially available SLMs enables ONNs with millions of neurons per layer. The diffracted signals from the SLM are then directed towards the camera to apply the nonlinear activation function on the weighted-sum of the inputs. So far, the nonlinear activation function has been implemented either digitally after forming the image on a camera [7], or using the inherent nonlinear photoelectric response of the CMOS sensor [6]. In either case, the total computation time is mainly limited by the sensor exposure time, which for commercial cameras is several milliseconds. For instance, in Ref [6], despite achieving an impressive performance of more than 200 tera operations per second (TOPS) and more than 1 TOPS/W, about 64% of the total processing time and 15% of total power consumption (about 6 \(\mu\)W per pixel) are consumed by the sensor. Therefore, a faster and more energy efficient implementation of the nonlinear activation function can significantly improve the computation speed and energy efficiency of such systems. Note that in most diffractive ONNs only one neural layer is implemented using this setup, and the full neural network is realized by re-using the same architecture but with different parameters. The output of the layer is always in the electrical domain and drives the DMD after some processing. Therefore, an optical-in electrical-out (O-E) nonlinearity would best fit such systems.
Figure 1: (a) Typical feed-forward neural network architecture with multiple layers of interconnected neurons. The neural output is generated by passing the weighted-sum of the inputs through a nonlinear activation function. (b) Diffractive ONN architecture using a DMD to generate the input signals and an SLM to apply corresponding weights to the inputs [6]. Conventionally, a CMOS sensor acts as a detector and/or nonlinear activation function.

Here we propose a novel implementation of the nonlinear activation function using a surface-normal nonlinear photodetector (SNPD) to significantly improve the speed and energy efficiency of a diffractive ONN. The SNPD is formed by a vertical p-i-n structure contained in a Fabry-Perot cavity. These devices have been used previously as high-speed electro-optic modulators operating according to the quantum confined Stark effect [12, 13, 14, 15, 16]. However, light coupled to these devices generates a photocurrent [14], and hence they can be used as photodetectors as well. Also, under high light intensity, nonlinearities induced by thermal effects arise. In this work, we use the nonlinear behavior of the SNPD photocurrent as a function of the incident optical power to realize a nonlinear activation function as an improved alternative to the camera sensor. The SNPD is a polarization-independent device, and light can be vertically coupled to it with high efficiency and without any additional coupling devices, which eases its deployment within a free-space ONN setup. In this work, we show that a reverse-biased SNPD (_i.e.,_ each pixel) has a response time of about 5.7 \(\mu\)s (3-dB bandwidth of 61 kHz) while consuming less than 10 nW of static power, making it about three orders of magnitude faster and more energy efficient than commercially available camera sensors. As a result, the activation function will not be a performance bottleneck of the system. As a proof of concept, the measured characteristics of the SNPD are used in a neural network simulation platform to classify the MNIST and Fashion MNIST datasets. In these tests, accuracies of 97% and 89% are achieved, respectively, showing a performance comparable with that of a standard rectified linear unit (ReLU) activation function. Note that the SNPD is primarily proposed to be utilized in a diffractive ONN setup as an O-E nonlinearity. Other solutions such as all-optical [17, 18, 19] and O-E-O [20] nonlinearities require additional coupling devices (_e.g._, grating couplers), polarization control, additional photodetectors to generate an electrical output, a larger size per pixel, and control electronic circuitry to realize the nonlinearity (especially in the case of O-E-O), which result in more complexity and less energy efficiency, and make scaling more challenging. Although they enable a faster response time than the SNPD, due to the millisecond-scale response time of the SLM the performance of the overall system will not improve, and this only results in more energy consumption. Therefore, the SNPD best fits a diffractive ONN setup.
## 2 SNPD structure and characterization
Figure 2(a) shows a sketch of the cross-section of the SNPD used in this work. It is composed of a multi-quantum-well (MQW) stack placed in the intrinsic region of a vertical _p-i-n_ structure. The MQW is formed by 36 periods of In\({}_{0.53}\)Ga\({}_{0.47}\)As wells with 9 nm thickness and In\({}_{0.52}\)Al\({}_{0.48}\)As barriers with 4 nm thickness. The total thickness of the MQW is 468 nm, which is equivalent to one wavelength at about 1540 nm. The _p-i-n_ stack is then inserted in an asymmetric Fabry-Perot resonant cavity with a high-reflectivity (HR) mirror on the bottom of the structure and a partial-reflectivity top mirror formed by the semiconductor/air interface. Other MQWs with different composition (such as Si/SiGe) [14] and thickness [15] may be used as well in such a structure. The SNPD used in this work has an active area diameter of 20 \(\mu\)m. The top-view microphotograph of the device is shown in Fig. 2(b). The chip is bonded to a submount with single-ended ground-signal-ground metal pads that allow application of an electric field orthogonal to the layers of the MQW region. Note that devices with smaller active area can be designed in order to reduce the form factor when placed within an array [12]. The details of the fabrication process are described in Ref [15].

Figure 2: (a) Sketch of the cross-section of a SNPD. (b) Top-view photograph of a SNPD with 20 \(\mu\)m active area diameter.
Typically, when used as a modulator, such a device operates according to the quantum confined Stark effect: upon application of a reverse bias voltage, the MQW absorption edge shifts in wavelength and produces amplitude modulation of the optical output signal. In this work, while we still apply a reverse bias voltage, we use it as a photodetector and work at wavelengths much longer than those typically used for modulation.
Figure 3(a) shows the experimental setup used to characterize the SNPD in the linear and nonlinear regions. The output light of a tunable continuous wave (CW) laser is coupled orthogonally to the surface of the SNPD chip using a standard single-mode fiber and a GRIN lens. The GRIN lens is used to reimage the optical mode of the standard fiber on the SNPD top surface with about 80% coupling efficiency while allowing the fiber to be moved farther away from the chip; it does not change the mode size. A fiber-optic circulator allows separating the light at the input and output of the SNPD. Note that the reflected optical signal (P\({}_{out}\) in Fig. 3(a)) is used when the device operates in the modulator mode. As mentioned before, there is no need for any optical polarization control of the light, as the device is fully polarization independent [12]. Moreover, the SNPD is placed on a thermo-electric cooler (TEC) to stabilize the working temperature of the device. Due to the broad wavelength range of operation of the SNPD [12], no complex and power-hungry closed-loop wavelength locking mechanism is necessary. To later characterize the nonlinear response time of the SNPD, an acousto-optic modulator (AOM) is driven with a 27 MHz CW signal by an arbitrary signal generator. In this mode of operation, the AOM only frequency-shifts the laser, with an insertion loss of about 3.5 dB.
In the first experiment, the responsivity of the SNPD in the linear region (_i.e._, low optical power) as a function of wavelength and for different reverse bias conditions is measured. In this case, the AOM is bypassed and no amplitude modulation is performed. Figure 3(b) shows the responsivity of the SNPD as a function of optical wavelength for three different reverse bias voltages and a fixed on-chip optical power of -4.9 dBm (estimated after de-embedding the loss of other components). As the reverse bias value increases, the absorption edge red-shifts, resulting in a higher peak responsivity. To achieve a high photocurrent, a reverse bias voltage of 5 V is used in all of the following experiments.
In the second experiment, to study the nonlinear behavior of the SNPD as the input optical power changes, the responsivity of the device is measured for a reverse bias voltage of 5 V and different input optical power values. As shown in Fig. 3(c), for optical wavelengths shorter than 1580 nm, the responsivity graphs for different input optical powers are similar and no significant nonlinearity is observed. However, for longer wavelengths, as the input optical power increases, the difference between the responsivity graphs becomes more significant, showing the nonlinear behavior of the SNPD. This behavior is dominated by thermal effects [13]: once the optical power exceeds a certain threshold for a given wavelength, the generated photocurrent increases at a higher rate, resulting in a larger responsivity.
The third experiment is performed to characterize the nonlinear response of the SNPD that is to be used in an ONN. The laser wavelength is fixed to 1598 nm while the optical power is swept. As shown in Fig. 3(d), the photocurrent is a nonlinear function of the input optical power, which at 1598 nm resembles a ReLU function. For optical powers larger than 1.25 mW, the change in the photocurrent significantly increases. The measured characteristic is later used in a neural network to confirm its applicability as a nonlinear activation function. Note that the threshold power is a function of the cavity design and biasing conditions and can be adjusted. Moreover, the optical wavelength of 1598 nm is chosen as it results in a close approximation of the ReLU
nonlinear response. However, other wavelengths can be selected depending on the application and the desired type of nonlinear function. Since one neural layer is typically implemented using diffractive ONNs, the laser power can be set properly to maintain a sufficient optical power level at the SNPD to trigger the nonlinearity.
To measure the bandwidth of the SNPD in the proposed mode of operation, a train of square-wave pulses is applied to the AOM to amplitude modulate the CW laser with an extinction ratio of greater than 35 dB. Note that the amplitude of the modulation signal is large enough to switch the input optical power between less than 1 \(\mu\)W and a value larger than the threshold power, which is about 1.25 mW. This way, we emulate a large change in the weighted-sum signal to find the worst-case scenario for the response time. In this experiment, the modulation frequency is varied and the amplitude of the AC voltage is measured across a 50 \(\Omega\) load on an oscilloscope. Figure 3(e) shows the normalized AC response of the SNPD (yellow squares), where fitting a single-pole transfer function suggests a 3-dB bandwidth of 61 kHz, equivalent to a rise time (response time) of about 5.7 \(\mu\)s. This is about three orders of magnitude faster than the typical millisecond response time of camera sensors.

Figure 3: (a) Experimental setup used to characterize the device. (b) SNPD responsivity as a function of optical wavelength for different reverse bias voltages and an on-chip optical power of -4.9 dBm. In this case, the AOM is bypassed. (c) SNPD responsivity for different input optical powers showing a nonlinear behavior at wavelengths longer than 1580 nm. Here too, the AOM is bypassed. (d) SNPD photocurrent as a function of the input optical power (P\({}_{in}\)) measured at the wavelength of 1598 nm. The modulation signal is turned off in this measurement. (e) Frequency response of the SNPD measured at the wavelength of 1598 nm, while the AOM modulates the input optical signal.
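As a consistency check, these two figures are linked by the usual single-pole relation between rise time and bandwidth, \(t_{r}\approx 0.35/f_{3\,\mathrm{dB}}=0.35/(61\,\mathrm{kHz})\approx 5.7\,\mu\mathrm{s}\) (assuming the standard 10-90% rise-time definition).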
## 3 Neural network simulation results
To demonstrate the applicability of the nonlinear response of the SNPD in a neural network, the measured transfer function of the device in Fig. 3(d) is used as the activation function in a neural network simulation platform to classify the MNIST and Fashion MNIST datasets. Figure 4(a) shows the architecture of the simple neural network used in this work. The \(28\times 28\)-pixel images are input to a convolution layer with 32 parallel \(3\times 3\) kernels with a stride (step size) of one and the SNPD response as its activation function, replacing the standard ReLU function. A maxpooling layer down-samples the output of the convolution layer and is followed by a fully-connected layer with 100 neurons and the SNPD response as the nonlinearity. Finally, 10 neurons with softmax activation generate the classification results of the network. The neural network is implemented using TensorFlow libraries. Stochastic gradient descent with a learning rate of 0.01 and momentum of 0.9 is used as the optimizer, with categorical cross-entropy as the loss function. The input images are fed to the network with a batch size of 32. Moreover, random normal kernel initialization is used throughout the network. A sketch of this setup is given below.

Figure 4: MNIST and Fashion MNIST data classification. (a) Architecture of the neural network used in this work. (b) Cross-entropy loss and (c) classification accuracy as a function of the number of epochs, showing the results for both training and test for the MNIST dataset. (d) Cross-entropy loss and (e) classification accuracy as a function of the number of epochs, showing the results for both training and test for the Fashion MNIST dataset.
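For readers who want to reproduce this pipeline, the following is a minimal Keras sketch of the architecture described above. It is not the authors' code: the measured SNPD transfer curve of Fig. 3(d) is not reproduced here, so a shifted-ReLU placeholder (with a hypothetical threshold of 1.25 in normalized input units) stands in for it, and the \(2\times 2\) pooling window is an assumption, since the pooling size is not specified in the text.

```
import tensorflow as tf

# Placeholder for the measured SNPD response of Fig. 3(d): negligible
# photocurrent below a threshold, then a steep ReLU-like increase.
# In practice this would be an interpolation of the measured curve.
def snpd_activation(x, threshold=1.25):
    return tf.nn.relu(x - threshold)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), strides=1,
                           activation=snpd_activation,
                           kernel_initializer="random_normal",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),   # assumed pooling size
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation=snpd_activation,
                          kernel_initializer="random_normal"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="categorical_crossentropy",  # expects one-hot encoded labels
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=32, epochs=..., validation_data=...)
```

For the Fashion MNIST variant described below, swapping in `optimizer="adam"` and `kernel_initializer="he_uniform"` matches the changes reported in the text.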
Figures 4(b) and 4(c) show the training and test cross-entropy loss and classification accuracy, respectively, both as a function of the number of epochs. Using the measured SNPD nonlinear response, the network achieves a test classification accuracy of about 97%. As a reference, the same network with the standard ReLU function achieves the same accuracy.
In the second test, the same network is used to classify the Fashion MNIST dataset, consisting of \(28\times 28\)-pixel images of 10 different types of clothing (Fig. 4(a)). While the network architecture is the same, the Adam optimizer is used instead of stochastic gradient descent for faster convergence. Moreover, He uniform kernel initialization is used. Figures 4(d) and 4(e) show the training and test loss and accuracy as a function of the number of epochs, respectively. An accuracy of about 88.5% is achieved, while the same network with the ReLU function achieves about 89%. Note that the lower accuracy compared to the MNIST classification case is due to more complex features in the Fashion MNIST dataset and can be improved by using a network that is better optimized for this application.
## 4 Discussion and summary
It should be noted that the proposed SNPD nonlinear activation function can be scaled to one-dimensional (1D) and two-dimensional (2D) arrays with a large number of devices (pixels), similar to camera sensors. For instance, in Ref [16], a \(288\times 132\) array of similar devices is demonstrated. Therefore, a high-resolution 2D array of nonlinear activation functions can be used in a diffractive ONN.
In summary, we demonstrated the applicability of a surface-normal nonlinear photodetector in free-space diffractive ONNs to realize the O-E neural nonlinearity as an alternative to the commonly used camera sensors. The significantly faster response time of 5.7 \(\mu\)s removes the nonlinear activation function as a computation-time bottleneck. The reverse-biased SNPD consumes less than 10 nW of static power per pixel, which in turn improves the overall energy efficiency of an ONN. Moreover, the polarization-independent operation of the SNPD, together with direct optical coupling and the possibility of implementing large-scale 1D and 2D arrays of the device, makes it a promising candidate for use in a free-space ONN setup.
## 5 Backmatter
Disclosures. The authors declare no conflicts of interest.
|
2303.07917 | Reachability Analysis of Neural Networks with Uncertain Parameters | The literature on reachability analysis methods for neural networks currently
only focuses on uncertainties on the network's inputs. In this paper, we
introduce two new approaches for the reachability analysis of neural networks
with additional uncertainties on their internal parameters (weight matrices and
bias vectors of each layer), which may open the field of formal methods on
neural networks to new topics, such as safe training or network repair. The
first and main method that we propose relies on an existing reachability analysis
approach based on mixed monotonicity (initially introduced for dynamical
systems). The second proposed approach extends the ESIP (Error-based Symbolic
Interval Propagation) approach which was first implemented in the verification
tool Neurify, and first mentioned in the publication of the tool VeriNet.
Although the ESIP approach has been shown to often outperform the
mixed-monotonicity reachability analysis in the classical case with
uncertainties only on the network's inputs, we show in this paper through
numerical simulations that the situation is greatly reversed (in terms of
precision, computation time, memory usage, and broader applicability) when
dealing with uncertainties on the weights and biases. | Pierre-Jean Meyer | 2023-03-14T14:00:32Z | http://arxiv.org/abs/2303.07917v1 | # Reachability Analysis of Neural Networks with Uncertain Parameters
###### Abstract
The literature on reachability analysis methods for neural networks currently only focuses on uncertainties on the network's inputs. In this paper, we introduce two new approaches for the reachability analysis of neural networks with additional uncertainties on their internal parameters (weight matrices and bias vectors of each layer), which may open the field of formal methods on neural networks to new topics, such as safe training or network repair. The first and main method that we propose relies on an existing reachability analysis approach based on mixed monotonicity (initially introduced for dynamical systems). The second proposed approach extends the ESIP (Error-based Symbolic Interval Propagation) approach which was first implemented in the verification tool Neurify, and first mentioned in the publication of the tool VeriNet. Although the ESIP approach has been shown to often outperform the mixed-monotonicity reachability analysis in the classical case with uncertainties only on the network's inputs, we show in this paper through numerical simulations that the situation is greatly reversed (in terms of precision, computation time, memory usage, and broader applicability) when dealing with uncertainties on the weights and biases.
_Keywords:_ Uncertain systems, reachability analysis, neural networks.
## 1 Introduction
In the recent years, artificial intelligence methods have grown very rapidly and spread to numerous application fields. Although such approaches often work well in practice, the usual statistical testing of their behavior (Kim et al., 2020) becomes insufficient when dealing with safety-critical applications such as autonomous vehicles (Xiang et al., 2018). In such application fields, we instead need to develop formal verification approaches to guarantee the desired safe behavior of the system. In the case of neural networks, most formal verification tools rely on reachability analysis methods or on solving optimization problems (Liu et al., 2021), and they aim to verify safety specifications in the form of input-output conditions: check if, given a set of allowed input values, the set of all outputs reachable by the network (or an over-approximation of this set) remains within safe bounds (Bak et al., 2021).
Note however that all the tools mentioned in Liu et al. (2021); Bak et al. (2021) focus on the safety verification of pre-trained neural networks, i.e. the network parameters (weight matrices, bias vectors) are assumed to be fixed and known, and they only consider uncertainties on the network's input (from the safety specification to be checked). In contrast, in this paper we are interested in extending the reachability-based verification methods to a whole set of neural networks, or equivalently to a neural network with additional uncertainties on all its internal parameters (weight matrices and bias vectors). Such methods would in turn allow us to connect this field of neural network verification to new topics and offer new ways to approach problems such as safe training (ensuring during training that the trained network satisfies the desired properties, see e.g. Gowal et al. (2018); Mirman et al. (2018)) and network repair (finding the smallest changes to apply to an unsafe network in order to ensure its safety, see e.g. Majd et al. (2021); Yang et al. (2022)). This paper thus introduces the first necessary step in the development of such verification tools: creating new methods for the reachability analysis of neural networks with bounded inputs, weights and biases.
**Contributions.** We propose two new methods to compute interval over-approximations of the output set of a neural network with bounded uncertainties on its inputs, weight matrices and bias vectors. The first approach and main contribution is based on mixed-monotonicity reachability analysis (initially introduced for the analysis of dynamical systems (Meyer et al., 2021)). One of the main strengths of this approach is its generality, since it is applicable to neural networks with any Lipschitz-continuous activation functions, unlike most other approaches in the literature which are limited to piecewise-affine (Wang et al., 2018; Katz et al., 2019; Botoeva et al., 2020; Xu et al., 2021), sigmoid-shaped (Henriksen and Lomuscio, 2020; Tran et al., 2020; Muller, 2022) or monotone increasing functions (Dvijotham et al., 2018; Raghunathan et al., 2018). The proposed algorithm applies mixed-monotonicity reachability analysis to each partial network within the main neural network, and then intersects their results to obtain tighter over-approximations than if the reachability analysis was applied only once to the whole network directly.
Since, to the best of our knowledge, other approaches solving the considered problem have not yet been proposed in the literature, we introduce a second method to offer some elements of comparison with the above mixed-monotonicity approach. Although this ESIP approach is more limited in terms of activation functions (only piecewise-affine and sigmoid-shaped functions), this method was chosen here because it was shown in Meyer (2022) to be very computationally efficient in the particular case of uncertainties only on the network's input. However, in this paper, numerical simulations show that with additional uncertainties on the network parameters, the mixed-monotonicity approach outperforms the ESIP algorithm on all relevant criteria: tightness of over-approximations, computation time and memory usage.
**Related work.** Both methods proposed in this paper to tackle the reachability analysis problem on an uncertain neural network are generalizations of the methods presented, in the particular case without uncertainties on the weights and biases of the network, in Meyer (2022) for the mixed-monotonicity approach and in Henriksen and Lomuscio (2020) for the ESIP approach. To the best of our knowledge, the only other publication attempting to consider a similar problem in the literature is Zuo et al. (2014). On the other hand, while the authors of this work indeed consider reachability analysis of uncertain neural networks, they do it in a very different setting of neural ordinary differential equations, which does not allow us to provide any theoretical or numerical comparison with our approach on discrete models of feedforward neural networks. As mentioned above, many existing works on safety verification of neural networks also rely on various algorithms and set representations for reachability analysis (see e.g. those listed in the survey paper Liu et al. (2021) or the neural network verification competition Bak et al. (2021)). However, all these works currently only apply their reachability methods to pre-trained neural networks, and thus without any uncertainty on the weight matrices and bias vectors as we consider in this paper.
This paper is organized as follows. Section 2 introduces the considered neural network model and defines the reachability analysis problem. Section 3 describes the first and main contribution of this paper, solving the considered problem with mixed-monotonicity reachability analysis. The second approach, based on ESIP (Error-based Symbolic Interval Propagation), is introduced in Section 4. Finally, Section 5 provides numerical simulations to compare both algorithms and to highlight the advantages of the mixed-monotonicity approach.
## 2 Problem Definition
Given \(\underline{x},\overline{x}\in\mathbb{R}^{n}\) with \(\underline{x}\leq\overline{x}\), the interval \([\underline{x},\overline{x}]\subseteq\mathbb{R}^{n}\) is the set \(\{x\in\mathbb{R}^{n}\ |\ \forall i\in\{1,\ldots,n\},\ \underline{x}_{i}\leq x_{i}\leq \overline{x}_{i}\}\).
We consider an \(L\)-layer feedforward neural network defined as
\[x^{l}=\Phi(W^{l}x^{l-1}+b^{l}),\ \forall l\in\{1,\ldots,L\} \tag{1}\]
with uncertain input vector \(x^{0}\in[\underline{x}^{0},\overline{x^{0}}]\subseteq\mathbb{R}^{n_{0}}\), and uncertain weight matrix \(W^{l}\in[\underline{W^{l}},\overline{W^{l}}]\subseteq\mathbb{R}^{n_{l}\times n _{l-1}}\) and bias vector \(b^{l}\in[\underline{b^{l}},\overline{b^{l}}]\subseteq\mathbb{R}^{n_{l}}\) for each layer \(l\in\{1,\ldots,L\}\). The function \(\Phi\) is defined as the componentwise application of a scalar and Lipschitz-continuous activation function. For simplicity of presentation, the activation function \(\Phi\) is assumed to be identical for all layers.
In this paper, we are interested in the robustness of the neural network with respect to the uncertainties on its input \(x^{0}\), weights \(W^{l}\) and biases \(b^{l}\). Since the output set of the network cannot be computed exactly due to the nonlinearities in the activation function \(\Phi\), we use a simpler set representation (multi-dimensional interval) to over-approximate this output set. Relying on such over-approximations ensures that any safety property satisfied on the computed interval is guaranteed to also be satisfied on the real output set of the neural network. This reachability analysis problem is formalized as follows.
**Problem 1**: Given the \(L\)-layer neural network (1) and the uncertainty sets \([\underline{x}^{0},\overline{x^{0}}]\subseteq\mathbb{R}^{n_{0}}\), \([\underline{W^{l}},\overline{W^{l}}]\subseteq\mathbb{R}^{n_{l}\times n_{l-1}}\) and \([\underline{b^{l}},\overline{b^{l}}]\subseteq\mathbb{R}^{n_{l}}\) for all \(l\in\{1,\ldots,L\}\), find an interval \([\underline{x^{L}},\overline{x^{L}}]\subseteq\mathbb{R}^{n_{L}}\) over-approximating the output set of (1):
\[\left\{x^{L}\ \text{in}\ (1)\begin{vmatrix}x^{0}\in[\underline{x}^{0}, \overline{x^{0}}],W^{l}\in[\underline{W^{l}},\overline{W^{l}}],\\ b^{l}\in[\underline{b^{l}},\overline{b^{l}}],\forall l\in\{1,\ldots,L\}\end{vmatrix} \right\}\subseteq[\underline{x^{L}},\overline{x^{L}}].\]
The secondary goal is to find over-approximations that are as close to the real output set as possible. In this paper, we introduce two new approaches addressing this reachability analysis problem of neural networks with uncertain parameters, which has not been explored in the literature yet. The first and main contribution in Section 3 is based on mixed-monotonicity reachability analysis. The second proposed approach in Section 4 relies on Error-based Symbolic Interval Propagation (ESIP).
## 3 Mixed Monotonicity
### 3.1 Mixed-monotonicity reachability analysis
We first introduce the reachability analysis method for a general static function \(y=f(x)\), which will then be applied multiple times to the various partial networks within (1) in the following sections. This result is a straightforward generalization to static functions \(y=f(x)\) of the reachability analysis approach for discrete-time system \(x^{+}=f(x)\) proposed in Meyer et al. (2021). It relies on the boundedness assumption of the derivative (also called Jacobian matrix in the paper) of function \(f\), which is satisfied by any Lipschitz-continuous function.
**Proposition 1**: Consider the function \(y=f(x)\) with output \(y\in\mathbb{R}^{n_{y}}\) and bounded input \(x\in[\underline{x},\overline{x}]\subseteq\mathbb{R}^{n_{x}}\). Assume that its derivative \(f^{\prime}\) is bounded: for all \(x\in[\underline{x},\overline{x}]\), \(f^{\prime}(x)\in[\underline{J},\overline{J}]\subseteq\mathbb{R}^{n_{y}\times n_{x}}\); and denote as \(J^{*}\) the center of these derivative bounds. For each output dimension \(i\in\{1,\ldots,n_{y}\}\), define input vectors \(\underline{\xi^{i}},\overline{\xi^{i}}\in\mathbb{R}^{n_{x}}\) and row vector \(\alpha^{i}\in\mathbb{R}^{1\times n_{x}}\) such that for all \(j\in\{1,\ldots,n_{x}\}\),
\[(\underline{\xi^{i}}_{j},\overline{\xi^{i}}_{j},\alpha^{i}_{j})=\begin{cases}(\overline{x}_{j},\underline{x}_{j},\max(0,\overline{J}_{ij}))&\text{if }J^{*}_{ij}<0,\\ (\underline{x}_{j},\overline{x}_{j},\min(0,\underline{J}_{ij}))&\text{if }J^{*}_{ij}\geq 0.\end{cases}\]
Then for all \(x\in[\underline{x},\overline{x}]\) and \(i\in\{1,\ldots,n_{y}\}\), we have:
\[f_{i}(x)\in\left[f_{i}(\underline{\xi^{i}})-\alpha^{i}(\underline{\xi^{i}}- \overline{\xi^{i}}),\quad f_{i}(\overline{\xi^{i}})+\alpha^{i}(\underline{ \xi^{i}}-\overline{\xi^{i}})\right].\]
Intuitively, the output bounds are obtained by computing for each output dimension the images for two diagonally opposite vertices of the input interval, then expanding these bounds with an error term when the bounds on the derivative \(f^{\prime}\) spans both negative and positive values. Proposition 1 can thus provide an interval over-approximation of the output set of any function as long as bounds on the derivative \(f^{\prime}\) are known. Obtaining such bounds for a neural network is made possible by computing local bounds on the derivative of its activation functions, as detailed in Section 3.2.
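To make this construction concrete, the following is a small NumPy sketch of Proposition 1 (an illustration under the stated assumptions, not the authors' Matlab implementation); here `f` is assumed to be any function mapping an \(n_{x}\)-vector to an \(n_{y}\)-vector, with derivative bounds `J_low`, `J_up`.

```
import numpy as np

def mixed_monotonicity_bounds(f, x_low, x_up, J_low, J_up):
    """Interval over-approximation of f([x_low, x_up]) as in Proposition 1,
    given elementwise bounds [J_low, J_up] on the derivative f'."""
    n_y, n_x = J_low.shape
    J_center = (J_low + J_up) / 2
    y_low, y_up = np.empty(n_y), np.empty(n_y)
    for i in range(n_y):
        neg = J_center[i] < 0
        xi_lo = np.where(neg, x_up, x_low)        # vertex for the lower bound
        xi_up = np.where(neg, x_low, x_up)        # diagonally opposite vertex
        alpha = np.where(neg, np.maximum(0.0, J_up[i]),
                              np.minimum(0.0, J_low[i]))
        err = alpha @ (xi_lo - xi_up)             # expansion term (always >= 0)
        y_low[i] = f(xi_lo)[i] - err
        y_up[i] = f(xi_up)[i] + err
    return y_low, y_up
```

Note that only two input vertices per output dimension are evaluated, which is what keeps the per-layer cost of the approach low.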
### 3.2 Local bounds of activation functions
Proposition 1 and the main algorithm in Section 3.3 are applicable to neural networks with any Lipschitz-continuous activation function \(\Phi\). This is indeed a sufficient condition for the derivative of the whole network description (1) to be bounded. On the other hand, knowing the values of these derivative bounds is required to apply Proposition 1 to the neural network. To avoid asking users of this method to compute the derivative bounds of their neural network themselves, we restrict our framework to a subset of Lipschitz-continuous activation functions for which we provide a method to automatically define local bounding functions for the derivative of a given activation function.
**Assumption 1**: _Let \(\mathbb{R}_{\infty}=\mathbb{R}\cup\{-\infty,+\infty\}\) and consider a scalar activation function \(\Phi\) whose derivative is defined as \(\Phi^{\prime}:\mathbb{R}_{\infty}\to\mathbb{R}_{\infty}\), and where \(\Phi^{\prime}(x)\in\{-\infty,+\infty\}\) only if \(x\in\{-\infty,+\infty\}\). The global \(\arg\min\) and \(\arg\max\underline{z},\overline{z}\in\mathbb{R}_{\infty}\) of \(\Phi^{\prime}\) are known, and \(\Phi^{\prime}\) is a 3-piece piecewise-monotone function as follows:_
* _non-increasing on_ \((-\infty,\underline{z}]\) _until reaching its global minimum_ \(\min_{x\in\mathbb{R}_{\infty}}\Phi^{\prime}(x)=\Phi^{\prime}(\underline{z})\)_;_
* _non-decreasing on_ \([\underline{z},\overline{z}]\) _until reaching its global maximum_ \(\max_{x\in\mathbb{R}_{\infty}}\Phi^{\prime}(x)=\Phi^{\prime}(\overline{z})\)_;_
* _and non-increasing on_ \([\overline{z},+\infty)\)_._
_When_ \(\underline{z}=-\infty\) _(resp._ \(\overline{z}=+\infty\)_), the first (resp. last) monotone segment is squeezed into a singleton at infinity and can thus be ignored._
While the formulation of this assumption may seem restrictive (compared to the initial assumption of taking any Lipschitz-continuous activation function), it should be noted that the large majority of activation functions in the literature indeed have a derivative as described in Assumption 1, including all the less common non-monotone activation functions reviewed or introduced in Zhu et al. (2021).1 Therefore, the mixed-monotonicity approach proposed in Section 3.3 has a much broader applicability than most neural network verification tools in the literature, which are most often restricted to ReLU and piecewise-affine activation functions (Wang et al., 2018; Katz et al., 2019; Botoeva et al., 2020; Xu et al., 2021), occasionally able to consider sigmoid-shaped functions (Henriksen and Lomuscio, 2020; Tran et al., 2020; Muller, 2022), and very rarely dealing with general monotone activation functions (Dvijotham et al., 2018; Raghunathan et al., 2018).
Footnote 1: More details and examples on activation functions satisfying Assumption 1 are available in Meyer (2022) where this assumption was first introduced.
**Proposition 2**: _Given an activation function \(\Phi\) satisfying Assumption 1 and a bounded input domain \([\underline{x},\overline{x}]\in\mathbb{R}\), the local bounds of the derivative \(\Phi^{\prime}\) on \([\underline{x},\overline{x}]\) are given by:_
\[\min_{x\in[\underline{x},\overline{x}]}\Phi^{\prime}(x) =\begin{cases}\Phi^{\prime}(\underline{z})&\text{if }\underline{z}\in[ \underline{x},\overline{x}],\\ \min(\Phi^{\prime}(\underline{x}),\Phi^{\prime}(\overline{x}))&\text{otherwise},\end{cases}\] \[\max_{x\in[\underline{x},\overline{x}]}\Phi^{\prime}(x) =\begin{cases}\Phi^{\prime}(\overline{z})&\text{if }\overline{z}\in[ \underline{x},\overline{x}],\\ \max(\Phi^{\prime}(\underline{x}),\Phi^{\prime}(\overline{x}))&\text{otherwise}.\end{cases}\]
In short, as long as the user provides \(\Phi^{\prime}\) and its global \(\arg\min\) and \(\arg\max\) (\(\underline{z}\) and \(\overline{z}\)), Proposition 2 returns a local bounding function of \(\Phi^{\prime}\). An illustration of Assumption 1 and Proposition 2 (for the lower bound of \(\Phi^{\prime}\)) is provided in Fig. 1. The local bounding of \(\Phi^{\prime}\) in Proposition 2 is used in Section 3.3 for the computation of bounds on the Jacobian matrix of the neural network, which is required to apply the mixed-monotonicity reachability result from Proposition 1.
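Proposition 2 translates into only a few lines of code. Below is a scalar Python sketch (an illustration, not the authors' implementation), using tanh as an example: its derivative \(1-\tanh^{2}(x)\) has its global maximum at \(\overline{z}=0\) and approaches its infimum as \(x\to\pm\infty\), so \(\underline{z}\) can be taken at infinity.

```
import numpy as np

def local_derivative_bounds(dphi, z_min, z_max, x_low, x_up):
    """Local bounds of the activation derivative dphi on [x_low, x_up]
    (Proposition 2); z_min/z_max are the global argmin/argmax of dphi,
    possibly +-inf."""
    if x_low <= z_min <= x_up:
        d_low = dphi(z_min)
    else:
        d_low = min(dphi(x_low), dphi(x_up))
    if x_low <= z_max <= x_up:
        d_up = dphi(z_max)
    else:
        d_up = max(dphi(x_low), dphi(x_up))
    return d_low, d_up

# Example with tanh on the pre-activation interval [-0.5, 2.0]:
d_low, d_up = local_derivative_bounds(lambda x: 1 - np.tanh(x)**2,
                                      -np.inf, 0.0, -0.5, 2.0)
# Here d_up = 1 (the argmax 0 lies in the interval) and d_low is attained
# at the endpoint farthest from 0.
```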
### 3.3 Main algorithm
In this section, we propose an approach using both Propositions 1 and 2 to solve Problem 1 and obtain the tightest possible interval over-approximation of the neural network output set that can be computed when applying mixed-monotonicity reachability analysis to (1). The proposed algorithm is inspired by the one introduced in Meyer (2022) in the particular case of a pre-trained neural network with only uncertainties on its input vector \(x^{0}\).
Although Proposition 1 can be applied to any partial network (described as a subset of consecutive layers of (1)), we do not know in advance which decomposition into partial networks yields the best results. Algorithm 1 thus proposes an efficient way to apply this mixed-monotonicity reachability analysis on all possible network decompositions, while avoiding any redundant computation. To achieve this, we explore the layers of (1) iteratively, and apply Proposition 1 to each partial network ending at the current layer. All the obtained interval over-approximations of this layer's output set are then intersected to obtain a significantly tighter over-approximation, which will then be used in the computations of the next layers.

Figure 1: _Top_: General shape for the activation function derivative according to Assumption 1. _Bottom_: Two examples for the computation of the local lower bound of \(\Phi^{\prime}\) depending on whether its global \(\arg\min\) \(\underline{z}\) is contained in the input interval \([\underline{x},\overline{x}]\) (in red) or not (in blue).
These main steps are summarized in Algorithm 1 and described below. The algorithm takes as input the neural network description (1) with an activation function \(\Phi\) satisfying Assumption 1, as well as all the intervals bounding the network's uncertainties: network input \(x^{0}\) and the weight matrices \(W^{l}\) and bias vectors \(b^{l}\) for each layer \(l\in\{1,\ldots,L\}\). For a partial network considering only layers \(k\) to \(l\) (with \(k\leq l\)) of (1) and denoted as \(\texttt{NN}(k,l)\), we use the notations: \(u(k,l)\) for the concatenated vector of all its uncertainties (the partial network's input \(x^{k-1}\), and the elements of all \(W^{i}\) and \(b^{i}\) for \(i\in\{k,\ldots,l\}\)); \(J(k,l)\) for the derivative (or Jacobian matrix) of this partial network with respect to \(u(k,l)\); and \(x(k,l)\) for the output of this partial network that we want to over-approximate.
Since the Jacobian bounds are defined iteratively using products, they are initialized in line 1 as identity matrices. Then, for each layer \(l\) (lines 2 to 7), we first use Proposition 2 to compute local bounds on the activation function derivative \(\Phi^{\prime}\) when the pre-activation variable is the result of the affine transformation of layer \(l\) (line 3, using interval arithmetic operators): \([\underline{W^{l}},\overline{W^{l}}]*[\underline{x^{l-1}},\overline{x^{l-1}} ]+[\underline{b^{l}},\overline{b^{l}}]\).
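The interval-arithmetic operations in line 3 (and in (2) below) reduce, for each entry, to taking the minimum and maximum over the four endpoint products. A minimal NumPy sketch of the affine map \([\underline{W},\overline{W}]*[\underline{x},\overline{x}]+[\underline{b},\overline{b}]\) is given below (an illustration, not the authors' Matlab code):

```
import numpy as np

def interval_affine(W_low, W_up, x_low, x_up, b_low, b_up):
    """Interval-arithmetic evaluation of [W]*[x] + [b] (line 3 of Algorithm 1).
    All four sign combinations of endpoint products are needed because
    entries of W and x may straddle zero."""
    p = np.stack([W_low * x_low, W_low * x_up, W_up * x_low, W_up * x_up])
    low = p.min(axis=0).sum(axis=1) + b_low
    up = p.max(axis=0).sum(axis=1) + b_up
    return low, up
```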
Next, we consider independently each partial network covering from some previous layer \(k\in\{1,\ldots,l\}\) to the current layer \(l\) (lines 4 to 6). The first step in line 5 is to compute bounds on the Jacobian matrix of partial network \(\texttt{NN}(k,l)\). Using the chain rule, we know that bounds on the derivative of \(\texttt{NN}(k,l)\) with respect to all uncertainties in \(u(k,l)\) are given by the product:
\[[\underline{J(k,l)},\overline{J(k,l)}]=[\underline{\Phi^{\prime}},\overline{\Phi^{\prime}}]*[\underline{G(k,l)},\overline{G(k,l)}], \tag{2}\]
where \(G(k,l)\) is the derivative of \(\texttt{NN}(k,l)\) without the last activation function. The bounds on \(G(k,l)\) are defined as the horizontal concatenation of:
\[[\underline{W^{l}},\overline{W^{l}}]*[\underline{J(k,l-1)},\overline{J(k,l-1)}]\]
representing the derivative with respect to all uncertainties in \(\texttt{NN}(k,l-1)\);
\[[\underline{x^{l-1}_{1}},\overline{x^{l-1}_{1}}]*I_{n_{l}},\ldots,[\underline{x^{l-1}_{n_{l-1}}},\overline{x^{l-1}_{n_{l-1}}}]*I_{n_{l}}\]
representing the derivative with respect to the weight matrix \(W^{l}\) of the last layer; and \(I_{n_{l}}\) representing the derivative with respect to the bias vector \(b^{l}\) of the last layer. Using the computed Jacobian bounds \([\underline{J(k,l)},\overline{J(k,l)}]\) along with the known uncertainty bounds \([\underline{u(k,l)},\overline{u(k,l)}]\), we can then apply Proposition 1 to the partial network \(\texttt{NN}(k,l)\) in line 6 to obtain an interval over-approximation \([\underline{x(k,l)},\overline{x(k,l)}]\) of the output set of \(\texttt{NN}(k,l)\).
The last step of the algorithm, in line 7, is to take the intersection of the interval over-approximations obtained for each partial neural network ending at layer \(l\). Algorithm 1 then returns the interval \([\underline{x^{L}},\overline{x^{L}}]\) computed at the last layer \(L\) of the network.
```
Input: \(L\)-layer network (1) with activation function \(\Phi\) satisfying Assumption 1, uncertainties \([\underline{x^{0}},\overline{x^{0}}]\subseteq\mathbb{R}^{n_{0}}\), \([\underline{W^{l}},\overline{W^{l}}]\subseteq\mathbb{R}^{n_{l}\times n_{l-1}}\) and \([\underline{b^{l}},\overline{b^{l}}]\subseteq\mathbb{R}^{n_{l}}\) for all \(l\in\{1,\ldots,L\}\)
1: \(\forall k,l\in\{0,\ldots,L\}\): \(\underline{J(k,l)}\gets I\), \(\overline{J(k,l)}\gets I\)
2: for \(l\in\{1,\ldots,L\}\) do
3:   \([\underline{\Phi^{\prime}},\overline{\Phi^{\prime}}]\leftarrow\texttt{Prop2}(\Phi^{\prime},[\underline{W^{l}},\overline{W^{l}}]*[\underline{x^{l-1}},\overline{x^{l-1}}]+[\underline{b^{l}},\overline{b^{l}}])\)
4:   for \(k\in\{1,\ldots,l\}\) do
5:     compute \([\underline{J(k,l)},\overline{J(k,l)}]\) as in (2)
6:     \([\underline{x(k,l)},\overline{x(k,l)}]\leftarrow\texttt{Prop1}(\texttt{NN}(k,l),[\underline{u(k,l)},\overline{u(k,l)}],[\underline{J(k,l)},\overline{J(k,l)}])\)
7:   \([\underline{x^{l}},\overline{x^{l}}]\leftarrow[\underline{x(1,l)},\overline{x(1,l)}]\cap\cdots\cap[\underline{x(l,l)},\overline{x(l,l)}]\)
Output: over-approximation \([\underline{x^{L}},\overline{x^{L}}]\) of the network output
```
**Algorithm 1** Mixed-monotonicity reachability analysis of an uncertain feedforward neural network.
**Theorem 1**.: The interval \([\underline{x^{L}},\overline{x^{L}}]\) returned by Algorithm 1 is a solution to Problem 1.
**Proof.** Proposition 1 is guaranteed in Meyer et al. (2021) to return an interval over-approximation for the output set of the considered system. The theorem statement can then be proved by induction. For layer 1, Algorithm 1 computes \([\underline{x^{1}},\overline{x^{1}}]=[\underline{x(1,1)},\overline{x(1,1)}]\), which is indeed an over-approximation of the output set of layer 1 under all uncertainties on \(x^{0}\), \(W^{1}\) and \(b^{1}\). Next, assuming that intervals \([\underline{x^{1}},\overline{x^{1}}]\),..., \([\underline{x^{l-1}},\overline{x^{l-1}}]\) over-approximate the output sets of layers 1 to \(l-1\) respectively, then according to Proposition 1 each interval \([\underline{x(k,l)},\overline{x(k,l)}]\) over-approximates the output set of layer \(l\). If the true output set of layer \(l\) is included in each of these intervals, it is then included in their intersection \([\underline{x^{l}},\overline{x^{l}}]\). \(\blacksquare\)
The proof of Theorem 1 thus primarily relies on the fact that the intersection operator preserves the soundness of over-approximations. Another benefit of the use of intersections comes into play in the fact that Algorithm 1 proposes an exhaustive exploration of all possible decomposition of (1) into partial networks. Indeed, this ensures that the final result of Algorithm 1 is at least as tight as the one that would be obtained from using Proposition 1 on any specific decomposition into partial networks. And in practice, the result of Algorithm 1 is often strictly tighter than any other result from specific decomposition, due to the fact that the intersection of multiple intervals is strictly smaller than each individual interval in most cases (except in the case where one is included in all others, in which case the intersection is equal to this smallest interval).
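The intersection in line 7 is itself a one-liner: stacking the candidate bounds \([\underline{x(k,l)},\overline{x(k,l)}]\) over \(k\), the tightest enclosing interval is the elementwise maximum of the lower bounds and minimum of the upper bounds. A sketch:

```
import numpy as np

def intersect_intervals(lows, ups):
    """Intersection of K interval over-approximations (line 7 of Algorithm 1);
    lows and ups are arrays of shape (K, n_l) stacking the bounds over k."""
    return np.max(lows, axis=0), np.min(ups, axis=0)
```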
## 4 Error-Based Symbolic Interval Propagation
Although the primary contribution of this paper is the mixed-monotonicity approach presented in Section 3, numerically evaluating its performance is of great importance, as for any computational method on neural networks. Since we were not able to find any relevant and comparable approaches in the literature, we developed a second method solving Problem 1, to be able to provide numerical comparisons between them in Section 5.
The approach proposed in this section relies on symbolic interval propagation, which has been used in several neural network verification tools such as ReluVal (Wang et al., 2018b), Neurify Wang et al. (2018) and VeriNet Henriksen and Lomuscio (2020). The core idea is to create bounding functions depending linearly on the network input, and propagating these linear functions iteratively through the layers of the network (i.e. through both the layer's affine transformation, and the linear relaxations of the nonlinear activation functions Salman et al. (2019)). This ensures that the dependency on the network's input is preserved during the propagation of these bounding functions through the network, which results in significantly tighter reachable set over-approximations compared to naive interval bound propagation approaches (see e.g. Xiang et al. (2020)) where this input dependency is lost at each layer.
In this paper, we are interested in a particular variation called ESIP (Error-based Symbolic Interval Propagation), which was first introduced (but not published) in the implementation of the tool Neurify Wang et al. (2018) and first published in the paper of the tool VeriNet Henriksen and Lomuscio (2020). Unlike the classical approach designing and propagating two linear functions (for the lower and upper bounds, respectively), ESIP relies on a single linear function alongside an error matrix. The symbolic equation represents the behavior of the network if the nonlinear activation function of each node is replaced by its lower linear relaxation. The error matrix accounts for deviations from this symbolic equation induced by nodes operating at the upper linear relaxation of their activation function. This particular approach is chosen here because it was shown in Meyer (2022) to be very computationally efficient, with a low and constant computation time per node in the network, while other methods had a computation time per node that grew with the size of the network.
Compared to the ESIP implementation in Henriksen and Lomuscio (2020) in the case where the neural network only has uncertainties on its input, we propose here an extension of this approach to also handle uncertainties on the weight matrices and bias vectors and thus to be able to solve Problem 1. This extension is summarized in Algorithm 2 and detailed below. Due to space limitations and since this new approach is primarily introduced for comparison with our main contribution in Section 3, the formal proof that Algorithm 2 solves Problem 1 is left for future work. We refer the reader to Henriksen and Lomuscio (2020) for more theoretical details in the case of uncertainties only on the network input \(x^{0}\).
In line 1, we initialize the uncertainty vector \(u^{0}\) to the input \(x^{0}\), the symbolic equation \(S^{0}\) to the identity function, and the error \([\underline{E},\overline{E}]\) to an empty interval matrix. Then, for each layer \(l\) we first propagate these variables through the affine transformation \(x\to W^{l}*x+b^{l}\), by appending all elements of \(W^{l}\) and \(b^{l}\) at the end of the previous uncertainty vector \(u^{l-1}\) (line 3), updating the symbolic equation with respect to the new uncertainties \(W^{l}\) and \(b^{l}\) introduced in this layer (line 4), and multiplying the error by the bounds on \(W^{l}\) (line 5).
Next, propagating through the layer's activation function is done individually for each node \(i\) of the layer (line 6). We first compute concrete bounds of the pre-activation variable (line 7) by evaluating the symbolic equation \(S_{i}^{l}\) on the current uncertainty bounds \([\underline{u^{l}},\overline{u^{l}}]\), and then adding all negative error terms to the lower bound and all positive errors to the upper bound. These pre-activation bounds can then be used in line 8 to compute a linear relaxation of the activation function, i.e. two linear functions \(\underline{r}\) and \(\overline{r}\) such that \(\underline{r}(x)\leq\Phi(x)\leq\overline{r}(x)\) for all \(x\in[\underline{x}_{i}^{l},\overline{x}_{i}^{l}]\); a sketch of such a relaxation for ReLU is given below. More details on how to compute such linear relaxations can be found e.g. in Xu et al. (2021) for ReLU functions and in Henriksen and Lomuscio (2020) for sigmoid-shaped functions. We then propagate the symbolic equation through the lower relaxation \(\underline{r}\) (line 9) and compute the maximal error between the relaxation bounds over the pre-activation range \([\underline{x}_{i}^{l},\overline{x}_{i}^{l}]\) (line 10). These new error terms are appended at the end of both bounds of the error (line 11).
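As an illustration of the relaxation computed in line 8, the sketch below gives the standard triangle relaxation of a ReLU node over a pre-activation interval \([l,u]\); the parallel lower line through the origin is one common choice, not necessarily the exact one used in Neurify or VeriNet.

```
def relu_relaxation(l, u):
    """Linear relaxation r_low(x) <= relu(x) <= r_up(x) on [l, u];
    returns (slope, intercept) pairs for the lower and upper lines."""
    if u <= 0.0:        # node always inactive on [l, u]
        return (0.0, 0.0), (0.0, 0.0)
    if l >= 0.0:        # node always active on [l, u]
        return (1.0, 0.0), (1.0, 0.0)
    s = u / (u - l)     # slope of the chord through (l, 0) and (u, u)
    upper = (s, -s * l)
    lower = (s, 0.0)    # parallel lower line through the origin
    return lower, upper
```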
For the final layer \(L\), the propagation of the symbolic equation and error through the activation function (lines 8-11) can be skipped since the interval over-approximation of the output set can be simply computed by propagating the pre-activation bounds (from line 7) through the activation function. Note that in line 12, we obtain these bounds by applying \(\Phi\) directly to the lower and upper bounds because this class of approaches relying on linear relaxations are currently limited in the literature (either in their theory or their implementation) to monotone increasing activation functions Wang et al. (2018); Henriksen and Lomuscio (2020); Zhang et al. (2018).
```
Input: \(L\)-layer network (1), uncertainties \([\underline{x^{0}},\overline{x^{0}}]\subseteq\mathbb{R}^{n_{0}}\), \([\underline{W^{l}},\overline{W^{l}}]\subseteq\mathbb{R}^{n_{l}\times n_{l-1}}\) and \([\underline{b^{l}},\overline{b^{l}}]\subseteq\mathbb{R}^{n_{l}}\) for all \(l\in\{1,\ldots,L\}\)
1: \(u^{0}\leftarrow x^{0}\), \(S^{0}(u^{0})\leftarrow x^{0}\), \(\underline{E}\leftarrow[]\), \(\overline{E}\leftarrow[]\)
2: for \(l\in\{1,\ldots,L\}\) do
     /* Affine transformation */
3:   \(u^{l}\leftarrow[u^{l-1};W^{l}(:);b^{l}]\)
4:   \(S^{l}(u^{l})\leftarrow W^{l}*S^{l-1}(u^{l-1})+b^{l}\)
5:   \([\underline{E},\overline{E}]\leftarrow[\underline{W^{l}},\overline{W^{l}}]*[\underline{E},\overline{E}]\)
6:   for \(i\in\{1,\ldots,n_{l}\}\) do
       /* Pre-activation bounds */
7:     \([\underline{x}_{i}^{l},\overline{x}_{i}^{l}]\leftarrow S_{i}^{l}([\underline{u^{l}},\overline{u^{l}}])+[\sum_{j|\underline{E}(i,j)<0}\underline{E}(i,j),\ \sum_{j|\overline{E}(i,j)>0}\overline{E}(i,j)]\)
       /* Activation function */
8:     find \(\underline{r},\overline{r}\) such that \(\underline{r}(x)\leq\Phi(x)\leq\overline{r}(x)\) for all \(x\in[\underline{x}_{i}^{l},\overline{x}_{i}^{l}]\)
9:     \(S_{i}^{l}(u^{l})\leftarrow\underline{r}(S_{i}^{l}(u^{l}))\)
10:    \(e_{i}\leftarrow\max(\overline{r}(\underline{x}_{i}^{l})-\underline{r}(\underline{x}_{i}^{l}),\ \overline{r}(\overline{x}_{i}^{l})-\underline{r}(\overline{x}_{i}^{l}))\)
11:  \(\underline{E}\leftarrow[\underline{E},\,diag(e)]\), \(\overline{E}\leftarrow[\overline{E},\,diag(e)]\)
12: \([\underline{x^{L}},\overline{x^{L}}]\leftarrow[\Phi(\underline{x^{L}}),\Phi(\overline{x^{L}})]\)
Output: over-approximation \([\underline{x^{L}},\overline{x^{L}}]\) of the network output
```
**Algorithm 2** ESIP reachability analysis of an uncertain feedforward neural network.
Algorithm 2 is very similar to the ESIP approach described in Henriksen and Lomuscio (2020) in the particular case without weight and bias uncertainty. The main differences are that in our Algorithm 2, the dimension of uncertainty vector \(u^{l}\) grows at each layer, and the error needs to be described as an interval matrix (due to the product with uncertain weight \(W^{l}\) in line 5) instead of a simple matrix in Henriksen and Lomuscio (2020).
In terms of implementation of the algorithm however, there is a much more significant difference with Henriksen and Lomuscio (2020): the symbolic equation which was a linear function in the network input \(x^{0}\) in Henriksen and Lomuscio (2020) is now a multi-linear function in the uncertainty vector \(u^{l}\). This introduces a significantly higher complexity in terms of implementation and memory usage. Indeed, in Henriksen and Lomuscio (2020) the symbolic equation of layer \(l\) could be simply defined as an \(n_{l}\times(n_{0}+1)\) matrix, where for each of the \(n_{l}\) output nodes of this layer we only need to store \(n_{0}\) values for the factors multiplying the terms of \(x^{0}\), and the final value for the constant term of the linear equation. On the other hand, for the multi-linear function \(S^{l}\) in Algorithm 2, we would similarly need to store one factor for each multi-linear term appearing in the equation. The creation of such huge matrices thus limits the application of this ESIP approach to very shallow and narrow neural networks, as illustrated in Example 1 below.
_Example 1_.: The initial symbolic equation \(S^{0}\) has dimensions \(n_{S^{0}}=n_{0}\times(n_{0}+1)\). Next if \(S^{l-1}\) is stored as an \(n_{l-1}\times n_{S^{l-1}}\) matrix, then the affine transformation at layer \(l\) (\(S^{l}(u^{l})=W^{l}*S^{l-1}(u^{l-1})+b^{l}\)), implies that the new symbolic equation \(S^{l}\) has \(n_{l}*n_{l-1}*n_{S^{l-1}}\) columns for the multi-linear and linear terms in \(W^{l}*S^{l-1}(u^{l-1})\), followed by \(n_{l}\) columns for the linear terms in \(b^{l}\), and a final column for the constant of the symbolic equation (which becomes a non-zero value only after its propagation through the activation function). Therefore, \(S^{l}\) is stored as an \(n_{l}\times n_{S^{l}}\) matrix, with \(n_{S^{l}}=n_{l}*n_{l-1}*n_{S^{l-1}}+n_{l}+1\).
For simplicity, assume that all layers have the same width: \(\exists n\in\mathbb{N}\mid\forall l\in\{0,\dots,L\},\ n_{l}=n\). Then we can prove by induction that the width of the matrix for \(S^{l}\) is \(n_{S^{l}}=\sum_{i=0}^{2l+1}n^{i}\). We can thus conclude that the final symbolic equation \(S^{L}\) is of dimensions:
\[n\times\left(\frac{1-n^{2L+2}}{1-n}\right).\]
Therefore, the complexity of Algorithm 2 is exponential in the depth \(L\) of the network, and polynomial in its width \(n\).
In Matlab, matrices cannot contain more than \(2^{48}-1\approx 2.8*10^{14}\) elements. To illustrate the high memory usage of this approach, we show in Table 1 the maximum width \(n\) of an \(L\)-layer network for the symbolic equation \(S^{L}\) to remain within this Matlab constraint.
Note however that this constraint on the matrix dimension is never actually reached in practice, since we would first hit another limitation related to the actual memory footprint of this matrix compared to the available RAM. Taking the same conditions that will be considered in the numerical example of Section 5, with a network of depth \(L=3\) and width \(n=20\), the matrix storing the symbolic equation would weigh up to 215 GB (counting 8 bytes per element in the matrix). This is significantly higher than the available RAM on most computers, which will result in a crash of Matlab when attempting to create such a matrix.
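Under the recursion from Example 1 this blow-up is easy to check numerically; the following minimal Python sketch (an illustration added here, not part of the Matlab implementation, assuming 8 bytes per element as above) reproduces the 215 GB figure for \(L=3\) and \(n=20\).

```python
def esip_matrix_elements(L: int, n: int) -> int:
    """Element count of the n x n_{S^L} symbolic-equation matrix of Example 1."""
    n_S = n + 1                      # S^0 is an n x (n+1) matrix
    for _ in range(L):
        n_S = n * n * n_S + n + 1    # n_{S^l} = n_l * n_{l-1} * n_{S^{l-1}} + n_l + 1
    return n * n_S                   # closed form: n * sum_{i=0}^{2L+1} n^i

for depth in (1, 2, 3):
    gigabytes = esip_matrix_elements(depth, 20) * 8 / 1e9
    print(f"L={depth}, n=20: {gigabytes:.4g} GB")   # L=3 gives roughly 215.6 GB
```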
## 5 Numerical Examples
In this section, we provide a numerical comparison of Algorithms 1 and 2 on a set of randomly generated neural networks with various dimensions and activation functions, and we highlight the better performance of the mixed-monotonicity approach from Section 3 with respect to most criteria (generality, tightness, computation time, memory usage). Both algorithms are implemented in Matlab 2021b and run on a laptop with a 1.80 GHz processor and 16 GB of RAM.
In our first numerical experiment, we consider neural networks as in (1) with increasing depth \(L\) and a fixed uniform width \(n\) for all input, hidden and output layers (i.e. \(n_{l}=n\) for all \(l\in\{0,\dots,L\}\)). Since we already know from Example 1 that the ESIP approach will struggle in terms of complexity and memory usage, we focus this comparison on narrow networks with \(n=20\) nodes per layer. All uncertainty variables (input, weight matrices, bias vectors) are assigned randomly generated bounds within \([-1,1]\). The simulations are run a total of \(N=10\) times, each with different random uncertainty bounds, and the obtained results in terms of width of the interval over-approximation, computation time and memory usage are averaged over this number of runs. Since the original ESIP implementation in VeriNet (Henriksen and Lomuscio, 2020) is limited to piecewise-affine or sigmoid-shaped activation functions, we focus this first comparison on the most popular activation function: the Rectified Linear Unit (ReLU), which is the piecewise-affine function \(\Phi(x)=\max(0,x)\).
Table 2 summarizes the obtained results for both Algorithm 1 using mixed-monotonicity and Algorithm 2 using the ESIP approach. In terms of the width of the computed interval over-approximations, we first notice that both algorithms return identical results for shallow networks (\(L=1\)), but that the mixed-monotonicity approach always generates tighter intervals for networks with hidden layers. In terms of complexity, the superiority of the mixed-monotonicity approach is striking, since its computation times are on average 12 times faster than ESIP with one layer, and up to 7000 times faster with two layers. Similarly, its memory usage is on average 1.4 times smaller than ESIP with one layer, and 176 times smaller with two layers. As predicted in Example 1, as soon as we add a third layer, the ESIP approach attempts to create a matrix much larger than the available 16 GB of RAM (even despite the use of sparse matrices), which causes Matlab to crash. On the other hand, we can see that the mixed-monotonicity approach from Algorithm 1 still behaves well in terms of complexity (time and memory) for deeper networks, although the conservativeness of the over-approximation naturally increases with the size of the network.
The second set of numerical experiments is run only with the mixed-monotonicity approach in Algorithm 1 and focuses on its performance while dealing with the two main limitations of the ESIP approach that we could not explore in the comparison of Table 2: larger networks and less common activation functions. Indeed, the ESIP approach is not only limited by its complexity, but as mentioned in Section 4, Algorithm 2 and its original version in VeriNet (Henriksen and Lomuscio, 2020) also cannot handle non-monotone activation functions. Here, we thus consider neural networks with the Sigmoid Linear Unit (SiLU) activation function, which is the non-monotone function \(\Phi(x)=x/(1+e^{-x})\) introduced in Ramachandran et al. (2017). This SiLU activation function satisfies Assumption 1 (with global arg min and arg max of its derivative defined as \(\underline{z}=-2.3994\) and \(\overline{z}=2.3994\), respectively), and it is thus natively handled by the mixed-monotonicity approach in Algorithm 1.
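As a quick sanity check on the constants \(\underline{z}=-2.3994\) and \(\overline{z}=2.3994\) quoted above, the following small numerical sketch (illustrative only, not taken from our Matlab implementation) locates the extrema of the SiLU derivative on a fine grid.

```python
import numpy as np

# The SiLU derivative is d/dx [x * sigma(x)] = sigma(x) * (1 + x * (1 - sigma(x))).
x = np.linspace(-10.0, 10.0, 2_000_001)
sig = 1.0 / (1.0 + np.exp(-x))
dsilu = sig * (1.0 + x * (1.0 - sig))

# Global arg min and arg max of the derivative, approx -2.3994 and 2.3994.
print(x[np.argmin(dsilu)], x[np.argmax(dsilu)])
```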
Tables 3 and 4 report the average (over \(N=10\) randomly generated uncertainty bounds, as in the previous test) computation time and memory usage, respectively, when the depth \(L\) of the neural network goes from 1 to 10 and its width \(n\) from 20 to 100. Although both these quantities naturally increase with the size (\(L\) or \(n\)) of the network, we can observe that Algorithm 1 could solve Problem 1 on all neural networks of up to 10 layers and 100 neurons per layer in less than an hour and using less than 1 GB of RAM. This is a significant advantage compared to Algorithm 2, which took over 6 minutes per network for a 2-layer 20-width network, and hundreds of GB of memory for a 3-layer network. After plotting the obtained results from Tables 3 and 4 using logarithms and various \(n\)-th roots to identify the growth rates, we have identified that the mixed-monotonicity approach in Algorithm 1 has a polynomial complexity in \(O(n^{3}*L^{3})\) for the computation time and \(O(n^{3}*L^{2})\) for the memory.
make it satisfy a given property), which will be the main focus of our future work.
|
2302.10824 | Localizing the Origin of Idiopathic Ventricular Arrhythmia from ECG
Using an Attention-Based Recurrent Convolutional Neural Network | Idiopathic ventricular arrhythmia (IVAs) is extra abnormal heartbeats
disturbing the regular heart rhythm that can become fatal if left untreated.
Cardiac catheter ablation is the standard approach to treat IVAs, however, a
crucial prerequisite for the ablation is the localization of IVAs' origin. The
current IVA localization techniques are invasive, rely on expert
interpretation, or are inaccurate. In this study, we developed a new
deep-learning algorithm that can automatically identify the origin of IVAs from
ECG signals without the need for expert manual analysis. Our developed deep
learning algorithm was comprised of a spatial fusion to extract the most
informative features from multichannel ECG data, temporal modeling to capture
the evolving pattern of the ECG time series, and an attention mechanism to
weigh the most important temporal features and improve the model
interpretability. The algorithm was validated on a 12-lead ECG dataset
collected from 334 patients (230 females) who experienced IVA and successfully
underwent a catheter ablation procedure that determined IVA's exact origins.
The proposed method achieved an area under the curve of 93%, an accuracy of
94%, a sensitivity of 97%, a precision of 95%, and an F1 score of 96% in
locating the origin of IVAs and outperformed existing automatic and
semi-automatic algorithms. The proposed method shows promise toward automatic
and noninvasive evaluation of IVA patients before cardiac catheter ablation. | Mohammadreza Shahsavari, Niloufar Delfan, Mohamad Forouzanfar | 2023-01-28T08:01:10Z | http://arxiv.org/abs/2302.10824v1 | Localizing the Origin of Idiopathic Ventricular Arrhythmia from ECG Using an Attention-Based Recurrent Convolutional Neural Network
###### Abstract
Idiopathic ventricular arrhythmias (IVAs) are extra abnormal heartbeats disturbing the regular heart rhythm that can become fatal if left untreated. Cardiac catheter ablation is the standard approach to treat IVAs; however, a crucial prerequisite for the ablation is the localization of IVAs' origin. The current IVA localization techniques are invasive, rely on expert interpretation, or are inaccurate. In this study, we developed a new deep learning algorithm that can automatically identify the origin of IVAs from ECG signals without the need for expert manual analysis. Our developed deep learning algorithm was comprised of a spatial fusion to extract the most informative features from multichannel ECG data, temporal modeling to capture the evolving pattern of the ECG time series, and an attention mechanism to weigh the most important temporal features and improve model interpretability. The algorithm was validated on a 12-lead ECG dataset collected from 334 patients (230 females, age 46\(\pm\)13) who experienced IVA and successfully underwent a catheter ablation procedure that determined IVA's exact origins. The proposed method achieved an area under the curve of 93%, an accuracy of 94%, a sensitivity of 97%, a precision of 95%, and an F1-score of 96% in locating the origin of IVAs and outperformed existing automatic and semi-automatic algorithms. The proposed method shows promise toward automatic and noninvasive evaluation of IVA patients before cardiac catheter ablation.
Attention, Convolutional neural network, Deep learning, Electrocardiogram, Idiopathic ventricular arrhythmia, Recurrent neural network
## I Introduction
Premature ventricular complex (PVC) is defined as a premature heartbeat occurring when the lower chambers of the heart contract too early [1]. Ventricular tachycardia (VT) is a heart rhythm disorder (arrhythmia) defined as three or more PVCs in a row, at a rate of more than 100 PVC beats per minute. For normal individuals, an occasional period of PVC is not considered a problem and usually does not need treatment [2]. However, PVCs become more of a concern, if they happen frequently or when other heart problems are present. For example, for an individual whose ventricle already squeezes poorly, PVCs may be life-threatening. Idiopathic ventricular arrhythmias are those PVCs and VTs which occur for an unknown reason and in the absence of structural heart disease.
Cardiac catheter ablation is usually performed to prevent IVAs and to restore a normal heart rhythm [3]. This procedure consists of two main stages: a diagnostic stage in which the origins of the electrical signals causing IVAs are found, and a treatment stage where the abnormal tissue is ablated (destroyed) through heating or freezing. Thus, it is very important to correctly locate the origin of the abnormal heart electrical activity before ablation. The origin of these abnormal signals is mainly located in either the right ventricular outflow tract (RVOT) or the left ventricular outflow tract (LVOT) [2, 4].
The conventional methods used for the detection of IVA origin include electrical mapping, substrate mapping, and pace mapping. Electrical mapping [5, 6] is performed by moving an electrically sensitive catheter to different ventricle points and recording electrical signals inside the heart (electrograms). The recorded electrograms are analyzed to identify the IVA electrical pathway. This technique can only be performed in a small number of patients who can withstand a stable IVA for the entire mapping duration. Substrate mapping [7-11] studies the electrical properties of the ventricle to identify the critical and slow conduction areas. This is performed by detecting abnormal electrograms obtained in normal sinus rhythm and/or after stimulation of the ventricles from various sites [8, 9]. However, the presence of such abnormal electrograms at a specific site does not necessarily mean that this site is the IVA origin [12]. Besides, the abnormal electrograms caused by IVAs are low-amplitude and are hard to distinguish from noise in dense fibrotic scar areas. Pace mapping is a substitute
technique for electrical mapping [12-15] performed in two steps. First, a clinical ECG record is obtained when an IVA happens. Next, a catheter is used for stimulating the heart from different ventricular sites and to produce electrical pathways originating from these sites. The IVA's origin is recognized when the electrical pathway generated by stimulation best matches that of the clinical IVA. Compared to electrical mapping, this method gives a better sense of IVA origin by reconstructing and visualizing the actual IVA electrical pathway. The conventional methods for the detection of IVA origin are invasive, pose a high risk to the patient, and are complicated, time-consuming, and expensive.
Recently, new non-invasive methods for the detection of IVA origins have been proposed based on the analysis of ECG signals. Several studies have already proven a strong relationship between the features of the electrocardiogram (ECG) and the locations where IVAs stem from [16-19], and several approaches have been developed to find the IVAs' origins based on this relationship [20-24]. For example, in [20], a threshold based on the amplitude of the ECG R wave in lead I was used to identify the IVA origin. In [21], a mathematical model based on S wave and R wave amplitudes was introduced as an index for the identification of the IVA origin. In [22], ECG QRS morphological features were extracted by three electrophysiologists, and an extreme gradient boosting tree classifier was designed for the detection of IVA origins. In [23], a 12-lead ECG dataset was simulated using a computer heart model to train a convolutional neural network for the localization of IVA. In [24], a support vector machine (SVM) was designed to locate IVA origins. The use of convolutional neural networks was also investigated in [24]; however, the designed network could not outperform the conventional SVM model.
Given that automatic detection of morphological features in an abnormal ECG is challenging, most IVA localization methods rely on manual ECG morphological feature extraction and analysis [17, 20, 21, 25, 26]. Among automatic IVA localization algorithms, some rely on simulated data that may not fully represent the real ECG [23], while others rely on conventional machine learning algorithms that are not capable of fully extracting the ECG spatial and temporal information [24]. There is therefore a significant demand for a fully automatic approach that can detect the origin of IVA from the complex spatial-temporal pattern of ECG multichannel data without the need for any manual analysis. Among different automatic techniques for characterizing ECG multichannel data, deep learning has shown greater potential and more accurate results in various applications [27, 28].
In this paper, we propose an end-to-end deep learning-based framework trained on a real-world dataset to automatically localize IVA origins. Our feature extraction and training phases are end-to-end, which gives the algorithm the ability to automatically extract the most informative features correlated with IVA origins. Our deep learning framework relies on a combination of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to capture the spatial information contained within the ECG multichannel data and the temporal information contained in the ECG time pattern. An attention mechanism is employed to further focus the algorithm on the most important segments of the ECG signal and improve model interpretability. The proposed framework is validated on a dataset of 334 patients using a 10-fold cross-validation approach to ensure the reliability and generalizability of the reported results.
## II Method
### _Dataset_
This study was conducted on a publicly available 12-lead ECG dataset collected from 334 patients who experienced IVA and successfully underwent a catheter ablation procedure to validate the exact origin of IVAs [2].
The dataset has been collected under the auspices of Chapman University and Ningbo First Hospital of Zhejiang University. The institutional review board of Ningbo First Hospital of Zhejiang University has approved this study and has allowed the data to be shared publicly after de-identification.
The average length of ECG recordings in the dataset is 10.53\(\pm\)3.58 s and the sampling rate is 2000 Hz. Of the 334 patients, 77% are classified as RVOT and 23% as LVOT. Participants' characteristics are provided in Table I.
### _Problem Formulation_
Finding the origin of IVA can be formulated as a time-series classification problem in which a comprehensive model is required to extract useful information from varied-length ECG records and predict the correct class for each record. We transformed this problem into a deep learning framework that receives a varied-length ECG as input and outputs a single binary value, indicating LVOT or RVOT. The objective of the deep learning framework was to minimize the binary cross-entropy between the reference labels and the model outputs, given by:
\[Loss=-\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\hat{y}_{i}+(1-y_{i})\log(1-\hat{y}_{i})\right)\]
where \(y_{i}\) is the reference label, \(\hat{y}_{i}\) is the model output for the \(i\)-th ECG record, and \(N\) is the total number of available ECG segments. The binary cross-entropy loss was chosen to minimize the distance between the predicted and actual probability distributions.
### _Preprocessing_
Our dataset contained ECG signals with different lengths (5 s-25 s). To have a fixed input size, all ECG signals were zero-padded to the length of the longest ECG signal (25 s). Padding ECG signals allows the application of deep learning algorithms that require input samples with equal lengths. ECG signals were then lowpass filtered at 25 Hz with a 4th order Butterworth filter to remove high-frequency noise and downsampled to 50 Hz to reduce the input data dimension. The filter was applied in forward and backward directions to achieve a zero-phase shift. ECG segments were normalized by removing the mean and scaling to unit variance. As ECG data can have a wide range of values, normalization eases the learning process of the deep learning algorithm [29].
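A minimal sketch of this preprocessing pipeline is given below; the helper name `preprocess` and the synthetic input are illustrative assumptions, while the padding length, filter order, cutoff frequency, sampling rates, and normalization follow the description above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_IN, FS_OUT, MAX_SEC = 2000, 50, 25   # original rate, target rate, max length

def preprocess(ecg: np.ndarray) -> np.ndarray:
    """ecg: (12, n_samples) array sampled at 2000 Hz, up to 25 s long."""
    padded = np.zeros((12, FS_IN * MAX_SEC))
    padded[:, :ecg.shape[1]] = ecg                  # zero-pad to 25 s
    b, a = butter(4, 25, btype="low", fs=FS_IN)     # 4th-order Butterworth, 25 Hz
    filtered = filtfilt(b, a, padded, axis=1)       # forward-backward: zero phase
    down = filtered[:, ::FS_IN // FS_OUT]           # 2000 Hz -> 50 Hz
    mean = down.mean(axis=1, keepdims=True)
    std = down.std(axis=1, keepdims=True)
    return (down - mean) / std                      # zero mean, unit variance

x = preprocess(np.random.randn(12, FS_IN * 10))     # a synthetic 10 s record
print(x.shape)                                      # (12, 1250)
```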
### _Model Architecture_
Deep learning has proved its significant potential in predictive modeling of complex physiological processes, including different cardiovascular signals such as ECG [30-32]. Here, we designed a novel deep neural network (DNN) structure specific to multichannel ECG data by combining CNN, LSTM, and MLP architectures. It involved four different processing and learning stages, namely spatial fusion, temporal modeling, an attention mechanism, and fully connected layers, which enabled the optimum extraction of spatial and temporal features from the ECG multichannel data and the automatic modeling of their relationship with the IVA origin (see Fig. 1).
_Spatial Fusion:_ To extract the relevant spatial information contained in the 12 leads of the ECG signal, a 13-layer CNN based on the VGG structure [33] was designed. VGG has proved to be a powerful spatial feature extractor, even for 1-D signals [30]. The designed network encoded the 12 ECG channels of size 1250 into 512 channels of size 78. The encoded data contained important spatial information that was further processed for IVA localization.
_Temporal Modeling:_ To capture the temporal information and to learn the complex dependencies in the ECG time sequence, two bidirectional long short-term memory (BiLSTM) layers [34, 35] were designed. Each BiLSTM layer consisted of 2 LSTM layers: one that iterated the input time series forward in time and another that iterated it backward. The BiLSTM layers consisted of 64 LSTM units that received the 78 encoded samples from the spatial fusion output one by one in 78 time steps. In each time step, the temporal model read all 512 channels. After processing each time step, it returned two vectors of size 64 called BiLSTM states, containing the extracted temporal information from the sequence until that time step, one corresponding to the forward LSTM layer and the other to the backward LSTM layer.
_Attention Mechanism:_ The attention mechanism used in this work was based on a similar principle to the one used in [36]. Rather than focusing on the final BiLSTM state, attention calculated a weighted average of the BiLSTM states at all time steps. This was performed by using two fully connected layers along with a hyperbolic tangent activation function that were applied to the BiLSTM states at all time steps. These fully connected layers learned to assign a score to each time step in the input time series. A softmax operation was then applied to these scores to designate the importance of each time step and generate attention weights. The designated weights focused the model's attention on the most informative parts of the ECG signal and therefore improved performance and enhanced interpretability.
_Fully Connected and Output Layers:_ The attention output was passed to two fully connected layers with 256 nodes and LeakyReLU activation functions with a 0.01 negative slope. The fully connected layers were responsible for combining all the features extracted in the previous layers and preparing a more abstract feature representation needed to make the final decision in the output layer. Two dropout layers with a rate of 0.2 were used to prevent the algorithm from overfitting.
An output layer with a sigmoid activation function, which empowers the network to learn the complex and nonlinear relationship between the inputs and the targets, was used to classify LVOT from RVOT.
Fig. 1: Our proposed DNN model for localizing the IVA origin.
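The following condensed PyTorch sketch mirrors the four stages described above. The stated dimensions (12-lead input of length 1250, a 512\(\times\)78 encoding, two 64-unit BiLSTM layers, 256-node fully connected layers with LeakyReLU(0.01) and dropout 0.2, and a sigmoid output) follow the text, while the exact number and widths of the convolution layers inside each VGG-style block are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IVANet(nn.Module):
    def __init__(self):
        super().__init__()
        # Spatial fusion: VGG-style 1-D CNN; four pooling stages reduce
        # 1250 samples to 78 time steps while mapping 12 -> 512 channels.
        chans, blocks = [12, 64, 128, 256, 512], []
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv1d(cin, cout, 3, padding=1), nn.ReLU(),
                       nn.Conv1d(cout, cout, 3, padding=1), nn.ReLU(),
                       nn.MaxPool1d(2)]
        self.cnn = nn.Sequential(*blocks)
        # Temporal modeling: two bidirectional LSTM layers with 64 units.
        self.lstm = nn.LSTM(512, 64, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Attention: two fully connected layers with tanh score each time step.
        self.att = nn.Sequential(nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))
        # Fully connected and output layers.
        self.head = nn.Sequential(
            nn.Linear(128, 256), nn.LeakyReLU(0.01), nn.Dropout(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.01), nn.Dropout(0.2),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, 12, 1250)
        h = self.cnn(x)                         # (batch, 512, 78)
        h, _ = self.lstm(h.transpose(1, 2))     # (batch, 78, 128)
        w = torch.softmax(self.att(h), dim=1)   # attention weights over time
        ctx = (w * h).sum(dim=1)                # weighted average of BiLSTM states
        return self.head(ctx).squeeze(-1)       # probability of one class

print(IVANet()(torch.randn(2, 12, 1250)).shape)   # torch.Size([2])
```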
### Data Analysis
A 10-fold cross-validation was performed to reliably evaluate the generalization performance of the proposed method on unseen data. In each fold, 80% of the data were used for training, 10% were used as a validation set to find the model's optimum hyperparameters, and 10% were used to evaluate the performance of the optimum trained network on unseen data. This procedure was repeated 10 times so that the data belonging to every individual was placed in the test set exactly once. The Adam optimization algorithm was used to minimize the binary cross-entropy loss function.
A random search was performed to find the optimum hyperparameters of our model. The model was trained using different combinations of hyperparameter values, and the set with the best validation performance was selected. The model hyperparameters are listed in Table II.
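The fold logic can be sketched as follows; `train_and_evaluate` is a hypothetical routine standing in for the training, hyperparameter selection, and testing described above, and the simple slicing of the validation subset is an illustrative simplification.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(X, y, train_and_evaluate, seed=0):
    """10-fold CV: per fold, 80% train, 10% validation, 10% test."""
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = []
    for trainval_idx, test_idx in outer.split(X, y):
        # one ninth of the remaining 90% (i.e., 10% overall) for validation
        cut = len(trainval_idx) // 9
        val_idx, train_idx = trainval_idx[:cut], trainval_idx[cut:]
        scores.append(train_and_evaluate(X[train_idx], y[train_idx],
                                         X[val_idx], y[val_idx],
                                         X[test_idx], y[test_idx]))
    return np.mean(scores)   # average performance over the 10 folds
```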
## III Results
We evaluated our proposed algorithm for localization of the origin of IVAs on data collected from 334 individuals (see Table I) using a 10-fold cross-validation.
The optimum model hyperparameters selected based on the validation results are listed in Table II. Our experiments revealed that a DNN structure with 13 convolution layers, 2 BiLSTM layers with 64 LSTM units, an attention layer with 64 nodes, and 2 fully connected layers with 256 nodes and LeakyReLU activation functions with negative slopes of 0.01 leads to the best generalization performance. The optimum training parameters included a batch size of 32 and a learning rate of 0.00005.
The achieved results on the test sets (not seen during training and validation) are reported in Table III. The results are reported for every fold as well as the whole dataset in terms of the area under the curve (AUC), accuracy (ACC), sensitivity (SE) or recall, specificity (SP), and F1-score (F1). It is observed that depending on the train/validation/test split the localization results vary. For example, the accuracy varied from 84.8% in fold 7 to 100% in fold 3 while the average accuracy over all the folds was 94.3%. A similar pattern is also exhibited for other classification metrics. Overall, the proposed method achieved relatively high performance in terms of all the evaluated metrics.
Our proposed method was compared with three state-of-the-art techniques on the same dataset using the same train, validation, and test sets. The results are reported in Table IV. It is observed that the proposed method substantially outperforms the other techniques. The achieved improvements
can be attributed to our adopted hybrid deep architecture, which extracted both within-channel and between-channel information from the ECG data, and to the attention mechanism, which further focused the model's emphasis on important features.
An ablation study was also performed to evaluate the contribution of different modules of our DNN framework to its overall performance. Fig. 2 compares the performance of our DNN framework when using only the VGG model, VGG and one LSTM layer, VGG and one BiLSTM layer, VGG and two BiLSTM layers, and VGG and two BiLSTM layers with the attention mechanism. The AUC, ACC, SE, SP, PPV, NPV, and F1 were 85%, 88%, 93%, 71%, 92%, 76%, and 92% using the VGG network, 88%, 91%, 96%, 77%, 93%, 84%, and 94% by adding a one-layer LSTM, 91%, 93%, 96%, 82%, 95%, 86%, and 95% by adding a one-layer BiLSTM, 91%, 93%, 96%, 83%, 95%, 88%, and 96% by adding a two-layer BiLSTM, and 93%, 94%, 97%, 84%, 95%, 90%, and 96% by adding an attention mechanism. It is observed that by adding the recurrent architectures the performance is improved, and the best results are achieved when combining the VGG network with two BiLSTM layers and the attention mechanism. For example, the AUC was improved by 6% after adding the recurrent layers and by another 2% after adding the attention mechanism.
## IV Discussion
Twelve-lead ECG is considered the preferred technique for identifying abnormal cardiac conditions [37] as it can be easily performed by placing 10 surface electrodes on the patient's limbs and chest. The use of ECG as a noninvasive tool in the assessment of ventricular arrhythmias is of particular importance for diagnosis and treatment planning. However, the analysis of ECG signals requires highly trained experts.
In this study, we investigated the application of deep learning to identify the origin of IVA from 12-lead ECG data without the need for expert interpretation. A DNN comprising several learning modules including a CNN architecture for spatial feature extraction, an RNN for modeling the time pattern evolution, and an attention mechanism to weigh the most important ECG segments was designed. It was shown that the developed approach can effectively unveil the relationship between the spatiotemporal pattern of multichannel ECG data and the origin of IVA with an AUC of 93.3%. To evaluate the performance of our methods, we used several classification metrics including AUC, ACC, SE, SP, PPV, NPV, and F1-score. AUC shows the separability of the LVOT and RVOT classes. An AUC closer to 1 means better classification performance on both RVOT and LVOT prediction. ACC measures the proportion of correctly classified cases. While it is simple and easy to understand, it cannot give a good understanding of the model performance, especially on an imbalanced dataset. SE shows how well the model can classify RVOT cases correctly, while SP indicates the model's ability to correctly classify LVOT cases. PPV indicates how likely a case predicted as RVOT is truly RVOT, while NPV indicates how likely a case predicted as LVOT is truly LVOT. F1-score is the harmonic mean of sensitivity and PPV, which is especially useful when data is imbalanced. Given that our dataset was imbalanced (257 RVOT cases and 77 LVOT cases), AUC and F1-score can provide a better demonstration of our model performance. In addition, given that the detection of LVOT and RVOT are equally important, SE and SP are of equally high importance.
An ablation study revealed an improved average performance of about 3.5% (average of all evaluation metrics) when adding an RNN (LSTM) module to the CNN (VGG) network. The results were further improved by about 2.8% when using a more powerful two-layer BiLSTM network (see Fig. 2). Adding an attention mechanism further improved the performance by approximately 1.2%, on average. These results show that a hybrid model that can effectively capture the spatiotemporal information hidden in the pattern of multichannel ECG data performs the best among different deep architectures for localizing the origin of IVAs.
Table V compares the proposed approach with other conventional and state-of-the-art techniques for localizing the origin of IVA. Unlike conventional clinical approaches such as electrical mapping [5, 6], substrate mapping [7-11], and pace mapping [12-14], which are invasive, our proposed method is solely based on the analysis of noninvasively measured ECG and therefore poses no significant risk to the patient. Unlike manual approaches that rely on the manual extraction and analysis of ECG morphological features such as R and S wave amplitudes and timings [17, 20, 21, 26], our proposed method automatically learns from the spatiotemporal pattern of ECG multichannel data and therefore does not require expert manual analysis. Unlike semi-automatic machine learning techniques that learn from manually extracted ECG features [23, 25], our algorithm automatically extracts and learns ECG features and is therefore less expensive and less time-consuming. Among fully automatic techniques, our algorithm is the only algorithm that is validated on real data (unlike [23], which used simulated data), extracts both spatial and temporal features of the ECG signal (unlike [24], which only modeled the ECG spatial information), and is fully validated on all available data using a 10-fold cross-validation. The closest results among the existing automatic algorithms are reported in [24] on a dataset of 464 individuals using an SVM model without specific feature extraction. The same approach was implemented and tested on our dataset (see Table IV). Given that the reported results in [24] were calculated on a single hold-out test set, we conjecture that they may not be fully generalizable.
Fig. 2: Effect of the different modules of our DNN framework on its overall performance.
Our method was tested on a modest cohort of 334 individuals with IVA. As the available data was somewhat limited, a cross-validation technique was used to evaluate the generalizability of the model when encountering unseen data.
The significance of our cross-validation approach can be observed in Table III, where it is shown that depending on the selection of the test set, the detection performance can vary. The best achieved performance (AUC) was up to 100%, while by changing the distribution of train and test data the performance decreased to nearly 80% in some folds. To ensure a fair and unbiased evaluation, we reported the average performance among different folds. This approach is superior to the previous validation methods that held out a specific small portion of data for testing the performance [24]. Using a single hold-out set as the test data can lead to an unreliable (and unstable) performance evaluation.
We used an attention mechanism to 1) potentially improve localization performance and 2) enhance the interpretability of the algorithm. It was observed that by adding attention to the network, AUC, ACC, SE, SP, PPV, NPV, and F1 were improved by 1.9%, 0.9%, 0.8%, 1.3%, 0.4%, 2.6%, and 0.6%, respectively. The weights assigned by the attention mechanism to different parts of the ECG signal over time can illustrate their importance in localizing the origin of IVAs. Fig. 3 shows examples of 12-lead ECGs for individuals with RVOT and LVOT. The attention weights are plotted in red on top of the figures. It is observed that attention focuses on specific parts of the input data that are mostly related to abnormal heartbeats. On the other hand, some irrelevant parts of the ECG signal, such as the zero-padded segments, are completely ignored by the attention mechanism.
A limitation of the current study was the relatively limited number of individuals (334) used for performance evaluation. Our dataset was also imbalanced in terms of the target classes (257 patients with RVOT out of 334) and gender distribution (230 females out of 334 individuals). To generate more training data, we tried to augment ECG signals by shifting or squeezing and stretching them along the time axis. However, no improvement in the localization of IVAs was achieved. Future work should focus on more advanced augmentation techniques as well as generating synthetic ECG signals using generative adversarial networks (GANs) [38]. Generating realistic synthetic ECG signals not only helps to provide a larger dataset but can also make the dataset balanced with respect to the target classes or even genders. A semi-supervised approach is also recommended to take advantage of both labeled and unlabeled data [39]. There exist several public datasets containing ECGs with PVC or VT but their origins are not labeled [37]. These unlabeled data can be used in a semi-supervised learning framework along with limited labeled data to achieve a higher localization performance.
The proposed method was solely tested on the localization of right- versus left-sided IVAs. As future work, a more detailed dataset with the precise location of IVA should be collected to evaluate the performance of the proposed method in localizing the exact IVA origins.
## V Conclusion
We proposed a fully automatic deep learning model to identify IVA origin from noninvasively measured 12-lead ECG. Our algorithm achieved a high localization performance (AUC = 93.3%) and outperformed existing techniques. Our proposed algorithm was based on spatial modeling using a VGG CNN, temporal modeling using a two-layer BiLSTM RNN, and temporal weighting using an attention mechanism. Unlike manual and semi-automatic algorithms, the proposed approach provided end-to-end automatic processing of ECG data without the need for any expert analysis. Given the short-duration ECG signals (\(\sim\)10 sec) required for the analysis, our deep learning model provides a cost-effective and low-risk approach toward the identification of the origin of IVAs before cardiac catheter ablation.
|
2310.15581 | Deep ReLU neural networks overcome the curse of dimensionality when
approximating semilinear partial integro-differential equations | In this paper we consider PIDEs with gradient-independent Lipschitz
continuous nonlinearities and prove that deep neural networks with ReLU
activation function can approximate solutions of such semilinear PIDEs without
curse of dimensionality in the sense that the required number of parameters in
the deep neural networks increases at most polynomially in both the dimension $
d $ of the corresponding PIDE and the reciprocal of the prescribed accuracy
$\epsilon $. | Ariel Neufeld, Tuan Anh Nguyen, Sizhou Wu | 2023-10-24T07:46:38Z | http://arxiv.org/abs/2310.15581v3 | # Deep ReLU neural networks overcome the curse of dimensionality
###### Abstract.
In this paper we consider PIDEs with gradient-independent Lipschitz continuous nonlinearities and prove that deep neural networks with ReLU activation function can approximate solutions of such semilinear PIDEs without curse of dimensionality in the sense that the required number of parameters in the deep neural networks increases at most polynomially in both the dimension \(d\) of the corresponding PIDE and the reciprocal of the prescribed accuracy \(\epsilon\).
Key words and phrases: Curse of dimensionality, high-dimensional PDEs, high-dimensional partial integro-differential equations, deep neural networks, multilevel Picard approximations, stochastic fixed point equations, stochastic differential equations with jumps.

2020 Mathematics Subject Classification: 65C99, 65C05, 65C30, 68T07.

Financial support by the Nanyang Assistant Professorship Grant (NAP Grant) _Machine Learning based Algorithms in Finance and Insurance_ is gratefully acknowledged.
and let \(\mathcal{D}\colon\mathbf{N}\to\mathbf{D}\), \(\mathcal{P}\colon\mathbf{N}\to\mathbb{N}\), \(\mathcal{R}\colon\mathbf{N}\to(\cup_{k,l\in\mathbb{N}}C(\mathbb{R}^{k},\mathbb{R} ^{l}))\) satisfy that for all \(H\in\mathbb{N}\), \(k_{0},k_{1},\ldots,k_{H},k_{H+1}\in\mathbb{N}\), \(\Phi=((W_{1},B_{1}),\ldots,(W_{H+1},B_{H+1}))\in\prod_{n=1}^{H+1}\left(\mathbb{ R}^{k_{n}\times k_{n-1}}\times\mathbb{R}^{k_{n}}\right),\,x_{0}\in\mathbb{R}^{k_{0}},\ldots,x_{H}\in\mathbb{R}^{k_{H}}\) with the property that \(\forall\,n\in\mathbb{N}\cap[1,H]\colon x_{n}=\mathbf{A}_{k_{n}}(W_{n}x_{n-1}+B _{n})\) we have that_
\[\mathcal{P}(\Phi)=\sum_{n=1}^{H+1}k_{n}(k_{n-1}+1),\quad\mathcal{D}(\Phi)=(k_ {0},k_{1},\ldots,k_{H},k_{H+1}), \tag{3}\]
\(\mathcal{R}(\Phi)\in C(\mathbb{R}^{k_{0}},\mathbb{R}^{k_{H+1}}),\) _and_
\[(\mathcal{R}(\Phi))(x_{0})=W_{H+1}x_{H}+B_{H+1}. \tag{4}\]
Let us comment on the mathematical objects in Setting 1.1. For all \(d\in\mathbb{N}\), \(\mathbf{A}_{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) refers to the componentwise rectified linear unit (ReLU) activation function. By \(\mathbf{N}\) we denote the set of all parameters characterizing artificial feed-forward DNNs, by \(\mathcal{R}\) we denote the operator that maps each parameters characterizing a DNN to its corresponding function, by \(\mathcal{P}\) we denote the function that counts the number of parameters of the corresponding DNN, and by \(\mathcal{D}\) we denote the function that maps the parameters characterizing a DNN to the vector of its layer dimensions.
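To make these operators concrete, the following minimal NumPy sketch (an illustration added here, with arbitrary layer dimensions) implements the realization \(\mathcal{R}(\Phi)\) from (4) and the parameter count \(\mathcal{P}(\Phi)\) from (3) for a parameter list \(\Phi=((W_{1},B_{1}),\ldots,(W_{H+1},B_{H+1}))\).

```python
import numpy as np

def realization(phi, x):
    """R(Phi): ReLU layers x_n = max(W_n x_{n-1} + B_n, 0), affine output layer."""
    for W, B in phi[:-1]:
        x = np.maximum(W @ x + B, 0.0)
    W, B = phi[-1]
    return W @ x + B

def num_params(phi):
    """P(Phi) = sum_n k_n (k_{n-1} + 1)."""
    return sum(W.size + B.size for W, B in phi)

rng = np.random.default_rng(0)
phi = [(rng.standard_normal((5, 3)), rng.standard_normal(5)),   # k_0=3 -> k_1=5
       (rng.standard_normal((1, 5)), rng.standard_normal(1))]   # k_1=5 -> k_2=1
print(realization(phi, np.ones(3)), num_params(phi))            # P(Phi) = 26
```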
### Main result
**Theorem 1.2**.: _Consider the notations in Subsection 1.4, assume Setting 1.1, let \(T,c\in(0,\infty)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathds{R}^{d}\) let \(\beta_{\varepsilon}^{d}\in C(\mathds{R}^{d},\mathds{R}^{d})\), \(\sigma_{\varepsilon}^{d}\in C(\mathds{R}^{d},\mathds{R}^{d\times d})\), \(\Phi_{\beta_{\varepsilon}^{d}},\Phi_{\sigma_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy that \(\beta_{\varepsilon}^{d}=\mathcal{R}(\Phi_{\beta_{\varepsilon}^{d}})\), \(\sigma_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{\sigma_{\varepsilon}^{d},v})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\gamma_{\varepsilon}^{d}\colon\mathbb{R}^{2d}\to\mathbb{R}^{d}\), \(F_{\varepsilon}^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), \(G^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be measurable and satisfy for all \(y,z\in\mathds{R}^{d}\) that \(\gamma_{\varepsilon}^{d}(y,z)=F_{\varepsilon}^{d}(y)G^{d}(z)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathds{R}^{d}\) let \(\Phi_{F_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy \(F_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{F_{\varepsilon}^{d},v})\), assume for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\), \(v\in\mathds{R}^{d}\) that \(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\) and \(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(g_{\varepsilon}^{d}\in C(\mathds{R}^{d},\mathds{R})\), \(\Phi_{g_{\varepsilon}^{d}}\in\mathbf{N}\) satisfy that \(\mathcal{R}(\Phi_{g_{\varepsilon}^{d}})=g_{\varepsilon}^{d}\), for every \(d\in\mathbb{N}\) let \(\beta^{d}\in C(\mathds{R}^{d},\mathds{R}^{d})\), \(\sigma^{d}\in C(\mathds{R}^{d},\mathds{R}^{d\times d})\), \(g^{d}\in C(\mathds{R}^{d},\mathds{R})\), let \(\gamma^{d}\colon\mathbb{R}^{2d}\to\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be measurable, let \(f\in C(\mathds{R},\mathds{R})\), for every \(d\in\mathbb{N}\) let \(\nu^{d}\colon\mathcal{B}(\mathbb{R}^{d}\setminus\{0\})\to[0,\infty)\) be a Lévy measure, assume that for all \(d\in\mathbb{N}\) there exists \(C_{d}\in(0,\infty)\) such that for all \(x,y,z\in\mathds{R}^{d}\), \(t\in[0,T]\) we have that_
\[\left\|\gamma^{d}(x,z)\right\|\leq C_{d}\left(1\wedge\|z\|^{2}\right),\quad\left\| \gamma^{d}(x,z)-\gamma^{d}(y,z)\right\|^{2}\leq C_{d}\|x-y\|^{2}\left(1\wedge\| z\|^{2}\right), \tag{5}\]
_assume that for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(x,z\in\mathds{R}^{d}\) the Jacobian matrix \((D_{x}\gamma^{d})(x,z)\) exists, assume that for all \(d\in\mathbb{N}\) there exists \(\lambda_{d}\in(0,\infty)\) such that for all \(t\in[0,T]\), \(x,z\in\mathds{R}^{d}\), \(\delta\in[0,1]\) we have that_
\[\lambda_{d}\leq\left|\det(I_{d}+\delta(D_{x}\gamma^{d})(x,z))\right|, \tag{6}\]
_where \(I_{d}\) denotes the \(d\times d\) identity matrix, and assume for all \(d\in\mathbb{N}\), \(x,y\in\mathds{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\), \(\varepsilon\in(0,1)\) that_
\[\left\|\beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(y)\right\|^{2}+\left\| \sigma_{\varepsilon}^{d}(x)-\sigma_{\varepsilon}^{d}(y)\right\|_{\mathrm{F}}^{2} +\int_{\mathds{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma_ {\varepsilon}^{d}(y,z))\right\|^{2}\nu^{d}(dz)\leq c\|x-y\|^{2}, \tag{7}\]
\[|f(w_{1})-f(w_{2})|^{2}\leq c|w_{1}-w_{2}|^{2},\quad\left|g_{ \varepsilon}^{d}(x)-g_{\varepsilon}^{d}(y)\right|^{2}\leq cd^{c}\|x-y\|^{2}, \tag{8}\]
\[\left\|\beta_{\varepsilon}^{d}(0)\right\|^{2}+\left\|\sigma_{ \varepsilon}^{d}(0)\right\|_{\mathrm{F}}^{2}+\int_{\mathds{R}^{d}\setminus\{0 \}}\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d}(dz)+|f(0)|^{2}+|g_{ \varepsilon}^{d}(0)|^{2}\leq cd^{c}, \tag{9}\]
\[\left\|\beta_{\varepsilon}^{d}(x)-\beta^{d}(x)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(x)-\sigma^{d}(x)\right\|_{\mathrm{F}}^{2}+\int_{\mathds{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma^{d}(x,z)\right\|^{2}\,\nu^{d}(dz)+\left|g_{\varepsilon}^{d}(x)-g^{d}(x)\right|^{2}\leq c\varepsilon d^{c}(d^{c}+\|x\|^{2}), \tag{10}\]
_and_
\[\mathcal{P}(\Phi_{\beta_{\varepsilon}^{d}})+\mathcal{P}(\Phi_{\sigma_{\varepsilon}^{d},0})+\mathcal{P}(\Phi_{F_{\varepsilon}^{d},0})+\mathcal{P}(\Phi_{g_{\varepsilon}^{d}})\leq d^{c}\varepsilon^{-c}. \tag{11}\]
_Then_
1. _for every_ \(d\in\mathbb{N}\) _there exists a unique viscosity solution_1 \(u^{d}\colon[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) _to the PIDE_ \[\left\{\begin{array}{l}(\frac{\partial}{\partial t}u^{d})(t,x)+\langle\beta^{d}(x),(\nabla_{x}u^{d})(t,x)\rangle\\ +\frac{1}{2}\mathrm{trace}\big{(}\sigma^{d}(x)(\sigma^{d}(x))^{\top}\mathrm{Hess}_{x}u^{d}(t,x)\big{)}+f(u^{d}(t,x))\\ +\int_{\mathbb{R}^{d}}\Big{(}u^{d}(t,x+\gamma^{d}(x,z))-u^{d}(t,x)-\Big{\langle}(\nabla_{x}u^{d})(t,x),\gamma^{d}(x,z)\Big{\rangle}\Big{)}\,\nu^{d}(dz)=0\\ \quad\forall\,t\in[0,T),x\in\mathbb{R}^{d},\\ u^{d}(T,x)=g^{d}(x)\quad\forall\,x\in\mathbb{R}^{d}\end{array}\right.\] (12) _satisfying that_ \(\sup_{s\in[0,T],y\in\mathbb{R}^{d}}\frac{|u^{d}(s,y)|}{1+\|y\|}<\infty\)_, and_
2. _there exist_ \(\eta\in(0,\infty)\) _and_ \((\Psi_{d,\epsilon})_{d\in\mathbb{N},\epsilon\in(0,1)}\subseteq\mathbf{N}\) _such that for all_ \(d\in\mathbb{N}\)_,_ \(\epsilon\in(0,1)\) _we have that_ \(\mathcal{R}(\Psi_{d,\epsilon})\in C(\mathbb{R}^{d},\mathbb{R})\)_,_ \(\mathcal{P}(\Psi_{d,\epsilon})\leq\eta d^{\eta}\epsilon^{-\eta}\)_, and_ \[\left(\int_{[0,1]^{d}}\left|(\mathcal{R}(\Psi_{d,\epsilon}))(x)-u^{d}(t,x)\right|^{2}dx\right)^{\frac{1}{2}}\leq\epsilon.\] (13)
Footnote 1: For the definition of a viscosity solution see, e.g., [31, Definition 2.7].
Let us make some comments on the mathematical objects in Theorem 1.2. The functions \(\beta^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\), \(d\in\mathbb{N}\), and \(\sigma^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), \(d\in\mathbb{N}\), describe the linear part of the family of PIDEs indexed by \(d\in\mathbb{N}\) in (12). The functions \(g^{d}\colon\mathbb{R}^{d}\to\mathbb{R}\), \(d\in\mathbb{N}\), describe the terminal condition, while the function \(f\colon\mathbb{R}\to\mathbb{R}\) describes the nonlinearity of the PIDEs in (12). Conditions (5) and (6) are needed for the existence and uniqueness of the solution to the PIDE (12) (see the proof of Theorem 5.1). Conditions (7)-(8) are global Lipschitz conditions. Condition (9) states that the values of the input functions at the origin grow at most polynomially in the dimension \(d\in\mathbb{N}\). Condition (10) ensures that the input functions \(\beta^{d},\sigma^{d},\gamma^{d},g^{d}\) can be approximated by the functions \(\beta^{d}_{\varepsilon},\sigma^{d}_{\varepsilon},\gamma^{d}_{\varepsilon},g^{d}_{\varepsilon}\). The bound \(d^{c}\varepsilon^{-c}\), which is a polynomial in \(d\) and \(\varepsilon^{-1}\), in condition (11) ensures that the functions \(\beta^{d}_{\varepsilon},\sigma^{d}_{\varepsilon},\gamma^{d}_{\varepsilon},g^{d}_{\varepsilon}\) can be represented by DNNs without the curse of dimensionality. Under these assumptions Theorem 1.2 states that, roughly speaking, if DNNs can approximate the terminal condition, the linear part, the nonlinearity, and the jump part of the PIDEs in (12) without the curse of dimensionality, then they can also approximate its solution without the curse of dimensionality. We refer to [1, 7, 8, 19, 24, 30] for similar results obtained for PDEs _without any non-local/jump term_.
### Outline of the proof and organization of the paper
Theorem 1.2 follows directly from Theorem 5.1; see the proof of Theorem 1.2, which is provided right after the proof of Theorem 5.1. Although the result presented in Theorem 1.2 is purely deterministic, we use probabilistic arguments to prove its statement. More precisely, we employ the theory of full history recursive MLP approximations, which are numerical approximation methods for which it is known that they overcome the curse of dimensionality. We refer to [31] for the convergence analysis of MLP algorithms for semilinear PIDEs and to [3, 4, 12, 15, 21, 22, 23, 25, 26, 27, 28] for corresponding results proving that MLP algorithms overcome the curse of dimensionality for PDEs without any non-local/jump term.
The main strategy of the proof, roughly speaking, is to demonstrate that these MLP approximations can be represented by DNNs if the coefficients determining the linear part, the jump term, the terminal condition, and the nonlinearity are represented by DNNs (cf. Lemma 4.12). Such ideas have been successfully applied to prove that DNNs overcome the curse of dimensionality in the numerical approximations of _semilinear_ heat equations (see [24]) as well as _semilinear_ Kolmogorov PDEs (see [8]). We also refer to [19, 30] for results proving that DNNs overcome the curse of dimensionality when approximating _linear_ PDEs.
In order to introduce the outline of the proof we first need an MLP setting. For every \(K\in\mathbb{N}\) let \(\lfloor\cdot\rfloor_{K}\colon\mathbb{R}\to\mathbb{R}\) satisfy for all \(t\in\mathbb{R}\) that \(\lfloor t\rfloor_{K}=\max(\{0,\frac{T}{K},\frac{2T}{K},\dots,T\}\cap((-\infty,t)\cup\{0\}))\), let \((\Omega,\mathcal{F},\mathbb{P},(\mathbb{F}_{t})_{t\in[0,T]})\) be a probability space satisfying the usual conditions, let \(\Theta=\cup_{n\in\mathbb{N}}\mathbb{Z}^{n}\), let \(\mathfrak{t}^{\theta}\colon\Omega\to[0,1]\), \(\theta\in\Theta\), be independent and identically distributed random variables which satisfy for all \(t\in(0,1)\) that \(\mathbb{P}\,(\mathfrak{t}^{0}\leq t)=t\), for every \(\theta\in\Theta\), \(t\in[0,T]\) let \(\mathfrak{T}^{\theta}_{t}\colon\Omega\to\mathbb{R}\) satisfy that \(\mathfrak{T}^{\theta}_{t}=t+(T-t)\mathfrak{t}^{\theta}\), for every \(d\in\mathbb{N}\) let \(W^{d,\theta}\colon\Omega\times[0,T]\to\mathbb{R}^{d}\), \(\theta\in\Theta\), be independent standard \((\mathbb{F}_{t})_{t\in[0,T]}\)-Brownian motions, for every \(d\in\mathbb{N}\) let \(N^{d,\theta}\), \(\theta\in\Theta\), be independent \((\mathbb{F}_{t})_{t\in[0,T]}\)-Poisson random measures on \([0,\infty)\times(\mathbb{R}^{d}\setminus\{0\})\) with intensity \(\nu^{d}\), for every \(d\in\mathbb{N}\), \(\theta\in\Theta\) let
\[\tilde{N}^{d,\theta}(dt,dz)=N^{d,\theta}(dt,dz)-dt\,\nu^{d}(dz), \tag{14}\]
and assume for all \(d\in\mathbb{N}\) that \(\mathcal{F}_{0}\), \((\mathfrak{t}^{\theta})_{\theta\in\Theta}\), \((N^{d,\theta})_{\theta\in\Theta}\) and \((W^{d,\theta})_{\theta\in\Theta}\) are independent. First, the viscosity solution to (12) can be represented by the following stochastic fixed point equation (SFPE) (cf. [31, Proposition 5.16]):
\[u^{d}(t,x)=\mathbb{E}\Big{[}g^{d}(X_{T}^{d,0,t,x})\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}f(u^{d}(s,X_{s}^{d,0,t,x}))\Big{]}\,ds \tag{15}\]
where
\[\begin{split} X_{s}^{d,\theta,t,x}&=x+\int_{t}^{s} \beta^{d}(X_{u-}^{d,\theta,t,x})du\\ &\quad+\int_{t}^{s}\sigma^{d}(X_{u-}^{d,\theta,t,x})dW_{u}^{d, \theta}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma^{d}(X_{u-}^{d, \theta,t,x},z)\tilde{N}^{d,\theta}(du,dz).\end{split} \tag{16}\]
We approximate the input functions \(\beta^{d},\sigma^{d},\gamma^{d},g^{d},f\) by \(\beta^{d}_{\varepsilon},\sigma^{d}_{\varepsilon},\gamma^{d}_{\varepsilon}, g^{d}_{\varepsilon},f_{\varepsilon}\) where the functions with index \(\varepsilon\) are represented by DNNs. We then get the following SFPE:
\[u^{d,\varepsilon}(t,x)=\mathbb{E}\Big{[}g^{d}_{\varepsilon}(X_{T}^{d,0,\varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,0,\varepsilon,t,x}))\Big{]}\,ds \tag{17}\]
where
\[\begin{split} X_{s}^{d,\theta,\varepsilon,t,x}&=x+ \int_{t}^{s}\beta^{d}_{\varepsilon}(X_{u-}^{d,\theta,\varepsilon,t,x})du\\ &\quad+\int_{t}^{s}\sigma^{d}_{\varepsilon}(X_{u-}^{d,\theta, \varepsilon,t,x})dW_{u}^{d,\theta}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus \{0\}}\gamma^{d}_{\varepsilon}(X_{u-}^{d,\theta,\varepsilon,t,x},z)\tilde{N} ^{d,\theta}(du,dz).\end{split} \tag{18}\]
Next, we approximate the stochastic differential equation (SDE) in (18) by the Euler-Maruyama discretization:
\[\begin{split} X_{s}^{d,\theta,K,\varepsilon,t,x}&=x+\int_{t}^{s}\beta^{d}_{\varepsilon}(X_{\max\{t,\lfloor u\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})du+\int_{t}^{s}\sigma^{d}_{\varepsilon}(X_{\max\{t,\lfloor u\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})dW_{u}^{d,\theta}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma^{d}_{\varepsilon}(X_{\max\{t,\lfloor u\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x},z)\,\tilde{N}^{d,\theta}(du,dz).\end{split} \tag{19}\]
The latter is associated to the following SFPE:
\[u^{d,K,\varepsilon}(t,x)=\mathbb{E}\Big{[}g^{d}_{\varepsilon}(X_{T}^{d,0,K,\varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}f_{\varepsilon}(u^{d,K,\varepsilon}(s,X_{s}^{d,0,K,\varepsilon,t,x}))\Big{]}\,ds. \tag{20}\]
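For intuition, the time discretization in (19) can be sketched for a scalar jump-diffusion with a compound Poisson jump part (a special case in which the Lévy measure is finite, so the compensated Poisson integral reduces to the simulated jumps minus a drift correction); all coefficients below are illustrative stand-ins rather than the coefficients of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, lam = 1.0, 100, 2.0           # horizon, grid size, jump intensity

beta  = lambda x: -x                # drift coefficient
sigma = lambda x: 0.5               # diffusion coefficient
gamma = lambda x, z: 0.1 * x * z    # jump coefficient, linear in the mark z
mean_z = 0.0                        # E[Z] for standard normal jump marks

def euler_maruyama(x0: float) -> float:
    x, h = x0, T / K
    for _ in range(K):
        dW = np.sqrt(h) * rng.standard_normal()
        jumps = sum(gamma(x, rng.standard_normal())
                    for _ in range(rng.poisson(lam * h)))
        # compensator h * int gamma(x, z) nu(dz); exact here as gamma is linear in z
        comp = lam * h * 0.1 * x * mean_z
        x = x + beta(x) * h + sigma(x) * dW + jumps - comp
    return x

print(np.mean([euler_maruyama(1.0) for _ in range(1000)]))
```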
The DNNs that approximate \(u^{d}\), \(d\in\mathbb{N}\), will be constructed from the following MLP approximation:
\[\begin{split} U_{n,m}^{d,\theta,K,\varepsilon}(t,x)&=\frac{1_{\mathbb{N}}(n)}{m^{n}}\sum_{i=1}^{m^{n}}g^{d}_{\varepsilon}(X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x})\\ &\quad+\sum_{\ell=0}^{n-1}\frac{T-t}{m^{n-\ell}}\sum_{i=1}^{m^{n-\ell}}\Big{(}f\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}-1_{\mathbb{N}}(\ell)\,f\circ U_{\ell-1,m}^{d,(\theta,-\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)},X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}}^{d,(\theta,\ell,i),K,\varepsilon,t,x}\Big{)}\end{split} \tag{21}\]
(cf. Lemma 4.12). We then decompose the error \(U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d}(t,x)\) as follows:
\[\begin{split} U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d}(t,x)\\ &=\underbrace{U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d,K, \varepsilon}(t,x)}_{=:E_{1}}+\underbrace{u^{d,K,\varepsilon}(t,x)-u^{d, \varepsilon}(t,x)}_{=:E_{2}}+\underbrace{u^{d,\varepsilon}(t,x)-u^{d}(t,x)}_{=: E_{3}}\end{split} \tag{22}\]
where \(E_{1}\) is estimated in Lemma 3.3, \(E_{2}\) is estimated in Lemma 3.2, and \(E_{3}\) is estimated in Lemma 2.1. In the proof of our main result, Theorem 5.1, we combine these results and construct a DNN realization on a probability space that achieves the prescribed approximation accuracy \(\epsilon\in(0,1)\) between \(U_{n,m}^{d,\theta,K,\varepsilon}(t,x)\) and \(u^{d}(t,x)\). Note that Lemmas 3.3, 3.2, and 2.1 are stability and perturbation results that can be read without knowledge of DNNs.
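For readers unfamiliar with MLP schemes, the following toy Python sketch implements the recursive Monte Carlo structure of the recursion (21) above in a drastically simplified setting (one dimension, no jump part, \(\beta^{d}=0\), \(\sigma^{d}=1\), \(U_{0}\equiv 0\), and exact simulation of the underlying Brownian motion); it only illustrates the nesting of the levels and is not the scheme analyzed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
f = np.sin                 # illustrative nonlinearity
g = lambda x: x ** 2       # illustrative terminal condition

def X(t, x, s):
    """Exact sample of dX = dW from time t to s for this toy SDE."""
    return x + np.sqrt(max(s - t, 0.0)) * rng.standard_normal()

def U(n, m, t, x):
    """MLP estimate of the SFPE solution u(t, x); U_0 = 0 by convention."""
    if n == 0:
        return 0.0
    out = np.mean([g(X(t, x, T)) for _ in range(m ** n)])
    for l in range(n):
        acc, M = 0.0, m ** (n - l)
        for _ in range(M):
            s = t + (T - t) * rng.random()   # the uniform time T_t^theta
            xi = X(t, x, s)                  # both levels use the same sample
            acc += f(U(l, m, s, xi)) - (f(U(l - 1, m, s, xi)) if l > 0 else 0.0)
        out += (T - t) * acc / M
    return out

print(U(3, 3, 0.0, 1.0))   # Monte Carlo estimate of u(0, 1)
```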
The remaining part of the paper is organized as follows. In Section 2 we establish a stability result on SFPEs that demonstrates the error \(E_{3}\). In Section 3 we recall basic facts on Euler-Maruyama and MLP approximations and establish the error \(E_{1}\) of the MLP approximation as well as the discretization error \(E_{2}\). Section 4 introduces a mathematical framework for DNNs and demonstrates the connection between DNNs and MLP approximations, see Lemma 4.12. Finally, Section 5 combines the results of the previous sections to prove the main results, Theorem 5.1 and Theorem 1.2.
### Notations
Let \(\|\cdot\|,|||\cdot|||\colon(\cup_{d\in\mathbb{N}}\mathbb{R}^{d})\to[0,\infty)\) and \(\dim\colon(\cup_{d\in\mathbb{N}}\mathbb{R}^{d})\to\mathbb{N}\) satisfy for all \(d\in\mathbb{N}\), \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) that \(\|x\|=\sqrt{\sum_{i=1}^{d}(x_{i})^{2}}\), \(|||x|||=\max_{i\in[1,d]\cap\mathbb{N}}|x_{i}|\), and \(\dim(x)=d\), and let \(\|\cdot\|_{\mathrm{F}}\colon\cup_{d\in\mathbb{N}}\mathbb{R}^{d\times d}\to[0,\infty)\) satisfy for all \(d\in\mathbb{N}\), \(x=(x_{ij})_{i,j\in[1,d]\cap\mathbb{Z}}\in\mathbb{R}^{d\times d}\) that \(\|x\|_{\mathrm{F}}^{2}=\sum_{i,j=1}^{d}|x_{ij}|^{2}\).
## 2. Approximation of the coefficients
In Lemma 2.1 below we approximate the solution to the SFPE (30) through the solution to the SFPE (31), whose linear part, terminal condition, and nonlinearity can be exactly represented through suitable DNNs.
**Lemma 2.1** (A stability result).: _Consider the notations given in Subsection 1.4, let \(T\in(0,\infty)\), \(c\in[1,\infty)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\beta_{\varepsilon}^{d},\beta^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\sigma_{\varepsilon}^{d},\sigma^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d\times d})\), \(g_{\varepsilon}^{d},g^{d}\in C(\mathbb{R}^{d},\mathbb{R})\), \(f_{\varepsilon},f\in C(\mathbb{R},\mathbb{R})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\gamma^{d},\gamma_{\varepsilon}^{d}\colon\mathbb{R}^{2d}\rightarrow\mathbb{R}^{d}\) be measurable, for every \(d\in\mathbb{N}\) let \(\nu^{d}\colon\mathcal{B}(\mathbb{R}^{d}\setminus\{0\})\to[0,\infty)\) be a Lévy measure, let \((\Omega,\mathcal{F},\mathbb{P},(\mathbb{F}_{t})_{t\in[0,T]})\) be a filtered probability space satisfying the usual conditions, for every \(d\in\mathbb{N}\) let \(W^{d}\colon\Omega\times[0,T]\rightarrow\mathbb{R}^{d}\) be a standard \((\mathbb{F}_{t})_{t\in[0,T]}\)-Brownian motion, for every \(d\in\mathbb{N}\) let \(N^{d}\) be an \((\mathbb{F}_{t})_{t\in[0,T]}\)-Poisson random measure on \([0,\infty)\times(\mathbb{R}^{d}\setminus\{0\})\) with intensity \(\nu^{d}\), for every \(d\in\mathbb{N}\) let \(\tilde{N}^{d}(dt,dz)=N^{d}(dt,dz)-dt\,\nu^{d}(dz)\), assume for all \(d\in\mathbb{N}\) that \(\mathcal{F}_{0}\), \(N^{d}\) and \(W^{d}\) are independent, and assume for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\), \(\varepsilon\in(0,1)\) that_
\[\left\|\beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(y)\right\|^{2}+ \left\|\sigma_{\varepsilon}^{d}(x)-\sigma_{\varepsilon}^{d}(y)\right\|_{ \mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{ \varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(y,z)\right\|^{2}\nu^{d}(dz) \leq c\|x-y\|^{2}, \tag{23}\]
\[\left|f_{\varepsilon}(w_{1})-f_{\varepsilon}(w_{2})\right|^{2}\leq c|w_{1}-w_ {2}|^{2},\quad T\left|g_{\varepsilon}^{d}(x)-g_{\varepsilon}^{d}(y)\right|^{2} \leq cd^{c}\|x-y\|^{2}, \tag{24}\]
\[\left\|\beta_{\varepsilon}^{d}(0)\right\|^{2}+\left\|\sigma_{ \varepsilon}^{d}(0)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0 \}}\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d}(dz)+T^{3}|f_{ \varepsilon}(0)|^{2}+T|g_{\varepsilon}^{d}(0)|^{2}\leq cd^{c}, \tag{25}\]
_and_
\[\left\|\beta_{\varepsilon}^{d}(x)-\beta^{d}(x)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(x)-\sigma^{d}(x)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma^{d}(x,z)\right\|^{2}\nu^{d}(dz)+\left|g_{\varepsilon}^{d}(x)-g^{d}(x)\right|^{2}+|f_{\varepsilon}(w_{1})-f(w_{1})|^{2}\leq c\varepsilon d^{c}(d^{c}+\|x\|^{2})+\varepsilon|w_{1}|^{4}, \tag{26}\]
_for every \(d\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) let \((X_{s}^{d,t,x})_{s\in[t,T]}\), \((X_{s}^{d,\varepsilon,t,x})_{s\in[t,T]}\) be \((\mathbb{F}_{t})_{t\in[0,T]}\)-adapted cadlag processes which satisfy for all \(s\in[t,T]\) that \(\mathbb{P}\)-a.s._
\[X_{s}^{d,t,x}=x+\int_{t}^{s}\beta^{d}(X_{r-}^{d,t,x})dr+\int_{t}^{s}\sigma^{d} (X_{r-}^{d,t,x})dW_{r}^{d}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}} \gamma^{d}(X_{r-}^{d,t,x},z)\tilde{N}^{d}(dr,dz) \tag{27}\]
_and_
\[X_{s}^{d,\varepsilon,t,x}=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x})dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon, t,x})dW_{r}^{d}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{ \varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\tilde{N}^{d}(dr,dz), \tag{28}\]
_and for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(u^{d},u^{d,\varepsilon}\colon[0,T]\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) be measurable and satisfy for all \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that \(\mathbb{E}\!\left[\left|g^{d}(X_{T}^{d,t,x})\right|\right]+\int_{t}^{T}\mathbb{E} \!\left[\left|f(u^{d}(s,X_{s}^{d,t,x}))\right|\right]ds<\infty\), \(\mathbb{E}\!\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})\right| \right]+\int_{t}^{T}\mathbb{E}\!\left[\left|f_{\varepsilon}(u^{d,\varepsilon}( s,X_{s}^{d,\varepsilon,t,x}))\right|\right]ds<\infty\),_
\[\sup_{s\in[0,T]}\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d}(s,y) \right|+\left|u^{d,\varepsilon}(s,y)\right|}{1+\left\|y\right\|}<\infty, \tag{29}\]
\[u^{d}(t,x)=\mathbb{E}\!\left[g^{d}(X_{T}^{d,t,x})\right]+\int_{t}^{T}\mathbb{E} \!\left[f(u^{d}(s,X_{s}^{d,t,x}))\right]ds, \tag{30}\]
_and_
\[u^{d,\varepsilon}(t,x)=\mathbb{E}\!\left[g_{\varepsilon}^{d}(X_{T}^{d, \varepsilon,t,x})\right]+\int_{t}^{T}\mathbb{E}\!\left[f_{\varepsilon}(u^{d, \varepsilon}(s,X_{s}^{d,\varepsilon,t,x}))\right]ds. \tag{31}\]
_Then_
1. _for all_ \(d\in\mathbb{N}\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T]\)_,_ \(s\in[t,T]\)_,_ \(x\in\mathbb{R}^{d}\) _we have that_ \[\max\biggl\{\mathbb{E}\biggl[d^{c}+\left\|X_{s}^{d,\varepsilon,t,x}\right\|^{2}\biggr],\mathbb{E}\biggl[d^{c}+\left\|X_{s}^{d,t,x}\right\|^{2}\biggr]\biggr\}\leq(d^{c}+\|x\|^{2})e^{7c(s-t)},\] (32)
2. _for all_ \(d\in\mathbb{N}\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T]\)_,_ \(x,y\in\mathbb{R}^{d}\) _we have that_ \[\left|u^{d,\varepsilon}(t,x)-u^{d,\varepsilon}(t,y)\right|\leq 2(cd^{c}T^{-1})^{ \frac{1}{2}}\|x-y\|e^{5cT+2cT^{2}},\] (33) _and_
3. _for all_ \(d\in\mathbb{N}\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T]\)_,_ \(x\in\mathbb{R}^{d}\) _we have that_ \[\left|u^{d,\varepsilon}(t,x)-u^{d}(t,x)\right|\leq 22cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{24cT+5cT^{2}}.\] (34)
Proof of Lemma 2.1.: Throughout this proof let \(\langle\cdot,\cdot\rangle\colon\cup_{d\in\mathbb{N}}\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) satisfy for every \(d\in\mathbb{N}\), \(x=(x_{1},\ldots,x_{d}),y=(y_{1},\ldots,y_{d})\in\mathbb{R}^{d}\) that \(\langle x,y\rangle=\sum_{i=1}^{d}x_{i}y_{i}\). First, the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^{2}\), (25), and (23) show for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\|\beta_{\varepsilon}^{d}(x)\|^{2}\leq 2\|\beta_{\varepsilon}^{d}(0)\|^{2}+2\| \beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(0)\|^{2}\leq 2cd^{c}+2c\|x\|^{2} =2c(d^{c}+\|x\|^{2}). \tag{35}\]
Similarly, we have for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\left\|\sigma_{\varepsilon}^{d}(x)\right\|_{\mathrm{F}}^{2}\leq 2c(d^{c}+\|x\|^{2}). \tag{36}\]
Next, the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2 \|y\|^{2}\), (25), and (23) show for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(x,z)\right\|^{2}\nu^{d}(dz)&\leq\int_{\mathbb{R}^{d}\setminus\{0\}}2\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}+2\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d}(dz)\\ &\leq 2cd^{c}+2c\|x\|^{2}=2c(d^{c}+\|x\|^{2}).\end{split} \tag{37}\]
Next, Ito's formula (see, e.g., [20, Theorem 3.1]) and (28) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that \(\mathbb{P}\)-a.s. we have that
\[\begin{split}\left\|X_{s}^{d,\varepsilon,t,x}\right\|^{2}& =\|x\|^{2}+\int_{t}^{s}\left(2\left\langle X_{r}^{d,\varepsilon,t, x},\beta_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\rangle+\left\| \sigma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2} \right)dr\\ &\quad+2\int_{t}^{s}\sum_{i,j=1}^{d}\left(X_{r-}^{d,\varepsilon,t,x}\right)_{i}\left(\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x}) \right)_{ij}d(W_{j}^{d})_{r}\\ &\quad+2\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\langle X _{r-}^{d,\varepsilon,t,x},\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z )\right\rangle\tilde{N}^{d}(dz,dr)\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\| \gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\right\|^{2}N^{d}(dz,dr ).\end{split} \tag{38}\]
Next, for every \(d,n\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) let \(\tau_{n}^{d,\varepsilon,x}\colon\Omega\to\mathbb{R}\) satisfy that
\[\begin{split}\tau_{n}^{d,\varepsilon,x}=\inf\biggl\{&s\in[t,T]\colon\int_{t}^{s}\left(2\left\langle X_{r}^{d,\varepsilon,t,x},\beta_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\rangle+\left\|\sigma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}\right)dr\\ &\quad+\int_{t}^{s}\sum_{i,j=1}^{d}\left|\left(X_{r}^{d,\varepsilon,t,x}\right)_{i}\left(\sigma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right)_{ij}\right|^{2}dr\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\sum_{i=1}^{d}\left|\left(X_{r}^{d,\varepsilon,t,x}\right)_{i}\left(\gamma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x},z)\right)_{i}\right|^{2}\nu^{d}(dz)\,dr\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\geq n\biggr\}\wedge T\end{split} \tag{39}\]
(with the convention that \(\inf\emptyset=\infty\)). Then (38), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon 2\langle x,y\rangle\leq\|x\|^{2}+\|y\|^ {2}\), (35), (36), (37), and the fact that \(c\geq 1\) show for all \(d,n\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\bigg{[}d^{c}+\left\|X_{s\wedge\tau_{n}^{d, \varepsilon,x}}^{d,\varepsilon,t,x}\right\|^{2}\bigg{]}\\ &=d^{c}+\|x\|^{2}+\mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{n}^{d, \varepsilon,x}}\left(2\left\langle X_{r}^{d,\varepsilon,t,x},\beta_{ \varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\rangle+\left\|\sigma_{ \varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}\right)dr \Bigg{]}\\ &\quad+\mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{n}^{d, \varepsilon,x}}\int_{\mathrm{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon }^{d}(X_{r}^{d,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\Bigg{]}\\ &\leq d^{c}+\|x\|^{2}+\mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{ n}^{d,\varepsilon,x}}\left\|X_{r}^{d,\varepsilon,t,x}\right\|^{2}dr\Bigg{]}+ \mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{n}^{d,\varepsilon,x}}\left\|\beta_{ \varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x})\right\|^{2}dr\Bigg{]}\\ &\quad+\mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{n}^{d, \varepsilon,x}}\left\|\sigma_{\varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x}) \right\|_{\mathrm{F}}^{2}dr\Bigg{]}+\mathbb{E}\Bigg{[}\int_{t}^{s\wedge\tau_{ n}^{d,\varepsilon,x}}\int_{\mathrm{R}^{d}\setminus\{0\}}\left\|\gamma_{ \varepsilon}^{d}(X_{r}^{d,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr \Bigg{]}\end{split} \tag{40}\]
and
\[\begin{split}&\mathbb{E}\bigg[d^{c}+\left\|X_{s\wedge\tau_{n}^{d,\varepsilon,x}}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\\ &\leq d^{c}+\|x\|^{2}+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,\varepsilon,x}}\left(d^{c}+\left\|X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,\varepsilon,x}}2c\left(d^{c}+\left\|X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]\\ &\quad+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,\varepsilon,x}}2c\left(d^{c}+\left\|X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,\varepsilon,x}}2c\left(d^{c}+\left\|X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]\\ &\leq d^{c}+\|x\|^{2}+7c\int_{t}^{s}\mathbb{E}\left[d^{c}+\left\|X_{r\wedge\tau_{n}^{d,\varepsilon,x}}^{d,\varepsilon,t,x}\right\|^{2}\right]dr.\end{split} \tag{41}\]
This, Fatou's lemma, and Gronwall's inequality show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\mathbb{E}\bigg{[}d^{c}+\left\|X_{s}^{d,\varepsilon,t,x}\right\|^{2}\bigg{]} \leq\liminf_{n\to\infty}\mathbb{E}\bigg{[}d^{c}+\left\|X_{s\wedge\tau_{n}^{d,\varepsilon,x}}^{d,\varepsilon,t,x}\right\|^{2}\bigg{]}\leq(d^{c}+\|x\|^{2}) e^{7c(s-t)}. \tag{42}\]
Next, using (26) and letting \(\varepsilon\) tend to zero in (23) and (25) we obtain that
\[\left\|\beta^{d}(x)-\beta^{d}(y)\right\|^{2}+\left\|\sigma^{d}(x)-\sigma^{d}(y)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma^{d}(x,z)-\gamma^{d}(y,z)\right\|^{2}\nu^{d}(dz)\leq c\|x-y\|^{2} \tag{43}\]
and
\[\left\|\beta^{d}(0)\right\|^{2}+\left\|\sigma^{d}(0)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma^{d}(0,z)\right\|^{2}\nu^{d}(dz)\leq cd^{c}. \tag{44}\]
Using a similar argument as that for (42) we then obtain for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\) that
\[\mathbb{E}\bigg{[}d^{c}+\left\|X_{s}^{d,t,x}\right\|^{2}\bigg{]}\leq(d^{c}+\|x \|^{2})e^{7c(s-t)}. \tag{45}\]
This and (42) show (i).
Next, (26) and (45) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\bigg{[}\int_{t}^{s}\left\|\beta_{ \varepsilon}^{d}(X_{r-}^{d,t,x})-\beta^{d}(X_{r-}^{d,t,x})\right\|^{2}dr \bigg{]}\\ &\leq\int_{t}^{s}\varepsilon cd^{c}\left(d^{c}+\mathbb{E}\left[ \left\|X_{r}^{d,t,x}\right\|^{2}\right]\right)dr\\ &\leq\int_{t}^{s}\varepsilon cd^{c}(d^{c}+\|x\|^{2})e^{7c(r-t)} \,dr\leq\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}\end{split} \tag{46}\]
and similarly
\[\int_{t}^{s}\mathbb{E}\bigg[\left\|\sigma_{\varepsilon}^{d}(X_{r-}^{d,t,x})-\sigma^{d}(X_{r-}^{d,t,x})\right\|_{\mathrm{F}}^{2}\bigg]\,dr\leq\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}. \tag{47}\]
Next, (26) and (45) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathbb{E}\left[\left\|\gamma_{\varepsilon}^{d}(X_{r-}^{d,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z)\right\|^{2}\right]\nu^{d}(dz)\,dr\\ &\leq\int_{t}^{s}\varepsilon cd^{c}\left(d^{c}+\mathbb{E}\left[\left\|X_{r}^{d,t,x}\right\|^{2}\right]\right)dr\\ &\leq\int_{t}^{s}\varepsilon cd^{c}(d^{c}+\|x\|^{2})e^{7c(r-t)}\,dr\leq\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}.\end{split} \tag{48}\]
Next, Holder's inequality, the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^ {2}\), (23), and (46) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{t}^{s}(\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta^{d}(X_{r-}^{d,t,x}))\,dr\right\|^{2}\right]\\ &\leq\mathbb{E}\Bigg[\left(\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta^{d}(X_{r-}^{d,t,x})\right\|dr\right)^{2}\Bigg]\\ &\leq\mathbb{E}\Bigg[\left(\int_{t}^{s}dr\right)\left(\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta^{d}(X_{r-}^{d,t,x})\right\|^{2}dr\right)\Bigg]\\ &\leq T\mathbb{E}\Bigg[\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta^{d}(X_{r-}^{d,t,x})\right\|^{2}dr\Bigg]\\ &\leq 2T\mathbb{E}\Bigg[\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{d,t,x})\right\|^{2}dr\Bigg]+2T\mathbb{E}\left[\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{r-}^{d,t,x})-\beta^{d}(X_{r-}^{d,t,x})\right\|^{2}dr\right]\\ &\leq 2Tc\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2T\cdot\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}\\ &=2cT\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})T(s-t)e^{7cT}.\end{split} \tag{49}\]
Furthermore, Ito's isometry, the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d\times d}\colon\|x+y\|_{\rm F}^{2} \leq 2\|x\|_{\rm F}^{2}+2\|y\|_{\rm F}^{2}\), (23), and (47) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\Bigg[\left\|\int_{t}^{s}(\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\sigma^{d}(X_{r-}^{d,t,x}))\,dW_{r}^{d}\right\|^{2}\Bigg]\\ &=\mathbb{E}\Bigg[\int_{t}^{s}\left\|\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\sigma^{d}(X_{r-}^{d,t,x})\right\|_{\mathrm{F}}^{2}dr\Bigg]\\ &\leq 2\int_{t}^{s}\mathbb{E}\Bigg[\left\|\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d,t,x})\right\|_{\mathrm{F}}^{2}\Bigg]dr+2\int_{t}^{s}\mathbb{E}\left[\left\|\sigma_{\varepsilon}^{d}(X_{r-}^{d,t,x})-\sigma^{d}(X_{r-}^{d,t,x})\right\|_{\mathrm{F}}^{2}\right]dr\\ &\leq 2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}.\end{split} \tag{50}\]
Next, Ito's isometry (see, e.g., [9, Proposition 8.8]), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^ {2}\), (23), and (48) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}(\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z))\,\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\\ &=\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathbb{E}\left[\left\|\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z)\right\|^{2}\right]\nu^{d}(dz)\,dr\\ &\leq 2\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathbb{E}\left[\left\|\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma_{\varepsilon}^{d}(X_{r-}^{d,t,x},z)\right\|^{2}\right]\nu^{d}(dz)\,dr\\ &\quad+2\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathbb{E}\left[\left\|\gamma_{\varepsilon}^{d}(X_{r-}^{d,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z)\right\|^{2}\right]\nu^{d}(dz)\,dr\\ &\leq 2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}.\end{split} \tag{51}\]
Next, (27) and (28) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that \(\mathbb{P}\)-a.s.
\[\begin{split}& X_{s}^{d,\varepsilon,t,x}-X_{s}^{d,t,x}\\ &=\int_{t}^{s}(\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t, x})-\beta^{d}(X_{r-}^{d,t,x}))\,dr+\int_{t}^{s}(\sigma_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x})-\sigma^{d}(X_{r-}^{d,t,x}))\,dW_{r}^{d}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}(\gamma_{ \varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z)) \,\tilde{N}^{d}(dr,dz).\end{split} \tag{52}\]
This, the fact that \(\forall\,d\in\mathbb{N},x,y,z\in\mathbb{R}^{d}\colon\|x+y+z\|^{2}\leq 3\|x\|^{2}+3 \|y\|^{2}+3\|z\|^{2}\), (49), (50), and (51) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|X_{s}^{d,\varepsilon,t,x}-X_{s}^{d,t,x}\right\|^{2}\right]\\ &\leq 3\mathbb{E}\left[\left\|\int_{t}^{s}(\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta^{d}(X_{r-}^{d,t,x}))\,dr\right\|^{2}\right]+3\mathbb{E}\left[\left\|\int_{t}^{s}(\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\sigma^{d}(X_{r-}^{d,t,x}))\,dW_{r}^{d}\right\|^{2}\right]\\ &\quad+3\mathbb{E}\left[\left\|\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}(\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma^{d}(X_{r-}^{d,t,x},z))\,\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\\ &\leq 3\left[2cT\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})T(s-t)e^{7cT}\right]\\ &\quad+3\left[2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}\right]\\ &\quad+3\left[2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+2\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}\right]\\ &=(12c+6cT)\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,\varepsilon,t,x}-X_{r}^{d,t,x}\right\|^{2}\right]dr+3\left(2T+4\right)\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}.\end{split} \tag{53}\]
This, Gronwall's inequality, (42), (45), and the fact that \(3\,(2T+4)\leq 12(T+1)\leq 12e^{cT}\) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[\left\|X_{s}^{d,\varepsilon,t,x}-X_{s}^{d,t,x}\right\|^{2}\right]&\leq 3\,(2T+4)\,\varepsilon cd^{c}(d^{c}+\|x\|^{2})(s-t)e^{7cT}e^{(12c+6cT)T}\\ &\leq 12e^{cT}\varepsilon cd^{c}(d^{c}+\|x\|^{2})e^{7cT}e^{(12c+6cT)T}(s-t)\\ &=12\varepsilon cd^{c}(d^{c}+\|x\|^{2})e^{20cT+6cT^{2}}(s-t).\end{split} \tag{54}\]
Next, (28), the fact that \(\forall\,d\in\mathbb{N},x_{1},x_{2},x_{3},x_{4}\in\mathbb{R}^{d}\colon\| \sum_{i=1}^{4}x_{i}\|^{2}\leq 4\sum_{i=1}^{4}\|x_{i}\|^{2}\), Jensen's inequality, Ito's isometry, and (23) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[\left\|X_{s}^{d,\varepsilon,t,x}-X_{s }^{d,\varepsilon,t,y}\right\|^{2}\right]&\leq 4\|x-y\|^{2}+4\mathbb{E}\left[\left\|\int_{t}^{s}( \beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_ {r-}^{d,\varepsilon,t,y}))dr\right\|^{2}\right]\\ &\quad+4\mathbb{E}\left[\left\|\int_{t}^{s}(\sigma_{\varepsilon}^{ d}(X_{r-}^{d,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,y}))dW_{r}^{d}\right\|^{2}\right]\\ &\quad+4\mathbb{E}\left[\left\|\int_{t}^{s}\int_{\mathbb{R}^{d} \setminus\{0\}}(\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma _{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,y},z))\tilde{N}^{d}(dr,dz)\right\| ^{2}\right]\end{split} \tag{55}\]
and
\[\mathbb{E}\left[\left\|X_{s}^{d,\varepsilon,t,x}-X_{s}^{d, \varepsilon,t,y}\right\|^{2}\right] \tag{56}\] \[\leq 4\|x-y\|^{2}+4T\mathbb{E}\left[\int_{t}^{s}\left\|\beta_{ \varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{ d,\varepsilon,t,y})\right\|^{2}dr\right]\] \[\quad+4\mathbb{E}\left[\int_{t}^{s}\left\|\sigma_{\varepsilon}^{d }(X_{r-}^{d,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon, t,y})\right\|^{2}_{\text{F}}dr\right]\] \[\quad+4\mathbb{E}\left[\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus \{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)-\gamma_{ \varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,y},z)\right\|^{2}\nu^{d}(dz)\,dr\right]\] \[\leq 4\|x-y\|^{2}+4T\mathbb{E}\left[\int_{t}^{s}c\left\|X_{r-}^{ d,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,y}\right\|^{2}dr\right]+4\mathbb{E} \left[\int_{t}^{s}c\left\|X_{r-}^{d,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,y}\right\|^{2}dr\right]\] \[\quad+4\mathbb{E}\left[\int_{t}^{s}c\left\|X_{r-}^{d,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,y}\right\|^{2}dr\right]\] \[=4\|x-y\|^{2}+(4Tc+8c)\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{ d,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,y}\right\|^{2}\right]dr.\]
This, Gronwall's lemma, and (42) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\mathbb{E}\left[\left\|X_{s}^{d,\varepsilon,t,x}-X_{s}^{d, \varepsilon,t,y}\right\|^{2}\right]\leq 4\|x-y\|^{2}e^{(4Tc+8c)(s-t)}\leq 4\|x-y\|^{2}e ^{(4Tc+8c)T}=4\|x-y\|^{2}e^{8cT+4cT^{2}}. \tag{57}\]
Next, (24), Jensen's inequality, and (54) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,t,x})\right|\right]&\leq\mathbb{E}\left[(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\left\|X_{T}^{d,\varepsilon,t,x}-X_{T}^{d,t,x}\right\|\right]\\ &\leq(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\left(\mathbb{E}\left[\left\|X_{T}^{d,\varepsilon,t,x}-X_{T}^{d,t,x}\right\|^{2}\right]\right)^{\frac{1}{2}}\\ &\leq(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\left(12\varepsilon cd^{c}(d^{c}+\|x\|^{2})e^{20cT+6cT^{2}}T\right)^{\frac{1}{2}}\\ &\leq 4cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{10cT+3cT^{2}}.\end{split} \tag{58}\]
Next, (26), Jensen's inequality, and (45) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,t,x})-g^{d}(X_{T}^{d,t,x})\right|\right]&\leq\mathbb{E}\left[\left|\varepsilon cd^{c}\left(d^{c}+\left\|X_{T}^{d,t,x}\right\|^{2}\right)\right|^{\frac{1}{2}}\right]\leq(\varepsilon cd^{c})^{\frac{1}{2}}\left(\mathbb{E}\left[d^{c}+\left\|X_{T}^{d,t,x}\right\|^{2}\right]\right)^{\frac{1}{2}}\\ &\leq(\varepsilon cd^{c})^{\frac{1}{2}}\left((d^{c}+\|x\|^{2})e^{7cT}\right)^{\frac{1}{2}}\\ &=(\varepsilon cd^{c})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5cT}.\end{split} \tag{59}\]
This, the triangle inequality, and (58) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})-g^{d}(X_{T}^{d,t,x})\right|\right]&\leq\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,t,x})\right|\right]+\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{T}^{d,t,x})-g^{d}(X_{T}^{d,t,x})\right|\right]\\ &\leq 4cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{10cT+3cT^{2}}+(\varepsilon cd^{c})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5cT}\\ &\leq 5cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{10cT+3cT^{2}}.\end{split} \tag{60}\]
This, Jensen's inequality, and (45) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathds{R}^{d}\) that
\[\begin{split}\mathbb{E}\left[\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,\tilde{x}})-g^{d}(X_{T}^{d,t,\tilde{x}})\Big|\Big]\Big|_{\tilde{x}=X_{t}^{d,s,x}}\right]&\leq 5cd^{c}\varepsilon^{\frac{1}{2}}\,\mathbb{E}\left[\left(d^{c}+\left\|X_{t}^{d,s,x}\right\|^{2}\right)^{\frac{1}{2}}\right]e^{10cT+3cT^{2}}\\ &\leq 5cd^{c}\varepsilon^{\frac{1}{2}}\left(\mathbb{E}\left[d^{c}+\left\|X_{t}^{d,s,x}\right\|^{2}\right]\right)^{\frac{1}{2}}e^{10cT+3cT^{2}}\\ &\leq 5cd^{c}\varepsilon^{\frac{1}{2}}\left((d^{c}+\|x\|^{2})e^{7cT}\right)^{\frac{1}{2}}e^{10cT+3cT^{2}}\\ &\leq 5cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{14cT+3cT^{2}}.\end{split} \tag{61}\]
Next, the triangle inequality, (25), and (24) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(x\in\mathds{R}^{d}\), \(w\in\mathds{R}\) that
\[\begin{split}\Big{|}g_{\varepsilon}^{d}(x)\Big{|}\leq\Big{|}g_{ \varepsilon}^{d}(0)\Big{|}+\Big{|}g_{\varepsilon}^{d}(x)-g_{\varepsilon}^{d}( 0)\Big{|}&\leq(cd^{c}T^{-1})^{\frac{1}{2}}+(cd^{c}T^{-1})^{ \frac{1}{2}}\|x\|\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}} \end{split} \tag{62}\]
and
\[|f_{\varepsilon}(w)|\leq|f_{\varepsilon}(0)|+|f_{\varepsilon}(w)-f_{ \varepsilon}(0)|\leq(cd^{c}T^{-3})^{\frac{1}{2}}+c^{\frac{1}{2}}|w|. \tag{63}\]
This and (26) show for all \(d\in\mathbb{N}\), \(x\in\mathds{R}^{d}\) that
\[|g^{d}(x)|=\lim_{\varepsilon\to 0}\Big{|}g_{\varepsilon}^{d}(x)\Big{|}\leq 2(cd^{ c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}. \tag{64}\]
This and (45) show for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathds{R}^{d}\) that
\[\begin{split}\left\|g^{d}(X_{T}^{d,t,x})\right\|_{L^{2}(\mathrm{P})}&\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left\|\left(d^{c}+\left\|X_{T}^{d,t,x}\right\|^{2}\right)^{\frac{1}{2}}\right\|_{L^{2}(\mathrm{P})}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left(\mathbb{E}\left[d^{c}+\left\|X_{T}^{d,t,x}\right\|^{2}\right]\right)^{\frac{1}{2}}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left((d^{c}+\|x\|^{2})e^{7cT}\right)^{\frac{1}{2}}\\ &=2(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}e^{3.5cT}.\end{split} \tag{65}\]
This, (30), the triangle inequality, the disintegration theorem, the flow property, and (63) show for all \(d\in\mathbb{N}\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}\left\|u^{d}(t,X_{t}^{d,s,x})\right\|_{L^{2}(\mathrm{P})}&\leq\left\|g^{d}(X_{T}^{d,s,x})\right\|_{L^{2}(\mathrm{P})}+\int_{t}^{T}\left((cd^{c}T^{-3})^{\frac{1}{2}}+c^{\frac{1}{2}}\left\|u^{d}(r,X_{r}^{d,s,x})\right\|_{L^{2}(\mathrm{P})}\right)dr\\ &\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}e^{3.5cT}+c^{\frac{1}{2}}\int_{t}^{T}\left\|u^{d}(r,X_{r}^{d,s,x})\right\|_{L^{2}(\mathrm{P})}dr.\end{split} \tag{66}\]
This, Gronwall's lemma, (29), (45), and the fact that \(c\in[1,\infty)\) show for all \(d\in\mathbb{N}\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathds{R}^{d}\) that
\[\left\|u^{d}(t,X_{t}^{d,s,x})\right\|_{L^{2}(\mathrm{P})}\leq 3(cd^{c}T^{-1})^{ \frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}e^{3.5cT}e^{c^{1/2}T} \leq 3(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2} }e^{4.5cT}. \tag{67}\]
Hence, for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) we have that
\[|u^{d}(t,x)|\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{ \frac{1}{2}}e^{4.5cT}. \tag{68}\]
Next, (31) and the triangle inequality show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x,y\in\mathds{R}^{d}\) that
\[\begin{split}\Big|u^{d,\varepsilon}(t,x)-u^{d,\varepsilon}(t,y)\Big|&\leq\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,y})\Big|\Big]\\ &\quad+\int_{t}^{T}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,x}))-f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,y}))\Big|\Big]\,dr.\end{split} \tag{69}\]
This, the triangle inequality, the disintegration theorem, (24), Jensen's inequality, the fact that \(c\in[1,\infty)\), and (57) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x,y\in\mathds{R}^{d}\) that
\[\begin{split}&\mathbb{E}\Big[\Big|u^{d,\varepsilon}(t,X_{t}^{d,\varepsilon,s,x})-u^{d,\varepsilon}(t,X_{t}^{d,\varepsilon,s,y})\Big|\Big]\\ &\leq(cd^{c}T^{-1})^{\frac{1}{2}}\,\mathbb{E}\Big[\Big\|X_{T}^{d,\varepsilon,s,x}-X_{T}^{d,\varepsilon,s,y}\Big\|\Big]+c\int_{t}^{T}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,s,x})-u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,s,y})\Big|\Big]\,dr\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\|x-y\|e^{4cT+2cT^{2}}+c\int_{t}^{T}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,s,x})-u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,s,y})\Big|\Big]\,dr.\end{split} \tag{70}\]
This, Gronwall's inequality, (29), and (42) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x,y\in\mathds{R}^{d}\) that
\[\begin{split}\mathbb{E}\Big{[}\Big{|}u^{d,\varepsilon}(t,X_{t}^{d,\varepsilon,s,x})-u^{d,\varepsilon}(t,X_{t}^{d,\varepsilon,s,y})\Big{|}\Big{]} &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\|x-y\|e^{4cT+2cT^{2}}\cdot e^{cT}\\ &=2(cd^{c}T^{-1})^{\frac{1}{2}}\|x-y\|e^{5cT+2cT^{2}}\end{split} \tag{71}\]
and hence
\[\begin{split}\Big{|}u^{d,\varepsilon}(t,x)-u^{d,\varepsilon}(t, y)\Big{|}\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\|x-y\|e^{5cT+2cT^{2}}.\end{split} \tag{72}\]
This shows (ii).
Next, (24), (72), Jensen's inequality, and (54) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathds{R}^{d}\) that
\[\begin{split}&\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,\varepsilon,t,x}))-f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,t,x}))\Big|\Big]\leq c^{\frac{1}{2}}\,\mathbb{E}\Big[\Big|u^{d,\varepsilon}(s,X_{s}^{d,\varepsilon,t,x})-u^{d,\varepsilon}(s,X_{s}^{d,t,x})\Big|\Big]\\ &\leq 2c^{\frac{1}{2}}(cd^{c}T^{-1})^{\frac{1}{2}}e^{5cT+2cT^{2}}\,\mathbb{E}\Big[\Big\|X_{s}^{d,\varepsilon,t,x}-X_{s}^{d,t,x}\Big\|\Big]\\ &\leq 2c^{\frac{1}{2}}(cd^{c}T^{-1})^{\frac{1}{2}}e^{5cT+2cT^{2}}\left(12\varepsilon cd^{c}(d^{c}+\|x\|^{2})Te^{20cT+6cT^{2}}\right)^{\frac{1}{2}}\\ &\leq 8c^{\frac{3}{2}}d^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{15cT+5cT^{2}}.\end{split} \tag{73}\]
This, the triangle inequality, (24), (26), the fact that \(c\in[1,\infty)\), the fact that \(cT\leq e^{cT}\), and (68) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}&\int_{t}^{T}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,\varepsilon,t,x}))-f(u^{d}(s,X_{s}^{d,t,x}))\Big|\Big]\,ds\\ &\leq T\sup_{s\in[t,T]}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,\varepsilon,t,x}))-f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,t,x}))\Big|\Big]\\ &\quad+\int_{t}^{T}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(s,X_{s}^{d,t,x}))-f_{\varepsilon}(u^{d}(s,X_{s}^{d,t,x}))\Big|\Big]\,ds\\ &\quad+T\sup_{s\in[t,T]}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d}(s,X_{s}^{d,t,x}))-f(u^{d}(s,X_{s}^{d,t,x}))\Big|\Big]\\ &\leq T\cdot 8c^{\frac{3}{2}}d^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{15cT+5cT^{2}}+\int_{t}^{T}c^{\frac{1}{2}}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(s,X_{s}^{d,t,x})-u^{d}(s,X_{s}^{d,t,x})\Big|\Big]\,ds\\ &\quad+T\sup_{s\in[t,T]}\Big[\varepsilon^{\frac{1}{2}}\mathbb{E}\Big[\Big|u^{d}(s,X_{s}^{d,t,x})\Big|^{2}\Big]\Big]\end{split} \tag{74}\]
and
\[\begin{split}&\int_{t}^{T}\mathbb{E}\Big{[}\Big{|}f_{\varepsilon} (u^{d,\varepsilon}(s,X_{s}^{d,\varepsilon,t,x}))-f(u^{d}(s,X_{s}^{d,t,x})) \Big{|}\Big{]}\,ds\\ &\leq 8cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{ \frac{1}{2}}e^{16cT+5cT^{2}}+\int_{t}^{T}c^{\frac{1}{2}}\mathbb{E}\Big{[}\Big{|} u^{d,\varepsilon}(s,X_{s}^{d,t,x})-u^{d}(s,X_{s}^{d,t,x})\Big{|}\Big{]}\,ds\\ &\quad+T\varepsilon^{\frac{1}{2}}\left(3(cd^{c}T^{-1})^{\frac{1} {2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}e^{4.5cT}\right)^{2}\\ &=8cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}} e^{16cT+5cT^{2}}+\int_{t}^{T}c^{\frac{1}{2}}\mathbb{E}\Big{[}\Big{|}u^{d, \varepsilon}(s,X_{s}^{d,t,x})-u^{d}(s,X_{s}^{d,t,x})\Big{|}\Big{]}\,ds\\ &\quad+9cd^{c}\varepsilon^{\frac{1}{2}}e^{9cT}(d^{c}+\|x\|^{2}) \\ &\leq 17cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{16cT+5 cT^{2}}+\int_{t}^{T}c\mathbb{E}\Big{[}\Big{|}u^{d,\varepsilon}(s,X_{s}^{d,t,x})-u^{d}(s,X_{s}^{d,t,x}) \Big{|}\Big{]}\,ds.\end{split} \tag{75}\]
This, (45), the disintegration theorem, and the flow property show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}&\int_{t}^{T}\mathbb{E}\left[\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,\tilde{x}}))-f(u^{d}(r,X_{r}^{d,t,\tilde{x}}))\Big|\Big]\Big|_{\tilde{x}=X_{t}^{d,s,x}}\right]dr\\ &\leq 17cd^{c}\varepsilon^{\frac{1}{2}}\,\mathbb{E}\left[d^{c}+\left\|X_{t}^{d,s,x}\right\|^{2}\right]e^{16cT+5cT^{2}}+c\int_{t}^{T}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,s,x})-u^{d}(r,X_{r}^{d,s,x})\Big|\Big]\,dr\\ &\leq 17cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{7cT}e^{16cT+5cT^{2}}+c\int_{t}^{T}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,s,x})-u^{d}(r,X_{r}^{d,s,x})\Big|\Big]\,dr.\end{split} \tag{76}\]
This, the triangle inequality, (30), (31), and (61) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}&\mathbb{E}\Big[\Big|u^{d,\varepsilon}(t,X_{t}^{d,s,x})-u^{d}(t,X_{t}^{d,s,x})\Big|\Big]=\mathbb{E}\left[\Big|u^{d,\varepsilon}(t,\tilde{x})-u^{d}(t,\tilde{x})\Big|\Big|_{\tilde{x}=X_{t}^{d,s,x}}\right]\\ &\leq\mathbb{E}\bigg[\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,\tilde{x}})-g^{d}(X_{T}^{d,t,\tilde{x}})\Big|\Big]\Big|_{\tilde{x}=X_{t}^{d,s,x}}\bigg]\\ &\quad+\int_{t}^{T}\mathbb{E}\left[\mathbb{E}\left[\Big|f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,\tilde{x}}))-f(u^{d}(r,X_{r}^{d,t,\tilde{x}}))\Big|\right]\Big|_{\tilde{x}=X_{t}^{d,s,x}}\right]dr\\ &\leq 5cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{14cT+3cT^{2}}+17cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{23cT+5cT^{2}}+c\int_{t}^{T}\mathbb{E}\left[\Big|u^{d,\varepsilon}(r,X_{r}^{d,s,x})-u^{d}(r,X_{r}^{d,s,x})\Big|\right]dr\\ &\leq 22cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{23cT+5cT^{2}}+c\int_{t}^{T}\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,s,x})-u^{d}(r,X_{r}^{d,s,x})\Big|\Big]\,dr.\end{split} \tag{77}\]
This, Gronwall's lemma, (29), and (45) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(s\in[0,T]\), \(t\in[s,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}&\mathbb{E}\Big[\Big|u^{d,\varepsilon}(t,X_{t}^{d,s,x})-u^{d}(t,X_{t}^{d,s,x})\Big|\Big]\\ &\leq 22cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{23cT+5cT^{2}}\cdot e^{cT}=22cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{24cT+5cT^{2}}\end{split} \tag{78}\]
and hence \(\big|u^{d,\varepsilon}(t,x)-u^{d}(t,x)\big|\leq 22cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{24cT+5cT^{2}}\). This shows (iii).
The proof of Lemma 2.1 is thus completed.
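The quantitative content of Lemma 2.1 can be checked numerically. The following is a minimal Monte Carlo sketch in Python of the perturbation estimate (54), assuming \(d=1\) with the illustrative choices \(\beta(x)=-x\), \(\sigma(x)=\cos(x)\), additive compound Poisson jumps with standard normal marks (so the compensator contributes no drift), and the admissible perturbations \(\beta_{\varepsilon}=\beta+\sqrt{\varepsilon}\), \(\sigma_{\varepsilon}=\sigma+\sqrt{\varepsilon}\), \(\gamma_{\varepsilon}=\gamma\); the exact solutions of (27) and (28) are replaced by a fine common Euler grid, and all identifiers below are ours, not from the paper.

```python
import numpy as np

# Monte Carlo sketch of (54): E|X^eps_T - X_T|^2 = O(eps) for coupled paths
# driven by the same Brownian increments and the same compound Poisson jumps.
rng = np.random.default_rng(0)
T, n_steps, n_paths, lam = 1.0, 500, 20_000, 2.0
dt = T / n_steps

# Shared driving noise for the coupled pair (X, X^eps).
dW = np.sqrt(dt) * rng.standard_normal((n_steps, n_paths))
counts = rng.poisson(lam * dt, size=(n_steps, n_paths))
dJ = np.sqrt(counts) * rng.standard_normal((n_steps, n_paths))  # summed N(0,1) marks

def terminal_value(eps: float) -> np.ndarray:
    """Euler approximation of X_T with beta_eps(x) = -x + sqrt(eps) and
    sigma_eps(x) = cos(x) + sqrt(eps); eps = 0 gives the unperturbed pair."""
    x = np.zeros(n_paths)
    s = np.sqrt(eps)
    for k in range(n_steps):
        x = x + (-x + s) * dt + (np.cos(x) + s) * dW[k] + dJ[k]
    return x

x_ref = terminal_value(0.0)
for eps in (1e-1, 1e-2, 1e-3):
    err2 = np.mean((terminal_value(eps) - x_ref) ** 2)
    print(f"eps={eps:.0e}  E|X^eps_T - X_T|^2 = {err2:.3e}  ratio = {err2 / eps:.2f}")
```

Estimate (54) predicts that the printed ratios stabilize as \(\varepsilon\) decreases, since the squared \(L^{2}\)-distance is bounded by a constant times \(\varepsilon\).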
## 3. Euler-Maruyama and MLP approximations revisited
In Lemma 3.2 below we approximate the solution of the SFPE (86), associated with (83), by the solution of the SFPE (85), associated with the Euler-Maruyama approximation (82).
**Setting 3.1**.: _Consider the notations given in Subsection 1.4, let \(T\in(0,\infty)\), \(c\in[1,\infty)\), for every \(K\in\mathbb{N}\) let \(\lfloor\cdot\rfloor_{K}\colon\mathbb{R}\to\mathbb{R}\) satisfy for all \(t\in\mathbb{R}\) that \(\lfloor t\rfloor_{K}=\max(\{0,\frac{T}{K},\frac{2T}{K},\ldots,T\}\cap((-\infty,t)\cup\{0\}))\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), let \(\beta_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\sigma_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d\times d})\), \(f_{\varepsilon}\in C(\mathbb{R},\mathbb{R})\), \(g_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\gamma_{\varepsilon}^{d}\colon\mathbb{R}^{2d}\to\mathbb{R}^{d}\) be measurable, for every \(d\in\mathbb{N}\) let \(\nu^{d}\colon\mathcal{B}(\mathbb{R}^{d}\setminus\{0\})\to[0,\infty)\) be a Levy measure, and assume for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\), \(\varepsilon\in(0,1)\) that_
\[\Big\|\beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(y)\Big\|^{2}+\Big\|\sigma_{\varepsilon}^{d}(x)-\sigma_{\varepsilon}^{d}(y)\Big\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\Big\|\gamma_{\varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(y,z)\Big\|^{2}\,\nu^{d}(dz)\leq c\|x-y\|^{2}, \tag{79}\]
\[|f_{\varepsilon}(w_{1})-f_{\varepsilon}(w_{2})|^{2}\leq c|w_{1}-w_{2}|^{2}, \quad\Big{|}g_{\varepsilon}^{d}(x)-g_{\varepsilon}^{d}(y)\Big{|}^{2}\leq cd^{c} T^{-1}\|x-y\|^{2}, \tag{80}\]
_and_
\[\Big{\|}\beta_{\varepsilon}^{d}(0)\Big{\|}^{2}+\Big{\|}\sigma_{\varepsilon}^{d }(0)\Big{\|}_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\Big{\|} \gamma_{\varepsilon}^{d}(0,z)\Big{\|}^{2}\,\nu^{d}(dz)+T^{3}|f_{\varepsilon}(0 )|^{2}+T|g_{\varepsilon}^{d}(0)|^{2}\leq cd^{c}. \tag{81}\]
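For orientation, the grid operator \(\lfloor\cdot\rfloor_{K}\) of Setting 3.1 returns the largest point of the grid \(\{0,\frac{T}{K},\frac{2T}{K},\ldots,T\}\) lying strictly below its argument, and \(0\) if no such point exists. A minimal sketch in Python (the function name `floor_K` is ours):

```python
import math

def floor_K(t: float, T: float, K: int) -> float:
    """Largest grid point in {0, T/K, ..., T} strictly below t; 0 if none."""
    j = math.ceil(t * K / T) - 1   # index of the largest grid point < t
    j = max(0, min(j, K))          # clamp to the grid indices {0, ..., K}
    return j * T / K

# With T = 1.0 and K = 4 the grid is {0, 0.25, 0.5, 0.75, 1.0}:
assert floor_K(0.0, 1.0, 4) == 0.0    # no grid point strictly below 0
assert floor_K(0.25, 1.0, 4) == 0.0   # strict inequality at grid points
assert floor_K(0.3, 1.0, 4) == 0.25
assert floor_K(1.0, 1.0, 4) == 0.75
```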
**Lemma 3.2** (Discretization error).: _Assume Setting 3.1, let \((\Omega,\mathcal{F},\mathbb{P},(\mathbb{F}_{t})_{t\in[0,T]})\), \((W^{d})_{d\in\mathbb{N}}\), \((N^{d})_{d\in\mathbb{N}}\), and \((\tilde{N}^{d})_{d\in\mathbb{N}}\) be as in Lemma 2.1, for every \(d,K\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\) let \((X_{s}^{d,K,\varepsilon,t,x})_{s\in[t,T]}\), \((X_{s}^{d,\varepsilon,t,x})_{s\in[t,T]}\) be adapted cadlag processes which satisfy for all \(s\in[t,T]\) that \(\mathbb{P}\)-a.s._
\[\begin{split}X_{s}^{d,K,\varepsilon,t,x}&=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\,dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\,dW_{r}^{d}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\,\tilde{N}^{d}(dr,dz),\end{split} \tag{82}\]
_and_
\[X_{s}^{d,\varepsilon,t,x}=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x})dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t, x})dW_{r}^{d}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x},z)\tilde{N}^{d}(dr,dz), \tag{83}\]
_and for every \(d,K\in\mathbb{N}\), \(\varepsilon\in(0,1)\), let \(u^{d,K,\varepsilon},u^{d,\varepsilon}\colon[0,T]\times\mathds{R}^{d}\to\mathds{R}\) be measurable functions satisfying for all \(t\in[0,T]\), \(x\in\mathds{R}^{d}\) that_
\[\sup_{s\in[0,T]}\sup_{y\in\mathds{R}^{d}}\frac{|u^{d,K,\varepsilon}(s,y)|+|u^{ d,\varepsilon}(s,y)|}{1+\|y\|}<\infty, \tag{84}\]
\[u^{d,K,\varepsilon}(t,x)=\mathds{E}\Big{[}g_{\varepsilon}^{d}(X_{T}^{d,K, \varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathds{E}\Big{[}f_{\varepsilon}(u^{d,K, \varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x}))\Big{]}\,dr, \tag{85}\]
_and_
\[u^{d,\varepsilon}(t,x)=\mathds{E}\Big{[}g_{\varepsilon}^{d}(X_{T}^{d, \varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathds{E}\Big{[}f_{\varepsilon}(u^{d, \varepsilon}(r,X_{r}^{d,\varepsilon,t,x}))\Big{]}\,dr. \tag{86}\]
_Then_
1. _for all_ \(d,K\in\mathbb{N}\)_,_ \(t\in[0,T]\)_,_ \(s\in[t,T]\)_,_ \(x\in\mathbb{R}^{d}\)_,_ \(\varepsilon\in(0,1)\) _we have that_ \[\mathbb{E}\bigg{[}d^{c}+\left\|X_{s}^{d,K,\varepsilon,t,x}\right\|^{2}\bigg{]} \leq(d^{c}+\|x\|^{2})e^{7c(s-t)},\] (87) _and_
2. _for all_ \(d,K\in\mathbb{N}\)_,_ \(t\in[0,T]\)_,_ \(x\in\mathbb{R}^{d}\)_,_ \(\varepsilon\in(0,1)\) _we have that_ \[\left|u^{d,K,\varepsilon}(t,x)-u^{d,\varepsilon}(t,x)\right|\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{21cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}.\] (88)
Proof of Lemma 3.2.: First, for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(s\in[t,T]\), \(x,y\in\mathbb{R}^{d}\) we have that
\[\mathbb{E}\bigg{[}d^{c}+\left\|X_{s}^{d,\varepsilon,t,x}\right\|^{2}\bigg{]} \leq(d^{c}+\|x\|^{2})e^{7c(s-t)} \tag{89}\]
and
\[\left|u^{d,\varepsilon}(t,x)-u^{d,\varepsilon}(t,y)\right|\leq 2(cd^{c}T^{-1})^{ \frac{1}{2}}\|x-y\|e^{5cT+2cT^{2}} \tag{90}\]
(cf. Lemma 2.1). Next, the triangle inequality, the fact that \(\forall\,a_{1},a_{2}\in\mathds{R}\colon(a_{1}+a_{2})^{2}\leq 2|a_{1}|^{2}+2|a_{2}|^ {2}\), (79), and (81) show for all \(d\in\mathbb{N}\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\|\beta_{\varepsilon}^{d}(x)\|^{2}\leq 2\|\beta_{\varepsilon}^{d}(0)\|^{2}+2\| \beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(0)\|^{2}\leq 2cd^{c}+2c\|x\|^{2} =2c(d^{c}+\|x\|^{2}). \tag{91}\]
Similarly, we have for all \(d\in\mathbb{N}\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\left\|\sigma_{\varepsilon}^{d}(x)\right\|_{\mathrm{F}}^{2}\leq 2c(d^{c}+\|x\|^{2}). \tag{92}\]
Next, the triangle inequality, the fact that \(\forall\,a_{1},a_{2}\in\mathds{R}\colon(a_{1}+a_{2})^{2}\leq 2|a_{1}|^{2}+2|a_{2}|^ {2}\), (79), and (81) show for all \(d\in\mathbb{N}\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\int_{\mathds{R}^{d}\setminus\{0\}}\left\|\gamma_{ \varepsilon}^{d}(x,z)\right\|^{2}\nu^{d}(dz)&\leq\int_{\mathds{R }^{d}\setminus\{0\}}2\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}+2\left\| \gamma_{\varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d} (dz)\\ &\leq 2cd^{c}+2c\|x\|^{2}=2c(d^{c}+\|x\|^{2}).\end{split} \tag{93}\]
Next, Ito's formula (see, e.g., [20, Theorem 3.1]) and (82) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that \(\mathbb{P}\)-a.s.
\[\begin{split}\left\|X_{s}^{d,K,\varepsilon,t,x}\right\|^{2}&=\|x\|^{2}+\int_{t}^{s}\left(2\left\langle X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\rangle+\left\|\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}\right)dr\\ &\quad+2\int_{t}^{s}\sum_{i,j=1}^{d}\left(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right)_{i}\left(\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right)_{ij}d(W_{j}^{d})_{r}\\ &\quad+2\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\langle X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right\rangle\tilde{N}^{d}(dz,dr)\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right\|^{2}N^{d}(dz,dr).\end{split} \tag{94}\]
Next, for every \(d,n,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) let \(\tau_{n}^{d,K,\varepsilon,x}\colon\Omega\to\mathbb{R}\) satisfy that
\[\begin{split}\tau_{n}^{d,K,\varepsilon,x}=\inf\biggl\{&s\in[t,T]\colon\int_{t}^{s}\left(2\left\langle X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\rangle+\left\|\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}\right)dr\\ &\quad+\int_{t}^{s}\sum_{i,j=1}^{d}\left|\left(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right)_{i}\left(\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right)_{ij}\right|^{2}dr\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\sum_{i=1}^{d}\left|\left(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right)_{i}\left(\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right)_{i}\right|^{2}\nu^{d}(dz)\,dr\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\geq n\biggr\}\wedge T\end{split} \tag{95}\]
(with the convention that \(\inf\emptyset=\infty\)). Then (94), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon 2\langle x,y\rangle\leq\|x\|^{2}+\|y\|^{2}\), (91), (92), (93) show for all \(d,n,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\bigg[d^{c}+\left\|X_{s\wedge\tau_{n}^{d,K,\varepsilon,x}}^{d,K,\varepsilon,t,x}\right\|^{2}\bigg]\\ &=d^{c}+\|x\|^{2}+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}\left(2\left\langle X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x},\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\rangle+\left\|\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}\right)dr\Bigg]\\ &\quad+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\Bigg]\\ &\leq d^{c}+\|x\|^{2}+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}6c\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg].\end{split} \tag{96}\]
This, (91)-(93), and the fact that \(c\geq 1\) show for all \(d,K,n\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\max\biggl\{\mathbb{E}\biggl[d^{c}+\left\|X_{\max\{t,\lfloor s\wedge\tau_{n}^{d,K,\varepsilon,x}\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\biggr],\mathbb{E}\biggl[d^{c}+\left\|X_{s\wedge\tau_{n}^{d,K,\varepsilon,x}}^{d,K,\varepsilon,t,x}\right\|^{2}\biggr]\biggr\}\\ &\leq d^{c}+\|x\|^{2}+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]+\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}6c\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]\\ &\leq d^{c}+\|x\|^{2}+7c\,\mathbb{E}\Bigg[\int_{t}^{s\wedge\tau_{n}^{d,K,\varepsilon,x}}\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\Bigg]\\ &\leq d^{c}+\|x\|^{2}+7c\int_{t}^{s}\mathbb{E}\biggl[d^{c}+\left\|X_{\max\{t,\lfloor r\wedge\tau_{n}^{d,K,\varepsilon,x}\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\biggr]\,dr.\end{split} \tag{97}\]
This, Fatou's lemma, and Gronwall's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[d^{c}+\left\|X_{\max\{t,\lfloor s\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right]&\leq\liminf_{n\to\infty}\mathbb{E}\biggl[d^{c}+\left\|X_{\max\{t,\lfloor s\wedge\tau_{n}^{d,K,\varepsilon,x}\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\biggr]\\ &\leq(d^{c}+\|x\|^{2})e^{7c(s-t)}.\end{split} \tag{98}\]
This, Fatou's lemma, (97), and the fact that \(\forall\,t\in[0,T],s\in[t,T]\colon 1+7c\int_{t}^{s}e^{7c(r-t)}\,dr=1+e^{7c(r-t)}\big|_{r=t}^{r=s}=e^{7c(s-t)}\) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[d^{c}+\left\|X_{s}^{d,K,\varepsilon,t,x}\right\|^{2}\right]&\leq\liminf_{n\to\infty}\mathbb{E}\left[d^{c}+\left\|X_{s\wedge\tau_{n}^{d,K,\varepsilon,x}}^{d,K,\varepsilon,t,x}\right\|^{2}\right]\\ &\leq d^{c}+\|x\|^{2}+7c\,\mathbb{E}\left[\int_{t}^{s}\left(d^{c}+\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)dr\right]\\ &\leq d^{c}+\|x\|^{2}+7c\int_{t}^{s}(d^{c}+\|x\|^{2})e^{7c(r-t)}\,dr\\ &=(d^{c}+\|x\|^{2})\left(1+7c\int_{t}^{s}e^{7c(r-t)}\,dr\right)\\ &=(d^{c}+\|x\|^{2})e^{7c(s-t)}.\end{split} \tag{99}\]
This shows (i).
Next, Holder's inequality, (91), and (99) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s,s^{\prime}\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\beta_{ \varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}) \,dr\right\|^{2}\right]&\leq\mathbb{E}\left[\left(\int_{s}^{s^{ \prime}}\left\|\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})\right\|dr\right)^{2}\right]\\ &\leq\mathbb{E}\left[|s^{\prime}-s|\int_{s}^{s^{\prime}}\left\| \beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon, t,x})\right\|^{2}dr\right]\\ &\leq T|s^{\prime}-s|\sup_{r\in[s,s^{\prime}]}\mathbb{E}\left[2c \left(d^{c}+\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x} \right\|^{2}\right)\right]\\ &\leq T|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}.\end{split} \tag{100}\]
Next, Ito's isometry, (92), and (99) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s,s^{\prime}\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\sigma_{ \varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}) \,dW_{r}^{d}\right\|^{2}\right]&=\mathbb{E}\left[\int_{s}^{s^{ \prime}}\left\|\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{ d,K,\varepsilon,t,x})\right\|_{\mathbb{F}}^{2}dr\right]\\ &\leq|s^{\prime}-s|\sup_{r\in[s,s^{\prime}]}\mathbb{E}\left[2c \left(d^{c}+\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x} \right\|^{2}\right)\right]\\ &\leq|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}.\end{split} \tag{101}\]
Next, Ito's isometry, (93), and (99) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s,s^{\prime}\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\,\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\\ &=\mathbb{E}\left[\int_{s}^{s^{\prime}}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\right]\\ &\leq|s^{\prime}-s|\sup_{r\in[s,s^{\prime}]}\mathbb{E}\left[2c\left(d^{c}+\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}\right\|^{2}\right)\right]\\ &\leq|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}.\end{split} \tag{102}\]
This, (82), the fact that \(\forall\,d\in\mathbb{N},x,y,z\in\mathds{R}^{d}\colon\|x+y+z\|^{2}\leq 3\|x\|^{2}+3 \|y\|^{2}+3\|z\|^{2}\), (100), and (101) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s,s^{\prime}\in[t,T]\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\mathbb{E}\bigg{[}\Big{\|}X_{s^{\prime}}^{d,K,\varepsilon,t,x}-X _{s}^{d,K,\varepsilon,t,x}\Big{\|}^{2}\bigg{]} \tag{103}\] \[\leq 3\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\beta_{ \varepsilon}^{d}(X_{\max\{t,[r-]_{K}\}}^{d,K,\varepsilon,t,x})\,dr\right\|^{ 2}\right]+3\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\sigma_{\varepsilon}^{d }(X_{\max\{t,[r-]_{K}\}}^{d,K,\varepsilon,t,x})\,dW_{r}^{d}\right\|^{2}\right]\] \[\quad+3\mathbb{E}\left[\left\|\int_{s}^{s^{\prime}}\int_{\mathbb{ R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,[r-]_{K}\}}^{d,K, \varepsilon,t,x},z)\,\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\] \[\leq 3\cdot T|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}+3 \cdot|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}\] \[\quad+3\cdot|s^{\prime}-s|\cdot 2c(d^{c}+\|x\|^{2})e^{7cT}\] \[=6c(T+2)e^{7cT}(d^{c}+\|x\|^{2})|s^{\prime}-s|.\]
This, Holder's inequality, (79), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathds{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^ {2}\), and (103) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{t}^{s}\left(\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right)dr\right\|^{2}\right]\\ &\leq\mathbb{E}\bigg[\left(\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right\|dr\right)^{2}\bigg]\\ &\leq T\mathbb{E}\bigg[\int_{t}^{s}\left\|\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right\|^{2}dr\bigg]\\ &\leq T\mathbb{E}\bigg[\int_{t}^{s}c\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,x}\right\|^{2}dr\bigg]\\ &\leq 2cT\int_{t}^{s}\mathbb{E}\left[\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r}^{d,K,\varepsilon,t,x}\right\|^{2}\right]dr+2cT\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right]dr\\ &\leq 2cT(s-t)\cdot 6c(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2cT\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\,dr\\ &\leq 12c^{2}T^{2}(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2cT\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\,dr.\end{split} \tag{104}\]
Next, Ito's isometry, (79), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathds{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^ {2}\), and (103) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{t}^{s}\left(\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right)dW_{r}^{d}\right\|^{2}\right]\\ &=\mathbb{E}\bigg[\int_{t}^{s}\left\|\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right\|_{\mathrm{F}}^{2}dr\bigg]\\ &\leq\mathbb{E}\bigg[\int_{t}^{s}c\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,x}\right\|^{2}dr\bigg]\\ &\leq 2c\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r}^{d,K,\varepsilon,t,x}\right\|^{2}\bigg]\,dr+2c\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\,dr\\ &\leq 2c(s-t)\cdot 6c(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2c\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\,dr\\ &\leq 12c^{2}T(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2c\int_{t}^{s}\mathbb{E}\bigg[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\bigg]\,dr.\end{split} \tag{105}\]
Next, Ito's isometry, (79), the fact that \(\forall\,d\in\mathbb{N},x,y\in\mathbb{R}^{d}\colon\|x+y\|^{2}\leq 2\|x\|^{2}+2\|y\|^{2}\), and (103) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left(\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)-\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\right)\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\\ &=\mathbb{E}\left[\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)-\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\right\|^{2}\nu^{d}(dz)\,dr\right]\\ &\leq\mathbb{E}\left[\int_{t}^{s}c\left\|X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r-}^{d,\varepsilon,t,x}\right\|^{2}dr\right]\\ &\leq 2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{\max\{t,\lfloor r\rfloor_{K}\}}^{d,K,\varepsilon,t,x}-X_{r}^{d,K,\varepsilon,t,x}\right\|^{2}\right]dr+2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right]dr\\ &\leq 2c(s-t)\cdot 6c(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right]dr\\ &\leq 12c^{2}T(T+2)e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}+2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right]dr.\end{split} \tag{106}\]
This, the fact that \(\forall\,d\in\mathbb{N},x,y,z\in\mathbb{R}^{d}\colon\|x+y+z\|^{2}\leq 3\|x\|^{2}+3\|y\|^{2}+3\|z\|^{2}\), (104), and (105) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\left[\left\|X_{s}^{d,K,\varepsilon,t,x}-X_{s}^{d,\varepsilon,t,x}\right\|^{2}\right]\\ &\leq 3\mathbb{E}\left[\left\|\int_{t}^{s}\left(\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\beta_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right)dr\right\|^{2}\right]\\ &\quad+3\mathbb{E}\left[\left\|\int_{t}^{s}\left(\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x})-\sigma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x})\right)dW_{r}^{d}\right\|^{2}\right]\\ &\quad+3\mathbb{E}\left[\left\|\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\left(\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,K,\varepsilon,t,x},z)-\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\right)\tilde{N}^{d}(dr,dz)\right\|^{2}\right]\end{split} \tag{107}\]
and
\[\begin{split}&\mathbb{E}\bigg{[}\left\|X_{s}^{d,K, \varepsilon,t,x}-X_{s}^{d,\varepsilon,t,x}\right\|^{2}\bigg{]}\\ &\leq 3\left[12c^{2}T^{2}(T+2)e^{\tau cT}(d^{c}+\|x\|^{2})\frac{T}{K }+2cT\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d, \varepsilon,t,x}\right\|^{2}\right]dr\right]\\ &\quad+3\left[12c^{2}T(T+2)e^{\tau cT}(d^{c}+\|x\|^{2})\frac{T}{K }+2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d, \varepsilon,t,x}\right\|^{2}\right]dr\right]\\ &\quad+3\left[12c^{2}T(T+2)e^{\tau cT}(d^{c}+\|x\|^{2})\frac{T}{K }+2c\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d, \varepsilon,t,x}\right\|^{2}\right]dr\right]\\ &=36c^{2}T(T+2)^{2}e^{\tau cT}(d^{c}+\|x\|^{2})\frac{T}{K}+6c(T+2 )\int_{t}^{s}\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d, \varepsilon,t,x}\right\|^{2}\right]dr.\end{split} \tag{108}\]
This, (89), (99), and Gronwall's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathds{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\mathbb{E}\left[\left\|X_{s}^{d,K,\varepsilon,t,x}-X_{s}^{d,\varepsilon,t,x} \right\|^{2}\right]\leq 36c^{2}T(T+2)^{2}e^{\tau cT}(d^{c}+\|x\|^{2})\frac{T}{K}e^{6c(T+2)T}. \tag{109}\]
This, (80), and Jensen's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,K,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})\Big|\Big]&\leq(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\mathbb{E}\Big[\Big\|X_{T}^{d,K,\varepsilon,t,x}-X_{T}^{d,\varepsilon,t,x}\Big\|\Big]\\ &\leq(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\left(\mathbb{E}\left[\left\|X_{T}^{d,K,\varepsilon,t,x}-X_{T}^{d,\varepsilon,t,x}\right\|^{2}\right]\right)^{\frac{1}{2}}\\ &\leq(cd^{c})^{\frac{1}{2}}T^{-\frac{1}{2}}\left(36c^{2}T(T+2)^{2}e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}e^{6c(T+2)T}\right)^{\frac{1}{2}}\\ &\leq 6c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{10cT+3cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}.\end{split} \tag{110}\]
Next, (90), Jensen's inequality, and (109) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(r\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})-u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,x})\Big|\Big]\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\mathbb{E}\Big[\Big\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\Big\|\Big]\,e^{5cT+2cT^{2}}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left(\mathbb{E}\left[\left\|X_{r}^{d,K,\varepsilon,t,x}-X_{r}^{d,\varepsilon,t,x}\right\|^{2}\right]\right)^{\frac{1}{2}}e^{5cT+2cT^{2}}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left(36c^{2}T(T+2)^{2}e^{7cT}(d^{c}+\|x\|^{2})\frac{T}{K}e^{6c(T+2)T}\right)^{\frac{1}{2}}e^{5cT+2cT^{2}}\\ &\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{15cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}.\end{split} \tag{111}\]
Next, Jensen's inequality and (99) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(r\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\mathbb{E}\Big[\Big|u^{d,K,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})-u^{d,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})\Big|\Big]\\ &\leq e^{3.5c(T-r)}\,\mathbb{E}\Bigg[\bigg(d^{c}+\Big\|X_{r}^{d,K,\varepsilon,t,x}\Big\|^{2}\bigg)^{\frac{1}{2}}\Bigg]\sup_{y\in\mathbb{R}^{d}}\frac{\big|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\big|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\\ &\leq e^{3.5c(T-r)}\left(\mathbb{E}\left[d^{c}+\Big\|X_{r}^{d,K,\varepsilon,t,x}\Big\|^{2}\right]\right)^{\frac{1}{2}}\sup_{y\in\mathbb{R}^{d}}\frac{\big|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\big|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\\ &\leq e^{3.5c(T-r)}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5c(r-t)}\sup_{y\in\mathbb{R}^{d}}\frac{\big|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\big|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\\ &=(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5c(T-t)}\sup_{y\in\mathbb{R}^{d}}\frac{\big|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\big|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}.\end{split} \tag{112}\]
This, the triangle inequality, (80), the fact that \(c\geq 1\), (110), (111), the fact that \(1+cT\leq e^{cT}\) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\Big|u^{d,K,\varepsilon}(t,x)-u^{d,\varepsilon}(t,x)\Big|\\ &\leq\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,K,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})\Big|\Big]+\int_{t}^{T}\mathbb{E}\Big[\Big|f_{\varepsilon}(u^{d,K,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x}))-f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,x}))\Big|\Big]\,dr\\ &\leq\mathbb{E}\Big[\Big|g_{\varepsilon}^{d}(X_{T}^{d,K,\varepsilon,t,x})-g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})\Big|\Big]+\int_{t}^{T}c\,\mathbb{E}\Big[\Big|u^{d,K,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})-u^{d,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})\Big|\Big]\,dr\\ &\quad+\int_{t}^{T}c\,\mathbb{E}\Big[\Big|u^{d,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x})-u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,x})\Big|\Big]\,dr\end{split} \tag{113}\]
and
\[\begin{split}\left|u^{d,K,\varepsilon}(t,x)-u^{d,\varepsilon}(t,x)\right|&\leq 6c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{10cT+3cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}\\ &\quad+\int_{t}^{T}c(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5c(T-t)}\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\,dr\\ &\quad+Tc\cdot 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{15cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}\\ &\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{16cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}\\ &\quad+\int_{t}^{T}c(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5c(T-t)}\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\,dr.\end{split} \tag{114}\]
Dividing by \((d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5c(T-t)}\) shows for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\sup_{x\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(t,x)-u^{d,\varepsilon}(t,x)\right|}{e^{3.5c(T-t)}(d^{c}+\|x\|^{2})^{\frac{1}{2}}}\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{16cT+5cT^{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}+\int_{t}^{T}c\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(r,y)-u^{d,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\,dr.\end{split} \tag{115}\]
This, (84), (89), (99), and Gronwall's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(t,y)-u^{d,\varepsilon}(t,y)\right|}{e^{3.5c(T-t)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}&\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{16cT+5cT^{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}\cdot e^{cT}\\ &=12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{17cT+5cT^{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}.\end{split} \tag{116}\]
Hence, for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) we have that
\[\left|u^{d,K,\varepsilon}(t,x)-u^{d,\varepsilon}(t,x)\right|\leq 12c^{\frac{3}{2}}d^{\frac{c}{2}}(T+2)e^{21cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}. \tag{117}\]
This shows (ii). The proof of Lemma 3.2 is thus completed.
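The rate \(T^{\frac{1}{2}}/K^{\frac{1}{2}}\) in (117) is the familiar strong convergence rate of the underlying Euler-type time discretization. The following Python sketch illustrates this rate numerically; it is an illustration only, with toy coefficients \(\beta(x)=-x\), \(\sigma(x)=\operatorname{diag}(x)\), no jump part, and all function names our own rather than part of Setting 3.1.

```python
import numpy as np

def euler_path(x0, T, K, dW_fine, K_fine):
    """Euler scheme with K steps for dX = beta(X) dt + sigma(X) dW,
    driven by Brownian increments aggregated from a fine grid so that
    coarse and fine schemes see the same noise."""
    step = K_fine // K                   # fine steps per coarse step
    h = T / K
    x = x0.copy()
    for k in range(K):
        dW = dW_fine[k * step:(k + 1) * step].sum(axis=0)
        x = x + (-x) * h + x * dW        # beta(x) = -x, sigma(x) = diag(x)
    return x

rng = np.random.default_rng(0)
T, d, K_fine, n_paths = 1.0, 5, 1024, 2000
x0 = np.ones(d)
for K in (8, 32, 128):
    errs = []
    for _ in range(n_paths):
        dW_fine = rng.normal(0.0, np.sqrt(T / K_fine), size=(K_fine, d))
        ref = euler_path(x0, T, K_fine, dW_fine, K_fine)   # fine reference
        errs.append(np.linalg.norm(euler_path(x0, T, K, dW_fine, K_fine) - ref))
    # The mean error decays roughly like K^{-1/2} (the generic strong
    # Euler rate for multiplicative noise), matching T^{1/2}/K^{1/2} in (117).
    print(K, np.mean(errs))
```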
In Lemma 3.3 below we approximate the solution to the SFPE (120), associated with (118), by the MLP approximation (121).
**Lemma 3.3**.: _Assume Setting 3.1, let \((\Omega,\mathcal{F},\mathbb{P}\,,(\mathbb{F}_{t})_{t\in[0,T]})\) be a probability space satisfying the usual conditions, let \(\Theta=\cup_{n\in\mathbb{N}}\mathbb{Z}^{n}\), for every \(d\in\mathbb{N}\) let \(W^{d,\theta}\colon\Omega\times[0,T]\to\mathbb{R}^{d}\), \(\theta\in\Theta\), be independent standard \((\mathbb{F}_{t})_{t\in[0,T]}\)-Brownian motions, for every \(d\in\mathbb{N}\) let \(N^{d,\theta}\), \(\theta\in\Theta\), be independent \((\mathbb{F}_{t})_{t\in[0,T]}\)-Poisson random measures on \([0,\infty)\times(\mathbb{R}^{d}\setminus\{0\})\) with intensity \(\nu^{d}\), for every \(d\in\mathbb{N}\), \(\theta\in\Theta\) let \(\tilde{N}^{d,\theta}(dt,dz)=N^{d,\theta}(dt,dz)-dt\,\nu^{d}(dz)\), assume for all \(d\in\mathbb{N}\) that \(\mathcal{F}_{0}\), \((N^{d,\theta})_{\theta\in\Theta}\), and \((W^{d,\theta})_{\theta\in\Theta}\) are independent, for every \(d,K\in\mathbb{N}\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\) let \((X_{s}^{d,\theta,K,\varepsilon,t,x})_{s\in[t,T]}\) satisfy that \(X_{t}^{d,\theta,K,\varepsilon,t,x}=x\) and_
\[\begin{split} X_{s}^{d,\theta,K,\varepsilon,t,x}&=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,dW_{r}^{d,\theta}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x},z)\,\tilde{N}^{d,\theta}(dr,dz),\end{split} \tag{118}\]
_for every \(d,K\in\mathbb{N}\), \(\varepsilon\in(0,1)\), let \(u^{d,K,\varepsilon}\colon[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) be measurable functions satisfying for all \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that \(\mathbb{E}\left[\left|g_{\varepsilon}^{d}(X_{t,T}^{d,0,K,\varepsilon,x})\right| \right]+\int_{t}^{T}\mathbb{E}\Big{[}\left|f_{\varepsilon}(u^{d,K,\varepsilon} (r,X_{t,r}^{d,0,K,\varepsilon,x}))\right|\Big{]}<\infty,\)_
\[\sup_{s\in[0,T]}\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(s,y) \right|}{1+\|y\|}<\infty \tag{119}\]
_and_
\[u^{d,K,\varepsilon}(t,x)=\mathbb{E}\left[g_{\varepsilon}^{d}(X_{T}^{d,0,K, \varepsilon,t,x})\right]+\int_{t}^{T}\mathbb{E}\Big{[}f_{\varepsilon}(u^{d,K, \varepsilon}(r,X_{r}^{d,0,K,\varepsilon,t,x}))\Big{]}\,dr, \tag{120}\]
_let \(\mathfrak{t}^{\theta}\colon\Omega\to[0,1]\), \(\theta\in\Theta\), be i.i.d. random variables which satisfy for all \(t\in(0,1)\) that \(\mathbb{P}\left(\mathfrak{t}^{0}\leq t\right)=t\), for every \(\theta\in\Theta\), \(t\in[0,T]\) let \(\mathfrak{T}_{t}^{\theta}\colon\Omega\to\mathbb{R}\) satisfy that \(\mathfrak{T}_{t}^{\theta}=t+(T-t)\mathfrak{t}^{\theta}\), assume for all \(d\in\mathbb{N}\) that \((\mathfrak{t}^{\theta})_{\theta\in\Theta}\), \((N^{d,\theta})_{\theta\in\Theta}\), and \((W^{d,\theta})_{\theta\in\Theta}\) are independent, for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(U_{n,m}^{d,\theta,K,\varepsilon}\colon[0,T]\times\mathbb{R}^{d}\times\Omega\to\mathbb{R}\), \(\theta\in\Theta\), \(n,m\in\mathbb{Z}\), satisfy for all \(\theta\in\Theta\), \(n\in\mathbb{N}_{0}\), \(m\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that_
\[U_{n,m}^{d,\theta,K,\varepsilon}(t,x)=\frac{\mathbbm{1}_{\mathbb{ N}}(n)}{m^{n}}\sum_{i=1}^{m^{n}}g_{\varepsilon}^{d}\Big{(}X_{T}^{d,(\theta,0,-i),K, \varepsilon,t,x}\Big{)} \tag{121}\] \[\quad+\sum_{\ell=0}^{n-1}\frac{(T-t)}{m^{n-\ell}}\sum_{i=1}^{m^{ n-\ell}}\Big{(}f_{\varepsilon}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}- \mathbbm{1}_{\mathbb{N}}(\ell)f_{\varepsilon}\circ U_{\ell-1,m}^{d,(\theta,- \ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)},X_{ \mathfrak{T}_{t}^{(\theta,\ell,i)}}^{d,(\theta,\ell,i),K,\varepsilon,t,x} \Big{)}.\]
_Then for all \(d,K,n,m\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) we have that \(U_{n,m}^{d,\theta,K,\varepsilon}\) is measurable and \(\Big{(}\mathbb{E}\Big{[}\Big{|}U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d,K,\varepsilon}(t,x)\Big{|}^{2}\Big{]}\Big{)}^{\frac{1}{2}}\leq 6e^{\frac{m}{2}}m^{-\frac{n}{2}}e^{12cTn}(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}.\)_
Proof of Lemma 3.3.: For measurability see [23, Lemma 3.2]. Next, (25) and the triangle inequality show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(x\in\mathbb{R}^{d}\), \(w\in\mathbb{R}\) that
\[\Big{|}g_{\varepsilon}^{d}(x)\Big{|}\leq\Big{|}g_{\varepsilon}^{d}(0)\Big{|}+( cd^{c}T^{-1})^{\frac{1}{2}}\|x\|\leq(cd^{c}T^{-1})^{\frac{1}{2}}+(cd^{c}T^{-1})^{ \frac{1}{2}}\|x\|\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}} \tag{122}\]
and
\[|f_{\varepsilon}(w)|\leq|f_{\varepsilon}(0)|+c^{\frac{1}{2}}|w|\leq(cd^{c}T^{ -3})^{\frac{1}{2}}+c^{\frac{1}{2}}|w|. \tag{123}\]
First, for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) we have that
\[\mathbb{E}\bigg{[}d^{c}+\left\|X_{s}^{d,0,K,\varepsilon,t,x}\right\|^{2}\bigg{]}\leq(d^{c}+\|x\|^{2})e^{7c(s-t)} \tag{124}\]
(cf. Lemma 3.2). This, (122), and Jensen's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}\mathbb{E}\Big{[}\Big{|}g_{\varepsilon}^{d}(X_{T}^{d,0,K,\varepsilon,t,x})\Big{|}\Big{]}&\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\mathbb{E}\Bigg{[}\bigg{(}d^{c}+\left\|X_{T}^{d,0,K,\varepsilon,t,x}\right\|^{2}\bigg{)}^{\frac{1}{2}}\Bigg{]}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left(\mathbb{E}\bigg{[}d^{c}+\left\|X_{T}^{d,0,K,\varepsilon,t,x}\right\|^{2}\bigg{]}\right)^{\frac{1}{2}}\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}\left((d^{c}+\|x\|^{2})e^{7cT}\right)^{\frac{1}{2}}\\ &=2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{3.5cT}.\end{split} \tag{125}\]
This, (120), (123), the fact that \(c\geq 1\), Jensen's inequality, and (124) show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\begin{split}&\left|u^{d,K,\varepsilon}(t,x)\right|\\ &\leq\mathbb{E}\Big{[}\Big{|}g_{\varepsilon}^{d}(X_{T}^{d,0,K, \varepsilon,t,x})\Big{|}\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}\Big{|}f_{ \varepsilon}(u^{d,K,\varepsilon}(r,X_{r}^{d,0,K,\varepsilon,t,x}))\Big{|} \Big{]}\,dr\\ &\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{ 3.5cT}+\int_{t}^{T}\Big{(}(cd^{c}T^{-3})^{\frac{1}{2}}+c^{\frac{1}{2}}\mathbb{ E}\Big{[}\Big{|}u^{d,K,\varepsilon}(r,X_{r}^{d,0,K,\varepsilon,t,x})\Big{|} \Big{]}\Big{)}\,dr\\ &\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{ 3.5cT}\\ &\quad+\int_{t}^{T}c\left[\sup_{y\in\mathbb{R}^{d}}\frac{\left|u ^{d,K,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}} \right]e^{3.5c(T-r)}\mathbb{E}\Bigg{[}\bigg{(}d^{c}+\left\|X_{r}^{d,0,K, \varepsilon,t,x}\right\|^{2}\bigg{)}^{\frac{1}{2}}\Bigg{]}\,dr\\ &\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^{ 3.5cT}\\ &\quad+\int_{t}^{T}c\left[\sup_{y\in\mathbb{R}^{d}}\frac{\left|u ^{d,K,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}} \right]e^{3.5c(T-r)}\left((d^{c}+\|x\|^{2})e^{7c(r-t)}\right)^{\frac{1}{2}}\, dr\\ &=3(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}e^ {3.5cT}\\ &\quad+\int_{t}^{T}c\left[\sup_{y\in\mathbb{R}^{d}}\frac{\left|u ^{d,K,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}} \right]e^{3.5c(T-t)}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\,dr.\end{split} \tag{126}\]
Dividing by \(e^{3.5c(T-t)}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\) we then obtain that for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(\varepsilon\in(0,1)\) we have that
\[\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(t,y)\right|}{e^{3.5c(T-t)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}e^{3.5cT}+\int_{t}^{T}c\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(r,y)\right|}{e^{3.5c(T-r)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\,dr. \tag{127}\]
This, (119), and Gronwall's inequality show for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(\varepsilon\in(0,1)\) that
\[\sup_{y\in\mathbb{R}^{d}}\frac{\left|u^{d,K,\varepsilon}(t,y)\right|}{e^{3.5c(T-t)}(d^{c}+\|y\|^{2})^{\frac{1}{2}}}\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}e^{3.5cT}\cdot e^{cT}. \tag{128}\]
This shows for all \(d,K\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\left|u^{d,K,\varepsilon}(t,x)\right|\leq 3(cd^{c}T^{-1})^{\frac{1}{2}}e^{8cT}(d^{c} +\|x\|^{2})^{\frac{1}{2}}. \tag{129}\]
Next, (81) and (122) show for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\) that
\[\frac{T|f_{\varepsilon}(0)|+|g_{\varepsilon}^{d}(x)|}{(d^{c}+\|x\|^{2})^{\frac{1}{2}}}\leq\frac{T\cdot T^{-\frac{3}{2}}c^{\frac{1}{2}}d^{\frac{c}{2}}+2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}}{(d^{c}+\|x\|^{2})^{\frac{1}{2}}}\leq 3T^{-\frac{1}{2}}(cd^{c})^{\frac{1}{2}}. \tag{130}\]
This, [23, Corollary 3.12] (applied for all \(d,K\in\mathbb{N}\), \(\varepsilon\in(0,1)\) with \(f\gets f_{\varepsilon}\), \(g\gets g_{\varepsilon}^{d}\), \(\varphi\leftarrow(\mathbb{R}^{d}\ni x\mapsto(d^{c}+\|x\|^{2})^{\frac{1}{2}}\in(0,\infty))\), \((Y_{s}^{\theta}(x))_{\theta\in\Theta}\leftarrow(X_{s}^{d,\theta,K,\varepsilon,t,x})_{\theta\in\Theta}\), \((U_{n,m}^{\theta})_{\theta\in\Theta,n,m\in\mathbb{Z}}\leftarrow(U_{n,m}^{d,\theta,K,\varepsilon})_{\theta\in\Theta,n,m\in\mathbb{Z}}\) in the notation of [23, Corollary 3.12]), (124), and (129) show for all \(d,K,n,m\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(\theta\in\Theta\) that
\[\begin{split}&\sup_{t\in[0,T],x\in\mathbb{R}^{d}}\frac{\Big{(}\mathbb{E}\Big{[}\Big{|}U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d,K,\varepsilon}(t,x)\Big{|}^{2}\Big{]}\Big{)}^{\frac{1}{2}}}{(d^{c}+\|x\|^{2})^{\frac{1}{2}}}\\ &\leq 2e^{\frac{m}{2}}m^{-\frac{n}{2}}(1+2Tc)^{n-1}e^{3.5cT}\left(\sup_{x\in\mathbb{R}^{d}}\frac{T|f_{\varepsilon}(0)|+|g_{\varepsilon}^{d}(x)|}{(d^{c}+\|x\|^{2})^{\frac{1}{2}}}+Tc\sup_{t\in[0,T],x\in\mathbb{R}^{d}}\frac{\big{|}u^{d,K,\varepsilon}(t,x)\big{|}}{(d^{c}+\|x\|^{2})^{\frac{1}{2}}}\right)\\ &\leq 2e^{\frac{m}{2}}m^{-\frac{n}{2}}(e^{2cT})^{n-1}e^{3.5cT}\left(3T^{-\frac{1}{2}}(cd^{c})^{\frac{1}{2}}+Tc\cdot 3(cd^{c}T^{-1})^{\frac{1}{2}}e^{8cT}\right)\\ &\leq 2e^{\frac{m}{2}}m^{-\frac{n}{2}}(e^{2cT})^{n-1}e^{3.5cT}(1+Tc)\cdot 3(cd^{c}T^{-1})^{\frac{1}{2}}e^{8cT}\\ &\leq 6e^{\frac{m}{2}}m^{-\frac{n}{2}}(e^{2cT})^{n-1}e^{12cT}(cd^{c}T^{-1})^{\frac{1}{2}}\\ &\leq 6e^{\frac{m}{2}}m^{-\frac{n}{2}}e^{12cTn}(cd^{c}T^{-1})^{\frac{1}{2}}.\end{split} \tag{131}\]
This shows for all \(\theta\in\Theta\), \(d,K,n,m\in\mathds{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathds{R}^{d}\) that
\[\Big{(}\mathbb{E}\Big{[}\Big{|}U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d,K,\varepsilon}(t,x)\Big{|}^{2}\Big{]}\Big{)}^{\frac{1}{2}}\leq 6e^{\frac{m}{2}}m^{-\frac{n}{2}}e^{12cTn}(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}. \tag{132}\]
This completes the proof of Lemma 3.3.
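Before turning to DNN representations, we note that the recursion (121) is directly implementable. The sketch below is a toy illustration only: scalar state, \(f(u)=\sin(u)\) and \(g(x)=x\) standing in for \(f_{\varepsilon}\) and \(g_{\varepsilon}^{d}\), and a one-step Euler flow standing in for \(X^{d,\theta,K,\varepsilon}\); it mirrors the bookkeeping of (121), namely \(m^{n-\ell}\) samples at level \(\ell\), the uniform time \(\mathfrak{T}_{t}=t+(T-t)\mathfrak{t}\), and the telescoping difference \(f_{\varepsilon}\circ U_{\ell,m}-f_{\varepsilon}\circ U_{\ell-1,m}\).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
T = 1.0
f = math.sin                      # stand-in for f_eps (our assumption)
g = lambda x: x                   # stand-in for g_eps^d (our assumption)

def flow(t, s, x):
    """Stand-in for X_s^{t,x}: one Euler step of dX = -X dt + dW (toy dynamics)."""
    return x - x * (s - t) + rng.normal(0.0, math.sqrt(s - t))

def U(n, m, t, x):
    """MLP approximation U_{n,m}(t,x) following the structure of (121)."""
    if n == 0:
        return 0.0
    total = sum(g(flow(t, T, x)) for _ in range(m ** n)) / m ** n
    for l in range(n):
        acc = 0.0
        for _ in range(m ** (n - l)):
            r = t + (T - t) * rng.uniform()   # the random time T_t^theta
            y = flow(t, r, x)
            diff = f(U(l, m, r, y))
            if l > 0:                         # telescoping correction term
                diff -= f(U(l - 1, m, r, y))
            acc += diff
        total += (T - t) * acc / m ** (n - l)
    return total

print(U(3, 4, 0.0, 1.0))   # a single MLP sample of u(0, 1)
```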
## 4. DNNs
### Properties of operations associated to DNNs
In Setting 4.1 below we introduce operations which are important for constructing the random DNN that represents the MLP approximations in the proof of Lemma 4.12.
**Setting 4.1**.: _Assume Setting 1.1, let \(\mathfrak{n}_{n}^{d}\in\mathbf{D}\), \(n\in[3,\infty)\cap\mathbb{Z}\), \(d\in\mathds{N}\), satisfy for all \(n\in[3,\infty)\cap\mathds{N}\), \(d\in\mathds{N}\) that_
\[\mathfrak{n}_{n}^{d}=(d,\underbrace{2d,\ldots,2d}_{(n-2)\text{ times}},d)\in\mathds{N}^{n}, \tag{133}\]
_let \(\mathfrak{n}_{n}\in\mathbf{D}\), \(n\in[3,\infty)\), satisfy for all \(n\in[3,\infty)\) that \(\mathfrak{n}_{n}=\mathfrak{n}_{n}^{1}\), let \(\boxplus\colon\mathbf{D}\times\mathbf{D}\to\mathbf{D}\) satisfy for all \(H\in\mathds{N}\), \(\alpha=(\alpha_{0},\alpha_{1},\ldots,\alpha_{H},\alpha_{H+1})\in\mathds{N}^{H +2}\), \(\beta=(\beta_{0},\beta_{1},\beta_{2},\ldots,\beta_{H},\beta_{H+1})\in\mathds{ N}^{H+2}\) that \(\alpha\boxplus\beta=(\alpha_{0},\alpha_{1}+\beta_{1},\ldots,\alpha_{H}+\beta_{H}, \beta_{H+1})\in\mathds{N}^{H+2},\) and let \(\odot\colon\mathbf{D}\times\mathbf{D}\to\mathbf{D}\) satisfy for all \(H_{1},H_{2}\in\mathds{N}\), \(\alpha=(\alpha_{0},\alpha_{1},\ldots,\alpha_{H_{1}},\alpha_{H_{1}+1})\in \mathds{N}^{H_{1}+2}\), \(\beta=(\beta_{0},\beta_{1},\ldots,\beta_{H_{2}},\beta_{H_{2}+1})\in\mathds{N }^{H_{2}+2}\) that \(\alpha\odot\beta=(\beta_{0},\beta_{1},\ldots,\beta_{H_{2}},\beta_{H_{2}+1}+ \alpha_{0},\alpha_{1},\alpha_{2},\ldots,\alpha_{H_{1}+1})\in\mathds{N}^{H_{1} +H_{2}+3}.\)_
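A minimal Python sketch of the two bookkeeping operations (all names ours): on dimension vectors, \(\boxplus\) adds the hidden-layer widths of two networks of equal depth (the parallelization used for sums of networks), while \(\odot\) concatenates two dimension vectors the way composition does, fusing the inner interface widths.

```python
def boxplus(alpha, beta):
    """alpha [+] beta: input width of alpha, output width of beta,
    hidden widths added; requires equal length, cf. the definition above."""
    assert len(alpha) == len(beta) >= 3
    return (alpha[0],) + tuple(a + b for a, b in zip(alpha[1:-1], beta[1:-1])) + (beta[-1],)

def odot(alpha, beta):
    """alpha (.) beta: dimension vector of 'alpha composed after beta';
    the interface width beta[-1] + alpha[0] becomes a hidden layer."""
    return tuple(beta[:-1]) + (beta[-1] + alpha[0],) + tuple(alpha[1:])

def frak_n(n, d=1):
    """The vector n_n^d = (d, 2d, ..., 2d, d) of length n from (133)."""
    return (d,) + (2 * d,) * (n - 2) + (d,)

a, b = (2, 7, 5, 1), (2, 4, 4, 1)
print(boxplus(a, b))   # (2, 11, 9, 1): hidden widths add, cf. Lemma 4.4
print(odot(a, b))      # (2, 4, 4, 3, 7, 5, 1): depths add, interface fuses
print(frak_n(4, d=3))  # (3, 6, 6, 3)
```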
To prove our main result in this section, presented in Lemma 4.12, we employ several results presented in Lemmas 4.2-4.10, which are basic facts on DNNs. The proofs of Lemmas 4.2-4.9 can be found in [8, 24] and are therefore omitted.
**Lemma 4.2** (\(\odot\) is associative-[24, Lemma 3.3]).: _Assume Setting 4.1 and let \(\alpha,\beta,\gamma\in\mathbf{D}\). Then we have that \((\alpha\odot\beta)\odot\gamma=\alpha\odot(\beta\odot\gamma)\)._
**Lemma 4.3** (\(\boxplus\) and associativity-[24, Lemma 3.4]).: _Assume Setting 4.1, let \(H,k,l\in\mathds{N}\), and let \(\alpha,\beta,\gamma\in\big{(}\{k\}\times\mathds{N}^{H}\times\{l\}\big{)}\). Then_
1. _we have that_ \(\alpha\boxplus\beta\in\big{(}\{k\}\times\mathds{N}^{H}\times\{l\}\big{)}\)_,_
2. _we have that_ \(\beta\boxplus\gamma\in\big{(}\{k\}\times\mathds{N}^{H}\times\{l\}\big{)}\)_, and_
3. _we have that_ \((\alpha\boxplus\beta)\boxplus\gamma=\alpha\boxplus(\beta\boxplus\gamma)\)_._
**Lemma 4.4** (Triangle inequality-[24, Lemma 3.5]).: _Consider the notations given in Subsection 1.4, assume Setting 4.1, let \(k,l,H\in\mathds{N}\), \(\alpha,\beta\in\{k\}\times\mathds{N}^{H}\times\{l\}\). Then we have that \(\|\alpha\boxplus\beta\|\leq\|\alpha\|+\|\beta\|\)._
**Lemma 4.5** (DNNs for affine transformations-[24, Lemma 3.7]).: _Assume Setting 1.1 and let \(d,m\in\mathbb{N}\), \(\lambda\in\mathbb{R}\), \(b\in\mathbb{R}^{d}\), \(a\in\mathbb{R}^{m}\), \(\Psi\in\mathbf{N}\) satisfy that \(\mathcal{R}(\Psi)\in C(\mathbb{R}^{d},\mathbb{R}^{m})\). Then we have that \(\lambda\left((\mathcal{R}(\Psi))(\cdot+b)+a\right)\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathcal{D}(\Psi)\})\)._
**Lemma 4.6** (Composition of functions generated by DNNs-[24, Lemma 3.8]).: _Assume Setting 4.1 and let \(d_{1},d_{2},d_{3}\in\mathbb{N}\), \(f_{1}\in C(\mathbb{R}^{d_{2}},\mathbb{R}^{d_{3}})\), \(f_{2}\in C(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}})\), \(\alpha,\beta\in\mathbf{D}\) satisfy both that \(f_{1}\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\alpha\})\) as well as \(f_{2}\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\beta\})\). Then we have that \((f_{1}\circ f_{2})\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\alpha\odot\beta\})\)._
**Lemma 4.7** (Sum of DNNs of the same length-[24, Lemma 3.9]).: _Consider the notations given in Subsection 1.4, assume Setting 4.1, and let \(p,q,M,H\in\mathbb{N}\), \(\alpha_{1},\alpha_{2},\ldots,\alpha_{M}\in\mathbb{R}\), \(k_{i}\in\mathbf{D}\), \(g_{i}\in C(\mathbb{R}^{p},\mathbb{R}^{q})\), \(i\in[1,M]\cap\mathbb{N}\), satisfy for all \(i\in[1,M]\cap\mathbb{N}\) that \(\dim(k_{i})=H+2\) and \(g_{i}\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=k_{i}\})\). Then we have that \(\sum_{i=1}^{M}\alpha_{i}g_{i}\in\mathcal{R}\left(\left\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\boxplus_{i=1}^{M}k_{i}\right\}\right)\)._
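At the level of weights, Lemma 4.7 is realized by block-diagonal stacking: the hidden layers of the summands run in parallel (so the hidden widths add, as in \(\boxplus\)) and the output layer forms the weighted sum. A sketch for two one-hidden-layer ReLU networks; everything here (ReLU activation, names, sizes) is our illustrative assumption.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def run(net, x):
    (W1, b1), (W2, b2) = net
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(3)
p, q, h1, h2 = 4, 2, 5, 7
g1 = [(rng.normal(size=(h1, p)), rng.normal(size=h1)),
      (rng.normal(size=(q, h1)), rng.normal(size=q))]
g2 = [(rng.normal(size=(h2, p)), rng.normal(size=h2)),
      (rng.normal(size=(q, h2)), rng.normal(size=q))]
a1, a2 = 2.0, -0.5

# Stacked network: hidden width h1 + h2, i.e. dimension vector k1 [+] k2.
W1 = np.vstack([g1[0][0], g2[0][0]])
b1 = np.concatenate([g1[0][1], g2[0][1]])
W2 = np.hstack([a1 * g1[1][0], a2 * g2[1][0]])
b2 = a1 * g1[1][1] + a2 * g2[1][1]
net_sum = [(W1, b1), (W2, b2)]

x = rng.normal(size=p)
print(np.allclose(run(net_sum, x), a1 * run(g1, x) + a2 * run(g2, x)))  # True
```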
**Lemma 4.8** (Existence of DNNs with \(H\) hidden layers for \(\mathrm{Id}_{\mathbb{R}^{d}}\)-[8, Lemma 3.6]).: _Assume Setting 4.1 and let \(d,H\in\mathbb{N}\). Then we have that \(\mathrm{Id}_{\mathbb{R}^{d}}\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathfrak{n}_{H+2}^{d}\})\)._
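For the ReLU activation \(\varrho(x)=\max\{x,0\}\) the identity in Lemma 4.8 can be realized explicitly via \(x=\varrho(x)-\varrho(-x)\) and deepened by repeating this representation, which is exactly the dimension vector \(\mathfrak{n}_{H+2}^{d}=(d,2d,\ldots,2d,d)\). The sketch below assumes the activation of Setting 1.1 is ReLU; names are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def identity_network(d, hidden_layers):
    """ReLU network with dimension vector (d, 2d, ..., 2d, d) realizing Id_{R^d}."""
    E = np.vstack([np.eye(d), -np.eye(d)])   # lift x -> (x, -x), width 2d
    P = np.hstack([np.eye(d), -np.eye(d)])   # project (u, v) -> u - v
    layers = [(E, np.zeros(2 * d))]
    for _ in range(hidden_layers - 1):       # keeps (x^+, x^-) invariant
        layers.append((E @ P, np.zeros(2 * d)))
    layers.append((P, np.zeros(d)))
    return layers

def realize(layers, x):
    for W, b in layers[:-1]:
        x = relu(W @ x + b)                  # ReLU on every hidden layer
    W, b = layers[-1]
    return W @ x + b                         # affine output layer

x = np.array([1.5, -2.0, 0.3])
print(realize(identity_network(3, hidden_layers=4), x))   # recovers x
```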
**Lemma 4.9** ([8, Lemma 3.7]).: _Consider the notations given in Subsection 1.4, assume Setting 1.1, let \(H,p,q\in\mathbb{N}\), and let \(g\in C(\mathbb{R}^{p},\mathbb{R}^{q})\) satisfy that \(g\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\dim(\mathcal{D}(\Phi))=H+2\})\). Then for all \(n\in\mathbb{N}_{0}\) we have that \(g\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\dim(\mathcal{D}(\Phi))=H+2+n\})\)._
**Lemma 4.10**.: _Consider the notations given in Subsection 1.4 and assume Setting 4.1. Then for all \(n\in\mathbb{N}\), \(d_{0},d_{1},\ldots,d_{n}\in\mathbb{N}\), \(f_{1}\in C(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{0}}),f_{2}\in C(\mathbb{R}^{d_{2}},\mathbb{R}^{d_{1}}),\ldots,f_{n}\in C(\mathbb{R}^{d_{n}},\mathbb{R}^{d_{n-1}})\), \(\phi_{1},\phi_{2},\ldots,\phi_{n}\in\mathbf{N}\) with \(\forall i\in[1,n]\cap\mathbb{Z}\colon f_{i}=\mathcal{R}(\phi_{i})\) we have that_
\[\left|\!\left|\!\left|\mathcal{D}(\phi_{1})\odot\mathcal{D}(\phi_{2})\odot\cdots\odot\mathcal{D}(\phi_{n})\right|\!\right|\!\right|\leq\max\left\{2d_{0},2d_{1},\ldots,2d_{n},\left|\!\left|\!\left|\mathcal{D}(\phi_{1})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\phi_{2})\right|\!\right|\!\right|,\ldots,\left|\!\left|\!\left|\mathcal{D}(\phi_{n})\right|\!\right|\!\right|\right\}. \tag{134}\]

**Lemma 4.11**.: _Assume Setting 1.1, let \(K\in\mathbb{N}\), \(T\in(0,\infty)\), let \((\Omega,\mathcal{F},\mathbb{P},(\mathbb{F}_{t})_{t\in[0,T]})\) be a probability space satisfying the usual conditions, let \(\Theta=\cup_{n\in\mathbb{N}}\mathbb{Z}^{n}\), for every \(d\in\mathbb{N}\) let \(W^{d,\theta}\colon\Omega\times[0,T]\to\mathbb{R}^{d}\), \(\theta\in\Theta\), be independent standard \((\mathbb{F}_{t})_{t\in[0,T]}\)-Brownian motions, for every \(d\in\mathbb{N}\) let \(N^{d,\theta}\), \(\theta\in\Theta\), be independent \((\mathbb{F}_{t})_{t\in[0,T]}\)-Poisson random measures on \([0,\infty)\times(\mathbb{R}^{d}\setminus\{0\})\) with intensity \(\nu^{d}\), for every \(d\in\mathbb{N}\), \(\theta\in\Theta\) let \(\tilde{N}^{d,\theta}(dt,dz)=N^{d,\theta}(dt,dz)-dt\,\nu^{d}(dz)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) let \(\beta_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\sigma_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d\times d})\), \(\Phi_{\beta_{\varepsilon}^{d}},\Phi_{\sigma_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy that \(\beta_{\varepsilon}^{d}=\mathcal{R}(\Phi_{\beta_{\varepsilon}^{d}})\) and \(\sigma_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{\sigma_{\varepsilon}^{d},v})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\gamma_{\varepsilon}^{d}\colon\mathbb{R}^{2d}\to\mathbb{R}^{d}\), \(F_{\varepsilon}^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), \(G^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be measurable and satisfy for all \(y,z\in\mathbb{R}^{d}\) that \(\gamma_{\varepsilon}^{d}(y,z)=F_{\varepsilon}^{d}(y)G^{d}(z)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) let \(\Phi_{F_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy that \(F_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{F_{\varepsilon}^{d},v})\), assume for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) that \(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\) and \(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\), let \(\lfloor\cdot\rfloor_{K}\colon\mathbb{R}\to\mathbb{R}\) satisfy for all \(t\in\mathbb{R}\) that \(\lfloor t\rfloor_{K}=\max(\{0,\frac{T}{K},\frac{2T}{K},\ldots,T\}\cap((-\infty,t)\cup\{0\}))\), for every_
\(d\in\mathbb{N}\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\) let \((X_{s}^{d,\theta,K,\varepsilon,t,x})_{s\in[t,T]}\) satisfy that \(X_{t}^{d,\theta,K,\varepsilon,t,x}=x\) and_
\[\begin{split} X_{s}^{d,\theta,K,\varepsilon,t,x}&=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor u^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,du+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor u^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,dW_{u}^{d,\theta}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor u^{-}\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x},z)\,\tilde{N}^{d,\theta}(du,dz),\end{split} \tag{138}\]
_and let \(\omega\in\Omega\). Then there exists \((\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})_{d\in\mathbb{N},\,\theta\in\Theta,\,\varepsilon\in(0,1),\,t\in[0,T),\,s\in(t,T]}\subseteq\mathbf{N}\) such that_
1. _for all_ \(d\in\mathbb{N}\)_,_ \(\theta\in\Theta\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T)\)_,_ \(s\in(t,T]\)_,_ \(x\in\mathbb{R}^{d}\) _we have that_ \(\mathcal{R}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})\in C(\mathbb{R}^{d}, \mathbb{R}^{d})\) _and_ \((\mathcal{R}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))(x)=X_{s}^{d,\theta, K,\varepsilon,t,x}(\omega)\)_,_
2. _for all_ \(d\in\mathbb{N}\)_,_ \(\theta_{1},\theta_{2}\in\Theta\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t_{1}\in[0,T)\)_,_ \(s_{1}\in(t_{1},T]\)_,_ \(t_{2}\in[0,T)\)_,_ \(s_{2}\in(t_{2},T]\) _we have that_ \(\mathcal{D}(\mathcal{X}_{t_{1},s_{1}}^{d,\theta_{1},K,\varepsilon})=\mathcal{D}(\mathcal{X}_{t_{2},s_{2}}^{d,\theta_{2},K,\varepsilon})\)_,_
3. _for all_ \(d\in\mathbb{N}\)_,_ \(\theta\in\Theta\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T)\)_,_ \(s\in(t,T]\) _we have that_ \(\dim(\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))=K(\max\{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})),\dim(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})),\dim(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0}))\}-1)+1\)_, and_
4. _for all_ \(d\in\mathbb{N}\)_,_ \(\theta\in\Theta\)_,_ \(\varepsilon\in(0,1)\)_,_ \(t\in[0,T)\)_,_ \(s\in(t,T]\) _we have that_ \(\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|\leq 2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|\)_._
Proof of Lemma 4.11.: Throughout this proof let the notation in Setting 4.1 be given. First, observe that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(k\in[0,K-1]\cap\mathbb{Z}\), \(t\in[0,T)\), \(s\in[\max\{t,\frac{kT}{K}\},\max\{t,\frac{(k+1)T}{K}\}]\) we have that
\[\begin{split} X_{s}^{d,\theta,K,\varepsilon,t,x}(\omega)& =X_{\max\{t,\frac{kT}{K}\}}^{d,\theta,K,\varepsilon,t,x}(\omega)+ \beta_{\varepsilon}^{d}\left(X_{\max\{t,\frac{kT}{K}\}}^{d,\theta,K, \varepsilon,t,x}(\omega)\right)\left(s-\max\{t,\frac{kT}{K}\}\right)\\ &\quad+\sigma_{\varepsilon}^{d}\left(X_{\max\{t,\frac{kT}{K}\}}^{d, \theta,K,\varepsilon,t,x}(\omega)\right)\left(W_{s}^{d,\theta}(\omega)-W_{ \max\{t,\frac{kT}{K}\}}^{d,\theta}(\omega)\right)\\ &\quad+F_{\varepsilon}^{d}\left(X_{\max\{t,\frac{kT}{K}\}}^{d, \theta,K,\varepsilon,t,x}(\omega)\right)\int_{\max\{t,\frac{kT}{K}\}}^{s} \int_{\mathbb{R}^{d}\setminus\{0\}}G^{d}(z)\tilde{N}^{d,\theta}(\omega)(du,dz). \end{split} \tag{139}\]
Next, for every \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(k\in[1,K]\cap\mathbb{Z}\), \(t\in[0,T)\), \(s\in(t,T]\) let \(J_{k}(s)\in\mathbb{R}\), \(\phi_{t,s,k}^{d,\theta,K,\varepsilon}(x)\in\mathbb{R}^{d}\) satisfy that
\[\begin{split} J_{k}(s)&=\max\{t,\frac{(k-1)T}{K}\} \mathbbm{1}_{[0,\max\{t,\frac{(k-1)T}{K}\}]}(s)\\ &\quad+s\mathbbm{1}_{(\max\{t,\frac{(k-1)T}{K}\},\max\{t,\frac{ kT}{K}\}]}(s)+\max\{t,\frac{kT}{K}\}\mathbbm{1}_{(\max\{t,\frac{kT}{K}\},T]}(s) \end{split} \tag{140}\]
and
\[\begin{split}\phi_{t,s,k}^{d,\theta,K,\varepsilon}(x)&=x+\beta_{\varepsilon}^{d}(x)\left(J_{k}(s)-\max\{t,\tfrac{(k-1)T}{K}\}\right)+\sigma_{\varepsilon}^{d}(x)\left(W_{J_{k}(s)}^{d,\theta}(\omega)-W_{\max\{t,\frac{(k-1)T}{K}\}}^{d,\theta}(\omega)\right)\\ &\quad+F_{\varepsilon}^{d}(x)\int_{\max\{t,\frac{(k-1)T}{K}\}}^{J_{k}(s)}\int_{\mathbb{R}^{d}\setminus\{0\}}G^{d}(z)\,\tilde{N}^{d,\theta}(\omega)(du,dz).\end{split} \tag{141}\]
Next, for every \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K]\cap\mathbb{Z}\), \(t\in[0,T)\), \(s\in(t,T]\) let
\[\begin{split}\psi_{t,s,k}^{d,\theta,K,\varepsilon}=\phi_{t,s,k}^{d, \theta,K,\varepsilon}\circ\phi_{t,s,k-1}^{d,\theta,K,\varepsilon}\circ\ldots \circ\phi_{t,s,1}^{d,\theta,K,\varepsilon}.\end{split} \tag{142}\]
Note that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K-1]\cap\mathbb{Z}\), \(s\in[0,\max\{t,\frac{(k-1)T}{K}\}]\) we have that \(\phi_{t,s,k}^{d,\theta,K,\varepsilon}=\mathrm{Id}_{\mathbb{R}^{d}}\). This ensures for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K-1]\cap\mathbb{Z}\), \(n\in[k+1,K]\cap\mathbb{Z}\), \(s\in[0,\max\{t,\frac{kT}{K}\}]\) that \(\psi_{t,s,k}^{d,\theta,K,\varepsilon}=\psi_{t,s,n}^{d,\theta,K,\varepsilon}\) and in particular \(\psi_{t,s,k}^{d,\theta,K,\varepsilon}=\psi_{t,s,K}^{d,\theta,K,\varepsilon}\). Observe that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K]\cap\mathbb{Z}\), \(s\in[\max\{t,\frac{(k-1)T}{K}\},\max\{t,\frac{kT}{K}\}]\), \(x\in\mathbb{R}^{d}\) we have that

\[X_{s}^{d,\theta,K,\varepsilon,t,x}(\omega)=\psi_{t,s,k}^{d,\theta,K,\varepsilon}(x)=\psi_{t,s,K}^{d,\theta,K,\varepsilon}(x). \tag{143}\]

Moreover, (141) shows that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K]\cap\mathbb{Z}\), \(t\in[0,T)\), \(s\in(t,T]\) there exist \(a\in\mathbb{R}\), \(v,w\in\mathbb{R}^{d}\) such that for all \(x\in\mathbb{R}^{d}\) we have that

\[\phi_{t,s,k}^{d,\theta,K,\varepsilon}(x)=\mathrm{Id}_{\mathbb{R}^{d}}(x)+a\beta_{\varepsilon}^{d}(x)+\sigma_{\varepsilon}^{d}(x)v+F_{\varepsilon}^{d}(x)w. \tag{144}\]
Lemma 4.9 allows us to assume without loss of generality that \(\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))=\dim(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0}))=\dim(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0}))\). Moreover, Lemma 4.8 shows for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) that
\[\mathrm{Id}_{\mathbb{R}^{d}}\in\mathcal{R}(\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\}). \tag{145}\]
This, (141), (144), and Lemma 4.7 show for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(k\in[1,K]\cap\mathbb{Z}\), \(t\in[0,T)\), \(s\in(t,T]\) that
\[\phi_{t,s,k}^{d,\theta,K,\varepsilon}(\cdot)\in\mathcal{R}\left(\left\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\boxplus\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\boxplus\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\boxplus\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right\}\right). \tag{146}\]
This, (143), and Lemma 4.6 show that there exists \((\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})_{d\in\mathbb{N},\theta\in\Theta, \varepsilon\in(0,1),t\in[0,T),s\in(t,T]}\subseteq\mathbf{N}\) such that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\), \(x\in\mathbb{R}^{d}\) we have that
\[\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})=\overset{K}{\underset{k=1}{\bigodot}}\left[\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\boxplus\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\boxplus\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\boxplus\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right],\qquad(\mathcal{R}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))(x)=X_{s}^{d,\theta,K,\varepsilon,t,x}(\omega). \tag{147}\]
This, the definition of \(\odot\) and an induction argument show that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\), \(x\in\mathbb{R}^{d}\) we have that
\[\dim(\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))=K(\dim( \mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))-1)+1. \tag{148}\]
Next, (147), Lemma 4.10, the triangle inequality (cf. Lemma 4.4), and (133) show that for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\), \(x\in\mathbb{R}^{d}\) we have that
\[\begin{split}\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|&=\left|\!\left|\!\left|\overset{K}{\underset{k=1}{\bigodot}}\left[\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\boxplus\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\boxplus\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\boxplus\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right]\right|\!\right|\!\right|\\ &\leq\max\left\{2d,\left|\!\left|\!\left|\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\boxplus\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\boxplus\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\boxplus\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|\right\}\\ &\leq\max\left\{2d,\left|\!\left|\!\left|\mathfrak{n}_{\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))}^{d}\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|\right\}\\ &=2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|.\end{split} \tag{149}\]
The proof of Lemma 4.11 is thus completed.
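The mechanism behind Lemma 4.11 is that, for fixed \(\omega\), each Euler step is a deterministic map and the discretized path is the composition (142) of these maps. A numeric sanity check of this compositional structure with toy coefficients and pre-drawn increments (all names and choices ours):

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, T = 3, 5, 1.0
h = T / K
beta = lambda x: -x                              # toy drift (our assumption)
sigma = lambda x: np.eye(d)                      # toy diffusion
dW = rng.normal(0.0, np.sqrt(h), size=(K, d))    # frozen increments: omega fixed

def step(k):
    """phi_k: the k-th Euler step as a plain map of x, cf. (141)."""
    return lambda x: x + beta(x) * h + sigma(x) @ dW[k]

x = np.array([1.0, 2.0, -1.0])                   # direct simulation of the scheme
for k in range(K):
    x = x + beta(x) * h + sigma(x) @ dW[k]

y = np.array([1.0, 2.0, -1.0])                   # composition psi = phi_K o ... o phi_1
for k in range(K):
    y = step(k)(y)
print(np.allclose(x, y))                         # True, cf. (142)-(143)
```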
### DNN representation of our MLP approximations
In Lemma 4.12 below we prove that the MLP approximations under consideration can be represented by DNNs.
**Lemma 4.12**.: _Assume the setting of Lemma 4.11, for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(f_{\varepsilon}\in C(\mathbb{R},\mathbb{R})\), \(g_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R})\), \(\Phi_{f_{\varepsilon}},\Phi_{g_{\varepsilon}^{d}}\in\mathbf{N}\) satisfy that \(\mathcal{R}(\Phi_{f_{\varepsilon}})=f_{\varepsilon}\) and \(\mathcal{R}(\Phi_{g_{\varepsilon}^{d}})=g_{\varepsilon}^{d}\), let \(\mathfrak{t}^{\theta}\colon\Omega\to[0,1]\), \(\theta\in\Theta\), be i.i.d. random variables which satisfy for all \(t\in(0,1)\) that \(\mathbb{P}(\mathfrak{t}^{0}\leq t)=t\), for every \(\theta\in\Theta\), \(t\in[0,T]\) let \(\mathfrak{T}_{t}^{\theta}\colon\Omega\to\mathbb{R}\) satisfy that \(\mathfrak{T}_{t}^{\theta}=t+(T-t)\mathfrak{t}^{\theta}\), assume for all \(d\in\mathbb{N}\) that \((\mathfrak{t}^{\theta})_{\theta\in\Theta}\), \((N^{d,\theta})_{\theta\in\Theta}\), and \((W^{d,\theta})_{\theta\in\Theta}\) are independent, for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(U_{n,m}^{d,\theta,K,\varepsilon}\colon[0,T]\times\mathbb{R}^{d}\times\Omega\to\mathbb{R}\), \(\theta\in\Theta\), \(n,m\in\mathbb{Z}\), satisfy for all \(\theta\in\Theta\), \(n\in\mathbb{N}_{0}\), \(m\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that_
\[\begin{split}& U_{n,m}^{d,\theta,K,\varepsilon}(t,x)=\frac{ \mathbbm{1}_{\mathbb{N}}(n)}{m^{n}}\sum_{i=1}^{m^{n}}g_{\varepsilon}^{d} \Big{(}X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}\Big{)}\\ &\quad+\sum_{\ell=0}^{n-1}\frac{(T-t)}{m^{n-\ell}}\sum_{i=1}^{m^{ n-\ell}}\Big{(}f_{\varepsilon}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}- \mathbbm{1}_{\mathbb{N}}(\ell)f_{\varepsilon}\circ U_{\ell-1,m}^{d,(\theta,- \ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{z}_{t}^{(\theta,\ell,i)},X_{ \mathfrak{z}_{t}^{(\theta,\ell,i)}}^{d,(\theta,\ell,i),K,\varepsilon,t,x} \Big{)},\end{split} \tag{150}\]
_and let \((c_{d,\varepsilon})_{d\in\mathbb{N},\varepsilon\in(0,1)}\subseteq\mathbb{R}\) satisfy for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) that_
\[c_{d,\varepsilon}\geq 2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|. \tag{151}\]
_Then for all \(m\in\mathbb{N}\), \(n\in\mathbb{N}_{0}\), \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) there exists \((\Phi_{n,m,t}^{d,\theta,K,\varepsilon})_{t\in[0,T],\theta\in\Theta}\subseteq\mathbf{N}\) such that_
1. _we have for all_ \(t_{1},t_{2}\in[0,T]\)_,_ \(\theta_{1},\theta_{2}\in\Theta\) _that_ \(\mathcal{D}(\Phi_{n,m,t_{1}}^{d,\theta_{1},K,\varepsilon})=\mathcal{D}(\Phi_{n,m,t_{2}}^{d,\theta_{2},K,\varepsilon})\)_,_
2. _we have for all_ \(t\in[0,T]\)_,_ \(\theta\in\Theta\) _that_ \[\begin{split}\dim(\mathcal{D}(\Phi_{n,m,t}^{d,\theta,K,\varepsilon}))&=(n+1)\left[K\left(\max\left\{\dim(\mathcal{D}(\Phi_{\beta^{d}_{\varepsilon}})),\dim(\mathcal{D}(\Phi_{\sigma^{d}_{\varepsilon},0})),\dim(\mathcal{D}(\Phi_{F^{d}_{\varepsilon},0}))\right\}-1\right)+1\right]\\ &\qquad+n(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g^{d}_{\varepsilon}}))-1,\end{split}\] (152)
3. _we have for all_ \(t\in[0,T]\)_,_ \(\theta\in\Theta\) _that_ \(\left|\!\left|\!\left|\mathcal{D}(\Phi_{n,m,t}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|\leq c_{d,\varepsilon}(3m)^{n}\)_, and_
4. _we have for all_ \(t\in[0,T]\)_,_ \(\theta\in\Theta\)_,_ \(x\in\mathbb{R}^{d}\) _that_ \(U_{n,m}^{d,\theta,K,\varepsilon}(t,x,\omega)=(\mathcal{R}(\Phi_{n,m,t}^{d, \theta,K,\varepsilon}))(x)\)_._
Proof of Lemma 4.12.: Throughout this proof let the notation in Setting 4.1 be given. First, Lemma 4.11 and (151) show that there exists \((\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})_{d\in\mathbb{N},\theta\in\Theta, \varepsilon\in(0,1),t\in[0,T),s\in(t,T]}\subseteq\mathbf{N}\) such that
for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\), \(x\in\mathbb{R}^{d}\) we have that \[(\mathcal{R}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))(x)=X_{s}^{d,\theta,K,\varepsilon,t,x}(\omega),\] (153) for all \(d\in\mathbb{N}\), \(\theta_{1},\theta_{2}\in\Theta\), \(\varepsilon\in(0,1)\), \(t_{1}\in[0,T)\), \(s_{1}\in(t_{1},T]\), \(t_{2}\in[0,T)\), \(s_{2}\in(t_{2},T]\) we have that
\[\mathcal{D}(\mathcal{X}_{t_{1},s_{1}}^{d,\theta_{1},K,\varepsilon})=\mathcal{D}(\mathcal{X}_{t_{2},s_{2}}^{d,\theta_{2},K,\varepsilon}),\] (154) for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\) we have that \[\dim(\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))=K(\max\{\dim(\mathcal{D}(\Phi_{\beta^{d}_{\varepsilon}})),\dim(\mathcal{D}(\Phi_{\sigma^{d}_{\varepsilon},0})),\dim(\mathcal{D}(\Phi_{F^{d}_{\varepsilon},0}))\}-1)+1,\] (155) and for all \(d\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(t\in[0,T)\), \(s\in(t,T]\) we have that \[\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|\leq 2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|\leq c_{d,\varepsilon}. \tag{156}\]
Throughout the rest of this proof let \(m\in\mathbb{N}\), \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) be fixed, let \(\mathrm{Id}_{\mathbb{R}}\) be the identity on \(\mathbb{R}\), and let \(L\in\mathbb{Z}\) satisfy that
\[L=\dim(\mathcal{D}(\mathcal{X}_{t,s}^{d,\theta,K,\varepsilon}))=K\left(\max\left\{\dim(\mathcal{D}(\Phi_{\beta^{d}_{\varepsilon}})),\dim(\mathcal{D}(\Phi_{\sigma^{d}_{\varepsilon},0})),\dim(\mathcal{D}(\Phi_{F^{d}_{\varepsilon},0}))\right\}-1\right)+1 \tag{157}\]
(cf. (155)). We will prove the lemma via induction on \(n\in\mathbb{N}_{0}\). First, the base case \(n=0\) is true since the zero function can be represented by a DNN of arbitrary depth. For the induction step \(\mathbb{N}_{0}\ni n\mapsto n+1\in\mathbb{N}\) let \(n\in\mathbb{N}_{0}\) and assume that there exists \((\Phi_{\ell,m,t}^{d,\theta,K,\varepsilon})_{t\in[0,T],\theta\in\Theta}\subseteq\mathbf{N}\), \(\ell\in[0,n]\cap\mathbb{Z}\), such that
we have for all \(t_{1},t_{2}\in[0,T]\), \(\theta_{1},\theta_{2}\in\Theta\), \(\ell\in[0,n]\cap\mathbb{Z}\) that
\[\mathcal{D}(\Phi_{\ell,m,t_{1}}^{d,\theta_{1},K,\varepsilon})=\mathcal{D}(\Phi_ {\ell,m,t_{2}}^{d,\theta_{2},K,\varepsilon}), \tag{158}\]
we have for all \(t\in[0,T]\), \(\theta\in\Theta\), \(\ell\in[0,n]\cap\mathbb{Z}\) that
\[\dim(\mathcal{D}(\Phi_{\ell,m,t}^{d,\theta,K,\varepsilon}))=(\ell+1)L+\ell( \dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g_{ \varepsilon}^{d}}))-1, \tag{159}\]
we have for all \(t\in[0,T]\), \(\theta\in\Theta\), \(\ell\in[0,n]\cap\mathbb{Z}\) that
\[\left|\!\left|\!\left|\mathcal{D}(\Phi_{\ell,m,t}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|\leq c_{d,\varepsilon}(3m)^{\ell}, \tag{160}\]
and
we have for all \(t\in[0,T]\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\ell\in[0,n]\cap\mathbb{Z}\) that
\[U_{\ell,m}^{d,\theta,K,\varepsilon}(t,x,\omega)=(\mathcal{R}(\Phi_{\ell,m,t}^{d, \theta,K,\varepsilon}))(x). \tag{161}\]
Next, Lemma 4.8, the fact that \(g_{\varepsilon}^{d}=\mathcal{R}(\Phi_{g_{\varepsilon}^{d}})\), (153), (154), and Lemma 4.6 show for all \(\theta\in\Theta\), \(i\in[1,m^{n+1}]\cap\mathbb{Z}\), \(t\in[0,T]\) that
\[\begin{split}&g_{\varepsilon}^{d}\Big{(}X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}(\omega)\Big{)}=\mathrm{Id}_{\mathbb{R}}\left(g_{\varepsilon}^{d}\Big{(}X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}(\omega)\Big{)}\right)\\ &\in\mathcal{R}\left(\left\{\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathfrak{n}_{(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right\}\right).\end{split} \tag{162}\]
In addition, the definition of \(\odot\), (133), and (157) show that
\[\begin{split}&\dim\left(\mathfrak{n}_{(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)\\ &=\dim\left(\mathfrak{n}_{(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\right)+\dim\left(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right)+\dim\left(\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)-2\\ &=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1+\dim\left(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right)+L-2\\ &=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2\big{)}+(n+2)L+\dim\left(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right)-1.\end{split} \tag{163}\]
Next, Lemma 4.8, the fact that \(f_{\varepsilon}=\mathcal{R}(\Phi_{f_{\varepsilon}})\), (158), (161), (153), (154), and Lemma 4.6 show for all \(\theta\in\Theta\), \(i\in[1,m^{n+1}]\cap\mathbb{Z}\), \(t\in[0,T]\) that
\[\begin{split}&\Big{(}f_{\varepsilon}\circ U_{n,m}^{d,(\theta,n,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,n,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,n,i)}(\omega)}^{d,(\theta,n,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &\in\mathcal{R}\left(\Big{\{}\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\Big{\}}\right).\end{split} \tag{164}\]
In addition, the definition of \(\odot\), (159), and (157) show that
\[\begin{split}&\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}}) \odot\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X }_{0,T}^{d,0,K,\varepsilon})\right)\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\dim \left(\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\right)+\dim\left(\mathcal{ D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)-2\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\Big{(}(n +1)L+n(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g ^{d}_{\varepsilon}}))-1\Big{)}+L-2\\ &=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2\big{)} +(n+2)L+\dim\left(\mathcal{D}(\Phi_{g^{d}_{\varepsilon}})\right)-1.\end{split} \tag{165}\]
Furthermore, the fact that \(f_{\varepsilon}=\mathcal{R}(\Phi_{f_{\varepsilon}})\), Lemma 4.8, (158), (161), (153), (154), and Lemma 4.6 show for all \(\theta\in\Theta\), \(i\in[1,m^{n+1}]\cap\mathbb{Z}\), \(t\in[0,T]\), \(\ell\in[0,n-1]\cap\mathbb{Z}\) that
\[\begin{split}&\Big{(}f_{\varepsilon}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &=\Big{(}f_{\varepsilon}\circ\mathrm{Id}_{\mathbb{R}}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &\in\mathcal{R}\left(\Big{\{}\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\Big{\}}\right).\end{split} \tag{166}\]
In addition, the definition of \(\odot\), (133), (159), and (157) show for all \(\ell\in[0,n-1]\cap\mathbb{Z}\) that
\[\begin{split}&\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\dim\left(\mathfrak{n}_{(n-\ell)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\right)+\dim\left(\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\right)+\dim\left(\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)-3\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\left((n-\ell)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1\right)\\ &\quad+\Big{(}(\ell+1)L+\ell(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g^{d}_{\varepsilon}}))-1\Big{)}+L-3\\ &=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2\big{)}+(n+2)L+\dim\left(\mathcal{D}(\Phi_{g^{d}_{\varepsilon}})\right)-1.\end{split} \tag{167}\]
Furthermore, the fact that \(f_{\varepsilon}=\mathcal{R}(\Phi_{f_{\varepsilon}})\), Lemma 4.8, (158), (161), (153), (154), and Lemma 4.6 show for all \(\theta\in\Theta\), \(i\in[1,m^{n+1}]\cap\mathbb{Z}\), \(t\in[0,T]\), \(\ell\in[1,n]\cap\mathbb{Z}\) that
\[\begin{split}&\Big{(}f_{\varepsilon}\circ U_{\ell-1,m}^{d,(\theta,-\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &=\Big{(}f_{\varepsilon}\circ\mathrm{Id}_{\mathbb{R}}\circ U_{\ell-1,m}^{d,(\theta,-\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &\in\mathcal{R}\left(\Big{\{}\Phi\in\mathbf{N}\colon\mathcal{D}(\Phi)=\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\Big{\}}\right).\end{split} \tag{168}\]
In addition, the definition of \(\odot\), (133), (159), and (157) show for all \(\ell\in[1,n]\cap\mathbb{Z}\) that
\[\begin{split}&\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\dim\left(\mathfrak{n}_{(n-\ell+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\right)\\ &\quad+\dim\left(\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\right)+\dim\left(\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right)-3\\ &=\dim\left(\mathcal{D}(\Phi_{f_{\varepsilon}})\right)+\left((n-\ell+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1\right)\\ &\quad+\left(\ell L+(\ell-1)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))-1\right)+L-3\\ &=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2\big{)}+(n+2)L+\dim\left(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right)-1.\end{split} \tag{169}\]
Now, (162)-(169) and Lemma 4.7 show that there exists \((\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon})_{t\in[0,T],\theta\in\Theta}\subseteq\mathbf{N}\) such that for all \(t\in[0,T]\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\) we have that
\[\begin{split}(\mathcal{R}(\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon}))(x)&=\frac{1}{m^{n+1}}\sum_{i=1}^{m^{n+1}}g_{\varepsilon}^{d}\Big{(}X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}(\omega)\Big{)}\\ &\quad+\frac{(T-t)}{m}\sum_{i=1}^{m}\Big{(}f_{\varepsilon}\circ U_{n,m}^{d,(\theta,n,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,n,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,n,i)}(\omega)}^{d,(\theta,n,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &\quad+\sum_{\ell=0}^{n-1}\frac{(T-t)}{m^{n+1-\ell}}\sum_{i=1}^{m^{n+1-\ell}}\Big{(}f_{\varepsilon}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &\quad-\sum_{\ell=1}^{n}\frac{(T-t)}{m^{n+1-\ell}}\sum_{i=1}^{m^{n+1-\ell}}\Big{(}f_{\varepsilon}\circ U_{\ell-1,m}^{d,(\theta,-\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega),X_{\mathfrak{T}_{t}^{(\theta,\ell,i)}(\omega)}^{d,(\theta,\ell,i),K,\varepsilon,t,x}(\omega),\omega\Big{)}\\ &=U_{n+1,m}^{d,\theta,K,\varepsilon}(t,x,\omega),\end{split} \tag{170}\]
\[\dim(\mathcal{D}(\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon}))=(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2\big{)}+(n+2)L+\dim\left(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right)-1, \tag{171}\]
and
\[\begin{split}\mathcal{D}(\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon})&=\left[\overset{m^{n+1}}{\underset{i=1}{\boxplus}}\left[\mathfrak{n}_{(n+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right]\right]\\ &\quad\boxplus\left[\overset{m}{\underset{i=1}{\boxplus}}\left[\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right]\right]\\ &\quad\boxplus\left[\overset{n-1}{\underset{\ell=0}{\boxplus}}\overset{m^{n+1-\ell}}{\underset{i=1}{\boxplus}}\left[\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right]\right]\\ &\quad\boxplus\left[\overset{n}{\underset{\ell=1}{\boxplus}}\overset{m^{n+1-\ell}}{\underset{i=1}{\boxplus}}\left[\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell+1)\big{(}\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L\big{)}+1}\odot\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right]\right].\end{split} \tag{172}\]

Since the right-hand side of (172) does not depend on \(t\) or \(\theta\), we obtain for all \(t_{1},t_{2}\in[0,T]\), \(\theta_{1},\theta_{2}\in\Theta\) that

\[\mathcal{D}(\Phi_{n+1,m,t_{1}}^{d,\theta_{1},K,\varepsilon})=\mathcal{D}(\Phi_{n+1,m,t_{2}}^{d,\theta_{2},K,\varepsilon}). \tag{173}\]

Moreover, Lemma 4.10, (151), and (156) show that

\[\left|\!\left|\!\left|\mathfrak{n}_{(n+1)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\leq\max\left\{2d,\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right\}\leq c_{d,\varepsilon}. \tag{174}\]
Next, Lemma 4.10, (151), (160), and (156) show that
\[\begin{split}&\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\\ &\leq\max\left\{2d,\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right\}\leq c_{d,\varepsilon}(3m)^{n}.\end{split} \tag{175}\]
Furthermore, Lemma 4.10, (151), (160), and (156) show for all \(\ell\in[0,n-1]\cap\mathbb{Z}\) that
\[\begin{split}&\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\\ &\leq\max\left\{2d,\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right\}\leq c_{d,\varepsilon}(3m)^{\ell}.\end{split} \tag{176}\]
In addition, Lemma 4.10, (151), (160), and (156) show for all \(\ell\in[1,n]\cap\mathbb{Z}\) that
\[\begin{split}&\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell+1)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\\ &\leq\max\left\{2d,\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\right|\!\right|\!\right|,\left|\!\left|\!\left|\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right\}\leq c_{d,\varepsilon}(3m)^{\ell-1}.\end{split} \tag{177}\]
Now, (172), the triangle inequality, and (174)-(177) show for all \(t\in[0,T]\), \(\theta\in\Theta\) that
\[\begin{split}\left|\!\left|\!\left|\mathcal{D}(\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|&\leq\left[\sum_{i=1}^{m^{n+1}}\left|\!\left|\!\left|\mathfrak{n}_{(n+1)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right]\\ &\quad+\left[\sum_{i=1}^{m}\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathcal{D}(\Phi_{n,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right]\\ &\quad+\left[\sum_{\ell=0}^{n-1}\sum_{i=1}^{m^{n+1-\ell}}\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{\ell,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right]\\ &\quad+\left[\sum_{\ell=1}^{n}\sum_{i=1}^{m^{n+1-\ell}}\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\odot\mathfrak{n}_{(n-\ell+1)(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2+L)+1}\odot\mathcal{D}(\Phi_{\ell-1,m,0}^{d,0,K,\varepsilon})\odot\mathcal{D}(\mathcal{X}_{0,T}^{d,0,K,\varepsilon})\right|\!\right|\!\right|\right]\end{split} \tag{178}\]
and
\[\begin{split}&\left|\!\left|\!\left|\mathcal{D}(\Phi_{n+1,m,t}^{d,\theta,K,\varepsilon})\right|\!\right|\!\right|\\ &\leq\left[\sum_{i=1}^{m^{n+1}}c_{d,\varepsilon}\right]+\left[\sum_{i=1}^{m}c_{d,\varepsilon}(3m)^{n}\right]+\left[\sum_{\ell=0}^{n-1}\sum_{i=1}^{m^{n+1-\ell}}c_{d,\varepsilon}(3m)^{\ell}\right]+\left[\sum_{\ell=1}^{n}\sum_{i=1}^{m^{n+1-\ell}}c_{d,\varepsilon}(3m)^{\ell-1}\right]\\ &=m^{n+1}c_{d,\varepsilon}+mc_{d,\varepsilon}(3m)^{n}+\left[\sum_{\ell=0}^{n-1}m^{n+1-\ell}c_{d,\varepsilon}(3m)^{\ell}\right]+\left[\sum_{\ell=1}^{n}m^{n+1-\ell}c_{d,\varepsilon}(3m)^{\ell-1}\right]\\ &=m^{n+1}c_{d,\varepsilon}\left[1+3^{n}+\sum_{\ell=0}^{n-1}3^{\ell}+\sum_{\ell=1}^{n}3^{\ell-1}\right]=m^{n+1}c_{d,\varepsilon}\left[1+\sum_{\ell=0}^{n}3^{\ell}+\sum_{\ell=1}^{n}3^{\ell-1}\right]\\ &\leq c_{d,\varepsilon}m^{n+1}\left[1+2\sum_{\ell=0}^{n}3^{\ell}\right]=c_{d,\varepsilon}m^{n+1}\left[1+2\,\frac{3^{n+1}-1}{3-1}\right]=c_{d,\varepsilon}(3m)^{n+1}.\end{split} \tag{179}\]
This, (173), (171), the definition of \(L\) (see (157)), and (170) complete the induction step. The proof of Lemma 4.12 is thus completed.
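The bookkeeping established in Lemma 4.12 can be tabulated mechanically: the depth (152) grows linearly in \(n\) while the width bound of item 3 grows like \((3m)^{n}\). A small sketch (the values of \(L\), the depths of \(\Phi_{f_{\varepsilon}}\) and \(\Phi_{g_{\varepsilon}^{d}}\), and \(c_{d,\varepsilon}\) are arbitrary placeholders):

```python
def depth(n, L, dim_f, dim_g):
    """dim(D(Phi_{n,m,t})) = (n+1)L + n(dim_f - 2) + dim_g - 1, cf. (159)."""
    return (n + 1) * L + n * (dim_f - 2) + dim_g - 1

def width_bound(n, m, c):
    """The bound ||| D(Phi_{n,m,t}) ||| <= c (3m)^n of (160)."""
    return c * (3 * m) ** n

L, dim_f, dim_g, c, m = 11, 3, 4, 100, 4
for n in range(5):
    print(n, depth(n, L, dim_f, dim_g), width_bound(n, m, c))
# Depth is linear in n and the width bound is exponential in n with base 3m;
# since the parameter count of a network is polynomial in depth and width,
# this is what yields the polynomial complexity in Theorem 5.1 below after
# n, m, K are chosen in terms of d and the accuracy epsilon.
```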
## 5. DNN approximations of PIDEs
In Theorem 5.1 below we combine the results of Lemmas 2.1, 3.2, and 3.3 to prove the existence of a DNN that approximates the solution to (190) and whose number of parameters depends only polynomially on the dimension \(d\) and the reciprocal of the prescribed accuracy \(\epsilon\).
**Theorem 5.1**.: _Consider the notations given in Subsection 1.4, assume Setting 1.1, let \(T\in(0,\infty)\), \(b,c\in[2,\infty)\) satisfy that_
\[16\Big{(}1+\left|c^{\frac{1}{2}}(4c^{\frac{1}{2}}+2c^{\frac{1}{2}}T^{-\frac{3}{ 2}})\right|^{\frac{1}{2}}\Big{)}\leq\frac{b}{4}, \tag{180}\]
_for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) let \(\beta_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\sigma_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d\times d})\), \(\Phi_{\beta_{\varepsilon}^{d}},\Phi_{\sigma_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy that \(\beta_{\varepsilon}^{d}=\mathcal{R}(\Phi_{\beta_{\varepsilon}^{d}})\), \(\sigma_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{\sigma_{\varepsilon}^{d},v})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(\gamma_{\varepsilon}^{d}\colon\mathbb{R}^{2d}\to\mathbb{R}^{d}\), \(F_{\varepsilon}^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), \(G^{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be measurable and satisfy for all \(y,z\in\mathbb{R}^{d}\) that \(\gamma_{\varepsilon}^{d}(y,z)=F_{\varepsilon}^{d}(y)G^{d}(z)\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) let \(\Phi_{F_{\varepsilon}^{d},v}\in\mathbf{N}\) satisfy \(F_{\varepsilon}^{d}(\cdot)v=\mathcal{R}(\Phi_{F_{\varepsilon}^{d},v})\), assume for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(v\in\mathbb{R}^{d}\) that \(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\) and \(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},v})=\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\), for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(g_{\varepsilon}^{d}\in C(\mathbb{R}^{d},\mathbb{R})\), \(\Phi_{g_{\varepsilon}^{d}}\in\mathbf{N}\) satisfy that \(\mathcal{R}(\Phi_{g_{\varepsilon}^{d}})=g_{\varepsilon}^{d}\), for every \(d\in\mathbb{N}\) let \(\beta^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\sigma^{d}\in C(\mathbb{R}^{d},\mathbb{R}^{d\times d})\), \(g^{d}\in C(\mathbb{R}^{d},\mathbb{R})\), let \(f\in C(\mathbb{R},\mathbb{R})\), for every \(d\in\mathbb{N}\) let \(\nu^{d}\colon\mathcal{B}(\mathbb{R}^{d}\setminus\{0\})\to[0,\infty)\) be a Lévy measure, assume that for all \(d\in\mathbb{N}\) there exists \(C_{d}\in(0,\infty)\) such that for all \(x,y,z\in\mathbb{R}^{d}\), \(t\in[0,T]\) we have that_
\[\left\|\gamma^{d}(x,z)\right\|\leq C_{d}\left(1\wedge\|z\|^{2}\right),\quad \left\|\gamma^{d}(x,z)-\gamma^{d}(y,z)\right\|^{2}\leq C_{d}\|x-y\|^{2}\left(1 \wedge\|z\|^{2}\right), \tag{181}\]
_assume that for all \(d\in\mathbb{N}\), \(t\in[0,T]\), \(x,z\in\mathbb{R}^{d}\) the Jacobian matrix \((D_{x}\gamma^{d})(x,z)\) exists, assume that for all \(d\in\mathbb{N}\) there exists \(\lambda_{d}\in(0,\infty)\) such that for all \(t\in[0,T]\), \(x,z\in\mathbb{R}^{d}\), \(\delta\in[0,1]\) we have that_
\[\lambda_{d}\leq\left|\det(I_{d}+\delta(D_{x}\gamma^{d})(x,z))\right|, \tag{182}\]
_where \(I_{d}\) denotes the \(d\times d\) identity matrix, assume for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\), \(\varepsilon\in(0,1)\) that_
\[\begin{split}&\left\|\beta_{\varepsilon}^{d}(x)-\beta_{ \varepsilon}^{d}(y)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(x)-\sigma_{ \varepsilon}^{d}(y)\right\|^{2}_{\mathrm{F}}+\int_{\mathbb{R}^{d}\setminus\{0 \}}\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(y,z))\right\|^ {2}\nu^{d}(dz)\\ &\leq c\|x-y\|^{2},\end{split} \tag{183}\]
\[\begin{split}&\left|f(w_{1})-f(w_{2})\right|^{2}\leq c|w_{1}-w_{ 2}|^{2},\quad\left|g_{\varepsilon}^{d}(x)-g_{\varepsilon}^{d}(y)\right|^{2} \leq cd^{c}T^{-1}\|x-y\|^{2},\end{split} \tag{184}\]
\[\begin{split}&\left\|\beta_{\varepsilon}^{d}(x)-\beta^{d}(x) \right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(x)-\sigma^{d}(x)\right\|^{2}_{ \mathrm{F}}\\ &\quad+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{ \varepsilon}^{d}(x,z)-\gamma^{d}(x,z)\right\|^{2}\nu^{d}(dz)+\left|g_{ \varepsilon}^{d}(x)-g^{d}(x)\right|^{2}\\ &\leq\varepsilon cd^{c}(d^{c}+\|x\|^{2}),\end{split} \tag{185}\]
\[\left\|\beta_{\varepsilon}^{d}(0)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(0)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d}(dz)+T^{3}|f(0)|^{2}+T|g_{\varepsilon}^{d}(0)|^{2}\leq cd^{c}, \tag{186}\]

\[\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|\leq\frac{bd^{c}\varepsilon^{-c}}{4}, \tag{187}\]
\[\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))+\dim(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))\leq\frac{bd^{c}\varepsilon^{-c}}{4}, \tag{188}\]
_and for every \(d\in\mathbb{N}\) let \(\Gamma^{d}\colon\mathcal{B}(\mathbb{R}^{d})\to[0,1]\) be a probability measure satisfying_
\[\left(\int_{\mathbb{R}^{d}}\|x\|^{4}\,\Gamma^{d}(dx)\right)^{\frac{1}{2}}\leq cd ^{c}. \tag{189}\]
_Then_
1. _[label=()]_
2. _for every_ \(d\in\mathbb{N}\) _there exists a unique viscosity solution_ \(u^{d}\colon[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) _to the PIDE_ \[\begin{cases}&(\frac{\partial}{\partial u}u^{d})(t,x)+\left\langle\beta^{d}(x ),(\nabla_{x}u^{d})(t,x)\right\rangle\\ &+\frac{1}{2}\mathrm{trace}\big{(}\sigma^{d}(t,x)(\sigma^{d}(t,x))^{ \top}\mathrm{Hess}_{x}u^{d}(t,x)\big{)}+f(u^{d}(t,x))\\ &+\int_{\mathbb{R}^{d}}\left(u^{d}(x+\gamma^{d}(x,z))-u^{d}(t,x)- \left\langle(\nabla_{x}u^{d})(t,x),\gamma^{d}(x,z)\right\rangle\right)\nu^{d}( dz)=0,\\ &\quad\forall t\in[0,T),x\in\mathbb{R}^{d},\\ & u^{d}(T,x)=g^{d}(x),\,\forall\,x\in\mathbb{R}^{d},\end{cases}\]
_satisfying that_ \(\sup_{s\in[0,T],y\in\mathds{R}^{d}}|\frac{u^{d}(s,y)}{1+\|y\|}|<\infty\) _and_
2. _there exist \((C_{\delta})_{\delta\in(0,1)}\subseteq(0,\infty)\), \(\eta\in(0,\infty)\), and \((\Psi_{d,\epsilon})_{d\in\mathbb{N},\,\epsilon\in(0,1)}\subseteq\mathbf{N}\) such that for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\), \(\delta\in(0,1)\) we have that \(\mathcal{R}(\Psi_{d,\epsilon})\in C(\mathbb{R}^{d},\mathbb{R})\), \(\mathcal{P}(\Psi_{d,\epsilon})\leq C_{\delta}\eta d^{3c+12c^{2}+2c(6+\delta)}\epsilon^{-6c-6-\delta}\), and_ \[\left(\int_{\mathbb{R}^{d}}\left|(\mathcal{R}(\Psi_{d,\epsilon}))(x)-u^{d}(0,x)\right|^{2}\Gamma^{d}(dx)\right)^{\frac{1}{2}}\leq\epsilon.\] (191)
Proof of Theorem 5.1.: First, (183)-(186) (with \(\varepsilon\to 0\)) show for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\) that
\[\left\|\beta^{d}(x)-\beta^{d}(y)\right\|^{2}+\left\|\sigma^{d}(x)-\sigma^{d}( y)\right\|_{\rm F}^{2}+\int_{\mathds{R}^{d}\setminus\{0\}}\left\|\gamma^{d}(x,z)- \gamma^{d}(y,z)\right\|^{2}\nu^{d}(dz)\leq c\|x-y\|^{2}, \tag{192}\]
\[|f(w_{1})-f(w_{2})|^{2}\leq c|w_{1}-w_{2}|^{2},\quad\left|g^{d}(x)-g^{d}(y) \right|^{2}\leq cd^{c}T^{-1}\|x-y\|^{2}, \tag{193}\]
\[\left\|\beta^{d}(0)\right\|^{2}+\left\|\sigma^{d}(0)\right\|_{\rm F}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma^{d}(0,z)\right\|^{2}\nu^{d}(dz)+T^{3}|f(0)|^{2}+T|g^{d}(0)|^{2}\leq cd^{c}. \tag{194}\]
Next, [24, Corollary 3.13] (applied for all \(\varepsilon\in(0,1)\) with \(L\gets c^{\frac{1}{2}}\), \(q\gets 2\), \(\epsilon\leftarrow(\varepsilon/2)^{\frac{1}{2}}\) in the notation of [24, Corollary 3.13]), (185), and (180) show that there exist \(f_{\varepsilon}\in C(\mathbb{R},\mathbb{R})\) and \(\Phi_{f_{\varepsilon}}\in\mathbb{N}\) such that for all \(\varepsilon\in(0,1)\), \(w_{1},w_{2}\in\mathbb{R}\) we have that
\[\mathcal{R}(\Phi_{f_{\varepsilon}})=f_{\varepsilon},\quad|f_{\varepsilon}(w_{1})-f_{\varepsilon}(w_{2})|^{2}\leq c|w_{1}-w_{2}|^{2},\quad\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))=3\leq\frac{bd^{c}\varepsilon^{-c}}{4}, \tag{195}\]
\[\begin{split}\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|&\leq 16\Big(1+\big|c^{\frac{1}{2}}(4c^{\frac{1}{2}}+2|f(0)|)\big|^{\frac{1}{2}}\Big)\varepsilon^{-2}\\ &\leq 16\Big(1+\big|c^{\frac{1}{2}}(4c^{\frac{1}{2}}+2c^{\frac{1}{2}}T^{-\frac{3}{2}})\big|^{\frac{1}{2}}\Big)\varepsilon^{-2}\leq\frac{b\varepsilon^{-2}}{4},\end{split} \tag{196}\]
and
\[|f(w_{1})-f_{\varepsilon}(w_{1})|^{2}\leq\frac{\varepsilon^{2}}{2}(1+|w_{1}| ^{2})^{2}\leq\varepsilon(1+|w_{1}|^{4}). \tag{197}\]
This, (187), and (188) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) that
\[2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|\leq bd^{c}\varepsilon^{-c} \tag{198}\]
and
\[\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))+\dim(\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))+\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))\leq bd^{c}\varepsilon^{-c}. \tag{199}\]
Furthermore, (197), (186), and the triangle inequality show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) that \(|f_{\varepsilon}(0)|\leq\sqrt{\varepsilon}+|f(0)|\leq 1+|f(0)|\) and hence
\[\left\|\beta_{\varepsilon}^{d}(0)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(0)\right\|_{\rm F}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(0,z)\right\|^{2}\nu^{d}(dz)+T^{3}|f_{\varepsilon}(0)|^{2}+T|g_{\varepsilon}^{d}(0)|^{2}\leq cd^{c}. \tag{200}\]
Next, for every \(K\in\mathbb{N}\) let \(\lfloor\cdot\rfloor_{K}\colon\mathbb{R}\to\mathbb{R}\) satisfy for all \(t\in\mathbb{R}\) that \(\lfloor t\rfloor_{K}=\max(\{0,\frac{T}{K},\frac{2T}{K},\ldots,T\}\cap((-\infty,t)\cup\{0\}))\), let \((\Omega,\mathcal{F},\mathbb{P},(\mathbb{F}_{t})_{t\in[0,T]})\) be a probability space satisfying the usual conditions, let \(\Theta=\cup_{n\in\mathbb{N}}\mathbb{Z}^{n}\), for every \(d\in\mathbb{N}\) let \(W^{d,\theta}\colon\Omega\times[0,T]\to\mathbb{R}^{d}\), \(\theta\in\Theta\), be independent standard \((\mathbb{F}_{t})_{t\in[0,T]}\)-Brownian motions, for every \(d\in\mathbb{N}\) let \(N^{d,\theta}\), \(\theta\in\Theta\), be independent \((\mathbb{F}_{t})_{t\in[0,T]}\)-Poisson random measures on \([0,\infty)\times(\mathbb{R}^{d}\setminus\{0\})\) with intensity \(\nu^{d}\), for every \(d\in\mathbb{N}\), \(\theta\in\Theta\) let \(\tilde{N}^{d,\theta}(dt,dz)=N^{d,\theta}(dt,dz)-dt\,\nu^{d}(dz)\), assume for all \(d\in\mathbb{N}\) that \(\mathcal{F}_{0}\), \((N^{d,\theta})_{\theta\in\Theta}\), and \((W^{d,\theta})_{\theta\in\Theta}\) are independent, and for every \(d,K\in\mathbb{N}\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\) let \((X_{s}^{d,\theta,K,\varepsilon,t,x})_{s\in[t,T]}\) satisfy that
\[\begin{split}X_{s}^{d,\theta,K,\varepsilon,t,x}&=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x})\,dW_{r}^{d,\theta}\\ &\quad+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}}\gamma_{\varepsilon}^{d}(X_{\max\{t,\lfloor r-\rfloor_{K}\}}^{d,\theta,K,\varepsilon,t,x},z)\,\tilde{N}^{d,\theta}(dr,dz),\end{split} \tag{201}\]
let \(\mathfrak{t}^{\theta}\colon\Omega\to[0,1]\), \(\theta\in\Theta\), be i.i.d. random variables which satisfy for all \(t\in(0,1)\) that \(\mathbb{P}\left(\mathfrak{t}^{0}\leq t\right)=t\), for every \(\theta\in\Theta\), \(t\in[0,T]\) let \(\mathfrak{z}_{t}^{\theta}\colon\Omega\to\mathbb{R}\) satisfy for all \(\omega\in\Omega\) that \(\mathfrak{z}_{t}^{\theta}(\omega)=t+(T-t)\mathfrak{t}^{\theta}(\omega)\), assume that \((\mathfrak{t}^{\theta})_{\theta\in\Theta}\),
\((N^{d,\theta})_{\theta\in\Theta}\) and \((W^{d,\theta})_{\theta\in\Theta}\) are independent, for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\) let \(U_{n,m}^{d,\theta,K,\varepsilon}\colon[0,T]\times\mathbb{R}^{d}\times\Omega \to\mathbb{R}\), \(\theta\in\Theta\), \(n,m\in\mathbb{Z}\), satisfy for all \(\theta\in\Theta\), \(n\in\mathbb{N}_{0}\), \(m\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that
\[\begin{split}& U_{n,m}^{d,\theta,K,\varepsilon}(t,x)=\frac{ \mathbbm{1}_{\mathbb{N}}(n)}{m^{n}}\sum_{i=1}^{m^{n}}g_{\varepsilon}^{d}\Big{(} X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}\Big{)}\\ &\quad+\sum_{\ell=0}^{n-1}\frac{(T-t)}{m^{n-\ell}}\sum_{i=1}^{m^ {n-\ell}}\Big{(}f_{\varepsilon}\circ U_{\ell,m}^{d,(\theta,\ell,i),K,\varepsilon }-\mathbbm{1}_{\mathbb{N}}(\ell)f_{\varepsilon}\circ U_{\ell-1,m}^{d,(\theta,-\ell,i),K,\varepsilon}\Big{)}\Big{(}\mathfrak{z}_{t}^{(\theta,\ell,i)},X_{ \mathfrak{z}_{t}^{(\theta,\ell,i)}}^{d,(\theta,\ell,i),K,\varepsilon,t,x} \Big{)}.\end{split} \tag{202}\]
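For orientation, (202) specializes at the first two levels (using \(\mathbbm{1}_{\mathbb{N}}(0)=0\) and the empty-sum convention) to

\[U_{0,m}^{d,\theta,K,\varepsilon}(t,x)=0\qquad\text{and}\qquad U_{1,m}^{d,\theta,K,\varepsilon}(t,x)=\frac{1}{m}\sum_{i=1}^{m}g_{\varepsilon}^{d}\Big(X_{T}^{d,(\theta,0,-i),K,\varepsilon,t,x}\Big)+(T-t)f_{\varepsilon}(0),\]

so that each level \(n\geq 2\) recursively corrects this plain Monte Carlo approximation by multilevel differences of \(f_{\varepsilon}\)-evaluations along the simulated paths.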
A standard result on existence and uniqueness of SDEs with jumps (cf., e.g., [29, Theorem 9.1]), (183), and (185) show that for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) there exist adapted càdlàg processes \((X_{s}^{d,\varepsilon,t,x})_{s\in[t,T]},(X_{s}^{d,t,x})_{s\in[t,T]}\) such that for all \(s\in[t,T]\) we have \(\mathbb{P}\)-a.s. that
\[X_{s}^{d,t,x}=x+\int_{t}^{s}\beta^{d}(X_{r-}^{d,t,x})dr+\int_{t}^{s}\sigma^{d} (X_{r-}^{d,t,x})dW_{r}^{d,0}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\}} \gamma^{d}(X_{r-}^{d,t,x},z)\tilde{N}^{d,0}(dr,dz) \tag{203}\]
and
\[X_{s}^{d,\varepsilon,t,x}=x+\int_{t}^{s}\beta_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x})dr+\int_{t}^{s}\sigma_{\varepsilon}^{d}(X_{r-}^{d, \varepsilon,t,x})dW_{r}^{d,0}+\int_{t}^{s}\int_{\mathbb{R}^{d}\setminus\{0\} }\gamma_{\varepsilon}^{d}(X_{r-}^{d,\varepsilon,t,x},z)\tilde{N}^{d,0}(dr,dz). \tag{204}\]
Observe that for all \(d,K\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(t\in[0,T]\), \(s\in[t,T]\), \(x\in\mathbb{R}^{d}\) we have that

\[\max\left\{\mathbb{E}\bigg[d^{c}+\left\|X_{s}^{d,K,\varepsilon,t,x}\right\|^{2}\bigg],\mathbb{E}\bigg[d^{c}+\left\|X_{s}^{d,\varepsilon,t,x}\right\|^{2}\bigg],\mathbb{E}\bigg[d^{c}+\left\|X_{s}^{d,t,x}\right\|^{2}\bigg]\right\}\leq(d^{c}+\|x\|^{2})e^{7c(s-t)} \tag{205}\]
(cf. Lemmas 2.1 and 3.2). Next, (186) and (184) show for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1)\), \(x\in\mathbb{R}^{d}\) that
\[\left|g_{\varepsilon}^{d}(x)\right|\leq\left|g_{\varepsilon}^{d}(0)\right|+(cd ^{c}T^{-1})^{\frac{1}{2}}\|x\|\leq(cd^{c}T^{-1})^{\frac{1}{2}}+(cd^{c}T^{-1})^{ \frac{1}{2}}\|x\|\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}. \tag{206}\]
This and (186) (with \(\varepsilon\to 0\)) show for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\) that
\[\left|g^{d}(x)\right|\leq 2(cd^{c}T^{-1})^{\frac{1}{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}. \tag{207}\]
This, (206), (205), (184), (195), and [23, Proposition 2.2] (applied with \(V\leftarrow([0,T]\times\mathbb{R}^{d}\ni(t,x)\mapsto e^{4.5c(T-t)}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\in(0,\infty))\) in the notation of [23, Proposition 2.2]) show that for all \(\varepsilon\in(0,1)\), \(d,K\in\mathbb{N}\) there exist measurable functions \(u^{d,K,\varepsilon},u^{d,\varepsilon},u^{d}\colon[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) such that for all \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) we have that \(\mathbb{E}\big[|g_{\varepsilon}^{d}(X_{T}^{d,K,\varepsilon,t,x})|\big]+\int_{t}^{T}\mathbb{E}\big[|f_{\varepsilon}(u^{d,K,\varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x}))|\big]\,dr+\mathbb{E}\big[|g_{\varepsilon}^{d}(X_{T}^{d,\varepsilon,t,x})|\big]+\int_{t}^{T}\mathbb{E}\big[|f_{\varepsilon}(u^{d,\varepsilon}(r,X_{r}^{d,\varepsilon,t,x}))|\big]\,dr+\mathbb{E}\big[|g^{d}(X_{T}^{d,t,x})|\big]+\int_{t}^{T}\mathbb{E}\big[|f(u^{d}(r,X_{r}^{d,t,x}))|\big]\,dr<\infty\),
\[\sup_{s\in[0,T]}\sup_{y\in\mathbb{R}^{d}}\frac{|u^{d,K,\varepsilon}(s,y)|+|u^{ d,\varepsilon}(s,y)|+|u^{d}(s,y)|}{1+\|y\|}<\infty, \tag{208}\]
\[u^{d,K,\varepsilon}(t,x)=\mathbb{E}\Big{[}g_{\varepsilon}^{d}(X_{T}^{d,K, \varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}f_{\varepsilon}(u^{d,K, \varepsilon}(r,X_{r}^{d,K,\varepsilon,t,x}))\Big{]}\,dr, \tag{209}\]
\[u^{d,\varepsilon}(t,x)=\mathbb{E}\Big{[}g_{\varepsilon}^{d}(X_{T}^{d, \varepsilon,t,x})\Big{]}+\int_{t}^{T}\mathbb{E}\Big{[}f_{\varepsilon}(u^{d, \varepsilon}(r,X_{r}^{d,\varepsilon,t,x}))\Big{]}\,dr, \tag{210}\]
and
\[u^{d}(t,x)=\mathbb{E}\Big{[}g^{d}(X_{T}^{d,t,x})\Big{]}+\int_{t}^{T}\mathbb{E} \Big{[}f(u^{d}(r,X_{r}^{d,t,x}))\Big{]}\,dr. \tag{211}\]
This, a result on existence and uniqueness of viscosity solutions to PIDEs (see [31, Propositions 5.4 and 5.16]), and the assumptions of Theorem 5.1 show (i).
Next, let \(c_{1},c_{2}\in\mathbb{R}\), \((\varepsilon_{d,\varepsilon})_{d\in\mathbb{N},\varepsilon\in(0,1)}\subseteq \mathbb{R}\), \((N_{d,\varepsilon})_{d\in\mathbb{N},\varepsilon\in(0,1)}\subseteq\mathbb{N}\), \((C_{\delta})_{\delta\in(0,1)}\subseteq[0,\infty]\) satisfy for all \(d\in\mathbb{N}\), \(\delta,\epsilon\in(0,1)\) that
\[c_{1}=6(cT^{-1})^{\frac{1}{2}}+12c^{\frac{3}{2}}(T+2)e^{21cT+5cT^{2}}T^{\frac{1}{2}}+2ce^{24cT+5cT^{2}},\quad c_{2}=2cc_{1}, \tag{212}\]
\[c_{2}d^{2c}|\varepsilon_{d,\epsilon}|^{\frac{1}{2}}=\frac{\epsilon}{2},\quad N_{ d,\epsilon}=\min\left\{n\in\mathbb{N}\cap[2,\infty)\colon c_{2}d^{2c}\left(\frac{e^{12cTn+ \frac{n}{2}}}{n^{\frac{n}{2}}}+\frac{1}{n^{\frac{n}{2}}}\right)\leq\frac{ \epsilon}{2}\right\}, \tag{213}\]
and
\[C_{\delta}=\sup_{n\in[2,\infty)}\left[\left(\frac{e^{(12cT+0.5)(n-1)}}{(n-1)^ {\frac{n-1}{2}}}\right)^{6+\delta}(3n)^{3n+1}\right]. \tag{214}\]
Then the triangle inequality, Lemmas 3.3, 3.2, and 2.1 show for all \(d,K\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(n,m\in\mathbb{N}\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) that

\[\begin{split}&\left(\mathbb{E}\bigg[\Big|U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d}(t,x)\Big|^{2}\bigg]\right)^{\frac{1}{2}}\\ &\leq 6e^{\frac{n}{2}}m^{-\frac{n}{2}}e^{12cTn}(cd^{c}T^{-1})^{\frac{1}{2}}\left(d^{c}+\|x\|^{2}\right)^{\frac{1}{2}}\\ &\quad+12c^{\frac{3}{2}}d^{\frac{5}{2}}(T+2)e^{21cT+5cT^{2}}(d^{c}+\|x\|^{2})^{\frac{1}{2}}\frac{T^{\frac{1}{2}}}{K^{\frac{1}{2}}}+2cd^{c}\varepsilon^{\frac{1}{2}}(d^{c}+\|x\|^{2})e^{24cT+5cT^{2}}\\ &\leq\left[6(cd^{c}T^{-1})^{\frac{1}{2}}+12c^{\frac{3}{2}}d^{\frac{5}{2}}(T+2)e^{21cT+5cT^{2}}T^{\frac{1}{2}}+2cd^{c}e^{24cT+5cT^{2}}\right](d^{c}+\|x\|^{2})\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)\\ &\leq\left[6(cT^{-1})^{\frac{1}{2}}+12c^{\frac{3}{2}}(T+2)e^{21cT+5cT^{2}}T^{\frac{1}{2}}+2ce^{24cT+5cT^{2}}\right]d^{c}(d^{c}+\|x\|^{2})\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)\\ &\leq c_{1}d^{c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)(d^{c}+\|x\|^{2}).\end{split} \tag{215}\]
This, the triangle inequality, (189), and (212) show for all \(d,K\in\mathbb{N}\), \(\theta\in\Theta\), \(\varepsilon\in(0,1)\), \(n,m\in\mathbb{N}\), \(t\in[0,T]\) that

\[\begin{split}&\left(\int_{\mathbb{R}^{d}}\mathbb{E}\left[\left|U_{n,m}^{d,\theta,K,\varepsilon}(t,x)-u^{d}(t,x)\right|^{2}\right]\Gamma^{d}(dx)\right)^{\frac{1}{2}}\\ &\leq c_{1}d^{c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)\left(\int_{\mathbb{R}^{d}}(d^{c}+\|x\|^{2})^{2}\,\Gamma^{d}(dx)\right)^{\frac{1}{2}}\\ &\leq c_{1}d^{c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)\left(d^{c}+\left(\int_{\mathbb{R}^{d}}\|x\|^{4}\,\Gamma^{d}(dx)\right)^{\frac{1}{2}}\right)\\ &\leq c_{1}d^{c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right)2cd^{c}=c_{2}d^{2c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{m^{\frac{n}{2}}}+\frac{1}{K^{\frac{1}{2}}}\right).\end{split} \tag{216}\]
This, Fubini's theorem, and (213) show for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) that
\[\begin{split}&\left(\mathbb{E}\left[\int_{\mathbb{R}^{d}}\left|U_{n,n}^{d,\theta,n^{n},\varepsilon}(t,x)-u^{d}(t,x)\right|^{2}\Gamma^{d}(dx)\right]\right)^{\frac{1}{2}}\bigg|_{\begin{subarray}{c}n=N_{d,\epsilon}\\ \varepsilon=\varepsilon_{d,\epsilon}\end{subarray}}\\ &=\left(\int_{\mathbb{R}^{d}}\mathbb{E}\left[\left|U_{n,n}^{d,\theta,n^{n},\varepsilon}(t,x)-u^{d}(t,x)\right|^{2}\right]\Gamma^{d}(dx)\right)^{\frac{1}{2}}\bigg|_{\begin{subarray}{c}n=N_{d,\epsilon}\\ \varepsilon=\varepsilon_{d,\epsilon}\end{subarray}}\\ &\leq c_{2}d^{2c}\left(\varepsilon^{\frac{1}{2}}+\frac{e^{12cTn+\frac{n}{2}}}{n^{\frac{n}{2}}}+\frac{1}{n^{\frac{n}{2}}}\right)\bigg|_{\begin{subarray}{c}n=N_{d,\epsilon}\\ \varepsilon=\varepsilon_{d,\epsilon}\end{subarray}}\leq\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.\end{split} \tag{217}\]
Then for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) there exists \(\omega_{d,\epsilon}\in\Omega\) such that
\[\int_{\mathbb{R}^{d}}\left|U_{n,n}^{d,\theta,n^{n},\varepsilon}(t,x,\omega_{d,\epsilon})-u^{d}(t,x)\right|^{2}\Gamma^{d}(dx)\bigg|_{\begin{subarray}{c}n=N_{d,\epsilon}\\ \varepsilon=\varepsilon_{d,\epsilon}\end{subarray}}\leq\epsilon^{2}. \tag{218}\]
Next, Lemma 4.12, (199), and (198) show for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) that there exists \(\Psi_{d,\epsilon}\in\mathbb{N}\) such that
1. we have for all \(t\in[0,T]\), \(\theta\in\Theta\) that \[\begin{split}\dim(\mathcal{D}(\Psi_{d,\epsilon}))&=(n+1)\left[n^{n}\left(\max\left\{\dim(\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})),\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))\right\}-1\right)+1\right]\\ &\quad+n(\dim(\mathcal{D}(\Phi_{f_{\varepsilon}}))-2)+\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))-1\Big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}}\\ &\leq 2nn^{n}bd^{c}\varepsilon^{-c}+nbd^{c}\varepsilon^{-c}+bd^{c}\varepsilon^{-c}\big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}}\leq 4n^{n}nbd^{c}\varepsilon^{-c}\big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}},\end{split}\] (219)
2. we have for all \(t\in[0,T]\), \(\theta\in\Theta\) that \[\begin{split}\left|\!\left|\!\left|\mathcal{D}(\Psi_{d,\epsilon})\right|\!\right|\!\right|&\leq\Big(2d+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|\\ &\qquad+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{f_{\varepsilon}})\right|\!\right|\!\right|\Big)(3n)^{n}\big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}}\\ &\leq bd^{c}\varepsilon^{-c}(3n)^{n}\big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}},\end{split}\] (220) and
3. we have for all \(t\in[0,T]\), \(\theta\in\Theta\), \(x\in\mathbb{R}^{d}\) that \(U_{n,n}^{d,\theta,n^{n},\varepsilon}(t,x,\omega_{d,\epsilon})\Big{|}_{ \begin{subarray}{c}n=N_{d,\epsilon}\\ \varepsilon=\varepsilon_{d,\epsilon}\end{subarray}}=(\mathcal{R}(\Psi_{d, \epsilon}))(x)\).
This and (218) show for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) that
\[\begin{split}&\int_{\mathbb{R}^{d}}\left|(\mathcal{R}(\Psi_{d, \epsilon}))(x)-u^{d}(t,x)\right|^{2}\Gamma^{d}(dx)\leq\epsilon^{2}.\end{split} \tag{221}\]
Furthermore, the fact that \(\forall\,\Phi\in\mathbb{N}\colon\mathcal{P}(\Phi)\leq 2\dim(\mathcal{D}(\Phi)) \!\left|\!\left|\!\left|\mathcal{D}(\Phi)\right|\!\right|\!\right|^{2}\), (219), and (220) show for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) that
\[\mathcal{P}(\Psi_{d,\epsilon})\leq 2\dim(\mathcal{D}(\Psi_{d,\epsilon}))\left|\!\left|\!\left|\mathcal{D}(\Psi_{d,\epsilon})\right|\!\right|\!\right|^{2}\leq 2\cdot 4n^{n}nbd^{c}\varepsilon^{-c}\left(bd^{c}\varepsilon^{-c}(3n)^{n}\right)^{2}\big|_{n=N_{d,\epsilon},\varepsilon=\varepsilon_{d,\epsilon}}=8n^{n}nb^{3}d^{3c}|\varepsilon_{d,\epsilon}|^{-3c}(3n)^{2n}\big|_{n=N_{d,\epsilon}}. \tag{222}\]
Recall that in (213) we have for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) that \(c_{2}d^{2c}|\varepsilon_{d,\epsilon}|^{\frac{1}{2}}=\frac{\epsilon}{2}\). Hence, for all \(d\in\mathbb{N}\), \(\epsilon\in(0,1)\) we have that \(\varepsilon_{d,\epsilon}=\frac{\epsilon^{2}}{4}|c_{2}|^{-2}d^{-4c}\). This, (222), and (214) show for all \(d\in\mathbb{N}\), \(\epsilon,\delta\in(0,1)\) that
\[\begin{split}\mathcal{P}(\Psi_{d,\epsilon})&\leq 8n^{n}nb^{3}d^{3c}|\varepsilon_{d,\epsilon}|^{-3c}(3n)^{2n}\big|_{n=N_{d,\epsilon}}\\ &\leq 8b^{3}d^{3c}|\varepsilon_{d,\epsilon}|^{-3c}(3n)^{3n+1}\Big|_{n=N_{d,\epsilon}}\\ &=8b^{3}d^{3c}\left[\frac{\epsilon^{2}}{4}|c_{2}|^{-2}d^{-4c}\right]^{-3c}(3n)^{3n+1}\Big|_{n=N_{d,\epsilon}}\\ &\leq 4^{3c+2}b^{3}d^{3c+12c^{2}}|c_{2}|^{6c}\epsilon^{-6c}(3N_{d,\epsilon})^{3N_{d,\epsilon}+1}\\ &=4^{3c+2}b^{3}d^{3c+12c^{2}}|c_{2}|^{6c}\epsilon^{-6c-6-\delta}\epsilon^{6+\delta}(3N_{d,\epsilon})^{3N_{d,\epsilon}+1}\\ &\leq 4^{3c+2}b^{3}d^{3c+12c^{2}}|c_{2}|^{6c}\epsilon^{-6c-6-\delta}\left(\frac{4c_{2}d^{2c}e^{(12cT+0.5)(N_{d,\epsilon}-1)}}{(N_{d,\epsilon}-1)^{\frac{N_{d,\epsilon}-1}{2}}}\right)^{6+\delta}(3N_{d,\epsilon})^{3N_{d,\epsilon}+1}\\ &\leq 4^{3c+8+\delta}b^{3}d^{3c+12c^{2}+2c(6+\delta)}|c_{2}|^{6c+6+\delta}\epsilon^{-6c-6-\delta}C_{\delta}.\end{split} \tag{223}\]
This, (221), the fact that \(c_{2}\) does not depend on \(d\) (see (212)), and the fact that \(\forall\,\delta\in(0,1)\colon C_{\delta}<\infty\) (cf. (171) in [8]) complete the proof of Theorem 5.1.
Proof of Theorem 1.2.: Let \(b,\tilde{c}\in[2,\infty)\) satisfy that
\[c\leq\tilde{c},\quad Tcd^{c}\leq\tilde{c}d^{\tilde{c}},\quad T^{3}(c^{\frac{1}{2}}d^{\frac{5}{2}}+1)\leq\tilde{c}d^{\tilde{c}}, \tag{224}\]
and
\[16\Big{(}1+\Big{|}(3\tilde{c})^{\frac{1}{2}}(4(3\tilde{c})^{\frac{1}{2}}+2(3\tilde{ c})^{\frac{1}{2}}T^{-\frac{3}{2}})\Big{|}^{\frac{1}{2}}\Big{)}\leq\frac{b}{4}. \tag{225}\]
Then Theorem 1.2 follows from Theorem 5.1 (applied with \(c\gets 3\tilde{c}\), \((\Gamma^{d})_{d\in\mathbb{N}}\leftarrow(\int_{(\cdot)\cap[0,1]^{d}}dx)_{d\in \mathbb{N}}\) in the notation of Theorem 5.1). This can be easily checked as follows. From (183)-(188), (224), and (225) it follows that for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\), \(w_{1},w_{2}\in\mathbb{R}\), \(\varepsilon\in(0,1)\) we have that
\[\Big{\|}\beta_{\varepsilon}^{d}(x)-\beta_{\varepsilon}^{d}(y)\Big{\|}^{2}+ \Big{\|}\sigma_{\varepsilon}^{d}(x)-\sigma_{\varepsilon}^{d}(y)\Big{\|}_{ \mathbb{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\Big{\|}\gamma_{ \varepsilon}^{d}(x,z)-\gamma_{\varepsilon}^{d}(y,z))\Big{\|}^{2}\,\nu^{d}(dz) \leq\tilde{c}\|x-y\|^{2}, \tag{226}\]
\[|f(w_{1})-f(w_{2})|^{2}\leq\tilde{c}|w_{1}-w_{2}|^{2},\quad\Big{|}g_{\varepsilon}^{d}(x)-g_{\varepsilon}^{d}(y)\Big{|}^{2}\leq cd^{c}\|x-y\|^{2}\leq\tilde{c}d^{\tilde{c}}T^{-1}\|x-y\|^{2}, \tag{227}\]
\[\Big{\|}\beta_{\varepsilon}^{d}(0)\Big{\|}^{2}+\Big{\|}\sigma_{\varepsilon}^ {d}(0)\Big{\|}_{\mathbb{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\Big{\|} \gamma_{\varepsilon}^{d}(0,z)\Big{\|}^{2}\,\nu^{d}(dz)\leq\tilde{c}d^{\tilde{ c}}, \tag{228}\]
\[T^{3}(|f(0)|+1)^{2}\leq T^{3}(c^{\frac{1}{2}}d^{\frac{5}{2}}+1)\leq\tilde{c}d^{\tilde{c}}, \tag{229}\]
\[T|g_{\varepsilon}^{d}(0)|^{2}\leq Tcd^{c}\leq\tilde{c}d^{\tilde{c}}, \tag{230}\]
\[\begin{split}&\left\|\beta_{\varepsilon}^{d}(x)-\beta^{d}(x)\right\|^{2}+\left\|\sigma_{\varepsilon}^{d}(x)-\sigma^{d}(x)\right\|_{\mathrm{F}}^{2}+\int_{\mathbb{R}^{d}\setminus\{0\}}\left\|\gamma_{\varepsilon}^{d}(x,z)-\gamma^{d}(x,z)\right\|^{2}\nu^{d}(dz)+\left|g_{\varepsilon}^{d}(x)-g^{d}(x)\right|^{2}\\ &\leq\varepsilon\tilde{c}d^{\tilde{c}}(d^{\tilde{c}}+\|x\|^{2}),\end{split} \tag{231}\]
\[\left|\!\left|\!\left|\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{F_{\varepsilon}^{d},0})\right|\!\right|\!\right|+\left|\!\left|\!\left|\mathcal{D}(\Phi_{g_{\varepsilon}^{d}})\right|\!\right|\!\right|\leq\frac{bd^{c}\varepsilon^{-c}}{4}, \tag{232}\]
and
\[\begin{array}{l}\dim(\mathcal{D}(\Phi_{\beta_{\varepsilon}^{d}}))+\dim( \mathcal{D}(\Phi_{\sigma_{\varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{F_{ \varepsilon}^{d},0}))+\dim(\mathcal{D}(\Phi_{g_{\varepsilon}^{d}}))\\ \leq\mathcal{P}(\Phi_{\beta_{\varepsilon}^{d}})+\mathcal{P}(\Phi_{\sigma_{ \varepsilon}^{d},0})+\mathcal{P}(\Phi_{F_{\varepsilon}^{d},0})+\mathcal{P}( \Phi_{g_{\varepsilon}^{d}})\leq\frac{bd^{c}\varepsilon^{-c}}{4}.\end{array} \tag{233}\]
In addition, for every \(d\in\mathbb{N}\) let \(\Gamma^{d}\colon\mathcal{B}(\mathbb{R}^{d})\to[0,1]\) be a probability measure satisfying \(\Gamma^{d}(\cdot)=\int_{(\cdot)\cap[0,1]^{d}}dx\). Then for all \(d\in\mathbb{N}\) we have that
\[\left(\int_{\mathbb{R}^{d}}\|x\|^{4}\,\Gamma^{d}(dx)\right)^{\frac{1}{2}}= \left(\int_{[0,1]^{d}}\|x\|^{4}\,dx\right)^{\frac{1}{2}}\leq d^{2}\leq\tilde{c }d^{\tilde{c}}. \tag{234}\]
The proof of Theorem 1.2 is thus completed.
|
2308.09492 | Predicting Properties of Oxide Glasses Using Informed Neural Networks | Many modern-day applications require the development of new materials with
specific properties. In particular, the design of new glass compositions is of
great industrial interest. Current machine learning methods for learning the
composition-property relationship of glasses promise to save on expensive
trial-and-error approaches. Even though quite large datasets on the composition
of glasses and their properties already exist (i.e., with more than 350,000
samples), they cover only a very small fraction of the space of all possible
glass compositions. This limits the applicability of purely data-driven models
for property prediction purposes and necessitates the development of models
with high extrapolation power. In this paper, we propose a neural network model
which incorporates prior scientific and expert knowledge in its learning
pipeline. This informed learning approach leads to an improved extrapolation
power compared to blind (uninformed) neural network models. To demonstrate
this, we train our models to predict three different material properties, that
is, the glass transition temperature, the Young's modulus (at room
temperature), and the shear modulus of binary oxide glasses which do not
contain sodium. As representatives for conventional blind neural network
approaches we use five different feed-forward neural networks of varying widths
and depths. For each property, we set up model ensembles of multiple trained
models and show that, on average, our proposed informed model performs better
in extrapolating the three properties of previously unseen sodium borate glass
samples than all five conventional blind models. | Gregor Maier, Jan Hamaekers, Dominik-Sergio Martilotti, Benedikt Ziebarth | 2023-08-18T12:02:11Z | http://arxiv.org/abs/2308.09492v3 | # Predicting Properties of Oxide Glasses Using Informed Neural Networks
###### Abstract
Many modern-day applications require the development of new materials with specific properties. In particular, the design of new glass compositions is of great industrial interest. Current machine learning methods for learning the composition-property relationship of glasses promise to save on expensive trial-and-error approaches. Even though quite large datasets on the composition of glasses and their properties already exist (i.e., with more than 350,000 samples), they cover only a very small fraction of the space of all possible glass compositions. This limits the applicability of purely data-driven models for property prediction purposes and necessitates the development of models with high extrapolation power.
In this paper, we propose a neural network model which incorporates prior scientific and expert knowledge in its learning pipeline. This informed learning approach leads to an improved extrapolation power compared to blind (uninformed) neural network models. To demonstrate this, we train our models to predict three different material properties, that is, the glass transition temperature, the Young's modulus (at room temperature), and the shear modulus of binary oxide glasses which do not contain sodium. As representatives for conventional blind neural network approaches we use five different feed-forward neural networks of varying widths and depths.
For each property, we set up model ensembles of multiple trained models and show that, on average, our proposed informed model performs better in extrapolating the three properties of previously unseen sodium borate glass samples than all five conventional blind models.
## 1 Introduction
The development of new materials is essential for the modern-day progress in engineering applications and future-oriented technologies. Aside from ever new demands on physical and chemical materials properties, ecological issues, such as sustainability, long service life, environmental compatibility, and recyclability, are of great importance for product development in a variety of different fields. However, the common materials design process is still largely based on the application of suitable empirical models, on past experience and educated guesses, and on an extensive subsequent testing phase. The development of new glassy materials, in particular, would benefit to a large extent from a more resource-efficient, systematic, data-driven approach in contrast to the Edisonian trial-and-error approach which is still often used in traditional research and development [23].
The space of all possible glass compositions is very large as a glass can be made from the combination of 80 chemical elements, which leads to \(10^{52}\) possible glass compositions [41]. Moreover, since the influencing parameters are usually known only qualitatively or not at all, the optimization of glass material properties is inherently challenging. A trial-and-error approach to find a glass composition with specific properties for a certain application is time-consuming and often not feasible in practice. An expert-guided approach that integrates past experience is usually not sufficient either, since there are interesting glass properties that are extremely difficult to predict. Especially when properties show nonlinearities, caused, for example, by the so-called borate anomaly in alkali borate glasses [13], conventional exploration and exploitation strategies quickly reach their limits. Therefore, going beyond the area of known materials requires new approaches based on new and innovative methods. The field of machine learning (ML) provides such methods which allow us to generate accurate models based on existing data in order to predict the properties of yet unseen materials.
### Related Work
In recent years, ML techniques have been widely used for accelerating materials design [36; 39; 27]. In glass science, there have been several successful attempts to use ML to predict, i.a., optical, physical, and mechanical properties of glasses [10; 8; 9; 1; 30; 3]. Most ML models perform exceptionally well in interpolating the training data. However, given the high-dimensional search space of all possible glass compositions and its sparse coverage by experimental data, the search for new glass materials is largely a question of designing models which possess a high extrapolation power. We refer to [21; 29] and references therein for reviews of the current status of ML in glass science and future challenges.
To address the lack of extrapolation power, ordinary ML methods can be extended by integrating prior knowledge which exists independently of the learning task. This idea is termed _informed machine learning_ and we refer to the recent survey [37] for a
taxonomy and thorough overview of its application in current ML state-of-the-art use cases. For glass design, this idea is utilized, e.g., in [34] where the empirical MYEGA formula is integrated into a neural network architecture to predict the viscosity of a glass based on its compound fractions and temperature. In [7], this approach is developed further by additionally integrating prior chemical and physical knowledge of the glasses' elements into the training data. Similarly, in [33; 2], the authors use external chemical and physical knowledge to carefully design enriched descriptors of glass compositions which are used as inputs for ML models to predict properties of oxide glasses. In [20] and [22], the authors predict the dissolution kinetics of silicate glasses in an informed manner by suitably splitting the training data and using a descriptor which encodes the glasses' network structure, respectively. They demonstrate the superior performance of the informed approach compared to the uninformed approach. This superiority is also shown in [4], where the authors design a neural network model which is informed by statistical mechanics in order to predict structural properties of oxide glasses.
### Contributions
In this paper, we propose a new ML model based on neural networks for the property prediction of oxide glasses which integrates prior knowledge in order to achieve a high degree of extrapolation of the training data. We modify the ideas from [7] in order to predict three material properties, that is, the _glass transition temperature_\(T_{g}\), the _Young's modulus_\(E\)_(at room temperature)_, and the _shear modulus_\(G\). We focus our analysis on binary oxide glasses, that is, oxide glasses which consist of exactly two compounds. Our model is informed in the sense that we explicitly integrate prior knowledge into the design of our training data, the hypothesis set, and the final hypothesis at four major points in our learning pipeline. We place emphasis on explaining how this is done in detail in terms of the taxonomy in [37]. Especially the design of the network architecture to realize permutation invariance with respect to the input features seems, to the best of our knowledge, to be new in the field of glass materials modeling.
To examine the extrapolation power of our models, we train and validate them on glass samples which do not contain sodium in their compositions. The trained models are then used to predict the properties of sodium borate glass compositions with varying compound fractions. For each property, we train multiple models and study the average performance of the model ensemble. To demonstrate the superiority of the informed model ensemble compared to blind (uninformed) approaches, we perform the same experiments with five standard fully connected feed-forward neural networks of varying widths and depths without integration of any prior knowledge. We compare the results quantitatively in terms of error metrics and qualitatively in terms of a meaningful approximation of the respective composition-property curves.
**Outline.** The remainder of this paper is organized as follows: In Sect. 2, we explain our methodology. That is, in Sect. 2.1, we present our automated pipeline for collecting and preparing data for model training, validation, and testing. In Sect. 2.2, we describe the different model setups in the blind and the informed setting. In Sect. 2.3, we explain how we train and evaluate our models. We discuss the results of our experiments in Sect. 3 and conclude our findings in Sect. 4.
**Notation.** For notational convenience, we use the letter **P** whenever we refer to one of the three properties \(T_{g}\), \(E\), or \(G\). Moreover, for all entities which exist for every property **P**, we use the prefix "**P**-" to specify the respective entity. For example, given **P**, we refer to the dataset that is used to train a model for predicting **P** by "**P**-training set".
Moreover, we use the symbols \(\mathbb{N}\) and \(\mathbb{R}\) to denote the set of positive integers and the set of real numbers, respectively.
## 2 Methodology
The prediction quality of any data-driven machine learning algorithm in the context of supervised learning is strongly dependent on the quantity and quality of the training data. Before presenting our neural network approach to the problem of glass property prediction in detail in Sect. 2.2 and Sect. 2.3, we therefore describe in the following Sect. 2.1 how we collect and prepare our data.
### Data Collection and Preparation
We use data from the INTERGLAD Ver. 8 database [18] and the SciGlass database [32] and merge them together into a common glassmodel database. For the identification of oxide glasses we follow the same definition as in [1] and only consider glasses whose mole atomic fraction of oxygen is at least 0.3 and whose compounds do not contain the chemical elements S, H, C, Pt, Au, F, Cl, N, Br, and I, which could affect the balance of oxygen. The resulting glassmodel database of oxide glasses consists of 420,973 glass samples in total. It lists the mole atomic fractions of 118 chemical elements and the mole fractions of 439 compounds, i.e., the oxides that a glass composition consists of, together with the values of 87 material properties. However, among the 118 elements, only 66 elements appear with non-vanishing fraction in at least one glass sample. Among the 439 compounds, only 183 compounds appear with non-vanishing fraction in at least one glass sample.
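This definition translates into a simple membership test. The following is a minimal sketch, assuming a sample is given as a mapping from element symbols to mole atomic fractions (an illustrative data layout, not the actual database schema):

```python
EXCLUDED_ELEMENTS = {"S", "H", "C", "Pt", "Au", "F", "Cl", "N", "Br", "I"}

def is_oxide_glass(element_fractions: dict[str, float]) -> bool:
    """Oxide glass: oxygen fraction >= 0.3 and none of the elements
    that could affect the balance of oxygen."""
    if element_fractions.get("O", 0.0) < 0.3:
        return False
    return all(element_fractions.get(e, 0.0) == 0.0 for e in EXCLUDED_ELEMENTS)
```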
To obtain clean data for training, validating, and testing our models, we apply a sequence of preprocessing steps which follows in parts the procedure described in [1; 10]. For each glass property **P**, we extract clean data from the "dirty" glassmodel database in an automated fashion in form of a preprocessing pipeline whose steps
are schematically depicted in Figure 1. The number of samples which are dropped in each step is shown in Table 1.
We begin with all samples from the entire glassmodel database. As a first step, we make sure that all glass samples have numerically valid entries. That is, we first remove glass samples which have a Not-a-Number (NaN) entry for at least one compound fraction. Moreover, we drop all glasses which have NaN entries for **P**.1
Footnote 1: At the end of this and all the following preprocessing steps, we always drop all compounds which do not appear in any of the glass samples that are present in the dataset at the respective preprocessing stage.
Next, we make sure that all glass samples are physically valid binary glass compositions. For this, we first discard glasses whose compound fractions do not add up to a value in the closed range between 0.9999 and 1.0001. Then, we exclude all samples which do not consist of exactly two compounds.
To ensure physically valid property values, we fix a closed range of values between a minimum and maximum cut-off value for each property **P** (see Table 2). We determine these values by investigating the distribution of the glass samples with respect to their **P**-values in the datasets that result from the preprocessing pipeline up to this point. Property values outside of this range are considered non-physical but may be present in the database due to typos or other mistakes. Hence, we drop each glass sample with a **P**-value outside of the respective range.
As the minimum and maximum values are rather crude bounds, in a further step, we remove glasses with extreme **P**-values, which have a high chance of still appearing in the datasets again because of typos or other mistakes. To do so, we compute the 0.05th percentile and the 99.95th percentile of **P**-values among all remaining glass samples and subsequently discard all glasses with **P**-values below the lower percentile or above the upper percentile.
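These two value filters amount to a few lines of pandas. The following is a minimal sketch, where `df`, `prop_col`, and the cut-off arguments are hypothetical names rather than identifiers from our code base:

```python
import pandas as pd

def filter_property_values(df: pd.DataFrame, prop_col: str,
                           min_cutoff: float, max_cutoff: float) -> pd.DataFrame:
    # Keep only physically valid property values (cf. Table 2).
    df = df[df[prop_col].between(min_cutoff, max_cutoff)]
    # Drop extreme values outside the [0.05th, 99.95th] percentile range
    # computed among all remaining samples.
    lo = df[prop_col].quantile(0.0005)
    hi = df[prop_col].quantile(0.9995)
    return df[df[prop_col].between(lo, hi)]
```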
A lot of glass samples appear in both the INTERGLAD and the SciGlass database. Consequently, there may be many duplicates among the remaining data points at this stage of the preprocessing pipeline. We therefore apply a duplicate filter which consists of the following steps:
1. We group all glasses with the same (up to the fifth decimal place) compound fractions.
2. For each such group we do the following:
   1. We drop all but the first sample which agree _exactly_ in their values of **P**.
   2. We compute the midpoint of the range of values of **P** among all remaining samples.
   3. If the **P**-value of every sample has a distance to the midpoint smaller than a certain **P**-dependent threshold (see Table 2), then, as a representative of the group of duplicates, we select the first sample in the group, assign to it the median of the **P**-values of the samples in the group, and drop all other samples. Otherwise, we discard the whole group of glass samples.

The values for the duplicate thresholds are determined by using domain knowledge and investigating the average spread of **P**-values in a group of duplicates.
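A minimal pandas sketch of this duplicate filter, assuming a DataFrame `df` with one column per compound fraction (`compound_cols`) and one property column `prop_col`; all names are illustrative:

```python
import pandas as pd

def filter_duplicates(df: pd.DataFrame, compound_cols: list[str],
                      prop_col: str, threshold: float) -> pd.DataFrame:
    kept = []
    # Step 1: group glasses whose fractions agree up to the fifth decimal place.
    keys = [df[c].round(5) for c in compound_cols]
    for _, group in df.groupby(keys):
        # Step 2.1: keep only the first of samples with exactly equal P-values.
        group = group.drop_duplicates(subset=prop_col, keep="first")
        # Step 2.2: midpoint of the range of P-values within the group.
        mid = 0.5 * (group[prop_col].min() + group[prop_col].max())
        # Step 2.3: accept the group only if all values lie close to the midpoint.
        if (group[prop_col] - mid).abs().lt(threshold).all():
            representative = group.iloc[[0]].copy()
            representative[prop_col] = group[prop_col].median()
            kept.append(representative)
    return pd.concat(kept, ignore_index=True) if kept else df.iloc[0:0]
```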
In the next step, we deal with compounds of low representability and iteratively drop compounds which appear in less than one percent of all remaining glass samples. This allows us to reduce the dimension of the compound space and leaves us only with glasses whose compounds are present in sufficiently many samples in order to use them for robust model training.2
Footnote 2: At the end of each iteration, we again drop those samples whose compound fractions do not add up to a value in the closed range between 0.9999 and 1.0001.
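The iteration can be sketched as follows (again with illustrative names; the one-percent threshold is the one stated above):

```python
import pandas as pd

def drop_rare_compounds(df: pd.DataFrame, compound_cols: list[str],
                        min_share: float = 0.01):
    while True:
        # Share of samples in which each compound appears with positive fraction.
        share = (df[compound_cols] > 0).mean()
        rare = share[share < min_share].index.tolist()
        if not rare:
            return df, compound_cols
        compound_cols = [c for c in compound_cols if c not in rare]
        df = df.drop(columns=rare)
        # Re-check the normalization of the remaining fractions (see footnote 2).
        sums = df[compound_cols].sum(axis=1)
        df = df[sums.between(0.9999, 1.0001)]
```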
As a final step, we apply an outlier detection based on a one-class support vector machine (SVM) followed by an outlier detection based on Gaussian process (GP) regression.3
Footnote 3: We fit a Gaussian process to the data and drop samples with a too large deviation in their **P**-value from the respective mean curve.
The resulting cleaned datasets encompass all problem-specific information which is available for each glass property **P**. Given **P**, we denote the elements and compounds which are present (with non-vanishing value in at least one glass sample) in the corresponding cleaned dataset as **P**-elements and **P**-compounds, respectively. The cleaned datasets are subsequently split into training, validation, and test sets as described in Sect. 2.3.
| Preprocessing step | \(T_{g}\): #S | \(T_{g}\): #C | \(E\): #S | \(E\): #C | \(G\): #S | \(G\): #C |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| Dirty dataset | 420,973 | 439 | 420,973 | 439 | 420,973 | 439 |
| Drop NaNs | 344,247 | 283 | 396,460 | 329 | 410,589 | 356 |
| Check compound fractions | 17,483 | 1 | 5,482 | 0 | 1,867 | 0 |
| Filter binary glasses | 50,681 | 83 | 16,651 | 57 | 6,386 | 38 |
| Min-max filter | 16 | 0 | 21 | 0 | 15 | 0 |
| Drop extreme values | 10 | 0 | 4 | 0 | 4 | 0 |
| Duplicate filter | 5,902 | 0 | 1,577 | 0 | 1,253 | 2 |
| Drop rare compounds | 229 | 40 | 79 | 30 | 67 | 19 |
| One-class SVM | 22 | 0 | 5 | 0 | 10 | 0 |
| GP regression | 205 | 0 | 62 | 0 | 63 | 0 |
| Cleaned dataset | 2,178 | 32 | 632 | 23 | 719 | 24 |

Table 1: Data reduction in each step of the preprocessing pipeline. The first and last row show the number of samples (#S) and number of compounds (#C) which are present in the dirty and cleaned dataset, respectively. The rows in between show the number of samples and compounds which are dropped in the respective preprocessing steps. We remark that in the cleaned datasets, the number of _elements_ appearing with non-vanishing fraction in at least one glass sample is 32 for \(T_{g}\), 23 for \(E\), and 24 for \(G\) (and therefore coincides with #C).
### Model Setups
We use neural networks for the approximation of the composition-property relationship of binary oxide glasses. The target quantity is given by one of the respective properties **P**. The composition of a glass can be represented in various ways. Designing a representation in form of a feature vector that encodes a given glass composition in a way that is suitable as input for a neural network is an essential part of the modeling process and is one of the key differences between the blind (uninformed) and our informed learning approach. A second difference lies in the design of the model architecture where the black-box modeling approach of standard blind feed-forward neural networks can be leveraged in the informed setting by integrating prior scientific knowledge.

| **P** | Min. cut-off | Max. cut-off | Duplicate threshold |
| :--- | ---: | ---: | ---: |
| \(T_{g}\) (\({}^{\circ}\)C) | 50 | \(1.8\times 10^{3}\) | 5.0 |
| \(E\) (GPa) | 5.0 | \(2.0\times 10^{2}\) | 1.5 |
| \(G\) (GPa) | 0.10 | \(2.0\times 10^{2}\) | 0.75 |

Table 2: Minimum and maximum cut-off values and duplicate thresholds used in the preprocessing pipeline.

Figure 1: Steps in the preprocessing pipeline as described in Sect. 2.1.
In the following Sect. 2.2.1 and Sect. 2.2.2, we describe in detail the choice of the feature vectors and the network architectures for the blind and informed models, respectively, and highlight their differences.
#### Blind Models
In the _blind_ approach, only the available problem-specific data is used to design a suitable ML model for the composition-property relationship of oxide glasses. This approach is blind or uninformed in the sense that no prior knowledge that exists independently of the learning task is integrated into the model setup.
##### 2.2.1.1 Feature Vectors
Each glass composition is, by definition, uniquely determined by its compound fractions. It is therefore natural to use the compound fractions, grouped together in a feature vector for a given glass composition, as input for a neural network model to predict one of the glass's properties.
##### 2.2.1.2 Network Architectures
If no further information is available, the standard architectural design of a neural network is given by a (fully connected) feed-forward neural network (FFNN) [14]. This class of models satisfies the universal approximation theorem, that is, for any continuous function on a compact domain there exists a FFNN which approximates the function within a given arbitrary tolerance [11; 16; 28]. This result justifies the usage of the set of FFNNs as hypothesis space. In the context of glass materials research, this approach is followed for example in [8; 30] to model several different properties of oxide glasses.
A fully connected FFNN is characterized by (i) its input and output dimensions, i.e., the number of units in its input and output layer, respectively, (ii) its depth, i.e., the number of layers (without counting the input layer), and (iii) the width, i.e., the number of units, of each hidden layer. An example architecture of a fully connected FFNN is shown in Fig. 2.
We use a variety of different FFNNs as benchmark models which we compare our informed model to. To capture the main architectural trends of designing a FFNN and their effects on the prediction quality, we consider five different FFNNs with depths \(L=2,4,8,16,32\) for each property **P**. The input dimensions are determined by the number of respective **P**-compounds and the output dimension is always one as we predict scalar-valued properties. For each **P**-model, we choose the width to be constant for all hidden layers such that the total number of trainable parameters is roughly the same among all **P**-models including the informed model which we describe in Sect. 2.2.2. Each hidden layer is a linear layer with an additive bias term. In accordance with the universal approximation theorem, the output layer is a linear layer with no additive bias term. The exact dimensions of all models are summarized in Table 3. In all models, we use the rectified linear unit (ReLU) as activation function.
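A minimal PyTorch sketch of such a benchmark network; `make_ffnn` is an illustrative helper, not taken from a published code base:

```python
import torch.nn as nn

def make_ffnn(in_dim: int, width: int, depth: int) -> nn.Sequential:
    """Fully connected ReLU network with `depth` layers (the input layer is
    not counted): depth-1 hidden linear layers with bias, followed by a
    final linear output layer without bias."""
    layers, dim = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(dim, width), nn.ReLU()]
        dim = width
    layers.append(nn.Linear(dim, 1, bias=False))
    return nn.Sequential(*layers)
```

As a sanity check against Table 3, `make_ffnn(32, 304, 2)` has \(32\cdot 304+304=10{,}032\) parameters in its hidden layer plus 304 output weights, i.e., 10,336 trainable parameters in total, which matches the first \(T_{g}\) column.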
#### Informed Model
Rather than just using the compound fractions as input features for a neural network, we can increase the informational capacity of a glass sample's representation by utilizing characteristic chemical and physical quantities of each element which is present in the given glass sample and providing them as additional inputs to a neural network. Features which are carefully engineered in such an informed manner can lead to an improved prediction quality of the model, given that the model's expressive power is large enough. The latter issue is a question of the model's architecture. If there are too few parameters, the model will underfit the training data no matter how carefully we designed the input features. If there are too many parameters, however, the model might overfit the training data and pick up on spurious patterns and noise in the input features. In general, by building as much prior information as possible into the model's architecture, we expect to obtain a more robust inference behavior, especially in the extrapolation regime.

Figure 2: Schematic architecture of a fully connected FFNN with input dimension 3, output dimension 2, depth 4, and constant width 4. Each layer represents an affine linear function. The additive bias nodes are not shown. In case of the blind models, the input nodes store the compound fractions of a given glass sample and the output node provides the predicted value for **P**. Image adapted from [25].
##### 2.2.2.1 Feature Vectors
Compared to the uninformed approach, we change our viewpoint and identify a given glass composition not by the fractions of its compounds but by the mole atomic fractions of its elements. For each element, there is additional extensive scientific knowledge about its chemical and physical properties, which exists independently of our learning problem. According to the taxonomy in [37] this knowledge is represented as a weighted graph. Its nodes are given by elements and properties and each element node is connected with a property node via an edge which is weighted by the element's respective property value. We integrate this knowledge into our training data by designing feature vectors in a hybrid fashion. We partly follow the approach used in [7], where, for each element, the authors extract physical and chemical properties, i.a., from the Python library mendeleev[24]. They design and select feature vectors for a neural network model in a way that allows them to complement information from the space of chemical compositions with information from the space of chemical and physical properties.

| \(T_{g}\) | Blind | Blind | Blind | Blind | Blind | Former | Non-former | Down |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Input dim. | 32 | 32 | 32 | 32 | 32 | 28 | 28 | 32 |
| Length \(L\) | 2 | 4 | 8 | 16 | 32 | 4 | 4 | 4 |
| Width \(W\) | 304 | 63 | 38 | 26 | 18 | 32 | 32 | 32 |
| Output dim. | 1 | 1 | 1 | 1 | 1 | 14 | 18 | 1 |
| #Parameters | 10,336 | 10,206 | 10,184 | 10,712 | 10,872 | | 10,336 (total) | |

| \(E\) | Blind | Blind | Blind | Blind | Blind | Former | Non-former | Down |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Input dim. | 23 | 23 | 23 | 23 | 23 | 26 | 26 | 23 |
| Length \(L\) | 2 | 4 | 8 | 16 | 32 | 4 | 4 | 4 |
| Width \(W\) | 385 | 63 | 38 | 25 | 17 | 32 | 32 | 32 |
| Output dim. | 1 | 1 | 1 | 1 | 1 | 10 | 13 | 1 |
| #Parameters | 9,625 | 9,639 | 9,842 | 9,725 | 9,605 | | 9,623 (total) | |

| \(G\) | Blind | Blind | Blind | Blind | Blind | Former | Non-former | Down |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Input dim. | 24 | 24 | 24 | 24 | 24 | 26 | 26 | 24 |
| Length \(L\) | 2 | 4 | 8 | 16 | 32 | 4 | 4 | 4 |
| Width \(W\) | 372 | 63 | 38 | 25 | 17 | 32 | 32 | 32 |
| Output dim. | 1 | 1 | 1 | 1 | 1 | 10 | 14 | 1 |
| #Parameters | 9,672 | 9,702 | 9,880 | 9,750 | 9,622 | | 9,688 (total) | |

Table 3: Model hyperparameters. The five blind columns correspond to the FFNNs of depths \(L=2,4,8,16,32\); the informed columns list the glass former, glass non-former, and downstream networks, whose parameters are counted jointly in the (total) entry.
In our case, we first extract for each glass property \(\mathbf{P}\) and each \(\mathbf{P}\)-element a list of characteristic chemical and physical properties from mendeleev. We drop properties with non-numeric values and only keep those which are available for all \(\mathbf{P}\)-elements. We also drop properties which we consider to be unrelated to the elements' influence on the glass material properties, namely, the elements' _abundances in the earth crust_ and the elements' _dipole polarizability uncertainties_. We refer to [24] and references therein for a detailed explanation of all the available properties in the mendeleev library.
Among the remaining properties, we drop those which are highly correlated. More specifically, we compute the standard pairwise Pearson correlation coefficients and iteratively, for each pair of properties whose coefficient is larger than 0.95, we only keep one of the two properties. The resulting list of properties is given in Table 7.
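A common pandas/NumPy idiom for such a correlation-based pruning scans the upper triangle of the correlation matrix and, like the iterative procedure described above, keeps one property of each highly correlated pair; `element_props` is an illustrative DataFrame holding one column per element property:

```python
import numpy as np

corr = element_props.corr(method="pearson").abs()
# Consider each pair of properties exactly once via the upper triangle.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
element_props = element_props.drop(columns=to_drop)
```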
For each glass property \(\mathbf{P}\) and each \(\mathbf{P}\)-element, we group the resulting element properties into vectors, which we call \(\mathbf{P}\)-element-vectors. These element-vectors represent all prior knowledge about the elements which exists independently of the learning problem.
In the next step, we complement this information with our given problem-specific data. For each glass property \(\mathbf{P}\) and a given glass sample, we consider the collection of \(\mathbf{P}\)-element-vectors corresponding to the sample's elements and extend each of them by one more component which lists the mole atomic fraction of the respective element in the glass sample. Eventually, we obtain for each glass sample a collection of feature vectors \((v_{1},\ldots,v_{M})\in\mathbb{R}^{d\times M}\), where we arrange the tuple in lexicographical order of the elements' symbols. Here, \(M=M(\mathbf{P})\) is the number of \(\mathbf{P}\)-elements and \(d=d(\mathbf{P})\) is the number of properties resulting from the property extraction process described above (including the entry with the mole atomic fraction). Each glass sample can thus be represented as a point in a subspace \(\Omega=\Omega(\mathbf{P})\subset\mathbb{R}^{d\times M}\).
##### 2.2.2.2 Network Architecture
For each glass property \(\mathbf{P}\), we want to design a neural network which approximates the functional relationship \(f:\Omega\rightarrow\mathbb{R},\mathbf{V}\mapsto\mathbf{P}(\mathbf{V})\), where \(\mathbf{P}(\mathbf{V})\) is the value of property \(\mathbf{P}\) for the glass sample with representation \(\mathbf{V}=(v_{1},\ldots,v_{M})\). We design the architecture of the network by two leading principles in the spirit of informed learning.
First, we observe that the order in which the feature vectors are passed to the function \(f\) actually does not matter. That is, the function \(f\) is _permutation invariant_
with respect to the order of the input vectors. More specifically, \(f\) is a function on sets of the form \(\{v_{1},\ldots,v_{M}\}\). In terms of the taxonomy in [37] we use this scientific knowledge, which is represented as a spatial invariance, and directly integrate it into the architecture of the network which we use to approximate \(f\). It is shown in [40] that such a function \(f\) on sets can be written as
\[f(\{v_{1},\ldots,v_{M}\})=\psi\left(\sum_{i=1}^{M}\phi(v_{i})\right)\,, \tag{1}\]
where \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{N}\) denotes an inner embedding function with \(N\in\mathbb{N}\) being an appropriately chosen embedding dimension, and \(\psi:\mathbb{R}^{N}\to\mathbb{R}\) denotes an outer (downstream) function. Here, \(\phi\) and \(\psi\) can be approximated by neural networks. Using the universal approximation theorem of neural networks, the right-hand side of (1) yields architectures of neural networks which, in principle, can approximate \(f\) arbitrarily well.
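A minimal PyTorch sketch of an architecture of the form (1); the hidden widths are placeholder values:

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Permutation-invariant model f({v_1, ..., v_M}) = psi(sum_i phi(v_i))."""

    def __init__(self, d: int, N: int, width: int = 32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                                 nn.Linear(width, N))
        self.psi = nn.Sequential(nn.Linear(N, width), nn.ReLU(),
                                 nn.Linear(width, 1, bias=False))

    def forward(self, V: torch.Tensor) -> torch.Tensor:
        # V has shape (batch, M, d); summing over the element axis makes the
        # output independent of the order of the M feature vectors.
        return self.psi(self.phi(V).sum(dim=1))
```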
In our specific use case, we can refine the network's architecture even further by integrating prior chemical knowledge. Glass oxides can be categorized in three groups [5]. _Glass formers_ are oxides that can readily form a glassy material and build the backbone of a glass's network structure. _Glass modifiers_ are oxides that cannot form a glassy material by themselves but influence its material properties when mixed with a glass former. _Glass intermediates_ are oxides which can act both as a glass former as well as a glass modifier depending on the respective cation's oxidation number. For our purposes, we only differentiate between oxides which are glass formers and oxides which are not glass formers. We refer to the latter group as _glass non-formers_. We use the classification proposed in [5] to determine for every element whether its oxide is a glass former or a glass non-former. The classification is shown in Table 4.
The scientific knowledge whether an element's oxide has glass-forming or glass-non-forming ability is naturally represented as a simple knowledge graph, where each element is represented by a node. There is also a glass former node and a glass non-former node. Each element node is connected via an edge with the glass former node or the glass non-former node depending on whether the element's oxide is a glass former or a glass non-former. Due to the largely different influence on a glass's properties, we integrate this prior knowledge additionally into our hypothesis set by using two functions to treat glass formers and non-formers separately. The _glass former network_ receives as input only feature vectors of elements whose oxides are glass formers. The _glass non-former network_ receives as input all other elements whose oxides, by definition, are glass non-formers. The outputs of the glass former network are added together, as are the outputs of the glass non-former network. The results are concatenated and then used as input for the _downstream network_ which yields the final prediction for the respective property \(\mathbf{P}\).
More specifically, let \(\Omega=\Omega_{f}\cup\Omega_{nf}\) be the decomposition of \(\Omega\) into the space \(\Omega_{f}\) of representations of glass formers and the space \(\Omega_{nf}\) of representations of glass non-formers. We then replace the inner function \(\phi\) in (1) by two separate functions, \(\phi_{f}:\Omega_{f}\to\mathbb{R}^{N_{f}}\), \(N_{f}\in\mathbb{N}\), for the glass former network and \(\phi_{nf}:\Omega_{nf}\to\mathbb{R}^{N_{nf}}\)
\(N_{nf}\in\mathbb{N}\), for the glass non-former network. Permutation invariance then holds only within the feature vectors \(v_{1},\ldots,v_{M_{f}}\) corresponding to the \(M_{f}\) glass formers and within the \(M_{nf}:=M-M_{f}\) feature vectors \(v_{M_{f}+1},\ldots,v_{M}\) corresponding to the glass non-formers. The resulting representation of \(f\) then has the following form,
\[f(\{v_{1},\ldots,v_{M}\})=\psi\left(\sum_{i=1}^{M_{f}}\phi_{f}(v_{i}),\sum_{i= M_{f}+1}^{M}\phi_{nf}(v_{i})\right), \tag{2}\]
where, under slight abuse of notation, we used the same notation \(\psi\) for the downstream function as in (1). As glass former network, glass non-former network, and downstream network we use three separate ReLU-FFNNs to approximate the functions \(\phi_{f},\phi_{nf}\), and \(\psi\) in (2), respectively. Their widths and depths are listed in Table 3. The overall network architecture of our informed model is illustrated in Fig. 3.
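Combining both ideas, the architecture in (2) can be sketched as follows; the layer widths are placeholders, and the actual hyperparameters are those of Table 3:

```python
import torch
import torch.nn as nn

class InformedGlassModel(nn.Module):
    """Former and non-former feature vectors are embedded by separate
    networks, summed per group, concatenated, and passed downstream."""

    def __init__(self, d: int, N_f: int, N_nf: int, width: int = 32):
        super().__init__()
        self.phi_f = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                                   nn.Linear(width, N_f))
        self.phi_nf = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                                    nn.Linear(width, N_nf))
        self.psi = nn.Sequential(nn.Linear(N_f + N_nf, width), nn.ReLU(),
                                 nn.Linear(width, 1, bias=False))

    def forward(self, V_f: torch.Tensor, V_nf: torch.Tensor) -> torch.Tensor:
        # V_f: (batch, M_f, d) glass-former vectors; V_nf: (batch, M_nf, d).
        z = torch.cat([self.phi_f(V_f).sum(dim=1),
                       self.phi_nf(V_nf).sum(dim=1)], dim=-1)
        return self.psi(z)
```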
The embedding dimensions \(N_{f}\) and \(N_{nf}\) are hyperparameters of the glass former and non-former network, respectively. It is shown in [38] that in the scalar case, \(d=1\), the choice \(N=M\) in (1) is a sufficient and necessary condition in order to approximate the function \(f\) arbitrarily well by a neural network whose architecture is given by the right-hand side in (1). In the vector-valued case, \(d>1\), to the best of our knowledge, no non-trivial necessary condition on the embedding dimension is known so far. In [15], the authors prove a sufficient condition in form of an upper bound on the embedding dimension, which, however, is very pessimistic. Based on the results in the one-dimensional case, we choose \(N_{f}=M_{f}\) and \(N_{nf}=M_{nf}\) in (2).
\begin{table}
\begin{tabular}{l l c l c} \hline \hline
**P** & Formers & \#Formers & Non-formers & \#Non-formers \\ \hline
\(T_{g}\) & As, B, Bi, Ge, Mo, P, Pb, Sb, Si, Sn, Te, Tl, V, W & 14 & Ag, Al, Ba, Ca, Cs, Cu, Fe, Ga, K, La, Li, Mg, Na, O, Rb, Sr, Ti, Zn & 18 \\
\(E\) & B, Bi, Ge, Mo, Nb, P, Pb, Si, Te, V & 10 & Al, Ba, Ca, Co, Cs, K, Li, Mg, Na, O, Sr, Ti, Zn & 13 \\
\(G\) & B, Bi, Ge, Mo, Nb, P, Pb, Si, Te, V & 10 & Al, Ba, Ca, Co, Cs, K, Li, Mg, Na, O, Rb, Sr, Ti, Zn & 14 \\ \hline
All & As, B, Bi, Ge, Mo, Nb, P, Pb, Sb, Se, Si, Sn, Ta, Te, Tl, V, W & 17 & & \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Classification of elements based on the glass-forming and glass-non-forming properties of their oxides. The last row shows all elements whose oxides are glass formers according to the classification in [5]. The second column lists, for given **P**, the respective **P**-elements whose oxides are glass formers and which are a subset of the elements in the last row. The fourth column lists all respective **P**-elements whose oxides are glass non-formers. We classify oxygen as a glass non-former.
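As a minimal illustration of how the classification in Table 4 enters the pipeline, the following sketch partitions a sample's elements into formers and non-formers for \(T_{g}\). The composition dictionary and its atomic fractions are made-up example values.

```python
# Glass formers for T_g according to Table 4 (classification from [5]);
# all other elements, including oxygen, count as glass non-formers.
TG_FORMERS = {"As", "B", "Bi", "Ge", "Mo", "P", "Pb",
              "Sb", "Si", "Sn", "Te", "Tl", "V", "W"}


def partition_elements(composition):
    """Split an {element: atomic fraction} mapping into formers/non-formers."""
    formers = {el: x for el, x in composition.items() if el in TG_FORMERS}
    non_formers = {el: x for el, x in composition.items() if el not in TG_FORMERS}
    return formers, non_formers


# Illustrative sodium borate sample (fractions are invented for the example):
formers, non_formers = partition_elements({"B": 0.28, "O": 0.58, "Na": 0.14})
```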
### Model Training and Evaluation
Recall that we consider three different glass material properties: glass transition temperature \(T_{g}\), Young's modulus \(E\) at room temperature, and shear modulus \(G\). For each of these properties \(\mathbf{P}\), we split the cleaned datasets from Sect. 2.1 further into datasets for training, validation, and testing. We then apply the blind models and the informed model discussed in Sect. 2.2.1 and Sect. 2.2.2.
For data management, machine learning utilities, and visualization, we use the Python libraries pandas [35], Scikit-learn [26], and Matplotlib [17], respectively. The neural network models are built using the PyTorch-Lightning [12] module.
We describe the data splitting in a bit more detail. For each property \(\mathbf{P}\), we apply the following steps. First, we apply the preprocessing pipeline described in Sect. 2.1. Then, we apply the feature design and selection processes for the blind and the informed models as described in Sect. 2.2.1.1 and Sect. 2.2.2.1, respectively. Next, we split the resulting cleaned dataset into those glass samples which contain sodium and those which do not. From the samples which contain sodium, we extract only those binary oxides which consist of B\({}_{2}\)O\({}_{3}\) as glass former and Na\({}_{2}\)O as glass non-former. The resulting dataset is our \(\mathbf{P}\)-test set. The other glass samples, which do not contain sodium, are randomly split for each model into a \(\mathbf{P}\)-training set and a \(\mathbf{P}\)-validation set using an 80%/20% ratio. The dimensions of the resulting datasets are shown in Table 5. We emphasize that sodium as an element is entirely absent from the training and validation sets and only present in the test sets. Examining the performance of the trained models on the test sets therefore allows us to properly evaluate their extrapolation power.
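The following pandas sketch mirrors the described split. The column layout (one atomic-fraction column per element plus a target column named "P_value") is an assumption for illustration, not the actual schema of the datasets.

```python
import pandas as pd


def split_for_property(df: pd.DataFrame, target: str = "P_value", seed: int = 0):
    """Na-extrapolation split: sodium-containing binary B2O3-Na2O samples
    form the test set; sodium-free samples are split 80%/20%."""
    has_na = df["Na"] > 0
    other = [c for c in df.columns if c not in ("B", "O", "Na", target)]
    # Test set: samples that contain sodium and no element besides B, O, Na.
    test = df[has_na & (df[other].sum(axis=1) == 0)]
    # Training/validation pool: every sodium-free sample, shuffled once.
    pool = df[~has_na].sample(frac=1.0, random_state=seed)
    n_train = int(0.8 * len(pool))
    return pool.iloc[:n_train], pool.iloc[n_train:], test
```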
We use the _bagging_ method from the field of ensemble learning [6]. For each model setup and each property \(\mathbf{P}\), we train 50 models.
Figure 3: Architecture of the informed model. The feature vectors of the elements are split according to whether the elements’ oxides are glass formers or glass non-formers and input to separate neural networks. The results of the latter are first summed individually and then concatenated to a vector which is used as input for the final downstream neural network that predicts a value for property \(\mathbf{P}\).
Their initializations are the same, but each model is trained and validated on a different random \(80\%/20\%\)-split into training set and validation set. We therefore end up with an ensemble of 50 different models.
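A compact sketch of this bagging loop is given below. Here, build_model and train_model are hypothetical stand-ins for the model construction and training procedure described in this section, and split_for_property is reused from the sketch above.

```python
def train_bagging_ensemble(df, build_model, train_model, n_models=50):
    """Bagging over 50 random 80%/20% splits with identical initialization."""
    init_state = build_model().state_dict()        # one shared initialization
    ensemble = []
    for k in range(n_models):
        model = build_model()
        model.load_state_dict(init_state)          # same starting weights
        train, val, _ = split_for_property(df, seed=k)  # a fresh random split
        ensemble.append(train_model(model, train, val))
    return ensemble
```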
For training, we use the ADAM optimizer with default settings [19] and a weight decay of \(10^{-5}\), and train for a maximum of 1,000 epochs with a batch size of 8. We start with a learning rate of 0.001 and multiply it by a factor of 0.5 if the model's performance on the validation set in terms of the mean squared error (MSE) does not improve over the course of 50 epochs. Moreover, to avoid overfitting, we use early stopping and stop training if the model's MSE on the validation set does not improve over the course of 100 epochs. When training is finished, we keep for each model the parameters that achieved the lowest MSE on the validation set.
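In PyTorch terms, the described schedule corresponds roughly to the following sketch; run_one_epoch is a hypothetical helper that trains for one epoch and returns the validation MSE.

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau


def fit(model, run_one_epoch, max_epochs=1000, patience=100):
    """ADAM with weight decay, LR halving on a 50-epoch validation
    plateau, and early stopping after 100 stale epochs (sketch)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
    sched = ReduceLROnPlateau(opt, mode="min", factor=0.5, patience=50)
    best, wait = float("inf"), 0
    for epoch in range(max_epochs):
        val_mse = run_one_epoch(model, opt)
        sched.step(val_mse)            # halves the LR after 50 stale epochs
        if val_mse < best:
            best, wait = val_mse, 0
        else:
            wait += 1
            if wait >= patience:       # early stopping
                break
    return model
```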
After training, we apply a post-processing step. For each property **P**, we discard those models whose predictions for **P** on the whole test set can be considered constant. More specifically, we first compute for each sample in the test set the mean value of the models' predictions for **P**. Then, we drop those models for which the deviation of the predicted property values from the respective mean value is less than or equal to the **P**-duplicate threshold from Table 2 for all samples in the test set. This is in alignment with informed learning, since we know a priori that for each property **P**, not all Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples have the same **P**-value. In terms of the taxonomy in [37], we thus use this expert knowledge, which can be represented as an algebraic constraint (the predictions must not be constant), and integrate it into our final hypothesis.
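The described filter can be expressed in a few lines of NumPy, assuming preds is an array of shape (number of models, number of test samples).

```python
import numpy as np


def drop_constant_models(preds: np.ndarray, threshold: float) -> np.ndarray:
    """Drop models whose test-set predictions deviate from the per-sample
    ensemble mean by at most the P-duplicate threshold for every sample."""
    deviation = np.abs(preds - preds.mean(axis=0))  # (n_models, n_test)
    keep = (deviation > threshold).any(axis=1)      # at least one deviation
    return preds[keep]
```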
Among the remaining models we compute the mean and the 95%-confidence interval of the predictions. This yields the final prediction of the model ensemble and quantifies its uncertainty. We compare the ensembles' performances quantitatively in terms of their root mean squared errors (RMSE), mean absolute errors (MAE), and maximum errors (MAX) on the respective **P**-test sets, which are summarized in Table 6. All blind 32-layer networks yield constant predictions for all three properties and are therefore discarded as non-physical in the post-processing step. Nevertheless, we still record their respective ensemble errors in Table 6 to get a more conclusive picture. However, when talking about the best and worst error values, we _only consider the ensembles of blind models with depths \(L=2,4,8,16\)_ and neglect the values of the models with 32 layers.
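A sketch of the ensemble statistics is given below. That the 95%-confidence interval follows a normal approximation to the ensemble mean is our assumption, since the text does not specify how the interval is computed.

```python
import numpy as np


def ensemble_stats(preds: np.ndarray, y_true: np.ndarray):
    """Ensemble mean, 95% confidence half-width, and RMSE/MAE/MAX errors."""
    mean = preds.mean(axis=0)
    ci95 = 1.96 * preds.std(axis=0, ddof=1) / np.sqrt(len(preds))
    err = mean - y_true
    return mean, ci95, {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAX": float(np.max(np.abs(err))),
    }
```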
To also get a qualitative picture of the ensembles' extrapolation performances, we plot the composition-property curves of the ensembles' averaged predictions on the **P**-test sets in Figs. 4-6.
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**P** & \#Training samples & \#Validation samples & \#Test samples \\ \hline \(T_{g}\) & 1,385 & 347 & 125 \\ \(E\) & 415 & 104 & 42 \\ \(G\) & 477 & 120 & 73 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Dimensions of the training, validation, and test sets.
Recall that the test sets consist of the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples together with their respective **P**-values. We also plot the predictions of the best and worst performing model of each **P**-ensemble in terms of the RMSE on the test set. Moreover, we plot the property values of all available alkali borate glasses, that is, binary glasses which consist of B\({}_{2}\)O\({}_{3}\) as glass former and Na\({}_{2}\)O, Li\({}_{2}\)O, and Rb\({}_{2}\)O as glass non-former, respectively. It is known that these glass compositions have similar material properties [13].
Concerning Figs. 4-6, a few remarks are in order. First, there are no Rb\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples available for \(E\). Next, since we only consider binary oxide glasses, knowing the compound fraction of B\({}_{2}\)O\({}_{3}\) completely determines the compound fraction of the respective alkali oxide as glass non-former as well. Finally, since the blind 32-layer networks are discarded, their predictions are not shown.
## 3 Results and Discussion
We first compare the models' performances quantitatively in terms of their average errors in Table 6. Among the blind models, we note that the ensembles of shallow two-layer networks perform best for all three properties in terms of RMSE, MAE, and MAX. Considering the worst performing ensembles, we note that the deeper networks with depths of 8 and 16 layers perform worst, on average, in terms of almost all three error metrics for all three properties. Only for \(T_{g}\) in the case of MAX, the ensemble of models with only four layers performs worst. We note that in this case, the ensemble of models with a depth of 32 layers actually performs best in terms of MAX. In all other cases, however, the 32-layer network ensembles perform worst for all three properties when compared to the other blind models. We conclude that, in general, increasing network complexity in terms of increasing depth tends to lead to worse performing models.
The number of models with non-constant predictions clearly decays with increasing network depth for all three properties. Whereas the networks with depths of 2, 4, and 8 layers lead to no constant predictions, the 16-layer networks lead to some constant predictions. There is a steep decay when increasing the number of layers from 16 to 32, where all models for all properties yield only constant predictions. A possible explanation for this phenomenon is that the models' loss landscapes become more and more rugged with increasing network depth, turning constant predictions into local minima that are hard to escape during optimization. This matches the observation from above that increasing network depth generally tends to lead to worse-performing models.
Turning now to the errors of the informed models, we see that they perform best, on average, in terms of all three error metrics for all three properties. They lead to a relative improvement in the errors of between 26% and 59%. For \(E\) and \(G\), only two models yield constant predictions, whereas for \(T_{g}\) more than half of all models do. Again, this could indicate that the loss landscape of the informed networks is much more rugged for \(T_{g}\) than for the other two properties.
To get a more conclusive qualitative picture of the extrapolation behavior of all models, we take a closer look at Figs. 4-6. We observe that the blind networks, in terms of the ensembles' means as well as the best and worst performing models, are not able to qualitatively capture the trend of the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) curves correctly and instead generally deviate from the test points to a large extent. However, the ensembles' predictions for all blind networks seem to be quite close to each other for all three properties.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
\(T_{g}\) (\({}^{\circ}\)C) & \multicolumn{5}{c}{Blind (depth \(L\))} & Informed & Rel. improv. \\
 & 2 & 4 & 8 & 16 & 32 & & \\ \hline
RMSE & **86.2** & 111 & **112** & 109 & 86.5 & **62.5** & 27\% \\
MAE & **63.0** & 77.2 & 81.7 & **84.4** & 73.6 & **44.4** & 30\% \\
MAX & **265** & **341** & 338 & 311 & 205 & **186** & 30\% \\
\#Non-const. predictions & 50 & 50 & 50 & 48 & 0 & 24 & \\ \hline \hline
\end{tabular}

\begin{tabular}{l c c c c c c c} \hline \hline
\(E\) (GPa) & \multicolumn{5}{c}{Blind (depth \(L\))} & Informed & Rel. improv. \\
 & 2 & 4 & 8 & 16 & 32 & & \\ \hline
RMSE & **9.88** & 10.7 & **12.0** & 11.6 & 15.6 & **4.67** & 53\% \\
MAE & **8.58** & 9.44 & **10.7** & 10.3 & 12.5 & **3.50** & 59\% \\
MAX & **16.8** & 18.2 & 19.5 & **19.8** & 33.7 & **12.5** & 26\% \\
\#Non-const. predictions & 50 & 50 & 50 & 43 & 0 & 48 & \\ \hline \hline
\end{tabular}

\begin{tabular}{l c c c c c c c} \hline \hline
\(G\) (GPa) & \multicolumn{5}{c}{Blind (depth \(L\))} & Informed & Rel. improv. \\
 & 2 & 4 & 8 & 16 & 32 & & \\ \hline
RMSE & **2.92** & 3.65 & **4.02** & 3.81 & 5.94 & **1.39** & 52\% \\
MAE & **2.43** & 3.09 & **3.39** & 3.13 & 5.08 & **1.12** & 54\% \\
MAX & **6.10** & 7.24 & 7.89 & **8.37** & 12.2 & **2.95** & 52\% \\
\#Non-const. predictions & 50 & 50 & 50 & 39 & 0 & 48 & \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Results and errors of the model ensembles’ averaged predictions on the test sets. Among the blind ensembles with depths \(L=2,4,8,16\), bold blue numbers denote the lowest error values, bold red numbers the highest ones. Bold black numbers denote the lowest error values among all model ensembles. The last column shows the relative improvement in the error when comparing the error value of the informed ensemble to the lowest error value (blue) among the blind ensembles. The blind models with 32 layers are not considered in the error analysis as these models only yield constant predictions and are therefore discarded as non-physical.
This is reflected by the small width of the confidence band around the mean curves as well as the similar shape of the mean curves and the curves of the best and worst performing models. This indicates that the blind models are robust with respect to training.
The mean curves of the informed model ensembles qualitatively capture the trend of the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) curves to a more acceptable degree. This is most noticeable in the cases of \(T_{g}\) and \(G\), where the mean curves are able to capture the nonlinearity of the respective Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) curve, which the blind networks are not capable of. For \(E\), the informed model ensemble yields more accurate trajectories than the blind ensembles in the linear regime with B\({}_{2}\)O\({}_{3}\)-fractions between 0.7 and 1.0, but the kink in the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) curve at a B\({}_{2}\)O\({}_{3}\)-fraction of around 0.7 is not captured. This could, in part, be due to the small training and validation sets which are available for \(E\) (see Table 5) and, in particular, to the lack of Rb\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples on which the models could base their predictions. We explain the latter point in more detail below. Nevertheless, whereas the mean curve for \(E\) shows at least a physically reasonable trajectory in the region of low B\({}_{2}\)O\({}_{3}\)-fractions, where there are no data points of alkali borate glasses available, the mean curves for \(T_{g}\) and \(G\) show a non-physical incline for glasses with B\({}_{2}\)O\({}_{3}\)-fractions of less than 0.2 and 0.4, respectively. As a further observation, we note that, in general, for all three properties, the uncertainty of the model ensembles' predictions, in terms of the width of the confidence band around the mean curve, is much higher than in the blind settings, especially in the regions of low B\({}_{2}\)O\({}_{3}\)-fractions where only few or no alkali borate glass samples are available. This indicates less robustness with respect to training and is most noticeably reflected by the large deviation of the worst performing model's curve from the mean curve for all three properties.
As the most probable explanation, we suspect these observations to be caused by the choice of our training and test sets. As already indicated in Sect. 2.3, the curves of all alkali borate glasses show a similar trajectory for all three properties since these glasses have similar material properties. We also note that only the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glasses are absent from the training and validation sets. In regions where the other alkali borate glasses are available in the training and validation sets, the models are thus, in principle, able to learn the properties of the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glasses based on the other alkali borate glass samples. In regions where many of these samples are available and where their property curves are very close to the Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) curve, the informed models' predictions thus tend to be quite accurate. In regions where only few or no alkali borate glasses are available in the training and validation sets, the models are prone to base their predictions on other spurious or noisy features. This leads to non-physical predictions with a high uncertainty. This phenomenon is amplified by a large feature set and is therefore much more pronounced in the informed setting than in the blind one.
In summary, the informed model shows, on average in the ensemble setting, clearly superior performance compared to all considered blind (uninformed) models in extrapolating the property curves of Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) binary glasses for all three properties \(T_{g}\), \(E\), and \(G\). This holds in terms of quantitative error measurements on the test sets as well as in the qualitative approximation of the property curves.
Finally, we emphasize the importance of the ensemble setting. Whereas a single model might yield poor predictions, averaging over multiple trained models often yields good approximations of the target quantities [31], as we also observe in our specific use case.
## 4 Conclusion and Outlook
In this paper, we presented an informed neural network approach for the prediction of three material properties of binary oxide glasses, that is, glass transition temperature \(T_{g}\), Young's modulus \(E\) (at room temperature), and shear modulus \(G\). We compared this approach to five different blind (uninformed) models for all three properties and demonstrated its superior average extrapolation power when applied in an ensemble setting to alkali borate glass samples which contain sodium as a previously unseen element.
In terms of the taxonomy of informed machine learning introduced in [37], we integrated prior knowledge into our learning pipeline at four major points. We integrated scientific knowledge, represented as a weighted graph, as a knowledge graph, and as spatial invariance, into the training data and into the hypothesis set. Moreover, we integrated expert knowledge, represented as algebraic equations, into the final hypothesis.
Our informed neural network model could be improved in various ways. First, the list of chemical and physical element features could be extended. Second, instead of classifying glass oxides into formers and non-formers, we could follow the refined classification into formers, modifiers, and intermediates and treat these three classes with three separate neural networks. Third, in this paper, we did not tune any of the models' hyperparameters. A thorough hyperparameter study would likely lead to improved model performance. Finally, by relying on further expert knowledge, we could potentially filter out even more predicted property curves in the post-processing step than just the constant ones. This might improve the final predictions even further.
Our results show that our informed neural network model is capable of meaningfully extrapolating various properties of binary glass samples with previously unseen compounds. As a next step, we plan to scale up our approach in order to make it applicable to oxide glass samples with three or more compounds. We also plan to make it more universal, such that it can accurately predict more material properties.
###### Acknowledgements.
This work was supported in part by the BMBF-project 05M2AAA M-GriDo (Mathematics for Machine Learning Methods for Graph-Based Data with Integrated Domain Knowledge), by the Fraunhofer Cluster of Excellence Cognitive Internet Technologies, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via project 390685813 - GZ 2047/1 - Hausdorff Center for Mathematics (HCM).
Figure 4: \(T_{g}\)-values of binary alkali borate glasses. Scattered points represent \(T_{g}\)-values given in the cleaned \(T_{g}\)-dataset. Solid lines show the predictions for the \(T_{g}\)-value of Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples of the blind and informed models, respectively. The model ensembles’ mean curves are shown as blue solid lines with the shaded blue area depicting the 95% confidence band. The predictions of the best and worst performing models in the ensembles are shown as green and red solid lines, respectively.
Figure 5: \(E\)-values of binary alkali borate glasses. Scattered points represent \(E\)-values given in the cleaned \(E\)-dataset. Solid lines show the predictions for the \(E\)-value of Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples of the blind and informed models, respectively. The model ensembles’ mean curves are shown as blue solid lines with the shaded blue area depicting the 95% confidence band. The predictions of the best and worst performing models in the ensembles are shown as green and red solid lines, respectively.
Figure 6: \(G\)-values of binary alkali borate glasses. Scattered points represent \(G\)-values given in the cleaned \(G\)-dataset. Solid lines show the predictions for the \(G\)-value of Na\({}_{2}\)O-B\({}_{2}\)O\({}_{3}\) glass samples of the blind and informed models, respectively. The model ensembles’ mean curves are shown as blue solid lines with the shaded blue area depicting the 95% confidence band. The predictions of the best and worst performing models in the ensembles are shown as green and red solid lines, respectively.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Properties from mendeleev & \(T_{g}\) & \(E\) & \(G\) \\ \hline Atomic number & ✓ & ✓ & ✓ \\ Atomic radius & ✓ & ✓ & ✓ \\ Atomic radius by Rahm et al. & ✓ & ✓ & ✓ \\ Atomic volume & ✓ & ✓ & ✓ \\ Atomic weight & & & \\ Boiling temperature & ✓ & ✓ & ✓ \\ \(C_{6}\) dispersion coefficient by Gould and Bucko & ✓ & ✓ & ✓ \\ Covalent radius by Cordero et al. & ✓ & & \\ Single bond covalent radius by Pyykko et al. & & & \\ Double bond covalent radius by Pyykko et al. & & & \\ Density & ✓ & ✓ & ✓ \\ Dipole polarizability & & & \\ Electron affinity & ✓ & ✓ & ✓ \\ Electron affinity in the Allen scale & ✗ & ✓ & ✓ \\ Electron affinity in the Ghosh scale & ✓ & ✓ & ✓ \\ Electron affinity in the Pauling scale & ✓ & & \\ Glawe’s number & ✓ & ✓ & ✓ \\ Group in periodic table & ✓ & ✓ & ✓ \\ Heat of formation & ✓ & ✓ & ✓ \\ First ionization energy & ✓ & ✓ & ✓ \\ Lattice constant & ✓ & ✓ & ✓ \\ Maximum coordination number & ✓ & ✓ & ✓ \\ Maximum oxidation state & ✓ & ✓ & ✓ \\ Melting temperature & ✓ & & \\ Mendeleev’s number & & & \\ Minimum coordination number & ✓ & ✓ & ✓ \\ Minimum oxidation state & ✓ & ✓ & ✓ \\ Period in periodic table & ✓ & ✓ & ✓ \\ Pettifor scale & & & \\ Index to chemical series & ✓ & ✓ & ✓ \\ Number of valence electrons & ✓ & ✓ & ✓ \\ Van der Waals radius & ✓ & ✓ & ✓ \\ Van der Waals radius according to Alvarez & ✓ & ✓ & ✓ \\ Van der Waals radius according to Batsanov & & & \\ Van der Waals radius from the MM3 FF & & & \\ Van der Waals radius from the UFF & ✓ & ✓ & ✓ \\ \hline Number \(d\) of all features & 28 & 26 & 26 \\ (including mole atomic fractions) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 7: Chemical and physical properties extracted from the mendeleev library which are used for the correlation study described in Sect. 2.2.1. Properties that are marked with ✗ are dropped before the correlation study as they are not available for all respective **P**-elements. Properties that are marked with ✓ are not highly correlated among each other and are used as final features. All unmarked properties are dropped due to too high correlation with other properties. We refer to [24] and the references therein for detailed explanations of the properties. |
2305.17650 | Evolving Connectivity for Recurrent Spiking Neural Networks | Recurrent spiking neural networks (RSNNs) hold great potential for advancing
artificial general intelligence, as they draw inspiration from the biological
nervous system and show promise in modeling complex dynamics. However, the
widely-used surrogate gradient-based training methods for RSNNs are inherently
inaccurate and unfriendly to neuromorphic hardware. To address these
limitations, we propose the evolving connectivity (EC) framework, an
inference-only method for training RSNNs. The EC framework reformulates
weight-tuning as a search into parameterized connection probability
distributions, and employs Natural Evolution Strategies (NES) for optimizing
these distributions. Our EC framework circumvents the need for gradients and
features hardware-friendly characteristics, including sparse boolean
connections and high scalability. We evaluate EC on a series of standard
robotic locomotion tasks, where it achieves comparable performance with deep
neural networks and outperforms gradient-trained RSNNs, even solving the
complex 17-DoF humanoid task. Additionally, the EC framework demonstrates a two
to three fold speedup in efficiency compared to directly evolving parameters.
By providing a performant and hardware-friendly alternative, the EC framework
lays the groundwork for further energy-efficient applications of RSNNs and
advances the development of neuromorphic devices. | Guan Wang, Yuhao Sun, Sijie Cheng, Sen Song | 2023-05-28T07:08:25Z | http://arxiv.org/abs/2305.17650v1 | # Evolving Connectivity for Recurrent Spiking Neural Networks
###### Abstract
Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence, as they draw inspiration from the biological nervous system and show promise in modeling complex dynamics. However, the widely-used surrogate gradient-based training methods for RSNNs are inherently inaccurate and unfriendly to neuromorphic hardware. To address these limitations, we propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs. The EC framework reformulates weight-tuning as a search into parameterized connection probability distributions, and employs Natural Evolution Strategies (NES) for optimizing these distributions. Our EC framework circumvents the need for gradients and features hardware-friendly characteristics, including sparse boolean connections and high scalability. We evaluate EC on a series of standard robotic locomotion tasks, where it achieves comparable performance with deep neural networks and outperforms gradient-trained RSNNs, even solving the complex 17-DoF humanoid task. Additionally, the EC framework demonstrates a two to three fold speedup in efficiency compared to directly evolving parameters. By providing a performant and hardware-friendly alternative, the EC framework lays the groundwork for further energy-efficient applications of RSNNs and advances the development of neuromorphic devices.
## 1 Introduction
Learning from the information processing mechanisms of the biological nervous system has the potential to advance our computing systems towards artificial general intelligence (AGI) and foster the brain-inspired computing field, as evidenced by the popularity of this conference [Pei et al., 2019]. Specifically, the highly recurrently connected and spike-emitting brain network inspired recurrent spiking neural networks (RSNNs), which employ discrete, spike-based signals for transmitting information through recurrent connections in an event-driven manner. RSNNs can serve as realistic models of the brain incorporating the latest anatomical and neuro-physiological data [Billeh et al., 2020], leading to the discovery of general principles of computing, e.g., noise robustness [Chen et al., 2022]. Furthermore, RSNNs can exhibit self-organized critical properties [Poil et al., 2012] and serve as a reservoir for complex temporal dynamics [Jaeger, 2001, Maass et al., 2002], making them suitable for sequential tasks like robotics [Lee et al., 2020], path-finding [Rueckert et al., 2016], and others.
Despite the importance of RSNNs, developing end-to-end training algorithms remains a challenge. Inspired by the success of deep learning, a line of research [Wu et al., 2018, Shrestha and Orchard, 2018, Bauer et al., 2022] uses error-backpropagation [Rumelhart et al., 1985] with carefully chosen surrogate gradients to address the non-differentiability problem and achieve performance comparable to deep learning. However, incorporating surrogate gradients into RSNNs introduces two concerns. Algorithmically, the surrogate gradient leads to inherent inaccuracy in the descent direction [Li et al., 2021] and sensitivity to function scale selection [Zenke and Vogels, 2021]. At the implementation level, gradient-based training is incompatible with prominent neuromorphic devices [Pei et al., 2019, Davies et al., 2018, Mayr et al., 2019] due to the requirement of accessing the full network state over every timestep [Werbos, 1990]. Consequently, this raises a critical research question: _can we design a training method for RSNNs that bypasses the need for gradients without compromising performance?_
In response to this challenge, we observe that connection probability distribution information from brain connection maps across various species exhibit similarities [Haber et al., 2023] and can be used to construct large-scale neural computing models [Billeh et al., 2020, Schmidt et al., 2018]. Existing studies demonstrate that networks with binary connections alone can achieve high performance, rivaling that of weighted networks [Gaier and Ha, 2019, Frankle and Carbin, 2018, Malach et al., 2020]. Moreover, computing boolean connections relies primarily on integer arithmetic rather than resource-intensive floating-point operations, enabling simpler on-chip implementation and enhanced energy efficiency. Considering these features, we reformulate the architecture of RSNNs, where connections are sampled independently from parametric Bernoulli distributions, and adopt Natural Evolution Strategies (NES)[Wierstra et al., 2014] to optimize this parametric probability distribution, providing a scalable, inference-only, and effective algorithm for tuning parameters.
In this paper, we present the evolving connectivity (EC) framework for training RSNNs. The EC framework reformulates RSNNs as boolean connections with homogeneous weights and uses NES to evolve the connection probabilities of the network. To assess both effectiveness and efficiency, we conduct extensive experiments on locomotion tasks, a set of widely-used sequential decision-making tasks [Brockman et al., 2016, Freeman et al., 2021]. Experimental results demonstrate that our proposed EC method achieves performance comparable to deep recurrent neural networks while surpassing the asymptotic performance of surrogate gradients. Moreover, despite using a general-purpose graphics processing unit (GPGPU) not specifically optimized for our framework, the EC method yields a speed improvement of \(2\sim 3\times\) compared to directly evolving parameters using Evolution Strategies [Salimans et al., 2017] and exhibits better efficiency than surrogate gradients.
Our main contributions are summarized as follows:
* **Novel framework**: We propose a novel inference-only training framework for RSNNs by reformulating weight-tuning as connection probability searching through evolution-based algorithms.
* **High performance**: Our method can solve the complex 17-DoF humanoid locomotion task with RSNN, achieving performance on par with recurrent neural networks and outperforming gradient-trained RSNNs.
* **Hardware-friendly**: By producing RSNNs with sparse 1-bit boolean connections, as well as enabling inference-only training and high scalability, our method is highly compatible with neuromorphic devices. This compatibility holds promising potential for further energy-efficient applications of RSNNs.
## 2 Related Works
### Training Recurrent Spiking Neural Networks
The study of training algorithms for RSNNs endeavors to unravel the mechanisms that enable the brain to learn. For decades, biological plasticity rules, especially spike-timing-dependent plasticity (STDP) [Bi and Poo, 1998], have been considered a foundation of RSNN training [Diehl and Cook, 2015, Kheradpisheh et al., 2018, Mozafari et al., 2019]. Recently, with the success of deep learning, gradient-based approaches [Wu et al., 2018, Shrestha and Orchard, 2018, Bauer et al., 2022] have been predominantly utilized in training RSNNs. Slayer [Shrestha and Orchard, 2018] adopted an exponentially decaying surrogate function with a spike response model (SRM), while STBP [Wu et al., 2018] incorporated BPTT in multi-layer SNN training and proposed several surrogate function choices. Chen et al. [2022] trained an anatomically and neurophysiologically data-constrained RSNN and
demonstrated its robustness and versatility. However, gradient-based approaches face the new problem of being difficult to implement on neuromorphic devices. To address this limitation, E-prop (Bellec et al., 2020) utilizes eligibility traces to compute a truncated gradient, becoming an exemplary training algorithm on neuromorphic devices like Loihi2 (Davies et al., 2018) and SpiNNaker2 (Mayr et al., 2019), but it imposes strict restrictions on the temporal dynamics of SNN models and still lags behind surrogate gradient methods in performance. Our approach tackles the gradient-effectiveness dilemma from the outset by introducing an inference-only training framework.
### Weight-agnostic Neural Networks
Training a neural network typically involves assigning appropriate values to the network's weights. However, the weight-agnostic neural network (WANN) (Gaier and Ha, 2019) has demonstrated that a network's topology can be highly informative, similar to that of a weighted neural network. Additionally, research on the lottery ticket hypothesis (LTH) (Frankle and Carbin, 2018; Zhou et al., 2019; Ramanujan et al., 2020) suggests that an over-parameterized network contains an effective subnetwork, even when using its initial parameter weights. Several theoretical studies have also shown that a sufficiently large, randomly-weighted deep neural network contains a subnetwork capable of approximating any function (Malach et al., 2020; Fischer et al., 2022). Our approach, furthering the existing research on connection-encoded networks, proposes a framework that utilizes connection probability to parameterize the network.
### Deep Neuroevolution
Deep neuroevolution utilizes evolutionary algorithms for training deep neural networks. Natural Evolution Strategies (NES) (Wierstra et al., 2014) have laid the foundation for gradient estimation in this domain. A well-known variant, Evolution Strategies (ES) (Salimans et al., 2017), has been employed for training deep neural networks in sequential decision-making tasks. ES optimizes a single set of continuous network weights by applying Gaussian perturbations and updating the weights using the NES gradient estimator. Following the success of ES, numerous evolutionary algorithms have been proposed for training deep neural networks. For example, Such et al. (2018) showed that deep neural networks could be effectively trained using simple genetic algorithms that mutate weights. Furthermore, Conti et al. (2018) integrated exploration and novelty-seeking into ES to overcome local minima. While the majority of prior research has primarily focused on continuous parameters, our proposed framework presents a novel approach by concentrating on the search for connection probability distributions. This shift in perspective offers new possibilities for evolving recurrent spiking neural networks in a hardware-friendly manner.
## 3 Preliminaries: Recurrent Spiking Neural Networks
RSNN is a class of spiking neural networks which incorporate feedback connections. In this paper, we adopt a typical RSNN architecture from the reservoir network literature (Jaeger, 2001; Maass et al., 2002) for sequential tasks. It is worth noting that although we adopt a specific RSNN model as an example, our framework can be broadly applied to search for connectivity distributions in any type of RSNN, as it does not depend on network-specific assumptions and only relies on evaluating the network with a set of parameters.
According to Dale's law, our network consists of an excitatory neuron group and an inhibitory neuron group. Each neuron is modeled as a leaky integrate and fire (LIF) neuron. A neuron will fire a spike when its membrane potential \(u\) exceeds the threshold, and the membrane potential will hard-reset to \(0\). More specifically, our model defines the dynamics of membrane potential \(u\) and synaptic current \(c\) as follows:
\[\tau_{m}\frac{\mathrm{d}\textbf{u}^{(g)}}{\mathrm{d}t}=-\textbf{u}^{(g)}+R \textbf{c}^{(g)} \tag{1}\]
Where \(g=\{Exc,Inh\}\) denotes the excitatory and inhibitory group, respectively, and \(R\) denotes the resistance. The current input was modeled using exponential synapses to retain information for a short duration, commonly adopted in robotic tasks (Tang et al., 2020; Naya et al., 2021).
\[\frac{\text{d}\textbf{c}^{(g)}}{\text{d}t}=-\frac{\textbf{c}^{(g)}}{\tau_{syn}}+ \sum_{g_{j}}I_{g_{j}}\sum_{j}\textbf{W}_{ij}^{(g_{i}g_{j})}\delta(t-t_{j}^{s(g_{ j})})+\textbf{I}_{ext} \tag{2}\]
Where \(t_{j}^{s(g_{j})}\) denotes the spike time of neuron \(j\) in neuron group \(g_{j}\), \(\delta\) denotes Dirac delta function. \(I_{g}\) was set to define the connection strength of excitatory and inhibitory synapse respectively, where \(I_{Exc}>0\) and \(I_{Inh}<0\). Weight \(\textbf{W}^{(g_{i}g_{j})}\) was defined as a matrix with non-negative elements, connecting group \(g_{j}\) to group \(g_{i}\). Besides, the external input signal \(\textbf{I}_{ext}\) is extracted using linear projection of observation **x**.
Discretizing the LIF differential equations (Eq. 1 and 2) with \(\Delta t\) as a time-step, we could obtain the following difference equation for our RSNN model:
\[\textbf{c}^{(t,g)} =d_{c}\textbf{c}^{(t-1,g)}+\sum_{g_{j}}I_{g_{j}}\textbf{W}^{(g_{i }g_{j})}\textbf{s}^{(t-1,g_{j})}+\textbf{I}_{ext}^{(t,g)} \tag{3}\] \[\textbf{v}^{(t,g)} =d_{v}\textbf{u}^{(t-1,g)}+R\textbf{c}^{(t,g)}\] (4) \[\textbf{s}^{(t,g)} =\textbf{v}^{(t,g)}>\textbf{1}\] (5) \[\textbf{u}^{(t,g)} =\textbf{v}^{(t,g)}(\textbf{1}-\textbf{s}^{(t,g)}) \tag{6}\]
Where \(d_{c}=e^{-\frac{\Delta t}{\tau_{syn}}}\) and \(d_{v}=e^{-\frac{\Delta t}{\tau_{m}}}\) are two constant parameters. The output vector **o** is extracted using linear projection of neuron firing rates in a short time period \(\tau\), i.e.,
\[\textbf{o}^{(t)}=\sum_{\tau}k(\tau)\sum_{g}\textbf{W}_{out}^{(g)}\textbf{s}^{ (t-\tau,g)} \tag{7}\]
Where \(\textbf{W}_{out}^{(g)}\) denotes the output weight, and \(k\) is a function averaging the time period \(\tau\).
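For illustration, Eqs. (3)-(6) translate into the following NumPy sketch of a single simulation step. For brevity, the excitatory/inhibitory groups and the gains \(I_{g}\) are folded into one signed weight matrix, and \(d_{c}\), \(d_{v}\), and \(R\) are placeholder constants rather than the values used in the experiments.

```python
import numpy as np


def rsnn_step(u, c, s_prev, W, I_ext, d_c=0.9, d_v=0.95, R=1.0):
    """One discrete LIF update following Eqs. (3)-(6)."""
    c = d_c * c + W @ s_prev + I_ext   # exponential synapse, Eq. (3)
    v = d_v * u + R * c                # membrane integration, Eq. (4)
    s = (v > 1.0).astype(c.dtype)      # spike when threshold crossed, Eq. (5)
    u = v * (1.0 - s)                  # hard reset to zero, Eq. (6)
    return u, c, s
```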
## 4 Framework
In this section, we present our proposed Evolving Connectivity (EC) framework, which is depicted in Fig. 1. Our approach consists of three main steps: (1) reformulating the neural network architecture from weight-based parameterization to connection probability distribution, (2) employing the Natural Evolution Strategies (NES) method to optimize the reformulated parameter space, and (3) deterministically extracting the final parameters from the distribution.
Reformulation. A sparsely connected weight matrix can be decomposed into a weight matrix **w** and a connection mask \(\mathbf{\theta}\). Traditionally, we can use an Erdős-Rényi random matrix to describe the connection mask, in which connections are independently drawn from a Bernoulli distribution, i.e.,
Figure 1: Architecture of evolving connectivity (EC). The connectivity \(\mathbf{\theta}_{i}\) of the population is sampled from the global distribution \(B(\mathbf{\rho})\) and then evaluated in parallel. The RSNN consists of excitatory and inhibitory neurons, simulated using 1-bit firing calculations and the LIF model.
\[\mathbf{W}_{ij}=\mathbf{w}_{ij}\cdot\mathbf{\theta}_{ij},\mathrm{where}\ \mathbf{\theta}_{ij}\sim B(\mathbf{\rho}) \tag{8}\]
Inspired by the success of subnetworks in deep neural networks (Ramanujan et al., 2020), we can reformulate this in a connection framework. In this view, we aim to find a connection probability matrix, \(\mathbf{\rho}=(\rho_{ij})\), where each element represents the connection probability between two neurons, and set all weights \(\mathbf{w}_{ij}\) to be unit size, i.e.,
\[\mathbf{W}_{ij}=\mathbf{\theta}_{ij},\mathrm{where}\ \mathbf{\theta}_{ij}\sim B(\mathbf{ \rho}_{ij}). \tag{9}\]
Optimization. We optimize \(\mathbf{\rho}\) to maximize the expected performance metric function \(R(\cdot)\) across individual network samples drawn from the distribution, which can be expressed as the following objective function:
\[\mathbf{\rho}^{*}=\operatorname*{arg\,max}_{\mathbf{\rho}}J(\mathbf{\rho})= \operatorname*{arg\,max}_{\mathbf{\rho}}\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})} [R(\mathbf{\theta})] \tag{10}\]
To optimize this objective, we employ Natural Evolution Strategies (NES) (Wierstra et al., 2014). NES provides an unbiased Monte Carlo gradient estimation of the objective \(J(\mathbf{\rho})\), by evaluating the performance metric \(R_{i}\) on multiple individual samples \(\mathbf{\theta}_{i}\) drawn from \(B(\mathbf{\rho})\). Specifically, NES estimates the gradient of \(J(\mathbf{\rho})\) with respect to the parameters of the distribution \(\mathbf{\rho}\) by computing the expectation over samples from \(B(\mathbf{\rho})\):
\[\nabla_{\mathbf{\rho}}J(\mathbf{\rho}) =\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})}[\nabla_{\mathbf{\rho}} \log P(\mathbf{\theta}|\mathbf{\rho})R(\mathbf{\theta})] \tag{11}\] \[=\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})}[\frac{\mathbf{\theta}-\bm {\rho}}{\mathbf{\rho}(\mathbf{1}-\mathbf{\rho})}R(\mathbf{\theta})]\] (12) \[\approx\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{\theta}_{i}-\mathbf{\rho}} {\mathbf{\rho}(\mathbf{1}-\mathbf{\rho})}R_{i} \tag{13}\]
Thus, we obtain an inference-only estimate of the gradient by simply sampling and evaluating the metric. Gradient ascent can then be carried out with the estimated gradients, following the NES approach (Wierstra et al., 2014). Furthermore, as in Williams (1992), we scale the step size proportionally to the variance, i.e., \(\alpha=\eta\cdot\mathrm{Var}[B(\mathbf{\rho})]=\eta\cdot\mathbf{\rho}(\mathbf{1}-\mathbf{\rho})\), which cancels the denominator in (13), and obtain the update rule:
\[\mathbf{\rho}_{t}=\mathbf{\rho}_{t-1}+\alpha\nabla_{\mathbf{\rho}}J(\mathbf{\rho})\approx\bm {\rho}_{t-1}+\frac{\eta}{N}\sum_{i=1}^{N}{(\mathbf{\theta}_{i}-\mathbf{\rho})R_{i}} \tag{14}\]
In addition, all elements of \(\mathbf{\rho}\) are clipped to the interval \([\epsilon,1-\epsilon]\), where \(\epsilon\to 0^{+}\), to guarantee a minimal level of exploration within the search space.
Once a sufficient number of updates have been performed, the final parameter \(\mathbf{\theta}\) can be deterministically obtained as the parameter with maximum probability, which is subsequently used for deployment. In the context of a Bernoulli distribution, this process is equivalent to thresholding \(\mathbf{\rho}\) at \(0.5\) to produce a boolean matrix containing only \(\{0,1\}\) values.
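Putting the pieces together, one EC generation reduces to the following NumPy sketch. Here evaluate is a stand-in for rolling out an episode with connectivity \(\mathbf{\theta}_{i}\) and returning \(R_{i}\); the population size and step size are illustrative values, and the update matches Eq. (14) with raw returns.

```python
import numpy as np


def ec_generation(rho, evaluate, n_pop=64, eta=0.1, eps=1e-3,
                  rng=np.random.default_rng(0)):
    """One generation: sample theta_i ~ B(rho), evaluate, update via Eq. (14)."""
    thetas = (rng.random((n_pop,) + rho.shape) < rho).astype(np.float32)
    returns = np.array([evaluate(theta) for theta in thetas])
    r = returns.reshape((-1,) + (1,) * rho.ndim)
    rho = rho + (eta / n_pop) * np.sum((thetas - rho) * r, axis=0)
    return np.clip(rho, eps, 1.0 - eps)  # keep a minimal level of exploration


# After the final generation, the deployed connections are theta = (rho > 0.5).
```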
## 5 Properties of EC Framework
In this section, we further discuss the important properties of our proposed EC framework in implementation, thanks to leveraging the connection probability distribution.
Inference only. The most significant challenge in neuromorphic devices is the absence of an effective hardware-friendly learning algorithm (Li et al., 2023). Most neuromorphic chips, such as Loihi2 (Davies et al., 2018), Tianjic (Pei et al., 2019), TrueNorth (Akopyan et al., 2015), and SpiNNaker2 (Mayr et al., 2019), lack support for directly calculating error backpropagation, which is crucial for surrogate gradient methods. The inference-only EC framework enables an alternative method for training on these inference chips.
Scalable. The inference-only property of the EC framework implies no data dependence between evaluations, which allows for the distribution of sampled parameters \(\mathbf{\theta}_{i}\) across independent workers and the collection of their reported performance measures, making EC highly scalable. Moreover, the random seed for sampling the population can be transmitted across nodes instead of the connections \(\mathbf{\theta}_{i}\) to minimize communication overhead, facilitating scalar-only communication.
1-bit connections. EC employs 1-bit sparse connections throughout training and deployment, replacing the traditional floating-point weight matrix. Therefore, the 1-bit connections permit the use of more economical integer arithmetic instead of costly floating-point cores. This approach not only accelerates computations on devices like GPUs, but also holds promise for driving the creation of novel 1-bit-connection neuromorphic computing hardware.
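The storage saving is easy to see in NumPy: a boolean connection matrix packs into one bit per synapse, and synaptic integration reduces to integer operations. This only illustrates the principle, not the INT8 GPU kernels mentioned in the implementation details below.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.random((256, 256)) < 0.1   # boolean connection matrix
packed = np.packbits(theta, axis=1)    # 1 bit per connection, 8 per byte
spikes = rng.random(256) < 0.05        # boolean spike vector

# Integer-only synaptic integration for neuron 0: unpack its row and count
# coincidences of incoming connections and presynaptic spikes.
row = np.unpackbits(packed[0])[:256].astype(bool)
current = np.count_nonzero(row & spikes)
```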
## 6 Experiments
### Experimental Setups
Tasks. We focus on three robotic locomotion tasks in our experiments, Humanoid, Walker2d, and Hopper, as they are commonly used for sequential decision-making problems in the reinforcement learning domain (Brockman et al., 2016; Freeman et al., 2021). As illustrated in Fig. 2, these tasks involve controlling robots with varying degrees of freedom (DoF) to perform actions that maximize the return within a fixed-length episode \(T\). In the EC framework, the performance metric function can be defined as the expected return over episodes, \(R(\theta)=\mathbb{E}_{\tau\sim\pi_{\theta}}[\sum_{t=0}^{T}{r_{t}}]\), and a single evaluation \(R_{i}\) corresponds to the return of one episode using network parameter \(\theta_{i}\).
Baselines. To thoroughly evaluate the effectiveness of our framework and its advantages over prior methods, we compare our EC framework with deep RNNs, as well as with RSNNs trained with Surrogate Gradients (SG) and Evolution Strategies (ES).
For deep RNNs, we employ widely-used recurrent deep neural networks, specifically long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Cho et al., 2014), trained using Evolution Strategies (ES) (Salimans et al., 2017) as our baselines.
For RSNN, we employ the same structured but excitatory/inhibitory not separated RSNN trained with ES and Surrogate Gradient (SG). ES is directly applied to search the weight matrix \(\mathbf{W}\) of RSNNs, optimizing the performance metric in a gradient-free manner. In contrast, the Surrogate Gradient is combined with Proximal Policy Optimization (PPO) (Schulman et al., 2017), a prominent reinforcement learning method, to optimize the weight \(\mathbf{W}\) for sequential decision-making tasks where returns cannot be differentiated.
Implementation Details. Our EC framework and all baselines are implemented using the JAX library (Bradbury et al., 2018) and just-in-time compiled with the Brax physics simulator (Freeman et al., 2021) for efficient GPU execution. The population is vectorized automatically, and 1-bit firing calculations take advantage of INT8 arithmetic for performance optimization. As a result, the training process achieves over \(180,000\) frames per second on a single GPU. Each experiment's result is averaged over 3 independent seeds, with the standard deviation displayed as a shaded area. For detailed information on hyperparameters and hardware specifications, please refer to Appendix A.
### Performance Evaluation
Firstly, we compare RSNN trained with our EC framework (EC-RSNN) to deep RNNs. To ensure a fair comparison, we utilized the evolution-based training approach ES (Salimans et al., 2017) for deep RNNs, including ES-GRU and ES-LSTM.
Figure 2: Locomotion tasks illustrated from left to right: Humanoid (17-DoF), Walker2d (6-DoF), and Hopper (3-DoF).
Typically, SNNs face challenges in surpassing the performance of their DNN counterparts. However, as shown in the experimental results in Fig. 3, our EC-RSNN achieves competitive performance and training efficiency compared to deep RNNs, even surpassing ES-GRU and ES-LSTM on the Walker2d and Hopper tasks. In the complex 17-DoF Humanoid task, our EC-RSNN also outperforms ES-GRU. It is worth noting that, while ES-LSTM and ES-GRU use densely connected, full-precision (32-bit floating-point) weights and hard-coded gating mechanisms, EC-RSNN is constrained to 1-bit sparse weights and adheres to Dale's law for weight signs.
Next, comparing identically structured RSNNs, our EC-RSNN outperforms ES-RSNN, most significantly on the complex Humanoid and Walker2d tasks. We highlight two aspects that potentially account for the superior performance of our EC framework. On the one hand, schema theory [10] suggests that selection over a population of binary strings combines partial binary-string patterns (schemata) and thereby implicitly works on many solutions without evaluating them explicitly, leading to massive implicit parallelism. EC leverages the discrete \(\{0,1\}\)-string space and may therefore enable more efficient optimization. On the other hand, ES uses fixed-scale noise to perturb a single set of deterministic parameters, which often falls into wide, flat areas of the objective landscape and fails to make progress toward sharp maxima [13]. In contrast, EC operates over a probability distribution, providing the flexibility to adjust the variance of sensitive parameters and achieving more fine-grained optimization.
Finally, gradient updates cannot be directly converted into evolution generations, thus we only compare the final converged return for surrogate-gradient-trained RSNNs. We adopt the best parameter set of the SuperSpike surrogate function with \(\beta=10\) as the baseline. However, as shown in Fig. 3, SG-RSNN still exhibits limited performance on all tasks when compared to our EC-RSNN. Considering that the surrogate gradient approach is sensitive to the selection of the surrogate function and its parameters [14], we choose three commonly-used surrogate functions with different \(\beta\) parameters to thoroughly validate the performance: SuperSpike [14], the piecewise linear function proposed by Esser et al. [20], and the derivative of the Sigmoid function [14].
Figure 4: Surrogate Gradient on the Humanoid task, with several surrogate functions and \(\beta\) parameters. Performance is sensitive to the chosen surrogate function and its parameter \(\beta\).
Figure 3: Performance evaluation on locomotion tasks. SG-RSNN is plotted based on the final return, as gradient-based updates are not directly comparable to generations in evolution-based methods. The proposed Evolving Connectivity (EC) framework effectively solves the 17-DoF Humanoid locomotion task, demonstrating competitive performance with deep RNNs and outperforming RSNNs trained using both Surrogate Gradient and Evolution Strategies across all tasks.
As shown in Figure 4, the performance of our EC-RSNNs is consistently higher than that of SG-RSNNs across all surrogate functions and \(\beta\) parameters on the complex 17-DoF Humanoid task. One possible explanation for the performance of SG-RSNN is that, since RSNNs have both explicit and implicit recurrence, the choice of surrogate function may determine whether gradients vanish or explode (Zenke and Vogels, 2021). Moreover, the gradient approximation of SG is inaccurate and may deviate from the steepest descent direction (Li et al., 2021). In contrast, our EC framework leverages unbiased Monte-Carlo gradient estimation, which approaches the optimal direction given that the population is sufficiently large (Zhang et al., 2017).
### Efficiency Comparison
In this section, we compare the computational efficiency of our proposed EC framework with other baselines, as shown in Figure 5. To ensure a fair comparison, we evaluate the EC and baseline methods using the same wall-clock computation time and implement them on identical hardware.
As expected, EC-RSNN exhibits slower training than deep RNNs due to the higher complexity of RSNNs, which necessitates simulating multiple timesteps to compute firing and integrate differential equations. Additionally, the deep RNN models employed in this study, specifically LSTM and GRU, are well-established and highly optimized for GPU implementation. Nevertheless, within the same computation time, the performance achieved by EC-RSNN remains competitive with deep RNNs.
Furthermore, we execute both Evolution Strategies (ES) and EC for 1,000 generations using the same population size and RSNN architecture. Our EC-RSNN evaluates the population employing 1-bit discrete connections, while ES-RSNN utilizes 32-bit floating-point representations for continuous weights. Experimental results have shown that our EC-RSNN achieves a speedup of approximately \(2\sim 3\times\) over ES-RSNN. This finding emphasizes the value of 1-bit connections within the proposed framework. Utilizing integer arithmetic typically results in higher throughput than floating-point arithmetic across most accelerators, and incorporating smaller data types reduces both memory requirements and memory access time. It is also worth mentioning that the 1-bit connections are implemented using INT8 on GPU, which is not fully optimized. By implementing the framework on hardware supporting smaller data types, connection size could be further reduced by up to \(8\times\), enabling more significant acceleration and cost reduction.
Finally, our EC-RSNN demonstrates faster convergence compared to SG-RSNN. This can be attributed to the fact that EC-RSNN is an inference-only method with 1-bit connections, requiring only a single forward pass using integer arithmetic, while SG-RSNN demands both a forward and a backward pass employing computationally-intensive floating-point operations.
## 7 Conclusion
In this study, we present the innovative Evolving Connectivity (EC) framework, a unique inference-only approach for training Recurrent Spiking Neural Networks (RSNNs) with 1-bit connections. The key attributes of the EC framework, such as its inference-only nature and scalability, render it particularly suitable for training RSNNs on neuromorphic devices.
Figure 5: Efficiency comparison in locomotion tasks. Evolution-based approaches (EC, ES) are executed for 1,000 generations, while surrogate gradient (SG) runs for 250,000 gradient steps to attain a comparable run-time to EC, providing a fair comparison. When training RSNNs, EC attains a \(2\sim 3\times\) speedup over ES and demonstrates faster convergence than SG.
The use of 1-bit connections significantly reduces memory requirements and computational cost, facilitating faster training on GPUs and paving the way for additional cost reductions in neuromorphic chip production.
We performed extensive experiments on a variety of intricate locomotion tasks, showcasing the competitive performance of our proposed EC framework in comparison to deep RNNs like LSTM and GRU. Moreover, the EC framework outperforms other RSNN training methods such as Evolution Strategies and Surrogate Gradient, both in terms of performance and computational efficiency.
In future research, investigators may consider implementing the EC framework on neuromorphic platforms to further amplify the capabilities of RSNNs in energy-efficient and scalable applications.
## 8 Limitations
Evolutionary algorithms demand the storage of \(N\) distinct parameter sets for population evaluation, leading to a space complexity of \(\mathrm{O}(N|\theta|)\). In contrast, gradient-based approaches utilize a single parameter set but necessitate the storage of intermediate results at each timestep, resulting in a space complexity of \(\mathrm{O}(NHS+|\theta|)\), where \(H\) corresponds to the number of BPTT timesteps and \(S\) represents the size of the intermediate results. This gives rise to a trade-off: evolutionary methods offer greater memory efficiency for tasks featuring long time horizons, whereas gradient-based techniques require less memory for larger parameter sizes and shorter time horizons. Recognizing this trade-off is crucial in practical applications. Although our proposed EC framework, as an evolutionary algorithm, retains the same space complexity and trade-off, it achieves a constant-factor reduction by storing 1-bit connections.
## 9 Discussions
Neuromorphic hardware. One primary bottleneck in the development of neuromorphic hardware is the lack of effective on-chip learning algorithms for building applications that are competitive with deep learning approaches. This paper proposes the novel EC framework, which circumvents the requirement for gradients and demonstrates efficacy on locomotion tasks, providing a potential solution to the on-chip learning challenge. Typically, the demand for computing devices falls into two categories: cloud and edge. In the cloud, our proposed EC framework supports large-scale learning, while at the edge, it enables energy-efficient applications. Therefore, our framework offers a potential approach to further building neuromorphic applications.
Another practical challenge lies in the trade-off between numeric precision and cost. We suggest that it is possible to drastically reduce connections from floating point to 1-bit, providing a novel design principle for the next generation of neuromorphic hardware. Therefore, our method holds the potential to significantly decrease the manufacturing and energy costs of neuromorphic hardware.
Neuroscience. Our EC framework introduces a novel method for neuroscience research. First, as opposed to the toy tasks or simulated signals often studied in previous neural modeling work, we employed an RSNN in a complex, real-world-like locomotion task. Additionally, control signals in neuroscience, such as the 'GO' and 'NOGO' signals in the basal ganglia, can be easily integrated into the task by concatenating them to the environment observation vector, enabling the creation of novel neuroscientific tasks. Our work lays the foundation for further investigation of decision-making processes and motor control.
Moreover, our framework provides a novel type of data to analyze: neuron-to-neuron level connection probability. Connection probability is one of the most fundamental properties in brain-wide connectomes. However, obtaining neuron-to-neuron connection probability from the whole mammalian brain is experimentally implausible due to limitations in current connectomic technology. Our work provides in silico connection data for further analysis, including covariances, motifs, clustered engrams, and dimensional properties.
Finally, our framework is capable of incorporating neuroanatomical and neurophysiological data to construct a novel neurosimulation model, since our framework is able to train arbitrary models in principle. Neuroscientists have discovered that connection probability between two neurons is determined by various factors, including spatial distance, receptive field, and neuron types. Our framework can leverage these findings and conduct in silico experiments with data-driven models. |
2310.17394 | PSP: Pre-Training and Structure Prompt Tuning for Graph Neural Networks | Graph Neural Networks (GNNs) are powerful in learning semantics of graph
data. Recently, a new paradigm "pre-train and prompt" has shown promising
results in adapting GNNs to various tasks with less supervised data. The
success of such paradigm can be attributed to the more consistent objectives of
pre-training and task-oriented prompt tuning, where the pre-trained knowledge
can be effectively transferred to downstream tasks. Most existing methods are
based on the class prototype vector framework. However, in the few-shot
scenarios, given few labeled data, class prototype vectors are difficult to be
accurately constructed or learned. Meanwhile, the structure information of
graph is usually exploited during pre-training for learning node
representations, while neglected in the prompt tuning stage for learning more
accurate prototype vectors. In addition, they generally ignore the impact of
heterophilous neighborhoods on node representation and are not suitable for
heterophilous graphs. To bridge these gaps, we propose a novel pre-training and
structure prompt tuning framework for GNNs, namely PSP, which consistently
exploits structure information in both pre-training and prompt tuning stages.
In particular, PSP 1) employs a dual-view contrastive learning to align the
latent semantic spaces of node attributes and graph structure, and 2)
incorporates structure information in prompted graph to construct more accurate
prototype vectors and elicit more pre-trained knowledge in prompt tuning. We
conduct extensive experiments on node classification and graph classification
tasks to evaluate the effectiveness of PSP. We show that PSP can lead to
superior performance in few-shot scenarios on both homophilous and
heterophilous graphs. The implemented code is available at
https://github.com/gqq1210/PSP. | Qingqing Ge, Zeyuan Zhao, Yiding Liu, Anfeng Cheng, Xiang Li, Shuaiqiang Wang, Dawei Yin | 2023-10-26T13:46:18Z | http://arxiv.org/abs/2310.17394v2 | # Enhancing Graph Neural Networks with Structure-Based Prompt
###### Abstract.
Graph Neural Networks (GNNs) are powerful in learning semantics of graph data. Recently, a new paradigm "pre-train & prompt" has shown promising results in adapting GNNs to various tasks with less supervised data. The success of such a paradigm can be attributed to the more consistent objectives of pre-training and task-oriented prompt tuning, where the pre-trained knowledge can be effectively transferred to downstream tasks. However, an overlooked issue of existing studies is that the _structure information_ of the graph is usually exploited during pre-training for learning node representations, but neglected in the prompt tuning stage for learning task-specific parameters. To bridge this gap, we propose a novel structure-based prompting method for GNNs, namely SAP, which consistently exploits structure information in both pre-training and prompt tuning stages. In particular, SAP 1) employs a dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure, and 2) incorporates structure information in the prompted graph to elicit more pre-trained knowledge in prompt tuning. We conduct extensive experiments on node classification and graph classification tasks to show the effectiveness of SAP. Moreover, we show that SAP can lead to better performance in more challenging few-shot scenarios on both homophilous and heterophilous graphs.
Graph neural networks, pre-training, prompt, few-shot learning
## 1. Introduction
Graph Neural Networks (GNNs) have been widely applied in a variety of fields, such as social network analysis [(9)], drug discovery [(18)], financial risk control [(35)], and recommender systems [(38)], where both structural and attribute information are learned via message passing on the graphs [(15)]. Recently, extensive efforts [(10; 19; 41)] have been made to design graph pre-training methods, which are further fine-tuned for various downstream tasks. Nevertheless, the inconsistent objectives of pre-training and fine-tuning often lead to catastrophic forgetting during downstream adaptation [(42)], especially when the downstream supervised data is so scarce that it is easily over-fitted.
To bridge this gap, many prompt tuning methods for GNNs [(8; 21; 30; 31; 42)] have also been proposed to achieve remarkable performance in few-shot learning tasks on graphs. In particular, the key insight of these methods is to freeze the pre-trained model (i.e., GNN) and introduce extra task-specific parameters, which learns to exploit the pre-trained knowledge for downstream tasks. For example, GPPT [(30)] and GraphPrompt [(21)] pre-train a GNN model based on the link prediction task, then they take the class prototype vectors and the readout function as parameters respectively to reformulate the downstream node/graph classification task into the same format as link prediction.
Despite the initial success, a clear limitation in these existing models is that _graph structure_, as the key ingredient in pre-training, is under-explored in prompt tuning, which limits their effectiveness in unleashing pre-trained knowledge. In particular, their task-specific parameters (e.g., class prototype vectors or readout functions) are usually learned only with few labeled data. They fail to consider the relationships between the task and the massive unlabeled data, which could also provide rich pre-trained knowledge that is very useful for the task at hand. This is even more important when the labeled data is scarce, e.g., few-shot node classification.
As shown in Figure 1 (b), existing methods that directly use the average embeddings of labeled nodes/graphs as the class prototype representations can easily be undermined by noisy/outlier data when labeled nodes are scarce. In contrast, facilitating class representation learning with structural connections between class prototype vectors and unlabeled nodes could help solve this issue (as shown in Figure 1 (d)).
In this paper, we propose a novel **S**tructure-**b**A**sed **P**rompting (SAP) method for GNNs, which unifies the objectives of pre-training and prompt tuning for GNNs and integrates structural information in both the pre-training and prompt tuning stages. For pre-training, we employ dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure. Specifically, one view is implemented with an MLP, which only uses node attributes of the graph. The other view adopts a GNN to leverage both node attributes and structural information of the graph. For downstream prompt tuning, we fix the learned parameters of the MLP and GNN from the pre-training stage, add class prototype vectors as new nodes to the raw graph, and introduce structural connections between prototype vectors and original nodes as prompts to learn more accurate prototype vectors (see Figure 1 (c)). Note that the weights associated with these connections are parameters to be learned. In the training phase, we use representations of labeled nodes/graphs calculated by the MLP as anchors, and representations of prototype vectors obtained through the GNN as positive/negative samples. Specifically, the prototype vector in the same class as the anchor is considered the positive sample, while prototype vectors in other classes serve as negative samples. Then, contrastive learning between nodes/graphs (MLP-based view) and prototype vectors (GNN-based view) is performed to learn the prompt parameters. As a result, we unify the objectives of pre-training and prompt tuning. After prompt tuning, the edges between nodes and their corresponding class prototypes are learned to have higher weights, as shown in Figure 1(d). We also experimentally show the results in Figure 6 of Section 5.5. Based on the learned weights, the GNN formulates each prototype vector's embedding by weighted-aggregating information from its neighboring nodes, i.e., all the nodes in the raw graph. Finally, in the testing stage, node/graph classification can be conducted by comparing the similarity of representations between the node/graph and the prototype vectors.
Compared with conventional graph prompt tuning methods, our method is more effective in learning better prototype vectors, as we leverage both labeled nodes and massive unlabeled nodes in the graph, which is particularly useful in few-shot scenarios. We further highlight that our prompt tuning method is applicable to both homophilous and heterophilous graphs. First, node/graph representations computed from the MLP-based view are not affected by structural heterophily. Second, prototype vectors calculated from the GNN-based view are based on the weights learned in structure prompt tuning, which takes all nodes in the raw graph as neighbors and learns to assign large (small) weights to those in the same (different) class. As such, the computation of prototype vectors is not affected by the heterophily of the graph. To summarize, our main contributions in this paper are:
* We propose an effective structure-based prompting method SAP for GNNs, which unifies the objectives of pre-training and prompt tuning.
* We present a novel prompt tuning strategy, which introduces a learnable structure prompt to enhance model performance in few-shot learning tasks on both homophilous and heterophilous graphs.
* We extensively demonstrate the effectiveness of SAP with different benchmark datasets on both node classification and graph classification. In particular, we vary the number of labeled training data and show that SAP can lead to better performance in challenging few-shot scenarios.
## 2. Related Work
### Graph Neural Networks
GNNs have demonstrated powerful expressiveness in many graph-based applications (Gan et al., 2016; Gan et al., 2016; Gan et al., 2016; Li et al., 2017). Modern GNNs typically follow a message-passing scheme, which combines the attribute information and structural information of the graph to derive a low-dimensional embedding of a target node by aggregating messages from its context nodes. Many effective neural network architectures have been proposed, such as the graph attention network (GAT) (Shen et al., 2016), graph convolution network (GCN) (Gan et al., 2016), and GraphSAGE (Gan et al., 2016).
### Graph Pre-training
Inspired by the remarkable achievements of pre-trained models in Natural Language Processing (NLP) (Gan et al., 2016; Li et al., 2017; Li et al., 2017) and Computer Vision (CV) (Gan et al., 2016; Li et al., 2017; Li et al., 2017), graph pre-training (Kang et al., 2016) emerges as a powerful paradigm that leverages self-supervision on label-free graphs to learn intrinsic graph properties. Some effective and commonly-used pre-training strategies include node-level comparison (Shen et al., 2016), edge-level pretext (Shen et al., 2016), and graph-level contrastive learning (Shen et al., 2016; Li et al., 2017). Recently,
Figure 1. The construction of class prototype vectors. The colored areas contain the labeled nodes for training. The circles represent nodes, and the triangles represent class prototype vectors for node classification task. The solid black lines and gray dashed lines denote the original edges in the graph and the new weighted edges, respectively. For each node, the dashed line in red or green denotes the edges with the largest weight to the class prototype vector.
there are also some newly proposed pre-training methods. For example, GCC (Zhou et al., 2017) leverages contrastive learning to capture the universal network topological properties across multiple networks and transfers the learned prior knowledge to downstream tasks. GPT-GNN (Yang et al., 2018) introduces a self-supervised attributed graph generation task to pre-train GNN models that can capture the structural and semantic properties of the graph. L2P-GNN (Yang et al., 2019) utilizes meta-learning to learn the fine-tuning strategy during the pre-training process. In summary, there exists a diverse range of pre-training strategies for GNN models, each characterized by unique pre-training objectives. However, these approaches do not consider the gap between pre-training and downstream objectives, which limits their generalization ability to handle different tasks.
### Prompt-based Learning
The training strategy "pre-train & fine-tune" is widely used to adapt pre-trained models to specific downstream tasks. However, this strategy ignores the inherent gap between the objectives of pre-training and diverse downstream tasks, where the knowledge learned via pre-training could be forgotten or ineffectively leveraged for downstream tasks, leading to poor performance.
To bridge this gap, natural language processing proposes a new paradigm (Zhou et al., 2017), namely "pre-train & prompt". These methods freeze the parameters of the pre-trained models and introduce additional learnable components in the input space, thereby enhancing the compatibility between inputs and pre-trained models. On graph data, there are a handful of studies that adopt prompt tuning to learn more generalizable GNNs. GPPT (Zhou et al., 2017) relies on edge prediction as the pre-training task and reformulates the downstream task as edge prediction by introducing task tokens for node classification. GraphPrompt (Yang et al., 2018) proposes a unified framework based on subgraph similarity and link prediction, hinging on a learnable prompt to actively guide downstream tasks using task-specific aggregation in readout function. SGL-PT (Yang et al., 2019) unifies the merits of generative and contrastive learning through the asymmetric design as pre-training, and computes class prototype vectors via supervised prototypical contrastive learning (SPCL) (Shen et al., 2017; Wang et al., 2018). GPF (Chen et al., 2018) extends the node embeddings with additional task-specific prompt parameters, and can be applied to the pre-trained GNN models that employ any pre-training strategy. ProG (Yang et al., 2019) reformulates node-level and edge-level tasks to graph-level tasks, and introduces the meta-learning technique to the graph prompt tuning study.
Despite their success, we observe that most of them utilize the _structure information_ in pre-training, while ignoring it in the downstream prompt tuning stage. This restricts their ability to fully utilize the pre-trained knowledge stored in the entire graph. Moreover, their task-specific parameters are usually learned with only a few labeled data, leading to poor performance in more challenging few-shot scenarios. In this paper, our proposed method SAP integrates structure information in both the pre-training and prompt tuning stages, and achieves superior performance under the few-shot setting on both node classification and graph classification tasks.
## 3. Preliminary
**Graph.** We denote a graph as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{i}\}_{i=1}^{N}\) is a set of \(N\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of edges. We also define \(\mathbf{A}\in\mathbb{R}^{N\times N}\) as the adjacency matrix of \(\mathcal{G}\), where \(A_{ij}=1\) if \(v_{i}\) and \(v_{j}\) are connected in \(\mathcal{G}\), and \(A_{ij}=0\) otherwise. Each node in the graph is associated with an \(F\)-dimensional feature vector \(\mathbf{z}_{i}\in\mathbb{R}^{1\times F}\), and the feature matrix of all nodes is defined as \(\mathbf{X}\in\mathbb{R}^{N\times F}\).
**Research problem.** In this paper, we investigate the problem of graph pre-training and prompt tuning, which learns representations of graph data via pre-training, and transfer the pre-trained knowledge to solve downstream tasks, i.e., node classification and graph classification. Moreover, we further consider the scenarios where downstream tasks are given limited supervision, i.e., \(k\)-shot classification. For each class, only \(k\) labeled samples (i.e., nodes or graphs) are provided as training data.
**Prompt tuning of pre-trained models.** Given a pre-trained model, a set of learnable prompt parameters \(\theta\) and a labeled task dataset \(\mathcal{D}\), we fix the parameters of the pre-trained model and only optimize \(\theta\) with \(\mathcal{D}\) for the downstream graph tasks.
## 4. Proposed Method
In this paper, we propose a novel structure-based prompting method for GNNs, namely SAP, which unifies the objectives of pre-training and prompt tuning for GNNs and integrates structure information in both the pre-training and prompt tuning stages to achieve better performance in more challenging few-shot scenarios. The overall framework of SAP is shown in Figure 2.
### Graph Pre-training
For graph data, both attribute and structural information are critical for revealing the underlying semantics of graph during the pre-training phase. As such, we design a dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structures. The top part of Figure 2 shows the dual-view pre-training paradigm. In particular, one view is implemented with MLP, which only uses node attributes in the graph, while the other adopts GNN to leverage both node attributes and structural information of the graph. More formally, we define the node representations computed by the two views respectively as
\[\mathbf{Z}^{(1)}=\mathrm{MLP}(\mathbf{X})\quad\text{ and }\quad\mathbf{Z}^{(2)}= \mathrm{GNN}(\mathbf{X},\mathbf{A}), \tag{1}\]
where \(\mathbf{Z}^{(1)}\in\mathbb{R}^{N\times D}\) and \(\mathbf{Z}^{(2)}\in\mathbb{R}^{N\times D}\) have the same latent dimensionality \(D\).
To optimize the MLP and GNN, we leverage a contrastive loss function \(\mathcal{L}_{pre}\) to maximize the similarity between the two representations of the same node, denoted as \(\mathbf{z}_{i}^{(1)}\) and \(\mathbf{z}_{i}^{(2)}\) for node \(v_{i}\). For an anchor \(\mathbf{z}_{i}^{(1)}\), other node representations \(\mathbf{z}_{j}^{(2)}\) are considered as negative samples. More specifically, we formulate the loss as the normalized temperature-scaled cross entropy loss (Dai et al., 2018) as
\[\mathcal{L}_{pre}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{z}_{i}^{(2)})/\tau)}{\sum_{j=1,j\neq i}^{N}\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{z}_{j}^{(2)})/\tau)}, \tag{2}\]
where \(N\) is the number of nodes, \(\tau\) is a temperature parameter, and \(\mathrm{sim}(\cdot)\) is implemented with cosine similarity.
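For concreteness, the following PyTorch sketch instantiates the dual-view objective of Equations 1-2. The two-layer encoders, the dense normalized adjacency, and the cross-entropy form of the loss (which keeps the positive pair in the denominator, a common simplification of Equation 2) are our illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPView(nn.Module):  # attribute-only view of Eq. (1)
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, out_dim))

    def forward(self, X):
        return self.net(X)

class GCNView(nn.Module):  # attribute + structure view of Eq. (1)
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, X, A_hat):  # A_hat: normalized dense adjacency (N x N)
        return A_hat @ self.lin2(F.relu(A_hat @ self.lin1(X)))

def dual_view_loss(Z1, Z2, tau=0.5):  # Eq. (2)
    Z1 = F.normalize(Z1, dim=1)       # cosine similarity via unit vectors
    Z2 = F.normalize(Z2, dim=1)
    logits = Z1 @ Z2.t() / tau        # N x N similarity matrix
    targets = torch.arange(Z1.size(0))  # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage with random data
N, F_in, D = 100, 32, 16
X = torch.randn(N, F_in)
A_hat = torch.softmax(torch.randn(N, N), dim=1)  # stand-in for a real graph
mlp, gnn = MLPView(F_in, 64, D), GCNView(F_in, 64, D)
loss = dual_view_loss(mlp(X), gnn(X, A_hat))
loss.backward()
```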
In general, the dual-view contrastive pre-training can exploit both the attribute and structure information of the graph to encode generalizable knowledge in the output node embeddings. In the next subsection, we introduce graph structure prompt tuning, which leverages such knowledge in downstream classification tasks.
### Graph Structure Prompt Tuning
Next, we demonstrate how we freeze the pre-trained model and adapt it to different downstream tasks on graph. In particular, we propose a novel method, namely structure prompt tuning, which considers the structure relationships between the graph data and the task at hand. Compared to existing graph prompt tuning methods, the structure relationships in our method allow the task to more effectively leverage the pre-trained knowledge embedded in the graph data. In the following, we elaborate the structure prompt tuning method on two representative tasks, i.e., node classification and graph classification.
#### 4.2.1. Node Classification
The overall framework of our method is similar to previous prototype-based methods (Zhou et al., 2017; Zhou et al., 2017), which learns a prototype embedding vector \(\mathbf{p}_{c}\) for each node class \(c\in\{1,2,...,C\}\). In particular, our method comprises three steps: 1) structure prompting, 2) prototype initialization, and 3) prompt tuning.
**Step 1: Structure prompting.** For all the class prototypes, the main idea of our method is to consider them as virtual nodes and connect them to all the original nodes \(\mathcal{V}\) in the graph \(\mathcal{G}\), as shown in Figure 1(c). More specifically, we add a total of \(C\) nodes (denoted as \(\mathcal{P}=\{p_{c}\}_{c=1}^{C}\)) and \(N\times C\) weighted edges (denoted as \(\mathcal{W}\)) to construct a prompted graph \(\mathcal{G}^{\prime}\) as
\[\mathcal{G}^{\prime}=(\mathcal{G}\cup\mathcal{P},\mathcal{E}\cup\mathcal{W}). \tag{3}\]
**Step 2: Prototype initialization.** Next, we define the attributes and edge weights for the prototype nodes. In particular, we simply define the attributes for each prototype \(p_{c}\) as the averaged attribute vector of its corresponding labeled nodes, i.e.,
\[\mathbf{x}_{c}=\frac{1}{|\mathcal{D}_{c}|}\sum_{(v_{i},c)\in\mathcal{D}_{c}}\mathbf{x}_{i}. \tag{4}\]
For the edges that connect \(\mathcal{P}\) and \(\mathcal{V}\), we initialize their weights \(\mathbf{W}\in\mathbb{R}^{N\times C}\) as
\[\mathbf{W}=\mathbf{Z}^{(2)}\mathbf{p}^{\prime T}. \tag{5}\]
Here, we introduce a surrogate prototype embedding matrix \(\mathbf{p}^{\prime}\), where each row vector \(\mathbf{p}_{c}^{\prime}\) of \(\mathbf{p}^{\prime}\) is aggregated from its labeled nodes, i.e.,
\[\mathbf{p}_{c}^{\prime}=\frac{1}{|\mathcal{D}_{c}|}\sum_{(v_{i},c)\in\mathcal{D}_{c}}\mathbf{z}_{i}^{(2)}. \tag{6}\]
During the subsequent prompt tuning step, the weight matrix \(\mathbf{W}\) is considered as task-specific parameters to be learned.
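The following is a minimal sketch of Steps 1-2, assuming the frozen pre-trained embeddings \(\mathbf{Z}^{(2)}\) are already available; the dense tensors and function signature are our assumptions for clarity, with names mirroring Equations 4-6.

```python
import torch

def init_prompt(X, Z2, labels, train_mask, C):
    """Prototype features X_P (Eq. 4) and initial edge weights W (Eq. 5-6)."""
    X_P = torch.zeros(C, X.size(1))
    P_surr = torch.zeros(C, Z2.size(1))
    for c in range(C):
        idx = train_mask & (labels == c)   # labeled nodes D_c of class c
        X_P[c] = X[idx].mean(dim=0)        # Eq. (4): mean node attributes
        P_surr[c] = Z2[idx].mean(dim=0)    # Eq. (6): surrogate prototypes
    W = Z2 @ P_surr.t()                    # Eq. (5): N x C edge weights
    return X_P, W
```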
Figure 2. Overall framework of SAP. Top: pre-training. Middle: prompt tuning for node classification. Bottom: prompt tuning for graph classification.
**Step 3: Prompt tuning.** We conduct prompt tuning on the prompted graph \(\mathcal{G}^{\prime}\) using the same form of loss function as in Equation 2, while keeping the node representations fixed and only optimizing the prototype embeddings. Formally, the prompt tuning loss is defined as the following contrastive loss:
\[\mathcal{L}_{pro}=-\frac{1}{|\mathcal{D}|}\sum_{(v_{i},c)\in\mathcal{D}}\log\frac{\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{p}_{c})/\tau)}{\sum_{c^{\prime}=1,c^{\prime}\neq c}^{C}\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{p}_{c^{\prime}})/\tau)}, \tag{7}\]
where \(\mathbf{p}_{c}\) is parameterized by \(\mathbf{W}\) as
\[\mathbf{P}=\text{GNN}([\mathbf{X},\mathbf{X}_{P}],[\mathbf{A},\mathbf{W}]). \tag{8}\]
Here, \(\mathbf{X}\) represents the original node features in graph \(\mathcal{G}\), \(\mathbf{X}_{P}\in\mathbb{R}^{C\times F}\) represents the prototype features, and \([\mathbf{A},\mathbf{W}]\in\mathbb{R}^{N\times(N+C)}\) represents the adjacency matrix of \(\mathcal{G}^{\prime}\). Notably, unlike conventional methods that directly consider \(\mathbf{P}\) as learnable parameters, we parameterize \(\mathbf{P}\) with their structure connections with the graph data, i.e., the added edges \(\mathbf{W}\). In other words, only \(\mathbf{W}\) would be optimized as parameters when minimizing \(\mathcal{L}_{pro}\), which learns to aggregate pre-trained knowledge from all the nodes in the graph to formulate prototype embeddings.
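The sketch below tunes only \(\mathbf{W}\) while both encoders stay frozen. To keep it self-contained we use randomly generated embeddings, and we approximate the GNN pass of Equation 8 with a single softmax-weighted aggregation over all nodes; this simplification, the optimizer settings, and all sizes are our assumptions rather than the released code.

```python
import torch
import torch.nn.functional as F

N, D, C, tau = 200, 16, 4, 0.5
Z1 = torch.randn(N, D)                     # frozen MLP-view embeddings
Z2 = torch.randn(N, D)                     # frozen GNN-view embeddings
labels = torch.randint(0, C, (N,))
train_mask = torch.zeros(N, dtype=torch.bool)
train_mask[:3 * C] = True                  # a small labeled set (3-shot)

W = torch.nn.Parameter(torch.randn(N, C))  # the only trainable parameters
opt = torch.optim.Adam([W], lr=1e-2)

for step in range(100):
    attn = torch.softmax(W, dim=0)         # normalize edge weights per class
    P = attn.t() @ Z2                      # C x D prototypes (stand-in for Eq. 8)
    z = F.normalize(Z1[train_mask], dim=1)
    p = F.normalize(P, dim=1)
    loss = F.cross_entropy(z @ p.t() / tau, labels[train_mask])  # Eq. (7)
    opt.zero_grad()
    loss.backward()
    opt.step()
```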
#### 4.2.2. Graph Classification
Our method can also be adopted for graph classification, with some changes made in each step. In **structure prompting**, for each graph instance \(\mathcal{G}_{i}\) and a prototype node \(p_{c}\), the added edges between any \(v_{i}\in\mathcal{V}_{i}\) and \(p_{c}\) share the same weight. As such, the prompt tuning of graph classification also has \(N\times C\) parameters, where \(N\) denotes the number of graphs here. In **prototype initialization**, we further introduce a mean-based readout function on the node-level representations \(\mathbf{Z}^{(1)}\) and \(\mathbf{Z}^{(2)}\) to compute the graph-level representations \(\mathbf{S}^{(1)}\) and \(\mathbf{S}^{(2)}\), respectively, as shown in Figure 2. Then, we can use Equations 5 and 6 to initialize \(\mathbf{W}\) with \(\mathbf{Z}^{(2)}\) replaced by \(\mathbf{S}^{(2)}\). In **prompt tuning**, we likewise replace \(\mathbf{Z}^{(1)}\) with \(\mathbf{S}^{(1)}\) in Equation 7 for optimizing graph-level classification tasks.
#### 4.2.3. Remarks
In our proposed SAP method, it is worth noting that the weights of the pre-trained model are frozen for downstream tasks, and the prompt tuning is parameterized by the learnable adjacency matrix \(\mathbf{W}\in\mathbb{R}^{N\times C}\). Compared with most existing studies where the prototype vectors are directly optimized on few labeled data, this allows the prototype vectors to aggregate pre-trained knowledge from massive unlabeled nodes for more effective task adaptation.
### Inference
In the inference stage, classification is performed by comparing the similarity of representations between node/graph (from MLP-based view) and prototype vectors (from GNN-based view) from different classes. The class corresponding to the prototype vector with the largest similarity is taken as the final prediction of the node/graph. We use node classification as an example to explain in detail. By comparing the node representation with each class prototype vector \(\mathbf{p}_{c}\), we can get the predicted class probability
\[p(c|v_{i})=\frac{\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{p}_{c})/\tau)}{\sum_{c^{\prime}=1}^{C}\exp(\mathrm{sim}(\mathbf{z}_{i}^{(1)},\mathbf{p}_{c^{\prime}})/\tau)}, \tag{9}\]
where the highest-scored class is chosen as the prediction.
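Continuing the tuning sketch above, inference reduces to a similarity lookup against the learned prototypes; the temperature and normalization choices are carried over from that sketch as assumptions.

```python
import torch.nn.functional as F

def predict(Z1, P, tau=0.5):
    """Eq. (9): class probabilities from node-prototype similarities."""
    sims = F.normalize(Z1, dim=1) @ F.normalize(P, dim=1).t()
    probs = (sims / tau).softmax(dim=1)   # softmax over the C prototypes
    return probs.argmax(dim=1), probs

preds, probs = predict(Z1, P.detach())    # Z1, P from the tuning sketch
```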
## 5. Experiments
In this section, we conduct extensive experiments on node classification and graph classification tasks with 10 benchmark datasets to evaluate our proposed SAP.
### Experimental Setup
**Datasets.** We evaluate the performance of SAP using various benchmark datasets with diverse properties, including homophilous graphs: citation networks (Krizhevsky et al., 2017) (i.e., Cora, CiteSeer, PubMed) and ogbn-arxiv (Zhu et al., 2017); heterophilous graphs: Chameleon (Zhu et al., 2017) and Actor (Zhu et al., 2017); and graph classification datasets: ENZYMES (Zhu et al., 2017), PROTEINS (Zhu et al., 2017), COX2 (Zhu et al., 2017), BZR (Zhu et al., 2017). We summarize these datasets in Table 1. Note that the "Task" column indicates the type of downstream task on each dataset, where "N" represents node classification and "G" represents graph classification.
**Baselines.** To evaluate the proposed SAP, we compare it with 3 categories of state-of-the-art approaches as follows.
* **Supervised models**: GraphSAGE (Chen et al., 2017), GCN (Krizhevsky et al., 2017) and GAT (Zhu et al., 2017). They use the labeled data to learn GNNs, which are then directly applied for the classification tasks.
* **Graph pre-training models**: EdgeMask (Zhu et al., 2017) and GraphCL (Zhu et al., 2017). Following the transfer learning strategy of "pre-train & fine-tune", the pre-trained models are fine-tuned on the downstream tasks.
* **Graph prompt models**: GPPT (Zhu et al., 2017), GraphPrompt (Zhu et al., 2017), GPF (Zhu et al., 2017) and ProG (Zhu et al., 2017). They adopt the "pre-train & prompt" paradigm, where the pre-trained models are frozen, and task-specific learnable prompts are introduced and trained in the downstream tasks. Note that GPF works for all pre-training tasks, in our experiments GraphCL is used for pre-training.
**Implementation details.** To train SAP, we adopt the Adam optimizer (Kingma and Ba, 2014), where the learning rate and weight decay in the pre-training stage are fixed at 1e-4. We set the number of both graph neural layers and multilayer perceptron layers as 2. We set the hidden dimension for node classification as 128, and for graph classification as 32. Other hyper-parameters are tuned on the results on the validation set by a grid search. Details of the hyper-parameter setting are listed in Appendix C. Furthermore, for those non-prompt-based competitors, some of their results are directly reported as in (Zhu et al., 2017) and (Zhu et al., 2017) (i.e., node classification with 0%, 30% and 50% masking ratio and graph classification). For other cases and prompt-based models, we fine-tune hyper-parameters with the codes released by their original authors. For fair comparison, we report the average results with standard deviations of 5 runs for node classification experiments, while the setting of graph classification experiments follows (Krizhevsky et al., 2017). We run all the experiments on a server with 32G memory and a single Tesla V100 GPU.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline Datasets & Graphs & Graph classes & Avg. nodes & Avg. edges & Node features & Node classes & Task (N/G) \\ \hline Cora & 1 & - & 2,708 & 5,429 & 1433 & 7 & N \\ CiteSeer & 1 & - & 3,327 & 4,732 & 3703 & 6 & N \\ PubMed & 1 & - & 19,717 & 44,338 & 500 & 3 & N \\ ogbn-arxiv & 1 & - & 169,343 & 1,166,243 & 128 & 40 & N \\ Chameleon & 1 & - & 2,277 & 31,421 & 2325 & 5 & N \\ Actor & 1 & - & 7,600 & 26,752 & 931 & 5 & N \\ ENZYMES & 600 & 6 & 32.63 & 62.14 & 18 & 3 & N, G \\ PROTEINS & 1,113 & 2 & 39.06 & 72.82 & 1 & 3 & G \\ COX2 & 467 & 2 & 41.22 & 43.45 & 3 & - & G \\ BZR & 405 & 2 & 35.75 & 38.36 & 3 & - & G \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of the datasets.
### Node classification
**Experimental setting**. We conduct node classification on 6 datasets, i.e., Cora, CiteSeer, PubMed, ogbn-arxiv, Chameleon and Actor. For homophilous graph datasets, we use the official training/validation/testing splits (Krizhevsky et al., 2017). For heterophilous graph datasets, we randomly sample 20 nodes per class as the training set and validation set, respectively. The remaining nodes which are not sampled are used for evaluation. Following the setting of (Zhu et al., 2017), we randomly mask 0%, 30% and 50% of the training labels. Note that for datasets except ogbn-arxiv, the masking ratios of 0%, 30% and 50% correspond to 20-shot, 14-shot and 10-shot.
**Results.** Table 2 summarizes the performance results, from which we make the following major observations.
(1) Most supervised methods struggle to outperform the pre-training and prompt-based methods, because the annotations required by supervised frameworks are insufficient. In contrast, pre-training approaches are usually facilitated with more prior knowledge, alleviating the need for labeled data. However, these pre-training methods still face an inherent gap between the training objectives of pre-training and downstream tasks, and pre-trained models may suffer from catastrophic forgetting during downstream adaptation. Therefore, we find that, compared with pre-training approaches, prompt-based methods usually achieve better performance.
(2) In all cases except for 0% mask on Cora, our proposed SAP outperforms all the baselines on node classification. This is because SAP bridges the gap between the pre-training stage and the downstream prompt tuning stage, and leverages the graph structure prompt to provide more information from the massive unlabeled nodes. Specifically, we add class prototype vectors as new nodes to the raw graph and introduce weighted edges between prototype vectors and original nodes as prompts.
### Few-shot node classification
**Experimental setting**. To explore more challenging few-shot node classification settings, we only assign a very small number of labeled data as the training data for each class. Specifically, for ENZYMES, we follow the existing study (Krizhevsky et al., 2017) to only choose graphs that consist of more than 50 nodes, which ensures there are sufficient labeled nodes for testing. On each graph, we randomly sample 1 node per class for training and validation, respectively. The remaining nodes which are not sampled are used for testing. For Cora, CiteSeer and PubMed, we randomly sample 3 nodes per class for training, while the validation and test sets follow the official splitting (Kumar et al., 2017). For Chameleon and Actor, we randomly sample 3 nodes per class for training and validation respectively, while the remaining nodes which are not sampled are used for testing.
\begin{table}
\begin{tabular}{c|c c c|c c c c c c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c|}{Pre-train} & \multicolumn{3}{c|}{Prompt} & \multirow{2}{*}{Ours} \\ & GraphSAGE & GCN & GAT & EdgeMask & GraphCL & & & & & \\ \hline \multicolumn{11}{c}{Masking ratio = 0\%} \\ \hline Cora & 81.07(0.30) & 76.32(0.95) & 78.47(1.68) & 81.00(0.93) & 81.05(0.59) & **81.40(0.48)** & 72.15(0.51) & 69.44(0.41) & 78.29(0.47) & 80.32(0.35) \\ CiteSeer & 67.95(0.34) & 65.40(0.13) & 65.61(0.57) & 67.39(0.98) & 68.03(0.71) & 69.21(0.48) & 67.62(0.50) & 64.12(0.39) & 70.05(0.54) & **72.23(0.42)** \\ PubMed & 77.24(0.62) & 75.33(0.27) & 75.91(0.14) & 78.51(0.14) & 79.10(0.52) & 79.73(0.38) & 77.90(0.34) & 77.08(0.41) & 76.82(0.31) & **82.04(0.28)** \\ ogbn-arxiv & 65.07(0.67) & 64.26(0.15) & 64.31(0.10) & 65.59(0.18) & 67.34(0.92) & 65.94(0.24) & 59.25(0.31) & 57.02(0.34) & 70.49(0.21) & **72.63(0.67)** \\ \hline Chameleon & 32.85(1.21) & 36.60(0.61) & 34.69(0.72) & 31.62(0.38) & 29.30(0.16) & 34.25(0.81) & 35.11(0.23) & 32.47(1.92) & 36.68(1.25) & **37.94(0.68)** \\ Actor & 22.57(0.83) & 20.98(1.00) & 21.98(0.24) & 22.83(0.90) & 23.17(1.07) & 25.17(0.82) & 22.62(0.96) & 23.70(0.71) & 24.79(1.01) & **28.80(0.48)** \\ \hline \multicolumn{11}{c}{Masking ratio = 30\%} \\ \hline Cora & 75.51(0.73) & 74.04(0.30) & 76.00(0.51) & 76.32(0.32) & 76.72(0.36) & 76.83(0.26) & 67.34(0.37) & 61.72(0.28) & 73.27(0.53) & **77.05(0.40)** \\ CiteSeer & 64.46(0.56) & 56.05(0.56) & 61.23(0.60) & 63.71(0.46) & 63.98(0.54) & 64.74(0.17) & 63.96(0.31) & 62.99(0.48) & 65.02(0.47) & **68.78(0.23)** \\ PubMed & 77.79(0.34) & 78.12(0.64) & 77.40(0.63) & 79.30(0.48) & 79.91(0.61) & 80.15(0.34) & 78.13(0.47) & 71.49(0.25) & 74.23(0.36) & **80.21(0.31)** \\ ogbn-arxiv & 65.69(0.23) & 63.18(0.38) & 64.32(0.25) & 65.99(0.69) & 65.72(0.37) & 66.18(0.33) & 60.25(0.42) & 56.68(0.39) & 69.84(0.49) & **71.96(0.97)** \\ \hline Chameleon & 32.82(0.73) & 35.01(0.42) & 33.69(0.36) & 31.02(0.57) & 31.58(0.10) & 34.31(0.95) & 33.61(0.14) & 30.62(1.67) & 35.28(0.94) & **39.20(0.29)** \\ Actor & 21.28(0.79) & 20.71(0.47) & 21.03(0.51) & 22.94(0.85) & 23.28(0.07) & 24.76(0.49) & 21.90(0.22) & 23.25(0.29) & 24.10(0.93) & **26.64(1.03)** \\ \hline \multicolumn{11}{c}{Masking ratio = 50\%} \\ \hline Cora & 68.40(0.96) & 71.78(0.50) & 74.94(1.26) & 76.38(0.89) & 76.73(0.91) & 77.16(1.35) & 68.43(0.58) & 62.20(0.93) & 72.49(1.04) & **77.62(1.29)** \\ CiteSeer & 57.98(0.98) & 52.15(0.27) & 59.50(0.61) & 65.49(0.90) & 65.94(1.20) & 65.81(0.97) & 61.11(1.24) & 60.78(1.17) & 65.33(0.93) & **67.52(0.95)** \\ PubMed & 68.23(0.46) & 65.05(0.15) & 69.30(0.97) & 71.29(0.66) & 72.03(1.54) & 72.23(1.22) & 72.63(1.72) & 68.81(0.95) & 73.70(1.49) & **75.94(1.06)** \\ ogbn-arxiv & 64.56(0.75) & 64.19(0.59) & 63.97(0.69) & 64.86(0.67) & 65.87(0.82) & 66.13(0.44) & 59.13(0.59) & 56.04(0.97) & 67.80(1.65) & **69.20(0.73)** \\ \hline Chameleon & 27.93(1.72) & 30.38(1.21) & 29.14(0.79) & 30.89(1.15) & 27.37(1.40) & 31.28(0.65) & 31.67(1.19) & 29.31(1.68) & 31.75(1.25) & **34.08(1.26)** \\ Actor & 19.13(0.85) & 18.67(1.94) & 20.85(1.37) & 21.76(0.95) & 22.18(1.24) & 22.07(0.81) & 21.11(1.35) & 20.56(1.09) & 22.82(
**Results.** Table 3 illustrates the results. From it, we see that
(1) Compared with the results of 10-shot in Table 2, SAP improves over the baselines by a larger margin under this few-shot setting. Taking Cora as an example, the accuracy of SAP is 0.46% higher than the runner-up under 10-shot, while under 3-shot, the accuracy of SAP is 2.97% higher than the runner-up. This is because SAP uses graph structure information to optimize the prototype vectors, so even when the training set is extremely small, the prototype vectors remain accurate by capturing the underlying information of the unlabeled nodes. In contrast, other methods fail to consider the relationship between the task and the massive unlabeled data.
(2) Our proposed SAP achieves larger improvements on heterophilous graphs. SAP achieves a largest improvement of 2.97% on homophilous graphs but 5.82% on heterophilous graphs. This is because in the downstream prompt tuning stage of SAP, node representations computed from the MLP-based view are not affected by structural heterophily. Also, the graph structure prompt reduces the heterophily of the graph. Therefore, the prototype vector representations calculated from the GNN-based view are more accurate. In contrast, other methods are susceptible to structural heterophily and can only achieve sub-optimal results.
### Few-shot graph classification
We conduct few-shot graph classification on 4 datasets, i.e., ENZYMES, PROTEINS, COX2 and BZR. Following the setting of (Zhu et al., 2017), we conduct 5-shot tasks. The results are listed in Table 4, from which we observe that our proposed SAP significantly outperforms the baselines on these datasets. This again demonstrates the effectiveness of our proposed method. Notably, as both node and graph classification tasks share the same pre-trained model on ENZYMES, the superior performance of SAP on both types of tasks further demonstrates that the gap between different tasks is better addressed by our unified framework.
### Model Analysis
We further analyse several aspects of our model. The following experiments are conducted on the 3-shot node classification and 5-shot graph classification.
**Ablation study on training paradigm.** We conduct an ablation study that compares variants of SAP with different pre-training and prompt tuning strategies: (1) We directly fine-tune the pre-trained models (i.e., MLP and GNN) on the tasks, instead of prompt tuning. We call this variant **SAP-ft** (with **fine-tune**). (2) For downstream tasks, we remove the prompt tuning process and only use the mean embedding vectors of the labeled data as the prototype vectors to perform classification. We call this variant **SAP-np** (**no** prompt). (3) We replace our proposed dual-view contrastive learning in pre-training with GraphCL, i.e., the pre-trained encoder is GNN, while the downstream tasks still use our proposed graph structure prompt. We call this variant **SAP-CL** (with GraphCL as the pre-training model). Figure 3 shows the results of this study, where we have the following observations:
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Methods & ENZYMES & PROTEINS & COX2 & BZR \\ \hline GraphSAGE & 18.31\(\pm\)6.22 & 52.99\(\pm\)10.57 & 52.87\(\pm\)11.46 & 57.23\(\pm\)10.95 \\ GCN & 20.37\(\pm\)5.24 & 54.87\(\pm\)11.20 & 51.37\(\pm\)11.06 & 56.16\(\pm\)11.07 \\ GAT & 15.90\(\pm\)4.13 & 48.78\(\pm\)18.46 & 51.20\(\pm\)27.93 & 53.19\(\pm\)20.61 \\ \hline InfoGraph & 20.90\(\pm\)3.32 & 54.12\(\pm\)8.20 & 54.04\(\pm\)9.45 & 57.57\(\pm\)9.93 \\ GraphCL & 28.11\(\pm\)4.00 & 56.38\(\pm\)7.24 & 55.40\(\pm\)12.04 & 59.22\(\pm\)7.42 \\ \hline GPPT & - & - & - & - \\ GraphPrompt & 31.45\(\pm\)4.32 & 64.42\(\pm\)4.37 & 59.21\(\pm\)6.82 & 61.63\(\pm\)7.68 \\ GPF & 32.65\(\pm\)5.73 & 57.16\(\pm\)5.96 & 61.62\(\pm\)7.47 & 59.17\(\pm\)6.18 \\ ProG & 29.18\(\pm\)3.09 & 60.98\(\pm\)7.49 & 61.96\(\pm\)6.35 & 63.71\(\pm\)5.25 \\ \hline SAP & **33.57\(\pm\)4.72** & **64.95\(\pm\)5.86** & **65.71\(\pm\)5.34** & **68.58\(\pm\)7.57** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Accuracy (%) on graph classification.
Figure 3. The ablation study on training paradigm.
\begin{table}
\begin{tabular}{c|c c c|c c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Supervised} & \multicolumn{3}{c|}{Pre-train} & \multicolumn{3}{c|}{Prompt} & \multirow{2}{*}{Ours} \\ & GraphSAGE & GCN & GAT & EdgeMask & GraphCL & GPPT & GraphPrompt & GPF & ProG & SAP \\ \hline Cora & 56.28(3.17) & 57.83(5.90) & 60.34(1.5) & 64.10(2.79) & 65.89(3.45) & 64.55(3.72) & 63.91(2.43) & 63.52(5.39) & 65.68(4.29) & **68.65(2.17)** \\ CiteSeer & 52.71(3.28) & 49.69(4.39) & 52.85(3.69) & 55.23(3.17) & 58.37(4.74) & 55.63(2.55) & 53.42(4.98) & 54.31(5.21) & 59.07(2.73) & **61.70(4.21)** \\ PubMed & 63.83(4.21) & 63.16(4.56) & 64.26(3.17) & 65.89(4.26) & 69.06(3.24) & 70.07(6.07) & 68.93(3.93) & 63.98(3.54) & 64.57(3.81) & **72.23(4.20)** \\ ENZYMES & 60.02(3.72) & 61.49(12.87) & 59.94(2.86) & 56.17(14.39) & 58.73(16.47) & 53.79(17.46) & 67.04(11.48) & 60.13(15.37) & 57.22(17.41) & **72.86(14.58)** \\ \hline Chameleon & 25.69(4.82) & 27.88(5.77) & 26.97(4.85) & 23.76(3.74) & 22.25(3.14) & 28.91(3.23) & 26.35(3.50) & 27.38(3.62) & 29.18(4.53) & **33.23(3.80)** \\ Actor & 21.17(3.15) & 20.69(2.96) & 20.93(2.67) & 18.03(2.48) & 19.56(1.15) & 20.88(1.69) & 20.50(2.45) & 19.32(2.52) & 21.43(3.27) & **24.74(2.79)** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Accuracy (%) on few-shot node classification.
(1) SAP-np always performs the worst among all the variants, showing the effectiveness of our proposed graph structure prompt. SAP-ft achieves better performance than SAP-np, because SAP-ft introduces trainable parameters for the downstream task.
(2) SAP achieves comparable results to SAP-CL on homophilous graph datasets and clearly outperforms SAP-CL on heterophilous graph datasets. This is because our pre-training method uses an MLP and a GNN for contrastive learning, where the MLP is not affected by the heterophily of the graph. In contrast, SAP-CL is susceptible to structural heterophily and can only achieve sub-optimal results on heterophilous graph datasets.
**Varying the number of shots.** We next vary the number of shots on two datasets for node classification (Cora, CiteSeer) and two for graph classification (PROTEINS and COX2), respectively. For node classification, we vary the number of shots between 1 and 10. For graph classification, we vary the number of shots between 1 and 30. We compare SAP with several competitive baselines in Figures 4 and 5. In general, SAP consistently outperforms the baselines, especially when the number of shots is small. We further notice that when the number of shots is relatively large, SAP can be surpassed by GraphCL on graph classification, especially on COX2. This could be attributed to the larger amount of available training data on COX2, where 30 shots per class implies that 12.85% of the 467 graphs are used for training. This is not our target few-shot scenario.
**Varying the ratio of added edges.** Our prompt tuning method is parameterized by the added edges between original nodes and prototype vectors, where the weights of the added edges are learnable. We hereby study the impact of the ratio of added edges (i.e., parameters) \(r\). To vary the number of edges, we first randomly select \(Nr\) nodes outside the training set. During prompt tuning, we then merge the \(Nr\) nodes with the \(N_{t}\) training nodes as the set of nodes to build edges with all the \(C\) prototypes. Following this setting, we conduct 3-shot node classification on 4 datasets. The results are shown in Table 5. Surprisingly, we observe that for Cora, CiteSeer and Chameleon, SAP only needs 1%, 5% and 0.1% of the parameters, respectively, to surpass the runner-up. For PubMed, SAP can outperform the runner-up using only the training set. This proves that our prompt tuning model can achieve comparable performance with only a small number of parameters, \((N_{t}+Nr)C\). As such, SAP is promising in scaling to large graphs.
**Visualization of learned edge weights.** We visualize the weight matrix \(\mathbf{W}\) of the added edges after prompt tuning. As shown in Figure 6, for each node, the edge connected to its corresponding class prototype is more likely to have a larger weight. Hence, the prototype vectors are very accurate, as they aggregate massive unlabeled data which contain rich pre-trained knowledge to reflect the semantics of the task labels.
Figure 4. Varying the number of shots for node classification.
Figure 5. Varying the number of shots for graph classification.
Figure 6. The weights of added edges between nodes and class prototype vectors before and after prompt tuning.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline \hline & runner-up & 0\% & 0.1\% & 1\% & 5\% & 10\% & 30\% & 50\% & 100\% \\ \hline Cora & 65.89\(\pm\)3.45 & 65.06\(\pm\)3.77 \(\downarrow\) & 65.72\(\pm\)3.14 \(\downarrow\) & 67.50\(\pm\)2.38 \(\uparrow\) & 67.46\(\pm\)2.45 \(\uparrow\) & 67.88\(\pm\)2.83 \(\uparrow\) & 67.56\(\pm\)3.19 \(\uparrow\) & 68.62\(\pm\)2.67 \(\uparrow\) & 68.68\(\pm\)2.17 \(\uparrow\) \\ CiteSeer & 59.07\(\pm\)2.73 & 56.70\(\pm\)4.16 \(\downarrow\) & 57.03\(\pm\)5.25 \(\downarrow\) & 57.52\(\pm\)5.03 \(\downarrow\) & 59.36\(\pm\)5.90 \(\uparrow\) & 60.98\(\pm\)4.98 \(\uparrow\) & 61.44\(\pm\)3.96 \(\uparrow\) & 62.62\(\pm\)4.26 \(\uparrow\) & 62.76\(\pm\)4.21 \(\uparrow\) \\ PubMed & 70.07\(\pm\)6.07 & 70.84\(\pm\)5.31 \(\uparrow\) & 70.32\(\pm\)3.95 \(\uparrow\) & 71.62\(\pm\)4.34 \(\uparrow\) & 70.94\(\pm\)4.40 \(\uparrow\) & 71.42\(\pm\)4.70 \(\uparrow\) & 72.20\(\pm\)4.62 \(\uparrow\) & 72.24\(\pm\)4.33 \(\uparrow\) & 73.23\(\pm\)4.20 \(\uparrow\) \\ Chameleon & 29.18\(\pm\)4.53 & 25.22\(\pm\)3.09 \(\downarrow\) & 30.92\(\pm\)3.07 \(\uparrow\) & 33.41\(\pm\)3.38 \(\uparrow\) & 33.30\(\pm\)2.36 \(\uparrow\) & 33.07\(\pm\)3.79 \(\uparrow\) & 31.89\(\pm\)3.83 \(\uparrow\) & 32.09\(\pm\)2.20 \(\uparrow\) & 33.97\(\pm\)3.22 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 5. Varying the ratio of added edges on few-shot node classification. The \(\downarrow\)/\(\uparrow\) arrow means the decreasing/improvement compared with the accuracy of runner-up.
**Time complexity analysis**. The major time complexity in our proposed SAP comes from MLP, GNN and the contrastive loss. We take node classification as an example. Suppose we use one-layer MLP and GCN as the backbone, since the adjacency matrix is generally sparse, let \(d_{A}\) be the average number of non-zero entries in each row of the adjacency matrix \(\mathbf{A}\). Let \(N\) be the number of nodes, \(F\) be the dimensionality of raw features, \(D\) be the dimensionality of output layer and \(C\) be the number of class labels. Then, the time complexities for MLP and GCN are \(O(NFD)\) and \(O(Nd_{A}F+NFD)\), respectively. We next analyze the time complexity of the contrastive loss. In the pre-training stage, let \(M\) be the number of selected negative samples. The time complexity is \(O(NMD)\), which is linear to the number of nodes \(N\). Then we conduct \(C\)-way \(K\)-shot learning for downstream tasks. For each node, we need to calculate its similarity with all prototype vectors, so the time complexity is \(O(CD)\). In the prompt tuning stage, we have a total of \(KC\) labeled nodes for training, so the time complexity is \(O(KC^{2}D)\). In the inference stage, the time complexity is \(O(CD)\).
## 6. Conclusion
In this paper, we proposed SAP, a novel structure-based prompting method for GNNs, which unifies the objectives of pre-training and prompt tuning for GNNs and integrates structural information in both the pre-training and prompt tuning stages. For pre-training, we proposed a dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure. For downstream prompt tuning, we proposed to learn the structural connections between the prototype vectors and the graph, and then leveraged the learned structural information to perform better on few-shot tasks. Finally, we conducted extensive experiments on 10 public datasets, and showed that SAP significantly outperforms various state-of-the-art baselines on both homophilous and heterophilous graphs, especially in few-shot scenarios.
|
2302.07396 | Convolutional unitary or orthogonal recurrent neural networks | Recurrent neural networks are extremely powerful yet hard to train. One of
their issues is the vanishing gradient problem, whereby propagation of training
signals may be exponentially attenuated, freezing training. Use of orthogonal
or unitary matrices, whose powers neither explode nor decay, has been proposed
to mitigate this issue, but their computational expense has hindered their use.
Here we show that in the specific case of convolutional RNNs, we can define a
convolutional exponential and that this operation transforms antisymmetric or
anti-Hermitian convolution kernels into orthogonal or unitary convolution
kernels. We explicitly derive FFT-based algorithms to compute the kernels and
their derivatives. The computational complexity of parametrizing this subspace
of orthogonal transformations is thus the same as the networks' iteration. | Marcelo O. Magnasco | 2023-02-14T23:36:21Z | http://arxiv.org/abs/2302.07396v1 | # Convolutional _unitary_ or _orthogonal_ recurrent neural networks.
###### Abstract
Recurrent neural networks are extremely powerful yet hard to train. One of their issues is the _vanishing gradient problem_, whereby propagation of training signals may be exponentially attenuated, freezing training. Use of _orthogonal_ or _unitary_ matrices, whose powers neither explode nor decay, has been proposed to mitigate this issue, but their computational expense has hindered their use. Here we show that in the specific case of convolutional RNNs, we can define a _convolutional exponential_ and that this operation transforms antisymmetric or anti-Hermitian convolution kernels into _orthogonal_ or _unitary convolution kernels_. We explicitly derive FFT-based algorithms to compute the kernels and their derivatives. The computational complexity of parametrizing this subspace of orthogonal transformations is thus the same as the networks' iteration.
Lab of Integrative Neuroscience, Rockefeller University
## 1 tl;dr
This is an extremely terse synopsis for quick reference. All proofs, explanations and misguided attempts at clarity are in subsequent chapters, to which you're welcome to skip.
Given a layer \(X\) in \(D\) dimensions, a spatial convolution operation \(\otimes\) and a convolution kernel \(K\) acting on \(X\) as \(K\otimes X\), we formally define the _convolutional exponential_\(e_{\otimes}^{K}\) as the kernel defined by the series
\[e_{\otimes}^{K}\otimes X\equiv X+K\otimes X+\frac{1}{2!}K\otimes K\otimes X+ \frac{1}{3!}K\otimes K\otimes K\otimes X+\frac{1}{4!}K\otimes K\otimes K\otimes K \otimes X+\cdots \tag{1.1}\]
so the linear operator defined by \(e_{\otimes}^{K}\otimes\) is _quite literally_ the matrix exponential of the linear operator defined by \(K\otimes\). This exponential can be computed in Fourier space through
\[e_{\otimes}^{K}\equiv\mathscr{F}^{-1}\left[\exp(\mathscr{F}\left[K\right])\right] \tag{1.2}\]
where the right-hand exponential is element-wise and where \(\mathscr{F}\) and \(\mathscr{F}^{-1}\) are the forward and inverse Fourier transforms in \(D\) dimensions, and hence an \(N\log N\) operation. This can easily be generalized to any operation defined through convergent power series, for example the _convolutional sine and cosine_ of \(K\) are defined through
\[\begin{array}{l}\cos_{\otimes}(K)\equiv\mathscr{F}^{-1}\left[\cos(\mathscr{ F}\left[K\right])\right]\\ \sin_{\otimes}(K)\equiv\mathscr{F}^{-1}\left[\sin(\mathscr{F}\left[K\right])\right] \end{array} \tag{1.3}\]
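As a sanity check, the NumPy sketch below computes Eq. (1.2) on a 1-D periodic grid and verifies it against a truncated version of the series (1.1); the grid size, kernel scale, and number of series terms are arbitrary choices for the example.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
K = rng.standard_normal(N) * 0.1          # a 1-D kernel on an N-point grid

def conv(a, b):                           # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

expK = np.fft.ifft(np.exp(np.fft.fft(K))) # Eq. (1.2)

X = rng.standard_normal(N)                # a test layer
term = X.astype(complex)
out = X.astype(complex)
for n in range(1, 20):                    # truncated series, Eq. (1.1)
    term = conv(K, term) / n
    out = out + term

print(np.allclose(conv(expK, X), out))    # True
```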
Given a complex-valued kernel \(K\), we define an _anti-Hermitian kernel_ as one that satisfies \(K=-\overline{K^{*}}\) where \(\bar{K}\) is the spatial flip operation and \(K^{*}\) the elementwise complex conjugate, because then the linear operator given by \(K\otimes\) is an anti-Hermitian operator. Then \(e_{\otimes}^{K}\) is _unitary_ in the sense that the linear operator \(e_{\otimes}^{K}\otimes\) is a _unitary operator_: it is the matrix exponential of the anti-Hermitian operator \(K\otimes\), and as such has eigenspectrum on the unit circle.
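Continuing the sketch, one can build an anti-Hermitian kernel explicitly and confirm numerically that its convolutional exponential preserves the layer's norm, i.e. acts as a unitary operator; the flip is implemented for a periodic 1-D grid, which is our assumption.

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)

def flip(x):                                   # spatial flip: x[n] -> x[-n mod N]
    return np.roll(x[::-1], 1)

A = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.1
K = A - np.conj(flip(A))                       # anti-Hermitian by construction
assert np.allclose(K, -np.conj(flip(K)))       # K = -conj(flip(K))

U = np.fft.ifft(np.exp(np.fft.fft(K)))         # unitary convolution kernel
Z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
UZ = np.fft.ifft(np.fft.fft(U) * np.fft.fft(Z))
print(np.allclose(np.linalg.norm(UZ), np.linalg.norm(Z)))  # True: norm preserved
```

The check works because \(\mathscr{F}[K]\) is purely imaginary for an anti-Hermitian kernel, so \(\exp(\mathscr{F}[K])\) has unit modulus on every Fourier mode.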
Given a complex-valued layer \(Z\) in \(D\) dimensions, an anti-Hermitian kernel \(K\) acting on \(X\), an input \(I_{n}\), and element-wise complex-valued activation function \(\phi\), we define a _convolutional unitary recurrent neural network_ (cuRNN?) as the iterated recursion
\[Z_{n+1}=\phi\left(e_{\otimes}^{K}\otimes Z_{n}+I_{n}\right) \tag{1.4}\]
where the subindex \(n\) represents the passage of time in the recurrence and \(Z_{0}\) is the initialization value of the layer. |
2306.13614 | Adversarial Robustness Certification for Bayesian Neural Networks | We study the problem of certifying the robustness of Bayesian neural networks
(BNNs) to adversarial input perturbations. Given a compact set of input points
$T \subseteq \mathbb{R}^m$ and a set of output points $S \subseteq
\mathbb{R}^n$, we define two notions of robustness for BNNs in an adversarial
setting: probabilistic robustness and decision robustness. Probabilistic
robustness is the probability that for all points in $T$ the output of a BNN
sampled from the posterior is in $S$. On the other hand, decision robustness
considers the optimal decision of a BNN and checks if for all points in $T$ the
optimal decision of the BNN for a given loss function lies within the output
set $S$. Although exact computation of these robustness properties is
challenging due to the probabilistic and non-convex nature of BNNs, we present
a unified computational framework for efficiently and formally bounding them.
Our approach is based on weight interval sampling, integration, and bound
propagation techniques, and can be applied to BNNs with a large number of
parameters, and independently of the (approximate) inference method employed to
train the BNN. We evaluate the effectiveness of our methods on various
regression and classification tasks, including an industrial regression
benchmark, MNIST, traffic sign recognition, and airborne collision avoidance,
and demonstrate that our approach enables certification of robustness and
uncertainty of BNN predictions. | Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska | 2023-06-23T16:58:25Z | http://arxiv.org/abs/2306.13614v1 | # Adversarial Robustness Certification
###### Abstract
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations. Given a compact set of input points \(T\subseteq\mathbb{R}^{m}\) and a set of output points \(S\subseteq\mathbb{R}^{n}\), we define two notions of robustness for BNNs in an adversarial setting: probabilistic robustness and decision robustness. Probabilistic robustness is the probability that for all points in \(T\) the output of a BNN sampled from the posterior is in \(S\). On the other hand, decision robustness considers the optimal decision of a BNN and checks if for all points in \(T\) the optimal decision of the BNN for a given loss function lies within the output set \(S\). Although exact computation of these robustness properties is challenging due to the probabilistic and non-convex nature of BNNs, we present a unified computational framework for efficiently and formally bounding them. Our approach is based on weight interval sampling, integration, and bound propagation techniques, and can be applied to BNNs with a large number of parameters, and independently of the (approximate) inference method employed to train the BNN. We evaluate the effectiveness of our methods on various regression and classification tasks, including an industrial regression benchmark, MNIST, traffic sign recognition, and airborne collision avoidance, and demonstrate that our approach enables certification of robustness and uncertainty of BNN predictions.
Certification, Bayesian Neural Networks, Adversarial Robustness, Classification, Regression, Uncertainty
## I Introduction
While neural networks (NNs) regularly obtain state-of-the-art performance in many supervised machine learning problems [1, 2], they have been found to be vulnerable to adversarial attacks, i.e., imperceptible modifications of their inputs that trick the model into making an incorrect prediction [3]. Along with several other vulnerabilities [4], the discovery of adversarial examples has made the deployment of NNs in real-world, safety-critical applications - such as autonomous driving or healthcare - increasingly challenging. The design and analysis of methods that can mitigate such vulnerabilities of NNs, or provide guarantees for their worst-case behaviour in adversarial conditions, has thus become of critical importance [5, 6].
While retaining the advantages intrinsic to deep learning, Bayesian neural networks (BNNs), i.e., NNs with a probability distribution placed over their weights and biases [7], enable probabilistically principled evaluation of model uncertainty. Since adversarial examples are intuitively related to uncertainty [8], the application of BNNs is particularly appealing in safety-critical scenarios. In fact, model uncertainty of a BNN can, in theory, be taken into account at prediction time to enable safe decision-making [9, 10, 11, 12]. Various techniques have been proposed for the evaluation of their robustness, including generalisation of gradient-based adversarial attacks (i.e., non
Fig. 1: Certifications for a traffic sign recognition benchmark with two classes: speed limit (spd. lmt.) and warning sign (warn.). We plot original images, the upper and lower-bound class probabilities as red and blue horizontal lines, respectively, and a description of the result. **Top Row:** A 50 km/hr sign from the GTSRB dataset. As the lower bound class probability is 0.81, we certify that all images in the ball are classified correctly as speed limit signs and therefore no adversarial examples exist. **Bottom Row:** A nonsense traffic sign. As the upper bound probability for all classes is less than a threshold (0.75), we certify that the BNN is uncertain.
Bayesian) [13], statistical verification techniques [14], as well as approaches based on pointwise (i.e., for a specific test point \(x^{*}\)) uncertainty evaluation [15]. However, to the best of our knowledge, a systematic approach for computing formal (i.e., with certified bounds) guarantees on the behaviour of BNNs and their decisions against adversarial input perturbations is still missing.
In this work, we develop a novel algorithmic framework to quantify the adversarial robustness of BNNs. In particular, following existing approaches for quantifying the robustness of deterministic neural networks [16, 17, 18], we model adversarial robustness as an _input-output specification_ defined by a given compact set of input points \(T\subseteq\mathbb{R}^{m}\) and a given convex polytope output set \(S\subseteq\mathbb{R}^{n}\). A neural network satisfies this specification if all points in \(T\) are mapped into \(S\), called a safe set. Modelling specifications in this way encompasses many other practical properties such as classifier monotonicity [19] and individual fairness [20]. For a particular specification, we focus on two main properties of a BNN of interest for adversarial prediction settings: _probabilistic robustness_[21, 14] and _decision robustness_[18, 22]. The former, probabilistic robustness, is defined as the probability that a network sampled from the posterior distribution is robust (e.g., satisfies a robustness specification defined by a given \(T\) and \(S\)). Probabilistic robustness attempts to provide a general measure of robustness of a BNN; in contrast, _decision robustness_ focuses on the decision step, and evaluates the robustness of the optimal decision of a BNN. That is, a BNN satisfies decision robustness for a property if, for all points in \(T\), the expectation of the output of the BNN in the case of regression, or the argmax of the expectation of the softmax w.r.t. the posterior distribution for classification, is contained in \(S\).
Unfortunately, evaluating probabilistic and decision robustness for a BNN is not trivial, as it involves computing distributions and expectations of high-dimensional random variables passed through a non-convex function (the neural network architecture). Nevertheless, we derive a unified algorithmic framework based on computations over the BNN weight space that yields _certified lower_ and _upper bounds_ for both properties. Specifically, we show that probabilistic robustness is equivalent to the measure, w.r.t. the BNN posterior, of the set of weights for which the resulting deterministic NN is _robust_, i.e., it maps all points of \(T\) to a subset of \(S\). Computing upper and lower bounds for the probability involves sampling compact sets of weights according to the BNN posterior, and propagating each of these weight sets, \(H\), through the neural network architecture, jointly with the input region \(T\), to check whether all the networks instantiated by weights in \(H\) are safe. To do so, we generalise bound propagation techniques developed for deterministic neural networks to the Bayesian settings and instantiate explicit schemes for Interval Bound Propagation (IBP) and Linear Bound Propagation (LBP) [23]. Similarly, in the case of decision robustness, we show that formal bounds can be obtained by partitioning the weight space into different weight sets, and for each weight set \(J\) of the partition we again employ bound propagation techniques to compute the maximum and minimum of the decision of the NN for all input points in \(T\) and different weight configurations in \(J\). The resulting extrema are then averaged according to the posterior measure of the respective weight sets to obtain sound lower and upper bounds on decision robustness.
We perform a systematic experimental investigation of our framework on a variety of tasks. We first showcase the behaviour of our methodology on a classification problem from an airborne collision avoidance benchmark [24] and on two safety-critical industrial regression benchmarks [25]. We then consider image recognition tasks and illustrate how our method can scale to verifying BNNs on medium-sized computer vision problems, including MNIST and a two-class subset of the German Traffic Sign Recognition Benchmark (GTSRB) dataset [26]. On small networks, such as those used for airborne collision avoidance (\(\sim\)\(5000\) parameters), our method is able to verify key properties in under a second, thus enabling comprehensive certification over a fine partition of the entire state space. Moreover, when employed in conjunction with adversarial training [27], we are able to obtain non-trivial certificates for convolutional NNs (471,000 parameters) on full-colour GTSRB images (2,352 dimensions).1 As an example, we demonstrate the bounds on decision robustness in Figure 1, where we plot the upper and lower bound class probabilities (in red and blue respectively) for a BNN trained on a two-class traffic sign recognition benchmark. The bounds are computed for all images within a \(\ell_{\infty}\) ball with radius 2/255 of the two images in the left column of the figure. For the top image of a speed limit sign, our lower bound allows us to verify that the all images within the 2/255 are correctly classified by the BNN as a 50 km/hr sign. For the bottom image of a nonsense traffic sign, our upper bound allows us to verify that the BNN is uncertain for this image and all images in the ball.
Footnote 1: An implementation to reproduce all the experiments can be found at: [https://github.com/matthewwicker/AdversarialRobustnessCertificationForBNNs](https://github.com/matthewwicker/AdversarialRobustnessCertificationForBNNs).
In summary, this paper makes the following contributions.
* We present an algorithmic framework based on convex relaxation techniques for the robustness analysis of BNNs in adversarial settings.
* We derive explicit lower- and upper-bounding procedures based on IBP and LBP for the propagation of input and weight intervals through the BNN posterior function.
* We empirically show that our method can be used to certify BNNs consisting of multiple hidden layers and with hundreds of neurons per layer.
A preliminary version of this paper appeared as [21]. This work extends [21] in several aspects. In contrast to [21], which focused only on probabilistic robustness, here we also tackle decision robustness and embed the calculations for the two properties in a common computational framework. Furthermore, while the method in [21] only computes lower bounds, in this paper we also develop a technique for upper bounds computation. Finally, we substantially extend the empirical evaluation to include additional datasets, evaluation of convolutional architectures and scalability analysis, as well as certification of out-of-distribution uncertainty.
## II Related Work
Bayesian uncertainty estimates have been shown to empirically flag adversarial examples, often with remarkable success [15, 28]. These techniques are, however, empirical and can be circumvented by specially-tailored attacks that also target the uncertainty estimation [29]. Despite these attacks, it has been shown that BNN posteriors inferred by Hamiltonian Monte Carlo tend to be more robust to attacks than their deterministic counterparts [10]. Further, under idealised conditions of infinite data, infinitely-wide neural networks and perfect training, BNNs are provably robust to gradient-based adversarial attacks [11]. However, while showing that BNNs are promising models for defending against adversarial attacks, the arguments in [10] and [11] do not provide concrete bounds or provable guarantees for when an adversarial example does not exist for a given BNN posterior.
In [9, 14], the authors tackle similar properties of BNNs to those discussed in this paper. Yet these methods only provide bounds on probabilistic robustness and the bounds are _statistical_, i.e., only valid up to a user-defined, finite probability \(1-\delta\), with \(\delta>0\). In contrast, the method in this paper covers both probabilistic and decision robustness and computes bounds that are sound for the whole BNN posterior (i.e., hold with probability \(1\)). In [27], the authors incorporate worst-case information via bound propagation into the likelihood in order to train BNNs that are more adversarially robust; while that work develops a principled defense for BNNs against attacks, it does not develop or study methods for analyzing or guaranteeing their robustness.
Since the publication of our preliminary work [21], the study of [30] has further investigated certifying the posterior predictive of BNNs. The definition in [30] corresponds to a subset of what we refer to as _decision_ robustness, but their method only applies to BNNs whose posterior support has been clipped to be in a finite range. Here, we pose a more general problem of certifying decision and probabilistic robustness of BNNs, and can handle posteriors on continuous, unbounded support, which covers the overwhelming majority of those commonly employed for BNNs. Furthermore, following the preliminary version of this paper [21], [31] introduced a technique for probabilistic robustness certification implemented via a recursive algorithm that operates over the state-space of a model-based control scenario. [32] uses similar methods to those presented in [21] to study infinite-time horizon robustness properties of BNN control policies by checking for safe weight sets and modifying the posterior so that only safe weights have non-zero posterior support.
Most existing certification methods in the literature are designed for deterministic NNs. Approaches studied include abstract interpretation [23], mixed integer linear programming [33, 34, 35, 36], game-based approaches [37, 38], and SAT/SMT [24, 39]. In particular, [40, 18, 41] employ relaxation techniques from non-convex optimisation to compute guarantees over deterministic neural network behaviours, specifically using Interval Bound Propagation (IBP) and Linear Bound Propagation (LBP) approaches. However, these methods cannot be used for BNNs because they all assume that the weights of the networks are deterministic, i.e., fixed to a given value, while in the Bayesian setting we need to certify the BNN for a continuous range of values for weights that are not fixed, but distributed according to the BNN posterior.
In the context of Bayesian learning, methods to compute adversarial robustness measures have been explored for Gaussian processes (GPs), both for regression [42] and classification tasks [43, 44]. However, because of the non-linearity in NN architectures, GP-based approaches cannot be directly employed for BNNs. Furthermore, the vast majority of approximate Bayesian inference methods for BNNs do not employ Gaussian approximations over latent space [45]. In contrast, our method is specifically tailored to take into account the non-linear nature of BNNs and can be directly applied to a range of approximate Bayesian inference techniques used in the literature.
## III Background
In this section, we overview the necessary background and introduce the notation we use throughout the paper. We focus on neural networks (NNs) employed in a supervised learning scenario, where we are given a dataset of \(n_{\mathcal{D}}\) pairs of inputs and labels, \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n_{\mathcal{D}}}\), with \(x_{i}\in\mathbb{R}^{m}\), and where each target output \(y\in\mathbb{R}^{n}\) is either a one-hot class vector for classification or a real-valued vector for regression.
### _Bayesian Deep Learning_
Consider a feed forward neural network \(f^{w}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\), parametrised by a vector \(w\in\mathbb{R}^{n_{w}}\) containing all its weights and biases. We denote with \(f^{w,1},...,f^{w,K}\) the \(K\) layers of \(f^{w}\) and take the activation function of the \(i\)th layer to be \(\sigma^{(i)}\), abbreviated to just \(\sigma\) in the case of the output activation.2 Throughout this paper, we will use \(f^{w}(x)\) to represent pre-activation of the last layer.
Footnote 2: We assume that the activation functions have a finite number of inflection points, which holds for activation functions commonly used in practice [46].
Bayesian learning of deep neural networks starts with a prior distribution, \(p_{\mathbf{w}}(w)\), over the vector of random variables associated to the weights, \(\mathbf{w}\). Placing a distribution over the weights defines a stochastic process indexed by the input space, which we denote as \(f^{\mathbf{w}}\). After the data set \(\mathcal{D}\) has been observed, the BNN prior distribution is updated according to the likelihood, \(p(\mathcal{D}|w)=\prod_{i=1}^{n_{\mathcal{D}}}p(y_{i}|x_{i},w)\), which models how likely (probabilistically speaking) we are to observe an output under the stochasticity of our model parameters and observational noise [47]. The likelihood function, \(p(y_{i}|x_{i},w)\), generally takes the shape of a softmax for multiclass classification and a multivariate Gaussian for regression. The posterior distribution over the weights given the dataset is then computed by means of the Bayes formula, i.e., \(p(w|\mathcal{D})\propto p(\mathcal{D}|w)p(w)\). We denote the cumulative distribution of \(p(w|\mathcal{D})\) as \(P(\cdot)\), so that for \(R\subseteq\mathbb{R}^{n_{w}}\) we have:
\[\int_{R}p(w|\mathcal{D})dw=P(R). \tag{1}\]
The posterior \(p(w|\mathcal{D})\) is in turn used to calculate the output of a BNN on an unseen point, \(x^{*}\). The distribution over outputs is called the posterior predictive distribution and is defined as:
\[p(y^{*}|x^{*},\mathcal{D})=\int p(y^{*}|x^{*},w)p(w|\mathcal{D})dw. \tag{2}\]
Equation (2) defines a distribution over the BNN output. When employing a Bayesian model, the overall final prediction is taken to be a single value, \(\hat{y}\), that minimizes the Bayesian risk of an incorrect prediction according to the posterior predictive distribution and a loss function \(\mathcal{L}\). Formally, the final decision of a BNN is computed as
\[\hat{y}=\operatorname*{arg\,min}_{y}\int_{\mathbb{R}^{n}}\mathcal{L}(y,y^{*})p (y^{*}|x^{*},\mathcal{D})dy^{*}.\]
This minimization is the subject of Bayesian decision theory [48], and the final form of \(\hat{y}\) clearly depends on the specific loss function \(\mathcal{L}\) employed in practice. In this paper, we focus on two standard loss functions widely employed for classification and regression problems.3
Footnote 3: In Appendix B we discuss how our method can be generalised to other losses commonly employed in practice.
_Classification Decisions:_ The 0-1 loss, \(\ell_{0-1}\), assigns a penalty of 0 to the correct prediction, and 1 otherwise. It can be shown that the optimal decision in this case is given by the class for which the predictive distribution obtains its maximum, i.e.:
\[\hat{y}=\operatorname*{arg\,max}_{i=1,\ldots,n}p_{i}(y^{*}|x^{*},\mathcal{D}) =\operatorname*{arg\,max}_{i=1,\ldots,n}\mathbb{E}_{w\sim p(w|\mathcal{D})} \left[\sigma_{i}(f^{w}(x))\right],\]
where \(\sigma_{i}\) represents the \(i\)th output component of the softmax function.
_Regression Decisions:_ The \(\ell_{2}\) loss assigns a penalty to a prediction according to its \(\ell_{2}\) distance from the ground truth. It can be shown that the optimal decision in this case is given by the expected value of the BNN output over the posterior distribution, i.e.:
\[\hat{y}=\mathbb{E}_{w\sim p(w|\mathcal{D})}\left[f^{w}(x)\right].\]
Unfortunately, because of the non-linearity of neural network architectures, the computation of the posterior distribution over weights, \(p(w|\mathcal{D})\), is generally intractable [7]. Hence, various approximation methods have been studied to perform inference with BNNs in practice. Among these methods, we will consider Hamiltonian Monte Carlo (HMC) [7] and Variational Inference (VI) [45], which we now briefly describe.
#### III-B1 Hamiltonian Monte Carlo (HMC)
HMC proceeds by defining a Markov chain whose invariant distribution is \(p_{\mathbf{w}}(w|\mathcal{D}),\) and relies on Hamiltonian dynamics to speed up the exploration of the space. Differently from VI discussed below, HMC does not make parametric assumptions on the form of the posterior distribution, and is asymptotically correct [7]. The result of HMC is a set of samples that approximates \(p_{\mathbf{w}}(w|\mathcal{D})\).
#### III-B2 Variational Inference (VI)
VI proceeds by finding a Gaussian approximating distribution over the weight space \(q(w)\approx p_{\mathbf{w}}(w|\mathcal{D})\) in a trade-off between approximation accuracy and scalability. The core idea is that \(q(w)\) depends on some hyperparameters that are then iteratively optimized by minimizing a divergence measure between \(q(w)\) and \(p_{\mathbf{w}}(w|\mathcal{D})\). Samples can then be efficiently extracted from \(q(w)\).
For simplicity of notation, in the rest of the paper we will indicate with \(p(w|\mathcal{D})\) the posterior distribution estimated by either of the two methods, and clarify the methodological differences when they arise.
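In practice, given samples from either of the two approximate posteriors, the optimal decisions of Section III-A are estimated by Monte Carlo averaging. The following is a minimal sketch of this step, assuming a list `posterior_samples` of weight vectors drawn from \(p(w|\mathcal{D})\) and a hypothetical `forward(w, x)` routine that evaluates \(f^{w}(x)\); neither name is taken from the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decision_classification(posterior_samples, forward, x):
    # 0-1 loss: argmax of the posterior-averaged softmax (Section III-A)
    probs = np.mean([softmax(forward(w, x)) for w in posterior_samples], axis=0)
    return np.argmax(probs)

def decision_regression(posterior_samples, forward, x):
    # l2 loss: posterior mean of the network output
    return np.mean([forward(w, x) for w in posterior_samples], axis=0)
```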
## IV Problem Statements
We focus on local specifications defined over an input compact set \(T\subseteq\mathbb{R}^{m}\) and output set \(S\subseteq\mathbb{R}^{n}\) in the form of a convex polytope:
\[S=\{y\in\mathbb{R}^{n}\,|\,C_{S}y+d_{S}\geq 0\}, \tag{3}\]
where \(C_{S}\in\mathbb{R}^{n_{S}\times n}\) and \(d_{S}\in\mathbb{R}^{n_{S}}\) are the matrix and vector encoding the polytope constraints, and \(n_{S}\) is the number of output constraints considered. For simplicity of presentation, we assume that \(T\) is defined as a box (axis-aligned linear constraints).4 However, we stress that all the methods in this paper can be extended to the more general case where \(T\) is a convex polytope. Our formulation of input-output specifications can be used to capture important properties such as classifier monotonicity [49] and individual fairness [20], but in this work we focus exclusively on adversarial robustness. Targeted adversarial robustness, where one aims to force the neural network into a particular wrong classification, is captured in this framework by setting \(T\) to be an over-approximation of an \(\ell_{p}\) ball around a given test input, and setting \(C_{S}\) to an \(n_{S}\times n\) matrix of all zeros with a \(-1\) entry in the diagonal entry corresponding to the true class and a \(1\) on the diagonal entry corresponding to the target class or classes. For regression, one uses \(C_{S}\) to encode the absolute deviation from the target value and \(d_{S}\) to encode the maximum tolerable deviation. Throughout the paper we will refer to an input-output set pair, \(T\) and \(S\), as defined above as a _robustness specification_.
Footnote 4: Note that, where a specification is not in this form already, one can first compute a bounding box \(R=[x^{L},x^{U}]\) (or a finite sequence of them) such that \(T\subseteq R\), and then proving that the output specification holds for \(R\) also proves that it holds for \(T\). If we cannot prove that an output specification holds, we cannot conclude that it is violated, since our method is sound but not complete.
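As an illustration of how the polytope in Equation (3) can be instantiated, the sketch below encodes an untargeted classification specification (the true-class output dominates every other output) and a bounded-deviation regression specification; the helper names are ours and this is only one possible encoding.

```python
import numpy as np

def classification_spec(true_class, n_classes):
    # Safe polytope S = {y : C_S y + d_S >= 0}: one row per competitor
    # class j, encoding y_true - y_j >= 0 (untargeted robustness).
    C_S = np.zeros((n_classes - 1, n_classes))
    competitors = [j for j in range(n_classes) if j != true_class]
    for row, j in enumerate(competitors):
        C_S[row, true_class] = 1.0
        C_S[row, j] = -1.0
    d_S = np.zeros(n_classes - 1)
    return C_S, d_S

def regression_spec(target, tol):
    # Safe set |y - target| <= tol as two linear constraints:
    # (y - target) + tol >= 0 and -(y - target) + tol >= 0.
    C_S = np.array([[1.0], [-1.0]])
    d_S = np.array([tol - target, tol + target])
    return C_S, d_S
```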
### _Probabilistic Robustness_
Probabilistic robustness accounts for the probabilistic behaviour of a BNN in adversarial settings.
**Definition 1** (Probabilistic robustness).: _Given a Bayesian neural network \(f^{\mathbf{w}}\), an input set \(T\subseteq\mathbb{R}^{m}\) and a set \(S\subseteq\mathbb{R}^{n}\) of safe outputs, then probabilistic robustness is defined as_
\[P_{\text{safe}}(T,S):=Prob_{w\sim p(w|\mathcal{D})}\big{(}\forall x\in T,f^{w} (x)\in S\big{)}. \tag{4}\]
_Given \(\eta\in[0,1]\), we then say that \(f^{\mathbf{w}}\) is probabilistically robust, or safe, for specifications \(T\) and \(S\), with probability at least \(\eta\) iff:_
\[P_{\text{safe}}(T,S)\geq\eta\]
Probabilistic robustness considers the adversarial behaviour of the model while accounting for the uncertainty arising from the posterior distribution. \(P_{\text{safe}}(T,S)\) represents the (weighted) proportion of neural networks sampled from \(f^{\mathbf{w}}\) that satisfy a given input-output specification (captured by \(T\) and \(S\)) and can be used directly as a measure of compliance for Bayesian neural networks. As such, probabilistic robustness is particularly suited to quantification of the robustness of a BNN to adversarial perturbations [9, 22, 50]. Exact computation of \(P_{\text{safe}}(T,S)\) is hindered by both the size and non-linearity of neural networks. As \(P_{\text{safe}}(T,S)\) cannot be computed exactly for general BNNs, in this work we tackle the problem of computing provable bounds on probabilistic robustness.
**Problem 1** (Bounding probabilistic robustness).: _Given a Bayesian neural network \(f^{\mathbf{w}}\), an input set \(T\subseteq\mathbb{R}^{m}\) and a set \(S\subseteq\mathbb{R}^{n}\) of safe outputs, compute (non-trivial) \(P_{\text{safe}}^{L}\) and \(P_{\text{safe}}^{U}\) such that_
\[P_{\text{safe}}^{L}\leq P_{\text{safe}}(T,S)\leq P_{\text{safe}}^{U}. \tag{5}\]
We highlight the difference between this problem definition and those discussed in prior works [9, 14]. In particular, prior works compute upper and lower bounds that hold with probability \(1-\delta\) for some pre-specified \(\delta\). While such statistical bounds can provide an estimation for \(P_{\text{safe}}(T,S)\), these may not be sufficient in safety-critical contexts where strong, worst-case guarantees over the full behaviour of the BNN are necessary. The problem statement above holds with probability \(1\) and represents sound guarantees on \(P_{\text{safe}}(T,S)\).
### _Decision Robustness_
While \(P_{\text{safe}}\) attempts to measure the compliance of all functions in the support of a BNN posterior, we are often interested in evaluating robustness w.r.t. a specific decision. In order to do so, we consider _decision robustness_, which is computed over the final decision of the BNN. In particular, given a loss function and a decision \(\hat{y}\) we have the following.
**Definition 2** (Decision robustness).: _Consider a Bayesian neural network \(f^{\mathbf{w}}\), an input set \(T\subseteq\mathbb{R}^{m}\) and a set \(S\subseteq\mathbb{R}^{n}\) of safe outputs. Assume that the decision for a loss \(\mathcal{L}\) for \(x\in\mathbb{R}^{m}\) is given by \(\hat{y}(x)\). Then, the Bayesian decision is considered to be robust if:_
\[\forall x\in T\quad\hat{y}(x)\in S. \tag{6}\]
We notice that, since the specific form of the decision depends on the loss employed in practice, the definition of decision robustness takes different form depending on whether the BNN is used for classification or for regression. In particular, we instantiate the definition for the two cases of standard loss discussed in Section III.
In the regression case, using the mean square loss we have that \(\hat{y}(x)=\mathbb{E}[f^{w}(x)]\), so that if we find upper and lower bounds on \(\mathbb{E}[f^{w}(x)]\) for all \(x\in T\), i.e., for \(i=1,...,m\):
\[D_{\text{safe},i}^{L}\leq\min_{x\in T}\mathbb{E}\left[f_{i}^{w}(x)\right],\;D_ {\text{safe},i}^{U}\geq\max_{x\in T}\mathbb{E}\left[f_{i}^{w}(x)\right],\]
we can then simply check whether these are within \(S\).
For the classification case, where the decision is given by the \(\arg\max\) of the predictive posterior, note that, in order to check the condition in Definition 2, it suffices to compute lower and upper bounds on the posterior predictive in \(T\), i.e.:
\[D_{\text{safe},i}^{L}\leq\min_{x\in T}\mathbb{E}\left[\sigma_{i}(f^{w}(x)) \right],\;D_{\text{safe},i}^{U}\geq\max_{x\in T}\mathbb{E}\left[\sigma_{i}(f ^{w}(x))\right],\]
for \(i=1,\ldots,m\). It is easy to see that the knowledge of \(D_{\text{safe},i}^{L}\) and \(D_{\text{safe},i}^{U}\) for all \(i=1,\ldots,m\) can be used to provide guarantees of the model decision belonging to \(S\), as defined in Equation (3), by simply propagating these bounds through the equations. Therefore, for both classification and regression we have to bound an expectation of the BNN output over the posterior distribution, with the additional softmax computations for classification. We thus arrive at the following problem for bounding decision robustness.
**Problem 2** (Bounding decision robustness).: _Let \(f^{\mathbf{w}}\) be a BNN with posterior distribution \(p(w|\mathcal{D})\). Consider an input-output
Fig. 2: A diagram illustrating a single iteration of the computational flow for the certification process of a BNN w.r.t. decision robustness (green) and probabilistic robustness (purple). This process is summarised in Algorithm 1.
specification (\(T\), \(S\)) and assume \(\mathcal{L}=\ell_{0-1}\) for classification or \(\mathcal{L}=\ell_{2}\) for regression. We aim at computing (non-trivial) lower and upper bounds \(D^{L}_{\text{safe}}\) and \(D^{U}_{\text{safe}}\) such that:_
\[D^{L}_{\text{safe}}\leq\mathbb{E}[s(f^{w}(x))]\leq D^{U}_{\text{safe}}\quad\forall x\in T,\]
_where \(s=\sigma\) for classification and \(s=\mathbb{I}\) for regression._
Note that, while for regression we bound the decision directly, for classification we compute the bounds on the predictive posterior and use these to compute bounds on the final decision. This is similar to what is done for deterministic neural networks, where in the case of classification the bounds are often computed over the logit, and then used to provide guarantees for the final decision [18]. As with probabilistic robustness, our bounds on decision robustness are sound guarantees and do not have a probability of error as with statistical bounds.
### _Outline of our Approach_
We design an algorithmic framework for worst-case and best-case bounds on local robustness properties in Bayesian neural networks, taking account of both the posterior distribution (\(P^{L}_{\text{safe}}\) and \(P^{U}_{\text{safe}}\)) and the overall model decision (\(D^{L}_{\text{safe}}\) and \(D^{U}_{\text{safe}}\)). First, we show how the two robustness properties of Definitions 1 and 2 can be reformulated in terms of computation over weight intervals. This allows us to derive a unified approach to the bounding of the robustness of the BNN posterior (i.e., probabilistic robustness) and of the robustness of the overall model decision (i.e., decision robustness) that is based on _bound propagation_ and _posterior integral_ computation over hyper-rectangles. A visual outline for our framework is presented in Figure 2. We organise the presentation of our framework by first introducing a general theoretical framework for bounding the robustness quantities of interest (Section V). We will then show how the required integral computations can be achieved for Bayesian posterior inference techniques commonly used in practice (Section VI-A). Hence, we will show how to extend bound propagation techniques to deal with both input variable intervals and intervals over the weight space, and will instantiate approaches based on Interval and Linear Bound Propagation techniques (Section VI-B). Finally (Section VII), we will present an overall algorithm that produces the desired bounds.
## V Formulating BNN Adversarial Robustness via Weight Sets
In this section, we show how a single computational framework can be leveraged to compute bounds on both definitions of BNN robustness. We start by converting the computation of robustness into weight space and then define a family of weight intervals that we leverage to bound the integrations required by both definitions. Interestingly, we find that the resulting theoretical bounds in both cases depend on the same quantities. Proofs for the main results in this section are presented in Appendix C.
### _Bounding Probabilistic Robustness_
We first show that the computation of \(P_{\text{safe}}(T,S)\) is equivalent to computing a maximal set of safe weights \(H\) such that each network associated to weights in \(H\) is safe w.r.t. the robustness specification at hand.
**Definition 3** (Maximal safe and unsafe sets).: _We say that \(H\subseteq\mathbb{R}^{n_{w}}\) is the maximal safe set of weights from \(T\) to \(S\), or simply the maximal safe set of weights, iff \(H=\{w\in\mathbb{R}^{n_{w}}\,|\,\forall x\in T,f^{w}(x)\in S\}\). Similarly, we say that \(K\subseteq\mathbb{R}^{n_{w}}\) is the maximal unsafe set of weights from \(T\) to \(S\), or simply the maximal unsafe set of weights, iff \(K=\{w\in\mathbb{R}^{n_{w}}\,|\,\exists x\in T,f^{w}(x)\not\in S\}\)._
Intuitively, \(H\) and \(K\) simply encode the input-output specifications \(S\) and \(T\) in the BNN weight space. The following lemma, which trivially follows from Equation (4), allows us to directly relate the maximal sets of weights to the probability of robustness.
**Lemma 1**.: _Let \(H\) and \(K\) be the maximal safe and unsafe sets of weights from \(T\) to \(S\). Assume that \(w\sim p(w|\mathcal{D})\). Then, it holds that_
\[P(H)=\int_{H}p(w|\mathcal{D})dw=P_{\text{safe}}(T,S)=1-\int_{K}p(w|\mathcal{D})dw=1-P(K). \tag{7}\]
Lemma 1 simply translates the robustness specification from being concerned with the input-output behaviour of the BNN to an integration on the weight space.
An exact computation of sets \(H\) and \(K\) is infeasible in general. However, we can easily compute subsets of \(H\) and \(K\). Such subsets can then be used to compute upper and lower bounds on the value of \(P_{\text{safe}}(T,S)\) by considering subsets of the maximal safe and unsafe weights.
**Definition 4** (Safe and unsafe sets).: _Given a maximal safe set \(H\) or a maximal unsafe set \(K\) of weights, we say that \(\hat{H}\) and \(\hat{K}\) are a safe and unsafe set of weights from \(T\) to \(S\) iff \(\hat{H}\subseteq H\) and \(\hat{K}\subseteq K\), respectively._
\(\hat{H}\) and \(\hat{K}\) can include _any_ safe and unsafe weights, respectively, without requiring that they be _maximal_. Without maximality, we no longer have strict equality in Lemma 1, but instead we arrive at bounds on the value of probabilistic robustness.
We proceed by defining \(\hat{H}\) and \(\hat{K}\) as the union of a family of disjoint weight intervals, as these can provide flexible approximations of \(H\) and \(K\). That is, we consider \(\mathcal{H}=\{H_{i}\}_{i=1}^{n_{H}}\), with \(H_{i}=[w_{i}^{L,H},w_{i}^{U,H}]\), and \(\mathcal{K}=\{K_{i}\}_{i=1}^{n_{K}}\), with \(K_{i}=[w_{i}^{L,K},w_{i}^{U,K}]\), such that \(H_{i}\subseteq H\) and \(K_{i}\subseteq K\), \(\hat{H}=\bigcup_{i=1}^{n_{H}}H_{i}\), \(\hat{K}=\bigcup_{i=1}^{n_{K}}K_{i}\), and \(H_{i}\cap H_{j}=\emptyset\) and \(K_{i}\cap K_{j}=\emptyset\), for any \(i\neq j\). Hence, as a consequence of Lemma 1, and by the fact that \(\hat{H}=\bigcup_{i=1}^{n_{H}}H_{i}\subseteq H\) and \(\hat{K}=\bigcup_{i=1}^{n_{K}}K_{i}\subseteq K\), we obtain the following.
**Proposition 1** (Bounds on probabilistic robustness).: _Let \(H\) and \(K\) be the maximal safe and unsafe sets of weights from \(T\) to \(S\). Consider two families of pairwise disjoint weight intervals \(\mathcal{H}=\{H_{i}\}_{i=1}^{n_{H}}\), \(\mathcal{K}=\{K_{i}\}_{i=1}^{n_{K}}\), where for all \(i\):_
\[H_{i}\subseteq H,\quad K_{i}\subseteq K. \tag{8}\]
_Let \(\hat{H}\subseteq H\) and \(\hat{K}\subseteq K\) be non-maximal safe and unsafe sets of weights, with \(\hat{H}=\bigcup_{i=1}^{n_{H}}H_{i}\) and \(\hat{K}=\bigcup_{i=1}^{n_{K}}K_{i}\). Assume that \(w\sim p(w|\mathcal{D})\). Then, it holds that_
\[P_{\text{safe}}^{L}:=\sum_{i=1}^{n_{H}}P(H_{i})\leq P_{\text{safe}}(T,S)\leq 1-\sum_{i=1}^{n_{K}}P(K_{i})=:P_{\text{safe}}^{U}, \tag{9}\]
_that is, \(P_{\text{safe}}^{L}\) and \(P_{\text{safe}}^{U}\) are, respectively, lower and upper bounds on probabilistic robustness._
Through the use of Proposition 1 we can thus bound probabilistic robustness by performing computation over sets of safe and unsafe intervals. Note that the bounds are given in the case where \(\mathcal{H}\) and \(\mathcal{K}\) are families of pairwise disjoint weight sets. The general case can be tackled by using the Bonferroni bound, which is discussed in Appendix D-D for hyper-rectangular weight sets.
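Under the disjointness assumption, the bounds of Proposition 1 reduce to sums of the posterior probabilities of the individual boxes. A minimal sketch, assuming the probability lists come from the integration procedure of Section VI-A:

```python
def probabilistic_robustness_bounds(safe_probs, unsafe_probs):
    # Equation (9): safe_probs[i] = P(H_i), unsafe_probs[i] = P(K_i),
    # with all weight boxes pairwise disjoint.
    p_lower = sum(safe_probs)           # posterior mass certified safe
    p_upper = 1.0 - sum(unsafe_probs)   # mass not certified unsafe
    return p_lower, p_upper
```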
Before explaining in detail how such bounds can be explicitly computed, first, in the next section, we show how a similar derivation leads us to analogous bounds and computations for decision robustness.
### _Bounding Decision Robustness_
The key difference between our formulation of probabilistic robustness and that of decision robustness is that, for the former, we are only interested in the behaviour of neural networks extracted from the BNN posterior that satisfy the robustness requirements (hence the distinction between \(H\)- and \(K\)-weight intervals), whereas for the computation of bounds on decision robustness we need to take into account the overall worst-case behaviour of an expected value computed for the BNN predictive distribution in order to compute sound bounds. As such, rather than computing safe and unsafe sets, we only need a family of weight sets, \(\mathcal{J}=\{J_{i}\}_{i=1}^{n_{J}}\), and rely on that for bounding \(D_{\text{safe}}(T,S)\). We explicitly show how this can be done for classification with likelihood \(\sigma\). The bound for regression follows similarly by using the identity function as \(\sigma\).
**Proposition 2** (Bounding decision robustness).: _Let \(\mathcal{J}=\{J_{i}\}_{i=1}^{n_{J}}\), with \(J_{i}\subset\mathbb{R}^{n_{w}}\), be a family of disjoint weight intervals. Let \(\sigma^{L}\) and \(\sigma^{U}\) be vectors that lower and upper bound the range of the final activation function, and \(c\in\{1,\dots,m\}\) an index spanning the BNN output dimension. Define:_
\[D_{\text{safe},c}^{L} :=\sum_{i=1}^{n_{J}}P(J_{i})\min_{\begin{subarray}{c}x\in T\\ w\in J_{i}\end{subarray}}\sigma_{c}(f^{w}(x))+\sigma^{L}\Bigg{(}1-\sum_{i=1}^ {n_{J}}P(J_{i})\Bigg{)} \tag{10}\] \[D_{\text{safe},c}^{U} :=\sum_{i=1}^{n_{J}}P(J_{i})\max_{\begin{subarray}{c}x\in T\\ w\in J_{i}\end{subarray}}\sigma_{c}(f^{w}(x))+\sigma^{U}\Bigg{(}1-\sum_{i=1}^ {n_{J}}P(J_{i})\Bigg{)}. \tag{11}\]
_Consider the vectors \(D_{\text{safe}}^{L}=[\,D_{\text{safe},1}^{L},\dots,D_{\text{safe},m}^{L}]\) and \(D_{\text{safe}}^{U}=[\,D_{\text{safe},1}^{U},\dots,D_{\text{safe},m}^{U}]\), then it holds that:_
\[D_{\text{safe}}^{L}\leq\mathbb{E}_{p(w|\mathcal{D})}[\sigma(f^{w}(x))]\leq D_ {\text{safe}}^{U}\quad\forall x\in T,\]
_that is, \(D_{\text{safe}}^{L}\) and \(D_{\text{safe}}^{U}\) are lower and upper bounds on the predictive distribution in \(T\)._
Intuitively, the first terms in the bounds of Equations (10) and (11) consider the worst-case output for the input set \(T\) and each interval \(J_{i}\), while the second term accounts for the worst-case value of the posterior mass not captured by the family of intervals \(\mathcal{J}\) by taking a coarse, overall bound on that region. The provided bound is valid for any family of intervals \(\mathcal{J}\). Ideally, however, the partition should be finer around regions of high probability mass of the posterior distribution, as these make up the dominant term in the computation of the posterior predictive. We will discuss in Section VI how we select these intervals in practice so as to empirically obtain non-vacuous bounds.
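Given per-box probabilities and per-box output extrema (obtained via the bound propagation of Section VI-B), the bounds of Equations (10) and (11) for a single output class amount to a probability-weighted sum plus a residual term. A minimal sketch, with all inputs assumed precomputed:

```python
import numpy as np

def decision_robustness_bounds(probs, sig_mins, sig_maxs,
                               sigma_lo=0.0, sigma_hi=1.0):
    # probs[i]    = P(J_i) for pairwise-disjoint weight boxes J_i
    # sig_mins[i] = min over x in T, w in J_i of sigma_c(f^w(x))
    # sig_maxs[i] = corresponding max; sigma_lo/hi bound the range of
    # the output activation (0 and 1 for a softmax component).
    probs = np.asarray(probs)
    residual = 1.0 - probs.sum()  # posterior mass not covered by the J_i
    d_lower = float(probs @ np.asarray(sig_mins) + sigma_lo * residual)
    d_upper = float(probs @ np.asarray(sig_maxs) + sigma_hi * residual)
    return d_lower, d_upper
```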
### _Computation of the Bounds_
We now propose a unified approach to computing these lower and upper bounds. We first observe that the bounds in Equations (9), (10) and (11) require the integration of the posterior distribution over weight intervals, i.e., \(P(H_{i})\), \(P(K_{i})\) and \(P(J_{i})\). While this is in general intractable, we have built the bound so that \(H_{i}\), \(K_{i}\) and \(J_{i}\) are axis-aligned hyper-rectangles, and so the computation can be done exactly for approximate Bayesian inference methods used in practice. This will be the topic of Section VI-A, where, given a rectangle in weight space of the form \(R=[w^{L},w^{U}]\), we will show how to compute \(P(R)=\int_{R}p(w|\mathcal{D})dw\).
For the explicit computation of decision robustness, the only missing ingredient is then the computation of the minimum and maximum of \(\sigma(f^{w}(x))\) for \(x\in T\) and \(w\in J_{i}\). We do this by bounding the BNN output for any given rectangle in the weight space \(R\). That is, we will compute upper and lower bounds \(y^{L}\) and \(y^{U}\) such that:
\[y^{L}\leq\min_{\begin{subarray}{c}x\in T\\ w\in R\end{subarray}}f^{w}(x)\quad y^{U}\geq\max_{\begin{subarray}{c}x\in T \\ w\in R\end{subarray}}f^{w}(x), \tag{12}\]
which can then be used to bound \(\sigma(f^{w}(x))\) by simple propagation over the softmax (if needed). The derivation of such bounds will be the subject of Section VI-B.
Finally, observe that, whereas for decision robustness we can simply select any weight interval \(J_{i}\), for probabilistic robustness one needs to make a distinction between safe sets (\(H_{i}\)) and unsafe sets (\(K_{i}\)). It turns out that this can be done by bounding the output of the BNN in each of these intervals. For example, in the case of the safe sets, by definition we have that \(\forall w\in H_{i},\forall x^{\prime}\in T\) it follows that \(f^{w}(x^{\prime})\in S\). By defining \(y^{L}\) and \(y^{U}\) as in Equation (12), we can see that it suffices to check whether \([y^{L},y^{U}]\subseteq S\). Hence, the computation of probabilistic robustness also depends on the computation of such bounds (again, discussed in Section VI-B).
Therefore, once we have shown how the computation of \(P(R)\) for any weight interval and \(y^{L}\) and \(y^{U}\) can be done, the bounds in Proposition 1 and Proposition 2 can be computed explicitly, and we can thus bound probabilistic and decision robustness. Section VII will assemble these results together into an overall computational flow of our methodology.
## VI Explicit Bound Computation
In this section, we provide details on the specific computations needed to calculate the theoretical bounds presented in Section V for probabilistic and decision robustness. We start by discussing how a family of weight intervals can be generated in practice, and how to integrate over them, in Section VI-A. In Section VI-B, we then derive a scheme based on convex-relaxation techniques for bounding the output of BNNs.
### _Integral Computation over Weight Intervals_
Key to the computation of the bounds derived in Section V is the ability to compute the integral of the posterior distribution over a combined set of weight intervals. Crucially, the shape of the weight sets \(\mathcal{H}=\{H_{i}\}_{i=1}^{n_{H}}\), \(\mathcal{K}=\{K_{i}\}_{i=1}^{n_{K}}\) and \(\mathcal{J}=\{J_{i}\}_{i=1}^{n_{J}}\) is a parameter of the method, so that it can be chosen to simplify the integral computation depending on the particular form of the approximate posterior distribution used. We build each weight interval as an axis-aligned hyper-rectangle of the form \(R=[w^{L},w^{U}]\) for \(w^{L}\) and \(w^{U}\in\mathbb{R}^{n_{w}}\).
_Weight Intervals for Decision Robustness:_ In the case of decision robustness it suffices to sample any weight interval \(J_{i}\) to compute the bounds we derived in Proposition 2. Clearly, the bound is tighter if the \(\mathcal{J}\) family is finer around the area of high probability mass for \(p(w|\mathcal{D})\). In order to obtain such a family we proceed as follows. First, we define a _weight margin_ \(\gamma>0\) that parameterises the radius of the weight intervals. We then iteratively sample weight vectors \(w_{i}\) from \(p(w|\mathcal{D})\), for \(i=1,\ldots,n_{J}\), and finally define \(J_{i}=[w_{i}^{L},w_{i}^{U}]=[w_{i}-\gamma,w_{i}+\gamma]\). The weight intervals thus defined naturally concentrate around the area of greater density for \(p(w|\mathcal{D})\), while asymptotically covering the whole support of the distribution.
_Weight Intervals for Probabilistic Robustness:_ On the other hand, for the computation of probabilistic robustness one has to make a distinction between safe weight intervals \(H_{i}\) and unsafe ones \(K_{i}\). As explained in Section V-C, this can be done by bounding the output of the BNN in each of these intervals. For example, in the case of the safe sets, by definition, \(H_{i}\) is safe if and only if \(\forall w\in H_{i},\forall x^{\prime}\in T\) we have that \(f^{w}(x^{\prime})\in S\). Thus, in order to build a family of safe (respectively unsafe) weight intervals \(H_{i}\) (\(K_{i}\)), we proceed as follows. As for decision robustness, we iteratively sample weights \(w_{i}\) from the posterior and use them to build hyper-rectangles of the form \(R_{i}=[w_{i}-\gamma,w_{i}+\gamma]\). We then propagate the input set \(T\) through the BNN for weights in \(R_{i}\) and check whether the resulting output set is (is not) a subset of \(S\). The derivation of such bounds on propagation will be the subject of Section VI-B.
Once the family of weights is computed, there remains the computation of the cumulative distribution over such sets. The explicit computations depend on the particular form of Bayesian approximate inference that is employed. We discuss explicitly the case of Gaussian variational approaches, and of sample-based posterior approximation (e.g., HMC), which covers the majority of the approximation methods used in practice [51].
_Variational Inference:_ For variational approximations, \(p(w|\mathcal{D})\) takes the form of a multi-variate Gaussian distribution over the weight space. The resulting computations reduce to the integral of a multi-variate Gaussian distribution over a finite-sized axis-aligned rectangle, which can be computed using standard methods from statistics [52]. In particular, under the common assumption of variational inference with a Gaussian distribution with diagonal covariance matrix [53], i.e., \(p(w|\mathcal{D})=\mathcal{N}(\mu,\Sigma)\), with \(\Sigma=\text{diag}(\Sigma_{1},\ldots,\Sigma_{n_{w}})\), we obtain the following result for the posterior integration:
\[P(R)=\int_{R}p(w|\mathcal{D})dw=\prod_{j=1}^{n_{w}}\frac{1}{2}\left(\text{erf}\left(\frac{\mu_{j}-w_{j}^{L}}{\sqrt{2\Sigma_{j}}}\right)-\text{erf}\left(\frac{\mu_{j}-w_{j}^{U}}{\sqrt{2\Sigma_{j}}}\right)\right). \tag{13}\]
By plugging this into the bounds of Equation (9) for \(P(H_{i})\) and \(P(K_{i})\) for probabilistic robustness and in Equations (10) and (11) for decision robustness, one obtains a closed-form formula for the bounds given weight set interval families \(\mathcal{H}\), \(\mathcal{K}\) and \(\mathcal{J}\).
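A small sketch of Equation (13), using SciPy's `erf` for the per-dimension Gaussian integrals:

```python
import numpy as np
from scipy.special import erf

def gaussian_box_probability(mu, sigma_diag, w_lo, w_hi):
    # Equation (13): product over dimensions of the per-coordinate
    # Gaussian mass in [w_lo_j, w_hi_j] for N(mu_j, Sigma_j).
    s = np.sqrt(2.0 * sigma_diag)
    per_dim = 0.5 * (erf((mu - w_lo) / s) - erf((mu - w_hi) / s))
    return float(np.prod(per_dim))
```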
_Sample-based approximations:_ In the case of sample-based posterior approximation (e.g., HMC), we have that \(p(w|\mathcal{D})\) defines a distribution over a finite set of weights. In this case we can simplify the computations by selecting the weight margin \(\gamma=0\), so that each sampled interval will be of the form \(R_{i}=[w_{i},w_{i}]\) and its probability under the discrete posterior will trivially be:
\[P(R_{i})=p(w_{i}|\mathcal{D}). \tag{14}\]
### _Bounding Bayesian Neural Networks' Output_
Given an input specification, \(T\), and a weight interval, \(R=[w^{L},w^{U}]\), the second key step in computing probabilistic and decision robustness is the bounding of the output of the BNN over \(R\) given \(T\). That is, we need to derive methods to compute \([y^{L},y^{U}]\) such that, by construction, \(\forall w\in[w^{L},w^{U}]\), \(\forall x^{\prime}\in T\) it follows that \(f^{w}(x^{\prime})\in[y^{L},y^{U}]\).
In this section, we discuss interval bound propagation (IBP) and linear bound propagation (LBP) as methods for computing the desired output set over-approximations. Before discussing IBP and LBP in detail, we first introduce common notation for the rest of the section. We consider feed-forward neural networks of the form:
\[z^{(0)}=x \tag{15}\]
\[\zeta_{i}^{(k+1)}=\sum_{j=1}^{n_{k}}W_{ij}^{(k)}z_{j}^{(k)}+b_{i}^{(k)}\qquad i=1,\ldots,n_{k+1} \tag{16}\]
\[z_{i}^{(k)}=\sigma(\zeta_{i}^{(k)})\qquad i=1,\ldots,n_{k} \tag{17}\]
for \(k=0,\ldots,K\), where \(K\) is the number of hidden layers, \(\sigma(\cdot)\) is a pointwise activation function, \(W^{(k)}\in\mathbb{R}^{n_{k}\times n_{k-1}}\) and \(b^{(k)}\in\mathbb{R}^{n_{k}}\) are the matrix of weights and vector of biases that correspond to the \(k\)th layer of the network and \(n_{k}\) is the number of neurons in the \(k\)th hidden layer. Note that, while Equations (15)-(17) are written explicitly for fully-connected layers, convolutional layers can be accounted for by embedding them in fully-connected form [41].
We write \(W_{i:}^{(k)}\) for the vector comprising the elements from the \(i\)th row of \(W^{(k)}\), and similarly \(W_{:j}^{(k)}\) for that comprising the elements from the \(j\)th column. \(\zeta^{(K+1)}\) represents the final output of the network (or the logit in the case of classification networks), that is, \(\zeta^{(K+1)}=f^{w}(x)\). We write \(W^{(k),L}\) and \(W^{(k),U}\) for the lower and upper bound induced
by \(R\) for \(W^{(k)}\) and \(b^{(k),L}\) and \(b^{(k),U}\) for those of \(b^{(k)}\), for \(k=0,\ldots,K\). Observe that \(z^{(0)}\), \(\zeta_{i}^{(k+1)}\) and \(z_{i}^{(k)}\) are all functions of the input point \(x\) and of the combined vector of weights \(w=[W^{(0)},b^{(0)},\ldots,W^{(K)},b^{(K)}]\). We omit the explicit dependency for simplicity of notation. Finally, we remark that, as both the weights and the input vary in a given set, Equation (16) defines a quadratic form.
_Interval Bound Propagation (IBP):_ IBP has already been employed for fast certification of deterministic neural networks [18]. For a deterministic network, the idea is to propagate the input box around \(x\), i.e., \(T=[x^{L},x^{U}]\), through the first layer, so as to find values \(z^{(1),L}\) and \(z^{(1),U}\) such that \(z^{(1)}\in\left[z^{(1),L},z^{(1),U}\right]\), and then iteratively propagate the bound through each consecutive layer for \(k=1,\ldots,K\). The final box constraint in the output layer can then be used to check for the specification of interest [18]. The only adjustment needed in our setting is that at each layer we also need to propagate the interval of the weight matrix \([W^{(k),L},W^{(k),U}]\) and that of the bias vector \([b^{(k),L},b^{(k),U}]\). This can be done by noticing that the minimum and maximum of each term of the bi-linear form of Equation (16), that is, of each monomial \(W^{(k)}_{ij}z^{(k)}_{j}\), lies in one of the four corners of the interval \([W^{(k),L}_{ij},W^{(k),U}_{ij}]\times[z^{(k),L}_{j},z^{(k),U}_{j}]\), and by adding the minimum and maximum values respectively attained by \(b^{(k)}_{i}\). As in the deterministic case, interval propagation through the activation function proceeds by observing that generally employed activation functions are monotonic, which permits the application of Equation (17) to the bounding interval. Where monotonicity does not hold, we can bound any activation function that has finitely many inflection points by splitting the function into piecewise monotonic functions. This is summarised in the following proposition.
**Proposition 3**.: _Let \(f^{w}(x)\) be the network defined by the set of Equations (15)-(17), and for \(k=0,\ldots,K\) let:_
\[t^{(k),L}_{ij}=\min\left\{W^{(k),L}_{ij}z^{(k),L}_{j},\,W^{(k),U}_{ij}z^{(k),L}_{j},\,W^{(k),L}_{ij}z^{(k),U}_{j},\,W^{(k),U}_{ij}z^{(k),U}_{j}\right\}\]
\[t^{(k),U}_{ij}=\max\left\{W^{(k),L}_{ij}z^{(k),L}_{j},\,W^{(k),U}_{ij}z^{(k),L}_{j},\,W^{(k),L}_{ij}z^{(k),U}_{j},\,W^{(k),U}_{ij}z^{(k),U}_{j}\right\}\]
_where \(i=1,\ldots,n_{k+1}\), \(j=1,\ldots,n_{k}\), and \(z^{(k),L}=\sigma(\zeta^{(k),L})\), \(z^{(k),U}=\sigma(\zeta^{(k),U})\) and:_
\[\zeta_{i}^{(k+1),L}=\sum_{j}t^{(k),L}_{ij}+b_{i}^{(k),L}\qquad\zeta_{i}^{(k+1),U}=\sum_{j}t^{(k),U}_{ij}+b_{i}^{(k),U}.\]
_Then we have that \(\forall x\in T\) and \(\forall w\in R\):_
\[f^{w}(x)=\zeta^{(K+1)}\in\left[\zeta^{(K+1),L},\zeta^{(K+1),U}\right].\]
The proposition above, whose proof is given in Appendix C-C, yields a bounding box for the output of the neural network in \(T\) and \(R\).
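The corner enumeration of Proposition 3 is straightforward to vectorise. Below is a minimal sketch of one IBP step through the bilinear form of Equation (16), followed by interval propagation through a monotone (ReLU) activation; array shapes and function names are ours:

```python
import numpy as np

def ibp_bilinear_layer(z_lo, z_hi, W_lo, W_hi, b_lo, b_hi):
    # Bound each monomial W_ij * z_j over the box
    # [W_lo_ij, W_hi_ij] x [z_lo_j, z_hi_j] via its four corners
    # (Proposition 3), then sum rows and add the bias interval.
    # Shapes: z_* is (n_in,), W_* is (n_out, n_in), b_* is (n_out,).
    corners = np.stack([W_lo * z_lo, W_lo * z_hi, W_hi * z_lo, W_hi * z_hi])
    zeta_lo = corners.min(axis=0).sum(axis=1) + b_lo
    zeta_hi = corners.max(axis=0).sum(axis=1) + b_hi
    return zeta_lo, zeta_hi

def interval_relu(zeta_lo, zeta_hi):
    # Monotone activation: apply elementwise to the interval endpoints.
    return np.maximum(zeta_lo, 0.0), np.maximum(zeta_hi, 0.0)
```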
_Linear Bound Propagation (LBP):_ We now discuss how LBP can be used to lower-bound the BNN output over \(T\) and \(R\) as an alternative to IBP. In LBP, instead of propagating bounding boxes, one finds lower and upper Linear Bounding Functions (LBFs) for each layer and then propagates them through the network. As the bounding function has an extra degree of freedom w.r.t. the bounding boxes obtained through IBP, LBP usually yields tighter bounds, though at an increased computational cost. Since in deterministic networks non-linearity comes only from the activation functions, LBFs in the deterministic case are computed by bounding the activation functions, and propagating the bounds through the affine function that defines each layer.
Similarly, in our setting, given \(T\) in the input space and \(R\) for the first layer in the weight space, we start with the observation that LBFs can be obtained and propagated through commonly employed activation functions for Equation (17), as discussed in [41].
**Lemma 2**.: _Let \(f^{w}(x)\) be defined by Equations (15)-(17). For each hidden layer \(k=1,\ldots,K\), consider a bounding box in the pre-activation function, i.e. such that \(\zeta_{i}^{(k)}\in[\zeta_{i}^{(k),L},\zeta_{i}^{(k),U}]\) for \(i=1,\ldots,n_{k}\). Then there exist coefficients \(\alpha_{i}^{(k),L}\), \(\beta_{i}^{(k),L}\), \(\alpha_{i}^{(k),U}\) and \(\beta_{i}^{(k),U}\) of lower and upper LBFs on the activation function such that for all \(\zeta_{i}^{(k)}\in[\zeta_{i}^{(k),L},\zeta_{i}^{(k),U}]\) it holds that:_
\[\alpha_{i}^{(k),L}\zeta_{i}^{(k)}+\beta_{i}^{(k),L}\leq\sigma(\zeta_{i}^{(k)}) \leq\alpha_{i}^{(k),U}\zeta_{i}^{(k)}+\beta_{i}^{(k),U}.\]
The lower and upper LBFs can thus be minimised and maximised to propagate the bounds of \(\zeta^{(k)}\) in order to compute a bounding interval \([z^{(k),L},z^{(k),U}]\) for \(z^{(k)}=\sigma(\zeta^{(k)})\). Then, LBFs for the monomials of the bi-linear form of Equation (16) can be derived using McCormick's inequalities [54]:
\[W^{(k)}_{ij}z^{(k)}_{j}\geq W^{(k),L}_{ij}z^{(k)}_{j}+W^{(k)}_{ ij}z^{(k),L}_{j}-W^{(k),L}_{ij}z^{(k),L}_{j} \tag{18}\] \[W^{(k)}_{ij}z^{(k)}_{j}\leq W^{(k),U}_{ij}z^{(k)}_{j}+W^{(k)}_{ ij}z^{(k),L}_{j}-W^{(k),U}_{ij}z^{(k),L}_{j} \tag{19}\]
for every \(i=1,\ldots,n_{k}\), \(j=1,\ldots,n_{k-1}\) and \(k=1,\ldots,K\). The bounds of Equations (18)-(19) can thus be used in Equation (16) to obtain LBFs on the pre-activation function of the following layer, i.e. \(\zeta^{(k+1)}\). The final linear bound can be obtained by iterating the application of Lemma 2 and Equations (18)-(19) through every layer. This is summarised in the following proposition, which is proved in Appendix C along with an explicit construction of the LBFs.
**Proposition 4**.: _Let \(f^{w}(x)\) be the network defined by the set of Equations (15)-(17). Then for every \(k=0,\ldots,K\) there exists lower and upper LBFs on the pre-activation function of the form:_
\[\zeta_{i}^{(k+1)}\geq\mu_{i}^{(k+1),L}\cdot x+\sum_{l=0}^{k-1}\langle\nu_{i}^{(l,k+1),L},W^{(l)}\rangle+\nu_{i}^{(k,k+1),L}\cdot W_{i}^{(k)}+\lambda_{i}^{(k+1),L}\quad\text{for }i=1,\ldots,n_{k+1}\]
\[\zeta_{i}^{(k+1)}\leq\mu_{i}^{(k+1),U}\cdot x+\sum_{l=0}^{k-1}\langle\nu_{i}^{(l,k+1),U},W^{(l)}\rangle+\nu_{i}^{(k,k+1),U}\cdot W_{i}^{(k)}+\lambda_{i}^{(k+1),U}\quad\text{for }i=1,\ldots,n_{k+1}\]
_where \(\langle\cdot,\cdot\rangle\) is the Frobenius product between matrices, \(\cdot\) is the dot product between vectors, and the explicit formulas for the LBF coefficients, i.e., \(\mu_{i}^{(k+1),L}\), \(\nu_{i}^{(l,k+1),L}\), \(\lambda_{i}^{(k+1),L}\), \(\mu_{i}^{(k+1),U}\), \(\nu_{i}^{(l,k+1),U}\), are given in Appendix C-D._
_Now let \(\zeta_{i}^{(k),L}\) and \(\zeta_{i}^{(k),U}\), respectively, be the minimum and the maximum of the right-hand side of the two equations above; then we have that \(\forall x\in T\) and \(\forall w\in R\):_
\[f^{w}(x)=\zeta^{(K+1)}\in\left[\zeta^{(K+1),L},\zeta^{(K+1),U}\right].\]
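As a quick numerical sanity check of the McCormick inequalities (18)-(19) underpinning LBP, one can sample points in a one-dimensional weight-activation box and verify that the two linear envelopes bound the bilinear term; the box endpoints below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W_lo, W_hi, z_lo, z_hi = -0.5, 0.3, 0.1, 0.9  # an arbitrary example box

# Sample (W, z) in the box and check that the linear McCormick
# envelopes of Equations (18)-(19) bound the bilinear term W*z.
W = rng.uniform(W_lo, W_hi, size=1000)
z = rng.uniform(z_lo, z_hi, size=1000)
lower = W_lo * z + W * z_lo - W_lo * z_lo   # Eq. (18), linear in (W, z)
upper = W_hi * z + W * z_lo - W_hi * z_lo   # Eq. (19), linear in (W, z)
assert np.all(lower <= W * z + 1e-12) and np.all(W * z <= upper + 1e-12)
```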
```
1:  # \(\mathcal{H}\) is the family of known safe weight intervals
2:  \(\mathcal{H}\leftarrow\varnothing\)
3:  # Elementwise product to obtain the width of the weight margin
4:  \(v\leftarrow\gamma\cdot I\cdot\Sigma\)
5:  for \(i\gets 0\) to \(N\) do
6:      \(w^{(i)}\gets p(w|\mathcal{D})\)
7:      # Assume weight intervals are built to be disjoint
8:      \([w^{(i),L},w^{(i),U}]\leftarrow[w^{(i)}-v,w^{(i)}+v]\)
9:      # Interval/Linear Bound Propagation, Section VI-B
10:     \(y^{L},y^{U}\leftarrow\) Propagate(\(f,T,[w^{(i),L},w^{(i),U}]\))
11:     if \([y^{L},y^{U}]\subseteq S\) then
12:         \(\mathcal{H}\leftarrow\mathcal{H}\cup\{[w^{(i),L},w^{(i),U}]\}\)
13:     end if
14: end for
15: \(P_{\text{safe}}^{L}\gets 0.0\)
16: for \([w^{(i),L},w^{(i),U}]\in\mathcal{H}\) do
17:     # Compute safe weight probabilities, Section VI-A
18:     \(P_{\text{safe}}^{L}\gets P_{\text{safe}}^{L}+P([w^{(i),L},w^{(i),U}])\)
19: end for
20: return \(P_{\text{safe}}^{L}\)
```
**Algorithm 1** Lower Bounds for BNN Probabilistic Robustness
## VII Complete Bounding Algorithm
Using the results presented in Section VI, it is possible to explicitly compute the bounds on probabilistic and decision robustness derived in Section V. In this section, we bring together all the results discussed so far, and assemble complete algorithms for the computation of bounds on \(P_{\text{safe}}(T,S)\) and \(D_{\text{safe}}(T,S)\). We discuss the procedure to lower bound \(P_{\text{safe}}(T,S)\) in Algorithm 1. We then discuss the details of upper bounds and bounds on \(D_{\text{safe}}(T,S)\), leaving the algorithms and their description for these bounds to Appendix D.
### _Lower Bounding Algorithm_
We provide a step-by-step outline for how to compute lower bounds on \(P_{\text{safe}}(T,S)\) in Algorithm 1. We start on line 2 by initializing the family of safe weight sets \(\mathcal{H}\) to be the empty set and by scaling the weight margin with the posterior weight scale (line 4). We then iteratively (line 5) proceed by sampling weights from the posterior distribution (line 6), building candidate weight boxes (line 8), and propagating the input and weight box through the BNN (line 10). We next check whether the propagated output set is inside the safe output region \(S\), and if so update the family of weights \(\mathcal{H}\) to include the weight box currently under consideration (lines 11 and 12). Finally, we rely on the results in Section VI-A to compute the overall probabilities over all the weight sets in \(\mathcal{H}\), yielding a valid lower bound for \(P_{\text{safe}}(T,S)\). For clarity of presentation, we assume that all the weight boxes that we sample in lines 6-8 are pairwise disjoint, as this simplifies the probability computation. The general case with overlapping weight boxes relies on the Bonferroni bound and is given in Appendix D-D. The algorithm for the computation of the lower bound on \(D_{\text{safe}}(T,S)\) (listed in Appendix D as Algorithm 2) proceeds in an analogous way, but without the need to perform the check in line 11, and by adjusting line 18 to the formula from Proposition 2.
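For concreteness, the following condensed sketch mirrors Algorithm 1 for a diagonal-Gaussian (VI) posterior. Here `sample_posterior`, `propagate` and `S_contains` are assumed helpers (a posterior sampler, an IBP/LBP routine from Section VI-B, and a polytope membership check), `gaussian_box_probability` is the erf-based integral sketched in Section VI-A, and the sampled boxes are assumed pairwise disjoint:

```python
def lower_bound_psafe(mu, sigma_diag, sample_posterior, propagate,
                      T, S_contains, gamma, N):
    # Weight margin scaled elementwise by the posterior scale (line 4).
    v = gamma * sigma_diag
    p_safe_lower = 0.0
    for _ in range(N):
        w = sample_posterior()          # line 6: w ~ p(w|D)
        box = (w - v, w + v)            # line 8: candidate weight interval
        y_lo, y_hi = propagate(T, box)  # line 10: IBP/LBP output bounds
        if S_contains(y_lo, y_hi):      # lines 11-12: box certified safe
            # line 18: accumulate the posterior mass of the safe box
            p_safe_lower += gaussian_box_probability(mu, sigma_diag, *box)
    return p_safe_lower
```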
### _Upper Bounding Algorithm_
Upper bounding \(P_{\text{safe}}(T,S)\) and \(D_{\text{safe}}(T,S)\) follows the same computational flow as Algorithm 1. The pseudocode for the computation of probabilistic and decision robustness is listed in Algorithms 3 and 4, respectively, in Appendix D subsections A and B. We again proceed by sampling rectangles around weights, propagating bounds through the NN, and computing the probabilities of weight intervals. The key changes that allow upper bound computation involve computing the best case, rather than the worst case, for \(y\) for decision robustness (line 12 in Algorithm 3), and ensuring that the entire interval \([y^{L},y^{U}]\) lies outside \(S\) (line 18) for probabilistic robustness. In Appendix D subsection B we also discuss how adversarial attacks can be leveraged to improve the upper bounds.
### _Computational Complexity_
Calculations for probabilistic robustness and decision robustness follow the same computational flow and include: bounding of the neural network output, sampling from the posterior distribution, and computation of integrals over boxes on the input and weight space.
Regarding Algorithm 1 (or equivalently Algorithm 2 for decision robustness), it is clear that the computational complexity scales linearly with the number of samples, \(N\), taken from the posterior distribution. Observe that, in order to obtain a tight bound on the integral computation, \(N\) needs to be large enough that \(N\) samples of the posterior with width \(\gamma\) span an area of high probability mass for \(p(w|\mathcal{D})\). Unfortunately, this means that, for a given approximation error magnitude, \(N\) needs to scale quadratically in the number of hidden neurons. Given the sampling of the hyper-rectangles, computation of the integral over the weight boxes is done through Equations (13) and (14). The integration over the weight boxes is done in constant time for HMC (though a good quality HMC posterior approximation scales with the number of parameters) and in \(\mathcal{O}(n_{w})\) for VI. The final step needed for the methodology is that of bound propagation, whose cost clearly differs between IBP and LBP. In particular, the cost of performing IBP is \(\mathcal{O}(K\hat{n}\hat{m})\), where \(K\) is the number of hidden layers and \(\hat{n}\times\hat{m}\) is the size of the largest weight matrix \(W^{(k)}\), for \(k=0,\ldots,K\). LBP is, on the other hand, \(\mathcal{O}(K^{2}\hat{n}\hat{m})\). Overall, the time complexity for certifying a VI BNN with IBP is therefore \(\mathcal{O}(Nn_{w}K\hat{n}\hat{m})\), and similar formulas can be obtained for alternative combinations of inference and propagation techniques. We remark that, while sound, the bounds we compute are not guaranteed to converge to the true values of \(P_{\text{safe}}(T,S)\) and \(D_{\text{safe}}(T,S)\) in the limit of the number of samples \(N\), because of the over-approximation errors introduced by bound propagation.
## VIII Experiments
In this section, we empirically investigate the suitability of our certification framework for the analysis of probabilistic and decision robustness in BNNs. We focus our analysis on four different case studies. First, we provide a comprehensive evaluation of an airborne collision avoidance system [24]. To do so, we partition the entire input space into 1.8 million different specifications and bound \(P_{\text{safe}}(T,S)\) and \(D_{\text{safe}}(T,S)\) by computing the bounds for each specification. We then turn our attention to an industrial regression benchmark [25] and demonstrate how our analysis can provide a tight characterization of the worst-case error of predictions in adversarial settings in relation to the magnitude of the maximum attack allowed. Next, we analyse the scalability of our method on the well-known MNIST dataset for handwritten digit recognition [55], along with its behaviour on out-of-distribution input samples. Finally, we study a two-class subset of the German Traffic Sign Recognition Benchmark (GTSRB) dataset [26], whose input space is 1,500 dimensions larger than what has previously been studied for BNN certification against adversarial examples, showcasing that we are still able to compute non-trivial guarantees in this setting. For each dataset, we first describe the problem setting and the BNN used to solve it, along with its hyperparameters. We then discuss the properties of interest for each dataset. Finally, we provide discussion and illustration of our bounds' performance. All the experiments have been run on 4 NVIDIA 2080Ti GPUs in conjunction with four 24-core Intel Xeon 6230 CPUs.
### _Airborne Collision Avoidance_
Our first case study is the Horizontal airborne Collision Avoidance System (HCAS) [24], which comes with a dataset composed of millions of labelled examples of intruder encounter scenarios.
#### Viii-A1 Problem Setting
The task of the BNN is to predict a turn advisory for an aircraft given the state of another, oncoming aircraft; the possible advisories are clear of conflict (COC), weak left (WL), weak right (WR), strong left (SL), and strong right (SR). These are depicted in the top left of Figure 3. We follow the learning procedure described in [24], where encounter scenarios are partitioned into 40 distinct datasets. We then learn a BNN to predict the correct advisories for each dataset, resulting in 40 different BNNs which need to be analysed.
To analyze the system of 40 BNNs, we first discretize the entire state-space into 1.8 million mutually exclusive input specifications. The input specifications are sized according to the spacing of the ground truth labels supplied by [24]. Namely, we consider an \(\ell_{\infty}\) norm ball with a different width for each input dimension; those widths are \([0.016,0.025,0.025,0.05]\). The output specification is taken to be the set of all softmax vectors such that the argmax of the softmax corresponds to the true label. We separate these output specifications into 5 different properties, which we term \(\phi_{j}\) for \(j=0,\ldots,4\), corresponding to each of the possible advisories. For all properties in this section we use LBP with 5 samples and a weight margin of 2.5 standard deviations.
We train BNNs with Variational Online Gauss Newton (VOGN), where the posterior approximation is a diagonal covariance Gaussian, and with Hamiltonian Monte Carlo (HMC). The BNN architecture has a single hidden layer with 125 hidden units, the same size as the original system proposed in [24]. We use a diagonal covariance Gaussian prior with variance 0.5 for VOGN and a prior variance of 2.5 for HMC.
#### Viii-A2 Analysis with \(P_{\text{safe}}(T,S)\) Certification
For each of the 1.8 million disjoint input specification, we compute both upper and lower bounds on \(P_{\text{safe}}(T,S)\). Given that probabilistic robustness is a real-valued probability and not a binary predicate, practitioners must select thresholds that reflect a strong belief that a value is safe or unsafe. We call these thresholds \(\tau_{\text{safe}}\)
Fig. 4: **Left:** Box plots showing the distribution of upper and lower bounds for VI (top) and HMC (bottom). **Right:** Histograms showing gap between upper and lower bounds for VI (top) and HMC (bottom).
Fig. 3: **Top Left:** Encounter geometry, ground truth and property labels: Clear of Conflict (COC), Strong Left/Right (SL/R), Weak Left/Right (WL/R), for a collision scenario. Diagrams modified from [24]. **Bottom Left:** Encounter geometry labelled with features used for collision avoidance prediction. **Right:** Bounds on decision robustness obtained for HMC and VI trained BNNs for each property.
and \(\tau_{\text{unsafe}}\). Once one has computed bounds on \(P_{\text{safe}}(T,S)\), the proportions of safe and unsafe states (as reported in Table I) can be computed by checking thresholds. We check our bounds against strict safety and unsafety thresholds \(\tau_{\text{safe}}=0.98,\tau_{\text{unsafe}}=0.05\).
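For concreteness, the thresholding step can be written as a small helper (a sketch; the function name and the shape of `bounds` are our own) that turns certified bound pairs into the certified-safe/certified-unsafe/undecided counts from which a report such as Table I is assembled:

```python
def classify_specs(bounds, tau_safe=0.98, tau_unsafe=0.05):
    """Sort (lower, upper) bound pairs on P_safe(T, S) into certified categories."""
    safe = sum(lo >= tau_safe for lo, up in bounds)
    unsafe = sum(up <= tau_unsafe for lo, up in bounds)
    return {"safe": safe, "unsafe": unsafe, "undecided": len(bounds) - safe - unsafe}
```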
For the selected threshold values, Table I reports the certified performance of the BNN system. Such a report can be used by regulators and practitioners to determine if the system is safe for deployment. In this case, we find that across all properties 74% of the states are certified to be safe while 20% are certified to be unsafe. The remaining 6% fall somewhere in between the two safety thresholds. These statistics indicate that roughly 18% of the decisions issued by the system were correct but not robust; thus the system's accuracy of 92% does not paint the complete picture of its performance. Moreover, we break down each of the properties of the system, represented by the rows of Table I, to understand where the most common failure modes occur. We find that the least safe properties are strong left, \(\phi_{3}\), and strong right, \(\phi_{4}\), for which the system has the lowest certified safety at 53.4% and 55.0%, respectively. They also have the highest certified unsafety at 34.0% and 37.0%, respectively. We conjecture that these specifications are less safe because there is less labeled data representing them in the dataset; less data has been shown to be correlated with less robustness for BNNs [10]. If the results in Table I are deemed insufficient for deployment, then practitioners can collect more data for unsafe properties, e.g., \(\phi_{3}\) and \(\phi_{4}\), or could resort to certified safe training for BNNs as suggested in [27].
#### Viii-A3 Analysis with \(D_{\text{safe}}(T,S)\) Certification
In order to analyze the decision robustness of the BNNs, we again discretize the input space. For these results, we use a coarser discretization, with the input specification being an \(\ell_{\infty}\) ball of radius \(0.125\) over each input dimension, and for the sake of computational efficiency we allow some gaps between the input specifications. For each specification, we compute upper and lower bounds on \(D_{\text{safe}}(T,S)\). We plot the resulting bounds on decision robustness in Figures 3 and 4. In the right-hand portion of Figure 3 we visualize the average lower bound on decision robustness for two BNNs, one trained with HMC (yellow) and the other trained with VI (green). We find that we are able to certify a higher lower bound, indicating heightened robustness, for the HMC-trained BNN. This corroborates previous robustness studies that highlight that HMC is more adversarially robust [10, 11]. In Figure 4 we analyze the tightness of our bounds in this scenario by comparing the lower and upper bounds provided by our method. For VI, the gap, plotted in purple in the upper right, is tightly centered around a mean of \(0.08\), with the maximum gap observed in these experiments being approximately \(0.16\) and the minimum \(0.035\). For HMC, on the other hand, the mean gap is \(0.11\), which is higher than for VI, but this mean is affected by a small proportion of inputs that have a very high gap between upper and lower bounds, the highest being \(0.72\). We further highlight the higher-variance bound distribution for the HMC-trained BNN (plotted as blue and red box plots). We hypothesize that this arises due to the higher predictive uncertainty of HMC in areas with little data [7].
#### Viii-A4 Computational Requirements
For VI certification, we can compute upper and lower bounds in an average of 0.544 seconds. Thus, when run in serial mode, the 3.6 million probabilistic bound computations needed for Table I take an estimated 11.347 days of computation. However, our parallelized certification procedure produces Table I in under 3 days of computational time (61 hours). These computations include the 1.8 million lower bound runs and 1.8 million upper bound runs. For HMC, on the other hand, certification can be done in a fraction of the time, with bounds being computed in 0.073 seconds. This is due to the fact that weight intervals for HMC necessarily satisfy the pairwise disjoint precondition of Proposition 1, thus no Bonferroni correction is needed.
### _Industrial Benchmarks_
We now focus our analyses on two safety-critical industrial regression problems taken from the UCI database [25], and widely employed for benchmarking of Bayesian inference methods [53, 56, 57].
#### Viii-B1 Problem Setting
The _Concrete_ dataset involves predicting the compressive strength of concrete based on 8 key factors, including its ingredients and age. The _Powerplant_ dataset uses six years' worth of observations from combined cycle power plants and poses the problem of predicting energy output from a plant given a range of environmental and plant parameters. For each dataset we learn a BNN using the architecture (i.e., a single hidden layer with 100 hidden units) and inference settings proposed in [53]. The BNNs are inferred using VOGN with a diagonal covariance prior over the weights, with variance 0.5 for the _Concrete_ dataset and 0.25 for the _Powerplant_ dataset. We use a Gaussian likelihood corresponding to a mean squared error loss function. In this setting we use IBP with 10 samples and a weight margin of 2 standard deviations.
#### Viii-B2 Analysis with \(P_{\text{safe}}(T,S)\) Certification
In industrial applications it is useful to understand the maximum amount of adversarial noise that a learned system can tolerate, as failures can be costly and unsafe [58]. To this end, we introduce the maximum robust radius and the minimum unrobust radius. Given a threshold \(\tau_{\text{safe}}\) (as before), the maximum robust radius (MaxRR) is the largest \(\ell_{\infty}\) radius for which we can certify that the BNN satisfies \(P_{\text{safe}}(T,S)>\tau_{\text{safe}}\). Similarly, the minimum unrobust radius (MinUR) is the smallest radius such that we can certify \(P_{\text{safe}}(T,S)<\tau_{\text{unsafe}}\). The MaxRR gives us a safe lower bound on the amount of adversarial noise a BNN is robust against, whereas the MinUR gives us a corresponding upper bound.
In our experiments on these datasets we considered \(\tau_{\text{safe}}=\tau_{\text{unsafe}}=0.7\), meaning that we request that over \(70\%\) of the BNN probability mass is certifiably safe; however, we stress that similar results can be obtained for different values of \(\tau_{\text{safe}}\), similarly to what is discussed in our previous analysis of the HCAS dataset. In order to compute the MaxRR we start with \(\epsilon=0\), check that \(P_{\text{safe}}(T,S)>\tau_{\text{safe}}\) using our lower bound, and if the inequality is satisfied we increase \(\epsilon\) by 0.01 and continue this process until the inequality no longer holds. Similarly, for the MinUR we start with \(\epsilon=0.5\) and iteratively decrease the value of \(\epsilon\) until the upper bound no longer certifies that \(P_{\text{safe}}(T,S)<\tau_{\text{unsafe}}\); if the property does not hold at \(0.5\), one can instead increase the value of \(\epsilon\) until the bound holds.
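The two searches can be written in a few lines. In the sketch below, `psafe_lower` and `psafe_upper` are assumed callables that return certified lower/upper bounds on \(P_{\text{safe}}(T,S)\) for an \(\ell_{\infty}\) ball of the given radius (e.g., wrapping the bounding routine of Algorithm 1); the names and the default step size are illustrative.

```python
def max_robust_radius(psafe_lower, tau_safe=0.7, step=0.01, eps_max=0.5):
    """Largest certified-safe radius: grow eps while the lower bound stays above tau_safe."""
    eps = 0.0
    while eps + step <= eps_max and psafe_lower(eps + step) > tau_safe:
        eps += step
    return eps

def min_unrobust_radius(psafe_upper, tau_unsafe=0.7, step=0.01, eps_start=0.5):
    """Smallest certified-unsafe radius: shrink eps while the upper bound stays below
    tau_unsafe (assumes the property is certified at eps_start)."""
    eps = eps_start
    while eps - step > 0 and psafe_upper(eps - step) < tau_unsafe:
        eps -= step
    return eps
```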
The results of computing the MaxRR and MinUR over the test datasets for _Concrete_ and _Powerplant_ are plotted in Figure 5. We highlight that in the overwhelming majority of cases our method is able to return non-vacuous bounds on MinUR (i.e., strictly less than \(1\)) and on MaxRR (i.e., strictly greater than \(0\)). As expected, we observe that the MaxRR is strictly smaller than the MinUR. Encouragingly, as MinUR grows, MaxRR tends to increase, indicating that our bounds track the true value of \(P_{\text{safe}}(T,S)\). We see that the _Concrete_ dataset is typically guaranteed to be robust for radius \(\epsilon\approx 0.03\) and is typically guaranteed to be unsafe for \(\epsilon\approx 0.06\). For the _Powerplant_ posterior we compute a MaxRR of roughly \(0.18\) for most inputs and a MinUR lower than \(0.32\). Notice how the results for the _Concrete_ dataset systematically display more robustness than those for _Powerplant_, and the gap between MaxRR and MinUR is significantly smaller in the former dataset than in the latter.
#### Viii-B3 Computational Requirements
On average, it takes 1.484 seconds to compute a certified upper or lower bound on \(P_{\text{safe}}(T,S)\) for the _Powerplant_ dataset and 1.718 seconds for the _Concrete_ dataset. We use a linear search in order to compute the MaxRR and MinUR, which requires, on average, 5 certifications for each of the MaxRR and MinUR computations. We compute these values over the entire test sets for both _Powerplant_ and _Concrete_, which requires tens of thousands of certifications; each input can be processed in parallel.
### _MNIST_
We investigate the suitability of our methods for providing certifications of BNNs on larger input domains, specifically BNNs learnt for MNIST, a standard benchmark for the verification of deterministic neural networks, whose inputs are 784-dimensional. In this setting we use IBP with 5 weight samples and a weight margin of 2.5 standard deviations.
#### Viii-C1 Problem Setting
MNIST poses the problem of handwritten digit recognition. Given handwritten digits encoded as 28 by 28 black and white images, the task is to predict which digit - 0 through 9 - is depicted in each image (two images randomly sampled from the dataset are reproduced in the far left of Figure 6). We learn BNNs using the standard 50,000/10,000 train/test split provided in the original work [55]. For our experimental analysis, we use a one-layer neural network with 128 hidden neurons, each of which uses a rectified linear unit activation function. The BNN has 10 output neurons that use a softmax activation function. We train the network using VOGN with a diagonal covariance Gaussian prior that has variance 2.0. We use a sparse categorical cross-entropy loss modified with the method presented in [27] to promote robustness in the BNN posteriors.
#### Viii-C2 Analysis using \(D_{\text{safe}}(T,S)\) Certification
We analyze the trained BNN using decision robustness on 1000 images taken from the MNIST test dataset. We compute bounds on \(D_{\text{safe}}(T,S)\) for increasing widths of an \(\ell_{\infty}\) input region \(\epsilon\). We plot the mean and standard deviation obtained for the upper (\(D_{\text{safe}}^{U}\), in red) and lower bound (\(D_{\text{safe}}^{L}\), in blue) on decision robustness for the ground truth label of each image in the left hand portion of Figure 6. As greater \(\epsilon\) implies a larger input
Fig. 5: Computation of the minimum (MinUR) and maximum (MaxRR) safe radius for _Concrete_ and _Powerplant_ datasets. **Left:** Boxplots for the empirical distribution of MinUR and MaxRR computed over all test inputs. **Centre:** Per-test-instance certified radii for the _Concrete_ dataset. **Right:** Per-test-instance certified radii for the _Powerplant_ dataset.
specification \(T\), increasing values of \(\epsilon\) lead to a widening of the gap between the lower and upper bounds, and hence to an increased vulnerability of the network. Notice that even for \(\epsilon=0.25\), i.e., half of the whole input space, our method still obtains, on average, non-vacuous bounds (i.e., strictly within \((0,1)\)). In order to get a rough estimation of the adversarial robustness of the network, we observe that, for lower bound values above \(0.5\), the BNN is guaranteed to correctly classify all the inputs in the region \(T\) (however, as MNIST has 10 classes, even values of the lower bound below \(0.5\) could still result in correct classification). Using the \(0.5\) threshold, we notice that our method guarantees that the BNN is still robust on average for \(\epsilon=0.075\). Notice that this is on par with results obtained for the verification of deterministic neural networks, where \(\epsilon=0.05\) leads to adversarial attack robustness of around \(70\%\) [17].
#### Viii-C3 Certification of Uncertainty Behaviour
In this section we study how to certify the uncertainty behavior of a BNN in the presence of adversarial noise. We assume we have an out-of-distribution input, i.e., an input whose ground-truth does not belong to any of the classes in the range of the learned model. As with previous specifications, we build the set \(T\) around such an input with an \(\ell_{\infty}\) ball of radius \(\epsilon\). Unlike for the previous specifications, we build \(S\) as the set of all softmax vectors such that no entry in the vector is larger than a specified value \(\tau_{\text{uncertain}}\). The function of \(\tau_{\text{uncertain}}\) is to determine the confidence at which a classification is ruled to be uncertain. For example, in Figure 6 we have set \(\tau_{\text{uncertain}}=0.4\), thus any classification that is made with confidence \(<0.4\) will be ruled uncertain. By certifying that all values of \(T\) are mapped into \(S\), we guarantee that the BNN is uncertain on all points around the out-of-distribution input.
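This check reduces to a single comparison once output bounds are available. The sketch below assumes a routine (here called `softmax_upper_bounds`, an illustrative name) that returns certified per-class upper bounds on the softmax output over the whole region \(T\):

```python
def certify_uncertain(softmax_upper_bounds, eps, tau_uncertain=0.4):
    """Certify that the BNN stays uncertain on the eps-ball around an OOD input:
    sound whenever every class's certified softmax upper bound is below tau_uncertain."""
    upper = softmax_upper_bounds(eps)  # one certified upper bound per class
    return all(u < tau_uncertain for u in upper)
```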
In the right half of Figure 6, we plot two example images from the FashionMNIST dataset, which are out-of-distribution for the BNN trained on MNIST. In our experiments we use 1000 test set images from the FashionMNIST dataset. To the right of the out-of-distribution samples in Figure 6, we plot the bounds on decision robustness for various values of \(\epsilon\) for the \(\ell_{\infty}\) ball. We start by noticing that the BNN never outputs a confidence of more than \(\sim 0.25\) on the clean FashionMNIST dataset, which indicates that the network has well-calibrated uncertainty on these samples. We notice that up to \(\epsilon=0.06\) we certify that no adversary can perturb the image to force a confident classification; however, at \(\epsilon=0.10\) no guarantees can be made.
#### Viii-C4 Architecture Width and Depth
We now analyse the behaviour of our method when computing bounds on the certified radius on MNIST while varying the width and depth of the BNN architecture. The results of this analysis are given in Figure 7. Notice that we are able to obtain non-vacuous bounds in all the cases analysed. However, as could be expected, we see that the gap between MinUR and MaxRR widens as we increase the depth and/or the width of the neural network. This inevitably arises from the fact that the tightness of bound propagation techniques decreases as we need to perform more boundings and/or propagations, and because increasing the number of weights in the network renders the bounds obtained by Proposition 2 more coarse, particularly as we increase the number of layers of the BNN, as explained in Section VII-C. In particular, we observe that MinUR increases drastically as we increase the number of layers in the BNN architecture, while, empirically, the bounding for MaxRR is more stable w.r.t. the architecture parameters.
#### Viii-C5 Computational Requirements
On average, it takes 24.765 seconds to verify an MNIST image on a single CPU core. Each of the images in our experiments is run in parallel across 96 cores, which allows us to compute all of the results for Figure 6 in less than an hour.
### _German Traffic Sign Recognition_
In this section, we investigate the ability of our method to scale to a full-colour image dataset, which represents safety-critical tasks with high-dimensional inputs (\(2,352\) dimensions).
Fig. 6: **Left:** Mean and standard deviation of upper and lower bounds obtained on \(D_{\text{safe}}\) on 1000 images taken from the MNIST test set. **Right:** Mean and standard deviation on upper and lower bound on decision robustness on out-of-distribution samples taken from the FashionMNIST dataset.
Fig. 7: **Left:** Boxplots for the empirical distribution of the maximum safe radius and minimum unsafe radius for a BNN with 128 hidden units and a single hidden layer. **Right:** For a range of architectures, we plot the mean certified safe and unsafe radius.
#### Viii-D1 Problem Setting
We study BNNs on a two-class subset of the German Traffic Sign Recognition Benchmark (GTSRB), consisting of the images that represent the 'construction ahead' and '50 km/h speed limit' signs [26]. Though this dataset comprises only two classes, full-colour images stretch the capabilities of BNN training methods, especially robust Bayesian inference. The dataset is comprised of 5000 training images and 1000 test set images. We employ VOGN to train a Bayesian convolutional architecture with 2 convolutional layers and one fully-connected layer, first proposed in [18]. We employ the method of [27] in order to encourage robustness in the posterior. We find that this dataset poses a challenge to robust inference methods, with the BNN achieving 72% accuracy over the test set after 200 epochs. We found that, without robust training, we are able to achieve 98% accuracy over the test set, but were unable to certify robustness or uncertainty for any tested image (see the discussion of limitations below). We verify these networks with 3 weight samples and a weight margin of 3.0 standard deviations.
#### Viii-D2 Analysis with \(D_{\text{safe}}(T,S)\)
For our analysis of GTSRB, we take \(T\) to be an \(\ell_{\infty}\) ball with radius \(2/255\). As in our previous analysis, for test set images we take \(S\) to be the set of all vectors such that the true class is the argmax. We study 250 images and find that 53.8% of the images are certified to be correct. We plot a visual sample of these images in the top row of Figure 8. We also study the out-of-distribution performance of various kinds of images with \(\tau_{\text{uncertain}}=0.55\). Of 400 images of random noise, visualized in the bottom row of Figure 8, we certified that the BNN was uncertain on 398 images, indicating that on that set the BNN has correctly calibrated uncertainty, as it does not issue confident predictions on random noise. We then turned our attention to two more realistic sets of out-of-distribution images: nonsense traffic signs and international traffic signs. We were limited to a small set of free-use images for these tests, but found that for eight out of ten nonsense traffic signs we were able to certify the BNN's uncertainty, and for nine out of ten international traffic signs we were able to certify the uncertainty. On average these certifications took 34.2 seconds.
#### Viii-D3 Limitations
While this analysis represents an encouraging proof of concept for certification of BNNs, we find that datasets whose inputs are of this scale and complexity are not yet fully accessible to robust inference for BNNs, as 74% test set accuracy is not strong enough performance to warrant deployment. However, with approaches such as [59, 60] investigating more powerful methods for scaling Bayesian inference for neural networks, we are optimistic that future works will be able to apply our method to more advanced Bayesian approximate posteriors.
## IX Conclusion
In this work, we introduced a computational framework for evaluating robustness properties of BNNs operating under adversarial settings. In particular, we have discussed how probabilistic robustness and decision robustness - both employed in the adversarial robustness literature for Bayesian machine learning [42, 43] - can be upper- and lower-bounded via a combination of posterior sampling, integral computation over boxes and bound propagation techniques. We have detailed how to compute these properties for the case of HMC and VI posterior approximation, and how to instantiate the bounds for interval and linear propagation techniques, although the framework presented is general and can be adapted to different inference techniques and to most of the verification techniques employed for deterministic neural networks.
In an experimental analysis comprising 5 datasets (airborne collision avoidance, Concrete, Powerplant, MNIST, and GTSRB), we have showcased the suitability of our approach for computing effective robustness bounds in practice, along with various additional measures that can be computed using our technique, including the certified robust radius and analysis of uncertainty.
With verification of deterministic neural networks already being NP-hard, certification of Bayesian neural networks inevitably poses several practical challenges. The main limitation of the approach presented here arises directly from the Bayesian nature of the model analysed, i.e., the need to bound and partition at the weight-space level (which is not needed for deterministic neural networks, where the weights are fixed to specific values). Unfortunately, this means that the computational complexity, and also the tightness of the bounds provided, scale quadratically with the number of neurons across successive layer connections. We have discussed methods for mitigating the resulting gap between the bounds, including adaptive partitioning based on weight variance and implementing a branch-and-bound refinement approach for the bound, which would, however, result in a sharp increase in computational time. Nevertheless, the methods presented here provide the first formal technique for systematically verifying robustness in Bayesian neural networks across various robustness notions, and as such can provide a sound basis for future practical applications in safety-critical scenarios.
Fig. 8: Certification of a Bayesian CNN on a two-class subset of the German Traffic Sign Recognition Benchmark (GTSRB) dataset. In the top row, we plot illustrative examples showing that we can verify correctness of test inputs. In the bottom three rows we visualize the uncertainty guarantees on various out-of-distribution inputs, including nonsense traffic signs (second row), international traffic signs (third row), and random noise (bottom row).
2304.05627 | Constructing Deep Spiking Neural Networks from Artificial Neural
Networks with Knowledge Distillation | Spiking neural networks (SNNs) are well known as the brain-inspired models
with high computing efficiency, due to a key component that they utilize spikes
as information units, close to the biological neural systems. Although spiking
based models are energy efficient by taking advantage of discrete spike
signals, their performance is limited by current network structures and their
training methods. As discrete signals, typical SNNs cannot apply the gradient
descent rules directly into parameters adjustment as artificial neural networks
(ANNs). Aiming at this limitation, here we propose a novel method of
constructing deep SNN models with knowledge distillation (KD) that uses ANN as
teacher model and SNN as student model. Through ANN-SNN joint training
algorithm, the student SNN model can learn rich feature information from the
teacher ANN model through the KD method, yet it avoids training SNN from
scratch when communicating with non-differentiable spikes. Our method can not
only build a more efficient deep spiking structure feasibly and reasonably, but
use few time steps to train whole model compared to direct training or ANN to
SNN methods. More importantly, it has a superb ability of noise immunity for
various types of artificial noises and natural signals. The proposed novel
method provides efficient ways to improve the performance of SNN through
constructing deeper structures in a high-throughput fashion, with potential
usage for light and efficient brain-inspired computing of practical scenarios. | Qi Xu, Yaxin Li, Jiangrong Shen, Jian K Liu, Huajin Tang, Gang Pan | 2023-04-12T05:57:21Z | http://arxiv.org/abs/2304.05627v2 | Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation
###### Abstract
Spiking neural networks (SNNs) are well-known as brain-inspired models with high computing efficiency, due to a key component that they utilize spikes as information units, close to the biological neural systems. Although spiking based models are energy efficient by taking advantage of discrete spike signals, their performance is limited by current network structures and their training methods. As discrete signals, typical SNNs cannot apply the gradient descent rules directly into parameter adjustment as artificial neural networks (ANNs). Aiming at this limitation, here we propose a novel method of constructing deep SNN models with knowledge distillation (KD) that uses ANN as the teacher model and SNN as the student model. Through the ANN-SNN joint training algorithm, the student SNN model can learn rich feature information from the teacher ANN model through the KD method, yet it avoids training SNN from scratch when communicating with non-differentiable spikes. Our method can not only build a more efficient deep spiking structure feasibly and reasonably but use few time steps to train the whole model compared to direct training or ANN to SNN methods. More importantly, it has a superb ability of noise immunity for various types of artificial noises and natural signals. The proposed novel method provides efficient ways to improve the performance of SNN through constructing deeper structures in a high-throughput fashion, with potential usage for light and efficient brain-inspired computing of practical scenarios.
## 1 Introduction
By mirroring the information processing mechanisms and structural characteristics of the biological nervous system, spiking neural networks (SNNs) are remarkably good at computational intelligence tasks [26] and suitable for processing unstructured information, with strong autonomous learning capabilities and ultra-low power consumption [2, 7, 37, 23].
Although various engineering efforts have been made in this area, this type of biological information processing system still underperforms artificial systems (artificial neural networks, ANNs) on some common computer tasks, such as image classification. One possible reason is that typical SNNs lack the deep hierarchical network structures of ANNs. Because spikes are non-differentiable, typical SNNs are restricted to global training rules, which leads many current SNNs to be shallow architectures based on fully-connected layers [28, 34]. Limited by training rules and structures, although SNNs can handle spatio-temporal data efficiently, it is difficult to train a deep SNN directly in the way backpropagation (BP) is used in ANNs [22].
Drawing on key techniques from ANNs, some studies aim to improve the image classification accuracy of SNNs by combining structures and learning rules that have proven effective in ANNs. [3, 6] proposed methods to convert ANNs to SNNs by keeping the outputs and network structures of the ANN and SNN as consistent as possible. Although such approaches can build effective deep SNNs, these conversion methods suffer from long training times and discard intermediate information from the ANN training period. [12, 19] adapted the thresholds of spiking neurons to make them suitable for surrogate gradient methods, but these models adopted overly complex neuron models to obtain good performance and take up large computational memory and cost. [8] did interesting work on directly converting an adjusted ANN to an SNN using the theoretical equivalence between activation and firing rate, which achieves superior performance. [29] constructed ANN-SNN hybrid models to improve feature extraction, but these hybrid models suffered from a difficult training process.
Aiming at constructing efficient SNNs, this paper proposes a brand-new method that uses knowledge distillation (KD) to let student models (SNNs) absorb rich information from teacher models (ANNs). KD [4] can transfer the knowledge of one network to another network; the two networks can be homogeneous or heterogeneous. This is done by training a teacher network and then using the output of the teacher network, together with the true labels of the data, to train the student network. KD can be used to compress a large network into a small network while retaining performance close to that of the large network, or to transfer knowledge learned from multiple networks into a single network, making the performance of the single network close to that of an ensemble.
Under the guidance of teacher models, the desired SNN model can be trained in a layer-wise manner [20]. Unlike traditional ANN-SNN conversion, which requires the two models to have the same structure, the proposed KD conversion allows their network structures to be heterogeneous: for example, if the teacher ANN is larger and deeper, the student SNN can be smaller and shallower. This kind of KD conversion provides sufficient flexibility to construct almost arbitrary SNNs.
In this paper, we propose a novel KD-based training method to construct deep SNNs, which avoids restricting the correspondence between the ANN and SNN network structures during the training period. Through a unified ANN-SNN loss function, we can construct the SNN model from a well-trained ANN, shorten the training time, and save memory usage. We adopt a supervised surrogate gradient method as the basic training rule for the student SNN. We evaluated the proposed method on several image classification datasets (MNIST, CIFAR10, and CIFAR100) and their noisy variations. Experimental results showed that the proposed method achieves good image classification performance with a lightweight SNN model. The main contributions are as follows:
* This paper proposes a KD-based conversion method to construct deep SNNs from ANNs, which requires lower training latency and allows the structures of the SNN and ANN to be heterogeneous.
* Through the proposed ANN-SNN joint training method, the student SNN model can absorb more information from the ANN during training; compared to offline ANN-SNN conversion, the proposed method significantly helps to improve the performance of the student SNN model.
* We demonstrate the efficiency and effectiveness of the proposed SNN distillation method through evaluations on several datasets and their noisy variants. Experimental results show that we can construct a more reasonable SNN that achieves state-of-the-art performance on the experimental datasets with less training latency and better anti-noise ability.
## 2 Related Work and Motivation
Since the spiking signal is discontinuous and non-differentiable, it is difficult to train deep SNNs with a loss function directly. Current deep SNN training methods can be categorized into two classes: ANN-to-SNN conversion methods and surrogate gradient training methods.
### ANN-to-SNN Conversion Methods
In order to make further use of the knowledge in ANNs, some studies [30] tried to reuse trained parameters from ANNs to avoid training deep SNNs directly. [6] first trained an ANN, then kept the weights fixed and transferred them to an SNN with nearly the same structure. The core technique in this type of method is making the outputs of the ANN and SNN as close as possible. Although this kind of conversion makes full use of the trained ANN model, it loses real-time updates of the spatio-temporal information of spikes, and too many resources are wasted during the ANN training and conversion periods.
### Surrogate Gradient Training Methods
The other commonly used approach for training deep SNNs is surrogate gradient training [36]. These training algorithms aim at tackling the non-differentiability barrier in training SNNs with continuous-valued loss functions. [22] first used a surrogate gradient method to construct a deep SNN model and evaluated it on the event-based DVS Gesture dataset; experimental results showed that it is able to achieve good accuracy within a few training iterations. To further improve the computational efficiency of SNNs, [33, 35] introduced spiking-convolutional hybrid network architectures; with more rapid training convergence, these methods are capable of achieving good performance on some image classification datasets. Combining surrogate gradient training and the discrete cosine transform, [11] reduced the number of inference timesteps in SNNs. To solve the degradation problem of deep SNNs, [10] discussed vanishing/exploding gradients in Spiking ResNet and proposed SEW ResNet, which is the first directly trained SNN with more than 100 layers and has become a frequently used backbone in later research. Although the surrogate gradient can to some extent mitigate the non-differentiability of spikes, it is essentially a gradient-based training method that is less biologically plausible.
### Motivation
Based on the aforementioned problems in constructing efficient deep SNN models, this paper proposes a knowledge distillation based method that lets the student SNN absorb rich knowledge from teacher ANN models, including the intermediate training output and network structure. In this paper, we propose two ANN-SNN joint loss functions for better use of KD from ANN to SNN: one utilizes only the output layers of the ANN and SNN, and the other constructs a joint loss between several intermediate layers and the output layer.
Through the proposed KD training procedure, the constructed student SNN model learns rich feature information from the teacher ANN, which allows the structures of the ANN and SNN to be heterogeneous. The proposed method not only adopts ANN-to-SNN conversion to keep the outputs of the ANN and SNN as close as possible, but also utilizes the surrogate gradient method to replace the non-differentiable spiking function with continuous functions during gradient calculation, in order to train a deep SNN efficiently.
## 3 Proposed Method
The proposed KD-based SNN training method builds on a joint teacher ANN model and student SNN model, as shown in Fig. 1. The teacher ANN model can take the form of current typical CNN models, such as residual blocks with attention mechanisms. The student SNN model extracts and transmits the outside input with spiking neurons; the parameters of the desired SNN model can be trained with the KD procedure simultaneously with the ANN training process, which avoids the problem that spikes are not differentiable. Combining spike coding and the joint loss function, we can in principle make the structures of the ANN and SNN homogeneous or heterogeneous, which gives us enormous freedom to construct different SNN structures.
### Spiking Neuron Model
In this paper, we use the IF neuron as the basic neuron model of the student SNN. The dynamics of the IF neuron model are described by the differential equation in Eq. (1):
\[\frac{\mathrm{d}V(t)}{\mathrm{d}t}=X(t) \tag{1}\]
where \(V(t)\) is the membrane potential of the adopted spiking neuron model and \(X(t)\) denotes the external input current, which comes from the weighted sum of the spikes fired by the neurons in the previous layer. When the membrane potential of the neuron exceeds the threshold, it fires a spike and then returns to the reset potential. In SNNs, the neural state evolves through three processes: charging, discharging, and resetting. The charging process of the IF neuron model can be expressed as:
\[H[t]=f(V(t-1),X(t))=V(t-1)+X(t) \tag{2}\]
Due to the binary nature of spikes, the process of firing spikes in the neuron model is represented by a Heaviside step function as described in Eq. (3). The discharging process corresponding to the IF neuron model can be expressed as:
Figure 1: The schematic illustration of KDSNN training.
\[S[t]=\Theta\left(\mathrm{H}[\mathrm{t}]-V_{\mathrm{th}}\right)=\begin{cases}1,& \mathrm{H}[\mathrm{t}]\geq V_{\text{th}}\\ 0,&\mathrm{H}[\mathrm{t}]<V_{\text{th}}\end{cases} \tag{3}\]
where \(V_{th}\) represents the fired threshold of the membrane potential.
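For reference, Eqs. (1)-(3) amount to the following one-step update (a sketch in PyTorch; the default \(V_{th}=1\) and \(V_{reset}=0\) are illustrative):

```python
import torch

def if_neuron_step(v, x, v_th=1.0, v_reset=0.0):
    """One discrete IF step: charge (Eq. 2), fire (Eq. 3), then reset."""
    h = v + x                                      # charging: H[t] = V(t-1) + X(t)
    spike = (h >= v_th).float()                    # discharging via the Heaviside step
    v_next = spike * v_reset + (1.0 - spike) * h   # reset only the fired neurons
    return spike, v_next
```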
### _Joint ANN-to-SNN Knowledge Distillation Method_
The joint ANN-to-SNN knowledge distillation method transfers the hidden knowledge in a pre-trained teacher ANN model to the student SNN model to guide the training of the SNN. Our method comes in two variants: response-based knowledge distillation and feature-based knowledge distillation. The first extracts hidden knowledge only from the output of the last layer of the teacher ANN model. The second extracts hidden knowledge from several intermediate layers of the teacher ANN model.
**KDSNN with response-based knowledge distillation.** The proposed method uses response-based knowledge, which transfers knowledge from the output layer of the teacher ANN model to the student SNN model to guide the training of the SNN. The soft label, i.e., the vector of per-category probabilities produced by the softmax layer, is employed instead of the hard label to preserve the hidden information in the output of the teacher model. In order to better learn this hidden knowledge, a temperature parameter \(T\) is introduced to make the probability distribution flatter. The output of the teacher network is processed as follows:
\[q_{i}=\frac{\exp\left(Z_{i}/T\right)}{\sum_{j}\exp\left(Z_{j}/T\right)} \tag{4}\]
where \(Z_{i}\) and \(Z_{j}\) denote the logits, i.e., the pre-softmax outputs of the teacher network. Through the temperature parameter \(T\), the knowledge of category correlations in the output probability distribution vector becomes more conducive to learning. When \(T=1\), the output is the same as that of the common softmax layer. When \(T\) is larger, the output probability distribution is flatter.
The whole training method for constructing the student SNN model is divided into two parts, as shown in Fig. 1: one is to learn the true labels of the samples, i.e., hard labels; the other is to learn the soft labels, which are the outputs of the teacher ANN model. First, we pre-train a teacher ANN model and fix its weights when training the student SNN model. Then the student SNN model learns the hidden knowledge from the output of the teacher ANN model through Eq. (4). The loss function proposed in this paper is an improved version of the ones used in [16, 21]. Compared to the traditional KD method, we simplify the KL divergence between the student and teacher outputs, which enables faster distillation from the teacher model. The loss function can be expressed as follows:
\[L_{KD} =\alpha T^{2}*\text{ CrossEntropy }\left(Q_{S}^{\tau},Q_{T}^{\tau}\right) \tag{5}\] \[+\left(1-\alpha\right)*\text{ CrossEntropy }\left(Q_{S},y_{\text{ true}}\right)\]
where \(y_{\text{true}}\) denotes the true labels and \(Q_{S}\) is the output of the student SNN model. \(Q_{S}^{\tau}\) is the student output softened with temperature \(T\) and then passed through a LogSoftmax function, while \(Q_{T}^{\tau}\) is the \(q_{i}\) in Eq. (4). The temperature \(T\) in \(Q_{S}^{\tau}\) and \(Q_{T}^{\tau}\) is the same and exceeds 1. The total loss is obtained by summing the two parts, with \(\alpha\) indicating the relative importance of the two learning targets.
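A minimal PyTorch sketch of Eq. (5) is given below; the student "logits" are taken to be its output firing rates, and the default values of \(T\) and \(\alpha\) are illustrative rather than the settings used in the experiments.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, y_true, T=4.0, alpha=0.3):
    """Response-based KD loss of Eq. (5): softened cross-entropy against the
    teacher's soft labels plus ordinary cross-entropy against the hard labels."""
    q_s = F.log_softmax(student_logits / T, dim=1)   # Q_S^tau via LogSoftmax
    q_t = F.softmax(teacher_logits / T, dim=1)       # Q_T^tau, Eq. (4)
    soft = -(q_t * q_s).sum(dim=1).mean()            # cross-entropy with soft labels
    hard = F.cross_entropy(student_logits, y_true)
    return alpha * T * T * soft + (1.0 - alpha) * hard
```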
**KDSNN with feature-based knowledge distillation.** In this paper, we propose another KD method for constructing an efficient deep SNN model, named feature-based knowledge distillation, which utilizes the hidden knowledge in some intermediate layers of the ANN to guide the training of the SNN. One drawback of using only the response of the last output layer of the teacher ANN is that the learned knowledge accumulates in the last several layers, which means the knowledge in the teacher network cannot be absorbed by the student SNN model in a layer-wise way during the training process. Therefore, we propose this feature-based KD method to let the SNN learn the features of the intermediate layers of the ANN.
The features of the student SNN model are encoded by firing rates. The positions for extracting the features of the intermediate layers are usually chosen after a group of layers corresponding to the teacher ANN model; for example, for ResNet, the distillation positions are selected at the end of each block. In order to make the channel size of the student SNN model match that of the teacher ANN model, the intermediate features of the student SNN model are transformed with a 1\(\times\)1 convolutional layer. Combined with the proposed feature-spike encoding rule, we adopt the advanced feature-based KD method named overhaul, an improved version of FitNet [25]. The loss function is calculated with the \(L2\) distance as follows:
\[L_{distill}=\sum_{i}^{WHC}\begin{cases}0&\text{if }S_{i}\leq T_{i}\leq 0\\ \left(T_{i}-S_{i}\right)^{2}&\text{otherwise}\end{cases} \tag{6}\]
where \(S_{i}\) refers to the features of the intermediate layers of the student SNN model (i.e., the spiking-based features) transformed by a 1\(\times\)1 convolutional layer, and \(T_{i}\) denotes the features of the teacher ANN model transformed with a margin ReLU in order to suppress the influence of negative information.
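A sketch of the partial \(L2\) distance of Eq. (6) in PyTorch is shown below; here `s_feat` and `t_feat` stand for the transformed student and teacher feature maps, and the helper name is our own:

```python
import torch

def partial_l2_distill(s_feat, t_feat):
    """Partial L2 distance of Eq. (6): positions where the student response is
    already below a non-positive teacher response incur no penalty."""
    penalty = (t_feat - s_feat) ** 2
    skip = (s_feat <= t_feat) & (t_feat <= 0)   # the zero branch of Eq. (6)
    return penalty[~skip].sum()
```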
Similar to response-based knowledge distillation, the total loss is divided into two parts: the features of the intermediate layers and the true labels. The total loss is:
\[L_{KD}=L_{task}+\alpha*L_{distill} \tag{7}\]
where \(L_{task}\) denotes the loss between true labels and the real output of the student SNN model. \(L_{distill}\) is the loss of the intermediate layers as represented in Eq. (6).
**Training of student SNN model.** In order to solve the non-differentiability problem of SNNs when using backpropagation (BP), we use a surrogate gradient to train the proposed student SNN model. The surrogate gradient method approximates the Heaviside step function with a differentiable function \(\sigma(x)\), such as a sigmoid, whose shape is similar to the step function. During BP, the gradient of the function \(\sigma(x)\) is calculated to update the weights of the network, so that the SNN can be trained by gradient descent in a similar way to an ANN.
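A minimal sketch of such a surrogate spike function in PyTorch is shown below: the forward pass is the exact Heaviside of Eq. (3), while the backward pass uses the derivative of a sigmoid \(\sigma(kx)\); the steepness \(k\) is an illustrative hyperparameter.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid-derivative surrogate backward."""
    @staticmethod
    def forward(ctx, v_minus_th, k=5.0):
        ctx.save_for_backward(v_minus_th)
        ctx.k = k
        return (v_minus_th >= 0).float()    # exact Heaviside of Eq. (3)

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.k * v)      # sigma(k * (V - V_th))
        return grad_out * ctx.k * sig * (1 - sig), None
```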
```
Input: pre-trained teacher ANN model T, initialized student SNN model S, input dataset samples X, true labels \(y_{\text{true}}\)
Output: student SNN model trained with KD
\(\#\) forward propagation
\(O_{0}=encode(X)\)
for \(l=1\) to \(L-1\) do
  for \(t=1\) to \(T\) do
    \(\#\) calculate the membrane potential
    \(V_{l}(t)=V_{l}(t-1)+W_{l}O_{l-1}(t)\)
    if \(V_{l}(t)>V_{th}\) then
      \(\#\) fire a spike
      \(O_{l}(t)=1\)
      \(\#\) reset the membrane potential
      \(V_{l}(t)=V_{reset}\)
    end if
  end for
end for
\(\#\) calculate the spike rate of the output layer
\(O_{L}=counter(L)/T\)
calculate the total loss \(L_{KD}\)
\(\#\) backward propagation
for \(l=L-1\) to \(1\) do
  for \(t=1\) to \(T\) do
    calculate the gradients \(\partial L_{KD}/\partial V_{l}(t)\)
    update \(W_{l}\)
  end for
end for
```
**Algorithm 1** Training student SNN model with knowledge distillation.
**Overall training algorithm.** As illustrated in Algorithm 1, the overall training of the proposed method KDSNN has two steps: pre-training a teacher ANN model and training a student SNN model.
In the first step, we choose an ANN with higher accuracy and a more complex structure as the teacher model. The teacher model is pre-trained, and its weight parameters are fixed when training the student SNN model.
In the second step, we choose one SNN model as the student SNN model. Then we use the hidden knowledge of the teacher ANN model to guide the training of the student SNN model. In the process of forward propagation, the same dataset samples are input to both the teacher ANN model and the student SNN model. The student SNN model converts its output into spike frequencies as its features. The pre-trained teacher ANN model computes its output or extracts features from its intermediate layers. We then compute the total loss function with the hidden knowledge, i.e., Eq. (5) or Eq. (7). In the process of error backpropagation, the derivative of the total loss function is calculated with the surrogate gradient method to update the synaptic weights of the student SNN model.
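Putting the pieces together, a single response-based KDSNN training step might look like the following sketch (reusing the `kd_loss` helper above; it assumes a stateful `student` module that emits output spikes at each call and is reset between batches):

```python
import torch

def train_step(teacher, student, x, y, optimizer, T_steps=4):
    """One KDSNN step: frozen teacher forward, student spike-rate forward
    over T_steps, joint loss of Eq. (5), surrogate-gradient backward."""
    with torch.no_grad():
        t_logits = teacher(x)              # fixed pre-trained teacher ANN
    s_counts = 0.0
    for _ in range(T_steps):               # accumulate output spikes over time
        s_counts = s_counts + student(x)
    s_rates = s_counts / T_steps           # firing-rate outputs as student "logits"
    loss = kd_loss(s_rates, t_logits, y)
    optimizer.zero_grad()
    loss.backward()                        # gradients flow via the surrogate spike
    optimizer.step()
    return loss.item()
```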
## 4 Experimental Evaluation
In this section, we evaluate the proposed KDSNN construction method on two benchmark datasets, MNIST and CIFAR10. To further show the generalization ability of the constructed SNN models in noisy environments, we also test them on variations of these datasets with different types of noise. As shown in Fig. 2, for MNIST we adopted background MNIST, background-random MNIST, rotation-normalized MNIST, and background-rotation-normalized MNIST. For CIFAR10, we used CIFAR10 with different noise intensities.
### Experimental Settings
The experiments are run on a server equipped with a 16-core Intel(R) Xeon(R) Gold 5218 CPU at 2.30 GHz and 8 NVidia GeForce RTX 2080 Ti GPUs. The operating system is Ubuntu 18.04. We use PyTorch and SpikingJelly [9] for training and testing the proposed methods.
For MNIST, CIFAR10, and their variational datasets, we choose some advanced network models as the ANN teacher models, such as ResNet18 [15], WRN28-4 [1], and PyramidNet18 [14]. In order to better demonstrate the performance of the constructed models, we choose network structures with fewer layers as the student SNN models, such as VGG11 (VGG16), WRN16-2, and ResNet18 for the CIFAR10 dataset. The student SNN architecture for the MNIST dataset is 28\(\times\)28-128C3-P2-128C3-P2-1152FC-10FC. In order to better exploit the spatio-temporal characteristics of the SNN, the ReLU functions in the networks are replaced by IF nodes. We train the student SNN with only 4 time steps to simulate spike firing.
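For concreteness, one plausible PyTorch reading of the MNIST student architecture is sketched below; the padding of the 3\(\times\)3 convolutions is not stated in the text, so `padding=1` and the resulting flattened size are our assumptions, and `IFNode` stands in for an IF spiking activation (SpikingJelly provides an equivalent stateful layer):

```python
import torch.nn as nn

class IFNode(nn.Module):
    """Minimal stateless IF activation built on the SurrogateSpike sketch above;
    a stateful version would also carry membrane potential across time steps."""
    def forward(self, x):
        return SurrogateSpike.apply(x - 1.0)   # threshold V_th = 1.0 (assumed)

# One reading of 28x28-128C3-P2-128C3-P2-1152FC-10FC (padding assumed):
student_snn = nn.Sequential(
    nn.Conv2d(1, 128, 3, padding=1), IFNode(), nn.MaxPool2d(2),
    nn.Conv2d(128, 128, 3, padding=1), IFNode(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 1152), IFNode(),
    nn.Linear(1152, 10),
)
```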
### Evaluation under different knowledge levels and architectures
To adequately show the effectiveness of the proposed KDSNN training method, we design and implement several KD variants to construct efficient student SNN models that utilize the feature representations of teacher ANNs.

As illustrated in Table 1, all of the accuracies of the three student SNN models, VGG11, WRN16-2, and ResNet18, with response-based knowledge distillation on the CIFAR10 dataset are higher than those of the corresponding original SNNs with the same architectures. In particular, the SNN model with the ResNet18 structure achieves a test accuracy of 93.41\(\%\) with the Pyramidnet18 teacher model, an improvement of about \(0.73\%\) compared with the SNN without the proposed KD training. This indicates that the SNN model with KD training is capable of making use of the knowledge learned by the ANN teacher model through the joint responses, hence improving its performance effectively.
Moreover, the student SNN model performs better with a stronger ANN teacher model. For instance, the KDSNN model with the WRN16-2 structure achieves \(90.98\%\), \(91.14\%\), and \(91.11\%\) with the ANN teacher models ResNet18, WRN28-4, and Pyramidnet18, respectively. The reason for this phenomenon is that with a stronger ANN teacher model, the SNN student model can learn a more precise representation and decision sur
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Method & SNN Model & ANN Model & ANN Acc.(\%) & SNN Acc.(\%) & KDSNN ACC.(\%) & Improvement(\%) \\ \hline \multirow{6}{*}{Response-based} & \multirow{3}{*}{VGG11} & ResNet18 & 93.20 & 88.44 & 89.12 & 0.68 \\ & & WRN28-4 & 93.10 & 88.44 & 89.43 & 0.99 \\ & & Pyramidnet18 & 95.10 & 88.44 & 89.51 & 1.07 \\ \cline{2-7} & \multirow{3}{*}{WRN16-2} & ResNet18 & 93.20 & 90.34 & 90.98 & 0.64 \\ & & WRN28-4 & 93.10 & 90.34 & 91.14 & 0.80 \\ & & Pyramidnet18 & 95.10 & 90.34 & 91.11 & 0.77 \\ \cline{2-7} & ResNet18 & Pyramidnet18 & 95.10 & 92.68 & 93.41 & 0.73 \\ \hline \multirow{6}{*}{Feature-based} & \multirow{3}{*}{WRN16-2} & WRN28-4 & 93.10 & 90.34 & 91.03 & 0.69 \\ & & Pyramidnet18 & 95.10 & 90.34 & 92.10 & 1.76 \\ \cline{1-1} & & PreResNet20 & 92.36 & 90.34 & 91.57 & 1.23 \\ \cline{1-1} \cline{2-7} & \multirow{3}{*}{ResNet14} & WRN28-4 & 93.10 & 87.46 & 87.84 & 0.38 \\ \cline{1-1} & & Pyramidnet18 & 95.10 & 87.46 & 88.20 & 0.74 \\ \cline{1-1} & & PreResNet20 & 92.36 & 87.46 & 87.90 & 0.44 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracies of KDSNN with different teacher ANNs and Student SNNs on CIFAR10.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset & Noise & ANN model & ANN Acc.(\%) & SNN model & SNN Acc. (\%) & KDSNN Acc. (\%) & Improvement(\%) \\ \hline \multirow{3}{*}{CIFAR10} & Gaussian noise(\(\sigma=0.01\)) & ResNet18 & 83.00 & VGG11 & 81.63 & 82.90 & 1.27 \\ & Gaussian noise(\(\sigma=0.03\)) & ResNet18 & 80.40 & VGG11 & 76.42 & 77.96 & 1.54 \\ & Gaussian noise(\(\sigma=0.05\)) & ResNet18 & 77.00 & VGG11 & 73.40 & 74.23 & 0.83 \\ \hline \multirow{6}{*}{MNIST} & Background & ResNet18 & 97.72 & 2conv & 95.04 & 96.35 & 1.31 \\ & Random noise & ResNet18 & 96.95 & 2conv & 95.31 & 95.79 & 0.48 \\ \cline{1-1} & Rotation & ResNet18 & 96.01 & 2conv & 94.43 & 95.34 & 0.91 \\ \cline{1-1} & Rotation with background & ResNet18 & 86.59 & 2conv & 80.96 & 81.82 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification performance evaluation of KDSNN on CIFAR10 and MNIST with different types of noise.
Figure 2: Two benchmark datasets.
face via the mapping of the final responses. Furthermore, we analyze the convergence behavior of the proposed response-based KDSNN in Fig. 3. With the WRN16-2 structure, the accuracy of the SNN model rises gradually as training epochs increase with the help of each of the three ANN teacher models. Therefore, the proposed KDSNN training method can effectively improve SNN performance by learning the responses of the ANN teacher model.
In addition, the feature-based KDSNN training method is also evaluated to compare KD methods at different knowledge levels. Table 1 indicates that feature-based KDSNN training outperforms response-based KDSNN for the SNN student model under some structures. For the WRN16-2 SNN model, the accuracy of the KDSNN with Pyramidnet18 is \(92.10\%\), a \(1.76\%\) improvement, which is even higher than that of any response-based KDSNN model with the same architecture. A similar phenomenon appears in the ResNet18 SNN model with Pyramidnet18 as the teacher model. In this situation, the feature information of Pyramidnet18 assists SNN training because more supplementary and useful knowledge is introduced to strengthen the SNN training compared with response-based KD training. For ease of discussion, we choose the response-based KDSNN to explore its ability in the following subsections.
The robustness of the KDSNN model is explored by imposing different types and intensities of noise on the MNIST and CIFAR10 datasets. As illustrated in Table 2, the KDSNN training method improves the performance of the original SNN models in noisy environments on the MNIST dataset. In particular, under noise with a background image, the SNN model with KDSNN training performs better than the original SNN, exceeding it by \(1.31\%\) classification accuracy. Similarly, the robustness of the SNN model with KDSNN training is also verified on the CIFAR10 dataset with different levels of Gaussian noise: the SNN model with VGG11 structure achieves accuracy improvements of \(1.27\%\), \(1.54\%\), and \(0.83\%\) under the three noise levels. Thus, the knowledge extracted from pre-trained ANNs contributes to the noise immunity of SNNs. Although the proposed SNN model does not outperform the ANN, the proposed KDSNN method can learn rich knowledge from teacher ANNs and behaves better than the original SNNs in a noisy environment.
### Parameter Comparison between ANN and SNN
Low power consumption is one of the well-known advantages commonly cited in neuromorphic computing. In this paper, we count and analyze some crucial power-consumption metrics when the proposed KDSNN models run.
Compared to ANNs, the proposed KDSNN training method yields simpler spiking network structures with relatively few parameters, which offers good scalability on neuromorphic hardware. As shown in Table 3, with ResNet18 as the ANN teacher model, the KDSNN-trained SNNs have a simpler structure with fewer convolutional layers and parameters; the corresponding SNN requires only \(0.05M\) synaptic operations, far fewer than the ANN's \(457.72M\) FLOPs. On the CIFAR10 dataset, the VGG11 and WRN16-2 SNN student models employ fewer parameters than the ResNet18 and WRN28-4 ANN teacher models. Meanwhile, the small number of synaptic operations of SNNs enables low power consumption on neuromorphic hardware.
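A minimal sketch of how the parameter counts in Table 3 can be reproduced is shown below. The framework choice (PyTorch/torchvision) is our assumption; counting synaptic operations additionally requires recording per-layer spike counts at run time, which depends on the simulator and is omitted here.

```python
import torch
import torchvision.models as tvm

def count_params(model: torch.nn.Module) -> float:
    """Trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Stand-in for the ResNet18 ANN teacher used on CIFAR10 (10 classes)
ann = tvm.resnet18(num_classes=10)
print(f"ResNet18 params: {count_params(ann):.2f}M")  # roughly the 11.17M of Table 3
```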
### Performance Comparison with Other Methods
To better demonstrate the superior performance of the proposed KDSNN model, we compare the proposed KDSNN training method with other methods in Table 4.
The SNN models with the proposed KDSNN training method obtain higher accuracy with fewer time steps on the MNIST and CIFAR10 datasets. On the MNIST dataset, the SNN model trained with the proposed KDSNN method achieves \(99.37\%\) test accuracy with only 4 time steps. The trained SNN model has a simpler architecture and fewer time steps than other models, such as SDNN, STBP, and ANTLR. The test accuracy of \(ASF-BP\) is higher than that of the SNNs with the KDSNN training method by about \(0.28\%\), but it requires \(400\) time steps. However, such a large number of time steps leads to high power consumption and long latency.
Figure 3: Test accuracy curves of KDSNN during the training period under the guidance of different ANN teacher models.
On the CIFAR10 dataset, the SNN with ResNet18 structure achieves \(93.41\%\) with 4 time steps, which exceeds other methods such as SPIKE-NORM, Hybrid Training, RMP, and the Opt method. In conclusion, the proposed KDSNN training method improves the performance of SNNs, yielding high classification accuracy with few time steps.
## 5 Conclusion
In this paper, we proposed knowledge distillation (KD) based methods to construct efficient SNN models. Making full use of the high-dimensional and accurate features of the teacher ANN model, we proposed a training method combining spiking-based surrogate gradients with ANN-to-SNN conversion, which overcomes the non-differentiability caused by binary spikes.
Experimental evaluation showed that the proposed KDSNN model not only achieves good performance on image classification tasks but also exhibits noise immunity under noise of different types and intensities. Thanks to rapid training convergence, the proposed method builds SNN models faster, meaning less time is needed to match or even exceed the performance of other spiking models. Through qualitative and quantitative analysis, we also compared the memory occupation and synaptic operations of some typical ANNs and the constructed SNNs; the proposed KDSNN can be deep yet efficient, and behaves better than some other spiking-based models with little resource consumption. This demonstrates great advantages on resource-constrained devices such as neuromorphic hardware platforms.
In our future work, we will expand the structures of both ANNs and SNNs to exploit the advantages of the proposed KDSNN, which allows ANNs and SNNs to be homogeneous or heterogeneous. In addition, we plan to consider cases where the teacher model is defective, i.e., weaker than the student model or even nonexistent and replaced by a probability distribution.
## 6 Acknowledgement
This work was supported in part by National Natural Science Foundation of China (NSFC No.62206037), National Key Research and Development Program of China (2021ZD0109803), the Huawei-Zhejiang University Joint Innovation Project on Brain-Inspired Computing (FA2019111021), Open Research Fund from Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), under Grant No. GML-KF-22-11, the CAAI-Huawei MindSpore Open Fund under Grant CAAIXSJLJJ-2020-024A and the Fundamental Research Funds for the Central Universities (DUT21RC(3)091).
\begin{table}
\begin{tabular}{c c c c c c c} \hline Dataset & Method & ANN Architecture & SNN Architecture & ANN Acc.(\%) & SNN Acc.(\%) & timestep \\ \hline \multirow{4}{*}{MNIST} & SDNN [17] & - & 2conv-2pool & - & 98.40 & 30 \\ & STBP [32] & - & 784-800-10 & - & 98.89 & 30 \\ & ANTLR [18] & - & 784-800-10 & - & 97.60 & 100 \\ & ASF-BP [31] & - & LeNet5 & - & 99.65 & 400 \\ & **Proposed** & ResNet18 & 2conv & 99.59 & 99.37 & 4 \\ \hline \multirow{4}{*}{CIFAR10} & SPIKE-NORM [27] & VGG16 & VGG16 & 91.70 & 91.55 & 2500 \\ & Hybrid Train [24] & VGG16 & VGG16 & 92.81 & 91.13 & 100 \\ & RMP [13] & VGG16 & VGG16 & 93.63 & 93.63 & 2048 \\ & Opt. [5] & VGG16 & VGG16 & 92.34 & 92.29 & 16 \\ & **Proposed** & Pyramidnet18 & VGG16 & 95.17 & 91.05 & 4 \\ \hline \multirow{4}{*}{CIFAR10} & SPIKE-NORM [27] & Resnet20 & Resnet20 & 89.10 & 87.46 & 2500 \\ & Hybrid Train [24] & Resnet20 & Resnet20 & 93.15 & 92.22 & 250 \\ \cline{1-1} & RMP [13] & Resnet20 & Resnet20 & 91.47 & 91.36 & 2048 \\ \cline{1-1} & Opt. [5] & Resnet20 & Resnet20 & 92.46 & 92.41 & 16 \\ \cline{1-1} & **Proposed** & Pyramidnet18 & Resnet18 & 95.17 & 93.41 & 4 \\ \hline \end{tabular}
\end{table}
Table 4: Summary comparison of classification accuracies with other spiking based models
\begin{table}
\begin{tabular}{c c c c c c c} \hline Dataset & ANN Model & ANN params & ANN FLOPs & SNN Model & SNN params & SNN SynOps \\ \hline MNIST & ResNet18 & 11.17M & 457.72M & 2conv & 7.39M & 0.05M \\ \hline \multirow{4}{*}{CIFAR10} & ResNet18 & 11.17M & 557.88M & VGG11 & 9.75M & 0.09M \\ & WRN28-4 & 5.85M & 849.33M & WRN16-2 & 0.69M & 0.15M \\ & Pyramidnet18 & 1.56M & 368.74M & ResNet18 & 11.17M & 0.44M \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the memory and operations from ANN and the proposed SNN models |
2305.10103 | Predicting Tweet Engagement with Graph Neural Networks | Social Networks represent one of the most important online sources to share
content across a world-scale audience. In this context, predicting whether a
post will have any impact in terms of engagement is of crucial importance to
drive the profitable exploitation of these media. In the literature, several
studies address this issue by leveraging direct features of the posts,
typically related to the textual content and the user publishing it. In this
paper, we argue that the rise of engagement is also related to another key
component, which is the semantic connection among posts published by users in
social media. Hence, we propose TweetGage, a Graph Neural Network solution to
predict the user engagement based on a novel graph-based model that represents
the relationships among posts. To validate our proposal, we focus on the
Twitter platform and perform a thorough experimental campaign providing
evidence of its quality. | Marco Arazzi, Marco Cotogni, Antonino Nocera, Luca Virgili | 2023-05-17T10:09:40Z | http://arxiv.org/abs/2305.10103v1 | # Predicting Tweet Engagement with Graph Neural Networks
###### Abstract
Social Networks represent one of the most important online sources to share content across a world-scale audience. In this context, predicting whether a post will have any impact in terms of engagement is of crucial importance to drive the profitable exploitation of these media. In the literature, several studies address this issue by leveraging direct features of the posts, typically related to the textual content and the user publishing it. In this paper, we argue that the rise of engagement is also related to another key component, which is the semantic connection among posts published by users in social media. Hence, we propose _TweetGage_, a Graph Neural Network solution to predict the user engagement based on a novel graph-based model that represents the relationships among posts. To validate our proposal, we focus on the Twitter platform and perform a thorough experimental campaign providing evidence of its quality.
\({}^{1}\) Department of Electrical, Computer and Biomedical Engineering, University of Pavia
\({}^{2}\) DII, Polytechnic University of Marche
{marco.arazzi, marco.cotogni}[email protected]
[email protected], [email protected]
_keywords:_**Graph Neural Networks, Engagement, Social Network, Twitter, Deep Learning**
## 1 Introduction
Social Networks are, nowadays, an established authority in the context of information sharing. Although they were originally designed to favor social interaction among people, in recent years they have also become a powerful tool for companies to boost their capability to reach customers directly and continuously [9]. This is evident in the context of Industry 4.0, in which social media marketing has been identified as a critical asset to promote a more effective business involvement of the customers [47]. Furthermore, the advancement of such a paradigm, referred to as Industry 5.0, includes concepts such as "human-centric" and "mass personalization" of the production chain, implying an even more prominent need to directly and flexibly involve the population, their needs, and opinions in the business strategies [49].
Among the various business sectors, marketing is certainly one of the drivers of the interactions with the final customers of every company. Understanding and being able to predict the capability of content published on a social platform to reach the desired audience is a timely and well-studied subject for marketing applications [10, 15].

However, if, on the one hand, the capability of content to spread across the population is of fundamental importance, on the other hand, the interest generated in the recipients is also a crucial aspect to investigate.
Following this intuition, in the recent literature, several studies have been devoted to the definition
and analysis of the concept of user engagement in social networks [20, 3].
Typically, in the context of social media, marketers considered the number of reactions received on a post as a quantitative measure of user engagement [8, 36, 43]. Of course, how the engagement is computed from a technical point of view, strictly depends on the functionalities provided by the target social medium. However, classical engagement definitions consider the percentage of users who reacted to a post by expressing an opinion or comment, or by directly exhibiting an appreciation (e.g., "like", "retweet", "favorite", and so forth). In any case, according to the scientific literature, the features that have an impact on the rise of user engagement can be categorized into three main classes: the ones related to the user creating the post, those related to the post context (such as time, group, topic, etc.), and those related to the content itself (text features, presence of media, etc.) [23].
From these premises, many researchers have developed solutions, typically based on machine learning, to study the dynamics of user engagement and to build solutions to support companies by predicting whether a content will be capable of generating reactions [44, 14, 29].
However, while these approaches achieved quite satisfactory results, the high dynamics of social networks and the crucial roles of communities in the diffusion of contents guided researchers to identify features more and more related to the structural properties of social media [37, 18].
Motivated by this research direction, in this paper we propose _TweetGage_, a novel approach for the binary classification of tweets according to user engagement, i.e., for predicting whether published content can generate engagement (a reaction rate not equal to zero) or is destined to remain unnoticed. In our study, we argue that, although the network of connections among users and the textual features of the content play a crucial role, engagement is also triggered by the semantic connections among posts. Indeed, intuitively, if a post relates to a sequence of previous posts that are attracting the attention of users, then it has a higher probability of generating engagement.
Figure 1: Workflow of the three steps composing _TweetGage_. In the first step, given a tweet \(t_{i}\in T\), features associated with the post and the user are retrieved using two feature extractors \(\phi\) and \(\sigma\). In the same way, given the text of the tweet \(text_{t_{i}}\), an embedding of the text is obtained using a pre-trained language model \(\omega\). During the second step, the graph model \(\mathcal{G}\) is created connecting posts (nodes) that share at least one hashtag. The features and embedding previously computed are associated with the corresponding node. Finally, in the last step, the model \(\mathcal{G}\) is provided as input to a Graph Convolutional Neural Network with the aim of predicting the user engagement of each tweet.
To capture this dynamic, in this paper we introduce a graph-based model in which the nodes are the posts and the connections among them encode whether their content overlaps to some extent. To evaluate such overlap, we exploit the hashtags written by the authors to give weights to the links among posts. Then, we leverage Graph Neural Network (GNN) technology to process such data, along with basic information about the posts as devised in the scientific literature, and identify important features that can be used to model the dynamics of engagement generation. The workflow of our solution is shown in Figure 1.
To validate our proposal, we focus on the Twitter platform and perform a deep experimental campaign to assess the quality of our results by comparing them with those achieved by existing approaches. The motivation underlying the choice of Twitter as the reference platform lies in its extremely high popularity as a social medium and in the interest that the research community has devoted to it in recent years [30, 50]. Interestingly, our GNN-based approach outperforms state-of-the-art solutions in all the performed tests. Finally, our ablation study on the role of the considered features provides useful insights into the dynamics leading to the generation of user engagement on social platforms and demonstrates the correctness of our intuition.
The plan of this paper is as follows. In Section 2, we report a review of the related literature. In Section 3, we describe our proposal and the adopted methodology. In Section 4, we report the experiments to validate our approach along with our ablation study to assess the role of the considered feature. Finally, in Section 5 we draw our conclusions and look at possible future works.
## 2 Related Work
The concept of user engagement in social networks has been heavily studied in the literature [20, 41, 38]. Indeed, due to the enormous popularity of social media, such as Facebook, Twitter, and TikTok, many companies are increasingly investing in content creation and distribution on these platforms. Evaluating the outcome of these investments is not straightforward, but it is fundamental to understand how to effectively create posts on social platforms. One of the most used metrics to measure the impact of a post is user engagement [41].
Depending on the social network, there are many ways to compute the engagement of a post, since users have different actions for interacting with it [20]. For instance, on Twitter, the user can like, retweet, and reply to a post, and these actions are considered for developing a user engagement metric [38, 32, 31, 19]. In our case, we decided to use the formula of [19] for computing the engagement, as we will see in Section 3.
Due to the high interest of both researchers and companies in studying user engagement in social networks, it is not unexpected that many approaches to predict this engagement have been developed over time [14, 5, 6, 28]. Many of these approaches leverage Machine Learning and Deep Learning algorithms. From our perspective of using network features, we can identify two lines of thought in the literature: _(i)_ prediction of engagement without considering graph-based features [42, 44, 14, 29, 34, 4], and _(ii)_ prediction of engagement with graph-based features [19, 46, 37, 18, 21, 2].
In the former case, researchers employed features from the user profile, from his/her tweets' performance in terms of the number of likes, retweets, replies, and mentions, and from the analysis of the post text. For instance, in [44] the authors analyze sets of features that reflect user behaviors, tweet statistics, and the semantics of text through BERT. They tested their prediction performance with a Light Gradient Boosting Machine (LightGBM) and a Multilayer Perceptron (MLP) in a supervised task. The results highlight that users engage with tweets based on text semantics and contents regardless of the tweet author, even if the popularity of the user can be useful for replies and mentions. In [42], the authors investigated the relationship between misinformation and user engagement in COVID-19-related tweets. They measured the engagement of a post as the sum of likes and retweets and then labeled posts as high or low engagement based on the median value of the engagement distribution.
From the tweet text, they extracted features such as the number of words, Part-Of-Speech tagging, etc., while no deep learning-based embedding was used. The resulting dataset is the input to Gradient Boosting, Multinomial Naive Bayes, and Random Forest classifiers, which achieved very good performance. However, we point out that these kinds of approaches do not consider the interactions among users and/or posts, which represent the core of social networks and from which we can extract information to improve the classification results.
In the latter case, researchers used both the features from the former case and graph-based features extracted from the users and/or the corresponding tweets. In [19], the authors predict which posts will generate the highest user engagement on Twitter. They modeled the scenario as a Collaborative Ranking task and proposed to use the user-item-tweet interactions. Here, the features were extracted from tweet and user statistics, such as the number of tweets posted, the average user engagement in their history, the ratio of the number of a user's friends to the number of his/her followers, etc. They learned a scoring function that directly optimizes user engagement in terms of normalized discounted cumulative gain on the predicted ranking. In [18], the authors fine-tuned a DistilBERT model [39] on tweets to obtain a text embedding and used the Efficient Manifold Density Estimator [17] to represent the same text as a compressed and fixed-size representation of the tweet meaning. Furthermore, they added some features concerning community detection on directed graphs of engaged-engaging user interactions. In this way, they captured potentially complex communities of mutual interaction between users, with large communities having lower interaction strength. These features are fed into a simple shallow feed-forward neural network that predicts tweet engagement. In [37], the authors proposed a framework called People-Content-Network Analysis (PCNA), which is designed to analyze human dynamics on social networks. This framework uses three categories of features: _(i)_ community features, which are measurements of the community, like its size, the total number of active users that another one is following, etc.; _(ii)_ author features, such as the counts of followers and following; and _(iii)_ content features, such as the number of retweets, mentions, hashtags, and keywords. Finally, a Support Vector Machine classifier is trained to predict tweet engagement, and they demonstrated that all these categories of features are useful for obtaining good performance. In [21], the authors proposed an ensemble model composed of two stages: the first one is made up of three models (LightGBM, Gradient Boosting, and a neural network), while the second one is another LightGBM. The input to this last model consists of two groups of features. The first one concerns the modeling of user behavior, such as the number of active and passive engagements, the number of engagements with a language or hashtag, and user similarity. The latter is extracted through an undirected graph, where each edge connects two users if one engaged with a tweet created by the other and has a weight equal to the number of such engagements. The second group regards tweet text features, which contain the text embeddings produced by DistilBERT, the unique word frequency (how much a user tends to use the same words over time), and the tweet topic (the authors identified some popular topics and manually associated each with a list of the most used words).
Even if some authors considered graph-based features to predict the engagement of a post, there are some differences between our approach and existing ones. As we will see in Section 3.1, we model Twitter through a network of posts based on hashtags, which also uses a time threshold to control the number of connections. Then, this network of posts is the input to a Graph Neural Network that predicts the engagement of a tweet. The Machine Learning and Deep Learning algorithms employed so far in the literature are not specifically designed to deal with graphs, which is a limitation that we want to point out. We will show that Graph Neural Networks are suitable for this scenario because they can learn meaningful patterns from the underlying graph of the posts, and then leverage this knowledge to achieve very high performance.
## 3 Methodology
In this section, we describe our methodology to deal with engagement prediction on Twitter. In Section 3.1, we introduce our model to represent the scenario, while in Section 3.2, we report a brief description of Graph Convolutional Neural Networks and their importance in our case.
### Model Proposal
In this section, we introduce a suitable network model to represent Twitter posts and their interactions. Let \(T\) be a set of Twitter posts, where \(t_{i}\in T\) is a tweet, and \(\phi(t_{i})\) a function that extracts information about \(t_{i}\), such as timestamp, text, favorite count, the user identifier that wrote it and so on. In our case, \(\phi(t_{i})\) associates the following features with each \(t_{i}\):
* \(id_{t_{i}}\): the identifier of \(t_{i}\);
* \(\tau_{t_{i}}\): the posting timestamp of \(t_{i}\);
* \(text_{t_{i}}\): the text corresponding to the tweet \(t_{i}\);
* \(Length\)\(of\)\(post\): the number of characters present in \(t_{i}\);
* \(Emojis\): the number of emojis used;
* \(u_{t_{i}}\): the username of the author of the tweet;
* \(Has\)\(media\): if \(t_{i}\) contains an image and/or video;
* \(Favorite_{t_{i}}\): the number of likes received by \(t_{i}\);
* \(Retweet_{t_{i}}\): the number of times \(t_{i}\) was retweeted;
* \(Official\)\(Source\): if \(t_{i}\) was published through Twitter websites or Twitter API instead of third parties;
* \(h_{t_{i}}\): the sets of hashtags used in \(t_{i}\);
* \(Number\)\(of\)\(hashtags\): the number of hashtags present in \(h_{t_{i}}\);
* \(Number\)\(of\)\(Mentions\): the number of times a username is mentioned in \(text_{t_{i}}\).
In order to process \(text_{t_{i}}\), we need a function that maps the text into a vectorial representation. To this end, we define \(\omega(text_{t_{i}})\) as the embedding of \(text_{t_{i}}\): it takes as input a natural language text and returns a numeric vector representing that text in a continuous space. The function \(\omega(text_{t_{i}})\) can be any suitable embedding model available in the Natural Language Processing literature, such as Word2Vec, GloVe, or BERT [35].
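As an illustration, the following sketch shows one possible instantiation of \(\omega\) with a pre-trained BERT through the HuggingFace transformers library. The specific checkpoint (bert-base-uncased) and the use of the [CLS] token as the sentence vector are our assumptions; any of the embedding models mentioned above would fit the same interface.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def omega(text: str) -> torch.Tensor:
    """Map the text of a tweet to a fixed-size vector in a continuous space."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state[0, 0]  # 768-dim [CLS] vector

print(omega("Excited for the #WorldCup2022 final!").shape)  # torch.Size([768])
```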
Moreover, we need some information about the user posting the tweet \(t_{i}\). Recall that \(u_{t_{i}}\) is the user identifier of the author of at least a tweet \(t_{i}\in T\). We define a function \(\sigma(u_{t_{i}})\) that extracts information about a user \(u_{t_{i}}\), such as the number of followers and following, the number of tweets, a boolean value specifying if he/she is a verified user, and so on. Starting from \(u_{t_{i}}\), we leverage \(\sigma(u_{t_{i}})\) in order to extract the following information:
* \(Verified\)\(user\): a boolean value representing if \(u_{t_{i}}\) is a verified account or not;
* \(Followers\): the number of followers of \(u_{t_{i}}\);
* \(Following\): the number of following of \(u_{t_{i}}\);
* \(Number\)\(of\)\(Tweets\): the number of tweets posted by \(u_{t_{i}}\) since the creation of the account.
Now, we can define a graph \(\mathcal{G}\) to model the tweets contained in \(T\) along with their corresponding interactions. Specifically, let \(\mathcal{G}=\langle T,E_{\delta}\rangle\) be a network of tweets. Here, the set of edges is represented by \(E_{\delta}\), where there is an edge \(e_{ij}=(t_{i},t_{j},w_{ij})\) if the tweets \(t_{i}\) and \(t_{j}\) share at least one common hashtag and if they were published within a specific time interval \(\delta\). The weight \(w_{ij}\) is equal to the number of common hashtags between \(t_{i}\) and \(t_{j}\). Formally speaking, the set of edges is defined as \(E_{\delta}=\{\langle t_{i},t_{j},|h_{t_{i}}\cap h_{t_{j}}|\rangle\ s.\ t.\ t_{i},t_{j}\in T,h_{t_{i}}\cap h_{t_{j}}\neq\emptyset,|\tau_{t_{i}}-\tau_{t_{j}}|<\delta\}\).
In our model, \(\delta\) represents a threshold to control the number of edges between the nodes in \(\mathcal{G}\). Indeed, on Twitter and other social networks, hashtags are a way to collect posts on similar topics, which sometimes are general (such as #politic, #healthcare, etc.) and sometimes are specific (such as #World-Cup2022, #Election2022). Defining the duration of a hashtag on a social network is not straightforward,
since it could last either a day or months. Tweets that are posted today can contain the same hashtags as older tweets, even if the former regard a newer event in that topic and should not be directly connected to the latter. Following this reasoning, we decided to add a temporal threshold \(\delta\) in order to create edges between posts only if they are published within a specific time interval. We can assume that \(\delta\) narrows down the topics discussed in tweets and connects them in a smarter way. In the literature, many papers have studied the lifespan of tweets to predict popularity and generate engagement [27, 51, 7]. According to the literature, the lifespan of a tweet is between 10 and 30 minutes, and for this reason, we decided to set \(\delta=15\) minutes. This means that two posts \(t_{i}\) and \(t_{j}\) are connected by an edge in \(\mathcal{G}\) only if they share at least one hashtag and if they were published at most 15 minutes apart.
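A minimal sketch of the construction of \(\mathcal{G}\) follows. The pairwise scan is quadratic and kept only for clarity (bucketing tweets by hashtag would scale far better on a dataset of this size), and the dictionary field names are hypothetical.

```python
from itertools import combinations
import networkx as nx

DELTA = 15 * 60  # delta = 15 minutes, expressed in seconds

def build_post_graph(tweets):
    """tweets: list of dicts with keys 'id', 'tau' (unix time) and 'hashtags' (set)."""
    G = nx.Graph()
    G.add_nodes_from(t["id"] for t in tweets)
    for a, b in combinations(tweets, 2):
        shared = a["hashtags"] & b["hashtags"]
        # edge only if at least one common hashtag and posting times within delta
        if shared and abs(a["tau"] - b["tau"]) < DELTA:
            G.add_edge(a["id"], b["id"], weight=len(shared))
    return G
```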
Now, we need to define a way for evaluating the engagement of a tweet. To do so, we adopt a formula proposed in [19] to compute the engagement of \(t_{i}\):
\[eng(t_{i})=Favorite_{t_{i}}+Retweet_{t_{i}} \tag{1}\]
where \(Favorite_{t_{i}}\) is the number of favorites received by \(t_{i}\), and \(Retweet_{t_{i}}\) is the number of times \(t_{i}\) was retweeted.
In order to perform binary classification on tweets that did or did not generate engagement, we must add a set of class labels \(L\) for each \(t_{i}\in T\). To this end, we define \(l_{t_{i}}\) the label of a post \(t_{i}\) as:
\[l_{t_{i}}=\begin{cases}0&\quad\text{if }eng(t_{i})=0\\ 1&\quad\text{if }eng(t_{i})\geq 1\end{cases}\]
Roughly speaking, a label \(l_{t_{i}}\) equal to 0 is assigned to all the posts that generated no engagement in terms of favorites and retweets during their presence on Twitter (not only during the \(\delta\) threshold). On the other hand, a label \(l_{t_{i}}\) equal to 1 is associated with all the posts that generated at least one favorite or retweet action by a user.
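In code, the engagement of Equation (1) and the resulting label reduce to a few lines (the field names are again hypothetical):

```python
def engagement(tweet: dict) -> int:
    # Equation (1): favorites plus retweets accumulated by the post
    return tweet["favorite_count"] + tweet["retweet_count"]

def label(tweet: dict) -> int:
    # 1 if the post generated any engagement, 0 otherwise
    return 1 if engagement(tweet) >= 1 else 0
```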
### Graph Convolutional Neural Network
Deep learning models effectively learn hidden patterns in Euclidean data, but there are many scenarios in which data are represented through graphs [48]. Networks are hard to process since they can have a variable number of nodes, which, in turn, can have different numbers of neighbors. This means that typical deep learning operations, like convolutions, are difficult to apply to the graph domain. Moreover, instances in a network (i.e., nodes) are related to each other by links of various natures, which violates the machine learning assumption of independent instances in the dataset. All these issues are addressed by Graph Neural Networks (GNNs) [40], which extract high-level representations of the nodes and edges of a graph given as input. In [11] the authors introduced convolution operations on graphs thanks to the spatial construction algorithm and the spectrum of the graph Laplacian. Convolutions on graphs allow the learning of node embeddings that also consider neighborhood information, which provides much richer representations. GNNs are used in many different tasks, such as node classification, graph classification, link prediction, and graph clustering.
Since we modeled the Twitter scenario through a graph \(\mathcal{G}\) representing the tweets and their interactions, and we want to evaluate the engagement of posts according to the network, we decided to use GNN models to perform node classification. To the best of our knowledge, our paper is the first attempt to employ this kind of model to predict the engagement of social posts. Among the different GNN architectures available in the literature [48], the one that is suitable for node classification and that can easily work with our dataset is the Graph Convolutional Neural Network (GCNN). The main idea of GCNNs is to apply the convolution operation to graph data, and thus to generate a node embedding by aggregating its features and its neighbors' features. Two variants of GCNNs have gained a lot of attention lately: GraphSAGE [22] and the Graph Attention Network (GAT) [45]. Both these architectures are spatial-based, and the most important difference lies in the assignment of weights to the neighbors of a node. Indeed, GraphSAGE
explicitly assigns a non-parametric weight according to the degrees of the nodes involved in the aggregation process, while GAT implicitly computes this weight through the attention mechanism and a neural network model, in order to give larger weights to the most important nodes. As will be clear in the next sections, in our context GraphSAGE performs better than GAT due to the high peculiarity of the considered scenario; therefore we adopted GraphSAGE as the reference GCNN architecture.
According to GraphSAGE [22], we define \(T_{t_{i}}\) as the set of nodes that has an edge with the node \(t_{i}\), and so it represents the immediate neighborhood of \(t_{i}\). The process to obtain a node embedding is iterative and consists of \(n\) steps. At each step \(k\), let \(r^{k}_{t_{i}}\) be the node representation of \(t_{i}\) at this step. We aggregate the representations of the nodes in its neighborhood \(r^{k}_{T_{t_{i}}}=\{r^{k}_{t_{j}},\forall t_{j}\in T_{t_{i}}\}\). In our case, we use a sum operator to aggregate the representations of the neighbors of \(t_{i}\). Then, as in GraphSAGE [22], we concatenate the \(r^{k}_{t_{i}}\) with the aggregated neighborhood vector \(r^{k}_{T_{t_{i}}}\), and feed it to a fully connected layer with a GELU activation function.
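A minimal PyTorch sketch of one such aggregation step, using the sum aggregator and GELU activation described above, is reported below; normalization and the stacking of multiple layers are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SumSAGELayer(nn.Module):
    """One GraphSAGE-style step: concat(self, sum of neighbours) -> Linear -> GELU."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, r: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # r: [num_nodes, in_dim]; edge_index: [2, num_edges] with both directions
        src, dst = edge_index
        agg = torch.zeros_like(r)
        agg.index_add_(0, dst, r[src])  # sum of neighbour representations
        return F.gelu(self.lin(torch.cat([r, agg], dim=-1)))

x = torch.randn(4, 8)                            # toy node features
ei = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # toy edges
print(SumSAGELayer(8, 16)(x, ei).shape)          # torch.Size([4, 16])
```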
According to the representation reported in Section 3.1, the features associated with a node of our GCNN are derived both from the tweet and from the user posting it. As for the creator of a tweet, we used _Followers_, _Number of Tweets_, and _Following_, while for the tweet we leveraged _Length of post_, _Number of hashtags_, _Number of mentions_, _Emojis_, _Official Source_, and _Has media_. We denote this last set of features as \(\Phi_{t_{i}}\), which includes user and tweet data. Then, we define \(\omega(text_{t_{i}})\) as the embedding of \(text_{t_{i}}\) computed through a pre-trained BERT.
## 4 Experiments
In this section, we present the experiments we carried out to evaluate the proposed model in the engagement prediction task. We begin by describing the dataset for our experiments with the associated statistics in Section 4.1. Following that, in Section 4.2, we present the network analysis of the graph which motivated our strategy of exploiting its intrinsic patterns to improve the engagement prediction of a post. Then in Section 4.3, we compare our solution with other models that take as input only the information regarding the features extracted from the text of the post and the user who posted it. Finally, in Section 4.4, we perform an ablation study on the different components of our model to evaluate whether the results obtained are actually due to the advantage of considering the intrinsic information of the graph.
### Dataset Description
Our Twitter dataset consists of the tweets posted during November 2021 and was built through a real-time stream using the Twitter API. Once we downloaded the Twitter dataset of November 2021, we performed some data cleaning operations. First of all, we removed all the tweets that were not written in English. Then, since the tweets in the original dataset did not contain any information about the interactions they received (such as the number of likes, mentions, and retweets), we developed a Python script to update the tweet data, using, again, the Twitter APIs [24, 12]. Moreover, due to the very high number of tweets present in the dataset (more than 1 million), we decided to select one week of data, from November \(1^{st}\) 2021 to November \(7^{th}\), which still constitutes a large dataset. Finally, we computed the engagement of each tweet according to Equation 1. The statistics of our dataset after the data cleaning operations are reported in Table 1.
From the analysis of Table 1, we observed that our dataset contained \(243,750\) tweets spread in the first
\begin{table}
\begin{tabular}{l c} \hline \hline Number of days & 7 \\ Number of users & 194,046 \\ Number of posts & 243,750 \\ Mean number of posts by day & 34,821.43 \\ Number of unique hashtags & 94,646 \\ Median number of posts by user & 1 \\ Max number posts by user & 126 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of our dataset
week of November. There are many different hashtags used in the tweets, which will be useful when we create the corresponding post network. As in many social network scenarios [16, 33], we saw that the median number of posts by a user is low (i.e., 1) while the maximum number of posts is quite high for a week (i.e., 126). This highlighted that there were many differences in posting activity among authors: a few users wrote a lot of tweets, while most of them wrote only one or two posts. Starting from our dataset of tweets, we created the corresponding graph \(\mathcal{G}\), whose characteristics are reported in Table 2.
From Table 2, we observed that the number of nodes corresponds to the number of posts and that we had millions of edges connecting tweets. It is worth noting that, even if we set a threshold (\(\delta=15\) minutes) to drive the creation of edges, we still obtained a high number of connections. Finally, we saw that there are many connected components (\(49.40\%\) of the number of nodes), and the maximum component consists of \(18,322\) nodes. This means that there were several isolated nodes and/or components with few nodes, a phenomenon that depends both on the used hashtags and on our \(\delta\) threshold. Once we built \(\mathcal{G}\), we computed four centrality measures (i.e., Weighted Degree centrality, Closeness centrality, Betweenness centrality, and Eigenvector centrality [13, 26]) for each post, and then created a correlation matrix of all the features obtained so far. The results are reported in Figure 2.
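These four centralities can be computed, for instance, with networkx, as in the sketch below. Exact betweenness is expensive on a graph of this size, so pivot sampling is used; on a highly disconnected graph such as ours, eigenvector centrality may fail to converge and may need to be computed per component.

```python
import networkx as nx

def centralities(G: nx.Graph) -> dict:
    return {
        "weighted_degree": dict(G.degree(weight="weight")),
        "closeness": nx.closeness_centrality(G),
        # approximate betweenness via k sampled pivot nodes
        "betweenness": nx.betweenness_centrality(G, k=min(500, len(G))),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000, weight="weight"),
    }
```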
From Figure 2, we observed that there were very few notable correlations between pairs of features in our dataset, and some of them are expected. Specifically, the engagement of a post was correlated with the number of followers of the user. Then, the weighted degree centrality and eigenvector centrality were correlated with the number of hashtags and the mentions present in a post, which can be explained by the construction of \(\mathcal{G}\). In conclusion, there are no unexpected correlations in our dataset, and the engagement of a post was poorly correlated with the other features. This suggests that there are no linear relationships in the tweet dataset, and so we have to exploit non-linear approaches to predict the engagement of a post.
### Analysis of Centrality Measures and Engagement
We analyzed the network centralities of the posts in order to verify possible differences between posts with engagement and posts without engagement. Recall that each centrality depicts a different way of describing the role of nodes, such as the number of connections, their impact on the information flow, or their connection with important peers. We considered the four most important centrality measures and plotted the distribution of the obtained results in Figure 3.
From Figure 3, we observed that the Weighted Degree, Closeness, and Eigenvector centralities followed a power law distribution, even if the first one was far steeper than the others.
\begin{table}
\begin{tabular}{l c} \hline \(\delta\) (minutes) & \(15\) \\ Number of nodes & \(243,750\) \\ Number of edges & \(4,403,434\) \\ Density & \(1.48\)\(e^{-4}\) \\ Number of connected components & \(120,434\) \\ Maximum connected component & \(18,322\) \\ \hline \end{tabular}
\end{table}
Table 2: Graph statistics
Figure 2: Correlation matrix of the dataset and graph features
The Betweenness centrality had a more equal distribution of values than the previous ones. These distributions highlighted that the posting network reflects the same phenomena observed in classical social network analysis; to deepen our analysis, we proceeded by studying the differences between the posts with and without engagement. To this end, we split the posts by the class \(l_{t_{i}}\) and computed the distribution of their corresponding centrality values. We report the summarized results in Table 3.
As for the Weighted Degree Centrality, nodes with \(l_{t_{i}}=0\) had a higher mean and standard deviation than nodes with \(l_{t_{i}}=1\). This means that posts without engagement were much more connected than posts with engagement, which also leads to the conclusion that a higher number of hashtags in a tweet is not a way to create engagement. As for the Closeness and Betweenness centralities, there was no such marked difference. In the Eigenvector centrality case, the mean of posts with \(l_{t_{i}}=0\) was higher by one order of magnitude w.r.t. the mean of posts with \(l_{t_{i}}=1\). In order to statistically verify the discrepancies between these distributions, we ran a Kolmogorov-Smirnov test and report the results of the test statistics and p-values in the same table. Since the p-values were always lower than 0.05, we state that the distributions of network centralities of posts with \(l_{t_{i}}=0\) and \(l_{t_{i}}=1\) are statistically different, and so that posts with engagement have specific patterns w.r.t. posts without engagement. This finding supports our intuition that a machine learning approach capable of learning on graphs is useful for predicting the engagement of a post in this particular scenario.
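The test itself amounts to a single call to scipy; the arrays below are synthetic stand-ins for the per-class centrality values.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
c0 = rng.pareto(2.0, 5000)  # stand-in: centralities of posts with label 0
c1 = rng.pareto(3.0, 5000)  # stand-in: centralities of posts with label 1
stat, p = ks_2samp(c0, c1)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3g}")  # p < 0.05: distinct distributions
```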
### Results
The advantage of our proposal over more traditional ones is the fact that it exploits the intrinsic patterns of the graph in addition to the classical features directly derived from the posts. To assess the quality of our proposal, we considered a set of baseline architectures that take as input only the features related to the single posts. Among the baselines, we included the solution that won the \(RecSys\ Challenge\) in 2020 [5], whose goal was the same as the one we are addressing in this paper (XGBoost experiment, in the following). In addition, since, in our approach, we combine the features of the post with a vectorial representation of the corpus obtained by a pre-trained BERT-based model, we decided to fine-tune such a BERT model for our specific task, to verify whether the features of the text alone are enough to predict the engagement (BERT FT experiment). Then, we took into consideration two solutions based on neural networks: a Multilayer Perceptron (MLP experiment) and a Convolutional Neural Network (CNN experiment), characterized by a one-dimensional Convolution Layer followed by a fully connected classifier.
Finally, as stated in Section 3.2, we included in the comparison a variation of our proposal using Graph Attention Networks (GAT experiment). In detail, this variation of our approach uses a Multi-Head Attention Layer instead of the Convolutional ones.
Figure 3: Log-log distribution of the number of posts against the common centralities
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \(l_{p}\) & Mean & Std & K-S test statistic & K-S test p-value \\ \hline Weighted Degree Centrality & 0 & 110.63 & 384.05 & 0.15 & \(<\)0.01 \\ & 1 & 39.12 & 197.26 & 0.15 & \(<\)0.01 \\ Closeness Centrality & 0 & 1.126 & 2.146 & 0.02 & \(<\)0.01 \\ & 1 & 1.06 & 2.129 & 0.15 & 0.14 \\ Betweenness Centrality & 0 & 3.51 & 2.13 & 0.15 & 0.14 \\ & 1 & 2.28 & 0.15 & 0.15 & 0.15 \\ Eigenvector Centrality & 0 & 2.38 & 0.148 & 0.13 & 0.15 \\ & 1 & 4.53 & 0.10 & 0.15 & 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Centrality measures statistics of the nodes of \(\mathcal{G}\) split by engagement classes and results of Kolmogorov–Smirnov tests between the distributions of centrality values of the two classes
In Table 5, we report the results of the experiments.
**Implementation details.** For our experiments, we used TensorFlow [1] as the Python framework. The BERT FT experiment was performed by training a linear layer using the features extracted from the posts (768) as input. For the MLP experiment, we built a two-layer MLP network with 32 hidden neurons. For the CNN experiment, we considered a convolutional neural network composed of two 1D convolutional layers. For the XGBoost experiment, instead, we considered the solution that won the \(RecSys\)\(Challenge\) in 2020 [5]. Finally, for our solution using GCNN and its variation using GAT, we applied a linear head with 16 hidden units on the former and 100 on the latter. Moreover, for BERT FT, MLP, CNN, and GCNN we used ADAM [25] as the optimizer with initial learning rates of \(1e^{-4}\), \(1e^{-2}\), \(1e^{-1}\), and \(1e^{-1}\), respectively. For the GAT experiment, we used SGD as the optimizer with an initial learning rate of \(3e^{-1}\) and momentum of \(9e^{-1}\). For all the experiments, we used a batch size of 256, a learning rate scheduler that reduces the learning rate on a plateau with a scaling factor of \(1e^{-1}\), and early stopping.
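As an illustration, a minimal sketch of the MLP experiment configuration in TensorFlow/Keras follows; the toy random data, the exact layer arrangement, the monitored metric, and the patience value are our assumptions.

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(1024, 9).astype("float32")   # toy stand-in for post features
y = np.random.randint(0, 2, size=(1024, 1))     # toy engagement labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(9,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="PR")])
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]
model.fit(x, y, validation_split=0.1, batch_size=256, epochs=50,
          callbacks=callbacks, verbose=0)
```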
As expected, fine-tuning the BERT model exclusively on the corpus of the tweets yielded the worst performance among the baseline architectures. Therefore, we state that the features obtained from the text alone are not enough to complete the task we are addressing. Looking at the other three baselines instead, we notice that they are characterized by similar performance, with the best results achieved by the model that won the challenge [5]. This result can be explained by the fact that the combination of only the features related to the post and the vectorial representation of the text is not enough to predict the impact of a post. Focusing instead on our proposal and its variation based on the attention mechanism, we see a significant improvement of between 15% and 20% over all the metrics. The obtained results show that the features derived from the relations between posts in the graph are fundamental in predicting the success of a tweet.
### Ablation Study
As a further study, we decided to ablate the importance of the features used in our method. In the previous sections, we presented two different types of features for a post \(t_{i}\): those directly related to it, like the number of hashtags or its length (i.e., \(\Phi_{t_{i}}\)), and the embeddings of the text contained in the post, extracted using a pre-trained BERT model (i.e., \(\omega(text_{t_{i}})\)). In our approach, we combined these features with the ones related to the structure of the tweet graph as captured by the Graph Neural Network.
To study the role of the features above, as a preliminary experiment, we excluded both \(\Phi_{t_{i}}\) and \(\omega(text_{t_{i}})\) to validate the importance of combining post-related features with the graph neural model. Indeed, from the first row of Table 6, it is possible to observe that using only the information obtained from the connections between the posts, i.e., the features derived from the graph, it is not possible to obtain accurate predictions of the engagement of a post. In fact, the obtained model performed poorly according to all the considered metrics.
In light of this first result, we then added the features from \(\Phi_{t_{i}}\) and \(\omega(text_{t_{i}})\) in turn. The inclusion of these features significantly increased the performance of the obtained model over all the metrics.
As a final experiment, due to the large dimension of the embeddings obtained using the pre-trained BERT, we applied a dimensionality reduction strategy using Principal Component Analysis (PCA). Specifically, we used \(N=48\) projected features covering more than 80% of the variance of the features
\begin{table}
\begin{tabular}{l c c c c c} \hline Architecture & Acc & Prec & Recall & AUC\({}_{ROC}\) & AUC\({}_{PR}\) & F1 \\ \hline BERT FT & 0.50 & 0.51 & 0.50 & 0.49 & 0.64 & 0.50 \\ MLP & 0.67 & 0.67 & 0.68 & 0.74 & 0.74 & 0.67 \\ CNN & 0.70 & 0.70 & 0.70 & 0.77 & 0.76 & 0.70 \\ XGBoost[5] & 0.72 & 0.72 & 0.72 & 0.80 & 0.80 & 0.72 \\ \hline TweetGage(GAT) & 0.87 & 0.87 & 0.87 & 0.92 & 0.90 & 0.88 \\ TweetGage & **0.89** & **0.89** & **0.89** & **0.95** & **0.94** & **0.89** \\ \hline \end{tabular}
\end{table}
Table 5: Results obtained applying our network-based deep learning models and comparison with state-of-the-art methods
obtained from \(\omega(text_{t_{i}})\). However, this strategy did not increase the model's performance, which remained at 84% accuracy.
Finally, we combined the \(\Phi_{t_{i}}\) and \(\omega(text_{t_{i}})\) features in three different ways: _(i)_ we reduced the entire feature set using PCA with \(N=48\); _(ii)_ we reduced \(\omega(text_{t_{i}})\) with PCA (\(N=48\)) and kept \(\Phi_{t_{i}}\) without any projection; and _(iii)_ we used the entire feature set without any projection. As can be observed, the complete model using the entire feature set obtained the best results in terms of all the considered metrics. This result confirms the effectiveness of our model in combining the features extracted by \(\Phi_{t_{i}}\) and \(\omega(text_{t_{i}})\) with a GCNN architecture.
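A sketch of the PCA reduction used in the ablation, with synthetic stand-ins for the BERT embeddings, is reported below.

```python
import numpy as np
from sklearn.decomposition import PCA

emb = np.random.rand(1000, 768)   # stand-in for the 768-dim BERT embeddings
pca = PCA(n_components=48)
reduced = pca.fit_transform(emb)
print(reduced.shape)                          # (1000, 48)
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained
```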
## 5 Conclusion
In this paper, we proposed a novel approach for the binary classification of posts based on user engagement on Twitter, using Graph Neural Networks. As a first contribution, our solution introduces a suitable graph-based representation of the relationships between posts published on a social network. In particular, in our design, we focused on the content of the posts and used the information contained in their hashtags to build an interaction network among posts. Our intuition was that the engagement dynamics depend both on the properties of a post (such as its length, the author, its content, and so forth) and on its connections with other previously published content. To capture these additional features, we leveraged Graph Neural Network technology and designed a combined solution exploiting both the features directly related to posts and the structure of the interaction network of posts to predict the engagement generated on Twitter. Through a thorough experimental campaign, we proved the effectiveness of our solution and the improvements introduced with respect to previous related approaches. However, the results reported in this paper must not be considered an end point. Indeed, in the future, we plan to extend our research work in different directions. For instance, we plan to add more engagement classes and move from binary to multi-class classification. In this way, we could predict the level of engagement of a post and use this information to feed a decision support system capable of identifying the best possible author for writing a post about a specific topic. Moreover, we would like to create a new graph model that considers the users as nodes and different edge types, encoding information such as the usage of the same hashtags, and then predict the user engagement in those interactions. This new representation could help to create a recommendation system that suggests the top users with whom to carry out discussions on a specific topic to generate engagement. Of course, in such a context, an analysis of the time variance of the modeled relationships could be explored to improve the obtained performance.
|
2306.10453 | Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls
and New Benchmarking | Link prediction attempts to predict whether an unseen edge exists based on
only a portion of edges of a graph. A flurry of methods have been introduced in
recent years that attempt to make use of graph neural networks (GNNs) for this
task. Furthermore, new and diverse datasets have also been created to better
evaluate the effectiveness of these new models. However, multiple pitfalls
currently exist that hinder our ability to properly evaluate these new methods.
These pitfalls mainly include: (1) Lower than actual performance on multiple
baselines, (2) A lack of a unified data split and evaluation metric on some
datasets, and (3) An unrealistic evaluation setting that uses easy negative
samples. To overcome these challenges, we first conduct a fair comparison
across prominent methods and datasets, utilizing the same dataset and
hyperparameter search settings. We then create a more practical evaluation
setting based on a Heuristic Related Sampling Technique (HeaRT), which samples
hard negative samples via multiple heuristics. The new evaluation setting helps
promote new challenges and opportunities in link prediction by aligning the
evaluation with real-world situations. Our implementation and data are
available at https://github.com/Juanhui28/HeaRT | Juanhui Li, Harry Shomer, Haitao Mao, Shenglai Zeng, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin | 2023-06-18T01:58:59Z | http://arxiv.org/abs/2306.10453v3 | # Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking
###### Abstract
Link prediction attempts to predict whether an unseen edge exists based on only a portion of edges of a graph. A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task. Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models. However, multiple pitfalls currently exist that hinder our ability to properly evaluate these new methods. These pitfalls mainly include: (1) Lower than actual performance on multiple baselines, (2) A lack of a unified data split and evaluation metric on some datasets, and (3) An unrealistic evaluation setting that uses easy negative samples. To overcome these challenges, we first conduct a fair comparison across prominent methods and datasets, utilizing the same dataset and hyperparameter search settings. We then create a more practical evaluation setting based on a **H**euristic **R**elated Sampling **T**echnique (HeaRT), which samples hard negative samples via multiple heuristics. The new evaluation setting helps promote new challenges and opportunities in link prediction by aligning the evaluation with real-world situations. Our implementation and data are available at [https://github.com/Juanhui28/HeaRT](https://github.com/Juanhui28/HeaRT).
## 1 Introduction
The task of link prediction is to determine the existence of an edge between two unconnected nodes in a graph. Existing link prediction algorithms attempt to estimate the proximity of different pairs of nodes in the graph, where node pairs with a higher proximity are more likely to interact [1]. Link prediction is applied in many different domains including social networks [2], biological networks [3], and recommender systems [4].
Graph neural networks (GNNs) [5] have gained prominence in recent years with many new frameworks being proposed for a variety of different tasks. Corresponding to the rise in popularity of GNNs, there has been a number of studies that attempt to critically examine the effectiveness of different GNNs on various tasks. This can be seen for the task of node classification [6], graph classification [7], knowledge graph completion (KGC) [8; 9; 10], and others [11].
However, despite a number of new GNN-based methods being proposed [12; 13; 14; 15] for link prediction, there is currently no work that attempts to carefully examine recent advances in link prediction methods. Upon examination, we find that there are several pitfalls in regard to model evaluation that impede our ability to properly evaluate current methods. These include:
1. **Lower than Actual Performance**. We observe that the current performance of multiple models is underreported. For some methods, such as standard GNNs, this is due to poor hyperparameter tuning. Once properly tuned, they can even achieve the best overall performance on some metrics (see SAGE [16] in Table 1). Furthermore, for other methods like Neo-GNN [14] we can achieve around an 8.5 point increase in Hits@50 on ogbl-collab relative to the originally reported performance. This results in Neo-GNN achieving the best overall performance on ogbl-collab in our study (see Table 2). Such problems obscure the true performance of different models, making it difficult to draw reliable conclusions from the current results.
2. **Lack of Unified Settings on the Planetoid Datasets**. For the Cora, Citeseer, and Pubmed datasets [17], there exists no unified data split and evaluation metrics utilized. For the data split, some works [18; 19] use a single fixed train/valid/test split with percentages 85/5/10%. More recent works [13; 15] utilize 10 random splits of size 70/10/20%. In terms of the evaluation metrics, some studies [13; 15] utilize ranking-based metrics such as MRR or Hits@K while others [20; 19] report the area under the curve (AUC). This is despite multiple studies that argue that AUC is a poor metric for evaluating link prediction [21; 22]. This lack of a unified setting hampers our ability to determine which methods perform best on these datasets.
3. **Unrealistic Evaluation Setting**. During the evaluation, we are given a set of true samples (i.e., positive samples) and a set of false samples (i.e., negative samples). We are tasked with learning a classifier \(f\) that assigns a higher probability to the positive samples than to the negatives. The current evaluation setting uses the same set of randomly selected negative samples for each positive sample. We identify two potential problems with the current evaluation procedure. **(1)** It is not aligned with real-world settings. In a real-world scenario, we typically care about predicting links for a specific node. For example, in friend recommendations, we aim to recommend friends for a specific user \(u\). To evaluate such models for \(u\), we strive to rank node pairs including \(u\). However, this does not hold in the current setting, as \(u\) is not included in most of the negative samples. **(2)** The current evaluation setting makes the task too easy. As such, it may not reflect the model performance in real-world applications. This is because the nodes in a randomly selected negative "node pair" are likely to be unrelated to each other. As shown in Figure 1, almost all negative samples in the test data have no common neighbors, a typically strong heuristic, making them trivial to classify.
To account for these issues, we propose to first conduct a fair and reproducible evaluation among current link prediction methods under the existing evaluation setting. We then design a new evaluation strategy that is more aligned with a real-world setting and detail our results. Our key contributions are summarized below:
* **Reproducible and Fair Comparison**. We conduct a fair comparison of different models across multiple common datasets. To ensure a fair comparison, we tune all models on the same set of hyperparameters. We further evaluate different models using multiple types of evaluation metrics. For the Planetoid datasets [17], we further utilize a unified data split to facilitate a point of comparison between models. To the best of our knowledge, there are no recent efforts to comprehensively benchmark link prediction methods (several exist for KGC [10; 9; 8]). Furthermore, we open-source the implementation in our analysis to enable others in their analyses.

Figure 1: Common neighbor distribution for the positive and negative test samples for the ogbl-collab, ogbl-ppa, and ogbl-citation2 datasets under the existing evaluation setting.
* **New Evaluation Setting**. We recognize that the current negative sampling strategy used in evaluation is unrealistic and easy. To counter these issues, we first utilize a more realistic setting of tailoring the negatives to each positive sample. This is achieved by restricting them to be corruptions of the positive sample (i.e., containing one of its two nodes). Given the prohibitive cost of utilizing all possible corruptions, we opt instead to only rank against \(K\) negatives for each positive sample. In order to choose the most relevant and difficult corruptions, we propose a **H**euristic **R**elated **S**ampling **T**echnique (HeaRT), which selects them based on a combination of multiple heuristics. This creates a more challenging task than the previous evaluation strategy and allows us to better assess the capabilities of current methods.
The rest of the paper is structured as follows. In Section 2 we introduce the models, datasets, and settings used for conducting a fair comparison between methods. In Section 3 we show the results of the fair comparison under the existing evaluation setting and discuss our main observations. Lastly, in Section 4 we introduce our new evaluation setting. We then detail and discuss the performance of different methods using our new setting.
## 2 Preliminaries
### Link Prediction Methods
Link prediction aims to predict the likelihood of a connection between two nodes given the existing graph. Conventional methods [23; 24] often exploit hand-crafted graph structural properties (i.e., heuristics) between node pairs. GNNs attempt to learn the structural information to facilitate link prediction [25; 15; 13]. Given the strong performance of pairwise-based heuristics [14; 15], some recent works leverage both GNNs and pairwise information, demonstrating strong performance.
For our study, we consider both traditional and state-of-the-art GNN-based models. They can be roughly organized into four categories. **1) Heuristic methods**: Common Neighbor (CN) [26], Adamic Adar (AA) [27], Resource Allocation (RA) [28], Shortest Path [24], and Katz [29]. These methods define a score to indicate the link existence based on the graph structure. Among them, CN, AA, and RA are based on the common neighbors, while Shortest Path and Katz are based on the path information. **2) Embedding methods**: Matrix factorization (MF) [23], Multilayer Perceptron (MLP) and Node2Vec [30]. These methods are trained to learn low-dimensional node embeddings that are used to predict the likelihood of node pairs existing. **3) GNN methods**: GCN [31], GAT [18], SAGE [16], and GAE [20]. These methods attempt to integrate the multi-hop graph structure based on the message passing paradigm. **4) GNN + Pairwise Information methods**: Standard GNN methods, while powerful, are not able to capture link-specific information [25]. As such, works have been proposed that augment GNN methods by including additional information to better capture the relation between the nodes in the link we are predicting. SEAL [25], BUDDY [13], and NBFNet [19] leverage the subgraph features. Neo-GNN [14], NCN [15], and NCNC [15] are based on common neighbor information. Lastly, PEG [32] utilizes the positional encoding derived from the graph structure.
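The common-neighbor-based heuristics are simple enough to state directly. The following sketch (ours, not taken from any of the cited implementations) computes CN, AA, and RA scores for a batch of candidate pairs, assuming an undirected graph given as a binary SciPy CSR adjacency matrix:

```python
import numpy as np
import scipy.sparse as sp

def heuristic_scores(adj: sp.csr_matrix, edges: np.ndarray) -> dict:
    """CN, AA, and RA scores for candidate node pairs.

    adj:   symmetric binary adjacency matrix (n x n, no self-loops)
    edges: array of shape (m, 2) with the node pairs to score
    """
    deg = np.asarray(adj.sum(axis=1)).ravel()
    cn, aa, ra = [], [], []
    for u, v in edges:
        # column indices of the nonzeros in rows u and v = neighbor sets
        common = np.intersect1d(adj[u].indices, adj[v].indices)
        cn.append(float(len(common)))
        # a common neighbor touches both u and v, so deg >= 2 and log > 0
        aa.append(float(np.sum(1.0 / np.log(deg[common]))))
        ra.append(float(np.sum(1.0 / deg[common])))
    return {"CN": np.array(cn), "AA": np.array(aa), "RA": np.array(ra)}
```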
### Datasets and Experimental Settings
In this section we summarize the datasets as well as the evaluation and training settings. We note that the settings depend on the specific dataset. More details are given in Appendix B.
**Datasets**. We limit our experiments to homogeneous graphs, which are the most commonly used datasets for link prediction. This includes the small-scale datasets, i.e., Cora, Citeseer, Pubmed [17], and large-scale datasets in the OGB benchmark [33], i.e., ogbl-collab, ogbl-ddi, ogbl-ppa, and ogbl-citation2. We summarize the statistics and split ratio of each dataset in Appendix B.
**Metrics**. For evaluation, we utilize both the area under the curve (AUC) and ranking-based metrics, i.e., mean reciprocal rank (MRR) and Hits@K. For Cora, Citeseer, and Pubmed we adopt \(K\in\{1,3,10,100\}\). We note that \(K=100\) is reported in some recent works [13; 15]. However, due to the small number of negatives used during evaluation (e.g., \(\approx 500\) for Cora and Citeseer), \(K=100\)
is likely not informative. For the OGB datasets, we adopt \(K\in\{20,50,100\}\) to keep consistent with the original study [33].
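For concreteness, the ranking-based metrics can be computed as in the following minimal sketch (ours); ties between the positive score and a negative score are resolved by the midpoint rank, which is one of several common conventions:

```python
import numpy as np

def ranking_metrics(pos_scores, neg_scores, ks=(1, 3, 10, 100)):
    """MRR and Hits@K. pos_scores: (P,); neg_scores: (P, K), one row of
    negative scores per positive sample; ties get the midpoint rank."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)
    ranks = (neg > pos).sum(axis=1) + (neg == pos).sum(axis=1) / 2.0 + 1.0
    out = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        out[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return out
```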
**Hyperparameter Ranges**. We conduct a hyperparameter search across a comprehensive range of values. For Cora, Citeseer, and Pubmed this includes: learning rate (0.01, 0.001), dropout (0.1, 0.3, 0.5), weight decay (1e-4, 1e-7, 0), number of model layers (1, 2, 3), number of prediction layers (1, 2, 3), and the embedding size (128, 256). Due to the large size of the OGB datasets, it is infeasible to tune over such a large range. Therefore, following the most commonly used settings among published hyperparameters, we fix the weight decay to 0, the number of model and prediction layers to be 3, and the embedding size to be 256. The best hyperparameters are chosen based on the validation performance. We note that several exceptions exist to these ranges when they result in significant performance degradations (see Appendix B for more details). We further follow the existing setting and only sample one negative sample per positive sample during training.
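The search over these ranges amounts to a plain grid. A sketch of the resulting configuration space (the ranges are those listed above; the helper itself is ours):

```python
from itertools import product

# Search ranges for Cora, Citeseer, and Pubmed as listed above.
grid = {
    "lr": [1e-2, 1e-3],
    "dropout": [0.1, 0.3, 0.5],
    "weight_decay": [1e-4, 1e-7, 0.0],
    "num_model_layers": [1, 2, 3],
    "num_pred_layers": [1, 2, 3],
    "embedding_size": [128, 256],
}

def configurations(grid):
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# 2 * 3 * 3 * 3 * 3 * 2 = 324 runs; the best is chosen on validation.
```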
**Existing Evaluation Settings**. In the evaluation stage, the same set of randomly sampled negatives are used for all positive samples. We note that one exception is ogbl-citation2, where they randomly sample 1000 negative samples per positive sample. For Cora, Citeseer, and Pubmed the number of negative samples is equal to the number of positive samples. For the OGB datasets, we use the existing fixed set of randomly chosen negatives found in [33]. Furthermore, for ogbl-collab we follow the existing protocol [33] and include the validation edges in the training graph during testing. This setting is adopted on ogbl-collab under both the existing and new evaluation setting.
## 3 Fair Comparison Under the Existing Setting
In this section, we conduct a fair comparison among link prediction methods. This comparison is spurred by the multiple pitfalls noted in Section 1, which include lower-than-actual model performance, multiple data splits, and inconsistent evaluation metrics. These pitfalls hinder our ability to fairly compare different methods. To rectify this, we re-evaluate all methods adhering to the settings listed in Section 2.2.
The results are split into two tables. The results for Cora, Citeseer, and Pubmed are shown in Table 1 and OGB in Table 2. For simplicity, we only present the AUC and MRR for Cora, Citeseer, and Pubmed. For OGB datasets, we include AUC and the original ranking metric reported in [33] to allow a convenient comparison (Hits\(@20\) for ogbl-ddi, Hits\(@50\) for ogbl-collab, Hits\(@100\) for ogbl-ppa, and MRR for ogbl-citation2). We use ">24h" to denote methods that require more than 24 hours for either training one epoch or evaluation. OOM indicates that the algorithm requires over 50Gb of GPU memory. Additional results in terms of other metrics are presented in Appendix E.
| Category | Model | Cora MRR | Cora AUC | Citeseer MRR | Citeseer AUC | Pubmed MRR | Pubmed AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Heuristic | CN | 20.99 | 70.85 | 28.34 | 67.49 | 14.02 | 63.9 |
| Heuristic | AA | 31.87 | 70.96 | 29.37 | 67.49 | 16.66 | 63.9 |
| Heuristic | RA | 30.79 | 70.96 | 27.61 | 67.48 | 15.63 | 63.9 |
| Heuristic | Shortest Path | 12.45 | 81.08 | 31.82 | 75.5 | 7.15 | 74.64 |
| Heuristic | Katz | 27.4 | 81.17 | 38.16 | 75.37 | 21.44 | 74.86 |
| Embedding | Node2Vec | 37.29 ± 8.82 | 90.97 ± 0.64 | 44.33 ± 8.99 | 94.46 ± 0.59 | 34.61 ± 2.48 | 93.14 ± 0.18 |
| Embedding | MF | 14.29 ± 5.79 | 80.29 ± 2.26 | 24.80 ± 4.71 | 75.92 ± 3.25 | 19.29 ± 6.29 | 93.06 ± 0.43 |
| Embedding | MLP | 31.21 ± 7.90 | 95.32 ± 0.37 | 43.53 ± 7.26 | 94.45 ± 0.32 | 16.52 ± 4.14 | 98.34 ± 0.10 |
| GNN | GCN | 32.50 ± 6.87 | 95.01 ± 0.32 | 50.01 ± 6.04 | 95.89 ± 0.26 | 19.94 ± 2.42 | 98.69 ± 0.06 |
| GNN | GAT | 31.86 ± 6.08 | 93.90 ± 0.32 | 48.69 ± 7.53 | 96.25 ± 2.00 | 18.63 ± 7.75 | 98.20 ± 0.07 |
| GNN | SAGE | **37.83 ± 7.75** | 95.63 ± 0.27 | 47.84 ± 6.39 | **93.99 ± 0.15** | 22.74 ± 5.47 | **98.87 ± 0.04** |
| GNN | GAE | 29.98 ± 3.21 | 95.08 ± 0.33 | **63.33 ± 3.14** | 97.06 ± 0.22 | 16.67 ± 0.19 | 97.47 ± 0.08 |
| GNN+Pairwise Info | SEAL | 26.69 ± 5.89 | 90.59 ± 0.75 | 39.36 ± 4.99 | 88.52 ± 1.40 | **38.06 ± 5.18** | 97.77 ± 0.40 |
| GNN+Pairwise Info | BUDDY | 26.40 ± 4.40 | 59.06 ± 0.36 | 95.94 ± 8.96 | 96.72 ± 0.26 | 23.98 ± 5.11 | 98.2 ± 0.05 |
| GNN+Pairwise Info | Neo-GNN | 22.65 ± 2.60 | 93.73 ± 0.36 | 53.97 ± 5.88 | 94.98 ± 0.60 | 31.45 ± 3.17 | 98.71 ± 0.05 |
| GNN+Pairwise Info | NCN | 32.93 ± 3.80 | **96.76 ± 0.18** | 54.97 ± 6.03 | 97.04 ± 0.26 | 35.65 ± 4.60 | **98.98 ± 0.04** |
| GNN+Pairwise Info | NCNC | 29.01 ± 3.83 | **96.90 ± 0.28** | **64.03 ± 3.67** | **97.65 ± 0.30** | 25.70 ± 4.48 | **99.14 ± 0.03** |
| GNN+Pairwise Info | NBFNet | **37.69 ± 3.97** | 92.85 ± 0.17 | 38.17 ± 3.06 | 91.06 ± 0.15 | **47.32 ± 2.12** | 98.34 ± 0.02 |
| GNN+Pairwise Info | PEG | 22.76 ± 1.84 | 94.46 ± 0.34 | 56.12 ± 6.62 | 96.15 ± 0.41 | 21.05 ± 2.85 | 96.97 ± 0.39 |

Table 1: Results on Cora, Citeseer, and Pubmed (%) under the existing evaluation setting. Highlighted are the results ranked **first**, **second**, and **third**.
We have several noteworthy observations concerning the methods, the datasets, the evaluation settings, and the overall results. We highlight the main observations below.
**Observation 1: Better than Reported Performance.** We find that for some models we are able to achieve superior performance compared to what is reported by recent studies. For instance, in our study Neo-GNN [14] achieves the best overall test performance on ogbl-collab with a Hits@50 of \(66.13\). In contrast, the reported performance in [14] is only \(57.52\), which would rank seventh under our current setting. This is because the original study [14] does not follow the standard setting of including validation edges in the graph during testing. This setting, as noted in Section 2.2, is used by all other methods on ogbl-collab. However it was omitted by [14], resulting in lower reported performance.
Furthermore, with proper tuning, conventional baselines like GCN [34] and GAE [20] generally exhibit enhanced performance relative to what was originally reported across all datasets. For example, we find that GAE can achieve the second best MRR on Citeseer and GCN the third best Hits@20 on ogbl-ddi. A comparison of the reported results and ours is shown in Table 3. We note that we report AUC for Cora, Citeseer, and Pubmed, as it was used in the original study. These observations suggest that the performance of various methods is better than what was reported in their initial publications. However, many studies [13, 15, 25] only report the original performance for comparison, which has the potential to lead to inaccurate conclusions.
**Observation 2: Divergence from Reported Results on ogbl-ddi.** We observe that our results in Table 2 for ogbl-ddi differ from the reported results. Outside of GCN, which reports better performance, most other GNN-based methods report a lower-than-reported performance. For example, for BUDDY we only achieve a Hits@20 of 29.60 vs. the reported 78.51 (see Appendix C for a comprehensive comparison among methods). We find that the reason for this difference depends on the method. BUDDY [13] reported using 6 negatives per positive sample during training, leading to an increase in performance. Neo-GNN [14] first pretrains the GNN under the link prediction task, and then uses the pretrained model as the initialization for Neo-GNN. For a fair comparison among methods, we only use 1 negative per positive sample in training and we don't apply the pretraining. For other methods, we find that a weak relationship between the validation and test performance
| Category | Model | ogbl-collab Hits@50 | ogbl-collab AUC | ogbl-ddi Hits@20 | ogbl-ddi AUC | ogbl-ppa Hits@100 | ogbl-ppa AUC | ogbl-citation2 MRR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Heuristic | CN | 61.37 | 82.78 | 17.73 | 95.2 | 27.65 | 97.22 | 50.31 |
| Heuristic | AA | 64.17 | 82.78 | 18.61 | 95.43 | 32.45 | 97.23 | 51.69 |
| Heuristic | RA | 63.81 | 82.78 | 6.23 | 96.51 | 49.33 | 97.24 | 51.65 |
| Heuristic | Shortest Path | 0 | 95.83 | 0 | 59.07 | 0 | 90.13 | >24h |
| Heuristic | Katz | 60.28 | 90.05 | >24h | >24h | >24h | >24h | >24h |
| Embedding | Node2Vec | 49.06 ± 1.04 | 96.24 ± 0.15 | 34.69 ± 2.90 | 99.78 ± 0.04 | 26.24 ± 0.96 | 99.77 ± 0.00 | 45.04 ± 0.10 |
| Embedding | MF | 41.81 ± 1.67 | 83.75 ± 1.77 | 23.50 ± 5.35 | 99.46 ± 0.10 | 28.4 ± 4.62 | 99.46 ± 0.10 | 50.57 ± 1.24 |
| Embedding | MLP | 35.81 ± 1.08 | 95.91 ± 0.08 | N/A | N/A | 0.45 ± 0.04 | 90.23 ± 0.00 | 38.07 ± 0.09 |
| GNN | GCN | 54.96 ± 3.18 | 97.98 ± 0.06 | 49.90 ± 7.23 | 99.86 ± 0.03 | 29.57 ± 2.90 | **99.84 ± 0.03** | **84.85 ± 0.07** |
| GNN | GAT | 55.00 ± 3.28 | 97.11 ± 0.09 | 31.88 ± 8.33 | 99.63 ± 0.21 | 0.001 | OOM | OOM |
| GNN | SAGE | 59.94 ± 1.37 | 96.88 ± 0.03 | 49.84 ± 15.56 | 99.86 ± 0.10 | 41.24 ± 19.94 | 99.82 ± 0.00 | 83.66 ± 0.09 |
| GNN | GAE | OOM | OOM | 7.09 ± 6.02 | 75.34 ± 15.96 | OOM | OOM | OOM |
| GNN+Pairwise Info | SEAL | 63.37 ± 0.69 | 96.65 ± 0.29 | 25.25 ± 3.90 | 97.97 ± 0.19 | 48.80 ± 5.61 | 99.79 ± 0.02 | 86.93 ± 0.43 |
| GNN+Pairwise Info | BUDDY | **64.59 ± 0.46** | 96.52 ± 0.40 | 29.60 ± 7.98 | 99.81 ± 0.02 | 47.33 ± 1.96 | 99.56 ± 0.02 | 87.86 ± 0.18 |
| GNN+Pairwise Info | Neo-GNN | **66.13 ± 0.61** | **98.23 ± 0.05** | 90.55 ± 6.03 | 98.06 ± 2.00 | 48.55 ± 1.01 | 97.30 ± 0.14 | **83.54 ± 0.32** |
| GNN+Pairwise Info | NCN | 63.86 ± 0.51 | 97.83 ± 0.04 | **76.52 ± 10.47** | **99.97 ± 0.00** | **62.63 ± 1.15** | **99.95 ± 0.01** | **89.27 ± 0.05** |
| GNN+Pairwise Info | NCNC | **65.97 ± 1.03** | **95.02 ± 0.05** | **70.23 ± 12.11** | **99.97 ± 0.01** | **62.61 ± 0.76** | **99.97 ± 0.01** | **89.82 ± 0.43** |
| GNN+Pairwise Info | NBFNet | OOM | OOM | >24h | >24h | OOM | OOM | OOM |
| GNN+Pairwise Info | PEG | 49.02 ± 2.99 | 94.45 ± 0.89 | 30.28 ± 4.92 | 99.45 ± 0.04 | OOM | OOM | OOM |

Table 2: Results on OGB datasets (%) under the existing evaluation setting. Highlighted are the results ranked **first**, **second**, and **third**.
| GCN | ogbl-collab Hits@50 | ogbl-ppa Hits@100 | ogbl-ddi Hits@20 | ogbl-citation2 MRR |
| --- | --- | --- | --- | --- |
| Reported | 47.14 ± 1.45 | 18.67 ± 1.32 | 37.07 ± 5.07 | 84.74 ± 0.21 |
| Ours | **54.96 ± 3.18** | **29.57 ± 2.90** | **49.90 ± 7.23** | **84.85 ± 0.07** |

| GAE | Cora AUC | Citeseer AUC | Pubmed AUC |
| --- | --- | --- | --- |
| Reported | 91.00 ± 0.01 | 89.5 ± 0.05 | 96.4 ± 0.00 |
| Ours | **95.08 ± 0.33** | **97.06 ± 0.22** | **97.47 ± 0.08** |

Table 3: Comparison of ours and the reported results for GCN and GAE.
complicates the tuning process, making it difficult to find the optimal hyperparameters. Please see Appendix D for a more in-depth study and discussion.
**Observation 3: High Model Standard Deviation.** The results in Tables 1 and 2 present the mean performance and standard deviation when training over 10 seeds. Generally, we find that for multiple datasets the standard deviation of the ranking metrics is often high for most models. For example, the standard deviation for MRR can be as high as \(8.82\), \(8.96\), or \(7.75\) for Cora, Citeseer, and Pubmed, respectively. Furthermore, on ogbl-ddi the standard deviation of Hits@20 reaches as high as 10.47 and 15.56. A high variance indicates unstable model performance. This makes it difficult to compare results between methods as the true performance lies in a larger range. This further complicates replicating model performance, as even large differences with the reported results may still fall within variance (see observation 2). Later in Section 4.3 we find that our new evaluation can reduce the model variance for all datasets (see Table 6). This suggests that the high variance is related to the current evaluation procedure.
**Observation 4: Inconsistency of AUC vs. Ranking-Based Metrics.** The AUC score is widely adopted to evaluate recent advanced link prediction methods [20, 19]. However, from our results in Tables 1 and 2 we observe that there exists a disparity between AUC and ranking-based metrics. In some cases, the AUC score can be high when the ranking metric is very low or even 0. For example, the Shortest Path heuristic records a Hits@K of 0 on ogbl-collab and ogbl-ppa. However, the AUC score on both datasets is \(>95\%\). Furthermore, even though RA records the third and fifth best performance on ogbl-ppa and ogbl-collab, respectively, it has a lower AUC score than Shortest Path on both. Previous works [22, 21] argued that AUC is not a proper metric for link prediction. This is due to the inapplicability of AUC for highly imbalanced problems [35, 36].
## 4 New Evaluation Setting
In this section, we introduce a new setting for evaluating link prediction methods. We first discuss the unrealistic nature of the current evaluation setting in Section 4.1. Based on this, we present our new evaluation setting in Section 4.2, which aims to align better with real-world scenarios. Lastly, in Section 4.3, we present and discuss the results based on our new evaluation setting.
### Issues with the Existing Evaluation Setting
The existing evaluation procedure for link prediction is to rank a positive sample against a set of \(K\) randomly selected negative samples. The same set of \(K\) negatives are used for all positive samples (with the exception of ogbl-citation2 which utilizes 1000 per positive sample). We demonstrate that there are multiple issues with this setting, making it difficult to properly evaluate the effectiveness of current models.
**Issue 1: Non-Personalized Negative Samples.** The existing evaluation setting uses the same set of negative samples for all positive samples (outside of ogbl-citation2). This strategy, referred to as global negative sampling [37], is not a commonly sought objective. Rather, we are often more interested in predicting links that will occur for a specific node. Take, for example, a social network that connects users who are friends. In this scenario, we may be interested in recommending new friends to a user \(u\). This requires learning a classifier \(f\) that assigns a probability to a link existing. When evaluating this task, we want to rank links where \(u\) connects to an existing friend above those where they don't. For example, if \(u\) is friends with \(a\) but not \(b\), we hope that \(f(u,a)>f(u,b)\). However, the existing evaluation setting doesn't explicitly test for this. Rather it compares a true sample \((u,a)\) with a potentially unrelated negative sample, e.g., \((c,d)\). This is not aligned with the real-world usage of link prediction on such graphs.
**Issue 2: Easy Negative Samples.** The existing evaluation setting randomly selects negative samples to use. However given the large size of most graphs (see Table 7 in Appendix B), randomly sampled negatives are likely to choose two nodes that bear no relationship to each other. Such node pairs are trivial to classify. We demonstrate this by plotting the distribution of common neighbors (CN), a strong heuristic, for all positive and negative test samples in Figure 1. Almost all the negative samples contain no CNs, making them easy to classify. We further show that the same problem afflicts even the smaller datasets in Figure 3 in Appendix A.
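This claim is easy to check empirically. A sketch (ours) that estimates the fraction of uniformly sampled node pairs sharing no common neighbor:

```python
import numpy as np
import scipy.sparse as sp

def frac_zero_common_neighbors(adj: sp.csr_matrix, num_samples=10_000, seed=0):
    """Fraction of uniformly random node pairs with no common neighbor."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    u = rng.integers(0, n, size=num_samples)
    v = rng.integers(0, n, size=num_samples)
    # entrywise product of rows u and v counts shared neighbors per pair
    cn = np.asarray(adj[u].multiply(adj[v]).sum(axis=1)).ravel()
    return float(np.mean(cn == 0))
```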
These observations suggest that a more realistic evaluation strategy is desired. At the core of this challenge is which negative samples to use during evaluation. We discuss our design for solving this in the next subsection.
### Heuristic Related Sampling Technique (HeaRT)
In this subsection, we introduce a new strategy for evaluating link prediction methods. To address the concerns outlined in Section 4.1, we design a new method for sampling negatives during evaluation. Our strategy, HeaRT, solves these challenges by: (a) personalizing the negatives to each sample and (b) using heuristics to select hard negative samples. This allows for the negative samples to be directly related to each positive sample while also being non-trivial. We further discuss how to ensure that the negative samples are both _personalized_ and _non-trivial_ for a specific positive sample.
From our discussion in Section 4.1, we are motivated in personalizing the negatives to each positive sample. Since the positive samples in the current datasets are node pairs, we seek to personalize the negatives to both nodes in the positive sample. Extending our example in Section 4.1, this is analogous to restricting the negatives to contain one of the two users from the original friendship pair. As such, for a positive sample \((u,a)\), the negative samples will belong to the set:
\[S(u,a)=\{(u^{\prime},a)\mid u^{\prime}\in\mathcal{V}\}\cup\{(u,a^{\prime}) \mid a^{\prime}\in\mathcal{V}\}, \tag{1}\]
where \(\mathcal{V}\) is the set of nodes. This is similar to the setting used for knowledge graph completion (KGC) [38], which utilizes all such samples for evaluation. However, one drawback of evaluating each positive sample against the entire set of possible corruptions is the high computational cost. To mitigate this issue, we consider only utilizing a small subset of \(S(u,a)\) during evaluation.
_The key challenge is how to generate a subset of \(S(u,a)\)._ If we randomly sample from \(S(u,a)\), we risk only utilizing easy negative samples. This is one of the issues of the existing evaluation setting (see Issue 2 in Section 4.1), where, by randomly selecting negatives, one unknowingly produces negative samples that are too easy. We address this by selecting the negative samples via a combination of multiple heuristics. Since heuristics typically correlate well with performance, we ensure that the negative samples will be non-trivial to classify. This is similar to the concept of candidate generation [39; 40], which only ranks a subset of candidates that are most likely to be true.
An overview of the generation process is given in Figure 2. For each positive sample, we generate \(K\) negative samples. To allow personalization to both nodes in the positive sample equally, we sample \(K/2\) negatives with each node. For the heuristics, we consider RA [28], PPR [41], and feature similarity. A more detailed discussion on the negative sample generation is given in Appendix F.
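As a rough illustration of the sampling step, the sketch below picks the \(K\) hardest corruptions of a positive pair using the RA heuristic alone; the actual HeaRT combines RA with PPR and feature similarity (Appendix F), so this is a simplified stand-in rather than the released implementation:

```python
import numpy as np
import scipy.sparse as sp

def hard_corruptions(adj: sp.csr_matrix, u: int, a: int, k: int = 500):
    """k/2 corruptions (u, *) and k/2 corruptions (*, a) for the positive
    pair (u, a), keeping candidates with the highest RA score."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    inv_deg = np.divide(1.0, deg, out=np.zeros_like(deg, dtype=float),
                        where=deg > 0)

    def top_partners(fixed, m):
        # RA(fixed, w) for all w at once: (A diag(1/deg) A)[fixed, :]
        scores = np.asarray(
            (adj[fixed].multiply(inv_deg) @ adj).todense()).ravel()
        scores[[u, a]] = -np.inf                # never corrupt into the pair
        scores[adj[fixed].indices] = -np.inf    # skip existing edges
        return np.argpartition(-scores, m)[:m]

    return ([(u, int(w)) for w in top_partners(u, k // 2)]
            + [(int(w), a) for w in top_partners(a, k // 2)])
```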
### Results and Discussion
In this subsection we present our results when utilizing HeaRT. We follow the parameter ranges introduced in Section 2.2. For all datasets we utilize \(K=500\) negative samples per positive sample during evaluation. Furthermore for ogbl-ppa we only use a small subset of the validation and test positive samples (100K each) for evaluation. This is because the large size of the validation and test sets (see Table 7 in Appendix B) makes HeaRT prohibitively expensive.
Figure 2: Pipeline for generating the hard negative samples for a positive sample (a, b).
The results are shown in Table 4 (Cora, Citeseer, Pubmed) and Table 5 (OGB). For simplicity, we only include the MRR and Hits@10 for Cora, Citeseer, Pubmed, and the MRR and Hits@20 for OGB. Additional results for other metrics can be found in Appendix G. We highlight the main observations below.
**Observation 1: Better Performance of Simple Models**. We find that under HeaRT, "simple" baseline models (i.e., heuristic, embedding, and GNN methods) show a greater propensity to outperform their counterparts on ranking metrics than under the existing setting. Specifically, we focus on MRR in Tables 1, 4, and 5, and the corresponding ranking-based metrics in Table 2. Under the existing setting, such methods only rank in the top three for any dataset a total of **5** times. However, under HeaRT this occurs **10** times. Furthermore, under the existing setting only **1** "simple" method ranks best overall, while under HeaRT there are **4**. This suggests that recent advanced methods may have benefited from the easier negative samples in the existing setting.
Another interesting observation is that on ogbl-collab, heuristic methods are able to outperform more complicated models by a large margin. Specifically, we find that Katz is the best ranked method, Shortest Path the second, and RA the fourth. Furthermore, the MRR gap between the second ranked method (Shortest Path) and the third (BUDDY) is very large at 14.29 points.
| Category | Model | Cora MRR | Cora Hits@10 | Citeseer MRR | Citeseer Hits@10 | Pubmed MRR | Pubmed Hits@10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Heuristic | CN | 9.78 | 20.11 | 8.42 | 18.68 | 2.28 | 4.78 |
| Heuristic | AA | 11.91 | 24.10 | 10.82 | 22.20 | 2.63 | 5.51 |
| Heuristic | RA | 11.81 | 24.48 | 10.84 | 22.86 | 2.47 | 4.9 |
| Heuristic | Shortest Path | 5.04 | 15.37 | 5.83 | 16.26 | 0.86 | 0.38 |
| Heuristic | Katz | 11.41 | 22.77 | 11.19 | 24.84 | 3.01 | 5.98 |
| Embedding | Node2Vec | 14.47 ± 0.60 | 32.77 ± 1.29 | 21.17 ± 1.01 | 45.82 ± 2.01 | 3.94 ± 0.24 | 8.51 ± 0.77 |
| Embedding | MF | 6.20 ± 1.42 | 15.26 ± 3.39 | 7.80 ± 0.79 | 16.72 ± 1.99 | 4.46 ± 0.32 | 9.42 ± 0.87 |
| Embedding | MLP | 13.52 ± 0.65 | 31.01 ± 1.71 | 22.62 ± 0.55 | 48.02 ± 1.79 | 6.41 ± 0.25 | 15.04 ± 0.67 |
| GNN | GCN | 16.61 ± 0.30 | 36.26 ± 1.14 | 21.09 ± 0.88 | 47.23 ± 1.88 | 7.13 ± 0.27 | 15.22 ± 0.57 |
| GNN | GAT | 13.84 ± 0.63 | 38.29 ± 1.27 | 19.58 ± 0.84 | 45.30 ± 1.3 | 4.95 ± 0.14 | 9.99 ± 0.64 |
| GNN | SAGE | 14.74 ± 0.69 | 34.65 ± 1.47 | 21.09 ± 1.15 | 48.75 ± 1.85 | **9.40 ± 0.70** | **20.54 ± 1.40** |
| GNN | GAE | 18.32 ± 0.41 | 37.95 ± 1.24 | **25.25 ± 0.82** | 49.65 ± 1.48 | 5.27 ± 0.25 | 10.50 ± 0.46 |
| GNN+Pairwise Info | SEAL | 10.67 ± 3.46 | 24.27 ± 6.74 | 13.16 ± 1.66 | 27.37 ± 3.20 | 5.88 ± 0.53 | 12.47 ± 1.23 |
| GNN+Pairwise Info | BUDDY | 13.71 ± 0.59 | 30.40 ± 1.18 | 22.84 ± 0.36 | 48.35 ± 1.18 | 7.56 ± 0.18 | 16.85 ± 0.53 |
| GNN+Pairwise Info | Neo-GNN | 13.95 ± 0.39 | 31.27 ± 0.72 | 17.34 ± 0.84 | 41.74 ± 1.78 | 7.74 ± 0.30 | 17.88 ± 0.71 |
| GNN+Pairwise Info | NCN | 14.66 ± 0.95 | 35.14 ± 1.04 | 28.65 ± 1.21 | 53.41 ± 1.46 | 5.84 ± 0.22 | 13.22 ± 0.56 |
| GNN+Pairwise Info | NCNC | 14.98 ± 1.00 | 36.70 ± 1.57 | **24.10 ± 0.65** | **53.72 ± 0.97** | **8.58 ± 0.59** | **18.81 ± 1.16** |
| GNN+Pairwise Info | NBFNet | 13.56 ± 0.58 | 31.12 ± 0.75 | 14.29 ± 0.80 | 31.39 ± 1.34 | >24h | >24h |
| GNN+Pairwise Info | PEG | 15.73 ± 0.39 | 36.03 ± 0.75 | 21.01 ± 0.77 | 45.56 ± 1.38 | 4.4 ± 0.41 | 8.70 ± 1.26 |

Table 4: Results on Cora, Citeseer, and Pubmed (%) under HeaRT. Highlighted are the results ranked **first**, **second**, and **third**.
| Model | ogbl-collab MRR | ogbl-collab Hits@20 | ogbl-ddi MRR | ogbl-ddi Hits@20 | ogbl-ppa MRR | ogbl-ppa Hits@20 | ogbl-citation2 MRR | ogbl-citation2 Hits@20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CN | 12.60 | 27.51 | 6.71 | 38.69 | 25.70 | 68.25 | 16.83 | 36.45 |
| AA | 16.40 | 32.65 | 6.97 | 39.75 | 26.85 | 70.22 | 17.80 | 37.36 |
| RA | 28.14 | 41.16 | 8.70 | 44.01 | 28.34 | 71.50 | 18.10 | 37.84 |
| Shortest Path | **46.71** | **46.56** | 0 | 0 | 0.54 | 1.31 | >24h | >24h |
| Katz | **47.15** | **48.66** | >24h | >24h | >24h | >24h | 14.10 | 35.55 |
| Node2Vec | 12.10 ± 0.20 | 25.85 ± 0.21 | 11.14 ± 0.95 | 63.63 ± 2.05 | 18.34 ± 0.10 | 53.42 ± 0.11 | 14.67 ± 0.18 | 42.68 ± 0.20 |
| MF | 26.86 ± 1.74 | 38.44 ± 0.07 | 13.99 ± 0.47 | 59.50 ± 1.68 | 22.47 ± 1.53 | 70.71 ± 4.82 | 8.72 ± 2.60 | 29.64 ± 7.30 |
| MLP | 12.61 ± 0.66 | 23.05 ± 0.89 | N/A | N/A | 0.98 ± 0.00 | 1.47 ± 0. | | |

Table 5: Results on OGB datasets (%) under HeaRT. Highlighted are the results ranked **first**, **second**, and **third**.
This suggests that methods that utilize GNNs may not be optimal for certain graph topologies, necessitating further study.
**Observation 2: Lower Model Standard Deviation**. We observed earlier that, under the existing evaluation setting, the model variance across seeds was high (see observation 3 in Section 3). This complicates model comparison as the model performance is unreliable. Interestingly, we find that HeaRT is able to dramatically reduce the variance for all datasets. We demonstrate this by first calculating the mean standard deviation across all models on each individual dataset. This was done for both evaluation settings with the results compared. As demonstrated in Table 6, the mean standard deviation decreases for all datasets. This is especially true for Cora, Citeseer, and Pubmed, which each decrease by over 85%. Such a large decrease in standard deviation is noteworthy as it allows for a more trustworthy and reliable comparison between methods.
We posit that this observation is caused by a stronger alignment between the positive and negative samples under our new evaluation setting. Under the existing evaluation setting, the same set of negative samples is used for all positive samples. One consequence of this is that a single positive sample may bear little to no relationship to the negative samples (see Section 4.1 for more discussion). However, under our new evaluation setting, the negatives for a positive sample are a subset of the corruptions of that sample. This allows for a more natural comparison via ranking-based metrics as the samples are more related and can be more easily compared.
**Observation 3: Lower Model Performance**.
We observe that the majority of datasets exhibit a significantly reduced performance in comparison to the existing setting. For example, under the existing setting, models typically achieve a MRR of around 30, 50, and 30 on Cora, Citeseer, and Pubmed (Table 1), respectively. However, under HeaRT the MRR for those datasets is typically around 20, 25, and 10 (Table 4). Furthermore, under the existing setting many models consistently achieve a Hits@50 of around 60 on ogbl-collab (Table 2). Under HeaRT, the mean Hits@50 drops to 45 (Table 15 in Appendix G). For ogbl-citation2, the MRR of the best performing model falls from a shade under 90 on the existing setting to slightly over 20 on HeaRT. Lastly, we note that the performance on ogbl-ppa actually increases. This is because we only utilize a small subset of the total test set when evaluating on HeaRT, nullifying any comparison between the two settings.
These outcomes are observed despite HeaRT using much fewer negative samples than the original setting. This suggests that the negative samples generated by HeaRT are substantially more challenging than those used in the existing setting. This underscores the need to develop more advanced methodologies that can tackle harder negative samples like those in HeaRT.
## 5 Conclusion
In this work we have revealed several pitfalls that currently afflict recent work in link prediction. To overcome these pitfalls, we first establish a benchmark that facilitates a fair and consistent evaluation across a diverse set of models and datasets. By doing so, we are able to make several illuminating observations about the performance and characteristics of various models. Furthermore, based on several limitations we observed in the existing evaluation procedure, we introduce a more practical setting called HeaRT (Heuristic Related Sampling Technique). HeaRT incorporates a more real-world evaluation setting, resulting in a better comparison among methods. By introducing a more rigorous and realistic assessment, HeaRT could guide the field towards more effective models, thereby advancing the state of the art in link prediction.
| Dataset | Existing | HeaRT | % Change |
| --- | --- | --- | --- |
| Cora | 5.19 | 0.79 | -85% |
| Citeseer | 5.94 | 0.88 | -85% |
| Pubmed | 4.14 | 0.35 | -92% |
| ogbl-collab | 1.49 | 0.96 | -36% |
| ogbl-ppa | 2.13 | 0.36 | -83% |
| ogbl-ddi | 7.34 | 3.81 | -48% |
| ogbl-citation2 | 1.39 | 0.59 | -58% |

Table 6: Mean model standard deviation for the existing setting and HeaRT. We utilize Hits@20 for ogbl-ddi, Hits@50 for ogbl-collab, Hits@100 for ogbl-ppa, and MRR otherwise.
2301.11926 | Neural Network Approximation of Optimal Controls for Stochastic
Reaction-Diffusion Equations | We present a numerical algorithm that allows the approximation of optimal
controls for stochastic reaction-diffusion equations with additive noise by
first reducing the problem to controls of feedback form and then approximating
the feedback function using finitely based approximations. Using structural
assumptions on the finitely based approximations, rates for the approximation
error of the cost can be obtained. Our algorithm significantly reduces the
computational complexity of finding controls with asymptotically optimal cost.
Numerical experiments using artificial neural networks as well as radial basis
function networks illustrate the performance of our algorithm. Our approach can
also be applied to stochastic control problems for high dimensional stochastic
differential equations and more general stochastic partial differential
equations. | Wilhelm Stannat, Alexander Vogler, Lukas Wessels | 2023-01-25T18:48:03Z | http://arxiv.org/abs/2301.11926v2 | # Neural Network Approximation of Optimal Controls for Stochastic Reaction-Diffusion Equations
###### Abstract
We present a numerical algorithm that allows the approximation of optimal controls for stochastic reaction-diffusion equations with additive noise by first reducing the problem to controls of feedback form and then approximating the feedback function using finitely based approximations. Using structural assumptions on the finitely based approximations, rates for the approximation error of the cost can be obtained. Our algorithm significantly reduces the computational complexity of finding controls with asymptotically optimal cost. Numerical experiments using artificial neural networks as well as radial basis function networks illustrate the performance of our algorithm. Our approach can also be applied to stochastic control problems for high dimensional stochastic differential equations and more general stochastic partial differential equations.
**There is a huge body of literature on optimal control problems with partial differential equation (PDE) constraints and their numerical treatment. Recent years have shown a rising interest in the optimal control of stochastic partial differential equations (SPDEs). However, the numerical approximation of optimal controls, let alone its practical implementation in the stochastic case, faces serious obstacles due to the computational complexity of classical algorithms. In this work, we present a new numerical algorithm that approximates feedback controls for SPDEs with asymptotically optimal cost. The algorithm is based on adjoint calculus applied to gradient descent. For the approximation of feedback controls, we use finitely based approximations such as artificial neural networks or radial basis function networks. The restriction to additive noise and our approach for the approximation of feedback functions enables us to significantly reduce the algorithmic complexity of our approach in comparison with classical algorithms.**
+
Footnote †: preprint: APS/123-QED
## I Introduction
For a fixed finite time horizon \(T>0\), we consider the randomly forced reaction-diffusion equation
\[\begin{cases}\partial_{t}u(t,x)=\Delta u(t,x)+f(u(t,x))+\xi(t,x),\,(t,x)\in[0, T]\times\Lambda\\ u(0,x)=u(x),\quad x\in\Lambda\end{cases} \tag{1}\]
on a bounded domain \(\Lambda\subset\mathbb{R}^{d}\) with Neumann boundary conditions. Here, \(\Delta:=\sum_{k=1}^{d}\frac{\partial^{2}}{\partial x_{k}^{2}}\) denotes the Laplace operator, \(f:\mathbb{R}\to\mathbb{R}\) models a local reaction term, and \(\xi(t,x)\) are random fluctuations. Deterministic reaction-diffusion equations are ubiquitous in the natural sciences and, in many situations, taking into account random fluctuations leads to the more realistic model (1). Typically, these random fluctuations are highly irregular, and therefore, in order to treat this equation rigorously, we reformulate it as the following \(L^{2}(\Lambda)\)-valued SPDE:
\[\begin{cases}\mathrm{d}u_{t}=[\Delta u_{t}+\mathcal{F}(u_{t})]\mathrm{d}t+ \sigma\mathrm{d}W_{t},\quad t\in[0,T]\\ u_{0}=u\in L^{2}(\Lambda),\end{cases} \tag{2}\]
where \(u_{t}(x)=u(t,x)\) for fixed \(t\in[0,T]\) is considered as an element in \(L^{2}(\Lambda)\). The random fluctuations \(\xi(t,x)\) are modelled by a cylindrical Wiener process \((W_{t})_{t\in[0,T]}\) on \(L^{2}(\Lambda)\), defined on some underlying probability space \((\Omega,\mathcal{F},\mathbb{P})\), and \(\sigma:L^{2}(\Lambda)\to L^{2}(\Lambda)\) is a Hilbert-Schmidt operator. For more details on the mathematical theory of SPDEs, see [1]. Furthermore, \(\mathcal{F}:L^{2}(\Lambda)\to L^{2}(\Lambda)\) denotes the Nemytskii operator associated with \(f\), i.e.,
\[\mathcal{F}(u)(x):=f(u(x)),\quad u\in L^{2}(\Lambda),x\in\Lambda.\]
The objective of control theory is to achieve a desired outcome for a dynamical system by applying an external input which can be chosen freely among a set of admissible inputs. In order to set up a mathematical formulation for the control of the SPDE (2), we introduce a control process \(\mathfrak{g}:[0,T]\times\Omega\to\mathcal{U}\subset L^{2}(\Lambda)\), adapted to the filtration generated by \(u_{t}\), as a forcing term on the right-hand side of the equation
\[\begin{cases}\mathrm{d}u_{t}^{\mathfrak{g}}=[\Delta u_{t}^{ \mathfrak{g}}+\mathcal{F}(u_{t}^{\mathfrak{g}})+\mathfrak{g}_{t}]\mathrm{d}t+ \sigma\mathrm{d}W_{t},\quad t\in[0,T]\\ u_{0}^{\mathfrak{g}}=u\in L^{2}(\Lambda),\end{cases} \tag{3}\]
and define the cost functional
\[J(\mathfrak{g}):=\mathbb{E}\bigg{[}\int_{0}^{T}\int_{\Lambda}l( t,x,u_{t}^{\mathfrak{g}}(x))\mathrm{d}x+\frac{\nu}{2}\|\mathfrak{g}_{t}\|_{L^{2}( \Lambda)}^{2}\mathrm{d}t\\ +\int_{\Lambda}m(x,u_{T}^{\mathfrak{g}}(x))\mathrm{d}x\bigg{]}. \tag{4}\]
Here, the running cost \(l(t,x,u):[0,T]\times\Lambda\times\mathbb{R}\to\mathbb{R}\) and the terminal cost \(m(x,u):\Lambda\times\mathbb{R}\to\mathbb{R}\) are assumed to be differentiable and Lipschitz continuous in the state variable \(u\). Now, the objective is to find a control process \(\mathfrak{g}\) that minimizes the cost functional (4) over some set of admissible controls subject to (3).
In the deterministic case, there is a huge body of literature on necessary and sufficient optimality conditions [2; 3; 4], as well as numerical algorithms which efficiently approximate optimal controls, see e.g. [5; 6; 7; 8]. In recent years, extensions to the stochastic case have seen a rising interest in the mathematical literature, leading to necessary and sufficient optimality conditions in great generality [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. However, classical algorithms for the numerical approximation of optimal controls in the stochastic case either require the approximation of backward SPDEs or the approximation of infinite dimensional Hamilton-Jacobi-Bellman (HJB) equations. Due to the curse of dimensionality, both of these alternatives are computationally very expensive, leading to an increased interest in the development of new, more efficient algorithms [22; 23; 24; 25; 26; 27; 28; 29; 30; 31].
A considerable reduction of the problem can be achieved in the case when an optimal control \(\mathfrak{g}^{*}\) is of feedback-type, i.e.,
\[\mathfrak{g}^{*}_{t}=G^{*}(t,u^{G^{*}}_{t}), \tag{5}\]
for some \(G^{*}:[0,T]\times L^{2}(\Lambda)\to L^{2}(\Lambda)\). In this work, we consider a mathematical setting in which the optimal control is indeed of the above type. In a first step we then use finitely based approximations for \(G^{*}\) in order to approximate the optimal control \(\mathfrak{g}^{*}\). This approach, together with the restriction to additive noise, enables us to replace the backward SPDE arising in adjoint calculus by a random backward PDE, which significantly reduces the computational complexity. A similar idea was already used in [32].
The reduction to finitely based feedback controls then allows, in a second step, the locally uniform approximation of finitely based feedback controls in appropriate ansatz spaces of variable, but finite, dimension. Solving the corresponding finite dimensional optimal control problem, e.g. with artificial neural networks, leads to an efficient computation of finite dimensional controls, whose costs approximate the optimal cost with increasing dimension, see Theorem 4. Imposing additional smoothness assumptions on the optimal control \(\mathfrak{g}^{*}_{t}=G^{*}(t,u^{G^{*}}_{t})\), in particular \(G^{*}\) globally Lipschitz, also enables us to derive rates for the approximation error in ansatz spaces consisting of Lipschitz continuous feedback functions, see Theorem 5. We discuss the example of neural network approximation in detail; see Theorem 8 for the universal approximation of finitely based feedback controls with 1-layer artificial neural networks and Proposition 9 for rates on the approximation error.
The practical implementation of the gradient descent algorithm for the approximation of finite dimensional feedback controls also requires the numerical discretization of the controlled state equation. In Proposition 11, we derive the combined approximation error for the optimal cost. Detailed proofs of our results, that also hold for more general ansatz spaces beyond artificial neural networks, can be found in [33]. The performance of our gradient descent algorithm is illustrated with three examples. The first example deals with the validation of our algorithm in the case of linear quadratic control. The other two examples consider the problem of stabilizing a bump solution of the stochastic Nagumo equation, in one case with general feedback controls, in the other case with feedback controls of Nemytskii-type.
The remainder of the paper is organized as follows: First, in Section II, we show that it is sufficient to consider feedback controls. In Section III, we explain how to approximate the optimal control by introducing ansatz spaces of controls that are suited for the numerical implementation. Section IV contains our main approximation results. In Section V, we describe our gradient descent algorithm. Next, in Section VI, we discuss the explicit example of artificial neural networks for the finite dimensional ansatz spaces. In Section VII, we investigate the error resulting from the numerical discretization. Finally, in Section VIII, we present numerical experiments using artificial neural networks and radial basis function networks.
## II Optimal controls of feedback-type
In order to reduce the complexity of the problem, we assume that there exists an optimal control \(\mathfrak{g}^{*}\) in feedback form (5), for some continuous \(G^{*}:[0,T]\times L^{2}(\Lambda)\to L^{2}(\Lambda)\) that satisfies a linear growth condition. This can be achieved, using the solution of the associated HJB equation
\[\begin{cases}\partial_{t}V+\frac{1}{2}\mathrm{tr}(\sigma^{*}D^{2}V\sigma)+\left\langle DV,\Delta u+\mathcal{F}(u)\right\rangle_{L^{2}(\Lambda)}\\ +\int_{\Lambda}l(t,x,u(x))\mathrm{d}x+\inf_{G\in\mathcal{U}}\left\{\left\langle DV,G\right\rangle+\frac{\nu}{2}\|G\|^{2}_{L^{2}(\Lambda)}\right\}=0,\\ (t,u)\in[0,T]\times L^{2}(\Lambda)\\ V(T,u)=\int_{\Lambda}m(x,u(x))\mathrm{d}x,\quad u\in L^{2}(\Lambda).\end{cases} \tag{6}\]
For solution theories regarding equations of this type, see [11]. In particular, if (6) has a unique mild solution satisfying certain regularity assumptions, and
\[\gamma(p):=\mathrm{arginf}_{G\in\mathcal{U}}\left\{\left\langle p,G\right\rangle +\frac{\nu}{2}\|G\|^{2}_{L^{2}(\Lambda)}\right\}\]
is continuous, then
\[\mathfrak{g}^{*}_{t}:=G^{*}(t,u^{G^{*}}_{t}):=\gamma(DV(t,u^{G^{*}}_{t})).\]
is an optimal control in feedback form, provided that \(u^{G^{*}}\) is a unique strong solution of the closed loop equation
\[\begin{cases}\mathrm{d}u^{G}_{t}=[\Delta u^{G}_{t}+\mathcal{F}(u^{G}_{t})+G(t,u^{G}_{t})]\mathrm{d}t+\sigma\mathrm{d}W_{t},\;t\in[0,T]\\ u^{G}_{0}=u\in L^{2}(\Lambda)\end{cases} \tag{7}\]
with \(G=G^{*}\). A sufficient condition for (7) to have a unique strong solution is that \(G^{*}\) is Lipschitz continuous in \(u\), see e.g. [1]. This is in particular the case if the solution \(V\) of the HJB equation (6) has a bounded second derivative in \(u\), see Theorem 4.155 and Remark 4.202 in [11]. However, directly tackling the optimal control problem by approximating the solution of the HJB equation (6) numerically is very challenging due to the infinite dimensionality of the domain of \(V\). Instead, our approach is to approximate the feedback function directly. Therefore we consider the following feedback control problem: Minimize
\[J(G):=\mathbb{E}\bigg{[}\int_{0}^{T}\int_{\Lambda}l(t,x,u_{t}^{G}(x) )\mathrm{d}x+\frac{\nu}{2}\|G(t,u_{t}^{G})\|_{L^{2}(\Lambda)}^{2}\mathrm{d}t\\ +\int_{\Lambda}m(x,u_{T}^{G}(x))\mathrm{d}x\bigg{]}\]
subject to equation (7). We seek to minimize \(J\) over the set of admissible controls
\[U_{\mathrm{ad}}:=\{G:[0,T]\times L^{2}(\Lambda)\to\mathcal{U}\,|\,G(\cdot,u_{ \cdot}^{G})\in\mathbb{A}\},\]
where \(\mathbb{A}=L^{2}([0,T]\times\Omega;L^{2}(\Lambda))\).
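Before turning to the ansatz spaces, it is useful to fix how \(J(G)\) can be estimated numerically for a given feedback \(G\). The sketch below is a minimal Monte Carlo estimator (ours) on \(\Lambda=(0,1)\): explicit finite differences for the Neumann Laplacian, Euler-Maruyama in time, and, purely for illustration, \(\sigma\) taken as a scalar multiple of the identity, so each grid cell receives an independent Gaussian increment of size \(\sigma\sqrt{\mathrm{d}t/\mathrm{d}x}\); strictly speaking the setting above requires a Hilbert-Schmidt \(\sigma\). All discretization choices are ours.

```python
import numpy as np

def estimate_cost(G, f, l, m, T=1.0, N=32, M=4000, paths=50,
                  sigma=0.1, nu=1.0, seed=0):
    """Monte Carlo estimate of J(G) subject to (7) on Lambda = (0, 1).
    Explicit Euler-Maruyama; stability needs dt <= dx^2 / 2."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / N, T / M
    x = (np.arange(N) + 0.5) * dx                 # cell midpoints
    total = 0.0
    for _ in range(paths):
        u = np.zeros(N)                           # initial state u_0 = 0
        cost = 0.0
        for i in range(M):
            t = i * dt
            g = G(t, u)
            lap = np.empty(N)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            lap[0] = (u[1] - u[0]) / dx**2        # zero-flux boundaries
            lap[-1] = (u[-2] - u[-1]) / dx**2
            cost += dt * (dx * np.sum(l(t, x, u)) + 0.5 * nu * dx * np.sum(g**2))
            # space-time white noise: one scaled Gaussian per grid cell
            u = u + dt * (lap + f(u) + g) \
                  + sigma * np.sqrt(dt / dx) * rng.standard_normal(N)
        total += cost + dx * np.sum(m(x, u))
    return total / paths

# e.g. a Nagumo-type drift with quadratic costs and a damping feedback:
# estimate_cost(lambda t, u: -u, lambda u: u - u**3,
#               lambda t, x, u: 0.5 * u**2, lambda x, u: 0.5 * u**2)
```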
**Example 1** (Linear Quadratic Control): _Let us consider the linear quadratic control problem_
\[\begin{cases}\mathrm{d}u_{t}^{\mathfrak{g}}=[\Delta u_{t}^{\mathfrak{g}}+ \mathfrak{g}_{t}]\mathrm{d}t+\sigma\mathrm{d}W_{t},\quad t\in[0,T]\\ u_{0}^{\mathfrak{g}}=u\in L^{2}(\Lambda),\end{cases}\]
_where \(\mathfrak{g}:[0,T]\times\Omega\to L^{2}(\Lambda)\) is an adapted process, and_
\[J(\mathfrak{g}):=\frac{1}{2}\mathbb{E}\left[\int_{0}^{T}\int_{\Lambda}(u_{t}^ {\mathfrak{g}}(x))^{2}+\mathfrak{g}_{t}^{2}(x)\mathrm{d}x\mathrm{d}t+\int_{ \Lambda}(u_{T}^{\mathfrak{g}}(x))^{2}\mathrm{d}x\right].\]
_In this case, the optimal control \(\mathfrak{g}^{*}\) is indeed of feedback form, given by_
\[\mathfrak{g}_{t}^{*}=P(t)u_{t}^{\mathfrak{g}^{*}}\]
_where \(P:[0,T]\to L(L^{2}(\Lambda))\) is the solution of the Riccati equation_
\[\begin{cases}\partial_{t}P(t)+P(t)\Delta+\Delta P(t)-Id+P^{2}(t)=0,\,\,\,t\in[ 0,T]\\ P(T)=-Id.\end{cases} \tag{8}\]
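Since \(\Delta\) with Neumann boundary conditions on \((0,1)\) is diagonal in the cosine basis and the terminal condition \(-Id\) is diagonal as well, the solution of (8) stays diagonal, and each mode solves a scalar Riccati ODE. A sketch (ours, not from the paper) integrating these modes backward in time:

```python
import numpy as np
from scipy.integrate import solve_ivp

def riccati_modes(n=16, T=1.0):
    """Diagonal entries p_k(t) of P(t) in the Neumann cosine basis on (0,1).
    Equation (8) decouples into p_k' = 2*lam_k*p_k + 1 - p_k^2, p_k(T) = -1,
    with lam_k = (k*pi)^2, since Delta and -Id are diagonal in this basis."""
    lam = (np.arange(n) * np.pi) ** 2
    sol = solve_ivp(lambda t, p: 2.0 * lam * p + 1.0 - p**2,
                    (T, 0.0), -np.ones(n),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    return sol.sol          # p(t) = sol.sol(t); feedback g_k(t) = p_k(t) u_k
```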
## III Construction of ansatz spaces
For the efficient numerical implementation we need to restrict ourselves to a subset of controls \(\mathbb{U}\subset U_{\mathrm{ad}}\) that are suitable for the implementation in a gradient descent algorithm. However, we need to ensure that when minimizing over \(\mathbb{U}\), we do not end up much worse than in the original control problem. In this section we will provide a method to construct suitable ansatz spaces \(\mathbb{U}\) when we do not have any particular control constraints, i.e. \(\mathcal{U}=L^{2}(\Lambda)\).
In order to construct a suitable space \(\mathbb{U}\), we introduce the so-called finitely based approximation of a function \(G:[0,T]\times L^{2}(\Lambda)\to L^{2}(\Lambda)\). To this end, we consider finite dimensional subspaces
\[S_{n}\subset L^{2}(\Lambda)\]
with orthonormal basis \(e_{0},\ldots,e_{n}\) and orthogonal projections \(P_{n}:L^{2}(\Lambda)\to S_{n}\), such that
\[\|P_{n}u-u\|_{L^{2}(\Lambda)}\to 0.\]
The finitely based approximations of \(G\) with respect to \(S_{n}\) are then defined by
\[G^{n}(t,u) :=P_{n}G(t,P_{n}u) \tag{9}\] \[=\sum_{k=0}^{n}\langle G(t,\sum_{j=0}^{n}u_{j}e_{j}),e_{k}\rangle_ {L^{2}(\Lambda)}e_{k},\]
where \(u_{j}:=\langle u,e_{j}\rangle_{L^{2}(\Lambda)}\). One possible choice for the finite dimensional subspaces in the case of \(\Lambda=(0,1)\) is
\[S_{n}:=\mathrm{span}\left\{1,\sqrt{2}\cos(k\pi\cdot)\Big{|}\,k=1,\ldots,n \right\}.\]
In this case the finitely based approximation of \(G\) is given by
\[G^{n}(t,u)(x)\] \[:=\sum_{k=0}^{n}2\langle G(t,\sum_{j=0}^{n}u_{j}\sqrt{2}\cos(j\pi \cdot)),\cos(k\pi\cdot)\rangle_{L^{2}(0,1)}\cos(k\pi x),\]
for \(u=\sum_{j=0}^{n}u_{j}\cos(j\pi\cdot)\in L^{2}(\Lambda)\).
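Numerically, both \(P_{n}u\) and the finitely based approximation \(G^{n}\) reduce to computing cosine coefficients by quadrature. A small sketch (ours), working on grid values of \(u\):

```python
import numpy as np

def cosine_basis(n, N):
    """Values of e_0, ..., e_n (1 and sqrt(2) cos(k pi x)) at N midpoints."""
    x = (np.arange(N) + 0.5) / N
    rows = [np.ones(N)] + [np.sqrt(2.0) * np.cos(k * np.pi * x)
                           for k in range(1, n + 1)]
    return np.vstack(rows)

def finitely_based(G, t, u_vals, n):
    """Grid values of G^n(t, u) = P_n G(t, P_n u) from equation (9)."""
    N = len(u_vals)
    E = cosine_basis(n, N)
    Pn_u = E.T @ (E @ u_vals / N)      # project u onto S_n (midpoint rule)
    g = G(t, Pn_u)                     # evaluate the feedback
    return E.T @ (E @ g / N)           # project the output back onto S_n
```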
Our main results in Section IV provide approximation results for ansatz spaces \(\mathbb{U}\subset U_{\mathrm{ad}}\). The main assumption on the ansatz spaces \(\mathbb{U}\subset U_{\mathrm{ad}}\) is the following approximation property with respect to the optimal feedback \(G^{*}\):
**Definition 2**: _Let \(G:[0,T]\times L^{2}(\Lambda)\to L^{2}(\Lambda)\). We say that a subset \(\mathbb{U}\subset U_{\mathrm{ad}}\) satisfies the universal approximation property with respect to \(G\), if there exists a sequence \((G^{n,m})_{n,m\in\mathbb{N}}\subset\mathbb{U}\) that satisfies the following linear growth condition uniformly in \(m\)_
\[\|G^{n,m}(t,u)\|_{L^{2}(\Lambda)}\leq C_{n}(1+\|u\|_{L^{2}(\Lambda)}),\]
_for some constant \(C_{n}>0\), such that for any \(R>0\)_
\[\lim_{m\to\infty}\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R)} \|G^{n}(t,u)-G^{n,m}(t,u)\|_{L^{2}(\Lambda)}^{2}=0,\]
_where_
\[\mathcal{B}_{L^{2}(\Lambda)}(0,R):=\{u\in L^{2}(\Lambda)|\|u\|_{L^{2}(\Lambda )}\leq R\}\]
_and \(G^{n}\) is given as in (9)._
We would like to stress that in the above definition we only require that the finitely based approximations of \(G\) can be approximated uniformly on bounded sets, and not \(G\) itself.
In the first part of this section we will explain how to construct ansatz spaces \(\mathbb{U}\) that satisfy the universal approximation property with respect to the optimal feedback \(G^{*}\). These ansatz spaces consider finitely based controls of arbitrary dimension \(n\in\mathbb{N}\), which are of the type
\[G(t,u)=\sum_{k=0}^{n}\psi^{k}(t,P_{n}u)e_{k},\]
for some functions \(\psi^{k}:[0,T]\times S_{n}\to\mathbb{R}\). However, in practice the dimension \(n\in\mathbb{N}\) of the ansatz space needs to be fixed
a priori and one is interested in how close one can get to the optimal cost. Our second main result, Theorem 5, provides explicit convergence rates, however, we need to strengthen the assumption on our ansatz spaces. In the second part of this section we will explain how to construct for any \(n\in\mathbb{N}\) a sequence of ansatz spaces \((\mathbb{U}^{n,m})_{m\in\mathbb{N}}\subset U_{\mathrm{ad}}\) that satisfies the following approximation property with respect to the optimal feedback \(G^{*}\):
**Definition 3**: _Let \(G:[0,T]\times L^{2}(\Lambda)\to L^{2}(\Lambda)\) and \(n\in\mathbb{N}\). We say that a sequence of subsets \((\mathbb{U}^{n,m})_{m\in\mathbb{N}}\subset U_{\mathrm{ad}}\) satisfies the Lipschitz approximation property with respect to \(G\) in dimension \(n\), if there exists a sequence of Lipschitz continuous controls \((G^{n,m})_{m\in\mathbb{N}}\) with Lipschitz constants independent of \(m\), such that \(G^{n,m}\in\mathbb{U}^{n,m}\), and a sequence of radii \((R^{n}_{m})_{m\in\mathbb{N}}\) with \(\lim_{m\to\infty}R^{n}_{m}=\infty\), such that_
\[\begin{split}\varepsilon^{n}_{m}&:=\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R^{n}_{m})}\|P_{n}(G^{n}(t,u)-G^{n,m}( t,u))\|^{2}_{L^{2}(\Lambda)}\\ &\to 0,\end{split} \tag{10}\]
_as \(m\to\infty\)._
For this type of ansatz spaces, Theorem 5 provides error estimates for
\[|\inf_{G\in\mathbb{U}^{n,m}}J(G)-\inf_{G\in U_{\mathrm{ad}}}J(G)|\]
in terms of \(\varepsilon^{n}_{m}\) and the projection error \(\gamma_{n}\) (11).
### Universal Ansatz Spaces
We start by constructing an ansatz space that satisfies a universal approximation property with respect to \(G^{*}\). For \(n\in\mathbb{N}\), we consider the function \(g^{n}:[0,T]\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) given by
\[g^{n}_{k}(t,u_{1},\ldots,u_{n})=\langle G^{*}(t,\sum_{j=1}^{n}u_{j}e_{j}),e_{k }\rangle_{L^{2}(\Lambda)},\quad k=1,\ldots,n.\]
Since \(G^{*}\) is continuous, the functions \((g^{n})_{n\in\mathbb{N}}\) are also continuous. In particular, it is possible to approximate these functions by simpler functions that can be treated numerically, e.g., artificial neural networks. In the following we consider for any \(n\in\mathbb{N}\) a set \(\mathcal{N}^{n}\) of Lipschitz continuous approximations, such that for all \(R>0\) there exists a sequence \((\psi^{n}_{m})_{m\in\mathbb{N}}\subset\mathcal{N}^{n}\) with
\[\sup_{(t,x)\in[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,R)}|\psi^{n}_{m}(t,x)-g^{n}(t,x)|^{2}\to 0,\quad m\to\infty,\]
where
\[\mathcal{B}_{\mathbb{R}^{n}}(0,R):=\{x\in\mathbb{R}^{n}\,|\,|x|\leq R\}.\]
For a particular choice of \(\mathcal{N}^{n}\) we refer to our examples in Section VI. Then we define the ansatz space
\[\mathbb{U}:=\Big{\{}G(t,\sum_{j=0}^{\infty}u_{j}e_{j}):=\sum_{k=0 }^{n}\psi^{k}(t,\eta^{l}(u_{1},\ldots,u_{n}))e_{k}\\ \Big{|}\psi\in\mathcal{N}^{n},n,l\in\mathbb{N}\Big{\}},\]
where \(\eta^{l}:\mathbb{R}^{n}\to\mathbb{R}^{n}\)
\[\eta^{l}(x)=\begin{cases}x&|x|\leq l\\ l\frac{x}{|x|}&|x|>l\end{cases}\]
is a Lipschitz continuous cutoff function. It is not difficult to see that for any \(n\in\mathbb{N}\) there exists a sequence \((G^{n,m})_{m\in\mathbb{N}}\subset\mathbb{U}\) that satisfies a linear growth condition of the type
\[\|G^{n,m}(t,u)\|_{L^{2}(\Lambda)}\leq C_{n}(1+\|u\|_{L^{2}(\Lambda)}),\]
such that for any \(R>0\)
\[\lim_{m\to\infty}\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R)} \|G^{n}(t,u)-G^{n,m}(t,u)\|^{2}_{L^{2}(\Lambda)}=0,\]
where
\[\mathcal{B}_{L^{2}(\Lambda)}(0,R):=\{u\in L^{2}(\Lambda)\,|\,\|u\|_{L^{2}(\Lambda)}\leq R\}.\]
In particular, \(\mathbb{U}\) satisfies the universal approximation property with respect to \(G^{*}\). We will give a short proof for this in Section VI, where we consider artificial neural networks, mapping from \(\mathbb{R}^{n+1}\) to \(\mathbb{R}^{n}\), as an explicit example for Lipschitz continuous approximations \(\mathcal{N}^{n}\).
### Finite Dimensional Ansatz Spaces
Our second main result, Theorem 5, requires that the controls of our ansatz space take values in the Sobolev space \(H^{1}(\Lambda)\). Therefore we strengthen our assumption on the finite dimensional subspaces and assume that
\[S_{n}\subset H^{1}(\Lambda).\]
In order to obtain convergence rates, we also need to specify the rate of convergence of the orthogonal projections. Therefore, we assume that
\[\|P_{n}u-u\|_{L^{2}(\Lambda)}\leq\gamma_{n}\|u\|_{H^{1}(\Lambda)}, \tag{11}\]
for some \(\gamma_{n}\to 0\), as \(n\to\infty\). Furthermore, we assume that there exists an optimal control \(\mathfrak{g}^{*}\) in feedback form with a Lipschitz continuous feedback function \(G^{*}\). A simple example for such a situation is the linear quadratic case discussed in Example 1.
Since \(G^{*}\) is Lipschitz continuous, the functions \(g^{n}\) are also Lipschitz continuous. As we will see in the examples in Section VI, it is therefore possible to approximate \(g^{n}\) by artificial neural networks that have uniformly bounded Lipschitz constants. For any \(n\in\mathbb{N}\), let \((\mathcal{N}^{n,m})_{m\in\mathbb{N}}\) be a sequence of sets of Lipschitz continuous approximations, such that for all \(m\in\mathbb{N}\) there exists a \(\psi^{n,m}\in\mathcal{N}^{n,m}\) with Lipschitz constant independent of \(m\) and
\[\sup_{(t,x)\in[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,R^{n}_{m})}|\psi^{n,m}(t,x)-g^{n}(t,x)|^{2}=:\varepsilon^{n}_{m}\to 0,\]
as \(m\to\infty\), for some sequence \((R^{n}_{m})_{m\in\mathbb{N}}\) of radii with \(R^{n}_{m}\to\infty\), as \(m\to\infty\). Then we define the sequence of ansatz spaces of
dimension \(n\in\mathbb{N}\) by
\[\mathbb{U}^{n,m}:=\Big{\{}G(t,\sum_{j=0}^{\infty}u_{j}e_{j}):=\sum_{k =0}^{n}\psi_{k}(t,u_{1},\ldots,u_{n})e_{k}\\ \Big{|}\psi\in\mathcal{N}^{n,m}\Big{\}}.\]
It is not difficult to observe that for any \(m\in\mathbb{N}\), there exists a Lipschitz continuous control \(G^{n,m}\in\mathbb{U}^{n,m}\) with Lipschitz constant independent of \(m\), such that
\[\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R^{n}_{m})}\|P_{n}(G^{n,m}(t,u)-G^{n}(t,u))\|_{L^{2}(\Lambda)}^{2}\to 0.\]
In particular, for any \(n\in\mathbb{N}\), the sequence \((\mathbb{U}^{n,m})_{m\in\mathbb{N}}\) satisfies the Lipschitz approximation property with respect to \(G^{*}\) in dimension \(n\).
## IV Main approximation results
Our first main result, Theorem 4, provides an approximation result for universal ansatz spaces of the type constructed in Section III. In the second main result, Theorem 5, we provide error estimates for ansatz spaces of fixed dimension.
### Universal Approximation
The following result shows that we can reach the optimal cost of the control problem when we consider ansatz spaces of the type constructed in Section III.
**Theorem 4**: _Assume that \(\mathbb{U}\subset U_{ad}\) satisfies the universal approximation property with respect to \(G^{*}\). Then_
\[\inf_{\mathfrak{g}\in\mathbb{A}}J(\mathfrak{g})=\inf_{G\in\mathbb{U}}J(G).\]
### Approximation with Ansatz Spaces of Fixed Dimensions
In Section VI, we construct as a particular example our ansatz space using artificial neural networks. By Theorem 4 we can get arbitrarily close to the optimal cost of the original control problem. However, at this point we do not know how close we can get to the optimal cost using ansatz spaces of a fixed size. Our second main result, Theorem 5, provides error estimates for the distance between the cost of our approximation and the optimal cost. Similarly to Section III.2, we consider finite dimensional subspaces
\[S_{n}\subset H^{1}(\Lambda)\]
and assume that for any \(u\in H^{1}(\Lambda)\) the orthogonal projections satisfy
\[\|P_{n}u-u\|_{L^{2}(\Lambda)}\leq\gamma_{n}\|u\|_{H^{1}(\Lambda)},\]
for some \(\gamma_{n}\to 0\), as \(n\to\infty\). Furthermore we assume that there exists an optimal control \(\mathfrak{g}^{*}\) in feedback form with a Lipschitz continuous feedback function \(G^{*}\), as already mentioned in Section III.2.
**Theorem 5**: _Let \(n\in\mathbb{N}\). Let \((\mathbb{U}^{n,m})_{m\in\mathbb{N}}\) be a sequence of subsets of \(U_{ad}\) that satisfies the Lipschitz approximation property with respect to \(G^{*}\) in dimension \(n\). Then it holds_
\[\inf_{G\in\mathbb{U}^{n,m}}J(G)-\inf_{\mathfrak{g}\in\mathbb{A}}J(\mathfrak{g})\leq C\left(1+\sqrt{\mathbb{E}\left[\int_{0}^{T}\|\mathfrak{g}^{*}_{t}\|_{H^{1}(\Lambda)}^{2}\mathrm{d}t\right]}\right)\gamma_{n}+C_{n}\sqrt{\varepsilon^{n}_{m}},\]
_for some universal constant \(C\) independent of \(n\) and \(m\) and some constant \(C_{n}\) which is independent of \(m\). Here \(\varepsilon^{n}_{m}\) is given by (10)._
## V Gradient descent algorithm
In this section, we describe our gradient descent algorithm. Theorem 4 enables us to consider the approximating optimal control problem of minimizing the cost over a finite dimensional ansatz space \(\mathbb{U}^{n,m}\). We assume that
\[\mathbb{U}^{n,m}=\Big{\{}\Phi(\cdot,\cdot,\alpha)\Big{|}\alpha\in\mathbb{R}^{d _{m}}\Big{\}}\]
for a parametrization \(\Phi:[0,T]\times L^{2}(\Lambda)\times\mathbb{R}^{d_{m}}\to L^{2}(\Lambda)\). Replacing \(G(t,u)=\Phi(t,u,\alpha)\)
for some given \(\alpha\in\mathbb{R}^{d_{m}}\) leads to the state equation
\[\begin{cases}\mathrm{d}u^{\alpha}_{t}=[\Delta u^{\alpha}_{t}+\mathcal{F}(u^{ \alpha}_{t})+\Phi(t,u^{\alpha}_{t},\alpha)]\mathrm{d}t+\sigma\mathrm{d}W_{t} \\ u^{\alpha}_{0}=u\in L^{2}(\Lambda),\end{cases} \tag{12}\]
and the cost functional \(J:\mathbb{R}^{d_{m}}\to\mathbb{R}\),
\[J(\alpha)=\mathbb{E}\bigg{[}\int_{0}^{T}\int_{\Lambda}l(t,x,u^{ \alpha}_{t}(x))\mathrm{d}x+\frac{\nu}{2}\|\Phi(t,u^{\alpha}_{t},\alpha)\|_{L^{ 2}(\Lambda)}^{2}\mathrm{d}t\\ +\int_{\Lambda}m(x,u^{\alpha}_{T}(x))\mathrm{d}x\bigg{]}.\]
Using this parametrization, we obtain under suitable regularity assumptions on \(\Phi\) the following representation of the gradient of the cost functional:
\[\nabla J(\alpha)\] \[=\mathbb{E}\left[\int_{0}^{T}\nu\Phi^{*}_{\alpha}(t,u^{\alpha}_{ t},\alpha)\Phi(t,u^{\alpha}_{t},\alpha)+\Phi^{*}_{\alpha}(t,u^{\alpha}_{t}, \alpha)p_{t}\mathrm{d}t\right], \tag{13}\]
where \(p\) is the solution of the so-called adjoint equation
\[\begin{cases}\mathrm{d}p_{t}=-[(\Delta+\mathcal{F}^{*}(u^{\alpha}_{t})+\Phi^{ *}_{u}(t,u^{\alpha}_{t},\alpha))p_{t}+\mathcal{L}^{\prime}(t,u^{\alpha}_{t})\\ +\nu\Phi^{*}_{u}(t,u^{\alpha}_{t},\alpha)\Phi(t,u^{\alpha}_{t},\alpha)]\mathrm{d} t\\ p_{T}=\mathcal{M}^{\prime}(u^{\alpha}_{T}).\end{cases} \tag{14}\]
Here
\[\mathcal{L}(t,u)(x):=l(t,x,u(x)),\quad u\in L^{2}(\Lambda),(t,x)\in[0,T]\times\Lambda\]
denotes the Nemytskii operator associated with \(l\), and
\[\mathcal{M}(u)(x):=m(x,u(x)),\quad u\in L^{2}(\Lambda),x\in\Lambda\]
denotes the Nemytskii operator associated with \(m\). Furthermore, \(\Phi_{\alpha}(t,u,\alpha):\mathbb{R}^{d_{m}}\to L^{2}(\Lambda)\) (resp. \(\Phi_{u}(t,u,\alpha):L^{2}(\Lambda)\to L^{2}(\Lambda)\)) denotes the derivative of \(\Phi\) with respect to \(\alpha\) (resp. \(u\)), and \(\Phi^{*}_{\alpha}(t,u,\alpha)\) (resp. \(\Phi^{*}_{u}(t,u,\alpha)\)) denotes the adjoint of \(\Phi_{\alpha}(t,u,\alpha)\) (resp. \(\Phi_{u}(t,u,\alpha)\)). For a derivation of the gradient, see the supplemental material.
Note that equation (14) is a linear backward PDE with random coefficients which are given by the state equation (12). For details concerning the numerical implementation, see the supplemental material.
**Example 6** (Artificial Neural Network): _In the case of an artificial neural network with activator function \(\theta\), the controls in the ansatz space \(\mathbb{U}^{n,m}\) can be parametrized as_
\[\Phi(t,u,\alpha)=\sum_{i=1}^{n}\left(C\theta\left(A\begin{pmatrix}t\\ \pi_{n}u\end{pmatrix}+a\right)\right)_{i}e_{i},\]
_where \(\alpha=(A,a,C)\) consists of \(A\in\mathbb{R}^{k\times(n+1)}\), \(a\in\mathbb{R}^{k}\), and \(C\in\mathbb{R}^{n\times k}\), for respective dimensions \(n=n_{m}\) and \(k=k_{m}\), and \(\pi_{n}u=(\langle u,e_{1}\rangle,\ldots,\langle u,e_{n}\rangle)^{\top}\)._
Based on this representation, we implement the following algorithm:
**Algorithm 7**: _Fix an initial control \(\alpha_{0}\), a stopping criterion \(\rho>0\), and a step size \(s>0\); a code sketch of the following steps is given after the list._
1. _Solve the state equation (_12_) for one realization of the noise._
2. _Solve the adjoint equation (_14_) with the data given by the sample calculated in Step 1._
3. _Repeat Step 1 and Step 2 to approximate the gradient (_13_) using Monte Carlo approximation._
4. _Compute new control via_ \(\alpha_{n+1}=\alpha_{n}-s\nabla J(\alpha_{n})\)_._
5. _Stop if_ \(\|\nabla J(\alpha_{n})\|<\rho\)_._
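To make the abstract steps concrete, the following minimal Python sketch (our own illustration, not the authors' released code) instantiates Algorithm 7 for the stochastic heat equation (\(\mathcal{F}\equiv 0\), \(l(t,x,u)=\frac{1}{2}u^{2}\), \(m\equiv 0\), \(\nu=1\)) in spectral Galerkin coordinates, with the hypothetical linear parametrization \(\Phi(t,u,\alpha)=\alpha u\), for which \(\Phi_{\alpha}^{*}(t,u,\alpha)q=qu^{\top}\) and \(\Phi_{u}^{*}(t,u,\alpha)=\alpha^{\top}\); all numerical parameters are placeholders.

```python
import numpy as np

# Sketch of Algorithm 7 for du = [Laplacian u + Phi(t,u,alpha)] dt + sigma dW
# on (0,1) with Neumann b.c., in Fourier coordinates. Assumptions (ours):
# F == 0, l = 0.5*|u|^2, m = 0, nu = 1, and Phi(t,u,alpha) = alpha @ u, so that
# Phi_alpha^*(q) = np.outer(q, u) and Phi_u^* = alpha.T.
n, T, dt, sigma, nu = 16, 1.0, 1e-3, 0.05, 1.0
steps = int(T / dt)
lam = -(np.pi * np.arange(n)) ** 2            # Neumann Laplacian eigenvalues
u0 = np.ones(n)                               # initial Fourier coefficients

def forward(alpha, rng):
    """Euler-Maruyama for the state equation (12); returns the trajectory."""
    u = np.empty((steps + 1, n)); u[0] = u0
    for k in range(steps):
        drift = lam * u[k] + alpha @ u[k]
        u[k + 1] = u[k] + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return u

def backward(alpha, u):
    """Backward Euler for the adjoint equation (14) along the trajectory."""
    p = np.zeros((steps + 1, n))              # p_T = M'(u_T) = 0 here
    for k in range(steps, 0, -1):
        rhs = lam * p[k] + alpha.T @ p[k] + u[k] + nu * alpha.T @ (alpha @ u[k])
        p[k - 1] = p[k] + dt * rhs
    return p

def grad(alpha, n_samples=8, seed=0):
    """Monte Carlo estimate of the gradient representation (13)."""
    rng, g = np.random.default_rng(seed), np.zeros((n, n))
    for _ in range(n_samples):
        u = forward(alpha, rng); p = backward(alpha, u)
        g += dt * sum(np.outer(nu * (alpha @ u[k]) + p[k], u[k]) for k in range(steps))
    return g / n_samples

alpha, s = np.zeros((n, n)), 0.05             # initial control and step size
for it in range(50):                          # Steps 1-4 of Algorithm 7
    g = grad(alpha, seed=it)
    alpha -= s * g
    if np.linalg.norm(g) < 1e-3:              # stopping criterion (Step 5)
        break
```

In this linear-quadratic setting the resulting feedback can be compared against the Riccati benchmark of Section VIII.1.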
## VI Artificial Neural Networks as Ansatz Spaces
In this section, we discuss explicit examples for ansatz spaces \(\mathbb{U}\subset U_{\text{ad}}\) that satisfy the assumptions of our main results, Theorem 4 and Theorem 5, and are suitable for the numerical implementation of our gradient descent algorithm. In the first part of this section, we focus on universal spaces and Theorem 4. The second part is devoted to ansatz spaces of fixed size and Theorem 5 with corresponding convergence rates. Throughout this section, we consider the finite dimensional subspaces
\[S_{n}:=\text{span}\left\{1,\sqrt{2}\cos(k\pi\cdot)\Big{|}\,k=1,\ldots,n\right\} \subset H^{1}(0,1),\]
with orthonormal basis in \(L^{2}(\Lambda)\)
\[e_{0}=1,\quad e_{k}=\sqrt{2}\cos(k\pi\cdot),\quad k=1,\ldots,n.\]
In particular we have for any \(u\in H^{1}(0,1)\)
\[\|P_{n}u-u\|_{L^{2}(0,1)}^{2} =\sum_{k=n+1}^{\infty}|\langle u,\sqrt{2}\cos(k\pi\cdot)\rangle_{L^{2}(0,1)}|^{2}\] \[\leq\frac{1}{n^{2}\pi^{2}}\sum_{k=n+1}^{\infty}|\langle(-\Delta)^{1/2}u,\sqrt{2}\cos(k\pi\cdot)\rangle_{L^{2}(0,1)}|^{2}\] \[\leq\frac{1}{n^{2}\pi^{2}}\|u\|_{H^{1}(0,1)}^{2}=:\gamma_{n}\|u\|_{H^{1}(0,1)}^{2},\]
where we used that the \(k\)-th eigenvalue of the Neumann Laplace operator with respect to the eigenfunction \(e_{k}\) is given by \(-k^{2}\pi^{2}\).
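As a quick numerical sanity check of this projection estimate (our addition; the test function, grid and quadrature are arbitrary choices), one can compare the Parseval tail of the cosine coefficients with the bound \(\|u^{\prime}\|_{L^{2}}^{2}/(n\pi)^{2}\), which implies a fortiori the \(H^{1}\) bound in (11):

```python
import numpy as np

# Check: for smooth u on (0,1), the cosine coefficients beyond mode n satisfy
# ||P_n u - u||_{L^2}^2 <= ||u'||_{L^2}^2 / (n pi)^2.
x = np.linspace(0.0, 1.0, 20001)
u = np.exp(-(x - 0.3) ** 2 / 0.02)            # smooth test function

def coeff(k):
    e_k = np.sqrt(2.0) * np.cos(k * np.pi * x) if k > 0 else np.ones_like(x)
    return np.trapz(u * e_k, x)

K = 400                                        # truncation of the full series
c = np.array([coeff(k) for k in range(K + 1)])
du = np.gradient(u, x)
h1_seminorm_sq = np.trapz(du ** 2, x)          # ||u'||_{L^2(0,1)}^2

for n in [5, 10, 20, 40]:
    tail_sq = np.sum(c[n + 1:] ** 2)           # ||P_n u - u||^2 via Parseval
    bound = h1_seminorm_sq / (n * np.pi) ** 2
    print(n, tail_sq, bound, tail_sq <= bound)
```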
### Neural Network Approximation
Regarding Theorem 4, we will show in our first example that it is indeed sufficient to consider the type of ansatz space constructed in the first part of Section III, using 1-layer artificial neural networks for the approximating sets, to get arbitrarily close to the optimal cost. More precisely, we show that the set
\[\mathbb{U}:=\left\{G(t,\sum_{i=1}^{\infty}u_{i}e_{i})(x)=\sum_{i =1}^{n}\psi_{i}(t,\eta^{l}(u_{1},\ldots,u_{n}))e_{i}(x)\right.\\ \left.\left|\psi\in\mathcal{N}^{n},n,l\in\mathbb{N}\right\}\right.\]
satisfies the universal approximation property with respect to \(G^{*}\), where
\[\mathcal{N}^{n}:=\bigcup_{k=1}^{\infty}\mathcal{N}_{k}^{n}\]
and
\[\mathcal{N}_{k}^{n}:=\left\{\psi(x)=C\theta(Ax+a)\right.\\ \left.\big{|}\,A\in\mathbb{R}^{k\times n},a\in\mathbb{R}^{k},C \in\mathbb{R}^{n\times k}\right\}\]
denotes the set of all 1-layer artificial neural networks from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{n}\) with \(k\) neurons, for a given non-polynomial, Lipschitz continuous activator function \(\theta\).
To this end, we recall the following classical universal approximation result by [35]:
**Theorem 8**: _Let \(\theta\in\mathcal{C}(\mathbb{R})\), then we define for \(u=(u_{1},\ldots,u_{d})\in\mathbb{R}^{d}\)_
\[\theta(u)_{i}:=\theta(u_{i}).\]
_If \(\theta\) is not polynomial, then for any \(n,m\in\mathbb{N}\), compact set \(K\subset\mathbb{R}^{n}\), \(h\in\mathcal{C}(K,\mathbb{R}^{n})\) and \(\varepsilon>0\), there exists \(k\in\mathbb{N},A\in\mathbb{R}^{k\times n},a\in\mathbb{R}^{k},C\in\mathbb{R}^ {m\times k}\), such that_
\[\sup_{u\in K}|h(u)-\psi(u)|<\varepsilon,\]
_where \(\psi\) is the \(1\)-layer artificial neural network_
\[\psi(u):=C\theta(Au+a).\]
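As an illustration of Theorem 8, the following sketch (ours; the target function and the random-feature fit of \(C\) are illustrative choices, not the constructive proof of [35]) fits a 1-layer network \(\psi(u)=C\theta(Au+a)\) to a continuous function on a compact set:

```python
import numpy as np

# Approximate a continuous h : K -> R^n on a compact set by a 1-layer network
# psi(u) = C @ theta(A u + a). We fix random (A, a) and fit only C by least
# squares, a simple random-feature instance of the architecture.
rng = np.random.default_rng(0)
n, k = 2, 200                                  # input/output dim, neurons
theta = np.tanh                                # non-polynomial activator

h = lambda u: np.stack([np.sin(u[:, 0]) * u[:, 1], np.cos(u.sum(1))], axis=1)

U = rng.uniform(-1.0, 1.0, size=(5000, n))     # samples from K = [-1,1]^2
A = rng.standard_normal((k, n)); a = rng.standard_normal(k)
features = theta(U @ A.T + a)                  # hidden-layer outputs, (5000, k)
C, *_ = np.linalg.lstsq(features, h(U), rcond=None)

U_test = rng.uniform(-1.0, 1.0, size=(1000, n))
err = np.abs(theta(U_test @ A.T + a) @ C - h(U_test)).max()
print("sup-norm error on test samples:", err)
```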
Let \(n\in\mathbb{N}\) and recall from Section III the finitely based approximation of \(G^{*}\)
\[g^{n}:[0,T]\times\mathbb{R}^{n}\to\mathbb{R}^{n}\] \[g^{n}_{i}(t,u):=\langle G^{*}(t,\sum_{j=1}^{n}u_{j}e_{j}),e_{i} \rangle,\quad i=1,\ldots,n.\]
Since \(g^{n}\) is continuous, there exists for any \(m\in\mathbb{N}\) a \(1\)-layer artificial neural network \(\psi^{n,m}:[0,T]\times\mathbb{R}^{n}\to\mathbb{R}^{n}\), such that
\[\sup_{(t,u)\in[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,m)}|g^{n}(t,u)-\psi^{n,m}(t,u)|<\frac{1}{m}.\]
Since \(G^{*}\) satisfies a linear growth condition, i.e.,
\[\|G^{*}(t,u)\|_{L^{2}(\Lambda)}\leq C(1+\|u\|_{L^{2}(\Lambda)}),\]
for some \(C>0\), we have
\[|g^{n}(t,u)| =\|P_{n}G^{*}(t,\sum_{i=1}^{n}u_{i}e_{i})\|_{L^{2}(\Lambda)}\] \[\leq\|G^{*}(t,\sum_{i=1}^{n}u_{i}e_{i})\|_{L^{2}(\Lambda)}\] \[\leq C(1+|u|),\]
where \(u=(u_{1},\ldots,u_{n})\). Hence, if we consider the continuous function \(\eta^{m}:\mathbb{R}^{n}\to\mathbb{R}^{n}\)
\[\eta^{m}(x)=\begin{cases}x&|x|\leq m\\ m\frac{x}{|x|}&|x|>m\end{cases}\]
and define
\[\tilde{\psi}^{n,m}(t,u):=\psi^{n,m}(t,\eta^{m}(u)),\]
then clearly \(\tilde{\psi}^{n,m}=\psi^{n,m}\) on \([0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,m)\) and for any \((t,u)\in[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,m)\)
\[|\psi^{n,m}(t,u)| \leq|g^{n}(t,u)|+|\psi^{n,m}(t,u)-g^{n}(t,u)|\] \[\leq C(1+|u|)+1.\]
Furthermore, on \([0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,m)^{c}\) we have
\[|\tilde{\psi}^{n,m}(t,u)| =|\psi^{n,m}(t,m\frac{u}{|u|})|\] \[\leq|g^{n}(t,m\frac{u}{|u|})|+|\psi^{n,m}(t,m\frac{u}{|u|})-g^{n} (t,m\frac{u}{|u|})|\] \[\leq C(1+m)+1\] \[\leq C(1+|u|)+1.\]
Therefore \(\tilde{\psi}^{n,m}\) satisfies a linear growth condition with some constant \(C>0\) independent of \(m\). Now, we define
\[G^{n,m}(t,u):=\sum_{i=1}^{n}\tilde{\psi}^{n,m}_{i}(t,\langle u,e_{1}\rangle_{L^{2}(\Lambda)},\ldots,\langle u,e_{n}\rangle_{L^{2}(\Lambda)})e_{i}.\]
One can easily check that all the assumptions of Theorem 4 are satisfied for \(\mathbb{U}\). Indeed, due to the Lipschitz continuity of the elements in \(\mathcal{N}_{k}^{n}\), any control in \(\mathbb{U}\) is indeed admissible, i.e., \(\mathbb{U}\subset U_{\text{ad}}\). Furthermore, \((G^{n,m})_{m\in\mathbb{N}}\) is a sequence in \(\mathbb{U}\) that satisfies a linear growth condition with some constant \(C>0\) independent of \(m\) and approximates any finitely based \(G^{n}\). Indeed, for any \(R>0\) and any \(\varepsilon>0\) there exists an \(M\in\mathbb{N}\), such that \(\mathcal{B}_{L^{2}(\Lambda)}(0,R)\subset\mathcal{B}_{L^{2}(\Lambda)}(0,m)\) and \(\frac{1}{m}<\varepsilon\) for every \(m\geq M\). Therefore, we have for any \(m\geq M\)
\[\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R)}\|G^{n,m}(t,u)-G^{n}(t,u)\|^{2}\] \[\leq\sup_{(t,u)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,m)}\|\tilde{\psi}^{n,m}(t,u)-g^{n}(t,u)\|^{2}\] \[\leq\sup_{(t,u)\in[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,m)}|\psi^{n,m}(t,u)-g^{n}(t,u)|^{2}\] \[<\frac{1}{m}<\varepsilon.\]
In the case of bounded controls, for example if \(\mathcal{U}=\mathcal{B}_{L^{2}(\Lambda)}(0,R)\), we could consider the ansatz space
\[\mathbb{U}:=\bigg{\{}G(t,\sum_{i=1}^{\infty}u_{i}e_{i})(x)=\sum_{i=1}^{n}\psi_{i}(t,\eta^{l}(u_{1},\ldots,u_{n}))e_{i}(x)\\ \bigg{|}\,\|G(t,u)\|_{L^{2}(\Lambda)}\leq R,\text{ where }\psi\in\mathcal{N}^{n},n,l\in\mathbb{N}\bigg{\}}\]
and the sequence
\[G^{n,m}(t,u)\] \[:=\Big{(}1-\frac{1}{m(R+1)}\Big{)}\sum_{i=1}^{n}\tilde{\psi}^{n,m}_{i}(t,\langle u,e_{1}\rangle_{L^{2}(\Lambda)},\ldots,\langle u,e_{n}\rangle_{L^{2}(\Lambda)})e_{i}.\]
### Convergence Rates
Next, we provide explicit convergence rates for the sequence of approximating spaces
\[\mathbb{U}^{n,m}:=\Big{\{}G(t,\sum_{i=1}^{\infty}u_{i}e_{i})(x)=\sum _{i=1}^{n}\psi_{i}(t,(u_{1},\ldots,u_{n}))e_{i}(x)\\ \Big{|}\psi\in\mathcal{N}_{m}^{n}\Big{\}}\]
using Theorem 5. We will mainly follow the ideas of [36]. In the following we consider a \(2\pi\)-periodic activator function \(\theta:\mathbb{R}\to\mathbb{R}\) that satisfies \(\hat{\theta}:=\int_{-\pi}^{\pi}\theta(x)e^{-\mathrm{i}x}\,\mathrm{d}x\neq 0\). Recall the following result from [36]:
**Proposition 9**: _Let \(K,L_{1},L_{2}>0\) and \(h:[0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,R)\to\mathbb{R}^{n}\) be Lipschitz continuous in \((t,x)\) with \(\|h\|_{\mathcal{C}^{0}([0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,R))}\leq K\) and Lipschitz constant bounded by \(L_{1}\). Furthermore we assume that \(h\) is twice differentiable in \(x\) and \(\partial_{x_{i}}h\) is Lipschitz continuous with Lipschitz constant bounded by \(L_{2}\). Then there exists a constant \(C>0\) depending only on the above constants, on \(n,T\) and on the activator function \(\theta\) through \(\hat{\theta}\), \(\|\theta^{\prime}\|_{\mathcal{C}^{0}}\), \(\|\theta^{\prime\prime}\|_{\mathcal{C}^{0}}\) and \(\|\theta^{\prime\prime\prime}\|_{\mathcal{C}^{0}}\), and there exists a constant \(m_{0}\) depending only on \(n\) with the following property. For every \(R>0\) and every \(m_{n}>m_{0}\), there exists a one-hidden layer artificial neural network \(\psi_{h}\in\mathcal{N}_{m_{n}}^{n}\) such that_
\[\|h-\psi_{h}\|_{\mathcal{C}^{0}([0,T]\times\mathcal{B}_{\mathbb{R}^{n}}(0,R);\mathbb{R}^{n})}\leq C(1+R)m_{n}^{-1/(2(n+1))}\]
_and such that the Lipschitz constants of \(\psi_{h},\partial_{x}\psi_{h}\) are at most \(C(1+Rm_{n}^{-1/(2(n+1))})\)._
If we now set \(R^{n}(m):=m^{1/(3(n+1))}\), then by Proposition 9 there exists for any sufficiently large \(m\in\mathbb{N}\) a network \(\psi^{n,m}\in\mathcal{N}_{m}^{n}\), such that for
\[G^{n,m}(t,\sum_{i=1}^{\infty}u_{i}e_{i})(x)=\sum_{i=1}^{n}\psi_{i}^{n,m}(t,(u_ {1},\ldots,u_{n}))e_{i}(x),\]
it holds
\[\sup_{(t,x)\in[0,T]\times\mathcal{B}_{L^{2}(\Lambda)}(0,R^{n}(m))}\|P_{n}(G^{n,m}(t,x)-G^{n}(t,x))\|_{L^{2}(\Lambda)}^{2}\\ \leq m^{-1/(3(n+1))}\to 0.\]
Furthermore \(G^{n,m}\in\mathbb{U}^{n,m}\) is Lipschitz continuous with some Lipschitz constant independent of \(m\). Therefore, applying Theorem 5 yields
\[\inf_{G\in\mathbb{U}^{n,m}}J(G)-\inf_{\mathfrak{g}\in\mathbb{A}}J(\mathfrak{g})\\ \leq C\left(1+\sqrt{\mathbb{E}\left[\int_{0}^{T}\|\mathfrak{g}_{t}^{*}\|_{H^{1}(\Lambda)}^{2}\mathrm{d}t\right]}\right)\frac{1}{n^{2}\pi^{2}}+C_{n}\sqrt{m^{-1/(3(n+1))}}.\]
## VII Numerical discretization of the control problem
In order to implement a numerical algorithm, the control problem needs to be discretized. In this section, we introduce the discretized version of our control problem and provide a bound for the error resulting from the numerical discretization.
**Problem 10** (Discretized Control Problem): _Let \(n\in\mathbb{N}\). Minimize_
\[J^{n}(G):=\mathbb{E}\left[\int_{0}^{T}\int_{\Lambda}l(t,x,u_{t}^{G,n}(x)) \mathrm{d}x+\frac{\nu}{2}\|P_{n}G(t,u_{t}^{G,n})\|_{L^{2}(\Lambda)}^{2} \mathrm{d}t\right]\\ +\mathbb{E}\left[\int_{\Lambda}m(x,u_{T}^{G,n}(x))\mathrm{d}x\right]\]
_over the set \(U_{\text{ad}}\) subject to the discretized SPDE_
\[\begin{cases}\mathrm{d}u_{t}^{G,n}=[-\Delta_{n}u_{t}^{G,n}+P_{n}\mathcal{F}(u_ {t}^{G,n})+P_{n}G(t,u_{t}^{G,n})]\mathrm{d}t+\sigma P_{n}\mathrm{d}W_{t}\\ u_{0}^{G,n}=P_{n}u\in S_{n},\end{cases}\]
_where \(\Delta_{n}x_{n}\) is defined as the unique element in \(S_{n}\) with_
\[\left\langle\nabla x_{n},\nabla y_{n}\right\rangle_{L^{2}(\Lambda)}=\left\langle \Delta_{n}x_{n},y_{n}\right\rangle_{L^{2}(\Lambda)}\quad\forall y_{n}\in S_{n}.\]
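For intuition, here is a minimal sketch (ours, using a standard P1 finite element assembly whose scaling differs from the hat functions of Section VIII) of how \(\Delta_{n}\) can be realized in coefficients via mass and stiffness matrices:

```python
import numpy as np

# <grad x_n, grad y_n> = <Delta_n x_n, y_n> for all y_n in S_n gives, in
# coefficients, Delta_n = M^{-1} K with stiffness K_ij = int e_i' e_j' dx and
# mass M_ij = int e_i e_j dx. Delta_n is positive semi-definite, so the drift
# in Problem 10 contains -Delta_n u. Boundary handling is simplified here.
def assemble(n, L):
    h = L / n                                    # element size
    K = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    return K, M

n, L = 50, 20.0
K, M = assemble(n, L)
u = np.random.default_rng(1).standard_normal(n)  # coefficients of u_t^{G,n}
laplace_part = -np.linalg.solve(M, K @ u)        # -Delta_n u in coefficients
```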
**Proposition 11**: _Under the same assumptions as in Theorem 5, it holds_
\[\inf_{G\in\mathbb{U}^{n,m}}J^{n}(G)-\inf_{\mathfrak{g}\in\mathbb{A}}J(\mathfrak{g})\leq C\left(1+\sqrt{\mathbb{E}\left[\int_{0}^{T}\|\mathfrak{g}_{t}^{*}\|_{H^{1}(\Lambda)}^{2}\mathrm{d}t\right]}\right)\gamma_{n}+C_{n}\sqrt{\varepsilon_{m}^{n}},\]
_for some constant \(C_{n}>0\) that only depends on \(n\in\mathbb{N}\) and on the Lipschitz constant of \(G^{*}\)._
**Remark 12**: _Under additional assumptions, in particular convexity of the Hamiltonian, one can show that for any \(\mathbb{U}\subset U_{\text{ad}}\), it holds_
\[\inf_{\mathfrak{g}\in\mathbb{A}}J(\mathfrak{g})-\inf_{G\in\mathbb{U}}J^{n}(G) \leq C\gamma_{n},\]
_for some constant \(C>0\) which is independent of \(n\)._
## VIII Simulations
Throughout this section, we consider equation (2) on some interval \(\Lambda=(0,L)\) of length \(L\). For the approximation we consider the Galerkin finite dimensional subspace
\[S_{n}:=\text{span}\left\{1,\sqrt{\frac{2}{L}}\cos\left(\frac{k\pi}{L}\cdot\right)\Big{|}\,k=1,\ldots,n\right\}\subset H^{1}(0,L),\]
with orthonormal basis
\[e_{0}=\sqrt{\frac{1}{L}},\quad e_{k}=\sqrt{\frac{2}{L}}\cos\left(\frac{k\pi}{L}\cdot\right),\quad k=1,\ldots,n,\]
or the finite element subspace
\[\overline{S}_{n}:=\text{span}\left\{\overline{e}_{k}|\,k=1,\ldots,n\right\}\subset H ^{1}(0,L),\]
with basis
\[\overline{e}_{k}(x)=\begin{cases}n\left(x-\frac{k-1}{n}L\right)&\text{if }x\in\left[\frac{k-1}{n}L,\frac{k}{n}L\right]\\ n\left(\frac{k+1}{n}L-x\right)&\text{if }x\in\left[\frac{k}{n}L,\frac{k+1}{n}L \right]\\ 0&\text{otherwise}\end{cases}\]
for \(k=1,\ldots,n\).
### Heat Equation
In this subsection, we consider the controlled stochastic heat equation in order to validate our algorithm by comparing with the optimal feedback control obtained from the associated Riccati equation, see Example 1. The controlled state equation is given by
\[\begin{cases}\mathrm{d}u_{t}^{\mathfrak{g}}=[\Delta u_{t}^{\mathfrak{g}}+\mathfrak{g}_{t}]\mathrm{d}t+0.05\mathrm{d}W_{t},\quad t\in[0,20]\\ u_{0}^{\mathfrak{g}}=u\in L^{2}(0,20),\end{cases} \tag{15}\]
where \(u=\mathbf{1}_{[20/3,40/3]}\). We consider the problem of steering the solution of the stochastic heat equation into the constant zero profile. To this end, we introduce the cost functional
\[J(\mathfrak{g})=\frac{1}{2}\mathbb{E}\left[\int_{0}^{20}\|u_{t}^{\mathfrak{g}}\|_{L^{2}(0,20)}^{2}+\|\mathfrak{g}_{t}\|_{L^{2}(0,20)}^{2}\mathrm{d}t\right].\]
Note that the second term is a regularization, which is necessary in linear quadratic control theory. We approximate the Riccati equation (8) numerically based on 400 Fourier coefficients to obtain the approximated optimal feedback control
\[\mathfrak{g}_{t}^{\text{Ric}}=P^{n}(t)u_{t}^{\mathfrak{g}^{\text{Ric}}}\]
and use this approximation as a benchmark for our gradient descent algorithm.
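A minimal sketch (ours; the time step, the semi-implicit scheme and the sign convention \(\mathfrak{g}_{t}=-P(t)u_{t}\) are our assumptions, not taken from the paper) of this Riccati benchmark in Fourier coordinates, where the Riccati equation decouples into scalar backward ODEs:

```python
import numpy as np

# For the stochastic heat equation with the quadratic cost above, the Riccati
# equation (8) decouples in Fourier coordinates into scalar backward ODEs
#     -dp_k/dt = 2*lam_k*p_k - p_k**2 + 1,   p_k(T) = 0,
# and (with our sign convention) the benchmark feedback is g_t = -P(t) u_t
# with P(t) = diag(p_k(t)).
T, L, n_modes, dt = 20.0, 20.0, 400, 1e-2
steps = int(T / dt)
lam = -(np.pi * np.arange(n_modes) / L) ** 2   # Neumann Laplacian eigenvalues

p = np.zeros((steps + 1, n_modes))             # p[k] approximates P(t_k)
for k in range(steps, 0, -1):
    # semi-implicit Euler backward in time: stiff linear term taken implicitly
    p[k - 1] = (p[k] + dt * (1.0 - p[k] ** 2)) / (1.0 - 2.0 * lam * dt)

def riccati_feedback(t, u_coeffs):
    """Benchmark feedback in Fourier coordinates at time t."""
    k = min(int(t / dt), steps)
    return -p[k] * u_coeffs
```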
For the approximation of the optimal control, we use the ansatz space
\[\mathbb{U}^{n,k_{1}+k_{2}}:=\bigg{\{}G(t,\sum_{i=1}^{\infty}u_{i}e_{i})(x)=\sum_{i=1}^{n}\psi_{i}(t,(u_{1},\ldots,u_{n}))e_{i}(x)\\ \bigg{|}\psi\in\mathcal{N}_{k_{1},k_{2}}^{n}\bigg{\}}, \tag{16}\]
for \(k=k_{1}=k_{2}=100\), where
\[\mathcal{N}_{k,k}^{n}:=\bigg{\{}\psi(t,u)=C\theta\left(B\theta\left(A\begin{pmatrix}t\\ u\end{pmatrix}+a\right)+b\right)\\ \bigg{|}A\in\mathbb{R}^{k\times(n+1)},B\in\mathbb{R}^{k\times k},C\in\mathbb{R}^{n\times k},a,b\in\mathbb{R}^{k}\bigg{\}}\]
denotes the set of all 2-layer neural networks with \(k\) neurons in the first and second layer, and ReLU activator function \(\theta\). After about \(20,000\) iterations of our stochastic gradient descent algorithm, we end up with an approximated cost of \(J\approx 5.3\), and an approximated \(L^{2}\)-distance of our approximation \(\mathfrak{g}_{t}^{\text{approx}}=G^{\text{approx}}(t,u_{t}^{G^{\text{approx}}})\) to the optimal control \(\mathfrak{g}^{\text{Ric}}\) given by
\[\mathbb{E}\left[\int_{0}^{20}\|\mathfrak{g}_{t}^{\text{Ric}}-\mathfrak{g}_{t} ^{\text{approx}}\|_{L^{2}(\Lambda)}^{2}\text{d}t\right]\approx 0.05.\]
Below, we display our simulation results. Figure 1 displays one realization of the uncontrolled stochastic heat equation. Figure 2 displays our neural network approximation of the optimal feedback control, and Figure 3 shows its impact when applied to the system (15). Figure 4 displays the optimal control obtained using the Riccati equation, and shows that our approximated feedback control is indeed qualitatively close.
### \(L^{2}\)-Feedback Control of the Nagumo Equation
In this example, we apply our algorithm to the controlled stochastic Nagumo equation
\[\begin{cases}\mathrm{d}u_{t}^{G}=\big{[}\Delta u_{t}^{G}-u_{t}^{G}(u_{t}^{G}- \frac{1}{2})(u_{t}^{G}-1)+G(t,u_{t}^{G})\big{]}\mathrm{d}t+0.05\mathrm{d}W_{t} \\ u_{0}^{G}=u\in L^{2}(0,20),\end{cases} \tag{17}\]
where \(u=\mathbf{1}_{[5,15]}\). We consider the problem of stabilizing a bump profile given by the solution to the uncontrolled deterministic Nagumo equation, i.e., equation (17) with \(G\equiv 0\) and without noise, see Figure 5. To this end, we introduce the cost functional
\[J(G) =\mathbb{E}\left[\int_{0}^{100}\|u_{t}^{G}-u_{t}^{0}\|_{L^{2}(0, 20)}^{2}+\frac{1}{2}\|G(t,u_{t}^{G})\|_{L^{2}(0,20)}^{2}\mathrm{d}t\right]\] \[\quad+\mathbb{E}\left[\|u_{100}^{G}-u_{100}^{0}\|_{L^{2}(0,20)}^ {2}\right]. \tag{18}\]
We consider again the ansatz space (16) with the same parameters as in the previous example. In this case, our optimal control achieves an approximated cost of \(J\approx 8.1\).
Figures 6 and 7 display two realizations of the uncontrolled stochastic Nagumo equation (17), i.e., \(G\equiv 0\). Without control, the bump is unstable and the noise pushes the solution to one of the stable steady states \(u\equiv 0\) or \(u\equiv 1\). Figure 8 displays one realization of the approximated optimal control and shows that the control mostly acts on the interface, but also reacts to the noise in the system. Figure 9 illustrates the impact of the approximated optimal control when applied to the system (17); it shows that our feedback control indeed stabilizes the bump.
### Nemytskii Feedback Control of the Nagumo Equation
In our final example, we consider again the controlled stochastic Nagumo equation (17) with the same cost functional (18). However, now we only consider feedback controls of Nemytskii-type, i.e., the control \(G\) is a Nemytskii operator
\[G(t,u)(x)=g(t,x,u(x))\]
for some function \(g:[0,100]\times[0,20]\times\mathbb{R}\to\mathbb{R}\). This means that the control at point \(x\in\Lambda\) depends on \(u(x)\), the value of the solution at point \(x\), but not on the whole function \(u\in L^{2}(\Lambda)\) as in the previous example. This restriction to feedback controls of Nemytskii-type significantly reduces the computational complexity and therefore leads to a more efficient approximation.
For the approximation of the optimal control, we consider the following ansatz space of Gaussian radial basis function neural networks with \(m=40\) neurons:
\[\mathbb{U}^{n,m}:=\bigg{\{}G(t,u)(x)=\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{k=1}^{r}\alpha_{ijk}\mathds{1}_{[t_{k-1},t_{k})}(t)e^{-\kappa|u(x)-\overline{u}_{j}|^{2}}\overline{e}_{i}(x)\\ \bigg{|}\alpha_{ijk}\in\mathbb{R},\overline{u}_{j}\in\mathbb{R}\bigg{\}},\]
where \(\{0=t_{0}<t_{1}<\ldots<t_{r}=100\}\) and \(\kappa=6\). In this case, our optimal control achieves an approximated cost of \(J\approx 1.25\). Similar to the approximated control of Section VIII.2, Figure 10 shows that the control again mostly acts on the interface and also reacts to the noise in the system (compare with Figure 8). Figure 11 shows that the Nemytskii feedback control achieves a better result than the \(L^{2}\)-feedback control from Subsection VIII.2 (compare with Figure 9). Figure 12 displays the approximation of the feedback function \((x,u)\mapsto g(t,x,u)\) at time \(t=40\). Observe that the feedback function is only trained for profiles arising in simulations. In particular, the feedback function is not trained in the middle of the bump (\(x\approx 10\)) for \(u\approx 0\).
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
Wilhelm Stannat: Mathematical theory (equal). Alexander Vogler: Mathematical theory (equal); Software. Lukas Wessels: Mathematical theory (equal).
## Data Availability
Software and simulation output are freely available on GitHub at [https://github.com/AVoglerTu/SFB91OFeedback](https://github.com/AVoglerTu/SFB91OFeedback).
## Appendix A Derivation of the Gradient of the Cost Functional
As discussed in Section V, we now consider the following approximating optimal control problem: Minimize
\[J(\alpha)=\mathbb{E}\Big{[}\int_{0}^{T}\int_{\Lambda}l(t,x,u_{t} ^{\alpha}(x))\mathrm{d}x+\frac{\nu}{2}\|\Phi(t,u_{t}^{\alpha},\alpha)\|_{L^{2} (\Lambda)}^{2}\mathrm{d}t\\ +\int_{\Lambda}m(x,u_{T}^{\alpha}(x))\mathrm{d}x\Big{]}.\] (A.1)
subject to
\[\begin{cases}\mathrm{d}u_{t}^{\alpha}=[\Delta u_{t}^{\alpha}+\mathcal{F}(u_{t }^{\alpha})+\Phi(t,u_{t}^{\alpha},\alpha)]\mathrm{d}t+\sigma\mathrm{d}W_{t} \\ u_{0}^{\alpha}=u\in L^{2}(\Lambda).\end{cases}\] (A.2)
In the following derivation of the gradient of the cost functional, we assume that \(l\) and \(m\) are differentiable with respect to \(u\), \(\mathcal{F}\) is differentiable, and \(\Phi\) is differentiable with respect to \(u\) and \(\alpha\). The derivation of the gradient of the cost functional follows along the same lines as the derivation in [32]; however, in the present setting, the control enters the state equation (A.2) in a nonlinear fashion, which requires slight modifications of the arguments. We begin by differentiating the cost functional in some direction \(\beta\in\mathbb{R}^{d_{m}}\), which yields
\[\frac{\partial J(\alpha)}{\partial\beta}\] (A.3) \[=\mathbb{E}\Big{[}\int_{0}^{T}\langle\mathcal{L}^{\prime}(t,u_{t}^{\alpha}),y_{t}^{\beta}\rangle+\nu\langle\Phi(t,u_{t}^{\alpha},\alpha),\Phi_{\alpha}(t,u_{t}^{\alpha},\alpha)\beta\rangle\] \[+\nu\langle\Phi(t,u_{t}^{\alpha},\alpha),\Phi_{u}(t,u_{t}^{\alpha},\alpha)y_{t}^{\beta}\rangle\mathrm{d}t+\langle\mathcal{M}^{\prime}(u_{T}^{\alpha}),y_{T}^{\beta}\rangle\Big{]},\]
where \(y^{\beta}\) is the solution of the linearized state equation
\[\begin{cases}\mathrm{d}y_{t}^{\beta}=[\Delta y_{t}^{\beta}+\mathcal{F}^{\prime}(u_{t}^{\alpha})y_{t}^{\beta}+\Phi_{u}(t,u_{t}^{\alpha},\alpha)y_{t}^{\beta}\\ \hskip 113.811024pt+\Phi_{\alpha}(t,u_{t}^{\alpha},\alpha)\beta]\mathrm{d}t\\ y_{0}^{\beta}=0.\end{cases}\]
Next, we introduce the adjoint state \(p\) as the solution of the adjoint equation
\[\begin{cases}\mathrm{d}p_{t}=-[(\Delta+\mathcal{F}^{\prime}(u_{t}^{\alpha})+ \Phi_{u}^{*}(t,u_{t}^{\alpha},\alpha))p_{t}+\mathcal{L}^{\prime}(t,u_{t}^{ \alpha})\\ +\nu\Phi_{u}^{*}(t,u_{t}^{\alpha},\alpha)\Phi(t,u_{t}^{\alpha},\alpha)] \mathrm{d}t\\ p_{T}=\mathcal{M}^{\prime}(u_{T}^{\alpha}).\end{cases}\]
A straightforward computation leads to the following adjoint state property:
\[\begin{split}\mathrm{d}\langle y_{t}^{\beta},p_{t}\rangle&=\langle y_{t}^{\beta},\mathrm{d}p_{t}\rangle+\langle p_{t},\mathrm{d}y_{t}^{\beta}\rangle\\ &=\big{[}-\nu\langle\Phi_{u}^{*}(t,u_{t}^{\alpha},\alpha)\Phi(t,u_{t}^{\alpha},\alpha),y_{t}^{\beta}\rangle-\langle\mathcal{L}^{\prime}(t,u_{t}^{\alpha}),y_{t}^{\beta}\rangle\\ &\hskip 56.905512pt+\langle\Phi_{\alpha}(t,u_{t}^{\alpha},\alpha)\beta,p_{t}\rangle\big{]}\mathrm{d}t.\end{split}\]
Integrating over \([0,T]\) and taking expectations, we obtain from equation (A.3) the desired representation of the gradient of the cost functional:
\[\begin{split}&\nabla J(\alpha)\\ &=\mathbb{E}\left[\int_{0}^{T}\nu\Phi_{\alpha}^{*}(t,u_{t}^{\alpha}, \alpha)\Phi(t,u_{t}^{\alpha},\alpha)+\Phi_{\alpha}^{*}(t,u_{t}^{\alpha},\alpha )p_{t}\mathrm{d}t\right].\end{split}\]
|
2303.15487 | Knowledge Enhanced Graph Neural Networks for Graph Completion | Graph data is omnipresent and has a wide variety of applications, such as in
natural science, social networks, or the semantic web. However, while being
rich in information, graphs are often noisy and incomplete. As a result, graph
completion tasks, such as node classification or link prediction, have gained
attention. On one hand, neural methods, such as graph neural networks, have
proven to be robust tools for learning rich representations of noisy graphs. On
the other hand, symbolic methods enable exact reasoning on graphs.We propose
Knowledge Enhanced Graph Neural Networks (KeGNN), a neuro-symbolic framework
for graph completion that combines both paradigms as it allows for the
integration of prior knowledge into a graph neural network model.Essentially,
KeGNN consists of a graph neural network as a base upon which knowledge
enhancement layers are stacked with the goal of refining predictions with
respect to prior knowledge.We instantiate KeGNN in conjunction with two
state-of-the-art graph neural networks, Graph Convolutional Networks and Graph
Attention Networks, and evaluate KeGNN on multiple benchmark datasets for node
classification. | Luisa Werner, Nabil Layaïda, Pierre Genevès, Sarah Chlyah | 2023-03-27T07:53:43Z | http://arxiv.org/abs/2303.15487v3 | # Knowledge Enhanced Graph Neural Networks
###### Abstract
Graph data is omnipresent and has a wide variety of applications, such as in natural science, social networks, or the semantic web. However, while being rich in information, graphs are often noisy and incomplete. As a result, graph completion tasks, such as node classification or link prediction, have gained attention. On one hand, neural methods, such as graph neural networks, have proven to be robust tools for learning rich representations of noisy graphs. On the other hand, symbolic methods enable exact reasoning on graphs. We propose Knowledge Enhanced Graph Neural Networks (KeGNN), a neuro-symbolic framework for graph completion that combines both paradigms as it allows for the integration of prior knowledge into a graph neural network model. Essentially, KeGNN consists of a graph neural network as a base upon which knowledge enhancement layers are stacked with the goal of refining predictions with respect to prior knowledge. We instantiate KeGNN in conjunction with two state-of-the-art graph neural networks, Graph Convolutional Networks and Graph Attention Networks, and evaluate KeGNN on multiple benchmark datasets for node classification.
neuro-symbolic integration, graph neural networks, relational learning, knowledge graphs, fuzzy logic
## I Introduction
Graphs are ubiquitous across diverse real-world applications such as e-commerce [1], natural science [2] or social networks [3]. Graphs connect nodes by edges and allow them to be enriched with features. This makes them a versatile and powerful data structure that encodes relational information. As graphs are often derived from noisy data, incompleteness and errors are common issues. Consequently, graph completion tasks such as node classification or link prediction have become increasingly important. These tasks are approached from different directions. In the field of deep learning, research on graph neural networks (GNNs) has gained momentum. Numerous models have been proposed for various graph topologies and applications [4][5][6][7]. The key strength of GNNs is to find meaningful representations of noisy graph data, which can be used to improve prediction tasks [8]. Despite this advantage, as a subcategory of deep learning methods, GNNs are criticized for their limited interpretability and large data consumption [9]. In parallel, the research field of symbolic AI addresses the above-mentioned tasks. In symbolic AI, solutions are found by performing logic-like reasoning steps that are exact, interpretable and data-efficient [10]. For large graphs, however, symbolic methods are often computationally expensive or even infeasible. Since techniques from deep learning and from symbolic AI have complementary pros and cons, the field of neuro-symbolic AI aims to combine both paradigms. Neuro-symbolic AI not only paves the way towards the application of AI to learning with limited data, but also allows for jointly using symbolic information (in the form of logical rules) and sub-symbolic information (in the form of real-valued data). This helps to overcome the black-box nature of deep learning methods and to improve interpretability through symbolic representations [11][12][9].
In this work, we present the neuro-symbolic approach Knowledge enhanced Graph Neural Networks (KeGNN) to conduct node classification given graph data and a set of prior knowledge. In KeGNN, knowledge enhancement layers are stacked on top of a GNN and adjust its predictions in order to increase the satisfaction of some prior knowledge. In addition to the parameters of the GNN, the knowledge enhancement layers contain learnable clause weights that reflect the impact of the prior knowledge on the predictions. Both components form an end-to-end differentiable model.
In this work, we instantiate KeGNN in conjunction with two popular GNNs: Graph Attention Networks [13] and Graph Convolutional Networks [14]. We apply KeGNN to the benchmark datasets for node classification Cora, Citeseer, PubMed [15] and Flickr [16].
## II Method: KeGNN
KeGNN is a neuro-symbolic approach that can be applied to node classification tasks with the capacity of handling graph structure at the base neural network level. The model takes two types of input: (1) real-valued graph data and (2) prior knowledge expressed in first-order logic.
### _Graph-structured Data_
A Graph \(\mathbf{G}=(\mathbf{N},\mathbf{E})\) consists of a set of \(n\) nodes \(\mathbf{N}\) and a set of \(k\) edges \(\mathbf{E}\) where each edge of the form \((v_{i},v_{j})\) connects two nodes \(v_{i}\in\mathbf{N}\) and \(v_{j}\in\mathbf{N}\). The neighborhood \(\mathcal{N}(v_{i})\) describes the set of first-order neighbors of \(v_{i}\). For an _attributed_ and _labelled_ graph, nodes are enriched with features and labels. Each node has a feature vector \(\mathbf{x}\in\mathbb{R}^{d}\) of dimension \(d\) and a label vector \(\mathbf{y}\in\mathbb{R}^{m}\). The label vector \(\mathbf{y}\) contains one-hot encoded ground truth labels for \(m\) classes. In matrix notation, the features and labels of the entire graph are described as \(\mathbf{X}\in\mathbb{R}^{n\times d}\) and \(\mathbf{Y}\in\mathbb{R}^{n\times m}\). A graph is _typed_ if type functions \(f_{\mathbf{E}}\) and \(f_{\mathbf{N}}\) assign edge types and node types to the edges and nodes, respectively. A graph with constant type functions (that assign the same edge and node type to all edges
and nodes) is called _homogeneous_, whereas for _heterogeneous_ graphs, nodes and edges may have different types [5].
**Example II.1**: _A Citation Graph \(\mathbf{G}_{\mathrm{Cit}}\) consists of documents and citations. Fig. 1 shows an extract of the Citeseer citation graph that is used as a running example throughout this paper. The documents are represented by a set of nodes \(\mathbf{N}_{\mathrm{Cit}}\) and citations by a set of edges \(\mathbf{E}_{\mathrm{Cit}}\). Documents can be attributed with features \(\mathbf{X}_{\mathrm{Cit}}\) that describe their content as Word2Vec [17] vectors. Each node is labelled with one of the six topic categories \(\{\mathrm{AI}\), \(\mathrm{DB}\), \(\mathrm{HCI}\), \(\mathrm{IR}\), \(\mathrm{ML}\), \(\mathrm{AG}\}\)1 that are encoded in \(\mathbf{Y}_{\mathrm{Cit}}\). Since all nodes (documents) and edges (citations) have the same type, \(\mathbf{G}_{\mathrm{Cit}}\) is homogeneous._
Footnote 1: The classes are abbreviations for the categories _Artificial Intelligence, Databases, Human-Computer Interaction, Information Retrieval, Machine Learning and Agents_.
### _Prior Knowledge_
Some prior knowledge \(\mathcal{K}\) is provided to KeGNN. It can be described as a set of \(\ell\) logical clauses expressed in the logical language \(\mathcal{L}\) that is defined as sets of constants \(\mathcal{C}\), variables \(\mathcal{X}\) and predicates \(\mathcal{P}\). Predicates have an arity \(r\) of one (unary) or two (binary): \(\mathcal{P}=\mathcal{P}_{U}\,\cup\,\mathcal{P}_{B}\). Predicates of arity \(r>2\) are not considered in this work. Unary predicates express properties, whereas binary predicates express relations. \(\mathcal{L}\) supports negation (\(\neg\)) and disjunction (\(\vee\)). Each clause \(\varphi\in\mathcal{K}=\{\varphi_{1},\ldots,\varphi_{\ell}\}\) can be formulated as a disjunction of (possibly negated) atoms \(\bigvee_{j=1}^{q}o_{j}\) with \(q\) atoms \(\{o_{1},\ldots,o_{q}\}\). Since the prior knowledge is general, all clauses are assumed to be universally quantified. Clauses can be _grounded_ by assigning constants to the free variables. A grounded clause is denoted as \(\varphi[x_{1},x_{2},...|c_{1},c_{2},...]\) with variables \(x_{i}\in\mathcal{X}\) and constants \(c_{i}\in\mathcal{C}\). The set of all grounded clauses in a graph is \(\mathcal{G}(\mathcal{K},\mathcal{C})\).
**Example II.2**: _The graph \(\mathbf{G}_{\mathrm{Cit}}\) in Fig. 1 can be expressed in \(\mathcal{L}\). Nodes are represented by a set of constants \(\mathcal{C}=\{a,b,\ldots,f\}\). Node labels are expressed as a set of unary predicates \(\mathcal{P}_{U}=\{\mathrm{AI},\mathrm{DB},\ldots,\mathrm{AG}\}\) and edges as a set of binary predicates \(\mathcal{P}_{B}=\{\mathrm{Cite}\}\). \(\mathcal{L}\) has a set of variables \(\mathcal{X}=\{x,y\}\). The atom \(\mathrm{AI}(\mathrm{x})\), for example, expresses the membership of \(x\) to the class \(\mathrm{AI}\) and \(\mathrm{Cite}(\mathrm{x},\mathrm{y})\) expresses the existence of a citation between \(x\) and \(y\). Some prior knowledge \(\mathcal{K}\) can be written as a set of \(\ell=6\) disjunctive clauses in \(\mathcal{L}\). Here, the assumption is denoted that two papers that cite each other have the same document class:_
\[\varphi_{\mathrm{AI}}:\forall xy\neg\mathrm{AI}(\mathrm{x})\vee \neg\mathrm{Cite}(\mathrm{x},\mathrm{y})\vee\mathrm{AI}(\mathrm{y})\] \[\varphi_{\mathrm{DB}}:\forall xy\neg\mathrm{DB}(\mathrm{x})\vee \neg\mathrm{Cite}(\mathrm{x},\mathrm{y})\vee\mathrm{DB}(\mathrm{y})\] \[\ldots\]
_The atoms are grounded by replacing the variables \(x\) and \(y\) with the constants \(\{a,b,\ldots f\}\) to obtain the sets of unary groundings \(\{\mathrm{AI}(\mathrm{a}),\mathrm{ML}(\mathrm{b}),\ldots,\mathrm{IR}(\mathrm{f})\}\) and binary groundings \(\{\mathrm{Cite}(\mathrm{a},\mathrm{d}),\mathrm{Cite}(\mathrm{a},\mathrm{e}), \ldots,\mathrm{Cite}(\mathrm{a},\mathrm{f})\}\). Assuming a closed world and exclusive classes, other facts could be derived, such as \(\{\neg\mathrm{DB}(\mathrm{a}),\neg\mathrm{IR}(\mathrm{a}),\ldots,\neg\mathrm{ Cite}(\mathrm{a},\mathrm{b})\}\). For the sake of simplicity, these are omitted here._
### _Node Classification_
Node classification is a subtask of knowledge graph completion on a graph \(\mathbf{G}\) with the objective to assign classes to nodes where they are unknown. This task is accomplished given node features \(\mathbf{X}\), edges \(\mathbf{E}\) and some prior knowledge \(\mathcal{K}\) encoded as a set of clauses in \(\mathcal{L}\). A predictive model is trained on a subset of the graph \(\mathbf{G}_{\mathrm{train}}\) with ground truth labels \(\mathbf{Y}_{\mathrm{train}}\) and validated on a test set \(\mathbf{G}_{\mathrm{test}}\) for which the ground truth labels are compared to the predictions in order to assess the predictive performance. Node classification can be studied in a _transductive_ or _inductive_ setting. In a transductive setting, the entire graph is available for training, but the true labels of the test nodes are masked. In an inductive setting, only the nodes in the training set and the edges connecting them are available, making it more challenging to classify unseen nodes.
### _Fuzzy Semantics_
Fig. (1) Example extract of the Citeseer citation graph.

Let us consider an attributed and labelled graph \(\mathbf{G}\) and the prior knowledge \(\mathcal{K}\). While \(\mathcal{K}\) can be defined in the logical language \(\mathcal{L}\), the neural component in KeGNN relies on continuous and differentiable representations. To interpret Boolean logic in the real-valued domain, KeGNN uses fuzzy logic [18], which maps Boolean truth values to the continuous interval \([0,1]\subset\mathbb{R}\). A constant in \(\mathcal{C}\) is interpreted as a real-valued feature vector \(\mathbf{x}\in\mathbb{R}^{d}\). A predicate \(P\in\mathcal{P}\) with arity \(r\) is interpreted as a function \(f_{P}:\mathbb{R}^{r\times d}\mapsto[0,1]\) that takes \(r\) feature vectors as input and returns a truth value.
**Example II.3**: _In the example, a unary predicate \(P_{U}\in\mathcal{P}_{U}=\{\mathrm{AI},\mathrm{DB},\ldots\}\) is interpreted as a function \(f_{P_{U}}:\mathbb{R}^{d}\mapsto[0,1]\) that takes a feature vector \(\mathbf{x}\) and returns a truth value indicating whether the node belongs to the class encoded as \(P_{U}\). The binary predicate \(\mathrm{Cite}\in\mathcal{P}_{B}\) is interpreted as the function_
\[f_{\mathrm{Cite}}(v_{i},v_{j})=\begin{cases}1,&\text{if }(v_{i},v_{j})\in \mathbf{E}_{\mathrm{Cit}}\\ 0,&\text{else.}\end{cases}\]
\(f_{\mathrm{Cite}}\) _returns the truth value 1 if there is an edge between two nodes \(v_{i}\) and \(v_{j}\) in \(\mathbf{G}_{\mathrm{Cit}}\) and 0 otherwise._
T-conorm functions \(\bot:[0,1]\times[0,1]\mapsto[0,1]\)[19] take real-valued truth values of two literals2 and define the truth value of their disjunction. The Gödel t-conorm function for two truth values \(\mathbf{t}_{i},\mathbf{t}_{j}\) is defined as
Footnote 2: A literal is a (possibly negated) grounded atom, e.g. \(\mathrm{AI}(\mathrm{a})\)
\[\bot(\mathbf{t}_{i},\mathbf{t}_{j})\mapsto\max(\mathbf{t}_{i},\mathbf{t}_{j}).\]
To obtain the truth value of a clause \(\varphi:o_{1}\lor...\lor o_{q}\), the function \(\bot\) is extended to a vector \(\mathbf{t}\) of \(q\) truth values: \(\bot(\mathbf{t}_{1},\mathbf{t}_{2},...,\mathbf{t}_{q})=\bot(\mathbf{t}_{1}, \bot(\mathbf{t}_{2}....(\mathbf{t}_{q-1},\mathbf{t}_{q})))\). Fuzzy negation over truth values is defined as \(\mathbf{t}\mapsto 1-\mathbf{t}\)[18].
**Example II.4**: _Given the clause \(\varphi_{\mathrm{AI}}:\forall xy\neg\mathrm{AI}(\mathrm{x})\vee\neg\mathrm{Cite}(\mathrm{x},\mathrm{y})\vee\mathrm{AI}(\mathrm{y})\) and its grounding \(\varphi_{\mathrm{AI}}[x,y|a,b]:\neg\mathrm{AI}(\mathrm{a})\vee\neg\mathrm{Cite}(\mathrm{a},\mathrm{b})\vee\mathrm{AI}(\mathrm{b})\) to the constants \(a\) and \(b\) and truth values for the grounded predicates \(\mathrm{AI}(\mathrm{a})=\mathbf{t}_{1}\), \(\mathrm{AI}(\mathrm{b})=\mathbf{t}_{2}\) and \(\mathrm{Cite}(\mathrm{a},\mathrm{b})=\mathbf{t}_{3}\), the truth value of \(\varphi_{\mathrm{AI}}[x,y|a,b]\) is \(\max\{\max\{(1-\mathbf{t}_{1}),(1-\mathbf{t}_{3})\},\mathbf{t}_{2}\}\)._
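A minimal sketch (ours) of these fuzzy operators, reproducing the computation of Example II.4 for illustrative truth values:

```python
# Goedel t-conorm, fuzzy negation, and the truth value of the grounded clause
# from Example II.4. The truth values t1, t2, t3 are illustrative placeholders.
def t_conorm(*truths):
    """Goedel t-conorm, extended to q arguments: the maximum."""
    return max(truths)

def fuzzy_not(t):
    return 1.0 - t

# AI(a) = t1, AI(b) = t2, Cite(a, b) = t3
t1, t2, t3 = 0.8, 0.3, 1.0
clause_truth = t_conorm(fuzzy_not(t1), fuzzy_not(t3), t2)
print(clause_truth)   # max(0.2, 0.0, 0.3) = 0.3
```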
### _Model Architecture_
The way KeGNN computes the final predictions can be divided into two stages. First, a GNN predicts the node classes given the features and the edges. Subsequently, the knowledge enhancement layers use the predictions as truth values for the grounded unary predicates and update them with respect to the knowledge. An overview of KeGNN is given in Fig. 2.
#### II-E1 Neural Component
The role of the GNN in the neural component is to exploit feature information in the graph structure. The key strength of a GNN is to enrich node representations with graph structure by nesting \(k\) message passing layers [8]. Per layer, the representations of neighboring nodes are aggregated and combined to obtain updated representations. The node representation \(v_{i}^{k+1}\) in the \(k\)-th message passing layer is
\[v_{i}^{k+1}=\mathrm{combine}\big{(}v_{i}^{k},\mathrm{aggregate}\big{(}\{v_{j}^ {k}|v_{j}^{k}\in\mathcal{N}(v_{i})\}\big{)}\big{)}.\]
The layers contain learnable parameters that are optimized with backpropagation. In this work, we consider two well-known GNNs as components for KeGNN: Graph Convolutional Networks (GCN) [14] and Graph Attention Networks (GAT) [13]. While GCN considers the graph structure as given, GAT allows for assessing the importance of the neighbors with attention weights \(\alpha_{ij}\) between node \(v_{i}\) and node \(v_{j}\). In case of multi-head attention, the attention weights are calculated multiple times and concatenated which allows for capturing different aspects of the input data. In KeGNN, the GNN implements the functions \(f_{P_{U}}\) (see Section II-D). In other words, the predictions are used as truth values for the grounded unary predicates in the symbolic component.
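For illustration, a minimal sketch (ours; features and weights are random placeholders) of one GCN-style message passing layer [14] on the example graph:

```python
import numpy as np

# One GCN-style message passing layer: aggregate neighbour representations with
# symmetric normalization, then combine through a learned linear map and ReLU.
def gcn_layer(X, A, W):
    """X: (n, d) node features, A: (n, n) adjacency, W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
n, d = 6, 8                                     # the six documents a, ..., f
A = np.zeros((n, n))
# Edges of the example: (a,d), (a,e), (a,c), (c,e), (e,f) with a=0, ..., f=5
A[[0, 0, 0, 2, 4], [3, 4, 2, 4, 5]] = 1; A += A.T
X, W = rng.standard_normal((n, d)), rng.standard_normal((d, d))
H = gcn_layer(X, A, W)                          # updated node representations
```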
#### II-E2 Symbolic Component
To refine the predictions of the GNN, one or more knowledge enhancement layers are stacked onto the GNN to update its predictions \(\mathbf{Y}\) to \(\mathbf{Y}^{\prime}\). The goal is to increase the satisfaction of the prior knowledge. The predictions \(\mathbf{Y}\) of the GNN serve as input to the symbolic component where they are interpreted as fuzzy truth values for the unary grounded predicates \(\mathbf{U}:=\mathbf{Y}\) with \(\mathbf{U}\in\mathbb{R}^{n\times m}\). Fuzzy truth values for the groundings of binary predicates are encoded as a matrix \(\mathbf{B}\) where each row represents an edge \((v_{i},v_{j})\) and each column represents an edge type \(e\). In the context of node classification, the GNN returns only predictions for the node classes, while the edges are assumed to be given. A binary grounded predicate is therefore set to truth value \(1\) (true) if an edge between two nodes \(v_{i}\) and \(v_{j}\) exists:
\[\mathbf{B}_{[(v_{i},v_{j}),e]}=\begin{cases}1,&\text{if }(v_{i},v_{j})\text{ of type }e\in\mathbf{E}\\ 0,&\text{else.}\end{cases}\]
**Example II.5**: _In case of the aforementioned citation graph of Fig. 1, \(\mathbf{U}\) and \(\mathbf{B}\) are defined as:_
\[\mathbf{U}:=\begin{bmatrix}\mathrm{AI}(\mathrm{a})&\ldots&\mathrm{AG}(\mathrm{a })\\ \mathrm{AI}(\mathrm{b})&\ldots&\mathrm{AG}(\mathrm{b})\\ \vdots&&\vdots\\ \mathrm{AI}(\mathrm{f})&\ldots&\mathrm{AG}(\mathrm{f})\end{bmatrix}\quad\mathbf{B}:= \begin{bmatrix}\mathrm{Cite}(\mathrm{a},\mathrm{d})\\ \mathrm{Cite}(\mathrm{a},\mathrm{e})\\ \mathrm{Cite}(\mathrm{a},\mathrm{c})\\ \vdots\\ \mathrm{Cite}(\mathrm{c},\mathrm{e})\\ \mathrm{Cite}(\mathrm{e},\mathrm{f})\end{bmatrix}\]
To enhance the satisfaction of clauses that contain both unary and binary predicates, their groundings are joined into one matrix \(\mathbf{M}\in\mathbb{R}^{k\times p}\) with \(p=2\cdot|\mathcal{P}_{U}|+|\mathcal{P}_{B}|\). \(\mathbf{M}\) is computed by joining \(\mathbf{U}\) and \(\mathbf{B}\) so that each row of \(\mathbf{M}\) represents an edge \((v_{i},v_{j})\). As a result, \(\mathbf{M}\) contains all required grounded unary predicates for \(v_{i}\) and \(v_{j}\).
**Example II.6**: _For the example citation graph, we obtain \(\mathbf{M}\) as follows:_
_Each row corresponds to an edge \((x,y)\) and contains the grounded unary predicates for \(x\), then those for \(y\), followed by the grounding of \(\mathrm{Cite}(\mathrm{x},\mathrm{y})\):_

\[\mathbf{M}=\begin{bmatrix}\mathrm{AI}(\mathrm{a})&\ldots&\mathrm{AG}(\mathrm{a})&\mathrm{AI}(\mathrm{d})&\ldots&\mathrm{AG}(\mathrm{d})&\mathrm{Cite}(\mathrm{a},\mathrm{d})\\ \mathrm{AI}(\mathrm{a})&\ldots&\mathrm{AG}(\mathrm{a})&\mathrm{AI}(\mathrm{e})&\ldots&\mathrm{AG}(\mathrm{e})&\mathrm{Cite}(\mathrm{a},\mathrm{e})\\ \mathrm{AI}(\mathrm{a})&\ldots&\mathrm{AG}(\mathrm{a})&\mathrm{AI}(\mathrm{c})&\ldots&\mathrm{AG}(\mathrm{c})&\mathrm{Cite}(\mathrm{a},\mathrm{c})\\ \vdots&&\vdots&\vdots&&\vdots&\vdots\\ \mathrm{AI}(\mathrm{e})&\ldots&\mathrm{AG}(\mathrm{e})&\mathrm{AI}(\mathrm{f})&\ldots&\mathrm{AG}(\mathrm{f})&\mathrm{Cite}(\mathrm{e},\mathrm{f})\end{bmatrix}\]
A knowledge enhancement layer consists of multiple _clause enhancers_. A clause enhancer is instantiated for each clause \(\varphi\in\mathcal{K}\). Its aim is to compute updates \(\delta\mathbf{M}_{\varphi}\) for the groundings in \(\mathbf{M}\) that increase the satisfaction of \(\varphi\).
First, fuzzy negation is applied to the columns of \(\mathbf{M}\) that correspond to negated atoms in \(\varphi\). Then \(\delta\mathbf{M}_{\varphi}\) is computed by a _t-conorm boost function_\(\phi\)[20]. This function \(\phi:[0,1]^{q}\mapsto[0,1]^{q}\) takes \(q\) truth values and returns changes to those truth values such that the satisfaction is increased: \(\bot(\mathbf{t})\leq\bot(\mathbf{t}+\phi(\mathbf{t}))\). [20] propose the following differentiable t-conorm boost function
\[\phi_{w_{\varphi}}(\mathbf{t})_{i}=w_{\varphi}\cdot\frac{e^{\mathbf{t}_{i}}}{ \sum_{j=1}^{q}e^{\mathbf{t}_{j}}}.\]
The boost function \(\phi_{w_{\varphi}}\) employs a clause weight \(w_{\varphi}\) that is initialized in the beginning of the training and optimized during training as a learnable parameter. The updates for the groundings calculated by \(\phi_{w_{\varphi}}\) are proportional to \(w_{\varphi}\). Therefore, \(w_{\varphi}\) determines the magnitude of the update and thus reflects the impact of a clause. The changes to the atoms that do not appear in a clause are set to zero. The boost function is applied row-wise to \(\mathbf{M}\) as illustrated in the following example.
**Example II.7**: _Given the clause \(\varphi_{AI}:\forall xy\neg\mathrm{AI(x)}\vee\neg\mathrm{Cite(x,y)}\lor \mathrm{AI(y)}\) and the clause weight \(w_{\mathrm{AI}}\), the changes for this clause are \(\delta\mathbf{M}_{\varphi_{AI}}=\)_
\[w_{\mathrm{AI}}\cdot\begin{bmatrix}\delta_{\neg\mathrm{AI}(\mathrm{a})}&0&\ldots&\delta_{\mathrm{AI}(\mathrm{d})}&0&\ldots&\delta_{\neg\mathrm{Cite}(\mathrm{a},\mathrm{d})}\\ \delta_{\neg\mathrm{AI}(\mathrm{a})}&0&\ldots&\delta_{\mathrm{AI}(\mathrm{e})}&0&\ldots&\delta_{\neg\mathrm{Cite}(\mathrm{a},\mathrm{e})}\\ \delta_{\neg\mathrm{AI}(\mathrm{a})}&0&\ldots&\delta_{\mathrm{AI}(\mathrm{c})}&0&\ldots&\delta_{\neg\mathrm{Cite}(\mathrm{a},\mathrm{c})}\\ \vdots&&&\vdots&&&\vdots\\ \delta_{\neg\mathrm{AI}(\mathrm{e})}&0&\ldots&\delta_{\mathrm{AI}(\mathrm{f})}&0&\ldots&\delta_{\neg\mathrm{Cite}(\mathrm{e},\mathrm{f})}\end{bmatrix}\]
_The values of \(\delta\mathbf{M}_{\varphi_{AI}}\) are calculated by \(\phi_{w_{\mathrm{AI}}}\), for example:_
\[\delta_{\neg\mathrm{AI}(\mathrm{a})}=\phi_{w_{\mathrm{AI}}}(\mathbf{z})_{a}=-\frac{e^{-\mathbf{z}_{\mathrm{AI}(\mathrm{a})}}}{e^{-\mathbf{z}_{\mathrm{AI}(\mathrm{a})}}+e^{-\mathbf{z}_{\mathrm{Cite}(\mathrm{a},\mathrm{c})}}+e^{\mathbf{z}_{\mathrm{AI}(\mathrm{c})}}}\]
_Each clause enhancer computes updates \(\delta\mathbf{M}_{\varphi}\) to increase the satisfaction of a clause independently. The updates of all clause enhancers are finally added, resulting in a matrix \(\delta\mathbf{M}=\sum_{\varphi\in\mathcal{K}}\delta\mathbf{M}_{\varphi}\). To apply the updates to the initial predictions, \(\delta\mathbf{M}\) has to be added to \(\mathbf{Y}\). The updates in \(\delta\mathbf{M}\) cannot directly be applied to the predictions \(\mathbf{Y}\) of the GNN: since the unary groundings \(\mathbf{U}\) were joined with the binary groundings \(\mathbf{B}\), multiple changes may be proposed for the same grounded unary atom. For example, for the grounded atom \(\mathrm{AI}(\mathrm{c})\) the changes \(\delta_{\mathrm{AI}(\mathrm{c})}\) and \(\delta_{\neg\mathrm{AI}(\mathrm{c})}\) are proposed, since \(c\) appears in the grounded clauses \(\varphi_{\mathrm{AI}}[x,y|a,c]\) and \(\varphi_{\mathrm{AI}}[x,y|c,e]\). In \(\mathbf{G}_{\mathrm{Cit}}\) the node \(c\) appears in second place of edge \((a,c)\) and in first place of edge \((c,e)\). Therefore, all updates for the same grounded atom are summed, which reduces the size of \(\mathbf{M}\) to the size of \(\mathbf{U}\)._
To ensure that the updated predictions remain truth values in \([0,1]\), the knowledge enhancement layer first updates the preactivations \(\mathbf{Z}\) of the GNN and then applies the activation function \(\sigma\) to the updated preactivations \(\mathbf{Z}^{\prime}\) to obtain the final predictions \(\mathbf{Y}^{\prime}=\sigma(\mathbf{Z}^{\prime})\). A knowledge enhancement layer thus transforms \(\mathbf{Z}\) into \(\mathbf{Z}^{\prime}\) (with \(\mathbf{Z},\mathbf{Z}^{\prime}\in\mathbb{R}^{n\times m}\)): in the last step, the updates of the knowledge enhancer are added to the preactivations \(\mathbf{Z}\) and passed through \(\sigma\) to obtain the updated predictions
\[\mathbf{Y}^{\prime}=\sigma\bigg{(}\mathbf{Z}+\sum_{\varphi\in\mathcal{K}} \delta\mathbf{U}_{\varphi}\bigg{)}\]
where \(\delta\mathbf{U}_{\varphi}\) is the matrix obtained by extracting the changes to the unary predicates from \(\delta\mathbf{M}_{\varphi}\). Regarding the binary groundings, the values in \(\mathbf{B}\) are set to a high positive value that results in one when \(\sigma\) is applied.
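To summarize the layer end-to-end, the following sketch (a minimal illustration, not the authors' implementation) performs one enhancement step for a single clause of the form in Example II.7: it gathers the literal preactivations per edge, applies the boost, scatter-adds the unary updates and re-applies \(\sigma\); the gather/scatter indexing, the argument `cls_col` and the constant 500.0 for the binary preactivations are illustrative assumptions.

```python
import torch

def knowledge_enhancement(z, edge_index, cls_col, w):
    """One enhancement step for the clause ∀xy ¬Cls(x) ∨ ¬Link(x,y) ∨ Cls(y).

    z:          (n_nodes, n_classes) GNN preactivations Z.
    edge_index: (2, E) known edges; each edge yields one clause grounding.
    cls_col:    column index of the class Cls targeted by this clause.
    w:          scalar learnable clause weight.
    """
    src, dst = edge_index
    link = torch.full((src.numel(),), 500.0, dtype=z.dtype)  # binary atoms fixed "true"
    t = torch.stack([-z[src, cls_col], -link, z[dst, cls_col]], dim=1)
    delta = w * torch.softmax(t, dim=1)
    dz = torch.zeros_like(z)
    # Sum all updates proposed for the same grounded unary atom (scatter-add).
    dz[:, cls_col].scatter_add_(0, src, -delta[:, 0])  # change to ¬Cls(x), negated back
    dz[:, cls_col].scatter_add_(0, dst, delta[:, 2])   # change to Cls(y)
    return torch.sigmoid(z + dz)                       # Y' = σ(Z + Σ δU)
```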
## III Related Work
The field of knowledge graph completion is addressed from several research directions. Symbolic methods exist that conduct link prediction given a set of prior knowledge [21][22]. Embedding-based methods [23] are mostly sub-symbolic methods to obtain node embeddings that are used for knowledge graph completion tasks. Usually, their common objective is to find similar embeddings for nodes that are located closely in the graph. The majority of these methods only encode the graph structure but do not consider node-specific feature information [24].
Fig. 2: Overview of KeGNN.
In contrast, KeGNN is based on GNNs, which are suited for learning representations of graphs attributed with node features. It stacks additional layers onto the GNN that interpret its outputs in fuzzy logic and modify them to increase the satisfiability of prior knowledge; it is therefore considered a neuro-symbolic method. In the multifaceted neuro-symbolic field, KeGNN can be placed in the category of knowledge-guided learning [20], where the focus lies on learning in the presence of additional supervision introduced as prior knowledge. Within this category, KeGNN belongs to the model-based approaches, where prior knowledge in the form of knowledge enhancement layers is an integral part of the model. KeGNN can be seen as an extension of knowledge enhanced neural networks (KENN) [25], which stack knowledge enhancement layers onto a multi-layer perceptron (MLP). An MLP, however, is not powerful enough to incorporate graph structure into its representations; relational information can only be introduced through binary predicates in the symbolic part of KENN. In contrast, KeGNN is based on GNNs that process the graph structure, which makes both the neural and the symbolic components sufficiently powerful to exploit it. Beyond these, loss-based methods such as logic tensor networks [26] exist that encode the satisfiability of prior knowledge as an optimization objective.
Further, in [27] neuro-symbolic approaches dealing with graph structures are classified into three categories. First, logically informed embedding approaches [28][29] use predefined logical rules that provide knowledge to a neural system, while both components remain mostly distinct. Second, approaches for knowledge graph embedding with logical constraints [30][31] use prior knowledge as constraints on the neural knowledge graph embedding method in order to modify predictions or embeddings. Third, neuro-symbolic methods are used for learning rules for graph reasoning tasks [32][33]. This allows for rule generation or confidence scores for prior knowledge and makes the models robust to exceptions or soft knowledge. KeGNN falls best into the second category, since the prior knowledge is interpreted in fuzzy logic, integrated with the neural model and used to update the GNN's predictions. The idea of confidence values in the third category shares with KeGNN's clause weights the notion of weighting knowledge. However, even though KeGNN's clause weights introduce a notion of the impact of a clause when predictions are made, they cannot directly be interpreted as the confidence in a rule.
In the well-known _Kautz Taxonomy_[34] that classifies neuro-symbolic approaches according to the integration of neural and symbolic modules, KeGNN falls best into the category Neuro[Symbolic] (Type 6) of fully-integrated neuro-symbolic systems that embed symbolic reasoning in a neural architecture.
## IV Experimental Evaluation
To evaluate the performance of KeGNN, we apply it to the datasets Citeseer, Cora, PubMed and Flickr, which are common benchmarks for node classification in a transductive setting. In the following, KeGNN is called KeGCN or KeGAT when instantiated with a GCN or a GAT, respectively. As an additional baseline, we consider KeMLP, which stacks knowledge enhancement layers onto an MLP, as proposed in [25]. Further, the standalone neural models MLP, GCN and GAT are used as baselines. While Citeseer, Cora and PubMed are citation graphs that encode citations between scientific papers (as in Example 2.2), Flickr contains images and shared properties between them. All datasets can be modelled as homogeneous, labelled and attributed graphs as defined in Section II-A. Tab. I gives an overview of the datasets used in this work. The datasets are publicly available in the dataset collection3 of PyTorch Geometric [35]. For the split into training, validation and test sets, we take the predefined splits of [36] for the citation graphs and of [16] for Flickr. Word2Vec vectors [17] are used as node features for the citation graphs and image data for Flickr. Fig. 1 visualizes the graph structure of the underlying datasets as a homogeneous, attributed and labelled graph, using Citeseer as an example.
Footnote 3: [https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html)
The set of prior knowledge clauses for the knowledge enhancement layers is manually defined. In this work, we encode the assumption that the existence of an edge between a node pair points to their membership in the same class and hence provides added value to the node classification task. In the context of citation graphs, this implies that two documents that cite each other refer to the same topic, while for Flickr, linked images share the same properties. Following this pattern for all datasets, a clause \(\varphi\): \(\forall xy:\neg\mathrm{Cls}_{i}(\mathrm{x})\vee\neg\mathrm{Link}(\mathrm{x},\mathrm{y})\vee\mathrm{Cls}_{i}(\mathrm{y})\) is instantiated for each node class \(\mathrm{Cls}_{i},\mathrm{i}\in\{1,\dots,\mathrm{m}\}\). More details on the experiments are given in Appendix VII. The implementation of KeGNN and the experiments will be made publicly available after the revision period.
### _Results_
To compare the performance of all models, we examine the average test accuracy over 50 runs (10 for Flickr) for the knowledge enhanced models KeMLP, KeGCN, KeGAT and the standalone base models MLP, GCN, GAT on the named datasets. The results are given in Tab. II. For Cora and Citeseer, KeMLP leads to a significant improvement over
MLP (p-value of one-sided t-test \(\ll 0.05\)). In contrast, no significant advantage of KeGCN or KeGAT in comparison to the standalone base model is observed. Nevertheless, all GNN-based models are significantly superior to KeMLP for Cora. This includes not only KeGCN and KeGAT, but also the GNN baselines. For Citeseer, KeGAT and GAT both outperform KeMLP. In the case of PubMed, only a significant improvement of KeMLP over MLP can be observed, while the GNN-based models and their enhanced versions do not provide any positive effect. For Flickr, no significant improvement between the base model and the respective knowledge enhanced model can be observed. Nevertheless, all GNN-based models outperform KeMLP, reporting significantly higher mean test accuracies for KeGAT, GAT, GCN and KeGCN.
#### IV-B1 Exploitation of the Graph Structure
It turns out that the performance gap between MLP and KeMLP is larger than that between KeGNN and the standalone GNN. To explain this observation, we examine how the graph structure affects the prediction performance. Therefore, in Fig. 3 we analyze the accuracy grouped by node degree over the entire graph for MLP vs. KeMLP and GCN vs. KeGCN. The findings for KeGAT are in line with those for KeGCN. It is observed that KeMLP performs increasingly better than MLP as the node degree increases. When comparing GCN and KeGCN, both models achieve higher accuracy for nodes with a higher degree.
This shows that rich graph structure is helpful for the node classification in general. Indeed, the MLP is a simple model that misses information on the graph structure and thus benefits from graph structure contributed by KeMLP in the form of binary predicates. On the contrary, standalone GNNs can process graph structure by using message passing techniques to transmit learned node representations between neighbors. The prior knowledge introduced in the knowledge enhancer is simple. It encodes that two neighbors are likely to be of the same class. An explanation for the small difference in performance is that GNNs may be able to capture and propagate this simple knowledge across neighbors implicitly, using its message passing technique. In other words we observe that, in this particular case, the introduced knowledge happens to be redundant for GNNs. However, the introduced knowledge significantly improves the accuracy of MLPs. In this context, we discuss perspectives for future work in Section V.
#### IV-B2 Robustness to Wrong Knowledge
Furthermore, a question of interest is how the knowledge enhanced models balance knowledge and graph data when the knowledge is inconsistent with the graph data. In other words, can KeGNN successfully deal with nodes whose neighbors mainly belong to a different ground truth class and thus contribute misleading information to the node classification?
To analyze this question, we categorize the accuracy by the proportion of misleading nodes in the neighborhood, see Fig. 4. Misleading nodes are node neighbors that have a different ground truth class than the node to be classified. It turns out that KeMLP is particularly helpful over MLP when the neighborhood provides the right information. However, if the neighborhood is misleading (if most or even all of the neighbors belong to a different class), an MLP that ignores the graph structure can lead to even better results. When comparing KeGCN and GCN, there is no clear difference. This is expected, since both models are equally affected by
Fig. 3: Accuracy grouped by node degree for MLP vs. KeMLP (top), GCN vs. KeGCN (middle) and GAT vs. KeGAT (bottom) on Citeseer.
TABLE II: Average test accuracy over 50 runs (10 for Flickr); standard deviations in parentheses.

| | **MLP** | **KeMLP** | **GCN** | **KeGCN** | **GAT** | **KeGAT** |
|---|---|---|---|---|---|---|
| **Cora** | 0.7098 (0.0080) | 0.8072 (0.0193) | 0.8538 (0.0057) | **0.8587** (0.0057) | 0.8517 (0.0068) | 0.8498 (0.0066) |
| **Citeseer** | 0.7278 (0.0081) | 0.7529 (0.0067) | 0.7480 (0.0102) | 0.7506 (0.0096) | 0.7718 (0.0072) | **0.7734** (0.0073) |
| **PubMed** | 0.8844 (0.0057) | **0.8931** (0.0048) | 0.8855 (0.0062) | 0.8840 (0.0087) | 0.8769 (0.0040) | 0.8686 (0.0081) |
| **Flickr** | 0.4656 (0.0018) | 0.4659 (0.0012) | **0.5007** (0.0063) | 0.4974 (0.0180) | 0.4970 (0.0124) | 0.4920 (0.0189) |
misleading nodes, as both utilize the graph structure. Just like the GCN, KeGCN is not necessarily robust to wrong prior knowledge, since the GCN component uses the entire neighborhood, including the misleading nodes.
When comparing GCN to KeMLP, see the bottom plot in Fig. 4, KeMLP is more robust to misleading neighbors. While GCN takes the graph structure as given and includes all neighbors equally in the embeddings via graph convolution, the clause weights in the knowledge enhancement layers provide a way to devalue knowledge. If the data frequently contradicts a clause, the model has the capacity to reduce the respective clause weight in the learning process and thus its impact.
#### IV-B3 Clause Weight Learning
Further, we want to examine whether the clause weights learned during training are aligned with the knowledge in the ground truth data. The clause weights provide insights on the magnitude of the updates made by a clause.
The _clause compliance_ [20] measures how well the prior knowledge is satisfied in a graph. Given a clause \(\varphi\), a class \(\mathrm{Cls}_{i}\), a set of nodes \(\mathbf{V}\), the set of nodes of class \(\mathrm{Cls}_{i}\): \(\mathbf{V}_{i}=\{v\in\mathbf{V}\mid\mathrm{Cls}(v)=i\}\), and the neighborhood \(\mathcal{N}(v_{i})\) of a node \(v_{i}\), the clause compliance of clause \(\varphi\) on graph \(\mathbf{G}\) is defined as follows:
\[\mathrm{Compliance}(\mathbf{G},\varphi)=\frac{\sum_{v_{i}\in\mathbf{V}_{i}}\sum_{v_{j}\in\mathcal{N}(v_{i})}\mathbf{1}[v_{j}\in\mathbf{V}_{i}]}{\sum_{v_{i}\in\mathbf{V}_{i}}|\mathcal{N}(v_{i})|} \tag{1}\]
In other words, the clause compliance counts how often the neighbors of nodes of class \(\mathrm{Cls}_{i}\) have the same class [20]. It can be calculated on the ground truth classes of the training set or on the predicted classes. As a reference, we measure the clause compliance based on the ground truth labels of the training set. Fig. 5 displays the learned clause weights for KeGCN and KeMLP versus the clause compliance on the ground truth labels of the training set. For KeMLP, a positive correlation between the learned clause weights and the clause compliance on the training set is observed: higher clause weights are learned for clauses that are satisfied in the training set. Consequently, these clauses have a higher impact on the updates of the predictions. Conversely, clauses that are rarely satisfied learn lower clause weights during training and make smaller updates to the initial predictions. In the case of KeGCN, the clause weights are predominantly set to values close to zero. This is in accordance with the absence of a significant performance gap between GCN and KeGCN: since the GCN itself already produces valid classifications, only small updates are required from the clause enhancers.
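A minimal sketch of Eq. (1), assuming labels and a directed edge list as NumPy arrays (names are illustrative):

```python
import numpy as np

def clause_compliance(labels: np.ndarray, edge_index: np.ndarray, i: int) -> float:
    """Fraction of neighbors of class-i nodes that also belong to class i."""
    src, dst = edge_index                # directed edges: dst is a neighbor of src
    mask = labels[src] == i              # neighborhoods of class-i nodes
    if mask.sum() == 0:
        return float("nan")
    return float((labels[dst][mask] == i).mean())

# Toy usage: a 4-node graph with two classes.
labels = np.array([0, 0, 1, 0])
edges = np.array([[0, 1, 2, 3], [1, 0, 3, 2]])
print(clause_compliance(labels, edges, 0))  # 2 of 3 neighbors share class 0
```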
Furthermore, we analyze how the compliance evolves during training to investigate whether the models learn predictions that increase the satisfaction of the prior knowledge. Fig. 6 plots the evolution of the clause compliance for the six clauses for GCN vs. KeGCN and MLP vs. KeMLP. It is observed that GCN and KeGCN yield similar results, as the evolution of the compliance during training is mostly aligned for both models. For MLP vs. KeMLP, the clause compliance of the MLP's predictions converges, for all classes, to lower values than the clause compliance obtained with KeMLP. This gives evidence that the knowledge enhancement layer actually improves the satisfaction of the prior knowledge. As already observed, it also suggests that the standalone GCN is able
Fig. 4: Accuracy grouped by the ratio of misleading first-order neighbors for GCN vs. KeGCN (left), MLP vs. KeMLP (right) and GCN vs. KeMLP (bottom) on Citeseer.
Fig. 5: Learned clause weights vs. clause compliance for KeMLP (left) and KeGCN (right) on Citeseer.
Fig. 6: Clause compliance during training for GCN vs. KeGCN (left) and MLP vs. KeMLP (right) on Citeseer.
to implicitly satisfy the prior knowledge even though it is not explicitly defined.
## V Limitations and Perspectives
The method of KeGNN is limited in some aspects, which we present in this section. In this work, we focus on homogeneous graphs. In reality, however, graphs are often heterogeneous with multiple node and edge types [4]. Adaptations are necessary on both the neural and the symbolic side to apply KeGNN to heterogeneous graphs. The restriction to homogeneous graphs also limits the scope of formulating complex prior knowledge. Ultimately, the datasets used in this work and the set of prior knowledge are too simple for KeGNN to exploit its potential and lead to a significant improvement over the GNN. The experimental results show that the knowledge encoded in the symbolic component leads to a significant improvement over an MLP that is not capable of capturing and learning that knowledge. This indicates that for more complex knowledge that is harder for a GNN to learn, KeGNN has the potential to bring higher improvements. A perspective for further work is the extension of KeGNN to more generic data structures such as incomplete and heterogeneous knowledge graphs in conjunction with more complex prior knowledge.
Another limitation of KeGNN is scalability. With an increasing number of stacked knowledge enhancement layers, the affected node neighborhood grows exponentially, which can lead to significant memory overhead. This problem is referred to as neighborhood explosion [7] and is particularly problematic in the context of training on memory-constrained GPUs. It affects both the GNN and the knowledge enhancement layers that encode binary knowledge. Methods from scalable graph learning [37][16][38] represent potential solutions to the neighborhood explosion problem in KeGNN.
Furthermore, limitations appear in the context of link prediction with KeGNN. For link prediction, a neural component is required that predicts fuzzy truth values for binary predicates. At present, KeGNN can handle clauses containing binary predicates, but their truth values are initialized with artificial predictions, where a high value encodes the presence of an edge. This limits the application of KeGNN to datasets for which the graph structure is complete and known a priori.
## VI Conclusion
In this work, we introduced KeGNN, a neuro-symbolic model that integrates GNNs with symbolic knowledge enhancement layers to create an end-to-end differentiable model. This allows the use of prior knowledge to improve node classification while exploiting the strength of a GNN to learn expressive representations. Experimental studies show that the inclusion of prior knowledge has the potential to improve simple neural models (as observed in the case of MLP). However, the knowledge enhancement of GNNs is harder to achieve on the underlying, limited benchmarks, for which the injection of simple knowledge about the local neighborhood is redundant with the representations that GNNs are able to learn. Nevertheless, KeGNN has not only the potential to improve graph completion tasks from a performance perspective, but also to increase interpretability through clause weights. This work is a step towards a holistic neuro-symbolic method for incomplete and noisy semantic data, such as knowledge graphs.
## VII Additional Experiment Details
### _Implementation_
The implementation of KeGNN and the described experiments will be made publicly available after the revision period. The code is based on PyTorch [39] and the graph learning library PyTorch Geometric [35]. The Weights & Biases tracking tool [40] is used to monitor the experiments. All experiments are conducted on a machine running Ubuntu 20.04, equipped with an Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz, 192 GB of RAM and one Nvidia Quadro P5000 GPU.
### _Model Parameters and Hyperparameter Tuning_
KeGNN contains a set of hyperparameters. Batch normalization [41] is applied after each hidden layer of the GNN. The Adam optimizer [42] is used for all models. Concerning the hyperparameters specific to the knowledge enhancement layers, the initialization of the preactivations of the binary predicates (which are assumed to be known) is treated as a hyperparameter. They are set to a high positive value for edges that are known to exist and correspond to the grounding of the binary predicate. Furthermore, different initializations of the clause weights and constraints on them are tested. Moreover, the number of stacked knowledge enhancement layers is a hyperparameter. We further allow the model to randomly neglect a proportion of edges by setting an edge drop rate parameter. Finally, we test whether normalizing the edges with the diagonal matrix \(\tilde{\mathbf{D}}_{i,i}=\sum_{j}\tilde{\mathbf{A}}_{i,j}\) (with \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\)) is helpful.
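Under the usual GCN convention, which we assume here, this normalization corresponds to symmetric scaling of \(\tilde{\mathbf{A}}\) with \(\tilde{\mathbf{D}}^{-1/2}\); a minimal dense-matrix sketch:

```python
import torch

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D̃^{-1/2} Ã D̃^{-1/2} of Ã = A + I (assumed GCN-style)."""
    a_tilde = adj + torch.eye(adj.size(0), dtype=adj.dtype)
    deg = a_tilde.sum(dim=1)                    # D̃_ii = Σ_j Ã_ij
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0   # guard against isolated nodes
    return d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]
```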
To find a suitable set of hyperparameters for each dataset and model, we perform a random search with up to 800 runs and a 48-hour time limit, and choose the parameter combination that leads to the highest accuracy on the validation set. The hyperparameter tuning is executed in Weights and Biases [40]. The following hyperparameter values are tested:
* Adam optimizer parameters: \(\beta_{1}\): 0.9, \(\beta_{2}\): 0.99, \(\epsilon\): 1e-07
* Attention heads: \(\{1,2,3,4,6,8,10\}\)
* Batch size: \(\{128,512,1024,2048,\text{full batch}\}\)
* Binary preactivation: \(\{0.5,1.0,10.0,100.0,500.0\}\)
* Clause weights initialization: \(\{0.001,0.1,0.25,0.5,\) random uniform distribution on \([0,1]\}\)
* Dropout rate: \(0.5\)
* Edges drop rate: random uniform distribution \([0.0,0.9]\)
* Edge normalization: \(\{\text{true, false}\}\)
* Early stopping: \(\delta_{min}:0.001\), patience: {1, 10, 100}
* Hidden layer dimension: {32, 64, 128, 256}
* Learning rate: random uniform distribution \([0.0001,0.1]\)
* Clause weight clipping: \(w_{min}:0.0\), \(w_{max}\): random uniform distribution: \([0.8,500.0]\)
* Number of knowledge enhancement layers: \(\{1,2,3,4,5,6\}\)
* Number of hidden layers: \(\{2,3,4,5,6\}\)
* Number of epochs \(200\) (unless training stopped early)
The obtained parameter combinations for the models KeMLP, KeGCN and KeGAT for Cora, Citeseer, PubMed and Flickr are displayed in Tab. III. We set the random seed for all experiments to 1234.
The reference models MLP, GCN and GAT are trained with the same parameter set as the respective knowledge enhanced models.
|
2301.11701 | TransNet: Transferable Neural Networks for Partial Differential
Equations | Transfer learning for partial differential equations (PDEs) is to develop a
pre-trained neural network that can be used to solve a wide class of PDEs.
Existing transfer learning approaches require much information of the target
PDEs such as its formulation and/or data of its solution for pre-training. In
this work, we propose to construct transferable neural feature spaces from
purely function approximation perspectives without using PDE information. The
construction of the feature space involves re-parameterization of the hidden
neurons and uses auxiliary functions to tune the resulting feature space.
Theoretical analysis shows the high quality of the produced feature space,
i.e., uniformly distributed neurons. Extensive numerical experiments verify the
outstanding performance of our method, including significantly improved
transferability, e.g., using the same feature space for various PDEs with
different domains and boundary conditions, and the superior accuracy, e.g.,
several orders of magnitude smaller mean squared error than the state of the
art methods. | Zezhong Zhang, Feng Bao, Lili Ju, Guannan Zhang | 2023-01-27T13:26:25Z | http://arxiv.org/abs/2301.11701v1 | # TransNet: Transferable Neural Networks for Partial Differential Equations
###### Abstract
Transfer learning for partial differential equations (PDEs) is to develop a pre-trained neural network that can be used to solve a wide class of PDEs. Existing transfer learning approaches require much information of the target PDEs such as its formulation and/or data of its solution for pre-training. In this work, we propose to construct transferable neural feature spaces from purely function approximation perspectives without using PDE information. The construction of the feature space involves re-parameterization of the hidden neurons and uses auxiliary functions to tune the resulting feature space. Theoretical analysis shows the high quality of the produced feature space, i.e., uniformly distributed neurons. Extensive numerical experiments verify the outstanding performance of our method, including significantly improved transferability, e.g., using the same feature space for various PDEs with different domains and boundary conditions, and the superior accuracy, e.g., several orders of magnitude smaller mean squared error than the state of the art methods.
Machine Learning, Transferable Neural Networks, Partial Differential Equations
## 1 Introduction
Rapid advancement of deep learning has attracted significant attention of researchers to explore how to use deep learning to solve scientific and engineering problems. Since the numerical solution of partial differential equations (PDEs) sits at the heart of many scientific areas, there is a surge of studies on how to use neural networks to leverage data and physical knowledge to solve PDEs (Raissi et al., 2019; E and Yu, 2018; Long et al., 2018; Zang et al., 2020; Li et al., 2021; Lu et al., 2021; Gin et al., 2021; Zhang et al., 2021; Teng et al., 2022; Clark Di Leoni et al., 2023). The neural network-based methods have several advantages over traditional numerical methods (e.g., finite element, finite difference and finite volume), such as avoiding the need for numerical integration, generating differentiable solutions, and exploiting advanced computing capabilities, e.g., GPUs. Nevertheless, a major drawback of these deep learning methods for solving PDEs is the high computational cost associated with the neural network training/retraining using stochastic gradient descent (SGD). One of the popular strategies to alleviate this issue is transfer learning.
Transfer learning for PDEs is to develop a pre-trained neural network that can be effectively re-used to solve a PDE with multiple coefficients or in various domains, or to solve multiple types of PDEs. When transferring a pre-trained neural network from one scenario to another, the feature space, e.g., the hidden layers, is often frozen or slightly perturbed, which can dramatically reduce the training overhead by orders of magnitude. However, existing transfer learning approaches for PDEs, e.g., (Lu et al., 2021; Li et al., 2021; Chakraborty, 2020; Desai et al., 2021), require information/knowledge of the target family of PDEs to pre-train a neural network model. The needed information could be the analytical definitions of the PDEs including initial and boundary conditions, and/or measurement data of the PDE's solution. These requirements not only lead to time-consuming simulation data generation using other PDE solvers, but also limit the transferability of the pre-trained neural network (i.e., the pre-trained network is only transferable to the same or similar type of PDEs that are used for pre-training).
To overcome the above challenges, in this paper we propose a transferable neural network (TransNet) to improve the transferability of neural networks for solving PDEs. The key idea is to construct a pre-trained neural feature space without using any PDE information, so that the pre-trained feature space can be transferred to a variety of PDEs with different
domains and boundary conditions. We limit our attention to single-hidden-layer fully-connected neural networks, which have sufficient expressive power for the low-dimensional PDEs that are commonly used in science and engineering fields. Specifically, we treat each hidden neuron as a basis function and re-parameterize all the neurons to separate the parameters that determine the neuron's location from the ones that control the shape (i.e., the slope) of the activation function. We then develop a simple, yet very effective, approach to generate uniformly distributed neurons in the unit ball, and rigorously prove the uniform neuron distribution. Next, the shape parameters of the neurons are tuned using auxiliary functions, i.e., realizations of a Gaussian process. The entire feature space construction (determining the hidden neurons' parameters) does not require the PDE's formulation or data of the PDE's solution. When applying the constructed feature space to a PDE problem, we only need to solve for the parameters of the output layer by minimizing the standard PDE residual loss. This can be done by either solving a simple least squares problem for linear PDEs, or combining a least squares solver with a nonlinear iterative solver, e.g., Picard iteration, for nonlinear PDEs.
The major contributions of this work are summarized as
* We develop transferable neural feature spaces that are independent of any PDE, and can be applied to effectively solve various linear and nonlinear PDE problems.
* We theoretically and computationally prove the uniform distribution of the hidden neurons, viewed as global non-orthogonal basis, for the proposed TransNet in the unit ball of any dimension.
* We demonstrate the superior accuracy and efficiency of the proposed TransNet for solving PDEs, e.g., the mean square errors of TransNet are several orders of magnitudes smaller than those by the state-of-the-art methods.
## 2 Related work
Studies on using neural networks for solving PDEs can be traced back to some early works, e.g., (Dissanayake and Phan-Thien, 1994; Lagaris et al., 1998). Recent advances mostly have been focused on physics-informed neural network (PINN). The general idea of PINN is to represent the PDE's solution by a neural network, and then train the network by minimizing certain measurement of the PDE's residual at a set of samples in the domain of computation. Several improvements on the training and sampling were proposed in (Lu et al., 2021; Anitescu et al., 2019; Zhao and Wright, 2021; Krishapriyan et al., 2021). Besides direct minimizing the PDE's residual, there are studies on how to combine traditional PDE solvers with neural networks. For example, the deep Ritz method (E and Yu, 2018) uses the variational form of PDEs and combines the stochastic gradient descent with numerical integration to train the network; the deep Galerkin method (Sirignano and Spiliopoulos, 2018) combines the Galerkin method with machine learning; the PDE-Net (Long et al., 2018, 2019) uses a stack of neural networks to approximate the PDE solutions over a multiple of time steps.
Another type of deep learning method for PDEs is to use neural networks to learn a family of PDE operators, instead of a single equation. The Fourier neural operator (FNO) (Li et al., 2021) parameterizes the integral kernel in Fourier space and is generalizable to different spatial/time resolutions. The DeepONet (Lu et al., 2021) extends the universal approximation theorem (Chen and Chen, 1995) to deep neural networks, and its variant (Wang et al., 2021) further reduces the amount of data needed for training. The physics-informed neural operator (PINO) (Li et al., 2021) combines operator learning with function approximation to achieve higher accuracy. MIONet (Jin et al., 2022) was proposed to learn multiple-input operators via tensor product based on low-rank approximation.
Random feature models have also been used to solve PDEs (Sun et al., 2018; Liu et al., 2022) or learn PDE operators (Nelsen and Stuart, 2021). The theory of random feature models for function approximation was developed due to its natural connection with kernel methods (Liu et al., 2022; Bach, 2017). The proposed TransNet can be viewed as an improved random feature model for PDEs from two perspectives: (1) the re-parameterization of the hidden neurons to separate the parameters that determine locations of the neurons and the ones that control the activation function slope, (2) the usage of auxiliary functions to tune the neural feature space, which makes a critical contribution to the improvement of the accuracy of TransNet in solving PDEs.
## 3 Transferable neural networks for PDEs
### Problem setting and background
We introduce the problem setup for using neural networks to solve partial differential equations. The PDE of interest can be presented in a general formulation, i.e.,
\[\begin{cases}\mathcal{L}(u(\mathbf{y}))=f(\mathbf{y})&\text{for}\ \mathbf{y}\in\Omega,\\ \mathcal{B}(u(\mathbf{y}))=g(\mathbf{y})&\text{for}\ \mathbf{y}\in\partial\Omega,\end{cases} \tag{1}\]
where \(\Omega\subset\mathbb{R}^{d}\) with the boundary \(\partial\Omega\) is the spatial-temporal bounded domain under consideration, \(\mathbf{y}:=(\mathbf{x},t)=(x_{1},\dots,x_{d-1},t)^{\top}\) is a column vector that includes both spatial and temporal variables, \(u\) denotes the unknown solution of the PDE, \(\mathcal{L}(\cdot)\) is a differential operator, \(\mathcal{B}(\cdot)\) is the operator defining the initial and/or boundary conditions, and \(f(\mathbf{y})\) and \(g(\mathbf{y})\) are the right hand sides associated with the operators \(\mathcal{L}(\cdot)\) and \(\mathcal{B}(\cdot)\), respectively. For notational simplicity, we assume that the solution is a scalar function; the proposed method can be extended to vector-valued functions without
any essential difficulty. We limit our attention to the single-hidden-layer fully-connected neural networks, denoted by
\[u_{\rm NN}(\mathbf{y}):=\sum_{m=1}^{M}\alpha_{m}\,\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})+ \alpha_{0}, \tag{2}\]
where \(M\) is the number of hidden neurons, the row vector \(\mathbf{w}_{m}=(w_{m,1},\ldots,w_{m,d})\) and the scalar \(b_{m}\) are the weights and bias of the \(m\)-th hidden neuron, the row vector \(\mathbf{\alpha}=(\alpha_{0},\alpha_{1},\ldots,\alpha_{M})\) includes the weights and bias of the output layer, and \(\sigma(\cdot)\) is the activation function. As demonstrated in Section 4, networks of this type have sufficient expressive power for solving a variety of PDEs with satisfactory accuracy.
A typical method (Karniadakis et al., 2021) for solving the PDE in Eq. (1) is to directly parameterize the solution \(u(\mathbf{y})\) as a neural network \(u_{\rm NN}(\mathbf{y})\) in Eq. (2) and optimize the neural network's parameters by minimizing the PDE residual loss, e.g., \(L(\mathbf{y})=\|\mathcal{L}(u(\mathbf{y}))-\mathcal{L}(u_{\rm NN}(\mathbf{y}))\|_{2}+\|\mathcal{B}(u(\mathbf{y}))-\mathcal{B}(u_{\rm NN}(\mathbf{y}))\|_{2}\), at a set of spatial-temporal locations. Despite the good performance of these approaches in solving PDE problems, their main drawback is _limited transferability_ because of the high computational cost of gradient-based re-training and hyperparameter re-tuning. When there is any change to the operators \(\mathcal{L}(\cdot),\mathcal{B}(\cdot)\), the right-hand-side functions \(f(\mathbf{y}),g(\mathbf{y})\), or the shape of the domain \(\Omega\), the neural network \(u_{\rm NN}(\mathbf{y})\) often needs to be re-trained using gradient-based optimization (even though the current parameter values could provide a good initial guess for the re-training), or the hyperparameters associated with the network and the optimizer need to be re-tuned. In comparison, random feature models require a much lower re-training cost, which has been exploited in learning operators (Nelsen and Stuart, 2021) and dynamical systems (McDonald and Alvarez, 2021; Liu et al., 2022b).
### The neural feature space
We can treat each hidden neuron \(\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})\) as a non-linear feature map from the space of \(\mathbf{y}\in\mathbb{R}^{d}\) to the output space \(\mathbb{R}\). From the perspective of approximation theory, the set of hidden neurons \(\{\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})\}_{m=1}^{M}\) can be viewed as a globally supported basis in \(\mathbb{R}^{d}\). The _neural feature space_, denoted by \(\mathcal{P}_{\rm NN}\), can be defined by the linear space expanded by the basis \(\{\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})\}_{m=1}^{M}\), i.e.,
\[\mathcal{P}_{\rm NN}=span\Big{\{}1,\sigma(\mathbf{w}_{1}\mathbf{y}+b_{1}),\ldots, \sigma(\mathbf{w}_{M}\mathbf{y}+b_{M})\Big{\}}, \tag{3}\]
where the constant basis corresponds to the bias of the output layer. Then, the neural network in Eq. (2) lives in the linear space, i.e., \(u_{\rm NN}(\mathbf{y})\in\mathcal{P}_{\rm NN}\). In other words, the neural network approximation can be viewed as a spectral method with _non-orthogonal_ basis, and the parameters \(\mathbf{\alpha}\) in Eq. (2) of the output layer of \(u_{\rm NN}(\mathbf{y})\) contains the coefficients of the expansion in the neural feature space \(\mathcal{P}_{\rm NN}\).
In the PINN methods, the neural feature space \(\mathcal{P}_{\rm NN}\) and the coefficient \(\mathbf{\alpha}\) are trained simultaneously using stochastic gradient descent methods, which often leads to a non-convex and ill-conditioned optimization problem. It has been shown that the non-convexity and ill-conditioning in the neural network training are major reasons for the unsatisfactory accuracy of the trained neural network. A natural idea to reduce the complexity of the training is to decouple the training of \(\mathcal{P}_{\rm NN}\) from that of \(\mathbf{\alpha}\). For example, in random feature models, \(\mathcal{P}_{\rm NN}\) is defined by randomly generating the parameters \(\{(\mathbf{w}_{m},b_{m})\}_{m=1}^{M}\) from a user-defined probability distribution; the coefficients \(\mathbf{\alpha}\) can then be obtained by solving a linear system when the operators \(\mathcal{L}\), \(\mathcal{B}\) in Eq. (1) are linear. However, the numerical experiments in Section 4 show that the random feature model based on Eq. (2) converges very slowly as the number of features increases. This drawback motivates us to develop a methodology to customize the neural feature space \(\mathcal{P}_{\rm NN}\) to improve the accuracy, efficiency and transferability of \(u_{\rm NN}\) in solving PDEs.
### Constructing the transferable neural feature space
This section contains the key ingredients of the proposed TransNet. The goal is to construct a single neural feature space \(\mathcal{P}_{\rm NN}\) that can be used to solve various PDEs in different domains.
#### 3.3.1 Re-parameterization of \(\mathcal{P}_{\rm NN}\)
The first step is to re-parameterize the hidden neuron \(\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})\), viewed as a basis function in \(\Omega\), to separate the components that determine the _location_ of the neuron and the components that control the _shape_ of the neuron.
The idea of handling the locations of the basis functions is inspired by the studies on activation patterns of ReLU networks. When \(\sigma\) is the ReLU function, there is a _partition hyperplane_ defined by
\[w_{m,1}y_{1}+w_{m,2}y_{2}+\cdots+w_{m,d}y_{d}+b_{m}=0 \tag{4}\]
that separates the activated and inactivated regions for this neuron. The intersections of multiple partition hyperplanes associated with different neurons define a linear region of ReLU network. Studies have shown that the expressive power of a ReLU network is determined by the number of linear regions and the distribution of those linear regions. In principle, the more _uniformly distributed_ linear regions in the domain \(\Omega\), the more expressive power the ReLU network has. For other activation functions, e.g., \(tanh(\cdot)\) that is widely used in solving PDEs due to its smoothness, the partition hyperplane in Eq. (4) can be used to describe the geometric property of the neuron.
Specifically, let us re-write Eq. (4) into the following point-slope form:
\[\gamma_{m}\big{(}a_{m,1}(y_{1}-r_{m}a_{m,1})+\cdots+a_{m,d}(y_{d}-r_{m}a_{m,d}) \big{)}=0, \tag{5}\]
where \(\mathbf{a}_{m}=(a_{m,1},\ldots,a_{m,d})\) is a unit vector, i.e., \(\|\mathbf{a}_{m}\|_{2}=1\), \(r_{m}>0\) and \(\gamma_{m}\in\mathbb{R}\) are two scalar parameters for the \(m\)-th neuron. We can relate Eq. (5) to Eq. (4) by
\[\left\{\begin{aligned} w_{m,i}&=\gamma_{m}a_{m,i}, \quad i=1,\cdots,d,\\ b_{m}&=-\gamma_{m}\sum_{i=1}^{d}a_{m,i}^{2}r_{m}, \end{aligned}\right. \tag{6}\]
which shows the desired geometric properties of the partition hyperplane in Eq. (4). In terms of the location, the unit vector \(\mathbf{a}_{m}\) is the normal direction of the partition hyperplane in \(\mathbb{R}^{d}\), the vector \((r_{m}a_{m,1},\ldots,r_{m}a_{m,d})\) indicates a point that the hyperplane passes through, and \(r_{m}\) is the distance between the origin and the partition hyperplane. An illustration is shown in Figure 1(a). In terms of the shape, the constant \(\gamma_{m}\) determines the steepness of the slope of the activation function along the normal direction \(\mathbf{a}_{m}\). Thus, the re-parameterization in Eq. (5) successfully separates the parameters determining the location from those determining the shape.
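The mapping of Eq. (6) is mechanical; the following minimal sketch (the name `assemble_weights` is illustrative) assembles the conventional weights and biases from the location and shape parameters:

```python
import numpy as np

def assemble_weights(a: np.ndarray, r: np.ndarray, gamma: float):
    """a: (M, d) unit normals, r: (M,) distances to the origin, gamma: shape parameter."""
    w = gamma * a                          # w_{m,i} = γ_m a_{m,i}
    b = -gamma * r * (a ** 2).sum(axis=1)  # b_m = -γ_m r_m Σ_i a_{m,i}^2 (= -γ_m r_m for unit a_m)
    return w, b
```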
#### 3.3.2 Generating uniformly distributed neurons for \(\mathcal{P}_{\mathrm{NN}}\)
The second step of constructing \(\mathcal{P}_{\mathrm{NN}}\) is to determine the parameters \(\{(\mathbf{a}_{m},r_{m})\}_{m=1}^{M}\) in Eq. (5), such that all the neurons are uniformly distributed in \(\Omega\). We assume \(\Omega\) is a _unit ball_, i.e., \(B_{1}(\mathbf{0})=\{\mathbf{y}:\|\mathbf{y}\|_{2}\leq 1\}\subset\mathbb{R}^{d}\) in this subsection. To proceed, we need to define a density function that measures the neuron distribution. For a given \(\mathbf{y}\in\Omega\), the distance between \(\mathbf{y}\) and the partition hyperplane in Eq. (5) is given by
\[dist(\mathbf{y},m)=|\mathbf{a}_{m}(\mathbf{y}-r_{m}\mathbf{a}_{m})|, \tag{7}\]
for \(m=1,\ldots,M\). We use this distance to define how close the point \(\mathbf{y}\) to the \(m\)-th neuron. The density function, denoted by \(D_{M}(\mathbf{y})\), is defined using the above distance, i.e.,
\[D_{M}(\mathbf{y})=\frac{1}{M}\sum_{m=1}^{M}\mathbf{1}_{dist(\mathbf{y},m)<\tau}(\mathbf{y}), \tag{8}\]
where \(\mathbf{1}_{dist(\mathbf{y},m)<\tau}(\mathbf{y})\) is the indicator function of the event that the distance between \(\mathbf{y}\) and the \(m\)-th neuron is smaller than a prescribed tolerance \(\tau>0\). Intuitively, \(D_{M}(\mathbf{y})\) measures the percentage of neurons whose partition hyperplanes in Eq. (4) intersect the ball (of radius \(\tau\)) around \(\mathbf{y}\).
Next we propose the following approach, illustrated in Figure 1(b), to generate the parameters \(\{(\mathbf{a}_{m},r_{m})\}_{m=1}^{M}\). Specifically, we first generate the normal directions \(\{\mathbf{a}_{m}\}_{m=1}^{M}\) uniformly distributed on the \(d-1\)-dimensional unit sphere. Note that when \(d>2\), sampling uniformly in the angular space in the hyperspherical coordinate system does not lead to uniformly distributed samples on the unit sphere. This is known as the sphere point picking problem. To overcome this issue, we draw samples from the \(d\)-dimensional Gaussian distribution in the Cartesian coordinate system, and normalize the samples to unit vectors to obtain \(\{\mathbf{a}_{m}\}_{m=1}^{M}\). Then, we generate \(\{r_{m}\}_{m=1}^{M}\) uniformly from \([0,1]\) using the Monte Carlo method. The following theorem shows that our approach provides a set of uniformly distributed neurons in \(\Omega\), where the density is measured by \(D_{M}(\mathbf{y})\) in Eq. (8).
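The generation procedure, together with an empirical check of the uniform density claimed in the theorem below, can be sketched as follows (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, tau = 20000, 2, 0.05

a = rng.standard_normal((M, d))
a /= np.linalg.norm(a, axis=1, keepdims=True)   # uniform directions on the unit sphere
r = rng.uniform(0.0, 1.0, size=M)               # uniform distances in [0, 1]

def density(y: np.ndarray) -> float:
    # D_M(y): fraction of hyperplanes within distance tau of y, Eq. (8).
    dist = np.abs((a * (y - r[:, None] * a)).sum(axis=1))
    return float((dist < tau).mean())

print(density(np.zeros(d)), density(np.array([0.5, 0.5])))  # both ≈ tau = 0.05
```

Running this sketch for any point with \(\|\mathbf{y}\|_{2}\leq 1-\tau\) yields a density close to \(\tau\), consistent with the following theorem.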
**Theorem 1** (Uniform neuron distribution): _Given the re-parameterization in Eq. (5), if \(\{\mathbf{a}_{m}\}_{m=1}^{M}\) are uniformly distributed random vectors on the unit sphere in \(\mathbb{R}^{d}\), i.e., \(\|\mathbf{a}_{m}\|_{2}=1\), and \(\{r_{m}\}_{m=1}^{M}\) are uniformly distributed random variables in \([0,1]\), then, for a fixed \(\tau\in(0,1)\),_
\[\mathbb{E}[D_{M}(\mathbf{y})]=\tau\ \text{ for any }\|\mathbf{y}\|_{2}\leq 1-\tau,\]
_where \(D_{M}(\mathbf{y})\) is the density function defined in Eq. (8)._
The proof is given in Appendix A; an illustration of the density function is given in Figure 1(c). It is a little surprising that the points \(\{r_{m}\mathbf{a}_{m}\}_{m=1}^{M}\), i.e., the red dots in Figure 1(b)-middle, are not uniformly distributed in the ball \(B_{1-\tau}(\mathbf{0})\), yet the density function \(D_{M}(\mathbf{y})\) is constant in the ball \(B_{1-\tau}(\mathbf{0})\).
**Remark 1** (The dimensionality): _Even though Theorem 1 holds for any dimension \(d\), the number of neurons required to cover a high-dimensional unit ball could still be intractable. On the other hand, the majority of PDEs commonly used in science and engineering are defined in low-dimensional domains, e.g., a 3D spatial domain + a 1D time domain. In this scenario, the proposed method is effective and easy to implement, as demonstrated in Section 4._
#### 3.3.3 Tuning the shape of the neurons in \(\mathcal{P}_{\mathrm{NN}}\) using auxiliary functions
The third step is to tune the shape parameters \(\{\gamma_{m}\}_{m=1}^{M}\) in Eq. (5) that controls the slope of the activation function. The experimental tests in Section 4.1 show that the slope parameters play a critical role in determining the accuracy of the neural network approximator \(u_{\mathrm{NN}}\). For simplicity, we assume the same shape parameter value for all neurons, i.e., \(\gamma=\gamma_{m}\) for \(m=1,\ldots,M\). Because we intend to construct a feature space \(\mathcal{P}_{\mathrm{NN}}\) that can be used in multiple scenarios, e.g., various PDEs with different domains and boundary conditions, we do not want to tune the shape parameter \(\gamma\) using any information about a specific PDE.
Our idea is to use auxiliary functions that have similar or more complicated spatial-temporal variation frequency as
the PDE solution to tune \(\gamma\). Specifically, we propose to use realizations of Gaussian processes to generate the auxiliary functions. The advantage of a Gaussian process is that one can control the variation frequency of its realizations by adjusting the correlation length. Additionally, the Gaussian process is independent of the coordinate system. Let us denote by \(G(\mathbf{y}|\omega,\eta)\) the Gaussian process, where \(\omega\) represents the abstract random variable and \(\eta\) is the correlation length. Given a correlation length, we first generate a set of realizations of the Gaussian process, denoted by \(\{G(\mathbf{y}|\omega_{k},\eta)\}_{k=1}^{K}\). For each realization, we define the MSE loss as
\[\begin{split}\text{MSE}(u_{\mathrm{NN}}(\mathbf{y}),G(\mathbf{y}|\omega _{k},\eta))\\ =&\frac{1}{J}\sum_{j=1}^{J}\left[\sum_{m=1}^{M} \alpha_{m}\sigma(\mathbf{w}_{m}\mathbf{y}_{j}+b_{m})+\alpha_{0}-G(\mathbf{y}_{j}|\omega_{k}, \eta)\right]^{2},\end{split} \tag{9}\]
where the parameters \(\{\mathbf{w}_{m}\}_{m=1}^{M}\) and \(\{b_{m}\}_{m=1}^{M}\) are already determined using the strategy in Section 3.3.2 and Eq. (6), and \(J\) denotes the number of sample points. Unlike in standard neural network training, the optimal coefficient \(\mathbf{\alpha}\) that minimizes the MSE loss can be efficiently obtained by solving a least squares problem. Hence, the shape parameter \(\gamma\) can be tuned by solving the following one-dimensional optimization problem
\[\min_{\gamma}\left\{\sum_{k=1}^{K}\min_{\mathbf{\alpha}}\left[\text{MSE}(u_{ \mathrm{NN}}(\mathbf{y}),G(\mathbf{y}|\omega_{k},\eta))\right]\right\}, \tag{10}\]
where for each candidate \(\gamma\), we solve \(K\) least squares problems to compute the total loss.
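A minimal sketch of this tuning loop, assuming the sample grid `y`, the neuron parameters `a`, `r` and the Gaussian-process realizations `g_samples` have been generated beforehand; a simple grid search stands in for the one-dimensional optimizer:

```python
import numpy as np

def feature_matrix(y, a, r, gamma):
    # Columns: [1, tanh(γ a_m·(y - r_m a_m))]; since ||a_m|| = 1, the bias is -γ r_m.
    h = np.tanh(gamma * (y @ a.T - r * (a ** 2).sum(axis=1)))
    return np.concatenate([np.ones((y.shape[0], 1)), h], axis=1)

def total_loss(gamma, y, a, r, g_samples):
    phi = feature_matrix(y, a, r, gamma)
    loss = 0.0
    for g in g_samples:                          # one least squares per realization
        alpha = np.linalg.lstsq(phi, g, rcond=None)[0]
        loss += np.mean((phi @ alpha - g) ** 2)
    return loss

# One-dimensional search over candidate shape parameters:
# gammas = np.linspace(0.5, 8.0, 16)
# best_gamma = min(gammas, key=lambda g: total_loss(g, y, a, r, g_samples))
```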
**Remark 2** (The choice of the correlation length): _There are two strategies to choose the correlation length \(\eta\). One is to use prior knowledge about the PDE. For example, for the Navier-Stokes equations with a low Reynolds number, we know the solution will not have very high-frequency oscillations. The other is to use an over-killing correlation length to ensure that the feature space has sufficient expressive power to solve the target PDE._
### Applying TransNet to linear and nonlinear PDEs
Once the neural feature space \(\mathcal{P}_{\mathrm{NN}}\) is constructed and tuned, we can readily use it to solve PDE problems. Even though \(\mathcal{P}_{\mathrm{NN}}\) is defined on the unit ball, i.e., \(B_{1}(\mathbf{0})\), we can always place the (bounded) domain \(\Omega\) for the target PDE in \(B_{1}(\mathbf{0})\) by simple translation and dilation. Thus, the feature space can be used to handle PDEs defined in various domains, as demonstrated in Section 4.
**Linear PDEs.** When \(\mathcal{L}\) and \(\mathcal{B}\) in Eq. (1) are linear operators, the unknown parameters \(\mathbf{\alpha}=(\alpha_{0},\ldots,\alpha_{M})\) in Eq. (2) can be easily determined by solving the following least squares problem, i.e.,
\[\begin{split}\min_{\mathbf{\alpha}}&\left\{\frac{1}{J_ {1}}\sum_{j=1}^{J_{1}}\left[\sum_{m=1}^{M}\alpha_{m}\,\mathcal{L}(\sigma(\mathbf{ w}_{m}\mathbf{y}_{j}+b_{m}))+\alpha_{0}-f(\mathbf{y}_{j})\right]^{2}\right.\\ &+\left.\frac{1}{J_{2}}\sum_{j=1}^{J_{2}}\left[\sum_{m=1}^{M} \alpha_{m}\,\mathcal{B}(\sigma(\mathbf{w}_{m}\mathbf{y}_{j}+b_{m}))+\alpha_{0}-g(\mathbf{y }_{j})\right]^{2}\right\}\end{split} \tag{11}\]
where the parameters \(\{\mathbf{w}_{m}\}_{m=1}^{M}\) and \(\{b_{m}\}_{m=1}^{M}\) are first computed using the strategy in Section 3.3.2 and Eq. (6).
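For illustration, the following sketch assembles and solves Eq. (11) for a 2D Poisson problem \(-\Delta u=f\) in \(\Omega\) with Dirichlet data \(g\), using autograd to apply the operator to each frozen feature; the per-feature loop favors clarity over speed, and all names are illustrative assumptions:

```python
import torch

def solve_poisson(y_in, y_bc, f, g, W, b):
    """Least squares solve of Eq. (11): y_in (J1, 2) interior points,
    y_bc (J2, 2) boundary points, W (M, 2) and b (M,) frozen hidden parameters."""
    y = y_in.clone().requires_grad_(True)
    h = torch.tanh(y @ W.T + b)                        # (J1, M) frozen features
    lap = []
    for m in range(h.shape[1]):                        # Laplacian of each feature
        g1 = torch.autograd.grad(h[:, m].sum(), y, create_graph=True)[0]
        lap.append(sum(torch.autograd.grad(g1[:, i].sum(), y, retain_graph=True)[0][:, i]
                       for i in range(2)))
    A_in = torch.cat([-torch.stack(lap, dim=1),        # rows: -Δσ(w_m·y + b_m)
                      torch.zeros(y_in.shape[0], 1, dtype=y_in.dtype)], dim=1).detach()
    A_bc = torch.cat([torch.tanh(y_bc @ W.T + b),      # boundary rows: σ(w_m·y + b_m)
                      torch.ones(y_bc.shape[0], 1, dtype=y_bc.dtype)], dim=1)
    A = torch.cat([A_in, A_bc])
    rhs = torch.cat([f, g]).unsqueeze(1)
    return torch.linalg.lstsq(A, rhs).solution.squeeze(1)  # output-layer coefficients
```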
**Nonlinear PDEs.** When one or both operators, \(\mathcal{L}\) and \(\mathcal{B}\), are nonlinear, there are two approaches to handle the situation. The first way is to wrap the least squares problem with a well established nonlinear iterative solver, e.g., Picard's methods, to solve the PDE. Within each iteration, the PDE is linearized such that we can update the coefficient \(\mathbf{\alpha}\) by solving the least squares problem as mentioned above. When there is sufficient knowledge to choose a
Figure 1: **(a)** Illustrates how the re-parameterization in Eq. (5) characterizes the location of a neuron. The blue line is the plane where \(\tanh(\cdot)=0\), \(\mathbf{a}_{m}\) (the arrow) is the normal direction of the plane, the red dot is the location \(r_{m}\mathbf{a}_{m}\) that the plane passes, \(r_{m}\) is the distance between the origin and the plane. **(b)** illustrates how to generate uniformly distributed neurons in the unit ball. The first step in **(b)-left** is to generate the normal directions \(\{\mathbf{a}_{m}\}_{m=1}^{M}\) uniformly distributed on unit sphere; the second step in **(b)-middle** is to generated \(\{r_{m}\}_{m=1}^{M}\) uniformly from \([0,1]\) defining the locations the neurons’ partition hyperplanes will pass; the blue lines in **(b)-right** show the distribution of the partition hyperplanes. **(c)** shows the density function \(D_{M}(\mathbf{y})\) with \(\tau=0.05\) in Eq. (8) for a set of neurons generated using our approach. We can see that our approach provides a uniformly distributed neurons in the ball \(B_{1-\tau}(\mathbf{0})\), which is consistent with Theorem 1.
proper nonlinear solver, we prefer this approach because the well-established theory on nonlinear solvers can ensure a good convergence rate. Thus, we in fact adopt this approach for the numerical experiments in this paper. The second feasible approach is to wrap a gradient descent optimizer around the total loss \(L(\mathbf{y})=\|\mathcal{L}(u(\mathbf{y}))-\mathcal{L}(u_{\mathrm{NN}}(\mathbf{y}))\|_{2}^{2 }+\|\mathcal{B}(u(\mathbf{y}))-\mathcal{B}(u_{\mathrm{NN}}(\mathbf{y}))\|_{2}^{2}\). Because the neural feature space \(\mathcal{P}_{\mathrm{NN}}\) is fixed, the optimization will be simpler than training the entire neural network from scratch. This approach is easier to implement and suitable for scenarios in which standard nonlinear solvers do not provide a satisfactory solution.
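As an illustration of the first approach, the sketch below applies Picard iteration to a viscous Burgers-type equation \(u u_{x}-\nu u_{xx}=f\) (chosen only as an example); the frozen feature matrices and their derivatives are assumed to be precomputed:

```python
import numpy as np

def picard_solve(Phi, Phi_x, Phi_xx, Phi_bc, f, g, nu=0.01, max_iter=50, tol=1e-10):
    """Picard iteration for u u_x - ν u_xx = f with boundary data g.

    Phi, Phi_x, Phi_xx: (J1, M+1) frozen features and their x-derivatives at interior points.
    Phi_bc:             (J2, M+1) features at boundary points.
    """
    alpha = np.zeros(Phi.shape[1])
    for _ in range(max_iter):
        u_prev = Phi @ alpha                     # linearize around the previous iterate
        A = np.vstack([u_prev[:, None] * Phi_x - nu * Phi_xx, Phi_bc])
        rhs = np.concatenate([f, g])
        alpha_new = np.linalg.lstsq(A, rhs, rcond=None)[0]
        if np.linalg.norm(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha
```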
**Remark 3** (Not using PDE's solution data): _In this work, we do not rely on any measurement data of the solution \(u(\mathbf{y})\) when using TransNet to solve PDEs, because the operators \(\mathcal{L}\) and \(\mathcal{B}\) in Eq. (1) are sufficient to ensure the existence and uniqueness of the PDE's solution. On the other hand, if any extra data of \(u(\mathbf{y})\) are available, TransNet can easily incorporate it into the least squares problem in Eq. (11) as a supervised learning loss._
### Complexity and accuracy of TransNet
The complexity of TransNet is greatly reduced compared to the scenario of using SGD to train the entire network. The construction of the neural feature space \(\mathcal{P}_{\mathrm{NN}}\) only involves random number generation and a simple one-dimensional optimization in Eq. (10). Moreover, these costs are incurred completely offline, and the constructed \(\mathcal{P}_{\mathrm{NN}}\) is transferable to various PDE problems. The online operation for solving linear PDEs only requires solving one least squares problem, where the assembly of the least squares matrix can be efficiently done using the autograd function in Tensorflow or Pytorch. The numerical experiments in Section 4 show that the accuracy and efficiency of TransNet are significantly improved compared with several baseline methods, because our method does not suffer from the slow convergence of SGD in neural network training.
## 4 Numerical experiments
We now demonstrate the performance of TransNet by testing several classic steady-state or time-dependent PDEs in two and three dimensional spaces. In Section 4.1, we illustrate how to construct the transferable feature space \(\mathcal{P}_{\mathrm{NN}}\). To test and demonstrate the transferability of our model, we build and test two neural features spaces, one for the 2D case and the other for the 3D case1. The constructed feature spaces are then used in Section 4.2 to solve the model PDE problems.
Footnote 1: Note that the dimension of the feature space is the sum of the space and time dimensions, since the feature space does not distinguish between them.
### Uniform neuron distribution
This experiment is to use and test the algorithm proposed in Section 3.3 to construct transferable neural feature spaces \(\mathcal{P}_{\mathrm{NN}}\) in the 2D and 3D unit balls. We tune the shape parameter \(\gamma=\gamma_{m}\) for \(m=1,\dots,M\) in Eq. (5) with \(K=50\) realizations of the Gaussian process. In addition, we also test the effect of the correlation length and the number of hidden neurons by setting different values for \(\eta\) and \(M\). For each setting of \(\eta\) and \(M\), the shape parameter \(\gamma\) is tuned separately. Additional information about the experiment setup is given in Appendix B.
Figure 2 illustrates the landscapes of the loss function \(\sum_{k=1}^{K}\min_{\mathbf{\alpha}}[\text{MSE}(u_{\mathrm{NN}}(\mathbf{y}),G(\mathbf{y}| \omega_{k},\eta))]\) of the optimization problem in Eq. (10) for 2D and 3D neural feature spaces. We report the results for two correlation lengths (\(\eta=0.5\) and \(\eta=1.0\)) combined with three numbers of hidden neurons (\(M=100,500,1000\) for 2D and \(M=500,1000,5000\) for 3D). We observe that the loss function behaves roughly like a parabolic curve for a fixed number of hidden neurons, so that the problem in Eq. (10) can be solved by a simple solver for one-dimensional optimization. More importantly, we observe that the optimal value for \(\gamma\) varies with the number of hidden neurons. This provides an important insight that tuning \(\gamma\) is a necessary operation to achieve optimal accuracy of \(u_{\mathrm{NN}}\) when changing the number of hidden neurons.
Figure 2: The loss landscapes of the optimization problem in Eq. (10) for tuning the shape parameter \(\gamma\) of the feature space \(\mathcal{P}_{\mathrm{NN}}\) in the two and three dimensional cases. The blue star is the optimal value for \(\gamma\) found by our method. It shows that the optimal value for \(\gamma\) varies with the number of hidden neurons, meaning that tuning \(\gamma\) is a necessary operation to achieve optimal accuracy of \(u_{\mathrm{NN}}\) when changing the number of hidden neurons.
Figure 3 illustrates the error distribution when using TransNet to approximate three realizations of the Gaussian process with correlation length \(\eta=0.5\) in the 2D unit ball. Even though the purpose of TransNet is not to approximate the Gaussian process, it is interesting to check whether the uniform density \(D_{M}(\mathbf{y})\) (proved in Theorem 1) leads to a uniform error distribution. We use 1000 hidden neurons and the shape parameter \(\gamma\) is set to 2. The bottom row of Figure 3 shows that the MSE is distributed uniformly in the unit ball, which demonstrates the effectiveness of the feature space generation method proposed in Section 3.3.
### PDE examples
We then use the constructed 2D and 3D neural feature spaces from Section 4.1 to solve two steady-state PDEs (i.e., the Poisson equation and the time-independent Navier-Stokes equation) and two time-dependent PDEs (i.e., the Fokker-Planck equation and the wave equation). The definitions of the PDEs under consideration are given in Appendix C. We perform the following testing cases:
* \((C_{1})\) Poisson equation (2D space) in a box domain;
* \((C_{2})\) Poisson equation (2D space) in a circular domain;
* \((C_{3})\) Poisson equation (2D space) in an L-shaped domain;
* \((C_{4})\) Poisson equation (2D space) in an annulus domain;
* \((C_{5})\) Poisson equation (3D space) in a box domain;
* \((C_{6})\) steady-state Navier-Stokes equation (2D space);
* \((C_{7})\) Fokker-Planck equation (1D space + 1D time);
* \((C_{8})\) Fokker-Planck equation (2D space + 1D time);
* \((C_{9})\) wave equation (1D space + 1D time)
to demonstrate the transferability of TransNet in solving various PDEs in different domains. Recall that for time-dependent PDEs, the temporal variable is simply treated as an extra dimension, so that we will use the 2D feature space to solve problems \((C_{7})\) and \((C_{9})\) and the 3D feature space to solve problem \((C_{8})\). We compare our method with two baseline methods, i.e., the random feature model and the PINN. All the methods use the same network architecture, i.e., Eq. (2) with the \(tanh\) activation. Additional information about the setup of the experiments is given in Appendix D.
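Because the feature space is fixed, each of these solves reduces to a single linear least squares problem. The sketch below illustrates this for a 2D Poisson problem \(-\Delta u=f\) with Dirichlet data on the unit box (a manufactured-solution variant of case \((C_{1})\)). It is our own minimal reconstruction, not the reference implementation: the random unit-norm weights and uniform biases are a placeholder for the uniform-density construction of Section 3.3, and the way \(\gamma\) enters the features is the same assumption as in the previous sketch. With \(\phi_m(\mathbf{y})=\tanh(z_m)\), \(z_m=\gamma(\mathbf{w}_m\cdot\mathbf{y}+b_m)\), and \(|\mathbf{w}_m|=1\), the Laplacian of each feature is \(\Delta\phi_m=-2\gamma^{2}\tanh(z_m)(1-\tanh^{2}(z_m))\).

```python
import numpy as np

def solve_poisson_transnet(W, b, gamma, x_in, x_bc, f_in, g_bc):
    # Least-squares collocation solve of -Laplacian(u) = f with u = g on the
    # boundary, using the fixed feature space tanh(gamma * (W y + b)) with
    # unit-norm rows of W (so |w_m|^2 = 1 in the Laplacian below).
    t = np.tanh(gamma * (x_in @ W.T + b))
    lap = -2.0 * gamma**2 * t * (1.0 - t**2)       # Laplacian of each feature
    A = np.vstack([-lap, np.tanh(gamma * (x_bc @ W.T + b))])
    rhs = np.concatenate([f_in, g_bc])
    alpha, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return alpha

# Manufactured test: u(x, y) = sin(pi x) sin(pi y), so f = 2 pi^2 u.
rng = np.random.default_rng(0)
M = 500
W = rng.standard_normal((M, 2)); W /= np.linalg.norm(W, axis=1, keepdims=True)
b = rng.uniform(-1.0, 1.0, M)
x_in = rng.uniform(0.0, 1.0, (2000, 2))
s = rng.uniform(0.0, 1.0, 400)
x_bc = np.concatenate([np.stack([s, np.zeros_like(s)], 1),
                       np.stack([s, np.ones_like(s)], 1),
                       np.stack([np.zeros_like(s), s], 1),
                       np.stack([np.ones_like(s), s], 1)])
u = lambda y: np.sin(np.pi * y[:, 0]) * np.sin(np.pi * y[:, 1])
alpha = solve_poisson_transnet(W, b, 2.0, x_in, x_bc,
                               2.0 * np.pi**2 * u(x_in), u(x_bc))
mse = np.mean((np.tanh(2.0 * (x_in @ W.T + b)) @ alpha - u(x_in)) ** 2)
```

No gradient descent appears anywhere in the solve, which is the source of the timing gap reported in Table 1.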
Figure 4 shows the MSE decay as the number of hidden neurons increases, where the number of hidden neurons is chosen as \(M\) = 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 for the 2D feature space, and \(M=\) 1000, 2000, 3000, 4000, 5000 for the 3D feature space. We observe that our TransNet achieves a superior performance for all nine test cases, which demonstrates the outstanding transferability of TransNet. PINN with BFGS acceleration provides a good accuracy gain compared with PINN with Adam, which means the landscape of the PDE loss exhibits severe ill-conditioning as the SGD method approaches the minimizer2. In comparison, TransNet does not require SGD in solving the PDEs, so that TransNet does not suffer from the slow convergence of SGD used in PINN.
Footnote 2: BFGS can alleviate ill-conditioning by exploiting the second-order information, e.g., the approximate Hessian.
Figure 5 shows the density function \(D_{M}(\mathbf{y})\) in Eq. (8) of the feature spaces obtained by training PINN and the random feature models in solving the Poisson equation in the 2D space, i.e., case \((C_{1})\) - \((C_{4})\), where the constant \(\tau\) in Eq. (8) is set to 0.2. Compared with TransNet's uniform density shown in Figure 1(c), the feature spaces obtained by the baseline methods have highly non-uniform densities in the domain of computation. The random feature models tend to have higher density, i.e., more hidden neurons, near the center of the domain. The first row in Figure 5 can be viewed as the initial densities of the feature space for PINN; the second and the third rows are the final densities. We can see that the training of PINN does not necessarily lead to a more uniform density function \(D_{M}(\mathbf{y})\), which is one of the reasons why PINN cannot exploit the full expressive power of the neural network \(u_{\mathrm{NN}}\).
## 5 Conclusion
We propose a transferable neural network model to advance the state of the art of using neural networks to solve PDEs. The key ingredient is to construct a neural feature space independent of any PDE, which makes it easy to transfer the neural feature space to various PDEs in different domains. Moreover, because the feature space is in fact fixed when using TransNet to solve a PDE, we only need to solve linear least squares problems, which avoids the drawbacks of SGD-based training algorithms, e.g., ill-conditioning. Numerical experiments show that the proposed TransNet can exploit more expressive power of a given neural network than the compared baselines. This work is a first step in this research direction, and there are multiple related topics that will be studied in our future work, including (1) _theoretical analysis of the convergence rate of TransNet in solving PDEs._ We observe in Figure 4 that the MSE of TransNet decays as the number of hidden neurons increases. A natural question to study is whether TransNet can achieve the optimal convergence rate of the single-hidden-layer fully-connected neural network. (2) _Extension to multi-layer neural networks._ Even though the single-hidden-layer model has sufficient expressive power for the PDEs tested in this work, there are more complicated PDEs, e.g., turbulence models, that could require multi-layer models with much higher expressive power. (3) _The properties of the least squares problem._ In this work, we use the standard least squares solver of PyTorch in the numerical experiments. However, the properties of this specific least squares problem are worth further investigation. For example, since the set of neurons \(\{\sigma(\mathbf{w}_{m}\mathbf{y}+b_{m})\}_{m=1}^{M}\) forms a non-orthogonal basis, it is possible to have linearly correlated neurons, which will reduce the column rank of the least squares matrix or even lead to an under-determined system. This will require the use of some regularization techniques, e.g., ridge regression, to stabilize the least squares system. Additionally, compressed sensing, i.e., \(\ell_{1}\) regularization, could be added to remove redundant neurons from the feature space as needed and obtain a sparse neural network.

Figure 3: Top row: three realizations of the auxiliary Gaussian process with the correlation length \(\eta=0.5\). Bottom row: the distribution of the MSE of TransNet's approximation with 1000 hidden neurons. Thanks to the feature space with the uniform density in the 2D unit ball (illustrated in Figure 1(c)), we obtain a TransNet approximation with very small MSE fluctuation.
| | \((C_{1})\) | \((C_{2})\) | \((C_{3})\) | \((C_{4})\) | \((C_{5})\) | \((C_{6})\) | \((C_{7})\) | \((C_{8})\) | \((C_{9})\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random feature model | 0.25s | 0.22s | 0.22s | 0.19s | 0.96s | 12.85s | 0.92s | 1.21s | 0.47s |
| PINN: Adam | 29.69s | 25.34s | 24.57s | 22.24s | 110.59s | 69.73s | 61.45s | 97.12s | 49.25s |
| PINN: Adam+BFGS | 125.78s | 121.46s | 120.93s | 119.24s | 264.62s | 191.53s | 172.86s | 178.99s | 152.71s |
| TransNet | 0.27s | 0.20s | 0.20s | 0.17s | 1.03s | 11.14s | 0.97s | 1.27s | 0.51s |

Table 1: The computing times of TransNet and the baselines in solving the nine PDE test cases with 1000 hidden neurons. TransNet and the random feature model are significantly faster than PINN because SGD is not required in them.
Figure 4: The MSE decay as the number of hidden neurons increases for \((C_{1})\) to \((C_{9})\), where all the methods use the same network architecture. Our TransNet significantly outperforms the baseline methods in two aspects: (i) _Transferability_: for a fixed number of hidden neurons, TransNet only needs one 2D feature space and one 3D feature space; (ii) _Accuracy_: TransNet achieves several orders of magnitude smaller MSE than PINN and the random feature models. TransNet does not suffer from the slow convergence of SGD-based neural network training, and can exploit more expressive power of a given neural network \(u_{\mathrm{NN}}\) to obtain more accurate PDE solutions.
Figure 5: The density function \(D_{M}(\mathbf{y})\) with \(\tau=0.2\) in Eq. (8) of the neural feature spaces obtained by training PINN and the random feature models in solving the Poisson equation in the 2D space, i.e., problems \((C_{1})\) - \((C_{4})\). Compared to the uniform density of TransNet in Figure 1(c), neither PINN nor the random feature model can provide feature spaces with uniform density, which is one explanation of their under-performance shown in Figure 4.
## Acknowledgement
This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics Program, under the contract number ERKJ387. This work was accomplished at Oak Ridge National Laboratory (ORNL). ORNL is operated by UT-Battelle, LLC., for the U.S. Department of Energy under Contract DE-AC05-00OR22725.
|
2302.02563 | Stochastic Gradient Descent-Induced Drift of Representation in a
Two-Layer Neural Network | Representational drift refers to over-time changes in neural activation
accompanied by a stable task performance. Despite being observed in the brain
and in artificial networks, the mechanisms of drift and its implications are
not fully understood. Motivated by recent experimental findings of
stimulus-dependent drift in the piriform cortex, we use theory and simulations
to study this phenomenon in a two-layer linear feedforward network.
Specifically, in a continual online learning scenario, we study the drift
induced by the noise inherent in the Stochastic Gradient Descent (SGD). By
decomposing the learning dynamics into the normal and tangent spaces of the
minimum-loss manifold, we show the former corresponds to a finite variance
fluctuation, while the latter could be considered as an effective diffusion
process on the manifold. We analytically compute the fluctuation and the
diffusion coefficients for the stimuli representations in the hidden layer as
functions of network parameters and input distribution. Further, consistent
with experiments, we show that the drift rate is slower for a more frequently
presented stimulus. Overall, our analysis yields a theoretical framework for
better understanding of the drift phenomenon in biological and artificial
neural networks. | Farhad Pashakhanloo, Alexei Koulakov | 2023-02-06T04:56:05Z | http://arxiv.org/abs/2302.02563v2 | # Stochastic Gradient Descent-induced drift of representation in a two-layer neural network
###### Abstract
Representational drift refers to over-time changes in neural activation accompanied by a stable task performance. Despite being observed in the brain and in artificial networks, the mechanisms of drift and its implications are not fully understood. Motivated by recent experimental findings of stimulus-dependent drift in the piriform cortex, we use theory and simulations to study this phenomenon in a two-layer linear feedforward network. Specifically, in a continual learning scenario, we study the drift induced by the noise inherent in the Stochastic Gradient Descent (SGD). By decomposing the learning dynamics into the normal and tangent spaces of the minimum-loss manifold, we show the former corresponds to a finite variance fluctuation, while the latter could be considered as an effective diffusion process on the manifold. We analytically compute the fluctuation and the diffusion coefficients for the stimuli representations in the hidden layer as a function of network parameters and input distribution. Further, consistent with experiments, we show that the drift rate is slower for a more frequently presented stimulus. Overall, our analysis yields a theoretical framework for better understanding of the drift phenomenon in biological and artificial neural networks.
Machine Learning, ICML, Neural Networks
## 1 Introduction
Representational drift has been observed across different parts of the nervous system, such as in the hippocampus (Ziv et al., 2013), sensorimotor (Rule et al., 2019), and the visual systems (Deitch et al., 2021; Marks & Goard, 2021). A recent study in the piriform cortex, a brain structure processing information about smells, demonstrated drift of odorant representations despite stable odor identification (Schoonover et al., 2021). Such drift was characterized by a gradual and across-days decay in the self-similarity of the stimulus representation. Additionally, in the same study, the drift was shown to be stimulus dependent: a more familiar stimulus drifts at a smaller rate (Schoonover et al., 2021).
The mechanisms and implications of the drift in the brain and artificial neural networks are still under investigation (see recent reviews by Masset et al. (2022), Rule et al. (2019), and Driscoll et al. (2022)). Recent modeling studies have shown that drift could happen in the presence of synaptic or other types of noise (Qin et al., 2023; Aitken et al., 2022). Using simulations of neural networks, Aitken et al. (2022) showed that different noise types injected during training could lead to drift with qualitatively different patterns and geometries. However, no theoretical consideration of the drift was provided in that study. Qin et al. (2023) studied the drift in a network with a similarity-matching objective and a Hebbian/anti-Hebbian learning rule. They showed that noisy synaptic updates could lead to a random-walk exploration of the solution space. Other mechanisms have been suggested on how a stable readout could be performed despite an evolving population code (Rule & O'Leary, 2022; Rule et al., 2020; Kalle Kossio et al., 2021).
One important source of noise during learning for both natural and artificial neural networks is the sampling noise that arises from the stochasticity in observing the data. It is therefore natural to ask if and how this type of noise can lead to drift, and whether it can explain the stimulus-dependency of the rate of the drift observed in experiments. Here, we aim to answer these questions in a two layer linear neural network model that undergoes online continual learning via Stochastic Gradient Descent (SGD). In addition to being common in machine learning, feed-forward networks are a reasonable first approximation to sensory (olfactory) processing in the brain. Specifically, we found the two layer linear network to be one of the simplest models that enables studying the representational drift in the hidden layer, and yet allows for analytical tractability.
## 2 Model and Theory
A multilayer network including \(L\) layers can be described by a set of weight matrices \(\mathbf{W}^{(l)}\), where index \(l\) enumerates the individual layers. The network can be represented by a vector in a vector space constructed from non-commuting weight matrices:
\[\mathbf{\theta}=\left(\mathbf{W}^{(1)},\mathbf{W}^{(2)},...,\mathbf{W}^{(L)}\right). \tag{1}\]
Our goal is to study the dynamics of learning and the drift in this space for a two-layer neural network described in Section 2.1. We derive the manifold of stable minimum-loss (Section 2.2), and study its first order differential geometry (Section 2.3). We use the Euclidean inner product in this space, which means for two vectors \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\), we have:
\[\mathbf{\theta}_{1}^{T}\mathbf{\theta}_{2}=\sum_{l}\mathrm{tr}\Big{(}\mathbf{W}_{1}^{(l) T}\bar{\mathbf{W}}_{2}^{(l)}\Big{)}. \tag{2}\]
By characterizing the movements normal and tangential to the manifold (Sections 2.4.1 and 2.4.2), we derive an effective diffusion process on the manifold and calculate the corresponding diffusion ("drift") rates for the representations (Section 2.4.3). Finally, in Sections 3 and 4 we study the dependency of the drift on the input statistics in two cases of an isotropic Gaussian stimuli, and a case with a frequently presented stimulus. Our results are further validated by numerical simulations.
### Neural network model
Our model consists of a linear feed-forward neural network with an expansive hidden layer, as shown in Figure 1. The input to the network represents the external stimulus (\(\mathbf{x}_{n\times 1}\)), the hidden layer activation is considered as the representation of the stimulus (\(\mathbf{h}_{p\times 1}\), e.g. neural activities in the piriform cortex), and the output is the task outcome (\(\mathbf{y}_{m\times 1}\), e.g. the percept or the identity of a stimulus). The two weight matrices are \(\mathbf{U}=\mathbf{W}^{(1)}\) and \(\mathbf{W}=\mathbf{W}^{(2)}\) respectively, and the predicted output is \(\mathbf{\hat{y}}=\mathbf{W}\mathbf{U}\mathbf{x}\). Additionally, in the parameter space the network is denoted by \(\mathbf{\theta}=(\mathbf{U},\mathbf{W})\).
We consider a continual online learning scenario in which the network sees one sample at a time taken independently from a fixed data distribution. The objective function consists of a Mean Squared Error (MSE) loss and L2-regularization. Hence, the sample loss becomes:
\[l(\mathbf{x},\mathbf{y};\mathbf{\theta})=\frac{1}{2}\|\mathbf{y}-\mathbf{W}\mathbf{U} \mathbf{x}\|^{2}+\frac{\gamma}{2}\|\mathbf{W}\|_{F}^{2}+\frac{\gamma}{2}\|\mathbf{U}\|_{F }^{2}, \tag{3}\]
where \(\gamma\) is the regularization coefficient. Further, we assume \(\mathbf{y}=\mathbf{x}\), which means the goal of the network is to learn the identity mapping from the input to the output. Note this essentially becomes an autoencoder but with an expansive hidden layer (\(p\geq n=m\)). Finally, the learning occurs via the SGD with a minibatch size of one. The update equation upon observing sample \(\mathbf{x}\) is:
\[\Delta\mathbf{\theta}=-\eta\mathbf{g}(\mathbf{x};\mathbf{\theta}), \tag{4}\]
where \(\mathbf{g}(\mathbf{x};\mathbf{\theta})=\nabla_{\mathbf{\theta}}l(\mathbf{x};\mathbf{\theta})\) is the gradient vector, and \(\eta\) is the learning rate. In terms of the weight matrices, this is equivalent to \(\Delta\mathbf{W}=-\eta\nabla_{\mathbf{W}}l\) and \(\Delta\mathbf{U}=-\eta\nabla_{\mathbf{U}}l\).
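As a concrete illustration of Eqs. (3)-(4), the following minimal sketch (our own, with the hyperparameter values \(\eta=0.005\), \(\gamma=0.04\) borrowed from Figure 3, and standard Gaussian inputs as a placeholder for the stimulus distribution) implements the per-sample gradients of the regularized MSE loss and the online SGD loop with minibatch size one.

```python
import numpy as np

def sample_gradient(U, W, x, gamma):
    # Per-sample gradients of the loss in Eq. (3) with y = x:
    # l = 0.5*||x - W U x||^2 + (gamma/2)*(||W||_F^2 + ||U||_F^2).
    r = x - W @ (U @ x)                      # reconstruction residual
    gU = -W.T @ np.outer(r, x) + gamma * U   # dl/dU
    gW = -np.outer(r, U @ x) + gamma * W     # dl/dW
    return gU, gW

def train_online(n=10, p=20, eta=0.005, gamma=0.04, steps=200_000, seed=0):
    # Continual online SGD with minibatch size one, as in Eq. (4).
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((p, n))
    W = 0.1 * rng.standard_normal((n, p))
    for _ in range(steps):
        x = rng.standard_normal(n)
        gU, gW = sample_gradient(U, W, x, gamma)
        U -= eta * gU
        W -= eta * gW
    return U, W
```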
### Degeneracy and the manifold of solutions
One of the conditions that make representational drift possible is the redundancy of the parameters in achieving an optimal task performance. In our model, this could be identified by the rotational symmetry in the network, as for any orthogonal \(\mathbf{Q}_{p\times p}\), the transformation \(\mathbf{\tilde{W}}\rightarrow\mathbf{\tilde{W}}\mathbf{Q}\) and \(\mathbf{\tilde{U}}\rightarrow\mathbf{Q}^{-1}\mathbf{\tilde{U}}\) leaves the loss unchanged. We aim to study whether and how the space of redundant parameters is explored due to the online learning stochasticity, and from that, characterize the rate and the geometry of the drift for the representations.
We define the _manifold of solutions_, \(\mathcal{M}\), to be the set of stable critical points in the parameter space. This manifold represents the redundancy in the model, as all the points on it have the same expected loss value \(L(\mathbf{\theta})=\langle l(\mathbf{\theta})\rangle_{x}\), and hence are equally preferable (note that \(\langle.\rangle_{x}\) is the expectation over the input distribution). We analytically derive \(\mathcal{M}\) in the following theorem as a function of the input distribution, and for the rest of the paper refer to it as the _manifold_.
**Theorem 2.1**.: (Manifold) _The manifold of solutions for learning the identity map (\(\mathbf{y}=\mathbf{x}\)) is:_
\[\mathcal{M}:\{\tilde{\mathbf{\theta}}=(\tilde{\mathbf{U}},\tilde{\mathbf{W}} )\mid\tilde{\mathbf{W}}\tilde{\mathbf{W}}^{T}=\mathbf{I}_{n}-\gamma\mathbf{\Sigma}_{\mathbf{x}}^{ -1},\tilde{\mathbf{U}}=\tilde{\mathbf{W}}^{T}\}, \tag{5}\]
_where \(\mathbf{\Sigma}_{\mathbf{x}}=\langle\mathbf{x}\mathbf{x}^{T}\rangle_{x}\) is the second order moment of the input distribution with eigenvalues that are assumed to be greater than \(\gamma\)._
The above can be proven by first finding the critical points of the expected loss satisfying \(\nabla_{\mathbf{U}}L=\nabla_{\mathbf{W}}L=0\), and from those finding the stable solutions (see Section A.1 in the Appendix).
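Theorem 2.1 can also be checked numerically. The sketch below (an illustrative check of ours, using \(\mathbf{\Sigma}_{\mathbf{x}}=\mathbf{I}_{n}\) and hypothetical sizes) constructs a point satisfying Eq. (5) via \(\tilde{\mathbf{W}}=(\mathbf{I}-\gamma\mathbf{\Sigma}_{\mathbf{x}}^{-1})^{1/2}[\mathbf{I}\;\mathbf{0}]\mathbf{Q}\) for a random orthogonal \(\mathbf{Q}\), and verifies that the Monte Carlo estimate of the expected gradient vanishes there; it reuses `sample_gradient` from the previous sketch.

```python
import numpy as np

def point_on_manifold(Sigma, gamma, p, rng):
    # Eq. (5): W = (I - gamma*Sigma^{-1})^{1/2} [I 0] Q with Q orthogonal,
    # so that W W^T = I - gamma*Sigma^{-1} and U = W^T.
    n = Sigma.shape[0]
    s, V = np.linalg.eigh(Sigma)             # assumes s_i > gamma
    root = V @ np.diag(np.sqrt(1.0 - gamma / s)) @ V.T
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    W = root @ np.eye(n, p) @ Q
    return W.T, W                            # (U_tilde, W_tilde)

rng = np.random.default_rng(1)
n, p, gamma, n_samples = 5, 12, 0.05, 100_000
U, W = point_on_manifold(np.eye(n), gamma, p, rng)
gU_sum, gW_sum = np.zeros_like(U), np.zeros_like(W)
for _ in range(n_samples):
    x = rng.standard_normal(n)
    dU, dW = sample_gradient(U, W, x, gamma)  # from the sketch above
    gU_sum += dU
    gW_sum += dW
# Both averages should be ~0, up to Monte Carlo noise of order 1/sqrt(n_samples).
print(np.abs(gU_sum).max() / n_samples, np.abs(gW_sum).max() / n_samples)
```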
Figure 1: (left) Neural network model. (right) Manifold of minimum-loss (\(\mathcal{M}\)) in the parameter space.
### Differential geometry of the manifold
To study the learning dynamics near the manifold, we will characterize its first order differential geometry. This is done by finding the local tangent and normal spaces (\(T_{\tilde{\mathbf{\theta}}}\mathcal{M}\) and \(N_{\tilde{\mathbf{\theta}}}\mathcal{M}\), respectively), as shown in the following lemmas. Note the results of the lemmas are not directly used in the rest of the main paper, and we only mention them here for completeness (see also Appendix Section A.2).
**Lemma 2.2**.: (Tangent space) _The local tangent space to the manifold at point \((\tilde{\mathbf{W}}^{T},\tilde{\mathbf{W}})\) is spanned by vectors \(\mathbf{t}=(\mathbf{T}_{\mathbf{W}}^{T},\mathbf{T}_{\mathbf{W}})\) with \(\mathbf{T}_{\mathbf{W}}=\tilde{\mathbf{W}}\tilde{\mathbf{W}}^{T}\mathbf{\Omega}\tilde{\mathbf{W}}+\mathbf{K }\tilde{\mathbf{W}}_{\perp}\), where \(\mathbf{\Omega}\) is an arbitrary \(n\times n\) skew-symmetric matrix, \(\mathbf{K}\) is an arbitrary \(n\times(p-n)\) matrix, and \(\tilde{\mathbf{W}}_{\perp}\) is a full-rank \((p-n)\times p\) matrix whose rows are orthogonal to rows of \(\tilde{\mathbf{W}}\)._
With the definition of the inner product in Eq.2, the normal space can be defined, which is described in the next lemma.
**Lemma 2.3**.: (Normal space) _The local normal space to the manifold at point \((\tilde{\mathbf{W}}^{T},\tilde{\mathbf{W}})\) is spanned by vectors \(\mathbf{n}=(\mathbf{M},\mathbf{S}\tilde{\mathbf{W}}-\mathbf{M}^{T})\), where \(\mathbf{S}\) is an arbitrary \(n\times n\) symmetric matrix and \(\mathbf{M}\) is an arbitrary \(p\times n\) matrix._
We will use the above lemmas to find tangential and normal projection operators, \(\Pi_{T}(.)\) and \(\Pi_{N}(.)\), which project an arbitrary vector to the tangent and normal space of the manifold respectively (see lemma A.1 in the Appendix).
### Learning dynamics near the manifold and emergence of drift
In this section, we will study the stochastic dynamics of learning in an online learning scenario in which the data are sampled from a fixed distribution. We assume sufficient training has passed, and hence we are predominantly on or close to the manifold, being continuously nudged around by the SGD noise. An arbitrary point \(\mathbf{\theta}\) near the manifold can be represented as:
\[\mathbf{\theta}=\tilde{\mathbf{\theta}}+\mathbf{\theta}_{N} \tag{6}\]
where \(\tilde{\mathbf{\theta}}\) is the closest point on the manifold to \(\mathbf{\theta}\), and \(\mathbf{\theta}_{N}\in N_{\tilde{\mathbf{\theta}}}\mathcal{M}\) is the deviation from the manifold in the normal space. In the rest of this section, we will find update equations for \(\mathbf{\theta}_{N}\) and \(\tilde{\mathbf{\theta}}\) respectively, and describe their dynamics.
#### 2.4.1 Fluctuation in the normal space
We can find an update equation for \(\mathbf{\theta}_{N}\) by projecting the two sides of the SGD update (Eq.4) into the normal space. For small learning rates and over long times, this equation can be approximated with a continuous-time stochastic differential equation (SDE):
\[d\mathbf{\theta}_{N}=-\mathbf{H}\mathbf{\theta}_{N}dt-\sqrt{\eta}\,\mathbf{C}d\mathbf{B}_{t}. \tag{7}\]
In the above, \(\mathbf{H}_{k,l}=\frac{\partial^{2}L}{\partial\theta_{k}\partial\theta_{l}}\big|_{\tilde{\mathbf{\theta}}}\) (\(k,l\in[1,2np]\)) is the Hessian, \(\mathbf{B}_{t}\) is the standard multi-dimensional Brownian motion (Wiener process), and \(\mathbf{C}\approx\langle\tilde{\mathbf{g}}\tilde{\mathbf{g}}^{T}\rangle_{x}^{1/2}\), where \(\tilde{\mathbf{g}}\) is the gradient at point \(\tilde{\mathbf{\theta}}\) on the manifold. In general, the gradient (\(\mathbf{g}\)) can be analytically calculated at any point by differentiating the loss function with respect to the weight matrices (see Eq.29 in the Appendix). On the manifold we have:
\[\tilde{\mathbf{g}}(\mathbf{x}):=\mathbf{g}(\mathbf{x};\tilde{\mathbf{\theta}})=(\tilde{\mathbf{W}}^{T }\mathbf{Z}_{\mathbf{x}},\mathbf{Z}_{\mathbf{x}}\tilde{\mathbf{W}}), \tag{8}\]
where \(\mathbf{Z}_{\mathbf{x}}=\gamma(\mathbf{I}_{n}-\mathbf{\Sigma}_{\mathbf{x}}^{-1}\mathbf{x}\mathbf{x}^{T})\). Note that \(\mathbf{C}\) in Eq.7 is in general a function of \(\mathbf{\theta}_{N}\). But in deriving the SDE, we approximated it with its value on the manifold (\(\mathbf{\theta}_{N}=0\)), which is justified for small deviations (see Section A.3.1 for the derivation of the SDE).
Since the Hessian is positive semidefinite on the manifold (see Section A.1.3), the process defined by the SDE in Eq.7 is a mean-reverting process known as the multi-dimensional Ornstein-Uhlenbeck (OU) stochastic process (Gardiner et al. (1985)). OU has a stationary solution with zero mean and a finite variance (see Section A.3.2 in the Appendix). Hence, we refer to the deviations in the normal space as _fluctuation_. We represent the fluctuation in a basis constructed from the eigenvectors of the Hessian with positive eigenvalues:
\[\mathbf{\rho}=\mathbf{N}^{T}\mathbf{\theta}_{N},\quad\text{where }\mathbf{N}=[\mathbf{n}_{1}|\, \mathbf{n}_{2}|\,...\,|\mathbf{n}_{K}] \tag{9}\] \[\text{and }\mathbf{H}\mathbf{n}_{k}=\lambda_{k}\mathbf{n}_{k},\ \lambda_{k}>0\]
Here \(\rho_{k}\) represents the deviation from the manifold along the Hessian eigenvector \(\mathbf{n}_{k}\). As shown in Section A.3.2 in the Appendix, the stationary solution of \(\mathbf{\rho}\) satisfies \(\langle\rho_{k}\rangle=0\), and has the covariance:
\[\langle\rho_{k}\rho_{l}\rangle=\frac{\eta}{\lambda_{k}+\lambda_{l}}\langle\mathbf{n }_{k}^{T}\tilde{\mathbf{g}}(\mathbf{x})\,\mathbf{n}_{l}^{T}\tilde{\mathbf{g}}(\mathbf{x})\rangle_{ x}. \tag{10}\]
As expected, since \(\tilde{\mathbf{g}}\) is the driver of the fluctuations in the SDE, the covariance depends on the the projection of \(\tilde{\mathbf{g}}\) on \(\mathbf{n}_{k}\). However, it turns out that \(\tilde{\mathbf{g}}\) has no projection on a subspace of the Hessian eigenspace, and hence for the purpose of fluctuations, only a subset of \(\{\mathbf{n}_{k}\}\) are relevant. This is described in the next proposition.
**Proposition 2.4**.: (Hessian eigenspace) _If \((\mathbf{v}_{i},s_{i})\) are the eigenvector/eigenvalue pairs of \(\mathbf{\Sigma}_{\mathbf{x}}=\langle\mathbf{x}\mathbf{x}^{T}\rangle\) with \(s_{1}\geqslant\dots\geqslant s_{n}>\gamma\), the Hessian eigenvectors \(\mathbf{n}_{k}\) along which there is non-zero fluctuation (i.e. \(\mathbf{n}_{k}^{T}\tilde{\mathbf{g}}\neq 0\)) correspond to \((\mathbf{v}_{i},\mathbf{v}_{j})\) pairs for \(i,j\in[1,n]\) via:_
\[\mathbf{n}_{k}\equiv\mathbf{n}_{ij}=(\tilde{\mathbf{W}}^{T}\mathbf{Z}_{ij},\mathbf{Z}_{ij}\tilde{ \mathbf{W}}), \tag{11}\]
_where \(\mathbf{Z}_{ij}=C_{ij}(\kappa_{ij}\mathbf{v}_{i}\mathbf{v}_{j}^{T}+\mathbf{v}_{j}\mathbf{v}_{i}^{T})\). Here, \(\kappa_{ij}=\text{sgn}(i-j)(\sqrt{1+b^{2}}-b)\) for \(b=(\frac{1}{\gamma}-\frac{s_{i}+s_{j}}{2s_{i}s_{j}})|s_{i}-s_{j}|\), and \(C_{ij}=[(2-\gamma/s_{i}-\gamma/s_{j})(1+\kappa_{ij}^{2})]^{-\frac{1}{2}}\) is a normalization constant ensuring \(\mathbf{n}_{k}^{T}\mathbf{n}_{k}=1\). The corresponding Hessian eigenvalues are \(\lambda_{ii}=2(s_{i}-\gamma)\) and \(\lambda_{ij(i\neq j)}=2s_{i}-\gamma(s_{i}/s_{j}+\kappa_{ij})\). (see proof in Section A.4.)_
The above proposition relates the relevant directions in the normal space to the corresponding \((ij)\) indices of the input covariance eigenvectors (this makes \(k\) a composite index). Subsequently, the components of the fluctuation covariance can be written as \(\langle\rho_{k}\rho_{l}\rangle\equiv\langle\rho_{ij}\rho_{pq}\rangle\) (see Section A.5.1 in the Appendix for derivation of the components). The remark below provides an interpretation of movements along different \(\mathbf{n}_{k}\) in the representation space.
_Remark 2.5_.: (Fluctuation of the representations): On the manifold, the representations of all possible stimuli form an n-dimensional ellipsoid embedded in \(\mathbb{R}^{p}\). The main axes of this ellipsoid are \(\widetilde{\mathbf{h}}_{i}=\tilde{\mathbf{W}}^{T}\mathbf{v}_{i}\) for \(i\in[1,n]\), with norms \(|\tilde{\mathbf{h}}_{i}|=(1-\gamma/s_{i})^{1/2}\). By replacing \(\Delta\mathbf{U}\) from the previous proposition in \(\Delta\mathbf{h}_{i}=\Delta\mathbf{U}\mathbf{v}_{i}\), we can find how deviation along different \(\mathbf{n}_{k}\) leads to different deformation modes for the ellipsoid. Specifically, we can see that deviation along the eigenvector \(\mathbf{n}_{ii}\) changes \(\tilde{\mathbf{h}}_{i}\) radially (changing its norm), while leaving other (orthogonal) axes unchanged. Similarly, deviation along \(\mathbf{n}_{ij}\) for \(i\neq j\) moves \(\tilde{\mathbf{h}}_{i}\) and \(\tilde{\mathbf{h}}_{j}\) toward or away from each other, and leaves other axes intact. The latter shows that the fluctuations can change not only the lengths but also the pairwise angles of the representation vectors. Note however that the average of these changes is zero over time.
We take the variance of \(|\tilde{\mathbf{h}}_{i}|\) as a measure of fluctuation for the representations:
\[\sigma_{i}^{2}:=\text{var}(|\mathbf{h}_{i}(t)|)=\frac{1}{2}\langle\rho_{ii}^{2} \rangle=\frac{\eta\gamma^{2}}{4s_{i}}(\frac{\langle x_{i}^{4}\rangle_{x}}{s_{ i}^{2}}-1) \tag{12}\]
where \(x_{i}:=\mathbf{v}_{i}^{T}\mathbf{x}\) (see Section A.5.2 for derivation). As shown later in the paper, this has an excellent match to numerically measured fluctuations.
#### 2.4.2 Tangential updates
Over a step of learning update, the displacement in the parameter space (\(\Delta\mathbf{\theta}\)) has a corresponding projected movement on the manifold (\(\Delta\tilde{\mathbf{\theta}}\)). For small learning rates and by avoiding the curvature effect, we have:
\[\Delta\tilde{\mathbf{\theta}}=-\eta\mathbf{g}_{T}(\mathbf{x};\tilde{\mathbf{\theta}}+\mathbf{N} \mathbf{\rho}), \tag{13}\]
where \(\mathbf{g}_{T}=\Pi_{T}(\mathbf{g})\) is the projection of the gradient vector onto the tangent space. In the next theorem, we calculate this projection and find its action on the representations.
**Theorem 2.6**.: (Tangential update) _For a normal deviation \(\mathbf{\rho}=\sum_{k}\rho_{k}\mathbf{n}_{k}\) from point \(\tilde{\mathbf{\theta}}=(\tilde{\mathbf{W}}^{T},\tilde{\mathbf{W}})\) on the manifold, the tangential projection of the gradient is:_
\[\mathbf{g}_{T}(\mathbf{x};\tilde{\mathbf{\theta}}+\mathbf{N}\mathbf{\rho})=(\mathbf{G}_{\mathbf{U}_{T}},\mathbf{G}_{\mathbf{U}_{T}}^{T}),\quad\text{for} \tag{14}\] \[\mathbf{G}_{\mathbf{U}_{T}}=\tilde{\mathbf{W}}^{T}(\tilde{\mathbf{W}}\tilde{\mathbf{W}}^{T})^{-\frac{1}{2}}\sum_{k}\rho_{k}\mathbf{\mathcal{G}}_{k}^{:,:}(\tilde{\mathbf{W}}\tilde{\mathbf{W}}^{T})^{\frac{1}{2}}+\mathcal{O}(|\mathbf{\rho}|^{2}),\]
_where \(\mathbf{\mathcal{G}}\) is a rank-3 tensor with components \(\mathbf{\mathcal{G}}_{k}^{s,r}\equiv\mathbf{\mathcal{G}}_{ij}^{s,r}\) that are defined as:_
\[\mathbf{\mathcal{G}}_{ij}^{s,r}(\mathbf{x})=\frac{\gamma C_{ij}\sqrt{ \omega_{r}\omega_{s}}}{2(\omega_{r}+\omega_{s})}[x_{j}(S_{sj}^{ij}x_{s}\delta _{i}^{r}-S_{rj}^{ij}x_{r}\delta_{i}^{s})\\ +x_{i}(S_{is}^{ij}x_{s}\delta_{j}^{r}-S_{ir}^{ij}x_{r}\delta_{j}^{ s})], \tag{15}\]
_where \(\delta_{s}^{r}\) is the Kronecker delta function, \(\omega_{i}:=1-\gamma/s_{i}\), and \(S_{rs}^{ij}:=\kappa_{ij}/s_{r}+1/s_{s}\) (\(r,s,i,j\in[1,n]\)). The action of the tangential gradient in the representation space is a small rigid-body rotation around the origin. In this rotation, the representation vectors stay within the column-space of \(\tilde{\mathbf{W}}^{T}\), which itself stays fixed during the tangential updates. Additionally, the angular displacement of the representation \(\mathbf{h}_{s}(=\mathbf{U}\mathbf{v}_{s})\) toward \(\mathbf{h}_{r}(=\mathbf{U}\mathbf{v}_{r})\) is:_
\[\Delta\varphi_{sr}(\mathbf{x};\tilde{\mathbf{\theta}}+\mathbf{N}\mathbf{\rho})=\eta\sum_{k=1}^{ K}\rho_{k}\mathbf{\mathcal{G}}_{k}^{s,r}(\mathbf{x})+\mathcal{O}(|\mathbf{\rho}|^{2}). \tag{16}\]
In deriving the above, the gradient was approximated to the first order of the deviation from the manifold, and subsequently the tangential projection operator was applied (see Section A.6 of the Appendix). Here \(\mathbf{\mathcal{G}}\) is a tensor that, for a given direction in the normal space (corresponding to \(k\equiv ij\)), returns a rotation generator via the skew-symmetric matrix \(\mathbf{\mathcal{G}}_{k}^{:,:}\) (note \(\mathbf{\mathcal{G}}_{k}^{s,r}=-\mathbf{\mathcal{G}}_{k}^{r,s}\)).
#### 2.4.3 Drift as a diffusion process
As we saw in Section 2.4.1, the mean-reverting property of the gradient confines the normal deviations near the manifold (Fig.2 - Left). The movements in the tangent space, however, face no such mean-reverting property and hence can diffuse freely on the manifold in a random-walk fashion. As theorem 2.6 showed, the tangential component of the gradient is proportional to the deviation from the manifold (\(\mathbf{\rho}\)), such that it vanishes on the manifold (see the schematics in the middle and right panels of Fig.2). We can find an effective diffusion process for the movements on the manifold, which because of the above correspondence, is expected to depend on the statistics of the normal deviations. Specifically, over large timescales, Eq.13 can be approximated as the following continuous-time SDE, describing the evolution of the the parameter vector on the manifold:
\[d\tilde{\mathbf{\theta}}=(2\mathbf{D}_{\theta}/\eta)^{\frac{1}{2}}\,d\mathbf{B}_{t}. \tag{17}\]
In the above, \(\mathbf{D}_{\theta}=(1/2)\langle\Delta\tilde{\mathbf{\theta}}\Delta\tilde{\mathbf{\theta}}^{T} \rangle_{x,\rho}\) is the diffusion tensor for the parameters (measured over a training step), and \(\mathbf{B}_{t}\) is multi-dimensional Brownian motion. The above SDE is equivalent to the diffusion equation \(\partial c/\partial t=\mathbf{D}_{\theta}\nabla^{2}c\). The above diffusion process is manifested as a rotational diffusion in the representation space. To fully characterize this, it's sufficient to measure the pairwise diffusion rates
between the axes of the ellipsoid \(\{\mathbf{h}_{s}\}_{s\in[1,n]}\). Similar to the definition of \(\mathbf{D}_{\theta}\) above, we define \(D_{sr}\) to be the diffusion rate between the two representations \(\mathbf{h}_{s}\) and \(\mathbf{h}_{r}\) based on the mean squared of the angular displacement between the two vectors, i.e.:
\[D_{sr}:=\frac{1}{2}\langle\Delta\varphi_{sr}^{2}\rangle_{x,\rho}=\frac{\eta^{2}}{2}\sum_{k,l=1}^{K}\langle\rho_{k}\rho_{l}\rangle\,\bar{\mathcal{G}}_{k,l}^{s,r}, \tag{18}\]

where we defined \(\bar{\mathcal{G}}_{k,l}^{s,r}:=\langle\mathbf{\mathcal{G}}_{k}^{s,r}\mathbf{\mathcal{G}}_{l}^{s,r}\rangle_{x}\). The right-hand side results from replacing \(\Delta\varphi_{sr}\) from Eq.16 and taking the average. The total diffusion for the representation \(\mathbf{h}_{s}\) can be derived by summing up the diffusion rates along different directions: \(D_{s}=\sum_{r=1}^{n}D_{sr}\).

Eq.18 suggests the diffusion rate between two representation vectors is an aggregation of terms resulting from deviations in different directions in the normal space. In this equation, \(\eta^{2}\bar{\mathcal{G}}_{k,l}\) can be thought of as the diffusion per direction (or, more accurately, per direction pair \(k,l\)) in the normal space, and \(\langle\rho_{k}\rho_{l}\rangle\) is the covariance of the fluctuations associated with those directions. As both \(\bar{\mathcal{G}}\) and \(\langle\mathbf{\rho}\mathbf{\rho}^{T}\rangle\) depend on the fourth moment of the stimuli distribution, the diffusion is also a function of the fourth moment of the input distribution, as we will see later in the paper.
### Numerical Simulations
Alongside the analytical derivations, we also performed numerical simulations in which we measured the drift in a neural network undergoing continual SGD training. We first numerically validated the equations of the manifold by verifying that after enough training steps the network gets close enough to the theoretical manifold. Next, to measure the drift, we initialized multiple (\(>10^{4}\)) realizations of the network, all starting from a fixed point on the manifold but undergoing different SGD sampling schemes during the training. Following a transitory phase, we studied the over-time trajectories of the hidden layer activation for different trial stimuli. This was done by measuring the fluctuation (variance) of the representation norm, and the angular displacements. The diffusion coefficient was estimated as half of the slope of a linear fit to the mean squared angular displacements aggregated from all the realizations.
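A minimal version of this measurement procedure could look like the following sketch (our own illustration for the Gaussian-stimuli case; the frequent-stimulus experiment would only change the input sampler). It reuses `sample_gradient` from the earlier sketch and estimates the diffusion rate as half the slope of a linear fit to the mean squared angular displacement of the representation \(\mathbf{h}=\mathbf{U}\mathbf{v}\).

```python
import numpy as np

def estimate_diffusion(U0, W0, v, eta, gamma, steps, n_real, every, seed=0):
    # Run n_real independent SGD realizations from the same manifold point,
    # record the angular displacement of h = U v from its initial value, and
    # fit the mean squared displacement linearly in time.
    rng = np.random.default_rng(seed)
    n = U0.shape[1]
    h0 = U0 @ v
    times = np.arange(every, steps + 1, every)
    msd = np.zeros(len(times))
    for _ in range(n_real):
        U, W = U0.copy(), W0.copy()
        for i in range(len(times)):
            for _ in range(every):
                x = rng.standard_normal(n)       # background Gaussian stimuli
                gU, gW = sample_gradient(U, W, x, gamma)
                U -= eta * gU
                W -= eta * gW
            h = U @ v
            c = np.clip(h0 @ h / (np.linalg.norm(h0) * np.linalg.norm(h)),
                        -1.0, 1.0)
            msd[i] += np.arccos(c) ** 2
    msd /= n_real
    slope, _ = np.polyfit(times, msd, 1)
    return slope / 2.0           # diffusion rate per training step
```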
## 3 Drift under Gaussian stimuli
In this section, we present complete analytical results for a case where the stimuli are drawn randomly and independently from a standard n-dimensional Gaussian distribution, i.e. \(x_{i}\sim\mathcal{N}(0,1)\). Since the input covariance is the identity, \(\{\mathbf{v}_{i}\}\) form an orthonormal basis for \(\mathbb{R}^{n}\), all corresponding to the eigenvalue \(s_{i}=\langle x_{i}^{2}\rangle=1\). Additionally, we have \(\langle x_{i}x_{j}x_{p}x_{q}\rangle=\delta_{i}^{j}\delta_{p}^{q}+\delta_{i}^{p}\delta_{j}^{q}+\delta_{i}^{q}\delta_{j}^{p}\). The Hessian eigenspace can be found from proposition 2.4, where we can show \(\kappa_{ij(i>j)}=1\), \(\kappa_{ii}=0\), and \(\kappa_{ij(i<j)}=-1\), for \(i,j\in[1,n]\). Hence the eigenspace corresponds to three sets with \(\mathbf{Z}_{ij(i>j)}=\frac{1}{2\sqrt{\omega}}(\mathbf{v}_{i}\mathbf{v}_{j}^{T}+\mathbf{v}_{j}\mathbf{v}_{i}^{T})\), \(\mathbf{Z}_{ii}=\frac{1}{\sqrt{2\omega}}\mathbf{v}_{i}\mathbf{v}_{i}^{T}\), and \(\mathbf{Z}_{ij(i<j)}=\frac{1}{2\sqrt{\omega}}(-\mathbf{v}_{i}\mathbf{v}_{j}^{T}+\mathbf{v}_{j}\mathbf{v}_{i}^{T})\), with eigenvalues \(\lambda_{ij(i\geqslant j)}=2(1-\gamma)\) and \(\lambda_{ij(i<j)}=2-\gamma\) respectively, where \(\omega:=1-\gamma\).
The components of the fluctuation matrix can be calculated from Eq.10 as:
\[\langle\rho_{ij}^{2}\rangle=\eta\gamma^{2},\;\text{for}\;i\geqslant j, \tag{19}\]
where the rest of the components are zero (see Section A.5.1). Hence, the fluctuation in the representation norm of an arbitrary unit-length stimulus (Eq.12) becomes:
\[\sigma_{s}^{2}=\frac{\eta\gamma^{2}}{2}. \tag{20}\]
To find the diffusion coefficients, we will have to first calculate the components of the \(\mathbf{\mathcal{G}}\) tensor from Eq.15, to have:
\[\mathbf{\mathcal{G}}_{ii}^{s,r}=\frac{\gamma}{2\sqrt{2\omega}}x_{i}(x _{s}\delta_{i}^{r}-x_{r}\delta_{i}^{s}), \tag{21}\] \[\mathbf{\mathcal{G}}_{ij(i>j)}^{s,r}=\frac{\gamma}{4\sqrt{\omega}}[x _{j}(x_{s}\delta_{i}^{r}-x_{r}\delta_{i}^{s})+x_{i}(x_{s}\delta_{j}^{r}-x_{r} \delta_{j}^{s})]\]
and \(\mathbf{\mathcal{G}}_{ij(i<j)}^{s,r}=0\). Subsequently, the coefficients \(\bar{\mathcal{G}}_{ij,pq}^{s,r}\) can be found by taking the averages of the product terms from the above, leading to:

\[\bar{\mathcal{G}}_{ii,ii}^{s,r}=\frac{\gamma^{2}}{8\omega}(\delta_{i}^{r}+\delta_{i}^{s}),\,\bar{\mathcal{G}}_{ii,jj(i\neq j)}^{s,r}=\frac{-\gamma^{2}}{8\omega}(\delta_{ij}^{rs}+\delta_{ij}^{sr})\] \[\bar{\mathcal{G}}_{ij,ij(i>j)}^{s,r}=\frac{\gamma^{2}}{16\omega}(\delta_{i}^{r}+\delta_{i}^{s}+\delta_{j}^{r}+\delta_{j}^{s}+2\delta_{ij}^{rs}+2\delta_{ij}^{sr})\]

and the other components are zero. Replacing these and the fluctuation terms in Eq.18 results in the diffusion rate between the representations of two arbitrary \(s\) and \(r(\neq s)\):
\[D_{sr} =\frac{\eta^{3}\gamma^{2}}{2}\Big{(}\sum_{i=1}^{n}\bar{\mathcal{G}}_{ii,ii}^{s,r}+\sum_{i,j=1,i>j}^{n}\bar{\mathcal{G}}_{ij,ij}^{s,r}\Big{)}\] \[=\frac{\eta^{3}\gamma^{4}}{2(1-\gamma)}(\frac{1}{4}+\frac{n}{8})=\frac{1}{16}\frac{\eta^{3}\gamma^{4}}{1-\gamma}(n+2) \tag{22}\]
Note the summation was performed only over indices where the fluctuation covariance was non-zero. Since the diffusion is isotropic, the total diffusion for representation
Figure 2: Schematics showing (left) the probability distribution of the fluctuations outside the manifold, (middle) the gradient vectors on and near the manifold, and (right) the tangential component of the gradient.
becomes:
\[D_{s}=(n-1)D_{sr}=\frac{1}{16}\frac{\eta^{3}\gamma^{4}}{1-\gamma}(n-1)(n+2) \tag{23}\]
## 4 Drift under a frequently presented stimulus
As discussed in the previous section, an SGD-induced drift could occur even with an isotropic background stimuli. In this section, we will study how the presence of a frequent stimulus in the environment influences the drift. We consider a case where in addition to the background Gaussian stimuli, there is a relatively more frequent stimulus, \(\mathbf{a}\), that is presented with probability \(\alpha\), i.e.:
\[\mathbf{x}=\left\{\begin{array}{ll}\mathbf{a}&\text{Pr}=\alpha\\ \mathcal{N}(\mathbf{0},\mathbf{I}_{n})&\text{Pr}=1-\alpha\end{array}\right. \tag{24}\]
(note the previous case is equivalent to \(\alpha=0\)). Without loss of generality, we take \(\mathbf{a}\) to be along the first axis. The second and fourth moments of the input distribution become \(\langle x_{i}x_{j}\rangle=\alpha|\mathbf{a}|^{2}\delta_{i}^{1}\delta_{j}^{1}+(1-\alpha)\delta_{i}^{j}\) and \(\langle x_{i}x_{j}x_{p}x_{q}\rangle=\alpha|\mathbf{a}|^{4}\delta_{i}^{1}\delta_{j}^{1}\delta_{p}^{1}\delta_{q}^{1}+(1-\alpha)(\delta_{i}^{j}\delta_{p}^{q}+\delta_{i}^{p}\delta_{j}^{q}+\delta_{i}^{q}\delta_{j}^{p})\) respectively. For simplicity, we will also assume \(|\mathbf{a}|=1\). The eigenvalues of \(\mathbf{\Sigma}_{\mathbf{x}}\) will be \(s_{a}=1\) and \(s_{b}=1-\alpha\), corresponding to eigenvectors \(\mathbf{v}_{a}=\mathbf{a}\), and orthonormal vectors \(\{\mathbf{v}_{b}\}\perp\mathbf{a}\) for \(b\in[2,n]\) respectively. The fluctuation of representation norm for unit-length stimuli \(\mathbf{a}\) and \(\mathbf{b}\) (\(\perp\mathbf{a}\)) can be found by replacing \(\langle x_{a}^{4}\rangle=3-2\alpha\) and \(\langle x_{b}^{4}\rangle=3(1-\alpha)\) in Eq.12, to get:
\[\sigma_{a}^{2}=\frac{\eta\gamma^{2}}{2}(1-\alpha),\quad\sigma_{b}^{2}=\frac{ \eta\gamma^{2}}{4}\frac{2+\alpha}{(1-\alpha)^{2}}. \tag{25}\]
It is easy to show that \(\sigma_{a}^{2}\leqslant\sigma_{b}^{2}\) irrespective of \(\alpha\), which suggests the fluctuation of the frequent stimulus representation is smaller than that of a background stimulus.
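A quick numerical sanity check of Eq. (25) over a grid of \(\alpha\) values (with hypothetical \(\eta\) and \(\gamma\)) confirms the inequality, with equality at \(\alpha=0\):

```python
import numpy as np

# Sanity check of Eq. (25): the frequent stimulus fluctuates less.
eta, gamma = 0.005, 0.04
alpha = np.linspace(0.0, 0.9, 50)
var_a = 0.5 * eta * gamma**2 * (1.0 - alpha)
var_b = 0.25 * eta * gamma**2 * (2.0 + alpha) / (1.0 - alpha) ** 2
assert np.all(var_a <= var_b)
```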
Acknowledging the symmetry within the space of background stimuli, we only need to find two diffusion coefficients \(D_{ab}=D_{ba}\) and \(D_{bc}=D_{cb}\) for \(a=1\) and any \(b\neq c\in[2,n]\) (without loss of generality, we take \(b=2\) and \(c=3\)). To calculate these coefficients, we need to perform the summation in Eq.18, which is taken over \((ij,pq)\) indices. Despite many terms being zero due to the nature of the \(\mathbf{\mathcal{G}}\) tensor and the fourth moment of the input, the full summation could lead to bulky equations, which we avoid here and only present approximate results. Specifically, for large \(n\), many of the terms in the summation can be ignored (Section A.7.1). Additionally, we first consider the regime of small \(\alpha\), which is a perturbation to the previous Gaussian-stimuli case for which \(\alpha\) was zero. As a first order correction, it is sufficient to sum over indices for which the unperturbed case had non-zero \(\langle\rho_{ij}\rho_{pq}\rangle\) or \(\bar{\mathcal{G}}^{s,r}\) components. This limits the summation to indices \((ij,ij)_{i>j}\) for which either \(i\) or \(j\) is equal to \(r\) or \(s\) (see detail in Section A.7.1). If we take \(d(>c)\) to be an index within the background subspace (i.e. \(d=4\)), the diffusion summations simplify as below:

\[D_{ba} \approx\frac{n\eta^{2}}{2}(\langle\rho_{db}^{2}\rangle\bar{\mathcal{G}}^{a,b}_{db,db}+\langle\rho_{da}^{2}\rangle\bar{\mathcal{G}}^{a,b}_{da,da})\quad(\alpha\ll 1,n\gg 1)\] \[\approx\frac{n\eta^{3}\gamma^{4}}{32}[(1+2\alpha)+(1+\frac{\alpha}{2})(1+\alpha)]\] \[=\frac{n\eta^{3}\gamma^{4}}{16}(1+\frac{7\alpha}{4})\] \[D_{bc} \approx\frac{n\eta^{2}}{2}(\langle\rho_{db}^{2}\rangle\bar{\mathcal{G}}^{c,b}_{db,db}+\langle\rho_{dc}^{2}\rangle\bar{\mathcal{G}}^{c,b}_{dc,dc})\] \[\approx\frac{n\eta^{3}\gamma^{4}}{32}[(1+2\alpha)(1+\alpha)+(1+2\alpha)(1+\alpha)]\] \[=\frac{n\eta^{3}\gamma^{4}}{16}(1+3\alpha) \tag{26}\]
In the above, we replaced the following quantities calculated for small \(\alpha\) and \(\gamma\) (see Section A.7.3):
\[\langle\rho_{db}^{2}\rangle=\langle\rho_{dc}^{2}\rangle=\eta\gamma^{2}(1+2 \alpha),\quad\langle\rho_{da}^{2}\rangle=\eta\gamma^{2}(1+\frac{\alpha}{2})\]
\[\bar{\mathcal{G}}^{a,b}_{db,db}=\frac{\gamma^{2}}{16},\quad\bar{\mathcal{G}}^{a,b}_{da,da}=\bar{\mathcal{G}}^{c,b}_{db,db}=\bar{\mathcal{G}}^{c,b}_{dc,dc}=\frac{\gamma^{2}}{16}(1+\alpha)\]
Total diffusion coefficients for stimuli \(a\) and \(b\) become:
\[D_{a} =(n-1)D_{ab}\approx\frac{n^{2}\eta^{3}\gamma^{4}}{16}(1+\frac{7 \alpha}{4})\quad(\alpha\ll 1,n\gg 1)\] \[D_{b} =D_{ba}+(n-2)D_{bc}\approx\frac{n^{2}\eta^{3}\gamma^{4}}{16}(1+3 \alpha). \tag{27}\]
which shows \(D_{b}\geq D_{a}\).
In Section A.7.2, we also perform analytical derivation of the diffusion under \(n\gg 1\) and \(\alpha\gg\gamma\) (specifically, with respect to the eigenvalues we assume \(s_{a},s_{b},s_{a}-s_{b}\gg\gamma\)). The results of those calculations are as follows:
\[D_{a} \approx\frac{n^{2}\eta^{3}\gamma^{4}\langle x_{a}^{2}x_{c}^{2} \rangle\langle x_{b}^{2}x_{c}^{2}\rangle}{128s_{b}^{5}}[1+3\frac{s_{b}}{s_{a}} +2(\frac{s_{b}}{s_{a}})^{2}+\frac{4(\frac{s_{b}}{s_{a}})^{2}}{1+\frac{s_{b}}{ s_{a}}}]\] \[=\frac{n^{2}\eta^{3}\gamma^{4}}{16(1-\alpha)^{3}}[\frac{1-\frac {7}{4}\alpha+\frac{15}{16}\alpha^{2}-\frac{1}{8}\alpha^{3}}{1-\frac{\alpha}{ 2}}]\]
\[D_{b} \approx\frac{n^{2}\eta^{3}\gamma^{4}\langle x_{b}^{2}x_{c}^{2} \rangle^{2}}{16s_{b}^{5}}=\frac{n^{2}\eta^{3}\gamma^{4}}{16(1-\alpha)^{3}} \quad(\alpha\gg\gamma,n\gg 1) \tag{28}\]
where in the equalities we replaced \(s_{a}=1\), \(s_{b}=1-\alpha\), and \(\langle x_{a}^{2}x_{c}^{2}\rangle=\langle x_{b}^{2}x_{c}^{2}\rangle=1-\alpha\). The term inside the second line brackets in \(D_{a}\) is always smaller than one. Hence, we have \(D_{b}\geq D_{a}\).
In Figure 3, we plot the diffusion and fluctuation coefficients for \(\alpha\in[0,0.6]\) for stimuli \(a\) and \(b\). We see that, consistent with the above results, the frequent stimulus drifts at a relatively slower rate and has a smaller fluctuation irrespective of \(\alpha\). Additionally, there is an excellent match between the theoretical and simulation results for both the fluctuation and diffusion coefficients. Finally, in the bottom panel of the same figure, we plotted the trajectories of the representations over time for \(n=p=3\). The lower fluctuation and diffusion rates for the more frequent stimulus can be visually observed from the smaller point cloud for this stimulus.
## 5 Discussion
In a two-layer neural network model of the olfactory system, we show, using theory and simulations, that the stochasticity in SGD online learning could result in a drift of stimuli representations over time, even after the training is complete and no measurable change in the performance is observed. We analytically demonstrate the dependency of the drift on the input distribution and, in particular, show that a frequently presented stimulus drifts at a relatively slower rate. This finding is consistent with experimental observations in the piriform cortex (Schoonover et al., 2021).
We studied the learning dynamics in the high-dimensional space of network parameters. In this space, drift can be considered as any movement tangential to the manifold of minimum-loss, the effects of which aggregate over time to create an effective diffusion process. Orthogonal to this is the fluctuation outside the manifold that is determined by the mean-reverting property of the gradient. For the tangential gradient to have a non-zero value, an orthogonal deviation from the manifold was necessary (see Eq.14 and Figure 2). In a way, this makes the diffusion on the manifold a second order phenomenon, and that explains why the amount of diffusion depends on the fluctuation covariance (Eq.18). We showed diffusion (random walk) as a mechanism by which the representational drift can happen. In this sense it is different from the term drift in Physics.
In the representation space, the effects of the fluctuation and tangential movements are deformations and rigid-body rotations of the space respectively. If we consider the representation of a given stimulus over time, its trajectory essentially consists of two parts: a random-walk movement on a high-dimensional sphere, and simultaneously, a mean-reverting fluctuating process that causes deviations on and outside the sphere (i.e. changing norms and pair-wise angles of representations). This can be observed from the bottom panel in Figure 3. The lower diffusion rate observed for a more frequent stimulus suggests that the rigid-body rotation is such that on average it rotates the frequent stimulus to a lesser degree. In three dimensions, this could be imagined by, for example, the axis of rotation being on average closer to the representation of the frequent stimulus. The lower diffusion and fluctuation for the more frequent stimulus could be traced to the mechanism by which the gradient tends to preserve the representation and the output of a more rehearsed stimuli to a greater extent compared to other stimuli.
Our work relates closely to the recent study by Qin et al. (2023). They studied the drift in a one-layer neural network model of sensory systems with a similarity matching objective and a Hebbian/anti-Hebbian learning rule. Similarly to our results, they showed that the representations undergo a rotational diffusion process. This is not surprising as both of our models have objective functions with a degenerate solution that has a rotational symmetry. In our case, the L2 regularization creates that symmetry. Qin et al. (2023) also demonstrated a stimulus dependency of the diffusion coefficient. In their case the drift for a given eigenvector direction is inversely related to its eigenvalue. Additionally, in that study, the drift is produced by injected synaptic noise and goes away when this noise is close to zero. Here, we used a two-layer neural network model with the MSE loss objective to study drift under the condition of no synaptic noise. We showed that the sampling noise due to the SGD stochasticity and under no synaptic noise is enough to demonstrate the stimulus dependency of the drift rate.
We were able to analytically solve the drift in a simple linear case. Although each layer of such a network is linear, the resulting output is a product of the network weights. As a result, linear networks, despite their apparent simplicity, have nonlinear learning trajectories that could lead to rather complicated and interesting behaviors, especially as the number of layers increases (Saxe et al., 2013; Li & Sompolinsky, 2021). Our framework could be applied to more general cases as long as the manifold represents the redundancy in the model. The detail of the drift, however, may depend on the specific features of the setup such as the noise type, etc. Future steps of our work may include nonlinear and deep networks with recurrent connections, in addition to a non-stationary data distribution.

Figure 3: Plots of (left) diffusion, and (right) fluctuation for representations of a frequent and a background stimulus, denoted by \(a\) and \(b\) respectively. \(\alpha\) is the probability of the frequent stimulus, and \(m=n=10,p=20\), \(\gamma=0.04,\eta=0.005\), \(|\mathbf{a}|=1\). (bottom) Over-time history of representations for three trial stimuli after \(2.2\times 10^{5}\) training steps. \(n=p=3\), \(\gamma=0.1\), \(\eta=0.1\), \(|\mathbf{a}|=1\), \(\alpha=0.5\), and \(\mathbf{c}\) is another background stimulus.
## Acknowledgements
We are grateful for fundings from the Swartz Foundation, and the National Institute of Health (U19NS112953-01).
|
2308.08767 | Graph Neural Network Backend for Speaker Recognition | Currently, most speaker recognition backends, such as cosine, linear
discriminant analysis (LDA), or probabilistic linear discriminant analysis
(PLDA), make decisions by calculating similarity or distance between enrollment
and test embeddings which are already extracted from neural networks. However,
for each embedding, the local structure of itself and its neighbor embeddings
in the low-dimensional space is different, which may be helpful for the
recognition but is often ignored. In order to take advantage of it, we propose
a graph neural network (GNN) backend to mine latent relationships among
embeddings for classification. We assume all the embeddings as nodes on a
graph, and their edges are computed based on some similarity function, such as
cosine, LDA+cosine, or LDA+PLDA. We study different graph settings and explore
variants of GNN to find a better message passing and aggregation way to
accomplish the recognition task. Experimental results on NIST SRE14 i-vector
challenging, VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H datasets demonstrate
that our proposed GNN backends significantly outperform current mainstream
methods. | Liang He, Ruida Li, Mengqi Niu | 2023-08-17T03:50:37Z | http://arxiv.org/abs/2308.08767v1 | # Graph Neural Network Backend for Speaker Recognition
###### Abstract
Currently, most speaker recognition backends, such as cosine, linear discriminant analysis (LDA), or probabilistic linear discriminant analysis (PLDA), make decisions by calculating similarity or distance between enrollment and test embeddings which are already extracted from neural networks. However, for each embedding, the local structure of itself and its neighbor embeddings in the low-dimensional space is different, which may be helpful for the recognition but is often ignored. In order to take advantage of it, we propose a graph neural network (GNN) backend to mine latent relationships among embeddings for classification. We assume all the embeddings as nodes on a graph, and their edges are computed based on some similarity function, such as cosine, LDA+cosine, or LDA+PLDA. We study different graph settings and explore variants of GNN to find a better message passing and aggregation way to accomplish the recognition task. Experimental results on the NIST SRE14 i-vector challenge, VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H datasets demonstrate that our proposed GNN backends significantly outperform current mainstream methods.
Speaker recognition, graph neural network, embeddings, representative learning
## I Introduction
The core task of speaker recognition is to determine whether two utterances are from the same speaker. Currently, the mainstream methods are variants of x-vector [1], which have obtained excellent performance in recent evaluations and applications [2]. They mainly consist of a frontend neural network responsible for mapping from an utterance with variable duration to a fixed-dimension embedding, also termed an x-vector, and a backend module in charge of making the decision based on enrollment and test embeddings.
Most studies are about improvements of the neural networks, e.g., time-delay neural network (TDNN) [1], emphasized channel attention, propagation and aggregation-TDNN (ECAPA-TDNN) [3], ResNet [4], ResNeXt [5]; of the pooling layer, e.g., attentive statistics pooling (ASP) [6], multi-head attention pooling (MHAP) [7], learnable dictionary encoding (LDE) [8]; and of the loss function, e.g., angular softmax (A-softmax) loss [9], additive margin softmax loss (AM-softmax) [10, 11], additive angular margin (ArcFace) loss [12], dynamic margin softmax loss [13], adaptive margin circle loss [14], and _etc_.
In contrast, there is less research on the backend. The mainstream backends are still cosine scoring and linear discriminant analysis (LDA) followed by probabilistic LDA (PLDA, LDA+PLDA), which are already verified on most databases and evaluations [2, 15, 16]. In recent years, there have been three kinds of representative methods to improve the backend. The first category is about PLDA, such as Neural PLDA [17], discriminative PLDA (DPLDA) [18], heavy-tailed PLDA (HTPLDA) [19], multi-objective optimization training of PLDA (Mot-PLDA) [20] and _etc_. The second is to add an additional trainable neural network module, e.g., decision residual networks (Dr-vectors) [21], deep learning backend (DLB) [22] and tied variational autoencoder (TVAE) [23]. And the last is to develop a robust backend against domain mismatch, such as Coral++ [24], domain-aware batch normalization (DABN) and domain-agnostic instance normalization (DAIN) [25], information-maximized variational domain adversarial neural network (InfoVDANN) [26], and _etc_. However, these algorithms rarely use spatial or graph information among the extracted embeddings, which may significantly boost the performance.
Recently, graph neural network (GNN) has achieved great success in a large number of areas, such as physics, chemistry, biology, knowledge graph, social network, recommendation systems, and _etc_[27]. It is a powerful tool to mine rich relation information among data, which has great potential for speaker recognition. Jung _et al._ propose a graph attention network (GAT) in the case of test time augmentation (TTA) [28] and demonstrate that the GAT-TTA backend has consistent improvement over cosine scoring. Although the proposed GAT-TTA framework takes multiple embeddings to construct graphs, they are still only from the enrollment and test utterances, which do not use the relationship between the concerned embedding and its surrounding embeddings lying on the hypothesized hypersphere. Wang _et al._[29] use a graph neural network for better clustering to accomplish the speaker diarization task. Furthermore, Zheng _et al._[30] construct a heterogeneous graph to realize multi-modal information aggregation. It takes the speaker and speech segment as vertexes and uses the contextual connection of the speech segment and speaker identity to calculate edges. Experimental results on the MELD databases show the effectiveness of the proposed method [30].
To take advantage of the uniqueness of each speaker's low-dimensional spatial (graph) structure embedded on the hypothesized hypersphere, we propose a graph neural network (GNN) backend to mine latent relationships between embeddings and their neighbors to improve the system performance. We treat all the embeddings as nodes on a graph, and the edges are constructed by calculating the similarity between two
nodes. The similarity function could be cosine, LDA+cosine, LDA+PLDA, or others, which will be examined in Section III. Graph construction methods and variants of GNNs are studied and compared to find a better message passing and aggregation scheme. Experimental results on the NIST SRE14 i-vector challenge and the VoxCeleb1 database validate the effectiveness of our proposed method.
## II Graph neural network backend for speaker recognition
### _Motivation_
The task of the backend for speaker recognition is to make correct and robust decisions based on the extracted i-vectors [31] or x-vectors. If we view these vectors as points in space, the position of each point together with its local spatial structure relative to other points helps determine its corresponding category, see Fig. 1. We take each point as a node and add edges according to their pairwise geometric distance. Thanks to the powerful ability of graph neural networks to process complex non-Euclidean data, we can more effectively use the spatial structural information on the built graph to compare points and make decisions.
Our proposed method contains graph construction and graph neural networks, see Fig. 2. During graph construction, we compute the nodes and edges. The graph neural network includes graph network modules, batch normalizations (BN), fully connected layers, a softmax output, and a cross-entropy (CE) loss.
### _Graph building with nearest neighbors_
Suppose the connected graph is represented as \(G=(V,E)\), where \(V\) is the set of nodes and \(E\) is the set of edges. Each vector is a node on the graph, and its connected nodes are calculated by the nearest neighbor algorithm with cosine, LDA, or LDA+PLDA distance. For instance, if the cosine value of two vectors is greater than a pre-defined threshold, we add an edge to connect them. After the construction of the graph, the adjacency matrix \(A\) is subsequently calculated, in which the element \(a_{i,j}=1\) if there is an edge between nodes \(i\) and \(j\), and \(a_{i,j}=0\) otherwise. The diagonal elements of the adjacency matrix are set to \(1\) to include the center vectors when aggregating information.
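To make the construction concrete, the following minimal sketch builds the binary adjacency matrix \(A\) by thresholding pairwise cosine similarity between embeddings. The threshold value and function names are illustrative assumptions; the same skeleton applies when the similarity is LDA+cosine or LDA+PLDA scores.

```python
import numpy as np

def build_adjacency(X, threshold=0.5):
    """Build the binary adjacency matrix A by thresholding pairwise
    cosine similarity between embedding vectors (rows of X)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalize rows
    S = Xn @ Xn.T                                      # pairwise cosine similarities
    A = (S > threshold).astype(np.float32)             # add an edge if similar enough
    np.fill_diagonal(A, 1.0)                           # self-loops: a_{i,i} = 1
    return A
```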
### _Variants of graph module_
Variants of graph neural network modules are different ways of message-passing to generate the next layer's nodes by aggregating nodes and their neighbors' information. Denote the \(i\)-th node in the \(k\)-th layer by \(\mathbf{x}_{i}^{(k)}\), the message-passing module can be described as:
\[\mathbf{x}_{i}^{(k)}=f_{\mathbf{\Theta}}\left(\mathbf{x}_{i}^{(k-1)},\text{ AGGREGATE}(\{\mathbf{x}_{j}^{(k-1)}|j\in\mathcal{N}_{i}\})\right)\]
where \(\mathcal{N}_{i}=\{j\in V|(i,j)\in E\}\) is the neighbor set of the \(i\)-th node and \(\mathbf{\Theta}\) denotes the parameters of the message-passing module. The design of \(f\) and AGGREGATE is what mostly distinguishes one type of GNN from another.
#### Ii-C1 Graph convolutional network
The graph convolutional network (GCN) can leverage the graph structure and aggregate node information from the neighborhoods in a convolutional way [32]. The core layer of GCN is as follows:
\[X^{(k)}=\sigma\left(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}X^{(k-1)}\mathbf{\Theta}^{(k-1)}\right)\]
where each row of \(X^{(k)}\) is an \(\mathbf{x}^{(k)}\), \(\sigma\) represents the nonlinear activation function, \(\hat{A}=A+I\) denotes the adjacency matrix with inserted self-loops, and \(\hat{D}_{ii}=\sum_{j}\hat{A}_{ij}\) is its diagonal degree matrix.
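As a minimal illustration of the propagation rule above, the following NumPy sketch implements one GCN layer; the choice of \(\tanh\) as the activation is an assumption for the example.

```python
import numpy as np

def gcn_layer(X, A, Theta, act=np.tanh):
    """One GCN layer: X' = act(D^{-1/2} (A + I) D^{-1/2} X Theta)."""
    A_hat = A + np.eye(A.shape[0])                # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)              # \hat{D}^{-1/2}
    return act(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Theta)
```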
Fig. 1: We randomly select ten speakers’ i-vectors from the NIST SRE14 database to visualize the relationship among them by the GNE. We can see that both location (ellipses) and graph structure (arrows) contain discriminant information, which helps the classification.
Fig. 2: In the graph building, the nodes are i-vectors or x-vectors, and for each node, its corresponding edges are calculated using nearest neighbor algorithms. Based on the built graph, we adopt several layers of a graph network module followed by batch normalization. The last two layers are a linear layer and a softmax output layer. The number of output nodes corresponds to the number of speakers in the training set. In the enrollment and test phase, we read out the embeddings in the linear layer, name them g-vectors, and use them instead of x-vectors or i-vectors for the final cosine scoring.
#### Ii-A2 Graph attention networks
The graph attention networks (GAT) can leverage masked self-attentional weights to aggregate information [33]. The core layer of GAT is as follows:
\[\mathbf{x}_{i}^{(k)}=\alpha_{i,i}\mathbf{\Theta x}_{i}^{(k-1)}+\sum_{j\in\mathcal{ N}(i)}\alpha_{i,j}\mathbf{\Theta x}_{j}^{(k-1)}\]
where the attention coefficients \(\alpha_{i,j}\) are computed as
\[\alpha_{i,j}=\frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{t}\left[\mathbf{\Theta x}_{i}\,\|\,\mathbf{\Theta x}_{j}\right]\right)\right)}{\sum_{k\in\mathcal{N}(i)\cup\{i\}}\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{t}\left[\mathbf{\Theta x}_{i}\,\|\,\mathbf{\Theta x}_{k}\right]\right)\right)}.\]
where \(\mathbf{a}\) is a learned weight vector, the superscript \(t\) represents transposition, and \(\|\) is the concatenation operation.
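The following sketch computes the attention coefficients of one node over itself and its neighbors, mirroring the softmax above; the array shapes and the LeakyReLU slope of 0.2 are illustrative assumptions.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_coefficients(x_i, neighbors, Theta, a):
    """Attention of node i over {i} ∪ N(i): a softmax over
    LeakyReLU(a^t [Theta x_i || Theta x_j]).
    Shapes: x_i (d_in,), neighbors (n, d_in), Theta (d_out, d_in), a (2*d_out,)."""
    h_i = Theta @ x_i
    h_all = np.vstack([h_i, neighbors @ Theta.T])   # self-loop first
    pairs = np.concatenate(
        [np.tile(h_i, (h_all.shape[0], 1)), h_all], axis=1)
    scores = leaky_relu(pairs @ a)
    w = np.exp(scores - scores.max())               # numerically stable softmax
    return w / w.sum()
```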
#### Ii-A3 GATv2
The GATv2 is a modification of GAT, with the attention coefficients \(\alpha_{i,j}\) computed as follows
\[\alpha_{i,j}=\frac{\exp\left(\mathbf{a}^{t}\,\mathrm{LeakyReLU}\left(\left[\mathbf{\Theta x}_{i}\,\|\,\mathbf{\Theta x}_{j}\right]\right)\right)}{\sum_{k\in\mathcal{N}(i)\cup\{i\}}\exp\left(\mathbf{a}^{t}\,\mathrm{LeakyReLU}\left(\left[\mathbf{\Theta x}_{i}\,\|\,\mathbf{\Theta x}_{k}\right]\right)\right)}\]
GATv2 is a dynamic graph attention variant that is strictly more expressive than GAT, which is a static graph attention variant [34].
#### Ii-A4 GraphSAGE
GraphSAGE (SAmple and aggreGatE) learns a function that generates embeddings by sampling and aggregating features from local neighbors, and can efficiently generate node representations for previously unseen data [35]. The core layer of GraphSAGE is as follows:
\[\mathbf{x}_{i}^{(k)}=\sigma\left(W^{(k)}\cdot\left(\mathbf{x}_{i}^{(k-1)}\| \mathrm{AGGREGATE}(\{\mathbf{x}_{j}^{(k-1)}|j\in\mathcal{N}_{i}\})\right)\right)\]
where \(W^{(k)}\) is the weight matrix of \(k\)-th layer, and the AGGREGATE method could be mean aggregator, LSTM aggregator, or pooling aggregator [35].
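Below is a sketch of the mean-aggregator variant used in our experiments; the activation function is again an illustrative assumption.

```python
import numpy as np

def sage_layer(X, A, W, act=np.tanh):
    """GraphSAGE with a mean aggregator: concatenate each node's vector
    with the mean of its neighbors' vectors, then project with W.
    Shapes: X (n, d_in), A binary adjacency (n, n), W (d_out, 2*d_in)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (A @ X) / deg                   # mean over each node's neighbors
    H = np.concatenate([X, neigh_mean], axis=1)  # [x_i || AGGREGATE(N(i))]
    return act(H @ W.T)
```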
#### Ii-A5 Graph transformer
Graph transformer networks integrate graph networks with the transformer architecture [36]. The graph transformer (GraphTF) layer identifies useful connections, learns a soft selection, and composes an effective node representation [37].
\[\mathbf{x}_{i}^{(k)}=W_{1}^{(k)}\mathbf{x}_{i}^{(k-1)}+\sum_{j\in\mathcal{N}( i)}\alpha_{i,j}W_{2}^{(k)}\mathbf{x}_{j}^{(k-1)}\]
where the attention coefficients \(\alpha_{i,j}\) are computed via multi-head dot product attention [36, 37].
#### Ii-A6 Topology adaptive graph convolutional network
The topology adaptive graph convolutional network (TAGCN) is a generalization of GCN and adopts a set of fixed-size learnable filters to perform convolutions on graphs, which is adaptive to the topology of the graph [38]. The core layer of TAGCN is as follows
\[X^{(k)}=\sum_{p=1}^{P}\left(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}\right)^{p}X^{(k-1)}\mathbf{\Theta}_{p}^{(k-1)}\]
where \(p\) denotes the number of hops on the graph and \(P\) is the maximum number of hops considered.
## III Experiments
### _Results on SRE14 i-vector database_
The SRE14 i-vector challenge [39] takes vectors instead of speech as input to compare different speaker verification backends fairly. The dataset is gender independent and contains 1306 speaker models, 9634 test segments, and 12582004 trials. Each speaker model has 5 i-vectors. Trials are randomly divided into a progress subset (40\(\%\)) and an evaluation subset (60\(\%\)). In addition, NIST provided a development set containing 36572 i-vectors. All i-vectors are 600 dimensional. We take both training and test datasets to construct graphs with nearest neighbor algorithms. The development data with labels are used to train the LDA, PLDA, and GNN backends. The default dimensions of LDA and PLDA are 250 and 50, respectively. The GNN layer is implemented with PyTorch Geometric. Unless otherwise specified, our model architecture includes two layers of GNN+BN, a linear layer, and a softmax output followed by a cross-entropy loss. The g-vectors (see Fig. 2) are extracted for the final cosine decision. The model is trained for \(600\) epochs with a fixed learning rate of \(10^{-4}\) and a weight decay parameter of \(5\times 10^{-4}\).
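For concreteness, the following PyTorch Geometric sketch mirrors the described architecture (two GNN+BN blocks, a linear g-vector layer, and a softmax output over training-set speakers). The hidden and embedding dimensions and the ReLU activation are assumptions, and `GCNConv` can be swapped for any variant of Section II-C.

```python
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GNNBackend(nn.Module):
    """Two GCN+BN blocks, a linear layer read out as g-vectors,
    and a softmax output over the speakers in the training set."""
    def __init__(self, in_dim=250, hid_dim=256, emb_dim=128, n_speakers=1000):
        super().__init__()
        self.conv1, self.bn1 = GCNConv(in_dim, hid_dim), nn.BatchNorm1d(hid_dim)
        self.conv2, self.bn2 = GCNConv(hid_dim, hid_dim), nn.BatchNorm1d(hid_dim)
        self.linear = nn.Linear(hid_dim, emb_dim)   # g-vector layer
        self.out = nn.Linear(emb_dim, n_speakers)

    def forward(self, x, edge_index):
        x = F.relu(self.bn1(self.conv1(x, edge_index)))
        x = F.relu(self.bn2(self.conv2(x, edge_index)))
        g = self.linear(x)                          # g-vectors for cosine scoring
        return g, self.out(g)                       # logits for the CE loss
```

Training then proceeds with the stated fixed learning rate and weight decay, and at enrollment/test time only `g` is read out for the final cosine scoring.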
Table I compares the cosine, LDA, LDA+PLDA, DLB, and our proposed GNN backends on the SRE14 dataset. From the table, we can see that the EER and minDCF\({}_{14}\) of our proposed GNN backend are \(1.69\%\) and \(0.238\) on the Progress set, and \(1.55\%\) and \(0.218\) on the Evaluation set, which gains \(22.4\%\), \(0.4\%\), \(25.8\%\), and \(4.3\%\) relative performance improvement over LDA+PLDA, respectively.
### _Ablation study on SRE14 i-vector database_
Table II shows that the GAT achieves the best EER and the GCN the best minDCF\({}_{14}\). The GAT learns adaptive weights for edges through the attention mechanism, which means it can filter more effective neighbors to aid the decision. The GCN is good at capturing global information on the graph, which we believe explains its good performance. The structure of GATv2 is similar to that of the GAT, and its performance is also close to the GAT's. Both the GCN and GraphSAGE (with a mean aggregator in our experiments) aggregate in a similar way. However, the GCN takes advantage of the adjacency matrix to normalize the node and its neighbors, which may learn a more robust latent pattern. Similar to the GAT, for a node on the graph, the GraphTF also learns adaptive weights from its neighbors. Introducing multiple heads in the GraphTF brings more freedom to fit the local structure information, which may be adverse to obtaining a model with good generalization ability, especially under limited training data. The TAGCN considers multiple hops (3 hops in our experiments), which is suitable for social networks or recommendation systems. But it also brings instability, which we conjecture explains the poor results of TAGCN. Based on the above analysis, we conduct the following ablation experiments using the GAT.
We subsequently study the construction of the graph, i.e., its nodes and edges. Table III shows that the performance is best when the nodes are 250-dimensional vectors after LDA reduction and the edges are built by PLDA (50 dimensions). We conclude that dimensionality reduction is necessary for graph building.
From Table IV, we find that when the threshold is between 4 and 10, the proposed method maintains relatively stable and good performance. If the threshold is too low, a node has too many neighbors, which makes the local graph structures tend to be uniform; this is harmful to the classification task. If the threshold is too high, a node has too few neighbors, which makes the training data too sparse.
Table V shows that the performance is best with \(2\) layers. With \(1\) layer, the shallow model cannot effectively learn the nonlinear structure on the graph, while over-smoothing occurs if too many layers are adopted.
### _Results on VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H_
Our proposed method is also evaluated on the VoxCeleb1 dataset, using the development set of VoxCeleb2 [41] as the training data for our frontend x-vector extractor. The frontend model utilizes a TDNN network, a statistics pooling layer, and a fully-connected layer, optimized with the AAM-Softmax loss [12]. The evaluation is performed on all three official trial lists: VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H [42]. The inputs are 80-dimensional Fbank features with voice activity detection, augmented with the MUSAN [43] and RIR noise sources, and the model is trained with an Adam optimizer with a \(10^{-3}\) initial learning rate and a weight decay of \(10^{-4}\). The parameters of the GNN backends are re-tuned in a similar way as mentioned earlier. The experimental results shown in Table VI demonstrate the effectiveness of graph neural network (GNN) variants on the VoxCeleb1 dataset. Specifically, GCN outperforms all other GNN-based methods with the lowest EER and minDCF\({}_{0.01}\). These results suggest that GNN-based methods can effectively extract speaker-related discriminant information and are promising for speaker verification tasks.
## IV Conclusion
We propose a graph neural network (GNN) backend for speaker recognition. The proposed method can capture the structural relation among extracted i-vectors or x-vectors on a graph and thus allows us to take advantage of more information for classification compared with analyzing them in isolation. The embeddings extracted from the GNN, named g-vectors, are excellent representations and preserve rich graph properties in a low-dimensional Euclidean space, which contains more discriminant information. The detailed experimental results on the SRE14 i-vector and VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H datasets demonstrate that our proposed GNN backend is very effective. |
2310.10362 | Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks | Graphs have become an important modeling tool for web applications, and Graph
Neural Networks (GNNs) have achieved great success in graph representation
learning. However, the performance of traditional GNNs heavily relies on a
large amount of supervision. Recently, ``pre-train, fine-tune'' has become the
paradigm to address the issues of label dependency and poor generalization.
However, the pre-training strategies vary for graphs with homophily and
heterophily, and the objectives for various downstream tasks also differ. This
leads to a gap between pretexts and downstream tasks, resulting in ``negative
transfer'' and poor performance. Inspired by prompt learning in Natural
Language Processing (NLP), many studies turn to bridge the gap and fully
leverage the pre-trained model. However, existing methods for graph prompting
are tailored to homophily, neglecting inherent heterophily on graphs.
Meanwhile, most of them rely on the randomly initialized prompts, which
negatively impact on the stability. Therefore, we propose Self-Prompt, a
prompting framework for graphs based on the model and data itself. We first
introduce asymmetric graph contrastive learning for pretext to address
heterophily and align the objectives of pretext and downstream tasks. Then we
reuse the component from pre-training phase as the self adapter and introduce
self-prompts based on graph itself for task adaptation. Finally, we conduct
extensive experiments on 11 benchmark datasets to demonstrate its superiority.
We provide our codes at https://github.com/gongchenghua/Self-Pro. | Chenghua Gong, Xiang Li, Jianxiang Yu, Cheng Yao, Jiaqi Tan, Chengcheng Yu | 2023-10-16T12:58:04Z | http://arxiv.org/abs/2310.10362v3 | # Prompt Tuning for Multi-View Graph Contrastive Learning
###### Abstract.
In recent years, the "pre-train, fine-tune" paradigm has emerged as a promising approach to address the issues of label dependency and poor generalization in traditional GNNs. To further reduce the labeling requirement, the "pre-train, prompt" paradigm has also become increasingly common. In particular, prompt tuning is a popular alternative to fine-tuning in natural language processing, designed to narrow the gap between pre-training and downstream objectives. However, the study of prompting on graphs is still limited, lacking a framework that can accommodate commonly used graph pre-training methods and downstream tasks. In this paper, we propose a multi-view graph contrastive learning method as the pretext and design a prompt tuning method for it. Specifically, we first reformulate graph pre-training and downstream tasks into a common format. Second, we construct multi-view contrasts to capture relevant information of graphs with a GNN. Third, we design a prompt tuning method for our multi-view graph contrastive learning method to bridge the gap between pretexts and downstream tasks. Finally, we conduct extensive experiments on benchmark datasets to evaluate and analyze our proposed method.
node classification) using edge prediction as pretext and design a learnable prompt. This pretext overly emphasizes the topological information, thereby neglecting the important semantic information within the graph. Especially when dealing with node classification tasks on heterophilous graphs, the performance is naturally suboptimal due to the differing labels of neighboring nodes.
In this paper, our emphasis lies on the formulation between pre-training and downstream tasks and on prompting methods for GNNs. Specifically, we strive for a unified framework that offers flexibility in accommodating various types of graphs and diverse downstream tasks. However, the application of prompts on graphs encounters the following three challenges. First, graph data is more complex than text data. Graph data usually contains both graph structure and node features, which play different roles in various tasks. For example, node features in social networks have a great impact on the classification results, while in chemical substance networks, graph structure plays a more essential part. Therefore, prompts on graphs should consider both node features and graph structure and have the ability to transform either of them adaptively. Second, to fully leverage the knowledge of a pre-trained model and transfer it, the optimization objective of the pre-training step should be compatible with that of the downstream tasks. The key to solving this problem is how to reformulate pre-training and downstream tasks in the same template. Specifically, prompting in NLP usually reformulates both pre-training and downstream tasks as masked language modeling (Chen et al., 2017). However, the pre-training strategies vary for graphs under homophily and heterophily. Meanwhile, graph downstream tasks include node-level, edge-level and graph-level ones, and they often have different objectives. It is crucial but challenging to convert the pretexts and different-level tasks to a unified framework. Third, it is equally important to design specific prompts under the unified framework to identify the distinction between different downstream tasks. Prompt tuning in NLP employs handcrafted tokens (Wang et al., 2018; Wang et al., 2019) or learnable word vectors (Wang et al., 2019; Wang et al., 2019) to give different hints to different tasks. Accordingly, we should design task-specific prompting methods for graphs to guide downstream tasks to extract relevant prior knowledge from pre-trained models.
To address the aforementioned challenges, we propose a multi-view graph contrastive learning method and corresponding prompt tuning strategies, namely, PGCL. More specifically, to address the first challenge, we present a common template by constructing graph instances and focus on two crucial graph components: node features and graph structure. We establish two views, namely, a semantic view and a contextual view, to capture the corresponding information on graphs. To address the second challenge, we reformulate pre-training and downstream tasks into the same format, which aims to compute the similarity of graph representations from both the semantic view and the contextual view. In this work, we adopt contrastive learning as the self-supervised pre-training task, which can be understood as calculating the similarity between an anchor and a positive/negative sample with the goal of reducing the distance between the anchor and the positive sample, while enlarging that between the anchor and the negative sample in the latent space. Accordingly, downstream tasks can be reformulated to calculate the similarity of semantic and contextual views among the graph instances, which bridges the gap between the pretext and different downstream tasks. To address the third challenge, we distinguish different downstream tasks by way of a fusion operation and a learnable prompt. First, we fuse representations from the semantic view and the contextual view into a comprehensive representation. Then we use a multi-view learnable prompt to extract the most relevant prior knowledge from the representations of both views for downstream tasks. In summary, our main contributions can be summarized as follows:
\(\bullet\) We propose PGCL, a novel multi-view pre-training and unified downstream task prompting framework. To the best of our knowledge, PGCL is the first graph prompt framework that can accommodate various types of graphs and downstream tasks.
\(\bullet\) We propose a prompting strategy for PGCL, hinging on a fusion operation and a learnable prompt design to transfer the pre-trained knowledge to different downstream tasks for improving performance.
\(\bullet\) We conduct extensive experiments on \(12\) benchmark datasets to evaluate the performance of PGCL. Our results show its superiority over other state-of-the-art competitors.
## 2. Related Work
### Graph Neural Networks
Recently, GNNs have received significant attention for Web applications and there have been many GNN models proposed (Krizhevsky et al., 2014; Kipf and Welling, 2015; Kipf and Welling, 2016) on homophilic graphs. Their key idea boils down to a message-passing framework, in which each node derives its representation by receiving and aggregating messages from its neighboring nodes recursively. Moreover, there are also many studies on designing GNNs for heterophilic graphs (Chen et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). Existing graph neural network methods on heterophilic graphs can mainly be divided into two categories. One is to capture information from distant nodes (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), the other is to adaptively aggregate useful information from the neighborhood by refining the GNN architecture (Chen et al., 2017; Wang et al., 2019). At graph levels, representation learning requires an additional readout operation (Chen et al., 2017; Wang et al., 2019; Wang et al., 2019) which summarizes the global information of a graph by aggregating node representations. Recent research has also focused on addressing the challenges posed by insufficient data labels in order to make graph learning more adaptive. Furthermore, there is a growing interest in improving the model's generalization when it is transferred to new domains. To tackle these issues, many studies have shifted their focus towards graph unsupervised learning and pre-training methods, as opposed to traditional supervised learning approaches.
### Graph Pre-training
Inspired by pre-trained techniques in NLP, tremendous efforts have been devoted to pre-training graph models. Effective pre-training strategies include two types of unsupervised learning: generation-based methods and contrast-based methods. Generation-based methods such as GAE (Krizhevsky et al., 2014), GraphMAE (Krizhevsky et al., 2014) and SeeGera (Krizhevsky et al., 2014) reconstruct the graph data from the perspectives of features and structure of the graph, and use the input data as the supervision signal. Contrast-based methods construct representations under different views and maximize their agreement. In particular, GRACE (Wang et al., 2019) pulls the representations of the same node closer under different augmentations and pushes away the representations
of other nodes. GraphCL (Gordes and Riedl, 2017) brings the graph-level representations closer under different views to ensure perturbation invariance. DGI (Wang et al., 2017) and MVGRL (Wang et al., 2018) maximize the mutual information between node-level representations and graph-level representations. For heterophilic graphs, GREET (Wang et al., 2019) discriminates homophilic edges from heterophilic edges and uses low-pass and high-pass filters to capture the corresponding information. NWR-GAE (Wang et al., 2019) emphasizes the graph topology and reconstructs the neighborhoods based on the local structure and features. However, the above approaches do not consider the gap between pre-training and downstream objectives, which limits their generalization ability.
### Graph Prompt Learning
Recognizing the existing gap between pre-training and downstream tasks, recent studies have aimed to bridge this gap and shifted their focus towards prompt learning. Many effective prompt methods were first proposed in the NLP area, including hand-crafted prompts (Wang et al., 2017; Wang et al., 2018) and continuous prompts (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). However, only a few works such as GPPT (Wang et al., 2019) and GraphPrompt (Wang et al., 2019) exist in the graph domain. GPPT pre-trains a GNN model based on link prediction and leverages a sophisticated design of learnable prompts, but it only works for node classification. GraphPrompt introduces a unification framework for pretexts and downstream tasks and employs a learnable prompt to assist downstream tasks. Both of them employ link prediction as the pre-training task, but this pretext is limited: it overemphasizes the graph topology without considering the importance of node attributes.
## 3. Preliminaries
In this section, we introduce the notations used in this paper and also some GNN basics.
### Notations
Let \(G=(V,E)\) be an undirected and unweighted graph, where \(V\) is the set of nodes and \(E\) is the set of edges. \(\mathbf{X}\in\mathbb{R}^{|V|\times d}\) is the feature matrix where the \(i\)-th row \(x_{i}\) is the \(d\)-dimensional feature vector of node \(u_{i}\in V\). \(\mathbf{A}\in\mathbb{R}^{|V|\times|V|}\) denotes the binary adjacent matrix with \(\mathbf{A}_{i,j}=1\) if \(e_{i,j}\in E\) and \(\mathbf{A}_{i,j}=0\) otherwise. The neighboring set of node \(v\) is denoted as \(\mathcal{N}(v)\). In addition, we denote a set of graphs as \(\mathcal{G}=\{G_{1},G_{2},...\}\).
### Graph Neural Networks
GNNs adopt a message passing mechanism, where the representation of each node is updated by aggregating messages from its local neighboring nodes, and then combining the aggregated messages with the node's own representation. Generally, given a GNN model \(f(\cdot)\), message passing in the \(l\)-th layer can be divided into two operations: one is to aggregate information from a node's neighbors while the other is to update a node's representation. Given a node \(v\), these two operations are formulated as:
\[m_{v}^{(l)}=\texttt{AGGREGATE}^{(l)}\{h_{v}^{(l-1)},\forall u\in\mathcal{N }(v)\}, \tag{1}\]
\[h_{v}^{(l)}=\texttt{COMBINE}^{(l)}\{h_{v}^{(l-1)},m_{v}^{(l)}\}, \tag{2}\]
where \(m_{v}^{(l)}\) and \(h_{v}^{(l)}\) denote the message vector and representation of node \(v\) in the \(l\)-th layer, respectively. \(\texttt{AGGREGATE}^{(l)}(\cdot)\) and \(\texttt{COMBINE}^{(l)}(\cdot)\) are two functions in each GNN layer. Note that in the first layer, the input node embedding \(h_{v}^{0}\) can be initialized as the node features in \(\mathbf{X}\). The total learnable GNN parameters can be denoted as \(\Theta\). For brevity, we simply denote the output node representations of the last layer as \(h_{v}\).
## 4. Methodologies
In this section, we introduce the proposed framework of PGCL, which is illustrated in Fig. 1. First, we present a common template and unify various graph tasks in the same format. Second, we propose a multi-view pre-training approach to capture relevant information from different views. Third, we design prompts for specific tasks to further narrow the gap between the pre-training and downstream phases. Next, we elaborate on the main components.
### Unified Framework
In the following, we present how the framework facilitates a unified perspective on pre-training and downstream tasks.
#### 4.1.1. Graph instances under semantic and contextual view
**Motivation of constructing graph instances.** The success of prompts in NLP relies on the shared task template. However, graph-related tasks including node-level, edge-level and graph-level are far from similar. Generally, operations on graphs such as "modifying features" at node-level or "adding/deleting edges" at edge-level can both be considered as basic operations of graph-level. Compared with node-level and edge-level, graph-level tasks are more general and graph-level knowledge can be effectively transferred to other levels (Wang et al., 2019). So we follow (Wang et al., 2019; Wang et al., 2019) and uniformly perform the node-level, edge-level tasks on graph-level through constructing graph instances. At node-level, we expand the target node \(v\) on a graph \(G=(V,E)\) into a subgraph \(S_{v}=(V(S_{v}),E(S_{v}))\) of its local area, where its set of nodes and edges are respectively given by
\[V(S_{v})=\{u\in V\mid d(u,v)\leq\delta\}, \tag{3}\]
\[E(S_{v})=\{(u,u^{\prime})\in E\mid u\in V(S_{v}),u^{\prime}\in V(S_{v})\}, \tag{4}\]
where \(d(u,v)\) gives the shortest distance between nodes \(u\) and \(v\) on \(G\) and \(\delta\) is a predetermined threshold. \(S_{v}\) consists of the nodes within \(\delta\) hops from node \(v\) and the edges between those nodes. At graph-level, the maximum subgraph \(S_{G}\) of a graph \(G\) is the graph itself (i.e., \(S_{G}=G\)), which naturally embodies the information of all nodes in \(G\).
**Semantic and contextual views.** Since we use graph instances to represent both nodes and graphs, the (sub)graph where the graph instance resides preserves rich self-information and contextual information by neighboring node features and connections (Gordes and Riedl, 2017; Wang et al., 2019; Wang et al., 2019), which play distinct roles in various downstream tasks. Therefore, we establish two views, namely the semantic view and the contextual view, to capture the node feature information and the topological structure information of graph instances, respectively. The semantic view for a graph instance mainly focuses on the features of nodes. This view describes the nodes in the (sub)graph with their intrinsic properties. The contextual view for a graph instance primarily emphasizes the information derived from neighboring connections. This view characterizes nodes in the (sub)graph by considering their local neighborhoods, which is the main scope of most GNN encoders. By establishing and complementing semantic
and contextual views, we aim to capture information from graphs in a more comprehensive manner.
**Embedding of graph instances.** To obtain the embedding \(z_{x}\) of graph instance \(x\), we follow the standard approach and employ a READOUT operation to aggregate the representations of nodes in the (sub)graph \(S_{x}\) of \(x\). Considering a \(K\)-layer GNN \(f(\cdot)\) and node representation \(h_{u}^{(k)}\) generated by it,
\[z_{x}=f(S_{x})=\texttt{READOUT}(h_{u}^{(k)}:v\in V(S_{x}),k\in K). \tag{5}\]
The choice of the aggregation scheme for READOUT is flexible, including sum, max and mean pooling and more advanced techniques [44, 45]. We simply use sum pooling in our implementation.
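A minimal sketch of Eq. (5) with sum pooling is given below, assuming every layer's output has the same dimensionality; `layer_outputs` collects the node matrices produced by all \(K\) layers and `node_idx` indexes the nodes of \(V(S_{x})\).

```python
import torch

def instance_embedding(layer_outputs, node_idx):
    """Sum-pooling READOUT over the nodes of a (sub)graph instance,
    accumulated across all K GNN layers (Eq. 5)."""
    per_layer = [h[node_idx].sum(dim=0) for h in layer_outputs]  # sum over nodes
    return torch.stack(per_layer).sum(dim=0)                     # sum over layers
```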
#### 4.1.2 Unified task template.
**Motivation of adopting contrastive learning.** Since we expand the target node into a graph instance, we naturally treat the target node's label as this graph instance's label. Similarly, at edge-level, a pair of nodes can be treated as a positive sample if there is an edge connecting them, or a negative sample if not. When we treat the edge's label as the label of this pair of graph instances, we can translate the edge-level task into relationship learning between two graph instances. As can be seen in Fig. 2, link prediction is anchored on the similarity of representations for pairs of nodes. Intuitively, the representations of two adjacent nodes shall be more similar than those of non-adjacent nodes. For classification tasks, the graph/node representations of the same class shall exhibit higher similarity compared to those from different classes. It is worth noting that the core idea of contrastive learning is calculating the similarity between an anchor and a positive/negative sample with the goal of reducing the distance between the anchor and the positive sample, while enlarging that between the anchor and the negative sample. Through the similarity calculation in Fig. 2, we unify graph contrastive learning as the pretext with downstream tasks into a common task template.
Next, we will formally define the template for downstream tasks. Let \(z_{x}\) be the representation vector of (sub)graph \(S_{x}\) for graph instance \(x\). How to fuse semantic and contextual information to obtain the representation of graph instance will be described in Section 4.3. Let \(sim(\cdot,\cdot)\) be the cosine similarity function. Three downstream tasks (node classification, graph classification and link prediction) can be mapped to the similarity computation, which is formalized below.
**Graph instance classification.** Under our framework, node classification and graph classification are unified as graph instance classification. Consider a set of graph instances \(\mathcal{G}\) with a set of classes \(C\), and a set of labeled graph instances \(\mathcal{D}=\{(G_{1},Y_{1}),(G_{2},Y_{2}),...\}\) where \(G_{i}\in\mathcal{G}\) and \(Y_{i}\) is the corresponding label of \(G_{i}\). We follow the \(k\)-shot setting in [27]: there are exactly \(k\) pairs of \((G_{i},Y_{i}=c)\in\mathcal{D}\) for each class \(c\in C\). We define a class-prototype represented by \(\tilde{z}_{c}\) for each class \(c\). It is worth noting that the class-prototype is a "virtual" graph instance in the same latent space.
Figure 1. The overall framework of PGCL
Figure 2. Illustration of the motivation.
The class-prototype representation can be obtained through representation learning (Srivastava et al., 2017). However, the representation learning of class-prototypes in few-shot settings poses challenges due to the limited annotated data. Therefore, we follow (Zhu et al., 2017) and employ the mean representation of labeled graph instances for each class as class-prototypes:
\[\widetilde{z_{c}}=\frac{1}{k}\sum_{(G_{i},Y_{i})\in\mathcal{D},Y_{i}=c}z_{G_{i}}. \tag{6}\]
Given a graph instance \(G_{j}\) not in the labeled set \(\mathcal{D}\), its class label \(Y_{j}\) shall be
\[Y_{j}=\arg\max_{c\in C}sim(z_{G_{j}},\widetilde{z_{c}}) \tag{7}\]
Intuitively, the graph instance shall belong to the class whose class-prototype is the most similar to itself.
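The following sketch implements Eqs. (6)-(7): prototypes are the means of the \(k\) labeled embeddings per class, and a query is assigned to the most similar prototype. The assumption is that labels are integers in \(\{0,\dots,|C|-1\}\).

```python
import torch
import torch.nn.functional as F

def classify_instance(z_query, z_support, labels, n_classes):
    """k-shot classification: mean class-prototypes (Eq. 6),
    then argmax of cosine similarity (Eq. 7)."""
    protos = torch.stack(
        [z_support[labels == c].mean(dim=0) for c in range(n_classes)])
    sims = F.cosine_similarity(z_query.unsqueeze(0), protos, dim=1)
    return int(sims.argmax())
```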
**Link prediction.** Under our framework, link prediction is modeled as the similarity computation between pairs of graph instances. Given a graph \(G=(V,E)\) and a triplet of nodes \((v,a,b)\) such that \((v,a)\in E\) and \((v,b)\notin E\), we shall have
\[sim(z_{v},z_{a})>sim(z_{v},z_{b}) \tag{8}\]
It is worth noting that the link prediction task is based on the homophily assumption, i.e., the representation for \(S_{v}\) of \(v\) shall be more similar to that of a node adjacent to \(v\) than that of another non-adjacent node.
In summary, we reformulate the pretext and downstream tasks on graphs into a common task template: similarity learning among graph instances, which lays the foundation of the pre-training and prompting strategies we introduce in the following subsections.
### Multi-View Pre-Training
In this section, we will introduce our multi-view pre-training strategy, which captures relevant information on graphs through semantic and contextual contrasts.
**Semantic View.** Semantic-level contrast aims to encourage the learned representations of graph instances with similar features to be consistent. Given a graph instance \(x\) and its (sub)graph \(S_{x}=(V,E)\), we employ a perturbation \(\tau_{\alpha}\) on the initial feature matrix \(\mathbf{X}\) to generate a new feature matrix as the positive sample:
\[\widetilde{\mathbf{X}}\sim\tau_{\alpha}(\mathbf{X}), \tag{9}\]
where \(\widetilde{\mathbf{X}}\) is the augmented feature matrix. We apply perturbations by altering only the features of nodes while keeping the graph structure unchanged. Specifically, we randomly mask the initial node features in different dimensionality with a probability. To independently encode the features without taking into account the structure information, we construct a unit matrix \(\mathbf{I}\in\mathbb{R}^{|V|\times|V|}\) as the adjacency matrix and feed it into GNN \(f(\cdot)\) along with features. The semantic representation obtained is denoted by \(z^{s}\) and corresponding augmentation are denoted by \(\widetilde{z^{s}}\):
\[z^{s}=f_{\Theta}\left(\mathbf{X},\mathbf{I}\right),\widetilde{z^{s}}=f_{\Theta}\left(\widetilde{\mathbf{X}},\mathbf{I}\right), \tag{10}\]
where \(\Theta\) denotes the parameters of the GNN encoder. Then, a non-linear projection \(g(\cdot)\) is used to map the representations to the latent space where the contrastive loss is applied, as advocated in (Bordes et al., 2017). Specifically, a two-layer perceptron (MLP) is applied to obtain \(\mathbf{z^{s}}\) and \(\widetilde{\mathbf{z^{s}}}\):
\[\mathbf{z^{s}}=g(z^{s}),\widetilde{\mathbf{z^{s}}}=g(\widetilde{z^{s}}). \tag{11}\]
We construct the semantic-level contrastive loss based on the normalized temperature-scaled cross entropy loss (NT-Xent) (Bordes et al., 2017). During the pre-training phase, we randomly select \(N\) graph instances from the whole dataset as a mini-batch. For each graph instance \(G_{i}\) in the mini-batch, we construct the positive pair \(\left(\mathbf{z^{s}}_{G_{i}},\widetilde{\mathbf{z^{s}}}_{G_{i}}\right)\); negative pairs are not explicitly sampled but generated from the other graphs and their corresponding augmentations within the same mini-batch. The semantic contrastive loss is formulated as:
\[\mathcal{L}_{s}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{exp(sim\left(\mathbf{z^{s}}_{G _{i}},\widetilde{\mathbf{z^{s}}_{G_{i}}}\right)/\tau)}{\sum_{j=1}^{N}exp(sim\left( \mathbf{z^{s}}_{G_{i}},\widetilde{\mathbf{z^{s}}_{G_{j}}}\right)/\tau)}, \tag{12}\]
where \(\tau\) denotes the temperature parameter to control the shape of the output distribution.
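Eq. (12) is conveniently implemented as a cross-entropy over the batch similarity matrix with the diagonal entries as targets; the temperature value below is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent(z, z_aug, tau=0.5):
    """NT-Xent loss (Eq. 12): row i of z is the anchor, row i of z_aug
    its positive; the remaining rows of z_aug act as negatives."""
    z = F.normalize(z, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    logits = z @ z_aug.t() / tau                       # (N, N) cosine similarities
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)            # -mean log softmax of diagonal
```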
**Contextual View.** Context-level contrast aims to encourage graph instances with similar topology to be consistent. When constructing positive samples under the contextual view, it should be ensured that the semantic information of graph instances remains unchanged. Therefore, we only introduce a perturbation \(\tau_{\beta}\) on the adjacent matrix \(\mathbf{A}\) while preserving the semantic information:
\[\widetilde{\mathbf{A}}\sim\tau_{\beta}\left(\mathbf{A}\right). \tag{13}\]
Specifically, we randomly drop some edges in the graph with a probability, which is equivalent to removing a small portion of nodes to alter the contextual information. To capture the contextual information of graph instance, we use feature matrix and adjacency matrix as the input of GNN \(f(\cdot)\). The contextual representation obtained is denoted by \(z^{c}\) and the corresponding augmentation are denoted by \(\widetilde{z^{c}}\),
\[z^{c}=f_{\Theta}\left(\mathbf{X},\mathbf{A}\right),\widetilde{z^{c}}=f_{\Theta}\left(\mathbf{X},\widetilde{\mathbf{A}}\right). \tag{14}\]
Similar to the contrastive loss under the semantic view, we construct the contextual contrastive loss after the non-linear projection \(g(\cdot)\):
\[\mathcal{L}_{c}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{exp(sim\left(\mathbf{z^{c}}_{G _{i}},\widetilde{\mathbf{z^{c}}_{G_{i}}}\right)/\tau)}{\sum_{j=1}^{N}exp(sim\left( \mathbf{z^{c}}_{G_{i}},\widetilde{\mathbf{z^{c}}_{G_{j}}}\right)/\tau)}. \tag{15}\]
Note that we share the same GNN encoder between two views and the loss is parameterized by \(\Theta\). To sum up, the pre-training loss \(\mathcal{L}_{pre}\) can be defined as:
\[\mathcal{L}_{pre}=\mathcal{L}_{s}+\lambda\mathcal{L}_{c}, \tag{16}\]
where \(\lambda\) is a weight factor for adjusting the relative importance of the two views. After pre-training, we freeze the model and then perform prompting on its outputs.
### Prompt Design for Downstream Tasks
Aligning the pretext with downstream tasks can enable more effective knowledge transfer. We next show how to unify pre-training and downstream prompt-tuning.
**Representation fusion and prompt design.** From the macroscopic perspective, node features and graph topology are both
essential components of a graph. To further enhance representation learning on graphs, we consider that the contextual information and the semantic information complement each other. We thus fuse them into a holistic representation that can adapt to different tasks. Here we simply use the CONCAT operation in our implementation. Formally, for each graph instance \(x\), we fuse its semantic representation \(z_{x}^{s}\) and contextual representation \(z_{x}^{c}\) to obtain \(x\)'s representation \(z_{x}=\texttt{CONCAT}(z_{x}^{s},z_{x}^{c})\). In this way, the similarity between two graph instances can be captured by their similarity in both semantic and contextual information. This also aligns with the pre-training objectives.
From the microscopic perspective, semantic view and contextual view play different roles in various tasks. For example, node classification (especially on heterophilous graphs) generally pays more attention to node features, while link prediction and graph classification focus on the structure information. Therefore, we should design prompts to extract the most relevant knowledge from representations of the two views. A naive way is to directly apply a linear transformation \(\mathbf{P}\) as learnable prompt, which can be formulated as:
\[z_{x}^{p}=\mathbf{P}\cdot\texttt{CONCAT}(z_{x}^{s},z_{x}^{c}), \tag{17}\]
where \(z_{x}^{p}\) is the prompted representation for graph instance \(x\). The linear transformation \(\mathbf{P}\) can be regarded as a reweighting between semantic and contextual view to adjust the representation for downstream tasks.
Further, we can also introduce a prompt vector \(\mathbf{p}^{s}\) for the semantic view and \(\mathbf{p}^{c}\) for the contextual view, respectively. After that, for each view, we perform element-wise reweighting to extract more fine-grained relevant information and derive the prompted semantic representation \(z_{x}^{ps}\) and contextual representation \(z_{x}^{pc}\):
\[z_{x}^{ps}=z_{x}^{s}\odot\mathbf{p}^{s},z_{x}^{pc}=z_{x}^{c}\odot\mathbf{p}^{ c}, \tag{18}\]
where \(\odot\) denotes the element-wise multiplication. Considering that the importance of these two views could vary in different tasks, we further employ a hyper-parameter \(\alpha\) to control their weights and overload \(z_{x}^{p}\) as:
\[z_{x}^{p}=\texttt{CONCAT}(z_{x}^{ps},\alpha z_{x}^{pc}). \tag{19}\]
Note that \(\alpha\) can be learned by using the attention mechanism. To align it with the pre-training objective, we set \(\alpha=\lambda\) for simplicity in our implementation, where \(\lambda\) is used in Equation 16 as a balance coefficient for the two views.
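A sketch of the prompted fusion of Eqs. (18)-(19) follows. Only the two prompt vectors are trainable, the encoder outputs \(z^{s}, z^{c}\) are assumed frozen, and initializing the prompts to all-ones (identity reweighting) is an assumption.

```python
import torch
import torch.nn as nn

class MultiViewPrompt(nn.Module):
    """Element-wise prompts for the semantic and contextual views
    (Eq. 18), fused by weighted concatenation (Eq. 19)."""
    def __init__(self, dim, alpha=1.0):
        super().__init__()
        self.p_s = nn.Parameter(torch.ones(dim))  # semantic prompt p^s
        self.p_c = nn.Parameter(torch.ones(dim))  # contextual prompt p^c
        self.alpha = alpha                        # view weight, set to lambda

    def forward(self, z_s, z_c):
        return torch.cat([z_s * self.p_s, self.alpha * (z_c * self.p_c)], dim=-1)
```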
**Prompt tuning.** To optimize the learnable prompt parameters, we next formulate the loss functions in prompt tuning.
For node/graph classification, given a labeled training set \(\mathcal{T}=\{(G_{1},Y_{1}),(G_{2},Y_{2}),\dots\}\) with a set of classes \(C\), where \(G_{i}\) is a labeled graph instance (e.g., a node or a graph) and \(Y_{i}\) is the class label of \(G_{i}\), the loss function for prompt tuning is then defined as:
\[\mathcal{L}_{prompt}=-\sum_{(G_{i},Y_{i})\in\mathcal{T}}\log\frac{exp(sim\left(z_{G_{i}}^{p},\widetilde{z_{Y_{i}}^{p}}\right)/\tau)}{\sum_{c\in C}exp(sim\left(z_{G_{i}}^{p},\widetilde{z_{c}^{p}}\right)/\tau)}, \tag{20}\]
where \(\widetilde{z_{c}^{p}}\) denotes the prompted representation of the prototype for class \(c\in C\). For link prediction, given a node \(v\) on graph \(G\), we randomly sample one positive node \(a\) from \(v\)'s adjacent neighbors, and one negative node \(b\) that does not directly link to \(v\), forming a triplet \((v,a,b)\). Our objective is to increase the similarity between the graph instances \(S_{v}\) and \(S_{a}\), while decreasing that between \(S_{v}\) and \(S_{b}\). We sample a number of triplets from the graph to construct a training set \(\mathcal{T}\). Then our objective is given as:
\[\mathcal{L}_{prompt}=-\sum_{(v,a,b)\in\mathcal{T}}\log\frac{exp(sim\left(z_{v}^{p},z_{a}^{p}\right)/\tau)}{\sum_{u\in\{a,b\}}exp(sim\left(z_{v}^{p},z_{u}^{p}\right)/\tau)}. \tag{21}\]
Note that, unlike fine-tuning, prompt tuning freezes the parameters from the pre-training stage and updates only a few parameters, e.g., \(\mathbf{P}\) and \(\mathbf{p}^{s},\mathbf{p}^{c}\) in our case. This significantly decreases the difficulty of model training. More analysis of the model complexity is presented in Section 6.
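For illustration, Eq. (21) for a batch of triplets reduces to the following sketch; the temperature value is an assumption, and only the prompt parameters receive gradients.

```python
import torch
import torch.nn.functional as F

def link_prompt_loss(z_v, z_a, z_b, tau=0.5):
    """Prompt-tuning loss for link prediction (Eq. 21) on a batch of
    prompted representations for triplets (v, a, b), each of shape (B, d)."""
    pos = F.cosine_similarity(z_v, z_a, dim=1) / tau   # sim(v, a)
    neg = F.cosine_similarity(z_v, z_b, dim=1) / tau   # sim(v, b)
    denom = torch.logsumexp(torch.stack([pos, neg]), dim=0)
    return -(pos - denom).mean()
```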
## 5. Experiments
In this section, we perform experiments on benchmark datasets to evaluate the proposed PGCL.
### Experimental Settings
**Datasets.** We employ 12 datasets in total, which can be divided into three groups. The first group is homophilous graphs, which include Cora, Citeseer, PubMed and DBLP (Cordes and Manning, 2015; Chen et al., 2016). These datasets are citation networks that are widely used for node classification and link prediction. In these datasets, nodes represent publications and edges are citations between them. Further, node features are the bag-of-words representations of keywords contained in the publications. The second group is heterophilous graphs. We adopt four public datasets: Chameleon, Cornell, Texas, and Wisconsin (Cordes and Manning, 2015). Specifically, these datasets are web networks, where nodes are web pages and edges are hyperlinks. The last group is graph classification datasets, including PROTEINS, COX2, ENZYMES and BZR (Cordes and Manning, 2015; Chen et al., 2016; Chen et al., 2016). These datasets are collections of molecular structure graphs.
**Baselines.** To evaluate the effectiveness of PGCL, we mainly compare it with state-of-the-art approaches from three main categories. **(1) End-to-end GNN methods**: GCN (Kipf and Welling, 2015), GraphSAGE (Kipf and Welling, 2015), GAT (Yang et al., 2016), GIN (Yang et al., 2016) and H\({}_{2}\)GCN (Yang et al., 2016). These methods directly train a GNN model on a specific task and work in an end-to-end manner.
**(2) Graph pre-training methods**: GAE (VGAE) (Vgg and Welling, 2015), DGI (Yang et al., 2016), InfoGraph (Zhu et al., 2017), GRACE (Wang et al., 2017), MVGRL (Kipf and Welling, 2015) and GraphCL (Yang et al., 2016). These methods pre-train a GNN model in a self-supervised way and fine-tune it for the downstream task. **(3) Graph prompt methods**: GPPT (Yang et al., 2016) and GraphPrompt (Zhu et al., 2017). These methods utilize the link prediction task for pre-training, and reformulate downstream tasks into a common template. Note that meta-learning methods cannot be compared in our setting, as they require labeled data in their base classes for the meta-training phase.
**Setup.** To evaluate PGCL's ability to better utilize the capabilities of the pre-trained model and to generalize across different tasks, we mainly consider three typical types of downstream tasks, i.e., node classification and graph classification in few-shot settings, and link prediction. For node classification and graph classification, we follow GraphPrompt (Zhu et al., 2017) and construct a series of \(k\)-shot classification tasks. The details of task construction will be elaborated later when reporting the results. For task evaluation, as the \(k\)-shot tasks are balanced classification, we employ accuracy as the evaluation metric. For all the baselines, based on the authors' code and
default settings, we further tune their hyper-parameters to optimize their performance.
### Performance Comparison
We conduct various types of downstream tasks, namely, few-shot node classification on homophilous and heterophilous graphs, few-shot graph classification, and link prediction, and compare the performance of PGCL with other methods.
**Few-shot on homophilous graphs.** We conduct node classification on four homophilous datasets, i.e., Cora, Citeseer, PubMed and DBLP. Following the \(k\)-shot setup (Zhou et al., 2017), we generate a series of few-shot tasks for model training and validation. In particular, we randomly generate ten 1-shot node classification tasks (i.e., we randomly sample 1 node per class) for training and validation, respectively. Each training task is paired with a validation task, and the remaining nodes not sampled by the pair of training and validation tasks are used for testing. Table 1 illustrates the results of few-shot node classification on homophilous graphs. We have the following observations:
(1) End-to-end GNN models achieve poor performance in most cases, demonstrating that they heavily depend on task-specific labeled data as supervision.
(2) Compared to end-to-end GNN models, pre-training GNN models achieve even worse performance on PubMed. This implies that they have not effectively transferred knowledge to downstream tasks.
(3) PGCL outperforms all the baselines across all datasets, demonstrating that the unified framework can fully leverage the capabilities of pre-trained models by aligning the objectives of pre-training and downstream tasks.
**Few-shot on heterophilous graphs.** We conduct node classification on four heterophilous datasets, i.e., Chameleon, Cornell, Texas and Wisconsin, following the \(k\)-shot setup of the experiments on homophilous graphs. Note that we additionally consider MLP as a baseline in these experiments. Table 2 shows the results of few-shot node classification on heterophilous graphs. We have the following observations:
(1) PGCL consistently leads to the best results on all the datasets and shows superiority over H\({}_{2}\)GCN, which is specially designed for graphs with heterophily.
(2) GAE, GPPT and GraphPrompt achieve poor performance on heterophilic datasets. This implies that link prediction as a pretext overemphasizes the structure information, thereby neglecting the important semantic information on graphs.
(3) MLP achieves good results on Cornell, Texas and Wisconsin, indicating that node features alone can play a vital role. The basic GCN performs relatively well on Chameleon, suggesting that contextual information counts too. Therefore, capturing information on graphs through semantic and contextual views, as PGCL does, provides a more comprehensive approach.
**Few-shot on graph classification.** We conduct graph classification on four datasets, i.e., PROTEINS, ENZYMES, COX2 and BZR. For each dataset, we randomly generate 100 5-shot classification tasks for training and validation, following a process similar to that of the node classification tasks. We illustrate the results of few-shot graph classification in Table 3, and have the following observations:
(1) GraphPrompt and PGCL outperform other baselines, once again demonstrating that the effectiveness of the prompt design and unified framework for pre-training and downstream tasks.
(2) Note that we use the raw features in these datasets to initialize the input feature vectors, so PGCL can simultaneously capture information from both features and graph structure. The superior results on graph classification demonstrate that the fusion operation of PGCL provides more inter-class information in the few-shot setting.
**Link prediction.** We conduct link prediction on three datasets, i.e., Cora, Citeseer and PubMed. We follow previous studies (Cong et al., 2019) to construct the train/valid/test edge sets. To be specific, we randomly split all edges into three sets, i.e., the training set (85%), the validation set (5%), and the test set (10%), and evaluate the performance based on AUC and AP scores. We compare our PGCL with two generation-based methods, GAE and VGAE, and two contrastive methods, GRACE and MVGRL. Table 4 lists the results; note that PGCL_np is our method without prompt tuning. We have the following observations:
(1) Generative baselines perform generally better than contrastive methods and PGCL_np, which is consistent with common belief.
(2) PGCL leads to a much larger performance gap compared with PGCL_np, outperforms the generative baselines on Cora and Citeseer, and achieves comparable results on PubMed, indicating that the
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Method** & **Cora** & **Citeseer** & **PubMed** & **DBLP** \\ \hline
GCN & 43.04\(\pm\)8.06 & 33.83\(\pm\)9.53 & 54.38\(\pm\)7.59 & 42.93\(\pm\)10.85 \\
GAT & 45.33\(\pm\)7.01 & 36.72\(\pm\)8.52 & 55.74\(\pm\)9.12 & 38.63\(\pm\)9.83 \\
GraphSAGE & 43.27\(\pm\)8.05 & 32.68\(\pm\)10.23 & 54.32\(\pm\)10.11 & 39.91\(\pm\)12.68 \\ \hline
GAE & 42.51\(\pm\)6.53 & 40.68\(\pm\)9.23 & 51.72\(\pm\)8.92 & 39.73\(\pm\)12.05 \\
DGI & 49.41\(\pm\)7.72 & 43.19\(\pm\)8.73 & 50.30\(\pm\)7.89 & 42.38\(\pm\)9.57 \\
MVGRL & 56.02\(\pm\)7.86 & 46.25\(\pm\)8.98 & 54.29\(\pm\)8.93 & 45.14\(\pm\)10.71 \\
GRACE & 55.56\(\pm\)8.76 & 46.64\(\pm\)5.98 & 54.16\(\pm\)9.59 & 47.78\(\pm\)11.90 \\ \hline
GPPT & 51.63\(\pm\)8.76 & 42.89\(\pm\)7.93 & 50.98\(\pm\)9.12 & 39.48\(\pm\)12.08 \\
GraphPrompt & 55.32\(\pm\)9.56 & 44.19\(\pm\)9.73 & 53.49\(\pm\)6.68 & 43.32\(\pm\)10.87 \\
PGCL & **57.51\(\pm\)8.09** & **48.12\(\pm\)7.23** & **59.94\(\pm\)8.16** & **49.07\(\pm\)7.31** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Node classification accuracy (%) on homophilous graphs. We highlight the best score for each dataset in bold.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Method** & **Chameleon** & **Cornell** & **Texas** & **Wisconsin** \\ \hline GCN & 26.84\(\pm\)7.52 & 23.93\(\pm\)9.33 & 19.48\(\pm\)11.12 & 19.84\(\pm\)13.67 \\ GAT & 25.75\(\pm\)5.46 & 23.77\(\pm\)10.25 & 20.40\(\pm\)12.07 & 18.27\(\pm\)10.19 \\ MLP & 22.67\(\pm\)2.88 & 29.61\(\pm\)11.34 & 33.74\(\pm\)8.82 & 36.54\(\pm\)10.72 \\ H\({}_{2}\)GCN & 26.95\(\pm\)3.73 & 28.38\(\pm\)12.52 & 34.37\(\pm\)15.82 & 35.21\(\pm\)15.35 \\ \hline GAE & 22.80\(\pm\)3.63 & 21.44\(\pm\)8.97 & 26.89\(\pm\)16.91 & 17.71\(\pm\)8.79 \\ DGI & 25.42\(\pm\)4.82 & 24.35\(\pm\)9.74 & 21.21\(\pm\)12.71 & 24.64\(\pm\)11.89 \\ MVGRL & 26.45\(\pm\)4.01 & 26.23\(\pm\)9.62 & 24.55\(\pm\)11.49 & 25.82\(\pm\)13.35 \\ GRACE & 24.68\(\pm\)3.79 & 19.51\(\pm\)8.07 & 18.58\(\pm\)9.36 & 19.95\(\pm\)9.94 \\ \hline GPPT & 25.05\(\pm\)3.68 & 27.57\(\pm\)9.13 & 22.96\(\pm\)12.89 & 30.44\(\pm\)10.77 \\ GraphPrompt & 25.62\(\pm\)4.66 & 28.67\(\pm\)7.24 & 23.13\(\pm\)11.89 & 28.54\(\pm\)7.64 \\ PGCL & **30.45\(\pm\)3.14** & **39.52\(\pm\)8.71** & **40.17\(\pm\)7.88** & **47.48\(\pm\)10.24** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Node classification accuracy (%) on heterophilous graphs. For each dataset, we highlight the best result in bold.
prompt design effectively tunes the output of the pre-trained model to adapt to the downstream task.
## 6. Model Analysis
**Time complexity.** We analyze the time complexity of the main components of our model. Assume an input graph has \(n\) nodes and \(m\) edges, and let the GNN encoder contain \(l\) layers. During the pre-training phase, the time complexity of the GNN encoder is \(O(dml+nd^{\prime}dl)\), where \(d\) is the dimensionality of the initial features and \(d^{\prime}\) is the dimensionality of the final representations. Let \(q\) be the number of selected negative samples in a batch; the complexity of the contrastive loss is \(O(nqd^{\prime})\). During the prompt tuning phase, the time complexity for a \(k\)-shot classification task with \(c\) classes is \(O(kc^{2}d^{\prime})\), and the time complexity for link prediction is \(O(nd^{\prime})\). During the inference phase, the time complexity for a classification task is \(O(cd^{\prime})\), and for link prediction it is \(O(d^{\prime})\).
**Parameter efficiency.** We also compare the number of parameters that need to be updated in a downstream classification task with a few representative models. As can be seen in Table 5, GCN works in an end-to-end manner, so it involves the largest number of parameters to update. GRACE and MVGRL employ a linear classifier for node classification, so the parameters of the classifier need to be updated in the downstream task. Our proposed PGCL not only outperforms the baselines GCN, GRACE and MVGRL, as shown earlier, but also requires the fewest parameters, demonstrating the superiority of graph prompting.
## 7. Conclusions
In this paper, we proposed PGCL, a multi-view pre-training and unified downstream-task prompting method. In particular, to narrow the gap between pre-training and downstream objectives on graphs, we reformulated pre-training pretext tasks and downstream tasks on graphs into a common template. In the pre-training phase, we proposed a multi-view contrastive learning method to capture the semantic and contextual structure information on graphs. In the prompt tuning stage, we introduced a learnable prompt strategy to transfer the pre-trained knowledge to different downstream tasks. Finally, we extensively evaluated the performance of our method on 12 public datasets. Our experimental results demonstrate the effectiveness of our framework.
|
2306.14184 | Solution of inverse problem for Gross-Pitaevskii equation with
artificial neural networks | We propose an Artificial Neural Network (ANN) design to solve the inverse
problem for a 1D Gross-Pitaevskii equation (GPE). More precisely, the ANN takes
the squared modulus of the stationary GPE solution as an input and returns the
parameters of the potential function and the factor in front of the GPE
non-linear term. From the physical point of view the ANN predicts the
parameters of a trap potential and the interaction constant of 1D Bose-Einstein
Condensate (BEC) by its density distribution. Using the results of numerical
solution of GPE for more than $30 000$ sets of GPE parameters as train and
validation datasets we build the ANN as a fast and accurate inverse GPE solver. | Stepan P. Pokatov, Tatiana Yu. Ivanova, Denis A. Ivanov | 2023-06-25T09:39:02Z | http://arxiv.org/abs/2306.14184v1 | # Solution of inverse problem for Gross-Pitaevskii equation with artificial neural networks
###### Abstract
We propose an Artificial Neural Network (ANN) design to solve the inverse problem for a 1D Gross-Pitaevskii equation (GPE). More precisely, the ANN takes the squared modulus of the stationary GPE solution as an input and returns the parameters of the potential function and the factor in front of the GPE non-linear term. From the physical point of view, the ANN predicts the parameters of a trap potential and the interaction constant of a 1D Bose-Einstein Condensate (BEC) from its density distribution. Using the results of the numerical solution of the GPE for more than 30000 sets of GPE parameters as training and validation datasets, we build the ANN as a fast and accurate inverse GPE solver.
## 1 Introduction
Ultra-cold gases are very promising objects for a wide range of practical and fundamental applications. These applications range from quantum metrology to quantum information processing [1, 2, 3]. The peculiar feature distinguishing degenerate quantum gases [4, 5, 6, 7, 8, 9] from thermal ensembles is the possibility to observe quantum superposition on the macroscopic level. Thus, methods of controlling degenerate quantum gases without disturbing their "quantumness" are of great interest. The control protocols, especially those designed for real-time applications, should be fast enough to minimize the effects of environmental noise. It is natural that such protocols involve some sort of feedback control based on the measurement of the gas density distribution. The extraction of information about the system is the key component of any feedback control scheme. Thus, a practical method to effectively extract information on a cold atomic ensemble is of great importance.
In the realm of classical systems, a promising tool for extracting information on system dynamics is Artificial Intelligence and, in particular, Machine Learning (ML). The application of ML to various problems of experimental and theoretical atomic and molecular physics is becoming more and more popular [10, 11, 12, 13, 14]. ML methods have proven their efficiency in optimizing the application of Density Functional Theory [15, 16, 17, 18, 19] and in solving the Schrödinger equation [20].
The ML methods have been applied to describe ensembles of atoms in lattices and trapped Bose-Einstein Condensates. In [21] the authors demonstrate the Artificial Neural Network (ANN) trained to solve Gross-Pitaevskii equation (GPE). It has been shown that the accuracy of such a solver is very high while the time required to get the solution is negligible compared to the time required to numerically solve GPE on a typical modern computer. Among multiple possible applications this ANN GPE solver can be used as a part of a real time control system.
However, it is even more important from the practical point of view to be able to solve the inverse GPE problem, that is, to find the potential or the interaction constant from the knowledge of the BEC density distribution. An inverse GPE solver can help to extract information from various BEC-based sensors and thereby increase their accuracy. There are multiple possible applications of such sensors in geophysics, metrology, and tests of fundamental physics [22, 23, 24]. In particular, setups based on BEC have been proposed for magnetic sensors [25] and gravitational wave detectors [26]. Furthermore, the possibility of quickly solving the inverse problem can help to improve the time resolution of the sensors.
One can claim that the solution of the mentioned inverse problem is only meaningful if the density distribution of the BEC can be one-to-one mapped to the set of parameters of the GPE. Unfortunately, a rigorous mathematical proof or disproof of this property of the mapping is unknown to the authors. The results presented below can be considered a practical way to address this question. Also, the conclusions we make below should not be extrapolated to arbitrary potentials, but should be restricted to cases with certain _a priori_ knowledge of the potential shape. In our research, we restrict ourselves to symmetric double-well potentials.
Bearing the disclaimer of the previous paragraph in mind, this paper demonstrates the feasibility of an ANN solving the inverse GPE problem. In particular, it is shown that an ANN can be designed and trained to accurately reconstruct the parameters of a double-well potential and the interaction constant of a BEC using the density distribution as the ANN input.
## 2 Model
The common mathematical tool to describe the steady state of a one-dimensional atomic BEC in an external potential \(V_{ext}(x)\) is the GPE for the condensate wavefunction \(\psi(x)\):
\[\left(-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V_{ext}(x)+g|\psi(x)|^{2} \right)\psi(x)=\mu\psi(x). \tag{1}\]
In this equation, the mass of an individual atom is \(m\), while \(\hbar\) is the reduced Planck constant. The low-energy collisions of atoms are described via the interaction constant \(g\). The right-hand side of (1) contains the chemical potential \(\mu\) of the considered ensemble.
Contrary to the commonly used normalization, we require
\[\int dx|\psi(x)|^{2}=1. \tag{2}\]
Thus, the BEC wavefunction describes not the density, but the probability density of the trapped atoms.
For our purpose, we need a non-trivial potential that can be characterized by only a few parameters. Otherwise, the training dataset could be huge and the training process could become too time-consuming. We chose the symmetric double-well external potential \(V_{ext}(x)\), as it is frequently discussed in the literature with respect to different possible applications. The potential function is then given by
\[V_{ext}(x)=8V_{min}\left(2\frac{x^{4}}{\xi^{4}}-\frac{x^{2}}{\xi^{2}}\right), \tag{3}\]
where \(V_{min}\) is the depth of the potential well and \(\xi\) is the distance between the potential minima. These two parameters have a transparent physical meaning and, apart from a gauge constant, completely characterize the potential function. The ANN discussed below is designed and trained to predict \(V_{min}\) and \(\xi\), as well as the interaction constant \(g\), using the BEC density values \(|\psi(x)|^{2}\) at certain sample points as an input.
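As a quick sanity check of Eq. (3), the following minimal Python sketch evaluates the potential on a spatial grid; the grid extent of \(\pm 10\) and the example parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def v_ext(x, v_min, xi):
    """Symmetric double-well potential of Eq. (3).

    The minima sit at x = +/- xi / 2 with value -v_min, so xi is the
    distance between the two minima and v_min the depth of each well.
    """
    return 8.0 * v_min * (2.0 * x**4 / xi**4 - x**2 / xi**2)

x = np.linspace(-10.0, 10.0, 128)   # 128-point grid, matching the ANN input size
V = v_ext(x, v_min=500.0, xi=7.0)   # illustrative values inside the training ranges
```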
## 3 Data Collection and ANN topology
The key element of the success of an ANN is the availability of high-quality labeled data. For our purpose, we generate data by solving the GPE with various potential parameters and interaction constants. The GPE is numerically solved using the Julia-based free framework QuantumOptics.jl [27]. Since the solution time for a single case is quite small, it is possible to generate density distributions for quite a dense grid of the problem parameters. Thus, there is no need for Design of Experiments methods [28].
The GPE was solved for the coupling strengths \(g\) in the range of 1000\(-\)10800 with a step of 200. For each \(g\), the depth of the potential well \(V_{min}\) ranges over 100\(-\)2500 with a step of 100. For each pair of \(g\) and \(V_{min}\), the distance between the potential minima \(\xi\) covers the range 5.0\(-\)9.8 with a step of 0.2. The units of the parameters are derived from the used convention \(\hbar=1\) and \(m=1\). The interaction constant \(g\) here differs from the usually used definition by a factor of the total number of atoms. Thus, the complete dataset contains \(50\times 25\times 25=31250\) entries. 70% of this dataset is used to train the model, and the rest is reserved for the evaluation of the ANN prediction accuracy.
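A minimal sketch of how such a parameter grid and a 70/30 split could be generated is shown below; the random seed and variable names are our own illustrative choices.

```python
import numpy as np
from itertools import product

g_vals    = np.arange(1000, 10801, 200)   # 50 coupling strengths
vmin_vals = np.arange(100, 2501, 100)     # 25 potential depths
xi_vals   = np.arange(5.0, 9.9, 0.2)      # 25 minima separations

params = list(product(g_vals, vmin_vals, xi_vals))
assert len(params) == 31250               # 50 * 25 * 25 parameter sets

rng = np.random.default_rng(0)
idx = rng.permutation(len(params))
n_train = int(0.7 * len(params))          # 70% training / 30% testing split
train_idx, test_idx = idx[:n_train], idx[n_train:]
```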
The solution of the GPE is performed on a grid of 128 points using the imaginary time propagation technique [29]. The input of the ANN accepts the density values on the uniform grid of points \(x_{i}\), \(i=1..128\). Thus, the input layer of the ANN contains 128 neurons. Since the ANN is designed to predict 3 parameters, there are 3 neurons in the output layer, as shown in figure 1.
Being limited to the 1D case and a moderate number (128) of input neurons, we decided to use a fully connected network. Such an ANN can result in more accurate predictions, provided a sufficient amount of data, as compared with, for example, a convolutional one.
We tested multiple designs of the ANN and eventually came up with the model shown in figure 1. The ANN contains 10 hidden layers, shown in orange, with the following numbers of neurons: 512, 256, 128, 96, 64, 32, 24, 16, 8, 6. These values were manually adjusted until the performance of the ANN met predefined criteria. The activation function for all neurons is LeakyReLU.
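The described topology can be assembled in Keras in a few lines. The following is a minimal sketch under the assumption of stand-alone Dense/LeakyReLU layers and a linear output layer; the helper name is hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_inverse_gpe_ann(n_grid=128, n_params=3,
                          hidden=(512, 256, 128, 96, 64, 32, 24, 16, 8, 6)):
    """Fully connected ANN mapping a density profile to (g, xi, V_min)."""
    model = models.Sequential([layers.Input(shape=(n_grid,))])
    for width in hidden:
        model.add(layers.Dense(width))
        model.add(layers.LeakyReLU())   # LeakyReLU activation for all neurons
    model.add(layers.Dense(n_params))   # linear output layer: g, xi, V_min
    return model
```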
The ANN model was implemented in Python using the Keras TensorFlow library [30]. To train the model, we applied the Adam algorithm with an initial learning rate of 0.001. This value was found to provide a reasonable convergence time. Following the strategy of stochastic gradient descent methods, the whole training dataset was divided into a number of mini-batches with 2000 elements in each mini-batch.
The process of training the model is shown in figure 2. It demonstrates rapid oscillations of the loss function on the validation set at the beginning of the learning process. The loss function is defined as
\[\mathrm{LF}\left(g^{(pr)},\xi^{(pr)},V_{min}^{(pr)}\right)=\frac{1}{N_{mb}}\sum_{n}\left[\left(g_{n}-g^{(pr)}\right)^{2}+\alpha\left(\xi_{n}-\xi^{(pr)}\right)^{2}+\beta\left(V_{min,n}-V_{min}^{(pr)}\right)^{2}\right], \tag{4}\]
where \(N_{mb}\) is the size of the mini-batch, and \(g^{(pr)}\), \(\xi^{(pr)}\), and \(V_{min}^{(pr)}\) are the current predictions of the ANN. In order to increase the weight of the parameters with smaller absolute values, the weighting coefficients \(\alpha\) and \(\beta\) are introduced.
As seen from figure 2, reasonable convergence requires more than approximately 6000 training epochs. To speed up the learning, the Early Stopping technique was used. This algorithm monitors the value of the loss function and decreases the learning rate if the loss value stabilizes. If decreasing the learning rate does not help, the Early Stopping algorithm terminates the training.
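In Keras, the loss of Eq. (4) and the described training schedule map naturally onto a custom loss plus the ReduceLROnPlateau and EarlyStopping callbacks. The sketch below reuses the build_inverse_gpe_ann helper from the previous sketch and assumes the target columns are ordered as \((g,\xi,V_{min})\); the values of \(\alpha\), \(\beta\), and the patience settings are not given in the text and are placeholders.

```python
import tensorflow as tf

ALPHA, BETA = 1.0, 1.0   # weighting coefficients of Eq. (4); actual values unknown

def weighted_mse(y_true, y_pred):
    """Loss of Eq. (4), assuming target columns ordered as (g, xi, V_min)."""
    w = tf.constant([1.0, ALPHA, BETA])
    return tf.reduce_mean(tf.reduce_sum(w * tf.square(y_true - y_pred), axis=-1))

model = build_inverse_gpe_ann()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=weighted_mse)

callbacks = [
    # reduce the learning rate when the validation loss stabilizes ...
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=50),
    # ... and stop training if that no longer helps
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=200,
                                     restore_best_weights=True),
]
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
#                     batch_size=2000, epochs=10000, callbacks=callbacks)
```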
Figure 1: Schematic representation of the working ANN model. The ANN is fully connected, with 10 hidden layers shown in orange. The input layer is shown in blue.
## 4 Results
After training the ANN, we tested its predictions on the test dataset comprising 30% of the whole available data. The box plots for the prediction errors are shown in figures 3, 4, and 5. Here, the absolute values of the errors and their statistics are presented to indicate that the errors depend only weakly on the absolute value of the predicted parameter. Thus, larger parameter values are predicted with a better relative accuracy. The horizontal lines inside the boxes show the 0.5 quantile (median), while the lower (upper) edges of the boxes indicate the 0.25 (0.75) quantile. The length of the "whiskers" is 1.5 times the box size (the interquartile range) if there are data points outside of this range; otherwise, the length indicates the position of the most distant point.
The prediction errors tend to grow when the parameter values are close to the limits of the training parameter ranges.
Apart from this trend, in most cases (75%), the prediction error for the minima separation is \(\Delta\xi<0.025\), for the potential depth it is \(\Delta V_{min}<15\), and for the interaction constant \(\Delta g<20\). Thus, in the worst case, corresponding to low values of \(V_{min}\), the prediction error is estimated as about 15%. This clearly demonstrates the possibility of using an ANN to predict the GPE parameters from the density distribution.
Another informative way to describe the efficiency of the designed ANN is to use error statistics that ignore the values of the predicted parameters. The bar charts representing the fraction of the predictions with an error smaller than a given value are shown in figures 6, 7, and 8. The 95%-confidence interval for the minima separation is 0.035, which is less than 1% of the smallest value in the dataset. The confidence interval for the potential depth is 16, which is 16% of the minimal value in the dataset. For the interaction constant, the confidence interval is 32, which is about 3.2% of the minimal value in the dataset.
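These relative figures can be verified with a one-line computation against the smallest values of the training ranges (\(\xi\geq 5.0\), \(V_{min}\geq 100\), \(g\geq 1000\)):

```python
# 95% confidence intervals relative to the smallest values in the training ranges
for name, ci, smallest in [("xi", 0.035, 5.0), ("V_min", 16, 100), ("g", 32, 1000)]:
    print(f"{name}: {100 * ci / smallest:.1f}% of the smallest value")
# xi: 0.7%   V_min: 16.0%   g: 3.2%
```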
These plots can be interpreted as the probability of getting an error below a certain value, regardless of what value is predicted. In all cases, this probability grows rapidly
Figure 4: Box plot of prediction error for potential depth \(V_{min}\).
Figure 5: Box plot of prediction error for interaction constant \(g\).
for small errors and grows slowly for larger errors. The error value that separates these two trends approximately corresponds to the green area in figures 6, 7, and 8.
Note that the prediction of different parameters demonstrates slightly different behavior as the error value grows. In particular, the prediction of the minima separation \(\xi\) shows a faster growth of the probability for small errors and a slower growth for larger errors than the prediction of the potential depth \(V_{min}\) and the interaction constant \(g\). Thus, the prediction of the potential minima separation is easier for the ANN than the
Figure 6: Statistics of the minima distance error \(\Delta\xi\).
Figure 7: Statistics of the potential depth error \(\Delta V_{min}\).
prediction of other parameters. Interestingly, the same seems to be true for a human who can quantitatively predict the separation by simply measuring the distance between the BEC density peaks. However, the depth of the potential and the interaction constant can only be qualitatively estimated by looking at the width of the BEC density distribution.
## 5 Conclusions
We designed and trained an ANN that predicts the potential parameters and the interaction constant of a trapped BEC based on its density distribution. The training and validation datasets were generated by the numerical solution of the stationary GPE on a 1D grid of spatial points.
The fully connected ANN with 10 hidden layers was trained on a set of about 30000 data samples. The largest prediction error, 16%, is obtained for small values of the potential depth. The relative prediction error for the other parameters is in all cases below 3.2%. This clearly demonstrates the feasibility of the used approach and indicates the possibility of using it in practical applications.
It was found that the prediction of the distance between the minima of a double-well potential is in some sense easier for the ANN than the predictions of other parameters. Intuitively this can be understood by the fact that the distance between the minima has a more straightforward quantitative relation to an easily extracted distance between the peaks of the BEC density distribution.
Figure 8: Statistics of the interaction constant error \(\Delta g\).
## 6 Acknowledgment
We thank Vera Baturo from the Department of Photonics at Saint-Petersburg University for useful discussions and comments.
|
2301.07639 | A Comparative Analysis of Bias Amplification in Graph Neural Network
Approaches for Recommender Systems | Recommender Systems (RSs) are used to provide users with personalized item
recommendations and help them overcome the problem of information overload.
Currently, recommendation methods based on deep learning are gaining ground
over traditional methods such as matrix factorization due to their ability to
represent the complex relationships between users and items and to incorporate
additional information. The fact that these data have a graph structure and the
greater capability of Graph Neural Networks (GNNs) to learn from these
structures has led to their successful incorporation into recommender systems.
However, the bias amplification issue needs to be investigated while using
these algorithms. Bias results in unfair decisions, which can negatively affect
the company reputation and financial status due to societal disappointment and
environmental harm. In this paper, we aim to comprehensively study this problem
through a literature review and an analysis of the behavior against biases of
different GNN-based algorithms compared to state-of-the-art methods. We also
intend to explore appropriate solutions to tackle this issue with the least
possible impact on the model performance. | Nikzad Chizari, Niloufar Shoeibi, María N. Moreno-García | 2023-01-18T16:29:05Z | http://arxiv.org/abs/2301.07639v1 | A Comparative Analysis of Bias Amplification in Graph Neural Network Approaches for Recommender Systems
###### Abstract
Recommender Systems (RSs) are used to provide users with personalized item recommendations and help them overcome the problem of information overload. Currently, recommendation methods based on deep learning are gaining ground over traditional methods such as matrix factorization due to their ability to represent the complex relationships between users and items and to incorporate additional information. The fact that these data have a graph structure and the greater capability of Graph Neural Networks (GNNs) to learn from these structures has led to their successful incorporation into recommender systems. However, the bias amplification issue needs to be investigated while using these algorithms. Bias results in unfair decisions, which can negatively affect the company's reputation and financial status due to societal disappointment and environmental harm. In this paper, we aim to comprehensively study this problem through a literature review and an analysis of the behavior against biases of different GNN-based algorithms compared to state-of-the-art methods. We also intend to explore appropriate solutions to tackle this issue with the least possible impact on the model's performance.
recommender systems; Graph Neural Network (GNN); bias amplification; average popularity; Gini Index; sensitive features
## 1 Introduction
Currently, numerous users benefit from the advantage of purchasing different products and services online. The problem arises when people face too many options, which can cause an overload of information, leading to a difficult decision-making process. To overcome this problem, Recommender Systems (RSs) are used to provide users with personalized item recommendations [1, 2, 3, 4, 5].
Considering the vast usage of RSs in a wide range of application domains, it is important to make sure the recommendation results are not unfair. Bias is one of the most important concerns in RSs, which affects RSs' effectiveness [1, 2]. Biased recommendation results can cause an unbalanced and unfair allocation of resources and opportunities [6]. These biased decisions can lead to severe financial, societal, and reputational harm to individuals and companies. Furthermore, bias can result in discrimination, which is against regulations [7, 8, 9]. Bias also can cause severe legal, technological, and security harms [10]. The mentioned problems have opened the door to the investigation of the bias issue in RSs in recent years, and the number of articles addressing this topic has increased [2]. These problems derived from bias in RSs are the main motivation of this article.
Lately, deep learning methods have also improved significantly and found their way into RSs. Deep learning methods are capable of establishing a multi-layer, nonlinear, layer-to-layer interconnection network structure that aims to automatically extract representations of multi-level features from data [11]. These methods can significantly enhance the RSs' performance and address their problems. Among these methods, Graph Neural Network (GNN) algorithms have proven their usefulness in many learning tasks that require handling graph data, which contain rich information about relations between elements. GNNs can capture the dependencies in graphs by propagating information through the graph edges via message passing [12].
In order to address the limitations of traditional RSs, GNN algorithms have been successfully applied to them. The use of GNNs in multiple application domains has spread rapidly in recent years. This is mainly due to their power to learn from graphical information representations and to the advantages of deep learning methods.
In recommender systems, user-item interactions can be modeled as graphs. Besides, additional data can be used to improve recommendations, including social or contextual information. In the RS field, neural-network-based methods, especially those using deep learning algorithms, have been proposed as an alternative to Collaborative Filtering (CF) approaches, thanks to their power to learn complex relations between users and items. However, these methods can only operate on Euclidean space data since they are not designed to deal with high-order information structures [6; 12]. These drawbacks can be addressed by recent GNN techniques, which extend deep learning algorithms to the non-Euclidean space [13].
In GNNs, an iterative process is performed, where the information between nodes is propagated and aggregated from neighbors. So far, there are several proposals for GNN-based recommender models, both general and focused on sequential or session-based recommendations [14]. Different GNN variants can be used to accomplish these tasks. Among them, Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and Graph Recurrent Networks (GRNs) have shown higher performance than other machine learning methods in a wide variety of areas [12], including recommender systems. Despite the proclaimed strengths of GNN-based models, there are still some challenges to be addressed for these approaches to prevail over the classical methods. One of them is dealing with multiple types of biases that affect recommender systems negatively. Some studies have evidenced that fairness and other social bias could be amplified by the use of graph structures [15].
Generally, bias in RSs can be categorized into four main classes according to Baeza-Yates (2016) and Chen et al. (2020) [2; 16]: data bias, algorithmic bias (model bias), result and response bias, and feedback loop bias, which can magnify the issue. All of them can be divided into multiple categories. Bias can also cause serious problems in different aspects: economic, legal, social, security, and technological [17], as well as ethical aspects including content, privacy, autonomy, opacity, fairness, and social aspects [18; 19]. In addition, objectives are influenced by bias regarding utility, coverage, diversity, novelty, serendipity, visibility, and exposure [20]. Bias can also be conveyed by user reviews, in the form of sequential bias, opinion bias, and textual bias [21; 22]. Bias can also take different forms on platforms, including functional biases, normative biases, external biases, and non-individual accounts [23]. This high impact of biases on multiple fields of study and application domains highlights the importance of analyzing them in depth and proposing effective ways to address this problem. Biases in the RS area have been studied for several years. However, their study in the context of new algorithms such as GNNs is very limited.
In this paper, we aim to analyze the classification of the different types of biases troubling GNN-based RSs in general. We also analyze in depth the behavior of GNN algorithms against different types of biases, given that research analyzing the behavior of these algorithms against certain types of biases is limited, and compare them with other widely used algorithms. We investigate whether, in addition to achieving high accuracy in the recommendation results, it is possible to trust that the algorithm is not biased and does not make unfair decisions. We also look into which biases are more accentuated with GNN algorithms and study ways of dealing with them.
In the following section, we present the state-of-the-art of the bias problem in Machine Learning (ML), GNN algorithms, and RSs, and finally, we focus on the problem of bias amplification in GNN-based RSs. In this section, we consider the most important related works in this specific area of research. Then, we explain the experimental study in detail. The results of the comparative study involving traditional and GNN-based recommendation methods and two real datasets are presented in Section 4. In the end, we discuss the most important problems in this field and our future work.
## 2 State-of-the-Art
This section presents a review of the bias problem from different perspectives, including machine learning, GNN algorithms, RSs, and GNN-based RSs. There is extensive work in the literature studying biases from different perspectives; here, we analyze the most important works regarding the bias problem in general, focusing mainly on RSs and more specifically on GNN-based RSs.
### Bias in Machine Learning
In order to better understand the bias problem in its different aspects and how it can be transferred to the results of a recommender system, we provide definitions of bias at different levels, such as the statistical meaning of bias and bias in Machine Learning (ML).
According to [24], statistical bias relates to systematic errors produced by the measurement or sampling process. Among these, it is necessary to differentiate between errors caused by random chance and those caused by bias.
Data are the input in ML algorithms. When this input is biased, this bias can be transferred into the model generated by the algorithm, leading to unfair decisions and a reduction in quality [25; 26]. These problems can have severe financial, social, and reputational effects on companies [8].
Algorithmic bias in ML-based models stems from abnormal datasets, weak models, poor algorithm designs, or historical human biases [27]. Algorithmic bias can also happen due to the problem of under-fitting in the training phase, which can be caused by a mixture of limitations in the training data and model capacity issues [28]. The factors affecting this mechanism are the irreducible error (Bayes error), regularization mechanisms, class imbalance, and under-represented categories [28].
There are various types of general biases arising in the different stages of CRISP-DM, a well-known standard process for data mining, which breaks the data process into six different stages: business understanding, data understanding, data preparation, modeling, evaluation, and deployment. The types of described biases are social bias, measurement bias, representation bias, label bias, algorithmic bias, evaluation bias, deployment bias, and feedback bias [8]. Social bias happens when the data transfer biases present in society to the model on a large scale. Measurement bias arises due to human error in the business understanding phase, especially when working with sensitive features such as age and gender. This kind of bias can also happen during the data preparation phase. Representation bias occurs during data collection and sampling, when the sample or data distribution does not represent the real underlying distribution. Label bias can be seen during the data preparation phase, when labels are chosen for the prediction task. Choosing the best label for a dataset can be very difficult due to vagueness and cultural or individual variety. Algorithmic bias can also happen in the modeling phase due to model parameters or technical issues such as model misclassification [29].
It is also important to know, based on training data statistics, whether a model can amplify existing bias in the data [29]. Evaluation bias occurs in the evaluation phase because of differences between the training and test data populations. Finally, deployment bias can appear after model implementation in a complicated socio-technical environment.
Bias in ML can also lead to unfair results. Fairness in machine learning can be categorized into ten classes: statistical parity, equalized odds, equal opportunity, disparate impact, disparate mistreatment, treatment equality, general entropy index, individual fairness (formerly, fairness through awareness), fairness through unawareness, and counterfactual fairness [30].
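To make the first of these notions concrete, the sketch below computes the statistical parity difference, \(P(\hat{y}=1\,|\,s=0)-P(\hat{y}=1\,|\,s=1)\), for a binary prediction and a binary sensitive attribute; the toy arrays are purely illustrative.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1) for binary arrays."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()

# toy example: positive decisions (1) for two groups of a sensitive attribute
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
s     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_hat, s))   # 0.75 - 0.25 = 0.5
```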
A systematic, controlled study of bias amplification is provided in [29]. To this end, a heavily controlled, simple image classification problem was taken into consideration. The results showed that different factors, including the accuracy of the model, the model capacity, model overconfidence, and the size of the training data, are correlated with bias amplification. Furthermore, the results also illustrated that bias amplification can vary during training, and that the difficulty of the classification task in recognizing group membership can influence bias amplification.
### Bias in GNNs
GNNs and their variants have shown great performance on a wide range of graph learning tasks. However, they face remarkable computational challenges due to the increasing sizes of current datasets. Multiple layers of graph convolutions recursively expand the neighbor aggregation in a top-down manner, which leads to a neighborhood whose size grows with the number of layers. If the graph is scale-free and dense, a large part of the graph is required to compute the embeddings, even with a few layers, which is infeasible for large-scale graphs [31; 32].
Other research has shown that GNNs perform better with homophilous nodes rather than heterophilous ones. A homophily ratio is defined in order to examine whether a graph is homophilous or heterophilous. Graphs with higher homophily ratios are considered homophilous, and graphs with lower ratios are non-homophilous [33].
Although GNNs usually provide better accuracy, most of the existing GNNs do not take the fairness issue into consideration, which can result in discrimination toward certain demographic subgroups with specific values of features that can be considered sensitive, such as age, gender, and race. The decisions made by the implemented GNNs can be highly affected by this kind of discrimination [32; 34; 35]. In addition, a wide range of ML systems are trained with human-generated data; hence, there is a clear need to comprehend and mitigate bias toward demographic groups in GNN approaches [36].
Biased results in GNN algorithms can stem from different causes, the most important of which is a biased network structure. Although it is very important to detect which part of this network structure can lead to bias, it is believed that this bias can be due to the message passing mechanism at the core of GNN operation. There are several challenges to understanding bias in the network structure, including the Fairness Notion Gap, the Usability Gap, and the Faithfulness Gap. The Fairness Notion Gap refers to how to measure bias at the instance level. The Usability Gap refers to the fact that it is also vital to find the edges in the computational graph that are most influential on the fairness degree of its prediction; however, the edges that contribute the most to the prediction cannot simply be assumed to be the ones that contribute the most to this fairness. The Faithfulness Gap refers to the need to ensure that the gathered bias explanations reflect the true reasoning of the chosen model [34]. Furthermore, bias can also lead to a distribution shift between training and testing data, especially among labels [37].
### Bias in RSs
The quality of the recommendations provided by different RSs varies for different users based on their characteristics and sensitive information, including age, gender, race, and personality. This behavior conflicts with European Commission (EC) regulations: "obligations for ex-ante testing, risk management and human oversight of AI systems to minimize the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement, and the judiciary" [7]. According to this regulation, AI systems should follow EU fundamental rights, such as the right not to be discriminated against, respect for individuals' private life, and personal data protection [7]. Moreover, biased results in RSs can cause user dissatisfaction [38].
The bias issue in RSs is one of the most important factors leading to unfair decisions and discrimination, and it clearly conflicts with the mentioned regulations. The work presented in [2] indicates that bias can be divided into three potential
categories, which can be the first to be considered for recognition: bias in the input data, computational bias, which may stem from the algorithm and can be introduced by team decisions, and outcome bias. This is an expansion of the bias categorization previously introduced by Baeza-Yates (2016) [16], which breaks down the circular behavior into seven different types of biases in a circular format. Data bias, which is observational rather than experimental, happens when the distribution of the training data differs from the ideal test data distribution, and it consists of selection bias, exposure bias, conformity bias, and position bias. Algorithmic bias can happen during the different stages of the modeling process, including training, evaluation, and feature engineering. Popularity bias, unfairness, and inductive bias can be the results of this particular type of bias. Popularity bias stems from the long-tail phenomenon in RSs. This common issue happens when a small number of very popular items receive most of the interactions in the system. This can lead the model to neglect unpopular items and give higher scores to the more popular ones [2]. Together, the previously mentioned biases can create a circular graph in which biased data move from one stage to the next, where additional and new biases are introduced [2; 39]. This circular behavior of biases makes it more complex to recognize where actions are needed. Exposure bias happens due to the exposure of specific parts of the item catalog to users, inferred from implicit feedback, and it can also be caused by popularity bias due to the recommendation of the most popular items [2; 40]. In other words, bias can limit the users' choices and contaminate the users' feedback, which can amplify exposure bias [41]. Among the mentioned types of biases, popularity bias has been considered the most important in the field of RSs [42].
Another classification, proposed by Ashokan and Hass (2021) [30], organizes bias into three main categories with more detailed sub-categories. Data generation bias includes historical, representation, measurement, population, sampling, and Simpson's paradox biases. Historical bias refers to already-existing bias stemming from socio-technical issues. Representation bias can be created during the sampling phase. Measurement bias happens when selecting, utilizing, and measuring specific features. Population bias happens when the dataset distribution differs from the real-world population. Sampling bias can be due to errors made while creating random subgroups. Simpson's paradox refers to bias arising from differences in the behavior of population subgroups in the aggregation phase [43].
Model building and evaluation bias includes evaluation, aggregation, popularity, algorithmic, omitted variable, demographic, and temporal biases. Evaluation bias can arise during the model evaluation phase. Aggregation bias can happen due to wrong assumptions about the effects of the population on the model's results. Popularity bias occurs because more popular items gain more interactions [44; 42]. Algorithmic bias can happen due to technical issues inside the used algorithm. Omitted variable bias takes place when one or more essential variables are not chosen for the model. Demographic bias happens when different user demographic groups (e.g., age and gender) are treated differently [30; 45]. Temporal bias stems from behavior and population differences over time [30; 46].
Deployment and user interaction biases include behavioral, content production, linking, presentation, social, emergent, observer, interaction, and ranking biases. Behavioral bias can occur due to the dissimilarity of users' behavior in the dataset. Content production bias exists due to differences in the users' generated content, including structural, lexical, semantic, and syntactic varieties. Linking bias arises when network attributes obtained from user activities do not truly represent the user behavior. Presentation bias happens during the presentation of information. Social bias happens when preferences are deliberately given to certain groups and affect their judgments [30; 47]. Emergent bias arises because of differences between the real users' behavior and the users' behavior in the dataset. Observer bias can happen when researchers' expectations are unintentionally injected into the research data. Interaction bias can be created due to differences in the means by which users interact with a system. Finally, ranking bias occurs when top-ranked results are more exposed [30].
Due to the impact biases have on the model's decisions, it is important to consider all types of biases; however, the most recent publications have mainly aimed at solving exposure bias, popularity bias, unfairness, and the bias loop effect. One of the most important challenges of the previous approaches is the trade-off between the model's performance and bias mitigation, which is believed to depend on the chosen scenario [2]. The definition of fairness may also depend on the domain, but this issue has recently drawn much attention [48].
In the recommendation area, a model that simulates multiple rounds of a bias feedback loop in a social network was proposed in [39] in order to analyze the consequences of this feedback loop in the long run. This model uses different control parameters, including the level of homophily in the network, the relative size of the groups, the choice among many new link recommenders, and the choice between three different stochastic user behavior models, which decide whether each recommendation is accepted or not. The results of this experimental study showed that a minority group with a high level of homophily can receive an excessive advantage in exposure from all link recommenders. On the other hand, if the group is heterophilic, it becomes under-exposed. Furthermore, the level of homophily in the minority group can influence the disparate exposure speed, and the relative size of the minority can magnify the effect. Both minority and majority classes, depending on their level of homophily, can experience the "rich-get-richer" effect.
In [1], the authors worked on Conversational Recommender Systems (CRSs) and systematically investigated the popularity bias issue in state-of-the-art CRSs from various perspectives, including exposure rate, success rate, and conversational utility. The article proposed a suite of popularity bias metrics specifically designed for CRSs. The work presented in [49] also focused on popularity bias and the long-tail problem in RSs and introduced useful metrics for measuring the long-tail phenomenon on items. Complementing the previously mentioned works, Reference [30] continued this analysis to measure algorithmic bias and fairness in a rating-based recommender system. This work considered various types of biases and fairness. Besides, it proposed fairness metrics by analyzing two domains.
In some works, including [50; 51], the sensitive attribute gender was taken into consideration. Unbiased Gender Recommendation (UGRec) was introduced in [50] in order to balance performance between males and females. Aiming at capturing the users' preferences, an information aggregation component was designed to learn the representation of users and items from the user-item graph. To improve the representation, a multihop mechanism was proposed for the aggregation of users' higher-order neighbors. An end-to-end training framework with adversarial learning was also used to avoid an impact on the accuracy. This framework is capable of removing gender-specific features while maintaining common features. An exploratory analysis of gender bias and discrimination in music RSs was conducted in [51]. The main aim of this work was to investigate which CF approach enhances or reduces artist gender bias. To this end, the Preference Ratio (PR) and Bias Disparity (BD) metrics were used to measure the results. The results showed that CF RSs can amplify the gender bias problem in a real-world LastFM dataset.
Other work, proposed by [52], also focused on gender bias in RSs for two book rating datasets, Amazon and Book-Crossing. In this research, a model-agnostic bias mitigation approach was introduced that takes the accuracy of the system into account. Two recommender system approaches from the K-nearest neighbors family were used. The results showed a significant decrease in the bias with little impact on the accuracy of the models.
### Bias in GNN-Based RSs
Specific sensitive attributes that reinforce an already existing bias in the network of GNN-based RSs have drawn attention toward measuring fairness in supervised methods. The metrics used for this purpose require the proportion of positive classifications among members of a protected group, defined by its sensitive attribute values, to be the same as in the unprotected group [14; 53].
User-item interaction behavior does not explicitly include any sensitive information from users, but due to the high correlation between users and their attributes, directly applying modern user and item representation learning can result in the leakage of the users' sensitive information [14]. Furthermore, given the graph-based nature of RSs, users are not independent; they are implicitly correlated with other users who share similar behavior, which can cause serious problems in previous models and in the foundations of CF recommendations [14]. In addition, current GNN algorithms suffer from societal bias in the data, which limits the generalization power of the models [15]. In the graph structure, nodes with similar sensitive attributes are prone to be connected, and this can result in critical bias in decision-making due to the differences between the representations of nodes sharing similar sensitive information and those of other nodes [15]. Some approaches also consider graph embedding methods used in Online Social Networks (OSNs). Graph embedding methods are one of the best tools for data mining; they associate each user with a lower-dimensional vector that contains structural information within the network. This information can include the user's neighborhood, popularity, etc. [53]. According to previous research, OSNs suffer from discrimination favoring the majority group, which is against anti-discrimination laws [53].
The work presented in [54] focused on calibrating the long-tail issue in session-based recommendations, whose models can be divided into Recurrent Neural Network-based (RNN) models and GNN-based models. This work used different metrics to evaluate the models (e.g., MRR and recall) and measured popularity bias with metrics including coverage and tail coverage. Besides, a calibration module was proposed that uses the session representation to predict the ratio of items from the tail in the recommendation list. A curriculum training strategy with two stages was also used to enhance the accuracy of the predictions in the calibration module.
In [55], an investigation into graph-based Collaborative Filtering (CF) approaches for RSs was performed. In this work, the two-fold performance, in terms of accuracy and novelty, of currently used graph-based CF methods was taken into consideration. The results indicated that the symmetric neighborhood aggregation in most of the graph-based CF models amplifies the popularity bias in RSs. In addition, this amplification grows with the depth of the graph propagation.
Works on the bias and fairness problem in GNN-based RSs are very limited. Most of the research in this area focuses on sensitive information in RSs. The work in [14] focused on eliminating sensitive information in representation learning, to achieve fair representation learning for fair recommendation. To address this problem, a model-agnostic, graph-based perspective for fairness-aware representation learning was introduced. The proposed model uses user and item embeddings from any recommendation model as the input and defines a set of sensitive features. In addition, the proposed model works as a filter to obscure any sensitive information in the defined set without damaging the accuracy of the recommendation. In this structure, every user can be represented by an ego-centric graph structure, which helps the filters work under a graph-based adversarial training process. The discriminators were designed to predict the attribute of concern, and the training of the filters was aimed at the removal of any sensitive information that could leak from the user-centric graph structure. This model was evaluated on two real-world datasets and showed high performance.
The aim of [15] was to overcome two major challenges regarding the fairness of GNN-based RSs with limited sensitive information: first, how to eradicate discrimination despite the scarcity of sensitive attributes; second, how to ensure the fairness of the GNN classifier in the RS. To tackle the mentioned issues, a new approach called FairGNN was introduced for fair node classification. FairGNN uses an estimator that predicts some of the sensitive attributes, albeit with noise, which enables fair classification even in the presence of an adversary on different datasets. The results of the experiments on real-world datasets showed that the proposed model is effective with respect to both fairness and classification performance.
Another work [53] focused on quantifying and tackling fairness problems in graph embedding methods by using the node2vec approach for GNN-based RSs. To address this problem, the article provided a new method for studying the algorithmic fairness of node2vec. In addition, the statistical parity method (which uses the sensitive attributes of pairs of users to measure fairness for groups) was extended, and the novel idea of Equality of Representation was proposed to calculate fairness in a friendship RS. The node2vec approach was then applied to a real-world OSN dataset to discover biases in the recommendations caused by unfair graph embeddings. Finally, as an extension of node2vec, a new fairness-aware graph embedding algorithm called Fairwalk was introduced.
The exposure bias problem in GNN-based RSs was addressed in [56]. In this paper, a neighbor aggregation method based on an inverse propensity approach was proposed. The approach balances the biased local structure of each target node by obtaining the user-item propensity score for each interaction in the graph; the inverse propensity score with Laplacian normalization is then used as the edge weight in the neighbor aggregation process. This highlights the less-popular neighbors in an embedding. The results showed that the debiasing method works successfully, hence increasing the performance of the model.
Considering the mentioned challenges in RSs, bias amplification is one of the most important subjects that needs to be taken into consideration. Moreover, with the GNN structure, the accuracy of the models can be enhanced, but the bias problem can become even worse. This clearly goes against the mentioned guidelines and needs further investigation. On the other hand, sensitive attributes such as gender in GNN-based RSs are very important, and fairness toward these attributes needs to be taken into consideration.
## 3 Experimental Study
This research aimed to study the behavior of GNN-based methods against different types of biases affecting recommender systems. The main focus of this experiment was on popularity and gender bias. The results were used to determine whether this recommendation approach, while in most cases achieving better performance, amplifies the bias. To this end, we implemented three different types of RS approaches on two real-world datasets (MovieLens and LastFM): Collaborative Filtering (CF), Matrix Factorization (MF), and GNN-based approaches. We used different methods for each approach: Deep Matrix Factorization (DMF), Item-based K-Nearest Neighbors (ItemKNN), Neural Collaborative Filtering (NeuMF), Neural Collaborative Filtering with Interaction-based Neighborhood (NNCF), Neural Graph Collaborative Filtering (NGCF), a Light version of the Graph Convolution Network for recommendation (LightGCN), and Self-supervised Graph Learning (SGL). We compared the results based on different metrics for both performance and bias.
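The paper does not name the implementation framework; as an illustrative sketch, all of the listed models are available in the open-source RecBole library, so a comparison of this kind could be scripted as follows (the hyperparameters shown are placeholders, not the settings used in this study):

```python
from recbole.quick_start import run_recbole

models = ["DMF", "ItemKNN", "NeuMF", "NNCF", "NGCF", "LightGCN", "SGL"]
for model in models:
    # trains and evaluates each model on the built-in MovieLens-100K dataset
    run_recbole(model=model, dataset="ml-100k",
                config_dict={"epochs": 100, "topk": [10]})
```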
### Benchmark Datasets
In this research, two real-world datasets were used for implementing the RSs. Both datasets suffer from the mentioned long-tail effect on items, which denotes the existence of popularity bias in the data and makes them a good choice for bias investigation. In the section below, we give a description of these two datasets.
#### 3.1.1 MovieLens 100K [57]
MovieLens is one of the most widely used datasets in the field of RSs and especially in bias investigation. This dataset was collected gradually and sampled randomly from the MovieLens website, a non-commercial web-based movie recommender system. The dataset includes ratings of users on movies on a star scale from 1 to 5. Among the different datasets of various sizes provided on this website, we used in this project the ml-100K dataset, which includes 100 thousand rating records. MovieLens, moreover, consists of three different files, which contain, respectively, the information related to the users, the items, and the ratings given to the items by the users. The sensitive features are located in
the users' dataset, where, based on capAI guidance [58], "Gender" and "Age" are detected as sensitive features. This dataset also suffers from popularity bias on items. This means popular movies received more ratings in comparison to the other movies. The mentioned reasons make this dataset a very good fit for this investigation into the bias problem. The MovieLens dataset information can be seen in Table 1.
In this research, Exploratory Data Analysis (EDA) is provided in order to have a better understanding of both datasets. The long-tail effect can be seen in Figure 1.
Figure 1 shows that the distribution of ratings on items in the MovieLens dataset is heavily concentrated on popular items, which means this dataset contains popularity bias toward items. The number of items in this dataset is 1682, 87.3% of which are in the long-tail.
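The paper does not state the exact long-tail threshold it uses; a common convention treats the most popular items that jointly receive about 20% of all interactions as the short head and the remaining items as the long-tail. A minimal pandas sketch under that assumption (the file name and column names follow the standard ml-100k layout):

```python
import pandas as pd

# standard MovieLens-100K ratings file: user, item, rating, timestamp
ratings = pd.read_csv("u.data", sep="\t",
                      names=["user_id", "item_id", "rating", "timestamp"])

counts = ratings["item_id"].value_counts()        # items sorted by popularity
cum_share = counts.cumsum() / counts.sum()
head = cum_share[cum_share <= 0.20]               # most popular items covering
                                                  # ~20% of all interactions
tail_pct = 100 * (len(counts) - len(head)) / len(counts)
print(f"{tail_pct:.1f}% of the {len(counts)} items are in the long-tail")
```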
Figure 2 shows the number of ratings in each rating segment, where the segments correspond to the rating values given by users to items, from 1 to 5. The plot thus shows the distribution of the rating values.
Table 1: MovieLens dataset information.

| **Features** | **Description** | **Data Type** | **Count** | **Mean** | **Std** |
| --- | --- | --- | --- | --- | --- |
| Age | Age of users | int | 100 K | 32.96 | 11.56 |
| Rating | Rating on movies provided by users | float | 100 K | 3.52 | 1.12 |
| User id | IDs of the users | int | 100 K | - | - |
| Movie id | IDs of the movies | int | 100 K | - | - |
| Gender | Gender of the user | String | 100 K | - | - |
| Occupation | Users' job | String | 100 K | - | - |
| Movie title | The title of rated movies | String | 100 K | - | - |
Figure 1: MovieLens long-tail plot.
Figure 2: MovieLens average ratings plot.
Figure 3 illustrates the distribution of age and gender, which shows that the majority of the users are young individuals and that the number of items rated by men is significantly higher than by women.
Figure 4 shows that the number of male users is considerably higher than that of women.
Figure 5 shows the number of rated items based on the occupation of the users. We can see that the predominant occupation of the users in the dataset is student.
Figure 4: MovieLens gender distribution.
Figure 5: MovieLens occupation distribution.
Figure 3: MovieLens age/gender distribution.
#### 3.1.2 LastFM [59]
The LastFM dataset is also widely used in RSs, especially when it comes to dealing with the popularity bias problem. It is one of the largest datasets of its kind and includes user and artist information from all over the world, recording how many times each user has listened to each artist. The LastFM dataset includes the features of users and artists and the interactions among them. According to the capAI guidance [58], gender can be considered a sensitive feature in this dataset. The analysis of this dataset shows that the interaction counts of popular items are significantly higher than those of the other items. The LastFM dataset's information and details can be seen in Table 2.
In the following section, we provide an EDA on the LastFM dataset.
Figure 6 illustrates the distribution of the ratings of artists in the LastFM dataset. The figure shows a high popularity bias due to the fact that the ratings are concentrated on a small number of items. Specifically, 90.3% out of the 41,269 items in the dataset are in the long-tail.
Figure 7 shows the gender distribution for the LastFM dataset. As in the MovieLens dataset, the number of male users is considerably higher than that of female users.
Figure 8 illustrates the top ten most popular artists regarding the number of records.
Figure 9 shows the distribution of records throughout the world map. It can be seen that most of the records belong to the U.S.
Table 2: LastFM dataset information.

| **Features** | **Description** | **Data Type** | **Count** | **Mean** | **Std** |
| --- | --- | --- | --- | --- | --- |
| Weight | Listening count for each artist | float | 100 K | 745.24 | 3751.32 |
| User id | IDs of the users | int | 100 K | - | - |
| Item id | IDs of the artists | int | 100 K | - | - |
| Gender | Gender of the users | String | 100 K | - | - |
| Country | Users' country | String | 100 K | - | - |
| Name | Names of the artists | String | 100 K | - | - |
Figure 6: LastFM long-tail distribution.
Figure 8: LastFM top ten artists.
Figure 7: LastFM gender distribution.
Figure 9: LastFM distribution of the records on the map.
### Recommendation Methods
In this research study, we used three recommendation approaches for RSs: Collaborative Filtering (CF), Matrix Factorization (MF), and GNN-based recommendation methods. We also implemented different methods for each approach to gather a wide range of results, which allowed us to reach more sound conclusions. Implementing different methods can help us to analyze the bias problem further; hence, different models should be compared to investigate bias amplification in detail. Below is the description of these approaches and chosen models:
1. Collaborative Filtering (CF): Collaborative (or social) filtering techniques are based on user preferences and use information about the ratings given by users to items to compute user or item similarity. If two users rate items in a similar way, it is more likely that they will rate new items likewise. Therefore, the target user is recommended items well rated by users with the same taste. Another CF strategy is to recommend items similar to those that the user has consumed or has rated positively; such similarity between items is calculated from the ratings received from users. In CF approaches, user ratings on items are collected in order to create a user-item rating matrix. This matrix is used to find similarities between users/items. CF approaches can tackle some of the problems of content-based approaches, in which items similar to those consumed or well rated by the user are also recommended, but with similarity computed from item features. A drawback of the content-based approach, avoided by CF, is the unavailability of item features or the difficulty of obtaining them, since recommendations in CF are made using only the feedback of other users. Besides, the quality of these techniques is higher because they are based on items evaluated by users, instead of relying on content, whose quality can be low. CF approaches, unlike content-based systems, can recommend items with content that is not similar to that previously consumed by the user, as long as other users have already shown interest in these different items. CF techniques use different approaches including:
* User-based: these systems assess the preference of a target user for an item using the ratings given to this item by his/her neighbors, which are users that have a similar rating behavior [4].
* Item-based: these approaches anticipate the rating of a user for an item considering the ratings given by this user to similar items. In such approaches, two items are similar if they have received similar ratings from several users in the system. This is different from content-based methods, which base the similarities of items on their characteristics or attributes. This approach is more convenient in common commercial recommender systems, where the number of users is much higher than the number of items in the catalog. Usually, item-based approaches are more reliable, require less computation time, and do not need to be updated as frequently [4]. A minimal sketch contrasting the two strategies is given after this list.
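The following minimal numpy sketch contrasts the two strategies on a toy rating matrix; the cosine similarity measure and the tiny matrix are illustrative choices rather than the exact configuration used in our experiments.

```python
import numpy as np

def cosine_sim(M):
    """Pairwise cosine similarity between the rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
    U = M / norms
    return U @ U.T

# Toy user-item rating matrix (rows: users, columns: items; 0 = unrated)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

user_sim = cosine_sim(R)      # user-based: users with similar rating behavior
item_sim = cosine_sim(R.T)    # item-based: items rated alike by many users

def predict_user_based(u, i):
    w = user_sim[u].copy(); w[u] = 0.0   # exclude the target user
    mask = R[:, i] > 0                   # neighbors who rated item i
    return R[mask, i] @ w[mask] / (w[mask].sum() + 1e-12)

def predict_item_based(u, i):
    w = item_sim[i].copy(); w[i] = 0.0   # exclude the target item
    mask = R[u] > 0                      # items the user already rated
    return R[u, mask] @ w[mask] / (w[mask].sum() + 1e-12)

print(predict_user_based(1, 1), predict_item_based(1, 1))
```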
Figure 10 shows the differences between user-based and item-based approaches. The CF approaches used are as follows:
* ItemKNN: This method is an item-based approach that computes the similarity between items based on the ratings that users give to them. The main motivation behind this method is that customers are more prone to purchase items that are compatible with their previous purchases. Historical purchase information in the user-item matrix can be used to recognize sets of similar items and to create the top-K recommendations from them. At a high level, this algorithm consists of two major components: the first creates a model capturing the relations among the various items, and the second applies this model to obtain top-K recommendations for a user. This method also shows high performance in comparison to other similar CF approaches [60; 61; 62].
* Neural Collaborative Filtering model with interaction-based Neighborhood (NNCF): This model utilizes deep learning for modeling complicated interactions between users and items. It also uses neighborhood information to complement the user-item interaction data, hence improving the model's performance. NNCF models can overcome issues of traditional algorithms, such as simple linear factorization, which may not fully capture the complex interactions among users and items. This method can also provide user/item embeddings of good quality [63; 64; 65].
2. Matrix factorization: Matrix factorization encompasses a group of model-based techniques in which the rating matrix is transformed into two matrices of latent factors representing users and items, respectively, in an attempt to tackle the sparsity problem of the ratings matrix. This is a low-dimensional factor model, where it is assumed that the inner products between the user and item latent factors influence the preferences of the user for an item [66]. Currently, MF has become one of the most popular methods for implementing RSs [67]. A minimal latent-factor sketch is given after the method descriptions below.
The MF approaches used are as follows:
* Deep Matrix Factorization (DMF): This method uses a neural network architecture. This method constructs the user-item matrix with explicit ratings and non-preference implicit feedback. Afterward, this matrix is used as the input for learning a common low-dimensional space for the deep structure learning architecture. This method also uses a novel loss function based on binary cross-entropy, which considers both explicit ratings and implicit feedback for enhancing optimization. DMF provides better top-K recommendations in comparison to traditional models by applying implicit feedback, thus reconstructing the users' ratings via learning hidden structures with explicit historical ratings. This method also supports two-channel structures, which can combine side information from both users and items. Some articles also indicate that this approach can outperform new recommendation algorithms with respect to accuracy and training efficiency [68; 69; 70].
* Neural Collaborative Filtering (NeuMF): Knowing that the most important factor in CF models is the interaction between user and item features, the inner products in these methods can be replaced by a neural network architecture. Neural network-based Collaborative Filtering (NCF) is a schema that expresses and generalizes matrix factorization and can be enhanced by using nonlinear kernels. To achieve this, a multi-layer perceptron can be used to learn the user-item interaction function [71]. The capacity and nonlinearity of deep neural networks are the main reasons for their good performance. Furthermore, the general NCF framework used in NeuMF provides the opportunity to combine various models [67].

Figure 10: The differences between user-based and item-based approaches.
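As a rough sketch of the latent-factor idea, the following minimal SGD factorization reconstructs observed ratings from user and item factors; DMF and NeuMF, described above, replace this plain inner product with neural architectures. The hyperparameters here are illustrative.

```python
import numpy as np

def mf_sgd(R, k=2, lr=0.01, reg=0.02, epochs=500, seed=0):
    """Factorize the observed entries of R into user factors P and item
    factors Q so that R[u, i] ~= P[u] @ Q[i]."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    observed = np.argwhere(R > 0)                 # only observed ratings drive updates
    for _ in range(epochs):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * pu)  # gradient step with L2 regularization
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
P, Q = mf_sgd(R)
print(np.round(P @ Q.T, 1))   # reconstructed ratings, including unobserved cells
```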
3. GNN-based: One of the fastest-growing technologies that has shown great capability in recent years is Graph Learning (GL) [14]. This approach relates to machine learning applied to graph-structured data. Using its advantages for learning relational data, Graph Learning-based Recommender Systems (GLRSs) have been proposed [6]. In reality, the majority of objects are explicitly or implicitly connected with each other, and these relations can be represented by graphs. In RSs, where the objects can be users, items, attributes, and context, this characteristic is even clearer: these objects are strongly connected with each other and affect each other via different relations. The quality of RSs can be remarkably increased by using graph techniques, since graph learning has a great ability to learn complex relations as well as a high potential for obtaining the knowledge enclosed in a variety of graphs [72]. There are different types of entities in RSs, including users, items, and attributes, which maintain different types of relationships with each other and can therefore be represented by graphs of diverse types. The three main objects used in recommender models are the user, the item, and the user-item interaction, although other information concerning users and/or items may also be used. On this basis, data used in RSs can be classified into two broad categories: user-item interaction data (clicks, purchases, or ratings made by the users on the items) and side information data (user and item attributes). In addition, interaction data can be classified into two categories depending on whether the interactions are sequential or general [72]. Each class is also divided into various sub-classes, as can be seen in Table 3.
Each entry in the user-item matrix holds information about the type of interaction that happened between the corresponding user and item. Interaction data can be divided into two categories: explicit and implicit. Explicit interaction happens when a user is asked to provide an opinion on an item (e.g., users' ratings on items), while implicit interaction is the one concluded from the user's actions (e.g., click, view) [72; 73]. The GNN methods used are the following:
* LightGCN: This model is a simplified version of a Graph Convolution Network (GCN), which keeps only the most important components of GCNs for recommendation tasks. LightGCN linearly propagates the user and item embeddings on the user-item interaction graph and then uses the weighted sum of the embeddings learned at all layers as the final embedding [74]. The symmetric normalization in LightGCN is the same as in the standard GCN, which controls the increase in the size of the embeddings with graph convolution operations. This method also showed great performance in comparison to conventional approaches [75; 76]; a minimal propagation sketch is given below, after Table 3.
* Neural Graph Collaborative Filtering (NGCF): This model is another chosen method for this investigation. It introduces a graph structure into user-item interactions and benefits from the user-item graph structure by generating embeddings on it, which results in high-order connectivity in the user-item graph; the collaborative signal is injected into the embedding process in an explicit way [72]. Moreover, this method uses multiple embedding propagation layers, whose concatenated outputs create the final prediction for the recommendation task. NGCF also shows great performance concerning model optimization [77].
* Self-supervised Graph Learning (SGL): This model applies self-supervised learning on the user-item graph to address some problems of recommenders based on graph convolution networks, using three augmentation operators - node dropout, edge dropout, and random walk - which change the graph structure in different aspects. The SGL method has also shown great performance in RS tasks, which makes it a suitable choice for this experiment [78; 79; 80].

Table 3: A summary of data representation in RSs and representing graphs [72].

| Data Class | Data Subclass | Representing Graph |
| --- | --- | --- |
| General interaction | Explicit interaction, implicit interaction | Weighted bipartite graph, unweighted bipartite graph |
| Sequential interaction | Single-type interactions, multi-type interactions | Directed homogeneous graph, directed heterogeneous graph |
| Side information | Attribute information, social information, external knowledge | Heterogeneous graph; homogeneous graph; tree or heterogeneous graph |
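The following minimal numpy sketch illustrates LightGCN-style propagation as described above; the embedding size, the number of layers, and the uniform layer weights are illustrative assumptions.

```python
import numpy as np

def lightgcn_embeddings(R, dim=8, n_layers=3, seed=0):
    """Linearly propagate user/item embeddings over the bipartite
    interaction graph and average the embeddings of all layers."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    n = n_users + n_items
    # Bipartite adjacency of the user-item interaction graph
    A = np.zeros((n, n))
    A[:n_users, n_users:] = (R > 0)
    A[n_users:, :n_users] = (R > 0).T
    deg = A.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1))
    A_hat = d[:, None] * A * d[None, :]      # symmetric normalization D^-1/2 A D^-1/2
    E = 0.1 * rng.standard_normal((n, dim))  # layer-0 embeddings
    layers = [E]
    for _ in range(n_layers):
        E = A_hat @ E                        # graph convolution, no nonlinearity
        layers.append(E)
    final = np.mean(layers, axis=0)          # weighted (here: uniform) layer sum
    return final[:n_users], final[n_users:]  # user and item embeddings

users, items = lightgcn_embeddings(np.array([[1, 0, 1], [0, 1, 1]], float))
scores = users @ items.T                     # recommendation scores per user-item pair
```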
### Evaluation Metrics
In order to evaluate the models implemented with the previously described methods, both from the point of view of the reliability of the recommendations and from the perspective of sensitivity to biases, we used performance metrics as well as metrics for measuring bias amplification. Given an item set \(I\) and a user set \(U\), \(\hat{R}(u)\) denotes the ranked list of items that a model produces and \(R(u)\) the ground-truth set of items that user \(u\) has interacted with. For top-K recommendations, only the top-ranked items need to be considered; therefore, in top-K evaluation scenarios, we truncated the recommendation list to a length of \(K\).
In order to have a better understanding of the metrics, a notation table is provided below in Table 4.

Table 4: Table of notations.

| Notation | Definition |
| --- | --- |
| \(U\) | A set of users |
| \(V\) | A set of items |
| \(u\) | A user |
| \(v\) | An item |
| \(R(u)\) | A ground-truth set of items that user \(u\) interacted with |
| \(\hat{R}(u)\) | A ranked list of items that a model produces |
| \(K\) | The length of the recommendation list |
| \(M(x)\) | Algorithmic mechanism for the RS with input \(x\) and output \(y\) |
| \(\beta\) | Distribution, which generates \(x\) |
| \(\Theta\) | A set of distributions of \(\beta\), which generate each instance \(x\) |

Table 5 shows the main metrics for assessing the reliability of the recommendations. As in most current recommender systems, the evaluation was performed on top-K item recommendation lists, where K represents the size of the list.

Table 5: Metrics for assessing the reliability of the recommendations.

| Metric Name | Description |
| --- | --- |
| MRR | Computes the reciprocal rank of the first relevant item found by an algorithm, where \(Rank_{u}^{*}\) is the rank position of the first relevant item found for user \(u\): \(MRR@K=\frac{1}{|U|}\sum_{u\in U}\frac{1}{Rank_{u}^{*}}\) |
| NDCG | A measure of ranking quality in which positions are discounted logarithmically, assigning higher scores to hits at top ranks, with \(\delta(\cdot)\) an indicator function: \(NDCG@K=\frac{1}{|U|}\sum_{u\in U}\frac{1}{\sum_{i=1}^{\min(|R(u)|,K)}\frac{1}{\log_{2}(i+1)}}\sum_{i=1}^{K}\frac{\delta(i\in R(u))}{\log_{2}(i+1)}\) |

Table 6 shows the metrics used in this study to evaluate the sensitivity of the models to the most relevant types of biases in recommender systems. In this research, we mainly focused on bias amplification related to the popularity and diversity of the recommended items. To this end, three different metrics (average popularity, Gini Index, and item coverage) were used, as can be seen in Table 6. In addition, we evaluated the gender bias by means of the Differential Fairness (DF) metric, also described in Table 6. The objective of this metric is to evaluate whether the recommendation algorithms behave the same for all values of the protected attributes or produce a certain degree of bias in the output for some of the values. In this study, we considered the values "male" and "female" of the protected attribute "gender". This implies that the probabilities of the predicted item scores should be similar for both values of this attribute. DF specifically focuses on both intersectionality [81; 82], which involves fairness for each of the protected attributes individually, and behavior toward minorities, which is related to the previously mentioned anti-discrimination laws. Minimal sketches of how these metrics can be computed are given after Table 6.

Table 6: Metrics for measuring the sensitivity of the models to biases.

| Metric Name | Description |
| --- | --- |
| Average Popularity | Average popularity (number of recorded interactions) of the items in the recommendation lists; lower values indicate less popularity bias. |
| Gini Index | Measures how unequally the recommendations are distributed over the items; higher values indicate a stronger concentration on few items and thus lower diversity. |
| Item Coverage | Proportion of the items in the catalog that appear in the recommendation lists; low coverage indicates discrimination with respect to certain items. |
| Differential Fairness (DF) | Evaluates whether the probabilities of the predicted item scores are similar for all values of a protected attribute (here, gender); lower values indicate lower bias [81; 82]. |
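The following minimal implementations sketch how these metrics can be computed from top-K lists; since the exact formulations of Table 6 were only summarized above, the bias metrics follow their standard definitions.

```python
import numpy as np

def mrr_at_k(rec_lists, ground_truth):
    """Mean reciprocal rank of the first relevant item in each top-K list."""
    rr = []
    for u, recs in rec_lists.items():
        ranks = [r + 1 for r, item in enumerate(recs) if item in ground_truth[u]]
        rr.append(1.0 / ranks[0] if ranks else 0.0)
    return np.mean(rr)

def ndcg_at_k(rec_lists, ground_truth):
    """NDCG with logarithmic position discounts, as in Table 5."""
    scores = []
    for u, recs in rec_lists.items():
        dcg = sum(1.0 / np.log2(i + 2) for i, it in enumerate(recs) if it in ground_truth[u])
        ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(ground_truth[u]), len(recs))))
        scores.append(dcg / ideal if ideal > 0 else 0.0)
    return np.mean(scores)

def item_coverage(rec_lists, n_items):
    """Share of the catalog appearing in at least one top-K list."""
    return len({it for recs in rec_lists.values() for it in recs}) / n_items

def gini_index(rec_lists, n_items):
    """Concentration of recommendations over items (0 = uniform, ~1 = concentrated)."""
    counts = np.zeros(n_items)
    for recs in rec_lists.values():
        for it in recs:
            counts[it] += 1
    counts = np.sort(counts)
    cum = np.cumsum(counts)
    if cum[-1] == 0:
        return 0.0
    n = n_items
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

def average_popularity(rec_lists, popularity):
    """Mean popularity (e.g., interaction count) of all recommended items."""
    return np.mean([popularity[it] for recs in rec_lists.values() for it in recs])
```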
## 4 Results
In this section, we present the results provided by the recommendation methods described in the previous section on the datasets MovieLens and LastFM. These results came from applying the metrics explained above. The objective was to evaluate the different recommendation methods to determine which ones present the best balance between performance and sensitivity to biases, since the improvement of the former usually leads to a worsening of the latter. The results will also serve to determine whether the hypothesis that GNN-based methods produce more reliable, but also more biased models is confirmed. This would evidence the need for further research on ways to deal with the amplification of biases in these methods.
In Tables 7 and 8, the results of the mentioned models on the MovieLens and LastFM datasets can be seen. Different metrics are also provided in order to give a better understanding of the performance and the bias amplification of each model.
Table 7 shows the results on the MovieLens dataset, and Table 8 those on the LastFM dataset.
Figure 11 shows the recall values for three different sizes of the top-K lists of the implemented models on the two datasets. According to these results, two of the three GNN-based methods, NGCF and SGL, showed better performance for both datasets regarding the recall metric, while the third one, LightGCN, showed a different behavior in each dataset.
Table 7: Results on the MovieLens dataset.

| Approach | Method | Top K | Recall | Precision | MRR | NDCG | HIT | Item Coverage | Gini Index | Average Popularity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MF | DMF | K = 5 | 0.14 | 0.22 | 0.43 | 0.26 | 0.62 | 0.18 | 0.94 | 256.29 |
| MF | DMF | K = 10 | 0.21 | 0.17 | 0.42 | 0.25 | 0.73 | 0.20 | 0.93 | 252.25 |
| MF | DMF | K = 15 | 0.29 | 0.16 | 0.45 | 0.28 | 0.83 | 0.28 | 0.90 | 219.49 |
| MF | NeuMF | K = 5 | 0.15 | 0.23 | 0.45 | 0.27 | 0.65 | 0.25 | 0.91 | 228.52 |
| MF | NeuMF | K = 10 | 0.23 | 0.18 | 0.46 | 0.27 | 0.78 | 0.36 | 0.89 | 212.41 |
| MF | NeuMF | K = 15 | 0.30 | 0.16 | 0.46 | 0.28 | 0.83 | 0.40 | 0.86 | 196.89 |
| CF | ItemKNN | K = 5 | 0.15 | 0.23 | 0.44 | 0.28 | 0.63 | 0.19 | 0.93 | 231.96 |
| CF | ItemKNN | K = 10 | 0.22 | 0.18 | 0.46 | 0.27 | 0.75 | 0.24 | 0.93 | 249.74 |
| CF | ItemKNN | K = 15 | 0.31 | 0.16 | 0.46 | 0.29 | 0.84 | 0.29 | 0.89 | 208.12 |
| CF | NNCF | K = 5 | 0.15 | 0.24 | 0.47 | 0.29 | 0.64 | 0.17 | 0.95 | 284.47 |
| CF | NNCF | K = 10 | 0.24 | 0.19 | 0.46 | 0.22 | 0.78 | 0.25 | 0.91 | 217.70 |
| CF | NNCF | K = 15 | 0.28 | 0.15 | 0.47 | 0.27 | 0.81 | 0.30 | 0.91 | 231.28 |
| GNN | NGCF | K = 5 | 0.15 | 0.24 | 0.48 | 0.29 | 0.66 | 0.15 | 0.95 | 277.85 |
| GNN | NGCF | K = 10 | 0.25 | 0.20 | 0.49 | 0.30 | 0.77 | 0.25 | 0.93 | 255.49 |
| GNN | NGCF | K = 15 | 0.32 | 0.17 | 0.49 | 0.31 | 0.86 | 0.32 | 0.89 | 219.13 |
| GNN | LightGCN | K = 5 | 0.11 | 0.17 | 0.36 | 0.21 | 0.55 | 0.05 | 0.98 | 245.13 |
| GNN | LightGCN | K = 10 | 0.18 | 0.14 | 0.37 | 0.21 | 0.67 | 0.07 | 0.97 | 312.47 |
| GNN | LightGCN | K = 15 | 0.23 | 0.12 | 0.38 | 0.21 | 0.76 | 0.10 | 0.96 | 292.8 |
| GNN | SGL | K = 5 | 0.15 | 0.25 | 0.47 | 0.29 | 0.66 | 0.24 | 0.91 | 229.24 |
| GNN | SGL | K = 10 | 0.25 | 0.20 | 0.49 | 0.29 | 0.80 | 0.31 | 0.89 | 209.39 |
| GNN | SGL | K = 15 | 0.31 | 0.17 | 0.49 | 0.30 | 0.85 | 0.34 | 0.88 | 200.63 |
Table 8: Results on the LastFM dataset.

| Approach | Method | Top K | Recall | Precision | MRR | NDCG | HIT | Item Coverage | Gini Index | Average Popularity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MF | DMF | K = 5 | 0.05 | 0.05 | 0.11 | 0.05 | 0.20 | 0.01 | 0.99 | 377.81 |
| MF | DMF | K = 10 | 0.07 | 0.03 | 0.12 | 0.06 | 0.25 | 0.02 | 0.99 | 341.85 |
| MF | DMF | K = 15 | 0.08 | 0.02 | 0.12 | 0.07 | 0.30 | 0.02 | 0.99 | 309.64 |
| MF | NeuMF | K = 5 | 0.10 | 0.10 | 0.25 | 0.12 | 0.40 | 0.05 | 0.98 | 167.49 |
| MF | NeuMF | K = 10 | 0.15 | 0.07 | 0.27 | 0.14 | 0.52 | 0.06 | 0.98 | 157.12 |
| MF | NeuMF | K = 15 | 0.20 | 0.06 | 0.27 | 0.16 | 0.60 | 0.09 | 0.98 | 140.17 |
| CF | ItemKNN | K = 5 | 0.12 | 0.11 | 0.29 | 0.14 | 0.41 | 0.12 | 0.96 | 152.64 |
| CF | ItemKNN | K = 10 | 0.16 | 0.08 | 0.30 | 0.16 | 0.50 | 0.23 | 0.93 | 131.54 |
| CF | ItemKNN | K = 15 | 0.20 | 0.06 | 0.30 | 0.18 | 0.57 | 0.31 | 0.91 | 118.00 |
| CF | NNCF | K = 5 | 0.09 | 0.07 | 0.16 | 0.09 | 0.31 | 0.04 | 0.98 | 195.14 |
| CF | NNCF | K = 10 | 0.12 | 0.06 | 0.17 | 0.10 | 0.38 | 0.05 | 0.98 | 185.23 |
| CF | NNCF | K = 15 | 0.15 | 0.05 | 0.19 | 0.12 | 0.49 | 0.06 | 0.99 | 177.06 |
| GNN | NGCF | K = 5 | 0.12 | 0.11 | 0.29 | 0.14 | 0.44 | 0.03 | 0.99 | 202.55 |
| GNN | NGCF | K = 10 | 0.18 | 0.09 | 0.32 | 0.17 | 0.59 | 0.06 | 0.98 | 155.32 |
| GNN | NGCF | K = 15 | 0.21 | 0.07 | 0.31 | 0.18 | 0.64 | 0.08 | 0.98 | 160.38 |
| GNN | LightGCN | K = 5 | 0.13 | 0.14 | 0.31 | 0.15 | 0.47 | 0.05 | 0.98 | 174.23 |
| GNN | LightGCN | K = 10 | 0.19 | 0.09 | 0.33 | 0.18 | 0.59 | 0.09 | 0.98 | 148.15 |
| GNN | LightGCN | K = 15 | 0.23 | 0.07 | 0.34 | 0.20 | 0.66 | 0.12 | 0.97 | 132.52 |
| GNN | SGL | K = 5 | 0.13 | 0.13 | 0.33 | 0.15 | 0.48 | 0.06 | 0.98 | 142.71 |
| GNN | SGL | K = 10 | 0.20 | 0.10 | 0.35 | 0.19 | 0.62 | 0.10 | 0.97 | 114.49 |
| GNN | SGL | K = 15 | 0.24 | 0.07 | 0.35 | 0.21 | 0.69 | 0.14 | 0.96 | 103.11 |
Regarding the precision of the given models, shown in Figure 12, the results were similar. From this plot, it can be seen that NGCF and SGL provided better precision for both datasets, but LightGCN could also be a good choice for the LastFM dataset.
Figure 13 illustrates the MRR metric. The results showed, just like the previous metrics, that the NGCF and SGL models provided a higher MRR. LightGCN also showed a reasonable value for the LastFM dataset.
Figure 11: Results of recall for Movielens and LastFM. (**a**) MovieLens recall. (**b**) LastFM recall.
Figure 12: Results of precision for Movielens and LastFM. (**a**) MovieLens precision. (**b**) LastFM precision.
Figure 14 shows the HIT measure. From this metric's perspective, most of the implemented models showed similar results for MovieLens, with SGL and NGCF obtaining the highest values, except LightGCN, which performed considerably worse. On the other hand, this metric showed that SGL, LightGCN, and NGCF, the three GNN-based methods, provided a higher HIT for the LastFM dataset.
Figure 15 shows the NDCG of the implemented models. We can observe in the graph that two GNN-based methods, NGCF and SGL, provided better results for Movielens than the other methods. On the LastFM dataset, the three GNN-based algorithms gave the best values of NDCG, although LightGCN performed better than NGCF.
Figure 14: Results of HIT for Movielens and LastFM. (**a**) MovieLens HIT. (**b**) LastFM HIT.
Figure 13: Results of MRR for Movielens and LastFM. (**a**) MovieLens MRR. (**b**) LastFM MRR.
After presenting the results corresponding to the performance metrics, we turn to the bias metrics. Figure 16 illustrates the Gini Index of the given models. These results show that more accurate models such as SGL provided a lower Gini Index, which indicates higher diversity and therefore less bias amplification. However, the differences in the results of the tested models were not as significant for this metric as for the previous metrics, especially in the case of the LastFM dataset.
Figure 17 shows item coverage. Low coverage represents discrimination with respect to certain items the user may like, but which are not recommended by the system. In general terms, the coverage was better for MovieLens than for LastFM, which is consistent with the much larger number of items in the latter dataset: 41,269 items in LastFM compared to 1682 in MovieLens. It can be seen that NeuMF had the highest and LightGCN the lowest item coverage for the MovieLens dataset. In contrast, ItemKNN provided the best item coverage on the LastFM dataset. We can also see that the GNN-based SGL method occupied the second position in the coverage ranking on both datasets, while the methods corresponding to the other approaches presented a different behavior for each dataset. NGCF coverage was similar to that of other non-GNN-based methods on both datasets, and in the case of MovieLens, for K = 10 and K = 15, it outperformed most of those methods. This shows that the behavior of the GNN-based methods in relation to coverage is acceptable.

Figure 15: Results of NDCG for Movielens and LastFM. (**a**) MovieLens NDCG. (**b**) LastFM NDCG.

Figure 16: Results of Gini Index for Movielens and LastFM. (**a**) MovieLens Gini Index. (**b**) LastFM Gini Index.
Figure 18 shows the average popularity of the recommended items on both datasets. Minimizing this bias involves recommending items with low popularity, so lower values of this metric are more desirable. Regarding these results, we can highlight the unequal behavior of the methods on the MovieLens dataset, since they present very different values and also vary greatly over the different sizes of the top-K lists. The results showed that the SGL algorithm provided the lowest popularity values on both datasets, and NeuMF gave very similar values on the MovieLens dataset. The worst performance was presented by LightGCN on the MovieLens dataset and by NNCF on LastFM. NGCF also performed worse than most classical approaches on both datasets. Therefore, this confirms that there is an amplification of the popularity bias in the GNN-based methods, with the exception of SGL, which is superior to the rest regarding popularity and many of the other metrics.
Figure 17: Results of item coverage for Movielens and LastFM. (**a**) MovieLens item coverage. (**b**) LastFM item coverage.
Figure 18: Results of average popularity for Movielens and LastFM. (**a**) MovieLens average popularity. (**b**) LastFM average popularity.
Figure 19 shows the results of differential fairness for the sensitive attribute gender; the lower its value, the lower the gender bias. This metric applies to rating prediction, since it uses the score predictions for user-item pairs, and is not applicable to top-K recommendation lists. This is the reason why the figure does not show results for different values of K, but a single value corresponding to the mean over all rating predictions for the examples in the test set. The graph shows that DMF on the LastFM dataset and ItemKNN on the Movielens dataset had a significant gender bias. Another relevant observation is that the GNN-based methods provided better results than the rest on the MovieLens dataset, while for LastFM the values were more similar across methods. In both cases, the SGL method outperformed the others, in line with the results discussed previously. Consequently, it follows that not only GNN-based methods are sensitive to gender bias, but also other methods of lower reliability; some models such as SGL even performed better in comparison to the other methods used.
Analyzing the results as a whole, it can be concluded that the GNN-based methods provide more reliable recommendations, since they achieved the highest values of the performance metrics. However, some types of biases were amplified when using these methods, as shown by the applied bias metrics, especially those related to diversity and coverage. Within this approach, the SGL algorithm showed the best performance since it had good reliability and was not as affected by biases as the other two algorithms in this group. The self-supervised learning on the user-item graph introduced in SGL to address some problems of recommenders based on graph convolution networks (sparsity, skewed distributions, and noise) also led to better performance against different types of biases, as evidenced by the results of this study. Additionally, this algorithm was the one that had the most uniform behavior for both datasets and for all list sizes. At the opposite extreme was LightGCN, which had a high sensitivity to some biases, and its results varied greatly from dataset to dataset.
## 5 Conclusions and Future Work
Bias is one of the most important issues in RSs, and due to its heavy costs for companies and individuals, this subject matter is worth investigating. In this work, we conducted an empirical study on bias amplification in GNN-based RSs. We studied the biases that are currently of most concern to companies that offer products or services to their users through the recommendation of top-K item lists. Moreover, we also implemented an investigation into gender bias by calculating differential fairness for the sensitive attribute gender. Apart from the fact that the lists should match the users' tastes as closely as possible, the most sought-after requirements of the lists are diversity, covering as many items as possible and
Figure 19: Results of differential fairness of sensitive attribute gender for Movielens and LastFM. (a) MovieLens average differential fairness of gender. (b) LastFM average differential fairness of gender.
containing unpopular items, in order to broaden and diversify the offer to users and help them discover products they would not otherwise be able to find.
To achieve our objectives, we selected two real-world datasets affected by biases, which were the subject of this research. The chosen datasets show a power law distribution, also known as the long-tail phenomenon, which indicates popularity bias on items. Besides, the mentioned datasets contain the sensitive feature gender, which made them a good fit for this investigation.
Several different models from three approaches (CF, MF, and GNN) were implemented to compare the behavior of GNN-based methods against algorithms of other types. In order to evaluate the results, different metrics were used for measuring both the reliability and the bias amplification of the models for top-K recommendations. The metrics used to evaluate the types of biases analyzed in this study were the Gini Index, item coverage, and average popularity.
The results showed that GNN-based methods mostly provide better performance regarding precision, recall, NDCG, and other reliability metrics, but are more prone to bias amplification based on the calculated bias metrics. Among all of them, LightGCN had the highest variability depending on the dataset and types of biases, while SGL was the most stable and the least sensitive to biases, especially popularity and coverage biases. This highlights the need for further research in order to achieve a better trade-off between accuracy and bias amplification. Furthermore, the results of gender bias showed a different pattern in bias amplification. In most cases, the GNN methods performed better with respect to differential fairness on both datasets. This shows that GNN approaches can have different amplification effects on different types of biases.
As future work, our purpose will be to investigate the causes of bias amplification in GNN-based recommendation algorithms and to propose mitigation solutions that minimally impact the reliability of recommendations. In addition, we intend to extend the study by making use of more datasets and a larger number of metrics, which will allow us to analyze sensitive features other than gender. Besides, gender bias could be investigated with a non-binary gender attribute.
Author Contributions: Conceptualization, N.C., N.S. and M.N.M.-G.; methodology, N.C. and M.N.M.-G.; software, N.C. and N.S.; validation, N.C. and N.S.; formal analysis, N.C.; investigation, N.C. and N.S.; resources, M.N.M.-G.; data curation, N.C.; writing - original draft preparation, N.C.; writing - review and editing, N.C., N.S. and M.N.M.-G.; visualization, N.C.; supervision, M.N.M.-G.; project administration, M.N.M.-G.; funding acquisition, M.N.M.-G. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Data Availability Statement: Publicly available datasets were used. Details are provided in Section 3.1.
Conflicts of Interest: The authors declare no conflict of interest.
2304.14925 | Uncertainty Aware Neural Network from Similarity and Sensitivity | H M Dipu Kabir, Subrota Kumar Mondal, Sadia Khanam, Abbas Khosravi, Shafin Rahman, Mohammad Reza Chalak Qazani, Roohallah Alizadehsani, Houshyar Asadi, Shady Mohamed, Saeid Nahavandi, U Rajendra Acharya | 2023-04-27T02:05:31Z | http://arxiv.org/abs/2304.14925v1

# Uncertainty Aware Neural Network from Similarity and Sensitivity
###### Abstract
Researchers have proposed several approaches for neural network (NN) based uncertainty quantification (UQ). However, most of the approaches are developed considering strong assumptions. Uncertainty quantification algorithms often perform poorly in an input domain and the reason for poor performance remains unknown. Therefore, we present a neural network training method that considers similar samples with sensitivity awareness in this paper. In the proposed NN training method for UQ, first, we train a shallow NN for the point prediction. Then, we compute the absolute differences between prediction and targets and train another NN for predicting those absolute differences or absolute errors. Domains with high average absolute errors represent a high uncertainty. In the next step, we select each sample in the training set one by one and compute both prediction and error sensitivities. Then we select similar samples with sensitivity consideration and save indexes of similar samples. The ranges of an input parameter become narrower when the output is highly sensitive to that parameter. After that, we construct initial uncertainty bounds (UB) by considering the distribution of sensitivity aware similar samples. Prediction intervals (PIs) from initial uncertainty bounds are larger and cover more samples than required. Therefore, we train bound correction NN. As following all the steps for finding UB for each sample requires a lot of computation and memory access, we train a UB computation NN. The UB computation NN takes an input sample and provides an uncertainty bound. The UB computation NN is the final product of the proposed approach. Scripts of the proposed method are available in the following GitHub repository: [https://github.com/dipuk0506/UQ](https://github.com/dipuk0506/UQ)
keywords: Uncertainty Bound, Probabilistic Forecast, Neural Network, Prediction Interval, Uncertainty Quantification, Heteroscedastic Uncertainty.
## 1 Introduction
Whenever any prediction system produces an unexpectedly high prediction error, people try to investigate the reason for that failure. NNs consist of weights and biases whose values are determined during training. Therefore, the training procedure and the sample distribution play a vital role in the heteroscedastic performance of NNs [1; 2]. The knowledge of the sample distribution also helps individuals in the investigation. Moreover, NN training needs to be robust to achieve competitive performance on rare and critical samples, along with a high overall performance [3; 4].
Traditional regressive models can find the regression mean of a quantity for any input combination within the range [5; 6]. They also provide an overall statistical error, such as the mean-square-error (MSE), the root-mean-square-error (RMSE), etc. [7]. Traditional point prediction models with an overall statistical error cannot represent the level of heteroscedastic uncertainty: the level of uncertainty can be high in one input domain and low in another [8; 9]. Prediction intervals with a coverage probability can indicate the level of heteroscedastic uncertainty. Regions with narrower intervals have lower uncertainty, and regions with wider intervals have higher uncertainty [10]. Uncertainty quantification is becoming popular in various fields [11; 12], and neural networks are gaining popularity due to their optimal performances [13]. There exist several popular approaches for constructing prediction intervals. Bayesian neural networks are getting popular due to their applications in deep learning; however, Bayesian regressive neural networks struggle to maintain a coverage probability close to the expected coverage probability. Cost-function-based prediction intervals can provide a narrow interval with a coverage probability very close to the expected one [14]. However, different researchers have proposed different cost functions, and there exist debates on their acceptability. One group of researchers prescribed reducing the failure distance, while others prescribed bringing the target near the mid-interval. There is also debate on penalizing high coverage probability. Therefore, we propose a method that trains NNs based on similar samples instead of a cost function.
Although neural networks have brought promising results

2304.08883 | Parameterized Neural Networks for Finance | Daniel Oeltz, Jan Hamaekers, Kay F. Pilz | 2023-04-18T10:18:28Z | http://arxiv.org/abs/2304.08883v1

# Parameterized Neural Networks for Finance
###### Abstract
We discuss and analyze a neural network architecture that enables learning a model class for a set of different data samples rather than just learning a single model for a specific data sample. In this sense, it may help to reduce the overfitting problem, since, after learning the model class over a larger data sample consisting of such different data sets, just a few parameters need to be adjusted for modeling a new, specific problem. After analyzing the method theoretically and by regression examples for different one-dimensional problems, we finally apply the approach to one of the standard problems asset managers and banks are facing: the calibration of spread curves. The presented results clearly show the potential that lies within this method. Furthermore, this application is of particular interest to financial practitioners, since nearly all asset managers and banks that already have solutions in place may need to adapt or even change their current methodologies when ESG ratings additionally affect the bond spreads.
## 1 Introduction
Deep learning has shown impressive results over the past two decades in various fields, such as image recognition and classification, as well as natural language processing. A lot of training data is usually needed to calibrate neural networks for these tasks, but, unfortunately, financial data is often quite limited. For example, consider calibrating a neural network for forecasting the distribution of returns of a certain stock conditioned on past returns. Although there may be a long history of daily closing stock prices, maybe even 40 years for certain companies, we end up with only approximately 10,000 data points, which is not that much for calibrating a neural network, in particular because such problems often have a low signal-to-noise ratio. Moreover, the time series may not be stationary either, which means that past data may not be a good input for training a network that predicts future data. Hence, the total amount of data that is suitable for fitting has to be further reduced by selecting sub-periods of the available data. On the other hand, there often exists a global structure in the data that can be found in all periods, maybe even over different stocks, and the question is how we can make use of such structures when training the network.
Here, multi-task learning (MTL) comes into play. Multi-task learning has been successfully applied in different fields such as time series forecasting, in general [1], weather forecasting and power generation modeling [2, 3, 4, 5], computer vision [6, 7], natural language processing [8, 9] and many other applications, see [10]. MTL can improve the generalization capabilities resulting in a lower risk of overfitting, see [11, 12, 13, 14]. Additionally, learning new tasks may be faster and more robust using MTL. A drawback of these methods is the so-called _negative transfer_ which describes the effect of getting larger errors in less difficult tasks, compared to single neural network models, due to the larger errors involved by the difficult tasks [15, 16, 3, 1].
In this paper we discuss and analyze the application of a very simple MTL architecture as introduced in [4] under the name task embedding network, which we believe is a promising approach to solving several problems in finance. Note, that we prefer the name parameterized neural network (PNN) in our context, for reasons that are discussed later.
The paper is organized as follows. In section 2 we give a brief overview of different MTL architectures and describe the PNN design. We also discuss the generalization capability of the PNN following the approach of [13]. Section 3 presents several simple experiments that give insight into the functioning of the PNN and its performance on certain problem classes. The section concludes with a more complex example inspired by the problem of calibrating spread curves to bond market quotes. In the final section, the results are summarized, and an outlook on potential applications in the area of Finance as well as future research are given.
## 2 Multi-Task Learning
### Architectures
In general, MTL can be subdivided into two categories [12], _soft parameter sharing_ and _hard parameter sharing_. In soft parameter sharing, we calibrate a network to each task separately in a way, that the network parameters are somehow related, e.g. by penalizing deviations between parameters of different networks [17, 18]. Hard parameter sharing goes back to [19] and PNNs belong to this category. In hard parameter sharing, the neural networks for every single task share a certain subset of their parameters. Here, a lot of different architectures exist [11, 20], and as discussed in [12] may be further categorized into _encoder-based_ and _decoder-based_ architectures. Encoder-based architectures share the input and first layers (bottom layers) of the networks. This encoder-based approach is also known as internal representation learning and [13] was able to show that this lowers the risk of overfitting compared to calibrating a single network to each task separately. Decoder-based architectures, which PNNs belong to, apply task-specific networks using their output as input for a single network that is task-independent.
The selection of the architecture may depend on different considerations, and given a problem (to our knowledge) there are no general criteria available that determine which architecture is more suitable. Encoder-based approaches seem to dominate the field of multi-task learning, especially in the area of computer vision. Recent work compares both architectures on different kinds of problems [12, 21]. However, as we will discuss in the next section, we think that many problems in finance can benefit from the very simple multi-task architecture that is discussed in the following.
### Parameterized Neural Networks
We discuss the PNN architecture and motivate why it is a good candidate for MTL in the financial field. To our knowledge, PNNs were first proposed in [4], under the name task embedded
networks. The tasks are represented by integers, very similar to word embedding [22], and an embedding layer is used to map these integers to a corresponding parameter vector consisting of real values. The concatenation of this parameter vector with the original input data defines the input to the main neural network, which is the same for all tasks (in terms of the network weights), see figure 1. Each of these parameter vectors is optimized during training by optimizing the embedding layer weights.
To illustrate this approach, let us assume we have \(n\) tasks, and a task \(1\leq i\leq n\) consists of \(m\) samples \(\{z_{ij}\}_{j=1}^{m}\), where \(z_{ij}=(x_{ij},y_{ij})\in Z\subset\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\) for task independent \(k_{1}\) and \(k_{2}\). For ease of notation, each task has the same number of samples. The \(x_{ij}\) are the (regression or approximation) function inputs, and the \(y_{ij}\) are the outputs.
Further, assume the embedding layer has dimension \(l\), which means that we have a task-specific parameter vector \(p_{i}\in\mathbb{R}^{l}\) for each task \(1\leq i\leq n\). Given a task \(i\), the neural network is a function \(g:\mathbb{R}^{k_{1}}\rightarrow\mathbb{R}^{k_{2}}\), that also can be regarded as \(g(x;p_{i})\), the approximating function of task \(i\). More generally, for any parameter vector \(p\) (not necessarily corresponding to a task from the training set) the neural network \(g(x;p)\) represents a family of functions depending on the parameter \(p\).
The reason why we prefer the term parameterized neural network in contrast to the term task-embedded neural network is, that the problems we have in mind are not related to the solution of different tasks (as in classical multi-task learning), but rather related to finding an optimal solution within a family of parameterized functions.
Figure 1: Parameterized network architecture. The input is concatenated with a task-specific vector that is individually trained while the remaining network architecture is unchanged (except for a higher dimension in input space).

It is quite obvious that this approach is very generic and independent of the basic network architecture that uses the task embedding as input. Therefore, without many changes, one can directly apply this approach not only to simple feed-forward multi-layer networks but also to more sophisticated architectures such as generative methods (VAE and GAN), mixture-density networks (MDN), and recurrent networks, as well as to reinforcement learning. We will give an outlook of promising applications in the area of finance at the end of this work. Although being very generic on the one hand, the approach is restrictive in terms of the structure of the different tasks. Obviously, the architecture only makes sense for multiple tasks with the same (or similar) input and output spaces. This might be a substantial limitation for some use cases of multi-task learning (even in finance), but we believe that there are a lot of applications to which this approach can be successfully applied, where it significantly helps to obtain stable models based on a limited amount of available training data.
We discuss certain properties of this approach in the following sub-sections.
#### 2.2.1 Fast Calibration to New Data
After model calibration, we end up with a family of functions parameterized by the parameter vector \(p\). Whenever we have to calibrate the model to new data represented by a new task integer, we just have to find a new parameter \(p\in\mathbb{R}^{l}\) and get a model for this task. The start value for the parameter is usually important for the speed of convergence when using stochastic gradient methods. Here, one may use either the parameter of the last calibration (for instance, when the tasks are generated by a time series and expected to be auto-correlated) or simply the average over all parameters that have been calibrated so far. In our numerical experiments, we use the latter approach to calibrate to the test data, where the average over parameters is taken from the training data. This approach gave good results just after a few steps of gradient descent for our experiments.
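A sketch of this recalibration step, reusing the ParameterizedNN class from the previous snippet, could look as follows: all shared network weights stay frozen and only a fresh parameter vector is optimized, warm-started at the mean of the training parameters (the optimizer settings follow section 3).

```python
import torch
import torch.nn as nn

def calibrate_new_task(model, x_new, y_new, epochs=100, lr=0.01, batch_size=10):
    """Fit only a new parameter vector p to new data, keeping the shared
    network weights fixed."""
    # Warm start: average over all parameter vectors learned during training
    p = model.task_params.weight.detach().mean(dim=0, keepdim=True).clone()
    p.requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)   # only p is optimized
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        perm = torch.randperm(len(x_new))
        for start in range(0, len(x_new), batch_size):
            idx = perm[start:start + batch_size]
            xb, yb = x_new[idx], y_new[idx]
            pb = p.expand(len(xb), -1)   # same p for the whole mini-batch
            pred = model.net(torch.cat([xb, pb], dim=-1))
            loss = loss_fn(pred, yb)
            opt.zero_grad(); loss.backward(); opt.step()
    return p.detach()
```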
#### 2.2.2 Interpretability and Validation
Since we have a parameterized family of functions, we can write the neural network model as \(g(x,p)\), a function of the inputs \(x\in\mathbb{R}^{k_{1}}\) and the parameter \(p\in\mathbb{R}^{l}\). As \(g\) is composed of affine maps and Lipschitz continuous activation functions, it is Lipschitz continuous on each compact subset \(\mathcal{C}\), and therefore it easily follows that for any two compact subsets \(\mathcal{C}_{1}\subset\mathbb{R}^{k_{1}}\) and \(\mathcal{C}_{2}\subset\mathbb{R}^{l}\) there is a constant \(L(\mathcal{C}_{1},\mathcal{C}_{2})\) such that
\[\|g(x;p_{1})-g(x;p_{2})\|\leq L(\mathcal{C}_{1},\mathcal{C}_{2})\|p_{1}-p_{2}\| \text{ for all }x\in\mathcal{C}_{1},p_{1},p_{2}\in\mathcal{C}_{2}.\]
Hence, given two parameters \(p_{1}\) and \(p_{2}\) we directly get an upper bound on the distance between the two models corresponding to \(p_{1}\) and \(p_{2}\). We could use techniques to compute bounds on the Lipschitz constant, see [23] and the references therein, for a given network \(g\) or use methods to build Lipschitz-constrained networks [24] to explicitly bound the Lipschitz constant. But even if we do not explicitly know the Lipschitz constant, this property may help to understand and validate new model parameters derived from training on new data by comparing them to previous results and considering tasks that were similar in the past. As we will see in the numerical experiments the parameters may also be used to identify certain regimes in the tasks and allow to cluster them.
It is another advantage of the PNN that a set of extensively validated and tested models with parameters in a certain range, \(p\in[p_{\text{low}},p_{\text{up}}]\), transfer their validity to new models with calibrated parameters in the same range. This may save computational costs and time for fully re-testing these models.
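As a sketch of how this property can be used in practice, the following computes an empirical (not certified) estimate of the model distance and of \(L(\mathcal{C}_{1},\mathcal{C}_{2})\) over finite grids; certified bounds would require the techniques cited above. The ParameterizedNN class from the previous snippets is assumed.

```python
import torch

def empirical_model_distance(model, p1, p2, x_grid):
    """max over the grid of ||g(x; p1) - g(x; p2)||; p1, p2 have shape (1, p_dim)."""
    with torch.no_grad():
        g1 = model.net(torch.cat([x_grid, p1.expand(len(x_grid), -1)], dim=-1))
        g2 = model.net(torch.cat([x_grid, p2.expand(len(x_grid), -1)], dim=-1))
        return (g1 - g2).norm(dim=-1).max().item()

def empirical_lipschitz(model, params, x_grid):
    """Crude estimate of L(C1, C2) from pairwise ratios of model distance
    to parameter distance over previously calibrated parameter vectors."""
    L = 0.0
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            dp = (params[i] - params[j]).norm().item()
            if dp > 1e-9:
                d = empirical_model_distance(model, params[i], params[j], x_grid)
                L = max(L, d / dp)
    return L
```

Such an estimate can, for instance, be used to flag a newly calibrated parameter whose distance to all previously validated parameters implies a model that lies far outside the tested range.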
#### 2.2.3 Separation of Regularization
The PNN allows separating the regularization regarding the tasks and the parameters. For instance, we may apply Gaussian noise to the \(x\) inputs but leave the parametrization \(p\) unchanged. If we penalize the first derivatives with respect to the inputs \(x\) by incorporating them into the network output and cost function, as in [25], we can enforce different restrictions to the \(x\) features than
for the parameters. This gives additional control and flexibility which may be useful in certain situations.
### Theoretical Considerations
In this section, we discuss the generalization property of the PNN and introduce some notations. The main result of this section is theorem 2.1 that gives insight into the impact of using MTL compared to calibrating a model to a single task. Here, we use mainly the results from [26] on representation learning and adapt the approach to the PNN.
To introduce the notation and the basic principle, let us first consider the case of learning one task, i.e. data consisting of vectors \(\vec{z}_{j}=(x_{j},y_{j})\in\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\). We assume that the training set \(\vec{z}\) is created by a probability distribution \(P\) on \(Z\subset\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\) and define \(Z^{m}\) as the set of all samples of length \(m\) according to \(P\), such that \(\vec{z}\in Z^{m}\). Let \(l:\mathbb{R}^{k_{2}}\times\mathbb{R}^{k_{2}}\rightarrow[0,M]\) be a _loss function_ with fixed \(M>0\). For a function \(g:\mathbb{R}^{k_{1}}\rightarrow\mathbb{R}^{k_{2}}\) we define the empirical loss by
\[\langle l_{g}\rangle_{\vec{z}}:=\frac{1}{m}\sum_{j=1}^{m}l(y_{j},g(x_{j})) \tag{1}\]
Consider \(g(x;\theta)\) a neural network with parameters \(\theta\in\Theta\). The typical learning task is to determine \(\theta^{\star}\) s.t.
\[\theta^{\star}=\operatorname*{arg\,min}_{\theta\in\Theta}\langle l_{g} \rangle_{\vec{z}} \tag{2}\]
which defines a learning algorithm
\[\mathcal{A}:\bigcup_{m\geq 1}Z^{m}\rightarrow\{g(x,\theta)\mid\theta\in \Theta\}=:\mathcal{H}. \tag{3}\]
Figure 2: Simple example for a case with overfitting, where the empirical loss (\(\langle l_{g}\rangle_{\vec{z}}=0\)) and the true loss (\(\langle l_{g}\rangle_{P}\approx 0.01\)) differ substantially. The blue dots represent samples from \(P\) while the orange circles mark the training points. The loss function \(l(y_{1},y_{2}):=|y_{1}-y_{2}|^{2}\) is the mean squared error.

By \(l_{\mathcal{H}}\) we denote the family of loss functions that are defined by all \(g(\cdot)\in\mathcal{H}\). In equation (1) the empirical loss is computed on the training set only and may overestimate the model performance due to overfitting, which means that although the empirical loss is small, the true loss defined by
\[\langle l_{g}\rangle_{P}:=\int_{Z}l(y,g(x))dP \tag{4}\]
can be quite large. As an example, see figure 2, which shows a fit with zero empirical loss but a large true loss, hence is overfitting. For a model that generalizes to new data, it is essential that the empirical loss used in the learning algorithm defined by (3) is close to the true loss. Statistical learning theory provides bounds for the difference between these two losses, depending on the complexity of the learning model and the number of training points. To measure the distance between both losses, we use a family of metrics \(d_{\nu}:\mathbb{R}_{+}\times\mathbb{R}_{+}\rightarrow[0,1)\) introduced in [27],
\[d_{\nu}(x,y):=\frac{|x-y|}{\nu+x+y}.\]
One gets the following upper bound under suitable conditions for \(\nu>0\) and \(0<\alpha<1\),
\[\Pr\{\vec{z}\in Z^{m}:\exists l_{g}\in l_{\mathcal{H}}:d_{\nu}\left(\langle l \rangle_{P},\langle l\rangle_{\vec{z}}\right)>\alpha\}\leq C(\alpha,\nu, \mathcal{H})e^{-\frac{\alpha^{2}\nu m}{8M}}, \tag{5}\]
where \(C(\alpha,\nu,\mathcal{H})\) is a constant depending on \(\alpha\), \(\nu\) and the so-called \(\varepsilon\)-capacity of \(\mathcal{H}\), the set of neural networks defined in (3); see [26] for further details. Therefore, to guarantee with probability \(\delta\) that the empirical and the true loss do not differ by more than \(\alpha\) with respect to \(d_{\nu}\), it suffices to have \(m\) training points, with
\[m>\frac{8M}{\alpha^{2}\nu}\ln\left(\frac{C(\alpha,\nu,\mathcal{H})}{1-\delta} \right).\]
For a proof of this bound and further details, see [26] and the references therein. We will now consider the case of a PNN with fixed parameter dimension \(l\) and multiple tasks. Recall that the PNN calibrated to \(n\) tasks, where each task has \(m\) data points, produces a sequence of \(n\) different functions \(\vec{g}:=\left(g(x;p_{i},\theta)\right)_{i=1,\ldots,n}\) that share the same network parameters \(\theta\) and do only differ in their input parameter (concatenated to the original input) \(p_{i}\in\mathbb{R}^{l}\). For ease of notation, we simply write \(g_{p_{i}}\) instead of \(g(x;p_{i},\theta)\). If we denote the training points by \(\vec{z}\) where \(\vec{z}_{i}\) denotes the training data of the \(i\)-th task sampled from a distribution \(P_{i}\), we define the empirical loss analogously to the previously discussed case with only one task,
\[\langle l_{\vec{g}}\rangle_{\vec{z}}:=\frac{1}{n}\sum_{i=1}^{n}\langle l_{g_{ p_{i}}}\rangle_{\vec{z}_{i}}, \tag{6}\]
and the true loss
\[\langle l_{\vec{g}}\rangle_{\vec{P}}:=\frac{1}{n}\sum_{i=1}^{n}\langle l_{g_{ p_{i}}}\rangle_{P_{i}}. \tag{7}\]
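To make this construction concrete, the following is a minimal PyTorch sketch of a PNN with fixed parameter dimension \(l\). The class name, the storage of the per-task embeddings as a single trainable tensor, and the zero initialization are our own illustrative choices; the layer sizes follow the experimental settings in section 3 (three hidden layers with 32 neurons and SELU activations).

```python
import torch
import torch.nn as nn

class PNN(nn.Module):
    """Parameterized neural network: the weights theta are shared across all
    tasks, while each task i owns a trainable input parameter p_i in R^l."""

    def __init__(self, x_dim, n_tasks, param_dim, hidden=32, n_layers=3):
        super().__init__()
        # One embedding p_i per task, concatenated to the original input x.
        self.task_params = nn.Parameter(torch.zeros(n_tasks, param_dim))
        layers, d = [], x_dim + param_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.SELU()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, task_idx):
        # x: (batch, x_dim); task_idx: (batch,) long tensor of task indices.
        p = self.task_params[task_idx]
        return self.net(torch.cat([x, p], dim=-1))
```

Training then minimizes the empirical loss (6) jointly over \(\theta\) and all embeddings \(p_{i}\).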
In order to apply a similar approach as in [26], we define \(\mathcal{F}:=\left\{f(x)=(x,p)\mid p\in\mathbb{R}^{l}\right\}\), \(\mathcal{G}:=\left\{g(x,p;\theta)\mid\theta\right\}\) and furthermore
\[\mathcal{F}^{n}:=\left\{f(x_{1},\ldots,x_{n}):=\left(f_{1}(x_{1}),\ldots,f_{n} (x_{n})\right)\mid f_{i}\in\mathcal{F},x_{i}\in\mathbb{R}^{k_{1}}\right\} \tag{8}\]
and
\[\vec{\mathcal{G}}:=\left\{g((x_{1},p_{1},\ldots,x_{n},p_{n});\theta):=\left(g( x_{1},p_{1};\theta),\ldots,g(x_{n},p_{n};\theta)\right)\mid\theta\right\}. \tag{9}\]
Using this notation we obtain the following theorem.
**Theorem 2.1**.: _Let \(\nu>0\), \(0<\alpha<1\), be fixed and \(\varepsilon_{1},\varepsilon_{2}>0\) such that \(\varepsilon_{1}+\varepsilon_{2}=\frac{\alpha\nu}{8}\). For \(0<\delta<1\) and the structure_
\[X^{n}\xleftrightarrow{\mathcal{F}^{n}}V^{n}\xleftrightarrow{\mathcal{G}} \mathbb{R}^{k_{2}\cdot n}\]
_and let \(\vec{z}\in Z^{(m,n)}\) be generated by \(m>\frac{8M}{\alpha^{2}\nu}\left[\ln(\mathcal{C}(\varepsilon_{1},\mathcal{F}))+\frac{1}{n}\ln\frac{4\mathcal{C}(\varepsilon_{2},l_{\mathcal{G}})}{\delta}\right]\) independent samples. Then_
\[\Pr\left\{\vec{z}\in Z^{(m,n)}:\exists\vec{g}\circ\vec{f}\in\vec{\mathcal{G}}\circ\mathcal{F}^{n}:d_{\nu}(\langle l_{\vec{g}\circ\vec{f}}\rangle_{\vec{z}},\langle l_{\vec{g}\circ\vec{f}}\rangle_{\vec{P}})>\alpha\right\}\leq\delta \tag{10}\]
A sketch for a proof is given in Appendix 5.
This theorem shows that the parameterized network approach may reduce overfitting compared to the single-task case. We see that the number of tasks reduces the term involving the complexity of the overall set of network functions.
## 3 Numerical Experiments
In this section, we investigate the behavior and performance of the proposed method using simulation-based experiments. If not stated otherwise, we use the following settings:
* The base network of the PNN has 3 inner layers with 32 neurons and SELU activation functions.
* Adam optimizer with 8000 epochs and exponentially decaying learning rate.
* 100 tasks used to train the network.
* 250 tasks to test the method.
We measure the error for a single task \(i\) on the test data by the root mean squared error,

\[e_{i}:=\sqrt{\frac{1}{m}\sum_{j}\|y_{ij}-\hat{g}(x_{ij},\hat{p}_{i})\|^{2}},\]
where \(\hat{g}\) is the resulting parameterized network fitted on the training data and \(\hat{p}_{i}\) denotes the parameter that is calibrated to the test data. Calibrating \(\hat{p}_{i}\) differs from the common approach of measuring the error on a test data set without recalibrating anything on that data. We also measured the error with respect to the data-generating process of each task in the training data, i.e., we created new test data from the same process and evaluated without recalibrating the respective task parameter to the new data. We use this methodology since one of our main interests in this approach is the capability of the model to calibrate to new, unseen data. For the calibration of \(\hat{p}_{i}\) we use the mean over all parameters from the training as the start value and the Adam optimizer with 100 epochs, a batch size of 10, and a learning rate of 0.01. We generate 250 test tasks and define the test error as the square root of the mean over all single-task errors,
\[e:=\sqrt{\frac{1}{250}\sum_{i=1}^{250}e_{i}^{2}}.\]
For all one-dimensional experiments below we use a uniform grid with 100 grid points on the function domain to construct the test data.
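A minimal sketch of this per-task calibration in PyTorch, using the PNN class sketched above (the function name is ours; the optimizer settings follow the text):

```python
def calibrate_task_param(model, x, y, epochs=100, lr=0.01, batch_size=10):
    """Fit only the per-task embedding p_hat to new data; the shared
    network weights of `model` stay frozen."""
    for w in model.net.parameters():          # freeze the shared base network
        w.requires_grad_(False)
    # Start value: mean of the embeddings learned during training.
    p_hat = model.task_params.detach().mean(dim=0, keepdim=True).clone()
    p_hat.requires_grad_(True)
    opt = torch.optim.Adam([p_hat], lr=lr)
    n = x.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(n)
        for s in range(0, n, batch_size):
            idx = perm[s:s + batch_size]
            inp = torch.cat([x[idx], p_hat.expand(len(idx), -1)], dim=-1)
            loss = torch.mean((model.net(inp) - y[idx]) ** 2)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return p_hat.detach()
```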
Figure 3: Left: PNN tasks sampled from the family of quadratic functions defined in (11). Right: Approximation errors depending on the number of parameters used in the PNN, for 100 different tasks used in training.

Figure 4: Approximation functions from the first four tasks for a PNN with three parameters, compared to a neural network trained on the respective, single task only. The training data is shown by the dots.
Figure 5: Projection along a parameter direction (equidistant between the minimum and maximum of the respective parameter over all tasks) where all other parameters are fixed.
Figure 6: Scatter plots for all parameters calibrated on the training set for the function family defined by (11).
### Family of quadratic functions
In our first example, we consider a problem where each task is a simple quadratic function of the form
\[f(x;a,b,c):=a(x-c)^{2}+b,\;x\in[-1,1], \tag{11}\]
for parameters \(a\), \(b\), \(c\). For each task, we first sample the parameters from uniform distributions, such that \(a\in[1.0,2.0]\), \(b\in[-1,1]\), and \(c\in[-0.5,0.5]\), to determine the function for this task. For each function we then sample \(n\) points, uniformly distributed on \([-1,1]\), and evaluate the function on these points. Figure 3 shows samples and errors for the case with three points per task. In this experiment, we analyze the performance of the method in relation to the number of parameters used in the PNN. Since the family of functions has three parameters, we expect this to also be the optimal parameter dimension of our PNN.
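A short sketch of the task-sampling procedure just described (NumPy; the function name is ours):

```python
import numpy as np

def sample_quadratic_task(n_points, rng):
    """Sample one task from the family (11): f(x) = a (x - c)^2 + b."""
    a = rng.uniform(1.0, 2.0)
    b = rng.uniform(-1.0, 1.0)
    c = rng.uniform(-0.5, 0.5)
    x = rng.uniform(-1.0, 1.0, size=n_points)
    return x, a * (x - c) ** 2 + b

rng = np.random.default_rng(0)
x, y = sample_quadratic_task(3, rng)   # three points per task, as in figure 3
```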
The right graph in figure 3 shows the error corresponding to the number of parameters, as well as the mean of the error for eight different calibrations using different random seeds for network initialization. As a baseline, we also plot the mean error of calibrating a neural network on each task separately. We clearly see that the PNN outperforms the per-task calibration of separate networks. Furthermore, for one and two parameters the error is slightly larger than for a higher number of parameters. This is not surprising, considering that the generating family of functions depends on three parameters. We also see that an increasing number of parameters does not affect the performance of the resulting network significantly. Figure 4 shows some examples of the resulting PNN approximation functions compared to a simple feed-forward network for four selected tasks. We see that the PNN has learned the parabolic shape of the target problem much better than the single-task network. The upper right graph shows that even extrapolation for points between -1 and 0 gives quite satisfactory results.
Table 1 shows the error results of a PNN depending on the number of data points per task, as well as on the number of tasks, compared to the error results of neural networks fitted to each task separately. We clearly see that the number of points per task as well as the number of tasks influences the overall approximation error. The error decreases by roughly a factor of two with an increasing number of points per task, for the PNN as well as for the single networks. As indicated by theorem 2.1, we observe a similar effect on the error with an increasing number of tasks. Note that performance seems to slightly deteriorate from 50 to 100 tasks. One reason for this effect might be that we used the same training parameters (number of epochs, learning rate schedule) across all configurations without tuning them individually for each number of tasks, so the networks may not have fully converged to the desired accuracy. The role of the parameters is shown in figure 5, where projections along each parameter dimension are shown. We use the parameter from task zero as a basis and vary the parameter coordinate between the minimum and maximum of the training data set in each figure. The behavior of the PNN with respect to each parameter coordinate is quite different and also mutually independent between the coordinates. This observation is confirmed by the scatter plots of the parameters in figure 6, where the distribution of the parameters appears rather uncorrelated and uniformly distributed. Furthermore, the change of a single parameter coordinate leads to a parabolic-shaped curve.

Table 1: Test error for a family of 250 quadratic functions as defined in (11) for a network fitted to each task separately (column _simple network_) and for a PNN with three parameters, trained on different numbers of tasks and points per sample.

| number of points | simple network | 10 tasks | 20 tasks | 50 tasks | 100 tasks |
| --- | --- | --- | --- | --- | --- |
| 3 | 0.669 | 0.273 | 0.236 | 0.176 | 0.163 |
| 4 | 0.538 | 0.223 | 0.258 | 0.137 | 0.141 |
| 5 | 0.481 | 0.243 | 0.117 | 0.137 | 0.115 |
| 6 | 0.292 | 0.134 | 0.113 | 0.108 | 0.092 |
### Family of quadratic functions with noise
We now consider the case of noisy data, again, generated by a family of quadratic functions, but with a bit more structure than in the previous example. Let
\[f(x;a,b):=ax^{2}+bx+\varepsilon,\;x\in[-1,1], \tag{12}\]
where \(\varepsilon\) is normally distributed with standard deviation \(0.1\). The parameters \(a\in[1.0,2.0]\) and \(b\in[-0.5,0.5]\) are uniformly sampled. Note that, apart from the noise term, every function of this family satisfies \(f(0;a,b)=0\). Due to the noise term, we sample five points per task for building the training data inputs. In the following we compare the PNN results with two benchmarks based on quadratic polynomial regression: the first one including an estimation of the constant, and the second one setting the constant term to zero and estimating only the linear and quadratic coefficients. Ignoring the bounds for the parameters \(a\) and \(b\), the quadratic regression with zero constant seems to be the best possible model class for this kind of data. The right graph in figure 7 shows the error on the training data for an increasing number of parameters and eight different networks (with different initial weights), as well as the results for the two regression models. Note that the error is measured between the PNN and the target function _without_ noise. Independently of the number of parameters, the PNN provides smaller errors than the regression model using quadratic polynomials. Moreover, the mean error for two parameters is nearly equal to the error using the quadratic regression with zero constant. In contrast to the previous example, we see that the errors for PNNs with parameter dimension greater than two are slightly higher than for PNNs with two parameters. The reason might be that the noise term introduces a bit of overfitting. Using some kind of regularization might further improve the results. However, even for a larger number of parameters the results are quite good, keeping in mind that the performance is still better than applying a quadratic polynomial regression. From these results we see that the model is able to learn the functional structure on noisy data too. Moreover, since the results for the PNN are better than for the quadratic model (with fitted constant), we can assume that the PNN is able to learn the property of the real function being equal to zero for \(x=0\). This is visually confirmed by figure 8, where the distribution of the function values at \(x=0\) is plotted for the PNN and for the polynomial regression. Figure 9 shows the target function, the PNN regression, and the polynomial regression functions for different tasks.

Figure 7: Left: A task sampled from the family defined by (12) together with the generating function (before noise is added) and the regressions using the quadratic function and the quadratic function with zero constant. Right: The error of the PNN for different numbers of parameters. As baselines, the errors of the simple regressions (quadratic and quadratic with constant term fixed to zero) are plotted as straight lines.

Figure 8: Distribution of the predicted values at \(x=0\) from a PNN fitted to 250 tasks sampled from the family of functions defined by (12), as well as from the respective quadratic polynomial regressions.

Figure 9: Function values from PNN, polynomial regression, and target function for two tasks sampled by (12).
### Family of quadratic functions with interdependencies
Many financial applications involve binary or categorical features such as ratings or countries. As an example, consider the interest rate spread curves mentioned before. With real data it often happens that the input data is unbalanced in the sense that some categories occur much less frequently than others. For instance, for a developed interest rate market one will find a large number of bond prices for all rating categories, whereas a smaller country may have liquid prices only for some of the ratings.
In such cases, PNNs may help to learn relationships between categories to improve results for underrepresented data. To analyze such behavior we perform the following simple experiment.
For \(x\) in \([-1,1]\times\{0,1\}\) we define the function family
\[f(x;a,b,c,d):=\left\{\begin{array}{ll}a(x_{1}-c)^{2}+b+dx_{1}&\text{ if }x_{2}=1,\\ a(x_{1}-c)^{2}+b&\text{ otherwise.}\end{array}\right. \tag{13}\]
As before, each task is constructed by uniformly sampling \(a\in[1.0,2.0]\), \(b\in[-1,1]\), \(c\in[-0.5,0.5]\) and \(d\in[0.1,1.0]\). We generate five \(x\) values per task, where \(x_{1}\) is uniformly sampled from \([-1,1]\); for three of these five samples \(x_{2}\) is set to \(0\), and \(x_{2}=1\) for the other two. Note that splitting each task into two separate estimation problems (according to the binary value \(x_{2}\)) does not work, since in the case \(x_{2}=1\) only two data points per task are given, which is not enough to recover the underlying quadratic structure of the function from the data. Several sample points of different tasks are plotted in the left graph of figure 10. The right graph in figure 10 shows the error for different parameter dimensions together with the errors of a simple neural network fitted to each task separately. As in the previous examples, we see a significant improvement for parameter dimensions greater than 1 compared to the simple neural network case. The PNN learned from the tasks that the true function for \(x_{2}=1\) is parabolic-shaped too. In figure 11 the true and the estimated functions are plotted for four selected tasks.

Figure 10: Left: Several tasks sampled from (13). The straight lines correspond to the case \(x_{2}=0\) and the dashed lines to \(x_{2}=1\). Right: Approximation errors for different numbers of parameters of the PNN, using 100 different tasks for training.

Figure 11: Four tasks sampled from (13) with true functions and PNN estimations.
### Regimes
In this example, the tasks are generated by two different functions and the information on the function used is not encoded in the feature data. The functions are simple quadratic and cubic monomials,
\[\begin{array}{rcl}f_{1}(x)&=&ax^{2}\mbox{ for }x\in[0,1],\\ f_{2}(x)&=&ax^{3}\mbox{ for }x\in[0,1].\end{array} \tag{14}\]
Figure 12: Left: Sample data for 30 tasks from the function family in (14), interpolated by straight lines for each sampled function. Right: 250 sampled functions.
Figure 13: Left: Error of PNN estimates for different parameter dimensions. Right: Two sampled tasks, one quadratic and one cubic, together with resulting PNN approximations and true functions.
We construct each task by randomly choosing \(f_{1}\) or \(f_{2}\), sampling \(a\) uniformly from \([1,2]\) and four \(x\) values uniformly from \([0,1]\). Figure 12 shows 30 sampled tasks in the left plot and a sample of 250 functions in the right one. The error for different parameter dimensions is given in the left graph of figure 13. In this example the error increases with increasing parameter dimension. One may guess that this behavior is due to overfitting, but the training error shows the same pattern. A possible explanation might be that, since we are not applying any hyperparameter tuning, the optimization did not fully converge to sufficient accuracy. The right graph of figure 13 shows, for a quadratic and a cubic task, the values predicted by a PNN (with two parameters), the training data, as well as the target functions. For PNNs with parameter dimension two, figure 14 shows the calibrated parameter vectors of several tasks as a scatter plot. We clearly see that the parameters can be separated into two sets, one representing the quadratic function regime, and the other the cubic function regime.
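A short sketch of this task construction, following the NumPy conventions of the earlier sampling sketch (the function name is ours); note that the chosen regime is deliberately not part of the features:

```python
def sample_regime_task(rng, n_points=4):
    """Sample one task from (14): a quadratic or cubic monomial, chosen at random."""
    power = rng.choice([2, 3])             # latent regime, not encoded in x
    a = rng.uniform(1.0, 2.0)
    x = rng.uniform(0.0, 1.0, size=n_points)
    return x, a * x ** power
```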
Figure 14: Calibrated parameters for all training data points for a PNN with parameter dimension equal to two.
Figure 15: Left: Sampled yield curves for different ratings and all other categories fixed. Right: 500 randomly sampled yield curves with all categories fixed.
Figure 16: Target (dashed line) and PNN calibrated yield curves for different ratings and a fixed task, initial value (top left), after 1 epoch (top right, only PNN parameters are calibrated) and 15 epochs (bottom middle).
Figure 17: Differences between target and predicted values for daily recalibrated PNN (600 days) with 600 different bonds per day.
### Bond Spread Curve Calibration
In this section, we analyze the PNN for a potential application in finance, the calibration of spread curves for bond pricing. Here, we use artificially created data and not real market data in order to allow for a precise measurement of the performance. The application to real-world data is straightforward and will be presented in future work.
For the pricing of bonds one typically uses curves that express the excess return over a risk-free rate depending on the maturity, the so-called spread curve. Given such a curve for a bond, the price of the bond can simply be derived by discounting all cash flows of the bond with the values of the curve at the dates where the cash flows are paid.
As the basis for the creation of the data, we use the Nelson-Siegel parametrization [28] that is given by
\[r(T;\beta_{0},\beta_{1},\beta_{2},\tau):=\beta_{0}+(\beta_{1}+\beta_{2})\tau( 1.0-e^{-T/\tau})/T-\beta_{2}e^{-T/\tau}.\]
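A direct NumPy transcription of this parametrization, valid for \(T>0\) (the function name is ours):

```python
import numpy as np

def nelson_siegel(T, beta0, beta1, beta2, tau):
    """Nelson-Siegel curve r(T; beta0, beta1, beta2, tau); T > 0 assumed."""
    decay = tau * (1.0 - np.exp(-T / tau)) / T
    return beta0 + (beta1 + beta2) * decay - beta2 * np.exp(-T / tau)
```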
We assume that there are four different categories influencing the spread: company rating, country, sector, and ESG rating. More precisely, we consider 9 rating classes, 5 countries, 11 sectors, and 3 ESG ratings, which gives a total of 1485 different classes with different spread curves. Note that other features like liquidity and securitization level that also affect bond prices in practice can easily be incorporated into our approach as well. In real-world applications, there is usually not enough price data to calibrate all these curves for each category. Therefore, either the categories have to be defined on a coarser level, or some relationships between the curves need to be used. For example, the spread for a bond with a lower rating must be higher than the spread for a bond with the same features but a higher rating. Note that although we may not have enough quotes on one day to successfully calibrate a network giving the spread for a given bond, we usually have a lot of data over time. This allows a PNN to learn a parameterization that reflects relationships between these categories and allows one to calibrate just the embedding parameter to data for a single day.
The data is created as follows. For each task, we first sample two sets of Nelson-Siegel parameters for each of the above four categories
\[\beta_{0}^{i,j} \sim \mathcal{U}(0,0.15),\] \[\beta_{1}^{i,j} \sim \mathcal{U}(0-\beta_{0}^{i,j},0.1-\beta_{0}^{i,j}),\] \[\beta_{2}^{i,j} \sim \mathcal{U}(0,0.2),\] \[\tau^{i,j} \sim \mathcal{U}(0.2,2.0),\]
where \(i\in\{1,2\}\) and \(j\) denotes the category. We then define two curves
\[s_{1,j}(T) = r(T;\beta_{0}^{1,j},\beta_{1}^{1,j},\beta_{2}^{1,j},\tau^{1,j}), \tag{15}\] \[s_{2,j}(T) = s_{1,j}(T)+r(T;\beta_{0}^{2,j},\beta_{1}^{2,j},\beta_{2}^{2,j},\tau^{2,j}). \tag{16}\]
Note that, due to the range of the parameters and the construction of \(s_{2,j}\), we always have \(s_{1,j}(T)\leq s_{2,j}(T)\). We denote by \(n_{j}\) the number of elements in category \(j\), e.g., \(n_{j}=3\) for the category _ESG rating_. We then define the overall curve \(s(T;k_{1},k_{2},k_{3},k_{4})\), \(1\leq k_{j}\leq n_{j}\), by
\[s(T;k_{1},k_{2},k_{3},k_{4})=\sum_{j=1}^{4}w_{j}\left(\frac{n_{j}-k_{j}}{n_{j} -1}s_{1,j}(T)+\frac{k_{j}-1}{n_{j}-1}s_{2,j}(T)\right)\]
where \(w_{1}=0.1\), \(w_{2}=0.2\), \(w_{3}=0.5\), \(w_{4}=0.2\). Figure 15 shows one set of yield curves (left) sampled with different ratings (all other categories are the same) and a set of sampled curves
all with the same categories. If we handled all yield curves separately using the Nelson-Siegel parametrization, the high number of different categories would leave us with 5940 parameters. For most interest rate markets there is not enough bond data to calibrate that many parameters. However, due to our construction, which imposes a strong structure between the yields of different categories, the problem in fact has just 64 parameters. Here, the PNN may learn this structure using much fewer parameters, making it possible to calibrate the embedded parameters when new data comes in. In our experiment, we use a simple feedforward neural network with three layers and 64 neurons per layer. Since the credit and ESG ratings exhibit a meaningful ordering, we transform them to ordinal values between zero and one. The maturity is scaled linearly so that 20 years are transformed to 1.0. The country and sector features are one-hot encoded, which leads to a total input dimension of 19. The training parameters are given by
* 500 different tasks as training data, where each task consists of 600 bonds and their respective yields.
* A simple feedforward neural network with three layers and 64 neurons per layer.
* Adam optimizer with an initial learning rate of 0.001 and an exponential-decay learning rate schedule (decay factor 0.99), 1000 data points per batch, and 20,000 epochs.
We test the resulting network by sampling new data (600 days with 600 bond yields per day) and recalibrating just the PNN parameters each day, leaving the base model unchanged. For the recalibration of the parameters (leaving the network weights fixed) we use the average over all parameters derived from the initial training as the start value and apply 15 epochs of the Adam optimizer over the 600 data points with 10 points per batch. Note the very fast and robust convergence properties we observed in our experiments. Here, figure 16 shows the resulting curves versus the targets for a data point (varying ratings) at the beginning of the parameter recalibration, after one, and after 15 optimizer steps. A histogram of the differences between target and predicted values on the overall test data is shown in figure 17, split into the errors for bonds with maturity less than 3 months and all others. We see that most of the errors are in a range of less than 10 basis points for the bonds with maturities beyond 3 months, while the error of short-dated bonds lies in a wider range of around 25 basis points. The reason why short-dated bonds show a higher error can be seen in the right graph of figure 15, where some sample curves for a fixed bond are depicted: most of the curves are very steep at the beginning, while, compared to the overall number of bonds, the training data contains only few bonds with such a short maturity (approximately 1.2%). To further improve the results for the short-dated instruments, we could use oversampling or a different weighting.
## 4 Conclusion and Future Work
In this paper, we discussed and analyzed a very simple form of MTL where all network weights except the bias in the first layer are shared between different tasks. We showed by several simple examples that this approach is able to learn a family of models for given data, which makes it relatively easy to recalibrate the respective parameters to new data while avoiding overfitting. Another important aspect of applying methods in the financial domain is the safety and robustness of the algorithm when retrained on new data. Here, an interesting aspect of the proposed method is that the validation of the model class can be done in advance by validating the trained model with respect to the embedding parameters. So if new data comes in and we need to recalibrate the model by recalibrating just the embedded parameters, we have a strong indication that our model behaves well as long as the embedded parameters stay in the range that was used in the validation. Moreover, one example showed that the calibrated embedded parameters can be used to identify different regimes, indicating that these parameters may also be used to analyze the training data, which might give further insights into the problem structure.
These results indicate the potential power of this approach within the financial domain, where many problems may exhibit a certain macrostructure between different tasks, overcoming the problem that we may not have enough data for a successful calibration of a neural network to a single task. As an example, we investigated the performance of the method in the context of calibrating bond yields to market data on an artificially created toy dataset. Here, the different tasks consisted of bond yields on different days depending on typical static bond data such as credit ratings, ESG ratings, countries, and sectors. This example showed that the proposed method is able to learn, from a set of different tasks, a parametrization of the bond yields that can be stably and robustly recalibrated to new data.
However, although the results are quite promising, we have to test the method on more benchmark applications as well as on real data. Applications may range from the estimation of credit default probabilities, through the construction of new parameterized volatility surfaces, up to portfolio optimization problems and the estimation of conditional probabilities.
## 5 Appendix - Proof of Theorem 2.1
In this section, we give a sketch of the proof of Theorem 2.1, which is quite similar to the theory presented in [26] with just minor modifications to handle our special case. We therefore present only the most relevant theorems and lemmas for the proof of Theorem 2.1 and refer to [26] for the proofs of these statements and for further definitions. Recall from section 2.2 that \(n\) denotes the number of tasks and \(m\) the number of samples per task, and let \(Z\subset\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\).
To deal with the additive structure of (6) and (7) we first define the following.
**Definition 5.1**.: _Let \(\mathcal{H}_{1},\ldots,\mathcal{H}_{n}\) be \(n\) sets of functions mapping \(Z\) into \([0,M]\). For all \(h_{i}\in\mathcal{H}_{i}\), define_
\[\bigoplus_{i=1}^{n}h_{i}(\vec{z}):=\frac{1}{n}\sum_{i=1}^{n}h_{i}(z_{i})\]
_and define the set of all these functions as \(\bigoplus_{i=1}^{n}\mathcal{H}_{i}\)._
For such an additive structure we have the following theorem from [26].
**Theorem 5.2**.: _Let \(\mathcal{H}\subset\bigoplus_{i=1}^{n}\mathcal{H}_{i}\) be a permissible set of functions \(Z^{n}\mapsto[0,M]\). Let \(\mathbf{z}\in Z^{(m,n)}\) be generated by \(m>\frac{2M}{\alpha^{2}\nu}\) independent trials from \(Z^{n}\) according to some product probability measure \(\vec{P}=P_{1}\times\cdots\times P_{n}\). For all \(\nu>0\), \(0<\alpha<1\),_
\[\Pr\left\{\mathbf{z}\in Z^{(m,n)}:\exists\vec{h}\in\mathcal{H}:d_{\nu}\left(\langle\vec{h}\rangle_{\mathbf{z}},\langle\vec{h}\rangle_{\vec{P}}\right)>\alpha\right\}\leq 4\mathcal{C}(\alpha\nu/8,\mathcal{H})e^{-\frac{\alpha^{2}\nu nm}{8M}},\]

_where \(\langle\vec{h}\rangle_{\mathbf{z}}:=\frac{1}{m}\sum_{i=1}^{m}\vec{h}(\vec{z}_{i})\) and \(\langle\vec{h}\rangle_{\vec{P}}:=\int_{Z^{n}}\vec{h}(\vec{z})\,d\vec{P}(\vec{z})\)._
From [26] we have the following Lemma.
**Lemma 5.3**.: _Let \(\mathcal{H}:X\mapsto A\) be of the form \(\mathcal{H}=\mathcal{G}\circ\mathcal{F}\) where \(X\stackrel{{\mathcal{F}}}{{\longmapsto}}V\stackrel{{\mathcal{G}}}{{\longmapsto}}A\). For all \(\varepsilon_{1},\varepsilon_{2}>0\), \(\varepsilon=\varepsilon_{1}+\varepsilon_{2}\),_

\[\mathcal{C}(\varepsilon,l_{\mathcal{H}})\leq\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon_{1},\mathcal{F})\,\mathcal{C}(\varepsilon_{2},l_{\mathcal{G}}). \tag{17}\]
**Lemma 5.4**.: _For the function space \(\mathcal{H}=\mathcal{G}\circ\mathcal{F}\) with \(X\stackrel{{\mathcal{F}}}{{\longmapsto}}V\stackrel{{\mathcal{G}}}{{\longmapsto}}A\), \(\mathcal{F}\subset\mathcal{F}_{1}\times\cdots\times\mathcal{F}_{n}\) and \(\mathcal{G}\subset\mathcal{G}_{1}\times\cdots\times\mathcal{G}_{n}\),_
\[\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon,\mathcal{F}) \leq \prod_{i=1}^{n}\mathcal{C}_{l_{\mathcal{G}_{i}}}(\varepsilon, \mathcal{F}_{i}), \tag{18}\] \[\mathcal{C}(\varepsilon,l_{\mathcal{G}}) \leq \prod_{i=1}^{n}\mathcal{C}(\varepsilon,l_{\mathcal{G}_{i}}). \tag{19}\]
**Lemma 5.5**.: _For \(\bar{\mathcal{G}}:=\{(g(x_{1}),\cdots,g(x_{n}))\mid g\in\mathcal{G}\}\subset\mathcal{G}^{n}\) we have_

\[\mathcal{C}(\varepsilon,l_{\bar{\mathcal{G}}})\leq\mathcal{C}(\varepsilon,l_{\mathcal{G}}). \tag{20}\]
Proof.: By definition we have
\[\mathcal{C}(\varepsilon,l_{\mathcal{G}}):=\sup_{P\in\mathcal{P}_{\mathcal{G}}} \mathcal{N}(\varepsilon,\mathcal{G},d_{P})\]
and analogously
\[\mathcal{C}(\varepsilon,l_{\bar{\mathcal{G}}}):=\sup_{\vec{P}\in\mathcal{P}_{\bar{\mathcal{G}}}}\mathcal{N}(\varepsilon,\bar{\mathcal{G}},d_{\vec{P}}).\]

For \(\vec{P}=P_{1}\times\cdots\times P_{n}\in\mathcal{P}_{\bar{\mathcal{G}}}\) define \(\bar{P}:=\frac{1}{n}\sum_{i=1}^{n}P_{i}\). We show that \(\{\bar{g}\mid g\in\mathcal{N}(\varepsilon,\mathcal{G},d_{\bar{P}})\}\) is an \(\varepsilon\)-cover for \(\bar{\mathcal{G}}\).
For fixed \(\bar{g}=(g(x_{1}),\cdots,g(x_{n}))\in\bar{\mathcal{G}}\) we select \(\tilde{g}\in\mathcal{N}(\varepsilon,\mathcal{G},d_{\bar{P}})\) such that \(d_{[\bar{P},l_{\mathcal{G}}]}(\tilde{g},g)\leq\varepsilon\) and get

\[d_{[\vec{P},l_{\bar{\mathcal{G}}}]}(\bar{g},\bar{\tilde{g}}) = \frac{1}{n}\sum_{i=1}^{n}d_{[P_{i},l_{\mathcal{G}}]}(g,\tilde{g}) = d_{[\bar{P},l_{\mathcal{G}}]}(g,\tilde{g}) \leq \varepsilon.\]
**Theorem 5.6**.: _(C.10) For the structure_
\[X^{n}\stackrel{{\mathcal{F}^{n}}}{{\longmapsto}}V^{n}\stackrel{{ \mathcal{G}}}{{\longmapsto}}A^{n}\]
_a loss function \(l:Y\mapsto[0,M]\), and all \(\varepsilon,\varepsilon_{1},\varepsilon_{2}>0\) such that \(\varepsilon=\varepsilon_{1}+\varepsilon_{2}\),_
\[\mathcal{C}(\varepsilon,l_{\mathcal{G}\circ\mathcal{F}^{n}})\leq\mathcal{C}(\varepsilon_{1},l_{\mathcal{G}})\,\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon_{2},\mathcal{F})^{n}\]
Proof.: \[\mathcal{C}(\varepsilon,l_{\mathcal{G}\circ\mathcal{F}^{n}})\leq\mathcal{C}(\varepsilon_{1},l_{\mathcal{G}})\,\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon_{2},\mathcal{F}^{n})\leq\mathcal{C}(\varepsilon_{1},l_{\mathcal{G}})\,\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon_{2},\mathcal{F})^{n},\]

where the first inequality uses (17) and the second follows from (18) and (20).
**Theorem 5.7**.: _Let \(\nu>0\), \(0<\alpha<1\), be fixed and \(\varepsilon_{1},\varepsilon_{2}>0\) such that \(\varepsilon_{1}+\varepsilon_{2}=\frac{\alpha\nu}{8}\). For \(0<\delta<1\) and the structure_
\[X^{n}\stackrel{{\mathcal{F}^{n}}}{{\mapsto}}V^{n}\stackrel{{ \mathcal{G}}}{{\mapsto}}A^{n}\]
_and let \(\mathbf{z}\in Z^{(m,n)}\) be generated by \(m>\frac{8M}{\alpha^{2}\nu}\left[\ln(\mathcal{C}(\varepsilon_{1},\mathcal{F}))+\frac{1}{n}\ln\frac{4\mathcal{C}(\varepsilon_{2},l_{\mathcal{G}})}{\delta}\right]\) independent samples. Then_
\[\Pr\left\{\mathbf{z}\in Z^{(m,n)}:\exists\bar{g}\circ\vec{f}\in\vec{\mathcal{G}}\circ\mathcal{F}^{n}:d_{\nu}(\langle l_{\bar{g}\circ\vec{f}}\rangle_{\mathbf{z}},\langle l_{\bar{g}\circ\vec{f}}\rangle_{\vec{P}})>\alpha\right\}\leq\delta \tag{21}\]
Proof.: Using theorem 5.2 we have
\[\Pr\left\{\mathbf{z}\in Z^{(m,n)}:\exists\bar{g}\circ\vec{f}\in\vec{\mathcal{G}}\circ\mathcal{F}^{n}:d_{\nu}(\langle l_{\bar{g}\circ\vec{f}}\rangle_{\mathbf{z}},\langle l_{\bar{g}\circ\vec{f}}\rangle_{\vec{P}})>\alpha\right\}\leq 4\mathcal{C}(\alpha\nu/8,l_{\vec{\mathcal{G}}\circ\mathcal{F}^{n}})e^{-\frac{\alpha^{2}\nu nm}{8M}}\]
and using theorem 5.6 we obtain
\[4\mathcal{C}(\alpha\nu/8,l_{\vec{\mathcal{G}}\circ\mathcal{F}^{n}}) \leq 4\,\mathcal{C}_{l_{\mathcal{G}}}(\varepsilon_{1},\mathcal{F})^{n}\,\mathcal{C}(\varepsilon_{2},l_{\mathcal{G}}) \tag{22}\]
and a simple calculation proves the statement.
|
2307.12435 | A Generalized Schwarz-type Non-overlapping Domain Decomposition Method
using Physics-constrained Neural Networks | We present a meshless Schwarz-type non-overlapping domain decomposition
method based on artificial neural networks for solving forward and inverse
problems involving partial differential equations (PDEs). To ensure the
consistency of solutions across neighboring subdomains, we adopt a generalized
Robin-type interface condition, assigning unique Robin parameters to each
subdomain. These subdomain-specific Robin parameters are learned to minimize
the mismatch on the Robin interface condition, facilitating efficient
information exchange during training. Our method is applicable to both the
Laplace's and Helmholtz equations. It represents local solutions by an
independent neural network model which is trained to minimize the loss on the
governing PDE while strictly enforcing boundary and interface conditions
through an augmented Lagrangian formalism. A key strength of our method lies in
its ability to learn a Robin parameter for each subdomain, thereby enhancing
information exchange with its neighboring subdomains. We observe that the
learned Robin parameters adapt to the local behavior of the solution, domain
partitioning and subdomain location relative to the overall domain. Extensive
experiments on forward and inverse problems, including one-way and two-way
decompositions with crosspoints, demonstrate the versatility and performance of
our proposed approach. | Shamsulhaq Basir, Inanc Senocak | 2023-07-23T21:18:04Z | http://arxiv.org/abs/2307.12435v1 | A Generalized Schwarz-type Non-overlapping Domain Decomposition Method using Physics-constrained Neural Networks
###### Abstract
We present a meshless Schwarz-type non-overlapping domain decomposition method based on artificial neural networks for solving forward and inverse problems involving partial differential equations (PDEs). To ensure the consistency of solutions across neighboring subdomains, we adopt a generalized Robin-type interface condition, assigning unique Robin parameters to each subdomain. These subdomain-specific Robin parameters are learned to minimize the mismatch on the Robin interface condition, facilitating efficient information exchange during training. Our method is applicable to both the Laplace's and Helmholtz equations. It represents local solutions by an independent neural network model which is trained to minimize the loss on the governing PDE while strictly enforcing boundary and interface conditions through an augmented Lagrangian formalism. A key strength of our method lies in its ability to learn a Robin parameter for each subdomain, thereby enhancing information exchange with its neighboring subdomains. We observe that the learned Robin parameters adapt to the local behavior of the solution, domain partitioning and subdomain location relative to the overall domain. Extensive experiments on forward and inverse problems, including one-way and two-way decompositions with crosspoints, demonstrate the versatility and performance of our proposed approach.
**Keywords:** Augmented Lagrangian method, constrained optimization, domain decomposition, physics-informed neural networks
## 1 Introduction
Deep learning with artificial neural networks (ANNs) has transformed many fields of science and engineering. The functional expressivity of ANNs was established by universal approximation theory. Since then ANNs have emerged as a meshless method to solve partial differential equations (PDEs) for both forward and inverse problems [1, 2, 3, 4]. With the introduction of easily accessible software tools for auto-differentiation and optimization, the use of ANNs to solve PDEs has grown rapidly in recent years as physics-informed neural networks (PINNs) [5, 6]. Numerous works have been published since the introduction of the PINN framework to address the shortcomings of the framework as well as expand it with different features such as uncertainty quantification.
PINNs offer several advantages over conventional numerical methods such as the finite element and volume methods when applied to data-driven modeling, and inverse and parameter estimation problems. Unlike conventional numerical methods that have been developed and advanced over several decades as predictive-science techniques for challenging problems, PINNs have thus far been mostly applied to two-dimensional problems. Several issues stand in the way of extending PINNs to large, three-dimensional, multi-physics problems, including difficulties with nonlinear non-convex optimization, respecting conservation laws strictly, and long training times. In the present work, we focus on the
application of domain decomposition methods (DDM) to PINNs, which are motivated by solving forward and inverse problems that can be computationally large and may involve multiple physics.
Domain decomposition has become an essential strategy for solving complex PDE problems that are too large to be solved on a single computer or that have complex geometries with multiple physics [7]. Domain decomposition methods can be constructed as overlapping or non-overlapping, as shown in Fig. 1a and Fig. 1b, respectively. There are different types of domain decomposition techniques; a detailed discussion of these methods can be found in textbooks written on the subject matter [8; 9; 7]. In the present study, we will focus on Schwarz-type methods, specifically optimized Schwarz methods [10; 11]. Most of the modern developments in DDM have taken place with conventional numerical methods such as finite element and volume methods in mind. On the other hand, domain decomposition in the context of PINNs is a new and active research area that has been the subject of several recent works. Li et al. [12] proposed an overlapping domain decomposition method based on the DeepRitz method [13], which is an alternative formulation of PINNs for learning the solution of PDEs. In their approach, the alternating Schwarz method with a Dirichlet-type overlapping interface condition was used and the arising loss term was incorporated into the objective function of the DeepRitz method. Similar to the work of [12], Li et al. [14] solved Poisson's equation on overlapping decomposed domains with a complex interface using the baseline PINN approach. In their approach, a classical alternating Schwarz-type method was used as well, and the loss term arising from satisfying the interface conditions was added to the PINN's objective function in a composite fashion along with the loss terms arising from the residual forms of the boundary conditions and the PDE. These works [12; 14] demonstrated the feasibility of using Schwarz-type domain decomposition methods in the context of PINNs. Recently, Dolean et al. [15] introduced finite basis physics-informed neural networks (FBPINNs) for solving PDEs on overlapping subdomains. Specifically, the authors pursued a Schwarz-type domain decomposition approach in which a PINN model is trained for each subdomain. However, to improve the accuracy of the local solutions, the authors also trained a neural network for the entire domain to serve as a coarse correction.
Jagtap et al. [16] decomposed a spatial domain into smaller subdomains and used the baseline PINN method to learn the solution of a PDE on the whole domain. A separate neural network was adopted in each subdomain, and flux continuity across subdomain interfaces was enforced in strong form. The average value of the solution between two subdomains sharing an interface was also enforced as an additional condition. Since the neural network models associated with the subdomains exchange information at each epoch, the method is not strictly a Schwarz-type domain decomposition method. In the spirit of the baseline PINNs, loss terms arising from the flux continuity across the subdomain interfaces are lumped into a single composite objective function with tunable weights. In a follow-up work, Jagtap and Karniadakis [17] extended the work presented in [16] to include the time domain. Furthermore, in this follow-up work, the interface conditions were simplified to make the method applicable to PDEs that may not represent conservation laws. A parallel implementation of these works is presented in [18], showing decent scalability and speedup.
Clearly, domain decomposition in the context of scientific machine learning or physics-informed neural networks is a growing area of focus. Success on this front is expected to enable neural networks to tackle larger problems or reduce training times substantially. Additionally, empirical evidence shows that training separate neural networks on smaller domains is much more feasible and likely to converge than training a single neural network on a large domain with many points. In what follows, we present the theory behind the optimized Schwarz methods [11] and our physics- and equality-constrained artificial neural networks (PECANN) for solving forward and inverse PDE problems [19]. We then propose a non-overlapping generalized Schwarz-type domain decomposition method with a Robin-type interface condition with learnable, subdomain-specific parameters for our PECANN framework. We then apply the resulting method to learn the solution of forward and inverse PDE problems using decomposed domains with increasing complexity.
Figure 1: Domain decomposition types: (a) overlapping subdomains, (b) non-overlapping subdomains.
## 2 Technical Background
The earliest example of an overlapping domain decomposition method is the work of Schwarz [20], which later became known as the alternating Schwarz method (ASM). The multiplicative Schwarz method (MSM) is a generalization of ASM. It involves solving a PDE on the first subdomain and then on the second subdomain sequentially. Values from neighboring subdomains at the most current iteration are used as interface conditions during each solve. A drawback of MSM is that it is not amenable to parallelization because of its sequential nature. The additive Schwarz method [21] is a slight modification of the MSM that enables parallel computations by solving the problem on both subdomains concurrently using the information from the previous iteration [8]. However, these aforementioned variants of the Schwarz method are computationally slow and do not converge for non-overlapping subdomains [22]. Furthermore, these methods do not converge for acoustic problems despite overlapping subdomains [11]. Lions [23] proposed replacing the Dirichlet boundary conditions in ASM for Laplace's equation with Robin boundary conditions, whereas for the Helmholtz equation, Després [24] proposed radiation conditions. These modifications extended ASM to non-overlapping domains while remaining applicable to overlapping domains. Japhet [10] optimized the transmission conditions to achieve faster convergence, which has become known as the optimized Schwarz method (OSM) [11]. Although optimal conditions leading to the best convergence in OSM are tied to the Steklov-Poincaré operator [25], their non-local nature makes them challenging to implement efficiently in numerical simulations [26]. Hence, the alternative approach involves approximating the optimal conditions using local operators, which can then be fine-tuned to enhance the convergence of OSM [23].
There are other domain decomposition methods besides the Schwarz-type methods. For instance, substructuring algorithms such as balancing domain decomposition (BDD) methods [27] and finite element tearing and interconnect (FETI) [28] are domain decomposition methods for solving the system of linear equations arising from a finite element discretization. For BDD, FETI, and other types of domain decomposition methods, we refer the reader to textbooks dedicated to the subject matter [9; 8; 7]. Heinlein et al. [29] provide a review of recent FETI-type domain decomposition methods in the context of machine learning.
### Optimized Schwarz method
Our proposed domain decomposition method has important parallels with the optimized Schwarz methods, but it also differs from OSM in unique ways. Therefore, we briefly explain OSM and discuss some of the key works on it.
Let us consider a typical second-order elliptic PDE on two subdomains for demonstration purposes. On the first subdomain, we consider
\[\begin{split}-\Delta u_{1}^{n+1}&=s_{1}\qquad\qquad\qquad\qquad\qquad\text{in}\quad\Omega_{1},\\ u_{1}^{n+1}&=0\qquad\qquad\qquad\qquad\qquad\text{on}\quad\partial\Omega_{1}\cap\partial\Omega,\\ (\mathcal{A}_{1}+\beta_{1}\frac{\partial}{\partial\mathbf{n_{1}}})u_{1}^{n+1}&=(\mathcal{A}_{1}+\beta_{1}\frac{\partial}{\partial\mathbf{n_{1}}})u_{2}^{n}\quad\text{on}\quad\Gamma_{1},\end{split} \tag{1}\]
and on the second subdomain
\[\begin{split}-\Delta u_{2}^{n+1}&=s_{2}\qquad\qquad\qquad\qquad\qquad\text{in}\quad\Omega_{2},\\ u_{2}^{n+1}&=0\qquad\qquad\qquad\qquad\qquad\text{on}\quad\partial\Omega_{2}\cap\partial\Omega,\\ (\mathcal{A}_{2}+\beta_{2}\frac{\partial}{\partial\mathbf{n_{2}}})u_{2}^{n+1}&=(\mathcal{A}_{2}+\beta_{2}\frac{\partial}{\partial\mathbf{n_{2}}})u_{1}^{n}\quad\text{on}\quad\Gamma_{2},\end{split} \tag{2}\]
where \(\mathbf{n_{1}}\) and \(\mathbf{n_{2}}\) are the outward normal directions on the subdomain boundaries of \(\Omega_{1}\) and \(\Omega_{2}\), respectively. \(\Gamma_{1}\) and \(\Gamma_{2}\) represents the subdomain interfaces corresponding to \(\Omega_{1}\) and \(\Omega_{2}\), respectively. In the case of a non-overlapping domain decomposition \(\Gamma_{1}\) and \(\Gamma_{2}\) are identical. \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are operators that act along the interfaces \(\Gamma_{1}\) and \(\Gamma_{2}\), respectively. \(\beta_{1}\) and \(\beta_{2}\) are real valued functions. With \(\beta_{1}=\beta_{2}=0\) and \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) being identity operators, the original Schwarz method is recovered. As a remedy to the drawbacks of classical Schwarz methods (MSM, ASM), Lions [30] proposed to replace Dirichlet interface conditions with Robin interface conditions with a tunable parameter \(\alpha\). In the above interface formulation, we see that with \(\beta_{1}=\beta_{2}=1\) and \(\mathcal{A}_{1}=\mathcal{A}_{2}=\alpha\), where \(\alpha>0.0\), we recover the Robin interface conditions proposed by Lions [30].
The essence of optimized Schwarz method (OSM) [11] is to determine optimal operators \(\mathcal{A}\) and the parameter \(\beta\) such that the convergence rate of the Schwarz algorithm is minimized. This is often achieved by theoretically deriving an expression for the convergence rate for a representative problem with a simple decomposition (e.g. two subdomains) and optimizing the interface parameters with respect to that convergence rate. The extension of this approach to complex domains with challenging decompositions with many subdomains is admittedly a formidable task. However,
numerical experiments have shown that optimal interface conditions, once derived from canonical problems, can be used in complex problems with a general decomposition, as shown in several works [31, 32, 33, 26]. We should note that the parameters (i.e., \(\beta\)) used in the Robin transmission conditions do not have to be the same for each subdomain. For instance, Gander et al. [32] proposed a two-sided Robin condition for the Helmholtz equation on non-overlapping domains in which different parameters were adopted in the Robin transmission conditions of each subdomain. Gander et al. [32] attained better convergence rates with the two-sided Robin condition compared to using the same parameters in the Robin transmission condition.
### Physics and Equality Constrained Artificial Neural Networks
In this section, we present our recently developed physics and equality constrained artificial neural networks (PECANN) as a meshless neural network based solver for forward and inverse PDE problems [34]. We will then introduce a generalized Schwarz-type domain decomposition method with a Robin interface condition and extend our PECANN framework to solve forward and inverse PDE problems with domain decomposition to enable distributed learning.
Let us consider a general constrained optimization problem with equality constraints
\[\min_{\theta}\mathcal{J}(\theta),\quad\text{ such that }\quad\mathcal{C}_{i}( \theta)=0,\quad\forall i\in\mathcal{E}, \tag{3}\]
where the objective function \(\mathcal{J}\) and the constraint functions \(\mathcal{C}_{i}\) are all smooth, real-valued functions on a subset of \(\mathbb{R}^{n}\), and \(\mathcal{E}\) is a finite set of equality constraints. We can cast the constrained optimization problem (3) into an unconstrained optimization problem using the augmented Lagrangian formalism [35, 36] as follows:
\[\max_{\lambda}\min_{\theta}\mathcal{L}(\theta,\lambda;\mu)= \mathcal{J}(\theta)+\sum_{i\in\mathcal{E}}\lambda_{i}\mathcal{C}_{i}(\theta)+ \frac{1}{2}\sum_{i\in\mathcal{E}}\mu_{i}\mathcal{C}_{i}^{2}(\theta), \tag{4}\]
where \(\lambda_{i}\) is a vector of Lagrange multipliers and \(\mu_{i}\) is a vector of penalty parameters. The minimization of Eq. 4 can be performed using a variant of gradient descent type optimizer for a sequence of Lagrange multipliers generated by the following adaptive update strategy proposed in Basir and Senocak [19]
\[\bar{v}_{i} \leftarrow\alpha\bar{v}_{i}+(1-\alpha)\mathcal{C}_{i}(\theta)^{2}, \forall i\in\mathcal{E}, \tag{5}\] \[\mu_{i} \leftarrow\frac{\gamma}{\sqrt{\bar{v}_{i}}+\epsilon}, \forall i\in\mathcal{E},\] (6) \[\lambda_{i} \leftarrow\lambda_{i}+\mu_{i}\mathcal{C}_{i}(\theta), \forall i\in\mathcal{E}, \tag{7}\]
where \(\bar{v}_{i}\) is a weighted moving average of the squared constraint values (i.e., the squared gradients of \(\mathcal{L}\) with respect to the Lagrange multipliers), \(\gamma\) is a scheduled global learning rate, \(\epsilon\) is a term added to the denominator to avoid division by zero for numerical stability, and \(\alpha\) is a smoothing constant.
```
Defaults: \(\gamma=1\times 10^{-2},\ \alpha=0.99,\ \epsilon=1\times 10^{-8}\)
Input: \(\theta^{0}\)
\(\lambda_{i}^{0}=1\quad\forall i\in\mathcal{E}\)    /* initialize Lagrange multipliers */
\(\mu_{i}^{0}=1\quad\forall i\in\mathcal{E}\)    /* initialize penalty parameters */
\(\bar{v}_{i}^{0}=0\quad\forall i\in\mathcal{E}\)    /* initialize averaged square-gradients */
for \(t=1\) to ... do
    \(\theta^{t}\leftarrow\operatorname*{arg\,min}_{\theta}\mathcal{L}(\theta^{t-1};\lambda^{t-1},\mu^{t-1})\)    /* primal update */
    \(\bar{v}_{i}^{t}\leftarrow\alpha\,\bar{v}_{i}^{t-1}+(1-\alpha)\,\mathcal{C}_{i}(\theta^{t})^{2},\quad\forall i\in\mathcal{E}\)    /* square-gradient update */
    \(\mu_{i}^{t}\leftarrow\frac{\gamma}{\sqrt{\bar{v}_{i}^{t}}+\epsilon},\quad\forall i\in\mathcal{E}\)    /* penalty update */
    \(\lambda_{i}^{t}\leftarrow\lambda_{i}^{t-1}+\mu_{i}^{t}\,\mathcal{C}_{i}(\theta^{t}),\quad\forall i\in\mathcal{E}\)    /* dual update */
end for
Output: \(\theta^{t}\)
```
**Algorithm 1** Adaptive Augmented Lagrangian Method
The input to the algorithm is an initialized set of parameters (i.e, \(\theta^{0}\)) associated with the neural network model representing the solution on the physical domain, a global learning rate \(\gamma\), and a smoothing constant \(\alpha\). In Algorithm 1, the Lagrange multiplier vector is initialized to \(1.0\) with their respective averaged squared-gradients initialized to zero.
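For concreteness, the following is a minimal PyTorch sketch of one outer iteration of Algorithm 1, with the argmin of the primal step approximated by a single optimizer step. The function and argument names are our own illustrative choices; `objective` and each entry of `constraints` are assumed to be callables that rebuild their scalar loss values from the current model output.

```python
import torch

def adaptive_al_step(objective, constraints, opt, lam, mu, v_bar,
                     gamma=1e-2, alpha=0.99, eps=1e-8):
    """One primal/dual iteration of the adaptive augmented Lagrangian method.

    lam, mu, v_bar are 1-D tensors (one entry per constraint), initialized
    to 1, 1, and 0, respectively, as in Algorithm 1; updated in place."""
    # Primal update: minimize the augmented Lagrangian of Eq. (4) w.r.t. theta.
    opt.zero_grad()
    c = torch.stack([ci() for ci in constraints])
    loss = objective() + torch.sum(lam * c) + 0.5 * torch.sum(mu * c ** 2)
    loss.backward()
    opt.step()
    # Dual update: refresh square-gradient averages, penalties, multipliers.
    with torch.no_grad():
        c = torch.stack([ci() for ci in constraints])
        v_bar.mul_(alpha).add_((1.0 - alpha) * c ** 2)
        mu.copy_(gamma / (torch.sqrt(v_bar) + eps))
        lam.add_(mu * c)
    return loss.item()
```

Calling this function once per epoch with, e.g., `opt = torch.optim.Adam(model.parameters())` reproduces the primal and dual updates of Algorithm 1.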
We have chosen to employ our PECANN framework due to its inherent strength in formulating and solving forward/inverse PDE problems with given constraints. PECANNs excel in this regard by formulating a constrained optimization problem based on a given PDE, and then utilizing an adaptive augmented Lagrangian method to create an equivalent dual unconstrained optimization formulation that is suitable for neural networks. This unique approach enables PECANNs to effectively address learning problems with constraints. Unlike other methods that rely on heuristics to balance the interplay between objective functions [37; 38], PECANNs provide a more robust and principled approach. By leveraging the augmented Lagrangian formulation, PECANNs offer a general and systematic approach for incorporating constraints into the learning process, enhancing the overall effectiveness and reliability of the method.
## 3 Proposed Domain Decomposition Method
In this section, we aim to develop a generalized Schwarz-type domain decomposition method that facilitates distributed learning of both forward and inverse PDE problems using artificial neural networks. To achieve this, we adopt our PECANN framework as a solver for each subdomain. Notably, we consider a generalized Robin-type interface transmission conditions as an additional constraint on the solution of each subdomain. By incorporating these transmission conditions, we enhance the accuracy and consistency of the learned solutions across the entire domain. This approach allows us to effectively address complex problems by decomposing them into smaller, more manageable subdomains, while ensuring the continuity and compatibility of the solutions at the interfaces. Through the utilization of the PECANN framework and the incorporation of interface transmission conditions, we aim to provide a robust and efficient method for distributed learning of PDE problems.
Optimized Schwarz methods have established the benefits of using Robin type interface conditions with optimized parameters as opposed to adopting purely Dirichlet or Neumann type interface conditions. In our proposed approach, we adopt a generalized interface transmission condition using a convex combination of Neumann and Dirichlet conditions. However, one of the aspects of our proposed approach that distinguishes it from optimized Schwarz methods is that, in our method, the parameters of the interface conditions are inferred as part of the PECANN framework and not prescribed as done in optimized Schwarz methods. As we discuss in section 2.1, in optimized Schwarz methods, the optimal parameters are derived from canonical problems with a simple decomposition by minimizing the convergence rate. These parameters are then used in complex problems. Another distinguishing aspect of our work is that we pursue a non-overlapping decomposition to tackle both Laplace and Helmholtz equations in a unified fashion, whereas in optimized Schwarz methods, separate transmission conditions are used for Laplace and Helmholtz equations [31; 32; 39].
For ease of presentation, we split the domain \(\Omega\) into subdomains \(\Omega_{1}\) and \(\Omega_{2}\) sharing the common interface \(\Gamma\). We adopt the following absorbing boundary conditions [40; 9] as a generalized Schwarz alternating method. Note that the Robin type interface condition is a convex combination of Dirichlet and Neumann conditions with parameters to be learned. For the first subdomain \(\Omega_{1}\) we have
\[\begin{split}-\Delta u_{1}^{n+1}&=s_{1}\qquad\qquad\qquad\qquad\qquad\qquad\text{in}\quad\Omega_{1},\\ u_{1}^{n+1}&=0\qquad\qquad\qquad\qquad\qquad\qquad\text{on}\quad\partial\Omega_{1}\cap\partial\Omega,\\ \alpha_{1}u_{1}^{n+1}+(1-\alpha_{1})\frac{\partial u_{1}^{n+1}}{\partial\mathbf{n_{1}}}&=\alpha_{1}u_{2}^{n}+(1-\alpha_{1})\frac{\partial u_{2}^{n}}{\partial\mathbf{n_{1}}}\quad\text{on}\quad\Gamma,\end{split} \tag{8}\]
and for the second subdomain \(\Omega_{2}\) we have
\[\begin{split}-\Delta u_{2}^{n+1}&=s_{2}\qquad\qquad\qquad\qquad\qquad\qquad\text{in}\quad\Omega_{2},\\ u_{2}^{n+1}&=0\qquad\qquad\qquad\qquad\qquad\qquad\text{on}\quad\partial\Omega_{2}\cap\partial\Omega,\\ \alpha_{2}u_{2}^{n+1}+(1-\alpha_{2})\frac{\partial u_{2}^{n+1}}{\partial\mathbf{n_{2}}}&=\alpha_{2}u_{1}^{n}+(1-\alpha_{2})\frac{\partial u_{1}^{n}}{\partial\mathbf{n_{2}}}\quad\text{on}\quad\Gamma,\end{split} \tag{9}\]
where \(\alpha_{1}>0\) and \(\alpha_{2}>0\) are "learnable" scalar parameters of the transmission conditions for subdomains \(\Omega_{1}\) and \(\Omega_{2}\), respectively. We initialize \(\alpha_{1}^{0}=\alpha_{2}^{0}=1/2\) so as not to favor either the Neumann or the Dirichlet condition. We propose independent parameters (i.e., \(\alpha_{1}\) and \(\alpha_{2}\)) for each subdomain because the solution and its gradient may change significantly across the subdomains, and having the same \(\alpha\) for all subdomains may not be desirable. Therefore, as the solution improves in each subdomain, \(\alpha_{1}\) and \(\alpha_{2}\) evolve toward independent optimal values. Consequently, setting them as independent parameters enables us to readily learn these parameters for any complex problem. Equally important, through this strategy, each subdomain can exchange information across its interface while minimizing its mismatch with its neighboring subdomain. We should mention that using different parameters in transmission conditions sharing the same interface is not uncommon. For instance, Gander et al. [32] used different parameters in the Robin transmission conditions on subdomains sharing a common interface and showed that the resulting domain decomposition method performs better than using the same parameters in the Robin transmission conditions.
Next, we present our PECANN formulation with domain decomposition using the generalized Schwarz alternating method given by Eqs. 8 - 9. For ease of presentation, we split the spatial domain into two subdomains, but our method can handle multiple subdomains. In the following equations, \(\mathcal{J}_{i}(\theta_{i})\) is the objective function representing the governing partial differential equation in domain \(\Omega_{i}\), and \(\mathcal{C}_{1}(\theta_{i})\), \(\mathcal{C}_{2}(\theta_{i})\) are the expected equality constraint functions due to physical boundary conditions and interface transmission conditions, respectively. Subscript \(i\) is the subdomain index resulting from the partitioning of the domain. For the first subdomain \(\Omega_{1}\) we have
\[\mathcal{J}_{1}(\theta_{1}) :=\frac{1}{N_{\Omega_{1}}}\sum_{i=1}^{N_{\Omega_{1}}}\|\Delta u_{1}^{n+1}+s_{1}\|_{2}^{2}\quad\text{in}\quad\Omega_{1}, \tag{10}\] \[\mathcal{C}_{1}(\theta_{1}) :=\frac{1}{N_{\partial\Omega_{1}}}\sum_{i=1}^{N_{\partial\Omega_{1}}}\|u_{1}^{n+1}-g_{1}\|_{2}^{2}\quad\text{on}\quad\partial\Omega_{1}\cap\partial\Omega,\] (11) \[\mathcal{C}_{2}(\theta_{1}) :=\frac{1}{N_{\Gamma}}\sum_{i=1}^{N_{\Gamma}}\Big(\|\alpha_{1}(u_{1}^{n+1}-u_{2}^{n})\|_{2}^{2}+\Big\|(1-\alpha_{1})\Big(\frac{\partial u_{1}^{n+1}}{\partial\mathbf{n_{1}}}-\frac{\partial u_{2}^{n}}{\partial\mathbf{n_{1}}}\Big)\Big\|_{2}^{2}\Big)\quad\text{on}\quad\Gamma, \tag{12}\]
and similarly for the second subdomain \(\Omega_{2}\) we have
\[\mathcal{J}_{2}(\theta_{2}) :=\frac{1}{N_{\Omega_{2}}}\sum_{i=1}^{N_{\Omega_{2}}}\|\Delta u_{2}^{n+1}+s_{2}\|_{2}^{2}\quad\text{in}\quad\Omega_{2}, \tag{13}\] \[\mathcal{C}_{1}(\theta_{2}) :=\frac{1}{N_{\partial\Omega_{2}}}\sum_{i=1}^{N_{\partial\Omega_{2}}}\|u_{2}^{n+1}-g_{2}\|_{2}^{2}\quad\text{on}\quad\partial\Omega_{2}\cap\partial\Omega,\] (14) \[\mathcal{C}_{2}(\theta_{2}) :=\frac{1}{N_{\Gamma}}\sum_{i=1}^{N_{\Gamma}}\Big(\|\alpha_{2}(u_{2}^{n+1}-u_{1}^{n})\|_{2}^{2}+\Big\|(1-\alpha_{2})\Big(\frac{\partial u_{2}^{n+1}}{\partial\mathbf{n_{2}}}-\frac{\partial u_{1}^{n}}{\partial\mathbf{n_{2}}}\Big)\Big\|_{2}^{2}\Big)\quad\text{on}\quad\Gamma. \tag{15}\]
The unconstrained objective function (i.e. augmented Lagrangian) for each subdomain is then formed through Eq. 4.
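To make the role of the learnable Robin parameter concrete, the snippet below is a minimal PyTorch sketch of how the interface constraint of Eqs. (12) and (15) can be evaluated; the tensor and function names are illustrative assumptions and not part of the released PECANN code.

```python
import torch

def interface_constraint(alpha, u_self, u_nbr, dudn_self, dudn_nbr):
    """Convex combination of Dirichlet and Neumann mismatches on the
    interface (cf. Eqs. 12 and 15). Both normal derivatives are taken
    along the normal of the subdomain that owns this constraint."""
    dirichlet_term = (alpha * (u_self - u_nbr)).pow(2).mean()
    neumann_term = ((1.0 - alpha) * (dudn_self - dudn_nbr)).pow(2).mean()
    return dirichlet_term + neumann_term

# The Robin parameter is registered as a trainable scalar, initialized at
# 1/2 so that neither the Dirichlet nor the Neumann condition is favored.
alpha_1 = torch.nn.Parameter(torch.tensor(0.5))
```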
```
Input : collocation points D, number of subdomains K,
        number of local epochs E, number of outer iterations T

for k ← 1 to K do
    initialize subdomain k and assign it a portion of the global problem
    initialize the local model for subdomain k
    initialize the Robin parameter α_k
    initialize Lagrange multipliers λ_i for each type of constraint function
    initialize penalty parameters μ_i for each type of constraint function
end

for t ← 1 to T do
    for k ← 1 to K do
        train the local model for subdomain k for E epochs independently
    end
    exchange interface information between neighboring models
    reset Lagrange multipliers for the interface constraints
end

Output : trained local models
```
**Algorithm 2** Domain Decomposition Training Procedure
Algorithm 2 is our domain decomposition training procedure for solving PDEs using deep learning. Inputs to the algorithm are the collocation points, the number of subdomains \(K\), the number of epochs \(E\) for local training, and the number of outer iterations \(T\) for the DDM. The algorithm initializes each subdomain \(k\) with a portion of the global problem, a local model, a Robin parameter \(\alpha_{k}\), a vector of Lagrange multipliers, and penalty parameters. It then trains each local model in parallel and exchanges interface information between neighboring models. The interface Lagrange multipliers are reset at each outer iteration. The output of the algorithm is a set of trained local models. The main idea is to divide the global problem into subdomains and solve each subdomain separately, exchanging information at the end of each round of local training. This approach allows a trade-off between communication and computation, making it suitable for distributed computing environments.
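For concreteness, the following self-contained sketch instantiates Algorithm 2 for a two-subdomain, one-dimensional Poisson problem with a manufactured solution \(u(x)=\sin(\pi x)\). For brevity it replaces the adaptive augmented Lagrangian of the PECANN framework with a plain penalty sum, so it illustrates only the control flow of the algorithm; all names are illustrative and this is not the authors' released implementation.

```python
import math
import torch

torch.manual_seed(0)

def mlp():  # small tanh network: one scalar input, one scalar output
    return torch.nn.Sequential(
        torch.nn.Linear(1, 20), torch.nn.Tanh(),
        torch.nn.Linear(20, 20), torch.nn.Tanh(),
        torch.nn.Linear(20, 1))

# Manufactured problem on [-1, 1]: u'' = s with u(x) = sin(pi x), so
# s(x) = -pi^2 sin(pi x) and u(-1) = u(1) = 0. Interface at x = 0.
def source(x):
    return -math.pi ** 2 * torch.sin(math.pi * x)

K, T, E = 2, 30, 200
nets = [mlp() for _ in range(K)]
alphas = [torch.nn.Parameter(torch.tensor(0.5)) for _ in range(K)]
opts = [torch.optim.Adam(list(nets[k].parameters()) + [alphas[k]], lr=1e-3)
        for k in range(K)]

x_int = [torch.linspace(-1, 0, 64)[:, None], torch.linspace(0, 1, 64)[:, None]]
x_bnd = [torch.tensor([[-1.0]]), torch.tensor([[1.0]])]   # physical boundaries
x_gam = torch.tensor([[0.0]])                             # shared interface

def value_and_slope(net, x):
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return u, du

def pde_residual(net, x):
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - source(x)

for t in range(T):                          # outer DDM iterations
    traces = []                             # exchange interface information
    for k in range(K):
        u, du = value_and_slope(nets[k], x_gam)
        traces.append((u.detach(), du.detach()))
    for k in range(K):                      # local training, parallelizable
        u_nbr, du_nbr = traces[1 - k]
        for epoch in range(E):
            opts[k].zero_grad()
            loss = pde_residual(nets[k], x_int[k]).pow(2).mean()
            loss = loss + nets[k](x_bnd[k]).pow(2).mean()   # u = 0 on boundary
            u, du = value_and_slope(nets[k], x_gam)
            loss = loss + (alphas[k] * (u - u_nbr)).pow(2).mean()
            loss = loss + ((1 - alphas[k]) * (du - du_nbr)).pow(2).mean()
            loss.backward()
            opts[k].step()
```

The outer loop plays the role of the \(T\) communications: the interface traces are frozen, each subdomain trains independently for \(E\) epochs, and only then is fresh interface information exchanged.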
## 4 Application to Forward PDE problems
Poisson's and Helmholtz equations have key significance in the field of domain decomposition methods. Discretization of Poisson's equation with a suitable numerical scheme creates a symmetric positive definite matrix, whereas in the case of the Helmholtz equation, which governs propagation phenomena, the resulting matrix is symmetric but not positive definite [7]. Furthermore, it has been established that the classical Schwarz method works for Poisson's equation only when the subdomains overlap, and the convergence of the method depends on the width of the overlap. For the Helmholtz equation, by contrast, the classical Schwarz method does not converge even with overlap [11]. Therefore, separate transmission conditions have been proposed to solve Poisson's and Helmholtz equations with domain decomposition.
In the following examples, we apply our proposed DDM to both the Poisson's and Helmholtz equations without any modification to demonstrate the effectiveness of our approach for physics-constrained machine learning of PDEs.
### Poisson's Equation
We consider the following Poisson's equation on the domain \(\Omega=\{(x,y)\mid-1\leq x\leq 1,-1\leq y\leq 1\}\)
\[\nabla^{2}u =s,\text{ in }\Omega, \tag{16a}\] \[u =g,\text{ on }\partial\Omega, \tag{16b}\]
where \(\nabla^{2}\) is the Laplacian operator applied to the function \(u\), \(s\) is a given source term, \(g\) is the prescribed boundary data, and \(\partial\Omega\) is the boundary of the domain \(\Omega\).
We manufacture an oscillatory solution that satisfies Eq. (16) as follows:
\[u(x,y)=\sin(\frac{\pi}{2}x-\frac{\pi}{2})\sin(\frac{\pi}{2}y-\frac{\pi}{2}), \quad\forall(x,y)\in\Omega. \tag{17}\]
The corresponding source functions \(s(x,y)\) and \(g(x,y)\) can be calculated exactly by substituting the manufactured solution into Eq. (16).
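For this particular manufactured solution, the substitution can be carried out in closed form: each sine factor contributes \(-\pi^{2}/4\) times itself under its second derivative, so the source term is simply a scaled copy of the solution, while \(g\) is the trace of \(u\) on \(\partial\Omega\):

\[s(x,y)=\nabla^{2}u(x,y)=-\frac{\pi^{2}}{2}\sin\Big(\frac{\pi}{2}x-\frac{\pi}{2}\Big)\sin\Big(\frac{\pi}{2}y-\frac{\pi}{2}\Big)=-\frac{\pi^{2}}{2}\,u(x,y).\]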
For this problem, we utilize a feed-forward neural network consisting of three hidden layers, each containing 20 neurons, for each subdomain. The neural network models have two inputs and one output and employ the hyperbolic tangent activation function. We train our local neural network models for 500 epochs before exchanging the interface information. It should be emphasized that the Poisson equation is an elliptic PDE which lacks any characteristic curves. Inefficient domain decomposition methods may require an excessive number of communications and information exchanges between neighboring subdomains to achieve convergence or satisfactory accuracy. However, this can lead to substantial communication overhead, resulting in increased computational complexity and time requirements. Additionally, excessive communication can undermine the advantages of domain decomposition, as the overall efficiency gains from parallel processing may be negated by frequent information exchanges and synchronization demands. We limit the outer iteration count to 30, which implies that only 30 communications occur between neighboring subdomains during the process of learning the global solution. To generate the necessary collocation points, we randomly select 1024 points from within each subdomain, and an additional 128 points are selected along each boundary or interface edge, only once before training.

Figure 2: Domain splitting types: (a) one-way splitting with non-overlapping subdomains, (b) two-way splitting with non-overlapping subdomains
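As a minimal sketch of the one-time collocation sampling described above (for an axis-aligned rectangular subdomain; the function name and dictionary keys are illustrative assumptions):

```python
import torch

def sample_collocation(x_lo, x_hi, y_lo, y_hi, n_interior=1024, n_edge=128):
    """Draw collocation points once before training: n_interior random
    interior points plus n_edge points along each boundary/interface edge."""
    interior = torch.stack([
        torch.empty(n_interior).uniform_(x_lo, x_hi),
        torch.empty(n_interior).uniform_(y_lo, y_hi)], dim=1)
    def edge(fixed, lo, hi, vertical):
        free = torch.empty(n_edge).uniform_(lo, hi)
        const = torch.full((n_edge,), fixed)
        return torch.stack([const, free] if vertical else [free, const], dim=1)
    edges = {
        "left": edge(x_lo, y_lo, y_hi, vertical=True),
        "right": edge(x_hi, y_lo, y_hi, vertical=True),
        "bottom": edge(y_lo, x_lo, x_hi, vertical=False),
        "top": edge(y_hi, x_lo, x_hi, vertical=False),
    }
    return interior, edges
```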
In our first experiment, we adopt a one-dimensional domain decomposition to discover the solution of Poisson's equation. We divide the global domain into four subdomains along one direction, and each subdomain shares a common face with its neighbor. We use the solution at the shared face as an interface condition for the neighboring subdomain. Our aim with this particular decomposition is to demonstrate that boundary conditions propagate across subdomains, and that middle subdomains without direct access to the physical boundaries are still informed by the imposed boundary conditions. This is crucial because it ensures that the solution of Poisson's equation remains accurate and consistent across all subdomains. Our results, presented in Figure 3, demonstrate the effectiveness of our method on a one-dimensional domain decomposition for Poisson's equation, with high accuracy and consistency across subdomains.
Table 1 shows a performance comparison of trained models with adaptive and constant Robin penalty parameter \(\alpha\), in terms of the maximum error across subdomains for two different error measures: \(\mathcal{E}_{r}(u,\hat{u})\) and \(\mathcal{E}_{\infty}(u,\hat{u})\). The results indicate that the adaptive penalty parameter outperforms the constant penalty parameter, with a significant reduction in the maximum error across subdomains for both error measures. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.6699\), \(\alpha_{2}=0.4524\), \(\alpha_{3}=0.4564\), and \(\alpha_{4}=0.6470\). Notably, \(\alpha_{1}>0.5\), indicating a focus on exchanging Dirichlet conditions, while \(\alpha_{2}<0.5\), suggesting an emphasis on matching the flux. This reveals that neighboring subdomains evolve different Robin parameter behaviors, with varying tendencies towards Dirichlet or Neumann conditions.
| Robin penalty | maximum \(\mathcal{E}_{r}(u,\hat{u})\) across subdomains | maximum \(\mathcal{E}_{\infty}(u,\hat{u})\) across subdomains |
| --- | --- | --- |
| constant \(\alpha\) | \(1.275\times 10^{-3}\) | \(1.430\times 10^{-3}\) |
| adaptive \(\alpha\) | \(\mathbf{6.245\times 10^{-4}}\) | \(\mathbf{7.451\times 10^{-4}}\) |

Table 1: Performance comparison of trained models with adaptive and constant Robin penalty parameter \(\alpha\)

Figure 3: Poisson's equation on a one-dimensional decomposed domain: (a) predicted solution on each subdomain, (b) point-wise absolute error on each subdomain

To further investigate the effectiveness of our domain decomposition method, we consider a two-dimensional Cartesian domain decomposition with four subdomains for the same Poisson's equation as in our first experiment. The primary aim is to create a cross point where subdomains meet and can communicate with each other. At this cross point, each subdomain should communicate with all the connecting subdomains to ensure the accuracy and consistency of the solution. We show that by exchanging information only between neighboring subdomains, we obtain excellent results with a two-dimensional Cartesian domain decomposition. Specifically, the boundary conditions propagate correctly in both directions, and the solution remains accurate and consistent across all subdomains, as can be seen in Figure 4. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.4997\), \(\alpha_{2}=0.5018\), \(\alpha_{3}=0.5036\), and \(\alpha_{4}=0.4949\). Notably, \(\alpha_{1}\) is slightly below 0.5, indicating a mild emphasis on the Neumann component, while \(\alpha_{2}\) is slightly above 0.5, favoring the Dirichlet component. The observed differences in the learned parameters are a consequence of the distinct physical boundary conditions and the random initialization of the local models. Despite the symmetric partitioning, the unique characteristics of each subdomain lead to divergent Robin parameters that optimize information exchange effectively.
### Poisson's Equation with a Complex Decomposition
We now consider the solution of Poisson's equation on a complex-shaped domain with a complex partitioning. The primary objective of this experiment is to demonstrate the versatility of our method, specifically its ability to handle complex subdomain partitioning and subdomains that lack direct access to the domain boundary \(\partial\Omega\). The first subdomain is represented by the region between the boundaries \(\partial\Omega\) and \(\Gamma\), where
\[\partial\Omega=\left\{(x,y)\;\middle|\;x=\rho(\theta)\cos(\theta),\;y=\rho(\theta)\sin(\theta),\;\rho(\theta)=2+\sin(2\theta)\cos(2\theta),\;0\leq\theta\leq 2\pi\right\}, \tag{18}\]

and the interface between the subdomains is

\[\Gamma=\left\{(x,y)\;\middle|\;x=\rho(\theta)\cos(\theta),\;y=\rho(\theta)\sin(\theta),\;\rho(\theta)=1+0.5\cos(4\theta)\sin(6\theta),\;0\leq\theta\leq 2\pi\right\}. \tag{19}\]
The shape of our subdomain is non-trivial, consisting of a region with a complex boundary. The second subdomain is enclosed by the boundary \(\Gamma\). Figure 5(a) provides a visual representation of the complex partitioning that we adopt in this problem. To solve the problem at hand, we employ a feed-forward neural network with two hidden layers, each containing 30 neurons, for each subdomain. The neural networks have two inputs and one output and use the hyperbolic tangent activation function. We locally train the neural network models for 50 epochs, while setting the outer iteration count to \(T=30\). To generate the necessary collocation points, we randomly select 4096 points from within each subdomain, 4096 points along the boundary \(\partial\Omega\), and 4096 points along the interface \(\Gamma\). This process is performed only once before training, and the same set of collocation points is used throughout the training process.
Figure 4: Poisson's equation on a two-dimensional Cartesian decomposed domain: (a) exact solution on each subdomain, (b) point-wise absolute error.

Figure 5 presents our results for solving Poisson's equation on the two complex subdomains. The exact global solution is displayed in panel (b), while panel (c) presents the predicted solution obtained from our neural network models. Finally, panel (d) shows the absolute error between the exact and predicted solutions. Our results indicate that our approach provides an efficient and accurate means of approximating solutions to Poisson's equation on complex subdomains. Specifically, we observe excellent agreement between the exact and predicted solutions, highlighting the effectiveness of our approach. Additionally, we have obtained learned values of \(\alpha=0.5058\) for the outer subdomain and \(\alpha=0.4059\) for the inner subdomain, showcasing the adaptive nature of our neural network models in optimizing information exchange across the interface. The Robin parameter of the outer subdomain, being larger than 0.5, suggests a focus on exchanging Dirichlet information, while the Robin parameter of the interior subdomain, being smaller than 0.5, indicates a focus on the flux. This observation demonstrates the networks' ability to tailor the information exchange to the specific requirements of each subdomain, providing valuable insight into the domain's behavior. In summary, our approach provides a promising strategy for solving Poisson's equation on complex subdomains without overlap.

Figure 5: Steady-state heat conduction in a complex domain with a complex partitioning: (a) complex geometry of the domain and the interior subdomain, (b) exact solution, (c) predicted solution on the partitioned domain, (d) point-wise absolute error on each subdomain
### Helmholtz Equation
As we discussed earlier, classical Schwarz methods fail for the Helmholtz equation with domain decomposition, even with overlapping subdomains. Furthermore, transmission conditions that work well for Laplace's equation do not readily extend to the Helmholtz equation. Therefore, solving the Helmholtz equation with domain decomposition methods presents a significant challenge. We consider the following Helmholtz equation on the domain \(\Omega=\{(x,y)\mid-1<x,y<1\}\)
\[\nabla^{2}u+k^{2}u =s,\quad\text{ in }\Omega, \tag{20}\] \[u =g,\quad\text{ on }\partial\Omega, \tag{21}\]
where \(\nabla^{2}\) is the Laplacian operator applied to the function \(u\), \(k\) is the wavenumber, and \(s\) is a given source term and \(\partial\Omega\) is the boundary of the domain \(\Omega\). The function \(u\) represents the amplitude of the wave, and the equation is typically solved subject to appropriate boundary conditions.
Following the equation presented above, we manufacture an oscillatory solution that satisfies Eq. (20) as follows:

\[u(x,y)=\sin(\pi x)\cos(\pi y/2),\quad\forall(x,y)\in\Omega, \tag{22}\]

and the corresponding source function \(s\) and boundary data \(g\) are obtained by substituting this manufactured solution into Eqs. (20) and (21).
We employ a feed-forward neural network consisting of three hidden layers, with each layer containing 20 neurons, for each subdomain. The networks have two inputs and one output and employ the hyperbolic tangent activation function. We train the neural network models locally for 500 epochs while setting the outer iteration count to 30. We generate the necessary collocation points by randomly selecting 1024 points from within each subdomain and an additional 128 points along each boundary or interface edge. This process is performed only once before training. We first illustrate the effectiveness of a one-dimensional domain decomposition, as in the Poisson case.
Figure 6 illustrates the results of solving the Helmholtz equation on a one-dimensional decomposed domain. Panel (a) shows the predicted solution obtained from the feed-forward neural network models, while panel (b) shows the absolute error between the exact and predicted solutions. The figure demonstrates the effectiveness of the approach for approximating solutions to the Helmholtz equation on a one-dimensional decomposed domain. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.5522\), \(\alpha_{2}=0.7132\), \(\alpha_{3}=0.7059\), and \(\alpha_{4}=0.5439\). It is interesting to observe that the interior subdomains have larger Robin parameters than the outer ones, indicating a stronger emphasis on matching Dirichlet values across the interior interfaces. This suggests that the neural network adapts to the local characteristics of each subdomain, weighting the two components of the transmission condition differently for better overall performance in solving the Helmholtz equation.
Figure 6: Helmholtz equation on a one-dimensional decomposed domain: (a) predicted solution on each subdomain, (b) absolute point-wise error.

Figure 7 presents the results of solving the Helmholtz equation on a two-dimensional Cartesian decomposed domain using the feed-forward neural network models. Panel (a) displays the predicted solution obtained from the local neural network models, while panel (b) shows the absolute error between the exact and predicted solutions. It is evident from the figure that the neural network models can effectively approximate the solutions to the Helmholtz equation on a two-dimensional Cartesian decomposed domain. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.7648\), \(\alpha_{2}=0.7577\), \(\alpha_{3}=0.7456\), and \(\alpha_{4}=0.5521\). Notably, all Robin parameters are larger than 0.5, placing most of the weight on the Dirichlet component while retaining a Neumann contribution in the transmission condition; this retained Neumann component is consistent with theoretical studies suggesting that incorporating Neumann information is beneficial for solving the Helmholtz equation. The networks' ability to learn and weight the two components showcases their adaptability and capability to exploit valuable information for improved accuracy and efficiency in solving the problem.
## 5 Application to Inverse Problems
One of the attractive features of physics-informed/constrained neural networks is that they excel at data-driven and inverse modeling problems. In an inverse problem, one seeks to determine the unknown parameters or properties of a physical system, such as the conductivity of a material or the distribution of a scalar field, given measurements of some quantity. Inverse problems arise in many fields of engineering. In this section, we showcase the versatility and effectiveness of our proposed Domain Decomposition Method (DDM) by applying it to solve inverse problems, akin to the forward problems, without any modifications.
Poisson's equation is expressed as

\[\nabla^{2}u =s,\quad\text{in }\Omega, \tag{23a}\] \[u =g,\quad\text{on }\partial\Omega, \tag{23b}\]
where \(\nabla^{2}\) is the Laplacian operator applied to the function \(u\), and \(s\) is a given source term. For the inverse Poisson's equation, we use a feed-forward neural network with three hidden layers, each containing 20 neurons for each subdomain. The network has two inputs and one output and employs the tangent hyperbolic activation function. We train the neural network models locally for 500 epochs while setting the outer iteration count to 30. To generate the necessary collocation points, we randomly select 1024 points from within each subdomain, and an additional 128 points are selected along each boundary or interface edge. This process is performed only once before training.
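In the inverse setting, sparse measurements inside a subdomain enter the local constrained optimization problem in the same way as the boundary and interface conditions; a minimal sketch of such a data constraint (tensor names are illustrative):

```python
def data_constraint(net, x_meas, u_meas):
    """Equality constraint tying the local network to measurement data;
    it is treated like the boundary/interface constraints in the loss."""
    return (net(x_meas) - u_meas).pow(2).mean()
```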
In the context of inverse problems, we consider two different cases. The first case (Case 1) involves a two-dimensional Cartesian decomposition where one of the subdomains lacks physical boundary conditions but has measurement data available. In this case, we aim to demonstrate that the global solution can be learned in that subdomain without any information on the physical boundary conditions, by using the information at the interfaces as interface conditions.
Figure 7: Helmholtz equation on a two-dimensional Cartesian decomposed domain: (a) predicted solution on subdomains, (b) absolute point-wise error

Figure 8: Inverse Poisson's equation, case one: (a) boundary data (blue) and synthetic measurement data (magenta), (b) predicted solution on subdomains, (c) absolute point-wise error

Figure 8 presents the results of the first case of the inverse problem governed by Poisson's equation. Panel (a) shows the available measurement data and the subdomains with known boundary conditions; the bottom-right subdomain lacks a boundary condition. Panel (b) shows the predicted solution obtained from the local neural network models, and panel (c) shows the absolute error between the predicted and true solutions. The figure demonstrates that our models can accurately predict the solution in the bottom-right subdomain using the available measurement data. Thus, this approach can effectively solve inverse problems in cases where boundary conditions are missing but measurement data is available in the subdomain. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.6667\), \(\alpha_{2}=0.5090\), \(\alpha_{3}=0.5117\), and \(\alpha_{4}=0.5395\). It is interesting to observe that \(\alpha_{2}\) and \(\alpha_{3}\) remain close to 0.5, indicating a nearly equal focus on matching fluxes and Dirichlet values in those subdomains.

Figure 9: Inverse Poisson's equation, case two: (a) boundary data (blue) and synthetic measurement data (magenta), (b) predicted solution on subdomains, (c) absolute point-wise error
In the second case (Case 2), we explore whether the global solution can be reconstructed using only limited measurement data within a subdomain, while the majority of the available information is not directly accessible by that subdomain. Figure 9 illustrates the results of solving the inverse Poisson's equation in case two. Panel (a) shows the distribution of the labeled data, panel (b) shows the predicted solution, and panel (c) shows the absolute error between the predicted and exact solutions. The figure demonstrates that the trained neural network models can accurately discover the solution within the subdomain despite the limited number of available measurements. The final learned Robin parameters for the subdomains are: \(\alpha_{1}=0.8173\), \(\alpha_{2}=0.5083\), \(\alpha_{3}=0.5084\), and \(\alpha_{4}=0.5291\). Notably, \(\alpha_{1}\), which corresponds to the bottom-left subdomain, has the largest Robin parameter among all the subdomains, suggesting a distinct focus on matching Dirichlet conditions in that region. Overall, these optimized Robin parameters reflect how effectively the learning process captures the behavior of the subdomains, and the prominence of \(\alpha_{1}\) emphasizes the significance of Dirichlet boundary conditions in the corresponding region. This improved understanding of the subdomains' characteristics can be valuable for further enhancing the performance and accuracy of the model in relevant applications.
## 6 Conclusion
Domain decomposition methods are needed to extend physics-informed/constrained machine learning methods to solve large-scale problems involving PDEs. In this work, we presented a generalized Schwarz-type domain decomposition method with a Robin-type interface condition to solve forward and inverse PDE problems using physics-informed/constrained neural networks on non-overlapping subdomains. The proposed Robin-type interface condition is a convex combination of Dirichlet and Neumann type interface conditions with a subdomain-specific parameter that we infer or learn as part of the overall solution method. Specifically, we use our previously developed physics and equality constrained artificial neural networks (PECANN) framework [34, 19] to formulate a constrained optimization problem for every local subdomain, in which the boundary and subdomain interface conditions act as equality constraints on the PDE solution within the subdomain. The local constrained optimization formulation is then recast as a dual unconstrained optimization problem using an adaptive augmented Lagrangian method. In our approach, we train a neural network model for each subdomain independently while exchanging information between subdomains through the Robin-type interface condition and discovering its subdomain-specific parameter as part of the training. Although the interface parameter is discovered as part of the optimization procedure in our approach, our proposed DDM differs from the so-called optimized Schwarz methods, in which interface parameters are optimized with respect to the convergence rate of the method.
We have demonstrated the performance and versatility of our method on several forward and inverse problems with various domain partitioning strategies, including complex ones. A noteworthy strength of our proposed DDM coupled with our existing PECANN framework is that it can learn the solution of both the Laplace and Helmholtz equations with the same transmission conditions and without resorting to any ad-hoc tuning strategies in the neural network model. All the codes accompanying the present work are available as open-source software at [https://github.com/HiPerSimLab/PECANN/DDM](https://github.com/HiPerSimLab/PECANN/DDM).
## 7 Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 1953204 and in part by the University of Pittsburgh Center for Research Computing through the resources provided.
|
2306.08744 | High-performance deep spiking neural networks with 0.3 spikes per neuron | Communication by rare, binary spikes is a key factor for the energy
efficiency of biological brains. However, it is harder to train
biologically-inspired spiking neural networks (SNNs) than artificial neural
networks (ANNs). This is puzzling given that theoretical results provide exact
mapping algorithms from ANNs to SNNs with time-to-first-spike (TTFS) coding. In
this paper we analyze in theory and simulation the learning dynamics of
TTFS-networks and identify a specific instance of the vanishing-or-exploding
gradient problem. While two choices of SNN mappings solve this problem at
initialization, only the one with a constant slope of the neuron membrane
potential at threshold guarantees the equivalence of the training trajectory
between SNNs and ANNs with rectified linear units. We demonstrate that training
deep SNN models achieves the exact same performance as that of ANNs, surpassing
previous SNNs on image classification datasets such as MNIST/Fashion-MNIST,
CIFAR10/CIFAR100 and PLACES365. Our SNN accomplishes high-performance
classification with less than 0.3 spikes per neuron, lending itself for an
energy-efficient implementation. We show that fine-tuning SNNs with our robust
gradient descent algorithm enables their optimization for hardware
implementations with low latency and resilience to noise and quantization. | Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner | 2023-06-14T21:01:35Z | http://arxiv.org/abs/2306.08744v2 | # Are training trajectories of deep single-spike
###### Abstract
Communication by binary and sparse spikes is a key factor for the energy efficiency of biological brains. However, training deep spiking neural networks (SNNs) with backpropagation is harder than with artificial neural networks (ANNs), which is puzzling given that recent theoretical results provide exact mapping algorithms from ReLU to time-to-first-spike (TTFS) SNNs. Building upon these results, we analyze in theory and in simulation the learning dynamics of TTFS-SNNs. Our analysis highlights that even when an SNN can be mapped exactly to a ReLU network, it cannot always be robustly trained by gradient descent. The reason for that is the emergence of a specific instance of the vanishing-or-exploding gradient problem leading to a bias in the gradient descent trajectory in comparison with the equivalent ANN. After identifying this issue we derive a generic solution for the network initialization and SNN parameterization which guarantees that the SNN can be trained as robustly as its ANN counterpart. Our theoretical findings are illustrated in practice on image classification datasets. Our method achieves the same accuracy as deep ConvNets on CIFAR10 and enables fine-tuning on the much larger PLACES365 dataset without loss of accuracy compared to the ANN. We argue that the combined perspective of conversion and fine-tuning with robust gradient descent in SNN will be decisive to optimize SNNs for hardware implementations needing low latency and resilience to noise and quantization.
## 1 Introduction
Similar to the brain, neurons in spiking neural networks (SNNs) communicate via short pulses called spikes - in striking contrast to artificial neural networks (ANNs) where neurons communicate by the exchange of real-valued signals. While ANNs are the basis of modern artificial intelligence (AI) with impressive achievements [1; 2; 3], their high performance on various tasks comes at the expense of high energy consumption [4; 5; 6]. In general, high energy consumption is a challenge in terms of sustainability or deployment in low-power edge devices [7; 8; 9]. Due to their sparse binary communication scheme, SNNs may offer a potential solution by reducing resource usage in the network [10; 11; 12; 13; 14; 15; 16], but these studies have shown that it is difficult to demonstrate working SNNs which perform at the same level as ANNs.
There exist multiple methods to train the parameters of an SNNs with various advantages and drawbacks. Traditionally, SNNs were trained with plasticity rules observed in biology [17; 18] but it appears more efficient to rely on gradient-descent optimization as done in deep learning (see [19; 20; 21; 22; 23; 24] for theoretical relationships between the plasticity rules and gradient descent). One of the most successful training paradigms for SNNs views the spiking neuron as discrete-time recurrent unit with binary activation and uses a pseudo-derivative or surrogate gradient on the backward pass while keeping the strict threshold function in the forward pass [25; 26; 27; 28; 29].
Other approaches [30, 31, 32] either translate ANN activations into SNN spike counts to train the SNN with the ANN gradients, or use temporal coding with a large number of spikes, both of which can jeopardize the energy efficiency of SNNs.
More recently, and in contrast to spike-count measures in neuroscience [33], it was found in sensory brain areas that neurons also encode information in the exact timing of the first spike, i.e. more salient information leads to earlier spikes [34, 35, 36] which in turn leads to a fast response to the stimuli [37, 38]. While it is possible to train temporally coded spiking neural networks where neurons send _multiple spikes_ to transmit information, we focus in this paper on a _time-to-first-spike_ (TTFS) coding scheme [39] where each neuron fires at most a single spike. The goal of the present study is (1) to analyze theoretically why all attempts at training SNNs with TTFS encoding run into difficulties with deep networks (beyond 4 layers) and (2) to provide a solution to these problems.
**Related work.** There is a long history of implementations of gradient descent in SNNs with TTFS. In [40] the authors used the Spike Response Model [39] and calculated backpropagation gradients with respect to spike timing and parameters. While the paper states that the learning rule contains an approximation, it turns out to be the exact gradient when the number of spikes is fixed to avoid discontinuities, i.e. spikes do not appear or disappear. This approach was rediscovered recently, extended to other neuron models [14, 41, 42, 43, 44] and applied to small machine learning datasets like MNIST and Fashion-MNIST [14, 45, 42, 44] with architectures of 4 hidden layers or less.
A different line of work avoids training altogether and converts directly an ANN into an SNN. Beyond the classic conversion techniques based on rate coding [46, 47], some studies considered the conversion from ANNs to temporally coded SNNs [48, 49, 50, 51]. While most of them relied on an inexact mapping algorithm or unconventional threshold dynamics, it was recently shown that approximation-free conversion from ANN to TTFS-SNN is possible [51]. However, these results have not shown any benefits outside of the network conversion setting, therefore it remains difficult to understand why gradient descent in the TTFS cannot be used for training or fine-tuning deep SNNs. We build upon this aforementioned work [51] to study the learning dynamics of SNNs.
**Contributions of the paper.** Our work combines the exact backpropagation update steps [41, 42, 43, 44] with an exact mapping between ANNs and SNNs [51], and goes considerably beyond the state-of-the-art with respect to the following points:
* **Even if a TTFS-SNN has an equivalent ReLU ANN, they do not necessarily follow equivalent learning trajectories.** We extend the theory from [51] to provide a reversed mapping from TTFS-SNN to an equivalent ReLU network. We propose the conditions and an adaptive SNN hyperparameter update rule, which are necessary for the equivalence to hold throughout training. Furthermore, the _linearly mappable condition_ from SNN to ANN parameters is identified as the one that guarantees that gradient descent follows the learning trajectories of the equivalent ReLU ANN.
* **Hard instance of the vanishing-gradient problem.** We identify that naively using ANN weight matrix initialization techniques [52, 53, 54] with TTFS-SNN results in a severe instance of the vanishing-gradient problem. We identify the problem analytically and provide a generic recipe to solve it and initialize TTFS-SNN efficiently.
* **Training SNNs to the state-of-the-art accuracy on large datasets.** All previous learning attempts in the TTFS setting were limited to MNIST or Fashion-MNIST datasets and networks of up to 4 layers [14, 45, 55, 43, 42]. We are the first to train TTFS-SNN on CIFAR10 [56], CIFAR100 [56] and PLACES365 [57]. As predicted by the theory, on all datasets, our TTFS-SNN achieves the exact same performance as a ReLU network with the same architecture.
* **Demonstrating the benefits of training under hardware constraints.** Since SNNs can be implemented in low-energy neuromorphic hardware, we test their robustness to quantization of spike times and weights, by fine-tuning the quantized network with our training framework. We show that the latency to decision can be reduced by a factor of four with a performance drop of less than 3 percent on CIFAR10.
## 2 Definition and properties of time-to-first-spike networks
In the following section, we will analyze the gradient descent dynamics in a TTFS setting. To avoid any approximation we follow [51] and study deep spiking neural networks consisting of neurons with triangular post-synaptic integration filters. This model should be viewed as the linearization of a more classical double-exponential filter [14; 45] where all the spikes of a layer arrive within a time window that is small compared to the two time constants of the double-exponential. As we will see in the following section, the theory becomes rigorous under this linearized approximation - we discuss the extension to a spike response model with a double-exponential post-synaptic potential filter in Appendix A.
**A time-to-first-spike (TTFS) network model.** The neurons are arranged in \(N\) hidden layers where the spikes of neurons in layer \(n\) are sent to neurons in layer \(n+1\). The layers are either fully-connected (i.e. each neuron receives input from all neurons in the previous layer) or convolutional (i.e. connections are limited to be local and share weights). All connections are feed-forward, i.e. there are no recurrent connections.
_Input layer:_ At the first layer, the analog input to the network represents, for example, the pixel intensity. We assume that the input is scaled to the interval \([0,1]\) and the values are encoded with TTFS coding (a high input pixel intensity \(x_{j}^{(0)}\) leads to an early spike at \(t_{j}^{(0)}\)):
\[t_{j}^{(0)}=\tau_{c}[1-x_{j}^{(0)}]=t_{\max}^{(0)}-\tau_{c}x_{j}^{(0)} \tag{1}\]
where spiking time \(t_{j}^{(0)}\) of neuron \(j\) in the input layer encodes the real-valued input \(x_{j}^{(0)}\in[0,1]\) and \(t_{\max}^{(0)}=\tau_{c}\) is the last possible spike time in the input layer. The conversion parameter \(\tau_{c}\) translates unit-free inputs into time units. In biology \(\tau_{c}\) in sensory areas is in the range of a few milliseconds [34; 35] whereas in hardware devices it could be in the range of microseconds or even shorter.
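As a simple illustration of Eq. (1), the snippet below encodes pixel intensities into first-spike times; the value of \(\tau_{c}\) and the random image are arbitrary stand-ins:

```python
import numpy as np

tau_c = 1.0                    # conversion constant (e.g. milliseconds)
x0 = np.random.rand(784)       # flattened input image, intensities in [0, 1]
t0 = tau_c * (1.0 - x0)        # Eq. (1): bright pixels spike early
t_max_0 = tau_c                # last possible spike time of the input layer
assert np.all((0.0 <= t0) & (t0 <= t_max_0))
```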
**Neuron dynamics.** In the hidden layers, and similarly to [51], the potential \(V_{i}(t)\) of neuron \(i\) follows integrate-and-fire dynamics. Given the spike times \(t_{j}^{(n-1)}\) of neurons \(j\) in the previous layer, the potential \(V_{i}^{(n)}\) of neuron \(i\) in layer \(n\) follows the dynamics:
\[\tau_{c}\frac{\mathrm{d}V_{i}^{(n)}}{\mathrm{d}t}=\alpha_{i}^{(n)}H(t-t_{\text {min}}^{(n)})+\sum_{j}W_{ij}^{(n)}H(t-t_{j}^{(n-1)}) \tag{2}\]
where \(t_{\min}^{(n)}\) is a constant which will become by construction a lower bound of the earliest possible spike time in layer \(n\), \(\alpha_{i}^{(n)}\) is a positive scalar which can be seen as the weight of an external spike at \(t_{\min}^{(n)}\), \(W_{ij}^{(n)}\) is the synapse strength from neuron \(j\) to neuron \(i\), and \(H\) denotes the Heaviside function, which takes a value of \(1\) for positive inputs and is \(0\) otherwise. When the potential \(V_{i}^{(n)}\) reaches the threshold \(\vartheta_{i}^{(n)}\), neuron \(i\) generates a spike at time \(t_{i}^{(n)}\) and sends it to the next layer. Once a neuron spikes, we assume a very large refractory period to ensure that every neuron spikes at most once. According to Eq. (2), each input spike \(t_{j}^{(n-1)}\) changes the _slope_ of the potential by a fixed amount proportional to \(W_{ij}^{(n)}\). Integration of Eq. (2) therefore leads to a piecewise linear behaviour of the potential, see Fig. 1. Without loss of generality, we assume \(V\) to be unit-free and so are the parameters \(W_{ij}^{(n)}\) and \(\alpha_{i}^{(n)}\), whereas \(t\) and \(\tau_{c}\) have units of time. Rescaling time by \(t\rightarrow(t/\tau_{c})\) would remove the units, but we keep it in the equations to show the role of the conversion factor \(\tau_{c}\). Note that this is the same spiking neuron model as defined in [51] with the minor modification that the ramping input of strength \(\alpha_{i}^{(n)}\) arrives at \(t_{\text{min}}^{(n)}\) and not \(t_{\text{min}}^{(n-1)}\), which will simplify our equations in the following. We initialize \(\vartheta_{i}^{(n)},t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}\) so that all the neurons of layer \(n\) spike once in the interval \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\). The threshold \(\vartheta_{i}^{(n)}\) is defined as \(\vartheta_{i}^{(n)}\stackrel{\text{def}}{=}\tilde{\vartheta}_{i}^{(n)}+D_{i}^{(n)}\) where \(D_{i}^{(n)}\) is a model parameter initialized at 0 and \(\tilde{\vartheta}_{i}^{(n)}\) is the base threshold. We define a maximum spike time \(t_{\text{max}}^{(n)}\), after which emission of a spike is forced in all neurons of layer \(n\) which have not spiked yet. The construction of \(t_{\text{min}}^{(n)}\) and \(t_{\text{max}}^{(n)}\) is recursive as \(t_{\text{min}}^{(n)}\stackrel{\text{def}}{=}t_{\text{max}}^{(n-1)}\). Details of the choice of the base threshold and \(t_{\text{max}}^{(n)}\) are given in Appendix B.

Figure 1: **Network of TTFS neurons.** **a.** A feed-forward TTFS architecture. **b.** Potential \(V_{i}^{(n)}\) for different neurons \(i\) as a function of time. The initial slope is always zero. For \(\alpha_{i}^{(n)}=1\) the slopes of all neurons \(i\) are increased by \(1\) at \(t_{\text{max}}^{(n-1)}=t_{\text{min}}^{(n)}\). **c.** For \(\alpha_{i}^{(n)}=1-\sum_{j}W_{ij}^{(n)}\) the slopes of all neurons \(i\) are equal to \(1\) for \(t\geq t_{\text{min}}^{(n)}\).
**Adaptive \(t_{\text{max}}^{(n)}\) parameters.** During training, as we update the network parameters \(W_{ij}^{(n)}\) and \(D_{i}^{(n)}\), hyperparameters like \(t_{\text{max}}^{(n)}\) need to be adjusted as well, such that the condition that all the neurons of layer \(n\) spike once in the interval \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\) remains true. We suggest a new adaptive update rule which moves \(t_{\text{max}}^{(n)}\). Note that this is an addition to the model in [51], where an adaptive \(t_{\text{max}}^{(n)}\) was not necessary since the parameters were fixed. Formally, when processing the training dataset, we update \(t_{\text{max}}^{(n)}\) as follows:
\[\Delta t_{\text{max}}^{(n)}=\begin{cases}\gamma(t_{\text{max}}^{(n)}-\text{ min}_{i,\mu}t_{i}^{(n)})-(t_{\text{max}}^{(n)}-t_{\text{min}}^{(n)}),&\text{if }t_{\text{max}}^{(n)}-t_{\text{min}}^{(n)}<\gamma(t_{\text{max}}^{(n)}-\text{ min}_{i,\mu}t_{i}^{(n)})\\ 0,&\text{otherwise}\end{cases} \tag{3}\]
The minimum operator iterates over all neurons \(i\) and input samples \(\mu\) in the batch, and \(\gamma\) is a constant. After this update, we change the subsequent time window accordingly so that \(t_{\text{min}}^{(n+1)}=t_{\text{max}}^{(n)}\), and we iterate over all layers sequentially. The base threshold \(\tilde{\vartheta}_{i}^{(n)}\) is then updated accordingly, see Appendix B. This adaptation effectively moves all the spikes \(t_{j}^{(n)}\) away from the boundary \(t_{\text{min}}^{(n)}\). For simplicity, in the theory section, we assume that this update has reached an equilibrium, so that \(t_{\text{max}}^{(n)}\), \(t_{\text{min}}^{(n)}\), and \(\tilde{\vartheta}_{i}^{(n)}\) are constants w.r.t. the SNN parameters, the condition \(t_{\text{min}}^{(n+1)}=t_{\text{max}}^{(n)}\) is always satisfied, and all the spikes of layer \(n\) arrive within \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\).
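A NumPy sketch of the update in Eq. (3); `spike_times` collects the spikes \(t_{i}^{(n)}\) over all neurons and batch samples, and the function name is an illustrative assumption:

```python
import numpy as np

def update_t_max(t_min, t_max, spike_times, gamma):
    """Eq. (3): widen the window [t_min, t_max] whenever the earliest spike
    of the batch sits too close to the start of the window."""
    target = gamma * (t_max - np.min(spike_times))
    if t_max - t_min < target:
        t_max = t_min + target    # equivalent to t_max += Delta t_max
    return t_max                  # becomes t_min of the next layer
```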
**Output layer.** The output layer has index \(N+1\) and contains non-spiking read-out neurons. Each neuron \(m\) simply integrates input spikes coming from layer \(N\) without firing. Integration of \(V_{m}^{(N+1)}\) stops at time \(t_{\text{min}}^{(N+1)}\) and the _softmax_ and the standard cross-entropy loss \(\mathcal{L}\) are calculated using the real-valued potentials, analogous to real-valued activations of ANNs.
**General reverse mapping from SNN to ANN.** Building upon the conversion method from ANN with rectified linear units (ReLUs) to TTFS networks [51], we now describe a reversed mapping strategy defining uniquely the parameters of an equivalent ANN for given SNN parameters. This mapping will be a fundamental pillar in the following theoretical analysis. In the most general case, to find a ReLU network with weights \(w\) and bias \(\mathbf{b}\) which is equivalent to our SNN model, we define:
\[B_{i}^{(n)}\stackrel{{\text{def}}}{{=}}\alpha_{i}^{(n)}+\sum_{k}W_{ ik}^{(n)}\quad\text{and}\quad\ w_{ij}^{(n)}\stackrel{{\text{def}}}{{=}} \frac{W_{ij}^{(n)}}{B_{i}^{(n)}}\quad\text{and}\quad\ b_{i}^{(n)}\stackrel{{ \text{def}}}{{=}}-\frac{\vartheta_{i}^{(n)}}{B_{i}^{(n)}}+\frac{t_{\text{ max}}^{(n)}-t_{\text{min}}^{(n)}}{\tau_{c}}. \tag{4}\]
Here, \(B_{i}^{(n)}\) has a simple interpretation: it is the slope of the potential at the moment of threshold crossing in the SNN if time is measured in units of \(\tau_{c}\) (see Eq. (2)). We call \(B_{i}^{(n)}\) the'slope-at-threshold factor' and it will play an important role in the following. Then if we define the ANN
activation at the input layer as the pixel intensity \(\mathbf{x}^{(0)}\), Eq. (4) defines uniquely a ReLU network with activations \(\mathbf{x}^{(n)}\) such that (this is the reciprocal mapping inspired by [51]):
\[\mathbf{x}^{(n)}\tau_{c}=t_{\text{max}}^{(n)}-\mathbf{t}^{(n)}. \tag{5}\]
At the output layer, we resort directly to the simpler parameter mapping of the output layer from [51]: with \(w_{ij}^{(N+1)}\stackrel{\text{def}}{=}W_{ij}^{(N+1)}\) and \(b_{i}^{(N+1)}\stackrel{\text{def}}{=}\alpha_{i}^{(N+1)}(t_{\text{max}}^{(N)}-t_{\text{min}}^{(N)})/\tau_{c}\), the logits and the cross-entropy loss \(\mathcal{L}\) are also equal in the SNN and the equivalent ANN.
Proof.: Starting from the introduced SNN definition, we compute analytically the spikes at time \(t_{i}^{(n)}\) in the SNN. In case the potential \(V_{i}^{(n)}\) reaches the threshold \(\vartheta_{i}^{(n)}\) before \(t_{\text{max}}^{(n)}\), the spiking condition \(\vartheta_{i}^{(n)}=V_{i}^{(n)}(t_{i}^{(n)})\) yields:
\[\tau_{c}\vartheta_{i}^{(n)}=\alpha_{i}^{(n)}(t_{i}^{(n)}-t_{\text{min}}^{(n)} )H(t_{i}^{(n)}-t_{\text{min}}^{(n)})+\sum_{j}W_{ij}^{(n)}(t_{i}^{(n)}-t_{j}^{(n -1)})H(t_{i}^{(n)}-t_{j}^{(n-1)}). \tag{6}\]
Since we constructed the SNN such that \(t_{i}^{(n)}\) arrives in the time window \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\), all the terms \(H(\cdot)\) are equal to \(1\) in the previous equation. So for any spike \(t_{i}^{(n)}\) arriving before \(t_{\text{max}}^{(n)}\), we have:
\[t_{i}^{(n)}=\frac{\tau_{c}\vartheta_{i}^{(n)}+\alpha_{i}^{(n)}t_{\text{min}}^{ (n)}+\sum_{j}W_{ij}^{(n)}t_{j}^{(n-1)}}{\alpha_{i}^{(n)}+\sum_{k}W_{ik}^{(n)}} =\frac{A_{i}^{(n)}}{B_{i}^{(n)}}\;. \tag{7}\]
We can already identify the slope-at-threshold \(B_{i}\) in the denominator, then we replace \(\alpha_{i}^{(n)}\) by \(B_{i}^{(n)}-\sum_{k}W_{ij}^{(n)}\) in \(A_{i}^{(n)}\) and subtract \(t_{\text{max}}^{(n)}\) on both sides:
\[t_{i}^{(n)}-t_{\text{max}}^{(n)}=\tau_{c}\frac{\vartheta_{i}^{(n)}}{B_{i}^{(n )}}+t_{\text{min}}^{(n)}-t_{\text{max}}^{(n)}+\sum_{j}\frac{W_{ij}^{(n)}}{B_{i }^{(n)}}(t_{j}^{(n-1)}-t_{\text{min}}^{(n)}). \tag{8}\]
Using this identity, and using that the rectified linear unit is in its operating regime \(x_{i}>0\) if and only if the spiking neuron \(i\) fires before \(t_{\text{max}}\), one can now prove by induction that the definition Eq. (4) defines an equivalent ReLU network satisfying the identity from Eq. (5).
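In practice, the derivation above means that the forward pass of a fully-connected spiking layer reduces to the closed form of Eq. (7). A NumPy sketch, valid under the stated assumptions that all presynaptic spikes fall inside their window and that the slope-at-threshold \(B_{i}^{(n)}\) is positive:

```python
import numpy as np

def layer_spike_times(t_prev, W, alpha, theta, tau_c, t_min, t_max):
    """Closed-form TTFS forward pass of Eq. (7); neurons that do not reach
    their threshold in time are forced to spike at t_max."""
    B = alpha + W.sum(axis=1)                  # slope at threshold
    t = (tau_c * theta + alpha * t_min + W @ t_prev) / B
    return np.minimum(t, t_max)
```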
## 3 Analysis of learning dynamics
**The linearly mappable condition.** We now define a specific choice of SNN, the so-called _linearly mappable SNN_, and we will show that it satisfies the theoretical conditions for robust SNN training via gradient descent optimization. _The linearly mappable condition_ is defined by the choice of \(\alpha_{i}^{(n)}\):
\[\alpha_{i}^{(n)}=1-\sum_{j}W_{ij}^{(n)}. \tag{9}\]
This choice implies that the slope-at-threshold \(B_{i}^{(n)}=1\), and results in the linear mapping formula:
\[w_{ij}^{(n)}\stackrel{{\text{\tiny def}}}{{=}}W_{ij}^{(n)}\quad \text{ and }\quad b_{i}^{(n)}\stackrel{{\text{\tiny def}}}{{=}}- \vartheta_{i}^{(n)}+\frac{t_{\text{max}}^{(n)}-t_{\text{min}}^{(n)}}{\tau_{c}} \tag{10}\]
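In code, the linearly mappable condition is a one-line parameterization per layer; a sketch in NumPy:

```python
import numpy as np

def linearly_mappable_alpha(W):
    """Eq. (9): choose alpha so that B_i = alpha_i + sum_j W_ij = 1, making
    the SNN-to-ANN map of Eq. (10) the identity on the weights."""
    return 1.0 - W.sum(axis=1)
```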
**Vanishing-gradient problem for deep SNNs.** Previous TTFS-SNNs with exact gradients use mostly shallow networks containing one hidden layer [12; 45; 43; 44], or at most 4 hidden layers [42]. The question arises why the exact gradient approach does not scale to larger networks. We demonstrate in this section that TTFS networks are prone to yield vanishing or exploding gradients (vanishing-gradient problem [54; 52]). To solve this problem, we show that a tight balance has to be respected between the weight initialization and the slope-at-threshold vector \(\textbf{B}^{(n)}\). This analysis will result in the definition of robust initialization schemes for TTFS networks.
The vanishing-gradient problem has been studied exhaustively in ANNs [52]. Similarly, TTFS networks are also subject to this problem. To see this, one has to observe that the network state at
layer \(n\) is summarized by the vector of spike timings \(\mathbf{t}^{(n)}\) such that the loss with respect to the weight parameter at layer \(n\) factorizes as:
\[\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\mathbf{W}^{(n)}}=\frac{\mathrm{d} \mathcal{L}}{\mathrm{d}\mathbf{V}^{(N+1)}}\frac{\mathrm{d}\mathbf{V}^{(N+1)}}{ \mathrm{d}\mathbf{t}^{(N)}}\frac{\mathrm{d}\mathbf{t}^{(N)}}{\mathrm{d} \mathbf{t}^{(N-1)}}\cdots\frac{\mathrm{d}\mathbf{t}^{(n+1)}}{\mathrm{d} \mathbf{t}^{(n)}}\frac{\mathrm{d}\mathbf{t}^{(n)}}{\mathrm{d}\mathbf{W}^{(n)}}. \tag{11}\]
where \(\mathbf{V}^{(N+1)}\) is a vector containing potentials of neurons in layer \(N+1\) at time \(t_{\text{min}}^{(N+1)}\). Hence, if the product of Jacobians \(\frac{\mathrm{d}\mathbf{t}^{(n+1)}}{\mathrm{d}\mathbf{t}^{(n)}}\) is not controlled, the amplitude of this gradient might vanish or explode exponentially fast as the number of layers becomes large. As analyzed in [52, 58], a way to solve this problem is to make sure that the largest eigenvalues of the Jacobian \(\frac{\mathrm{d}\mathbf{t}^{(n+1)}}{\mathrm{d}\mathbf{t}^{(n)}}\) are close to \(1\) in absolute value.
We now compute analytically the Jacobian of the SNN. It requires the definition of \(M_{i}^{(n)}\), which is \(1\) if and only if spike \(t_{i}^{(n)}\) arrives before \(t_{\max}^{(n)}\). We also denote with \(M^{(n)}\) the matrix containing \(M_{i}^{(n)}\) on the diagonal and \(0\) elsewhere, with \(\mathbf{t}^{(n)}\) a vector of spike times in layer \(n\), and with \(\mathrm{B}^{(n)}\) a diagonal matrix of slope-at-threshold factors. By differentiating Eq. (7) we find that the Jacobian of the network can be written as (\(\cdot\) is the matrix multiplication):
\[\frac{\mathrm{d}\mathbf{t}^{(n)}}{\mathrm{d}\mathbf{t}^{(n-1)}}=M^{(n)}\cdot \frac{1}{B^{(n)}}\cdot W^{(n)} \tag{12}\]
We can now analyze the conditions under which the vanishing-or-exploding gradient problems are solved. We observe primarily that (1) the eigenvalues of this Jacobian are strongly determined by the slope-at-threshold \(B^{(n)}\) and not only by the weight matrix \(W^{(n)}\) as in ANNs; and (2) the eigenvalues of the Jacobian of the SNN are the same as the eigenvalues of the Jacobian of the equivalent ANN. To see this, one may recall from Eq. (4) that the ANN weights are \(w^{(n)}=\frac{1}{B^{(n)}}\cdot W^{(n)}\) and that \(M_{i}^{(n)}\) is 1 if and only if the equivalent ReLU unit is in its active (non-zero) regime.
**Robust initialization of TTFS networks.** Before defining a generic recipe for initializing the weight matrix \(W^{(n)}\), we illustrate why using naively the standard deep learning recipes with SNN results in vanishing or exploding gradients. In Fig. 2, we demonstrate numerically that this naive approach faces the vanishing-gradient problem. We initialized the weight matrix of an SNN with \(W^{(n)}=\frac{1}{\sqrt{340}}\mathcal{N}(0,1)\) where \(340\) is the number of units in the layers (this is one of the many standard choices in deep learning) so the eigenvalue of \(W^{(n)}\) with largest absolute value is close to \(1\). We can use this matrix to estimate the eigenvalue spectrum of the SNN at initialization. Following classical work in the ANN literature [58, 59], we assume that \(M^{(n)}\) has a small impact on the distribution of the eigenvalues, and we can display the eigenvalue spectrum of \(w^{(n)}=\frac{1}{B^{(n)}}\cdot W^{(n)}\), which is closely related to the eigenvalue spectrum of the SNN Jacobian \(\frac{\mathrm{d}\mathbf{t}^{(n)}}{\mathrm{d}\mathbf{t}^{(n-1)}}\). As shown in Fig. 2a, this naive initialization produces multiple eigenvalues with modulus larger than \(1\) when \(\alpha_{i}^{(n)}=1\); this eigenvalue spectrum leads to an explosion of the gradient norm in backpropagation.

Our theory also provides a recipe to initialize the SNN outside of the _linearly mappable condition_. Since we know from Eq. (12) that a good SNN initialization requires the equivalent ANN to have a good initialization, we can first choose the matrix in the ANN parameter space, and map it to the SNN initialization with the inverse map of Eq. (4). This is for instance automatically corrected with the _linearly mappable condition_ in Fig. 2b since the SNN and ANN are the same, so the eigenvalues stay tightly within the unit circle, showing numerically that the vanishing-gradient problem is avoided.

Figure 2: **Eigenvalues of the SNN Jacobian under naive initialization and solution.** **a.** In the general case, when the SNN is initialized with standard deep learning initialization, the eigenvalues of the SNN spread beyond the unit circle. **b.** It is possible to correct for this using the inverse relation between \(w^{(n)}\) and \(W^{(n)}\). With the linearly mappable condition, this correction is not necessary.
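The spectrum comparison of Fig. 2 can be reproduced in a few lines; a sketch with the same layer width of 340 (the random seed and printout are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 340
W = rng.standard_normal((n, n)) / np.sqrt(n)   # standard ANN initialization

# Naive SNN with alpha_i = 1: the Jacobian behaves like diag(1/B) @ W,
# where B = 1 + row sums of W can be arbitrarily close to zero.
B = 1.0 + W.sum(axis=1)
eig_naive = np.linalg.eigvals(W / B[:, None])

# Linearly mappable condition: B_i = 1, so the SNN inherits the ANN spectrum.
eig_mapped = np.linalg.eigvals(W)

print(np.abs(eig_naive).max(), np.abs(eig_mapped).max())
```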
**Biased gradient descent trajectory with the generic mapping.** Beyond initialization, we also analyze whether the gradient descent trajectory in the SNN parameter space necessarily follows the gradient descent trajectory of the equivalent ReLU network. To describe the gradient descent trajectory of the SNN, we consider a gradient descent step with learning rate \(\eta\) when applying backpropagation to the SNN: \(\Delta W_{ij}^{(n)}=-\eta\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}W_{ij}^{(n)}}\), and compute the corresponding update in the space of the ANN parameters. We denote with \(\delta w_{ij}^{(n)}\) the update in ANN parameter space, and we use \(\mathcal{M}_{\mathbf{\alpha}}\) to denote the mapping formula such that \(w_{ij}^{(n)}=\mathcal{M}_{\mathbf{\alpha}}(W_{ij}^{(n)})\); see Eq. (4). If we assume that \(\alpha_{i}^{(n)}\) is a constant independent of \(W_{ij}^{(n)}\), we find: \(\delta w_{ij}^{(n)}=\mathcal{M}_{\mathbf{\alpha}}(W_{ij}^{(n)}-\eta\frac{d\mathcal{L}}{dW_{ij}^{(n)}})-\mathcal{M}_{\mathbf{\alpha}}(W_{ij}^{(n)})\). Assuming a small learning rate, we make a first-order approximation using the derivative \(\frac{\mathrm{d}\mathcal{M}_{\mathbf{\alpha}}}{\mathrm{d}W_{ij}^{(n)}}=\frac{\mathrm{d}w_{ij}^{(n)}}{\mathrm{d}W_{ij}^{(n)}}=\frac{B_{i}^{(n)}-W_{ij}^{(n)}}{(B_{i}^{(n)})^{2}}\), leading to the approximate update in ANN parameter space:
\[\delta w_{ij}^{(n)}\approx\frac{\mathrm{d}\mathcal{M}_{\mathbf{\alpha}}}{\mathrm{d}W_{ij}^{(n)}}\,\Delta W_{ij}^{(n)}=-\eta\,\frac{\mathrm{d}\mathcal{M}_{\mathbf{\alpha}}}{\mathrm{d}W_{ij}^{(n)}}\,\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}W_{ij}^{(n)}}=-\eta\left[\frac{\mathrm{d}w_{ij}^{(n)}}{\mathrm{d}W_{ij}^{(n)}}\right]^{2}\,\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}w_{ij}^{(n)}} \tag{13}\]
The difference between Eq. (13) and a direct ANN update obtained through gradient descent \(\delta w_{ij}^{(n)}\propto\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}w_{ij}^{(n)}}\) cannot be corrected with a different learning rate \(\eta\), because the multiplicative bias in Eq. (13) changes for every neuron pair \((i,j)\) and algorithmic iteration. We conclude that, in general, the gradient descent trajectory in the SNN is "biased", meaning that it is impossible to find a naive gradient descent trajectory in the ANN for which SNN and ANN remain equivalent from initialization to convergence. Under the linearly mappable condition, the multiplicative bias disappears since \(\frac{\mathrm{d}w_{ij}^{(n)}}{\mathrm{d}W_{ij}^{(n)}}=1\), and this is the choice made in Section 4. An alternative might be to work with specifically designed 'metrics' [61] that counterbalance the multiplicative factor in Eq. (13).
The difficulty of training an SNN with \(\alpha_{i}^{(n)}=1\) is illustrated in Fig. 3. Both SNNs (with or without the linearly mappable condition) are initialized to be equivalent to the same ReLU ANN, which solves the vanishing-gradient problem at initialization. Nevertheless, only the SNN with the linearly mappable condition follows the ANN, whereas the other one diverges away from the true ReLU trajectories after \(20\) epochs (Fig. 3b). This is true despite using a small learning rate.
Figure 3: **Learning trajectories are biased without the linearly mappable condition.****a.** Training of an 8-layer SNN on the MNIST dataset [60] with and without the linearly mappable condition. Both are initialized correctly: we make sure the eigenvalues of the Jacobian at initialization lie within the unit circle. The SNN (light blue) follows the same training curve as the equivalent ANN (red-dashed) only under the linearly mappable condition. **b.** Weights \(W^{(n)}\) of the SNN and \(w^{(n)}\) of the equivalent ANN remain the same (cosine similarity = 1) throughout training under the linearly mappable condition; otherwise \(\frac{W^{(n)}}{\mathrm{B}^{(n)}}\) matches \(w^{(n)}\) at the beginning but diverges during training.
## 4 Benchmark results
In the following we always consider SNNs initialized with the linearly mappable condition and trained with the Adam optimizer and an exponential learning rate schedule (see Appendix C for simulation details).
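A minimal sketch of this training setup, assuming PyTorch; the model is a stand-in, and the learning rate and decay factor are placeholders rather than the values from Appendix C:

```python
import torch

# Stand-in model; the actual experiments train the SNN under the linearly
# mappable condition. Hyperparameters below are illustrative placeholders.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.97)

for epoch in range(3):                         # toy loop on random data
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                           # exponential decay once per epoch
```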
**Comparison with previous TTFS training.** We report the performance on the MNIST [60] and Fashion-MNIST [62] datasets to compare with previous implementations of TTFS training paradigms. We tested a 16-layer fully-connected SNN and a ConvNet SNN (similar to LeNet5). As expected from the theory, our network achieves the same performance as the ReLU ANN, as seen in Figure 4 and Table 1. The performance is therefore better than that of all previous TTFS implementations, which were limited to \(4\) layers (see Appendix C).
**State-of-the-art TTFS performance on CIFAR100 and PLACES365.** Previously, tackling larger-scale image datasets like CIFAR100 [56] or PLACES365 [57] (comparable in scale to ImageNet) was considered impossible. We propose to combine conversion from a pre-trained VGG16 with fine-tuning in our approach to build a competitive TTFS ConvNet. In Fig. 4b we use a pre-trained VGG16 architecture downloaded from an online repository [63; 64] and map it to the SNN without loss of performance (similarly to [51]). The networks are then fine-tuned with a reduced learning rate. In Table 2 we see results for different datasets: fine-tuning increases the SNN accuracy by \(1.76\%\) on CIFAR100 and \(1.17\%\) on PLACES365. We are not aware of any TTFS method achieving higher accuracies. More interestingly, fine-tuning SNNs promises to be most useful when the SNN performance is degraded through conversion, for instance because of hardware constraints like quantization, as demonstrated in the next section.
| Dataset | ReLU Acc [%] | SNN Acc [%] |
| --- | --- | --- |
| MNIST | 99.57 ± 0.01 | 99.57 ± 0.00 |
| f-MNIST | 94.24 ± 0.02 | 94.26 ± 0.03 |
| CIFAR10 | 93.68 ± 0.02 | 93.69 ± 0.001 |

Table 1: Results when training TTFS-SNNs on MNIST, f-MNIST and CIFAR10.
| Dataset | Image size | Classes | Acc w/o FT (ReLU) [%] | Acc w/o FT (SNN) [%] | Acc w/ FT (ReLU) [%] | Acc w/ FT (SNN) [%] |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR100 | 32 × 32 × 3 | 100 | 70.48 | 70.48 | 72.23 ± 0.06 | 72.24 ± 0.06 |
| PLACES365 | 224 × 224 × 1 | 365 | 52.69 | 52.69 | 53.86 ± 0.01 | 53.86 ± 0.02 |

Table 2: VGG16 architecture before and after fine-tuning (FT), for our SNN and the equivalent ANN.
Figure 4: **Performance of the SNN under the linearly mappable condition.****a.** Training a deep SNN and a ReLU network yields the same average performance. **b.** Fine-tuning (FT) of a VGG16 SNN initialized with the weights of a pre-trained VGG16 ReLU network. Table 2 shows results after fine-tuning for both networks.
**Mitigating quantization, noise effects and reducing latency.** Let's consider a scenario in which a ReLU network was pre-trained with full-precision weights. After mapping to the SNN, it is assumed to be deployed on a device with parameter noise, limited temporal resolution or limited weight precision. We use fine-tuning of the VGG16 to recover the SNN accuracy in all these situations in Fig. 5. All the experiments are done on the CIFAR10 dataset with 10 epochs of fine-tuning. In all three cases (spike-time jitter, time-step quantization or SNN weight quantization) fine-tuning enables recovery of the performance. In particular, we demonstrate TTFS VGG16 networks achieving higher than \(90\%\) accuracy on CIFAR10 with 16 time-steps per layer or weights quantized to \(4\) bits. We also investigated whether it is possible to improve the classification latency through fine-tuning by reducing the intervals \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\) after conversion from the ANN. Doing this indeed improves latency, but the SNN diverges away from the pre-trained ANN. Through fine-tuning, performance higher than \(90\%\) test accuracy is recovered, even when the latency is improved by a factor of \(4\).
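As an illustration of the time-step quantization explored in Fig. 5b, here is a minimal sketch (our own; the interval bounds and uniform grid placement are assumptions):

```python
import numpy as np

def quantize_spike_times(t, t_min, t_max, n_steps):
    """Snap spike times in [t_min, t_max] onto a uniform grid of n_steps values."""
    grid = np.linspace(t_min, t_max, n_steps)
    idx = np.abs(t[:, None] - grid[None, :]).argmin(axis=1)  # nearest grid point
    return grid[idx]

t = np.array([0.12, 0.47, 0.88])
print(quantize_spike_times(t, 0.0, 1.0, 16))  # -> [0.1333, 0.4667, 0.8667]
```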
## 5 Discussion and future work
In this work we solved a hard instance of the vanishing-gradient problem for single-spike neural networks. Moreover, we showed that through application of the linearly mappable condition the learning trajectories of ReLU ANNs and SNNs become equivalent. Based on this result, we demonstrated, to the best of our knowledge for the first time, that training deep single-spike neural networks with sixteen layers yields identical performance to ReLU ANNs on large datasets such as CIFAR100 and PLACES365. In the future we plan to train even deeper networks and more sophisticated architectures such as ResNets, but this requires mapping skip connections to the SNN model, which is not trivial.
This work will probably be most impactful when implemented in either digital or analog SNN hardware. We are able to fine-tune the single-spike neural network to adapt the SNN to specific device constraints. Moreover, the learning can be generalized to leaky neuronal dynamics, which further addresses the imperfections in hardware elements. After downloading the pre-trained network onto the device, we envision continual online learning on chip with energy-efficient and low-latency inference. Our demonstration that gradient descent is possible in deep SNNs might in the future provide opportunities to derive a completely local, hardware-friendly training algorithm.
Figure 5: **Fine-tuning (FT) VGG16 for quantization/noise robustness or low latency.** In all cases, after only 10 epochs of fine-tuning on CIFAR10 (purple) the performance of the initially mapped network (blue) is significantly improved. **a.** Adding noise with a given standard deviation (SD) to all spiking times in the network. **b.** Quantizing spiking times in the network to a given number of time steps per layer. **c.** Representing all weights \(W_{ij}^{(n)}\) with a given number of bits. **d.** Reducing the latency by reducing the size of \([t_{\text{min}}^{(n)},t_{\text{max}}^{(n)}]\).
2303.08818 | Boosting Convolutional Neural Networks' Protein Binding Site Prediction
Capacity Using SE(3)-invariant transformers, Transfer Learning and
Homology-based Augmentation | Figuring out small molecule binding sites in target proteins, in the
resolution of either pocket or residue, is critical in many virtual and real
drug-discovery scenarios. Since it is not always easy to find such binding
sites based on domain knowledge or traditional methods, different deep learning
methods that predict binding sites out of protein structures have been
developed in recent years. Here we present a new such deep learning algorithm,
that significantly outperformed all state-of-the-art baselines in terms of
both resolutions – pocket and residue. This good performance was
also demonstrated in a case study involving the protein human serum albumin and
its binding sites. Our algorithm included new ideas both in the model
architecture and in the training method. For the model architecture, it
incorporated SE(3)-invariant geometric self-attention layers that operate on
top of residue-level CNN outputs. This residue-level processing of the model
allowed a transfer learning between the two resolutions, which turned out to
significantly improve the binding pocket prediction. Moreover, we developed
a novel augmentation method based on protein homology, which prevented our model
from over-fitting. Overall, we believe that our contribution to the literature
is twofold. First, we provided a new computational method for binding site
prediction that is relevant to real-world applications, as shown by the good
performance on different benchmarks and the case study. Second, the novel ideas in
our method – the model architecture, transfer learning and the
homology augmentation – would serve as useful components in
future works. | Daeseok Lee, Jeunghyun Byun, Bonggun Shin | 2023-02-20T05:02:40Z | http://arxiv.org/abs/2303.08818v2 | Boosting convolutional neural networks' protein binding site prediction capacity using SE(3)-invariant transformers, transfer learning and homology-based augmentation
###### Abstract.
Figuring out small molecule binding sites in target proteins, in the resolution of either pocket or residue, is critical in many virtual and real drug-discovery scenarios. Since it is not always easy to find such binding sites based on domain knowledge or traditional methods, different deep learning methods that predict binding sites out of protein structures have been developed in recent years. Here we present a new such deep learning algorithm, that significantly outperformed all state-of-the-art baselines in terms of both resolutions--pocket and residue. This good performance was also demonstrated in a case study involving the protein human serum albumin and its binding sites. Our algorithm included new ideas both in the model architecture and in the training method. For the model architecture, it incorporated SE(3)-invariant geometric self-attention layers that operate on top of residue-level CNN outputs. This residue-level processing of the model allowed a transfer learning between the two resolutions, which turned out to significantly improve the binding pocket prediction. Moreover, we developed a novel augmentation method based on protein homology, which prevented our model from over-fitting. Overall, we believe that our contribution to the literature is twofold. First, we provided a new computational method for binding site prediction that is relevant to real-world applications, as shown by the good performance on different benchmarks and the case study. Second, the novel ideas in our method--the model architecture, transfer learning and the homology augmentation--would serve as useful components in future works.
## 1. Introduction
In structure-based drug discovery, the knowledge of ligand binding sites (hereafter binding sites) on target proteins is crucial. It can aid _rational drug design_ ([1], [2], [3]) and is required for _in-silico_ methods such as docking ([1], [2]). Such knowledge of binding sites can be attained by analyses of experimental structures of the target protein in complex with ligands. However, if no such structure is at one's disposal, it may be necessary to rely on computational means in order to identify the binding sites.
In general, this computational task of **Binding Site Prediction (BSP)** can be regarded as composition of two sub-tasks: (1) **Binding Site Detection (BSD)** and (2) **Binding Residue Identification (BRI)**.
Firstly, BSD aims to identify the binding sites in a coarse-grained manner and score their druggability. A successful detection of highly druggable binding sites can aid medicinal chemists in many ways when designing better drug compounds. For example, medicinal chemists can draw valuable insights for improving drug compounds' binding affinity or physical properties by examining the receptor structure at the potential binding site ([1]). Also, preparing a suitable binding site is the first step in any virtual structure-based drug discovery pipeline ([1]).
Secondly, BRI aims to identify residues in a given binding site that play key roles in interactions with ligands. Identification of the key residues has been pursued in many previous research papers, due to its importance in rational drug design ([2], [3], [4]). In particular, it has several applications in virtual structure-based drug discovery. For example, _structural pharmacophore_ features can be selected based on the identified key residues ([2], [1]), and docking results can be prioritized according to whether the docked molecule has favored interactions with the key residues ([1], [2]).
In this paper, we will focus in particular on the structure-based Deep Learning methods to tackle the BSP problem. This choice reflects two recent trends. Firstly, Deep Learning has been widely adopted for BSP ([1]), and has shown good performance. Secondly, it has become easier to identify protein structures as a result of (1) the rapidly accumulating experimental data in databases such as PDB, and (2) advances in Deep Learning methods e.g. Alphafold ([1]).
Recent structure-based Deep Learning methods for BSP are predominantly based solely on 3D CNN architectures that operate on grid-shaped inputs. This results in several limitations in terms of the performance of the two sub-tasks explained previously.
Firstly, the way the CNN-based methods aggregate the local (or short-range) information to recognize the global patterns may be sub-optimal. They achieve this through either clustering algorithms applied on top of the CNN outputs, or through the CNN layers themselves. In the first case, the clustering algorithm could be replaced by highly parameterized modules such as Neural Networks. These can potentially outperform the clustering algorithm, since many parameters can be optimized for the given task. In the second case, the deep
layers of convolution operations may suffer from the problem of "long-term dependency". This means, since a convolution layer's operation is only local, a deep hierarchy of convolutions must be applied in order to allow a neuron to have a receptive field large enough to capture the global patterns. This long-term dependency is known to impede the training ([12]). This problem can be resolved if we adopt a neural network architecture whose operation is not local in nature.
Secondly, these grid-based models do not directly operate on the protein residues. Therefore, they need an ad-hoc conversion of outputs to obtain predictions about the binding residues. For example, they posit that an atom close to a predicted 3D point (e.g. a predicted voxel center) is part of a binding residue ([10], [11]). This may lead to sub-optimal BRI performances, since the loss function used to train the model does not compare the ground truth with the final output, but with the intermediate output before the conversion. For a better BRI performance, having a parameterized model that directly outputs the residue-level predictions would be preferable.
To resolve these problems, we devised a model that has geometric attention layers that operate on top of residue-level CNN outputs. Our model is composed of two modules dedicated to the sub-tasks BSD and BRI respectively. Both modules (1) divide local surroundings around each protein residue into grids, (2) process the grid features in parallel using a CNN model to obtain residue-level local features, (3) use the geometric self-attention layers to update the features, and (4) compute the final reductions to produce subtask-specific outputs.
To further improve our model's performance, we devised several additional elements--transfer learning, SE(3)-invariance, and augmentation. All these elements are intimately connected to our model's architecture.
Firstly, we applied the transfer learning between the two modules. More specifically, the BSD module was not trained from scratch but a portion of its parameters was initialized from the trained parameters of the BRI module. This was meant to facilitate the learning process of the BSD module, which may suffer from a relative lack of data.
Secondly, in order to improve the robustness of our model's predictions, we adapted the model to be SE(3)-invariant. In other words, our model is invariant to rotation and translation of the input structure ([13]). Besides using an SE(3)-invariant attention mechanism ([12]) at the outset, we made the grid featurization process SE(3)-invariant as well. This was achieved by aligning the axes of the grids with a specific orientation. This is similar to the grid alignment method used in [1]. While the grids aligned with that method are not completely deterministic (as one degree of freedom remains), the grids aligned in our method are, and hence achieve full SE(3)-invariance.
Lastly, we devised data augmentation techniques that can be used along with our model. This was necessary because existing augmentation methods based on translations or rotations have no effect on training the SE(3)-invariant model. In particular, we came up with a novel augmentation method based on protein homology search and sequence alignment. Protein homology has been utilized for the BSP problem ([14], [15], [16], [17]), but to the best of our knowledge, we were the first to use it as an augmentation method for the same problem.
The resulting method achieved significant performance gains over previous methods when evaluated on various BSP datasets, both in terms of BSD and BRI. A BSD metric increased by 3.8% on average, and a BRI metric increased by 16.9% on average.
Through an ablation study, we showed that all the components of our model--the model architecture, transfer learning, SE(3)-invariance, and augmentation--made significant contributions, with a few exceptions for the BSD metric. At the end of the experiment section, we provided potential reasons for such exceptions.
Finally, we performed a case study on _human serum albumin_ to show the effectiveness of our model in real world applications. We based our case study on two previous studies on the binding sites of human serum albumin ([15], [17]), and examined how well our method could make predictions that are compatible with the studies. The promising results showed the potential usefulness of our method in in-silico drug discovery.
To summarize, our contributions in this paper are:
* We developed a new SE(3)-invariant deep learning model for BRI and BSD that combine CNN with the geometric self-attention layers.
* We developed data augmentation methods, in particular homology augmentation, that can be used when training our model.
* We found that our new model, trained with the proposed data augmentation methods, achieved significant performance gains over state-of-the-art deep learning methods for BRI and BSD.
* By an ablation study, we found that all elements of our method contributed significantly to BRI performance.
* By an ablation study, we examined which elements contributed unambiguously to BSD performance. For those that did not, we provided possible explanations.
## 2. Related Works
In this section, we discuss (1) existing BSP methods, focusing on the traditional and the deep learning methods, and (2) a similar problem of predicting ligand-specific binding sites.
### Traditional BSP methods
#### 2.1.1. Probe-based Methods
These methods use a fixed set of small molecules called "probes" to determine the binding sites in a query protein ([11], [12], [13], [14]). Specifically, they place the probes at different positions on the surface of the protein, and calculate the physical energy at the positions. The low-energy positions are predicted to be the potential binding sites.
#### 2.1.2. Geometry-based Methods
These methods rely on 3D geometric characterization of binding sites to detect them. ([15], [16], [17])
One example is Fpocket ([17]), which tries to find concave regions of appropriate sizes on the protein surface. It does so by approximating the local curvatures by radii of alpha spheres, which are spheres with four heavy atoms on them but no heavy atom inside. More specifically, it finds all alpha spheres within a radius range, clusters them, and filters them according to the number of constituent alpha spheres to produce a binding pocket.
Although Fpocket typically produces an excessive number of binding pockets, it has relatively good recall (96.4% on scPDB v.2017, according to [1]). Therefore, there are ML-based algorithms ([1], [1]) that make use of it as a means to generate initial candidate binding sites. Our method employs the same strategy.
#### 2.1.3. Template-based Methods
These methods predict binding sites of a query protein based on the _templates_, which are other similar proteins whose binding sites are already known ([18], [19], [17], [18]). A portion of the query protein is regarded as a binding site if it resembles binding sites of templates either sequentially or structurally.
For example, the authors of [17] suggested combining two template-based approaches, one based on substructure comparison (TM-SITE) and the other on sequence profile alignment (S-SITE). TM-SITE works as follows:
1. Putative binding pockets are identified in the query protein by relying on an external software ([10]).
2. For each putative binding pocket, the template binding sites similar to it are collected as putative templates. The similarity measure is based on both structural and sequential comparisons.
3. The ligands in the putative templates are projected to the binding pocket.
4. A consensus voting by the projected ligands determines whether the residues in the binding pocket are in the binding site or not.
On the other hand, S-SITE works as follows:
1. The query protein sequence is aligned with the template sequences based on their position-specific scoring matrices (PSSM) profiles and secondary structure information.
2. Templates with the highest alignment _quality scores_ are chosen as the putative templates.
3. A consensus voting by the templates determines whether the residues in the query sequence are in the binding site.
Our homology-based augmentation algorithm is inspired by these methods. The overall flow of it resembles that of TM-SITE, and the use of global sequential alignment is shared by S-SITE. However, our aim in applying the algorithm is not to make a final prediction of the model, but rather to augment the training dataset.
### deep learning based BSP Methods
#### 2.2.1. DeepSite
DeepSite([1]) predicts the binding sites by using a 3D CNN model and a clustering algorithm. The inference steps of DeepSite are as follows: (1) it generates points spanning the entire protein-occupied 3D space, (2) predicts the _ligandability_ of the points using the CNN model computed on a 3D grid centered at the points, and (3) clusters the ligandable points to produce binding sites.
#### 2.2.2. DeepSurf
DeepSurf ([1]) has the same overall procedure as DeepSite ([1]), but it uses more sophisticated approaches in several aspects. More specifically, it tries to improve (1) the generation of initial points, (2) the formation of input grids, and (3) the architecture of the 3D CNN model. The initial points are sampled on the Solvent Accessible Surface (SAS) of the protein, rather than the entire span of the protein. Then, the axes of the grids formed at those points are not arbitrarily oriented, but one axis is set to be the normal vector of the SAS. Finally, rather than using a plain CNN model, they used 3D equivalents of ResNet and Bottleneck ResNet ([18]).
#### 2.2.3. Kalasanty
Kalasanty ([16]) tries to solve BSP by viewing it as a 3D _image segmentation_ problem. Therefore, it uses a 3D equivalent of the U-net model ([12]), which was originally developed for 2D images. It applies the U-net model to large grids that cover most of the query proteins, and outputs the connected components consisting of positively predicted voxels as binding sites.
#### 2.2.4. Deeppocket
Similar to our proposed method, Deeppocket ([1]) relies on the binding site candidates generated by Fpocket. It has separate detection and segmentation models, where the former is a plain 3D CNN model, and the latter is a U-net model similar to the one used in Kalasanty. The detection model is used to rank the binding site candidates generated by Fpocket, and the segmentation model is used to segment the 3D voxels centered at the top-ranked sites.
### Predicting ligand-specific binding sites
Recently, deep learning models that predict protein-ligand complex structures, given a protein-ligand pair, have been developed ([1], [10], [15]). In principle, these models can be used to find ligand-specific binding sites. Therefore, one may argue that the BSP models are strictly less useful than these models, since their predictions on binding sites do not take into account the partner ligands. However, we argue that they are still useful in their own right. Firstly, for many applications, predicting ligand-agnostic binding sites is enough, and even desirable. For example, a typical docking experiment requires a binding pocket location as a prerequisite, and docks all molecules in a virtual library to the pocket. To predict binding pockets using the models that do consider the partner ligands, preparing appropriate ligands may add additional complexity to the problem. This is similar to the problem of preparing appropriate _probe_ molecules in the previously mentioned _probe-based methods_. Secondly, the performances of the methods that predict protein-ligand complex structures are not satisfactory at this point. For example, EquiBind ([15]) scored median and mean ligand RMSD of \(6.2\AA\) and \(8.2\AA\), which suggests that the model is not accurate enough to correctly identify binding residues. Therefore, we might want to focus on the easier and well-studied BSP problem.
## 3. Problem Definition
Thus far, the BSP task has not been addressed explicitly under a common definition across the literature, even though different works used different model compositions. For example, while Deeppocket is comprised of separate "detection" and "segmentation" models, Kalasanty only uses a single segmentation model, whose output is post-processed by a clustering algorithm.
In order to fairly assess different BSP models, it is necessary to envisage an unified definition of the BSP task. To be more specific, we will formally establish standards on the input and output. All baseline models can be regarded as following the standard, which will be explained in the experiment section.
Moreover, we will also explain the decomposition of the task into sub-tasks (including BSD and BRI), which is employed in our method and Deeppocket.
### The BSP task
BSP is the task of identifying the ligand binding sites in a given protein. In the task, we are given as input: a protein structure \(P\) and the number of binding sites \(n\). We assume that there are known structures of ligands \(l_{i}\) (\(i=1,\cdots,n\)) that correspond to the binding sites.
The goal of the task is to predict an unordered set of \(n\) binding sites of \(P\) where the ligands \(l_{1},\cdots,l_{n}\) bind.
A _predicted binding site_ is of the form \((\hat{c}_{i},\hat{R}_{i})\), where \(\hat{c}_{i}\in\mathbb{R}^{3}\) is the _binding site center_ and \(\hat{R}_{i}\subset\{1,\cdots,size(P)\}\) is the set of indices of _binding residues_. For example, an ideal prediction \(\left\{(\hat{c}_{1},\hat{R}_{1}),\cdots,(\hat{c}_{n},\hat{R}_{n})\right\}\) is such that
* \(\hat{c}_{i}\) is close (e.g. within the radius threshold \(4\AA\)) to \(l_{i}\)
* \(\hat{R}_{i}\) is the set of indices of residues close (e.g. within the radius threshold \(4\AA\)) to \(l_{i}\)
The methods that we used to evaluate such predictions will be explained in 5.4.
### Decomposition into sub-tasks
Our method divides the BSP task into sub-tasks (1) candidate generation, (2) Binding Site Detection (BSD) and (3) Binding Residue Identification (BRI), each corresponding to a dedicated module (this is similar to TM-SITE ([13]) and Deeppocket ([1])). To be more specific, let \((P,n)\) be an input to perform BSP on. First, the _candidate generation module_ takes the protein structure \(P\) as an input and then generates the candidate binding site centers \(\hat{c}^{\prime}_{1},\hat{c}^{\prime}_{2},...,\hat{c}^{\prime}_{m}\in\mathbb{R}^{3}\), where typically \(m\gg n\). Next, the _BSD module_ takes \((P,\hat{c}^{\prime}_{i})\) (\(1\leq i\leq m\)) as the inputs and outputs the predicted _druggability_ of \(\hat{c}^{\prime}_{i}\) in \(P\). The druggability scores are then used to rank the candidate centers, the top \(n\) of which form a filtered list \(\hat{c}_{1},\cdots,\hat{c}_{n}\) of candidate centers. Lastly, for each \(1\leq i\leq n\), the _BRI module_ takes as input \((P,\hat{c}_{i})\), and outputs \(\hat{R}_{i}\), that is the set of binding residues within the binding site. The resulting set \(\left\{(\hat{c}_{1},\hat{R}_{1}),\cdots,(\hat{c}_{n},\hat{R}_{n})\right\}\) becomes the final output of the model.
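Schematically, this decomposition corresponds to a pipeline like the sketch below, where the three callables are hypothetical stand-ins for the modules described above:

```python
def predict_binding_sites(protein, n, candidate_generator, bsd_module, bri_module):
    """Compose the three modules: candidates -> ranking (BSD) -> residues (BRI)."""
    candidates = candidate_generator(protein)                  # m candidate centers
    scores = [bsd_module(protein, c) for c in candidates]      # druggability per center
    ranked = sorted(zip(scores, candidates), key=lambda p: -p[0])
    top_centers = [c for _, c in ranked[:n]]                   # keep the top-n candidates
    return [(c, bri_module(protein, c)) for c in top_centers]  # (center, residue indices)
```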
## 4. Key Components of Our Method
This section briefly illustrates the key components of our method, which will be explained in more details in Section 7. These include the details of the modules (candidate generation, BSD and BRI) as well as other aspects independent to the model architecture. In particular, the latter includes the transfer learning and the homology-based augmentation method.
### The candidate generation module
To generate the binding site candidate centers, we use the external software Fpocket ([1]). Given a protein structure, Fpocket finds sets of heavy atoms \(\hat{S}_{1},\cdots,\hat{S}_{m}\), each corresponding to a region geometrically likely to be a binding pocket. Then, we find the candidate centers \(\hat{c}^{\prime}_{i}\) (\(i=1,\cdots,m\)) by taking the center of mass of the atoms in \(\hat{S}_{i}\).
We chose Fpocket as the candidate generation method because it achieves a sufficiently high recall rate (\(96.4\%\) on scPDB v.2017, according to [1]). This means that, for a given protein and its binding site, it is likely that at least one of the generated candidates corresponds to the binding site. Then, provided that the BSD module ranks the candidates properly, the top-\(n\) candidates may approximate the true binding site centers with a high accuracy.
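Given the heavy-atom sets returned by Fpocket, computing the candidate centers reduces to a simple mean over coordinates. The sketch below treats the center of mass as the unweighted mean of the atom coordinates (an assumption) and leaves the Fpocket parsing out:

```python
import numpy as np

def candidate_centers(pockets):
    """pockets: list of (k_i, 3) arrays of heavy-atom coordinates from Fpocket.
    Each candidate center is taken as the unweighted mean of the pocket's atoms."""
    return [atoms.mean(axis=0) for atoms in pockets]

pockets = [np.random.rand(12, 3), np.random.rand(30, 3)]   # toy pockets
print(candidate_centers(pockets))
```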
### The BSD module
The BSD module takes as input the protein structure and a candidate binding site center \(\hat{c}^{\prime}\), and outputs the predicted druggability at \(\hat{c}^{\prime}\).
In doing so, it featurizes the surroundings of \(\hat{c}^{\prime}\) into a set of per-residue 3D grids, and processes the grids through a neural network to produce the output. Here, each grid in the set corresponds to a residue close enough to \(\hat{c}^{\prime}\) (distance threshold \(17\AA\)), and encodes the local environment of the residue.
The neural network of the BSD module is composed of (1) a residue-local feature extracting unit, which runs in parallel for each grid, (2) an aggregation unit, which globally aggregates the local features, and (3) a reduction unit, which maps the aggregated feature to a single scalar quantity. The feature extracting unit is a 3D CNN model, and the aggregation unit is composed of several geometric self-attention layers. The reduction part is composed of a point-wise feed-forward layer and a mean-reduction operation.
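The sketch below is our own schematic of this three-part composition, not the paper's implementation: a plain `TransformerEncoder` stands in for the geometric attention layers, and the class name and all dimensions are placeholders:

```python
import torch
import torch.nn as nn

class BSDModule(nn.Module):
    """Sketch: per-residue 3D CNN features -> self-attention -> scalar druggability.
    A plain TransformerEncoder stands in for the geometric attention stack."""
    def __init__(self, in_ch=8, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)                # point-wise feed-forward

    def forward(self, grids):                        # grids: (n_res, in_ch, g, g, g)
        feats = self.cnn(grids).unsqueeze(0)         # (1, n_res, dim) local features
        feats = self.attn(feats)                     # global aggregation
        return self.head(feats).mean()               # mean-reduce to one druggability score

module = BSDModule()
print(module(torch.randn(20, 8, 9, 9, 9)))           # 20 residues near the candidate center
```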
### The BRI module
The BRI module takes in the protein structure and a putative binding site \(\hat{c}\) as inputs and outputs the set of predicted binding residue indices.
Figure 1. (a) Our BRI module. (b) Our BSD module. (c) Our BSD module is trained in two stages, where in the second stage, the parameters of the shared parts are initialized from the result of training a BRI module in the first stage. (d) Ground truth generation. A candidate site is a true binding site if the center is within \(4\AA\) from a ligand atom (i.e. \(DCA<4\AA\)). A residue is a binding site residue if it is within \(4\AA\) from a ligand atom.
The BRI module shares the residue-local feature extraction and global aggregation units with the BSD module. To be more specific, the BRI module shares the following units with the BSD module: (1) the CNN feature extractor and (2) the stack of geometric attention layers up to the penultimate one in the BSD module. The remaining part of the BRI module comprises only a point-wise feed-forward layer, without a mean-reduction operation. Hence, the outputs of the last layer are used to determine (with a threshold value) whether the corresponding residues are binding site residues or not.
### Transfer learning
Transfer learning can be applied between the BSD and BRI sub-tasks thanks to the shared architectures between the BSD and BRI modules. More specifically, we initialize the weights of the BSD module's shared parameters with the weights obtained from BRI module's training. The rationale behind this procedure is the following intuition: the protein's binding site can be determined based on the patterns of the binding residues. Under this rationale, we hypothesize that a well-performing BRI module will learn useful features that can transfer well to the BSD task.
In addition, the transfer learning allows the BSD module to leverage the relatively more abundant labels present in the BRI dataset. While there is one label per binding site for the BSD task, there are multiple binding residues per binding site. Thus, it is desirable to exploit such abundance of labels from the BRI task for the BSD task via the transfer learning.
### The geometric self-attention layers
As mentioned previously, we adopted the geometric self-attention layers to globally aggregate local features obtained by the CNN. First, let \(\{x_{i}\}_{i=1}^{n}\) denote a sequence of hidden vectors that is either the initial local features computed by the CNN or the output of the preceding attention layer. Then a geometric self-attention layer \(f_{att}\) transforms \(\{x_{i}\}_{i=1}^{n}\) into another sequence of hidden vectors
\(\{x^{\prime}_{i}\}_{i=1}^{n}\) based on the protein structure. In our method, we represent the protein structure as _local frames_ ([1]) \(\left\{T_{i}\right\}_{i=1}^{n}=\left\{\left(R_{i},t_{i}\right)\right\}_{i=1}^{n}\) associated to the residues, where \(R_{i}\) is the _residue orientation_ and \(t_{i}\) is the _residue center_. Then the geometric self-attention layer \(f_{att}\) takes the following form:
\[\left\{x^{\prime}_{i}\right\}_{i=1}^{n}=f_{att}\left(\left\{x_{i}\right\}_{i=1 }^{n},\left\{T_{i}\right\}_{i=1}^{n}\right) \tag{4.1}\]
The geometric attention mechanism has several advantages over the modules from previous works used for the feature aggregation--the traditional clustering algorithms and the CNN models. First, unlike the Neural Network based methods (such as ours), the clustering algorithms are less flexible in terms of the number of adjustable parameters, and they do not fit their parameters based on the gradients. As for the CNN models, the attention layers are arguably more effective in terms of modeling the long-distance dependency. While an attention layer can emulate arbitrarily distant interaction in a single step of operation, convolution layers require several steps to do so due to their local nature. This _long-term dependency_ may lead to sub-optimal learning outcomes ([1]).
For the attention mechanism, we used a modified version of _Invariant Point Attention_ from Alphafold ([1]). The essential ideas of its computation, compared to the standard attention mechanism ([13]), are that (1) it uses not only the standard _query_, _key_ and _value_ vectors but also geometric ones, (2) it calculates the attention weights based on not only the inner products of the standard query and key vectors but also the distances between the geometric query and key vectors, and (3) its output is determined by not only the aggregated standard value vectors but also the aggregated geometric value vectors. More detailed descriptions are provided in Figure 2 and the Methods section. The effect of using the geometric vectors in the attention mechanism is discussed as a part of our ablation study.
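A condensed sketch of idea (2) follows; it is our own simplification of the invariant-point idea rather than the exact Alphafold formulation, mixing feature dot products with distances between frame-transformed query/key points:

```python
import torch

def geometric_attn_logits(q, k, q_pts, k_pts, R, t, w=1.0):
    """q, k: (n, d) feature queries/keys; q_pts, k_pts: (n, p, 3) local points;
    R: (n, 3, 3) residue orientations; t: (n, 3) residue centers."""
    # Map local points into the global frame: x_global = R @ x_local + t
    q_glob = torch.einsum('nij,npj->npi', R, q_pts) + t[:, None]
    k_glob = torch.einsum('nij,npj->npi', R, k_pts) + t[:, None]
    feat = q @ k.T / q.shape[-1] ** 0.5                 # standard scaled dot product
    # Squared distances between the points of residue i and residue j, summed over p.
    # These are unchanged by a rigid transform applied to all frames.
    d2 = ((q_glob[:, None] - k_glob[None, :]) ** 2).sum(-1).sum(-1)
    return feat - w * d2                                # closer pairs attend more

n, d, p = 5, 16, 4
logits = geometric_attn_logits(torch.randn(n, d), torch.randn(n, d),
                               torch.randn(n, p, 3), torch.randn(n, p, 3),
                               torch.eye(3).repeat(n, 1, 1), torch.randn(n, 3))
print(torch.softmax(logits, dim=-1).shape)              # (5, 5) attention weights
```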
### Satisfaction of SE(3)-invariance
A function is _SE(3)-invariant_ if its output remains unchanged when SE(3) transformations (translations, rotations or compositions of them) are applied to the input. In our context, when \(\left\{v_{i}\right\}_{i=1}^{N}\) are the coordinates of the protein atoms, \(\left\{f_{i}\right\}_{i=1}^{N}\) are the feature vectors of the protein atoms, and \(\left\{T_{i}\right\}_{i=1}^{n}\) are the local frames related to the protein residues, a function \(f(\left\{v_{i}\right\}_{i=1}^{N},\left\{f_{i}\right\}_{i=1}^{N},\left\{T_{i}\right\}_{i=1}^{n})\) is SE(3)-invariant if, for any SE(3) transformation \(T\), we have
\[f(\left\{Tv_{i}\right\}_{i=1}^{N},\left\{f_{i}\right\}_{i=1}^{N},\left\{TT_{i}\right\}_{i=1}^{n})=f(\left\{v_{i}\right\}_{i=1}^{N},\left\{f_{i}\right\}_{i=1}^{N},\left\{T_{i}\right\}_{i=1}^{n})\]
The SE(3)-invariance is a desired property for the structure-based BSP models. This is because the binding site information should remain unchanged regardless of the reference frame. To reflect this, we are injecting this inductive bias via incorporating the SE(3)-invariance property into our model and hence achieve robustness in our model's prediction ([13]).
Two strategies were employed to achieve SE(3)-invariance in our model. First, we adopted SE(3)-invariant attention mechanism at the outset. To be more specific, we required the attention layer in (4.1) to be invariant to any SE(3) transformation \(T=(R,t)\), that is:
\[f_{att}(\left\{x_{i}\right\}_{i=1}^{n},\left\{TT_{i}\right\}_{i=1}^{n})=f_{att }(\left\{x_{i}\right\}_{i=1}^{n},\left\{T_{i}\right\}_{i=1}^{n}). \tag{4.2}\]
The details of the SE(3)-invariant attention mechanism will be explained in the Methods section. Then, the other strategy employed to meet SE(3)-invariance was altering the grid-based residue featurization process. When constructing the grids, we did not use arbitrary (xyz-) axes, but rather axes aligned with respect to the orientations of the residues. This will also be explained in more detail in the Methods section.
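One common way to build such residue orientations (and hence deterministic grid axes) is a Gram-Schmidt construction from the backbone N, CA and C atoms; since the paper defers its exact procedure to the Methods section, the following sketch should be read as an assumption:

```python
import numpy as np

def residue_frame(n, ca, c):
    """Return (R, t): orthonormal residue axes and origin from backbone atoms."""
    v1, v2 = c - ca, n - ca
    e1 = v1 / np.linalg.norm(v1)
    u2 = v2 - (v2 @ e1) * e1            # Gram-Schmidt: remove the e1 component
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)               # right-handed third axis
    return np.stack([e1, e2, e3], axis=1), ca

R, t = residue_frame(np.array([1.0, 0.0, 0.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(R, t)                             # orthonormal 3x3 matrix and the CA origin
```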
### Augmentation strategies
_Data augmentation_ is one of the most essential elements comprising the modern Deep Learning. It plays a crucial role in (1) alleviating the problem of overfitting which occurs commonly due to the lack of data and (2) improving the trained model's generalization capability ([15]). In essence, the data augmentation enlarges the effective size of the training set by applying various transformations to the inputs.
Previous Deep Learning methods for BSP ([1], [1], [1]) employed various augmentations. These augmentation methods were mainly based on applying transformations in the SE(3) class (rotations, translations and compositions of them) to the inputs of the CNN models.
In the absence of data augmentation, our model suffered from a clear pattern of over-fitting (Figure 4). We believe that this is due to the lack of diversity in the training dataset as compared to the large complexity of the model.
However, the existing augmentation methods based on SE(3) transformations have no effect on the training dynamics when used in conjunction with our SE(3)-invariant model. From this aspect, we devised two novel augmentation methods: (1) a method based on a class of geometric transformations and (2) a method based on the homologous proteins.
Firstly, the augmentation based on the geometric transformation introduces _random perturbations_ to the residue orientations. This differs from the usual "random rotation" in that it permits only rotations of small magnitude. We applied independent random perturbations to the residue orientations once, then used the modified residue orientations as an input to every attention layer. The rationale behind applying such random perturbations is to promote diversity in the geometric information while not completely forgetting the original residue orientations.
Next, we employed an augmentation scheme based on protein homology. This fundamentally differs from the usual augmentation methods, in that it relies on an unlabelled external database of unbound protein structures to pre-compute the transformations. Essentially, we first align the protein sequences in the training set (each referred to as an _internal protein sequence_) with the protein sequences from the external database (each referred to as an _external protein sequence_). Then, we assign the ground-truth binding site and binding residue labels of the internal protein sequence to the aligned external protein sequence as its labels. Finally, we augment our training dataset with these aligned external protein sequences and their assigned labels. The extended dataset may contain some label noise, since the assignment of labels based on the homology relations is subject to inaccuracies to some extent. In spite of this problem, our augmentation scheme enhances the diversity of the input data compared to the previous augmentation schemes. This is because the extended dataset may include protein species absent from the training dataset.
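A minimal sketch of the label-transfer step, assuming Biopython's `PairwiseAligner` with toy scoring values; the actual alignment tool and scoring used to build the extended dataset may differ:

```python
from Bio import Align

def transfer_labels(internal_seq, external_seq, binding_idx):
    """Map binding-residue indices from internal_seq onto external_seq
    through a global pairwise alignment (toy scoring values)."""
    aligner = Align.PairwiseAligner()
    aligner.mode = "global"
    aligner.match_score, aligner.mismatch_score = 2, -1
    aligner.open_gap_score, aligner.extend_gap_score = -2, -0.5
    aln = aligner.align(internal_seq, external_seq)[0]
    mapped = set()
    for (i_start, i_end), (e_start, e_end) in zip(*aln.aligned):
        # Positions within an aligned block correspond one-to-one.
        mapped |= {e_start + k for k in range(i_end - i_start)
                   if i_start + k in binding_idx}
    return mapped

print(transfer_labels("MKTAYIAK", "MKTGYIAK", {3, 4}))  # -> {3, 4}
```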
## 5. Experiments
### Datasets
We used scPDB v.2017 ([1]) as a main dataset for the training and the validation. In addition, we used three other datasets for the tests: COACH420 ([11]), HOLO4K ([1]) and CHEN ([12]). To be more specific, we used the training subset of the scPDB dataset provided by [13] for 5-fold cross-validation. This subset excludes proteins which have sequence similarity higher than 90% to the proteins in one of the external test datasets. We used the remaining part of the scPDB as a test dataset. Thus, the test datasets were comprised of the scPDB test set and the external test datasets -- COACH420, HOLO4K and CHEN. The CHEN dataset had holo and apo subsets. Thus, in the tests using the apo subset, we obtained the ground-truth binding sites from the structural alignments with the corresponding holo structures. More specifically, the ligands of the holo structures were superimposed onto the apo structures according to the structural alignments. The characteristics of each test dataset and the details of the structural alignments are described in the Supplementary Information.
### Baseline Methods
We compared our method to the previous state-of-the-art Deep Learning methods, which are based on CNN: Deeppocket ([1]), Kalasanty ([1]) and DeepSurf ([1]). All these methods are briefly explained in 2.2. For Deeppocket and DeepSurf, we trained their parameters from scratch according to our dataset splits. However, for Kalasanty, we used the parameters released by the authors due to the high computational cost of training. It is important to note that the parameters of Kalasanty were trained on the entire scPDB v.2017 dataset. To be specific, the training data of Kalasanty may have included data whose protein sequences are similar (similarity above 90%) to those in the test dataset. Thus, the Kalasanty method has an advantage in terms of the coverage of the training dataset compared to the other methods when evaluated on the external test datasets.
### Ablated Methods
We conducted an ablation study to assess the effectiveness of each component of our proposed method. We considered the omission of the following components:
* The geometric information used in the self-attention layers
* The use of local features extracted by the CNN
* The grid alignment process adopted to achieve SE(3)-invariance in our model
* The transfer learning from the BRI to the BSD module
* The data augmentation methods as a whole
* Each of the data augmentation methods
We used the most basic form of transformers, that is, BERT ([10]), as a version of the self-attention layers with the geometric information omitted, in order to conduct the ablation study on 'the geometric information used in self-attention layers'.
In the ablation study of 'the use of local features extracted by the CNN', we removed the CNN component from our model. To be more specific, the hidden vectors for the attention layers are directly obtained from the One-hot-encoding layer for the amino acid types followed by the token embedding layer. To compensate for the loss of model complexity, we added two more attention layers to the default configuration of BRI and BSD model architecture.
In the ablation study on the grid alignment process, we removed the alignment process but used the standard random rotation augmentation for a fair comparison. By doing so, we attempted to demonstrate the efficacy of our method compared to the common practice, rather than to the case where no additional technique is applied.
### Evaluation
In 3, we formalized the inputs and outputs of the BSP methods. According to the formalism, a BSP method (1) takes as inputs a protein structure and the number of binding sites (\(n\)), and (2) outputs \(n\) predictions \((\hat{c}_{1},\hat{R}_{1}),\cdots,(\hat{c}_{n},\hat{R}_{n})\) with \(\hat{c}_{i}\) and \(\hat{R}_{i}\) being the binding site centers and the binding residue indices, respectively. Now that we have formalized the problem, we need to establish an evaluation scheme. This requires
(1) interpreting each method's input and output as per the problem definition and (2) defining the evaluation metrics in terms of the input and output formats specified in the problem definition.
#### 5.4.1. How the baseline methods fit into our problem definition
The definitions of \(\hat{c}_{1},\cdots,\hat{c}_{n}\) for the baseline methods are mostly natural, and derived directly from the original papers. All methods produce a ranked list of predictions, thus we limit them to produce only the top-\(n\) outputs. Also, they compute the centers of predicted binding sites in their evaluation, so we can compute \(\hat{c}_{i}\) as they prescribed.
However, not all baseline models output the predicted binding sites at the residue level. Thus, it is necessary to map their outputs to sets of residues \(\hat{R}_{1},\cdots,\hat{R}_{n}\). For example, in Deeppocket ([AGC\({}^{+}\)21]), the authors used the distance threshold \(2.5\AA\) (which performed best on their validation set) to determine the binding residues from the segmented voxels; therefore, we followed the same procedure. For Kalasanty ([SDZS20]) and DeepSurf ([MAD21]), the authors introduced a method to convert their predictions to atom-level predictions (which was implemented in their code); therefore, we regarded the residues having at least one such predicted binding atom as the predicted binding residues.
#### 5.4.2. The evaluation metrics
We use three evaluation metrics: (1) _the success rate for detection_ (success rate), (2) _the average IOU of binding residues with respect to the closest ligands_ (IOU), and (3) _the average IOU of the binding residues with respect to the successfully detected ligands_ (conditional IOU). The metrics evaluate different combinations of BSD and BRI performances. The success rate metric evaluates the BSD performance, the IOU metric evaluates the BRI performance but is also influenced by the BSD performance, and the conditional IOU metric aims to evaluate the BRI performance alone. We give additional details with regard to how these evaluation metrics compare to their counterpart metrics introduced in the previous literature in the Supplementary Information.
In order to provide a formal definition of each metric, we shall adopt following notations:
* \(n^{(i)}\) is the number of ground-truth ligands bound to the \(i\)-th protein.
* \(\left\{l_{1}^{(i)},\cdots,l_{n^{(i)}}^{(i)}\right\}\) is the set of ground-truth ligands bound to the \(i\)-th protein.
* \(\left\{(c_{1}^{(i)},BR_{1}^{(i)}),\cdots,(c_{n^{(i)}}^{(i)},BR_{n^{(i)}}^{(i)})\right\}\) is the set of predictions of the method to evaluate.
* \(\left\{TBR_{1}^{(i)},\cdots,TBR_{n^{(i)}}^{(i)}\right\}\) is the set of _true binding residue indices_, where \(TBR_{j}^{(i)}\) is defined to be the set of residues in the \(i\)-th protein that is within \(4\AA\) from \(l_{j}^{(i)}\)
The success rate metric measures the correspondence between the predicted binding site centers \(\left\{c_{1}^{(i)},\cdots,c_{n^{(i)}}^{(i)}\right\}\) and the positions of the ground-truth ligands \(\left\{l_{1}^{(i)},\cdots,l_{n^{(i)}}^{(i)}\right\}\). For each \(i\), we compute the F1 score (the harmonic mean of precision and recall) based on the definition of _detection_. Specifically, we define that \(c_{j}^{(i)}\) is a _correct detection_ of \(l_{k}^{(i)}\) when \(c_{j}^{(i)}\) is within \(4\AA\) (a threshold commonly used in the literature, e.g. [1] and [1]) from any atom of \(l_{k}^{(i)}\). In other words, we define that detection is correctly performed when the Distance from Center to Atom (DCA) is \(<4\AA\). Then, the F1 scores are weighted-averaged (weighted by \(n^{(i)}\)) over the proteins. In summary, we obtain this metric as
\[\left(\sum_{i}n^{(i)}\cdot\frac{2}{\frac{1}{P^{(i)}}+\frac{1}{R^{(i)}}}\right) \bigg{/}\left(\sum_{i}n^{(i)}\right)\]
, where \(P^{(i)}\) is the _precision_ defined as follows:
\[P^{(i)}=\frac{\#\left\{1\leq j\leq n^{(i)}:c_{j}^{(i)}\text{ detects one of }l_{1}^{(i)},\cdots l_{n^{(i)}}^{(i)}\right\}}{n^{(i)}}\]
and \(R^{(i)}\) is the _recall_ defined as follows:
\[R^{(i)}=\frac{\#\left\{1\leq k\leq n^{(i)}:l_{k}^{(i)}\text{ is detected by one of }c_{1}^{(i)},\cdots,c_{n^{(i)}}^{(i)}\right\}}{n^{(i)}}\]
This is a BSD metric since this involves only the predicted binding site centers, not the predicted binding residues.
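In code, the per-protein part of this metric can be sketched as follows (our own illustration; the \(n^{(i)}\)-weighted averaging over proteins is omitted):

```python
import numpy as np

def dca(center, ligand_atoms):
    """Distance from a predicted center to the closest atom of a ligand."""
    return np.linalg.norm(ligand_atoms - center, axis=1).min()

def detection_f1(centers, ligands, thresh=4.0):
    """centers: n predicted (3,) arrays; ligands: n ground-truth (k, 3) atom arrays."""
    hits = np.array([[dca(c, l) < thresh for l in ligands] for c in centers])
    precision = hits.any(axis=1).mean()   # fraction of centers detecting some ligand
    recall = hits.any(axis=0).mean()      # fraction of ligands detected by some center
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

centers = [np.zeros(3)]
ligands = [np.array([[1.0, 1.0, 1.0]])]
print(detection_f1(centers, ligands))     # 1.0 (distance ~1.73 A < 4 A)
```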
The IOU metric compares the predicted binding residues \(BR_{j}^{(i)}\) with the true binding residues \(TBR_{\phi^{(i)}(j)}^{(i)}\) of the ligand \(l_{\phi^{(i)}(j)}^{(i)}\) closest to the predicted binding site center \(c_{j}^{(i)}\). Here, the index \(\phi^{(i)}(j)\) of the closest ligand is defined as
\[\phi^{(i)}(j)=\operatorname*{arg\,min}_{k=1}^{n^{(i)}}DCA(c_{j}^{(i)},l_{k}^{( i)})\]
The comparison is performed in terms of Intersection Over Union (IOU), and the quantity is averaged over all pairs of \((i,j)\). In summary, we obtain the second metric as
\[\left(\sum_{i}\sum_{j=1}^{n^{(i)}}\frac{\#(BR^{(i)}_{j}\cap TBR^{(i)}_{\phi^{(i)}(j)})}{\#(BR^{(i)}_{j}\cup TBR^{(i)}_{\phi^{(i)}(j)})}\right)\bigg{/}\left(\sum_{i}n^{(i)}\right)\]
Although this is essentially a BRI metric, it also depends on the BSD performance due to the definition of \(\phi^{(i)}(j)\). In particular, if the predicted center \(c_{j}^{(i)}\) is far from any ligand, the set of predicted binding residues \(BR^{(i)}_{j}\) can hardly contribute to the metric.
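Per prediction, the IOU term itself is a simple set ratio, as in this sketch (residue indices as Python sets; the empty-vs-empty convention is our own choice):

```python
def residue_iou(pred_residues, true_residues):
    """Intersection-over-union of predicted vs. true binding-residue index sets."""
    if not pred_residues and not true_residues:
        return 1.0                       # convention for two empty sets
    inter = len(pred_residues & true_residues)
    union = len(pred_residues | true_residues)
    return inter / union

print(residue_iou({1, 2, 3, 5}, {2, 3, 4, 5}))  # 3 / 5 = 0.6
```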
The conditional IOU metric is almost the same as the IOU metric, but it aims to eliminate the previously mentioned problem that the BRI performance is bound by the BSD performance. It does so by focusing on the case that the predicted binding sites is close to at least one ligand. In summary, we obtain the metric as
\[\left(\sum_{i}\sum_{j\in S^{(i)}}\frac{\#(BR^{(i)}_{j}\cap TBR^{(i)}_{\phi^{(i)}(j)})}{\#(BR^{(i)}_{j}\cup TBR^{(i)}_{\phi^{(i)}(j)})}\right)\bigg{/}\left(\sum_{i}\#S^{(i)}\right)\]
, where
\[S^{(i)}=\left\{j=1,\cdots,n^{(i)}:DCA(c^{(i)}_{j},l^{(i)}_{\phi^{(i)}(j)})<4\AA\right\}\]
This is similar to metrics used in Deeppocket ([AGC\({}^{+}\)21]) and DeepSurf ([MAD21]) to evaluate models' binding site _segmentation_ capability conditional on the successful location of the binding sites.
### Training the BSD Module
To implement the transfer learning described in 4.4, the BSD training consists of two stages. The first stage is pre-training the part of the BSD module's architecture shared with the BRI module, as depicted in Figure 1. In doing so, we attach the unshared part of the BRI module architecture on top of the shared part in the BSD module, then train the combined model for the BRI task. The second stage is fine-tuning the entire original BSD module for the BSD task. In the second stage, to promote smoother transfer learning, we freeze the parameters of the parts trained in the first stage for a certain number of gradient descent steps.
In both stages, we use balanced sampling of binding site candidates with positive and negative labels. This is because, among the binding site candidates predicted by Fpocket (on average 33 per protein in scPDB), only a few are actual binding sites (typically only one). Training without such balanced sampling may render the trained model biased toward the majority label ([JK19]).
In addition, in the first stage, we resolve the similar problem of unbalanced residue labels by using a weighted loss function. This loss function consists of a weighted sum of terms from different residues, where the binding and non-binding residues attain the following weights:
\[w_{pos}=\frac{1}{2n_{pos}},\quad w_{neg}=\frac{1}{2n_{neg}}\]
, where \(n_{pos}\) and \(n_{neg}\) are the number of binding and non-binding residues respectively.
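In code, this weighting can be sketched as follows, assuming a binary cross-entropy objective on per-residue logits (the exact loss form is not stated in this section):

```python
import torch

def balanced_bri_loss(logits, labels):
    """Per-residue BCE with the class-balancing weights w_pos, w_neg above.
    logits, labels: (n_res,) tensors, labels in {0., 1.}."""
    n_pos = labels.sum().clamp(min=1.0)
    n_neg = (1.0 - labels).sum().clamp(min=1.0)
    weights = labels / (2 * n_pos) + (1.0 - labels) / (2 * n_neg)
    per_res = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")
    return (weights * per_res).sum()      # weighted sum over residues

labels = torch.tensor([1., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
print(balanced_bri_loss(torch.randn(10), labels))
```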
### Training the BRI module
To train the BRI module, we use only the positive binding site candidates (we do the same when training Deeppocket). This is because, in our intended usage, the BRI module operates on the binding sites detected by the BSD module. Note that this intention is reflected in the evaluation metric _average IOU of binding residues against the closest ligands_ as well. All the settings of the first stage of the BSD training were maintained, except the balanced sampling of the binding site candidates.
### Data Augmentations
We apply the data augmentation methods described in 4.7 in every training scenario (pre-training the BSD module, fine-tuning the BSD module and training the BRI module), except when they are omitted as a part of the ablation study. Applying the random perturbation means transforming the inputs to the geometric attention layers once for each data sample. Applying the homology-based "augmentation" means adding to the original loss (computed from the original dataset) an auxiliary loss calculated in the same way but computed from the "extended dataset".
### Experiment Results
The experiment results are summarized in Table 1, Table 2, Table 3, Figure 3 and Figure 4.
#### 5.8.1 BSD performance
In general, Table 1 shows that our method significantly outperformed the baseline methods in the BSD task. Although there is an exception in that Deeppocket outperformed our method on CHEN-holo, the gap in performance (1.2\(\%p\)) is relatively insignificant compared to the average gain in performance (3.1\(\%p\)) across the other datasets.
The ablation results show that all components had positive effects on the performance in general. In particular, (1) the use of geometric information in the attention layers, (2) the transfer learning and (3) augmentation
| Method | scPDB (held-out) | COACH420 | HOLO4K | CHEN-holo | CHEN-apo |
| --- | --- | --- | --- | --- | --- |
| DeepSurf | 62.4 ± 1.3 | 43.6 ± 1.3 | 59.7 ± 1.4 | 24.5 ± 1.4 | 22.3 ± 1.2 |
| Kalasanty | 70.0 ± 0.0 | 50.8 ± 0.0 | 44.9 ± 0.0 | 28.5 ± 0.0 | 27.1 ± 0.0 |
| Deeppocket | 67.9 ± 0.4 | 55.7 ± 0.7 | 72.9 ± 0.2 | **42.4 ± 0.3** | 34.5 ± 1.2 |
| Ours | **70.1 ± 0.4** | 59.1 ± 0.3 | **77.0 ± 0.6** | 41.2 ± 1.2 | 36.5 ± 0.2 |
| Ours (BERT) | 65.6 ± 1.2 | 54.8 ± 1.2 | 70.8 ± 1.1 | 38.2 ± 1.2 | 32.8 ± 0.7 |
| Ours (no CNN) | 69.2 ± 0.7 | 57.7 ± 0.3 | 75.1 ± 0.2 | 41.7 ± 1.0 | **36.7 ± 1.0** |
| Ours (no alignment) | 69.0 ± 0.6 | **59.4 ± 0.7** | 75.6 ± 0.7 | 41.0 ± 0.6 | 35.4 ± 0.6 |
| Ours (no transfer) | 61.9 ± 0.3 | 53.9 ± 0.3 | 70.0 ± 0.5 | 39.8 ± 0.5 | 35.1 ± 0.6 |
| Ours (no augmentation) | 64.2 ± 2.0 | 58.8 ± 1.4 | 71.3 ± 1.5 | 39.1 ± 1.7 | 34.3 ± 0.5 |
| Ours (no homology) | 68.3 ± 1.1 | 57.3 ± 0.9 | 75.1 ± 1.1 | 39.9 ± 0.5 | 34.6 ± 0.6 |
| Ours (no perturbation) | 70.0 ± 0.5 | 58.9 ± 0.7 | 76.2 ± 0.3 | 41.8 ± 0.7 | 36.3 ± 1.1 |

Table 1. (BSD metric) F1 success rate for detection. The mean and standard deviation are calculated based on the metric values of 5 different cross-validation folds.
| Method | scPDB (held-out) | COACH420 | HOLO4K | CHEN-holo | CHEN-apo |
| --- | --- | --- | --- | --- | --- |
| DeepSurf | 0.402 ± 0.010 | 0.419 ± 0.013 | 0.330 ± 0.007 | 0.372 ± 0.019 | 0.336 ± 0.020 |
| Kalasanty | 0.356 ± 0.000 | 0.362 ± 0.000 | 0.344 ± 0.000 | 0.333 ± 0.000 | 0.323 ± 0.000 |
| Deeppocket | 0.095 ± 0.002 | 0.506 ± 0.005 | 0.371 ± 0.004 | 0.339 ± 0.008 | 0.382 ± 0.000 |
| Ours | **0.643 ± 0.004** | **0.685 ± 0.007** | **0.415 ± 0.002** | **0.495 ± 0.010** | **0.473 ± 0.004** |
| Ours (BERT) | 0.567 ± 0.012 | 0.459 ± 0.013 | 0.356 ± 0.007 | 0.353 ± 0.016 | 0.328 ± 0.010 |
| Ours (no CNN) | 0.624 ± 0.002 | 0.548 ± 0.004 | 0.395 ± 0.002 | 0.450 ± 0.014 | 0.419 ± 0.006 |
| Ours (no alignment) | 0.637 ± 0.005 | 0.572 ± 0.001 | 0.413 ± 0.002 | 0.479 ± 0.005 | 0.450 ± 0.006 |
| Ours (no augmentation) | 0.522 ± 0.001 | 0.475 ± 0.007 | 0.347 ± 0.004 | 0.397 ± 0.010 | 0.380 ± 0.007 |
| Ours (no homology) | 0.628 ± 0.003 | 0.567 ± 0.009 | 0.412 ± 0.002 | 0.481 ± 0.007 | 0.449 ± 0.010 |
| Ours (no perturbation) | 0.596 ± 0.006 | 0.547 ± 0.008 | 0.391 ± 0.004 | 0.468 ± 0.008 | 0.445 ± 0.008 |

Table 3. (BRI metric) Average IOU of binding residues against the detected ligands. The mean and standard deviation are calculated based on the metric values for 5 different cross-validation folds.
Figure 4. The effect of the augmentation methods on the training
Figure 3. The effect of transfer learning on the BSD training
(in particular the homology augmentation) showed positive contributions consistently and significantly. Omitting each component resulted in the following decreases in the model's performance:
* The geometric information: 4.3\(\%p\)
* The transfer learning: 4.64\(\%p\)
* The augmentation as a whole: 3.84\(\%p\)
* The homology augmentation: 1.74\(\%p\)
* The random perturbation augmentation: 0.14\(\%p\)
However, we observed that not all components of our proposed model showed consistent contributions to the model's performance across different datasets. This was the case for the use of the CNN, the grid alignment and the random perturbation. The questionable effects of the first two components may be attributed to the insufficient size of the training dataset for the BSD task. We believe that the model required a larger training dataset to compensate for the increased model complexity introduced by the CNN component. Similarly, aligning the grids rather than applying a random rotation to the grids may have decreased the diversity of the training dataset. Lastly, the random perturbation's questionable contribution may be attributed to an overlap with the effects of the homology augmentation. Indeed, the random perturbation resulted in a significant improvement in performance in the absence of the homology augmentation, which can be confirmed by comparing "no augmentation" with "no homology" in Table 1 and Figure 4.
#### 5.8.2. The effect of transfer learning
Figure 3 shows the effect of transfer learning on the BSD training. According to the plot, transfer learning had two effects on the training process. Firstly, it significantly accelerated convergence: the validation loss dropped almost to its convergence level within 2000 steps. Note that until 4000 steps, we updated only the weights of the un-pretrained parts. Secondly, transfer learning also significantly improved the validation loss of the converged state. This finding is consistent with the ablation result of Table 1.
#### 5.8.3. BRI performance
In terms of the BRI performance, Table 2 and Table 3 show that our model outperformed the baselines by a significant margin on all test datasets. In particular, while the strongest baseline model Deeppocket performed poorly on the external test datasets, our model did not. This shows that our model generalizes well to proteins for which no similar proteins were encountered during training.
The ablation results show that all key components of our method explained in Section 4 contributed significantly to the good performance. These components are (1) the architectural aspects (the CNN and attention layers), (2) the grid alignment process and (3) the augmentation methods.
#### 5.8.4. The effect of augmentation
Figure 4 shows that the augmentation contributed significantly to alleviating over-fitting in all training scenarios (pre-training the BSD module, fine-tuning the BSD module and training the BRI module). While the augmentation as a whole was shown to dramatically reduce the over-fitting, the individual augmentation methods (random perturbation and homology augmentation) were also shown to be effective. This finding is consistent with the ablation results of Table 1, Table 2 and Table 3.
### The structure and Binding Sites of HSA
HSA is composed of three homologous domains (I, II, III), each composed of two subdomains A and B, as depicted in Figure 5 ([4], [10]). The authors of [10] performed a large-scale survey of HSA binding sites based on 142 crystal structures involving HSA. By analyzing the complexes, they identified 14 different binding sites, alongside their _frequencies_. They noted that binding sites "IB", "IIA" and "IIIA" clearly dominated. Inspired by this result, the authors of [12] provided a more detailed analysis of the binding sites at the IB subdomain. They did so by inspecting the crystal structures of 6 oncology drugs (9-amino-camptothecin, camptothecin, idarubicin, teniposide, etoposide and bicalutamide) in complex with HSA. In particular, for each structure, they identified all key residues of HSA and their interaction types with the drug molecule.
### Basic settings
We designed the analyses such that they can faithfully assess our model's real-world applicability.
Firstly, before the analyses, we trained our model with a new dataset split (different from the ones used in our main experiments) to prevent data leakage. Specifically, we removed all 42 "albumin" structures from the scPDB v.2017 dataset, randomly sampled a validation set of size 1000 from the remaining, and took all the other proteins as the training set. Moreover, we ensured that there is no leakage coming from the homology-based augmentation, by using a new augmentation dataset generated from the new training set.
Secondly, all our model's predictions are based on taking as input the HSA structure provided by the Alphafold database ([10]). Therefore, a good performance under this setting would imply that our model can be used to predict the binding sites of a protein without any known experimental structures. In particular, one can make use of the publicly available Alphafold database in the predictions.
### Binding Site Detection
#### 6.3.1. Experiment procedure
First, we obtained our model and Deeppocket's BSD module's predictions on 15 binding sites identified by [10]. This means that the predictions were made based on 15 different inputs, where we set the "binding site center" to be the mean alpha carbon coordinates of residues comprising one of the binding sites. The indices of residues comprising each binding site were provided by [10]. Note that we re-trained the Deeppocket model with a new dataset split to avoid data leakage, just as we did for our model.
Then, we assessed each model's predictions by comparing them with the ranks of the _frequencies_ of the binding sites as recorded in [10].
#### 6.3.2. Results and analysis
The results are summarized in Figure 6. Note that our model successfully assigned high (more than 70%) druggability scores on the second to sixth most _frequent_ binding sites. Moreover, these five were exactly the binding sites that scored the highest. The probability of the latter condition being met by a random ordering is only 0.2%, which shows the statistical significance of our model's ability in replicating binding sites' _frequency_ ranks. On the other hand, although Deeppocket assigned high druggability scores on the second and third most _frequent_ binding sites, it failed to do so on the fourth to sixth most _frequent_ ones.
### Binding Residue Identification
Figure 5. The subdomains of HSA
Figure 6. A comparison between our BSD module’s predictions and the _frequencies_ from [10]
Figure 7. The purple, red and blue residues indicate the true-positive, false-positive and false-negative binding site residues respectively. Therefore, the larger the purple region is compared to the regions with the other colors, the better the prediction is.
#### 6.4.1. Experiment procedure
First, we predicted the binding site residues in the IB subdomain of HSA using our model and all the baseline methods -- Deeppocket, Kalasanty and DeepSurf. When obtaining our model's (resp. Deeppocket's) predictions, we set the "binding site center" to be the mean alpha carbon coordinate of the IB subdomain residues, and ran the BRI module (resp. the _segmentation model_). Note that, as we did for our model, we re-trained the Deeppocket and DeepSurf models with a new dataset split to avoid data leakage.
Then, we visually inspected the binding residues predicted by different methods, comparing them with the ground truth provided by [13]. As the ground truth, we took the union of all the "key residues" (in the IB subdomain) of the 6 drug molecules as determined by [13].
#### 6.4.2. Results and analysis
The results are visualized in Figure 7. From the figures, it is clear that our model's prediction best matched with the ground truth. In particular, while all the baseline models resulted in false-positives within the left-bottom helix, our model did not. Also, while all the baseline models showed limited precision in identifying the key residues in the other parts of the IB subdomain, our model showed mostly successful identifications.
## 7. Methods
### The residue orientation
Our model uses a concept of "residue orientation" in the grid alignment process and the geometric attention layers. In the grid alignment process, the grid axes are aligned with respect to the orientations. In the geometric attention layers, the orientations are used as a part of the input.
We define the residue orientation in terms of the relative positions of atoms surrounding the alpha carbon, as in [1]. More precisely, when \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\) and \(\mathbf{x}_{3}\) are the coordinates of \(N\), \(C\alpha\) and \(C\) (the carbon that is not \(C\beta\) and is adjacent to \(C\alpha\)), the rotation matrix \(R=(\mathbf{e}_{1}\ \mathbf{e}_{2}\ \mathbf{e}_{3})\) that we refer to as "orientation" is obtained as follows:
\[\begin{aligned} \mathbf{v}_{1}&=\mathbf{x}_{3}-\mathbf{x}_{2}\\ \mathbf{v}_{2}&=\mathbf{x}_{1}-\mathbf{x}_{2}\\ \mathbf{e}_{1}&=\mathbf{v}_{1}/\left\|\mathbf{v}_{1}\right\|\\ \mathbf{u}_{2}&=\mathbf{v}_{2}-(\mathbf{e}_{1}\cdot\mathbf{v}_{2})\,\mathbf{e}_{1}\\ \mathbf{e}_{2}&=\mathbf{u}_{2}/\left\|\mathbf{u}_{2}\right\|\\ \mathbf{e}_{3}&=\mathbf{e}_{1}\times\mathbf{e}_{2}\end{aligned}\]
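The construction above is a standard Gram–Schmidt orthonormalization. A minimal NumPy sketch (the function name and the example coordinates are ours, not from the paper):

```python
import numpy as np

def residue_orientation(x1, x2, x3):
    """Gram-Schmidt construction of the orientation R = (e1 e2 e3) from the
    coordinates x1 (N), x2 (C-alpha) and x3 (C) of the backbone atoms."""
    x1, x2, x3 = (np.asarray(x, dtype=float) for x in (x1, x2, x3))
    v1, v2 = x3 - x2, x1 - x2
    e1 = v1 / np.linalg.norm(v1)
    u2 = v2 - np.dot(e1, v2) * e1          # remove the e1 component of v2
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)                  # completes a right-handed frame
    return np.stack([e1, e2, e3], axis=1)  # columns are e1, e2, e3

# R is a proper rotation: orthonormal columns with determinant +1.
R = residue_orientation([1.0, 0.2, 0.0], [0.0, 0.0, 0.0], [0.5, 1.3, 0.1])
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```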
### The grid featurization
As explained in 4.2 and 4.3, our BSD and BRI modules take as input a sequence of 3D voxelized images, where each image represents the local environment of a residue. The process of making such voxelized images, the "grid featurization", proceeds as follows:
1. Collect coordinates \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) (\(1\leq i\leq n\)) of the heavy (non-hydrogen) atoms in the protein.
2. Collect feature vectors \(f_{i}\in\mathbb{R}^{d_{feature}}\) (\(1\leq i\leq n\)) of the heavy atoms.
3. Given a choice of _grid axes_\((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\), a cubical grid is laid such that the centers of the voxels in the grid become \[t+\sum_{i=1}^{3}r(a_{i}-\frac{L-1}{2})\mathbf{e}_{i}\quad((a_{1},a_{2},a_{3}) \in\{0,\cdots,L-1\}^{3})\], where \(t\) is the grid center (the alpha carbon coordinate), \(r\) is the _grid resolution_ and \(L\) is the _grid size_.
4. Compute feature vectors corresponding to each voxel in the grid, by summing up those of nearby heavy atoms.
The same process was used in [1], [1] and [25], although the grids in those methods are not laid on the protein residues.
The specific details of our grid-featurization process are:
* The grid resolution is \(1\AA\) and the grid size is 16.
* The atom features are of dimension 18 and include atom types, hybridization, degree, partial charge and aromaticity ([25], [1]).
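As a rough illustration of steps 1–4 above, the following NumPy sketch sums each atom's feature vector into its nearest voxel. The paper only specifies that features of nearby heavy atoms are summed, so the nearest-voxel assignment (and the function name) is a simplifying assumption of ours:

```python
import numpy as np

def featurize_grid(coords, feats, center, axes, r=1.0, L=16):
    """Sum each heavy atom's feature vector into its nearest voxel of an
    L x L x L grid centered at `center`, with grid axes as columns of `axes`.
    coords: (N, 3) atom coordinates; feats: (N, d_feature) atom features."""
    coords, feats = np.asarray(coords), np.asarray(feats)
    grid = np.zeros((L, L, L, feats.shape[1]))
    # Grid coordinate of a point x: a_i = ((x - t) . e_i) / r + (L - 1) / 2
    local = (coords - np.asarray(center)) @ np.asarray(axes) / r + (L - 1) / 2.0
    idx = np.round(local).astype(int)                 # nearest voxel index
    inside = np.all((idx >= 0) & (idx < L), axis=1)   # drop atoms off the grid
    for (a1, a2, a3), f in zip(idx[inside], feats[inside]):
        grid[a1, a2, a3] += f
    return grid
```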
### The grid alignment
As explained in 4.6, we apply the "grid alignment" during the grid featurization to promote SE(3)-invariance. This means that we choose the grid axes to be the vectors comprising the residue orientation (See 7.1).
### The CNN model
Our model incorporates a 3D CNN to encode the residue-level grid features. For the CNN architecture, it uses a 3D BottleNeck ResNet model introduced in [1]. The model is adapted from the bottleneck ResNet model introduced in [15] for image classification. The bottleneck architecture reduces the number of parameters and thus enables the use of a deeper network. [1] showed that the 3D BottleNeck ResNet model, while being light, performed competitively compared to its non-bottleneck counterpart.
### The geometric attention layers
The geometric attention is an integral part of our model's architecture. The attention mechanism was adopted from Alphafold's ([JEP\({}^{+}\)21]) IPA (Invariant Point Attention), with an adjustment necessary to adapt it to our forms of inputs.
The inputs of the attention layers are composed of the following:
* \(x_{i}\in\mathbb{R}^{d_{hidden}}\) (\(i=1,\cdots,n\)), hidden vectors associated to the residues.
* \(T_{i}=(R_{i},t_{i})\in SO(3)\times\mathbb{R}^{3}\) (\(i=1,\cdots,n\)), the _local frames_ associated to the residues, where \(t_{i}\) is the position of the alpha carbon and \(R_{i}\) is the rotation matrix that represents the residue orientation (See 7.1). Note that the operation \(v\mapsto T_{i}v\) maps the local coordinates (with respect to the local frame) to the corresponding global coordinates, and the operation \(u\mapsto T_{i}^{-1}u\) plays the reverse role.
Then, the computation is carried out in the following steps (See also Figure 2):
1. The standard query and key vectors \(q_{i}^{h}\) and \(k_{i}^{h}\) are computed by the linear mappings from \(x_{i}\). Here, \(h\) stands for a "head".
2. The geometric query and key vectors \(\mathbf{q}_{i}^{hp}\) and \(\mathbf{k}_{i}^{hp}\) in \(\mathbb{R}^{3}\) are computed by the linear mappings from \(x_{i}\). Here, \(h\) stands for a "head" and \(p\) stands for a "point" of attention.
3. The _attention weight_ from the \(i\)-th token to the \(j\)-th token is computed from a linear combination of the standard attention weight \[w_{ij}^{h,standard}=\frac{1}{\sqrt{d_{hidden}}}q_{i}^{h}\cdot k_{j}^{h}\] and the geometric attention weight (7.1) \[w_{ij}^{h,geometric}=\frac{1}{\sqrt{N_{points}}}\sum_{p}\left\|T_{i}\mathbf{q} _{i}^{hp}-T_{j}\mathbf{k}_{j}^{hp}\right\|\] by applying a softmax operation. More precisely, the attention weight becomes \[w_{ij}^{h}=softmax_{j}(\frac{1}{\sqrt{2}}(w_{ij}^{h,standard}-\log(1+\gamma^{h} )w_{ij}^{h,geometric}))\] where \(\gamma^{h}\) is a learnable parameter.
4. The standard value vectors \(v_{j}^{h}\) are computed by a linear map from \(x_{j}\), and aggregated as \[o_{i}^{h}=\sum_{j}w_{ij}^{h}v_{j}^{h}\]
5. The geometric value vectors \(\mathbf{v}_{j}^{h}\) are computed by a linear map from \(x_{j}\), and aggregated as (7.2) \[\mathbf{o}_{i}^{hp}=T_{i}^{-1}(\sum_{j}w_{ij}^{h}T_{j}\mathbf{v}_{j}^{hp})\]
6. The aggregated vectors as well as their sizes are concatenated and linearly mapped via \(f_{final}\) to produce the output of the attention layer \[x_{i}^{\prime}=f_{final}(concat_{h,p}(o_{i}^{h},\mathbf{o}_{i}^{hp},\left\| \mathbf{o}_{i}^{hp}\right\|))\]
The adjustment made to the original IPA is the omission of the "attention bias" term. In the original paper, this term was based on the "pair representation" computed at the earlier stages of the Alphafold architecture using the evolutionary information of the protein. Since our model does not involve this representation, the omission is necessary to use the IPA in our model.
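A single-head NumPy sketch of steps 1–5 above, up to the final linear map \(f_{final}\); the weight shapes, the fixed \(\gamma\), and all names are our own simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def geometric_attention_head(x, R, t, d_hidden=8, n_points=2, gamma=1.0,
                             rng=np.random.default_rng(0)):
    """Single-head sketch; x: (n, d_hidden) hidden vectors, R: (n, 3, 3)
    residue orientations, t: (n, 3) alpha carbon coordinates."""
    n = x.shape[0]
    Wq, Wk, Wv = (rng.normal(size=(d_hidden, d_hidden)) for _ in range(3))
    Pq, Pk, Pv = (rng.normal(size=(d_hidden, 3 * n_points)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv                   # standard q/k/v (step 1)
    def to_global(P):                                  # local points -> T_i p (step 2)
        p = (x @ P).reshape(n, n_points, 3)
        return np.einsum('nij,npj->npi', R, p) + t[:, None, :]
    qg, kg, vg = to_global(Pq), to_global(Pk), to_global(Pv)
    w_std = q @ k.T / np.sqrt(d_hidden)                # standard weights
    w_geo = np.linalg.norm(qg[:, None] - kg[None, :], axis=-1).sum(-1) \
            / np.sqrt(n_points)                        # Eq. (7.1)
    logits = (w_std - np.log1p(gamma) * w_geo) / np.sqrt(2)   # step 3
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # softmax over j
    o = w @ v                                          # step 4
    og = np.einsum('ij,jpk->ipk', w, vg) - t[:, None, :]      # Eq. (7.2):
    og = np.einsum('nji,npj->npi', R, og)              # T_i^{-1} u = R_i^T (u - t_i)
    return np.concatenate([o, og.reshape(n, -1),
                           np.linalg.norm(og, axis=-1)], axis=-1)
```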
### SE(3)-invariance of the geometric attention layers
As noted in 4.6, our model's SE(3)-invariance relies on the attention layer's SE(3)-invariance. In order to prove that the layer is SE(3)-invariant, one has to examine the equality (4.2). Essentially, the equality holds for our attention layer because the quantities (7.1) and (7.2), as functions of \(\{x_{i}\}_{i=1}^{n}\) and \(\{T_{i}\}_{i=1}^{n}\), satisfy the same equality. For any \(T\in SE(3)\), (7.1) satisfies the equality because \(\left\|TT_{i}\mathbf{q}_{i}^{hp}-TT_{j}\mathbf{k}_{j}^{hp}\right\|=\left\|T_{i}\mathbf{q}_{i}^{hp}-T_{j}\mathbf{k}_{j}^{hp}\right\|\) (since \(T\) preserves distances), and (7.2) satisfies the equality because \((TT_{i})^{-1}(\sum_{j}w_{ij}^{h}(TT_{j})\mathbf{v}_{j}^{hp})=T_{i}^{-1}T^{-1}T(\sum_{j}w_{ij}^{h}T_{j}\mathbf{v}_{j}^{hp})=T_{i}^{-1}(\sum_{j}w_{ij}^{h}T_{j}\mathbf{v}_{j}^{hp})\). Note that this derivation is almost identical to that presented in [JEP\({}^{+}\)21] (Supplementary Information, page 28).
### The Homology-based Augmentation
The "homology-based augmentation" is one of our new augmentation methods used to overcome the problem of over-fitting. It is distinguished from the usual augmentation methods in that it is not based on transformations applied to the samples on the fly during the training. Instead, it pre-computes appropriate "augmented samples" out of an external database of unlabelled protein structures, and use the augmented dataset consisting of the augmented samples during training. Essentially, the augmented samples are selected based on the sequence alignments computed with respect to the proteins in the original
Figure 8. These figures illustrate the way our homology-based augmentation determines the positive and negative binding site candidates in augmented proteins. Figure (A) depicts an augmented protein, where there are two positive (the blue points) and one negative (the red point) binding site candidate centers. Out of binding site candidates proposed by Fpocket, they are labeled based on the distances to the _proxy centers_ (the purple points) of binding sites inferred from homology relations. Figures (B) and (C) depict the homologous proteins in the original database that gave rise to the inferred binding sites in the augmented protein. \(X\) and \(Y\) are their ligands. The bright and dark green regions of the chains indicate the residues in close proximity to the ligands, while only the bright green region has evolutionary correspondence to residues in the augmented protein. The bright green region must comprise at least 50% of the entire green region in order for the binding site to count.
Figure 9. These figures illustrate the way our homology-based augmentation assigns residue labels in augmented proteins with respect to a positive binding site candidate. Figure (A) illustrates UniProt protein Q9VC32 as an augmented protein. The red and purple residues correspond to the red residues in figure (B) of the PDB protein 4G34, which are the ligand-binding residues. Similarly, the blue and purple residues correspond to the blue residues in figure (C) of the PDB protein 4BID, which are the ligand-binding residues. The purple residues, the intersection, attain labels 1.0, while the other colored residues attain labels 0.5. This means that our augmentation method regards the purple residues as the most likely ligand-binding ones.
training set. In this section, we describe the augmentation method in more detail, clarifying its inputs, outputs, the procedures and the underlying rationale.
The augmentation method requires a _seed_ database \(\mathcal{S}^{*}\) of multi-chain protein-ligand complexes and a _target_ database \(\mathcal{T}\) of single-chain protein structures. In our instantiation, \(\mathcal{S}^{*}\) was the portion of the scPDB dataset used to train the current fold of the cross-validation. For \(\mathcal{T}\), we used the entire Alphafold Protein Structure Database (as of April 2022), which contained 992,316 protein structures from the proteomes of human and 47 other key organisms, as well as the Swiss-Prot entries.
The augmentation procedure outputs two types of information, which together form the "augmented dataset" and are used during the training as described in 5.7. The first denotes the centers of the binding site candidates in proteins in a selected subset of \(\mathcal{T}\), labeled either _positive_ or _negative_. This is used to augment the BSD training dataset. The second denotes, for each positive binding site candidate from the first, the likelihood of each nearby protein residue being a ligand-binding residue. This is used to augment the BRI training dataset.
In the following, we describe the steps of the procedure. The _italicized words_ are general terms whose specification may vary depending on one's needs. Whenever there is an _italicized word_, we provide our specification at the end of the step.
1. In each holo structure of \(\mathcal{S}^{*}\), find ligands _associated to_ exactly one chain. As a result, obtain a database \(\mathcal{S}\) of protein chains associated to at least one such single-chain ligands. (A chain can be associated to multiple single-chain ligands) We define that a chain and a ligand are _associated to_ each other if they have heavy atoms within \(4\AA\) to each other.
2. Run a _homology search algorithm_ with \(\mathcal{S}\) as the query database and \(\mathcal{T}\) (the database of single-chain protein structures) as the target database. Based on the results, obtain a MSA for each chain in \(\mathcal{S}\). For the _homology search algorithm_, we use the software HHBlits with its default setting.
3. For each triplet \((x,l,y)\), composed of: 1. a query chain \(x\) in \(\mathcal{S}\) 2. a ligand \(l\) associated to \(x\) found in step 1 of the procedure and 3. a target chain \(y\) aligned with \(x\) in the MSA, determine whether the ligand \(l\)'s binding site in \(x\) is _preserved_ in \(y\). The triplets for which the previous determination was affirmative will be called _preserving_. We define that a triplet \((x,l,y)\) is _preserving_ if at least half of the residues of \(x\) that are in close contact with \(l\) (heavy atoms within \(4\AA\)) are aligned with a residue of \(y\) in the MSA.
4. For each preserving triplet \((x,l,y)\), find a _proxy center_ of the binding site in \(y\) that corresponds to the ligand \(l\)'s binding site in \(x\). We define the _proxy center_ to be the mean of the alpha carbon coordinates of the residues of \(y\) aligned in the MSA with a residue of \(x\) in close contact with \(l\).
5. On each chain \(y\) in \(\mathcal{T}\) that is involved in at least one preserving triplet, run Fpocket to get an initial list of binding site center candidates. Label a candidate center "positive" if it is within a _lower threshold_ from a proxy center obtained in the previous step. Label it "negative" if it is farther than an _upper threshold_ from any such proxy center. If a candidate center does not fall into these categories, ignore it and exclude it from the dataset. We define the _lower threshold_ to be \(7.5\AA\) and the _upper threshold_ to be \(30\AA\). Figure 8 illustrates this step using schematic figures.
6. For each positively labeled binding site candidate from the previous step, label residues of \(y\) with the estimated likelihood of comprising the binding site. The estimate is obtained as a result of a "voting" of the homologous chains in \(\mathcal{S}\) that gave rise to the binding site. More specifically, among the preserving triplets \((x,l,y)\) whose proxy center gave rise to the binding site (in the sense of step 5), the proportion of such triplets for which the residue at hand corresponds (in the MSA) to a residue in the binding site of \(l\) is computed. Figure 9 illustrates this step using an actual example.
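Steps 5 and 6 can be sketched as follows; the function names, array shapes and boolean-mask encoding are our own assumptions:

```python
import numpy as np

def label_candidates(cand_centers, proxy_centers, lower=7.5, upper=30.0):
    """Step 5 sketch: label Fpocket candidate centers against the proxy
    centers inferred from homology. Returns +1 (positive), -1 (negative)
    or 0 (ignored) for each candidate."""
    cand, prox = np.asarray(cand_centers), np.asarray(proxy_centers)
    d = np.linalg.norm(cand[:, None, :] - prox[None, :, :], axis=-1).min(axis=1)
    labels = np.zeros(len(cand), dtype=int)
    labels[d <= lower] = 1      # within the lower threshold of some proxy center
    labels[d > upper] = -1      # farther than the upper threshold from all of them
    return labels               # 0-labeled candidates are excluded from the dataset

def residue_likelihoods(binding_masks):
    """Step 6 sketch: the 'vote' of the preserving triplets behind one positive
    candidate. binding_masks is an (n_triplets, n_residues) boolean array that
    is True where a residue corresponds (in the MSA) to a ligand-binding residue
    of the homologue; the mean over triplets is the likelihood label."""
    return np.asarray(binding_masks, dtype=float).mean(axis=0)
```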
The assignments of different labels in the previous procedure are based on the following hypotheses:
* The positive binding site label: if a pocket-like site (discovered by a geometry-based BSP method) is surrounded by the sequence fragments that are homologous to the binding site sequence fragments of other proteins, it is likely to be a binding site.
* The negative binding site label: even if a site is pocket-like, if it is far from any sequence fragments that are homologous to the binding site sequence fragments of other proteins (in a given seed database), it is unlikely to be a binding site.
* The residue labels: whether a residue nearby a binding site is a part of the binding site or not can be determined by whether the same is true for corresponding residues of the homologous binding sites.
These hypotheses, except the second one, have been the bases of the _template-based_ BSP methods introduced in 2.1.3.
However, to our knowledge, the second hypothesis has not been employed in previous BSP methods. The result of ablating the homology augmentation in Table 1 provides partial evidence that this hypothesis is valid at least to some degree: if the negative labels did not provide valuable learning signals, the homology augmentation would have had only adverse effects on the BSD module's performance.
|
2306.02533 | On Emergence of Clean-Priority Learning in Early Stopped Neural Networks | When random label noise is added to a training dataset, the prediction error
of a neural network on a label-noise-free test dataset initially improves
during early training but eventually deteriorates, following a U-shaped
dependence on training time. This behaviour is believed to be a result of
neural networks learning the pattern of clean data first and fitting the noise
later in the training, a phenomenon that we refer to as clean-priority
learning. In this study, we aim to explore the learning dynamics underlying
this phenomenon. We theoretically demonstrate that, in the early stage of
training, the update direction of gradient descent is determined by the clean
subset of training data, while the noisy subset has minimal to no impact,
resulting in a prioritization of clean learning. Moreover, we show both
theoretically and experimentally that, as the clean-priority learning goes on, the
dominance of the gradients of clean samples over those of noisy samples
diminishes, and finally results in a termination of the clean-priority learning
and fitting of the noisy samples. | Chaoyue Liu, Amirhesam Abedsoltan, Mikhail Belkin | 2023-06-05T01:45:22Z | http://arxiv.org/abs/2306.02533v1 | # On Emergence of Clean-Priority Learning in Early Stopped Neural Networks
###### Abstract
When random label noise is added to a training dataset, the prediction error of a neural network on a label-noise-free test dataset initially improves during early training but eventually deteriorates, following a U-shaped dependence on training time. This behaviour is believed to be a result of neural networks learning the pattern of clean data first and fitting the noise later in the training, a phenomenon that we refer to as _clean-priority learning_. In this study, we aim to explore the learning dynamics underlying this phenomenon. We theoretically demonstrate that, in the early stage of training, the update direction of gradient descent is determined by the clean subset of training data, while the noisy subset has minimal to no impact, resulting in a prioritization of clean learning. Moreover, we show, both theoretically and experimentally, that as the clean-priority learning goes on, the dominance of the gradients of clean samples over those of noisy samples diminishes, which finally results in a termination of the clean-priority learning and fitting of the noisy samples.
## 1 Introduction
Recent studies suggest that Neural Network (NN) models tend to first learn the patterns in the clean data and over-fit the noise at a later stage (Arpit et al., 2017; Li et al., 2020). We refer to this phenomenon as _clean-priority learning_. Real datasets may have intrinsic label noise, which is why early stopping can be useful in practice, saving a significant amount of unnecessary computation.
To study the _clean-priority learning_ phenomenon, we intentionally add label noise to the training dataset and leave the test dataset untouched; this is a common setting in the literature (see, for example, (Zhang et al., 2021; Nakkiran et al., 2021; Belkin et al., 2018)). Figure 1 illustrates this for an MNIST classification task using a CNN. The test prediction error exhibits a U-shaped dependence on training time, with an initial decrease followed by an increase after the early stopping point. The observation is that in the intermediate steps, especially around the early stopping point, the test performance can be significantly better than the label noise level added to the training set (below the dashed line).
To further explore this phenomenon, we address the following fundamental questions:
1. _What is the underlying mechanism by which neural networks learn the clean data first and fit the noise in later stages?_
2. _How does the model performance deteriorate after the early stopping point?_
At the outset, we analyze the configuration of sample-wise gradients (or their variants, for multi-class classification) on the training dataset, at the neural network initialization. Our objective is to examine whether there exists any pattern among the gradients that can explain the clean-priority learning phenomenon. Our analysis reveals that, at initialization, samples within the same class (before label corruption), which are presumably more similar to each other, tend to have their sample-wise gradients relatively closer in vector direction (compared to the samples from different
Figure 1: Classification errors on training and test datasets of MNIST using CNN. Test error exhibits a U-shaped curve, and can be significantly lower than the noise level during training.
classes). The label corruption, which flips the label to a different class, flips the corresponding sample-wise gradient to the opposite direction. Consequently, the sum of the noisy sample gradients points in the sharply opposite direction to that of the clean sample gradients.
The key observation is that, due to the dominance of the population of the clean samples, in the early stage of learning, the gradient of the noisy subset is cancelled out and essentially makes no contribution to the gradient descent (GD) update direction1. It is also worth noting that almost all clean sample-wise gradient vectors "agree" with the GD update (i.e., have positive projection), while almost all noisy sample-wise gradients are "against" the GD update. As a result, the individual loss on each clean sample is decreased, and that on each noisy sample is increased. Hence, we see that, in the early stage, the GD algorithm is determined by the clean samples and exhibits clean-priority learning.
Footnote 1: To be more precise, the only effect of the noisy subset gradient is resulting in a smaller GD step size.
We further show that as the clean-priority learning process continues, the clean subset gradient's dominance over the noisy subset gradually diminishes. This is particularly evident around the early stopping point, at which the noisy subset gradient begins to make a meaningful contribution, causing the model to fit the noisy samples along with the clean ones. This new trend in learning behavior is expected to hurt the model's performance, which was previously based primarily on the clean samples.
In summary, we make the following contributions:
* **Learning dynamics.** In the early stage of learning of neural networks, the noisy samples' contribution to the GD update is cancelled out by that of the clean samples, which is the key mechanism underlying clean-priority learning. However, this clean-priority learning behavior gradually fades as the dominance of the clean subset diminishes, particularly around the early stopping point. We experimentally verify our findings on deep neural networks for various classification problems.
* For fully connected networks, under mild assumptions on the data, we theoretically prove our empirical observations.
* In addition, we find that, for neural networks at initialization, sample-wise gradients from the same class tend to have relatively similar directions when there is no label noise.
The paper is organized as follows: in Section 2, we describe the setup of the problems and introduce necessary concepts and notations. In Section 3, we analyze the sample-wise gradients at the initialization of a neural network for binary classification. In Section 4, we show the learning dynamics, especially the clean-priority learning, for binary classification. In Section 5, we extend our study and findings to multi-class classification problems.
### Related works
Early stopping is often considered a regularization technique and is widely used in practice to obtain good performance for machine learning models (Zhang & Wallace, 2017; Gal & Ghahramani, 2016; Graves et al., 2013). Early stopping has also received substantial theoretical analysis, both on non-neural-network models, especially linear regression and kernel regression (Yao et al., 2007; Ali et al., 2019; Xu et al., 2022; Shen et al., 2022), and on neural networks (Zhang et al., 2021; Ji et al., 2021).
Recent studies suggest that, when random label noise is present, neural networks fit the clean data first and "overfit" the noise later on (Arpit et al., 2017; Li et al., 2020; Bai et al., 2021). For example, based on experimental observations that maximum validation set accuracy is achieved before good training set accuracy, the work (Arpit et al., 2017) conjectures that the neural network learns clean patterns first. However, there is no explanation of why and how the clean-priority learning happens. Another work (Li et al., 2020), assuming (almost) perfectly cluster-able data and uniform conditioning on Jacobian matrices, proves that clean data are fit by two-layer neural networks in an early stage. However, this data assumption requires that, at the same location as each noisy sample, there exist several (at least \(1/\delta\), with \(\delta\) being the label noise level) clean samples. This assumption is often not met by actual datasets.
To the best of our knowledge, our work is the first to elucidate the mechanism underlying the clean-priority phenomenon, offering a fresh perspective on its dynamics.
## 2 Problem setup and preliminary
In this paper, we consider supervised classification problems.
Datasets.There is a training dataset \(\mathcal{D}\triangleq\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) of size \(|\mathcal{D}|=n\). In each sample \((\mathbf{x}_{i},y_{i})\), there are input features \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and a label \(y_{i}\). For binary (2-class) classification problems, the label \(y_{i}\in\{0,1\}\) is binary; for multi-class classification problems, the label is one-hot encoded, \(y_{i}\in\mathbb{R}^{C}\), where \(C\) is the total number of classes. We further denote \(\mathcal{D}^{(c)}\), \(c\in\{1,2,\cdots,C\}\), as the subset of \(\mathcal{D}\) that is composed of samples from the \(c\)-th class. It is easy to see that \(\mathcal{D}=\cup_{c=1}^{C}\mathcal{D}^{(c)}\).
We assume the labels in \(\mathcal{D}\) are randomly corrupted. Specifically, if we denote by \(\hat{y}_{i}\) the ground truth label of \((\mathbf{x}_{i},y_{i})\in\mathcal{D}\), there exists a non-empty set
\[\mathcal{D}_{noise}\triangleq\{(\mathbf{x}_{i},y_{i})\in\mathcal{D}:y_{i}\neq \hat{y}_{i}\}. \tag{1}\]
Furthermore, the labels \(y_{i}\) in \(\mathcal{D}_{noise}\) are uniformly randomly distributed across all the class labels except \(\hat{y}_{i}\). We call \(\mathcal{D}_{noise}\) the _noisy subset_ and its elements _noisy samples_. We also define the _clean subset_ \(\mathcal{D}_{clean}\) as the complement, i.e., \(\mathcal{D}_{clean}=\mathcal{D}\backslash\mathcal{D}_{noise}\), and call its elements _clean samples_. The _noise level_ \(\delta\) is defined as the ratio \(|\mathcal{D}_{noise}|/|\mathcal{D}|\). In this paper, we set \(\delta<0.5\), i.e., the majority of training samples are not corrupted. We also denote \(\hat{\mathcal{D}}\) as the ground-truth-labeled dataset: \(\hat{\mathcal{D}}\triangleq\{(\mathbf{x}_{i},\hat{y}_{i})\}_{i=1}^{n}\).
In addition, there is a test dataset \(\bar{\mathcal{D}}\) which is drawn i.i.d. from the same data distribution as the training set \(\mathcal{D}\), except that the labels of the test set \(\bar{\mathcal{D}}\) are not corrupted.
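A minimal sketch of this corruption model (the function name is ours):

```python
import numpy as np

def corrupt_labels(y, delta, num_classes, rng=np.random.default_rng(0)):
    """Flip a delta-fraction of the labels, each uniformly to another class."""
    y = np.asarray(y).copy()
    noisy_idx = rng.choice(len(y), size=int(delta * len(y)), replace=False)
    for i in noisy_idx:
        y[i] = rng.choice([c for c in range(num_classes) if c != y[i]])
    return y, noisy_idx   # noisy_idx identifies the noisy subset D_noise

# e.g., binary labels with noise level delta = 0.4
y_noisy, noise_idx = corrupt_labels(np.zeros(1000, dtype=int), 0.4, 2)
```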
Optimization.Given an arbitrary dataset \(\mathcal{S}\) and a model \(f\) which is parameterized by \(\mathbf{w}\) and takes an input \(\mathbf{x}\), we define the loss function as
\[L(\mathbf{w};\mathcal{S})=\frac{1}{|\mathcal{S}|}\sum_{(\mathbf{x},y)\in\mathcal{S}}l(\mathbf{w};\mathbf{x},y), \tag{2}\]
with \(l(\mathbf{w};\mathbf{x},y)\triangleq l(f(\mathbf{w};\mathbf{x}),y)\) being evaluated on a single sample, as a function of the model output \(f(\mathbf{w};\mathbf{x})\) and the label \(y\). We use the logistic loss for binary classification problems, and the cross-entropy loss for multi-class problems. In this paper, we use neural networks as the model. In the case of binary classification problems, the output layer of \(f\) has only one neuron, and there is a _sigmoid_ function on top such that the output \(f(\mathbf{w};\mathbf{x})\in(0,1)\). In the case of multi-class classification problems, the neural network \(f\) has \(C\) output neurons, and there is a _softmax_ function to normalize the outputs. Throughout the paper, we assume that the neural network is large enough so that the training data can be exactly fit. This is usually satisfied when the neural network is over-parameterized (Liu et al., 2022).
The optimization goal is to minimize the empirical loss function (i.e., loss function on the training dataset \(\mathcal{D}\)):
\[L(\mathbf{w})\triangleq L(\mathbf{w};\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n}l (\mathbf{w};\mathbf{x}_{i},y_{i}). \tag{3}\]
The above loss function is usually optimized by gradient descent (or its stochastic variants) which has the following update form:
\[\mathbf{w}_{t+1} =\mathbf{w}_{t}-\eta\nabla L(\mathbf{w}_{t};\mathcal{D}) \tag{4}\] \[=\mathbf{w}_{t}-\eta\frac{1}{n}\sum_{(\mathbf{x}_{i},y_{i})\in \mathcal{D}}\nabla l(\mathbf{w}_{t};\mathbf{x}_{i},y_{i}).\]
Here, \(\nabla l(\mathbf{w};\mathbf{x}_{i},y_{i})\) is the gradient of loss \(l(\mathbf{w};\mathbf{x}_{i},y_{i})\) w.r.t. the neural network parameters \(\mathbf{w}\).
Sample-wise gradients.We call \(\nabla l(\mathbf{w};\mathbf{x}_{i},y_{i})\) a _sample-wise gradient_, as it is evaluated on a single sample, and denote it by \(\nabla l_{i}(\mathbf{w})\) for short. As there are \(n\) samples in \(\mathcal{D}\), at each point \(\mathbf{w}\) in the parameter space we have \(n\) sample-wise gradients, each of dimension \(p\) (the number of model parameters).
Denote by \(h(\mathbf{w};\mathbf{x})\) the pre-activation output neuron(s), which is in \(\mathbb{R}\) for binary classification and in \(\mathbb{R}^{C}\) for multi-class classification. The sample-wise gradient for a given sample \((\mathbf{x}_{i},y_{i})\) has the following form (Bishop and Nasrabadi, 2006):
\[\nabla l_{i}(\mathbf{w})=(f(\mathbf{w};\mathbf{x}_{i})-y_{i})\nabla h(\mathbf{ w};\mathbf{x}_{i}). \tag{5}\]
Note that the above expression is a scalar-vector multiplication for binary classification, and is a vector-matrix multiplication for multi-class classification.
We note that the sample-wise gradient will be one of our major quantities, and will play a fundamental role in the analysis throughout this paper.
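As a toy illustration of Eq. (5), consider a linear model \(h(\mathbf{w};\mathbf{x})=\mathbf{w}\cdot\mathbf{x}\) with the logistic loss, for which \(\nabla h=\mathbf{x}\); a sketch (ours, not the paper's experimental setup) showing that flipping the label flips the gradient direction:

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def sample_gradient(w, x, y):
    """Eq. (5) for a linear 'network' h(w; x) = w.x with logistic loss:
    grad l = (f(w; x) - y) * grad h, where grad h = x for this toy model."""
    f = sigmoid(w @ x)        # post-activation output in (0, 1)
    return (f - y) * x        # (f - y) is the residual factor

# The direction of the gradient flips with the label: y = 0 vs y = 1.
w, x = np.array([0.3, -0.1]), np.array([1.0, 2.0])
g0, g1 = sample_gradient(w, x, 0), sample_gradient(w, x, 1)
assert np.dot(g0, g1) < 0     # opposite directions along the same grad-h axis
```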
## 3 Sample-wise gradients at initialization for binary classification
In this section, we analyze the configuration of the \(n\) sample-wise gradients at the random initialization of neural networks for binary classification.
We start with the expression Eq.(5) of the sample-wise gradient \(\nabla l_{i}(\mathbf{w})\). For binary classification, \(\nabla l_{i}(\mathbf{w})\) is proportional to the model derivative \(\nabla h(\mathbf{w};\mathbf{x}_{i})\), up to a scalar factor \(f(\mathbf{w};\mathbf{x}_{i})-y_{i}\). In Section 3.1, we focus on the directions of \(\nabla h(\mathbf{w};\mathbf{x}_{i})\), which is label independent. Then, in Section 3.2, we discuss the effects of the label-related factor \(f(\mathbf{w};\mathbf{x}_{i})-y_{i}\) and label noise on the direction of sample-wise gradients.
### Direction of the model derivative \(\nabla h\)
Given any two inputs \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\), we denote the angle between \(\mathbf{x}\) and \(\mathbf{z}\) in the data space by \(\theta_{d}(\mathbf{x},\mathbf{z})\), and denote the angle between the two vectors \(\nabla h(\mathbf{w}_{0};\mathbf{x})\) and \(\nabla h(\mathbf{w}_{0};\mathbf{z})\) by \(\theta_{h}(\mathbf{x},\mathbf{z})\). In this subsection, we are concerned with the connection between \(\theta_{h}(\mathbf{x},\mathbf{z})\) and \(\theta_{d}(\mathbf{x},\mathbf{z})\) at the network initialization.
Consider the following two types of neural networks: a two-layer linear network \(h_{1}\), which is defined as
\[h_{1}(\mathbf{w};\mathbf{x})=\frac{1}{\sqrt{m}}\mathbf{v}^{T}A\mathbf{x}, \tag{6}\]
and a two-layer ReLU network \(h_{2}\), which is defined as
\[h_{2}(\mathbf{w};\mathbf{x})=\frac{1}{\sqrt{m}}\mathbf{v}^{T}\sigma(A\mathbf{x }). \tag{7}\]
Here, \(\sigma(\cdot)=\max(\cdot,0)\) is the element-wise ReLU activation function, and \(A\in\mathbb{R}^{m\times d}\) and \(\mathbf{v}\in\mathbb{R}^{m}\) are the first-layer and second-layer parameters, respectively. We denote by \(\mathbf{w}\) the collection of all the parameters. We use the NTK parameterization (Jacot et al., 2018) for the networks; namely, each parameter is i.i.d. initialized using the normal distribution \(\mathcal{N}(0,1)\) and there is an explicit scaling factor \(1/\sqrt{m}\) on each hidden layer.
The following theorem and its corollary show the relation between \(\theta_{h}(\mathbf{x},\mathbf{z})\) and \(\theta_{d}(\mathbf{x},\mathbf{z})\). (Proofs in Appendix A.1).
**Theorem 3.1**.: _Consider the two-layer neural networks, \(h_{1}\) and \(h_{2}\), defined in Eq.(6) and (7), with infinite width \(m\). Given any two inputs \(\mathbf{x}\) and \(\mathbf{z}\), the angles \(\theta_{h_{1}}(\mathbf{x},\mathbf{z})\) and \(\theta_{h_{2}}(\mathbf{x},\mathbf{z})\) have the following relations with \(\theta_{d}(\mathbf{x},\mathbf{z})\), at network initialization \(\mathbf{w}_{0}\): for the linear neural network \(h_{1}\),_
\[\theta_{h_{1}}(\mathbf{x},\mathbf{z})=\theta_{d}(\mathbf{x},\mathbf{z});\]
_for the ReLU neural network \(h_{2}\),_
\[\cos\theta_{h_{1}}(\mathbf{x},\mathbf{z})=\frac{\pi-\theta_{d}(\mathbf{x}, \mathbf{z})}{\pi}\cos\theta_{d}(\mathbf{x},\mathbf{z})+\frac{1}{2\pi}\sin \theta_{d}(\mathbf{x},\mathbf{z}).\]
**Corollary 3.2**.: _Consider the same networks \(h_{1}\) and \(h_{2}\) as in Theorem 3.1. For both networks, the following holds: for any inputs \(\mathbf{x}\), \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\), if \(0\leq\theta_{d}(\mathbf{x},\mathbf{z})\leq\theta_{d}(\mathbf{x},\mathbf{z}^{ \prime})\leq\frac{\pi}{2}\), then \(0\leq\theta_{h_{i}}(\mathbf{x},\mathbf{z})\leq\theta_{h_{i}}(\mathbf{x}, \mathbf{z}^{\prime})\leq\frac{\pi}{2}\), for \(i\in\{1,2\}\)._
The theorem and corollary suggest that: _similar inputs (small angle \(\theta_{d}\)) induce similar model derivatives (small angle \(\theta_{h}\))._
**Remark 3.3** (Not just at initialization).: As discussed in Liu et al. (2020, 2022), the model derivative \(\nabla h(\mathbf{w};\mathbf{x})\) is constant during optimization for infinitely wide neural networks. Hence, the angle \(\theta_{h}\) between model derivatives is also constant, and Theorem 3.1 and Corollary 3.2 apply to any time stamp of the network training.
Experimental verification.We experimentally verify the above theoretical results on neural networks with large width. Specifically, we consider six neural networks: three linear networks with \(2\), \(3\) and \(5\) layers, respectively; and three ReLU networks with \(2\), \(3\) and \(5\) layers. Each hidden layer of each neural network has \(512\) neurons. For each network, we compute the model derivatives \(\nabla h\) on the \(1\)-sphere \(\mathcal{S}^{1}=\{(\cos\theta_{d},\sin\theta_{d}):\theta_{d}\in[0,2\pi)\}\), at the network initialization. Figure 2 shows the relations between the angle \(\theta_{h}\) and \(\theta_{d}\).2 We observe that the curves for the \(2\)-layer networks match Theorem 3.1. More importantly, the experiments suggest that the same or similar relations, as well as Corollary 3.2, still hold for deep neural networks, although our analysis is conducted on shallow networks.
Footnote 2: The curves for the three linear networks are almost identical and not visually distinguishable; we only present the one for the \(2\)-layer linear network in Figure 2.
Consider the following synthetic dataset (also shown in the left panel of Figure 3): two separated data clusters in a \(2\)-dimensional space. We use a \(3\)-layer ReLU network of width \(m=512\) at its initialization to compute the sample-wise model derivatives \(\nabla h\). The right panel of Figure 3 shows the distributions of angle \(\theta_{h}\) for data pairs from the same cluster ("within") and from different clusters ("between"). It can be easily seen that the "within" distribution has smaller angles \(\theta_{h}\) than the "between" distribution, which is expected as the data from the same clusters are more similar.
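A Monte-Carlo sketch (ours) that checks the ReLU formula of Theorem 3.1 at finite width, for unit inputs in \(\mathbb{R}^{2}\); for large \(m\) the empirical cosine should approach the closed form:

```python
import numpy as np

def cos_theta_h(theta_d, m=100_000, rng=np.random.default_rng(0)):
    """Estimate cos(theta_h) for the 2-layer ReLU network h2 at initialization,
    for unit inputs x, z separated by angle theta_d."""
    x = np.array([1.0, 0.0])
    z = np.array([np.cos(theta_d), np.sin(theta_d)])
    A, v = rng.normal(size=(m, 2)), rng.normal(size=m)
    def grad_h(inp):
        pre = A @ inp
        # gradient w.r.t. (v, A): [sigma(A inp); (v * 1{A inp > 0}) inp^T] / sqrt(m)
        dA = ((v * (pre > 0))[:, None] * inp[None, :]).ravel()
        return np.concatenate([np.maximum(pre, 0.0), dA]) / np.sqrt(m)
    gx, gz = grad_h(x), grad_h(z)
    return gx @ gz / (np.linalg.norm(gx) * np.linalg.norm(gz))

theta = np.pi / 3
closed_form = (np.pi - theta) / np.pi * np.cos(theta) + np.sin(theta) / (2 * np.pi)
print(cos_theta_h(theta), closed_form)   # the two should roughly agree for large m
```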
### Directions of the sample-wise gradients
Now, we consider the sample-wise gradients \(\nabla l(\mathbf{w}_{0})\) at initialization, using Eq.(5). We note that the direction of \(\nabla l(\mathbf{w}_{0})\) is determined by \(\nabla h(\mathbf{w}_{0};\mathbf{x})\) and the label \(y\). This is because only the sign (not the magnitude) of \(y-f(\mathbf{w}_{0};\mathbf{x})\) may affect the direction, and the post-activation output \(f(\mathbf{w}_{0};\mathbf{x})\) is always in \((0,1)\) and label \(y\) is either \(0\) or \(1\).
Motivated by this observation, for a fixed class \(c\in\{0,1\}\), we consider the following subsets: clean subset \(\mathcal{D}^{(c)}_{clean}\triangleq\mathcal{D}^{(c)}\cap\mathcal{D}_{clean}\), noisy subset \(\mathcal{D}^{(c)}_{noise}\triangleq\mathcal{D}^{(c)}\cap\mathcal{D}_{noise}\), and
Figure 3: **(Left) Data visualization: two separated data clusters in \(2\)-dimensional space. (Right) Distributions of angle \(\theta_{h}\) for sample pairs from the same cluster (“within”) and from different clusters (“between”).**
Figure 2: Relation between \(\theta_{h}\) and \(\theta_{d}\). For both shallow and deep neural networks, similar inputs (small angle \(\theta_{d}\)) induce similar model derivatives (small angle \(\theta_{h}\)).
\(\mathcal{D}^{(c)}_{other}\triangleq\mathcal{D}_{clean}\backslash\mathcal{D}^{(c)}\). We note that \(\mathcal{D}^{(c)}_{noise}\) and \(\mathcal{D}^{(c)}_{clean}\) have the same input distribution but different labels \(y\), while \(\mathcal{D}^{(c)}_{noise}\) and \(\mathcal{D}^{(c)}_{other}\) have the same labels \(y\) but different input distributions.
For these subsets, we denote their corresponding sets of sample-wise gradients as \(\mathcal{G}^{(c)}_{clean}(\mathbf{w})\), \(\mathcal{G}^{(c)}_{noise}(\mathbf{w})\) and \(\mathcal{G}^{(c)}_{other}(\mathbf{w})\), respectively. We also define the corresponding _subset gradients_ as the sum of all sample-wise gradients in the subset: \(g^{(c)}_{k}(\mathbf{w})\triangleq\sum_{\nabla l(\mathbf{w})\in\mathcal{G}^{(c)}_{k}}\nabla l(\mathbf{w})\), for \(k\in\{clean,\ noise,\ other\}\). The direction of \(g^{(c)}_{k}(\mathbf{w})\) is the same as that of the average gradient in the subset.
**Direction of sample-wise gradients.** We consider the angles between sample-wise gradients, and denote by \(\theta_{g}\).
First we consider the clean subset. Within \(\mathcal{D}^{(c)}_{clean}\), the factor \(f(\mathbf{w};\mathbf{x})-y\) always have the same sign, as the label \(y\) is the same. Thus, the directional distribution of \(\mathcal{G}^{(c)}_{clean}(\mathbf{w}_{0})\) is identical to the \(\nabla h\) distribution, which has been analyzed in Section 3.1. Presumably, inputs from the same ground truth class tend to be more similar (with small angles \(\theta_{d}\)), compared to others. Applying Corollary 3.2, we expect the angles \(\theta_{h}\), and therefore \(\theta_{g}\) also, within this subset are relatively small. The green plots of Figure 4 show numerical verification of the \(\theta_{g}\) distributions for the clean subset \(\mathcal{D}^{(c)}_{clean}\).3 We see that \(\theta_{g}\) tends to concentrate around relatively small angles.
Footnote 3: For illustration purpose, we compare each sample-wise gradient with the average direction, represented by \(g^{(c)}_{clean}(\mathbf{w}_{0})\).
The interesting part concerns the noisy subset \(\mathcal{D}_{noise}\). This subset shares the same input distribution, and hence the same \(\nabla h\) distribution, with the clean subset. However, due to the different label \(y\), the factor \(f(\mathbf{w}_{0};\mathbf{x})-y\) has a different sign from that of the clean samples, which flips all \(\nabla l(\mathbf{w}_{0})\in\mathcal{G}^{(c)}_{noise}(\mathbf{w}_{0})\) to the opposite direction of those in \(\mathcal{G}^{(c)}_{clean}(\mathbf{w}_{0})\). As a consequence, sample-wise gradients between \(\mathcal{G}^{(c)}_{noise}(\mathbf{w}_{0})\) and \(\mathcal{G}^{(c)}_{clean}(\mathbf{w}_{0})\) make large angles \(\theta_{g}\); and the noisy subset gradient \(g^{(c)}_{noise}(\mathbf{w}_{0})\) is sharply opposite to \(g^{(c)}_{clean}(\mathbf{w}_{0})\). Figure 4 experimentally verifies this phenomenon. The red histograms, representing the \(\theta_{g}\) distribution for the noisy subset, are symmetric to the green ones and are located at large angles. The red dash lines, representing the angle \(\theta_{g}\) between \(g^{(c)}_{noise}(\mathbf{w}_{0})\) and \(g^{(c)}_{clean}(\mathbf{w}_{0})\), are close to \(180^{\circ}\).
Lastly, the subset \(\mathcal{D}^{(c)}_{other}\), having different ground truth labels from the other two subsets, has a different input distribution. By Corollary 3.2, the sample-wise gradients \(\nabla l(\mathbf{w}_{0})\) of this subset are expected not to align with those of the other two subsets (as shown by the blue histograms in Figure 4). Moreover, the subset gradient \(g^{(c)}_{other}(\mathbf{w}_{0})\) should have a significant component orthogonal to \(g^{(c)}_{noise}(\mathbf{w}_{0})\) and \(g^{(c)}_{clean}(\mathbf{w}_{0})\) (as shown by the blue dash lines in Figure 4).
**Magnitudes of subset gradients.** We are interested in the magnitudes of \(g^{(c)}_{clean}(\mathbf{w}_{0})\) and \(g^{(c)}_{noise}(\mathbf{w}_{0})\). By definition, for \(k\in\{clean,\ noise\}\),
\[g^{(c)}_{k}(\mathbf{w}_{0})=|\mathcal{D}^{(c)}_{k}|\,\mathbb{E}[\nabla l]=|\mathcal{D}^{(c)}_{k}|\,\mathbb{E}[f(\mathbf{w}_{0};\mathbf{x})-y]\,\mathbb{E}[\nabla h],\]
where the expectation is taken over the corresponding data subset. We know that \(\mathbb{E}[\nabla h]\) is the same for clean and noisy subsets. In addition, \(\mathbb{E}[f(\mathbf{w}_{0};\mathbf{x})-y]\) are opposite for these two subsets, as \(\mathbb{E}[f(\mathbf{w}_{0};\mathbf{x})]\) is \(0.5\) by random guess and \(y=1\) for one subset and \(y=0\) for the other. Hence, we see that the magnitudes \(\|g^{(c)}_{k}\|\) are determined by the subset population, and we have
\[\|g^{(c)}_{clean}(\mathbf{w}_{0})\|/\|g^{(c)}_{noise}(\mathbf{w}_{0})\|=(1- \delta)/\delta>1. \tag{8}\]
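A toy numerical sketch of Eqs. (8)–(9), assuming a linear model so that \(\nabla h=\mathbf{x}\) and a random-guess output \(f\approx 0.5\) at initialization (these simplifications, and the synthetic cluster, are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
delta, n = 0.3, 10_000
# One class-c cluster; a (1 - delta) / delta split into clean / noisy labels.
X = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(n, 2))
clean = rng.random(n) > delta              # clean samples keep y = 1
y = np.where(clean, 1.0, 0.0)              # noisy samples are flipped to y = 0
f = 0.5                                    # random-guess output at initialization
G = (f - y)[:, None] * X                   # Eq. (5) with grad h = x (toy model)
g_clean, g_noise = G[clean].sum(axis=0), G[~clean].sum(axis=0)
cos = g_clean @ g_noise / (np.linalg.norm(g_clean) * np.linalg.norm(g_noise))
ratio = np.linalg.norm(g_clean) / np.linalg.norm(g_noise)
print(cos, ratio)   # cos close to -1; ratio close to (1 - delta) / delta ~= 2.33
```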
## 4 Learning dynamics of binary classification
In this section, we analyze the learning dynamics of binary classification with label noise in the training dataset.
Specifically, we show that in the early stage of training, the dynamics exhibits a clean-priority learning characteristic, due to a dominance of the clean subset in first-order information, i.e., sample-wise gradients. We further show that in later stage of training, this dominance fades away and clean-priority learning terminates, resulting in a fitting of the noisy samples and worsening of the test performance.
We partition the optimization procedure into two stages: early stage which happens before the early stopping point; and later stage which is after the early stopping point.
### Initialization & early stage
In Section 3, we have seen that, at initialization,
\[g^{(c)}_{noise}(\mathbf{w}_{0})=-\alpha_{0}g^{(c)}_{clean}(\mathbf{w}_{0}), \tag{9}\]
with \(\alpha_{0}\triangleq\delta/(1-\delta)\in(0,1)\). We note that, during training, the model derivative \(\nabla h\) for a wide neural network is found
Figure 4: The distributions of \(\theta_{g}\). **Left**: synthetic data in Figure 3 (\(\delta=0.3\)). **Right**: two classes of MNIST (“0” and “1”, \(\delta=0.3\)). Dash lines represent subset gradients. Both cases use a 2-layer ReLU neural network.
to barely change (Liu et al., 2020):
\[\nabla h(\mathbf{w}_{t})=\nabla h(\mathbf{w}_{0}),\;\forall t>0.\]
By Eq.(5), this implies that each sample-wise gradient \(\nabla l_{i}\) keeps its direction unchanged during training (but changes in magnitude through the factor \(f(\mathbf{w};\mathbf{x})-y\)). Therefore, we make the following assumption:
**Assumption 4.1**.: There exist a time \(T>0\) and a sequence \(\{\alpha_{t}\}_{t=0}^{T}\), with each \(\alpha_{t}\in(0,1)\), such that, for all \(t\in[0,T]\) and \(c\in\{0,1\}\), the following holds \(g_{noise}^{(c)}(\mathbf{w}_{t})=-\alpha_{t}g_{clean}^{(c)}(\mathbf{w}_{t})\).
Define \(\hat{g}^{(c)}(\mathbf{w})\) as the summation of the sample-wise gradients with ground truth labels, i.e., \(\hat{g}^{(c)}(\mathbf{w})=\sum_{(\mathbf{x},\hat{y})\in\hat{\mathcal{D}}^{(c) }}\nabla l(\mathbf{w};\mathbf{x},\hat{y})\). By the assumption, we have for all \(0\leq t\leq T\) and \(c\in\{0,1\}\),
\[g_{clean}^{(c)}(\mathbf{w}_{t})=\frac{1}{\alpha_{t}+1}\hat{g}^{(c)}(\mathbf{ w}_{t}). \tag{10}\]
On the other hand, by the definition in Eq. (3), we have for the full gradient
\[\nabla L(\mathbf{w}_{t};\mathcal{D})=\frac{1}{n}\sum_{c}\left(g_{clean}^{(c)}(\mathbf{w}_{t})+g_{noise}^{(c)}(\mathbf{w}_{t})\right). \tag{11}\]
Combining Assumption 4.1 with Eqs. (10) and (11), we get that the full gradient on the training data \(\mathcal{D}\) has the same direction as that on the ground-truth-labeled data \(\hat{\mathcal{D}}\). Hence, we have the following proposition.
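Spelling this step out (our addition, using the \(1/n\) normalization of Eq. (3)):

\[\nabla L(\mathbf{w}_{t};\mathcal{D})=\frac{1}{n}\sum_{c}(1-\alpha_{t})\,g_{clean}^{(c)}(\mathbf{w}_{t})=\frac{1-\alpha_{t}}{1+\alpha_{t}}\cdot\frac{1}{n}\sum_{c}\hat{g}^{(c)}(\mathbf{w}_{t})=\frac{1-\alpha_{t}}{1+\alpha_{t}}\,\nabla L(\mathbf{w}_{t};\hat{\mathcal{D}}),\]

which is a positive multiple of \(\nabla L(\mathbf{w}_{t};\hat{\mathcal{D}})\) whenever \(\alpha_{t}\in(0,1)\), and which also explains the factor \(\eta_{t}^{\prime}=\frac{1-\alpha_{t}}{1+\alpha_{t}}\eta\) below.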
**Proposition 4.2** (Update rules).: _Suppose Assumption 4.1 holds with time \(T>0\) and sequence \(\{\alpha_{t}\}_{t=0}^{T}\in(0,1)^{T}\). Then, the gradient descent (with learning rate \(\eta\)), Eq.(4), has the following equivalent update rule_
\[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}^{\prime}\nabla L(\mathbf{w}_{t};\hat {\mathcal{D}}),\;\;\mathrm{for}\;t\leq T, \tag{12}\]
_with \(\eta_{t}^{\prime}=\frac{1-\alpha_{t}}{1+\alpha_{t}}\eta>0\) and \(\nabla L(\mathbf{w}_{t};\hat{\mathcal{D}})\) being the gradient evaluated on the ground-truth-labeled dataset \(\hat{\mathcal{D}}\)._
**Remark 4.3** (mini-batch scenario).: In mini-batch SGD, a relation similar to Eq. (12) also holds for a mini-batch estimate \(\nabla L(\mathbf{w}_{0};\mathcal{B})\), as long as the sampling of the mini-batch is independent of the label noise and the batch size \(|\mathcal{B}|\) is not too small, so that the majority of samples in the batches are clean. Hence, in the following, we do not explicitly write out the dependence on the mini-batches.
The proposition states that, after adding label noise to the training dataset, the gradient descent update is equivalent to the one without label noise (except for a different learning rate \(\eta_{t}^{\prime}<\eta\)). In other words, gradient descent does not essentially "see" the noisy data, and its update direction is determined only by the clean samples.
Clean-priority learning.This proposition implies the following learning characteristics of what we call _clean-priority learning_, as described below.
_Training loss and accuracy on subsets._ The loss \(L(\mathbf{w};\mathcal{D}_{clean})\) on the clean subset keeps decreasing, while the loss \(L(\mathbf{w};\mathcal{D}_{noise})\) on the noisy subset is increasing, as formally stated in the following Theorem (see the proof in Appendix A.4):
**Theorem 4.4**.: _Suppose Assumption 4.1 holds with time \(T>0\) and sequence \(\{\alpha_{t}\}_{t=0}^{T}\), \(\alpha_{t}\in(0,1)\). We have, for all \(t\in[0,T]\) and sufficiently small \(\eta\),_
\[L(\mathbf{w}_{t+1};\mathcal{D}_{clean}) <L(\mathbf{w}_{t};\mathcal{D}_{clean});\] \[L(\mathbf{w}_{t+1};\mathcal{D}_{noise}) >L(\mathbf{w}_{t};\mathcal{D}_{noise}).\]
Accordingly, the training accuracy on the clean subset is increased, and that on the noisy subset is decreased.
_Residual magnitude:_\(|f(\mathbf{w};\mathbf{x})-y|\). As a consequence of the decreasing clean subset loss \(L(\mathbf{w};\mathcal{D}_{clean})\), the clean training samples are learned, in the sense that the network output \(f(\mathbf{w};\mathbf{x})\) moves towards its corresponding label \(y\), i.e., \(|f(\mathbf{w};\mathbf{x})-y|\) decreases on the clean subset. On the
Figure 5: Learning dynamics on two classes (“7” and “9”) of MNIST (noise level \(\delta=0.4\)) with FCN. **Left**: in the early stage (before the vertical dash line), the clean subset error decreases, while the noisy subset error increases. **Middle**: In the early stage, the clean subset average residual \(\mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}_{clean}}[|f(\mathbf{w};\mathbf{x})-y|]\) decreases, i.e., on average the network outputs of the clean subset move towards the labels, indicating a “learning” of the clean subset. On the other hand, the noisy subset average residual, \(\mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}_{noise}}[|f(\mathbf{w};\mathbf{x})-y|]\), monotonically increases, indicating that the noisy subset is not learned. **Right**: total test error and total training error.
other hand, the increase of \(L(\mathbf{w};\mathcal{D}_{noise})\) means that, on the noisy subset, the network output \(f(\mathbf{w};\mathbf{x})\) moves away from its corresponding label \(y\), but towards its ground truth label \(\hat{y}\). Namely, the noisy subset is not learned.
_Test loss._ As the test dataset \(\bar{\mathcal{D}}\) is not label-corrupted and is drawn from the same data distribution as \(\hat{\mathcal{D}}\), it is expected that the update rule in Eq.(12) decreases the test loss \(L(\mathbf{w};\bar{\mathcal{D}})\).
Figure 5 shows the clean-priority learning phenomenon on a binary classification task with two classes of MNIST. The relevant part is the early stage, i.e., before the early stopping point (left of the vertical dash line). As one can see, in this stage, the noisy subset prediction error and loss \(L(\mathbf{w};\mathcal{D}_{noise})\) keep increasing (see Appendix B for subset loss curves). In particular, the noisy subset prediction error increases from a random guess (error \(=0.5\)) at initialization towards \(100\%\). Meanwhile, the clean subset loss and prediction error keep decreasing. Moreover, the average residual magnitude \(|f(\mathbf{w};\mathbf{x})-y|\) decreases on the clean subset, but increases on the noisy subset, implying that only the clean subset is learnt. These behaviors illustrate that in the early stage the learning dynamics prioritize the clean samples.
In short, in the early stage, _clean-priority learning_ prioritizes the learning of clean training samples. The interesting point is that, although it seems impossible to distinguish the clean samples from the noisy ones directly from the data, this prioritization is possible because the model has access to first-order information, i.e., sample-wise gradients. Importantly, it is this awareness of the clean samples and this prioritization in the early stage that allow the possibility of achieving test performance better than the noise level.
### Early stopping point & later stage
As we have seen in the above subsection, the dominance of the magnitude \(\|g_{clean}^{(c)}\|\) over \(\|g_{noise}^{(c)}\|\) is one of the key reasons that clean-priority learning is maintained in the early stage. However, we shall see shortly that this dominance diminishes as the training goes on, resulting in the eventual termination of the clean-priority learning.
**Diminishing dominance of the clean gradient.** Recall that the sample-wise gradient is proportional to the magnitude of the residual: \(\nabla l(\mathbf{w})\propto y-f(\mathbf{w};\mathbf{x})\). The learning of a sample, i.e., a decreased \(|y-f(\mathbf{w};\mathbf{x})|\), results in a decrease in the magnitude \(|\nabla l(\mathbf{w})|\). As an effect of clean-priority learning, the residual magnitude \(|f(\mathbf{w};\mathbf{x})-y|\) evolves differently on different data subsets: it _decreases_ on the clean subset \(\mathcal{D}_{clean}\), but _increases_ on the noisy subset \(\mathcal{D}_{noise}\). This difference erodes the dominance of the clean subset gradient \(\|g_{clean}^{(c)}(\mathbf{w})\|\), which originates from the dominance of the population of clean training samples.
**Theorem 4.5** (Diminishing dominance of the clean gradient).: _Assume the neural network is infinitely wide and the learning rate \(\eta\) of the gradient descent is sufficiently small. Suppose Assumption 4.1 holds with time \(T>0\) and sequence \(\{\alpha_{t}\}_{t=0}^{T}\in(0,1)^{T}\). The sequence \(\{\alpha_{t}\}_{t=0}^{T}\) monotonically increases: for all \(t\in[0,T]\), \(\alpha_{t+1}>\alpha_{t}\)._
Please find the proof in Appendix A.5. As \(\alpha_{t}\) measures this clean dominance (\(\alpha_{t}\) close to \(1\) means less dominant), this theorem indicates that the dominance diminishes as the training goes on.
Figure 6 illustrates this diminishing dominance on the two-class MNIST classification problem. In the early stage, the ratio \(\|g_{clean}^{(c)}(\mathbf{w})\|/\|g_{noise}^{(c)}(\mathbf{w})\|\) starts at a value around the population ratio \((1-\delta)/\delta=1.5\), and monotonically decreases to around \(1\) at or before the early stopping point, indicating that the dominance vanishes.

Figure 6: Diminishing dominance of the clean gradient. The ratio \(\|g_{clean}^{(c)}(\mathbf{w})\|/\|g_{noise}^{(c)}(\mathbf{w})\|\) monotonically decreases in the early stage (before the vertical dash line), as a consequence of the clean-priority learning, as predicted by Theorem 4.5. The experiment setting is the same as in Figure 5.
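As an illustration of how this ratio can be tracked in practice, the following PyTorch sketch computes the clean/noisy gradient-norm ratio over the training set. It assumes a binary classifier that outputs probabilities and that the clean/noisy index sets are known (which they are when the label noise is injected synthetically); the helper names are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def subset_grad_norm(model, x, y, idx):
    """L2 norm of the summed sample-wise gradients over the subset `idx`,
    i.e. an estimate of ||g^(c)|| for that subset."""
    # Sum (not mean) reduction makes this the sum of sample-wise gradients.
    loss = F.binary_cross_entropy(model(x[idx]), y[idx], reduction="sum")
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads))

def clean_dominance_ratio(model, x, y, clean_idx, noisy_idx):
    """||g_clean|| / ||g_noise||; values above 1 indicate clean dominance."""
    return (subset_grad_norm(model, x, y, clean_idx) /
            subset_grad_norm(model, x, y, noisy_idx))
```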
**Learning the noisy samples.** In the later stage (i.e., after the early stopping point), the magnitudes of \(\|g_{clean}^{(c)}(\mathbf{w})\|\) and \(\|g_{noise}^{(c)}(\mathbf{w})\|\) are similar, and there is no apparent dominance of one over the other. The model and algorithm then no longer distinguish the clean subset from the noisy one, and there is no clean-priority learning. In this stage, the model learns both the clean and noisy subsets, aiming at an exact fit of the training data. Ultimately, the training errors of both subsets converge to zero.
It is expected that in this stage the loss and prediction error on the test dataset \(\bar{\mathcal{D}}\) become worse, as the learning on the noisy subset contaminates the performance achieved by the clean-priority learning in the earlier stage.
As illustrated in Figure 5, after the early stopping point, the noisy subset starts to be learnt. Specifically, both the training loss and error on this subset turn to decrease towards zero, and the average residual magnitude \(|f(\mathbf{w};\mathbf{x})-y|\) turns to decrease, indicating that the network output \(f(\mathbf{w};\mathbf{x})\) is learnt to move towards the (corrupted) label. It is worth noting that the learning on the clean subset is still ongoing, as both the training loss and error on this subset keep decreasing.
At a high level, in the early stage, the learning procedure prioritizes the clean training samples, allowing test performance superior to the noise level; in the later stage, the learning procedure picks up the noisy samples, worsening the test performance towards the noise level.
## 5 Multi-class classification
In this section we show that multi-class classification problems exhibit the same learning dynamics, especially the clean-priority learning, as described in Section 4.
For multi-class classification, we consider a variant of the sample-wise gradient, _single-logit sample-wise gradient_.
**Single-logit sample-wise gradients.** In a \(C\)-class classification problem, the neural network \(f\) has \(C\) output logits, and the labels are \(C\)-dimensional one-hot encoded vectors. One can view the neural network as \(C\) co-existing binary classifiers. Specifically, for each \(c\in\{1,2,\cdots,C\}\), the \(c\)-th logit \(f_{c}\) is a binary classifier, and the \(c\)-th component of the label \(y_{c}\in\{0,1\}\) is the binary label for \(f_{c}\).
By Eq.(5), the sample-wise gradient can be written as \(\nabla l(\mathbf{w})=\sum_{c=1}^{C}\nabla l_{c}(\mathbf{w}),\) where
\[\nabla l_{c}(\mathbf{w})\triangleq(f_{c}(\mathbf{w};\mathbf{x})-y_{c})\nabla h _{c}(\mathbf{w};\mathbf{x}) \tag{13}\]
is the _single-logit sample-wise gradient_, which only depends on quantities of the corresponding single logit.
We point out that the cleanness of a sample is only well defined with respect to each single logit, not to the whole output. For example, consider a sample with ground truth label \(0\) that is incorrectly labeled as class \(1\). For all the remaining binary classifiers, i.e., all except the \(0\)-th and \(1\)-st, this sample always belongs to the negative class, as \(y_{c}=0\) for all \(c\neq 0,1\); hence, the noisy sample is considered “clean” by these \(C-2\) binary classifiers. Therefore, a noisy sample is not necessarily noisy for all of the \(C\) binary classifiers.
With this observation, we consider the single-logit sample-wise gradient \(\nabla l_{c}(\mathbf{w})\) instead.
**At initialization.** Given \(c\in\{1,2,\cdots,C\}\), the \(c\)-th logit sub-network \(h_{c}\) (before _softmax_) is the same as the network \(h\) discussed in Section 3, and the output \(f_{c}\in(0,1)\). Hence, all the directional analysis for the binary case (Section 3) still applies to the single-logit sample-wise gradient \(\nabla l_{c}(\mathbf{w})\). See Appendix C for numerical verification.
Different from the _sigmoid_ output activation which tends to predict an average of \(0.5\) before training, the _softmax_ has an average output \(f_{c}\) around \(1/C\) with random guess at initialization. This leads to \(\mathbb{E}[f_{c}(\mathbf{w}_{0};\mathbf{x})-y_{c}]=1-1/C\) when \(y_{c}=1\), and \(\mathbb{E}[f_{c}(\mathbf{w}_{0};\mathbf{x})-y_{c}]=1/C\) when \(y_{c}=0\). Recalling that \(\mathcal{D}_{clean}^{(c)}\) and \(\mathcal{D}_{noise}^{(c)}\) (hence the corresponding \(\nabla h\)) have the same distribution, using Eq.(13) we have
\[g_{noise}^{(c)}(\mathbf{w}_{0}) \approx-\delta\hat{g}^{(c)}(\mathbf{w}_{0})/(C-1), \tag{14a}\] \[g_{clean}^{(c)}(\mathbf{w}_{0}) \approx(1-\delta)\hat{g}^{(c)}(\mathbf{w}_{0}). \tag{14b}\]
Therefore, we have the dominance of \(\|g_{clean}^{(c)}\|\) over \(\|g_{noise}^{(c)}\|\) at initialization, with a ratio
\[\|g_{clean}^{(c)}(\mathbf{w}_{0})\|/\|g_{noise}^{(c)}(\mathbf{w}_{0})\|\approx(C-1)(1-\delta)/\delta.\]

For example, with \(C=10\) classes and noise level \(\delta=0.3\), this initial ratio is \(9\times 0.7/0.3=21\).
Figure 7: Learning dynamics on multi-class classification. **Left**: in the early stage (before the vertical dash line), the clean subset error decreases, while the noisy subset error increases. **Middle**: in the early stage, the clean subset average residual \(\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{D}_{clean}}[\|f(\mathbf{w};\mathbf{x})-y\|]\) decreases, i.e., on average the network outputs on the clean subset move towards the labels, indicating “learning” on the clean subset. On the other hand, the noisy subset average residual, \(\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{D}_{noise}}[\|f(\mathbf{w};\mathbf{x})-y\|]\), monotonically increases, indicating that the noisy subset is not learned. **Right**: total test error and total training error. See subset loss curves in Appendix C.
**Learning dynamics.** As the configuration of \(\nabla l_{c}\) is similar to that of a binary classification, we expect learning dynamics similar to those discussed in Section 4, especially the clean-priority learning, to occur in multi-class classification.
We conduct experiments to classify the MNIST (with added label noise \(\delta=0.3\)) and CIFAR-10 (with added label noise \(\delta=0.4\)) datasets using a CNN and a ResNet, respectively. As shown in Figures 7 and 8, in most of the early stage the clean subset gradient dominates that of the noisy subset, and the dynamics exhibit the clean-priority learning characteristic: the clean subset error and residual decrease, while the noisy subset error and residual increase. Furthermore, the dominance of the clean subset monotonically decreases (Figure 8) until the early stopping point. In the later stage, the networks start to learn the noisy subsets. See the detailed experimental setup in Appendix B.
## 6 Discussion and Future work
In this section, we aim to provide some insights into the relationship between the clean-priority phenomenon described in this paper and previous works, as well as how this phenomenon may manifest in other gradient descent-based learning algorithms.
Firstly, previous studies observed that, for certain very large models, the test classification error may exhibit a second descent in the later stages of training (Nakkiran et al., 2021). In such scenarios, our analysis still holds for the first descent and the subsequent ascent that follows it. Regarding the second descent, we hypothesize that it may be connected to certain types of underlying feature learning dynamics, for example, the Expected Gradient Outer Product (EGOP) (Radhakrishnan et al., 2022). However, further investigation is needed to confirm this hypothesis, and we leave it as a topic for future research.
Secondly, as can be seen in Figures 1, 5, and 7, the test error converges sometime after the early stopping point. This suggests that, while the model is fitting the noise, the test performance at convergence is not catastrophic and still outperforms a random guess. This, in turn, implies that neural networks demonstrate a tempered over-fitting behavior, as described in (Mallinar et al., 2022).
#### Acknowledgments
We are grateful for the support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning ([https://deepfoundations.ai/](https://deepfoundations.ai/)) through awards DMS-2031883 and #814639 and the TI-LOS institute (NSF CCF-2112665). This work used NVIDIA V100 GPUs NVLINK and HDR IB (Expanse GPU) at SDSC Dell Cluster through allocation TG-CIS220009 and also, Delta system at the National Center for Supercomputing Applications through allocation bbj-delta-gpu from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
|
2305.10473 | Predicting Side Effect of Drug Molecules using Recurrent Neural Networks | Identification and verification of molecular properties such as side effects
is one of the most important and time-consuming steps in the process of
molecule synthesis. For example, failure to identify side effects before
submission to regulatory groups can cost millions of dollars and months of
additional research to the companies. Failure to identify side effects during
the regulatory review can also cost lives. The complexity and expense of this
task have made it a candidate for a machine learning-based solution. Prior
approaches rely on complex model designs and excessive parameter counts for
side effect predictions. We believe reliance on complex models only shifts the
difficulty away from chemists rather than alleviating the issue. Implementing
large models is also expensive without prior access to high-performance
computers. We propose a heuristic approach that allows for the utilization of
simple neural networks, specifically the recurrent neural network, with a 98+%
reduction in the number of required parameters compared to available large
language models while still obtaining near identical results as top-performing
models. | Collin Beaudoin, Koustubh Phalak, Swaroop Ghosh | 2023-05-17T16:56:19Z | http://arxiv.org/abs/2305.10473v2 | # Predicting Side Effect of Drug Molecules using Recurrent Neural Networks
###### Abstract
Identification and verification of molecular properties such as side effects is one of the most important and time-consuming steps in the process of molecule synthesis. For example, failure to identify side effects before submission to regulatory groups can cost millions of dollars and months of additional research to the companies. Failure to identify side effects during the regulatory review can also cost lives. The complexity and expense of this task have made it a candidate for a machine learning-based solution. Prior approaches rely on complex model designs and excessive parameter counts for side effect predictions. We believe reliance on complex models only shifts the difficulty away from chemists rather than alleviating the issue. Implementing large models is also expensive without prior access to high-performance computers. We propose a heuristic approach that allows for the utilization of simple neural networks, specifically the recurrent neural network, with a 98+% reduction in the number of required parameters compared to available large language models while still obtaining near identical results as top-performing models.
Molecular Property Prediction, Drug Evaluation, Machine Learning.
## I Introduction
Molecular property prediction is one of the oldest and most fundamental tasks within the field of drug discovery [1, 2]. Applying in silico methods to molecular property prediction offers the potential of releasing safer drugs to the market while reducing test time and cost. Historically, these in silico approaches relied on complex feature engineering methods to generate their molecule representations for processing [3, 4]. Such approaches are bound by the bias of the descriptor, which means the generated features are not reusable for different tasks, as some valuable information may be removed. The feature vectors also depend on current molecular comprehension; new discoveries could render them obsolete. Graph Neural Networks (GNN) remove the dependence on complex and temporal descriptors. GNNs became favorable due to the common practice of drawing molecules as graphs, which offers a generic form of input that allows machine learning models to build their own interpretation of the information rather than rely on human capabilities. Using this generic form, GNNs have been able to perform well on multiple chem-informatic tasks, especially molecular property prediction [5, 6]. However, GNNs are limited in their ability to model shared dependencies, and they have scalability issues: the size of the graphical input increases exponentially with each additional molecule that is represented, and with this growth, the cost of communication between graphical nodes also increases exponentially. Despite their built-in generic representation, GNNs perform worse at molecular property prediction than feed-forward neural networks [7]. With the recent success of large language models, newer attempts aim to build transformer-based approaches with promising signs of success [8]. While these large language models offer performance comparable to GNNs, they require up to 120 billion parameters to do so.
Due to the rapid explosion of parameters caused by GNNs, feed-forward neural networks, and transformers, we propose a heuristic approach using a recurrent neural network. Our approach can obtain close to state-of-the-art results with 99+% fewer parameters than Galactica [8]. In the following sections, we review the SIDER data set, compare the SMILES and SELFIES formats, recall the basic concepts of a recurrent neural network, and discuss a few of the related works that perform classification on the SIDER dataset (Section II). We then discuss the data pre-processing and model implementation details (Section III), followed by the model performance and a comparison to other state-of-the-art options (Section IV). Finally, we conclude the paper with a summary (Section V).
## II Background & Related Works
### _Side Effect Resource (SIDER)_
The principal molecular property in terms of human consumption is the side effect associated with the molecule. The Side Effect Resource (SIDER) data set attempts to create a single source of combined public records for known side effects [9]. The data set consists of 28 columns: the first column is the SMILES representation of a given molecule, and the next 27 columns are potential side effects. Each side effect of a molecule is marked with a one if the molecule is known to cause it and a zero otherwise.
### _Simplified Molecular-Input Line Entry System (SMILES)_
Simplified molecular-input line-entry system (SMILES) uses characters to build a molecular representation [10]. Letters represent the elements within a molecule, where the first letter of an element can be uppercase, denoting that the element is non-aromatic, or lowercase, denoting that the element is aromatic. If an element requires a second letter, it is lowercase. Another representation of aromaticity is the colon, which is the aromatic bond symbol. The other bond symbols are the period (.), the hyphen (-), the forward slash (/), the backslash (\(\backslash\)), the equal sign (=), the octothorpe (#), and the dollar sign ($). The period represents no bond, the hyphen represents a single bond, and the forward slash and backslash represent single bonds adjacent to a double bond; the slashes are only necessary when rendering stereochemical molecules. The equal sign represents a double bond, the octothorpe represents a triple bond, and the dollar sign represents a quadruple bond. In stereochemical molecules, the asperand can appear doubled (@@) to represent clockwise chirality or singly (@) to represent counterclockwise chirality. Numbers within a molecule mark the opening and closing of a ring structure or, if an element is within brackets, the number of atoms associated with that element. Numbers appearing within brackets before an element represent an isotope. Parentheses denote branches from the base chain.
### _Self-Referencing Embedded Strings (SELFIES)_
Self-Referencing Embedded Strings (SELFIES) improve on SMILES for machine learning purposes by creating a robust molecular string representation [11]. SMILES offered a simple and interpretable characterization of molecules that was able to encode the elements of molecules and their spatial features. However, the spatial features rely on an overly complex grammar in which rings and branches are not locally represented features. This complexity causes issues, especially in generative models, where machines frequently produce either syntactically or physically invalid strings. To remove this non-locality, SELFIES uses a single ring or branch symbol, and the length of the spatial feature is supplied directly, ensuring that any SELFIES string has a valid physical representation.
### _Recurrent Neural Networks (RNN)_
Elman networks, more commonly known as vanilla recurrent neural networks (RNN), introduce the concept of a time-dependent dynamic memory [12]. The idea is to make predictions about an input based on contextual information. These context-based predictions can follow four different input-output schemes: one-to-one, one-to-many, many-to-one, and many-to-many. One-to-one models are a variation of a classic neural network, one-to-many models are best suited for image caption generation, many-to-one models for sentiment analysis, and many-to-many for translation or video frame captioning. Figure 1 shows the basic structure of a vanilla RNN.
In Figure 1, \(X_{t}\) represents some input, \(H_{t-1}\) and \(H_{t}\) represent hidden states (which are representative of memory), \(O_{t}\) represents some output, and \(\sigma\) represents some activation function. The current input information combines with the previous hidden state, and the resulting combined state is fed to an activation function to introduce some non-linearity. This non-linearity produces the next hidden state, which can be further manipulated to create a desired output. The fundamental element is the hidden state, which theoretically allows for consideration of any historical input and its effects on the current input. For a mathematical description of an RNN, we refer to Equation 1 and Equation 2.
\[H_{t}=\sigma(W_{HH}H_{t-1}+W_{XH}X_{t}) \tag{1}\]
\[O_{t}=W_{HO}H_{t} \tag{2}\]
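As a concrete illustration of Equations 1 and 2, the following NumPy sketch implements one recurrence step and unrolls it in a many-to-one fashion. The dimensions, initialization scale, and the tanh nonlinearity are illustrative assumptions (the paper later replaces tanh with LeakyReLU).

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_ho, sigma=np.tanh):
    """One recurrence step of a vanilla (Elman) RNN, following Eqs. (1)-(2)."""
    h_t = sigma(W_hh @ h_prev + W_xh @ x_t)  # Eq. (1): new hidden state (memory)
    o_t = W_ho @ h_t                         # Eq. (2): output read from the hidden state
    return h_t, o_t

# Many-to-one usage: feed a sequence, keep only the final output.
rng = np.random.default_rng(0)
d_x, d_h, d_o = 8, 16, 4                     # illustrative dimensions
W_xh = 0.1 * rng.standard_normal((d_h, d_x))
W_hh = 0.1 * rng.standard_normal((d_h, d_h))
W_ho = 0.1 * rng.standard_normal((d_o, d_h))
h = np.zeros(d_h)
for x_t in rng.standard_normal((10, d_x)):   # a length-10 input sequence
    h, o = rnn_step(x_t, h, W_xh, W_hh, W_ho)
```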
### _Related Works_
**GROVER.** The graph representation from the self-supervised message passing transformer (GROVER) model takes two forms, GROVER\({}_{base}\) and GROVER\({}_{large}\)[13]. For this paper, we only consider GROVER\({}_{large}\), as it achieves the higher performance of the two. GROVER bases its design on popular large language models, such as BERT and GPT, where a large corpus of information pre-trains a model and fine-tuning is applied for the completion of downstream tasks [14, 15]. However, GROVER strays from prior works that attempt training using the SMILES string format [16] and instead uses graphs, which the authors believe to be more expressive. Previous graph pre-training approaches use the available supervised labels to train their model [5], but GROVER argues that a self-supervised approach performs better, and suggests two such tasks: contextual property prediction and graph-level motif prediction. Contextual property prediction takes a given element (node) within a molecular graph and predicts the connected elements and the type of bond used for the connection. Graph-level motif prediction takes a given molecule and attempts to predict the recurrent sub-graphs, known as motifs, that may appear within the molecule. To build the model, the authors designed a new architecture known as the GTransformer, which creates an attention-based understanding of molecular graphs. While this new architecture and self-supervised training approach offer appealing results, the model has 100M parameters and requires 250 Nvidia V100 GPUs for four days of pre-training.
Fig. 1: RNN architecture used for training; (\(H_{t-1},H_{t}\)) represent the hidden states, (\(O_{t}\)) represents the output state, and (\(X_{t}\)) represents the input information. The \(\sigma\) represents the activation function that operates on the combined input and hidden state.

**ChemRL-GEM.** The Geometry Enhanced Molecular representation learning method (GEM) for Chemical Representation Learning (ChemRL), ChemRL-GEM, draws inspiration from previous works using a graph-based approach, especially GROVER [5, 13]. ChemRL-GEM uses a large corpus of information to pre-train a model and, like GROVER, argues that the ambiguity of SMILES and its lack of structural information make it hard to build a successful model using a string-based approach [17]. ChemRL-GEM attributes the low performance of prior graph approaches to neglecting the available molecular 3D information and to improper pre-training tasks. Its pre-training tasks are split into two types: geometry-level and graph-level tasks. The geometry-level tasks comprise bond length prediction and bond angle prediction, which are local spatial structure predictions, and atomic distance matrix prediction, which is a global spatial prediction. The graph-level tasks are Molecular ACCess System (MACCS) key prediction and extended-connectivity fingerprint (ECFP) prediction. To build the model, the authors designed an architecture called GeoGNN, which trains on the atom-bond graph and the bond-angle graph of molecules to build a 3D structure-based understanding of the molecular graphs. ChemRL-GEM achieves SOTA performance and is one of the first attempts to pre-train a large 3D graph model. However, the approach requires training with 20 million samples, which is time-consuming: the authors state that pre-training on a small subset of the data takes several hours on one Nvidia V100 GPU, and fine-tuning requires 1-2 days on the same GPU. As a rough estimate of the actual training cost, a follow-up work called LiteGEM removed the 3D input of the model but still uses 74 million parameters and takes roughly ten days of training on one Nvidia V100 GPU [18].
**Galactica.** Galactica is inspired directly by previous large language models and their utilization of large data sets to pre-train models for downstream tasks [14, 15]. Differentiating itself from GPT or BERT, it uses a decoder-only setup from [19]. Unlike GROVER or ChemRL, Galactica focuses on general scientific knowledge and aims to apply it across the entire scientific domain [8]. The Galactica model takes several forms, but we focus our attention on the 120 billion parameter model, as it offers the best performance. Galactica trains on over 60 million individual scientific documents and 2 million SMILES strings. Its authors acknowledge that, when using SMILES, performance gains diminish as the model size increases, but they believe this could be overcome with more samples. Galactica offers performance competitive with graph-based approaches while offering a simplified architecture design. Unfortunately, the model requires 120 billion parameters, which are trained using 128 Nvidia A100 80GB nodes. Despite the massive model size, it is not SOTA for a single SMILES metric. This is likely due to the focus on building a general model, with no fine-tuning performed to obtain SOTA results.
## III Methods
Our goal is to achieve the highest possible performance by using a simple language model. Achieving this requires improvement in the data and the neural network.
### _Data Pre-processing_
The available SIDER data set uses SMILES for its molecular representation. After reviewing some of the molecule strings, we found that not all are canonical. Including non-canonical SMILES is problematic, as SMILES grammar is already complex; the molecules are therefore converted to canonical form to reduce complexity. The next issue we address is caused by RNNs. One of the many advantages of RNNs is that they allow variable-length inputs, accounting for a variable length of history. This is only true theoretically; in practice, memory is limited, which is the focus of many newer works [20]. Despite this limitation, it has recently been shown that RNNs can handle input lengths of around 45-50 before performance begins to degrade [21, 22]. Using this knowledge, we set a maximum length of 46 for the SMILES molecules. This keeps a slim majority of the molecules while allowing us to ensure the RNN performs well. Figure 2 visualizes the molecule lengths within the SIDER data set.

Fig. 2: Histogram of molecule lengths in the SIDER data set pre-canonization.
After limiting the SMILES molecular length, the SMILES are converted to SELFIES. The intention of converting SMILES to SELFIES is to reduce the grammar complexity and simplify the learning process of the RNN. SELFIES converts each element and structural component, such as rings or branches, into its own label. These labels are then encoded into numerical values based on their dictionary indices.
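This pipeline can be sketched as follows; the tool choices (RDKit and the selfies package) and the handling of unparsable entries are our assumptions, intended as a minimal illustration rather than the authors' code.

```python
from rdkit import Chem
import selfies as sf

MAX_SMILES_LEN = 46  # RNN performance degrades beyond ~45-50 inputs

def preprocess(smiles_list):
    """Canonicalise SMILES, drop overly long molecules, convert to SELFIES."""
    out = []
    for smiles in smiles_list:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:                      # skip unparsable entries
            continue
        canonical = Chem.MolToSmiles(mol)    # canonical form by default
        if len(canonical) > MAX_SMILES_LEN:
            continue
        out.append(sf.encoder(canonical))    # robust SELFIES string
    return out
```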
### _RNN Implementation_
To achieve the best possible results from the vanilla RNN model, we insert an embedding layer before the RNN with a dimension the size of the label dictionary. This is to maintain as much information as possible. The input, hidden, and output dimensions of the RNN are also set to the size of the label dictionary. We believe that maintaining the dimensional space, rather than reducing it before output generation, gives the RNN the best chance of learning the context of the molecule. RNNs historically use the Tanh activation function; however, we use LeakyReLU, as it reduces the possibility of saturation and typically results in higher performance [23, 24]. In addition, we include a dropout layer on the output of the RNN, which helps prevent overfitting and reduces the error rate of RNNs [25]. After processing the molecule through the RNN, the final state should have all important prior information encoded into it. This vector then passes through an additional LeakyReLU and dropout layer before being fed to a fully connected layer. The fully connected layer reduces the vector from the dictionary-sized dimension down to the number of classes present in the molecular property. A soft-max operation is then used to find the most likely class. Figure 3 contains an overview of the process described above.

Fig. 3: Overview of the RNN process.
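A minimal PyTorch sketch of this architecture is given below. The custom recurrent cell with LeakyReLU, the dropout rate, and the use of the final hidden state are our reading of the description, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class LeakyRNNCell(nn.Module):
    """Vanilla RNN cell (Eq. 1), with LeakyReLU in place of the historical tanh."""
    def __init__(self, dim):
        super().__init__()
        self.W_xh = nn.Linear(dim, dim, bias=False)
        self.W_hh = nn.Linear(dim, dim, bias=False)
        self.act = nn.LeakyReLU()

    def forward(self, x_t, h_prev):
        return self.act(self.W_hh(h_prev) + self.W_xh(x_t))

class SideEffectRNN(nn.Module):
    """Embedding -> LeakyReLU RNN -> dropout -> LeakyReLU -> dropout -> FC head,
    mirroring Figure 3. The 47-dimensional spaces follow the text; the dropout
    rate is an assumption."""
    def __init__(self, vocab_size=47, num_classes=2, p_drop=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, vocab_size)
        self.cell = LeakyRNNCell(vocab_size)
        self.act = nn.LeakyReLU()
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(vocab_size, num_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len) int64
        x = self.embed(tokens)
        h = x.new_zeros(x.size(0), x.size(2))
        for t in range(x.size(1)):                # unroll the recurrence
            h = self.cell(x[:, t, :], h)
        h = self.drop(h)                          # dropout on the RNN output
        logits = self.fc(self.drop(self.act(h)))  # additional LeakyReLU + dropout
        return logits.softmax(dim=-1)             # soft-max picks the likely class
```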
## IV Results and Comparative Analysis
### _Results_
Before training on SIDER, we further reduce the data set by setting a lower bound of 31 characters on the SMILES string length, allowing the search space to remain sufficiently complex while reducing the overall run time. This reduces the data set to 400 molecules, which is then split with a stratified 80%/20% train/test split [26]. The stratified split intends to maintain the known sample rate of a given side effect to model real-world testing. However, during training, we want to remove the sampling bias to ensure our model accurately learns the causes of a side effect. To reduce this bias, the minority samples within the training set are duplicated to obtain an even sample count between the side-effect-present and side-effect-absent classes. After replicating the training samples, the SMILES strings are converted to SELFIES. Typical natural language processing (NLP) methods use word, sub-word, or character tokenization to convert strings into numerical values, but we opt for a slightly different method, which we explain by referring to Equation 3, the SELFIES representation of benzene, where each element and structural component is enclosed in brackets. Using this representation, we tokenize based on each set of brackets that exists within the SELFIES-converted data set. This results in a total of 47 unique values.
\[[C][=C][C][=C][C][=C][Ring1][=Branch1] \tag{3}\]
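A tokenizer following this bracket-level scheme can be sketched with the selfies package; the vocabulary ordering below is an arbitrary choice of ours.

```python
import selfies as sf

def build_vocab(selfies_strings):
    """Collect every bracketed SELFIES symbol and index the vocabulary;
    on the filtered SIDER subset this yields 47 unique symbols."""
    symbols = set()
    for s in selfies_strings:
        symbols.update(sf.split_selfies(s))  # e.g. '[C]', '[=C]', '[Ring1]'
    return {sym: i for i, sym in enumerate(sorted(symbols))}

def encode(selfies_string, vocab):
    """Map a SELFIES string to its token indices."""
    return [vocab[sym] for sym in sf.split_selfies(selfies_string)]

benzene = "[C][=C][C][=C][C][=C][Ring1][=Branch1]"  # Eq. (3)
```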
After tokenizing the SELFIES, the embedding dimension, the input dimension of the RNN, and the hidden dimension of the RNN are set to a size of 47 to match the dimensional space of the tokens. To give the RNN model the best opportunity to make accurate classifications, we use a single RNN model per side effect classification task. Instead of predicting all 27 potential side effect classifications, we opt to predict 20, due to extreme imbalances present in the side effect data. The RNN architecture results in a model with 11.5K parameters that trains in under 2 minutes on an Nvidia GeForce RTX 3090. To compare our performance with other works that use SIDER, we evaluate using the receiver operating characteristic (ROC) score [1, 27]. The ROC scores for the 20 side effect classification tasks are shown in Table I. While ROC is helpful for comparison, it is commonly misunderstood [28, 29], so we include a small sample of 3 training/testing accuracy and loss curves in Figure 4 as a simple spot check of model performance.
Examining Figure 4, we note that the training and testing losses decrease across all three side effect properties. There are spikes within each of the loss curves, but such spikes have been observed since the inception of RNNs [12]. The training loss for all three side effects appears to saturate faster than the testing loss. Some gap in loss can arise from the difficulty of new samples, but the gap here is likely accentuated as an unfortunate side effect of the minority sample duplication process. The duplicate samples within the training set help the model learn which molecular components signal a side effect, but during training the repeated samples become easier for the model to predict. In the case of accuracy, both training and testing show upward trending curves, with improvement starting to attenuate between the 20th and 40th epochs. This attenuation roughly matches that of the loss curves. Comparing training and testing accuracy, there appears to be a roughly 20+% gap in performance at nearly every epoch, which we again attribute to the duplicate samples within the training set.
### _Comparisons_
To understand our model's performance, we compare it across multiple data sets to two top-performing GNN models, ChemRL-GEM [17] and GROVER\({}_{large}\)[13], and a top-performing NLP model, Galactica [8]. Overall results are shown in Table II.
Beginning with the SIDER test, the results in Table II show our approach scores .01 below the SOTA model. While there are no direct statistics available for ChemRL-GEM, we use roughly 99.7% fewer parameters than its follow-up work, LiteGEM [17, 18]. For the BBBP test, we outperform ChemRL-GEM and Galactica but perform worse than GROVER\({}_{large}\). While it may be possible that GROVER achieves these results due to its usage of graph representations, it more likely stems from having 100M parameters, over 8,000x more than our model [13]. For the Clintox test, our performance was unfortunately the lowest of all the models. Re-examining the Galactica work, we can outperform their 1.3B parameter model and are competitive with their 6.7B parameter model, meaning we can compete with an NLP model that has 80,000x more parameters [8]. Using BACE for evaluation, our model achieves the third highest score, a competitive performance coming in at only 9.4% less than the highest score received.

TABLE II: ROC performance per molecular property prediction across data sets.

| **Test** | **Proposed** | ChemRL-GEM [17] | GROVER\({}_{large}\)[13] | Galactica [8] |
| --- | --- | --- | --- | --- |
| SIDER | .61 | .62 | .658 | .632 |
| BBBP | .73 | .724 | .940 | .661 |
| Clintox | .75 | .901 | .944 | .826 |
| BACE | .80 | .856 | .894 | .617 |
## V Discussion & Conclusion
While large data models may offer slightly higher performance, we have shown that small models (specifically RNNs) are still viable candidates for molecular property prediction. Smaller models are cheaper, more practical, and more accessible solutions, as they don't require high-performance machines and billions of dollars to deploy. While there is a gap between the performance of the RNN model and the larger models, we believe it can be closed by continuing to focus on building more descriptive languages such as SELFIES [11].
## Acknowledgments
The work is supported in parts by NSF (CNS-1722557, CNS-2129675, CCF-2210963, CCF-1718474, OIA-2040667, DGE-1723687, DGE-1821766 and DGE-2113839) and seed grants from Penn State ICDS and Huck Institute of the Life Sciences.
|
2301.11517 | Task-Agnostic Graph Neural Network Evaluation via Adversarial
Collaboration | It has been increasingly demanding to develop reliable methods to evaluate
the progress of Graph Neural Network (GNN) research for molecular
representation learning. Existing GNN benchmarking methods for molecular
representation learning focus on comparing the GNNs' performances on some
node/graph classification/regression tasks on certain datasets. However, there
lacks a principled, task-agnostic method to directly compare two GNNs.
Additionally, most of the existing self-supervised learning works incorporate
handcrafted augmentations to the data, which has several severe difficulties to
be applied on graphs due to their unique characteristics. To address the
aforementioned issues, we propose GraphAC (Graph Adversarial Collaboration) --
a conceptually novel, principled, task-agnostic, and stable framework for
evaluating GNNs through contrastive self-supervision. We introduce a novel
objective function: the Competitive Barlow Twins, that allow two GNNs to
jointly update themselves from direct competitions against each other. GraphAC
succeeds in distinguishing GNNs of different expressiveness across various
aspects, and has demonstrated to be a principled and reliable GNN evaluation
method, without necessitating any augmentations. | Xiangyu Zhao, Hannes Stärk, Dominique Beaini, Yiren Zhao, Pietro Liò | 2023-01-27T03:33:11Z | http://arxiv.org/abs/2301.11517v3 | # Task-Agnostic Graph Neural Network Evaluation via Adversarial Collaboration
###### Abstract
It has been increasingly demanding to develop reliable methods to evaluate the progress of Graph Neural Network (GNN) research for molecular representation learning. Existing GNN benchmarking methods for molecular representation learning focus on comparing the GNNs' performances on some node/graph classification/regression tasks on certain datasets. However, there lacks a principled, task-agnostic method to directly compare two GNNs. Additionally, most of the existing self-supervised learning works incorporate handcrafted augmentations to the data, which has several severe difficulties to be applied on graphs due to their unique characteristics. To address the aforementioned issues, we propose GraphAC (Graph Adversarial Collaboration) - a conceptually novel, principled, task-agnostic, and stable framework for evaluating GNNs through contrastive self-supervision. We introduce a novel objective function: the Competitive Barlow Twins, that allow two GNNs to jointly update themselves from direct competitions against each other. GraphAC succeeds in distinguishing GNNs of different expressiveness across various aspects, and has demonstrated to be a principled and reliable GNN evaluation method, without necessitating any augmentations.
## 1 Introduction
Graph Neural Networks (GNNs) have gained immense research attention in recent years, leading to significant progress that has been successfully implemented across a broad range of fields, including chemistry (Gilmer et al., 2017) and biology (Stokes et al., 2020). This makes GNNs important tools in the molecular representation learning landscape, and improving their development is of great interest to the biomedical machine learning community. As this field of research grows rapidly, it becomes crucial to develop general and reliable evaluation methods to facilitate GNN research and quantify the performances of various GNN architectures in the context of molecular graph data.
Existing approaches to benchmarking GNNs focus on comparing the GNNs with respect to their performances on some node/graph classification/regression tasks in a collection of molecular/protein/DNA datasets (Dwivedi et al., 2020; Hu et al., 2020). However, these approaches can be limited in the following ways: 1) classification/regression tasks are naturally simple in terms of combinatorial complexity, and cannot fully challenge the GNNs to learn from the graphs; 2) there exist many different molecular prediction datasets used by various works on GNNs, but there is no standardized way to compare results across different datasets, and sometimes it is not feasible to evaluate on all of them; and 3) Deep Neural Networks (DNNs) generally rely on high-quality labeled data for high-performance training, but unfortunately, large datasets almost always contain examples with inaccurate, incorrect or even missing labels, known as label noise. It has been demonstrated that most DNNs, especially GNNs, are vulnerable to noisy labels, resulting in drastically lowered generalization performance (Dai et al., 2021; NT et al., 2019; Zhang et al., 2017; Zhang et al., 2020). Consequently, it is highly desirable to develop GNN evaluation methods that can effectively exploit the training data without necessitating any labels.
There have been some attempts to evaluate the capacities of GNNs against theoretical tests such as the Weisfeiler-Lehman (1-WL) graph isomorphism test (Weisfeiler and Leman, 1968; Xu et al., 2019; Dwivedi et al., 2020), but the information they provide is normally limited. These methods are typically designed for small-scale benchmarks, and measure GNNs' ability to detect patterns, substructures and clusters (Dwivedi et al., 2020), or graph properties such as diameter, eccentricity, and spectral radius (Corso et al., 2020). However, these approaches cannot be used consistently, since certain GNN types can exploit this information as positional encodings to directly cheat the task (Kreuzer et al., 2021; Bodnar et al., 2021; Dwivedi et al., 2022). Therefore, there is a need for a task-agnostic evaluation of the expressiveness of GNNs.
Designing principled self-supervised learning (SSL) methods for graphs is also a challenging task. As labeled data can be expensive, limited or even unavailable in many real-world scenarios, it has become increasingly demanding to develop powerful SSL methods on graphs. Many successful SSL works on graphs (Velickovic et al., 2019; Sun et al., 2020; You et al., 2020, 2021; Xu et al., 2021) rely on applying handcrafted augmentations to the graphs, which are further described in Section 2. However, there are several key difficulties in applying augmentations to graphs. Firstly, there exists no universal augmentation that works across all types of graphs. Secondly, graphs are not invariant to augmentations like images: applying filters to or rotating an image still preserves its essential invariances, but even a tiny augmentation on a graph can significantly change its topological structure or intrinsic properties. Another class of SSL methods on graphs that do not require augmentations (Stark et al., 2021) relies on exploiting the physical properties of small molecules, and cannot be generalized to other graph types, such as proteins or DNAs. Therefore, it is highly desirable to develop a principled and generalizable SSL framework that does not require handcrafted augmentations.
**Our solution: Graph Adversarial Collaboration (GraphAC).** We address both aforementioned questions by proposing a conceptually novel, principled, and task-agnostic framework for evaluating GNNs in the context of molecular data, via a self-supervised, adversarial collaboration manner, without the need of handcrafted augmentations. In the GraphAC framework, two GNNs directly compete against each other on the same unlabeled graphs. The more expressive GNN produces more complex and informative graph embeddings and is thereby able to win the game. We make the following contributions in this paper:
* We introduce a novel principle for evaluating GNNs, by having them directly compete against each other in a self-supervised manner, rather than comparing them using a scoreboard of training performances on some datasets;
* Inspired by the novel principle, we propose a new architecture and an original modification to the existing Barlow Twins loss (Zbontar et al., 2021) that enables the GNNs to stably compete against each other, while ensuring that more expressive GNNs can always win;
* We provide the very first framework for evaluating GNN expressiveness directly on the molecular graph data, without the need of a specific downstream task, or theoretical representations of these molecular graphs;
* We develop a principled contrastive learning framework without needing any handcrafted augmentations, which is also generalizable to various types of GNNs.
## 2 Related Work
**Contrastive Self-Supervised Learning.** To the best of our knowledge, there has not been any published attempt to develop a method for evaluating deep learning models by directly competing two models in a contrastive self-supervised environment, whether in the general machine learning or the graph representation learning communities. Current approaches center around competing with a set of baselines that evaluate a limited number of performance metrics on a fixed number of benchmarks or datasets. However, the state-of-the-art in contrastive SSL, both in the non-graph and the graph domains, is still relevant to this work. Its successes in building contrastive learning architectures can help us build a principled, task-agnostic graph model evaluation framework. It is worth noting that GraphAC is a task-agnostic evaluation of GNNs, not an optimization for downstream tasks. Therefore, the performance of state-of-the-art contrastive SSL on downstream tasks is not relevant.
Many works on contrastive SSL on graphs (Velickovic et al., 2019; Sun et al., 2020; You et al., 2020, 2021; Xu et al., 2021) are inspired by the successes of the idea of mutual information maximization between two representations of the same data, with manually applied augmentations, in the non-graph domain (Gutmann and Hyvarinen, 2010; Oord et al., 2018; Hjelm et al., 2019; He et al., 2020; Chen et al., 2020). Those works vary in augmentation strategies and mutual information estimators. However, in order to prevent _information collapse_ (models ignoring the input data and outputting identical and constant vectors), all those works require large batch sizes or memory banks, and extensive searches for augmentations and negative pairs, making them very costly. Besides, applying augmentations to graphs can be much harder than to images, since there exists no universal augmentation that works for all graph types. Furthermore, graphs are not noise-invariant: small changes to a graph can significantly alter its topological structure, especially for small graphs such as molecules. While existing research has developed fine-grained graph augmentations, it is still almost impossible to apply these augmentations while preserving the graph's intrinsic properties, such as the chemical properties of molecules. Moreover, graph augmentations can deviate the data from real-world distributions, since they introduce arbitrary human knowledge not provided by the training data. Stark et al. (2021) propose a noiseless framework by maximizing the mutual information between the embedding of a 2D molecular graph and the embedding capturing its 3D structure, but it is specific to the physical properties of molecules, and cannot be generalized to other domains. Consequently, _there is a need for a principled contrastive SSL framework that can be applied across a diversity of graph types without requiring any augmentations._
**Barlow Twins.** Zbontar et al. (2021) introduce an alternative approach to preventing information collapse, by maximizing the information content within the representations. In Barlow Twins, for a given input batch \(\mathbf{X}\in\mathbb{R}^{N_{b}\times d_{\mathbf{x}}}\) of batch size \(N_{b}\) and dimension \(d_{\mathbf{x}}\), two batches of distorted views \(\hat{\mathbf{X}}^{A}\) and \(\hat{\mathbf{X}}^{B}\) of \(\mathbf{X}\) are generated using manual data augmentation. The two batches of distorted views \(\hat{\mathbf{X}}^{A}\) and \(\hat{\mathbf{X}}^{B}\) are fed into two separate models, which produce batches of \(d\)-dimensional embeddings \(\mathbf{H}^{A},\mathbf{H}^{B}\in\mathbb{R}^{N_{b}\times d}\). For simplicity, the features in both \(\mathbf{H}^{A}\) and \(\mathbf{H}^{B}\) are assumed to have a mean of zero across the batch. Barlow Twins then computes the cross-correlation matrix \(\mathbf{C}\in\mathbb{R}^{d\times d}\) between \(\mathbf{H}^{A}\) and \(\mathbf{H}^{B}\) along the batch dimension, and applies the following loss function on \(\mathbf{C}\):
\[\mathcal{L}_{\text{BT}}=\underbrace{\sum_{i}^{d}(1-\mathbf{C}_{i,i})^{2}}_{ \text{invariance term}}+\lambda\underbrace{\sum_{i}^{d}\sum_{j\neq i}^{d} \mathbf{C}_{i,j}^{2}}_{\text{ redundancy reduction term}} \tag{1}\]
The invariance term of the Barlow Twins loss enforces the two output embeddings to be similar by pushing the on-diagonal elements of the cross-correlation matrix towards one. Meanwhile, the redundancy reduction term pushes the off-diagonal elements of the cross-correlation matrix towards zero, thereby decorrelating the different features of the embeddings, so that the embeddings contain non-redundant information about the data. This process implicitly maximizes the amount of information contained within the embedding vectors.
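For concreteness, Eq. (1) can be sketched in PyTorch as follows; the batch normalisation of the features and the default \(\lambda\) follow Zbontar et al. (2021), though minor numerical details may differ from the reference implementation.

```python
import torch

def barlow_twins_loss(h_a, h_b, lam=5e-3):
    """Barlow Twins objective (Eq. 1) for two (N_b, d) batches of embeddings."""
    n = h_a.size(0)
    # Normalise each feature over the batch so C becomes a correlation matrix.
    z_a = (h_a - h_a.mean(0)) / h_a.std(0)
    z_b = (h_b - h_b.mean(0)) / h_b.std(0)
    c = (z_a.T @ z_b) / n                                       # cross-correlation (d, d)
    on_diag = (1 - torch.diagonal(c)).pow(2).sum()              # invariance term
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # redundancy reduction
    return on_diag + lam * off_diag
```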
**Variance-Invariance-Covariance Regularization (VICReg).** Bardes et al. (2022) build VICReg on the principle of preserving the information content of the representations, similar to Barlow Twins. The architecture of VICReg is the same as Barlow Twins, except that it uses three regularization terms in its objective function: 1) invariance regularization \(\mathcal{L}_{\text{Inv}}\): the mean squared Euclidean distance between the output embeddings; 2) variance regularization \(\mathcal{L}_{\text{Var}}\): a hinge loss maintaining the standard deviation of the embeddings along the batch dimension close to 1, which forces the output embeddings within a batch to be different; and 3) covariance regularization \(\mathcal{L}_{\text{Cov}}\): the sum of the squared off-diagonal elements of the covariance matrix, with a factor \(\nicefrac{{1}}{{d}}\) to scale the term as a function of the feature dimension. This term attracts the covariances between every pair of features of the embeddings over a batch towards zero, decorrelating the different features of the embeddings and thus preventing them from encoding similar information. The overall loss function for VICReg is then a weighted sum of the invariance, variance and covariance regularization terms:
\[\mathcal{L}_{\text{VICReg}}=\lambda\mathcal{L}_{\text{Inv}}+\mu\mathcal{L}_{ \text{Var}}+\nu\mathcal{L}_{\text{Cov}} \tag{2}\]
where \(\lambda,\mu,\nu>0\) are hyperparameters controlling the importance of each term in the loss.
## 3 Method
The intuition behind Graph Adversarial Collaboration (GraphAC) is to have different GNNs competing against each other on the same _unlabeled_ graphs, and encouraging more expressive GNNs to produce more complex and informative graph embeddings. This can be measured by the ability to predict other GNNs' graph embeddings from a GNN's own graph embeddings: if a GNN can predict another GNN's graph embeddings from its own graph embeddings better than the other way round, then its graph embeddings can be deemed more complex and informative than the other GNN's graph embeddings, and therefore, more expressive. The two GNNs collaborate by predicting each other's output graph embeddings, and compete adversarially to prevent the other GNNs from predicting their own graph embeddings. To solve the challenge of maximizing the performance differences between different GNNs while ensuring stable training, we introduce the _Competitive Barlow Twins_, a novel pair of loss functions modified from the Barlow Twins described in Section 2.
### Competitive Barlow Twins
A deeper analysis of the Barlow Twins shows that, according to Zbontar et al. (2021)'s definition for the cross-correlation matrix \(\mathbf{C}\) between the embeddings,
\[\mathbf{C}_{i,j}=\frac{\sum_{b}^{N_{b}}\mathbf{H}_{b,i}^{A}\mathbf{H}_{b,j}^{ B}}{\sqrt{\sum_{b}^{N_{b}}\left(\mathbf{H}_{b,i}^{A}\right)^{2}}\sqrt{\sum_{b}^{N_{ b}}\left(\mathbf{H}_{b,j}^{B}\right)^{2}}} \tag{3}\]
the \((i,j)\)-th entry \(\mathbf{C}_{i,j}\) of the cross-correlation matrix represents how much feature \(i\) of the first model's output embeddings \(\mathbf{H}^{A}\) correlates to feature \(j\) of the second model's output embeddings \(\mathbf{H}^{B}\). Therefore, for output embeddings of dimensionality \(d\), the row \(\mathbf{C}_{i,[i+1:d]}\) at the upper-triangle of the cross-correlation matrix represents how much feature \(i\) of \(\mathbf{H}^{A}\) correlates to features \(i+1\) to \(d\) of \(\mathbf{H}^{B}\). For \(i\) close to one, the row \(\mathbf{C}_{i,[i+1:d]}\) in the upper-triangle becomes much longer, and thus the \(i\)-th feature of \(\mathbf{H}^{A}\) represented by that piece of the row correlates to the majority of the features of \(\mathbf{H}^{B}\). For \(i\) close to \(d\), the row at the upper-triangle becomes much shorter, and thus the \(i\)-th feature of \(\mathbf{H}^{A}\) represented by that piece of the row correlates to very few features of \(\mathbf{H}^{B}\). This means that the smaller-indexed features of the first model's output embeddings \(\mathbf{H}^{A}\) become the more important features, if monitored by the upper-triangle of the cross-correlation matrix. Similarly, in the lower-triangle of the cross-correlation matrix, the column \(\mathbf{C}_{[j+1:d],j}\) represents how much feature \(j\) of \(\mathbf{H}^{B}\) correlates to features \(j+1\) to \(d\) of \(\mathbf{H}^{A}\), making the smaller-indexed features of the second model's output embeddings also becoming the more important features. It can therefore be hypothesized that under this upper-lower-triangle setting, the first few features of both models' output embeddings are targeted at capturing the low frequency signals as they are easier to predict, and the later features are set to capture the high frequency signals, which are harder to predict.
Based on the above findings, if the two triangles of the cross-correlation matrix are summed, then the sum of each triangle is dominated by the first few rows/columns, as they contain the most entries. Therefore, the sum of the triangle provides a measure of how much a model's output features correlate to the other model's output features, weighted by importance, since there are more elements in the triangle corresponding to the more important features. Consequently, a larger sum implies a better correlation of a model's most important features in its output embeddings with the other model's output features, which implies a stronger ability to predict the other model's output embeddings from its own output embeddings. This naturally yields the definition of the Competitive Barlow Twins loss, which preserves the invariance term in the original Barlow Twins, but replaces the off-diagonal sum with the difference between the upper-triangle and the lower-triangle of the cross-correlation matrix:
\[\begin{split}\mathcal{L}_{\text{CBT}_{A}}&=\sum_{i}^ {d}(1-\mathbf{C}_{i,i})^{2}+\lambda\Bigg{(}\sum_{i}^{d}\sum_{j>i}^{d}\mathbf{C }_{i,j}^{2}-\mu\sum_{j}^{d}\sum_{i>j}^{d}\mathbf{C}_{i,j}^{2}\Bigg{)}\\ \mathcal{L}_{\text{CBT}_{B}}&=\sum_{i}^{d}(1-\mathbf{ C}_{i,i})^{2}+\lambda\Bigg{(}\sum_{j}^{d}\sum_{i>j}^{d}\mathbf{C}_{i,j}^{2}-\mu \sum_{i}^{d}\sum_{j>i}^{d}\mathbf{C}_{i,j}^{2}\Bigg{)}\end{split} \tag{4}\]
where \(\lambda,\mu>0\) are weighting coefficients, with \(\lambda\) inherited from the original Barlow Twins, and \(\mu\) trading off the importance of correlating the opponent GNN's output features (collaboration) and preventing the opponent GNN from correlating the GNN's own output features (competition). Although the above reasoning discourages the use of different weights on the sums of triangles, which is also confirmed by the hyperparameter tuning results described in Appendix D.1, we still include \(\mu\) in the definition of the Competitive Barlow Twins for the purpose of hyperparameter tuning.
Another important enhancement by the Competitive Barlow Twins is that, since the triangles make the smaller-indexed features of both models' output embeddings the more important features, both models' output embeddings are ordered by feature importance. This ordering prevents the models from simply permuting the entries of their output embeddings to avoid being predicted by their opponent models, and makes the training much more stable.
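For concreteness, a minimal PyTorch sketch of Eq. (4) might look as follows; the function name and the batch standardization of the embeddings (the usual convention for Barlow Twins-style cross-correlation matrices) are assumptions here, not the authors' reference implementation.

```python
import torch

def competitive_barlow_twins(h_a, h_b, lam=5e-3, mu=1.0):
    """Competitive Barlow Twins losses of Eq. (4) for two (N_b, d)
    batches of graph embeddings; returns (loss_A, loss_B)."""
    n = h_a.shape[0]
    # standardize along the batch dimension so C holds correlations
    h_a = (h_a - h_a.mean(0)) / (h_a.std(0) + 1e-6)
    h_b = (h_b - h_b.mean(0)) / (h_b.std(0) + 1e-6)
    c = (h_a.T @ h_b) / n                              # (d, d) matrix C

    invariance = ((1.0 - torch.diagonal(c)) ** 2).sum()
    upper = torch.triu(c, diagonal=1).pow(2).sum()     # sum over j > i
    lower = torch.tril(c, diagonal=-1).pow(2).sum()    # sum over i > j

    loss_a = invariance + lam * (upper - mu * lower)
    loss_b = invariance + lam * (lower - mu * upper)
    return loss_a, loss_b
```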
### Proposed Framework
The architecture of the GraphAC framework is illustrated in Figure 1. We also include the covariance regularization term from VICReg in GraphAC's loss functions because it decorrelates different feature dimensions within each output graph embedding, and forces the graph embeddings to be fully used to capture graph information. Therefore, we obtain the following definitions for GraphAC's final loss functions:
\[\begin{split}\mathcal{L}_{\text{GNN}_{A}}&=\alpha \mathcal{L}_{\text{CBT}_{A}}+\beta\mathcal{L}_{\text{Cov}}\\ \mathcal{L}_{\text{GNN}_{B}}&=\alpha\mathcal{L}_{ \text{CBT}_{B}}+\beta\mathcal{L}_{\text{Cov}}\end{split} \tag{5}\]
where \(\alpha,\beta>0\) are weighting coefficients, and \(\mathcal{L}_{\text{Cov}}\) is the VICReg covariance regularization term defined in Section 2. The invariance regularization term from VICReg is not included in the loss functions, because the effect of the invariance regularization term has already been achieved by the invariance term from the Competitive Barlow Twins. We also do not include the variance regularization term from VICReg in GraphAC's loss functions, because it forces the variance of the embeddings over a batch to be above a given threshold, which can potentially cause the training to be unstable. Although the Competitive Barlow Twins losses can enable stable training of the models, and can counter the instability caused by the VICReg variance regularization term, we still do not include that term in the loss functions, because it is used to prevent the models from producing the same embedding vectors for samples within a batch, which did not occur in this framework.
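Since \(\mathcal{L}_{\text{Cov}}\) is defined in a section not reproduced here, the sketch below assumes the standard VICReg form (squared off-diagonal entries of the per-batch feature covariance, scaled by the embedding dimension); the names are illustrative.

```python
import torch

def covariance_reg(h):
    """VICReg-style covariance regularization on a (N_b, d) embedding batch:
    penalizes squared off-diagonal entries of the feature covariance
    (the normalization by d follows the VICReg paper; an assumption here)."""
    n, d = h.shape
    h = h - h.mean(dim=0)
    cov = (h.T @ h) / (n - 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return off_diag.pow(2).sum() / d

# Eq. (5): combine with the Competitive Barlow Twins losses, e.g.
#   loss_a = alpha * cbt_a + beta * covariance_reg(h_a)
# with cbt_a from the sketch above and alpha, beta as weighting coefficients.
```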
Figure 1: Architecture of GraphAC’s framework. Batched unlabeled graph data \(\mathbf{X}\in\mathbb{R}^{N_{b}\times d_{w}}\) are fed into two different GNNs, obtaining two different batched embeddings \(\mathbf{H}^{A},\mathbf{H}^{B}\in\mathbb{R}^{N_{b}\times d}\). Then, the cross-correlation matrix \(\mathbf{C}\in\mathbb{R}^{d\times d}\) between \(\mathbf{H}^{A}\) and \(\mathbf{H}^{B}\) along the batch dimension is calculated. The Competitive Barlow Twins losses, \(\mathcal{L}_{\text{CBT}_{A}}\) and \(\mathcal{L}_{\text{CBT}_{B}}\), are then computed by pushing the on-diagonal elements of \(\mathbf{C}\) towards one, while adding the difference between the upper/lower-triangles of \(\mathbf{C}\). The pair of Competitive Barlow Twins losses are then used to update the two GNNs respectively. Details about the GraphAC framework are described in Section 3.
## 4 Evaluation
### Data Preparation
In order for GraphAC to provide the most realistic evaluations of the GNNs, the datasets used for evaluating it should be application-oriented with real-world implications. To ensure GraphAC can discriminate between GNNs with statistical significance, the datasets should be large-scale and of high quality. Moreover, since GraphAC is designed for graph-level prediction, the datasets should also be constructed for graph-level prediction, which means that they should contain a large number of relatively small graphs. Finally, in order for GraphAC to support GNNs both with and without edge features, and to allow studying the effect of edge features, the datasets should provide both node and edge features for the graphs. Based on the above requirements, GraphAC is best suited to drug-like small molecular datasets for the following reasons:
* Molecules can naturally be represented as graphs;
* Molecular property prediction is a fundamental task within many important applications in chemistry, biology and medicine;
* There is a vast variety of molecules in the world, and the drug-like small molecular graphs can be trained efficiently without requiring extensive GPU resources.
Therefore, we use the largest molecular property prediction dataset from OGB (Hu et al., 2020), namely the ogbg-molpcba dataset. It contains 437,929 drug-like molecules, with on average 26.0 atoms (nodes) and 28.1 bonds (edges) per molecule (graph). Despite the fact that the dataset comprises multiple classification tasks and its class balance is highly skewed, these factors do not affect GraphAC's evaluation, as only the molecular graphs are used while their labels are discarded. In order to confirm that GraphAC is indeed task-agnostic, we also evaluate GraphAC on the ogbg-code2 dataset, which contains 452,741 abstract syntax trees obtained from Python method definitions, with on average 125.2 nodes and 124.2 edges per tree.
### Experimental Setup
Training and experiments were conducted on an NVIDIA A100 SXM GPU with 80GB graphics memory. All experiments were trained for 50 epochs. The pseudocode for the core training algorithm of GraphAC can be found in Appendix A, and details of the hyperparameter tuning experiments are described in Appendix D.1. The source code for GraphAC is publicly available at [https://github.com/Victor2XY/GraphAC](https://github.com/Victor2XY/GraphAC).
In order to fairly compare the GNNs as well as to evaluate GraphAC's ability in distinguishing different components of a GNN, the experiments were split into five groups of controlled experiments. In each group, one component of the GNN is varied, while all other components are fixed. The five aspects of a GNN evaluated by GraphAC are as follows (a sketch of the double round-robin pairing schedule is given after the list):
* **Number of GNN layers:** in this group of experiments, PNAs (Corso et al., 2020) with 2, 4, 6, 8, and 10 layers compete in the GraphAC framework on a double round-robin basis, with one extra experiment performed for each model to compete against itself. All PNAs have a fixed hidden dimension of 256, and use the combination of [max, mean, sum] as their aggregators. All PNAs use [identity, amplification, attenuation] as their scalers, and their message passing functions are parametrized by 2-layer MLPs. We choose PNAs for evaluation due to their flexibility and state-of-the-art performance on molecular tasks.
* **Hidden dimensions:** in this group of experiments, 4-layer PNAs with hidden dimensions of 16, 32, 64, 128, and 256 compete in the GraphAC framework on a double round-robin basis, with an additional experiment performed for each model to compete against itself. All PNAs utilize [max, mean, sum] as their aggregators.
* **Aggregators:** in this group of experiments, four 4-layer PNAs with 64 hidden dimensions, and [max], [mean], [sum], [max, mean, sum] as their aggregators respectively, are set to compete in the GraphAC framework on a double round-robin basis, again with one extra experiment for each model to compete against itself.
* **GNN architectures:** in this group of experiments, PNA (with [max, mean, sum] as its aggregators), GIN (Xu et al., 2019) and GCN (Kipf and Welling, 2017), all with 4 layers and 64 hidden dimensions, are set to compete in the GraphAC framework on a double round-robin basis, again with one extra experiment for each model to compete against itself.
* **Edge features:** in this group of experiments, PNAs with 4, 6, and 8 layers and hidden dimensions of 64, 128, and 256 are used. All PNAs use [max, mean, sum] as their aggregators. In each experiment, two PNAs with the same structure, one with edge features and the other without, are inserted into GraphAC for competition.
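The pairing schedule itself is not spelled out in code in the paper; a possible sketch of the double round-robin (every ordered pair of distinct configurations, plus one self-match per configuration), with `configs` as hypothetical model descriptors, is:

```python
from itertools import permutations

configs = [2, 4, 6, 8, 10]          # e.g. numbers of PNA layers (illustrative)

# double round-robin: every ordered pair (A, B) with A != B ...
matches = list(permutations(configs, 2))
# ... plus one extra self-match per configuration
matches += [(c, c) for c in configs]

for cfg_a, cfg_b in matches:
    # train GNN_A(cfg_a) against GNN_B(cfg_b) in GraphAC
    # and record the loss difference L_GNN_A - L_GNN_B
    pass
```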
### Results
Training took from 1.7 hours (2-layer PNA vs. 2-layer PNA) to 4.2 hours (10-layer PNA vs. 10-layer PNA). The details of the training process are described in Appendix D.2. The training outcomes indicate that the more expressive GNN consistently achieves a lower loss in our framework, and that GraphAC can successfully avoid information collapse.
The results of the experiments, recorded as \(\mathcal{L}_{\text{GNN (edge features)}}-\mathcal{L}_{\text{GNN (no edge features)}}\) for the edge features group and \(\mathcal{L}_{\text{GNN}_{A}}-\mathcal{L}_{\text{GNN}_{B}}\) for all other groups, are reported in Table 1. Another two tables containing detailed results of the experiments can be found in Appendix B. These results demonstrate that _GraphAC can successfully distinguish GNNs of different expressiveness across various aspects, and consistently favors the more expressive GNNs_: 1) deeper GNNs; 2) GNNs with larger hidden dimensions; 3) combining multiple aggregators \(>\) sum \(>\) mean \(>\) max as aggregators (Xu et al., 2019; Corso et al., 2020); 4) PNA \(>\) GIN \(>\) GCN (Xu et al., 2019; Corso et al., 2020); and 5) GNNs that include edge features.
*Number of GNN layers (256 hidden dims, aggregators [max, mean, sum]). Rows: #layers in GNN\({}_{A}\); columns: #layers in GNN\({}_{B}\); left block: ogbg-molpcba, right block: ogbg-code2.*

| | 2 | 4 | 6 | 8 | 10 | 2 | 4 | 6 | 8 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **2** | -0.04 | 1.29 | 1.46 | 1.88 | 1.97 | -0.01 | 0.31 | 0.36 | 0.63 | 0.65 |
| **4** | -1.31 | 0.03 | 0.80 | 1.40 | 1.75 | -0.25 | 0.01 | 0.34 | 0.40 | 0.52 |
| **6** | -1.48 | -0.84 | -0.01 | 0.67 | 0.81 | -0.36 | -0.31 | -0.00 | 0.24 | 0.34 |
| **8** | -1.69 | -1.28 | -0.41 | 0.03 | 0.56 | -0.59 | -0.40 | -0.27 | 0.00 | 0.19 |
| **10** | -2.01 | -1.75 | -1.08 | -0.67 | -0.00 | -0.67 | -0.46 | -0.35 | -0.18 | -0.08 |

*Hidden dimensions (4 layers, aggregators [max, mean, sum]).*

| | 16 | 32 | 64 | 128 | 256 | 16 | 32 | 64 | 128 | 256 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **16** | 0.02 | 1.39 | 1.88 | 2.36 | 2.35 | 0.01 | 0.01 | 0.01 | 0.06 | 0.93 |
| **32** | -1.24 | 0.00 | 0.93 | 1.61 | 2.09 | -0.66 | -0.04 | 0.53 | 0.74 | 0.88 |
| **64** | -2.29 | -0.89 | 0.02 | 1.21 | 1.81 | -0.73 | -0.66 | -0.00 | 0.39 | 0.62 |
| **128** | -2.49 | -1.61 | -1.00 | -0.01 | 1.49 | -0.81 | -0.72 | -0.41 | -0.03 | 0.42 |
| **256** | -2.54 | -2.04 | -1.70 | -1.33 | -0.01 | -0.91 | -0.78 | -0.64 | -0.40 | -0.03 |

*Aggregators (4 layers, 64 hidden dims).*

| | [max] | [mean] | [sum] | Comb. | [max] | [mean] | [sum] | Comb. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **[max]** | -0.03 | 0.18 | 0.31 | 0.34 | -0.02 | 0.14 | 0.30 | 0.34 |
| **[mean]** | -0.17 | -0.01 | 0.24 | 0.33 | -0.15 | -0.00 | 0.17 | 0.20 |
| **[sum]** | -0.30 | 0.25 | 0.05 | 0.23 | -0.30 | -0.12 | -0.01 | 0.15 |
| **Comb.** | … | … | … | … | … | … | … | … |

Table 1: Loss differences \(\mathcal{L}_{\text{GNN}_{A}}-\mathcal{L}_{\text{GNN}_{B}}\) produced by GraphAC (rows: GNN\({}_{A}\); columns: GNN\({}_{B}\)); a positive entry means the column model achieved the lower loss.
Furthermore, regardless of the ordering of the GNNs in the framework, there is no distinction between GNN\({}_{A}\) and GNN\({}_{B}\): for all pairs of experiments, even if the order is switched, the absolute loss differences remain roughly equal, only with the sign flipped. These observations suggest that _GraphAC can genuinely distinguish different GNNs, regardless of their ordering in the framework_. Additionally, for the experiments with the same GNNs competing, GraphAC produced loss differences close to zero, demonstrating its ability to let GNNs with the same expressiveness tie, rather than falsely deciding a winner. Moreover, for every three GNNs GNN\({}_{A}\), GNN\({}_{B}\) and GNN\({}_{C}\) in the experiments, if their expressiveness can be ordered as GNN\({}_{A}>\) GNN\({}_{B}>\) GNN\({}_{C}\), then GraphAC also produces \(|\mathcal{L}_{\text{GNN}_{A}}-\mathcal{L}_{\text{GNN}_{C}}|>|\mathcal{L}_{\text{GNN}_{A}}-\mathcal{L}_{\text{GNN}_{B}}|\). This phenomenon shows that _GraphAC is able to produce a total ordering of all GNNs_, which further demonstrates its credibility in GNN evaluation.
Some notable group-specific observations are as follows:
**Different aggregators** It is observed that the loss differences in the aggregators group are less significant than in the numbers of GNN layers and hidden dimensions groups. This is possibly because the effect on expressiveness of the aggregators is less significant than that of the number of parameters (i.e., number of layers and hidden dimensions).
**Different GNN architectures** It is also observed that the differences between architectures have a greater impact than simply changing the aggregators. This can be due to other differences, such as the message-passing framework in PNA compared to the convolutions in GCN and GIN, and the added \(\epsilon\) term in GIN compared to GCN.
**Inclusion of edge features** It is also observed that when the number of layers and hidden dimensions are larger, the magnitude of the loss difference becomes smaller. This is possibly because when a GNN is more complex, it can capture enough information from graphs even when they contain no edge information, thus the gain in performance by including edge features becomes relatively smaller.
### Correlation with Task Performance
In order to fully validate that the GNNs favored by GraphAC are indeed more expressive, we further evaluate all the GNNs used in the aforementioned experiments against the supervised learning tasks under the ogbg-molpcba and ogbg-code2 datasets (Hu et al., 2020). Each of the supervised training experiments takes 50 epochs. We then perform a correlation study on the GNNs' task performances with their expressiveness rankings produced by GraphAC.
Figure 2 shows the GNNs' task performance on both datasets with respect to their expressiveness rankings produced by GraphAC, measured using the average precision of multi-class classification for the tasks on the ogbg-molpcba dataset, and the F1 score of sub-token prediction for the tasks on the ogbg-code2 dataset. The plots are divided into the experiment groups as presented in the previous section, in order to accurately demonstrate the correlation. The consistent, monotonic upward trend shows a strong correlation between the GNNs' task performance and GraphAC's expressiveness rankings on them, suggesting that _GraphAC can genuinely distinguish GNNs of different expressiveness across various aspects, favoring the more expressive GNNs_. Separated plots of Figure 2, with one diagram per experiment group and detailed descriptions of the GNN architectures and parameters, can be found in Appendix C.
Figure 2: Correlation plots of GraphAC’s GNN rankings with the GNNs’ performances. The positive correlations show that GraphAC’s expressiveness rankings align well with the GNNs’ performances.
## 5 Conclusion
We propose GraphAC (Graph Adversarial Collaboration), a novel, principled, and task-agnostic framework for evaluating GNNs through contrastive self-supervision, without the need for handcrafted augmentations. Inspired by the Barlow Twins loss (Zbontar et al., 2021), we introduce a novel objective function: the Competitive Barlow Twins, which replaces its redundancy reduction term with a difference between the upper-triangle and lower-triangle of the cross-correlation matrix of the two GNNs' output embeddings. GraphAC successfully distinguishes GNNs of different expressiveness in all experiments, within graphs of two distinct contexts (molecular graphs and abstract syntax trees), across various aspects including the number of layers, hidden dimensionality, aggregators, GNN architecture and edge features, and ensures that more expressive GNNs always win with a statistically significant difference. GraphAC is also able to estimate the degree of expressiveness of different GNNs, and produce a total ordering of all GNNs with its measurements. GraphAC provides a novel principle of evaluating GNNs and an effective contrastive SSL framework without requiring any augmentations, making a notable contribution to the graph SSL and molecular representation learning community, which can be applied to many important tasks in drug discovery.
We believe that the success of GraphAC opens up a new, principled way of thinking when developing contrastive SSL methods, by considering the more expressive GNN as an encoder that learns more complex but less general information from the graphs, and the less expressive GNN as one that captures more basic but general information. Consequently, combining the two GNNs creates a better overall understanding of the graphs and can be used to perform SSL on graphs without manually applying augmentations, which may have introduced arbitrary human knowledge that was not originally provided by the training data.
## Acknowledgements
This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service, provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2310.03652 | Extreme sparsification of physics-augmented neural networks for
interpretable model discovery in mechanics | Data-driven constitutive modeling with neural networks has received increased
interest in recent years due to its ability to easily incorporate physical and
mechanistic constraints and to overcome the challenging and time-consuming task
of formulating phenomenological constitutive laws that can accurately capture
the observed material response. However, even though neural network-based
constitutive laws have been shown to generalize proficiently, the generated
representations are not easily interpretable due to their high number of
trainable parameters. Sparse regression approaches exist that allow
obtaining interpretable expressions, but the user is tasked with creating a
library of model forms which by construction limits their expressiveness to the
functional forms provided in the libraries. In this work, we propose to train
regularized physics-augmented neural network-based constitutive models
utilizing a smoothed version of $L^{0}$-regularization. This aims to maintain
the trustworthiness inherited by the physical constraints, but also enables
interpretability which has not been possible thus far on any type of machine
learning-based constitutive model where model forms were not assumed a priori
but were actually discovered. During the training process, the network
simultaneously fits the training data and penalizes the number of active
parameters, while also ensuring constitutive constraints such as thermodynamic
consistency. We show that the method can reliably obtain interpretable and
trustworthy constitutive models for compressible and incompressible
hyperelasticity, yield functions, and hardening models for elastoplasticity,
for synthetic and experimental data. | Jan N. Fuhg, Reese E. Jones, Nikolaos Bouklas | 2023-10-05T16:28:58Z | http://arxiv.org/abs/2310.03652v1 | Extreme sparsification of physics-augmented neural networks for interpretable model discovery in mechanics
###### Abstract
Data-driven constitutive modeling with neural networks has received increased interest in recent years due to its ability to easily incorporate physical and mechanistic constraints and to overcome the challenging and time-consuming task of formulating phenomenological constitutive laws that can accurately capture the observed material response. However, even though neural network-based constitutive laws have been shown to generalize proficiently, the generated representations are not easily interpretable due to their high number of trainable parameters. Sparse regression approaches exist that allow obtaining interpretable expressions, but the user is tasked with creating a library of model forms which by construction limits their expressiveness to the functional forms provided in the libraries. In this work, we propose to train regularized physics-augmented neural network-based constitutive models utilizing a smoothed version of \(L^{0}\)-regularization. This aims to maintain the trustworthiness inherited by the physical constraints, but also enables interpretability which has not been possible thus far on any type of machine learning-based constitutive model where model forms were not assumed a priori but were actually discovered. During the training process, the network simultaneously fits the training data and penalizes the number of active parameters, while also ensuring constitutive constraints such as thermodynamic consistency. We show that the method can reliably obtain interpretable and trustworthy constitutive models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity, for synthetic and experimental data.
Physics-augmented machine learning Solid mechanics Data-driven constitutive models
## 1 Introduction
In continuum mechanics, material-specific constitutive models are a necessary closure relationship to describe the motion of solid bodies. In contrast to balance laws and kinematic equations, these mathematical descriptions of material behavior do not directly follow from a physical law [1]. Nevertheless, mechanistic assumptions, mathematical well-posedness, and physical understanding still constrain the formulations to adhere to objectivity, material symmetry, and thermodynamic consistency considerations, among others [2].
With the recent advances in manufacturing technologies including additive manufacturing, we are experiencing a rise of highly specific materials with advanced requirements [3]. The systematic satisfaction of these design requirements has led to a pressing need for advanced predictive capabilities, and automation directly linking experimental characterization and modeling.
More specifically, specialized constitutive models are required that focus on constructing mathematical models capable of describing relevant physical phenomena [4] and that can eventually enable computations at the structural level. In the last decades, phenomenological models have been the main driver of progress in constitutive modeling. Phenomenological models are derived and developed based on knowledge of the material response, are based on a limited number of user-chosen functional forms, and are characterized by a limited number of unknown material parameters often directly tied to specific experiments [5]. They aim to describe complex behaviors with as few parameters as possible [6], are generally interpretable, and the user has knowledge about their extrapolation behavior [7]. Extrapolation capabilities are important as traditional mechanical experiments provide stress-strain data (or rates of these quantities for rate-dependent responses) only for limited stress- or strain-states, e.g. uniaxial, biaxial, simple shear and hydrostatic, leaving the majority of the strain- or stress-space virtually unexplored. Crucially, this has been accomplished by strictly enforcing thermodynamic constraints and mechanistic assumptions in phenomenological constitutive models. However, due to the specific user-chosen model form, phenomenological models in general experience a type of model-form error [8], i.e. the model often is not descriptive enough to fully fit the data, which results from incomplete knowledge about the material response [9].
In order to overcome the restrictions of functional form selection of classical constitutive models and to automate their formulation, machine learning (ML)-based constitutive models have been popularized in recent years [10]. They have been used to model hyperelasticity [11, 12, 13], viscoelasticity [14, 15, 16] and plasticity [17, 18, 19]. Recently, these representations have been extended to enforce objectivity- and material symmetry-constraints [11, 20], polyconvexity [21, 22, 23, 24] and thermodynamic consistency [25, 26, 27, 28] in a physics-augmented manner. Many of these approaches are built around the flexibility of neural networks, which allows them to fulfill the constitutive constraints by construction. For example, hyperelastic models as suggested by Ref. [21] are designed to be polyconvex and rely on input convex neural networks [29], an architecture which can strictly enforce a strain energy density that is polyconvex with respect to its inputs. However, even though these proposed ML-based formulations are designed to comply with mechanistic and thermodynamic constraints, they are either not easily interpretable due to the high number of parameters required in these representations (e.g. often in the order of thousands or more), or might be too restrictive if a model form (or a model-form library) is already pre-selected [30, 31, 32]. The former is also problematic in the low and limited data domain because, although the physics augmentation can act as a regularizer in some cases, the models still overfit, as will be shown in this work. Lastly, neural network models take longer to implement in existing commercial and open-source computational infrastructure (e.g. finite element modeling platforms) that is not directly coupled to machine-learning frameworks [33]. To combat all of these points and to efficiently discover interpretable constitutive models without restrictive assumptions about specific model forms, we introduce extreme sparsification to physics-augmented neural network-based constitutive models. Our approach is based on network pruning, which is illustrated in Figure 1.
In recent years, neural network pruning - the reduction of the network size by removing parameters - has received increased interest in the machine-learning community [34, 35]. In this context, pruning is mostly utilized to enable the networks to be deployed in real-time on mobile devices. In general, there are two ways to prune a neural network:
* In a first step train a full neural network without any regularization. Then in the second step remove all trainable parameters that are below a threshold and then, in the third step, retrain the model. These greedy phases of pruning and retraining may then be repeated until the required balance between performance and network size is reached [36].
* Regularize the neural network in an eager manner directly during training without the need for any postprocessing or iterative steps [37]. This approach of course reduces the training time to obtain a pruned network but typically has a hyperparameter controlling the influence of the secondary sparsity objective.
In this work, we follow the latter since we aim to be as close as possible to the standard training procedure of physics-augmented neural networks for constitutive modeling in mechanics. In general, most techniques are built around penalizing the \(L^{p}\)-norm (\(p\geq 0\)) of the parameters using an additional term in the loss function [36, 38]. Typical norms are the \(L^{1}\)- and \(L^{2}\)-norm, which are known as Lasso- and Ridge-regularization, respectively [39]. Other methods prune parameters in groups to remove whole neurons or channels [40]. Recently, Ref. [41] introduced a pruning technique that aims to make neural networks more modular and interpretable by pruning the network during training by encouraging _locality_, i.e. the more neurons communicate the closer they should be in Euclidean space. This is achieved by a local \(L^{1}\)-regularization measure and by placing neurons with high communication closer together by employing a swapping algorithm. In this work, we rely on Ref. [42], which prunes the network through a smoothed version of the expected \(L^{0}\)-regularization. The benefits of this approach include that it enforces sparsity without placing a penalty on the magnitude of the weights, and that it allows parameters to be exactly zero. Hence, no thresholding is necessary. We remark that our work is related to neural symbolic regression [43, 44]; however, we aim to use standard neural network models that have been used in the data-driven constitutive modeling community and combine these formulations with
sparsification techniques instead of selecting a model from a library of formulations. We choose this approach to enhance the expressiveness of the models and avoid selecting a model-informed basis for our representation.
The paper is structured as follows. Section 2 introduces physics-augmented neural networks and explains their sparsification through smoothed \(L^{0}\) regularization. We test the framework's ability to produce interpretable constitutive models for hyperelasticity and elastoplasticity in Section 3. The paper is concluded in Section 4.
## 2 Sparsifying physics-augmented neural networks for constitutive modeling
Due to their remarkable flexibility, neural networks have emerged as the most used machine-learning approach for data-driven constitutive modeling. In particular, compared to other regression techniques, such as Gaussian process regression or Support Vector Regression, that have been applied for constitutive modeling [45, 46, 6], neural networks have received more attention with regard to finding a proficient balance between enforcing constitutive constraints and retaining model expressiveness [47]. In the following, we will show two examples of incorporating constraints into neural networks and discuss a potential way of sparsifying them. We remark that in this context we mean the sparsification of all trainable parameters of a neural network, instead of, e.g., the sparse identification of terms from a user-defined functional library as in SINDy [48, 49] or the EUCLID framework [7, 50].
### Physics-augmented neural network formulations
Two classes of physics augmentation have been found to be crucial for constitutive modeling:
* Input convex/concave functions. Convex functions are needed to model polyconvex strain energy density functions for hyperelasticity [21, 24] and guarantee dissipation requirements when utilizing potentials to model yield functions [51, 19, 52] or hardening behavior [53, 27].
Figure 1: Extreme sparsification of a physics-augmented neural network constitutive model to improve their interpretability. Note the final input convex network utilizes the pass-through of \(I_{1}\) and \(I_{2}\) terms.
* Positive, monotonically increasing functions or their counterparts negative, monotonically decreasing functions. These functions are needed to model the derivatives of convex functions [26] or where mechanistic assumptions require monotonic effects such as isotropic hardening [27].
Even though these examples of constraints might sound limited, they form the basis of the framework for constitutive modeling for several classes of materials, whose responses are described via hyperelasticity, viscoelasticity, elastoplasticity, viscoplasticity and damage mechanics [54]. We will briefly highlight how both of these constraints can be intrinsically incorporated into feed-forward neural networks.
#### 2.1.1 Input convex neural network
Following Ref. [29], consider an output \(\hat{\mathbf{y}}\in\mathbb{R}^{n^{L}}\) connected to an input \(\mathbf{x}_{0}\in\mathbb{R}^{n^{0}}\) by the neural network \(\mathcal{N}\) given by
\[\begin{split}\mathbf{x}_{1}&=\sigma_{1}\left(\mathbf{x}_{0}\mathbf{W}_{1}^{T}+\mathbf{b}_{1}\right)\in\mathbb{R}^{n^{1}},\\ \mathbf{x}_{l}&=\sigma_{l}\left(\mathbf{x}_{l-1}\mathbf{W}_{l}^{T}+\mathbf{x}_{0}\mathbf{\mathcal{W}}_{l}^{T}+\mathbf{b}_{l}\right)\in\mathbb{R}^{n^{l}},\qquad l=2,\ldots,L-1,\\ \hat{\mathbf{y}}&=\mathbf{x}_{L-1}\mathbf{W}_{L}^{T}+\mathbf{x}_{0}\mathbf{\mathcal{W}}_{L}^{T}+\mathbf{b}_{L}\in\mathbb{R}^{n^{L}},\end{split} \tag{1}\]
with the weights \(\mathbf{W}\) and \(\mathbf{\mathcal{W}}\), the biases \(\mathbf{b}\) and the activation functions \(\sigma\). The weights and biases form the set of trainable parameters \(\mathbf{\theta}=\{\{\mathbf{W}_{i}\}_{i=1}^{L},\{\mathbf{\mathcal{W}}_{i}\}_{i=2}^{L},\{ \mathbf{b}_{i}\}_{i=1}^{L}\}\). The output is then convex with regards to the input if the weights \(\{\mathbf{W}_{i}\}_{i=2}^{L}\) are non-negative and the activation functions \(\{\sigma_{i}\}_{i=1}^{L}\) are convex and non-decreasing. For a proof see Ref. [29].
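A minimal PyTorch sketch of eq. (1) follows; the softplus reparametrization of the constrained weights and the softplus activations are assumptions made here for concreteness (any map onto the nonnegative reals and any convex, non-decreasing activation would satisfy the conditions), not necessarily the implementation of Ref. [29].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input convex neural network following eq. (1): the output is convex
    in x0 because the weights W_l (l >= 2) are kept nonnegative via a
    softplus reparametrization and softplus is convex and non-decreasing."""

    def __init__(self, n_in, n_hidden, n_layers):
        super().__init__()
        self.Wz = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(n_hidden, n_in))]
            + [nn.Parameter(0.1 * torch.randn(n_hidden, n_hidden))
               for _ in range(n_layers - 1)]
            + [nn.Parameter(0.1 * torch.randn(1, n_hidden))])
        # passthrough weights on x0 (unconstrained, the script-W in eq. (1))
        self.Wx = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(n_hidden, n_in))
             for _ in range(n_layers - 1)]
            + [nn.Parameter(0.1 * torch.randn(1, n_in))])
        self.b = nn.ParameterList(
            [nn.Parameter(torch.zeros(n_hidden)) for _ in range(n_layers)]
            + [nn.Parameter(torch.zeros(1))])

    def forward(self, x0):
        z = F.softplus(x0 @ self.Wz[0].T + self.b[0])      # W_1 is unconstrained
        for l in range(1, len(self.Wz) - 1):
            z = F.softplus(z @ F.softplus(self.Wz[l]).T    # W_l >= 0
                           + x0 @ self.Wx[l - 1].T + self.b[l])
        return z @ F.softplus(self.Wz[-1]).T + x0 @ self.Wx[-1].T + self.b[-1]

psi = ICNN(n_in=3, n_hidden=30, n_layers=1)   # e.g. 1 hidden layer, 30 neurons
```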
#### 2.1.2 Positive, monotonically increasing neural network
Let a positive, monotonically increasing neural network \(\mathcal{N}\) be defined by
\[\begin{split}\mathbf{x}_{0}&\in\mathbb{R}^{n^{0}},\\ \mathbf{x}_{1}&=\sigma_{1}\left(\mathbf{x}_{0}\mathbf{W}_{1}^{T}+\mathbf{b}_{1}\right)\in\mathbb{R}^{n^{1}},\\ \mathbf{x}_{l}&=\sigma_{l}\left(\mathbf{x}_{l-1}\mathbf{W}_{l}^{T}+\mathbf{b}_{l}\right)\in\mathbb{R}^{n^{l}},\qquad l=2,\ldots,L-1,\\ \hat{\mathbf{y}}&=\mathbf{x}_{L-1}\mathbf{W}_{L}^{T}+\mathbf{b}_{L}\in\mathbb{R}^{n^{L}},\end{split} \tag{2}\]
where \(\mathbf{W}\), \(\mathbf{b}\) and \(\sigma\) denote the weights, biases and activation functions, respectively. The set of trainable parameters reads \(\mathbf{\theta}=\{\{\mathbf{W}_{i}\}_{i=1}^{L},\{\mathbf{b}_{i}\}_{i=1}^{L}\}\). Then each output value of \(\hat{\mathbf{y}}\) is positive and monotonically increasing with regard to all input values of \(\mathbf{x}_{0}\) when the trainable parameters are nonnegative and the activation functions are positive and non-decreasing. For a proof, we refer to Ref. [27].
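Analogously, a minimal sketch of eq. (2) could read as follows; using softplus both to keep the parameters nonnegative and as the (positive, non-decreasing) activation is an assumption for illustration, not necessarily the construction of Ref. [27].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositiveMonotoneNN(nn.Module):
    """Positive, monotonically increasing network following eq. (2):
    all trainable parameters are mapped onto the nonnegative reals via
    softplus, and softplus (positive, non-decreasing) is the activation."""

    def __init__(self, n_in, n_hidden, n_layers, n_out=1):
        super().__init__()
        dims = [n_in] + [n_hidden] * n_layers + [n_out]
        self.W = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(dims[i + 1], dims[i]))
             for i in range(len(dims) - 1)])
        self.b = nn.ParameterList(
            [nn.Parameter(torch.zeros(d)) for d in dims[1:]])

    def forward(self, x):
        for W, b in zip(self.W[:-1], self.b[:-1]):
            x = F.softplus(x @ F.softplus(W).T + F.softplus(b))
        return x @ F.softplus(self.W[-1]).T + F.softplus(self.b[-1])
```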
### Extreme sparsification with smoothed \(L^{0}\) regularization
Given a data set of input-output pairs \(\{\mathbf{x}^{i},\mathbf{y}^{i}\}_{i=1}^{N}\) the trainable parameters of eqs. (1) and (2) are classically found by minimizing a loss function \(\mathcal{R}(\mathbf{\theta})\)
\[\mathbf{\theta}^{\star}=\operatorname*{arg\,min}_{\mathbf{\theta}}\mathcal{R}(\mathbf{ \theta})=\operatorname*{arg\,min}_{\mathbf{\theta}}\frac{1}{N}\sum_{i=1}^{N}\left[ \mathcal{L}(\mathcal{N}(\mathbf{x}^{i};\mathbf{\theta}),\mathbf{y}^{i})\right]\]
with some loss function \(\mathcal{L}(\bullet)\). In the following, we discuss how to sparsify the parameters by adding regularization. The approach follows the general idea of a gating system where each trainable parameter is multiplied by a binary gate value \(z\in\{0,1\}\), depending on whether the parameter should be active or not: zero reflects an inactive parameter, while one defines an active parameter. The number of active gates could then be penalized in the loss function. However, due to the binary nature of the gates, the loss function would not be differentiable. Hence, following Ref. [42] we consider a reparametrization of the trainable parameters using a smoothed "gating" system, i.e. let
\[\mathbf{\theta}=\overline{\mathbf{\theta}}\odot\mathbf{z},\quad\text{with}\quad\mathbf{z}= \min(\mathbf{1},\max(\mathbf{0},\overline{\mathbf{s}})) \tag{3}\]
where \(\odot\) denotes the Hadamard product and where
\[\overline{\mathbf{s}}=\mathbf{s}(\zeta-\gamma)+\gamma,\quad\mathbf{s}=\text{Sigmoid}(( \log\mathbf{u}-\log(1-\mathbf{u})+\log\mathbf{\alpha})/\beta),\quad\mathbf{u}\sim U(\mathbf{0}, \mathbf{1}). \tag{4}\]
Here, \(\gamma\), \(\beta\), \(\zeta\) and \(\log\mathbf{\alpha}\) are user-chosen parameters that define the smoothing of the gate \(\mathbf{z}\). Following the suggestions of the authors [42], we choose \(\gamma=-0.1\), \(\zeta=1.1\), \(\beta=2/3\) and \(\log\mathbf{\alpha}\) is obtained by sampling from a normal distribution with zero mean and a standard deviation of \(0.01\).
Since the gate values are dependent on a random variable, we can define a Monte Carlo approximated loss function as
\[\mathcal{R}(\overline{\mathbf{\theta}})=\frac{1}{L}\sum_{l=1}^{L}\left[\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}\left(\mathcal{N}(\mathbf{x}_{i},\overline{\mathbf{\theta}}\odot\mathbf{z}^{l}),\mathbf{y}_{i}\right)\right]+\lambda\sum_{j=1}^{|\mathbf{\theta}|}\text{Sigmoid}\left(\log\alpha_{j}-\beta\log\frac{-\gamma}{\zeta}\right) \tag{5}\]
with \(L\) being the number of samples of the Monte Carlo approximation and where \(\lambda\) is a weighting factor for the regularization. At test time we can then set the values of the trainable parameters \(\mathbf{\theta}^{\star}=\overline{\mathbf{\theta}}^{\star}\odot\hat{\mathbf{z}}\) where the gates can be obtained as
\[\hat{\mathbf{z}}=\min(\mathbf{1},\max(\mathbf{0},\text{Sigmoid}(\log\mathbf{\alpha})(\zeta- \gamma)+\gamma)). \tag{6}\]
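Eqs. (3)-(6) translate into a small gating module; the PyTorch sketch below uses the parameter values stated above, while the class and attribute names are illustrative.

```python
import math
import torch
import torch.nn as nn

class L0Gate(nn.Module):
    """Smoothed L0 gate for a parameter tensor, following eqs. (3)-(6)."""

    def __init__(self, shape, gamma=-0.1, zeta=1.1, beta=2.0 / 3.0):
        super().__init__()
        self.gamma, self.zeta, self.beta = gamma, zeta, beta
        self.log_alpha = nn.Parameter(0.01 * torch.randn(shape))

    def forward(self):
        if self.training:                                  # eq. (4), stochastic
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log(1 - u) + self.log_alpha) / self.beta)
        else:                                              # eq. (6), deterministic
            s = torch.sigmoid(self.log_alpha)
        s_bar = s * (self.zeta - self.gamma) + self.gamma
        return torch.clamp(s_bar, 0.0, 1.0)                # eq. (3)

    def penalty(self):
        # expected number of active gates, the regularizer of eq. (5)
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)).sum()

# usage (shapes illustrative): gated weights and the sparsity penalty
theta_bar = nn.Parameter(torch.randn(30, 3))
gate = L0Gate((30, 3))
theta = theta_bar * gate()            # theta = theta_bar ⊙ z, eq. (3)
loss_reg = 1e-4 * gate.penalty()      # lambda-weighted term of eq. (5)
```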
In the following section, we show how this idea can be applied to constitutive modeling in computational mechanics.
## 3 Results
We consider three different application areas where fitting physics-augmented neural network constitutive laws have been applied in the literature:
* Incompressible and compressible hyperelasticity
* Yield functions for elastoplasticity
* Isotropic hardening for elastoplasticity.
For each case, we look at multiple different experimental and synthetic datasets to test the capabilities of the sparsification approach. We have implemented all the models in PyTorch [55] and use the Adam optimizer [56] with a learning rate of \(10^{-3}\) to solve the optimization problem. We also note that for elastoplasticity, we follow the paradigm suggested in [27] for modular learning approaches.
### Constitutive modeling of hyperelastic potential
Hyperelasticity enables modeling finite strain, nonlinear elastic material behavior by assuming that a strain-dependent potential function \(\Psi(\mathbf{F})\) exists from which the stress can be derived [2]. Here \(\mathbf{F}=\frac{\partial\phi}{\partial\mathbf{X}}\) is the deformation gradient that depends on \(\phi(\mathbf{X},t)\) which describes the motion of a body from a reference position \(\mathbf{X}\) to its current position at time \(t\). One important strain measure for finite strain modeling is the right Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\). In the following, we restrict ourselves to isotropic material behavior. The hyperelastic potential is generally subject to constitutive constraints that are derived from thermodynamic considerations and mechanistic assumptions [21]. These include
* Objectivity and material symmetry. Let \(\mathbf{R}\) be an orthogonal matrix, the strain energy function then needs to adhere to the assumed isotropic nature of the material behavior, i.e. \[\Psi(\mathbf{F}\mathbf{R}^{T})=\Psi(\mathbf{F}).\] (7) Furthermore, the energy value needs to be independent of the choice of an observer, which is known as objectivity, and which is defined by \[\Psi(\mathbf{R}\mathbf{F})=\Psi(\mathbf{F}).\] (8) By formulating the strain energy function in terms of the invariants of the right Cauchy-Green tensor \(\psi(I_{1},I_{2},J)\) where \[I_{1}=\text{tr}(\mathbf{C}),\quad I_{2}=\text{tr}(\text{cof}\mathbf{C}),\quad J=\sqrt {\det(\mathbf{C})},\] (9) instead of the deformation gradient, the objectivity and the material symmetry conditions are fulfilled.
* Normalization condition. It is physically sensible to define the form of the strain energy function such that both its value and the derived stress value are zero at the undeformed configuration \(\mathbf{C}=\mathbf{I}\)[2].
* Polyconvexity. Following Ball [57], polyconvex hyperelasticity guarantees the existence of minimizers of the underlying potential in finite elasticity. Polyconvexity in this context requires that \(\Psi(\mathbf{F},\text{cof}\mathbf{F},\det\mathbf{F})\) is convex in all its arguments [58]. This is equivalent to ensuring convexity of the strain energy function with regard to a set of polyconvex invariants.
In the following, we discuss neural network formulations that enforce all these constitutive conditions and use regularization to obtain interpretable model forms for different data sets. We differentiate between problems in compressible and incompressible hyperelasticity.
#### 3.1.1 Compressible hyperelastic materials from numerical data
In order to generate a compressible, polyconvex, objective, and isotropic material description, the strain energy function can be assumed to be a convex function \(\psi(I_{1},I_{2},J)\). We then build an input convex neural network, c.f. eq. (1), that takes these invariants as input and outputs a scalar \(\hat{\Psi}^{NN}(I_{1},I_{2},J)\). In order to also fulfill the normalization condition, the approach suggested by Ref. [24] can be used; we thus obtain the strain energy prediction as
\[\hat{\Psi}(I_{1},I_{2},J)=\hat{\Psi}^{NN}(I_{1},I_{2},J)-\hat{\Psi}^{NN}(3,3,1)- \psi^{S}(J) \tag{10}\]
where \(\psi^{S}\) defined as
\[\psi^{S}=n(J-1)=\underbrace{\left.\left(2\frac{\partial\hat{\Psi}^{NN}}{ \partial I_{1}}+4\frac{\partial\hat{\Psi}^{NN}}{\partial I_{2}}+\frac{ \partial\hat{\Psi}^{NN}}{\partial J}\right)\right|_{\mathbf{C}=\mathbf{I}}}_{=n}(J-1) \tag{11}\]
enforces the normalization of the stress. For more information see Appendix A.1. A prediction of the second Piola-Kirchhoff stress can then be obtained with
\[\hat{\mathbf{S}}=2\frac{\partial\hat{\Psi}}{\partial\mathbf{C}}=2\left(\sum_{i}\frac{ \partial\hat{\Psi}}{\partial I_{i}}\frac{\partial I_{i}}{\partial\mathbf{C}} \right)=2\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+I_{1}\frac{\partial \hat{\Psi}}{\partial I_{2}}\right)\mathbf{I}-2\frac{\partial\hat{\Psi}}{\partial I _{2}}\mathbf{C}+J\frac{\partial\hat{\Psi}}{\partial J}\mathbf{C}^{-1}. \tag{12}\]
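Since eq. (12) only involves derivatives of the scalar energy with respect to the three invariants, the stress can be assembled by automatic differentiation. A minimal PyTorch sketch is given below; the example energy is an illustrative placeholder, not one of the models considered in this work.

```python
import torch

def second_pk_stress(psi, C):
    """Second Piola-Kirchhoff stress from an energy psi(I1, I2, J), eq. (12)."""
    I1 = torch.trace(C).detach().requires_grad_(True)
    # I2 = tr(cof C) = (tr(C)^2 - tr(C^2)) / 2 for 3x3 tensors, cf. eq. (9)
    I2 = (0.5 * (torch.trace(C) ** 2
                 - torch.trace(C @ C))).detach().requires_grad_(True)
    J = torch.sqrt(torch.det(C)).detach().requires_grad_(True)
    dI1, dI2, dJ = torch.autograd.grad(psi(I1, I2, J), (I1, I2, J))
    eye = torch.eye(3, dtype=C.dtype)
    return (2.0 * (dI1 + I1 * dI2) * eye
            - 2.0 * dI2 * C
            + J * dJ * torch.linalg.inv(C))

# illustrative energy (placeholder, not a fitted model):
psi = lambda I1, I2, J: (0.5 * (I1 - 3.0) + 0.05 * (I2 - 3.0)
                         - torch.log(J) + 0.25 * (J - 1.0) ** 2)
F_def = torch.eye(3) + 0.05 * torch.rand(3, 3)   # sample deformation gradient
S = second_pk_stress(psi, F_def.T @ F_def)
```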
Using this formulation we can train a sparsified, compressible, physics-augmented neural network model on stress-strain data. For this application, consider a region in the deformation gradient space of the form
\[F_{ij}\in[F_{ij}^{L},F_{ij}^{U}]=\begin{cases}[1-\delta,1+\delta]&\text{if}\,i =j\\ [-\delta,+\delta]&\text{else}.\end{cases} \tag{13}\]
Following Ref. [59] we define a training region with \(\delta=0.2\) and a test region with \(\delta=0.3\). We then utilize the space-filling sampling algorithm proposed in Ref. [20] to sample \(50\) training and \(10,000\) testing data points in their respective regions. In terms of the invariants, the 50 data points are summarized in Table A1. We furthermore divide the training dataset in an \(80/20\) split into training and validation points.
**Examples.** Firstly, we study the performance of the proposed approach and how it is influenced by the network size and the weighting factor \(\lambda\). Consider a Gent-Gent hyperelastic model [60] of the form
\[\Psi_{gent}=-\frac{\theta_{1}}{2}J_{m}\log\left(1-\frac{I_{1}-3}{J_{m}} \right)-\theta_{2}\log\left(\frac{I_{2}}{J}\right)+\theta_{3}\left(\frac{1}{2 }(J^{2}-1)-\log J\right) \tag{14}\]
where we employ \(\theta_{1}=2.4195\), \(J_{m}=77.931\), \(\theta_{2}=-0.75\) and \(\theta_{3}=1.20975\), see Ref. [59]. We consider three different architectures (1 hidden layer with 30 neurons, 2 hidden layers with 30 neurons, and 3 hidden layers with 30 neurons), 5 different regularization parameter values, and repeat the training process \(10\) times with different random seeds. Figure 2 depicts the average (in solid lines) and each individual data point of the final training loss and the number of parameters with an absolute value greater than \(0\) at the end of \(100,000\) training epochs for each regularization parameter value and architectural setting. We additionally highlight the average training loss (red dotted line) and the number of active parameters (in the legend) when no regularization was employed. We can see that the final training loss value as well as the number of active training parameters is relatively similar in terms of absolute values for all architectures. As expected, higher regularization parameters penalize the number of active parameters more, and hence we can see that the number of active parameters increases with lower regularization parameters. Since the total loss is then less dominated by the regularization, the training loss decreases with a decrease in \(\lambda\). Due to the higher number of tunable parameters in the unregularized models (e.g. \(1112\) parameters whose absolute value is greater than zero for the architecture with 1 hidden layer and 30 neurons), these networks achieve a comparatively low training loss. Notably, however, the sparsified networks with \(\lambda=10^{-5}\) reach a similar training loss value with only around \(15\) active parameters.
For the same trained networks as above, Figure 3 summarizes the generalization performance of the networks based on the test loss over the \(10,000\) test points. Here, again, the red dotted lines indicate the average test loss when no regularization was employed. Interestingly, the average test loss of the networks with a regularization value of \(\lambda=10^{-5}\) is better than that of the full models where \(\lambda=0\). This suggests that due to the additional regularization, these models are not only more interpretable but they also generalize more proficiently. The functional form of the network (1 hidden layer and \(\lambda=10^{-4}\)) with the median training loss is given by
\[\hat{\Psi}=0.398J+3.095\log\left(\left(1+e^{-1.356I_{2}}\right)^{1.314}\left(e^ {0.755I_{1}}+1\right)^{0.515}\left(e^{0.135I_{1}-0.319I_{2}-0.329J}+1\right)^{1. 874}+1\right)-6.686. \tag{15}\]
The representation consists of 3 terms that can be written out in one line of text, making it easy to implement and allowing for easier interpretation than the equivalent \(1112\) parameters of the full model. For example, it is easy to see that the obtained hyperelastic law does not fulfill the coercivity condition [21], i.e., as \(J\to 0^{+}\), we do not get \(\hat{\Psi}\rightarrow\infty\). This observation would not have been possible with over \(1000\) parameters.
We remark that we could have included a functional term in the strain energy function, such that the coercivity would be fulfilled by design, c.f. Ref. [21].
To highlight how accurate the obtained formulation nevertheless is we can look at the stress-strain behavior inside and outside of the training regime \(0.6\leq F_{11}\leq 1.4\), see Figure 4, where the green lines indicate the training domain. We can see that the network finds a sparse representation that appears to interpolate well and is also proficient in extrapolation.
Next, we look at the effect of the physics augmentation compared to a standard neural network model. Figure 5 depicts the influence of the \(\mathcal{L}^{0}\) regularization parameter and the network architecture on the averaged (over 10 runs) test loss and number of active parameters of the Gent-Gent model. Two things are noteworthy: (i) the test loss is generally similar between the two network formulations, and (ii) the average number of active parameters of the standard neural networks seems to be significantly more sensitive to the regularization parameter than its physics-augmented counterparts. Apart from the guaranteed adherence to physical principles and mechanistic assumptions, this seems to be a major positive side effect of the physics augmentation. The robustness of the physics-augmented approach to maintain interpretability irrespective of the choice of the regularization parameter is a key feature of the proposed framework here, and it will positively impact the overall trustworthiness of the approach, as will be discussed further in the manuscript.
Lastly, to prove the flexibility of the proposed approach, we also consider two more hyperelastic laws, in particular, a compressible Mooney-Rivlin model [61] and a polynomial model [7]. We again ran 10 random realizations of the training process and report the result with the median final training loss after \(100,000\) epochs. The true functional
Figure 3: Gent-Gent law test loss for different architectures. The red dotted line indicates the test loss of the network when no \(\mathcal{L}^{0}\)-sparsification is employed.
Figure 2: Gent-Gent law training loss for different architectures. The red dotted line indicates the training loss of the network when no \(\mathcal{L}^{0}\)-sparsification is employed.
forms as well as their sparsified representations of the neural network trained on 40 training data points are shown in Table 1. We can see that these fitted functional forms are similarly interpretable to the one discussed for the Gent-Gent model. For example, neither of the other two obtained representations fulfill the coercivity condition either.
Furthermore, similarly to the Gent-Gent representation, the formulations obtained by these networks also appear accurate inside and even far outside the training regime, see Figure 6, where the training domain is highlighted by the green dotted lines.
### Incompressible hyperelastic materials from experimental data
If the material is incompressible, it can accommodate no volumetric deformations, captured by the constraint \(J=1\). We can then assume a predictor of the strain energy function given by
\[\hat{\Psi}(I_{1},I_{2})=\hat{\Psi}^{NN}(I_{1},I_{2})-(p+n)(J-1)-\hat{\Psi}^{NN }(3,3) \tag{16}\]
where the second term on the right-hand side of this expression has no contribution to the strain energy, \(p\) is the hydrostatic pressure (remaining to be determined by the boundary value problem at hand) and
\[n=2\left.\left(\frac{\partial\hat{\Psi}^{NN}}{\partial I_{1}}+2\frac{\partial \hat{\Psi}^{NN}}{\partial I_{2}}\right)\right|_{\mathcal{C}=\mathbf{I}} \tag{17}\]
Figure 4: Stress-strain curves of fitted Gent-Gent hyperelastic laws. Green dotted lines represent the limits of the training regime.
Figure 5: Effect of the network architecture and the \(\mathcal{L}^{0}\)-regularization parameter on the Gent-Gent law test loss and the number of active parameters for physics-augmented neural networks and standard neural networks. Averaged over 10 runs.
**Gent-Gent:**
\[\begin{array}{c}\Psi_{gent}=-\frac{\theta_{1}}{2}J_{m}\log\left(1-\frac{I_{1}-3 }{J_{m}}\right)-\theta_{2}\log\left(\frac{I_{2}}{J}\right)+\theta_{3}\left( \frac{1}{2}(J^{2}-1)-\log J\right),\\ \theta_{1}=2.4195,J_{m}=77.931,\theta_{2}=-0.75,\theta_{3}=1.20975\end{array}\]
**Gent-Gent fit:**
\[\hat{\Psi}=0.398J+3.095\log\left(\left(1+e^{-1.356I_{2}}\right)^{1.314}\left(e ^{0.755I_{1}}+1\right)^{0.515}\left(e^{0.135I_{1}-0.319I_{2}-0.329J}+1\right)^ {1.874}+1\right)-6.686\]
**Mooney-Rivlin:**
\[\begin{array}{c}\Psi_{MR}=\theta_{1}\left(\frac{I_{1}}{J^{2/3}}-3\right)+\theta_{2}\left(\frac{I_{2}}{J^{4/3}}-3\right)+\theta_{3}(J-1)^{2}\;,\\ \theta_{1}=9.2\cdot 10^{-4},\theta_{2}=2.37\cdot 10^{-3},\theta_{3}=10.0010\end{array}\]
**Mooney-Rivlin fit:**
\[\hat{\Psi}_{MR}=90.338J+17.263\log\left(\left(1+e^{-0.368J}\right)^{34.796}+1 \right)+0.276\log\left(\left(e^{-1.671I_{1}+1.305I_{2}}+1\right)^{1.298}+1 \right)-406.511\]
**Polynomial:**
\[\begin{array}{c}\Psi_{poly}=\theta_{1}\left(I_{1}-3\right)^{2}+\theta_{2} \left(I_{1}-3\right)^{4}+\theta_{3}\left(I_{2}-3\right)^{2}+\theta_{4}\left(I _{2}-3\right)^{4}+\theta_{5}\left(I_{3}-1\right)^{2},\\ \theta_{1}=0.1,\theta_{2}=0.15,\theta_{3}=2\cdot 10^{-4},\theta_{4}=1\cdot 10^{-4}, \theta_{5}=0.125\end{array}\]
**Polynomial fit:**
\[\begin{array}{c}\hat{\Psi}_{poly}=-0.116J+9.046\log\left(\left(1+e^{-0.179I_{2}}\right)^{3.59}\left(e^{0.193I_{2}}+1\right)^{1.927}+1\right)\\ \qquad+0.597\log\left(\left(e^{-2.931I_{1}+2.272I_{2}}+1\right)^{0.696}+1\right)-33.35\end{array}\]
is determined by enforcing the normalization constraint. Furthermore \(\hat{\Psi}^{NN}(\bullet)\) is the output of an input convex neural network. Formulation (16) allows us to obtain a model that fulfills all constitutive constraints described above. The derivation can be found in Appendix A.2.
We can then obtain a formulation for the first Piola-Kirchoff stress with
\[\mathbf{P}=2\left(\left[\frac{\partial\hat{\Psi}}{\partial I_{1}}+I_{1}\frac{ \partial\hat{\Psi}}{\partial I_{2}}\right]\mathbf{F}-\frac{\partial\hat{\Psi}}{ \partial I_{2}}\mathbf{F}\mathbf{C}\right)-(p+n)J\mathbf{F}^{-T} \tag{18}\]
which in terms of the principal strains \(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\) reads
\[P_{i}\lambda_{i}=2\left(\lambda_{i}^{2}\frac{\partial\hat{\Psi}}{\partial I_{ 1}}+\frac{\partial\hat{\Psi}}{\partial I_{2}}\left[(\lambda_{1}^{2}+\lambda_{ 2}^{2}+\lambda_{3}^{2})\lambda_{i}^{2}-\lambda_{i}^{4}\right]\right)-p-n. \tag{19}\]
Analytical forms for the hydrostatic pressure can be found for some simple boundary value problems.
In the following, we fit curves to data from uniaxial tension (UT), equibiaxial tension (ET), pure shear stress (PS), simple shear deformation (SS), and simple torsion (ST). The relevant deformations and quantities of interest that were used to fit the data are summarised in Table 2 and described in detail in Appendix A.2.
### Examples
In the following, we analyze and study the performance of the sparsified physics-augmented ML model on different sets of experimental data. We have repeated the training process 10 times with different random seeds and in the following only report the results corresponding to the median training loss after (depending on the dataset) \(50,000\) or \(80,000\) epochs. We utilize an architecture of 1 hidden layer with 30 neurons and \(\lambda=10^{-3}\) for all the following results.
| Type | Deformation | Relevant experimental output |
| --- | --- | --- |
| Uniaxial tension (UT) | \(\mathbf{F}=\operatorname{diag}\big(\lambda_{1},\,\lambda_{1}^{-1/2},\,\lambda_{1}^{-1/2}\big)\) | \(P_{1}=2\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+\frac{1}{\lambda_{1}}\frac{\partial\hat{\Psi}}{\partial I_{2}}\right)\left[\lambda_{1}-\frac{1}{\lambda_{1}^{2}}\right]\) |
| Equibiaxial tension (ET) | \(\mathbf{F}=\operatorname{diag}\big(\lambda_{1},\,\lambda_{1},\,\lambda_{1}^{-2}\big)\) | \(P_{1}=P_{2}=2\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+\lambda_{1}^{2}\frac{\partial\hat{\Psi}}{\partial I_{2}}\right)\left[\lambda_{1}-\frac{1}{\lambda_{1}^{5}}\right]\) |
| Pure shear stress (PS) | \(\mathbf{F}=\operatorname{diag}\big(\lambda_{1},\,1,\,\lambda_{1}^{-1}\big)\) | \(P_{1}=2\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+\frac{\partial\hat{\Psi}}{\partial I_{2}}\right)\left[\lambda_{1}-\frac{1}{\lambda_{1}^{3}}\right]\) |
| Simple shear deformation (SS) | \(\mathbf{F}=\mathbf{I}+\gamma\,\mathbf{e}_{1}\otimes\mathbf{e}_{2}\) | \(P_{12}=2\gamma\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+\frac{\partial\hat{\Psi}}{\partial I_{2}}\right)\) |
| Simple torsion (ST) | \(\mathbf{F}=\mathbf{I}+\rho\phi\,\mathbf{e}_{2}\otimes\mathbf{e}_{3}\) | \(\tau=\int_{0}^{1}4\pi\rho^{3}\phi\left(\frac{\partial\hat{\Psi}}{\partial I_{1}}+\frac{\partial\hat{\Psi}}{\partial I_{2}}\right)d\rho\) |

Table 2: Studied deformation modes with the relevant experimental outputs.
#### 3.3.1 Model for rubber
One of the most used datasets for incompressible hyperelasticity goes back to Treloar [62], who reported the uniaxial, equibiaxial and pure shear responses of vulcanized rubber at \(20^{\circ}C\) and \(50^{\circ}C\). Following Ref. [63], this data is summarized in Table A2. In the following, we use the UT and ET data as training data and test the performance on the PS dataset.
\(20^{\circ}C\) dataset. For the \(20^{\circ}C\) dataset, the evolution of the median training loss and the median number of active parameters is shown in Figure 7a. It can be seen that both values decrease simultaneously and that the number of active parameters drops from an initial value greater than \(1000\) to around \(10\) over the training process. The training data, the predicted stress responses, and the \(R^{2}\)-scores are depicted in Figure 7b. The predicted responses on the training data are highly accurate. Further, even on the unseen PS data the model reaches an accuracy of \(R^{2}>0.99\). The obtained sparsified representation for the incompressible hyperelastic law reads
\[\hat{\Psi}=0.15I_{1}+6.3\cdot 10^{-3}I_{2}-0.33J-0.28-p\left(J-1.0\right)+794.94\log\left(10^{-4}\left(e^{0.09I_{1}-0.01I_{2}}+1\right)^{0.73}+1\right) \tag{20}\]
which fulfills all the discussed constraints such as convexity and the normalization conditions.
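The normalization condition can be verified numerically by evaluating eq. (20) in the undeformed configuration, \(I_{1}=I_{2}=3\) and \(J=1\), where the pressure term vanishes; the small residual stems from the coefficients being rounded to the printed precision.

```python
import math

def psi_hat(I1, I2, J):
    # discovered 20°C strain energy of eq. (20); the term -p*(J - 1)
    # drops out in the incompressible reference state J = 1
    return (0.15*I1 + 6.3e-3*I2 - 0.33*J - 0.28
            + 794.94*math.log(1e-4*(math.exp(0.09*I1 - 0.01*I2) + 1)**0.73 + 1))

print(psi_hat(3.0, 3.0, 1.0))  # ~4e-3, i.e. Psi(F = I) is ~0 up to rounding
```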
\(50^{\circ}C\) dataset. For the \(50^{\circ}C\) dataset, the evolution of the training loss and the number of active parameters is plotted in Figure 8a. At the end of the training process, the discovered strain energy formulation is given by
\[\hat{\Psi} =0.2025I_{1}+0.0114I_{2}-0.4512J-p\left(J-1.0\right)+5.1533\log \left(0.0009\left(0.2155e^{0.1852I_{1}-0.0028I_{2}}+1\right)^{0.7778}+1\right) \tag{21}\] \[+12.5236\log\left(0.0009e^{0.0052I_{2}}+1\right)-0.2076.\]
Using this formulation, Figure 8b shows the stress responses, the training data points, and the \(R^{2}\)-scores. Here again, the predicted responses are accurate and the model generalizes well. For the same dataset, Figure 9 depicts the effect of the regularization. We can see that if we train a physics-augmented neural network with 1 hidden layer of \(30\) neurons without reducing the number of trainable parameters with the \(\mathcal{L}^{0}\)-regularizer, the number of trainable parameters of the run with the median final training loss (over 10 runs) is still in the hundreds at the end of the training process. Furthermore, while the training data is fitted with an accuracy comparable to the model obtained with \(\mathcal{L}^{0}\)-regularization, the unregularized model generalizes significantly worse on the unseen PS data, cf. Figure 9b. We can hence see that in the limited-data regime the regularization not only increases the interpretability of the model but improves its generalization performance as well.
Figure 7: Rubber response \(20^{\circ}C\). (a) Training loss and number of active parameters over the training process, (b) Data fit and \(R^{2}\) errors.
#### 3.3.2 Model for human brain tissue
Next, we employ the \(\mathcal{L}^{0}\)-regularized neural network approach to find incompressible hyperelastic formulations for human brain tissues. In particular, we look at four datasets of mechanical tests on the cortex, the corona radiata and two different sections of the midbrain.
We argue that its complexity is (roughly) in line with other reported forms used to model human brain tissues, e.g. compare Ref. [66], which uses a 3-term Ogden model [67]. Figure 10b depicts the data points, the predicted responses, and the \(R^{2}\)-error values. We can see that not only is the training data fitted well, but the model is also highly accurate on the shear stress test data.
The loss behavior and the number of active parameters of the network fitted on the Corona Radiata are shown in Figure 11a. We can see a performance similar to the Cortex example. The final functional form reads
\[\begin{split}\hat{\Psi}_{\text{Corona Radiata}}&=-43.126J-p\left(J-1.0\right)-56.327\\ &+4962.449\log\Big{(}0.014\left(1+0.0001e^{-1.294I_{1}}\right)^{66688.773}\left(0.0001e^{0.841I_{2}}+1\right)^{189.231}+1\Big{)}.\end{split} \tag{23}\]
The fitted responses and the \(R^{2}\)-score are shown in Figure 11b, where again good generalization of the obtained model can be noted.
Midbrain. Next, we fit a model to UT and ST data for two sections of the midbrain. The datasets were adopted from Ref. [31] and are listed in Table A4. For the first section, Figure 12a shows the training loss corresponding to the
Figure 11: Corona radiata dataset. (a) Training loss and number of active parameters over the training process, (b) Data fit and \(R^{2}\) errors.
Figure 10: Cortex dataset. (a) Training loss and number of active parameters over the training process, (b) Data fit and \(R^{2}\) errors.
training run with the median final training loss and the respective evolution of the number of active parameters. The obtained model form for this section of the midbrain reads
\[\begin{split}\Psi_{\text{MB1}}&=-47.0604J-p\left(J-1. 0\right)+28.8387\\ &+9.9229\log\Big{(}0.0048\big{(}1+0.0001e^{-0.4491I_{1}}\big{)}^{ 194016.0037}\left(0.0001e^{1.299I_{2}}+1\right)^{399.3928}+1\Big{)}.\end{split} \tag{24}\]
We remark that, due to the lower number of parameters, the loss curve exhibits single optimization steps in which the training loss sharply increases but then immediately reverts. This could potentially be avoided by a lower learning rate, which is however out of the scope of the current paper, and it appears to have no effect on the final accuracy of the obtained model. The latter is highlighted in Figures 12b and 12c, which show the data and the model response for UT and ST, respectively. We furthermore highlight the \(R^{2}\) score, which for both cases is \(>0.97\), and depict the fit of Flaschel et al. [31], which was obtained using a sparse regression framework called EUCLID [7]. Ref. [31] obtains a strain energy function of the form
\[\Psi_{\text{Fl}}=\frac{2\cdot 0.01}{(-78.58)^{2}}\left(\lambda_{1}^{-78.58}+ \lambda_{2}^{-78.58}+\lambda_{3}^{-78.58}-3\right)+\frac{2\cdot 90.36}{(-28.71)^{2} }\left(\lambda_{1}^{-28.71}+\lambda_{2}^{-28.71}+\lambda_{3}^{-28.71}-3\right) \tag{25}\]
which could potentially be perceived as slightly more interpretable than the strain energy function of eq. (24); however, it has significantly worse accuracy, with \(R^{2}\)-scores below \(0.86\). The loss and parameter evolution of the
Figure 12: Midbrain Section 1 response. (a) Training loss and number of active parameters over the training process, (b) Uniaxial tension data, the fit of the proposed approach, and fit of Ref. [31], (c) torsion data, the fit of the sparsified neural network approach, compared to the fit of Ref. [31].
network trained on the data of the second midbrain section is depicted in Figure 13a. The final model form reads
\[\begin{split}\Psi_{\text{MB2}}&=-57.0237J-p\left(J-1.0 \right)+29.8625\\ &+87.1968\log\left(0.0015\left(1+0.0003e^{-0.2335I_{1}}\right)^{ 24578.9446}\left(0.0003e^{0.7607I_{2}}+1\right)^{437.4412}+1\right)\end{split} \tag{26}\]
which is of similar complexity to the model obtained for the first midbrain section. Figures 13b and 13c plot the data points, our fit, and the fit of Ref. [31]. Again, the proposed approach shows high accuracy, better than that of the EUCLID method.
### Constitutive modeling of elastoplastic material responses
Elastoplasticity is a framework to model history-dependent nonlinear materials. In the small strain regime, we assume that we can additively split the strain into elastic and plastic parts, \(\mathbf{\epsilon}=\mathbf{\epsilon}^{e}+\mathbf{\epsilon}^{p}\). We can then postulate the existence of a free energy function \(\Psi(\mathbf{\epsilon}^{e},r)=\Psi^{e}(\mathbf{\epsilon}^{e})+\Psi^{r}(r)\) that is decomposed into an elastic part \(\Psi^{e}\) and a history-dependent component \(\Psi^{r}\) that models isotropic hardening and depends on the internal variable \(r\). To be consistent with thermodynamics, the intrinsic dissipation has to be non-negative [68], i.e.
\[\mathcal{D}_{int}=\mathbf{\sigma}:\dot{\mathbf{\epsilon}}-\dot{\Psi}=\left(\mathbf{\sigma}-\frac{\partial\Psi^{e}}{\partial\mathbf{\epsilon}^{e}}\right):\dot{\mathbf{\epsilon}}^{e}+\mathbf{\sigma}:\dot{\mathbf{\epsilon}}^{p}-R\dot{r}\geq 0 \tag{27}\]
Figure 13: Midbrain Section 2 response. (a) Training loss and number of active parameters over the training process, (b) Uniaxial tension data, the fit of the proposed approach, and fit of Ref. [31], (c) torsion data, the fit of the sparsified neural network approach, compared to fit of Ref. [31].
where \(R=\frac{\partial\Psi^{r}}{\partial r}\) is the thermodynamic force conjugate to \(r\). In order to guarantee the fulfillment of eq. (27), we set \(\mathbf{\sigma}=\frac{\partial\Psi^{e}}{\partial\mathbf{\epsilon}^{e}}\) and introduce the yield function \(f(\mathbf{\sigma},R)\), which is required to be a non-negative convex function of its arguments and zero-valued at the origin, i.e. \(f(\mathbf{0},0)=0\), see Ref. [69]. This function allows us to derive the evolution equations
\[\dot{\mathbf{\epsilon}}^{p}=\dot{\lambda}\frac{\partial f}{\partial\mathbf{\sigma}},\quad\dot{r}=-\dot{\lambda}\frac{\partial f}{\partial R} \tag{28}\]
with the consistency parameter \(\dot{\lambda}\). Many materials (such as metals or rocks) are characterized by pressure-independence and by linear elastic behavior in the elastic regime, i.e. where \(f(\mathbf{\sigma},R)<0\). The latter allows us to define the elastic component of the free energy function as
\[\Psi^{e}(\mathbf{\epsilon}^{e})=\frac{1}{2}\mathbf{\epsilon}^{e}:\mathbb{C}:\mathbf{ \epsilon}^{e} \tag{29}\]
where \(\mathbb{C}\) is a fourth-order material tangent specifying the (elastic) anisotropy. Under the assumption of isotropic yielding and due to the pressure independence of metals, the yield function can be rewritten as a function of two \(\pi\)-plane components
\[f(\frac{1}{R}\pi_{1},\frac{1}{R}\pi_{2}) \tag{30}\]
with
\[\begin{bmatrix}\pi_{1}\\ \pi_{2}\\ \pi_{3}\end{bmatrix}=\begin{bmatrix}\sqrt{\frac{2}{3}}&-\sqrt{\frac{1}{6}}&- \sqrt{\frac{1}{6}}\\ 0&\sqrt{\frac{1}{2}}&-\sqrt{\frac{1}{2}}\\ \sqrt{\frac{1}{3}}&\sqrt{\frac{1}{3}}&\sqrt{\frac{1}{3}}\end{bmatrix}\begin{bmatrix} \sigma_{1}\\ \sigma_{2}\\ \sigma_{3}\end{bmatrix} \tag{31}\]
where \(\sigma_{i}\), \(i=1,2,3\), are the principal stresses and where \(R\) now acts as the ratio of a homothetic transformation of the initial yield surface (\(R=1\)). For \(R>1\) the yield surface expands, such that isotropic hardening can be represented. In the following, we will use the sparse regression framework to find representations for the initial yield function \(f(\pi_{1},\pi_{2})\) and for the hardening function \(R(r)\), respectively.
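A small numerical sketch of this parametrization: the map of eq. (31) sends principal stresses to \(\pi\)-plane coordinates, and a homothetically scaled von Mises function serves as a simple example of a yield function of the above form (the value \(\sigma_{y}=0.24\) is an illustrative choice matching the scale of the later examples).

```python
import numpy as np

# orthogonal map from principal stresses to pi-plane coordinates, eq. (31)
T = np.array([[np.sqrt(2/3), -np.sqrt(1/6), -np.sqrt(1/6)],
              [0.0,           np.sqrt(1/2), -np.sqrt(1/2)],
              [np.sqrt(1/3),  np.sqrt(1/3),  np.sqrt(1/3)]])

def pi_coords(principal_stresses):
    return T @ np.asarray(principal_stresses)

def von_mises(pi1, pi2, R=1.0, sigma_y=0.24):
    # pi1**2 + pi2**2 equals 2*J2, so this is sqrt(3*J2)/R - sigma_y;
    # R > 1 expands the surface, representing isotropic hardening
    return np.sqrt(1.5)*np.hypot(pi1, pi2)/R - sigma_y

pi1, pi2, pi3 = pi_coords([1.0, 0.0, 0.0])  # uniaxial stress state
print(von_mises(pi1, pi2))                  # 1.0 - sigma_y (vM stress is 1.0)
```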
### Yield function from numerical data
We aim to represent an isotropic, pressure-independent initial yield function of the form \(f(\pi_{1},\pi_{2})\). Following Ref. [70], it can be shown that thermodynamic consistency requires the yield function to be convex with respect to its inputs \(\pi_{1}\) and \(\pi_{2}\). We can therefore train a sparse input convex neural network (cf. Section 2.1.1) and find an interpretable formulation for the yield function. Since we focus on pressure-independent yield functions, we introduce the deviatoric stress \(\mathbf{s}\) as well as the two invariants \(J_{2}\) and \(J_{3}\), i.e.
\[\mathbf{s}=\mathbf{\sigma}-\frac{1}{3}\text{tr}(\mathbf{\sigma})\mathbf{I},\qquad J_{2}= \frac{1}{2}\text{tr}(\mathbf{s}^{2}),\qquad J_{3}=\frac{1}{3}\text{tr}(\mathbf{s}^{3}). \tag{32}\]
To make them easier to read, the following representations are rounded to three decimal places. For all examples, the yield functions are trained on 30 training points, shown in Table A5.
### Drucker
The first example focuses on the well-known Drucker yield function [71], which involves both invariants of the Cauchy stress deviator and is here specified as
\[f_{D}=J_{2}^{3}+1.5\,J_{3}^{2}-(0.24)^{6}. \tag{33}\]
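For illustration, points on the yield limit such as those in Table A5 can be generated by expressing \(J_{2}\) and \(J_{3}\) in \(\pi\)-plane coordinates (the map of eq. (31) is orthogonal, and \(\pi_{3}=0\) on the deviatoric plane) and bisecting along rays; the sketch below is one possible such procedure, not necessarily the one used to produce the actual training data.

```python
import numpy as np

def invariants_from_pi(pi1, pi2):
    # deviatoric principal stresses from pi-plane coordinates (pi3 = 0),
    # obtained by inverting (transposing) the orthogonal map of eq. (31)
    s1 = np.sqrt(2/3)*pi1
    s2 = -np.sqrt(1/6)*pi1 + np.sqrt(1/2)*pi2
    s3 = -np.sqrt(1/6)*pi1 - np.sqrt(1/2)*pi2
    J2 = 0.5*(s1**2 + s2**2 + s3**2)
    J3 = (s1**3 + s2**3 + s3**3)/3.0  # = s1*s2*s3 since s1 + s2 + s3 = 0
    return J2, J3

def f_drucker(pi1, pi2):
    J2, J3 = invariants_from_pi(pi1, pi2)
    return J2**3 + 1.5*J3**2 - 0.24**6

def yield_point(theta, r_hi=1.0, tol=1e-10):
    # f < 0 at the origin, f > 0 at r_hi, and f grows monotonically (like
    # r**6) along a ray, so bisection finds the unique radius with f = 0
    r_lo = 0.0
    while r_hi - r_lo > tol:
        r = 0.5*(r_lo + r_hi)
        if f_drucker(r*np.cos(theta), r*np.sin(theta)) < 0:
            r_lo = r
        else:
            r_hi = r
    return r_lo*np.cos(theta), r_lo*np.sin(theta)

points = [yield_point(t) for t in np.linspace(0, 2*np.pi, 30, endpoint=False)]
```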
Figure 14a shows the training loss and the number of active parameters over the training process of the training run with the median final loss. We can see that the final model contains roughly \(10\) active parameters. The obtained, interpretable functional form reads
\[\begin{split}\hat{f}_{D}&=0.068\log\left(\left(e^{-2.122\pi_{1}+3.648\pi_{2}}+1\right)^{5.238}+1\right)+0.068\log\left(\left(e^{-2.121\pi_{1}-3.648\pi_{2}}+1\right)^{5.229}+1\right)\\ &+0.296\log\left(e^{4.922\pi_{1}}+1\right)-1.699.\end{split} \tag{34}\]
Using this form, the approximated yield surface and the raw data are shown in Figure 14b. We can see that the model fits the data well. Given the simplicity of the model, we can furthermore use, for example, the series expansion around \(x=0\) of
\[\log\left(\frac{(e^{ax}+1)^{y}+1}{(e^{-ax}+1)^{y}+1}\right)\approx-\frac{a2^{y }yx}{2^{y}+1} \tag{35}\]
to show that
\[\begin{split} R&=\hat{f}_{D}(0,\pi_{2})-\hat{f}_{D}(0,-\pi_{2})\\ &=0.068\log\left(\frac{\left(e^{3.648\pi_{2}}+1\right)^{5.238}+1}{\left(e^{-3.648\pi_{2}}+1\right)^{5.238}+1}\right)+0.068\log\left(\frac{\left(e^{-3.648\pi_{2}}+1\right)^{5.229}+1}{\left(e^{3.648\pi_{2}}+1\right)^{5.229}+1}\right)\\ &\approx-0.068\frac{2.122\cdot 2^{5.238}\cdot 5.238\,\pi_{2}}{2^{5.238}+1}+0.068\frac{2.121\cdot 2^{5.229}\cdot 5.229\,\pi_{2}}{2^{5.229}+1}\\ &\approx-0.0018\pi_{2}\end{split} \tag{36}\]
i.e., the obtained yield function is roughly symmetric with respect to the sign of \(\pi_{2}\), since \(\hat{f}_{D}(0,\pi_{2})\approx\hat{f}_{D}(0,-\pi_{2})\).
### Cazacu
The Drucker yield function is symmetric in the \(\pi_{1}\)-\(\pi_{2}\)-plane. To highlight that the approach is also able to obtain an accurate model when tension-compression asymmetries are present, we consider the yield function suggested by Cazacu et al. [72], which we specify to be
\[f_{C}=\left(|s_{1}|+0.5s_{1}\right)^{2}+\left(|s_{2}|+0.5s_{2}\right)^{2}+ \left(|s_{3}|+0.5s_{3}\right)^{2}-0.24. \tag{37}\]
We again obtain 30 data points on the yield limit of this model and fit an \(\mathcal{L}^{0}\)-regularized input convex neural network. Figure 15a shows the loss and parameter evolution. The functional form of the final model is given by
\[\begin{split}\hat{f}_{C}&=17.327\log\left(\left(1+ e^{-0.364\pi_{1}}\right)^{0.939}\left(1+e^{-0.313\pi_{2}}\right)^{1.852}+1 \right)\\ &+1.119\log\left(\left(e^{-7.341\pi_{1}+4.18\pi_{2}}+1\right)^{0. 95}e^{6.953\pi_{1}+3.93\pi_{2}}+1\right)-38.066.\end{split} \tag{38}\]
Figure 15b shows that the presented approach is also able to fit this asymmetric yield function with reasonable accuracy.
### Tresca yield function
In contrast to the other two yield functions, the last example deals with the non-smooth Tresca yield surface as proposed in Ref. [73], which we specify as
\[f_{T}=\max\left(|\sigma_{1}-\sigma_{2}|,|\sigma_{1}-\sigma_{3}|,|\sigma_{3}- \sigma_{2}|\right)-0.24. \tag{39}\]
Figure 14: Fit of a convex neural network to the Drucker yield function. (a) Training loss behavior and number of active (\(|w|>0\)) trainable parameters; (b)Data points (black dots), true curve (black dotted), and predicted curve (blue).
Figure 16a again shows that both the number of parameters and the loss decrease over the course of the training process. The final functional representation is given by
\[\begin{split}\hat{f}_{T}&=0.021\log\left(\left(1+e^{- 116.395\pi_{2}}\right)^{2.706}+1\right)+0.019\log\left(\left(e^{-96.41\pi_{1}+55. 983\pi_{2}}+1\right)^{3.117}+1\right)\\ &+0.023\log\left(\left(e^{99.193\pi_{1}+57.03\pi_{2}}+1\right)^{2.46}+1\right)-1.127\end{split} \tag{40}\]
which is reasonably compact and allows for some interpretation. Finally, the predicted yield limit \(\hat{f}_{T}=0\) is overlaid on the true data in Figure 16b. We can see that the presented approach is also able to find interpretable functional forms when approximating non-smooth yield surfaces. We remark that, due to the smoothness of the activation function, the final yield function is necessarily also smooth. To exactly fit non-smooth yield surfaces, sharper activation functions such as the rectified linear unit [74] could be employed.
Figure 16: Fit of a convex neural network to the Tresca yield function. (a) Training loss behavior and number of active (\(|w|>0\)) trainable parameters; (b)Data points (black dots), true curve (black dotted), and predicted curve (blue).
Figure 15: Fit of a convex neural network to the Cazacu yield function. (a) Training loss behavior and number of active (\(|w|>0\)) trainable parameters; (b)Data points (black dots), true curve (black dotted), and predicted curve (blue).
### Isotropic hardening law
For this application, we only have access to uniaxial monotonic loading tests and we aim to fit the isotropic hardening function \(R(r)\) as given in eq. (30). Given this limited data, we assume that the elastic response is isotropic, i.e.
\[\mathbb{C}_{ijkl}=\frac{E\nu}{(1+\nu)(1-2\nu)}\delta_{ij}\delta_{kl}+\frac{E}{2(1+\nu)}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right) \tag{41}\]
where the Young's modulus \(E\) and the Poisson's ratio \(\nu\) are adjustable parameters. We then aim to find the functional form \(R(r)\) depending on the internal hardening variable \(r\) which influences the scaling of a von Mises yield function [75] of the form
\[f\left(\frac{1}{R(r)}\,\pi_{1},\frac{1}{R(r)}\,\pi_{2}\right)=\sqrt{\frac{3}{2}}\,\sqrt{\left(\frac{\pi_{1}}{R(r)}\right)^{2}+\left(\frac{\pi_{2}}{R(r)}\right)^{2}}-\sigma_{y} \tag{42}\]
where the yield stress \(\sigma_{y}\) can be directly obtained from the data. If we assume isotropic hardening then the function \(R(r)\) is required to be positive and monotonically increasing. We can therefore employ a positive, monotonically increasing neural network as introduced in Section 2.1.2 with a sigmoid activation function to approximate the functional form of \(R(r)\). We remark that we require \(R(r=0.0)=1.0\) when no plastic yielding has occurred.
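Since the details of the architecture of Section 2.1.2 are not repeated here, the following PyTorch sketch shows one common construction of such a network; the assumption that weight non-negativity is enforced via a softplus reparametrization is ours, and the output is shifted so that \(R(0)=1\) holds by construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneHardening(nn.Module):
    """Positive, monotonically increasing scalar network for R(r) (sketch).

    Non-negative weights (via softplus) combined with the monotone sigmoid
    activation make r -> R(r) monotonically increasing for r >= 0."""

    def __init__(self, width=30):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(width, 1))
        self.b1 = nn.Parameter(torch.zeros(width, 1))
        self.w2 = nn.Parameter(torch.randn(1, width))

    def raw(self, r):                      # r has shape (N, 1)
        h = torch.sigmoid(F.softplus(self.w1) @ r.T + self.b1)
        return (F.softplus(self.w2) @ h).T

    def forward(self, r):
        return 1.0 + self.raw(r) - self.raw(torch.zeros(1, 1))  # pins R(0)=1

R = MonotoneHardening()
r = torch.linspace(0.0, 0.05, 5).unsqueeze(1)
print(R(r).squeeze())  # monotonically increasing, equal to 1 at r = 0
```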
We test the approach on three different datasets, which are summarized in Table 6.
#### 3.9.1 Experimental data - U71Mn rail steel
We start with a monotonic uniaxial loading dataset of U71Mn rail steel discussed in Ref. [76]. We first fit the elastic parameters and the yield stress to \(E=220\cdot 10^{3}\)MPa, \(\nu=0.3\) and \(\sigma_{y}=484.5\)MPa, respectively. Using the network with the median loss over 10 runs, the training evolution and the reduction of the number of active parameters are shown in Figure 17a. The obtained functional form for the isotropic hardening function reads
\[\hat{R}_{U71Mn}(r)=0.099+\frac{1.801}{1+e^{-194.688r}} \tag{43}\]
which is an easily interpretable function; e.g., we can directly see that it is always monotonically increasing, since \(\exp(-r)\) is a monotonically decreasing function of \(r\). Figure 17b shows the true data and the predicted response of the monotonic loading process with the fitted \(\hat{R}_{U71Mn}(r)\). The blue line indicates the range of the training data, i.e. \(R(r)\) was trained up to roughly \(4\%\) strain. To highlight the extrapolation quality of the model, the red line is the model prediction into unseen loading ranges. We can see that both the training and testing data are fitted well.
#### 3.9.2 Experimental data - SS316L stainless steel
Next, we look at data of a uniaxial loading test on SS316L stainless steel performed by Ref. [76]. The fitted parameters of the elastic range read \(E=190\cdot 10^{3}\)MPa and \(\nu=0.35\), and the yield stress is given by \(\sigma_{y}=200\)MPa.
Figure 17: Fit of experimental monotonic loading curve with the hardening function represented by a sparse neural network - U71Mn rail steel. (a) Loss and number of active parameters over epochs (b) Uniaxial tension curve; Black dots represent the experimental data; Blue line indicates prediction in training data; Red line is extrapolation.
Figure 18a depicts the evolution of the training loss and the parameters, while Figure 18b highlights the ability of the resulting model to fit the training data (up to the end of the blue solid line) and to generalize well beyond the training region (solid red line). The interpretable functional form of the isotropic hardening function reads
\[\hat{R}_{SS316L}(r)=0.023+\frac{1.662}{1+1.071e^{-190.683r}}+\frac{0.362}{1+1.071e^{-2200.640r}} \tag{44}\]
which, for example, allows us to easily see that there is no hardening when the internal variable is zero, i.e. \(\hat{R}_{SS316L}(0)\approx 0.023+0.802+0.175\approx 1\).
#### 3.9.3 Experimental data - 40Cr3MoV bainitic steel
Lastly, we look at monotonic loading data of 40Cr3MoV bainitic steel, which was also published in Ref. [76]. We set the material parameters to \(E=207\cdot 10^{3}\)MPa, \(\nu=0.3\) and \(\sigma_{y}=1000\)MPa. The loss and parameter evolution are shown in Figure 19a. Figure 19b depicts the interpolated and extrapolated predictions using the fitted model, which is of the form
\[\hat{R}_{40Cr3MoV}(r)=-0.501+\frac{2.904}{1.669\,e^{-\frac{5.892}{1+1.720e^{-102.643r}}\cdot\frac{0.740}{1+1.720e^{-667.227r}}}+1}. \tag{45}\]
We can see that, similar to the first two cases, the accuracy of the fit is high.
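Reading the exponent in eq. (45) as the product of the two sigmoid-type terms, the requirement \(\hat{R}(0)\approx 1\) can be checked numerically:

```python
import math

def R_40Cr3MoV(r):
    inner = (5.892/(1 + 1.720*math.exp(-102.643*r))
             * 0.740/(1 + 1.720*math.exp(-667.227*r)))
    return -0.501 + 2.904/(1.669*math.exp(-inner) + 1)

print(R_40Cr3MoV(0.0))   # ~1.0: no hardening before plastic flow
print(R_40Cr3MoV(0.05))  # ~2.34: saturating hardening at larger r
```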
## 4 Discussion and Conclusion
We have proposed to prune physics-augmented neural network-based constitutive models using a smoothed version of \(\mathcal{L}^{0}\)-regularization to enable interpretable and trustworthy model discovery in a wide array of problems in mechanics. The network is trained by simultaneously fitting the training data and penalizing the number of active parameters. On a variety of applications including synthetic and experimental data, we have shown that we are able to obtain accurate yet interpretable constitutive models for compressible and incompressible hyperelasticity, yield functions, and isotropic hardening functions. The presented approach seems to be highly flexible and has the potential to overcome the restrictions of manual functional form selection, moving towards automation of constitutive modeling. Uniquely, i) enforcing physical constraints enables generalization and extrapolation as well as training with limited data, ii) the utilization of neural networks enables high expressiveness and eliminates the development of problem-specific model form libraries, and iii) pruning leads to interpretable discovery and also enhances generalization and extrapolation.
In the next steps, we will use this approach to obtain functional forms for other constitutive models such as representations for kinematic hardening or viscoelasticity. We have furthermore (purposefully) only used neural networks
Figure 18: Fit of experimental monotonic loading curve with the hardening function represented by a sparse neural network - SS316L stainless steel. (a) Loss and number of active parameters over epochs (b) Uniaxial tension curve; Black dots represent the experimental data; Blue line indicates prediction in training data; Red line is extrapolation.
with a single type of nonlinear activation function. Future work could center around finding representations using a different activation function at each neuron, as a means to enhance the expressivity of our representations and to parsimoniously model even more complex material responses.
## Acknowledgments
JF and NB gratefully acknowledge support by the Air Force Office of Scientific Research under award number FA9550-22-1-0075.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
|
2308.02060 | Accurate Neural Network Pruning Requires Rethinking Sparse Optimization | Obtaining versions of deep neural networks that are both highly-accurate and
highly-sparse is one of the main challenges in the area of model compression,
and several high-performance pruning techniques have been investigated by the
community. Yet, much less is known about the interaction between sparsity and
the standard stochastic optimization techniques used for training sparse
networks, and most existing work uses standard dense schedules and
hyperparameters for training sparse networks. In this work, we examine the
impact of high sparsity on model training using the standard computer vision
and natural language processing sparsity benchmarks. We begin by showing that
using standard dense training recipes for sparse training is suboptimal, and
results in under-training. We provide new approaches for mitigating this issue
for both sparse pre-training of vision models (e.g. ResNet50/ImageNet) and
sparse fine-tuning of language models (e.g. BERT/GLUE), achieving
state-of-the-art results in both settings in the high-sparsity regime, and
providing detailed analyses for the difficulty of sparse training in both
scenarios. Our work sets a new threshold in terms of the accuracies that can be
achieved under high sparsity, and should inspire further research into
improving sparse model training, to reach higher accuracies under high
sparsity, but also to do so efficiently. | Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh | 2023-08-03T21:49:14Z | http://arxiv.org/abs/2308.02060v2 | # Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
###### Abstract
Obtaining versions of deep neural networks that are both highly-accurate and highly-sparse is one of the main challenges in the area of model compression, and several high-performance pruning techniques have been investigated by the community. Yet, much less is known about the interaction between sparsity and the standard stochastic optimization techniques used for training sparse networks, and most existing work uses standard dense schedules and hyperparameters for training sparse networks. In this work, we examine the impact of high sparsity on model training using the standard computer vision and natural language processing sparsity benchmarks. We begin by showing that using standard dense training recipes for sparse training is suboptimal, and results in under-training. We provide new approaches for mitigating this issue for both sparse pre-training of vision models (e.g. ResNet50/ImageNet) and sparse fine-tuning of language models (e.g. BERT/GLUE), achieving state-of-the-art results in both settings in the high-sparsity regime, and providing detailed analyses for the difficulty of sparse training in both scenarios. Our work sets a new threshold in terms of the accuracies that can be achieved under high sparsity, and should inspire further research into improving sparse model training, to reach higher accuracies under high sparsity, but also to do so efficiently.
## 1 Introduction
The difficulty of finding deep neural networks (DNNs) that are both _accurate and sparse_, i.e., closely match the accuracy of dense models while having a large majority of their weights set to zero, is one of the main challenges in the area of model compression. On the conceptual side, this challenge connects to fundamental questions related to the _Lottery Ticket Hypothesis (LTH)_[18, 19], which posited that such sparse masks exist, and that, in some cases, they can even allow accurate training of sparse models _from scratch_, that is, applying the sparsity mask at initialization. On the practical side, obtaining highly-sparse and accurate networks can lead to significant practical speedups, both for inference [56] and training [57].
In this work, we focus on the challenge of obtaining accurate DNNs in the high-sparsity regime, and investigate the barriers to obtaining **highly-sparse** and **highly-accurate** variants of DNNs for standard vision and language tasks. We mainly focus on two tasks that are, arguably, the standard benchmarks for sparsity in vision and language, respectively: image classification using the ResNet50 model [25] on the ImageNet-1K dataset [61], e.g. [27, 14, 21, 16, 67, 65, 60], and language modelling using the BERT-base model [12] on the GLUE benchmark datasets [73], e.g. [64, 27, 42, 43]. Roughly, for both benchmarks, it is known that sparsities lower than 90% can be achieved with approximately 1% accuracy loss relative to the original dense model, but accuracy rapidly decreases in the 90-95% range [27, 16], and that decreases are drastic at higher (\(\geq 95\%\)) sparsities [67, 43]. In this paper, we investigate the reasons behind this accuracy loss due to sparsity, mainly targeting _high sparsity_, i.e. sparsities between 90% and 99%, studying the difficulty of obtaining accurate models in this range.
Contribution. We begin from the observation that, when training sparse models from scratch, following standard _dense training_ schedules, _sparse models show clear evidence of undertraining_: both their accuracy and loss fail to saturate, and their
output continues to have high entropy. This finding suggests that maximization of the accuracy of sparse models requires longer training than the dense optimization recipes adopted in most of the work on model sparsification.
Motivated by this observation, we propose a combination of techniques which can mitigate the inherent difficulty of sparse training. As a consequence, we significantly improve on the best currently-known sparsity-accuracy trade-offs on standard sparsity benchmarks for both image classification and language modelling. More precisely, we obtain, for the first time, highly-accurate sparse versions of ResNet50, such as a 90%-sparse model with 78.5% Top-1 accuracy, a 95%-sparse model with 77.7% Top-1 accuracy, and a 98%-sparse model with 75.2% Top-1 accuracy. In addition, we show that stable results can be obtained even for extreme sparsities (e.g., 99%). For language models, we show that on the most challenging tasks, as measured by the drop in accuracy relative to the dense model, we can improve results by 3 points in accuracy relative to the current state-of-the-art results at 90% sparsity. We arrive at these results as follows:
* We perform an analysis of the output and training characteristics of models trained using current state-of-the-art techniques, relative to their dense counterparts. First, we show that sparse DNNs obtained via many current techniques behave similarly to _undertrained dense models_: specifically, they tend to have high output entropy (alternatively, low "output confidence"), which correlates with their reduced accuracy.
* This analysis provides clear evidence that optimizing _sparse models_ is more difficult than standard _dense_ optimization [15]. This observation stands in contrast to the fact that most current sparsification techniques use standard _dense_ training recipes for fine-tuning and recovery. We exploit this insight to obtain state-of-the-art accuracy for sparse models in two popular scenarios: _sparse pretraining_, i.e. training sparse models from scratch, and _sparse transfer_, i.e. optimizing a sparse pretrained model onto a target transfer task.
* In the _sparse pretraining_ scenario, illustrated by the standard task of obtaining a highly-sparse ResNet50 model on the ImageNet dataset, we show that we can circumvent the difficulty of sparse training by adopting a variant of the Alternating Compressed/Decompressed (AC/DC) algorithm [60] for training sparse DNNs, which has guarantees for sparse recovery. Specifically, we show that, by scaling the algorithm's runtime, we can obtain state-of-the-art results for sparse pretraining on ImageNet for ResNet50 and MobileNet models, and reach extremely high sparsities (e.g. 98% and 99%) while still obtaining stable results. Moreover, only sparse models benefit from extended training, whereas dense models start to overfit with longer training.
* In the _sparse transfer_ scenario, popular in language domain, the difficulty of sparse training can manifest itself through both _undertraining_ and _overfitting_, depending on the parametrization of the chosen transfer learning recipe, specifically on the training length. We address this via a modified version of the _gradual layer unfreezing_ approach [31], tailored towards a _sparse_ transfer learning scenario, which allows us to obtain state-of-the-art results in the case of BERT-base transfer on downstream datasets.
Discussion. Overall, our results suggest that the difficulty of obtaining highly-accurate sparse models is closely linked to the difficulty of accurate sparse optimization using current state-of-the-art techniques. Specifically, our work improves the best known results on standard sparsity benchmarks, for both sparse pretraining and sparse finetuning, both in terms of absolute accuracy and in terms of accuracy loss relative to the dense baseline. Moreover, we observe the following:
* Achieving state-of-the-art sparsity-vs-accuracy trade-offs currently requires using significant additional computational complexity and more epochs for training the sparse models, relative to the best known dense training methods. In turn, this suggests that sparse optimization may be inherently harder than its dense counterpart.
* Reaching high validation accuracy for sparse models is strongly linked to reaching low training loss, which occurs at a slower rate for sparse models in the case of SGD-based optimization. At the same time, we do observe overfitting behavior (decrease of validation accuracy w.r.t. increased training time), especially at lower sparsities.
* To further investigate the hardness of sparse optimization, we perform an analysis of the loss landscape of accurate sparse networks both in terms of sharpness and loss interpolation / mode connectivity. We observe that achieving highly-accurate sparse networks from initialization requires overcoming multiple loss barriers, and that sparsity mask exploration may be a key ingredient for overcoming these barriers.
* In addition, we investigate the relationship between standard hyperparameters such as weight decay, on the one hand, and sparsity structure, on the other. We find that careful setting of weight decay is critical for accurate sparsity, and that weight decay additionally induces (partial) structured sparsity in highly-sparse models. This provides a first explanation to the emergence of structured sparsity in unstructured sparse networks, which has been observed previously [60, 33, 77].
Our results set new accuracy thresholds for sparse models using relatively simple techniques. They should serve as motivation for the community to devise improved _sparsity-aware_ optimization techniques, specifically allowing for faster, more efficient accuracy recovery.
## 2 Related Work
The goal of most sparsification methods [27] is to create a DNN that is as accurate as possible, while maximizing sparsity. This goal can be achieved via different strategies: for instance, _post-training sparsification methods_ assume a _pretrained dense model_, from which weights are removed either in a single step (one-shot) or progressively (gradual pruning). By contrast, in _sparse training methods_, parameters are pruned from the model during training from scratch, either close to initialization [16, 37, 46, 70, 66], or progressively as the model is trained [24, 21, 65]. A subset of sparse training methods are _dynamic_, in the sense that weights may be reintroduced during training [16, 60].
In this work, we mainly focus on the _high-sparsity regime_, in which _sparse training_ methods provide the best known accuracy-vs-sparsity trade-offs. We begin by discussing methods for computer vision. Here, Gradual Magnitude Pruning (GMP), in which the lowest-magnitude weights are progressively removed throughout training, is a common baseline. In [21], GMP was shown to be competitive with more sophisticated pruning methods on image classification models when properly tuned; similar results were later shown for language models [42].
The RigL pruning method [16] is a common, high-performing benchmark for dynamic sparse training. In this method, the weights are initially pruned to the target sparsity and trained through (sparse) stochastic gradient descent. Periodically, however, the mask is updated by selecting the weights with the highest gradient magnitude, subject to a limit on the total mask change. The authors run this method using two sparsity targets: Uniform sparsity, where all layers (except the first and last) are pruned to the same proportion, and Erdős-Rényi Kernel (ERK), where layer sparsity targets are set to optimize performance. The authors test their method in the normal-schedule (100 epochs on ImageNet) and 5x training regimes, obtaining 73.22% and 74.63% validation accuracy at 95% global (ERK) and uniform sparsity, respectively, when training for 500 epochs. Extending training to 10,000 epochs (100x) further allowed the authors to produce 99% sparse (ERK) ResNet50 models with 68.5% accuracy on ImageNet. RigL was improved by combining it with ITOP [49], by altering training hyperparameters to encourage mask exploration, which was shown to improve RigL results at medium (80-90%) sparsity (see Table 1).
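For concreteness, a single RigL-style mask update can be sketched as follows; this is a simplified illustration of the drop-and-grow rule described above, not the authors' implementation, and it omits the limit on the total mask change.

```python
import torch

def rigl_update(weight, grad, mask, frac=0.1):
    """Drop the lowest-magnitude active weights and regrow the same number
    of inactive weights with the largest gradient magnitude (sketch)."""
    active = mask.bool()
    n_swap = int(frac * active.sum())
    # drop: smallest |w| among currently active weights
    w = weight.abs().masked_fill(~active, float('inf'))
    drop_idx = torch.topk(w.flatten(), n_swap, largest=False).indices
    # grow: largest |grad| among currently inactive weights
    g = grad.abs().masked_fill(active, float('-inf'))
    grow_idx = torch.topk(g.flatten(), n_swap).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)   # total density is unchanged
```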
The GraNet [50] method extends this approach by making it gradual: either starting from a dense network and performing RigL-like updates while simultaneously increasing sparsity until the target sparsity is achieved, or starting from a partially sparse (50%) network and doing the same. Models trained with the sparse-init version of GraNet achieved 72.3% validation accuracy at 95% global sparsity when training for 100 epochs.
The AC/DC pruning method [60] alternates dense and sparse pruning phases of several epochs each, effectively co-training dense and sparse models. Similar to RigL, AC/DC was tested in the normal and extended training regime, creating 95% globally sparse ImageNet-1K ResNet50 models with 73.14% top-1 accuracy, and 68.44% top-1 accuracy 98% sparse models after 100 epochs of training. The authors also experiment with extended training times, producing 95% uniform sparsity ResNet50 models with 74.3% validation accuracy.
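The phase structure of AC/DC can be illustrated with a small scheduling helper; the warmup and phase lengths below are illustrative placeholders rather than the values used by the original method.

```python
def acdc_is_sparse_phase(epoch, phase_len=5, warmup=10,
                         final_sparse=10, total=100):
    """Return True if `epoch` falls in a compressed (sparse) phase (sketch).

    After a dense warmup, sparse and dense phases of `phase_len` epochs
    alternate; a longer final sparse phase ensures the returned model
    is sparse."""
    if epoch < warmup:
        return False                    # initial dense training
    if epoch >= total - final_sparse:
        return True                     # final sparse fine-tuning
    return ((epoch - warmup) // phase_len) % 2 == 0

schedule = [acdc_is_sparse_phase(e) for e in range(100)]
```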
Another successful pruning approach is the combination of Powerpropagation [66] with Top-KAST [37]. In Powerpropagation, the weights are reparametrized using \(f(w)=w|w|^{\alpha-1}\) for \(\alpha>1\), effectively encouraging high-magnitude weights to continue increasing while lower-magnitude weights are driven toward 0. Top-KAST is a dynamic sparse training scheme that is largely similar to RigL: in Top-KAST, for a target density \(D\), the gradients of the top \(D^{\prime}<D\) weights are computed in each backpropagation round and allowed to accumulate, and the masks at these respective sparsities are periodically recomputed. The combination of these two methods results in 77.16% accuracy at 90% sparsity when trained for 3x their baseline of 32K steps.
The recently-proposed ST-3 method [69] uses the technique of soft thresholding with straight-through gradient estimation to progressively prune neural networks while allowing weights to move more smoothly between the dense and sparse states. Using this method, the authors were able to achieve ImageNet accuracies of between 74% and 75% at 96% sparsity on ResNet-50, depending on the method variant used.
Additionally, some works have explored the difficulty of sparse optimization [15], proposed changes to dense training pipelines to improve sparse training [1, 35], or focused on the creation of accurate sparse neural networks outside of the standard paradigm of simultaneously searching for the optimal mask and weights. Notably, [49] explored the impact of mask exploration (that is, the total number of explored parameters at any point in sparse training), demonstrating the positive effect of extended training on both sparse network performance and the total number of explored parameters. The STEP [52] learning method explored the interaction of sparsity with the Adam optimizer [38], finding that the masked weights lead to an incorrect estimate of the second moment during optimization; these observations led to the proposal of a new method for N:M sparsity that alleviates these effects. The GradMax method [17] initializes a small neural network, then uses predicted gradients to grow a larger (while still small) neural network by adding additional neurons.
Language models. For language models, the standard compression pipeline consists of two stages: pre-training on a large unlabeled text corpus followed by fine-tuning on a small and labeled task-specific dataset. The former is used to capture the statistical patterns and relationships that exist in the natural language, allowing the model to recognize and even generate various
linguistic patterns. The latter stage, fine-tuning on a downstream task, builds on top of the learned representations and adapts them to solve specific tasks such as text classification, sentiment analysis, duplicate detection, etc. Sparsity has been explored in both stages: pruning during pre-training and pruning during fine-tuning.
Methods such as Movement Pruning [63] and The Optimal BERT Surgeon (oBERT) [43] make use of first-order (gradient) and second-order (curvature) information, respectively, to guide pruning decisions during the fine-tuning stage. However, recent work observed two problems with this approach when applied on small datasets: [80] demonstrated instability due to large variability of estimated importance scores, while [32] observed overfitting despite reduced expressive power due to pruning. From the practical side, this approach is less favorable for practitioners as it requires extensive pruning-domain knowledge to properly configure pruners for each model and dataset combination. Therefore, the main focus of our work is on the other stage, leveraging already sparse pre-trained models with transfer learning to obtain highly accurate task-specific fine-tuned models. Prune Once for All (Prune OFA) [79] and oBERT [43] represent the most recent state-of-the-art techniques addressing this problem. Both methods first prune the model during the pre-training stage, and then apply transfer learning with a fixed sparsity mask to obtain fine-tuned and sparse models on various downstream datasets.
Impact of sparsification beyond top-1 accuracy. An open area of research is the impact that pruning in general, and the choice of pruning method in particular, has on the resulting model. For instance, pruned models have been shown to be more vulnerable to bias [28, 29, 34], and worse at prediction accuracy under distribution shift [48]. Recent works by [8] and [34] investigate the effects of pruning on a range of model trustworthiness metrics and find mixed results, with sparse neural networks having better calibration, but exaggerating spurious patterns in the existing data. Finally, works such as [33] and [9] investigated the capacity of sparse CNNs for domain adaptation via transfer learning, finding that sparsely trained networks can have more generalizable features than dense ones.
## 3 The Difficulty of Sparse Pretraining of Vision Models
### Background
Formally, accurate pruning is a constrained optimization problem which, given the objective of minimizing a loss function \(\mathcal{L}\), aims to find an "optimal" sparsity mask \(\mathbf{M}^{\star}\) with a given target sparsity \(s\), defined as the fraction of zero parameters,2 and weights \(\mathbf{W}^{\star}\) such that
Footnote 2: A _sparsity mask_ is simply a binary tensor of the same dimensions as the model, with \(0\) at the indices of the sparsified entries, and \(1\) at the other indices.
\[\mathbf{M}^{\star},\mathbf{W}^{\star}=\operatorname*{argmin}_{\text{mask }\mathbf{M},\;\text{weights}\;\mathbf{W}}\left[\mathcal{L}(\mathbf{M}\odot\mathbf{W})\right]\quad\text{s.t.}\quad\operatorname{nonzero}(\mathbf{M})\leq(1-s)\,\mathrm{numel}(\mathbf{M}). \tag{1}\]
In its general form, where both the optimal mask and the optimal weights must be determined, this approach is NP-complete [5], even for simple least-squares loss. However, this problem can be made tractable if we assume a fixed mask, or we wish to approximate the sparsity of the mask [2].
In the context of pruning, this procedure can be logically split into 1) determining the sparsity mask \(\mathbf{M}\), which is often separated from 2) the optimization procedure over the non-zero weights. For instance, the standard Lottery Ticket Hypothesis (LTH) approach [18, 9] is to first identify a "ticket" mask by performing weight selection by magnitude over an already-trained model, followed by SGD-based finetuning, using the initialization and the same set of hyperparameters as for dense training.
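A common instance of step 1) is global magnitude selection, sketched below; the resulting mask satisfies the sparsity constraint of eq. (1), and the non-zero weights would then be optimized separately.

```python
import torch

def global_magnitude_mask(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights,
    ranked globally across all tensors in `weights` (sketch)."""
    flat = torch.cat([w.abs().flatten() for w in weights])
    k = max(1, int(sparsity * flat.numel()))    # number of weights to prune
    threshold = torch.kthvalue(flat, k).values  # k-th smallest magnitude
    return [(w.abs() > threshold).float() for w in weights]
```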
While several novel ways of choosing or updating the sparsity mask (step 1) have been investigated, for the second step, that of optimizing the remaining weights, sparse training methods by and large emulate the hyperparameters of the baseline dense model, including the total number of training epochs [21, 36, 16, 60]. However, it is intuitive that the problem of simultaneously finding near-optimal weights and a near-optimal mask may be harder to solve than a standard dense loss minimization problem.
This naturally motivates an in-depth investigation into the following questions: _can optimization over sparse networks converge with the same rate as over dense ones?_, and _are dense training recipes well-suited for sparse training?_ In this paper, we provide evidence that the answer to both questions is _negative_, suggesting that improved optimizers may be required for obtaining accurate sparse models under reduced training budgets.
### Sparse Vision Models Show Evidence of Undertraining
We begin by investigating correlations between the performance and output characteristics of dense and sparse models trained for an increasing number of epochs. Specifically, we examine three key metrics: _Top-1 accuracy_ on the validation/test set, _loss on the train set_, and _prediction entropy_ on the validation/test set for the trained models, while scaling the number of training epochs and the associated hyperparameters correspondingly. We detail these metrics below.
Train Loss and Output Entropy. We examine model fit to the training data via the training (cross-entropy) loss at the last epoch, and output predictions via the information-theoretic notion of _entropy_. Low prediction entropy implies that the prediction weight is largely concentrated in a single class, while high entropy suggests that it is spread out over several classes. Intuitively, the entropy of the model is related to its "confidence" in predictions, and is independent of whether the predictions are correct (and so can be measured on unlabeled data). Conversely, the training loss measures the model's fit to the training data.
We compute the cross-entropy loss and prediction entropy by taking the softmax over the vector of output values of the network and then applying the respective standard formulas, where the cross-entropy is taken with respect to the correct label distribution for the model (1 for the correct class and 0 otherwise). For a network outputting a vector \(Z=(z_{1},z_{2},...,z_{C})\) of size \(C\) with correct label \(L\), the entropy \(H\) and the cross-entropy \(CE\) are given by the following formulas:
\[H(Z)=-\sum\limits_{i=1}^{C}\frac{e^{z_{i}}}{\sum\limits_{j=1}^{C}e^{z_{j}}} \log\left(\frac{e^{z_{i}}}{\sum\limits_{j=1}^{C}e^{z_{j}}}\right)\quad\text{ and}\quad CE(Z)=-\log\left(\frac{e^{z_{L}}}{\sum\limits_{j=1}^{C}e^{z_{j}}} \right). \tag{2}\]
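Both quantities can be computed directly from a model's logits; a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # eq. (2): entropy of the softmax distribution; label-free
    logp = F.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)

logits = torch.randn(8, 1000)           # batch of 8, 1000 classes
labels = torch.randint(0, 1000, (8,))
print(prediction_entropy(logits).mean())  # average entropy H
print(F.cross_entropy(logits, labels))    # average cross-entropy CE
```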
As we will demonstrate, we expect a sufficiently large and well-trained model to have (a) low loss on the training data and (b) fairly low average prediction entropy, while a model that is not well-trained will have high prediction entropy. However, as is conventionally known, continued training of dense and low-sparsity models that results in overfitting will lower these metrics even further.
#### 3.2.1 Example 1: Sparse Training on ImageNet
Experimental setup. We first examine validation accuracy on trained sparse and dense ResNet50 models on the ImageNet-1K dataset and compare it to (a) prediction entropy on the validation dataset and (b) the train loss on the last epoch of training. All models were trained using standard hyperparameters (see Appendix A) except for the difference in the number of training epochs in different experiments. Measurements represent the final accuracy and entropy after the last training epoch, so each marker on the plots represents a full experiment, rather than an intermediate checkpoint. Sparse models were pruned with Alternating Compression/Decompression (AC/DC) [60], likewise adjusting the total number of compressed and decompressed phases to the total run length. AC/DC was chosen as it was among the best-performing methods across all sparsities and training lengths (see Section 3.3.1). We use the FFCV library [45] for fast loading of the data. In contrast with other runs presented in this paper, we do not use progressive resizing or label smoothing, as the latter explicitly encourages high prediction entropy and cross-entropy. In these experiments, we keep the first and last layer dense.
Results. Our results are presented in Figure 1. On the left panel, we show the top-1 accuracy of the final models. We observe that 80% and 90% sparse models reach an accuracy that is similar to dense models, even slightly exceeding dense accuracy at 80% sparsity. Accuracy does drop at higher sparsity (95% and 98%); this is consistent with the original AC/DC paper and results from other pruning methods. Examining accuracy across epoch budgets, and focusing on the best-performing model for each sparsity level, we observe the following:
* _The dense model requires the fewest epochs_ (88) to reach its best validation accuracy, and extending the training recipe results in _worse performance for the dense model_, commonly known as "overfitting."
* _The outcome changes if we examine sparse models_, for which the ideal training length increases with sparsity: 250 epochs for 80% and 90% sparse models, and at least 500 epochs--the longest schedule we tried in this experiment--for
Figure 1: Average validation accuracy (left), train loss at final epoch (center), and entropy (right) for sparse and dense ImageNet models trained for different numbers of epochs. The highest-accuracy model for each sparsity level is highlighted with a larger marker. The cross-entropy loss and entropy level of the dense model are also shown with a dashed line, to simplify comparison.
95% and 98% sparse models. Even at 500 epochs, the accuracy increase/loss decrease for these models does not appear to be saturated.
We now examine loss on the training dataset and prediction entropy for the same experiment in more detail. These two metrics give us two different ways to consider the convergence of our model. The (cross-entropy) loss on the training data shows how well the parameters of the model fit the learning objective; conversely, the prediction entropy on the validation data, although similar in calculation, reflects the model's confidence in its predictions when presented with previously unseen data; additionally, unlike cross-entropy, it is _label-independent_.
We observe that, for all sparsity levels, both metrics behave very similarly. Specifically, both decrease when the number of training epochs is increased, and sparse models trained for the standard 100 epochs show similar training loss and prediction entropy to dense models trained for far fewer epochs. For example, dense models trained for 24 epochs have a similar training loss and prediction entropy to 95% sparse models trained for 100 epochs, while dense models trained for 100 epochs have a slightly lower training loss and prediction entropy than 80% sparse models trained for 250 epochs. When we consider the best-performing models at their respective sparsity levels, we find that they have similar training loss and prediction entropy to the top-performing dense model, in cases where such low loss/entropy can be achieved in a reasonable number of epochs (at 80% and 90% sparsity); at all sparsities, performance drops for models whose training loss and prediction entropy fall below this value.
Discussion. These findings further support our hypothesis that, due to the inherent difficulty of sparse optimization, using standard training recipes is not sufficient for sparse training, and suggest that longer training may mitigate this effect. Further, the results suggest that training loss and/or prediction entropy may be useful criteria to validate that sparse models are properly trained3, with the latter criterion also being useful in cases where access to training data, or to any labeled data, is not possible.
Footnote 3: The 98% sparse model will likely never reach the entropy of the optimal dense model, suggesting that the accuracy may continue to improve with very long training schedules. In fact, the authors of RigL trained a 99% sparse model for 100 times the dense training time and were not able to saturate its accuracy. See www.github.com/google-research/rigl#extended-training-results.
#### 3.2.2 Example 2: Sparse Training on Celeb-A
To validate our findings, we repeat this experiment on the Celeb-A Dataset [51]. This dataset consists of \(202599\) face images of \(10177\) celebrities collected from the public domain, automatically cropped to the face, and annotated with \(40\) binary labels. Due to its content, this dataset is frequently used to study bias in machine learning models, and has also been used in studies on the effect of sparsity on bias [28; 34].
Experimental Setup. Following [34], we train the ResNet18 architecture on this task. We train dense and sparse (90%, 95%, 98%, 99%, and 99.5%) models for a varying number of epochs (5-200); in some cases the very low or very high-epoch runs are skipped if it is clear that the duration will not be optimal for that sparsity. Sparse models are produced with a variant of AC/DC, in which the sparsity of the sparse phases ramps up progressively from 90% to the final target sparsity; this is necessary to prevent layer collapse at very high sparsities. Unlike the ImageNet experiments, here the phase length varies somewhat with duration, due to the extremely short duration of some runs. For each experiment, characterized by a sparsity/epoch-length pair, we measure the accuracy and training loss of the resulting model. In addition, following [34], we measure the _uncertainty_, rather than entropy, of the model predictions on the test set. The prediction uncertainty is computed as follows: first, the sigmoid operator is applied to each logit's output in order to obtain a pseudo-probability of a positive label between 0 and 1; if this
Figure 2: Average validation accuracy (left), train loss at final epoch (center), and uncertainty (right) for sparse and dense CelebA models trained for different numbers of epochs. The highest-accuracy model for each sparsity level is highlighted with a larger marker. The cross-entropy loss and entropy level of the dense model are also shown with a dashed line, to simplify comparison.
quantity is between 0.1 and 0.9, the prediction is considered uncertain. We then compute the proportion of uncertain predictions across the validation dataset.
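A minimal sketch of this uncertainty metric for a batch of multi-label logits, using the thresholds described above:

```python
import torch

def uncertain_fraction(logits, lo=0.1, hi=0.9):
    # a prediction is "uncertain" if its sigmoid pseudo-probability
    # falls strictly between the two confidence thresholds
    probs = torch.sigmoid(logits)
    return ((probs > lo) & (probs < hi)).float().mean()

logits = torch.randn(256, 40)       # e.g. CelebA's 40 binary attributes
print(uncertain_fraction(logits))   # proportion of uncertain predictions
```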
Results. The results are presented in Figure 2. We observe that, consistent with our earlier observations on ImageNet, the optimal training duration goes up with sparsity, with dense models reaching their optimal accuracy at 25 epochs, and 99% and 95% sparse models at 150 epochs. Further, we observe that, even as training loss and test uncertainty always decrease with longer training, the overall training loss and proportion of uncertain predictions go up with sparsity at a fixed training length. As in the ImageNet example, the highest-performing models at each sparsity have a similar training loss of about 0.17 and mean prediction uncertainty of about 24%, except for the very sparse 99.5% model, which has a slightly higher proportion of 26% uncertain predictions.
Discussion. We interpret these results as corroborating evidence that sparse models require longer training than dense models to achieve optimal accuracy before overfitting sets in.
### State-of-the-Art Accurate Sparse Pre-Training on ImageNet
The above observations for vision models suggest that successful sparse training may benefit from an extended training schedule. We now build on this idea to achieve state-of-the-art results for the classic ResNet50/ImageNet benchmark by using an extended-training version of AC/DC, which we call AC/DC++.
#### 3.3.1 Comparing Sparse Training Methods
For the following experiments, we start from the current state-of-the-art training approach for ResNet50/ImageNet training, using the PyTorch FFCV package [45]. In addition to an extended training schedule, we use label smoothing and a linear learning rate decay with warm-up, as well as progressive resizing of input samples.4 In this context, we implemented three leading sparse training methods: Gradual Magnitude Pruning (GMP) [81], RigL [16] and AC/DC [60], which we execute for an increasing number of epochs between 100 (standard) and 1000 (10x). For this, we scale the original training schedule proportionally, following the proportions employed by the original methods. For this experiment, models are compressed to 80%, 90%, and 95% sparsity. Following the most common experimental setup, we prune all weights in convolutional and linear layers (including the input convolution and classification head). The exact training recipe is presented in detail in Appendix A. We note that all the experiments presented in the paper take less than a day on a standard 8-GPU server. The results, in terms of accuracy and loss vs. the number of training epochs, are presented in Figure 3 and Figure 4, respectively.
Footnote 4: We follow the setup from the FFCV ImageNet example repository for ResNet50.
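For intuition, the following is a simplified, runnable PyTorch sketch of the alternating compression/decompression schedule at the core of AC/DC (illustrative only: the actual recipe additionally uses a dense warm-up, a longer final sparse phase, and, for the CelebA variant above, a sparsity ramp; all names here are ours):

```python
import torch

def magnitude_mask(w, sparsity):
    # Keep the (1 - sparsity) fraction of weights with the largest magnitude.
    k = max(int(w.numel() * sparsity), 1)
    thresh = w.abs().flatten().kthvalue(k).values
    return (w.abs() > thresh).float()

model = torch.nn.Linear(64, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
target_sparsity, phase_len = 0.95, 5
mask = None
for epoch in range(40):
    sparse_phase = (epoch // phase_len) % 2 == 1   # alternate dense/sparse phases
    if sparse_phase and epoch % phase_len == 0:
        mask = magnitude_mask(model.weight.data, target_sparsity)
    for _ in range(10):                            # dummy steps on random data
        x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        if sparse_phase:
            model.weight.data *= mask              # keep pruned weights at zero
```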
Results. The results show a strong correlation between how well the methods reduce the training loss and their validation accuracy. This reinforces the point that sparse training methods saturate more slowly, both in terms of training loss and validation accuracy. This has also been investigated by prior work: Gale et al. [21] found that extended training did improve results for GMP in some cases, while RigL [16] and Powerpropagation [66] found diminishing improvements. At the same time, we notice a significant difference between methods: specifically, AC/DC starts at a slightly better accuracy point, and consistently outperforms other methods both in terms of the loss achieved and in terms of validation accuracy as we increase training time. (This is consistent with the original AC/DC results, executed at 100 epochs [60].) We observe that this correlates with the theoretical computational cost (FLOPs) of the methods: AC/DC uses more FLOPs than the other methods due to its dense training phases, while GMP uses more FLOPs than RigL due to its gradually increasing sparsity. In turn, this could also be correlated with the amount of mask exploration performed by the algorithm during training. At low sparsity, RigL performs slightly better than GMP, but at higher sparsity GMP appears to perform better. At the lowest sparsities (80% and 90%), AC/DC reaches a saturation point, whereas in all other setups model performance continues to improve with the training budget.
Figure 3: Validation accuracy on ImageNet-1k vs. number of epochs for different sparse training methods.
#### 3.3.2 Sparsity-vs-Accuracy Results
Goals and Metrics. Based on these results, in this section we aim to improve the best known sparsity-versus-accuracy trade-offs by performing a thorough ablation over sparsities and training-length parameters. We compare our results to the highest-performing previously published sparse training methods. In particular, we compare AC/DC++ against results reported in the original RigL, ST-3, and Powerpropagation papers, as well as many other existing pruning methods.5 All methods are described in Section 2. In cases where the authors conducted extended training using their method, we present those numbers, and we use the FLOPs-optimized ST-3\({}^{\sigma}\) variant. AC/DC++ candidate models were trained for four preset training lengths (1x, 2.5x, 5x and 10x the standard ImageNet training time on ResNet50) at all sparsity levels, and we chose the best results obtained by ablating over the length of the training run.
Footnote 5: The most successful Powerpropagation approach presented in the paper combines this method with Top-KAST; we use this benchmark, as it performs better than Top-KAST individually.
Figure 4: Training loss on ImageNet-1k vs number of epochs for different sparse training methods.
Figure 5: Comparison of accuracy change from the dense baseline as a function of inference FLOPs for leading sparse training methods, under uniform sparsity constraints **(left)** and global sparsity constraints **(right)**. Due to the lack of a standard benchmark, global and Erdős-Rényi Kernel (ERK) sparsity constraints were grouped together. Both sparsity schedules of AC/DC++ (with all layers sparsified and with the first and last layer kept dense) are plotted together.
As different methods have different computational budgets and different dense baselines, to ensure a fair comparison, we examine the model performance both in terms of _Top-1 Validation accuracy_, and the _Top-1 Validation accuracy difference from the corresponding dense baseline_. We use the best available numbers originally reported in the papers introducing the methods for comparisons.
Experimental Setup. We compare two pruning regimes. First, we consider _Uniform Pruning_, in which every layer is pruned exactly to the target sparsity, except for the first and last layer, which are left dense.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Top-1 accuracy (\%)} & \(\Delta\) Accuracy & Sparsity & Remaining & Inference FLOPs \\ Method & Dense (\(D\)) & Sparse (\(S\)) & \(100\times\frac{S-D}{D}\) & (\%) & \# of params & prop. of dense \\ \hline Sparse training & & & & & & \\ AC/DC [60] & 76.8 & 75.03 & -1.77 & 90 & 2.56 M & 0.18 \\ GraNet(\(s_{0}=0.5\)) [50] & 76.80 & 74.5 & -1.3 & 90 & - & 0.20 \\ Powerpropagation + Top-KAST FLD [66] & 76.80 & 75.23 & -1.57 & 90 & - & - \\ Powerpropagation + Top-KAST ERK [66] & 76.80 & 75.74 & -1.06 & 90 & - & 0.24 \\ RigL ERK 1x [16] & 76.80 & 73.00 & -4.94 & 90 & - & 0.24 \\ RigL-ITOP ERK 1x [49] & 76.80 & 73.82 & -2.98 & 90 & - & 0.24 \\ ST-3 [69] & 77.10 & 75.28 & -1.82 & 90 & - & 0.24 \\ STR [44] & 77.01 & 74.31 & -3.51 & 90.23 & 2.49 M & - \\ Variational Dropout [55] & 76.69 & 73.84 & -3.72 & 90.27 & 2.49 M & - \\ \hline Post-training sparsification & & & & & & \\ Global Magnitude [67] & 77.01 & 75.15 & -2.42 & 90 & 2.56 M & - \\ WoodFisher [67] & 77.01 & 75.21 & -2.34 & 90 & 2.56 M & - \\ \hline Extended sparse training & & & & & & \\ AC/DC++ 5x (this work) & 78.78 & 78.49 & -0.29 & 90 & 2.60 M & 0.2 \\ AC/DC++ FLD 5x (this work) & 78.78 & 78.6 & -0.18 & 90 & 4.45 M & 0.22 \\ GMP FLD 1.5x [21] & 76.69 & 75.16 & -1.53 & 90 & - & - \\ GraNet(\(s_{0}=0.5\)) 2.5x [50] & 76.80 & 76.4 & -0.4 & 90 & - & 0.20 \\ Powerpropagation + Top-KAST ERK 3x [66] & 76.80 & 77.16 & +0.36 & 90 & - & 0.24 \\ RigL ERK 5x [16] & 76.80 & 76.42 & -0.38 & 90 & - & 0.24 \\ RigL-ITOP ERK 5x [49] & 76.80 & 75.50 & -1.30 & 90 & - & 0.24 \\ \hline \hline Sparse training & & & & & & \\ AC/DC [60] & 76.8 & 73.14 & -3.66 & 95 & 1.28 M & 0.11 \\ GraNet(\(s_{0}=0.5\)) [50] & 76.80 & 72.3 & -6.5 & 95 & - & 0.12 \\ Powerpropagation + Top-KAST FLD [66] & 76.80 & 73.25 & -3.55 & 95 & - & - \\ RigL ERK 1x [16] & 76.80 & 70.00 & -8.85 & 95 & - & 0.12 \\ ST-3 [69] & 77.10 & 74.46 & -2.64 & 95 & - & 0.13 \\ STR [44] & 77.01 & 70.40 & -8.58 & 95.03 & 1.27 M & - \\ Variational Dropout [55] & 76.69 & 71.81 & -6.36 & 94.94 & 1.30 M & - \\ \hline Post-training sparsification & & & & & & \\ Global Magnitude [67] & 77.01 & 71.72 & -6.29 & 95 & 1.28 M & - \\ WoodFisher [67] & 77.01 & 72.12 & -6.89 & 95 & 1.28 M & - \\ M-FAC [20] & 77.01 & 72.6 & -4.41 & 95 & 1.28 M & - \\ \hline Extended sparse training & & & & & & \\ AC/DC++ 10x (this work) & 78.78 & 77.27 & -1.48 & 95 & 1.33 M & 0.13 \\ AC/DC++ FLD 10x (this work) & 78.78 & 77.7 & -1.08 & 95 & 3.28 M & 0.14 \\ GMP FLD 1.5x [21] & 76.69 & 72.71 & -3.98 & 95 & 1.28 M & - \\ RigL ERK 5x [16] & 76.80 & 74.63 & -2.17 & 95 & 1.28 M & 0.12 \\ \hline \hline Sparse training & & & & & & \\ AC/DC [60] & 76.8 & 68.44 & -9.36 & 98 & 0.7 M & 0.06 \\ ST-3 [69] & 77.10 & 70.46 & -6.64 & 98 & - & 0.07 \\ STR [44] & 77.01 & 70.40 & -8.58 & 98 & - & - \\ Variational Dropout [55] & 76.69 & 64.52 & -15.87 & 98.57 & 0.36 M & - \\ \hline Post-training sparsification & & & & & & \\ Global Magnitude [67] & 77.01 & 67.5 & -9.51 & 98 & - & - \\ WoodFisher [67] & 77.01 & 65.55 & -11.46 & 98 & 0.51 M & - \\ \hline Extended sparse training & & & & & & \\ AC/DC++ 10x (this work) & 78.78 & 74.06 & -4.72 & 98 & 0.51 M & - \\ AC/DC++ FLD 10x (this work) & 78.78 & 76.6 & -2.28 & 98 & 2.58 M & 0.09 \\ \hline \hline Sparse training & & & & & & \\ ST-3 [69] & 77.10 & 63.88 & -13.22 & 99 & - & 0.04 \\ \hline Extended sparse training & & & & & & \\ AC/DC++ FLD 10x (this work) & 78.78 & 72.7 & -6.08 & 99 & 2.34 M & 0.06 \\ RigL ERK 5x [16] & 76.80 & 61.86 & -15.94 & 99 & - & 0.05 \\ RigL ERK 10x [16] & 76.80 & 63.89 & -12.91 & 99 & - & 0.05 \\ RigL ERK 50x [16] & 76.80 & 66.94 & -9.86 & 99 & - & 0.05 \\ RigL ERK 100x [16] & 76.80 & 68.15 & -8.65 & 99 & - & 0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between modern sparse training methods on ImageNet-1k with ResNet-50 models for various sparsity targets. ERK refers to the Erdős-Rényi Kernel sparsity distribution. FLD refers to the first and last layers being dense (AC/DC++) or the first layer being dense and the last layer being 80% sparse (GMP, Powerpropagation).
Second, we consider the _Global/Nonuniform Pruning_ regime, in which the sparsity budget is set globally. Different works apportion the global budget differently, and also differ with respect to which parts of the network are subject to the global constraint. In particular, Extended GMP [21] and Top-KAST do not prune the first layer, prune the last layer to a fixed 80% sparsity, and prune the other layers using a global magnitude criterion. RigL uses an Erdős-Rényi-Kernel distribution for layer sparsity targets, and leaves only the first layer dense. The original AC/DC work uses global sparsity and prunes all convolutional and FC layers. Therefore, to create a fairer comparison, we consider the estimated floating-point operations (FLOPs) necessary for inference; these are computed as in [16]. Using FLOPs also equalizes methods across slight variations in ResNet50 architectures, and so we use it also for the Uniform pruning comparison. In addition, we use two pruning schedules for AC/DC++: one which leaves the first and last layer dense and prunes the remaining layers using a global magnitude criterion, and one that prunes all layers using the global magnitude criterion. We do not ablate between the two, but rather present both sets of results in Figure 5 (jointly) and Table 1 (separately).
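As a rough illustration of this FLOPs accounting (our own simplification of the procedure in [16], ignoring activation-dependent terms), the inference cost of an unstructured-sparse network can be estimated by scaling each layer's dense multiply-accumulate count by its weight density:

```python
def sparse_inference_flops(layers):
    """layers: iterable of (dense_macs, n_weights, n_nonzero) per layer."""
    total = 0.0
    for dense_macs, n_weights, n_nonzero in layers:
        density = n_nonzero / n_weights        # fraction of remaining weights
        total += 2 * dense_macs * density      # 2 FLOPs (multiply + add) per MAC
    return total

# Example: a single layer with 1e9 dense MACs pruned to 90% sparsity.
print(sparse_inference_flops([(1e9, 25_000_000, 2_500_000)]))  # 2e8 FLOPs
```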
We emphasize two key points regarding our comparisons:
1. Looking at accuracy alone favors AC/DC++, as it has a higher dense baseline: since we use several recent training innovations, the dense model can reach close to 79% accuracy over 100 epochs. Therefore, it becomes more challenging to maintain the performance of the dense model for highly sparse models than it is against a less-optimized baseline.
2. This is why we also examine the _accuracy difference relative to the dense baseline_: this favors other methods, as they are benchmarked against a standard-recipe model that reaches a lower 76.8% accuracy (77.1% for ST-3).
Results. The results are presented in Figure 5 and Table 1. We observe that, for uniform pruning budgets, the AC/DC++ models outperform other methods, both in terms of absolute and relative validation accuracy. This is true even when we consider extended-training schedules for other methods, although we believe we are the first to systematically investigate the impact of increasing training schedules at these sparsity levels.6 When looking at models trained with global pruning budgets, we observe that AC/DC++ obtains the highest absolute validation accuracy, compared to results reported previously in the literature. When considering accuracy change from the dense baseline, AC/DC++ loses less accuracy than other methods at very high sparsities (lowest FLOPs), despite having the highest-performing dense baseline; at lower sparsity (90%), it is competitive with other extended training methods.
Footnote 6: In prior work, RigL executed >5x extended training for a 99%-sparse model only [16].
### Additional Results
Different sparsity patterns. Since the ResNet models increase the number of channels as the feature map resolution decreases, one would expect the bottom layers (those with more channels) to be pruned more aggressively than those with fewer channels. Our results are consistent with the known observation from the literature that global sparsity achieves higher accuracy than uniform sparsity at the same overall sparsity level. In addition, we have carried out experiments with the more hardware-friendly block-4 sparsity pattern (Figure 6).
Figure 6: Accuracy vs sparsity for different sparsity distributions. Block4 denotes global pruning with weights pruned in groups of 4.
MobileNet results. In addition, we sparsified the MobileNet-V1 model [30], a CNN optimized for inference on mobile devices. We applied AC/DC for 1000 epochs with sparsity targets of 75% and 90%, using a training recipe similar to that of ResNet50, except for some differences specified in Appendix A. To achieve the best results, we do not prune the input convolution, the classification head, or the depthwise convolutions, due to their minor contribution to the overall number of FLOPs and their significant impact on the performance of the model.
With this longer training recipe, one can achieve an almost negligible accuracy drop at 75% sparsity and a moderate performance decrease at 90%. The results can be found in Table 2.
### Mask analysis
Sparse methods considered in our work differ in the amount of sparsity mask exploration. GMP gradually increases sparsity; once the weight is pruned, it is never reintroduced. RigL decreases the fraction of updated parameters following the cosine annealing rule:
\[f_{decay}(t;\alpha;T_{end})=\frac{\alpha}{2}\left(1+\cos\left(\frac{\pi t}{T_{end}}\right)\right) \tag{3}\]
This fraction of connections is dropped and reintroduced in a single step. AC/DC makes all parameters trainable during decompression phases, therefore any parameter could potentially be reintroduced. However, as shown later, some fraction of the weights remains zero even during decompression. To measure the difference between two consecutive sparsity masks, we compute their IoU (Intersection over Union): the number of parameters that are nonzero in both checkpoints divided by the number of parameters that are nonzero in either checkpoint. A high IoU value (close to 1) means that the two sparsity masks overlap significantly, whereas a low IoU (close to 0) implies low similarity between them.
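A small PyTorch sketch of this IoU measurement (our own helper, not the paper's code) is given below; masks are taken to be the nonzero patterns of the prunable weight tensors of two checkpoints:

```python
import torch

def mask_iou(state_a, state_b):
    inter, union = 0, 0
    for name, wa in state_a.items():
        if wa.dim() <= 1:                  # skip biases / batch-norm parameters
            continue
        ma, mb = wa != 0, state_b[name] != 0
        inter += (ma & mb).sum().item()
        union += (ma | mb).sum().item()
    return inter / union

# Example with two independent ~5%-dense random masks.
a = {"w": torch.randn(256, 256) * (torch.rand(256, 256) > 0.95)}
b = {"w": torch.randn(256, 256) * (torch.rand(256, 256) > 0.95)}
print(mask_iou(a, b))                      # around 0.026 for independent masks
```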
We have taken checkpoints saved at the \(109^{th},119^{th},\ldots,999^{th}\) epochs (taken at the end of every AC/DC step, and at the same epochs for the other methods for a consistent comparison) collected during 1000-epoch runs with 95% and 98% target sparsity, and measured the IoU between two consecutive masks for the pruned parameters (skipping biases and batch-norm parameters). For GMP and RigL, the mask IoU can be computed analytically from the update rule. The evolution of the sparsity mask IoU during training is presented in Figure 7. One can observe that AC/DC shows significantly stronger mask exploration than GMP and RigL. This behavior could account for the better performance of AC/DC as a sparse trainer.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Top-1 accuracy (\%)} & Relative Drop & Sparsity \\ \cline{2-3} Method & Dense (\(D\)) & Pruned (\(P\)) & \(100\times\frac{P-D}{D}\) & (\%) \\ \hline AC/DC++ & 72.74 & 72.49 & -0.25 & 75.00 \\ & 72.74 & 70.80 & -1.94 & 90.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sparse training of MobileNet-V1 with AC/DC++ on ImageNet-1k.
Figure 7: Mask IoU between two consecutive checkpoints.
### Structured sparsity in unstructured sparse models
In this work, we consider only _unstructured_ sparsity; therefore, groups of weights, and entire channels in particular, do not have to be sparse. However, we observed that some of the channels in convolutional kernels are entirely pruned. The effect becomes more pronounced with higher sparsity and longer training.
In Table 3, we show the fraction of sparse _output_ channels for layers at the front, middle and end of the model, and the global fraction of zero channels across all convolutional layers in the model. We observe that channel sparsity increases proportionally with the unstructured sparsity target and with training time, and that, for high sparsity, we obtain a very high proportion of zeroed-out output channels, especially in the wider bottom layers. This is in line with previous work observing the emergence of structured sparsity in dynamic sparse training [60, 33, 77]. We provide a first explanation for this behavior in the next section.
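The measurement itself is straightforward; a possible PyTorch helper (ours, not the paper's code) is:

```python
import torch

def zero_output_channel_fraction(conv_weight: torch.Tensor) -> float:
    """conv_weight: (out_channels, in_channels, kH, kW) tensor."""
    per_channel_mass = conv_weight.flatten(1).abs().sum(dim=1)
    return float((per_channel_mass == 0).float().mean())

w = torch.randn(64, 64, 3, 3)
w[:16] = 0.0                               # zero out the first 16 filters
print(zero_output_channel_fraction(w))    # 0.25
```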
### Impact of weight decay on sparsity and model performance
Recall that AC/DC makes all parameters trainable during decompression; therefore, one might expect the sparsity during the decompression phases to be near zero. However, we observed that a large fraction of the weights remains zero even when the sparsity mask is not imposed. This effect is linked to the channel sparsity discussed in the previous section: once a channel is completely zeroed out, it will continue to receive zero gradients even when the sparsity mask is removed. Further, we provide evidence that this phenomenon is linked to the weight decay mechanism, and in particular to the value of this hyperparameter: intuitively, weight decay slowly drives weights towards zero; whenever a full channel is zeroed out in the compression phase, it remains "captured" under the sparsity mask until the end of training.
We investigated this empirically by training ResNet50 models on ImageNet with 95% compression sparsity using AC/DC++ for 100 epochs, varying the weight decay parameter from \(10^{-6}\) to \(10^{-3}\). Our results are shown in Figure 8. We observe that the fraction of zero parameters increases with the magnitude of weight decay and also over the course of training. Concretely, we observe that all weight decay values lead to almost fully dense models during the first decompression phase. From there, very low weight decay values of \(10^{-6}\) and \(10^{-5}\) lead to very little sparsity during the next two decompression phases, and about 10% sparsity during the final five. Conversely, a very high weight decay of \(10^{-3}\) leads to an immediate jump to 50% sparsity during the second decompression phase, which then increases to 60% over the rest of training. The intermediate value of \(10^{-4}\), which is the standard setting and was used in our experiments, leads to an intermediate sparsity, which gradually rises to about 24% over successive decompression phases.
We further present the accuracy of the resulting models in Table 4. We observe that properly setting the weight decay hyperparameter is crucial for the good performance of AC/DC++, and confirm that the standard value of \(10^{-4}\) is close to optimal.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Sparsity} & \multirow{2}{*}{Epochs} & \multicolumn{4}{c}{Channel sparsity} \\ & & layer1.0.conv2 & layer2.1.conv2 & layer4.2.conv2 & avg \\ \hline
80 & 100 & 0 & 0 & - & 1.07 \\
80 & 1000 & 18.75 & 0 & 35.35 & 4.91 \\ \hline
90 & 100 & 4.69 & 1.56 & 0.78 & 3.58 \\ & 1000 & 31.25 & 19.53 & 61.91 & 10.68 \\ \hline
95 & 100 & 0 & 7.03 & 22.85 & 8.8 \\ & 1000 & 26.56 & 22.66 & 80.27 & 16.76 \\ \hline
98 & 100 & 7.81 & 12.5 & 75.59 & 23.65 \\ & 1000 & 48.44 & 37.5 & 96.68 & 27.93 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Fraction of zero output channels for specific layers and on average.
Sparse Decompression. Building on this observation, we ask how imposing a fixed minimal sparsity also during the _decompression stage_ impacts the final performance. We conducted a few 100-epoch AC/DC experiments with 95% target sparsity, setting the sparsity during decompression to a fixed value smaller than the compression sparsity. As Table 5 shows, the performance is almost unaffected up to 80% decompression sparsity, showing that full mask exploration is not necessary during training.
### Loss landscape analysis
In order to gain more insight into the reasons for the difficulty of optimizing sparse neural networks, we investigate two properties of the loss landscape. First, we measured the _landscape sharpness at the end of training_, defined as the maximal eigenvalue of the Hessian matrix, for all sparse training methods considered, across various sparsities and numbers of training epochs, and compared it with that of standard dense training. Second, we interpolated the training and validation loss between checkpoints obtained at intermediate steps throughout the 1000-epoch AC/DC run with 95% target sparsity.
The largest Hessian eigenvalue was estimated via the power iteration method based on Hessian-vector products, using a customized version of the Eigenthings library [22]. More details about our experimental setup are provided in Appendix I. In Figure 9, we observe that, across all methods, sharpness increases with the length of the training run, indicating that sharper minima require extended training to be reached via SGD. Additionally, sharpness decreases as sparsity increases. All sparse training methods attain lower sharpness than the dense model. Models trained with AC/DC and RigL have slightly lower sharpness than models trained with GMP, presumably because the former two methods manage to reach flatter optima, which are conjectured to have better generalization properties [54].
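The following is a compact, self-contained sketch of this estimator (our own simplified version, analogous in spirit to what the Eigenthings library provides):

```python
import torch

def top_hessian_eigenvalue(loss_fn, params, iters=50):
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        loss = loss_fn()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        hv = torch.autograd.grad(grads, params, grad_outputs=v)   # Hessian-vector product
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]                                # power-iteration update
    # Rayleigh quotient with the unit-norm final vector approximates lambda_max.
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    hv = torch.autograd.grad(grads, params, grad_outputs=v)
    return float(sum((h * u).sum() for h, u in zip(hv, v)))

# Tiny check: the quadratic loss 0.5*(x0^2 + 4*x1^2) has lambda_max = 4.
x = torch.nn.Parameter(torch.tensor([1.0, 1.0]))
print(top_hessian_eigenvalue(lambda: 0.5 * (x[0] ** 2 + 4 * x[1] ** 2), [x]))  # ~4.0
```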
\begin{table}
\begin{tabular}{c c} \hline \hline Weight decay & Top-1 accuracy (\%) \\ \hline \(10^{-6}\) & 70.52 \\ \(10^{-5}\) & 73.54 \\ \(10^{-4}\) & 74.90 \\ \(10^{-3}\) & 68.57 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Accuracy vs weight decay.
Figure 8: Sparsity during decompression phases for 100-epoch AC/DC++ runs with varying values of weight decay. We point out that no sparsity is enforced during decompression phases.
\begin{table}
\begin{tabular}{c c} \hline \hline Decompression sparsity & Top-1 accuracy (\%) \\ \hline
0 & 74.82 \\
50 & 75.09 \\
60 & 74.81 \\
70 & 74.83 \\
80 & 74.38 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Accuracy vs decompression sparsity.
Figure 9: Sharpness (highest eigenvalue) of the loss surface vs number of epochs. Dashed lines correspond to the dense model.
To examine mode connectivity behavior, in Figure 10, we connected the checkpoints obtained at the \(99^{th},199^{th},\ldots,999^{th}\) epochs via piecewise-linear curves. A notable observation is that all checkpoints are separated by a loss barrier whose height increases with the number of epochs. This behavior is likely a manifestation of the progressive sharpening phenomenon [11], where model sharpness increases gradually with training until reaching a peak value and then plateaus. Yet, for sparse models, the duration of even the longest runs is not sufficient to reach the sharpness plateau.
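A bare-bones sketch of the interpolation procedure (our own helper names; for networks with batch normalization, the running statistics of the interpolated model would additionally need to be recalibrated) is:

```python
import copy
import torch

def interpolate_loss(model, state_a, state_b, eval_loss, n_points=11):
    losses = []
    for i in range(n_points):
        alpha = i / (n_points - 1)             # fraction of the path traversed
        mixed = {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}
        probe = copy.deepcopy(model)
        probe.load_state_dict(mixed)
        losses.append(eval_loss(probe))        # user-supplied loss evaluation
    return losses

# Tiny usage example with random "checkpoints" of a linear model.
net = torch.nn.Linear(4, 2)
sa = {k: torch.randn_like(v) for k, v in net.state_dict().items()}
sb = {k: torch.randn_like(v) for k, v in net.state_dict().items()}
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
print(interpolate_loss(net, sa, sb,
                       lambda m: torch.nn.functional.cross_entropy(m(x), y).item()))
```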
### Additional quality evaluation of AC/DC++
Having demonstrated that extended training has a strong positive effect on sparse model top-1 test accuracy, we further investigate the impact of extended training on other aspects of model quality. We consider two additional quality metrics: their performance in transfer learning scenarios and robustness to common image perturbations.
[33] demonstrated that equally sparse models with comparable performance on the original task can vary widely in their performance once finetuned on other, smaller transfer tasks. We compare the transfer performance of dense and 95% sparse AC/DC++ models, both trained for 100 and 1000 epochs in two transfer learning regimes: linear finetuning, where the hidden layers of the model are trained only on the larger (ImageNet) task, and only the final FC layer is trained on the transfer task, and full-network finetuning, where all layers are finetuned on the transfer task. We find that extended training improves the transfer performance for both transfer scenarios for 95% sparse models, but is largely neutral for dense models. Full details of the experiment and evaluation are given in Appendix F.
We test robustness by measuring model performance on the ImageNet-C dataset [26], which digitally adds 19 types of perturbations to the ImageNet-1K validation set. [48] and [28] have found that compressed models are less robust under many types of perturbations, compared to dense models. As before, we consider dense and 95% sparse AC/DC++ models trained for 100-1000 total epochs. We find that robustness to perturbations increases with training time for sparse models, but stays the same for dense ones. Full details of the experiment and evaluation are given in Appendix G.
## 4 The Difficulty of Sparse Transfer in Language Modelling
Motivated by our findings for computer-vision models in previous sections, we extend the analysis to language models, specifically to the very common scenario in which a large language model (BERT-base) is adapted to a specific task via finetuning. In contrast to Section 3.9, where we examined the effect of extended training on the quality of features created in upstream training, here we examine the impact of sparsity on the optimal recipe for the task of finetuning the model on the downstream dataset.
This setup naturally leads to the following questions: _"do finetuned sparse language models suffer from being undertrained on the downstream task?"_, and _"if yes, does the simple recipe of extended training suffice to mitigate the issue?"_. In this section, we will show that when dense finetuning recipes are used for sparse transfer learning in language models, the resulting models are indeed undertrained and have poor transfer performance. However, we also note an additional difficulty: extended training does not suffice to mitigate the issue, because sparse language models quickly shift from being undertrained to an overfitting regime. The latter is a far larger problem in language understanding tasks than in visual ones, which is likely why we do not observe the same issues with visual transfer learning in Appendix F: there, we simply use a long finetuning schedule in all cases. In this section, we explore the problem of balancing under- and over-training in sparse language models and propose a sparse finetuning recipe for creating properly tuned sparse models.
Figure 10: Loss interpolation curves on the training (**left**) and validation (**right**) checkpoints of the 95% sparse 1000-epoch AC/DC model. \(\alpha\) corresponds to the fraction of the path traversed from the first checkpoint to the last. Stars denote checkpoints between which the loss is interpolated.
### Under Standard Dense Transfer Learning Recipes, Sparse Models are Undertrained
Experimental Setup. In our experiments, we make use of the open-sourced _sparse_ pre-trained BERT-base models obtained by [43]. On top of these, we apply various transfer learning recipes to obtain finetuned sparse models on datasets from the popular GLUE benchmark [73]. For fair comparison with results from prior work, we employ early stopping for all methods. We provide more details about each dataset in Appendix H.
The most popular and widely adopted dense transfer learning recipe consists of finetuning all weights with a linearly decaying learning rate for two or three epochs on the target downstream task. In Table 6, we present results obtained with this approach when applied to sparse models, and denote it as the _dense-transfer recipe_. Under this transfer learning recipe, we clearly observe significant gaps (up to 14 accuracy points on RTE and CoLA) between the transfer accuracy of the _dense model_ (_Dense BERT-base_) and that of the _sparse model_ (_Dense-transfer recipe_).
### Extended Training Shifts from Undertraining to Overfitting
Observing that the dense transfer learning recipe does not produce competitive sparse finetuned models, we attempt to scale the length of the recipe to mitigate undertraining. Surprisingly, for sparse language models, this simple technique does not yield a setup with consistently better results, as models quickly shift from undertraining to an overfitting regime in which the training loss goes to zero while validation accuracy decreases sharply. To demonstrate this overfitting effect with the extended recipe, in Table 6 we compare results obtained with this approach (_Extended dense-transfer recipe_) against results obtained by doing a full sweep of finetuning runs with rescaled recipes for #epochs \(\in\{1,2,3,...,\text{extended}-1\}\) (_Full sweep of rescaled recipes_).
The results suggest that with the existing recipes, there is no one-size-fits-all solution. Versions of this rescaling approach have been utilized by prior works like [43] and [79] to obtain accurate sparse models on various downstream datasets. However, this approach comes with a huge computational burden: for each rescaled recipe, a full hyperparameter sweep over relevant parameters has to be done in order to obtain competitive finetuned sparse models. Due to practicality and associated costs, this is not a desirable solution in practice.
### Sparse Transfer Learning for Language Models
In the previous section, we have demonstrated the following three problems with the existing approach of either using the dense finetuning recipe, or simply extending it for sparse finetuning:
1. following dense-transfer recipes, sparse language models are undertrained;
2. even at high sparsities, these models can still exhibit overfitting behavior under the extended training regime;
3. finding the optimal recipe to mitigate undertraining and overfitting has major computational burdens.
To address these issues, we propose a simple approach for sparse transfer in NLP, which produces highly accurate and competitive sparse models on a wide range of downstream datasets with minimal hyperparameter tuning. Our technique is inspired by the idea of gradual layer unfreezing presented in the ULMFiT framework [31], which introduced a universal framework for finetuning _dense_ language models for text-classification tasks, with a focus on LSTM models [54]. Based on ULMFiT and the findings of [78], which suggest that different layers capture different information and should therefore be finetuned to different extents, we adopt the idea of gradual unfreezing and adjust it for _transformer-based_ [71] _sparse_ language models.
\begin{table}
\begin{tabular}{c|c c c c c c c c} Sparse-transfer & RTE & QNLI & MRPC & SST-2 & CoLA & STS-B & MNLI & QQP \\ & Acc & Acc & Acc & Acc & Mcc & Pear & Acc & Acc \\ \hline Dense BERT-base (baseline) & 66.1 & 91.3 & 85.5 & 93.0 & 56.8 & 88.9 & 84.6 & 91.5 \\ \hline Dense-transfer recipe & 52.4 & 88.9 & 82.8 & 91.2 & 42.5 & 87.1 & **82.2** & 90.0 \\ Extended dense-transfer recipe & 55.2 & 88.7 & **85.6** & 91.4 & 47.2 & 87.6 & 81.6 & 90.3 \\ Full sweep of rescaled recipes & **57.0** & **89.3** & 84.1 & **92.0** & **48.5** & **88.0** & **82.2** & **90.4** \\ \hline Best recipe length & 5 ep & 2 ep & 5 ep & 2 ep & 7 ep & 4 ep & 3 ep & 5 ep \\ \end{tabular}
\end{table}
Table 6: Sparse-transfer performance of 90% sparse pre-trained BERT-base model on the dev-set of the corresponding GLUE task, obtained with dense and extended dense (#epochs=8) transfer learning recipes, as well as with the full sweep of rescaled recipes (#epochs \(\in\{1,2,...,7\}\)).
More specifically, we focus on the popular BERT-base model which consists of three groups of layers: embeddings, 12 identical transformer blocks, and a task-specific classifier head. Sparsified versions of this model, which are the main interest of this work, prune all linear layers across all transformer blocks, which is the standard practice in literature [64, 42, 43, 79] and brings the best accuracy-vs-latency trade-offs [43].
Our approach can be summarized as follows. For each downstream task, we start from a sparse pre-trained model produced by [43] and randomly initialize a task-specific classifier head. Then we freeze all embeddings and sparsified linear weights, while keeping their biases and corresponding LayerNorm [3] layers unfrozen and trainable. We start by finetuning only the classifier head and all other trainable parameters (biases and LayerNorms) for one epoch, and then follow the same process from back-to-front by unfreezing the unpruned linear weights in preceding transformer blocks. After the last layer is unfrozen and finetuned, we continue finetuning all layers together for one more epoch.
Given that at each epoch we have a different trainable architecture (one more sparse transformer block unfrozen relative to the previous epoch), we finetune it with a linearly decaying learning rate and then rewind the rate back to its initial value for the next epoch. We also tried the slanted triangular learning rate schedule proposed in ULMFiT, but we found the warm-up phase not very helpful, as sparse language models are known to require much higher learning rates than their dense counterparts in order to train and converge successfully [42].
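The following schematic PyTorch sketch summarizes the resulting recipe (simplified: all names are ours, the handling of a real HuggingFace BERT model is more involved, and re-applying the sparsity masks so that pruned weights stay at zero is not shown):

```python
import torch

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def sparse_transfer(model, blocks, classifier, train_one_epoch, base_lr=2e-4):
    # Freeze everything, then re-enable biases and LayerNorm parameters.
    set_trainable(model, False)
    for name, p in model.named_parameters():
        if "bias" in name or "LayerNorm" in name:
            p.requires_grad = True
    set_trainable(classifier, True)            # freshly initialized task head

    # One epoch head-only, then unfreeze blocks back-to-front,
    # then one final epoch with all layers trainable.
    for step in range(len(blocks) + 2):
        if 1 <= step <= len(blocks):
            set_trainable(blocks[-step], True)
        opt = torch.optim.AdamW(
            (p for p in model.parameters() if p.requires_grad), lr=base_lr)
        # Linear decay within the epoch; the next epoch rewinds to base_lr.
        sched = torch.optim.lr_scheduler.LambdaLR(
            opt, lambda t: max(0.0, 1.0 - t / 100))
        train_one_epoch(model, opt, sched)
```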
\begin{table}
\begin{tabular}{c|c c c c c c c c} Sparse-transfer & \begin{tabular}{c} RTE \\ Acc \\ \end{tabular} & \begin{tabular}{c} QNLI \\ Acc \\ \end{tabular} & \begin{tabular}{c} MRPC \\ F1 / Acc \\ \end{tabular} & \begin{tabular}{c} SST-2 \\ Acc \\ \end{tabular} & \begin{tabular}{c} CoLA \\ Mec \\ \end{tabular} & \begin{tabular}{c} STS-B \\ Pear / Spear \\ \end{tabular} & \begin{tabular}{c} MNLI \\ m / mm \\ \end{tabular} &
\begin{tabular}{c} QQP \\ Acc / F1 \\ \end{tabular} \\ \hline Dense BERT-base & 66.1 & 91.3 & 89.8 / 85.5 & 93.0 & 56.8 & 88.9 / 88.5 & 84.6 / 83.4 & 91.5 / 88.5 \\ \hline Prune OFA [79] & N/A & 89.1 & N/A & 90.9 & N/A & N/A & 81.5 / 82.4 & **90.9 / 87.6** \\ oBERT [43] & 57.0 & 89.3 & 89.3 / **85.6** & **92.0** & 48.5 & **88.0 / 87.6** & 82.2 / 82.5 & 90.4 / 87.1 \\ This work & **60.1** & **90.5** & **89.7** / 85.2 & 91.8 & **51.4** & 87.2 / 87.1 & **83.7 / 83.8** & **90.9 / 87.6** \\ \end{tabular}
\end{table}
Table 7: Our sparse-transfer performance of 90% sparse pre-trained BERT-base model on the dev-set of the corresponding GLUE tasks, benchmarked against the current state-of-the-art sparse-transfer results from Prune OFA [79] and oBERT [43].
Figure 11: Evaluation loss (lower is better) and F1 score (higher is better) during sparse-transfer with oBERT [43] and our approach on MRPC dataset.
Figure 12: Evaluation loss (lower is better) and Matthew’s correlation coefficient (higher is better) during sparse-transfer with oBERT [43] and our approach on CoLA dataset.
To validate the effectiveness of our proposed sparse transfer approach, we benchmark it against the two current state-of-the-art sparse-transfer results presented in the _Prune Once for All (Prune OFA)_ [79] and _The Optimal BERT Surgeon (oBERT)_ [43] papers. The former makes use of knowledge distillation from a finetuned dense teacher model, while the latter uses a full sweep over extended and rescaled dense transfer recipes, such as the ones we presented in Section 4.2. As can be seen from Table 7, our approach outperforms the highly competitive results of Prune OFA on all, and of oBERT on eight out of twelve, dataset metrics, setting new state-of-the-art accuracy-vs-sparsity results for many tasks in the GLUE benchmark suite. It is worth emphasizing that all of our results are obtained with significantly less hyperparameter tuning than the two competing methods, which aligns with our goal of finding a stable one-size-fits-all solution for the sparse-transfer problem. We search over the initial learning rate in {1e-4, 2e-4, 3e-4} and dropout in {0.05, 0.1}, and report the mean performance over the two best runs. Thus, our grid consists of only 6 combinations for each considered dataset, whereas the competing approaches sweep over 54 ([79]) and 24 ([43]) combinations. It is worth noting that all of the considered methods, including ours, show noticeable variability in results on small datasets across different seeds and hyperparameter configurations, which aligns with the findings of [12].
To better understand what happens during our proposed sparse transfer learning setup, and to develop an intuition for why it provides stable and competitive results across many different datasets, ranging in size from 2.4k (RTE) to 392k (MNLI) labeled samples, we visualize the evaluation loss and evaluation accuracy metrics over the entire transfer learning process in Figures 11 and 12. As can be seen, our approach enables slower and therefore more stable transfer learning on the target datasets, which effectively prevents overfitting, even though the total number of epochs is twice as large as in the extended dense-transfer recipes analyzed in Section 4.2. This aligns with the findings of ULMFiT, which demonstrated that gradual unfreezing in combination with a carefully designed learning rate schedule prevents catastrophic forgetting and enables robust transfer learning across a wide range of downstream tasks.
## 5 Conclusion
In this work, we examined the impact of high sparsity on model training under standard computer vision and natural language processing scenarios, and provided evidence that traditional training recipes used for dense models are generally too short for sparse training. Starting from this observation, we were able to produce state-of-the-art sparse models on two classic benchmarks for pruning: the ResNet50/ImageNet from-scratch training benchmark, and transfer learning from BERT-base on several NLP datasets. Our work focused on the differences between sparse and dense training dynamics and their effect on optimal training, providing additional analysis of the difficulty of sparse training. The main motivation for our work is to inspire further research in adapting training schedules, hyperparameters, and optimizers to improve sparse model training, in order to reach higher accuracies under sparsity, but also to do so efficiently. We leave this as a challenge to the community.
## Acknowledgements
We gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant agreement No 805223 ScaleML). E.I. was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35. D.K. was supported by Russian Science Foundation, grant 21-11-00373. |
2303.10702 | Evaluation of Convolution Primitives for Embedded Neural Networks on
32-bit Microcontrollers | Deploying neural networks on constrained hardware platforms such as 32-bit
microcontrollers is a challenging task because of the large memory, computing
and energy requirements of their inference process. To tackle these issues,
several convolution primitives have been proposed to make the standard
convolution more computationally efficient. However, few of these primitives
are really implemented for 32-bit microcontrollers. In this work, we collect
different state-of-the-art convolutional primitives and propose an
implementation for ARM Cortex-M processor family with an open source deployment
platform (NNoM). Then, we carry out experimental characterization tests on
these implementations. Our benchmark reveals a linear relationship between
theoretical MACs and energy consumption. Thus showing the advantages of using
computationally efficient primitives like shift convolution. We discuss about
the significant reduction in latency and energy consumption due to the use of
SIMD instructions and highlight the importance of data reuse in those
performance gains. For reproducibility purpose and further experiments, codes
and experiments are publicly available. | Baptiste Nguyen, Pierre-Alain Moellic, Sylvain Blayac | 2023-03-19T16:17:19Z | http://arxiv.org/abs/2303.10702v1 | # Evaluation of Convolution Primitives for Embedded Neural Networks on 32-bit Microcontrollers
###### Abstract
Deploying neural networks on constrained hardware platforms such as 32-bit microcontrollers is a challenging task because of the large memory, computing and energy requirements of their inference process. To tackle these issues, several convolution primitives have been proposed to make the standard convolution more computationally efficient. However, few of these primitives are actually implemented for 32-bit microcontrollers. In this work, we collect different state-of-the-art convolutional primitives and propose an implementation for the ARM Cortex-M processor family with an open source deployment platform (NNoM). Then, we carry out experimental characterization tests on these implementations. Our benchmark reveals a linear relationship between theoretical MACs and energy consumption, thus showing the advantage of using computationally efficient primitives like shift convolution. We discuss the significant reduction in latency and energy consumption due to the use of SIMD instructions and highlight the importance of data reuse in those performance gains. For reproducibility purposes and further experiments, code and experiments are publicly available1.
Footnote 1: [https://gitlab.emse.fr/b.nguyen/primitive_of_convolution](https://gitlab.emse.fr/b.nguyen/primitive_of_convolution)
Keywords:Deep Learning Architecture optimization Embedded systems Convolutional neural network.
## 1 Introduction
The demand for edge inference is growing and neural networks are prime candidates due to their success across a large variety of application domains. However, state-of-the-art deep neural network models, especially convolution neural networks, require a large amount of memory and computational resources. For example, the standard ResNet-18 model [3] for image classification on ImageNet
has around 11M parameters and requires approximately 1 GMACs per inference, which is prohibitive for ARM Cortex-M microcontrollers. Thus, designing efficient neural network architectures is a major topic in the embedded AI community. In the search for efficient neural network architectures, several alternatives to convolution have been proposed, but few of them are actually implemented in deployment libraries for 32-bit microcontrollers. This work focuses on the implementation and characterization of state-of-the-art convolution primitives for ARM Cortex-M MCUs. **Our contributions are as follows:**
* We implement three state-of-the-art convolution primitives for ARM Cortex-M MCUs and when possible, we propose another implementation which makes use of the SIMD1 instructions (_Single Instruction, Multiple Data_). Footnote 1: [https://www.keil.com/pack/doc/CMSIS/Core/html/group__intrinsic__SIMD__gr.html](https://www.keil.com/pack/doc/CMSIS/Core/html/group__intrinsic__SIMD__gr.html)
* We characterize the latency and energy consumption of five primitives, including the standard convolution, against different parameters such as kernel or input size.
* We provide insights on the performance of different primitives, especially for our implementations using SIMD instructions to help machine learning practitioners to design, develop and deploy efficient models according to their requirements.
## 2 Background
### Preliminaries and notation
We consider the typical case of a 2D convolution layer with padding and a square input tensor \(X\) of dimensions \(H_{x}\times H_{x}\times C_{x}\), with \(H_{x}\) the spatial width and \(C_{x}\) the number of channels. The convolution layer produces an output tensor \(Y\) of dimensions \(H_{y}\times H_{y}\times C_{y}\), with \(H_{y}\) the spatial width (equal to \(H_{x}\)) and \(C_{y}\) the number of channels. The convolution is performed with convolutional kernels represented by a weight tensor \(W\) of size \(H_{k}\times H_{k}\times C_{x}\times C_{y}\), with \(H_{k}\) the spatial dimension of a kernel (assumed to be square), \(C_{x}\) the number of input channels and \(C_{y}\) the number of output channels (i.e. the number of filters) as defined previously. The output of the standard convolution is as follows:
\[Y_{k,l,n}=\sum_{m=1}^{C_{x}}\sum_{i=1}^{H_{k}}\sum_{j=1}^{H_{k}}W_{i,j,m,n} \cdot X_{k+i-1,l+j-1,m}\quad\forall k,l\in[1,H_{y}],\quad\forall n\in[1,C_{y}] \tag{1}\]
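For concreteness, a direct (and deliberately naive) NumPy transcription of Eq. 1 with 'same' zero padding and stride 1 reads:

```python
import numpy as np

def conv2d(x, w):
    """x: (Hx, Hx, Cx) input; w: (Hk, Hk, Cx, Cy) kernels -> (Hx, Hx, Cy)."""
    hx, _, cx = x.shape
    hk, _, _, cy = w.shape
    pad = hk // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))   # 'same' zero padding
    y = np.zeros((hx, hx, cy))
    for n in range(cy):
        for k in range(hx):
            for l in range(hx):
                y[k, l, n] = np.sum(xp[k:k + hk, l:l + hk, :] * w[:, :, :, n])
    return y

print(conv2d(np.random.rand(8, 8, 3), np.random.rand(3, 3, 3, 4)).shape)  # (8, 8, 4)
```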
In modern CNN architectures, convolution layers are often coupled with batch-normalization layers that normalize (recenter and rescale) the inputs of layers to make training faster and improve stability.
### Convolution primitives
We detail the different convolution primitives evaluated in this work. Table 1 sums up performance features compared to the standard convolution.
#### 2.0.1 Grouped convolution
was first introduced in the AlexNet paper by Krizhevsky _et al_. [7] for practical reasons; since then, several works such as Ioannou _et al_. [4] have studied its effect on the performance of a neural network model. For the standard convolution, all input channels are used to compute each output channel. For a grouped convolution with G groups, each channel of the input and output is associated with a group \(G_{i}\). Then, to compute an output channel of group \(G_{i}\), only the corresponding input channels are processed, as depicted in Fig. 1. Thus, grouped convolutions (also referred to as _filter groups_) reduce the number of parameters and MAC operations of the layer by a factor of G.
#### 2.0.2 Depthwise separable convolution
Szegedy _et al_. [9] introduce depthwise separable convolutions with the _Inception_ architecture. Depthwise separable convolution replaces the standard convolution by two convolutions: _depthwise_ and _pointwise_. Depthwise convolution is an extreme version of grouped convolution where \(G=C_{x}=C_{y}\). The problem is that each filter only handles information passed down from one input channel. Pointwise convolution is applied to linearly combine the output channels of the depthwise convolution thanks to \(1\times 1\) kernels. It also acts as a reduction of the depth of the output tensor \(Y\).
#### 2.0.3 Shift convolution
Even though pointwise convolution is more computationally expensive than depthwise convolution in theory, Jeon _et al_. [6] noticed, with a hardware implementation, that depthwise convolution is more time-consuming than pointwise convolution.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Convolution type & Parameters & Theoretical MACs & Parameters gain & Complexity gain \\ \hline Standard & \(H_{k}^{2}\cdot C_{x}\cdot C_{y}\) & \(H_{k}^{2}\cdot C_{x}\cdot H_{y}^{2}\cdot C_{y}\) & - & - \\ Grouped & \(H_{k}^{2}\cdot\frac{C_{x}}{G}\cdot C_{y}\) & \(H_{k}^{2}\cdot\frac{C_{x}}{G}\cdot H_{y}^{2}\cdot C_{y}\) & \(\frac{1}{G}\) & \(\frac{1}{G}\) \\ Depthwise separable & \(C_{x}\cdot(H_{k}^{2}+C_{y})\) & \(C_{x}\cdot H_{y}^{2}\cdot(H_{k}^{2}+C_{y})\) & \(\frac{1}{C_{y}}+\frac{1}{H_{k}^{2}}\) & \(\frac{1}{C_{y}}+\frac{1}{H_{k}^{2}}\) \\ Shift & \(C_{x}\cdot(2+C_{y})\) & \(C_{x}\cdot C_{y}\cdot H_{y}^{2}\) & \(\frac{2}{C_{y}\cdot H_{k}^{2}}+\frac{1}{H_{k}^{2}}\) & \(\frac{1}{H_{k}^{2}}\) \\ Add & \(H_{k}^{2}\cdot C_{x}\cdot C_{y}\) & \(H_{k}^{2}\cdot C_{x}\cdot H_{y}^{2}\cdot C_{y}\) & \(1\) & \(1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the different primitives. Parameters gain is the ratio between the primitive’s number of parameters and the standard convolution. The same applies for theoretical MACs with complexity gain.
Figure 1: From [4], standard vs. grouped convolutions: the grouped convolution with 2 groups applies half of the filters to each half of the input channels in order to compute each half of the output channels.
They replace the depthwise convolution by a shift operation, which requires extremely few parameters and less computational power, to produce the intermediate feature map \(I\):
\[I_{k,l,m}=X_{k+\alpha_{m},l+\beta_{m},m}\ \forall k,l\in[1,H_{x}],\quad\forall m \in[1,C_{x}] \tag{2}\]
where \(\alpha_{m}\) and \(\beta_{m}\) denote the horizontal and vertical shift assigned to the \(m^{th}\) channel of the input feature map.
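A NumPy sketch of this shift operation (zero-filling out-of-range positions, which is one possible border convention) is:

```python
import numpy as np

def shift(x, alphas, betas):
    """x: (H, H, C); alphas, betas: per-channel integer shifts."""
    h, _, c = x.shape
    out = np.zeros_like(x)
    # Pad by the maximum shift so every shifted window stays in range.
    s = max(max(map(abs, alphas)), max(map(abs, betas)))
    xp = np.pad(x, ((s, s), (s, s), (0, 0)))
    for m in range(c):
        a, b = alphas[m], betas[m]
        out[:, :, m] = xp[s + a:s + a + h, s + b:s + b + h, m]
    return out

x = np.arange(9, dtype=float).reshape(3, 3, 1)
print(shift(x, [1], [0])[:, :, 0])   # rows moved up by one, last row zero-filled
```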
#### 2.0.4 Add convolution
Multiplication consumes, in most cases, more energy than addition. Chen _et al._[2] exploit the fact that convolutions in deep neural networks are cross-correlations measuring the similarity between the input and the convolution kernel. They propose to replace cross-correlation by the \(L1\)-norm as a similarity measure, to perform an _add convolution_ as in Eq. 3.
\[Y_{k,l,n}=-\sum_{m=1}^{C_{x}}\sum_{i=1}^{H_{k}}\sum_{j=1}^{H_{k}}|W_{i,j,m,n}-X _{k+i-1,l+j-1,m}|\quad\forall k,l\in[1,H_{y}],\quad\forall n\in[1,C_{y}] \tag{3}\]
The output of an add convolution is always negative. Thus, in order to make add convolution compatible with standard activation functions like ReLU, a batch normalization layer following the add convolution layer is needed.
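A NumPy transcription of Eq. 3, mirroring the naive convolution sketch above, is:

```python
import numpy as np

def add_conv2d(x, w):
    """Negated L1 distance between each patch and each kernel ('same' padding)."""
    hx, _, cx = x.shape
    hk, _, _, cy = w.shape
    pad = hk // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    y = np.zeros((hx, hx, cy))
    for n in range(cy):
        for k in range(hx):
            for l in range(hx):
                y[k, l, n] = -np.abs(xp[k:k + hk, l:l + hk, :] - w[:, :, :, n]).sum()
    return y

# The output is never positive, which is why batch normalization is needed.
print(add_conv2d(np.random.rand(8, 8, 3), np.random.rand(3, 3, 3, 4)).max() <= 0)  # True
```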
### Neural network library for Cortex-M MCU
The challenge of porting neural networks to constrained platforms such as microcontrollers has led to the creation of embedding tools (e.g. TFLM6, N2D27, STM32Cube MX-AI8 or NNoM9). These tools support standard convolution as well as depthwise separable convolution layers. TFLM and STM32Cube MX-AI support floating-point operations as well as 16- and 8-bit integer operations, while NNoM supports only 8-bit integer operations. Furthermore, for Cortex-M4 and Cortex-M7 MCUs (with Digital Signal Processing extensions), SIMD instructions can be used for the computation of the different primitives by integrating the CMSIS-NN middleware [8] into those tools. For our study, the open source NNoM library was chosen due to its good performance and its ease of customization.
Footnote 6: [https://www.tensorflow.org/lite/microcontrollers](https://www.tensorflow.org/lite/microcontrollers)
Footnote 7: [https://github.com/CEA-LIST/N2D2](https://github.com/CEA-LIST/N2D2)
Footnote 8: [https://www.st.com/en/embedded-software/x-cube-ai.html](https://www.st.com/en/embedded-software/x-cube-ai.html)
Footnote 9: [https://github.com/majianjia/nnom](https://github.com/majianjia/nnom)
## 3 Implementation
In this section, we present the implementation details of the NNoM and CMSIS-NN convolutions, on which our implementations of the different primitives are based. Furthermore, we detail the implementation differences between the standard convolution and the optimized primitives.
### Quantization
Quantization is the process of reducing the precision of weights, biases, and activations in order to reduce the memory footprint. The NNoM library uses 8-bit quantization for the weights, biases, and activations, with a uniform symmetric powers-of-two quantization scheme as in Eq. 4.
\[dec=\left\lceil\log_{2}\left(\max(|X_{f}|)\right)\right\rceil\ ;\quad x_{i}=\left\lfloor x_{f}\cdot 2^{(8-1)-dec}\right\rfloor \tag{4}\]
where \(X_{f}\) is a 32-bit floating-point tensor, \(x_{f}\) a value of \(X_{f}\), \(x_{i}\) its 8-bit quantized version, and \(2^{dec}\) is the quantization scale. Because this scale is a power of 2, the convolution operation only requires integer addition, multiplication and bit shifting, but no division (see Algorithm 1, left). This computation process is used for grouped and shift convolutions because of their similarity to the standard convolution. We adapt it to add convolution as presented in Algorithm 1 (right).
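A NumPy sketch of the quantization of Eq. 4 follows (our own helper; int8 saturation is omitted, and the tensor is assumed to be nonzero):

```python
import numpy as np

def quantize(xf):
    # Per-tensor power-of-two scale, as in Eq. 4 (assumes max(|xf|) > 0).
    dec = int(np.ceil(np.log2(np.max(np.abs(xf)))))
    xi = np.floor(xf * 2.0 ** (7 - dec)).astype(np.int8)
    return xi, dec                       # dequantize with xi * 2.0 ** (dec - 7)

xf = np.array([0.8, -1.3, 2.5])
xi, dec = quantize(xf)
print(xi, dec)                           # [ 25 -42  80] with dec = 2
```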
```
Convolution (left), without bias:
 1: output ← i · w
 2: shift_output ← dec_weight + dec_input − dec_output
 3: output ← output >> shift_output
 4: Return output

Add convolution (right), without bias:
 1: shift ← |dec_input − dec_weight|
 2: if dec_input > dec_weight then
 3:   output ← −|(i >> shift) − w|
 4:   shift_output ← dec_weight − dec_output
 5: else
 6:   output ← −|i − (w >> shift)|
 7:   shift_output ← dec_input − dec_output
 8: end if
 9: output ← output >> shift_output
10: Return output
```
**Algorithm 1** Inner loop of convolution (left) and add convolution (right) without bias
### Batch normalization folding
For convolutions, the NNoM library uses the batch normalization folding proposed by Jacob _et al_. [5]. By merging convolution and batch normalization layers, this method accelerates inference without any accuracy drop. Batch normalization folding can be applied to the computation of grouped and shift convolutions, but is not suitable for add convolution.
### Im2col algorithm with SIMD instructions
In order to accelerate convolutions, the CMSIS-NN middleware [8] uses the image-to-column (im2col) algorithm [1]. A first step samples patches from the input, flattens them and stacks them as columns of a matrix \(M\). Each filter of the convolution weight \(W\) is also flattened and stacked as a row of a matrix \(N\). In a second step, the output is computed with the matrix multiplication \(Y=M.N\). To deal with the increased memory footprint of im2col, Lai _et al._[8] limit the number of patches processed at the same time to 2. The matrix multiplication is computed using 2 filters simultaneously to maximize data reuse at the register-file level on ARM Cortex-M. Furthermore, Lai _et al._[8] use the parallelized multiply-accumulate instruction __SMLAD to speed up the matrix multiplication.
For grouped convolution, we apply the algorithm of Lai _et al._[8] to each group. For shift convolution, we modify the first step of im2col to sample each patch with a different shift for each input channel. We did not implement a SIMD version of add convolution because there is no instruction similar to __SMLAD adapted to add convolution.
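To illustrate the lowering itself, here is a NumPy sketch of im2col-based convolution (our own simplified, floating-point version; the CMSIS-NN kernel additionally processes 2 patches and 2 filters at a time and uses __SMLAD on quantized data):

```python
import numpy as np

def conv2d_im2col(x, w):
    """x: (Hx, Hx, Cx); w: (Hk, Hk, Cx, Cy). 'same' zero padding, stride 1."""
    hx, _, cx = x.shape
    hk, _, _, cy = w.shape
    pad = hk // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    # Step 1: flatten every patch into a column of M (shape: Hk^2*Cx x Hx^2).
    cols = np.stack([xp[k:k + hk, l:l + hk, :].ravel()
                     for k in range(hx) for l in range(hx)], axis=1)
    # Flattened filters become the rows of N (shape: Cy x Hk^2*Cx).
    filt = w.reshape(hk * hk * cx, cy).T
    # Step 2: one matrix product computes the whole output.
    return (filt @ cols).T.reshape(hx, hx, cy)

x, w = np.random.rand(8, 8, 3), np.random.rand(3, 3, 3, 4)
print(conv2d_im2col(x, w).shape)   # (8, 8, 4)
```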
## 4 Experimental characterisations
The experiments are carried out on a typical 32-bit MCU platform, the Nucleo STM32F401-RE, based on a Cortex-M4 that supports SIMD instructions. Unless specified otherwise, the compiler is arm-none-eabi-gcc (version 10.3) with the optimization level set to Os, and the MCU's frequency is fixed at 84 MHz. The software STM32CubeMonitor-Power10 is used to measure the electric current of the MCU. We multiply it by the supply voltage (i.e. 3.3 V) and integrate it over the duration of an inference to obtain the inference's energy consumption.
Footnote 10: [https://www.st.com/en/development-tools/stm32cubemonpwr.html](https://www.st.com/en/development-tools/stm32cubemonpwr.html)
### Influence of the primitive parameters
#### 4.1.1 Protocol
To evaluate the influence of a parameter (e.g. kernel size, input width), we consider a layer with every other parameter fixed except the one concerned. The experiment plan is defined in Table 2. We measure the latency and energy consumption averaged over 50 inferences with randomized inputs. Results are presented in Fig. 2.
#### 4.1.2 Results without SIMD instructions
We observe in Fig. 2.a-c that our implementation fits the theory (Table 1). For example, the theoretical MACs, latency and energy consumption increase quadratically with the kernel size (Fig. 2.a, Fig. 2.b and Fig. 2.c). More specifically, there is a linear relationship between MACs, latency and consumption: linear regressions lead to scores of 0.995 and 0.999, respectively. Add convolutions are slightly less efficient than standard convolutions despite the same number of MACs. This is explained by the quantization scheme of add convolution and the additional batch normalization layer.
Figure 2: Influence of the 1) number of groups, 2) kernel size, 3) input width, 4) number of input channels and 5) filters on a) theoretical MACs, b) latency without SIMD instructions, c) energy consumption without SIMD instructions, d) latency with SIMD instructions and e) energy consumption with SIMD instructions and f) speedup for different primitives. The different implementations fit the theory. Using SIMD instructions enables faster and less energy consuming inferences. The speedup of the im2col algorithm varies according to the primitives and their parameters.
#### 4.1.3 Effect of SIMD instructions
Using SIMD instructions decreases the latency (Fig. 2.d) and energy consumption (Fig. 2.e) of the different primitives. Our implementation with SIMD instructions also fits the theory, but latency is more relevant than theoretical MACs for estimating the layer's energy consumption (regression scores of 0.999 and 0.932, respectively). This loss of linearity is related to the varying speedup of the im2col algorithm with respect to the primitives and their parameters (Fig. 2.f). A possible explanation lies in the data reuse exploited by the im2col algorithm. To verify this, we measure the number of memory accesses in those programs. Fig. 3 shows, for different parameters and primitives, the variation of the ratio of memory accesses without SIMD instructions to memory accesses with SIMD instructions (normalized by MACs). We observe in Fig. 3 the same variations as in Fig. 2.f. Thus, data reuse contributes strongly to the speedup of algorithms using SIMD instructions. However, convolutions and grouped convolutions have similar ratios in Fig. 3 but different speedups in Fig. 2.f. Other factors, such as memory access continuity and padding, must be taken into account to explain the performance of these programs.
### Influence of other factors
For the following experiments, we fix the number of groups at 2, the kernel size at 3, the input width at 32, the number of input channels at 3 and the number of filters at 32.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Experiment & Groups & Kernel size & Input width & Input channel & Filters \\ \hline
1 & 1-32 & 3 & 10 & 128 & 64 \\
2 & 2 & 1-11 & 32 & 16 & 16 \\
3 & 2 & 3 & 8-32 & 16 & 16 \\
4 & 2 & 3 & 32 & 4-32 & 16 \\
5 & 2 & 3 & 32 & 16 & 4-32 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Primitive parameters for the different experiments.
Figure 3: Influence of the a) number of groups, b) kernel size, c) input width, d) number of input channels and e) filters on the ratio of memory access without SIMD instructions by the memory access with SIMD instructions (normalized by MACs) for different primitives.
Influence of optimization level. We perform a convolution inference with two different optimization levels (O0 and Os). As seen in Table 4, compiler optimization has an important effect on layer performance. Using the Os level accelerates the inference by a factor of 1.52. This impact is amplified by the use of SIMD instructions (a factor of 9.81). Without optimization, the use of SIMD instructions can even increase the layer's energy consumption, as SIMD instructions increase the average power consumption.
## 5 Conclusion
In this paper, we implement and benchmark several state-of-the-art convolution primitives for ARM Cortex-M microcontrollers. Our benchmark shows that for microcontrollers which cannot use SIMD instructions, the theoretical MAC count is a relevant indicator to estimate the layer energy consumption. For microcontrollers
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
 & Optimization level & Latency (s) & Consumption (mJ) & Optimization speedup & SIMD speedup \\ \hline
No SIMD & O0 & 1.26 & 63.9 & - & - \\
 & Os & 0.83 & 45.7 & 1.52 & - \\ \hline
SIMD & O0 & 1.08 & 82.0 & - & 1.17 \\
 & Os & 0.11 & 7.2 & 9.81 & 7.55 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Effect of optimization level on inference performance for convolution.
Figure 4: Influence of the MCU frequency on latency and energy consumption, without SIMD instructions (a and b) and with SIMD instructions (c and d).
which can use SIMD instructions, latency is preferred over theoretical MACs to estimate the layer energy consumption. We explain this by the varying efficiency of the im2col algorithm, from CMSIS-NN, depending on the layers, and highlight the role of data reuse in this performance gap. Furthermore, we study the influence of factors external to the convolution algorithms, such as compiler optimization and MCU frequency. Our experiments highlight the major impact of compiler optimization on layer performance when using SIMD instructions, and show that running the inference at maximum frequency decreases the layer's energy consumption. Our work opens up new possibilities for neural architecture search algorithms.
#### Author Contributions
Nguyen, Moellic and Blayac conceived and planned the study. Nguyen carried out the experiments and performed the analysis. Nguyen and Moellic wrote the manuscript with inputs from all authors.
Part of this work was done with the support of ID-Fab (Prototyping platform: project funded by the European Regional Development Fund, the French state and local authorities). This work benefited from the French Jean Zay supercomputer thanks to the _AI dynamic access_ program. This collaborative work is partially supported by the IPCEI on Microelectronics and Nano2022 actions and by the European project InSecTT11 and by the French National Research Agency (ANR) in the framework of the _Investissements d'Avenir_ program (ANR-10-AIRT-05, irtanoelec).
Footnote 11: www.insectt.eu: ECSEL Joint Undertaking (876038). The JU receives support from the European Union’s H2020 program and Au, Sw, Sp, It, Fr, Po, Ir, Fi, Sl, Po, Nl, Tu. The document reflects only the author’s view and the Commission is not responsible for any use that may be made of the information it contains.
|
2301.08734 | Force-Field-Enhanced Neural Network Interactions: from Local Equivariant
Embedding to Atom-in-Molecule properties and long-range effects | We introduce FENNIX (Force-Field-Enhanced Neural Network InteraXions), a
hybrid approach between machine-learning and force-fields. We leverage
state-of-the-art equivariant neural networks to predict local energy
contributions and multiple atom-in-molecule properties that are then used as
geometry-dependent parameters for physically-motivated energy terms which
account for long-range electrostatics and dispersion. Using high-accuracy ab
initio data (small organic molecules/dimers), we trained a first version of the
model. Exhibiting accurate gas-phase energy predictions, FENNIX is transferable
to the condensed phase. It is able to produce stable Molecular Dynamics
simulations, including nuclear quantum effects, for water predicting accurate
liquid properties. The extrapolating power of the hybrid physically-driven
machine learning FENNIX approach is exemplified by computing: i) the solvated
alanine dipeptide free energy landscape; ii) the reactive dissociation of small
molecules. | Thomas Plé, Louis Lagardère, Jean-Philip Piquemal | 2023-01-20T18:56:19Z | http://arxiv.org/abs/2301.08734v4 | Force-Field-Enhanced Neural Network Interactions: from Local Equivariant Embedding to Atom-in-Molecule properties and long-range effects
###### Abstract
We introduce FENNIX (Force-Field-Enhanced Neural Network InteraXions), a hybrid approach between machine-learning and force-fields. We leverage state-of-the-art equivariant neural networks to predict local energy contributions and multiple atom-in-molecule properties that are then used as geometry-dependent parameters for physically-motivated energy terms which account for long-range electrostatics and dispersion. Using high-accuracy _ab initio_ data (small organic molecules/dimers), we trained a first version of the model. Exhibiting accurate gas-phase energy predictions, FENNIX is transferable to the condensed phase. It is able to produce stable Molecular Dynamics simulations, including nuclear quantum effects, for water predicting accurate liquid properties. The extrapolating power of the hybrid physically-driven machine learning FENNIX approach is exemplified by computing: i) the solvated alanine dipeptide free energy landscape; ii) the reactive dissociation of small molecules.
+
Footnote †: Corresponding author: [email protected]
## I Introduction
In large-scale simulations, interactions between atoms cannot generally be computed from first principles because of the high numerical cost of quantum methods. Instead, they are generally modeled using _force fields_ (FFs) that postulate a physically-motivated functional form of the potential energy and are parameterized in order to match _ab initio_ energies and/or reproduce experimental data. The most widespread FFs are the so-called _classical_ force fields (such as AMBER [1] or CHARMM [2]) which use a combination of fixed-charge Coulomb potential and Lennard-Jones interactions to model the inter-molecular potential. These models are extremely efficient numerically, allowing the simulation of very large systems over long time scales. Their simple functional form, however, lacks polarization and many-body effects which can be critical to correctly describe some systems (for example solvation in a polar solvent, pi-stacking or complex protein structures [3]). More advanced force fields - such as AMOEBA [4], TTM [5], CHARMM Drude [6], ARROW [7] or SIBFA [8; 9] - have thus been developed in order to explicitly include these effects. These _polarizable_ force fields (PFFs) [10; 11] are much more flexible and accurate but are significantly costlier. Nonetheless, advances in high-performance computing (HPC), the increase in GPU (Graphical Processing Units) availability and recent methodological developments (advanced iterative solvers) now allow large-scale PFF simulations [12]. Both classical and polarizable FFs however assume a fixed connectivity between atoms (_i.e._ covalent bonds cannot be broken), making them unsuitable to study chemical reactions. Some _reactive_ force fields - such as ReaxFF [13] or Empirical Valence Bond [14] - are actively being developed but are generally specialized towards a relatively narrow class of systems.
From this brief overview of the domain of force fields, it is clear that a general, many-body reactive model is highly desirable but its design remains an outstanding challenge for the current frameworks. In recent years, considerable attention and resource have been devoted to the development of machine-learning potentials that promise to bridge the accuracy and generality gap between force fields and _ab initio_ methods. These models use flexible functional forms from the domain of machine-learning (such as deep neural networks, graph networks or kernel models) in order to accurately fit _ab initio_ energies, with a numerical cost comparable to standard FFs. A large variety of such models have been developed over the last few years (for example the HD-NNP [15], ANI [16], AIMNet [17; 18], DeePMD [19], ACE [20], sGDML [21], Tensormol-0.1 [22],Nequip [23], etc...) and have been applied to small molecular systems [16], periodic crystals [24; 25] and more general condensed-phase systems [26; 27]. Among these, the ANI models occupy a particular place as they aim to provide a generic pre-trained potential for a whole class of organic molecules. In this work, we follow a similar strategy.
In order to respect the inherent symmetries of molecular systems, most architectures are designed to be _invariant_ with respect to rotations, translations and exchange of identical atoms. These models have shown good accuracy on many systems but usually require large amounts of data to be trained on (of the order of a million molecular configurations). More recently, _equivariant_ models (for example Nequip [23], Allegro [28], SpookyNet [29], UNiTE [30] or GemNet [31]) have attracted much attention because of their impressive data efficiency and their ability to generalize more accurately to out-of-distribution configurations [23; 32].
Most ML models, however, assume a purely local functional form and tend to neglect or only implicitly account for long-range effects from the training data. The accurate description of long-range interactions is however critical to correctly simulate condensed-phase systems and to describe the structure of large molecular formations (e.g. protein or DNA structure [33; 34]). The framework of message-passing neural networks [35] in principle allows one to describe long-range effects by iteratively exchanging information with neighbouring atoms. This approach however has been shown to pose difficulties when applied to large systems, as this iterative process is not well suited for parallel architectures. Some other pure ML multi-scale models are being developed (for example the LODE descriptor [36; 37]) but this area is still in its infancy. On the other hand, quantum perturbation theory (for example Symmetry Adapted Perturbation Theory, SAPT) gives solid grounds to the description of long-range effects in terms of classical electrostatics [38] which can be well captured in the FF framework (via multipolar Coulomb interactions and dispersion effects for example) [39]. It thus seems advantageous to combine an ML model - which excels at predicting short-range properties - with long-range FF interactions, in order to obtain the best of both approaches. A few models applying this idea have recently been developed (for example HDNNP-Gen4 [40], PhysNet [41] and others [42; 26]) and have shown good results across multiple systems. Hybrid FF/ML models thus provide a promising route to more physics-aware ML potentials.
In this paper, we propose a general framework for building force-field-enhanced ML models. We leverage the latest advances in local equivariant neural networks in order to accurately predict short-range energy contributions as well as multiple atom-in-molecule properties that are then used to dynamically parameterize QM-inspired FF energy terms that account for long-range interactions. We show that this architecture allows for highly transferable models that are able to accurately generalize on large molecular systems, as well as in the condensed phase after being trained on small monomers and dimers only. This paper is organized as follows. Section II describes the model architecture, from the Allegro [28] equivariant embedding to the _output_ and _physics_ modules. It also describes the particular FF terms that we used for the pretrained model, named FENNIX-OP1, that we provide with this work. Section III focuses on the FENNIX-OP1 model and provides details on its construction, its target properties, the datasets used for training and the different training stages that were required. In section IV, we validate the model via several applications. First, we show that the model predicts accurate dissociation energy curves of some simple molecules. We then compute structural properties of liquid water in molecular dynamics simulations including nuclear quantum effects (NQEs) via the recently developed adaptive quantum thermal bath (adQTB) [43; 44]. Indeed, since the model is trained purely on _ab initio_ data, the explicit inclusion of NQEs is critical for the correct calculation of thermodynamical properties, as was shown in numerous previous studies [45; 46]. For this purpose, the adQTB was shown to provide robust approximations of NQEs while being numerically affordable (similar to a classical MD) and thus constitutes a very efficient tool for quickly testing ML models. We then show that the model produces stable dynamics of alanine dipeptide in solution and provides a qualitatively correct description of the torsional free energy profile (computed using an enhanced sampling method[47]); as well as stable dynamics of the 1FSV protein in gas phase. Finally, section V provides some conclusions and outlooks for future extensions of the model.
## II Model architecture
The FENNIX (Force-field-Enhanced Neural Network Interactions) model is based on a local multi-output equivariant model that processes atomic neighborhoods and predicts multiple atomic or pairwise properties. As an example for this work, we will present a model that outputs local pairwise energy contributions, charges and atomic volumes. The output is subsequently enriched by a "physical" module that computes force field terms such as electrostatic and dispersion energy terms. The core of our model is a slightly modified version of the Allegro local equivariant model presented in ref. [28]. We use the Allegro model as a general embedding of atomic pairs which is then fed into independent neural networks that predict the target properties.
In this section, we will briefly describe the Allegro architecture and our modifications to the model. The theoretical analysis of this architecture was done thoroughly in the original paper [28], so we will only review here the main points necessary for understanding the model and our improvements to the architecture. We will then present the output module that processes the Allegro embedding. Finally, we will describe the physical module and the particular functional form for the FENNIX-OP1 potential energy surface that we used in this work.
### Equivariant embedding using the Allegro architecture
#### ii.1.1 Review of the Allegro architecture
The Allegro model provides a local many-body descriptor \(\left(x_{ij},V_{ij}^{nlp}\right)\) - interchangeably referred to as embedding in the following - for each directed pair of atoms with source atom \(i\) and destination atom \(j\) in the neighborhood \(\mathcal{N}(i)\) defined by all the atoms located at a distance shorter than a cutoff radius \(r_{c}\) from atom \(i\):
\[\mathcal{N}(i)=\left\{k\quad\text{s.t.}\quad\left\|\vec{\mathbf{R}}_{ik}\right\| <r_{c}\right\} \tag{1}\]
with \(\vec{\mathbf{R}}_{ik}=\vec{\mathbf{R}}_{k}-\vec{\mathbf{R}}_{i}\) the vector going from the position of atom \(i\) to atom \(k\). The first part of the descriptor \(x_{ij}\) is built so that it is invariant under the action of certain geometric symmetries (_i.e._ global rotations, translations and
inversions of the system). On the other hand, the second descriptor \(V_{ij}^{nlp}\) is composed of features that are equivariant with respect to these symmetries. These features take the form of tensors that are labeled with a channel index \(n\in\{1,\ldots,N_{\text{channels}}\}\), a rotational index \(l\in\{0,1,\ldots,l_{\text{max}}\}\) and a parity index \(p\in\{-1,1\}\). The rotational index indicates how the tensor transforms under rotation operations: \(l=0\) corresponds to scalar/invariant quantities, \(l=1\) corresponds to vector-like objects and we refer to \(l\geq 2\) objects as higher-order tensors. The parity index, on the other hand, indicates how the tensor's sign changes under inversion of the coordinate system. In our implementation of the Allegro model, these tensorial objects are handled by the e3nn python package [48] which provides high-level classes that represent them and easy-to-use functions to manipulate and combine them while preserving symmetries and global equivariance.
In the following paragraphs, we describe how initial two-body features are computed, how features from neighbors are combined through \(N_{\mbox{\tiny layers}}\) layers of interactions to enrich them with many-body information and how they are filtered at each layer to control the size of the embedding.
_Initial two-body features_
The Allegro model starts by decomposing each interatomic vector \(\{\vec{\mathbf{R}}_{ij}\}_{j\in\mathcal{N}(i)}\) into fingerprints that are more suitably processed by the network. The interatomic distance \(R_{ij}\) is projected onto a radial basis \(B(R_{ij})=[B_{1}(R_{ij}),\ldots,B_{N_{\mbox{\tiny basis}}}(R_{ij})]\) (we use the Bessel basis function with a polynomial envelope [49] that we normalize as in the original paper) and we compute the two-body scalar embedding as:
\[x_{ij}^{\mbox{\tiny 2B}}=\mbox{MLP}_{\mbox{\tiny 2B}}\Big{[}\mathbbm{1}(Z_{i} )\ ||\ \mathbbm{1}(Z_{j})\ ||\ B(R_{ij})\Big{]}f_{c}(R_{ij}) \tag{2}\]
where \(||\) denotes concatenation, \(\mbox{MLP}_{2B}\) is a multilayer perceptron (_i.e._ a fully connected scalar neural network), \(f_{c}(R_{ij})\) is a cutoff function going smoothly to zero as \(R_{ij}\) approaches \(r_{c}\) (we use the same polynomial envelope as for the radial basis) and \(\mathbbm{1}(Z_{i})\) (resp. \(\mathbbm{1}(Z_{j})\)) is a vector representing the chemical species of the source atom \(i\) (resp. destination atom \(j\)). In the original paper \(\mathbbm{1}(Z_{i})\) is a direct one-hot encoding of the atomic number \(Z_{i}\), meaning that one has to fix in advance the number of species the model will be able to process (the one-hot encoding then defines a simple orthonormal basis which has the same dimensions as the chosen number of species). In section II.1.2, we propose to modify this one-hot encoding by a positional encoding of coordinates in the periodic table which allows for more flexibility in the treatment of atomic species.
We obtain the two-body equivariant features by projecting the unit vector \(\hat{R}_{ij}=\vec{\mathbf{R}}_{ij}/R_{ij}\) onto a basis of real spherical harmonics \(Y_{ij}^{\mathit{lp}}\). We then mix them with radial information with a linear embedding on \(N_{\mbox{\tiny channels}}\) channels:
\[V_{ij}^{\mathit{nlp},\mbox{\tiny 2B}}=\left[MLP_{\mbox{\tiny embed}}^{\mbox{ \tiny 2B}}(x_{ij}^{\mbox{\tiny 2B}})\right]^{\mathit{nlp}}\ Y_{ij}^{\mathit{lp}} \tag{3}\]
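For illustration, a minimal PyTorch sketch of the two-body scalar embedding of Eq. (2) is given below. It assumes the common form of the Bessel radial basis and polynomial envelope of ref. [49], uses the hyperparameters later quoted for FENNIX-OP1 in section III.1 (\(N_{\text{basis}}=10\), \(r_{c}=5.2\) Å, \(p=3\), hidden layers of 64 and 128 neurons, 256 output features), and stands in random vectors for the species encodings; it is a sketch, not the actual FENNIX code.

```python
import math
import torch

def bessel_basis(r, r_c=5.2, n_basis=10):
    # B_n(r) = sqrt(2/r_c) * sin(n*pi*r/r_c) / r, for n = 1..n_basis
    n = torch.arange(1, n_basis + 1, dtype=r.dtype)
    return math.sqrt(2.0 / r_c) * torch.sin(n * math.pi * r / r_c) / r

def poly_envelope(r, r_c=5.2, p=3):
    # Smooth polynomial cutoff going to zero at r_c (assumed form, ref. [49])
    x = r / r_c
    f = (1 - (p + 1) * (p + 2) / 2 * x**p
         + p * (p + 2) * x**(p + 1)
         - p * (p + 1) / 2 * x**(p + 2))
    return torch.where(x < 1, f, torch.zeros_like(f))

# MLP_2B: input is two 15-d species encodings plus 10 radial basis values
mlp_2b = torch.nn.Sequential(
    torch.nn.Linear(40, 64), torch.nn.SiLU(),
    torch.nn.Linear(64, 128), torch.nn.SiLU(),
    torch.nn.Linear(128, 256),
)

r_ij = torch.tensor([1.3])                    # pair distance (angstrom)
z_i, z_j = torch.randn(15), torch.randn(15)   # placeholder species encodings
x_2b = mlp_2b(torch.cat([z_i, z_j, bessel_basis(r_ij)])) * poly_envelope(r_ij)
```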
_Interaction with the local environment_
The two-body embedding \(\left(x_{ij}^{\mbox{\tiny 2B}},V_{ij}^{\mathit{nlp},\mbox{\tiny 2B}}\right)\) is then processed through multiple "interaction" layers that allow to combine information with other atoms in the vicinity of atom \(i\). Each interaction layer starts by building a global equivariant neighborhood embedding for atom \(i\) from the current scalar embeddings \(x_{ik}\) and the spherical harmonics projections \(Y_{ik}^{\mathit{lp}}\):
\[\Gamma_{i}^{\mathit{nlp},(L)}=\sum_{k\in\mathcal{N}(i)}\left[MLP_{\mbox{ \tiny embed}}^{(L)}(x_{ik}^{(L-1)})\right]^{\mathit{nlp}}\ Y_{ik}^{\mathit{lp}} \tag{4}\]
with \(L=1,\ldots,N_{\mbox{\tiny layers}}\) the layer index and \(x_{ik}^{(0)}=x_{ik}^{\mbox{\tiny 2B}}\) and \(V_{ij}^{\mathit{nlp},(0)}=V_{ij}^{\mathit{nlp},\mbox{\tiny 2B}}\). The interaction is then performed via a tensor product of \(\Gamma_{i}^{\mathit{nlp},(L)}\) with each equivariant embedding \(V_{ij}^{\mathit{nlp},(L-1)}\) (the tensor product is done independently for each channel \(n\)). The resulting "latent space"
\[\mathcal{L}_{ij}^{\mathit{nmlp},(L)}\ =\ \left(\Gamma_{i}^{\mathit{nlp}_{1},(L)} \otimes V_{ij}^{\mathit{nl2p}_{2},(L)}\right)^{\mathit{nmlp}} \tag{5}\]
contains all possible combinations of rotational and parity indices that are allowed by symmetry (_i.e._ such that \(|l_{1}-l_{2}|\leq l\leq|l_{1}+l_{2}|\) and \(p=p_{1}p_{2}\)). Note that since multiple combinations of \((l_{1},p_{1}),(l_{2},p_{2})\) may produce outputs of indices \((l,p)\), we need to add a multiplicity index \(m\) that distinguishes these paths.
_Feature filtering and channel mixing_
Finally, the latent space is filtered to obtain the new pairwise embedding. The scalar embedding is combined with the scalar part of the latent space (with all channels and all multiplicities concatenated) to obtain:
\[x_{ij}^{(L)}=\alpha\ x_{ij}^{(L-1)}+\sqrt{1-\alpha^{2}}\ f_{c}(R_{ij})\times MLP_{\mbox{\tiny latent}}^{(L)}\Big[x_{ij}^{(L-1)}\ \ ||\ \ \mathcal{L}_{ij}^{nm01,(L)}\Big] \tag{6}\]
with \(0\leq\alpha<1\) a mixing coefficient that allows to easily propagate scalar information from a layer to the next. In our implementation, the value of \(\alpha\) can be set as a hyper-parameter (for example to the value \(\alpha=2/\sqrt{5}\) proposed in the original Allegro paper) or can be optimized independently for each layer during the training procedure.
The new equivariant features are obtained by linearly combining the elements of the latent space with same indices \((l,p)\) from all channels and multiplicities:
\[V_{ij}^{\mathit{nlp},(L)}=\sum_{n^{\prime},m}w_{n^{\prime},m}^{\mathit{nlp},(L )}\mathcal{L}_{ij}^{n^{\prime}\mathit{mlp},(L)} \tag{7}\]
which results in features with the same number of elements as the previous layer. The weights \(w_{n^{\prime},m}^{\mathit{nlp},(L)}\) are optimized in the training procedure.
The output features of the last layer \(\left(x_{ij}^{(N_{\text{layers}})},V_{ij}^{nlp,(N_{\text{layers}})}\right)\) compose the many-body embedding of our model which is passed to the output module to predict the different atomic or pairwise properties that the model is trained on.
#### ii.1.2 Positional encoding of chemical species
While a one-hot encoding allows to represent chemical species in a simple manner, it fixes from the start the number of different species that the model can treat. Thus, if more data becomes available for new species, one would have to retrain the model from scratch in order to accommodate for the new data. Furthermore, in such encoding, all species are treated equally and no similarities between species (for example closeness in the periodic table) are provided: the network must learn these correlations purely from data. This encoding is thus suitable when targeting a specific system but might not be the best choice when building a more general chemical model.
In our implementation, the default encoding is a positional encoding (as defined in ref. [50]) that encodes coordinates in the periodic table using sine and cosine functions of different frequencies. For example, the column index \(c\) is encoded as a vector \(e_{c}\) of dimension \(d_{\text{\tiny col}}\) as:
\[\forall k\in\{0,\ldots,d_{\text{\tiny col}}-1\},\quad(e_{c})_{k}=\begin{cases}\sin\!\left(c/\gamma_{\text{\tiny col}}^{2i/d_{\text{\tiny col}}}\right)&\text{if }k=2i\\ \cos\!\left(c/\gamma_{\text{\tiny col}}^{2i/d_{\text{\tiny col}}}\right)&\text{if }k=2i+1\end{cases} \tag{8}\]
and similarly for the row index with dimension \(d_{\text{\tiny row}}\) and frequency parameter \(\gamma_{\text{\tiny row}}\). In our implementation, the dimensions and frequency parameters are fixed at \(d_{\text{\tiny col}}=10\), \(d_{\text{\tiny row}}=5\), \(\gamma_{\text{\tiny col}}=1000\) and \(\gamma_{\text{\tiny row}}=100\). These could also be treated as hyperparameters or even learned during training. The row and column encodings are then concatenated to obtain the full encoding vector \(\mathbbm{1}(Z_{i})=\left[e_{\text{\tiny row}}(Z_{i})\;||\;e_{\text{\tiny col}}(Z_{i})\right]\). Figure 1 shows a heatmap of the positional encoding of the species H,C,N,O and F (from top to bottom). We see that the first five columns are the same for all the heavy atoms as they represent the row encoding (the second row of the periodic table in this case) while they are different from the first line corresponding to hydrogen. We also see that the last ten columns are different for all the species shown here as they are all on different columns of the periodic table. The motivation behind using this positional encoding is that we hypothesized that having similar encodings for species sharing a row or a column might help with generalization and allow the transfer of learned knowledge from one species to another, thus requiring less training data. Furthermore, as stated in ref. [50] the encoding for index \(c+k\) can be represented as a linear function of the encoding for index \(c\), which might further help with inferring similarities. Additional features such as the ionization state could be encoded in the same manner (though we restrict ourselves to neutral atoms in this work, thus only requiring the knowledge of chemical species).
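A minimal implementation of this encoding could look as follows (an illustrative sketch; the example row/column indices are our own convention):

```python
import torch

def positional_encoding(index, dim, gamma):
    # Eq. (8): sin on even components, cos on odd components
    k = torch.arange(dim, dtype=torch.float32)
    angles = index / gamma ** (2 * (k // 2) / dim)
    return torch.where(k % 2 == 0, torch.sin(angles), torch.cos(angles))

def encode_species(row, col, d_row=5, d_col=10, g_row=100.0, g_col=1000.0):
    # 1(Z) = [e_row || e_col]: a 15-dimensional vector with the defaults above
    return torch.cat([positional_encoding(row, d_row, g_row),
                      positional_encoding(col, d_col, g_col)])

z_O = encode_species(row=2, col=16)  # e.g. oxygen: 2nd row, 16th column
```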
### Output module
After computing the embedding from the Allegro model, we use it as input for independent MLPs for each target property. In the following, we will simply denote \(\left(x_{ij},V_{ij}^{nlp}\right)\) the output from the last Allegro layer (thus dropping the (\(L\)) layer index). The current implementation also allows some modularity in the composition of inputs and in the operations performed on the outputs. For example, the output can be computed either from the scalar embedding alone, to obtain invariant properties via a standard MLP:
\[o_{ij}=MLP_{\text{\tiny out}}[x_{ij}] \tag{9}\]
or from both the scalar and tensorial embeddings, via a linear projection of \(V_{ij}^{nlp}\):
\[O_{ij}^{mlp}=\sum_{n}\left[MLP_{\text{\tiny out}}(x_{ij})\right]_{n}^{mlp}\;V _{ij}^{nlp} \tag{10}\]
For atom-wise properties, the pairwise outputs are simply summed up on the central atom. For properties that should sum up to zero (for example partial atomic charges), the outputs \(o_{ij}\) and \(o_{ji}\) can be antisymmetrized (which for partial charges is equivalent to charge exchange between neighbouring atom pairs). Furthermore, in order to impose constraints on invariant outputs, a final activation function can optionally be applied. This is for example useful when the output targets a positive quantity (for instance an atomic volume) for which we can apply a _softplus_ function, a probability for which we may apply a sigmoid function, or a discrete probability distribution (in the case of multidimensional \(o_{ij}\)) for which a _softmax_ function can be used. Finally, the output is optionally shifted and scaled.
Further modifications can optionally be applied. For instance, the input can be filtered according to an additional, shorter-range cutoff,
Figure 1: Heatmap of the positional encodings of the chemical species H,C,N,O,F (from top to bottom). The first five columns represent the encoding of the row index in the periodic table, while the last ten columns represent the encoding of the column index.
for example one distinguishing between bonded and non-bonded pairs. Finally, the two-body embedding \(x_{ij}^{\text{2B}}\) can be used in place of, or concatenated to (as is done in the FENNIX-OP1 model), the final embedding used as input. This allows the output MLP to easily access simple pairwise information and should let the Allegro embedding specialize in finer many-body correlations. This compositional approach allows for great flexibility in the model's output, which is especially useful when experimenting with an ML parametrization of physical models.
The output module for the FENNIX-OP1 model is composed of three targets: a local energy contribution \(V_{i}^{\textsc{nn}}=\sum_{j\in\mathcal{N}(i)}V_{ij}^{\textsc{nn}}\) (which is a simple scalar output), an atomic partial charge \(q_{i}^{\textsc{nn}}=\sum_{j\in\mathcal{N}(i)}\Delta q_{ij}-\Delta q_{ji}\) (through antisymmetrized charge exchange) and an atomic volume \(v_{i}^{\textsc{nn}}\) (constrained to be positive).
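The following sketch illustrates these three heads on a toy directed pair list (single linear layers stand in for the deep output MLPs; names and shapes are illustrative, not the released code):

```python
import torch

n_atoms, emb_dim = 4, 256
src = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])  # source atom i of each directed pair
dst = torch.tensor([1, 2, 0, 3, 0, 3, 1, 2])  # destination atom j
x_ij = torch.randn(len(src), emb_dim)         # pair embeddings

head_E = torch.nn.Linear(emb_dim, 1)  # stand-ins for the deep output MLPs
head_q = torch.nn.Linear(emb_dim, 1)
head_v = torch.nn.Linear(emb_dim, 1)

# Local energies: V_i = sum over j of V_ij
V_i = torch.zeros(n_atoms).index_add_(0, src, head_E(x_ij).squeeze(-1))

# Charges by antisymmetrized exchange, q_i = sum_j (dq_ij - dq_ji),
# which guarantees global neutrality: sum_i q_i = 0.
dq = head_q(x_ij).squeeze(-1)
q_i = (torch.zeros(n_atoms).index_add_(0, src, dq)
       - torch.zeros(n_atoms).index_add_(0, dst, dq))

# Positive atomic volumes via a softplus activation
v_i = torch.nn.functional.softplus(
    torch.zeros(n_atoms).index_add_(0, src, head_v(x_ij).squeeze(-1)))
```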
### Physics module and energy functional form
Finally, the physics module uses the results from the output module and feed them into physically-motivated models to enrich the output.
In the case of FENNIX-OP1, the force field module is composed of an electrostatic energy term \(V_{ij}^{\textsc{elec,CP}}\), and a pairwise dispersion term \(V_{ij}^{\textsc{disp}}\). The functional form of FENNIX-OP1 is then given by:
\[V_{\textsc{op1}}=\sum_{i}V_{i}^{\textsc{nn}}+\sum_{i,j<i}V_{ij}^{\textsc{elec,CP}}+\sum_{i,j<i}V_{ij}^{\textsc{disp}} \tag{11}\]
The first term \(V_{i}^{\textsc{nn}}\) is the neural network contribution that accounts for short-range interactions that we introduced in section II.2. We model the electrostatic interaction \(V_{ij}^{\textsc{elec,CP}}\) via a Coulomb potential with fluctuating charges and the Piquemal charge penetration model [51]:

\[V_{ij}^{\textsc{elec,CP}}=\frac{1}{R_{ij}}\Big[N_{i}N_{j}+N_{j}(q_{i}-N_{i})f_{\text{damp}}^{i}(R_{ij})\\ +N_{i}(q_{j}-N_{j})f_{\text{damp}}^{j}(R_{ij})\\ +(q_{i}-N_{i})(q_{j}-N_{j})f_{\text{over}}^{i}(R_{ij})f_{\text{over}}^{j}(R_{ij})\Big] \tag{12}\]

where \(N_{i}\) is the number of valence electrons of atom \(i\), \(q_{i}=\epsilon q_{i}^{\textsc{nn}}\) is the environment-dependent charge of atom \(i\) predicted by the neural network (with an adjustable universal scaling parameter \(\epsilon\)) and \(f_{\text{damp}}^{i}(r)=1-e^{-\alpha r/r_{i}^{\text{vdw}}}\) and \(f_{\text{over}}^{i}(r)=1-e^{-\beta r/r_{i}^{\text{vdw}}}\) are damping functions with \(\alpha\) and \(\beta\) adjustable parameters that are assumed to be universal and where
\[r_{i}^{\text{vdw}}=\left(\frac{v_{i}^{\textsc{nn}}}{v_{i}^{\textsc{free}}}\right)^{\frac{1}{3}}r_{i}^{\text{vdw,free}} \tag{13}\]
is the environment-dependent van der Waals radius of atom \(i\) with \(v_{i}^{\textsc{nn}}/v_{i}^{\textsc{free}}\) the atomic volume ratio predicted by the neural network.
Finally, the dispersion interaction \(V_{ij}^{\textsc{disp}}\) is computed using the pairwise Tkatchenko-Scheffler model [52]:
\[V_{ij}^{\textsc{disp}}=-\frac{C_{6,ij}}{R_{ij}^{6}}\sigma_{ij}(R_{ij}) \tag{14}\]
with the combination rule:
\[C_{6,ij}=\frac{2C_{6,i}C_{6,j}}{C_{6,i}\frac{\alpha_{j}}{\alpha_{i}}+C_{6,j}\frac{\alpha_{i}}{\alpha_{j}}} \tag{15}\]
and the environment-dependent homonuclear parameters:
\[C_{6,i}=\bigg{(}\frac{v_{i}^{\textsc{nn}}}{v_{i}^{\textsc{free}}} \bigg{)}^{2}C_{6,i}^{\textsc{free}}\qquad;\qquad\alpha_{i}=\bigg{(}\frac{v_{i} ^{\textsc{nn}}}{v_{i}^{\textsc{free}}}\bigg{)}\alpha_{i}^{\textsc{free}} \tag{16}\]
with \(\alpha_{i}^{\textsc{free}}\) the isolated atom polarizabilities, \(C_{6,i}^{\textsc{free}}\) the isolated atom sixth order dispersion coefficients and the sigmoid damping function:
\[\sigma_{ij}(R_{ij})=\left[1+e^{-\gamma\left(\frac{1}{s}\frac{R_{ij}}{r_{i}^{\text{vdw}}+r_{j}^{\text{vdw}}}-1\right)}\right]^{-1} \tag{17}\]
with \(\gamma\) and \(s\) adjustable parameters that we assume to be universal.
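Putting Eqs. (13)-(17) together, the pairwise dispersion term can be sketched as follows; the free-atom reference values and the \(\gamma\), \(s\) values shown are illustrative placeholders, not the fitted FENNIX-OP1 parameters.

```python
import torch

def ts_dispersion(r_ij, v_ratio_i, v_ratio_j, c6_free, alpha_free, rvdw_free,
                  gamma=6.0, s=1.0):
    # Environment-dependent homonuclear parameters, Eq. (16)
    c6_i, c6_j = v_ratio_i**2 * c6_free, v_ratio_j**2 * c6_free
    a_i, a_j = v_ratio_i * alpha_free, v_ratio_j * alpha_free
    # Combination rule, Eq. (15)
    c6_ij = 2 * c6_i * c6_j / (c6_i * a_j / a_i + c6_j * a_i / a_j)
    # Effective vdW radii, Eq. (13), and sigmoid damping, Eq. (17)
    rvdw = (v_ratio_i**(1 / 3) + v_ratio_j**(1 / 3)) * rvdw_free
    sigma = torch.sigmoid(gamma * (r_ij / (s * rvdw) - 1.0))
    # Pairwise energy, Eq. (14)
    return -c6_ij / r_ij**6 * sigma

# An O-O pair with mildly compressed atomic volumes (all values in a.u.;
# the reference values are illustrative, for a same-element pair)
e_disp = ts_dispersion(torch.tensor(5.5), torch.tensor(0.9), torch.tensor(0.95),
                       c6_free=15.6, alpha_free=5.4, rvdw_free=3.19)
```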
We furthermore mention that the physics module is not limited, in principle, to computing energy terms. The aim of this module is to be a general physical interface between the ML embedding and the final target properties. For example, we use it in the FENNIX-OP1 model to correct for ML-predicted charge exchanges that would deplete the valence shell of an atom. The implementation also provides charge exchange via a simple bond-capacity model [53] that leverages ML-predicted atom-in-molecule electronegativities and whose capabilities will be explored in future iterations of the FENNIX model. Each physical model is implemented as a simple Pytorch Module that takes as input a dictionary which contains previously computed properties. It then adds its contribution and outputs the enriched dictionary. These physical submodules can then easily be chained, which facilitates the implementation of additional models.
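This chaining pattern can be illustrated with a minimal sketch (the module names and dictionary keys are our own and do not reflect the TorchNFF API):

```python
import torch

class FixedChargeCoulomb(torch.nn.Module):
    def forward(self, data: dict) -> dict:
        # q_pair holds the products q_i*q_j for each pair
        data["energy"] = data["energy"] + (data["q_pair"] / data["r_pair"]).sum()
        return data

class PairwiseDispersion(torch.nn.Module):
    def forward(self, data: dict) -> dict:
        data["energy"] = data["energy"] - (data["c6_pair"] / data["r_pair"]**6).sum()
        return data

# Submodules are chained; each one enriches the shared dictionary.
physics = torch.nn.Sequential(FixedChargeCoulomb(), PairwiseDispersion())
out = physics({"energy": torch.tensor(0.0),
               "r_pair": torch.tensor([3.0, 4.5]),
               "q_pair": torch.tensor([-0.06, 0.02]),
               "c6_pair": torch.tensor([10.0, 12.0])})
```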
## III Datasets and training procedure
In this section, we start by providing some details on the construction of FENNIX-OP1 model. We then review the datasets that we used for training the model and its target properties. Finally, we will focus on the non-trivial task of training a multi-output FENN model.
### FENNIX-OP1 model architecture
In the FENNIX-OP1, chemical species are encoded using the positional encoding defined in section II.1.2,
with 5 dimensions for the row encoding and 10 dimensions for the column encoding. We use \(N_{\text{basis}}=10\) Bessel basis functions for the radial embedding, with a cutoff distance \(r_{c}=5.2\) A and a smoothing parameter \(p=3\) for the polynomial envelope. We used 256 features for the scalar embedding and 10 channels with a maximum rotational index \(l_{\text{max}}=2\) for all equivariant features. The two-body scalar MLP is composed of two hidden layers with 64 and 128 neurons respectively with SiLU activation function. The two-body embedding MLP for the initial equivariant features is a simple linear projection of the two-body embedding with no activation function. The embedding is constructed using 3 Allegro layers. In each layer, the embedding MLP is a simple linear projection with no activation function and the latent MLP contains two hidden layers both with 256 neurons and SiLU activation function. The mixing coefficient \(\alpha\) is initially set to the original \(2/\sqrt{5}\) and is optimized independently for each layer during training.
The output module is composed of three independent identical MLPs for the short-range energy contribution, the charge exchange and the atomic volume. The input of these MLPs is the concatenation of the two-body scalar embedding and the final scalar embedding. They have 5 hidden layers with 256, 128, 64, 32 and 16 neurons. In total, the model has approximately 1.2 million parameters.
The output module also has a "constant" submodule that simply provides the atomic reference energies. Importantly, we used the CCSD(T) isolated atom energies as a reference instead of the average atomic energy over the dataset that is typically advised when training a ML model. Indeed, this reference energy has a physical meaning - which is the energy of an atom when it has no neighbors to interact with - and is not solely a convenience parameter for facilitating the training. In particular, this choice has important consequences for the description of molecular dissociation [54], as will become clear in section IV.1.
Finally, the physics module is composed of four submodules: a charge correction module, a charge scaling module, the Coulomb interaction module and the dispersion module. These last two modules implement the physical interactions using the corresponding equations described in section II.3. The charge scaling module simply scales all charges by a constant factor in order to compensate for the lack of higher-order multipoles in the permanent electrostatics. The charge correction module antisymmetrizes the charge exchanges and ensures that atoms do not unphysically deplete their valence shell. Indeed, if we assume that only valence electrons can be transferred, an atom cannot have a partial charge larger than \(+N_{i}\) (which is particularly important for the charge penetration model that we use). This constraint is not ensured by the neural network model so we need to enforce it _a posteriori_. The constraint is achieved by transferring back some electrons from neighbouring atoms that drained too much charge. First the unphysical "hole" in the valence shell is computed using a smooth function (to ensure smooth gradients of the final coulomb energy) on the basis that an atom cannot lose more than 95 percent of its valence electrons. Charges that sum up to the valence hole are then transferred from neighbouring atoms that took charges, proportional to the quantity drained. This procedure should enforce the constraint in a single iteration for most non-pathological cases. Reassuringly however, the charge correction is almost never needed after training (_i.e._ the valence hole is almost always zero), even in condensed-phase MD simulations for which the model was not explicitly trained.
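One plausible realization of this correction is sketched below; the 95% valence threshold and the proportional give-back follow the description above, but the smooth-hole functional form (a scaled softplus) is an assumption, as the exact expression used in FENNIX-OP1 is not spelled out here.

```python
import torch

def correct_valence(q, n_val, drained, beta=20.0):
    """q: atomic charges; n_val: valence electron counts; drained[i, j]:
    charge that atom j drained from atom i (zero elsewhere)."""
    # Smooth positive part of the overdraft beyond 95% of the valence shell
    hole = torch.nn.functional.softplus(beta * (q - 0.95 * n_val)) / beta
    # Give electrons back from the neighbours, proportionally to what each
    # drained (rows with no draining have hole ~ 0, so the division is moot)
    frac = drained / drained.sum(dim=1, keepdim=True).clamp(min=1e-8)
    give_back = (hole[:, None] * frac).sum(dim=0)
    return q - hole + give_back  # total charge is conserved
```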
### Datasets and target properties
For the FENNIX-OP1 model, we chose to reproduce high-level coupled-cluster energies and we thus selected three of the few freely-available and generalist datasets that provide such data: the ANI-1ccx [16] dataset, the DES370K [55] dataset and the q-AQUA dimers dataset [56].
The **ANI-1ccx dataset** provides approximately 500,000 CCSD(T)/CBS total energies of various neutral monomers and dimers (composed of the elements H,C,N,O) in equilibrium and out-of-equilibrium configurations. It also provides atomic forces at DFT level that were crucial to speed-up the beginning of the training procedure. Importantly, it provides partial charges and atomic volumes obtained from a Minimal Basis Iterative Stockholder [57] (MBIS) partitioning of the DFT electronic density that were used to parametrize the force field terms.
The **DES370K dataset** is composed of approximately 370,000 CCSD(T)/CBS interaction energies of diverse dimers in various configurations. It comprises both neutral and charged molecules. The latter were discarded for the training of the FENNIX-OP1 model as out-of-scope for this study. It also provides SAPT decompositions of the interaction energies that we used to regularize the Coulomb interactions.
The **q-AQUA dimers dataset** is composed of more than 70,000 water dimer interaction energies at the CCSD(T)/CBS level. We used this dataset at the end of the training in order to fine-tune the model for water.
The FENNIX-OP1 model has then three target properties that use the available data: the CCSD(T) total energy of the system as modelled by \(V_{\text{OP1}}\) described in section II.3, the MBIS partial charges \(q_{i}^{\text{NN}}\) and the ratio of MBIS volumes to free atom volumes \(v_{i}^{\text{NN}}/v_{i}^{\text{free}}\).
### Training procedure
The training of a multi-output force-field-enhanced neural network revealed to be a non-trivial task. Indeed, the inter-dependencies between target properties (for example partial charges and the electrostatic contribution to the total energy) seemed to pose difficulties for the standard optimization methods, which implied that a brute-force optimization of the whole model would not give satisfactory results. To overcome this difficulty, we resorted to a training procedure in multiple stages that is described in the following of this section.
Furthermore, in order to obtain a final model that was stable when performing molecular dynamics, we found that strong regularization was needed, both in the form of standard weight decay and physically-motivated loss contributions, as detailed later. Throughout, we used the AdamW optimizer [58] implemented in Pytorch, as its algorithm for weight decay provides one of the best compromises between training speed and accuracy. We used a fairly strong weight decay parameter of 0.5 for all the parameters in the output module and no weight decay for the Allegro parameters. In all stages, we used the same random 10% of the dataset as a validation set and optimized the model using mini-batches of 256 configurations from the ANI-1ccx and 64 configurations from DES370K and q-AQUA when they are used.
The training procedure for the FENNIX-OP1 model required four stages. In the first stage, the model was trained to reproduce DFT total energies and forces using the short-range contribution \(V^{\textsc{nn}}\) only. We also trained at the same time the charges and atomic volumes to reproduce the MBIS targets. At this stage, only the ANI-1ccx dataset was used. The loss function for this stage is given by:
\[L^{(1)}=\lambda_{E}(E^{\textsc{DFT}}-V^{\textsc{nn}})^{2}+ \lambda_{F}\sum_{j=1}^{3}\sum_{i=1}^{N_{\text{at}}}\left(F_{ij}^{\textsc{DFT}}- F_{ij}^{\textsc{nn}}\right)^{2} \tag{18}\] \[+\lambda_{q}\sum_{i=1}^{N_{\text{at}}}\left(q_{i}^{\textsc{MBIS} }-q_{i}^{NN}\right)^{2}+\lambda_{v}\sum_{i=1}^{N_{\text{at}}}\left(\frac{v_{ i}^{\textsc{MBIS}}}{v_{i}^{\textsc{free}}}-\frac{v_{i}^{\textsc{nn}}}{v_{i}^{ \textsc{free}}}\right)^{2}\]
with \(\lambda_{E}=0.001\), \(\lambda_{F}=1\) and \(\lambda_{q}=\lambda_{v}=1000\), and \(F^{\textsc{nn}}\) is the model's predicted force obtained by automatic differentiation (Pytorch's autograd) of the total energy \(V^{\textsc{nn}}\). As suggested in previous studies [28], we strongly favour learning forces over energies in the beginning to accelerate the training procedure. We note that the provided loss is for a single configuration and that, in practice, it is averaged over the configurations in the mini-batch. We further added a regularization in order to minimize the off-diagonal elements of the covariance matrix of the scalar embedding's features over a batch. This promotes learning statistically independent features in the embedding, which should be favourable for a multi-output network, and we found that it led to models with better generalization capabilities. In this stage, we train all the parameters in the embedding and output modules with a starting learning rate of \(10^{-3}\). Furthermore, we used a learning rate scheduler that reduces the learning rate when the error on the training set stops diminishing for a few steps (we set the patience of the scheduler to 10 epochs and the learning rate scaling factor to 0.8). After about 100 epochs, progress of both training and validation steps slowed down and we modified the energy and force parameters to \(\lambda_{E}=0.01\) and \(\lambda_{F}=0.1\) in order to obtain a more balanced training. We stopped the first stage when the learning rate reached \(10^{-4}\).
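In practice, the stage-1 loss of Eq. (18) with autograd forces can be sketched as follows (the model interface is a placeholder; the weights are those quoted above):

```python
import torch

def stage1_loss(model, pos, species, E_ref, F_ref, q_ref, v_ref,
                lE=0.001, lF=1.0, lq=1000.0, lv=1000.0):
    pos = pos.detach().requires_grad_(True)
    E, q, v = model(pos, species)  # total energy, charges, volume ratios
    # Forces by automatic differentiation; create_graph keeps the force
    # loss differentiable with respect to the model parameters.
    F = -torch.autograd.grad(E, pos, create_graph=True)[0]
    return (lE * (E_ref - E) ** 2 + lF * ((F_ref - F) ** 2).sum()
            + lq * ((q_ref - q) ** 2).sum() + lv * ((v_ref - v) ** 2).sum())
```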
In the second stage, we freeze the embedding parameters and output MLPs for charge and volumes and activate the Coulomb and dispersion energy terms. We then retrain the short-range energy MLP so that the full \(V^{\textsc{op}_{1}}\) of eq.11 reproduces DFT energies and forces. The loss function for this stage is:
\[L^{(2)}=\lambda_{E}(E^{\textsc{DFT}}-V^{\textsc{op}_{1}})^{2}+\lambda_{F} \sum_{j=1}^{3}\sum_{i=1}^{N_{\text{at}}}\left(F_{ij}^{\textsc{DFT}}-F_{ij}^{ \textsc{op}_{1}}\right)^{2} \tag{19}\]
with the same weights as at the end of the previous stage. Freezing the embedding ensures that the predicted volumes and charges are not modified in this training stage. Since the energy target is modified, the error starts much higher than at the end of the previous stage and quickly decreases. When the error on the training and validation sets drops to the same order as in the previous stage, the full model is unfrozen and training is resumed until the error stops decreasing.
In the third stage, the embedding and charge and volumes MLPs are frozen again and the energy MLP is finally retrained to reproduce CCSD(T) energies from both ANI-1ccx total energies and DES370K interaction energies. We used the same type of mean-square loss functions with \(\lambda_{E}=0.1\) and \(\lambda_{\textsc{des}}=5\). Again, we train the energy MLP until the error is close to the previous stage, unfreeze the whole model and optimize again all the parameters. For this stage, we also optimize the parameters from the physical module in order to reach the lowest error possible. We stop the training when the learning rate reaches \(10^{-5}\).
In the last training stage, we refine the model for water by including the q-AQUA water dimers interaction energies in the loss function. We also generated batches of randomly deformed water monomers (with very large deformations) and trained the model to reproduce forces from the highly accurate Partridge-Schwenke potential energy surface (that was itself fitted on high-accuracy coupled-cluster data). This was particularly helpful to reproduce the molecule's bending energy surface away from equilibrium where fewer data are available in the ANI-1ccx dataset.
At the end of the training procedure, the model reached a root mean square error (RMSE) of less than 4 kcal/mol for CCSD(T) total energies on both validation and training sets of the ANI-1ccx dataset. While it was possible to reach much lower errors with less regularized models (less than 1 kcal/mol), we found that they were unstable when performing MD and concluded that they were overfitting the data. We thus favoured a more strongly regularized, perhaps slightly underfitted model that allowed for better generalization in the condensed phase. The model also reached a RMSE of about 0.35 kcal/mol on both DES370K and q-AQUA datasets. Finally, it reached a RMSE of about 0.017 e for charges and about 0.017 for volume ratios. As for total energies, less regularized models allowed for much lower errors (less than 0.008 e for charges for example) but with visible overfitting, leading to irregular Coulomb interactions and unstable dynamics.
As a validation for the water interactions, we used the standard energy benchmarks when training force fields for water. The model gives a RMSE of 0.13 kcal/mol on the Smith dimer set [59] which are representative of the most stable water dimer configurations. This low error is not surprising as the Smith dimers are very close to configurations present in the q-AQUA dataset on which the model was trained. For a more challenging test, we used a set of typical water clusters up to hexamers. For these, the model achieved a RMSE lower than 2 kcal/mol, which is comparable to our recent Q-AMOEBA model [46] which was specifically trained to reproduce these energies. In the next section, we investigate more thoroughly the validity, transferability and robustness of the model up to the unforgiving test of condensed-phase molecular dynamics including nuclear quantum effects.
## IV Model validation
In this section, we validate the FENNIX-OP1 model using a few examples of applications. First, we compute bond dissociation energy profiles of typical small molecules and show that the model is able to consistently break covalent bonds. Then, we show that the model is able to produce stable and accurate MD simulations of condensed-phase water including nuclear quantum effects. For this study, we included nuclear quantum effects using the adaptive quantum thermal bath (adQTB) method introduced in ref. [43]. We showed in previous studies [44] that the adQTB provides an efficient and accurate alternative to path integrals - the gold standard method for including NQEs in MD simulations - at a cost similar to classical MD. For these two examples, we compare our results to the ANI models [16] that have a comparable scope as FENNIX-OP1 in terms of chemical diversity and were trained on similar datasets. Finally, we show that FENNIX-OP1 is able to produce stable dynamics of organic molecules solvated in water and provides a good qualitative description of the torsional free energy landscape of the alanine dipeptide in solution.
All calculations for this work were performed on an Nvidia A100 GPU using our custom TorchNFF package that is built on top of Pytorch [60] and provides the implementation for FENNIX as well as a simple MD algorithm for NVT simulations in periodic boundary conditions (with Ewald summation for electrostatic interactions) and an efficient Pytorch implementation of the adQTB. TorchNFF is in an early development version and is thus not yet optimized. For example, a FENNIX-OP1 simulations of a box of 216 molecules of water in periodic boundary conditions can currently reach about 0.8 ns/day of adQTB simulation (with a timestep of 0.5 fs) on a single A100 GPU.
### Bond dissociation energy profiles
We first validate the training and the choice of reference energy on potential energy curves for the dissociation of covalent bonds in small molecules. This kind of bond breaking is a fundamental step in the process of many chemical reactions, for example in enzyme-catalyzed reactions [61], and is key to the description of mass spectroscopy experiments [62, 63]. They are however difficult to accurately model: force fields usually forbid them by design (by assigning a fixed connectivity) and _ab initio_ approaches require the use of expensive multi-reference calculations to correctly describe the dissociation process. Their practical description thus usually requires the use of specifically designed force fields (such as ReaxFF) or low-dimensional potential energy surfaces.
Figure 2 shows the potential energy curves for a) the dissociation of a hydrogen atom in methane; b) the asymmetric dissociation of the water molecule; c) the dissociation of the H-F molecule computed with FENNIX-OP1 and compared with reference quantum calculations from the literature (multi-reference CI/6-31G** from ref [64] for CH4, the Partridge-Schwenke PES [65] that was fitted on CCSD(T) data, and multi-reference CI/aug-cc-pV6Z from ref [66] for the H-F molecule) and the ANI models. We see that FENNIX-OP1 consistently captures a smooth dissociation curve thanks to the physically-motivated choice of reference energy. On the other hand, the ANI models, for which the reference energy was set according to average values over the dataset, fail to reproduce the dissociation curves for large distances. FENNIX-OP1 agrees particularly well with the reference for water as the Partridge-Schwenke PES was included in the training set. For the other two molecules, it tends to underestimate the dissociation barrier. Interestingly, while the training data did not contain covalent interaction energies for F, the model is still able to reasonably generalize its prediction. Indeed, thanks to the positional encoding of chemical species that we introduced in section II.1.2, we expect the model to be able to generalize, at least qualitatively, across the periodic table.
### Structural and spectroscopic properties of liquid water
In order to test the robustness of the model, we performed molecular dynamics simulations of liquid water. Since the model was fitted purely on _ab initio_ data, it was critical to explicitly include nuclear quantum effects in order to accurately compute thermodynamical properties [45]. We used the adaptive quantum thermal bath method to include NQEs [43], as it was previously shown to be a robust alternative to the more costly path integrals MD [44]. All simulations were performed for a box of 216 molecules in the NVT ensemble at 300K and experimental density, with the BAOAB integrator [68] and a timestep of 0.5 fs. Coulomb interactions in periodic boundary conditions were handled using the Ewald summation method [69], that we directly implemented using PyTorch operations so that we can leverage its automatic differentiation capabilities to obtain atomic forces. We equilibrated the system for 100 ps, which was sufficient to thermalize the system and converge the adQTB parameters. We then computed the radial distribution functions (RDFs) from 1 ns of MD simulation. Figure 3 shows the partial RDFs of water simulated using adQTB and classical MD with the FENNIX-OP1 model compared to experimental results from [67]. As a baseline, we also compare to ANI-2x results with adQTB MD. We see that FENNIX-OP1 is able to accurately capture the subtle structure of liquid water and agrees well with experimental results when nuclear quantum effects are included. This is quite remarkable as the model was not explicitly trained on condensed phase reference data. On the other hand, ANI-2x predicts a largely over-structured liquid. We suspect that this is due to both the insufficient quality of the _ab initio_ reference for ANI-2x (DFT with the wB97x functional) and to the lack of long-range effects in the model itself. The ANI-1ccx model, that was trained on the same CCSD(T) data as FENNIX-OP1, produced very accurate radial distribution functions at the beginning of the simulations, thus validating the necessity of accurate CCSD(T) reference data to be able to describe liquid water. It was however not stable for simulation times longer than a few tens of picoseconds (even in classical MD with a smaller 0.1 fs timestep). We thus suspect that long-range effects, which contribute to an overall energetical stabilization, are important to maintain the structure of the liquid.
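For reference, partial RDFs can be estimated from stored frames with a standard minimum-image histogram; the sketch below (our own, with illustrative array shapes, not the analysis code used here) computes the O-O g(r) for a cubic box:

```python
import numpy as np

def rdf_OO(frames, box, r_max=6.0, n_bins=120):
    """frames: (n_frames, n_O, 3) oxygen positions; box: cubic box length."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    n_O = frames.shape[1]
    iu = np.triu_indices(n_O, k=1)
    for pos in frames:
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)  # minimum-image convention
        hist += np.histogram(np.linalg.norm(d, axis=-1)[iu], bins=edges)[0]
    # Normalize by the ideal-gas pair count in each spherical shell
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = len(frames) * n_O * (n_O - 1) / 2 * shell / box**3
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
```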
Figure 3: Partial radial distribution functions of liquid water at 300K simulated using classical MD and adQTB MD with the FENNIX-OP1 model, compared to experimental results from [67] and ANI-2x (adQTB) results.
Figure 2: Potential energy curves for the dissociations of: a) methane CH4\(\rightarrow\)CH3+H with the CH3 group fixed at the CH4 equilibrium geometry; b) water H2O\(\rightarrow\)OH+H with the remaining O-H bond and the angle fixed at the H2O equilibrium geometry; c) HF\(\rightarrow\)H+F.
We then explored the impact of deuteration on the structural properties. Such thermodynamical isotope effects arise purely from NQEs (since the classical Boltzmann distribution does not depend on the mass of the atoms) and can be used to experimentally probe the impact of quantum effects on chemical processes. We showed in earlier works that the adQTB is able to accurately capture these isotope effects [44; 46]. Figure 4 shows the O-O radial distribution function of light and heavy water at 300K computed using FENNIX-OP1 and compared to experimental results [67] for light water. We see that deuteration tends to strengthen the liquid's structure, in good agreement with tendencies observed experimentally (see ref. [67]).
Additionally, FENNIX-OP1 predicts an enthalpy of vaporization for light water around 13 kcal/mol, which is slightly overestimated compared to the experimental result of 10.51 kcal/mol [70]. This indicates a tendency of the model to form too strong hydrogen bonds in the condensed phase. We expect that training the model using data from larger molecular clusters would improve the results overall. Such data at CCSD(T) level is however not yet available to our knowledge.
Contrary to ML models that only predict interaction energies, FENNIX provides environment-dependent atomic charges, enabling us to estimate infrared spectra through the computation of time correlation functions of the (nonlinear) total dipole moment from the adQTB dynamics. The results, shown in Supporting Information, are in qualitative agreement with experimental data for the main features, albeit with a 150 cm\({}^{-1}\) red-shift of the bending peak (similar to the q-SPC/Fw model [71]) and a broadening of low-frequency features. Comparison to spectra computed using classical MD shows that the model captures the typical red-shift of the stretching peak due to nuclear quantum effects [72; 73; 74]. The calculated spectrum however lacks typical features produced by the slow dynamics of the induced dipole [72; 46; 75]. Even though FENNIX-OP1 is able to capture local polarization effects via fluctuating charges, this subtle many-body effect cannot be reproduced without an explicit treatment of long-range polarization, as is done for example in (Q-)AMOEBA [46; 76], SIBFA [9] or ARROW [7]. Future iterations of FENNIX will then focus on refining the physical model to include such interactions in a framework featuring multipolar electrostatics.
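Since the model outputs the fluctuating charges along the trajectory, the IR line shape can be estimated from the total dipole \(M(t)=\sum_{i}q_{i}(t)\,\vec{R}_{i}(t)\); a minimal estimator is sketched below (our own sketch of the standard dipole-derivative power spectrum, omitting prefactors and quantum corrections):

```python
import numpy as np

def ir_lineshape(charges, positions, dt_fs):
    """charges: (n_steps, n_atoms); positions: (n_steps, n_atoms, 3)."""
    M = (charges[..., None] * positions).sum(axis=1)  # total dipole, (n_steps, 3)
    Mdot = np.gradient(M, dt_fs, axis=0)              # dipole time derivative
    spec = sum(np.abs(np.fft.rfft(Mdot[:, k])) ** 2 for k in range(3)) / len(M)
    freq_cm1 = np.fft.rfftfreq(len(M), d=dt_fs * 1e-15) / 2.998e10  # Hz -> cm^-1
    return freq_cm1, spec
```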
### Enhanced sampling of the torsional free energy profile of solvated alanine dipeptide
Lastly, we performed molecular dynamics simulations (using enhanced sampling) of alanine dipeptide solvated in a cubic box of water of length 30 A (with periodic boundary conditions). The alanine dipeptide is a fundamental building block for larger biomolecules and its accurate description is thus critical. This system is also a typical benchmark for both interaction models and enhanced sampling methods and was recently shown to be extremely challenging for ML potentials [32]. It thus constitutes an interesting test case for our potential.
Figure 5 shows the joint free energy profile, obtained after 3 ns of classical well-tempered metadynamics [77] simulation (performed using the PLUMED library [78]) with the FENNIX-OP1 model, for the two dihedral angles defining the torsional degrees of freedom of the molecule. The model correctly assigns most of the probability to the two expected energy basins denoted 1 and 2 on figure 5. It was also able to explore the less probable states denoted 3 and 4 in the figure. The model however seems to underestimate the barrier at \(\Phi=0\), thus allowing too many transitions from states 1 and 2 to states 3 and 4. We note that this simulation was performed using classical MD, as the adQTB is not directly compatible with enhanced sampling methods. It would be interesting to explore the impact of nuclear quantum effects on this energy profile. Indeed, as we showed above, nuclear quantum effects drastically modify the structure of water. Since the torsional free energy profile is strongly affected by the solvent (see the difference between solvated and gas phase alanine dipeptide for example in ref. [79]), we can expect important changes when going from classical to quantum dynamics. This calculation would require long and costly path-integral simulations or the development of adequate enhanced sampling techniques for the adQTB, that we leave for future works.
### Perspective: gas-phase simulation of a protein
Figure 4: O-O radial distribution computed using adQTB and the FENNIX-OP1 model for light and heavy water, compared to experimental results [67] on light water.

As a perspective, we pushed the model towards larger molecular structures that are not included in the training data. We performed molecular dynamics, including nuclear quantum effects, of the 26-residue 1FSV protein [80] in gas phase. It contains around 500 atoms with a few charged groups. Since the model is not designed to handle ionic species, we first performed a simulation where we neutralized each charged group (by addition or subtraction of a proton). We ran an adQTB MD simulation for 500 ps with a timestep of 0.5 fs. Starting from the PDB structure, the dynamics was stable and the protein folded to a more compact form typical of the gas phase. The RMSD of the backbone with respect to the PDB structure stabilized after around 150 ps of simulation to a value of \(\sim\)3.2 Å, as shown in figure 6. This fast structural rearrangement is mostly driven by the long-range interactions present in the model, whose magnitude is greater in the gas phase than in the condensed phase due to the lack of screening by the solvent. These preliminary results illustrate the robustness of the force-field-enhanced ML approach, which is able to generalize to much larger and more complex systems than those included in the training set.
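For reference, the backbone RMSD of Fig. 6 can be reproduced in a few lines of numpy with the Kabsch superposition; the coordinate arrays (one trajectory frame and the PDB reference, backbone atoms only, in the same atom ordering) are assumed inputs:

```python
import numpy as np

def backbone_rmsd(coords, ref):
    """RMSD between two (n_atoms, 3) coordinate sets after optimal superposition."""
    # Center both structures on their centroids.
    p = coords - coords.mean(axis=0)
    q = ref - ref.mean(axis=0)
    # Kabsch algorithm: optimal rotation from the SVD of the covariance matrix.
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(u @ vt))   # avoid improper rotations (reflections)
    u[:, -1] *= d
    p_rot = p @ (u @ vt)                 # rotate the frame onto the reference
    return np.sqrt(np.mean(np.sum((p_rot - q) ** 2, axis=1)))
```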
Even though the model was not trained on charged species, we ran a short MD simulation of the protein in solution by explicitly distributing the ionic charges among the atoms of the charged groups. After a few picoseconds, however, the simulation displayed instabilities due to unphysical proton transfers around the charged groups and too-large Coulomb interactions that collapsed the molecule. In order to produce quantitative results and stable dynamics in this context, the next iterations of the FENNIX model will need to explicitly handle charged systems, which will require a more thorough dataset. The recently introduced SPICE dataset [81] could be a good complement to the ones already used in this work, as it provides reference data for many charged molecules and focuses on biological systems (albeit at the lower-quality DFT level). Furthermore, it will require a finer description of molecular interactions through the inclusion of more advanced energy terms (such as multipolar electrostatics and polarization).
## V Conclusion and Outlooks
We introduced FENNIX, a general class of models representing molecular interactions using a hybrid ML/FF approach. It builds upon the latest equivariant architectures to construct an embedding of the local chemical environment, which is fed into a multi-output module predicting atom-in-molecule properties. The latter allows for a flexible design of potential energy terms. We apply this strategy to design a specific model, FENNIX-OP1, capturing both short-range interactions, with a dedicated ML energy term, and long-range ones using predicted atomic volumes and charges that parametrize both a charge-penetration-corrected electrostatic energy and a Tkatchenko-Scheffler-like dispersion term. FENNIX-OP1 is first trained on both the ANI-1ccx and the DES370K datasets, which contain monomers, dimers and a few multimeric structures focusing on small neutral organic molecules. It is then refined for water using the q-AQUA dimers dataset and the highly accurate Partridge-Schwenke potential energy surface for a final RMSE of 0.35 kcal/mol on interaction energies. We then showed that the model is stable during molecular dynamics (including NQEs via adQTB) and able to generalize to the condensed phase (despite the lack of reference data), capturing structural properties of bulk water and solvated organic molecules. More precisely, it qualitatively reproduces the torsional free energy profile of the solvated alanine dipeptide using enhanced sampling techniques and yields stable trajectories of the 1FSV protein in gas phase.
Figure 5: Contour plot of the free energy profile of the two torsional angles of solvated alanine dipeptide computed using FENNIX-OP1 model. Sampling was enhanced using well-tempered metadynamics for the two torsional angles. Energies in kcal/mol.
Figure 6: Root-mean-square deviation of the 1FSV protein with respect to the PDB structure during a 500 ps adQTB MD simulation in gas phase with the FENNIX-OP1 model.
Interestingly, the model is able to capture similarities between chemical species thanks to the continuous encoding of their positions in the periodic table. Furthermore, it makes it possible to overcome an intrinsic limitation of traditional force fields, as it satisfactorily reproduces covalent bond dissociation. All in all, this work shows the extrapolating power of hybrid physically-driven machine learning approaches and paves the way to even more refined models using enriched datasets (in particular for charged systems) and additional energy terms in the spirit of advanced polarizable force fields such as SIBFA that include multipolar electrostatics and many-body polarization and dispersion. These will enable the study of reactive complex systems, such as proton transfers between DNA pairs or enzyme-catalyzed reactions, where such an accurate description of molecular interactions is mandatory. The FENNIX framework is available through a dedicated python package which includes the FENNIX-OP1 model. Further work will focus on the extension of the approach and its inclusion in the optimized Deep-HP platform [82, 83] present within the Tinker-HP package [84, 12].
## Acknowledgements
This work was made possible thanks to funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 810367), project EMC2. Computations have been performed at GENCI (IDRIS, Orsay, France and TGCC, Bruyeres le Chatel) on grant no A0130712052.
## Supporting Information Available
The supporting information contains the results for infrared spectra of liquid water computed using the FENNIX-OP1 model.
|
2301.07407 | TAME: Attention Mechanism Based Feature Fusion for Generating
Explanation Maps of Convolutional Neural Networks | The apparent ``black box'' nature of neural networks is a barrier to adoption
in applications where explainability is essential. This paper presents TAME
(Trainable Attention Mechanism for Explanations), a method for generating
explanation maps with a multi-branch hierarchical attention mechanism. TAME
combines a target model's feature maps from multiple layers using an attention
mechanism, transforming them into an explanation map. TAME can easily be
applied to any convolutional neural network (CNN) by streamlining the
optimization of the attention mechanism's training method and the selection of
target model's feature maps. After training, explanation maps can be computed
in a single forward pass. We apply TAME to two widely used models, i.e. VGG-16
and ResNet-50, trained on ImageNet and show improvements over previous
top-performing methods. We also provide a comprehensive ablation study
comparing the performance of different variations of TAME's architecture. TAME
source code is made publicly available at https://github.com/bmezaris/TAME | Mariano Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris | 2023-01-18T10:05:28Z | http://arxiv.org/abs/2301.07407v1 | TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks
###### Abstract
The apparent "black box" nature of neural networks is a barrier to adoption in applications where explainability is essential. This paper presents TAME (Trainable Attention Mechanism for Explanations)1, a method for generating explanation maps with a multi-branch hierarchical attention mechanism. TAME combines a target model's feature maps from multiple layers using an attention mechanism, transforming them into an explanation map. TAME can easily be applied to any convolutional neural network (CNN) by streamlining the optimization of the attention mechanism's training method and the selection of target model's feature maps. After training, explanation maps can be computed in a single forward pass. We apply TAME to two widely used models, i.e. VGG-16 and ResNet-50, trained on ImageNet and show improvements over previous top-performing methods. We also provide a comprehensive ablation study comparing the performance of different variations of TAME's architecture.2
Footnote 1: Source code is made publicly available at: [https://github.com/bmezaris/TAME](https://github.com/bmezaris/TAME)
CNNs, Deep Learning, Explainable AI, Interpretable ML, Attention.
## I Introduction
Convolutional neural networks (CNNs) [17] have achieved exceptional performance in many important visual tasks such as breast tumor detection [6], video summarization [3] and event recognition [10]. The trade-off between model performance and explainability, together with the end-to-end learning strategy, leads to the development of CNNs that often act as "black box" models lacking transparency [12]. This fact makes it difficult to convince users in critical fields, such as healthcare, law, and governance, to trust and employ such systems, thus limiting the adoption of AI [2, 12]. Therefore, it is necessary to develop solutions that address these challenges.
Explainable artificial intelligence (XAI) is an active research area in machine learning. XAI focuses on developing explainable techniques that help users of AI systems to comprehend, trust and more efficiently manage them [20, 4]. For the image classification task, a diverse range of post-hoc explanation approaches exist that, in a second step, take the trained model and try to uncover its decision strategy [20]. These methods produce an explanation map highlighting salient input features. We should note that these methods should not be confused with approaches targeting weakly supervised learning tasks such as weakly supervised object localization or segmentation [16], which also generate heatmaps as an intermediate step; their goal is to locate the region of the target object rather than to explain the classifier's decision (e.g. see the example depicted in Fig. 1).
Gradient-based methods [22, 5] were probably among the first to appear in the XAI domain. These methods use gradient information to produce explanations, but they are strongly affected by noisy gradients, and the explanations contain high-frequency variations [1]. Perturbation-based methods [19, 28] perturb the input and observe changes in the output, and thus do not suffer from the gradient-related problems above. Similarly, response-based methods [21, 8, 27] combine a model's intermediate representations, or features, to generate explanations. However, most methods of the two latter categories described above are computationally expensive because each input requires many forward passes for an accurate explanation map to be produced.

Fig. 1: An explanation produced by TAME. The input image belongs to the class "velvet", which cannot be localized. The produced explanation highlights the salient features of the image, explaining the decision of the classifier.
To address the above limitation, L-CAM [11] trains an attention mechanism to combine feature maps from the last convolutional layer of a frozen CNN model and produce high quality explanations in one forward pass. However, L-CAM, by design, uses the feature maps of only the last convolutional layer, and thus, may not be able to adequately capture all the information contained in the CNN model. To this end, we propose TAME (Trainable Attention Mechanism for Explanations), which exploits intermediate feature maps extracted from multiple layers of any CNN model. These features are then used to train a multi-branch hierarchical attention architecture for generating class-specific explanation maps in a single forward pass. We provide a comprehensive evaluation study of the proposed method on ImageNet [7] using two popular CNN models (VGG-16 [23], ResNet-50 [13]) and popular XAI measures [5], demonstrating that TAME achieves improved explainability performance over other top-performing methods in this domain.
## II Related Work
In this section, we briefly survey the state-of-the-art XAI approaches that are mostly related to ours. For a more comprehensive review the interested reader is referred to [4, 20].
Most XAI approaches can be roughly categorized into response-, gradient- and perturbation-based. Gradient-based methods [5, 22] compute the gradient of a given input with backpropagation and modify it in various ways to produce an explanation map. Grad-CAM [22], one of the first in this category, uses global average pooling in the gradients of the target network's logits with respect to the feature maps to compute weights. The explanation maps are obtained as the weighted combination of feature maps and the computed weights. Grad-CAM++ [5] similarly uses gradients to generate explanation maps. These methods suffer the same issues as the gradients they use: neural network gradients can be noisy and suffer from saturation problems for typical activation functions such as ReLU and Sigmoid [1].
Perturbation-based methods [19, 28] alter the input and produce explanations based on the change in the confidence of the original prediction; thus, avoid problems related with noise gradients. For instance, RISE [19] utilizes Monte Carlo sampling to generate random masks, which are then used to perturb the input image and generate a respective CNN score. Using the generated scores as weights, the explanation is derived as the weighted combination of the various random masks. Thus, RISE, as most methods in this category, requires many forward passes through the network to generate an explanation, increasing the inference time considerably.
Finally, response-based methods [8, 11, 21, 27] use feature maps or activations of layers in the inference stage to interpret the decision-making process of a neural network. One of the earliest methods in this category, CAM [29], uses the output of the global average pooling layer as weights, and computes the weighted average of the feature maps at the final convolutional layer. CAM requires the existence of such a global average pooling layer, restricting its applicability to only this type of architecture. SISE [21], and later Ada-SISE [27], aggregate feature maps in a cascading manner to produce explanation maps of any DCNN model. Similarly, Poly-CAM [8] upscales feature maps to the dimension of the largest spatial-dimension feature map and combines them in a cascading manner. The above methods require many forward passes to produce an explanation. L-CAM [11] mitigates the above limitation using a learned attention mechanism to compute class-specific explanations in one forward pass. However, it can only harness the salient information of one set of feature maps. TAME also falls into the response-based category and operates in one forward pass, but contrarily to [11], it uses a trainable hierarchical attention module to exploit feature maps from multiple layers and generate explanations of higher quality.
We should also note that the methods of [9, 15] take a somewhat similar approach to ours in that they produce explanations using an attention module and multiple sets of feature maps. However, these methods jointly train the attention model with the CNN to improve the image classification task. In contrast, TAME does not modify the target model, which has been pretrained (and remains frozen); instead, TAME functions as a post-hoc method, exclusively optimizing the attention module in a supervised learning manner to generate visual explanations. Thus, no direct comparisons can be drawn with [9, 15] as they provide explanations for a different (i.e. not the initial pretrained one), concurrently trained classifier.
## III TAME
### _Problem formulation_
Let \(f\) be a trained CNN for which we want to generate explanation maps,
\[f:\mathcal{I}\rightarrow\mathbb{R}^{Classes}, \tag{1}\]
where, \(\mathcal{I}\) is the set of all possible input tensors \(\mathcal{I}=\left\{\boldsymbol{I}\mid\boldsymbol{I}:\boldsymbol{C}\times\boldsymbol{W}\times\boldsymbol{H}\rightarrow\mathbb{R}\right\}\), \(\boldsymbol{C}=\left\{1,\ldots,C\right\}\), \(\boldsymbol{W}=\left\{1,\ldots,W\right\}\), \(\boldsymbol{H}=\left\{1,\ldots,H\right\}\), \(C,W,H\in\mathbb{R}\) are the input tensor dimensions [19, 21], and \(Classes\) is the number of classes that \(f\) has been trained to recognize. E.g., for RGB images, \(C=3\), and the elements of a tensor instance are the image pixel values. Moreover, let \(\mathcal{L}_{i}:\boldsymbol{C_{i}}\times\boldsymbol{W_{i}}\times\boldsymbol{H_{i}}\rightarrow\mathbb{R}\) be the feature map set corresponding to the \(i\)th layer of the CNN, where, \(C_{i},W_{i},H_{i}\) are the respective channel, width and height dimensions. We define a feature map set \(\left\{\mathcal{L}\right\}^{s}\), where \(s\) is the set of layers for which we want to extract feature maps, i.e., \(\left\{\mathcal{L}\right\}^{s}=\left\{\mathcal{L}_{i}\mid i\in s\right\}\).
Assume an attention module defined as in the following,
\[\text{AM}:\left\{\mathcal{L}\right\}^{s}\rightarrow\boldsymbol{E}, \tag{2}\]
where, the tensor \(\boldsymbol{E}\) at the output of the attention module is the generated explanation map, \(\boldsymbol{E}:\textbf{Classes}\times\boldsymbol{W_{e}}\times\boldsymbol{H_{e}}\rightarrow\left\{x\in\mathbb{R}\mid 0<x<1\right\}\), \(W_{e}=\max\left\{W\right\}^{s}\) and \(H_{e}=\max\left\{H\right\}^{s}\). Thus, explanation maps are class discriminative,
i.e., each slice of \(\mathbf{E}\) along its first dimension corresponds to one of the classes that \(f\) has learned; moreover, the size of the spatial dimensions of these "class-specific" slices equal to the largest spatial dimensions in the set of feature maps.
Given the above formulation, the goal is to find an attention module architecture that can combine all the salient information contained in \(\left\{\mathcal{L}\right\}^{s}\), and effectively train it.
### _Architecture_
We propose the attention module architecture depicted in Fig. 2. In this architecture, there exists a separate feature branch for each feature map set that is included in \(\left\{\mathcal{L}\right\}^{s}\) and one fusion branch. Each feature branch takes as input a single feature map set \(\mathcal{L}_{i}\) and outputs an attention map set \(\mathcal{A}_{i}\),
\[\text{FB}:\mathcal{L}_{i}\rightarrow\mathcal{A}_{i}, \tag{3}\]
where, \(\mathcal{A}_{i}\) has the same channel and spatial dimensions as \(\mathcal{L}_{i}\) and the final explanation map, respectively, i.e., \(\mathcal{A}_{i}:\mathbf{C}_{i}\times\mathbf{W}_{\mathbf{e}}\times\mathbf{H}_{\mathbf{e}}\rightarrow\mathbb{R}\). The resulting attention maps are concatenated into a single attention map set \(\left\{\mathcal{A}\right\}^{s}\), and forwarded into the fusion branch to generate the explanation map,
\[\text{FS}:\left\{\mathcal{A}\right\}^{s}\rightarrow\mathbf{E}. \tag{4}\]
The two branch types consist of different network components, as described in the following:
_Feature branch:_ Each feature branch is a neural network that prepares the feature maps for the fusion branch. It consists of a \(1\times 1\) convolution layer with the same number of input and output channels, a batch normalization layer, a skip connection, a ReLU activation, and a bilinear interpolation that upscales the feature map to match the final explanation map's dimensions (the ablation study presented in Section IV assesses the importance of each part of the feature branch).
_Fusion branch:_ It consists of a \(1\times 1\) convolutional layer that brings the number of inputted channels to the number of classes that the CNN has been trained to recognize. Subsequently, a sigmoid activation function, \(S(x)=\frac{1}{1+e^{-x}}\), is used to scale the attention map values to the range \((0,1)\).
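A compact PyTorch sketch of the two branch types follows; the channel list, the upscaling target \((W_{e},H_{e})\) and the 1000-class fusion output are illustrative assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureBranch(nn.Module):
    """1x1 conv + batch norm + skip connection + ReLU + bilinear upscaling."""
    def __init__(self, channels, out_size):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.out_size = out_size  # (We, He): largest spatial dims among feature maps

    def forward(self, feats):
        attn = F.relu(self.bn(self.conv(feats)) + feats)          # skip connection
        return F.interpolate(attn, size=self.out_size,
                             mode="bilinear", align_corners=False)

class AttentionModule(nn.Module):
    """Concatenates per-layer attention maps and fuses them into explanation maps."""
    def __init__(self, channel_list, out_size, num_classes=1000):
        super().__init__()
        self.branches = nn.ModuleList(
            [FeatureBranch(c, out_size) for c in channel_list])
        self.fuse = nn.Conv2d(sum(channel_list), num_classes, kernel_size=1)

    def forward(self, feature_maps):               # list of (B, Ci, Wi, Hi) tensors
        attn = [b(f) for b, f in zip(self.branches, feature_maps)]
        return torch.sigmoid(self.fuse(torch.cat(attn, dim=1)))  # (B, classes, We, He)
```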
### _Training_
The training procedure is shown in Fig. 3. An image is inputted to the CNN model, and the derived feature maps are forwarded to the attention module for generating the respective explanation maps and the model truth label. A model truth label is the CNN model's prediction of the input image's class, which may be different from the ground truth label. A single channel containing a class-discriminative _explanation_ is selected from the explanation map using the model truth label; this is used as the explanation of the input image with respect to the model truth class. The explanation is then upscaled to the dimensions of the input image using bilinear interpolation, and is piece-wise multiplied with the input image. The resulting masked image is then fed back into the CNN to generate logits. The logits, the original explanation maps, and the model truth labels are then used to compute the loss and, through backpropagation, update the weights of the attention module, effectively training it. As already mentioned, the weights of the original CNN remain fixed to their original values for the whole training procedure.

Fig. 3: TAME’s training method. Var: Variation loss, Area: Area loss, CE: Cross-entropy loss. The explanation of an input image is first derived; it is then upscaled and piece-wise multiplied with the corresponding input image. Subsequently, the masked image does a second forward pass through the CNN to generate logits, which are used by the loss function to compute gradients and update the attention module’s weights.

Fig. 2: TAME’s attention module: Feature branches process feature maps to provide attention maps, which are concatenated and processed by the fusion branch (shown at the bottom of the attention module) to derive explanation maps.
The loss function used for training the proposed attention module is the weighted sum of three individual loss functions,
\[L(\mathbf{\Psi},\text{logits},\text{labels})=\lambda_{1}CE(\text{logits},\text{labels})+\lambda_{2}\text{Area}(\mathbf{\Psi})+\lambda_{3}\text{Var}(\mathbf{\Psi}), \tag{5}\]
where, \(\mathbf{\Psi}\) is the slice of the explanation map \(\mathbf{E}\) corresponding to the model truth class of the input image, \(CE()\), Area(), Var() are the cross-entropy, area and variation loss, respectively, and \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) are the corresponding regularization parameters. The cross-entropy loss uses the logits generated from the CNN with the masked input image and the model truth label to compute a loss value. This term trains the attention module to focus on salient parts of the image. The variation loss is the sum of the squares of the partial derivatives of the explanation \(\mathbf{\Psi}\) in the x and y directions. This term penalizes fragmentation in the generated heatmaps. For the partial derivatives, we use the forward difference approximation. To this end, in the x direction we have \(\frac{\partial\mathbf{\Psi}[x_{m},y_{m}]}{\partial x}\approx\mathbf{\Psi}[x_{m}+1,y_{m}]-\mathbf{\Psi}[x_{m},y_{m}]\). Thus, using the forward difference approximation the variation loss is defined as,
\[\text{Var}(\mathbf{\Psi})=\sum_{x,y}\left[\left(\frac{\partial\mathbf{\Psi}[x,y]}{\partial x}\right)^{2}+\left(\frac{\partial\mathbf{\Psi}[x,y]}{\partial y }\right)^{2}\right]. \tag{6}\]
Finally, the area loss is the sum of the elements of the explanation \(\mathbf{\Psi}\) raised element-wise (Hadamard power) to the power \(\lambda_{4}\), i.e.:
\[\text{Area}(\mathbf{\Psi})=\sum_{x,y}\mathbf{\Psi}[x,y]^{\lambda_{4}}. \tag{7}\]
This term forces the attention module to output heatmaps that emphasize small focused regions in the input image instead of arbitrarily large areas.
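Putting (5)-(7) together, a minimal PyTorch sketch of the training objective could look as follows; the tensor shapes and the batch-mean reduction are our assumptions:

```python
import torch
import torch.nn.functional as F

def tame_loss(masked_logits, labels, psi,
              l1=1.5, l2=2.0, l3=0.01, l4=0.3):
    """Weighted sum of cross-entropy, area and variation losses, cf. (5)-(7).

    masked_logits: (B, classes) logits for the masked images.
    labels:        (B,) model truth labels.
    psi:           (B, We, He) model-truth explanation slices, values in (0, 1).
    """
    ce = F.cross_entropy(masked_logits, labels)
    area = (psi ** l4).sum(dim=(1, 2)).mean()                  # Eq. (7)
    dx = psi[:, 1:, :] - psi[:, :-1, :]                        # forward differences
    dy = psi[:, :, 1:] - psi[:, :, :-1]
    var = (dx.pow(2).sum(dim=(1, 2)) + dy.pow(2).sum(dim=(1, 2))).mean()  # Eq. (6)
    return l1 * ce + l2 * area + l3 * var
```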
### _Inference_
During inference, the final sigmoid activation function in the attention module (Fig. 2) is replaced with a min-max normalization operator, \(m(x)=\frac{x-\min(\mathbf{\Psi})}{\max(\mathbf{\Psi})-\min(\mathbf{\Psi})}\); the \(\min()\) and \(\max()\) operators return the smallest and largest element of the input tensor, respectively. This is done for consistency with other literature works, such as [5, 22], on how the final explanation maps are scaled in order to be evaluated. The test image is then forward-passed through the CNN, producing explanation maps, which are then upscaled to the input image size. The derived model truth label can then be used to provide an explanation concerning the decision of the classifier.
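In code, this swap amounts to a two-line normalization (a sketch; adding a small epsilon to the denominator may be needed for degenerate constant-valued maps):

```python
def minmax(psi):
    # Used in place of the sigmoid at inference; rescales psi to [0, 1].
    return (psi - psi.min()) / (psi.max() - psi.min())
```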
## IV Experiments
### _Datasets and CNNs_
We evaluate TAME on two popular CNNs pretrained on ImageNet: VGG-16 [23] and ResNet-50 [13]. We choose these two models to test the generality of our method because there are significant differences in the VGG and ResNet architectures. We obtain these pretrained networks using the torchvision.models library.
We train the attention module of our method with the ImageNet ILSVRC 2012 dataset [7]. This dataset contains 1000 classes, with 1.3 million and 50k images for training and evaluation, respectively. Due to the prohibitively high cost of executing the literature's perturbation-based approaches that we use in the experimental comparisons, we use only 2000 randomly selected testing images for testing (the same as in [11] to allow a fair comparison) and a different 2000 randomly selected images as a validation set.
### _Evaluation measures_
In the experimental evaluation, two frequently used evaluation measures, Increase in Confidence (IC) and Average Drop (AD) [5], are utilized,
\[\text{AD}(v)=\sum_{i=1}^{\Upsilon}\frac{\max(0,f(\mathbf{I}_{i})-f(\mathbf{I}_{i}\odot \phi_{v}(\mathbf{\Psi}_{i})))}{\Upsilon f(\mathbf{I}_{i})}100, \tag{8}\]
\[\text{IC}(v)=\sum_{i=1}^{\Upsilon}\frac{\text{sign}(f(\mathbf{I}_{i}\odot\phi_{v}( \mathbf{\Psi}_{i}))>f(\mathbf{I}_{i}))}{\Upsilon}100, \tag{9}\]
where, \(\phi_{v}()\) is a threshold function to select the \(v\%\) higher-valued pixels of the explanation map, \(\text{sign}()\) returns 1 when the input condition is satisfied and 0 otherwise, \(\Upsilon\) is the number of test images, \(\mathbf{I}_{i}\) is the \(i\)th test image and \(\mathbf{\Psi}_{i}\) is the corresponding explanation produced by TAME or any other method under evaluation. Intuitively, AD measures how much, on average, the produced explanation maps, when used to mask the input images, reduce the confidence of the model. In contrast, IC measures how often the explanation masks, when used to mask the input images, increase the confidence of the model. We threshold the explanation maps to test how well the pixels of the explanation map are ordered based on importance. Thus, using a smaller threshold results in a much more challenging evaluation setup.
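A sketch of both measures (and of one plausible reading of the thresholding function \(\phi_{v}\), which we take to keep the values of the top-\(v\%\) pixels and zero out the rest):

```python
import torch

def phi_v(psi, v):
    """Keep the v% highest-valued pixels of an explanation map (one reading)."""
    k = max(1, int(psi.numel() * v / 100))
    thresh = psi.flatten().topk(k).values.min()
    return psi * (psi >= thresh).float()

def ad_ic(conf_orig, conf_masked):
    """Average Drop and Increase in Confidence over N images, cf. (8)-(9).

    conf_orig, conf_masked: (N,) model-truth-class confidences on the original
    and masked images, respectively.
    """
    ad = (torch.clamp(conf_orig - conf_masked, min=0) / conf_orig).mean() * 100
    ic = (conf_masked > conf_orig).float().mean() * 100
    return ad.item(), ic.item()
```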
### _Experimental setup_
TAME is applied to VGG-16 using feature maps from three different layers. The VGG-16 consists of five blocks of convolutions separated by \(2\times 2\) max-pooling operations, as shown in Fig. 4. We choose one layer from each of the last three blocks, namely the feature maps output by the max-pooling layers of each block. We have also experimented on the feature maps output by the last convolution layer of each block. On the other hand, ResNet-50 consists of five stages. In the experimental evaluation, we use the feature maps from the final three stages.
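As a sketch of the layer selection, the chosen VGG-16 feature maps can be obtained with torchvision's feature-extraction utility; the `features.16/23/30` node names (the last three max-pooling layers) and the dummy input are our assumptions:

```python
import torch
from torchvision.models import vgg16
from torchvision.models.feature_extraction import create_feature_extractor

model = vgg16(weights="IMAGENET1K_V1").eval()
# Max-pooling outputs of the last three VGG-16 blocks (assumed node indices).
extractor = create_feature_extractor(
    model, return_nodes=["features.16", "features.23", "features.30"])

images = torch.randn(1, 3, 224, 224)     # placeholder input batch
feature_maps = extractor(images)         # {node name: (B, Ci, Wi, Hi) tensor}
for name, fm in feature_maps.items():
    print(name, tuple(fm.shape))         # spatial dims 28, 14 and 7 for 224x224 input
```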
TAME is trained using the loss function defined in (5) with the SGD (Stochastic Gradient Descent) algorithm. The biggest batch size that can fit in the graphics card's memory is used, as recommended in [25]. The learning rate is varied using the OneCycleLR policy described in [26]. The maximum learning rate used by the OneCycleLR policy is chosen using the LR finder test defined in [24]. The hyperparameters of the loss
function ((5), (7)) are empirically chosen using the validation dataset, as: \(\lambda_{1}=1.5,\lambda_{2}=2,\lambda_{3}=0.01,\lambda_{4}=0.3\).
We train the attention module for eight epochs in total and select the epoch for which the attention module achieved the best IC\((15\%)\) and AD\((15\%)\) in the validation set. That is, in this model selection procedure we opt for the measures at the \(15\%\) threshold because they are the most challenging measures to improve upon and provide more focused explanation masks.
During training, each image is transformed in the same way as with the original CNN, i.e., its smaller spatial dimension is resized to 256 pixels, random-cropped to dimensions \(W=H=224\), and normalized to zero mean and unit variance. The same is done during testing, except that center-cropping is used. The feature maps are extracted from the CNN using torchvision.models.feature_extraction library.
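For reference, a sketch of the corresponding preprocessing pipelines; the normalization statistics shown are the usual ImageNet means and standard deviations, which is an assumption on our part:

```python
from torchvision import transforms

_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet stats
                                  std=[0.229, 0.224, 0.225])

train_tf = transforms.Compose([
    transforms.Resize(256),          # smaller spatial dimension -> 256 pixels
    transforms.RandomCrop(224),      # W = H = 224
    transforms.ToTensor(),
    _normalize,
])
test_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),      # center-cropping at test time
    transforms.ToTensor(),
    _normalize,
])
```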
### _Quantitative Evaluation_
The proposed method is compared against the top-performing approaches in the visual explanation domain for which source code is publicly available, i.e. Grad-CAM [22], Grad-CAM++ [5], Score-CAM [28], RISE [19] and L-CAM [11]. The performance is measured using AD\((v)\) and IC\((v)\) at three different thresholds \(v\) of increasing difficulty, i.e., \(v=100\%,50\%\) and \(15\%\), similarly to the evaluation protocol of [11]. An ablation study is also conducted to assess the importance of the different architecture components for VGG-16 and ResNet-50, as well as to showcase the effect of different layer selections in the VGG-16 model.
#### Iv-D1 Comparison with the State-Of-The-Art
In Table I we highlight with bold letters the best result and underline the second best result for each measure, separately for each base model. We can see that TAME outperforms the gradient-based methods, and is competitive to the perturbation-based methods, obtaining the best results for the more demanding \(15\%\) measures while requiring only one forward pass.
#### Iv-D2 Ablation Study
In Table II we highlight with bold letters the best results and underline the second best result for each measure in each model and layer selection. For the VGG-16 model, inspired from similar works in the literature suggesting that the last layers of the network provide more salient features [15], we report two sets of experiments, one that uses features extracted from the three last max-pooling layers and one where features are extracted from the layers exactly before the last three max-pooling layers (Fig. 4). There is a difference in the spatial dimensions of the explanation maps generated using the former or the latter layers for feature extraction, i.e., \(28\times 28\) versus \(56\times 56\), since the dimension of the explanation maps obtained by TAME is dictated by the largest feature map set (as explained in Section III). For the ResNet-50 model, we extract features from the outputs of the final three stages, resulting to an explanation map of \(28\times 28\) spatial dimensions. We examine the following variants of the proposed architecture:
_No skip connection_: It has been shown that the skip connection promotes a smoother loss landscape [18], thus contributing to training very deep neural networks. Even for shallower neural networks, such as the proposed attention module, we can benefit from using a skip connection. We see that by omitting the skip connection, we get worse results in ResNet-50. Similarly, for both baseline models we report worse performance for the harder \(50\%\) and \(15\%\) measures.
_No skip + No batch norm_: Batch normalization is used in CNNs for speeding up training and combating internal covariate shift [14]. Compared to the proposed architecture, we see that this variant generally performs better in the \(100\%\) measures, but this does not hold for the other measures. We compare the masks produced by this variant in Fig. 5.
_Sigmoid in feature branch_: In this variant we replace the ReLU function with the sigmoid function, which squeezes the input from \((-\infty,\infty)\) to the output \((0,1)\). It is well known that the sigmoid function in deeper neural networks causes the vanishing gradient problem, making it more difficult to train the early layers of the CNN. We see again that the proposed architecture prevails for the more challenging \(15\%\) measures.
_Two layers_ and _One layer_: In this case, the proposed attention module architecture is employed with fewer feature maps. The results when using just one layer, i.e., omitting the two earlier layers in the CNN pipeline (Fig. 4), are very similar to the L-CAM-Img method (as shown in Table I), which also uses just one feature map set. All measures are improved when utilizing a second feature map set, i.e., excluding only the earliest layer in the CNN pipeline; however, the case is not as clear when going from two to three feature map sets, which are used in the proposed architecture. These mixed results could be attributed to the extra noise of feature maps taken earlier in a CNN pipeline.

Fig. 4: The layers from which feature map sets are extracted on VGG-16. We denote by “Convolutional Layers” the three layers before the last three max-pooling layers. In the case of VGG-16, the layer before a max-pooling layer is the ReLU activation function. We use the same layer naming as the library torchvision.models.feature_extraction.
We note that by omitting both the skip connection and the batch normalization in the feature branch architecture, we obtain generally better results in the case of the VGG-16 model, but this is not the case for the same architecture applied to the ResNet-50 model. In addition, all variant architectures struggle with the more difficult \(15\%\) measures when compared to the proposed architecture. Although the performance of each variant varies between models, the proposed architecture generalizes best across different models. Thus, our goal of finding an architecture that is effective across radically different models is achieved.
### _Qualitative Analysis_
An extensive qualitative analysis is also performed using the ILSVRC 2012 ImageNet dataset in order to gain insight into the proposed approach and appreciate its usefulness in real-world applications, e.g., understanding why an image was correctly classified or misclassified. The examples used in this study are depicted in Figs. 5, 6 and 7.
Fig. 5: Qualitative comparison between the proposed attention module and the ’no skip + no batch norm’ variant, applied to VGG-16. We observe that for the ‘no skip + no batch norm’ architecture, the produced explanation map is more spread out, showing that even if it performs well on the \(100\%\) measures, it fails to precisely identify the salient regions in the image.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
Model & Feature Extraction & Architecture Variant & AD\((100\%)\) & IC\((100\%)\) & AD\((50\%)\) & IC\((50\%)\) & AD\((15\%)\) & IC\((15\%)\) \\ \hline
\multirow{12}{*}{VGG-16} & \multirow{6}{*}{Max-pooling layers} & Proposed Architecture & 9.33 & 50 & 36.5 & 22.45 & 73.29 & 5.6 \\
 & & No skip connection & 10.09 & 45.25 & 36.44 & 20.65 & 74.85 & 5.15 \\
 & & No skip + No batch norm & **5.92** & **57.9** & 34.49 & **24.2** & 74.58 & 5.15 \\
 & & Sigmoid in feature branch & 7.22 & 55.65 & 38.4 & 21.6 & 79 & 4.85 \\
 & & Two layers & 10.72 & 45.45 & **34.48** & 23.05 & **71.94** & 5.75 \\
 & & One layer & 12.1 & 42.1 & 35.81 & 20.8 & 74.19 & 4.85 \\ \cline{2-9}
 & \multirow{6}{*}{Convolutional layers} & Proposed Architecture & 9.07 & 51.1 & 40.72 & **20.9** & 77.05 & 4.8 \\
 & & No skip connection & **6.22** & 58.85 & 41.47 & **20.9** & 79.12 & 3.8 \\
 & & No skip + No batch norm & 6.62 & 56.6 & **40.48** & 20.75 & 77.84 & **4.95** \\
 & & Sigmoid in feature branch & 6.8 & **60** & 42.17 & 19.75 & 80.73 & 4.1 \\
 & & Two layers & 10.99 & 45.85 & 40.89 & 19.55 & **76.66** & 4.8 \\
 & & One layer & 13.09 & 39.65 & 42.3 & 17.7 & 78.02 & 3.8 \\ \hline
\multirow{6}{*}{ResNet-50} & \multirow{6}{*}{Stage Outputs} & Proposed Architecture & 7.81 & 54 & 27.88 & **27.5** & 78.58 & **4.9** \\
 & & No skip connection & **5.7** & **62.65** & 46.58 & 18.25 & 89.32 & 2.3 \\
 & & No skip + No batch norm & 9.29 & 50.25 & 29.43 & 25.95 & 79.81 & 3.95 \\
 & & Sigmoid in feature branch & 9.11 & 53.3 & 45.68 & 18.1 & 86.95 & 3.15 \\
 & & Two layers & 9.48 & 47.05 & **27.83** & 25 & **77.95** & 4.25 \\
 & & One layer & 11.32 & 43.45 & 29.85 & 24.25 & 79.59 & 3.55 \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Ablation study of TAME
Fig. 5 compares TAME-generated explanation maps with explanations generated by the "No skip + No batch norm" architecture examined in Section IV-D2. The improved ability of TAME to identify the salient image regions highlights the importance of evaluating the method using the AD and IC measures on multiple thresholds (Table II), and particularly the significance of the \(15\%\) measures over the \(100\%\) and \(50\%\) ones in determining the quality of generated explanations.
The differences between explanations produced using TAME on ResNet-50 and VGG-16 are examined in Fig. 6. We observe that explanations produced for the ResNet-50 model are generally more activated, and, in general, explanations produced for the two different CNN types attend different areas of the image. This suggests that ResNet-50 and VGG-16 classify images in fundamentally different ways, focusing on different features of an input image to make their predictions.
In Fig. 7, we provide class-specific explanation masks referring to the ground truth class but also to an erroneous but closely related class, for both ResNet-50 and VGG-16 models. The first image of Fig. 6(a), depicts a spoonbill, a bird similar to the flamingo. Two significant differences between the spoonbill and the flamingo are the characteristic bill and the darker pink stripe on the wing of the spoonbill. We can see in the explanation maps of both models, that when choosing the class flamingo, there is no significance attributed to the bill, but, on the other hand, when the spoonbill class is chosen, the bill area is gaining significant attention. By comparing the explanation maps for adversarial classes, we can gain insight into important features which characterize a specific object against similar ones, and possibly gain new insight _from_ the classifier. The second image in Fig. 6(a) is a similar case.
The examples of Fig. 6(b) demonstrate the potential of the explanation maps to be used for explaining multiple different classes contained in a single image, i.e., the "English foxhound" and "soccer ball" image, and the "head cabbage" and "butternut squash" image.
Finally, in Fig. 6(c) we provide two cases of images that have been miscategorized, and use the explanations to understand what has gone wrong. The first image of Fig. 6(c) belongs to the "dingo" class (273) but is evidently misclassified as "timber wolf" from both CNN models. Using the explanations, we can identify important features on the image for each class and CNN model. The second image depicts a lighthouse. VGG-16 misclassifies this image as a "sundial". Again, using the explanations generated by TAME we can understand which features led the model to produce a wrong decision. For instance, in this case, we see that for both models the "sundial" explanations focus on the lighthouse roof, which might resemble a sundial, explaining the erroneous classification decision of VGG-16.
## V Conclusions
We proposed TAME, a novel method for generating visual explanations for various CNNs. This is accomplished by training a hierarchical attention module to extract information from feature map sets of multiple layers. Experimental results verified that TAME outperforms gradient-based methods and competes with perturbation-based ones while, in contrast to them, requiring only a single forward pass to generate explanations. Further research is needed to discover the limits of the proposed approach, e.g., generalizing it to non-CNN architectures.
|
2304.06879 | Performative Prediction with Neural Networks | Performative prediction is a framework for learning models that influence the
data they intend to predict. We focus on finding classifiers that are
performatively stable, i.e. optimal for the data distribution they induce.
Standard convergence results for finding a performatively stable classifier
with the method of repeated risk minimization assume that the data distribution
is Lipschitz continuous to the model's parameters. Under this assumption, the
loss must be strongly convex and smooth in these parameters; otherwise, the
method will diverge for some problems. In this work, we instead assume that the
data distribution is Lipschitz continuous with respect to the model's
predictions, a more natural assumption for performative systems. As a result,
we are able to significantly relax the assumptions on the loss function. In
particular, we do not need to assume convexity with respect to the model's
parameters. As an illustration, we introduce a resampling procedure that models
realistic distribution shifts and show that it satisfies our assumptions. We
support our theory by showing that one can learn performatively stable
classifiers with neural networks making predictions about real data that shift
according to our proposed procedure. | Mehrnaz Mofakhami, Ioannis Mitliagkas, Gauthier Gidel | 2023-04-14T01:12:48Z | http://arxiv.org/abs/2304.06879v2 | # Performative Prediction with Neural Networks
###### Abstract
Performative prediction is a framework for learning models that influence the data they intend to predict. We focus on finding classifiers that are _performatively stable_, i.e. optimal for the data distribution they induce. Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous to the _model's parameters_. Under this assumption, the loss _must_ be strongly convex and smooth in these parameters; otherwise, the method will diverge for some problems. In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the _model's predictions_, a more natural assumption for performative systems. As a result, we are able to significantly relax the assumptions on the loss function. In particular, we do not need to assume convexity with respect to the model's parameters. As an illustration, we introduce a resampling procedure that models realistic distribution shifts and show that it satisfies our assumptions. We support our theory by showing that one can learn performatively stable classifiers with neural networks making predictions about real data that shift according to our proposed procedure.
## 1 Introduction
One of the main challenges in many of the decision-making tasks is that the data distribution changes over time. This concept is known as distribution shift or concept drift (Gama et al., 2014; Tsymbal, 2004; Quionero-Candela et al., 2009). With a changing data distribution, the performance of a supervised learning classifier may degrade since it implicitly assumes a static relationship between input and output variables (Fang et al., 2020). _Performative prediction_ is a framework introduced by Perdomo et al. (2020) to deal with this problem when the distribution changes as a consequence of model deployment, usually through actions taken based on the model's predictions. For example, election predictions affect campaign activities and, in turn, influence the final election results. This performative behavior arises naturally in many problems of economics, social sciences, and machine learning, such as in loan granting, predictive policing, and recommender systems (Perdomo et al., 2020; Krauth et al., 2022; Ensign et al., 2018).
So far, most works in this area assume strong convexity of the risk function \(\theta\mapsto\ell(z;\theta)\), which takes as input the model's parameters \(\theta\), and a data point \(z\)(Perdomo et al., 2020; Mendler-Dunner et al., 2020; Brown et al., 2022). For example, Perdomo et al. (2020) show that by assuming strong convexity and smoothness of the loss function along with some regularity assumptions on the data generation process, repeated retraining converges to a performative stable classifier, which is a model that is optimal for the distribution it induces. However, this strong convexity assumption does not hold for most modern ML models, e.g. neural networks. From a different perspective, given a data point \((x,y)\), the risk function can be expressed as a mapping from the prediction \(x\mapsto f_{\theta}(x)\) to a loss between the prediction \(\hat{y}:=f_{\theta}(x)\) and the target \(y\), in which case convexity almost always holds. Consider, for example, the Squared Error loss function in a binary classification problem where \(\ell(f_{\theta}(x),y)=\frac{1}{2}(f_{\theta}(x)-y)^{2}\) for a data point \(z=(x,y)\); it is convex with respect to the model's predictions \(f_{\theta}(x)\), but not necessarily with respect to the model's parameters \(\theta\).
With this in mind, we propose a different perspective and formulation that shifts attention from the space of parameters to the space of predictions. More precisely, we require distributions to be functions of the model's prediction function instead of its parameters. The rationale behind this is that in many scenarios with performative effects of a classifier in the loop, the model's predictions are the quantities of interest rather than its parameters. Actually, the framework assumes the data distribution changes as a result of model deployment, and at the time of deployment, it is the final predictions that matter rather than the parameters which led to those predictions.
Within our formulation, we show that by having a stronger assumption on the distribution map than the original framework, we can relax the convexity condition on the loss function and prove the existence and uniqueness of a performative stable classifier under repeated risk minimization with significantly weaker assumptions with regard to the regularity of the loss. Informally, these assumptions include strong convexity of the loss with respect to the predictions, the boundedness of the derivative of the loss function, and the Lipschitzness of the distribution map with respect to the \(\chi^{2}\) divergence. This more general set of assumptions on the loss function lets us theoretically analyze the performative effects of neural networks with non-convex loss functions; we believe this is a significant step toward bridging the gap between the theoretical performative prediction framework and realistic settings. Our setting and the main theoretical results will be explained in Section 2.
### Background
Before stating our main theoretical contribution, we first recall the key concepts of the performative prediction framework. The main difference between classic supervised learning and the performative prediction framework is that the latter considers the data distribution to be model-dependent, i.e. it assumes that the distribution map directly depends on the model's parameters \(\theta\) and is denoted by \(\theta\mapsto\mathcal{D}(\theta)\). The distribution map is said to satisfy a notion of Lipschitz continuity called _\(\epsilon\)-sensitivity_ if for any \(\theta\) and \(\theta^{\prime}\), \(\mathcal{W}_{1}\left(\mathcal{D}(\theta),\mathcal{D}(\theta^{\prime})\right) \leq\epsilon\|\theta-\theta^{\prime}\|_{2}\), where \(\mathcal{W}_{1}\) denotes the Wasserstein-1 distance. The performance of a model with parameters \(\theta\) is measured by its _performative risk_ under the loss function \(\ell\) which is stated as a function of a data point \(z\) and \(\theta\):
\[PR(\theta)\stackrel{{\text{def}}}{{=}}\operatorname*{\mathbb{E }}_{z\sim\mathcal{D}(\theta)}\ell(z;\theta).\]
A classifier with parameters \(\theta_{\text{PS}}\) is _performatively stable_ if it minimizes the risk on the distribution it induces. In other words, it is the fixed point of repeated retraining.
\[\theta_{\text{PS}}=\arg\min_{\theta\in\Theta}\operatorname*{\mathbb{E}}_{z \sim\mathcal{D}(\theta_{\text{PS}})}\ell(z;\theta).\]
Perdomo et al. (2020) demonstrate that with an \(\epsilon\)-sensitive distribution map, \(\gamma\)-strong convexity of \(\ell\) in \(\theta\) and \(\beta\)-smoothness of \(\ell\) in \(\theta\) and \(z\) are sufficient and _necessary_ conditions for repeated risk minimization to converge to a performatively stable classifier if \(\frac{\epsilon\beta}{\gamma}<1\). We show, however, that by slightly changing the assumptions on the distribution map, we can break their negative result regarding the necessary strong convexity and show that one can converge even when the loss is non-convex in \(\theta\).
### Our contributions
Our paper provides sufficient conditions for the convergence of repeated risk minimization to a classifier with unique predictions under performative effects in the absence of convexity to the model's parameters. The key idea in our framework is that the distribution map is no longer a function of the parameters \(\theta\), but a function of model's predictions, denoted by \(\mathcal{D}(f_{\theta})\) where \(f_{\theta}\) is in \(\mathcal{F}\), the set of parameterized functions by \(\theta\in\Theta\). We also express the loss \(\ell\) as a function of the prediction \(f_{\theta}(x)\) and the target \(y\), where both can be multi-dimensional. Following is the informal statement of our main theorem.
**Theorem 1**.: _(Informal) If the loss \(\ell(f_{\theta}(x),y)\) is strongly convex in \(f_{\theta}(x)\) with a bounded gradient norm, and the distribution map \(f_{\theta}\mapsto\mathcal{D}(f_{\theta})\) is sufficiently Lipschitz with respect to the \(\chi^{2}\) divergence and satisfies a bounded norm ratio condition, then repeated risk minimization converges linearly to a stable classifier with unique predictions._
We will state this theorem formally in section 2. The critical assumption we make on the distribution map is Lipschitz continuity, which captures the idea that a small change in the model's predictions cannot lead to a large change in the induced data distribution, as measured by the \(\chi^{2}\) divergence. This is more restrictive than the Lipschitz continuity assumption of Perdomo et al. (2020) with \(\mathcal{W}_{1}\), since for the \(\chi^{2}\) divergence to be finite, distributions should have the same support. However, we show that this still holds in realistic settings, and we discuss that this stronger assumption on the distribution map is a price we have to pay to relax the assumptions on the loss function significantly and have convergence guarantees for neural networks with non-convex loss functions.
In section 4, we demonstrate our main results empirically with a _strategic classification_ task, which has been used as a benchmark for performative prediction (Perdomo et al., 2020; Miller et al., 2021; Brown et al., 2022). Strategic classification involves an institution that deploys a classifier and agents who strategically manipulate their features to alter the classifier's predictions to get better outcomes. We propose a resampling procedure called _Resample-if-Rejected (RIR)_ in Section 3 to model the population's strategic responses and show that it results in a distribution map that satisfies the conditions of Theorem 1.
Within this process, a sample \(x\) is drawn from the base distribution and is rejected with some probability dependent on \(f_{\theta}(x)\) and accepted otherwise; in case of rejection, another sample from the base distribution will be drawn.
A real-life example that this procedure may be able to model is regarding posting content on social media. Actually, social media use many ML models to automatically regulate the content posted by the users. Consequently, some users' posts may be rejected by the automatic regulation because their content was considered to violate the platform's community guidelines. In some situations, the authors might consider this rejection unfair and may tweak some parts of the post in order to get accepted. In our experiments, we
model this resubmission by resampling some strategic features (i.e., features that do not drastically affect the content and that are easily modifiable) of the post.
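As a concrete illustration, below is a minimal sketch of one draw from the RIR-induced distribution; using \(f(x)\) itself as the rejection probability and accepting the second draw unconditionally are our assumptions (the latter keeps the induced density's support equal to that of the base distribution):

```python
import numpy as np

def rir_sample(f, sample_base, rng):
    """One draw from the RIR-induced distribution D(f)."""
    x = sample_base()                # candidate from the base distribution D
    if rng.random() < f(x):          # rejected with probability f(x) (assumption)
        x = sample_base()            # on rejection, draw a fresh sample from D
    return x

# Toy usage: standard normal base distribution, classifier rejecting large x.
rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + np.exp(-x))          # assumed prediction in (0, 1)
draws = np.array([rir_sample(f, rng.standard_normal, rng) for _ in range(10_000)])
# D(f) down-weights (but never removes) regions where f rejects often.
```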
### Related work
Prior work on performative prediction focused on learning from a data distribution \(\mathcal{D}(\theta)\) that could change with the model's parameter \(\theta\)(Perdomo et al., 2020; Mendler-Dunner et al., 2020; Brown et al., 2022; Drusvyatskiy and Xiao, 2022; Miller et al., 2021; Izzo et al., 2021; Maheshwari et al., 2022; Ray et al., 2022; Li and Wai, 2022; Dong and Ratliff, 2021; Jagadeesan et al., 2022). In this work, we propose to strengthen the standard \(\epsilon\)-sensitivity assumption on the distribution map initially proposed by Perdomo et al. (2020). To a certain extent, we propose a novel \(\epsilon\)-sensitivity assumption for the performative prediction framework that allows us to relax the convexity assumption on the loss function. Such relaxation is essential if we want to consider the practical setting of classifiers parametrized by neural networks.
At a technical level, our analysis is inspired by Perdomo et al. (2020, Theorem 3.5). However, because our quantity of interest is a distance in the function space (see Theorem 2) our proof significantly differs from Perdomo et al. (2020). We require a different notion of \(\epsilon\)-sensitivity and an additional assumption (Assumption 2) in order to control the variation of the functional norm defined in Assumption 1.
Various prior works have focused on finding performatively stable classifiers (Perdomo et al., 2020; Brown et al., 2022; Mendler-Dunner et al., 2020; Li and Wai, 2022), but to the best of our knowledge, none of them analyze the convergence of repeated retraining with loss functions that might be non-convex to the model's parameters.
Exploiting convexity in the model's predictions has previously been explored by Bengio et al. (2005), who noticed that most of the loss functions to train neural networks are convex with respect to the neural network itself. There have been many works trying to leverage this property to show convergence results applied to neural networks in the context of machine learning (Bach, 2017; Chizat and Bach, 2018; Mladenovic et al., 2021). However, none of these results are in the context of performative prediction. Jagadeesan et al. (2022) proposes an algorithm to find classifiers with near-optimal performative risk without assuming convexity. First, their work focuses on a different notion of optimality (namely, performatively optimal points). Second, they focus on regret minimization, while our work is concerned with finding a performatively stable classifier with gradient-based algorithms and having guarantees to make sure we converge to such a stable classifier within a reasonable number of steps.1
Footnote 1: For a \(\delta\)-approximate optimum, Jagadeesan et al. (2022) propose an algorithm that requires \(O(1/\delta^{d})\) repeated minimizations for the last iterate where \(d\) is some notion of dimension. In comparison, in Theorem 2 we require \(O(\log(1/\delta))\) minimizations.
Similarly to our work, 2 assume that the performativity of a model occurs through its predictions and consider the distribution a function of the predictive model. However, this paper has a different focus entirely; they try to find a set of conditions under which the causal effect of predictions becomes identifiable. Additionally, they focus on a subset of performative prediction problems where predictions only influence the target variable and not the features \(X\). Hence, their analysis does not capture strategic classification.
## 2 Framework and Main Results
To propose our main theorem, we first need to redefine some of the existing concepts. As mentioned earlier, we assume \(\mathcal{D}(\cdot)\) to be a mapping from the model's prediction function \(f_{\theta}\) to a distribution \(\mathcal{D}(f_{\theta})\) over instances \(z\), where \(f_{\theta}\) is in \(\mathcal{F}\), the set of functions parameterized by \(\theta\in\Theta\). Each instance \(z\) is a pair of features and label \((x,y)\). With this new formulation, the objective risk function is defined as follows:
**Definition 2.1** (_Performative Risk_).: _Performative risk (PR) is defined as follows:_
\[PR(f_{\theta})\stackrel{{\text{def}}}{{=}}\operatorname*{\mathbb{E }}_{z\sim\mathcal{D}(f_{\theta})}\ell(f_{\theta}(x),y).\]
In this work, we focus on finding a performatively stable classifier, which minimizes the risk on the distribution its prediction function entails:
**Definition 2.2**.: _A classifier with parameters \(\theta_{\text{PS}}\) is performatively stable if:_
\[\theta_{\text{PS}}=\arg\min_{\theta\in\Theta}\operatorname*{\mathbb{E}}_{z\sim \mathcal{D}(f_{\theta_{\text{PS}}})}\ell(f_{\theta}(x),y).\]
Repeated retraining is the algorithm we use to find a stable classifier, which is defined formally as follows:
**Definition 2.3** (_RRM_).: _Repeated Risk Minimization (RRM) refers to the procedure where, starting from an initial \(\theta_{0}\), we perform the following sequence of updates for every \(t\geq 0\):_
\[\theta_{t+1}=G(\theta_{t})\stackrel{{\text{def}}}{{=}}\arg\min_{ \theta\in\Theta}\operatorname*{\mathbb{E}}_{z\sim\mathcal{D}(f_{\theta_{t}})} \ell(f_{\theta}(x),y).\]
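To make the procedure concrete, the following is a minimal sketch of RRM, assuming the caller supplies a sampler for \(\mathcal{D}(f_{\theta})\) and an (approximate) empirical risk minimizer; `sample_dist` and `minimize_risk` are illustrative placeholders rather than prescribed implementations.

```python
import numpy as np

def repeated_risk_minimization(theta0, sample_dist, minimize_risk,
                               n_rounds=50, tol=1e-8):
    """RRM sketch: alternate between deploying the current model (which
    shifts the data distribution) and retraining on the induced data.
    `sample_dist(theta)` draws z ~ D(f_theta); `minimize_risk(data, theta)`
    returns an (approximate) minimizer of the empirical risk on `data`."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_rounds):
        data = sample_dist(theta)                 # z ~ D(f_{theta_t})
        theta_next = minimize_risk(data, theta)   # the map G(theta_t)
        if np.linalg.norm(theta_next - theta) < tol:   # fixed point of G
            return theta_next
        theta = theta_next
    return theta
```

A performatively stable classifier is exactly a fixed point of the map \(G\) iterated above.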
### Assumptions
In order to provide convergence guarantees for repeated retraining, we require regularity assumptions on the distribution map and the loss function. A natural assumption we make on \(\mathcal{D}(.)\), inspired by prior work, is Lipschitz continuity, formally referred to as \(\epsilon\)-_sensitivity_. Intuitively, this assumption captures the idea that if two models with similar prediction functions are deployed, then the induced distributions should also be similar. We refer to the _base distribution_ \(\mathcal{D}\) as the distribution over (features, label) pairs before any classifier deployment.
**Assumption 1**.: _(A1) [\(\epsilon\)-sensitivity w.r.t Pearson \(\chi^{2}\) divergence] Suppose the base distribution \(\mathcal{D}\) has the probability density function (pdf) \(p\) over instances \(z=(x,y)\). The distribution map \(\mathcal{D}(.)\) which maps \(f_{\theta}\) to \(\mathcal{D}(f_{\theta})\) with the pdf \(p_{f_{\theta}}\) is \(\epsilon\)-sensitive w.r.t Pearson \(\chi^{2}\) divergence, i.e., for all \(f_{\theta}\) and \(f_{\theta^{\prime}}\) in \(\mathcal{F}\) the following holds:_
\[\chi^{2}(\mathcal{D}(f_{\theta^{\prime}}),\mathcal{D}(f_{\theta}))\leq \epsilon\|f_{\theta}-f_{\theta^{\prime}}\|^{2},\]
_where \(\|f_{\theta}-f_{\theta^{\prime}}\|^{2}:=\int\|f_{\theta}(x)-f_{\theta^{\prime }}(x)\|^{2}p(x)dx\) and \(\chi^{2}(\mathcal{D}(f_{\theta^{\prime}}),\mathcal{D}(f_{\theta})):=\int \frac{(p_{f_{\theta^{\prime}}}(z)-p_{f_{\theta}}(z))^{2}}{p_{f_{\theta}}(z)}dz\)_
**Assumption 2**.: _(A2) [Bounded norm ratio] The distribution map \(\mathcal{D}(.)\) satisfies the bounded norm ratio property with parameter \(C\geq 1\) if for all \(f_{\theta},f_{\theta^{\prime}},f_{\theta^{*}}\in\mathcal{F}\):_

\[\|f_{\theta}-f_{\theta^{\prime}}\|^{2}\leq C\|f_{\theta}-f_{\theta^{\prime}}\|^{2}_{f_{\theta^{*}}},\]

_where_

\[\|f_{\theta}-f_{\theta^{\prime}}\|^{2}_{f_{\theta^{*}}}=\int\|f_{\theta}(x)-f_{\theta^{\prime}}(x)\|^{2}p_{f_{\theta^{*}}}(x)dx\]

_is a notation for an \(f_{\theta^{*}}\)-dependent norm. In other words, this assumption says that_

\[\mathbb{E}_{p(x)}[\|f_{\theta}(x)-f_{\theta^{\prime}}(x)\|^{2}]\leq C\,\mathbb{E}_{p_{f_{\theta^{*}}}(x)}[\|f_{\theta}(x)-f_{\theta^{\prime}}(x)\|^{2}],\]

_where \(p(x)\) and \(p_{f_{\theta^{*}}}(x)\) are the pdfs of the marginal distribution of \(X\) according to \(\mathcal{D}\) and \(\mathcal{D}(f_{\theta^{*}})\), respectively._
The distribution map satisfies the bounded norm ratio condition if the bounded density ratio property holds, i.e. \(p(x)\leq C\;p_{f_{\theta}}(x)\) for every \(f_{\theta}\in\mathcal{F}\). We will show how the bounded density ratio holds in our example in Section 3.
Our notion of Lipschitz continuity uses the Pearson \(\chi^{2}\) divergence--interchangeably referred to as the \(\chi^{2}\) divergence--to measure the distance between distributions, as opposed to Perdomo et al. (2020), who use the \(\mathcal{W}_{1}\) distance. Using the \(\chi^{2}\) divergence is more restrictive since the distributions must have the same support for the \(\chi^{2}\) divergence between them to be finite.
As stated in Remark 1, \(\epsilon\)-sensitivity with respect to \(\chi^{2}\) implies \(K\sqrt{\epsilon}\)-sensitivity with respect to \(\mathcal{W}_{1}\) for a constant \(K\) that depends on the diameter of the space \(d_{max}\). If \(\frac{d_{max}}{\sqrt{2}}<\sqrt{\epsilon}\), our notion of \(\epsilon\)-sensitivity w.r.t \(\chi^{2}\) is indeed stronger than the corresponding notion of \(\epsilon\)-sensitivity w.r.t \(\mathcal{W}_{1}\). However, in Proposition 1 we show that within our settings, we cannot replace \(\chi^{2}\) with \(\mathcal{W}_{1}\) and still get convergence results.
**Remark 1**.: _For two distributions \(\mathcal{D}(x)\) and \(\mathcal{D}^{\prime}(x)\), the \(\mathcal{W}_{1}\) distance is upper bounded by a coefficient of the square root of \(\chi^{2}\) divergence (Peyre et al., 2019, Figure 8.2):_
\[\mathcal{W}_{1}(\mathcal{D}(x),\mathcal{D}^{\prime}(x))\leq\frac{d_{max}}{ \sqrt{2}}\sqrt{\chi^{2}(\mathcal{D}(x),\mathcal{D}^{\prime}(x))},\]
_where \(\mathcal{X}\) is a metric space with ground distance \(d\) and \(d_{max}=\sup_{(x,x^{\prime})}d(x,x^{\prime})\) is the diameter of \(\mathcal{X}\). If we define \(\epsilon\)-sensitivity of \(\mathcal{D}(.)\) w.r.t \(\mathcal{W}_{1}\) as_
\[\mathcal{W}_{1}(\mathcal{D}(f_{\theta}),\mathcal{D}(f_{\theta^{\prime}}))\leq \epsilon\|f_{\theta}-f_{\theta^{\prime}}\|,\qquad(A1)^{\prime}\]
_then \(\epsilon\)-sensitivity w.r.t \(\chi^{2}\) implies \(\frac{d_{max}}{\sqrt{2}}\sqrt{\epsilon}\)-sensitivity with respect to \(\mathcal{W}_{1}\):_
\[\mathcal{W}_{1}(\mathcal{D}(f_{\theta}),\mathcal{D}(f_{\theta^{\prime}}))\leq \frac{d_{max}}{\sqrt{2}}\sqrt{\epsilon}\|f_{\theta}-f_{\theta^{\prime}}\|.\]
Despite the downsides of our assumptions on the distribution map, these assumptions still hold in some realistic settings, an example of which is a resampling procedure proposed in Section 3. The idea of this distribution shift is that individuals are more likely to change their features (by resampling them) if there is a high chance that they will receive an unfavorable classification outcome, which is quantified by the model's prediction.
While imposing a more restrictive assumption on the distribution map, we significantly relax the assumptions on the loss function. In particular, we no longer need to assume that the loss is convex with respect to the model's parameters, which opens the door to considering deep neural networks as classifiers in our analysis. We still require some mild assumptions on the loss function \(\ell\), which are as follows:
**Assumption 3**.: _(A3) [Strong convexity w.r.t predictions] The loss function \(\ell(f_{\theta}(x),y)\) which takes as inputs the prediction \(f_{\theta}(x)\) and the target \(y\), is \(\gamma\)-strongly convex in \(f_{\theta}(x)\). More precisely, the following inequality holds for every \(f_{\theta},f_{\theta^{\prime}}\in\mathcal{F}\):_
\[\ell(f_{\theta}(x),y) \geq\ell(f_{\theta^{\prime}}(x),y)+\] \[(f_{\theta}(x)-f_{\theta^{\prime}}(x))^{\top}\nabla_{\hat{y}}\ell( f_{\theta^{\prime}}(x),y)+\] \[\frac{\gamma}{2}\|f_{\theta}(x)-f_{\theta^{\prime}}(x)\|^{2},\]
_where \(\nabla_{\hat{y}}\ell(f_{\theta}(x),y)\) is the gradient of the function \(\hat{y}\in\mathbb{R}^{d}\mapsto\ell(\hat{y},y)\) at \(f_{\theta}(x)\)._
**Assumption 4**.: _(A4) [Bounded gradient norm] The loss function \(\ell(f_{\theta}(x),y)\) has bounded gradient norm, i.e., the norm of its gradient with respect to \(f_{\theta}(x)\) is upper bounded with a finite value \(M=\sup_{x,y,\theta}\|\nabla_{\hat{y}}\ell(f_{\theta}(x),y)\|\)._
We can easily see that these two assumptions on \(\ell\) are satisfied by the Squared Error loss: \(\ell(f_{\theta}(x),y)=\frac{1}{2}\|f_{\theta}(x)-y\|^{2}\). This function is \(1\)-strongly convex with a bounded gradient norm of \(\sqrt{d}\) if \(y\) is a one-hot vector in \(\mathbb{R}^{d}\) and \(f_{\theta}(x)\in[0,1]^{d}\) for any \(\theta\). More broadly, when the predictions are bounded, e.g. in \([0,1]^{d}\), then the quantity \(M\) in
Assumption 4 always exists for continuously differentiable loss functions which makes it a very mild assumption.
In Section 2.2 we show that if assumptions \(A1-A4\) are satisfied for a distribution map and a loss function, then RRM converges to a unique stable classifier if \(\frac{\sqrt{C\epsilon}M}{\gamma}\) is less than \(1\). Proposition 1 shows that replacing \(\epsilon\)-sensitivity w.r.t \(\chi^{2}\) (A1) with \(\epsilon\)-sensitivity w.r.t \(\mathcal{W}_{1}\) (A1)\({}^{\prime}\), while keeping the other assumptions, would break this convergence result, in the sense that RRM can oscillate between two models forever. This justifies why we cannot use \(\mathcal{W}_{1}\) within our analysis.
**Proposition 1**.: _Suppose that the loss \(\ell(f_{\theta}(x),y)\) is \(\gamma\)-strongly convex in \(f_{\theta}(x)\), and has a derivative bounded by \(M\). If the distribution map satisfies the bounded norm ratio property with a parameter \(C\), and it is \(\epsilon\)-sensitive w.r.t \(\mathcal{W}_{1}\) (\(A1\))\({}^{\prime}\), RRM may diverge for any value of \(\epsilon\), particularly even if \(\frac{\sqrt{C\epsilon}M}{\gamma}<1\)._
Proof.: Consider a supervised learning problem where a model with parameters \(\theta\) uses the prediction function \(f_{\theta}(x)=\frac{\tanh(\theta)+2}{\epsilon}x\) where \(x\in[0,3\epsilon]\). Take the base distribution on \(X\) as a uniform distribution over this interval.
The loss function is defined as
\[\ell(f_{\theta}(x),y)=\frac{-15\gamma}{4}(f_{\theta}(x)-y)+\frac{\gamma}{2}f_ {\theta}(x)^{2}+\frac{\gamma}{2}y^{2}+\gamma(\frac{15}{4})^{2}.\]
This \(\ell\) is non-negative, \(\gamma\)-strongly convex w.r.t \(f_{\theta}(x)\), and its derivative in \(f_{\theta}(x)\) is bounded.
Let the distribution of \(X\) according to \(\mathcal{D}(f_{\theta})\) be a point mass at \(f_{\theta}(\epsilon^{2})=\epsilon(\tanh(\theta)+2)\) and the distribution of \(Y\) be invariant w.r.t \(f_{\theta}\).
\(\mathcal{D}(f_{\theta})\) is \(\epsilon\)-sensitive w.r.t the Wasserstein-1 distance:
Choose \(f_{\theta}\) and \(f_{\theta^{\prime}}\) arbitrarily. It is easy to see that
\[\mathcal{W}_{1}(\mathcal{D}(f_{\theta}),\mathcal{D}(f_{\theta^{\prime}}))\leq \epsilon|\tanh(\theta)-\tanh(\theta^{\prime})|. \tag{1}\]
\[\|f_{\theta}-f_{\theta^{\prime}}\|^{2} =\int_{0}^{3\epsilon}(f_{\theta}(x)-f_{\theta^{\prime}}(x))^{2}p( x)dx\] \[=\int_{0}^{3\epsilon}\frac{(\tanh(\theta)-\tanh(\theta^{\prime}) )^{2}}{\epsilon^{2}}x^{2}p(x)dx\] \[=\frac{(\tanh(\theta)-\tanh(\theta^{\prime}))^{2}}{\epsilon^{2}} \frac{1}{3\epsilon}\int_{0}^{3\epsilon}x^{2}dx\] \[=\frac{(\tanh(\theta)-\tanh(\theta^{\prime}))^{2}}{3\epsilon^{3}} \frac{(3\epsilon)^{3}}{3}\] \[=3\left(\tanh(\theta)-\tanh(\theta^{\prime})\right)^{2}. \tag{2}\]
As a result,
\[\|f_{\theta}-f_{\theta^{\prime}}\|=\sqrt{3}|\tanh(\theta)-\tanh(\theta^{ \prime})|. \tag{3}\]
Combining (1) and (3) results in the \(\epsilon\)-sensitivity:
\[\mathcal{W}_{1}(\mathcal{D}(f_{\theta}),\mathcal{D}(f_{\theta^{\prime}}))\leq \epsilon\|f_{\theta}-f_{\theta^{\prime}}\|.\]
Also, this distribution map satisfies the bounded norm ratio property with any \(C>3\) since:
\[\|f_{\theta}-f_{\theta^{\prime}}\|_{f_{\theta^{*}}}^{2}=\big{(}(\tanh(\theta)-\tanh(\theta^{\prime}))(\tanh(\theta^{*})+2)\big{)}^{2}>(\tanh(\theta)-\tanh(\theta^{\prime}))^{2}, \tag{4}\]
where we used the fact that \((\tanh(\theta^{*})+2)>1\).
Putting (2) and (4) together, we can write
\[\|f_{\theta}-f_{\theta^{\prime}}\|^{2}\leq C\ \|f_{\theta}-f_{\theta^{\prime}}\|_{f _{\theta^{*}}}^{2}\]
for every \(C>3\).
The update rule of RRM is as follows:
\[\theta_{t+1} =\operatorname*{argmin}_{\phi}\mathbb{E}_{z\sim\mathcal{D}(f_{ \theta_{t}})}[\ell(f_{\phi}(x),y)]\] \[=\operatorname*{argmin}_{\phi}\ell(f_{\phi}(x),y)\bigg{|}_{x= \epsilon(\tanh(\theta_{t})+2)}.\]
Taking the derivative of the loss and setting it to zero results in:
\[(\tanh(\theta_{t+1})+2)(\tanh(\theta_{t})+2)=\frac{15}{4}.\]
So if \(\theta_{t}=\tanh^{-1}(\frac{-1}{2})\), then \(\theta_{t+1}=\tanh^{-1}(\frac{1}{2})\) and if \(\theta_{t}=\tanh^{-1}(\frac{1}{2})\), then \(\theta_{t+1}=\tanh^{-1}(\frac{-1}{2})\).
In conclusion, while the loss function satisfies assumptions (A3) and (A4) and the distribution map satisfies conditions \((A1)^{\prime}\) and (A2), RRM oscillates between \(\tanh^{-1}(\frac{-1}{2})\) and \(\tanh^{-1}(\frac{1}{2})\) when started from \(\theta_{0}=\tanh^{-1}(\frac{-1}{2})\), for any value of \(\epsilon\) and \(\gamma\) and any \(C>3\), including when \(\frac{\sqrt{C\epsilon}M}{\gamma}<1\).
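The oscillation can be checked numerically. The following sketch iterates the RRM update derived above, obtained by rearranging \((\tanh(\theta_{t+1})+2)(\tanh(\theta_{t})+2)=\frac{15}{4}\).

```python
import numpy as np

# Numerical check of the construction above: the RRM update solves
# (tanh(theta_{t+1}) + 2) * (tanh(theta_t) + 2) = 15/4 in closed form.
theta = np.arctanh(-0.5)                  # theta_0 = tanh^{-1}(-1/2)
for t in range(6):
    u = 15.0 / (4.0 * (np.tanh(theta) + 2.0)) - 2.0   # tanh(theta_{t+1})
    theta = np.arctanh(u)
    print(t, round(np.tanh(theta), 3))    # alternates +0.5 / -0.5 forever
```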
### Convergence of RRM
Here we state our main theoretical contribution, which provides sufficient conditions for repeated risk minimization to converge to a stable classifier with unique predictions.
**Theorem 2**.: _Suppose that the loss \(\ell(f_{\theta}(x),y)\) is \(\gamma\)-strongly convex w.r.t \(f_{\theta}(x)\) (A3) and the norm of its gradient w.r.t \(f_{\theta}(x)\) is upper bounded with \(M=\sup_{x,y,\theta}\|\nabla_{\hat{y}}\ell(f_{\theta}(x),y)\|\) (A4). If the distribution map \(\mathcal{D}(.)\) is \(\epsilon\)-sensitive w.r.t Pearson \(\chi^{2}\) divergence (A1) and satisfies the bounded norm ratio property with parameter \(C\) (A2), then:_
\[\|f_{G(\theta)}-f_{G(\theta^{\prime})}\|\leq\frac{\sqrt{C\epsilon}M}{\gamma}\|f_ {\theta}-f_{\theta^{\prime}}\|.\]
_So if \(\frac{\sqrt{C\epsilon}M}{\gamma}<1\), \(G\) is a contractive mapping and RRM converges to a stable classifier at a linear rate:_
\[\|f_{\theta_{t}}-f_{\theta_{PS}}\|\leq\alpha,\]
\[\text{for}\quad t\geq(1-\frac{\sqrt{C\epsilon}M}{\gamma})^{-1}\log(\frac{\|f_{ \theta_{0}}-f_{\theta_{PS}}\|}{\alpha}).\]
As we mentioned earlier, assumptions (A3) and (A4) on \(\ell\) are satisfied by the commonly-used Squared Error loss function, and this holds even in the presence of deep neural networks as predictors. To illustrate our results, we propose the _Resample-if-Rejected_ procedure in the following section and show that it satisfies assumptions (A1) and (A2). We provide a proof sketch for Theorem 2 here, though the full proof is available in Supplementary materials.
Proof Sketch.: Fix \(\theta\) and \(\theta^{\prime}\) in \(\Theta\). Let \(h\) and \(h^{\prime}\) be mappings from \(\mathcal{F}\) to \(\mathbb{R}\) defined as follows:
\[h(f_{\hat{\theta}})=E_{z\sim\mathcal{D}(f_{\theta})}[\ell(f_{\hat{\theta}}(x), y)]=\int\ell(f_{\hat{\theta}}(x),y)p_{f_{\theta}}(z)dz.\]
\[h^{\prime}(f_{\hat{\theta}})=E_{z\sim\mathcal{D}(f_{\theta^{\prime}})}[\ell(f_ {\hat{\theta}}(x),y)]=\int\ell(f_{\hat{\theta}}(x),y)p_{f_{\theta^{\prime}}}(z )dz.\]
Because of the strong convexity of \(\ell(f_{\theta}(x),y)\) in \(f_{\theta}(x)\) and the fact that \(f_{G(\theta)}\) minimizes \(h\), we can show that
\[-\gamma\|f_{G(\theta)}-f_{G(\theta^{\prime})}\|_{f_{\theta}}^{2}\geq\] \[\int\big{(}f_{G(\theta)}(x)-f_{G(\theta^{\prime})}(x)\big{)}^{ \top}\nabla_{\hat{y}}\ell(f_{G(\theta^{\prime})}(x),y)p_{f_{\theta}}(z)dz, \tag{5}\]
where
\[\|f_{G(\theta)}-f_{G(\theta^{\prime})}\|_{f_{\theta}}^{2}=\int\|f_{G(\theta)}( x)-f_{G(\theta^{\prime})}(x)\|^{2}p_{f_{\theta}}(x)dx.\]
Because of this \(f_{\theta}\)-dependent norm, Assumption 2 is required so that we can remove this dependency later.
Using \(\epsilon\)-sensitivity of \(\mathcal{D}(.)\) w.r.t the \(\chi^{2}\) divergence, and the bounded gradient norm assumption which states that there exists a finite value \(M\) such that \(M=\sup_{x,y,\theta}\|\nabla_{\hat{y}}\ell(f_{\theta}(x),y)\|\), alongside the fact that \(f_{G(\theta^{\prime})}\) minimizes \(h^{\prime}\), we derive that
\[\int\big{(}f_{G(\theta)}(x)-f_{G(\theta^{\prime})}(x)\big{)}^{ \top}\nabla_{\hat{y}}\ell(f_{G(\theta^{\prime})}(x),y)p_{f_{\theta}}(z)dz\geq\] \[-M\sqrt{\epsilon}\|f_{G(\theta)}-f_{G(\theta^{\prime})}\|_{f_{ \theta}}\|f_{\theta}-f_{\theta^{\prime}}\| \tag{6}\]
which provides a lower bound on the RHS of (5).
Combining (5) and (6) with the fact that the distribution map \(\mathcal{D}(.)\) satisfies the bounded norm ratio property with parameter \(C\) results in
\[\|f_{G(\theta)}-f_{G(\theta^{\prime})}\|\leq\frac{\sqrt{C\epsilon}M}{\gamma}\| f_{\theta}-f_{\theta^{\prime}}\|.\]
So for \(\frac{\sqrt{C\epsilon}M}{\gamma}<1\), RRM converges to a stable classifier based on the Banach fixed-point theorem.
Setting \(\theta=\theta_{t-1}\) and \(\theta^{\prime}=\theta_{PS}\), we can show that this convergence has a linear rate.
## 3 \(\epsilon\)-Sensitivity of the RIR procedure
An example of strategic classification, introduced in Section 1.2, occurs in social media when users' posts get rejected because they violate the platform's policies. In such cases, users often re-post the same content with different wording in order to get accepted. Inspired by this application, we propose the _Resample-if-Rejected (RIR)_ procedure to model distribution shifts. Suppose we have a base distribution with pdf \(p\) and a function \(g:f_{\theta}(x)\mapsto g(f_{\theta}(x))\) which gives the probability of rejection. Here we assume that \(f_{\theta}(x)\) is a scalar. Let \(\text{RIR}(f_{\theta})\) be the distribution resulting from deploying a model with prediction function \(f_{\theta}\) under this procedure, and take \(p_{f_{\theta}}\) as its pdf. The sampling procedure of \(p_{f_{\theta}}\) is as follows:
* Take a sample \(x^{*}\) from \(p\).
* Toss a coin whose probability of landing heads is \(1-g(f_{\theta}(x^{*}))\). If it lands heads, output \(x^{*}\); if it lands tails, output a fresh sample from \(p\).
Consider \(X\) to be a random variable with probability distribution \(p(x)\). \(p_{f_{\theta}}\) is defined mathematically as
\[p_{f_{\theta}}(x) =p(x)\Big{(}1-g(f_{\theta}(x))\Big{)}+p(x)\mathbb{E}_{X}[g(f_{ \theta}(X))]\] \[=p(x)(1-g(f_{\theta}(x))+C_{\theta}),\]
where \(C_{\theta}=\mathbb{E}_{X}[g(f_{\theta}(X))]=\int g(f_{\theta}(x^{\prime}))p(x^ {\prime})dx^{\prime}\).
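A minimal sampler for this procedure might look as follows; `base_sampler` and `f_theta` are illustrative placeholders for a sampler from \(p\) and a scalar prediction function for which \(g(f_{\theta}(x))=f_{\theta}(x)+\delta\) is a valid probability.

```python
import numpy as np

def rir_sample(base_sampler, f_theta, delta, rng):
    """One draw from RIR(f_theta): sample x* ~ p; with probability
    g(f_theta(x*)) = f_theta(x*) + delta, reject x* and output a fresh
    draw from p; otherwise keep x*."""
    x_star = base_sampler(rng)
    if rng.random() < f_theta(x_star) + delta:   # tails: rejected, resample
        return base_sampler(rng)
    return x_star                                # heads: accepted

# usage sketch:
# rng = np.random.default_rng(0)
# draws = [rir_sample(lambda r: r.uniform(0.0, 1.0),
#                     lambda x: 0.05 * x, 0.9, rng) for _ in range(1000)]
```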
The following theorem shows that the distribution resulting from the RIR procedure satisfies our conditions on the distribution map. This Theorem is proved in the Supplementary materials.
**Theorem 3**.: _If \(f_{\theta}(x)\in[0,1-\delta]\)\(\forall\theta\in\Theta\) for some fixed \(0<\delta<1\), then for \(g(f_{\theta}(x))=f_{\theta}(x)+\delta\), \(\text{RIR}(.)\) is \(\frac{1}{\delta}\)-sensitive w.r.t \(\chi^{2}\) divergence (A1) and satisfies the bounded norm ratio property (A2) for \(C=\frac{1}{\delta}\)._
**Remark 2**.: _Consider a strategic classification task where the distribution reacts to a model with the prediction function \(f_{\theta}\) in accordance with the RIR procedure. Suppose the predictions satisfy \(f_{\theta}(x)\in[0,1-\delta]\), the label \(y\) is in \(\{0,1-\delta\}\), and we use the Squared Error loss \(\ell(f_{\theta}(x),y)=\frac{1}{2}(f_{\theta}(x)-y)^{2}\), which is \(1\)-strongly convex. According to Theorem 3, the distribution map is \(\frac{1}{\delta}\)-sensitive w.r.t \(\chi^{2}\) divergence and satisfies the bounded norm ratio property for \(C=\frac{1}{\delta}\). Also, \(M=\sup_{x,y,\theta}|\ell^{\prime}(f_{\theta}(x),y)|\) is equal to \(\sup_{x,y,\theta}|f_{\theta}(x)-y|=1-\delta\). Putting all these together, the convergence rate of RRM in Theorem 2 is equal to:_
\[\frac{\sqrt{C\epsilon}M}{\gamma}=\frac{1-\delta}{\delta}.\]
_Hence, whenever we have \(\delta>0.5\), RRM converges to a unique stable classifier._
We will use this remark in the experiments section.
In supervised learning, \(x\) corresponds to a set of features. So far in the RIR procedure we resample the whole set of features, but we can also resample only a subset of them; Theorem 3 still holds in this case if the strategic and non-strategic features are independent, as shown after the proof of the theorem in the Supplementary materials. In our simulations in the next section, \(x\in\mathbb{R}^{d}\) is the set of features of individuals applying to get loans from a bank. These features are divided into two sets: _strategic_ and _non-strategic_. Strategic features are those that can be (easily) manipulated without affecting the true label, e.g., the number of open credit lines and loans. Non-strategic features, by contrast, can be seen as causes of the label, such as monthly income. In our experiments, we resample only strategic features as these are the ones that people can manipulate more easily.
## 4 Experiments
We complement our theoretical results with experiments on a credit scoring task and illustrate how they support our claims. We implemented our simulations based on Perdomo et al. (2020)'s code in the Whynot Python package (Miller et al., 2020), and changed it according to our settings so we can use auto-differentiation of PyTorch2. The strategic classification task of credit scoring is a two-player game between a bank that predicts the creditworthiness of loan applicants, and individuals who strategically manipulate their features to alter the classification outcome. We run the simulations using Kaggle's _Give Me Some Credit_ dataset (Kaggle, 2011), with features \(x\in\mathbb{R}^{11}\) corresponding to applicants' information along with their label \(y\in\{0,1\}\), where \(y=1\) indicates that the applicant defaulted and \(y=0\) otherwise.
Footnote 2: [https://github.com/mhrnz/Performance-Prediction-with-Neural-Networks](https://github.com/mhrnz/Performance-Prediction-with-Neural-Networks)
In our simulations, we assume that the data distribution induced by the classifier \(f_{\theta}\) shifts according to the RIR procedure where strategic features are resampled with the probability of rejection \(g(f_{\theta}(x))=f_{\theta}(x)+\delta\). Assuming that strategic and non-strategic features are independent, resampling strategic features can be implemented by simply choosing these features from another data point at random. For the classifier, we used a two-layer neural network with a hidden-layer size of 6. The choice of hidden layer size in our network was arbitrary; Figure 3 shows convergence for different hidden size values. In the network, we use a LeakyReLU activation after the first layer, and a scaled-sigmoid activation function after the second layer to bring the outcome \(f_{\theta}(x)\) to the interval \([0,1-\delta]\). This way we make sure that \(g(f_{\theta}(x))\in[\delta,1]\) is a valid probability and the assumption of Theorem 3 is satisfied. Since the outcome \(f_{\theta}(x)\) is in \([0,1-\delta]\), we change the label \(1\) to \(1-\delta\). So \(y=1-\delta\) corresponds to default, and the higher the value of \(f_{\theta}(x)\), the greater the chance of rejection. The objective is to minimize the expectation of the Squared Error loss function over instances, i.e. \(\mathbb{E}[\frac{1}{2}(f_{\theta}(x)-y)^{2}]\).
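A sketch of this classifier, assuming PyTorch, is given below; the layer sizes follow the description above, while the class name and remaining details are illustrative.

```python
import torch
import torch.nn as nn

class StrategicClassifier(nn.Module):
    """Two-layer network whose output is squashed into [0, 1 - delta] by a
    scaled sigmoid, so that g(f(x)) = f(x) + delta lies in [delta, 1] and
    is a valid rejection probability."""
    def __init__(self, n_features=11, hidden=6, delta=0.9):
        super().__init__()
        self.delta = delta
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # scaled sigmoid: maps the real line onto [0, 1 - delta]
        return (1.0 - self.delta) * torch.sigmoid(self.body(x)).squeeze(-1)
```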
The definition of RRM requires solving an exact minimization problem at each optimization step; however, we solve this optimization problem approximately using several steps of gradient descent until the absolute difference of two consecutive risks is less than the tolerance of \(10^{-9}\). Also, note that running the same configuration twice might result in different plots because of the randomness that exists in the resampling phase.
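The approximate inner minimization can be sketched as follows; the tolerance of \(10^{-9}\) matches the text, while the optimizer choice, learning rate, and step cap are assumptions.

```python
import torch

def approx_risk_minimization(model, x, y, lr=1e-2, tol=1e-9,
                             max_steps=1_000_000):
    """Replace the exact argmin in RRM with gradient descent, stopping when
    two consecutive risks differ by less than `tol`."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    prev = float("inf")
    for _ in range(max_steps):
        opt.zero_grad()
        risk = 0.5 * ((model(x) - y) ** 2).mean()   # squared-error risk
        risk.backward()
        opt.step()
        if abs(prev - risk.item()) < tol:
            break
        prev = risk.item()
    return model
```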
Figure 1 shows the evolution of the log of performative risk (left) and accuracy (right) through iterations of RRM for \(\delta=0.9\). The blue lines show the changes in risk (accuracy) after optimizing on the distribution induced by the last model, and the green lines show the effect of the distribution shift on the risk (accuracy). We plot the log of the performative risk instead of its value for illustration purposes only. As discussed in Remark 2, for this \(\delta\), all the conditions of Theorem 2, including \(\frac{\sqrt{C\epsilon}M}{\gamma}<1\), are satisfied, and the theorem claims that in this case RRM converges to a stable model; this is supported by our results in Figure 1.

Figure 1: Evolution of log of performative risk (left) and accuracy (right) through iterations of RRM for \(\delta=0.9\). The blue lines show the changes in risk (accuracy) after optimizing on the distribution induced by the last model, and the green lines show the effect of the distribution shift on the risk (accuracy).
Figure 2 shows the log of Performative Risk for different values of \(\delta=0.1,0.4,0.7,0.9\). The plot for \(\delta=0.9\) is generated through a different run than Figure 1. Based on Remark 2, for \(\delta=0.7\) and \(\delta=0.9\) we should see convergence behavior, though for \(\delta<0.5\), our theory neither gives a guarantee of convergence nor claims that repeated retraining will diverge, so we might or might not see convergence behavior for \(\delta=0.1\) or \(\delta=0.4\). What we see in Figure 2 is aligned with our expectations. It is important to note that for smaller \(\delta\), the value of \(\epsilon\) which indicates the strength of performative effects is larger, and for high performative effects, it is more difficult for the model to converge since the distribution is allowed to move more after the model's deployment.
On a high level, we interpret the stable classifier to be a model that relies less on non-strategic features for classification. Throughout the training, for a fixed data point \(z=(x,y)\) where \(x=(x_{s},x_{f})\) for \(x_{s}\) being the strategic features and \(x_{f}\) being the non-strategic ones, the model sees the same \(x_{f}\) but different values for \(x_{s}\) chosen randomly, all with the same label \(y\). So intuitively, the model would learn to rely less on strategic features and more on non-strategic ones for classification, and this makes it more robust to the strategic behavior of agents.
## 5 Discussion and Future Work
In this paper, we contribute the first set of convergence guarantees for finding performative stable models on problems where the risk is allowed to be non-convex with respect to parameters. This is an important development: our results pertain to modern machine learning models, like neural networks.
We achieve these stronger results by appealing to functional analytical tools, but also making slightly stronger assumptions on the performative feedback loop: rather than assuming that the distribution is \(\epsilon\)-sensitive to parameters as measured by Wasserstein distance, we instead assume that the distribution is \(\epsilon\)-sensitive to _predictions_ as measured by the \(\chi^{2}\) divergence.
On one hand, only assuming sensitivity to predictions instead of parameters is a step in the right direction. None of the big applications of performative prediction justify sensitivity to model parameters. As a matter of fact, many of the applications would motivate moving one step further in that direction: performative behavior in machine learning systems often manifests as a function of _decisions_ or _actions_ that rely on a prediction. Those decisions or actions are observed by a population that reacts by changing its behavior. We leave this important problem setting of studying sensitivity to decisions for future work.
On the other hand, \(\chi^{2}\) sensitivity is stronger and implies Wasserstein sensitivity. Furthermore, because we use a variable norm that depends on parameters in our analysis, we make an extra assumption that the fixed norm is upper-bounded by a coefficient of this variable norm. While we provide in Section 3 a well-motivated, concrete example of a performative problem that satisfies both of these conditions, it is nonetheless an interesting open question how much our analytical assumptions can be loosened.

Figure 2: Evolution of log of performative risk for different values of \(\delta=0.1,0.4,0.7,0.9\) through iterations of RRM.
## 6 Societal Impact
We believe that the deployment of models that can have an impact on the behavior of people (i.e., are performative) should be considered with care, especially for critical applications such as elections or the regulation of content on social media platforms. Our work proposes a new analysis of an existing algorithm that aims at learning performatively stable classifiers. Since the nature of our work is mainly theoretical and does not introduce new algorithms, it does not have a direct societal impact beyond the one described in the original paper on performative prediction. However, since our work supports the use of much more powerful models (e.g., NNs) in performative problems, and this increased power comes with increased responsibility, we should be mindful of the potential for undue influence on society while using this framework.
## Acknowledgements
We thank Quentin Bertrand, Michael Przystupa, and Reza Bayat for their helpful feedback on the manuscript. We would also like to thank Amjad Almahairi for our insightful discussions. Ioannis Mitliagkas acknowledges support by an NSERC Discovery grant (RGPIN-2019-06512), a Samsung grant and a Canada CIFAR AI chair.
|
2307.03289 | A co-kurtosis PCA based dimensionality reduction with nonlinear
reconstruction using neural networks | For turbulent reacting flows, identification of low-dimensional
representations of the thermo-chemical state space is vitally important,
primarily to significantly reduce the computational cost of device-scale
simulations. Principal component analysis (PCA), and its variants, is a widely
employed class of methods. Recently, an alternative technique that focuses on
higher-order statistical interactions, co-kurtosis PCA (CoK-PCA), has been
shown to effectively provide a low-dimensional representation by capturing the
stiff chemical dynamics associated with spatiotemporally localized reaction
zones. While its effectiveness has only been demonstrated based on a priori
analysis with linear reconstruction, in this work, we employ nonlinear
techniques to reconstruct the full thermo-chemical state and evaluate the
efficacy of CoK-PCA compared to PCA. Specifically, we combine a CoK-PCA/PCA
based dimensionality reduction (encoding) with an artificial neural network
(ANN) based reconstruction (decoding) and examine a priori the reconstruction
errors of the thermo-chemical state. In addition, we evaluate the errors in
species production rates and heat release rates that are nonlinear functions of
the reconstructed state as a measure of the overall accuracy of the
dimensionality reduction technique. We employ four datasets to assess
CoK-PCA/PCA coupled with ANN-based reconstruction: a homogeneous reactor for
autoignition of an ethylene/air mixture that has conventional single-stage
ignition kinetics, a dimethyl ether (DME)/air mixture which has two-stage
ignition kinetics, a one-dimensional freely propagating premixed ethylene/air
laminar flame, and a two-dimensional homogeneous charge compression ignition of
ethanol. The analyses demonstrate the robustness of the CoK-PCA based
low-dimensional manifold with ANN reconstruction in accurately capturing the
data, specifically from the reaction zones. | Dibyajyoti Nayak, Anirudh Jonnalagadda, Uma Balakrishnan, Hemanth Kolla, Konduri Aditya | 2023-07-06T21:00:26Z | http://arxiv.org/abs/2307.03289v1 | A co-kurtosis PCA based dimensionality reduction with nonlinear reconstruction using neural networks
###### Abstract
For turbulent reacting flows, identification of low-dimensional representations of the thermo-chemical state space is vitally important, primarily to significantly reduce the computational cost of device-scale simulations. Principal component analysis (PCA), and its variants, is a widely employed class of methods. Recently, an alternative technique that focuses on higher-order statistical interactions, co-kurtosis PCA (CoK-PCA), has been shown to effectively provide a low-dimensional representation by capturing the stiff chemical dynamics associated with spatiotemporally localized reaction zones. While its effectiveness has only been demonstrated based on _a priori_ analysis with linear reconstruction, in this work, we employ nonlinear techniques to reconstruct the full thermo-chemical state and evaluate the efficacy of CoK-PCA compared to PCA. Specifically, we combine a CoK-PCA/PCA based dimensionality reduction (encoding) with an artificial neural network (ANN) based reconstruction (decoding) and examine _a priori_ the reconstruction errors of the thermo-chemical state. In addition, we evaluate the errors in species production rates and heat release rates that are nonlinear functions of the reconstructed state as a measure of the overall accuracy of the dimensionality reduction technique. We employ four datasets to assess CoK-PCA/PCA coupled with ANN-based reconstruction: a homogeneous reactor for autoignition of an ethylene/air mixture that has conventional single-stage ignition kinetics, a dimethyl ether (DME)/air mixture which has two-stage ignition kinetics, a one-dimensional freely propagating premixed ethylene/air laminar flame, and a two-dimensional homogeneous charge compression ignition of ethanol. The analyses demonstrate the robustness of the CoK-PCA based low-dimensional manifold with ANN reconstruction in accurately capturing the data, specifically from the reaction zones.
keywords: Dimensionality reduction, Principal component analysis, Co-kurtosis tensor, Deep neural networks, Reconstruction +
Footnote †: journal: Combustion and Flame
## 1 Introduction
The multi-scale, multi-physics nature of turbulent reacting flows necessitates the use of high-fidelity simulations to accurately model chemical kinetics and turbulence-chemistry interactions. When representing chemical kinetics using first principles, e.g., direct numerical simulations with detailed kinetics, the governing system of equations has large dimensionality due to tens of chemical species participating in hundreds of chemical reactions [1; 2; 3; 4; 5]. As a result, the computational costs become prohibitively expensive for simulations of practical device-scale problems. Indeed, as the chemistry calculations associated with even the simplest of reaction mechanisms present themselves as the main driver of the large computational cost [6], reduced order modeling techniques become invaluable.
With the advent of data-driven techniques, lower-dimensional manifold (LDM) representations of the thermo-chemical subspace, identified from relevant training data, can effectively model the species dynamics of an otherwise large chemical system. Among the various available strategies to obtain these LDMs, principal component analysis (PCA) and its many flavors have been most widely employed [7; 8; 9; 10; 11; 12]. However, the principal components obtained by PCA are optimized with respect to second-order joint statistical moment, covariance, of the training data and may not be sensitive to the presence of extreme-valued samples characteristic of localized spatiotemporal events such as the formation of ignition kernels [13]. In contrast, the statistical signature of such events is shown
to be favorably captured by principal components of higher-order joint statistical moments, specifically the fourth-order co-kurtosis tensor [13]. Building upon this observation, a dimensionality reduction procedure that constructs LDMs represented by principal components of the co-kurtosis tensor, namely the co-kurtosis PCA (CoK-PCA) method, was proposed [14]. Additionally, analogous to PCA, a recently proposed online low-rank approximation algorithm known as dynamically bi-orthogonal decomposition (DBO), which is based on time-dependent low-dimensional subspaces, has been shown to effectively characterize strongly transient events in turbulent compressible reacting flows [15].
It is noteworthy that, while the CoK-PCA method was shown to represent the thermo-chemical state as well as nonlinear functions of the thermo-chemical state, such as species production rates (PRs) and heat release rates (HRRs), better than PCA in the localized spatiotemporal regions corresponding to strong chemical activity, the transformation from the principal components of the LDM to the full thermo-chemical state was performed through linear operators. However, due to the inherent nonlinear nature of the combustion phenomenon, the use of linear reconstruction has long been known not to be sufficiently accurate. Thus, the main objective of the present study is to address these concerns by studying the CoK-PCA method with nonlinear reconstruction techniques and comparing the accuracy relative to both PCA and a simple linear reconstruction.
For PCA-based LDMs, several studies have explored nonlinear reconstruction techniques such as artificial neural networks (ANNs), kernel methods, Gaussian process regression (GPR), and their hybrid approaches [16; 17; 18; 19; 20]. Nonlinear reconstruction using ANN models provides flexibility to capture complex relationships, scalability for large datasets, meaningful representation learning, robustness to noise and irregularities, and the ability to generalize well to unseen data [21]. Therefore, within the confines of this paper, our primary emphasis is directed toward nonlinear reconstruction utilizing ANN. In this study, we will compare the reconstruction performance of ANNs with linear methods [14] for thermo-chemical scalars, species production rates, and heat release rates. By contrasting the outcomes of ANN-based reconstruction with those achieved through linear techniques [14], we aim to evaluate the efficacy and superiority of nonlinear approaches in accurately capturing and predicting these important combustion variables. Following Jonnalagadda _et al._[14], the quality of the CoK-PCA-based/PCA-based encoder and ANN-based decoder models, hereafter called the CoK-PCA-ANN and PCA-ANN models, respectively, will be compared via the conventionally considered reconstruction errors of the thermo-chemical scalars as well as more sensitive PRs and HRRs for four combustion datasets namely premixed ethylene-air in a homogenous reactor, two-stage autoignition of dimethyl ether (DME)-air, a one-dimensional freely-propagating laminar flame of pre-mixed ethylene-air, and a homogeneous charge compression ignition of ethanol-air mixture.
The remainder of this paper is organized as follows. In Sec. 2, we briefly illustrate the dimensionality reduction procedure and outline the PCA and the CoK-PCA methods to obtain low-dimensional manifolds (LDMs). Section 3 describes the artificial neural network (ANN) based nonlinear reconstruction procedure to predict the thermo-chemical scalars from principal components of the LDMs. The results from _a priori_ analysis to evaluate the performance of the two LDMs based on ANN reconstruction are presented in Sec. 4. Finally, we summarize the paper and provide future directions in Sec. 5.
## 2 Dimensionality reduction
Following convention, we arrange the scaled training data as a matrix \(\mathbf{X}\in\mathds{R}^{(n_{g}\times n_{v})}\) with \(n_{g}\) observations (e.g., spatial locations, temporal checkpoints) each having \(n_{v}\) real-valued variables or features (e.g., species concentrations, temperature). With respect to the feature space, \(\mathbf{X}\) can be represented in terms of column vectors as \(\mathbf{X}=\left\{x_{i}\in\mathds{R}^{(n_{g}\times 1)}\ \forall\ i\in\{1,\cdots,n_{v}\}\right\}\). The purpose of dimensionality reduction, within the context of combustion, is to find a column subspace of dimension \(n_{q}<n_{v}\), representing an LDM of the feature space by some measure of optimality. Note that dimensionality reduction could also denote techniques that seek an optimal row subspace, which reduces the size of \(n_{g}\), but our interest here is strictly on reducing \(n_{v}\).
### Principal component analysis (PCA) based low-dimensional manifold
For PCA, the principal vectors align in the directions of maximal variance as captured by the second order data covariance matrix, \(\mathbf{C}\in\mathds{R}^{(n_{v}\times n_{v})}\), represented using index notation as:
\[(\mathbf{C})_{ij}\equiv C_{ij}=\mathds{E}(x_{i}x_{j}),\quad i,j\in\{1,\cdots,n _{v}\}, \tag{1}\]
where \(\mathds{E}\) is the expectation operator. The required principal vectors (\(\mathbf{A}\)) are the eigenvectors of the covariance matrix obtained through an eigenvalue decomposition, \(\mathbf{C}=\mathbf{ALA}^{T}\). It should be noted that the data used in the definition of joint moments is assumed to be centered around the mean.
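A minimal NumPy sketch of this step, assuming the rows of \(\mathbf{X}\) are already centered and scaled, reads:

```python
import numpy as np

def pca_basis(X):
    """Principal (variance) vectors of centered, scaled data X (n_g x n_v):
    eigenvectors of C_ij = E[x_i x_j], sorted by decreasing eigenvalue."""
    C = X.T @ X / X.shape[0]       # covariance matrix of Eq. (1)
    L, A = np.linalg.eigh(C)       # eigh exploits the symmetry of C
    order = np.argsort(L)[::-1]    # most-variance directions first
    return A[:, order], L[order]
```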
### Co-kurtosis tensor based low-dimensional manifold
Similarly, with the higher order moment of interest, i.e., the fourth-order co-kurtosis tensor, the principal vectors represent the directions of maximal kurtosis in the data. The co-kurtosis tensor is defined as:
\[T_{ijkl}=\mathds{E}(x_{i}x_{j}x_{k}x_{l}),\quad i,j,k,l\in\{1,\cdots,n_{v}\} \tag{2}\]
By drawing an analogy to independent component analysis (ICA) [13], for a non-Gaussian data distribution, the fourth-order cumulant tensor, i.e., co-kurtosis \(\mathbf{K}\) is computed by subtracting the excess variance given as:
\[K_{ijkl}=T_{ijkl}-C_{ij}C_{kl}-C_{ik}C_{jl}-C_{il}C_{jk} \tag{3}\]
Again note that as the data is centered around the mean, only the second moment terms appear in the evaluation of the cumulant tensor.
The next step involves a suitable decomposition of the co-kurtosis tensor \(\mathbf{K}\) to obtain the required principal components. Directly computing the higher-order joint moment tensors is expensive due to the curse of dimensionality, i.e., in our case for the co-kurtosis tensor, computational complexity would be \(n_{v}^{4}\) where \(n_{v}\) is the number of features. The symmetric nature of the co-kurtosis tensor can be leveraged to result in roughly half of \(n_{v}^{4}\) computations. However, the existing well-defined matrix decomposition techniques cannot be directly extended to higher-order tensors. Therefore, alternate tensor decomposition methods, such as symmetric canonical polyadic (CP), higher order singular value decomposition (HOSVD), etc., should be explored to obtain the principal kurtosis vectors and values. Following [22] and [23], Aditya et al. [13] showed that the cumulant tensor \(\mathbf{K}\) could be _reshaped_ into a \(n_{v}\times n_{v}^{3}\) matrix \(\mathbf{T}\) following which the principal vectors \(\mathbf{U}\) are determined from the SVD of \(\mathbf{T}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\).
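The following sketch forms the co-kurtosis tensor of Eq. (3) explicitly, unfolds it into an \(n_{v}\times n_{v}^{3}\) matrix, and extracts the principal kurtosis vectors via SVD; it assumes centered, scaled data and, given the \(O(n_{v}^{4})\) memory footprint, is practical only for modest \(n_{v}\).

```python
import numpy as np

def cokurtosis_basis(X):
    """Principal kurtosis vectors of centered, scaled data X (n_g x n_v):
    build the excess co-kurtosis tensor K, reshape it to n_v x n_v^3,
    and take the left singular vectors of that unfolding."""
    n_g, n_v = X.shape
    C = X.T @ X / n_g                                      # Eq. (1)
    T = np.einsum('gi,gj,gk,gl->ijkl', X, X, X, X,
                  optimize=True) / n_g                     # Eq. (2)
    K = (T - np.einsum('ij,kl->ijkl', C, C)
           - np.einsum('ik,jl->ijkl', C, C)
           - np.einsum('il,jk->ijkl', C, C))               # Eq. (3)
    U, S, _ = np.linalg.svd(K.reshape(n_v, n_v**3), full_matrices=False)
    return U, S
```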
After obtaining the principal components, we can reduce the dimensionality of the original data by projecting it onto a low-dimensional manifold. This is typically performed by selecting the most informative subset of principal vectors to project \(\mathbf{X}\in\mathds{R}^{(n_{g}\times n_{v})}\) onto the reduced space represented as \(\mathbf{Z}_{q}\in\mathds{R}^{(n_{g}\times n_{q})}\), where \(n_{q}(<n_{v})\) corresponds to the number of principal vectors retained. The conventional forward projection procedure in PCA employs a simple matrix transformation,
\[\mathbf{Z}_{q}=\mathbf{X}\mathbf{A}_{q}, \tag{4}\]
where \(\mathbf{A}_{q}\in\mathds{R}^{(n_{v}\times n_{q})}\) represents the truncated subset of principal vectors (eigenvectors of the covariance matrix). For CoK-PCA, we obtain \(\mathbf{A}_{q}\) as the \(n_{q}\) leading left singular vectors of \(\mathbf{U}\) as described above. The contrast between PCA and CoK-PCA has been illustrated using a synthetic bivariate dataset with a few extreme-valued samples collectively representing anomalous events [13; 14]. It was observed that while the first PCA principal vector aligned in the direction of maximal variance, the first CoK-PCA principal vector aligned itself in the direction of the anomalous cluster, supporting the hypothesis that CoK-PCA is more sensitive to extreme-valued samples than PCA.
## 3 Reconstruction methodology
To assess the quality of the reduced manifold, we need to evaluate the reconstruction accuracy of the original state space from the low-dimensional subspace. Note that errors in the reconstructed variables are incurred at two stages: while projecting data into the low-dimensional space and during the reconstruction.
### Linear reconstruction
The standard procedure of obtaining the original thermo-chemical state is a linear reconstruction through a matrix inversion, given as:
\[\mathbf{X}_{q}=\mathbf{Z}_{q}\mathbf{A}_{q}^{T}, \tag{5}\]
where \(\mathbf{X}_{q}\) denotes the reconstructed data in the original feature space. Now, a comparison between \(\mathbf{X}_{q}\) and \(\mathbf{X}\) would provide a quantitative measure of the quality of the two reduced manifolds obtained by CoK-PCA and PCA, respectively. Jonnalagadda et al. [14] analyzed the maximum and average values of the absolute reconstruction error \(\left(\varepsilon=|\mathbf{X}-\mathbf{X}_{q}|\right)\), \(\varepsilon_{m}=\max(\varepsilon)\) and \(\varepsilon_{a}=\mathrm{mean}(\varepsilon)\), respectively to quantify the accuracy in each reconstructed variable. Specifically, they examined the error ratio,
\[r_{i}=\ln\left\{\frac{\varepsilon_{i}^{\text{PCA}}}{\varepsilon_{i}^{\text{ CoK-PCA}}}\right\}, \tag{6}\]
to analyze the performance of CoK-PCA relative to PCA; the subscript \(i\) can represent either the maximum (\(r_{m}\)) or average (\(r_{a}\)) errors.
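Putting Eqs. (4)-(6) together, a minimal sketch of the linear reduce-reconstruct-evaluate loop is:

```python
import numpy as np

def reduce_reconstruct_errors(X, A, n_q):
    """Project onto the leading n_q principal vectors (Eq. 4), reconstruct
    linearly (Eq. 5), and return per-variable max/mean absolute errors."""
    A_q = A[:, :n_q]
    X_q = (X @ A_q) @ A_q.T          # Z_q = X A_q, then X_q = Z_q A_q^T
    err = np.abs(X - X_q)
    return err.max(axis=0), err.mean(axis=0)

# Eq. (6), per variable: r = np.log(eps_pca / eps_cokpca)
```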
### Nonlinear reconstruction through ANNs
It is clear that while CoK-PCA exhibits improved accuracy in capturing stiff dynamics compared to PCA [14], both methods incur significant errors while employing a linear reconstruction of the original
thermo-chemical state from the reduced manifold, particularly for an aggressive truncation (low \(n_{q}\)). Therefore, to fully establish the efficacy of CoK-PCA relative to PCA in capturing stiff dynamics, it is imperative to investigate it coupled with a nonlinear reconstruction approach. In this paper, we employ fully-connected deep neural networks to accomplish the required nonlinear reconstruction task. Since strong dependencies or relationships exist between different thermo-chemical scalars, it is appropriate to consider a fully-connected network where every subsequent layer is fully connected with the previous layer, ensuring the flow of information (of dependencies) across the network. In this regard, we also hypothesize that skip connections, which act as a form of regularization in deeper networks by allowing some layer outputs to bypass subsequent layers, would not be suitable here. However, it should be noted that using artificial neural networks (ANNs) is an intuitive choice. Alternate nonlinear regression methods, such as Gaussian process regression (GPR), polynomial regression, least squares, etc., exist and can be incorporated in a similar manner as described in this study.
With significant advancements in deep learning in recent times, ANNs have proven their potential to model highly complex nonlinear relationships between any set of inputs and outputs. The goal of an ANN or, specifically, a deep feedforward neural network is to approximate some underlying function \(f^{*}\). For example, for a classifier, \(\mathbf{y}=f^{*}(\mathbf{x})\) maps an input \(\mathbf{x}\) to a category \(\mathbf{y}\), but more generally, in the case of regression problems, \(\mathbf{x}\) is a vector of real numbers and \(\mathbf{y}\) is the output of a vector-valued function. A feedforward network defines a mapping \(\mathbf{y}=f(\mathbf{x};\boldsymbol{\theta})\) and learns the value of the parameters \(\boldsymbol{\theta}\) that result in the best function approximation. The nonlinear reconstruction step in a dimensionality reduction algorithm can be viewed as a nonlinear mapping from the reduced manifold (or input PCs) to the original feature space (or output features). We leverage the property of ANNs being _universal function approximators_[21] to achieve this task.
Consider a reduced data representation of the original state space \(\mathbf{X}\) given by the _score_ matrix, \(\mathbf{Z}_{q}=\mathbf{X}\mathbf{A}_{q}\), where \(\mathbf{A}_{q}\in\mathds{R}^{(n_{v}\times n_{q})}\) comprises the chosen subset of principal vectors (kurtosis or variance). Now, the objective is to use an ANN to predict (or reconstruct) \(\mathbf{X}_{q}\) from \(\mathbf{Z}_{q}\), where \(\mathbf{X}_{q}\) represents the reconstructed data in the original feature space, which is as close to \(\mathbf{X}\) as possible. This is a supervised learning problem where for every \(k^{th}\) feature vector from (\(k^{th}\) row of) the design matrix \(\mathbf{Z}_{q}\), \(z_{k*}\in\mathds{R}^{n_{q}}\), the network should accurately predict the target vector (\(k^{th}\) row of \(\mathbf{X}\)) \(x_{k*}\in\mathds{R}^{n_{v}}\), i.e., the ANN should provide the mapping \(z_{k*}\mapsto x_{k*}\), \(\forall k\in\{1,2,\cdots,n_{g}\}\). In other words, the goal of training a neural network is to drive its prediction \(\mathbf{X}_{q}\) to match \(\mathbf{X}\). Since it is a regression problem, we evaluate the performance or accuracy of the model by using a mean squared error (MSE) loss defined as:
\[\mathcal{L}_{MSE}=\frac{1}{m}\sum_{k=1}^{m}(\hat{x}_{k*}-x_{k*})^{2} \tag{7}\]
where \(\hat{x}_{k*}\), \(x_{k*}\), and \(m\) are the model prediction, ground truth, and the number of samples, respectively. Note that \(m\) can differ from \(n_{g}\) depending on how the entire dataset is split into training and test sets.
ANNs or feedforward networks are typically represented by composing together many different functions [24]. The model can be viewed as a directed acyclic graph describing how the functions are composed. For example, we might have three different functions \(f^{(1)},f^{(2)}\), and \(f^{(3)}\) connected in a chain, to form \(f(x)=f^{(3)}(f^{(2)}(f^{(1)}(x)))\). These chain-like structures form the foundation of neural networks. Each function \(f^{(i)}\) corresponds to a _hidden layer_ of the network. The overall length of the chain gives the _depth_ of the network. The final layer of the network is the _output layer_. Each hidden layer of the network is generally vector-valued. Every vector element can be interpreted as playing a role analogous to that of a _neuron_. The dimensionality of the hidden layers (number of neurons) determines the _width_ of the layer. In other words, a layer can be viewed as consisting of many _units (neurons)_ that act in parallel, each representing a vector-to-scalar function. Each connection to a unit in a hidden layer is associated with a weight \(w\) and a bias \(b\). These weights and biases parameterize the function \(f^{(i)}\) for each hidden layer. The simplest feedforward neural network computes the output of a unit by a linear combination of all weights and biases associated with it. After that, a nonlinear _activation_ acts on this output and is responsible for inducing the required nonlinearity in the network approximation. Commonly used nonlinear activations are sigmoid, hyperbolic tangent (Tanh), rectified linear unit (ReLU), Leaky ReLU, etc. From our numerous experiments, we found that the usage of Tanh provides better stability, robustness, and smoother training of the network than ReLU, effectively handles vanishing gradients, and exhibits minimal sensitivity to different random seeds. Additionally, Tanh can map inputs to spaces with both positive and negative values, unlike ReLU and sigmoid. Thus, in this study, we employ Tanh in the hidden layers. Further, we use a custom sigmoid-based activation function at the output layer to
ensure the model predictions lie within the same limits as the original state.
Since this is a non-convex optimization problem, gradient descent-based methods are generally used to iteratively converge to the optimal solution. For our study, we use the popular Adam optimization algorithm, which is a variant of stochastic gradient descent (SGD) that realizes the benefits of two other SGD algorithms: adaptive gradient algorithm (AdaGrad) and root mean square propagation (RMSProp). Instead of using a single learning rate as in SGD, Adam computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. In this case, we control the learning rate so that there is minimum oscillation when it reaches the global minimum while taking big enough steps to pass the local minimum hurdles. This method is particularly efficient for larger problem sizes involving more data or parameters. Moreover, it requires relatively lesser memory for the training procedure. The number of epochs in the training process is also selected carefully to ensure convergence.
Neural network training is inherently stochastic as it involves a random initialization of the parameters (weights and biases) at the start of the optimization. Also, the non-convexity of the loss function might result in the algorithm converging to a local minimum among multiple local minima according to a specific value of initial weights and biases. This manifests in the _keras_ random seed we set in our code. If the network is robust, this generally does not affect the network predictions much. Nevertheless, in this work, we employ techniques such as stochastic weight averaging, model averaging, and ensemble averaging in the network training phase to mitigate these issues and ensure consistency in model predictions.
### Error metrics
Once trained, the network is used to predict the thermo-chemical scalars, which include species mass fractions and temperature. The species production rates and heat release rate are also computed based on this reconstructed thermo-chemical scalars. The motivation behind calculating the species production rates and heat release rate is their nonlinear dependence on the species mass fractions and temperature, which provides a more stringent metric for assessing the reconstruction accuracy of the full thermo-chemical state and the overall dimensionality reduction strategy. Further, apart from having a tangible physical meaning, the reconstruction error associated with the heat release rate also provides an overall assessment of the quality of the reduced manifold since the heat release rate represents an aggregate effect of all the quantities of interest. A key point to note is that the network predictions correspond to a scaled version of the original state since the network is trained with scaled input feature vectors. Hence, we suitably unscale the network outputs before calculating the errors in the reconstruction of thermo-chemical scalars. Analogous to the error metrics in [14], we examine the following error ratios,
\[r_{i}=\ln\left\{\frac{\varepsilon_{i}^{\text{CoK-PCA}}}{\varepsilon_{i}^{ \text{CoK-PCA-ANN}}}\right\}, \tag{8}\]
\[r_{i}=\ln\left\{\frac{\varepsilon_{i}^{\text{PCA}}}{\varepsilon_{i}^{\text{PCA -ANN}}}\right\}, \tag{9}\]
\[r_{i}=\ln\left\{\frac{\varepsilon_{i}^{\text{PCA-ANN}}}{\varepsilon_{i}^{ \text{CoK-PCA-ANN}}}\right\}, \tag{10}\]
to compare the relative performance of different methods such as CoK-PCA, PCA, CoK-PCA-ANN, and PCA-ANN considered in our study. Again, the subscript \(i\) can represent either the maximum (\(m\)) or average (\(a\)) errors. The value of \(r_{i}\) will be positive if the ratio inside the logarithm is greater than unity (the error in the denominator is lower), indicating that the technique represented by the denominator is more accurate than that represented by the numerator. In the results to be shown, following [14], we will denote positive \(r_{i}\) by blue and negative by brown colored bars.
## 4 Results
To investigate the accuracy of the proposed reconstruction methodology for combustion datasets, we consider four test cases representative of various physical and chemical phenomena (e.g., autoignition, flame propagation) ubiquitous in such scenarios:
* autoignition of a premixed ethylene/air mixture in a homogeneous reactor,
* autoignition, with two-stage ignition kinetics, of a dimethyl ether (DME)/air mixture in a homogeneous reactor,
* one-dimensional freely propagating planar laminar premixed flame of ethylene/air mixture,
* two-dimensional turbulent autoignition of ethanol/air at homogeneous charge compression ignition (HCCI) conditions.
The datasets represent an increasing order of complexity of chemical kinetics and flow-chemistry interactions. The first two cases represent homogeneous (spatially zero-dimensional) autoignition, albeit ethylene/air with conventional ignition kinetics, while DME/air has more complex low and high temperature ignition kinetics. The third case incorporates spatial variation, including convection and diffusion effects in the canonical planar laminar premixed flame configuration. The fourth case represents complex turbulence-chemistry interactions in a spatially 2-D configuration under conditions relevant to practical devices.
### Premixed ethylene-air in a homogeneous reactor
In this section, we consider the dataset that characterizes spontaneous ignition in a simple homogeneous (zero-dimensional) reactor. For dataset generation, we simulate a constant pressure reactor with a premixed ethylene-air mixture at a pressure P = 1.72 atm for a suite of nine flamelets, i.e., \(D_{i}\ \forall i\in\{1,2,\cdots,9\}\), each with a different initial temperature (T) and equivalence ratio (\(\phi\)) as illustrated in Fig. 1. Specifically, we perturb the initial conditions (T, \(\phi\)) from a reference state of \(D_{1}\equiv\) (T = 1200 K, \(\phi=0.4\)) by \(\Delta T=\pm 50\) K and \(\Delta\phi=\pm 0.025\). Thus, each flamelet is parameterized by a combination of initial (T, \(\phi\)) where T \(\in\) {1150 K, 1200 K, 1250 K} and \(\phi\ \in\{0.375,0.4,0.425\}\). The chemistry is represented by a 32-species, 206-reaction mechanism [25]. The homogeneous reactor simulations are performed with Cantera [26], and each flamelet is computed for different durations to ensure that the profiles remain nearly similar. For the reference state, the reactor is evolved for 2.5 ms with a time step of 1 \(\mu\)s to yield 2501 data samples. Hence, in this case, the original design matrix \(\mathbf{D}\) consists of \(n_{g}=\) 2501 points and \(n_{v}=\) 33 variables, comprising 32 species and temperature. The next step involves a data preprocessing stage where the design matrix for each state is zero-centered by subtracting the mean feature vector and normalized with the absolute maximum feature vector to obtain the scaled data matrix, \(\mathbf{X}\). This ensures an unbiased data representation with equal weightage given to all the features. To generate the low-dimensional manifolds, i.e., using PCA and CoK-PCA, we compute the principal vectors and values based on the scaled reference state (\(X_{1}\)), which eventually forms the basis for constructing the training/validation data. Next, we perform an aggressive truncation of the reduced manifolds by retaining \(n_{q}=5\) dominant principal vectors out of the \(n_{v}=33\) vectors that capture approximately 99% of the variance and 98% of the kurtosis in the dataset, respectively. Using the principal vectors computed on the scaled reference state (\(X_{1}\)), we obtain the LDM representation (score matrices) \(\mathbf{Z}_{q}^{4}\) and \(\mathbf{Z}_{q}^{2}\) through the dimensionality reduction procedure discussed in Sec. 2 for the CoK-PCA and PCA reduced manifolds, respectively. It should be noted that this projection is a linear operation.
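A sketch of the data-generation step using Cantera's constant-pressure reactor interface is shown below; the mechanism file name and fuel/oxidizer composition strings are placeholders standing in for the 32-species ethylene/air mechanism of [25].

```python
import numpy as np
import cantera as ct

def reactor_trajectory(mech, T0, phi, t_end=2.5e-3, dt=1e-6):
    """Constant-pressure homogeneous ignition at P = 1.72 atm, sampled
    every 1 microsecond; returns an (n_g x 33) matrix [Y_1..Y_32, T]."""
    gas = ct.Solution(mech)
    gas.set_equivalence_ratio(phi, 'C2H4', 'O2:1.0, N2:3.76')
    gas.TP = T0, 1.72 * ct.one_atm
    reactor = ct.IdealGasConstPressureReactor(gas)
    sim = ct.ReactorNet([reactor])
    rows = [np.hstack([reactor.thermo.Y, reactor.T])]   # state at t = 0
    for t in np.arange(dt, t_end + 0.5 * dt, dt):       # 2501 samples total
        sim.advance(t)
        rows.append(np.hstack([reactor.thermo.Y, reactor.T]))
    return np.array(rows)
```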
After obtaining the LDMs with PCA and CoK-PCA, the next step in the _a priori_ analysis is to evaluate the reduced manifolds in conjunction with the nonlinear reconstruction of the original thermo-chemical state through ANNs. For the ANN training phase, the input feature vectors are the rows of the score matrices (\(\mathbf{Z}_{q}^{4},\mathbf{Z}_{q}^{2}\)) and output vectors are the corresponding rows of the scaled original thermo-chemical state matrix \(\mathbf{X}\); these matrices are arranged based on the different flamelets (\(D_{j}\)s) using the train-test split shown in Fig. 1, i.e., flamelets \(D_{1}\), \(D_{2}\), \(D_{4}\), \(D_{6}\), and \(D_{8}\) are used for ANN training only. Through hyperparameter tuning, the best network architecture is ascertained with four hidden layers of widths of 40, 64, 40, and 32 neurons, respectively. In addition, the widths of input and output layers correspond to \(n_{q}=5\) and \(n_{v}=33\) neurons, respectively. Further, we use a hyperbolic tangent activation in the hidden layers and a custom sigmoid-based activation at the output layer, which ensures the network predictions are bounded in the same limits as the scaled inputs. Finally, we employ the widely used Adam optimizer (learning rate = \(10^{-3}\)) to facilitate robust, stable, fast network learning. Figure 2 shows the loss curves obtained for CoK-PCA-ANN and PCA-ANN, where convergence is achieved at around 100 epochs with a validation loss of about \(2\times 10^{-5}\).
Figure 1: Illustration of train-test split in ensemble training. Training states: \(D_{1},D_{2},D_{4},D_{6},D_{8}\) and testing states: \(D_{3},D_{5},D_{7},D_{9}\). To generate the LDMs, PCs are computed based on the reference state, \(D_{1}\).
Figure 2: Training and validation loss curves for (a) CoK-PCA-ANN and (b) PCA-ANN, respectively, for the premixed ethylene-air homogeneous reactor dataset.
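As a concrete illustration, a minimal PyTorch sketch of such a reconstruction network follows. The paper does not state its software framework, loss function, or the exact form of the sigmoid-based output activation, so PyTorch, a mean-squared-error loss, and a sigmoid rescaled to the \([-1,1]\) range of the scaled data are assumptions.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Nonlinear reconstruction: scores Z_q (n_q=5) -> scaled state (n_v=33)."""
    def __init__(self, n_q=5, n_v=33, widths=(40, 64, 40, 32)):
        super().__init__()
        layers, prev = [], n_q
        for w in widths:
            layers += [nn.Linear(prev, w), nn.Tanh()]  # tanh hidden layers
            prev = w
        layers.append(nn.Linear(prev, n_v))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # Sigmoid-based output rescaled to (-1, 1), the range of the
        # scaled inputs (an assumed form of the "custom sigmoid-based
        # activation" described in the text).
        return 2.0 * torch.sigmoid(self.net(z)) - 1.0

model = Decoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # assumed reconstruction loss
# One training step, given a batch of scores z and scaled states x:
# opt.zero_grad(); loss = loss_fn(model(z), x); loss.backward(); opt.step()
```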
Having trained on a subset of the flamelets, we use the neural network to predict (or reconstruct) the scaled species mass fractions and temperature for the test states, i.e., \(D_{j}\ \forall j\in\{3,5,7,9\}\). To ensure that the reconstructed thermo-chemical state results in a unit sum of species mass fractions, as is standard practice, any reconstructed species mass fractions with slightly negative values are set to zero, after which any deviation of the sum from unity is absorbed into the non-participating (bath) species. Using the reconstructed thermo-chemical scalars, \(\mathbf{D_{q}}\), we proceed to compute the species production rates and heat release rates. The reconstructed quantities are compared against the original thermo-chemical state, \(\mathbf{D}\), and their derived quantities (species production rates, heat release rates) using the error metrics, \(r_{m}\) and \(r_{a}\).
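A minimal sketch of this post-processing step (applied after un-scaling the network outputs back to physical mass fractions) might look as follows; the choice of bath-species index is an assumption.

```python
import numpy as np

def enforce_mass_conservation(Y, bath_idx):
    """Clip slightly negative reconstructed mass fractions to zero and
    absorb the deviation of the sum from unity into the bath species
    (e.g., the N2 column). Y has shape (n_samples, n_species)."""
    Y = np.clip(Y, 0.0, None)            # remove small negative values
    residual = 1.0 - Y.sum(axis=1)       # deviation from unit sum
    Y[:, bath_idx] += residual           # dump residual into bath species
    return Y
```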
In Fig. 3, we compare error ratios of linear and ANN reconstruction (Eqs. 8 and 9) of thermo-chemical scalars for both the dimensionality reduction methods. N\({}_{2}\), being an inert species, has not been included here. For most variables, ANN reconstruction outperforms linear reconstruction (demonstrated by blue bars) with respect to the average (\(r_{a}\)) and maximum (\(r_{m}\)) error metrics. An exception is temperature, where linear reconstruction performs marginally better in terms of \(r_{m}\) (demonstrated by brown bars). This observation is consistent for both methods, i.e., PCA and CoK-PCA. Not surprisingly, as shown in Fig. 4, the errors in species production rates and heat release rate, computed from the reconstructed thermo-chemical state, are significantly lower with ANN reconstruction compared with linear reconstruction. In general, as \(n_{q}\) increases, the accuracy improvements obtained with ANN in comparison to linear reconstruction decrease as the reduced manifold becomes an increasingly better linear approximation of the original state; in the limit of \(n_{q}=n_{v}\), linear reconstruction is exact, which is a scenario with no reduction in dimensionality. Since dimensionality needs to be reduced as aggressively as possible, one can conclude that ANN is better suited for reconstructing data from low-dimensional manifolds.
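Equations 8 and 9 are not reproduced in this excerpt; purely for illustration, the sketch below assumes \(r_{a}\) and \(r_{m}\) are ratios of the average and maximum absolute reconstruction errors of one method to those of another, so that values above unity favor the second method. The actual definitions in the paper may differ.

```python
import numpy as np

def error_ratios(err_ref, err_new):
    """Illustrative stand-in for the r_a / r_m error ratios: ratios of
    the average and maximum absolute reconstruction errors of a
    reference method to a competing one (an assumed convention)."""
    r_a = np.mean(np.abs(err_ref)) / np.mean(np.abs(err_new))
    r_m = np.max(np.abs(err_ref)) / np.max(np.abs(err_new))
    return r_a, r_m

# e.g., per-variable comparison of linear vs. ANN reconstruction:
# r_a, r_m = error_ratios(x_true - x_linear, x_true - x_ann)
```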
Next, we compare the two dimensionality reduction techniques against each other, both with ANN reconstruction. Figure 5 shows the error ratios for PCA-ANN vs. CoK-PCA-ANN (Eq. 10) in reconstructing thermo-chemical scalars (left), and species production rates and heat release rates (right). For the scalars, it can be clearly seen that CoK-PCA-ANN outperforms PCA-ANN in predictions of 25 and 21 (out of 33) variables for \(r_{m}\) and \(r_{a}\) metrics, respectively. The trend becomes more prominent in the case of species production rates and heat release rates where CoK-PCA-ANN predicts production rates more accurately for 23 out of the 32 species with the \(r_{a}\) metric and 24 out of the 32 species with the \(r_{m}\) metric. Notably, CoK-PCA-ANN captures heat release rate better than PCA-ANN in terms of both the error metrics.
While \(r_{m}\) and \(r_{a}\) are global error metrics, it is instructive to examine the temporal distribution of reconstruction errors and determine whether the errors are low/high in the unburnt, igniting, or fully burnt portions of the flame. Figure 6 presents the absolute reconstruction error of heat release rate plotted against time for the four test flamelets: \(D_{3},D_{5},D_{7}\), and \(D_{9}\). For reference, the progress variable is plotted on the right \(y\)-axis of each figure. Both methods incur significant error in the reaction zones, with the peak at intermediate values of the progress variable, which occurs at 0.8 ms, 0.4 ms, 1 ms, and 1.9 ms for \(D_{3},D_{5},D_{7}\), and \(D_{9}\), respectively. As expected, the error is much lower on the unburnt and the fully burnt portions. Further, for \(D_{3}\) and \(D_{9}\), CoK-PCA-ANN incurs a significantly lower peak reconstruction error than PCA-ANN (demonstrated by the blue peaks smaller in magnitude than the red peaks), which is reflected in the \(r_{m}\) error presented in Fig. 5 (d). However, the peak error for \(D_{7}\) is higher for CoK-PCA-ANN. For \(D_{5}\), both the methods incur essentially the same magnitude of errors and perform at par with each other. Nonetheless, across the four test flamelets, CoK-PCA-ANN yields an overall smaller average reconstruction error than PCA-ANN, as reflected in the \(r_{a}\) error presented in Fig. 5 (c). These comparisons provide further evidence that the proposed CoK-PCA-ANN method predicts the overall chemical kinetics in the reaction zone better than PCA-ANN.
### Two-stage autoignition of dimethyl ether-air mixture
In contrast to ethylene, which has conventional single-stage ignition chemistry, a class of hydrocarbon fuels characterized by more complex two-stage ignition (a low-temperature and a high-temperature stage) chemistry is increasingly considered suitable for novel combustion concepts such as homogeneous charge compression ignition (HCCI) [27]. HCCI relies on volumetric autoignition of a (nearly) homogeneous fuel charge and realizes the benefits of low emissions due to fuel-lean combustion while also achieving high efficiencies.
Figure 4: Comparison of errors in the reconstruction of species production rates and heat release rate for (a), (b) CoK-PCA vs. CoK-PCA-ANN and (c), (d) PCA vs. PCA-ANN for the premixed ethylene-air homogeneous reactor dataset. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
Figure 3: Comparison of errors in the reconstruction of thermo-chemical scalars for (a), (b) CoK-PCA vs. CoK-PCA-ANN and (c), (d) PCA vs. PCA-ANN for the premixed ethylene-air homogeneous reactor dataset. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
Figure 5: Comparison of errors in the reconstruction of thermo-chemical scalars (left), species production rates and heat release rate (right) for PCA-ANN vs. CoK-PCA-ANN for the premixed ethylene-air homogeneous reactor dataset. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
Figure 6: Temporal evolution of absolute errors in reconstructed heat release rates for the test states - (a) \(D_{3}\), (b) \(D_{5}\), (c) \(D_{7}\), and (d) \(D_{9}\) for the premixed ethylene-air homogeneous reactor dataset. The progress variable is plotted in grey for reference.
However, controlling the ignition timing is the biggest challenge since the charge ignites spontaneously due to compression heating. Consequently, modeling the ignition processes of two-stage ignition fuels under engine-relevant conditions is an open challenge. Dimethyl ether (DME) is a prominent example, and its ignition behavior resulting from turbulence-chemistry interactions at engine-relevant conditions has been widely studied using DNS [27; 28; 29]. From a dimensionality reduction perspective, DME ignition presents distinct challenges from that of ethylene; the chemical pathways and the participating chemical species for the low-temperature ignition chemistry are different from high-temperature chemistry. This motivates us to test the capability of CoK-PCA-ANN in reconstructing the original state space from the reduced manifold for the two-stage ignition of DME.
We consider a constant pressure zero-dimensional homogeneous reactor of a stoichiometric mixture of hydrogen-enriched DME fuel and air. The ratio of hydrogen to DME is 3:2 in the fuel mixture, similar to that in [28]. The initial pressure is 1 atm while the initial temperature is varied from 600 K to 800 K in increments of 25 K, for a total of nine flames. This range of initial temperatures is such that the flames exhibit both two-stage and single-stage ignition behavior. Finite rate chemistry is specified using the 39-species, 175-reaction skeletal mechanism developed in [28], and the flames are simulated with Cantera [26] for a duration of 1 s with a fixed time step of 0.1 ms. In this case, the original design matrix \(\mathbf{D}\) consists of \(n_{g}=10001\) points and \(n_{v}=40\) variables, comprising 39 species and temperature.
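A minimal Cantera sketch of one such reactor simulation is shown below; the mechanism file name and species labels are placeholders standing in for the 39-species skeletal mechanism of [28], and would need to match the actual mechanism used.

```python
import numpy as np
import cantera as ct

# Constant-pressure homogeneous reactor for the hydrogen-enriched DME case.
gas = ct.Solution('dme_skeletal_39sp.yaml')   # hypothetical mechanism file
gas.TP = 650.0, ct.one_atm                    # one of the nine initial T's
gas.set_equivalence_ratio(1.0,
                          fuel={'H2': 3.0, 'CH3OCH3': 2.0},   # 3:2 H2:DME
                          oxidizer={'O2': 1.0, 'N2': 3.76})

reactor = ct.IdealGasConstPressureReactor(gas)
sim = ct.ReactorNet([reactor])

dt, t_end = 1e-4, 1.0                         # 0.1 ms steps for 1 s
states = ct.SolutionArray(gas, extra=['t'])
for t in np.arange(0.0, t_end + dt, dt):
    sim.advance(t)
    states.append(reactor.thermo.state, t=t)

# Design matrix D: 10001 samples x (39 species + temperature).
D = np.column_stack([states.Y, states.T])
```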
Traditional linear dimensionality reduction techniques, such as PCA, may not effectively capture the nonlinear interactions present in the data. The data associated with the two-stage ignition of DME is high-dimensional and contains intricate patterns, including transient behavior, multiple ignition modes, and variations under different operating conditions. This complexity makes it difficult to find a low-dimensional representation that captures the essential information while discarding irrelevant or redundant features. Reconstructing two-stage ignition with CoK-PCA-ANN therefore offers several benefits: it enables a deeper understanding of DME combustion, facilitates the development of more accurate ignition models, and reduces data dimensionality by retaining pertinent features and eliminating redundant information, thereby improving computational efficiency while maintaining prediction accuracy.
CoK-PCA and PCA are performed using the data of all nine flames, and dimensionality is reduced to \(n_{q}=5\). To train the ANNs for reconstructing the full thermo-chemical state from the reduced state, the data is split into training and testing sets, with five flames (initial temperatures of 600 K, 650 K, 700 K, 750 K, 800 K) comprising the former, and the rest, the latter. We randomly shuffle the training dataset and set aside 20% for the validation process. After conducting hyperparameter tuning, the network architecture is determined with two hidden layers comprising 10 and 20 neurons, respectively. The input and output layers have a width of \(n_{q}=5\) and \(n_{v}=40\) neurons, respectively. A hyperbolic tangent activation function for the hidden layers, a custom sigmoid-based activation function for the output layer, and the Adam optimizer are used as before.
Figure 7 shows the training and validation loss for the PCA-ANN and CoK-PCA-ANN. It is evident that the validation loss remains consistently only slightly higher than the training loss (\(\sim 2.5\times 10^{-4}\)) for a significant number of epochs (200-500), and the model has converged. We employ early stopping to achieve this convergence, thereby saving computational resources and preventing overfitting. This indicates that the model is generalizing well to unseen data. Despite the slight difference in loss, the model demonstrates robustness and reliability in its predictions. This suggests that the model has learned intricate patterns present in the two-stage ignition dataset and features from the training data that allow it to make accurate predictions on new examples, resulting in a reliable and effective model.
Figure 7: Training and validation loss curves for (a) CoK-PCA-ANN and (b) PCA-ANN, respectively, for the DME two-stage autoignition dataset.
The error ratios in thermo-chemical scalars, species production rates, and heat release rates are computed using Eqs. 8-10 and visualized in Figures 8, 9, and 10. Note that, unlike the previous case, N\({}_{2}\) was not excluded as an inert species in this analysis. The results demonstrate that the overall nonlinear reconstruction employing ANN (blue bars) exhibits lower error compared to linear reconstruction (brown bars) across most species, temperature, production rates, and heat release rates for both PCA and CoK-PCA methods (Figures 8, 9).
Figure 10 illustrates the error ratio between PCA-ANN and CoK-PCA-ANN for thermo-chemical scalars (10 (a) and (b)), species production rates, and heat release rates (10 (c) and (d)). The errors in reconstructed thermo-chemical scalars show mixed trends, unlike the ethylene-air dataset for which CoK-PCA-ANN was consistently more accurate than PCA-ANN. However, the accuracy of species production rates and, more importantly, the heat release rate for CoK-PCA-ANN is better than that of PCA-ANN. This result reinforces the notion that error metrics based only on thermo-chemical state reconstruction may not be sufficient measures of accuracy. Going beyond the error ratio, and similar to the ethylene-air case, we plot the absolute errors of heat release rate for one of the DME-air flames from the test set with an initial temperature of 625 K as shown in Fig. 11. Since this mixture has two-stage ignition, the heat release rate for the second stage (at \(\sim\) 0.25 s) is orders of magnitude larger than the first stage (at \(\sim\) 0.047 s). To make the comparison clearer, insets in Fig. 11 show zoomed views of the two stages. It is evident that the absolute errors of heat release rate are greater by up to an order of magnitude with linear reconstruction (Fig. 11 (a)) compared with ANN-based reconstruction (Fig. 11 (b)). Moreover, while the errors for the first stage are comparable between PCA-ANN and CoK-PCA-ANN, for the second stage, CoK-PCA-ANN is more accurate.
### Premixed ethylene-air laminar flame
The third case we consider is a one-dimensional freely-propagating planar laminar premixed flame of the ethylene-air mixture. In addition to the chemical reactions that govern the evolution of homogeneous reactors of the previous two cases, this case has effects of convection and diffusion that influence the thermo-chemical evolution. The chemistry is represented by the same 32-species, 206-reaction mechanism [25], resulting in \(n_{v}=33\) features. The freely-propagating flame is simulated in a one-dimensional domain of 0.02 m discretized with a grid of around 550 points. The pressure is kept at 1 atm, and a parametric variation is considered for the unburnt mixture conditions. Analogous to the ensemble training performed in Sec. 4.1, to construct the required training and testing data, we perturb the unburnt mixture temperature and equivalence ratio, (T, \(\phi\)), by \(\Delta T=\pm 50\) K and \(\Delta\phi=\pm 0.025\) from the reference state, i.e., \(D_{1}\equiv(T=300~{}\mathrm{K},\phi=0.6)\). This effectively results in nine configurations, \(D_{i}~{}\forall i\in\{1,2,\cdots,9\}\), one for each combination of (T, \(\phi\)) where T \(\in\{250~{}\mathrm{K},300~{}\mathrm{K},350~{}\mathrm{K}\}\) and \(\phi~{}\in\{0.575,0.6,0.625\}\). Again, to generate the CoK-PCA and PCA reduced manifolds, the principal components are computed with respect to the scaled reference state, \(X_{1}\), by selecting \(n_{q}=5\) leading principal vectors out of the \(n_{v}=33\) vectors that capture approximately 99% of the variance and 98% of the kurtosis in the dataset, respectively. Following the dimensionality reduction procedure in Sec. 2, we compute the score matrices, \(\mathbf{Z}_{q}^{4}\) and \(\mathbf{Z}_{q}^{2}\) for the CoK-PCA and PCA low-dimensional manifolds, respectively.
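For reference, a minimal Cantera sketch of the freely-propagating flame configuration follows; the mechanism file name, fuel label, and refinement criteria are assumptions chosen for illustration.

```python
import numpy as np
import cantera as ct

# One-dimensional freely-propagating premixed ethylene-air flame.
gas = ct.Solution('ethylene_32sp.yaml')        # hypothetical mechanism file
gas.TP = 300.0, ct.one_atm                     # reference unburnt state D1
gas.set_equivalence_ratio(0.6, fuel='C2H4',
                          oxidizer={'O2': 1.0, 'N2': 3.76})

flame = ct.FreeFlame(gas, width=0.02)          # 0.02 m domain
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)  # illustrative
flame.solve(loglevel=0, auto=True)

# Rows of the design matrix: species mass fractions + temperature at each
# of the ~550 grid points (flame.Y has shape n_species x n_points).
D = np.column_stack([flame.Y.T, flame.T])
```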
For the ANN training, a similar split of the data into training and testing sets, as in Sec. 4.1, is performed here; \(D_{1}\), \(D_{2}\), \(D_{4}\), \(D_{6}\), and \(D_{8}\) are used for training and the rest for testing. Accordingly, we construct the input feature vectors and ground truths to train a neural network with four hidden layers of widths 48, 48, 48, and 56 neurons. The widths of input and output layers are \(n_{q}=5\) and \(n_{v}=33\) neurons, respectively. The layer activation functions remain the same as before, a hyperbolic tangent function, with the use of the Adam optimizer (learning rate = \(10^{-4}\)) for training. Figures 12 (a) and (b) depict the loss curves obtained for CoK-PCA-ANN and PCA-ANN, respectively.
Following Sec. 4.1, we assess the reconstruction accuracy of the trained models on the test states, i.e., \(D_{j}~{}\forall j\in\{3,5,7,9\}\). Similar to the trends observed in previous cases, ANN reconstruction outperforms linear reconstruction for all the quantities of interest, the plots of which are not presented here for brevity. With reconstruction based on ANNs, we next focus on the performance of CoK-PCA-ANN against PCA-ANN in terms of the error ratios (\(r_{a}\), \(r_{m}\)), which are presented in Fig. 13. For the accuracy of thermo-chemical scalars, we observe a different trend in this case, with PCA-ANN being more accurate than CoK-PCA-ANN for 19 out of the 33 variables for \(r_{a}\). However, as hypothesized, CoK-PCA-ANN performs better than PCA-ANN in terms of the \(r_{m}\) metric in accurate predictions of 21 out of the 33 variables. Further, while comparing errors in the reconstruction of species production rates and heat release rates, CoK-PCA-ANN dominates over PCA-ANN in both error ratios. In particular, CoK-PCA-ANN significantly improves upon PCA-ANN by predicting production rates for 22 out of 32 species in terms of the \(r_{m}\) error and 18 out of 32 species in terms of the \(r_{a}\) error. More importantly, it incurs lower errors in reconstructing the heat release rate in both metrics, which is an overall measure of the fidelity of the chemical system. This case clearly illustrates the fact that errors in reconstructing the thermo-chemical state alone might not be a sufficient measure of accuracy for a given dimensionality reduction technique, and a broader set of metrics might be prudent.
The profile of absolute errors in heat release rates obtained for both the methods, CoK-PCA-ANN (dashed blue) and PCA-ANN (solid red), is shown in Fig. 14 for the four test states, \(D_{3}\), \(D_{5}\), \(D_{7}\), and \(D_{9}\). We observe that CoK-PCA-ANN outperforms PCA-ANN in accurately predicting the steady-state flame location for all the test states, thereby characterizing flame propagation better. This behavior is consistent with the \(r_{m}\) errors presented in Fig. 13 (d). Further, both techniques capture the non-reacting regions reasonably well in all the test states. However, in these regions, CoK-PCA-ANN performs marginally better than PCA-ANN by predicting nearly zero heat release rates for the test flames, \(D_{5}\), \(D_{7}\), and \(D_{9}\) (Figures 14 (b), (c), (d)). It should be noted that the small non-zero reconstruction errors incurred by both methods in these regions (i.e., predicting non-zero heat release in the non-reacting zones) can be attributed to statistical inconsistencies or the stochasticity of the ANN training process. Consequently, this is reflected in the \(r_{a}\) metric (average error), which is smaller for CoK-PCA-ANN than for PCA-ANN (demonstrated by blue bars) in Fig. 13 (c).
### Homogeneous charge compression ignition
In this section, we examine a dataset that encompasses the influence of spatial transport, involving convection and diffusion, as well as turbulence. The dataset focuses on homogeneous charge compression ignition (HCCI) of ethanol, which is representative of internal combustion engines [30]. The simulation corresponds to high-pressure, high-temperature auto-ignition of a turbulent mixture composed of premixed ethanol-air and combustion products, emulating the process of "exhaust gas recirculation" (EGR). The simulation is performed in a fully periodic domain with a two-dimensional spatial grid of \(672\times 672\) points. The initial conditions include a nominal pressure of \(45\,\mathrm{atm}\) and a mean temperature of \(924\,\mathrm{K}\). The reactants are set to an equivalence ratio of 0.4. To account for the uneven mixing caused by EGR, a spatial temperature fluctuation and a separately computed divergence-free turbulent velocity field are superimposed onto the system. Furthermore, the simulation also considers the effects of compression heating resulting from the motion of the piston. The chemistry is represented by a 28-species reaction mechanism. Thus, at each simulation snapshot, the design matrix, **D** consists of \(n_{g}=672\times 672\) data samples and \(n_{v}=29\) thermo-chemical scalars. For this study, we consider the temporal checkpoint at t = 1.2 ms [13], which corresponds to the propagation of the flame fronts in the bulk of the global domain, as shown in the heat release rate contours in Fig. 15, which has been saturated to a peak heat release rate of \(1\times 10^{9}\) Jm\({}^{-3}\)s\({}^{-1}\) in order to demonstrate the growth in the size of the ignition kernels.
Figure 8: Comparison of errors in the reconstruction of thermo-chemical scalars of the DME two-stage autoignition dataset for (a), (b) CoK-PCA vs. CoK-PCA-ANN and (c), (d) PCA vs. PCA-ANN. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
For the testing state, we consider the simulation snapshot at 1.19 ms. In other words, we are interested in investigating the efficacy of the proposed CoK-PCA-ANN method in predicting the thermo-chemical state at an unseen state (t = 1.19 ms) while being trained on a subsequent checkpoint at t = 1.2 ms. To obtain the score matrices \(\mathbf{Z}_{q}^{4}\) and \(\mathbf{Z}_{q}^{2}\), we use the principal vectors computed on the reference state, i.e., on t = 1.2 ms. The low-dimensional manifolds are constructed by retaining \(n_{q}=5\) out of \(n_{v}=29\) principal vectors that correspond to approximately 99% of the variance and kurtosis in the reduced PCA and CoK-PCA manifolds, respectively. The next step involves constructing the required train and test data comprising input feature vectors and corresponding ground truths. It should be noted that since this is a two-dimensional dataset, we suitably flatten it to yield \(N=451584\) data samples. Further, a neural network with three hidden layers of widths 8, 8, and 64 neurons is trained till convergence with an Adam optimizer (learning rate = 0.028). In addition, early stopping is employed to ensure the network does not lead to overfitting of the training data. The corresponding loss curves obtained for the manifolds are presented in Fig. 16. We then use the trained network to predict the thermo-chemical scalars at t = 1.19 ms for both CoK-PCA and PCA reduced manifolds. In a similar manner, using the reconstructed thermo-chemical scalars, species production rates and heat release rates are computed. In the following, we present a comparison of CoK-PCA-ANN and PCA-ANN in terms of the reconstruction errors of the aforementioned quantities.
Figure 9: Comparison of errors in the reconstruction of species production rates and heat release rate of the DME two-stage autoignition dataset for (a), (b) CoK-PCA vs. CoK-PCA-ANN and (c), (d) PCA vs. PCA-ANN. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
In Fig. 17, it is evident that CoK-PCA-ANN performs significantly better than PCA-ANN in the reconstruction of thermo-chemical scalars with more accurate predictions of 20 out of the 29 variables in both \(r_{a}\) and \(r_{m}\) errors. Furthermore, it completely dominates PCA-ANN in accurately reconstructing production rates for 90% and 93% of the species in terms of \(r_{a}\) and \(r_{m}\) metrics, respectively. In addition, it provides an accurate representation of the chemical dynamics in the reaction zones by incurring lower reconstruction errors for heat release rates in both metrics (\(r_{m},r_{a}\)). Contrary to the observations in [14], where the CoK-PCA-based manifold performed poorly in terms of the average errors (\(r_{a}\)) while considering the entire spatial domain, CoK-PCA, when coupled with ANN, overcomes this issue and better represents the stiff chemical dynamics in the average error as well. Next, we plot the reconstructed heat release rate contours in Fig. 18 to get a qualitative difference in the magnitude of the heat release rate within the ignition kernels achievable through CoK-PCA and PCA-reduced manifolds. Due to the inherent ability of excess kurtosis to suitably capture outliers, CoK-PCA-ANN identifies the ignition zones better (as demonstrated by the better matching of color hues with the original) in the entire domain, which is in good agreement with the lower maximum error (\(r_{m}\)) as shown in Fig. 17 (d) for heat release rate. Additionally, due to the coupling of ANN with CoK-PCA, the non-igniting regions are also represented better than with PCA-ANN, leading to a lower average error (\(r_{a}\)) as presented in Fig. 17 (c).
## 5 Conclusions and future work
In this paper, we have proposed an enhanced version of the co-kurtosis PCA (CoK-PCA) based dimensionality reduction method, namely CoK-PCA-ANN, which leverages the potential of artificial neural networks (ANNs) to model complex nonlinear relationships inherent between the aggressively truncated low-dimensional manifolds and the original thermo-chemical state. The rationale behind this work is to assess the overall efficacy of the CoK-PCA method in comparison to PCA in conjunction with nonlinear reconstruction methods and expand its applicability to chemically reacting systems presenting stiff dynamics. A brief overview of the various state-of-the-art nonlinear reconstruction methods, such as ANNs, Gaussian process regression (GPR), kernel density methods, autoencoders, etc., combined with PCA was presented; these methods are yet to be explored in the CoK-PCA framework, which motivates this work. The framework of the proposed CoK-PCA-ANN dimensionality reduction method was presented with a discussion on the generation of the low-dimensional manifold using linear projection (encoding) with CoK-PCA followed by nonlinear reconstruction of the original thermo-chemical state space (decoding) using ANNs. The performance of the CoK-PCA-ANN method was benchmarked against the linear CoK-PCA and PCA-ANN methods for four combustion test cases that characterize various physical and chemical phenomena in reacting flows (e.g., autoignition, flame propagation): homogeneous reactor simulations representing conventional single-stage and complex two-stage autoignition, a one-dimensional freely propagating laminar premixed flame exhibiting flame propagation, and two-dimensional turbulent autoignition in homogeneous charge compression ignition conditions. In contrast to linear methods, ANNs demonstrated significantly higher reconstruction accuracies for the CoK-PCA and PCA manifolds in terms of thermo-chemical scalars, species production rates, and heat release rates with aggressive truncation (low \(n_{q}\)). Further, the quality of the manifolds was assessed in conjunction with ANN for the aforementioned quantities of interest. As hypothesized, CoK-PCA-ANN outperforms PCA-ANN in all the test cases in terms of the maximum (\(r_{m}\)) errors for the thermo-chemical scalars, species production rates, and most importantly, heat release rates, thereby reinforcing the fact that the chemical kinetics prevalent in the ignition zones, representative of stiff dynamics, are captured more accurately by CoK-PCA than PCA. Contrary to the findings in the previous assessment of plain vanilla CoK-PCA [14], CoK-PCA-ANN incurred lower reconstruction errors in the average error metric (\(r_{a}\)) as well, with a better representation of the unburnt reactants and burnt products in all the test cases. Additionally, CoK-PCA-ANN outperforms PCA-ANN in accurately predicting an unseen test state different from the training set considered in the 2D HCCI case. To summarize, the results from the above analyses suggest that CoK-PCA-ANN realizes the advantages of both CoK-PCA and ANNs and proves reliable, robust, and generalizable to unseen thermo-chemical states that share similar ignition kinetics as the training state.
Figure 10: Comparison of errors in the reconstruction of thermo-chemical scalars (left), species production rates and heat release rate (right) for PCA-ANN vs. CoK-PCA-ANN of the DME two-stage autoignition dataset. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
Figure 11: Temporal evolution of absolute errors in the reconstruction of heat release rates for the DME two-stage autoignition flame with an initial temperature of 625 K.
Figure 12: Training and validation loss curves for (a) CoK-PCA-ANN and (b) PCA-ANN, respectively, for the one-dimensional planar laminar premixed ethylene-air flame dataset.
Figure 14: Spatial variation of absolute errors in reconstructed heat release rates for the test states - (a) \(D_{3}\), (b) \(D_{5}\), (c) \(D_{7}\), and (d) \(D_{9}\) for the one-dimensional planar laminar premixed ethylene-air flame dataset. The progress variable is plotted in grey for reference.
Figure 13: Comparison of errors in the reconstruction of thermo-chemical scalars (left), species production rates and heat release rate (right) with PCA-ANN vs. CoK-PCA-ANN for the one-dimensional planar laminar premixed ethylene-air flame dataset. Top and bottom plots in each column represent \(r_{a}\) and \(r_{m}\) respectively.
However, it should be remarked that, in this paper, the investigation of CoK-PCA-based nonlinear reconstruction using ANNs was carried out in an _a priori_ setting. It is well known that these data-driven dimensionality reduction methods are capable of accelerating numerical simulations of reacting flows by solving a reduced set of principal component transport equations as opposed to solving a very high-dimensional system of species conservation equations. Such _a posteriori_ validation, already performed for PCA, remains to be explored for CoK-PCA and forms the future scope of this work.
## Acknowledgments
The work at IISc was supported under a project from the National Supercomputing Mission, India. DN is a recipient of the Ansys M.Tech. (Research) Fellowship. AJ was funded by a project from Shell Technology Center, Bengaluru, India. KA is a recipient of the Arcot Ramachandran Young Investigator award, IISc. Work by HK and UB was part of the ExaLearn Co-design Center, supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
|
2303.01141 | DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint
Satisfaction | As machine learning models, specifically neural networks, are becoming
increasingly popular, there are concerns regarding their trustworthiness,
specially in safety-critical applications, e.g. actions of an autonomous
vehicle must be safe. There are approaches that can train neural networks where
such domain requirements are enforced as constraints, but they either cannot
guarantee that the constraint will be satisfied by all possible predictions
(even on unseen data) or they are limited in the type of constraints that can
be enforced. In this paper, we present an approach to train neural networks
which can enforce a wide variety of constraints and guarantee that the
constraint is satisfied by all possible predictions. The approach builds on
earlier work where learning linear models is formulated as a constraint
satisfaction problem (CSP). To make this idea applicable to neural networks,
two crucial new elements are added: constraint propagation over the network
layers, and weight updates based on a mix of gradient descent and CSP solving.
Evaluation on various machine learning tasks demonstrates that our approach is
flexible enough to enforce a wide variety of domain constraints and is able to
guarantee them in neural networks. | Kshitij Goyal, Sebastijan Dumancic, Hendrik Blockeel | 2023-03-02T10:40:50Z | http://arxiv.org/abs/2303.01141v3 | # DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
###### Abstract
As machine learning models, specifically neural networks, are becoming increasingly popular, there are concerns regarding their trustworthiness, especially in safety-critical applications, e.g., actions of an autonomous vehicle must be _safe_. There are approaches that can train neural networks where such domain requirements are enforced as constraints, but they either cannot guarantee that the constraint will be satisfied by all possible predictions (even on unseen data) or they are limited in the type of constraints that can be enforced. In this work, we present an approach to train neural networks which can enforce a wide variety of constraints and guarantee that the constraint is satisfied by all possible predictions. The approach builds on earlier work where learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks.
## 1 Introduction
Widespread use of state-of-the-art machine learning (ML) techniques has given rise to concerns regarding the trustworthiness of these models, especially in safety-critical and socially-sensitive domains. For example, in autonomous vehicles that employ ML approaches to predict the next action, the actions must be safe. Such domain requirements can often be formulated as logical constraints on combinations of inputs and outputs (e.g., whenever the input satisfies some condition A, the output must satisfy condition B). Crucially, these domain constraints must be satisfied for all possible inputs to the model, not just the training data. This has motivated researchers to develop approaches that can train ML models that satisfy a given constraint for all possible predictions.
A general approach to enforcing constraints in ML models is to include a regularization term in the cost function, which typically adds a cost for every violation of a constraint in the training set (e.g., Xu et al. (2018); Diligenti et al. (2017)). Such an approach can reduce the number of violations in the training set, but it does not necessarily eliminate them. Moreover, even when it does, this does not guarantee that other instances (outside the training set) cannot violate the constraint. Alternatively, for some model types, such as neural networks, the architecture of the model can be chosen in such a way that certain types of constraints are guaranteed to be satisfied for each possible input (not just training data) (Sivaraman et al. (2020); Hoernle et al. (2022)). But this is typically possible only for specific combinations of model and constraint types.
This raises the question of whether generality and certainty can be combined. Is it possible to come up with a _generally applicable_ approach that guarantees the satisfaction of constraints not only on the training set but on the _entire_ input space, and this for any kind of model? A step in this direction was made by Goyal et al. (2023), who propose a relatively general solution for linear models. Their approach translates the learning problem into a MaxSMT setting. MaxSMT stands for Maximum Satisfiability Modulo Theories. It is an extension of SAT solving that can take background theories into account (e.g., for reasoning about the real numbers) and that distinguishes soft and hard constraints: it returns a solution that satisfies all hard constraints and as many soft constraints as possible. Goyal et al. (2023) model the requirements as hard constraints and maximize the fit to the data using soft constraints. Their approach works for a wide range of constraint types but only handles linear models, and assumes a bounded input domain.
In this paper, we substantially extend the applicability of that approach by showing how it can be used to train feedforward neural networks. Two key modifications to the network's architecture and training procedure suffice for achieving this: (1) propagating the constraints over the network layers to the last layer, which involves adding skip connections (He et al. (2016)) that copy the input to the penultimate layer and deriving bounds on the penultimate layer from the bounds on the input (Sunaga (1958)), and (2) training the network using a hybrid procedure that uses MaxSMT to determine the weights in the output layer and gradient descent for all other weights. We demonstrate that with these changes, neural networks can be trained that have good performance and guarantee the satisfaction of given constraints. In the following, we first describe the problem setting, then briefly describe the existing approach that we build on, before detailing our approach. Afterward, we compare our approach to related work and evaluate it experimentally.
## 2 Problem Statement
In this paper, we focus on semantic constraints which constrain the behavior of the model: the predictions are required to adhere to certain requirements, e.g. safety constraints (Katz et al. (2017)), structured output constraints (Xu et al. (2018)), and hierarchical constraints (Hoernle et al. (2022)). Specifically, we focus on domain constraints: constraints that must hold for all instances in the domain. We assume that the constraints can be written with universally quantified logic formulas. In particular, we consider the constraints of the form:
\[\mathsf{K}:\forall\mathbf{x}\in\bigtimes_{i=1}^{n}[l_{i},u_{i}],\mathbf{x} \models P\Rightarrow f_{\mathbf{w}}(\mathbf{x})\models C \tag{1}\]
Which states that if an input \(\mathbf{x}\) satisfies a condition \(P\), the output must satisfy a condition \(C\). An example of a safety constraint that can be represented in this way is: "if an object comes in front of a moving vehicle, the vehicle must stop". Solving for such a constraint exactly using specific constraint solvers allows us to enforce the constraint for all possible inputs. The reason for focusing on this type of constraints is mostly practical: the constraint solving technology we use was found to scale well enough for these types of constraints. It is not a theoretical limitation: any constraint that can be handled effectively by current constraint solving technology can be handled by the approach we develop. As demonstrated later in section 6, our chosen constraint formulation (equation 1) already provides us with a variety of tasks to work with.
To make the search procedure tractable, and because features in ML problems are typically bounded (e.g., a pixel in an image takes a value in \([0,255]\)), we use bounded domains \(\bigtimes_{i=1}^{n}[l_{i},u_{i}]\) instead of \(\mathbb{R}^{n}\). As a natural choice, the training data \(D\subseteq\mathcal{X}\times\mathcal{Y}\) can be used to calculate these bounds, i.e. \(l_{i}=\min_{D}(\mathbf{x}_{i})\) & \(u_{i}=\max_{D}(\mathbf{x}_{i})\). We rely on the SMT solver Z3 (Moura and Bjorner (2008)) to solve such constraints. Other prominent constraint-solving paradigms like MILP and CP do not support such constraints over continuous domains (Nethercote et al. (2007)).
We are now ready to formulate our problem statement:
**Definition 2.1**.: **Learning problem.** _Given a training set \(D\subseteq\mathcal{X}\times\mathcal{Y}\), a set of domain constraints \(\mathcal{K}\), a loss function \(\mathcal{L}\), and a hypothesis space containing functions \(f_{\mathbf{w}}:\mathcal{X}\rightarrow\mathcal{Y}\); find \(\mathbf{w}\) such that \(f_{\mathbf{w}}\) satisfies constraints in \(\mathcal{K}\) and \(\mathcal{L}(f_{\mathbf{w}},D)\) is minimal among all such \(f_{\mathbf{w}}\)._
\(f_{\mathbf{w}}\) is assumed to be a feedforward neural network. The output layer is real-valued without any activation, and a softmax layer is used to calculate class probabilities for classification problems. The language of the constraints in \(\mathcal{K}\) is a subset of first-order logic which allows for universal quantifiers. In practice, we focus on the constraints of the form in equation 1, and \(\mathcal{K}\) can be a set of multiple such universally quantified constraints.
## 3 Background - Satisfiability Descent (SaDe)
Satisfiability (SAT) is the problem of finding a solution that satisfies a given Boolean formula, e.g. \(\neg a\lor b\). Satisfiability Modulo Theories (SMT) extends SAT such that formulas can be expressed in other theories, such as Real Arithmetic, e.g., \((a+b>3)\wedge(b<1)\) with real \(a\) and \(b\). Maximum Satisfiability (MaxSMT) generalizes the SMT problem: given a set of hard constraints \(\mathcal{H}\) and soft constraints \(\mathcal{S}\), it aims to find a solution that satisfies all constraints in \(\mathcal{H}\) and as many as possible in \(\mathcal{S}\).
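As a toy illustration of MaxSMT over Real Arithmetic using Z3's Python API (a sketch for intuition, not code from the paper): the two hard constraints below jointly force \(a>2\), so the first soft constraint must be dropped, while the second can still be satisfied.

```python
from z3 import Real, Optimize, sat

a, b = Real('a'), Real('b')
opt = Optimize()
opt.add(a + b > 3, b < 1)   # hard constraints (imply a > 2)
opt.add_soft(a < 2)         # soft: conflicts with the hard constraints
opt.add_soft(b > 0)         # soft: satisfiable
if opt.check() == sat:
    print(opt.model())      # e.g., a = 3, b = 1/2
```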
Our work builds on _SaDe_ (Goyal et al. [2023]), which is a learning algorithm that can enforce constraints in linear models and guarantee satisfaction. SaDe modifies the parameter update process of mini-batch gradient descent (Goodfellow et al. [2016]). Unlike gradient descent, which updates the solution in the direction that minimizes the loss, SaDe solves a MaxSMT problem to find the solution at each iteration, which is formulated in such a way that its solution satisfies the domain constraint and is close to the solution that gradient descent might lead to. At each iteration, a local search space is defined for the MaxSMT problem around the previous solution. This search space is a fixed-sized n-cube where each edge of the n-cube is a hyperparameter called _maximal step size_ that upper bounds the size of the update in each dimension (illustrated for two parameters in figure 1(a)). SaDe iteratively improves the performance while learning solutions that satisfy the constraint. Formulations of the MaxSMT problem and the local search space at each iteration are provided next.
**Formulation of MaxSMT problem:** A MaxSMT problem is defined for a batch of instances in each iteration, where the soft constraints encode a certain quality of fit of the model on the instances in the batch, and the domain constraints are hard constraints. A soft constraint, for a given instance \((\mathbf{x},y)\), is defined as a logical constraint that is fulfilled when the prediction of the model \(f_{\mathbf{w}}\) for \(\mathbf{x}\) is "sufficiently consistent" with the true label \(y\). For regression, given some error \(e\), a soft constraint is defined as: \(|y-e|\leqslant f_{\mathbf{w}}(\mathbf{x})\leqslant|y+e|\). For classification, the sign of \(f_{\mathbf{w}}(\mathbf{x})\) is assumed to be the indicator of the class and the magnitude indicates the certainty of prediction, the soft constraint takes the form (for a threshold \(\tau\)):
\[\begin{cases}f_{\mathbf{w}}(\mathbf{x})>\tau&\text{if }y=1\\ f_{\mathbf{w}}(\mathbf{x})<-\tau&\text{if }y=-1\end{cases}\]
For each instance, multiple soft constraints are formulated for different values of the error \(e\) or threshold \(\tau\). Satisfying a maximum of these soft constraints, which is what the MaxSMT problem tries to achieve, correlates with minimizing the prediction loss. Thus, the solution to this MaxSMT problem at every iteration reduces the loss while satisfying the domain constraints.
**Local search space:** At every iteration, the MaxSMT problem searches for the next solution in a local search space defined by the n-cube around the previous solution. It is encoded as an additional hard constraint in the MaxSMT problem, with a "box" constraint, which states that the next solution must be inside the axis-parallel box defined by \(\hat{\mathbf{w}}\) and \(\hat{\mathbf{w}}-\alpha\cdot\text{sgn}(\mathbf{g})\), where the sign function is applied component-wise to a vector (a modified sign function is used where \(\text{sgn}(0)=1\)), \(\hat{\mathbf{w}}\) is the previous solution, \(\mathbf{g}\) is the gradient of the loss at \(\hat{\mathbf{w}}\), and \(\alpha\) is the maximal step size. The box constraint serves two important purposes. Firstly, it provides a general direction in which the loss is minimized, as the box in each dimension is aligned with the negative gradient. Secondly, it stabilizes learning by limiting the size of the updates. Interestingly, using a regularized loss instead of a standard loss has no impact as the constraint is satisfied at each step of training, making the regularization \(0\).
SaDe is limited to training linear models. Training neural networks with the same procedure would require solving the MaxSMT problem with highly non-linear soft constraints, e.g. \(|y-e|\leqslant f_{\mathbf{w}}(\mathbf{x})\leqslant|y+e|\) where \(f_{\mathbf{w}}(\mathbf{x})\) is a neural network, which is not possible with state-of-the-art SMT solvers.
## 4 DeepSaDe: Deep Satisfiability Descent
We now present our approach _DeepSaDe_, which utilizes the MaxSMT framework proposed in Goyal et al. (2023) to train neural networks with constraints. DeepSaDe exploits the structure of neural networks, which transform the input domain through a series of non-linear layers before a final linear layer maps it to the output. As the network output only explicitly depends on the last layer, enforcing the constraint on the last layer is sufficient to enforce the constraint on the network. DeepSaDe, therefore, uses batch learning with a hybrid procedure that uses MaxSMT to determine the weights in the last layer and gradient descent for all other weights. In the following, we introduce some notation before we formalize the MaxSMT problem in the last layer in section 4.1, and then detail the learning algorithm in section 4.2.
\(f_{\mathbf{w}}\) is a fully-connected neural network with \(k\) layers such that \(f_{\mathbf{w}}(\mathbf{x})=h_{k}(h_{k-1}(...h_{1}(\mathbf{x})...))\) for input \(\mathbf{x}\in\mathcal{X}\), where \(h_{n}\) is the \(n^{th}\) layer. The input to the \(n^{th}\) layer is \(\mathbf{x}^{(n)}=h_{n-1}(...h_{1}(\mathbf{x})...)\), where \(\mathbf{x}^{(1)}=\mathbf{x}\) and the latent input space for the \(h_{n}\) is represented by \(\mathcal{X}^{(n)}\), where \(\mathcal{X}^{(1)}=\mathcal{X}\). The size of a layer, \(|h_{n}|\), is the number of neurons in it. Weight and bias parameters of \(h_{n}\) are denoted by matrices \(W^{(n)}\) and \(B^{(n)}\) of dimensions \(|h_{n-1}|\times|h_{n}|\) and \(1\times|h_{n}|\) respectively (\(|h_{0}|=|\mathbf{x}^{(1)}|\)). Elements of \(W^{(n)}\) and \(B^{(n)}\) are referred with lower-case letters.
### Formulation of the maximum satisfiability problem at the last layer of the network
To formalize the MaxSMT problem for the last layer, it is necessary to express both the soft constraints and the hard constraint (domain constraint) in terms of the latent inputs \(\mathbf{x}^{(k)}\) rather than the original inputs \(\mathbf{x}^{(1)}\). The soft constraint at the last linear layer can be formulated in a similar way to what was presented in section 3. Specifically, for a regression task, the constraint is given by \(|y-e|\leqslant h_{k}(\mathbf{x}^{(k)})\leqslant|y+e|\), whereas for classification, it takes the form of:
\[\begin{cases}h_{k}(\mathbf{x}^{(k)})>\tau&\text{if }y=1\\ h_{k}(\mathbf{x}^{(k)})<-\tau&\text{if }y=-1\end{cases}\]
To formulate the hard domain constraints, we propose a method for translating each original constraint K (equation 1) into a constraint K' for the MaxSMT problem at the last layer, given the parameters of layers \(h_{1},\dots,h_{k-1}\), such that a solution to K', combined with \(h_{1},\dots,h_{k-1}\), is a solution to K.
\[\text{K}^{\prime}:\forall\mathbf{x}^{\prime}\in\bigtimes_{i=1}^{|h_{k-1}|}[l_{i}^{(k)},u_{i}^{(k)}],\ \mathbf{x}^{\prime}\models P\Rightarrow h_{k}(\mathbf{x}^{\prime})\models C \tag{2}\]
Here \(\mathbf{x}^{\prime}\) represents the quantified latent variable in the domain of \(\mathcal{X}^{(k)}\) bounded by \(\bigtimes_{i=1}^{|h_{k-1}|}[l_{i}^{(k)},u_{i}^{(k)}]\). We explain how this translation is achieved in the next two paragraphs. With this translation, the last layer of a given network can be updated so that the constraint is satisfied by the network as a whole.
Figure 1: **(a.) Illustration of SaDe:** each grey quadrant represents the local search space for the MaxSMT problem (_maximal step size:_\(\alpha\)), defined by the gradients of the loss, the green points are the solutions found with MaxSMT; **(b.) DeepSaDe Architecture:** Last layer is updated using the MaxSMT framework and the layers before are updated with gradient descent. The input features relevant to the domain constraint (in green) are mapped to the penultimate layer via skip connections.
**Domain Bound Propagation:** To formulate \(\text{K}^{\prime}\), we first consider the bounds of the quantified variable for the latent space of \(\mathcal{X}^{(k)}\). The bounds of the latent space must be such that enforcing the constraint within these bounds enforces the constraint on the original input bounds. To construct such latent bounds, we rely on interval arithmetic (Sunaga (1958)). This involves calculating the bounds of the output of a layer based on the bounds of the input and the layer parameters, such that any input to the layer takes the output value within the output bounds. Given the lower and upper bounds \(l^{(n)}\) and \(u^{(n)}\) for the input \(\mathbf{x}^{(n)}\) of the layer \(h_{n}\), following the approach used in Gowal et al. (2018), the bounds for the output \(\mathbf{x}^{(n+1)}\) are computed as (more details in Appendix A.1):
\[\begin{split} l_{i}^{(n+1)}&=\text{act}\Big(b_{i}^{(n)}+\sum_{j:w_{j,i}^{(n)}\geq 0}w_{j,i}^{(n)}l_{j}^{(n)}+\sum_{j:w_{j,i}^{(n)}<0}w_{j,i}^{(n)}u_{j}^{(n)}\Big)\\ u_{i}^{(n+1)}&=\text{act}\Big(b_{i}^{(n)}+\sum_{j:w_{j,i}^{(n)}\geq 0}w_{j,i}^{(n)}u_{j}^{(n)}+\sum_{j:w_{j,i}^{(n)}<0}w_{j,i}^{(n)}l_{j}^{(n)}\Big)\end{split} \tag{3}\]
Where 'act' is the activation function (e.g., ReLU, sigmoid, or tanh). Given the bounds of the input space, the bounds for the latent space \(\mathcal{X}^{(k)}\) can be calculated recursively. Enforcing a constraint within the latent bounds enforces the constraint on any input within the input bounds.
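A minimal NumPy sketch of this recursion (Eq. 3) is given below; it assumes a monotonically non-decreasing activation (as for ReLU, sigmoid, or tanh) and is illustrative rather than the authors' implementation.

```python
import numpy as np

def propagate_bounds(l, u, W, b, act=np.tanh):
    """Interval-arithmetic bounds (Eq. 3) through one dense layer:
    any x in [l, u] maps to an output inside [l_next, u_next].
    W has shape (n_in, n_out); l, u, b are 1-D arrays."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    l_next = act(b + l @ W_pos + u @ W_neg)   # lower bound per neuron
    u_next = act(b + u @ W_pos + l @ W_neg)   # upper bound per neuron
    return l_next, u_next

# Recursing over layers 1..k-1 yields the latent bounds used in K':
# for W, b in hidden_layers: l, u = propagate_bounds(l, u, W, b)
```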
**Identity Mapping of Relevant Features:** The translation also takes into account that some domain constraints may be dependent on the input space, e.g., "_if an object comes in front of the vehicle, the vehicle must stop_". To encode such conditions in \(\text{K}^{\prime}\) (i.e., \(\mathbf{x}^{\prime}\models P\)), we make the relevant features, i.e. the features that are needed to encode the property \(P\), available at the latent space \(\mathcal{X}^{(k)}\). For this, we use skip-connections (He et al. (2016)) that map these features to the second to last layer \(h_{k-1}\) using an identity mapping, as illustrated in figure 1(b). The network, consequently, is no longer fully-connected and these features take the same value as the input. These mapped features become a part of \(\mathcal{X}^{(k)}\) and the input property can be expressed at the last layer. These features are identified in advance. Finally, as the output of the network is the same as the output of the last layer, the constraint on the network output can be encoded with \(h_{k}(\mathbf{x}^{\prime})\). This completes the formulation of \(\text{K}^{\prime}\).
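Putting the pieces together, the sketch below shows how such a last-layer MaxSMT problem could be posed with Z3's Optimize API for a toy two-neuron latent layer and a single output; the latent bounds, input/output properties, previous weights, gradients, and margins are all invented for illustration and do not come from the paper. Since the quantified constraint is over linear real arithmetic, Z3 can in principle eliminate the quantifier.

```python
from z3 import Real, RealVector, ForAll, Implies, And, Optimize, sat

w = RealVector('w', 2)
b = Real('b')
xp = RealVector('xp', 2)                 # quantified latent input x'

def h_k(x):                              # the last linear layer
    return w[0] * x[0] + w[1] * x[1] + b

opt = Optimize()

# Hard: translated domain constraint K' over assumed latent bounds [0, 1]^2,
# with illustrative properties P and C.
lat_bounds = And(0 <= xp[0], xp[0] <= 1, 0 <= xp[1], xp[1] <= 1)
P = xp[0] > 0.5
C = h_k(xp) > 0
opt.add(ForAll(xp, Implies(And(lat_bounds, P), C)))

# Hard: box constraint around the previous solution (maximal step size 0.1),
# aligned with the negative-gradient direction in each dimension.
w_prev, step = [0.2, -0.3], 0.1
for wi, wp in zip(w, w_prev):
    opt.add(wp - step <= wi, wi <= wp)

# Soft: classification margins for one latent instance (x=(0.7, 0.1), y=+1).
for tau in (0.1, 0.5, 1.0):
    opt.add_soft(w[0] * 0.7 + w[1] * 0.1 + b > tau)

if opt.check() == sat:
    print(opt.model())
```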
### Learning Algorithm
We now present the algorithm while referring to the pseudo-code in algorithm 1. DeepSaDe modifies mini-batch gradient descent (Goodfellow et al. (2016)), where forward and backward pass at every iteration (lines 5-7) are kept the same, but the parameter update is split into two parts. First, only the last layer's parameters are updated to satisfy the domain constraint using the MaxSMT formulation presented in section 4.1 (lines 8-27) (details in next paragraph). Second, the earlier layers are updated using gradient descent to optimize predictive loss (lines 33-35). Importantly, after the latter update, the network does not guarantee constraint satisfaction due to changes in the latent space of the last layer, used to formulate the MaxSMT problem. Therefore, the network before the latter update is used for evaluation and a validation set is used to select the best model (lines 29-31).
For updating the last layer, first, a line search is used along the vector that minimizes the prediction loss, and a fixed number of candidate solutions, from furthest to closest, are checked if they satisfy \(\mathcal{K}^{\prime}\) and first one that does is picked (lines 14-16). If this search does not yield a solution, the MaxSMT problem is formulated based on the inputs to the last layer (section 4.1), and a solution is searched in the local search space around the previous solution defined based on the gradients calculated during the backward pass (lines 18-23) (similar to the local search space in SaDe). Line search is employed first because checking if a point satisfies a constraint is faster than searching for the solution in a domain, finding a solution by merely checking a few candidate points speeds up learning.
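The first-stage line search might look like the following sketch; the names and the candidate schedule are illustrative, and the constraint check would itself be a Z3 satisfiability query with the candidate weights substituted as fixed values.

```python
import numpy as np

def line_search_update(w_last, grad, satisfies_Kprime, step=0.1, n_cand=10):
    """Scan a fixed number of candidates along the negative gradient,
    from furthest to closest, and return the first one satisfying the
    translated constraint K' (checking a fixed point is cheaper than
    solving the full MaxSMT problem)."""
    for i in range(n_cand, 0, -1):
        cand = w_last - (i / n_cand) * step * grad
        if satisfies_Kprime(cand):
            return cand
    return None   # fall back to the MaxSMT search in the local box
```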
Sometimes a solution for the last layer cannot be found because the MaxSMT problem could not be solved within the local search space, possibly due to the gradient pointing to a solution space that violates the constraint. In such cases, a restart procedure is initiated where the signs of the gradients are randomly flipped (lines 9-11, 24-26) to randomize the direction of the update. This may slightly decrease predictive performance, but it effectively restarts the learning process when it gets stuck.
Since our approach is iterative, starting from an initial configuration that satisfies the domain constraint is crucial. Otherwise, we may begin in a solution space far from the constrained space, leading to no updates. We ensure this by first initializing the network using a standard method (He et al. (2015)) (line 1), and then updating the weights of the last layer so that they satisfy the translated constraint \(\mathcal{K}^{\prime}\) (lines 2-3). It is important to note that we are solving a satisfiability problem here, not the maximum satisfiability problem, as we are not using any soft constraints for this purpose.
## 5 Related Work
A standard approach for enforcing constraints in ML models is _regularization_, where a penalty is added to the prediction loss whenever the model makes a prediction that violates the constraint (\((1-\lambda)*\text{loss}+\lambda*\text{regularization}\)). Xu et al. (2018) propose a regularization defined on the weighted model count (WMC) (Chavira and Darwiche (2008)) of the constraint defined over the network output given the predicted probabilities. Diligenti et al. (2017) and Serafini and Garcez (2016) propose a fuzzy logic-based regularization for constraints in first-order logic. There are many other regularization approaches in the literature, e.g. Fischer et al. (2019); Hu et al. (2016); Stewart and Ermon (2017). Regularization can enforce a variety of constraints, but it does not guarantee constraint satisfaction. Additionally, high regularization loss with a large value of \(\lambda\) may provide stronger constraint satisfaction but impacts the predictive performance negatively.
Some approaches guarantee constraints by construction but are generally limited to enforcing specific types of constraints. For example, monotonic lattices (Gupta et al. (2016)), deep lattice networks (You et al. (2017)), and COMET (Sivaraman et al. (2020)) are approaches to enforce monotonicity in neural networks; Leino et al. (2022) & Lin et al. (2020) propose approaches to satisfy some safety specifications. A more general approach, MultiplexNet, was proposed in Hoernle et al. (2022). They use a multiplexer layer to satisfy constraints in disjunctive normal form (DNF). However, the DNF representation is limiting, as certain constraints have worst-case DNF representations with exponentially many terms. Additionally, constraints conditioned on the input space cannot be enforced. Another approach, DeepProbLog (Manhaeve et al. (2018)), trains neural networks within the ProbLog framework, where constraints can be enforced with ProbLog. However, it is limited to modeling discrete variables and cannot model regression problems.
Our work relates to combinatorial optimization approaches, as we use a MaxSMT-based approach. With the exception of Goyal et al. (2023), however, these approaches are limited to discrete models like decision trees (Gunluk et al. (2021); Bertsimas and Dunn (2017); Verwer and Zhang (2019); Demirovic et al. (2022)) and decision sets (Yu et al. (2021); Ignatiev et al. (2021)). Maximum satisfiability, specifically, has also been used in various ML tasks (Berg et al. (2019); Cussens (2012); Malioutov and Meel (2018)). None of these, however, focus on training neural networks.
There are approaches that rely on the idea of bound propagation, also used in our work, to train adversarially robust neural networks (Gowal et al. (2018); Zhang et al. (2019); Wong and Kolter (2018)). However, the constraints that can be enforced are limited to input-output bounds (if the input is in a given bound, the output should be in a specific bound). Our approach is more general and can, in theory, handle any constraint that can be written as an SMT formula, e.g., structured output constraints.
Finally, there are approaches to verify if a network satisfies a constraint (Katz et al. (2019); Wang et al. (2018); Bunel et al. (2018)). DeepSaDe trains networks that do not require verification because constraints are guaranteed by construction.
## 6 Experiments
We evaluate multiple use cases in various ML tasks with complex domain constraints. We first outline the research questions, then describe the use cases, our evaluation method, and the results.
**Q1:** Can existing methods satisfy domain constraints for all predictions in practice, even if they do not guarantee it?
**Q2:** How does the predictive performance of DeepSaDe models compare to the baselines?
**Q3:** Do the DeepSaDe models have a higher training time compared to the baselines?
### Use Cases
UC1, UC2 & UC3 are from Goyal et al. (2023), UC4 is novel, and UC5 is from Xu et al. (2018).
**UC1:** A multi-target regression to predict 5 household expenses using 13 attributes, with 41417 data instances. We enforce two constraints: _"sum of all the expenses must be smaller than the total household income"_ and _"going out expense must be smaller than 5% of the household income"_.
**UC2:** A binary classification problem of predicting if a person should be given a loan or not based on 13 attributes, with 492 data instances. We enforce the constraint: _"a person with a salary less than 5000$ and an absent credit history must be denied a loan"_.
**UC3:** A multiclass classification problem to classify a song to one of 5 music genres based on 13 attributes, with 793 data instances. We enforce the constraint: _"a song by 'the Beatles' must be classified as either rock or pop"_.
**UC4:** A multi-label classification problem of identifying the labels from a sequence of 4 MNIST images. We enforce the constraint: _"the sum of the predicted labels must be greater than \(10\)"_. 20000 instances are generated by selecting 4 images at random from the MNIST dataset.
**UC5:** A preference learning problem to predict the preference order of different sushi, with constraint: _"the prediction must have a coherent preference order"_. The preference order of 6 out of 10 sushi is used to predict the preference order of the remaining 4 sushi. The dataset contains 4926 instances. The preference ordering over \(n\) items is encoded as a flattened binary matrix \(\{X_{ij}\}\) where \(X_{ij}\) denotes that item \(i\) is at position \(j\). Under this encoding, each instance has 36 features and 16 targets.
### Evaluation Methodology
**Evaluation:** Constraint satisfaction is typically evaluated by the _constraint accuracy_ metric used in Xu et al. (2018) and Fischer et al. (2019), which corresponds to the percentage of instances where the constraint is not violated by the prediction. Such an evaluation, however, is limited to a finite sample of the population. Hence, as a second measure, we also calculate the _Adversity Index (AdI)_ (Goyal et al. (2023)), which is the fraction of instances for which a counter-example to the constraint can be constructed in the neighborhood defined by an \(l_{\infty}\) ball of radius \(\delta\) around the instance. AdI takes a value between 0 and 1; a higher value implies that the model violates the constraint on more points similar to data instances. AdI is calculated on the full data (training and test) because this provides more instances to evaluate constraint satisfaction; we want the constraints to be satisfied on all data, not only test data. For DeepSaDe, constraint accuracy is 100% and AdI is 0 by construction, but we still calculate them as a sanity check. We use the neural network verification software _Marabou_ (Katz et al. (2019)) to find counter-examples. However, we only compute the AdI for UC1-3 because _Marabou_ only handles inequality constraints, and we are not aware of any verification tool that can verify the constraints in UC4-5. For predictive performance, we use MSE for UC1 and accuracy for UC2-3. For UC4-5, we use the _coherent accuracy_, which is the fraction of instances for which the model predicts the entire configuration correctly, the _flattened accuracy_, which is the fraction of individually correct binary labels, and the _Jaccard accuracy_, which is the average Jaccard index of each multi-label prediction compared to the true labels (see the sketch below). Performance is only evaluated on instances whose true label does not violate the constraint: such instances can occur in practice (e.g., racially biased data for predicting crime likelihood), and including them would bias the evaluation.
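For concreteness, the multi-label metrics used for UC4-5 can be computed as in the following sketch, assuming \(\{0,1\}\) label matrices:

```python
import numpy as np

def multilabel_scores(Y_true, Y_pred):
    """Coherent, flattened, and Jaccard accuracy for {0,1} label matrices
    of shape (n_instances, n_labels)."""
    coherent = np.mean(np.all(Y_true == Y_pred, axis=1))   # whole config right
    flattened = np.mean(Y_true == Y_pred)                  # per-label accuracy
    inter = np.logical_and(Y_true, Y_pred).sum(axis=1)
    union = np.logical_or(Y_true, Y_pred).sum(axis=1)
    jaccard = np.mean(np.where(union == 0, 1.0, inter / np.maximum(union, 1)))
    return coherent, flattened, jaccard
```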
| **UC** | **Approach** | **Constraint** | **AdI (\(\delta=0.1\))** | **Accuracy/MSE** | **Runtime (sec)** |
| --- | --- | --- | --- | --- | --- |
| UC1 | DeepSaDe | **100 ± 0** | **0 ± 0** | **\*38.36 ± 4.59** | 102341 ± 40198 |
| UC1 | REG | 93.50 ± 2.01 | 0.97 ± 0.003 | **\*30.50 ± 6.49** | 227 ± 70 |
| UC2 | DeepSaDe | **100 ± 0** | **0 ± 0** | 80.04 ± 4.29 | 447 ± 105 |
| UC2 | SL | **100 ± 0** | 0.002 ± 0.006 | **80.17 ± 3.88** | 45 ± 30 |
| UC2 | SBR | **100 ± 0** | 0.002 ± 0.004 | 80.04 ± 3.95 | 45 ± 27 |
| UC3 | DeepSaDe | **100 ± 0** | **0 ± 0** | 80.11 ± 4.99 | 6580 ± 1915 |
| UC3 | SL | 99.97 ± 0.03 | 0.42 ± 0.28 | **82.53 ± 3.58** | 101 ± 45 |
| UC3 | SBR | 99.97 ± 0.05 | 0.29 ± 0.10 | 76.51 ± 7.50 | 196 ± 68 |

Table 1: Results: UC1, UC2 & UC3 (\*MSE, a lower MSE value is better)
| **UC** | **Approach** | **Constraint** | **Coherent** | **Flattened** | **Jaccard** | **Runtime (sec)** |
| --- | --- | --- | --- | --- | --- | --- |
| UC4 | DeepSaDe | **100 ± 0** | 6.62 ± 1.72 | 78.52 ± 1.73 | 62.97 ± 2.28 | 227928 ± 30559 |
| UC4 | FFN | 88.00 ± 3.26 | **23.94 ± 4.25** | **85.81 ± 1.18** | **71.55 ± 2.40** | 3215 ± 2663 |
| UC5 | DeepSaDe | **100 ± 0** | **11.08 ± 2.61** | 67.17 ± 1.48 | **25.94 ± 2.89** | 17586 ± 5074 |
| UC5 | FFN | 0.04 ± 0.15 | 0.01 ± 0.04 | **75.69 ± 0.15** | 13.04 ± 1.06 | 48 ± 10 |
| UC5 | SL | **100 ± 0** | 4.06 ± 3.33 | 63.16 ± 2.62 | 18.08 ± 3.33 | 298 ± 110 |

Table 2: Results: UC4 & UC5
**Baselines:** For UC2 and UC3, we use regularization baselines based on Xu et al. (2018) (SL) and Diligenti et al. (2017) (SBR). For UC1, we design a custom regularization loss REG (details in appendix A.2). For UC4, we could not find any approaches that can enforce such a constraint. Hence, we simply compare with a feedforward network (FFN). For UC5, we choose SL and FFN as baselines.
**Experimental Setup:** For solving the MaxSMT problem, we implement the Fu-Malik algorithm (Fu and Malik (2006)) over the Z3 solver for quantified nonlinear real arithmetic (NRA) formulas. We ran experiments on an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz machine with 125 GB RAM. For each use case, we run 5 experiments with 5-fold cross-validation, and the data is split 70/20/10 into train/test/validation. Every feature is scaled to [0, 1], and the radius \(\delta=0.1\) is chosen for AdI, which is significantly smaller than the mean \(\ell_{\infty}\) distance between two points: this distance is \(0.75\) for UC1, \(0.97\) for UC2, and \(0.89\) for UC3. For regularization, the smallest value of \(\lambda\) in \([0,1]\) that leads to minimum violations on the validation set is selected via cross-validation. Refer to Appendix A.2 for details on the architectures and hyper-parameters. A sketch of the Fu-Malik loop is given below.
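The following is a minimal sketch of the unweighted Fu-Malik loop over Z3's Python API, shown on plain quantifier-free formulas for illustration; the actual implementation operates on quantified NRA formulas as described above.

```python
from itertools import combinations, count
from z3 import Solver, Bool, Or, Not, And, Real, sat

def fu_malik(hard, soft):
    """Minimal unweighted Fu-Malik MaxSAT/MaxSMT loop over Z3 (sketch).
    All of `hard` must hold; as many of `soft` as possible are satisfied."""
    s = Solver()
    s.add(hard)
    if s.check() != sat:
        return None                        # hard constraints alone are UNSAT
    clauses = [[c] for c in soft]          # each soft clause + relaxation vars
    fresh = count()
    while True:
        assumptions, owner = [], {}
        for i, disj in enumerate(clauses):
            a = Bool(f"a{next(fresh)}")
            s.add(Or(disj + [Not(a)]))     # assuming a forces the soft clause
            assumptions.append(a)
            owner[str(a)] = i
        if s.check(assumptions) == sat:
            return s.model()
        core = [owner[str(a)] for a in s.unsat_core() if str(a) in owner]
        blocks = []
        for i in core:                     # relax every soft clause in the core
            b = Bool(f"b{next(fresh)}")
            clauses[i].append(b)
            blocks.append(b)
        if len(blocks) > 1:                # at most one relaxation per round
            s.add([Not(And(x, y)) for x, y in combinations(blocks, 2)])

# example: finds an assignment with x >= 0 and two of the three soft constraints
x = Real("x")
model = fu_malik(hard=[x >= 0], soft=[x <= 2, x >= 5, x >= 1])
```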
### Results (Tables 1 & 2)
**Constraint Satisfaction:** DeepSaDe finds a model that achieves 100% constraint accuracy for each use case and AdI = 0 for UC1-3. For UC1, the constraint accuracy for REG is 93.5%, and counter-examples can be constructed close to 97% of the instances (AdI = 0.97). Similar behavior is seen for UC3 for both SL and SBR, with SBR proving more effective at enforcing constraints because of its lower AdI. For UC2, SL and SBR both lead to 100% constraint accuracy, but counter-examples can still be constructed, as AdI \(>0\). For UC4 and UC5, FFN fails to satisfy the constraints on the test set. For UC5, SL can satisfy the constraint, but its predictive performance is much worse than DeepSaDe's. Thus, in general, existing approaches, in contrast to DeepSaDe, _do not_ guarantee domain constraint satisfaction in practice, which answers **Q1**.
**Predictive Performance:** DeepSaDe treats the domain constraint as a hard constraint, which limits the solution space to the regions where the constraint is guaranteed. Thus, the predictive performance of DeepSaDe models can be worse than existing approaches which do not guarantee constraint satisfaction. For UC1, the predictive performance of DeepSaDe is slightly worse than REG, while the difference is not statistically significant for UC2 & UC3. The performance of DeepSaDe is worse for UC4 on all prediction metrics compared to FFN. For UC5, SL regularization with a high value of \(\lambda\) allows for satisfying all the constraints on the test set but performs much worse than DeepSaDe. To study this further, we plot the prediction loss (cross-entropy) for SL models on the test set for various \(\lambda\) between \(0\) and \(0.9\), averaged over 5 folds, in Figure 2. For comparison, the average loss for DeepSaDe is also plotted. DeepSaDe achieves constraint satisfaction in addition to having better performance than SL with a high \(\lambda\). High regularization makes the prediction loss insignificant compared to the regularization loss, leading to worse predictive performance.
In DeepSaDe, where the solution at every iteration is learned with MaxSMT, the solver tries to satisfy as many soft constraints as possible in addition to satisfying the domain constraints. DeepSaDe, thus, is a more stable learner compared to regularization with high \(\lambda\). This answers **Q2**. Although the constraint satisfaction with DeepSaDe comes at the cost of predictive performance in some cases, in applications where constraints are crucial, like in safety-critical domains, this may be acceptable.
**Training Time:** DeepSaDe requires between 10 and 500 times more time than the baselines across the use cases. This is because DeepSaDe solves a MaxSMT problem at each iteration, which makes training slower than the numerical updates in the baselines. This positively answers research question **Q3**. In applications where constraints are imperative, training time is less relevant: a network that takes a week to train but guarantees safety is more valuable to an autonomous vehicle than one trained in a few hours that cannot. Our work is a starting point for such an approach that combines the exact solving of universal quantifiers with gradient-based learning to train neural nets. With further research into solver technology, it can be made more scalable. Additionally, for future work, possible modifications to improve the efficiency of DeepSaDe include using an incomplete MaxSMT approach like stochastic local search (Morgado et al. (2013)) instead of a complete one like Fu-Malik, and using compact bounds for the latent space (Wang et al. (2018)).
Figure 2: Test loss for UC5
## 7 Conclusion
We proposed DeepSaDe to train feedforward neural networks which can enforce a variety of constraints and guarantee constraint satisfaction for all possible predictions, using a satisfiability framework combined with gradient-based optimization. DeepSaDe is effective in a variety of ML tasks and provides a flexible representation of constraints, though sometimes at some cost in performance. It can enforce any constraint that can be written as an SMT formula, as long as the solver can feasibly handle it. We rely on the Z3 solver, but our framework is not dependent on it; any solver that can solve MaxSMT problems with universally quantified constraints can be used. We believe that evolving solver capabilities will allow DeepSaDe to handle more complex constraints. Extension of DeepSaDe to other architectures (e.g. Convolutional Neural Networks) is left for future work.
|
2308.12529 | Privacy-Preserving Discretized Spiking Neural Networks | The rapid development of artificial intelligence has brought considerable
convenience, yet also introduces significant security risks. One of the
research hotspots is to balance data privacy and utility in the real world of
artificial intelligence. The present second-generation artificial neural
networks have made tremendous advances, but some big models could have really
high computational costs. The third-generation neural network, SNN (Spiking
Neural Network), mimics real neurons by using discrete spike signals, whose
sequences exhibit strong sparsity, providing advantages such as low energy
consumption and high efficiency. In this paper, we construct a framework to
evaluate the homomorphic computation of SNN named FHE-DiSNN that enables SNN to
achieve good prediction performance on encrypted data. First, benefitting from
the discrete nature of spike signals, our proposed model avoids the errors
introduced by discretizing activation functions. Second, by applying
bootstrapping, we design new privacy-preserving functions FHE-Fire and
FHE-Reset, through which noise can be refreshed, allowing us to evaluate SNN
for an arbitrary number of operations. Furthermore, we improve the
computational efficiency of FHE-DiSNN while maintaining a high level of
accuracy. Finally, we evaluate our model on the MNIST dataset. The experiments
show that FHE-DiSNN with 30 neurons in the hidden layer achieves a minimum
prediction accuracy of 94.4%. Under optimal parameters, it achieves a 95.1%
accuracy, with only a 0.6% decrease compared to the original SNN (95.7%). These
results demonstrate the superiority of SNN over second-generation neural
networks for homomorphic evaluation. | Pengbo Li, Ting Gao, Huifang Huang, Jiani Cheng, Shuhong Gao, Zhigang Zeng, Jinqiao Duan | 2023-08-24T03:38:42Z | http://arxiv.org/abs/2308.12529v1 | # Privacy-Preserving Discretized Spiking Neural Networks
###### Abstract
The rapid development of artificial intelligence has brought considerable convenience, yet it also introduces significant security risks. One of the research hotspots is to balance data privacy and utility in real-world artificial intelligence. The present second-generation artificial neural networks have made tremendous advances, but some big models can have very high computational costs. The third-generation neural network, the SNN (Spiking Neural Network), mimics real neurons by using discrete spike signals, whose sequences exhibit strong sparsity, providing advantages such as low energy consumption and high efficiency. In this paper, we construct a framework named FHE-DiSNN to evaluate the homomorphic computation of SNNs, enabling SNNs to achieve good prediction performance on encrypted data. First, benefiting from the discrete nature of spike signals, our proposed model avoids the errors introduced by discretizing activation functions. Second, by applying bootstrapping, we design new privacy-preserving functions **FHE-Fire** and **FHE-Reset**, through which noise can be refreshed, allowing us to evaluate SNNs for an arbitrary number of operations. Furthermore, we improve the computational efficiency of FHE-DiSNN while maintaining a high level of accuracy. Finally, we evaluate our model on the MNIST dataset. The experiments show that FHE-DiSNN with 30 neurons in the hidden layer achieves a minimum prediction accuracy of 94.4%. Under optimal parameters, it achieves 95.1% accuracy, only a 0.6% decrease compared to the original SNN (95.7%). These results demonstrate the superiority of SNNs over second-generation neural networks for homomorphic evaluation.
Keywords: Privacy Computing, Fully Homomorphic Encryption, Spiking Neural Network, Bootstrap
## 1 Introduction
**Privacy-Preserved AI.** Machine learning algorithms based on deep neural networks have attracted extensive attention as a key technology in Artificial Intelligence (AI).
These achievements have been widely applied in various fields such as image processing, intelligent transportation, and security. However, users face the challenge of insufficient local computing power when training neural network models with a large number of parameters, which leads to the consideration of MLaaS (Machine Learning as a Service) [33] to outsource the computation of neural network models to cloud services. However, outsourcing brings risks of data security breaches. To address this issue, many privacy protection techniques are applied to machine learning models, such as homomorphic encryption (HE), differential privacy (DP), and cryptography-based secure multi-party computation (SMC).
Homomorphic encryption refers to the ability to perform arbitrary computations on ciphertext without decryption. This unique property gives homomorphic encryption broad theoretical and practical applications, such as secure encrypted retrieval in cloud computing and secure multi-party computation, so researching it holds significant scientific and practical value. In 2009, Gentry [19; 20] constructed the first fully homomorphic encryption (FHE) scheme, a major breakthrough in the field of cryptography. So far, there have been four generations of FHE. In the first generation [19], Gentry constructed a true bootstrapping process, although its practical performance was poor. The second-generation schemes, represented by BFV [5] and BGV [15], introduce a technique called modulus reduction, which builds leveled HE schemes that can compute additions and multiplications up to a predefined depth. Another advantage of the second-generation schemes is the SIMD operation, allowing parallel processing of thousands of plaintexts in corresponding ciphertext slots, greatly improving performance. CKKS [10] is a modification of BFV that supports homomorphic real-number operations with fixed precision. The third-generation schemes include FHEW [14], TFHE [11], and Gao et al. [8; 18], which feature fast bootstrapping and enable an unlimited number of operations.
Although there are many works based on the early second-generation FHE, these schemes only support homomorphic addition and multiplication, while practical computations often involve non-linear operations such as comparison and maximization, especially the activation functions in neural networks. To address these issues, Gilad-Bachrach et al. [23] propose the CryptoNets method, which replaces non-linear activation functions with polynomial functions. However, polynomials of high degree are needed for a good approximation of the nonlinear functions used in machine learning. Mohassel et al. [31] introduce an interactive algorithm that utilizes two servers to handle non-linear functions, but it requires continuous interaction between the client and the servers, leading to high communication costs. Chabanne et al. [9] modify the model for the prediction phase to address the non-linear activation function problem, but this approach results in a loss of precision in prediction and training.
In [4], the authors design FHE-DiNN, a discrete neural network framework based on the third-generation TFHE [11] scheme, where the output of each neuron is refreshed through bootstrapping, enabling homomorphic computation for arbitrary depths of networks. Unlike standard neural networks, FHE-DiNN utilizes a discretized neural network that restricts the propagated signals to integers and employs the sign function as the activation function to achieve scale invariance. FHE-DiNN exhibits fast computation speed but has lower model prediction accuracy. This work inspires us to consider
whether SNN neurons that naturally output 0 and 1 binary values can also be efficiently homomorphically evaluated.
**Spiking Neural Network.** Compared to other neural network models, Spiking Neural Networks (SNNs) are generally more biologically plausible. As the third generation of neural networks [29], SNNs have gained increasing attention due to their rich spatiotemporal neural dynamics, diverse coding mechanisms, and low-power advantages on neuromorphic chips.
In contrast to the prosperity of artificial neural networks (ANNs), the development of SNNs is still in the early stage. Currently, researches in SNNs mainly focus on five major directions: neuron models, training algorithms, programming frameworks, datasets, and hardware chips. In response to the dynamic characteristics of the potential of neurons, neurophysiologists have constructed many models. These models are the basic units that make up spiking neural networks and determine the basic dynamic characteristics of the network. Among them, the most influential models include the Hodgkin-Huxley (H-H) model [24], the leaky integrate-and-fire (LIF) model [34], the Izhikevich model [25], and the spike response model [27](SRM), etc.
The training algorithms of SNNs can be mainly divided into three types: (1) gradient-free training algorithms represented by spike-timing dependent plasticity (STDP) [26]; (2) direct conversion of ANNs to SNNs; (3) surrogate-gradient training algorithms represented by error back-propagation in the spatiotemporal domain. Bohte et al. [3] first proposed a gradient descent learning algorithm that can be applied to multi-layer feedforward spiking neural networks, called the SpikeProp learning algorithm. Recently, Wu et al. [34] proposed the spatiotemporal back-propagation (STBP) method for the direct training of SNNs and significantly improved it to be compatible with much deeper structures and larger datasets while achieving better performance.
Considering the superior stability and lower energy consumption of SNNs in handling discrete data, it is reasonable to explore the integration of SNNs with FHE. The advantage of FHE-DiSNN lies in its strong robustness to discretization. In the process of converting traditional ANNs to homomorphic computation, discretizing the activation function is a challenging problem: whether it is approximated with low-degree polynomials [23] or directly replaced by the sign function (as in DiNN), a loss of accuracy results. SNNs naturally avoid this problem, since all their outputs are binary pulse signals taking values in \(\{0,1\}\). This property also yields scale invariance, eliminating the need to consider the influence of computation depth when designing the discretization. Inspired by FHE-DiNN, we also provide discretization methods for linear neuron models such as LIF and IF and prove that the discretization error caused by this method is very small.
**Our Contribution.** In this paper, we construct a novel framework called FHE-DiSNN with the following benefits:
* develop a low-accuracy-loss method to discretize SNN to DiSNN with controllable error.
* design new privacy-preserving functions **FHE-Fire** and **FHE-Reset** with TFHE bootstrapping technology, so that the resulting FHE-DiSNN constructed from DiSNN can support an arbitrary number of operations.
* propose an easy-extended framework(SNN \(\rightarrow\) DiSNN \(\rightarrow\) FHE-DiSNN) that allows the prediction procedure of SNN to be evaluated homomorphically.
Our experiments on the MNIST [13] dataset confirm the advantages of the FHE-DiSNN. First, we train a fully connected SNN with a single hidden layer consisting of 30 neurons. This SNN is constructed based on the IF(Integrate-and-Fire) neuron model and implemented using the Spikingjelly [16] Python package. Then, we convert it to DiSNN with the optimal parameters determined experimentally. The experiments show that DiSNN achieves a prediction accuracy of 95.3% on plaintext data, with only a marginal decrease of 0.4% compared to the original SNN's accuracy of 95.7%. Finally, the accuracy of FHE-DiSNN is evaluated on ciphertext using the TFHE library, resulting in an accuracy rate of 95.1%. This demonstrates a slight degradation (0.2%, 0.6%) compared to both DiSNN (95.3%) and SNN (95.7%).
**Outline of the paper.** The paper is structured as follows: In Section 2, we provide definitions and explanations of SNN and TFHE, including a brief introduction to the bootstrapping process of TFHE. In Section 3, we present our method of constructing Discretized Spiking Neural Networks and prove that the discretization error can be controlled. In Section 4, we highlight the challenges of evaluating a DiSNN homomorphically and provide a detailed explanation of our proposed solution. In Section 5, we present comprehensive experimental results verifying our proposed framework, and we discuss the challenges and possible future work in Section 6.
## 2 Preliminary Knowledge
In this section, we commence by presenting the training and prediction methods of the SNN model. Subsequently, we provide a concise introduction to the bootstrapping process in the TFHE scheme.
### Spiking Neural Network
The typical structure of a neuron predominantly encompasses three components: dendrites, soma (cell body), and axons. In consideration of the neuron's potential dynamic characteristics during its operation, neurophysiologists have devised diverse models that constitute the foundational constituents of spiking neural networks, thereby exerting influence on the network's fundamental dynamic properties.
The Hodgkin-Huxley (H-H) model provides a comprehensive and accurate depiction of the intricate electrical activity mechanisms in neurons. However, it entails a complex and extensive system of dynamic equations that impose substantial computational demands, so simplified models remain practical and valuable, such as the most widely utilized Leaky Integrate-and-Fire (LIF) model. The LIF model simplifies the process of action potentials significantly while retaining three key characteristics: leakage,
accumulation, and threshold excitation, which are presented below:
\[\Omega\frac{dV}{dt}=-V+I, \tag{1}\]
where \(\Omega=RC\) is a time constant, and \(R\) and \(C\) denote the membrane resistance and capacitance, respectively. Building upon the foundation of the LIF model, there exist several variant models, including the QIF model [7], the EIF model [17], and the adaptive EIF model [6]. Besides, the IF model [1] is a further simplification of the LIF model, in which \(\Omega=1\) and the leak term \(-V\) in Equation 1 disappears, i.e., \(\frac{dV}{dt}=I\).
In practical applications, it is common to utilize discrete difference equations as an approximation method for modeling the equations governing neuronal electrical activity. Although the specific accumulation equations for various neuronal membrane potentials may differ, the threshold excitation and reset equations for the membrane potential remain consistent. Consequently, the neuronal electrical activity can be simplified into three distinct stages: charging, firing, and resetting.
\[H[t] =V[t-1]+f(V[t-1],I[t]), \tag{2}\] \[S[t] =\mathbf{Fire}\left(H[t]-V_{threshold}\right),\] \[V[t] =\mathbf{Reset}(H[t])=\begin{cases}V_{reset},&\text{if}\quad H[t] \geq V_{threshold},\\ H[t],&\text{if}\quad V_{reset}\leq H[t]<V_{threshold},\\ V_{reset},&\text{if}\quad H[t]<V_{reset}.\end{cases}\]
\(\mathbf{Fire}(\cdot)\) is a step function:
\[\mathbf{Fire}(x)=\begin{cases}1,&\text{if}\quad x\geq 0,\\ 0,&\text{if}\quad x<0.\end{cases} \tag{3}\]
\(I[t]\) (the subscript \(i\) denotes the \(i\)-th neuron; since we refer to an arbitrary neuron here, \(i\) is omitted) represents the total membrane current of the external input from the pre-synaptic neurons. This term can be conceptually interpreted as the voltage increment and is mathematically calculated using the equation below:
\[I[t]=\sum_{j}w_{ij}S_{j}[t]. \tag{4}\]
To mitigate potential confusion, we employ the notation \(H[t]\) to denote the membrane potential of the neuron subsequent to the charging phase and prior to spike initiation, while \(V[t]\) signifies the membrane potential of the neuron subsequent to spike initiation. The function \(f(V[t-1],I[t])\) represents the equation governing the state transition of the neuron, wherein the distinctions between different neuron models manifest in the specific formulation of \(f\).
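For illustration, a single discrete-time IF-neuron update following Equations 2-4 can be written as below. This is a plaintext sketch; the threshold and reset values are illustrative.

```python
import numpy as np

def if_neuron_step(V, S_in, W, v_th=1.0, v_reset=0.0):
    """One discrete IF-neuron update (Eqs. 2-4): charge, fire, reset."""
    I = W @ S_in                                  # Eq. 4: input current
    H = V + I                                     # charge (IF: f(V, I) = I)
    S = (H >= v_th).astype(H.dtype)               # Eq. 3: fire
    V_next = np.where(H >= v_th, v_reset,         # spike -> reset
             np.where(H < v_reset, v_reset, H))   # clamp below at V_reset
    return S, V_next

# one step for a layer of 3 neurons driven by 4 binary input spikes
W = np.array([[0.6, 0.2, 0.1, 0.3],
              [0.1, 0.1, 0.1, 0.1],
              [0.5, 0.5, 0.0, 0.0]])
S, V = if_neuron_step(np.zeros(3), np.array([1., 0., 1., 1.]), W)
```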
In practical applications, it is common to utilize spike encoding methods to transform image data into a binary input format appropriate for SNNs. Poisson encoding is a commonly used one, in which the inputs are encoded into rate-based spike trains via a \(\lambda\)-Poisson process. Additionally, due to the non-differentiable nature of spiking functions, the conventional back-propagation algorithm based on gradient descent in ANNs
is not suitable in this context, and alternative training approaches must be sought. Poisson encoding and the surrogate gradient method are utilized in this paper; other common methodologies for encoding data and training SNNs are detailed in Appendix A.
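As a sketch, Poisson (rate) encoding of an intensity-normalized image can be implemented as below, treating each pixel value in \([0,1]\) as a per-step firing probability:

```python
import numpy as np

def poisson_encode(image, T, rng=None):
    """Rate-based Poisson encoding (sketch): returns a {0,1} spike train
    of shape (T, *image.shape) for pixel intensities in [0, 1]."""
    rng = rng or np.random.default_rng()
    return (rng.random((T,) + image.shape) < image).astype(np.uint8)
```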
### Programmable Bootstrapping
Let \(N=2^{k}\) and let \(p>1\) be an even integer. Let \(Z_{p}=\{-\frac{p}{2}+1,\ldots,\frac{p}{2}\}\) be the ring of integers modulo \(p\). Let \(X^{N}+1\) be the (2N)-th cyclotomic polynomial. Let \(q\) be a prime and define \(R_{q,N}=R/qR\equiv\mathbb{Z}_{q}\left[X\right]/(X^{N}+1)\equiv\mathbb{Z}[X]/(X ^{N}+1,q)\), similarly for \(R_{p,N}\). Vectors are represented by lowercase bold letters, such as \(\mathbf{a}\). The \(i\)-th entry of a vector \(\mathbf{a}\) is denoted as \(a_{i}\). The inner product between vectors \(\mathbf{a}\) and \(\mathbf{b}\) is denoted by \(\left\langle\mathbf{a},\mathbf{b}\right\rangle\). A polynomial \(m(X)\) in \(R_{p,N}\) corresponds to a message vector of length \(N\) over \(Z_{p}\), and the ciphertext for \(m(X)\) will be a pair of polynomials in \(R_{q,N}\). Detailed fully homomorphic encryption schemes are included in Appendix B.
When referring to a probability distribution, we indicate that a value \(d\) is drawn from the distribution \(\mathcal{D}\) as \(d\sim\mathcal{D}\).
Theorem 2.1: **(Programmable bootstrapping [12])** _TFHE/FHEW bootstrapping supports the computation of any function \(g:Z_{p}\to Z_{p}\) satisfying \(g(v+\frac{p}{2})=-g(v)\). We refer to \(g\) as the program function of bootstrapping. An LWE ciphertext \(LWE_{s}(m)=(\mathbf{a},b)\), where \(m\in Z_{p}\), \(\mathbf{a}\in Z_{p}^{N}\) and \(b\in Z_{p}\), can be bootstrapped into \(LWE_{s}(g(m))\) with very low noise._
This process relies on the Homomorphic Accumulator [30], denoted as \(ACC_{g}\). Using the notations of [30], the bootstrapping process can be broken down into the following steps:
-**Initialize**: Set the initial polynomial:
\[ACC_{g}\left[-b\right]=X^{-b}\cdot\sum_{i=0}^{N-1}g\left(\left\lfloor\frac{i \cdot p}{2N}\right\rfloor\right)X^{i}\bmod X^{N}+1. \tag{5}\]
-**Blind Rotation**: \(ACC_{g}\xleftarrow{+}a_{i}\cdot\mathrm{ek}_{i}\) modifies the content of the accumulator from \(ACC_{g}\left[-b\right]\) to \(ACC_{g}\left[-b+\sum a_{i}s_{i}\right]=ACC_{g}\left[-m-e\right]\), where
\[\mathrm{ek}=\left(RGSW\left(X^{s_{1}}\right),\ldots,RGSW\left(X^{s_{n}} \right)\right),\]
which is a list of materials over \(R_{q}^{N}\).
-**Sample Extraction**: \(ACC_{g}=\left(a(X),b(X)\right)\) is the RLWE ciphertext with component polynomials \(a(X)=\sum\limits_{0\leq i\leq N-1}a_{i}X^{i}\) and \(b(X)=\sum\limits_{0\leq i\leq N-1}b_{i}X^{i}\). The extraction operation outputs the LWE ciphertext:
\[RLWE_{z}\stackrel{{\text{Sample Extraction}}}{{ \longrightarrow}}LWE_{z}(g(m))=(\mathbf{a},b_{0}),\]
where \(\mathbf{a}=(a_{0},\ldots,a_{N-1})\) is the coefficient vector of \(a(X)\), and \(b_{0}\) is a coefficient of \(b(X)\).
**-Key Switching**: Key switching transforms the LWE instance's key from the original vector \(\mathbf{z}\) to the vector \(\mathbf{s}\) without changing plaintext message \(m\):
\[LWE_{\mathbf{z}}(g(m))\stackrel{{\text{Key Switching}}}{{ \longrightarrow}}LWE_{\mathbf{s}}(g(m)).\]
Taking a bootstrapping key and a key switching key as input, bootstrapping can be defined as:
\[\text{bootstrapping}=\textbf{KeySwitch}\circ\textbf{Extract}\circ\textbf{ BlindRotate}\circ\textbf{Initialize} \tag{6}\]
With program function \(g\), bootstrapping takes ciphertext \(LWE_{s}(m)\) as input, and output \(LWE_{s}(g(m))\) with the original secret key \(s\):
\[\text{bootstrapping}(LWE_{s}(m))=LWE_{s}(g(m)). \tag{7}\]
This property will be extensively utilized in our context. Since bootstrapping does not alter the secret key, we will use the shorthand \(LWE(m)\) to refer to an LWE ciphertext in the rest.
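The table-lookup mechanism behind programmable bootstrapping can be illustrated in plaintext: the test polynomial of Equation 5 stores \(g\), and a negacyclic rotation by \(X^{-k}\) with \(k=2Nm/p\) brings \(g(m)\) into the constant coefficient. The sketch below uses an identity lookup table for clarity; a real program function must additionally satisfy the negacyclic condition \(g(v+\frac{p}{2})=-g(v)\), and no encryption or noise is modeled here.

```python
import numpy as np

def program_lut(g, N, p):
    # test polynomial of Eq. 5: T_i = g(floor(i * p / (2N))), i = 0..N-1
    return np.array([g((i * p) // (2 * N)) for i in range(N)])

def mul_by_xk(c, k):
    # multiply a polynomial by X^k modulo X^N + 1 (negacyclic rotation)
    N = len(c)
    k %= 2 * N
    sign = 1
    if k >= N:                      # X^N = -1
        k, sign = k - N, -1
    out = np.empty_like(c)
    out[k:] = sign * c[:N - k]
    out[:k] = -sign * c[N - k:]
    return out

# rotating by X^{-k} (= X^{2N-k}) leaves g(m) in the constant coefficient
N, p = 1024, 1024
g = lambda v: v                     # identity LUT, for illustration only
T = program_lut(g, N, p)
for m in range(0, p // 2, 37):
    k = 2 * N * m // p
    assert mul_by_xk(T, (2 * N - k) % (2 * N))[0] == g(m)
```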
## 3 Discretized Spiking Neural Network
There are two parts to this section. Firstly, we present a simple discretization method to convert SNNs into Discretized Spiking Neural Networks(DiSNNs). We demonstrate that this method guarantees controllable errors for both the IF neuron model and the LIF neuron model. Furthermore, we provide estimations for the extrema of these two discretization models which can be used to determine the size of the plaintext space. Secondly, we propose an efficient method for computing the **Fire** and **Reset** functions of the SNN neuron model on the ciphertext, denoted as **FHE-Fire** and **FHE-Reset**.
Definition 1: A Discretized Spiking Neural Network (DiSNN) is a feed-forward spiking neural network in which all weights, as well as the inputs and outputs of the neuron model, are discretized into a finite ring \(Z_{p}\).
We denote this discretization method as the function:
\[\hat{x}\triangleq\text{Discret}(x,\tau)=\lfloor x\cdot\tau\rceil, \tag{8}\]
where \(\hat{x}\) represents the value \(x\) after discretization; the precision of the discretization is controlled by \(\tau\), with a larger \(\tau\) resulting in finer discretization. Equation 2 can then be discretized as follows (\(i\) is omitted as in Equation 4):
\[\begin{split}\hat{I}[t]&=\Sigma\hat{o}_{ij}S_{j}[t ],\\ \hat{H}[t]&=\hat{V}[t-1]+f(\hat{V}[t-1],\hat{I}[t]), \\ S[t]&=\textbf{Fire}\left(\hat{H}[t]-\hat{V}_{threshold }\right),\\ \hat{V}[t]&=\textbf{Reset}(\hat{H}[t])=\begin{cases} \hat{V}_{reset},&\text{if}\quad\hat{H}[t]\geq\hat{V}_{threshold},\\ \hat{H}[t],&\text{if}\quad\hat{V}_{reset}\leq\hat{H}[t]{<}\hat{V}_{threshold },\\ \hat{V}_{reset},&\text{if}\quad\hat{H}[t]\leq\hat{V}_{reset}.\end{cases}\end{split} \tag{9}\]
This system of equations clearly shows the advantage of SNNs with respect to discretization. The binary spike signals, with values of \(0\) and \(1\), not only avoid the loss incurred by discretizing the activations themselves but also effectively limit the errors caused by discretized weights. The two crucial parameters of SNNs, \(V_{threshold}\) and \(V_{reset}\), are generally set as integers, eliminating any discretization error. In fact, the only aspect that requires attention is the discretization of the weights. An estimate of the upper bound on the discretization error is given in the assertion below.
Proposition 1: _For the IF neuron model and LIF neuron model, the discretization error is independent of the scaling factor \(\tau\) and only depends on the number of spikes._
Proof: For the IF and LIF neuron models, let a linear function \(f\) denote their charging processes. We have,
\[\tau f(V[t-1],I[t])=f(\tau V[t-1],\tau I[t])=f(\hat{V}[t-1],\hat{I}[t]).\]
This means that the discretization error is only concentrated in \(\hat{I}[t]\),
\[\max_{i}|\hat{I}_{i}[t]-\tau I_{i}[t]| =\max_{i}|\sum_{j}(\tau w_{ij}-\hat{w}_{ij})S_{j}[t]|\] \[\leq\max_{i,j}|\tau w_{ij}-\hat{w}_{ij}|\cdot|\sum_{j}S_{j}[t]|\] \[\leq\frac{1}{2}\cdot|\sum_{j}S_{j}[t]|.\]
As shown above, the discretization error is in fact independent of \(\tau\) and proportional to the number of spikes.
Proposition 1 provides an upper bound on the overall discretization error, where \(\frac{1}{2}\) represents the maximum value of individual weight discretization error. However, in practical situations, not all weights will reach the maximum error. From a mathematical expectation perspective, the discretization error can be further reduced. The proof is provided by the following Proposition.
Proposition 2: _For the IF and LIF neuron models, assuming the weight rounding errors \(\tau w_{ij}-\hat{w}_{ij}\) follow a uniform distribution on \([-\frac{1}{2},\frac{1}{2}]\) and the number of spikes follows a Poisson distribution with intensity \(\lambda\), the mathematical expectation of the discretization error is \(\lambda/4\)._
Proof: Denote the random variable \(\tau w_{ij}-\hat{w}_{ij}\) by \(\xi_{j}(\omega)\); it follows a uniform distribution on the interval \([-\frac{1}{2},\frac{1}{2}]\). Set \(N(\omega)=\sum_{j}S_{j}[t]\), which is a Poisson random variable with intensity \(\lambda\). Note that \(\mathbb{E}(|\xi_{i}|)=\frac{1}{4}\), \(\mathbb{P}(N=n)=e^{-\lambda}\cdot\frac{\lambda^{n}}{n!}\) and \(\sum\limits_{n=0}^{\infty}\mathbb{P}(N=n)=1\). Then, the expectation of the error can be written as follows:

\[\mathbb{E}|\hat{I}[t]-\tau I[t]| \approx\mathbb{E}\Big(\sum\limits_{j=1}^{N(\omega)}|\xi_{j}|\Big) =\sum\limits_{n=0}^{\infty}\mathbb{E}\Big(\sum\limits_{i=1}^{n}|\xi_{i}|\;\Big|\;N(\omega)=n\Big)\cdot\mathbb{P}(N=n) =\sum\limits_{n=0}^{\infty}\frac{n}{4}\cdot e^{-\lambda}\cdot\frac{\lambda^{n}}{n!}=\frac{\lambda}{4}.\]
Notice that in the proof, \(\mathbb{E}(|\xi_{i}|\mid N(\omega)=n)=\mathbb{E}(|\xi_{i}|)\) follows from the independence between \(\xi_{i}(\omega)\) and \(N(\omega)\).
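A quick Monte Carlo check of Proposition 2, estimating \(\mathbb{E}\sum_{j}|\xi_{j}|\) (the quantity bounded in the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, trials = 8.0, 20000
n_spikes = rng.poisson(lam, trials)                 # N ~ Poisson(lambda)
errors = [np.abs(rng.uniform(-0.5, 0.5, n)).sum() for n in n_spikes]
print(np.mean(errors), lam / 4)                     # both are close to 2.0
```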
We can obtain a similar conclusion as Proposition 1: the number of spikes, not the parameter \(\tau\), affects the magnitude of the error. Although the Proposition above indicates that the size of \(\tau\) does not affect the growth of the error, we cannot infinitely increase \(\tau\) in order to improve accuracy. This is because larger \(\tau\) implies a larger message space, which puts more computational burden on homomorphic encryption. In Proposition 3, we specify the relationship between them.
Proposition 3: _For the IF and LIF models, the maximum and minimum values generated during the computation process are controlled by \(\tau\), the number of spikes, and the extremal values of the weights._
Proof: From equation 2, for the IF model case, it can be observed that the range of membrane potential \(V[t]\) is controlled by the **Reset** function of the neuron model, bounded within \([V_{reset},V_{threshold}]\). The maximum and minimum values can only occur in the variable \(\hat{H}\). Therefore, the extreme values satisfy the following inequalities:
\[Max \triangleq\max(\hat{H}[t])=\max(\hat{V}[t]+\hat{I}[t])\leq\tau\Big(V_{threshold}+\max_{i,j}|w_{ij}|\cdot\sum\limits_{j}S_{j}[t]\Big),\] \[Min \triangleq\min(\hat{H}[t])\geq-|\hat{V}_{reset}|-|\hat{I}[t]|\geq-\tau\Big(|V_{reset}|+\max_{i,j}|w_{ij}|\cdot\sum\limits_{j}S_{j}[t]\Big).\]
Besides, the LIF model case can be proved in a similar way. We denote the upper and lower bounds by \(\alpha\) and \(\beta\), respectively.
Corollary 1: _In general, when the neuron model has \(V_{reset}=0\), the relation between the maximum and minimum values is given by:_
\[\beta=-\alpha+\hat{V}_{threshold}, \tag{10}\]
_where \(\alpha,\beta\) represent the upper and lower bounds of DiSNN from Proposition 3._
This seemingly trivial but highly important conclusion ensures the correctness of homomorphic evaluation. We will encounter it in subsequent sections.
Figure 1(a) illustrates a finite field \(Z_{p}\), which forms a cyclic group. Similar to a circular arrangement, if a number exceeds \(p/2\), it wraps around to the opposite end, as well as values below \(-\frac{p}{2}+1\). Being defined on \(Z_{p}\), the intermediate values during DiSNN computation must be accommodated in \(Z_{p}\), or else exceeding the boundaries will result in computational errors. This means that the inequality \(-\frac{p}{2}+1\leq\beta{<}\alpha{<}\frac{p}{2}\) must be satisfied. However, large \(p\) leads to a decrease in computational efficiency for homomorphic encryption. Therefore, selecting an appropriate \(\tau\) that strikes a balance between computational efficiency and discretization accuracy becomes a crucial consideration.
## 4 Homomorphic Evaluation of DiSNN
This section provides a detailed analysis of how DiSNN performs prediction on homomorphically encrypted images. As the prediction procedure in Figure 1(b) shows, all operations performed on ciphertext can be summarized as **Fire**, **Reset**, and Multisum (scalar multiplication and summation). Poisson encoding merely involves
Figure 1: (a) The circle represents the message space \(Z_{p}\), while the numbers on the circumference represent the finite field of the ciphertext space, conventionally denoted as \(Z_{q}\). The mapping relationship between \(Z_{p}\) and \(Z_{q}\) is reflected in the partitioning of the circle. Operations on both plaintext and ciphertext follow the rules of finite fields: values exceeding the bounds undergo modular arithmetic, wrapping around to the opposite end. (b) The output of an SNN corresponds to the firing frequency within a specific time window of the output layer, where the magnitude of the firing rate reflects the response strength towards a particular category. Thus, the network is required to operate for a designated duration, utilizing the average firing rate after \(T\) time steps as the classification score. During each time step, the image sequentially traverses the Poisson layer, the SNN hidden layer, and the SNN output layer.
comparing magnitudes, which can be computed using the **Fire** function. Multisum is naturally supported in FHE, so the challenges lie in performing **Reset** and **Fire** functions on ciphertext since they are non-polynomial functions. We leverage the programmable bootstrapping technique introduced by Chillotti et al. [12] to execute the **Fire** and **Reset** functions of the SNN model while simultaneously refreshing the noise of the ciphertext.
### Homomorphic Computation of Multisum
We select a neuron from the SNN layer, and its input is expressed as Equation 11, which is correct, as long as the noise carried by the ciphertext does not exceed the upper bound of the noise shown in the following Remark.
\[\begin{split}\sum\limits_{j}\hat{w}_{ij}LWE(S_{j}[t])& =\sum\limits_{j}LWE(\hat{w}_{ij}\cdot S_{j}[t])\\ &=LWE(\sum\limits_{j}\hat{w}_{ij}S_{j}[t])\\ &=LWE(\hat{I}_{i}[t]).\end{split} \tag{11}\]
It is observed that the multiplication and addition operations on ciphertexts amplify the noise carried by the ciphertexts. To ensure the correctness of the above computation, i.e.,
\[Dec(\sum\limits_{j}\hat{w}_{ij}LWE(S_{j}[t]))=Dec(LWE(\hat{I}_{i}[t])),\]
two conditions need to be satisfied: (1) \(\sum\limits_{j}\hat{w}_{ij}S_{j}[t]\in[-\frac{p}{2},\frac{p}{2})\); (2) the noise does not grow beyond the noise bound. The first condition is easy to satisfy by choosing a sufficiently large message space \(Z_{p}\). To address the noise issue, let us assume that \(LWE(S_{j}[t])\) has an initial noise \(\sigma\) (as each spike is generated via bootstrapping). After the multiplication and addition operations, the noise in the ciphertext grows to \(\sum\limits_{j}|\hat{w}_{ij}|\cdot\sigma\), which is proportional to the discretization parameter \(\tau\). One way to control the noise growth is to decrease \(\tau\), which may lead to a decrease in accuracy. Another approach is to trade off the security level by reducing the initial noise \(\sigma\), where increasing the dimension \(n\) of the LWE problem could remedy the situation [2].
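A toy (insecure, illustrative-parameter) LWE implementation makes Equation 11 concrete: scalar multiplication and addition act componentwise on ciphertexts, and decryption recovers the Multisum as long as the accumulated noise stays below \(\frac{q}{2p}\).

```python
import numpy as np

q, p, n = 2**20, 1024, 512
rng = np.random.default_rng(0)
s = rng.integers(0, 2, n)                        # binary secret key

def lwe_encrypt(m):
    a = rng.integers(0, q, n)
    e = int(rng.integers(-4, 5))                 # small fresh noise
    return a, int((a @ s + (q // p) * m + e) % q)

def lwe_decrypt(ct):
    a, b = ct
    return round(((b - int(a @ s)) % q) / (q // p)) % p

def multisum(weights, cts):                      # Eq. 11
    A = sum(w * ct[0] for w, ct in zip(weights, cts)) % q
    b = sum(w * ct[1] for w, ct in zip(weights, cts)) % q
    return A, b

spikes = [1, 0, 1, 1]                            # binary spike plaintexts
w_hat = [3, 7, 2, 5]                             # discretized weights
ct = multisum(w_hat, [lwe_encrypt(x) for x in spikes])
assert lwe_decrypt(ct) == sum(w * x for w, x in zip(w_hat, spikes))
```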
### Homomorphic Computation of Fire Function
The **Fire** function is a non-polynomial function, so we must rely on Theorem 2.1 to evaluate it and refresh the ciphertext noise simultaneously. We propose a solution to implement the **Fire** function on ciphertexts, referred to as the **FHE-Fire** function, which can be realized as:
\[\begin{split}\textbf{FHE-Fire}(LWE(m))&\triangleq bootstrap(LWE(m))+1\\ &=\begin{cases}LWE(2),&\text{if }m\in[0,\frac{p}{2}),\\ LWE(0),&\text{if }m\in[-\frac{p}{2},0)\end{cases}\\ &=LWE(2\cdot Spike).\end{split} \tag{12}\]
by defining the program function \(g\) of bootstrapping as:
\[g(m)\triangleq\begin{cases}1,&\text{if }m\in[0,\frac{p}{2}),\\ -1,&\text{if }m\in[-\frac{p}{2},0).\end{cases} \tag{13}\]
In this case, the spike signal is mapped to \(\{0,2\}\), doubling its original value. This adds a slight complication to the subsequent fully connected layer's computation. However, it can be easily overcome: since the spike signal is now doubled, we ensure the consistency of the final result by halving the weights, as the following equation shows:
\[LWE(\hat{I}) =\sum_{j}\hat{w}_{ij}\cdot LWE(S_{j})\approx\sum_{j}\lfloor\frac{ \hat{w}_{ij}}{2}\rfloor\cdot LWE(2\cdot S_{j}) \tag{14}\] \[\approx\sum_{j}\text{Discret}(w_{ij},\frac{\tau}{2})\cdot LWE (2\cdot S_{j}).\]
This approach also benefits the control of ciphertext noise by halving the discretization parameter \(\tau\). Since ciphertext \(LWE(2\cdot S_{j})\) obtained through bootstrapping has very low initial noise \(\sigma\), our method reduces the noise by half, which can be easily proven using Remark 1. This allows us to have lower noise growth and enables us to make more confident parameter choices.
### Homomorphic Computation of Reset Function
The **Reset** function describes two characteristics of the neuron model's membrane potential. First, when the membrane potential \(V\) exceeds \(V_{threshold}\), a spike is emitted and the membrane potential returns to \(V_{reset}\). Second, the membrane potential cannot be lower than \(V_{reset}\), so if such a value is generated during the computation, it must also be set to \(V_{reset}\).
For convenience, \(V_{reset}\) is often set to 0; a non-zero \(V_{reset}\) can be shifted to 0 by translation. Similarly to the **Fire** function, here we set the program function \(g\) of bootstrapping as:
\[g(m)\triangleq\begin{cases}0,&\text{if }m\in[\hat{V}_{threshold},\frac{p}{2}),\\ m,&\text{if }m\in[0,\hat{V}_{threshold}),\\ 0,&\text{if }m\in[\hat{V}_{threshold}-\frac{p}{2},0),\\ -(m+\frac{p}{2}),&\text{if }m\in[-\frac{p}{2},\hat{V}_{threshold}-\frac{p}{2}), \end{cases} \tag{15}\]
where \(g(x)=-g(x+\frac{p}{2})\) must be satisfied. Then, the **FHE-Reset** function can be computed as follows:
\[\textbf{FHE-Reset}(LWE(m)) \triangleq bootstrap(LWE(m)) \tag{16}\] \[=\begin{cases}&LWE(0)\,\ m\in[\hat{V}_{threshold},\frac{p}{2}),\\ &LWE(m)\,\ m\in[0,\hat{V}_{threshold}),\\ &LWE(0)\,\ m\in[\hat{V}_{threshold}-\frac{p}{2},0),\\ &LWE(-(m+\frac{p}{2}))\,\ m\in[-\frac{p}{2},\hat{V}_{threshold}-\frac{p}{2}). \end{cases}\]
Note that \(g\) does not match the **Reset** function on the interval \([-\frac{p}{2},\hat{V}_{threshold}-\frac{p}{2})\), which would lead to computation errors. Therefore, the computation must be evaluated within the interval \([\hat{V}_{threshold}-\frac{p}{2},\frac{p}{2})\), i.e., \(\hat{V}_{threshold}-\frac{p}{2}\leq\beta\leq\alpha<\frac{p}{2}\) from Proposition 3. This condition can be simplified to \(\alpha<\frac{p}{2}\), based on the insights provided by Equation 10. It serves as a necessary requirement for selecting the message space, as it ensures that the intermediate values naturally fall within the correct range without additional conditions.
Moreover, the **FHE-Reset** function not only computes the **Reset** function on the ciphertext but also refreshes the noise accumulated during the computation of \(LWE(\hat{V}[t])\). This means that we do not need to worry about noise growth, and an arbitrary number of computations can be supported.
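The program functions of Equations 13 and 15 are easy to state in code, together with a sanity check of the negacyclic requirement from Theorem 2.1 (the threshold value here is illustrative):

```python
p, v_th = 1024, 100            # message space size and a sample threshold

def g_fire(m):                 # Eq. 13
    return 1 if 0 <= m < p // 2 else -1

def g_reset(m):                # Eq. 15 (with V_reset = 0)
    if v_th <= m < p // 2:
        return 0
    if 0 <= m < v_th:
        return m
    if v_th - p // 2 <= m < 0:
        return 0
    return -(m + p // 2)       # m in [-p/2, v_th - p/2)

# sanity check of the negacyclic requirement g(v) = -g(v + p/2)
for v in range(-p // 2, 0):
    assert g_fire(v) == -g_fire(v + p // 2)
    assert g_reset(v) == -g_reset(v + p // 2)
```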
## 5 Experiment
In this section, we build a practical FHE-DiSNN network and demonstrate its advantages and limitations by comparing it with the original SNN in experiments. There are three parts in this section. First, we determine the simulation model and network depth of our network and train a convergent SNN. Second, we convert the well-trained SNN into DiSNN with the optimal parameters determined experimentally. Finally, in the third part, we conduct experiments to evaluate the accuracy and efficiency of FHE-DiSNN in performing forward propagation on encrypted images. This assessment provides insights into the performance of FHE-DiSNN in a secure and encrypted environment.
### Building an SNN in the clear
We select a \(784\) (\(28\times 28\))-dimensional Poisson encoding layer as the input layer, and IF-model layers with \(30\) and \(10\) neurons as the hidden and output layers, respectively. We utilize the Spikingjelly [16] library, a PyTorch-based framework for SNNs. Training SNNs is an intriguing research direction in its own right; in this study, the surrogate gradient method is chosen. Other commonly used training methods, such as ANN-to-SNN conversion and unsupervised training with STDP, are not discussed here. In general, the network needs to run for a period of time, taking the average firing rate over \(T\) time steps as the basis for classification.
The prediction process is essentially a forward propagation of the trained model, with the key difference that gradient approximation is not required. The predictive accuracy improves as \(T\) increases, but with diminishing returns, while the time consumption grows linearly. Hence, it is vital to keep \(T\) as small as possible without sacrificing accuracy. Since bootstrapping is the most time-consuming operation in encrypted computations, we provide an estimate of the number of bootstrapping operations in FHE-DiSNN.
Proposition 4: _For a single-layer FHE-DiSNN network with \(n\)-dimensional input, \(k\)-dimensional hidden layer, \(m\)-dimensional output, and simulation time T, the number of required bootstrapping operations is \((n+2k+2m)T\)._
Proof: In FHE-DiSNN, each Poisson encoding requires one **FHE-Fire** call, and each neuron's firing and resetting require one **FHE-Fire** and one **FHE-Reset** call, respectively. In a single simulation step, there are \(n\) Poisson encoding operations and \(k+m\) **FHE-Fire** plus \(k+m\) **FHE-Reset** operations, i.e., \(n+2k+2m\) bootstrappings in total. With \(T\) repeated simulation steps, the total number of bootstrappings is given by:
\[\text{nums}=(n+2k+2m)\cdot T.\]
We can reduce the number of bootstrappings in two ways to improve experimental efficiency. First, we can encrypt the message after Poisson encoding, which eliminates the \(nT\) bootstrappings of the encoding layer; for our network (\(n=784\), \(k=30\), \(m=10\)), this reduces the count per prediction from \(864T\) to \(80T\). Second, we can shorten the simulation time \(T\). The curve in Figure 2(a) shows that the optimal trade-off is achieved at \(T=10\), where the accuracy is comparable to the highest level achieved at \(T=20\).
### Constructing an FHE-DiSNN from SNN
The accuracy of DiSNN improves as \(\tau\) increases, but with diminishing returns. Moreover, larger \(\tau\) leads to linear growth of the noise, resulting in higher computational costs. Therefore, selecting an appropriate value for \(\tau\) is crucial. Following the design of the FHE-DiSNN algorithm, we conduct experiments to show the relationship between \(\tau\) and the prediction accuracy of DiSNN for \(T=10\) and \(T=20\). The curve is plotted in Figure 2(b); the results indicate that \(\tau=10\) and \(\tau=20\) achieve the optimal trade-off and the highest accuracy, respectively.
The next step is to choose an appropriate size \(p\) for the message space. According to Equation 10, the requirement can be simplified to \(\alpha<\frac{p}{2}\). The maximum value depends
Figure 2: (a) The curve (orange) illustrates the influence of simulation time \(T\) on the prediction accuracy of the original SNN. By \(T=10\), the SNN has reached an accuracy of \(95.3\%\), and at \(T=20\), it has essentially achieved its maximum level of accuracy. (b) The graph illustrates the correlation between \(\tau\) and the prediction accuracy in DiSNN. Note that the light blue and lavender shading in the figure represents the maximum and minimum fluctuation intervals of the results of the five independent experiments under the conditions of \(T=20\) and \(T=10\), respectively, while the blue and purple lines represent the average values of the experimental results.
on \(\max\limits_{i,j}(\sum\limits_{j}w_{ij}S_{j}[t])\), which is a fixed value for a well-trained network and can be pre-determined. Through several experiments, we have \(\alpha\approx 50\tau\) for the first layer and approximately \(10\tau\) for the second layer. A message space size of \(p=1024\) is enough to accommodate the DiSNN with \(\tau=10\), and \(p=2048\) for \(\tau=20\). For our experiments, we have selected the STD128 parameter set [30] shown as follows:
-Size of message space: \(p=1024\) or \(2048\)
-Dimension of LWE ciphertext: \(n=512\)
-Degree of the polynomials in the ring: \(N=1024\)
-Bits of the decomposition basis of KeySwitching: \(B_{ks}=14\)
-Bits of the decomposition basis for TGSW ciphertext: \(B_{g}=7\)
### Exhibiting experimental results
We conduct the following experimental process on an Intel Core i7-7700HQ CPU @ 2.80 GHz:
**1.** The grayscale handwritten digit image is encoded by the Poisson encoding layer.
**2.** The Poisson-encoded image is then encrypted into LWE ciphertexts.
**3.** The ciphertext is multiplied by the discretized plaintext weights and passed into the SNN layer.
**4.** The IF neuron model in the SNN layer calculates the charging, firing, and reset procedure on the ciphertext. The bootstrapping operations involved in this process are accelerated using FFT.
**5.** Steps **1-4** are repeated \(T\) times, and the resulting outputs are accumulated as classification scores.
**6.** Decrypt the scores and select the highest one as the classification result.
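The following plaintext simulation sketches steps 1-6 end to end. In the encrypted pipeline, each fire/reset below becomes one **FHE-Fire**/**FHE-Reset** bootstrapping and the matrix products become Multisums over LWE ciphertexts (Equation 11); the threshold and weights here are illustrative.

```python
import numpy as np

def if_step(V, I, v_th, v_reset=0.0):
    H = V + I                                    # charge (IF model)
    S = (H >= v_th).astype(float)                # fire
    V = np.where(H >= v_th, v_reset,
        np.where(H < v_reset, v_reset, H))       # reset / clamp
    return S, V

def disnn_predict(image, W1, W2, T=10, v_th=10.0, seed=0):
    """Plaintext simulation of the prediction loop; `image` is a
    (784,)-shaped array of intensities in [0, 1]."""
    rng = np.random.default_rng(seed)
    V1 = np.zeros(W1.shape[0]); V2 = np.zeros(W2.shape[0])
    scores = np.zeros(W2.shape[0])
    for _ in range(T):                           # steps 1-5
        x = (rng.random(image.size) < image.ravel()).astype(float)
        S1, V1 = if_step(V1, W1 @ x, v_th)
        S2, V2 = if_step(V2, W2 @ S1, v_th)
        scores += S2                             # accumulate firing counts
    return int(np.argmax(scores))                # step 6
```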
In Table 1, we show the experimental results, with the **Fire** and **Reset** functions in Step 4 implemented according to our method in Section 4.
| (T, \(\tau\)) | FHE-DiSNN | DiSNN | SNN | Time per step | Time per image |
| --- | --- | --- | --- | --- | --- |
| (10, 10) | 94.40% | 95.00% | 95.30% | 0.83 s | 8.31 s |
| (10, 20) | 94.40% | 95.00% | 95.30% | 0.86 s | 8.67 s |
| (20, 10) | 94.80% | 95.10% | 95.70% | 0.81 s | 16.21 s |
| (20, 20) | 95.10% | 95.30% | 95.70% | 0.79 s | 15.97 s |
| | FHE-DiNN | DiNN | NN | Time per step | Time per image |
| 30 neurons | 93.46% | 93.55% | 94.46% | 0.49 s | 0.49 s |

Table 1: The experiment results. The table presents results for four different parameter sets \((T,\tau)\). The first three result columns give the prediction accuracy of FHE-DiSNN, DiSNN, and SNN, respectively; for the original SNN, only the parameter \(T\) influences the prediction performance. The last two columns display the time consumed by FHE-DiSNN for a single time step and for a complete prediction, which, as stated by Proposition 4, is directly proportional to the simulation time \(T\). The last two rows excerpt the experimental results of FHE-DiNN [4] for comparison; the SNN and NN have the same structure.
The results reveal, as emphasized at the beginning of the article, that discretization has minimal impact on SNN. The original SNN, with 30 hidden neurons, achieves a stable accuracy rate of around 95.7%, outperforming many second-generation neural networks. DiSNN also demonstrates a commendable accuracy rate of 95.3% (with the best parameters). Importantly, it incurs a loss of only 0.4% compared to the original SNN, showcasing the inherent advantages of DiSNN. Furthermore, FHE-DiSNN performs impressively, consistently surpassing 94% accuracy across all four parameter sets. In particular, the (20,20) parameter set demonstrates performance comparable to DiSNN. However, FHE-DiSNN suffers from time inefficiency because the number of bootstrapping operations grows with the simulation time \(T\): each prediction takes 8 (16) seconds for \(T=10\) (\(T=20\)), with a single simulation step consuming about 0.8 seconds on average.
During the experimental process, we observe that the number of spike firings differs between FHE-DiSNN and DiSNN during computation. This suggests that certain ciphertexts encounter noise overflow. However, this has a minimal effect on the final classification outcomes: a slight noise overflow only causes a deviation of \(\pm 1\), and abnormal spike firings occur only when the value is at the edge of \(\hat{V}_{\text{threshold}}\), which happens with small probability. Additionally, individual instances of abnormal spike firings are effectively averaged out over the \(T\) simulation steps. This indicates that FHE-DiSNN exhibits a considerable level of tolerance towards noise, which is a highly intriguing experimental finding.
## 6 Conclusion
This paper serves as an initial exploration of the fusion of SNNs (spiking neural networks) with homomorphic encryption and presents a research avenue brimming with potential. This combination offers both low energy consumption on the machine-learning side and data privacy on the security side. We estimate an upper bound on the maximum discretization error of DiSNN and show that its expected error is \(\lambda/4\) from a mathematical-expectation perspective; experimental results further validate this finding. Furthermore, we leverage TFHE bootstrapping to construct the **FHE-Fire** and **FHE-Reset** functions, enabling support for computations of unlimited depth. Finally, our proposed framework is easy to scale and extend to more complicated neuron models.
However, there remain challenging tasks for further research, such as more complex neuron equations, different encoding methods, and parallel computing. Moreover, as highlighted by Proposition 4, Poisson encoding introduces numerous bootstrapping operations (equal to the dimension of the input data), which can dominate the evaluation time; reducing this cost is one of our future directions.
|
2303.17045 | Training Neural Networks is NP-Hard in Fixed Dimension | We study the parameterized complexity of training two-layer neural networks
with respect to the dimension of the input data and the number of hidden
neurons, considering ReLU and linear threshold activation functions. Albeit the
computational complexity of these problems has been studied numerous times in
recent years, several questions are still open. We answer questions by Arora et
al. [ICLR '18] and Khalife and Basu [IPCO '22] showing that both problems are
NP-hard for two dimensions, which excludes any polynomial-time algorithm for
constant dimension. We also answer a question by Froese et al. [JAIR '22]
proving W[1]-hardness for four ReLUs (or two linear threshold neurons) with
zero training error. Finally, in the ReLU case, we show fixed-parameter
tractability for the combined parameter number of dimensions and number of
ReLUs if the network is assumed to compute a convex map. Our results settle the
complexity status regarding these parameters almost completely. | Vincent Froese, Christoph Hertrich | 2023-03-29T22:16:52Z | http://arxiv.org/abs/2303.17045v2 | # Training Neural Networks is NP-Hard in Fixed Dimension
###### Abstract
We study the parameterized complexity of training two-layer neural networks with respect to the dimension of the input data and the number of hidden neurons, considering ReLU and linear threshold activation functions. Albeit the computational complexity of these problems has been studied numerous times in recent years, several questions are still open. We answer questions by Arora et al. [ICLR '18] and Khalife and Basu [IPCO '22] showing that both problems are NP-hard for two dimensions, which excludes any polynomial-time algorithm for constant dimension. We also answer a question by Froese et al. [JAIR '22] proving W[1]-hardness for four ReLUs (or two linear threshold neurons) with zero training error. Finally, in the ReLU case, we show fixed-parameter tractability for the combined parameter number of dimensions and number of ReLUs if the network is assumed to compute a convex map. Our results settle the complexity status regarding these parameters almost completely.
## 1 Introduction
Neural networks with rectified linear unit (ReLU) activations are arguably one of the most fundamental models in modern machine learning [2, 13, 21]. To use them as predictors on unseen data, one usually first fixes an architecture (the graph of the neural network) and then optimizes the weights and biases such that the network performs well on some known training data, with the hope that it will then also generalize well to unseen test data. While the ultimate goal in applications is generalization, _empirical risk minimization_ (that is, optimizing the training error) is an important step in this pipeline and understanding its computational complexity is crucial to advance the theoretical foundations of deep learning.
In this paper, we aim to understand how the choice of different meta-parameters, like the input dimension and the width of the neural network, influences the computational complexity of the training problem. To this end, we focus on two-layer neural networks, which can be seen as the standard building block also for deeper architectures.
Formally, a two-layer neural network (see Figure 1) with \(d\) input neurons, \(k\) hidden ReLU neurons, and a single output neuron computes a map
\[\phi\colon\mathbb{R}^{d}\to\mathbb{R},\quad\phi(\mathbf{x})=\sum_{j=1}^{k}a_{j} [\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}]_{+},\]
where \(\mathbf{w}_{j}\in\mathbb{R}^{d}\) and \(a_{j}\in\{-1,1\}\) are the weights between the layers, \(b_{j}\) are the biases at the hidden neurons, and \([x]_{+}\coloneqq\max(0,x)\) is the _rectifier_ function. Notice that restricting \(a_{j}\) to \(\{-1,1\}\) is without loss of generality because we can normalize by pulling any nonnegative factor into \(\mathbf{w}_{j}\) and \(b_{j}\). We also study neural networks with _linear threshold activation_ in Section 5.
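For concreteness, the map \(\phi\) can be evaluated directly; the following NumPy snippet is merely a transcription of the formula above (not code from the paper).

```python
import numpy as np

def phi(x, W, b, a):
    """Two-layer ReLU network: phi(x) = sum_j a_j * [w_j . x + b_j]_+ ,
    with W of shape (k, d), biases b of shape (k,), signs a in {-1, +1}^k."""
    return float(a @ np.maximum(W @ x + b, 0.0))

# Tiny example with k = 2 hidden ReLUs in d = 2 dimensions.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, -1.0])
a = np.array([1.0, -1.0])
print(phi(np.array([2.0, 3.0]), W, b, a))  # 2.0 - max(0, 3.0 - 1.0) = 0.0
```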
Given training data \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\) with labels \(y_{1},\ldots,y_{n}\in\mathbb{R}\), the task of training such a network is to find \(\mathbf{w}_{j},b_{j}\), and \(a_{j}\) for each \(j\in[k]\) such that the training error \(\sum_{i=1}^{n}\ell(\phi(\mathbf{x}_{i}),\ y_{i})\) for a given loss function \(\ell\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}_{\geq 0}\) is minimized. Formally, the decision version of two-layer ReLU neural network training is defined as follows:
\begin{tabular}{l l} \(2\)L-ReLU-NN-Train(\(\ell\)) \\
**Input:** & Data points \((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\in\mathbb{R}^{d}\times \mathbb{R}\), a number \(k\in\mathbb{N}\) of ReLUs, \\ & and a target error \(\gamma\in\mathbb{R}_{\geq 0}\). \\
**Question:** & Are there weights \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\in\mathbb{R}^{d}\), biases \(b_{1},\ldots,b_{k}\in\mathbb{R}\), and coefficients \(a_{1},\ldots,a_{k}\in\{-1,1\}\) such that \\ \end{tabular}
Note that in the over-parameterized case where \(k\geq n\), the network can exactly fit any \(n\) input points1 (achieving training error \(\gamma=0\)) [24, Theorem 1]. Thus, we henceforth assume that \(k<n\).

Figure 1: Neural network architecture we study in this paper: After the input layer (left) with \(d\) input neurons, we have one hidden layer with \(k\) ReLU neurons and a single output neuron without additional activation function.
2L-ReLU-NN-Train(\(\ell\)) is known to be NP-hard [8, 15], but all known reductions require the input dimension to be part of the input. The current state-of-the-art exact algorithm for convex loss \(\ell\) is by Arora et al. [2] and runs in \(O(2^{k}n^{dk}\operatorname{poly}(L))\) time, where \(L\) is the input bit-length.
As regards the computational complexity of 2L-ReLU-NN-Train(\(\ell\)), Arora et al. [2] posed the question(s) whether a running time
_"that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant"_
is possible. That is, they asked two questions. The first corresponds to the "and" statement, which can be phrased as follows:
**Question 1**: Is there an algorithm running in \((nk)^{f(d)}\operatorname{poly}(L)\) time for some function \(f\)?
In other words, the question is whether 2L-ReLU-NN-Train(\(\ell\)) is in the complexity class XP when parameterized by \(d\). The second question corresponding to the "or" statement can then be interpreted as
**Question 2**: Is there an algorithm running in \(n^{f(d)}g(k,d)\operatorname{poly}(L)\) or \(k^{f(d)}g(n,d)\operatorname{poly}(L)\) time for some functions \(f\) and \(g\)?
We observe that the second running time is clearly possible since \(k<n\) holds by assumption, and hence the algorithm by Arora et al. [2] runs in \(g(n,d)\operatorname{poly}(L)\) time. Hence, it remains open whether \(n^{f(d)}g(k,d)\operatorname{poly}(L)\) time is possible, which is equivalent to (uniform) fixed-parameter tractability with respect to \(k\) for every constant \(d\).
Clearly, Question 1 is the stronger statement, that is, a positive answer implies a positive answer to Question 2. Arora et al. [2] conclude with
_"Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion."_
Note that Froese et al. [10] proved that, for \(k=1\), there is no algorithm running in \(g(d)n^{o(d)}\) time unless the Exponential Time Hypothesis fails. Hence, this result already partially answered the two questions above by excluding any algorithm running in \(n^{o(d)}g(d,k)\operatorname{poly}(L)\) time.
In this paper, we answer Question 1 negatively by showing NP-hardness for \(d=2\) in Theorem 1, indicating that we cannot get rid of the exponential dependence on the network size in the algorithm by Arora et al. [2] even if the dimension is fixed. As regards Question 2, we further exclude (assuming the Exponential Time Hypothesis) any algorithm running in time \(n^{o(d)}g(d,k)\operatorname{poly}(L)\) even for the case \(\gamma=0\) and prove W[1]-hardness with respect to \(d\) for \(k=4\) (Theorem 7), which answers an open question by Froese et al. [10].
We also obtain analogous hardness results if linear threshold activation functions are used instead of ReLUs. As in the ReLU case, it is well-known that training linear threshold networks is NP-hard [4, 20]. The running time of the state-of-the-art algorithm
due to Khalife and Basu [20] is polynomial in \(n\) for fixed \(d\) and \(k\), but exponential in the latter two parameters. Khalife and Basu [20] posed an analogous question to Question 1 for linear threshold networks, which we answer negatively in Corollary 8, excluding a polynomial running time even for fixed dimension. We also show that we cannot expect fixed-parameter tractability with respect to \(d\) even for \(k=1\) (Corollary 9) and also not for \(k=2\) and \(\gamma=0\) (Corollary 10).
On the positive side, we give an algorithm running in \(2^{O(k^{2}d)}\operatorname{poly}(k,L)\) time for ReLU neural networks if \(\gamma=0\) and the function computed by the network is assumed to be convex (Theorem 11). Note that this running time yields fixed-parameter tractability with respect to \(k\) for every constant \(d\), and thus answers Question 2 positively for this restricted special case.
Implications and Limitations. In the following we provide a brief discussion of the implications and limitations of our results from various perspectives.
##### Input Dimension.
Theorem 1 implies that 2L-ReLU-NN-Train(\(\ell\)) is in fact NP-hard for every fixed \(d\geq 2\). The straight-forward reduction simply pads all the input vectors with \(d-2\) zeros. Similarly, Corollary 8 holds for every fixed \(d\geq 2\).
##### Target Error.
The hardness results Theorems 1 and 7 and Corollaries 8 and 10 also hold for every fixed \(\gamma\geq 0\). The reduction is straight-forward by introducing a set of incompatible data points which force the network to incur an additional error of \(\gamma\). For our positive result Theorem 11, however, there is indeed a difference in the complexity between the two cases \(\gamma=0\) and \(\gamma>0\). While we show fixed-parameter tractability for \(\gamma=0\), the same problem is W[1]-hard for \(\gamma>0\), already in the case \(k=1\)[10].
##### Number of ReLUs.
It is not too difficult to see (although it requires some work) that our particular reduction in Theorem 7 can be extended to any \(k\geq 4\) by introducing more data points far away from the existing data points which enforce the usage of additional ReLUs which then cannot be used to fit the data points of the actual reduction. Therefore, Theorem 7 holds for every fixed \(k\geq 4\). Similarly, Corollary 9 holds for every fixed \(k\geq 1\) and Corollary 10 holds for every fixed \(k\geq 2\).
##### Other Activation Functions.
Our hardness results hold for the piecewise linear ReLU activation function and the piecewise constant linear threshold activation function. Extending them to other piecewise linear or constant activation functions like leaky ReLU or maxout should be straight-forward. However, achieving analogous results for smooth activation functions like sigmoids probably requires fundamentally different techniques and is beyond the scope of this paper.
##### Training vs. Learning.
Our results are concerned with the problem of minimizing the training error. While this is inherently different from minimizing the generalization error, there are indeed deep connections between these two problems [23]. In particular, as pointed out by Goel et al. [15], hardness of training implies hardness of proper learning if one permits arbitrary data distributions. However, such hardness results can often be
overcome by either posing additional assumptions on the data distributions or switching to more general learning paradigms like improper learning [14].
##### Exact vs. Approximate Training.
In practice, it arguably often suffices to train a neural network to approximate instead of exact optimality. The results in this paper are concerned with solving the training problem to exact global optimality. However, since Theorems 1 and 7 and Corollaries 8 and 10 already hold for training error \(\gamma=0\), they even rule out the existence of approximation algorithms with any multiplicative factor. We conceive that for appropriate notions of _additive_ approximation (see, e.g., [15]), our reductions can also be used to show hardness of additive approximation. However, this would significantly increase the technical complexity of the analysis and is therefore beyond the scope of this paper. We leave it as an open research question to analyze the influence of meta-parameters like input dimension and number of hidden neurons on additive approximation of the training problem.
##### Related Work.
Dey et al. [8] and Goel et al. [15] showed NP-hardness of 2L-ReLU-NN-Train(\(\ell\)) for \(k=1\), but require non-constant dimension. For target error \(\gamma=0\), the problem is NP-hard for every constant \(k\geq 2\) and polynomial-time solvable for \(k=1\)[15]. Goel et al. [15] provide further conditional running time lower bounds and (additive) approximation hardness results. Froese et al. [10] considered the parameterized complexity regarding the input dimension \(d\) and proved W[1]-hardness and an ETH-based running time lower bound of \(n^{\Omega(d)}\) for \(k=1\).
Boob et al. [5] studied networks where the output neuron also is a ReLU and proved NP-hardness (and implicitly W[1]-hardness with respect to \(d\)) for \(k=2\) and \(\gamma=0\). Bertschinger et al. [3] showed that training 2-layer ReLU networks with two output and two input neurons (\(\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\)) is complete for the class \(\exists\mathbb{R}\) (existential theory of the reals) and thus likely not contained in NP. This also implies NP-hardness, but note that in contrast to our results, their reduction does not work for one-dimensional outputs. Their result strengthens a previous result by Abrahamsen et al. [1] who proved \(\exists\mathbb{R}\)-completeness for networks with a specific (not fully connected) architecture. Pilanci and Ergen [22] showed that training 2-layer neural networks can be formulated as a convex program which yields a polynomial-time algorithm for constant dimension \(d\). However, they considered a regularized objective and their result requires the number \(k\) of hidden neurons to be very large (possibly equal to the number \(n\) of input points) and hence does not contradict our NP-hardness result for \(d=2\).
To study the computational complexity of training ReLU networks, a crucial ingredient is to know the set of (continuous and piecewise linear) functions precisely representable with a certain network architecture. This is well-understood for two-layer networks [2, 3, 7], but much trickier for deeper networks [16, 17, 18]. Similar to the study of ReLU networks by Arora et al. [2], Khalife and Basu [20] studied the expressiveness and training complexity for linear threshold activation functions.
## 2 Preliminaries
Notation. For \(n\in\mathbb{N}\), we define \([n]\coloneqq\{1,\ldots,n\}\). For \(X\subseteq\mathbb{R}^{d}\), we denote by \(\operatorname{aff}(X)\) the affine hull of \(X\) and by \(\dim(X)\) the dimension of \(\operatorname{aff}(X)\).
Throughout this work, we assume \(\ell\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}_{\geq 0}\) to be any loss function with \(\ell(x,y)=0\iff x=y\).
Parameterized Complexity. We assume basic knowledge on computational complexity theory. Parameterized complexity is a multivariate approach to analyze the computational complexity of problems [9, 6].
An instance \((x,k)\) of a parameterized problem \(L\subseteq\Sigma^{*}\times\mathbb{N}\) is a pair with \(x\in\Sigma^{*}\) being a problem instance and \(k\in\mathbb{N}\) being the value of a certain _parameter_. A parameterized problem \(L\) is _fixed-parameter tractable (fpt)_ (contained in the class FPT) if there exists an algorithm deciding whether \((x,k)\in L\) in \(f(k)\cdot|x|^{O(1)}\) time, where \(f\) is a function solely depending on \(k\). Note that a parameterized problem in FPT is polynomial-time solvable for every constant parameter value where, importantly, the degree of the polynomial does not depend on the parameter value. The class XP contains all parameterized problems which can be solved in polynomial time for constant parameter values, that is, in time \(f(k)\cdot|x|^{g(k)}\). It is known that \(\operatorname{FPT}\subsetneq\operatorname{XP}\). The class \(\operatorname{W}[1]\) contains parameterized problems which are widely believed not to be in FPT. That is, a \(\operatorname{W}[1]\)-hard problem (e.g. Clique parameterized by the size of the sought clique) is presumably not solvable in \(f(k)\cdot|x|^{O(1)}\) time. It is known that \(\operatorname{FPT}\subseteq\operatorname{W}[1]\subseteq\operatorname{XP}\).
\(\operatorname{W}[1]\)-hardness is defined via _parameterized reductions_. A parameterized reduction from \(L\) to \(L^{\prime}\) is an algorithm mapping an instance \((x,k)\) in \(f(k)\cdot|x|^{O(1)}\) time to an instance \((x^{\prime},k^{\prime})\) such that \(k^{\prime}\leq g(k)\) for some function \(g\) and \((x,k)\in L\) if and only if \((x^{\prime},k^{\prime})\in L^{\prime}\).
Exponential Time Hypothesis. The Exponential Time Hypothesis (ETH) [19] states that 3-SAT cannot be solved in subexponential time in the number \(n\) of Boolean variables in the input formula, that is, there exists a constant \(c>0\) such that 3-SAT cannot be solved in \(O(2^{cn})\) time.
The ETH implies \(\operatorname{FPT}\neq\operatorname{W}[1]\)[6] (which implies \(\operatorname{P}\neq\operatorname{NP}\)). In fact, ETH implies that Clique cannot be solved in \(\rho(k)\cdot n^{o(k)}\) time for any function \(\rho\), where \(k\) is the size of the sought clique and \(n\) is the number of vertices in the graph [6].
Geometry of 2-Layer ReLU Networks. For proving our results, it is crucial to understand the geometry of a function \(\phi\colon\mathbb{R}^{d}\to\mathbb{R}\) represented by a two-layer ReLU network. Here, we only discuss properties required to understand our results and refer to [2, 3, 7] for additional discussions in this context. Such a function \(\phi\) is a continuous and piecewise linear function. Each hidden neuron with index \(j\in[k]\) defines a hyperplane \(\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}=0\) in \(\mathbb{R}^{d}\). These \(k\) hyperplanes form a hyperplane arrangement. Inside each cell of this hyperplane arrangement, the function \(\phi\) is affine. The graph of \(\phi\) restricted to such a cell is called a _(linear) piece_ of \(\phi\).
Consider a hyperplane \(H\) from the hyperplane arrangement. Let \(\mathbf{w}\) be a unit normal vector of \(H\) and let \(J\subseteq[k]\) be the non-empty subset of indices of neurons which induce precisely \(H\). Note that \(\mathbf{w}_{j}\) is a scaled version of \(\mathbf{w}\) for each \(j\in J\). Let \(\mathbf{x}\in\mathbb{R}^{d}\) be
a point on \(H\) which does not lie on any other hyperplane in the arrangement. There are exactly two full-dimensional cells in the arrangement containing \(\mathbf{x}\): one on each side of \(H\). The difference of the directional derivatives of the corresponding two pieces of \(\phi\) in the direction of \(\mathbf{w}\) is exactly \(\sum_{j\in J}a_{j}\|\mathbf{w}_{j}\|\). In particular, this is independent of \(\mathbf{x}\) and therefore constant along \(H\). If this value is positive, we say that \(H\) is a _convex_ hyperplane of \(\phi\). If it is negative, we say that \(H\) is a _concave_ hyperplane of \(\phi\). Note that this matches with \(\phi\) being convex or concave locally around every point \(\mathbf{x}\in H\) which does not belong to any other hyperplane in the arrangement. Moreover, a point \(\mathbf{x}\in\mathbb{R}^{d}\) is called a convex (concave) _breakpoint_ of \(\phi\) if it lies on a convex (concave) hyperplane of \(\phi\).
One important observation we will heavily use is the following: If we know that \(\phi\) originates from a 2-layer neural network with \(k\) hidden neurons and we know that we need indeed \(k\) distinct hyperplanes to separate the pieces of \(\phi\), then each hyperplane must be induced by exactly one neuron (and not several). Then the hyperplane corresponding to the \(j\)-th neuron is convex if and only if \(a_{j}>0\) and concave if and only if \(a_{j}<0\). For input dimension \(d=2\), each of the hyperplanes in the arrangement is actually a line in \(\mathbb{R}^{2}\). We call such a line a _breakline_ and define _convex_ and _concave_ breaklines accordingly.
## 3 NP-Hardness for Two Dimensions
In this section we prove our main result that 2L-ReLU-NN-Train\((\ell)\) is NP-hard for two dimensions, thus excluding any running time of the form \((nk)^{f(d)}\).
**Theorem 1**.: \(2\)L-ReLU-NN-Train\((\ell)\) _is NP-hard even for \(d=2\) and \(\gamma=0\)._
We give a polynomial-time reduction from the following NP-complete problem [11].
Positive One-In-Three 3-SAT (POITS)
**Input:**: A Boolean formula \(F\) in conjunctive normal form with three positive literals per clause.
**Question:**: Is there a truth assignment for the variables such that each clause in \(F\) has exactly one true literal?
Our construction will be such that the function represented by the neural network is equal to zero everywhere except for a finite set of "stripes", in which the function forms a _levee_ (see Definition 2), that is, when looking at a cross section, the function goes up from 0 to 1, stays constant for a while, and goes down from 1 to 0 again. See Figure 2 (right) for a top view of a levee and Figure 3 for a cross section of a levee.
**Definition 2**.: _A levee with slope \(s\in\mathbb{R}\) (centered at the origin) is the function \(f_{s}\colon\mathbb{R}^{2}\to\mathbb{R}\) with_
\[f_{s}(x_{1},x_{2})=\left\{\begin{array}{ll}0,&\mbox{if }|x_{2}-sx_{1}| \geq 2,\\ 1,&\mbox{if }|x_{2}-sx_{1}|\leq 1,\\ 2+x_{2}-sx_{1},&\mbox{if }x_{2}-sx_{1}\in\,]-2,-1[,\\ 2-x_{2}+sx_{1},&\mbox{if }x_{2}-sx_{1}\in\,]1,2[.\end{array}\right. \tag{1}\]
**Observation 3**.: _A levee \(f_{s}\) is a continuous, piecewise linear function with two convex and two concave breaklines. It can be realized with four ReLUs as follows:_
\[f_{s}(x_{1},x_{2})=[x_{2}-sx_{1}+2]_{+}-[x_{2}-sx_{1}+1]_{+}-[x_{2}-sx_{1}-1]_ {+}+[x_{2}-sx_{1}-2]_{+}.\]
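One can check this identity numerically against the case distinction of Definition 2; the following short sketch (our own illustration, not code from the paper) compares both forms at random points.

```python
import numpy as np

def levee_relu(x1, x2, s):
    """Levee f_s realized with four ReLUs, as in Observation 3."""
    relu = lambda z: max(z, 0.0)
    u = x2 - s * x1
    return relu(u + 2) - relu(u + 1) - relu(u - 1) + relu(u - 2)

def levee_cases(x1, x2, s):
    """Levee f_s via the case distinction of Definition 2."""
    u = x2 - s * x1
    if abs(u) >= 2:
        return 0.0
    if abs(u) <= 1:
        return 1.0
    return 2 + u if u < 0 else 2 - u

rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2, s = rng.uniform(-5, 5, size=3)
    assert abs(levee_relu(x1, x2, s) - levee_cases(x1, x2, s)) < 1e-9
```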
Similar levees have been used by Bertschinger et al. [3] to prove \(\exists\mathbb{R}\)-completeness of neural network training, however, in a conceptually very different way. In their work, levees encode variable values via the _slopes of the function_ on the non-constant regions of the levee. In contrast, in our reduction, we encode discrete choices via rotation of the levees, that is, via the _slopes of the breaklines in the two-dimensional input space_.
Selection Gadget. We describe a gadget allowing us to model a discrete choice between \(\ell\) many possibilities (levees). We will describe the gadget centered at the origin of the \(x_{1}\)-\(x_{2}\)-plane. Later in our reduction, we will use several shifted versions of this gadget. An illustration of the selection gadget is given in Figure 2.
Each of the \(\ell\) different choices corresponds to one of \(\ell\) different slopes \(s_{1}<s_{2}<\dots<s_{\ell}\). First, we place 13 data points on the \(x_{2}\)-axis (with \(x_{1}=0\), we call this vertical line \(h_{0}\)):
\begin{tabular}{c|c c c c c c c c c c c} \(x_{2}\) & \(-4\) & \(-3\) & \(-2\) & \(-\nicefrac{{5}}{{3}}\) & \(-\nicefrac{{4}}{{3}}\) & \(-1\) & \(0\) & \(1\) & \(\nicefrac{{4}}{{3}}\) & \(\nicefrac{{5}}{{3}}\) & \(2\) & \(3\) & \(4\) \\ \hline \(y\) & \(0\) & \(0\) & \(0\) & \(\nicefrac{{1}}{{3}}\) & \(\nicefrac{{2}}{{3}}\) & \(1\) & \(1\) & \(1\) & \(\nicefrac{{2}}{{3}}\) & \(\nicefrac{{1}}{{3}}\) & \(0\) & \(0\) & \(0\) \\ \end{tabular}
Next, we need a small \(\epsilon>0\) to be chosen later in a global context. The only condition we impose on \(\epsilon\) in order to make the selection gadget work is that \(\epsilon\leq\min\Bigl{\{}\frac{1}{3|s_{1}|},\frac{1}{3|s_{\ell}|}\Bigr{\}}\). Based on this, we place 9 data points parallel to the \(x_{2}\)-axis with \(x_{1}=-\epsilon\) (we call the corresponding vertical line \(h_{-\epsilon}\)):
\begin{tabular}{c|c c c c c c c c c} \(x_{2}\) & \(-4-\epsilon s_{\ell}\) & \(-3-\epsilon s_{\ell}\) & \(-2-\epsilon s_{\ell}\) & \(-1-\epsilon s_{1}\) & \(0\) & \(1-\epsilon s_{\ell}\) & \(2-\epsilon s_{1}\) & \(3-\epsilon s_{1}\) & \(4-\epsilon s_{1}\) \\ \hline \(y\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \end{tabular}
Furthermore, similar to above, we place 9 data points parallel to the \(x_{2}\)-axis with \(x_{1}=\epsilon\) (we call the corresponding line \(h_{\epsilon}\)):
\begin{tabular}{c|c c c c c c c c} \(x_{2}\) & \(-4+\epsilon s_{1}\) & \(-3+\epsilon s_{1}\) & \(-2+\epsilon s_{1}\) & \(-1+\epsilon s_{\ell}\) & \(0\) & \(1+\epsilon s_{1}\) & \(2+\epsilon s_{\ell}\) & \(3+\epsilon s_{\ell}\) & \(4+\epsilon s_{\ell}\) \\ \hline \(y\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \end{tabular}
Finally, we place \(2(\ell-1)\) many data points as follows: for each \(i\in[\ell-1]\), we introduce one data point \(\mathbf{q}_{i}^{-}\coloneqq(-\frac{4}{s_{i+1}-s_{i}},-\frac{2(s_{i}+s_{i+1})}{ s_{i+1}-s_{i}})\), as well as one data point \(\mathbf{q}_{i}^{+}\coloneqq(\frac{4}{s_{i+1}-s_{i}},\frac{2(s_{i}+s_{i+1})}{s_ {i+1}-s_{i}})\). All these data points receive label \(y=0\).
It is not too difficult to verify that a levee with slope \(s_{i}\), \(i\in[\ell]\), fits all data points of a selection gadget. We omit the simple but tedious calculations here. More intricately, the following lemma shows that a selection gadget indeed models a discrete choice between exactly \(\ell\) possibilities.
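These tedious calculations can also be delegated to a small script; the following sketch generates the gadget's data points from the tables above and checks that every admissible levee fits them (levee values computed as in Definition 2).

```python
def levee(x1, x2, s):
    """Levee f_s from Definition 2."""
    u = x2 - s * x1
    if abs(u) >= 2:
        return 0.0
    if abs(u) <= 1:
        return 1.0
    return 2 + u if u < 0 else 2 - u

def selection_gadget(slopes, eps):
    """All data points (x1, x2, y) of a selection gadget centered at the origin."""
    s1, sl = slopes[0], slopes[-1]
    pts = [(0.0, x2, y) for x2, y in zip(          # 13 points on h_0
        [-4, -3, -2, -5/3, -4/3, -1, 0, 1, 4/3, 5/3, 2, 3, 4],
        [0, 0, 0, 1/3, 2/3, 1, 1, 1, 2/3, 1/3, 0, 0, 0])]
    pts += [(-eps, x2, y) for x2, y in zip(        # 9 points on h_{-eps}
        [-4-eps*sl, -3-eps*sl, -2-eps*sl, -1-eps*s1, 0,
         1-eps*sl, 2-eps*s1, 3-eps*s1, 4-eps*s1], [0, 0, 0, 1, 1, 1, 0, 0, 0])]
    pts += [(eps, x2, y) for x2, y in zip(         # 9 points on h_{+eps}
        [-4+eps*s1, -3+eps*s1, -2+eps*s1, -1+eps*sl, 0,
         1+eps*s1, 2+eps*sl, 3+eps*sl, 4+eps*sl], [0, 0, 0, 1, 1, 1, 0, 0, 0])]
    for lo, hi in zip(slopes, slopes[1:]):         # points q_i^- and q_i^+
        d = hi - lo
        pts += [(-4/d, -2*(lo+hi)/d, 0.0), (4/d, 2*(lo+hi)/d, 0.0)]
    return pts

slopes = [-1.0, 0.0, 1.0]
eps = min(1/(3*abs(slopes[0])), 1/(3*abs(slopes[-1])))
for s in slopes:
    assert all(abs(levee(x1, x2, s) - y) < 1e-9
               for x1, x2, y in selection_gadget(slopes, eps))
```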
**Lemma 4**.: _Let \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) be a continuous piecewise linear function with only four breaklines that fits all the data points of the selection gadget. Then, \(f=f_{s_{i}}\) for some \(i\in[\ell]\)._
Proof.: First, we focus on the three vertical lines \(h_{-\epsilon}\), \(h_{0}\), and \(h_{\epsilon}\). Note that each of the three lines contains a sequence of nine data points, of which the first three have label 0, the next three have label 1, and the final three have label 0 again. For simplicity, consider one of the three lines and denote these nine data points by \(\mathbf{p}_{1}\) to \(\mathbf{p}_{9}\). Note that \(h_{0}\) contains
even more data points, which will become important later. For the following argument, compare Figure 3.
Figure 2: Illustration of the selection gadget with \(\ell=3\) and \(s_{1}=-1\), \(s_{2}=0\), \(s_{3}=1\). Both figures show the \(x_{1}\)-\(x_{2}\)-plane while the \(y\)-coordinate is indicated via the darkness of the gray color. The left picture shows all data points belonging to the gadget as well as the breaklines of the three possible levees fitting the data points. In addition to these features, the right picture shows a levee with slope \(s_{2}=0\) as one of three possibilities to fit the data points of the gadget.

Figure 3: Cross section of the selection gadget through one of the three lines \(h_{-\epsilon}\), \(h_{0}\), or \(h_{\epsilon}\). The nine data points (labeled \(\mathbf{p}_{1}\) to \(\mathbf{p}_{9}\)) on each of these lines force the function \(f\) to attain a “levee-shape” with the exact position and slope of the ascending and descending sections as the only degrees of freedom (left). The four additional data points on \(h_{0}\) even fix these properties and thus exactly determine \(f\) on that line (right).

Observe that \(f\) restricted to one of the three lines is a one-dimensional, continuous, piecewise linear function with at most four breakpoints. Looking at \(\mathbf{p}_{2}\), \(\mathbf{p}_{3}\), and \(\mathbf{p}_{4}\), the corresponding \(y\)-labels are \(0\), \(0\), and \(1\), respectively. This can only be fitted if there exists a convex breakpoint between \(\mathbf{p}_{2}\) and \(\mathbf{p}_{4}\). Analogously, there must be a concave breakpoint between \(\mathbf{p}_{3}\) and \(\mathbf{p}_{5}\), another concave breakpoint between \(\mathbf{p}_{5}\) and \(\mathbf{p}_{7}\), and a convex breakpoint between \(\mathbf{p}_{6}\) and \(\mathbf{p}_{8}\). This already uses all four available breakpoints, so there are no other breakpoints. Therefore, the function on the considered line must be linear outside the segment between \(\mathbf{p}_{2}\) and \(\mathbf{p}_{8}\). Since \(\mathbf{p}_{1}\), \(\mathbf{p}_{2}\), \(\mathbf{p}_{8}\), and \(\mathbf{p}_{9}\) all have label \(0\), it follows that the function is constant \(0\) outside this segment. Moreover, there is no concave breakpoint outside the segment between \(\mathbf{p}_{3}\) and \(\mathbf{p}_{7}\), implying that the function must be convex outside the segment between \(\mathbf{p}_{3}\) and \(\mathbf{p}_{7}\). However, since these two points have label \(0\) as well, it follows that \(f\) must even be constant \(0\) there.
Now consider the segment between \(\mathbf{p}_{4}\) and \(\mathbf{p}_{6}\). There is no convex breakpoint between \(\mathbf{p}_{4}\) and \(\mathbf{p}_{6}\). Therefore, the function must be concave within the segment. Since \(\mathbf{p}_{4}\), \(\mathbf{p}_{5}\), and \(\mathbf{p}_{6}\) all have label \(1\), it follows that the function is constant \(1\) between \(\mathbf{p}_{4}\) and \(\mathbf{p}_{6}\).
Putting together the insights gained so far, it follows that \(f\) restricted to the considered line is constant \(0\) first, goes up to constant \(1\) via a convex and a concave breakpoint between \(\mathbf{p}_{3}\) and \(\mathbf{p}_{4}\), and goes down to constant \(0\) again via a concave and a convex breakpoint between \(\mathbf{p}_{6}\) and \(\mathbf{p}_{7}\) (Figure 3, left). Note that the exact location of these breakpoints and the slope in the sloped segments is not implied by the nine data points considered so far.
This changes, however, when also taking into account the four other data points lying on \(h_{0}\). Combined with the insights so far, they completely determine \(f\) on this line (Figure 3, right):
\[f(0,x_{2})=\left\{\begin{array}{ll}0,&\mbox{if $x_{2}\leq-2$ or $x_{2}\geq 2$,}\\ 1,&\mbox{if $-1\leq x_{2}\leq 1$,}\\ 2+x_{2},&\mbox{if $-2\leq x_{2}\leq-1$,}\\ 2-x_{2},&\mbox{if $1\leq x_{2}\leq 2$.}\end{array}\right.\]
Observe that this is precisely the same as (1) with \(x_{1}=0\).
It remains to consider the behavior of \(f\) on both sides of \(h_{0}\). To this end, observe that the breakpoints of \(f\) restricted to one of the three lines considered so far emerge as intersections of these lines with only four breaklines in total. Let us collect what we know so far about the locations of these four breaklines:
* There are exactly two convex breaklines, intersecting \(h_{0}\) at \((0,-2)\) and \((0,2)\), respectively. We call them \(g_{1}\) and \(g_{4}\), respectively.
* There are exactly two concave breaklines, intersecting \(h_{0}\) at \((0,-1)\) and \((0,1)\), respectively. We call them \(g_{2}\) and \(g_{3}\), respectively.
* Each of the four segments \[I_{1} \coloneqq[(-\epsilon,-2-\epsilon s_{\ell}),(-\epsilon,-1-\epsilon s _{1})]\subseteq[(-\epsilon,-\nicefrac{{7}}{{3}}),(-\epsilon,-\nicefrac{{2}}{ {3}})],\] \[I_{2} \coloneqq[(-\epsilon,1-\epsilon s_{\ell}),(-\epsilon,2-\epsilon s _{1})]\subseteq[(-\epsilon,\nicefrac{{2}}{{3}}),(-\epsilon,\nicefrac{{7}}{{3 }})],\] \[I_{3} \coloneqq[(\epsilon,-2+\epsilon s_{1}),(\epsilon,-1+\epsilon s_{ \ell})]\subseteq[(\epsilon,-\nicefrac{{7}}{{3}}),(\epsilon,-\nicefrac{{2}}{{3 }})],\mbox{ and}\] \[I_{4} \coloneqq[(\epsilon,1+\epsilon s_{1}),(\epsilon,2+\epsilon s_{ \ell})]\subseteq[(\epsilon,\nicefrac{{2}}{{3}}),(\epsilon,\nicefrac{{7}}{{3 }})]\] is intersected by exactly one concave and one convex breakline. Here, the inclusions are implied by \(\epsilon\leq\min\Bigl{\{}\frac{1}{3|s_{1}|},\frac{1}{3|s_{\ell}|}\Bigr{\}}\). See Figure 4 for an illustration of the position of these segments.
Now consider \(g_{2}\), which goes through \((0,-1)\), and observe that it cannot intersect \(I_{2}\) for the following reason. If it did, it would intersect \(h_{\epsilon}\) at \(x_{2}\leq-1-\nicefrac{{5}}{{3}}=-\nicefrac{{8}}{{3}}<-\nicefrac{{7}}{{3}}\) and hence would neither intersect \(I_{3}\) nor \(I_{4}\). This is a contradiction because there are only two concave breaklines and both \(I_{3}\) and \(I_{4}\) must be intersected by exactly one of them. Consequently, \(g_{2}\) cannot intersect \(I_{2}\), and must intersect \(I_{1}\) instead.
Analogously, it follows that \(g_{1}\) and \(g_{2}\) intersect \(I_{1}\) and \(I_{3}\). Similarly, \(g_{3}\) and \(g_{4}\) intersect \(I_{2}\) and \(I_{4}\). Combining this with the fact that \(f\) restricted to each of the three vertical lines \(h_{-\epsilon}\), \(h_{0}\), and \(h_{\epsilon}\) has an increasing section from \(0\) to \(1\) and a decreasing section from \(1\) to \(0\), this implies that the four lines \(g_{1}\) to \(g_{4}\) do not cross between \(h_{-\epsilon}\) and \(h_{\epsilon}\). Let us focus on the quadrilateral enclosed by \(g_{1}\), \(g_{2}\), \(h_{-\epsilon}\) and \(h_{\epsilon}\). By what we know so far, \(f\) is constant \(0\) on \(g_{1}\), constant \(1\) on \(g_{2}\), and linear within this quadrilateral. Since \(h_{-\epsilon}\) and \(h_{\epsilon}\) are parallel, this implies that the corresponding two sides of the quadrilateral must have the same length. Thus, the quadrilateral must be a parallelogram. In particular, \(g_{1}\) and \(g_{2}\) are parallel. Similarly, \(g_{3}\) and \(g_{4}\) must be parallel.
Let \(s\) be the slope of \(g_{1}\) and \(g_{2}\), and let \(t\) be the slope of \(g_{3}\) and \(g_{4}\). To complete the proof, we need to show that all four lines are parallel, that is, \(s=t\), and that this slope value is equal to \(s_{i}\) for some \(i\in[\ell]\).
Without loss of generality, we can assume that \(s\leq t\), otherwise we mirror the gadget along the \(x_{2}\)-axis. Observe that \(s_{1}\leq s\leq t\leq s_{\ell}\) because both \(g_{1}\) and \(g_{2}\) intersect \(I_{3}\), and both \(g_{3}\) and \(g_{4}\) intersect \(I_{4}\).
Let \(i^{*}\coloneqq\max\{i\mid s_{i}\leq s\}\in[\ell]\). If \(i^{*}=\ell\), then \(s=t=s_{\ell}\) and we are done. Otherwise, consider the data point \(\mathbf{q}_{i^{*}}^{+}=\left(\frac{4}{s_{i^{*}+1}-s_{i^{*}}},\frac{2(s_{i^{*} }+s_{i^{*}+1})}{s_{i^{*}+1}-s_{i^{*}}}\right)\), which has label \(y=0\).
Let us have a look at what \(f\) restricted to the vertical line \(h\) through \(\mathbf{q}_{i^{*}}^{+}\) looks like. By \(s\leq t\), the four lines \(g_{1}\), \(g_{2}\), \(g_{3}\), and \(g_{4}\) intersect \(h\) in exactly this order (from bottom to top). This means that these lines do not cross between the \(x_{2}\)-axis and \(h\). By our insights above, this implies that restricted to \(h\), \(f\) is zero outside the intersection points with \(g_{1}\) and \(g_{4}\), increases from zero to one between \(g_{1}\) and \(g_{2}\), stays constant \(1\) between \(g_{2}\) and \(g_{3}\), and decreases back to \(0\) between \(g_{3}\) and \(g_{4}\).
Figure 4: Illustration of the segments \(I_{1}\) to \(I_{4}\) used in the proof of Lemma 4. The figure also highlights (in black) the data points at \((0,-2)\) and \((0,2)\), each of which lies on a convex breakline, as well as the data points at \((0,-1)\) and \((0,1)\), each of which lies on a concave breakline.

By our choice of \(i^{*}\), we obtain that \(s_{i^{*}+1}>s\). Let us calculate at which \(x_{2}\)-coordinate \(g_{1}\) intersects \(h\). This happens at
\[x_{2}=-2+\frac{4s}{s_{i^{*}+1}-s_{i^{*}}}<-2+\frac{4s_{i^{*}+1}}{s_{i^{*}+1}-s_{i ^{*}}}=\frac{4s_{i^{*}+1}-2(s_{i^{*}+1}-s_{i^{*}})}{s_{i^{*}+1}-s_{i^{*}}}=\frac {2(s_{i^{*}}+s_{i^{*}+1})}{s_{i^{*}+1}-s_{i^{*}}}.\]
Thus, \(\mathbf{q}_{i^{*}}^{+}\) lies strictly above the line \(g_{1}\). Since \(\mathbf{q}_{i^{*}}^{+}\) has label zero, this must imply that \(\mathbf{q}_{i^{*}}^{+}\) does not lie below \(g_{4}\). Looking at the intersection point of \(g_{4}\) with \(h\), this means:
\[2+\frac{4t}{s_{i^{*}+1}-s_{i^{*}}}\leq\frac{2(s_{i^{*}}+s_{i^{*}+ 1})}{s_{i^{*}+1}-s_{i^{*}}}\] \[\Leftrightarrow 2(s_{i^{*}+1}-s_{i^{*}})+4t\leq 2(s_{i^{*}}+s_{i^{*}+1})\] \[\Leftrightarrow t\leq s_{i^{*}}.\]
Thus, we obtain \(s_{i^{*}}\leq s\leq t\leq s_{i^{*}}\), implying that \(g_{1},g_{2},g_{3},g_{4}\) are all parallel and have one of the \(\ell\) predefined slopes. This implies that \(f\) is the levee \(f_{s_{i^{*}}}\), completing the proof of the lemma.
Combining Multiple Selection Gadgets. Having constructed and understood a single selection gadget, the next step is to use multiple of these gadgets simultaneously. To this end, we will "stack multiple selection gadgets upon each other along the \(x_{2}\)-axis". To make this formal, we define a _selection gadget with offset \(z\)_ as the set of data points of a selection gadget as described above, where we add \(z\) to all \(x_{2}\)-coordinates of the gadget. In other words, the gadget is centered around the point \((0,z)\).
Now, consider the set of data points originating from \(m\) selection gadgets with offsets \(z_{1},\ldots,z_{m}\), each one offering the choice between \(\ell_{j}\) many slopes \(s_{i}^{(j)}\), \(i\in[\ell_{j}]\), \(j\in[m]\). Suppose further that we uniformly choose \(\epsilon\coloneqq\min_{j\in[m]}\min\Bigl{\{}\frac{1}{3|s_{1}^{(j)}|},\frac{1}{ 3|s_{\ell}^{(j)}|}\Bigr{\}}\) for all the gadgets such that the vertical lines \(h_{-\epsilon}\), \(h_{0}\), and \(h_{\epsilon}\) with \(x_{1}\)-coordinates \(-\epsilon\), \(0\), and \(\epsilon\), respectively, each contain either \(9\) or \(13\) data points from each gadget. Let \(\delta\coloneqq\min_{j\in[m]}\min_{i\in[\ell_{j}-1]}(s_{i+1}^{(j)}-s_{i}^{(j)})\) be the smallest difference of two consecutive slopes in the \(m\) gadgets. Moreover, let \(S\coloneqq\max_{j\in[m]}\max_{i\in[\ell_{j}]}|s_{i}^{(j)}|\) be the largest absolute value of all the slopes. In this setting, the following lemma states that fitting all these data points is equivalent to independently choosing one slope for each single gadget and adding up the corresponding levees, provided that the distance of the gadgets is large enough.
**Lemma 5**.: _If \(z_{j+1}-z_{j}\geq\frac{8S}{\delta}+6\) for all \(j\in[m-1]\), then there are exactly \(\prod_{j=1}^{m}\ell_{j}\) many continuous piecewise linear functions \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) with at most \(4m\) breaklines fitting the data points of the \(m\) selection gadgets, namely \(f(x_{1},x_{2})=\sum_{j=1}^{m}f_{s_{i_{j}}^{(j)}}(x_{1},x_{2}-z_{j})\) for each choice of indices \(i_{j}\in[\ell_{j}]\) for each \(j\in[m]\)._
Proof.: We first show that each of these functions does indeed fit all the data points. For this, it is sufficient to show that each levee \(f_{s_{i_{j}}^{(j)}}(x_{1},x_{2}-z_{j})\) is \(0\) at all the data points \((\bar{x}_{1},\bar{x}_{2})\) belonging to a selection gadget with index \(j^{\prime}\neq j\). Without loss of generality, we can assume that \(z_{j}=0\). By the definition of the selection gadget and checking all the possible \(x_{1}\)-coordinates, we obtain that \(|\bar{x}_{1}|\leq\nicefrac{{4}}{{\delta}}\). Moreover, looking at the possible \(x_{2}\)-coordinates, we obtain that \(\bar{x}_{2}\) can differ at most by \(4+S\cdot|\bar{x}_{1}|\) from \(z_{j^{\prime}}\), from which we conclude \(|\bar{x}_{2}|\geq|z_{j^{\prime}}|-4-S\cdot|\bar{x}_{1}|\geq\frac{4S}{\delta}+2\). On the other hand, all points \((x_{1},x_{2})\)
for which the levee \(f_{s_{i_{j}}^{(j)}}(x_{1},x_{2})\) is nonzero satisfy \(|x_{2}|<2+|s_{i_{j}}x_{1}|\leq 2+S|x_{1}|\). Since \(|\bar{x}_{2}|\geq\frac{4S}{\delta}+2\geq 2+S|x_{1}|\), it follows that \(f_{s_{i_{j}}^{(j)}}\) must be zero at \((\bar{x}_{1},\bar{x}_{2})\), completing the proof that all claimed functions fit the \(m\) selection gadgets.
It remains to show that all functions \(f\) fitting the data points of the \(m\) selection gadgets are of the claimed form. We show this by induction on \(m\). The base case \(m=1\) is given by Lemma 4. Now, let \(m\geq 2\) and without loss of generality let \(z_{1}=0\). We will again consider the three vertical lines \(h_{-\epsilon}\), \(h_{0}\), and \(h_{\epsilon}\) with \(x_{1}\)-coordinates \(-\epsilon\), \(0\), and \(\epsilon\), respectively. Remember that \(f\) restricted to each of these three lines is a one-dimensional continuous piecewise linear function with at most \(4m\) breakpoints, stemming from breaklines intersecting the respective vertical line. By looking at each individual gadget and arguing as in the proof of Lemma 4, we obtain the following information:
* There are exactly \(2m\) convex breaklines, intersecting \(h_{0}\) at the \(2m\) points \((0,z_{j}-2)\) and \((0,z_{j}+2)\), \(j\in[m]\). Note that by our assumptions \(z_{1}=0\) and \(z_{j+1}-z_{j}\geq\frac{8S}{\delta}+6>6\), all these points are distinct, two of them are \((0,-2)\) and \((0,2)\), and all the other \(2m-2\) points lie above the horizontal line \(x_{2}=4\).
* There are exactly \(2m\) concave breaklines, intersecting \(h_{0}\) at the \(2m\) points \((0,z_{j}-1)\) and \((0,z_{j}+1)\), \(j\in[m]\). Again by our assumptions \(z_{1}=0\) and \(z_{j+1}-z_{j}\geq\frac{8S}{\delta}+6>6\), all these points are distinct, two of them are \((0,-1)\) and \((0,1)\), and all the other \(2m-2\) points lie above the horizontal line \(x_{2}=5\).
* Each of the four segments \(I_{1}\) to \(I_{4}\) corresponding to the selection gadget with index \(j=1\) as defined in the proof Lemma 4 is intersected by exactly one convex and exactly one concave breakline. There are \(4m-4\) further such segments stemming from selection gadgets with index \(j>1\), and all of those lie completely above the horizontal line \(x_{2}=6-\nicefrac{{7}}{{3}}=\nicefrac{{11}}{{3}}\).
Looking at the breaklines passing through \((0,-2)\) and \((0,-1)\), they must also pass through one of the described \(2m\) segments on \(h_{-\epsilon}\) and one of the described \(2m\) segments on \(h_{\epsilon}\). Since the considered gadget is the lowest one on the \(x_{2}\)-axis, the same argument as in the proof of Lemma 4 applies, which means that the only way of fulfilling these requirements simultaneously is that these breaklines pass through \(I_{1}\) and \(I_{3}\). Once having this, the same argument can be repeated for the breaklines passing through \((0,1)\) and \((0,2)\), making use of the fact that all the \(4m-4\) segments not belonging to the considered gadget lie above the \(x_{2}=\nicefrac{{11}}{{3}}\)-line. Therefore, these breaklines must intersect \(h_{-\epsilon}\) and \(h_{\epsilon}\) within \(I_{2}\) and \(I_{4}\), respectively.
From this, it follows as in the proof of Lemma 4 that the only way to fit the data points of the selection gadget with index \(j=1\) is one of the \(\ell_{1}\) leveses \(f_{s_{i}^{(1)}}\), \(i\in[\ell_{1}]\). Thus, subtracting one of these \(\ell_{1}\) levees from \(f\) eliminates four of the \(4m\) breaklines. Applying induction to the resulting function and the \(m-1\) remaining selection gadgets completes the proof.
Global Construction. We are now ready to describe the overall reduction. For a given formula \(F=C_{1}\wedge C_{2}\ldots\wedge C_{m}\) with variables \(v_{1},\ldots,v_{n}\), we construct data points in \(\mathbb{R}^{2}\times\mathbb{R}\) such that they can be fitted exactly with \(k=4(m+n)\) ReLUs if and only if \(F\) is a
yes-instance of POITS. Our construction will consist of \(m+n\) selection gadgets, namely one for each clause and one for each variable, and \(3m\) further data points. Each of the \(m\) selection gadgets corresponding to a clause determines which literal of this clause we choose to be true. Each of the \(n\) selection gadgets corresponding to a variable determines whether this variable is true or false. The \(3m\) remaining data points will ensure that these choices are consistent. Let \(\delta\coloneqq\frac{1}{2m}\). This will be the smallest difference of any two consecutive slopes in any selection gadget we are going to use. Moreover, no absolute value of a slope will be larger than \(S\coloneqq 1\). From this, we conclude that, in order to apply Lemma 5 in the end, we need to maintain a distance of at least \(\Delta\coloneqq\frac{8S}{\delta}+6=16m+6\) between the centers of the gadgets.
We start by describing the positions and slopes of the selection gadgets. Compare Figure 5 for an illustration. Firstly, for each clause \(C_{j}\), \(j\in[m]\), we introduce one selection gadget with offset \(j\Delta\) (that is, centered at \((0,j\Delta)\)) and the three different slopes \(s_{1}^{(j)}\coloneqq(2j-2)\delta-1\), \(s_{2}^{(j)}\coloneqq(2j-1)\delta-1\), and \(s_{3}^{(j)}\coloneqq 2j\delta-1\). Note that all these slopes are contained in \([-1,0]\). The interpretation will be as follows: Choosing the levee with slope \(s_{r}^{(j)}\) for the \(j\)-th selection gadget corresponds to choosing the \(r\)-th literal of the \(j\)-th clause as the one that is set to true. Secondly, for each variable \(v_{i}\), \(i\in[n]\), we introduce one selection gadget with offset \(-i\Delta\) and the two slopes \(-1\) and \(1\). Here the interpretation is as follows: choosing the levee with slope \(-1\) corresponds to setting the variable to true, while choosing the levee with slope \(1\) corresponds to setting the variable to false. Finally, if the \(r\)-th literal, \(r\in[3]\), of clause \(C_{j}\) is \(v_{i}\), then we introduce a data point \(\mathbf{p}_{j,r}\) with label \(y=1\) at the intersection of the "center-line" of the levee with slope \(s_{r}^{(j)}\) corresponding to the selection gadget for \(C_{j}\) (that is, the line \(x_{2}=\Delta j+s_{r}^{(j)}x_{1}\)) and the "center-line" of the levee with slope \(1\) corresponding to the selection gadget of \(v_{i}\) (that is, the line \(x_{2}=-\Delta i+x_{1}\)). Thus, \(\mathbf{p}_{j,r}\coloneqq(\frac{\Delta(i+j)}{1-s_{r}^{(j)}},\frac{\Delta(i+j)} {1-s_{r}^{(j)}}-\Delta i)\).
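For illustration, the layout just described can be generated programmatically; the following sketch (our own, using exact rational arithmetic) outputs the gadget offsets and slopes together with the points \(\mathbf{p}_{j,r}\). Gadget-internal data points would then be produced as in the selection-gadget sketch above.

```python
from fractions import Fraction as F

def construct_layout(clauses, n_vars):
    """Sketch of the global reduction layout. clauses: list of triples of
    (positive) variable indices; returns gadget (offset, slopes) pairs and
    the 3m points p_{j,r}, each carrying label 1."""
    m = len(clauses)
    delta = F(1, 2 * m)                    # smallest slope difference
    Delta = 16 * m + 6                     # gadget spacing: 8S/delta + 6 with S = 1
    gadgets = []
    for j in range(1, m + 1):              # clause gadgets, slopes in [-1, 0]
        gadgets.append((j * Delta, [(2 * j - 2 + r) * delta - 1 for r in range(3)]))
    for i in range(1, n_vars + 1):         # variable gadgets, slopes -1 and 1
        gadgets.append((-i * Delta, [F(-1), F(1)]))
    points = []
    for j, clause in enumerate(clauses, start=1):
        for r, i in enumerate(clause, start=1):
            s = (2 * j - 2 + (r - 1)) * delta - 1    # slope s_r^{(j)}
            x1 = F(Delta * (i + j)) / (1 - s)
            points.append((x1, x1 - Delta * i, 1))   # p_{j,r} with label 1
    return gadgets, points

# Example: (v5 v v4 v v3) ^ (v4 v v3 v v2) ^ (v5 v v2 v v1), as in Figure 5.
gadgets, points = construct_layout([(5, 4, 3), (4, 3, 2), (5, 2, 1)], 5)
```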
This finishes the construction. Before we prove Theorem 1 using this construction, we show the following useful lemma.
**Lemma 6**.: _For each \(j\in[m]\) and \(r\in[3]\), there are exactly two out of the \(3m+2n\) possible levees defined by the selection gadgets which are non-zero at \(\mathbf{p}_{j,r}\), namely \(f_{s_{r}^{(j)}}(x_{1},x_{2}-j\Delta)\) and \(f_{1}(x_{1},x_{2}+i\Delta)\), where \(v_{i}\) is the \(r\)-th literal in \(C_{j}\)._
Proof.: Since \(\mathbf{p}_{j,r}\) is the intersection point of the center-lines of the two named levees, it suffices to show that no other levee is non-zero at this point.
Let us start by reminding ourselves that a levee with offset \(z\) and slope \(s\) is non-zero only for points within a stripe of "vertical width \(4\)", that is, for points \((x_{1},x_{2})\) with \(sx_{1}+z-2<x_{2}<sx_{1}+z+2\).
Now we focus on levees belonging to other clauses \(C_{j^{\prime}}\) with \(j^{\prime}\neq j\). If \(j^{\prime}>j\), then the slope will be at least \(s_{r}^{(j)}\) and the offset will be at least \((j+1)\Delta\). Since \(\mathbf{p}_{j,r}\) lies on the right-hand side of the \(x_{2}\)-axis and on the center-line of a levee with slope exactly \(s_{r}^{(j)}\) and offset exactly \(j\Delta\), we obtain that \(\mathbf{p}_{j,r}\) lies below the center-line of the considered levee with a vertical distance of at least \(\Delta>2\), implying that the levee must vanish at \(\mathbf{p}_{j,r}\). In the case \(j^{\prime}<j\) it follows similarly with \(\mathbf{p}_{j,r}\) lying above instead of below the considered levee.
Next, let us focus on the two levees belonging to the same clause \(C_{j}\) but to the \(r^{\prime}\)-th literal with \(r^{\prime}\neq r\). The slope of such a levee differs by at least \(\delta\) from \(s_{r}^{(j)}\), while the offset
is exactly \(j\Delta\). This implies that \(\mathbf{p}_{j,r}\) has a vertical distance of at least \(\delta\frac{\Delta(i+j)}{1-s_{r}^{(j)}}\geq\delta\frac{2\Delta}{2}=\delta\Delta >8>2\) from the center-line of the considered levee.
Next, let us focus on a levee with slope 1 belonging to a variable \(v_{i^{\prime}}\) with \(i^{\prime}\neq i\). Since \(\mathbf{p}_{j,r}\) lies on the center-line of the levee with slope 1 belonging to \(v_{i}\), these levees are parallel, and have vertical distance at least \(\Delta>2\), this case is settled, too.
Finally, let us focus on a levee with slope \(-1\) belonging to any variable. Such a levee has an offset of at most \(-\Delta\) and its slope is at most \(s_{r}^{(j)}\). Since \(\mathbf{p}_{j,r}\) lies on the center-line of the levee with offset \(j\Delta\) and slope \(s_{r}^{(j)}\), this implies that its vertical distance to the considered levee is at least \(2\Delta>2\), finishing the proof.
Finally, we are ready to prove the main theorem.
Proof of Theorem 1.: We reduce from POITS and construct an instance of 2L-ReLU-NN-Train\((\ell)\) with \(k=4(m+n)\) and \(\gamma=0\) as described above. Note that, overall, we introduce \(O(m+n)\) points with rational coordinates (with \(\mathrm{poly}(m,n)\) bits) which are polynomial-time computable.
Figure 5: Global construction layout for the reduction from POITS to 2L-ReLU-NN-Train\((\ell)\). The figure shows the construction for the instance \((v_{5}\lor v_{4}\lor v_{3})\wedge(v_{4}\lor v_{3}\lor v_{2})\wedge(v_{5}\lor v_{2}\lor v_{1})\). The vertical dotted line is the \(x_{2}\)-axis along which we place all the selection gadgets. Each gadget is depicted with a black square. Each solid gray line depicts one possible levee. Each gray circle depicts a data point \(\mathbf{p}_{j,r}\) with label one. The picture on the right additionally shows one possible solution to the given instance. Indeed, choosing levees corresponding to the solid black lines selects exactly one levee per selection gadget and exactly one levee passing through each of the nine additional data points. This corresponds to the truth assignment \(v_{1}=v_{3}=\mathrm{true}\) and \(v_{2}=v_{4}=v_{5}=\mathrm{false}\).

To prove equivalence between the POITS instance and the constructed instance, let us first assume that the POITS instance is a yes-instance. Let \(T\subseteq[n]\) be a set of indices such that the truth assignment with \(v_{i}=\mathrm{true}\) for \(i\in T\) and \(v_{i}=\mathrm{false}\) for \(i\notin T\) sets exactly one literal per clause to true. Let \(r_{j}\in\{1,2,3\}\) denote which of the three literals is set to true in clause \(C_{j}\) by this assignment. We claim that the following function, which is a sum of \(m+n\) levees and thus realizable with \(k=4(m+n)\) ReLUs using Observation 3, exactly fits all the constructed data points:
\[f(x_{1},x_{2})=\sum_{i\in T}f_{-1}(x_{1},x_{2}+i\Delta)+\sum_{i\notin T}f_{1}(x_{ 1},x_{2}+i\Delta)+\sum_{j=1}^{m}f_{s^{(j)}_{r_{j}}}(x_{1},x_{2}-j\Delta). \tag{2}\]
By Lemma 5, \(f\) fits all data points belonging to the selection gadgets. It remains to show that \(f\) attains value \(1\) at all the data points \({\bf p}_{j,r}\), \(j\in[m]\), \(r\in[3]\). To see this, fix such \(j\) and \(r\) and let the \(r\)-th literal in \(C_{j}\) be \(v_{i}\). By Lemma 6, the only two levees which can potentially be non-zero at \({\bf p}_{j,r}\) are \(f_{s_{r}^{(j)}}(x_{1},x_{2}-j\Delta)\) and \(f_{1}(x_{1},x_{2}+i\Delta)\). If \(r=r_{j}\), then \(v_{i}=\) true, so the sum (2) contains the former levee, which attains value \(1\) at \({\bf p}_{j,r}\), while the variable gadget contributes \(f_{-1}(x_{1},x_{2}+i\Delta)\), which attains value \(0\) there. Otherwise, if \(r\neq r_{j}\), then \(v_{i}=\) false, so the clause gadget contributes a levee that attains value \(0\) at \({\bf p}_{j,r}\), while the sum contains the latter levee \(f_{1}(x_{1},x_{2}+i\Delta)\), which attains value \(1\) there. In both cases, the data point is fitted correctly.
Now suppose conversely that the constructed data points can be precisely fitted with a function \(f\) representable with \(k=4(m+n)\) ReLUs. By Lemma 5, \(f\) must be of the form (2) for some set \(T\subseteq[n]\) and some values \(r_{j}\in[3]\) for all \(j\in[m]\). We claim that setting \(v_{i}=\) true for \(i\in T\) and \(v_{i}=\) false for \(i\notin T\) sets exactly one literal per clause to true. To see this, fix \(j\in[m]\) and \(r\in[3]\) and let \(v_{i}\) be the \(r\)-th literal of \(C_{j}\). Using Lemma 6 again, observe that exactly one of the two levees \(f_{s^{(j)}_{r}}(x_{1},x_{2}-j\Delta)\) and \(f_{1}(x_{1},x_{2}+i\Delta)\) must belong to the sum (2) because the data point \({\bf p}_{j,r}\) has label one. In other words, it holds that either \(r=r_{j}\) (implying \(i\in T\)) or \(i\not\in T\). This implies that, for each \(j\in[m]\), the defined truth assignment sets exactly the \(r_{j}\)-th literal of \(C_{j}\) to true, finishing the overall proof.
## 4 W[1]-Hardness for Four ReLUs
We show that fixed-parameter tractability with respect to \(d\) is unlikely even for target error zero and four ReLUs. In fact, we prove a running time lower bound of \(n^{\Omega(d)}\) based on the ETH.
**Theorem 7**.: \(2\mathrm{L}\)-ReLU-NN-Train\((\ell)\) _with \(k=4\) and \(\gamma=0\) is W[1]-hard with respect to \(d\) and not solvable in \(\rho(d)n^{o(d)}\operatorname{poly}(L)\) time (where \(L\) is the input bit-length) for any function \(\rho\) assuming the ETH._
We prove Theorem 7 with a parameterized reduction from the \(2\)-Hyperplane Separability problem.
\(2\)-Hyperplane Separability
**Input:** Two point sets \(Q\) and \(P\) in \(\mathbb{R}^{d}\).
**Question:** Are there two hyperplanes that strictly separate \(Q\) and \(P\)?
Here, two hyperplanes _strictly separate_\(Q\) and \(P\) if, for every pair \(({\bf q},{\bf p})\in Q\times P\), the open line segment \({\bf qp}\) is intersected by at least one hyperplane and no point from \(Q\cup P\) is contained in any of the two hyperplanes. Giannopoulos et al. [12] showed that this problem is W[1]-hard with respect to \(d\) and not solvable in \(\rho(d)m^{o(d)}\operatorname{poly}(L)\) time assuming the ETH (where \(m\coloneqq|Q\cup P|\) and \(L\) is the instance size). In fact, their proof shows that if there is a solution, then there is a solution where \(Q\) lies entirely in one region of the hyperplane arrangement and the points in \(P\) lie only in the two neighboring
regions. Formally, if the two hyperplanes are defined by \(\mathbf{h}_{i}\cdot\mathbf{x}+o_{i}=0\) for \(\mathbf{h}_{i}\in\mathbb{R}^{d}\), \(o_{i}\in\mathbb{R}\), \(i\in[2]\), then (without loss of generality) we can assume that the following holds:
\[\forall\mathbf{q}\in Q:\mathbf{h}_{1}\cdot\mathbf{q}+o_{1}>0> \mathbf{h}_{2}\cdot\mathbf{q}+o_{2} \tag{3}\] \[\forall\mathbf{p}\in P:\operatorname{sgn}(\mathbf{h}_{1}\cdot \mathbf{p}+o_{1})=\operatorname{sgn}(\mathbf{h}_{2}\cdot\mathbf{p}+o_{2}) \tag{4}\]
Moreover, a closer inspection of their reduction shows that one can assume that the hyperplanes have distance at least \(\epsilon\coloneqq m^{-3}\) to each input point2. That is, we can assume
Footnote 2: The critical points in the reduction are the constraint points \(q_{ij}^{uv}\) which are separated from the points \(p_{iu_{i}},p_{\#i},p_{ju_{j}},p_{\#j}\) by some translation of the hyperplane \(H(u_{1},\ldots,u_{k})\) towards the origin. The distance of any \(q_{ij}^{uv}\) to \(H(u_{1},\ldots,u_{k})\) is at least \(2\sin^{3}(\pi/m)\geq 2m^{-3}\) in one dimension.
\[\forall\mathbf{x}\in Q\cup P,i\in[2]:\frac{|\mathbf{h}_{i}\cdot \mathbf{x}+o_{i}|}{\|\mathbf{h}_{i}\|}>\epsilon. \tag{5}\]
We will make use of these assumptions in the following proof.
Proof of Theorem 7.: Let \((Q,P)\) be an instance of (restricted) \(2\)-Hyperplane Separability and let \(m\coloneqq|Q\cup P|\) and \(\epsilon\coloneqq m^{-3}\). We construct the instance \((X\subseteq\mathbb{R}^{d+1},k\coloneqq 4,\gamma\coloneqq 0)\) of \(2\)L-ReLU-NN-Train\((\ell)\), where \(X\) contains the following points:
* \((\mathbf{q},1)\) for each \(\mathbf{q}\in Q\),
* \((\mathbf{p},0)\) for each \(\mathbf{p}\in P\),
* \((\mathbf{r}_{\mathbf{qp}}\coloneqq(1-\delta)\mathbf{q}+\delta\mathbf{p},1)\) and \((\mathbf{s}_{\mathbf{qp}}\coloneqq\delta\mathbf{q}+(1-\delta)\mathbf{p},0)\) for each \((\mathbf{q},\mathbf{p})\in Q\times P\), where \(\delta\coloneqq\epsilon(2\|\mathbf{q}-\mathbf{p}\|)^{-1}\).
Note that \(\mathbf{r}_{\mathbf{qp}}\) (\(\mathbf{s}_{\mathbf{qp}}\)) lies on the line segment \(\mathbf{qp}\) at distance \(\nicefrac{{\epsilon}}{{2}}\) to \(\mathbf{q}\) (\(\mathbf{p}\)). Overall, we construct \(n\coloneqq|X|\in O(m^{2})\) points, which can be done in polynomial time.
For the correctness, assume first that there are two hyperplanes \(\mathcal{H}_{i}\), \(i\in[2]\), defined by \(\mathbf{h}_{i}\cdot\mathbf{x}+o_{i}=0\) (wlog \(\|\mathbf{h}_{i}\|=1\)) that strictly separate \(Q\) and \(P\) and satisfy (3)-(5).
A solution for \((X,4,0)\) can then be constructed as follows (see also Figure 6): We use two ReLUs realizing an "upward step" of height \(1\) (with slope \(\beta\coloneqq\nicefrac{{4}}{{\epsilon}}\)) in the direction of \(\mathbf{h}_{1}\). That is, we set
\[\mathbf{w}_{1} \coloneqq\beta\mathbf{h}_{1}, b_{1} \coloneqq\beta o_{1}, a_{1} \coloneqq 1,\] \[\mathbf{w}_{2} \coloneqq\beta\mathbf{h}_{1}, b_{2} \coloneqq\beta o_{1}-1, a_{2} \coloneqq-1.\]
Additionally, we use two ReLUs realizing a "downward step" of height \(1\) (with slope \(-\beta\)) in the direction of \(\mathbf{h}_{2}\), that is,
\[\mathbf{w}_{3} \coloneqq\beta\mathbf{h}_{2}, b_{3} \coloneqq\beta o_{2}, a_{3} \coloneqq-1,\] \[\mathbf{w}_{4} \coloneqq\beta\mathbf{h}_{2}, b_{4} \coloneqq\beta o_{2}-1, a_{4} \coloneqq 1.\]
Let \(\mathcal{W}_{i}\) be the hyperplane defined by \(\mathbf{w}_{i}\cdot\mathbf{x}+b_{i}=0\) for \(i\in[4]\). Note that \(\mathcal{W}_{1}=\mathcal{H}_{1}\) and \(\mathcal{W}_{3}=\mathcal{H}_{2}\). Note further that \(\mathcal{W}_{2}\) is parallel to \(\mathcal{W}_{1}\) at distance \(\beta^{-1}=\nicefrac{{\epsilon}}{{4}}\) and \(\mathcal{W}_{4}\) is parallel to \(\mathcal{W}_{3}\) at distance \(\nicefrac{{\epsilon}}{{4}}\).
To verify that all data points are exactly fitted, consider first a point \(\mathbf{q}\in Q\). From (3) and (5), we obtain
\[\mathbf{w_{1}}\cdot\mathbf{q}+b_{1} =\beta(\mathbf{h}_{1}\cdot\mathbf{q}+o_{1})>0,\] \[\mathbf{w_{2}}\cdot\mathbf{q}+b_{2} =\beta(\mathbf{h}_{1}\cdot\mathbf{q}+o_{1}-\beta^{-1})>\beta \epsilon-1>0,\] \[\mathbf{w_{3}}\cdot\mathbf{q}+b_{3} =\beta(\mathbf{h}_{2}\cdot\mathbf{q}+o_{2})<0,\] \[\mathbf{w_{4}}\cdot\mathbf{q}+b_{4} =\beta(\mathbf{h}_{2}\cdot\mathbf{q}+o_{2}-\beta^{-1})<0.\]
From the above inequalities, it follows
\[\phi(\mathbf{q})=\beta(\mathbf{h}_{1}\cdot\mathbf{q}+o_{1})-\beta(\mathbf{h}_ {1}\cdot\mathbf{q}+o_{1}-\beta^{-1})=1.\]
Now consider a point \(\mathbf{r_{qp}}\) and note that, for each \(\mathcal{W}_{i}\), \(\mathbf{r_{qp}}\) lies in the same half-space as \(\mathbf{q}\) since it has distance \(\nicefrac{{\epsilon}}{{2}}\) to \(\mathbf{q}\) which has distance at least \(\frac{3}{4}\epsilon\) to \(\mathcal{W}_{i}\) (by (5)). Thus,
\[\phi(\mathbf{r_{qp}})=\beta(\mathbf{h}_{1}\cdot\mathbf{r_{qp}}+o_{1})-\beta( \mathbf{h}_{1}\cdot\mathbf{r_{qp}}+o_{1}-\beta^{-1})=1.\]
Next, consider a point \(\mathbf{p}\in P\). Using (4) and (5), one easily verifies that
\[\operatorname{sgn}(\mathbf{w_{1}}\cdot\mathbf{p}+b_{1})=\operatorname{sgn}( \mathbf{w_{2}}\cdot\mathbf{p}+b_{2})=\operatorname{sgn}(\mathbf{w_{3}}\cdot \mathbf{p}+b_{3})=\operatorname{sgn}(\mathbf{w_{4}}\cdot\mathbf{p}+b_{4}).\]
Hence, \(\phi(\mathbf{p})=0\) clearly holds if all the above signs are negative. If all signs are positive, then
\[\phi(\mathbf{p})=\beta(\mathbf{h}_{1}\cdot\mathbf{p}+o_{1})-\beta(\mathbf{h}_ {1}\cdot\mathbf{p}+o_{1}-\beta^{-1})-\beta(\mathbf{h}_{2}\cdot\mathbf{p}+o_{2 })+\beta(\mathbf{h}_{2}\cdot\mathbf{p}+o_{2}-\beta^{-1})=0.\]
Finally, any point \(\mathbf{s_{qp}}\) analogously lies in the same half-space as \(\mathbf{p}\) for each \(\mathcal{W}_{i}\), which also implies \(\phi(\mathbf{s_{qp}})=0\). Thus, all points are correctly fitted.
Conversely, assume that the points in \(X\) can be exactly fitted by \(\phi\) realized by four ReLUs with values \(\mathbf{w}_{i},b_{i},a_{i}\), \(i\in[4]\). Let \(I^{+}\coloneqq\{i\in[4]\mid a_{i}=1\}\) and \(I^{-}\coloneqq\{i\in[4]\mid a_{i}=-1\}\).
Consider an arbitrary line segment \(\mathbf{qp}\) for \((\mathbf{q},\mathbf{p})\in Q\times P\). Clearly, the points \((\mathbf{q},1)\), \((\mathbf{r_{qp}},1)\) and \((\mathbf{s_{qp}},0)\) on this line segment cannot all lie on the same piece of \(\phi\). Hence, \(\phi\) must have a concave breakpoint at some point on the open segment between \(\mathbf{q}\) and \(\mathbf{p}\). That is, there must be a ReLU \(i\in I^{-}\) such that the hyperplane defined by \((\mathbf{w}_{i},b_{i})\) intersects the open line segment \(\mathbf{qp}\) and does not contain \(\mathbf{q}\) or \(\mathbf{p}\). Analogously, the points \((\mathbf{p},0)\), \((\mathbf{s_{qp}},0)\) and \((\mathbf{r_{qp}},1)\) enforce a convex breakpoint, that is, a ReLU \(j\in I^{+}\) with a hyperplane \((\mathbf{w}_{j},b_{j})\) also intersecting the open line segment \(\mathbf{qp}\) and not containing \(\mathbf{q}\) or \(\mathbf{p}\).
To sum up, every open line segment \(\mathbf{qp}\) is intersected by at least two hyperplanes (not containing \(\mathbf{q}\) or \(\mathbf{p}\)), one corresponding to a ReLU \(i\in I^{-}\) and one corresponding to a ReLU \(j\in I^{+}\). Since there are only four ReLUs, it follows that \(\min(|I^{+}|,|I^{-}|)\leq 2\). That is, we obtain a solution for \(2\)-Hyperplane Separability by picking either all hyperplanes corresponding to \(I^{+}\) or all hyperplanes corresponding to \(I^{-}\).
This finishes the reduction. Note that since the dimension of the input data points in our constructed instance is \(d\), any algorithm solving \(2\)L-ReLU-NN-Train\((\ell)\) in time \(\rho(d)n^{o(d)}\operatorname{poly}(L)\) would imply an algorithm running in time \(\rho(d)m^{o(d)}\operatorname{poly}(L^{\prime})\) for \(2\)-Hyperplane Separability contradicting the ETH.
## 5 Hardness Results for Linear Threshold Activations
A nowadays less popular but more classical activation function than the ReLU is the linear threshold function \(x\mapsto\mathds{1}_{\{x>0\}}\). Analogously to 2L-ReLU-NN-Train\((\ell)\), we consider the following decision version of the training problem for linear threshold functions:
2L-LT-NN-Train\((\ell)\)

**Input:** Data points \((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\in\mathbb{R}^{d}\times\mathbb{R}\), a number \(k\in\mathbb{N}\) of linear threshold neurons, and a target error \(\gamma\in\mathbb{R}_{\geq 0}\).

**Question:** Are there weights \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\in\mathbb{R}^{d}\), biases \(b_{1},\ldots,b_{k}\in\mathbb{R}\), and coefficients \(a_{1},\ldots,a_{k}\in\mathbb{R}\) such that

\[\sum_{i=1}^{n}\ell\Big{(}\sum_{j=1}^{k}a_{j}\,\mathds{1}_{\{\mathbf{w}_{j}\cdot\mathbf{x}_{i}+b_{j}>0\}},\,y_{i}\Big{)}\leq\gamma\,?\]
Note that for linear thresholds, we cannot assume \(a_{j}\in\{-1,1\}\) because the normalization used in the ReLU case does not apply here.
As in the ReLU case, the crucial ingredient to study the training complexity of linear threshold networks is their geometry. To this end, observe that every function represented by a 2-layer linear threshold network is piecewise constant, where the pieces emerge from the hyperplane arrangement defined by the \(k\) hyperplanes \(\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}=0\), \(j\in[k]\), corresponding to the hidden neurons. Since our reductions for the ReLU case always use two ReLUs to approximate "step functions" from 0 to 1 and from 1 to 0, it is easy to adapt the reductions to the linear threshold case.
**Corollary 8**.: \(2\)_L-LT-NN-Train\((\ell)\) is NP-hard even for \(d=2\) and \(\gamma=0\)._
Figure 6: Example of the reduction from \(2\)-Hyperplane Separability for \(d=2\) dimensions. Big points are points in \(Q\) (dark gray) and in \(P\) (light gray). The small points are additionally introduced. The four lines are the breaklines of the four ReLUs. The two thick lines indicate the original two separating lines. The dashed circle has radius \(\epsilon\).
Proof.: We use an analogous reduction to the proof of Theorem 1. Instead of a sum of levees, we use a sum of "stripes" within which the function attains value \(1\). With this idea, it is straightforward to build selection gadgets and an analogous global construction. Note that the number \(k\) of required linear threshold neurons is only \(k=2(m+n)\) for a POITS instance with \(m\) clauses and \(n\) variables, because each stripe can be realized with two linear threshold neurons instead of the four ReLUs required to build a levee.
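For concreteness, a minimal worked instance of such a stripe (a sketch consistent with the construction above, with an assumed height parameter \(h>0\)): two linear threshold neurons with output coefficients \(a_{1}=1\) and \(a_{2}=-1\) realize

\[f(\mathbf{x})=\mathds{1}_{\{\mathbf{w}\cdot\mathbf{x}+b>0\}}-\mathds{1}_{\{\mathbf{w}\cdot\mathbf{x}+b-h>0\}},\]

which attains value \(1\) exactly on the stripe \(0<\mathbf{w}\cdot\mathbf{x}+b\leq h\) and value \(0\) everywhere else, replacing the four-ReLU levee.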
For the sake of completeness, we note that the W[1]-hardness result by Froese et al. [10] also extends to linear threshold functions. To this end, consider the \(\ell^{p}\)-loss \(\ell(\hat{y},y)=|\hat{y}-y|^{p}\), with \(\ell^{0}\) simply counting the non-zero components of \(\hat{y}-y\).
**Corollary 9**.: _For each \(p\in[0,\infty[\), \(2\)L-LT-NN-Train\((\ell^{p})\) with \(k=1\) is NP-hard, W[1]-hard with respect to \(d\) and not solvable in \(\rho(d)n^{o(d)}\operatorname{poly}(L)\) time (where \(L\) is the input bit-length) for any function \(\rho\) assuming the ETH._
Proof.: A close inspection of the reduction from Multicolored Clique by Froese et al. [10] reveals that the single ReLU neuron used in this reduction can be replaced by a linear threshold neuron without changing the logic of the reduction.
Finally, Theorem 7 also finds its analogue in the linear threshold case.
**Corollary 10**.: \(2\)L-LT-NN-Train\((\ell)\) _with \(k=2\) and \(\gamma=0\) is W[1]-hard with respect to \(d\) and not solvable in \(\rho(d)n^{o(d)}\operatorname{poly}(L)\) time (where \(L\) is the input bit-length) for any function \(\rho\) assuming the ETH._
Proof.: The proof is analogous to (even much easier than) the one of Theorem 7. Instead of two ReLUs to realize a step of height one, we can simply use one linear threshold neuron (which is why we obtain hardness already for \(k=2\) in this case). Note that we do not even need to introduce the additional data points \(\mathbf{r_{qp}}\) and \(\mathbf{s_{qp}}\) and obtain a much more direct reduction from \(2\)-Hyperplane Separability.
## 6 An Algorithm for Exact Fitting in the Convex Case
Contrasting the previous two hardness results, we now consider the tractable special case where all coefficients \(a_{j}\) are \(1\). In this case, the neural network realizes a convex continuous piecewise linear function \(\phi(\mathbf{x})=\sum_{j=1}^{k}[\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}]_{+}\) with at most \(2^{k}\) distinct (affine) pieces. We show that this case (which we call \(2\)L-ReLU-NN-Train\((\ell)^{+}\)) with target error \(\gamma=0\) is FPT for the parameter \(d+k\).
**Theorem 11**.: \(2\)L-ReLU-NN-Train\((\ell)^{+}\) _can be solved in \(2^{O(k^{2}d)}\operatorname{poly}(k,L)\) time for \(\gamma=0\), where \(L\) is the input bit-length._
Before giving the proof, we introduce some definitions. For \(I\subseteq[k]\), let \(R_{I}\subseteq\mathbb{R}^{d}\) be the _active region_ of the ReLUs in \(I\), that is, \(\mathbf{x}\in R_{I}\) if and only if
\[\forall j\in I:\mathbf{w}_{j}\mathbf{x}+b_{j} \geq 0,\] \[\forall j\in[k]\setminus I:\mathbf{w}_{j}\mathbf{x}+b_{j} \leq 0.\]
Note that \(R_{I}\) could be empty. Clearly, on each \(R_{I}\), \(\phi\) is the affine function \(\sum_{j\in I}(\mathbf{w}_{j}\cdot\mathbf{x}+b_{j})\). Let \(F_{I}\coloneqq\{(\mathbf{x},\phi(\mathbf{x}))\mid\mathbf{x}\in R_{I}\}\) be the piece corresponding to \(I\).
The convexity of \(\phi\) now allows for a branching algorithm assigning the input data points to the at most \(2^{k}\) pieces.
Proof of Theorem 11.: Let \((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\in\mathbb{R}^{d+1}\), \(k\in\mathbb{N}\), and let \(L\) denote the overall number of input bits. The idea is to use a search tree algorithm to check whether the data can be exactly fitted with \(k\) (convex) ReLUs. To this end, we define \(2^{k}\) sets \(S_{1},\ldots,S_{2^{k}}\) where each \(i\in[2^{k}]\) one-to-one corresponds to a certain subset \(I(i)\subseteq[k]\) of active ReLUs. For given point sets \(S\subseteq\mathbb{R}^{d+1}\) and \(S_{i}\subseteq\mathbb{R}^{d+1}\), \(i\in[2^{k}]\), our algorithm checks whether the points in \(S\) can be exactly fitted by \(k\) ReLUs with the additional constraint that \(S_{i}\subseteq F_{I(i)}\) holds for each \(i\in[2^{k}]\). That is, the following (in)equalities must hold
\[\mathbf{x}\in R_{I(i)}\text{ and }\sum_{j\in I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})=y,\quad i\in[2^{k}],\;(\mathbf{x},y)\in S_{i}. \tag{6}\]
Algorithm 1 depicts the pseudocode of our ExactFit algorithm. We solve an instance with an initial call where \(S\coloneqq\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\},S_{1}=S_{2 }=\cdots=S_{2^{k}}\coloneqq\emptyset\).
The correctness of Algorithm 1 follows by induction on \(|S|\). For \(S=\emptyset\), we simply need to check whether the system (6) of linear (in)equalities is feasible. This can be done by solving a linear program with \(k(d+1)\) variables and \(O(n)\) constraints in \(O(\operatorname{poly}(k,L))\) time (this is done by check-feasibility in Line 2).
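As an illustration of this feasibility check, the following is a minimal sketch (assuming a hypothetical `assignments` mapping from active-index sets \(I(i)\) to their assigned points; not the paper's implementation) that hands the system (6) to an off-the-shelf LP solver with a zero objective:

```python
# Sketch of check-feasibility: test whether the linear system (6) admits
# weights and biases, as an LP over the k*(d+1) variables (w_1,b_1,...,w_k,b_k).
import numpy as np
from scipy.optimize import linprog

def check_feasibility(assignments, k, d):
    """assignments maps a frozenset I(i) of active ReLU indices to the list
    of points (x, y) currently placed on the corresponding piece F_{I(i)}."""
    nvar = k * (d + 1)
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for I, points in assignments.items():
        for x, y in points:
            out_row = np.zeros(nvar)           # encodes sum_{j in I} (w_j.x + b_j)
            for j in range(k):
                row = np.zeros(nvar)
                row[j * (d + 1):j * (d + 1) + d] = x
                row[j * (d + 1) + d] = 1.0     # bias coefficient
                if j in I:                     # active ReLU: w_j.x + b_j >= 0
                    A_ub.append(-row)
                    b_ub.append(0.0)
                    out_row += row
                else:                          # inactive ReLU: w_j.x + b_j <= 0
                    A_ub.append(row)
                    b_ub.append(0.0)
            A_eq.append(out_row)               # exact fit: phi(x) = y on this piece
            b_eq.append(y)
    if not A_eq:                               # nothing assigned yet: trivially feasible
        return True
    res = linprog(np.zeros(nvar),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(None, None)] * nvar)
    return res.status == 0                     # 0: a feasible point was found
```

The lower bounds \(\mu\) used below can be obtained from the same constraint matrices by replacing the zero objective with the output row of the piece in question.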
If \(S\neq\emptyset\) and \((S,S_{1},\ldots,S_{2^{k}})\) is a no-instance, then none of the recursive calls in Line 9 will be successful (by induction). Hence, the algorithm correctly returns "No" in Line 11.
Now assume that \((S,S_{1},\ldots,S_{2^{k}})\) is a yes-instance. Then, any point \((\mathbf{x},y)\in S\) must lie on some piece \(F_{I(i)}\). That is, \((\mathbf{x},y)\) can be put into some \(S_{i}\). Hence, in Line 6, we branch into all \(2^{k}\) options. In each branch, we then check whether putting \((\mathbf{x},y)\) into \(S_{i}\) also forces other points from \(S\) (due to assumed convexity) to be contained in some \(S_{i^{\prime}}\) (this is done by check-forced-points in Line 8). We do this in order to achieve our claimed running time bound as we will show later.
The pseudocode for this check is given in Algorithm 2. The idea is to compute for
each \((\mathbf{x},y)\in S\) and each \(i\in[2^{k}]\) the lower bound
\[\mu\coloneqq\min_{\mathbf{w}_{j},b_{j}}\sum_{j\in I(i)}(\mathbf{w}_{j}\mathbf{x} +b_{j})\]
subject to the constraints (6), which again can be accomplished via linear programming in \(O(\operatorname{poly}(k,L))\) time. Note that both \(\mu=+\infty\) (linear program is infeasible) and \(\mu=-\infty\) (linear program is unbounded) are possible. This is done by lower-bound in Line 3. Now, note that \(\mu>y\) implies that
\[\phi(\mathbf{x})=\sum_{j=1}^{k}[\mathbf{w}_{j}\mathbf{x}+b_{j}]_{+}\geq\sum_{ j\in I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})>y\]
holds for every \(\phi\) satisfying (6). That is, we can reject (Line 9) the current branch of ExactFit. If \(\mu=y\), then we have
\[\phi(\mathbf{x})\geq\sum_{j\in I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})=y\]
for every \(\phi\) satisfying (6), and thus we can safely put \((\mathbf{x},y)\) into \(S_{i}\). To see that this is correct, assume that a solution puts \((\mathbf{x},y)\in F_{I^{\prime}}\) for some \(I^{\prime}\subseteq[k]\) with \(I^{\prime}\neq I(i)\). Then, we have
\[y=\sum_{j\in I^{\prime}}(\mathbf{w}_{j}\mathbf{x}+b_{j})=\sum_{j\in I^{\prime}\cap I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})+\sum_{j\in I^{\prime}\setminus I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})\]
and
\[y=\sum_{j\in I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})=\sum_{j\in I^{\prime}\cap I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})+\sum_{j\in I(i)\setminus I^{\prime}}(\mathbf{w}_{j}\mathbf{x}+b_{j}),\]
which implies
\[\sum_{j\in I^{\prime}\setminus I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})=\sum_{j \in I(i)\setminus I^{\prime}}(\mathbf{w}_{j}\mathbf{x}+b_{j}).\]
Since \(\mathbf{x}\in R_{I^{\prime}}\), it follows that the left sum is at least zero and the right sum is at most zero. Thus, both sums are zero and \(\mathbf{w}_{j}\mathbf{x}+b_{j}=0\) holds for all \(j\in(I^{\prime}\setminus I(i))\cup(I(i)\setminus I^{\prime})\), which shows that \(\mathbf{x}\in R_{I(i)}\). Thus, putting \((\mathbf{x},y)\) into \(S_{i}\) is correct.
Note that adding a point to \(S_{i}\) adds new constraints to (6). Hence, we restart the procedure (Line 7) to check whether this forces new points. Overall, check-forced-points takes \(O(n^{2}2^{k}\operatorname{poly}(k,L))\subseteq O(2^{k}\operatorname{poly}(k,L))\) time.
As regards the correctness of Algorithm 1 now, note that check-forced-points clearly never incorrectly rejects a branch of ExactFit and never forces points incorrectly. Hence, one of the recursive calls in Line 9 will correctly answer "Yes" (by induction), which proves the correctness.
It remains to analyze the running time of Algorithm 1. Clearly, each call to the algorithm takes \(O(2^{k}\operatorname{poly}(k,L))\) time and recursively branches into \(2^{k}\) options. It remains to bound the depth of the recursion tree. To this end, note that the recursion stops as soon as \(S\) is empty or the current branch is rejected by Algorithm 2. We claim that the latter happens after at most \(k(d+1)+1\) recursive calls.
To verify this claim, observe that the algorithm maintains the invariant that the linear program
\[\min_{\mathbf{w}_{j},b_{j}}\sum_{j\in I(i)}(\mathbf{w}_{j}\mathbf{x}+b_{j})\quad\text{s.t.\ (6)}\]

remains feasible along every branch that has not been rejected. Whenever a point is placed into some \(S_{i}\) by branching in Line 6 (rather than by forcing), we have \(\mu<y\) for the chosen \(i\), so the added equality constraint of (6) is not implied by the existing constraints and strictly decreases the dimension of the feasible region of this linear program. Since the program has only \(k(d+1)\) variables, this can happen at most \(k(d+1)+1\) times before the system becomes infeasible and the branch is rejected, which proves the claim. Consequently, the recursion tree has depth at most \(k(d+1)+1\), so it contains at most \((2^{k})^{k(d+1)+1}\in 2^{O(k^{2}d)}\) calls, each taking \(O(2^{k}\operatorname{poly}(k,L))\) time. This yields the claimed running time of \(2^{O(k^{2}d)}\operatorname{poly}(k,L)\).
In a broader context, open directions are to further study the computational complexity in appropriate approximate settings, draw further conclusions on generalization, and understand deeper network architectures.
|
2301.12988 | Graph Neural Network Framework for Security Assessment Informed by Topological Measures | In the power system, security assessment (SA) plays a pivotal role in determining safe operation under normal conditions and in contingency scenarios. Electrical variables are mainly considered as the model's input variables to indicate whether the power system operation is secure or insecure, according to the reliability criteria for contingency scenarios. In this approach, the features are in grid-format data, where relations between features and any knowledge of network topology are absent. Moreover, traditional and common models, such as neural networks (NN), are not applicable if the input variables have a graph structure. Therefore, this paper examines security analysis in the graph neural network (GNN) framework, such that the GNN model incorporates the network connections and the influence of each node's neighbors in the assessment. Here, the input features are separate graphs representing different network conditions in terms of electrical and structural status. Topological characteristics defined by network centrality measures are added to the feature vector to represent the structural properties of the network. The proposed model is simulated on the IEEE 118-Bus system for voltage static security assessment (SSA). The performance indices validate the efficiency of the GNN-based model compared to the traditional NN model, indicating that the information enclosed in graph data boosts the classifier's performance, since the GNN model benefits from the neighbors' features. Moreover, the GNN-based model's superior performance is confirmed when robustness and sensitivity analyses are carried out. The proposed method is not limited to a specific task and can be extended to other security assessments with different critical variables, such as dynamic analysis and frequency criteria, respectively. | Mojtaba Dezvarei, Kevin Tomsovic, Jinyuan Stella Sun, Seddik M. Djouadi | 2023-01-30T15:29:51Z | http://arxiv.org/abs/2301.12988v1 |

# Graph Neural Network Framework for Security Assessment Informed by Topological Measures
###### Abstract
In the power system, security assessment (SA) plays a pivotal role in determining safe operation under normal conditions and in contingency scenarios. Electrical variables are mainly considered as the model's input variables to indicate whether the power system operation is secure or insecure, according to the reliability criteria for contingency scenarios. In this approach, the features are in grid-format data, where relations between features and any knowledge of network topology are absent. Moreover, traditional and common models, such as neural networks (NN), are not applicable if the input variables have a graph structure. Therefore, this paper examines security analysis in the graph neural network (GNN) framework, such that the GNN model incorporates the network connections and the influence of each node's neighbors in the assessment. Here, the input features are separate graphs representing different network conditions in terms of electrical and structural status. Topological characteristics defined by network centrality measures are added to the feature vector to represent the structural properties of the network. The proposed model is simulated on the IEEE 118-Bus system for voltage static security assessment (SSA). The performance indices validate the efficiency of the GNN-based model compared to the traditional NN model, indicating that the information enclosed in graph data boosts the classifier's performance, since the GNN model benefits from the neighbors' features. Moreover, the GNN-based model's superior performance is confirmed when robustness and sensitivity analyses are carried out. The proposed method is not limited to a specific task and can be extended to other security assessments with different critical variables, such as dynamic analysis and frequency criteria, respectively.
Security assessment, Reliability criteria, Network centrality measure, Neural network, Graph neural network, Static security assessment.
## I Introduction
Security itself means being free from risk and threat. In this sense, a power system can never be considered fully secure, since it must always anticipate the occurrence of disruptions. Moreover, the possibility of an incident is increasing as the power system incorporates more inverter-based resources (IBR) such as wind turbines and solar panels [1]. Security in the power system therefore measures the risk of disruption to continuous operation; in practice, it is the ability of the system to withstand sudden disturbances with minimum disruption to its performance.
From an operational point of view, the power system is secure if important electrical variables, such as bus voltage magnitudes and angles, frequency, and power flows, remain within acceptable levels in response to disturbances (_contingencies_) like an electric short circuit, a change of transmission system configuration due to faults, or a sudden load increase. Security Assessment (SA) is an analysis performed to determine whether and to what extent a power system is safe from serious interference in its operation. The SA is carried out to verify that the operation requirement is met in two respects: \((1)\) surviving the ensuing transient and moving into an acceptable steady-state condition, and \((2)\) in this new steady-state condition, all components operating within established limits [2]. Following this time framework, Static Security Assessment (SSA) evaluates the steady-state response and Dynamic Security Assessment (DSA) analyzes the transient response. The SSA is the topic of concern for this paper, as utility companies mainly take it into account for planning and operation purposes. The acceptable level of variables, such as transient voltage magnitude dip, steady-state violation, or frequency excursion, is determined by reliability criteria provided by, for instance, Western Electricity Coordinating Council (WECC) or North American Electric Reliability Corporation (NERC) standards.
The SA can also target frequency-related violations or voltage violations, for both dynamic and static analyses. The former is becoming more challenging in power system operation and planning, especially with the increasing penetration level of IBR, which brings about insufficient inertial and primary frequency responses [3, 4]. The authors in [5] consider frequency security analysis to provide an appropriate frequency control scheme when power systems utilize wind energy; alternatively, a frequency security index can be used, which determines frequency security based on all aspects of the frequency profile and specifies the relative distance from the security margin [6]. The significance of voltage violations is apparent, as several large blackouts were caused by inadequate reactive power supply. For this purpose, the SA may be performed either to estimate the distance to the nose point of the \(P\)-\(V\) and/or \(V\)-\(Q\) curves as a margin [7], or to classify multiple operating conditions for voltage stability assessment [8].
The SA is also categorized by purpose: security classification or security margin estimation. Classification determines whether the power system is secure or insecure with regard to a threshold, whereas in security margin estimation, the distance (margin) to the insecure condition, i.e., the violation threshold, is computed. For instance, an online voltage security assessment practice to prevent a large-scale blackout and the estimation of a loading margin with respect to transient frequency criteria capture the classification and estimation tasks, respectively [9, 10]. This paper focuses on classifying static voltage security in the SA, following contingency scenarios.
Besides the traditional methods for the SA, such as lookup tables and nomograms, in which the operator's decision was essential, the automated SA mechanism uses a model to determine the security status from the values of system variables and measurements, the so-called features. Thanks to data availability from sensors and/or parameter estimation, extensive usage of artificial intelligence (AI) methods is found in the literature for the SA. NNs have been applied to evaluate system security by screening credible contingencies, with the loading condition and the probable contingencies as inputs [11], and to estimate the loadability margin concerning frequency deviation with preventive control [2]. A real-time SA to increase awareness of plausible future insecurity has been applied using Decision Trees (DT) on data collected from Phasor Measurement Units (PMUs) [12]. An attempt to classify whether the power system can tolerate an \((N-1)\)-fault under different conditions has been analyzed via Support Vector Machines (SVM), with Principal Component Analysis (PCA) used for dimensionality reduction of the feature space [13]. In addition to NNs, the promising results of deep learning (DL) frameworks in image and speech recognition have led to emerging usage of DL for SA purposes, to capture immense amounts of data and deliver valuable information. As a typical network model, the Convolutional Neural Network (CNN) can be exploited for power system transient stability assessment, instability mode prediction, and small-signal stability [14, 15].
All the above models use grid-structured data as input features, meaning that data of fixed size, with instances assumed independent, are fed to the models. For example, even though an image can be represented as a graph, it has a banded structure in its adjacency matrix since all nodes are arranged in a grid. This is no longer valid for general graph data, as the number of neighbors of each node is variable, and differences in size and shape within a graph dataset cannot be handled by the resizing or cropping operations used for images. Because graphs are a unique non-Euclidean data structure, and graph isomorphism demands permutation-invariant machine learning models, the GNN model was introduced. Graph analysis focuses on node classification, link prediction, and clustering tasks. Indeed, GNN models are DL-based methods operating on the graph domain.
Due to the promising results of GNNs in social science, natural science, protein-protein interaction networks, and knowledge graphs, broad usage of GNN models can be noticed in the literature [16]. However, the application of GNNs in power systems is not as extensive as in other domains; the usage of GNN models in power systems is discussed in [17]. For example, a GNN model using power flow solutions has been exploited for predicting electricity market prices, addressing the scalability and adaptivity challenges of existing end-to-end optimal power flow (OPF) learning results [18]. Regarding security concerns, [19] provides a scheme combining GNNs and recurrent neural networks for stability classification and critical generator identification in transient assessment. A graph convolutional network (GCN) framework can be applied for fault location identification in a distribution network, demonstrating the GCN model's robustness to limited bus measurements and its outperformance of other machine learning models [20]. As there is a lack of a GNN model for the SSA, this paper seeks to form a GNN framework for voltage SSA.
Regardless of which model is used for the SA task, electrical variables such as active/reactive line power flow, bus voltage angles and magnitudes, and the active and reactive power of each bus load are mainly considered as input features. This feature space lacks topological knowledge of the power system, even though one may represent the power grid as a graph in which buses and lines appear as nodes and edges, respectively. The large scale of the power system motivates researchers to model it as a graph and to study system vulnerability in a topological context using centrality measures. These measures may indicate the parts most salient to random failures and directed attacks [21, 22]. In this context, power system characteristics derived from topological information may assist in analyzing the impact of network structure changes, enhance model performance, and ensure robustness of the SA. The centrality measures can be easily computed using the information from the network topology processor implemented in the Energy Management System (EMS). To the authors' best knowledge, there is no work in the literature examining electrical and topological variables together in the SA framework. Therefore, this paper proposes a method for voltage SSA based on the GNN model that combines electrical variables obtained from the power flow solution with topological parameters defined by grid centrality measures.
Given the importance of the SSA in power system operation and planning, and considering the increasing uncertainty and incidence of disturbances in the power system, this paper delivers a resilient framework for static voltage security assessment. The proposed framework is validated on the IEEE 118-Bus system, where the results indicate that the GNN-based SA model outperforms the traditional NN model. In the model robustness and sensitivity investigations, the GNN-based model also presents better performance metrics, revealing that the proposed model is more capable of capturing uncertainty and producing promising output. The main contributions of this work are:
1. Existing SA schemes lack knowledge of the topological changes that occur during contingency scenarios or unplanned incidents and that alter the power grid structure. This paper considers the power grid as a graph in order to add topological information to the electrical feature space. All structural changes can then be observed and measured using graph centrality measures. The new feature space is thus informed by both electrical and structural variables.
2. The common practice in the SA is to use grid-format data as independent features; in this fashion, the connection information of the graph representation is missed. After presenting the power grid as a graph with both electrical and topological features, this paper delivers an SA model based on the GNN, where each sample is a graph of the power grid after an incident, with features embedded in each node. Security
classification is then transferred to graph-level classification using the GNN model. Indeed, a graph dataset encompassing both electrical and structural features per bus (node) for each sample is used to classify the graph as secure or insecure according to the defined security criteria. The advantage of the GNN model is that it benefits from the local information of each node: during model training, the node features become more informative due to the shared and updated information of neighboring nodes.
3. The proposed approach is straightforward to follow and implement. In addition to the electrical variables generated by power flow results, it only needs a graph-measure generator for the centrality measures. Furthermore, the proposed GNN-based SA is constructed as a comprehensive framework that can also examine the DSA for frequency, voltage violation, etc. Moreover, as the GNN is an active research area, researchers have presented efficient techniques for large-scale graphs, indicating no issue with the scalability of the proposed SA model for real power systems.
The rest of the paper is organized as follows. Preliminary definitions are provided in Section II. The problem formulation for the SA is expressed in Section III. The proposed GNN scheme, informed by topological measures, is discussed in Section IV. The simulation procedure and the results of the proposed method on the IEEE 118-bus system are presented in Section V, followed by model robustness and sensitivity analyses in Section VI. Finally, a discussion of the suggested method, followed by a conclusion, is given in Section VII.
## II Preliminaries
A graph \(G\) is defined as \(G=(V,E)\), where \(V\) is the set of vertices (nodes) and \(E\) is the set of edges. Here, \(V\) and \(E\) are always finite. An edge \(x,y\) is said to join the vertices \(x\) and \(y\) and is denoted by \(xy\). A _directed graph_ is a graph in which all edges are directed from one vertex to another. In contrast, a graph whose edges are bidirectional is called an _undirected graph_. The _adjacency matrix_ of the graph \(G=(V,E)\) is an \(n\times n\) matrix \(A=(a_{ij})\), where \(n\) is the number of vertices in \(G\), \(V=\{v_{1},\ldots,v_{n}\}\), and \(a_{ij}\) is the number of edges between \(v_{i}\) and \(v_{j}\). When \(a_{ij}=0\), \((v_{i},v_{j})\) is not an edge in \(G\). The matrix \(A\) of an undirected graph is symmetric, i.e., \(A^{T}=A\). For a directed graph, the same definition holds, but the matrix \(A\) is no longer symmetric and depends on the edge directions. The _Laplacian matrix_ or Kirchhoff matrix of a graph carries the same information as the adjacency matrix but has different valuable properties, many relating to its spectrum. The Laplacian matrix is defined as \(L=D-A\), where \(D\) is the diagonal node-degree matrix.
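As a small numerical illustration of these definitions (a sketch for a toy three-vertex path graph, not a power grid case):

```python
import numpy as np

# Adjacency, degree, and Laplacian matrices for the undirected path graph
# v1 -- v2 -- v3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))   # node degrees on the diagonal
L = D - A                    # Laplacian (Kirchhoff) matrix

assert (A == A.T).all()      # undirected graphs have a symmetric A
print(L)
# [[ 1 -1  0]
#  [-1  2 -1]
#  [ 0 -1  1]]
```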
## III Problem statement
In this section, the static voltage security analysis used to determine the security status of the power system is discussed. Many sources make power systems vulnerable, such as natural calamities, component failures, faults, internal or external intrusions, and human error. The power system should be able to continue providing service in case of an unpredicted contingency. If any vulnerability source interrupts service, for example through an outage or blackout due to cascading failures, the system is insecure (vulnerable). In static security assessment, the post-contingency time frame is considered, disregarding transient behavior, in which the system settles at a new steady-state operating point. If the new operating point meets the defined system limitations and reliability criteria, the system is said to be statically secure. In this fashion, a fast and reliable solution is necessary to assess the security of numerous operating strategies and reduce the risk of catastrophic incidents. This task is involved, owing to the vast sources of vulnerabilities, the large scale of the power system and its nonlinear behavior, the variety of operating and operational strategies, changes in topology, and the computational burden. Therefore, a classification model is an effective approach that can deal with these difficulties. Classifiers' merits are that they can be developed offline, current and future operating states can be quickly assessed, and classifying a new steady-state power system condition into a secure or insecure class is trivial, without the protracted computations of an analytical solution.
In the SSA framework, the security variables could be bus voltages or line flows indicating thermal limits. This paper considers bus voltage as the security variable; however, the proposed scheme can be applied to other variables and categories, such as dynamic assessment. In the SSA, following a contingency, each bus voltage is analyzed post-contingency, once the transient response has died down. The resulting steady-state voltage (power flow solution) should not violate the range defined by operating limits. It is worth noting that there is no updated guideline for the steady-state voltage violation range when the power grid incorporates IBRs; however, their impact on the transient voltage trajectory with respect to reliability criteria is discussed in [23]. In this regard, the problem statement is straightforward: _following a contingency scenario, we seek a model that classifies the post-contingency (steady-state) condition of the power system, based on bus voltages, according to reliability criteria_. Therefore, the SA model is to classify secure and insecure conditions and notify the operators in adequate time to steer the system away from the insecure state.
To address the defined problem, this paper proposes applying the GNN framework for the SA. As the power grid may be analyzed in graph representation, the problem here is formulated as _graph-level classification_, and a graph, together with its embedded information, is assessed for security purposes. The details of the GNN-based model are described in Section IV.
A graph input may also contain hidden information in its structural properties, and shared local node information may enhance model performance. Traditional and widely applied models work with grid-format input features to label the input dataset as secure/insecure for binary classification (or multi-class SSA [24]); the input features are treated as independent, with no relational information between them. To benefit from the edge connectivity of the power grid and append the local node information from neighbors to boost model performance, the
GNN framework is developed. In this framework, the status of the power grid at the steady-state condition after a contingency is considered one sample, instead of independent variables such as nodal voltages and line flows. For instance, the number of variables for one input of a non-GNN model could be the number of chosen features multiplied by the number of buses. In contrast, the GNN model takes only one graph as input, with all information embedded in its nodes or edges.
## IV The Proposed SA model
The proposed procedure for the SA model consists of two main modules, shown in Fig. 1: _A. the Feature Generator Engine_, which creates the input features, and _B. the GNN learning model_.
### _Feature Generator Engine_
#### Iv-A1 Electrical Variables
Electrical features are selected based on engineering knowledge of the SA problem and on statistical correlation coefficients, to ensure variables fluctuate in relation to each other and to eliminate redundancy. Typical electrical variables include line active and reactive power, voltage magnitude, and angle. Considering the voltage SSA problem, the following are chosen as electrical features in the input dataset:
* The voltage magnitude of each bus, \(V_{mag}\)
* The active and reactive power of each bus, \(P,Q\); a minimal extraction sketch is given after this list
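The sketch below (assuming the pandapower package and its bundled IEEE 118-bus case; not the authors' simulation code) illustrates extracting these three electrical features per bus from a power flow solution:

```python
import pandapower as pp
import pandapower.networks as pn

net = pn.case118()          # IEEE 118-bus test system bundled with pandapower
pp.runpp(net)               # solve the AC power flow

# One row per bus: voltage magnitude (p.u.), active and reactive injections.
elec_features = net.res_bus[["vm_pu", "p_mw", "q_mvar"]].to_numpy()
print(elec_features.shape)  # (118, 3)
```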
Although the power grid provides numerous measurements forming an extensive dataset, only three leading electrical variables are considered in this procedure, so a dimensionality reduction (DR) approach is not relevant here. For numerous variables, the SA may suffer from the curse of dimensionality, in which case techniques such as PCA and Fisher discrimination can be applied to identify the most significant and valuable subset of features for accurate classification [13, 25, 26].
#### Iv-A2 Topological Variables
Power grids have grown organically over the years, in a largely unplanned way, to achieve economic benefits and safety, leading to a widely distributed grid with many connections between generation units and substations. The complexity of the links and the large scale of the system lead researchers to study power grids through graph representations, using statistical tools for vulnerability studies [27]. Table 5 in [28] reviews various resilience analysis and improvement studies in the graph context. This motivates us to investigate the structural properties of the power grid and append topological features to the feature space.
In addition to knowing the number of nodes and edges in a graph, it is worth learning the network's characteristics to indicate the important parts of a network. The metrics known as _graph centrality_ generally measure a unit's prominence in different substantive settings, i.e., they identify the most critical nodes in a graph given its topology, under various definitions of importance. Many centrality measures have been proposed over the years [29]; in this paper, the measures most applicable to the power grid are discussed below and stated as new features for each bus.
_Degree centrality (\(C_{d}\))_ is a local measure and the simplest centrality measure. It implies that nodes with a higher degree \(deg(v)\), i.e., more connected edges, are more influential; the normalized degree centrality is defined as
\[C_{d}(v)=\frac{deg(v)}{n-1}=\frac{L(v,v)}{n-1}\]
_Clustering Coefficient (\(C_{c}\))_ is a measure of the degree to which nodes in a graph tend to cluster together. For each node \(i\), it is the number of edges between the neighbors of the node, divided by the total number of possible edges between those neighbors: \(C_{c_{i}}=2e_{i}/k_{i}(k_{i}-1)\), where \(k_{i}\) and \(e_{i}\) are the number of neighbors and the number of edges connecting them, respectively.
_Betweenness centrality_ (\(C_{b}\)) is defined for both nodes and edges. Node betweenness, the most widely used variant, reflects the influence of a node over the flow of information between other nodes, especially in cases where information flow over a network primarily follows the shortest available path. The node betweenness centrality is defined as
\[C_{b}(v)=\frac{\sum\limits_{s\neq v\neq t\in V}\sigma_{st}(v)/\sigma_{st}}{(n -1)(n-2)/2}\]
where \(\sigma_{st}\) is the number of shortest paths from \(s\) to \(t\) and \(\sigma_{st}(v)\) is the number of those paths that pass through vertex \(v\).
_Closeness centrality_ (\(C_{k}\)) is a way of detecting nodes that can spread information very efficiently through a graph. That is, a node is vital if it has a short distance to many other nodes, and it is defined as
\[C_{k}(v)=\frac{\sum\limits_{t\in V\setminus v}d_{G}(v,t)}{n-1}\]
where \(d_{G}(v,t)\) is the shortest path length between vertices \(v\) and \(t\). This measures how far away a node is from the rest of the network rather than its closeness; therefore, some researchers define closeness to be its reciprocal.

Fig. 1: Procedure of the GNN-based SA model.
The general concept of the shortest path in a graph is not an appropriate metric for the power grid. The shortest path (geodesic path) between two nodes in a graph is a path with the minimum number of edges (or minimum sum of edge weights for a weighted graph). This definition should be modified to cope with power grid characteristics since the electrical flow finds a path with minimum impedance value. So, _electrical distance_\(d_{Z}\) defined by [21] is considered for computing shortest path distance based on line impedance as:
\[d_{Z}(v,t)=\Big{\|}\sum_{(i,j)\in E\,\cap\,path(v,t)}Z_{pr}(i,j)\Big{\|}\]
where \(Z_{pr}(i,j)\) is the line impedance of the link \((i,j)\).
Each measure captures influential nodes in the graph from a distinct point of view; together, the topological measures may fill the gaps left by any single measure. Hence, beyond the electrical variables, the above measures, indicating the topological importance of nodes, are added to the features for voltage SSA as two different databases, as shown in Fig. 1. In other words, electrical and topological variables are embedded in each node as the feature vector
\[[V_{mag}^{i},P^{i},Q^{i},C_{d}^{i},C_{c}^{i},C_{b}^{i},C_{k}^{i}],\quad\forall i \in\textit{Bus set}\]
The representation of the feature vector is illustrated in Fig. 2, in which the graph input data with node-embedded features is deployed for the learning task.
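As an illustration of this feature-generation step, the sketch below (a minimal example using networkx, with hypothetical bus and line data rather than a real test system) computes the four centrality measures per bus, weighting shortest paths by line impedance magnitude so that they follow the electrical distance \(d_{Z}\):

```python
import networkx as nx

G = nx.Graph()
# (from_bus, to_bus, |Z_pr|) -- illustrative values, not a real test system
lines = [(1, 2, 0.06), (2, 3, 0.05), (1, 3, 0.09), (3, 4, 0.04)]
G.add_weighted_edges_from(lines, weight="impedance")

C_d = nx.degree_centrality(G)                            # degree centrality
C_c = nx.clustering(G)                                   # clustering coefficient
C_b = nx.betweenness_centrality(G, weight="impedance")   # betweenness (weighted)
C_k = nx.closeness_centrality(G, distance="impedance")   # closeness (weighted)
# Note: networkx returns the reciprocal form of closeness mentioned above.

topo_features = {v: [C_d[v], C_c[v], C_b[v], C_k[v]] for v in G.nodes}
```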
### _Learning Model: Graph Neural Network_
In this paper, the Graph Neural Network (GNN) framework is chosen as the learning model for voltage SSA. A GNN is a type of deep neural network suitable for analyzing graph-structured data. Convolutional NNs (CNNs) and Recurrent NNs (RNNs) are deficient with graph-structured data (being well-defined only for grid-structured data and sequences, respectively, like images and texts); moreover, these models cannot accommodate variation in input size and shape, as they typically expect a fixed input size. The CNN model is also sensitive to input permutations, such as rotating an input picture. To address these challenges, GNN models are proposed for graph-structured data, as they can handle changes in the shape and size of inputs and are permutation invariant. The overall framework of the GNN model for graph-level classification is shown in Fig. 3.
The main idea here is to generate representations of nodes that include the information on the graph's structure and any feature information it might have [30]. The GNN procedure is encapsulated in _neural message passing_, in which node feature vectors are exchanged between nodes and updated using neural networks. There are two main components in GNNs:
* _Aggregate operator_ \(\mathcal{G}\): a permutation-invariant function applied to a node's neighbors to generate the aggregated node feature.
* _Update operator_ \(\mathcal{F}\): combines the aggregated message with the previous node feature to generate the updated node embedding.
\(\mathcal{G}\) and \(\mathcal{F}\) can be arbitrary differentiable functions (i.e., neural networks), where \(\mathcal{G}\) has to be a permutation-invariant operator. The general procedure of aggregation and update for a sample graph is shown in Fig. 4. The neighbors of a node form a computational graph to aggregate and update information, for example from node A's local graph neighbors (i.e., B, C, and D). The messages coming from these neighbors are in turn based on information aggregated from their respective neighbors, and so on. This visualization shows a two-layer version of a message-passing model, since information is aggregated from two hops. Notice that each node has its own computational graph, in which the GNN forms a tree structure by unfolding the neighborhood around the target node. The NN modules act as both \(\mathcal{G}\) and \(\mathcal{F}\), meaning that the input, the aggregated information from a node's neighbors, passes through a neural network to generate the updated node embedding features.
Fig. 3: General GNN Framework for Classification Problem.

Fig. 2: Embedding Feature Vector.

Mathematically, for graph \(G\), a hidden embedding vector of a node \(h_{u}^{(k)}\) corresponding to each node \(u\in V\) is updated according to the aggregated information from \(u\)'s graph neighborhood \(\mathcal{N}(u)\) [30]
\[h_{u}^{(k+1)}=\mathcal{F}^{(k+1)}\left(h_{u}^{(k)},\mathcal{G}^{k}(\{h_{v}^{(k) },\forall v\in\mathcal{N}(u)\})\right) \tag{1}\]
where the iteration (or layer) index \(k\) indicates that every node embedding contains information from its \(k\)-hop neighborhood. After \(k\) iterations, the embedding \(h_{u}\) of node \(u\) encodes the topological and feature-based information in its \(k\)-hop neighborhood. After running \(K\) iterations of GNN message passing, the output of the final layer defines the embedding of each node, i.e., \(\mathbf{z}_{u}=\mathbf{h}_{u}^{(K)},\forall u\in V\).
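A minimal sketch of one message-passing round in the spirit of Eq. (1) is given below, with a mean chosen as one possible permutation-invariant aggregate \(\mathcal{G}\) and a simple linear-plus-nonlinearity update standing in for \(\mathcal{F}\); all names and choices are illustrative.

```python
import numpy as np

def message_passing_step(H, neighbors, W_self, W_agg):
    """One round of Eq. (1) with mean aggregation and a linear update.

    H:             (num_nodes, dim) current embeddings h_u^{(k)}
    neighbors:     dict mapping node u -> list of neighbor indices N(u)
    W_self, W_agg: trainable matrices standing in for the update operator F
    """
    H_new = np.empty_like(H)
    for u in range(H.shape[0]):
        # Aggregate operator G: permutation-invariant mean over N(u).
        agg = np.mean(H[neighbors[u]], axis=0) if neighbors[u] else np.zeros(H.shape[1])
        # Update operator F: combine previous embedding with the aggregate.
        H_new[u] = np.tanh(H[u] @ W_self + agg @ W_agg)
    return H_new

# Toy usage on a 4-node path graph with 8-dimensional embeddings.
H = np.random.default_rng(0).normal(size=(4, 8))
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
W = np.eye(8)
print(message_passing_step(H, nbrs, W, W).shape)  # (4, 8)
```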
Depending on the choice of aggregate and update functions, there are many GNN models, reviewed in [31]. For example, the basic GNN framework is similar to a multi-layer perceptron (MLP), as it has linear operations followed by a single element-wise non-linearity. In this paper, the Graph Convolutional Network (GCN) model [32] is considered as the SA model. The general idea of GCN is to apply a convolution operator, as in a CNN, but over a graph.
#### IV-B1 Graph Convolutional Network
The GCN is based on spectral methods, in which the representation of a graph lies in the spectral domain, utilizing the Laplacian eigenvectors. The propagation rule is inspired by a first-order approximation of localized spectral filters on graphs. Given an \(N\times M\) feature matrix \(X\) (\(N\): number of nodes, \(M\): number of features), the GCN procedure is as follows [32]:
* Constructing self-connections by adding the identity matrix \(I_{N}\) to the adjacency matrix \(A\) \[\tilde{A}=A+I_{N}\] (2)
* Using the symmetric normalization of the Laplacian to define convolutional filters \[L_{norm}=D^{-\frac{1}{2}}LD^{-\frac{1}{2}}\] (3)
* Applying the renormalization trick to avoid exploding/vanishing gradient problems as \[I_{N}+D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\rightarrow\tilde{D}^{-\frac{1}{2}} \tilde{A}\tilde{D}^{-\frac{1}{2}}\] (4) where \(\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}\) is the row-wise summation of the adjacency matrix with self-connections, producing the degree of each node.
Given \(H\) as the feature matrix and \(W\) as the layer-specific trainable weight matrix, the layer-wise propagation rule is
\[H^{(l+1)}=\sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2 }}H^{(l)}W^{(l)}\right) \tag{5}\]
where \(\sigma\) is an activation function, such as \(\mathrm{ReLU}(\cdot)=\max(0,\cdot)\), and \(H^{(l)}\in\mathbb{R}^{N\times D}\) is the matrix of activations in the \(l^{th}\) layer, with \(H^{(0)}=X\). After \(K\) layers, the GCN produces a node-level output \(H^{(K)}=Z\) (an \(N\times F\) feature matrix, where \(F\) is the number of output features per node). In this last layer, the node embeddings \(h_{u}^{(K)}\) are passed to a readout layer that produces one vector for the final classifier \(R\), with learnable parameters, to perform graph-level predictions as
\[z_{G}=R\left(Readout(h_{u}^{(K)})|u\in G\right) \tag{6}\]
In fact, the whole procedure relies on the symmetric-normalized aggregation together with the self-loop update approach for each node as
\[h_{u}^{(l+1)}=\sigma\left(W^{(l)}\sum_{v\in\mathcal{N}(u)\cup\{u\}}\frac{h_{v}^{(l)}}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}\right) \tag{7}\]
where \(|\cdot|\) denotes the size of a node's neighborhood and \(W\) is the trainable weight matrix.
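The NumPy sketch below assembles the renormalized adjacency of Eqs. (2)-(4) and applies one propagation step of Eq. (5); the toy graph and random weights are placeholders.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step, Eq. (5): sigma(D^-1/2 (A+I) D^-1/2 H W)."""
    A_tilde = A + np.eye(A.shape[0])              # add self-connections, Eq. (2)
    d = A_tilde.sum(axis=1)                       # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # renormalized adjacency, Eq. (4)
    return np.maximum(A_hat @ H @ W, 0.0)         # ReLU activation

# Toy check on a 4-node path graph with 7 input features and 32 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 7))
W0 = rng.normal(size=(7, 32))
print(gcn_layer(A, H0, W0).shape)  # (4, 32)
```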
Considering voltage SSA as supervised graph-level classification, the _softmax_ function in Eq. (8) is applied to determine the predicted probability that the graph belongs to class \(G_{i}\).
\[\textit{softmax}(z_{G_{i}})=\frac{e^{z_{G_{i}}}}{\sum_{j=1}^{n}e^{z_{G_{j}}}} \tag{8}\]
where \(z_{G_{i}}\) is the graph-level embedding over a set of labeled training graphs \(T=\{G_{1},\ldots,G_{n}\}\).
Therefore, the GCN model is applied to voltage SSA on graph-structured data. The task here is graph-level security classification. Each input is a power grid graph under post-contingency conditions, in which each node carries topological and electrical features. The node features are aggregated and updated during the learning procedure with the information of their \(k\)-hop neighbors. The last layer then classifies the final representation of each graph, obtained by the readout layer, as secure/insecure. This procedure is repeated to train the weight matrices with respect to minimizing a loss function. The actual secure/insecure label is defined according to the reliability criteria for bus voltages in steady state.
## V Simulation Procedure and Results
For simulation purposes, MATLAB and PSS/E are used for generating the different cases. The GCN learning model is implemented using PyTorch Geometric [33].
### _Data Generation_
In this paper, the IEEE 118-Bus system is considered for the simulation. This system represents a simple approximation of the American Electric Power system (in the U.S. Midwest) as of December 1962 and contains \(19\) generators, \(35\) synchronous condensers, \(177\) lines, \(9\) transformers, and \(91\) loads.

Fig. 4: Computational graph of node A: aggregation of information from two hops away.

The primary practice to generate data in power system applications is to run different contingency scenarios. Due to the highly interconnected nature of modern power systems and varying energy market scenarios, the operating conditions and even the topology of a power system change frequently. Capturing all such changes while generating data is an intricate task, since the sources of variation are unclear and the power grid operates at various points. The approach outlined in Algorithm 1 is adopted for data generation to provide a rich dataset. Two sources of variation are assumed in the data generation loop, as follows:
* _Load variation during the day_: The actual net load varies over the course of the day. Here, it is assumed that the 118-bus system follows the same shape as the estimated net load for 2020 from the CAISO "Duck Curve" [34]. The actual load is then scaled by the same factor as the "Duck Curve" for different times of the day. Each scale factor is applied to a randomly chosen 70% of the load buses, mimicking changes in the load profile.
* _Stressed conditions_: In this case, P-V analysis is performed using a series of load flow solutions for incremental power transfers (MW) between _source_ (delivers transfer power) and _sink_ (absorbs transfer power) areas at constant power factor. Generators at buses 65, 66, and 69 and loads at buses 20, 21, 22, 23, and 115 are considered the source and sink areas, respectively. This case yields various operating conditions before the voltages cross the threshold or the load flow fails to solve. The P-V solution parameters applied in the PSS/E setting are a power mismatch tolerance of 0.5 MW and a maximum incremental transfer of 1000 MW, with an initial transfer of 10 MW and a transfer increment of 10 MW.
The voltage operation threshold is assumed to be \(0.90\)-\(1.10\) p.u. of the nominal steady-state voltage (post-contingency), as defined by category P1 of TPL-001-WECC-CRT-3.2. A value outside this range violates the reliability criteria and requires operator action.
```
/* capturing load variation */
for each operating point do
    for each scenario in the contingency list do
        /* capturing stressed conditions */
        while power transfer < maximum incremental transfer do
            solve power flow;
            if (solution not found or voltage violation) then
                break, go to the next scenario;
            else
                increase power transfer;
            end if
        end while
    end for
end for
```
**Algorithm 1** Data Generation
As a result, \(21379\) cases were generated, out of which \(19668\) were secure and the remaining \(1711\) were insecure. This yields an imbalanced dataset, which is expected, as the power grid should be secure for most single contingencies (\(N\)-1). Each sample represents a power grid under post-contingency conditions as a graph. The electrical features obtained from the power flow solution and the topological features computed from the centrality measures are then embedded in each node's feature vector. The dataset is randomly split into a 60:20:20 ratio for the training, validation, and test sets. The batch size, i.e., the number of training samples utilized in one training iteration, is 128.
### _GCN Model Configuration_
The GCN architecture for voltage SSA as a graph-level classification task is as follows:
* _Convolutional layer_: Embedding each node by performing 6 layers of GCN with \(\mathrm{ReLU}(x)=\max(x,0)\) as activation function for each layer, all with a hidden-dimension size of 32.
* _Readout layer_: Aggregating node embeddings into a unified graph embedding by averaging the node embeddings.
* _Final classifier_: A linear classifier with a softmax function to map the hidden-dimension embedding to the number of classes.
This architecture results in 10,946 trainable parameters.
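A sketch of this architecture in PyTorch Geometric is given below. It follows the stated design (6 GCN layers with hidden size 32, mean readout, linear secure/insecure classifier) but makes no attempt to reproduce the exact parameter count; the class name and the input dimension of 7 (matching the feature vector above) are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class VoltageSSA_GCN(torch.nn.Module):
    """Sketch of the described architecture: 6 GCN layers (hidden size 32),
    mean readout, and a linear secure/insecure classifier."""

    def __init__(self, in_channels=7, hidden=32, num_classes=2, num_layers=6):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        self.convs.append(GCNConv(in_channels, hidden))
        for _ in range(num_layers - 1):
            self.convs.append(GCNConv(hidden, hidden))
        self.classifier = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))   # node embedding layers
        x = global_mean_pool(x, batch)        # readout: unified graph embedding
        return self.classifier(x)             # logits; softmax applied in the loss
```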
### _Optimization Set up_
Considering binary cross-entropy as the loss function, Eq. (9), the model parameters are trained using the adaptive moment estimation (Adam) optimizer with an initial learning rate of \(10^{-3}\); the learning rate is decayed based on the training results to a minimum of \(10^{-5}\) for regularization over 200 epochs.
\[\mathcal{L}=-(y\log(\hat{y})+(1-y)\log(1-\hat{y})) \tag{9}\]
where \(y\) is the true label and \(\hat{y}\) denotes the predicted label from Eq. (8).
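A hedged training-loop sketch matching this setup is shown below. Here `train_set` is a hypothetical list of PyTorch Geometric `Data` graphs, `VoltageSSA_GCN` refers to the model sketch above, plateau-based decay is one plausible reading of "decay the learning rate based on training results", and for two classes the cross-entropy loss coincides with the binary cross-entropy of Eq. (9).

```python
import torch
from torch_geometric.loader import DataLoader

# Hypothetical: `train_set` is a list of torch_geometric.data.Data graphs with
# 7-dim node features and a binary graph label y (0 = secure, 1 = insecure).
model = VoltageSSA_GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, min_lr=1e-5)
criterion = torch.nn.CrossEntropyLoss()  # softmax + log-loss, matching Eqs. (8)-(9)

loader = DataLoader(train_set, batch_size=128, shuffle=True)
for epoch in range(200):
    model.train()
    epoch_loss = 0.0
    for data in loader:
        optimizer.zero_grad()
        logits = model(data.x, data.edge_index, data.batch)
        loss = criterion(logits, data.y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)  # decay LR when the training loss plateaus
```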
### _Performance Evaluation Metric_
For classification problems, evaluation metrics compare the expected class label to the predicted class label. Since the power system should be operationally secure for \((N-1)\) contingencies, the majority of the dataset is secure, leading to an imbalanced dataset. Therefore, the F1-score and G-mean, efficient metrics for imbalanced data, are studied [35]:
\[\text{F1-score}=2\times\frac{\text{precision}\times\text{recall}}{ \text{precision}+\text{recall}}\] \[\text{G-mean}=\sqrt{\text{recall}\times\text{specificity}}\]
Considering the confusion matrix in Table I, which indicates all four possible outcomes, we then have \(\text{precision}=\frac{TP}{TP+FP}\), \(\text{recall}=\frac{TP}{TP+FN}\), and \(\text{specificity}=\frac{TN}{TN+FP}\).
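A small helper computing both metrics from the confusion matrix (treating the insecure class as positive) might look as follows:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def f1_and_gmean(y_true, y_pred):
    """F1-score and G-mean from the confusion matrix (insecure = positive class)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    gmean = np.sqrt(recall * specificity)
    return f1, gmean

print(f1_and_gmean([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))
```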
### _Case Studies Results_
As a base case, single line contingency \((N-1)\) scenarios are run according to the procedure in Section V-A to generate samples. The same dataset, but in grid-structured form, is applied to an MLP model to investigate the benefit of the GCN. For a fair comparison, the MLP model is configured to have a similar number of trainable parameters (10,699) to the GCN model. The MLP model consists of 4 fully connected (dense) layers. Other settings, such as the activation function, the number of hidden channels, and the optimization parameters, are the same. Training performance metrics and loss are depicted in Fig. 5 for electrical and topological input features. As shown, for the same number of epochs, not only does the GCN provide a better classification result, but its loss also converges to a smaller value than that of the MLP model.
Considering the performance metrics mentioned in Section V-D, Table II reports the performance of both the GCN and MLP models. To investigate the impact of feature type on voltage SSA, the models are trained separately on four different groups of input features:
* Electrical variables: bus voltage magnitude, bus active and reactive power;
* Topological variables: bus degree centrality, bus clustering coefficient, bus betweenness centrality, and bus closeness centrality;
* Voltage magnitude plus topological variable;
* Both electrical and topological.
The results show that the GCN model outperforms the MLP model, as it provides higher evaluation scores. Even considering only topological variables delivers acceptable performance, and adding the voltage magnitude variable increases the test performance by 8.43% (F1-score) and 6.90% (G-mean). This is the expected result, as the voltage variable plays a key role in voltage security assessment.
Moreover, one may argue that there is only a minute improvement in the metrics whether the GCN model is deployed or topological variables are added. The point here is that improving the performance of an already accurate model is challenging, since the model may face overfitting. The capacity of the proposed model is analyzed later on. Furthermore, in power system security assessment, the consequences of misclassification may result in a blackout or in operational cost. Considering the results for both variable types in Table II, the GCN model correctly classifies 106 additional samples (a higher TP and lower FN rate), as reflected in the F1-score. In practical applications, the correct classification of each sample is substantial and can be valued via a _penalty matrix_ [36]: NERC's guidelines approved a matrix comprising violation risk factors and violation severity levels, establishing "base penalty amounts". Since protection and remedial actions are activated based on the classification output, slightly enhancing the performance of the SA model not only lowers the risk of the consequences of a wrong decision but also avoids penalty fines.
The other finding concerns the importance of the topological variables. Once voltage magnitude is added to the topological variables, the F1-score and G-mean increase compared to using only electrical variables. This indicates that the topological variables make a meaningful contribution to the voltage security analysis. The physics behind this observation is not fully clear; a possible reason is that incorporating the importance of buses across the grid reveals the impact of topological changes. Combining all electrical and topological variables in the feature set provides the highest evaluation metrics. In all cases, the GCN model outperforms the MLP because the GCN captures and updates node information from its neighbors, leading to more informative node features. Indeed, the performance of voltage SSA is enhanced by the information sharing and embedding between buses.
## VI Model Capacity Analysis
To analyze the ability of the trained models to adapt properly to new, previously unseen samples, the so-called _generalization_ of the model, the following scenarios are simulated to observe the model capacity.
#### VI-A1 Robustness Analysis
In this case, different datasets are generated to evaluate the robustness of the models on unseen samples. For case 1, a new operating point scaled by the "Duck Curve" is generated randomly, and then single line contingencies followed by incremental power transfer are applied, as in Algorithm 1; \(1795\) samples are generated in this case. Case 2 is constructed based on double line contingencies \((N\)-\(1\)-\(1)\) from the normal operating point. In this fashion, \(15923\) double line scenarios, including the base case, are generated without running stressed conditions. Only \(500\) of the double line cases are randomly chosen for power transferring, for the sake of data generation speed, resulting in \(5108\) samples. Both new datasets are then evaluated on the trained models to assess the robustness of the SA on unseen data. Table III states the robustness analysis results, and the performance of the models is shown as a bar graph in Fig. 6 for visual comparison. As seen, the F1-score drops for both models; however, on both new datasets and for all variable types, the GCN drops less than the MLP model, indicating a greater capacity to classify unseen data. The models' performance varies depending on the variable type; that is, a relation between the feature space and the new cases is noticed. Since the operating point is associated with the electrical variables, the models trained on them are more sensitive to this case than to the double-line contingency case, which addresses topological changes. It is also observed that models trained on topological variables are robust to new operating points but sensitive to the double-line contingency dataset. Thanks to the combination of both variable types in the features, the models perform robustly in both new cases; that is, the model can capture the variation of new datasets arising from changes in the operating point or the grid topology. Furthermore, the GCN outperforms the MLP, signifying the functionality of the GCN due to its feature embedding aggregation and update operations.

Fig. 5: Models F1-score and loss: MLP and GCN comparison
#### VI-A2 Variables Sensitivity
Beyond the impact of unseen datasets, the model can be analyzed when input variables are perturbed to see how this may impact the output classification. Here, sensitivity analysis examines the change in the target output when one of the input features is perturbed. In the state estimation architecture, the variables used in the SA, specifically the electrical variables, are estimated from measurements and statuses. Topological variables might face misinformation generated by the topology processor due to inaccurate data reported from SCADA or limitations of the source measurements; however, such errors are less likely than perturbations of the electrical variables. Therefore, the variation of voltage magnitude is considered for the sensitivity analysis, as electrical variables are the main subject of perturbation analysis due to device measurement errors and their proneness to false data injection.
Considering the general state estimation architecture related to the SA shown in Fig. 7, from [37], a perturbation block is added only to mimic the voltage variation due to state estimation (SE) error. It should be mentioned that, in practice, disturbances are mainly added to measurements; here the block only represents the manner of voltage variation (SE error) for the sensitivity analysis. We randomly choose 10% of the buses and inject 5% of the nominal voltage magnitude to form a new test set. One may argue that the SE bad data detection (BDD) scheme could capture this variation and label it as an anomaly before it reaches monitoring and operation. As there is no specified BDD threshold in practice, and it varies depending on operational considerations, the proposed scenario can be applied without practical issues. The models' performance for this scenario is reported in Table IV, where the superiority of the GCN model over the MLP model is once more seen, as the performance of the GCN classifier degrades less. Although the classification performance decreases, using the GCN model with both electrical and topological information results in less sensitivity to voltage magnitude variation. Clearly, this scenario does not apply to models with only topological variables.
## VII Discussion and Conclusion
This paper introduced the GCN model for voltage SSA by considering topological variables obtained from the topology of the power grid after contingency scenarios. The following points are discussed in this framework:
### _Do the topological features boost performance?_
As the results showed, the topological variables can enhance the model performance for security assessment and increase the model's capacity, as demonstrated by the robustness and sensitivity analyses, particularly for the GCN model. Furthermore, the choice of centrality measures is also important. Although the topological variables capture the structural properties of the power grid, not all graph centrality measures enhance performance. During the simulation, _harmonic centrality_, which measures the average distance of a node to the other nodes in the network, was also computed. Interestingly, adding this measure lowered the best model performance in Table II by 8.9%. A possible reason is that, among the topological features, voltage SSA depends mainly on the node-related measures mentioned in Section IV-A2. Therefore, measures referring directly to distances can be misleading for voltage SSA.

Fig. 6: Bar Graph of Robustness Analysis Results
### _Training time_
The GNN-based model involves feature aggregation and updates from nodes' neighbors, so the training procedure takes longer than for the MLP model. For instance, for both feature types in Table II, the training times of the GCN and the MLP are 1971 and 63 seconds, respectively. Although the GCN model takes much longer to train (about 32 times the MLP), it delivers better performance than the MLP across various scenarios. Moreover, the proposed voltage SSA is applied in an offline SA scheme, in which training time is not a concern.
### _On a GNN Universal Framework for the SA_
The proposed GNN-based SA model is not limited to voltage SSA; it can be applied to other security assessments such as frequency and dynamic security assessment [12, 34]. The framework is universal in the sense that only the data generation procedure and the reliability criteria need to be adapted. For example, in dynamic voltage security assessment, a dynamic simulation needs to be run, and the bus voltage trajectories must be monitored to apply the appropriate reliability criteria.
In conclusion, a GCN model is introduced as a GNN-based framework for voltage SSA. In contrast to traditional grid-format input features, a graph-structured format is proposed for the security analysis of the power grid. A graph then represents the status of a power grid, in which each node (bus) carries features, and the GCN model aggregates and updates bus features from the neighbors to deliver a well-informed design for the SA. In addition to the electrical variables, topological variables obtained from graph centrality measures, indicating structural properties of the power grid, are appended to the feature space. A dataset capturing possible variations, such as the daily change of the load profile and stressed grid conditions for voltage stability, is generated for model validation purposes. Simulation results show that the GCN model outperforms the traditional neural network, and the impact of the new feature vector is observed in the model performance. Moreover, the proposed framework is studied through robustness and sensitivity analyses to examine the model's generalization; the outcomes once more confirm the superiority of the GCN model. As the GCN model makes decisions using the neighbor information of each bus, with the SA reformulated in the graph context, it provides more capacity for security classification due to the information distributed over the network. Although this paper focuses on voltage security in the steady state, the developed scheme can readily be adapted to any security assessment, such as dynamic assessment, considering other stability problems like frequency.
|
2308.00887 | Factor Graph Neural Networks | In recent years, we have witnessed a surge of Graph Neural Networks (GNNs),
most of which can learn powerful representations in an end-to-end fashion with
great success in many real-world applications. They have resemblance to
Probabilistic Graphical Models (PGMs), but break free from some limitations of
PGMs. By aiming to provide expressive methods for representation learning
instead of computing marginals or most likely configurations, GNNs provide
flexibility in the choice of information flowing rules while maintaining good
performance. Despite their success and inspirations, they lack efficient ways
to represent and learn higher-order relations among variables/nodes. More
expressive higher-order GNNs which operate on k-tuples of nodes need increased
computational resources in order to process higher-order tensors. We propose
Factor Graph Neural Networks (FGNNs) to effectively capture higher-order
relations for inference and learning. To do so, we first derive an efficient
approximate Sum-Product loopy belief propagation inference algorithm for
discrete higher-order PGMs. We then neuralize the novel message passing scheme
into a Factor Graph Neural Network (FGNN) module by allowing richer
representations of the message update rules; this facilitates both efficient
inference and powerful end-to-end learning. We further show that with a
suitable choice of message aggregation operators, our FGNN is also able to
represent Max-Product belief propagation, providing a single family of
architecture that can represent both Max and Sum-Product loopy belief
propagation. Our extensive experimental evaluation on synthetic as well as real
datasets demonstrates the potential of the proposed model. | Zhen Zhang, Mohammed Haroon Dupty, Fan Wu, Javen Qinfeng Shi, Wee Sun Lee | 2023-08-02T00:32:02Z | http://arxiv.org/abs/2308.00887v1 | # Factor Graph Neural Networks
###### Abstract
In recent years, we have witnessed a surge of Graph Neural Networks (GNNs), most of which can learn powerful representations in an end-to-end fashion with great success in many real-world applications. They have resemblance to Probabilistic Graphical Models (PGMs), but break free from some limitations of PGMs. By aiming to provide expressive methods for representation learning instead of computing marginals or most likely configurations, GNNs provide flexibility in the choice of information flowing rules while maintaining good performance. Despite their success and inspirations, they lack efficient ways to represent and learn higher-order relations among variables/nodes. More expressive higher-order GNNs which operate on k-tuples of nodes need increased computational resources in order to process higher-order tensors. We propose Factor Graph Neural Networks (FGNNs) to effectively capture higher-order relations for inference and learning. To do so, we first derive an efficient approximate Sum-Product loopy belief propagation inference algorithm for discrete higher-order PGMs. We then neuralize the novel message passing scheme into a Factor Graph Neural Network (FGNN) module by allowing richer representations of the message update rules; this facilitates both efficient inference and powerful end-to-end learning. We further show that with a suitable choice of message aggregation operators, our FGNN is also able to represent Max-Product belief propagation, providing a single family of architecture that can represent both Max and Sum-Product loopy belief propagation. Our extensive experimental evaluation on synthetic as well as real datasets demonstrates the potential of the proposed model.
Graphical Models, Belief Propagation, Graph Neural Networks

Footnote †: ©2023 Zhen Zhang, Mohammed Haroon Dupty, Fan Wu, Javen Shi and Wee Sun Lee.

License: CC-BY 4.0, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). Attribution requirements are provided at [http://jmlr.org/papers/v24/21-0434.html](http://jmlr.org/papers/v24/21-0434.html).
## 1 Introduction
Deep neural networks are powerful function approximators that have been extremely successful in practice. While fully connected networks are universal approximators, successful networks in practice tend to be structured, _e.g._, grid-structured convolutional neural networks and chain-structured gated recurrent neural networks (_e.g._, LSTM, GRU). Graph neural networks (Gilmer et al., 2017; Xu et al., 2018; Yoon et al., 2019) have recently been successfully used with graph-structured data to capture pairwise dependencies between variables and to propagate the information to the entire graph.
GNNs learn node representations by iteratively passing messages between nodes within their neighbourhood and updating the node embeddings based on the messages received. Though successful, these models are limited by the first order approximations they make in aggregating information from the neighbouring nodes. The dependencies in the real-world data are often of higher-order which cannot be captured by only pairwise modeling. For example, in LDPC encoding, the bits of a signal are grouped into several clusters and in each cluster, the parity of all bits is constrained to be equal to zero (Zarkeshvari and Banihashemi, 2002). For good performance, these higher-order constraints should be exploited in the decoding procedure. Furthermore, many naturally occurring graphs like molecules exhibit repeating substructures like motifs; atoms in a molecule additionally satisfy higher-order valence constraints (Wu et al., 2018; Agarwal et al., 2006). We can learn better node representations if we can design better message passing schemes that can directly utilize the higher-order dependencies that are not captured using pair-wise dependencies.
Node interactions have also been modeled by Probabilistic Graphical Models (PGMs) (Wainwright and Jordan, 2008; Koller and Friedman, 2009) wherein nodes are viewed as random variables with a factorized joint distribution defined over them. Research that focuses on such models has been directed towards finding approximate inference algorithms to compute node marginals or the most likely configuration of the variables. There is rich literature of theoretically grounded PGM inference algorithms to find the state of a node given its statistical relations with other variable nodes in the graph. Knowledge of such inference algorithms can provide good inductive bias if it can be encoded in the neural network. The inductive bias lets the network favour certain solutions over others, and if the bias is indeed consistent with the target task, it helps in better generalization (Battaglia et al., 2018). So, while we use deep learning methods to learn representations in an end-to-end setting, better generalization may be achievable for some tasks with the inductive bias of classical inference algorithms. Unfortunately, these approximate inference algorithms become inefficient at higher-order.
One such well-known algorithm to find approximate node marginals is Loopy Belief Propagation (LBP) (Murphy et al., 2013; Pearl, 2014) which operates on the factor graph data structure. A factor graph is a bipartite graph with a set of variable nodes connected to a set of factor nodes; each factor node indicates the presence of dependencies among its connected variables. In this paper, we propose to leverage Sum-Product LBP to formulate the message passing updates and build a better graph representation learning model. To this end, we derive an efficient message passing algorithm based on LBP with arbitrary higher-order factors on discrete graphical models, with the assumption that higher-order factors are of low rank, parameterized in the form of a mixture of rank-1 tensors. The derived
message passing updates only need two operations, matrix multiplication and Hadamard product, and their complexity grows linearly with the number of variables in the factor. Furthermore, this parameterization can represent any factor exactly with a large enough set of rank-1 tensors; the number of rank-1 tensors required can grow exponentially for some problems but often in practice, a small number is sufficient for a good approximation.
Further, we represent the message passing updates in a neural network module and unroll the inference over discrete graphical models as a computational graph. We allow the messages to be arbitrary real valued vectors (instead of being constrained to be positive as in LBP) and treat the messages as latent vectors in a network; the latent vectors produced by the network can then be used for learning the target task through end-to-end training. Instead of using just the product operations to aggregate the set of latent vectors, we allow the use of other aggregators, potentially even universal set functions, to provide more flexibility in representation learning. We refer to the process of unrolling the algorithm, relaxing some of the constraints, modifying some of the components to potentially make the network more powerful, and using the resulting network as a component for end-to-end learning as _neuralizing_ the algorithm. We call the neural module a Factor Graph Neural Network (FGNN).
The FGNN is defined using two types of modules, the Variable-to-Factor (VF) module and the Factor-to-Variable (FV) module (see Figure 1). These modules are combined into a layer, and the layers are stacked together to form the network. Though the FGNN is motivated by Sum-Product LBP, we show that by using a different form of low-rank tensor representation and aggregator function, it is able to exactly parameterize the Max-Product Belief Propagation, which is a widely used approximate _maximum a posteriori_ (MAP) inference algorithm for PGMs, as well. Theoretically, this shows that FGNN can represent both Max-Product and Sum-Product within a single architecture simply by changing the aggregator function; furthermore, if a universal approximator is used as the aggregator, it would be able to learn to approximate the better of the two message passing algorithms on the problem.

Figure 1: The structure of the Factor Graph Neural Network (FGNN): the Variable-to-Factor (VF) module is shown on the left and the Factor-to-Variable (FV) module is shown on the right.
The theoretical relationship with Sum-Product and Max-Product provides understanding on the representational capabilities of GNNs in general, and of FGNN in particular, _e.g._ it can solve problems solvable by graphical model message passing algorithms _e.g._, (Bayati et al., 2008; Kim and Pearl, 1983). From the practical perspective, the factor graph provides a flexible way for specifying dependencies among the variables, including higher-order dependencies. Furthermore, inference algorithms for many types of graphs, _e.g._, graphs with typed edges or nodes, are easily developed using the factor graph representation. Edges, or more generally factors, can be typed by tying together parameters of factors of the same type, or can also be conditioned from input features by making the edge or factor parameters a function of the features; nodes can similarly have types or features with the use of factors that depend on a node variable. With typed or conditioned factors, the factor graph can also be assembled dynamically for each graph instance. FGNN provides a flexible learnable architecture for exploiting these graphical structures--just as factor graph allows easy specification of different types of PGMs, FGNN allows easy specification of both typed and conditioned variables and dependencies as well as a corresponding data-dependent approximate inference algorithm.
To be practically useful, the FGNN architecture needs to be practically _learnable_ without being trapped in poor local minima. We performed experiments to explore the practical potential of FGNN on both problems where the graphical model inference aspects are clear, as well as problems where the FGNN is used mostly as a graph neural network that allows the structure of the higher-order interactions to be specified and exploited. On problems closely related to PGM inference, we experimented with a synthetic higher-order PGM inference problem, LDPC decoding, as well as a graph matching problem. FGNN performed well on the synthetic PGM inference and outperforms the standard LDPC decoding method under some noise conditions. On the graph matching problem, FGNN outperforms both a graphical model approximate inference algorithm as well as a graph neural network. To show that FGNN can be used purely as a graph neural network to exploit higher-order dependencies, we conducted experiments on handwritten character recognition within words where there is strong correlation in the character sequences, human motion prediction where different joint positions are constrained by the human body structure and molecular property prediction where higher-order correlations in the molecular graphs are likely to be present. We demonstrate that FGNN is able to exploit the higher-order information with state-of-the-art results on human motion prediction. Furthermore, FGNN is able to outperform other recent \(k\)-order GNNs (Morris et al., 2019; Maron et al., 2019) substantially on two challenging large molecular datasets (QM9 and Alchemy).
## 2 Related work
Belief Propagation (BP) inference algorithms have been used in a variety of applications spanning computer vision, natural language processing, and other machine learning domains. Due to the intractability of the algorithm in its higher-order form, first-order pairwise BP has predominantly been used in most problems. However, multiple works have proposed approaches to efficiently run higher-order BP in various settings (Lan et al., 2006; Potetz
and Lee, 2008; Kohli and Kumar, 2010). Most of the approaches are designed either with assumptions on input graph structures or with restrictions on the kind of higher-order functions being modelled. Lan et al. (2006) used an adaptive state space to handle the increased complexity for higher-order 2x2 MRF clique structures. Potetz and Lee (2008) used specific assumption of linear constraints on higher-order functions to efficiently run the BP equations. Kohli and Kumar (2010) used linear envelope approximations to model the higher-order functions and showed its usefulness for semantic segmentation.
The inspiration for our low-rank formulation of higher-order functions comes from Wrigley et al. (2017), where higher-order potentials were decomposed with CP decompositions to efficiently run the sampling-based junction-tree algorithm. They used CP tensor decomposition to efficiently sample from higher-order functions in the junction-tree updates, whereas in our work, we assume a tensor decomposition form of the higher-order function and derive efficient message update equations for the LBP algorithm. Furthermore, our method is a deterministic approximation of LBP, and we neuralize it to learn the parameters in end-to-end training.
There have been a significant number of works which have tried to incorporate inference algorithms in deep networks (Zheng et al., 2015; Chen et al., 2015; Lin et al., 2016, 2015; Tamar et al., 2016; Karkus et al., 2017; Xu et al., 2017). A large number of these works focus on learning pairwise potentials of graphical models on top of CNN to capture relational structures among their outputs for structured prediction. These methods are largely focused on modeling for a specific task like semantic segmentation (Zheng et al., 2015; Lin et al., 2016), scene graph prediction (Xu et al., 2017), and image denoising (Wu et al., 2016). On the other hand, (Tamar et al., 2016; Karkus et al., 2017) represent planning algorithms as neural network layers in sequential decision making tasks.
Various graph neural network models have been proposed for graph structured data, including methods based on the graph Laplacian (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2016), gated networks (Li et al., 2015), and various other neural networks structures for updating the information (Duvenaud et al., 2015; Battaglia et al., 2016; Kearnes et al., 2016; Schutt et al., 2017). Gilmer et al. (2017) show that these methods can be viewed as applying message passing on pairwise graphs and are special cases of Message Passing Neural Networks (MPNNs).
There has been some recent work on extending graph convolutional neural networks to hyper-graphs in order to capture higher-order information (Feng et al., 2019; Yadati et al., 2019; Jiang et al., 2019; Zhang et al., 2019). Feng et al. (2019); Jiang et al. (2019) used clique-expansion of hyper-edges to extend the convolutional operation to hyper-graphs. Such modeling is equivalent to decomposing a hyper-edge into a set of pairwise edges. A similar approximation is applied in Yadati et al. (2019), where the number of added pairwise edges is reduced and depends linearly on the size of the hyperedge. Although these methods operate on hyper-graphs, effectively the hyper-graphs are reduced to graphs with pairwise edges.
Recently, Morris et al. (2019) and Maron et al. (2019) used Weisfeiler Lehman (WL) graph isomorphism tests to construct increasingly powerful GNNs. They proposed models of message passing called \(k\)-order GNNs to capture higher-order structures and compared their expressiveness with higher-order WL tests. In contrast to \(k\)-order GNNs which build on graph isomorphism testing, FGNN builds on probabilistic graphical models, which provide
a rich modeling language allowing the designer to specify prior knowledge in the form of pairwise as well as higher-order dependencies in a factor graph. As \(k\)-order GNNs are theoretically shown to be more expressive in capturing higher-order information, we compare our model with the \(k\)-order GNNs on the molecular datasets.
Running inference algorithms on graph neural networks has been previously explored in Yoon et al. (2019); Dai et al. (2016) and Chen et al. (2018). These works showed methods of running graphical model inference with GNN message passing updates. However, they operate under the assumption of pairwise graphical models rather than more general higher-order models. In contrast, our work deals mainly with running inference algorithms on higher-order graphical models as a graph neural network. Following our initial work, Satorras and Welling (2020) proposed a graph neural network based on the factor graph for LDPC code decoding that also exploits the loopy belief propagation messages. However, they did not connect message passing on factor graphs with higher-order factors represented using tensor decompositions as we have done, which further gives a theoretical understanding of the relation between inference algorithms and GNNs.
## 3 Preliminaries
In this section, we briefly review two concepts central to our approach, low rank tensor decomposition and the loopy belief propagation inference algorithm.
### Tensor decompositions
Tensors are generalizations of matrices to higher dimensions. An order-\(m\) tensor \(\mathbf{T}\) is an element in \(\mathbb{R}^{N_{1}\times N_{2}\cdots\times N_{m}}\) with \(N_{k}\) possible values in \(k^{th}\) dimension for \(k\in\{1,2,\ldots,m\}\). Tensor rank decompositions provide succinct representation of tensors. In CANDECOMP / PARAFAC (CP) decomposition, a tensor \(\mathbf{T}\) can be represented as a linear combination of outer products of vectors as
\[\mathbf{T}=\sum_{r=1}^{R}\lambda^{r}w_{1}^{r}\otimes w_{2}^{r}\otimes\cdots \otimes w_{m}^{r} \tag{1}\]
where \(\lambda^{r}\in\mathbb{R}\), \(w_{k}^{r}\in\mathbb{R}^{N_{k}}\), \(\otimes\) is the outer product operator, _i.e._, \(\mathbf{T}(i_{1},i_{2}\ldots,i_{m})=\sum_{r=1}^{R}\lambda^{r}w_{1_{i_{1}}}^{r} w_{2_{i_{2}}}^{r}\cdots w_{m_{i_{m}}}^{r}\), and the term \(w_{1}^{r}\otimes w_{2}^{r}\otimes\cdots\otimes w_{m}^{r}\) is a rank-1 tensor. The scalar coefficients \(\lambda^{r}\) can optionally be absorbed into \(\{w_{k}^{r}\}\). The smallest \(R\) for which an exact \(R\)-term decomposition exists is the rank of tensor \(\mathbf{T}\) and the decomposition (1) is its \(R\)-rank approximation. With this compact representation, an exponentially large tensor \(\mathbf{T}\) with \(N_{1}\times N_{2}\cdots\times N_{m}\) entries can be represented with \(R\) vectors for each variable in \(\mathbf{T}\), _i.e._, with a total of \(R(N_{1}+N_{2}+\cdots+N_{m})\) parameters. More information about tensor decompositions can be found in Kolda and Bader (2009), and Rabanser et al. (2017).
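As a minimal illustration of Eq. (1), the sketch below reconstructs an order-\(m\) tensor from its CP factors in NumPy; the shapes and rank are arbitrary placeholders.

```python
import numpy as np

def cp_reconstruct(weights, factors):
    """Rebuild an order-m tensor from its CP form, Eq. (1).

    weights: (R,) coefficients lambda^r
    factors: list of m matrices, factors[k] has shape (R, N_k) holding the w_k^r
    """
    shape = tuple(f.shape[1] for f in factors)
    T = np.zeros(shape)
    for r in range(len(weights)):
        term = weights[r]
        for f in factors:            # successive outer products build a rank-1 tensor
            term = np.multiply.outer(term, f[r])
        T += term
    return T

# Rank-2 decomposition of a 3 x 4 x 5 tensor: only 2 * (3 + 4 + 5) = 24 parameters.
rng = np.random.default_rng(0)
factors = [rng.normal(size=(2, n)) for n in (3, 4, 5)]
T = cp_reconstruct(np.ones(2), factors)
print(T.shape)  # (3, 4, 5)
```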
### Graphical models and Loopy belief propagation
Probabilistic Graphical Models (PGMs) use graphs to model dependencies among random variables. These dependencies are conveniently represented using a factor graph, which is a bipartite graph \(\mathcal{G}=(\mathcal{V},\mathcal{C},\mathcal{E})\) where each vertex \(i\in\mathcal{V}\) in the graph is associated with a
random variable \(x_{i}\in\mathbf{x}\), each vertex \(c\in\mathcal{C}\) is associated with a non-negative function \(\phi_{c}\), and an edge connects a variable vertex \(i\) to a factor vertex \(c\) if \(\phi_{c}\) depends on \(x_{i}\). An example factor graph is shown in Figure 2.
We consider PGMs which model dependencies between discrete random variables. Let \(\mathbf{x}\) be the set of all variables and let \(\mathbf{x}_{c}\) be the subset of variables that \(\phi_{c}\) depends on. The joint distribution of variables factorizes over \(\mathcal{C}\) as
\[P(\mathbf{x}) =\frac{1}{Z}\prod_{c\in\mathcal{C}}\phi_{c}(\mathbf{x}_{c}) \tag{2}\]
\(Z\) is the normalizing constant. Without loss of generality, we assume all variables can take \(d\) values and consequently \(\phi_{c}(\mathbf{x}_{c})\in\mathbb{R}^{d^{|\,\mathbf{x}_{c}|}}\).
**Marginal and MAP Inference.** In this paper we consider two main inference tasks -- the _marginal_ inference and the _maximum a posteriori_ (MAP) inference. The aim of marginal inference is to compute the marginals \(p_{i}(x_{i})\)

\[p_{i}(x_{i})=\sum_{\mathbf{x}_{\mathcal{V}\setminus\{i\}}}P(\mathbf{x})=\sum_{\mathbf{x}_{\mathcal{V}\setminus\{i\}}}\frac{1}{Z}\prod_{c\in\mathcal{C}}\phi_{c}(\mathbf{x}_{c}), \tag{3}\]
and the aim of MAP inference is to find the assignment which maximizes \(P(\mathbf{x})\), that is
\[\mathbf{x}^{*} =\operatorname*{argmax}_{\mathbf{x}}\prod_{c\in\mathcal{C}}\phi_ {c}(\mathbf{x}_{c})\] \[=\operatorname*{argmax}_{\mathbf{x}}\sum_{c\in\mathcal{C}}\log \phi_{c}(\mathbf{x}_{c}) \tag{4}\]
**Loopy Belief Propagation.** The marginal inference and MAP inference problems are NP-hard in general, and thus approximation algorithms are usually required. Different versions of the loopy belief propagation (LBP) algorithm (Pearl, 2014; Murphy et al., 2013) compute approximate marginals \(p(x_{i})\) at each node \(x_{i}\), or the approximate MAP assignment, by sending messages between factor and variable nodes on a factor graph. First we introduce the Sum-Product loopy belief propagation. Essentially, Sum-Product LBP starts by initializing two kinds of messages, factor-to-variable \(m_{c\to i}(x_{i})\) and variable-to-factor \(m_{i\to c}(x_{i})\). Messages are functions of the variable in the variable node, updated with the following recursive equations,
\[m_{i\to c}(x_{i}) =\prod_{d\in N(i)\setminus\{c\}}m_{d\to i}(x_{i}) \tag{5}\] \[m_{c\to i}(x_{i}) =\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\phi_{c}(\mathbf{x}_{c}) \prod_{j\in N(c)\setminus\{i\}}m_{j\to c}(x_{j}) \tag{6}\]
where \(N(i)\) is the set of neighbours of \(i\) and the messages \(m_{i\to c},m_{c\to i}\in\mathbb{R}^{d}\). As illustrated in Figure 3, \(m_{i\to c}\) is the message from variable \(i\) to factor \(c\) and \(m_{c\to i}\) is the message from factor \(c\) to variable \(i\). After a sufficient number of iterations, the belief of the variables is computed by
\[b_{i}(x_{i})=\prod_{c\in N(i)}m_{c\to i}(x_{i}). \tag{7}\]
Similarly, the Max-Product belief propagation algorithm can be formulated as
\[m_{i\to c}(x_{i}) =\prod_{d\in N(i)\setminus\{c\}}m_{d\to i}(x_{i}) \tag{8}\] \[m_{c\to i}(x_{i}) =\max_{\mathbf{x}_{c}\setminus\{x_{i}\}}\phi_{c}(\mathbf{x}_{c}) \prod_{j\in N(c)\setminus\{i\}}m_{j\to c}(x_{j})\] (9) \[b_{i}(x_{i}) =\prod_{c\in N(i)}m_{c\to i}(x_{i}). \tag{10}\]
Performed in log space, the product operator in Eqns. (8)-(10) becomes a sum, and we obtain the Max-Sum algorithm, which may be better behaved numerically.
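A compact NumPy sketch of the factor-to-variable update, covering both the Sum-Product rule of Eq. (6) and the Max-Product rule of Eq. (9) via a `mode` switch, is given below; the factor table and messages are random placeholders.

```python
import numpy as np

def factor_to_variable(phi, in_msgs, i, mode="sum"):
    """Message m_{c -> i} from a factor with table phi, Eqs. (6) and (9).

    phi:     (d, ..., d) factor tensor over n_c variables
    in_msgs: list of n_c incoming messages m_{j -> c}, each of shape (d,)
    i:       index of the target variable within the factor
    """
    t = phi.copy()
    for j, m in enumerate(in_msgs):
        if j == i:
            continue
        # Multiply message j into dimension j of the factor tensor.
        shape = [1] * phi.ndim
        shape[j] = -1
        t = t * m.reshape(shape)
    axes = tuple(k for k in range(phi.ndim) if k != i)
    return t.sum(axis=axes) if mode == "sum" else t.max(axis=axes)

phi = np.random.default_rng(0).random((3, 3, 3))   # toy order-3 factor, d = 3
msgs = [np.ones(3) / 3] * 3
print(factor_to_variable(phi, msgs, i=0, mode="sum"))
print(factor_to_variable(phi, msgs, i=0, mode="max"))
```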
## 4 Proposed Method
In this section, we derive the FGNN model through neuralizing the Sum-Product loopy belief propagation that utilizes the low rank decomposition of higher-order potentials. Then we show that with a slight modification the model can also mimic the Max-Sum or equivalently the Max-Product belief propagation.
### Low Rank Sum-Product Loopy Belief Propagation
We start the derivation by writing the LBP equations in vectorized form. Consider a factor \(\phi_{c}(\mathbf{x}_{c})\) over \(n_{c}\) variables, _i.e._, \(\mathbf{x}_{c}=[x_{1},x_{2},\ldots,x_{n_{c}}]\). Then the message update equations are,
\[m_{i\to c}=\prod_{d\in N(i)\setminus\{c\}}m_{d\to i} \tag{11}\]
\[m_{c\to i}=\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\phi_{c}(\mathbf{x}_{c}) \prod_{j\in N(c)\setminus\{i\}}m_{j\to c} \tag{12}\]
where \(\phi_{c}(\mathbf{x}_{c})\in\mathbb{R}^{d^{n_{c}}}\) and the messages \(m_{j\to c},m_{c\to i}\) are in \(\mathbb{R}^{d}\).

Figure 3: Loopy Belief Propagation messages

A tensorized way to implement Equation (12) would be to take the outer product of all incoming messages \(m_{j\to c}\), expand a dimension corresponding to the dimension of the \(i^{th}\) variable, and elementwise multiply with the \(\phi_{c}(\mathbf{x}_{c})\) tensor. Then, we can marginalize all variables except \(x_{i}\) to get the message \(m_{c\to i}\), _i.e._,
\[m_{c\to i}=\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\phi_{c}(\mathbf{x}_{c}) \odot\left(m_{1\to c}\otimes\cdots\otimes\mathbf{1}_{i\to c}\otimes\cdots \otimes m_{n_{c}\to c}\right) \tag{13}\]
where \(\mathbf{1}_{i\to c}\) is a vector of ones in \(\mathbb{R}^{d}\), \(\otimes\) is the outer product and \(\odot\) is elementwise multiplication or Hadamard product. Note that the summation operator \(\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\) performs the "marginalization" operation, i.e., summation across all dimensions except the \(i^{th}\) dimension corresponding to variable \(x_{i}\), which reduces the order-\(n_{c}\) tensor to a vector in \(\mathbb{R}^{d}\).
Since \(\phi_{c}(\mathbf{x}_{c})\) is a tensor in \(\mathbb{R}^{d^{n_{c}}}\), it can be represented as a sum of \(R\) fully factored terms in CP decomposition form (1).
\[\phi_{c}(\mathbf{x}_{c})=\sum_{r=1}^{R}\lambda_{r}w_{c,1}^{r}\otimes w_{c,2}^ {r}\otimes\cdots\otimes w_{c,n_{c}}^{r} \tag{14}\]
where \(w_{c,j}^{r}\in\mathbb{R}^{d}\) and \(\lambda_{r}\) are real-valued scalars. This representation is efficient if \(\phi_{c}\) is of low rank, i.e., \(R\ll d^{n_{c}}\); in that case, a \(d^{n_{c}}\)-dimensional tensor is compressed into a set of \(n_{c}\cdot R\) vectors of \(d\) dimensions each. Please refer to Section 3.1 on Preliminaries for details on CP tensor decomposition.
We posit that such a low-rank representation is often a good approximation of \(\phi_{c}\) in practice; the method is likely to be useful when this assumption holds. Previously, such low rank approximations have been shown to be useful in variety of real world tasks including semantic segmentation and knowledge graph embedding (Wrigley et al., 2017; Kohli and Kumar, 2010; Trouillon et al., 2017). Moreover, we provide supporting evidence on the low-rank assumption in our ablation experiments in Section 5.4 and 5.5 (See Figure 7 and 11).
Absorbing \(\lambda_{r}\) in (14) into weights \(\{w_{c,j}^{r}\}\) and substituting in (13), we have
\[m_{c\to i} =\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\Big{(}\sum_{r=1}^{R}w_{ c,1}^{r}\otimes w_{c,2}^{r}\otimes\cdots\otimes w_{c,n_{c}}^{r}\Big{)} \odot\left(m_{1\to c}\otimes\cdots\otimes\mathbf{1}_{i\to c}\otimes \cdots\otimes m_{n_{c}\to c}\right) \tag{15a}\] \[=\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\sum_{r=1}^{R}\left(w_{ c,1}^{r}\odot m_{1\to c}\right)\otimes\cdots w_{c,i}^{r}\cdots\otimes \left(w_{c,n_{c}}^{r}\odot m_{n_{c}\to c}\right) \tag{15b}\]
In Equation (15b), we have used the distributive rule: for all \(u,v\in\mathbb{R}^{d}\) and \(u^{\prime},v^{\prime}\in\mathbb{R}^{d^{\prime}}\), \((u\otimes u^{\prime})\odot(v\otimes v^{\prime})=(u\odot v)\otimes(u^{\prime}\odot v^{\prime})\).
The variables are grouped together with the factor parameters corresponding to them. Now we can marginalize out a variable easily as we have a sum of fully factorized functions. We simply push the outer summation inside, distribute and separately evaluate it over each
of the univariate products \(\left(w_{c,j}^{r}\odot m_{j\to c}\right)\). This gives us,
\[m_{c\to i}= \sum_{r=1}^{R}\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\left(w_{c,1}^ {r}\odot m_{1\to c}\right)\otimes\cdots w_{c,i}^{r}\cdots\otimes\left(w_{c,n_{ c}}^{r}\odot m_{n_{c}\to c}\right) \tag{16a}\] \[= \sum_{r=1}^{R}w_{c,i}^{r}\Big{(}\sum_{x_{1}}w_{c,1}^{r}\odot m_{1 \to c}\Big{)}\cdots\Big{(}\sum_{x_{n_{c}}}w_{c,n_{c}}^{r}\odot m_{n_{c}\to c} \Big{)}\] (16b) \[= \sum_{r=1}^{R}w_{c,i}^{r}\gamma_{c,1}^{r}\cdots\gamma_{c,n_{c}}^{ r}\qquad\qquad\qquad\qquad\text{; with }\gamma_{c,i}^{r}=1\] (16c) \[= \sum_{r=1}^{R}w_{c,i}^{r}\gamma_{c}^{r} \tag{16d}\]
In Equation (16a), we have swapped the summations, and in Equation (16b), we have distributed the summation \(\sum_{\mathbf{x}_{c}\setminus\{x_{i}\}}\) over each variable. In Equation (16c), we evaluate the summation over each variable to get \(\gamma_{c,j}^{r}\), _i.e._, \(\sum_{x_{j}}w_{c,j}^{r}\odot m_{j\to c}=w_{c,j}^{r\,T}m_{j\to c}=\gamma_{c,j}^{r}\in\mathbb{R}\), which is a scalar. To rule out the message from variable \(x_{i}\), we set \(\gamma_{c,i}^{r}=1\), and therefore the product \(\mathbf{\gamma}_{c}^{r}=\gamma_{c,1}^{r}\cdot\gamma_{c,2}^{r}\cdots\gamma_{c,n_{c}}^{r}\in\mathbb{R}\) is also a scalar in Equation (16d).
Since \(m_{c\to i}\) is a linear combination of \(R\) number of \(w_{c,i}^{r}\) vectors in Equation (16d), we can rewrite it in matrix form. For this, we stack the \(R\) component weight vectors for each variable as matrix \(\mathbf{W}_{c,i}=[w_{c,i}^{1},w_{c,i}^{2},\ldots,w_{c,i}^{R}]\in\mathbb{R}^{d \times R}\). Similarly, we stack \(R\) number of \(\mathbf{\gamma}_{c}^{r}\)'s together as vector \(\mathbf{\Gamma}_{c}=[\mathbf{\gamma}_{c}^{1},\mathbf{\gamma}_{c}^{2},\ldots,\mathbf{\gamma}_{ c}^{R}]^{T}\in\mathbb{R}^{R\times 1}\). Then, we can rewrite the Equation (16d) in matrix form as
\[m_{c\to i}=\mathbf{W}_{c,i}\mathbf{\Gamma}_{c} \tag{17}\]
Note that since \(\mathbf{\Gamma}_{c}=[\mathbf{\gamma}_{c}^{1},\mathbf{\gamma}_{c}^{2},\ldots,\mathbf{\gamma}_{c}^{R}]^{T}\in\mathbb{R}^{R\times 1}\), where each \(\mathbf{\gamma}_{c}^{r}\) is a product of \(\gamma_{c,j}^{r}\)'s, _i.e._, \(\mathbf{\gamma}_{c}^{r}=\gamma_{c,1}^{r}\cdot\gamma_{c,2}^{r}\cdots\gamma_{c,n_{c}}^{r}\) (from Equations (16c) and (16d)), we can rewrite \(\mathbf{\Gamma}_{c}\) as an elementwise product of vectors over the variables as:
\[\mathbf{\Gamma}_{c}=\begin{bmatrix}\mathbf{\gamma}_{c}^{1}\\ \mathbf{\gamma}_{c}^{2}\\ \vdots\\ \mathbf{\gamma}_{c}^{R}\end{bmatrix}=\begin{bmatrix}\gamma_{c,1}^{1}\cdot\gamma_{ c,2}^{1}\ldots\gamma_{c,n_{c}}^{1}\\ \gamma_{c,1}^{2}\cdot\gamma_{c,2}^{2}\ldots\gamma_{c,n_{c}}^{2}\\ \vdots\\ \gamma_{c,1}^{R}\cdot\gamma_{c,2}^{R}\ldots\gamma_{c,n_{c}}^{R}\end{bmatrix}= \begin{bmatrix}\gamma_{c,1}^{1}\\ \gamma_{c,1}^{2}\\ \vdots\\ \gamma_{c,1}^{R}\end{bmatrix}\odot\begin{bmatrix}\gamma_{c,2}^{1}\\ \gamma_{c,2}^{2}\\ \vdots\\ \gamma_{c,2}^{R}\end{bmatrix}\cdots\odot\begin{bmatrix}\gamma_{c,n_{c}}^{1}\\ \gamma_{c,n_{c}}^{2}\\ \vdots\\ \gamma_{c,n_{c}}^{R}\end{bmatrix} \tag{18}\]
This gives us,
\[\mathbf{\Gamma}_{c}=[\Gamma_{c,1}\odot\Gamma_{c,2}\cdots\odot\Gamma_{c,n_{c}}] \tag{19}\]
where each \(\Gamma_{c,j}=[\gamma_{c,j}^{1},\gamma_{c,j}^{2},\ldots,\gamma_{c,j}^{R}]^{T} \in\mathbb{R}^{R\times 1}\). Note that from Equation (16c), \(\mathbf{\Gamma}_{c}\) contains \(\Gamma_{c,i}\) as a vector of all ones _i.e._\(\Gamma_{c,i}=[\gamma_{c,i}^{1},\gamma_{c,i}^{2},\ldots,\gamma_{c,i}^{R}]^{T}=[1,1,\ldots,1]^{T}\). Therefore, Equation (17) can now be written as
\[m_{c\to i}=\mathbf{W}_{c,i}\big{[}\Gamma_{c,1}\odot\Gamma_{c,2}\cdots\odot \Gamma_{c,n_{c}}\big{]} \tag{20}\]
Furthermore, from Equation (16c) we have \(\gamma_{c,j}^{r}=\sum_{x_{j}}w_{c,j}^{r}\odot m_{j\to c}=w_{c,j}^{r^{T}}m_{j\to c}\). Therefore, we can write \(\Gamma_{c,j}\) as
\[\Gamma_{c,j} =[\gamma_{c,j}^{1},\gamma_{c,j}^{2},\ldots,\gamma_{c,j}^{R}]^{T} \tag{21a}\] \[=[w_{c,j}^{1},w_{c,j}^{2},\ldots,w_{c,j}^{R}]^{T}m_{j\to c}\] (21b) \[=\mathbf{W}_{c,j}^{T}m_{j\to c} \tag{21c}\]
Now, combining with Equation (20), we obtain the new message passing updates for the low-rank loopy belief propagation algorithm,
\[m_{c\to i}=\mathbf{W}_{c,i}\Big{[}\big{(}\mathbf{W}_{c,1}^{T}m_{1\to c} \big{)}\odot\big{(}\mathbf{W}_{c,2}^{T}m_{2\to c}\big{)}\cdots\odot\big{(} \mathbf{W}_{c,n_{c}}^{T}m_{n_{c}\to c}\big{)}\Big{]}_{\setminus i} \tag{22a}\] \[m_{i\to c}=\bigodot_{d\in N(i)\setminus\{c\}}m_{d\to i} \tag{22b}\]
The belief update is simple. For variable \(x_{i}\), we project the messages from the other variables sharing a factor with \(x_{i}\) into \(\mathbb{R}^{R}\), perform elementwise multiplication, and then project the product back into \(\mathbb{R}^{d}\). Thereafter, we multiply such messages from all the factors that \(x_{i}\) is connected to in order to obtain the updated belief of \(x_{i}\).
Clearly, the computational complexity of the message updates grows only linearly as variables are added to a factor, so the algorithm is efficient enough to run with higher-order factors.
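To make the updates in Equations (22a) and (22b) concrete, the following is a minimal NumPy sketch of the low-rank message updates; the function and variable names (`factor_to_variable`, `variable_to_factor`, `W`, `msgs`) are illustrative, and the toy dimensions are chosen arbitrarily.

```python
import numpy as np

def factor_to_variable(W, msgs, i):
    """Low-rank factor-to-variable update of Eq. (22a).
    W    : list of (d, R) weight matrices W_{c,j}, one per variable of the factor
    msgs : list of length-d variable-to-factor messages m_{j->c}
    i    : index (within the factor) of the target variable x_i
    """
    # project each incoming message to R dimensions: W_{c,j}^T m_{j->c}
    gammas = [Wj.T @ mj for Wj, mj in zip(W, msgs)]
    gammas[i] = np.ones_like(gammas[i])       # exclude the target variable
    prod = np.prod(np.stack(gammas), axis=0)  # elementwise product over variables
    return W[i] @ prod                        # project back to R^d

def variable_to_factor(incoming):
    """Eq. (22b): elementwise product of messages from the other factors."""
    return np.prod(np.stack(incoming), axis=0)

# toy example: one factor over three variables with d = 2 states, rank R = 4
rng = np.random.default_rng(0)
W = [rng.random((2, 4)) for _ in range(3)]
msgs = [rng.random(2) for _ in range(3)]
print(factor_to_variable(W, msgs, i=0))       # message m_{c -> x_0} in R^2
```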
### Neuralizing Low Rank Sum-Product LBP
To learn better node and, consequently, graph representations in an end-to-end setting, we seek to neuralize the Low-Rank LBP algorithm by writing the message passing updates as a functionally equivalent neural network module and _unrolling_ the inference algorithm as a computational graph. We further replace the positive message vectors in LBP with unconstrained real-valued hidden latent vector representations initialized from a feature extractor network.
In the Low-rank LBP algorithm, a factor is parameterized by the set of matrices \(\{\mathbf{W}_{c,j}\}\), i.e., it has as many \(\mathbf{W}_{c,j}\)'s as the number of variables adjacent to it in the factor graph. We can relax this constraint and maintain \(2|\{\mathbf{W}_{c,j}\}|\) matrices at each factor: one set is used to transform messages before the Hadamard product, and the other set after the product in equation (22a). The additional parameters are helpful because the two message updates are no longer tied by shared parameters and can be run in parallel. Moreover, the extra parameters may increase the representative power of the neural network while still being able to represent equation (22a), since the second set of matrices can be learnt to equal the first. With this relaxation, we can push the outer \(\mathbf{W}_{c,i}\) into equation (22b) and rewrite the LBP updates of (22a) and (22b) as,
\[m_{c\to i} =\bigodot_{j\in N(c)\setminus\{i\}}\mathbf{W}_{c,j}^{T}m_{j\to c} \tag{23a}\] \[m_{i\to c} =\bigodot_{d\in N(i)\setminus\{c\}}\mathbf{W}_{d,i}m_{d\to i} \tag{23b}\]
The message updates of (23a) and (23b) only involve operations such as matrix-vector products and elementwise multiplication, which can readily be represented in a neural network for end-to-end learning. However, the multiplication of several terms can lead to numerical instability during learning due to overflow and underflow. Therefore, we replace the Hadamard product with other generalized set-function aggregators. It has been shown that, given a set of input vectors, an MLP followed by a _sum_ (Zaheer et al., 2017) or _max_ (Qi et al., 2017) aggregator is a universal set approximator: it can approximate any non-linear set function, so it can be used in place of the Hadamard product, approximating it if necessary, while also providing the capability to learn a better aggregator than the Hadamard product. To provide even more approximation capability, we allow a multi-layer perceptron (MLP) to transform the message after aggregation, just like typical graph neural networks.
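As an illustration, here is a minimal PyTorch sketch of such an MLP-then-sum set aggregator that could stand in for the Hadamard product; the module name and layer sizes are our own choices, not part of the FGNN specification.

```python
import torch
import torch.nn as nn

class SetAggregator(nn.Module):
    """MLP-then-sum set aggregator (a universal set-function approximator
    in the sense of Zaheer et al., 2017) used in place of the Hadamard product."""

    def __init__(self, dim, hidden):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.post = nn.Linear(hidden, dim)    # post-aggregation transform

    def forward(self, msgs):
        # msgs: (n_messages, dim) -> aggregated message of shape (dim,)
        return self.post(self.pre(msgs).sum(dim=0))
```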
A single aggregator of FGNN is shown in Figure 4. For a message between node \(c\) and node \(i\), we optionally allow both the latent vectors from \(c\) and \(i\) to be used as input to the MLP at node \(c\). The matrix \(\mathbf{W}\) used to multiply the message can be learned directly, if necessary. Alternatively, if some feature \(t_{ci}\) is available, \(\mathbf{W}_{c,i}\) can be conditioned on \(t_{ci}\) by using an MLP to output the matrix conditioned on the feature.
For implementing Equations (23a) and (23b), the nodes in Figure 4 correspond to the messages \(m_{c\to i}\) and \(m_{i\to c}\). If we want to use the architecture as a graph neural network instead, it is convenient to define the latent vectors to correspond to factor and variable nodes, as this results in a smaller network. This can be done if we do not exclude the vector from the target node in the aggregation operation, i.e., use \(m_{c}=\bigodot_{j\in N(c)}\mathbf{W}_{c,j}^{T}m_{j}\) and \(m_{i}=\bigodot_{d\in N(i)}\mathbf{W}_{d,i}m_{d}\) as the starting points for neuralization instead. The original belief propagation equations are exact inference algorithms when there are no loops; for correctness, the information in each message is only used once for computing a node marginal. As a graph neural network, the network is trained end-to-end and can learn to account for the repeated information. We find experimentally that the results are similar whether the nodes correspond to the belief propagation messages or to the variables and factors. Hence, we use networks where nodes correspond to variables and factors, except for the theoretical results where we simulate the Sum-Product and Max-Product algorithms.

Figure 4: A single aggregator of the FGNN architecture.
### Factor Graph Neural Network
We now describe the Factor Graph Neural Network (FGNN), a graph neural network model based on the derived low-rank LBP equations. It consists of two modules, the Variable-to-Factor module and the Factor-to-Variable module. Given a factor graph \(\mathcal{G}=(\mathcal{V},\mathcal{C},\mathcal{E})\), unary features \([\mathbf{f}_{i}]_{i\in\mathcal{V}}\) and factor features \([\mathbf{g}_{c}]_{c\in\mathcal{C}}\), assume that for each edge \((c,i)\in\mathcal{E}\), with \(c\in\mathcal{C},i\in\mathcal{V}\), there is an associated edge feature vector \([t_{ci}]\). Let \(k\) and \(l\) be the hidden dimensions of the variable and factor node embeddings, respectively. The pseudo code for a FGNN layer on \(\mathcal{G}\) is shown in Algorithm 1, where \([\Phi_{\mathrm{VF}},\Theta_{\mathrm{VF}}]\) are the parameters of the Variable-to-Factor module, and \([\Phi_{\mathrm{FV}},\Theta_{\mathrm{FV}}]\) are the parameters of the Factor-to-Variable module. The factor-to-variable messages are computed as \(\tilde{\mathbf{f}}_{i}=\underset{c:(c,i)\in\mathcal{E}}{\mathbf{AGG}}\, \mathcal{Q}(\mathbf{t}_{ci}\,|\Phi_{\mathrm{FV}})\,\mathcal{M}([\ \mathbf{g}_{c},\mathbf{f}_{i}]|\Theta_{\mathrm{FV}})\), where \(\mathcal{M}\) is a neural network mapping feature vectors to a length-\(k\) feature vector, and \(\mathcal{Q}(\mathbf{t}_{ci})\) is a neural network mapping \(\mathbf{t}_{ci}\) to an \(l\times k\) matrix. Then, by matrix multiplication and the aggregation operator \(\mathbf{AGG}\), a new length-\(l\) variable feature \(\tilde{\mathbf{f}}_{i}\) is generated. Similarly, the variable-to-factor messages are computed as \(\tilde{\mathbf{g}}_{c}=\underset{i:(c,i)\in\mathcal{E}}{\mathbf{AGG}}\,\mathcal{Q}(\mathbf{t}_{ci} \,|\Phi_{\mathrm{VF}})\,\mathcal{M}([\mathbf{g}_{c},\mathbf{f}_{i}]|\Theta_{ \mathrm{VF}})\) to generate the updated factor feature \(\tilde{\mathbf{g}}_{c}\).
```
Require: \(\mathcal{G}=(\mathcal{V},\mathcal{C},\mathcal{E}),[\mathbf{f}_{i}]_{i\in \mathcal{V}},[\mathbf{g}_{c}]_{c\in\mathcal{C}},[t_{ci}]_{(c,i)\in\mathcal{E}}\)
Ensure: \([\tilde{\mathbf{f}}_{i}]_{i\in\mathcal{V}}\), \([\tilde{\mathbf{g}}_{c}]_{c\in\mathcal{C}}\)
1: Variable-to-Factor:
2: \(\tilde{\mathbf{g}}_{c}=\underset{i:(c,i)\in\mathcal{E}}{\mathbf{AGG}}\,\mathcal{Q}(\mathbf{t}_{ci} \,|\Phi_{\mathrm{VF}})\,\mathcal{M}([\mathbf{g}_{c},\mathbf{f}_{i}]|\Theta_{ \mathrm{VF}})\)
3: Factor-to-Variable:
4: \(\tilde{\mathbf{f}}_{i}=\underset{c:(c,i)\in\mathcal{E}}{\mathbf{AGG}}\, \mathcal{Q}(\mathbf{t}_{ci}\,|\Phi_{\mathrm{FV}})\,\mathcal{M}([\ \mathbf{g}_{c},\mathbf{f}_{i}]|\Theta_{\mathrm{FV}})\)
```
**Algorithm 1** The FGNN layer
By stacking multiple FGNN layers together, we obtain an FGNN that transforms the initial factor graph features into variable and factor embeddings. To assist learning, we can add other architectural details such as residual connections. Unlike in belief propagation algorithms, the parameters of the graph neural network need not be tied across layers, potentially giving better representational power. Furthermore, the latent variables can have different dimensions in different layers, and other types of layers, such as fully connected layers, can be interleaved with the FGNN layers.
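For concreteness, the following is a minimal PyTorch sketch of one FGNN layer in the spirit of Algorithm 1, using sum aggregation and an edge-indexed representation of the factor graph; all names and the choice of aggregator are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn as nn

def scatter_sum(src, index, num_targets):
    # sum edge messages into their target nodes (factors or variables)
    out = torch.zeros(num_targets, src.size(1), dtype=src.dtype)
    return out.index_add_(0, index, src)

class FGNNLayer(nn.Module):
    """A sketch of one FGNN layer: a Variable-to-Factor and a
    Factor-to-Variable module. M maps [g_c, f_i] to R^k and Q maps the
    edge feature t_ci to an (l x k) matrix, as in Algorithm 1."""

    def __init__(self, var_dim, fac_dim, edge_dim, k, l):
        super().__init__()
        self.M_vf = nn.Sequential(nn.Linear(fac_dim + var_dim, k), nn.ReLU())
        self.Q_vf = nn.Linear(edge_dim, l * k)
        self.M_fv = nn.Sequential(nn.Linear(fac_dim + var_dim, k), nn.ReLU())
        self.Q_fv = nn.Linear(edge_dim, l * k)
        self.k, self.l = k, l

    def forward(self, f, g, t, fac_idx, var_idx):
        # f: (n_var, var_dim), g: (n_fac, fac_dim), t: (n_edge, edge_dim);
        # edge e joins factor fac_idx[e] and variable var_idx[e].
        pair = torch.cat([g[fac_idx], f[var_idx]], dim=-1)
        # Variable-to-Factor: g~_c = AGG_i Q(t_ci) M([g_c, f_i])
        W = self.Q_vf(t).view(-1, self.l, self.k)
        msg = torch.bmm(W, self.M_vf(pair).unsqueeze(-1)).squeeze(-1)
        g_new = scatter_sum(msg, fac_idx, g.size(0))
        # Factor-to-Variable: f~_i = AGG_c Q(t_ci) M([g_c, f_i])
        W = self.Q_fv(t).view(-1, self.l, self.k)
        msg = torch.bmm(W, self.M_fv(pair).unsqueeze(-1)).squeeze(-1)
        f_new = scatter_sum(msg, var_idx, f.size(0))
        return f_new, g_new
```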
### FGNN for Max-Product Belief Propagation
We motivated the construction of FGNN based on the Low-rank Sum-Product LBP message updates. In this section, we prove that another widely used approximate inference algorithm, Max-Product Belief Propagation, can be exactly parameterized by the FGNN using _maximization_ as the **AGG** operator in Algorithm 1. For convenience of description, we use a log-linear formulation of PGMs.
Let \(\mathbf{x}\) be the set of all variables and let \(\mathbf{x}_{c}\) be the subset of variables that factor \(\phi_{c}\) depends on. Denote the set of indices of variables in \(\mathbf{x}_{c}\) by \(s(c)\). Then, an equivalent formulation of PGMs in equation (2) is as follows
\[p(\mathbf{x})=\frac{1}{Z}\exp\bigg{[}\sum_{c\in\mathcal{C}}\theta_{c}(\mathbf{ x}_{c})+\sum_{i\in\mathcal{V}}\theta_{i}(x_{i})\bigg{]}, \tag{24}\]
where \(\exp(\theta_{c}(\mathbf{x}_{c}))=\phi_{c}(\mathbf{x}_{c})\) and \(\exp(\theta_{i}(x_{i}))=\phi_{i}(x_{i})\) are non-negative factor potential functions (with \(\theta_{c}(\cdot)\), \(\theta_{i}(\cdot)\) as the corresponding log-potentials) and \(Z\) is a normalizing constant. Using the log-linear formulation, the MAP inference problem in (4) can be reformulated as
\[\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}}\sum_{c\in\mathcal{C}} \theta_{c}(\mathbf{x}_{c})+\sum_{i\in\mathcal{V}}\theta_{i}(x_{i}), \tag{25}\]
and the Max-Product loopy belief propagation in Equations (8), (9) and (10) can be reformulated as
\[n_{i\to c}(x_{i})= \theta_{i}(x_{i})+\sum_{d:d\neq c,i\in s(d)}m_{d\to i}(x_{i}), \tag{26a}\] \[m_{c\to i}(x_{i})= \max_{\mathbf{x}_{c}\setminus\{x_{i}\}}\left[\theta_{c}(\mathbf{ x}_{c})+\sum_{j\in s(c),j\neq i}n_{j\to c}(x_{j})\right],\] (26b) \[b_{i}(x_{i})= \theta_{i}(x_{i})+\sum_{c:i\in s(c)}m_{c\to i}(x_{i}) \tag{26c}\]
We prove that Max-Product Belief Propagation can be exactly parameterized by the FGNN. The proofs of the propositions and lemmas are provided in the appendix. The sketch of the proof is as follows. First, instead of parameterizing higher-order potentials as a sum of rank-1 tensors, we show that higher-order potentials can also be decomposed as a maximization over a set of rank-1 tensors, and that this decomposition can be represented by a FGNN layer. After the decomposition, a single Max-Product iteration only requires two operations: (1) maximization over rows or columns of a matrix, and (2) summation over a group of features. We show that these two operations can be exactly parameterized by the FGNN and that \(k\) Max-Product iterations can be simulated using \(\mathcal{O}(k)\) FGNN layers.
Earlier, we represented the higher-order potential functions as a sum of rank-1 tensors, which is suitable for Sum-Product type inference algorithms. For Max-Product type algorithms, however, a decomposition as a maximum of a finite number of rank-1 tensors is more appropriate. It has been shown that a finite decomposition of this type always exists (Kohli and Kumar, 2010).
**Lemma 1** ((Kohli and Kumar, 2010)).: _Given an arbitrary potential function \(\phi_{c}(\mathbf{x}_{c})\), there exists a variable \(z_{c}\in\mathcal{Z}_{c}=\{1,2,\ldots,Z_{c}\}\) with \(Z_{c}<\infty\) and a set of univariate potentials \(\{\phi_{ic}(x_{i},z_{c})|i\in c\}\), s.t._
\[\log\phi_{c}(\mathbf{x}_{c})=\log\max_{z_{c}\in\mathcal{Z}_{c}}\prod_{i\in s (c)}\phi_{ic}(x_{i},z_{c})=\max_{z_{c}\in\mathcal{Z}_{c}}\sum_{i\in s(c)}\varphi _{ic}(x_{i},z_{c}), \tag{27}\]
_where \(\varphi_{ic}(x_{i},z_{c})=\log\phi_{ic}(x_{i},z_{c})\)._
In Lemma 1, a higher-order potential is formulated as a maximization over a set of rank-one tensors. Note that in the worst case the size of the set \(\mathcal{Z}_{c}\), denoted by \(Z_{c}\), grows exponentially with the order of the factor, but in practice higher-order potentials with low-rank properties can often be decomposed as in (27) with relatively small \(Z_{c}\). Using ideas from (Kohli and Kumar, 2010), we show that a PGM can be converted into a single-layer FGNN with the non-unary potentials represented as a finite number of rank-1 tensors.
**Proposition 2**.: _A factor graph \(\mathcal{G}=(\mathcal{V},\mathcal{C},\mathcal{E})\) with variable log-potentials \(\theta_{i}(x_{i})\) and factor log-potentials \(\varphi_{c}(\mathbf{x}_{c})\) can be converted to a factor graph \(\mathcal{G}^{\prime}\) with the same variable potentials and the decomposed log-potentials \(\varphi_{ic}(x_{i},z_{c})\) using a one-layer FGNN._
With the decomposed higher-order potential, one iteration of the Max-Product updates (26) can be rewritten using the following three steps¹:

Footnote 1: Detailed derivations are provided in the appendix.
\[b_{c\to i}(z_{c}) \leftarrow\sum_{j\in s(c),j\neq i}\max_{x_{j}}\left[\varphi_{jc}(x_{j},z_{c})-m_{c\to j}(x_{j})+b_{j}(x_{j})\right], \tag{28a}\] \[m_{c\to i}(x_{i}) \leftarrow\max_{z_{c}}\left[b_{c\to i}(z_{c})+\varphi_{ic}(x_{i},z_{c})\right],\] (28b) \[b_{i}(x_{i}) \leftarrow\theta_{i}(x_{i})+\sum_{c:i\in s(c)}m_{c\to i}(x_{i}) \tag{28c}\]
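The three steps in (28a)-(28c) can be prototyped directly; below is a minimal NumPy sketch of one iteration over the decomposed potentials (all names and the dictionary-based graph representation are illustrative assumptions).

```python
import numpy as np

def max_product_step(theta, phi, m, factors):
    """One iteration of the decomposed updates (28a)-(28c).
    theta[i]    : unary log-potential, shape (n_states,)
    phi[(c, i)] : decomposed log-potential phi_ic, shape (n_states, Z_c)
    m[(c, i)]   : current message m_{c->i}, shape (n_states,)
    factors[c]  : list of variable indices s(c)
    """
    # Eq. (28c): beliefs from the current messages
    b = {i: theta[i] + sum(m[(c, i)] for c, s in factors.items() if i in s)
         for i in theta}
    new_m = {}
    for c, scope in factors.items():
        for i in scope:
            Z = phi[(c, i)].shape[1]
            # Eq. (28a): aggregate max-marginals over the other variables
            b_ci = sum((np.max(phi[(c, j)] - m[(c, j)][:, None]
                               + b[j][:, None], axis=0)
                        for j in scope if j != i), np.zeros(Z))
            # Eq. (28b): map back to a message over x_i
            new_m[(c, i)] = np.max(b_ci[None, :] + phi[(c, i)], axis=1)
    return new_m, b
```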
Given the log potentials represented as a set of rank-1 tensors at each factor node, we show that each iteration of the Max-Product message passing update can be represented by a Variable-to-Factor (VF) layer and a Factor-to-Variable (FV) layer, forming a FGNN layer, followed by a linear layer (that can be absorbed into the VF layer for the next iteration).
With decomposed log-potentials, Max-Product belief propagation mainly requires two operations: (1) maximization over rows or columns of a matrix; (2) summation over a group of features. We first show that the maximization operation in (28a) and (28b) (producing max-marginals) can be done using neural networks that can be implemented by the \(\mathcal{M}\) units in the VF layer.
**Proposition 3**.: _For an arbitrary feature matrix \(\mathbf{X}\in\mathbb{R}^{k\times l}\) with \(x_{ij}\) as its entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, the feature mapping operation \(\hat{\mathbf{x}}=\left[\max_{j}x_{ij}\right]_{i=1}^{k}\) can be exactly parameterized with a \(2\log_{2}l\)-layer neural network with at most \(\mathcal{O}(l^{2}\log_{2}l)\) parameters._
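The standard construction behind such results builds the maximum out of ReLUs via \(\max(a,b)=b+\mathrm{ReLU}(a-b)\) and a balanced tournament over the \(l\) entries; the sketch below illustrates the idea in NumPy (the helper names are ours).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pairwise_max(a, b):
    # max(a, b) = b + ReLU(a - b): one linear layer, one ReLU, one linear layer
    return b + relu(a - b)

def tournament_max(values):
    """Maximum of l values via ~log2(l) rounds of pairwise maxima,
    mirroring the depth bound in Proposition 3."""
    x = list(values)
    while len(x) > 1:
        if len(x) % 2:           # carry an odd element into the next round
            x.append(x[-1])
        x = [pairwise_max(x[2 * i], x[2 * i + 1]) for i in range(len(x) // 2)]
    return x[0]

print(tournament_max([3.0, -1.0, 7.5, 2.0]))   # 7.5
```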
Following the maximization operations, Eq. (28a) requires summation of a group of features. However, the VF layer uses max instead of sum operators to aggregate features. Assuming that the \(\mathcal{M}\) operator has performed the maximization component of equation (28a), producing max-marginals, Proposition 4 shows how the \(\mathcal{Q}\) layer can be used to produce a matrix \(\mathbf{W}\) that converts the max-marginals into an intermediate form to be used with the max aggregators. The output of the max aggregators can then be transformed with a linear layer (\(\mathbf{Q}\) in Proposition 4) to complete the computation of the summation operation required in equation (28a). Hence, equation (28a) can be implemented using the VF layer together with a linear layer that can be absorbed into the \(\mathcal{M}\) operator of the following FV layer.
**Proposition 4**.: _For arbitrary non-negative valued feature matrix \(\mathbf{X}\in\mathbb{R}_{\geq 0}^{k\times l}\) with \(x_{ij}\) as its entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, there exists a constant tensor \(\mathbf{W}\in\mathbb{R}^{k\times l\times kl}\) that can be used to transform \(\mathbf{X}\) into an intermediate representation \(y_{ir}=\sum_{ij}x_{ij}w_{ijr}\), such that after maximization operations are done to obtain \(\hat{y}_{r}=\max_{i}y_{ir}\), we can use another constant matrix \(\mathbf{Q}\in\mathbb{R}^{l\times kl}\) to obtain \([\sum_{i}x_{ij}]_{j=1}^{l}=\mathbf{Q}[\hat{y}_{r}]_{r=1}^{kl}\)._
Eq. (28b) and (28c) can be implemented in the same way as (28a) by the FV layer. First the max operations are done by the \(\mathcal{M}\) units to obtain max-marginals. The max-marginals are then transformed into an intermediate form using the \(\mathcal{Q}\) units which are further transformed by the max aggregators. An additional linear layer is then sufficient to complete the summation operation required in (28c). The final linear layer can be absorbed into the next FGNN layer, or as an additional linear layer in the network in the case of the final Max-Product iteration.
Using the above two propositions, we can implement all the important operations in (28). First, by Proposition 3, we can construct the Variable-to-Factor module as stated in the following proposition.
**Proposition 5**.: _The operation in (28a) can be parameterized by one MPNN layer with \(\mathcal{O}(|X|\max_{c\in\mathcal{C}}|\operatorname{\mathcal{Z}}_{c}|)\) parameters followed by a \(\mathcal{O}(\log_{2}|X|)\)-layer neural network with at most \(\mathcal{O}(|X|^{2}\log_{2}|X|)\) hidden units._
Meanwhile, by Propositions 3 and 4, the Factor-to-Variable module can be constructed as stated in the following proposition.
**Proposition 6**.: _The operation in (28c) can be parameterized by one MPNN layer, where the \(\mathcal{Q}\) network is identity mapping and the \(\mathcal{M}\) network consists of a \(\mathcal{O}(\max_{c\in\mathcal{C}}|\operatorname{\mathcal{Z}}_{c}|)\)-layer neural network with at most \(\mathcal{O}(\max_{c\in\mathcal{C}}|\operatorname{\mathcal{Z}}_{c}|^{2}\log_{ 2}|\operatorname{\mathcal{Z}}_{c}|)\) parameters and a linear layer with \(\mathcal{O}(\max_{c\in\mathcal{C}}|c|^{2}|X|^{2})\) parameters._
Using the above two propositions, we arrive at the main result of this section.
**Corollary 7**.: _The Max-Product algorithm in (26) can be exactly parameterized by the FGNN, where the number of parameters are polynomial w.r.t \(|X|\), \(\max_{c\in\mathcal{C}}|\operatorname{\mathcal{Z}}_{c}|\) and \(\max_{c\in\mathcal{C}}|c|\)._
## 5 Experiments
In this section, we evaluate our FGNN on a diverse set of tasks. First, we evaluate FGNN's performance on graphical model inference. We create synthetic PGMs in both low-order and higher-order settings and compare FGNN with other PGM inference algorithms. We also conduct experiments on the low-density parity check (LDPC) decoding task, a typical PGM inference task.

Next, we study how FGNN performs on real-world data compared with other state-of-the-art models. For this, we evaluate FGNN on three real-world problems where capturing higher-order dependencies is likely useful. We report experiments on the graph matching problem formulated as a PGM inference problem. We also perform experiments on handwritten character recognition within words to demonstrate that FGNN is able to exploit sequence information. To validate the effectiveness of FGNN in capturing higher-order information from more general graph-structured data, we report results on molecular datasets and compare with other state-of-the-art \(k\)-order GNNs that also capture higher-order information. Finally, we show that FGNN is well suited for modeling the human motion prediction task.
### MAP Inference over Synthetic PGMs
We first evaluate FGNN on synthetically constructed graphical model inference tasks. As FGNN is based on inference algorithms, we test whether FGNN is able to outperform other prominent solvers on these inference problems across multiple datasets. We first describe experiments on chain-structured graphical models and then provide further results on other MRF structures.
**Data** We construct three synthetic datasets (D1, D2 and D3) for this experiment. All models start with a length-30 chain structure with binary-state nodes, with node potentials randomly sampled from the uniform distribution over the interval \([0,1]\), \(\mathcal{U}[0,1]\), and pairwise potentials that encourage two adjacent nodes to take state 1, _i.e._, they give a high value to the configuration \((1,1)\) and low values to the other configurations. In D1, the pairwise potentials are fixed, while in the others they are randomly generated. For D1, D2, and D3, a budget higher-order potential (Martins et al., 2015) is added at every node; these potentials allow at most \(k\) of the 8 variables within their scope to take state 1. Specifically, we set \(k=5\) in D1 and D2 and set \(k\) randomly in D3.
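For clarity, a budget higher-order potential of this kind can be written as a table over the factor's scope; below is a small illustrative NumPy sketch (the function name is ours, and the enumeration is exponential in the scope size, so it is only meant for small factors).

```python
import numpy as np
from itertools import product

def budget_log_potential(scope_size, k):
    """Log-potential table of a budget factor over `scope_size` binary
    variables: configurations with more than k ones are forbidden."""
    table = np.zeros((2,) * scope_size)
    for cfg in product([0, 1], repeat=scope_size):
        if sum(cfg) > k:
            table[cfg] = -np.inf
    return table

print(budget_log_potential(3, 1)[1, 1, 0])   # -inf: two variables on, budget 1
```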
In this paper, we use the simplest, but possibly most flexible, method of defining factors in FGNN: we condition the factors on the input features. Specifically, for the problems in this section, all parameters that are not fixed are provided as input factor features. We test the ability of the proposed model to find the MAP solutions and compare the results with a well known graph neural network, MPNN (Gilmer et al., 2017), as well as several MAP inference solvers, namely AD3 (Martins et al., 2015), which solves a linear programming relaxation using subgradient methods, Max-Product Belief Propagation (Weiss and Freeman, 2001), implemented by (Mooij, 2010), and a convergent version of Max-Product, MPLP (Globerson and Jaakkola, 2008), also based on a linear programming relaxation. The approximate inference algorithms are run with the correct models, while the graph neural network models use learned models, trained with exact MAP solutions generated by a branch-and-bound solver that uses AD3 for bounding (Martins et al., 2015).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & AD3 & Max-Product & MPLP & MPNN & Ours \\ \hline D1 & 80.7\(\pm\)0.0014 (5 / 5) & 57.3\(\pm\)0.0020 (6) & 65.8\(\pm\)0.0071 (57) & 71.9\(\pm\)0.0009 (131) & **92.5\(\pm\)**0.0012 (144) \\ D2 & 83.8\(\pm\)0.0014 (532 / 325) & 50.5\(\pm\)0.0053 (1228) & 68.5\(\pm\)0.0074 (55) & 74.3\(\pm\)0.0009 (131) & **89.1\(\pm\)**0.0010 (341) \\ D3 & 88.1\(\pm\)0.0006 (91092 / 1059) & 53.5\(\pm\)0.0081 (4041) & 64.2\(\pm\)0.0056 (55) & 82.1\(\pm\)0.0008 (121) & **93.2\(\pm\)**0.0006 (382) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results (percentage agreement with MAP and standard error) on synthetic datasets with runtime in microseconds in bracket (exact followed by approximate inference runtime for AD3). A brief description of the three datasets is as follows. D1: random unary potentials + fixed parameter in pairwise and higher order potentials; D2: random parameter in unary and pairwise potentials + fixed parameter in higher-order potentials; D3: random parameter in all potentials.
**Architecture and training details** In this task, we use a factor graph neural network consisting of 8 FGNN layers (see the details in the Appendix). The model is implemented in pytorch (Paszke et al., 2017) and trained with the Adam optimizer (Kingma and Ba, 2014) with initial learning rate \(\mathrm{lr}=3\times 10^{-3}\). After each epoch, \(\mathrm{lr}\) is decreased by multiplying it by a factor of 0.98. All the models in Table 1 were trained for 50 epochs, after which all of them converged.
**Results** The percentage of agreement with the MAP solutions is shown in Table 1. Our model achieves far better results on D1, D2, and D3 than all the others. D4 consists of chain models, where Max-Product works optimally (see Footnote 3). The linear programming relaxations also perform well there. In this case, our method is able to learn a near-optimal inference algorithm for the chain case as well.
Footnote 3: An additional experiment on trees, where Max-Product also works optimally, can be found in the appendix, along with details on all experiments.
Traditional methods, including Max-Product and MPLP, perform poorly on D1, D2 and D3. On these datasets, even though FGNN can emulate the traditional Max-Product, it is better to learn a different inference algorithm. AD3 performs better than the other baselines, but still worse than our FGNN. The accuracy of FGNN is noticeably higher than that of MPNN, as MPNN does not use the higher-order structural priors that are captured by FGNN.
#### 5.1.1 Ablation studies
In order to study the behaviour of the FGNN model under varying inputs, we conducted the following additional experiments as part of the ablation study using synthetic data.
**Effect of wrong graph structures** First, we performed a small ablation by modifying the graph structure given as input to FGNN, using datasets D1 and D2. Originally, D1 and D2 provide chain-structured PGMs with a budget higher-order factor formed over every 8 neighboring variables. In this experiment, we alter the graph structure by using 4 or 6 variables to form each higher-order factor instead of the correct 8. On D1, the accuracies are 81.7 and 89.9 when 4 and 6 variables are used in place of the correct 8; on D2, the accuracies are 50.7 and 88.9, respectively. In both cases, the highest accuracies are achieved when the sizes of the HOPs are set correctly.
**Generalization to novel graph structures** To further evaluate the generalization of FGNN, we conducted an additional experiment: we train the FGNN on fixed length-30 MRFs using the same protocol as D3 and test it on 60,000 randomly generated chain MRFs whose lengths range from 15 to 45 (the potentials are generated using the same protocol as D3, where all unary, pairwise and higher-order potentials have random parameters). The results in Table 2 show that a model trained on fixed-size MRFs generalizes to MRFs with different graph structures.
**Effect on low-order PGMs such as chains and trees** In addition to the higher-order PGMs above, we conducted an additional experiment on chain- and tree-structured PGMs. For chain-structured PGMs, we use the same protocol as D3 to generate training and testing data, but with the higher-order factors removed. For the tree-structured dataset, the training set includes 90,000 different PGMs, randomly generated as binary trees whose depths are between 3 and 6. Each node is associated with a random variable \(x_{i}\in\{0,1\}\) along with a log-potential \(\theta_{i}(x_{i})\) randomly sampled from the Gaussian distribution \(\mathcal{N}(0,1)\). Each edge \((i,j)\) in a tree is associated with a pairwise log-potential \(\theta_{ij}(x_{i},x_{j})\), also randomly sampled from \(\mathcal{N}(0,1)\). There are also 10,000 testing instances generated in the same way as the training set. The experimental results are shown in Table 3.
For chain PGMs, our algorithm achieves results comparable to Max-Product, which is known to be optimal on chain PGMs, and outperforms the pairwise MPNN. For tree-structured PGMs, it is not as easy to shrink the pairwise features into the nodes as an adaptation for MPNN as in the chain case of Section 5.1, so we omit the MPNN experiment. Still, our Factor Graph Neural Network achieves good performance even when compared with Max-Product, which is optimal on tree PGMs, and with the linear programming relaxations.
### LDPC Decoding (MAP or Marginal Inference)
Low-density parity check (LDPC) codes are widely used in wired and wireless communication, where decoding can be done by Sum-Product or Max-Product belief propagation (Zarkeshvari and Banihashemi, 2002).
**Data** Let \(\mathbf{x}\) be the 48-bit original signal and \(\mathbf{y}\) the 96-bit LDPC-encoded signal under the encoding scheme "96.3.963" (MacKay, 2009). A noisy signal \(\tilde{\mathbf{y}}\) is obtained by transmitting \(\mathbf{y}\) through a channel with white Gaussian and burst noise; that is, for each bit \(i\), \(\tilde{y}_{i}=y_{i}+n_{i}+p_{i}z_{i}\), where \(n_{i}\sim\mathcal{N}(0,\sigma^{2})\), \(z_{i}\sim\mathcal{N}(0,\sigma_{b}^{2})\), and \(p_{i}\) is a Bernoulli random variable _s.t._ \(P(p_{i}=1)=\eta;\ P(p_{i}=0)=1-\eta\). In the experiment, we set \(\eta=0.05\) following (Kim et al., 2018) to simulate unexpected burst noise during transmission. By tuning \(\sigma\), we obtain signals with different \(\text{SNR}_{dB}=20\log_{10}(1/\sigma)\).
In the experiment, for all learning-based methods, we generate \(\tilde{\mathbf{y}}\) from randomly sampled \(\mathbf{x}\) on the fly with \(\text{SNR}_{dB}\in\{0,1,2,3,4\}\) and \(\sigma_{b}\in\{0,1,2,3,4,5\}\). For each learning-based method, \(10^{8}\) samples are generated for training. Meanwhile, for each different \(\text{SNR}_{dB}\) and \(\sigma_{b}\), 1000 samples are generated for validating the performance of the trained model.
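The channel model above is straightforward to simulate; the following NumPy sketch (with an illustrative function name) generates the noisy signal \(\tilde{\mathbf{y}}\) from \(\mathbf{y}\) under the stated Gaussian-plus-burst model.

```python
import numpy as np

def noisy_channel(y, snr_db, sigma_b, eta=0.05, seed=None):
    """y_i + n_i + p_i * z_i with n ~ N(0, sigma^2), z ~ N(0, sigma_b^2)
    and p ~ Bernoulli(eta), where SNR_dB = 20 log10(1 / sigma)."""
    rng = np.random.default_rng(seed)
    sigma = 10.0 ** (-snr_db / 20.0)
    n = rng.normal(0.0, sigma, size=y.shape)
    z = rng.normal(0.0, sigma_b, size=y.shape)
    p = rng.random(y.shape) < eta
    return y + n + p * z
```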
\begin{table}
\begin{tabular}{c c c} \hline Chain length & AD3 & FGNN \\ \hline (15, 25) & 88.95 & 94.31 \\ (25, 35) & 88.18 & 93.64 \\ (35, 45) & 87.98 & 91.50 \\ \hline \end{tabular}
\end{table}
Table 2: Generalization ability (measured by percentage agreement with MAP) of algorithms on PGMs with different graph structures. We train our FGNN model on the training set of D3 where structure of PGMs is fixed and test it on higher-order PGMs with different chain-length.
\begin{table}
\begin{tabular}{c c c c c c} \hline & AD3 & Max-Product & MPLP & MPNN & FGNN \\ \hline Chain & 100 & 100 & 99.9 & 91.2 & 98.0 \\ Tree & 100.0 & 100.0 & 99.97 & – & 98.35 \\ \hline \end{tabular}
\end{table}
Table 3: Results (percentage agreement with MAP) on chain and tree structured PGMs.
In LDPC decoding, the \(\text{SNR}_{dB}\) is usually assumed to be known and fixed, while the burst noise is often unexpected and its parameters are unknown to the decoder. Thus, for both the learning-based methods and the traditional LDPC decoders, the noisy signal \(\tilde{\mathbf{y}}\) and the \(\text{SNR}_{dB}\) are provided as input. In our experiments, since LDPC decoding can be done by both Max-Product and Sum-Product, we train our FGNN with both max and sum aggregation functions (see FGNN-Max and FGNN-Sum in Figure 5). The FGNN is compared with baselines including two implementations of Sum-Product (MacKay-Sum (MacKay, 2009) and Commpy-Sum (Taranalli, 2020)), the Max-Product decoder from Commpy (Commpy-Max (Taranalli, 2020)), and a bit-decoding baseline.
**Architecture and training details** In this task, we use a factor graph neural network consisting of 8 FGNN layers (the details are provided in the Appendix). The model is implemented in pytorch (Paszke et al., 2017) and trained with the Adam optimizer (Kingma and Ba, 2014) with initial learning rate \(\mathrm{lr}=1\times 10^{-2}\). After every 10000 samples, \(\mathrm{lr}\) is decreased by multiplying it by a factor of 0.98. After training on \(10^{8}\) samples, the training loss converges.
**Results** We compare FGNN with three publicly available LDPC decoders: MacKay-Sum (MacKay, 2009), Commpy-Sum (Taranalli, 2020) and Commpy-Max (Taranalli, 2020). The first two decoders use Sum-Product belief propagation to propagate information between higher-order factors and nodes, but with different belief clipping strategies and different belief propagation schedulers. The third decoder uses Max-Product as its inference algorithm. Meanwhile, our FGNN uses a learned factor-variable information propagation scheme, and the other learning-based method, MPNN, ignores the higher-order dependencies. The decoding accuracy is shown in Figure 5. The Sum-Product based methods (MacKay-Sum and Commpy-Sum) are known to be near optimal for Gaussian noise; however, Sum-Product is sensitive to its hyper-parameters. Due to different hyper-parameters, MacKay-Sum performs best when the burst noise level is low, while Commpy-Sum performs better than MacKay-Sum at high burst noise levels. Our FGNN (both FGNN-Max and FGNN-Sum) always performs better than Commpy-Sum and Commpy-Max; it achieves comparable but slightly lower performance than MacKay-Sum at low burst noise levels (\(\sigma_{b}\in[0,2]\)) and outperforms all other methods at high burst noise levels (\(\sigma_{b}\in[3,5]\)). The results with the "sum" aggregation function are slightly better than those with the "max" aggregation function.

Figure 5: Experimental results (Bit Error Rate, BER vs. Signal-to-Noise Ratio, SNR) on LDPC decoding.
### The Graph Matching Problem (MAP Inference)
Graph matching is a fundamental problem in its own right and a key step in various computer vision tasks, including image registration, tracking and motion analysis. Traditionally, graph matching problems are modelled as Quadratic Assignment Problems (QAPs), and they can also be viewed as MAP inference problems over factor graphs (Zhang et al., 2016; Swoboda et al., 2017). These problems can be solved by using belief propagation in PGMs (Zhang et al., 2016; Swoboda et al., 2017) or by using graph neural networks (Wang et al., 2019; Zhang and Lee, 2019). In this section, we apply our FGNN to graph matching problems and compare with both traditional methods and recent graph neural network based approaches.
**The Problem and Traditional Model** Let \(\mathcal{P}=\{\mathbf{p}_{i}\,|\,i\in[n]\}\) and \(\mathcal{Q}=\{\mathbf{q}_{i}\,|\,i\in[n]\}\) be two sets of feature points, where \([n]\) denotes the set \(\{1,2,\ldots,n\}\). The graph matching problem can be formulated as the following MAP inference problem
\[\operatorname*{argmax}_{\mathbf{X}}\sum_{i,j=1}^{n}x_{ij}S_{\text{n}}( \mathbf{p}_{i},\mathbf{p}_{j})+\sum_{i,j,k,l=1}^{n}x_{ik}x_{jl}S_{\text{e}}( \mathbf{e}_{ij}^{\mathcal{P}},\mathbf{e}_{kl}^{\mathcal{Q}})+\phi(\mathbf{X}), \tag{29}\]
where \(S_{\text{n}}\) and \(S_{\text{e}}\) are user-specified similarity functions of features, and \(\mathbf{e}_{ij}^{\mathcal{P}}\) and \(\mathbf{e}_{kl}^{\mathcal{Q}}\) are edge features extracted or learned from \(\mathcal{P}\) and \(\mathcal{Q}\), respectively. The variable \(\mathbf{X}\) is a \(n\times n\) matrix with \(x_{ij}\) as its entry at \(i^{\text{th}}\) row and \(j^{\text{th}}\) column. The higher-order log-potential \(\phi(\mathbf{X})\) is used to enforce the one-to-one constraints as follows,
\[\phi(\mathbf{X})=\begin{cases}0,&\sum_{i=1}^{n}x_{ij}=1,\sum_{j=1}^{n}x_{ij}= 1,x_{ij}\geqslant 0\\ -\infty,&\text{otherwise}.\end{cases} \tag{30}\]
Traditionally, both the features and the similarity functions are handcrafted (Wang et al., 2018; Swoboda et al., 2017; Liu et al., 2014; Zhou and De la Torre, 2012; Zhang et al., 2016). Recently, many approaches have been proposed (Wang et al., 2021; Xu et al., 2021; Rolinek et al., 2020; Fey et al., 2020; Wang et al., 2020; Yu et al., 2019; Wang et al., 2019) to replace the handcrafted features with learned features, with very promising results.
**The FGNN Model** In the traditional formulation of the graph matching problem (29), there are \(n^{2}\) binary variables. We instead use a more compact but equivalent formulation with \(2n\) variables of \(n\) states each. We formulate the graph matching problem as
\[\operatorname*{argmax}_{\mathbf{y},\mathbf{z}}=\sum_{i=1}^{n}\theta_{i}(y_{i}| \,\mathcal{P},\mathcal{Q})+\sum_{j=1}^{n}\vartheta_{j}(z_{j}|\,\mathcal{P}, \mathcal{Q})+\varphi(\mathbf{y},\mathbf{z}\,|\,\mathcal{P},\mathcal{Q}), \tag{31}\]
where \(y_{i}=j\) indicates that the \(i^{\text{th}}\) node in the source graph corresponds to the \(j^{\text{th}}\) node in the target graph. Similarly, \(z_{j}=i\) indicates that the \(j^{\text{th}}\) node in the target graph corresponds to the \(i^{\text{th}}\) node in the source graph. The higher-order term enforces consistency between \(\mathbf{y}\) and \(\mathbf{z}\) and can absorb the pairwise terms in (29). As a result, we can extend the graph neural network of Zhang and Lee (2019) to handle higher-order terms more efficiently using the proposed factor graph neural network, as shown in Figure 6, where each node in the source graph corresponds to a random variable \(y_{i}\) and each node in the target graph corresponds to a random variable \(z_{i}\). The pairwise message passing procedure inside the FGNN is only used to produce better node features, as in Zhang and Lee (2019). Without factors, the model can still work as a metric-learning based method, but the factors significantly improve the performance of the GNN model.
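As a small illustration of the compact encoding, an \(n\times n\) assignment matrix \(\mathbf{X}\) can be converted to the \((\mathbf{y},\mathbf{z})\) labels of (31) as follows (a sketch with illustrative names, assuming \(\mathbf{X}\) is a valid permutation matrix).

```python
import numpy as np

def compact_labels(X):
    """Convert an n x n assignment matrix X (x_ij = 1 iff source node i
    matches target node j) into the (y, z) encoding of Eq. (31)."""
    y = X.argmax(axis=1)   # y[i] = j : match of source node i
    z = X.argmax(axis=0)   # z[j] = i : match of target node j
    return y, z

X = np.eye(4)[[2, 0, 3, 1]]        # a permutation matrix
y, z = compact_labels(X)           # y = [2 0 3 1], z = [1 3 0 2]
```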
**Data** Following the experimental settings of (Wang et al., 2019; Yu et al., 2019; Fey et al., 2020; Rolinek et al., 2020; Xu et al., 2021), we use the Pascal VOC dataset (Everingham et al., 2010) with the keypoints annotated by Bourdev and Malik (2009) to evaluate the performance of handcrafted and learning-based graph matching algorithms. The dataset contains 20 classes of instances (objects) with manually labeled keypoint locations, and the instances vary in scale, view angle and illumination. For each instance, the number of inliers ranges from 6 to 23. We apply the same filtering and training/testing split as (Wang et al., 2019; Yu et al., 2019; Fey et al., 2020; Rolinek et al., 2020; Xu et al., 2021), where 7020 annotated images are used for training and 1682 for testing.
**Architecture and training details** In our experiments, we use the same training protocol as (Wang et al., 2019; Yu et al., 2019; Fey et al., 2020; Rolinek et al., 2020; Xu et al., 2021), where the inputs to the models are two sets of key-point coordinates and two images. As in previous work, the two images are first fed to a VGG19 (Simonyan and Zisserman, 2014) network to produce visual features. Using bilinear interpolation, as in previous work (Fey et al., 2020), we obtain the visual feature vector of every node from the outputs of VGG. For each set of key-points, each key-point is connected to its \(k\)-nearest neighbors with \(k=6\). For each node, the geometric and visual features are concatenated to form the node feature. For each edge, the difference of the geometric node features of the two nodes connected by the edge serves as the edge feature. Furthermore, the initial factor features are generated as
\[\mathbf{f}=\max_{i\in f}\mathbf{v}_{i}, \tag{32}\]
where \(\mathbf{v}_{i}\) is the node feature associated with node \(i\), and max is the entrywise maximization. The node, edge and factor features are fed into a FGNN network composed of three blocks. Each block contains a pairwise message passing module as in (Zhang and Lee, 2019) and one factor message passing module as in Algorithm 1. The FGNN network, along with the VGG network, is trained with the Adam (Kingma and Ba, 2014) optimizer with learning rate \(10^{-4}\) (\(10^{-6}\) for the VGG part). Our algorithm is trained for 200 epochs, and in each epoch we randomly sample 16000 image pairs from the training set.
**Results** Our FGNN-based algorithm is compared with a number of traditional handcrafted graph matching algorithms, as well as several state-of-the-art learned graph matching algorithms. The results are shown in Table 4, where the results of the handcrafted graph matching methods are from Wang et al. (2020), and the results of the learning-based approaches are from their respective papers, except for MPNN (Zhang and Lee, 2019). For MPNN, the network is identical to ours but with the factor message passing module removed, and we train it using exactly the same protocol as ours.
Our algorithm outperforms the previous methods because our factor message passing module can handle higher-order information better. In particular, the performance of the previous pairwise-network-based state-of-the-art method NGMv2 (Wang et al., 2021) is improved by 0.3% by introducing higher-order terms to form NHGMv2 (Wang et al., 2021). Meanwhile, compared to our pairwise counterpart MPNN (Zhang and Lee, 2019), we obtain a performance improvement of 0.8%, and our average performance outperforms all previous methods.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c c c c c c c} \hline \hline Method & aero & bike & bird & boat & bot & bcl & bus & car & cat & chair & cow & table & dog & horse & mbk & prn & plant & abp & scfa & trn & tv & avg \\ \hline IPP (Leosdona et al., 2009) & 25.1 & 26.4 & 41.4 & 50.3 & 43.0 & 32.9 & 37.3 & 25.5 & 33.6 & 28.2 & 26.5 & 26.1 & 28.9 & 32.0 & 28.8 & 62.9 & 28.2 & 45.0 & 0.03 & 31.8 & 36.6 \\ BRWN (Cho et al., 2010) & 30.9 & 40.6 & 54.1 & 52.3 & 36.7 & 47.4 & 37.3 & 36.1 & 31.1 & 28.8 & 30.0 & 39.1 & 36.2 & 30.5 & 67.8 & 38.6 & 40.4 & 70.5 & 41.3 & 43.0 \\ PSM (Fegui et al., 2012) & 26.7 & 37.8 & 49.9 & 53.2 & 47.5 & 36.6 & 30.1 & 38.2 & 37.2 & 38.1 & 32.7 & 24.2 & 37.1 & 38.5 & 62.3 & 41.7 & 54.3 & 72.6 & 40.8 & 41.1 \\ GNCCP (Lin and Qua, 2013) & 28.9 & 37.1 & 46.2 & 53.1 & 48.0 & 36.3 & 45.5 & 34.7 & 36.2 & 34.2 & 25.3 & 23.3 & 29.6 & 47.6 & 61.9 & 37.4 & 50.5 & 67.0 & 41.8 & 41.6 \\ AHPP (Wang et al., 2018) & 39.0 & 40.1 & 47.3 & 54.5 & 50.8 & 31.6 & 46.7 & 34.3 & 40.9 & 39.6 & 16.3 & 34.8 & 32.9 & 30.3 & 61.2 & 37.9 & 60.2 & 70.5 & 41.3 & 42.7 \\ \hline GNN (Zadri and Sunliesso, 2018) & 31.9 & 47.2 & 51.9 & 40.8 & 68.7 & 72.2 & 53.6 & 52.6 & 34.6 & 48.6 & 72.3 & 47.7 & 54.8 & 51.0 & 38.6 & 75.1 & 49.5 & 45.0 & 81.0 & 86.3 & 55.3 \\ LCS (Wang et al., 2020) & 45.9 & 58.0 & 58.6 & 64.9 & 50.8 & 78.8 & 71.8 & 61.8 & 63.4 & 44.3 & 61.3 & 79.3 & 54.4 & 61.7 & 56.2 & 54.1 & 62.9 & 65.8 & 59.4 & 94.2 & 60.5 \\ RNN-HMM(Lin and Qua, 2021) & 28.6 & 55.7 & 50.7 & 76.4 & 58.7 & 56.2 & 76.4 & 54.0 & 50.0 & 78.8 & 51.2 & 53.8 & 60.2 & 25.8 & 60.5 & 58.7 & 68.4 & 92.2 & 69.5 & 68.0 \\ PCA-GM (Wang et al., 2019) & 49.5 & 55.0 & 68.8 & 47.9 & 76.9 & 77.9 & 63.5 & 65.4 & 37.7 & 65.5 & 63.6 & 61.3 & 63.9 & 62.8 & 44.9 & 77.5 & 67.4 & 57.5 & 86.7 & 90.9 & 63.8 \\ CLEH (Yu et al., 2019) & 51.2 & 69.2 & 70.1 & 55.0 & 28.8 & 69.0 & 72.2 & 49.6 & 68.8 & 71.8 & 70.8 & 71.8 & 66.8 & 44.8 & 52.9 & 69.3 & 65.4 & 52.4 & 69.5 \\ DGMC (Fegui et al., 2020) & 47.0 & 65.7 & 56.5 & 67.6 & 58.7 & 53.7 & 78.3 & 78.2 & 68.7 & 68.7 & 68.7 & 68.7 & 69.4 & 72.4 & 68.9 & 65.5 & 73.0 \\ BROM (Bridhar et al., 2020) & 61.5 & 75.0 & 78.1 & 80.0 & 54.7 & 50.1 & 58.1 & 77.6 & 76.5 & 78.3 & 78.6 & 77.8 & 67.2 & 67.4 & 76.4 & 75.7 & 79.4 & 80.1 \\ NGMG (Wang et al., 2021) & 61.8 & 71.2 & 77.6 & 78.8 & 34.6 & 87.7 & 58.0 & 78.1 & 58.4 & 77.5 & 80.5 & 78.1 & 79.2 & 66.7 & 77.7 & 75.4 & 79.5 & 79.2 & 80.1 \\ NHGMv2(Wang et al., 2021) & 39.9 & 71.5 & 77.2 & 79.0 & 87.7 & 94.6 & 59.0 & 81.8 & 60.0 & 81.3 & 87.0 & 78.1 & 76.5 & 77.5 & 64.4 & 98.7 & 77.8 & 75.4 & 97.9 & 92.8 & **80.4** \\ MPNN (Zhang and Lee, 2019) & 57.8 & 69.1 & 74.4 & 77.7 & 89.2 & 90.4 & 90.4 & 70.4 & 73.1 & 81.9 & 90.4 & 76.5 & 78.6 & 75.4 & 54.4 & 97.9 & 78.2 & 70.0 & 97.3 & 94.9 & 79.8 \\ Ours & 57.3 & 69.0 & 75.9 & 78.3 & 93.8 & 91.6 & 90.8 & 76.9 & 73.9 & 82.5 & 89.9 & 77.0 & 79.8 & 75.4 & 98.2 & 78.2 & 74.9 & 97.7 & 94.7 & **80.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Accuracy on the Pascal VOC Keypoint dataset. **Top**: Results of traditional hand-crafted solver. **Bottom:** Result from methods using learned feature. All the learning based approaches are using VGG19 (Simonyan and Zisserman, 2014) as backbone to extract visual feature, but the graph neural network architectures are different.
Figure 6: Factor Graph Neural Networks for graph matching problems. The pairwise message passing is used to produce better node features, so that without the factors the model can still work as a metric-learning based method. With the factors, the matching performance is significantly improved, as shown in Table 4.
### Handwritten character sequence recognition (Marginal Inference)
In this experiment, we explore how useful higher-order modeling with FGNN is for structured prediction tasks on sequence data. A sequence is one of the simplest graph structures, with nodes connected in a linear chain, and FGNN should be able to capture strong higher-order dependencies in such datasets. We therefore explore the effect of the order of the factors on the task of handwritten character recognition.
**Modeling** In a handwritten character sequence, the few characters adjacent to a node are likely to contain useful information for predicting that character. We consider one of the simplest higher-order models: a \(k\)-order factor for each node in the sequence. Formally, let \(x=[x_{1},\ldots,x_{|x|}]\) denote an input sequence of length \(|x|\), with each \(x_{i}\) associated with a label \(y_{i}\in\{a,b,\ldots,z\}\). If \(x_{i,k}=[x_{i-k},\ldots,x_{i+k}]\) is the order-\(k\) segment of \(x\) centered at \(x_{i}\) and \(y_{i,k}\) is the corresponding label segment, we define \(f_{i}(y_{i,k}|x_{i,k};\theta)\) as the \(k\)-order factor encoding the dependencies in \(y_{i,k}\). This gives a conditional random field with unnormalized probability \(P(y|x)=\prod_{i=1}^{|x|}f_{i}(y_{i,k}|x_{i,k};\theta)\), i.e., a factor \(f_{i}\) for every node \(i\) connecting its \(k\) adjacent nodes on each side in the sequence.
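The scopes of these order-\(k\) factors are simple to enumerate; the following sketch (with an illustrative helper name) lists the variable indices of the segment centered at each position, clipped at the sequence boundaries, which is one reasonable way to handle nodes near the ends.

```python
def order_k_scopes(n, k):
    """Variable indices of the order-k segment x_{i,k} centered at each
    position i of a length-n sequence, clipped at the boundaries."""
    return [list(range(max(0, i - k), min(n, i + k + 1))) for i in range(n)]

# e.g. order_k_scopes(6, 2)[0] == [0, 1, 2]
#      order_k_scopes(6, 2)[3] == [1, 2, 3, 4, 5]
```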
**Data** We study the properties of the FGNN network with the handwriting recognition data set from (Taskar et al., 2004), originally collected by (Kassel, 1995). The dataset consists of a subset of \(\sim 6100\) handwritten words with an average length of \(\sim 8\) characters. The words are segmented into characters, and each character is rasterized into an image of 16 by 8 binary pixels. The dataset is available in 10 folds, with each fold containing \(\sim 6000\) characters. The task is to recognise characters by leveraging their neighbourhood within the context of words. Since the words come from a small vocabulary, there is a strong higher-order correlation in the dataset. In our framework, depending on the order, each character node \(x_{i}\) can share a factor with other character nodes \(x_{j}\) within the same word. We follow a 10-fold cross-validation setup with 1 fold for training and 9 folds for testing, and report averaged results. We evaluate the performance of FGNN by varying the order and rank of the factors.

Figure 7: Factor graph model for handwritten character recognition.

Figure 8: Handwritten character recognition.
**Architecture and training details** We use 3 standard convolution layers and a fully connected layer as the base feature extractor to obtain the zeroth-order features of 512 dimensions. We then run 3 iterations of higher-order message passing followed by a classifier. We fix the rank of all factors at 1024 and share parameters between factors of the same order. We train for 50 epochs with a learning rate of 0.001.
**Results** In this experiment, we study the behaviour of the model in terms of accuracy and training time as the order and rank of the factors are varied. The results shown in Figure 8 suggest that FGNN is able to capture higher-order information. The model shows strong improvements as the maximum order of the factors is successively increased, before saturating at 4th order and above. To evaluate the efficiency of FGNN with higher-order factors, we analysed the computation time as the order of the factors is varied. Figure 8 shows that the training time per epoch grows almost linearly with the order of the factors.
**Effect of rank of factor parameters** One important hyperparameter of the FGNN model is the rank of the higher-order factors, given by the output dimension of the transformation matrix \(\mathbf{W}\). Since the effect of higher-order context is clear from the results, we analyse the sensitivity of the model performance to the rank of the higher-order factors. To evaluate the effect of varying rank, we fixed the zeroth-order feature dimension to 64 and used factors up to order 3. We then ran three message passing iterations while varying the rank of the factors from 64 to 2048. The results in Figure 9 show consistent improvements in performance with increasing rank, saturating at 1024 and above. Note that for characters with a class size of 26, a full-rank tensor representation of an order-3 potential function needs at least \(26^{3}=17576\) components. Clearly, \(1024\ll 26^{3}\), which shows that the underlying higher-order potential is well approximated by the low-rank representation.
Figure 9: Effect of the rank of the factor parameters for handwritten character recognition. The accuracy of the model increases with increasing rank and saturates after rank 1024, whereas a full-rank 3rd-order factor would need at least 17576 components for exact representation.
### Molecular data (Graph Regression)
In the following set of experiments, we use FGNN purely as a graph neural network operating on graph-structured data. Note that FGNN only provides the inductive bias of graphical model inference algorithms but is free to learn a richer algorithm suited to the task. With the molecular data, we show that FGNN can work purely as a graph neural network on variable-sized graphs and provides additional flexibility in modeling typed nodes and edges.
Molecular data has several properties needed for an effective study of higher-order representation learning models on graphs. A molecule can be represented as a graph with atoms as nodes and the bonds between atoms as edges. Higher-order information is present in the form of the valence constraints of atoms, which determine the number of bonds they can form. In addition, molecules often contain subgraphs, also called functional groups (_e.g._, OH, CHO, etc.), which are identifiable and significant in determining their functionality and geometry. Any relational learning model should be powerful enough to capture such higher-order structures and dependencies in order to learn highly discriminative representations. Furthermore, molecules come with varying numbers of nodes, and hence learning higher-order functions with shareable parameters is necessary. This makes FGNN suitable for statistical learning on such datasets. We now focus on molecular data to study the effectiveness of the proposed model and show its modeling flexibility in incorporating domain knowledge when constructing the factors.
**Modeling** In molecular data, there is an input graph with node and edge features. To use FGNN, we need to define a suitable factor graph model based on the graph structure. In FGNN, \(\mathcal{Q}(\mathbf{t}_{ci})\) is conditioned on edge features and hence separate for all variables connected to the factor. This gives much freedom in modeling to leverage domain knowledge in order to share some of the factor parameters and be able to work with large graphs with typed edges as well. Given a molecular graph, we discuss possible ways of constructing higher-order factors, conditioning and sharing of parameters.
One way to capture higher-order information is to add a factor for every node in the molecule, connecting that node (which we call the _central atom_ of that factor) and its neighbours to the factor. The weights of the factor potential can then be shared by conditioning on the following:
* **Central atom type (CAT):** Weights within the factor are shared but different factors share parameters only if they have the same central atom type.
* **Bond type (BT):** Weights are shared if the bond type between the central atom and its neighbour is same.
* **Central atom and bond type (CABT):** Weights are shared if both the central atom type and bond type are same.
* **Central atom, bond and neighbour atom type (CABTA):** Weights are shared if the bond type and the atom types of atoms sharing the bond are same.
Note that most molecular datasets have a small number of atom and bond types. This shows that the proposed message passing model is flexible and provides sufficient freedom in modeling; a sketch of one such weight-sharing scheme is given below.
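As an example of how such sharing might be implemented, the following PyTorch sketch keys a table of factor transformation matrices by (central atom type, bond type), i.e., the CABT scheme; the class and its interface are our own illustrative assumptions, and the other schemes differ only in the dictionary key.

```python
import torch
import torch.nn as nn

class SharedFactorWeights(nn.Module):
    """Factor transformation matrices shared under the CABT scheme:
    one (k -> l) linear map per (central atom type, bond type) pair.
    This is a sketch; the interface is an illustrative assumption."""

    def __init__(self, n_atom_types, n_bond_types, k, l):
        super().__init__()
        self.tables = nn.ModuleDict({
            f"{a}-{b}": nn.Linear(k, l, bias=False)
            for a in range(n_atom_types) for b in range(n_bond_types)
        })

    def forward(self, central_atom, bond_type, msg):
        # msg: (..., k) message; returns the transformed (..., l) message
        return self.tables[f"{central_atom}-{bond_type}"](msg)

# e.g. the message through a single-bonded neighbour of a type-1 central atom:
shared = SharedFactorWeights(n_atom_types=5, n_bond_types=4, k=64, l=64)
out = shared(central_atom=1, bond_type=0, msg=torch.randn(64))
```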
**Data** We evaluate our model on two large-scale quantum chemistry datasets, QM9 (Ruddigkeit et al., 2012; Ramakrishnan et al., 2014) and Alchemy (Chen et al., 2019), on the task of regression over 12 quantum-mechanical properties. QM9 is composed of 134K drug-like organic molecules with sizes varying from 4 to 29 atoms per molecule. Atoms belong to one of 5 types, Hydrogen (H), Carbon (C), Oxygen (O), Nitrogen (N) and Fluorine (F), and each molecule may contain up to 9 heavy (non-Hydrogen) atoms. Nodes come with discrete features along with their 3D positions. We follow the standard 80:10:10 random split for training, validation and testing.
Alchemy (Chen et al., 2019) is a recently released, more challenging dataset of 119K organic molecules comprising 7 atom types, H, C, N, O, F, S (Sulphur) and Cl (Chlorine), with up to 14 heavy atoms. These molecules are screened for being more likely to be useful for medicinal chemistry based on their functional groups and complexity. Furthermore, we follow the split based on molecule size, where almost all of the training set contains molecules with 9 or 10 heavy atoms, while the validation and test sets contain molecules with 11 or 12 heavy atoms. As quantum-mechanical properties depend on molecule size, this split tests how well the model generalizes to heavier molecules. The regression targets are the same as in the QM9 dataset.
**Architecture and training details** We run our message passing scheme on features initialized by an MPNN network, using the standard implementation provided by Pytorch-geometric (Fey and Lenssen, 2019). The MPNN implementation has Edge-conv (Gilmer et al., 2017) as the message function and a GRU (Chung et al., 2014) as the update function, followed by a Set2set (Vinyals et al., 2015) function as the readout for the whole graph. A readout function is required because the task is prediction on graphs: it takes in node features and outputs a vector that is used for the final regression. In our implementation, we run 3 iterations of the MPNN message passing scheme on the input graph, followed by 3 iterations of the higher-order FGNN message passing described in Algorithm 1. Learning node marginals is likely to be helpful here, as we want good node representations to be combined in the readout function. We use the max function as the aggregator in the VF module and summation as the aggregator in the FV module; we selected this combination because we found it useful while being numerically stable. We then combine the MPNN output from the third iteration and the FGNN output by concatenation, followed by the set2set readout function. For the FGNN module, we set the hidden vector dimension to 64 and the projection dimension of the VF module to 512. We use the Adam optimizer initialized with a learning rate of \(10^{-3}\). Since we want to show improvements over MPNN, all other hyperparameters are kept as provided by the Pytorch-geometric implementation of MPNN for a fair comparison. All targets are normalized and trained with the absolute error loss for 200 epochs with a batch size of 64.

Figure 10: Factor graph model for molecular graphs.
**Results** For the QM9 dataset, following (Maron et al., 2019), we report the Mean Absolute Error (MAE) in two settings: one where all targets are jointly trained and one where each target is trained separately. Factors are constructed as described above, with factor weights conditioned on the central atom and bond type (CABT).
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline \multirow{2}{*}{Target} & \multirow{2}{*}{MPNN*} & \multirow{2}{*}{FGNN} & \multirow{2}{*}{Gain(\%)} & \multicolumn{4}{c}{FGNN Network Ablation Models} \\ \cline{5-10} & & & & & CAT & BT & CABT & CABTA \\ \hline \(\mu\) & **0.1026** & 0.1041 & -1.41 & 0.1091 & 0.1041 & 0.1092 & 0.1233 \\ \(\alpha\) & 0.0557 & **0.0451** & 19.05 & 0.0473 & 0.0451 & **0.0446** & 0.0449 \\ \(\epsilon_{homo}\) & 0.1151 & **0.1004** & 12.74 & 0.1065 & 0.1004 & **0.1001** & 0.1007 \\ \(\epsilon_{lumo}\) & 0.0817 & **0.0664** & 18.74 & 0.0712 & **0.0664** & 0.0685 & 0.0696 \\ \(\Delta_{e}\) & 0.0832 & **0.0691** & 16.88 & 0.0739 & **0.0691** & 0.0703 & 0.0720 \\ \(\langle R^{2}\rangle\) & 0.0271 & **0.0099** & 63.47 & 0.0099 & 0.0099 & **0.0094** & 0.0120 \\ \(ZPVE\) & 0.0259 & **0.0115** & 55.42 & 0.0116 & 0.0115 & **0.0108** & 0.0140 \\ \(U_{0}\) & 0.0131 & **0.0044** & 65.80 & **0.0042** & 0.0044 & 0.0046 & 0.0054 \\ \(U\) & 0.0131 & **0.0044** & 65.90 & **0.0041** & 0.0044 & 0.0046 & 0.0054 \\ \(H\) & 0.0130 & **0.0044** & 65.77 & **0.0042** & 0.0044 & 0.0046 & 0.0054 \\ \(G\) & 0.0130 & **0.0044** & 65.77 & **0.0042** & 0.0044 & 0.0046 & 0.0054 \\ \(C_{v}\) & 0.0559 & **0.0481** & 13.93 & **0.0472** & 0.0481 & 0.0488 & 0.0502 \\ \hline MAE & 0.0499 & **0.0394** & 21.18 & 0.04115 & **0.0394** & 0.0400 & 0.0424 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Graph Regression results on Alchemy dataset. The backbone neural model of FGNN is MPNN, which is also shown for comparison with FGNN.
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline \multirow{2}{*}{Target} & \multirow{2}{*}{Units} & \multicolumn{4}{c}{Joint training of targets} & \multicolumn{4}{c}{Separate training of targets} \\ \cline{3-10} & & MPNN & 123-GNN & PPGNN & FGNN & Gain(\%) & 123-GNN & NestedGNN & PPGNN & FGNN & Gain(\%) \\ \hline \(\mu\) & D & 0.3580 & 0.4070 & 0.2310 & **0.0920** & 60.19 & 0.4760 & 0.4330 & 0.0934 & **0.0688** & 26.33 \\ \(\alpha\) & \(a_{0}^{3}\) & 0.8900 & 0.3340 & 0.3820 & **0.1830** & 45.20 & 0.2700 & 0.2650 & 0.3180 & **0.1403** & 47.05 \\ \(\epsilon_{homo}\) & meV & 147.21 & 2124 & 75.10 & **54.15** & 27.89 & 100.68 & 75.91 & 47.34 & **45.71** & 2.98 \\ \(\epsilon_{lumo}\) & meV & 169.52 & 2310.79 & 78.09 & **55.78** & 28.57 & 95.51 & 75.10 & 57.14 & **45.53** & 22.95 \\ \(\Delta_{e}\) & meV & 179.59 & 2976.65 & 110.47 & **75.91** & 31.28 & 130.61 & 106.13 & 78.91 & **65.30** & 15.51 \\ \(\langle R^{2}\rangle\) & \(a_{0}^{2}\) & 28.50 & 22.83 & 16.07 & **2.81** & 82.53 & 22.90 & 20.10 & 3.78 & **1.41** & 62.69 \\ \(ZPVE\) & meV & 587.77 & 307.217 & 17.41 & **4.898** & 71.87 & 5.170 & 4.08 & 10.85 & **2.72** & 33.33 \\ \(U_{0}\) & meV & 55783 & 224253 & 63367 & **2468** & 61.25 & 1161 & 5578 & 598 & **44.3** & 25.90 \\ \(U\) & meV & 54422 & 242453 & 6367 & **2468** & 61.23 & 3020 & 5442 & 1371 & **378** & 72.42 \\ \(H\) & meV & 54967 & 242453 & 6231 & **2405** & 60.42 & 1140 & 5775 & 800 & **476** & 40.47 \\ \(G\) & meV & 54967 & 242453 & 6476 & **2468** & 61.91 & 1276 & 6884 & 653 & **364** & 44.16 \\ \(C_{v}\) & \(\frac{\text{real}}{\text{real}}\) & 0.42 & 0.1184 & 0.184 & **0.0840** & 29.05 & 0.944 & 0.081 & 0.144 & **0.0552** & 32.09 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Graph Regression results on QM9 dataset. The error rates of MPNN, 123-GNN, PPGNN and NestedGNN are reported in different units in the respective papers. We have converted the reported numbers to the units below according to the conversion factors mentioned in Pytorch-Geometric (Fey and Lenssen, 2019). MPNN is a general GNN model and all the other baselines are learnable GNN models designed to be more expressive than MPNN. The backbone neural model of FGNN is MPNN.
with are MPNN (Gilmer et al., 2017), 123-GNN (Morris et al., 2019), PPGNN (Maron et al., 2019) and NestedGNN (Zhang and Li, 2021). MPNN is a general GNN, similar to that of Battaglia et al. (2018), with good performance on QM9. 123-GNN and PPGNN are \(k\)-order methods which capture higher-order information. NestedGNN is another more expressive GNN which passes messages on rooted subgraphs instead of trees and has been shown to do well on the QM9 dataset. Table 5 shows that FGNN outperforms MPNN, 123-GNN, NestedGNN and PPGNN by a significant margin on all targets under both settings. Furthermore, the margin of improvement indicates that much of the higher-order information was not sufficiently captured by the \(k\)-order GNNs.
For the Alchemy dataset, following (Chen et al., 2019) we report MAE on jointly trained normalized targets and compare with MPNN, which was the best performing model in the benchmark results (Chen et al., 2019). We use the validation set to select among the models in Section 5.5. As FGNN is built on MPNN as the backend network, the margin of improvement in Table 6 is mainly due to the higher-order message passing module. We also did an ablation study using the different sharing configurations. Results indicate that conditioning the parameters on either the central atom or the edge type helps most. Conditioning in these ways helps capture most of the higher-order information centered around an atom (node). For the CABTA method, there is a slight decrease in performance, which is likely caused by the large parameter size of the model. Collectively, the ablation results suggest that the major improvements come from the higher-order message passing scheme itself, since conditioning on only bond types (BT) seems to be sufficient for better performance.
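To make these sharing configurations concrete, below is a minimal PyTorch sketch of CABT-style conditioning, keeping a separate factor-weight map per (central atom type, bond type) key. The vocabularies, dimensions, and the single-linear-map-per-key choice are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class ConditionedFactorWeights(nn.Module):
    """CABT-style conditioning: one weight map per (central atom, bond type) key.

    Atom/bond vocabularies and the single linear map are illustrative assumptions."""
    def __init__(self, atom_types=("C", "N", "O", "F", "H"),
                 bond_types=("single", "double", "triple", "aromatic"), dim=64):
        super().__init__()
        self.maps = nn.ModuleDict({f"{a}|{b}": nn.Linear(dim, dim)
                                   for a in atom_types for b in bond_types})

    def forward(self, h, central_atom, bond):
        # h: factor features of shape (batch, dim); the key selects the shared weights
        return self.maps[f"{central_atom}|{bond}"](h)

weights = ConditionedFactorWeights()
out = weights(torch.randn(8, 64), central_atom="C", bond="single")
```

Dropping the central-atom part of the key yields the BT variant, and adding the neighbouring atom types to the key yields CABTA, at the cost of a much larger parameter table.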
**Effect of rank of factor parameters** We now study the sensitivity of the model performance (MAE) to the rank of the factor parameters on the molecular data as well. For this, we fix all the model hyperparameters described in the architecture details and only vary the factor rank. Figure 11 shows the variation of MAE with increasing factor rank on the QM9 dataset. Model performance clearly improves with increasing rank and saturates above rank 256. The rank of the higher-order factors in this task is considerably lower than the rank in the character recognition ablation
Figure 11: Effect of rank of factor parameters on QM9 dataset. Mean absolute error of the model consistently decreases with the increasing rank and saturates after rank-256. In contrast, an exact representation of the full-rank tensor of the order-5 potential would need at least 3125 components.
in Figure 9. This can be explained by the different class sizes of the variables in the two tasks. The variables in the molecular data belong to only 5 atom types, whereas the variables in character recognition had a class size of 26. Considering a factor of order 5 (for a Carbon atom), a full-rank tensor representation of the potential function would need at least \(5^{5}=3125\) components for exact representation. Both of our ablation experiments, on handwriting recognition and on molecular data, provide consistent evidence that higher-order factors can be approximated with a mixture of a relatively small number of rank-1 tensors.
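As a quick arithmetic check, the snippet below compares the number of rank-1 components used by the mixture against the number that an exact representation of the order-5 potential could require.

```python
# Rank-1 components used by the mixture vs. the count needed for an exact
# representation of an order-5 potential over 5 atom classes (see Figure 11).
n_classes, order = 5, 5
exact_components = n_classes ** order      # 5**5 = 3125
for rank in (16, 64, 256):
    print(f"rank {rank:3d}: {rank / exact_components:.1%} of the 3125 components")
```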
#### 5.5.1 QM9 with positional information
The QM9 dataset comes with additional atom coordinate features which can provide positional as well as directional information. These directional features have been shown to be useful in the Directional Message Passing Network (Dimenet) (Klicpera et al., 2020), which was further confirmed in the subsequent works ALIGNN (Choudhary and DeCost, 2021), SphereNet (Liu et al., 2022) and GEM (Fang et al., 2022). The directional features, _i.e._, positional and angular information, used in Dimenet are known to help substantially on the QM9 dataset. Specifically, Dimenet extracts features from triplets of nodes, with each triplet feature constructed from the angular information within the triplet, encapsulated in a basis function. In order to incorporate this directional information, we augment our FGNN model with additional factors built from node positional coordinates. We conduct additional experiments with the added factors and compare the performance with Dimenet and other recent baselines.
**Modeling and Architecture** We augment the above FGNN model for molecular graphs with edge factors built from positional coordinates. An edge factor connects two adjacent nodes in the molecular graph, with elementwise multiplication as the aggregator function:
\[\tilde{\mathbf{g}}_{ij}=\mathcal{M}(\mathbf{f}_{i}\,|\,\Theta_{\mathrm{VF}_{i}})\odot\mathcal{M}(\mathbf{f}_{j}\,|\,\Theta_{\mathrm{VF}_{j}}) \tag{33}\]
where \(\mathbf{f}_{i}\) is the node position coordinate feature and \(\mathcal{M}()\) is an MLP. Note that unlike Dimenet, our model does not include features for each triplet of nodes.
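A minimal PyTorch sketch of the edge factor in (33), assuming two small two-layer MLPs over the 3D coordinates (the hidden sizes are arbitrary):

```python
import torch
import torch.nn as nn

class PositionalEdgeFactor(nn.Module):
    """Edge factor g_ij = M(f_i) * M(f_j) from Eq. (33); layer sizes are illustrative."""
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.mlp_i = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.mlp_j = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))

    def forward(self, f_i, f_j):
        # f_i, f_j: 3D position coordinates of the two endpoints, shape (n_edges, 3)
        return self.mlp_i(f_i) * self.mlp_j(f_j)   # elementwise-multiplication aggregator

factor = PositionalEdgeFactor()
g = factor(torch.randn(10, 3), torch.randn(10, 3))  # one feature vector per edge
```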
\begin{table}
\begin{tabular}{l l r r r r r r r r} \hline \hline \multirow{2}{*}{Target} & \multirow{2}{*}{Units} & \multicolumn{2}{c}{Joint training of targets} & \multicolumn{6}{c}{Separate training of targets} \\ \cline{3-10} & & Dimenet & FGNN & Dimenet & MGCN & EGNN & ALIGNN & SphereNet & FGNN \\ \hline \(\mu\) & D & 0.0775 & **0.0549** & 0.0286 & 0.056 & 0.029 & **0.0146** & 0.0245 & 0.0373 \\ \(\alpha\) & \(a_{0}^{3}\) & **0.0616** & 0.1364 & 0.0469 & **0.030** & 0.071 & 0.0561 & 0.0449 & 0.0868 \\ \(\epsilon_{\mathrm{HOMO}}\) & meV & 45.1 & **38.7** & 27.8 & 42.1 & 29 & **21.4** & 22.8 & 33.68 \\ \(\epsilon_{\mathrm{LUMO}}\) & meV & 41.1 & **37.4** & 19.7 & 57.4 & 25 & 19.5 & **18.9** & 31.26 \\ \(\Delta\epsilon\) & meV & 59.2 & **53.7** & 34.8 & 64.2 & 48 & 38.1 & **31.1** & 48.7 \\ \(\left\langle R^{2}\right\rangle\) & \(a_{0}^{2}\) & **0.345** & 1.40 & 0.331 & 0.11 & **0.106** & 0.543 & 0.268 & 0.365 \\ ZPVE & meV & **2.87** & 3.51 & 1.29 & **1.12** & 1.55 & 3.1 & **1.12** & 1.8 \\ \(U_{0}\) & meV & **12.9** & 36.3 & 8.02 & 12.9 & 11 & 15.3 & **6.26** & 21.5 \\ \(U\) & meV & **13.0** & 36.6 & 7.89 & 14.4 & 12 & 14.4 & **6.36** & 20.6 \\ \(H\) & meV & **13.0** & 36.7 & 8.11 & 16.2 & 12 & 14.7 & **6.33** & 27.4 \\ \(G\) & meV & **13.8** & 36.8 & 8.98 & 14.6 & 12 & 14.4 & **7.78** & 26.9 \\ \(c_{v}\) & \(\frac{\mathrm{cal}}{\mathrm{mol\;K}}\) & **0.0309** & 0.057 & 0.0249 & 0.038 & 0.031 & NA & **0.0215** & 0.0249 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of FGNN with position coordinate factors with recent baselines. The backbone model of FGNN is MPNN. All the baselines compared are more powerful models than MPNN specifically designed for exploiting 3D geometric information of molecules.
**Results** Table 7 shows the results of the augmented FGNN model on the QM9 dataset in both settings, _i.e._, joint training of the 12 targets and separate training of each target. The results suggest that FGNN performs competitively on most targets, and under the joint-training setting FGNN beats Dimenet on four of the twelve targets. Note that FGNN uses only node position features, without the explicit angular features encapsulated in basis functions as in Dimenet. Furthermore, for completeness, we compare with other models including MGCN (Shui and Karypis, 2020) and the more recent baselines EGNN (Satorras et al., 2021), ALIGNN (Choudhary and DeCost, 2021) and SphereNet (Liu et al., 2022), which all use the 3D coordinate positional information. Note that these models, including Dimenet, are specifically designed for molecular graph structures with domain-specific inductive biases, while FGNN is a general model which can be used over any graph structure. Hence, FGNN does not outperform these models but performs competitively.
### Human Motion Prediction (Sequential Prediction)
Human motion prediction aims at predicting the future motion of a human given a historical motion sequence. As there are clearly higher-order dependencies between joints, a factor graph neural network may help improve the performance of the predictor. In this section, we consider the human motion prediction problem on skeleton data, where the angle and 3D position of each joint are predicted. We build a factor graph neural network model for the skeleton data and compare the FGNN model with the state-of-the-art GNN-based models.
**Data and Modeling** For human motion prediction, we use the Human 3.6M (H3.6M) dataset. In this experiment, we replace the last two GNN layers in the model of Mao et al. (2019) with FGNN layers with the same number of output channels. The H3.6M dataset includes seven actors performing 15 varied activities such as walking and smoking. The poses of the actors are represented as exponential maps of joints, with a special pre-processing of global translation and rotation. In our experiments, as in previous work (Li et al., 2018; Mao et al., 2019), we only predict the exponential map of joints. That is, for each joint, we need to predict a 3-dimensional feature vector. Thus we add a factor over the
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Walk} & \multicolumn{2}{c}{Eating} & \multicolumn{2}{c}{Smoking} & \multicolumn{2}{c}{Discussion} & \multicolumn{2}{c}{Average} \\ milliseconds & 560 & 1000 & 560 & 1000 & 560 & 1000 & 560 & 1000 & 560 & 1000 \\ \hline convSeq2Seq(Li et al., 2018) & N/A & 0.92 & N/A & 1.24 & N/A & 1.62 & N/A & 1.86 & N/A & 1.41 \\ GNN(Mao et al., 2019) & **0.65** & 0.67 & 0.76 & **1.12** & 0.87 & 1.57 & **1.33** & 1.70 & 0.90 & 1.27 \\ DMGNN (Li et al., 2020) & 0.66 & 0.75 & **0.14** & 1.14 & **0.83** & **1.52** & **1.33** & **1.45** & **0.89** & **1.21** \\ Ours & 0.67 & **0.70** & 0.76 & **1.12** & 0.88 & 1.57 & 1.35 & 1.70 & 0.91 & 1.27 \\ \hline convSeq2Seq(Li et al., 2018) & 69.2 & 81.5 & 71.8 & 91.4 & 50.3 & 85.2 & 101.0 & 143.0 & 73.1 & 100.3 \\ GNN(Mao et al., 2019) & 55.0 & 60.8 & 68.1 & 79.5 & 42.2 & 70.6 & 93.8 & 119.7 & 64.8 & 82.6 \\ Hist-Attention (Mao et al., 2020) & 47.4 & 58.1 & **50.0** & 75.7 & 47.6 & 69.5 & **86.6** & 119.8 & 57.9 & 80.7 \\ Ours & **44.1** & **53.5** & 59.5 & **73.0** & **33.0** & **61.9** & 86.9 & **113.5** & **55.9** & **75.5** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Long-term prediction error (the smaller the better) of joint angles (top) and 3D joint positions (bottom) on H3.6M. Our model shares the same backbone as Mao et al. (2019), and we replace the last two GNN layers in Mao et al. (2019) with our FGNN layers.
3 variables of each joint 4. Also, for every pair of adjacent joints, a factor over their 6 variables is created. The factor node features are created by concatenating the features of all its variable nodes. For the edge features, we simply use one-hot vectors to distinguish the different factor-to-variable edges. For evaluation, we compare on 4 commonly used actions -- walking, eating, smoking and discussion. The results of GNN and convSeq2Seq are taken from (Mao et al., 2019), and our FGNN model also strictly follows the training protocol of (Mao et al., 2019).
Footnote 4: In practice, the angles with very small variance are ignored, and these variables are not added to the factor graph.
**Architecture and training details** We train our model on the Human3.6M dataset using the standard train-val-test split as in previous works (Mao et al., 2019; Li et al., 2018; Martinez et al., 2017), and we train and evaluate our model using the same protocol as Mao et al. (2019) (for details, see the Appendix).
**Results** The results are provided in Table 8. For the angle error, our FGNN model achieves results similar to the previous state-of-the-art GNN-based methods (Mao et al., 2019; Li et al., 2020), while for the 3D position error, our model achieves superior performance compared to the state-of-the-art models (Mao et al., 2019, 2020). Furthermore, since the backbone of our model is the same as that of Mao et al. (2019), the performance improvement suggests that the higher-order modeling with FGNN captures better higher-order structural priors than the pairwise GNN.
## 6 Conclusion
In this paper, we derive an efficient Low-rank Sum-Product Loopy Belief Propagation procedure for inference in factor graphs. The derived update functions are _simple_--they need only matrix multiplication and Hadamard product operations--and _efficient_--the complexity of message updates grows linearly with the number of variables in the factor. In order to learn better node representations with end-to-end training, we normalize the message passing updates to give the Factor Graph Neural Network (FGNN), allowing the network to capture higher-order dependencies among the variables. We then show that FGNN can also represent the execution of the Max-Product inference algorithm on probabilistic graphical models, providing a graph neural network architecture that can represent both the Sum-Product and Max-Product belief propagation inference algorithms.
Furthermore, we showed multiple ways of modeling higher-order factors with graph-structured input data. This gives us a fairly simple, flexible, powerful, and efficient message passing scheme for representation learning on graph data where higher-order information is present. We evaluated the proposed model with extensive experiments on various tasks and domains, including inference in PGMs and molecular and vision datasets, where it either outperforms other state-of-the-art models substantially or is at least competitive. The FGNN provides a convenient method of capturing arbitrary dependencies in graphs and hypergraphs, including those with typed or conditioned nodes and edges, opening up new opportunities for adding higher-order inductive biases into learning and inference problems. More importantly, it provides a deeper theoretical understanding of the relationship between graph neural networks and inference algorithms on graphical models.
### Acknowledgments and Disclosure of Funding
This work was supported by the National Research Foundation Singapore under its AI Singapore Program (Award Number: AISG-RP-2018-006). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore. Zhen Zhang's participation was partially supported by the Australian Research Council Grant DP160100703. Zhen Zhang and Javen Shi's participation were partially supported by the Centre for Augmented Reasoning at the Australian Institute for Machine Learning.
## Appendix A Proof of propositions
### Propositions for decomposing higher-order potentials
First we provide Lemma 8, which will be used in the proofs of Propositions 2 and 4.
**Lemma 8**.: _Given \(l\) non-negative feature vectors \(\mathbf{f}_{i}=[f_{i0},f_{i1},\ldots,f_{ik}]\), where \(i=1,\ldots,l\), there exist \(l\) matrices \(\mathbf{Q}_{i}\) of shape \(lk\times k\) and \(l\) vectors \(\mathbf{\hat{f}}_{i}=\mathbf{Q}_{i}\,\mathbf{f}_{i}^{\mathsf{T}}\), s.t._
\[[\mathbf{f}_{1},\mathbf{f}_{2},\ldots,\mathbf{f}_{l}]=[\max_{i}\hat{f}_{i0},\max_{i}\hat{f}_{i1},\ldots,\max_{i}\hat{f}_{i,kl}].\]
**Proof** Let
\[\mathbf{Q}_{i}=\left[\underbrace{\mathbf{0}^{k\times k},\ldots,\mathbf{0}^{k \times k}}_{i-1\text{ matrices}},\mathbf{I},\underbrace{\mathbf{0}^{k\times k}, \ldots,\mathbf{0}^{k\times k}}_{l-i\text{ matrices}}\right]^{\top}, \tag{34}\]
then we have that
\[\mathbf{\hat{f}}_{i}=\mathbf{Q}_{i}\,\mathbf{f}_{i}^{T}=\left[\underbrace{0, \ldots,0}_{(i-1)k\text{ zeros}},f_{i0},f_{i1},\ldots,f_{ik}, \underbrace{0,\ldots,0}_{(l-i)k\text{ zeros}}\right]^{\top}.\]
By the fact that all feature vectors are non-negative, obviously we have that \([\mathbf{f}_{1},\mathbf{f}_{2},\ldots,\mathbf{f}_{l}]=[\max_{i}\hat{f}_{i0}, \max_{i}\hat{f}_{i1},\ldots,\max_{i}\hat{f}_{i,kl}]\).
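The construction is easy to verify numerically; the sketch below builds the block-selector matrices of (34) for a few non-negative feature vectors and checks that max-aggregation over the lifted vectors recovers their concatenation (all sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
l, k = 3, 4
F = rng.uniform(0.1, 1.0, size=(l, k))        # l non-negative feature vectors

# Q_i places f_i in the i-th block of a length-(l*k) vector, zeros elsewhere (Eq. 34)
f_hat = np.zeros((l, l * k))
for i in range(l):
    Q_i = np.zeros((l * k, k))
    Q_i[i * k:(i + 1) * k, :] = np.eye(k)
    f_hat[i] = Q_i @ F[i]

recovered = f_hat.max(axis=0)                  # entrywise max over the l lifted vectors
assert np.allclose(recovered, F.reshape(-1))   # equals [f_1, f_2, ..., f_l]
```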
Lemma 8 suggests that, for a group of feature vectors, we can use the \(\mathcal{Q}\) operator to produce several \(\mathbf{Q}\) matrices that map different vectors to different sub-spaces of a high-dimensional space; the maximization aggregation can then gather all the information from the feature group.
**Proposition 2**.: _A factor graph \(\mathcal{G}=(\mathcal{V},\mathcal{C},\mathcal{E})\) with variable log potentials \(\theta_{i}(x_{i})\) and factor log potentials \(\varphi_{c}(\mathbf{x}_{c})\) can be converted to a factor graph \(\mathcal{G}^{\prime}\) with the same variable potentials and the decomposed log-potentials \(\varphi_{ic}(x_{i},z_{c})\) using a one-layer FGNN._
**Proof** Without loss of generality, we assume that \(\log\phi_{c}(\mathbf{x}_{c})\geqslant 1\). Then let
\[\theta_{ic}(x_{i},z_{c})=\left\{\begin{array}{ll}\frac{1}{|s(c)|}\log\phi_{ c}(\mathbf{x}_{c}^{z_{c}}),&\text{if }x_{i}=x_{i}^{z_{c}},\\ -c_{x_{i},z_{c}},&\text{otherwise},\end{array}\right. \tag{35}\]
where \(c_{x_{i},z_{c}}\) can be an arbitrary real number larger than \(\max_{\mathbf{x}_{c}}\theta_{c}(\mathbf{x}_{c})\). Obviously we then have
\[\log\phi_{c}(\mathbf{x}_{c})=\max_{z_{c}}\sum_{i\in s(c)}\theta_{ic}(x_{i},z_{ c}) \tag{36}\]
Assume that we have a factor \(c\) over nodes \(1,2,\ldots,n\), and each node can take \(|X|\) states. Then the configurations \(\mathbf{x}_{c}\) can be sorted as

\[[\,\mathbf{x}_{c}^{0}=[x_{1}=0,x_{2}=0,\ldots,x_{n}=0],\] \[\mathbf{x}_{c}^{1}=[x_{1}=1,x_{2}=0,\ldots,x_{n}=0],\] \[\ldots,\] \[\mathbf{x}_{c}^{|X|^{n}-1}=[x_{1}=|X|-1,x_{2}=|X|-1,\ldots,x_{n}=|X|-1]],\]
and the higher-order potential can be organized as the vector \(\mathbf{g}_{c}=[\log\phi_{c}(\mathbf{x}_{c}^{0}),\log\phi_{c}(\mathbf{x}_{c}^{1}),\ldots,\log\phi_{c}(\mathbf{x}_{c}^{|X|^{n}-1})]\). Then for each \(i\) the term \(\theta_{ic}(x_{i},z_{c})\) in (35) has \(|X|^{n+1}\) entries, and each entry is either a scaled entry of the vector \(\mathbf{g}_{c}\) or an arbitrary number less than \(-\max_{\mathbf{x}_{c}}\theta_{c}(\mathbf{x}_{c})\).
Thus if we organize \(\theta_{ic}(x_{i},z_{c})\) as a length-\(|X|^{n+1}\) vector \(\mathbf{f}_{ic}\), then we define a \(|X|^{n+1}\times|X|^{n}\) matrix \(\mathbf{Q}_{ci}\) as follows: if and only if the \(l^{\text{th}}\) entry of \(\mathbf{f}_{ic}\) is set to the \(m^{\text{th}}\) entry of \(\mathbf{g}_{c}\) multiplied by \(1/|s(c)|\), the entry of \(\mathbf{Q}_{ci}\) in the \(l^{\text{th}}\) row and \(m^{\text{th}}\) column is set to \(1/|s(c)|\); all the other entries of \(\mathbf{Q}_{ci}\) are set to some negative number smaller than \(-\max_{\mathbf{x}_{c}}\theta_{c}(\mathbf{x}_{c})\). Due to the assumption that \(\log\phi_{c}(\mathbf{x}_{c})\geqslant 1\), the matrix multiplication \(\mathbf{Q}_{ci}\,\mathbf{g}_{c}\) must produce a legal \(\theta_{ic}(x_{i},z_{c})\).
If we directly define a \(\mathcal{Q}\)-network which produces the above matrices \(\mathbf{Q}_{ci}\), then in the aggregating part of our network there might be information loss. However, by Lemma 8 there must exist a group of \(\mathbf{\hat{Q}}_{ci}\) such that the maximization aggregation over the features \(\mathbf{\hat{Q}}_{ci}\,\mathbf{Q}_{ci}\,\mathbf{g}_{c}\) produces exactly a vector representation of \(\theta_{ic}(x_{i},z_{c}),i\in s(c)\). Thus if every \(t_{ci}\) is a different one-hot vector, we can easily use a single linear-layer \(\mathcal{Q}\)-network to produce all \(\mathbf{\hat{Q}}_{ci}\,\mathbf{Q}_{ci}\), and with an \(\mathcal{M}\)-network which always outputs the factor feature, we are able to output a vector representation of \(\theta_{ic}(x_{i},z_{c}),i\in s(c)\) at each factor node \(c\).
### Derivation of decomposed max-product belief propagation
In this section, we reformulate (26) using the decomposed higher-order log-potentials. We use \(m_{c\to i}(x_{i})\) and \(b_{i}(x_{i})\) for the previous messages and beliefs, and \(m^{\prime}_{c\to i}(x_{i})\) and \(b^{\prime}_{i}(x_{i})\) for the updated ones. Then the message updating step of max-product belief propagation (26) can be reformulated as
\[n_{i\to c}(x_{i})= \theta_{i}(x_{i})+\sum_{d:d\neq c,i\in s(d)}m_{d\to i}(x_{i}),\] \[= \theta_{i}(x_{i})+\sum_{d:i\in s(d)}m_{d\to i}(x_{i})-m_{c\to i }(x_{i})\] \[= b_{i}(x_{i})-m_{c\to i}(x_{i}) \tag{37a}\] \[m^{\prime}_{c\to i}(x_{i})= \max_{\mathbf{x}_{c}\setminus x_{i}}\left[\theta_{c}(\mathbf{x}_ {c})+\sum_{j\in s(c),j\neq i}n_{j\to c}(x_{j})\right]\] \[= \max_{\mathbf{x}_{c}\setminus x_{i}}\left\{\max_{z_{c}}\left[\sum _{j\in s(c),j\neq i}\varphi_{jc}(x_{j},z_{c})+\varphi_{ic}(x_{i},z_{c})\right] +\sum_{j\in s(c),j\neq i}n_{j\to c}(x_{j})\right\}\] \[= \max_{\mathbf{x}_{c}\setminus x_{i}}\left\{\max_{z_{c}}\left[\sum _{j\in s(c),j\neq i}\varphi_{jc}(x_{j},z_{c})+\varphi_{ic}(x_{i},z_{c})\right] +\sum_{j\in s(c),j\neq i}\left[b_{j}(x_{j})-m_{c\to j}(x_{j})\right]\right\}\] \[= \max_{z_{c}}\max_{\mathbf{x}_{c}\setminus x_{i}}\left\{\left[\sum _{j\in s(c),j\neq i}\varphi_{jc}(x_{j},z_{c})+\varphi_{ic}(x_{i},z_{c})\right] +\sum_{j\in s(c),j\neq i}\left[b_{j}(x_{j})-m_{c\to j}(x_{j})\right]\right\}\] \[= \max_{z_{c}}\left\{\sum_{j\in s(c),j\neq i}\max_{x_{j}}\left[ \varphi_{jc}(x_{j},z_{c})-m_{c\to j}(x_{j})+b_{j}(x_{j})\right]+\varphi_{ic}(x_{i},z _{c})\right\} \tag{37b}\]
Here, to simplify the notation, we define
\[b_{c\to i}(z_{c})=\sum_{j\in s(c),j\neq i}\max_{x_{j}}\left[\varphi_{jc}(x_{j},z_ {c})-m_{c\to j}(x_{j})+b_{j}(x_{j})\right],\forall c,i\in s(c)\]
and then the updating rule for beliefs can be reformulated as
\[b_{i}^{\prime}(x_{i}) =\theta_{i}(x_{i})+\sum_{c:i\in s(c)}m_{c\to i}^{\prime}(x_{i})\] \[=\theta_{i}(x_{i})+\sum_{c:i\in s(c)}\max_{z_{c}}\left[b_{c\to i }(z_{c})+\varphi_{ic}(x_{i},z_{c})\right].\]
Thus finally the max-product updating rules are
\[b_{c\to i}(z_{c}) \leftarrow\sum_{j\in s(c),j\neq i}\max_{x_{j}}\left[\varphi_{jc}(x _{j},z_{c})-m_{c\to j}(x_{j})+b_{j}(x_{j})\right],\] \[m_{c\to i}(x_{i}) \leftarrow\max_{z_{c}}\left[b_{c\to i}(z_{c})+\varphi_{ic}(x_{i},z_ {c})\right],\] \[b_{i}(x_{i}) \leftarrow\theta_{i}(x_{i})+\sum_{c:i\in s(c)}m_{c\to i}(x_{i})\]
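As a sanity check, the following NumPy sketch runs these three updates on a toy graph with a single decomposed factor over two variables and compares the resulting beliefs with brute-force max-marginals (exact here, since the toy graph is a tree); the sizes and potentials are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
nX, nZ = 3, 5                                    # |X| and |Z| (toy sizes)
# one factor c over variables {0,1}; decomposed log-potentials phi[i] of shape (|X|, |Z|)
phi = [rng.normal(size=(nX, nZ)) for _ in range(2)]
theta = [rng.normal(size=nX) for _ in range(2)]  # unary log-potentials

m = [np.zeros(nX) for _ in range(2)]             # messages m_{c->i}(x_i)
b = [theta[i].copy() for i in range(2)]          # beliefs  b_i(x_i)

for _ in range(5):
    # b_{c->i}(z_c) = sum_{j != i} max_{x_j} [phi_j(x_j,z) - m_j(x_j) + b_j(x_j)]
    contrib = [(phi[j] - m[j][:, None] + b[j][:, None]).max(axis=0) for j in range(2)]
    b_to = [contrib[1 - i] for i in range(2)]
    # m_{c->i}(x_i) = max_z [ b_{c->i}(z) + phi_i(x_i,z) ]
    m = [(b_to[i][None, :] + phi[i]).max(axis=1) for i in range(2)]
    # b_i(x_i) = theta_i(x_i) + sum over factors containing i (one factor here)
    b = [theta[i] + m[i] for i in range(2)]

# brute force: theta_c(x_0,x_1) = max_z [phi_0(x_0,z) + phi_1(x_1,z)]
joint = theta[0][:, None] + theta[1][None, :] \
        + (phi[0][:, None, :] + phi[1][None, :, :]).max(axis=2)
assert np.allclose(b[0], joint.max(axis=1))      # beliefs equal exact max-marginals
```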
### Recovering decomposed max-product belief propagation using FGNN
Given the log-potentials represented as a set of rank-1 tensors at each factor node, we need to show that each iteration of the Max-Product message passing update can be represented by a Variable-to-Factor layer followed by a Factor-to-Variable layer (forming an FGNN layer). We reproduce the update equations here.
\[b_{c\to i}(z_{c}) \leftarrow\sum_{j\in s(c),j\neq i}\max_{x_{j}}\left[\varphi_{jc}(x _{j},z_{c})-m_{c\to j}(x_{j})+b_{j}(x_{j})\right], \tag{38a}\] \[m_{c\to i}(x_{i}) \leftarrow\max_{z_{c}}\left[b_{c\to i}(z_{c})+\varphi_{ic}(x_{i},z_ {c})\right],\quad b_{i}(x_{i})\leftarrow\theta_{i}(x_{i})+\sum_{c:i\in s(c)}m_ {c\to i}(x_{i}) \tag{38b}\]
In the Max-Product updating procedure, we should keep all the decomposed \(\varphi_{jc}(x_{j},z_{c})=\log\phi_{jc}(x_{j},z_{c})\) and all the unary potentials \(\theta_{i}(x_{i})\) for use at the next layer. That requires the FGNN to be able to fit the identity mapping. Consider letting the \(\mathcal{Q}\) network always output the identity matrix and \(\mathcal{M}([\mathbf{g}_{c},f_{i}]|\Theta_{\text{VF}})\) always output \(f_{i}\); then the FGNN is an identity mapping. As \(\mathcal{Q}\) always outputs a matrix and \(\mathcal{M}\) outputs a vector, we can use part of their blocks as the identity mapping to keep \(\log\phi_{jc}(x_{j},z_{c})\) and \(\theta_{i}(x_{i})\). The other blocks are used to update \(b_{c\to i}(z_{c})\), the messages \(m_{c\to j}(x_{j})\), and \(b_{i}(x_{i})\).
First we show that \(\mathcal{M}\) operators in the Variable-to-Factor layer can be used to construct the computational graph for the max-marginal operations.
For an arbitrary real-valued feature matrix \(\mathbf{X}\in\mathbb{R}^{k\times l}\) with \(x_{ij}\) as its entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, the feature mapping operation \(\hat{\mathbf{x}}=[\max_{j}x_{ij}]_{i=1}^{k}\) can be exactly parameterized with a \(2\log_{2}l\)-layer neural network with ReLU as the activation function and at most \(2l\) hidden units.
**Proof** Without loss of generality we assume that \(k=1\), and then we use \(x_{i}\) to denote \(x_{1i}\). When \(l=2\), it is obvious that
\[\max(x_{1},x_{2})=\mathbf{Relu}(x_{1}-x_{2})+x_{2}=\mathbf{Relu}(x_{1}-x_{2})+ \mathbf{Relu}(x_{2})-\mathbf{Relu}(-x_{2})\]
and the maximization can be parameterized by a two-layer neural network with 3 hidden units, which satisfies the proposition.
Assume that the proposition holds when \(l=2^{a}\) for some integer \(a\geqslant 1\) 5. Then for \(l=2^{a+1}\), we can find \(\max(x_{1},\ldots,x_{2^{a}})\) and \(\max(x_{2^{a}+1},\ldots,x_{2^{a+1}})\) using two networks with \(2a\) layers and at most \(2^{a+1}\) hidden units each. Stacking the two neural networks together results in a network with \(2a\) layers and at most \(2^{a+2}\) hidden units. Then we can add another 2-layer network with 3 hidden units to find \(\max(\max(x_{1},\ldots,x_{2^{a}}),\max(x_{2^{a}+1},\ldots,x_{2^{a+1}}))\). Thus by mathematical induction the proposition is proved.
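The base case can also be checked directly: \(\max(x_{1},x_{2})=\mathbf{Relu}(x_{1}-x_{2})+\mathbf{Relu}(x_{2})-\mathbf{Relu}(-x_{2})\), which uses 3 hidden units, as the snippet below verifies numerically.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
x = np.random.default_rng(1).normal(size=(1000, 2))
# max(x1, x2) = relu(x1 - x2) + x2, and x2 = relu(x2) - relu(-x2)
max_net = relu(x[:, 0] - x[:, 1]) + relu(x[:, 1]) - relu(-x[:, 1])
assert np.allclose(max_net, x.max(axis=1))   # 3 hidden units suffice for the pairwise max
```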
Footnote 5: For any \(l\) we can always pad the vector to get an \(l^{\prime}=2^{a}\) for some integer \(a\).
The update equations contain summations of columns of a matrix after the max-marginal operations. However, the VF and FV layers use max operators to aggregate the features produced by the \(\mathcal{M}\) and \(\mathcal{Q}\) operators. Assume that the \(\mathcal{M}\) operator has produced the max-marginals; we then use \(\mathcal{Q}\) to produce several weight matrices. The max-marginals are multiplied by the weight matrices to produce new feature vectors, and the maximization aggregation function is used to aggregate information from the new feature vectors. We use the following proposition to show that the summations of max-marginals can be implemented by one MPNN layer plus one linear layer. Thus we can use the VF layer plus a linear layer to produce \(b_{c\to i}(z_{c})\) and the FV layer plus another linear layer to produce \(b_{i}(x_{i})\). Hence, to do \(k\) iterations of Max-Product, we need \(k\) FGNN layers followed by a linear layer.
**Proposition 4**.: _For arbitrary non-negative valued feature matrix \(\mathbf{X}\in\mathbb{R}_{\geq 0}^{k\times l}\) with \(x_{ij}\) as its entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, there exists a constant tensor \(\mathbf{W}\in\mathbb{R}^{k\times l\times kl}\) that can be used to transform \(\mathbf{X}\) into an intermediate representation \(y_{ir}=\sum_{ij}x_{ij}w_{ijr}\), such that after maximization operations are done to obtain \(\hat{y}_{r}=\max_{i}y_{ir}\), we can use another constant matrix \(\mathbf{Q}\in\mathbb{R}^{l\times kl}\) to obtain_
\[[\sum_{i}x_{ij}]_{j=1}^{l}=\mathbf{Q}[\hat{y}_{r}]_{r=1}^{kl}. \tag{39}\]
**Proof** The proposition is a simple corollary of Lemma 8. The tensor \(\mathbf{W}\) serves the same role as the matrices \(\mathbf{Q}_{i}\) in Lemma 8, converting the feature matrix \(\mathbf{X}\) into a vector; then a simple linear operator can be used to produce the column sums \([\sum_{i}x_{ij}]_{j=1}^{l}\) of \(\mathbf{X}\), which completes the proof.
In Lemma 8 and Proposition 4, only non-negative features are considered, while log-potentials can have negative entries. However, for the MAP inference problem in (25), the following transformation makes the log-potentials non-negative without changing the final MAP assignment:
\[\tilde{\theta}_{i}(x_{i})=\theta_{i}(x_{i})-\min_{x_{i}}\theta_{i}(x_{i}), \qquad\tilde{\theta}_{c}(\mathbf{x}_{c})=\theta_{c}(\mathbf{x}_{c})-\min_{ \mathbf{x}_{c}}\theta_{c}(\mathbf{x}_{c}). \tag{40}\]
As a result, for an arbitrary PGM we can first apply the above transformation to make the log-potentials non-negative, and then our FGNN can exactly perform Max-Product Belief Propagation on the transformed non-negative log-potentials.
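The shift in (40) is a one-liner in practice and leaves the MAP assignment untouched, as the following check on a random order-3 log-potential illustrates.

```python
import numpy as np

theta_c = np.random.default_rng(2).normal(size=(3, 3, 3))   # an order-3 log-potential
theta_c_tilde = theta_c - theta_c.min()                      # Eq. (40): all entries now >= 0
assert (theta_c_tilde >= 0).all()
# the maximizing configuration is unchanged by the constant shift
assert np.unravel_index(theta_c.argmax(), theta_c.shape) \
       == np.unravel_index(theta_c_tilde.argmax(), theta_c_tilde.shape)
```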
### A Factor Graph Neural Network Module Recovering the Belief Propagation
In this section, we give the proofs of Propositions 5 and 6 by constructing two FGNN layers which exactly recover the belief propagation operations. As lower-order factors can always be absorbed into higher-order factors, we construct the FGNN layers on a factor graph \(\mathcal{H}=(\mathcal{V},\mathcal{F},\hat{\mathcal{E}})\) which satisfies the following conditions:
1. \(\forall i\in\mathcal{V}\), the associated \(\theta_{i}(x_{i})\) satisfies \(\theta_{i}(x_{i})>0\ \forall x_{i}\in X\);
2. \(\forall f_{1},f_{2}\in\mathcal{F}\), \(|f_{1}|=|f_{2}|\);
3. \(\forall f\in\mathcal{F}\), the corresponding \(\varphi_{f}(\mathbf{x}_{f})\) can be decomposed as \[\varphi_{f}(\mathbf{x}_{f})=\max_{z_{f}\in\mathcal{Z}}\sum_{i\in f}\varphi_{ fi}(x_{i},z_{f}),\] (41) and \(\forall i\in f,\varphi_{fi}(x_{i},z_{f})\) satisfies that \(\varphi_{fi}(x_{i},z_{f})>0\).
On factor graph \(\mathcal{H}\), we construct a FGNN layer on the directed bipartite graph in Figure 12.
**FGNN layer to recover (28a)** Here we construct an FGNN layer to produce all \(b_{f\to i}(z_{f})\). First we reformulate (28a) as
\[\begin{split} b_{f\to i}(z_{f})&\leftarrow\tilde{ \varphi}_{f}(z_{f})-\max_{x_{i}}[\varphi_{if}(x_{i},z_{f})-m_{f\to i}(x_{i})+b_ {i}(x_{i})],\\ \tilde{\varphi}_{f}(z_{f})&\leftarrow\sum_{i\in f} \max_{x_{i}}[\varphi_{if}(x_{i},z_{f})-m_{f\to i}(x_{i})+b_{i}(x_{i})].\end{split} \tag{42}\]
Here we use the Variable-to-Factor sub-graph to implement (42). Each variable node \(i\) is associated with a length-\(|X|\) vector \([b_{i}(x_{i})]_{x_{i}\in X}\) (initially \(b_{i}(x_{i})=\theta_{i}(x_{i})\)). For each edge in the sub-graph, assume that \(f=[i_{1},i_{2},\ldots,i_{|f|}]\); then for \(i_{j}\in f\), the associated feature vector is the length-\(|f|\) one-hot vector
\[[0,0,\ldots,\underbrace{1}_{\text{the }j^{\text{th}}\text{ entry}},\ldots,0].\]
Figure 12: Directed bipartite graph for constructing FGNN layers. In the Variable-to-Factor sub-graph, each factor receives messages from the same number of nodes. On the other hand, in the Factor-to-Variable sub-graph, each node may receive messages from a different number of factors.
For each factor node \(f=[i_{1},i_{2},\ldots,i_{|f|}]\) in the sub-graph, it is associated with an \(|f|\times|X||Z|\) feature matrix as follows
\[\begin{bmatrix}[\varphi_{fi_{1}}(x_{i_{1}},z_{f})-m_{f\to i_{1}}(x_{i_{1}})]_{x_{i_{1}}=1,z_{f}=1}^{x_{i_{1}}=|X|,z_{f}=|Z|}\\ [\varphi_{fi_{2}}(x_{i_{2}},z_{f})-m_{f\to i_{2}}(x_{i_{2}})]_{x_{i_{2}}=1,z_{f}=1}^{x_{i_{2}}=|X|,z_{f}=|Z|}\\ \ldots\\ [\varphi_{fi_{|f|}}(x_{i_{|f|}},z_{f})-m_{f\to i_{|f|}}(x_{i_{|f|}})]_{x_{i_{|f|}}=1,z_{f}=1}^{x_{i_{|f|}}=|X|,z_{f}=|Z|}\end{bmatrix}.\]
Then the max operation over all \(i\in f\) will produce the edge feature matrix
\[\begin{bmatrix}[\varphi_{fi_{1}}(x_{i_{1}},z_{f})-m_{f\to i_{1}}(x_{i_{1}})+b_{i_{1}}(x_{i_{1}})]_{x_{i_{1}}=1,z_{f}=1}^{x_{i_{1}}=|X|,z_{f}=|Z|}\\ [\varphi_{fi_{2}}(x_{i_{2}},z_{f})-m_{f\to i_{2}}(x_{i_{2}})+b_{i_{2}}(x_{i_{2}})]_{x_{i_{2}}=1,z_{f}=1}^{x_{i_{2}}=|X|,z_{f}=|Z|}\\ \ldots\\ [\varphi_{fi_{|f|}}(x_{i_{|f|}},z_{f})-m_{f\to i_{|f|}}(x_{i_{|f|}})+b_{i_{|f|}}(x_{i_{|f|}})]_{x_{i_{|f|}}=1,z_{f}=1}^{x_{i_{|f|}}=|X|,z_{f}=|Z|}\end{bmatrix}.\]
Then by Proposition 3, we can recover the maximization operation in (42) using an \(\mathcal{O}(\log_{2}|X|)\)-layer neural network with at most \(\mathcal{O}(|X|^{2}\log_{2}|X|)\) hidden units. After that, all the remaining operations are simple linear operations, and they can be easily encoded in a neural network without adding any parameters. Thus we can construct an FGNN layer which produces, for each factor \(f\), the factor features
\[\begin{bmatrix}[b_{f\to i_{1}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\\ [b_{f\to i_{2}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\\ \ldots\\ [b_{f\to i_{|f|}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\end{bmatrix}.\]
Finally we constructed an FGNN to parameterize the operation in (28a), and this construction also proves Proposition 5 as follows.
The operation in (28a) can be parameterized by one MPNN layer with \(\mathcal{O}(|X|\max_{c\in\mathcal{C}}|\mathcal{Z}_{c}|)\) hidden units followed by a \(\mathcal{O}(\log_{2}|X|)\)-layer neural network with at most \(\mathcal{O}(|X|^{2}\log_{2}|X|)\) hidden units.
**FGNN layer to recover (28c)** Here we construct an FGNN layer to parameterize (28b) and (28c) in order to prove Proposition 6. Using the notation of this section, the operation in (28c) can be reformulated as
\[m_{f\to i}(x_{i}) \leftarrow\max_{z_{f}}[\varphi_{if}(x_{i},z_{f})+b_{f\to i}(z_{f})]\] \[b_{i}(x_{i}) \leftarrow\theta_{i}(x_{i})+\sum_{f:i\in f}\max_{z_{f}}[\varphi_{if} (x_{i},z_{f})+b_{f\to i}(z_{f})].\]
In the previous paragraph, we produced the new factor feature
\[\begin{bmatrix}&[b_{f\to i_{1}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\\ &[b_{f\to i_{2}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\\ &\cdots\\ &[b_{f\to i_{|f|}}(z_{f})]_{z_{f}=1}^{z_{f}=|Z|}\end{bmatrix}.\]
Considering the old factor feature
\[\begin{bmatrix}[\varphi_{fi_{1}}(x_{i_{1}},z_{f})]_{x_{i_{1}}=1,z_{f}=1}^{x_{i_{1}}=|X|,z_{f}=|Z|}\\ [\varphi_{fi_{2}}(x_{i_{2}},z_{f})]_{x_{i_{2}}=1,z_{f}=1}^{x_{i_{2}}=|X|,z_{f}=|Z|}\\ \ldots\\ [\varphi_{fi_{|f|}}(x_{i_{|f|}},z_{f})]_{x_{i_{|f|}}=1,z_{f}=1}^{x_{i_{|f|}}=|X|,z_{f}=|Z|}\end{bmatrix},\]
we can use _broadcasted_ addition between these two features to get
\[\begin{bmatrix}[b_{f\to i_{1}}(z_{f})+\varphi_{fi_{1}}(x_{i_{1}},z_{f})]_{x_{i_{1}}=1,z_{f}=1}^{x_{i_{1}}=|X|,z_{f}=|Z|}\\ [b_{f\to i_{2}}(z_{f})+\varphi_{fi_{2}}(x_{i_{2}},z_{f})]_{x_{i_{2}}=1,z_{f}=1}^{x_{i_{2}}=|X|,z_{f}=|Z|}\\ \ldots\\ [b_{f\to i_{|f|}}(z_{f})+\varphi_{fi_{|f|}}(x_{i_{|f|}},z_{f})]_{x_{i_{|f|}}=1,z_{f}=1}^{x_{i_{|f|}}=|X|,z_{f}=|Z|}\end{bmatrix}.\]
After that we have an \(|f|\times|X|\times|Z|\) feature tensor for each factor \(f\in\mathcal{F}\). By Proposition 3, a \(\mathcal{O}(\log_{2}|\,\mathcal{Z}\,|)\)-layer neural network with at most \(\mathcal{O}(|\,\mathcal{Z}\,|^{2}\log_{2}|\,\mathcal{Z}\,|)\) parameters can be used to convert the above feature to
\[\begin{bmatrix}[m_{f\to i_{1}}(x_{i_{1}})]_{x_{i_{1}}=1}^{x_{i_{1}}=|X|}\\ [m_{f\to i_{2}}(x_{i_{2}})]_{x_{i_{2}}=1}^{x_{i_{2}}=|X|}\\ \ldots\\ [m_{f\to i_{|f|}}(x_{i_{|f|}})]_{x_{i_{|f|}}=1}^{x_{i_{|f|}}=|X|}\end{bmatrix}\leftarrow\begin{bmatrix}[\max_{z_{f}}[b_{f\to i_{1}}(z_{f})+\varphi_{fi_{1}}(x_{i_{1}},z_{f})]]_{x_{i_{1}}=1}^{x_{i_{1}}=|X|}\\ [\max_{z_{f}}[b_{f\to i_{2}}(z_{f})+\varphi_{fi_{2}}(x_{i_{2}},z_{f})]]_{x_{i_{2}}=1}^{x_{i_{2}}=|X|}\\ \ldots\\ [\max_{z_{f}}[b_{f\to i_{|f|}}(z_{f})+\varphi_{fi_{|f|}}(x_{i_{|f|}},z_{f})]]_{x_{i_{|f|}}=1}^{x_{i_{|f|}}=|X|}\end{bmatrix}.\]
We use this as the first part of our \(\mathcal{M}\) network. For the second part, as we need to produce \(\sum_{f:i\in f}\max_{z_{f}}[\varphi_{if}(x_{i},z_{f})+b_{f\to i}(z_{f})]\) from the features \(\max_{z_{f}}[\varphi_{if}(x_{i},z_{f})+b_{f\to i}(z_{f})]\), by Proposition 4 this requires another linear layer with \(\mathcal{O}(\max_{i\in\mathcal{V}}\deg(i)^{2}|X|^{2})\) parameters, where \(\deg(i)=|\{f\,|\,f\in\mathcal{F},i\in f\}|\). After that, the \(\mathcal{Q}\) network can be a simple identity mapping, and the FGNN produces the updated messages \(m_{f\to i}(x_{i})=\max_{z_{f}}[\varphi_{if}(x_{i},z_{f})+b_{f\to i}(z_{f})]\) for each node. Adding these features to the initial node features results in the new node features \(b_{i}(x_{i})\). Thus, by constructing an FGNN layer to parameterize (28b) and (28c), we complete the proof of Proposition 6.
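With array broadcasting, the Factor-to-Variable computation above is straightforward; a minimal NumPy sketch (with arbitrary sizes) is:

```python
import numpy as np

rng = np.random.default_rng(3)
f_size, nX, nZ = 3, 4, 6
phi = rng.uniform(0.1, 1.0, size=(f_size, nX, nZ))  # varphi_{if}(x_i, z_f), one slice per i in f
b_to = rng.uniform(0.1, 1.0, size=(f_size, nZ))     # rows b_{f->i}(z_f) from the VF layer

summed = phi + b_to[:, None, :]                     # broadcasted addition, shape (|f|, |X|, |Z|)
m = summed.max(axis=2)                              # m_{f->i}(x_i) = max over z_f, shape (|f|, |X|)
```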
### Example of Recovering Max Product Belief Propagation
We provide a simple example that uses the proposed FGNN to recover Max-Product Belief Propagation. Max-Product Belief Propagation can be viewed as a continuous mapping between the input log-potentials and the output "beliefs", and our FGNN is a universal approximator for such mappings. In this part, we provide one parametrization of FGNN that exactly recovers Max-Product Belief Propagation; it may not be the only one or the optimal one. Our goal is to design a network that is capable of recovering traditional inference procedures such as belief propagation, but by learning from data our approach may learn a better inference procedure.
Let's consider a simple MAP inference problem over a simple graphical model as follows,
\[\max_{\mathbf{x}}\left[\theta_{1,2}(x_{1},x_{2})+\theta_{2,3}(x_{2},x_{3}) \right], \tag{44}\]
where each variable \(x_{i}\in\{0,1,\ldots,N-1\}\), and the log-potentials are all real-valued functions. We then show that the most complicated step of (28), namely (28a), can be recovered by a Variable-to-Factor module. In such a Variable-to-Factor module (shown in Figure 13), the first layer of \(\mathcal{M}(\cdot|\Theta_{VF})\) maps the edge potentials into the decomposed log-potentials defined in Lemma 1. This operation only requires a linear transformation. Then the decomposed log-potentials are concatenated as the input of the second layer of \(\mathcal{M}(\cdot|\Theta_{VF})\) (recall that the Variable-to-Factor layer requires both factor and variable features as input); by another linear transformation we obtain the term inside the max operation in (28a), plus a redundant term with the same shape. The \(\mathcal{Q}\) network, working as a selector, sets the redundant term to \(-\infty\), so the max aggregation filters it out. Finally, by applying an MLP to take the max over \(x\) and perform the summation, the factor \(\{1,2\}\) obtains a feature vector consisting of \(b_{1,2\to 1}(z_{c})\) and \(b_{1,2\to 2}(z_{c})\).
## Appendix B Experiments
### Additional Information on MAP Inference over PGM
**Data** We construct four datasets. All variables are binary. Each instance starts with a chain structure with a unary potential on every node and pairwise potentials between consecutive nodes. A higher-order potential is then imposed on every node for the first three datasets.
The node potentials are all randomly generated from the uniform distribution over \([0,1]\). We use two kinds of pairwise potentials, one randomly generated (as in Table 10), the other encouraging two adjacent nodes to both take state 1 (as in Table 9 and Table 11),
Figure 13: Example of Variable-to-Factor module that recovers the operations in (28a).
i.e. the potential function gives high value to configuration \((1,1)\) and low value to all other configurations. For example, in Dataset1, the potential value for \(x_{1}\) to take the state 0 and \(x_{2}\) to take the state 1 is 0.2; in Dataset3, the potential value for \(x_{1}\) and \(x_{2}\) to take the state 1 at the same time is sampled from a uniform distribution over \([0,\,2]\).
For Dataset1,2,3, we additionally add the budget higher-order potential (Martins et al., 2015) at every node; these potentials allow at most \(k\) of the 8 variables that are within their scope to take the state 1. For the first two datasets, the value \(k\) is set to 5; for the third dataset, it is set to a random integer in {1,2,3,4,5,6,7,8}. For Dataset4, there is no higher-order potential.
As a result of the constructions, different datasets have different inputs for the FGNN; for each dataset, the inputs for each instance are the parameters of the PGM that are not fixed. For Dataset1, only the node potentials are not fixed, hence each input instance is a factor graph with the randomly generated node potentials added as input node features for each variable node. Dataset2 and Dataset4 are similar in terms of the input format, both including randomly generated node potentials as variable node features and randomly generated pairwise potential parameters as the corresponding pairwise factor node features. Finally, for Dataset3, the variable nodes, the pairwise factor nodes and the higher-order factor nodes all have corresponding input features.
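For concreteness, a minimal sketch of generating one Dataset3-like instance is given below; the chain length and the zero value assigned to non-\((1,1)\) pairwise configurations are illustrative assumptions.

```python
import numpy as np

def make_instance(n=30, rng=np.random.default_rng()):
    """One Dataset3-like chain instance (sketch, not the exact generator)."""
    node_pot = rng.uniform(0.0, 1.0, size=(n, 2))          # unary potentials, uniform on [0, 1]
    pair_pot = np.zeros((n - 1, 2, 2))                     # low value for most configurations
    pair_pot[:, 1, 1] = rng.uniform(0.0, 2.0, size=n - 1)  # reward adjacent (1, 1) states
    budgets = rng.integers(1, 9, size=n)                   # k in {1,...,8}: at most k of the
    return node_pot, pair_pot, budgets                     # 8 in-scope variables may take 1
```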
**Architecture** We use a multi-layer factor graph neural network with architecture FGNN(64) - Res[FC(64) - FGNN(64) - FC(64)] - MLP(128) - Res[FC(64) - FGNN(64) - FC(128)] - FC(256) - Res[FC(256) - FGNN(64) - FC(128)] - Res[FC(128) - FGNN(64) - FC(128)] - FC(64) - Res[FC(64) - FGNN(64) - FC(64)] - FGNN(2). Here FGNN(\(C_{\text{out}}\)) is an FGNN layer with output feature dimension \(C_{\text{out}}\) and ReLU (Nair and Hinton, 2010) activation, FC(\(C_{\text{out}}\)) is a fully connected layer with output feature dimension \(C_{\text{out}}\) and ReLU activation, and Res[\(\cdot\)] is a neural network with a residual link from its input to its output; these additional architecture components assist learning.
**Running time** We report the inference time for one instance and the training time for one epoch on the synthetic datasets in Table 12. The results show that our method runs in a reasonable amount of time.
Table 12: Inference time per instance and training time per epoch on the synthetic datasets.
### Implementation details on MAP Solvers
In the experiment, the AD3 code is from the official code repository 6, which comes with a Python interface. For the Max-Product algorithm, we use the implementation from libDAI and convert the budget higher-order potential into a table function. For the MPLP algorithm, we implemented it in C++ to directly support the budget higher-order potential. The re-implemented version was compared with the original version 7 and performs better in our experiments, so we report the results of the re-implemented version.
Footnote 6: [https://github.com/andre-martins/AD3](https://github.com/andre-martins/AD3)
Footnote 7: [https://people.csail.mit.edu/dsontag/code/mplp_ver2.tgz](https://people.csail.mit.edu/dsontag/code/mplp_ver2.tgz)
### Dataset Generation and Training Details of LDPC decoding
**Data** Each instance of training/evaluation data is generated as follows:
```
0: \(\tilde{\mathbf{y}}\): a 96-bit noisy signal; \(\text{SNR}_{dB}\): signal-to-noise ratio, a scalar
Uniformly sample a 48-bit binary signal \(\mathbf{x}\), where for each \(0<i\leqslant 48\), \(P(x_{i}=1)=P(x_{i}=0)=0.5\)
Encode \(\mathbf{x}\) using the "96.3.963" scheme (MacKay, 2009) to get a 96-bit signal \(\mathbf{y}\)
Sample \(\text{SNR}_{dB}\in\{0,1,2,3,4\}\) and \(\sigma_{b}\in\{0,1,2,3,4,5\}\) uniformly
For each \(0<i\leqslant 96\), sample
* \(\eta_{i}\in\mathcal{U}(0,1)\),
* \(n_{i}\in\mathcal{N}(0,\sigma^{2})\) s.t. \(\text{SNR}_{dB}=20\log_{10}1/\sigma\)
* \(z_{i}\in\mathcal{N}(0,\sigma_{b}^{2})\)
Set the noisy signal \(\tilde{\mathbf{y}}\) to
* \(\tilde{y}_{i}=y_{i}+n_{i}+\mathbb{I}(\eta_{i}\leqslant 0.05)z_{i}\)
```
**Algorithm 2** Data Generation for LDPC decoding
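A direct NumPy transcription of Algorithm 2 is given below; `encode` stands in for the "96.3.963" LDPC encoder, which is the only assumed component.

```python
import numpy as np

def make_ldpc_sample(encode, rng=np.random.default_rng()):
    """One training/evaluation instance per Algorithm 2; `encode` maps 48 bits -> 96 bits."""
    x = rng.integers(0, 2, size=48)                 # uniform 48-bit message
    y = encode(x).astype(float)                     # "96.3.963" codeword (encoder assumed)
    snr_db = rng.choice([0, 1, 2, 3, 4])
    sigma_b = rng.choice([0, 1, 2, 3, 4, 5])
    sigma = 10.0 ** (-snr_db / 20.0)                # SNR_dB = 20 * log10(1 / sigma)
    eta = rng.uniform(0.0, 1.0, size=96)
    n = rng.normal(0.0, sigma, size=96)             # Gaussian channel noise
    z = rng.normal(0.0, sigma_b, size=96)           # burst noise component
    y_noisy = y + n + (eta <= 0.05) * z             # 5% of positions get the burst noise
    return y_noisy, snr_db, x
```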
During the training of FGNN, the node features include the noisy signal \(\tilde{\mathbf{y}}\) and the signal-to-noise ratio \(\text{SNR}_{dB}\). For each factor \(f\), the vector \([\tilde{y}_{i}]_{i\in f}\) is provided as the factor feature vector. Meanwhile, for each edge from a factor node \(f\) to one of its variable nodes \(i\), the factor feature and the variable node feature are concatenated to form the edge feature.
**Architecture** In our FGNN, every layer shares the same \(\mathcal{Q}\) network, which is a 2-layer network: MLP(64)-MLP(4). The first layer has a ReLU activation function and the second layer has no activation function.
The overall structure of our FGNN is as follows: Input - Res[FC(64) - FGNN(64) - FC(64)] - Res[FC(64) - FGNN(64) - FC(64)] - FC(64) - FGNN(64) - FC(128) - FC(256) - FGNN(128) - FC(256) - Res[FC(256) - FGNN(128) - FC(256)] - FC(128) - FC(128) - FC(64) - FGNN(64) - FC(64) - Res[FC(64) - FGNN(64) - FC(128)] - FC(128) - FC(1). In the network, a batch-normalization layer and a ReLU activation function follow each FC layer and FGNN layer, except for the last FC layer.
### Additional experiments on Molecular data
#### b.4.1 Improvement over MPNN when distance information is excluded
In the main paper, we reported the improved performance of FGNN over MPNN on the Alchemy dataset. The FGNN is built on MPNN as the backend feature extractor; therefore, its improved performance is the result of capturing dependencies not modeled by MPNN. This begs the question: can higher-order message passing capture more information when the backend MPNN module is further constrained? To answer this, we limit the input to the MPNN module and see whether FGNN can further improve its gain with respect to MPNN.
One of the main reasons for the superior performance of MPNN on molecular datasets is that it can capture the 3D geometric structure of the molecule (Chen et al., 2019; Gilmer et al., 2017). MPNN is provided with edge features which include bond type and spatial distance between the pair of atoms. Further, it operates on a complete graph where extra _virtual edges_ are added between every pair of atoms with no bond. The edge feature for such _virtual edges_ contains the spatial distance between the pair of atoms. Consequently, MPNN can capture 3D geometric structure of the molecule with such a complete graph.
In the following experiments, we evaluate whether higher-order message passing can help capture the structure of the molecule in the absence of pairwise distance edge features. We do include 3D atom positions in the node features, and hence the information about the geometric structure of the molecule is indirectly provided. Based on the pairwise distance feature, we divide the experimental setup into three categories.
* **Sparse graph without distance:** The input graph is sparse, i.e., an edge exists only if a bond exists between the atoms; edge features contain the bond type but not the distance between the pair of atoms.
* **Sparse graph with distance:** Input graph is a sparse graph with edge features containing bond type and distance between the pair of atoms.
* **Complete graph with distance:** Input graph is a complete graph with extra _virtual edges_ containing distance information between pair of atoms in the edge. This setup is the standard MPNN model.
In all three cases, regardless of input to the MPNN module, the higher-order message passing works only on the sparse graph.
Results in Table 13 show that the margin of improvement of FGNN over MPNN is significantly higher when the pairwise distance feature is not included in the graph. This suggests MPNN is not able to sufficiently capture the 3D molecular shape in the two cases where the sparse graph is used. In such scenarios, capturing higher-order structures with FGNN is more helpful in reducing the MAE. Furthermore, the results of the two sparse-graph cases are similar, and including the pairwise distance in the edge feature does not lead to significant performance gains. From this, it can be inferred that bond types are indicative of the distance between the atoms as well.
#### b.4.2 Results of ablation models on QM9 dataset:
In the main paper, we considered ablation models of FGNN based on conditioning of factor parameters. These models are CAT (central atom type), BT (bond type), CABT (central atom and bond type) and CABTA (central atom, bond type and neighbouring atom type). We reported results on Alchemy dataset where we found that conditioning on bond type was sufficient for good performance. To further verify the results, we evaluate the ablation models on QM9 dataset.
Results in Table 14 show that, unlike on the Alchemy dataset, the CABT model performs better in almost all the targets on the QM9 dataset. This perhaps suggests that in the QM9 dataset, higher-order constraints are more centered around the atom and are better captured by having separate parameters for different central atom types. Collectively, the ablation studies on the QM9 and Alchemy datasets suggest that conditioning the parameters on the neighbouring atom type (CABTA) is not helpful and only increases the parameter size. It is sufficient if the edge type and central atom type information is directly captured in the model.
|
2306.03822 | Swing contract pricing: with and without Neural Networks | We propose two parametric approaches to evaluate swing contracts with firm
constraints. Our objective is to define approximations for the optimal control,
which represents the amounts of energy purchased throughout the contract. The
first approach involves approximating the optimal control by means of an
explicit parametric function, where the parameters are determined using
stochastic gradient descent based algorithms. The second approach builds on the
first one, where we replace parameters in the first approach by the output of a
neural network. Our numerical experiments demonstrate that by using Langevin
based algorithms, both parameterizations provide, in a short computation time,
better prices compared to state-of-the-art methods. | Vincent Lemaire, Gilles Pagès, Christian Yeo | 2023-06-06T16:09:16Z | http://arxiv.org/abs/2306.03822v4 | Swing Contract Pricing: A Parametric Approach with Adjoint Automatic Differentiation and Neural Networks
###### Abstract
We propose two parametric approaches to price swing contracts with firm constraints. Our objective is to create approximations for the optimal control, which represents the amounts of energy purchased throughout the contract. The first approach involves explicitly defining a parametric function to model the optimal control, with the parameters determined using stochastic gradient descent-based algorithms. The second approach builds on the first one, replacing the parameters with the outputs of neural networks. Our numerical experiments demonstrate that by using Langevin-based algorithms, both parameterizations provide, in a short computation time, better prices compared to state-of-the-art methods (like the one given by Longstaff and Schwartz).
_Keywords - Swing contracts, stochastic control, stochastic approximation, neural network, Langevin dynamics, greeks._
## Introduction
With the energy market becoming increasingly deregulated, various derivative products have emerged, offering flexibility in delivery dates and purchased energy amounts. "Swing" contracts, also known as "Take-or-Pay" contracts (see [38] for more details), are among the most widely traded contracts in gas and power markets. These contracts allow their holders to purchase amounts of energy on fixed exercise dates, subject to certain constraints. There exist two types of constraints. In the firm constraints setting, the contract holder is not allowed to violate the constraints. Besides this case, there exists an alternative setting where the holder can violate them but will face a penalty proportional to the default. Our paper focuses on the valuation of swing contracts in the firm constraints case.
The valuation of swing contracts is a more challenging task than that of classic American-style contracts [9, 19, 27, 36], mainly due to the presence of constraints related to time and volume. From a probabilistic standpoint, the valuation of swing contracts is related to a Stochastic Optimal Control (_SOC_) problem. Here, the control represents the vector of amounts of energy that can be purchased at each exercise date. To solve this SOC problem, two primary groups of methods have been developed.
The first group concerns methods that are based on the so called "Backward Dynamic Programming Principle" (BDPP). In this group, the price of the swing contract can be written as a solution of a dynamic programming equation (see [1, 2, 3, 20, 24, 38]). More precisely, at each exercise date, given
the underlying price and the cumulative consumption, the contract value is given by the maximum, over the possible consumptions, of the immediate cash flow plus the (conditional) expected value of future cash flows. The latter quantity (which is in fact a conditional expectation) is often called the "continuation value", and its evaluation is the main difficulty when considering BDPP-based approaches. The most widely used method for computing the continuation value is that of Longstaff and Schwartz (see [27]), where the continuation value is approximated as an orthogonal projection onto a subspace spanned by a finite number of square-integrable random variables (see [3, 27]). Another method is based on the so-called optimal quantization (see [30]), where the underlying (continuous) random variable is approximated by its discrete version, which in turn is defined by a "projection" onto nearest neighbors or Voronoi cells (see [2]). Thus the continuation value reduces to a conditional expectation of a discrete random variable. One of the major problems with BDPP-based approaches is the following. To compute the value of the contract at each exercise date, the maximum is taken over a geometric interval. In practice the latter interval needs to be discretized, leading to a precision/complexity tradeoff. Indeed, achieving a high level of accuracy in pricing the contract requires a finer discretization of the geometric interval, which in turn increases computation time. Additionally, the Longstaff-Schwartz method faces a storage challenge, as regression coefficients for each simulation and admissible cumulative consumption must be stored at each exercise date to compute the continuation value. This often leads to a memory overflow when a large number of simulations is required to obtain a price with a tight confidence interval. Meanwhile, the optimal quantization method suffers from the "curse of dimensionality", as it is well known that the rate of convergence of this method is of order \(\mathcal{O}(N^{-\frac{1}{d}})\) (where \(N\) is the number of simulations and \(d\) is the problem dimension).
An alternative to BDPP-based methods is to consider the valuation of swing contracts as a global optimization problem. As mentioned, the valuation of a swing contract is equivalent to solving a SOC problem where the objective is to find a vector of purchase amounts that maximizes the expected cumulative cash flows up to the expiry of the contract. This leads to a stochastic optimization problem where the control is generally substituted by a parametric function, and optimal parameters are identified using Stochastic Gradient Descent (SGD) algorithms. The use of parametric functions to approximate the optimal control reduces the control problem to a parametric optimization problem. However, the success of such approaches is contingent upon finding a suitable parametric function, which often requires a solid understanding and intuition of the optimal control's behavior. Note that solving SOC problems with SGD-based algorithms has been considered in [17] for general SOC problems and in [3] for the swing contract case. To the best of our knowledge, there has been limited exploration of such approaches in the literature on pricing swing options. Thus, in this paper, we propose two parametric methods for global optimization and compare them with the Longstaff-Schwartz method. We optimize both parameterizations using two algorithms: Adaptive Moment Estimation (Adam) and (Preconditioned) Stochastic Gradient Langevin Dynamics (PSGLD). While Adam is widely used in stochastic approximation, PSGLD has demonstrated effectiveness in recent studies on Bayesian learning. Our results show that using Langevin-type algorithms can accelerate the training of our parameterizations and leads to better prices compared to state-of-the-art methods.
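To fix ideas on this family of methods, here is a deliberately simplified sketch of the "parametrize the control and run SGD" loop on a toy Gaussian price model. The price dynamics, the sigmoid squashing onto \([q_{\min},q_{\max}]\), and the omission of the global constraint are all illustrative assumptions and not the schemes developed in this paper.

```python
import torch

n, q_min, q_max, K = 12, 0.0, 1.0, 20.0
theta = torch.zeros(n, requires_grad=True)              # one parameter per exercise date
opt = torch.optim.Adam([theta], lr=0.05)

for step in range(500):
    drift = 0.1 * torch.arange(1, n + 1)                # upward-trending toy prices
    S = K + drift + torch.randn(4096, n).cumsum(dim=1)  # simulated price paths (toy model)
    q = q_min + (q_max - q_min) * torch.sigmoid(theta)  # local volume bounds hold by construction
    loss = -((S - K) * q).sum(dim=1).mean()             # maximize expected cumulative cash flow
    opt.zero_grad(); loss.backward(); opt.step()

price_estimate = -loss.item()                           # Monte Carlo estimate at the last iterate
```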
Our paper is organized as follows. In Section 1 we describe swing contracts and recall the pricing framework. In Section 2 we present our contribution by proposing two numerical methods for the pricing of swing contracts. In Section 3 we present generalities about stochastic approximation and study two optimization algorithms used to optimize our numerical solutions. Finally, in Section 4 we present additional numerical experiments demonstrating the effectiveness of our methods.
## 1 On swing contracts
In this section, we provide an overview of swing contracts and explain how to build the swing volume grid that defines the space of volume constraints. In a firm-constraints setting, this space is a critical component of swing contract pricing, as will be discussed towards the end of this section.
### Description
A swing option is a commonly encountered derivative in energy (gas and power) markets which allows its holder to buy amounts of energy \(q_{k}\) at times \(t_{k}\), \(k=0,...,n-1\) (called exercise dates) until the contract maturity \(t_{n}=T\). At each exercise date \(t_{k}\), the purchase price (or strike price of the contract) is denoted \(K_{k}\) and can be constant (i.e., \(K_{k}=K,k=0,...,n-1\)) or indexed on a formula. In this paper, we focus on the fixed strike price case, but the case of an indexed strike can be treated likewise. In the latter case, the strike at time \(t_{k}\) is often calculated as the average of the underlying prices over a period which has already elapsed. Therefore the indexed strike price case only adds computational complexity rather than a theoretical one.
One important feature of swing options is volume constraints. A swing option gives its holder flexibility in the amount of energy they are allowed to purchase, and this flexibility is subject to the following (firm) constraints:
* **Local constraints**: at each exercise date \(t_{k}\), the holder of the swing contract is allowed to buy at least \(q_{\min}\) and at most \(q_{\max}\) i.e, \[q_{\min}\leq q_{k}\leq q_{\max},\ \ \ \ 0\leq k\leq n-1.\] (1.1)
* **Global constraints**: the cumulative consumption up to the maturity of the contract must be not lower than \(Q_{\min}\) and not greater than \(Q_{\max}\) i.e, \[Q_{n}=\sum_{k=0}^{n-1}q_{k}\in[Q_{\min},Q_{\max}],\ \ \ \text{with}\ \ Q_{0}=0\ \text{and}\ 0\leq Q_{\min}\leq Q_{\max}\leq+\infty.\] (1.2)
Note that in this paper we only consider the **firm constraints** case, which means that the holder of the contract is not allowed to violate the constraints. This case is often relaxed in the literature by adding a penalty term whenever the global constraints are violated (see [2, 3]). In the penalty setting, the existence of an optimal Markov consumption has been proved in [3] using a smooth penalty function. A similar result has been established in [1] for the firm constraints case.
Let us now focus on the firm constraints case, in which one of the main components is the physical space, that is, the area of possible actions taking into account the volume constraints.
### Physical space
At each exercise date, through the firm constraints, the achievable cumulative volumes are delimited by boundaries represented by the functions \(t_{k}\mapsto Q^{down}(t_{k})\) and \(t_{k}\mapsto Q^{up}(t_{k})\)
\[\left\{\begin{array}{l}Q^{down}(t_{0})=0\\ Q^{down}(t_{k})=\max\left(0,Q_{\min}-(n-k)\cdot q_{\max}\right)\ \,k\in\{1,...,n-1\}\\ Q^{down}(t_{n})=Q_{\min}\end{array}\right. \tag{1.3}\]
\[\left\{\begin{array}{l}Q^{up}(t_{0})=0\\ Q^{up}(t_{k})=\min\big{(}k\cdot q_{\max},Q_{\max}\big{)}\ \,k\in\{1,...,n-1\}\\ Q^{up}(t_{n})=Q_{\max}\end{array}\right. \tag{1.4}\]
where \(Q^{down}(t_{k})\) and \(Q^{up}(t_{k})\) denote respectively the lower and the upper bound of the cumulative consumption at time \(t_{k}\). These boundaries delimit the physical space of the swing contract, as drawn in Figure 1, which represents, at each exercise date, the range of attainable cumulative consumptions.
In Figure 1 we assumed that \(q_{\min}=0\). This assumption can be made without loss of generality because, as shown in [2], the general case can always be reduced to the case where \(q_{\min}=0\) and \(q_{\max}=1\) (see Appendix A for the proof). Besides, if at an exercise date \(t_{k}\) we have bought the amounts \(q_{0},...,q_{k-1}\), leading to a cumulative consumption \(Q_{k}=\sum_{i=0}^{k-1}q_{i}\), then due to the local constraints and the cumulative consumption boundaries, the actual range of attainable cumulative consumptions at time \(t_{k+1}\) is the following
\[Q_{k}+q_{k}\in\Big{[}\underbrace{\max\big{(}Q^{down}(t_{k+1}),Q_{k}+q_{\min}\big{)}}_{:=L_{k+1}(Q_{k})},\;\underbrace{\min\big{(}Q^{up}(t_{k+1}),Q_{k}+q_{\max}\big{)}}_{:=U_{k+1}(Q_{k})}\Big{]}. \tag{1.5}\]
That is, starting with a cumulative consumption \(Q_{k}\), at time \(t_{k}\), the range of admissible volume is
\[q_{k}\in\Big{[}\underbrace{L_{k+1}(Q_{k})-Q_{k}}_{:=A_{k}^{-}(Q_{k})},\,\, \underbrace{U_{k+1}(Q_{k})-Q_{k}}_{:=A_{k}^{+}(Q_{k})}\Big{]}. \tag{1.6}\]
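For concreteness, the boundaries (1.3)-(1.4) and the admissible range (1.5)-(1.6) are straightforward to implement. The following minimal NumPy sketch (function names are ours, not from any library) illustrates them on the case 1 setting introduced in section 1.3.

```python
import numpy as np

def volume_boundaries(n, q_max, Q_min, Q_max):
    """Cumulative-consumption boundaries Q^down(t_k), Q^up(t_k) of (1.3)-(1.4)."""
    k = np.arange(n + 1)
    Q_down = np.maximum(0.0, Q_min - (n - k) * q_max)
    Q_up = np.minimum(k * q_max, Q_max)
    # endpoint conventions of (1.3)-(1.4)
    Q_down[0], Q_down[n] = 0.0, Q_min
    Q_up[0], Q_up[n] = 0.0, Q_max
    return Q_down, Q_up

def admissible_range(k, Q_k, Q_down, Q_up, q_min, q_max):
    """Admissible volume interval [A_k^-(Q_k), A_k^+(Q_k)] of (1.5)-(1.6)."""
    L = max(Q_down[k + 1], Q_k + q_min)  # L_{k+1}(Q_k)
    U = min(Q_up[k + 1], Q_k + q_max)    # U_{k+1}(Q_k)
    return L - Q_k, U - Q_k

# Case 1 setting of section 1.3: n = 31, q_max = 6, Q_min = 140, Q_max = 200
Q_down, Q_up = volume_boundaries(31, 6.0, 140.0, 200.0)
print(admissible_range(0, 0.0, Q_down, Q_up, 0.0, 6.0))  # -> (0.0, 6.0)
```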
With these basic building blocks, we can now set the theoretical framework for the pricing of swing contracts.
### Pricing and sensitivity calculus
Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\},\mathbb{P})\) be a filtered probability space. The price at time \(t\) of the forward contract delivered at the maturity \(T\) is denoted by \(F_{t,T}\). In this paper (as in [2, 3]) we consider contracts on the spot price (\(S_{t}=F_{t,t}\)) even if, in practice, the spot is not a tradable instrument on the market. Instead, the most common gas contract is the day-ahead forward contract, whose price is \(F_{t,t+1}\); this case can be treated in the same way.
The decision process \((q_{k})_{0\leq k\leq n-1}\) is defined on the same probability space and is assumed to be \(\mathcal{F}_{t_{k}}^{S}\)-adapted. At each exercise date \(t_{k}\), by buying a volume \(q_{k}\), the holder of the contract makes a profit (or loss)
\[\psi_{k}\left(q_{k},S_{t_{k}}\right):=q_{k}\cdot\left(S_{t_{k}}-K\right). \tag{1.7}\]
Let \(Q\in\mathbb{R}_{+}\). Given a cumulative consumption \(Q\), an admissible strategy at time \(t_{k}\) is a vector \((q_{k},\ldots,q_{n-1})\) lying within the following set
\[\mathcal{A}_{k,Q}^{Q_{\min},Q_{\max}}=\left\{(q_{\ell})_{k\leq\ell\leq n-1},\;q_{\ell}:(\Omega,\mathcal{F}_{t_{\ell}}^{S},\mathbb{P})\mapsto\left[q_{\min},q_{\max}\right],\;\sum_{\ell=k}^{n-1}q_{\ell}\in\left[(Q_{\min}-Q)_{+},Q_{\max}-Q\right]\right\}.\]
Note that using equation (1.6), the preceding set reads,
\[\mathcal{A}_{k,Q}^{Q_{\min},Q_{\max}}=\left\{(q_{\ell})_{k\leq\ell\leq n-1},\;q_{\ell}:(\Omega,\mathcal{F}_{t_{\ell}}^{S},\mathbb{P})\mapsto\left[A_{\ell}^{-}(Q_{\ell}),A_{\ell}^{+}(Q_{\ell})\right]\;\text{where}\;Q_{\ell}=Q+\sum_{i=k}^{\ell-1}q_{i}\right\}. \tag{1.8}\]
with the convention \(\sum_{i=k}^{k-1}q_{i}=0\). Then for every nonnegative \(\mathcal{F}_{t_{k-1}}^{S}\)-measurable random variable \(Q\), the price of the swing option at time \(t_{k}\), starting from a cumulative consumption \(Q\), is given by
\[P_{k}\left(S_{t_{k}},Q\right)=\operatorname*{ess\,sup}_{(q_{\ell})_{k\leq\ell \leq n-1}\in\mathcal{A}_{k,Q}^{Q_{\min},Q_{\max}}}\;\mathbb{E}\left(\sum_{\ell =k}^{n-1}e^{-r_{\ell}(t_{\ell}-t_{k})}\psi_{\ell}\left(q_{\ell},S_{t_{\ell}} \right)|\mathcal{F}_{t_{k}}^{S}\right), \tag{1.9}\]
where the expectation is taken under the risk-neutral probability and \(r_{\ell}\) are interest rates over the period \([t_{0},t_{n-1}]\) that we will assume to be zero throughout this paper. Then the price of the swing contract is given by (\(S_{t_{0}}\) is assumed to be deterministic)
\[P_{0}:=P_{0}\big{(}S_{t_{0}},0\big{)}=\sup_{(q_{\ell})_{0\leq\ell\leq n-1}\in \mathcal{A}_{0,0}^{Q_{\min},Q_{\max}}}\;\mathcal{J}\big{(}q_{0},\ldots,q_{n-1} \big{)}. \tag{1.10}\]
where, given an admissible strategy \((q_{0},\ldots,q_{n-1})\), the reward function \(\mathcal{J}\) is defined as the expected value of cumulative future cash flows up to the expiry
\[\mathcal{J}\big{(}q_{0},\ldots,q_{n-1}\big{)}:=\mathbb{E}\left(\sum_{\ell=0}^ {n-1}\psi_{\ell}\big{(}q_{\ell},S_{t_{\ell}}\big{)}\right). \tag{1.11}\]
The latter problem appears to be a constrained stochastic control problem in which the aim is to find an admissible strategy that maximizes the reward function \(\mathcal{J}\). As mentioned in the introduction, there exist two groups of pricing methods in the literature to solve this optimization problem. The first group is based on the "backward dynamic programming principle" and the second on a global optimization approach. In this paper, we propose two numerical solutions to problem (1.10) based on the latter group.
It is important to note the "bang-bang" feature, which implies that, under certain conditions, at each exercise date the optimal consumption is given by one of two values: \(A_{k}^{-}\) or \(A_{k}^{+}\) (defined in equation (1.6)). This feature has been proven for the case of firm constraints, when the global constraints are whole numbers, by Pages et al. [1]. For the penalty case, a proof can be found in Gobet et al. [3]. The "bang-bang" feature is particularly valuable since it allows one to significantly reduce the computation time.
In this paper and unless otherwise stated, we use a one-factor model as in [3, 20]. That is,
\[\frac{dF_{t,T}}{F_{t,T}}=\sigma e^{-\alpha(T-t)}dW_{t},\quad t\leq T \tag{1.12}\]
where \(W\) is a standard Brownian motion. As mentioned before, we deal with the spot price, which is obtained by a straightforward application of Itô's formula:
\[S_{t}=F_{0,t}\cdot\exp\big{(}\sigma X_{t}-\frac{1}{2}\lambda_{t}^{2}\big{)}, \quad X_{t}=\int_{0}^{t}e^{-\alpha(t-s)}\,\mathrm{d}W_{s}\ \ \ \text{and}\ \ \lambda_{t}^{2}=\frac{\sigma^{2}}{2\alpha}\big{(}1-e^{-2\alpha t}\big{)}. \tag{1.13}\]
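Since \(X_{t}\) is an Ornstein-Uhlenbeck-type integral, the spot can be simulated exactly along a date grid via the recursion \(X_{t_{k+1}}=e^{-\alpha\Delta}X_{t_{k}}+\varepsilon_{k}\) with \(\varepsilon_{k}\sim\mathcal{N}\big(0,(1-e^{-2\alpha\Delta})/(2\alpha)\big)\). A minimal sketch follows; the flat forward curve and the daily date grid are illustrative assumptions, not part of the paper's specification.

```python
import numpy as np

def simulate_spot(F0, alpha, sigma, t, n_paths, rng):
    """Exact simulation of S_{t_k} under (1.12)-(1.13).
    F0: forward prices F_{0,t_k} (scalar or length-n array); t: exercise dates.
    Returns an (n_paths, len(t)) array of spot prices."""
    n = len(t)
    X = np.zeros((n_paths, n))
    x, t_prev = np.zeros(n_paths), 0.0
    for k in range(n):
        dt = t[k] - t_prev
        std = np.sqrt((1.0 - np.exp(-2.0 * alpha * dt)) / (2.0 * alpha))
        x = np.exp(-alpha * dt) * x + std * rng.standard_normal(n_paths)
        X[:, k], t_prev = x, t[k]
    lam2 = sigma**2 * (1.0 - np.exp(-2.0 * alpha * t)) / (2.0 * alpha)  # lambda_t^2
    return F0 * np.exp(sigma * X - 0.5 * lam2)

rng = np.random.default_rng(0)
t = np.arange(1, 32) / 365.0                        # 31 daily exercise dates (assumed)
S = simulate_spot(20.0, 4.0, 0.7, t, 10_000, rng)   # F_{0,t} = 20 (flat curve, assumed)
```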
Along with this model, we will use two settings (presented below). In each case we set \(\alpha=4,\sigma=0.7\).
**Case 1**
\[31\text{ exercise dates}\quad\ q_{\min}=0\quad q_{\max}=6\quad Q_{\min}=140\quad Q _{\max}=200.\]
**Case 2** (as in [2, 3])
\[365\text{ exercise dates}\quad\ q_{\min}=0\quad q_{\max}=6\quad Q_{\min}=1300 \quad Q_{\max}=1900.\]
The computer used has the following characteristics: _Processor: Intel(R) Core(TM) i7-1185G7 @ 3.00 GHz, 32 GB of RAM, Microsoft Windows 10 Enterprise_. The deep learning part has been implemented using the PyTorch toolbox. The GPU device used is an Nvidia A100-PCIE 40GB.
It is worth noting that for practitioners, the prices of derivatives are closely linked to their sensitivities with respect to market data. These sensitivities are essential for hedging purposes, as they indicate how the price of a derivative product changes with the market. Computing sensitivities involves calculating derivatives, and in our case, we need to differentiate a price with respect to some parameters, where the price is a solution to the stochastic control problem in equation (1.10). To achieve this, we rely on the so-called "envelope theorem."
Let \(f(x,\alpha)\) and \(g_{j}(x,\alpha),j=1,2,\ldots,m\) be real-valued continuously differentiable functions on \(\mathbb{R}^{n+\ell}\), where \(x\in\mathbb{R}^{n}\) are some variables and \(\alpha\in\mathbb{R}^{\ell}\) are parameters, and consider the constrained optimization problem
\[\left\{\begin{array}{l}\max\limits_{x}f(x,\alpha)\\ \text{subject to}\ \ g_{j}(x,\alpha)\geq 0,\quad 1\leq j\leq m\end{array}\right.\]
We introduce the Lagrangian function,
\[\mathcal{L}\left(x,\lambda,\alpha\right)=f(x,\alpha)+\langle\lambda,g(x,\alpha)\rangle\]
where \(\lambda\in\mathbb{R}^{m}\) is the Lagrange multipliers, \(\langle\cdot,\cdot\rangle\) is the Euclidean inner-product and \(g=(g_{1},\ldots,g_{m})^{\top}\). Then we define the value function \(V(\alpha)=f(x^{*}(\alpha),\alpha)\) where \(x^{*}(\alpha)\) is a solution that maximizes the function \(f(\cdot,\alpha)\). The following theorem gives the derivative of the value function \(V\) in case it is differentiable.
**Theorem 1** (Envelope theorem): _Assume that \(V\) and \(\mathcal{L}\) are continuously differentiable. Then,_
\[\frac{\partial V(\alpha)}{\partial\alpha_{k}}=\frac{\partial\mathcal{L}\left( x^{*}(\alpha),\lambda^{*}(\alpha),\alpha\right)}{\partial\alpha_{k}}\quad\quad k =1,\ldots,\ell\]
_where \(\frac{\partial\mathcal{L}}{\partial\alpha_{k}}=\frac{\partial f}{\partial \alpha_{k}}+\langle\lambda,\frac{\partial g}{\partial\alpha_{k}}\rangle\)._
The envelope theorem states that, under some regularity conditions, in a SOC problem such as our problem (1.10), the derivative of the optimal objective function \(\mathcal{J}\) is obtained by differentiating the cash flows evaluated along the optimal control. In our model (1.12), we deduce the following proposition, which is a corollary of Theorem 1.
**Proposition 1**: _Let \(\left(q_{k}^{*}\right)_{0\leq k\leq n-1}\) be a solution of the problem (1.10). Assume that the function_
\[\left(F_{0,t_{k}}\right)_{0\leq k\leq n-1}\mapsto\mathbb{E}\left(\sum_{k=0}^{n- 1}q_{k}^{*}\cdot\left(F_{0,t_{k}}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{ t_{k}}}-K\right)\right)\]
_is continuously differentiable. Let \(P_{0}\) be the price of the swing contract (1.10). Note that \(P_{0}\) is a function of \(\left(S_{t_{k}}\right)_{0\leq k\leq n-1}\), which in turn depends on \(\left(F_{0,t_{k}}\right)_{0\leq k\leq n-1}\) in the model (1.12). Then for all \(k=0,\ldots,n-1\), the delta (sensitivity of the swing price with respect to the initial forward price) is given by_
\[\frac{\partial P_{0}}{\partial F_{0,t_{k}}}=\mathbb{E}\left(q_{k}^{*}\cdot e^ {\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}\right).\]
Proof We define the following functions:
\[f\left((q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0\leq k\leq n-1} \right):=\mathbb{E}\left(\sum_{k=0}^{n-1}q_{k}\cdot\left(F_{0,t_{k}}\cdot e^{ \sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}-K\right)\right)\] \[g_{1}\left((q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0\leq k\leq n -1}\right):=\sum_{k=0}^{n-1}q_{k}-Q_{\min}\] \[g_{2}\left((q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0\leq k\leq n -1}\right):=Q_{\max}-\sum_{k=0}^{n-1}q_{k}\]
and for all \(k=0,\ldots,n-1\):
\[g_{2k+3}\left((q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0\leq k\leq n-1} \right):=q_{k}-q_{\min}\]
\[g_{2k+4}\left((q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0\leq k\leq n-1} \right):=q_{\max}-q_{k}\]
Then it suffices to prove that the functions \(f,g_{1},g_{2},\ldots,g_{2n+2}\) are continuously differentiable in order to use the envelope theorem. This clearly holds for \(g_{1},g_{2},\ldots,g_{2n+2}\). It remains to prove it for the function \(f\), which reduces to showing that \(f\) is continuously differentiable in each of its components. For any \(k=0,\ldots,n-1\), the random variable \(q_{k}\cdot\left(F_{0,t_{k}}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}-K\right)\) is integrable since
\[\left|q_{k}\cdot\left(F_{0,t_{k}}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_ {t_{k}}}-K\right)\right|\leq q_{\max}\cdot\left(F_{0,t_{k}}\cdot e^{\sigma X_ {t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}+K\right)\in\mathbb{L}_{\mathbb{R}}^{1}( \mathbb{P}).\]
Moreover, the function \(F_{0,t_{k}}\mapsto q_{k}\cdot\left(F_{0,t_{k}}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}-K\right)\) is differentiable and its derivative does not depend on \(F_{0,t_{k}}\). Furthermore,
\[\left|\frac{\partial}{\partial F_{0,t_{k}}}\Big{(}q_{k}\cdot\left(F_{0,t_{k}} \cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}-K\right)\Big{)}\right|= \left|q_{k}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}\right|\leq q _{\max}\cdot e^{\sigma X_{t_{k}}}\in\mathbb{L}_{\mathbb{R}}^{1}(\mathbb{P}).\]
Then, by the Lebesgue dominated convergence theorem, which allows differentiation and integration to be interchanged, the function \(f\) is continuously differentiable in each \(F_{0,t_{k}}\). Likewise, one may show that for all \(k=0,\ldots,n-1\) the function \(f\) is continuously differentiable in \(q_{k}\), so the envelope theorem applies. We introduce the Lagrangian of problem (1.10):
\[\mathcal{L}\left(\left(q_{k}\right)_{0\leq k\leq n-1},\lambda, \left(F_{0,t_{k}}\right)_{0\leq k\leq n-1}\right) =\mathbb{E}\left(\sum_{k=0}^{n-1}q_{k}\cdot\left(F_{0,t_{k}}\cdot e ^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}-K\right)\right)+\] \[\langle\lambda,g\big{(}(q_{k})_{0\leq k\leq n-1},(F_{0,t_{k}})_{0 \leq k\leq n-1}\big{)}\rangle,\]
where \(g=\left(g_{1},\ldots,g_{2n+2}\right)^{\top}\) and \(\lambda\in\mathbb{R}^{2n+2}\). Then it follows from the envelope theorem that for any \(k=0,\ldots,n-1\):
\[\frac{\partial P_{0}}{\partial F_{0,t_{k}}}=\mathbb{E}\left(\sum_{j=0}^{n-1}q_{j}^{*}\cdot\frac{\partial}{\partial F_{0,t_{k}}}\left(F_{0,t_{j}}\cdot e^{\sigma X_{t_{j}}-\frac{1}{2}\Lambda_{t_{j}}}-K\right)\right)=\mathbb{E}\left(q_{k}^{*}\cdot e^{\sigma X_{t_{k}}-\frac{1}{2}\Lambda_{t_{k}}}\right).\]
This completes the proof.
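As an illustration, once an (approximately) optimal strategy is available, the delta of Proposition 1 is a plain Monte-Carlo average. A minimal sketch (reading \(\Lambda_{t_{k}}\) as \(\lambda_{t_{k}}^{2}\) from (1.13); function name is ours):

```python
import numpy as np

def delta_forward(q_star, X, sigma, lam2):
    """Monte-Carlo estimate of dP0/dF_{0,t_k} = E[q_k^* exp(sigma X_{t_k} - lam2_k/2)].
    q_star, X: (n_paths, n) arrays of optimal volumes and factor values;
    lam2: length-n array of lambda_{t_k}^2. Returns the length-n delta vector."""
    return np.mean(q_star * np.exp(sigma * X - 0.5 * lam2), axis=0)
```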
We have outlined the theoretical framework for pricing swing contracts and computing their associated sensitivities. To obtain practical numerical solutions, we introduce parametric methods in this paper. Specifically, we replace the optimal control variable \(q_{k}\) at a time \(t_{k}\) with a parametric function \(q_{k}\big{(}I_{k};\theta\big{)}\), where \(I_{k}\) represents information needed to determine the volume to purchase, and \(\theta\) is a parameter belonging to a parameter space \(\Theta\) that depends on the chosen parameterization. Thus, the optimization problem in equation (1.10) can be expressed as,
\[\sup_{\theta\in\Theta}\mathcal{J}\Big{(}q_{0}(I_{0};\theta),\ldots,q_{n-1}(I_ {n-1};\theta)\Big{)}. \tag{1.14}\]
Note that the parameter space has to be designed to guarantee that, almost surely, for \(\theta\in\Theta\), \(\big{(}q_{0}(I_{0};\theta),\ldots,q_{n-1}(I_{n-1};\theta)\big{)}\) is an admissible strategy. That is, it lies within \(\mathcal{A}_{0,0}^{Q_{\min},Q_{\max}}\) which is defined through (1.8) and where the range of admissible volumes at each time \(t_{k}\) depends on purchased volumes up to that time.
## 2 Swing pricing: A global optimization approach
In this paper, we propose two parametric methods for approximating the optimal control (i.e., the volume to purchase at each exercise date) in a swing pricing context. It is worth noting that the use of parametric exercise strategies for pricing swing contracts is not a new approach and has demonstrated its advantages when compared to classical methods such as the Longstaff-Schwartz method (see [3, 27]). For instance, Gobet et al. [3] developed two parametric methods, in the context of swing contracts with penalties, which provide satisfactory prices when compared to their benchmark (obtained using the forest-of-trees method [24]). The first method is based on a neural network that approximates a \([0,1]\)-valued function \(f\), which is then used to determine the volume to purchase in the range \([q_{\min},q_{\max}]\) via the simple transformation \(q_{\min}+(q_{\max}-q_{\min})f\) (recall that they studied a swing contract with penalties, so the range of admissible volumes is not constrained through the amounts \(A^{-}\) and \(A^{+}\) of (1.6) as in our case). In addition to the previous parameterization, the authors proposed another one based on the heuristic that a
higher gas price usually results in a higher consumption at a fixed strike price (though this may not always be true due to global constraints). This heuristic suggests the existence of a threshold beyond which the highest possible volume should be purchased. We studied this heuristic using a neural network to replace the volume \(q_{k}\) with a nonlinear parameterization \(q_{k}:=q(t_{k},S_{t_{k}}-K;\theta)\), where \(\theta\) are the parameters of the neural network trained to solve the swing pricing problem (1.14) (see section 3 for details on the training process). The results showed that when the payoff \(S_{t_{k}}-K\) is below a certain threshold \(\beta_{1}^{k}\), the optimal consumption tends to be \(A_{k}^{-}\) (as defined in equation (1.6)), while for payoffs above a certain threshold \(\beta_{2}^{k}(>\beta_{1}^{k})\), the optimal consumption tends to be \(A_{k}^{+}\) (also defined in equation (1.6)). Therefore, the optimal consumption profile exhibits a similar behavior as that depicted in Figure 2.
Based on this observation, we aim to find a parametric function that reproduces the consumption profile shown in Figure 2, which depends on two thresholds \(\beta_{1}\) and \(\beta_{2}\) (that are assumed to be constant over time in a first step). To achieve this, we define a parametric "decision" function
\[I_{k}\mapsto f_{k}(I_{k};\theta):=f(I_{k};\theta)=\mathbf{1}_{\{I_{k}>\beta_{ 2}\}}+\frac{I_{k}-\beta_{1}}{\beta_{2}-\beta_{1}}\mathbf{1}_{\{\beta_{1}\leq I _{k}\leq\beta_{2}\}} \tag{2.1}\]
where \(\mathbf{1}\) denotes the indicator function, \(\theta=(\beta_{1},\beta_{2})^{\top}\in\mathbb{R}^{2}\) and \(I_{k}=S_{t_{k}}-K\) is the payoff. Note that the thresholds and therefore the parameterization do not depend on time. The preceding "decision" function, once defined, allows to choose an admissible control using the following transformation
\[q_{k}(I_{k};\theta)=A_{k}^{-}(Q_{k}^{\theta})+\left(A_{k}^{+}(Q_{k}^{\theta}) -A_{k}^{-}(Q_{k}^{\theta})\right)\cdot f_{k}(I_{k};\theta) \tag{2.2}\]
with \(Q_{0}^{\theta}=0\) and \(Q_{k}^{\theta}=\sum_{i=0}^{k-1}q_{i}(I_{i};\theta)\) for all \(1\leq k\leq n-1\). We then aim at solving the parametric optimization problem (1.14) with the preceding choice of parameterization (2.2) and with \(\Theta=\{(x,y)\in\mathbb{R}^{2};x<y\}\). To achieve this, we look for values of the thresholds \(\theta=(\beta_{1},\beta_{2})\) which maximize the resulting reward function \(\mathcal{J}\) (using the strategy given by (2.1)). To implement this approach, we simulate \(10^{5}\) independent realizations of the payoff \(I_{k}=S_{t_{k}}-K\) and compute the Monte-Carlo price given by strategy (2.2). The values of \(\beta_{1},\beta_{2}\) used to compute this maximum are 50 evenly spaced points within the interval \([-10,10]\), for the contract setting of case 2 (as described in section 1.3).
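A minimal sketch of this procedure follows, assuming a `payoff` array of simulated \(S_{t_{k}}-K\) values of shape (n_paths, n) (e.g., produced as in the simulation sketch above) and \(q_{\min}=0\); function names are ours.

```python
import numpy as np

def rollout_price(payoff, beta1, beta2, q_max, Q_min, Q_max):
    """Monte-Carlo price of strategy (2.1)-(2.2); payoff has shape (n_paths, n)."""
    n_paths, n = payoff.shape
    Q, total = np.zeros(n_paths), np.zeros(n_paths)
    for k in range(n):
        I = payoff[:, k]
        f = np.clip((I - beta1) / (beta2 - beta1), 0.0, 1.0)             # decision (2.1)
        lo = np.maximum(max(0.0, Q_min - (n - 1 - k) * q_max) - Q, 0.0)  # A_k^-(Q_k)
        hi = np.minimum(Q_max - Q, q_max)                                # A_k^+(Q_k)
        q = lo + (hi - lo) * f                                           # transformation (2.2)
        total += q * I
        Q += q
    return total.mean()

# exhaustive search over 50 evenly spaced thresholds in [-10, 10] (case 2 setting)
betas = np.linspace(-10.0, 10.0, 50)
best = max((rollout_price(payoff, b1, b2, 6.0, 1300.0, 1900.0), b1, b2)
           for i, b1 in enumerate(betas) for b2 in betas[i + 1:])
```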
The optimal values of \(\beta_{1}\) and \(\beta_{2}\), leading to the maximum price of 2611.7, are \(-2.24\) and \(-1.02\), respectively. This simple parametric strategy, which has no flexibility (the strategy is the same for all exercise dates since the thresholds are constant) and does not depend on the cumulative volume (which is obviously essential when deciding which volume to buy at each exercise date), gives a satisfying price when compared to the more elaborate methods in [3]1. Following the promising results obtained by this straightforward strategy, we introduce two improvements, leading to two different parameterizations.

Figure 2: Optimal consumption profile as a function of the payoff.
Footnote 1: Note the relative error between this price and the best of theirs is roughly \(2\%\) and they studied the penalty case whose theoretical price is greater than that of the firm constraints case we study in this paper.
* **Explicit Payoff-Volume parameterization**: we suppose the optimal consumption behaves like the profile in Figure 2 and define a (smooth) parametric function reproducing this profile. The idea is then to use stochastic-gradient-descent-based optimization algorithms to find the best parameters. In this parameterization, we allow the strategy to depend not only on time but also on the cumulative consumption, which is useful and gives flexibility to our strategy.
* **Neural network parameterization**: this is a variant of the preceding parameterization, in which the coefficients defining the first parameterization are replaced by the output of a neural network. Once the neural network is trained, we obtain the parameters and define the same strategy.
To illustrate both points of view, we use the settings of case 1 in the following sections.
### Explicit Payoff-Volume parameterization (_PV strat_)
We aim to find a parameterization that replicates the target profile in Figure 2 and that is sufficiently regular to ensure that the resulting reward function \(\mathcal{J}\) is differentiable, allowing optimization with stochastic gradient descent methods. To achieve this, we select the logistic function \(\sigma(x):=\frac{1}{1+e^{-x}}\), which is infinitely differentiable on \(\mathbb{R}\) and has a shape close enough to that of the target profile. Additionally, to incorporate flexibility into our exercise strategy, we no longer assume that the parameters are constant over the exercise dates, and we take into account the dependence of the strategy on the cumulative consumption. Hence, we propose the following parameterization:
\[f_{k}(I_{k};\theta)=\sigma\big{(}\langle\theta_{k},I_{k}\rangle\big{)}\qquad \text{with}\quad I_{k}=\big{(}S_{t_{k}}-K,M(Q_{k}),1\big{)}^{\top}\in\mathbb{R }^{3} \tag{2.3}\]
where for some vector \(\theta=(\theta_{1},\ldots,\theta_{n})\in\Theta=\mathbb{R}^{3n}\), \(\theta_{k}\in\mathbb{R}^{3}\) (\(1\leq k\leq n\)) denotes the subvector of \(\theta\) made of its \((3k-2)^{th}\), \((3k-1)^{th}\) and \((3k)^{th}\) components. Thus the vector \(\theta\) embeds the parameters defining the strategy for all exercise dates; in other words, each \(\mathbb{R}^{3}\)-valued component \(\theta_{k}\) contains the coefficients that drive the optimal decision of _PV strat_ at the corresponding time \(t_{k}\). The function \(M\) is chosen as the (normalized) margin or remaining purchasing capacity, defined by \(M(Q):=\frac{Q-Q_{\min}}{Q_{\max}-Q_{\min}}\); the importance of this choice is discussed in Remarks 1 and 2. In case \(Q_{\min}=Q_{\max}\), one may use \(M(Q)=\frac{Q-Q_{\min}}{Q_{\min}}\). The problem to solve is then the same as in (1.14), where the parameter space is \(\Theta=\mathbb{R}^{3n}\) and the control \(q_{k}(I_{k};\theta)\) in (2.2) is defined using the strategy (2.3).
**Remark 1** (Importance of function _M_): At first glance, the function \(M\) may seem to be a simple normalization of the cumulative consumption. However, it should be noted that several other normalizations were tested and did not lead to a good strategy. We tested the following normalizations: \(M(Q_{k})=Q_{k}\), \(\frac{Q_{k}-Q_{\min}}{q_{\max}-q_{\min}}\), \(\frac{k\cdot q_{\max}-Q_{k}}{q_{\max}-q_{\min}}\) and \(\frac{Q_{k}-Q^{down}(t_{k+1})}{Q^{up}(t_{k+1})-Q^{down}(t_{k+1})}\).
**Remark 2** (Remaining capacity _Versus_ Cumulative Consumption): We observed that the remaining purchasing capacity is a more crucial factor than the current cumulative consumption when determining
the volume to buy at each exercise date. This contrasts with the neural network approach tested in [3], which uses the current cumulative consumption. When using \(M(Q)=Q\), our algorithm tended to purchase the maximum possible volume at the contract's outset, as long as the payoff was positive. This behavior suggests that the algorithm did not take into account the fact that purchasing capacity diminishes as the contract approaches its expiration, due to global constraints.
Remarks 1 and 2 explain why the function \(M\) yields desirable results. In addition, this choice has a rational interpretation in terms of the optimal control's behavior (see Appendix D).
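In code, the decision (2.3) is a logistic score of the payoff, the margin and a bias. A minimal sketch (the admissible volume then follows from (2.2), as in the rollout sketch above):

```python
import numpy as np

def pv_strat_decision(theta_k, payoff_k, Q_k, Q_min, Q_max):
    """PV strat decision (2.3): sigma(<theta_k, I_k>) with I_k = (S_{t_k}-K, M(Q_k), 1).
    theta_k: length-3 coefficient vector for date t_k; payoff_k, Q_k: (n_paths,) arrays."""
    M = (Q_k - Q_min) / (Q_max - Q_min)          # normalized margin M(Q_k)
    z = theta_k[0] * payoff_k + theta_k[1] * M + theta_k[2]
    return 1.0 / (1.0 + np.exp(-z))              # logistic function
```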
### Neural Network parameterization (_NN strat_)
Our second parameterization is based on neural networks. The goal of a neural network is to approximate some mapping \(x\mapsto\Phi^{*}(x)\) by a parametric one \(x\mapsto\Phi(x;\theta)\), where \(\theta\) is the parameter (or weight) vector of the neural network, which has to be optimized in order to give a "good" approximation. A neural network can approximate a wide class of complicated (linear or non-linear) relationships between inputs and outputs (for more details, see the Universal Approximation Theorem [18]) by composing linear functions and non-linear thresholds. More precisely, a neural network is made of nodes connected to one another, where a column of nodes forms a layer (when there is more than one hidden layer in the architecture, we speak of a deep neural network). The outermost layers (see Figure 3) are the input and output layers, and all those in between are called hidden layers. The connection between the input and output layers through the hidden layers is made by means of linear functions and non-linear activation functions:
* **Linear functions**: a node is connected to another with an associated weight (\(w\)) and bias (\(b\)), so that it receives a weighted value (\(wx+b\)) from a node in the previous layer; \(\theta=(w,b)\) forms the parameters of the neural network.
* **Non-linear functions**: each node contains an activation function that activates the node (so the input can pass through the node) when the input of the node is above a certain threshold.
Training a neural network is performed through two steps: **forward propagation** and **backward propagation**. Forward propagation computes the predicted output \(\hat{y}\) from the input \(x\) layer by layer. More formally, it comes down to the evaluation of the following function:
\[x\in\mathbb{R}^{d}\mapsto\Phi(x;\theta):=a_{I}^{\theta_{I}}\circ\phi_{q_{I-1}} \circ a_{I-1}^{\theta_{I-1}}\circ\ldots\circ\phi_{q_{1}}\circ a_{1}^{\theta_{1 }}(x)\in\mathbb{R}^{\ell} \tag{2.4}\]
where
\(\rhd I,q_{I},...,q_{1}\) are positive integers specifying the depth of the network and the number of nodes for each hidden layer.
\(\rhd a_{1}^{\theta_{1}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{q_{1}},\ldots,a_{ I-1}^{\theta_{I-1}}:\mathbb{R}^{q_{I-2}}\rightarrow\mathbb{R}^{q_{I-1}}\) and \(a_{I}^{\theta_{I}}:\mathbb{R}^{q_{I-1}}\rightarrow\mathbb{R}^{\ell}\) are affine functions. As explained above, theses functions are of the form: \(a(x)=Wx+b\) where \(W\) is a matrix of weights and \(b\) is a vector of bias.
\(\rhd\) For \(j\in\mathbb{N}\), \(\phi_{j}:\mathbb{R}^{j}\rightarrow\mathbb{R}^{j}\) are the activation functions, applied componentwise. In this paper we only use the ReLU (Rectified Linear Unit) function, i.e., \(x\mapsto\max(x,0)\).
In our case, the neural network is designed and trained to provide a "decision" function which maximizes the reward function \(\mathcal{J}\). To achieve this, we design an \(\mathbb{R}^{3}\)-valued neural network \(\Phi(\cdot;\theta)\) (thus \(\ell=3\), see Figure 3) of the form (2.4), which depends on the time \(t_{k}\), the payoff \(S_{t_{k}}-K\) and the margin \(M(Q_{k})\) defined as in (2.3). Thus \(d=3\), and the parameter \(\theta\) of the neural network \(\Phi(\cdot;\theta)\) lies within
\[\Theta=\mathbb{R}^{q_{1}}\times\mathbb{R}^{3\times q_{1}}\times \Big{(}\prod_{i=2}^{I-1}\mathbb{R}^{q_{i}}\times\mathbb{R}^{q_{i}\times q_{i-1 }}\Big{)}\times\mathbb{R}^{3}\times\mathbb{R}^{3\times q_{I-1}}. \tag{2.5}\]
As for _PV strat_, we define recursively the "decision" function by \(f_{0}(I_{0};\theta)=\sigma\Big{(}\langle\Phi\big{(}t_{0},S_{t_{0}}-K,M(0); \theta\big{)},I_{0}\rangle\Big{)}\) and for all \(1\leq k\leq n-1\)
\[f_{k}(I_{k};\theta)=\sigma\Big{(}\langle\Phi\big{(}t_{k},S_{t_{k}}-K,M(Q_{k}^{ \theta});\theta\big{)},I_{k}\rangle\Big{)}\quad\text{ with }\,Q_{k}^{\theta}=\sum_{i=0}^{k-1}q_{i}(I_{i};\theta) \tag{2.6}\]
where the information \(I_{k}\) is the same as in (2.3). Then the parametric optimization problem to solve is the same as defined in equation (1.14), with the preceding choice of \(\Theta\) given by equation (2.5), and where the control is defined using the strategy (2.6) combined with (2.2).
**Remark 3** (_PV strat Versus NN strat_): Note that in _NN strat_, the parameter \(\theta\in\Theta\) (as defined in (2.5)) replaces all the vectors \(\theta_{k}\in\mathbb{R}^{3}\) of equation (2.3) that drive the strategy in _PV strat_. Unlike the latter, in _NN strat_ the same set of parameters defines the strategy for all exercise dates, yet the strategy still differs across exercise dates since time is an input. In contrast, the size of the parameter vector in _PV strat_ increases linearly with the number of exercise dates. Additionally, in section 3.4, we will demonstrate numerically that even for a high number of exercise dates, a relatively small neural network architecture is still sufficient. This is one of the advantages of _NN strat_ over _PV strat_.
Our neural network parameterization is more robust than the one used by Gobet et al. [3]. Indeed, in their article they chose, for the decision function, a non-linear \([0,1]\)-valued parameterization. Ours is an improvement of theirs in the sense that we first impose a particular parametric shape; the parameters that drive this shape are then chosen as the output of a neural network. This approach not only provides flexibility to the strategy but also helps the algorithm "learn" the optimal control by imposing a specific shape.

Figure 3: Illustration of a (deep) neural network architecture.
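A minimal PyTorch sketch of this parameterization is given below. The class name is ours, and the architecture (2 hidden ReLU layers of 10 units each) matches the one used in the experiments reported later in the tables.

```python
import torch
import torch.nn as nn

class NNStrat(nn.Module):
    """Sketch of NN strat: Phi maps (t_k, S_{t_k}-K, M(Q_k)) to R^3 coefficients,
    and the decision (2.6) is sigmoid(<Phi(.), I_k>) with I_k = (S-K, M, 1)."""

    def __init__(self, hidden=10):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, t_k, payoff, margin):
        # all inputs: 1-D tensors of shape (n_paths,)
        coef = self.phi(torch.stack([t_k, payoff, margin], dim=1))   # (n_paths, 3)
        I = torch.stack([payoff, margin, torch.ones_like(payoff)], dim=1)
        return torch.sigmoid((coef * I).sum(dim=1))                  # decision f_k in (0, 1)
```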
All the strategies presented above require training to find the best parameters. To achieve this, we rely on stochastic approximation theory.
## 3 Stochastic optimization
In this section we consider an \(\mathbb{R}\)-valued function \(h:y\in\mathbb{R}^{q}\mapsto\mathbb{E}(H(y,Z))\), with \(H\) an \(\mathbb{R}\)-valued function defined on \(\mathbb{R}^{q}\times\mathbb{R}^{d}\) and \(Z\) a \(d\)-dimensional random vector. As a probabilistic extension of the classic Newton-Raphson procedure, stochastic approximation (see [23, 34]) suggests the following procedure to find zeros of \(h\):
\[\forall n\in\mathbb{N},\ \ \ y_{n+1}=y_{n}-\gamma_{n+1}\cdot h(y_{n}) \ \ \,0<\gamma_{n}\leq\gamma_{0}. \tag{3.1}\]
When \(h\) is the gradient of another function, i.e., \(h=\nabla V\), we speak of **stochastic gradient descent** (SGD), an extension of gradient descent (see [37]) to stochastic optimization. Hereafter we consider an SGD setting, i.e., \(h=\nabla V\). The sequence \((\gamma_{n})_{n\in\mathbb{N}}\) is called the step size or learning rate and has to be chosen carefully. Indeed, if the step is too large, the procedure (3.1) can overshoot and completely miss the global optimum of the function \(V\); if it is too small, the procedure may take a long time to converge, as it takes small steps towards the global minimum. Thus, to ensure convergence (which is widely discussed in the literature [5, 10, 35]), a classical requirement is the "decreasing step" assumption: the step sequence is non-increasing and satisfies the following conditions
\[\sum_{n\geq 0}\gamma_{n}=+\infty\ \ \text{and}\ \ \sum_{n\geq 0} \gamma_{n}^{2}<+\infty. \tag{3.2}\]
From a practical point of view, to implement iteration (3.1), one uses a mini-batch procedure. That is, rather than computing the gradient over a single observation (leading to one iteration of procedure (3.1)), we use small batches: we divide a sample of size \(M\) into \(B\) batches of size \(L:=M/B\) and perform \(B\) iterations of
\[y_{n+1}=y_{n}-\frac{\gamma_{n+1}}{L}\sum_{\ell=1}^{L}H(y_{n},Z_{n+1}^{b,[\ell ]}),\ \ 0\leq n\leq N. \tag{3.3}\]
where \(Z_{n+1}^{b,[\ell]}\) are independent copies of \(Z\) (the superscript \(b\) indexing the batch). This alternative increases the update frequency compared to vanilla SGD, which allows a more robust convergence, avoiding local minima/maxima. It should be noted that in the SGD framework, \(H\) often represents the gradient of a multivariate function, which makes it difficult or impossible to implement the SGD procedure by hand. To address this issue, we use a technique called "Adjoint Algorithmic Differentiation" (AAD), as described in [4, 32]. This technique involves a program that computes a value and automatically computes derivatives of that value by combining the derivatives of several simple arithmetic expressions.
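In practice this amounts to a standard autodiff training loop. The following PyTorch sketch uses hypothetical helpers: `strategy` is assumed to be an `nn.Module` mapping simulated payoffs to admissible volumes via (2.2), and `simulate_batch` returns simulated payoffs; a constant step is used for simplicity, whereas (3.2) calls for a decreasing one.

```python
import torch

def train(strategy, simulate_batch, n_iter=1000, batch_size=2**14, lr=0.1):
    """Mini-batch stochastic gradient ascent on the reward J of (1.11),
    implemented by minimizing -J with PyTorch autodiff (an instance of AAD)."""
    opt = torch.optim.SGD(strategy.parameters(), lr=lr)
    for _ in range(n_iter):
        payoff = simulate_batch(batch_size)        # (batch_size, n) of S_{t_k} - K
        q = strategy(payoff)                       # (batch_size, n) admissible volumes
        loss = -(q * payoff).sum(dim=1).mean()     # minus the Monte-Carlo reward
        opt.zero_grad()
        loss.backward()
        opt.step()
```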
Vanilla SGD can often perform poorly due to various reasons. One of the well-known reasons is its inability to converge or a slow convergence to a local minima/maxima when the objective function has saddle points. A saddle point is a critical point on the surface of the function's graph that is not a local extremum. Saddle points are often found in regions that look like plateaus, as shown in Figure 4.
It should be noted that saddle points are notorious in high-dimensional spaces, especially in cases involving deep neural networks, which often have non-convex objective functions (see [13, 26]). In fact, as the
dimensionality of the problem increases, the number of saddle points has been shown to increase exponentially compared to the number of local minima/maxima. For both _PV strat_ and _NN strat_, the number of parameters to optimize is high, especially for _NN strat_. Our experiments have shown that vanilla SGD does not perform well in these high-dimensional spaces. Therefore, in this paper, we consider two alternative optimization algorithms, Adaptive Moment Estimation (Adam) and Preconditioned Stochastic Gradient Langevin Dynamics (PSGLD), and compare their performance.
### Adaptive Moment Estimation (Adam)
Adam [22] is an efficient stochastic optimization method that only requires first-order gradients and has little memory requirement. The method computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradient. Practically, this is done by means of a preconditioning matrix \(P\) (see the update (3.4)). Adam can be seen as a combination of RMSprop 2 and stochastic gradient descent with momentum [33]. The Adam update is the following
Footnote 2: Unpublished optimization algorithm designed for neural networks, first proposed by Geoffrey Hinton in a Coursera course [https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)
\[y_{n+1}=y_{n}-\gamma_{n+1}P_{n+1}\cdot\widehat{\nabla V}(y_{n}) \tag{3.4}\]
where \(\widehat{\nabla V}\) is a (scaled) estimate of the gradient (see Algorithm 1), computed as a moving average of estimated gradients in the spirit of the momentum technique, and \((P_{n})_{n}\) is a sequence of preconditioning matrices. In the Adam update, the latter is a diagonal matrix whose diagonal components depend on the elementwise square of the gradient \(g_{n+1}\) estimated after iteration \(n\). More precisely, given a gradient \(g=\big{(}g_{1},\ldots,g_{q}\big{)}\in\mathbb{R}^{q}\), its elementwise square, denoted \(g\odot g\), is given by
\[g\odot g=\big{(}g_{1}^{2},\ldots,g_{q}^{2}\big{)}. \tag{3.5}\]
In the same spirit, for some matrix \(A=\big{(}a_{i,j}\big{)}_{1\leq i,j\leq q},B=\big{(}b_{i,j}\big{)}_{1\leq i,j \leq q}\) where \(b_{i,j}\neq 0\) for all \(1\leq i,j\leq q\) we define elementwise division as follows
\[A\oslash B=\big{(}c_{i,j}\big{)}_{1\leq i,j\leq q}\quad\text{ where }\;c_{i,j}=a_{i,j}/b_{i,j}. \tag{3.6}\]
The Adam algorithm is described in Algorithm 1.
Figure 4: Saddle points illustration. \((x,y)\mapsto x^{3}-3xy^{2}\) (on left) and \((x,y)\mapsto x^{2}-y^{2}\) (on right). The point \((0,0)\) is a saddle point.
**Algorithm 1** Adam updating. \(\lambda\) is a small correction term to avoid division by zero. \(\odot\) denotes element-wise multiplication and \(\oslash\) denotes elementwise division defined in (3.5) and (3.6). \(Id_{q}\) represents the \(q\times q\) identity matrix.
**Require:**
\(\rhd\)\(\gamma\): Step size
\(\rhd\)\(\mu_{1},\mu_{2}\in[0,1)\): Exponential decay rates for the moment estimates
\(\rhd\)\(V(y)\): Stochastic objective function with parameters \(y\). Note that \(V\) is computed by its Monte-Carlo version using (3.3)
\(\rhd\)\(y_{0}\): Initial parameter vector
\(M_{0}=0\) (Initialize \(1^{st}\) moment vector)
\(MS_{0}=0\) (Initialize \(2^{nd}\) moment vector)
```
1:while\(y_{n}\) not converged do
2:\(g_{n+1}\leftarrow\nabla_{y}V(y_{n})\)\(\triangleright\) Compute gradient w.r.t to parameters
3:\(M_{n+1}\leftarrow\mu_{1}\cdot M_{n}+(1-\mu_{1})\cdot g_{n+1}\)\(\triangleright\) Update biased first moment estimate
4:\(MS_{n+1}\leftarrow\mu_{2}\cdot MS_{n}+(1-\mu_{2})\cdot g_{n+1}\odot g_{n+1}\)\(\triangleright\) Update biased second raw moment estimate
5:\(\widehat{M}_{n+1}\gets M_{n+1}/(1-\mu_{1}^{n+1})\)\(\triangleright\) Compute bias-corrected first moment estimate
6:\(\widehat{MS}_{n+1}\gets MS_{n+1}/(1-\mu_{2}^{n+1})\)\(\triangleright\) Compute bias-corrected second raw moment estimate
7:\(P_{n+1}=\operatorname{diag}\left(Id_{q}\oslash(\lambda\cdot Id_{q}+\sqrt{ \widehat{MS}_{n+1}})\right)\)\(\triangleright\) Compute preconditioning matrix
8:\(y_{n+1}\gets y_{n}-\gamma\cdot P_{n+1}\cdot\widehat{M}_{n+1}\)\(\triangleright\) Update parameters
9:endwhile
```
Adam uses the squared gradients to scale the learning rate, like RMSprop, and it takes advantage of momentum by using a moving average of the gradient (driven by \(\mu_{1},\mu_{2}\)) instead of the gradient itself, like SGD with momentum. Scaling the learning rate by the squared gradient addresses the gradient magnitude problem: the gradients may sometimes be huge and sometimes small, which complicates the choice of a learning rate. The exponential moving average of the gradient allows one to "de-noise" its estimation. Indeed, it is recommended to choose \(\mu_{1},\mu_{2}\approx 1\) so that the contribution of the current gradient estimate is small: a higher weight is given to previous data points and a lower one to the current ones, which smooths subsequent gradient estimates.
Adam has become the default algorithm for stochastic optimization, especially for neural network training. Its theoretical convergence has been widely studied in the literature. The authors of the algorithm first analyzed its convergence in a convex framework under a bounded gradient assumption; for non-convex cases, one may refer to [14, 40].
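For reference, one Adam update of Algorithm 1 can be written in a few lines of NumPy. This is only a sketch (the experiments in this paper rely on PyTorch's built-in `torch.optim.Adam`), and the \(\mu_{1},\mu_{2}\) defaults below are the usual ones, not necessarily those used here.

```python
import numpy as np

def adam_step(y, g, state, gamma=0.1, mu1=0.9, mu2=0.999, lam=1e-8):
    """One Adam update (Algorithm 1) of parameter vector y given gradient g.
    state = (M, MS, n): first/second moment estimates and iteration counter."""
    M, MS, n = state
    M = mu1 * M + (1.0 - mu1) * g                    # biased first moment estimate
    MS = mu2 * MS + (1.0 - mu2) * g * g              # biased second raw moment estimate
    M_hat = M / (1.0 - mu1 ** (n + 1))               # bias corrections
    MS_hat = MS / (1.0 - mu2 ** (n + 1))
    y = y - gamma * M_hat / (lam + np.sqrt(MS_hat))  # preconditioned step
    return y, (M, MS, n + 1)
```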
In this paper, we do not limit our study to the Adam update. We also explore an algorithm based on Langevin dynamics which, like Adam, has good properties when dealing with saddle points.
### Preconditioned Stochastic Gradient Langevin Dynamics (PSGLD)
Stochastic Gradient Langevin Dynamics (SGLD) has been introduced for Bayesian learning [39] to approximate a posterior distribution given some data. SGLD combines a Robbins-Monro type algorithm with Langevin dynamics, which injects noise into the parameter updates so that the trajectory of the parameters converges to the posterior distribution. More precisely, it has been shown that, under mild conditions, the latter posterior distribution is the unique invariant probability measure of the Langevin
Stochastic Differential Equation (SDE):
\[dy_{t}=-\nabla V(y_{t})dt+\sqrt{2}dW_{t}, \tag{3.7}\]
where \(W\) is a \(q\)-dimensional Brownian motion. Practically, the posterior distribution is approximated using an Euler discretization of the Langevin SDE
\[y_{n+1}=y_{n}-\gamma\nabla V(y_{n})+\sqrt{2\gamma}Z_{n+1}, \tag{3.8}\]
where \((Z_{n})_{n}\) is a sequence of i.i.d. standard \(q\)-dimensional Gaussian vectors.
From equations (3.1) (with \(h=\nabla V\)) and (3.8), SGLD appears to be an extension of vanilla SGD obtained by adding an exogenous white noise to the gradient descent. This allows one to regularize the problem and escape from traps (see [28]). The use of Langevin-dynamics-based algorithms for optimization is justified by recent studies (see [11, 12]) showing that sampling from a distribution which concentrates around the global minimum of \(V\) is a task similar to minimizing \(V\) via certain optimization algorithms. Moreover, under some conditions, it can be shown (see [29]) that if \((Y_{t}^{y})_{t\geq 0}\) is an \(\mathbb{R}^{d}\)-valued process solving the following SDE
\[dY_{t}=b(Y_{t})dt+\sigma(Y_{t})dW_{t},\ \ Y_{0}=y \tag{3.9}\]
with \(W\) a standard \(q\)-dimensional Brownian motion and a drift term of the form
\[b:=-\frac{1}{2}\Big{(}(\sigma\sigma^{\top})\nabla V-\Big{[}\sum_{j=1}^{d} \partial_{y_{j}}(\sigma\sigma^{\top})_{ij}\Big{]}_{i=1:d}\Big{)}. \tag{3.10}\]
Then, the distribution
\[\nu_{V}(dy)=C_{V}e^{-V(y)}\cdot\lambda_{d}(dy) \tag{3.11}\]
is the unique invariant distribution of the SDE (3.9). In particular, if \(\sigma=\sqrt{2}\cdot Id_{d}\), we recover the Langevin SDE (3.7). This shows that the latter SDE converges to its stationary distribution, namely the Gibbs measure \(\propto\exp(-V(y))\), which concentrates on the global minimum of \(V\). All this motivates the use of Langevin-dynamics-based algorithms for stochastic optimization (see [6, 8, 21]). In this paper, as in [8], we consider the application of Langevin algorithms in a non-Bayesian setting, and we will use the preconditioned version of SGLD, namely Preconditioned SGLD (PSGLD, see [25]). This choice is due to the fact that, in its standard version, SGLD (without the noise term) acts like SGD by updating all parameters with the same step size, leading to slow mixing when the components of \(y\) have different curvatures. Given a sequence of preconditioning matrices \((P_{n})_{n}\), the update (3.8) reads, in the PSGLD setting,
\[y_{n+1}=y_{n}-\gamma_{n+1}P_{n+1}\cdot\nabla V(y_{n})+\sigma_{n+1}\sqrt{ \gamma_{n+1}}\mathcal{N}(0,P_{n+1}) \tag{3.12}\]
where \((\sigma_{n})_{n}\) is a constant or non-increasing sequence controlling the amount of injected noise. As in the Adam update, the preconditioning matrices \((P_{n})_{n}\) are diagonal, with components depending on the elementwise square of the gradient estimate. The general procedure is presented in Algorithm 2.
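Since the update (3.12) only adds a preconditioned Gaussian perturbation to an RMSprop-like step, one PSGLD iteration can be sketched as follows. The defaults mirror the \(\gamma,\sigma,\beta,\lambda\) values retained in the experiments below (with \(\beta\) assumed to drive the squared-gradient moving average of the preconditioner of [25]).

```python
import numpy as np

def psgld_step(y, g, ms, rng, gamma=0.1, sigma=1e-6, beta=0.8, lam=1e-10):
    """One PSGLD update in the spirit of (3.12): preconditioned gradient step
    plus injected Gaussian noise N(0, P) scaled by sigma * sqrt(gamma)."""
    ms = beta * ms + (1.0 - beta) * g * g          # running squared-gradient average
    P = 1.0 / (lam + np.sqrt(ms))                  # diagonal preconditioner
    noise = np.sqrt(gamma * P) * rng.standard_normal(y.shape)
    return y - gamma * P * g + sigma * noise, ms
```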
The convergence of Langevin algorithms has been widely studied. One may refer to [15, 16] when the sequence \((\sigma_{n})_{n}\) is constant and to [7, 29] when it is decreasing.
In the next section we implement _PV strat_ and _NN strat_ with the setting of case 1 and optimize them using both Adam and PSGLD as presented above.
### Adam versus PSGLD
We implement _PV strat_ and _NN strat_ using either the Adam or the PSGLD optimization algorithm with the case 1 setting. For this case, the Longstaff-Schwartz method with canonical polynomial functions of degree 3 provides a price of 65.14 with a 95% confidence interval of \([65.08,65.21]\), in 4 minutes and 40 seconds on average. Note that with the case 1 setting (and even with case 2), regardless of the chosen method, the resulting Monte-Carlo estimator of the swing price exhibits high variance due to the high variance of the underlying. Thus, in the following tables and just for this section, prices are computed by averaging _100 replications_ of prices. Each of the 100 prices is obtained using \(B=1\) batch of size \(L=M=2^{14}\) and a learning rate \(\gamma=0.1\). In what follows we denote by \(N\) the number of iterations in the stochastic procedure (see (3.3)) and by \(\widehat{P_{0}}\) the (raw) price resulting from _PV strat_ or _NN strat_; \(\widehat{P_{0}}^{BG}\) denotes the price given by forcing the decision function (either (2.3) for _PV strat_ or (2.6) for _NN strat_) to be strictly bang-bang. Results are recorded in Tables 1 and 2 for _PV strat_ and in Tables 3 and 4 for _NN strat_.
Tables 1 and 2 suggest that Adam requires many iterations and, as a result, increased computation time when using a learning rate of 0.1, whereas only 1000 iterations with the PSGLD optimization algorithm lead to a better price. The same holds for _NN strat_, where the PSGLD optimization algorithm achieves a better price than Adam.
Tables 3 and 4 show that prices obtained by _NN strat_ are better than those of _PV strat_ and, as already mentioned, the PSGLD optimization algorithm provides a better exercise strategy. This means that adding some noise to the optimization procedure helps it converge quickly. Following the performance of the PSGLD update, we use this algorithm in the remainder of the paper along with the following configuration: \(\sigma=1\cdot e^{-6},\beta=0.8,\lambda=1\cdot e^{-10}\). With this setting, one may compute the sensitivity of the swing price with respect to the initial forward price (see Figure 5).
| \(\sigma\) | \(\beta\) | \(\widehat{P_{0}}\) | \(\widehat{P_{0}}^{BG}\) | Time (s) |
| --- | --- | --- | --- | --- |
| \(1\cdot e^{-6}\) | 0.8 | 65.19 ([65.12, 65.26]) | 65.21 ([65.13, 65.28]) | 21.3 |
| \(1\cdot e^{-6}\) | 0.9 | 65.21 ([65.15, 65.28]) | 65.23 ([65.17, 65.29]) | 21.3 |
| \(1\cdot e^{-5}\) | 0.9 | 65.22 ([65.16, 65.28]) | 65.23 ([65.18, 65.29]) | 21.6 |

Table 2: Results for _PV strat_ using the PSGLD optimization algorithm. Values in brackets are 95% confidence intervals. The "Time" column includes both training and valuation times. We used \(N=1000\) iterations and \(\lambda=1\cdot e^{-10}\).

| \(N\) | \(\widehat{P_{0}}\) | \(\widehat{P_{0}}^{BG}\) | Time (s) |
| --- | --- | --- | --- |
| 3000 | 65.22 ([65.16, 65.28]) | 65.23 ([65.17, 65.29]) | 150.4 |
| 1000 | 65.13 ([65.05, 65.20]) | 65.22 ([65.15, 65.29]) | 53.2 |

Table 3: Results for _NN strat_ using the Adam optimization algorithm. For the neural network architecture, we used 2 hidden layers (\(I=2\)) with 10 units per layer (\(q_{1}=10,q_{2}=10\)). Values in brackets are 95% confidence intervals. The "Time" column includes training and valuation times.

| \(\sigma\) | \(\beta\) | \(\widehat{P_{0}}\) | \(\widehat{P_{0}}^{BG}\) | Time (s) |
| --- | --- | --- | --- | --- |
| \(1\cdot e^{-6}\) | 0.9 | 65.23 ([65.16, 65.30]) | 65.21 ([65.14, 65.28]) | 53.4 |
| \(1\cdot e^{-5}\) | 0.9 | 65.23 ([65.14, 65.31]) | 65.22 ([65.13, 65.30]) | 52.3 |
| \(1\cdot e^{-6}\) | 0.8 | 65.27 ([65.20, 65.35]) | 65.26 ([65.18, 65.33]) | 51.3 |

Table 4: Results for _NN strat_ using the PSGLD optimization algorithm. For the neural network architecture, we used 2 hidden layers (\(I=2\)) with 10 units per layer (\(q_{1}=10,q_{2}=10\)). Values in brackets are 95% confidence intervals. The "Time" column includes training and valuation times. We used \(N=1000\) iterations and \(\lambda=1\cdot e^{-10}\).

Figure 5: Delta forward for the settings of case 1 (left) and case 2 (right) using _PV strat_ and _NN strat_.
Note that for both optimization algorithms (Adam and PSGLD), various other hyperparameters have been tested; results are reported in Appendix E.
### Practitioner's corner: Transfer learning
As mentioned, in practice, the training of both parameterizations proposed in this paper relies on AAD. However, the latter can be time consuming, especially for a swing contract with several exercise dates and when using more than one batch per iteration (see Appendix E). For instance, let us consider a swing contract with maturity one year and daily exercise (365 exercise dates), together with the diffusion model (1.12) and the setting of case 2. The results are reported in Table 5. As observed in [3], the Longstaff-Schwartz method generates numerical instabilities. Moreover, note that we get better prices than the optimal quantization method of [2].

| | \(\widehat{P_{0}}\) | \(\widehat{P_{0}}^{BG}\) | \((T_{train},T_{eval})\) |
| --- | --- | --- | --- |
| _PV strat_ | 2690.10 ([2689.20, 2691.00]) | 2692.59 ([2691.68, 2693.49]) | (274.1, 74.7) |
| _NN strat_ | 2693.74 ([2692.83, 2694.64]) | 2694.12 ([2693.21, 2695.02]) | (680.9, 398.3) |

Table 5: Results for a one-year swing contract. Values in brackets are 95% confidence intervals. The valuation was performed with a sample of size \(1\cdot e^{8}\). For the training we used \(N=1000\) iterations. \(T_{train}\) denotes the training time and \(T_{eval}\) the valuation time.

From Table 5 one may notice that the computation time is not negligible, even if it remains reasonable compared to other methods (like Longstaff-Schwartz). To reduce it, we propose a method based on so-called **transfer learning** (for details see [31]). Transfer learning refers to a machine learning method where a model developed for one task is reused as the starting point for a model on a second, similar task. This method can accelerate the evaluation of swing contracts with several exercise dates. In our case, we perform transfer learning as follows. For a swing contract with several exercise dates, we first consider an aggregated version, that is, a contract with fewer exercise dates and where the local constraints are aggregated. For instance, for a swing contract over one year with daily exercise and with \(q_{\min}=0,q_{\max}=6\), we may consider a contract with one exercise per month (the middle of each month) where, for months with 30 days, the local constraints become \(q_{\min}=0\times 30=0\) and \(q_{\max}=30\times 6=180\). We then run a few iterations to optimize the pricing of the aggregated contract. The resulting parameters are used as an initial guess for the actual pricing problem. To extend the parameters obtained in the problem with 12 exercise dates to the problem with 365 exercise dates, we assume that all days within the same month behave identically: the initial-guess parameters for all days within a given month are those obtained for that month in the aggregated problem. It turns out that this allows a decent optimum to be reached more quickly.
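For _PV strat_, whose parameters are per-date coefficient triplets, this initialization step is a simple replication. A minimal sketch (names are ours):

```python
import numpy as np

def expand_monthly_to_daily(theta_monthly, days_per_month):
    """Transfer-learning initial guess: copy the coefficients trained on the
    aggregated (12-date) contract to every day of the corresponding month.
    theta_monthly: (12, 3) array -> returns a (365, 3) array."""
    return np.repeat(theta_monthly, days_per_month, axis=0)

days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
theta_daily0 = expand_monthly_to_daily(np.zeros((12, 3)), days)  # shape (365, 3)
```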
| | \(\widehat{P_{0}}\) | \(\widehat{P_{0}}^{BG}\) | \(T_{agg}\) | \((T_{train},T_{eval})\) |
| --- | --- | --- | --- | --- |
| _PV strat_ | 2688.73 ([2687.82, 2689.63]) | 2692.81 ([2691.90, 2693.71]) | 4.8 | (83.2, 76.3) |
| _NN strat_ | 2693.22 ([2692.31, 2694.13]) | 2694.12 ([2693.21, 2695.02]) | 6.9 | (206.5, 398.1) |

Table 6: Results for a one-year swing contract using transfer learning. We used 500 iterations for the aggregated contract and 300 iterations for the actual one-year contract. For the valuation, we used a sample of size \(1\cdot e^{8}\). \(T_{agg}\) denotes the training time for the aggregated contract; \(T_{train}\) and \(T_{eval}\) denote respectively the training and valuation times for the actual contract.

We can notice from Table 6 that, by means of transfer learning, the computation time is drastically reduced: with our adaptation, the training time is reduced by a factor of 3 without degrading the accuracy. Besides, still with the aim of reducing computation time, transfer learning can be used differently. In a few words, when the market data or the contract settings change, we do not need to start all over again. To illustrate this point, let us consider three cases, M1, M2 and M3, representing some shifts of market/contract settings. Recall that the baseline settings are those considered above (the case 2 setting).

| **Case M1** | **Case M2** | **Case M3** |
| --- | --- | --- |
| \(F_{0,t_{k}}=22\) | \((Q_{\min},Q_{\max})=(1400,2000)\) | \(K=18\) |

Table 7: Market data move scenarios.

In Table 7 we consider three different market-move scenarios, described as follows. In the first case we bump the initial forward curve from 20 to 22. In the second case we change the global constraints from \(Q_{\min}=1300\) and \(Q_{\max}=1900\) to \(Q_{\min}=1400\) and \(Q_{\max}=2000\). In the final case we reduce the strike price from 20 to 18. We could also have changed the local constraints, but as already pointed out, swing pricing can always be reduced to the case where \(q_{\min}=0\) and \(q_{\max}=1\) (see Appendix A). Now the question is the following: could our baseline model give accurate prices without being trained again? Or could it help us obtain accurate prices quickly? The answers to these questions are recorded in Tables 8 and 9.
Tables 8 and 9 demonstrate that in the case where global constraints change (M2), using the baseline parameters without additional training gives prices that are comparable to those obtained by training with a few iterations starting from the baseline parameters. Furthermore, it is important to note that increasing the number of iterations beyond 300 (we tested up to 1000 iterations in our experiments) does not lead to better prices than those obtained after 300 iterations (see columns labeled "Re-train"). This suggests that reusing the baseline parameters can speed up convergence. This observation remains true regardless of the case (M1, M2, or M3).
## 4 Numerical experiments
We now perform additional simulations to demonstrate the effectiveness of the two parameterizations that we proposed in this paper.
### Three factor model
We first consider a three-factor model whose dynamics are given by
\begin{table}
\begin{tabular}{l l l l} \hline \hline Cases & Re-use & Re-train & \((T_{train},T_{eval})\) \\ \hline \hline
**Case M1** & 5942.65 ([5941.64, 5943.65]) & 6005.75 ([6004.70, 6006.79]) & (85.4, 77.2) \\ & 5944.27 ([5943.26, 5945.27]) & 6008.54 ([6007.50, 6009.58]) & \\ \hline
**Case M2** & 2514.86 ([2513.90, 2515.81]) & 2516.99 ([2516.04, 2517.93]) & (85.0, 76.9) \\ & 2517.26 ([2516.30, 2518.21]) & 2518.73 ([2517.78, 2519.67]) & \\ \hline
**Case M3** & 5655.98 ([5655.06, 5656.89]) & 5744.91 ([5743.95, 5745.86]) & (85.9, 77.05) \\ & 5657.67 ([5656.75, 5658.58]) & 5747.39 ([5746.43, 5748.34]) & \\ \hline \end{tabular}
\end{table}
Table 8: Results for _PV strat._ Column “Re-use” provides results when the baseline model parameters are reused as is. Column “Re-train” gives results when we re-train with only 300 iterations with the baseline model parameters used as starting values. \(T_{train}\) denotes the training time and \(T_{eval}\) the valuation time. The testing computation time is the same when we re-use the model as when we re-train.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Cases & Re-use & Re-train & \((T_{train},T_{eval})\) \\ \hline \hline
**Case M1** & 5981.17 ([5980.14, 5982.19]) & 6011.08 ([6010.03, 6012.12]) & (203.1, 397.8) \\ & 5982.57 ([5981.54, 5983.59]) & 6011.62 ([6010.57, 6012.66]) & \\ \hline
**Case M2** & 2509.97 ([2509.01, 2510.90]) & 2516.92 ([2515.96, 2517.87]) & (203.7, 397.5) \\ & 2511.08 ([2510.12, 2512.03]) & 2517.58 ([2516.62, 2518.53]) & \\ \hline
**Case M3** & 5713.80 ([5712.85, 5714.72]) & 5749.97 ([5749.01, 5750.92]) & (203.4, 397.6) \\ & 5715.24 ([5714.34, 5716.17]) & 5750.51 ([5749.55, 5751.46]) & \\ \hline \end{tabular}
\end{table}
Table 9: Results for _NN strat_. Column “Re-use” provides results when the baseline model parameters are reused as is. Column “Re-train” gives results when we re-train with only 300 iterations with the baseline model parameters used as starting values. \(T_{train}\) denotes the training time and \(T_{eval}\) the valuation time. The testing computation time is the same when we re-use the model as when we re-train.
\[\frac{dF_{t,T}}{F_{t,\,T}}=\sigma_{1}e^{-\alpha_{1}(T-t)}dW_{t}^{1}+\sigma_{2}e^{- \alpha_{2}(T-t)}dW_{t}^{2}+\sigma_{3}e^{-\alpha_{3}(T-t)}dW_{t}^{3}, \tag{4.1}\]
where for all \(1\leq i,j\leq 3\), the instantaneous correlation is given by
\[\langle dW_{\cdot}^{i},dW_{\cdot}^{j}\rangle_{t}=\left\{\begin{array}{ll}dt& \text{ if }\,i=j\\ \rho_{i,j}\cdot dt&\text{ if }\,i\neq j\end{array}\right.\]
In this model, the spot price is given by
\[S_{t}=F_{0,t}\cdot\exp\Big{(}\langle\sigma,X_{t}\rangle-\frac{1}{2}\lambda_{t }^{2}\Big{)},\]
where \(\sigma=\big{(}\sigma_{1},\sigma_{2},\sigma_{3}\big{)}^{\top}\), \(X_{t}=\big{(}X_{t}^{1},X_{t}^{2},X_{t}^{3}\big{)}^{\top}\) and for all \(1\leq i\leq 3\),
\[X_{t}^{i}=\int_{0}^{t}e^{-\alpha_{i}(t-s)}\,\mathrm{d}W_{s}^{i}\ \ \text{and}\ \,\lambda_{t}^{2}=\frac{1}{2}\sum_{i=1}^{3}\frac{\sigma_{i}^{2}}{\alpha_{i}} \big{(}1-e^{-2\alpha_{i}t}\big{)}+\sum_{i\neq j}\rho_{i,j}\frac{\sigma_{i} \sigma_{j}}{\alpha_{i}+\alpha_{j}}\Big{(}1-e^{-(\alpha_{i}+\alpha_{j})t}\Big{)}.\]
We fix the following configuration \(\sigma_{i}=\sigma=0.7,\alpha_{i}=\alpha=1.5,\rho_{i,j}=\rho\in[-1,1]\). The swing contract setting corresponds to case 1. We use \(N=1000\) iterations and a learning rate \(\gamma=0.1\). Results are recorded in Tables 10 and 11.
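For reference, here is a minimal simulation sketch of model (4.1) under the configuration above (ours, not the authors' code; the number of steps and paths is arbitrary). The Ornstein-Uhlenbeck factors \(X_{t}^{i}\) are simulated exactly, using the Gaussian law of the kernel integral over each time step:

```python
# Exact simulation of the three OU factors X_t^i of model (4.1) and of the
# spot S_t; sigma, alpha and rho follow the configuration given in the text.
import numpy as np

rng = np.random.default_rng(0)
sigma = np.full(3, 0.7)
alpha = np.full(3, 1.5)
rho = 0.3
corr = np.full((3, 3), rho)
np.fill_diagonal(corr, 1.0)
F0, T, n_steps, n_paths = 20.0, 1.0, 50, 100_000
dt = T / n_steps

a_sum = alpha[:, None] + alpha[None, :]
step_cov = corr * (1.0 - np.exp(-a_sum * dt)) / a_sum  # exact step covariance
L = np.linalg.cholesky(step_cov)
decay = np.exp(-alpha * dt)

X = np.zeros((n_paths, 3))
for _ in range(n_steps):
    X = X * decay + rng.standard_normal((n_paths, 3)) @ L.T

# Closed-form lambda_T^2 (the double sum collapses the two terms in the text).
lam2 = (np.outer(sigma, sigma) * corr * (1.0 - np.exp(-a_sum * T)) / a_sum).sum()
S_T = F0 * np.exp(X @ sigma - 0.5 * lam2)  # sanity check: S_T.mean() is close to F0
```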
It should be noted that there is a different way to implement _NN strat_ that takes into account the structure of forward prices. Specifically, in the three-factor framework (4.1), the forward price depends on state variables that are components of the \(\mathbb{R}^{3}\)-valued random vector \(X_{t}\). In this context, one can include the state vector \(X_{t_{k}}\)
\begin{table}
\begin{tabular}{c l l} \hline \hline \(\rho\) & \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) \\ \hline \hline
0.6 & 172.71 ([172.51, 172.90]) & 172.72 ([172.52, 172.92]) \\ \hline
0.3 & 147.79 ([147.63, 147.95]) & 147.78 ([147.62, 147.95]) \\ \hline -0.2 & 91.02 ([90.92, 91.12]) & 91.02 ([90.92, 91.12]) \\ \hline \end{tabular}
\end{table}
Table 10: Results using _PV strat_. Values in brackets are confidence intervals (95%). The valuation was performed with a sample of size \(10^{8}\). For each result, the execution time (training plus testing) is roughly equal to 22s.
\begin{table}
\begin{tabular}{c l l} \hline \hline \(\rho\) & \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) \\ \hline \hline
0.6 & 173.05 ([172.85, 173.24]) & 172.98 ([172.78, 173.17]) \\ \hline
0.3 & 147.97 ([147.81, 148.14]) & 147.92 ([147.75, 148.08]) \\ \hline -0.2 & 91.18 ([91.08, 91.28]) & 91.13 ([91.03, 91.23]) \\ \hline \end{tabular}
\end{table}
Table 11: Results using _NN strat_. Values in brackets are confidence intervals (95%). The valuation was performed with a sample of size \(10^{8}\). For the neural network architecture we used \(I=2\) layers with \(q_{1}=q_{2}=10\) units. For each result, the execution time (training plus testing) is roughly equal to 45s.
as an additional input to the neural network described in (2.4), in addition to time \(t_{k}\), payoff \(S_{t_{k}}-K\), and volume normalization \(M(Q_{k})\). That is, using the above notation, \(I_{k}=\left(t_{k},S_{t_{k}}-K,X_{t_{k}},M(Q_{k})\right)\in\mathbb{R}^{d+3}\), where \(d\) is the dimension of \(X_{t_{k}}\) (in the three-factor model, \(d=3\)). This approach can help to capture the correlation structure in a multi-factor framework. The prices resulting from this alternative approach are presented in Table 12.
It appears that, regardless of whether the model is multi-factor or not, both strategies (_PV strat_ and _NN strat_) can determine the optimal volume to purchase at each exercise date, based only on the payoff and the cumulative volume. In the following section, we will evaluate the performance of our strategies on a more complex diffusion model.
### Multi-curve forward diffusion
We finally consider a multi-curve model. At a certain valuation date \(t\) we consider \(p\) risk factors. Each risk factor \(i\in\{1,\ldots,p\}\), denoted by \(F_{t,T_{i}}\), is a forward contract observed at date \(t\) and expiring at date \(T_{i}\). It is modeled through an instantaneous volatility function \(\sigma_{t}(T_{i})\) with dynamics given by
\[\frac{dF_{t,T_{i}}}{F_{t,T_{i}}}=\sigma_{t}(T_{i})dW_{t}(T_{i}) \tag{4.2}\]
where \((W_{t}(T_{i}))_{t\geq 0}\) is a standard Brownian motion. The instantaneous correlation between Brownian motions are given by
\[\langle dW_{\cdot}(T_{i}),dW_{\cdot}(T_{j})\rangle_{t}=\rho_{i,j}dt\]
The model (4.2) implies diffusing one curve per risk factor \(F_{t,T_{i}}\), defined by its maturity date \(T_{i}\). Likewise, the spot at a given date generates its own risk factor. Thus, if we consider a swing contract on the day-ahead where we can exercise on all working days of a year, then we have about 365 delivery dates, and therefore 365 curves to diffuse. But due to high correlation (at any date, two risk factors are very correlated when their maturities are very close), one can see that only a few risk factors really impact the price. In practice, forward contracts delivering in the same month are highly correlated. Thus we will consider that there are as many factors as months. For example, for a contract on a whole year, \(p=12\); therefore 12 (correlated) Brownian motions per exercise date. In this model we consider a valuation date of 17 March 2021 and a swing contract delivering over the period January 2022-December 2022. The swing constraints are: \(q_{\min}=0,q_{\max}=1,Q_{\min}=180,Q_{\max}=270\). For the diffusion model, the correlation matrix is (_in percentage_)
\begin{table}
\begin{tabular}{c l l} \hline \hline \(\rho\) & \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) \\ \hline \hline
0.6 & 172.76 ([172.56, 172.95]) & 172.70 ([172.50, 172.89]) \\ \hline
0.3 & 147.95 ([147.78, 148.11]) & 147.91 ([147.75, 148.07]) \\ \hline -0.2 & 91.11 ([91.01, 91.20]) & 91.07 ([90.97, 91.17]) \\ \hline \end{tabular}
\end{table}
Table 12: Results using _NN strat_ including state variables. Values in brackets are confidence intervals (95%). The valuation was performed with a sample of size \(10^{8}\). For the neural network architecture we used \(I=2\) layers with \(q_{1}=q_{2}=10\) units. For each result, the execution time (training plus testing) is roughly equal to 50s.
Volatilities and initial forward prices are assumed to be constant per month. For months in 2022, the square of the instantaneous volatility \(\sigma_{t}^{2}(T_{i})\) is given by Figure 6 and the initial forward curve is given by Figure 7.
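For concreteness, a minimal sketch of simulating the dynamics (4.2) is given below (our illustration). The per-month squared volatilities and initial forwards are the values listed in Figures 6 and 7; the \(12\times 12\) correlation matrix displayed in the paper is not reproduced here, so a constant-correlation placeholder is used instead:

```python
# Simulation sketch of the multi-curve model (4.2) with p = 12 monthly risk
# factors, using a log-Euler step that is exact for constant volatility.
import numpy as np

rng = np.random.default_rng(0)
p, T, n_steps, n_paths = 12, 1.0, 252, 10_000
dt = T / n_steps
sig2 = np.array([0.4509, 0.4488, 0.4263, 0.4057, 0.3699, 0.3437,
                 0.3118, 0.2959, 0.3022, 0.3060, 0.3105, 0.3025])  # Figure 6
vol = np.sqrt(sig2)
F0 = np.array([20.07, 20.0, 19.6, 17.4, 16.75, 16.5,
               16.56, 16.53, 16.71, 17.31, 18.31, 18.64])          # Figure 7
corr = np.full((p, p), 0.9)
np.fill_diagonal(corr, 1.0)   # placeholder for the paper's correlation matrix
L = np.linalg.cholesky(corr)

F = np.tile(F0, (n_paths, 1))
for _ in range(n_steps):
    dW = (rng.standard_normal((n_paths, p)) @ L.T) * np.sqrt(dt)
    F *= np.exp(vol * dW - 0.5 * sig2 * dt)
```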
Finally, the strike price corresponds to the average of all initial forward prices, which gives 17.865. For this diffusion model, _PV strat_ does not perform well. This is probably due to the multi-curve framework,
Figure 6: Square of instantaneous volatility per risk factor observed on 17-March-2021. Values are: \(\left(45.09\%,44.88\%,42.63\%,40.57\%,36.99\%,34.37\%,31.18\%,29.59\%, 30.22\%,30.6\%,31.05\%,30.25\%\right)\)
Figure 7: Forward prices per risk factor observed on 17-March-2021. Prices per risk factor are: \(\left(20.07,20,19.6,17.4,16.75,16.5,16.56,16.53,16.71,17.31,18.31,18.64\right)\)
which is more complex than a multi-factor model. However, modifying _NN strat_ as in the preceding section, by adding the Brownian motions \(\big{(}W_{t_{k}}(T_{i})\big{)}_{1\leq i\leq p}\) at each exercise date \(t_{k}\) to the neural network inputs, allows us to achieve very good prices. We compare the latter implementation of _NN strat_ with a method based on a static replication of swing contracts by means of spread options. The latter method aims at approximating swing option prices by selecting a combination of spread options (_SO_) whose underlying is driven by the same dynamics (4.2). The selection is done through a linear programming method. Results are recorded in Table 13.
## Conclusion
We introduced two parametric approaches for pricing swing contracts. The first method involves a direct parameterization of the optimal control based on some heuristics, while the second method improves upon the first by using neural networks. We conducted numerical experiments to compare two optimization algorithms (Adam and PSGLD). Our results demonstrate that using Langevin-based optimization algorithms allows us to achieve better prices with short computation times. We also found that our methods outperform the state-of-the-art methods in terms of accuracy. Additionally, we tested our neural network parameterization in various complex diffusion models and demonstrated its robustness. We can hence conclude that our neural network-based method is well-suited to pricing swing options.
## Acknowledgments
The first author is grateful for the financial support provided by Engie Global Markets via a CIFRE agreement and would like to thank Asma Meziou and Frederic Muller for thorough insights on swing contracts.
## Appendix A Swing contract decomposition
We aim to prove that any swing contract can be reduced to a normalized contract, namely a swing contract with local constraints \(q_{\min}=0,q_{\max}=1\). The swing contract price is given by
\[P_{0}=\operatorname*{ess\,sup}_{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}^{Q_{\min},\,Q_{\max}}_{0,0}}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right)\]
where
\[\mathcal{A}^{Q_{\min},\,Q_{\max}}_{0,0}=\left\{(q_{\ell})_{0\leq\ell\leq n-1},\ q_{\ell}:(\Omega,\mathcal{F}^{S}_{t_{\ell}},\mathbb{P})\mapsto[q_{\min},q_{\max}],\ \sum_{\ell=0}^{n-1}q_{\ell}\in[Q_{\min},Q_{\max}]\right\}.\]
It follows from the linearity of the expectation that,
\begin{table}
\begin{tabular}{c c c} \hline \hline \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) & \(SO\) \\ \hline \hline
708.46 ([705.76, 711.16]) & 711.32 ([708.61, 714.03]) & 645.3 \\ \hline \end{tabular}
\end{table}
Table 13: Results using _NN strat_. Values in brackets are confidence intervals (95%). The valuation was performed with a sample of size \(5\times 10^{6}\). For the neural network architecture we used \(I=2\) layers with \(q_{1}=q_{2}=50\) units.
\[P_{0}=q_{\min}\times\mathbb{E}\left(\sum_{\ell=0}^{n-1}(S_{t_{\ell}}-K)\right)+(q_{ \max}-q_{\min})\times\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0 }^{Q_{\min},\,Q_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{q_{\ell}-q_{\min}}{q_{\max}-q_{\min} }\times(S_{t_{\ell}}-K)\right).\]
The first term in the last equality is given by
\[\mathbb{E}\left(\sum_{\ell=0}^{n-1}(S_{t_{\ell}}-K)\right)=\sum_{\ell=0}^{n-1} \mathbb{E}\left(S_{t_{\ell}}\right)-n\cdot K\]
and can be easily computed using either closed formula (depending on the underlying diffusion model) or Monte-Carlo method. Let us consider the second term. Let \((q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{Q_{\min},Q_{\max}}\) and define \(\tilde{q}_{\ell}=\frac{q_{\ell}-q_{\min}}{q_{\max}-q_{\min}}\). Note that \((\tilde{q}_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\tilde{Q}_{\max}}\) where
\[\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\tilde{Q}_{\max}}=\left\{(q_{\ell})_{0 \leq\ell\leq n-1},\ q_{\ell}:(\Omega,\mathcal{F}_{t_{\ell}},\mathbb{P}) \mapsto[0,1],\ \sum_{\ell=0}^{n-1}q_{\ell}\in[\tilde{Q}_{\min},\tilde{Q}_{\max}]\right\}.\]
and
\[\tilde{Q}_{\min}=\frac{(Q_{\min}-n\cdot q_{\min})_{+}}{q_{\max}-q_{\min}} \qquad\tilde{Q}_{\max}=\frac{(Q_{\max}-n\cdot q_{\min})_{+}}{q_{\max}-q_{\min}}.\]
Thus,
\[\mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{q_{\ell}-q_{\min}}{q_{ \max}-q_{\min}}\times(S_{t_{\ell}}-K)\right) =\mathbb{E}\left(\sum_{\ell=0}^{n-1}\tilde{q}_{\ell}\times(S_{t_ {\ell}}-K)\right)\] \[\leq\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0} ^{\tilde{Q}_{\min},\tilde{Q}_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right).\]
Therefore taking the supremum yields,
\[\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{Q_{\min},Q_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{q_{\ell}-q_{\min}}{q_{\max}-q_{\min}}\times(S_{t_{\ell}}-K)\right)\leq\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\tilde{Q}_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right).\]
Conversely let \((q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\tilde{Q}_{\max}}\) and define \(\tilde{q}_{\ell}=q_{\min}+(q_{\max}-q_{\min})\cdot q_{\ell}\in[q_{\min},q_{\max}]\). It follows that \(\sum_{\ell=0}^{n-1}\tilde{q}_{\ell}\in[Q_{\min},Q_{\max}]\), so that \(\left(\tilde{q}_{\ell}\right)_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{Q_{\min},Q_{\max}}\). Thus,
\[\mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right) =\mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{\tilde{q}_{\ell}-q_{\min }}{q_{\max}-q_{\min}}\times(S_{t_{\ell}}-K)\right)\] \[\leq\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^ {Q_{\min},\,Q_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{q_{\ell}-q_{\min}}{q_{\max}-q_{\min} }\times(S_{t_{\ell}}-K)\right).\]
Taking the supremum, we get,
\[\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\,\tilde{Q}_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right)\leq \underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{Q_{\min},\,Q_{ \max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}\frac{q_{\ell}-q_{\min}}{q_{\max}-q_{\min} }\times(S_{t_{\ell}}-K)\right).\]
Therefore,
\[P_{0}=q_{\min}\times\mathbb{E}\left(\sum_{\ell=0}^{n-1}(S_{t_{\ell}}-K)\right)+(q_{\max}-q_{\min})\times\underset{(q_{\ell})_{0\leq\ell\leq n-1}\in\mathcal{A}_{0,0}^{\tilde{Q}_{\min},\,\tilde{Q}_{\max}}}{\text{ess}\sup}\ \mathbb{E}\left(\sum_{\ell=0}^{n-1}q_{\ell}\times(S_{t_{\ell}}-K)\right).\]
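A small helper (our illustration) makes this reduction explicit in code:

```python
# Reduction of a general swing contract to the normalized contract of this
# appendix: local bounds become [0, 1], the global bounds are rescaled, and
# the price obeys the affine relation derived above.
def normalized_global_bounds(q_min, q_max, Q_min, Q_max, n):
    span = q_max - q_min
    Q_min_t = max(Q_min - n * q_min, 0.0) / span
    Q_max_t = max(Q_max - n * q_min, 0.0) / span
    return Q_min_t, Q_max_t

def price_from_normalized(P0_norm, E_sum_payoff, q_min, q_max):
    """P0 = q_min * E[sum(S - K)] + (q_max - q_min) * P0_norm."""
    return q_min * E_sum_payoff + (q_max - q_min) * P0_norm

# Illustrative contract values only (not a setting from the paper):
print(normalized_global_bounds(0.2, 1.2, 1300.0, 1900.0, 365))
```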
## Appendix B On the rate of convergence
This section aims to estimate, at least numerically, the rate of convergence. We denote by \(U_{n}\) the price obtained after the \(n^{th}\) iteration in the training step. We make the assumption that for some constant \(C>0\)
\[U_{n}=U_{\infty}+\frac{C}{n^{\alpha}}\]
where \(U_{\infty}\) is the limit of the stochastic procedure that we assume to exist. Thus,
\[\log(|U_{2n}-U_{n}|)=K_{\alpha}+\alpha\ \log(\frac{1}{n}),\ \ \ \ \text{ where }K_{\alpha}=\log(|C(2^{-\alpha}-1)|).\]
Therefore the coefficient \(\alpha\) (representing the rate of convergence following the assumption) appears to be the slope in the \(\log-\log\) regression of \(|U_{2n}-U_{n}|\) against \(\frac{1}{n}\).
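The regression itself is a one-liner; the sketch below (ours) estimates \(\alpha\) from a sequence of training prices and checks it on synthetic data with a known rate:

```python
# Log-log regression of |U_{2n} - U_n| against 1/n: the fitted slope
# estimates the convergence rate alpha.
import numpy as np

def estimate_rate(U):
    """U[n-1] is the price after the n-th training iteration."""
    U = np.asarray(U)
    ns = np.arange(1, len(U) // 2 + 1)
    gaps = np.abs(U[2 * ns - 1] - U[ns - 1])
    slope, _ = np.polyfit(np.log(1.0 / ns), np.log(gaps), 1)
    return slope

# Synthetic check: U_n = U_inf + C/n should give alpha close to 1.
U = [100.0 + 3.0 / n for n in range(1, 2001)]
print(estimate_rate(U))  # ~1.0
```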
Figures 8 and 9 suggest that both parameterizations give a rate of convergence of order \(\mathcal{O}(\frac{1}{N})\) (where \(N\) is the number of iterations). This convergence rate is much faster than Monte-Carlo, and suggests that our parameterizations are efficient alternatives to the Longstaff-Schwartz method for this problem.
## Appendix C Estimator variance and computation time
In this section we illustrate the high variance phenomenon which may appear when pricing swing options. To this end, we compute 100 realizations of the swing contract price with the case 1 setting for the three methods: _PV strat_, _NN strat_ and Longstaff-Schwartz. The distributions of prices are represented in Figures 10, 11, and 12.
Figures 10, 11, and 12 demonstrate that prices can fluctuate significantly, regardless of the pricing method. Using \(10^{6}\) simulations is insufficient to obtain a reliable price estimate. Storage limitations prevent the Longstaff-Schwartz method from exceeding this number of simulations, and this limit is even lower when the number of exercise dates is high. However, our proposed methods allow the number of simulations to be increased as needed once the strategy is trained. This is possible because we can evaluate our strategies on mini-batches sequentially instead of using the entire test set. For instance, to compute a price with a sample size of \(10^{8}\), we estimate \(100\) prices over \(100\) mini-batches, each with a sample size of \(10^{6}\). The final price is then the average of the \(100\) estimated prices.
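A minimal sketch of this sequential valuation follows (ours; `price_on_batch` is a hypothetical stand-in for evaluating the trained strategy on freshly simulated paths):

```python
# Sequential mini-batch valuation: the large-sample price is the average of
# many independent mini-batch prices, with a 95% confidence half-width.
import numpy as np

def price_large_sample(price_on_batch, n_batches=100, batch_size=10**6, seed=0):
    rng = np.random.default_rng(seed)
    prices = np.array([price_on_batch(batch_size, rng) for _ in range(n_batches)])
    half_ci = 1.96 * prices.std(ddof=1) / np.sqrt(n_batches)
    return prices.mean(), half_ci

# Demo with a dummy batch pricer whose estimate is noisy around 65.2:
demo = lambda m, rng: 65.2 + rng.normal(scale=800.0 / np.sqrt(m))
print(price_large_sample(demo))
```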
Hereafter (see Figure 13) we present the CPU time for _PV strat_ and _NN strat_. We use one batch of size \(2^{14}\) and the same PSGLD setting.
Figure 11: Distribution of swing prices using _NN strat_. From left to right we used successively \(10^{5},10^{6},10^{7},5\times 10^{8}\) simulations.
Figure 12: Distribution of swing prices using Longstaff-Schwartz method. From left to right we used successively \(10^{5},10^{6}\) simulations. Higher number of simulations leads to memory overflow.
Figure 10: Distribution of swing prices using _PV strat_. From left to right we used successively \(10^{5},10^{6},10^{7},5\times 10^{8}\) simulations.
## Appendix D _PV strat_ coefficients
In this appendix we present the coefficients obtained with _PV strat_ and either Adam or PSGLD updating. For that purpose, we used \(N=1000\) iterations, a learning rate of \(0.1\) and the following configurations. For Adam, we use \(B=4\) batches of size \(2^{12}\). For the PSGLD updating, we use \(\sigma=10^{-6},\beta=0.8,\lambda=10^{-10}\) and \(B=1\) batch of size \(2^{14}\). In the following graphics, the coefficients \(a_{k},b_{k}\) are the coefficients which multiply respectively the payoff \(S_{t_{k}}-K\) and the margin of cumulative consumption \(M(Q_{k})\). The coefficient \(c_{k}\) is the constant (see (2.3)). For both optimization algorithms, the strategy is estimated with the same sample.
In general, it can be observed that the coefficients estimated using the PSGLD algorithm are smoother across the exercise dates compared to those estimated with the Adam updating method. Additionally, _PV strat_ results in interpretable coefficients. Specifically, the coefficient \(a_{k}\) tends to increase globally, with a rapid growth over the last exercise dates. This is because, as we move forward in exercise dates, the minimal global constraint is gradually fulfilled, giving us more flexibility on the control or allowed volumes to purchase. In this particular example, the maximum global constraint is always fulfilled, so towards the last exercise dates, we can decide to buy the minimum possible amount if the payoff is negative and buy the maximum possible amount if the payoff is positive (i.e., the parameterization (2.3) approaches 0 or 1).
On the other hand, the coefficient \(b_{k}\) generally decreases at the beginning of the contract and starts increasing towards zero over the last exercise dates. This indicates that the remaining exercise capacity has less influence on the strategy towards the end of the contract. This is consistent with the claim made earlier, which stated that towards the end of the contract, the optimal strategy depends mainly on the payoff and less on the volume constraints.
Figure 14: Coefficients of _PV strat_ using PSGLD updating.
Figure 13: CPU time (in seconds) as a function of the number of iterations.
## Appendix E Summary tables: Adam versus PSGLD
Hereafter are recorded all the results obtained using the Adam and PSGLD updates with different hyperparameter combinations. Prices are computed by averaging 100 price replications, each obtained using a sample size of \(10^{6}\). We consider a swing contract with the case 1 setting. Results are recorded in Tables 14, 15, 16, 17.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\sigma\) & \(\beta\) & \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) & Time (s) \\ \hline \hline \(10^{-5}\) & 0.8 & 65.15 ([65.09, 65.22]) & 65.17 ([65.10, 65.23]) & 22.6 \\ \hline \(10^{-5}\) & 0.7 & 65.13 ([65.05, 65.21]) & 65.15 ([65.07, 65.22]) & 22.1 \\ \hline \(10^{-5}\) & 0.9 & 65.22 ([65.16, 65.28]) & 65.23 ([65.18, 65.29]) & 21.6 \\ \hline \(10^{-6}\) & 0.8 & 65.19 ([65.12, 65.26]) & 65.21 ([65.13, 65.28]) & 21.3 \\ \hline \(10^{-6}\) & 0.9 & 65.21 ([65.15, 65.28]) & 65.23 ([65.17, 65.29]) & 21.4 \\ \hline \end{tabular}
\end{table}
Table 16: Summary table for _PV strat_ using PSGLD. The values in brackets are confidence intervals (95%). The column “time” includes both training and valuation time. We used a learning rate equal to 0.1, one batch of size \(2^{14}\), \(\lambda=10^{-10}\) and 1000 iterations.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(L\times B\) & \(\gamma\) & \(N\) & \(\widehat{P_{0}}\) & \(\widehat{P_{0}}^{BG}\) & Time (s) \\ \hline \hline \(2^{14}\times 1\) & 0.1 & 3000 & 65.22 ([65.16, 65.28]) & 65.23 ([65.17, 65.29]) & 150.4 \\ \hline \(2^{14}\times 1\) & 0.1 & 1000 & 65.13 ([65.05, 65.20]) & 65.22 ([65.15, 65.29]) & 53.2 \\ \hline \(2^{12}\times 4\) & 0.1 & 3000 & 65.26 ([65.21, 65.32]) & 65.23 ([65.17, 65.29]) & 603.4 \\ \hline \(2^{12}\times 4\) & 0.1 & 1000 & 65.20 ([65.14, 65.26]) & 65.20 ([65.14, 65.26]) & 133.5 \\ \hline \(2^{14}\times 1\) & 0.01 & 3000 & 64.78 ([64.65, 64.90]) & 65.06 ([64.98, 65.14]) & 157.1 \\ \hline \(2^{14}\times 1\) & 0.01 & 1000 & 64.81 ([64.74, 64.88]) & 65.05 ([64.98, 65.12]) & 55.1 \\ \hline \(2^{12}\times 4\) & 0.01 & 3000 & 65.20 ([65.15, 65.26]) & 65.20 ([65.14, 65.26]) & 607.3 \\ \hline \(2^{12}\times 4\) & 0.01 & 1000 & 65.02 ([64.92, 65.11]) & 65.17 ([65.09, 65.24]) & 135.4 \\ \hline \end{tabular}
\end{table}
Table 15: Summary table for _NN strat_ using Adam. We used a neural network architecture as follows: 2 hidden layers (\(I=2\)) and 10 units per layer (\(q_{1}=10,q_{2}=10\)). The values in brackets are confidence intervals (95%). The time includes the training and the valuation time.
2304.08868 | Soft-Output Deep Neural Network-Based Decoding | Deep neural network (DNN)-based channel decoding is widely considered in the
literature. The existing solutions are investigated for the case of hard
output, i.e. when the decoder returns the estimated information word. At the
same time, soft-output decoding is of critical importance for iterative
receivers and decoders. In this paper, we focus on the soft-output DNN-based
decoding problem. We start with the syndrome-based approach proposed by
Bennatan et al. (2018) and modify it to provide soft output in the AWGN
channel. The new decoder can be considered as an approximation of the MAP
decoder with smaller computation complexity. We discuss various regularization
functions for joint DNN-MAP training and compare the resulting distributions
for [64, 45] BCH code. Finally, to demonstrate the soft-output quality we
consider the turbo-product code with [64, 45] BCH codes as row and column
codes. We show that the resulting DNN-based scheme is very close to the
MAP-based performance and significantly outperforms the solution based on the
Chase decoder. We come to the conclusion that the new method is prospective for
the challenging problem of DNN-based decoding of long codes consisting of short
component codes. | Dmitry Artemasov, Kirill Andreev, Pavel Rybin, Alexey Frolov | 2023-04-18T10:03:54Z | http://arxiv.org/abs/2304.08868v1 | # Soft-Output Deep Neural Network-Based Decoding
###### Abstract
Deep neural network (DNN)-based channel decoding is widely considered in the literature. The existing solutions are investigated for the case of hard output, i.e. when the decoder returns the estimated information word. At the same time, soft-output decoding is of critical importance for iterative receivers and decoders. In this paper, we focus on the soft-output DNN-based decoding problem. We start with the syndrome-based approach proposed by Bennatan et al. (2018) and modify it to provide soft output in the AWGN channel. The new decoder can be considered as an approximation of the MAP decoder with smaller computation complexity. We discuss various regularization functions for joint DNN-MAP training and compare the resulting distributions for \([64,45]\) BCH code. Finally, to demonstrate the soft-output quality we consider the turbo-product code with \([64,45]\) BCH codes as row and column codes. We show that the resulting DNN-based scheme is very close to the MAP-based performance and significantly outperforms the solution based on the Chase decoder. We come to the conclusion that the new method is promising for the challenging problem of DNN-based decoding of long codes consisting of short component codes.
Channel decoding, machine learning, deep neural networks, soft-output, iterative codes
## I Introduction
Nowadays, the scope of application of machine learning algorithms and deep neural networks (DNN) is growing rapidly. In the past decade, the use of DNNs has allowed groundbreaking results to be achieved in applications such as image, video, and natural language processing [1]. All of these applications deal with natural signals. At the same time, much less attention has been devoted to the application of ML methods in communications. In this paper, we consider the application of ML algorithms for the channel decoding problem. To justify this research direction we note that the decoding problem is a classification problem: the channel output must correspond to one of the classes (codewords). The significant difference between this problem and a typical classification problem lies in the exponentially large number of classes.
The idea of using NNs for the channel decoding problem is not new; here we mention the early papers [2, 3]. Due to the lack of computational capabilities, these methods were forgotten until a recent paper [4]. The authors of [4] consider a binary input channel with additive white Gaussian noise (AWGN) and utilize a fully connected NN as a decoder. The major ML challenge is dataset collection and labeling, but in the decoding task this problem disappears, as a dataset of any size can be generated easily. At the same time, the approach of [4] suffers from the "curse of dimensionality" problem, as the number of codewords is exponential in the number of information bits. Thus, for any reasonable parameters, it is not possible to train the NN on all the codewords. The only hope is that the NN can learn the code structure by observing a small number of codewords. Note that all practical codes are linear and can be defined by a basis, so the basis vectors are sufficient to learn the code structure. The main outcome of [4] is that the fully connected NN cannot learn the code structure, and thus such a method is applicable to very short codes only. Subsequent articles propose to combine existing decoding algorithms and NNs. The articles [5, 6, 7, 8, 9, 10, 11] consider the belief propagation algorithm (both Sum-Product and Min-Sum modifications), which is suitable for any linear code but shows the best results for sparse-graph codes, such as Low-Density Parity-Check (LDPC) codes [12]. The idea is to unwrap (or unroll) the underlying Tanner graph and obtain a sparse NN, which repeats the decoder operations but is equipped with trainable weights. Improvements were obtained for BCH codes [5, 6, 10] and LDPC codes [7, 8, 11]. The next idea was to replace the activation functions; the resulting architecture is called a hyper-network [13, 14]. Later, Cammerer et al. proposed to replace node and edge message updates with trainable functions, thus allowing the NN to learn a generalized message passing algorithm [15]. Another approach, proposed in [16], is to consider the syndrome-based decoding algorithm, which is suitable for any linear code. The basic syndrome-based decoding algorithm implies the use of a mapping (from syndrome to coset leader) whose size is exponential in the number of parity-check bits. The idea of [16] is to approximate this table with a NN. We note that the syndrome does not depend on the codeword and, therefore, we do not require the NN to have a special structure; it can be arbitrary, but the best results were obtained with recurrent NNs [16]. Later, the syndrome-based approach was adapted to transformer and denoising diffusion architectures [17, 18].
We also note the papers (see, e.g. [19]) devoted to DNN-based code construction. For additional literature and a more detailed overview, we refer the reader to [20].
The papers above focus on the performance of hard-output decoding, i.e. the decoder is required to return the estimated information word. At the same time, modern receivers (such as MIMO receivers [21]) and modern codes consisting of short component codes [22] require iterative (or turbo) decoders. Soft-output decoding is of critical importance for such schemes. We note that several papers (e.g. [15, 16]) mention the possibility of obtaining a soft output by the proposed DNN architectures, but to the best of our knowledge, the quality of such output was not investigated in the literature. In what follows, we fill this gap.
Our contribution is as follows. We start with a syndrome-based approach [16] and modify it to provide soft output. The major change is the training process and the loss function, including the regularization term, which controls the soft output quality. We demonstrate the performance of the new decoder for the \([64,45]\) BCH code on the binary input AWGN channel. We choose such parameters as maximum a-posteriori (MAP) decoding is feasible for this code but has large complexity, which prevents the use of such a method in practice. Our decoder can be considered as an approximation of the MAP decoder with smaller computation complexity, in other words, we require our DNN to reproduce the MAP output. We discuss various regularization functions and compare the resulting distributions. Finally, to demonstrate soft output quality, we consider the iterative decoding scheme, namely the turbo product code (TPC) with \([64,45]\) BCH codes as row and column codes. We show that the resulting DNN-based scheme is very close to MAP-based performance and significantly outperforms the Chase decoder-based solution [23] in combination with the soft output calculation [24].
The paper is organized as follows. In section II the proposed preprocessing procedure described, NN model architecture, and soft-output quality metrics are introduced and applied for distribution optimization. The section ends with the decoding performance results and their discussion. Section III provides a description of the proposed soft-decoding approach application in the TPC decoding scheme. The framework preprocessing steps for iterative decoding and model tuning steps are followed by a discussion of the results.
## II Soft-input soft-output DNN-based decoding
### _System model_
Let us describe the system model. The user aims to transmit a \(k\)-bit information word \(\mathbf{u}\in\{0,1\}^{k}\). We assume the use of a binary linear block code \(\mathcal{C}\) of length \(n\) and dimension \(k\). Let \(\mathbf{H}\) and \(\mathbf{G}\) denote parity-check and generator matrices of the code \(\mathcal{C}\) accordingly. The information word \(\mathbf{u}\) is first encoded into the codeword \(\mathbf{c}=(c_{1},\ldots,c_{n})=\mathbf{u}\mathbf{G}\in\{0,1\}^{n}\). Then the binary phase-shift keying (BPSK) modulation is applied, implying the following mapping.
\[\mathbf{x}=\tau(\mathbf{c}),\quad\tau(\mathbf{c})=(\tau(c_{1}),\ldots,\tau(c_ {n})),\]
where \(\tau:\{0,1\}\rightarrow\{1,-1\}\).
The modulated codeword \(\mathbf{x}\) is transmitted over the AWGN channel; thus, the receiver obtains the corrupted word
\[\mathbf{y}=\mathbf{x}+\mathbf{z},\]
where \(\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\), \(\mathbf{z}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{n})\) and \(\mathbf{I}_{n}\) is the identity matrix of size \(n\times n\). In what follows, by \(E_{s}/N_{0}\) we denote the signal-to-noise ratio, \(E_{s}/N_{0}=1/(2\sigma^{2})\).
As usual [25], the input of the decoder is presented as a vector \(\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{n})\) of log-likelihood ratios, where
\[\gamma_{i}=\log\frac{p(y_{i}|c_{i}=0)}{p(y_{i}|c_{i}=1)}=\frac{2y_{i}}{\sigma^ {2}},\ i=1,\ldots,n, \tag{1}\]
where \(\log(\cdot)\) stands for a natural logarithm and \(p(x)=1/\sqrt{2\pi\sigma^{2}}\exp\left[-x^{2}/(2\sigma^{2})\right]\) is the probability density function of a random variable distributed as \(\mathcal{N}(0,\sigma^{2})\).
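As a quick illustration (ours), the channel model and the LLR computation (1) amount to a few lines:

```python
# BPSK over AWGN and the channel LLRs of eq. (1); the bits here are random
# placeholders rather than a valid codeword.
import numpy as np

rng = np.random.default_rng(0)
n, EsN0_dB = 64, 1.0
sigma2 = 1.0 / (2.0 * 10 ** (EsN0_dB / 10.0))  # from Es/N0 = 1/(2 sigma^2)

c = rng.integers(0, 2, size=n)       # codeword bits
x = 1.0 - 2.0 * c                    # tau: 0 -> +1, 1 -> -1
y = x + rng.normal(scale=np.sqrt(sigma2), size=n)
gamma = 2.0 * y / sigma2             # LLRs, eq. (1)
```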
Now, let us describe the decoding performance metric. Let us start with hard output decoding, and let \(\hat{\mathbf{u}}=(\hat{u}_{1},\ldots,\hat{u}_{k})\in\{0,1\}^{k}\) be the estimated information word. In what follows we utilize bit error rate (BER) \(P_{b}=\frac{1}{k}\sum_{i=1}^{k}\Pr[u_{i}\neq\hat{u}_{i}]\) and frame error rate (FER) \(P_{f}=\Pr[\mathbf{u}\neq\hat{\mathbf{u}}]\).
To assess soft output quality, we compare the decoder output to the bit-wise MAP output \(\boldsymbol{\gamma}^{*}=(\gamma_{1}^{*},\ldots,\gamma_{n}^{*})\), where for \(i=1,\ldots,n\) we have
\[\gamma_{i}^{*}=\log\frac{\Pr[c_{i}=0|\mathbf{y}]}{\Pr[c_{i}=1|\mathbf{y}]}= \log\frac{\sum_{\mathbf{c}\in\mathcal{C},c_{i}=0}\exp\left[(\mathbf{1}- \mathbf{c})\boldsymbol{\gamma}^{T}\right]}{\sum_{\mathbf{c}\in\mathcal{C},c_{i} =1}\exp\left[(\mathbf{1}-\mathbf{c})\boldsymbol{\gamma}^{T}\right]},\]
where \(\mathbf{1}\) is the all-one vector, \(\boldsymbol{\gamma}^{T}\) is the transpose of \(\boldsymbol{\gamma}\). We refer the reader to [26] for the derivation.
### _Syndrome-based approach_
The proposed soft-output decoding framework inherits the syndrome-based structure described by Bennatan et al. [16]. The original syndrome-based decoder implementation was designed for the system with multiplicative noise and its performance was discussed for hard decision decoding. In this paper, we propose to modify the pre- and postprocessing steps to adapt the framework for soft-output decoding in the AWGN channel.
In [16], the authors propose to pass the vector \([|\mathbf{y}|,\mathbf{s}]\) as the input to the noise estimator, where \([\cdot,\cdot]\) denotes concatenation, \(|\mathbf{y}|\) is the reliability vector and \(\mathbf{s}=\text{bin}(\mathbf{y})\mathbf{H}^{T}\) is the binary syndrome. In this notation, \(\text{bin}(\cdot)\) implies a hard decision over the received vector. Instead, we propose to utilize the so-called _soft syndrome_ introduced by Lugosch et al. [27] to avoid the hard-decision step in preprocessing. Since there is an isomorphism between \((\{0,1\},\oplus)\) and \((\{1,-1\},*)\), the syndrome can be expressed as follows
\[s_{i}=\prod_{j\in\mathcal{M}(i)}\text{sign}(y_{j}),\forall i\in[1,n-k] \tag{2}\]
where \(\mathcal{M}(i)\) is the set of column indices \(j\) for which the entry of the \(i\)-th row of the parity-check matrix \(\mathbf{H}\) equals \(1\).
Thus, the hard syndrome relaxation for input LLR vector \(\mathbf{\gamma}\) can be introduced as
\[\tilde{s}_{i}=\min_{j\in\mathcal{M}(i)}|\gamma_{j}|\prod_{j\in\mathcal{M}(i)} \text{sign}(\gamma_{j}),\forall i\in[1,n-k] \tag{3}\]
For the following description, we denote the noise estimator input vector by \(\mathbf{d}=[|\mathbf{\gamma}|,\mathbf{\tilde{s}}]\in\mathbb{R}^{2n-k}\).
The proposed decoding algorithm is summarized in Algorithm 1. The noise estimation function is denoted by \(\mathcal{F}\). In what follows \(\mathcal{F}\) is chosen to be a DNN.
```
0:\(\mathbf{\gamma}\in\mathbb{R}^{n}\) - input LLRs, \(\mathbf{\tilde{s}}\in\mathbb{R}^{n-k}\) - soft syndrome
0:\(\mathbf{\hat{\gamma}}\in\mathbb{R}^{n}\) - transmitted message LLRs estimation
1:\(\mathbf{\hat{z}}\leftarrow\mathcal{F}([|\mathbf{\gamma}|,\mathbf{\tilde{s}}])\)
2:\(\mathbf{\hat{\gamma}}\leftarrow\mathbf{\gamma}-\text{sign}(\mathbf{\gamma})\odot\mathbf{ \hat{z}}\)
3:return\(\mathbf{\hat{\gamma}}\)
```
**Algorithm 1** Soft-output syndrome-based DNN decoding
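A compact numpy sketch of the soft syndrome (3) and of the decoding step of Algorithm 1 is given below (ours; `F` stands for the trained noise-estimation network):

```python
# Soft syndrome of eq. (3) and the soft-output step of Algorithm 1.
import numpy as np

def soft_syndrome(gamma, H):
    s = np.empty(H.shape[0])
    for i, row in enumerate(H):
        idx = np.flatnonzero(row)              # M(i): columns with H[i, j] = 1
        s[i] = np.abs(gamma[idx]).min() * np.prod(np.sign(gamma[idx]))
    return s

def decode_soft(gamma, H, F):
    d = np.concatenate([np.abs(gamma), soft_syndrome(gamma, H)])  # input vector
    z_hat = F(d)                               # estimated noise
    return gamma - np.sign(gamma) * z_hat      # soft-output LLRs
```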
### _NN model architecture_
The main goal of the neural network \(\mathcal{F}\) is to estimate the noise vector, and the choice of the best architecture remains an open question [16, 17, 18]. In this paper, we focus on estimating the ability of the neural network framework to perform soft decoding. Based on the analysis of hard decoding quality and the time required to train different architectures, we choose a Stacked-GRU architecture [16].
Stacked-GRU is a multi-layer Recurrent Neural Network (RNN) architecture composed of Gated Recurrent Unit (GRU) cells [28] with trainable "update" and "reset" gates. Each GRU cell can be described by the following equations (see Fig. 1 for more details).
\[\mathbf{g}_{t} =\sigma(\mathbf{W}_{g}\mathbf{d}_{t}+\mathbf{U}_{g}\mathbf{q}_{t- 1}+\mathbf{b}_{g}), \tag{4}\] \[\mathbf{r}_{t} =\sigma(\mathbf{W}_{r}\mathbf{d}_{t}+\mathbf{U}_{r}\mathbf{q}_{t- 1}+\mathbf{b}_{r}),\] (5) \[\mathbf{\hat{q}}_{t} =\tanh\big{(}\mathbf{W}_{h}\mathbf{d}_{t}+\mathbf{U}_{h}(\mathbf{ r}_{t}\odot\mathbf{q}_{t-1})+\mathbf{b}_{h}\big{)},\] (6) \[\mathbf{q}_{t} =\mathbf{g}_{t}\odot\mathbf{\hat{q}}_{t}+(\mathbf{1}-\mathbf{g}_ {t})\odot\mathbf{q}_{t-1}, \tag{7}\]
where \(\mathbf{d}_{t}\) is the input vector, \(\mathbf{q}_{t}\) - output vector, \(\mathbf{\hat{q}}_{t}\) - candidate output vector, \(\mathbf{g}_{t}\) - update gate vector, \(\mathbf{r}_{t}\) - reset gate vector, \(\mathbf{W},\mathbf{U}\) - trainable parameters matrices and \(\mathbf{b}\) - trainable bias vectors. \(\sigma(\cdot)\) denotes sigmoid function, \(\tanh(\cdot)\) hyperbolic tangent and \(\odot\) Hadamard product. This architecture is widely used for Natural Language Processing (NLP).
To form a Stacked-GRU architecture, cells are bundled in two dimensions. The first stacking dimension is similar to the general fully connected NN (FCNN) layers. The output \(\mathbf{q}_{t}\) of the preceding cell is passed to the feature input \(\mathbf{d}_{t}\) of the subsequent cell. By \(L\) we denote the total number of layers in the Stacked-GRU network. The second stacking dimension defines the recurrent structure of the network. The output of the preceding cells is passed as the hidden state \(\mathbf{q}_{t-1}\) to the following cells. The initial hidden state of the network \(\mathbf{q}_{0}\) is set to zero. We denote the total number of _time steps_ by \(T\).
For a single input vector, the Stacked-GRU NN generates \(T\) vectors at the outputs of the last layer. We denote the matrix of stacked output vectors by \(\mathbf{Q}^{(L)}=[\mathbf{q}_{1}^{(L)},\ldots,\mathbf{q}_{T}^{(L)}]\in\mathbb{R}^{(2n-k)\times T}\), where the superscript \(L\) denotes the index of the last layer and the subscript \(t\) denotes the GRU time step. In order to reduce the size of the Stacked-GRU output, its vectorized representation \(\text{vec}(\mathbf{Q}^{(L)})\) is passed to a single FC layer. The complete architecture of the noise estimator model is depicted in Fig. 2.
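A minimal PyTorch sketch of this estimator follows (ours, not the authors' code). We assume the same input vector is presented at every time step and that the final FC layer maps \(\text{vec}(\mathbf{Q}^{(L)})\) to an \(n\)-dimensional noise estimate, consistent with Algorithm 1:

```python
# Stacked-GRU noise estimator: L layers of hidden size 5n, the same input
# repeated for T time steps, outputs concatenated and fed to one FC layer.
import torch
import torch.nn as nn

class StackedGRUEstimator(nn.Module):
    def __init__(self, n=64, k=45, layers=4, T=5):
        super().__init__()
        self.T = T
        self.gru = nn.GRU(2 * n - k, 5 * n, num_layers=layers, batch_first=True)
        self.fc = nn.Linear(5 * n * T, n)   # vec(Q^(L)) -> noise estimate

    def forward(self, d):                   # d: (batch, 2n - k)
        seq = d.unsqueeze(1).repeat(1, self.T, 1)  # same input at each step
        out, _ = self.gru(seq)              # (batch, T, 5n)
        return self.fc(out.flatten(1))

z_hat = StackedGRUEstimator()(torch.randn(8, 2 * 64 - 45))  # shape (8, 64)
```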
For model training, the loss is calculated from the framework soft output and binary codeword. The Binary Cross-Entropy (BCE) with sigmoid function is utilized.
\[\begin{split}\mathcal{L}_{BCE}(\mathbf{\hat{\gamma}},\mathbf{c})& =-\frac{1}{n}\sum_{i=1}^{n}c_{i}\log\sigma(-\hat{\gamma}_{i})+\\ &(1-c_{i})\log(1-\sigma(-\hat{\gamma}_{i}))\end{split} \tag{8}\]
### _Soft-output quality optimization_
In order to optimize the soft-output distribution we propose to introduce a regularization term into the loss function for the last epochs of a model training procedure. Three types of regularization are proposed: Mean Squared Error (MSE), Kullback-Leibler (KL) divergence and moments-based. MAP decoder output LLRs \(\mathbf{\gamma}^{\star}\) are used as a reference.
MSE \(\mathcal{L}_{MSE}\) and KL divergence \(\mathcal{L}_{KL}\) regularizations are defined in a pointwise manner. Moments-based regularization
Fig. 1: Gated Recurrent Unit cell
Fig. 2: Stacked-GRU model architecture
\(\mathcal{L}_{M}\) is expressed as the weighted sum of the MSEs of the first and second moments of the decoder output distributions.
\[\mathcal{L}_{\text{MSE}}(\boldsymbol{\gamma}^{*},\boldsymbol{\hat{\gamma}})= \frac{1}{n}\sum_{i=1}^{n}(\gamma_{i}^{*}-\hat{\gamma}_{i})^{2} \tag{9}\]
\[\mathcal{L}_{\text{KL}}(\boldsymbol{\gamma}^{*},\boldsymbol{\hat{\gamma}})= \sum_{i=1}^{n}\gamma_{i}^{*}\cdot\log\frac{\gamma_{i}^{*}}{\hat{\gamma}_{i}} \tag{10}\]
\[\mathcal{L}_{\text{M}}(\boldsymbol{\gamma}^{*},\boldsymbol{\hat{\gamma}})=\rho_{\text{M}}\Big{(}\mathbb{E}(|\boldsymbol{\gamma}^{*}|)-\mathbb{E}(|\boldsymbol{\hat{\gamma}}|)\Big{)}^{2}+(1-\rho_{\text{M}})\Big{(}\mathbb{E}(|\boldsymbol{\gamma}^{*}|^{2})-\mathbb{E}(|\boldsymbol{\hat{\gamma}}|^{2})\Big{)}^{2} \tag{11}\]
The loss function with regularization term is expressed as
\[\mathcal{L}=\mathcal{L}_{\text{BCE}}+\alpha_{\text{Reg}}\mathcal{L}_{\text{Reg}} \tag{12}\]
where \(\mathcal{L}_{Reg}\) is the selected regularization metric and \(\alpha_{\text{Reg}}\) its weight coefficient.
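For illustration, a PyTorch sketch of the regularized loss (12) with the moments-based term is shown below (ours; the second-moment term follows the reconstruction of eq. (11) above, and the weights match the values reported in the text):

```python
# BCE loss of eq. (8) plus the moments-based regularizer, eqs. (11)-(12).
import torch
import torch.nn.functional as F

def loss_with_moments(gamma_hat, gamma_map, c, alpha_m=0.1, rho_m=0.95):
    # BCE with sigmoid on -LLRs: sigma(-gamma) is the estimated P(c = 1).
    bce = F.binary_cross_entropy_with_logits(-gamma_hat, c.float())
    m1 = (gamma_map.abs().mean() - gamma_hat.abs().mean()) ** 2
    m2 = ((gamma_map ** 2).mean() - (gamma_hat ** 2).mean()) ** 2
    return bce + alpha_m * (rho_m * m1 + (1.0 - rho_m) * m2)
```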
The results of the described regularization terms, applied to the optimization of the soft-output distribution of the NN decoder, are summarized in Table I and depicted in Fig. 3 for the moments-based approach. Table I evaluates the similarity of the NN-decoder output distribution to that of the MAP decoder in terms of the metrics used for regularization. Results are provided for the NN decoder trained on the \([64,45]\) BCH code, with the distribution evaluated at \(E_{s}/N_{0}=1\) dB. The regularization term weights were estimated empirically: \(\alpha_{\text{MSE}}=0.01\), \(\alpha_{\text{KL}}=10^{10}\), \(\alpha_{\text{M}}=0.1\), \(\rho_{\text{M}}=0.95\).
### _Simulation results_
To evaluate the decoding performance of the proposed framework, the Stacked-GRU model with a hidden size of \(5n\), 4 layers, and 5 time-steps, as in [16], was trained on zero codewords with a batch size of \(2^{13}\) codewords. The initial learning rate of the Adam optimizer [29] was set to \(10^{-3}\) with a further decrease to \(10^{-6}\) by the "reduce on plateau" scheduler. Initial training was performed with the BCE loss (8) only. MAP-based regularization terms were introduced for the last epochs only, due to the high complexity of MAP decoding2.
Footnote 2: we note that the proposed soft-output DNN can be utilized without the joint DNN-MAP fine-tuning stage if the latter is prohibited for complexity reasons.
The performance of the soft-output DNN decoder was compared with the Chase decoder, belief propagation with 50 decoding iterations, and NN-Tanner [7] with 20 decoding iterations.
## III NN iterative soft-output decoding
To work in iterative decoding schemes, the decoder must be able to produce a soft output. The Turbo Product Code (TPC) scheme was chosen to demonstrate the potential of using the proposed framework in iterative decoding schemes.
### _Turbo product code_
The Turbo Product Code (TPC) structure can be explained using the diagram in Fig. 5. A TPC is constructed from two component codes in systematic form with parameters \((n_{1},k_{1})\) and
Fig. 4: Bit error rate results for \([64,45]\) BCH code
Fig. 3: Output LLR distributions histogram for \(E_{s}/N_{0}=1\) dB, \([64,45]\) BCH code. The moments-based approach is used for regularization.
\((n_{2},k_{2})\) respectively. The encoding is performed in two steps. Initially, the information submatrix \(k_{1}\times k_{2}\) is encoded by the "column code" producing the "column checks" submatrix. Then the information and column check submatrices are encoded with the "row code", thus producing the "row checks" and "checks-on-checks" submatrices. The aggregate code rate of TPC is \(R=(k_{1}k_{2})/(n_{1}n_{2})\)[25].
The iterative TPC decoding procedure is summarized in Algorithm 2, where \(N\) denotes the number of decoding iterations, \(\mathcal{D}_{c}(\cdot),\mathcal{D}_{r}(\cdot)\) the column and row decoding functions, \(\mathbf{L}_{c}\in\mathbb{R}^{n_{2}\times n_{1}}\), \(\mathbf{L}_{r}\in\mathbb{R}^{n_{1}\times n_{2}}\) the extrinsic information matrices, and \(\alpha_{c}^{(i)},\alpha_{r}^{(i)}\in[0,1]\) the extrinsic LLR scale factors at the \(i\)-th iteration.
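Algorithm 2 itself is not reproduced here; the sketch below is our reading of the loop just described, in the usual turbo-product form, and the exact placement of the scale factors may differ from the paper's:

```python
# One possible turbo-product decoding loop: alternate soft-in soft-out
# column and row decoding, exchanging scaled extrinsic LLRs.
import numpy as np

def tpc_decode(Gamma, D_col, D_row, alpha_c, alpha_r, N):
    """Gamma: (n2, n1) channel LLRs; D_col/D_row map LLR matrices to LLRs."""
    L_r = np.zeros_like(Gamma)                # row extrinsic information
    out = Gamma
    for i in range(N):
        inp = Gamma + alpha_r[i] * L_r
        out = D_col(inp)                      # decode all columns
        L_c = out - inp                       # column extrinsics
        inp = Gamma + alpha_c[i] * L_c
        out = D_row(inp)                      # decode all rows
        L_r = out - inp                       # row extrinsics
    return out
```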
### _NN TPC decoding_
To utilize the soft-output DNN decoder we pre-train the model for component code decoding, as described in Section II, and then fine-tune it in the iterative scheme.
One of the advantages of the syndrome-based approach lies in its robustness to overfitting. The noise estimation NN model is trained on reliability vectors and syndromes, which do not depend on the transmitted codeword. Thus, the model can be trained on the zero codeword with different realizations of noise. However, the performance of a model trained on a defined range of signal-to-noise ratios degrades for values outside that range. This problem arises in iterative schemes, since with each iteration the absolute values of the output LLRs grow. The proposed iterative decoding approach does not require training a separate model for each decoding iteration. The same pretrained DNN decoder is utilized for all TPC iterations. To solve the issue of growing LLRs, we apply \(L^{1}\) batch normalization to the input of the decoding framework.
In the NN-TPC decoding scheme, the extrinsic LLR scale factors \(\alpha_{c},\alpha_{r}\) are initialized as trainable parameters; thus, during the fine-tuning stage, their optimal values are computed by gradient descent jointly with the decoding model.
The loss function for the NN-TPC fine-tuning stage is the exponentially weighted sum of BCE loss (8) of all decoding iterations.
\[\boldsymbol{\beta}=[e^{0},\ldots,e^{2N-1}] \tag{13}\] \[\mathcal{L}_{\text{NN-TPC}}=\frac{1}{2N\|\boldsymbol{\beta}\|_{1 }}\sum_{j=1}^{2N}\beta_{j}\mathcal{L}_{BCE}(\widehat{\mathbf{\Gamma}}_{j}, \mathbf{C}) \tag{14}\]
where \(\widehat{\mathbf{\Gamma}}_{j}\) denotes the column/row-decoded message LLRs at iteration \(\lceil j/2\rceil\) and \(\mathbf{C}\) is the transmitted binary TPC message.
### _Simulation results_
To evaluate the proposed framework in an iterative decoding scheme, we use a TPC with the \([64,45]\) BCH code as the component code. The soft-output DNN model was initially trained on the component code, as described in Section II-E. Then the model was fine-tuned in the TPC decoding scheme with a learning rate of \(10^{-6}\) for 4000 epochs with a batch size of \(256\). The extrinsic scales \(\alpha_{c},\alpha_{r}\) were initialized to \(0.7\).
The performance of the soft-output DNN decoder is compared to the Chase-Pyndiah algorithm [24] for \(N=2\) and \(N=4\) TPC decoding iterations. Chase-Pyndiah results were obtained with the AFF3CT toolbox [30].
Further research directions are as follows. In real-world applications with increasing requirements on latency,
Fig. 5: TPC structure
Fig. 6: FER results for TPC decoding scheme with BCH(64,45) as component code, 2 iterations
memory, and power usage, there is a challenge of reducing decoder complexity without loss of decoding performance. The DNN utilized for noise estimation in this paper has relatively high complexity (\(2.2\cdot 10^{6}\) parameters). The selection of an optimal architecture for soft-output decoding remains an open question. Apart from that, we point out adaptive quantization of model weights [31, 32, 33], approximation of activation functions [33], and pruning of model weights [34] as potential directions for reducing the complexity of the soft-output DNN.
|
2304.02202 | Towards Self-Explainability of Deep Neural Networks with Heatmap
Captioning and Large-Language Models | Heatmaps are widely used to interpret deep neural networks, particularly for
computer vision tasks, and the heatmap-based explainable AI (XAI) techniques
are a well-researched topic. However, most studies concentrate on enhancing the
quality of the generated heatmap or discovering alternate heatmap generation
techniques, and little effort has been devoted to making heatmap-based XAI
automatic, interactive, scalable, and accessible. To address this gap, we
propose a framework that includes two modules: (1) context modelling and (2)
reasoning. We proposed a template-based image captioning approach for context
modelling to create text-based contextual information from the heatmap and
input data. The reasoning module leverages a large language model to provide
explanations in combination with specialised knowledge. Our qualitative
experiments demonstrate the effectiveness of our framework and heatmap
captioning approach. The code for the proposed template-based heatmap
captioning approach will be publicly available. | Osman Tursun, Simon Denman, Sridha Sridharan, Clinton Fookes | 2023-04-05T03:29:37Z | http://arxiv.org/abs/2304.02202v1 | Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models
###### Abstract
Heatmaps are widely used to interpret deep neural networks, particularly for computer vision tasks, and the heatmap-based explainable AI (XAI) techniques are a well-researched topic. However, most studies concentrate on enhancing the quality of the generated heatmap or discovering alternate heatmap generation techniques, and little effort has been devoted to making heatmap-based XAI automatic, interactive, scalable, and accessible. To address this gap, we propose a framework that includes two modules: (1) context modelling and (2) reasoning. We proposed a template-based image captioning approach for context modelling to create text-based contextual information from the heatmap and input data. The reasoning module leverages a large language model to provide explanations in combination with specialised knowledge. Our qualitative experiments demonstrate the effectiveness of our framework and heatmap captioning approach. The code for the proposed template-based heatmap captioning approach will be publicly available.
## 1 Introduction
Deep neural networks have continued to achieve very promising results for various machine learning and computer vision applications. However, they are commonly referred to as "black-box" models, as their decision-making process lacks transparency and interpretability; though many attempts have been made to explain the hidden behaviour behind these models.
One widely studied interpretation approach is generating heatmaps to interrogate a neural network's decision. A heatmap is a graphical representation that highlights important elements of the data on which the neural network focuses for its final decision. Heatmaps are useful for gaining insights into neural networks, and much research has been devoted to increasing the accuracy and improving the quality of heatmap visualisation [16, 18, 21]. However, accurately interpreting heatmaps requires contextual knowledge and specialised knowledge of deep neural networks themselves, and without this information, any interpretation may be incomplete or misleading. Moreover, end-user accessibility, scalability and the automation of heatmap interpretation
Figure 1: A demonstration of the proposed framework for self-explainable heatmap-based XAI. In this example, ChatGPT is used for generating an XAI report.
are limited. Therefore, here we focus on how to make heatmap-based explainable artificial intelligence (XAI) automatic, scalable, interpretable and interactive, so machine learning models will be more reliable and transparent across a range of applications and for users without specialist domain knowledge.
Our approach involves combining image captioning with the power of large-language models such as GPT-3 [2], allowing us to extract critical task-related contextual information and easily access specialised expert knowledge. Both heatmap captioning and interaction with a large language model could be fully automatic, making the process scalable. The process can also be made interactive through user prompts based on the response of the large language model. For example, as shown in Figure 1, the heatmap captions are sent to ChatGPT 1, which is developed based on GPT-3.5 [11]. In this example, we asked ChatGPT to explain the capability of the neural network and comment on whether it is able to separate the dog and cat classes. The chatbot generates a report which includes expert knowledge and is based on the specific context in this case, and the process is fully automatic. Depending on the needs, various general and technical questions can be asked. This makes heatmap-based XAI approachable, interactive, automatic and scalable.
Footnote 1: chat.openai.com
The primary objective of this study is on generating meaningful captions for heatmaps, which is not only the first step but also the most challenging part of our proposed framework. Although image captioning has achieved a marked improvement in recent years [1, 20], most methods are trained using natural image and text pairs in a supervised manner. To the best of our knowledge, there is no such dataset for heatmaps. Creating synthetic heatmap and text pairs is one potential approach, while generating synthetic captions for heatmaps is a non-trivial task as heatmaps have irregular shapes and positions in images. Moreover, heatmap captions take both source images and heatmap images into consideration. Therefore, existing approaches can't be directly applied to heatmap captioning.
In this work, to address the aforementioned issues, we propose a simple template-based approach. The approach uses heatmaps to localise important regions in an image, and then extracts key attributes to generate captions. This approach is highly extendable as new attributes can be added to extract additional information from the heatmap and its source image.
Overall, the contribution of this work is summarised as: (i) We propose a fully-automatic framework for self-explainable heatmap-based XAI. This framework enables automatic, scalable and accessible XAI. (ii) We propose a template-based approach for captioning heatmaps, which is useful for extracting text-based contextual information from heatmaps. (iii) Through our qualitative experiments, we have demonstrated the potential of our framework and methodology.
## 2 Related Studies
In this study, our primary focus is on heatmap-based XAI approaches designed for black-box models such as neural networks. These approaches have gained popularity since the success of AlexNet, and many different techniques have been proposed for generating heatmaps to explain the behaviour of neural networks. These techniques are generally grouped into three categories: gradient-based [15, 17], class activation-based [14, 22], and perturbation-based [3, 4] methods.
Recently, the efficiency and quality of heatmap generation have significantly improved with the development of advanced techniques [12, 16, 18, 19, 21]. Despite these advancements, expert interpretation is still required to fully understand the heatmaps and take the given task context into consideration. As Kim _et al_. [7] discussed, a heatmap-based explanation is intuitive to high-AI background end-users, while not intuitive to low-AI background end-users. Heatmap captions and accessible specialised knowledge will reduce the gap between high-AI and low-AI end-users, making them more accessible as an interpretation method.
Human-in-the-loop XAI is an emerging area of research that focuses on incorporating human feedback into the XAI process. The proposed framework supports human feedback and interaction, making the XAI more approachable, consumable, and interactive. To address the importance of human feedback in XAI, Kim et al. [6] proposed a human-centred evaluation framework that takes into account the user's perspective in the evaluation process. Similarly, Lai et al. [8] presented a framework for generating selective explanations by leveraging human input, which enhances the understandability of AI explanations. These studies demonstrate the potential of human-in-the-loop XAI to improve the effectiveness and usability of XAI systems.
## 3 Method
In this work, we propose a new framework for scalable, self-interpretable, and interactive heatmap-based explainable artificial intelligence (XAI). As shown in Figure 2, this framework consists of two main modules: _context_ and _reasoning_.
The context module extracts task-specific information from the input image and its corresponding heatmap(s), using the proposed heatmap captioning approach. The reasoning module leverages a large-language model to analyse this context information in combination with specialised expert knowledge. This process can be fully-automatic, making it highly scalable, but can also be made interactive by incorporating user feedback. For simplicity, we apply ChatGPT, the state-of-the-art large language model-based dialogue bot, for reasoning, while for the context module we will use the approach described in the following section.
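For concreteness, the hand-off from the context module to the reasoning module can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: it assumes the legacy `openai` chat-completion client, and the model name, prompt wording, and the `generate_xai_report` helper are our own illustrative choices.

```python
# Minimal sketch of the context -> reasoning hand-off (illustrative, not the
# paper's implementation).  Assumes the legacy openai-python client and that
# openai.api_key has been set beforehand.
import openai

def generate_xai_report(heatmap_caption, question):
    """Send a heatmap caption (context) and a user question to the LLM (reasoning)."""
    prompt = (
        "A neural network heatmap is described as follows:\n"
        f"{heatmap_caption}\n{question}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```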
### Captioning Heatmap with A Template-based Approach
To extract context information, captioning the heatmap image is very important. We, therefore, propose a template-based method to generate captions for heatmaps. Compared to mainstream deep learning-based end-to-end image captioning methods, template-based approaches do not require supervised training on image-text pairs. Another advantage of the template-based approach is its extendibility. Additional attributes can be added if required. As shown in Figure 3, we extract four attributes from the objects located within the given heatmap. Here we briefly explain these attributes and the steps for extracting them.
To extract those four attributes, an image \(I\) and its grayscale heatmap \(H\) will be utilised. In this paper, SESS [18] with GradCAM [14] is chosen for extracting heatmaps. \(H\) is used to localise objects and salient regions while \(I\) is for the recognition and extraction of details. All objects under \(H\) are localised by thresholding \(H\) and applying connected component analysis. Each connected region is considered a single object and its rectangular bounding box is considered to be the bounding box of the object. Here, the notation \((x_{i},y_{i},w_{i},h_{i})\) represents the rectangle bounding box of the object \(i\). For each located object, the following four attributes will be extracted.
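For concreteness, the localisation step can be sketched as follows, assuming OpenCV; the binarisation threshold is an illustrative choice, since its value is not stated at this point in the text.

```python
import cv2
import numpy as np

def locate_objects(heatmap, thresh=0.5):
    """Threshold a [0, 1] grayscale heatmap and return one bounding box per
    connected component; the threshold value here is an illustrative choice."""
    binary = (heatmap >= thresh).astype(np.uint8)
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, num_labels):  # label 0 is the background
        x, y, w, h = (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                      stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
        boxes.append((x, y, w, h))
    return boxes
```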
The first attribute is the object identity. To identify the object, CLIP [13], a language-image model, is applied as it has a strong zero-shot classification ability. For object \(i\), the cropped region \(I[x_{i}:x_{i}+w_{i},y_{i}:y_{i}+h_{i}]\) is sent to CLIP for classification. In this study, a ViT-B/16 Transformer model and COCO classification labels [9] are used for classification.
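A minimal sketch of this zero-shot classification step is given below, using the public `clip` package; the prompt template and the way the label list is passed in are our assumptions, not details stated in the paper.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def classify_crop(crop, labels):
    """Zero-shot classify a cropped object region (a PIL image) with CLIP,
    e.g. with `labels` set to the COCO classification labels."""
    image = preprocess(crop).unsqueeze(0).to(device)
    text = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)
    return labels[int(probs.argmax())]
```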
The second attribute is the global position and size of the object. The centre of the bounding box of the object is used to localise the object. \(I\) is equally divided into nine regions: top-left, centre-left, bottom-left, top-centre, centre, bottom-centre, top-right, centre-right and bottom-right. The region in which the object's centre \((x_{i},y_{i})\) is located is considered to be the object's position within the image. The size of the object relative to the image is also included, which is equal to \((w_{i}\cdot h_{i})/size(I)\).
The third attribute is the salient regions of the object, where we seek the regions of the object that contribute most to the decision of the model. For object \(i\), \(H[x_{i}:x_{i}+w_{i},y_{i}:y_{i}+h_{i}]\) is equally divided into nine regions, as was done for the global position attribute. The three regions with the highest mean intensity values are considered the most salient.
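Since both grid-based attributes share the same nine-region partition, they can be sketched together. The helper below is an illustration; in particular, it assumes \((x_{i},y_{i})\) is the top-left corner of the bounding box and derives the centre from it, and it assumes the heatmap crop is at least 3x3 pixels.

```python
GRID_NAMES = [["top-left", "top-center", "top-right"],
              ["center-left", "center", "center-right"],
              ["bottom-left", "bottom-center", "bottom-right"]]

def global_position_and_size(box, image_shape):
    """Attribute 2: which of the nine image regions holds the box centre,
    plus the object's size relative to the image."""
    x, y, w, h = box
    H, W = image_shape[:2]
    cx, cy = x + w / 2, y + h / 2  # centre derived from a top-left (x, y) box
    row, col = min(int(3 * cy / H), 2), min(int(3 * cx / W), 2)
    return GRID_NAMES[row][col], (w * h) / (H * W)

def salient_regions(heatmap_crop, k=3):
    """Attribute 3: the k of nine sub-regions with the highest mean intensity."""
    h, w = heatmap_crop.shape
    means = {}
    for r in range(3):
        for c in range(3):
            cell = heatmap_crop[r * h // 3:(r + 1) * h // 3,
                                c * w // 3:(c + 1) * w // 3]
            means[GRID_NAMES[r][c]] = float(cell.mean())
    return sorted(means, key=means.get, reverse=True)[:k]
```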
The last attribute of the object specifies its dominant colour. This is determined using a colour naming algorithm applied to the HSV colour space, which is capable of naming the colour of a pixel with one of 93 semantic colour names, as described in [10]. To identify the dominant colours for object \(i\), the algorithm selects foreground pixels from the region \(I[x_{i}:x_{i}+w_{i},y_{i}:y_{i}+h_{i}]\) based on their heatmap intensity values. A pixel is treated as foreground if its intensity exceeds 0.5. The algorithm then assigns a colour name to each selected pixel and calculates the percentage of each colour. The three colours with the highest percentages are then identified as the dominant colours for the object.
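This attribute can be sketched in the same spirit; the 93-name HSV colour-naming algorithm of [10] is abstracted into a `names_fn` callback below, since its lookup table is not reproduced here.

```python
def dominant_colours(image_crop, heatmap_crop, names_fn, k=3):
    """Attribute 4: name the colour of each foreground pixel (heatmap
    intensity > 0.5) and return the k most frequent names.  `names_fn`
    stands in for the 93-name HSV colour-naming algorithm of [10]."""
    mask = heatmap_crop > 0.5
    fg_pixels = image_crop[mask]                  # (N, 3) colour values
    counts = {}
    for px in fg_pixels:
        name = names_fn(px)
        counts[name] = counts.get(name, 0) + 1
    total = max(sum(counts.values()), 1)
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return [(name, n / total) for name, n in ranked]
```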
After the attribution extraction process is complete, the resulting attributes are inserted into predetermined templates to generate the final caption, as shown in Figure 3.
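The assembly step is then plain string formatting. The template below is reconstructed from the caption strings quoted in Section 4 (e.g., "Object 1 is located on the top-center side of the image. It occupies 13.33% of the image. ..."), so its exact wording is inferred rather than taken from the paper's code.

```python
TEMPLATE = ("Object {idx} is located on the {position} side of the image. "
            "It occupies {size:.2f}% of the image. It is a {identity}. "
            "Its {r1}, {r2} and {r3} parts are mostly considered important "
            "by the model. The main colours of it and its background are "
            "{c1}, {c2}, and {c3}.")

def caption_object(idx, identity, position, rel_size, regions, colours):
    """Fill the (reconstructed) template with the four extracted attributes."""
    r1, r2, r3 = regions
    c1, c2, c3 = colours
    return TEMPLATE.format(idx=idx, position=position, size=100 * rel_size,
                           identity=identity, r1=r1, r2=r2, r3=r3,
                           c1=c1, c2=c2, c3=c3)
```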
Figure 3: A diagram of the proposed template-based heatmap captioning method, which combines four different attributes to generate a heatmap caption.
Figure 2: An illustration of the proposed framework. It consists of two main modules: _context_ and _reasoning_. The context module generates contextual information through captioning heatmaps. The reasoning module leverages a large language model for analysing contextual information and user feedback in combination with specialised knowledge.
## 4 Experiment
In this section, we provide qualitative results, including generated captions and XAI reports produced with ChatGPT.
In Figure 4, we display examples of the captions generated by the proposed template-based heatmap captioning approach. These heatmaps are extracted from a ResNet50 [5]. The proposed method can generate captions for both single-object scenes (a, b, c) and multiple-object scenes (d), covering all four key attributes. The generated captions are sensitive to changes in the heatmap, as demonstrated in (a-b). However, there are some specific instances where the generated captions are not entirely correct. For example, in (c), the heatmap covers both the "human driver" and the "go-kart", but the caption only includes one object, identified as a "car". Similarly, in (d), one object is labelled as "kite", while the other, which is identical, is labelled as "bird". These errors are due to the limitations of heatmap-based object localisation and zero-shot classification, and could be mitigated with state-of-the-art object localisation and classification techniques.
### Generating XAI Reports with ChatGPT
To test whether the generated captions are informative enough for generating meaningful XAI reports with a large language model such as ChatGPT, we generated reports by sending the captions to ChatGPT. We asked ChatGPT to answer questions including: (i) Is the model working properly? (ii) What are the possible shortcomings of the model? (iii) Is a classification model able to locate certain types of objects?
When the correct captions and well-formed prompts are given, ChatGPT produces a well-written and informative XAI report. Examples of these reports are presented in the supplementary materials. The prompt for ChatGPT includes the following content:
* How a heatmap is generated. For example, which neuron is used for extracting the heatmap.
* Heatmap caption(s). This is generated with the proposed template-based heatmap captioning approach, and are shown in Figure 4 of the main paper. However, we slightly modified the captions from Figures (c-d) to help obtain a reasonable answer from ChatGPT.
* A question regarding the neural network. We underlined these questions in the following examples. We also underlined the key parts of ChatGPT's responses to those questions.
**Prompt for Figure 4 (a-b)**: _A neural network is used for extracting two heatmaps for an image, with Heatmap1 being a
Figure 4: Qualitative results of generated captions. In each example, a caption is generated based on the image and its heatmap.
generated form the "tiger cat" neuron and showing the activation on a cat object in the given image, and Heatmap 2 being generated from the "bull mastif" neuron and showing the activation on a dog object in the given object. Here are detailed information about heatmaps: Heatmap1: In this image, one object is detected under the heatmap. Object 1 is located on the top-center side of the image. It occupies 13.33% of the image. It is a dog. Its center, center-right and top-center parts are mostly considered important by the model. The main colours of it and its background are pale orange, orange, and pale bright orange. Heatmap2: In this image, one object is detected under the heatmap. Object 1 is located on the bottom-middle side of the image. It occupies 23.79% of the image. It is a cat. Its bottom-left, center-left and center-right parts are mostly considered important by the model.The main colours of it and its background are pale orange, orange, and pale yellow. Can this neural network accurately classify the image as either a cat or dog, and what is the basis for this conclusion?_
**ChatGPT**: _Based on the information provided, the neural network has generated two heatmaps, one for detecting a cat and another for detecting a dog in the image. The description of Heatmap1 shows that the detected object is a dog, located on the top-center side of the image and occupying 13.33% of the image. The network has identified the center, center-right, and top-center parts of the dog as the most important for its detection, and the main colors of the object and its background are pale orange, orange, and pale bright orange. On the other hand, the description of Heatmap2 shows that the detected object is a cat, located on the bottom-middle side of the image and occupying 23.79% of the image. The network has identified the bottom-left, center-left, and center-right parts of the cat as the most important for its detection, and the main colors of the object and its background are pale orange, orange, and pale yellow. Since the network has generated separate heatmaps for detecting a cat and a dog in the image, and each heatmap shows a high degree of activation on the corresponding object, it is likely that the neural network is capable of accurately classifying the image as either a cat or a dog. However, without additional information about the neural network's training data, architecture, and performance, it is difficult to make a definitive conclusion about its accuracy. It is possible that the network may have difficulty distinguishing between similar-looking objects or may not generalize well to other images. Further evaluation and testing would be necessary to determine the neural network's classification accuracy more precisely._
**Prompt for Figure 4 (c)**: _A neural network classified an image as "go-kart", and a heatmp is generated through visualising its most activated neuron. Here is the description of this heatmap: "In this image, one object is detected under the heatmap. Object 1 is located on the center side of the image. It occupies 68.44% of the image. It is a go-kart with a human driver. Its top-center, bottom-center and center parts are mostly considered important by the model. The main colours of it and its background are pale yellow, pale orange, and black." What is the possible shortcoming of this neural network. hint: the human driver and the go-cart objects have the same degree of saliency._
**ChatGPT**: _One possible shortcoming of this neural network is its lack of attention to distinguishing features that could differentiate the human driver from other objects or individuals. The fact that the model assigns the same degree of saliency to both the go-kart and the human driver suggests that the network may not be taking into account important details or features that could impact its classification._
_For example, if the network is trained on a dataset of go-karts without human drivers, it may not have learned to differentiate between a go-kart with and without a driver. Alternatively, it is possible that the network's architecture or training data may not have been optimized to detect and classify human drivers accurately._
_In either case, the lack of attention to distinguishing features could be a potential shortcoming of this neural network, as it may result in misclassifications or incorrect predictions when applied to other images. To address this issue, the network could be trained on a more diverse dataset that includes images of go-karts with and without human drivers, as well as other types of vehicles and objects to ensure that it can accurately distinguish between them. Additionally, modifications to the network's architecture or training data may be necessary to improve its ability to identify and differentiate between relevant features._
**Prompt for Figure 4 (d)**: _A neural network is used for extracting a heatmap for an image. Detailed information of the heatmap: In this image, two objects are detected under the heatmap. Object 1 is located on the center-left side of the image. It occupies 12.28% of the image. It is a bird. Its center, center-right and top-center parts are mostly considered important by the model. The main colours of it and its background are blue, pale yellow, and grey. Object 2 is located on the center-right side of the image. It occupies 8.93% of the image. It is a bird. Its center, center-right and top-center parts are mostly considered important by the model. The main colours of it and its background are blue, pale yellow, and white. Based on the heatmap information, is this network useful for locating a bird object?_
**ChatGPT**: _Based on the information provided, it is likely that the neural network is capable of accurately detecting and locating bird objects in an image. The heatmap generated by the network indicates that two bird objects were detected in the image, each occupying a distinct region of the image and identified by specific regions of importance, including the center, center-right, and top-center parts of
each object. Furthermore, the description of the main colors of each object and its background suggests that the network is capable of identifying and distinguishing between different color patterns associated with bird objects. Overall, the heatmap information provided suggests that the neural network is useful for locating bird objects in images, particularly those with distinct color and feature patterns that are easily distinguishable by the network. However, it is important to note that the accuracy and effectiveness of the network may vary depending on the quality and diversity of the training data and the complexity and adaptability of the network's architecture._
## 5 Future Work and Conclusion
In this study, we present a promising framework for automatic, interactive, and scalable XAI report generation using a large-language model. Specifically, we utilise the proposed template-based heatmap caption generation approach to provide contextual information for a large-language-model-based reasoning module, such as ChatGPT. Our results demonstrate the promise of this approach, yet also highlight the importance of an accurate template-based captioning approach, and further improvements are required to realise fully automatic XAI report generation. We observed that while the generated captions were informative, they lacked diversity and could contain redundant information. Future research could explore the use of deep learning-based image captioning approaches to address these limitations. Furthermore, we found that the reports generated by ChatGPT were not concise and required well-designed prompts. Despite these shortcomings, designing a large-language model specifically for XAI report generation is a promising research direction.
|
2302.13095 | Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts | In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN. It has been observed and studied that a relatively small set of interactive concepts usually emerge in the knowledge representation of a sufficiently-trained neural network, and such concepts can faithfully explain the network output. Based on this, our study proves that compared to standard deep neural networks (DNNs), it is less likely for BNNs to encode complex concepts. Experiments verify our theoretical proofs. Note that the tendency to encode less complex concepts does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. The code is available at https://github.com/sjtu-xai-lab/BNN-concepts. | Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang | 2023-02-25T14:56:35Z | http://arxiv.org/abs/2302.13095v2 |

# Bayesian Neural Networks Tend to Ignore Complex and Sensitive Concepts
###### Abstract
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN. It has been observed and studied that a relatively small set of interactive concepts usually emerge in the knowledge representation of a sufficiently-trained neural network, and such concepts can faithfully explain the network output. Based on this, our study proves that compared to standard deep neural networks (DNNs), it is less likely for BNNs to encode complex concepts. Experiments verify our theoretical proofs. Note that the tendency to encode less complex concepts does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. _The code will be released when the paper is accepted._
## 1 Introduction
Unlike standard deep neural networks (DNNs), Bayesian neural networks (BNNs) represent network weights as probability distributions. Therefore, BNNs exhibit distinctive representation capacities from standard DNNs. Existing studies (Blundell et al., 2015; Gal and Smith, 2018; Kristiadi et al., 2020; Carbone et al., 2020; Wenzel et al., 2020; Krishnan et al., 2020; Zhang et al., 2022) usually analyzed BNNs in terms of generalization power, adversarial robustness, and optimization.
In contrast to the above studies, this paper proposes a new perspective to investigate the representation capacity of BNNs, _i.e._, we discover and theoretically prove that BNNs are less likely to encode sensitive and complex concepts than standard DNNs. In fact, such a property brings in specific advantages to feature representations of the BNN. To be precise, we limit our research to the scope of **mean-field variational BNNs**(Blundell et al., 2015), which is one of the most commonly used BNNs. Thus, in this paper, we just use the term \(BNN\) to refer to mean-field variational BNNs.
**Common phenomenon of concept emergence in various neural networks.** Although it is well-known that a neural network does not explicitly encode concepts like graphical models, recent studies (Ren et al., 2021, 2023; Deng et al., 2021) have discovered a common concept-emerging phenomenon that neural networks usually _implicitly_ encode a small number of interactive concepts for inference, which have been observed in different neural networks for various tasks. Specifically, each interactive concept represents an AND relationship among a set of input variables.
For example, we can use \(I(S=\{\text{eyes},\text{nose},\text{mouth}\})=U_{S}\cdot\text{exist}(\text{eyes})\cdot\text{exist}(\text{nose})\cdot\text{exist}(\text{mouth})\) to illustrate the AND relationship for the face concept in image classification. If any image patch in the set \(S=\{\text{eyes},\text{nose},\text{mouth}\}\) is masked, then the face concept will be deactivated, and the numerical effect of this concept is removed (\(I(S)=0\)) and no longer influences the network output.
More importantly, interactive concepts can be considered as faithful inference patterns encoded by the neural network. It is because Ren et al. (2021) has proved that people can use a relatively small number of interactive concepts to well mimic the inference logic of the neural network on a certain input sample. That is, numerical effects of these concepts always well predict diverse network outputs, no matter how the input sample is masked.
**BNNs ignore complex and sensitive concepts.** Based on the interactive concepts, in this paper, we discover and theoretically prove that _compared to standard DNNs, it is more difficult for a neural network to encode complex interactive concepts, as long as it has weight uncertainty_. The complexity of an interactive concept \(S\) is defined as the number of variables in the set \(S\), _i.e._, \(\operatorname{complexity}(S)=|S|\). \(|S|\) is also termed the _order_ of the interactive concept.
We prove the above conclusion through three steps. First, it is difficult to theoretically analyze interactive concepts encoded by BNNs, because BNNs represent network weights as probability distributions. To this end, we find that we can usually use a _surrogate DNN model_, which is constructed by adding perturbations to both the input and low-layer features of a standard DNN, to approximate feature representations
of a BNN. In this way, we can directly analyze the surrogate DNN model with feature uncertainty, instead of investigating the BNN with weight uncertainty.
Second, we prove that in the surrogate DNN model, high-order interactive concepts are more sensitive to random perturbations than low-order interactive concepts.
Third, we prove that the sensitivity makes high-order interactive concepts difficult to be learned when features are perturbed. In this way, we can conclude that high-order interactive concepts are also less likely to be learned by the BNN when its weights are perturbed.
In addition, experiments showed that the strength of high-order (complex) interactive concepts encoded by BNNs was weaker than those encoded by standard DNNs, which verified the above theoretical conclusion.
**Note that our proof does NOT mean that a BNN has limited representation capacity.** Instead, we just demonstrate the distinctive tendency of avoiding encoding complex (high-order) interactive concepts, when weight uncertainty is introduced into the neural network. This does not mean that BNNs have weaker representation power than standard DNNs. If the task loss requires to encode complex concepts, then our research indicates that the BNN must reduce its weight uncertainty, to some extent.
**Practical values and advantages of avoiding encoding complex concepts.** Although we prove that BNNs tend to avoid encoding complex concepts, it is not necessarily a disadvantage of the BNN, compared to standard DNNs. On the contrary, it has been found that compared to simple (low-order) interactive concepts, complex (high-order) interactive concepts encoded by a neural network usually have poorer generalization ability (Lengerich et al., 2022) and are more vulnerable to adversarial attacks (Ren et al., 2021). Thus, encoding less complex concepts may be an advantage. Please see Appendix D for experiments that show the high adversarial vulnerability of high-order interactive concepts.
## 2 BNNs ignore complex and sensitive concepts
Unlike standard DNNs, a BNN represents each weight in the network as a probability distribution, instead of a scalar. In this paper, we limit the scope of our study to mean-field variational BNNs (Blundell et al., 2015), where all weights \(\mathbf{W}\) are formulated as a Gaussian distribution \(\mathcal{N}(\mathbf{W};\mathbf{\mu},\mathbf{\Sigma})\), and the covariance matrix \(\mathbf{\Sigma}\) is diagonal. Other types of BNNs (_e.g._, BNNs based on the Monte Carlo Dropout (Gal and Ghahramani, 2016)) are not discussed. The BNN learns parameters \(\mathbf{\theta}=(\mathbf{\mu},\mathbf{\Sigma})\), and we use \(q_{\mathbf{\theta}}(\mathbf{W})\) to represent the weight distribution. Let us consider a classification task with the training data \(\mathcal{D}=\{(\mathbf{x}^{(1)},y^{(1)}),\ldots,(\mathbf{x}^{(n)},y^{(n)})\}\). Training a BNN is to minimize the Kullback-Leibler (KL) divergence between the distribution \(q_{\mathbf{\theta}}(\mathbf{W})\) and the posterior distribution \(p(\mathbf{W}|\mathcal{D})\).
\[\mathbf{\theta}^{*} =\operatorname*{argmin}_{\mathbf{\theta}}\mathrm{KL}[q_{\mathbf{\theta}} (\mathbf{W})\|p(\mathbf{W}|\mathcal{D})] \tag{1}\] \[=\operatorname*{argmin}_{\mathbf{\theta}}-\mathbb{E}_{\mathbf{W}\sim q_{ \mathbf{\theta}}(\mathbf{W})}[\log p(\mathcal{D}|\mathbf{W})]+\mathrm{KL}[q_{\mathbf{\theta}} (\mathbf{W})\|p(\mathbf{W})],\]
where the first term is the classification loss, and the second term is the KL divergence between \(q_{\mathbf{\theta}}(\mathbf{W})\) and the prior distribution \(p(\mathbf{W})\), which is usually formulated as a Gaussian distribution \(\mathcal{N}(\mathbf{W};\mathbf{0},\mathbf{I})\). In addition, given a testing sample \(\mathbf{x}\), the inference of the BNN is conducted as follows. First, network weights are sampled from the weight distribution \(q_{\mathbf{\theta}}(\mathbf{W})\) to construct multiple neural networks. Then, each network is used to conduct inference on the sample \(\mathbf{x}\), and the final inference result \(p(y|\mathbf{x})\) is computed as the average classification probability of all the networks.
\[p(y|\mathbf{x})=\mathbb{E}_{\mathbf{W}\sim q_{\mathbf{\theta}}(\mathbf{W})}[p(y|\mathbf{x},\mathbf{W})] \tag{2}\]
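As a concrete illustration of Eqs. (1)-(2), a mean-field variational layer can be sketched with the reparameterisation trick. This is a minimal sketch rather than the paper's implementation: the initialisation constants are arbitrary, and the bias is kept deterministic for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with a factorised Gaussian q(W) = N(mu, diag(sigma^2))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.05)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
        self.bias = nn.Parameter(torch.zeros(d_out))  # deterministic for brevity

    def forward(self, x):
        # Reparameterisation trick: W = mu + sigma * eps, eps ~ N(0, I).
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return F.linear(x, w, self.bias)

    def kl(self):
        """KL[q(W) || N(0, I)], the regulariser in Eq. (1)."""
        s2 = (2 * self.log_sigma).exp()
        return 0.5 * (s2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

@torch.no_grad()
def predict(model, x, n_samples=20):
    """Eq. (2): average class probabilities over sampled weights."""
    return torch.stack([model(x).softmax(-1) for _ in range(n_samples)]).mean(0)
```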
### Preliminaries: emergence of sparse concepts
The learning of neural networks is usually regarded as a fitting problem between the ground-truth label and the
Figure 1: (a) Illustration of interactive concepts encoded by a neural network. Each interactive concept \(S\) corresponds to an AND relationship among a specific set \(S\) of input variables (image patches). \(C_{S}\) represents the activation state of the concept \(S\). The patches \(x_{1}\) and \(x_{6}\) are masked, so that concepts \(S_{1}\) and \(S_{3}\) are deactivated, _i.e._, \(C_{S_{1}}=0\) and \(C_{S_{3}}=0\). (b) Experiments demonstrate the common concept-emerging phenomenon. Neural networks with various architectures all encode sparse interactive concepts. In other words, most interactive concepts have near-zero effects, _i.e._, \(I(S)\approx 0\), and can be considered as noises; only a relatively small number of interactive concepts have significant effects. For better visualization, the interactive concepts are sorted by strengths in descending order.
model prediction, which does not require explicit learning of specific concepts. However, recent studies (Ren et al., 2021, 2023; Deng et al., 2021) have empirically discovered that in many tasks, sparse AND relationships between input variables were usually implicitly encoded by a neural network, when the neural network was sufficiently trained. As shown in Figure 1(a), these AND relationships can be viewed as specific types of _interactive concepts_, which will be introduced in the **interactive concepts** paragraph.
Although counter-intuitive, this concept-emerging phenomenon does exist in various neural networks. Furthermore, such interactive concepts have been used to prove the representation bottleneck of the neural network (Deng et al., 2021) and obtain optimal masking states for attribution methods (Ren et al., 2023). **We also verify the trustworthiness of using interactive concepts to explain neural networks in experiments (see the end of this section).**
**Interactive concepts.**Ren et al. (2021) proposed the interaction effect \(I(S)\) to study the emergence of concepts. Let us consider a pre-trained neural network \(v\) and an input sample \(\mathbf{x}=[x_{1},\ldots,x_{n}]\) with \(n\) input variables indexed by \(N=\{1,\ldots,n\}\). Let \(\Omega\) denote a set of interactive concepts extracted from the network. Each interactive concept \(S\in\Omega\) corresponds to the collaboration (AND relationship) between input variables in a specific set \(S\subseteq N\), thus \(\Omega\subseteq 2^{N}=\{S|S\subseteq N\}\). For instance, as Figure 1(a) shows, a concept \(S=\{x_{1},x_{2},x_{3}\}\) is formed due to the co-occurrence of the three image patches. The concept will be activated and make a certain interaction effect \(I(S)\) on the network output, only if the patches \(x_{1},x_{2},x_{3}\) are all present. In contrast, the absence (masking) of any patch among \(x_{1}\), \(x_{2}\), and \(x_{3}\) will deactivate the concept and remove the interaction effect, _i.e._, \(I(S|\mathbf{x}^{\text{mask}})=0\).
Specifically, the interaction effect \(I(S|\mathbf{x})\) on the sample \(\mathbf{x}\) is computed by the Harsanyi dividend1 (Harsanyi, 1963).
Footnote 1: Please see Appendix E for properties and trustworthiness of the Harsanyi dividend.
\[I(S|\mathbf{x})=\sum\nolimits_{T\subseteq S}(-1)^{|S|-|T|}\cdot v(\mathbf{x}_{T}). \tag{3}\]
If \(I(S|\mathbf{x})\) has a significant value, then the neural network is considered to encode an interactive concept \(S\); otherwise, if \(I(S|\mathbf{x})\approx 0\), the concept \(S\) does not exist. Here, \(\mathbf{x}_{T}\) denotes the masked input sample, where variables in \(N\setminus T\) are masked and variables in \(T\) are kept unchanged. Besides, \(v(\mathbf{x}_{T})\in\mathbb{R}\) can be computed as a scalar output of the neural network on the masked sample \(\mathbf{x}_{T}\) (_e.g._, the confidence score of classifying the input sample \(\mathbf{x}_{T}\) to the ground-truth category \(v(\mathbf{x}_{T})=\text{log}\frac{p(y=y_{\text{data}}|\mathbf{x}_{T})}{1-p(y=y_{ \text{data}}|\mathbf{x}_{T})}\)).
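Eq. (3) translates directly into code by enumerating the \(2^{|S|}\) subsets of \(S\). The sketch below is illustrative, assuming \(\mathbf{x}\) and the reference values are NumPy vectors, `S` is a sequence of variable indices, and `v` returns the scalar output \(v(\mathbf{x}_{T})\).

```python
from itertools import combinations

def harsanyi(v, x, S, reference):
    """Eq. (3): I(S|x) = sum over all T subseteq S of (-1)^{|S|-|T|} * v(x_T),
    where x_T keeps the variables in T and masks all the others to their
    reference values.  v maps a sample to a scalar network output."""
    total = 0.0
    for r in range(len(S) + 1):
        for T in combinations(S, r):
            x_T = reference.copy()        # start from the fully masked sample
            x_T[list(T)] = x[list(T)]     # restore the variables kept in T
            total += (-1) ** (len(S) - len(T)) * v(x_T)
    return total
```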
**Faithfulness of interactive concepts.** Given an input sample \(\mathbf{x}\) with \(n\) variables, we have \(2^{n}\) different ways to mask the sample \(\mathbf{x}\) and obtain the masked sample \(\mathbf{x}_{T}\)_w.r.t._ all subsets \(T\subseteq N\). To this end, Ren et al. (2021) proved that
\[\exists\ \Omega\subseteq 2^{N},\ s.t.\ \forall\ T\subseteq N,v(\mathbf{x}_{T})= \sum\nolimits_{S\in\Omega,S\subseteq T}I(S|\mathbf{x}), \tag{4}\]
where \(2^{N}=\{S|S\subseteq N\}\). The equation indicates that interactive concepts in \(\Omega\) can well mimic network outputs on all the \(2^{n}\) masked samples. Thus, we can consider that all interactive concepts in the set \(\Omega\) as faithful inference patterns encoded by the neural network.
**Sparsity of interactive concepts.** More crucially, extensive experiments (Ren et al., 2021; Deng et al., 2021; Ren et al., 2023) discovered that interactive concepts emerging in a neural network are usually very sparse. Figure 1(b) shows that most interactive concepts have near-zero interaction effects (\(|I(S|\mathbf{x})|\approx 0\)), thus having negligible influence on the network output. Only a few salient interactive concepts have significant effects \(I(S|\mathbf{x})\) on the network output. In this way, the network output can be mimicked by only a few salient interactive concepts in \(\Omega_{\text{salient}}\).
\[\forall\ T\subseteq N,\ \ v(\mathbf{x}_{T})=\sum\nolimits_{S\in\Omega_{\text{salient}},S\subseteq T}I(S|\mathbf{x})+\epsilon \tag{5}\]
The above equation decomposes the output \(v(\mathbf{x}_{T})\) into two parts: (1) effects of all salient interactive concepts in \(\Omega_{\text{salient}}\), and (2) a small residual term \(\epsilon\) containing negligible effects of all non-salient interactive concepts.
**Empirically verifying the sparsity of concepts.** Based on Eq. (5), in the following analysis, _only salient interactive concepts in \(\Omega_{\text{salient}}\) are regarded as valid concepts encoded by a neural network._ We empirically verify the emergence of sparse concepts in various neural networks, including multi-layer perceptrons (MLPs), residual multi-layer perceptrons (ResMLPs) (Touvron et al., 2022), long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), and convolutional neural networks (CNNs), and on different datasets, including tabular data (Census dataset and TV news dataset (Dua and Graff, 2017)), language data (CoLA (Warstadt et al., 2019) and SST-2 (Socher et al., 2013)), and image data (MNIST (LeCun et al., 1998)). (1) Figure 1(b) verifies that concepts encoded by various neural networks are all sparse. (2) Appendix G verifies that the training process boosts the sparsity of concepts.
**Complexity of a neural network representing a concept.** In many previous studies (Deng et al., 2021; Wang et al., 2021; Zhang et al., 2020), the complexity of an interactive concept \(S\) was measured by the number of variables in the set \(S\) (also termed the _order_ of the interactive concept), _i.e._, \(\operatorname{complexity}(S)=\operatorname{order}(S)=|S|\). Then, a low-order concept represents a simple collaboration among a few input variables, while a high-order concept represents a complex collaboration among many input variables.
### Approximating weight uncertainty by adding input perturbations
In this paper, we aim to prove that compared to standard DNNs, it is more difficult for a network with weight uncertainty to encode high-order (complex) interactive concepts. Note that previous studies [10, 22] found that a DNN encoding less complex concepts was **NOT** necessarily equivalent to a weak representation capacity. Instead, it usually boosts the generalization power and adversarial robustness. Please see Appendix D for experiments on the high adversarial vulnerability of high-order concepts. In addition, as discussed in the last two paragraphs of the introduction, the BNN can still encode complex concepts when it learns small variances.
Unlike standard DNNs, a BNN formulates each weight as a probability distribution, which boosts the difficulty of theoretically analyzing interactive concepts encoded in a BNN. Therefore, in this subsection, we first show experimentally that introducing uncertainty to the weights of a BNN can be approximated by adding perturbations to input variables and low-layer features. In other words, we add random perturbations to both input variables and low-layer features of a standard DNN, and we demonstrate that such a perturbed DNN performs as a _surrogate DNN model_, which well approximates feature representations of a BNN.
Let us consider a feed-forward BNN, which has \(L\) cascaded linear layers and ReLU layers. Given an input sample \(\mathbf{x}\in\mathbb{R}^{D_{0}}\) (\(D_{0}=n\)), the feature of the \(l\)-th layer \(\mathbf{h}^{(l)}\in\mathbb{R}^{D_{l}}\) (\(1\leq l\leq L\)) is computed as follows.
\[\mathbf{h}^{(l)}=\mathbf{W}^{(l)}(\cdots\mathbf{\Phi}^{(1)}(\mathbf{W}^{(1)}\mathbf{x}+\mathbf{b}^{(1 )})\cdots)+\mathbf{b}^{(l)}, \tag{6}\]
where \(\mathbf{W}^{(l)}\in\mathbb{R}^{D_{l}\times D_{l-1}}\) and \(\mathbf{b}^{(l)}\in\mathbb{R}^{D_{l}}\) denote the weight matrix and bias of the \(l\)-th linear layer, respectively. In the BNN, \(W^{(l)}_{ij}\sim\mathcal{N}(\overline{W}^{(l)}_{ij},(\sigma^{(l)}_{ij})^{2})\) is independently sampled from Gaussian distributions. We use \(\mathbf{\mu}_{\mathbf{W}^{(l)}}=[\overline{W}^{(l)}_{ij}]\in\mathbb{R}^{D_{l}\times D _{l-1}}\) to denote the mean of the weight matrix. Besides, \(\mathbf{b}^{(l)}\sim\mathcal{N}(\mathbf{\mu}_{\mathbf{b}^{(l)}},\mathbf{\Sigma}_{\mathbf{b}^{(l)}})\), where \(\mathbf{\Sigma}_{\mathbf{b}^{(l)}}\) is a diagonal matrix. The diagonal matrix \(\mathbf{\Phi}^{(l)}=\mathrm{diag}(\phi^{(l)}_{1},\cdots,\phi^{(l)}_{D_{l}})\in \{0,1\}^{D_{l}\times D_{l}}\) denotes binary gating states of the \(l\)-th ReLU layer.
Then, we construct the surrogate DNN model with the same architecture as the BNN, to approximate the BNN's feature distribution. Parameters of this surrogate DNN model \(\mathbf{\psi}\) are set as the mean of the weight distribution and the mean of the bias distribution in the BNN, _i.e._, \(\mathbf{\psi}=\{\mathbf{\mu}_{\mathbf{W}^{(l)}},\mathbf{\mu}_{\mathbf{b}^{(l)}}\}_{l=1}^{L}\). Given an input sample \(\mathbf{x}\), we add perturbations \(\Delta\mathbf{x}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\Delta\mathbf{x}})\) to input variables and perturbations \(\Delta\mathbf{h}^{(l^{\prime})}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\Delta\mathbf{h}^{(l^{\prime})}})\) to features between the first layer and the \((l-1)\)-th layer in the surrogate DNN model (\(1\leq l^{\prime}\leq l-1\)). In this way, we can obtain the distribution of the \(l\)-th layer feature \(\tilde{\mathbf{h}}^{(l)}\) in the surrogate DNN model, denoted as \(p_{\text{DNN}}(\tilde{\mathbf{h}}^{(l)}|\mathbf{\Delta})\), where \(\mathbf{\Delta}=\{\mathbf{\Sigma}_{\Delta\mathbf{x}},\mathbf{\Sigma}_{\Delta\mathbf{h}^{(1)}},\ldots,\mathbf{\Sigma}_{\Delta\mathbf{h}^{(l-1)}}\}\) denotes the set of perturbation covariances. These covariances are learned such that the surrogate model's feature distribution approximates that of the BNN, \(p_{\text{BNN}}(\mathbf{h}^{(l)})\), _i.e._,

\[\mathbf{\Delta}^{*}=\operatorname*{argmin}_{\mathbf{\Delta}}\mathrm{KL}(p_{\text{BNN}}(\mathbf{h}^{(l)})\|p_{\text{DNN}}(\tilde{\mathbf{h}}^{(l)}|\mathbf{\Delta})) \tag{7}\]
For each trained BNN, we constructed a corresponding surrogate DNN model. Please see Appendix I for implementation details.
Figure 2 shows that the feature distribution of the surrogate DNN model well matched the feature distribution of the BNN. Furthermore, we used the KL divergence \(\mathrm{KL}(p_{\text{BNN}}(\mathbf{h}^{(l)})\|p_{\text{DNN}}(\mathbf{\tilde{h}}^{(l)}|\mathbf{\Delta}))\) in Eq. (7) to measure the approximation error. To compare with \(\mathrm{KL}(p_{\text{BNN}}(\mathbf{h}^{(l)})\|p_{\text{DNN}}(\mathbf{\tilde{h}}^{(l)}|\mathbf{\Delta}))\), we further constructed a simple baseline distribution of the features \(p_{\text{base}}(\mathbf{h}^{(l)})=\mathcal{N}(\hat{\mu}\mathbf{1},\hat{\sigma}^{2}\mathbf{I})\), where \(\hat{\mu}\) and \(\hat{\sigma}^{2}\) denote the mean and the variance over all feature dimensions of the BNN, respectively. We computed \(\mathrm{KL}(p_{\text{BNN}}(\mathbf{h}^{(l)})\|p_{\text{base}}(\mathbf{h}^{(l)}))\) for comparison. Table 1 shows that the approximation error of the surrogate DNN model was significantly smaller than the approximation error of the baseline distribution.
Experimental results showed that the weight uncertainty in a BNN could be well approximated by adding random perturbations to both input variables and low-layer features.
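For illustration, one stochastic forward pass of such a surrogate DNN model may be sketched as follows; the perturbation scales are passed as plain scalars here, whereas in the paper the covariances are fitted to minimise the KL divergence in Eq. (7).

```python
import torch

@torch.no_grad()
def surrogate_forward(layers, x, input_std, feature_stds):
    """One stochastic forward pass of the surrogate DNN model: weights are the
    posterior means, and Gaussian perturbations are added to the input and to
    every feature below the l-th layer.  `layers` is a list of torch.nn.Linear
    modules, e.g. [nn.Linear(784, 256), nn.Linear(256, 256), nn.Linear(256, 10)];
    `feature_stds` holds one noise scale per hidden layer (illustrative)."""
    h = x + input_std * torch.randn_like(x)
    for layer, std in zip(layers[:-1], feature_stds):
        h = torch.relu(layer(h))
        h = h + std * torch.randn_like(h)
    return layers[-1](h)
```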
### High-order concepts are sensitive
In this subsection, we theoretically prove that high-order interactive concepts are more sensitive to perturbations than low-order interactive concepts. In the next subsection, we will prove that sensitive interactive concepts are difficult to be learned by a neural network.
Note that according to Section 2.2, introducing the weight uncertainty in a BNN can be approximated by adding random perturbations to both input variables and features of different layers. However, simultaneously adding perturbations to features of multiple layers significantly boosts the difficulty of analysis. Fortunately, adding perturbations to output features of the \(l\)-th layer can be considered as perturbing input variables of the \((l+1)\)-th layer. Hence, in this subsection, we just analyze interactive concepts in a simple case where we perturb input variables in a certain layer, instead of analyzing the complex case of simultaneously perturbing features of different layers.
To prove that high-order interactive concepts are more sensitive to input perturbations than low-order interactive concepts, let us first derive the analytical form of the interaction effect \(I(S)\) of an interactive concept.
**Lemma 2.1** (Proof in Appendix F.1).: _Given a neural network \(v\) and an arbitrary input sample \(\mathbf{x}^{\prime}\in\mathbb{R}^{n}\), the network output can be decomposed using the Taylor expansion \(v(\mathbf{x}^{\prime})=\sum_{S\subseteq N}\sum_{\mathbf{\pi}\in Q_{S}}U_{S,\mathbf{\pi}}\cdot J(S,\mathbf{\pi}|\mathbf{x}^{\prime})\). In this way, according to Eq. (3), the interaction effect \(I(S|\mathbf{x}^{\prime})\) on the sample \(\mathbf{x}^{\prime}\) can be reformulated as_

\[I(S|\mathbf{x}^{\prime})=\sum\nolimits_{\mathbf{\pi}\in Q_{S}}U_{S,\mathbf{\pi}}\cdot J(S,\mathbf{\pi}|\mathbf{x}^{\prime}), \tag{8}\]

_where \(J(S,\mathbf{\pi}|\mathbf{x}^{\prime})=\prod_{i\in S}\left(\mathrm{sign}(x^{\prime}_{i}-r_{i})\cdot\frac{x^{\prime}_{i}-r_{i}}{\tau}\right)^{\pi_{i}}\) denotes an expansion term of the degree \(\mathbf{\pi}\), \(\mathbf{\pi}\in Q_{S}=\{[\pi_{1},\ldots,\pi_{n}]|\forall i\in S,\pi_{i}\in\mathbb{N}^{+};\forall i\not\in S,\pi_{i}=0\}\), and \(U_{S,\mathbf{\pi}}=\frac{\tau^{m}}{\prod_{i=1}^{n}\pi_{i}!}\frac{\partial^{m}v(\mathbf{x})}{\partial x_{1}^{\pi_{1}}\cdots\partial x_{n}^{\pi_{n}}}\cdot\prod_{i\in S}[\mathrm{sign}(x^{\prime}_{i}-r_{i})]^{\pi_{i}}\) with \(m=\sum_{i=1}^{n}\pi_{i}\)._
Lemma 2.1 provides a new perspective to analyze the sensitivity of the interaction effect \(I(S)\). In particular, just like in Ren et al. (2021) and Ren et al. (2023), we mask the input variable \(x_{i}\) by setting it to its reference value \(x_{i}\gets r_{i}\). The reference value \(r_{i}\) is designed as follows. Let \(\mathbb{E}_{\mathbf{x}}[x_{i}]\) denote the average value of the input variable \(x_{i}\) over all input samples, which is usually regarded as a no-information state of this input variable (Ancona et al., 2019). In this paper, we remove the information from the input variable \(x_{i}\) by pushing \(x_{i}\) by a large enough distance \(\tau\) towards its mean value. In other words, if \(x_{i}>\mathbb{E}_{\mathbf{x}}[x_{i}]\), we set the reference value \(r_{i}=x_{i}-\tau\); otherwise, \(r_{i}=x_{i}+\tau\). Here, \(\tau\in\mathbb{R}\) is a pre-defined constant. In this way, compared to setting \(r_{i}=\mathbb{E}_{\mathbf{x}}[x_{i}]\), the above setting ensures comparable perturbation magnitudes over different input dimensions.
Furthermore, in order to simplify the proof, when we add a small Gaussian perturbation \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) to the sample \(\mathbf{x}\), we ignore the extremely low possibility of large perturbations \(|\epsilon_{i}|\geq\tau\) because the variance \(\sigma^{2}\) is small.
Let us start with a simple case in Lemma 2.1. Since people usually adopt low-order Taylor expansion for approximation in real implementations, we first approximate the interaction effect \(I(S|\mathbf{x}^{\prime})\) using the expansion term of the lowest degree, and analyze the influence of input perturbations on \(I(S|\mathbf{x}^{\prime})\).
**Theorem 2.2** (Proof in Appendix F.2).: _Let \(\mathbf{\hat{\pi}}\) denote the lowest degree of the expansion terms of the interaction effect \(I(S|\mathbf{x}^{\prime})\), i.e., \(\forall i\in S,\hat{\pi}_{i}=1;\forall i\not\in S,\hat{\pi}_{i}=0\). Let us consider the interaction effect \(I(S|\mathbf{x}^{\prime})\) only containing the expansion term of the lowest degree, i.e., \(I(S|\mathbf{x}^{\prime})=U_{S,\mathbf{\hat{\pi}}}\cdot J(S,\mathbf{\hat{\pi}}|\mathbf{x}^{ \prime})\). In this way, the mean and variance of the interaction effect \(I(S|\mathbf{x}^{\prime}=\mathbf{x}+\mathbf{\epsilon})\) over different perturbations \(\mathbf{\epsilon}\) are given as_
\[\begin{split}\mathbb{E}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})] &=U_{S,\mathbf{\hat{\pi}}},\\ \mathrm{Var}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]& =U_{S,\mathbf{\hat{\pi}}}^{2}((1+(\sigma/\tau)^{2})^{|S|}-1).\end{split} \tag{9}\]
Theorem 2.2 proves that _the variance \(\mathrm{Var}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]\) increases along with the order \(|S|\) of the interactive concept in an exponential manner_. It indicates that high-order interactive concepts are much more sensitive to input perturbations than low-order concepts. Furthermore, as mentioned in Section 2.2, since we can add perturbations to a surrogate DNN model to well mimic feature representations of a BNN, **we can consider that high-order interactive concepts encoded by the BNN are much more sensitive to weight uncertainty in the BNN than low-order concepts.**
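Eq. (9) admits a quick Monte-Carlo check. The toy sketch below (our own illustration with arbitrary constants and \(U_{S,\hat{\mathbf{\pi}}}=1\)) compares the empirical variance of the lowest-degree term against the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, tau, U = 0.05, 1.0, 1.0

for order in (1, 2, 4, 8):
    eps = rng.normal(0.0, sigma, size=(100_000, order))
    # Lowest-degree term of I(S|x+eps): U * prod_{i in S} (1 + eps_i / tau).
    I = U * np.prod(1.0 + eps / tau, axis=1)
    predicted = U**2 * ((1 + (sigma / tau) ** 2) ** order - 1)  # Eq. (9)
    print(f"order {order}: empirical var {I.var():.6f}, predicted {predicted:.6f}")
```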
**Theorem 2.3** (Proof in Appendix F.3).: _Let \(\mathbf{\pi}\in Q_{S}=\{[\pi_{1},\ldots,\pi_{n}]|\forall i\in S,\pi_{i}\in\mathbb{N}^{+} ;\forall i\not\in S,\pi_{i}=0\}\) denote an arbitrary degree. Then, the mean and the variance of \(J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})\) over perturbations \(\mathbf{\epsilon}\) are_
\[\mathbb{E}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})] =\mathbb{E}_{\mathbf{\epsilon}}[\prod\nolimits_{i\in S}(1+\frac{ \epsilon_{i}}{\tau})^{\pi_{i}}], \tag{10}\] \[\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})] =\mathrm{Var}_{\mathbf{\epsilon}}[\prod\nolimits_{i\in S}(1+\frac{ \epsilon_{i}}{\tau})^{\pi_{i}}]\]
Theorem 2.3 extends Theorem 2.2 to a **general case**, where we use a higher-order Taylor expansion to represent \(I(S|\mathbf{x}^{\prime})\).
**Theorem 2.4** (Proof in Appendix F.4).: _Let \(S\) and \(S^{\prime}\) be two interactive concepts, such that \(S\subseteq S^{\prime}\). Let us consider expansion terms \(J(S,\mathbf{\pi})\) and \(J(S^{\prime},\mathbf{\pi}^{\prime})\), where the term \(J(S^{\prime},\mathbf{\pi}^{\prime})\) is extended from the term \(J(S,\mathbf{\pi})\) with \(\mathbf{\pi}\prec\mathbf{\pi}^{\prime}\). I.e., (1) \(\forall i\in S^{\prime},\pi^{\prime}_{i}\in\mathbb{N}^{+}\); otherwise, \(\pi^{\prime}_{i}=0\). (2) Given \(\mathbf{\pi}^{\prime}\), \(\forall j\in S,\pi_{j}=\pi^{\prime}_{j}\); otherwise, \(\pi_{j}=0\). Then, we have_
\[\frac{\mathrm{Var}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{\prime }|\mathbf{x}+\mathbf{\epsilon})]}{\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\bm {\epsilon})]}>\prod\nolimits_{i\in S^{\prime}\setminus S}\mathbb{E}_{\epsilon_ {i}}^{2}[(1+\frac{\epsilon_{i}}{\tau})^{\pi^{\prime}_{i}}], \tag{11}\] \[\frac{\mathbb{E}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{\prime}| \mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{ \prime}|\mathbf{x}+\mathbf{\epsilon})]}{\mathbb{E}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x }+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{ \epsilon})]}\] \[<\frac{1}{\prod\nolimits_{i\in S^{\prime}\setminus S}\mathbb{E}_{ \epsilon_{i}}[(1+\frac{\epsilon_{i}}{\tau})^{\pi^{\prime}_{i}}]},\]
_and we can also obtain \(\mathbb{E}_{\epsilon_{i}}[(1+\frac{\epsilon_{i}}{\tau})^{\pi^{\prime}_{i}}]\geq 1\)._
Theorem 2.4 indicates that for an arbitrary degree \(\mathbf{\pi}\) of the interactive concept \(\mathbf{S}\), \(\mathrm{Var}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{\prime}|\mathbf{x}+\mathbf{ \epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})]\) increases in an exponential manner along with \(|S^{\prime}\setminus S|=|S^{\prime}|-|S|\). Therefore, we can roughly consider that \(\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})]\) increases exponentially _w.r.t._ the order \(|S|\). Furthermore, according to Lemma 2.1, \(I(S|\mathbf{x}+\mathbf{\epsilon})\) can be re-written as the weighted sum of \(J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})\). Since coefficients \(U_{S,\mathbf{\pi}}\)_w.r.t._ different \(S\) and \(\mathbf{\pi}\) are usually chaotic, we can roughly consider that the sensitivity of \(I(S|\mathbf{x}+\mathbf{\epsilon})\) also grows exponentially along with the order \(|S|\) of the interactive concept \(S\). In addition, Theorem 2.4 also proves the approximately exponential decrease of \(\frac{\mathbb{E}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{\prime}|\mathbf{x}+\mathbf{ \epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[J(S^{\prime},\mathbf{\pi}^{\prime}|\mathbf{x}+ \mathbf{\epsilon})]}{\mathbb{E}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{ \epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[J(S,\mathbf{\pi}|\mathbf{x}+\mathbf{\epsilon})]}\) along with \(|S^{\prime}|-|S|\). Similarly, we can obtain that the relative stability \(\mathbb{E}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{ \epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]\) decreases along with the order \(|S|\).
**Conclusions.** Both Theorem 2.2 and Theorem 2.4 tell us that high-order interactive concepts are much more sensitive to input perturbations. Furthermore, combined with the conclusion in Section 2.2, **we can conclude that high-order interactive concepts encoded by the BNN are much more sensitive to the weight uncertainty in the BNN than low-order concepts.**
**Experimental verification.** We conducted experiments to verify the above conclusions. To verify the sensitivity to input perturbations, we added a random perturbation \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) to a given input sample \(\mathbf{x}\), where \(\sigma^{2}=0.05^{2}\). Then, we used the following two metrics, \(V^{(s)}_{\text{noise}}=\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{|S|=s}[\mathrm{Var}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}[I(S|\mathbf{x}+\mathbf{\epsilon})]]]\) and \(K^{(s)}_{\text{noise}}=\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{|S|=s}[\frac{\mathbb{E}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}[I(S|\mathbf{x}+\mathbf{\epsilon})]}{\mathrm{Var}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}[I(S|\mathbf{x}+\mathbf{\epsilon})]}]]\), to measure the average variance and the average relative stability of the \(s\)-order interactive concepts _w.r.t._ the input perturbation \(\mathbf{\epsilon}\). Then, a large \(V^{(s)}_{\text{noise}}\) or a small \(K^{(s)}_{\text{noise}}\) indicated that the \(s\)-order interactive concepts were sensitive to input perturbations.
Similarly, to verify the sensitivity to the weight uncertainty, we sampled different weights \(\mathbf{W}\) from the weight distribution \(q_{\mathbf{\theta}}(\mathbf{W})\) of the BNN. Then, we used \(V^{(s)}_{\text{BNN}}=\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{|S|=s}[\mathrm{Var}_{\mathbf{W}\sim q_{\mathbf{\theta}}(\mathbf{W})}[I(S|\mathbf{x},\mathbf{W})]]]\) and \(K^{(s)}_{\text{BNN}}=\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{|S|=s}[\frac{\mathbb{E}_{\mathbf{W}\sim q_{\mathbf{\theta}}(\mathbf{W})}[I(S|\mathbf{x},\mathbf{W})]}{\mathrm{Var}_{\mathbf{W}\sim q_{\mathbf{\theta}}(\mathbf{W})}[I(S|\mathbf{x},\mathbf{W})]}]]\) to measure the average variance and the average relative stability of the \(s\)-order interactive concepts _w.r.t._ the weight uncertainty in the BNN. Therefore, a large value of \(V^{(s)}_{\text{BNN}}\) or a small value of \(K^{(s)}_{\text{BNN}}\) indicated that the \(s\)-order interactive concepts were sensitive to the weight uncertainty. We followed experimental settings in the _experiments_ paragraph in Section 2.2 to train BNNs. Specifically, we trained BNNs with the MLP architecture on the MNIST dataset, the TV news dataset, and the Census dataset. We trained BNNs with the LeNet architecture on the CIFAR-10 dataset. Appendix I introduces how to efficiently compute \(I(S|\mathbf{x})\) on images.
Figure 3 shows that the average variance \(V^{(s)}_{\text{noise}}\) and \(V^{(s)}_{\text{BNN}}\) increased exponentially along with the order \(s\), while the relative stability \(K^{(s)}_{\text{noise}}\) and \(K^{(s)}_{\text{BNN}}\) both decreased along with
Figure 3: (a) The exponential increase of the average variance \(V^{(s)}_{\text{noise}}\) and (b) the roughly exponential decrease of the average relative stability \(K^{(s)}_{\text{noise}}\) along with the order \(s\), under perturbations from a distribution \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},0.05^{2}\cdot\mathbf{I})\). (c) The exponential increase of the average variance \(V^{(s)}_{\text{BNN}}\) and (d) the roughly exponential decrease of the average relative stability \(K^{(s)}_{\text{BNN}}\) along with the order \(s\), under weight uncertainty in the BNN.
the order. This demonstrated that high-order interactive concepts were much more sensitive to input perturbations and the weight uncertainty in the BNN, thereby verifying Theorem 2.2 and Theorem 2.4.
### Sensitive concepts are difficult to learn
In this subsection, we prove that high-order interactive concepts, which are sensitive to input perturbations and weight uncertainty, are difficult to be learned by a BNN in a regression task. Specifically, we measure the learning effects of interactive concepts (denoted by \(U_{S}\)), and Theorems 2.6 and 2.7 prove the small learning effects of sensitive concepts.
To facilitate the analysis, we first simplify the conceptual learning as a linear problem. Specifically, we first rewrite the interaction effect of an interactive concept \(S\). Given an input sample \(\mathbf{x}\), according to Eq. (8), the interaction effect of the concept \(S\) on the sample \(\mathbf{x}^{\prime}\) (obtained by applying some transformations on \(\mathbf{x}\)), \(I(S|\mathbf{x}^{\prime})\), can be rewritten as
\[I(S|\mathbf{x}^{\prime})=U_{S}\cdot C_{S}(\mathbf{x}^{\prime}), \tag{12}\]
where the constant \(U_{S}=I(S|\mathbf{x})\) denotes the interaction effect of the concept \(S\), and the function for the activation state is given as \(C_{S}(\mathbf{x}^{\prime})=\sum_{\mathbf{\pi}\in Q_{S}}U_{S,\mathbf{\pi}}J(S,\mathbf{\pi}|\mathbf{ x}^{\prime})/U_{S}\).
**Theorem 2.5** (Proof in Appendix F.5).: _Given an arbitrarily masked sample \(\mathbf{x}_{T}(\forall T\subseteq N)\), the function \(C_{S}(\mathbf{x}_{T})\) defined above can well fit the binary activation state of the concept \(S\) in the sample \(\mathbf{x}_{T}\)._
\[\forall\,T\subseteq N,\;C_{S}(\mathbf{x}_{T})=\prod_{i\in S}A_{i}(\mathbf{x}_{T})=\mathbb{1}(S\subseteq T), \tag{13}\]
_where \(A_{i}(\mathbf{x}_{T})\in\{0,1\}\) denotes whether the variable \(x_{i}\) is present or being masked in the sample \(\mathbf{x}_{T}\)._
Theorem 2.5 shows that the function \(C_{S}(\mathbf{x}_{T})\) actually represents the AND relationship of the concept \(S\) under an arbitrary masking condition \(T\subseteq N\). Only when the variables in \(S\) are all present under the masking condition \(T\) is the concept \(S\) activated, \(C_{S}(\mathbf{x}_{T})=1\). If any of the variables in \(S\) is masked, then the concept \(S\) will not be activated, \(C_{S}(\mathbf{x}_{T})=0\), yielding zero interaction effect \(I(S|\mathbf{x}_{T})=0\).
Thus, we can extend Eq. (4) to a continuous version that explains the output as a **linear regression problem**.
\[v(\mathbf{x}^{\prime})=\sum\nolimits_{S\in\Omega}U_{S}\cdot C_{S}(\mathbf{x}^{\prime}), \tag{14}\]
where the activation state \(C_{S}(\mathbf{x}^{\prime})\) can be considered as an input dimension of the linear function, which reflects whether the input sample \(\mathbf{x}^{\prime}\) contains the concept \(S\).
Therefore, the absolute value of the coefficient \(U_{S}\) can be considered as _the strength of the neural network in learning the interactive concept \(S\)_. According to Section 2.1 and Ren et al. (2021), most interactive concepts have negligible coefficients \(|U_{S}|\approx 0\), so we can consider that the neural network only encodes a few interactive concepts \(S\) with large absolute values \(|U_{S}|\).
Let us conduct the proof on a regression task. Based on the conclusion in Section 2.2, we can roughly consider that training a BNN on normal samples is equivalent to training a surrogate DNN model on perturbed input samples \(\mathbf{x}^{\prime}=\mathbf{x}+\mathbf{\epsilon}\). Then, according to Eq. (14), the learning of the BNN on a certain input sample can be roughly represented as \(\min_{\{U_{S}|S\in\Omega\}}L(\{U_{S}\})\), and the loss is given by
\[\begin{split} L(\{U_{S}\})&=\mathbb{E}_{\mathbf{\epsilon }}\left[(y^{*}-v(\mathbf{x}^{\prime}))^{2}\right]\\ &=\mathbb{E}_{\mathbf{\epsilon}}[(y^{*}-\sum\nolimits_{S\in\Omega} U_{S}\cdot C_{S}(\mathbf{x}+\mathbf{\epsilon}))^{2}]\end{split} \tag{15}\]
where \(\mathbf{x}\) and \(y^{*}\) denote the input sample and the ground-truth output, respectively, and \(\mathbf{x}^{\prime}=\mathbf{x}+\mathbf{\epsilon}\).
**Theorem 2.6** (Proof in Appendix F.6).: _Given two random interactive concepts \(S\) and \(S^{\prime}\), we can roughly assume that \(C_{S}(\mathbf{x}+\mathbf{\epsilon})\) is independent of \(C_{S^{\prime}}(\mathbf{x}+\mathbf{\epsilon})\), because the two concepts \(S\) and \(S^{\prime}\) usually have little overlap in most cases. Let \(\mathbb{E}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]\) and \(\mathrm{Var}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]\) denote the mean and the variance of \(C_{S}(\mathbf{x}+\mathbf{\epsilon})\) w.r.t. \(\mathbf{\epsilon}\), respectively. Then, the solution to Eq. (15) satisfies the following property:_
\[\forall\;S\in\Omega,\quad|U_{S}^{*}|\propto|\mathbb{E}_{\mathbf{\epsilon}}[C_{S}( \mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]| \tag{16}\]
Theorem 2.6 proves that the learning effect of an interactive concept \(S\), measured by \(|U_{S}^{*}|\), is proportional to the relative stability of the activation state of the interactive concept \(|\mathbb{E}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{ \epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]|\) w.r.t. perturbations \(\mathbf{\epsilon}\). This indicates that sensitive interactive concepts are more difficult to learn. The experimental verification of this theorem is shown in Appendix H.
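The attenuation mechanism behind Theorem 2.6 can be illustrated with a one-dimensional toy regression (an assumption-laden sketch of ours, not part of the proof): for a single activation \(C=\mu+\sigma\epsilon\) fit to a fixed target \(y^{*}\), the least-squares coefficient \(\arg\min_{U}\mathbb{E}[(y^{*}-UC)^{2}]=y^{*}\mu/(\mu^{2}+\sigma^{2})\) shrinks as the activation becomes less stable.

```python
import numpy as np

rng = np.random.default_rng(0)
y, mu = 1.0, 0.8
for sigma in [0.05, 0.2, 0.8]:
    C = mu + sigma * rng.standard_normal(100_000)   # samples of C_S(x + eps)
    U = np.sum(C * y) / np.sum(C ** 2)              # empirical least-squares solution
    print(f"sigma={sigma:4.2f}  stability |E[C]|/Var[C]={mu / sigma**2:8.2f}  |U*|={abs(U):.3f}")
```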
**Theorem 2.7** (Proof in Appendix F.7).: _Let \(A^{\text{min}}=\min_{S}|U_{S}|\) and \(A^{\text{max}}=\max_{S}|U_{S}|\) denote the lower bound and the upper bound of \(|U_{S}|\) over all interactive concepts \(S\). Then, for any \(S\subseteq N\), we have_
\[\begin{split} A^{\text{min}}\cdot\frac{|\mathbb{E}_{\mathbf{\epsilon }}[I(S|\mathbf{x}+\mathbf{\epsilon})]|}{\mathrm{Var}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{ \epsilon})]}&\leq\frac{|\mathbb{E}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+ \mathbf{\epsilon})]|}{\mathrm{Var}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]}\\ &\leq A^{\text{max}}\cdot\frac{|\mathbb{E}_{\mathbf{\epsilon}}[I(S| \mathbf{x}+\mathbf{\epsilon})]|}{\mathrm{Var}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})] }\end{split} \tag{17}\]
Theorem 2.7 proves that high-order (complex) interactive concepts have low relative stability _w.r.t._ perturbations \(\mathbf{\epsilon}\). In fact, both Theorem 2.4 and Figure 3 have told us that \(|\mathbb{E}_{\mathbf{\epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{ \epsilon}}[I(S|\mathbf{x}+\mathbf{\epsilon})]|\) significantly decreases along with the order \(s=|S|\) of the interactive concept \(S\). Therefore, both the lower bound and the upper bound of \(|\mathbb{E}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]/\mathrm{Var}_{\mathbf{ \epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]|\) in Eq. (17) decrease along with the order \(s\) significantly. In this way, we can approximately consider that the strength of encoding a concept \(|U_{S}^{*}|\propto|\mathbb{E}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]/ \mathrm{Var}_{\mathbf{\epsilon}}[C_{S}(\mathbf{x}+\mathbf{\epsilon})]|\) also decreases along
with the order of interactive concepts. In other words, we prove that high-order interactive concepts are more difficult to learn under perturbations \(\mathbf{\epsilon}\). Combined with the conclusion in Section 2.2, this also proves that high-order interactive concepts are more difficult for the BNN to learn.
## 3 Experiments
In this section, we experimentally verified that compared to standard DNNs, BNNs were less likely to encode high-order (complex) interactive concepts. Specifically, we constructed three pairs of baseline networks for comparison.
(1) Given a trained BNN \(\mathbf{\theta}^{*}\), we constructed a standard DNN by setting its weights to the mean value of the weight distribution of the BNN. The standard DNN was denoted by \(\mathbf{\psi}_{\mathbf{\theta}^{*}}\). Then, we compared the strength of all high-order interactive concepts between the BNN \(\mathbf{\theta}^{*}\) and the standard DNN \(\mathbf{\psi}_{\mathbf{\theta}^{*}}\) without weight/feature uncertainty.
(2) Similarly, given a trained standard DNN \(\mathbf{\psi}^{*}\), we constructed a BNN \(\mathbf{\theta}_{\psi^{*}}\) by setting the mean value of its weight distribution to the weights of the standard DNN. We set all weight dimensions in the \(l\)-th layer of the BNN to share the same variance \(\sigma_{l}^{2}\), where \(\sigma_{l}^{2}\) was computed as the average of variances of all weight dimensions in the \(l\)-th layer of the previous BNN \(\mathbf{\theta}^{*}\). Then, we compared the strength of high-order interactive concepts between the standard DNN \(\mathbf{\psi}^{*}\) and the BNN \(\mathbf{\theta}_{\psi^{*}}\).
(3) We trained a standard DNN and a BNN with the same architecture. Then, we compared the strength of high-order interactive concepts between each pair of standard DNN \(\hat{\mathbf{\psi}}\) and the BNN \(\hat{\mathbf{\theta}}\) when these two networks were trained to have the same training accuracy. We used the training accuracy to align the learning progress of the two networks for fair comparison.
Specifically, the average strength of the \(s\)-order interactive concepts was measured as \(I_{\text{strength}}^{(s)}=\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{S\subseteq N,|S|=s}[|I(S|\mathbf{x})|]]\). To compute the interaction effect \(I(S|\mathbf{x})\), we set \(v(\mathbf{x}_{S})=\log\frac{p(y=y^{*}|\mathbf{x}_{S})}{1-p(y=y^{*}|\mathbf{x}_{S})}\in\mathbb{R}\), which reflected the confidence of classifying the masked input sample \(\mathbf{x}_{S}\) into the ground-truth category \(y^{*}\). For standard DNNs, \(p(y=y^{*}|\mathbf{x}_{S})\) referred to the classification probability of the ground-truth category on the masked sample \(\mathbf{x}_{S}\). For BNNs, \(p(y=y^{*}|\mathbf{x}_{S})\) was computed according to Eq. (2), where we sampled ten neural networks from the weight distribution \(q_{\mathbf{\theta}}(\mathbf{W})\) of the BNN, and computed the average classification probability over all these networks.
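For readers who want to reproduce this metric, the sketch below computes per-order average strengths for a toy model function, assuming the Harsanyi-dividend definition \(I(S|\mathbf{x})=\sum_{T\subseteq S}(-1)^{|S|-|T|}v(\mathbf{x}_{T})\) used in this line of work (Ren et al., 2021); the callable `v` below is a hypothetical stand-in for the masked network output.

```python
import numpy as np
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def interaction(v, S):
    # I(S|x) = sum over T subseteq S of (-1)^{|S|-|T|} v(x_T)
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in subsets(S))

def order_strength(v, n):
    N = range(n)
    return {s: float(np.mean([abs(interaction(v, S)) for S in combinations(N, s)]))
            for s in range(1, n + 1)}

# toy model: additive output plus one pairwise term, so orders >= 3 vanish
v = lambda T: sum(T) + 2.0 * (0 in T and 1 in T)
print(order_strength(v, n=4))
```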
We followed experimental settings in the _experiments_ paragraph in Section 2.2 to train the networks. Specifically, we trained standard DNNs and BNNs with the MLP architecture on the TV news dataset, the Census dataset, and the MNIST dataset. We trained standard DNNs and BNNs with the LeNet architecture on the CIFAR-10 dataset. Appendix I introduces how to efficiently compute \(I(S|\mathbf{x})\) on images. Figure 4 shows that the strength of high-order interactive concepts of BNNs was much weaker than that of standard DNNs in all comparisons. This verified that BNNs were less likely to encode high-order (complex) interactive concepts than standard DNNs.
## 4 Conclusion and discussion
In this paper, we have proven the tendency of mean-field variational BNNs to avoid encoding high-order (complex) concepts. Many studies (Ren et al., 2021; Deng et al., 2021; Ren et al., 2023) have shown that there does exist a concept-emerging phenomenon when a neural network is sufficiently trained.
Besides, as discussed in the introduction, encoding less complex concepts does not mean that BNNs have weaker representation power than standard DNNs, because a standard DNN can be considered as a specific BNN with zero weight uncertainty. More crucially, Ren et al. (2021) and Lengerich et al. (2022) proved that high-order concepts are usually vulnerable to adversarial attacks and have weak generalization power. Thus, it is hard to say whether the tendency to avoid encoding complex concepts is a demerit or not.
Figure 4: (a) Comparison of the strength of interactive concepts (i) between a trained BNN \(\mathbf{\theta}^{*}\) and the constructed standard DNN \(\mathbf{\psi}_{\mathbf{\theta}^{*}}\), (ii) between a trained standard DNN \(\mathbf{\psi}^{*}\) and the constructed BNN \(\mathbf{\theta}_{\psi^{*}}\). (b) We trained a standard DNN \(\hat{\mathbf{\psi}}\) and a BNN \(\hat{\mathbf{\theta}}\) with the LeNet architecture on the CIFAR-10 dataset, and compared the strength of interactive concepts between the two networks when the two networks were trained to have the same training accuracy. |
2302.03286 | Algorithmically Designed Artificial Neural Networks (ADANNs): Higher
order deep operator learning for parametric partial differential equations | In this article we propose a new deep learning approach to approximate
operators related to parametric partial differential equations (PDEs). In
particular, we introduce a new strategy to design specific artificial neural
network (ANN) architectures in conjunction with specific ANN initialization
schemes which are tailor-made for the particular approximation problem under
consideration. In the proposed approach we combine efficient classical
numerical approximation techniques with deep operator learning methodologies.
Specifically, we introduce customized adaptions of existing ANN architectures
together with specialized initializations for these ANN architectures so that
at initialization we have that the ANNs closely mimic a chosen efficient
classical numerical algorithm for the considered approximation problem. The
obtained ANN architectures and their initialization schemes are thus strongly
inspired by numerical algorithms as well as by popular deep learning
methodologies from the literature and in that sense we refer to the introduced
ANNs in conjunction with their tailor-made initialization schemes as
Algorithmically Designed Artificial Neural Networks (ADANNs). We numerically
test the proposed ADANN methodology in the case of several parametric PDEs. In
the tested numerical examples the ADANN methodology significantly outperforms
existing traditional approximation algorithms as well as existing deep operator
learning methodologies from the literature. | Arnulf Jentzen, Adrian Riekert, Philippe von Wurstemberger | 2023-02-07T06:39:20Z | http://arxiv.org/abs/2302.03286v2 | # Algorithmically Designed Artificial
###### Abstract
In this article we propose a new deep learning approach to solve parametric partial differential equations (PDEs) approximately. In particular, we introduce a new strategy to design specific artificial neural network (ANN) architectures in conjunction with specific ANN initialization schemes which are tailor-made for the particular scientific computing approximation problem under consideration. In the proposed approach we combine efficient classical numerical approximation techniques such as _higher-order Runge-Kutta schemes_ with sophisticated deep (operator) learning methodologies such as the recently introduced _Fourier neural operators (FNOs)_. Specifically, we introduce customized adaptions of existing standard ANN architectures together with specialized initializations for these ANN architectures so that at initialization we have that the ANNs closely mimic a chosen efficient classical numerical algorithm for the considered approximation problem. The obtained ANN architectures and their initialization schemes are thus strongly inspired by numerical algorithms as well as by popular deep learning methodologies from the literature and in that sense we refer to the introduced ANNs in conjunction with their tailor-made initialization schemes as _Algorithmically Designed Artificial Neural Networks_ (ADANNs). We numerically test the proposed ADANN approach in the case of some parametric PDEs. In the tested numerical examples the ADANN approach significantly outperforms existing traditional approximation algorithms as well as existing deep learning methodologies from the literature.
###### Contents
* 1 Introduction
* 2 A rough overview of the ADANN approach
* 2.1 Base model with highly specialized initializations
* 2.2 Difference models
* 2.3 Multiple runs over initialization and training
* 3 Derivation of base models for semilinear heat PDEs
* 3.1 One-dimensional semilinear heat PDEs with Dirichlet boundary conditions
* 3.2 Designing algorithms
* 3.2.1 Spatial finite difference discretization
* 3.2.2 Temporal linearly implicit Runge-Kutta discretizations
* 3.2.3 A compact reformulation of the designing algorithms
* 3.3 Designing the base model
* 4 Numerical simulations
* 4.1 One-dimensional reaction diffusion type equation
* 4.2 One-dimensional Sine-Gordon type equation
* 4.3 Two-dimensional semilinear heat equation
* A Second order linearly implicit Runge-Kutta methods
* A.1 Order conditions for general LIRK methods
* A.2 A family of 2 stage linearly implicit Runge-Kutta methods of order 2
* A.3 The special case of the Crank-Nicolson explicit Euler method
## 1 Introduction
Deep learning approximation methods - usually consisting of deep artificial neural networks (ANNs) trained through stochastic gradient descent (SGD) optimization methods - belong nowadays to the most heavily employed approximation methods in the digital world. The striking feature of deep learning methods is that in many situations numerical simulations suggest that the computational effort of such methods seems to grow only at most polynomially in the input dimension \(d\in\mathbb{N}=\{1,2,3,\dots\}\) of the problem under consideration. In contrast, classical numerical methods usually suffer from the so-called _curse of dimensionality_ (cf., e.g., Bellman [4], Novak & Wozniakowski [37, Chapter 1], and Novak & Wozniakowski [38, Chapter 9]) in the sense that the computational effort grows at least exponentially in the dimension.
In recent years, deep learning technologies have also been intensively used to attack problems from scientific computing such as the numerical solutions of partial differential equations (PDEs). In particular, deep learning approximation methods have been used to approximately solve high-dimensional nonlinear PDEs (see, e.g., [25, 10, 11, 14, 16, 2, 42] and the references mentioned therein) such as high-dimensional nonlinear pricing problems from financial engineering and Hamiltonian-Jacobi-Bellman equations from optimal control. In the context of such high-dimensional nonlinear PDEs, the progress of deep learning approximation methods is obvious as there are - except in some special cases (see, e.g., [19, 20, 36] and the references therein for Branching type methods and see, e.g., [11, 12, 13, 22] and the references therein for multilevel Picard methods) - essentially no alternative numerical approximation methods which are capable of solving such high-dimensional nonlinear PDEs.
There is nowadays also a huge literature on deep learning approximation methods for low-dimensional PDEs (cf., e.g., [24, 41]). For low-dimensional PDEs there typically already exist a number of highly efficient traditional (non-deep-learning based) approximation methods in the scientific literature (cf., e.g., [23, 43]). Nonetheless, there are several convincing arguments that deep learning approximation methods might have the potential to significantly outperform such efficient traditional approximation methods from classical numerics.
One situation where this strongly applies is in the context of _parametric_ PDE approximation problems. Specifically, in applications one is often not only interested in approximately solving the considered PDE model once, but instead needs to solve such models again and again with different initial values and/or different model parameters. The idea of deep learning approaches in this context is not to solve one fixed PDE model but to use deep ANNs to learn the whole solution mapping which maps initial values and model parameters to the corresponding PDE solutions. In particular, even though the original PDE model often has only one to three space-dimensions, the associated parametric approximation problem becomes very high-dimensional due to the high number of parameters needed to approximately describe the initial value and the model parameters. Deep learning methods seem to be very natural candidates for such kinds of problems in the sense that the deep ANNs learn the mapping from parametrizations of the initial values and/or the model parameters to the PDE solution based on training data - the deep learning methods in this situation are then often referred to as _operator learning_ approaches (cf., e.g., [29, 30, 32]).
However, even though very remarkable advances have been accomplished in this direction of research, for instance, by means of so-called _Fourier neural operator_ (FNO) approximations (see Li et al. [30]), so far in most situations deep operator learning techniques do not outperform the most efficient higher order classical numerical methods for the considered approximation problem. This is also not entirely surprising given the fundamental lower bounds established in the literature which show that a wide class of methods, including typical deep learning approximations, cannot in general overcome the curse of dimensionality in the \(L^{\infty}\)-norm (cf., e.g., Heinrich & Sindambiwe [18], Heinrich [17], Grohs & Voigtlander [15]).
It is precisely the objective of this work to introduce a new deep operator learning approximation approach which aims to overcome this challenge and to outperform highly efficient higher order classical numerical methods for the considered approximation problems. For this, we introduce a new strategy to design specific ANN architectures in conjunction with specific ANN initialization schemes which are tailor-made for the particular scientific computing approximation problem under consideration. In the proposed approach we combine efficient classical numerical approximation methods such as higher-order Runge-Kutta schemes with sophisticated deep (operator) learning techniques such as FNO approximations. The obtained ANN architectures and their initialization schemes are thus strongly inspired by numerical algorithms as well as by popular deep learning methodologies from the literature and, in that sense, we refer to the introduced ANNs in conjunction with their tailor-made initialization schemes as _Algorithmically Designed Artificial Neural Networks_ (ADANNs). We numerically test the proposed ADANN approach in the case of some parametric PDEs. In the tested numerical examples the ADANN approach significantly outperforms existing traditional approximation algorithms as well as existing deep learning methodologies from the literature.
We now discuss some ideas in the scientific literature which are related to the ADANN approach introduced in this paper. The ADANN technology is partially inspired by the _learning the random variable_ methodology in Becker et al. [3] where so-called Monte Carlo neural networks have been introduced which have the property that at initialization the realizations of those networks correspond to sample realizations of Monte Carlo algorithms. Further approaches related to the here proposed ADANN methodology can, e.g., be found in [1, 8, 34, 39] where certain parameters of classical numerical approximation methods such as finite difference methods and higher-order Runge-Kutta schemes have been improved through a training process.
Next we mention several other promising deep operator learning approaches for learning the solution operators associated to PDEs. One of the most successful methods in practice is the class of
FNOs introduced in Li et al. [30]. The derivation of FNOs is based on Li et al. [29], an earlier paper by the same authors, where so-called graph kernel networks are employed. In [29] each layer of the network represents a convolution with a kernel computed by a neural network, which is replaced by a multiplication in Fourier space in [30]. The article Li et al. [28] generalizes FNOs to more complicated geometries. In Brandstetter et al. [6] the FNO methodology is extended by using Clifford layers, where calculations are performed in higher-dimensional non-commutative Clifford algebras. Another successful approach is the deep operator network (DeepONet) architecture introduced in Lu et al. [32], which consists of two types of DNNs that take as input the output space points and the input function values, respectively. For a comparison between the DeepONet and FNO methodologies we refer to Lu et al. [33]. Generalizing DeepONets, the work Lanthaler et al. [27] uses a more sophisticated nonlinear DNN architecture for operator learning. In Pham & Warin [40] operators on Wasserstein spaces, for example, mean-field interactions of measures, are learned using networks based on standard DNNs and DeepONets. The article Nelsen & Stuart [35] uses random feature maps associated to operator-valued kernels to approximate operators between Banach spaces. The paper Liu et al. [31] approximates the entire flow map associated to ODEs by training a different DNN in each time-step and combining these architectures with classical Runge-Kutta methods on different time scales. We also refer to [7, 26] for estimates for approximation and generalization errors in network-based operator learning for PDEs. Finally, we refer to [6, Appendix D] for a much more detailed literature overview regarding different variants of FNOs and other neural network architectures for solving PDEs.
The remainder of this article is organized as follows. In Section 2 we introduce the main ideas of the ADANN methodology in an abstract setting. In Section 3 we describe the ADANN methodology in detail in the case of semilinear heat PDEs. In Section 4 we present three numerical simulations comparing the ADANN methodology to existing methods in the literature.
## 2 A rough overview of the ADANN approach
In this section we describe the main ideas of the ADANN methodology for a class of representative approximation problems. Specifically, we consider the problem of numerically approximating a function
\[\mathcal{S}\colon\mathcal{I}\to\mathcal{O} \tag{1}\]
where \(\mathcal{I}\) and \(\mathcal{O}\) are topological spaces1. Roughly speaking, the ADANN methodology proposed in this paper relies on the following three main ingredients:
Footnote 1: For example, we think of the mapping which assigns to all suitable initial conditions of an initial value PDE problem of evolutionary type the corresponding solution of the PDE at the terminal time.
1. _Base model with highly specialized initializations_: Based on a family of classical (higher order) numerical approximation algorithms we design a tailor-made problem-specific ANN type model and a corresponding family of highly specialized initializations for the model and train the model to approximate \(\mathcal{S}\) (cf. Section 2.1).
2. _Difference model_: We employ existing ANN technologies from the literature to approximately learn the difference between realizations of the base model of step (i) and the solution operator \(\mathcal{S}\). Adding the difference model to the base model results in the _full ADANN model_ (cf. Section 2.2).
3. _Multiple runs over initialization and training_: We use an additional suitable optimization approach aiming to minimize the overall approximation error of the full ADANN model to repeat the training of the base model with different (possibly random) highly specialized initializations and the training of the difference model with different random initializations (cf. Section 2.3).
### Base model with highly specialized initializations
To derive the base model we consider a parametric family \((\Phi_{p})_{p\in\mathfrak{P}}\) of classical numerical approximation algorithms2 of the solution operator \(\mathcal{S}\colon\mathcal{I}\to\mathcal{O}\) indexed over a parameter set \(\mathfrak{P}\) and given for every \(p\in\mathfrak{P}\) by a mapping
Footnote 2: For example, we think of a family of Runge-Kutta and/or finite element methods.
\[\Phi_{p}\colon\mathcal{I}\to\mathcal{O}. \tag{2}\]
We then design an ANN type model \(\mathscr{B}\colon\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\times\mathcal{I}\to \mathcal{O}\) with \(\mathbf{d}_{\mathrm{Base}}\in\mathbb{N}\) trainable parameters which can reproduce all the approximation algorithms in (2) as realizations of the model in the sense that for every \(p\in\mathfrak{P}\) there exists a parameter vector \(\mathbf{W}_{p}\in\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\) such that
\[\mathscr{B}(\mathbf{W}_{p},\,\cdot\,)=\Phi_{p}\approx\mathcal{S}. \tag{3}\]
We refer to the function \(\mathscr{B}\) as the base model, for every choice of a parameter vector \(W\in\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\) we call the function \(\mathscr{B}(W,\,\cdot\,)\) a realization of the base model, and we refer to the algorithms in (2) as the designing algorithms for the base model \(\mathscr{B}\).
Note that (3) shows that we already have parameters for the base model which yield reasonable approximations of \(\mathcal{S}\). In the next step we propose to train the base model with SGD type optimization methods to improve these approximations. In order to generate training data we assume that we have a random input value \(\mathfrak{I}\colon\Omega\to\mathcal{I}\) on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) and an algorithm \(\Psi\colon\mathcal{I}\to\mathcal{O}\) which approximates \(\mathcal{S}\) with a very high accuracy (and potentially very high computational effort3) in the sense that the approximation
Footnote 3: For example, one can think of \(\Psi\) as being of the same class of algorithms as the algorithms employed in (2) but with much finer discretizations.
\[\Psi\approx\mathcal{S} \tag{4}\]
is much more accurate than the approximations in (3). Moreover, in order to describe the training objective we introduce a seminorm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\colon\mathcal{O}\to[0,\infty)\) and use it to define the loss function \(L_{\mathrm{Base}}\colon\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\times\mathcal{ I}\to[0,\infty)\) and the objective function \(\mathbf{L}_{\mathrm{Base}}\colon\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\to[0,\infty]\) by imposing for all \(W\in\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\), \(i\in\mathcal{I}\) that
\[L_{\mathrm{Base}}(W,i)=\left|\!\left|\!\left|\mathscr{B}(W,i)-\Psi(i)\right|\! \right|\!\right|^{2}\qquad\text{and}\qquad\mathbf{L}_{\mathrm{Base}}(W)= \mathbb{E}[L_{\mathrm{Base}}(W,\mathfrak{I})]. \tag{5}\]
We then propose to minimize \(\mathbf{L}_{\mathrm{Base}}\) by means of SGD type processes. As starting values for the SGD type processes we use some of the (possibly randomly chosen) parameters in (3) which result already at initialization in small expected loss values.
### Difference models
Once satisfactory base model parameters \(\mathfrak{W}\in\mathbb{R}^{\mathrm{d}_{\mathrm{Base}}}\) have been found in the previous step (see Section 2.1) we propose to employ existing deep learning strategies from the literature to approximate a scaled up difference between the reference solution \(\Psi\) and the realization of the base model \(\mathscr{B}(\mathfrak{W},\,\cdot\,)\). More precisely, to scale the error of the base model we use a Monte Carlo method to get an estimate \(\varepsilon\in(0,\infty)\) of the square root of the expected loss
\[(\mathbf{L}_{\mathrm{Base}}(\mathfrak{W}))^{\nicefrac{{1}}{{2}}}=\left(\mathbb{E}\big{[}\left|\!\left|\!\left|\mathscr{B}(\mathfrak{W},\mathfrak{I})-\Psi(\mathfrak{I})\right|\!\right|\!\right|^{2}\big{]}\right)^{\nicefrac{{1}}{{2}}}\approx\varepsilon \tag{6}\]
and then introduce an ANN structure \(\mathscr{D}\colon\mathbb{R}^{\mathrm{d}_{\mathrm{Diff}}}\times\mathcal{I}\to \mathcal{O}\) with \(\mathbf{d}_{\mathrm{Diff}}\in\mathbb{N}\) trainable parameters for which we aim to find a parameter vector \(\theta\in\mathbb{R}^{\mathrm{d}_{\mathrm{Diff}}}\) such that
\[\mathscr{D}(\theta,\,\cdot\,)\approx\tfrac{1}{\varepsilon}(\Psi-\mathscr{B}( \mathfrak{W},\,\cdot\,)). \tag{7}\]
We refer to the function \(\mathscr{D}\) as the difference model. To train this model we define the loss function \(L_{\text{Diff}}\colon\mathbb{R}^{\text{d}_{\text{Base}}}\times\mathbb{R}^{\text{ d}_{\text{Diff}}}\times(0,\infty)\times\mathcal{I}\to[0,\infty)\) and the objective function \(\mathbf{L}_{\text{Diff}}\colon\mathbb{R}^{\text{d}_{\text{Diff}}}\to[0,\infty]\) by requiring for all \(W\in\mathbb{R}^{\text{d}_{\text{Base}}}\), \(\theta\in\mathbb{R}^{\text{d}_{\text{Diff}}}\), \(v\in(0,\infty)\), \(i\in\mathcal{I}\) that
\[L_{\text{Diff}}(W,\theta,v,i)=\big{\|}\big{\|}\mathscr{D}(\theta,i )-\big{(}\tfrac{1}{v}(\Psi(i)-\mathscr{B}(W,i))\big{)}\big{\|}\big{\|}^{2}\quad \text{and}\quad\mathbf{L}_{\text{Diff}}(\theta)=\mathbb{E}\big{[}L_{\text{Diff} }(\mathfrak{W},\theta,\varepsilon,\mathfrak{I})\big{]} \tag{8}\]
and suggest to use SGD type methods to minimize \(\mathbf{L}_{\text{Diff}}\).
Combining the base model of Section 2.1 and the difference model of this subsection, we define the full ADANN model \(\mathscr{A}\colon\mathbb{R}^{\text{d}_{\text{Base}}}\times\mathbb{R}^{\text{ d}_{\text{Diff}}}\times\mathbb{R}\times\mathcal{I}\to\mathcal{O}\) by imposing for all \(W\in\mathbb{R}^{\text{d}_{\text{Base}}}\), \(\theta\in\mathbb{R}^{\text{d}_{\text{Diff}}}\), \(\epsilon\in\mathbb{R}\), \(i\in\mathcal{I}\) that
\[\mathscr{A}(W,\theta,\epsilon,i)=\mathscr{B}(W,i)+\epsilon \mathscr{D}(\theta,i). \tag{9}\]
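In code, the composition in (9) is a one-liner; `base_model`, `diff_model`, and `eps` below are hypothetical stand-ins for \(\mathscr{B}(W,\,\cdot\,)\), \(\mathscr{D}(\theta,\,\cdot\,)\), and \(\epsilon\) (a schematic sketch, not the authors' implementation).

```python
def adann(base_model, diff_model, eps, i):
    # full ADANN model of Eq. (9): trained base model plus scaled difference model
    return base_model(i) + eps * diff_model(i)
```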
### Multiple runs over initialization and training
The results of SGD type methods can strongly depend on the initial parameters and (3) suggests many choices of good initial parameters for the base model. In view of this, we propose to run through the trainings of the models described in Section 2.1 and Section 2.2 several times with different initialization parameters for the base and the difference model and use the best run as final approximation of the mapping \(\mathcal{S}\).
To make this procedure clearer, we now describe it in a simplified setting in detail. Specifically, let \(R\in\mathbb{N}\) be the number of runs. **For every run**\(r\in\{1,2,\dots,R\}\) we consider a random designing algorithm parameter \(\mathfrak{p}_{r}\colon\Omega\to\mathfrak{P}\) and **train the base model** with an SGD process \(\mathcal{W}^{(r)}=(\mathcal{W}^{(r)}_{n})_{n\in\mathbb{N}_{0}}\colon\mathbb{N}_{0}\times\Omega\to\mathbb{R}^{\text{d}_{\text{Base}}}\) starting at \(\mathcal{W}^{(r)}_{0}=\mathbf{W}_{\mathfrak{p}_{r}}\) and proceeding for all \(n\in\mathbb{N}\) by
\[\mathcal{W}^{(r)}_{n}=\mathcal{W}^{(r)}_{n-1}-\frac{\gamma_{\text{ Base}}}{B_{\text{Base}}}\Big{[}{\sum_{b=1}^{B_{\text{Base}}}}(\nabla_{W}L_{ \text{Base}})(\mathcal{W}^{(r)}_{n-1},\mathfrak{I}^{(r,n,b)}_{\text{Base}}) \Big{]}, \tag{10}\]
where \(\gamma_{\text{Base}}\in(0,\infty)\) is the learning rate of the SGD method, where \(B_{\text{Base}}\in\mathbb{N}\) is the batch size of the SGD method, and where \(\mathfrak{I}^{(r,n,b)}_{\text{Base}}\colon\Omega\to\mathcal{I}\), \(n,b\in\mathbb{N}\), are i.i.d. copies of the random input \(\mathfrak{I}\). We stop these SGD processes after \(N_{\text{Base}}\in\mathbb{N}\) training steps.
Next, for every run \(r\in\{1,2,\dots,R\}\) we use a Monte Carlo method to get an estimate \(\varepsilon^{(r)}\colon\Omega\to(0,\infty)\) of the square root of the expected loss
\[\big{(}\mathbb{E}\big{[}\left|\!\left|\!\left|\mathscr{B}(W,\mathfrak{I})-\Psi(\mathfrak{I})\right|\!\right|\!\right|^{2}\big{]}\big{)}^{\nicefrac{{1}}{{2}}}\big{|}_{W=\mathcal{W}^{(r)}_{N_{\text{Base}}}}\approx\varepsilon^{(r)} \tag{11}\]
and then **train the difference model** starting with some standard initialization from the literature with the SGD process \(\Theta^{(r)}=(\Theta^{(r)}_{n})_{n\in\mathbb{N}_{0}}\colon\mathbb{N}_{0}\times\Omega\to\mathbb{R}^{\text{d}_{\text{Diff}}}\) satisfying for all \(n\in\mathbb{N}\) that
\[\Theta^{(r)}_{n}=\Theta^{(r)}_{n-1}-\frac{\gamma_{\text{Diff}}}{B_ {\text{Diff}}}\Big{[}{\sum_{b=1}^{B_{\text{Diff}}}}(\nabla_{\theta}L_{\text{ Diff}})(\mathcal{W}^{(r)}_{N_{\text{Base}}},\Theta^{(r)}_{n-1},\varepsilon^{(r)}, \mathfrak{I}^{(r,n,b)}_{\text{Diff}})\Big{]}, \tag{12}\]
where \(\gamma_{\text{Diff}}\in(0,\infty)\) is the learning rate of the SGD method, where \(B_{\text{Diff}}\in\mathbb{N}\) is the batch size of the SGD method, and where \(\mathfrak{I}^{(r,n,b)}_{\text{Diff}}\colon\Omega\to\mathcal{I}\), \(n,b\in\mathbb{N}\), are i.i.d. copies of the random input \(\mathfrak{I}\). We stop these SGD processes after \(N_{\text{Diff}}\in\mathbb{N}\) steps.
Finally, we approximately **calculate the optimal run**\(\mathbf{r}\in\{1,2,\dots,R\}\) such that the expected loss
\[\begin{split}&\Big{(}\mathbb{E}\Big{[}\left|\!\left|\!\left|\mathscr{A}\big{(}W,\theta,\epsilon,\mathfrak{I}\big{)}-\Psi(\mathfrak{I})\right|\!\right|\!\right|^{2}\Big{]}\Big{)}\Big{|}_{(W,\theta,\epsilon)=(\mathcal{W}^{(\mathbf{r})}_{N_{\text{Base}}},\Theta^{(\mathbf{r})}_{N_{\text{Diff}}},\varepsilon^{(\mathbf{r})})}\\ &=\min_{r\in\{1,2,\dots,R\}}\Big{(}\mathbb{E}\Big{[}\left|\!\left|\!\left|\mathscr{A}\big{(}W,\theta,\epsilon,\mathfrak{I}\big{)}-\Psi(\mathfrak{I})\right|\!\right|\!\right|^{2}\Big{]}\Big{)}\Big{|}_{(W,\theta,\epsilon)=(\mathcal{W}^{(r)}_{N_{\text{Base}}},\Theta^{(r)}_{N_{\text{Diff}}},\varepsilon^{(r)})}\end{split} \tag{13}\]
of the full ADANN model \(\mathscr{A}\) in (9) is minimal and, thereafter, we propose to use
\[\mathscr{A}\big{(}\mathcal{W}^{(\mathbf{r})}_{N_{\text{Base}}},\Theta^{(\mathbf{r })}_{N_{\text{Diff}}},\varepsilon^{(\mathbf{r})},\,\cdot\,\big{)}=\mathscr{B}( \mathcal{W}^{(\mathbf{r})}_{N_{\text{Base}}},\,\cdot\,)+\varepsilon^{(\mathbf{ r})}\mathscr{D}(\Theta^{(\mathbf{r})}_{N_{\text{Diff}}},\,\cdot\,)\approx \mathcal{S} \tag{14}\]
as an approximation of \(\mathcal{S}\) in (1).
In the procedure described above we made several major simplifications when compared to the methodology used in our numerical simulations in Section 4. First, in our numerical simulations we train the models with sophisticated SGD type methods such as the ADAM optimizer with adaptive learning rates and adaptive batch sizes. However, for simplicity in this section we only described the case of the plain-vanilla SGD method with constant learning rates and constant batch sizes. Second, we did not specify above how to sample the random designing algorithm parameter vectors \(\mathfrak{p}_{r}\colon\Omega\to\mathfrak{P}\), \(r\in\{1,2,\ldots,R\}\), used to initialize the base model. In our numerical simulations we used different approaches to this end such as, e.g., deterministically exploring the parameter set \(\mathfrak{P}\) or drawing random samples from a uniform distribution over the parameter set \(\mathfrak{P}\). Finally, above we restricted ourselves to the case where in each run we train exactly one base network and one corresponding difference network, whereas in some of our numerical simulations, we use different approaches to decide for each run whether to train a new base model and a corresponding difference model or whether to only train a new difference model for an already trained base model.
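Schematically, the simplified multiple-runs procedure above can be summarized as follows; all helper callables (`sample_p`, `train_base`, `estimate_base_rmse`, `train_diff`, `estimate_full_loss`) are hypothetical placeholders for the steps in (10)-(13), not code from the paper.

```python
def best_adann_run(R, sample_p, train_base, estimate_base_rmse,
                   train_diff, estimate_full_loss):
    runs = []
    for _ in range(R):
        p = sample_p()                        # random designing-algorithm parameter p_r
        W = train_base(p)                     # SGD on the base model, cf. Eq. (10)
        eps = estimate_base_rmse(W)           # Monte Carlo error scale, cf. Eq. (11)
        theta = train_diff(W, eps)            # SGD on the difference model, cf. Eq. (12)
        runs.append((estimate_full_loss(W, theta, eps), (W, theta, eps)))
    return min(runs, key=lambda r: r[0])[1]   # approximately optimal run, cf. Eq. (13)
```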
## 3 Derivation of base models for semilinear heat PDEs
In this section we describe a possible way to design and initialize base models for the problem of approximating the mapping from the initial condition of a semilinear heat PDE to the terminal value. We will use this base model in our numerical simulations in Sections 4.1 and 4.2.
We describe the approximation problem for semilinear heat PDEs in Section 3.1, we introduce the designing algorithms in Section 3.2, and we present the resulting base model and its tailor-made problem specific initializations in Section 3.3.
### One-dimensional semilinear heat PDEs with Dirichlet boundary conditions
We now introduce the setting for the approximation problem considered in this section. Let \(T\in(0,\infty)\), \(f\in C(\mathbb{R},\mathbb{R})\), \(\mathcal{I}\subseteq C([0,1],\mathbb{R})\) and for every \(g\in\mathcal{I}\) let \(u_{g}\in C^{1,2}([0,T]\times[0,1],\mathbb{R})\) satisfy for all \(t\in[0,T]\), \(x\in[0,1]\) that
\[\big{(}\tfrac{\partial}{\partial t}u_{g}\big{)}(t,x)=(\Delta_{x}u_{g})(t,x)+f( u_{g}(t,x)),\quad u_{g}(0,x)=g(x),\quad\text{and}\quad u_{g}(t,0)=u_{g}(t,1)=0. \tag{15}\]
Our goal is to approximate the map \(\mathcal{S}\colon\mathcal{I}\to C([0,1],\mathbb{R})\) given for all \(g\in\mathcal{I}\) by \(\mathcal{S}(g)=u_{g}(T,\cdot)\).
### Designing algorithms
We now derive a family of approximation algorithms for the semilinear heat PDE in (15) which will serve as designing algorithms for the base model derived in this section. The algorithms are based on discretizing the space with the finite difference method and on discretizing the time with a family of second order linearly implicit Runge-Kutta (LIRK) methods.
#### 3.2.1 Spatial finite difference discretization
For the spatial discretization of the PDE in (15) we use \(N\in\mathbb{N}\) equidistant points \(\mathfrak{x}_{1},\mathfrak{x}_{2},\ldots,\mathfrak{x}_{N}\in[0,1]\) given for all \(i\in\{1,2,\ldots,N\}\) by \(\mathfrak{x}_{i}=\frac{i}{N+1}\) and consider the corresponding finite difference
discretization of the Laplace operator with Dirichlet boundary conditions given by
\[A=(1+N)^{2}\begin{pmatrix}-2&1&0&0&\cdots&0&0&0\\ 1&-2&1&0&\cdots&0&0&0\\ 0&1&-2&1&\cdots&0&0&0\\ &&&\ddots&&\\ 0&0&0&0&\cdots&1&-2&1\\ 0&0&0&0&\cdots&0&1&-2\end{pmatrix}\in\mathbb{R}^{N\times N}. \tag{16}\]
Note that for all \(g\in C^{2}([0,1],\mathbb{R})\) with \(g(0)=g(1)=0\) we have that
\[A\begin{pmatrix}g(\mathfrak{x}_{1})\\ g(\mathfrak{x}_{2})\\ \vdots\\ g(\mathfrak{x}_{N})\end{pmatrix}\approx\begin{pmatrix}(\Delta g)(\mathfrak{x} _{1})\\ (\Delta g)(\mathfrak{x}_{2})\\ \vdots\\ (\Delta g)(\mathfrak{x}_{N})\end{pmatrix}. \tag{17}\]
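As a quick numerical sanity check of (16)-(17) (a sketch added for illustration, not from the paper), one can verify the accuracy of \(A\) on a smooth test function vanishing at the boundary:

```python
import numpy as np

N = 200
x = np.arange(1, N + 1) / (N + 1)                  # interior grid points x_i = i/(N+1)
A = (N + 1) ** 2 * (np.diag(-2.0 * np.ones(N))
                    + np.diag(np.ones(N - 1), 1)
                    + np.diag(np.ones(N - 1), -1))

g = np.sin(np.pi * x)                              # satisfies g(0) = g(1) = 0
exact = -np.pi ** 2 * np.sin(np.pi * x)            # (Delta g)(x) = -pi^2 sin(pi x)
print(np.max(np.abs(A @ g - exact)))               # second-order error, O((N+1)^{-2})
```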
Using this spatial discretization on the PDE in (15) results in an initial value problem. Formally, for every \(\mathfrak{g}\in\mathbb{R}^{N}\) we let \(\mathbf{u}_{\mathfrak{g}}\in C^{1}([0,T],\mathbb{R}^{N})\) satisfy4 for all \(t\in[0,T]\) that
Footnote 4: Throughout this paper for every \(h\colon\mathbb{R}\to\mathbb{R}\), \(n\in\mathbb{N}\), \(x=(x_{1},x_{2},\ldots x_{n})\in\mathbb{R}^{n}\) we denote by \(h(x)\in\mathbb{R}^{n}\) the vector given by \(h(x)=(h(x_{1}),h(x_{2}),\ldots h(x_{n}))\).
\[\big{(}\tfrac{\partial}{\partial t}\mathbf{u}_{\mathfrak{g}}\big{)}(t)=A \mathbf{u}_{\mathfrak{g}}(t)+f(\mathbf{u}_{\mathfrak{g}}(t))\qquad\text{and} \qquad\mathbf{u}_{\mathfrak{g}}(0)=\mathfrak{g}. \tag{18}\]
If \(N\) is chosen sufficiently large we expect for all \(g\in\mathcal{I}\) that
\[\mathbf{u}_{(g(\mathfrak{x}_{1}),g(\mathfrak{x}_{2}),\ldots,g(\mathfrak{x}_{N }))}(T)\approx\begin{pmatrix}u_{g}(T,\mathfrak{x}_{1})\\ u_{g}(T,\mathfrak{x}_{2})\\ \vdots\\ u_{g}(T,\mathfrak{x}_{N})\end{pmatrix}\approx\begin{pmatrix}\mathcal{S}(g)( \mathfrak{x}_{1})\\ \mathcal{S}(g)(\mathfrak{x}_{2})\\ \vdots\\ \mathcal{S}(g)(\mathfrak{x}_{N})\end{pmatrix}. \tag{19}\]
#### 3.2.2 Temporal linearly implicit Runge-Kutta discretizations
In the next step we use a parametric family of second order LIRK methods5 to discretize the ODE in (18). For all parameters \(p=(p_{1},p_{2})\in(0,\infty)^{2}\) and all step sizes \(h\in[0,\infty)\) we let the LIRK time step \(\phi_{p}^{h}\colon\mathbb{R}^{N}\to\mathbb{R}^{N}\) satisfy for all \(U,k_{1},k_{2}\in\mathbb{R}^{N}\) with
Footnote 5: We refer to Section A for a derivation of this family
\[k_{1}=(I_{N}-hp_{2}A)^{-1}(AU+f(U))\qquad\text{and} \tag{20}\]
\[k_{2}=(I_{N}-hp_{2}A)^{-1}\big{(}A(U+2hp_{1}(\tfrac{1}{2}-p_{2})k_{1})+f(U+hp_{1}k_{1})\big{)} \tag{21}\]
that
\[\phi_{p}^{h}(U)=U+h\Big{[}(1-\tfrac{1}{2p_{1}})k_{1}+(\tfrac{1}{2p_{1}})k_{2} \Big{]}, \tag{22}\]
where \(I_{N}\in\mathbb{R}^{N\times N}\) is the identity matrix. For every number of time steps \(\mathbf{m}\in\mathbb{N}\) and every choice of parameters \(p\in(0,\infty)^{2}\) the corresponding designing algorithm \(\Phi_{p}^{\mathbf{m}}\colon\mathbb{R}^{N}\to\mathbb{R}^{N}\) is subsequently given by
\[\Phi_{p}^{\mathbf{m}}=\underbrace{\phi_{p}^{T/\mathbf{m}}\circ\phi_{p}^{T/\mathbf{m}}\circ\cdots\circ\phi_{p}^{T/\mathbf{m}}}_{\mathbf{m}\text{-times}}. \tag{23}\]
If \(N\) is chosen sufficiently large we then expect for all parameters \(p\in(0,\infty)^{2}\) and all sufficiently large \(M\in\mathbb{N}\) that for all \(g\in\mathcal{I}\) we have that
\[\Phi_{p}^{M}(g(\mathfrak{x}_{1}),g(\mathfrak{x}_{2}),\ldots,g(\mathfrak{x}_{N }))\approx\mathbf{u}_{(g(\mathfrak{x}_{1}),g(\mathfrak{x}_{2}),\ldots,g( \mathfrak{x}_{N}))}(T)\approx\begin{pmatrix}\mathcal{S}(g)(\mathfrak{x}_{1})\\ \mathcal{S}(g)(\mathfrak{x}_{2})\\ \vdots\\ \mathcal{S}(g)(\mathfrak{x}_{N})\end{pmatrix}. \tag{24}\]
#### 3.2.3 A compact reformulation of the designing algorithms
To make the designing algorithms of Section 3.2.2 amenable to being written as realizations of an ANN type base model we now present a more compact reformulation of the algorithms in (23). For this we fix the number of time steps \(M\in\mathbb{N}\) and the corresponding time step size \(H=T/M\) and for every \(p=(p_{1},p_{2})\in(0,\infty)^{2}\) we let \(\mathbf{W}_{p}=(\mathbf{W}_{p,i})_{i\in\{1,2,\ldots,5\}}\in(\mathbb{R}^{N\times N})^{5}\) satisfy
\[\mathbf{W}_{p,1}=(I_{N}-Hp_{2}A)^{-1}(I_{N}+H(1-p_{2})A)+H^{2}( \tfrac{1}{2}-p_{2})\big{[}(I_{N}-Hp_{2}A)^{-1}A\big{]}^{2}, \tag{25}\] \[\mathbf{W}_{p,2}=H(1-\tfrac{1}{2p_{1}})(I_{N}-Hp_{2}A)^{-1}+H^{2} (\tfrac{1}{2}-p_{2})(I_{N}-Hp_{2}A)^{-1}A(I_{N}-Hp_{2}A)^{-1},\] (26) \[\mathbf{W}_{p,3}=H(\tfrac{1}{2p_{1}})(I_{N}-Hp_{2}A)^{-1},\qquad \mathbf{W}_{p,4}=(I_{N}-Hp_{2}A)^{-1}(I_{N}+H(p_{1}-p_{2})A),\] (27) \[\text{and}\qquad\mathbf{W}_{p,5}=Hp_{1}(I_{N}-Hp_{2}A)^{-1}. \tag{28}\]
Note that for all \(p\in(0,\infty)^{2}\), \(U\in\mathbb{R}^{N}\) we have that
\[\phi_{p}^{H}(U)=\mathbf{W}_{p,1}U+\mathbf{W}_{p,2}f(U)+\mathbf{W} _{p,3}f\big{(}\mathbf{W}_{p,4}U+\mathbf{W}_{p,5}f(U)\big{)}. \tag{29}\]
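The following sketch (our illustration, not code from the paper) builds the matrices \(\mathbf{W}_{p,1},\ldots,\mathbf{W}_{p,5}\) from (25)-(28) with NumPy and checks numerically that the compact form (29) reproduces the LIRK step (20)-(22):

```python
import numpy as np

def lirk_step(U, h, p1, p2, A, f):
    # one LIRK time step as in Eqs. (20)-(22)
    B = np.linalg.inv(np.eye(len(U)) - h * p2 * A)     # (I_N - h p2 A)^{-1}
    k1 = B @ (A @ U + f(U))
    k2 = B @ (A @ (U + 2 * h * p1 * (0.5 - p2) * k1) + f(U + h * p1 * k1))
    return U + h * ((1 - 1 / (2 * p1)) * k1 + (1 / (2 * p1)) * k2)

def lirk_weights(h, p1, p2, A):
    # the matrices W_{p,1}, ..., W_{p,5} of Eqs. (25)-(28), with H = h
    I = np.eye(A.shape[0])
    B = np.linalg.inv(I - h * p2 * A)
    BA = B @ A
    W1 = B @ (I + h * (1 - p2) * A) + h ** 2 * (0.5 - p2) * (BA @ BA)
    W2 = h * (1 - 1 / (2 * p1)) * B + h ** 2 * (0.5 - p2) * (BA @ B)
    W3 = h / (2 * p1) * B
    W4 = B @ (I + h * (p1 - p2) * A)
    W5 = h * p1 * B
    return W1, W2, W3, W4, W5

N, h, p1, p2 = 35, 0.2, 0.7, 0.4
A = (N + 1) ** 2 * (np.diag(-2.0 * np.ones(N))
                    + np.diag(np.ones(N - 1), 1)
                    + np.diag(np.ones(N - 1), -1))
f = np.sin                                              # any smooth nonlinearity
U = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))
W1, W2, W3, W4, W5 = lirk_weights(h, p1, p2, A)
compact = W1 @ U + W2 @ f(U) + W3 @ f(W4 @ U + W5 @ f(U))   # Eq. (29)
assert np.allclose(compact, lirk_step(U, h, p1, p2, A, f))
```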
### Designing the base model
Roughly speaking, we propose to design the base model by considering the matrices in (29) as trainable parameters. Specifically, let \(\mathscr{B}\colon((\mathbb{R}^{N\times N})^{5})^{M}\times\mathcal{I}\to \mathbb{R}^{N}\) satisfy for all \(W=((W_{m,i})_{i\in\{1,2,\ldots,5\}})_{m\in\{1,2,\ldots,M\}}\in((\mathbb{R}^{N \times N})^{5})^{M}\), \(g\in\mathcal{I}\), \(U_{0},U_{1},\ldots,U_{M}\in\mathbb{R}^{N}\) with \(U_{0}=(g(\mathfrak{x}_{1}),g(\mathfrak{x}_{2}),\)\(\ldots,g(\mathfrak{x}_{N}))\) and
\[\forall\,m\in\{1,2,\ldots,M\}\colon\qquad U_{m}=W_{m,1}U_{m-1}+W_ {m,2}f(U_{m-1})\\ +W_{m,3}f\big{(}W_{m,4}U_{m-1}+W_{m,5}f(U_{m-1})\big{)} \tag{30}\]
that
\[\mathscr{B}(W,g)=U_{M}. \tag{31}\]
Note that if \(N\) and \(M\) were chosen sufficiently large (24), (29), (30), and (31) imply that for all \(p\in(0,\infty)^{2}\), \(g\in\mathcal{I}\) it holds that
\[\mathscr{B}\big{(}(\underbrace{\mathbf{W}_{p},\ldots,\mathbf{W}_{ p}}_{\text{$M$-times}}),g\big{)}=\Phi_{p}^{M}\big{(}(g(\mathfrak{x}_{1}),g( \mathfrak{x}_{2}),\ldots,g(\mathfrak{x}_{N}))\big{)}\approx(\mathcal{S}(g)( \mathfrak{x}_{1}),\mathcal{S}(g)(\mathfrak{x}_{2}),\ldots,\mathcal{S}(g)( \mathfrak{x}_{N})). \tag{32}\]
We have thus derived a model together with a family of model parameters which have the property that, under sufficient assumptions, the realization of the model for any of the model parameters in the family is approximating the solution operator at discrete space points.
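For illustration, one possible PyTorch realization of the base model (30)-(31) is sketched below (a hypothetical implementation under our own conventions, not the authors' published code); its parameters can be initialized with the matrices \((\mathbf{W}_{p},\ldots,\mathbf{W}_{p})\) from (32) and then trained by minimizing an empirical version of Eq. (5).

```python
import torch
import torch.nn as nn

class ADANNBase(nn.Module):
    """Base model of Eqs. (30)-(31): M blocks of five trainable N-by-N matrices."""

    def __init__(self, W_init, f):
        # W_init: length-M list of 5-tuples of N-by-N arrays, e.g. M copies of
        # the designing-algorithm matrices (W_{p,1}, ..., W_{p,5}) from Eq. (32)
        super().__init__()
        self.f, self.M = f, len(W_init)
        self.W = nn.ParameterList(
            nn.Parameter(torch.as_tensor(W, dtype=torch.float64))
            for step in W_init for W in step
        )

    def forward(self, U):                        # U: (batch, N) grid values of g
        for m in range(self.M):
            W1, W2, W3, W4, W5 = (self.W[5 * m + i] for i in range(5))
            fU = self.f(U)
            inner = self.f(U @ W4.T + fU @ W5.T)
            U = U @ W1.T + fU @ W2.T + inner @ W3.T
        return U

# e.g. model = ADANNBase([lirk_weights(T / M, *p, A)] * M, torch.sin), cf. the
# sketch after Eq. (29); training then proceeds with an SGD-type optimizer.
```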
## 4 Numerical simulations
In this section we test the ADANN methodology numerically in the case of three selected parametric PDE problems.
### One-dimensional reaction diffusion type equation
In this section we apply the ADANN methodology to a reaction diffusion type equation. Roughly speaking, we consider the semilinear heat PDE problem described in (15) when the nonlinearity is taken to be of the reaction diffusion type. Specifically, let \(\mathcal{I}=\{g\in C^{2}([0,1],\mathbb{R})\colon g(0)=g(1)=0\}\) and for every \(g\in\mathcal{I}\) consider the PDE
\[\big{(}\tfrac{\partial}{\partial t}u\big{)}(t,x)=\tfrac{1}{100}( \Delta_{x}u)(t,x)+\frac{1-u(t,x)}{1+(u(t,x))^{2}} \tag{33}\]
with Dirichlet boundary conditions \(u(t,0)=u(t,1)=0\) and initial value \(u(0,x)=g(x)\) for \((t,x)\in[0,\infty)\times[0,1]\). We want to approximate the mapping \(\mathcal{S}\colon\mathcal{I}\to C^{2}([0,1],\mathbb{R})\) from the initial condition of the PDE to the terminal value at time \(T=1\) (cf. Section 3.1). We apply the ADANN methodology as described in Section 2 to this problem with the following choices.
**Base model:** Roughly speaking, we use the base model for semilinear heat PDEs derived in Section 3 with \(N=35\) equidistant space discretization points and \(M=5\) timesteps adapted to the situation when the Laplace operator is multiplied with the factor \(\nicefrac{{1}}{{100}}\) as considered in (33). Concretely, this means that we let \(f\colon\mathbb{R}\to\mathbb{R}\) satisfy for all \(U\in\mathbb{R}\) that \(f(U)=\frac{1-U}{1+U^{2}}\) and we let \(\mathscr{B}\colon((\mathbb{R}^{35\times 35})^{5})^{5}\times\mathcal{I}\to\mathbb{R}^{35}\) satisfy for all \(W=((W_{m,i})_{i\in\{1,2,\ldots,5\}})_{m\in\{1,2,\ldots,5\}}\in((\mathbb{R}^{35\times 35})^{5})^{5}\), \(g\in\mathcal{I}\), \(U_{0},U_{1},\ldots,U_{5}\in\mathbb{R}^{35}\) with \(U_{0}=(g(\nicefrac{{1}}{{36}}),g(\nicefrac{{2}}{{36}}),\ldots,g(\nicefrac{{35}}{{36}}))\) and
\[\forall\,m\in\{1,2,\ldots,5\}\colon\qquad U_{m}=W_{m,1}U_{m-1}+W_ {m,2}f(U_{m-1})\\ +W_{m,3}f\big{(}W_{m,4}U_{m-1}+W_{m,5}f(U_{m-1})\big{)} \tag{34}\]
that
\[\mathscr{B}(W,g)=U_{5}. \tag{35}\]
To initialize this base model we use for some choices of \(p\in(0,\infty)^{2}\) the model parameters
\[(\underbrace{\mathbf{W}_{p},\ldots,\mathbf{W}_{p}}_{\text{5- times}})\in((\mathbb{R}^{35\times 35})^{5})^{5} \tag{36}\]
as proposed in (32) with the modification that the matrix \(A\) needs to be multiplied with the factor \(\nicefrac{{1}}{{100}}\).
**Difference model:** As difference model we use a classical feed forward ANN with layer dimensions \((35,50,150,35)\) and the GELU activation function. In every training run this difference model is initialized randomly with the standard Glorot uniform initialization.
**Training objective:** As a distribution for the initial value \(\mathfrak{I}\colon\Omega\to\mathcal{I}\) we use a sine expansion with decaying randomly distributed coefficients. Specifically, we have for all \(x\in[0,1]\) that
\[\mathfrak{I}(x)=\sum_{n=1}^{32}\frac{5Z_{n}\sin(\pi nx)}{n^{2}}, \tag{37}\]
where \(Z_{n}\colon\Omega\to\mathbb{R}\), \(n\in\{1,2,\ldots,32\}\), are independent standard normally distributed random variables. In addition, to measure the loss we use the seminorm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\colon\mathbb{R}^{35}\to[0,\infty)\) which satisfies for all \(v\in\mathbb{R}^{35}\) that
\[\left|\!\left|\!\left|v\right|\!\right|\!\right|^{2}=\tfrac{1}{35}\left\|v\right\|^{2}, \tag{38}\]
where \(\left\|\cdot\right\|\) denotes the Euclidean norm. As a high accuracy reference algorithm \(\Psi\colon\mathcal{I}\to\mathbb{R}^{35}\) we use the approximation obtained by discretizing the PDE in (33) with a fine finite difference discretization in space, employing the LIRK scheme with parameter \(p=\)
\((1/2,1/2)\) with 300 timesteps for the temporal discretization (this LIRK scheme is sometimes referred to as the Crank-Nicolson explicit midpoint method (cf. Section A.3)), and only keeping the approximation of the terminal value at the space points \(\nicefrac{{1}}{{36}},\nicefrac{{2}}{{36}},\ldots,\nicefrac{{35}}{{36}}\). In summary, the training objective is to find a function \(\mathscr{N}\colon\mathcal{I}\to\mathbb{R}^{35}\) such that the loss
\[\mathbb{E}\big{[}\tfrac{1}{35}\left\|\mathscr{N}(\mathfrak{I})-\Psi(\mathfrak{ I})\right\|^{2}\big{]} \tag{39}\]
is as small as possible. If the loss is suitably small, we expect for suitable \(g\in\mathcal{I}\) that \(\mathscr{N}(g)\approx(\mathcal{S}(g)(\nicefrac{{1}}{{36}}),\mathcal{S}(g)( \nicefrac{{2}}{{36}}),\ldots,\mathcal{S}(g)(\nicefrac{{35}}{{36}}))\) is a good approximation of the terminal value at the space points \(\nicefrac{{1}}{{36}},\nicefrac{{2}}{{36}},\ldots,\nicefrac{{35}}{{36}}\).
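A short sketch of sampling the random initial value (37) on the grid points \(\nicefrac{{1}}{{36}},\ldots,\nicefrac{{35}}{{36}}\) (our illustration, following the displayed sum, whose upper limit 32 we take as the number of coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 36) / 36                      # grid points 1/36, ..., 35/36
n = np.arange(1, 33)                           # frequencies n = 1, ..., 32
Z = rng.standard_normal(n.size)                # i.i.d. standard normal coefficients
coeffs = 5 * Z / n ** 2                        # decaying coefficients 5 Z_n / n^2
g = coeffs @ np.sin(np.pi * np.outer(n, x))    # one sample of the random initial value
```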
**Runs over initializations and trainings:** We perform two numerical simulations with different loops over the initializations and training procedures of the base and difference model. In the first numerical simulation we use a grid-based approach to explore different possible initializations of the base model. Specifically, for every \(p\in\{\frac{1}{10},\frac{2}{10},\ldots,\frac{9}{10}\}\times\{\frac{3}{10}, \frac{4}{10},\ldots,\frac{9}{10}\}\) we
* initialize the base model with the specialized initialization described in (32) corresponding to the parameter vector \(p\),
* train the base model with the ADAM optimization method with adaptive learning rates and batch sizes,
* randomly initialize the difference model with the standard Glorot uniform initialization, and
* train the difference model with the ADAM optimization method with adaptive learning rates and batch sizes.
We illustrate the results of this numerical simulation in Figure 1 and approximately summarize the performance in the rows 9-10 in Table 1. In the second numerical simulation we choose the parameters used to initialize the base model by sampling them uniformly at random from the set \([\frac{1}{10},\frac{9}{10}]\times[\frac{3}{10},\frac{9}{10}]\). We perform 50 runs, and in each run we use an optimization based approach to decide whether
* to randomly sample a new parameter vector \(p\in[\frac{1}{10},\frac{9}{10}]\times[\frac{3}{10},\frac{9}{10}]\) and carry out (i)-(iv) above or
* to initialize a new instance of a difference model and train it to approximate the scaled error for one of the already trained base models.
We illustrate the results of this numerical simulation in Figure 2 and approximately summarize the performance in the row 11 in Table 1.
In order to compare the ADANN methodology with existing techniques from the literature we also show the results of existing deep learning methods and classical numerical methods when applied to the optimization problem in (39). Specifically, in row 1 in Table 1 we approximately summarize the performance of classical feedforward ANNs with the architecture \((35,100,220,150,35)\) and the GELU activation function and in row 2 in Table 1 we approximately summarize the performance of FNOs with 4 layers, width 32, and 13 Fourier modes. For both the ANNs and the FNOs we performed 3 training runs using the ADAM optimizer with adaptive learning rates and batch sizes to minimize the objective in (39) and present the resulting error of the best of the 3 runs. In rows 3-8 in Table 1 we approximately present the resulting errors when the approximations are computed by applying the classical Crank-Nicolson explicit midpoint method with finite difference spatial discretization with 35 space discretization points and 5-10 timesteps to the PDE in (33). Note that this corresponds to the approximation described in (24) resp. (32) with \(M\in\{5,6,\ldots,10\}\), \(N=35\), and \(p=(1/2,1/2)\).
All the simulations in this subsection were run on a remote machine on [https://vast.ai](https://vast.ai) equipped with an NVIDIA GeForce RTX 3090 GPU with 24 GB RAM and an Intel Xeon E5-2696 v2 CPU with 32 GB of total system RAM. All the ADANN, ANN, and FNO models were trained using \(2^{19}\) training samples, which were computed ahead of the training in 311 seconds.
### One-dimensional Sine-Gordon type equation
In this section we present numerical simulation results for the ADANN methodology applied to a one-dimensional Sine-Gordon type PDE with periodic boundary conditions. Specifically, let \(\mathcal{I}=\{g\in C^{2}([0,1],\mathbb{R})\colon g(0)=g(1)\text{ and }g^{\prime}(0)=g^{\prime}(1)\}\) and for every \(g\in\mathcal{I}\) consider the PDE
\[\big{(}\tfrac{\partial}{\partial t}u\big{)}(t,x)=\tfrac{1}{100}(\Delta_{x}u)(t,x)+\sin(u(t,x)) \tag{40}\]
with periodic boundary conditions \(u(t,0)=u(t,1)\) and \((\tfrac{\partial}{\partial x}u)(t,0)=(\tfrac{\partial}{\partial x}u)(t,1)\) and initial value \(u(0,x)=g(x)\) for \(t\in[0,\infty)\), \(x\in[0,1]\). We want to approximate the mapping \(\mathcal{S}\colon\mathcal{I}\to C^{2}([0,1],\mathbb{R})\)
Figure 1: Error plots in dependence of the initialization of the base model for the reaction diffusion type equation in (33). _Left_: Estimated \(L^{2}\)-errors of the base model prior to training. _Middle_: Estimated \(L^{2}\)-errors of the base model after training. _Right_: Estimated \(L^{2}\)-errors of the full ADANN model after training.
Figure 2: Estimated \(L^{2}\)-errors for the optimization based run for the reaction diffusion type equation in (33). All runs in between two vertical lines correspond to the same base model initialization.
from the initial condition of the PDE to the terminal value at time \(T=2\). We apply the ADANN methodology as described in Section 2 to this problem with the following choices.
**Base model:** Roughly speaking, we use the base model for semilinear heat PDEs derived in Section 3 with \(N=30\) equidistant space discretization points and \(M=15\) timesteps adapted to the situation of periodic boundary conditions and when the Laplace operator is multiplied with the factor \(\nicefrac{{1}}{{100}}\) as considered in (40). Concretely, this means that we let \(\mathscr{B}\colon((\mathbb{R}^{30\times 30})^{5})^{15}\times\mathcal{I} \to\mathbb{R}^{30}\) satisfy for all \(W=((W_{m,i})_{i\in\{1,2,\ldots,5\}})_{m\in\{1,2,\ldots,15\}}\in((\mathbb{R}^{3 0\times 30})^{5})^{15}\), \(g\in\mathcal{I}\), \(U_{0},U_{1},\ldots,U_{15}\in\mathbb{R}^{30}\) with \(U_{0}=(g(\nicefrac{{0}}{{30}}),g(\nicefrac{{1}}{{30}}),\ldots,g(\nicefrac{{ 29}}{{30}}))\) and
\[\forall\,m\in\{1,2,\ldots,15\}\colon\qquad U_{m}=W_{m,1}U_{m-1}+W_{m,2}\sin(U_{m-1})\\ +W_{m,3}\sin\big{(}W_{m,4}U_{m-1}+W_{m,5}\sin(U_{m-1})\big{)} \tag{41}\]
that
\[\mathscr{B}(W,g)=U_{15}. \tag{42}\]
To initialize this base model we use for some choices of \(p\in(0,\infty)^{2}\) the model parameters
\[(\underbrace{\mathbf{W}_{p},\ldots,\mathbf{W}_{p}}_{\text{15-times}})\in(( \mathbb{R}^{30\times 30})^{5})^{15} \tag{43}\]
as proposed in (32) with the modification that the matrix \(A\) is multiplied with the factor \(\nicefrac{{1}}{{100}}\) and adjusted to periodic boundary conditions.
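A sketch of the periodic variant of the finite difference Laplacian used here (our illustration, assuming grid spacing \(\nicefrac{{1}}{{N}}\) for the \(N=30\) periodic grid points \(\nicefrac{{0}}{{30}},\ldots,\nicefrac{{29}}{{30}}\)), including the scaling by the diffusivity \(\nicefrac{{1}}{{100}}\):

```python
import numpy as np

N = 30
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -2.0
    A[i, (i - 1) % N] = 1.0                   # wrap-around couples x_0 and x_{N-1}
    A[i, (i + 1) % N] = 1.0
A *= N ** 2 / 100.0                           # grid spacing 1/N, diffusivity 1/100

# sanity check on a smooth periodic function: (1/100) Delta sin(2 pi x)
x = np.arange(N) / N
print(np.max(np.abs(A @ np.sin(2 * np.pi * x)
                    - (-(2 * np.pi) ** 2 / 100) * np.sin(2 * np.pi * x))))
```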
**Difference model:** As difference model we use an FNO with 3 layers, width 20, and 16 Fourier modes. In every training run this difference model is initialized randomly as proposed in [30].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & Estimated & Estimated & Number & Training & Time for 4096 \\ & \(L^{1}\)-error & \(L^{2}\)-error & of trainable parameters & time (in s) & evaluations (in s) \\ \hline ANN & 0.003430 & 0.003950 & 64255 & 367 & 0.007 \\ \hline FNO & 0.002411 & 0.003175 & 61921 & 593 & 0.050 \\ \hline Classical (5 timesteps) & 0.001683 & 0.001785 & 30625 & 0 & 0.023 \\ \hline Classical (6 timesteps) & 0.001301 & 0.001360 & 36750 & 0 & 0.025 \\ \hline Classical (7 timesteps) & 0.001123 & 0.001157 & 42875 & 0 & 0.033 \\ \hline Classical (8 timesteps) & 0.001025 & 0.001047 & 49000 & 0 & 0.040 \\ \hline Classical (9 timesteps) & 0.000964 & 0.000982 & 55125 & 0 & 0.035 \\ \hline Classical (10 timesteps) & 0.000925 & 0.000940 & 61250 & 0 & 0.044 \\ \hline Best trained base model (grid) & 0.000174 & 0.000189 & 30625 & 12270 & 0.020 \\ \hline Best trained full ADANN (grid) & 0.000015 & 0.000019 & 45360 & 21147 & 0.027 \\ \hline Best trained full ADANN (opt) & 0.000062 & 0.000065 & 45360 & 13393 & 0.026 \\ \hline \end{tabular}
\end{table}
Table 1: Numerical simulations for the reaction diffusion type PDE in (33)
Figure 3: Example approximation plots for a randomly chosen initial value for the reaction diffusion type equation in (33). _Left_: Best ANN approximation. _Middle_: Best FNO approximation. _Right_: Best full ADANN approximation of the optimization based run.
**Training objective:** As the distribution for the initial value \(\mathfrak{I}\colon\Omega\to\mathcal{I}\) we use a Fourier expansion with decaying randomly distributed coefficients. Specifically, we have for all \(x\in[0,1]\) that
\[\mathfrak{I}(x)=2Z_{0}+\sum_{n=1}^{16}\frac{2Z_{n}\sin(2\pi nx)}{n^{2}}+\frac{2Z_{-n}\cos(2\pi nx)}{n^{2}}, \tag{44}\]
where \(Z_{n}\colon\Omega\to\mathbb{R}\), \(n\in\{-32,-31,\ldots,32\}\), are independent standard normally distributed random variables. In addition, to measure the loss we use a discrete \(L^{2}\)-type seminorm based on evaluations at equidistant grid points in \([0,1]\).
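A sampler for the training distribution in (44) can be sketched as follows, assuming the cosine coefficients use the negative indices \(Z_{-n}\) and noting that only the indices \(|n|\le 16\) enter the sum; names are illustrative.

```python
import numpy as np

def sample_initial_value(rng, x):
    """Sample the random initial value from (44) at grid points x in [0,1].

    Z_n are i.i.d. standard normal; only indices |n| <= 16 enter the sum,
    and the cosine terms use Z_{-n} (an assumption of this sketch).
    """
    z = {n: rng.standard_normal() for n in range(-16, 17)}
    out = 2.0 * z[0] * np.ones_like(x)
    for n in range(1, 17):
        out += 2.0 * z[n] * np.sin(2 * np.pi * n * x) / n**2
        out += 2.0 * z[-n] * np.cos(2 * np.pi * n * x) / n**2
    return out

# usage: sample on the 30-point grid used by the base model
rng = np.random.default_rng(0)
g_grid = sample_initial_value(rng, np.arange(30) / 30)
```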
We illustrate the results of this numerical simulation in Figure 5 and approximately summarize the performance in row 12 in Table 2.
In order to compare the ADANN methodology with existing techniques from the literature we also show the results of deep learning methods and classical numerical methods when applied to the optimization problem in (46). Specifically, in row 1 in Table 2 we approximately summarize the performance of classical feedforward ANNs with the architecture \((30,100,300,160,30)\) and GELU activation function and in row 2 in Table 2 we approximately summarize the performance of FNOs with 4 layers, width 34, and 13 Fourier modes. For both the ANNs and the FNOs we performed 3 training runs using the ADAM optimizer with adaptive learning rates and batch sizes to minimize the objective in (46) and present the resulting error of the best of the 3 runs. In rows 3-8 in Table 2 we approximately present the resulting errors when the approximations are computed by applying the classical Crank-Nicolson explicit midpoint method with finite difference spatial discretization with 30 space discretization steps and 15-20 timesteps to the PDE in (40). Note that this corresponds to the approximation described in (24) resp. (32) with \(M\in\{15,16,\ldots,20\}\), \(N=30\), and \(p=(1/2,1/2)\).
All the simulations in this subsection were run on a remote machine on [https://vast.ai](https://vast.ai) equipped with an NVIDIA GeForce RTX 3090 GPU with 24 GB RAM and an Intel(r) Core(tm) i3-10100 CPU with 32 GB of total system RAM. All the ADANN, ANN, and FNO models were trained using \(2^{19}\) training samples, which were computed ahead of the training in 819 seconds.
### Two-dimensional semilinear heat equation
In this section we present numerical simulation results for the ADANN methodology applied to a two-dimensional semilinear heat PDE with periodic boundary conditions. Specifically, let
\[\mathcal{I}=\left\{g\in C^{2}([0,1]^{2},\mathbb{R})\colon\forall\,z\in[0,1]\colon\left(\begin{array}{c}g(z,0)=g(z,1),\\ g(0,z)=g(1,z),\\ g^{\prime}(z,0)=g^{\prime}(z,1),\\ g^{\prime}(0,z)=g^{\prime}(1,z)\end{array}\right)\right\} \tag{47}\]
and for every \(g\in\mathcal{I}\) consider the PDE
\[\big{(}\tfrac{\partial}{\partial t}u\big{)}(t,x)=\tfrac{1}{100}(\Delta_{x}u)( t,x)+\big{(}1+(u(t,x))^{2}\big{)}^{1/2} \tag{48}\]
with periodic boundary conditions \(u(t,(z,0))=u(t,(z,1))\), \(u(t,(0,z))=u(t,(1,z))\), \(\big{(}\tfrac{\partial}{\partial x}u\big{)}(t,(z,0))\)\(=(\tfrac{\partial}{\partial x}u)(t,(z,1))\), and \(\big{(}\tfrac{\partial}{\partial x}u\big{)}(t,(0,z))=(\tfrac{\partial}{ \partial x}u)(t,(1,z))\) and initial value \(u(0,x)=g(x)\) for \(t\in[0,\infty)\), \(z\in[0,1]\), \(x\in[0,1]^{2}\). We want to approximate the mapping \(\mathcal{S}\colon\mathcal{I}\to C^{2}([0,1]^{2},\mathbb{R})\) from the initial condition of the PDE to the terminal value at time \(T=2\). We apply the ADANN methodology as described in Section 2 to this problem with the following choices.
Figure 5: Estimated \(L^{2}\)-errors for the optimization based run for the Sine-Gordon type equation in (40)
Figure 6: Example approximation plots for a randomly chosen initial value for the Sine-Gordon type equation in (40). _Left_: Best ANN approximation. _Middle_: Best FNO approximation. _Right_: Best full ADANN approximation of the optimization based run.
**Base model:** Roughly speaking, we adapt the base model for one-dimensional semilinear heat PDEs derived in Section 3 to the two-dimensional situation with periodic boundary conditions considered in (48). For the finite difference spatial discretization we employ a \(40\times 40\) grid of equidistant points and for the second order LIRK temporal discretization we use 2 timesteps. Since we consider \(40^{2}=1600\) space discretization points the trainable base model parameters are matrices of the form \(W=((W_{m,i})_{i\in\{1,2,\ldots,5\}})_{m\in\{1,2\}}\in((\mathbb{R}^{1600\times 1 600})^{5})^{2}\) and they are employed to compute the output of the base model analogous to the description in (30)-(31). To initialize this base model we use initializations corresponding to parameters \(p\in(0,\infty)^{2}\) of the LIRK scheme as described in (32), with the modification that the matrix \(A\) represents the two-dimensional Laplace operator discretized with finite differences on a \(40\times 40\)-grid, multiplied by the factor \(\nicefrac{{1}}{{100}}\) and adjusted to periodic boundary conditions.
**Difference model:** As the difference model we use an FNO with 4 layers, width 30, and 15 Fourier modes. In every training run this difference model is initialized randomly as proposed in [30].
**Training objective:** We choose the initial value \(\mathfrak{I}\colon\Omega\to\mathcal{I}\) to be \(\mathcal{N}(0,4(2I-\Delta)^{-2})\)-distributed, where \(\Delta\) denotes the Laplace operator with periodic boundary conditions on \(L^{2}([0,1]^{2})\). In our numerical simulations we approximate this distribution on an \(80\times 80\) equidistant grid. Specifically, let \(\mathbf{N}=80\), let \(X\colon\Omega\to\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\) be a random matrix with independent \(\mathcal{N}(0,\mathbf{N}^{2})\)-distributed entries, let \(\Delta_{\mathbf{N}}\colon\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\to\mathbb{R} ^{\mathbf{N}\times\mathbf{N}}\) be the discrete Laplace operator with periodic boundary conditions, let \(I_{\mathbf{N}}\colon\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\to\mathbb{R}^{ \mathbf{N}\times\mathbf{N}}\) be the identity operator. We employ the approximation
\[\big{(}\mathfrak{I}\big{(}\tfrac{i}{\mathbf{N}},\tfrac{j}{\mathbf{N}}\big{)} \big{)}_{(i,j)\in\{0,1,\ldots,\mathbf{N}-1\}^{2}}\approx 2(2I_{\mathbf{N}}- \Delta_{\mathbf{N}})^{-1}X. \tag{49}\]
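Since the discrete periodic Laplacian is diagonalized by the two-dimensional DFT, the approximation in (49) can be sampled efficiently with FFTs. A minimal sketch, assuming the standard 5-point stencil with grid spacing \(1/\mathbf{N}\):

```python
import numpy as np

def sample_initial_value_2d(rng, N=80):
    """Approximately sample N(0, 4(2I - Lap)^{-2}) on an N x N grid, cf. (49).

    Assumes the 5-point discrete Laplacian with periodic boundary
    conditions and grid spacing 1/N, applied via FFT diagonalization.
    """
    X = rng.standard_normal((N, N)) * N          # entries ~ N(0, N^2)
    j = np.arange(N)
    # eigenvalues of the discrete periodic Laplacian in each direction
    lam = -4.0 * N**2 * np.sin(np.pi * j / N) ** 2
    denom = 2.0 - (lam[:, None] + lam[None, :])  # spectrum of 2I - Lap
    X_hat = np.fft.fft2(X)
    return np.real(np.fft.ifft2(2.0 * X_hat / denom))
```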
In addition, to measure the loss we use a discrete \(L^{2}\)-type seminorm based on evaluations at the equidistant spatial grid points. In order to compare the ADANN methodology with existing techniques from the literature we also show the results of deep learning methods and classical numerical methods when applied to this problem. Specifically, in row 1 in Table 3 we approximately summarize the performance of classical feedforward ANNs with a fully connected architecture
and GELU activation function and in row 2 in Table 3 we approximately summarize the performance of FNOs with 5 layers, width 80, and 20 Fourier modes. For both the ANNs and the FNOs we used the ADAM optimizer with adaptive learning rates. In row 3 in Table 3 we approximately present the resulting errors when the approximations are computed by applying the classical Crank-Nicolson explicit midpoint method with finite difference spatial discretization with \(40\times 40\) space discretization steps and 2 timesteps to the PDE in (48).
All the simulations in this subsection were run on a remote machine on [https://vast.ai](https://vast.ai) equipped with an NVIDIA GeForce RTX 3090 GPU with 24 GB RAM and an AMD EPYC(tm) 7502 CPU with 32 GB of total system RAM. All the ADANN, ANN, and FNO models were trained using \(2^{14}\) training samples, which were computed ahead of the training in 2065 seconds.
## Appendix A Second order linearly implicit Runge-Kutta methods
In this section we present a formal derivation of a well-known family of second order LIRK methods for semilinear ODEs, which are used to construct a base model and a corresponding family of initialization parameters in Section 3 (cf., e.g., Deuflhard & Bornemann [9, Section 6.4] and Hochbruck & Ostermann [21]). We will work in the following setting. Let \(d\in\mathbb{N}\), \(A\in\mathbb{R}^{d\times d}\), \(f\in C(\mathbb{R}^{d},\mathbb{R}^{d})\) and consider the ODE
\[\dot{u}(t)=Au(t)+f(u(t)) \tag{51}\]
for \(t\in(0,\infty)\).
### Order conditions for general LIRK methods
We first introduce the one step increment function of general LIRK methods for the ODE in (51). Specifically, let \(s\in\mathbb{N}\), \(\alpha=(\alpha_{i,j})_{(i,j)\in\{1,2,\ldots,s\}^{2}}\in\mathbb{R}^{s\times s}\), \(\beta=(\beta_{i,j})_{(i,j)\in\{1,2,\ldots,s\}^{2}}\in\mathbb{R}^{s\times s}\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & Estimated & Estimated & Number of & Precomputation & Time for 512 \\ & \(L^{1}\)-error & \(L^{2}\)-error & parameters & time (in s) & evaluations (in s) \\ \hline ANN & 0.220621 & 0.240503 & 30731200 & 1472 & 0.76 \\ \hline FNO & 0.013329 & 0.013707 & 25643217 & 2085 & 0.82 \\ \hline Classical (2 timesteps) & 0.666985 & 0.939649 & 25600000 & 0 & 0.79 \\ \hline Best trained base model & 0.012739 & 0.014994 & 25600000 & 10768 & 0.79 \\ \hline Best trained full ADANN & 0.001291 & 0.001543 & 27227937 & 23359 & 1.01 \\ \hline \end{tabular}
\end{table}
Table 3: Numerical simulations for the two-dimensional semilinear heat PDE in (48)
Figure 7: Error plots in dependence of the initialization of the base model for the two-dimensional semilinear heat PDE in (48). _Left_: Estimated \(L^{2}\)-errors of the base model prior to training. _Middle_: Estimated \(L^{2}\)-errors of the base model after training. _Right_: Estimated \(L^{2}\)-errors of the full ADANN model after training.
\(b=(b_{i})_{i\in\{1,2,\ldots,s\}}\in\mathbb{R}^{s}\) and let \(\Phi=(\Phi^{h}(U))_{(h,U)\in[0,\infty)\times\mathbb{R}^{d}}\colon[0,\infty)\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) satisfy for all \(h\in[0,\infty)\), \(U,k_{1},k_{2},\ldots,k_{s}\in\mathbb{R}^{d}\) with
\[\forall\,i\in\{1,2,\ldots,s\}\colon\quad k_{i}=A(U+h\sum_{j=1}^{i}\beta_{i,j} k_{j})+f(U+h\sum_{j=1}^{i-1}\alpha_{i,j}k_{j}) \tag{52}\]
that
\[\Phi^{h}(U)=U+h\sum_{i=1}^{s}b_{i}k_{i}. \tag{53}\]
We refer to the number \(s\) as the number of stages of the LIRK method, we refer to \(\alpha\) as the nonlinear LIRK parameters, we refer to \(\beta\) as the linear LIRK parameters, we refer to \(b\) as the LIRK integration weights, and we refer to \(k_{1},k_{2},\ldots,k_{s}\) as the LIRK stages. Although the LIRK stages are defined implicitly in (52), under suitable conditions they can be computed explicitly. Specifically, under suitable conditions, we have for all \(h\in[0,\infty)\), \(U,k_{1},k_{2},\ldots,k_{s}\in\mathbb{R}^{d}\) with
\[\forall\,i\in\{1,2,\ldots,s\}\colon\quad k_{i}=(I_{d}-h\beta_{i,i}A)^{-1} \Big{(}A(U+h\sum_{j=1}^{i-1}\beta_{i,j}k_{j})+f(U+h\sum_{j=1}^{i-1}\alpha_{i,j }k_{j})\Big{)} \tag{54}\]
that
\[\Phi^{h}(U)=U+h\sum_{i=1}^{s}b_{i}k_{i}. \tag{55}\]
Order conditions for the one step method \(\Phi\) are obtained by formally setting the Taylor expansion of \(\Phi\) equal to the Taylor expansion of the solution of the ODE in (51) for a fixed initial value \(U\in\mathbb{R}^{d}\) up to terms of a certain order. The resulting order conditions for a second order scheme are given by
\[\sum_{i=1}^{s}b_{i}=1\qquad\text{and}\qquad\sum_{i=1}^{s}b_{i}C_{i}=\sum_{i=1} ^{s}b_{i}c_{i}=\frac{1}{2}, \tag{56}\]
where \((C_{i})_{i\in\{1,2,\ldots,s\}},(c_{i})_{i\in\{1,2,\ldots,s\}}\subseteq\mathbb{R}\) satisfy for all \(i\in\{1,2,\ldots,s\}\) that
\[C_{i}=\sum_{j=1}^{i}\beta_{i,j}\qquad\text{and}\qquad c_{i}=\sum_{j=1}^{i-1} \alpha_{i,j}. \tag{57}\]
Under suitable regularity assumptions on the nonlinearity \(f\), the conditions in (56) ensure that the ODE integration scheme defined through the one step increment function \(\Phi\) has global convergence order 2.
### A family of 2 stage linearly implicit Runge-Kutta methods of order 2
In this section we solve the order conditions in (56) in the case of \(s=2\) stages and under the assumption that \(\beta_{1,1}=\beta_{2,2}\). For this let \(p_{1},p_{2}\in(0,\infty)\) and assume that
\[\alpha_{2,1}=p_{1}\qquad\text{and}\qquad\beta_{1,1}=\beta_{2,2}=p_{2}. \tag{58}\]
This and (56) imply that
\[b_{1}=1-\tfrac{1}{2p_{1}},\qquad b_{2}=\tfrac{1}{2p_{1}},\qquad\text{and}\qquad\beta_{2,1}=2p_{1}(\tfrac{1}{2}-p_{2}). \tag{59}\]
This and (54) in turn imply, under suitable conditions, that for all \(h\in[0,\infty)\), \(U,k_{1},k_{2}\in\mathbb{R}^{d}\) with
\[k_{1}=(I_{d}-hp_{2}A)^{-1}(AU+f(U))\qquad\text{and} \tag{60}\]
\[k_{2}=(I_{d}-hp_{2}A)^{-1}\big{(}A(U+h2p_{1}(\tfrac{1}{2}-p_{2})k_{1})+f(U+hp_{ 1}k_{1})\big{)} \tag{61}\]
it holds that
\[\Phi^{h}(U)=U+h\big{[}(1-\tfrac{1}{2p_{1}})k_{1}+(\tfrac{1}{2p_{1}})k_{2}\big{]}. \tag{62}\]
We have thus derived a family of LIRK methods of order two, which is parametrized by two parameters \(p_{1}\) and \(p_{2}\). We use this family in Section 3.2.2.
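A minimal NumPy sketch of one step of this two-parameter family, using a dense direct solve in place of the matrix inverse (names illustrative):

```python
import numpy as np

def lirk2_step(A, f, U, h, p1, p2):
    """One step of the two-parameter second order LIRK family (60)-(62)."""
    M = np.eye(len(U)) - h * p2 * A          # the matrix I_d - h p2 A
    k1 = np.linalg.solve(M, A @ U + f(U))    # stage (60)
    k2 = np.linalg.solve(                    # stage (61)
        M, A @ (U + h * 2 * p1 * (0.5 - p2) * k1) + f(U + h * p1 * k1)
    )
    # combination (62)
    return U + h * ((1 - 1 / (2 * p1)) * k1 + (1 / (2 * p1)) * k2)
```

Setting \(p_{1}=p_{2}=\frac{1}{2}\) in this sketch recovers the Crank-Nicolson explicit midpoint scheme in (63) below.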
### The special case of the Crank-Nicolson explicit midpoint method
The scheme in (60)-(62) includes as a special case the well-known Crank-Nicolson explicit midpoint scheme. Specifically, note that in the special case where \(p_{1}=p_{2}=\frac{1}{2}\) we have for all \(h\in[0,\infty)\), \(U\in\mathbb{R}^{d}\) that
\[\Phi^{h}(U)=(I_{d}-\tfrac{h}{2}A)^{-1}\big{(}(I_{d}+\tfrac{h}{2}A)U+hf\big{(}(I_{d}-\tfrac{h}{2}A)^{-1}(U+\tfrac{h}{2}f(U))\big{)}\big{)}. \tag{63}\]
## Acknowledgments
This work has been partially funded by the National Science Foundation of China (NSFC) under grant number 12250610192. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.
|
2305.04228 | Heterogeneous Directed Hypergraph Neural Network over abstract syntax
tree (AST) for Code Classification | Code classification is a difficult issue in program understanding and
automatic coding. Due to the elusive syntax and complicated semantics in
programs, most existing studies use techniques based on abstract syntax tree
(AST) and graph neural network (GNN) to create code representations for code
classification. These techniques utilize the structure and semantic information
of the code, but they only take into account pairwise associations and neglect
the high-order correlations that already exist between nodes in the AST, which
may result in the loss of code structural information. On the other hand, while
a general hypergraph can encode high-order data correlations, it is homogeneous
and undirected which will result in a lack of semantic and structural
information such as node types, edge types, and directions between child nodes
and parent nodes when modeling AST. In this study, we propose to represent AST
as a heterogeneous directed hypergraph (HDHG) and process the graph by
heterogeneous directed hypergraph neural network (HDHGN) for code
classification. Our method improves code understanding and can represent
high-order data correlations beyond paired interactions. We assess
heterogeneous directed hypergraph neural network (HDHGN) on public datasets of
Python and Java programs. Our method outperforms previous AST-based and
GNN-based methods, which demonstrates the capability of our model. | Guang Yang, Tiancheng Jin, Liang Dou | 2023-05-07T09:28:16Z | http://arxiv.org/abs/2305.04228v3 | Heterogeneous Directed Hypergraph Neural Network over abstract syntax tree (AST) for Code Classification
###### Abstract
Code classification is a difficult issue in program understanding and automatic coding. Due to the elusive syntax and complicated semantics in programs, most existing studies use techniques based on abstract syntax tree (AST) and graph neural network (GNN) to create code representations for code classification. These techniques utilize the structure and semantic information of the code, but they only take into account pairwise associations and neglect the high-order correlations that already exist between nodes in the AST, which may result in the loss of code structural information. On the other hand, while a general hypergraph can encode high-order data correlations, it is homogeneous and undirected which will result in a lack of semantic and structural information such as node types, edge types, and directions between child nodes and parent nodes when modeling AST. In this study, we propose to represent AST as a heterogeneous directed hypergraph (HDHG) and process the graph by heterogeneous directed hypergraph neural network (HDHGN) for code classification. Our method improves code understanding and can represent high-order data correlations beyond paired interactions. We assess heterogeneous directed hypergraph neural network (HDHGN) on public datasets of Python and Java programs. Our method outperforms previous AST-based and GNN-based methods, which demonstrates the capability of our model.
hypergraph, heterogeneous graph, code classification, graph neural networks, program understanding
## I Introduction
With the advancement of modern computer software, how to learn from vast open-source code repositories to enhance software development has become an essential research topic. In recent years, source code processing, which tries to help computers automatically comprehend and analyze source code, has received a lot of attention. Several works have been suggested including code classification [1, 2, 3, 4, 5, 6], method name prediction [3][4][7][8], code summarization [3][9][10] and code clone detection [5][11][12], etc.
Due to the improvement of machine learning technology, particularly deep learning, more and more work has employed deep learning for code classification. Currently, there are two main categories of code classification methods: AST-based and GNN-based. To take advantage of the semantic and structural information of the source code, several studies adopt AST when learning code representations [1][5][7][8]. Some research uses graph neural networks (GNN) to create code representations for code categorization to a better understanding of the structure of code based on AST [3][4][13][14].
Although these AST-based and graph-based techniques employ the structural information of source code and demonstrate their effectiveness, there is a problem that they only take into account the pairwise relationships and ignore the possible high-order correlations between AST nodes. For example, when code is parsed into an AST, each parent AST node has child AST nodes belonging to various fields, also called attributes. A parent node may have several child nodes under the same field, and these nodes have high-order correlations with one another. Fig. 1 depicts an example of a Python code snippet. The corresponding AST generated by the official Python ast module1 is illustrated in Fig. 2(a). As we can see, the "Module" is the root node of the AST. It has two child nodes, "Assign" and "Expr," which belong to the field named "body." When modeling the correlations between the three nodes, previous approaches only consider the pairwise relationships, i.e., the pair of "Module" and "Assign" and the pair of "Module" and "Expr," as demonstrated in Fig. 3(a). The high-order data correlation that "Assign" and "Expr" both belong to the "body" of "Module," as shown in Fig. 3(b), is dismissed, which may result in the loss of code structural information.
Footnote 1: [https://docs.python.org/3/library/ast.html](https://docs.python.org/3/library/ast.html).
In recent years, the hypergraph, which can encode high-order data correlations, has drawn a lot of interest. Considering the outstanding performance of hypergraphs in graph classification [15][16], we introduce hypergraphs into code classification. On the other hand, a general hypergraph is homogeneous and undirected, i.e., it only has one type of node and one type of edge, and its hyperedges are undirected. If we represent the AST with a general hypergraph, it will result in a lack of semantic
Fig. 1: An example of the code snippet. The program reads three inputs a, b, and c in turn; if a equals c, "Yes" is output, otherwise "No" is output.
and structural information such as field names and directions. As illustrated in Fig. 4(a), a typical hypergraph has no field names and no directions to indicate which node is the parent and which is the child.
To tackle the above problems, we suggest a heterogeneous directed hypergraph (HDHG) to model the AST and an HDHGN for code classification. First, we propose to use heterogeneous directed hyperedges to represent the relationships of AST nodes; the example of Fig. 3 is shown in Fig. 4(b). Second, we combine deep learning techniques from hypergraph neural networks and heterogeneous graph neural networks to create the HDHGN, and we also add operations to process directed hyperedges.
We evaluate our method on public datasets Python800 and Java250 [17]. Our model gets 97% in accuracy on Python800 and 96% on Java250 which outperforms previous state-of-the-art AST-based and GNN-based work. Our study demonstrates the utility of HDHG and HDHGN for code classification.
The main contributions of this paper are:
* We propose an HDHG to depict AST.
* We propose an HDHGN to generate vector representations for code classification.
* We assess our model on public datasets and compare it with previous SOTA AST-based and graph-based methods.
## II Related Work
Code classification is to classify code snippets based on their functionality. Different from natural language, code has structural information. As a result, several works adopt the AST through various techniques. Mou et al. [1] is one of the first works to suggest a Tree-Based Convolutional Neural Network (TBCNN) for code classification. Alon et al. propose code2seq [7] and code2vec [8] to deconstruct code into a collection of paths in its AST. J. Zhang et al. [5] propose a novel neural network called ASTNN for source code representation for code classification and clone detection. N. D. Q. Bui et al. [18] propose a novel method named TreeCaps by fusing capsule networks with TBCNN for code classification.
With the popularity of GNN, more works apply kinds of GNN in code classification based on AST to strengthen the comprehension of code structures. M. Allamanis et al. [13] first construct graphs from source code by adding edges like control flow and data flow to AST and employing a gated graph neural network (GGNN) to process program graphs. V. Hellendoorn et al. [19] propose a model called GREAT based on the transformer architecture by extracting global relational information from code graphs. D. Vagavoglu et al. [3] propose
Fig. 4: Comparison of using general hypergraph and HDHG to model the relationships between three nodes of Fig. 3. The difference is that hyperedges in HDHG have direction and type.
Fig. 3: The pairwise connections and high-order data correlation between three AST nodes. We ignore other AST nodes in the figure.
Fig. 2: The AST of code snippet in Fig. 1. We use the official python module to print the AST to depict the details in Fig. 2(a). We draw the illustration of the AST in Fig. 2(b) to demonstrate the parent-child relationship between AST nodes.
an approach that can extract and use program features from multiple code graphs. M. Lu et al. [6] improved GGNN for program classification. T. Long [14] proposes a multi-view graph program representation method that combines both data flow and control flow as multiple views and applies GNNs to process them. W. Wang et al. [4] propose to leverage heterogeneous graphs to represent code based on previous work and adopt a heterogeneous GNN to process them.
## III Preliminary
In this section, we introduce some fundamental background ideas, such as hypergraph, heterogeneous graph, and AST.
### _Hypergraph_
In an ordinary graph, an edge can only be connected with two vertices. Different from general graphs, the edge of a hypergraph [20] can link any number of vertices. Formally, a hypergraph \(H\) is a pair \(H=(V,E)\) where \(V\) is a set of elements called nodes or vertices, and \(E\) is a set of non-empty subsets of \(V\) called hyperedges or links.
A directed hypergraph [21] is a hypergraph with directed hyperedges. A directed hyperedge or hyperarc is an ordered pair, \(E=(X,Y)\), of (possibly empty) disjoint subsets of vertices; \(X\) is the tail of \(E\) while \(Y\) is its head. A backward hyperarc, or simply B-arc, is a hyperarc \(E=(X,Y)\) with \(|Y|=1\). A forward hyperarc, or simply F-arc, is a hyperarc \(E=(X,Y)\) with \(|X|=1\). A hypergraph whose hyperarcs are B-arcs is known as a B-graph (or B-hypergraph). A hypergraph whose hyperarcs are F-arcs is known as an F-graph or F-hypergraph.
In our study, since the child node of AST points to the parent node and the child node has only one parent node, our HDHG is a B-hypergraph.
### _Heterogeneous Graph_
A heterogeneous graph [22] is a graph consisting of multiple types of entities or nodes and multiple types of links or edges. A heterogeneous graph is represented as \(G=(V,E)\), consisting of an entity set \(V\) and a link set \(E\). A heterogeneous graph is also correlated with a node type mapping function \(\phi:V\to A\) and a link type mapping function \(\psi:E\to R\). \(A\) and \(R\) represent the sets of predefined object types and link types, where \(|A|+|R|>2\).
### _Abstract Syntax Tree_
The AST represents the source code's abstract syntax structure. The code compiler will parse the code into an AST through program syntax and semantic analysis. Each node on the tree represents a structure in the source code and belongs to an AST node type. Each AST node has zero, one, or several fields that can be thought of as the node's attributes. Each field may have none, one, or a list of objects such as AST nodes, numbers, and strings. If one AST node contains a field that includes another AST node, the latter is the former's child AST node.
## IV Methodology
We first convert the code snippet into an AST and construct an HDHG based on it, then feed it into our HDHGN. Once we obtain the node vector representations from the network, we combine them into a single vector representation for code classification. The overview of our model is demonstrated in Fig. 5.
### _Heterogeneous Directed Hypergraph_
We parse the code snippet into an AST with a code compiler, then we develop the HDHG based on it. We set each node of the AST as an "AST" node and each identifier of the AST as an "identifier" node in the HDHG. We set the value of an "AST" node as its AST node type name, set the value of an "identifier" node as its content, and treat them as two different types of nodes. Each field is configured as a directed hyperedge. If one node has a field including another node, the latter node belongs to the tail of the field hyperedge, and the former is the head of the field hyperedge. We designate the field name as the type of the hyperedge. The illustration of the HDHG of the AST in Fig. 2 is shown in Fig. 6.
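A minimal sketch of this construction with the official Python ast module might look as follows; non-string constants are skipped for brevity, and the edge-list format is illustrative rather than the authors' exact implementation.

```python
import ast

def build_hdhg(code):
    """Build an HDHG (nodes, hyperedges) from a code snippet.

    Nodes are (type, value) pairs with type "AST" or "identifier";
    each hyperedge is (field_name, tail_node_ids, head_node_id).
    """
    nodes, edges = [], []

    def add_node(kind, value):
        nodes.append((kind, value))
        return len(nodes) - 1

    def visit(node):
        head = add_node("AST", type(node).__name__)
        for field, value in ast.iter_fields(node):
            children = value if isinstance(value, list) else [value]
            tails = []
            for child in children:
                if isinstance(child, ast.AST):
                    tails.append(visit(child))       # child AST node
                elif isinstance(child, str):
                    tails.append(add_node("identifier", child))
            if tails:
                edges.append((field, tails, head))   # one hyperedge per field
        return head

    visit(ast.parse(code))
    return nodes, edges
```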
### _Heterogeneous Directed Hypergraph Neural Network_
#### Iv-B1 Definition
We let an HDHG be \(G=(N,E)\), which includes a node set \(N=\{n_{1},\ n_{2},\ldots,n_{|N|}\}\) and a directed hyperedge set \(E=\left\{e_{1},e_{2},\ldots,e_{|E|}\right\}\). Each node \(n=(\mu,x)\), where \(\mu\) represents the node type and \(x\) is the value of the node. Each directed hyperedge \(e=(\rho,S(e),T(e))\), where \(\rho\) represents the edge type, \(S(e)=\left\{n_{1},\ldots,n_{|S(e)|}\right\}\subseteq N\) is the set of tail nodes of hyperedge \(e\), and \(T(e)\in N\) is the head node of hyperedge \(e\); together they indicate that the direction of the hyperedge \(e\) is from \(S(e)\) to \(T(e)\).
#### Iv-B2 Feature initialization
According to the value \(x\) and the category \(\mu\) of node \(n\), we obtain the embedding vector \(d_{n}\in\mathbb{R}^{C_{1}}\) by an embedding function as in (1), where \(C_{1}\) is the dimension size of the embedding vector.
\[d_{n}=Embed_{\mu}(x) \tag{1}\]
To put embedding vectors of various types into the same vector space, we make a linear projection to obtain the initial feature vector \(h_{n}^{0}\in\mathbb{R}^{C_{2}}\) of node \(n\) based on the corresponding node type \(\mu\) as (2), where \(C_{2}\) is the dimension size of feature vector and hidden vector.
\[h_{n}^{0}=W_{\mu}d_{n}+b_{\mu} \tag{2}\]
We also obtain the embedding vector \(d_{e}\in\mathbb{R}^{C_{2}}\) of hyperedge \(e\) according to the edge type \(\rho\) as (3).
\[d_{e}=Embed_{edge}(\rho) \tag{3}\]
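A PyTorch sketch of the feature initialization in (1)-(3), with illustrative vocabulary sizes and the two node types handled via type-specific embedding tables and projections:

```python
import torch
import torch.nn as nn

class NodeFeatureInit(nn.Module):
    """Type-specific embedding and projection, cf. (1)-(3).

    Vocabulary sizes and dimensions are illustrative placeholders.
    """
    def __init__(self, vocab_sizes, num_edge_types, c1=128, c2=128):
        super().__init__()
        self.embed = nn.ModuleDict(
            {mu: nn.Embedding(v, c1) for mu, v in vocab_sizes.items()})
        self.proj = nn.ModuleDict(
            {mu: nn.Linear(c1, c2) for mu in vocab_sizes})
        self.embed_edge = nn.Embedding(num_edge_types, c2)

    def init_node(self, mu, x_index):
        d = self.embed[mu](x_index)        # (1): type-specific embedding
        return self.proj[mu](d)            # (2): project into shared space

    def init_edge(self, rho_index):
        return self.embed_edge(rho_index)  # (3): edge-type embedding

# usage with hypothetical vocabulary sizes
init = NodeFeatureInit({"AST": 100, "identifier": 5000}, num_edge_types=40)
h0 = init.init_node("AST", torch.tensor(3))  # initial feature of one node
```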
#### Iv-B3 Heterogeneous Directed Hypergraph Convolution Layer
Our model updates each node vector in one heterogeneous directed hypergraph convolution (HDHGConv) layer. We follow the framework of two-stage message passing of hypergraph neural networks [15], which has two steps: aggregating messages from nodes to hyperedges and aggregating messages from hyperedges to nodes. In contrast to the original framework, we add operations that incorporate heterogeneous information and direction information.
**Aggregating messages from nodes to hyperedges:** First, the hidden vector \(h_{n}^{l-1}\) of each node \(n\) is multiplied by the head matrix or tail matrix to get the message vector \(m_{n\to e}^{l}\) from node \(n\) to hyperedge \(e\) as (4), where \(l=1,2,\ldots,L\) indicates the layer number and \(L\) is the total number of layers.
\[m_{n\to e}^{l}=\left\{\begin{matrix}W_{head}^{l}h_{n}^{l-1}+b_{head}^{l}&\text{if }n\in S(e)\\ W_{tail}^{l}h_{n}^{l-1}+b_{tail}^{l}&\text{if }n=T(e)\end{matrix}\right. \tag{4}\]
Directed hyperedge \(e\) gathers messages from its tail nodes and head node by the transformer attention mechanism [23]. For each message, the attention score is formulated as (5), where \(d_{e}\) is the edge type vector.
\[\alpha_{n\to e}^{l}=Softmax\left(\frac{\left(W_{q1}^{l}d_{e}\right)^{T}W_{k1}^{l}m_{n\to e}^{l}}{\sqrt{C_{2}}}\right) \tag{5}\]
We obtain the vector \(o_{e}^{l}\) of directed hyperedge \(e\) as (6).
\[o_{e}^{l}=\sum_{n\in S(e)\text{ or }n=T(e)}\alpha_{n\to e}^{l}W_{v1}^{l}m_{n\to e}^{l} \tag{6}\]
Then we add the edge type vector to \(o_{e}^{l}\) as (7), where \(z_{e}^{l}\) is formed as (8).
\[q_{e}^{l}=o_{e}^{l}+z_{e}^{l} \tag{7}\]
\[z_{e}^{l}=W_{z}^{l}d_{e}+b_{z}^{l} \tag{8}\]
**Aggregating messages from hyperedges to nodes:** For each directed hyperedge \(e\) whose head node or tail node is \(n\), the vector \(q_{e}^{l}\) is linearly projected by (9) to get the message \(m_{e\to n}^{l}\) which will be sent to \(n\).
\[m_{e\to n}^{l}=\left\{\begin{matrix}W_{to\_head}^{l}q_{e}^{l}+b_{to\_head}^{l}&\text{if }n\in S(e)\\ W_{to\_tail}^{l}q_{e}^{l}+b_{to\_tail}^{l}&\text{if }n=T(e)\end{matrix}\right. \tag{9}\]
Same as before, we aggregate messages to get \(v_{n}^{l}\) by the transformer attention mechanism.
\[\alpha_{e\to n}^{l}=Softmax\left(\frac{\left(W_{q2}^{l}h_{n}^{l-1}\right)^{T}W_{k2}^{l}m_{e\to n}^{l}}{\sqrt{C_{2}}}\right) \tag{10}\]
\[v_{n}^{l}=\sum_{T(e)=n\text{ or }n\in S(e)}\alpha_{e\to n}^{l}W_{v2}^{l}m_{e\to n}^{l} \tag{11}\]
Last, we update the hidden vector \(h_{n}^{l}\) of node \(n\) by (12), where \(\sigma\) is the ELU activation function and GraphNorm is graph normalization [24].
\[h_{n}^{l}=\sigma\left(GraphNorm\left(W_{u1}^{l}v_{n}^{l}+W_{u2}^{l}h_{n}^{l-1}+b_{u}^{l}\right)\right) \tag{12}\]
The \(W\)'s above are all weight matrices, and the \(b\)'s above are all bias vectors, which are learned by training. All of the attention mechanisms mentioned above use multiple heads.
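To tie (4)-(12) together, a single-head, loop-based sketch of one HDHGConv layer is given below; batching with scatter operations, multi-head attention, and the exact GraphNorm are omitted (LayerNorm stands in), so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HDHGConvSketch(nn.Module):
    """Single-head, loop-based sketch of one HDHGConv layer, cf. (4)-(12)."""

    def __init__(self, c):
        super().__init__()
        self.c = c
        self.w_head, self.w_tail = nn.Linear(c, c), nn.Linear(c, c)
        self.q1 = nn.Linear(c, c, bias=False)
        self.k1 = nn.Linear(c, c, bias=False)
        self.v1 = nn.Linear(c, c, bias=False)
        self.z = nn.Linear(c, c)
        self.to_head, self.to_tail = nn.Linear(c, c), nn.Linear(c, c)
        self.q2 = nn.Linear(c, c, bias=False)
        self.k2 = nn.Linear(c, c, bias=False)
        self.v2 = nn.Linear(c, c, bias=False)
        self.u1, self.u2 = nn.Linear(c, c, bias=False), nn.Linear(c, c)
        self.norm = nn.LayerNorm(c)  # stand-in for GraphNorm

    def forward(self, h, d_e, edges):
        """h: (num_nodes, c) states; d_e: (num_edges, c) edge type vectors;
        edges: list of (edge_id, tail_node_ids, head_node_id)."""
        q_e = {}
        for eid, tails, head in edges:          # stage 1: nodes -> hyperedges
            # (4): head matrix for tail nodes, tail matrix for the head node
            msgs = torch.stack([self.w_head(h[n]) for n in tails]
                               + [self.w_tail(h[head])])
            scores = self.k1(msgs) @ self.q1(d_e[eid]) / self.c ** 0.5  # (5)
            att = F.softmax(scores, dim=0)
            q_e[eid] = att @ self.v1(msgs) + self.z(d_e[eid])  # (6)-(8)
        v = torch.zeros_like(h)
        for n in range(h.shape[0]):             # stage 2: hyperedges -> nodes
            msgs = [self.to_head(q_e[eid]) if n in tails else self.to_tail(q_e[eid])
                    for eid, tails, head in edges if n in tails or n == head]  # (9)
            if msgs:
                msgs = torch.stack(msgs)
                scores = self.k2(msgs) @ self.q2(h[n]) / self.c ** 0.5  # (10)
                v[n] = F.softmax(scores, dim=0) @ self.v2(msgs)  # (11)
        return F.elu(self.norm(self.u1(v) + self.u2(h)))  # (12)
```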
### _Classification_
When we obtain the node hidden vectors \(h_{1}^{L},h_{2}^{L},\ldots,h_{|N|}^{L}\) from the last layer, we utilize attention pooling to aggregate the information of each node to obtain vector representation \(r\) as (13)(14), where \(g\in\mathbb{R}^{C_{2}}\) is a learnable vector.
\[\alpha_{n}=Softmax\left(g^{T}h_{n}^{L}\right) \tag{13}\]
\[r=\sum_{n\in N}\alpha_{n}h_{n}^{L} \tag{14}\]
Fig. 5: Overview of the process.
Fig. 6: The HDHG of the AST in Fig. 2. The circular nodes are "AST" nodes, and the square nodes are "identifier" nodes. An edge can connect multiple nodes; the node connected with an arrow is the head node of the edge, and the nodes connected without arrows are the tail nodes. Different colors correspond to different edge types. The nodes and edges display the node values and edge types.
To obtain the final classification prediction, we use an MLP, which is expressed as (15).
\[pred=Softmax\left(MLP(r)\right) \tag{15}\]
The attention mechanism above is also multi-head attention. We employ the standard cross-entropy loss function for training.
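A single-head sketch of the readout in (13)-(15), trained with cross-entropy on the returned logits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPoolClassifier(nn.Module):
    """Attention pooling over node vectors followed by an MLP, cf. (13)-(15)."""
    def __init__(self, c, num_classes):
        super().__init__()
        self.g = nn.Parameter(torch.randn(c))   # learnable query vector g
        self.mlp = nn.Sequential(nn.Linear(c, c), nn.ReLU(),
                                 nn.Linear(c, num_classes))

    def forward(self, h):                       # h: (num_nodes, c)
        alpha = F.softmax(h @ self.g, dim=0)    # (13): per-node weights
        r = alpha @ h                           # (14): graph representation
        return self.mlp(r)                      # (15): class logits
```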
## V Evaluation
We implement our code with torch_geometric2. Our implementation is available at [https://github.com/qiankunmu/HDHGN](https://github.com/qiankunmu/HDHGN).
Footnote 2: [https://pytorch-geometric.readthedocs.io/en/latest/index.html](https://pytorch-geometric.readthedocs.io/en/latest/index.html)
### _Datasets_
We use Python800 and Java250 to train and assess our model. The two public datasets are from Project CodeNet [17] and were obtained by downloading submissions from two online judge websites: AIZU Online Judge and AtCoder. The code snippets are classified by problem. The statistics of the datasets are depicted in Table I. To be clear, the AST node type means the AST type such as "Module" and "Assign," different from the node types in the HDHG, i.e., "AST" and "identifier." The edge type means the field name (also called attribute) in the AST. We randomly split the dataset into training, validation, and test sets by 6:2:2.
### _Baselines_
We compare our model with AST-based and GNN-based techniques which achieve the best performance in code classification, including TBCNN [1], TreeCaps [18], GGNN [13], GREAT [19] and HPG+HGT [4]. TBCNN used a tree-based convolutional neural network to extract features from the AST. TreeCaps combines TBCNN and capsule networks. By adding edges like control flow and data flow to the AST, the gated graph neural network (GGNN) processes graphs from source code. GREAT is a model extracting global relational information from code graphs based on the transformer architecture. HPG+HGT describes code as a heterogeneous graph and processes it with a heterogeneous graph transformer. We also trained a GCN [25] and a GIN [26] for comparison.
### _Experiment settings_
We use a parser from the official Python 3.8 ast library and the javalang library3 to parse the code snippets into ASTs. The embedding vectors are produced by random initialization and learned via training. The number of layers of our model was set to four. The hidden vector dimension size and embedding vector dimension size were both set to 128. We use a narrow multi-head attention [23] mechanism and set the number of heads to eight. We employed the Adam optimizer with a learning rate of \(5\times 10^{-5}\) to train our model. We set the dropout rate to 0.2. We optimized the hyper-parameters of the other baselines for the best performance on the validation set. The models were trained for 100 epochs and we saved the models which performed best on the validation set.
Footnote 3: [https://pypi.org/project/javalang/](https://pypi.org/project/javalang/)
### _Results_
We use the performance of the model on the test set as the outcome. We select classification accuracy as the metric. We calculate the mean accuracy and standard deviation after five iterations of the experiment. The results are depicted in Table II. Our HDHGN outperforms the other methods on both datasets. In Python800, our HDHGN is 2.88% higher than the best baseline. In Java250, our model outperforms the baseline models by at least 2.47%. This demonstrates that our model utilizes the semantic and structural features of the code AST more effectively than previous approaches.
### _Ablation study_
We perform some ablation studies of our HDHGN on Python800. We consider three variants as below.
#### V-C1 - hyperedge
We eliminate hyperedges from our model, leaving only paired edges, i.e., normal edges, in the graph. Each original hyperedge is decomposed into several regular pairwise edges.
#### V-C2 - heterogeneous information
We eliminate heterogeneous information from our model, which entails treating identifier nodes and AST nodes as a single type of node in the graph and eliminating the information about edge types.
#### V-C3 - direction
We remove direction information from our model; this means that the hyperedges are no longer directed and the model does not differentiate head nodes from tail nodes.
We also repeat the experiment five times and compute the mean accuracy and standard deviation. The outcomes are depicted in Table III. Removing hyperedges makes the accuracy decrease by 3.08%. This demonstrates that high-order data correlations between AST nodes in code are indeed useful for comprehending programs. The removal of heterogeneous information reduces the accuracy by 2.64%. Heterogeneous information often contains a lot of semantic information, which is helpful for program understanding. Removing direction caused a drop of 2.38% in accuracy. The direction of the graph can help the model obtain structural information by indicating whether the nodes connected by hyperedges are parent nodes or child nodes. The above outcomes demonstrate that our model obtains a better understanding of the AST structure and acquires more precise results in code classification by considering high-order data correlations, heterogeneous information, and direction information.
## VI Conclusion
In this study, we propose an HDHGN for code classification. To encode the possible high-order data correlations between nodes in an AST, we introduce hypergraphs. Since a general hypergraph is homogeneous and undirected, which results in a lack of semantic and structural information, we propose to represent the AST as a heterogeneous directed hypergraph. We create an HDHGN accordingly to utilize high-order data correlations, heterogeneous information, and direction information better than previous methods. We test our model on public Python and Java datasets and compare the results with baselines based on the SOTA AST-based and GNN-based methods. The experiments demonstrate that our HDHGN outperforms the baselines. A further ablation study shows that each component of the HDHGN enhances the performance of code classification.
Presently, the hypergraphs we produce are large and contain many nodes and edges. Future research will focus on ways to scale down hypergraphs for modeling ASTs and on enhancing the current hypergraph model to make it more effective at classifying code.
## Acknowledgment
This work was supported by the Open Research Fund of NPPA Key Laboratory of Publishing Integration Development, ECNUP.
|
2301.03098 | Comprehensive Mapping of Continuous/Switching Circuits in CCM and DCM to
Machine Learning Domain using Homogeneous Graph Neural Networks | This paper proposes a method of transferring physical continuous and
switching/converter circuits working in continuous conduction mode (CCM) and
discontinuous conduction mode (DCM) to graph representation, independent of the
connection or the number of circuit components, so that machine learning (ML)
algorithms and applications can be easily applied. Such methodology is
generalized and is applicable to circuits with any number of switches,
components, sources and loads, and can be useful in applications such as
artificial intelligence (AI) based circuit design automation, layout
optimization, circuit synthesis and performance monitoring and control. The
proposed circuit representation and feature extraction methodology is applied
to seven types of continuous circuits, ranging from second to fourth order and
it is also applied to three of the most common converters (Buck, Boost, and
Buck-boost) operating in CCM or DCM. A classifier ML task can easily
differentiate between circuit types as well as their mode of operation, showing
classification accuracy of 97.37% in continuous circuits and 100% in switching
circuits | Ahmed K. Khamis, Mohammed Agamy | 2023-01-08T19:32:07Z | http://arxiv.org/abs/2301.03098v1 | # Comprehensive Mapping of
###### Abstract
This paper proposes a method of transferring physical continuous and switching/converter circuits working in continuous conduction mode (CCM) and discontinuous conduction mode (DCM) to graph representation, independent of the connection or the number of circuit components, so that machine learning (ML) algorithms and applications can be easily applied. Such methodology is generalized and is applicable to circuits with any number of switches, components, sources and loads, and can be useful in applications such as artificial intelligence (AI) based circuit design automation, layout optimization, circuit synthesis and performance monitoring and control. The proposed circuit representation and feature extraction methodology is applied to seven types of continuous circuits, ranging from second to fourth order and it is also applied to three of the most common converters (Buck, Boost, and Buck-boost) operating in CCM or DCM. A classifier ML task can easily differentiate between circuit types as well as their mode of operation, showing classification accuracy of 97.37% in continuous circuits and 100% in switching circuits.
Electric circuit, Bond Graph, Graph Neural Networks (GNN), Machine Learning
## I Introduction
AI algorithms are used to model computationally complex systems or systems/processes with significant parameter uncertainties. Modern improvements in computation resources enable the incorporation of AI algorithms in power converter design and control. Complex nonlinear problems such as thermal and electromagnetic designs, modeling of layout parasitics and estimation of component stresses under different operating conditions are some areas where AI algorithms can significantly simplify and optimize the design process [3, 4, 5, 6]. Power electronics applications of ML have focused on control, component design and maintenance [7]. ML-based surrogate/black box models are used for online prediction tasks to reduce the computational effort, memory and power used by classical simulation/mathematical models [7]. Design optimization is an additional target, as ML models obtain the optimal design without compromising other design constraints or trade-offs, which is known in its mathematical formulation as the Pareto front [8]. ML-based circuit design should be able to reflect circuit component connectivity as well as the effect of varying the values of these components. In [9] a graph representation of circuits with a combined feature map for input and output nodes was proposed. However, it does not represent details of component types or connectivity; rather, it is just a numerical input/output transfer characteristic of the circuit. Reinforcement Learning (RL) was introduced in [10] to optimize passive component values. An updated version of the RL agent was presented in [11], where the RL-based optimization algorithm is used to optimally size transistors. In this case, based on a given design flow, the RL algorithm updates the node embeddings in the Graph Neural Network (GNN) representation of the circuit to maximize the cost function. One-hot encoding is used to represent transistors, in addition to other internal parameters, which are passed as features to a Graph Convolution Network (GCN) to extract node embeddings. Despite the simplicity of this approach, it is incorrect and does not guarantee a solution in the inverse problem. In other words, in the circuit synthesis/generation problem, there is no guarantee that the circuit synthesis neural network can transform the generated graph into a physically realisable circuit. Existing methods do not provide a systematic way of circuit feature embedding in GNNs. These models have several limitations including scalability of circuit size (number of nodes and/or components), mapping connectivity
and identifying component types within the circuit. This paper proposes a systematic approach for electric circuit representation to enable the use of ML design or performance prediction tools. This method has the benefits of being scalable and topology agnostic. In this paper the following key contributions are proposed:
* A comparative review of different research attempts in mapping circuits to ML domain including circuit representation techniques, feature assignment, intended task and how components and connections are represented.
* Proposing three possible circuit representation techniques, listing the advantages and disadvantages while providing mathematical reasons for technique selection.
* The circuit representation includes different circuit element types and circuit connection types, without entangling the concept with numerical tuning or the empirical hyper-parameter optimization of ML.
* Proposing a unified (applied to all circuit elements) node feature assignment algorithm, irrespective of the number of connections present in the circuit or the circuit order, while combining the feature maps of the nodes to generate the feature map for the whole graph in a GNN.
* Proposing a dataset generation algorithm, that is easily applicable to the ML task or application, capturing circuit performance variables of interest in a standardized data format that can be used in ML problems.
* A proof of concept classifier problem applied to variable structure continuous circuits or switching circuits operating in CCM or DCM is presented. The target ML task covers a wide range of possible tasks or even a combination of tasks including regression, classification and clustering, whether supervised or unsupervised.
The proposed mapping approach enables a wide range of possible ML tasks or a combination of tasks including regression, classification, clustering, and synthesis of power electronic converter circuits.
## II Problem Description
Neural networks construct models from training data that has been processed to obtain features characterizing the built model. In the case of electrical circuits, this process does not have an established methodology or criteria. Problems with interfacing electric circuits to ML tools are highlighted in this section, while different solutions are proposed in the next section.
### _Circuit Structure Representation Problem_
The main problem faced when circuits are to be fed to a NN is the fixed size input layer, which has a defined dimension, invalidating the scalability requirement. The workaround proposed in [12] pre-processes a matrix consisting of multiple vectors representing circuit components, so that the input to the Convolutional Neural Network (CNN) is of a fixed size. Ultimately, this workaround adds computational overhead and increases training time and computational resources. Moreover, from a circuit standpoint, it is an incomplete circuit model because it has no explicit representation of the circuit structure, the dynamic behaviour, or circuit element interactions. In this paper we lay some foundations on how the physical properties of an electric circuit can be mapped to the ML space, as follows:
1. Circuit performance is independent of the order in which circuit elements are entered or listed, as long as the connectivity is kept invariant (isomorphic circuits). This makes the circuit representation Permutation Invariant.
2. Circuit connectivity (series or parallel connections) and circuit element values define the circuit performance.
3. Circuits may have any number of elements, with no upper bound.
4. For circuits of similar input/output response (e.g. dual circuits [13]), circuit type/connection will be the identifying factor in each case.
The realization of the last three definitions necessitates that the ML input layer be independent of the size of the input dataset. Hence, the representation becomes Scalable.
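As a toy illustration of the first definition (permutation invariance), consider representing a circuit as a set of typed component nodes and pooling their features with an order-independent sum; the encoding below is purely illustrative.

```python
import numpy as np

# The same RLC circuit entered in two different element orders yields
# identical pooled features when components are a set of typed nodes.
circuit_a = [("R", 10.0), ("L", 1e-3), ("C", 1e-6)]
circuit_b = [("C", 1e-6), ("R", 10.0), ("L", 1e-3)]

TYPE_CODE = {"R": [1, 0, 0], "L": [0, 1, 0], "C": [0, 0, 1]}

def pooled_feature(circuit):
    # sum pooling over node features is order-independent
    return np.sum([TYPE_CODE[t] + [v] for t, v in circuit], axis=0)

assert np.allclose(pooled_feature(circuit_a), pooled_feature(circuit_b))
```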
### _Dataset Expressiveness Problem_
Machine learning algorithms gain knowledge by iterative training. Datasets contain standardized/normalized data according to the nature of the ML task. Neither a generalized and confirmed methodology to handle circuit datasets nor a feature extraction/definition algorithm is defined that independently captures the circuit topology and the effects of component variation. More importantly, a clear measure of dataset expressiveness is absent. Given the circuits in Fig. 2, every class has an identical component count; however, their performance is different and depends on component values, especially at resonance, and the dataset should indicate that difference.
### _Neural Network Topology Problem_
The physical circuit topology and the influence of parameter variation on its output variables must be clearly expressed by the selected NN topology. As an example, the same circuit performance can be obtained by using dual components [13]. In [14] a model of similar purpose employs a CNN and takes placement images as its features. Arguably, Graph Neural Networks (GNN) are superior in capturing the netlist topology, which is a graph. Moreover, GNNs are more efficient in feature encoding. For instance, the shape of a transistor can be represented by two real numbers (width and height)
in a GNN while it requires an array of pixels for a CNN. The spatial features can be easily embraced in a GNN by taking the location coordinates as features, which motivates taking the GNN approach.
## III Review of Circuit Representation Techniques
This section reviews possible solutions to the problems presented in Section II, highlights the flow of work and the derivations made from the initial problem statements, and explains the available solutions through detailed comparisons. There have been many attempts to better represent circuits in the ML domain, which are thoroughly reviewed in this paper. The paper also highlights why the offered solutions are insufficient, ungeneralizable, and empirical: they require either fixed layouts, huge datasets, or extensive training and very complex models.
### _Circuit Representation Methodologies_
The main problem is to properly encode a circuit into a computer-interpretable form, which has been addressed by three modelling techniques, i.e., graph theory, the Y-matrix, and the bond graph [15, 16, 17]. A brief overview is given of every modelling technique, with an expanded illustration of the one used in this paper, while a comparison of the merits and disadvantages of the three modelling techniques is listed in Table II.
#### III-A1 Graph Theory Representation
Graph theory is a mathematical tool used to model complex systems in a simplified way. In the field of power electronics and converters, graph theory has proved to be a powerful tool for representing and analyzing the complex network of components and their interactions. Numerous studies in the literature use graph theory to represent power electronics and converters [18]. This approach has several advantages: graphs provide a concise and intuitive way to represent the components and their interactions, and graph algorithms can be used to analyze the system and identify faults. However, the approach also has limitations. Graphs are limited in their ability to represent complex systems with many components; as the number of nodes and edges increases, the graph becomes cluttered and difficult to interpret. In addition, graph matrices are usually very large and computationally intensive, making it difficult to obtain simulation results in real time, which can lead to inaccurate or unsatisfactory results [19]. Furthermore, due to the complex relationships between the different components in the power system, the graph model may not accurately represent the real-world system, leading to incorrect results [20]. Graph theory also cannot account for nonlinearity and non-smoothness: power electronic converters are nonlinear systems and their circuits may contain high-frequency harmonics, which are difficult to capture using graph theory [21]. Finally, when using graph theory to model a power electronic converter, the system needs to be linearized, which may neglect certain important nonlinear effects. This can lead to incorrect results and further limits the accuracy of the model [20].
#### III-A2 Y-Matrix Representation
The admittance matrix is a powerful tool used to represent power systems and power electronic converters. This method of representation has been used since its inception in the 1960s and continues to be an efficient way to model electrical systems. The admittance matrix is a complex quantity that describes the relationship between the voltage and the current in an electrical network. It consists of a matrix whose elements are the admittances of electrical components such as resistors, capacitors, and inductors [22]. This relationship between the voltage and the current provides a useful representation for solving electrical circuit problems. The admittance matrix has been used for many applications such as transient analysis and stability analysis; in particular, it has been used to study power systems [23]. In power system analysis, the admittance matrix can represent components such as transmission lines, transformers, and loads, which can be analyzed in both the frequency domain and the time domain [24]. The advantage of the admittance matrix is that it is computationally efficient and provides a concise representation of a wide range of power system components [25]. The admittance matrix has also been used for analyzing the stability of power electronic converter systems [26]. Power electronic converters are devices used to convert AC power to DC power or vice versa, and they generally consist of power switches, capacitors, and inductors [27]. Using the admittance matrix, the stability of a power electronic converter can be accurately analyzed in the frequency and time domains [28]. This method of representation is relatively old but provides accurate and efficient simulations of power electronic converters.
In a preliminary attempt of this work, different circuits were modelled using the Y-bus admittance matrix, where nodes represented buses, admittances served as node features, and edges represented whether a connection exists between nodes. Fig. 1 shows three- and four-element bus systems with their equivalent Y-bus admittance matrices and the corresponding features. However, this representation proved to be non-expressive because it is not uniformly scalable: a three-element and a four-element (admittance) system can both have the same number of nodes, which in this case is two, hence losing a very important feature of the graph notation. This is because the branch elements are lumped together into a single equivalent admittance, making it impossible to distinguish between different elements. Moreover, with this representation, a change in node feature values does not discriminate between a new element being added and a component value being changed.
#### III-A3 Bond Graph Representation
Bond graphs (BG) were proposed as a graphical language and systematic representation to overcome limitations of block diagram models [29]. Using BG, a circuit can be modeled as bonds covering all possible series and parallel connection permutations and combinations. Two key model elements were devised: the 0-junction, used to represent a parallel connection, and the 1-junction for series connections [29, 30]. In addition to electric circuits, this approach extends to mechanical and chemical models as well [31, 32, 33]. The BG representation captures the dynamics of a system by transforming (mapping) system components to their BG model counterparts. The bond graph analogies used to describe physical systems in the form of bonds and paths are listed in Table I.
Bond graphs, in opposition to transfer functions, which are behavioral models, belong to the class of structural models. Controllability and structural observability, which are structural properties of models, are applicable to BG [37]. Moreover, it was proven in [36] that BGs are structurally identifiable, which allows a unique set of parameters to be associated with a given input/output response. In other words, a bidirectional transformation governs circuit-to-graph and graph-to-circuit conversion; hence, graphs generated by ML algorithms can be translated back into a circuit if they match the structural identifiability criterion.
## IV Review of Neural Network Topologies
### _Classical Neural Network Topologies_
Linear regression, random forest (RF), and artificial neural networks (ANN) are classical models used for regression tasks. For classification tasks, the support vector machine (SVM), the K-Nearest-Neighbor (KNN) algorithm, and RF are used. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are extensively used in ML tasks. CNN models are composed of convolutional layers and other basic blocks such as non-linear activation functions and down-sampling pooling functions. CNNs are suitable for feature extraction on grid-structured data like 2-D images due to their ability to leverage statistical properties of the image as Euclidean data, such as stationarity and compositionality through local statistics, while RNNs are good at processing sequential data such as text or audio [38]. On the contrary, non-Euclidean data has no such familiar properties as a global parameterization, a common system of coordinates, a vector space structure, or shift-invariance. Operations like convolution that are taken for granted in the Euclidean case are not even well defined on non-Euclidean domains [39]. From that perspective, it is necessary to use an ML topology that can better represent non-Euclidean structures like electric circuits.
### _Graph Neural Networks_
GNNs are composed of definite function layers, but unlike other neural networks, the input is a graph. Acyclic, cyclic, directed, and undirected graphs can be processed by a GNN, as was stated in the first GNN model in [40]. Scalability and permutation invariance are unique properties of GNNs, allowing the input layer to be variable while graph node re-ordering does not affect the NN layer output, which satisfies the requirements for physical circuit representations. RNNs and GNNs are capable of directly processing graphs with labeled nodes and edges. An image classification task showed that GNNs outperform RNNs, both in terms of accuracy and error rate [41]. The convolution operation on graphs is defined by spectral and spatial operations. In [42], spectral-based GCNs were proposed, using spectral graph theory to develop a new variant of the graph convolutional operation. The complexity of mutual dependence between graph nodes was addressed using the non-recursive layers presented in [43]. Moreover, spatial GCNs have been developed based on the fact that spectral GCNs are difficult to extend to large-scale graphs [44]. This makes GNNs suitable for circuit representation.
#### IV-B1 Graph Definition
A graph \(G\) is defined as \(G=(V,E)\), where \(V=\{v_{1},\ldots,v_{N}\}\) is the set of vertices (nodes) and the set of edges \(E\subseteq V\times V\). Let \(N\) and \(M\) be the number of vertices and edges, respectively. Each graph can be represented by an adjacency matrix \(A\) of size \(N\times N\): \(A_{ij}=1\) if there
Fig. 1: Early attempt of converting circuit to graph by using Y-Matrix
is an edge from vertex \(v_{i}\) to vertex \(v_{j}\), and \(A_{ij}=0\) otherwise. Every edge has a set of edge features \(e\).
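As a concrete illustration of this definition (a minimal sketch independent of any particular graph library), the vertex set, edge set, and adjacency matrix of a small undirected graph can be built as follows:

```python
import numpy as np

N = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # set E, a subset of V x V

# Adjacency matrix A of size N x N: A[i, j] = 1 if edge (v_i, v_j) exists.
A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1                              # undirected graph

edge_features = {e: [1.0] for e in edges}    # feature set e per edge
print(A)
```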
## V Review of Circuit Representation and Design using GNN
In [45] it was shown that the most intuitive way to represent circuits, netlists, or layouts is a graph representation. It was also stated that graph neural networks (GNNs) are an opportunity to replace shallow methods or mathematical optimization techniques, and Table III shows the state-of-the-art circuit representation trials. Much research has utilized GNNs in circuit optimization/classification operations and in many applications such as transistor sizing, capacitor value optimization, and more. In [46],
[47], the model leverages reinforcement learning (RL) to learn the optimal policy for best parameter selection by rewarding the model for the best Figure of Merit (FOM), composed of several performance metrics. The circuit is embedded into a graph whose vertices are components and whose edges are wires, while a vector is generated for each transistor and the graph is passed to the RL agent. Finally, the RL agent processes each vertex in the graph, generates an action vector for each node, and then processes the graph with the action vectors with the purpose of maximizing the reward. [48] proposes a model that solves the forward and inverse problems, in which the model maps a given circuit to the corresponding transfer function and
\begin{table}
\begin{tabular}{|l|p{170.7pt}|} \hline Terminology & Description \\ \hline Strong Bond & A single bond that causes effort in the 0-junction and flow in the 1-junction. \\ \hline Passive Element & A one-port element that stores input power as potential energy (C-element), as kinetic energy (I-element), or transforms it into dissipative power (R-element). \\ \hline Causal BG & A BG is called causally completed, or causal, if the causal stroke known as causality is added on one end of each bond. \\ \hline Causal Path & A sequence of bonds with/without a transformer in between having causality at the same end of all bonds, or a sequence of bonds with a gyrator in between, with all the bonds on one side of the gyrator having the same causality while all the bonds on the other side have causality on the opposite end; that is, a gyrator switches the direction of efforts/flows on one of its sides [9]. A causal path can be backward or forward or both, depending upon the junction structure, elements, and causality. \\ \hline Branch & A branch is a series of junctions having a parent-child relationship. Two different sequences of junctions can be connected with a common bond or two-port element; thus, one of the junction sequences acts as parent branch and the other one as child. \\ \hline Causal Loop & A causal loop is a closed causal path with bonds (of the child branch) either connected to a similar junction or to two different junctions of the parent branch. \\ \hline \end{tabular}
\end{table} TABLE I: Bond graph terminology used to describe physical systems

\begin{table}
\end{table} TABLE II: Comparison between different circuit representation techniques
vice versa. For the inverse problem, the model utilizes gradient descent to optimize the circuit parameters to produce a target transfer function, leveraging the differentiable nature of the neural network and applying gradient-descent methods to optimize its input parameters. However, the neural network is trained for a particular circuit topology and hence cannot be used for general circuit representation, in addition to lacking a switching-circuit representation. Moreover, [49] proposed a technique for combining the feature maps of the nodes to generate the feature map for the whole graph in a GNN, by propagating information from node to node representing input and output instead of a pooling operation; the paper represents graphs as concatenations of the feature maps of the input and output nodes.

In resonator circuit applications, [49] introduced a model that learns to simulate the electromagnetic properties of distributed circuits. Circuits were mapped on a system-level basis, such that each node refers to a resonator and each edge refers to the interaction (i.e., the electromagnetic coupling) between a pair of resonators. This representation does not incorporate the resonator's internal structure, nor the case where the system has different resonators with different characteristics. By propagating information from node to node, while representing circuits as a concatenation of input and output node features instead of a pooling operation, a regression task is utilized to obtain predictions about circuit performance. On the other hand, feature concatenation is not the correct technique to represent a circuit: it is merely a numerical representation of circuit inputs and outputs that is tuned by minimizing the loss function.

Attempts have been made to include different circuit topologies and obtain predictions, as in [50], where two circuit types were included in the study: ladder circuits and two-stage operational amplifier circuits, with 20k training data instances of resistor ladders with 2 to 10 branches with equal distribution weight. The model is based on the DeepGEN architecture and was able to make predictions on ladder circuits with a higher number of branches. However, the model's ability to generalize and its applicability to other circuit topologies and types remain questionable. Moreover, no clue was given on how to distinguish the connection type and its effect on circuit performance. The representation was also limited to transistors, without the inclusion of other circuit parameters or elements (transistors/resistors/voltage sources, etc.), and no guidelines/rules were given on how to model circuit element properties like frequency, phase shift, etc. One major drawback of this representation is that elements with multiple terminals, like transistors, are represented as four connected nodes, which can cause unnecessary excessive computation.

In [51], heterogeneous GNNs were utilized to construct a graph based on a circuit schematic, where each device (transistor, resistor, capacitor, etc.) can be mapped into a different node and edge type within the graph. The model's target is to predict net capacitance, which was achieved by mapping connections as nodes with the corresponding node information (i.e., net capacitance), preventing the information loss that would occur if nets were represented as graph edges.
To complete the structure, circuits were represented as multi-graphs, where two edges with opposing directions are mapped between every net node and the appropriate device nodes corresponding to terminal connections within the schematic. Despite leveraging heterogeneous GNNs to differentiate between circuit element nodes and netlist nodes, this representation works around the circuit connection-type problem (series or parallel) at the netlist nodes by assigning four types of connection signal (net to transistor gate, transistor gate to net, net to transistor drain, and transistor drain to net), resulting in an over-complicated representation that requires substantially more training time. Physically, connections in series share the same current and connections in parallel share the same voltage, which is not shown in multi-graph heterogeneous graphs. In the area of analog circuit layout automation, [52] showed a GNN-based model that can identify symmetry constraints in analog circuits and can be extended to other pairwise constraints. However, its graph representation of circuits is simplistic, as it treats device instances and device pins as graph nodes, while edges represent connections between the pin nodes of devices. This simplistic representation creates a problem of isomorphic graphs, which was mitigated by adding an additional two-dimensional vector to the node features to distinguish whether a node is a device or a pin, which in turn increases the computational cost of training. This was followed by [53], in which circuits were represented as heterogeneous multi-graphs for the purpose of modelling active and passive elements of analog and mixed-signal circuits. In this representation, four types of edges (to transistor (drain), to transistor (source), to transistor (gate), to passive device) are used to represent connections between device/circuit elements, which are represented as nodes. Circuit representation in previous research can be summarized as:
* All methods for circuit-to-graph representation are arbitrary, without any mathematical/scientific basis.
* These methods disregard mapping of the connection type, which is compensated for by a significant increase in the number of hidden layers, the number of neurons, training for many epochs, etc.
* Another implication of disregarding the connection type in previous methods is the limited scope of each methodology: previous methods cannot be applied to any circuit except the one they were intended for.
* All methodologies are deficient in modelling common circuit properties like frequency, phase shift, etc.
* Most methodologies mention only the elements of interest (transistors and capacitors) but ignore other circuit parameters like inductance, resistance, voltage sources, current sources, transformers, etc.
* Some methodologies try to simulate the connection type by adding component terminals as nodes and defining the circuit as a multi-graph heterogeneous graph. Despite the added complexity and extensive computational cost of heterogeneous graphs, this representation suffers a major disadvantage in that different circuit topologies can have the same graph representations (isomorphic graphs). This problem is usually addressed by defining another node feature that indicates whether a node is a pin or a device, at the expense of added computational cost.
* Some representations omit voltage and current source nodes to focus on the circuit structure. However, this is an incorrect representation, since the source location can change the circuit behavior.
\begin{table}
\end{table} TABLE III: State-of-the-art circuit representation trials. (Columns: node features, edge features, circuit representation, task, and network type. Reported node features include DC operating points, one-hot encodings of element/terminal/device type, transistor parameters, and internal capacitances [46]; gate logic level, controllability, and observability [54]; subcircuit coordinates, center position of the subcircuit, and angular position of the slit [48]; operation type and bitwidth [55]; gate poly length, number of fingers, number of copies, and values of resistors and capacitors [50]; one-hot encodings [52]. Circuits are represented with components as nodes and connections (series/parallel) as edges, with network types such as RNN+RL.)
* Some methodologies include a one-hot encoding of the device position in the circuit along with the device type, which inherently means the node feature vector size per node grows linearly with the circuit size.
## VI Proposed Converter Circuits Modeling for Machine Learning Applications
In this section, the proposed formulation of a graph representation of continuous or switching circuits that allows the application of ML algorithms to circuit design and control is presented. This formulation is completed in several steps:
1. Bond graph modeling of circuit topology.
2. Generating standardized datasets that capture circuit topology, input and output circuit variables and operating conditions.
3. Defining a scalable and permutation invariant NN structure.
### _Graph Creation Using Bond Graph Modeling_
This section explains how to model an electric circuit as a graph for further processing.
#### VI-A1 Continuous Circuit Representation as Bond Graph
An electrical circuit consists of five main components: resistors, inductors, capacitors, voltage sources, and current sources. The generalized BG elements and their mathematical relations can describe any continuous circuit and support the analysis of the dynamics of electrical systems. A 0-junction is assigned for each distinct voltage node in the circuit, where, according to Kirchhoff's current law (KCL), the algebraic sum of all electrical currents entering and leaving a node is equal to zero. Additionally, a 1-junction is assigned for each element in the circuit according to Kirchhoff's voltage law (KVL), by which the algebraic sum of all voltage drops around a closed circuit is equal to zero, taking into consideration the relative voltage drops of each element located between two 0-junctions, since a 1-junction represents an effort summation point. Fig. 2 shows the bond graph models of seven classes of resonant circuits of increasing order, and Table IV shows the equivalent notations used in BGs with their circuit counterparts.
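As an illustration of this assignment (a schematic sketch with made-up identifiers and values; not a full bond graph engine), a series R-L-C circuit driven by a voltage source can be encoded with one 0-junction per distinct voltage node and one 1-junction per element between two 0-junctions:

```python
# Bond graph skeleton for a series R-L-C circuit driven by a source Se.
# Following the text: one 0-junction per distinct voltage node (KCL),
# one 1-junction per element located between two 0-junctions (KVL).
nodes = ["0_a", "0_b", "0_c", "0_gnd"]                        # voltage nodes
elements = {                                                  # element 1-junctions
    "1_Se": ("0_gnd", "0_a",  {"type": "Se", "value": 10.0}),
    "1_R":  ("0_a",   "0_b",  {"type": "R",  "value": 2.0}),
    "1_L":  ("0_b",   "0_c",  {"type": "I",  "value": 1e-3}),
    "1_C":  ("0_c",   "0_gnd", {"type": "C", "value": 1e-6}),
}
# Bonds connect each element 1-junction to its two adjacent 0-junctions.
bonds = [(j, n) for j, (n1, n2, _) in elements.items() for n in (n1, n2)]
```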
#### VI-A2 Switching Circuit Representation as Bond Graph
Studies in [57, 58] showed that switches (unidirectional or bidirectional) can be represented in BG through the concept of Switched Power Junctions (SPJ) and activated bonds; hence, BG can be used to model switching circuits. Other switch modelling techniques include the Modulated Transformer (MTF) with a Boolean modulation index m and a resistive element R, or the
\begin{table}
\begin{tabular}{|c|c|} \hline Circuit Element & Bondgraph Equivalent Element \\ \hline Voltage Source (V) & Effort Source (Se) \\ Current Source (I) & Flow Source (Sf) \\ Resistance (R) & Resistance (R) \\ Inductance (L) & Inertance (I) \\ Capacitance (C) & Compliance (C) \\ \hline \end{tabular}
\end{table} TABLE IV: Circuit to bondgraph equivalent elements
Fig. 2: Converter circuits to bond graphs: (a) two-element circuits, (b) three-element circuits, (c) four-element circuits
Ideal Switch Element method, where, depending on the state of the switch element and the junction to which it is connected, an energetic connection is established or broken [59, 60]. A comparative study in [61] shows that the most convenient method is the SPJ modelling method, as it does not lead to causality conflicts and leads to a unified model, like the Modulated Transformer method, but does not require additional elements (R) to eliminate algebraic loops. In this paper, the SPJ method is used to represent switches. A converter topology and its function are defined by the location of the energy storage/resonance elements (L and C) and the type and order of the switching cell. A Single Pole Double Throw switching cell can be simplified into two Single Pole Single Throw (SPST) switches. Every SPST is modelled as a 1s-junction with two flow decider bonds. For the sake of completeness, the physical interpretation of current interruption when the SPST switch is OFF is represented by modelling one flow decider bond as a zero-valued current source (Sf) while the other flow decider bond is connected to the system. The current source has a zero value, indicating that the current falls to zero when the switch is OFF. \(D\) and \(\bar{D}\) are the control signals that control the junction flows; this is uniformly analogous to the physical concept of the duty cycle (D) in converter circuits. Based on [57, 58], combinations of SPST switches can be modelled using (0s and 1s) junctions. Fig. 3 shows a switching cell represented as two SPST switches and its equivalent bond graph representation, with the flow decider bonds and the zero-valued flow sources. Additionally, switched power junctions are a generalisation of the existing zero- and one-junction concepts of the bond graph element set [57]; the traditional zero- and one-junctions are thus special cases of the more general switched power zero- and one-junctions. When a converter operates in DCM, the inductor current reaches zero before the switching cycle is over. This paper utilizes the virtual switch concept of [62] to represent converter operation in DCM. As the inductor current reaches zero, both switches \(S_{1}\) and \(S_{2}\) are in the OFF state, and the virtual switch closes only when both switches become OFF. \(D_{1}\), \(D_{2}\), and \(D_{3}\) are mutually exclusive control signals that control the switch operation. This representation is based on the fact that the inductor current reaches zero in DCM: the virtual switch shorts the inductor, ensuring no current passes through, while connecting certain circuit nodes to maintain the voltage balance equations during the DCM time period \(D_{3}\). This representation is compatible with the predefined physical property of Scalability.
### _Circuits to Graph Representation_
The second step is to convert the BG formulation into a graph representation containing all gathered and simulated information, including circuit types, classes, nodes, edges, and node and edge features. Fig. 4 shows a continuous circuit represented as a graph following the BG formulation, with minor changes for switching circuits. Nodes are used to represent circuit elements as well as zero- and one-junctions. Edges describe the circuit connections between nodes. Node and edge features describe the operating conditions of the circuit. In continuous circuits, edge features are set to one, describing a 100% connection between the designated nodes; the same notation is used for switching circuits. Node features describe the element type as well as the element value placed in the circuit. Some switching circuit properties require special consideration, explained as follows:
_Duty Cycle Representation:_ The duty cycle is a property of every switching circuit and physically represents the percentage of the switching cycle during which the connection exists. The duty cycle is mapped as a feature of the edges that connect to switching nodes (0s and 1s nodes).
_Switching Frequency Representation:_ The one/zero switching junctions representing the switching cell are connected to a zero-valued current source, interrupting the switch current with a frequency equal to the switching frequency. In other words, the zero-valued current source works as a control source for every switch. Based on the physical properties of the control source, including the switching frequency as a property of the BG control source aligns with the physical properties of
Fig. 4: Circuit with equivalent BG formulation
Fig. 3: Switching cell and equivalent BG formulation
the circuit.
_Switching Pattern Representation:_
A generalized switching pattern representation is proposed, allowing all types of switching patterns and duty cycle variations. This adds more flexibility to represent converters that operate differently when subjected to different switching patterns, e.g., resonant converters operating with different control modes. The switching pattern representation is expressed in the node features of the control source (the flow source in the BG representation). Fig. 6 shows two cases of switching patterns. In the first case, the switching is aligned so that the first switching operation complements the second one; the current source node features should indicate the same phase-shift reference, which by default is set to zero. In the second case, where the switch operations are not aligned at either turn-on or turn-off, a phase shift \(\varphi\) indicates that delay and is set on the control source of the delayed switch. Combining the phase-shift information with the duty cycle information allows a complete representation of the switching patterns of the switch operations. Table V summarizes the switching pattern modes and their node feature representation.
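For illustration, the two cases of Fig. 6 could be encoded along the following lines (a sketch; the node/edge names and numeric values are hypothetical):

```python
# Hypothetical encoding of the two switching-pattern cases of Fig. 6.
# The duty cycle lives on edges touching switching junctions (0s/1s nodes);
# the phase shift lives on the control (flow-source) node features.
D, phi = 0.4, 30.0   # example duty cycle and phase shift (degrees)

# Case 1: complementary, aligned switches -> both phase references at 0.
case1 = {
    "node_features": {"Sf_1": {"phase_shift": 0.0}, "Sf_2": {"phase_shift": 0.0}},
    "edge_features": {("1s_1", "net"): {"duty": D}, ("1s_2", "net"): {"duty": 1 - D}},
}

# Case 2: the second switch is delayed by phi with respect to the first.
case2 = {
    "node_features": {"Sf_1": {"phase_shift": 0.0}, "Sf_2": {"phase_shift": phi}},
    "edge_features": {("1s_1", "net"): {"duty": D}, ("1s_2", "net"): {"duty": D}},
}
```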
### _Dataset Generation_
This section presents the generation of a dataset covering different circuit topologies, circuit elements, and circuit orders, and highlights a proposed technique for storing the recorded data in a general format suitable for any ML task. Fig. 7 shows a paradigm for this dataset generation step, where a circuit netlist is converted to its equivalent bond graph model. Since BG is a graph notation for modeling circuits, it inherently has all graph characteristics, fulfilling all requirements of graph definitions such as the number of nodes, node types, edge weights, and the adjacency matrix. Finally, the BGs are passed to the feature assignment algorithm, where features are assigned to each node of the graph.
#### VI-C1 Feature Assignment
Node features are defined based on the circuit element type and its behavior in the circuit using the proposed algorithm. Circuit simulations are used to obtain features describing circuit performance, such as node voltages and loop currents. Simulations are run for multiple instances at multiple operating points for all circuits, including different component values and circuit conditions. Output values are normalized to a common base to avoid sparsity of the feature vector, which is referred to in Table VI as the "Normalized Values Vector". The proposed feature assignment algorithm is expandable and can include many circuit features if they are desired in the dataset. Therefore, the normalized values vector can span multiple columns listing not only the component's value but also different component properties, i.e., the source frequency in continuous circuits or the phase shift in switching circuits. One main function of the feature extraction algorithm is to define the circuit element types, which introduces the concept of the Element ID. The Element ID assigns a binary code based on the circuit element type by utilizing one-hot encoding [63]. The second main function of the feature assignment algorithm is to concatenate the assigned one-hot encoded vector with the normalized values vector, forming the feature matrix of the whole graph with dimension \(N\times d_{in}\), where \(N\) is the number of nodes and \(d_{in}\) is the dimension of the feature vector.
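A minimal sketch of this feature-assignment step (the element-type list, the normalization base, and the \(\frac{1}{C}\) convention for capacitive elements concluded in the feature-exploration experiments below are combined here purely for illustration) could look as follows:

```python
import numpy as np

ELEMENT_TYPES = ["Se", "Sf", "R", "I", "C", "0", "1"]   # one-hot Element IDs

def element_id(kind):
    # One-hot binary code for the circuit element type.
    vec = np.zeros(len(ELEMENT_TYPES))
    vec[ELEMENT_TYPES.index(kind)] = 1.0
    return vec

def node_feature(kind, value, base):
    # Capacitive elements are stored as 1/C (see the experiments below).
    v = 1.0 / value if kind == "C" else value
    return np.concatenate([element_id(kind), [v / base]])  # ID ++ normalized value

# Feature matrix of the whole graph: N x d_in.
graph = [("Se", 10.0), ("R", 2.0), ("I", 1e-3), ("C", 1e-6)]
base = max(abs(v) for _, v in graph)                     # illustrative base
X = np.stack([node_feature(k, v, base) for k, v in graph])
print(X.shape)   # (4, 8)
```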
#### VI-C2 Dataset Format
Extracted features and other graph information, such as the types and number of nodes, the adjacency matrix, and the edge features, are saved in a unique graph dataframe format. This format yields a library-independent graph dataset of circuits, which allows using the graph representation in any ML library. Since there are many graph ML libraries, such as PyTorch-Geometric [64], DGL [65], and Keras [66], the final step of the algorithm is to process the dataset into a compatible format. The PyTorch-Geometric GNN library was chosen to build the GNN structure.
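For instance, one stored record can be converted into a PyTorch-Geometric `Data` object along these lines (a sketch; the field names of the stored record are our assumption):

```python
import torch
from torch_geometric.data import Data

record = {                                    # one entry of the saved graph dataframe
    "x": [[1.0, 0.0], [0.0, 1.0]],            # node features (N x d_in)
    "edge_index": [[0, 1], [1, 0]],           # row 0: source nodes, row 1: targets
    "edge_attr": [[1.0], [1.0]],              # edge features set to one
    "y": 3,                                   # circuit class label
}

data = Data(
    x=torch.tensor(record["x"], dtype=torch.float),
    edge_index=torch.tensor(record["edge_index"], dtype=torch.long),
    edge_attr=torch.tensor(record["edge_attr"], dtype=torch.float),
    y=torch.tensor([record["y"]]),
)
```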
### _Different Circuit Examples Using Proposed Methodology_
This section shows examples from different areas in which the proposed methodology is applicable to ML applications.
#### VI-D1 Example 1: Power System
The power systems (PS) area has seen extensive research in which ML methodologies have been applied. Recently, GNNs have been in the spotlight for applications in PS, and
\begin{table}
\begin{tabular}{|l|l|} \hline & Representation \\ \hline Case 1 & Phase shift is set to \(\varphi=0\). Edge features represent the duty cycle. Switches which are controlled dependently are represented with the same phase shift. \\ \hline Case 2 & \(\varphi\) is the phase shift. The delayed switch includes the phase shift as a node feature. \\ \hline \end{tabular}
\end{table} TABLE V: General representation of all possible switching patterns as node features
many publications utilizing GNNs in power systems have emerged. A comprehensive overview of GNN applications such as fault scenario generation, time series prediction, power flow calculation, and data generation is given in [67]. In [68, 69] the provided network learns to solve the load flow problem on random power grids whose sizes range from 10 to 110 buses. A GNN-based method to identify the topology of a PS network is proposed in [70], avoiding the mistakes of traditional knowledge graphs in the case of errors or informational conflicts in the data. All previously mentioned research empirically transforms the PS network into a graph without following a circuit-laws-consistent formulation. Fig. 9(a) shows a PS network example and its graph equivalent with node features, following the proposed methodology.
#### VI-D2 Example 2: Two-Stage Amplifier
Fig. 9(b) shows a two-stage amplifier that was used in [10] as a circuit layout. In that work, the circuit was arbitrarily transformed into a graph by representing every transistor, resistor, and capacitor as nodes connected to each other by edges, disregarding the original connections and the physical/electrical consequences of such connections. The figure also shows the proposed graph representation, which includes component and connection nodes, in addition to node features for each node.
### _Graph Convolution Network_
GNNs have many variants, like GCN [71], GraphSage [72], Gated Convolution [73], Transformer convolution [74], and many more, but the most common is the GCN. The GCN was chosen for the following reasons:
* Unique ability to extract latent information from graph data compared to other GNN structures as reported in [75].
* Most practical circuit GNN based applications in Table III utilize GCN as their main network model or a part of the model, hence the results from this study can be fairly compared to previous ones.
Fig. 5: Buck, boost and Buck-Boost converters and their equivalent BondGraphs in CCM
Fig. 6: Switching pattern representation as features
* Simple construction and implementation, which can be beneficial if implemented as a digital twin on a microcontroller [76].
The selection of the GCN as the engine of the proposed GNN allowed better focus on other hyperparameters and eventually led to a better circuit representation. GCNs obtain updated features by inspecting neighboring nodes and aggregating the current node information with that of its neighbours through a message-passing process, then updating the node state. Eventually, all the nodes in the graph obtain knowledge about themselves and their surrounding neighbors. Fig. 10 shows three-layer message passing applied to a single node (node of type 1) of a class 1 circuit. A deeper level of neighbor exploration and better awareness of the node's own position can be gained by adding an additional GCN layer, at the expense of additional computational effort. A three-layer GCN network is utilized in this paper as a mid point between exploration depth and computational efficiency. Node features are repetitively aggregated through the GCN layers via multiple message-passing layers. At the end of this process, the final node embeddings contain self and all neighbor information.
Mathematically, the initial embedding function is represented by equation (1). The aggregation stage has multiple Graph Convolution Network (GCN) layers that perform multiple message-passing hops to collect information about neighbouring nodes and keep updating the latent vector of dimension d, as mathematically represented in equation (2).
\[X^{(0)}=E(X) \tag{1}\]
Fig. 8: Equivalent graph with node and edge features in: a) LCC Continuous circuits, b) Buck converter switching circuit
Fig. 7: From circuit to ML Block diagram
\[X^{(l+1)}=\sigma(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}X^{(l)}\Theta^{(l)}) \tag{2}\]
where \(\Theta^{(l)}\) is the weight matrix of the l-th neural network layer, \(\sigma\) is a non-linear activation function like the ReLU, \(\hat{A}=A+I\), where \(I\) is the identity matrix, and \(\hat{D}\) is the diagonal node degree matrix of \(\hat{A}\). This allows the GCN to scale well, because the number of parameters in the model is not tied to the size of the graph.
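The propagation rule of equation (2) can be written out directly; the following minimal NumPy sketch (random weights and a random graph, for illustration only) applies one such layer:

```python
import numpy as np

def gcn_layer(X, A, Theta):
    A_hat = A + np.eye(len(A))                     # add self-loops: A_hat = A + I
    deg = A_hat.sum(axis=1)                        # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # D_hat^(-1/2)
    return np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Theta)

N, F = 5, 8
X = np.random.randn(N, F)                          # node features X^(l)
A = (np.random.rand(N, N) > 0.7).astype(float)
A = np.triu(A, 1); A = A + A.T                     # random undirected graph
Theta = np.random.randn(F, F)                      # layer weights Theta^(l)
X_next = gcn_layer(X, A, Theta)                    # X^(l+1), shape (N, F)
```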
### _GCN Time Complexity and Graph Scalability Limit_
Generally speaking, there is no limitation on the size of the circuit fed to the ML model (theoretically, the circuit order can be infinite). However, the computation time and RAM consumption are the main concerns when feeding circuit graphs to the model; these depend mainly on how the model was built, the libraries used to build it (PyTorch, Keras, TensorFlow, etc.), the layer depth, the operating system used, the model architecture, the output size, etc. From a GNN designer's perspective, a circuit graph for a GNN input can be represented in two ways:
* sparse: As a list of nodes and a list of edge indices
* dense: As a list of nodes and an adjacency matrix
For any graph G with N vertices of feature vector length F and E edges, the sparse version will operate on the nodes of size \(N\times F\) and a list of edge indices of size \(2\times E\). The dense representation, in contrast, will require an adjacency matrix of size \(N\times N\), with node degree d.
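The two storage options can be compared directly on a toy graph (an illustrative sketch):

```python
import numpy as np

N, F = 4, 3
edges = [(0, 1), (1, 2), (2, 3)]                   # E undirected edges

# Dense: node matrix (N x F) plus an N x N adjacency matrix.
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Sparse: node matrix (N x F) plus a 2 x 2E edge-index list (both directions).
edge_index = np.array([[i for i, _ in edges] + [j for _, j in edges],
                       [j for _, j in edges] + [i for i, _ in edges]])

print(A.size, edge_index.size)                     # N*N = 16 vs 2*(2E) = 12
```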
The choice of a dense or sparse representation not only affects the memory usage but also the calculation method. Dense and sparse graph tensors require graph convolutions that operate on dense or sparse inputs (or, as seen in some implementations, convert between sparse and dense inside the network layer). Sparse graph tensors rely on sparse convolutions that use sparse operations. Generally, dense computations are more expensive but faster than sparse ones, because sparse graphs require processing operations in the shape of a list. For simplicity, we assume the node features at every layer are of size \(F\). As such, \(\Theta^{(l)}\) is an \(F\times F\) matrix. The time complexity of the convolution operation can be decomposed as:
Fig. 10: Rooted subtree showing message passing applied to the node of type 1 in the circuit of class 1 in Fig. 2 with three GCN layers
Fig. 9: Examples of proposed concept in different applications: a) Power system example b) 65 nm 2 stage amplifier example. [10]
* Equation (1): a dense matrix multiplication between matrices of size \(N\times F_{l}\) and \(F_{l}\times F_{l+1}\). We assume for all \(l\), \(F_{l}=F_{l+1}=F\). Therefore, this is \(O(NF^{2})\).
* Equation (2): a multiplication between matrices of size \(N\times N\) and \(N\times F\), yielding \(O(N^{2}F)\) time complexity when dense. Exploiting sparsity, the neighborhood aggregation for each node requires \(O(dF)\) work, for a total of \(O(NdF)=O(EF)\). \(\sigma(\cdot)\) is an element-wise activation function, so its cost is \(O(NF)\).

Over L layers, this results in a computational time complexity of \(O(LNF^{2}+LEF)\).
### _Optimal Node And Edge Features Exploration_
To determine the optimal representation of circuit component values, twelve experiments were performed on the continuous circuits of Fig. 2, and the results are shown in Fig. 11 - Fig. 14. The dataset contained 6000 graphs representing the seven circuit types, 70% of which were used for training. The data is shuffled before being applied to the model, and there is no overlap between the training and testing data. The cross-entropy loss function is used to train the model with the Adam optimizer [77] at a learning rate of 0.02. The twelve experiments were conducted in order to draw conclusions and a paradigm of how the node and edge features should represent the circuit parameters. The experiments were divided into four sets, each containing three experiments and a conclusion based on their observations. The conditions/modifications applied to the dataset when fed to the classifier are listed on the left of each set. The purpose of these experiments is to identify the effect of different component representations and how they affect the ML task. The figures also show the evolution of the classification problem from a three-class to a seven-class problem, along with the representation of physical circuit elements as features.
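The training configuration described above corresponds to a loop of roughly the following shape (a sketch assuming a `model` returning log-probabilities, as in the classifier sketched later, and a PyTorch-Geometric `DataLoader` named `train_loader`; the epoch count is an arbitrary placeholder):

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=0.02)   # Adam, lr = 0.02 [77]

for epoch in range(200):
    model.train()
    for batch in train_loader:                  # shuffled 70% training split
        optimizer.zero_grad()
        out = model(batch.x, batch.edge_index, batch.batch)
        loss = F.nll_loss(out, batch.y)         # cross entropy on softmax outputs
        loss.backward()
        optimizer.step()
```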
The purpose of the upcoming experiments is to explore the features with the highest impact on task accuracy. However, since features are hyper-parameters, some results obtained from edge features may eventually update how the node features are expressed. In the first set of experiments, shown in Fig. 11, edge features are explored and the problem is limited to a three-class classifier; edge weights are separately tested as the normalized frequency (\(\frac{\text{circuit frequency}}{\text{resonant frequency}}\)), as ones, and as the circuit frequency. This experiment concludes that the highest accuracy is achieved when the edge weights are set to the normalized frequency or to ones. As the frequency can be included as an edge feature, it can then be tested whether capacitive elements can be expressed as (\(\frac{1}{\text{normalized frequency}}\)), which is the purpose of the second experiment set.
Fig. 12 shows the second set of experiments, where the edge weights were set to the normalized frequency, while nodes representing capacitive elements were given (\(\frac{1}{\text{normalized frequency}}\)) as the edge feature. Further experiments tested whether negative component values would increase the accuracy, or whether the capacitive components should be set to \(\frac{1}{C}\). These experiments reflect circuit analysis, since \(X_{C}=\frac{-j}{2\pi fC}\). The results show that negative capacitive element values and edge features of (\(\frac{1}{\text{normalized frequency}}\)) have a negative effect on the accuracy of the classifier, while setting capacitive elements to the inverted value (\(\frac{1}{C}\)) boosted the training accuracy significantly, to 91.12%. It is therefore imperative to modify the node feature expression for capacitive elements, and the circuit graph dataset was modified to include this change in the third experiment set. Also, from the first experiment set, edge features set to one had the highest accuracy score. The next experiment aims to explore whether the concluded node and edge feature modifications can enhance the accuracy.
In the third experiment set, the highest accuracy of 100% was achieved in training and testing when the edge weights were set to ones and capacitive elements had node feature values of (\(\frac{1}{C}\)). The first experiment tested whether the edge feature can be used as a scaling factor in place of the node feature, the second tested whether the edge weights can be set to one, and the third tested whether inductive elements can be set to (\(\frac{1}{L}\)). From the results shown in Fig. 13, it can be concluded that utilizing edge features for scaling deteriorates the classification accuracy, as does representing inductive elements as (\(\frac{1}{L}\)). The optimal edge feature is therefore defined to be one, without embedding any circuit characteristics or parameters.
In the last set of experiments, in Fig. 14, all outcomes and recommendations concluded from the previous experiments were taken into consideration, while the difficulty of the classification problem was increased to four, five, and seven classes to further verify the optimal representation. In the four-class problem, the classifier scored a training accuracy of 92.3%; in the five-class problem, the training accuracy was 95.92%; and the seven-class problem resulted in a training accuracy of 97.37%. The variation in accuracy while using the same feature representation is due to the change in the number of circuits in the dataset. The result is a graph of a circuit with connection nodes and element nodes, each having its own features, with nodes connected by edges having edge features of one.
## VII Case Study
As a proof of concept, the proposed approach is applied to map two types of topologies, i) continuous circuits and ii) switching circuits, to an ML-compatible representation. Seven resonant circuit topologies of circuit orders ranging from second to fourth order, as shown in Fig. 2, and three switching circuit topologies in CCM and DCM, shown in Fig. 5, are fed to a classifier to show the applicability of the proposed methodology to any ML task. Following the sequence illustrated in Fig. 7 and the same steps presented in this paper and in [1] and [2], the converters are converted to graph form, and computer simulations are used to assign normalized node features to the generated graphs according to Section VI-C1. Steady-state simulations are run for multiple instances at multiple operating points for all circuits, including different component values and circuit conditions, and the circuit behavior is recorded and stored. The circuit simulation sampling rate is a measure of the accuracy of the circuit simulations in the continuous circuit classifier case. In this case study, a dataset of 6000 graphs with 6000 steady-state simulations has been normalized to a common base. This helps to ensure that each feature vector is consistent and not overly sparse. The normalized values vector is then used to provide a representation of the
Fig. 11: First experiment set. Edge weights are set as: a) Frequency, b) value of one, c) Normalized frequency.
Fig. 12: Second experiment set. a) No change in node features, b) Capacitive element representation is \(\frac{1}{C}\), c) Capacitive element representation is \(\frac{1}{C^{\prime}}\).
Fig. 13: Third experiment set. Edge weights are set as: a) Scaling factor, b) value of one, c) value of one but different inductive element representation.
circuit simulation data that is accurate and reliable. To ensure that the sampling rate is accurate, the graphs are divided into a number of subsets based on circuit class, and each subset is simulated separately. Each of these subsets is tested for accuracy, and any discrepancies are noted and addressed. After all the subsets have been tested and corrected, the overall sampling rate of the circuit simulations can be determined. Once the sampling rate has been determined, the normalized values vector is concatenated with the element ID to complete the feature vector. Fig. 15 shows a block diagram of the classifier structure. Three GCN layers are used to get information about 3\({}^{\text{rd}}\)-level neighbors. The classifier output layer computes a probability score for the class of each topology.
#### VII-A1 Classifier Problem Formulation
Circuit topologies in graph form (G) are fed to the classifier. Each circuit graph has a number of nodes (N) along with their corresponding node features (X), each of dimension (\(d_{in}\)). The adjacency matrix (A) defines the connections between the nodes. The classifier outputs the probability (Y) of a converter belonging to a certain class (C). Sub-GCN networks are embedded in each GCN layer, allowing aggregation between the feature vectors of neighboring nodes. The hyperbolic tangent ("tanh") is used as the non-linear activation function; while slower than the Rectified Linear Unit (ReLU) activation function, it helps avoid the dying-ReLU problem caused by the very different value ranges of the inputs and outputs [78]. The global mean readout (GM-Readout) layer returns graph-level outputs by averaging the GCN-processed node features. A Fully Connected (FC) linear layer acts as a score function for each circuit, while a (Softmax) output layer is used to calculate the probability, in the range [0-1], of each circuit belonging to a certain class. The Softmax function \(\sigma(\cdot)\) is stated in equation (10). The classifier uses the training dataset and updates the weights of the GCN layers and linear layers by minimizing the cross-entropy loss function, shown in equation (11), where:
* \(M\) - the number of classes
* \(\log\) - the natural log
* \(y_{O,c}\) - binary indicator (0 or 1) of whether class label \(c\) is the correct classification for observation \(O\)
* \(p_{O,c}\) - predicted probability that observation \(O\) is of class \(c\)
A mathematical formulation of the transformations of the designed classifier is stated as:
\[Y\ =\textit{classifier}(X,A) \tag{3}\]
Where
Fig. 14: Fourth experiment set. a) Four-class, b) Five-class, c) Seven-class classification problem.
Fig. 15: Circuit classifier structure [1]
\[X\in\mathrm{R}^{N\times d_{in}} \tag{4}\]
\[Y\in\mathrm{R}^{C\times 1} \tag{5}\]
\[GCN^{(k)}:\mathrm{R}^{N\times d_{in}}\rightarrow\mathrm{R}^{N\times d},\quad k\in\{0,1,\ldots,K-1\} \tag{6}\]
\[GM\text{-}Readout:\mathrm{R}^{N\times d}\rightarrow\mathrm{R}^{1\times d} \tag{7}\]
\[FC:\mathrm{R}^{1\times d}\rightarrow\mathrm{R}^{1\times C} \tag{8}\]
\[Softmax:\mathrm{R}^{1\times C}\rightarrow\mathrm{R}^{1\times C} \tag{9}\]
where
\[\sigma(z_{i})=\frac{e^{z_{i}}}{\sum_{j=1}^{K}e^{z_{j}}}\quad\text{for }i=1,2,\ldots,K \tag{10}\]
\[CrossEntropy=-\sum_{c=1}^{M}y_{O,c}\log(p_{O,c}) \tag{11}\]
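Putting equations (3)-(11) together, the classifier structure can be sketched in PyTorch-Geometric as follows (the layer width and input dimension are illustrative assumptions; `log_softmax` trained with the negative log-likelihood loss is numerically equivalent to Softmax with cross entropy):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class CircuitClassifier(torch.nn.Module):
    def __init__(self, d_in, d, num_classes):
        super().__init__()
        self.conv1 = GCNConv(d_in, d)      # GCN^(0): R^{N x d_in} -> R^{N x d}
        self.conv2 = GCNConv(d, d)         # GCN^(1)
        self.conv3 = GCNConv(d, d)         # GCN^(2): 3rd-level neighbours
        self.fc = torch.nn.Linear(d, num_classes)   # FC score function

    def forward(self, x, edge_index, batch):
        x = torch.tanh(self.conv1(x, edge_index))   # tanh activations
        x = torch.tanh(self.conv2(x, edge_index))
        x = torch.tanh(self.conv3(x, edge_index))
        x = global_mean_pool(x, batch)              # GM-Readout: graph embedding
        return F.log_softmax(self.fc(x), dim=1)     # class probabilities (log)

model = CircuitClassifier(d_in=8, d=64, num_classes=7)
```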
## VIII Discussion
ML models in general are heavily dependent on hyper-parameter tuning. Several aspects must be considered when a circuit designer incorporates an ML model in circuit design, such as network depth, number of neurons, activation functions, pooling layers, etc. These uncertainties in ML models add more burden when incorporating ML techniques into circuit design. Eventually, a network update becomes necessary at some point of the design process, and the designer must fine-tune the ML-based design tool. The proposed method can be applied to a wide range of applications such as power electronic converter condition monitoring and prognostics, since the developed representation maps the circuit structure; thus, voltage stresses at each node and current stresses in each branch can be evaluated and tied to a component/converter reliability function. Another application is network structure and fault detection in large power systems [79]. Circuit design is another application that fits the proposed methodology, where circuit performance parameters are set and the GNN model can generate a circuit topology that meets the input criteria. Moreover, this study can be further developed for the purpose of linking finite element modelling software into AI-assisted design of magnetic components, for optimal component value/shape design. Additionally, the proposed methodology has very high potential in circuit obfuscation and reverse engineering when it is required to identify/obscure a circuit structure [80]. One idea works on the circuit side, utilizing the GNN's capability to learn the proper transformation function of the converter, i.e., to obtain a mathematical transformation of every circuit component and eventually of the whole circuit behavior. On the application side, the end goals, whether they are gain, current ripple, magnetic design, etc., are transformed into a fictitious statistical domain, and the purpose of the GNN is to generate circuits with a similar statistical domain. This can be beneficial for training AI to generate application-specific converters, which eventually will help reduce component size and increase power density, speed, and efficiency. This methodology is also applicable to power system applications such as network reconstruction, fault detection, and load flow estimation.
## IX Conclusion
In this paper, a graph representation of electric circuits was proposed. This method enables a dynamically scalable interface between different circuit aspects, including physical connections, component values, and mode of operation, and the machine learning domain. By applying the circuit graphs as inputs to a GNN, different circuit modeling, design, and optimization tasks can be performed. The effect of bond graph feature selection, scaling, and formulation was also analyzed. An optimal feature representation results in a better-defined feature matrix and, consequently, more accurate circuit and operating-mode identification. As a proof of concept, case studies of classifiers of continuous and switching circuits were presented, where the proposed algorithms were shown to identify circuit types with high accuracy based on physical connectivity, as well as to identify their mode of operation based on parameter values and control variable values.
|
2306.02447 | Active Inference-Based Optimization of Discriminative Neural Network
Classifiers | Commonly used objective functions (losses) for a supervised optimization of
discriminative neural network classifiers were either distribution-based or
metric-based. The distribution-based losses could compromise the generalization
or cause classification biases towards the dominant classes of an imbalanced
class-sample distribution. The metric-based losses could make the network model
independent of any distribution and thus improve its generalization. However,
they could still be biased towards the dominant classes and could suffer from
discrepancies when a class was absent in both the reference (ground truth) and
the predicted labels. In this paper, we proposed a novel optimization process
which not only tackled the unbalancedness of the class-sample distribution of
the training samples but also provided a mechanism to tackle errors in the
reference labels of the training samples. This was achieved by proposing a
novel algorithm to find candidate classification labels of the training samples
from their prior probabilities and the currently estimated posteriors on the
network and a novel objective function for the optimizations. The algorithm was
the result of casting the generalized Kelly criterion for optimal betting into
a multiclass classification problem. The proposed objective function was the
expected free energy of a prospective active inference and could incorporate
the candidate labels, the original reference labels, and the priors of the
training samples while still being distribution-based. The incorporation of the
priors into the optimization not only helped to tackle errors in the reference
labels but also allowed to reduce classification biases towards the dominant
classes by focusing the attention of the neural network on important but
minority foreground classes. | Faezeh Fallah | 2023-06-04T19:30:28Z | http://arxiv.org/abs/2306.02447v1 | # Active Inference-Based Optimization of Discriminative Neural Network Classifiers
###### Abstract
Commonly used objective functions (losses) for a supervised optimization of discriminative neural network classifiers were either distribution-based or metric-based. The distribution-based losses were mostly based on the cross entropy and fitted the network model to the distribution of the training samples. This could compromise the generalization (predictive performance on unseen samples) or cause classification biases towards the dominant classes of an imbalanced class-sample distribution. The metric-based losses could make the network model independent of any distribution and thus improve its generalization. However, the metrics involved in them were binary classification metrics. This implied decomposing a multiclass classification into a series of one-vs-all classifications and then forming the overall loss from an average of the one-vs-all losses. This averaging could naturally lead to a bias towards the dominant classes. Moreover, the metric-based losses could suffer from discrepancies when a class was absent in both the reference (ground truth) labels and the predicted labels. To tackle these issues, recent works have used a combination of the distribution-based and metric-based losses. In this paper, we formulated the optimization of a discriminative neural network classifier within the framework of active inference and showed that the cross entropy-based losses were indeed the variational free energy of a retrospective active inference. Then, we proposed a novel optimization process which not only tackled the unbalancedness of the class-sample distribution of the training samples but also provided a mechanism to tackle errors in the reference (ground truth) labels of the training samples. This was achieved by proposing a novel algorithm to find candidate classification labels of the training samples during the network optimization and a novel objective function for the optimizations. The algorithm could find the candidate labels of the training samples from their prior probabilities and the currently estimated posteriors on the network. The proposed objective function incorporated these candidate labels along with the original reference labels and the priors of the training samples while still being distribution-based. The proposed algorithm was the result of casting the generalized Kelly criterion for optimal betting into a multiclass classification problem. To this end, we showed that the objective function of the generalized Kelly criterion was a tight upper bound of the expected complexity of the expected free energy of a prospective active inference. This in turn allowed us to derive our proposed objective function from such an expected free energy. The incorporation of the priors into the optimization not only helped to tackle errors in the reference labels but also allowed reducing classification biases towards the dominant classes by focusing the attention of the neural network on important but minority foreground classes.
## Background and Motivation
### Active Inference
Bayesian inference enabled perception, learning, and decision making in a passive or active perceptual task. This perception could be over a categorical (multinomial) distribution of independent and mutually exclusive states. This distribution assigned one probability to each state of each observation, with the sum of these probabilities for each observation being one. That is, each observation could only be in one state at a time. In an active perception, an agent actively engaged with its environment to gather information, seek preferred observations, avoid unpreferred observations, and take actions which could reduce uncertainty and maximize reward. If the states, observations, and policies (actions) could be discretized, then the tasks could be formulated over categorical distributions of the states, observations, and policies. These formed a discrete state-space model in which the time could be discrete as well. An active perception ruled by the Bayesian inference was called an **active inference**. The Bayesian inference inferred the joint/posterior distribution of a generative/discriminative model by using the Bayes' theorem. For the classification/segmentation tasks addressed in this dissertation, a discriminative model was sufficient. Thus, we restricted the use of the active inference to a discriminative model and only involved the posteriors in our formulations [Smith 2022].
According to the Bayes' theorem, for each observation (o), state (s), and policy (\(\pi\)), the posterior \(p(s|o,\pi)\) could be deduced from the likelihood \(p(o|s,\pi)\) as
\[p(s|o,\pi)=\frac{p(o|s,\pi)\cdot p(s|\pi)}{p(o|\pi)} \tag{1}\]
with \(p(o|\pi)=\sum_{s}p(o|s,\pi)\cdot p(s|\pi)\) being the model evidence or the marginal likelihood. This way, the Bayesian inference enabled perception, learning, and decision making by model inversion, i.e. deduction of the posterior \(p(s|o,\pi)\) from the likelihood \(p(o|s,\pi)\). This resulted in a maximum a posteriori estimation. In a simpler approach, a maximum likelihood estimation might be followed. However, the maximum likelihood estimation was prone to overfitting because the likelihoods only encoded the aleatoric uncertainty of the model caused by noise (disturbances) in its process. The epistemic (cognitive) uncertainty of the model was reflected by the states' priors \(\left\{p(s|\pi)\right\}_{s}\) and the model evidence \(p(o|\pi)\) included in the posteriors. The computation of the model evidence implied summing the likelihoods of every observation over all possible states. For most of the categorical distributions, this computation was intractable. Also, by increasing the number of the states, the number of the summation terms increased exponentially. For continuous distributions, this summation mostly turned into a nonconvex integration with no closed-form (analytical) solution. To enable a computationally tractable active inference, the Bayes' theorem got approximated by minimizing
* variational free energy (VFE)1 for perception and learning
* expected free energy (EFE) for optimal decision making, planning, and action selection.
Footnote 1: The term **free energy** stemmed from connections between the Bayesian inference and the Bayesian mechanics ruling free energy in particular (quantum) physics elaborated by neuroscientists [Friston 2019].
Each of the aforementioned objective functions depended on the policies (actions). Accordingly, the minimization of each of them provided an estimate of the posteriors conditioned on the policies. However, the VFE resulted from a course of policies based on the observations in the past and present, but the EFE resulted from a course of policies based on the observations in the future. Thus, the VFE and the EFE respectively enabled retrospective and prospective policy evaluations. This difference mattered in the cases where optimal policies for the past or present were not the optimal policies for the future or vice versa. To derive the aforementioned objectives, the negative logarithm of both sides of the Bayes' formula was taken and \(-\ln\bigl{(}p(o|\pi)\bigr{)}\) was introduced as the self-information or surprisal of the model evidence \(p(o|\pi)\). Then, the VFE got defined as an upper bound of this quantity. This way, by minimizing the VFE,
the surprisal or deviation between observations and predictions of the model got minimized or the amount of evidence an observation could provide for the model got maximized, i.e. the model evidence got maximized.
As detailed in [Smith 2022], the objective function of the VFE was given by
\[\begin{split}\mathcal{L}_{\mathrm{VFE}}&=\mathrm{KL} \Big{[}p(s|\pi)||q(s|\pi)\Big{]}-\mathrm{E}_{p(s|\pi)}\Big{[}\mathrm{ln} \big{(}q(o|s)\big{)}\Big{]}\\ &=\underbrace{\mathrm{E}_{p(s|\pi)}\Big{[}\mathrm{ln}\big{(}p(s| \pi)\big{)}-\mathrm{ln}\big{(}q(s|\pi)\big{)}\Big{]}}_{\mathrm{complexity}}- \underbrace{\mathrm{E}_{p(s|\pi)}\Big{[}\mathrm{ln}\big{(}q(o|s)\big{)}\Big{]} }_{\mathrm{accuracy}}\\ &=\sum_{s|\pi}p(s|\pi)\cdot\mathrm{ln}\big{(}p(s|\pi)\big{)}- \sum_{s|\pi}p(s|\pi)\cdot\mathrm{ln}\big{(}q(s|\pi)\big{)}-\sum_{s|\pi}p(s|\pi )\cdot\mathrm{ln}\big{(}q(o|s)\big{)}\\ &=\underbrace{\sum_{s|\pi}p(s|\pi)\cdot\mathrm{ln}\big{(}p(s|\pi )\big{)}}_{-\mathrm{entropy}}+\underbrace{\sum_{s|\pi}-p(s|\pi)\cdot\mathrm{ln} \big{(}q(o|\pi)\big{)}}_{\mathrm{cross\ entropy}}\end{split} \tag{2}\]
with \(q(\cdot)\) being the distribution approximating the true distribution \(p(\cdot)\), \(\mathrm{KL}[p(\cdot)||q(\cdot)]\) being the Kullback-Leibler (KL) divergence (dissimilarity) between \(p(\cdot)\) and \(q(\cdot)\), and \(\mathrm{E}_{p(s|\pi)}[\cdot]\) being the expectation with respect to \(p(s|\pi)\). The KL divergence was derived from the Akaike information criterion (AIC) measuring the goodness of a model in terms of its underfitting (estimation bias on seen samples) and overfitting (predictive variance on unseen samples). The AIC measured the amount of information loss (relative entropy) resulting from representing a model with another model. Here, the cross entropy was not a distance metric because the cross entropy of two identical distributions equaled their entropy. However, after subtracting the entropy from the cross entropy, the KL divergence became a distance metric. That is, the KL divergence of two identical distributions was zero [Kullback 1951, McMillan 1956]. This way, the minimization of \(\mathcal{L}_{\mathrm{VFE}}\) amounted to finding the distribution \(q(\cdot)\) which best fitted \(p(\cdot)\). The best fit was the minimizer of the **complexity** (overfitting) and the maximizer of the **accuracy**. The minimization of \(\mathcal{L}_{\mathrm{VFE}}\) with respect to \(q(\cdot)\) was unaffected by the **entropy** term, which depended only on \(p(s|\pi)\). Thus, by adding the **entropy** term to \(\mathcal{L}_{\mathrm{VFE}}\), an objective function called the cross entropy loss was obtained as
\[\mathcal{L}_{\mathrm{CE}}=-\sum_{s|\pi}p(s|\pi)\cdot\mathrm{ln}\big{(}q(o|\pi )\big{)}. \tag{3}\]
If \(q(\cdot)\) was Gaussian, then the cross entropy loss became a sum of squared errors.
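As a minimal numerical sketch of the decomposition in Eq. (2), assuming categorical distributions, a single observation, and made-up probability values:

```python
import numpy as np

def vfe(p_s, q_s, q_o_given_s):
    """VFE of Eq. (2): complexity minus accuracy, for categorical distributions.

    p_s:         true state distribution p(s|pi)
    q_s:         approximate state distribution q(s|pi)
    q_o_given_s: likelihood q(o|s) of the actual observation o, per state s
    """
    complexity = np.sum(p_s * (np.log(p_s) - np.log(q_s)))  # KL[p(s|pi)||q(s|pi)]
    accuracy = np.sum(p_s * np.log(q_o_given_s))            # E_{p(s|pi)}[ln q(o|s)]
    return complexity - accuracy

# The closer q_s is to p_s and the better o is explained, the smaller the VFE.
print(vfe(np.array([0.5, 0.3, 0.2]),
          np.array([0.4, 0.4, 0.2]),
          np.array([0.8, 0.1, 0.3])))
```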
The minimization of the EFE selected optimal policies (actions) by solving the explore-exploit dilemma [Friston 2019]. That is, when information about the states was not enough, it emphasized exploration (maximization of information gain or minimization of uncertainty). When the information was enough, it emphasized exploitation (maximization of reward or minimization of expected complexity). The choice of the exploratory or the exploitative optimization depended on the current uncertainty and the future (expected) reward. This way, the minimization of the EFE sought the policies which could lead to future observations optimizing the trade-off between the maximization of the information gain and the maximization of the reward. These self-evidencing observations were called **preferred**. The incidence probability of a preferred observation \(o\) was denoted by \(p(o)\). As detailed in [Smith 2022], the objective function of the EFE was given by
\[\mathcal{L}_{\mathrm{EFE}} =\mathrm{KL}\Big{[}p(o)||q(o|\pi)\Big{]}+\mathrm{E}_{p(s|\pi)} \Big{[}\mathrm{H}\big{[}q(o|\pi)\big{]}\Big{]} \tag{4}\] \[=\underbrace{\mathrm{E}_{p(o)}\Big{[}\mathrm{ln}\big{(}p(o))- \mathrm{ln}\big{(}q(o|\pi)\big{)}\Big{]}}_{\mathrm{expected\ complexity}}+ \underbrace{\mathrm{E}_{p(s|\pi)}\Big{[}\mathrm{H}\big{[}q(o|\pi)\big{]}\Big{]} }_{\mathrm{uncertainty}}\] \[=\underbrace{\sum_{o}p(o)\cdot\Big{[}\mathrm{ln}\big{(}p(o) \big{)}-\mathrm{ln}\big{(}q(o|\pi)\big{)}\Big{]}}_{\mathrm{expected\ complexity}}+ \underbrace{\sum_{s|\pi}-p(s|\pi)\cdot\sum_{o|\pi}q(o|\pi)\cdot\mathrm{ln}\big{(} q(o|\pi)\big{)}}_{\mathrm{uncertainty}}\]
with \(\mathrm{H}\big{[}q(o|\pi)\big{]}=-\sum_{o|\pi}q(o|\pi)\cdot\ln\big{(}q(o|\pi)\big{)}\) being the entropy of \(q(o|\pi)\). This way, active inference provided a unified mathematical framework to model interdependent aspects of perception, learning, and decision making. This framework could build highly flexible and generalizable generative models which could explain neuro-cognitive behavioral processes as well as partially observable Markov decision processes [17, 20].
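Analogously, Eq. (4) can be evaluated per policy and the minimizer selected. The sketch below assumes categorical distributions; the policy names and probability values are hypothetical. Note that, since \(p(s|\pi)\) sums to one, the uncertainty term reduces to the entropy \(\mathrm{H}\big{[}q(o|\pi)\big{]}\).

```python
import numpy as np

def efe(p_o_pref, q_o, p_s):
    """EFE of Eq. (4) for one policy: expected complexity (risk) plus uncertainty."""
    expected_complexity = np.sum(p_o_pref * (np.log(p_o_pref) - np.log(q_o)))
    entropy = -np.sum(q_o * np.log(q_o))    # H[q(o|pi)]
    uncertainty = np.sum(p_s) * entropy     # equals the entropy, as p(s|pi) sums to 1
    return expected_complexity + uncertainty

# Hypothetical two-policy comparison: the optimal policy minimizes the EFE.
p_o_pref = np.array([0.8, 0.2])                                    # preferred p(o)
policies = {"stay": (np.array([0.5, 0.5]), np.array([0.6, 0.4])),  # (q(o|pi), p(s|pi))
            "move": (np.array([0.7, 0.3]), np.array([0.3, 0.7]))}
best = min(policies, key=lambda k: efe(p_o_pref, *policies[k]))
```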
### Optimization of Discriminative Neural Network Classifiers
A neural network was composed of several perceptrons (nodes) in multiple layers. The layers included an input layer, some hidden layers, and an output layer. A perceptron contained a nonlinear function called an activation and was connected to other perceptrons in neighboring layers via some weights and a bias. These weights, biases, and the nonlinear activations formed the **main parameters** of the neural network. Besides, the neural network had some **hyperparameters** defining its architecture and its optimization process. Neural networks have demonstrated promising results in a wide range of applications. This was due to the **universal approximation theorem** stating that a feed-forward network with a hidden layer containing a finite number of neurons (perceptrons) could approximate any continuous function on a compact subset of \(\mathbb{R}^{d}\) if and only if the used activations (perceptrons' nonlinearities) were nonpolynomial. The number of the parameters of such an approximating model defined its capacity to represent and to predict patterns. For a fully connected neural network, this number was \(\mathcal{O}(n_{\mathrm{layer}}\cdot n_{\mathrm{width}}^{2})\) where \(n_{\mathrm{layer}}\) was the number of layers (depth of the network) and \(n_{\mathrm{width}}\) was the number of perceptrons per layer (width of the network). Thus, an increase in the width increased the number of the parameters faster than an increase in the number of layers. An increase in the number of parameters increased the chance of overfitting. Moreover, a wide shallow network could fit the patterns in the seen (training) samples but could not predict the patterns in unseen (validation or test) samples. To enhance the generalization (predictive performance on unseen samples), the neural network should contain more layers (become deeper) [1, 13, 14].
In a fully connected neural network, every perceptron was connected to all the perceptrons in its neighboring layers. This network lacked the capability of capturing regional (intra-layer) neighborhood patterns and thus needed handcrafted features to accomplish its task. To have an end-to-end neural network, directly applicable to the input samples without any preprocessing or explicit feature extraction, the features should be extracted by the network itself. This implied capturing regional (intra-layer) neighborhood patterns through limited receptive fields. The receptive field of a perceptron defined the size and the shape of the region at the input of the network affecting the output of the perceptron. The receptive field was determined by the kernel and the depth of the perceptron in the neural network. The deeper the perceptron was in the network, the larger its receptive field became.
The application of a perceptron's kernel to its inputs returned a number of feature maps. By increasing the receptive field of the perceptron, the number and the abstraction level of its feature maps got increased but the size of each map got decreased. Accordingly, by using different kernels and locating the perceptrons at different depths of the network, features of different resolutions and abstraction levels could be obtained. Besides capturing subtle features and patterns, a kernel-based network enabled **weight sharing** by applying the same kernel coefficients to various regions in space. This resulted in a significantly lower number of parameters than a fully connected network and thus reduced the chance of overfitting and improved the generalization (predictive performance on unseen samples). In addition, it reduced the number of samples needed to train (optimize) the network. An easy-to-implement kernel for estimating a categorical distribution in a classification problem or a continuous distribution in a regression task was **convolutional1**. This type of kernel formed a convolutional neural network (CNN) which could be end-to-end and deep as well.
Footnote 1: In practice, many machine learning libraries avoided the _sign flip_ action involved in the convolution and thus simply implemented a cross correlation between the inputs and the kernels of each layer.
As shown in Figure 1, a neural network could be **plain** or **Bayesian**. In the plain network, each parameter, i.e. each weight, bias, or activation, had a single value. In the Bayesian network, each parameter had a vector of values representing its distribution and uncertainty. The Bayesian network was formed from an ensemble of plain networks. That is, multiple plain networks got built and then the Bayesian network's parameters got derived from a weighted
average of the plain networks' parameters with the weight of each network being the posteriors estimated by it for the training samples. Accordingly, whatever was derived or concluded for the plain networks could be extended to the Bayesian networks. In the following, we simply referred to the plain neural network as the neural network. Such a network demanded an objective function and a process to optimize its parameters as well as a regularization to mitigate overfitting. A commonly used objective function for such a network was the cross entropy loss introduced in (10). The commonly used optimization processes were based on the gradient (first derivative) descent of the objective function [Kingma 2015]. The regularization was mostly done by penalizing large perceptrons' weights or dropping perceptrons of low-confidence weights in a method called Dropout [Gal 2015, Jospin 2022].
The gradient descent optimization relied on the fact that the opposite direction of the gradient (first derivative) of the scalar field of the objective function pointed towards a (local) minimum of the function. Accordingly, in each iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\) of this optimization, a movement in the direction of the negative gradient of the objective function at the current point updated the network's parameters. This optimization had a linear complexity with regard to the number of the network's parameters. The gradient at each iteration was the average gradient of the training samples passed through the network's layers. The samples could be passed one-by-one or all at once. The former led to a stochastic and the latter to a batch-based optimization. A complete pass through all the training samples was called an _epoch_ [Dean 2012, Ruder 2016, Goodfellow 2016].
The averaging of the gradients of the batch's samples resulted in a smooth variation of the cost versus the iterations. In addition, the batch-based optimization allowed applying vectorized and parallelized operations. However, it was restricted to convex or relatively smooth error manifolds and could only find local minima. Moreover, feeding a large batch of samples became memory-intensive. The stochastic gradient descent optimization updated the network's parameters by passing one sample through the network in each iteration. This could avoid memory issues, could address nonconvex optimizations, and could even find global minima. However, due to a more frequent update of the network's parameters, it resulted in a fluctuating cost versus the iterations. Depending on the samples' gradients, the fluctuations might never reach a minimum but rather dance around it. Moreover, the stochastic optimization could not benefit from the vectorized or the parallelized operations.
An intermediate between the stochastic and the batch-based optimization was a mini-batch-based optimization. In this approach, the training samples got divided into \(n_{\mathrm{batch}}\) disjoint batches, i.e. \(\mathbb{T}_{\mathrm{train}}=\cup_{b=1}^{n_{\mathrm{batch}}}\mathbb{T}_{b}\). Then, in each iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\), the samples of one batch got passed through the network and the average gradient of these samples updated the network's parameters. The size or the number of the batches was a hyperparameter. This way, by adapting the size or the number of the batches, the mini-batch-based optimization could utilize the vectorized and the parallelizable operations to speed up its computations while fitting the fluctuations of the cost versus the iterations to the nonconvexity of the addressed problem. Accordingly, if \(n_{\mathrm{epoch}}\) was the number of epochs, then the network was optimized by \(n_{\mathrm{it}}=(|\mathbb{T}_{\mathrm{train}}|/|\mathbb{T}_{b}|)\times n_{ \mathrm{epoch}}\) iterations. In each epoch, the batches and the samples of each batch got randomly shuffled to avoid overfitting to some of the samples.
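A minimal sketch of this mini-batch scheme follows; `grad_fn` is a hypothetical callback returning the gradient of the objective function for a single sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_gd(params, samples, grad_fn, batch_size, n_epochs, lr=1e-3):
    """Mini-batch gradient descent: n_it = (|T_train| / |T_b|) x n_epochs updates."""
    n = len(samples)
    for _ in range(n_epochs):
        order = rng.permutation(n)  # randomly reshuffle the samples in every epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            # the average gradient of the batch's samples drives the update of Eq. (5)
            g = np.mean([grad_fn(params, samples[i]) for i in batch], axis=0)
            params = params - lr * g
    return params
```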
Figure 1: A neural network with plain (single-valued) weights and biases (a), plain activations (b), Bayesian (distributed) weights and biases (c), and Bayesian activations (d).
With \(\alpha_{\mathrm{lr}}\in(0,1)\) being the learning rate (step size), \(\mathbf{\eta}^{(i)}\) being the vector of the main parameters of the neural network in the iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\), and \(\nabla_{\mathbf{\eta}^{(i)}}(\mathcal{L})\) being the gradient of a generic objective function \(\mathcal{L}\) with regard to these parameters, we had
\[\mathbf{\eta}^{(i)}=\mathbf{\eta}^{(i-1)}-\alpha_{\mathrm{lr}}\cdot\mathbf{\delta}^{(i)}. \tag{5}\]
In the gradient descent optimization, \(\mathbf{\delta}^{(i)}=\nabla_{\mathbf{\eta}^{(i-1)}}(\mathcal{L})\). This resulted in a slow convergence and sensitivity to abrupt variations of the gradient due to noise and perturbations. To speed up the convergence, to propel out of local minima, and to smooth out the gradient variations, in the method of _momentum_, \(\mathbf{\delta}^{(i)}\) got defined to be an exponentially weighted moving average (first moment) of the current and past gradients. The averaging weight was a decay rate called the first moment rate \(\beta_{\mathrm{fm}}\in[0,1)\). It emphasized the importance of recent gradients over the older ones. For \(\beta_{\mathrm{fm}}=0\), the momentum boiled down to the gradient descent. For \(\beta_{\mathrm{fm}}=1\) and \(\alpha_{\mathrm{lr}}\approx 0\), it resulted in endless fluctuations of the cost versus the iterations, like the movements of a ball in a frictionless bowl. Two major bottlenecks of the gradient descent and the momentum were the possibility of being trapped in saddle points (i.e. points of zero gradients in all directions) and a slow update in the directions of sparse features of weak gradients. To tackle these, the adaptive gradient algorithm (AdaGrad) defined \(\mathbf{\delta}^{(i)}\) to be the instant (current) gradient divided (normalized) by the square root of the sum of the squared gradients. This scaling helped to avoid saddle points and adapted the gradient, and thus the optimization rate, in each direction to its history of updates. That is, the more a feature (direction) was updated in the past, the less it would be updated in the future.
Despite these improvements, the AdaGrad was slow since the sum of the squared gradients only grew but never shrank. This growth also resulted in a rapid decay of \(\mathbf{\delta}^{(i)}\) and thus a poor performance in dealing with nonconvex objective functions and dense features (directions of strong gradients). The root mean square propagation (RMSprop) fixed these issues by replacing the sum of the squared gradients with an exponentially weighted moving average of the squared gradients. This was called the second moment of the gradient. The averaging weight was a decay rate called the second moment rate \(\beta_{\mathrm{sm}}\in[0,1)\). It emphasized the importance of recent gradients over the older ones. Moreover, in the formation of \(\mathbf{\delta}^{(i)}\), the division (normalization) of the instant gradient by the square root of the second moment balanced the step size. More specifically, it decreased the step size for large gradients to prevent their explosion and increased the step size for small gradients to prevent their vanishing. The exploding and the vanishing gradients were common issues of deep neural networks.
The adaptive moment estimation (Adam) combined the momentum (first moment) with the RMSprop (second moment) to take advantage of both. This was done by defining \(\mathbf{\delta}^{(i)}\) to be the bias-corrected first moment divided (normalized) by the square root of the bias-corrected second moment. This way, the Adam got the convergence speed from the momentum and the ability to adapt the gradients in different directions from the RMSprop [Kingma 2015]. More specifically,
\[\mathbf{\delta}^{(i)}=\hat{\mathbf{m}}^{(i)}\oslash\left(\sqrt{\hat{\mathbf{v}}^{(i)}}\oplus 10^{-8}\right)\qquad\qquad\mathbf{g}^{(i)}=\nabla_{\mathbf{\eta}^{(i-1)}}(\mathcal{L}) \tag{6}\]
\[\text{biased first moment:}\qquad\mathbf{m}^{(i)}=\beta_{\mathrm{fm}}\odot\mathbf{m}^{(i-1)}\oplus(1-\beta_{\mathrm{fm}})\odot\mathbf{g}^{(i)}\]
\[\text{bias-corrected first moment:}\qquad\hat{\mathbf{m}}^{(i)}=\mathbf{m}^{(i)}\oslash(1-\beta_{\mathrm{fm}}^{i})\]
\[\text{biased second moment:}\qquad\mathbf{v}^{(i)}=\beta_{\mathrm{sm}}\odot\mathbf{v}^{(i-1)}\oplus(1-\beta_{\mathrm{sm}})\odot\mathbf{g}^{(i)}\odot\mathbf{g}^{(i)}\]
\[\text{bias-corrected second moment:}\qquad\hat{\mathbf{v}}^{(i)}=\mathbf{v}^{(i)}\oslash(1-\beta_{\mathrm{sm}}^{i})\]
with \(\odot\), \(\oplus\), and \(\oslash\) denoting elementwise multiplication, addition, and division, respectively.
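In code, one Adam update per Eqs. (5) and (6) can be sketched as follows; the default decay rates and step size are the commonly used values and are assumptions here:

```python
import numpy as np

def adam_step(params, grad, m, v, i, lr=1e-3, beta_fm=0.9, beta_sm=0.999, eps=1e-8):
    """One Adam update of Eqs. (5)-(6); i is the 1-based iteration index."""
    m = beta_fm * m + (1.0 - beta_fm) * grad           # biased first moment
    v = beta_sm * v + (1.0 - beta_sm) * grad * grad    # biased second moment
    m_hat = m / (1.0 - beta_fm ** i)                   # bias corrections
    v_hat = v / (1.0 - beta_sm ** i)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```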
All the aforementioned techniques relied on the gradient (first derivative) of the scalar field of the objective function of the neural network. The second derivative of this scalar field was represented by a Hessian matrix. Commonly used optimization techniques based on the Hessian matrix were the Newton and the quasi-Newton methods, the conjugate gradient method, and the Levenberg-Marquardt algorithm [Dean 2012, Ruder 2016]. A common way to optimize a network's parameters by any one of the derivative-based techniques was backpropagation. This method demanded the objective function to be expressed in terms of the network's outputs (goodness of the model) and to be differentiable with respect to the outputs of every layer. In case of using the gradient of the objective function with respect to the
network's parameters, this gradient got expressed as a product of the layerwise errors. Then, the backpropagation took the following steps (a minimal code sketch is given after the list):
* initialized the network's parameters with random numbers.
* passed a batch through all the layers and computed the outputs of every layer.
* computed the error at the last layer by comparing the predictions with the references.
* propagated the error from the last layer to the first layer to find the error of each layer.
* expressed the gradient of the objective function as a product of the layerwise errors.
* updated the network's parameters according to (5).
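The listed steps can be made concrete with a small fully connected network. The sketch below is illustrative only: the layer sizes, the tanh hidden activation, and the softmax cross entropy objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: initialize the network's parameters with random numbers.
W1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 3)), np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def backprop_step(X, L, lr=0.1):
    """One update; X: batch of inputs, L: one-hot reference labels."""
    global W1, b1, W2, b2
    # Step 2: pass the batch through all the layers and compute their outputs.
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    # Step 3: error at the last layer (softmax cross entropy vs. the references).
    e2 = (p - L) / len(X)
    # Step 4: propagate the error from the last layer back to the first layer.
    e1 = (e2 @ W2.T) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
    # Steps 5-6: layerwise gradients (products of errors and activations)
    # and the parameter update of Eq. (5).
    W2 -= lr * (h.T @ e2); b2 -= lr * e2.sum(axis=0)
    W1 -= lr * (X.T @ e1); b1 -= lr * e1.sum(axis=0)
```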
### Commonly Used Objective Functions
For a probabilistic estimate, the outputs of the neural network got converted to probabilities (posteriors) by using a softmax (normalized exponential) function. This function converted a vector to another vector whose elements summed up to one and each element of the output had a monotonic relationship with an element of the input. In our case, the input vector was the network's outputs for each sample and had a length of \(n_{\mathrm{clas}}=\left|\mathbb{L}\right|\). This way, the output of the softmax function could be interpreted as a categorical probability distribution of a multinomial classification over \(n_{\mathrm{clas}}\) mutually exclusive classes. That is, every sample could only have one reference classification label. A special case of the softmax function was the sigmoid function. This function assumed that the classes were independent but not mutually exclusive. Thus, every sample could have multiple reference labels. The sigmoid function cast a multinomial classification into a series of binary (one-vs-all) classifications. Accordingly, its outputs did not necessarily sum up to one. For a sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\), the network's outputs at the \(i^{\mathrm{th}}\) iteration of the optimization formed a vector \(\mathbf{z}_{b,j}^{(i)}=\left[z_{b,j,c}^{(i)}\right]_{c\in\mathbb{L}}\). Then, the posteriors \(\hat{\mathbf{p}}_{b,j}^{(i)}=\left[\hat{p}_{b,j,c}^{(i)}\right]_{c\in\mathbb{L}}\) produced by applying the softmax function to these outputs were
\[\hat{p}_{b,j,c}^{(i)}=\frac{\exp\bigl{(}z_{b,j,c}^{(i)}\bigr{)}}{\sum_{k\in \mathbb{L}}\exp\bigl{(}z_{b,j,k}^{(i)}\bigr{)}}\in(0,1)\quad\text{ with }\quad\sum_{c\in\mathbb{L}}\hat{p}_{b,j,c}^{(i)}=1. \tag{7}\]
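A direct implementation of Eq. (7) for a whole batch of network outputs can be sketched as below; subtracting the row-wise maximum is a standard numerical-stability measure that leaves the result unchanged:

```python
import numpy as np

def softmax_posteriors(Z):
    """Row-wise softmax of Eq. (7): outputs Z (|T_b| x n_clas) -> posteriors P_hat."""
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability; result unchanged
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)
```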
Accordingly, if the training samples \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) were used to optimize the network's parameters in the iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\), then \(\mathbf{L}_{b}=\left[\mathbf{l}_{b,j}\right]_{j}=\left[\mathbf{l}_{b,c}\right]_{c}=\left[l_{b,j,c}\right]_{j,c}\) was the \(\left|\mathbb{T}_{b}\right|\times n_{\mathrm{clas}}\) matrix of vectorized reference labels of these samples, \(\mathbf{Z}_{b}^{(i)}=\left[\mathbf{z}_{b,j}^{(i)}\right]_{j}=\left[z_{b,j,c}^{(i)}\right]_{j,c}\) was the \(\left|\mathbb{T}_{b}\right|\times n_{\mathrm{clas}}\) matrix of the network's outputs for these samples, and \(\hat{\mathbf{P}}_{b}^{(i)}=\left[\hat{\mathbf{p}}_{b,j}^{(i)}\right]_{j}=\left[\hat{p}_{b,j,c}^{(i)}\right]_{j,c}\) was the \(\left|\mathbb{T}_{b}\right|\times n_{\mathrm{clas}}\) matrix of their classification posteriors estimated by the network.
If the reference (ground truth) labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were provided at the time of optimization (training), then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vector \(\mathbf{l}_{b,j}\) was a one-hot-encoding of its reference label \(l_{b,j}\in\mathbb{L}\) and was given by
\[\mathbf{l}_{b,j}=\left[l_{b,j,c}\right]_{c\in\mathbb{L}}\quad\text{ with }\quad l_{b,j,c}=\begin{cases}1&\text{if }c=l_{b,j}=\text{reference label of }v_{b,j}\in\mathbb{T}_{b}\\ 0&\text{otherwise}\end{cases}. \tag{8}\]
If the reference (ground truth) labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were not provided at the time of optimization (training), then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vector \(\mathbf{l}_{b,j}\) was
\[\mathbf{l}_{b,j}=\left[l_{b,j,c}\right]_{c\in\mathbb{L}}=\frac{1}{n_{\mathrm{ clas}}}\odot\mathbf{1}_{n_{\mathrm{clas}}=\left|\mathbb{L}\right|}. \tag{9}\]
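The two labeling cases of Eqs. (8) and (9) can be combined into one helper; the function name and signature are illustrative only:

```python
import numpy as np

def label_matrix(ref_labels, n_samples, n_classes):
    """|T_b| x n_clas matrix L_b: one-hot rows per Eq. (8), or the uniform
    rows of Eq. (9) when the reference labels are unavailable."""
    if ref_labels is None:
        return np.full((n_samples, n_classes), 1.0 / n_classes)   # Eq. (9)
    L = np.zeros((n_samples, n_classes))
    L[np.arange(n_samples), ref_labels] = 1.0                     # Eq. (8)
    return L
```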
For a discriminative neural network classifier acting on \(\left|\mathbb{L}\right|=n_{\mathrm{clas}}\) classes, a common way to evaluate the estimated posteriors against the reference labels was to use the cross entropy loss introduced in (3). In this application, the policies \(\pi\) incorporated in (3) represented the network's parameters. Each state \(s\) was a class \(c\in\mathbb{L}\) and each observation \(o\) was a sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\). Accordingly, \(p(s|\pi)=p(s)\) was the occurrence probability of a class (state) \(s\) which could be represented by the vectorized reference labels of the samples (observations). Also, \(q(o|\pi)\) was the classification
posterior estimated by the network's parameters \(\pi\) for the reference classification label of a sample (observation) \(o\). With these, the cross entropy loss of the discriminative neural network classifier became
\[\mathcal{L}_{\mathrm{CE}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{-1}{| \mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}} l_{b,j,c}\cdot\ln\bigl{(}\hat{p}_{b,j,c}^{(i)}\bigr{)}. \tag{10}\]
If the posteriors were generated by the softmax function, then this loss was called a **softmax cross entropy loss**. As detailed in (2), the cross entropy loss resulted from the minimization of the VFE through minimizing the KL divergence (dissimilarity) between the reference distribution \(p(\cdot)\) and the estimated distribution \(q(\cdot)\). In a categorical classification, the reference distribution \(p(\cdot)\) was the histogram of the class-sample distribution of the training samples. The estimated distribution \(q(\cdot)\) was a known function parametrized with the network's parameters. This way, the cross entropy loss and the objective functions of the active inference compared the distributions and thus were **distribution-based**. If the class-sample distribution of the training samples was imbalanced, then it had maxima at the dominant classes. These maxima formed minima of the cross entropy loss. Thus, any minimizer of the cross entropy loss could be trapped into those minima and could thus return classifications biased towards the dominant classes of the training samples.
To reduce the impacts of the dominant classes on the optimization of a neural network, the cross entropy loss got weighted and/or modulated. The resulting losses included the following (a code sketch of the weighted focal loss is given after the list):
1. **weighted cross entropy loss** which weighted the contribution of each class \(c\in\mathbb{L}\) by the inverse of its frequency \(w_{b,c}\in(0,1)\) in the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) and (optionally) weighted the contribution of each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) by its distance \(d_{b,j,1}\in\mathbb{R}_{\geq 0}\) to the border of the nearest class and its distance \(d_{b,j,2}\in\mathbb{R}_{\geq 0}\) to the border of the second nearest class through the weight \(w_{b,j}\in(0,1)\) [Ronneberger 2015, Badrinarayanan 2016] \[\mathcal{L}_{\mathrm{WCE}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{-1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}}w_{b,j,c}\cdot l_{b,j,c}\cdot\ln\bigl{(}\hat{p}_{b,j,c}^{(i)}\bigr{)}\] (11) \[w_{b,j,c}=w_{b,c}+w_{b,j}=\frac{\sum_{k\in\mathbb{L}}|\mathbb{T}_{b,k}|}{|\mathbb{T}_{b,c}|+10^{-8}}+\underbrace{w_{\mathrm{mo}}\cdot\exp\Bigl{(}-\frac{(d_{b,j,1}+d_{b,j,2})^{2}}{2\cdot\sigma_{\mathrm{mo}}^{2}}\Bigr{)}}_{w_{b,j}\in(0,1)}\] (12) with \(w_{\mathrm{mo}}=10\), \(\sigma_{\mathrm{mo}}=5\), and \(|\mathbb{T}_{b,c}|=\mathrm{card}\Bigl{(}\{l_{b,j,c}=1\}\Bigr{)}\). The distances to the classification borders could be computed by applying morphological operators to the samples in the classification domain, e.g. the spatial domain in an image segmentation task.
2. **focal (modulated cross entropy) loss** which weighted the contribution of each class by the difficulty of classifying its samples with the difficulties being highlighted with a modulation factor \(\gamma_{\mathrm{mod}}\in\mathbb{R}_{+}\). That is, the higher the \(\gamma_{\mathrm{mod}}\in\mathbb{R}_{+}\) was, the more the easy samples got downweighted to emphasize the role of the difficult samples [Lin, 2018] \[\mathcal{L}_{\mathrm{FL}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{-1 }{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in \mathbb{L}}\bigl{(}1-\hat{p}_{b,j,c}^{(i)}\bigr{)}^{\gamma_{\mathrm{mod}}} \cdot l_{b,j,c}\cdot\ln\bigl{(}\hat{p}_{b,j,c}^{(i)}\bigr{)}.\] (13)
3. **weighted focal loss** which additionally weighted the contribution of each class \(c\in\mathbb{L}\) by the inverse of its frequency \(w_{b,c}\in(0,1)\) in the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\)[Lin, 2018] \[\mathcal{L}_{\mathrm{WFL}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{-1 }{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in \mathbb{L}}w_{b,c}\cdot\bigl{(}1-\hat{p}_{b,j,c}^{(i)}\bigr{)}^{\gamma_{\mathrm{ mod}}}\cdot l_{b,j,c}\cdot\ln\bigl{(}\hat{p}_{b,j,c}^{(i)}\bigr{)}.\] (14)
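A sketch of the weighted focal loss of Eq. (14) follows, using the inverse-frequency class weights of Eq. (12) without the optional border-distance term; the default \(\gamma_{\mathrm{mod}}=2\) is an assumption:

```python
import numpy as np

def weighted_focal_loss(P_hat, L, gamma_mod=2.0, eps=1e-8):
    """Weighted focal loss of Eq. (14); P_hat, L: |T_b| x n_clas arrays."""
    n_b, n_clas = L.shape
    counts = L.sum(axis=0)                      # |T_{b,c}| per class
    w = counts.sum() / (counts + eps)           # inverse class frequencies w_{b,c}
    modulation = (1.0 - P_hat) ** gamma_mod     # down-weights easy samples
    loss = -(w * modulation * L * np.log(P_hat + eps)).sum()
    return loss / (n_clas * n_b)
```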
The weighted cross entropy and the weighted focal loss highlighted the role of the minority classes over the role of the majority classes by including the weight \(w_{b,c}\in(0,1)\) in their terms. This way, the more a class had training samples, the less its classification errors contributed to the overall loss. In a so-called class-balanced cross entropy loss [Cui, 2019], each weight \(w_{b,c}\in(0,1)\) got defined based on the _effective number_\(n_{b,c}\in(0,1)\) of the training samples of
the class \(c\in\mathbb{L}\) in the feature space as
\[w_{b,c}=\bigg{[}1-\frac{n_{b,c}-1}{n_{b,c}}\bigg{]}/\bigg{[}1-\Big{(}\frac{n_{b,c }-1}{n_{b,c}}\Big{)}^{|\mathbb{T}_{b,c}|}\bigg{]}. \tag{15}\]
This method assumed that each sample in the feature space covered a subspace and the overall samples' subspaces of each class formed its prototypical subspace. Then, the volume of this prototype defined the effective number of the class. However, in most of the applications, the feature space was hardly accessible. In a neural network, it was also variable across the network's layers. Moreover, the computation of the subspace coverages in the feature space was expensive and depended on the dimensionality and the geometry of the space. Accordingly, in [15], each number \(n_{b,c}\in(0,1)\) got handled as a hyperparameter.
The aforementioned weighting and modulation schemes could reduce the impacts of the dominant classes of the seen (training) samples on the network's optimization. However, they were still based on the cross entropy loss and thus fitted the network's model to the seen distribution. This could compromise the network's generalization (predictive performance on unseen samples) when the distribution of the unseen (validation or test) samples differed from the distribution of the seen (training) samples. An objective evaluation of a classifier on unseen samples could be done through several metrics. Among these metrics, the Dice coefficient (DICE) and its equivalent the Jaccard index (JI) provided perceptual clues, scale invariance, and counts of false positive and false negative mispredictions. The JI was also called the intersection over union (IoU) and the DICE was the F-\(\beta\) score with \(\beta=1\). These metrics could be computed with a low complexity. This enabled their integration into an iterative optimization of neural network classifiers in the form of **metric-based** losses. Then, the optimum network's parameters were the **maximizers** of the DICE [10] or the **minimizers** of the Jaccard distance (JD)=1\(-\)JI=1\(-\)IoU [1].
The DICE=F-1 score and the JD=1\(-\)JI=1\(-\)IoU directly compared the binary masks of the predicted and the reference labels of the training samples without considering their distribution. This made the network's model independent of any distribution and thus able to tackle the differences between the seen and unseen distributions. However, the binary masks compared by these metrics got formed from discrete-valued labels. This hindered the integration of those metrics into a continuous optimizer with backpropagation. More specifically, the predicted labels were the results of applying an **arg max** operation to the classification posteriors \(\hat{\mathbf{p}}_{b,j}^{(i)}=[\hat{p}_{b,j,c}^{(i)}]_{c\in\mathbb{L}}\) estimated by the network. This operation was nonlinear, irreversible, and nondifferentiable. Thus, to integrate the metrics into a continuous optimizer with backpropagation, the network's outputs \(\mathbf{z}_{b,j}^{(i)}=[z_{b,j,c}^{(i)}]_{c\in\mathbb{L}}\) should be stored in each iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\) and for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\). These storages got retrieved during the backpropagation and thus increased the memory footprint of the network and hindered optimizing a large network with a large number of samples per batch [1].
To integrate the aforementioned metrics into a continuous optimization framework, they should be replaced by their continuous relaxed (real-valued) surrogates. For the DICE, this surrogate compared the vectorized reference labels \(\mathbf{L}_{b}=\left[\mathbf{l}_{b,j}\right]_{j}=\left[\mathbf{l}_{b,c}\right] _{c}=\left[\mathbf{l}_{b,j,c}\right]_{j,c}\) against the classification posteriors \(\hat{\mathbf{P}}_{b}^{(i)}=\left[\hat{\mathbf{p}}_{b,j}^{(i)}\right]_{j}=[\hat {p}_{b,j,c}^{(i)}]_{j,c}\) estimated by the network as
\[\mathcal{L}_{\mathrm{DICE}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{ 2}{\left|\mathbb{L}\right|}\sum_{c\in\mathbb{L}}\frac{\sum_{j\in\mathbb{T}_{b} }l_{b,j,c}\cdot\hat{p}_{b,j,c}^{(i)}}{\sum_{j\in\mathbb{T}_{b}}\left[l_{b,j,c} ^{2}+\hat{p}_{b,j,c}^{(i)^{2}}\right]}. \tag{16}\]
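A direct sketch of this surrogate follows; the small constant added to the denominator is a numerical-safety assumption, not part of Eq. (16):

```python
import numpy as np

def soft_dice(P_hat, L, eps=1e-8):
    """Continuous DICE surrogate of Eq. (16); P_hat, L: |T_b| x n_clas arrays.

    The optimizer maximizes this value (equivalently, minimizes 1 - soft_dice).
    """
    inter = (L * P_hat).sum(axis=0)                         # per-class overlaps
    denom = (L ** 2).sum(axis=0) + (P_hat ** 2).sum(axis=0)
    return (2.0 / L.shape[1]) * (inter / (denom + eps)).sum()
```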
The DICE loss in (16) was reversible and differentiable and could thus be integrated into a gradient descent optimization with backpropagation [10]. However, its **nonconvexity** hindered its wide use in many applications. Other metrics such as the mean symmetric surface distance and the Hausdorff distance were also nonconvex besides being too complex for an iterative optimization process [1]. In addition, each discrete-valued metric was a set function mapping from a set of mispredictions to a set of real numbers. However, among them, only the set function of the JD was submodular. This allowed finding a convex closure of the JD in polynomial time. This convex closure was a **convex continuous** relaxed (real-valued) surrogate taking nonnegative real-valued mispredictions as inputs. Another
metric of these properties was the Hamming distance. The convex closure of the JD got derived according to the smooth convex Lovasz extension of submodular set functions [11, 12]. The JD was defined as
\[\text{Jaccard distance (JD)}=1-\text{JI}=\frac{\left|(\mathbb{V}_{\text{prd}}\cup\mathbb{V}_{\text{ref}})\setminus(\mathbb{V}_{\text{prd}}\cap\mathbb{V}_{\text{ref}})\right|}{\left|\mathbb{V}_{\text{prd}}\cup\mathbb{V}_{\text{ref}}\right|}=\frac{\left|\mathbb{V}_{\text{prd}}\setminus\mathbb{V}_{\text{ref}}\right|+\left|\mathbb{V}_{\text{ref}}\setminus\mathbb{V}_{\text{prd}}\right|}{\left|\mathbb{V}_{\text{prd}}\cup\mathbb{V}_{\text{ref}}\right|}\,. \tag{17}\]
Based on this definition, the set function of the JD for the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\text{train}}\) and the class \(c\in\mathbb{L}\) in the iteration \(i\in\{1,\cdots,n_{\text{it}}\}\) was
\[\text{JD}:\quad\mathbb{M}^{(i)}_{b,c}\in\{0,1\}^{|\mathbb{T}_{b}|} \longmapsto\frac{\operatorname{nnz}\!\left(\mathbb{M}^{(i)}_{b,c}\right)}{ \operatorname{nnz}\!\left(\{l_{b,j,c}=1\}\cup\{\tilde{l}^{(i)}_{b,j,c}=1\} \right)}\in\mathbb{R} \tag{18a}\] \[\text{with}\ \ \tilde{l}^{(i)}_{b,j,c}=\begin{cases}1&\text{if }c= \operatorname{arg\,max}_{k}\{\tilde{p}^{(i)}_{b,j,k}\}\\ 0&\text{otherwise}\end{cases}\quad\text{forming}\ \ \tilde{l}^{(i)}_{b,j}=[\tilde{l}^{(i)}_{b,j,c}]_{c \in\mathbb{L}}\] (18b) \[\text{and}\ \ \mathbb{M}^{(i)}_{b,c}=\left[\left\{l_{b,j,c}=1, \tilde{l}^{(i)}_{b,j,c}\neq 1\right\}\cup\left\{l_{b,j,c}\neq 1,\tilde{l}^{(i)}_{b,j,c}=1\right\}\right]\in\{0,1\}^{|\mathbb{T}_{b}|} \tag{18c}\]
being the set of mispredictions defined over the discrete hypercube \(\{0,1\}^{|\mathbb{T}_{b}|}\). Also, \(\operatorname{nnz}\!\left(\mathbb{M}^{(i)}_{b,c}\right)\) was the number of nonzero elements of the binary set \(\mathbb{M}^{(i)}_{b,c}\). To form the convex continuous surrogate of the JD, first \(\mathbb{M}^{(i)}_{b,c}\in\{0,1\}^{|\mathbb{T}_{b}|}\) should be replaced by a nonnegative real-valued misprediction vector \(\mathbf{m}^{(i)}_{b,c}=\left[m^{(i)}_{b,j,c}\right]_{j}\in\mathbb{R}^{| \mathbb{T}_{b}|}_{\geq 0}\). Then, the surrogate should be found in \(\mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\). This search was NP-hard unless the JD was submodular. According to Proposition 11 in [13], the set function \(\text{JD}:\{0,1\}^{|\mathbb{T}_{b}|}\longmapsto\mathbb{R}\) was submodular. That is,
\[\forall\mathbb{M}_{1},\mathbb{M}_{2}\in\{0,1\}^{|\mathbb{T}_{b}|}:\quad\text{ JD}\!\left(\mathbb{M}_{1}\right)+\text{JD}\!\left(\mathbb{M}_{2}\right)\geq \text{JD}\!\left(\mathbb{M}_{1}\cup\mathbb{M}_{2}\right)+\text{JD}\!\left( \mathbb{M}_{1}\cap\mathbb{M}_{2}\right). \tag{19}\]
Under this condition, the convex closure of \(\text{JD}:\{0,1\}^{|\mathbb{T}_{b}|}\longmapsto\mathbb{R}\) in \(\mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\) was tight and continuous and could be computed in a polynomial time. This convex closure was called the Lovasz extension and was given in [14, 15] as
\[\begin{split}\overline{\text{JD}}:\ \ \mathbf{m}^{(i)}_{b,c}\in \mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\longmapsto\left[\frac{1}{|\mathbb{T}_{b}|} \sum_{j\in\mathbb{T}_{b}}m^{(i)}_{b,j,c}\cdot g_{j}\big{(}\mathbf{m}^{(i)}_{b, c}\big{)}\right]\in\mathbb{R}\\ \text{with}\ \ g_{j}\big{(}\mathbf{m}^{(i)}_{b,c}\big{)}=\text{JD}\! \left(\{u_{1},\cdots,u_{j}\}\right)-\text{JD}\!\left(\{u_{1},\cdots,u_{j-1}\} \right)\end{split} \tag{20}\]
being the \(j^{\text{th}}\) element of the gradient \(\mathbf{g}\big{(}\mathbf{m}^{(i)}_{b,c}\big{)}\) and \(\{u_{1},\cdots,u_{|\mathbb{T}_{b}|}\}\) denoting a permutation of the elements of \(\mathbf{m}^{(i)}_{b,c}=\left[m^{(i)}_{b,j,c}\right]_{j}\) in descending order, i.e. \(\left[\mathbf{m}^{(i)}_{b,c}\right]_{u_{1}}\geq\cdots\geq\left[\mathbf{m}^{(i) }_{b,c}\right]_{u_{|\mathbb{T}_{b}|}}\). Thus, the \(\overline{\text{JD}}\!\left(\mathbf{m}^{(i)}_{b,c}\right)\) was a weighted average of the elements of the misprediction vector \(\mathbf{m}^{(i)}_{b,c}\in\mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\) with the weights being the elements of the first derivative (gradient) of \(\overline{\text{JD}}\) with respect to \(\mathbf{m}^{(i)}_{b,c}\in\mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\). This way, the Lovasz extension \(\overline{\text{JD}}\) interpolated JD in \(\mathbb{R}^{|\mathbb{T}_{b}|}_{\geq 0}\setminus\{0,1\}^{|\mathbb{T}_{b}|}\) while having the same values as JD on \(\{0,1\}^{|\mathbb{T}_{b}|}\)[11, 11].
For a binary classification, the misprediction vector \(\mathbf{m}^{(i)}_{b,c}=\left[m^{(i)}_{b,j,c}\right]_{j}\in\mathbb{R}^{| \mathbb{T}_{b}|}_{\geq 0}\) was given by \(m^{(i)}_{b,j,c}=\max\!\left[(1-z^{(i)}_{b,j,c}\cdot l_{b,j,c}),\ 0\right]\) with \(\mathbf{z}^{(i)}_{b,j}=\left[z^{(i)}_{b,j,c}\right]_{c\in\mathbb{L}}\) being the network's outputs (before the softmax function) at the \(i^{\text{th}}\) iteration for the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\text{train}}\). This misprediction vector resulted in a convex piecewise linear surrogate called the Lovasz hinge loss [13].
For a multiclass classification, the misprediction vector \(\mathbf{m}^{(i)}_{b,c}=\left[m^{(i)}_{b,j,c}\right]_{j}\in\mathbb{R}^{| \mathbb{T}_{b}|}_{\geq 0}\) was formed from the classification posteriors \(\hat{\mathbf{p}}^{(i)}_{b,j}=\left[\hat{p}^{(i)}_{b,j,c}\right]_{c\in\mathbb{L}}\) produced by the softmax function in (7). This misprediction vector resulted in a convex continuous surrogate with regard to the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\text{train}}\) and the class \(c\in\mathbb{L}\) in the iteration \(i\in\{1,\cdots,n_{\text{it}}\}\). Thus, for the classification over \(n_{\text{clas}}=|\mathbb{L}|\) classes, the overall loss was an average of these class-specific surrogates. This
overall loss was called the Lovasz-Softmax loss and was given in [Berman 2018] as
\[\mathcal{L}_{\mathrm{LS}}(\mathbf{\hat{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{c\in\mathbb{L}}\sum_{j\in\mathbb{T}_{b}}m_{b,j,c}^{(i)}\cdot g_{j}\big{(}\mathbf{m}_{b,c}^{(i)}\big{)} \tag{21}\] \[\text{with}\ \ \mathbf{m}_{b,c}^{(i)}=[m_{b,j,c}^{(i)}]_{j}\in\mathbb{R}_{\geq 0}^{|\mathbb{T}_{b}|}\ \ \text{and}\ \ m_{b,j,c}^{(i)}=\begin{cases}1-\hat{p}_{b,j,c}^{(i)}&\text{if }c=l_{b,j}\\ \hat{p}_{b,j,c}^{(i)}&\text{otherwise}\end{cases}\in(0,1).\]
The computation of the Lovasz extension \(\overline{\mathrm{JD}}\) in (20) implied to sort the elements of \(\mathbf{m}_{b,c}^{(i)}=[m_{b,j,c}^{(i)}]_{j}\in\mathbb{R}_{\geq 0}^{|\mathbb{T}_{b }|}\) and to call the JD with the permutation order. The sort had a complexity of \(\mathcal{O}\big{(}|\mathbb{T}_{b}|\cdot\log(|\mathbb{T}_{b}|)\big{)}\) and the call had a complexity of \(\mathcal{O}(|\mathbb{T}_{b}|)\). However, by keeping a track of the cumulative number of false positive and false negative mispredictions, the complexity of the call could be amortized to \(\mathcal{O}(1)\). That is, in each iteration, instead of computing the gradient from scratch only the gradient got updated. In this case, the overall complexity of computing (20) become \(\mathcal{O}\big{(}|\mathbb{T}_{b}|\cdot\log(|\mathbb{T}_{b}|)\big{)}\). The procedure of computing the gradient of the Lovasz-Softmax loss in (21) was given by Algorithm 1 in [Berman 2018].
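The sketch below follows Algorithm 1 in [Berman 2018] for a single class: the mispredictions get sorted in descending order, the cumulative intersection and union get tracked, and the discrete derivative of the resulting JD values yields the gradient \(\mathbf{g}\big{(}\mathbf{m}_{b,c}^{(i)}\big{)}\) of Eq. (20):

```python
import numpy as np

def lovasz_grad(errors, labels):
    """Gradient of the Lovasz extension of the JD for one class.

    errors: nonnegative misprediction vector m, labels: binary reference mask.
    """
    order = np.argsort(errors)[::-1]            # descending permutation u_1, ..., u_n
    gt_sorted = labels[order].astype(float)
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)   # intersection after each prefix
    union = gts + np.cumsum(1.0 - gt_sorted)    # union after each prefix
    jaccard = 1.0 - intersection / union        # JD values along the permutation
    jaccard[1:] = jaccard[1:] - jaccard[:-1]    # discrete derivative = gradient
    grad = np.empty_like(jaccard)
    grad[order] = jaccard                       # undo the permutation
    return grad

# Per-class surrogate of Eq. (21): np.dot(errors, lovasz_grad(errors, labels))
```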
The convexity and the differentiability of the Lovasz-Softmax loss in (21) allowed its use as an objective function for optimizing a discriminative neural network classifier by a gradient descent optimizer with backpropagation. Also, the operations involved in its computation were differentiable and implementable on graphics processing units (GPUs).
### Baseline Architecture
Each convolutional layer of a neural network could extract features of a certain resolution while being capable of downsampling or reducing the spatial resolution by using an appropriate stride. This allowed learning hierarchical (multiresolution) features by cascading multiple convolutional layers. The opposite of a convolutional layer was a transposed convolutional or a deconvolutional layer with a similar feature learning capability but an inherent upsampling, i.e. an increase of the spatial resolution. By following the convolutional layers with the deconvolutional layers, an **encoder-decoder** architecture was obtained. The encoder was a downsampler, a compressor, or a contractor performing **analysis**. The decoder was an upsampler, a decompressor, or an expander performing **synthesis**. Each encoder/decoder was composed of multiple stages. Each **stage** processed features of a certain resolution through one or more convolutional/deconvolutional layers and then downsampled/upsampled its newly computed features to the next resolution. To avoid loss of information due to the downsampling, in each encoder stage, the number of the newly computed features got multiplied by the downsampling rate. Conversely, in each decoder stage, the number of the newly computed features got divided by the upsampling rate.
A widely used neural network of such an encoder-decoder architecture was the U-net. As the inputs passed through its encoder stages, the progressively expanding receptive fields of its convolutional layers increased the abstraction and the context of its extracted features. Thus, at the end of the encoder or bottom of the **U**, features of minimum resolution but maximum abstraction and context were obtained. The spatial resolution of these features got reconstructed by passing them through the deconvolutional layers of the decoder stages and combining them with original higher resolution features. The original features were directly obtained from the corresponding encoder stage through a skip connection. That is, features extracted by each encoder stage got forwarded to the corresponding decoder stage to compensate for the information loss due to the downsampling. This feature forwarding could enhance the delineation of boundaries between different classes and speed up the convergence of the optimization. At the end of the decoder, the resulting feature maps had a resolution and size like the input of the network. A weighted average of these feature maps combined them into the desired number of classes. This was done by passing them through a convolutional layer of \(1\times 1\times 1\) kernel size, \(0\) padding, and stride of \(1\) in each dimension. As given by (7), the resulting network's outputs got then passed through a softmax function to produce the estimated classification posteriors for the samples [Ronneberger 2015].

Figure 2: Downsampling (left) and upsampling (right) in the V-net.
The downsampling and the upsampling of the U-net made it a hierarchical architecture capable of capturing, analyzing, and synthesizing features at different spatial resolutions. This way, the U-net could automatically extract local and contextual patterns. The local patterns got captured by the shallower layers and the contextual patterns by the deeper layers of a larger receptive field. At the end, the decoder synthesized (gathered and assembled) the local (high resolution) and the contextual (low resolution) features into the final classification. These enabled a localization as well as an accurate classification in any domain of any size and thus made the U-net a breakthrough for end-to-end optimizations. Moreover, making all the operations of the U-net 3D allowed applying it to 3D volumetric domains. The 3D U-net got enhanced by making its encoder stages **residual**. That is, the input of each encoder stage got added to its output. This could mitigate vanishing gradients and speed up the convergence of the optimization [He 2016a]. In addition, the 3D U-net could learn 3D volumetric structures out of sparsely annotated 2D slices. This allowed using it in a semi-automated annotation process as well as in a fully automated 3D detection [Cicek 2016, Rakhlin 2018].
In the 3D U-net, each downsampling/upsampling had a factor of 2 and was done through a max-pooling/unpooling over a \(2\times 2\times 2\) kernel with a stride of \(2\) in each dimension. Also, each convolutional layer applied \(0\) padding. Thus, the valid part of each feature map at the output of each convolutional layer had a smaller size than its input feature map. In addition, the 3D U-net learned the residual functions only in its encoder stages. In a so-called V-net, the 3D U-net became **fully convolutional** by applying each downsampling/upsampling through a convolutional/deconvolutional layer of a kernel size of \(2\times 2\times 2\), a 0 padding, and a stride of \(2\) in each dimension. To avoid loss of information, each downsampling doubled the number of feature maps. Conversely, each upsampling halved the number of feature maps. Figure 2 shows the downsampling and the upsampling in the V-net.
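In a modern framework, the resolution transitions of Figure 2 amount to strided 3D (de)convolutions. The following PyTorch sketch is illustrative, not the original implementation:

```python
import torch.nn as nn

def down_conv(channels):
    """Halve each spatial dimension and double the feature maps."""
    return nn.Conv3d(channels, 2 * channels, kernel_size=2, stride=2)

def up_conv(channels):
    """Double each spatial dimension and halve the feature maps."""
    return nn.ConvTranspose3d(channels, channels // 2, kernel_size=2, stride=2)
```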
In contrast to the max-pooling/unpooling operations, the convolution/deconvolution-based downsampling/upsampling was reversible and differentiable. These allowed to backpropagate each downsampling/upsampling without needing to store its inputs per sample and iteration. This way, the memory footprint of the V-net became much smaller than that of the 3D U-net while the analysis and comprehension of its internal process got simplified. Moreover, each convolution of the V-net applied an appropriate padding to make the feature maps at its output of the same size as its input. Furthermore, the V-net learned the residual functions not only in the encoder stages but also in the decoder stages. This further boosted its performance and sped up its optimization [Milletari 2016]. This way, the 3D U-net or the V-net got widely used in many applications [Rakhlin 2018, Li 2022]. Accordingly, we resorted to an end-to-end optimization of the 3D fully convolutional and residual V-net for our implementations and evaluations. For this, we tailored the number and
\begin{table}
\begin{tabular}{c|c c|c}
**Stage** & \multicolumn{2}{c|}{**Receptive Field**} & **Size of Feature Maps** \\ \hline
\(1\) & \(5\times 5\times 5\) & \(551\times 551\times 551\) & \(128\times 352\times 256\) \\
\(2\) & \(22\times 22\times 22\) & \(546\times 546\times 546\) & \(64\times 176\times 128\) \\
\(3\) & \(72\times 72\times 72\) & \(528\times 528\times 528\) & \(32\times 88\times 64\) \\
\(4\) & \(172\times 172\times 172\) & \(476\times 476\times 476\) & \(16\times 44\times 32\) \\
\(5\) & \(372\times 372\times 372\) & \(372\times 372\times 372\) & \(8\times 22\times 16\) \\
\end{tabular}
\end{table}
Table 1: The receptive fields and the sizes of the feature maps at different stages of the V-net.
the sizes of the feature maps and the kernels of the convolutional/deconvolutional layers to our volumetric fat-water images. Also, through the network, we processed the data in an N\(\times\)D\(\times\)H\(\times\)W\(\times\)C format with N=\(|\mathbb{T}_{b}|\) being the number of the volumetric fat-water images in each batch, C being the number of the feature maps, D being the depth, H being the height, and W being the width of each feature map. We trained (optimized) the V-net by using a mini-batch-based gradient descent optimizer with backpropagation and a sufficiently large input volume to capture as much contextual information as possible. Due to the memory limitations of the used GPU, we could only include 2 volumetric fat-water images in each batch. Moreover, each volumetric fat-water image had 2 channels containing its voxelwise fat and water intensities. Accordingly, at the input of the network, N\(\times\)D\(\times\)H\(\times\)W\(\times\)C=\(2\times 128\times 352\times 256\times 2\).
Each encoder/decoder stage of the V-net extracted and learned features of a certain spatial resolution by using one to three 3D (volumetric) convolutional/deconvolutional layers. In our case, each of these layers had a kernel size of \(5\times 5\times 5\), a padding of \(2\), and a stride of \(1\) in each dimension. Also, regarding the size of our images and the sizes of the addressed objects (tissues) in our segmentations, we found **5 stages (resolution levels)** to be sufficient for our hierarchical feature learning. Table 1 shows the receptive fields and the sizes of the feature maps at different stages. As can be seen, the innermost (deepest) stage of the network could already capture the entire context of the input volume. This allowed to perceive the whole anatomy of interest and ensured access to enough contextual information for reliably classifying each voxel at the output of the neural network classifier.
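As a sanity check, the encoder column of Table 1 can be reproduced with simple receptive-field arithmetic; in the sketch below, the per-stage convolution counts of 1, 2, 3, 3, and 3 are our assumption, inferred from matching the table under the stated one to three convolutions per stage.

```python
# Each 5x5x5 convolution grows the receptive field by 4 * jump, and each
# 2x2x2 stride-2 downsampling grows it by 1 * jump, doubles the jump,
# and halves the feature-map size.
convs_per_stage = [1, 2, 3, 3, 3]  # assumed counts matching Table 1
rf, jump, size = 1, 1, (128, 352, 256)
for stage, n_convs in enumerate(convs_per_stage, start=1):
    if stage > 1:                    # stride-2 downsampling convolution
        rf += 1 * jump
        jump *= 2
        size = tuple(s // 2 for s in size)
    rf += n_convs * 4 * jump         # 5x5x5 convolutions with padding 2
    print(stage, rf, size)
# prints 1 5 (128, 352, 256) ... 5 372 (8, 22, 16), matching Table 1
```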
Besides the convolutional/deconvolutional layers, each **residual** encoder/decoder stage normalized its feature maps and applied nonlinearities to them. Like the original V-net, we used a parametric rectified linear unit (PReLU) with a parameter \(a_{\mathrm{prelu}}\in\mathbb{R}_{\geq 0}\) for each nonlinear activation. The parameter \(a_{\mathrm{prelu}}\in\mathbb{R}_{\geq 0}\) controlled the outputs for negative inputs and thus was called the coefficient of leakage. It got optimized along with the main parameters (weights and biases) of the network. The normalization of the feature maps decoupled the lengths of the network's gradients from their directions. This could accelerate the convergence of the optimizations and thus allowed higher learning rates. It could also stabilize the optimizations by mitigating the internal covariate shift1, enhancing the robustness against the initializations, and smoothing the objective function. Moreover, it could penalize large network's weights and thereby reduce the overfitting or improve the generalization.
Footnote 1: changes of stochastic distributions of the inputs of each layer of the network due to the changes of the parameters of the previous layers
We modified the V-net by changing the type of the normalization from batch normalization [10] to instance (contrast) normalization [10]. The commonly used batch normalization was based on mini-batch statistics. That is, during the training, the mean and the variance of each feature map of each batch got learned across all the dimensions (D, H, W) and all the N members of the batch to normalize (remove bias and scale of) the corresponding feature map in the evaluation phase. The instance normalization took a similar approach. However, it computed the mean and the variance of each feature map of each batch only across the dimensions (D, H, W). In case of having a small
Figure 3: Different normalization techniques applied to a feature map of size N\(\times\)D\(\times\)H\(\times\)W\(\times\)C with N denoting the number of batches, C denoting the number of channels, and D\(\times\)H\(\times\)W denoting the spatial dimensions. In each case, the blue voxels got normalized by the same mean and variance aggregated across them.
batch size, as in our case, the exponential moving averages of the mean and the variance of each feature map of each batch had strong fluctuations across the training iterations. This was due to the poor statistical power of the small batch and thereby made the batch normalization ineffective. In this case, the instance normalization was more effective and consistent [16]. Other varieties of the normalization were the layer and the group normalization [20]. Figure 3 shows their differences to the batch and the instance normalization.
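The distinction boils down to the axes over which the statistics are aggregated; a small sketch, assuming an N\(\times\)C\(\times\)D\(\times\)H\(\times\)W tensor layout:

```python
import torch

# Batch normalization aggregates the mean/variance over (N, D, H, W) per
# channel; instance normalization aggregates only over (D, H, W) per sample
# and channel, so it does not depend on the (possibly tiny) batch.
x = torch.randn(2, 4, 8, 8, 8)                    # N=2, C=4, D=H=W=8
bn_mean = x.mean(dim=(0, 2, 3, 4), keepdim=True)  # shape (1, 4, 1, 1, 1)
in_mean = x.mean(dim=(2, 3, 4), keepdim=True)     # shape (2, 4, 1, 1, 1)
in_var = x.var(dim=(2, 3, 4), unbiased=False, keepdim=True)
x_in = (x - in_mean) / torch.sqrt(in_var + 1e-5)  # instance-normalized maps
```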
We also modified the V-net by changing the order of operations in each **residual** encoder/decoder stage. Instead of the convention of applying the normalization between the convolution/deconvolution and the nonlinear activation, as suggested in [14], we applied a full preactivation normalization and removed the after-addition activation. Figure 4 compares the new and the original orders of the operations of a residual encoder/decoder stage comprising 2 convolutional/deconvolutional layers. The advantage of the new order was that it kept the skip connection of each stage a real identity mapping. This enabled a direct and clean propagation of signals from one stage to another stage in both forward and backward directions. Other kinds of skip connections which involved a sort of scaling (like the Dropout), gating, or convolution/deconvolution on the signal path could hamper a clean propagation of the information and thus lead to optimization problems. Moreover, the new order could improve the generalization of the network's model by reducing its overfitting. That is, it increased the error on seen (training) samples but reduced the error on unseen (validation or test) samples. Furthermore, in the original order, the addition of the shortcut to the normalized signal made the overall signal at the input of the last nonlinear activation unnormalized. However, in the new order, the signal at the input of each nonlinear activation was normalized. Figure 5 shows the described V-net architecture.
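A sketch of such a full-preactivation residual stage, with illustrative layer counts and channel sizes:

```python
import torch
import torch.nn as nn

class PreActResidualStage(nn.Module):
    """Sketch of the modified residual stage: each convolution is preceded
    by instance normalization and a PReLU (full preactivation), and the skip
    connection stays an identity with no activation after the addition."""
    def __init__(self, channels: int, n_convs: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.InstanceNorm3d(channels),
                nn.PReLU(channels),
                nn.Conv3d(channels, channels, kernel_size=5, padding=2, stride=1),
            )
            for _ in range(n_convs)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x
        for block in self.blocks:
            y = block(y)
        return x + y  # identity skip: no activation after the addition
```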
To mitigate **overfitting** and the **imbalanced class-sample distribution** of the training samples, **attention mechanisms** got proposed. These methods aimed to focus the attention of the network's parameters on important (foreground) minority classes. This attention could reduce the training samples to an **effective subset** of a lower unbalancedness than the original set. It could also prune redundant or irrelevant network parameters by suppressing feature activations in irrelevant regions of the classification domain. These in turn reduced the overfitting and sped up the convergence of the network's optimization. The attention could be stimulated by **incorporating priors** into the optimization process and/or **modifying the network's architecture**. Neither the cross entropy-based nor the metric-based losses, defined in subsection 1.3, could accommodate the priors of the samples. Consequently, the attention mechanisms were restricted to architectural modifications.
Trainable (optimizable) attention mechanisms were categorized as **hard** or **soft**. The hard attention mechanisms iteratively cropped a region of interest through a Monte Carlo sampling optimized by a reinforcement learning. These
Figure 4: A residual encoder/decoder stage comprising 2 convolutional/deconvolutional layers (a) with the new (b) and the original order (c) of the operations.
sampling-based updates were non-differentiable and thus hard to optimize. The soft attention mechanisms involved a differentiable model composed of real-valued parameters. Thus, they could be optimized through a gradient descent optimizer with backpropagation. The output of the soft attention model for each feature map was a probabilistic map
Figure 5: Schematic of the 3D fully convolutional and residual V-net with the encoder and the decoder stages on its left and right side, respectively.
called **attention map**. In an additive or a multiplicative attention mechanism this map got computed by adding or multiplying the filtered feature map(s) by a filtered gating map, respectively. If the attention map was computed by a convolutional neural network (CNN), then each filter was a convolutional layer. The attention mechanism turned into a self-attention if the gating maps were produced internally. The elementwise multiplication or addition of each attention map with its corresponding feature map highlighted salient features for the classification. This enabled an attention-based **feature pooling or pruning**. If the gating maps brought contextual information, then the feature pooling was with regard to the contextual dependencies of the features. Besides mitigating the overfitting and the imbalanced class-sample distribution of the training samples, the attention-based feature pooling could enhance the sensitivity, the prediction accuracy, and the robustness of the neural network classifier. A commonly used architecture for soft attention was a region proposing feed-forward CNN. A bottleneck of this approach was its excessive and redundant use of the model's parameters and features. This could increase the overall optimization overhead and the overfitting before the convergence of the optimization could realize any attention for a possible reduction of the network's parameters [20].
As mentioned earlier, the U-net and the V-net were capable of extracting (analyzing) and reconstructing (synthesizing) multiresolution (multiscale) features. This was done by extracting coarser features through downsampling the feature maps across the encoder stages and then reconstructing finer (higher resolution) features across the decoder stages. To this end, the receptive field at the coarsest resolution was to be large enough to capture all the contextual information highlighting the overall category and location of the foreground classes. After the localization, the finer (higher resolution) features delineated boundaries between different classes more precisely. These altogether allowed to capture large shape and size variations in the classification domain and thus improved the classification accuracy.
The reconstruction of the finer (higher resolution) features in each decoder stage was with the help of the features extracted by the corresponding encoder stage at the same spatial resolution. This feature forwarding reduced redundant and repeated computation of the features and thus enhanced efficiency in the usage of the computational power and memory. The plain skip connection of the feature forwarding path could be replaced by an **attention gate** realizing an **attention-based feature pooling**. This pooling suppressed redundant features right before the concatenation of the original features with the reconstructed features. This way, it could suppress irrelevant regions in the classification domain by pruning redundant perceptrons of the network. This in turn reduced the overfitting of the network and the unbalancedness of the samples' distribution seen at the time of its training (optimization). Furthermore, the computational overhead of such an attention gate was much lower than the region proposing CNN. This and the reduction of the network's parameters could reduce the computational complexity of the optimizations and speed up their convergence [20].
A promising self-attention mechanism for integration into each feature forwarding path of the U-net or the V-net was a grid-based gating module. In this approach, each gating map was not fixed across the elements of its corresponding feature maps for which the attention maps were to be computed. Instead, it was a feature map of a lower (coarser) resolution already generated by the network itself. This way, the resulting attention maps were grid-based (i.e. variable across the elements of the feature maps) and could thus highlight salient features with respect to local patterns. The gating based on the feature maps of a lower (coarser) resolution allowed to consider a bigger context in the feature pooling and thereby disambiguated irrelevant and noisy features. Moreover, the grid-based gating module eliminated the need to an external explicit region proposing CNN by implicitly proposing soft (probabilistic) map of the target structures on the fly. This attention mechanism could be trained from scratch to focus on the target structures of varying shapes and sizes without additional supervision. Its filters (linear transformations) downweighted the gradients from irrelevant regions and could thus be implemented through convolutional layers filtering the network's activations in both forward and backward passes [20].
In [20, 17, 16], to reduce the number of the parameters and the computational complexity of the attention gates, each filter was a convolutional layer of \(0\) padding and \(1\times 1\times 1\) kernel size, i.e. without any spatial support. To downsample the input feature maps of each attention gate to the resolution of its gating maps, the convolutional filters of the feature maps had a stride of \(2\) in each dimension. Moreover, each attention gate handled a
binary classification and thus computed a common attention map for all the feature maps at its input. To this end, the downsampling convolutional filters of the feature maps linearly transformed them to an intermediate number of feature maps denoted by C'. Also, the convolutional filters of the gating maps linearly transformed them to C' intermediate maps. The intermediate feature/gating maps were to be more semantically discriminative than the original feature/gating maps in localizing the target structures. Thus, the number C' was a resolution-specific hyperparameter and needed to be optimized for each attention gate separately. Then, according to an **additive attention** mechanism, the intermediate downsampled feature maps got added to the intermediate gating maps and then passed through a nonlinear rectified linear unit (ReLU), a \(1\times 1\times 1\) convolutional layer of \(0\) padding and a stride of \(1\), and a nonlinear Sigmoid layer to form the attention map for all the input feature maps. This attention map had a lower resolution than the input feature maps and thus was upsampled by a grid-based trilinear interpolation to the same resolution as the input feature maps. In comparison to a multiplicative attention, the additive attention was more computationally demanding but more effective in enhancing the classification accuracy.
To handle a multiclass classification over \(n_{\mathrm{class}}=|\mathbb{L}|\) classes, we modified the aforementioned gating module by replacing the nonlinear Sigmoid function with a nonlinear Softmax function. Also, after the ReLU operation, the \(1\times 1\times 1\) convolutional layer did not map the outputs of the ReLU to one channel but rather to the number of feature maps at the input of the gating module. That is, instead of computing one common attention map for all the input feature maps, we computed an attention map for each feature map separately and independently from other feature maps. Furthermore, to simplify the network's optimization we eliminated the resolution-specific hyperparameter C' defining the number of the intermediate feature/gating maps. To this end, the \(1\times 1\times 1\) convolutional layer directly applied to the input feature maps transferred them to the number of channels already existing in the input gating maps. This in turn eliminated the \(1\times 1\times 1\) convolutional layer directly applied to the input gating maps and thus further simplified the architecture of the gating module. Figure 6 compares the original gating module with our proposed one and Figure 7 shows the V-net architecture with such a gating module in each of its feature forwarding paths.
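A sketch of the modified gating module; the class and filter names are ours, and the choice of the softmax axis (over the feature-map channels) is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridAttentionGate3D(nn.Module):
    """Sketch of the proposed gating module: a stride-2 1x1x1 convolution
    maps the input feature maps directly to the channel count of the gating
    maps (no intermediate C'), and a softmax yields one attention map per
    input feature map."""
    def __init__(self, feat_channels: int, gate_channels: int):
        super().__init__()
        self.theta = nn.Conv3d(feat_channels, gate_channels,
                               kernel_size=1, stride=2, padding=0)
        self.psi = nn.Conv3d(gate_channels, feat_channels,
                             kernel_size=1, stride=1, padding=0)

    def forward(self, feats: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
        q = F.relu(self.theta(feats) + gates)           # additive attention
        att = torch.softmax(self.psi(q), dim=1)         # one map per feature map
        att = F.interpolate(att, size=feats.shape[2:],  # grid-based trilinear
                            mode='trilinear', align_corners=False)
        return feats * att                              # attention-pooled features

# Example: encoder features at one resolution, gating maps from the coarser stage.
gate = GridAttentionGate3D(feat_channels=16, gate_channels=32)
feats = torch.randn(1, 16, 8, 8, 8)
gates = torch.randn(1, 32, 4, 4, 4)
pooled = gate(feats, gates)  # same shape as feats: (1, 16, 8, 8, 8)
```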
To reduce the overfitting of the baseline architectures to the seen (training) samples and thereby improve the generalization (predictive performance on unseen samples), we applied Dropout to every perceptron (node) of these architectures. This technique had a common root with a Bayesian neural network which, as described in subsection 1.2, was an ensemble of plain neural networks. In the training (optimization) phase, the Dropout dropped some of the perceptrons (nodes) of the network by vanishing their incoming and outgoing weights. The keep (retention) probability of each perceptron (node) was the occurrence probability of a Bernoulli distributed random variable. This probability was a
Figure 6: Schematic of the original (**upper row**) and the proposed (**lower row**) grid-based gating module with \(\underline{\mathbf{F}}\) denoting the tensor of the input feature maps, \(\underline{\mathbf{G}}\) denoting the tensor of the gating maps, \(\underline{\mathbf{A}}\) denoting the tensor of the attention maps, \(\underline{\mathbf{F}}^{\prime}\) denoting the tensor of the output feature maps, and each blue box depicting a convolutional layer.
tunable hyperparameter indicating the confidence (inverse of the variance) of the node's estimations. We considered a common retention probability for all the perceptrons (nodes) of each encoder/decoder stage of the baseline architectures. For the \(s^{\text{th}}\) encoder/decoder stage, this probability was denoted by \(p_{s}\in[0,1]\). In the test phase, all the perceptrons (nodes) of the network were kept. However, the outgoing weights of each node got multiplied by its retention probability (\(p_{s}\)). Each \(p_{s}\) was thus a hyperparameter which got
optimized during the hyperparameter optimization. The Dropout was shown to be superior to other regularization techniques such as the weight decay which penalized the weights of large \(l_{2}\) norms. This superiority came at the cost of a higher number of iterations for convergence of the optimizations (Srivastava 2014; Gal 2015; Jospin 2022).
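The train/test asymmetry can be sketched in a few lines (a toy illustration on an activation tensor rather than on the weights themselves):

```python
import torch

# Sketch of the described Dropout: in training, each node is kept with a
# stage-wise Bernoulli probability p_s; at test time all nodes are kept and
# their outgoing contributions are scaled by p_s instead.
p_s = 0.8
x = torch.randn(4, 16)

mask = torch.bernoulli(torch.full_like(x, p_s))  # training: random keep/drop
x_train = x * mask

x_test = x * p_s                                 # test: deterministic scaling
```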
## 2 Outline of Contributions
All the **metric-based** losses introduced in subsection 1.3 were independent of the class-sample distribution of the training samples and could thus enhance the generalization (predictive performance on unseen samples) of a neural network trained (optimized) with them. However, the metrics involved in those losses were binary classification metrics. This implied to decompose a multiclass classification into a series of one-vs-all classifications and then form its overall loss from an average of the one-vs-all losses. This was observable in the definition of the DICE loss in (16) and the Lovasz-Softmax loss in (21).
The averaging across the classes could naturally lead to a bias towards the dominant classes, i.e. classes of more samples. This bias could not be mitigated by a weighting mechanism such as the ones incorporated in the distribution-based losses introduced in subsection 1.3. The reason was that such a weighting could diminish the penalty on false positive mispredictions on the dominant classes and could thus mislead the optimization. Moreover, if a class was absent in both the reference labels and the predicted labels, then \(\text{DICE}=\text{JI}=1\) and \(\text{JD}=0\).
All the **distribution-based** losses introduced in subsection 1.3 were based on the cross entropy and had a common root with the variational free energy (VFE) of a retrospective active inference. These losses fitted the network's model to the class-sample distribution of the training samples and could thus compromise the network's generalization when the distribution of unseen (validation or test) samples differed from the distribution of the seen (training) samples. However, as described in subsection 1.3, these losses could reduce the classification biases towards the dominant classes by weighting each class's term with regard to its number of samples or importance. In spite of this capability, there existed no optimal weighting which could be incorporated into the cross entropy-based losses to make them equivalent to any of the metric-based losses. Thus, to benefit from the advantages of the cross entropy-based and the metric-based losses while mitigating their drawbacks, a combination of them was used. Alternatively, to reduce the overfitting and thus to improve the generalization of the cross entropy-based losses, additional co-training with augmented training samples got conducted. Also, to reduce the classification biases towards the dominant classes, the false positive mispredictions of the network trained with the metric-based losses got post-corrected by using morphological operations (Isensee 2018; Bertels 2019; Jadon 2020; Chen 2022).
Despite some improvements, all the aforementioned schemes imposed extra overheads on the training or predictions of the neural networks. In addition, the augmentation of the training samples obtained from images was mostly done on the fly by applying gamma (luminance) modifications, mirroring, random scaling, random rotation, and random elastic deformation1 to the original images. These techniques could not be easily applied to medical images where pathological alterations should be differentiated from the augmentations. Moreover, none of the aforementioned schemes could completely mitigate the overfitting of a large network to a limited number of the training samples or the classification biases towards the dominant classes. Furthermore, none of the described losses could **incorporate priors** or **handle errors or uncertainties in the reference labels of the training samples** (Lo 2021). Errors in the reference labels of the training samples could arise from human errors in the manual annotations of the training samples and images or the errors induced by noise and artifacts. Uncertainties and ambiguities in the reference labels of the training samples could stem from similar features and textures of different classes. These similarities not only confused the manual annotators but also the neural network relying on those features and textures for learning boundaries between different classes.
Footnote 1: The elastic deformations were obtained from a B-spline interpolation over a grid of control points on a dense deformation field.
To mitigate the aforementioned bottlenecks, we proposed
1. a novel algorithm, based on the generalized (multinomial) Kelly criterion for optimal betting, to recompute the reference labels of the training samples by using their priors and the currently estimated classification posteriors on the network;
2. a novel objective function, based on the expected free energy (EFE) of a prospective active inference, with the capability of * incorporating prior probabilities of the training samples to focus the attention of the neural network on important but minority foreground classes and thereby reshape the effectively seen distribution for a reduction of the class-sample unbalancedness, the overfitting, and the classification biases towards the dominant classes; * representing the _precision_ and _recall_ metrics by its terms to enhance the robustness of the network's optimization against the class-sample unbalancedness;
3. a process to integrate the proposed algorithm and the proposed objective function into a mini-batch-based gradient descent optimizer with backpropagation.
The proposed algorithm for recomputing the reference labels was listed in Algorithm 1. This algorithm calculated a **set of candidate labels** for each training sample from its prior and currently estimated posterior probabilities on the network. This algorithm resulted from our reformulation of the generalized (multinomial) Kelly criterion for optimal betting on multiple horses in a horse race. This reformulation cast the generalized Kelly criterion into a multiclass classification problem by interpreting each training sample as a bettor, each class as a horse, and each iteration of the network's optimization as a horse race. Then, the classification prior of the training sample with regard to each class became the win probability of the corresponding horse. The classification posterior currently estimated by the network for the training sample with regard to the same class became the belief probability of the corresponding horse. The proposed sets of candidate labels got then plugged into the proposed objective function to form the current loss for an update (optimization) of the network's parameters in the current iteration. Thus, instead of a reference label, a set of candidate labels got considered for each training sample in each iteration.
This consideration allowed to mitigate the aforementioned uncertainties and ambiguities in the labels generated from manual annotations in the presence of noise, artifacts, and similar features or textures of different classes. In other words, the sets of candidate labels could handle possible overlaps between different classes and thus enhanced the reliability and the flexibility of the neural network's optimization. More specifically, these sets could help a gradient descent optimizer to escape from local optimums caused by the original reference labels. Moreover, if the reference labels of some training samples were missing, then their candidate labels could still be computed from their priors and posteriors. This **semi-supervised optimization** was of particular importance in the applications where the manual annotations of the reference labels were costly and cumbersome.
Our proposed Algorithm 1 for finding the candidate labels aimed to minimize the objective function of the generalized Kelly criterion. This minimized function was given by (36) and was indeed the **expected complexity** term of the EFE of a prospective active inference. That is, the objective function of the generalized Kelly criterion was a tight upper bound of the expected complexity of the EFE. The EFE was given by (4) and was composed of an **expected complexity** term plus an **uncertainty** term. As described in subsection 1.1, the minimization of the expected complexity was equivalent to the maximization of the reward. The reward maximization was also a goal of the Kelly criterion and could thus be partially fulfilled by finding the candidate labels through the proposed Algorithm 1.
More specifically, from the prior (win) and the posterior (belief) probabilities of each training sample (bettor), the generalized Kelly criterion computed optimal allocation fractions of the bettor's asset for betting on the candidate classes (horses)1. These allocation fractions maximized the geometric average of the growth rate of the bettor's asset or the **reward**. To further maximize the reward, the expected complexity of the EFE should be minimized further. This was doable by having enough information or maximizing the information gain, i.e. minimizing the **uncertainty** of the
EFE. Accordingly, to optimize a discriminative neural network classifier, we proposed a novel objective function based on the EFE of a prospective active inference. This function was given by (39) and was reversible and differentiable with respect to the outputs of every layer of the neural network. Thus, as described in subsection 1.2, it could be minimized by a gradient descent optimizer with backpropagation.
As explained in subsection 1.3, all the cross entropy-based losses were **distribution-based** and stemmed from the VFE given by (2) for a retrospective active inference. The VFE was **complexity** minus **accuracy**. The complexity reflected the overfitting of the neural network's model to the distribution of seen (training) samples and thus the variance of the predictions on unseen (validation or test) samples. The accuracy was inversely proportional to the bias (difference) of the predictions from their true values. Thus, the minimization of the VFE implied to minimize the complexity or the overfitting while maximizing the classification accuracy by minimizing the classification bias. This way, the VFE and the cross entropy-based losses addressed the bias-variance tradeoff of the classification problems without considering the unbalancedness of the class-sample distribution of the seen samples.
In contrast, the EFE given by (4) for a prospective active inference and thus our proposed objective function in (39) addressed the unbalancedness of the class-sample distribution of the seen (training) samples by representing the _precision_ and _recall_ metrics in their terms. The _precision_ and the _recall_ metrics were independent of the correct classification of unimportant majority samples (designated by true negatives) and instead focused on the correct classification of important minority samples (designated by true positives). This made them less sensitive than the other metrics to the imbalanced class-sample distributions and the classification biases towards the dominant classes.
As mentioned earlier, the minimization of the EFE or our proposed objective function implied to minimize the **expected complexity** and the **uncertainty**. The minimization of the expected complexity implied to maximize the reward and the reward was equivalent to the _recall_ (completeness or diversity). The minimization of the uncertainty implied to maximize the information gain or the _precision_ (exactness or confidence). This way, the EFE and our proposed objective function aimed to maximize the _precision_ and the _recall_ metrics. This allowed them to handle an imbalanced class-sample distribution while still being **distribution-based**[11, 12, 13].
Moreover, our proposed objective function could incorporate the prior probabilities of the training samples directly and indirectly. The indirect incorporation was through using the candidate classification labels computed from the priors and the posteriors of the training samples by the proposed Algorithm 1. This incorporation resulted in a grouping of the terms of the proposed objective function with regards to the candidate and noncandidate labels. More specifically, the priors or the posteriors of the noncandidate labels got summed together to form a collective prior or posterior for the noncandidate classes. This way, the noncandidate classes formed a collective class together and the neural network got enforced to find the boundary between each candidate class and the collective class of the noncandidates. In comparison to computing the boundaries between each pair of the classes, this grouping reduced the effective number of the classes and the boundaries needed to be computed. This in turn reduced the network's complexity and its overfitting to the seen (training) distribution and could thus enhance its generalization (predictive performance on unseen samples).
The direct incorporation of the prior probabilities of the training samples into the objective function of the network's optimization could focus the attention of the neural network on important but minority foreground classes. This could reshape the distribution effectively seen by the network during its optimization and could thereby reduce the class-sample unbalancedness, the overfitting, and the classification biases towards the dominant classes [12]. Similar effects could result from the architecture-based attention mechanisms described in subsection 1.4. That is, if no prior probabilities were provided, then **stronger posteriors** resulting from an **architecture-based attention mechanism** should help. In the baseline architecture described in subsection 1.4, an attention gate could be incorporated into each feature forwarding path between an encoder stage and its corresponding decoder stage. Without such a gate, the feature forwarding path was a plain skip connection.
Our proposed algorithm for finding the candidate labels and our proposed objective function for optimizing a discriminative neural network classifier got integrated into a mini-batch-based gradient descent optimizer with backpropagation
by using the process proposed in section 4. This process got evaluated against a similar process incorporating a representative of the cross entropy-based losses or a representative of the metric-based losses introduced in subsection 1.3. The representative of the cross entropy-based losses was the weighted focal loss. This loss comprised a modulating factor and a weighting mechanism to alleviate classification biases towards the dominant classes of the training samples. The representative of the metric-based losses was the Lovasz-Softmax loss. Besides being smooth and differentiable, to the best of our knowledge, this loss was the only convex loss among the metric-based losses.
Accordingly, the evaluated losses were
1. the proposed objective function given by (39)
2. the weighted focal loss given by (14)
3. the Lovasz-Softmax loss given by (21).
These evaluations were on an end-to-end optimization of the baseline architecture described in subsection 1.4. For each case, the baseline architecture was once used without attention gates as depicted in Figure 5 and once used with the attention gates as depicted in Figure 7. Also, for (2) and (3) each training sample was accompanied by its reference (ground truth) label to fulfill the supervised nature of these objective functions. However, our proposed algorithm for finding the candidate labels and our proposed objective function got evaluated according to a fully supervised, a semi-supervised, and an unsupervised approach. These resulted in the training samples being
1. accompanied by their reference labels and their priors \(\rightarrow\) fully supervised
2. only accompanied by their reference labels \(\rightarrow\) semi-supervised
3. only accompanied by their priors \(\rightarrow\) semi-supervised
4. accompanied by neither their reference labels nor their priors \(\rightarrow\) unsupervised.
The unsupervised case only relied on the posteriors estimated by the neural network during its optimization and could thus be considered as a self-supervised case as well.
For the cases with the priors, the prior probabilities of the training samples could be computed by a multiatlas registration. If no prior probabilities were provided at the time of optimization (training), then uniform priors got assumed. If the reference (ground truth) labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were provided at the time of optimization (training), then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vectorized reference label \(\mathbf{l}_{b,j}\) was the one-hot-encoding of its reference label \(l_{b,j}\in\mathbb{L}\) and was given by (8). If the reference labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were not provided at the time of optimization, then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vector \(\mathbf{l}_{b,j}\) was uniform and given by (9).
For each evaluation case, the main parameters and the hyperparameters of the baseline architecture got trained (optimized) to automatically segment \(n_{\mathrm{class}}=|\mathbb{L}|=8\) classes of vertebral bodies (VBs), intervertebral disks (IVDs), psoas major (PM) and quadratus lumborum (QL) muscles, epicardial adipose tissues (EpAT), pericardial adipose tissues (PeAT), cardiac perivascular adipose tissues (PvAT), and background on each volumetric fat-water image. To this end, the volumetric fat-water images got divided into a training and a test set. The training set formed the samples set \(\mathbb{T}_{\mathrm{train}}\) and got used to optimize the main parameters and the hyperparameters of the baseline architecture by each method. The test set formed the samples set \(\mathbb{T}_{\mathrm{test}}\) and got used to evaluate the classification performance of the baseline architecture after being fully optimized by each method. The training set was composed of samples accompanied by their reference labels and priors. The test set was composed of samples accompanied by their reference labels. The reference labels of the test samples were not fed to the neural network. They were rather compared against the corresponding labels predicted by the network to evaluate the classification performance of the network. The predicted label of each sample was the index of its maximum classification posterior estimated by the network.
Finally, our proposed optimization process was based on the generalized Kelly criterion for optimal betting and a prospective active inference. It addressed optimization of discriminative neural network classifiers with a feed-forward
architecture. Active inference-based optimizations could foster building highly flexible and generalizable generative models with and without memory. An example of a model with the memory was the one which could explain a partially observable Markov decision process. This model could be implemented by a recurrent or a long short-term memory network [14, 15, 16]. Accordingly, our proposed optimization process could be easily extended to generative or recurrent neural networks such as the networks in [1, 16, 17].
## 3 Application of the Kelly Criterion to Classification
The generalized (multinomial) Kelly criterion proposed optimal allocation fractions of a bettor's asset in betting on multiple horses in a horse race. Each horse had a win and a belief probability. The win probability was the chance of the horse to win the race. The belief probability was the collective belief of other bettors about the chance of the horse to win the race. Thus, for a specific bettor, an optimum betting strategy was to invest as much as possible on a horse of maximum win probability and minimum belief probability (minimum number of other bettors investing on it). This was based on the assumption that all the bettors followed the same strategy and the gain of a horse win got divided between all the bettors who had invested on it. Therefore, the lesser the belief probability was, the higher the paid gain to the investing bettor would be [13, 16].
To optimize a discriminative neural network classifier in a multiclass classification over \(n_{\mathrm{class}}=|\mathbb{L}|\) classes by using the generalized Kelly criterion, we assumed
* every training sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) to be a bettor
* every class \(c\in\mathbb{L}\) to be a horse
* every iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\) of the optimization to be a round of horse racing with its gambling competitions among the bettors (training samples)
* the win probability of each horse (class) \(c\in\mathbb{L}\) for each bettor (training sample) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) to be the prior probability \(a_{b,j,c}\in(0,1)\) estimated by another classifier (if no prior probabilities were provided, then uniform priors got assumed)
* the belief probability of each class \(c\in\mathbb{L}\) for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) to be the classification posterior \(\hat{p}_{b,j,c}^{(i)}\in(0,1)\) estimated by the network in the current iteration \(i\).
It should be noted that in the betting, the win probabilities of the horses were shared across the bettors, but, in the classification, each sample had its own win probability for each class. Moreover, the interpretation of the estimated posteriors of the network as the belief probabilities might look counterintuitive because each sample (bettor) had no _other_ samples (bettors) to compete with. Thus the overall belief about a class (horse) could not be collected from other samples (bettors). Moreover, it was more tempting to select a class (invest on a horse) of maximum belief probability as this probability could be an indicator of the chance of the class (horse) to win. Our definition of the win probability and our counterintuitive definition of the belief probability could be explained under an **attention mechanism**.
On one hand, the selection of the classes (horses) of maximum win probability encouraged the network to focus on classes of confident (high) prior probabilities. In an image segmentation task conducted in a spatial domain, this implied to focus on important (relevant) regions highlighted by high prior probabilities in the image. On the other hand, the selection of the classes (horses) of minimum belief probability encouraged the network to focus on low-confidence posteriors and thus to improve its classification by tackling difficult examples.
In each iteration (race) \(i\), for each training sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\), the Kelly criterion proposed allocation fractions \(\hat{\mathbf{g}}_{b,j}^{(i)}=\left[\hat{g}_{b,j,c}^{(i)}\in[0,1]\right]_{c\in \mathbb{L}}\) of its asset for betting on \(n_{\mathrm{class}}=|\mathbb{L}|\) classes (horses). If in the iteration (race) \(i\) the class (horse) \(c\in\mathbb{L}\) won, then the asset of \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) would be multiplied by \(\left[1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\right]^{-1}\). We assumed that the outcomes of the iterations (horse races) were independent identically distributed (i.i.d.) random
variables. Thus, after \(i\) iterations, the geometric average of the growth rate of the asset of \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) with \(n_{c}^{(i)}\in[0,i]\) number of wins for each class \(c\in\mathbb{L}\) became
\[\eta_{b,j}^{(i)}=\prod_{c\in\mathbb{L}}\left[1-\sum_{k\in\mathbb{L}}\hat{g}_{b, j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\right]^{-n_{c}^{(i )}/i}\qquad i=\sum_{c\in\mathbb{L}}n_{c}^{(i)}. \tag{22}\]
By taking the \(\ln(\cdot)\) of both sides of (22), one obtained
\[\begin{split}\ln(\eta_{b,j}^{(i)})=\sum_{c\in\mathbb{L}}\frac{- n_{c}^{(i)}}{i}\cdot\ln\biggl{[}1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}+ \frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\biggr{]}\\ \lim_{i\to\infty}\frac{n_{c}^{(i)}}{i}=a_{b,j,c}\implies\lim_{i \to\infty}\ln(\eta_{b,j}^{(i)})=\sum_{c\in\mathbb{L}}-a_{b,j,c}\cdot\ln\biggl{[} 1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{ \hat{p}_{b,j,c}^{(i)}}\biggr{]}.\end{split} \tag{23}\]
If the allocation fractions \(\mathbf{g}_{b,j}^{(i)}=\bigl[g_{b,j,c}^{(i)}\in[0,1]\bigr]_{c\in\mathbb{L}}\) proposed by the Kelly criterion for each sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) were **asymptotically optimum** over a long run \((i\to\infty)\), then they maximized the geometric average in (22). Due to the monotonic increase of the \(\ln(\cdot)\) function, the maximization of (22) was equivalent to the maximization of (23). This way, the asymptotically optimum allocation fractions were the maximizers of the averaged logarithms of the growth rate in (23). That is, \(\mathbf{g}_{b,j}^{(i)}=\operatorname*{arg\,max}_{\hat{\mathbf{g}}_{b,j}^{(i)}}\ \Bigl[\ln(\eta_{b,j}^{(i)})\Bigr]\) or
\[\mathbf{g}_{b,j}^{(i)}=\operatorname*{arg\,min}_{\hat{\mathbf{g}}_{b,j}^{(i) }}\ \Bigl{[}-\ln(\eta_{b,j}^{(i)})\Bigr{]}=\operatorname*{arg\,min}_{\hat{ \mathbf{g}}_{b,j}^{(i)}}\ \biggl{[}\sum_{c\in\mathbb{L}}a_{b,j,c}\cdot\ln\biggl{[}1-\sum_{k\in \mathbb{L}}\hat{g}_{b,j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^ {(i)}}\biggr{]}\biggr{]}. \tag{24}\]
As detailed in [10], \(\hat{\mathbf{g}}_{b,j}^{(i)}=\bigl[\hat{g}_{b,j,c}^{(i)}\bigr]_{c\in\mathbb{L}}\in[0,1]^{n_{\mathrm{class}}=|\mathbb{L}|}\) formed a convex set

\[\mathbb{G}_{b,j}^{(i)}=\left\{\hat{\mathbf{g}}_{b,j}^{(i)}\in[0,1]^{n_{\mathrm{class}}=|\mathbb{L}|}\ \middle|\ \biggl[1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\biggr]>0\right\}\subseteq[0,1]^{n_{\mathrm{class}}=|\mathbb{L}|} \tag{25}\]
which was an intersection of half spaces. Each half space was a side of a hyperplane. In addition, in the above optimization, \(\bigl{[}1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}\bigr{]}\in[0,1]\implies \sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}\in[0,1]\). That is, it was allowed to back a horse to win but not to lay a horse to lose. This condition constrained every \(\hat{\mathbf{g}}_{b,j}^{(i)}\in\mathbb{G}_{b,j}^{(i)}\) to a stricter convex set given by
\[\mathbb{G}^{\prime(i)}_{b,j}=\left\{\hat{\mathbf{g}}_{b,j}^{(i)}\in\mathbb{G}_{b,j}^{(i)}\ \middle|\ \sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}\leq 1\ \ \text{and}\ \ \forall c\in\mathbb{L}:\hat{g}_{b,j,c}^{(i)}\geq 0\right\}\subseteq\mathbb{G}_{b,j}^{(i)}. \tag{26}\]
The definition of \(\ln(\eta_{b,j}^{(i)})\) in (23) showed that it was a finite linear combination of strictly concave logarithms with the coefficients being the priors \(\mathbf{a}_{b,j}=\bigl[a_{b,j,c}\in(0,1)\bigr]_{c\in\mathbb{L}}\). This way, the \(\ln(\eta_{b,j}^{(i)})\) became differentiable, strictly concave downwards, and of a unique maximum on the boundary of every bounded subset of \(\mathbb{G}_{b,j}^{(i)}\). Accordingly, to find the maximizers of \(\ln(\eta_{b,j}^{(i)})\) or the optimum allocation fractions \(\mathbf{g}_{b,j}^{(i)}=\bigl[g_{b,j,c}^{(i)}\in[0,1]\bigr]_{c\in\mathbb{L}}\), it was enough to only explore the boundaries of \(\mathbb{G}^{\prime(i)}_{b,j}\subseteq\mathbb{G}_{b,j}^{(i)}\)[10]. This exploration (maximization) could be done by using the method of Lagrange multipliers and the Karush-Kuhn-Tucker (KKT) theory [1]. That is, instead of maximizing \(\ln(\eta_{b,j}^{(i)})\), we maximized
\[\gamma_{b,j}^{(i)}=\ln(\eta_{b,j}^{(i)})+\Bigl{[}\sum_{k\in\mathbb{L}}\lambda_{b,j,k}^{(i)}\cdot\hat{g}_{b,j,k}^{(i)}\Bigr{]}+\lambda_{b,j,0}^{(i)}\cdot \Bigl{[}1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}\Bigr{]} \tag{27}\]
with \(\bigl{\{}\lambda_{b,j,k}^{(i)}\in\mathbb{R}_{\geq 0}\bigr{\}}_{k=0}^{|\mathbb{L}|}\) being the Lagrange multipliers.
The KKT theory stated that every constrained maximizer of \(\ln(\eta^{(i)}_{b,j})\) was an unconstrained maximizer of \(\gamma^{(i)}_{b,j}\). The unconstrained maximization of \(\gamma^{(i)}_{b,j}\) was done by setting its gradient (derivatives) with respect to \(\hat{\mathbf{g}}^{(i)}_{b,j}=\left[\hat{g}^{(i)}_{b,j,c}\in[0,1]\right]_{c\in\mathbb{L}}\) to zero. That is,
\[\frac{\partial\gamma^{(i)}_{b,j}}{\partial\hat{g}^{(i)}_{b,j,c}}=\frac{-a_{b,j, c}+a_{b,j,c}/\hat{p}^{(i)}_{b,j,c}}{1-\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}+ \hat{g}^{(i)}_{b,j,c}/\hat{p}^{(i)}_{b,j,c}}+\lambda^{(i)}_{b,j,c}-\lambda^{(i )}_{b,j,0}=0. \tag{28}\]
This resulted in the following KKT optimality constraints:
\[\lambda^{(i)}_{b,j,c}\cdot\hat{g}^{(i)}_{b,j,c}=0\implies\lambda^{(i)}_{b,j,c}=0\ \ \text{if}\ \ \hat{g}^{(i)}_{b,j,c}>0\] \[\lambda^{(i)}_{b,j,0}\cdot\Bigl[1-\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}\Bigr]=0\implies\lambda^{(i)}_{b,j,0}=0\ \ \text{if}\ \ \sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}<1. \tag{29}\]
The allocation fractions \(\hat{\mathbf{g}}^{(i)}_{b,j}=\bigl[\hat{g}^{(i)}_{b,j,c}\in[0,1]\bigr]_{c\in\mathbb{L}}\) and the Lagrange multipliers \(\bigl\{\lambda^{(i)}_{b,j,k}\in\mathbb{R}_{\geq 0}\bigr\}_{k=0}^{|\mathbb{L}|}\) should fulfill (29) on the convex set \(\mathbb{G}^{\prime(i)}_{b,j}\subseteq\mathbb{G}^{(i)}_{b,j}\). According to [11], the maximum of \(\ln(\eta^{(i)}_{b,j})\) under \(\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}=1\) was less than its maximum under \(\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}<1\). Thus, in (26), we replaced \(\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}\leq 1\) with \(\sum_{k\in\mathbb{L}}\hat{g}^{(i)}_{b,j,k}<1\) and obtained \(\lambda^{(i)}_{b,j,0}=0\) from (29). For each sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\text{train}}\), the classes (horses) whose allocation fractions were nonzero were deemed **candidates** and formed the set \(\mathbb{L}^{(i)}_{b,j}\) with
\[\begin{split}\forall c\in\mathbb{L}^{(i)}_{b,j}\subseteq\mathbb{ L}:\ \hat{g}^{(i)}_{b,j,c}>0\ \ \text{and}\ \ \lambda^{(i)}_{b,j,c}=0\\ \forall c\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}:\ \hat{g}^{(i)}_{b,j,c}=0\ \ \text{and}\ \ \lambda^{(i)}_{b,j,c}\geq 0.\end{split} \tag{30}\]
Then, solving (28) under the above conditions gave
\[\forall c\in\mathbb{L}^{(i)}_{b,j}\subseteq\mathbb{L}:\ g^{(i)}_{b,j,c}=a_{b, j,c}-\hat{p}^{(i)}_{b,j,c}\cdot\frac{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b, j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,k}} \tag{31}\]
\[\implies s^{(i)}_{b,j}=1-\sum_{c\in\mathbb{L}}g^{(i)}_{b,j,c}=1-\sum_{c\in\mathbb{L}^{(i)}_{b,j}}g^{(i)}_{b,j,c}=\overbrace{1-\sum_{c\in\mathbb{L}^{(i)}_{b,j}}a_{b,j,c}}^{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b,j,k}}+\frac{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,k}}\cdot\sum_{c\in\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,c}\]
\[=\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b,j,k}\cdot\left[1+\frac{\sum_{ c\in\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,c}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{ b,j}}\hat{p}^{(i)}_{b,j,k}}\right]=\frac{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b,j,k}}{ \sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,k}} \tag{32}\]
\[\forall c\in\mathbb{L}^{(i)}_{b,j}\subseteq\mathbb{L}:\ s^{(i)}_{b,j}+\frac{g^{( i)}_{b,j,c}}{\hat{p}^{(i)}_{b,j,c}}=\frac{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,k}}+\frac{a_ {b,j,c}}{\hat{p}^{(i)}_{b,j,c}}-\frac{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j} }a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}}\hat{p}^{(i)}_{b,j,k}}= \frac{a_{b,j,c}}{\hat{p}^{(i)}_{b,j,c}} \tag{33}\]
\[\forall c\in\mathbb{L}^{(i)}_{b,j}\subseteq\mathbb{L}\ \text{and}\ \ \forall l\in\mathbb{L}- \mathbb{L}^{(i)}_{b,j}:\ \ \frac{a_{b,j,l}}{\hat{p}^{(i)}_{b,j,l}}\leq s^{(i)}_{b,j}=\frac{\sum_{k\in\mathbb{L}- \mathbb{L}^{(i)}_{b,j}}a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}^{(i)}_{b,j}} \hat{p}^{(i)}_{b,j,k}}<\frac{a_{b,j,c}}{\hat{p}^{(i)}_{b,j,c}}. \tag{34}\]
## 4 Proposed Objective and Process of Optimization
By using our classification-based formulation of the Kelly criterion in section 3, we proposed an objective function and a process for optimizing discriminative neural network classifiers. To be generic, we formulated the objective and the process in such a way that they could accommodate a fully supervised, a semi-supervised, or an unsupervised
optimization. In the fully supervised optimization, both the reference (ground truth) labels and the prior (win) probabilities of the training samples were provided at the time of optimization (training). In the semi-supervised optimization, either the reference labels or the prior (win) probabilities were provided, but not both. In the unsupervised optimization, neither the reference labels nor the prior (win) probabilities were provided. If no prior probabilities were provided, then uniform priors got assumed. If the reference (ground truth) labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were provided, then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vectorized reference label \(\mathbf{l}_{b,j}\) was a one-hot encoding of its reference (ground truth) label \(l_{b,j}\in\mathbb{L}\) and was given by (8). If the reference labels were not provided, then for each sample the vector \(\mathbf{l}_{b,j}\) was uniform and given by (9).
We denoted the vectorized reference labels, the fixed prior (win) probabilities, and the estimated posterior (belief) probabilities of the samples in the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) with the \(\left|\mathbb{T}_{b}\right|\times n_{\mathrm{class}}\) matrices of \(\mathbf{L}_{b}=\left[\mathbf{l}_{b,j}\right]_{j}=\left[l_{b,j,c}\right]_{j,c}\), \(\mathbf{A}_{b}=\left[\mathbf{a}_{b,j}\right]_{j}=\left[a_{b,j,c}\right]_{j,c}\), and \(\hat{\mathbf{P}}_{b}^{(i)}=\left[\hat{\mathbf{p}}_{b,j}^{(i)}\right]_{j}= \left[\hat{p}_{b,j,c}^{(i)}\right]_{j,c}\), respectively. Also, the allocation fractions estimated by the Kelly criterion for these samples formed a \(\left|\mathbb{T}_{b}\right|\times n_{\mathrm{class}}\) matrix denoted by \(\hat{\mathbf{G}}_{b}^{(i)}=\left[\hat{\mathbf{g}}_{b,j}^{(i)}\right]_{j}= \left[\hat{g}_{b,j,c}^{(i)}\right]_{j,c}\).
In each iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\) of optimizing a discriminative neural network classifier, we first found the set of **candidate** classification labels \(\mathbb{L}_{b,j}^{(i)}\subseteq\mathbb{L}\) for each sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\). To this end, we proposed Algorithm 1 by using (31), (32), (33), and (34). Through this algorithm, the set of candidate labels \(\mathbb{L}_{b,j}^{(i)}\subseteq\mathbb{L}\) got computed from the estimated posterior (belief) probabilities \(\hat{\mathbf{p}}_{b,j}^{(i)}=\left[\hat{p}_{b,j,c}^{(i)}\in(0,1)\right]_{c\in\mathbb{L}}\) and the fixed prior (win) probabilities \(\mathbf{a}_{b,j}=\left[a_{b,j,c}\in(0,1)\right]_{c\in\mathbb{L}}\) of the sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\).
The set \(\mathbb{L}_{b,j}^{(i)}\subseteq\mathbb{L}\) could contain multiple class labels or be empty. An empty set implied that the current posterior (belief) and the fixed prior (win) probabilities found no class label, not even the reference label \(l_{b,j}\in\mathbb{L}\), to be reliable enough for the optimization of the neural network classifier. This could result in no further update of the posterior (belief) probabilities in the following iterations. To avoid this standstill, at the end of Algorithm 1, if \(\mathbb{L}_{b,j}^{(i)}=\emptyset\), then the reference label \(l_{b,j}\in\mathbb{L}\) of the sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) got inserted into it.
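To make this concrete, the following is a minimal NumPy sketch of the greedy candidate-set search implied by (31)-(34); it assumes the priors and the posteriors of a sample each sum to one, and the function name and fallback handling are ours, not the paper's.

```python
import numpy as np

def candidate_labels(a, p_hat, ref_label):
    """Sketch of Algorithm 1: find the candidate classes of one sample (bettor).

    a         : fixed prior (win) probabilities, shape (n_clas,), entries in (0, 1)
    p_hat     : estimated posterior (belief) probabilities, shape (n_clas,)
    ref_label : reference (ground truth) label index, used as the fallback
    """
    order = np.argsort(-a / p_hat)          # classes sorted by a_c / p_hat_c, descending
    cand, sum_a, sum_p = [], 0.0, 0.0
    for c in order[:-1]:                    # at least one class stays noncandidate
        s = (1.0 - sum_a) / (1.0 - sum_p)   # s = (sum of noncandidate a) / (sum of noncandidate p_hat)
        if a[c] / p_hat[c] > s:             # inclusion condition of (34)
            cand.append(int(c))
            sum_a += a[c]
            sum_p += p_hat[c]
        else:
            break                           # ratios are sorted, so no later class qualifies
    if not cand:                            # an empty set would stall training, so insert
        cand.append(int(ref_label))         # the reference label, as described above
    return cand
```

With the candidate set fixed, (31) and (32) then give the optimal allocation fractions and the reserve rate \(s_{b,j}^{(i)}\) in closed form.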
By extending (24) to all the samples in the batch \(\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\), one obtained
\[\mathbf{G}_{b}^{(i)}=\operatorname*{arg\,min}_{\hat{\mathbf{G}}_{b}^{(i)}}\;\underbrace{\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}}a_{b,j,c}\cdot\ln\!\left[1-\sum_{k\in\mathbb{L}}\hat{g}_{b,j,k}^{(i)}+\frac{\hat{g}_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\right]}_{\mathcal{L}_{\rm Kelly}(\hat{\mathbf{G}}_{b}^{(i)})}. \tag{35}\]
However, the optimum allocation fractions \(\mathbf{G}_{b}^{(i)}=\left[\mathbf{g}_{b,j}^{(i)}\right]_{j}=\left[g_{b,j,c}^{(i)}\right]_{j,c}\) had a closed-form solution given by (31). This solution resulted in (32) and (33) and allowed us to express
\[\min_{\hat{\mathbf{G}}_{b}^{(i)}}\;\mathcal{L}_{\rm Kelly}(\hat{\mathbf{G}}_{b}^{(i)})=\mathcal{L}_{\rm Kelly}(\mathbf{G}_{b}^{(i)})=\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}}a_{b,j,c}\cdot\ln\!\left[s_{b,j}^{(i)}+\frac{g_{b,j,c}^{(i)}}{\hat{p}_{b,j,c}^{(i)}}\right] \tag{36}\] \[=\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\left[\sum_{c\in\mathbb{L}_{b,j}^{(i)}}a_{b,j,c}\cdot\ln\left[\frac{a_{b,j,c}}{\hat{p}_{b,j,c}^{(i)}}\right]+\left[\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}a_{b,j,k}\right]\cdot\ln\left[\frac{\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}\hat{p}_{b,j,k}^{(i)}}\right]\right].\]
As given by (10), the cross entropy loss for optimizing discriminative neural network classifiers was the variational free energy (VFE) of a retrospective active inference. That is,
\[\mathcal{L}_{\rm CE}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{L}_{b})=\frac{-1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}}l_{b,j,c}\cdot\ln\!\left(\hat{p}_{b,j,c}^{(i)}\right) \equiv -\sum_{s|\pi}p(s|\pi)\cdot\ln\!\left(q(o|\pi)\right). \tag{37}\]
Also, the expected free energy (EFE) of a prospective active inference was given in (4) as
\[\mathcal{L}_{\rm EFE}=\underbrace{\sum_{o}p(o)\cdot\left[\ln\!\left(p(o) \right)-\ln\!\left(q(o|\pi)\right)\right]}_{\rm expected\ complexity}+\underbrace{ \sum_{s|\pi}-p(s|\pi)\cdot\sum_{o|\pi}q(o|\pi)\cdot\ln\!\left(q(o|\pi)\right) }_{\rm uncertainty}. \tag{38}\]
Our proposed Algorithm 1 for finding the candidate labels \(\mathbb{L}_{b,j}^{(i)}\) aimed to minimize the objective function of the generalized Kelly criterion. This minimized function was given by (36). A comparison of (38) and (36) with regard to (37) revealed that the minimized objective of the Kelly criterion was the **expected complexity** term of the EFE of a prospective active inference. That is, the objective function of the generalized Kelly criterion was a tight upper bound of the expected complexity of the EFE. This equivalence got summarized in Table 2 and implied that the **preferred observations** denoted by \(o\) were realized through dividing \(\mathbb{L}\) into candidate classes \(\mathbb{L}_{b,j}^{(i)}\) and noncandidate classes \(\mathbb{L}-\mathbb{L}_{b,j}^{(i)}\) and then handling the noncandidate classes altogether as one class. To this end, in (36), the prior (win) probabilities of the noncandidate classes got summed together to form their collective prior (win) probability. Similarly, the estimated posterior (belief) probabilities of the noncandidate classes got summed together to form their collective posterior (belief) probability.

\begin{table}
\begin{tabular}{|c|l|} \hline \(p(s|\pi)\) & \(l_{b,j,c}\): \(c^{\rm th}\) entry of the vectorized reference label of the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) \\ \(p(o)\) & \(a_{b,j,c}\): prior (win) probability of the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) \\ \(q(o|\pi)\) & \(\hat{p}_{b,j,c}^{(i)}\): estimated posterior (belief) probability of the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) \\ \hline \(p(o)\) & \(\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}a_{b,j,k}\): collective prior of the noncandidate classes of the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) \\ \(q(o|\pi)\) & \(\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}\hat{p}_{b,j,k}^{(i)}\): collective posterior of the noncandidate classes of the sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\rm train}\) \\ \hline \end{tabular}
\end{table}
Table 2: Equivalence of the notations used in the objective functions of the active inference (left column) and the neural network optimization (right column).
Figure 8: Sagittal slices of the feature maps at the spatial regions enclosing the vertebral bodies and the intervertebral disks at the outputs of different encoder/decoder stages of the baseline architecture depicted in Figure 5 after being optimized by the proposed objective function and its associated optimization process.

The EFE in (38) was composed of an **expected complexity** term plus an **uncertainty** term. As described in subsection 1.1, the minimization of the expected complexity was equivalent to the maximization of the reward. The reward maximization was also a goal of the Kelly criterion and could thus be partially fulfilled by finding the candidate labels through the proposed Algorithm 1. To further maximize the reward, the expected complexity should be minimized further. This was doable by having enough information or maximizing the information gain, i.e. minimizing the **uncertainty**. Accordingly, to optimize a discriminative neural network classifier, we proposed a novel objective function based on the EFE of a prospective active inference. The proposed function was given by
\[\mathcal{L}_{\mathrm{EFE}}(\hat{\mathbf{P}}_{b}^{(i)},\mathbf{A}_{b},\mathbf{L}_{b})=\underbrace{\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\left[\sum_{c\in\mathbb{L}_{b,j}^{(i)}}a_{b,j,c}\cdot\ln\left[\frac{a_{b,j,c}}{\hat{p}_{b,j,c}^{(i)}}\right]+\left[\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}a_{b,j,k}\right]\cdot\ln\left[\frac{\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}a_{b,j,k}}{\sum_{k\in\mathbb{L}-\mathbb{L}_{b,j}^{(i)}}\hat{p}_{b,j,k}^{(i)}}\right]\right]}_{\text{expected complexity}}\underbrace{-\frac{1}{|\mathbb{L}|\cdot|\mathbb{T}_{b}|}\sum_{j\in\mathbb{T}_{b}}\sum_{c\in\mathbb{L}}l_{b,j,c}\cdot\hat{p}_{b,j,c}^{(i)}\cdot\ln\left[\hat{p}_{b,j,c}^{(i)}\right]}_{\text{uncertainty}} \tag{39}\]
This function was reversible and differentiable with respect to the posteriors \(\hat{\mathbf{P}}_{b}^{(i)}\). As given by (7), these posteriors were generated by applying the Softmax function to the network's outputs \(\mathbf{Z}_{b}^{(i)}=\left[\mathbf{z}_{b,j}^{(i)}\right]_{j}=\left[z_{b,j,c}^{(i)}\right]_{j,c}\). Thus, the proposed function was also differentiable with respect to \(\mathbf{Z}_{b}^{(i)}\) and the outputs of every layer. As described in subsection 1.2, this allowed minimizing it by a gradient descent optimizer with backpropagation.
We preceded the minimization of (39) with a partial minimization of its expected complexity term by finding the candidate classification labels \(\mathbb{L}_{b,j}^{(i)}\) of each sample (bettor) \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) through Algorithm 1, proposed based on the Kelly criterion.
Accordingly, in each iteration \(i\in\{1,\cdots,n_{\mathrm{it}}\}\) of our proposed optimization process, every sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) got passed through the network to estimate its classification posteriors \(\hat{\mathbf{P}}_{b}^{(i)}=\left[\hat{\mathbf{p}}_{b,j}^{(i)}\right]_{j}=\left[\hat{p}_{b,j,c}^{(i)}\right]_{j,c}\). From these posteriors and the fixed priors \(\mathbf{a}_{b,j}=\left[a_{b,j,c}\in(0,1)\right]_{c\in\mathbb{L}}\) of the sample, its candidate classification labels \(\mathbb{L}_{b,j}^{(i)}\subseteq\mathbb{L}\) got computed by using the proposed Algorithm 1. Then, the loss at the last network's layer got obtained by inputting the posteriors, the priors, and the candidate labels of the samples into the proposed function in (39). By propagating this loss from the last layer to the first layer, the loss of every layer got obtained. Then, the gradient (first derivative) of each layer's loss got calculated with respect to its outputs. The product of these layerwise gradients got used by the gradient descent optimizer to update the network's parameters.
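As an illustration of one such iteration, the following PyTorch sketch assembles the loss from the posteriors, the priors, and a boolean candidate mask produced by Algorithm 1; it assumes (39) is the sum of the expected complexity term of (36) and the uncertainty term of (38), and all names are ours.

```python
import torch

def proposed_efe_loss(logits, priors, labels_onehot, cand_mask, eps=1e-8):
    """Sketch of the proposed objective (39) for one mini-batch.

    logits        : network outputs, shape (n_samples, n_clas)
    priors        : fixed prior (win) probabilities, same shape, rows summing to 1
    labels_onehot : vectorized reference labels (one-hot or uniform), same shape
    cand_mask     : 1.0 for candidate classes (from Algorithm 1), 0.0 otherwise
    """
    p = torch.softmax(logits, dim=1)                 # posteriors, per (7)
    # expected complexity, per (36): candidate classes enter individually ...
    cand = (cand_mask * priors * torch.log((priors + eps) / (p + eps))).sum(dim=1)
    # ... while the noncandidate classes are collapsed into one collective class
    a_non = ((1.0 - cand_mask) * priors).sum(dim=1)
    p_non = ((1.0 - cand_mask) * p).sum(dim=1)
    non_cand = a_non * torch.log((a_non + eps) / (p_non + eps))
    # uncertainty term, per (38)
    unc = -(labels_onehot * p * torch.log(p + eps)).sum(dim=1)
    return ((cand + non_cand + unc) / logits.shape[1]).mean()
```

Since every step is differentiable, calling `backward()` on this loss reproduces the layerwise gradient propagation described above.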
In an image segmentation task, each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) was an image patch processed by a network's layer. In our baseline architecture described in subsection 1.4, each network's layer processed samples (patches) of a certain spatial resolution. The multiresolution hierarchy of the network was the result of downsampling and upsampling each volumetric fat-water image through convolutional and deconvolutional layers, respectively. For the sake of simplicity, we omitted the resolution-specifying indices from the samples' notations.
Figure 8 shows sagittal slices of the feature maps at the spatial regions enclosing the vertebral bodies and the intervertebral disks at the outputs of different encoder/decoder stages of the baseline architecture depicted in Figure 5 after being optimized by the proposed objective function and its associated optimization process.
## 5 Network's Parameters and Their Optimization
Our proposed algorithm for finding the candidate labels and our proposed objective function for optimizing a discriminative neural network classifier got integrated into a mini-batch-based gradient descent optimizer with backpropagation by using the process proposed in section 4. This process got evaluated against a similar process incorporating a representative of the cross entropy-based losses or a representative of the metric-based losses introduced in subsection 1.3. The representative of the cross entropy-based losses was the weighted focal loss. This loss comprised a modulating factor and a weighting mechanism to alleviate classification biases towards the dominant classes of the training samples.
The representative of the metric-based losses was the Lovasz-Softmax loss. Besides being smooth and differentiable, to the best of our knowledge, this loss was the only convex loss among the metric-based losses.
Accordingly, the evaluated losses were
1. the proposed objective function (Po) given by (39)
2. the weighted focal loss (Fo) given by (14)
3. the Lovasz-Softmax loss (Lo) given by (21).
These evaluations were on an end-to-end optimization of the baseline architecture described in subsection 1.4. For each case, the baseline architecture was once used without attention gates (Na) as depicted in Figure 5 and once used with the attention gates (At) as depicted in Figure 7. Also, for (2) and (3) each training sample was accompanied by its reference (ground truth) label to fulfill the supervised nature of these objective functions. However, our proposed algorithm for finding the candidate labels and our proposed objective function got evaluated according to a fully supervised, a semi-supervised, and an unsupervised approach. These resulted in the training samples being
1. accompanied by their reference labels and their priors (GrPr) \(\rightarrow\) fully supervised
2. only accompanied by their reference labels (GrNp) \(\rightarrow\) semi-supervised
3. only accompanied by their priors (NgPr) \(\rightarrow\) semi-supervised
4. accompanied by neither their reference labels nor their priors (NgNp) \(\rightarrow\) unsupervised.
For the cases with the priors, the prior probabilities of the training samples could be computed by a multiatlas registration. As in section 4, if no prior probabilities were provided at the time of optimization (training), then uniform priors got assumed; if the reference (ground truth) labels of the training samples \(\mathbb{T}_{\mathrm{train}}\) were provided, then for each sample \(v_{b,j}\in\mathbb{T}_{b}\subseteq\mathbb{T}_{\mathrm{train}}\) the vectorized reference label \(\mathbf{l}_{b,j}\) was the one-hot encoding of its reference label \(l_{b,j}\in\mathbb{L}\) given by (8), and otherwise \(\mathbf{l}_{b,j}\) was uniform and given by (9).
For each evaluation case, the main parameters and the hyperparameters of the baseline architecture got trained (optimized) to automatically segment \(n_{\mathrm{clas}}=|\mathbb{L}|=8\) classes of vertebral bodies (VBs), intervertebral disks (IVDs), psoas major (PM) and quadratus lumborum (QL) muscles, epicardial adipose tissues (EpAT), pericardial adipose tissues (PeAT), cardiac perivascular adipose tissues (PvAT), and background on each volumetric fat-water image. To this end, the volumetric fat-water images got divided into a training and a test set. The training set formed the samples set \(\mathbb{T}_{\mathrm{train}}\) and got used to optimize the main parameters and the hyperparameters of the baseline architecture by each method. The test set formed the samples set \(\mathbb{T}_{\mathrm{test}}\) and got used to evaluate the classification performance of the baseline architecture after being fully optimized by each method. The training set was composed of samples accompanied by their reference labels and priors. The test set was composed of samples accompanied by their reference labels. The reference labels of the test samples were not fed to the neural network. They were rather compared against the corresponding labels predicted by the network to evaluate the classification performance of the network. The predicted label of each sample was the index of its maximum classification posterior estimated by the network.
The main parameters of the baseline architecture included the weights and the biases of the convolutional and deconvolutional layers, the leakage coefficient \(a_{\mathrm{prelu}}\in\mathbb{R}_{\geq 0}\) of every nonlinear PReLU activation, and the means and variances of the (instance) normalizers introduced on page 14. Prior to their optimization, the main parameters had to be initialized. This initialization was extremely important for the weights of the convolutional and deconvolutional layers of a residual network with several layers and thus different paths of signal propagation. Without a proper weight initialization, some parts of the network might have excessive activations and thus produce stronger gradients, while some other parts might produce weaker gradients and thus get optimized less. To avoid this, a random initialization of the weights with the aim of breaking symmetries and making each feature map of unit variance
was suggested. For this, the weights were drawn from a certain distribution. In networks with nonlinear Sigmoid or hyperbolic tangent activations as well as linear activations, the proper initializations of the weights of every layer were random numbers drawn from a uniform distribution in the range \([-\sqrt{6/(n_{\mathrm{in}}+n_{\mathrm{out}})},\;\sqrt{6/(n_{\mathrm{in}}+n_{ \mathrm{out}})}]\) with \(n_{\mathrm{in}}\) being the number of incoming network connections (fan-in) and \(n_{\mathrm{out}}\) being the number of outgoing network connections (fan-out) of the layer. This type of initialization was called a _Glorot_ or a _Xavier_ initialization and was shown to be improper for networks involving nonlinear rectified linear units, including the PReLU, as their activations [Glorot 2010]. For these networks, like our baseline architecture, the proper initializations of the weights of every convolutional/deconvolutional layer were random numbers drawn from a Gaussian distribution with a mean of 0 and a standard deviation of \(\sqrt{2/n_{\mathrm{in}}}\)[He 2015, Ronneberger 2015]. For a convolutional layer of a kernel size of \(5\times 5\times 5\), \(16\) input feature maps, and \(32\) output feature maps, the number of incoming network connections (fan-in) was \(5\times 5\times 5\times 16=2000\) and the number of outgoing network connections (fan-out) was \(32\). The biases of every convolutional/deconvolutional layer were initialized to 0. The leakage coefficient of every nonlinear PReLU activation got initialized to \(0.15\) to allow a small leakage of negative inputs. The means and the variances of the (instance) normalizers got initialized to 0 and 1 respectively.
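As a concrete illustration of this scheme, the sketch below draws the weights of one 3-D convolutional layer from the fan-in-based Gaussian distribution described above; the shapes match the example in the text, and the helper name is ours.

```python
import numpy as np

def he_init_conv3d(kernel_size, n_in_maps, n_out_maps, rng=None):
    """He initialization for a conv layer followed by a rectified unit (PReLU).

    fan_in = prod(kernel_size) * n_in_maps, e.g. 5 * 5 * 5 * 16 = 2000 in the text.
    """
    rng = rng or np.random.default_rng()
    fan_in = int(np.prod(kernel_size)) * n_in_maps
    weights = rng.normal(0.0, np.sqrt(2.0 / fan_in),
                         size=(*kernel_size, n_in_maps, n_out_maps))
    biases = np.zeros(n_out_maps)        # biases initialized to 0, as in the text
    return weights, biases

W, b = he_init_conv3d((5, 5, 5), 16, 32)   # fan-in 2000, fan-out 32
```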
The hyperparameters of the baseline architecture and their discretized values were
* number of convolutional/deconvolutional layers \(n_{s}\in\{1,2,\cdots,5\}\) of the \(s^{\mathrm{th}}\) encoder/decoder stage of the V-net of the baseline architecture
* Dropout's retention probability \(p_{s}\in\{0.1,0.2,\cdots,0.9\}\) of the perceptrons (nodes) of the \(s^{\mathrm{th}}\) encoder/decoder stage of the V-net of the baseline architecture.
To optimize the main parameters and the hyperparameters of the baseline architecture by each method, a random search over the discretized hyperparameter values and a 5-fold cross validation were conducted. To this end, the training set got divided into 5 subsets. Then, for each method, in each optimization trial, a set of hyperparameter values got randomly selected. With these hyperparameter values, training and validation got performed 5 times according to the 5-fold cross validation. In each fold, the main parameters of the baseline architecture got optimized on 4 subsets by using a mini-batch-based gradient descent optimizer with backpropagation. The gradient descent optimizer was the Adam optimizer described in subsection 1.2. The resulting network model got then evaluated on the remaining (validation) subset by calculating the _precision_ and the _recall_ metrics for each of the \(n_{\mathrm{clas}}-1=8-1=7\) foreground classes against the rest of the classes. This way, for the selected hyperparameter values, at the end of the 5-fold cross validation, 5 network models and 7 _precision_ and 7 _recall_ values per network model were obtained. For each model, the 7 _precision_ and the 7 _recall_ values got averaged. Then, for the selected hyperparameter values, the model of maximum averaged _precision_ and _recall_ was the best performing model. The optimization trials continued by randomly selecting another set of hyperparameter values until the best performing model resulting from the current hyperparameter values could not exceed the averaged _precision_ and _recall_ values of any of the best models in the last 50 trials. The _precision_ and _recall_ metrics were selected due to their robustness against imbalanced class-sample distributions. Moreover, the aforementioned cross validation aimed to reduce the impact of the randomized initialization of the main parameters on the resulting network models. The 5 folds were selected with regard to the maximum size of the baseline architecture and the sufficiency of the number of training and validation samples for the optimization and evaluation in each fold, respectively. The above process was done by using the tools provided in the distributed asynchronous hyperparameter optimization (Hyperopt) library in Python [Bergstra 2015]. For the hyperparameter selection, in addition to the randomization, this library provided a tree of Parzen estimators (TPE) and its adaptive variant. The TPE was more appropriate for belief neural networks of undirected graph topology than for feed-forward networks like our baseline architecture [Bergstra 2011, Bergstra 2012].
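The outer loop of this search can be sketched as follows; the sampling and scoring helpers are placeholders for the Hyperopt-based implementation described above, and the patience-style formulation of the stopping rule is a simplification of ours.

```python
def random_search(sample_hparams, cross_validate, patience=50):
    """Sketch of the random hyperparameter search with 5-fold cross validation.

    sample_hparams : draws one set of discretized hyperparameter values at random
    cross_validate : trains 5 models (one per fold) with the given values and
                     returns the averaged precision/recall of the best model
    """
    best_score, best_hparams, since_improvement = float("-inf"), None, 0
    while since_improvement < patience:
        hparams = sample_hparams()
        score = cross_validate(hparams)
        if score > best_score:
            best_score, best_hparams, since_improvement = score, hparams, 0
        else:
            since_improvement += 1
    return best_hparams, best_score
```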
The evaluated objective functions and the Adam-based gradient descent optimizer involved following fixed parameters:
* N=\(|\mathbb{T}_{b}|=2\): As explained in page 13, due to the memory limitations of the used GPU, only 2 volumetric fat-water images were included in each mini-batch.
* \(\gamma_{\rm mod}=2\): Modulating factor of the focal loss given by (13).
* \(\alpha_{\rm lr}=0.001\): Learning rate (step size) of the gradient descent optimizer defined in (5). This learning rate did not need to be adapted manually as the Adam optimizer automatically changed the effective learning rate by the ratio of the exponential moving average of the first moment to the exponential moving average of the second moment.
* \(\beta_{\rm fm}=0.90\): Decay rate of the estimated first moments.
* \(\beta_{\rm sm}=0.99\): Decay rate of the estimated second moments.
* \(\mathbf{m}^{(0)}=\mathbf{0}\): Initial first moments.
* \(\mathbf{v}^{(0)}=\mathbf{0}\): Initial second moments.
The number of iterations \(n_{\rm it}\in\{10,\cdots,15000\}\) was determined according to an early stopping criterion. That is, when the exponential moving average of the validation error (loss) had not improved within the last 100 iterations, the optimization got stopped.
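A minimal sketch of this criterion, with the EMA smoothing factor being an assumed value rather than one given in the text:

```python
def should_stop(validation_losses, beta=0.9, patience=100):
    """Stop when the exponential moving average (EMA) of the validation loss
    has not improved within the last `patience` iterations."""
    ema, best_ema, since_best = None, float("inf"), 0
    for loss in validation_losses:
        ema = loss if ema is None else beta * ema + (1.0 - beta) * loss
        if ema < best_ema:
            best_ema, since_best = ema, 0
        else:
            since_best += 1
        if since_best >= patience:
            return True
    return False
```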
Figure 9 shows convergence patterns of different evaluation cases with each case optimizing its main parameters with the best performing hyperparameters.
The aforementioned optimizations were conducted on 4 NVIDIA TITAN X(r) GPUs of 12 GB memory each and by using a memory efficient cuDNN3 implementation of the convolutional/deconvolutional layers and the TensorFlow(tm) library of version 2.3 [1].
Table 3 shows the optimized hyperparameters and the overall time of optimizing the main parameters and the hyperparameters for each evaluation case. After the optimizations, an automatic segmentation of the \(n_{\rm clas}=8\) classes on an unseen volumetric fat-water image took around 3 seconds for each evaluation case on the GPUs used for the optimizations.
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline
 & \multicolumn{6}{c|}{**Evaluation Case**} \\
 & PoNaGrPr & PoNaGrNp & PoNaNgPr & PoNaNgNp & PoAtGrPr & PoAtGrNp \\ \hline
\(n_{1}\) & 2 & 2 & 2 & 3 & 1 & 1 \\
\(n_{2}\) & 2 & 3 & 2 & 4 & 2 & 2 \\
\(n_{3}\) & 3 & 3 & 4 & 4 & 2 & 2 \\
\(n_{4}\) & 3 & 4 & 4 & 5 & 3 & 3 \\
\(n_{5}\) & 3 & 4 & 5 & 5 & 3 & 4 \\
\(p_{1}\) & 0.9 & 0.8 & 0.8 & 0.7 & 0.9 & 0.9 \\
\(p_{2}\) & 0.8 & 0.7 & 0.7 & 0.7 & 0.7 & 0.8 \\
\(p_{3}\) & 0.7 & 0.7 & 0.7 & 0.7 & 0.7 & 0.7 \\
\(p_{4}\) & 0.7 & 0.7 & 0.7 & 0.6 & 0.6 & 0.7 \\
\(p_{5}\) & 0.6 & 0.6 & 0.6 & 0.6 & 0.5 & 0.6 \\
time & 83 & 90 & 95 & 107 & 82 & 85 \\ \hline\hline
 & PoAtNgPr & PoAtNgNp & FoNaGrNp & FoAtGrNp & LoNaGrNp & LoAtGrNp \\ \hline
\(n_{1}\) & 1 & 2 & 2 & 2 & 1 & 1 \\
\(n_{2}\) & 2 & 3 & 3 & 2 & 2 & 2 \\
\(n_{3}\) & 3 & 4 & 4 & 3 & 3 & 2 \\
\(n_{4}\) & 3 & 4 & 4 & 3 & 4 & 3 \\
\(n_{5}\) & 4 & 5 & 4 & 4 & 4 & 3 \\
\(p_{1}\) & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 \\
\(p_{2}\) & 0.8 & 0.8 & 0.8 & 0.8 & 0.8 & 0.8 \\
\(p_{3}\) & 0.7 & 0.8 & 0.8 & 0.7 & 0.7 & 0.7 \\
\(p_{4}\) & 0.6 & 0.7 & 0.7 & 0.6 & 0.7 & 0.6 \\
\(p_{5}\) & 0.5 & 0.6 & 0.6 & 0.5 & 0.5 & 0.5 \\
time & 88 & 97 & 84 & 82 & 79 & 77 \\ \hline
\end{tabular}
\end{table}
Table 3: Optimized hyperparameters and the overall time [hours] of optimizing the main parameters and the hyperparameters for each evaluation case.
2306.12640 | On Addressing the Limitations of Graph Neural Networks | This report gives a summary of two problems about graph convolutional
networks (GCNs): over-smoothing and heterophily challenges, and outlines future
directions to explore. | Sitao Luan | 2023-06-22T02:50:16Z | http://arxiv.org/abs/2306.12640v2 | # On Addressing the Limitations of Graph Neural Networks
###### Abstract
This report gives a comprehensive summary of two problems about graph convolutional networks (GCNs): over-smoothing and heterophily challenges, and outlines future directions to explore.
## 1 Introduction
Many real-world problems can be modeled as graphs. Recently, neural network based approaches have achieved significant progress for solving large, complex, graph-structured problems [9, 12, 17, 21, 24, 50]. Inspired by the success of Convolutional Neural Networks (CNNs) [31] in computer vision [33], graph convolution defined on the graph Fourier domain stands out as the key operator and one of the most powerful tools for using machine learning to solve graph problems. Although GCNs have high expressive power, they still suffer from several difficulties: _e.g._, the over-smoothing problem prevents deep GCNs from sufficiently exploiting multi-scale information, and the heterophily problem makes graph-aware models underperform graph-agnostic models. This report summarizes the methods we have proposed to address those challenges and puts forward some research problems we will investigate.
To fully explain the above problems, in subsection 1.1 we will first introduce the notation and background knowledge of graph networks. In section 2, we introduce the loss of expressive power of deep graph neural networks (GNNs) and propose the snowball and truncated Krylov architectures to address it; in section 3, we analyze the heterophily problem for existing GNNs and propose the Adaptive Channel Mixing (ACM) architecture to address it.
**Main Contribution.** In section 2, we first point out that the output of a deep GCN with the ReLU activation function will suffer from a loss-of-rank problem under certain conditions, and that this can cause deep GCNs to lose expressive power. We then prove that Tanh is better at preserving the rank of the output and verify this claim with numerical tests. Then we find a way to deepen GCN in block Krylov form and propose the snowball and truncated Krylov networks, which perform better than state-of-the-art (SOTA) models on semi-supervised node classification tasks on 3 benchmark datasets. Besides, in section 2.2 we point out that finding **a specifically tailored weight initialization scheme for GCNs** can be a promising direction for addressing over-smoothing efficiently. In section 3, we first illustrate the insufficiency of the current homophily metrics and propose aggregation homophily based on a new similarity matrix. We then show the advantage of the new homophily metric over the existing ones on synthetic graphs. Based on the similarity matrix, we define the diversification distinguishability of a node and demonstrate why high-pass filters can help to address the heterophily problem. To include both low-pass and high-pass filters in GNNs, we extend the filterbank method and propose the ACM and ACMII frameworks, which can boost the performance of baseline GNNs on heterophilous graphs.
### Notation and Background Knowledge
Suppose we have an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},A)\), where \(\mathcal{V}\) is the node set with \(|\mathcal{V}|=N\); \(\mathcal{E}\) is the edge set without self-loop; \(A\in\mathbb{R}^{N\times N}\) is the symmetric adjacency matrix with \(A_{ij}=1\) if and only if \(e_{ij}\in\mathcal{E}\), otherwise \(A_{ij}=0\); \(D\) is the diagonal degree matrix, _i.e._\(D_{ii}=\sum_{j}A_{ij}\) and \(\mathcal{N}_{i}=\{j:e_{ij}\in\mathcal{E}\}\) is the neighborhood set of node \(i\). A graph signal is a vector \(\mathbf{x}\in\mathbb{R}^{N}\) defined on \(\mathcal{V}\), where \(x_{i}\) is defined on the node \(i\). We also have a feature matrix \(X\in\mathbb{R}^{N\times F}\) whose columns are graph signals and each node \(i\) has a corresponding feature vector \(X_{i:}\) with dimension \(F\), which is the \(i\)-th row of \(X\). We denote \(Z\in\mathbb{R}^{N\times C}\) as label encoding matrix, where \(Z_{i:}\) is the one hot encoding of the label of node \(i\) and \(C\) is the total number of classes.
The (combinatorial) graph Laplacian is defined as \(L=D-A\), which is a Symmetric Positive Semi-Definite (SPSD) matrix [7]. Its eigendecomposition gives \(L=U\Lambda U^{T}\), where the columns of \(U\in\mathbb{R}^{N\times N}\) are orthonormal eigenvectors, namely the _graph Fourier basis_, \(\Lambda=\text{diag}(\lambda_{1},\ldots,\lambda_{N})\) with \(\lambda_{1}\leq\cdots\leq\lambda_{N}\), and these eigenvalues are also called _frequencies_. The graph Fourier transform of the graph signal \(\mathbf{x}\) is defined as \(\mathbf{x}_{\mathcal{F}}=U^{-1}\mathbf{x}=U^{T}\mathbf{x}=[\mathbf{u}_{1}^{T}\mathbf{x},\ldots, \mathbf{u}_{N}^{T}\mathbf{x}]^{T}\), where \(\mathbf{u}_{i}^{T}\mathbf{x}\) is the component of \(\mathbf{x}\) in the direction of \(\mathbf{u}_{i}\).
Some graph Laplacian variants are commonly used, _e.g._ the symmetric normalized Laplacian \(L_{\text{sym}}=D^{-1/2}LD^{-1/2}=I-D^{-1/2}AD^{-1/2}\) and the random walk normalized Laplacian \(L_{\text{rw}}=D^{-1}L=I-D^{-1}A\). The eigenvalues of \(L_{\text{rw}}\) and \(L_{\text{sym}}\) are the same and are in \([0,2)\), and their corresponding eigenvectors satisfy \(\mathbf{u}_{\text{rw}}^{i}=D^{-1/2}\mathbf{u}_{\text{sym}}^{i}\).
The affinity (transition) matrices can be derived from the Laplacians, _e.g._\(A_{\text{rw}}=I-L_{\text{rw}}=D^{-1}A\), \(A_{\text{sym}}=I-L_{\text{sym}}=D^{-1/2}AD^{-1/2}\). Then \(\lambda_{i}(A_{\text{rw}})=\lambda_{i}(A_{\text{sym}})=1-\lambda_{i}(L_{\text{sym}})=1-\lambda_{i}(L_{\text{rw}})\in(-1,1]\). Renormalized affinity and Laplacian matrices are introduced in [24] as \(\hat{A}_{\text{sym}}=\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\), \(\hat{L}_{\text{sym}}=I-\hat{A}_{\text{sym}}\), where \(\tilde{A}\equiv A+I\), \(\tilde{D}\equiv D+I\); this essentially adds a self-loop to each node and is widely used in the Graph Convolutional Network (GCN) as follows:
\[Y=\text{softmax}(\hat{A}_{\text{sym}}\text{ ReLU}(\hat{A}_{\text{sym}}XW_{0})\ W_{1}) \tag{1}\]
where \(W_{0}\in\mathbb{R}^{F\times F_{1}}\) and \(W_{1}\in\mathbb{R}^{F_{1}\times O}\) are parameter matrices. GCN can learn by minimizing the following cross entropy loss
\[\mathcal{L}=-\text{trace}(Z^{T}\log Y). \tag{2}\]
The random walk renormalized matrix \(\hat{A}_{\text{rw}}=\tilde{D}^{-1}\tilde{A}\) can also be applied to GCN and it has the same eigenvalues as \(\hat{A}_{\text{sym}}\). The corresponding Laplacian is defined as \(\hat{L}_{\text{rw}}=I-\hat{A}_{\text{rw}}\). Specifically, the nature of the random walk matrix makes \(\hat{A}_{\text{rw}}\) behave as a mean aggregator
\(\sum_{j\in\{\mathcal{N}_{i}\cup i\}}x_{j}/(D_{ii}+1)\) which is applied in [17] and is important to bridge the gap between spatial- and spectral-based graph convolution methods.
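For concreteness, a small NumPy sketch of the renormalized affinity matrices defined above (dense matrices for readability; a sparse implementation is the practical choice):

```python
import numpy as np

def renormalized_affinities(A):
    """Compute the renormalized affinity matrices used by GCN.

    A : symmetric adjacency matrix without self-loops, shape (N, N)
    """
    A_tilde = A + np.eye(A.shape[0])       # add self-loops: A~ = A + I
    d_tilde = A_tilde.sum(axis=1)          # diagonal of D~ = D + I
    A_sym = A_tilde / np.sqrt(np.outer(d_tilde, d_tilde))  # D~^{-1/2} A~ D~^{-1/2}
    A_rw = A_tilde / d_tilde[:, None]      # D~^{-1} A~, the row-stochastic mean aggregator
    return A_sym, A_rw
```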
## 2 Loss of Expressive Power of Deep Graph Neural Networks
One major problem of the existing GCNs is the low expressive power limited by their shallow learning mechanisms [66, 61]. There are mainly two reasons why an architecture that is scalable in depth has not been achieved yet. First, this problem is difficult: considering graph convolution as a special form of Laplacian smoothing [32], networks with multiple convolutional layers will suffer from an over-smoothing problem that makes the representation of even distant nodes indistinguishable [66]. Second, some people think it is unnecessary: for example, [4] states that it is not necessary for the label information to totally traverse the entire graph and one can operate on the multi-scale coarsened input graph and obtain the same flow of information as GCNs with more layers. Acknowledging the difficulty, we hold on to the objective of deepening GCNs since the desired compositionality1 will yield easy articulation and consistent performance for problems with different scales.
Footnote 1: The expressive power of a sound deep Neural Network (NN) architecture should be expected to grow with the increment of network depth [19, 30].
In subsection 2.1, we first analyze the limits of deep GCNs brought by over-smoothing and the activation functions. Then, we show that any graph convolution with a well-defined analytic spectral filter can be written as a product of a block Krylov matrix and a learnable parameter matrix in a special form. Based on this, we propose two GCN architectures that leverage multi-scale information in different ways and are scalable in depth, with stronger expressive powers and abilities to extract richer representations of graph-structured data. For empirical validation, we test different instances of the proposed architectures on multiple node classification tasks. The results show that even the simplest instance of the architectures achieves state-of-the-art performance, and the complex ones achieve surprisingly higher performance. In subsection 2.2, we propose to study an over-smoothing problem and give some ideas.
### A Stronger Multi-scale Deep GNN with Truncated Krylov Architecture
Suppose we deepen GCN in the same way as [32, 24]; then we have
\[Y=\text{softmax}(\hat{A}_{\text{sym}}\text{ ReLU}(\cdots\hat{A}_{\text{sym}}\text{ ReLU}(\hat{A}_{\text{sym}}\text{ ReLU}(\hat{A}_{\text{sym}}XW_{0})\text{ }W_{1})\text{ }W_{2}\cdots)\text{ }W_{n})\equiv\text{softmax}(Y^{ \prime}) \tag{3}\]
For this architecture, without considering the ReLU activation function, [32] shows that \(Y^{\prime}\) will converge to a space spanned by the eigenvectors of \(\hat{A}_{\text{sym}}\) with eigenvalue 1. Taking activation function into consideration, our analyses on (3) can be summarized in the following theorems (see proof in the appendix of [44]).
**Theorem 1**.: Suppose that \(\mathcal{G}\) has \(k\) connected components. Let \(X\in\mathbb{R}^{N\times F}\) be any feature matrix and let \(W_{j}\) be any non-negative parameter matrix with \(\|W_{j}\|_{2}\leq 1\) for \(j=0,1,\ldots\). If \(\mathcal{G}\) has no bipartite components, then in (3), as \(n\rightarrow\infty\), \(\text{rank}(Y^{\prime})\leq k\).
**Theorem 2**.: Suppose the \(n\)-dimensional \(\mathbf{x}\) and \(\mathbf{y}\) are independently sampled from a continuous distribution and the activation function \(\mathrm{Tanh}(z)=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}\) is applied to \([\mathbf{x},\mathbf{y}]\) pointwisely, then
\[\mathbbm{P}(\mathrm{rank}\left(\mathrm{Tanh}([\mathbf{x},\mathbf{y}])\right)=\mathrm{ rank}([\mathbf{x},\mathbf{y}]))=1\]
Theorem 1 shows that, even considering ReLU, if we simply deepen GCN as in (3), the extracted features will degrade under certain conditions, _i.e._\(Y^{\prime}\) only contains the stationary information of the graph structure and loses all the local information in the nodes due to being smoothed. In addition, from the proof we see that the pointwise ReLU transformation is a conspirator. Theorem 2 tells us that Tanh is better at keeping linear independence among column features. We design a numerical experiment on synthetic data to test, under a 100-layer GCN architecture, how activation functions affect the rank of the output in each hidden layer during the feedforward process. As Figure 1(a) shows, the rank of the hidden features decreases rapidly with ReLU, while having little fluctuation under Tanh, and even the identity function performs better than ReLU. So we propose to replace ReLU by Tanh.
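The numerical test can be reproduced with a few lines; the feature width, the weight scale, and the choice of aggregation matrix are our assumptions, not settings reported in the text.

```python
import numpy as np

def rank_vs_depth(A_hat, n_feats=32, depth=100, activation=np.tanh, seed=0):
    """Propagate random features through a deep GCN-style stack and record the
    column rank of the hidden features after every layer."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((A_hat.shape[0], n_feats))
    ranks = []
    for _ in range(depth):
        W = rng.standard_normal((n_feats, n_feats)) / np.sqrt(n_feats)
        Y = activation(A_hat @ Y @ W)
        ranks.append(np.linalg.matrix_rank(Y))
    return ranks

relu = lambda x: np.maximum(x, 0.0)
# compare rank_vs_depth(A_hat, activation=relu) against activation=np.tanh
```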
Besides activation function, to find a way to deepen GCN, we first show that any graph convolution with well-defined analytic spectral filter defined on \(\hat{A}_{\mathrm{sym}}\in\mathbbm{R}^{N\times N}\) can be written as a product of a block Krylov matrix with a learnable parameter matrix in a specific form. Based on this, we propose snowball network and truncated Krylov network.
Figure 1: Changes in the number of independent features with the increment of network depth
Figure 2: Snowball and Truncated Krylov Architectures
We take \(\mathsf{S}=\mathbb{R}^{F\times F}\). Given a set of block vectors \(\{X_{k}\}_{k=1}^{m}\subset\mathbb{R}^{N\times F}\), the \(\mathsf{S}\)-span of \(\{X_{k}\}_{k=1}^{m}\) is defined as \(\text{span}^{\mathsf{S}}\{X_{1},\ldots,X_{m}\}:=\{\sum\limits_{k=1}^{m}X_{k}C_{ k}:C_{k}\in\mathsf{S}\}\). Then, the order-\(m\) block Krylov subspace with respect to the matrix \(A\in\mathbb{R}^{N\times N}\), the block vector \(B\in\mathbb{R}^{N\times F}\) and the vector space \(\mathsf{S}\), and its corresponding block Krylov matrix are respectively defined as
\[\mathcal{K}_{m}^{\mathsf{S}}(A,B)\equiv\text{span}^{\mathsf{S}}\{B,AB,\ldots, A^{m-1}B\},\;\;K_{m}(A,B)\equiv[B,AB,\ldots,A^{m-1}B]\in\mathbb{R}^{N\times mF}.\]
It is shown in [11, 15] that there exists a smallest \(m\) such that for any \(k\geq m\), \(A^{k}B\in\mathcal{K}_{m}^{\mathsf{S}}(A,B)\), where \(m\) depends on \(A\) and \(B\).
Let \(\rho(\hat{A}_{\text{sym}})\) denote the spectral radius of \(\hat{A}_{\text{sym}}\) and suppose \(\rho(\hat{A}_{\text{sym}})<R\), where \(R\) is the radius of convergence of a real analytic scalar function \(g\). Based on the above definitions and conclusions, the graph convolution can be written as
\[g(\hat{A}_{\text{sym}})X=\sum\limits_{n=0}^{\infty}\frac{g^{(n)}(0)}{n!}\hat{A }_{\text{sym}}^{n}X\equiv\left[X,\hat{A}_{\text{sym}}X,\ldots,\hat{A}_{\text{ sym}}^{m-1}X\right]\left[(\Gamma_{0}{}^{\mathsf{S}})^{T},(\Gamma_{1}{}^{\mathsf{S}})^{T}, \cdots,(\Gamma_{m-1}^{\mathsf{S}})^{T}\right]^{T}\equiv K_{m}(\hat{A}_{\text{ sym}},X)\Gamma^{\mathsf{S}} \tag{4}\]
where \(\Gamma_{i}^{\mathsf{S}}\in\mathbb{R}^{F\times F}\) for \(i=0,1,\ldots,m-1\) are parameter matrix blocks and \(\Gamma^{\mathsf{S}}\in\mathbb{R}^{mF\times F}\). Then, a graph convolutional layer can generally be written as
\[g(\hat{A}_{\text{sym}})XW^{\prime}=K_{m}(\hat{A}_{\text{sym}},X)\Gamma^{ \mathsf{S}}W^{\prime}=K_{m}(\hat{A}_{\text{sym}},X)W^{\mathsf{S}} \tag{5}\]
where \(W^{\prime}\in\mathbb{R}^{F\times O}\) is a parameter matrix, and \(W^{\mathsf{S}}\equiv\Gamma^{\mathsf{S}}W^{\prime}\in\mathbb{R}^{mF\times O}\). The essential number of learnable parameters is \(mF\times O\).
The block Krylov form provides an insight into why an architecture that concatenates multi-scale features in each layer will boost the expressive power of GCN. Based on this idea, we propose the snowball and truncated block Krylov architectures [44] shown in Figure 2, where we stack multi-scale information in each layer. The performance comparison on semi-supervised node classification tasks with different label percentages in Table 1 shows that the proposed models consistently perform better than the state-of-the-art models, especially when fewer labeled nodes are available. See detailed experimental results in [44].
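A minimal PyTorch sketch of the snowball idea, where every layer consumes the concatenation of the input features and all previous hidden outputs; the hyperparameters and names are ours, and [44] gives the exact architectures.

```python
import torch
import torch.nn as nn

class Snowball(nn.Module):
    """Each layer stacks the multi-scale features produced so far."""
    def __init__(self, in_dim, hidden, n_layers, n_classes):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(in_dim + i * hidden, hidden) for i in range(n_layers))
        self.out = nn.Linear(in_dim + n_layers * hidden, n_classes)

    def forward(self, A_hat, X):
        feats = [X]
        for layer in self.layers:
            H = torch.cat(feats, dim=1)                 # concatenate all previous scales
            feats.append(torch.tanh(A_hat @ layer(H)))  # Tanh, as argued above
        return self.out(torch.cat(feats, dim=1))        # logits; softmax is in the loss
```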
### Future Works on Over-smoothing
**Weight Initialization for GNNs.** Even without aggregation in each hidden layer, an NN with a deep architecture still suffers from vanishing activation variances and vanishing back-propagated gradient variances [13], which make the training of deep NNs hard. In the last decade, designing new parameter initialization methods has proved to be effective [13, 18] in addressing the variance reduction problem during the feedforward and backpropagation processes. This motivates us to investigate the variance propagation in GNNs and analyze whether the current weight initialization methods are suitable for GNNs. To this end, we can show that the vanishing variance caused by the aggregation operation in GNNs is more serious than in NNs. Designing a new parameter initialization scheme for GNNs is potentially a feasible way to address this problem and empirically achieves promising performance [45]. We will propose a new method in this subsection.
The current initialization scheme of GNNs still follows the Xavier initialization [13], _i.e._\(W_{i}\sim U\left[-\frac{\sqrt{6}}{\sqrt{n_{j}+n_{j+1}}},\frac{\sqrt{6}}{\sqrt{n_{j} +n_{j+1}}}\right]\), or He (or Kaiming) initialization [18], _i.e._\(W_{i}\sim N\left(0,\sqrt{2/n_{i}}\right)\)
which are designed for the traditional multilayer perceptron (MLP), where \(W_{i}\) is the parameter matrix of layer \(i\) and \(n_{i}\) is the number of hidden units of layer \(i\). These two initialization methods are derived by studying the variance propagation between layers during the feedforward and backpropagation processes. In GNNs, these two processes differ by an extra multiplication with the aggregation operator \(\hat{A}\). To analyze the variance propagation, we use a deep GCN as an example with \(\hat{A}=\hat{A}_{\text{rw}}\) and decompose it as follows,
\[\begin{split}& Y_{0}=X,\ H_{1}=\hat{A}_{\text{rw}}XW_{0},\ Y_{1}=f(H_{1}),\ H_{l+1}=\hat{A}_{\text{rw}}Y_{l}W_{l},\ Y_{l+1}=f(H_{l+1}),\ l=1,\ldots,n\\ & Y=\text{softmax}(\hat{A}_{\text{rw}}Y_{n}W_{n})\equiv\text{ softmax}(H_{n+1}),\ \mathcal{L}=-\text{trace}(Z^{T}\text{log}Y)\end{split} \tag{6}\]
where \(H_{l},Y_{l}\in\mathbb{R}^{N\times F_{l}}\), \(W_{l}\in\mathbb{R}^{F_{l}\times F_{l+1}}\); \(Z\in\mathbb{R}^{N\times C}\) is the ground truth matrix with one-hot label vector. Then the gradient propagates in the following way,
\[\frac{\partial\mathcal{L}}{\partial H_{l}}=\frac{\partial\mathcal{L}}{ \partial Y_{l}}\odot f^{\prime}(H_{l}),\ \frac{\partial\mathcal{L}}{\partial W_{l-1}}=Y_{l-1}^{T}\hat{A}_{\text{rw}} \frac{\partial\mathcal{L}}{\partial H_{l}},\ \frac{\partial\mathcal{L}}{ \partial Y_{l-1}}=\hat{A}_{\text{rw}}\frac{\partial\mathcal{L}}{\partial H_{l }}W_{l-1}^{T} \tag{7}\]
**Variance Analysis: Forward View.** Consider element \(i,j\) in matrix \(H_{l+1}\) during the feed-forward process in (6),
\[(H_{l+1})_{ij}=(\hat{A}_{\text{rw}})_{i,:}Y_{l}(W_{l})_{:,j}=\sum_{t=1}^{F_{l}}\sum_{k=1}^{N}(\hat{A}_{\text{rw}})_{ik}\left(Y_{l}\right)_{kt}\left(W_{l}\right)_{t,j},\ Y_{l+1}=f(H_{l+1}),\ l=1,\ldots,n \tag{8}\]
Suppose we have a linear activation function such as that proposed in [60]; each element in \(W_{l}\) is _i.i.d._ initialized with \(E\left((W_{l})_{ij}\right)=0\); \(E\left((Y_{l})_{kt}\right)=0\) and all elements in \(Y_{l}\) are independent. Then, \(\text{Var}\left((Y_{l+1})_{ij}\right)=\text{Var}\left(Y_{l+1}\right)\) can be written as

\[\text{Var}\left(\sum_{t=1}^{F_{l}}\sum_{k=1}^{N}(\hat{A}_{\text{rw}})_{ik}\left(Y_{l}\right)_{kt}\left(W_{l}\right)_{t,j}\right)=\sum_{t=1}^{F_{l}}\sum_{k=1}^{N}\text{Var}\left((\hat{A}_{\text{rw}})_{ik}\left(Y_{l}\right)_{kt}\left(W_{l}\right)_{t,j}\right)=\frac{F_{l}}{d_{i}+1}\text{Var}\left(Y_{l}\right)\text{Var}(W_{l}) \tag{9}\]

\begin{table}
\begin{tabular}{c|cccccc|cccccc|cccc}
 & \multicolumn{6}{c|}{Cora} & \multicolumn{6}{c|}{CiteSeer} & \multicolumn{4}{c}{PubMed} \\
Algorithms & 0.5\% & 1\% & 2\% & 3\% & 4\% & 5\% & 0.5\% & 1\% & 2\% & 3\% & 4\% & 5\% & 0.03\% & 0.05\% & 0.1\% & 0.3\% \\ \hline
LP & 56.4 & 62.3 & 65.4 & 67.5 & 69.0 & 70.2 & 34.8 & 40.2 & 45.6 & 45.3 & 46.4 & 47.3 & 61.4 & 66.4 & 65.4 & 66.8 \\
Cheby & 38.0 & 52.0 & 62.4 & 70.8 & 74.1 & 77.6 & 31.7 & 42.8 & 59.9 & 66.2 & 68.3 & 69.3 & 40.4 & 73.5 & 51.2 & 72.8 \\
Co-training & 56.6 & 66.4 & 73.5 & 75.9 & 78.9 & 80.8 & 47.5 & 57.6 & 62.1 & 62.5 & 64.5 & 65.5 & 62.2 & 68.3 & 72.7 & 78.2 \\
Self-training & 53.7 & 66.1 & 73.8 & 77.2 & 79.4 & 80.0 & 43.3 & 58.1 & 68.2 & 69.8 & 70.4 & 71.0 & 51.9 & 87.6 & 66.8 & 77.0 \\
Union & 58.5 & 69.9 & 75.9 & 78.5 & 80.4 & 81.7 & 46.3 & 59.1 & 66.7 & 67.6 & 67.6 & 68.2 & 58.4 & 64.0 & 70.7 & 79.2 \\
Intersection & 49.7 & 65.0 & 72.9 & 77.1 & 79.4 & 80.2 & 42.9 & 59.1 & 68.6 & 70.1 & 70.8 & 71.2 & 52.0 & 59.3 & 69.7 & 77.6 \\
MultiStage & 61.1 & 63.7 & 74.4 & 76.1 & 77.2 & 53.0 & 78.3 & 68.3 & 68.0 & 69.0 & 57.4 & 64.3 & 70.2 & & & \\
M3S & 61.5 & 67.2 & 76.5 & 77.8 & 78.0 & 56.1 & 62.1 & 66.4 & 70.3 & 70.5 & 59.2 & 64.4 & 70.6 & & & \\
GCN & 42.6 & 56.9 & 67.8 & 49.7 & 77.6 & 79.3 & 38.4 & 46.5 & 62.6 & 66.9 & 68.7 & 69.6 & 46.4 & 49.7 & 56.3 & 76.6 \\
GCN-SVAT & 43.6 & 53.9 & 71.4 & 75.6 & 78.3 & 78.5 & 47.0 & 52.4 & 65.8 & 68.6 & 69.5 & 70.7 & 52.1 & 56.9 & 63.5 & 77.2 \\
GCN-DVAT & 49 & 61.8 & 71.9 & 75.9 & 78.4 & 78.6 & 51.5 & 58.5 & 67.4 & 69.2 & 70.8 & 71.3 & 53.3 & 58.6 & 66.3 & 77.3 \\ \hline
_linear Snowball_ & _67.6_ & _74.6_ & _78.9_ & _80.9_ & _82.3_ & _82.9_ & _56.0_ & _63.4_ & _69.3_ & _70.6_ & **72.5** & **72.6** & **65.5** & _68.5_ & _73.6_ & _79.7_ \\
_Snowball_ & _68.4_ & _73.2_ & _78.4_ & _80.8_ & _82.3_ & _83.0_ & _56.4_ & _63.9_ & _68.7_ & _70.5_ & _71.8_ & _72.8_ & _66.5_ & _68.6_ & _73.2_ & _80.1_ \\
_truncated Krylov_ & _71.8_ & _76.5_ & _80.0_ & _82.0_ & _83.0_ & _94.1_ & _59.9_ & _66.1_ & _69.8_ & _71.3_ & _72.3_ & _73.7_ & **68.7** & _71.4_ & _75.5_ & _80.4_ \\
\end{tabular}
* For each column, the greener the cell, the better the performance; the redder, the worse. If our methods achieve better performance than all others, the corresponding cell is in bold.
\end{table}
Table 1: Accuracy without Validation
Suppose each element in \(Y_{l}\) shares the same variance denoted as \(\text{Var}\left(Y_{l}\right)\). To prevent variance vanishing between layers, _i.e._\(\text{Var}\left(Y_{l+1}\right)=\text{Var}\left(Y_{l}\right)\), from (8) we can approximately have (see computation in Appendix A.2.1)
\[\text{Var}(W_{l})=\frac{d_{i}+1}{F_{l}} \tag{10}\]
This tells us that the variance of \(W_{l}\) depends on the degree of a node, but since the parameter matrix is shared by all nodes, we cannot design a node-specific initialization scheme. Thus, we make a compromise between nodes as follows
\[\text{Var}(W_{l})\approx\frac{\sum\limits_{i=1}^{N}(d_{i}+1)}{NF_{l}}=\frac{1 +\text{average node degree}}{F_{l}} \tag{11}\]
Another way is to use a weighted average, with the node degree as the weight of each node. In this way, we have
\[\text{Var}(W)=\sum\limits_{i=1}^{N}\frac{d_{i}+1}{\sum\limits_{j=1}^{N}(d_{j}+1)}\cdot\frac{d_{i}+1}{F_{l}}=\frac{\sum\limits_{i=1}^{N}(d_{i}+1)^{2}}{\left(\sum\limits_{i=1}^{N}(d_{i}+1)\right)F_{l}} \tag{12}\]
**Variance Analysis: Backward View.** Under the same assumptions as in the forward view, and supposing each element in \(\frac{\partial\mathcal{L}}{\partial H_{l}}\) and \(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\) is independent of the others and has zero mean, from (7) we approximately have (see computation in Appendix A.2.2)
\[\frac{\partial\mathcal{L}}{\partial H_{l}}=\frac{\partial\mathcal{L}}{\partial Y _{l}}=\hat{A}_{\text{rw}}\frac{\partial\mathcal{L}}{\partial H_{l+1}}W_{l}^{ T},\ \frac{\partial\mathcal{L}}{\partial W_{l-1}}=Y_{l-1}^{T}\hat{A}_{\text{rw}}\frac{ \partial\mathcal{L}}{\partial H_{l}} \tag{13}\]
Then,
\[\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)_{ij}=\sum\limits_{t=1}^{F_{l+1}}\sum\limits_{k=1}^{N}(\hat{A}_{\text{rw}})_{ik}\left(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\right)_{kt}(W_{l}^{T})_{t,j} \tag{14}\] \[\left(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\right)_{ij}=(\hat{A}_{\text{rw}}Y_{l-1})_{:,i}^{T}\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)_{:,j}=\sum\limits_{k=1}^{N}\Bigl(\sum\limits_{t=1}^{N}(\hat{A}_{\text{rw}})_{kt}(Y_{l-1})_{ti}\Bigr)\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)_{kj},\]
Thus,
\[\text{Var}\left(\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)_{ij}\right)=\text{Var}\left(\sum\limits_{t=1}^{F_{l+1}}\sum\limits_{k=1}^{N}(\hat{A}_{\text{rw}})_{ik}\left(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\right)_{kt}(W_{l}^{T})_{t,j}\right)=\frac{F_{l+1}}{d_{i}+1}\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\right)\text{Var}\left(W_{l}\right) \tag{15}\] \[\text{Var}\left(\left(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\right)_{ij}\right)=\text{Var}\left(\sum\limits_{k=1}^{N}\Bigl(\sum\limits_{t=1}^{N}(\hat{A}_{\text{rw}})_{kt}(Y_{l-1})_{ti}\Bigr)\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)_{kj}\right)=\left(\sum\limits_{k=1}^{N}\frac{1}{d_{k}+1}\right)\text{Var}\left(Y_{l-1}\right)\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)\]
\[\text{Var}(W_{l})=\frac{\sum\limits_{i=1}^{N}(d_{i}+1)}{NF_{l+1}}\approx\frac {1+\text{average node degree}}{F_{l+1}} \tag{16}\]
From (9) and (15), \(\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)\) and \(\text{Var}\left(Y_{l-1}\right)\) can be approximately written as
\[\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{l}}\right)\approx\frac{NF_{l+1}}{\sum\limits_{i=1}^{N}(d_{i}+1)}\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\right)\text{Var}\left(W_{l}\right)\approx\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\right)\prod\limits_{l^{\prime}=l+1}^{n+1}\frac{NF_{l^{\prime}}}{\sum\limits_{i=1}^{N}(d_{i}+1)}\text{Var}\left(W_{l^{\prime}-1}\right) \tag{17}\] \[\text{Var}\left(Y_{l-1}\right)\approx\frac{NF_{l-2}}{\sum_{k}(d_{k}+1)}\text{Var}\left(Y_{l-2}\right)\text{Var}(W_{l-2})\approx\text{Var}(Y_{0})\prod\limits_{l^{\prime}=0}^{l-2}\frac{NF_{l^{\prime}}}{\sum_{k}(d_{k}+1)}\text{Var}(W_{l^{\prime}})\]
From (15), if each \(\text{Var}(W_{l^{\prime}})\) equals \(\text{Var}(W)\) and each \(F_{l}\) equals \(F\), then
\[\text{Var}\left(\left(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\right)_{ij}\right)\approx\left(\sum\limits_{k=1}^{N}\frac{1}{d_{k}+1}\right)\text{Var}(Y_{0})\,\text{Var}\left(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\right)\left(\frac{NF}{\sum_{k}(d_{k}+1)}\text{Var}(W)\right)^{n} \tag{18}\]
Combined with (11), we can set the variance of the parameter matrix as
\[\text{Var}(W_{l})\approx\frac{2\sum\limits_{i=1}^{N}(d_{i}+1)}{N(F_{l}+F_{l+1} )}=\frac{2(1+\text{average node degree})}{(F_{l}+F_{l+1})} \tag{19}\]
Thus, each element in \(W_{l}\) can be drawn from \(N\left(0,\sqrt{\frac{2(1+\text{average node degree})}{F_{l}+F_{l+1}}}\right)\).
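A short PyTorch sketch of this initialization for one GCN layer; the average degree value in the usage line is an arbitrary example.

```python
import torch
import torch.nn as nn

def gnn_variance_init(linear, avg_degree):
    """Initialize a GCN layer per (19): Var(W) = 2(1 + avg degree) / (F_l + F_{l+1})."""
    fan_out, fan_in = linear.weight.shape            # nn.Linear stores (F_{l+1}, F_l)
    std = (2.0 * (1.0 + avg_degree) / (fan_in + fan_out)) ** 0.5
    with torch.no_grad():
        linear.weight.normal_(0.0, std)

layer = nn.Linear(64, 32, bias=False)
gnn_variance_init(layer, avg_degree=3.5)             # 3.5 is an assumed example value
```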
**Adaptive ReLU (AdaReLU) Activation Function.** To satisfy the assumption that the activation function is linear at the beginning of the training process and to still learn a nonlinear function during training, we design the following adaptive ReLU (AdaReLU) activation function
\[f(\mathbf{x}_{i})=\left\{\begin{array}{ll}\beta_{i}\mathbf{x}_{i},&\text{if }\mathbf{x}_{i}>0\\ \alpha_{i}\mathbf{x}_{i},&\text{if }\mathbf{x}_{i}\leq 0\end{array}\right.\]
where \(\alpha_{i}\) and \(\beta_{i}\) are learnable parameters initialized to 1. If \(\alpha_{i}=\alpha\) and \(\beta_{i}=\beta\) for all \(i\), we have channel-shared AdaReLU; otherwise, we have channel-wise AdaReLU3. Preliminary experimental results indicate that channel-wise AdaReLU works better than the channel-shared variant.
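A minimal PyTorch sketch of AdaReLU follows; the module name and the shape conventions (per-channel slopes broadcast over the last dimension) are our own illustrative choices.

```python
import torch
import torch.nn as nn

class AdaReLU(nn.Module):
    """Adaptive ReLU: f(x) = beta * x for x > 0 and alpha * x for x <= 0.
    alpha and beta are learnable and initialized to 1, so the activation
    starts out as the identity map, matching the linearity assumption used
    in the variance analysis. channel_wise=True gives one (alpha, beta)
    pair per feature channel; channel_wise=False shares a single pair."""
    def __init__(self, num_channels, channel_wise=True):
        super().__init__()
        size = num_channels if channel_wise else 1
        self.alpha = nn.Parameter(torch.ones(size))
        self.beta = nn.Parameter(torch.ones(size))

    def forward(self, x):
        # x has shape (..., num_channels); broadcasting applies the
        # per-channel slopes along the last dimension.
        return torch.where(x > 0, self.beta * x, self.alpha * x)

act = AdaReLU(num_channels=32)    # channel-wise AdaReLU
h = act(torch.randn(128, 32))     # e.g. hidden node features
```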
There exists some experimental evidence [45] that controlling the variance flow through an initialization such as (19) can relieve the performance decrease of deep GCNs, but more tests and hyperparameter tuning are still needed, as is further theoretical analysis of variance propagation.
## 3 GNNs on Heterophily Graphs
GNNs can be considered an extension of basic Neural Networks (NNs) that additionally makes use of the graph structure, based on the relational inductive bias (homophily assumption), rather than treating the nodes as collections of independent and identically distributed (_i.i.d._) samples. Though GNNs are believed to outperform basic NNs on real-world tasks, it has been found that in some cases the graph-aware models have little performance gain or even underperform graph-agnostic models [6, 46, 51, 69, 71]. One of the main reasons for this performance degradation is believed to be heterophily, _i.e._, when connected nodes tend to have different labels [69, 71]. The heterophily challenge has received attention recently, and an increasing number of models have been put forward to analyze [39, 43] and address this problem [6, 36, 41, 42, 64, 69, 70].
In this section, we first introduce the most commonly used homophily metrics in subsection 3.1. Then, in subsection 3.2, we show that not all cases of heterophily are harmful for GNNs and propose new metrics based on a similarity matrix which considers the influence of both graph structure and input features on GNNs. The new metrics demonstrate advantages over the commonly used homophily metrics in tests on synthetic graphs. From the metrics and the observations, we find that some cases of harmful heterophily can be addressed by a diversification operation, whose effectiveness is proved in subsection 3.3. With this fact and knowledge of filterbanks, we propose the Adaptive Channel Mixing (ACM) framework in subsection 3.4 to adaptively exploit the aggregation, diversification and identity channels in each GNN layer to address harmful heterophily. We validate the ACM-augmented baselines on real-world node classification tasks: they consistently achieve significant performance gains and exceed the state-of-the-art GNNs on most of the tasks without incurring significant computational burden. In subsection 3.5, we review prior work on addressing heterophily and explain its differences with the ACM framework. The limitations of the diversification operation and the remaining challenges of heterophily are discussed in subsection 3.6.
### Metrics of Homophily
The metrics of homophily are defined by considering different relations between the node labels and the graph structure defined by the adjacency matrix. There are three commonly used homophily metrics: edge homophily [70, 1], node homophily [51], and class homophily [35]4, defined as follows:
Footnote 4: The authors in [35] did not name this homophily metric. We name it class homophily based on its definition.
\[H_{\text{edge}}(\mathcal{G}) =\frac{\left|\left\{e_{uv}\mid e_{uv}\in\mathcal{E},Z_{u,:}=Z_{v,: }\right\}\right|}{\left|\mathcal{E}\right|},\;\;H_{\text{node}}(\mathcal{G})= \frac{1}{\left|\mathcal{V}\right|}\sum_{v\in\mathcal{V}}\frac{\left|\left\{u \mid u\in\mathcal{N}_{v},Z_{u,:}=Z_{v,:}\right\}\right|}{d_{v}}, \tag{20}\] \[H_{\text{class}}(\mathcal{G}) =\frac{1}{C-1}\sum_{k=1}^{C}\left[h_{k}-\frac{\left|\left\{v \mid Z_{v,k}=1\right\}\right|}{N}\right]_{+},\;\;h_{k}=\frac{\sum_{v\in \mathcal{V}}\left|\left\{u\mid Z_{v,k}=1,u\in\mathcal{N}_{v},Z_{u,:}=Z_{v,:} \right\}\right|}{\sum_{v\in\{v\mid Z_{v,k}=1\}}d_{v}}\]
where \([a]_{+}=\max(a,0)\) and \(h_{k}\) is the class-wise homophily metric [35]. All three metrics take values in \([0,1]\); a value close to \(1\) corresponds to strong homophily, while a value close to \(0\) indicates strong heterophily. \(H_{\text{edge}}(\mathcal{G})\) measures the proportion of edges that connect two nodes in the same class; \(H_{\text{node}}(\mathcal{G})\) evaluates the average proportion of edge-label consistency over all nodes; \(H_{\text{class}}(\mathcal{G})\) tries to avoid the sensitivity to imbalanced classes, which can make \(H_{\text{edge}}\) misleadingly large. The above definitions are all based on graph-label consistency and imply that inconsistency has a harmful effect on the performance of GNNs. With this in mind, we will show a counterexample illustrating the insufficiency of the above metrics and propose new metrics in the following subsection.
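For concreteness, a minimal NumPy sketch of the edge and node homophily metrics in (20) is given below, assuming a directed edge-list representation in which each undirected edge is stored in both directions; the helper names are ours.

```python
import numpy as np

def edge_homophily(edges, labels):
    """H_edge of Eq. (20): fraction of edges joining same-label endpoints.
    edges: int array of shape (E, 2); labels: int array of shape (N,)."""
    u, v = edges[:, 0], edges[:, 1]
    return np.mean(labels[u] == labels[v])

def node_homophily(edges, labels, num_nodes):
    """H_node of Eq. (20): average per-node fraction of same-label neighbors."""
    same = np.zeros(num_nodes)
    deg = np.zeros(num_nodes)
    for u, v in edges:                 # each directed edge counted at its source
        same[u] += labels[u] == labels[v]
        deg[u] += 1
    mask = deg > 0                     # isolated nodes are skipped
    return np.mean(same[mask] / deg[mask])

edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1]])   # both directions stored
labels = np.array([0, 0, 1])
print(edge_homophily(edges, labels), node_homophily(edges, labels, 3))
```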
### Analysis of Heterophily and Aggregation Homophily Metric
Heterophily is believed to be harmful for message-passing based GNNs [6, 51, 70] because, intuitively, features of nodes in different classes will be falsely mixed, rendering the nodes indistinguishable [70]. Nevertheless, this is not always the case; _e.g._, the bipartite graph shown in Figure 3 is highly heterophilous according to the homophily metrics in (20), but after mean aggregation the nodes in classes 1 and 2 merely exchange colors and remain distinguishable. The authors of [6] also point out the insufficiency of \(H_{\text{node}}\), giving examples of different graph topologies with the same \(H_{\text{node}}\) that carry different label information.
To analyze to what extent the graph structure can affect the output of a GNN, we first simplify the GCN by removing its nonlinearity, as in [60]. Let \(\hat{A}\in\mathbb{R}^{N\times N}\) denote a general aggregation operator. Then, equation (1) can be simplified as
\[Y=\text{softmax}(\hat{A}XW)=\text{softmax}(Y^{\prime}) \tag{21}\]
After each gradient descent step \(\Delta W=\gamma\frac{d\mathcal{L}}{dW}\), where \(\gamma\) is the learning rate, the update of \(Y^{\prime}\) is
\[\Delta Y^{\prime}=\hat{A}X\Delta W=\gamma\hat{A}X\frac{d\mathcal{L}}{dW} \propto\hat{A}X\frac{d\mathcal{L}}{dW}=\hat{A}XX^{T}\hat{A}^{T}(Z-Y)=S(\hat{A},X)(Z-Y) \tag{22}\]
where \(S(\hat{A},X)\equiv\hat{A}X(\hat{A}X)^{T}\) is a post-aggregation node similarity matrix, \(Z-Y\) is the prediction error matrix. The update direction of node \(i\) is essentially a weighted sum of the prediction error, _i.e._\(\Delta(Y^{\prime})_{i,:}=\sum_{j\in\mathcal{V}}\left[S(\hat{A},X)\right]_{i,j }(Z-Y)_{j,:}\).
To study the effect of heterophily, we first define the _aggregation similarity score_ as follows.
**Definition 1**.: _Aggregation similarity score_
\[S_{agg}\left(S(\hat{A},X)\right)=\frac{\left|\left\{v\,\middle|\,\operatorname{Mean }_{u}\left(\{S(\hat{A},X)_{v,u}\mid Z_{u,:}=Z_{v,:}\}\right)\geq\operatorname{Mean}_{u}\left(\{S(\hat{A},X)_{v,u}\mid Z_{u,:}\neq Z_{v,:}\} \right)\right\}\right|}{\left|\mathcal{V}\right|} \tag{23}\]
_where \(\operatorname{Mean}_{u}\left(\{\cdot\}\right)\) takes the average over \(u\) of a given multiset of values or variables._
Figure 3: Example of harmless heterophily
\(S_{\text{agg}}(S(\hat{A},X))\) measures the proportion of nodes \(v\in\mathcal{V}\) that put relatively larger similarity weights on nodes in the same class than on nodes in other classes after aggregation. It is easy to see that \(S_{\text{agg}}(S(\hat{A},X))\in[0,1]\), but in practice we observe that most datasets have \(S_{\text{agg}}(S(\hat{A},X))\geq 0.5\). Based on this observation, we rescale (23) to the following modified aggregation similarity for practical usage,
\[S_{\text{agg}}^{M}\left(S(\hat{A},X)\right)=\left[2S_{\text{agg}}\left(S(\hat {A},X)\right)-1\right]_{+} \tag{24}\]
In order to measure the consistency between labels and graph structures without considering node features, and to make a fair comparison with the existing homophily metrics in (20), we define the graph (\(\mathcal{G}\)) aggregation (\(\hat{A}\)) homophily and its modified version as
\[H_{\text{agg}}(\mathcal{G})=S_{\text{agg}}\left(S(\hat{A},Z)\right),\ H_{ \text{agg}}^{M}(\mathcal{G})=S_{\text{agg}}^{M}\left(S(\hat{A},Z)\right) \tag{25}\]
In practice, we only check \(H_{\text{agg}}(\mathcal{G})\) when \(H_{\text{agg}}^{M}(\mathcal{G})=0\). As Figure 3 shows, when \(\hat{A}=\hat{A}_{\text{rw}}\) we have \(H_{\text{agg}}(\mathcal{G})=H_{\text{agg}}^{M}(\mathcal{G})=1\). Thus, the new metric reflects the fact that nodes in classes 1 and 2 are still highly distinguishable after aggregation, while the other metrics mentioned before fail to capture this information and misleadingly give the value 0. This shows the advantage of \(H_{\text{agg}}(\mathcal{G})\) and \(H_{\text{agg}}^{M}(\mathcal{G})\), which additionally consider information from the aggregation operator \(\hat{A}\) and the similarity matrix.
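The following minimal NumPy sketch (function name ours) computes \(H_{\text{agg}}(\mathcal{G})\) and \(H_{\text{agg}}^{M}(\mathcal{G})\) from a dense aggregation operator and one-hot labels, following Definition 1 and (24)-(25).

```python
import numpy as np

def aggregation_homophily(A, Z):
    """H_agg of Eq. (25): fraction of nodes whose mean post-aggregation
    similarity with same-class nodes is >= that with other-class nodes.
    A: (N, N) aggregation operator (e.g. random-walk normalized adjacency
    with self-loops); Z: (N, C) one-hot label matrix."""
    AZ = A @ Z
    S = AZ @ AZ.T                      # post-aggregation similarity S(A, Z)
    same = (Z @ Z.T) > 0               # same[v, u] is True iff Z_v = Z_u
    hits = 0
    for v in range(A.shape[0]):
        mean_same = S[v, same[v]].mean()
        mean_diff = S[v, ~same[v]].mean() if (~same[v]).any() else -np.inf
        hits += mean_same >= mean_diff
    H_agg = hits / A.shape[0]
    return H_agg, max(2 * H_agg - 1, 0)   # (H_agg, modified H_agg^M)
```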
Comparison of Homophily Metrics on Synthetic Graphs. To comprehensively compare \(H_{\text{agg}}^{M}(\mathcal{G})\) with the metrics in (20) in terms of how they reveal the influence of graph structure on the GNN performance, we generate synthetic graphs (\(d\)-regular graphs with edge homophily varied from 0.005 to 0.95) and evaluate SGC with 1-hop aggregation (SGC-1) [60] and GCN [24] on them.
The performance of SGC-1 and GCN is expected to be monotonically increasing under a proper and informative homophily metric. However, Figure 4(a)(b)(c) shows that the performance curves under \(H_{\text{edge}}(\mathcal{G})\), \(H_{\text{node}}(\mathcal{G})\) and \(H_{\text{class}}(\mathcal{G})\) are \(U\)-shaped5, while Figure 4(d) reveals a nearly monotonic curve with only a small numerical perturbation around 1. This indicates that \(H_{\text{agg}}^{M}(\mathcal{G})\) describes how the graph structure affects the performance of SGC-1 and GCN more appropriately and adequately than the existing metrics.
Figure 4: Comparison of baseline performance under different homophily metrics.
### How Diversification Operation Helps with Harmful Heterophily
We first consider the example shown in Figure 5. From \(S(\hat{A},X)\), nodes 1,3 assign relatively large positive weights to nodes in class 2 after aggregation, which makes nodes 1,3 hard to distinguish from nodes in class 2. Despite this, we can still distinguish nodes 1,3 from nodes 4,5,6,7 by considering their neighborhood differences: nodes 1,3 differ from most of their neighbors, while nodes 4,5,6,7 are similar to most of theirs. This indicates that, in some cases, although some nodes become similar after aggregation, they remain distinguishable via their surrounding dissimilarities. This leads us to use the _diversification operation_, _i.e._, the high-pass (HP) filter \(I-\hat{A}\) [10] (introduced in the next subsection), to extract the information of neighborhood differences and address harmful heterophily. As \(S(I-\hat{A},X)\) in Figure 5 shows, nodes 1,3 assign negative weights to nodes 4,5,6,7 after the diversification operation, _i.e._, nodes 1,3 treat nodes 4,5,6,7 as negative samples and will move away from them during backpropagation. Based on this example, we first propose diversification distinguishability as follows, to measure the proportion of nodes for which the diversification operation is potentially helpful.
**Definition 2**.: _Diversification Distinguishability (DD) based on \(S(I-\hat{A},X)\)._
_Given \(S(I-\hat{A},X)\), a node \(v\) is diversification distinguishable if the following two conditions are satisfied at the same time,_
\[\begin{split}\textbf{1.}&\ \mathrm{Mean}_{u}\left(\{S(I-\hat{A},X)_{v,u}\mid u \in\mathcal{V}\wedge Z_{u,:}=Z_{v,:}\}\right)>0;\\ \textbf{2.}&\ \mathrm{Mean}_{u}\left(\{S(I-\hat{A},X)_{v,u }\mid u\in\mathcal{V}\wedge Z_{u,:}\neq Z_{v,:}\}\right)\leq 0\end{split} \tag{26}\]
_Then, graph diversification distinguishability value is defined as_
\[\mathrm{DD}_{\hat{A},X}(\mathcal{G})=\frac{1}{|\mathcal{V}|}\left|\{v|v\ \text{is diversification distinguishable}\}\right| \tag{27}\]
We can see that \(\mathrm{DD}_{\hat{A},X}(\mathcal{G})\in[0,1]\). The effectiveness of the diversification operation can be proved for binary classification problems under certain conditions, based on Definition 2, leading to the following theorem.
**Theorem 3**.: Suppose \(X=Z,\hat{A}=\hat{A}_{\mathrm{rw}}\). Then, for a binary classification problem, _i.e._\(C=2\), all nodes are diversification distinguishable, _i.e._\(\mathrm{DD}_{\hat{A},Z}(\mathcal{G})=1\).
Figure 5: Example of how HP filter addresses harmful heterophily
Theorem 3 theoretically demonstrates the importance of the diversification operation for extracting the high-frequency information of a graph signal [10]. Combined with the aggregation operation, which is a low-pass filter [10, 48], we obtain a filterbank that uses both aggregation and diversification operations to distinctively extract the low- and high-frequency information from graph signals. We introduce filterbanks in the next subsection.
### Filterbank and Adaptive Channel Mixing (ACM) GNN Framework
Filterbank. For the graph signal \(\mathbf{x}\) defined on \(\mathcal{G}\), a 2-channel linear (analysis) filterbank [10]6 includes a pair of low-pass (LP) and high-pass (HP) filters \(H_{\text{LP}},H_{\text{HP}}\), where \(H_{\text{LP}}\) and \(H_{\text{HP}}\) retain the low-frequency and high-frequency content of \(\mathbf{x}\), respectively. Filterbanks with \(H_{\text{LP}}+H_{\text{HP}}=I\) do not lose any information of the input signal, _i.e._, they satisfy the perfect reconstruction property [10].
Footnote 6: In graph signal processing, an additional synthesis filter [10] is required to form the 2-channel filterbank. But the synthesis filter is not needed in our framework, so we do not introduce it in this paper.
However, most existing GNNs use a uni-channel filtering architecture [17, 24, 58], with either the \(H_{\text{LP}}\) or the \(H_{\text{HP}}\) channel, which only partially preserves the input information. Generally, the Laplacian matrices (\(L_{\text{sym}}\), \(L_{\text{rw}}\), \(\hat{L}_{\text{sym}}\), \(\hat{L}_{\text{rw}}\)) can be regarded as HP filters [10] and the affinity matrices (\(A_{\text{sym}}\), \(A_{\text{rw}}\), \(\hat{A}_{\text{sym}}\), \(\hat{A}_{\text{rw}}\)) can be treated as LP filters [48, 16]. Moreover, we consider MLPs as having a special identity filterbank with matrix \(I\), which satisfies \(H_{\text{LP}}+H_{\text{HP}}=I+0=I\).
Filterbank in Spatial Form. Filterbank methods can also be extended to spatial GNNs. Formally, at the node level, left-multiplying \(\mathbf{x}\) by \(H_{\text{LP}}\) and \(H_{\text{HP}}\) performs aggregation and diversification operations, respectively. For example, suppose \(H_{\text{LP}}=\hat{A}\) and \(H_{\text{HP}}=I-\hat{A}\); then for node \(i\) we have
\[(H_{\text{LP}}\mathbf{x})_{i}=\sum_{j\in[\mathcal{N}_{i}\cup i]}\hat{A}_{ij}\mathbf{x _{j}},\ (H_{\text{HP}}\mathbf{x})_{i}=\mathbf{x_{i}}-\sum_{j\in[\mathcal{N}_{i}\cup i]}\hat{A} _{ij}\mathbf{x_{j}} \tag{28}\]
where \(\hat{A}_{i,j}\) is the connection weight between the two nodes. To leverage the HP and identity channels in GNNs, we propose the Adaptive Channel Mixing (ACM) framework, which can be applied to many baseline GNNs. We use GCN as an example and introduce the ACM framework in matrix form. We use \(H_{\text{LP}}\) and \(H_{\text{HP}}\) to represent general LP and HP filters. The ACM framework consists of the following 3 steps.
**Step 1. Feature Extraction for Each Channel:**
Option 1: \(H_{L}^{l}=\text{ReLU}\left(H_{\text{LP}}H^{l-1}W_{L}^{l-1}\right),\ H_{H}^{l}= \text{ReLU}\left(H_{\text{HP}}H^{l-1}W_{H}^{l-1}\right),\ H_{I}^{l}=\text{ ReLU}\left(H^{l-1}W_{I}^{l-1}\right);\)
Option 2: \(H_{L}^{l}=H_{\text{LP}}\text{ReLU}\left(H^{l-1}W_{L}^{l-1}\right),\ H_{H}^{l}=H_{ \text{HP}}\text{ReLU}\left(H^{l-1}W_{H}^{l-1}\right),\ H_{I}^{l}=I\text{ ReLU}\left(H^{l-1}W_{I}^{l-1}\right);\)
\(W_{L}^{l-1},\ W_{H}^{l-1},\ W_{I}^{l-1}\in\mathbb{R}^{F_{l-1}\times F_{l}}\);
**Step 2. Feature-based Weight Learning**
\[\bar{\alpha}_{L}^{l}=\sigma\left(H_{L}^{l}\bar{W}_{L}^{l}\right),\ \bar{\alpha}_{H}^{l}=\sigma\left(H_{H}^{l}\bar{W}_{H}^{l}\right),\ \bar{\alpha}_{I}^{l}=\sigma\left(H_{I}^{l}\bar{W}_{I}^{l}\right),\ \bar{W}_{L}^{l},\ \bar{W}_{H}^{l},\ \bar{W}_{I}^{l}\in \mathbb{R}^{F_{l}\times 1}\]
\[\left[\alpha_{L}^{l},\alpha_{H}^{l},\alpha_{I}^{l}\right]=\text{Softmax}\left( \left[\bar{\alpha}_{L}^{l},\bar{\alpha}_{H}^{l},\bar{\alpha}_{I}^{l}\right]W _{\text{Mix}}^{l}/T\right),\ W_{\text{Mix}}^{l}\in\mathbb{R}^{3\times 3},T\in \mathbb{R}\text{ is the temperature;}\]
**Step 3. Node-wise Channel Mixing:**
\[H^{l}=\left(\text{diag}(\alpha_{L}^{l})H_{L}^{l}+\text{diag}(\alpha_{H}^{l}) H_{H}^{l}+\text{diag}(\alpha_{I}^{l})H_{I}^{l}\right). \tag{29}\]
The framework with option 1 in step 1 is the ACM framework, and with option 2 the ACMII framework. ACM(II)-GCN first implements distinct feature extraction for each of the 3 channels. After processing by a set of filterbanks, 3 filtered components \(H_{L}^{l},H_{H}^{l},H_{I}^{l}\) are obtained. Different nodes may have different needs for the information in the 3 channels; _e.g._, in Figure 5, nodes 1,3 demand high-frequency information while node 2 only needs low-frequency information. To adaptively exploit information from the different channels, ACM(II)-GCN learns row-wise (node-wise), feature-conditioned weights to combine the 3 channels. ACM(II) can easily be plugged into spatial GNNs by replacing \(H_{\text{LP}}\) and \(H_{\text{HP}}\) with the aggregation and diversification operations as in (28).
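For concreteness, the following dense-matrix PyTorch sketch implements one ACM layer with option 1 of step 1; the module name, the choice of a sigmoid for \(\sigma\) in step 2, and the temperature value are our illustrative assumptions, and a practical implementation would use sparse operators.

```python
import torch
import torch.nn as nn

class ACMLayer(nn.Module):
    """One ACM layer (option 1 of step 1): LP, HP and identity channels,
    each with its own weight matrix, mixed with node-wise learned weights."""
    def __init__(self, f_in, f_out, temperature=3.0):
        super().__init__()
        self.W_L = nn.Linear(f_in, f_out, bias=False)   # LP channel weights
        self.W_H = nn.Linear(f_in, f_out, bias=False)   # HP channel weights
        self.W_I = nn.Linear(f_in, f_out, bias=False)   # identity channel
        self.w_L = nn.Linear(f_out, 1, bias=False)      # \bar{W}_L in step 2
        self.w_H = nn.Linear(f_out, 1, bias=False)      # \bar{W}_H
        self.w_I = nn.Linear(f_out, 1, bias=False)      # \bar{W}_I
        self.mix = nn.Linear(3, 3, bias=False)          # W_Mix
        self.T = temperature

    def forward(self, H, A):
        # Step 1 (option 1), with H_LP = A and H_HP = I - A.
        H_L = torch.relu(A @ self.W_L(H))
        HW_H = self.W_H(H)
        H_H = torch.relu(HW_H - A @ HW_H)               # (I - A) H W_H
        H_I = torch.relu(self.W_I(H))
        # Step 2: feature-based, node-wise channel weights.
        a = torch.sigmoid(torch.cat([self.w_L(H_L), self.w_H(H_H),
                                     self.w_I(H_I)], dim=1))      # (N, 3)
        alpha = torch.softmax(self.mix(a) / self.T, dim=1)
        # Step 3: node-wise channel mixing.
        return alpha[:, :1] * H_L + alpha[:, 1:2] * H_H + alpha[:, 2:] * H_I

N = 5
layer = ACMLayer(f_in=8, f_out=4)
out = layer(torch.randn(N, 8), torch.eye(N))   # identity A as a trivial example
```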
Complexity. The number of learnable parameters in layer \(l\) of ACM(II)-GCN is \(3F_{l-1}(F_{l}+1)+9\), compared with \(F_{l-1}F_{l}\) in GCN. The computation of steps 1-3 takes \(NF_{l}(8+6F_{l-1})+2F_{l}(\text{nnz}(H_{\text{LP}})+\text{nnz}(H_{\text{HP}}))+18N\) flops, while a GCN layer takes \(2NF_{l-1}F_{l}+2F_{l}\,\text{nnz}(H_{\text{LP}})\) flops, where \(\text{nnz}(\cdot)\) is the number of non-zero elements.
Performance Comparison. We implement SGC [60] with 1 hop and 2 hops (SGC-1, SGC-2), GCNII [5], GCNII* [5], GCN [24] and snowball networks with 2 and 3 layers (snowball-2, snowball-3), and apply them in the ACM or ACMII framework, using \(\hat{A}_{\text{rw}}\) as the LP filter with corresponding HP filter \(I-\hat{A}_{\text{rw}}\). We compare them with several baseline and SOTA GNN models: MLP with 2 layers (MLP-2), GAT [58], APPNP [25], GPRGNN [6], H\({}_{2}\)GCN [70], MixHop [1], GCN+JK [24, 35, 63], GAT+JK [35, 58, 63], FAGCN [2], GraphSAGE [17] and Geom-GCN [51]. Besides the 9 benchmark datasets _Cornell_, _Wisconsin_, _Texas_, _Film_, _Chameleon_, _Squirrel_, _Cora_, _Citeseer_ and _Pubmed_ used in [51], we further test the above models on a new benchmark dataset, _Deezer-Europe_, proposed in [35]. On each dataset used in [51], we run the models 10 times, following the same early stopping strategy, the same random data splitting method and the Adam [23] optimizer used in GPRGNN [6]. For _Deezer-Europe_, we run the models 5 times with the same early stopping strategy, the same fixed splits and AdamW [37], as in [35].
To better visualize the performance boost and the comparison with SOTA models, in Figure 6 we plot bar charts of the test accuracy of the SOTA models, 3 selected baselines (GCN, snowball-2, snowball-3) and their ACM- and ACMII-augmented models on the 6 most commonly used benchmark heterophily datasets (see [40] for the full results and comparison). We can see that, after being applied in the ACM or ACMII framework, the performance of the 3 baseline models is significantly boosted on all tasks and achieves SOTA performance. Especially on _Cornell_, _Texas_, _Film_ and _Squirrel_, the augmented models significantly outperform the current SOTA models. Overall, this suggests that the ACM and ACMII frameworks can help GNNs generalize better on node classification tasks on heterophilous graphs.
### Prior Work
In this part, we discuss relevant work on GNNs addressing the heterophily challenge. The authors of [1] acknowledge the difficulty of learning on graphs with weak homophily and propose MixHop to extract features from multi-hop neighborhoods to gather more information. Geom-GCN [51] precomputes unsupervised node embeddings and uses the graph structure defined by geometric relationships in the embedding space to define a bi-level aggregation process. The authors of [20] propose measurements based on feature smoothness and label smoothness that can potentially help guide GNNs in dealing with heterophilous graphs. H\({}_{2}\)GCN [70] combines 3 key designs to address heterophily: (1) ego- and neighbor-embedding separation; (2) higher-order neighborhoods; (3) combination of intermediate representations. CPGNN [69] models label correlations by the compatibility matrix, which is beneficial in heterophily settings, and uses the compatibility matrix to propagate a prior belief estimation into GNNs. FBGNN [47] first proposed to use a filterbank to address the heterophily problem, but it does not fully explain the insights behind HP filters and includes neither the identity channel nor the node-wise channel mixing mechanism. FAGCN [2] learns edge-level aggregation
Figure 6: Comparison of SOTA models (magenta), selected baseline GNNs (red) and their ACM (green) and ACMII (blue) augmented models on 6 selected datasets. The black line and the error bar indicate the standard deviation. The symbol “\(\uparrow\)” denotes the improvement of the best ACM- or ACMII-augmented baseline over the SOTA models.
weights as GAT [58] does, but allows the weights to be negative, which enables the network to capture the high-frequency components of graph signals. GPRGNN [6] uses learnable weights that can be both positive and negative for feature propagation; this allows GPRGNN to adapt to the heterophily structure of the graph and to handle both the high- and low-frequency parts of the graph signals.
### Future Work
Limitation of the diversification operation. The diversification operation does not work well in all harmful heterophily cases. For example, consider an imbalanced dataset where several small clusters with distinctive labels are densely connected to one large cluster. In this case, the surrounding differences of nodes in the small clusters are similar, _i.e._, the neighborhood differences mainly come from their connections to the same large cluster, and this can make the diversification operation fail to discriminate them. Thus, the ACM framework is not able to handle all heterophily cases.
From Figure 4, we can see that GNNs consistently perform well in the high-homophily regime, revealing that all homophily cases are helpful. This suggests that, instead of using a fixed adjacency matrix, we can learn a new adjacency matrix with a different homophily level. With this in mind, we design an architecture with an additional adjacency learner, as shown in Figure 7: instead of a fixed predefined adjacency matrix, we learn an adjacency matrix whose edges reveal the label similarity between nodes, _i.e._, homophily. This adjacency learner should ideally be trained end-to-end. Preliminary experimental results (not included in this report) for a GCN with a pretrained adjacency learner suggest that this method is promising, although there are some stability issues that need to be fixed.
Exploring Different Ways of Adjacency Candidate Selection. Some tricks can be explored when selecting the adjacency candidates for the adjacency learner:
* Sample or select (top-\(k_{1}\)) nodes from the complementary graph, put them together with the predefined neighborhood set to form the adjacency candidate set, then sample or select (top-\(k_{2}\)) adjacency candidates for training; try to train this end-to-end.
* Consider modeling the candidate selection process as a multi-armed bandit problem, and find an efficient way to learn to select good candidates from the complementary graph; pseudo-counts can be used to prevent selecting the same nodes repeatedly.
Figure 7: GNN with adjacency learner
## 4 Graph Representation Learning for Reinforcement Learning
### Markov Decision Process (MDP)
An MDP is a framework for modeling the process by which an agent learns from interaction with its environment [56, 67, 68]. The interaction happens in discrete time steps \(t=0,1,2,3,\cdots\). At step \(t\), given a state \(S_{t}=s_{t}\in\mathcal{S}\), the agent picks an action \(a_{t}\in\mathcal{A}(s_{t})\) according to a policy \(\pi(\cdot|s_{t})\), a rule for choosing actions given a state. Then, at time \(t+1\), the environment dynamics \(p:\mathcal{S}\times\mathcal{R}\times\mathcal{A}\times\mathcal{S}\to[0,1]\) take the agent to a new state \(S_{t+1}=s_{t+1}\in\mathcal{S}\) and provide a numerical reward \(R_{t+1}=r_{t+1}(s_{t},a_{t},s_{t+1})\in\mathbb{R}\). Such a sequence of interactions gives a trajectory \(\tau=\{S_{0},A_{0},R_{1},S_{1},A_{1},R_{2},S_{2},A_{2},R_{3},\cdots\}\). The objective is to find an optimal policy maximizing the expected long-term discounted cumulative reward \(V_{\pi}(s)=E_{\pi}[\sum\limits_{k=0}^{\infty}\gamma^{k}R_{t+k+1}|S_{t}=s]\) for each state \(s\), where \(\gamma\) is the discount factor.
For a given policy \(\pi\), solving its value function \(\mathbf{V}_{\pi}\) is equivalent to solving the following linear system,
\[\mathbf{V}_{\pi}=\mathbf{r}_{\pi}+\gamma P_{\pi}\mathbf{V}_{\pi} \tag{30}\]
where \(\mathbf{V}_{\pi}=[V_{\pi}(s)]_{s\in\mathcal{S}}^{T}\in\mathbb{R}^{|\mathcal{S}|}, \mathbf{r}_{\pi}=[r_{\pi}(s)]_{s\in\mathcal{S}}^{T}\in\mathbb{R}^{|\mathcal{S}|}, P_{\pi}=[P_{\pi}(s^{\prime}|s)]_{s^{\prime},s\in\mathcal{S}}\in\mathbb{R}^{| \mathcal{S}|\times|\mathcal{S}|}\). The state transition matrix \(P_{\pi}\) essentially defines a graph structure over the states, and the reward vector \(\mathbf{r}_{\pi}\) is a signal defined on this graph. Thus, solving for the value function can be considered a (supervised or semi-supervised) node regression task over the graph. Besides solving for \(\mathbf{V}_{\pi}\), the graph structure can also be used for reward propagation and representation learning in Reinforcement Learning (RL) [26, 27, 28].
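For a small finite MDP, (30) can be solved directly as a linear system; a minimal NumPy sketch follows (the toy chain and all numbers are ours, for illustration only).

```python
import numpy as np

def policy_evaluation(P_pi, r_pi, gamma=0.9):
    """Solve the linear system (30): V = r + gamma * P_pi V,
    i.e. (I - gamma * P_pi) V = r_pi, on the state graph induced by pi.
    Invertibility holds since gamma < 1 and P_pi is a stochastic matrix."""
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

# Toy 3-state chain: the state-to-state transition matrix defines the graph.
P_pi = np.array([[0.5, 0.5, 0.0],
                 [0.0, 0.5, 0.5],
                 [0.0, 0.0, 1.0]])
r_pi = np.array([0.0, 0.0, 1.0])
print(policy_evaluation(P_pi, r_pi))
```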
### Graph Representation Learning for MDP
Treating an MDP as a graph is an old but never outdated idea. Traditional methods use the graph Laplacian, for a fixed policy, to estimate \(\mathbf{V}_{\pi}\), _e.g._, the proto-value function [49]. Beyond value function estimation, [28] proposes to use a GCN to learn potential-based reward shaping, which can accelerate the agent's learning process.
Both of the above methods construct the graph from sampled trajectory data. With modern Graph Representation Learning (GRL) methods, _e.g._, node embedding methods [3] and link prediction methods [52, 54, 57], we can learn to reconstruct the underlying graph (adjacency matrix) from sampled data more efficiently. Moreover, label propagation [53], a commonly used algorithm for graph semi-supervised learning, can be helpful for efficient reward propagation. In subsection 4.3, we introduce the potential of using GRL for reward propagation and representation learning in reinforcement learning.
### Reinforcement Learning with Graph Representation Learning
In this section, we show how to represent a Markov Decision Process (MDP) with a graph and introduce two possible ways of using graph representation learning to address problems defined on MDPs.
Each state can be treated as a node of a graph, the transition probability between each pair of nodes (an element of the state transition matrix) can be represented by the edge weight between them, and the value function is a function defined on the nodes of the graph. The details (for finite MDPs) are introduced in matrix form as follows [38, 59]:
* Denote the \(|S||A|\times|S|\) environment transition matrix by \(P\), where \[P_{(sa,s^{\prime})}=\sum_{r}p\left(s^{\prime},r|s,a\right)\] (31) and \(P_{(sa,s^{\prime})}\geq 0\), \(\sum_{s^{\prime}}P_{(sa,s^{\prime})}=1\), for all \(s,a\). Note that \(P\) is not a square matrix.
* We rewrite the policy \(\pi\) as an \(|S|\times|S||A|\) matrix \(\Pi\), where \(\Pi_{(s,s^{\prime}a)}=\pi(a|s)\) if \(s^{\prime}=s\) and 0 otherwise: \[\Pi=\text{diag}(\pi(\cdot|s_{1})^{T},\cdots,\pi(\cdot|s_{|S|})^{T})\] (32) where \(\pi(\cdot|s_{i})^{T}\) is an \(|A|\)-dimensional row vector. From this definition, one can quickly verify that the matrix product \(\Pi P\) gives the \(|S|\times|S|\) state-to-state transition matrix \(P_{\pi}\) (asymmetric) induced by the policy \(\pi\) in the environment \(P\), and the \(|S||A|\times|S||A|\) matrix product \(P\Pi\) gives the state-action-to-state-action transition matrix \(P^{\prime}_{\pi}\) (asymmetric) induced by the policy \(\pi\) in the environment \(P\).
* We denote the \(|S||A|\times 1\) reward vector as \(\mathbf{r}\), whose entry \(r_{(sa)}\) specifies the reward obtained when taking action \(a\) in state \(s\), i.e. \[\mathbf{r}_{(sa)}=E[r|s,a]=\sum_{s^{\prime}\in\mathcal{S}}P_{sa,s^{\prime}}\cdot r (s,a,s^{\prime}).\] (33)
* The state value function and state-action value function can be represented by \[\mathbf{V}_{\pi}=\sum_{i=0}^{\infty}\gamma^{i}(\Pi P)^{i}\Pi\mathbf{r}=\Pi\mathbf{r}+ \gamma\Pi P\mathbf{V}_{\pi}\in\mathbb{R}^{|S|\times 1},\ \mathbf{Q}_{\pi}=\sum_{i=0}^{\infty} \gamma^{i}(P\Pi)^{i}\mathbf{r}=\mathbf{r}+\gamma P\Pi\mathbf{Q}_{\pi}\in\mathbb{R}^{|S||A| \times 1}\] (34)
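A minimal NumPy sketch of these matrix-form quantities for a toy 2-state, 2-action MDP follows; all numbers are illustrative, and the script simply instantiates (31)-(34).

```python
import numpy as np

# Toy MDP with |S| = 2 states and |A| = 2 actions.
S, A = 2, 2
P = np.array([[0.9, 0.1],    # row (s1, a1)
              [0.2, 0.8],    # row (s1, a2)
              [0.7, 0.3],    # row (s2, a1)
              [0.1, 0.9]])   # row (s2, a2): shape (|S||A|, |S|), Eq. (31)
pi = np.array([[0.5, 0.5],   # pi(.|s1)
               [1.0, 0.0]])  # pi(.|s2)

# Build Pi, the |S| x |S||A| block-diagonal policy matrix of Eq. (32).
Pi = np.zeros((S, S * A))
for s in range(S):
    Pi[s, s * A:(s + 1) * A] = pi[s]

P_state = Pi @ P    # |S| x |S| state-to-state transition matrix P_pi
P_sa = P @ Pi       # |S||A| x |S||A| state-action transition matrix

gamma = 0.9
r = np.array([0.0, 1.0, 0.0, 0.5])                       # reward vector, Eq. (33)
V = np.linalg.solve(np.eye(S) - gamma * P_state, Pi @ r)   # Eq. (34)
Q = np.linalg.solve(np.eye(S * A) - gamma * P_sa, r)       # Eq. (34)
print(V, Q)
```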
#### 4.3.1 Learn Reward Propagation as Label Propagation
The sampling process from an MDP can be considered a random walk on a graph, because the relation (edge) between each pair of states is essentially a transition probability7. Discovering the underlying graph of an MDP can help us leverage the correlations between states to learn the value function or to explore efficiently in sparse-reward environments.
Footnote 7: From this perspective, we should not treat the trajectory as sequential data, because we do not necessarily have an ordered relation between states on a graph, even for a directed graph. Although the observations appear to be ordered, we actually only have transition relations.
Usually, the graph is constructed from trajectory data, _i.e._, pairwise state transition data, but once we update the policy we need to reconstruct the graph. With graph embedding methods for link prediction tasks, _e.g._, DeepWalk [52], node2vec [14] and LINE [57], we can learn graph reconstruction by inferring unobserved transitions. More specifically, instead of learning \(P_{\pi}(s^{\prime}|s)\) for a fixed policy \(\pi\), we can learn the state-action transition probability \(P(s^{\prime}|s,a)\), which is independent of \(\pi\). In this way, we can make use of all historical trajectory data regardless of policy changes, and once a policy is given, we can infer the graph by combining \(\pi(a|s)\) and \(P(s^{\prime}|s,a)\).
#### 4.3.2 Graph Embedding as Auxiliary Task for Representation Learning
Learning auxiliary tasks has been shown to be helpful for state representation learning [22], which is critical for learning a good policy. Among these methods, the successor representation has been shown to be theoretically and empirically important for learning good state representations [8, 29]. Modeling the successor triplet \((s,a,s^{\prime})\) of an MDP is essentially equivalent to modeling the (head, relation, tail) triplet in a knowledge graph, and many algorithms from the knowledge graph embedding community address the triplet embedding problem, _e.g._, TransE [3], RotatE [55], QuatE [65] and DihEdral [62]. These methods can be borrowed to learn richer representations for RL tasks.
|
2307.04486 | Normal approximation of Random Gaussian Neural Networks | In this paper we provide explicit upper bounds on some distances between the (law of the) output of a random Gaussian NN and (the law of) a random Gaussian vector. Our results concern both shallow random Gaussian neural networks with univariate output and fully connected and deep random Gaussian neural networks, with a rather general activation function. The upper bounds show how the widths of the layers, the activation functions and other architecture parameters affect the Gaussian approximation of the output. Our techniques, relying on Stein's method and integration by parts formulas for the Gaussian law, yield estimates on distances which are indeed integral probability metrics, and include the total variation and the convex distances. These latter metrics are defined by testing against indicator functions of suitable measurable sets, and so allow for accurate estimates of the probability that the output is localized in some region of the space. Such estimates have a significant interest both from a practitioner's and a theorist's perspective. | Nicola Apollonio, Daniela De Canditiis, Giovanni Franzina, Paola Stolfi, Giovanni Luca Torrisi | 2023-07-10T11:19:56Z | http://arxiv.org/abs/2307.04486v2 | # Normal approximation of random Gaussian neural networks
###### Abstract.
In this paper we provide explicit upper bounds on some distances between the (law of the) output of a random Gaussian neural network and (the law of) a random Gaussian vector. Our main results concern deep random Gaussian neural networks, with a rather general activation function. The upper bounds show how the widths of the layers, the activation function and other architecture parameters affect the Gaussian approximation of the output. Our techniques, relying on Stein's method and integration by parts formulas for the Gaussian law, yield estimates on distances which are indeed integral probability metrics, and include the convex distance. This latter metric is defined by testing against indicator functions of measurable convex sets, and so allows for accurate estimates of the probability that the output is localized in some region of the space. Such estimates are of significant interest from both a practitioner's and a theorist's perspective.
Key words and phrases:Gaussian approximation, Neural Networks, Stein's method 2020 Mathematics Subject Classification: 60F05, 68T07 \({}^{\ast}\)Corresponding author, [email protected] \({}^{\ddagger}\)Istituto per le Applicazioni del Calcolo "M. Picone", CNR
## 1. Introduction
This work is part of the literature studying random neural networks (NNs for short), i.e., NNs whose biases and weights are random variables. In the context of modern deep learning, the interest in these types of networks is twofold: on the one hand, they naturally constitute a prior in a Bayesian approach, on the other hand they may represent the initialization of gradient flows in empirical risk minimization, see [17] for a general reference on the subject.
Within the boundaries of this topic, many contributions in the literature have been handling the asymptotic Gaussianity of random NNs, as the number of neurons in the hidden layers tends to infinity. A seminal paper is [13], where the output of a shallow (i.e., having a single hidden layer) random NN, viewed as a stochastic process on the sphere, is shown to converge to a Gaussian process, as the number of neurons in the hidden layer grows large. From that point onward, many sophisticated results have been published for deep (i.e., having more than one hidden layer) random NNs. The starting point of our work is [9], where the output of a deep random NN, viewed as a random element on the space of continuous functions on a compact set, is proved to converge to a Gaussian process, as the number of neurons in all the hidden layers tends to infinity. Other related contributions are [11, 21, 22].
Recently, the problem of the quantitative Gaussian approximation of the output of a random NN has received a lot of attention. For instance, [7], exploiting Wasserstein distances, provides quantitative versions of the results in [13] when the activation function is polynomial, ReLU or hyperbolic tangent; see [1, 5, 10] for other important related contributions. In particular, the reference [1] shares with our work the idea of applying Stein's method for Gaussian approximation in the context of deep NNs, although in a different mathematical setting.
A significant achievement is provided in [2] where, for the first time in the literature, a quantitative proof of the Gaussian behavior of the output of a deep random Gaussian NN (i.e., a random NN whose biases and weights are Gaussian distributed) with a Lipschitz continuous activation function is given; in [2] the distance from Gaussianity is measured by means of the 2-Wasserstein metric, which comes from the Monge-Kantorovich problem with quadratic cost. As far as shallow random Gaussian NNs with univariate output are concerned, we mention the recent manuscript [4], which provides quantitative bounds on the Kolmogorov, the total variation and the 1-Wasserstein distances between the output and a Gaussian random variable, when the activation function is sufficiently smooth and has sub-polynomial growth. A special mention deserves the independently written paper [8], where Stein's method is used to obtain tight probabilistic bounds for various distances between the output (and its derivatives) of a deep random Gaussian NN and a Gaussian random variable. We refer the reader to Remarks 4.2, 6.2 and 6.4 for comparisons between our results and the corresponding achievements in [8].
The main contribution of this paper concerns the Gaussian approximation of the output of deep random Gaussian NNs in the convex and 1-Wasserstein distances, under mild assumptions on the activation function (which, differently from [2], can be non-Lipschitz). A specialization of these results clearly provides approximations for shallow random Gaussian NNs with univariate output. However, in this specific case we furnish direct proofs, which (for various technical reasons) give the same rates under more general assumptions on the activation function.
For shallow random Gaussian NNs with univariate output, combining the Stein method for the Gaussian approximation with the integration by parts formula for the Gaussian law, we provide explicit bounds for the Kolmogorov, the total variation and the 1-Wasserstein distances between the output and a Gaussian random variable, under a minimal assumption on the activation function (see Theorem 4.1). Remarkably, we obtain the same rate of convergence as in [4], as the number of
neurons in the hidden layer grows large, our constants being presumably better than the ones in [4] (see Table 1). For deep random Gaussian NNs, the novelty of our results is that we measure the error in the Gaussian approximation of the output in terms of the convex distance (see Theorem 6.1) and of the 1-Wasserstein distance (see Theorem 6.3), for a class of activation functions which includes the family of Lipschitz continuous functions (see Proposition 5.2\((ii)\)). Remarkably, for both the convex and the 1-Wasserstein distances the rate of convergence that we obtain is of the same order as the one in [2], as the number of neurons in all the hidden layers tends to infinity. The proofs of Theorems 6.1 and 6.3 are based on the Stein method for the multivariate Gaussian approximation and the integration by parts formula for the multivariate Gaussian law. The presence of more than one hidden layer complicates the derivation of the bounds, which rely on a key estimate for the \(L^{2}\)-distance between the so-called collective observables and their limiting values (see Theorem 5.1). We emphasize that, when considering the convex distance, an expedient tool is provided by a smoothing lemma that we borrow from [16].
It is well known that localizing the output of a random NN, i.e., having control over the probability that the output lies in a region of the space (belonging to a large class of measurable sets), is of valuable interest for practitioners. From a theorist's point of view, the output distribution of a NN is often analytically intractable, and computing the probability that the output belongs to some measurable set amounts to performing a "heroic" mathematical integration (see e.g. [17, p. 49]). Our Theorems 4.1\((ii)\) and 6.1 offer insights into the localization problem in a simple and efficient way, for the univariate output of a shallow random Gaussian NN and for the output of a deep random Gaussian NN, respectively. We refer the reader to Section 7 for some numerical illustrations of this issue.
The paper is organized as follows. In Section 2 we introduce our toolkit such as the integral probability metrics considered in the paper and some preliminaries on the Stein method. In Section 3, first we introduce all the NNs considered in this work and then we give a brief overview of the main results in [2] and [4], comparing them with our achievements. In Section 4 we give upper bounds for the Kolmogorov, the total variation and the 1-Wasserstein distances between the (univariate) output of a shallow random Gaussian NN and a Gaussian random variable. In Section 5 we prove the aforementioned key estimate on the \(L^{2}\)-distance between the collective observables and their limiting values. In Section 6 we furnish explicit upper bounds on the convex and the 1-Wasserstein distances between the output of a deep random Gaussian NN and a Gaussian random vector. Finally, in Section 7 we present some numerical illustrations concerning the above mentioned issue of the output localization.
## 2. Preliminaries
In the present section we introduce some notation and we recall some results that will be of use throughout the paper.
### Distances between probability measures
In this paper we consider various distances between probability measures on \(\mathbb{R}^{d}\), \(d\in\mathbb{N}:=\{1,2,\ldots\}\): the total variation distance, the convex distance, the Kolmogorov distance, and the \(p\)-Wasserstein distances. Hereon, the symbol \(\|\cdot\|_{d}\) denotes the Euclidean norm on \(\mathbb{R}^{d}\).
**Definition 2.1**.: _The total variation distance between the laws of two \(\mathbb{R}^{d}\)-valued random vectors \(\mathbf{X}\) and \(\mathbf{Y}\), written \(d_{TV}(\mathbf{X},\mathbf{Y})\), is given by_
\[d_{TV}(\mathbf{X},\mathbf{Y}):=\sup_{B\in\mathcal{B}(\mathbb{R}^{d})}|\mathbb{ P}(\mathbf{X}\in B)-\mathbb{P}(\mathbf{Y}\in B)|,\]
_where \(\mathcal{B}(\mathbb{R}^{d})\) denotes the Borel \(\sigma\)-field on \(\mathbb{R}^{d}\)._
**Definition 2.2**.: _The convex distance between the laws of two \(\mathbb{R}^{d}\)-valued random vectors \(\mathbf{X}\) and \(\mathbf{Y}\), written \(d_{c}(\mathbf{X},\mathbf{Y})\), is given by_
\[d_{c}(\mathbf{X},\mathbf{Y}):=\sup_{g\in\mathcal{I}_{d}}|\mathbb{E}[g(\mathbf{ X})]-\mathbb{E}[g(\mathbf{Y})]|,\]
_where \(\mathcal{I}_{d}\) denotes the collection of all indicator functions of measurable convex sets in \(\mathbb{R}^{d}\)._
**Definition 2.3**.: _The Kolmogorov distance between the laws of two \(\mathbb{R}^{d}\)-valued random vectors \(\mathbf{X}=(X_{1},\ldots,X_{d})\) and \(\mathbf{Y}=(Y_{1},\ldots,Y_{d})\), written \(d_{K}(\mathbf{X},\mathbf{Y})\), is given by_
\[d_{K}(\mathbf{X},\mathbf{Y}):=\sup_{\mathbf{y}=(y_{1},\ldots,y_{d})\in\mathbb{ R}^{d}}|\mathbb{P}(X_{1}\leq y_{1},\ldots,X_{d}\leq y_{d})-\mathbb{P}(Y_{1} \leq y_{1},\ldots,Y_{d}\leq y_{d})|.\]
**Definition 2.4**.: _For \(p\in[1,+\infty)\), the \(p\)-Wasserstein distance between the laws of two \(\mathbb{R}^{d}\)-valued random vectors \(\mathbf{X}\) and \(\mathbf{Y}\), written \(d_{W_{p}}(\mathbf{X},\mathbf{Y})\), is given by_
\[d_{W_{p}}(\mathbf{X},\mathbf{Y}):=\inf_{(\mathbf{U},\mathbf{V})\in C_{( \mathbf{X},\mathbf{Y})}}\mathbb{E}[\|\mathbf{U}-\mathbf{V}\|_{d}^{p}]^{1/p},\]
_where \(C_{(\mathbf{X},\mathbf{Y})}\) is the family of all the couplings of \(\mathbf{X}\) and \(\mathbf{Y}\), i.e., the family of all random vectors \((\mathbf{U},\mathbf{V})\) such that \(\mathbf{U}\) is distributed as \(\mathbf{X}\) and \(\mathbf{V}\) is distributed as \(\mathbf{Y}\)._
Clearly, by Jensen's inequality \(d_{W_{1}}\leq d_{W_{2}}\), and it follows directly by the definitions that \(d_{K}\leq d_{c}\leq d_{TV}\). Furthermore, for all \(s=TV,c,K,W_{p}\), if \(d_{s}(\mathbf{Y}_{n},\mathbf{Y})\to 0\), as \(n\to+\infty\), where \(\mathbf{Y}_{n}\), \(n\in\mathbb{N}\), and \(\mathbf{Y}\) are random vectors with values in \(\mathbb{R}^{d}\), then \(\mathbf{Y}_{n}\) converges in law to \(\mathbf{Y}\), as \(n\to+\infty\) (see e.g. [14, 20]).
In view of the Kantorovich-Rubinstein duality (see Theorem 5.10 and Eq. (5.11) in [20]), the \(1\)-Wasserstein distance between the laws of two \(\mathbb{R}^{d}\)-valued random vectors \(\mathbf{X}\) and \(\mathbf{Y}\) such that \(\max\{\mathbb{E}\|\mathbf{X}\|_{d},\mathbb{E}\|\mathbf{Y}\|_{d}\}<\infty\), satisfies the relation
\[d_{W_{1}}(\mathbf{X},\mathbf{Y})=\sup_{g\in\mathcal{L}_{d}(1)}|\mathbb{E}[g( \mathbf{X})]-\mathbb{E}[g(\mathbf{Y})]|, \tag{2.1}\]
where \(\mathcal{L}_{d}(1)\) indicates the collection of all functions \(g:\mathbb{R}^{d}\to\mathbb{R}\) which are Lipschitz continuous with Lipschitz constant less than or equal to \(1\).
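In dimension one, the duality (2.1) has a convenient empirical counterpart: for two samples of equal size, the optimal coupling matches sorted order statistics, so \(d_{W_{1}}\) reduces to the mean absolute difference of the sorted samples. A minimal NumPy sketch for illustration follows; it is not part of the paper's formal development.

```python
import numpy as np

def w1_empirical(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1-d samples:
    in dimension one the optimal coupling pairs sorted order statistics."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = rng.normal(loc=0.1, size=10_000)
print(w1_empirical(x, y))   # close to 0.1, the W1 distance between the laws
```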
Since \(d_{c}\) is defined by testing against indicator functions of Borel convex sets rather than arbitrary Borel sets, the convex distance can be expected to be easier to estimate than the total variation distance; moreover, the convex distance is more flexible than the Kolmogorov distance: for example, it enjoys a number of invariance properties not satisfied by \(d_{K}\), see [3].
As for the relation between the convex distance and the optimal transport metric \(d_{W_{1}}\), it turns out that the convex distance to a fixed centered Gaussian law is bounded from above by a multiple of the square root of the \(1\)-Wasserstein distance. More precisely, one has the following Proposition 2.5, which is proved in [15].
Here and henceforth, we denote by \(\mathbf{N}_{\Sigma}=(N_{1},\ldots,N_{d})\), \(d\in\mathbb{N}\), a centered Gaussian vector with invertible covariance matrix \(\Sigma=(\Sigma_{ij})_{1\leq i,j\leq d}\).
**Proposition 2.5**.: _For any \(d\)-dimensional random vector \(\mathbf{Y}\), we have_
\[d_{c}(\mathbf{Y},\mathbf{N}_{\Sigma})\leq 2\sqrt{2}\Gamma(\Sigma)^{1/2}d_{W_{1} }(\mathbf{Y},\mathbf{N}_{\Sigma})^{1/2},\]
_where \(\Gamma(\Sigma)\) is the constant defined by_
\[\Gamma(\Sigma):=\sup_{Q,\,\epsilon>0}\frac{\mathbb{P}(\mathbf{N}_{\Sigma}\in Q ^{\epsilon})-\mathbb{P}(\mathbf{N}_{\Sigma}\in Q)}{\epsilon}, \tag{2.2}\]
_where \(Q\) ranges over all the Borel measurable convex subsets of \(\mathbb{R}^{d}\), and \(Q^{\epsilon}\) denotes the set of all elements of \(\mathbb{R}^{d}\) whose Euclidean distance from \(Q\) does not exceed \(\epsilon\)._
\(\Gamma(\Sigma)\) is in fact an isoperimetric constant. Indeed, if \(Q\) is a bounded convex Borel set, then it is easily seen that
\[\sup_{\epsilon>0}\frac{\mathbb{P}(\mathbf{N}_{\Sigma}\in Q^{\epsilon})- \mathbb{P}(\mathbf{N}_{\Sigma}\in Q)}{\epsilon}=\int_{\partial Q}h_{\Sigma} \left(\mathbf{x},\boldsymbol{\nu}_{\mathbf{x}}^{Q}\right)d\,\mathscr{H}^{d-1}\,,\]
where \(\mathscr{H}^{d-1}\) is the \((d-1)\)-dimensional Hausdorff measure, for \(\mathscr{H}^{d-1}\)-a.e. boundary point \(\mathbf{x}\) we are denoting by \(\boldsymbol{\nu}_{\mathbf{x}}^{Q}\) the outward unit normal to \(\partial Q\) at \(\mathbf{x}\), and we have set
\[h_{\Sigma}(\mathbf{x},\boldsymbol{\nu}):=(2\pi)^{-d/2}e^{-\frac{|\mathbf{x}| ^{2}}{2}}\|\Sigma^{-\frac{1}{2}}\boldsymbol{\nu}\|_{d}\,, \qquad\text{for all }(\mathbf{x}\,,\boldsymbol{\nu})\in\mathbb{R}^{d}\times \mathbb{S}^{d-1},\]
where \(\mathbb{S}^{d-1}\) is the unit sphere of \(\mathbb{R}^{d}\). Thus, \(\Gamma(\Sigma)\) is the worst (largest) possible anisotropic Gaussian perimeter for a convex body in \(\mathbb{R}^{d}\).
We refer the reader to [12] for the following bound
\[e^{-\frac{5}{4}}d^{1/4}\leq\Gamma(\Sigma)\leq(2\pi)^{-\frac{1}{4}}d^{1/4}. \tag{2.3}\]
### The one-dimensional Stein equation
Throughout this paper, we denote by \(\mathcal{N}(\mu,\eta)\) the one-dimensional Gaussian law with mean \(\mu\in\mathbb{R}\) and variance \(\eta>0\), and let \(Z\sim\mathcal{N}(0,1)\).
The celebrated Stein equation for the one-dimensional Normal approximation [18] is given by
\[g(w)-\mathbb{E}g(Z)=f_{g}^{\prime}(w)-wf_{g}(w), \tag{2.4}\]
where \(g:\mathbb{R}\to\mathbb{R}\) is a measurable function such that \(\mathbb{E}|g(Z)|<\infty\) and \(f_{g}:\mathbb{R}\to\mathbb{R}\) is unknown. The following lemma holds, see e.g. Proposition 3.2.2, Theorem 3.3.1, Theorem 3.4.2 and Proposition 3.5.1 in [14]. See [6] for an introduction on the Stein method.
Hereon, for a Lipschitz continuous function \(g:\mathbb{R}^{d}\to\mathbb{R}\) we denote by \(\mathrm{Lip}(g)\) the Lipschitz constant of \(g\), and for a function \(f:\mathbb{R}^{d}\to\mathbb{R}\) we denote by \(\|f\|_{\infty}\) the supremum norm of \(f\).
**Lemma 2.6**.: _The following claims hold:_
\((i)\) _For any \(y\in\mathbb{R}\), the Stein equation (_2.4_) with \(g(w):=\mathbf{1}_{(-\infty,y]}(w)\) has a unique solution \(f_{g}\) and \(\|f_{g}^{\prime}\|_{\infty}\leq 1\)._
\((ii)\) _Let \(g:\mathbb{R}\to[0,1]\) be a measurable function. Then there exists a unique solution \(f_{g}\) of the Stein equation (_2.4_) and \(\|f_{g}^{\prime}\|_{\infty}\leq 2\)._
\((iii)\) _Let \(g:\mathbb{R}\to\mathbb{R}\) be a Lipschitz continuous function. Then there exists a unique solution \(f_{g}\) of the Stein equation (_2.4_) and \(\|f_{g}^{\prime}\|_{\infty}\leq\mathrm{Lip}(g)\sqrt{2/\pi}\)._
### The multidimensional Stein equation
Throughout this paper, given a sufficiently smooth function \(f:\mathbb{R}^{d}\to\mathbb{R}\), we define
\[\partial_{i_{1}i_{2}\ldots i_{n}}^{n}f(x_{1},\ldots,x_{d}):=\frac{\partial^{n} f}{\partial x_{i_{1}}\ldots\partial x_{i_{n}}}(x_{1},\ldots,x_{d}).\]
Let \(\mathcal{M}_{d\times d}(\mathbb{R})\), \(d\in\mathbb{N}\), be the set of \(d\times d\) real matrices. For a function \(f\in C^{2}(\mathbb{R}^{d})\), we denote by \(\mathrm{Hess}\,f(\mathbf{y})\in\mathcal{M}_{d\times d}(\mathbb{R})\) the Hessian matrix of \(f\) at \(\mathbf{y}\in\mathbb{R}^{d}\) and by \(\|\cdot\|_{op}\) the operator norm on \(\mathcal{M}_{d\times d}(\mathbb{R})\), i.e., for any \(\Gamma\in\mathcal{M}_{d\times d}(\mathbb{R})\), \(\|\Gamma\|_{\mathrm{op}}:=\sup_{\mathbf{y}:\ \|\mathbf{y}\|_{d}=1}\|\Gamma\mathbf{y}\|_{d}\). We consider the Hilbert-Schmidt inner product and the Hilbert-Schmidt norm on \(\mathcal{M}_{d\times d}(\mathbb{R})\), which are defined, respectively, by
\[\langle\Gamma,\Psi\rangle_{H.S.}:=\mathrm{Tr}(\Gamma\Psi^{\top})=\sum_{i,j=1}^ {d}\Gamma_{ij}\Psi_{ij}\quad\text{and}\quad\|\Gamma\|_{H.S.}=\sqrt{\langle \Gamma,\Gamma\rangle_{H.S.}}\]
for every pair of matrices \(\Gamma=(\Gamma_{ij})_{1\leq i,j\leq d}\) and \(\Psi=(\Psi_{ij})_{1\leq i,j\leq d}\), where the symbols \(\operatorname{Tr}(\Gamma)\) and \(\Gamma^{\top}\) denote, respectively, the trace and the transpose of the matrix \(\Gamma\).
The Stein equation for multivariate Normal approximation is defined as
\[g(\mathbf{y})-\mathbb{E}[g(\mathbf{N}_{\Sigma})]=\langle\mathbf{y},\nabla f_{ g}(\mathbf{y})\rangle_{d}-\langle\Sigma,\operatorname{Hess}f_{g}(\mathbf{y}) \rangle_{H.S.},\quad\mathbf{y}\in\mathbb{R}^{d} \tag{2.5}\]
where \(g:\mathbb{R}^{d}\to\mathbb{R}\) is given and \(f_{g}\) is unknown. Throughout this paper the symbol \(\langle\cdot,\cdot\rangle_{d}\) denotes the inner product in \(\mathbb{R}^{d}\).
The following lemmas provide solutions to Stein's equation (2.5), under different assumptions on \(g\).
**Lemma 2.7**.: _Let \(g\in\mathcal{L}_{d}(1)\). Then the function_
\[f_{g}(\mathbf{y}):=\int_{0}^{\infty}\mathbb{E}[g(\mathbf{N}_{\Sigma})-g( \mathrm{e}^{-t}\mathbf{y}+\sqrt{1-\mathrm{e}^{-2t}}\mathbf{N}_{\Sigma})]\, \mathrm{d}t,\quad\mathbf{y}\in\mathbb{R}^{d}\]
_is such that: \(f_{g}\in C^{2}(\mathbb{R}^{d})\), \(f_{g}\) solves (2.5), \(f_{g}\) satisfies_
\[\|\partial_{i}f_{g}\|_{\infty}\leq 1,\quad\text{for any $i=1,\ldots,d$} \tag{2.6}\]
_and_
\[\sup_{\mathbf{y}\in\mathbb{R}^{d}}\|\mathrm{Hess}f_{g}(\mathbf{y})\|_{H.S.} \leq\sqrt{d}\|\Sigma^{-1}\|_{op}\|\Sigma\|_{op}^{1/2}.\]
**Lemma 2.8**.: _For \(g:\mathbb{R}^{d}\to\mathbb{R}\) measurable and bounded, define the smoothed function_
\[g_{t}(\mathbf{y}):=\mathbb{E}[g(\sqrt{t}\mathbf{N}_{\Sigma}+\sqrt{1-t}\mathbf{ y})],\quad\mathbf{y}\in\mathbb{R}^{d}, \tag{2.7}\]
_where \(t\in(0,1)\) is a smoothing parameter. Then: \((i)\) For any \(t\in(0,1)\), the function_
\[f_{t,g}(\mathbf{y}):=\frac{1}{2}\int_{t}^{1}\frac{1}{1-s}\mathbb{E}[g(\sqrt{s} \mathbf{N}_{\Sigma}+\sqrt{1-s}\mathbf{y})-g(\mathbf{N}_{\Sigma})]\,\mathrm{d }s,\quad\mathbf{y}\in\mathbb{R}^{d}\]
_is such that: \(f_{t,g}\in C^{2}(\mathbb{R}^{d})\), \(f_{t,g}\) solves (2.5) with \(g_{t}\) in place of \(g\), and_
\[\|\partial_{i}f_{t,g}\|_{\infty}\leq\|g\|_{\infty}\sqrt{\frac{1-t}{t}}\sum_{ \ell,j=1}^{d}(\Sigma^{-1/2})_{\ell j}(\Sigma^{-1/2})_{\ell i}\sqrt{\Sigma_{jj} },\quad\text{for any $i=1,\ldots,d$.} \tag{2.8}\]
\((ii)\) _For any \(d\)-dimensional random vector \(\mathbf{Y}\) it holds_
\[\sup_{g\in\mathcal{I}_{d}}\mathbb{E}\|\mathrm{Hess}(f_{t,g}(\mathbf{Y}))\|_{H. S.}^{2}\leq\|\Sigma^{-1}\|_{op}^{2}(d^{2}(\log t)^{2}d_{c}(\mathbf{Y}, \mathbf{N}_{\Sigma})+530d^{17/6}),\quad\text{for any }t\in(0,1).\]
See Proposition 4.3.2 in [14] for Lemma 2.7; in particular in [14] it is noticed that
\[\partial_{i}f_{g}(\mathbf{y})=-\int_{0}^{\infty}\mathrm{e}^{-t}\mathbb{E} \left[\partial_{i}g(\mathrm{e}^{-t}\mathbf{y}+\sqrt{1-\mathrm{e}^{-2t}} \mathbf{N}_{\Sigma})\right]\mathrm{d}t,\quad i=1,\ldots,d\]
which, combined with the fact that \(g\in\mathcal{L}_{d}(1)\), gives the bound (2.6); see [16] p. 12 and Proposition 2.3 for Lemma 2.8, and Lemma 3.6\((ii)\) in [19] for the bound (2.8).
### The smoothing lemma and the integration by parts formula for Gaussian random vectors
We state a remarkable smoothing lemma for the convex distance proved in [16], see Lemma 2.2 therein. It plays a crucial role in the proof of the Normal approximation of the output of a deep random Gaussian NN in the metric \(d_{c}\), see Theorem 6.1.
**Lemma 2.9**.: _Let \(\mathbf{Y}\) be a \(d\)-dimensional random vector. Then, for any \(t\in(0,1)\),_
\[d_{c}(\mathbf{Y},\mathbf{N}_{\Sigma})\leq\frac{4}{3}\sup_{g\in\mathcal{I}_{d} }|\mathbb{E}[g_{t}(\mathbf{Y})-g_{t}(\mathbf{N}_{\Sigma})]|+\frac{20d}{\sqrt{2 }}\frac{\sqrt{t}}{1-t},\]
_where \(g_{t}\) is defined by (2.7)._
We recall the Gaussian integration by parts formula (we refer the reader to Exercise 3.1.4 in [14] for the Part \((i)\) of Lemma 2.10 and to Exercise 3.1.5 in [14] for the Part \((ii)\) of Lemma 2.10).
**Lemma 2.10**.: _The following claims hold: \((i)\)\(N\sim\mathscr{N}(\mu,\eta)\) if and only if, for any differentiable function \(g:\mathbb{R}\to\mathbb{R}\) such that \(\mathbb{E}|g^{\prime}(N)|<\infty\), we have \(\mathbb{E}(N-\mu)g(N)=\eta\mathbb{E}g^{\prime}(N)\). \((ii)\) Let \(g\in C^{1}(\mathbb{R}^{d})\) with bounded first partial derivatives. Then_
\[\mathbb{E}[N_{i}g(\mathbf{N}_{\Sigma})]=\sum_{j=1}^{d}\Sigma_{ij}\mathbb{E}[ \partial_{j}g(\mathbf{N}_{\Sigma})],\quad\text{for any $i=1,\ldots,d$.}\]
## 3. Random neural networks
We let \(L\in\mathbb{N}\), we take \(L+2\) positive integers \(n_{0},\ldots,n_{L+1}\in\mathbb{N}\), and we fix a function \(\sigma:\mathbb{R}\to\mathbb{R}\). A fully connected NN of depth \(L\) with input dimension \(n_{0}\), output dimension \(n_{L+1}\), hidden layer widths \(n_{1},\ldots,n_{L}\) and non-linearity \(\sigma\) is a mapping
\[\mathbf{x}:=(x_{1},\ldots,x_{n_{0}})\in\mathbb{R}^{n_{0}}\mapsto\mathbf{z}^{(L +1)}(\mathbf{x})=(z_{1}^{(L+1)}(\mathbf{x}),\ldots,z_{n_{L+1}}^{(L+1)}( \mathbf{x}))\in\mathbb{R}^{n_{L+1}}\]
that is defined by a recursive relation of the form
\[z_{i}^{(1)}(\mathbf{x}) =b_{i}^{(1)}+\sum_{j=1}^{n_{0}}W_{ij}^{(1)}x_{j},\quad i=1,\ldots,n_{1}\] \[z_{i}^{(\ell)}(\mathbf{x}) =b_{i}^{(\ell)}+\sum_{j=1}^{n_{\ell-1}}W_{ij}^{(\ell)}\sigma(z_{ j}^{(\ell-1)}(\mathbf{x})),\quad i=1,\ldots,n_{\ell},\,\ell=2,\ldots,L+1\]
where the parameters \(b_{i}^{(\ell)}\in\mathbb{R}\) and \(W_{ij}^{(\ell)}\in\mathbb{R}\) are called network biases and weights, respectively. The quantities \(L\) and \(n_{0},\ldots,n_{L+1}\) constitute the so-called network architecture. The function \(\sigma\) is usually called activation function. NNs of this kind will be denoted by
\[\text{NN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{ W})\,,\]
where \(\mathbf{n}_{L}:=(n_{1},\ldots,n_{L})\), \(\mathbf{b}:=(b_{i}^{(\ell)})\) and \(\mathbf{W}:=(W_{ij}^{(\ell)})\).
We say that the neural network \(\text{NN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{ W})\) is a (fully connected and) deep random Gaussian neural network, denoted by
\[\text{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{ W})\,,\]
if \(\sigma:\mathbb{R}\to\mathbb{R}\) is measurable and \(b_{i}^{(\ell)},W_{ij}^{(\ell)}\), \(i=1,\ldots,n_{\ell}\), \(j=1,\ldots,n_{\ell-1}\), \(\ell=1,\ldots,L+1\), are independent random variables with
\[b_{i}^{(\ell)}\sim\mathcal{N}(0,C_{b})\quad\text{and}\quad W_{ij}^{(\ell)}\sim \mathcal{N}(0,C_{W}/n_{\ell-1}),\quad\ell=1,\ldots,L+1\]
for positive constants \(C_{b},C_{W}>0\).
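Since the definition is fully constructive, a realization of a deep random Gaussian NN can be sampled directly. The following minimal Python sketch (ours, assuming only numpy; the name `gnn_forward` is not from the literature) draws the biases and weights with the prescribed variances and evaluates the recursion; `widths` collects \((n_{0},n_{1},\ldots,n_{L+1})\).

```python
import numpy as np

def gnn_forward(x, widths, sigma, C_b=1.0, C_W=1.0, rng=None):
    """One realization of GNN(L, n_0, n_{L+1}, n_L, sigma, x, b, W):
    biases ~ N(0, C_b), weights in layer l ~ N(0, C_W / n_{l-1})."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(x, dtype=float)
    for l in range(1, len(widths)):
        W = rng.normal(0.0, np.sqrt(C_W / widths[l - 1]),
                       size=(widths[l], widths[l - 1]))
        b = rng.normal(0.0, np.sqrt(C_b), size=widths[l])
        # for l >= 2 the non-linearity acts on the previous pre-activations
        z = b + W @ (z if l == 1 else sigma(z))
    return z

relu = lambda t: np.maximum(t, 0.0)   # the ReLu activation
out = gnn_forward([0.5, -0.5, 0.5, -0.5], [4, 100, 100, 1], relu)  # L = 2
```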
NNs of depth \(L=1\) are called shallow NNs. We shall denote shallow NNs (respectively, shallow random Gaussian NNs) by \(\text{NN}(1,n_{0},n_{2},n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) (respectively, by \(\text{GNN}(1,n_{0},n_{2},n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\)).
Throughout this paper we will also consider NNs with univariate output, i.e., NNs with \(n_{L+1}=1\).
Consider a deep random Gaussian neural network \(\text{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b}, \mathbf{W})\). It turns out that the random variables \(z_{i}^{(1)}=z_{i}^{(1)}(\mathbf{x})\), \(i=1,\ldots,n_{1}\), are independent and identically distributed with
\[z_{i}^{(1)}\sim\mathcal{N}\left(0,C_{b}+\frac{C_{W}}{n_{0}}\sum_{j=1}^{n_{0}} x_{j}^{2}\right).\]
For \(\ell=1,\ldots,L\), let \(\mathcal{F}_{\ell}\) be the \(\sigma\)-field generated by the random variables
\[\{b_{i}^{(h)},W_{ij}^{(h)},\quad i=1,\ldots,n_{h},\,j=1,\ldots,n_{h-1},\,h=1, \ldots,\ell\}.\]
By construction, for any fixed \(\ell\in\{2,\ldots,L+1\}\), given \(\mathcal{F}_{\ell-1}\), the random variables \(z_{i}^{(\ell)}=z_{i}^{(\ell)}(\mathbf{x})\), \(i=1,\ldots,n_{\ell}\), are independent and Gaussian (as linear combination of independent Gaussian random variables). A straightforward computation yields
\[\mathbb{E}[z_{i}^{(\ell)}\,|\,\mathcal{F}_{\ell-1}]=0,\quad i=1,\ldots,n_{\ell}\]
and
\[\mathbb{E}[|z_{i}^{(\ell)}|^{2}\,|\,\mathcal{F}_{\ell-1}]=C_{b}+\frac{C_{W}}{ n_{\ell-1}}\sum_{j=1}^{n_{\ell-1}}|\sigma(z_{j}^{(\ell-1)})|^{2},\quad i=1, \ldots,n_{\ell}. \tag{3.1}\]
Setting \(\mathbf{n}_{\ell}=(n_{1},\ldots,n_{\ell})\), \(\ell=1,\ldots,L\), we define the quantities
\[\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}:=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{ \ell}}\sigma(z_{j}^{(\ell)})^{2},\quad\ell=1,\ldots,L\]
and
\[\mathcal{O}^{(\ell)}:=\mathbb{E}\left[\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{ O}^{(\ell-1)}}\right)^{2}\right],\quad\ell=1,\ldots,L \tag{3.2}\]
where
\[\mathcal{O}^{(0)}:=\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}x_{j}^{2}\quad\text{and} \quad Z\sim\mathcal{N}(0,1). \tag{3.3}\]
In the literature, the random variable \(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}\) is often referred to as the collective observable at layer \(\ell\).
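The recursion (3.2)-(3.3) only involves one-dimensional Gaussian expectations, so the limiting observables \(\mathcal{O}^{(\ell)}\) are cheap to evaluate numerically. A minimal sketch (ours, assuming numpy; for specific activations the expectation can of course be computed in closed form) is:

```python
import numpy as np

def limit_observables(x, L, sigma, C_b=1.0, C_W=1.0, n_mc=10**6, rng=None):
    """The deterministic sequence O^{(1)}, ..., O^{(L)} of (3.2)-(3.3); the
    one-dimensional Gaussian expectation is estimated by Monte Carlo on Z."""
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal(n_mc)
    O = np.mean(np.asarray(x, dtype=float) ** 2)      # O^{(0)}, see (3.3)
    out = []
    for _ in range(L):
        O = np.mean(sigma(Z * np.sqrt(C_b + C_W * O)) ** 2)
        out.append(O)
    return out
```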
### Some related literature
Consider the output \(\mathbf{z}^{(L+1)}:=(z_{1}^{(L+1)},\ldots,z_{n_{L+1}}^{(L+1)})\) of a deep random Gaussian neural network \(\mathrm{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) and let
\[\mathbf{z}:=(z_{1},\ldots,z_{n_{L+1}})\]
be a centered \(n_{L+1}\)-dimensional Gaussian random vector with (invertible) covariance matrix
\[\Sigma_{n_{L+1}}:=(C_{b}+C_{W}\mathcal{O}^{(L)})\mathrm{Id}_{n_{L+1}}, \tag{3.4}\]
where \(\mathrm{Id}_{n_{L+1}}\) is the identity matrix of \(\mathcal{M}_{n_{L+1}\times n_{L+1}}(\mathbb{R})\). It follows from Theorem 1.2 in [9] (which indeed, more generally, establishes a functional weak convergence) that, if \(\sigma\) is continuous and polynomially bounded, then
\[\mathbf{z}^{(L+1)}\to\mathbf{z}\quad\text{in law, as }\min\{n_{1},\ldots,n_{L}\} \to+\infty. \tag{3.5}\]
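The convergence (3.5) is easy to visualize by simulation: for wide hidden layers, the sampled outputs at a fixed input should be approximately \(\mathcal{N}(0,C_{b}+C_{W}\mathcal{O}^{(L)})\). A quick empirical check (ours, re-using `gnn_forward`, `relu` and `limit_observables` from the sketches above, with \(C_{b}=C_{W}=1\)):

```python
import numpy as np

rng = np.random.default_rng(0)
x, widths = [0.1, 0.1, 0.1, 0.1], [4, 500, 500, 1]        # L = 2, n_3 = 1
samples = np.array([gnn_forward(x, widths, relu, rng=rng)[0]
                    for _ in range(20000)])
O_L = limit_observables(x, L=2, sigma=relu, rng=rng)[-1]
print(samples.var(), 1.0 + 1.0 * O_L)   # empirical vs. limiting variance
```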
The following result for shallow random Gaussian NNs is proved in [4], see Theorem 3.2 therein.
**Theorem 3.1**.: _Let \(\mathrm{GNN}(1,n_{0},1,n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) be a shallow random Gaussian NN with univariate output. If_
\[\sigma\in C^{2}(\mathbb{R})\text{ and }\max\{|\sigma(x)|,|\sigma^{\prime}(x)|,| \sigma^{\prime\prime}(x)|\}\leq r_{1}+r_{2}|x|^{\gamma}\text{, }x\in\mathbb{R} \tag{3.6}\]
_for some \(r_{1},r_{2},\gamma\geq 0\), then_
\[d_{s}(z^{(2)},z)\leq c_{s}\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}+(C_{ b}+C_{W}\mathcal{O}^{(0)})^{2}(2+\sqrt{3(1+2(C_{b}+C_{W}\mathcal{O}^{(0)})+3(C_{ b}+C_{W}\mathcal{O}^{(0)})^{2})})}\\ \times\|r_{1}+r_{2}|Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}}|^{ \gamma}\|_{L^{4}}^{2}\frac{1}{\sqrt{n_{1}}}, \tag{3.7}\]
_where \(s=TV,K,W_{1}\) and_
\[c_{TV}:=\frac{4}{C_{b}+C_{W}\mathcal{O}^{(1)}},\quad c_{K}:=\frac{2}{C_{b}+C_{W}\mathcal{O}^{(1)}},\quad c_{W_{1}}:=\frac{1}{\sqrt{C_{b}+C_{W}\mathcal{O}^{(1)}}}\sqrt{8/\pi}.\]
In Section 4, we will give bounds on the quantities \(d_{s}(z^{(2)},z)\), \(s=TV,K,W_{1}\), of order \(1/\sqrt{n_{1}}\), as \(n_{1}\to\infty\), under a minimal assumption on the activation function (note that Condition (3.6) excludes the important case of the ReLu function, i.e., \(\sigma(x):=x\mathbf{1}\{x\geq 0\}\)), see Theorem 4.1. In Section 6 we will give two general bounds on \(d_{c}(\mathbf{z}^{(L+1)},\mathbf{z})\) and \(d_{W_{1}}(\mathbf{z}^{(L+1)},\mathbf{z})\) for deep random Gaussian NNs, see Theorems 6.1 and 6.3. When specialized to shallow random Gaussian neural networks \(\mathrm{GNN}(1,n_{0},n_{2},n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\), they provide computable bounds respectively on \(d_{c}(\mathbf{z}^{(2)},\mathbf{z})\) and \(d_{W_{1}}(\mathbf{z}^{(2)},\mathbf{z})\) of order \(1/\sqrt{n_{1}}\), as \(n_{1}\to\infty\), see Theorem 3.3 in [4] for a related result.
The first result in literature which quantifies the convergence in distribution (3.5) with \(L\geq 2\) is given in [2], where the following theorem has been proved.
**Theorem 3.2**.: _Let \(\mathrm{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b}, \mathbf{W})\) be a deep random Gaussian NN, and suppose that the activation function \(\sigma\) is Lipschitz continuous. Then_
\[d_{W_{2}}(\mathbf{z}^{(L+1)},\mathbf{z})\leq\sqrt{n_{L+1}}\sum_{i=1}^{L}\frac{ C^{(i+1)}[\mathrm{Lip}(\sigma)\sqrt{C_{W}}]^{L-i}}{\sqrt{n_{i}}},\]
_where, for any \(i=1,\ldots,L\), \(C^{(i+1)}\) are explicitly known positive constants, depending upon \(\sigma\), \(\mathbf{x}\), \(C_{b}\) and \(C_{W}\)._
The next corollary is an immediate consequence of Theorem 3.2, the fact that \(d_{W_{1}}\leq d_{W_{2}}\), Proposition 2.5 and (2.3).
**Corollary 3.3**.: _Let the assumptions and notation of Theorem 3.2 prevail. Then_
\[d_{W_{1}}(\mathbf{z}^{(L+1)},\mathbf{z})\leq\sqrt{n_{L+1}}\sum_{i=1}^{L}\frac{C^ {(i+1)}[\mathrm{Lip}(\sigma)\sqrt{C_{W}}]^{L-i}}{\sqrt{n_{i}}} \tag{3.8}\]
_and_
\[d_{c}(\mathbf{z}^{(L+1)},\mathbf{z})\leq 2^{\frac{11}{8}}\pi^{-\frac{1}{8}}n_ {L+1}^{\frac{3}{8}}\left(\sum_{i=1}^{L}\frac{C^{(i+1)}[\mathrm{Lip}(\sigma) \sqrt{C_{W}}]^{L-i}}{\sqrt{n_{i}}}\right)^{1/2}. \tag{3.9}\]
Theorems 6.1 and 6.3 will provide, under more general assumptions on \(\sigma\) (see Proposition 5.2\((ii)\)), explicit bounds respectively on \(d_{c}(\mathbf{z}^{(L+1)},\mathbf{z})\) and \(d_{W_{1}}(\mathbf{z}^{(L+1)},\mathbf{z})\) which are of the same order as the bound in (3.8), as \(n_{1},\ldots,n_{L}\to\infty\). In particular, the bound on the convex distance of Theorem 6.1 considerably improves the one in (3.9).
**Remark 3.4**.: Although not strictly related to our results, the pioneering papers [7] and [13] deserve a special mention. In [13] the author considers a random shallow \(NN(1,n_{0},1,n_{1},\sigma,\mathbf{x},\mathbf{0},\mathbf{W})\) with univariate output, \(b_{i}^{(1)}:=0\), for all \(i=1,\ldots,n_{1}\), \(b_{1}^{(2)}:=0\), \(W_{ij}^{(1)}\), \(i=1,\ldots,n_{1}\), \(j=1,\ldots,n_{0}\), independent with law \(\mathcal{N}(0,1)\) and \(W_{1j}^{(2)}\), \(j=1,\ldots,n_{1}\), independent, identically distributed with law \(\mathbb{P}(W_{1j}^{(2)}=\pm 1/\sqrt{n_{1}})=1/2\) and independent of the random variables \(W_{ij}^{(1)}\). It is proved in [13] that there exists a Gaussian process \(G\) on \(\mathbb{S}^{n_{0}-1}\) (the unit sphere in \(\mathbb{R}^{n_{0}}\)) such that the process \(\{z^{(2)}(\mathbf{x})\}_{\mathbf{x}\in\mathbb{S}^{n_{0}-1}}\) converges in distribution to \(G\), as \(n_{1}\to\infty\). Quantitative versions of this result (for various Wasserstein metrics and some specific choices of \(\sigma\)) are provided in [7].
## 4. Normal approximation of shallow random Gaussian NNs with univariate output
The following theorem holds.
**Theorem 4.1**.: _Let \(\mathrm{GNN}(1,n_{0},1,n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) be a shallow random Gaussian NN with univariate output, and assume that the activation function \(\sigma\) is such that_
\[0<\mathbb{V}\mathrm{ar}(\sigma(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}})^{2})<\infty. \tag{4.1}\]
_Then:_
\((i)\)__
\[d_{K}(z^{(2)},z)\leq\frac{C_{W}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(Z\sqrt{C_{b }+C_{W}\mathcal{O}^{(0)}})^{2})}}{C_{b}+C_{W}\mathbb{E}\sigma(Z\sqrt{C_{b}+C_{ W}\mathcal{O}^{(0)}})^{2}}\frac{1}{\sqrt{n_{1}}}.\]
\((ii)\)__
\[d_{TV}(z^{(2)},z)\leq 2\frac{C_{W}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(Z\sqrt{C_{b }+C_{W}\mathcal{O}^{(0)}})^{2})}}{C_{b}+C_{W}\mathbb{E}\sigma(Z\sqrt{C_{b}+C_{ W}\mathcal{O}^{(0)}})^{2}}\frac{1}{\sqrt{n_{1}}}.\]
\((iii)\)__
\[d_{W_{1}}(z^{(2)},z)\leq\sqrt{2/\pi}\frac{C_{W}\sqrt{\mathbb{V}\mathrm{ar}( \sigma(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}})^{2})}}{\sqrt{C_{b}+C_{W}\mathbb{ E}\sigma(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}})^{2}}}\frac{1}{\sqrt{n_{1}}}.\]
_Note that the bound on the Kolmogorov distance is better than the one which can be obtained using the relation \(d_{K}\leq d_{TV}\)._
**Remark 4.2**.: Remarkably, Theorem 3.3 in [8] shows that if \(\operatorname{GNN}(1,n_{0},1,n_{1},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) is a shallow random Gaussian NN with univariate output, and the activation function \(\sigma\) is polynomially bounded to order \(r\geq 1\) (see Definition 2.1 in [8]), then there exist two constants \(C,C_{0}>0\) such that
\[\frac{C_{0}}{n_{1}}\leq\max\{d_{W_{1}}(z^{(2)},z),d_{TV}(z^{(2)},z)\}\leq\frac {C}{n_{1}}.\]
Although this inequality shows the optimality of the rate \(1/n_{1}\), since the constants are not provided in closed form, it cannot be directly used for the purpose of output localization (see Section 7). Moreover, the assumption (4.1) on \(\sigma\) does not require any regularity of the activation function.
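Since assumption (4.1) only asks for finite, non-degenerate fourth moments, the constants appearing in Theorem 4.1 can be evaluated by plain Monte Carlo for essentially any activation. A hedged sketch (ours, assuming numpy; `thm41_constants` returns the coefficients of \(1/\sqrt{n_{1}}\) in the three bounds):

```python
import numpy as np

def thm41_constants(x, sigma, C_b=1.0, C_W=1.0, n_mc=10**6, rng=None):
    """Coefficients of 1/sqrt(n_1) in the d_K, d_TV and d_W1 bounds of
    Theorem 4.1, with the Gaussian moments estimated by Monte Carlo."""
    rng = np.random.default_rng() if rng is None else rng
    O0 = np.mean(np.asarray(x, dtype=float) ** 2)
    s2 = sigma(rng.standard_normal(n_mc) * np.sqrt(C_b + C_W * O0)) ** 2
    num = C_W * np.sqrt(s2.var())
    den = C_b + C_W * s2.mean()
    return {"d_K": num / den,
            "d_TV": 2.0 * num / den,
            "d_W1": np.sqrt(2.0 / np.pi) * num / np.sqrt(den)}
```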
Proof.: We prove the three bounds \((i)\), \((ii)\), and \((iii)\) separately, by Stein's method.
_Proof of Part \((i)\)._ Set
\[\nu:=\sqrt{C_{b}+C_{W}\mathcal{O}^{(1)}}, \tag{4.2}\]
and consider the Stein equation (2.4) with
\[g(w):=\mathbf{1}_{(-\infty,y]}(\nu w)\,.\]
Let \(f_{g}\) be the unique solution of the Stein equation (see Lemma 2.6\((i)\)). Then, for any \(y\in\mathbb{R}\),
\[\mathbf{1}\{z^{(2)}\leq y\}-\mathbb{P}(z\leq y)=f_{g}^{\prime}(z^{(2)}/\nu)-(z ^{(2)}/\nu)f_{g}(z^{(2)}/\nu).\]
Taking the expectation, we have
\[\mathbb{P}(z^{(2)}\leq y)-\mathbb{P}(z\leq y)=\mathbb{E}[f_{g}^{\prime}(z^{(2)}/\nu)-(z^{(2)}/\nu)f_{g}(z^{(2)}/\nu)].\]
By Lemma 2.10\((i)\) we have
\[\begin{split}\mathbb{E}[(z^{(2)}/\nu)f_{g}(z^{(2)}/\nu)\,|\, \mathcal{F}_{1}]&=\nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)}) \mathbb{E}[f_{g}^{\prime}(z^{(2)}/\nu)\,|\,\mathcal{F}_{1}]\\ &=\mathbb{E}[\nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)})f_{g}^ {\prime}(z^{(2)}/\nu)\,|\,\mathcal{F}_{1}],\end{split} \tag{4.3}\]
where the latter equality follows by the \(\mathcal{F}_{1}\)-measurability of \(\mathcal{O}_{n_{1}}^{(1)}\). Then
\[\mathbb{P}(z^{(2)}\leq y)-\mathbb{P}(z\leq y)=\mathbb{E}[f_{g}^{\prime}(z^{(2 )}/\nu)(1-\nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)}))],\quad y\in\mathbb{ R}.\]
Setting
\[\varphi(n_{1}):=\nu^{-2}C_{W}\sqrt{\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^ {(1)})}=\nu^{-2}C_{W}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(z_{1}^{(1)})^{2})}/ \sqrt{n_{1}}, \tag{4.4}\]
we have
\[\frac{\mathbb{P}(z^{(2)}\leq y)-\mathbb{P}(z\leq y)}{\varphi(n_{1})}=\mathbb{E}\left[f_{g}^{\prime}(z^{(2)}/\nu)V_{n_{1}}\right], \tag{4.5}\]
where
\[V_{n_{1}}:=\frac{1-\nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)})}{\varphi(n_{1})}=-\frac{\mathcal{O}_{n_{1}}^{(1)}-\mathcal{O}^{(1)}}{\sqrt{\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^{(1)})}}=-\frac{\sum_{j=1}^{n_{1}}\sigma(z_{j}^{(1)})^{2}-n_{1}\mathbb{E}\sigma(z_{1}^{(1)})^{2}}{\sqrt{n_{1}}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(z_{1}^{(1)})^{2})}}\]
(recall that \(\nu\) is defined in (4.2)). Taking the modulus in (4.5) and then using that \(\|f_{g}^{\prime}\|_{\infty}\leq 1\) uniformly in \(y\in\mathbb{R}\) (see Lemma 2.6\((i)\)) and that \(\mathbb{E}V_{n_{1}}^{2}=1\), we have
\[d_{K}(z^{(2)},z)\leq\varphi(n_{1})\mathbb{E}[|V_{n_{1}}|]\leq\varphi(n_{1}),\]
which, combined with (4.4), proves the statement.
_Proof of Part \((ii)\)._ Consider the Stein equation (2.4) with \(g(w):=\mathbf{1}_{B}(\nu w)\), where \(B\subseteq\mathbb{R}\) is a Borel set, and \(\nu\) is defined at the beginning of the proof of Part \((i)\). Let \(f_{g}\) be the unique solution of the Stein equation (see Lemma 2.6\((ii)\)). Then
\[\mathbf{1}_{B}(z^{(2)})-\mathbb{E}\mathbf{1}_{B}(z)=f_{g}^{\prime}(z^{(2)}/\nu) -(z^{(2)}/\nu)f_{g}(z^{(2)}/\nu).\]
Taking the expectation and arguing as in (4.3), we have
\[\mathbb{P}(z^{(2)}\in B)-\mathbb{P}(z\in B)=\mathbb{E}[f_{g}^{\prime}(z^{(2)}/ \nu)(1-\nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)}))].\]
Along similar computations as for (4.5), we have
\[\frac{\mathbb{P}(z^{(2)}\in B)-\mathbb{P}(z\in B)}{\varphi(n_{1})}=\mathbb{E} \left[f_{g}^{\prime}(z^{(2)}/\nu)V_{n_{1}}\right].\]
Taking the modulus on this relation and then using that \(\|f_{g}^{\prime}\|_{\infty}\leq 2\) (see Lemma 2.6\((ii)\)) and that \(\mathbb{E}V_{n_{1}}^{2}=1\), we have
\[d_{TV}(z^{(2)},z)\leq 2\varphi(n_{1})\mathbb{E}[|V_{n_{1}}|]\leq 2\varphi(n_{ 1}),\]
which, combined with (4.4), proves the statement.
_Proof of Part \((iii)\)._ Consider the Stein equation (2.4) with \(g(y):=h(\nu y)\), where \(h:\mathbb{R}\to\mathbb{R}\) is Lipschitz continuous with \(\mathrm{Lip}(h)\leq 1\), and \(\nu\) is defined at the beginning of the proof of Part \((i)\). Let \(f_{g}\) be the unique solution of the Stein equation (see Lemma 2.6\((iii)\)). Then
\[h(z^{(2)})-\mathbb{E}h(z)=f_{g}^{\prime}(z^{(2)}/\nu)-(z^{(2)}/\nu)f_{g}(z^{( 2)}/\nu).\]
Taking the expectation and arguing as in (4.3), we have
\[\mathbb{E}h(z^{(2)})-\mathbb{E}h(z)=\mathbb{E}[f_{g}^{\prime}(z^{(2)}/\nu)(1- \nu^{-2}(C_{b}+C_{W}\mathcal{O}_{n_{1}}^{(1)}))].\]
Along similar computations as for (4.5), we have
\[\frac{\mathbb{E}h(z^{(2)})-\mathbb{E}h(z)}{\varphi(n_{1})}=\mathbb{E}\left[f_{ g}^{\prime}(z^{(2)}/\nu)V_{n_{1}}\right].\]
Taking the modulus on this relation and then using that \(\|f_{g}^{\prime}\|_{\infty}\leq\nu\sqrt{2/\pi}\) (see Lemma 2.6\((iii)\)) and that \(\mathbb{E}V_{n_{1}}^{2}=1\), we have
\[d_{W_{1}}(z^{(2)},z)\leq\nu\sqrt{2/\pi}\varphi(n_{1})\mathbb{E}[|V_{n_{1}}|] \leq\nu\sqrt{2/\pi}\varphi(n_{1}),\]
which, combined with (4.4), proves the statement.
\(\square\)
Note that both Theorem 3.1 and Theorem 4.1 provide bounds on \(d_{s}(z^{(2)},z)\), \(s=TV,K,W_{1}\), with a common rate \(1/\sqrt{n_{1}}\), but different constants. In Table 1 we compare those constants in a special case. We observe that the constants given by Theorem 4.1 are sharper than those obtained in [4]. We also note that Condition (4.1) is satisfied by the ReLu activation function, while the assumptions of Theorem 3.1 do not hold for the ReLu.
\begin{table}
\begin{tabular}{c|c c c} \hline & \(d_{TV}\left(z^{(2)},z\right)\) & \(d_{K}\left(z^{(2)},z\right)\) & \(d_{W_{1}}\left(z^{(2)},z\right)\) \\ \hline Thm. 3.1 & 5.05 & 2.52 & 2.01 \\ Thm. 4.1 & 1.68 & 0.84 & 0.67 \\ \hline \end{tabular}
\end{table}
Table 1. Values of the constants given in Theorems 3.1 and 4.1. Here, \(\sigma(x)=x^{3}\), \(\gamma=3\), \(r_{2}=1\), \(r_{1}=6\), \(L=1\), \(C_{b}=C_{W}=1\) and \(x=1\).
## 5. A key estimate for the collective observables
The next theorem provides an estimate for the \(L^{2}\)-norm of the random variable \(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)}\). In Section 6, such an estimate will play a crucial role in the proofs of the results on the Normal approximation of the output of a deep random Gaussian NN, both in the convex and in the \(1\)-Wasserstein distances (see Theorems 6.1 and 6.3).
From here on, we denote by \(\|Y\|_{L^{2}}:=(\mathbb{E}[|Y|^{2}])^{1/2}\) the \(L^{2}\)-norm of a real-valued random variable \(Y\).
**Theorem 5.1**.: _Let \(\operatorname{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b },\mathbf{W})\) be a deep random Gaussian NN. Suppose that the activation function \(x\mapsto\sigma(x)\) is such that: \((i)\) For any \(a_{1},a_{2}\geq 0\) and \(C_{b},C_{W}>0\), there exists a polynomial_
\[P(x):=\sum_{k=0}^{m}d_{k}x^{k},\]
_with non-negative coefficients \(d_{k}=d_{k}(\sigma(\cdot),C_{b},C_{W})\geq 0\) dependent only on \(\sigma(\cdot),C_{b},C_{W}\) and degree \(m\geq 0\) independent of \(\sigma(\cdot),a_{1},a_{2},C_{b},C_{W}\), such that_
\[|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a_{1}})^{2}| \leq P(|x|)|a_{2}-a_{1}|,\quad\text{for all }x\in\mathbb{R}. \tag{5.1}\]
\((ii)\) _For any \(\kappa\in\mathbb{R}\), \(\mathbb{E}\sigma(\kappa Z)^{4}<\infty\)._
_Then, for any \(\ell=1,\ldots,L\), we have_
\[\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)}\|_{L^{2}}\leq\sum_{k=1}^{\ell}(4\sqrt{2}\|P(|Z|)\|_{L^{2}})^{\ell-k}\frac{c_{k}}{\sqrt{n_{k}}},\]
_where_
\[c_{\ell}=c_{\ell}(n_{0},\sigma,\mathbf{x},C_{b},C_{W}):=\sqrt{2\mathbb{E} \left[\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(\ell-1)}}\right)^{4}\right] }<\infty,\quad\ell=1,\ldots,L. \tag{5.2}\]
The proof of the theorem is given later on in this section. We proceed stating a proposition and a remark, which clarify the generality of our assumptions on the activation function \(\sigma\).
**Proposition 5.2**.: _The following statements hold: \((i)\) If \(\sigma\) is the perceptron function, i.e., \(\sigma(x):=\mathbf{1}\{x\geq 0\}\), \(x\in\mathbb{R}\), then it satisfies Conditions \((i)\) and \((ii)\) of Theorem 5.1. \((ii)\) If \(\sigma\) is Lipschitz continuous, then it satisfies Conditions \((i)\) and \((ii)\) of Theorem 5.1. In particular, Condition \((i)\) holds with_
\[P(x):=\frac{2|\sigma(0)|C_{W}\mathrm{Lip}(\sigma)}{2\sqrt{C_{b}}}x+C_{W} \mathrm{Lip}(\sigma)^{2}x^{2}. \tag{5.3}\]
\((iii)\) _If \(\sigma\) is such that \(\sigma(\cdot)^{2}\) is Lipschitz continuous then it satisfies Conditions \((i)\) and \((ii)\) of Theorem 5.1. In particular, Condition \((i)\) holds with_
\[P(x)=\frac{\mathrm{Lip}(\sigma^{2})C_{W}}{2\sqrt{C_{b}}}x. \tag{5.4}\]
\((iv)\) _If \(\sigma\in C^{1}(\mathbb{R})\) and \(\max\{|\sigma(x)|,|\sigma^{\prime}(x)|\}\leq r_{1}+r_{2}|x|^{\gamma}\), \(x\in\mathbb{R}\), for some \(r_{1},r_{2},\gamma\geq 0\), then \(\sigma\) satisfies Conditions \((i)\) and \((ii)\) of Theorem 5.1. In particular, Condition \((i)\) holds with_
\[P(x)=\frac{C_{W}}{\sqrt{C_{b}}}x(r_{1}+r_{2}|x|^{\gamma}).\]
The proof of the proposition is given later on in this section.
**Remark 5.3**.: As a consequence of Proposition 5.2, we have that the most common activation functions satisfy the assumptions of Theorem 5.1. Indeed, one can easily prove that the ReLu function \(\sigma(x):=x\mathbf{1}\{x\geq 0\}\), the sigmoid function \(\sigma(x):=(1+\mathrm{e}^{-x})^{-1}\), the hyperbolic tangent function \(\sigma(x):=(\mathrm{e}^{2x}-1)/(\mathrm{e}^{2x}+1)\), the \(\sin\) function \(\sigma(x):=\sin(x)\), the softplus function \(\sigma(x):=\log(1+\mathrm{e}^{x})\) and the SWISH function \(\sigma(x):=x/(1+\mathrm{e}^{-x})\) are Lipschitz continuous. We emphasize that the conditions on the activation function of Theorem 5.1 are more general than the one required in [2], where \(\sigma(\cdot)\) is assumed Lipschitz continuous (see Theorem 3.2 and Proposition 5.2\((ii)\)). We also emphasize that the conditions on the activation function of Theorem 5.1 are satisfied by the perceptron function which is non-continuous and therefore non-Lipschitz (see Proposition 5.2\((i)\)). Another non-Lipschitz function which satisfies the conditions of Theorem 5.1 is e.g. \(\sigma(x):=\sqrt{x}\mathbf{1}\{x\geq 0\}\). Indeed, \(\sigma^{2}\) is the ReLu function and therefore Lipschitz continuous (see Proposition 5.2\((iii)\)).
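Condition (5.1) can also be spot-checked numerically for a given activation. The sketch below (ours, assuming numpy) samples random \(x,a_{1},a_{2}\) and tests the inequality with the polynomial \(P\) of (5.3); for the ReLu (\(\sigma(0)=0\), \(\mathrm{Lip}(\sigma)=1\)) the inequality in fact holds with equality for \(x\geq 0\).

```python
import numpy as np

def check_condition_51(sigma, lip, C_b=1.0, C_W=1.0, trials=10**5, rng=None):
    """Numerical spot-check of (5.1) for a Lipschitz sigma, with P as in
    (5.3); lip = Lip(sigma). Returns True if no violation is found."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(0.0, 3.0, trials)
    a1, a2 = rng.uniform(0.0, 10.0, (2, trials))
    lhs = np.abs(sigma(x * np.sqrt(C_b + C_W * a2)) ** 2
                 - sigma(x * np.sqrt(C_b + C_W * a1)) ** 2)
    s0 = abs(float(sigma(0.0)))
    P = (2.0 * s0 * C_W * lip / (2.0 * np.sqrt(C_b))) * np.abs(x) \
        + C_W * lip ** 2 * x ** 2
    return bool(np.all(lhs <= P * np.abs(a2 - a1) + 1e-9))

print(check_condition_51(lambda t: np.maximum(t, 0.0), lip=1.0))  # ReLu
print(check_condition_51(np.tanh, lip=1.0))
```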
Proof of Theorem 5.1.: We consider separately the cases of the first hidden layer and that of the following ones.
_Case \(\ell=1\)._
Since the random variables \(z_{i}^{(1)}\), \(i=1,\ldots,n_{1}\), are independent and identically distributed with law \(\mathcal{N}(0,C_{b}+C_{W}\mathcal{O}^{(0)})\), we have
\[\|\mathcal{O}^{(1)}_{n_{1}}-\mathcal{O}^{(1)}\|_{L^{2}}=\sqrt{\mathbb{E}\left(\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}\sigma(z_{j}^{(1)})^{2}-\mathbb{E}\left[\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}}\right)^{2}\right]\right)^{2}}=\sqrt{\mathbb{E}\left(\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}(\sigma(z_{j}^{(1)})^{2}-\mathbb{E}\sigma(z_{1}^{(1)})^{2})\right)^{2}}=\frac{1}{\sqrt{n_{1}}}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(z_{1}^{(1)})^{2})}\]
\[\leq\frac{1}{\sqrt{n_{1}}}\sqrt{\mathbb{E}\left[\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}}\right)^{4}\right]} \tag{5.5}\]
\[\leq\frac{c_{1}}{\sqrt{n_{1}}}. \tag{5.6}\]
Note that this latter term is finite due to the assumption \((ii)\).
_Case \(\ell=2,\ldots,L\)._
Take \(\ell\in\{2,\ldots,L\}\). We have already noticed that, given \(\mathcal{F}_{\ell-1}\), the random variables \(\{z_{i}^{(\ell)}\}_{i=1,\ldots,n_{\ell}}\) are independent with Gaussian law with mean zero and variance
\[C_{b}+C_{W}\mathcal{O}^{(\ell-1)}_{\mathbf{n}_{\ell-1}}.\]
Therefore, letting \(Z_{\ell-1}\) denote a standard Gaussian random variable, independent of \(\mathcal{F}_{\ell-1}\), we have
\[z_{i}^{(\ell)}\stackrel{{ d}}{{=}}Z_{\ell-1}\sqrt{C_{b}+C_{W} \mathcal{O}^{(\ell-1)}_{\mathbf{n}_{\ell-1}}},\quad i=1,\ldots,n_{\ell}\]
where the symbol \(\stackrel{{ d}}{{=}}\) denotes equality in law (this relation follows immediately by computing, e.g., the characteristic functions of both random variables). Therefore, letting \(p(\cdot)\) denote the standard
Gaussian density, we have
\[\mathbb{E}[\sigma(z_{i}^{(\ell)})^{r}\,|\,\mathcal{F}_{\ell-1}]\overset{d}{=}\int_ {\mathbb{R}}\sigma\left(z\sqrt{C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{( \ell-1)}}\right)^{r}p(z)\mathrm{d}z,\quad r\in\{2,4\},\,i=1,\ldots,n_{\ell} \tag{5.7}\]
and
\[\mathbb{E}\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}=\mathbb{E}\left[\sigma \left(Z_{\ell-1}\sqrt{C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}} \right)^{2}\right].\]
By assumption \((i)\) and the fact that \(Z_{\ell-1}\) is independent of \(\mathcal{F}_{\ell-1}\), we have
\[|\mathbb{E}\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{ (\ell)}| =\left|\mathbb{E}\left[\sigma\left(Z_{\ell-1}\sqrt{C_{b}+C_{W} \mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}\right)^{2}\right]-\mathbb{E} \left[\sigma\left(Z_{\ell-1}\sqrt{C_{b}+C_{W}\mathcal{O}^{(\ell-1)}}\right)^{ 2}\right]\Big{|}\] \[\leq\mathbb{E}P(|Z_{\ell-1}|)|\mathcal{O}_{\mathbf{n}_{\ell-1}}^ {(\ell-1)}-\mathcal{O}^{(\ell-1)}|\] \[\leq\mathbb{E}P(|Z|)\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1 )}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}.\]
Therefore
\[\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)} \|_{L^{2}} \leq\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathbb{E}\mathcal{ O}_{\mathbf{n}_{\ell}}^{(\ell)}\|_{L^{2}}+|\mathbb{E}\mathcal{O}_{\mathbf{n}_{ \ell}}^{(\ell)}-\mathcal{O}^{(\ell)}| \tag{5.8}\] \[=\sqrt{\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{( \ell)})}+|\mathbb{E}\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{( \ell)}|\] \[\leq\sqrt{\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{( \ell)})}+\mathbb{E}P(|Z|)\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}- \mathcal{O}^{(\ell-1)}\|_{L^{2}}.\]
Note that
\[\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)})= \frac{1}{n_{\ell}^{2}}\left(\mathbb{E}\left(\sum_{j=1}^{n_{\ell}}\sigma(z_{j}^ {(\ell)})^{2}\right)^{2}-n_{\ell}^{2}(\mathbb{E}\sigma(z_{1}^{(\ell)})^{2})^{2 }\right)\] \[=\frac{1}{n_{\ell}^{2}}\left(n_{\ell}\mathbb{E}\sigma(z_{1}^{( \ell)})^{4}+n_{\ell}(n_{\ell}-1)\mathbb{E}\sigma(z_{1}^{(\ell)})^{2}\sigma(z_ {2}^{(\ell)})^{2}-n_{\ell}^{2}(\mathbb{E}\sigma(z_{1}^{(\ell)})^{2})^{2}\right)\] \[=\frac{1}{n_{\ell}^{2}}\left(n_{\ell}\mathbb{E}\sigma(z_{1}^{( \ell)})^{4}-n_{\ell}\mathbb{E}\sigma(z_{1}^{(\ell)})^{2}\sigma(z_{2}^{(\ell)} )^{2}+n_{\ell}^{2}\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{(\ell)})^{2},\sigma(z_ {2}^{(\ell)})^{2})\right)\] \[=\frac{1}{n_{\ell}^{2}}\left(n_{\ell}\mathbb{V}\mathrm{ar}( \sigma(z_{1}^{(\ell)})^{2})+(n_{\ell}^{2}-n_{\ell})\mathbb{C}\mathrm{ov}( \sigma(z_{1}^{(\ell)})^{2},\sigma(z_{2}^{(\ell)})^{2})\right) \tag{5.9}\] \[=\frac{1}{n_{\ell}}\mathbb{V}\mathrm{ar}(\sigma(z_{1}^{(\ell)})^{ 2})+\left(1-\frac{1}{n_{\ell}}\right)\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{( \ell)})^{2},\sigma(z_{2}^{(\ell)})^{2}).\]
By (5.7) we have
\[\mathbb{E}[\sigma(z_{1}^{(\ell)})^{4}]=\mathbb{E}\sigma\left(Z_{\ell-1}\sqrt{ C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}\right)^{4}.\]
By assumption \((i)\), we have
\[\sigma\left(Z_{\ell-1}\sqrt{C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{( \ell-1)}}\right)^{2}\leq P(|Z_{\ell-1}|)|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{( \ell-1)}-\mathcal{O}^{(\ell-1)}|+\sigma\left(Z_{\ell-1}\sqrt{C_{b}+C_{W} \mathcal{O}^{(\ell-1)}}\right)^{2},\quad\mathbb{P}\text{-a.s.}\]
and so (using that \(Z_{\ell-1}\) is independent of \(\mathcal{F}_{\ell-1}\) and the inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\), \(a,b\in\mathbb{R}\))
\[\mathbb{V}\mathrm{ar}(\sigma(z_{1}^{(\ell)})^{2}) \leq\mathbb{E}[\sigma(z_{1}^{(\ell)})^{4}]\] \[\leq 2\mathbb{E}P(|Z|)^{2}\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2}+2\mathbb{E}\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(\ell-1)}}\right)^{4} \tag{5.10}\] \[=A\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2}+c_{\ell}^{2},\]
where \(A:=2\mathbb{E}P(|Z|)^{2}<\infty\). Note that the quantities \(\mathcal{O}^{(\ell)}\), \(\ell=2,\ldots,L,\) are all finite due to the assumption \((ii)\). Then, again by assumption \((ii)\), we have
\[\mathbb{E}\sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(\ell-1)}}\right)^{4}< \infty,\quad\text{for any }\ell=2,\ldots,L\]
and therefore \(c_{\ell}<\infty\), for any \(\ell=2,\ldots,L\). By the conditional independence of the random variables \(z_{1}^{(\ell)}\) and \(z_{2}^{(\ell)}\), given \(\mathcal{F}_{\ell-1}\), we have
\[\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{(\ell)})^{2},\sigma(z_{2}^{( \ell)})^{2})=\mathbb{C}\mathrm{ov}(\mathbb{E}[\sigma(z_{1}^{(\ell)})^{2}\,| \,\mathcal{F}_{\ell-1}],\mathbb{E}[\sigma(z_{2}^{(\ell)})^{2}\,|\,\mathcal{F}_ {\ell-1}])\] \[=\mathbb{E}[(\mathbb{E}[\sigma(z_{1}^{(\ell)})^{2}\,|\,\mathcal{F} _{\ell-1}]-\mathbb{E}\sigma(z_{1}^{(\ell)})^{2})(\mathbb{E}[\sigma(z_{2}^{( \ell)})^{2}\,|\,\mathcal{F}_{\ell-1}]-\mathbb{E}\sigma(z_{2}^{(\ell)})^{2})],\]
and so by the Cauchy-Schwarz inequality and (5.7) we have
\[|\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{(\ell)})^{2},\sigma(z_{2}^{( \ell)})^{2})|\leq\mathbb{E}[(\mathbb{E}[\sigma(z_{1}^{(\ell)})^{2}\,|\, \mathcal{F}_{\ell-1}]-\mathbb{E}\sigma(z_{1}^{(\ell)})^{2})^{2}]. \tag{5.11}\]
Letting \(\mathbb{P}_{X}\) denote the law of a random variable \(X\) and using again (5.7), we have that the random variable \(\mathbb{E}[\sigma(z_{1}^{(\ell)})^{2}\,|\,\mathcal{F}_{\ell-1}]-\mathbb{E} \sigma(z_{1}^{(\ell)})^{2}\) has the same law as the random variable
\[\int_{\mathbb{R}}\left(\sigma\left(z\sqrt{C_{b}+C_{W}\mathcal{O}_ {\mathbf{n}_{\ell-1}}^{(\ell-1)}}\right)^{2}-\int_{[0,\infty)}\sigma\left(z \sqrt{C_{b}+C_{W}y}\right)^{2}\mathbb{P}_{\mathcal{O}_{\mathbf{n}_{\ell-1}}^{( \ell-1)}}(\mathrm{d}y)\right)p(z)\mathrm{d}z\] \[=\int_{[0,\infty)\times\mathbb{R}}\left(\sigma\left(z\sqrt{C_{b}+ C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}\right)^{2}-\sigma\left(z \sqrt{C_{b}+C_{W}y}\right)^{2}\right)\mathbb{P}_{\mathcal{O}_{\mathbf{n}_{\ell -1}}^{(\ell-1)}}(\mathrm{d}y)p(z)\mathrm{d}z.\]
Therefore, by (5.11) and Jensen's inequality we have
\[|\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{(\ell)})^{2},\sigma(z_{2}^{ (\ell)})^{2})|\] \[\qquad\leq\mathbb{E}\int_{[0,\infty)\times\mathbb{R}}\left(\sigma \left(z\sqrt{C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}\right)^{ 2}-\sigma\left(z\sqrt{C_{b}+C_{W}y}\right)^{2}\right)^{2}\mathbb{P}_{\mathcal{ O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}(\mathrm{d}y)p(z)\mathrm{d}z\]
By assumption \((i)\) it then follows that
\[|\mathbb{C}\mathrm{ov}(\sigma(z_{1}^{(\ell)})^{2},\sigma(z_{2}^{(\ell)})^{2})|\leq\mathbb{E}P(|Z|)^{2}\int_{[0,\infty)}\mathbb{E}|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}-y|^{2}\mathbb{P}_{\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}}(\mathrm{d}y)\leq A\mathbb{V}\mathrm{ar}\left(\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}\right), \tag{5.12}\]
where the latter relation follows by the definition of the constant \(A\). Combining (5.9), (5.10) and (5.12), we have
\[\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)})\leq A(\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)})+\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2})+\frac{c_{\ell}^{2}}{n_{\ell}}.\]
Iterating this inequality we have
\[\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}) \leq A\left(A(\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell-2} }^{(\ell-2)})+\|\mathcal{O}_{\mathbf{n}_{\ell-2}}^{(\ell-2)}-\mathcal{O}^{( \ell-2)}\|_{L^{2}}^{2})+\frac{c_{\ell-1}^{2}}{n_{\ell-1}}\right)\] \[\qquad\qquad\qquad\qquad+A\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{( \ell-1)}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2}+\frac{c_{\ell}^{2}}{n_{\ell}}\] \[=A^{2}\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell-2}}^{( \ell-2)})+A^{2}\|\mathcal{O}_{\mathbf{n}_{\ell-2}}^{(\ell-2)}-\mathcal{O}^{( \ell-2)}\|_{L^{2}}^{2}+A\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1)}- \mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2}\] \[\qquad\qquad\qquad\qquad+A\frac{c_{\ell-1}^{2}}{n_{\ell-1}}+ \frac{c_{\ell}^{2}}{n_{\ell}}\] \[\leq A^{3}\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell-3} }^{(\ell-3)})+A^{3}\|\mathcal{O}_{\mathbf{n}_{\ell-3}}^{(\ell-3)}-\mathcal{O}^ {(\ell-3)}\|_{L^{2}}^{2}+A^{2}\|\mathcal{O}_{\mathbf{n}_{\ell-2}}^{(\ell-2)}- \mathcal{O}^{(\ell-2)}\|_{L^{2}}^{2}\] \[\qquad\qquad\qquad+A\|\mathcal{O}_{\mathbf{n}_{\ell-1}}^{(\ell-1 )}-\mathcal{O}^{(\ell-1)}\|_{L^{2}}^{2}+A^{2}\frac{c_{\ell-2}^{2}}{n_{\ell-2}} +A\frac{c_{\ell-1}^{2}}{n_{\ell-1}}+\frac{c_{\ell}^{2}}{n_{\ell}}\] \[\leq A^{\ell-1}\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^{(1)})+ \sum_{k=1}^{\ell-1}A^{\ell-k}\|\mathcal{O}_{\mathbf{n}_{k}}^{(k)}-\mathcal{O}^ {(k)}\|_{L^{2}}^{2}+\sum_{k=2}^{\ell}A^{\ell-k}\frac{c_{k}^{2}}{n_{k}}\] \[=2A^{\ell-1}\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^{(1)})+ \sum_{k=2}^{\ell-1}A^{\ell-k}\|\mathcal{O}_{\mathbf{n}_{k}}^{(k)}-\mathcal{O}^ {(k)}\|_{L^{2}}^{2}+\sum_{k=2}^{\ell}A^{\ell-k}\frac{c_{k}^{2}}{n_{k}},\]
for any \(\ell=2,\ldots,L\), where the latter equality follows noticing that \(\mathbb{E}\mathcal{O}_{n_{1}}^{(1)}=\mathcal{O}^{(1)}\). Here, we adopt the usual convention \(\sum_{k=k_{1}}^{k_{2}}\cdots=0\) if \(k_{1}>k_{2}\). Note that by (5.5) we have
\[\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^{(1)})\leq\frac{1}{n_{1}}\mathbb{E} \sigma\left(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}}\right)^{4},\]
and so
\[2\mathbb{V}\mathrm{ar}(\mathcal{O}_{n_{1}}^{(1)})\leq\frac{c_{1}^{2}}{n_{1}}.\]
Consequently,
\[\mathbb{V}\mathrm{ar}(\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)})\leq\sum_{k=2}^ {\ell-1}A^{\ell-k}\|\mathcal{O}_{\mathbf{n}_{k}}^{(k)}-\mathcal{O}^{(k)}\|_{L^ {2}}^{2}+\sum_{k=1}^{\ell}A^{\ell-k}\,\frac{c_{k}^{2}}{n_{k}}.\]
Combining this inequality with (5.8) and using the elementary relation \(\sqrt{a_{1}+a_{2}}\leq\sqrt{a_{1}}+\sqrt{a_{2}}\), \(a_{1},a_{2}\geq 0\), for any \(\ell=2,\ldots,L\), we have
\[\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)}\| _{L^{2}} \leq\sum_{k=2}^{\ell-1}A^{(\ell-k)/2}\|\mathcal{O}_{\mathbf{n}_{k}}^{(k)}- \mathcal{O}^{(k)}\|_{L^{2}}+\sum_{k=1}^{\ell}A^{(\ell-k)/2}\frac{c_{k}}{\sqrt{n _{k}}}\] \[\qquad\qquad\qquad+\mathbb{E}P(|Z|)\|\mathcal{O}_{\mathbf{n}_{ \ell-1}}^{(\ell-1)}-\mathcal{O}^{(\ell-1)}\|_{L^{2}} \tag{5.13}\] \[\leq\sum_{k=2}^{\ell-1}B^{(\ell-k)/2}\|\mathcal{O}_{\mathbf{n}_{k }}^{(k)}-\mathcal{O}^{(k)}\|_{L^{2}}+\sum_{k=1}^{\ell}B^{(\ell-k)/2}\frac{c_{k}} {\sqrt{n_{k}}},\]
where we used that \(\mathbb{E}P(|Z|)\leq A^{1/2}\), and we set \(B:=4A\). Recalling that \(\sqrt{A}=\sqrt{2}\|P(|Z|)\|_{L^{2}}\) and the definition of \(B\), one easily realizes that the claim reads as
\[\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)}\|_{L^{2}}\leq\sum_{j=1}^{\ell}(2\sqrt{B})^{\ell-j}\frac{c_{j}}{\sqrt{n_{j}}},\quad\ell=2,\ldots,L. \tag{5.14}\]
Now we use the relation (5.13) to prove (5.14) by induction on \(\ell=2,\ldots,L\). Taking \(\ell=2\) in (5.13) we have
\[\|\mathcal{O}_{\mathbf{n}_{2}}^{(2)}-\mathcal{O}^{(2)}\|_{L^{2}}\leq B^{1/2} \frac{c_{1}}{\sqrt{n_{1}}}+\frac{c_{2}}{\sqrt{n_{2}}}\leq 2B^{1/2}\frac{c_{1}}{ \sqrt{n_{1}}}+\frac{c_{2}}{\sqrt{n_{2}}},\]
i.e., (5.14) with \(\ell=2\). Now, suppose that
\[\|\mathcal{O}_{\mathbf{n}_{k}}^{(k)}-\mathcal{O}^{(k)}\|_{L^{2}}\leq\sum_{j=1}^{k}(2\sqrt{B})^{k-j}\frac{c_{j}}{\sqrt{n_{j}}},\quad\text{for any }k=2,\ldots,\ell-1.\]
Then by (5.13), we have
\[\|\mathcal{O}_{\mathbf{n}_{\ell}}^{(\ell)}-\mathcal{O}^{(\ell)}\| _{L^{2}} \leq\sum_{k=2}^{\ell-1}B^{(\ell-k)/2}\sum_{j=1}^{k}2^{k-j}B^{(k-j )/2}\frac{c_{j}}{\sqrt{n_{j}}}+\sum_{k=1}^{\ell}B^{(\ell-k)/2}\frac{c_{k}}{ \sqrt{n_{k}}}\] \[\leq\sum_{k=2}^{\ell-1}\sum_{j=1}^{k}2^{k-j}B^{(\ell-j)/2}\frac{c _{j}}{\sqrt{n_{j}}}+\sum_{k=1}^{\ell}B^{(\ell-k)/2}\frac{c_{k}}{\sqrt{n_{k}}}\] \[\leq\sum_{j=1}^{\ell-1}\sum_{k=j}^{\ell-1}2^{k-j}B^{(\ell-j)/2} \frac{c_{j}}{\sqrt{n_{j}}}+\sum_{k=1}^{\ell}B^{(\ell-k)/2}\frac{c_{k}}{\sqrt{n _{k}}}\] \[=\sum_{j=1}^{\ell-1}\left(\sum_{k=j}^{\ell-1}2^{k-j}+1\right)B^{( \ell-j)/2}\frac{c_{j}}{\sqrt{n_{j}}}+\frac{c_{\ell}}{\sqrt{n_{\ell}}}\] \[=\sum_{j=1}^{\ell-1}(2\sqrt{B})^{\ell-j}\frac{c_{j}}{\sqrt{n_{j}} }+\frac{c_{\ell}}{\sqrt{n_{\ell}}}=\sum_{j=1}^{\ell}(2\sqrt{B})^{\ell-j}\frac {c_{j}}{\sqrt{n_{j}}},\]
where we used that
\[\sum_{k=j}^{\ell-1}2^{k-j}+1=\sum_{s=0}^{\ell-j-1}2^{s}+1=2^{\ell-j},\quad \text{for any }j=1,\ldots,\ell-1\]
since \(\{2^{s}\}_{s\geq 0}\) is a geometric progression. The proof is completed.
_Proof of Proposition 5.2._
_Proof of \((i)\)._ The claim immediately follows noticing that, due to the positivity of the quantities \(\sqrt{C_{b}+C_{W}a_{2}}\) and \(\sqrt{C_{b}+C_{W}a_{1}}\), we have \(|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a_{1}})^{2}|=0\), for any \(x\in\mathbb{R}\).
_Proof of \((ii)\)._ Since \(\sigma\) is Lipschitz continuous, we have
\[|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a_ {1}})^{2}|\] \[=|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})-\sigma(x\sqrt{C_{b}+C_{W}a_{1} })||\sigma(x\sqrt{C_{b}+C_{W}a_{2}})+\sigma(x\sqrt{C_{b}+C_{W}a_{1}})|\] \[\leq\text{Lip}(\sigma)|x||\sqrt{C_{b}+C_{W}a_{2}}-\sqrt{C_{b}+C_{ W}a_{1}}|(|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})|+|\sigma(x\sqrt{C_{b}+C_{W}a_{1}})|) \tag{5.15}\] \[\leq\text{Lip}(\sigma)|x||\sqrt{C_{b}+C_{W}a_{2}}-\sqrt{C_{b}+C_{ W}a_{1}}|[2|\sigma(0)|+\text{Lip}(\sigma)|x|(\sqrt{C_{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{ W}a_{1}})],\]
where the latter inequality follows noticing that (again by the Lipschitz continuity of \(\sigma\)), for any \(x,\kappa\in\mathbb{R}\),
\[|\sigma(\kappa x)|\leq|\sigma(0)|+\text{Lip}(\sigma(\cdot))|\kappa||x|. \tag{5.16}\]
Multiplying and dividing the term in (5.15) by \(\sqrt{C_{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{W}a_{1}}\), we easily have that
\[|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a _{1}})^{2}|\] \[\leq C_{W}\text{Lip}(\sigma)|x|\left(\frac{2|\sigma(0)|}{\sqrt{C _{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{W}a_{1}}}+\text{Lip}(\sigma)|x|\right)|a_{2}-a _{1}|\] \[\leq P(|x|)|a_{2}-a_{1}|,\]
where \(P(x)\) is defined by (5.3). As far as Condition \((ii)\) of Theorem 5.1 is concerned, we note that by (5.16)
\[\sigma(\kappa Z)^{4}\leq(|\sigma(0)|+\text{Lip}(\sigma)|\kappa||Z|)^{4},\quad \text{almost surely}\]
and so the claim is an immediate consequence of the fact that the absolute moments of \(Z\) are finite.
_Proof of \((iii)\)_. If \(\sigma(\cdot)^{2}\) is Lipschitz continuous, then
\[|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a _{1}})^{2}|\] \[\qquad\leq\text{Lip}(\sigma^{2})|x||\sqrt{C_{b}+C_{W}a_{2}}-\sqrt {C_{b}+C_{W}a_{1}}|\] \[\qquad=\text{Lip}(\sigma^{2})|x||\sqrt{C_{b}+C_{W}a_{2}}-\sqrt{C_ {b}+C_{W}a_{1}}|\frac{\sqrt{C_{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{W}a_{1}}}{\sqrt{C _{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{W}a_{1}}}\] \[\qquad=\text{Lip}(\sigma^{2})|x||(C_{b}+C_{W}a_{2})-(C_{b}+C_{W}a _{1})|\frac{1}{\sqrt{C_{b}+C_{W}a_{2}}+\sqrt{C_{b}+C_{W}a_{1}}}\] \[\qquad\leq\frac{\text{Lip}(\sigma^{2})C_{W}}{2\sqrt{C_{b}}}|x||a_ {2}-a_{1}|,\]
which shows that Condition \((i)\) of Theorem 5.1 holds with \(P(x)\) given by (5.4). Furthermore, using (5.16) with \(\sigma(\cdot)^{2}\) in place of \(\sigma(\cdot)\), for any \(\kappa\in\mathbb{R}\),
\[\sigma(\kappa Z)^{4}\leq\left(|\sigma(0)|^{2}+\text{Lip}(\sigma(\cdot)^{2})| \kappa||Z|\right)^{2},\quad\text{almost surely}.\]
Therefore, Condition \((ii)\) of Theorem 5.1 is an immediate consequence of this latter relation and the fact that the absolute moments of \(Z\) are finite.
_Proof of \((iv)\)_. Condition \((ii)\) of Theorem 5.1 immediately follows noticing that, for any \(\kappa\in\mathbb{R}\),
\[\mathbb{E}\sigma(\kappa Z)^{4}\leq\mathbb{E}(r_{1}+r_{2}|\kappa|^{\gamma}|Z|^ {\gamma})^{4}<\infty.\]
As far as Condition \((i)\) is concerned, note that by the mean value theorem, for any \(x\in\mathbb{R}\), there exists \(\xi\in\left(\min\{x\sqrt{C_{b}+C_{W}a_{1}},x\sqrt{C_{b}+C_{W}a_{2}}\},\max\{x\sqrt{C_{b}+C_{W}a_{1}},x\sqrt{C_{b}+C_{W}a_{2}}\}\right)\) such that
\[|\sigma(x\sqrt{C_{b}+C_{W}a_{2}})^{2}-\sigma(x\sqrt{C_{b}+C_{W}a _{1}})^{2}| =2|\sigma^{\prime}(\xi)||x||\sqrt{C_{b}+C_{W}a_{2}}-\sqrt{C_{b}+C_ {W}a_{1}}|\] \[=\frac{2C_{W}|\sigma^{\prime}(\xi)|}{\sqrt{C_{b}+C_{W}a_{2}}+ \sqrt{C_{b}+C_{W}a_{1}}}|x||a_{2}-a_{1}|.\]
The claim follows noticing that by assumption \(|\sigma^{\prime}(x)|\leq r_{1}+r_{2}|x|^{\gamma}\), for any \(x\in\mathbb{R}\).
## 6. Normal approximation of deep random Gaussian NNs
### Normal approximation of deep random Gaussian NNs in the convex distance
The following theorem holds.
**Theorem 6.1**.: _Let \(\operatorname{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) be a deep random Gaussian NN, and let the notation and assumptions of Theorem 5.1 prevail. Then_
\[d_{c}(\mathbf{z}^{(L+1)},\mathbf{z}) \leq C_{1}\left(\sum_{k=1}^{L}[4\sqrt{2}\|P(|Z|)\|_{L^{2}}]^{L-k} \frac{c_{k}}{\sqrt{n_{k}}}\right)\] \[\leq C_{i}\left(\sum_{k=1}^{L}[4\sqrt{2}\|P(|Z|)\|_{L^{2}}]^{L-k} \frac{c_{k}}{\sqrt{n_{k}}}\right),\quad i=2,3\]
_where the constants \(c_{k}\), \(k=1,\ldots,L\), are defined by (5.2),_
\[C_{1}=C_{1}(n_{0},n_{L+1},\mathbf{x},\sigma,C_{b},C_{W}):=C_{W}\left(\frac{80}{(C_{b}+C_{W}\mathcal{O}^{(L)})^{3/2}}+\frac{48}{C_{b}+C_{W}\mathcal{O}^{(L)}}+20\sqrt{2}\right)n_{L+1}^{59/24},\] \[C_{2}=C_{2}(n_{L+1},C_{b},C_{W}):=C_{W}\left(\frac{80}{C_{b}^{3/2}}+\frac{48}{C_{b}}+20\sqrt{2}\right)n_{L+1}^{59/24} \tag{6.1}\]
_and_
\[C_{3}=C_{3}(n_{0},n_{L+1},\mathbf{x},\sigma,C_{W}):=C_{W}\left(\frac{80}{(C_{W}\mathcal{O}^{(L)})^{3/2}}+\frac{48}{C_{W}\mathcal{O}^{(L)}}+20\sqrt{2}\right)n_{L+1}^{59/24}.\]
**Remark 6.2**.: Let \(\operatorname{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) be a deep random Gaussian NN. Theorem 3.5 in [8] shows that if
\[c_{2}n\leq n_{1},\ldots,n_{L}\leq c_{1}n,\quad\text{for some constants $c_{1}\geq c_{2}>0$ and some $n\geq 1$} \tag{6.2}\]
and \(\sigma\) is polynomially bounded to order \(r\geq 1\), then there exists a constant \(C_{0}\) such that
\[d_{c}(\mathbf{z}^{(L+1)},\mathbf{z})\leq C_{0}n^{-1/2}.\]
We stress that the bounds in Theorem 6.1 provide (under different assumptions on \(\sigma\) and without any condition on the widths of the hidden layers) a detailed description of the analytical dependence of the upper estimate on the parameters of the model.
Proof.: Throughout this proof, for ease of notation, we put
\[\kappa:=d_{c}(\mathbf{z}^{(L+1)},\mathbf{z})\quad\text{and}\quad\gamma:=C_{W} \|\mathcal{O}_{\mathbf{n}_{L}}^{(L)}-\mathcal{O}^{(L)}\|_{L^{2}}.\]
We first note that it suffices to prove that
\[\kappa\leq(80\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{3/2}+48\|\Sigma_{n_{L+1}}^{-1}\|_ {op}+20\sqrt{2})n_{L+1}^{59/24}\gamma. \tag{6.3}\]
Indeed, the claim then follows by Theorem 5.1, noticing that
\[\|\Sigma_{n_{L+1}}^{-1}\|_{op}=\frac{1}{C_{b}+C_{W}\mathcal{O}^{(L)}} \tag{6.4}\]
and that \(C_{1}\leq C_{i}\), \(i=2,3\).
If \(\gamma>1/\mathrm{e}\), then the inequality (6.3) holds since \(\kappa\leq 1\) (which follows by the definition of the convex distance) and
\[20\sqrt{2}n_{L+1}^{59/24}\gamma>20\sqrt{2}\gamma>20\sqrt{2}/3>1.\]
From now on, we assume \(\gamma\leq 1/\mathrm{e}\). Let \(h\in\mathcal{I}_{n_{L+1}}\) (i.e., \(h\) is the indicator function of a measurable convex set in \(\mathbb{R}^{n_{L+1}}\)) be arbitrarily fixed and let
\[h_{t}(\mathbf{y}):=\mathbb{E}h(\sqrt{t}\mathbf{z}+\sqrt{1-t}\mathbf{y}),\quad t \in(0,1),\quad\mathbf{y}\in\mathbb{R}^{n_{L+1}}.\]
For any \(t\in(0,1)\), by Lemma 2.8\((i)\)
\[f_{t,h}(\mathbf{y}):=\frac{1}{2}\int_{t}^{1}\frac{1}{1-s}(\mathbb{E}h(\sqrt{s}\mathbf{z}+\sqrt{1-s}\mathbf{y})-\mathbb{E}h(\mathbf{z}))\mathrm{d}s,\quad\mathbf{y}\in\mathbb{R}^{n_{L+1}}\]
solves the Stein equation (2.5) with \(n_{L+1}\), \(h_{t}\), \(\Sigma_{n_{L+1}}\) defined by (3.4), and \(\mathbf{z}\), in place of \(d\), \(g\), \(\Sigma\) and \(\mathbf{N}_{\Sigma}\), respectively, i.e.,
\[h_{t}(\mathbf{y})-\mathbb{E}[h_{t}(\mathbf{z})]=\langle\mathbf{y},\nabla f_{t,h}(\mathbf{y})\rangle_{n_{L+1}}-\langle\Sigma_{n_{L+1}},\mathrm{Hess}\,f_{t,h }(\mathbf{y})\rangle_{H.S.},\quad\mathbf{y}\in\mathbb{R}^{n_{L+1}}. \tag{6.5}\]
By Lemma 2.9 it follows
\[\kappa\leq\frac{4}{3}\sup_{h\in\mathcal{I}_{n_{L+1}}}|\mathbb{E}h_{t}(\mathbf{ z}^{(L+1)})-\mathbb{E}h_{t}(\mathbf{z})|+\frac{20n_{L+1}}{\sqrt{2}}\frac{\sqrt{t}}{1 -t},\quad t\in(0,1). \tag{6.6}\]
Without loss of generality, hereafter we assume that \(\mathbf{z}\) is independent of \(\mathcal{F}_{L}\). Therefore, by (6.5) we have
\[\mathbb{E}[h_{t}(\mathbf{z}^{(L+1)})-h_{t}(\mathbf{z})\,|\,\mathcal{F}_{L}]= \sum_{i=1}^{n_{L+1}}\mathbb{E}[z_{i}^{(L+1)}\partial_{i}f_{t,h}(\mathbf{z}^{(L +1)})\,|\,\mathcal{F}_{L}]-\sum_{i,j=1}^{n_{L+1}}\Sigma_{n_{L+1}}(i,j)\mathbb{ E}[\partial_{ij}^{2}f_{t,h}(\mathbf{z}^{(L+1)})\,|\,\mathcal{F}_{L}]. \tag{6.7}\]
By Lemma 2.8\((i)\), for any \(t\in(0,1)\), the mapping \(\mathbf{y}\mapsto\partial_{i}f_{t,h}(\mathbf{y})\) is in \(C^{1}(\mathbb{R}^{n_{L+1}})\) and has bounded first order derivatives. Then, since \(\mathbf{z}^{(L+1)}\,|\,\mathcal{F}_{L}\) is a centered Gaussian random vector with covariance matrix
\[\Sigma^{\prime}_{n_{L+1}}:=(C_{b}+C_{W}\mathcal{O}_{\mathbf{n}_{L}}^{(L)}) \mathrm{Id}_{n_{L+1}}, \tag{6.8}\]
by Lemma 2.10\((ii)\) we have
\[\mathbb{E}[z_{i}^{(L+1)}\partial_{i}f_{t,h}(\mathbf{z}^{(L+1)})\,|\,\mathcal{ F}_{L}]=\sum_{j=1}^{n_{L+1}}\Sigma^{\prime}_{n_{L+1}}(i,j)\mathbb{E}[\partial_{ij}^{2}f_ {t,h}(\mathbf{z}^{(L+1)})\,|\,\mathcal{F}_{L}].\]
On combining this relation with (6.7) and taking the expectation, we have
\[\mathbb{E}[h_{t}(\mathbf{z}^{(L+1)})-h_{t}(\mathbf{z})] =\sum_{i,j=1}^{n_{L+1}}\mathbb{E}[(\Sigma^{\prime}_{n_{L+1}}(i,j) -\Sigma_{n_{L+1}}(i,j))\mathbb{E}[\partial_{ij}^{2}f_{t,h}(\mathbf{z}^{(L+1)}) \,|\,\mathcal{F}_{L}]]\] \[=\sum_{i,j=1}^{n_{L+1}}\mathbb{E}[\mathbb{E}[(\Sigma^{\prime}_{n_ {L+1}}(i,j)-\Sigma_{n_{L+1}}(i,j))\partial_{ij}^{2}f_{t,h}(\mathbf{z}^{(L+1)}) \,|\,\mathcal{F}_{L}]] \tag{6.9}\] \[=\mathbb{E}\langle\Sigma^{\prime}_{n_{L+1}}-\Sigma_{n_{L+1}}, \mathrm{Hess}(f_{t,h}(\mathbf{z}^{(L+1)}))\rangle_{H.S.}\]
where in the second equality we used the \(\mathcal{F}_{L}\)-measurability of \(\mathcal{O}_{\mathbf{n}_{L}}^{(L)}\). Taking the modulus on the relation (6.9) and applying the Cauchy-Schwarz inequality, we have
\[|\mathbb{E}[h_{t}(\mathbf{z}^{(L+1)})-h_{t}(\mathbf{z})]|\leq\sqrt{\mathbb{E} \|\Sigma^{\prime}_{n_{L+1}}-\Sigma_{n_{L+1}}\|^{2}_{H.S.}}\sqrt{\mathbb{E}\| \mathrm{Hess}(f_{t,h}(\mathbf{z}^{(L+1)}))\|^{2}_{H.S.}}. \tag{6.10}\]
By Lemma 2.8\((ii)\), for any \(h\in\mathcal{I}_{n_{L+1}}\), we have
\[\mathbb{E}\|\mathrm{Hess}(f_{t,h}(\mathbf{z}^{(L+1)}))\|_{H.S.}^{2}\leq\|\Sigma_{ n_{L+1}}^{-1}\|_{op}^{2}(n_{L+1}^{2}(\log t)^{2}\kappa+530n_{L+1}^{17/6}),\quad t \in(0,1).\]
Moreover,
\[\mathbb{E}\|\Sigma_{n_{L+1}}^{\prime}-\Sigma_{n_{L+1}}\|_{H.S.}^{2}=n_{L+1}C_{ W}^{2}\|\mathcal{O}_{\mathbf{n}_{L}}^{(L)}-\mathcal{O}^{(L)}\|_{L^{2}}^{2}. \tag{6.11}\]
On combining these relations and using the elementary inequality \(\sqrt{a_{1}+a_{2}}\leq\sqrt{a_{1}}+\sqrt{a_{2}}\), \(a_{1},a_{2}\geq 0\), we have
\[\sup_{h\in\mathcal{I}_{n_{L+1}}}|\mathbb{E}[h_{t}(\mathbf{z}^{(L +1)})-h_{t}(\mathbf{z})]|\] \[\qquad\leq C_{W}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(n_{L+1}^{3/2}|\log t |\sqrt{\kappa}+24n_{L+1}^{23/12})\|\mathcal{O}_{\mathbf{n}_{L}}^{(L)}- \mathcal{O}^{(L)}\|_{L^{2}}.\]
On combining this latter inequality with (6.6), we have
\[\kappa\leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(n_{L+1}^{3/2}|\log t| \sqrt{\kappa}+24n_{L+1}^{23/12})\gamma+\frac{20n_{L+1}}{\sqrt{2}}\frac{\sqrt{t }}{1-t},\quad t\in(0,1). \tag{6.12}\]
Since \(\kappa\leq 1\), by this relation we have
\[\kappa\leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(n_{L+1}^{3/2}|\log t|+24n_ {L+1}^{23/12})\gamma+\frac{20n_{L+1}}{\sqrt{2}}\frac{\sqrt{t}}{1-t},\quad t\in (0,1).\]
Setting \(t=\gamma^{2}\) in this latter inequality (note that this choice of the parameter \(t\) is admissible since \(\gamma\leq 1/\mathrm{e}<1\)), we have
\[\kappa \leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(2n_{L+1}^{3/2}|\log \gamma|+24n_{L+1}^{23/12})\gamma+\frac{20n_{L+1}}{\sqrt{2}}\frac{\gamma}{1- \gamma^{2}} \tag{6.13}\] \[\leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(2n_{L+1}^{3/2}|\log \gamma|+24n_{L+1}^{23/12})\gamma+20\sqrt{2}n_{L+1}\gamma,\]
where in the latter inequality we used the relation
\[\frac{1}{\sqrt{2}}\frac{\gamma}{1-\gamma^{2}}\leq\sqrt{2}\gamma, \tag{6.14}\]
which holds since \(\gamma\leq 1/\mathrm{e}<1/\sqrt{2}\). We rewrite the inequality (6.13) as
\[\kappa\leq\frac{8}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}n_{L+1}^{3/2}\gamma|\log \gamma|+(32n_{L+1}^{23/12}\|\Sigma_{n_{L+1}}^{-1}\|_{op}+20\sqrt{2}n_{L+1})\gamma.\]
Taking the square root and multiplying by \(|\log\gamma|\), we have
\[|\log\gamma|\sqrt{\kappa}\leq\sqrt{\frac{8}{3}}\|\Sigma_{n_{L+1}}^{-1}\|_{op}^ {1/2}n_{L+1}^{3/4}\gamma^{1/2}|\log\gamma|^{3/2}+(32n_{L+1}^{23/12}\|\Sigma_{n_ {L+1}}^{-1}\|_{op}+20\sqrt{2}n_{L+1})^{1/2}\gamma^{1/2}|\log\gamma|.\]
Since \(\max\{\sup_{y\in(0,1/\mathrm{e}]}y^{1/2}|\log y|^{3/2},\sup_{y\in(0,1/\mathrm{e}]}y^{1/2}|\log y|\}\leq 4\), we have
\[|\log\gamma|\sqrt{\kappa} \leq 4\sqrt{\frac{8}{3}}\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}n_{L+1}^ {3/4}+4(32n_{L+1}^{23/12}\|\Sigma_{n_{L+1}}^{-1}\|_{op}+20\sqrt{2}n_{L+1})^{1/2}\] \[=8\frac{\sqrt{6}}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}n_{L+1}^ {3/4}+4(32n_{L+1}^{23/12}\|\Sigma_{n_{L+1}}^{-1}\|_{op}+20\sqrt{2}n_{L+1})^{1/2}\] \[\leq 8\frac{\sqrt{6}}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}n_{L+1 }^{3/4}+16\sqrt{2}n_{L+1}^{23/24}\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}+\sqrt{2 0\sqrt{2}}n_{L+1}^{1/2}\] \[\leq\left[\left(8\frac{\sqrt{6}}{3}+16\sqrt{2}\right)\|\Sigma_{n _{L+1}}^{-1}\|_{op}^{1/2}+\sqrt{20\sqrt{2}}\right]n_{L+1}^{23/24} \tag{6.15}\] \[\leq(30\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}+6)n_{L+1}^{23/24}.\]
By (6.12) with \(t=\gamma^{2}\), (6.14) and (6.15), we finally have (6.3), indeed
\[\kappa \leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}(2n_{L+1}^{3/2}|\log \gamma|\sqrt{\kappa}+24n_{L+1}^{23/12})\gamma+20\sqrt{2}n_{L+1}\gamma\] \[\leq\frac{4}{3}\|\Sigma_{n_{L+1}}^{-1}\|_{op}\left[\left(60\| \Sigma_{n_{L+1}}^{-1}\|_{op}^{1/2}+12\right)n_{L+1}^{59/24}+24n_{L+1}^{23/12} \right]\gamma+20\sqrt{2}n_{L+1}\gamma\] \[\leq(80\|\Sigma_{n_{L+1}}^{-1}\|_{op}^{3/2}+48\|\Sigma_{n_{L+1}}^ {-1}\|_{op}+20\sqrt{2})n_{L+1}^{59/24}\gamma.\]
The proof is completed.
\(\square\)
### Normal approximation of deep random Gaussian NNs in the \(1\)-Wasserstein distance
The following theorem holds.
**Theorem 6.3**.: _Let \(\mathrm{GNN}(L,n_{0},n_{L+1},\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b}, \mathbf{W})\) be a deep random Gaussian NN, and let the notation and assumptions of Theorem 5.1 prevail. Then_
\[d_{W_{1}}(\mathbf{z}^{(L+1)},\mathbf{z}) \leq K_{1}\left(\sum_{k=1}^{L}[4\sqrt{2}\|P(|Z|)\|_{L^{2}}]^{L-k} \frac{c_{k}}{\sqrt{n_{k}}}\right)\] \[\leq K_{i}\left(\sum_{k=1}^{L}[4\sqrt{2}\|P(|Z|)\|_{L^{2}}]^{L-k} \frac{c_{k}}{\sqrt{n_{k}}}\right),\quad i=2,3\]
_where the constants \(c_{k}\), \(k=1,\ldots,L\), are defined by (5.2),_
\[K_{1}=K_{1}(n_{0},n_{L+1},\mathbf{x},\sigma,C_{b},C_{W}):=\frac{n_{L+1}C_{W}}{ \sqrt{C_{b}+C_{W}\mathcal{O}^{(L)}}},\]
\[K_{2}=K_{2}(n_{L+1},C_{b},C_{W}):=\frac{n_{L+1}C_{W}}{\sqrt{C_{b}}}\]
_and_
\[K_{3}=K_{3}(n_{0},n_{L+1},\mathbf{x},\sigma,C_{W}):=\frac{n_{L+1}\sqrt{C_{W}}}{\sqrt{\mathcal{O}^{(L)}}}.\]
**Remark 6.4**.: Let \(\mathrm{GNN}(L,n_{0},1,\mathbf{n}_{L},\sigma,\mathbf{x},\mathbf{b},\mathbf{W})\) be a deep random Gaussian NN with univariate output, widths of the hidden layers satisfying (6.2) and an activation function \(\sigma\) which is polynomially bounded to order \(r\geq 1\). Then by Theorem 3.3 in [8] we have that there exist two constants \(C,C_{0}>0\) such that
\[\frac{C_{0}}{n}\leq d_{W_{1}}(z^{(L+1)},z)\leq\frac{C}{n}.\]
Clearly this inequality shows the optimality of the rate \(1/n\). Here again, we note that the corresponding bound provided by Theorem 6.3 gives (under different assumptions on \(\sigma\) and without any condition on the widths of the hidden layers) a detailed description of the analytical dependence of the upper estimate on the parameters of the model. This makes our result useful for the purpose of output localization (see Section 7).
_Proof._ Let \(g\in\mathcal{L}_{n_{L+1}}(1)\) be arbitrarily fixed. Without loss of generality we assume that \(\mathbf{z}\) is independent of \(\mathcal{F}_{L}\). By Lemma 2.7 we then have
\[\mathbb{E}[g(\mathbf{z}^{(L+1)})-g(\mathbf{z})\,|\,\mathcal{F}_{L}]=\sum_{i=1 }^{n_{L+1}}\mathbb{E}[z_{i}^{(L+1)}\partial_{i}f_{g}(\mathbf{z}^{(L+1)})\,| \,\mathcal{F}_{L}]-\sum_{i,j=1}^{n_{L+1}}\Sigma_{n_{L+1}}(i,j)\mathbb{E}[ \partial_{ij}^{2}f_{g}(\mathbf{z}^{(L+1)})\,|\,\mathcal{F}_{L}], \tag{6.16}\]
where
\[f_{g}(\mathbf{y}):=\int_{0}^{\infty}\mathbb{E}[g(\mathbf{z})-g(\mathrm{e}^{-t }\mathbf{y}+\sqrt{1-\mathrm{e}^{-2t}}\mathbf{z})]\,\mathrm{d}t,\quad\mathbf{y }\in\mathbb{R}^{n_{L+1}}.\]
Again by Lemma 2.7 we have that the mapping \(\mathbf{y}\mapsto\partial_{i}f_{g}(\mathbf{y})\) is in \(C^{1}(\mathbb{R}^{n_{L+1}})\) and has bounded first order derivatives. Applying Lemma 2.10\((ii)\) exactly as in the proof of Theorem 6.1 (see a few lines before Equation (6.9)), we have
\[\mathbb{E}[g(\mathbf{z}^{(L+1)})-g(\mathbf{z})]=\mathbb{E}\langle\Sigma^{ \prime}_{n_{L+1}}-\Sigma_{n_{L+1}},\mathrm{Hess}(f_{g}(\mathbf{z}^{(L+1)})) \rangle_{H.S.},\]
where the matrix \(\Sigma^{\prime}_{n_{L+1}}\) is defined by (6.8). Therefore, applying the Cauchy-Schwarz inequality as in (6.10), we have
\[|\mathbb{E}[g(\mathbf{z}^{(L+1)})-g(\mathbf{z})]|\leq\sqrt{\mathbb{E}\| \Sigma^{\prime}_{n_{L+1}}-\Sigma_{n_{L+1}}\|_{H.S.}^{2}}\sqrt{\mathbb{E}\| \mathrm{Hess}(f_{g}(\mathbf{z}^{(L+1)}))\|_{H.S.}^{2}} \tag{6.17}\]
By Lemma 2.7 we have
\[\sqrt{\mathbb{E}\|\mathrm{Hess}(f_{g}(\mathbf{z}^{(L+1)}))\|_{H.S.}^{2}}\leq\sup_{\mathbf{y}\in\mathbb{R}^{n_{L+1}}}\|\mathrm{Hess}f_{g}(\mathbf{y})\|_{H.S.}\leq\sqrt{n_{L+1}}\|\Sigma_{n_{L+1}}^{-1}\|_{op}\|\Sigma_{n_{L+1}}\|_{op}^{1/2}.\]
On combining these relations with (6.11), we have
\[|\mathbb{E}[g(\mathbf{z}^{(L+1)})-g(\mathbf{z})]|\leq n_{L+1}C_{W}\|\Sigma_{n _{L+1}}^{-1}\|_{op}\|\Sigma_{n_{L+1}}\|_{op}^{1/2}\|\mathcal{O}_{\mathbf{n}_{ L}}^{(L)}-\mathcal{O}^{(L)}\|_{L^{2}}.\]
The claim follows by taking the supremum over \(g\) in this inequality and then using Theorem 5.1, relation (6.4) and the fact that
\[\|\Sigma_{n_{L+1}}\|_{op}=C_{b}+C_{W}\mathcal{O}^{(L)}.\]
\(\square\)
## 7. Localization of the output
In this section we illustrate the potential of the obtained results for practical applications. Indeed, both Theorem 4.1\((ii)\) and Theorem 6.1 allow one to explicitly estimate the probability that the output of a random Gaussian NN evaluated at the input \(\mathbf{x}\) belongs to a suitable set, without resorting to computationally expensive Monte Carlo methods. This is what we call output localization, and it can suggest a suitable architecture design for estimating a target function \(f\). Indeed, in statistical learning, given a training set
\[\{(\mathbf{x}_{i},f(\mathbf{x}_{i}))\}_{i\in I}\subset\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{L+1}},\]
the goal is to estimate the unknown function \(f\) by a NN with a certain fixed architecture. This is performed by minimizing an empirical risk function over the space of the NN's parameters, i.e., the biases and the weights. Such minimization problems are non-convex and are usually addressed by a gradient descent procedure; the latter stabilizes towards a solution which depends on the initialization point. Since in practical applications the initialization point is given by a realization of a random Gaussian NN, it can be convenient to choose \(L\), \(\mathbf{n}_{L}\), \(\sigma\), \(C_{b}\) and \(C_{W}\) by exploiting output localization, i.e., in such a way as to have a good estimate of the probability \(\mathbb{P}(\mathbf{z}^{(L+1)}(\mathbf{x}_{i})\in V_{i})\), \(i\in I\), where \(V_{i}\) is an appropriate neighborhood of \(f(\mathbf{x}_{i})\).
Hence, in the following we show how to use the upper bound on the total variation distance provided in Theorem 4.1\((ii)\), for a shallow random Gaussian NN with univariate output, and the upper bound on the convex distance provided by Theorem 6.1, for a deep random Gaussian NN, to localize the output. Indeed, given a measurable convex set \(V\subset\mathbb{R}^{n_{L+1}}\), by the definitions of both the total variation and the convex distances we have
\[\mathbb{P}(\mathbf{z}\in V)-C_{bound}\leq\mathbb{P}(\mathbf{z}^{(L+1)}\in V) \leq\mathbb{P}(\mathbf{z}\in V)+C_{bound},\]
where \(L=1\), \(n_{2}=1\) and
\[C_{bound}:=2\frac{C_{W}\sqrt{\mathbb{V}\mathrm{ar}(\sigma(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}})^{2})}}{C_{b}+C_{W}\mathbb{E}\sigma(Z\sqrt{C_{b}+C_{W}\mathcal{O}^{(0)}})^{2}}\frac{1}{\sqrt{n_{1}}}, \tag{7.1}\]
for the case of a shallow NN, while
\[C_{bound}:=C_{1}\left(\sum_{k=1}^{L}[4\sqrt{2}\|P(|Z|)\|_{L^{2}}]^{L-k}\frac{ c_{k}}{\sqrt{n_{k}}}\right), \tag{7.2}\]
for the case of a deep NN, where \(C_{1}\) is given by (6.1) and the constants \(c_{k}\), \(k=1,\ldots,L\), are defined by (5.2).
Let \(V:=\prod_{i=1}^{n_{L+1}}[r_{i},s_{i}]\) be a rectangle of \(\mathbb{R}^{n_{L+1}}\). Since \(\mathbf{z}\) is an \(n_{L+1}\)-dimensional centered Gaussian random vector with covariance matrix (3.4), we have
\[\mathbb{P}(\mathbf{z}\in V)=\prod_{i=1}^{n_{L+1}}\left(\mathbb{P}\left(Z\leq \frac{s_{i}}{\sqrt{C_{b}+C_{W}\mathcal{O}^{(L)}}}\right)-\mathbb{P}\left(Z\leq \frac{r_{i}}{\sqrt{C_{b}+C_{W}\mathcal{O}^{(L)}}}\right)\right).\]
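This product of one-dimensional Gaussian probabilities is immediate to compute. A minimal sketch (ours, assuming numpy and scipy):

```python
import numpy as np
from scipy.stats import norm

def rect_prob(rs, ss, var):
    """P(z in V) for V = prod_i [r_i, s_i], z centered Gaussian with
    covariance var * Id, where var = C_b + C_W * O^{(L)}."""
    rs, ss = np.asarray(rs, dtype=float), np.asarray(ss, dtype=float)
    sd = np.sqrt(var)
    return float(np.prod(norm.cdf(ss / sd) - norm.cdf(rs / sd)))
```

Together with \(C_{bound}\), this sandwiches \(\mathbb{P}(\mathbf{z}^{(L+1)}\in V)\) as in the display above.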
Now, we furnish numerical values of the constant \(C_{bound}\) in (7.1) and (7.2); in both cases we take \(\sigma(x):=x\mathbf{1}\{x\geq 0\}\), i.e., a ReLu activation function. Since the ReLu function is Lipschitz
continuous with Lipschitz constant equal to \(1\), \(\mathbb{E}Z^{2}=1\) and \(\mathbb{E}Z^{4}=3\), by Proposition 5.2\((ii)\) we have
\[\|P(|Z|)\|_{L^{2}}=C_{W}\sqrt{\mathbb{E}Z^{4}\mathbf{1}\{Z\geq 0\}}=C_{W}\sqrt{3/2}.\]
By the expression of the constants \(c_{\ell}\) in Theorem 5.1, we have
\[c_{\ell}=(C_{b}+C_{W}\mathcal{O}^{(\ell-1)})\sqrt{2\mathbb{E}Z^{4}\mathbf{1}\{ Z\geq 0\}}=(C_{b}+C_{W}\mathcal{O}^{(\ell-1)})\sqrt{3},\quad\ell=1,\ldots,L.\]
By the definition of the quantities \(\mathcal{O}^{(\ell)}\), \(\ell=1,\ldots,L\), (see (3.2)) we have
\[\mathcal{O}^{(\ell)}=(C_{b}+C_{W}\mathcal{O}^{(\ell-1)})\mathbb{E}Z^{2} \mathbf{1}\{Z\geq 0\}=(C_{b}+C_{W}\mathcal{O}^{(\ell-1)})/2,\quad\ell=1,\ldots,L\]
with \(\mathcal{O}^{(0)}\) given by (3.3). Therefore,
\[\mathcal{O}^{(\ell)}=\frac{C_{b}}{2}\sum_{k=0}^{\ell-1}\frac{C_{W}^{k}}{2^{k}} +\frac{C_{W}^{\ell}}{2^{\ell}}\mathcal{O}^{(0)}.\]
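With these closed forms, \(C_{bound}\) reduces to a finite product-and-sum that can be coded in a few lines. The sketch below (ours, assuming numpy; it hard-codes the ReLu moments \(\mathbb{E}Z^{2}\mathbf{1}\{Z\geq 0\}=1/2\) and \(\mathbb{E}Z^{4}\mathbf{1}\{Z\geq 0\}=3/2\), so that \(\mathbb{V}\mathrm{ar}(\sigma(\kappa Z)^{2})=(3/2-1/4)\kappa^{4}=5\kappa^{4}/4\)) evaluates (7.1) and, for univariate output, (7.2) with the constant \(C_{1}\) of (6.1):

```python
import numpy as np

def relu_c_bound_shallow(x, n1, C_b=1.0, C_W=1.0):
    """C_bound of (7.1) for the ReLu: k2 = C_b + C_W O^{(0)},
    E sigma(sqrt(k2) Z)^2 = k2/2, Var(sigma(sqrt(k2) Z)^2) = 5 k2^2 / 4."""
    k2 = C_b + C_W * np.mean(np.asarray(x, dtype=float) ** 2)
    return 2.0 * C_W * np.sqrt(5.0 / 4.0) * k2 \
        / (C_b + C_W * k2 / 2.0) / np.sqrt(n1)

def relu_c_bound_deep(x, n, C_b=1.0, C_W=1.0):
    """C_bound of (7.2) for the ReLu with n_{L+1} = 1, using the closed
    forms for O^{(l)}, c_l and ||P(|Z|)||_{L^2}; n = (n_1, ..., n_L)."""
    L = len(n)
    O = [np.mean(np.asarray(x, dtype=float) ** 2)]        # O^{(0)}
    for _ in range(L):
        O.append((C_b + C_W * O[-1]) / 2.0)               # O^{(l)}
    c = [np.sqrt(3.0) * (C_b + C_W * O[l]) for l in range(L)]
    P_norm = C_W * np.sqrt(1.5)                           # ||P(|Z|)||_{L^2}
    v = C_b + C_W * O[L]                                  # C_b + C_W O^{(L)}
    C1 = C_W * (80.0 / v**1.5 + 48.0 / v + 20.0 * np.sqrt(2.0))  # (6.1)
    return C1 * sum((4.0 * np.sqrt(2.0) * P_norm) ** (L - 1 - k)
                    * c[k] / np.sqrt(n[k]) for k in range(L))

print(relu_c_bound_shallow([0.0] * 4, n1=1))   # about 1.49, cf. Table 2
```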
Table 2 gives the values of \(C_{bound}\) in (7.1) in the case of a shallow random Gaussian NN with architecture \(L=1\), \(n_{0}=4\), \(n_{1}=n\) and \(n_{2}=1\), for different values of \(n\in\{1,10,10^{2},10^{3},10^{4},10^{5}\}\). Table 3 gives the values of \(C_{bound}\) in (7.2) in the case of a deep random Gaussian NN with architecture \(L=3\), \(n_{0}=4\), \(n_{1}=n_{2}=n_{3}=n\) and \(n_{4}=1\), for different values of \(n\in\{10^{4},10^{5},10^{6},10^{7},10^{8},10^{9}\}\). In both tables we consider four different inputs
\[\mathbf{x}\in\{(0,0,0,0),(0.1,0.1,0.1,0.1),(0.5,-0.5,0.5,-0.5),(10,10,10,10)\},\]
\begin{table}
\begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|ccc}
\hline\hline
$n$ & \multicolumn{3}{c|}{$1$} & \multicolumn{3}{c|}{$10$} & \multicolumn{3}{c|}{$10^{2}$} & \multicolumn{3}{c|}{$10^{3}$} & \multicolumn{3}{c|}{$10^{4}$} & \multicolumn{3}{c}{$10^{5}$} \\
$C_{b}\,\backslash\,C_{W}$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0},\mathbf{0},\mathbf{0},\mathbf{0})$} \\
\hline
$1$ & $0.02$ & $0.21$ & $1.49$ & $0.01$ & $0.07$ & $0.47$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.01$ & $0.00$ & $0.00$ & $0.00$ \\
$10$ & $0.01$ & $0.07$ & $0.47$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.01$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0.1},\mathbf{0.1},\mathbf{0.1},\mathbf{0.1})$} \\
\hline
$1$ & $0.02$ & $0.21$ & $1.49$ & $0.01$ & $0.07$ & $0.47$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.01$ & $0.00$ & $0.00$ & $0.00$ \\
$10$ & $0.01$ & $0.07$ & $0.47$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.01$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0.5},-\mathbf{0.5},\mathbf{0.5},-\mathbf{0.5})$} \\
\hline
$1$ & $0.02$ & $0.22$ & $1.54$ & $0.01$ & $0.07$ & $0.49$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.02$ & $0.00$ & $0.00$ & $0.00$ \\
$10$ & $0.01$ & $0.07$ & $0.47$ & $0.00$ & $0.02$ & $0.15$ & $0.00$ & $0.01$ & $0.05$ & $0.00$ & $0.00$ & $0.01$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ & $0.00$ \\
\hline\hline
\end{tabular}
\end{table}
Table 2. Values of $C_{bound}$ in (7.1) for different inputs $\mathbf{x}$ for a shallow random Gaussian NN with architecture $L=1$, $n_{0}=4$, $n_{1}=n$, $n_{2}=1$ and ReLu activation function.
whose Euclidean norm is, respectively, \(0\), strictly less than \(1\), \(1\) and strictly larger than \(1\), and
\[C_{b}\in\{1,10\}\quad\text{and}\quad C_{W}\in\{0.01,0.1,1\}.\]
It is clear from Table 2 that for a shallow random Gaussian NN, the Gaussian approximation of the output is very good for all choices of the parameters \(n\), \(C_{b}\), \(C_{W}\) and of the input \(\mathbf{x}\). It is also clear from Table 3 that for a deep random Gaussian NN, such as the NN with three hidden layers considered here, the Gaussian approximation of the output is very good for some choices of the parameters \(C_{b}\), \(C_{W}\), \(n\) and of the input \(\mathbf{x}\) (also when the number \(n\) of neurons in the hidden layers is not excessive) but very poor for other choices of these quantities. In order to analyse the influence of the parameters \(C_{b}\), \(C_{W}\) and of the input \(\mathbf{x}\) on the value of \(C_{bound}\) in (7.2), in Table 4 we report the value of the constant \(C_{1}\) that appears in (7.2) and is explicitly given in (6.1). As can be seen from this table, the value of \(C_{bound}\) strongly depends on the value of \(C_{1}\) which,
\begin{table}
\begin{tabular}{l|ccc|ccc|ccc|ccc}
\hline
 & \multicolumn{3}{c|}{$\mathbf{x}=(\mathbf{0},\mathbf{0},\mathbf{0},\mathbf{0})$} & \multicolumn{3}{c|}{$\mathbf{x}=(\mathbf{0.1},\mathbf{0.1},\mathbf{0.1},\mathbf{0.1})$} & \multicolumn{3}{c|}{$\mathbf{x}=(\mathbf{0.5},-\mathbf{0.5},\mathbf{0.5},-\mathbf{0.5})$} & \multicolumn{3}{c}{$\mathbf{x}=(\mathbf{10},\mathbf{10},\mathbf{10},\mathbf{10})$} \\
$C_{b}\,\backslash\,C_{W}$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ \\
\hline
$1$ & $1.55$ & $14.80$ & $85.04$ & $1.55$ & $14.80$ & $85.00$ & $1.55$ & $14.80$ & $83.86$ & $1.55$ & $14.78$ & $33.09$ \\
$10$ & $0.36$ & $3.52$ & $31.83$ & $0.36$ & $3.52$ & $31.83$ & $0.36$ & $3.52$ & $31.82$ & $0.36$ & $3.52$ & $30.28$ \\
\hline
\end{tabular}
\end{table}
Table 4. Values of \(C_{1}\) in (6.1) for different inputs \(\mathbf{x}\) for a deep random Gaussian NN with architecture \(L=3\), \(n_{0}=4\), \(n_{1}=n_{2}=n_{3}=n\), \(n_{4}=1\) and ReLu activation function.
\begin{table}
\begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|ccc}
\hline\hline
$n$ & \multicolumn{3}{c|}{$10^{4}$} & \multicolumn{3}{c|}{$10^{5}$} & \multicolumn{3}{c|}{$10^{6}$} & \multicolumn{3}{c|}{$10^{7}$} & \multicolumn{3}{c|}{$10^{8}$} & \multicolumn{3}{c}{$10^{9}$} \\
$C_{b}\,\backslash\,C_{W}$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ & $0.01$ & $0.1$ & $1$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0},\mathbf{0},\mathbf{0},\mathbf{0})$} \\
\hline
$1$ & $0.03$ & $0.58$ & $88.59$ & $0.01$ & $0.18$ & $28.01$ & $0.00$ & $0.06$ & $8.86$ & $0.00$ & $0.02$ & $2.80$ & $0.00$ & $0.01$ & $0.89$ & $0.00$ & $0.00$ & $0.28$ \\
$10$ & $0.07$ & $1.38$ & $331.57$ & $0.02$ & $0.44$ & $104.85$ & $0.01$ & $0.14$ & $33.16$ & $0.00$ & $0.04$ & $10.49$ & $0.00$ & $0.01$ & $3.32$ & $0.00$ & $0.00$ & $1.05$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0.1},\mathbf{0.1},\mathbf{0.1},\mathbf{0.1})$} \\
\hline
$1$ & $0.03$ & $0.58$ & $89.30$ & $0.01$ & $0.18$ & $28.24$ & $0.00$ & $0.06$ & $8.93$ & $0.00$ & $0.02$ & $2.82$ & $0.00$ & $0.01$ & $0.89$ & $0.00$ & $0.00$ & $0.28$ \\
$10$ & $0.07$ & $1.38$ & $331.85$ & $0.02$ & $0.44$ & $104.94$ & $0.01$ & $0.14$ & $33.18$ & $0.00$ & $0.04$ & $10.49$ & $0.00$ & $0.01$ & $3.32$ & $0.00$ & $0.00$ & $1.05$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{0.5},-\mathbf{0.5},\mathbf{0.5},-\mathbf{0.5})$} \\
\hline
$1$ & $0.03$ & $0.58$ & $106.14$ & $0.01$ & $0.18$ & $33.56$ & $0.00$ & $0.06$ & $10.61$ & $0.00$ & $0.02$ & $3.36$ & $0.00$ & $0.01$ & $1.06$ & $0.00$ & $0.00$ & $0.34$ \\
$10$ & $0.07$ & $1.38$ & $338.62$ & $0.02$ & $0.44$ & $107.08$ & $0.01$ & $0.14$ & $33.86$ & $0.00$ & $0.04$ & $10.71$ & $0.00$ & $0.01$ & $3.39$ & $0.00$ & $0.00$ & $1.07$ \\
\hline
\multicolumn{19}{c}{$\mathbf{x}=(\mathbf{10},\mathbf{10},\mathbf{10},\mathbf{10})$} \\
\hline
$1$ & $0.03$ & $1.90$ & $2998.50$ & $0.01$ & $0.60$ & $948.21$ & $0.00$ & $0.19$ & $299.85$ & $0.00$ & $0.06$ & $94.82$ & $0.00$ & $0.02$ & $29.99$ & $\dots$ & $\dots$ & $\dots$ \\
\hline\hline
\end{tabular}
\end{table}
Table 3. Values of \(C_{bound}\) in (7.2) for different inputs \(\mathbf{x}\) for a deep random Gaussian NN with architecture \(L=3\), \(n_{0}=4\), \(n_{1}=n_{2}=n_{3}=n\), \(n_{4}=1\) and ReLu activation function.
in turn, is closely related to the choice of the parameters \(C_{b}\) and \(C_{W}\) and does not depend much on the norm of the input vector \(\mathbf{x}\).
|
2304.01910 | On the Variance of Neural Network Training with respect to Test Sets and
Distributions | Typical neural network trainings have substantial variance in test-set
performance between repeated runs, impeding hyperparameter comparison and
training reproducibility. In this work we present the following results towards
understanding this variation. (1) Despite having significant variance on their
test-sets, we demonstrate that standard CIFAR-10 and ImageNet trainings have
little variance in performance on the underlying test-distributions from which
their test-sets are sampled. (2) We show that these trainings make
approximately independent errors on their test-sets. That is, the event that a
trained network makes an error on one particular example does not affect its
chances of making errors on other examples, relative to their average rates
over repeated runs of training with the same hyperparameters. (3) We prove that
the variance of neural network trainings on their test-sets is a downstream
consequence of the class-calibration property discovered by Jiang et al.
(2021). Our analysis yields a simple formula which accurately predicts variance
for the binary classification case. (4) We conduct preliminary studies of data
augmentation, learning rate, finetuning instability and distribution-shift
through the lens of variance between runs. | Keller Jordan | 2023-04-04T16:09:55Z | http://arxiv.org/abs/2304.01910v4 | # Calibrated Chaos: Variance Between Runs of Neural Network Training is Harmless and Inevitable
###### Abstract
Typical neural network trainings have substantial variance in test-set performance between repeated runs, impeding hyperparameter comparison and training reproducibility. We present the following results towards understanding this variation. (1) Despite having significant variance on their test-_sets_, we demonstrate that standard CIFAR-10 and ImageNet trainings have very little variance in their performance on the test-_distributions_ from which their test-sets are sampled, suggesting that variance is less of a practical issue than previously thought. (2) We present a simplifying statistical assumption which closely approximates the structure of the test-set accuracy distribution. (3) We argue that test-set variance is inevitable in the following two senses. First, we show that variance is largely caused by high sensitivity of the training process to initial conditions, rather than by specific sources of randomness like the data order and augmentations. Second, we prove that variance is unavoidable given the observation that ensembles of trained networks are well-calibrated. (4) We conduct preliminary studies of distribution-shift, fine-tuning, data augmentation and learning rate through the lens of variance between runs.
## 1 Introduction
Modern neural networks (Krizhevsky et al., 2012; He et al., 2016; Vaswani et al., 2017) are trained using stochastic gradient-based algorithms (Rumelhart et al., 1986; Kingma and Ba, 2014), involving randomized weight initialization, data/batch ordering, and data augmentations. Because of this stochasticity, each independent run of training produces a different network with better or worse performance than average.
The difference between such independent runs is often substantial. Picard (2021) finds that for a standard CIFAR-10 (Krizhevsky et al., 2009) training configuration, there exist random seeds which differ by 1.3% in terms of test-set accuracy. In comparison, the gap between the top two methods competing for state-of-the-art on CIFAR-10 has been less than 1% throughout the majority of the benchmark's lifetime1. Prior works therefore view this variance as an obstacle which impedes comparisons between training configurations (Bouthillier et al., 2021; Picard, 2021) and reproducibility (Bhojanapalli et al., 2021; Zhuang et al., 2022). To mitigate stochasticity, Zhuang et al. (2022) study deterministic tooling, Bhojanapalli et al. (2021) develop regularization methods, and several recent works (Wightman et al., 2021; Liu et al., 2022) report the average of validation metrics across multiple runs when comparing training configurations.
Footnote 1: [https://paperswithcode.com/sota/image-classification-on-cifar-10](https://paperswithcode.com/sota/image-classification-on-cifar-10)
In this paper, we begin with the following series of questions regarding the variance in test-set accuracy between independent, identically-configured runs of neural network training:
1. What causes this variance? Can we isolate which source of randomness is most responsible?
2. If we train many times, and take the network which performed best on the test-_set_, should we also expect it to perform above average on the test-_distribution_?
3. Which hyperparameters affect variance? For example, do there exist training configurations with the same average accuracy, but different amounts of variance?
Our contributions towards resolving these questions come in two flavors. First, we utilize large-scale data (\(\approx 350,000\) trained networks) in order to obtain a series of empirical results, for the case of standard trainings on CIFAR-10 and ImageNet (Deng et al., 2009). Second, we present theoretical results regarding distribution-wise variance, and establish a connection between variance and the calibration of neural network ensembles (Lakshminarayanan et al., 2017). Summarized, our contributions are as follows:
1. We demonstrate that no single source of randomness contributes independently to the overall variance between runs (Section 3.1). Instead, we argue that variance is caused by the extreme sensitivity of the training process to initial conditions (Section 3.2). In particular we show, confirming the results of Summers and Dinneen (2021), that varying just a single weight at initialization produces only 1% less churn than all three sources of randomness combined.
2. We argue that the observable variance between runs of training in terms of test-_set_ accuracy is a form of calibrated finite-sample noise, which does not imply variance with respect to the test-_distribution_. We begin by demonstrating that when training to convergence, disjoint splits of test data become decorrelated with respect to trained model performance (Section 4.1). Next we present a simplifying statistical assumption (Hypothesis 1) which closely approximates the accuracy distribution (Section 4.2). We derive an estimator for the standard deviation of distribution-wise accuracy, which predicts a value of 0.033% for CIFAR-10 and 0.034% for ImageNet. This is 10-20\(\times\) less variance than what we observe on the test-set (Section 4.3). We prove that a quantity of test-set variance is inevitable, given the observation that ensembles of independently trained networks are well-calibrated (Lakshminarayanan et al., 2017). For binary classification problems we derive a lower bound on the variance which we demonstrate is a good approximation to the true value (Section 4.4).
3. We obtain a variety of results connecting variance to other observed phenomena in deep learning. First, we strengthen the results of prior works (Devlin et al., 2018; Dodge et al., 2020; Mosbach et al., 2020) by showing that BERT-Large finetuning exhibits 99\(\times\) more distribution-wise variance than BERT-Base (Section 5.1). Next, we demonstrate that distribution-shifted test-sets experience excess variance between runs of training, compared to an in-domain test-set (Section 5.2). We find that data augmentation significantly reduces variance between runs (Section 5.3). When increasing the learning rate, we show that excess distribution-wise variance appears at the same point at which accuracy begins to decline (Section 5.4). Finally, using statistical correlations between the predicted logits of many independently trained networks, we obtain a new kernel function which reveals structure more effectively than distance in the penultimate feature-space of much larger models (Section 5.5).
### Related work
A number of prior works report results pertaining to our first question about variance - namely, whether we can determine which source of training stochasticity is most responsible. Fort et al. (2019) observe that when using a below-optimal learning rate, randomized data ordering has a smaller impact than model initialization on the churn of predictions between runs. Bhojanapalli et al. (2021) similarly find that fixing the data ordering has no effect, while fixing the model initialization reduces churn. Bouthillier et al. (2021) report that data ordering has a larger impact than model initialization. In contrast, we demonstrate in Section 3 that most variation can be attributed to the high sensitivity of the training process to initial conditions. We recently became aware of Summers and Dinneen (2021), who reach the same conclusions; our results further confirm their findings via several new experiments.
Dodge et al. (2020) study variation between runs of BERT finetuning. The authors report achieving substantial performance increases relative to previously reported results, via the strategy of re-running finetuning many times and taking the best-performing result. Other recent works have picked up this strategy, _e.g._, Margatina et al. (2021) evaluate active learning strategies using the best result out of five runs, citing Dodge et al. (2020) as motivation. We demonstrate (Section 5.1) that for the case of BERT-Base, these performance gains only amount to overfitting the validation set, and cannot be expected to generalize to new samples of data from the test-distribution.
Several prior works study the effects of varying the choice of which examples are included in the training and test datasets. Neal et al. (2018) study the behavior of two-layer MLPs as the data used to train them is varied. Baldock et al. (2021) define a metric called the "consistency score", which is the probability that an example is predicted correctly over randomness induced by varying the training data, and relates it to other notions of example difficulty. Bouthillier et al. (2021) explore re-sampling both the train and test sets, and find that across bootstrap re-samples of the test-set, the variance in accuracy of a trained model conforms to the theoretically expected binomial approximation.
Ilyas et al. (2022) demonstrate that across varying sub-samplings of the training dataset, the trained network's test-example predictions can be approximately modeled as linear functions ("datamodels") of the vector of indicator-variables corresponding to the subset of examples that was used to train. In total, the authors use over 3,000,000 runs of training on CIFAR-10 in order to learn these datamodels. The authors show that datamodels can be used to generate a high-quality feature embedding of CIFAR-10. We demonstrate that a similar such embedding can even be recovered from simple between-run statistical correlations, using a fixed training set (Section 5.5). Overall, our work shares the large-scale empirical approach of Ilyas et al. (2022), and we also use the same software package, FFCV (Leclerc et al., 2022), to perform high-throughput CIFAR-10 training. We use this approach to study the simpler topic of between-run variation given fixed training and test datasets.
Broadly, our work is part of a line of research which aims to understand the relationship between pairs of neural network weights produced by repeated independent runs of stochastic gradient descent. This topic is of both theoretical and practical interest, and has been studied from a variety of angles, including the similarity of internal representations (Li et al., 2015; Kornblith et al., 2019), degree of correlation between predictions (Fort et al., 2019; Jiang et al., 2021), similarity of decision boundaries (Sompalli et al., 2022), path-connectivity in weight-space (Draxler et al., 2018; Garipov et al., 2018), linear mode connectivity (Frankle et al., 2020), and linear mode connectivity modulo weight-space permutation symmetries (Tatro et al., 2020; Entezari et al., 2021; Ainsworth et al., 2022; Jordan et al., 2022).
## 2 Setup
We study supervised classification problems in which test examples are sampled from a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\). We make no assumptions on the training distribution. We view a given stochastic training configuration \(\mathcal{A}\), with fixed hyperparameters other than the random seed, as inducing a distribution over trained network weights \(\theta\sim\mathcal{A}\). We write \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) to denote the prediction function computed by the network which has weights \(\theta\). In particular, for our networks which end in a softmax layer, \(f_{\theta}(x)\) computes
Figure 1: **Measuring variance. Test-set accuracy distributions across four training durations on CIFAR-10. Each distribution is a histogram of test-set accuracy across 60,000 independent runs of training. The difference between the “luckiest” and most unlucky run (max and min accuracy) is 13.2%, 6.6%, 1.7%, and 1.4% for the 0, 4, 16, and 64-epoch training durations, respectively. The standard deviations are 1.87%, 0.56%, 0.19%, and 0.15%.**
the argmax over predicted probabilities. For each example \((x,y)\) we define \(C_{x,y}(\theta)=\mathbf{1}_{f_{\theta}(x)=y}\) to be equal to one when the network \(f_{\theta}\) correctly predicts the example. We view \(C_{x,y}\) as a Bernoulli random variable, whose mean \(\overline{C}_{x,y}=\mathbb{E}_{\theta\sim\mathcal{A}}[C_{x,y}(\theta)]\) is the probability that \((x,y)\) is correctly predicted by a newly trained network. For a test set \(S=((x_{1},y_{1}),\ldots,(x_{n},y_{n}))\) we write \(A_{S}(\theta)=\frac{1}{n}\sum_{i=1}^{n}C_{x_{i},y_{i}}(\theta)\) to denote the test-set accuracy of network \(f_{\theta}\). We assume that the test-set is sampled IID, so that, putting things together, we can write \(\mathbb{E}_{S\sim\mathcal{D}^{n}}[\text{Var}_{\theta\sim\mathcal{A}}(A_{S}( \theta))]\) to denote the expected (over re-sampling of the test-set) variance (over training stochasticity) of test-set accuracy. Finally, we define the distribution-wise accuracy of \(f_{\theta}\) as \(A(\theta)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[C_{x,y}(\theta)]\), and the average error of training configuration \(\mathcal{A}\) as \(\text{err}(\mathcal{A})=\mathbb{E}_{\theta\sim\mathcal{A}}[1-A(\theta)]\).
For our core experiments (Section 4), we train ResNets on CIFAR-10. We additionally conduct experiments on ImageNet in Section 5.2. For CIFAR-10 we use the 9-layer ResNet architecture developed by Page (2019). Derivatives of this architecture have become standard for use in fast CIFAR-10 training, _e.g._, in the DAWNBench competition (Coleman et al., 2017). We train using SGD-momentum with weight decay, and use random flipping and translation data augmentations. We study a wide range of training durations, and always linearly ramp the learning rate down to zero by the end of training. The 64-epoch training configuration achieves a mean accuracy of 93.78%, which is increased to 94.42% when we additionally use Cutout (DeVries and Taylor, 2017). In Figure 1 we show histograms of test-set accuracy for this configuration across four training durations. The zero-epoch case corresponds to evaluating the network at initialization; this naturally has an average accuracy of 10% which is equivalent to random chance, but some random initializations reach as high as 14% and as low as 6% accuracy. As we train for longer, the average accuracy improves and the distribution becomes more concentrated. Further details of our training configurations are provided in Appendix A.
## 3 Isolating sources of randomness
### The three typical sources
Training neural networks typically involves three sources of stochasticity, namely, model initialization, data ordering, and data augmentations. In this section we investigate how each of these sources contributes to the final variance between runs that we observe at the end of training.
We develop a CIFAR-10 training framework2 that allows each source to be independently controlled by one of three different seeds. For example, when the data-augmentation seed is fixed and the data-order seed is varied, the set of augmented images seen by the network throughout training will remain the same, but be presented in a different order. When all three seeds are fixed, training is deterministic, so that repeated runs produce the same network. Standard training is equivalent to allowing all three seeds to vary.
Footnote 2: [https://github.com/KellerJordan/CIFAR10-isolated-rng](https://github.com/KellerJordan/CIFAR10-isolated-rng)
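A minimal sketch of how such per-source seeding can be wired up (ours, in PyTorch/NumPy; an illustration, not the code from the repository linked above):

```python
import torch
import numpy as np

def make_training_components(init_seed, order_seed, aug_seed, n_train=50_000):
    """Control each source of training randomness with its own seed."""
    # (1) Model initialization
    torch.manual_seed(init_seed)
    model = torch.nn.Sequential(  # stand-in for the ResNet used in the paper
        torch.nn.Conv2d(3, 64, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(64, 10),
    )
    # (2) Data ordering: a permutation of example indices
    order_rng = np.random.default_rng(order_seed)
    epoch_order = order_rng.permutation(n_train)
    # (3) Data augmentation: per-example flip and translation decisions
    aug_rng = np.random.default_rng(aug_seed)
    flips = aug_rng.random(n_train) < 0.5
    shifts = aug_rng.integers(-4, 5, size=(n_train, 2))
    return model, epoch_order, (flips, shifts)

# Vary only the data order; model init and augmentations stay fixed
run_a = make_training_components(init_seed=0, order_seed=1, aug_seed=0)
run_b = make_training_components(init_seed=0, order_seed=2, aug_seed=0)
```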
Figure 2: **One source of stochasticity suffices for long training. When training for only 1 epoch, varying all three sources of randomness induces a standard deviation of 1.33% in test-set accuracy between runs, while each source alone induces 25-40% less variance. But when training for 64 epochs, varying any one source induces as much variance as all three together. Each distribution corresponds to 4,000 runs of training.**
We fix two seeds, and vary just the third (_e.g._, varying only the data order while keeping the model initialization and data augmentations fixed). Our naive intuition is that each factor contributes some part to the overall variance, so that this should decrease variance relative to the baseline of varying all three seeds.
Our results show that for short trainings of 1-16 epochs, this intuition is correct (Figure 2). For example, when training for 4 epochs, if we fix the data order and augmentations, while varying only the model initialization, then variance in test-set accuracy is reduced by 26%, with the standard deviation going from \(0.45\pm 0.01\%\) to \(0.38\pm 0.01\%\).
However, for longer trainings of 32 epochs or more, varying only one of the three random factors produces approximately the same variance as the baseline of varying all three. For example, across 8,000 runs of training for 64 epochs, varying just the model initialization (with data ordering and augmentation fixed) produces a standard deviation of 0.158%, almost the same as the baseline, which has 0.160%. At \(n=8,000\) this is not a statistically significant difference; it is possible that the true values are the same, or that they differ by a small amount. We conclude that for this training regime, any single random factor suffices to generate the full quantity of variance, rather than each factor contributing to overall variance.
### Sensitivity to initial conditions
In the previous section, we showed that when training to convergence, varying just the model initialization (or just the data ordering, or augmentations) produces approximately the same quantity of variance between runs as a baseline fully random setup. In this section we find that even varying a single weight at initialization suffices. Our findings confirm the work of Summers and Dinneen (2021), who reach similar conclusions.
Consider multiplying a single random weight in the network by 1.001. We call this "poking" the network. This is a tiny change; recent work in quantization (_e.g._, Dettmers et al., 2022) implies that trained models can typically have _all_ their weights modified more than this without losing accuracy.
Nevertheless, in Figure 3 we demonstrate that poking the network early in training produces a large difference in the final result. Our experiment is to run two trainings with the same random seed, but with one network being "poked" at some point during training. We measure the disagreement rate between the two networks, _i.e._, the fraction of their test-set predictions that differ. For short trainings, poking induces much less disagreement than changing the random seed. But when training for 128 epochs, poking alone produces an average disagreement of 5.14%, barely less than the 5.19% produced by using two different random seeds. In our experiments, we also observe that varying just the first batch of data, or the numerical precision of the first step (_e.g._, fp16 vs. fp32) has a similar effect. We conclude that almost all variation between runs is not produced by specific sources of randomness like model initialization, data ordering, etc., but is instead intrinsic to the training process, which has extreme sensitivity to initial conditions.
Figure 3: **Training has high sensitivity to initial conditions.** (Left:) For short trainings, pairs of runs which differ only by one network having been “poked” (_i.e._, had a single weight changed slightly at initialization) disagree on 7.0-7.5% of predictions. Pairs of runs with fully different random seeds disagree more, on \(\sim\)8.5% of predictions. For long trainings, there is almost no difference. (Right:) The earlier a network is poked during the training process, the more its predictions will disagree with the network that trained unperturbed with the same random seed.
## 4 Delving into variance
### Do lucky random seeds generalize?
In Figure 1 we observed that our standard CIFAR-10 training configuration has significant variation between runs. Even when training for a long duration, we found pairs of random seeds which produce trained networks whose test-set accuracy differs by more than 1%. In this section, we argue that this variance is merely a form of finite-sample noise caused by the limited size of the test-set, and does not imply almost any genuine fluctuation in the quality of the trained network.
Suppose we view the random seed as a training hyperparameter. Then we have observed that it can be effectively "tuned" to obtain improved performance on the test-set - on average, our training configuration attains an accuracy of 94.42%, but we found random seeds which reach above 95%, which is more than 10% fewer errors. However, this improvement on the test-set alone is not enough to conclude that the random seed genuinely affects model quality. What remains to be seen is whether these performance improvements can generalize to unseen data, or if we are merely over-fitting the random seed to the observed test-set.
To find out, we perform the following experiment. First, we split the CIFAR-10 test-set into two halves of 5,000 examples each. We can view the first half as the hyperparameter-validation split and second as the held-out test split. Next, we execute many independent runs of training, with identical configurations other than the varying random seed. We measure the performance of each trained network on both splits of data. If lucky random seeds do generalize, then we should observe that runs of training which perform well on the first split also perform better than average on the second split.
To additionally determine the effect of training duration, we repeat this experiment for trainings of 0, 4, 16, and 64 epochs, using 60,000 independently trained networks for each duration. We view the results in Figure 4. For short trainings, the two splits are indeed highly correlated, such that runs which perform well on the first split also tend to do well on the second. But when training for longer, this correlation nearly disappears. When training for 64 epochs, for example, our highest-performing network on the first split does not even perform better than average on the second split. And on average, the top 1/4 of runs with respect to the first split only perform 0.02% better than average on the second split.
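The core measurement here is simple to state in code. A minimal sketch (ours), where `C` is a runs-by-examples correctness matrix and the toy data below satisfies Hypothesis 1 by construction:

```python
import numpy as np

def split_correlation(C, seed=0):
    """Correlation, across runs, between accuracies on two disjoint
    halves of the test-set. C: (num_runs, n) correctness matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(C.shape[1])
    half1, half2 = idx[: len(idx) // 2], idx[len(idx) // 2 :]
    a1 = C[:, half1].mean(axis=1)
    a2 = C[:, half2].mean(axis=1)
    return np.corrcoef(a1, a2)[0, 1]

# Toy check: independent per-example coins give near-zero correlation
rng = np.random.default_rng(0)
p = rng.uniform(0.5, 1.0, size=10_000)
C = rng.random((2_000, p.size)) < p
print(split_correlation(C))  # close to 0: the splits are decorrelated
```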
This result has the following practical implication. Suppose we want to obtain a good CIFAR-10 model. Noticing significant variation between runs (Figure 1), we might be tempted to re-run training many times, in order to obtain networks with better test-set performance. However, according to Figure 4, this would be useless, because improvements on the test-set due to re-training will have near-zero correlation with improvements on unseen data. These networks would be "better" only in the sense of attaining higher test-set accuracy, but not in the sense of being more accurate on unseen data from the same distribution.
Figure 4: **Disjoint splits of test data become decorrelated when training to convergence. We evaluate a large number of independently trained networks on two splits of the CIFAR-10 test-set. When under-training, there is substantial correlation, so that a “lucky” run which over-performs on the first split is also likely to achieve higher-than-average accuracy on the second. As we increase the training duration, this correlation shrinks.**
### Example-wise independence
In the previous section we showed that when training to convergence, disjoint splits of test data become nearly decorrelated, in the sense that networks which randomly perform well on one split do not perform better than average on another. In this section we argue that this property also extends to individual examples, which become decorrelated as training progresses.
We begin by randomly choosing a pair of test-set examples to check for dependence: say, examples 776 and 796. In Figure 6 we observe that this pair is indeed independent; _i.e._, using the notation introduced in Section 2, we observe the relation \(C_{x_{776},y_{776}}\perp\!\!\!\perp C_{x_{796},y_{796}}\) (up to statistical significance across 60,000 runs). Given this small positive result, we next hypothesize that _all_ test-set examples vary independently between runs:
**Hypothesis 1** (Examplewise independence).: \[C_{x_{i},y_{i}}\perp\!\!\!\perp C_{x_{j},y_{j}}\quad\forall\,i\neq j\] (1)
Using our large collection of trained networks, we have accurate estimates for each example's mean \(\overline{C}_{x_{i},y_{i}}\). This enables us to sample from the hypothesis distribution by simulating the average of 10,000 independent coin flips with probabilities \(\overline{C}_{x_{1},y_{1}}\) through \(\overline{C}_{x_{10,000},y_{10,000}}\). In Figure 5 we demonstrate that this simulated distribution becomes a close fit with reality as we train to convergence. For short trainings, the empirical distribution contains excess variance which is unexplained by Hypothesis 1, but for long trainings this excess disappears, so that the hypothesis becomes true in the aggregate sense of predicting the shape and standard deviation of the empirical distribution. In Figure 14 we also show that the hypothesis compares favorably to the binomial approximation. And in Section 5.5 we investigate example-wise deviations, finding only a small number of visually similar pairs which violate the hypothesis. In the next section we will explore how deviations from Hypothesis 1 can be used to estimate variance with respect to the test-distribution.
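Concretely, the simulation described above amounts to a few lines of code. The sketch below (ours; the means `p_bar` are synthetic stand-ins for the estimated \(\overline{C}_{x_{i},y_{i}}\)) draws from the hypothesis distribution and checks the closed-form standard deviation quoted in Figure 5:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-example accuracy means over many runs (synthetic stand-ins here)
p_bar = rng.beta(0.5, 0.1, size=10_000)  # n = 10,000 test examples
n = p_bar.size

# Sample from the hypothesis: each example is an independent coin flip
num_sims = 2_000
sim_acc = (rng.random((num_sims, n)) < p_bar).mean(axis=1)

# Closed-form stddev under Hypothesis 1
closed_form = np.sqrt((p_bar * (1 - p_bar)).sum()) / n
print(sim_acc.std(), closed_form)  # these agree up to simulation noise
```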
Figure 5: **Examples independence explains variance when training to convergence. (Left:) We compare the empirical test-set accuracy distributions with those generated by an equal number of samples from our hypothesized statistical model (Equation 1). The hypothesis is clearly wrong for short trainings, but becomes a close fit as training progresses. (Right:) Assuming Hypothesis 1, the standard deviation of test-set accuracy should be \(\sqrt{\frac{1}{n^{2}}\sum_{i=1}^{n}\overline{C}_{x_{i},y_{i}}(1-\overline{C}_ {x_{i},y_{i}})}\). This formula becomes a good approximation when training to convergence.**
Figure 6: **Independent example-wise variation.** (Left) is image 776 of the CIFAR-10 test-set. Out of 60,000 64-epoch training runs, 21,736 (36.2%) correctly predict this example. (Right) is image 796, which is correctly predicted by 36,392 (60.7%). If the two are statistically independent, then they should be simultaneously correct \(0.362\cdot 0.607=21.97\%\) of the time. In fact, 13,103 (21.84% \(\pm 0.33\%\)) of our trained networks do correctly predict both.
### Estimating distribution-wise variance
In Section 4.1 we showed that accuracy is decorrelated between disjoint splits of test-data, and argued that this implies small genuine variation in model quality between runs of training. In this section we clarify our notion of model quality, and present a method of directly estimating variance on a test-distribution.
We first discuss our notion of model quality. Neural networks are typically evaluated by their performance on a test-set. However, what really matters is performance on the test-_distribution_\(\mathcal{D}\), because this is what determines the expected performance on new batches of unseen data. Therefore, our notion of model quality is the distribution-wise accuracy \(A(\theta)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\mathbf{1}_{f_{\theta}(x)=y}]\). The test-set is a finite sample from \(\mathcal{D}\), so that test-set accuracy is a noisy approximation to this.
Estimating the _mean_ of distribution-wise accuracy \(\mu:=\mathbb{E}_{\theta\sim\mathcal{A}}[A(\theta)]\) is relatively easy, because test-set accuracy is an unbiased estimator, as \(\mathbb{E}_{S\sim\mathcal{D}^{n}}[\mathbb{E}_{\theta\sim\mathcal{A}}[A_{S}( \theta)]]=\mu\). Estimating the _variance_\(\sigma^{2}:=\operatorname{Var}_{\theta\sim\mathcal{A}}(A(\theta))\) is more challenging. We prove that test-set variance is an overestimate for this quantity (proof in Appendix B.1).
**Theorem 1**.: _In expectation, variance in test-set accuracy overestimates distribution-wise variance._
\[\mathbb{E}_{S\sim\mathcal{D}^{n}}\left[\operatorname*{Var}_{\theta\sim \mathcal{A}}(A_{S}(\theta))\right]\geq\operatorname*{Var}_{\theta\sim \mathcal{A}}(A(\theta))\]
We aim to obtain an unbiased estimate of \(\operatorname{Var}_{\theta\sim\mathcal{A}}(A(\theta))\). We recall the results of Section 4.2: When training to convergence, we found that test-set accuracy follows a distribution which can be approximately recovered by assuming that all test examples vary independently from each other, in terms of the event that the model classifies them correctly (Hypothesis 1, Figure 5). The distribution-wise variance in this case should be essentially zero. On the other hand, for shorter trainings we observed substantial test-set variance in excess of that predicted by Hypothesis 1, _i.e._, \(\operatorname{Var}_{\theta\sim\mathcal{A}}(A_{S}(\theta))>\frac{1}{n^{2}}\sum_ {i=1}^{n}\overline{C}_{x_{i},y_{i}}(1-\overline{C}_{x_{i},y_{i}})\). For example, the hypothesis predicted that our 4-epoch configuration should have a standard deviation of \(0.22\%\), but the observed value was much larger at around \(0.56\%\). This suggests that these shorter trainings may have significant true variance between runs.
Our intuition is therefore that distribution-wise variance is related to this excess of test-set variance over that predicted by Hypothesis 1. We prove (Appendix B.2) that with a small rescaling, this is true:
**Theorem 2**.: _The distribution-wise variance between runs is equal to the following expectation._
\[\operatorname*{Var}_{\theta\sim\mathcal{A}}(A(\theta))=\frac{n}{n-1}\cdot \mathbb{E}_{S\sim\mathcal{D}^{n}}\left[\operatorname*{Var}_{\theta\sim \mathcal{A}}(A_{S}(\theta))\,-\,\frac{1}{n^{2}}\sum_{i=1}^{n}\overline{C}_{x_{ i},y_{i}}(1-\overline{C}_{x_{i},y_{i}})\right] \tag{2}\]
In particular, the quantity \(\hat{\sigma}_{S}^{2}:=\frac{n}{n-1}\cdot\left(\operatorname*{Var}_{\theta\sim \mathcal{A}}(A_{S}(\theta))\,-\,\frac{1}{n^{2}}\sum_{i=1}^{n}\overline{C}_{x_{ i},y_{i}}(1-\overline{C}_{x_{i},y_{i}})\right)\) is our desired unbiased estimator for distribution-wise variance. We calculate this estimate using many runs of training, with a fixed test-set. In Figure 7 we compare \(\hat{\sigma}_{S}^{2}\) to the simple test-set variance \(\operatorname{Var}_{\theta\sim\mathcal{A}}(A_{S}(\theta))\) across a range of training durations. When training for 4 epochs, we estimate the standard deviation of test-distribution accuracy to be \(\sqrt{\hat{\sigma}_{S}^{2}}=0.52\%\), indicating significant differences in model quality between trainings of this duration. In contrast, when training for 64 epochs, we estimate the distribution-wise standard deviation to be only \(0.033\%\). We obtain a similarly small estimate for ImageNet trainings in Section 5.2. These findings suggest that when training to convergence, there is little variation in model quality (_i.e._, expected performance with respect to new batches of data from the test-distribution) between runs of training.
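In code, computing \(\hat{\sigma}_{S}^{2}\) from a runs-by-examples correctness matrix takes a few lines. A minimal sketch (ours, with a synthetic correctness matrix standing in for real training runs):

```python
import numpy as np

def distribution_variance_estimate(C):
    """Unbiased estimate of Var_theta(A(theta)) from Theorem 2.
    C is a (num_runs, n) boolean matrix: C[r, i] = 1 iff run r
    correctly predicts test example i."""
    num_runs, n = C.shape
    acc = C.mean(axis=1)       # test-set accuracy A_S per run
    p_bar = C.mean(axis=0)     # per-example means C-bar
    hyp1_var = (p_bar * (1 - p_bar)).sum() / n**2
    return n / (n - 1) * (acc.var() - hyp1_var)

# Synthetic example: independent per-example coins => estimate ~ 0
rng = np.random.default_rng(0)
p = rng.uniform(0.3, 1.0, size=5_000)
C = rng.random((2_000, p.size)) < p
print(distribution_variance_estimate(C))  # near zero (can be slightly negative)
```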
Having confirmed that distribution-wise variance is small, it still remains to explain why there is high variance on the finite test-set in the first place. In the next section, we investigate the reasons for this.
Figure 7: **Test-set variance overestimates true variance.** We use Equation 2 to calculate unbiased estimates of the distribution-wise variance \(\operatorname{Var}_{\theta\sim\mathcal{A}}(A(\theta))\), which become \(22\times\) smaller than the test-set variance when training to convergence.
### Consequences of calibration
In this section we prove that variance in the test-set accuracy between runs of training is an inevitable consequence of the observation that ensembles of trained networks are well-calibrated. Our analysis additionally leads us to derive a simple formula which accurately estimates variance for binary classification problems.
In comparison to neural networks, regularized linear models have little variance between runs of training. For example, when training a linear model on CIFAR-10 with optimized choice of weight decay, we observe in Figure 9 (left) that the standard deviation of test-set accuracy is below 0.01%. This is because \(L^{2}\)-regularized linear models have a single optimum, so that variance with respect to both the test-set and test-distribution tends towards zero as training progresses. For neural networks, we found in the previous section that distribution-wise variance drops to a small value when training for long enough. But, unlike the linear model, the standard deviation of test-set accuracy plateaus around 0.14%, even for long trainings. It remains to explain this excessive test-set variation in neural network trainings, relative to linear models.
Lakshminarayanan et al. (2017) observe that neural networks have the following property: if we collect an ensemble of many independently trained networks, then the uncertainty estimates of this ensemble will be roughly calibrated. That is, if we let \(S^{\prime}\) be the subset of test images which are classified by 30-40% of independently trained networks as "dog", then approximately 30-40% of the images in \(S^{\prime}\) really will be dogs. Mathematically, ensemble-calibration can be defined as the following hypothesized property of training.
**Hypothesis 2** (Ensemble calibration).: _For every \(j\in\mathcal{Y}\) and \(p\in[0,1]\),_
\[\underset{(x,y)\sim\mathcal{D}}{P}\left(y=j\mid P_{\theta\sim\mathcal{A}}(f_{ \theta}(x)=j)=p\right)=p\]
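For reference, a standard binned estimate of this calibration property can be computed as follows (a sketch, ours; the paper's precise definition of \(\operatorname{ECE}(\mathcal{A})\) appears in its Appendix B.3 and may differ in details):

```python
import numpy as np

def ensemble_ece(pred_class_freq, labels, num_bins=10):
    """Binned estimate of how far an ensemble is from Hypothesis 2, for
    binary labels in {0, 1}. pred_class_freq[i] is the fraction of
    independently trained networks predicting class 1 on example i."""
    bins = np.minimum((pred_class_freq * num_bins).astype(int), num_bins - 1)
    ece = 0.0
    for b in range(num_bins):
        mask = bins == b
        if mask.any():
            gap = abs(labels[mask].mean() - pred_class_freq[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy check: a perfectly calibrated ensemble has ECE ~ 0
rng = np.random.default_rng(0)
f = rng.uniform(0, 1, size=100_000)
y = rng.random(f.size) < f
print(ensemble_ece(f, y))  # close to 0
```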
To see how this property relates to variance, suppose we have a zero-variance training configuration which makes the same test-set predictions every run. This will produce a subset of examples which are predicted as "dog" by 100% of trained networks. But unless the network has zero error, not all of these examples really will be dogs. So the ensemble must be overconfident, and cannot satisfy Hypothesis 2. Therefore, ensemble-calibration seems to imply a degree of variation between runs. We prove the following results in support of this claim.
**Theorem 3**.: _For binary classification, test-set variance is bounded below as follows._
\[\underset{S\sim\mathcal{D}^{n}}{\mathbb{E}}\left[\underset{\theta\sim \mathcal{A}}{\mathrm{Var}}(A_{S}(\theta))\right]\geq\frac{\mathrm{err}( \mathcal{A})-\mathrm{ECE}(\mathcal{A})}{2n}\]
**Theorem 4**.: _For binary classification, if we have ensemble-calibration and zero distribution-wise variance, then the test-set variance is given by the following formula._
\[\underset{S\sim\mathcal{D}^{n}}{\mathbb{E}}\left[\underset{\theta\sim \mathcal{A}}{\mathrm{Var}}(A_{S}(\theta))\right]=\frac{\mathrm{err}(\mathcal{ A})}{2n}\]
**Theorem 5**.: _For \(k\)-way classification, assuming ensemble-calibration, test-set variance is lower-bounded as follows._
\[\underset{S\sim\mathcal{D}^{n}}{\mathbb{E}}\left[\underset{\theta\sim \mathcal{A}}{\mathrm{Var}}(A_{S}(\theta))\right]\geq\frac{\mathrm{err}( \mathcal{A})}{nk^{2}}\]
In these theorems, we first consider binary classification. In this setting, we derive a lower-bound (Theorem 3) on the test-set variance, as a function of the expected calibration error (ECE). The definition of ECE can be found with the proof in Appendix B.3; it is equal to
Figure 8: **Test-set variance can be estimated by a simple formula for binary classification tasks.** Across hundreds of tasks, the formulas of Theorem 3 and 4 approximately recover the true stddevs of test-set accuracy. In comparison, the binomial approximation is inaccurate.
zero when Hypothesis 2 is satisfied. We additionally derive an exact formula (Theorem 4), which holds when assuming both ensemble-calibration and zero distribution-wise variance. Proof is provided in Appendix B.4.
In Figure 8 we apply these formulas to predicting the variance in test-set accuracy, for each of hundreds of binary classification tasks derived from CIFAR-10. We evaluate all \(\binom{10}{5}/2\) tasks corresponding to classifying each subset of five classes versus the other five. These tasks have a wide range of error rates and variances. As theoretically expected, the formula \((\operatorname{err}(\mathcal{A})-\operatorname{ECE}(\mathcal{A}))/2n\) is a lower bound. Empirically, it looks to be close to the true values. The downside of this formula is that it involves the calculation of \(\operatorname{ECE}(\mathcal{A})\), which requires sampling many runs of training. The next formula we evaluate is \(\operatorname{err}(\mathcal{A})/2n\), which does not have this weakness, since only a single run is needed to approximately estimate the average error. When assuming both Hypothesis 1 and Hypothesis 2, this formula is theoretically exact (Theorem 4). Empirically, it is also a good fit, with \(R^{2}=0.996\) across this collection of tasks. As a baseline, we also include the variance estimate produced by the binomial approximation, which is \(\operatorname{err}(\mathcal{A})(1-\operatorname{err}(\mathcal{A}))/n\). This estimate is around two times larger than that of Theorem 4 for these tasks, and significantly overestimates the true variance.
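To illustrate why \(\operatorname{err}(\mathcal{A})/2n\) is the right scale while the binomial formula overshoots, the following sketch (ours) simulates a binary task satisfying Hypotheses 1 and 2 with zero distribution-wise variance: ensemble frequencies \(f_{i}\) are drawn uniformly, labels satisfy \(P(y=1\mid f)=f\), and each run predicts class 1 independently with probability \(f_{i}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_runs = 5_000, 4_000

# A calibrated ensemble: f[i] is the fraction of runs predicting class 1,
# and labels are drawn so that P(y=1 | f) = f (Hypothesis 2)
f = rng.uniform(0, 1, size=n)
y = rng.random(n) < f

# Each run predicts class 1 independently with probability f[i] (Hypothesis 1)
preds = rng.random((num_runs, n)) < f
correct = preds == y
acc = correct.mean(axis=1)

err = 1 - acc.mean()
print("empirical var:        ", acc.var())
print("Theorem 4, err/2n:    ", err / (2 * n))
print("binomial, err(1-err)/n:", err * (1 - err) / n)
```

In this construction \(\operatorname{err}(\mathcal{A})=\mathbb{E}[2f(1-f)]\), and the empirical variance matches \(\operatorname{err}(\mathcal{A})/2n\) as Theorem 4 predicts, while the binomial estimate comes out larger.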
Finally, we consider \(k\)-way classification for arbitrary \(k\). In this setting, assuming ensemble-calibration, we derive a lower-bound for the test-set variance in terms of the average error (Theorem 5), with proof in Appendix B.5. We note that this bound is loose, and in our experiments not a good approximation to the true variance. From these theorems, we conclude that ensemble-calibration (Hypothesis 2) inevitably implies positive variance in test-set accuracy between runs of training, for all classification tasks.
## 5 Additional experiments
### BERT finetuning
In this section we study BERT (Devlin et al., 2018) finetuning, and use the tools developed in Section 4 to clearly differentiate the behavior of BERT-Large from BERT-Base. For our experiment, we finetune pretrained checkpoints of both models 1,000 times each on the MRPC (Dolan and Brockett, 2005) task. MRPC contains 5,801 sentence pairs labeled yes/no for whether the second sentence paraphrases the first, and has a training set of 3,668 examples, validation set of 408 examples, and test set of 1,725 examples.
Previous works (Devlin et al., 2018; Dodge et al., 2020; Mosbach et al., 2020) report and investigate training instability for BERT finetuning, noting particular instability for BERT-Large. In our experiment, we find that both BERT-Large and BERT-Base have substantial variance in validation-set performance between runs. BERT-Base has a standard deviation of 0.80% and BERT-Large has 2.24%, which seems to imply that both models are unstable, with BERT-Large being only somewhat more so. However, we find that the tools of Section 4 yield a more complete picture which clearly differentiates the two models. In
Figure 9: (Far left:) Training a regularized linear model has very little variance between runs. (Center left:) Removing either data augmentation, or 80% of training data from CIFAR-10 training, reduces the average accuracy to around 87.5%. But the former produces much more variance between runs than the latter. (Right two:) When finetuning BERT-Base on MRPC, variations between runs in terms of performance on the validation and test sets are not strongly correlated. On the other hand, BERT-Large has significant instability.
Figure 9 (right) we show that for BERT-Base, the validation and test splits are close to decorrelated in terms of finetuned model performance. The top 15% of seeds in terms of validation-set performance achieve only 0.09% higher performance than average on the test-set, implying that most of the observed variation does not generalize. In contrast, for BERT-Large there is a clear correlation. Furthermore, Equation 2 estimates a distribution-wise standard deviation of 2.08% for BERT-Large compared to only 0.21% for BERT-Base, meaning that the former has 99\(\times\) more variance. This provides a strong confirmation that BERT-Large finetuning is significantly more unstable compared to BERT-Base; as to the reasons underlying this instability, we defer to prior and potentially future work.
### ImageNet distribution shifts
In this section we show that shifted distributions of test data have increased variance in their accuracy distributions between runs of training. We additionally confirm that the findings of Section 4 generalize to standard ImageNet training.
Our experiment is as follows. We independently train 1,000 ResNet-18s on ImageNet using a standard configuration (see Appendix A.2), attaining an average accuracy of 71.0%. We study the predictions of these networks on the ImageNet validation set, ImageNet-V2, and three distribution-shifted datasets.
We first look at the ImageNet validation set. In Figure 10 (rightmost) we observe that the true distribution closely matches the one predicted by Hypothesis 1. The observed standard deviation is 0.118%, but using Equation 2 we estimate distribution-wise variance to be 12\(\times\) smaller, at 0.034%. This value is close to what we found for CIFAR-10, confirming that both training scenarios adhere to our conclusions from Section 4.
Next we consider ImageNet-V2 (Recht et al., 2019). This dataset is intended to have the same distribution of examples as ImageNet, and we demonstrate that its accuracy distribution has similar statistical properties as well. Specifically, we find that the distribution predicted by Hypothesis 1 also closely matches the true distribution, and Equation 2 estimates a distribution-wise standard deviation of 0.071%, which is larger than what we found on the ImageNet validation set, but still relatively small. We note that the accuracy distribution for this dataset is wider, but under Hypothesis 1 this can be explained simply by the fact that it has 5\(\times\) fewer examples than the ImageNet validation set.
By contrast, ImageNet-R (Hendrycks et al., 2021), ObjectNet (Barbu et al., 2019) and ImageNet-Sketch (Wang et al., 2019) have different statistical behavior compared to the first two datasets. These datasets are constructed to have shifted distributions relative to ImageNet. We find that their accuracy distributions have variance significantly in excess of that predicted by Hypothesis 1. We estimate using Equation 2 that these three test-sets have large distribution-wise standard deviations of 0.181%, 0.179%, and 0.257% respectively, indicating significant differences between runs of training.
We additionally investigate correlations between pairs of these five datasets (Figure 15). The strongest correlation is between ImageNet-R and ImageNet-Sketch, with \(R^{2}=0.14\) (\(p<10^{-8}\)). Manual inspection
Figure 10: **Distribution shift produces excess distribution-wise variance between runs.** Across 1,000 runs of ImageNet training, both the ImageNet validation set and ImageNet-V2 have accuracy distributions close to that predicted by Hypothesis 1. On the other hand, distribution-shifted sets have significant excess variance, which indicates genuine differences between trained models, in light of Theorem 2.
shows that both ImageNet-Sketch and ImageNet-R contain many sketch-like images, suggesting that similar features may induce correlation between distributions. All other pairs have \(R^{2}<0.01\). For example, ImageNet-Sketch is decorrelated from ObjectNet, with \(R^{2}=0.001\) (\(p=0.336\)).
Overall, our findings suggest that training instability is in some sense a relative notion. ImageNet training is stable when evaluated on the main distribution, with a small standard deviation of \(0.034\%\) on the distribution of the ImageNet validation set. But it is unstable on shifted distributions, with ImageNet-Sketch having \(58\times\) as much distribution-wise variance, at a standard deviation of \(0.257\%\). This serves as a caveat to the title: from the perspective of the main training distribution, variance is harmless in that every trained network has almost the same performance, but from the perspective of shifted distributions, there are significant differences between runs.
### The effect of data augmentation
In this section we look at the effect of data augmentation on variance. In Figure 9 (center left) we compare two ablations: first, removing a fixed \(80\%\) of training data, and second, removing data augmentation. While both configurations achieve a similar mean accuracy of \(87.5\%\), the augmentation-free training has over \(3.5\times\) more variance between runs. We also observe that the ensemble accuracy of the augmentation-free networks is higher, reaching \(91.2\%\), compared to the reduced-data ensemble, which achieves only \(89.8\%\). Based on these observations, we speculate that one role of data augmentation may be to reduce variance between runs.
### The effect of the learning rate
In this section we investigate the relationship between learning rate and variance. Our experiment is to execute 1,000 64-epoch CIFAR-10 trainings for each binary-power learning rate between \(2^{-10}\) and \(2^{2}\). For each setting, we measure the mean and variance of test-set accuracy. We observe three distinct regimes (Figure 11). For small learning rates below \(2^{-5}\), performance is low and variance is high. Between \(2^{-5}\) and \(0.5\), the performance increases with the learning rate, and variance is close to the level predicted by Hypothesis 1, indicating little distribution-wise variance. Above \(0.5\), which is the optimal learning rate, performance drops and variance rapidly increases.
We observe that a learning rate of \(0.5\) produces both the highest mean accuracy and the lowest variance. Raising it to \(1.0\) causes the standard deviation of test-set accuracy to increase from \(0.148\%\) to \(0.168\%\). This may seem insignificant, but Equation 2 estimates that this corresponds to the distribution-wise standard deviation rising from \(0.033\%\) to \(0.075\%\), which is a significant \(5\times\) increase in distribution-wise variance. We therefore speculate that, as a general property of neural network trainings, the optimal learning rate is the largest one which does not induce significant distribution-wise variance.
Figure 11: **Accuracy peaks at the largest learning rate without excess variance. We show the mean and standard deviation of test-set accuracy across a range of learning rates. The highest accuracy is attained at learning rate \(0.5\), which also has the smallest excess variance relative to the prediction of Hypothesis 1. When raising the learning rate above that, variance rises and accuracy drops.**
### Neural posterior correlation kernel
In Section 4.2, we showed that the shape and standard deviation of the test-set accuracy distribution are closely approximated by the hypothesis that all test-example predictions vary independently between runs of training. In this section, we investigate whether there exist individual pairs of examples which deviate from this hypothesis, and introduce a new kernel based on correlations between predicted logits.
Hypothesis 1 predicts that each pair of test-set examples \((x_{i},y_{i}),(x_{j},y_{j})\) varies independently in terms of the events that each are classified correctly by the trained model. That is, a trained model \(f_{\theta}\) which correctly predicts \(f_{\theta}(x_{i})=y_{i}\) should be no more likely than chance to also correctly predict \(f_{\theta}(x_{j})=y_{j}\).
To find deviations from this hypothesis, we repeat the experiment of Figure 6 across all pairs of test-set examples. That is, for a pair \((x_{i},y_{i}),(x_{j},y_{j})\), let \(C_{1}:=C_{x_{i},y_{i}}\) and \(C_{2}:=C_{x_{j},y_{j}}\). Then we estimate the values of \(P(C_{1})\), \(P(C_{2})\), and \(P(C_{1}C_{2})\) across our 60,000 runs of training, and compute \(\delta_{ij}=|\hat{P}(C_{1}C_{2})-\hat{P}(C_{1})\hat{P}(C_{2})|\) in order to measure deviation from Hypothesis 1.
Across all \(\binom{10,000}{2}\) pairs, we find that there exist only five for which \(\delta_{ij}\geq 0.02\) (Figure 13), and an additional 19 with \(\delta_{ij}\geq 0.01\). The remaining 49,994,976 approximately conform to the hypothesis with \(|\hat{P}(C_{1}C_{2})-\hat{P}(C_{1})\hat{P}(C_{2})|<0.01\). Therefore, on a fine-grained level Hypothesis 1 also seems to be a good approximation for all but a small number of pairs.
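In practice, this pairwise test reduces to a few array operations. Below is a minimal numpy sketch of the estimate, assuming a hypothetical boolean array `correct` of shape (runs, n_examples) that records whether each run classified each test example correctly; for the full 10,000-example test-set the joint matrix is large but still tractable.

```python
import numpy as np

def pairwise_independence_deviation(correct):
    """Estimate delta_ij = |P(C_i C_j) - P(C_i) P(C_j)| for all pairs.

    correct: boolean array of shape (runs, n_examples) whose entry (r, i)
    says whether run r classified test example i correctly.
    Returns an (n_examples, n_examples) array of deviations.
    """
    c = correct.astype(np.float64)
    runs = c.shape[0]
    p = c.mean(axis=0)          # marginal estimates P-hat(C_i)
    joint = c.T @ c / runs      # joint estimates P-hat(C_i C_j)
    return np.abs(joint - np.outer(p, p))

# count pairs deviating by at least 2%, ignoring the diagonal
# delta = pairwise_independence_deviation(correct)
# off_diag = delta[~np.eye(len(delta), dtype=bool)]
# print(int((off_diag >= 0.02).sum()) // 2)
```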
We next look for pairs of examples whose predicted _logits_ are correlated between runs of training. For \(K\)-way classification, let \(F_{\theta}(x)\) be the vector of \(K\) logits produced by the
Figure 12: **The neural posterior correlation kernel.** (Left:) TriMap projection of the CIFAR-10 test-set according to the NPCK, which groups examples together based on correlations of their predicted logits between repeated runs of training. (Right:) From the top, ten pairs with \(\kappa_{NPCK}(x,x^{\prime})\approx 0.75\), then 0.50, then 0.25. The reader is invited to interactively explore this visualization at [https://observablehq.com/d/24e96d5a104e383f](https://observablehq.com/d/24e96d5a104e383f).
Figure 13: **Anomalous pairs.** Out of all pairs of examples in the CIFAR-10 test-set, only these five deviate by more than 2% from Hypothesis 1.
trained network \(\theta\) on input \(x\). For a pair of inputs \((x,x^{\prime})\), we define the "neural posterior correlation kernel" (NPCK) as follows:
\[\kappa_{NPCK}(x,x^{\prime})=\frac{1}{K}\sum_{k=1}^{K}\operatorname*{ corr}_{\theta\sim\mathcal{A}}(F_{\theta}(x)_{k},F_{\theta}(x^{\prime})_{k}) \tag{3}\]
We compute an approximation of this kernel using the logit outputs from 60,000 networks trained for 64 epochs each. We find that the NPCK is effective at revealing structure in the dataset; _e.g._, pairs of inputs with high NPCK values are almost all visually near-duplicates. We present a sample of such pairs in Figure 12; there are 11 pairs with \(\kappa_{NPCK}(x,x^{\prime})\geq 0.75\) which are almost exact duplicates, 100 pairs with \(\kappa_{NPCK}(x,x^{\prime})\geq 0.5\) which are near-duplicates, and 670 pairs with \(\kappa_{NPCK}(x,x^{\prime})\geq 0.25\), which share visual features. We additionally use TriMap (Amid and Warmuth, 2019) to produce a 2D projection of the test-set according to the kernel, which the reader is invited to explore at the URL provided in Figure 12. One can find clusters of near-duplicate images and various intriguing subpopulations by inspecting this projection.
Compared to distances in penultimate layer feature-spaces, the NPCK performs favorably. We compare it to the penultimate layer features of two models: one of our small trained ResNets, and a much larger finetuned ViT (Dosovitskiy et al., 2020) which attains over 98% accuracy on CIFAR-10. Out of the 100 most similar pairs for each metric, the NPCK recovers a far higher fraction of near-duplicates (Figure 16 and Figure 17). Only a specially trained duplicate detection model (Yokoo, 2021) performs comparably.
The NPCK bears a close similarity to the neural network prior kernel (NNPK), which is defined by Amid et al. (2022) as \(\kappa_{NNPK}(x,x^{\prime})=\mathbb{E}_{\theta\sim\pi}[\langle F_{\theta}(x), F_{\theta}(x^{\prime})\rangle]\), that is, the expected inner product between predicted logits as the network weights vary over the initialization distribution \(\pi\). The NPCK is equivalent to the NNPK with two modifications: first, normalizing the logits, and second, passing from the weight distribution at initialization to the weight distribution after training. That is, let \(G_{\theta}(x)_{k}=\frac{F_{\theta}(x)_{k}-\mathbb{E}_{\theta\sim\mathcal{A}} [F_{\theta}(x)_{k}]}{\operatorname*{std}_{\theta\sim\mathcal{A}}(F_{\theta}( x)_{k})}\) be the normalized logits, then the NPCK satisfies the equality \(\kappa_{NPCK}(x,x^{\prime})=\frac{1}{K}\mathbb{E}_{\theta\sim\mathcal{A}}[ \langle G_{\theta}(x),G_{\theta}(x^{\prime})\rangle]\).
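Using this normalized-logit identity, Equation 3 admits a short numpy sketch; the array `logits`, of shape (runs, n_examples, K), is an assumed container for the stored outputs of the repeated runs of training.

```python
import numpy as np

def npck(logits, eps=1e-12):
    """Neural posterior correlation kernel (Equation 3).

    logits: array of shape (runs, n_examples, K) holding the K predicted
    logits of each trained network on each test example.
    Returns the (n_examples, n_examples) kernel matrix.
    """
    runs, n, k = logits.shape
    # normalized logits G_theta(x)_k: zero mean, unit std across runs
    g = (logits - logits.mean(axis=0)) / (logits.std(axis=0) + eps)
    # kappa(x, x') = (1/K) E_theta[<G(x), G(x')>], estimated over runs
    return np.einsum("rik,rjk->ij", g, g) / (runs * k)
```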
We can view the NPCK as inducing an embedding via the map \(\varphi(x)=\{\kappa_{NPCK}(x,x_{j})\}_{j\in[10,000]}\). This map shares similar properties to the _datamodel embeddings_ proposed by Ilyas et al. (2022). Both are computed via thousands of runs of training, and reveal global structure in the dataset more effectively than penultimate-layer feature embeddings. And both embeddings have a high "effective dimension": the top 500 principal components of the datamodel embeddings of the training set explain less than 50% of their variation, and similarly, the top 500 components of our NPCK-based embeddings of the test set explain only 55%. In comparison, just 10 components explain 90% of variation in penultimate layer features (Ilyas et al., 2022). The NPCK-based embeddings are somewhat simpler to compute than datamodel embeddings, as they do not depend on varying the training data, but of course, this makes them lack the specific meaning of datamodels. Overall, both embeddings can be used effectively to reveal structure in the training and test datasets.
## 6 Conclusions
In this paper we present a method of estimating the variance between repeated runs of training with respect to performance on a test-_distribution_. This form of variance implies genuine differences between trained networks in terms of their expected performance on new batches of data, whereas we show that the variance in performance on a test-_set_ often does not. In particular, we find that standard CIFAR-10 and ImageNet trainings have significant variance on their respective test-sets, but almost no variance with respect to their test-distributions, so that all networks generated by repeated runs of such trainings have almost the same expected performance on new samples of test data.
We demonstrate that for these trainings, variation in the test-set predictions approximately satisfies _examplewise independence_ (Hypothesis 1), where the event that a trained network correctly predicts a particular example can be modeled as a (biased) coin flip which varies independently from the network's predictions on other examples. On the other hand, at the level of predicted logits there do exist more significant dependencies between examples, which we show can be used to reveal structure in the dataset via the neural posterior correlation kernel.
Prior works have observed that ensembles of networks from repeated runs of training make predictions that are roughly _calibrated_ (Hypothesis 2). We prove that this calibration inevitably implies a degree of variance in test-set accuracy between runs of training. And for binary classification tasks, when additionally assuming Hypothesis 1, we prove that the variance in test-set accuracy is given by \(\text{err}/2n\), where \(\text{err}\) is the average error and \(n\) is the size of the test-set. We demonstrate that this simple formula is indeed a good approximation across hundreds of binary classifications tasks derived from CIFAR-10.
We use our estimator for distribution-wise variance to obtain the following new or strengthened conclusions connecting variance to other phenomena in deep learning. (1) Distribution-wise variance shrinks as we increase the training duration, reaching a small standard deviation of below 0.04% for standard CIFAR-10 and ImageNet trainings. (2) The optimal learning rate is the largest one that does not induce excess distribution-wise variance. (3) Data augmentation reduces variance between runs. (4) Test-sets which are distribution-shifted relative to the training set incur excess variance compared to in-domain test-data. We demonstrate each of these claims in a limited context, for benchmark tasks. Each therefore forms a partially validated hypothesis about the empirical behavior of neural network training. We are interested in future work which finds counterexamples to these hypotheses, or expands their demonstrated range of applicability. We also hope that our work will lead to the discovery of other such regularities in the variance between runs of neural network training.
## Acknowledgements
We are grateful to Behnam Neyshabur for his guidance on a preliminary version of this work. We thank Ehsan Amid, Luke Johnston, and Ryan Weber each for their insightful comments on the draft.
|
2310.11450 | Explaining Deep Neural Networks for Bearing Fault Detection with
Vibration Concepts | Concept-based explanation methods, such as Concept Activation Vectors, are
potent means to quantify how abstract or high-level characteristics of input
data influence the predictions of complex deep neural networks. However,
applying them to industrial prediction problems is challenging as it is not
immediately clear how to define and access appropriate concepts for individual
use cases and specific data types. In this work, we investigate how to leverage
established concept-based explanation techniques in the context of bearing
fault detection with deep neural networks trained on vibration signals. Since
bearings are prevalent in almost every rotating equipment, ensuring the
reliability of intransparent fault detection models is crucial to prevent
costly repairs and downtimes of industrial machinery. Our evaluations
demonstrate that explaining opaque models in terms of vibration concepts
enables human-comprehensible and intuitive insights about their inner workings,
but the underlying assumptions need to be carefully validated first. | Thomas Decker, Michael Lebacher, Volker Tresp | 2023-10-17T17:58:19Z | http://arxiv.org/abs/2310.11450v1 | # Explaining Deep Neural Networks for Bearing Fault Detection with Vibration Concepts
###### Abstract
Concept-based explanation methods, such as Concept Activation Vectors, are potent means to quantify how abstract or high-level characteristics of input data influence the predictions of complex deep neural networks. However, applying them to industrial prediction problems is challenging as it is not immediately clear how to define and access appropriate concepts for individual use cases and specific data types. In this work, we investigate how to leverage established concept-based explanation techniques in the context of bearing fault detection with deep neural networks trained on vibration signals. Since bearings are prevalent in almost every rotating equipment, ensuring the reliability of intransparent fault detection models is crucial to prevent costly repairs and downtimes of industrial machinery. Our evaluations demonstrate that explaining opaque models in terms of vibration concepts enables human-comprehensible and intuitive insights about their inner workings, but the underlying assumptions need to be carefully validated first.
Explainable AI, Concept activation vectors, Rolling element bearings, Vibration analysis, Deep learning
## I Introduction
Bearings (Fig. 1) are critical mechanical components that are omnipresent in many industrial applications. They are used to support and reduce friction in rotating shafts, which are commonly found in motors, pumps, fans, and other types of rotating equipment. But over time, normal wear and tear or exogenous factors like improper lubrication or excessive overloading can cause bearings to degrade. Unexposed faults of this kind will not only lead to higher maintenance efforts and losses in productivity but might additionally compromise safe machine operation in general. To prevent such scenarios, various methods have been proposed to enable fast and reliable bearing fault detection using artificial intelligence [1]. Such methods are typically trained to identify and analyze patterns in vibration signals obtained during machine operation. Implementing fault detection using deep neural networks has demonstrated very promising capabilities [2] as such models are able to achieve impressive performance on various benchmark datasets [3]. Nevertheless, deep neural networks are also considered black box models as their complicated computational structure makes it infeasible to fully comprehend their prediction logic. This makes it difficult to rigorously assess their trustworthiness and reliability when facing real-world circumstances under actual deployment. To overcome these inherent limitations, various techniques have been developed to make such models more transparent to humans, which are also referred to as Explainable AI [4]. Some prominent methods include, for instance, LIME [5], SHAP [6], or gradient-based approaches [7], which all aim to explain individual model predictions by evaluating the influence of individual input features. However, such methods usually indicate feature importance in raw units of the model input domain. For deep neural networks trained on raw vibration signals, this would merely indicate how individual signal values in time affect the model output, which is also illustrated in Fig. 1. Formulating a concrete reason that reveals why the model has detected an outer ring fault based on signal value importance is challenging as such kinds of explanations do not convey insights with an appropriate degree of abstraction and generality. While there exist techniques to adapt such methods to this particular use case [8], another appealing remedy is provided by concept-based explanations [9]. This category of explanation methods enables quantifying how important high-level and potentially abstract characteristics of input data are for model predictions. For instance, Concept Activation Vectors (CAVs) [10] have been used to analyze if the presence of the concept "stripes" influences an image classifier when detecting zebras. However, defining appropriate concepts and conducting corresponding analyses for industrial vibration signals is not straightforward and requires adequate design choices grounded in problem-specific domain expertise. In this work, we investigate the applicability and utility of concept-based explanations in the context of bearing fault detection. This is, to the best of our knowledge, the first attempt to rigorously apply such techniques in the context of vibration signal analysis related to industrial applications. Our contributions are threefold:
* We introduce simulated high-frequency resonances as a meaningful way to exemplify important high-level signal characteristics related to bearing fault vibrations.
* We utilize these vibration concepts to derive corresponding Concept Activation Vectors for the purpose of model explanation for different fault detection architectures.
* We analyze the faithfulness as well as the utility of concept-based explanations in the context of bearing fault detection and demonstrate how to derive novel insights.
## II Background
### _Bearing Fault induced Vibrations_
Defective bearings emit characteristic vibrations during machine operation depending on the precise location of the fault, which could, for instance, occur at the inner or outer ring. This
can be illustrated by modeling the bearing and connected parts as a simple linear system that repeatedly gets excited during rotation [11]. Whenever a ball hits the defect located at one of the rings, an impact gets induced that causes vibrations in the form of high-frequency resonances of connected machine parts. This can be expressed via a periodic impulse train denoted by \(i(t)=\sum_{k=-\infty}^{\infty}a(t)\delta(t-kT)\), where \(\delta\) describes the discrete unit impulse function, \(T\) the time between impulses and \(a(t)\) the amplitude of each impulse, which depends on parameters like fault size or load distribution. Note that the frequency of impacts is characteristic of the fault origin and can be computed explicitly based on the bearing's geometry [11]. For outer and inner ring faults these frequencies are referred to as the Ball Passing Frequency Outer ring (BPFO) and Ball Passing Frequency Inner ring (BPFI). Given the bearing diameter \(D\), ball diameter \(d\), number of balls \(n\), and the bearing contact angle \(\alpha\), such frequencies can be expressed as functions of the bearing's rotation frequency \(f_{r}\):
\[\begin{split}\text{BPFO}(f_{r})&=\frac{n}{2}f_{r} \left(1-\frac{d}{D}\cos\alpha\right)\\ \text{BPFI}(f_{r})&=\frac{n}{2}f_{r}\left(1+\frac{d }{D}\cos\alpha\right)\end{split}\]
Thus, the occurrence of a fault at a specific position inside the bearing will lead to periodic impact impulses with an indicative frequency. Moreover, each impulse will excite the system causing an impulse response denoted by \(h(t)\). This response \(h(t)\) will typically consist of high-frequency resonances of the structure with exponentially decaying magnitude in time [12]. The resulting fault-induced vibrations can simply be modeled as system output caused by \(i(t)\) resulting from convolution with the corresponding impulse response function: \(x_{\text{fault}}(t)=h(t)*i(t)\), which can be recorded by a vibration sensor. Although rather simple, this signal model is able to express important physical properties of bearing fault-induced vibrations that are helpful to detect and distinguish different fault types in practice [11]. A comprehensive discussion and other approaches to model such signals can be found in [13].
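For illustration, the two characteristic frequencies above can be computed directly from the bearing geometry; the following is a small numpy sketch, and the example geometry values in the usage comment are purely hypothetical.

```python
import numpy as np

def bpfo(f_r, n, d, D, alpha):
    """Ball Passing Frequency Outer ring for rotation frequency f_r [Hz]."""
    return 0.5 * n * f_r * (1.0 - (d / D) * np.cos(alpha))

def bpfi(f_r, n, d, D, alpha):
    """Ball Passing Frequency Inner ring for rotation frequency f_r [Hz]."""
    return 0.5 * n * f_r * (1.0 + (d / D) * np.cos(alpha))

# example: a hypothetical bearing with 9 balls, d = 7.9 mm, D = 28.2 mm,
# contact angle 0, rotating at 25 Hz
# print(bpfo(25.0, n=9, d=7.9, D=28.2, alpha=0.0))   # ~81 Hz
# print(bpfi(25.0, n=9, d=7.9, D=28.2, alpha=0.0))   # ~144 Hz
```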
### _Concept-based Explanations with TCAV_
The goal of concept-based explanations is to enable insights into how a machine learning model forms its predictions in terms of comprehensible and intuitive concepts. In this context, a concept can be understood as any human understandable characteristic related to the data domain of interest. When dealing with image data, such concepts could, for instance, correspond to the visual appearance of different textures, shapes, or colors. Explaining an image classifier based on such concepts could for example answer concrete questions such as "does the presence of stripes matter for an image to be classified as a zebra", or "is the color red important for a model to detect fire engines?" [10]. Testing with Concept Activation Vectors (TCAV) [10] is a popular method for neural networks that globally quantifies the sensitivity of a classwise prediction with respect to a predefined concept. It assumes that different concepts are linearly separable in the network's activation space and requires access to suitable example data which resemble the concepts of interest. To mathematically derive TCAV scores consider the following setup. Let \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{C}\) be a deep neural network trained to map \(d\)-dimensional inputs \(x\) to \(C\)-dimensional output scores such that \(F_{c}(x)\in\mathbb{R}\) is indicative for predicting class \(c\). Deep neural networks are typically structured in terms of computation layers such that \(F(x)=F^{L}\circ\cdots\circ F^{1}(x)\) and \(F^{[l]}(x)=F^{l}\circ\cdots\circ F^{1}(x)\) describes the activation vector of the \(l\)-th layer given input \(x\). TCAV associates each concept of interest with a corresponding Concept Activation Vector _CAV_ that characterizes the concept in terms of the network's activation patterns after a specific layer. To do so it requires access to a set of positive examples inputs \(\mathcal{P}\subset\mathbb{R}^{d}\) which exhibit the concept and a set of negative examples \(\mathcal{N}\subset\mathbb{R}^{d}\) where the concept is absent. Based on these exemplifying datasets, a binary linear classifier can be trained to separate the corresponding activations \(F^{[l]}(\mathcal{P})\) and \(F^{[l]}(\mathcal{N})\) at a predefined layer \(l\). Given such a linear discriminant, the CAV of a concept is defined as the normal vector oriented toward the activation values of positive examples. Thus, CAVs express directions in the network's activation space that capture the common characteristics of the underlying examples exhibiting a high-level concept. Therefore, CAVs can be used to evaluate the sensitivity of a class prediction \(F_{c}(x)\) concerning a concept by evaluating the directional derivative with respect to the activation values along its CAV: \(\nabla_{F^{[l]}}F_{c}(x)\cdot\textit{CAV}\). This estimates how the prediction of a certain class score changes if a concept would be marginally more present in the input \(x\). The final TCAV score is computed by aggregating concept sensitivities over a set of input examples \(\mathcal{X}\subset\mathbb{R}^{d}\) where the importance of the concept should be evaluated. One common way to do so is by measuring the share of examples in \(\mathcal{X}\) with positive concept sensitivity:
\[\textit{TCAV}_{c}^{l}=|\{x\in\mathcal{X}:\nabla_{F^{[l]}}F_{c}(x)\cdot\textit {CAV}>0\}|/|\mathcal{X}|\]
As concepts tend to become more separable at later layers [10], we always consider TCAV scores with respect to the layer \(L-1\) before the final output is computed, so that \(\textit{TCAV}_{c}=\textit{TCAV}_{c}^{L-1}\). By repeating the TCAV computation over multiple sets of positive and negative examples for the same concept, one is also able to derive confidence intervals and conduct tests assessing the statistical significance of the obtained scores [10]. By now, there also exist many variations of this technique
Fig. 1: Left: Schematics of a rolling element bearing and its components. Image Credit: Silberwolf / CC-BY-2.5. Right: Incomprehensible results of LIME highlighting the importance of signal values for the prediction of a deep neural network. Explanations in this form are hard to interpret for humans.
to derive concept-based explanations [14, 15, 16, 17, 18], but we rely solely on the original formulation for the remainder of this paper. Concept-based explanations using TCAV have already been successfully applied to diverse applications, like meteorology [19], medical imaging [20], emotion recognition [21], recommender systems [22], natural language processing [23], and electronic health records [24].
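As a rough, from-scratch outline of this procedure (not a reference implementation), the sketch below fits a CAV with a scikit-learn linear classifier and computes the sign-count TCAV score via automatic differentiation in PyTorch; `head`, standing for the part of the network after layer \(l\), and the activation arrays are assumed inputs.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def fit_cav(act_pos, act_neg):
    """Fit a linear classifier on layer-l activations and return the CAV,
    i.e. the unit normal of the decision boundary oriented towards the
    positive concept examples."""
    X = np.concatenate([act_pos, act_neg])
    y = np.concatenate([np.ones(len(act_pos)), np.zeros(len(act_neg))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(head, activations, cav, class_idx):
    """Sign-count TCAV score: fraction of inputs whose class score has a
    positive directional derivative along the CAV. `head` maps layer-l
    activations to the final class scores."""
    a = torch.tensor(activations, dtype=torch.float32, requires_grad=True)
    score = head(a)[:, class_idx].sum()
    (grad,) = torch.autograd.grad(score, a)
    sens = grad.numpy() @ cav          # directional derivatives per input
    return float((sens > 0).mean())
```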
## III Defining suitable vibration concepts
In order to define adequate concepts for the purpose of model explanations, the authors in [25] propose three necessary prerequisites. First, concepts and exemplifying examples shall be intrinsically meaningful to humans on their own. Second, examples exhibiting a concept should all be coherent in sharing common properties that can be associated with the concept while still being clearly distinguishable from other concepts. Third, concepts need to be important and relevant for the prediction problem to be analyzed. Guided by these desiderata, we propose to define vibration concepts as simulated high-frequency resonances that are amplitude-modulated with a distinctive frequency. Remember that these kinds of signals express key properties of the expected structure of simplified bearing fault vibrations. The computational procedure that we use to simulate such vibration concepts is described in Algorithm 1 and some noise-free examples of simulated concepts are depicted in Fig. 2. Note that each simulated signal is meaningful on its own as it represents an idealized version of a potential bearing fault-related vibration. Examples of such vibration concepts are also coherent as they all share the common feature of periodic modulation with the same characteristic frequency while still being individually different for varying choices of the remaining parameters. Furthermore, they are also important for the task of bearing fault detection as the presence of specifically modulated resonances is a typical indicator of the presence of a certain fault type.
```
Input: characteristic fault frequency \(f_{char}\), resonance frequency \(f_{res}\), amplitude \(a\), time decay \(\tau\), noise level \(\sigma\), offset \(t_{0}\)
Compute:
  \(i(t)=\sum_{k}a\delta(t-k/f_{char}+t_{0})\)
  \(h(t)=e^{-t/\tau}\cos(2\pi f_{res}t)\)
  \(\varepsilon(t)\sim\mathcal{N}(0,\sigma^{2})\)
Output: \(x_{vib}=h(t)*i(t)+\varepsilon(t)\)
```
**Algorithm 1** Simulating examples of vibration concepts
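A possible numpy realization of Algorithm 1 is sketched below; the sampling rate `fs`, the signal `duration`, and the random `seed` are additional assumptions needed to discretize the continuous-time model.

```python
import numpy as np

def simulate_vibration_concept(f_char, f_res, a, tau, sigma, t0,
                               fs=64000, duration=0.25, seed=None):
    """Discretized sketch of Algorithm 1 (fs, duration, seed assumed)."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    t = np.arange(n) / fs
    # impulse train i(t): impulses of amplitude a every 1/f_char seconds,
    # shifted by the offset t0 (assumed non-negative here)
    i = np.zeros(n)
    impact_times = np.arange(t0, duration, 1.0 / f_char)
    i[(impact_times * fs).astype(int)] = a
    # impulse response h(t): exponentially decaying resonance
    h = np.exp(-t / tau) * np.cos(2.0 * np.pi * f_res * t)
    # x_vib = h(t) * i(t) + eps(t), truncated to the signal length
    x_vib = np.convolve(i, h)[:n] + rng.normal(0.0, sigma, n)
    return t, x_vib
```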
## IV Explaining in terms of Vibration Concepts
To overcome the limitations of common feature importance techniques when applied to fault detection models for raw vibration data, we propose instead to analyze such models in terms of the above-defined vibration concepts. This is also illustrated in Fig. 2. Consider, for instance, a set of vibration signals \(\mathcal{X}_{outer}^{fr}\) originating from an outer ring fault during rotation with frequency \(f_{r}\). To better understand how a deep neural network performs fault classification, a domain expert might explicitly seek to assess whether the model is sensitive to the presence of high-frequency resonances modulated with \(BPFO(f_{r})\) when evaluating signals in \(\mathcal{X}_{outer}^{fr}\). This would be a desirable model behavior as such sensitivity implies that the model has indeed learned a valid relationship that is in line with existing domain knowledge. In the case of inner fault signals the same reasoning applies with \(BPFI(f_{r})\). Leveraging TCAV in combination with vibration concepts we are now in a position to retrieve such kinds of explanations as follows. Consider again the signals \(\mathcal{X}_{outer}^{fr}\) introduced above. Utilizing Algorithm 1 we can simulate a set of positive examples \(\mathcal{P}_{outer}^{fr}\) where we fix \(f_{char}=BPFO(f_{r})\) and vary all remaining parameters. To obtain a set of negative examples \(\mathcal{N}\) we simulate vibration concepts where we additionally randomize
Fig. 2: Obtaining Concept Activation Vectors (CAVs) [10] for vibration concepts to explain a deep neural network \(F\) trained to perform bearing fault detection on raw vibration signals \(x\): Our approach allows us to define and simulate meaningful vibration concepts that express relevant but high-level signal properties grounded in domain knowledge. It starts by simulating a set of diverse positive examples \(\mathcal{P}\) where all impacts occur with the characteristic frequency \(BPFO\) and a set of negative ones \(\mathcal{N}\) where the occurrence is random. These sets can be used to localize the concepts of ”being modulated with frequency \(BPFO\)” in the network’s activation space of layer \(L-1\) by training a linear classifier to separate \(F^{[L-1]}(\mathcal{P})\) from \(F^{[L-1]}(\mathcal{N})\). The direction oriented towards \(F^{[L-1]}(\mathcal{P})\) defines the corresponding CAV which can be used to evaluate how important the concept is for the predictions of the deep neural network \(F\).
the modulation frequency \(f_{char}\) over a specified interval. This ensures that the resulting _CAV_ precisely captures the high-level property of being modulated with the right characteristic frequency as this is the only distinguishing property between both sets of concepts. The computed score of this setup \(\textit{TCAV}_{\textit{outer}}^{f_{r}}\in[0,1]\) measures the proportion of outer ring fault vibrations in \(\mathcal{X}_{\textit{outer}}^{f_{r}}\) where the corresponding prediction shows positive sensitivity with respect to the concept. Note that such information is immediately accessible and comprehensible to a human expert trying to validate and comprehend the prediction logic of an opaque fault detection model. To demonstrate the applicability and utility of this kind of analysis we conducted dedicated numerical experiments.
## V Numerical Experiments
We consider two publicly available datasets containing real vibration recordings obtained under varying machine operating conditions. To compute TCAV scores we use Captum [26].
### _Dataset details_
#### V-A1 Cwru
The Case Western Reserve University (CWRU) [27] provides real vibration recordings of healthy and damaged bearings with respect to various damage types under four different rotation speeds with varying fault sizes and load conditions. For outer ring fault data, different positions of the damage on the ring are also considered. We restrict our analysis to healthy, inner and outer ring damage signals sampled at 48 kHz at the drive-end sensor. All faults are mechanically enforced using electro-discharge machining. More details about this dataset can be found in [28]. We partitioned all available signals into segments of size \(d=3000\) with no overlap and normalized them to the range of -1 to 1 to obtain the dataset CWRU3000. The set CWRU9000 was constructed equivalently, containing signals of length \(d=9000\).
#### V-A2 Paderborn
The dataset published by Paderborn University [29] contains vibration data collected from a test motor running with six different undamaged bearings as well as various kinds of damaged ones. These include inner and outer ring faults that are either artificially induced using different mechanical procedures or generated via accelerated lifetime tests to mimic more realistic damages from actual wear and tear. During the extensive experiments, varying operating conditions and damage levels were also considered, including two different rotation speeds. Precise information about all conducted experiments is documented in [29]. For this data source, we divided all signals into non-overlapping segments of size \(d=16000\), normalized them, and constructed two separate datasets. PB artificial is formed by all healthy signals and all fault signals that are artificially induced, while PB realistic contains all healthy signals together with all realistic faults resulting from actual degradation.
### _Fault detection models and training_
To train deep neural networks for bearing fault detection, we consider different architectures that are popular in computer vision. In particular, we fitted one-dimensional versions of LeNet [30], AlexNet [31], ResNet [32] with four residual blocks, DenseNet [33] with four densely connected blocks, and Vision Transformers (ViT) [34] with a patch size of 250 to classify vibration signals. Following common practice, we split all datasets into train, validation and test sets with shares 60/20/20. All models are trained for 50 epochs to distinguish between healthy, inner, and outer fault-related signals based on the training set and we selected the final model for each architecture based on the validation loss. The resulting test performances for all final models are presented in Fig. 3 (left). On the CWRU datasets, all networks are able to attain high accuracies independent of the signal length while on Paderborn data the performance tends to be lower, especially for LeNet, DenseNet and ResNet. Moreover, many models show similar prediction capabilities, although our architecture choice reflects various network designs, sizes, and complexities. This raises the question of whether the models also exhibit the same internal reasoning to solve the task. To better understand how the models form their prediction, concept-based explanations can be useful due to their capability to provide relevant insights in human comprehensible form. To assess the applicability and utility of TCAV with vibration concepts for this purpose, we conducted a detailed analysis.
### _Vibration Concept Sensitivity with TCAV_
To compute vibration concept sensitivity with TCAV we considered the following setup. For all datasets, we randomly selected \(100\) signals separately for each available rotation speed (in rpm) and fault type, yielding individual evaluation datasets \(\mathcal{X}_{\textit{fault type}}^{\textit{rpm}}\). For each evaluation dataset, we further simulated \(200\) positive and negative examples of corresponding vibration concepts as described in section IV. To establish conclusive results we additionally repeated all experiments for each evaluation set with ten different pairs of simulated example sets such that each reported result constitutes an average of ten outcomes. Remember that a crucial assumption for the applicability and faithfulness of TCAV is that the concepts of interest are linearly separable in the activation space. Otherwise, the resulting CAV is not able to faithfully express the concept and the sensitivities are unreliable. In Fig. 4 we plot the test performances of the classifiers trained to distinguish the concepts in the respective activation space. A consistent finding across all datasets is that the concepts are only clearly separable for DenseNet and ResNet models. This demonstrates that the applicability of TCAV for industrial prediction problems is not guaranteed and requires careful
Fig. 3: Left: Test accuracies for all considered deep neural network architectures and datasets. Right: Confusion matrix of DenseNet on PB realistic.
initial analysis. It also hints at the conjecture that vibration concepts are easier to distinguish for architectures that incorporate skip connections into their design, which can be of interest to model developers. However, since all models exhibit high test performance on CWRU data and DenseNet and ResNet even tend to be inferior on the PB datasets, the linear separability of the vibration concept seems not to be a necessary condition for strong model performance. To assure reliable TCAV scores, we restrict all further analyses to the two models where the separability assumption is unambiguously satisfied. The resulting TCAV scores for all evaluation settings are presented in Fig. 5. On CWRU3000 and CWRU9000, the DenseNet model always attains higher scores compared to ResNet across all rotation speeds and fault types. Remember that higher sensitivity to vibration concepts is a desirable model property due to its link to existing domain knowledge. Since both models show almost identical performance on test data, the difference in TCAV scores suggests preferring DenseNet over ResNet for actual deployment. However, the TCAV results are inconclusive on PB datasets across model types. All evaluated predictions on PB artificial are not particularly sensitive to the relevant concepts as all sign counts are predominantly around or significantly below \(50\%\). Hence, the majority of predictions on signals in the different evaluation sets do not show positive sensitivity to the presence of the relevant vibration concept. Thus, despite being able to localize concepts in their activations, the models still leverage other signal patterns to perform classification on this dataset. On PB realistic, the TCAV scores for DenseNet exhibit a strong discrepancy between fault types. Predictions on outer faults attain a score of around \(80\%\) for both rotation speeds, whereas the respective scores for inner faults are substantially lower. To further investigate this issue, we depicted the corresponding confusion matrix of this architecture in Fig. 3 (right). Note that when presented with an outer ring vibration, the model is able to detect this fault type in \(89\%\) (1013/1136) of the cases. This is in line with the high TCAV scores for outer faults, which imply that during such predictions the DenseNet is sensitive to the desirable vibration concept. However, when facing inner fault vibrations, the model is only able to correctly identify \(60\%\) (772/1289). This observation matches the low TCAV score for inner faults, which suggests that the model neglects the domain knowledge that is exemplified by vibration concepts when evaluating inner faults.
## VI Conclusion
In this work, we investigated the applicability and utility of TCAV to better understand the inner workings of different deep neural networks trained to detect bearing faults in vibration signals.
Fig. 4: Test accuracies of a linear binary classifier trained to separate vibration concepts in the activation space of different considered model architectures. Only DenseNet and ResNet can achieve consistently good results, indicating that the corresponding TCAV scores are faithful only for such models.
Fig. 5: TCAV results of DenseNet and ResNet for all considered evaluation sets and data sources. The results imply that for predictions of models trained on CWRU data, the DenseNet architecture exhibits stronger sensitivity to vibration concepts compared to ResNet independent of the segment size. On PB artificial, the TCAV scores are inconclusive, while the significant discrepancy between scores for DenseNet on PB realistic hints at a potential model deficiency.
For this purpose, we introduced simulated resonance vibrations with characteristic modulation as meaningful concepts that exemplify relevant physical signal properties. Our results imply that the retrieved concept-based explanations can indeed produce novel insights once the underlying assumptions are satisfied. Our analysis can be complemented by defining additional concepts of interest based on more sophisticated signal models that enable altering and controlling more nuanced signal properties. We hope that our analyses can contribute to a better understanding among domain experts of how deep neural networks solve industry-relevant prediction problems, which is crucial to increase their trustworthiness and applicability for real-world deployment.
|
2310.05343 | Investigating Continuous Learning in Spiking Neural Networks | In this paper, the use of third-generation machine learning, also known as
spiking neural network architecture, for continuous learning was investigated
and compared to conventional models. The experimentation was divided into three
separate phases. The first phase focused on training the conventional models
via transfer learning. The second phase trains a Nengo model from their
library. Lastly, each conventional model is converted into a spiking neural
network and trained. Initial results from phase 1 are in line with known
knowledge about continuous learning within current machine learning literature.
All models were able to correctly identify the current classes, but they would
immediately see a sharp performance drop in previous classes due to
catastrophic forgetting. However, the SNN models were able to retain some
information about previous classes. Although many of the previous classes were
still identified as the current trained classes, the output probabilities
showed a higher than normal value to the actual class. This indicates that the
SNN models do have potential to overcome catastrophic forgetting but much work
is still needed. | C. Tanner Fredieu | 2023-10-09T02:08:18Z | http://arxiv.org/abs/2310.05343v1 | # Investigating Continuous Learning in Spiking Neural Networks
###### Abstract
In this paper, the use of third-generation machine learning, also known as spiking neural network architecture, for continuous learning was investigated and compared to conventional models. The experimentation was divided into three separate phases. The first phase focused on training the conventional models via transfer learning. The second phase trained a Nengo model from their library. Lastly, each conventional model was converted into a spiking neural network and trained. Initial results from phase 1 are in line with known knowledge about continuous learning within the current machine learning literature. All models were able to correctly identify the current classes, but they would immediately see a sharp performance drop in previous classes due to catastrophic forgetting. However, the SNN models were able to retain some information about previous classes. Although many of the previous classes were still identified as the currently trained classes, the output probabilities showed a higher than normal value for the actual class. This indicates that the SNN models do have potential to overcome catastrophic forgetting, but much work is still needed.
spiking neural networks, transfer learning, pre-trained models, continuous learning, neuromorphic computing, catastrophic forgetting
## I Introduction
### _Continuous Learning in Artificial and Biological Systems_
Continuous learning systems are predicated on the notion that natural learning occurs over time and when new information enters an environment. In biological organisms, this is clearly indicated by how they naturally adapt to their environments even when new challenges arise, through a form of supervised, unsupervised, or reinforcement learning.
The integration of continuous learning into artificial intelligence is not a new endeavor, but it is one that has eluded researchers since the beginning. The biggest challenge is overcoming catastrophic forgetting [3]. Catastrophic forgetting occurs when prior information about previous classes is lost when learning information about new classes. Many different methods have been tried in attempts to solve the problem. Some of these include new architecture designs [17], new learning strategies such as integrating data from previous classes into the new training data [13] as a form of refresher, and possible replacement of backpropagation with more bio-inspired methods [9]. One of the most promising routes is to use artificial neural networks and hardware that more closely mimic the brain, such as spiking neural networks [1] and neuromorphic hardware [4].
### _Spiking Neural Networks_
Spiking neural networks (SNNs) are a type of neural network that seeks to mimic the transfer of information inside biological neurons [1]. This is accomplished by using a temporal dimension to identify the timing or spiking of each neuron. The sequence of firing neurons is known as a spike train [1]. Because of this, SNNs are considered to be the basis for third-generation artificial neural networks.
One of the largest advantages of SNN models is their low power consumption and efficiency [7] compared to conventional models, especially when paired with neuromorphic hardware, which will be discussed in the next section.
Currently, training SNNs proves to be a challenge due to a number of hurdles. First, training SNNs conventionally is difficult due to the non-differentiable nature of the neurons [9]. There are a number of ways to overcome this, such as converting conventional models to SNNs [12], which is what was done in this experimentation. However, this comes at the cost of increased inference latency and information loss during conversion. Another hurdle is the time and complexity needed to train these networks. As stated previously, neuromorphic hardware will greatly increase the use of SNNs, but it is more a necessity than a complement. Training on current hardware proves difficult due to the complexity of the spikes from the neurons [9].
### _Neuromorphic Computing_
While SNNs provide the software portion of the third generation, a hardware solution must also accompany them, much like graphics processing units (GPUs) did for deep learning in the 2010s. This type of new, specialized hardware is known as neuromorphic computing. Much like SNNs, neuromorphic computing strives to mimic the physical structure and benefits of the brain [4]. This includes faster processing times and lower power consumption [4]. These new hardware platforms also promise to allow for greater complexity, of which SNNs can make the greatest use. This would solve the previously mentioned problem of training SNNs, as training on current hardware is time consuming because the nature of SNNs differs from that of conventional models [12]. While much research is being devoted to neuromorphic computing both industrially and academically [4, 8, 16], commercial use of this hardware is not yet achievable for a number of reasons.
## II Experimental Setup
The experimentation was divided into three different phases. In phase 1, conventional models were trained and tested using incremental learning strategies to observe their limitations and benefits. All models of this type were pre-trained models that used transfer learning from ImageNet weights. In total, three different pre-trained models were used: ResNet50, ResNet101, and VGG19. Phase 2 used the original model developed by Nengo for their tutorial on converting conventional models to SNN models. Lastly, phase 3 used the NengoDL library to convert the conventional models from phase 1 to SNNs to investigate the impact of transfer learning.
Training and testing were kept as minimalistic as possible due to time and resource constraints. All experiments were performed on Google Colaboratory using TensorFlow 2.11, Python 3, and the Nengo API, which will be discussed further in the next section. Training was conducted in increments of two classes. As each dataset contains 10 classes, each model was trained in 5 increments per dataset, for a total of 10 separate training phases, to test its ability to remember previous information. This means that at each increment no training data from previous classes was present within the new training data: 5 training phases were used for the MNIST training data, and 5 training phases were used for the Fashion MNIST training data.
### _Nengo API and Software_
To implement SNNs as well as convert conventional models to SNNs, a software package is needed to perform the tasks efficiently and quickly rather than from scratch each time. This is similar to how machine learning researchers now make use of frameworks such as Pytorch and TensorFlow to rapidly prototype models. There are many openly available packages online such as PySNN, snnTorch, and Nengo. For this experimentation, Nengo was chosen for converting and training the SNN models.
Nengo created an API that makes it easy to build conventional models inside the TensorFlow framework and then convert them efficiently to SNN equivalents. This ease of use, along with the available documentation, is the primary reason it was chosen for the experimentation. One of the major goals of Nengo is to create a framework that is ubiquitous across all neuromorphic computing hardware.
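For orientation, a conversion with the NengoDL Converter typically looks like the following hedged sketch; `model` and `x_test` are an assumed trained Keras classifier and test array, the parameter names follow the nengo_dl interface as we understand it, and the firing-rate scaling and synaptic smoothing values anticipate the ranges reported below.

```python
import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

# swap rate neurons for spiking ones and smooth the output with a synapse
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=20,   # firing-rate range found optimal here: 10-20
    synapse=0.001,           # synaptic smoothing used for regularization
)

n_steps = 30  # spiking networks need each image presented over time
with nengo_dl.Simulator(converter.net, minibatch_size=200) as sim:
    # tile flattened test images over the time axis: (N, n_steps, features)
    tiled = np.tile(x_test.reshape((len(x_test), 1, -1)), (1, n_steps, 1))
    out = sim.predict({converter.inputs[model.input]: tiled})
    # classify from the network output at the last timestep
    preds = np.argmax(out[converter.outputs[model.output]][:, -1], axis=-1)
```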
### _Datasets_
As stated earlier, the common benchmark datasets of MNIST and Fashion MNIST were used to provide consistent comparisons across all models. Each dataset contains 10 classes with 60,000 training and 10,000 test images of size 28x28. A sample of the digits contained in MNIST is illustrated in Figure 1, and a sample of the clothing contained in Fashion MNIST is illustrated in Figure 2.
### _Regularization and Optimization_
Regularization and optimization of the models were kept constant because many of these models already have established optimal hyperparameters. As such, the conventional models in phase 1 used the Adam optimizer [2] with a learning rate of 0.001 and L2 regularization.
Regularization and optimization for the SNNs are different, as the goal is to optimize the firing rate of each neuron along with smoothing the synapses for regularization. Choosing the firing rate must also take into account the trade-off between accuracy and inference time: as the number of firings per neuron increases, so too does the inference latency. After testing various values, the optimal firing rate for the neurons was found to be between 10 and 20, while the optimal synapse smoothing for regularization was found to be 0.001.
### _Training and Testing_
Each model was trained on the new classes for 10 epochs with a batch size of 200 at each increment. The total number of training images used per increment varied due to the difference in the number of images per class. However, after all 5 increments, the model had seen all 60,000 training images from the target dataset. The same holds for the test images: by the final increment, the models were tested on all 10,000 images.
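A minimal Keras-style sketch of this incremental protocol is given below; `model` is assumed to be a compiled classifier with sparse integer labels, and `x_train`, `y_train`, `x_test`, `y_test` are the usual dataset arrays.

```python
import numpy as np

class_pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

for phase, pair in enumerate(class_pairs):
    # train only on the two new classes for this increment
    train_mask = np.isin(y_train, pair)
    model.fit(x_train[train_mask], y_train[train_mask],
              epochs=10, batch_size=200)
    # evaluate on every class seen so far to expose forgetting
    seen = [c for p in class_pairs[: phase + 1] for c in p]
    test_mask = np.isin(y_test, seen)
    loss, acc = model.evaluate(x_test[test_mask], y_test[test_mask])
    print(f"phase {phase}: accuracy on classes {seen} = {acc:.4f}")
```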
Fig. 1: MNIST Samples
Fig. 2: Fashion MNIST Samples
## III Results
### _Pre-trained Models_
Table 1 and Table 2 provide the performance data of each pre-trained model during testing on the MNIST and Fashion MNIST datasets. In the first testing increment, covering the first two classes of each dataset, all models show performance consistent with what is known from the machine learning literature. Catastrophic forgetting becomes apparent in the subsequent testing increments when new classes are introduced. As new classes are added, the accuracy of the models on previous classes decreases dramatically while the current classes retain high accuracy. Accuracy above 99% is retained for the newly added classes, while the overall accuracy across all classes seen so far drops to around 30% by the third training increment and to below 20% by the time the models have been trained on all 10 classes.
### _SNNs_
In this section, the results from testing the different SNN models are presented and discussed. As in the previous section, the performance of each model during testing is illustrated in Table 3 and Table 4. The performance of all the models follows a trend similar to that observed with the conventional models. With each new set of classes added, the accuracy on those classes is high while the accuracy on previous classes decreases. However, unlike the conventional models, information about previous classes seems to be somewhat retained.
Figure 3, Figure 6, and Figure 7 confirm the models' ability to learn the current classes effectively from the MNIST and Fashion MNIST datasets. The first frame illustrates the image that is being classified, while the second frame shows which neurons are spiking in the first convolutional layer after the input layer. Finally, the output prediction is shown in the final frame.
Figure 8 shows a different classification of the image from Figure 6 after the model has gone through incremental training on new classes. This clearly illustrates that the model has experienced a form of catastrophic forgetting, as it attempts to classify the image as one of the new classes. This is also evident in the changes in neuron spiking in the second frame when compared to the same frame from Figure 6. However, it is also apparent that the correct class still retains a significant output probability even though catastrophic forgetting has occurred.
This same result is shown again in Figure 4 and Figure 5, though Figure 4 shows just how little information can be retained at times. This would explain why the SNN models are able to slightly outperform the conventional models, but only to a limited degree. The primary drawback of the SNN models, as discussed previously [1], also becomes evident from these observations: with each increase in firing rate used to regularize the model, as well as when new classes are added, the inference latency grows, as illustrated in the second frame of the figures.
Fig. 4: Accuracy of sample from previous MNIST class
Fig. 5: Information retention of previous MNIST class
Fig. 3: Accuracy of sample from current MNIST class
## IV Future Directions
For future endeavors, there are a few different routes that can be investigated. The first is trying to solve the problem occurring with converted models. Information about previous classes is shown to be somewhat retained, although this is not enough to produce an accurate classification. A possible solution may lie within biological processes: an artificial analogue of dreaming might be able to strengthen old information. Dreaming is known to be necessary in biological organisms for the successful organization of new and old information [5]. It has also been shown that computer architectures [5] and reinforcement learning agents that mimic dreaming perform better in old and even in similar new environments [11].
The second route is far less clear. Some researchers [9, 14, 15] have suggested a whole new approach to machine learning that is closer to biological processes, such as how the brain uses plasticity. This would mean that the current successes of conventional models and backpropagation, such as the methods used in this experimentation, may not carry over. Both directions should be investigated thoroughly.
The third and most immediate route is to experiment with other successful architectures such as vision transformers [10] and other learning strategies such as unsupervised and reinforcement learning.
## V Conclusion
In conclusion, the experimentation showed that there is potential in the use of spiking neural networks for the realization of continuous learning. However, there is much work to be accomplished in this area. It is shown that after incremental training the SNNs are able to retain some level of information about the previous classes when compared to conventional models, but they are still unable to retain enough information to correctly distinguish them from the current classes used for training. The next step is to investigate methods that may help improve information retention, such as a way to mimic the organization of knowledge during the dreaming stage in biological organisms, or perhaps a new learning strategy overall.
|
2310.09551 | Visualizing convolutional neural network for classifying gravitational
waves from core-collapse supernovae | In this study, we employ a convolutional neural network to classify
gravitational waves originating from core-collapse supernovae. Training is
conducted using spectrograms derived from three-dimensional numerical
simulations of waveforms, which are injected onto real noise data from the
third observing run of both Advanced LIGO and Advanced Virgo. To gain insights
into the decision-making process of the model, we apply class activation
mapping techniques to visualize the regions in the input image that are
significant for the model's prediction. The class activation maps reveal that
the model's predictions predominantly rely on specific features within the
input spectrograms, namely, the $g$-mode and low-frequency modes. The
visualization of convolutional neural network models provides interpretability
to enhance their reliability and offers guidance for improving detection
efficiency. | Seiya Sasaoka, Naoki Koyama, Diego Dominguez, Yusuke Sakai, Kentaro Somiya, Yuto Omae, Hirotaka Takahashi | 2023-10-14T10:10:55Z | http://arxiv.org/abs/2310.09551v2 | Visualizing Convolutional Neural Network for Classifying Gravitational Waves from Core-Collapse Supernovae
###### Abstract
In this study, we employ a convolutional neural network to classify gravitational waves originating from core-collapse supernovae. Training was conducted using spectrograms derived from three-dimensional numerical simulations of waveforms, which were injected onto real noise data from the third observing run of both Advanced LIGO and Advanced Virgo. To gain insights into the model's decision-making process, we apply class activation mapping techniques to visualize the regions in the input image that are significant for the model's prediction. Our model distinguished between 9 different waveforms and noise with an accuracy of 98.4% at 1 kpc. Visualization through class activation mapping revealed that the model's predictions predominantly rely on specific features within the input spectrograms, namely the g-mode and low-frequency modes.
## I Introduction
The first detection of gravitational waves (GWs) from a binary black hole merger by the Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) [1] in 2015 marked the beginning of GW astronomy [2]. Throughout three observing runs, O1, O2, and O3, Advanced LIGO and Advanced Virgo [3] reported 90 GW events [4; 5; 6; 7]. As of May 2023, the international GW network, now including KAGRA [8], has begun its fourth observing run (O4) with improved sensitivity.
All the GW events detected so far are exclusively from compact binary coalescences. However, short-duration GW bursts arising from core-collapse supernovae (CCSNe) are expected to be detected by the current and the next-generation GW detectors, such as the Einstein Telescope [9] and the Cosmic Explorer [10]. CCSNe, resulting from massive star explosions leading to neutron stars or stellar-mass black holes, stand among the most energetic astrophysical events in the universe, emitting electromagnetic waves, neutrinos, and GWs. While electromagnetic waves from CCSNe are frequently observed, neutrinos have only been detected from SN1987A [11; 12]. GWs are expected to carry information about the inner core's dynamics, providing vital insights into the explosion mechanism that remains elusive. The primary conundrum lies in discerning how a stalled shock wave is revived to cause a star to explode. Currently, there are two prevailing theories [13]: the neutrino-driven mechanism [14], in which shock waves are revived by neutrinos heating the matter beneath the shock, and the magnetorotational mechanism [15], in which rapid rotation of the progenitor causes explosions driven by strong magnetic fields. The typical GW detection range for neutrino-driven signals is expected to be around 10 kpc, while the detection range for magnetorotational signals is expected to be above 100 kpc [16].
Due to the stochastic nature of GW signals from CCSNe, the conventional matched filtering technique, which relies on specific waveform templates, is unsuitable. Alternative detection methods based on time-frequency representation have been devised in response. In particular, the coherent WaveBurst (cWB) pipeline [17; 18] detects and reconstructs burst GW signals by searching for excess power in a time-frequency map, with minimal reliance on a specific source model.
Predicting GW signals from CCSNe remains a formidable challenge. However, recent advancements in theoretical research and multi-dimensional numerical simulations have revealed certain signal properties. For neutrino-driven CCSNe, the dominant emissions arise from the g-mode oscillation of the proto-neutron star (PNS) surface. These frequencies progressively increase over time, ranging from a few hundred Hz to a few kHz. Additionally, at low frequencies (\(\lesssim\) 200 Hz), GW emissions associated with hydrodynamics instabilities including neutrino-driven convection and standing accretion shock instability (SASI) [19] are observed in some simulations. Insights obtained from these simulations are pivotal in enhancing methods for CCSNe detection and
analysis.
In recent years, machine learning techniques, especially deep learning, have gained traction in a variety of scientific fields due to their capacity for recognizing intricate patterns and extracting meaningful features from large datasets. This ability has been especially noted in areas such as computer vision and natural language processing. Its application in GW research has followed, with numerous implementations and explorations, as highlighted in a comprehensive review by Ref. [20] and the foundational efforts by George and Huerta [21; 22]. In the field of CCSNe analysis, Astone _et al._[23] leveraged convolutional neural networks (CNNs) to detect CCSNe within Gaussian noise, using g-mode phenomenological waveforms for training, outperforming the cWB pipeline. Subsequent studies by Iess _et al._[24; 25] involved the training of both one- and two-dimensional CNNs and long short-term memory (LSTM) networks [26] to identify seven distinct CCSN waveforms embedded in real noise and glitches, with their models achieving 98% classification accuracy at 1 kpc with a three-detector network. Additionally, Chan _et al._[27] employed one-dimensional CNNs to investigate both magnetorotational and neutrino-driven signals in Gaussian noise, recording a true alarm probability of 80% for magnetorotational signals from sources at 60 kpc and 55% for neutrino-driven signals from sources at 10 kpc with a fixed false alarm probability of 10%. In another study, Edwards [28] used two-dimensional CNNs to classify 18 different equations of state (EOS) from pure magnetorotational CCSN signals, attaining an accuracy of 72%. Lopez _et al._[29] refined phenomenological waveforms originally used by Astone _et al._[23], achieving a 60% true alarm probability for signals located at 15 kpc with a 5% false alarm rate.
Although deep learning exhibits strong performance on a wide range of tasks, its intricate models, characterized by a large number of parameters, pose challenges in elucidating their decision-making processes. To address this, the field of explainable artificial intelligence (XAI) [30] has surged, aiming to make model decisions transparent and interpretable. Within the context of CNNs, efforts have been made to develop techniques that attempt to understand the decision-making process by reverse-mapping the output of the network into the input space to identify the specific input components that were discriminative in producing the output. Class activation mapping (CAM) [31] is one such method, which computes a weighted sum of the outputs of the last convolutional layer using the outputs of the global average pooling layer after the last convolutional layer as weights. It helps identify the regions in the input image that were important for a prediction, but the model needs to be modified to include a global average pooling layer, which may result in lower accuracy. Gradient-weighted class activation mapping (Grad-CAM) [32] was introduced as a solution to this limitation of CAM, offering the advantage of not requiring any modifications to the network architecture by using gradient information from the prediction as the weighting parameters. Subsequently, Grad-CAM++ [33], a generalization of Grad-CAM, and Score-CAM [34], a gradient-free CAM method, were developed to generate more accurate saliency maps than Grad-CAM. These techniques to analyze deep learning models are commonly used in fields such as electrocardiogram signal analysis [35] and X-ray diagnosis [36]; however, for GW analysis, they have only been used in Ref. [37] to the best of our knowledge.
In this study, we first take an approach similar to Ref. [25] and train a two-dimensional CNN model to classify CCSNe signals using short-time Fourier transformed spectrograms as input for simplicity. We use nine types of waveforms from recent three-dimensional numerical simulations and O3 real noise to train and validate our model. In the test, signals from sources between 1 and 10 kpc are considered, and the performance of the model for sources at each distance is discussed. To interpret the model, we use three CAM methods to generate saliency maps and evaluate them using two metrics: average drop and average increase. The best CAM method is then applied to correctly classified and also misclassified samples to visualize the regions in the input spectrogram that influence the predictions of our model.
The remainder of this paper is organized as follows. Section II describes our datasets, the CNN model, and the CAM techniques. In Sec. III, we discuss the classification performance of our model, and apply multiple visualization techniques to interpret the model. We summarize and conclude the paper in Sec. IV.
## II Method
Our CNN model is trained to classify strains at three detectors LIGO Hanford (H1), LIGO Livingston (L1), and Virgo (V1) into 10 classes: noise and 9 different CCSN waveforms. In this section, we first provide an overview of the data used in this study, including the brief summary of the CCSN simulation data and the pre-processing strategy to generate our training, validation, and test sets. Subsequently, our CNN architecture and the theory of visualization technique of the model are explained.
### Dataset
#### ii.1.1 CCSN Waveforms
Modelling the stellar core collapse, bounce and the subsequent post-bounce evolution is very complicated and computationally expensive. However, remarkable advancements in three-dimensional numerical simulations of neutrino-driven explosions have been achieved by several groups in recent years. Specific details of the waveform depend on various properties of the progenitor, such
as mass, angular velocity, and EOS of the dense matter. Both the general relativity approximation and the handling of neutrino transport critically influence simulations. From the available simulation data under a variety of conditions, we selected nine types of waveforms from four recent three-dimensional numerical simulations [38; 39; 40; 41]. All of them allow us to compute the GW amplitude in any observer direction from the quadrupole moment.
Powell and Müller 2019 [38] performed simulations using the general relativistic neutrino hydrodynamics code CoCoNuT-FMT [42]. We used two waveforms from the models he3.5 and s18. The progenitor of he3.5 is an ultra-stripped star evolved from a helium star with an initial mass of 3.5 \(M_{\odot}\). The simulation is stopped at 0.7 s after core bounce. The GW is dominated by excitation of g-modes in the PNS with a peak frequency around 900 Hz. Model s18 is a single star with a zero-age main-sequence (ZAMS) mass of 18 \(M_{\odot}\). The simulation was stopped 0.89 s after core bounce. The GW emission is similar to model he3.5, with g-mode oscillations of the PNS with a peak frequency around 900 Hz.
Radice _et al._ 2019 [39] studied eight models using the Eulerian radiation-hydrodynamics code FORNAX [43]. We used waveforms from the models s13 and s25 corresponding to progenitors of 13 and 25 \(M_{\odot}\) ZAMS, respectively. The simulation is ended at 0.77 s in s13 and 0.62 s in s25 after bounce. Both waveforms are characterized by f- and g- modes with a peak frequency around 1400 Hz in s13 and 1100 Hz in s25. In addition, s25 waveform has a clear SASI mode around 100 Hz.
From the simulation by Powell and Müller 2020 [40], we used three models: m39, y20, and s18np. Model m39 is a rapidly rotating 39 \(M_{\odot}\) Wolf-Rayet star with an initial surface rotation velocity of 600 \(\mathrm{kms}^{-1}\). It produces a neutrino-driven explosion without magnetic fields. The other two are non-rotating models of a 20 \(M_{\odot}\) Wolf-Rayet star and an 18 \(M_{\odot}\) ZAMS star. The simulation is ended at 0.98 s, 1.2 s, and 0.56 s after core bounce in models m39, y20, and s18np, respectively. All three models show GW emission associated with prompt convection shortly after bounce as well as f-mode oscillations of the PNS. In model s18np, the absence of strong perturbations from convective oxygen burning, in contrast to s18, prevents the shock from being revived and leads to the development of strong SASI activity, with a frequency reaching \(\sim\)400 Hz by the end of the simulation.
Powell _et al._ 2021 [41] performed simulations using three different EOS, LS220 [44], SFHx and SFHo [45]. The progenitor models are 85 and 100 \(M_{\odot}\) Population III ZAMS stars. We used z85_sfhx and z100_sfho models. The simulation is ended at 0.59 s in z85_sfhx and 0.62 s in z100_sfho model. Both waveforms show typical g-mode emission with a peak frequency of \(\sim\)700 Hz. In z85_sfhx model, the frequency of the SASI emission increases up to the point of shock revival reaching \(\sim\)200 Hz and decreases afterwards. In z100_sfho model, which does not explode, the frequency of the SASI emission continues to increase by the end of the simulation reaching \(\sim\)400 Hz.
Figure 1 shows the amplitude spectral density of the plus mode of each waveform at 1 kpc from the polar direction. The amplitude of the m39 waveform is the largest among these waveforms. Waveforms with peaks around 100 Hz, such as s25 and s13, contain SASI-induced GW modes.
#### ii.1.2 Data Processing
From the simulation data presented in the previous section, we calculate the amplitude of the GW for generating the datasets. The directions of radiation \((\theta,\phi)\) are uniformly sampled and the plus and cross polarizations of each GW are calculated using the formulae
\[h_{+}=\frac{1}{D}\frac{G}{c^{4}}(\ddot{Q}_{\theta\theta}-\ddot{Q}_{\phi\phi}), \tag{1}\] \[h_{\times}=\frac{2}{D}\frac{G}{c^{4}}\ddot{Q}_{\theta\phi}, \tag{2}\]
where \(Q\) is the traceless quadrupole moment, and \(D\) is the distance between a source and Earth. As the sampling of the simulations is usually not uniform in time, we resampled the data uniformly with a sampling rate of 4096 Hz. A high-pass filter with a cutoff frequency of 11 Hz and a Tukey window with \(\alpha=0.1\) are applied to the resampled signals. Each signal is then truncated or padded with zeros to make the length one second. In order to make the model robust, we randomly time-shifted the signals so that the time of core bounce is between 0 and 0.15 s. For the training and validation sets, the signals are scaled using the optimal matched-filter signal-to-noise ratio (SNR), as the amplitudes of the simulated signals are quite different.

Figure 1: Amplitude spectral density of the plus mode of each waveform at 1 kpc. The observer is in the polar direction.

SNR is defined as
\[\rho=\sqrt{4\int_{f_{\rm{min}}}^{f_{\rm{max}}}\frac{|\tilde{h}(f)|^{2}}{S_{n}(f)} \mathrm{d}f}, \tag{3}\]
where \(\tilde{h}(f)\) is the Fourier transform of the signal and \(S_{n}(f)\) is the one-side power spectral density of the noise. The network SNR of the detectors H1, L1, and V1, given by
\[\rho_{\rm{net}}=\sqrt{\rho_{\rm{H1}}^{2}+\rho_{\rm{L1}}^{2}+\rho_{\rm{V1}}^{2}}, \tag{4}\]
is used to scale the signals. We generated samples with network SNRs from 20 to 50 for training and validation sets. For the test set, the signals are scaled to have the distances between 1 and 10 kpc. Sky location is also randomly selected and GW amplitude \(h(t)\) is computed, taking into account the antenna pattern functions \(F_{+}\) and \(F_{\times}\), and the delay in arrival time of each detector with the following equation:
\[h(t)=F_{+}(\alpha,\delta,\psi,t)h_{+}(t+\Delta t)\] \[\qquad\qquad+F_{\times}(\alpha,\delta,\psi,t)h_{\times}(t+\Delta t), \tag{5}\]
where \(\alpha\) is the right ascension, \(\delta\) is the declination and \(\psi\) is the polarization angle. \(\Delta t\) is the delay in arrival time between the detector and the center of Earth. We used PyCBC software library [46] to carry out these computations.
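These steps map directly onto PyCBC's detector utilities. The sketch below is one plausible realization rather than the authors' exact code: the waveform arrays, sky angles, and GPS time are placeholders, and the circular shift is a crude stand-in for a proper sub-sample time shift.

```python
import numpy as np
from pycbc.detector import Detector
from pycbc.types import TimeSeries

def project_to_detector(hp, hc, ifo, ra, dec, pol, t_gps, delta_t=1 / 4096):
    """Project plus/cross polarizations onto one detector, as in Eq. (5)."""
    det = Detector(ifo)                                    # e.g., "H1", "L1", "V1"
    fp, fc = det.antenna_pattern(ra, dec, pol, t_gps)      # F_+ and F_x
    dt = det.time_delay_from_earth_center(ra, dec, t_gps)  # arrival-time delay
    shift = int(round(dt / delta_t))
    h = fp * np.roll(hp, shift) + fc * np.roll(hc, shift)  # integer-sample delay only
    return TimeSeries(h, delta_t=delta_t, epoch=t_gps)

# The optimal matched-filter SNR of Eq. (3) can then be obtained with, e.g.,
# from pycbc.filter import sigma
# rho = sigma(strain, psd=psd, low_frequency_cutoff=11.0)
```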
The noise used in this study is real O3 data from Advanced LIGO and Advanced Virgo, obtained from the Gravitational Wave Open Science Center (GWOSC) [47]. Data from GPS time 1238265720 to 1238252308 was used for the training set, 1238265720 to 1238354855 was used for the validation set, and 1238404064 to 1238457121 was used for the test set. Data around the event times reported in the second Gravitational Wave Transient Catalog (GWTC-2) [6] are excluded. After a signal is injected in noise, each sample is whitened with the power spectral density computed using Welch's method [48] and then short-time Fourier transformed with a window size of 0.0625 seconds to produce a spectrogram. The spectrogram is normalized to [0, 1] before input to the network.
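A minimal sketch of this pre-processing chain with SciPy is given below; the PSD-interpolation-based whitening and the global min-max normalization are assumptions about details the text leaves open, and `noise` stands for a longer stretch of detector data used for the Welch estimate.

```python
import numpy as np
from scipy.signal import stft, welch

FS = 4096                   # sampling rate [Hz]
NPERSEG = int(0.0625 * FS)  # 0.0625 s STFT window -> 256 samples

def whiten(x, noise, fs=FS):
    """Frequency-domain whitening using a Welch PSD estimate [48]."""
    freqs, psd = welch(noise, fs=fs, nperseg=fs)
    xf = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    xf /= np.sqrt(np.interp(f, freqs, psd))
    return np.fft.irfft(xf, n=len(x))

def make_spectrogram(x, fs=FS):
    """Short-time Fourier transform magnitude, normalized to [0, 1]."""
    _, _, z = stft(x, fs=fs, nperseg=NPERSEG)
    s = np.abs(z)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```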
We generated 60,000 samples as each of training and validation sets, and 100,000 samples as test set. The test set has 1,000 samples for each class and each distance. Sample spectrograms in the training set are shown in Fig. 2.
### CNN Model
Our CNN model consists of two convolutional layers of kernel size 3, each followed by a max-pooling layer of size 2 and a rectified linear unit (ReLU) layer. The outputs of these layers are fed into two fully connected layers, and finally the softmax layer outputs a size 10 vector whose elements represent a probability of each class. The model has 427,378 trainable parameters in total. This model is shallower than the one used in Ref. [25]. However, its classification performance is comparable to the previous study, prompting us to adopt this model. Reducing the number of layers also helps us generate higher-resolution CAM maps.
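A minimal PyTorch sketch consistent with this description follows; the convolutional channel widths and hidden-layer size are assumptions (only the 10 output classes, the kernel and pooling sizes, and the 2112-dimensional flattened feature vector of Sec. III.B are fixed by the text), so the exact parameter count will not reproduce 427,378.

```python
import torch.nn as nn

class CCSNClassifier(nn.Module):
    """Two conv blocks (kernel 3, max-pool 2, ReLU) followed by two dense layers."""
    def __init__(self, in_channels=3, n_classes=10, hidden=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(hidden),         # 2112 -> hidden for the paper's input size
            nn.ReLU(),
            nn.Linear(hidden, n_classes),  # softmax applied by the loss / at inference
        )

    def forward(self, x):  # x: (batch, 3, freq, time) spectrograms for H1, L1, V1
        return self.classifier(self.features(x))
```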
The model is trained using categorical cross entropy as the loss function and the Adam optimizer [49] with a learning rate of \(5\times 10^{-4}\) to update the weights. In the training, we adopt curriculum learning [50] as a strategy to enhance the model and accelerate the training, starting from high-SNR samples and gradually adding lower-SNR samples. We trained the model on a single GPU (NVIDIA GeForce RTX3090) for 120 epochs with a mini-batch size of 128.

Figure 2: Sample whitened spectrograms of each class at the H1 detector in the training set. Each signal sample is observed in the polar direction and scaled to have an SNR of 40. The bounce time is fixed at 0.1 s.
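The curriculum schedule above is not specified in further detail, so the sketch below assumes a simple linear lowering of the SNR admission threshold over the first half of training; `dataset` and `snrs` are placeholders for the training set and its per-sample network SNRs (noise-only samples can be assigned an infinite SNR so they are always included).

```python
import torch
import torch.nn as nn

def curriculum_train(model, dataset, snrs, device, epochs=120, batch_size=128,
                     snr_hi=50.0, snr_lo=20.0):
    """Cross-entropy training with high-SNR samples first (curriculum learning [50])."""
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        # admission threshold decays linearly to snr_lo over the first half of training
        frac = min(1.0, epoch / (epochs // 2))
        thresh = snr_hi - (snr_hi - snr_lo) * frac
        idx = [i for i, s in enumerate(snrs) if s >= thresh]
        loader = torch.utils.data.DataLoader(
            torch.utils.data.Subset(dataset, idx), batch_size=batch_size, shuffle=True)
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
```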
### Visualization
After training the model, we use CAM techniques to generate saliency maps. These maps show the regions in the input that influenced the model's prediction. In this study, we selected three CAM methods: Grad-CAM, Grad-CAM++, and Score-CAM, which are widely used today to interpret CNN models. All of these CAM techniques are applied to the convolutional layer prior to the final max-pooling layer in our model.
#### ii.3.1 Grad-CAM
Grad-CAM is a gradient-based visualization technique that highlights the important regions of an input image that the model is looking at while making a prediction. Suppose that for a given input, the prediction score for class \(c\) before the softmax layer of the trained model is \(y^{c}\), and the \(k\)-th output matrix of the last convolutional layer is \(A^{k}\). To obtain the Grad-CAM map of class \(c\), we first compute the gradients of the score \(y^{c}\) with respect to the \((i,j)\) component of the \(k\)-th feature map \(A^{k}\). We then take the global average of these gradients:
\[\alpha_{k}^{c}=\frac{1}{Z}\sum_{i,j}\frac{\partial y^{c}}{\partial A_{ij}^{k}}, \tag{6}\]
where \(Z\) is the number of pixels in \(A^{k}\). This weight \(\alpha_{k}^{c}\) represents the importance of the feature map \(k\) for the class \(c\).
The Grad-CAM map of the class \(c\) is computed as linear sum of \(A^{k}\) with \(\alpha_{k}^{c}\) as weights. The ReLU function is applied to extract only features that have a positive contribution to the prediction score. The resulting map of class \(c\) is expressed as
\[L_{\text{Grad-CAM}}^{c}=\text{ReLU}\Bigg{(}\sum_{k}\alpha_{k}^{c}A^{k}\Bigg{)}. \tag{7}\]
Since convolutional layers and pooling layers make the size of the feature map smaller than the input, Grad-CAM map is finally interpolated to make it the same size as the input.
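For concreteness, a hook-based PyTorch sketch of Eqs. (6)-(7) is shown below; `conv_layer` is the convolutional layer prior to the final max-pooling layer, as used in this study, and the model and input are placeholders.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    """Grad-CAM saliency map for a single input x of shape (1, C, H, W)."""
    store = {}
    fh = conv_layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    bh = conv_layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))
    score = model(x)[0, target_class]  # y^c, taken before the softmax
    model.zero_grad()
    score.backward()
    fh.remove(); bh.remove()
    alpha = store["g"].mean(dim=(2, 3), keepdim=True)  # Eq. (6): averaged gradients
    cam = F.relu((alpha * store["a"]).sum(dim=1))      # Eq. (7): weighted sum + ReLU
    # interpolate back to the input resolution
    return F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear",
                         align_corners=False)[0, 0]
```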
#### ii.3.2 Grad-CAM++
While Grad-CAM takes a global average of the gradient matrix when calculating the weight \(\alpha_{k}^{c}\) in Eq. (6), Chattopadhay _et al._[33] proposed a method to fully include the importance of each pixel in the gradient matrix by taking its weighted average for the weight:
\[\alpha_{k}^{c}=\sum_{i,j}\alpha_{ij}^{kc}\ \text{ReLU}\Bigg{(}\frac{\partial y^{c}} {\partial A_{ij}^{k}}\Bigg{)}. \tag{8}\]
The ReLU function is used to account for features that increase the activation of the output neuron rather than suppress the activation of the output neuron. The weights \(\alpha_{ij}^{kc}\) can be theoretically derived using higher-order derivatives:
\[\alpha_{ij}^{kc}=\frac{\frac{\partial^{2}y^{c}}{(\partial A_{ij}^{k})^{2}}}{2\frac{\partial^{2}y^{c}}{(\partial A_{ij}^{k})^{2}}+\sum_{a,b}A_{ab}^{k}\frac{\partial^{3}y^{c}}{(\partial A_{ij}^{k})^{3}}}. \tag{9}\]
This method is known as Grad-CAM++, since it can be considered as a generalization of Grad-CAM. The saliency map for Grad-CAM++ is expressed in the same way as for Grad-CAM, using weights in Eq. (8) and feature maps, as
\[L_{\text{Grad-CAM++}}^{c}=\text{ReLU}\Bigg{(}\sum_{k}\alpha_{k}^{c}A^{k} \Bigg{)}. \tag{10}\]
#### ii.3.3 Score-CAM
Wang _et al._[34] proposed a gradient-free CAM method called Score-CAM. It solves the problem of gradient-based CAM methods, namely that the gradient is unstable, easily disturbed by noise, and can vanish or explode in deep networks. To generate a Score-CAM map, feature maps are used to mask an input image. Let \(H^{k}\) be the \(k\)-th feature map, up-sampled to the same size as the input and normalized to [0, 1]. Given an input image \(X\), the weight for the \(k\)-th feature map is computed as the difference between the score of the masked image \(X\circ H^{k}\) and the score of the baseline image \(X_{\text{b}}\):
\[\alpha_{k}=f(X\circ H^{k})-f(X_{\text{b}}), \tag{11}\]
where \(f(\cdot)\) denotes the output of the CNN and \(\circ\) denotes the Hadamard product. A black image is used as the baseline image. The Score-CAM map of class \(c\) is then computed as a linear combination of the feature maps \(A^{k}\), weighted by the \(c\)-th component of each \(\alpha_{k}\):
\[L_{\text{Score-CAM}}^{c}=\text{ReLU}\Bigg{(}\sum_{k}\alpha_{k}^{c}A^{k}\Bigg{)}. \tag{12}\]
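A gradient-free sketch of Eqs. (11)-(12) in the same style follows; as above, the layer and input are placeholders, and the per-map loop is written for clarity rather than speed.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_cam(model, x, target_class, conv_layer):
    """Score-CAM saliency map for a single input x of shape (1, C, H, W)."""
    store = {}
    fh = conv_layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    model(x)
    fh.remove()
    A = store["a"][0]                                   # (K, h, w) feature maps
    H = F.interpolate(A.unsqueeze(0), size=x.shape[-2:],
                      mode="bilinear", align_corners=False)[0]
    lo = H.amin(dim=(1, 2), keepdim=True)               # normalize each map to [0, 1]
    hi = H.amax(dim=(1, 2), keepdim=True)
    H = (H - lo) / (hi - lo + 1e-8)
    base = model(torch.zeros_like(x))[0, target_class]  # black baseline image X_b
    w = torch.stack([model(x * Hk)[0, target_class] - base for Hk in H])  # Eq. (11)
    return F.relu((w.view(-1, 1, 1) * A).sum(dim=0))    # Eq. (12)
```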
## III Results and Discussion
### Classification Performance
Classification accuracy is defined as the proportion of correctly classified samples out of the total number of
samples. After training, the model achieved a classification accuracy of 97.8% on a validation set consisting of uniformly sampled signals with SNRs between 20 and 50. On the test set, our model shows an accuracy of 98.4% for signals with sources at 1 kpc, which is comparable to the results of the previous study [25], despite some differences in conditions: we used O3 noise instead of O2 noise and performed 10-class instead of 8-class classification. In Fig. 3, we plot the true positive rate (TPR) for each waveform in the test set against distance. The TPR, also known as the sensitivity of a class \(c\), is defined as the ratio of the number of samples correctly classified into class \(c\) to the number of samples of class \(c\) in the test set. For signals from sources at 1 kpc, each waveform has a TPR greater than 90%, and this decreases monotonically with the distance of the source, reaching an average TPR of 26.1% at 10 kpc. For the m39 waveform, because the amplitude of the strain is much larger than the others due to its rapid rotation and high explosion energy, the TPR for sources at 10 kpc is 99.2%.
The performance of a multi-class classifier is also expressed by a confusion matrix, which shows the number of samples classified into each class. Figure 4 plots the confusion matrices normalized for each class and the distribution of the network SNR for signals from sources at 1, 5, and 10 kpc. We can see from the confusion matrices that as the distance increases, the amplitude of the signal becomes smaller and the number of samples misclassified as noise increases. The accuracy for signals at 10 kpc is 33.2%, and our model cannot identify most of these signals, except for the m39 waveforms, whose SNR is much higher than others with a median value of 47.9.
### Dimensionality Reduction
Before implementing CAM techniques, we used the t-distributed Stochastic Neighbor Embedding (t-SNE) [51] algorithm to see if the convolutional layers in the model could extract the features in the input to classify samples. The t-SNE algorithm is a dimensionality reduction technique that minimizes the Kullback-Leibler divergence between two probability distributions: one representing pairwise similarities between data points in the original high-dimensional space and another representing pairwise similarities in a lower-dimensional space. In our CNN model, each sample is compressed into a vector with a length of 2112 before the dense layers. The t-SNE algorithm is used to map this vector into two-dimensional space to make it interpretable for humans. We visualized the dimensionally reduced feature maps of the test set, whose signals are coming from sources at 1 kpc, for which our model showed a good classification accuracy. The visualized data are shown in Fig. 5. We can clearly see that there are 10 clusters in the dataset and our model could extract meaningful features to classify these samples into 10 classes. The fact that some signal samples are also found in the noise cluster and that s13 samples are found in other clusters, especially in the noise cluster, is consistent with the results of the confusion matrix in Fig. 4(a).
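This step amounts to a standard scikit-learn call; in the sketch below, `features` (the N x 2112 activations before the dense layers) and `labels` are placeholder arrays assumed to have been extracted from the trained model.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# features: (N, 2112) numpy array; labels: (N,) integer class ids (0 = noise, 1-9 = waveforms)
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
for c in range(10):
    sel = labels == c
    plt.scatter(emb[sel, 0], emb[sel, 1], s=2, label=f"class {c}")
plt.legend(markerscale=4)
plt.show()
```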
### Saliency Maps
To quantitatively evaluate different CAM methods, we use two metrics, average drop and average increase [33], which focus on the change in a model's score caused by the explanation map. An explanation map for a target class \(c\) is generated as element-wise multiplication of a saliency map \(L^{c}\) with an original image \(X\):
\[E^{c}=L^{c}\circ X. \tag{13}\]
Average drop measures the percentage decrease in a model's score for a target class \(c\) when inputting only the explanation map, instead of the original image. It is expressed as
\[\text{Average drop}=100\cdot\frac{1}{N}\sum_{i}\frac{\max(0,y_{i}^{c}-o_{i}^{ c})}{y_{i}^{c}}, \tag{14}\]
where \(y_{i}^{c}\) is the score for class \(c\) on the \(i\)-th original image and \(o_{i}^{c}\) is the score on the explanation map. The lower this value, the more effective the visualization method, since the explanation map includes more of the relevant information for making a correct prediction.
Average increase measures the percentage of samples in the dataset for which the model's confidence increased when providing only the explanation map as input. It is expressed as
\[\text{Average increase}=100\cdot\frac{1}{N}\sum_{i}\Theta(o_{i}^{c}-y_{i}^{c}), \tag{15}\]
where \(\Theta\) is the Heaviside step function. Unlike the previous metric, the higher this value is, the more effective the visualization method will be, because there are more samples that score higher when given the explanation map than when given the original image.

Figure 3: True positive rate of each waveform in the test set against source distance.

Figure 4: Confusion matrices of the test set (left) and violin plots of network SNR of each waveform (right) from sources with distances of 1, 5, and 10 kpc.
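Both metrics reduce to a few lines of NumPy; in the sketch below, the two score arrays are placeholders for the target-class scores collected over the test set on the original images and on the corresponding explanation maps.

```python
import numpy as np

def cam_metrics(y_orig, y_expl):
    """Average drop (Eq. 14) and average increase (Eq. 15), in percent.

    y_orig, y_expl: (N,) scores for the target class on the original images X
    and on the explanation maps E^c = L^c o X, respectively.
    """
    avg_drop = 100.0 * np.mean(np.maximum(0.0, y_orig - y_expl) / y_orig)
    avg_increase = 100.0 * np.mean(y_expl > y_orig)  # Heaviside via strict inequality
    return avg_drop, avg_increase
```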
For the three visualization methods Grad-CAM, Grad-CAM++, and Score-CAM, the two metrics described above were computed using signals from sources at 1 kpc in the test set. The results are summarized in Tab. 1. Score-CAM showed the best results in both metrics, meaning that it is the best visualization technique for our model among the three CAM methods considered in this study. We also qualitatively compared these methods by visualizing some samples. One example is shown in Fig. 6. The input image is represented by a color image, with the red, green, and blue channels corresponding to the H1, L1, and V1 spectrograms, respectively. All the three saliency maps take large values around the SASI mode around 100 Hz. At high frequencies, the Grad-CAM and Grad-CAM++ maps only take slightly larger values around 1 kHz, whereas Score-CAM has g-mode-like arch shapes around 1 kHz. It suggests that the visualization by Score-CAM captures more of the input features that were discriminative for the prediction.
As discussed above, we determined that Score-CAM is the optimal method for generating saliency maps for our model. We produced Score-CAM saliency maps for inputs of each class, which can be seen in Fig. 7. In the input images, as in the previous figure, the red, green, and blue channels correspond to the H1, L1, and V1 data, respectively. This means, for example, that in a reddish image such as the m39 sample in Fig. 7, the SNR at the H1 detector is smaller than that at the L1 and V1 detectors. All of the plotted signal samples are scaled to have an SNR of 40 and were correctly classified by our model. We plotted several CAM maps for noise samples in addition to the one in this figure, but the regions the model uses to predict them as noise were random. In the he3.5 and s13 samples, we can see that the model focuses on the g-mode arch shape, especially its low- and high-frequency areas. In the s18 and y20 models, the model sees the entire g-mode. In the s18np model, the CAM map indicates that the model considers not only the g-mode but also the prompt convection and SASI. The s25 model has SASI activity, but its amplitude is not too large, and the CAM map shows that the model's prediction is based on the prompt convection and the high-frequency g-mode. In the m39, z85_sfhx, and z100_sfho models, the CAM maps take large values at high frequencies in the g-mode. In addition, in z85_sfhx and z100_sfho, the low-frequency SASI mode, whose frequency increases with time, is also visible in the CAM maps. To summarize these outcomes, we found that the model looks at the g-mode in all signal waveforms, and also at SASI and prompt convection in some signal waveforms, when classifying.
Additionally, we plotted saliency maps of the misclassified samples. Figure 8 shows the spectrogram of an s25 signal sample at each detector and the Score-CAM map; the model classified this sample as s18np. The SNR of this signal is 85, which is quite large, and the g-mode and the prompt convection are visible at the H1 and L1 detectors, but there is a glitch in the strain at the L1 detector. The Score-CAM map shows that the model focuses on the prompt convection and the glitch, which are used to determine that the signal is s18np. Another example is plotted in Fig. 9. This sample contains an s13 signal with an SNR of 48 and no glitches, but the model classified it as y20. We can see the SASI-induced GW mode around 100 Hz from 0.2 to 0.4 s, especially in the L1 spectrogram, but the Score-CAM map indicates that the model sees the g-mode and does not see the low-frequency mode.
From the misclassified samples and the Score-CAM maps, it was found that the performance of the model is sometimes affected by glitches, and that the model does not fully take advantage of the characteristics of the signals. The former could be resolved by generating training sets containing more glitches, and the latter by using a better time-frequency representation able to reflect the various features that the CCSN signals have.

\begin{table}
\begin{tabular}{l c c c} Method & Grad-CAM & Grad-CAM++ & Score-CAM \\ \hline Ave. Drop (\%) (_Lower is better_) & 30.70 & 17.58 & **9.61** \\ \hline Ave. Increase (\%) (_Higher is better_) & 1.30 & 1.40 & **1.96** \\ \end{tabular}
\end{table}
Table 1: Results for evaluation of the explanations generated by Grad-CAM, Grad-CAM++, and Score-CAM on the test set.

Figure 5: Features of the test samples at 1 kpc extracted by the CNN and mapped into two-dimensional space by the t-SNE algorithm.

Figure 6: Qualitative comparison of three CAM maps for the s25 sample at 1 kpc.

Figure 7: Input spectrograms and Score-CAM maps of correctly classified samples. The SNR of each signal sample is 40.
## IV Conclusions
In this study, we trained a two-dimensional CNN model to classify CCSN GW signals immersed in real noise of O3 observation data. Our model achieved a high accuracy of 98.4% for signals from sources with distances of 1 kpc, but the model struggled to correctly identify most of the signals from sources at 10 kpc.
To interpret our model, we used t-SNE algorithm and mapped the extracted features by the convolutional layers into a two-dimensional space. The dimension-reduced features show that the convolutional filters could extract meaningful features that are significant for classifying the signals.
To gain insights into the decision-making process of the model, we applied CAM techniques to visualize the regions in the inputs that were influential to the predictions. Three methods, Grad-CAM, Grad-CAM++, and Score-CAM, were considered, and we concluded that Score-CAM is the best for our model in terms of the average drop and average increase metrics. The Score-CAM maps of correctly classified signal samples revealed that the model's predictions were heavily affected by a part of the entire g-mode in the spectrogram of each signal. In some waveform models, such as s18np, s25, z85_sfhx, and z100_sfho, the CAM maps suggest that the prompt convection or the SASI-induced GW mode also affects the model's prediction.
In this analysis, a time-frequency map was created from the short-time Fourier transform, but its resolution is limited by the uncertainty relationship between time and frequency. In future studies, we would like to improve the accuracy of the CNN model by using methods such as the Hilbert-Huang transform [52], which can generate higher resolution time-frequency maps, and to confirm that the CNN can also utilize several more GW modes to classify CCSN signals.
###### Acknowledgements.
The authors would like to thank J. Powell for providing us gravitational wave simulation data. This research was supported in part by JSPS Grant-in-Aid for Scientific Research [No. 22H01228 (K. Somiya), and Nos. 19H01901, 23H01176 and 23H04520 (H. Takahashi)]. This research was also supported by the Joint Research Program of the Institute for Cosmic Ray Research, University of Tokyo and Tokyo City University Prioritized Studies. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
|
2304.04854 | iPINNs: Incremental learning for Physics-informed neural networks | Physics-informed neural networks (PINNs) have recently become a powerful tool
for solving partial differential equations (PDEs). However, finding a set of
neural network parameters that lead to fulfilling a PDE can be challenging and
non-unique due to the complexity of the loss landscape that needs to be
traversed. Although a variety of multi-task learning and transfer learning
approaches have been proposed to overcome these issues, there is no incremental
training procedure for PINNs that can effectively mitigate such training
challenges. We propose incremental PINNs (iPINNs) that can learn multiple tasks
(equations) sequentially without additional parameters for new tasks and
improve performance for every equation in the sequence. Our approach learns
multiple PDEs starting from the simplest one by creating its own subnetwork for
each PDE and allowing each subnetwork to overlap with previously learned
subnetworks. We demonstrate that previous subnetworks are a good initialization
for a new equation if PDEs share similarities. We also show that iPINNs achieve
lower prediction error than regular PINNs for two different scenarios: (1)
learning a family of equations (e.g., 1-D convection PDE); and (2) learning
PDEs resulting from a combination of processes (e.g., 1-D reaction-diffusion
PDE). The ability to learn all problems with a single network together with
learning more complex PDEs with better generalization than regular PINNs will
open new avenues in this field. | Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa | 2023-04-10T20:19:20Z | http://arxiv.org/abs/2304.04854v1 | # iPINNs: Incremental learning for
###### Abstract
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that lead to fulfilling a PDE can be challenging and non-unique due to the complexity of the loss landscape that needs to be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, there is no incremental training procedure for PINNs that can effectively mitigate such training challenges. We propose incremental PINNs (iPINNs) that can learn multiple tasks (equations) sequentially without additional parameters for new tasks and improve performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks. We demonstrate that previous subnetworks are a good initialization for a new equation if PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs for two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction-diffusion PDE). The ability to learn all problems with a single network together with learning more complex PDEs with better generalization than regular PINNs will open new avenues in this field.
Keywords: Physics-informed neural networks (PINNs) · Scientific machine learning (SciML) · Incremental learning · Sparsity
## 1 Introduction
Deep neural networks (DNNs) play a central role in scientific machine learning (SciML). Recent advances in neural networks find applications in real-life problems in physics [38, 6, 50, 28], medicine [41, 51, 49], finance [22, 66, 17, 58], and engineering [5, 55, 11, 26]. In particular, they are also applied to solve Ordinary Differential
Equations and Partial Differential Equations (ODEs/PDEs) [34, 42, 65, 46]. Consider the following PDE,
\[\mathcal{F}[u(\mathbf{x},t)]=f(\mathbf{x}),\quad\mathbf{x}\in\Omega,\ t\in[t_{0},T], \tag{1}\] \[\mathcal{B}[u(\mathbf{x},t)]=b(\mathbf{x}),\quad\mathbf{x}\in\partial\Omega, \tag{2}\] \[u(\mathbf{x},t_{0})=h(\mathbf{x}),\quad\mathbf{x}\in\Omega, \tag{3}\]
where \(\mathcal{F}\) is a differential operator, \(\mathcal{B}\) is a boundary condition operator, \(h(\mathbf{x})\) is an initial condition, and \(\Omega\) is a bounded domain.
The first neural network-based approaches incorporated a form of the equation into the loss function with initial and boundary conditions included as hard constraints [31, 32]. However, these works used relatively small neural networks with one or two hidden layers. On the contrary, PINNs [46] encode initial and boundary conditions as soft constraints into the loss function of a DNN. Subsequently, PINNs and their extensions found applications in fluid mechanics [40, 60, 7], inverse problems [13, 37, 61] and finance [46, 2]. Later, the generalized version of PINNs, called XPINNs [24], was proposed by decomposing the domain into multiple subdomains. However, this method uses as many networks as the number of subdomains, increasing the algorithm's complexity. Multi-head PINNs (MH-PINNs) [67] is a multi-task learning approach for PINNs that is employed to learn stochastic processes, synergistic learning of PDEs and uncertainty quantification. MH-PINNs have a shared part of the network and task-specific output heads for prediction. Therefore, it uses additional parameters for every head, increasing the model's size with respect to the number of tasks.
Despite the popularity of DNNs, and PINNs in particular, there are few _incremental_ learning algorithms available in the SciML literature. Yet incremental and continual learning algorithms [8, 9, 27] are capable of handling tasks sequentially, instead of all at once as in multi-task learning and other strategies, while still retaining the ability to solve all previously learned tasks. If tasks share similarities, new tasks have the potential of being learned better (i.e., faster or with lower testing error) with the help of previously learned ones. The goal of this work is to propose an incremental learning algorithm for PINNs such that similar symbiotic effects can be obtained.
#### 1.0.1 Background and main challenges.
PINNs formulate the PDE solution problem by including initial and boundary conditions into the loss function of a neural network as soft constraints. Let us denote the output of the network \(\mathcal{N}\) with learnable parameters \(\theta\) as \(\hat{u}(\theta,\mathbf{x},t)=\mathcal{N}(\theta;\mathbf{x},t)\). Then sampling the set of collocation points, i.e. a set of points in the domain, \(\mathcal{CP}=\{(x^{i},t^{i}):x^{i}\in\mathrm{int}\ \Omega,\ t^{i}\in(t_{0},T],\ i=1,2, \ldots N_{\mathcal{F}}\}\), the set of initial points \(\mathcal{IP}=\{(x^{j},t_{0}):x^{j}\in\partial\Omega,\ j=1,2,\ldots,N_{u_{0}}\}\) and the set of boundary points \(\mathcal{BP}=\{(x^{k},t^{k}):x^{k}\in\partial\Omega,\ t^{k}\in(t_{0},T],\ k=1,2, \ldots,N_{b}\}\) one can write the optimization problem and loss function arising from PINNs as follows:
\[\mathcal{L}(\theta)=\mathcal{L}_{\mathcal{F}}(\theta)+\mathcal{L}_{u_{0}}(\theta)+\mathcal{L}_{b}(\theta)\rightarrow\min_{\theta}, \tag{4}\] \[\mathcal{L}_{\mathcal{F}}(\theta)=\frac{1}{N_{\mathcal{F}}}\sum_{i=1}^{N_{\mathcal{F}}}\big{|}\big{|}\mathcal{F}[\hat{u}(\theta,x^{i},t^{i})]-f(x^{i})\big{|}\big{|}^{2},\quad(x^{i},t^{i})\in\mathcal{CP}, \tag{5}\] \[\mathcal{L}_{u_{0}}(\theta)=\frac{1}{N_{u_{0}}}\sum_{j=1}^{N_{u_{0}}}\big{|}\big{|}\hat{u}(\theta,x^{j},t_{0})-h(x^{j})\big{|}\big{|}^{2},\quad(x^{j},t_{0})\in\mathcal{IP}, \tag{6}\] \[\mathcal{L}_{b}(\theta)=\frac{1}{N_{b}}\sum_{k=1}^{N_{b}}\big{|}\big{|}\mathcal{B}[\hat{u}(\theta,x^{k},t^{k})]-b(x^{k})\big{|}\big{|}^{2},\quad(x^{k},t^{k})\in\mathcal{BP}. \tag{7}\]
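To make Eqs. (4)-(7) concrete, the sketch below instantiates the composite loss in PyTorch for one example operator, the 1-D convection equation of problem P1.1 with its periodic boundary condition; the network and the sampled point sets are placeholders, and other PDEs only change the residual term.

```python
import torch

def pinn_loss(net, xt_col, xt_init, u0, xt_left, xt_right, beta=1.0):
    """Composite PINN loss for u_t + beta * u_x = 0 with periodic BCs.

    net maps (x, t) pairs of shape (N, 2) to u of shape (N, 1);
    xt_left/xt_right pair the boundary points (0, t) and (2*pi, t).
    """
    xt = xt_col.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    loss_f = ((u_t + beta * u_x) ** 2).mean()              # residual term, Eq. (5)
    loss_u0 = ((net(xt_init) - u0) ** 2).mean()            # initial condition, Eq. (6)
    loss_b = ((net(xt_left) - net(xt_right)) ** 2).mean()  # periodic BC, Eq. (7)
    return loss_f + loss_u0 + loss_b                       # total loss, Eq. (4)
```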
However, sometimes PINNs struggle to learn the ODE/PDE dynamics [59, 30, 47, 43] (see Figure 1). Wight & Zhao [62] proposed several techniques to improve the optimization process compared to the original formulation: mini-batch optimization and adaptive sampling of collocation points. Adaptive sampling in time splits the time interval \([t_{0},T]=\cup_{k=1}^{K}[t_{k-1},t_{k}],\ t_{K}=T\), and solves an equation on the first interval \([t_{0},t_{1}]\), then on \([t_{0},t_{2}]\), and so on up to \([t_{0},T]\). Thus, if a solution can be found on a domain \(\Omega\times[t_{0},t_{k-1}]\), then the network is pretrained well for the extended domain \(\Omega\times[t_{0},t_{k}]\). Krishnapriyan et al. [30] proposed the _seq2seq_ approach that splits the domain into smaller subdomains in time and learns the solution on each of the subdomains with a separate network. Thus, both adaptive sampling in time and _seq2seq_ are based on the idea of splitting the domain into multiple subdomains, on which solutions can be learned more easily.
As explained in [47], improving PINN's solutions by considering small subdomains is possible because the loss residuals (\(\mathcal{L}_{\mathcal{F}}\) term) can be trivially minimized in the vicinity of fixed points, despite corresponding to nonphysical system dynamics that do not satisfy the initial conditions. Therefore, the reduction of the domain improves the convergence of the optimization problem (4) and helps to escape nonphysical solutions.
Figure 1: 1-D reaction equation with parameter \(\rho=5\) (see P1.2).
Another strategy is to consider transfer learning. Transfer learning is commonly used in computer vision and natural language processing [3, 57, 48, 23]. It tries to improve the optimization process by starting with better weight initialization. In PINNs, transfer learning is also successfully used to accelerate the loss convergence [18, 45, 10, 63]. For instance, Chen et al. [12] apply transfer learning to learn faster different PDEs creating tasks by changing coefficients or source terms in equations. Analogously, curriculum regularization (similar to curriculum learning [4]) is proposed in [30] to find good initial weights.
#### 1.0.2 Our contribution.
We propose _incremental PINNs_ (iPINNs) and implement this strategy by creating one subnetwork per task such that a complete neural network can learn multiple tasks. Each subnetwork \(\mathcal{N}_{i}\) has its own set of parameters \(\theta_{i}\subset\theta\), and the model is trained sequentially on different tasks. A subnetwork for a new task can overlap with all previous subnetworks, which helps to assimilate the new task. As a result, the network consists of overlapping subnetworks, while the free parameters can be used for future tasks. To illustrate the benefits of the algorithm we consider two problem formulations (Section 3). Firstly, we learn a family of equations (e.g., convection) starting from a simple one and incrementally learning new equations from that family. Secondly, we learn a dynamical system that consists of two processes (e.g., reaction-diffusion) by first learning the individual components of the process. Both scenarios demonstrate that the incremental approach enables an iPINN network to learn for cases where regular PINNs fail. To the best of our knowledge, this is the first example where one network can sequentially learn multiple equations without extending its architecture, with the added benefit that performance is significantly improved.
## 2 Related work
Our methodology is based on creating sparse network representations and, similarly to other PINN research, is sensitive to the choice of activation functions. We briefly highlight key related work herein.
#### 2.0.1 Sparse network representation.
Sparse architectures are often advantageous compared to dense ones [19, 1, 64, 35]. According to the lottery ticket hypothesis (LTH) [16], every randomly initialized network contains a subnetwork that can be trained in isolation to achieve comparable performance to the original network. Based on this observation, the idea of using subnetworks has been adopted in continual learning [39, 53, 54]. In this paradigm, every subnetwork created is associated with a particular task and used only for this task to make a prediction. One of the approaches to finding these task-related subnetworks is connection pruning [33, 21, 20, 15, 14], which removes unimportant parameters while exhibiting similar performance.
#### 2.0.2 Choice of the activation function.
There are several studies that investigate how different activation functions affect the performance of neural networks in
classification and regression tasks [56, 25]. It was shown that the ReLU [44] activation function, although powerful in classification tasks, may not be the optimal choice for physics-informed machine learning (PIML) regression. Meanwhile, hyperbolic tangent (tanh) or sine (sin) perform well for PIML. Sinusoidal representation networks (SIRENs) [52] tackle the problem of modeling signals with fine details. A special weight initialization scheme combined with the sin activation function allows SIRENs to learn complex natural signals. Hence, we use the sin activation function in our experiments. In Section 6.1, we provide a comparison of results between the discussed activation functions.
## 3 Problem formulation
We focus on two scenarios: (1) incremental PINNs learning, where the network sequentially learns several equations from the same family; and (2) learning a combination of multiple equations that create another physical process. To illustrate these cases, we consider one-dimensional convection, reaction and reaction-diffusion problems with periodic boundary conditions.
### Scenario 1: Equation incremental learning
We consider the problem of learning the sequence of equations that belong to one family:
\[\mathcal{F}_{k}(u(x,t))=0,\quad x\in\Omega,\ t\in[t_{0},T],\ k=1,2,\ldots,K,\] (P1)
where \(\{\mathcal{F}_{k}\}_{k=1}^{K}\) are differential operators from the same family of equations.
**1-D convection equation**
\[\frac{\partial u}{\partial t}+\beta_{k}\frac{\partial u}{\partial x}=0,\quad u(x,0)=h_{1}(x),\quad u(0,t)=u(2\pi,t),\] (P1.1)
where \(t\in[0,1],\ x\in[0,2\pi],\ \beta_{k}\in\mathcal{B}\subset\mathbb{N}\).

**1-D reaction equation**
\[\frac{\partial u}{\partial t}-\rho_{k}u(1-u)=0,\quad u(x,0)=h_{2}(x),\quad u(0,t)=u(2\pi,t),\] (P1.2)
where \(t\in[0,1],\ x\in[0,2\pi],\ \rho_{k}\in\mathcal{R}\subset\mathbb{N}\).
### Scenario 2: Combination of multiple equations
We also consider the case when a dynamic process consists of multiple components. Let us consider the reaction-diffusion equation:
\[\frac{\partial u}{\partial t}-\nu\frac{\partial^{2}u}{\partial x^{2}} -\rho u(1-u) =0,\] (P2) \[u(x,0) =h_{2}(x),\] \[u(0,t) =u(2\pi,t),\]
where \(t\in[0,1],\ x\in[0,2\pi],\ \nu,\rho>0\). This process consists of two parts: reaction term (\(\nu=0\)): \(-\rho u(1-u)\) and diffusion term (\(\rho=0\)): \(-\nu\frac{\partial^{2}u}{\partial x^{2}}\). Therefore, we construct one task as the reaction, another one as the diffusion, and the final one as the reaction-diffusion. We can change the order of the reaction tasks and diffusion tasks to show the robustness of incremental learning. The reaction-diffusion task should be the last one since our goal is first to learn the components of the system and only then the full system.
Considering these two problems, we want to show that better generalization can be achieved by pretraining the network with simpler related problems rather than by dividing the domain into smaller subdomains. In the following section, we show how one network can incrementally learn different equations without catastrophic forgetting.
## 4 Methodology
The proposed method needs to be applicable to both types of problems, P1 and P2. However, these problems cannot be solved by one network with the same output head for all \(K\) tasks, since \(\mathcal{F}_{i}(u(x,t))\neq\mathcal{F}_{j}(u(x,t))\) for \(i\neq j\) and \(x\in\Omega,\ t\in[t_{0},T]\). Therefore, we propose iPINNs, an incremental learning algorithm that focuses on learning task-specific subnetworks \(\mathcal{N}_{1},\mathcal{N}_{2},...,\mathcal{N}_{K}\) for each task \(k=1,2,\ldots,K\). To create these subnetworks, we use the iterative pruning algorithm NNrelief [14]. This pruning approach uses input data to estimate the contribution of every connection to a neuron in the pretrained network and deletes the least important ones. However, in principle, any connection pruning algorithm or any other approach able to find and train sparse network representations is suitable.

Figure 2: An example of iPINNs with two PDEs: every subnetwork corresponds to only one task (PDE).
iPINN trains task-related subnetworks with pruning, allowing the subnetworks to overlap on some connections. This way the method provides knowledge sharing between the subnetworks. These overlaps are updated with respect to all tasks that are assigned to a particular connection. Let us denote the loss of each task \(\mathcal{D}_{j}\) as \(\mathcal{L}_{j}=\mathcal{L}(\theta_{j};\mathcal{D}_{j})\), where \(\theta_{j}\) is the parameter vector for task \(\mathcal{D}_{j}\), \(1\leq j\leq k\). Then the total loss and its gradient with respect to a parameter \(w\) can be written as:
\[\mathcal{L} =\sum_{j=1}^{k}\mathcal{L}_{j}, \tag{8}\] \[\frac{\partial\mathcal{L}}{\partial w} =\sum_{j=1}^{k}\frac{\partial\mathcal{L}_{j}}{\partial w}=\sum_{j :\ w\in\mathcal{N}_{j}}\frac{\partial\mathcal{L}_{j}}{\partial w}, \tag{9}\]
because if \(w\not\in\mathcal{N}_{j}\), then \(\frac{\partial\mathcal{L}_{j}}{\partial w}=0\). The pseudocode of the algorithm is shown as follows:
```
0: neural network \(\mathcal{N}\), training datasets \(\mathcal{D}_{k}\) (\(k=1,2,\ldots,K\)), training hyperparameters, pruning hyperparameters (\(\textit{num\_iters}\)).
1: for \(k=1,2,\ldots,K\) do
2:     \(\mathcal{N}_{k}\leftarrow\mathcal{N}\)    \(\triangleright\) set full network as a subnetwork
3:     Train \(\mathcal{N}_{1},\mathcal{N}_{2},\ldots,\mathcal{N}_{k}\) on tasks \(\mathcal{D}_{1},\mathcal{D}_{2},\ldots,\mathcal{D}_{k}\) using Eq. 9.
4:     for \(it=1,2,\ldots,\textit{num\_iters}\) do    \(\triangleright\) repeat pruning
5:         \(\mathcal{N}_{k}\leftarrow\textit{Pruning}(\mathcal{N}_{k},\mathcal{D}_{k})\)    \(\triangleright\) reduce unimportant connections
6:         Retrain subnetworks \(\mathcal{N}_{1},\mathcal{N}_{2},\ldots,\mathcal{N}_{k}\) on tasks \(\mathcal{D}_{1},\mathcal{D}_{2},\ldots,\mathcal{D}_{k}\) using Eq. 9.
7:     end for
8: end for
```
**Algorithm** PINN incremental learning
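As an illustration only, the sketch below replaces NNrelief with plain magnitude pruning (an assumption; any pruning criterion fits, as noted above) and implements the masked gradient rule of Eq. (9) with one binary mask per task; names and the pruning fraction are hypothetical.

```python
import torch

def subnetwork_params(params, mask):
    """Effective weights of one subnetwork: zero outside its mask.
    Per-task losses should be computed through these masked weights."""
    return [p * m for p, m in zip(params, mask)]

def prune_subnetwork(mask, params, frac=0.2):
    """Magnitude-pruning stand-in for NNrelief [14]: drop the smallest
    fraction `frac` of the weights still alive in this task's mask."""
    for m, p in zip(mask, params):
        alive = p.abs()[m.bool()]
        if alive.numel():
            m.mul_((p.abs() >= torch.quantile(alive, frac)).float())

def ipinn_step(params, masks, task_losses, lr=1e-3):
    """One gradient step following Eq. (9): a weight is updated only by
    the tasks whose subnetwork contains it."""
    total = [torch.zeros_like(p) for p in params]
    for mask, loss in zip(masks, task_losses):
        g = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        for acc, gi, m in zip(total, g, mask):
            if gi is not None:
                acc += gi * m  # dL_j/dw = 0 whenever w is not in N_j
    with torch.no_grad():
        for p, g in zip(params, total):
            p -= lr * g
```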
The main advantage of the proposed approach is that a neural network learns _all_ tasks (subdomains or equations) that were given during training and not only the last one. This is achieved by constantly replaying old data. In the next section, we experimentally show that pretrained parts of the network help to improve the convergence process.
## 5 Numerical experiments
Our findings illustrate the advantage of the Algorithm over regular PINNs [46]. The Algorithm allows the network to learn multiple equations (P1) from the same family. Furthermore, by starting with simpler tasks, the network can learn more complex ones that cannot be learned separately.
#### 5.0.1 Experimental setup.
Let us start by examining the proposed algorithms on the convection and reaction equations with periodic boundary conditions (P1). Following the setup in [30], we use a four-layer neural network with 50 neurons per layer. We use 1000 randomly selected collocation points on every time interval between 0 and 1 for \(\mathcal{L}_{\mathcal{F}}\). The Adam optimizer [29] is used to train the model.
To evaluate the performance of the algorithms we compare the final error after the last task. In addition, following the continual learning literature [36], we compare backward and forward transfer metrics. Let us denote the test set as \(\mathcal{D}^{test}=\{(x^{i},t^{i},l):x^{i}\in[0,2\pi],\ t^{i}\in[0,1],\ l\) is the task-ID\(\}\), the solution of the equation at the point \((x^{i},t^{i},l)\) as \(\mathbf{u}_{l}^{i}=u_{l}(x^{i},t^{i})\), and \(\mathbf{\hat{u}}_{l,k}^{i}\) as the prediction of the model at point \((x^{i},t^{i},l)\) after task \(\mathcal{D}_{k}\) is learned. Relative and absolute errors are denoted as \(r_{l,k}\) and \(\varepsilon_{l,k}\), respectively; they are calculated for task \(l\) after task \(k\) is learned (\(l\leq k\)).
Relative error:
\[r_{l,k}=\frac{1}{N}\frac{||\mathbf{u}_{l}-\mathbf{\hat{u}}_{l,k}||_{2}}{||\mathbf{u}_{l}||_{2}}\times 100\%, \tag{10}\]
Absolute error:
\[\varepsilon_{l,k}=\frac{1}{N}\sum_{i=1}^{N}\lvert\mathbf{u}_{l}^{i}-\mathbf{\hat{u}}_{l,k}^{i}\rvert, \tag{11}\]
Backward transfer:
\[\text{BWT}=\frac{1}{k-1}\sum_{l=1}^{k-1}\left(\varepsilon_{l,k}-\varepsilon_{l,l}\right)\quad\text{or} \tag{12}\]
\[\text{BWT}=\frac{1}{k-1}\sum_{l=1}^{k-1}\left(r_{l,k}-r_{l,l}\right). \tag{13}\]
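For reference, BWT is a one-liner once the errors are tabulated; `err` below is a placeholder matrix with `err[l, k]` holding the error on task `l` after task `k` has been learned.

```python
import numpy as np

def backward_transfer(err):
    """BWT of Eqs. (12)-(13); negative values mean later tasks improved earlier ones."""
    k = err.shape[0]
    return np.mean([err[l, k - 1] - err[l, l] for l in range(k - 1)])
```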
### Results
Table 1 presents the results after all reaction equations are learned, varying \(\rho\) from 1 to 5. Figure 3 shows the error history for every equation after incremental steps. The table summarizes the performance improvement of iPINNs compared to regular PINNs, exhibiting negligible error for all values of \(\rho\), which is especially relevant for larger \(\rho\). Moreover, iPINNs provide negative BWT, which means that previous subnetworks help to learn the following ones.
We observe the same learning behaviour for the convection equation. By incrementally learning the sequence of convection equations, we achieve much lower absolute and relative errors for the equations that are more difficult to learn (\(\beta=30,40\)). In Table 2 we show the final errors at the end of training, and Figure 4 shows the absolute error history for each equation.
In Figures 5 and 6, we illustrate the error of iPINNs on convection and reaction equations and the exact solutions for every value of parameter \(\beta\) or \(\rho\) that were considered. Overall, we see that the neural network learns more complicated tasks more accurately if parts of the network are pretrained with easier tasks. At the same time, iPINNs replay the training data for previous PDEs during training for the new one. There are no additional costs to store or
generate input points \((x,t)\) for previous tasks since they can be easily sampled when necessary.
Another illustration of the method is learning problem P2. We consider values of \(\rho\) and \(\nu\) for which a PINN has no difficulty learning each component separately. Results obtained when first learning the reaction part (diffusion part) are shown in Table 3 (Table 4). The main finding is that the network can learn almost every equation at least as well as when it is learned independently. In fact, for the reaction equation, the neural network significantly improves the prediction error. Another interesting observation is that the model learns the reaction-diffusion equation with almost the same error, regardless of the order of the tasks.

\begin{table}
\begin{tabular}{c c c c} \hline \hline & & regular PINN & iPINN \\ \hline \multirow{2}{*}{\(\rho=1\)} & abs. err & \(1.09\times 10^{-3}\) & \(\mathbf{1.5\times 10^{-4}}\) \\ & rel. err & 0.263\% & \(\mathbf{0.039\%}\) \\ \hline \multirow{2}{*}{\(\rho=2\)} & abs. err & \(1.97\times 10^{-3}\) & \(\mathbf{2.5\times 10^{-4}}\) \\ & rel. err & 0.479\% & \(\mathbf{0.070\%}\) \\ \hline \multirow{2}{*}{\(\rho=3\)} & abs. err & \(6.72\times 10^{-3}\) & \(\mathbf{6.1\times 10^{-4}}\) \\ & rel. err & 2.05\% & \(\mathbf{0.210\%}\) \\ \hline \multirow{2}{*}{\(\rho=4\)} & abs. err & \(1.13\times 10^{-2}\) & \(\mathbf{1.18\times 10^{-3}}\) \\ & rel. err & 3.68\% & \(\mathbf{0.458\%}\) \\ \hline \multirow{2}{*}{\(\rho=5\)} & abs. err & \(5.04\times 10^{-2}\) & \(\mathbf{1.91\times 10^{-3}}\) \\ & rel. err & 12.19\% & \(\mathbf{0.763\%}\) \\ \hline \hline \multirow{2}{*}{BWT} & abs. err & N/A & \(-3.8\times 10^{-4}\) \\ & rel. err & N/A & \(-0.112\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Final error and forgetting after all reaction equations are learned.
Figure 5: iPINNs on 1-D reaction equation.
Figure 6: iPINNs on 1-D convection equation.
\begin{table}
\begin{tabular}{l l l c c} \hline \hline parameters & equation & & regular PINN & iPINN \\ \hline \multirow{6}{*}{\(\rho=3,\ \nu=5\)} & \multirow{2}{*}{diffusion} & abs. err & \(\mathbf{1.38\times 10^{-4}}\) & \(8.64\times 10^{-4}\) \\ & & rel. err & \(\mathbf{0.05\%}\) & \(0.28\%\) \\ & \multirow{2}{*}{reaction} & abs. err & \(6.72\times 10^{-3}\) & \(\mathbf{2.11\times 10^{-3}}\) \\ & & rel. err & \(2.05\%\) & \(\mathbf{0.68\%}\) \\ & \multirow{2}{*}{reaction-diffusion} & abs. err & \(4.89\times 10^{-3}\) & \(\mathbf{4.07\times 10^{-3}}\) \\ & & rel. err & \(0.80\%\) & \(\mathbf{0.67\%}\) \\ \hline \multirow{6}{*}{\(\rho=4,\ \nu=4\)} & \multirow{2}{*}{diffusion} & abs. err & \(4.35\times 10^{-4}\) & \(\mathbf{3.45\times 10^{-4}}\) \\ & & rel. err & \(0.16\%\) & \(\mathbf{0.12\%}\) \\ & \multirow{2}{*}{reaction} & abs. err & \(1.13\times 10^{-2}\) & \(\mathbf{4.91\times 10^{-3}}\) \\ & & rel. err & \(3.68\%\) & \(\mathbf{1.97\%}\) \\ & \multirow{2}{*}{reaction-diffusion} & abs. err & \(4.58\times 10^{-3}\) & \(\mathbf{4.42\times 10^{-3}}\) \\ & & rel. err & \(0.70\%\) & \(\mathbf{0.67\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Final error and forgetting for diffusion \(\rightarrow\) reaction \(\rightarrow\) reaction-diffusion.
\begin{table}
\begin{tabular}{l l l c c} \hline \hline parameters & equation & & regular PINN & iPINN \\ \hline \multirow{6}{*}{\(\rho=3,\ \nu=5\)} & \multirow{2}{*}{reaction} & abs. err & \(6.72\times 10^{-3}\) & \(\mathbf{9.41\times 10^{-4}}\) \\ & & rel. err & \(2.05\%\) & \(\mathbf{0.31\%}\) \\ & \multirow{2}{*}{diffusion} & abs. err & \(\mathbf{1.38\times 10^{-4}}\) & \(1.85\times 10^{-4}\) \\ & & rel. err & \(\mathbf{0.05\%}\) & \(0.06\%\) \\ & \multirow{2}{*}{reaction-diffusion} & abs. err & \(4.89\times 10^{-3}\) & \(\mathbf{4.10\times 10^{-3}}\) \\ & & rel. err & \(0.80\%\) & \(\mathbf{0.68\%}\) \\ \hline \multirow{6}{*}{\(\rho=4,\ \nu=4\)} & \multirow{2}{*}{reaction} & abs. err & \(1.13\times 10^{-2}\) & \(\mathbf{7.88\times 10^{-3}}\) \\ & & rel. err & \(3.68\%\) & \(\mathbf{2.99\%}\) \\ & \multirow{2}{*}{diffusion} & abs. err & \(\mathbf{4.35\times 10^{-4}}\) & \(5.84\times 10^{-4}\) \\ & & rel. err & \(\mathbf{0.16\%}\) & \(0.19\%\) \\ & \multirow{2}{*}{reaction-diffusion} & abs. err & \(4.58\times 10^{-3}\) & \(\mathbf{4.42\times 10^{-3}}\) \\ & & rel. err & \(0.69\%\) & \(\mathbf{0.65\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Final error and forgetting for reaction \(\rightarrow\) diffusion \(\rightarrow\) reaction-diffusion.
## 6 Additional study
In this section, we provide additional information about the learning procedure of iPINNs. We highlight some important training details, such as the presence of regularization and the choice of activation function. We also explore the subnetworks that our approach produces, showing the proportion of parameters allocated to each task.
### Sensitivity to hyperparameters
Here we illustrate the influence of different training hyperparameters on the performance of iPINNs. First, we compare the results with and without a regularization term (weight decay). In Figure 7, it can be observed that the presence of weight decay worsens the prediction error; however, iPINNs still work when weight decay is present. The lack of need for weight decay can be explained by the fact that many parameters are assigned to multiple tasks and cannot overfit to a particular one. In addition, each subnetwork has fewer parameters than the original network and is therefore less prone to overfitting. Thus, weight decay is unnecessary, and its presence only worsens the result by complicating the optimization procedure.
Furthermore, in Figure 8 we compare the performance of the sin and tanh activation functions for two task orderings, and observe that sin works significantly better in both cases. We also tested the ReLU activation, but it demonstrates poor performance in both PDE orderings: if the reaction equation is learned first, the absolute errors are \(0.4959,0.2369\), and \(0.1493\); if we start with the diffusion equation and then learn the reaction and reaction-diffusion PDEs, the errors are \(0.2399,0.2977\), and \(0.3003\).
In addition, we examine how different values of the pruning parameter \(\alpha\) affect the results. The higher the value of \(\alpha\), the less the network is pruned. Therefore, with
Figure 7: Influence of weight decay on the results for reaction (left) and convection (right) equations after all tasks are learned.
\(\alpha=0.95\) the task-specific subnetworks are sparser than with \(\alpha=0.99\), but less sparse than with \(\alpha=0.9\). In Figure 9, we observe that for the reaction equation we can prune less and achieve better performance, which can be explained by the fact that the PDEs in the reaction family are quite similar; we can therefore allow the network more overlap to share knowledge between subnetworks. For learning within the family of convection PDEs, \(\alpha=0.95\) proved to be the better option, constructing sufficiently expressive task-specific subnetworks while freeing space for future tasks. Nevertheless, the performance is good for any reasonable choice of the pruning parameter.
### Subnetworks analysis
In Figure 10, we present the portions of the network that are occupied by each task. We illustrate this for both orderings: when the model learns the reaction equation first (Figure 10a), and when diffusion comes first
Figure 8: Influence of activation function on the results when the reaction learned first (left) and diffusion learned first (right).
Figure 9: iPINNs with different values of pruning parameter \(\alpha\).
(Figure 10b). These results are averaged over 3 different runs for each ordering. It is noteworthy that the percentage of parameters occupied by all tasks is very similar for both orderings (31.8% and 31.5% of all network parameters, respectively), while the percentages of used parameters are 79.5% and 79.3%. This means that the total number of trained parameters is nearly the same for the two incremental procedures, which shows the robustness of the method. Moreover, the network retains about 20% of free connections to learn new tasks.
## 7 Conclusion
In this work, we propose an incremental learning approach for PINNs where every task is presented as a new PDE. Our algorithm is based on task-related subnetworks obtained by iterative pruning. To illustrate our idea, we consider two cases where incremental learning is applicable to a sequence of PDEs. In the first case, we consider the family of convection/reaction PDEs, learning them sequentially. In the second example, we consider the reaction-diffusion equation and first learn the components of the process, namely reaction and diffusion, and only then the full reaction-diffusion equation. Our main goal is to show the possibility of incremental learning for PINNs without significant forgetting of previous tasks. Our numerical experiments show that the proposed algorithm can learn all the given tasks, which is not possible with standard PINNs. Importantly, we also show that future tasks are learned better because they can share connections trained on previous tasks, leading to significantly better performance than if these tasks were learned independently. We demonstrate that this stems from the transfer of knowledge between the subnetworks associated with each task. Interestingly, the model's performance on previous tasks is also improved by learning the following tasks. In essence, iPINNs demonstrate symbiotic training effects between past and future tasks by learning them with a single network composed of dedicated subnetworks that share relevant neuronal connections.
Figure 10: Percentage of parameters used for every equation with \(\rho=4,~{}\nu=4\). |
2305.19292 | Revisiting Random Forests in a Comparative Evaluation of Graph
Convolutional Neural Network Variants for Traffic Prediction | Traffic prediction is a spatiotemporal predictive task that plays an
essential role in intelligent transportation systems. Today, graph
convolutional neural networks (GCNNs) have become the prevailing models in the
traffic prediction literature since they excel at extracting spatial
correlations. In this work, we classify the components of successful GCNN
prediction models and analyze the effects of matrix factorization, attention
mechanism, and weight sharing on their performance. Furthermore, we compare
these variations against random forests, a traditional regression method that
predates GCNNs by over 15 years. We evaluated these methods using simulated
data of two regions in Toronto as well as real-world sensor data from selected
California highways. We found that incorporating matrix factorization,
attention, and location-specific model weights either individually or
collectively into GCNNs can result in a better overall performance. Moreover,
although random forest regression is a less compact model, it matches or
exceeds the performance of all variations of GCNNs in our experiments. This
suggests that the current graph convolutional methods may not be the best
approach to traffic prediction and there is still room for improvement.
Finally, our findings also suggest that for future research on GCNN for traffic
prediction to be credible, researchers must include performance comparison to
random forests. | Ta Jiun Ting, Xiaocan Li, Scott Sanner, Baher Abdulhai | 2023-05-30T00:50:51Z | http://arxiv.org/abs/2305.19292v1 | Revisiting Random Forests in a Comparative Evaluation of Graph Convolutional Neural Network Variants for Traffic Prediction*
###### Abstract
Traffic prediction is a spatiotemporal predictive task that plays an essential role in intelligent transportation systems. Today, graph convolutional neural networks (GCNNs) have become the prevailing models in the traffic prediction literature since they excel at extracting spatial correlations. In this work, we classify the components of successful GCNN prediction models and analyze the effects of matrix factorization, attention mechanism, and weight sharing on their performance. Furthermore, we compare these variations against random forests, a traditional regression method that predates GCNNs by over 15 years. We evaluated these methods using simulated data of two regions in Toronto as well as real-world sensor data from selected California highways. We found that incorporating matrix factorization, attention, and location-specific model weights either individually or collectively into GCNNs can result in a better overall performance. Moreover, although random forest regression is a less compact model, it matches or exceeds the performance of all variations of GCNNs in our experiments. This suggests that the current graph convolutional methods may not be the best approach to traffic prediction and there is still room for improvement. Finally, our findings also suggest that for future research on GCNN for traffic prediction to be credible, researchers must include performance comparison to random forests.
## I Introduction
Accurate traffic prediction is an integral component of intelligent transportation systems (ITS) as it is critical in traffic control strategies and traveler information systems. Predicting evolving traffic patterns on a road network is not a trivial task, and researchers have used advanced models to approximate traffic behavior. Over the years, these models include time series methods such as the autoregressive integrated moving average (ARIMA) model [1, 2, 3], non-parametric regression models such as the k-nearest neighbor [4, 5] or support vector regression [6], standard artificial neural network models of fully-connected [7, 8] and recurrent neural networks [9]. However, most recently, the state-of-the-art traffic prediction models are the graph convolutional neural network (GCNN) methods [10, 11].
This paper first describes the graph convolution perspective and then develops a taxonomy of GCNN short-term traffic prediction models based on their components. Afterwards, we explore different variations along these components and eventually arrive at a variant that is similar to a traditional recurrent neural network. We then revisit a regression view of short-term traffic prediction using random forests, a powerful ensemble regression method. Finally, we compare the performance of these models using data from traffic simulations as well as the real world.
### _Problem Definition_
Short-term traffic prediction can be performed at different levels, from the behaviors of individual vehicles to the traffic states of entire districts. This work investigates the prediction of traffic speed or flow at the level of individual links (segments of roads). Throughout this paper, we refer to road links as nodes, and connections between road links (such as road intersections) as edges in the context of GCNNs. In practice, a variety of factors such as weather and road design would also influence traffic patterns; however, similar to other works on this topic, we only use the past observations and the graph structure of the road network as inputs to our models.
The following notations are used throughout this paper:
* \(\mathcal{G}=(\mathcal{V},\mathcal{E})\): The directed graph which describes the road network. \(\mathcal{V}\) is the set of nodes, and \(|\mathcal{V}|=N\). \(\mathcal{E}\) is the set of connections or edges.
* \(\mathcal{N}(i)\): The set of nodes in the neighborhood of node \(i\). This is not restricted to the immediate neighbors of node \(i\), and also includes node \(i\) itself.
* \(\mathbf{x}_{i}^{(t)}\): A vector with length \(d\) that represents the observation of node \(i\) at time \(t\).
* \(\mathbf{X}^{(t)}\) : A matrix with size \((N\times d)\) that represents the observation of the entire road network at time \(t\).
* \(\hat{x}_{i}^{(t)}\): A scalar that represents the prediction of node \(i\) at time \(t\).
* \(\mathbf{\hat{X}}^{(t)}\): A vector with size \((N\times 1)\) that represents the prediction of the entire road network at time \(t\).
* \(H\): The prediction horizon.
Additionally, the variables \(i\) and \(j\) are reserved to index a node and a time slice, respectively.
Using the above notation, we can define the prediction problem as learning a function \(f\) that maps the past observations to predictions using the graph \(\mathcal{G}\) and minimize the prediction error \(L\) as follows:
\[\hat{\mathbf{X}}^{(t+1)},\hat{\mathbf{X}}^{(t+2)},...,\hat{\mathbf{X}}^{(t+H)} =f\left(\mathbf{X}^{(t)},\mathbf{X}^{(t-1)},...,\mathcal{G}\right) \tag{1}\]
\[\min L=\sum_{i=1}^{|\mathcal{V}|}\sum_{j=1}^{H}\left\|\hat{x}_{i}^{(t+j)}-x_{i}^{ (t+j)}\right\| \tag{2}\]
### _The Graph Convolution Perspective_
A road network can be viewed as a graph, and road traffic is a dynamic process that develops gradually on the graph. Graph convolutional neural networks extend the notion of the convolution operation, which is commonly applied to analyzing visual imagery with a grid-like structure, to an operation that can be applied to graphs with arbitrary structures. Therefore, a GCNN is capable of extracting information using the spatial correlations between nodes in a graph and lends itself well to capturing the complex patterns needed for short-term traffic prediction. [10] is the first application of graph convolutional neural network in short-term traffic prediction, and there have been many subsequent works that expand upon this idea.
In the GCNN perspective, every node has a defined neighborhood around it based on the structure of the graph. Using the formulation introduced by [12], the operations of a GCNN can be divided into two phases, a message-passing phase and a readout phase. For a given node \(i\), the message passing phase aggregates the information within its neighborhood \(\mathcal{N}(i)\), then the subsequent readout phase applies the model parameters to create the output. In traffic, the predominant form of GCNN is the linear model introduced by [13] and its extensions. In this framework, the message passing phase can be interpreted as taking a weighted sum of the inputs within the neighborhood, where the coefficients \(a\) are determined by graph properties such as the adjacency matrix or the Laplacian matrix. Meanwhile, the readout phase is defined as a linear transformation with an activation function. Using the notation introduced earlier, this type of GCNN can be formulated as follows:
\[\mathbf{k}_{i}=\rho\left(\sum_{n\in\mathcal{N}(i)}a_{in}\mathbf{W}\mathbf{x}_ {n}+\mathbf{b}\right) \tag{3}\]
where \(\mathbf{k}_{i}\) is the output representation for node \(i\), \(\mathbf{x}_{n}\) is the graph input for node \(n\), \(a_{in}\) is the influence from node \(n\) to node \(i\) that is defined by the aggregation matrix, \(\mathbf{W}\) and \(\mathbf{b}\) are respectively the weight and bias terms of the model that transforms the input to hidden dimension, and \(\rho(\cdot)\) denotes the activation function.
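To make the message-passing and readout phases concrete, the following NumPy sketch implements one layer of this linear GCNN (Eq. (3)); the dense aggregation matrix and all variable names are our own assumptions, for illustration:

```python
import numpy as np

def gcnn_layer(X, A, W, b, rho=np.tanh):
    """One linear GCNN layer in the style of Eq. (3).

    X: (N, d) node inputs; A: (N, N) aggregation matrix whose entry
    A[i, n] is the coefficient a_in (nonzero only for n in N(i));
    W: (h, d) shared weight matrix; b: (h,) shared bias.
    """
    transformed = X @ W.T         # readout weights applied to every node: W x_n
    aggregated = A @ transformed  # message passing: sum_n a_in (W x_n)
    return rho(aggregated + b)    # activation; shared bias broadcast per node
```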
In the more recent graph attention networks [14], \(a_{in}\) are instead produced by an additional module that learns the relationship between every pair of nodes and assigns weights for the aggregation. This can be achieved with a variety of attention mechanisms from the literature, such as those of [15, 16]; the mechanism used in [14] is as follows:
\[a_{mn}=\text{softmax}_{m}\left(\text{LeakyReLU}\left(\boldsymbol {\alpha}^{\top}\left[\mathbf{W}\mathbf{x}_{m}\parallel\mathbf{W}\mathbf{x}_{n} \right]\right)\right) \tag{4}\] \[\mathbf{k}_{i}=\sigma\left(\sum_{n\in\mathcal{N}(i)}a_{in}\mathbf{ W}\mathbf{x}_{n}+\mathbf{b}\right)\]
where \(\boldsymbol{\alpha}\) defines the linear layer that computes the attention value between two nodes, and \(\parallel\) denotes the concatenation operator. It is important to note that there is only one set of model weights \(\mathbf{W}\) and biases \(\mathbf{b}\) applied to all nodes in both formulations.
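As an illustration, the attention coefficients of Eq. (4) could be computed as in the NumPy sketch below; the boolean neighborhood mask, the 0.2 LeakyReLU slope, and the variable names are our own assumptions:

```python
import numpy as np

def gat_coefficients(X, W, alpha, mask):
    """Attention coefficients a_mn in the style of Eq. (4).

    X: (N, d) node inputs; W: (h, d) shared weight; alpha: (2h,)
    attention vector; mask: (N, N) boolean, True where n is in N(m).
    """
    N = len(X)
    Wx = X @ W.T                                     # (N, h)
    pairs = np.concatenate(
        [np.repeat(Wx[:, None, :], N, axis=1),       # W x_m, fixed over n
         np.repeat(Wx[None, :, :], N, axis=0)],      # W x_n, fixed over m
        axis=-1)                                     # (N, N, 2h)
    e = pairs @ alpha                                # alpha^T [Wx_m || Wx_n]
    e = np.where(e > 0, e, 0.2 * e)                  # LeakyReLU
    e = np.where(mask, e, -np.inf)                   # keep only n in N(m)
    e -= e.max(axis=1, keepdims=True)                # numerical stability
    exp_e = np.exp(e)
    return exp_e / exp_e.sum(axis=1, keepdims=True)  # row-wise softmax
```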
### _Components of GCNN Traffic Prediction Models_
A short-term traffic prediction model can use GCNNs to capture the spatial correlations between different nodes of a road network; however, the model also needs to account for the changing dynamic of traffic through time. This is commonly achieved in the literature through the use of recurrent neural networks (RNNs) as exemplified by [11, 10, 17]. An RNN is the predominant deep learning model for analyzing sequential data, consisting of repeated cells that form a temporal sequence. The output of each cell is used as the input to the next cell, and the parameters within the cell are shared across time steps. Consequently, this architecture can process sequential data of different lengths by varying the number of repetitions. The gated recurrent unit (GRU) [18] is a standard cell architecture used in RNNs, which operates as follows:
\[\mathbf{z}_{i}^{(t)} =\sigma\left(\mathbf{W}_{z}\mathbf{x}_{i}^{(t)}+\mathbf{U}_{z} \mathbf{k}_{i}^{(t-1)}+\mathbf{b}_{z}\right) \tag{5}\] \[\mathbf{r}_{i}^{(t)} =\sigma\left(\mathbf{W}_{r}\mathbf{x}_{i}^{(t)}+\mathbf{U}_{r} \mathbf{k}_{i}^{(t-1)}+\mathbf{b}_{r}\right)\] \[\mathbf{m}_{i}^{(t)} =\text{tanh}\left(\mathbf{W}_{m}\mathbf{x}_{i}^{(t)}+\mathbf{U}_{ m}\left(\mathbf{r}_{i}^{(t)}\ast\mathbf{k}_{i}^{(t-1)}\right)+\mathbf{b}_{m}\right)\] \[\mathbf{k}_{i}^{(t)} =\left(1-\mathbf{z}_{i}^{(t)}\right)\ast\mathbf{k}_{i}^{(t-1)}+ \mathbf{z}_{i}^{(t)}\ast\mathbf{m}_{i}^{(t)}\]
where \(\mathbf{k}_{i}\) is the hidden states for node \(i\); \(\mathbf{z}_{i},\mathbf{r}_{i},\mathbf{m}_{i}\) are respectively the update gate, the reset gate, and the candidate hidden state; \(\ast\) denotes the Hadamard product; \(\sigma(\cdot)\) denotes the sigmoid function; while \(\mathbf{W}\) and \(\mathbf{U}\) represent the weights and \(\mathbf{b}\) represents the biases of the model.
Typically, a short-term traffic prediction model based on GCNNs and RNNs integrates the two components by replacing the matrix multiplications in the GRU with a GCNN operation. For example, (5) can be modified as follows:
\[\begin{split}\mathbf{z}_{i}^{(t)}&=\sigma\left(\sum_{n\in\mathcal{N}(i)}a_{in}\left(\mathbf{W}_{z}\mathbf{x}_{n}^{(t)}+\mathbf{U}_{z}\mathbf{k}_{n}^{(t-1)}\right)+\mathbf{b}_{z}\right)\\ \mathbf{r}_{i}^{(t)}&=\sigma\left(\sum_{n\in\mathcal{N}(i)}a_{in}\left(\mathbf{W}_{r}\mathbf{x}_{n}^{(t)}+\mathbf{U}_{r}\mathbf{k}_{n}^{(t-1)}\right)+\mathbf{b}_{r}\right)\\ \mathbf{m}_{i}^{(t)}&=\text{tanh}\left(\sum_{n\in\mathcal{N}(i)}a_{in}\left(\mathbf{W}_{m}\mathbf{x}_{n}^{(t)}+\mathbf{U}_{m}\left(\mathbf{r}_{i}^{(t)}\ast\mathbf{k}_{n}^{(t-1)}\right)\right)+\mathbf{b}_{m}\right)\\ \mathbf{k}_{i}^{(t)}&=\left(1-\mathbf{z}_{i}^{(t)}\right)\ast\mathbf{k}_{i}^{(t-1)}+\mathbf{z}_{i}^{(t)}\ast\mathbf{m}_{i}^{(t)}\end{split}\tag{6}\]
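A vectorized NumPy sketch of one step of this graph-convolutional GRU follows; the dense aggregation matrix, the parameter dictionary, and all names are our own assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcgru_step(X, K_prev, A, p):
    """One time step of the graph-convolutional GRU of Eq. (6).

    X: (N, d) inputs at time t; K_prev: (N, h) hidden states at t-1;
    A: (N, N) aggregation coefficients a_in; p: dict with shared
    weights Wz, Wr, Wm (h, d), Uz, Ur, Um (h, h), biases bz, br, bm (h,).
    """
    z = sigmoid(A @ (X @ p["Wz"].T + K_prev @ p["Uz"].T) + p["bz"])
    r = sigmoid(A @ (X @ p["Wr"].T + K_prev @ p["Ur"].T) + p["br"])
    # sum_n a_in U_m (r_i * k_n) = U_m (r_i * sum_n a_in k_n), since r_i
    # is fixed for node i and the Hadamard product commutes with the sum.
    m = np.tanh(A @ (X @ p["Wm"].T) + (r * (A @ K_prev)) @ p["Um"].T + p["bm"])
    return (1 - z) * K_prev + z * m
```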
With this formulation, we developed a taxonomy of GCNN short-term traffic prediction models by identifying 3 GCNN components. The first 2 components are the operation concerning the input \(\mathbf{x}_{i}^{(t)}\) and the last hidden state \(\mathbf{k}_{i}^{(t-1)}\), which
can be a standard matrix multiplication or a GCNN variant. The third component is the model weights \(\mathbf{W}\), \(\mathbf{U}\), and \(\mathbf{b}\), which can be either shared among nodes or independent. We explore variations along these components in the next section. It is important to note that some works use other mechanisms to capture the temporal dynamics of traffic; however, this investigation is focused on RNN-based models due to their prevalence.
## II Methods
### _Variations on GCNN Components_
We begin with the work of [11], which uses a standard matrix multiplication for input, convolution for the last hidden state, and shared model weights among nodes. This formulation transforms the first equation of (5) to the following:
\[\mathbf{z}_{i}^{(t)}=\sigma\left(\mathbf{W}_{z}\mathbf{x}_{i}^{(t)}+\sum_{n \in\mathcal{N}(i)}a_{in}\mathbf{U}_{z}\mathbf{k}_{n}^{(t-1)}+\mathbf{b}_{z}\right) \tag{7}\]
As in (6), the remaining equations in (5) can be transformed likewise and are omitted for brevity in this section.
We then experiment with changing the graph convolution operation to graph attention shown in (4) and examine the different combinations of applying the attention operation to input and hidden states. For this investigation, we call this type of model the graph attention gated recurrent unit (GA-GRU) as it combines the concepts of graph attention networks and gated recurrent units. Equation 8 is one configuration where attention is only applied to the last hidden state, i.e., GA-GRU (hidden).
\[\begin{split} a_{mn}=\frac{\text{exp}(\text{LeakyReLU}\left(\boldsymbol{\alpha}^{\top}[\mathbf{W}\mathbf{x}_{m}\parallel\mathbf{W}\mathbf{x}_{n}]\right))}{\sum\limits_{o\in\mathcal{N}(m)}\text{exp}(\text{LeakyReLU}\left(\boldsymbol{\alpha}^{\top}[\mathbf{W}\mathbf{x}_{m}\parallel\mathbf{W}\mathbf{x}_{o}]\right))}\\ \mathbf{z}_{i}^{(t)}=\sigma\left(\mathbf{W}_{z}\mathbf{x}_{i}^{(t)}+\sum_{n\in\mathcal{N}(i)}a_{in}\mathbf{U}_{z}\mathbf{k}_{n}^{(t-1)}+\mathbf{b}_{z}\right)\end{split} \tag{8}\]
Afterwards, we explore the removal of shared weights among different nodes by designating a unique set of model weights \(\mathbf{W}\), \(\mathbf{U}\), and \(\mathbf{b}\) for each node. In order to remove all weight sharing across nodes, we also replaced the shared attention layer in (8) with a trainable attention matrix. For this investigation, we call this type of model the attentional graph recurrent neural network, i.e., AGRNN (hidden). In this framework, (8) is transformed to the following:
\[\mathbf{z}_{i}^{(t)}=\sigma\left(\mathbf{W}_{iz}\mathbf{x}_{i}^{(t)}+\sum_{n \in\mathcal{N}(i)}a_{in}\mathbf{U}_{iz}\mathbf{k}_{n}^{(t-1)}+\mathbf{b}_{iz}\right) \tag{9}\]
In contrast with (8), the subscript \(i\) in all weights and biases signifies that each node contains its own model parameters, and the \(a\) in this framework are learnable weights.
We also highlight the input-only attention variant of the AGRNN, i.e., AGRNN (input). The diagram is shown in Fig. 1, and the equation is defined below:
\[\mathbf{z}_{i}^{(t)}=\sigma\left(\sum_{n\in\mathcal{N}(i)}a_{in}\mathbf{W}_{iz }\mathbf{x}_{n}^{(t)}+\mathbf{U}_{iz}\mathbf{k}_{i}^{(t-1)}+\mathbf{b}_{iz}\right) \tag{10}\]
Since no convolution or attention is applied to the previous hidden states, the hidden states of different nodes do not influence one another. Combined with the independent model weights, this structure is akin to a traditional recurrent neural network with added location context, and the models of different nodes can be trained independently.
Lastly, we also include in this comparison an example in the literature where the independent model weights are factorized according to the spatial correlations among nodes [17], which results in a structure that shares weights between nodes yet applies a distinct model to every node. Table I summarizes the different variations of GCNN discussed in this section.
### _Random Forests_
Traffic propagation within a single time step is limited in space since the traffic state at a given link is independent of recent traffic states of faraway links. Although the exact radius of influence is changing and unknown, we can predict short-term traffic at a given link using recent observation within a neighborhood. We can then define short-term traffic prediction as a regression problem with the predictions (\(\hat{x}_{i}^{(t+1)},\hat{x}_{i}^{(t+2)},...,\hat{x}_{i}^{(t+H)}\)) as the regressands, and the recent
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Input & Hidden & Weights \\ \hline GRNN [11] & Multiplication & Convolution & Shared \\ \hline GA-GRU (input) & Attention & Multiplication & Shared \\ GA-GRU (hidden) & Multiplication & Attention & Shared \\ GA-GRU (both) & Attention & Attention & Shared \\ \hline AGRNN (input) & Attention & Multiplication & Independent \\ AGRNN (hidden) & Multiplication & Attention & Independent \\ AGRNN (both) & Attention & Attention & Factorized \\ \hline \hline \end{tabular}
\end{table} TABLE I: The GCNN variants explored in this paper, categorized along the 3 component discussed in Section I-C
Fig. 1: Input attention and independent input weights for the update gate of the GRU.
observations (\(\mathbf{x}_{n}^{(t)},\mathbf{x}_{n}^{(t-1)},...\forall n\in\mathcal{N}(i)\)) as the regressors to facilitate the regression analysis.
In this work, we solve this regression problem with random forests [19], an ensemble of regression trees. A regression tree splits the input samples recursively into a tree-like decision diagram until it reaches the desired depth or number of leaf nodes. Each internal node of the tree contains a rule that splits the samples according to the value of a regressor and passes each split to the corresponding child node. Each leaf node of the tree contains a simple model that describes only the samples within its split. During prediction, we traverse the tree based on the regressors until we reach a leaf node, whose model generates the predicted regressand. With a large number of nodes, this approach can approximate complex functions with relatively simple models; however, this can also lead to overfitting. Random forests combat overfitting by training multiple regression trees on random subsets of the training data and averaging their outputs during prediction, which leads to a more robust regression model.
In recent deep learning-focused traffic prediction works, we found a glaring lack of direct comparisons against ensemble regression tree methods. This type of model consistently performs well in a variety of predictive problems, and we believe it should not be overlooked in the short-term traffic prediction context. In contrast to neural network models, regression trees are much simpler to interpret and require minimal data preparation and model selection procedures. We used scikit-learn [20] to build and train the random forests model for this paper.
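As a rough sketch of this setup with scikit-learn (the shapes, the random placeholder data, and the specific hyperparameter values are illustrative; only the hyperparameter ranges come from Table II):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical layout: the recent observations within a link's
# neighborhood are flattened into the regressor vector, and the H
# future values of the link are the (multi-output) regressands.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 5 * 3 * 2))  # 5 neighbors x 3 lags x 2 features
y_train = rng.random((1000, 12))         # H = 12 future time steps

model = RandomForestRegressor(
    n_estimators=150,  # "number of trees" range in Table II: {100, 150, 200}
    max_depth=15,      # "tree depth" range in Table II: {5, 10, 15, 20}
)
model.fit(X_train, y_train)
y_pred = model.predict(X_train[:1])      # shape (1, 12): multi-step forecast
```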
## III Experimental Setup
### _Datasets_
**Toronto datasets**. We created two sets of data using Aimsun Next [21] traffic simulation software. The first dataset is the traffic flow and speed of Queen Elizabeth Way, a highway in Ontario, Canada with 56 measurement links. The second dataset is the traffic flow and speed of downtown Toronto, Canada with 165 measurement links. The two regions are shown in Fig. 2 and Fig. 3.
For the simulation, we used the travel demand collected from a survey in 2016 [22], further calibrated using measurements from the loop detectors installed along the roads [23]. We built the simulation model using morning peak-hour travel demands, and each simulation covers the 4-hour period between 6:00 and 10:00 AM. The speed (distance traveled per unit time) and flow (number of vehicles per unit time) for every link were extracted from the simulations at 1-minute intervals. Speed and flow were selected because they are the most common forms of real-world data, measured using loop detectors and GPS. To augment the data, the simulation was run 50 times with the original travel demands multiplied by a random scalar factor between 0.5 and 1.5.
**California datasets**. The PeMS04 and PeMS08 datasets are collected from the Caltrans Performance Measurement System (PeMS) of districts in California. The PeMS04 dataset consists of 307 measurement locations on highways surrounding San Jose, California, dating from January 1st to February 28th in 2018. The PeMS08 dataset consists of 170 measurement locations on highways surrounding San Bernardino, California, dating from July 1st to August 31st in 2016. For both datasets, the measurement interval is 5 minutes, which corresponds to 288 data points per day. The adjacency matrix is defined according to road distance and connectivity. We follow the evaluation procedure established by other papers [17, 24], which uses the last 12 observations to predict the next 12 time steps, i.e., use the past hour of traffic data to predict that of the next hour.
### _Model Selection_
We evaluated our model against a selection of different methods that are listed below, including other graph convolutional neural networks as well as time series analysis methods. We built all neural network models, including GA-GRUs and AGRNNs, using PyTorch [25] in this paper. Additionally, we tuned the hyperparameters summarized in Table II using coordinate descent.
* **Historical Average**: A time series model that predicts the average of observations from the same time of day in previous weeks. It is not applicable to simulated datasets since the simulation model simulates only one day and has no long-term time series.
* **ARIMA**: A time series model that is well-documented in the traffic prediction literature. This is a univariate model so only the recent observations at the same link
Fig. 3: Map of the urban region chosen for this study.
Fig. 2: Map of the highway region chosen for this study.
are used to generate the prediction. We used pmdarima [26] to construct the ARIMA model for this paper.
* **GCN**[13]: A Graph Convolutional Network that contains 1 hidden layer, and the GCN output layer is connected with a fully-connected layer to predict traffic states.
* **AGCRN**: This model [17] leverages node embeddings to learn adaptive spatial correlations, and uses node-specific parameters for convolution. To capture the temporal correlations, gated recurrent units are adopted.
### _Evaluation Metrics_
To assess the performance, we selected three of the most commonly used time series regression metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and root-mean-square error (RMSE). MAE is the average of the absolute error across all predictions while MAPE is the average of the absolute relative error that emphasizes lower values. Meanwhile, RMSE is the square root of the average squared errors, which is also the standard deviation of all prediction errors. For a prediction horizon \(H\), given predictions \(\hat{x}_{i}^{(t+1)},\hat{x}_{i}^{(t+2)},...,\hat{x}_{i}^{(t+H)}\) and the actual observed value \(x_{i}^{(t+1)},x_{i}^{(t+2)},...,x_{i}^{(t+H)}\), we can calculate the three metrics using the equations below:
\[\text{MAE}=\frac{1}{|\mathcal{V}|}\frac{1}{H}\sum_{i=1}^{|\mathcal{V}|}\sum_{ j=1}^{H}\left|x_{i}^{(t+j)}-\hat{x}_{i}^{(t+j)}\right| \tag{11}\]
\[\text{MAPE}=\frac{1}{|\mathcal{V}|}\frac{1}{H}\sum_{i=1}^{|\mathcal{V}|}\sum_ {j=1}^{H}\left|\frac{x_{i}^{(t+j)}-\hat{x}_{i}^{(t+j)}}{x_{i}^{(t+j)}}\right| \tag{12}\]
\[\text{RMSE}=\sqrt{\frac{1}{|\mathcal{V}|}\frac{1}{H}\sum_{i=1}^{|\mathcal{V}| }\sum_{j=1}^{H}\left(x_{i}^{(t+j)}-\hat{x}_{i}^{(t+j)}\right)^{2}} \tag{13}\]
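In code, these three metrics can be computed directly from the prediction and observation arrays; a minimal NumPy sketch (the array shapes are an assumption) is:

```python
import numpy as np

def mae(x, x_hat):
    """Mean absolute error, Eq. (11)."""
    return np.mean(np.abs(x - x_hat))

def mape(x, x_hat):
    """Mean absolute percentage error, Eq. (12)."""
    return np.mean(np.abs((x - x_hat) / x)) * 100

def rmse(x, x_hat):
    """Root-mean-square error, Eq. (13)."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

# x and x_hat are arrays of shape (|V|, H): the observations and the
# predictions for every node up to the prediction horizon.
```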
For the Toronto datasets, we used both 5th minute and 15th minute as the prediction horizon and computed error using only the prediction at the horizon \(\hat{x}_{i}^{(t+H)}\). Meanwhile, on the California datasets, we followed the convention of [17, 24] and computed error using all predictions up to the prediction horizon \(H\), i.e., \(\hat{x}_{i}^{(t+1)},\hat{x}_{i}^{(t+2)},...,\hat{x}_{i}^{(t+H)}\).
Although the above metrics conveniently produce numerical values for easy comparison across models, they are unable to represent all model aspects. Therefore, we also measured the complexity of each model for a more well-rounded comparison as shown in Table VI.
## IV Results and Discussion
We performed the evaluation using a data split of 60% training, 20% validation, and 20% testing for each dataset. We then recorded each metric to produce results shown in Tables III, IV, and V below, where the bolded number is the lowest error and the underlined number is the second lowest error. In addition, we also report the model complexity for the 5-minute prediction horizon on the highway dataset in Table VI.
Across all experiments, performance improves progressively from GRNN to the GA-GRUs to the AGRNNs. First, this indicates that using an attention mechanism to learn spatial correlations is better than using a fixed adjacency matrix. Besides, the node-specific convolutional weights can capture distinct traffic
\begin{table}
\begin{tabular}{c c c} \hline \hline Name & Range & Applicable Models \\ \hline Neighborhood size (number of steps to target node) & \{1, 2, 3, 4, 5, 6\} & All models except ARIMA \\ \hline Number of historical time steps & \{2, 3,..., 9\} & All models on simulation datasets \\ \hline Number of historical time steps & 12 & All models on PeMS datasets \\ \hline Number of hidden features & \{32, 64, 128\} & Neural network models \\ \hline Batch size & \{32, 64, 128\} & Neural network models \\ \hline Learning rate & \{1e-5, 1e-4, 1e-3\} & Neural network models \\ L2 regularization strength & \{1e-5, 1e-4, 1e-3\} & Neural network models \\ Number of training epochs & \{500, 1000,..., 2000\} & Neural network models \\ Number of differences & \{0, 1, 2\} & ARIMA \\ \hline Number of trees & \{100, 150, 200\} & Random forests \\ Tree depth & \{5, 10, 15, 20\} & Random forests \\ \hline \hline \end{tabular}
\end{table} TABLE II: Hyperparameter selection
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{4}{c}{5-minute horizon} & \multicolumn{4}{c}{15-minute horizon} \\ \hline Model & MAE & MAPE & RMSE & MAE & MAPE & RMSE \\ & (km/h) & (\%) & (km/h) & (\%) & (km/h) \\ \hline Historical Average & \multicolumn{4}{c}{Not applicable} & \multicolumn{4}{c}{Not applicable} \\ \hline
ARIMA & 4.41 & 11.96 & 9.13 & 7.52 & 20.73 & 15.47 \\ GCN & 3.72 & 9.05 & 5.95 & 5.24 & 13.08 & 8.99 \\ GRNN & 3.24 & 9.66 & 5.31 & 5.18 & 14.90 & 9.00 \\ GA-GRU (input) & 4.99 & 15.30 & 7.72 & 13.09 & 38.09 & 16.83 \\ GA-GRU (hidden) & 3.38 & 10.30 & 5.35 & 5.00 & 15.31 & 8.55 \\ GA-GRU (both) & 4.08 & 12.55 & 6.32 & 13.48 & 47.70 & 16.95 \\ AGRNN (input) & 3.28 & 9.71 & 5.46 & 4.90 & 14.85 & 8.63 \\ AGRNN (hidden) & 3.92 & 9.14 & 6.00 & 4.99 & 12.13 & 8.12 \\ AGRNN (both) & 3.53 & 8.69 & 6.17 & 3.84 & **9.52** & 6.86 \\ AGCRN & 3.41 & **7.70** & 7.38 & 4.28 & 10.00 & 8.84 \\ Random forests & **2.77** & 8.02 & **4.73** & **3.60** & 10.31 & **6.51** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Performance comparison of traffic speed prediction models for simulated highway dataset
patterns at each node and improve accuracy. Moreover, the AGRNN (input) model is competitive with the other GCNNs according to all error metrics, which signifies that the propagation of hidden states among nodes between consecutive RNN time steps is not essential for achieving accurate prediction. It should be noted that although our experiments keep the other components of the model constant among GRNN, GA-GRUs, and AGRNNs, the findings of this work may not generalize to other architectures, such as multiple graph convolutional layers or different RNN configurations.
The AGRNN with input-only attention is an extreme version of separating GCNN weights to create independent models; meanwhile, the factorized weights in AGCRN can be viewed as a tradeoff between having completely shared weights of GCNNs and completely independent weights. The superior results of the AGCRN model in our experiments suggest that this tradeoff approach is worthy of further investigation.
The experimental results also show that the random forests model exhibits the lowest error on the simulated datasets, while being narrowly outperformed by the AGCRN model on the California datasets. This supports the hypothesis that short-term traffic prediction can be framed as a regression problem and further highlights that sharing model weights and latent states is inconsequential for attaining model accuracy. However, the random forests model contains by far the largest number of parameters across all experiments. Overall, the results suggest that while GCNNs can be more compact, random forests regression remains competitive and should not be overlooked in short-term traffic prediction.
## V Conclusion and Future Work
This work classifies and evaluates the variants of GCNNs for short-term traffic prediction based on their components. We found that incorporating attention, matrix factorization, and location-specific model weights are beneficial to overall performance. However, the traditional random forest regression method cannot be ignored for its excellent performance compared to that of the latest GCNN models. In terms
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c}{PeMS04} & \multicolumn{3}{c}{PeMS08} \\ \hline Model & MAE & MAPE & RMSE & MAE & MAPE & RMSE \\ & & (\%) & & & (\%) & \\ \hline Historical Average & 24.99 & 16.07 & 41.84 & 21.21 & 13.72 & 36.73 \\ ARIMA & 27.53 & 20.55 & 42.44 & 22.67 & 14.92 & 35.08 \\ GCN & 23.72 & 17.92 & 37.47 & 21.09 & 14.42 & 31.45 \\ GRNN & 30.66 & 25.02 & 46.06 & 26.09 & 21.92 & 39.07 \\ GA-GRU (input) & 32.78 & 30.24 & 46.65 & 27.73 & 42.20 & 38.78 \\ GA-GRU (hidden) & 24.73 & 17.17 & 38.18 & 19.89 & 14.01 & 30.73 \\ GA-GRU (both) & 29.35 & 25.23 & 42.74 & 23.53 & 19.96 & 34.46 \\ AGRNN (input) & 24.01 & 17.37 & 37.82 & 21.04 & 14.57 & 31.19 \\ AGRNN (hidden) & 22.97 & 16.32 & 37.25 & 20.31 & 13.48 & 30.80 \\ AGRNN (both) & 23.77 & 16.54 & 38.53 & 22.04 & 14.33 & 34.16 \\ AGCRN & **19.86** & **13.06** & **32.57** & **16.08** & **10.40** & **25.55** \\ Random forests & & & & & & \\ \hline \hline \end{tabular}
\end{table} TABLE V: Performance comparison of traffic flow prediction models on PeMS04 and PeMS08 dataset
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c}{5-minute horizon} & \multicolumn{3}{c}{15-minute horizon} \\ \hline Model & MAE & MAPE & RMSE & MAE & MAPE & RMSE \\ & (km/h) & (\%) & (km/h) & (km/h) & (\%) & (km/h) \\ \hline Historical Average & \multicolumn{3}{c}{Not applicable} & \multicolumn{3}{c}{Not applicable} \\ \hline ARIMA & 4.87 & 30.98 & 7.73 & 5.43 & 35.12 & 8.58 \\ GCN & 4.30 & 24.33 & 6.53 & 4.62 & 25.21 & 6.94 \\ GRNN & 4.57 & 24.88 & 7.06 & 4.94 & 29.32 & 7.78 \\ GA-GRU (input) & 4.85 & 31.06 & 7.65 & 5.31 & 33.45 & 8.39 \\ GA-GRU (hidden) & 4.24 & 24.04 & 6.91 & 4.83 & 28.48 & 7.98 \\ GA-GRU (both) & 4.62 & 29.27 & 7.40 & 4.96 & 31.18 & 7.90 \\ AGRNN (input) & 4.11 & 22.49 & 6.88 & 4.31 & 24.44 & 7.21 \\ AGRNN (hidden) & 4.23 & 24.55 & 6.71 & 4.43 & 25.77 & 7.02 \\ AGRNN (both) & 4.08 & 22.35 & 6.62 & 4.27 & **23.88** & 7.02 \\ AGCRN & 4.02 & 23.59 & 7.02 & 4.49 & 28.40 & 7.78 \\ Random forests & **3.89** & **22.18** & **6.21** & **4.17** & 24.37 & **6.66** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Performance comparison of traffic speed prediction models for simulated urban dataset
\begin{table}
\begin{tabular}{c c} \hline \hline Model & Complexity \\ \hline Historical Average & 48960 \\ ARIMA & 1035 \\ GCN & 66587944 \\ AGCRN & 150112 \\ GRNN & 3393 \\ GA-GRU (input) & 1025 \\ GA-GRU (hidden) & 1025 \\ GA-GRU (both) & 1121 \\ AGRNN (input) & 8602510 \\ AGRNN (hidden) & 605710 \\ AGRNN (both) & 634610 \\ Random forests & 171195284 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Model complexity measured in number of parameters
of future work, we can expand this comparison to other components and encompass a larger selection of recent works. Furthermore, we plan to harness these findings to improve existing deep learning traffic prediction methods.
|
2304.01086 | Self-building Neural Networks | During the first part of life, the brain develops while it learns through a
process called synaptogenesis. The neurons, growing and interacting with each
other, create synapses. However, eventually the brain prunes those synapses.
While previous work focused on learning and pruning independently, in this work
we propose a biologically plausible model that, thanks to a combination of
Hebbian learning and pruning, aims to simulate the synaptogenesis process. In
this way, while learning how to solve the task, the agent translates its
experience into a particular network structure. Namely, the network structure
builds itself during the execution of the task. We call this approach
Self-building Neural Network (SBNN). We compare our proposed SBNN with
traditional neural networks (NNs) over three classical control tasks from
OpenAI. The results show that our model performs generally better than
traditional NNs. Moreover, we observe that the performance decay while
increasing the pruning rate is smaller in our model than with NNs. Finally, we
perform a validation test, testing the models over tasks unseen during the
learning phase. In this case, the results show that SBNNs can adapt to new
tasks better than the traditional NNs, especially when over $80\%$ of the
weights are pruned. | Andrea Ferigo, Giovanni Iacca | 2023-04-03T15:42:28Z | http://arxiv.org/abs/2304.01086v1 | # Self-building Neural Networks
###### Abstract.
During the first part of life, the brain develops while it learns through a process called synaptogenesis. The neurons, growing and interacting with each other, create synapses. However, eventually the brain prunes those synapses. While previous work focused on learning and pruning independently, in this work we propose a biologically plausible model that, thanks to a combination of Hebbian learning and pruning, aims to simulate the synaptogenesis process. In this way, while learning how to solve the task, the agent translates its experience into a particular network structure. Namely, the network structure _builds itself_ during the execution of the task. We call this approach _Self-building Neural Network_ (SBNN). We compare our proposed SBNN with traditional neural networks (NNs) over three classical control tasks from OpenAI. The results show that our model performs generally better than traditional NNs. Moreover, we observe that the performance decay while increasing the pruning rate is smaller in our model than with NNs. Finally, we perform a validation test, testing the models over tasks unseen during the learning phase. In this case, the results show that SBNNs can adapt to new tasks better than the traditional NNs, especially when over 80% of the weights are pruned.
Neural networks, plasticity, pruning, neuroevolution
can effectively be seen as a form of implicit pruning, especially in cases where synaptic connections can saturate their values in such a way as to obtain a quasi-binary mask on the weights (Han et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018). However, in these works, no explicit pruning mechanism has been employed.
## 3. Methods
As introduced before, we aim to construct a network that can simulate the synaptogenesis process. In this section, we introduce the structure of this network and its behavior. Then, we briefly describe the optimization process and the classical control tasks used in our experimentation. We use these environments to measure the performance of the proposed SBNNs and to show how they can change their structure during (and depending on) the task at hand.
### Hebbian learning
Hebbian learning is a plasticity model that allows an NN to change its weights during the execution of a task. Importantly, this change is agnostic w.r.t. the reward for the task, because it is based only on the local knowledge of each synapse, in particular the activation of the pre-synaptic and post-synaptic neurons. The ABCD model used in this work updates the weights after each forward pass of the network using the following rule:
\[w_{i,j}=w_{i,j}+\eta(Aa_{i}+Ba_{j}+Ca_{i}a_{j}+D)\]
where \(a_{i}\) is the pre-synaptic activation value, \(a_{j}\) is the post-synaptic activation value, and \(w_{i,j}\) is the weight on the connection between the two neurons. \(A\), \(B\), \(C\), and \(D\) are parameters to optimize.
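A minimal sketch of this update follows (the function and variable names are our own; since in the SBNN each connection carries its own A, B, C, D, the arguments can equally be scalars or per-connection NumPy arrays):

```python
def abcd_update(w, a_pre, a_post, A, B, C, D, eta):
    """ABCD Hebbian rule, applied after each forward pass of the network."""
    return w + eta * (A * a_pre + B * a_post + C * a_pre * a_post + D)
```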
### Pruning mechanism
The pruning mechanism aims to find a subnetwork of an NN that performs as well as (or better than) the original network. This is done by removing connections according to a given strategy. In this work, we use the global magnitude pruning algorithm (Han et al., 2015; Wang et al., 2016), which simply removes all the connections whose weights are smaller, in absolute value, than a threshold defined as the \(pr\)-th percentile, where \(pr\) is the desired pruning rate (i.e., the percentage of connections to remove).
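A sketch of global magnitude pruning in NumPy (the flat weight array and the names are our own assumptions):

```python
import numpy as np

def global_magnitude_prune(weights, pr):
    """Zero out the pr% of connections with the smallest |weight|.

    weights: flat array of all connection weights; pr: pruning rate in
    percent. Returns the pruned weights and the boolean keep-mask.
    """
    threshold = np.percentile(np.abs(weights), pr)  # pr-th percentile
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```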
### Self-building Neural Network
During synaptogenesis, neurons explore the extracellular space, assembling as many connections as possible with other neurons (Wang et al., 2016; Wang et al., 2017; Wang et al., 2018). Here, we aim to simulate this mechanism by allowing the possibility that any two neurons could directly connect.
Our model works as follows. We start from an NN composed of \(I\) inputs, \(H\) hidden nodes, and \(O\) outputs. At the first episode of the task, the \(I\) inputs are connected to all the \(H\) hidden nodes and the \(O\) outputs. In turn, the \(H\) hidden nodes are fully connected with each other (excluding self-loops) and with all the \(O\) outputs. In total, the number of connections \(C\), expressed as a function of \(H\) (which is the only hyperparameter, as \(I\) and \(O\) depend on the task), is \(C(H)=H^{2}+H\times(I+O)+I\times O\). Overall, the internal structure of the network resembles that of Boltzmann machines (Han et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018), where the hidden nodes are fully connected. However, we also directly connect the inputs to the outputs, since in some cases the computational power of the hidden nodes may not be necessary.
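As a quick worked example of this formula (the task sizes below are hypothetical):

```python
def num_connections(H, I, O):
    """Initial number of connections: C(H) = H^2 + H(I + O) + I*O."""
    return H ** 2 + H * (I + O) + I * O

# e.g., a control task with I = 4 inputs and O = 2 outputs, using H = 8
# hidden nodes, starts from 64 + 48 + 8 = 120 zero-initialized weights.
print(num_connections(8, 4, 2))  # 120
```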
We initialize the weights of all these connections to zero. In this way, we intend to simulate the initial condition where no connections between neurons exist. Then, _within_ each episode, the Hebbian procedure will update the weights based on the ABCD rule described in Section 3.1. Note that, in our model, each connection in the network has its own ABCD rule with its corresponding parameters. In this way, the network can arrange itself based on the experience that the agent accumulates during the task. We use a Hebbian rule for each connection because, starting from a condition where all the weights are 0, using a single Hebbian rule could lead all the weights to change in the same direction, which in turn would make learning ineffective.
The second step of synaptogenesis is the process that prunes the synapses, as described in Section 3.2. As we will describe later, differently from Hebbian learning, pruning occurs _across_ episodes. Note that, as soon as pruning starts, Hebbian learning is stopped. Figure 2 summarizes the pruning procedure, showing the state of the initial network and its development. Formally, we can analyze the network before and after pruning. In particular, before pruning, the hidden nodes are fully connected with each other and all the inputs are connected with all the hidden nodes. For this reason, it is not possible to define a fixed activation order. Hence, we maintain the overall order of activation: firstly, the inputs, then the hidden nodes, and then the outputs. However, for the hidden nodes, we randomly select the activation order.
After pruning, the remaining connections define the network. Differently from before, in this phase we can define an activation order more easily because the pruning mechanism naturally resolves most cycles, especially if the ratio of connections removed is high enough. Hence, to find the activation order, we can perform a topological sort of the underlying graph \(G(V,E)\), where \(V\) is the set of nodes (i.e., the neurons) and \(E\) is the set of connections. If, during the topological sort, we find a cycle, we apply the following procedure. Indicating with \(N_{c}\) the subset of \(V\) that contains all the nodes in the cycle, first we remove all the nodes in \(N_{c}\) from \(V\) and replace them with a _fake_ node, \(f\). Then, indicating with \(E(N_{c})_{incoming}=\{(o,n)\ \forall o\in V\setminus N_{c}\ \wedge\ \forall n\in N_{c}\}\) the subset of \(E\) composed of the connections that terminate in \(N_{c}\) and that do not start from \(N_{c}\), we add to \(E\) the set of connections \(\{(o,f)\ \forall(o,o^{\prime})\in E(N_{c})_{incoming}\ \wedge\ o^{\prime}\in N_{c}\}\). We perform the same operation for the connections outgoing from \(N_{c}\). This procedure, which we apply iteratively and independently for every cycle found in the graph, results in a new graph \(G^{\prime}\) where the cycle \(N_{c}\) is replaced by the fake node \(f\). All the nodes connected to \(N_{c}\) are now connected to \(f\), and \(f\) is connected to all the nodes reached from \(N_{c}\). We store the information that the node \(f\) replaces the \(N_{c}\) nodes in the _cycles_history_ variable and then retry finding a topological order. We repeat this procedure until all the cycles have been replaced and a topological order can be defined. Algorithm 1 illustrates this simplification procedure. Note that the nodes in \(N_{c}\) can also be fake nodes from a previous iteration of the procedure, as illustrated in Figure 1.
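As a rough illustration of this simplification step, the sketch below condenses one cycle at a time into a fake node until the graph becomes acyclic; the names and implementation details are our own and may differ from Algorithm 1:

```python
import itertools

def find_cycle(nodes, edges):
    """Return the set of nodes forming one cycle, or None (iterative DFS)."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    color, parent = {n: 0 for n in nodes}, {}  # 0=unseen, 1=on path, 2=done
    for start in nodes:
        if color[start]:
            continue
        color[start] = 1
        stack = [(start, iter(adj[start]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if color[v] == 0:
                    color[v], parent[v] = 1, u
                    stack.append((v, iter(adj[v])))
                    break
                if color[v] == 1:          # back edge: walk parents back to v
                    cycle, w = {v}, u
                    while w != v:
                        cycle.add(w)
                        w = parent[w]
                    return cycle
            else:
                color[u] = 2
                stack.pop()
    return None

def condense_cycles(nodes, edges):
    """Replace every cycle with a fake node; record the mapping for later."""
    nodes, edges = list(nodes), set(edges)
    cycles_history, fresh = {}, itertools.count()
    while (cycle := find_cycle(nodes, edges)) is not None:
        f = f"fake_{next(fresh)}"
        cycles_history[f] = cycle          # may itself contain earlier fake nodes
        nodes = [n for n in nodes if n not in cycle] + [f]
        edges = {(f if u in cycle else u, f if v in cycle else v)
                 for u, v in edges}
        edges = {(u, v) for u, v in edges if u != v}   # drop the collapsed loop
    return nodes, edges, cycles_history
```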
After calculating the topological order of the network, we can follow it for the activation of the hidden nodes. If we find a fake node during the activation, we retrieve from _cycles_history_ the set of \(N_{c}\) nodes that compose the cycle, and proceed with a random activation order. If a node in \(N_{c}\) is itself a fake node \(f^{\prime}\) covering another cycle, we repeat the procedure, resolving the inner cycle before continuing with the remaining nodes in \(N_{c}\).
Thanks to this process, the network after pruning can have a different structure. We identify three base structures, which can be described as follows. In the first case, the inputs are connected to all the hidden nodes, which in turn are connected to the outputs. This creates an NN with a single hidden layer, see Figure 4(a). In the second case, the pruning process cuts all the connections between the input and hidden nodes. Hence, the inputs are directly connected to the output nodes, creating a zero-layer NN, see Figure 4(b). In the third case, the pruning process removes all the connections between the inputs and a subset of the hidden nodes, but the hidden nodes remain connected, creating an NN with more than one layer. For example, given the hidden nodes \(A\), \(B\), and \(C\), if the inputs are connected only with \(A\) and \(B\), but not with \(C\), which in turn remains connected with \(A\) and \(B\), the resulting NN will have two hidden layers: the first one composed of neurons \(A\) and \(B\); the second one composed of only neuron \(C\), see Figure 4(c).
It is worth noticing that, starting from these three base structures, we can derive more complex structures. For example, each node in the third case can be a _fake_ node, hence "hiding" a set of nodes.
While promising in principle, this model is not free from weak points. First of all, the number of weights in the SBNN increases quadratically with the number of hidden nodes, as each hidden node is fully connected with all the other hidden nodes. Moreover, each connection is associated with an ABCD rule with its corresponding 4 parameters. Therefore, the total number of parameters to optimize is orders of magnitude greater than that of a Feed Forward Neural Network (FFNN) with the same number of hidden nodes. Figure 3 shows a comparison in terms of the number of parameters to optimize between the proposed SBNN and two other NN-based models, namely an FFNN with and without Hebbian learning.
Secondly, before pruning, the hidden nodes compose a single, fully connected subnetwork, in which the activation order can influence the network's output. For example, if we consider a fixed activation order, the first hidden node will receive as inputs the values of the inputs at the current timestep, while the other hidden nodes will receive as inputs the values from the previous activation. On the other hand, as discussed above, after pruning we can find the topological order for the network by visiting the underlying graph. Still, in the presence of a subset of hidden nodes that are linked together, we cannot determine a unique activation order.
### OpenAI tasks
To measure the performance of the proposed SBNN, we use three classical control tasks from OpenAI (Dong et al., 2018), namely _Cart Pole_, _Mountain Car_, and _Lunar Lander_.
In Cart Pole, the agent has to move a cart to maintain a pole in equilibrium. The agent can push the cart in both directions (move left/right), and it receives a positive reward for each timestep in which the pole is in equilibrium. The episode ends after 500 timesteps, or if the angle of the pole is outside the range \(\pm 12^{\circ}\).
In Mountain Car, the agent has to drive a car from a valley to the top of a mountain. The agent has to build momentum to increase its velocity, thanks to another hill positioned before it. The agent can perform three actions (accelerate left, accelerate right, or do not accelerate), and receives a negative reward at each timestep until it reaches the top of the mountain. The episode ends if the car reaches the top of the hill, or after 200 timesteps.
Finally, in Lunar Lander, the agent has to land a spaceship. The agent has to reduce the terminal velocity of the spaceship while compensating for the lateral wind. Hence, the agent can perform four actions: two to control the lateral (left/right) engines, one to activate the main engine, and one that does not perform any action. The agent increases the received reward if it lands in the designated
Figure 1. A minimal example of the simplification procedure described in Algorithm 1. The base graph has two cycles: one composed of nodes \(A\) and \(B\), and one composed of nodes \(B\) and \(C\). The procedure identifies first the cycle \(B,C\), circled in red, and replaces it with the red fake node \(F_{1}\), which has incoming connections from \(A\) and \(I\) and outgoing connections to \(A\) and \(O\). Then, it finds the cycle \(F_{1},A\), circled in blue, and replaces it with the blue fake node \(F_{2}\). Eventually, a directed acyclic graph is obtained, with a single fake hidden node \(F_{2}\), which hides the cycle \(A,F_{1}\), where \(F_{1}\) is another fake node hiding the nodes \(B\) and \(C\).
area at a lower speed and does not tilt. The episode ends if the spaceship lands, or if its \(x\) position is greater than 1.
The three tasks above are solved if the average reward over 100 episodes is greater than a predefined threshold, which is 475 for Cart Pole, \(-110\) for Mountain Car, and 200 for Lunar Lander.
## 4. Results
In this section, we analyze the performance and behavior of the SBNN in comparison with an FFNN to which we apply the same pruning mechanism used in the SBNN, but before the fitness evaluation. Note that, in our implementation, we use \(tanh\) as the activation function, both on the hidden nodes and on the input/output nodes, for both the SBNN and the FFNN. Concerning the fitness evaluation, we measure the performance of the agent as the average reward over the 100 episodes seen during training. For the FFNN, as the pruning process happens before the first episode, the fitness is by construction measured after pruning. On the contrary, as for the SBNN the pruning process happens during the life of the agent (i.e., across episodes), in this case the fitness contains two components, one before and one after pruning (which are then averaged). We divide our experiments into three parts, to answer three different research questions:
1. What is the performance of the SBNN? What are the main hyperparameters of this model that affect the performance?
2. Is there any structural difference between the networks produced by the SBNN and an FFNN?
3. Are SBNNs able to generalize over the different tasks?
To answer these questions, we perform a campaign of simulations varying the three main hyperparameters of the SBNN: the number of hidden nodes \(hn\), the pruning rate \(pr\), and the pruning time \(pt\), the latter indicating when pruning is applied. To calculate the pruning time, we consider the number of episodes in the task, i.e., a pruning time of 10 means that pruning happens after the agent completes the 10-th episode.
For each combination of these parameters, we perform 30 independent evolutionary processes. To optimize the parameters of the network (i.e., the weights for the FFNN, or the parameters of the ABCD rules for the SBNN), we use the well-known Covariance Matrix Adaptation Evolution Strategies (CMA-ES) (Han et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). We stop the evolution after the generation of a fixed number of 2000 individuals for Lunar Lander and Cart Pole, and 4000 for Mountain Car. In all cases, we set \(\lambda=4+\lfloor 3\ln(|\mathbf{p}|)\rfloor\) and \(\mu=\frac{\lambda}{2}\), where \(\mathbf{p}\) is the vector of parameters to optimize. Table 1 summarizes the configurations tested. Note that we omit the results obtained on the Cart Pole task as there were no significant differences between the FFNN and the SBNN: in fact, all the individuals using both models solved the task, which is comparatively simpler than the other two. We make our code publicly available at [https://github.com/ndr09/SBM](https://github.com/ndr09/SBM).
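For reference, a minimal optimization loop with the `cma` Python package might look like the sketch below (the initial mean of zero and step size of 0.5 are our own assumptions; CMA-ES minimizes, so the fitness is negated):

```python
import cma
import numpy as np

def evolve(fitness_fn, num_params, budget=2000):
    """Optimize the flat parameter vector (ABCD rules for the SBNN,
    or weights for the FFNN) with CMA-ES."""
    popsize = 4 + int(3 * np.log(num_params))   # lambda = 4 + floor(3 ln |p|)
    es = cma.CMAEvolutionStrategy(np.zeros(num_params), 0.5,
                                  {"popsize": popsize})
    evaluated = 0
    while evaluated < budget:
        candidates = es.ask()                   # sample lambda offspring
        es.tell(candidates, [-fitness_fn(c) for c in candidates])
        evaluated += len(candidates)
    return es.result.xbest                      # best parameters found
```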
### RQ1: Performance
Concerning the performance of the SBNN, we aim to evaluate how it compares with that of an FFNN. Since the SBNN has more connections than the FFNN given the same number of hidden nodes, we make two comparisons: one comparing the results of the two
Figure 4. Pruning can create different structures in the SBNN. In Figure 4(a), an NN with one layer is created. In Figure 4(b), pruning cuts all the connections to the inner nodes, reducing the NN only to the input-output connections. Finally, in Figure 4(c), pruning results in the creation of two hidden layers.
Figure 3. Number of parameters to optimize with respect to the number of hidden nodes. For the (Hebbian) FFNN, we consider an NN with two hidden layers with the same number of neurons indicated on the x-axis. In all cases, a single input and a single output are considered. The FFNN with Hebbian learning uses a different ABCD rule for each connection.
Figure 2. Scheme of the SBNN over the episodes. Initially, we set all the connections to 0 (red); then, during the task, Hebbian plasticity changes the weights, leading to the second NN, where different thicknesses indicate different weights. At a certain time, the pruning mechanism cuts the weakest connections, resulting in the final structure of the NN.
models given the same number of hidden nodes, and one given the same total number of connections after pruning. In the following, we start with the Mountain Car task and then move to describe the results for the Lunar Lander one.
Figure 5 shows the results for the Mountain Car environment. The upper and lower rows relate, respectively, to networks with 3 and 4 hidden nodes (for the FFNN, a single hidden layer with that many nodes). The left and right columns present, respectively, the results with a pruning rate of 40% and 60%. In each subfigure, we plot the average results of the best individual over 30 independent runs. The first three boxplots indicate the results of the SBNN with different pruning times, namely 10, 5, and 1, from left to right. The last boxplot shows the baseline results of the FFNN. In Mountain Car, the results indicate that the SBNN reaches similar or better performance with respect to an FFNN with the same pruning rate.
Interestingly, we observe a clear trend for the pruning time, i.e., the performance increases as the value of \(pt\) decreases, regardless of the pruning rate and the number of hidden nodes. Hence, we can conclude that, after the first episode, the agent has already received enough experience (information) to build the network.
To understand the effect of pruning on the performance, we perform an additional analysis, comparing the average reward before and after pruning. For instance, considering \(pt=10\), we measure separately the average reward until the 10-th episode, i.e., before pruning, and the average reward after the 10-th episode, i.e., the post-pruning one. Based on this procedure, we observe that, thanks to pruning, the performance receives a \(7-13\%\) boost on the Mountain Car task.
Figure 6 presents the results for the Lunar Lander environment. For this environment, we consider hidden nodes ranging from 5 to 9, because of the greater complexity of the task. Here, the results are shown differently from Figure 5: in particular, we plot the median rewards for the best individuals of each evolutionary run, varying the pruning rate while keeping the pruning time and the number of hidden nodes fixed. In this way, we highlight two points. The first one is that the performances of the SBNNs are in most cases equal to or better than the ones obtained by the FFNN for the same pruning rate. The second point is that, while increasing the pruning rate, the drop in performance for the SBNN is lower than that observed with the FFNN baseline. We can also observe that this trend is maintained when comparing solutions with a comparable number of connections. For example, the SBNN with 5 hidden nodes and the FFNN with 9 hidden nodes have a similar number of connections (respectively, 117 and 108). Moreover, we can see the same trend observed in Mountain Car, i.e., that the best results are achieved with \(pt=1\). Hence, also in this task it appears that a single episode contains enough information to learn about the prunable connections. These observations suggest that, at least in the tested tasks, the SBNN is effectively capable of exploiting the network structure (and the information therein) better than the FFNN.
Also in this case, we perform the same analysis as before, dividing the results before and after pruning and observing the average results. As in Mountain Car, also in Lunar Lander we can see an increase in performance after pruning, in this case between 6% and 100%. This improvement indicates, once again, the importance of pruning and its complementary effect with respect to Hebbian learning.
Finally, we compare the results of the networks based on the total number of connections after pruning. In Figure 7, we compare the FFNN with \(pr=20\%\) with the SBNN with \(pr=60\%\) (with these values, in fact, the FFNN and the SBNN have a similar number of connections after pruning). We can observe that the SBNN almost always reaches comparable or slightly better performance with respect to the FFNN. However, in the case where the FFNN performs better than the SBNN (i.e., with 9 nodes), the difference is not statistically significant (Wilcoxon rank-sum test, \(\alpha=0.01\)).
### RQ2: Difference between SBNN and FFNN
In this section, we analyze the structural difference between the networks found with the SBNN and the FFNN. For this purpose, we characterize the networks based on the number of _working connections_, i.e., the connections that link inputs to outputs after pruning. To calculate these connections, we remove the synapses that lead to sink nodes or come from source nodes. While a sink is a node
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameter & Cart Pole & Mountain Car & Lunar Lander \\ \hline Fitness evaluation & 2000 & 4000 & 2000 \\ Hidden nodes & 3, 4 & 3, 4 & 5, 6, 7, 8, 9 \\ Pruning time & 5, 10 & 1, 5, 10 & 1, 5, 10, 15, 20 \\ Pruning rate & 40, 60 & 40, 60 & 20, 40, 60, 80 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Parameter configuration used for each task considered in the RQ1 experiments. For RQ2 and RQ3, we use a representative subset of these configurations.
Figure 5. Results on the Mountain Car environment. The y-axis shows the average reward obtained during training from the best agent found in each of 30 runs. The dashed line indicates the solving threshold for the environment. For the SBNN, the results indicate a clear trend where the performance increases while \(pt\) decreases.
with only incoming connections that is not an output node, a source is a node with only outgoing connections that is not an input node.
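A small sketch of this computation, representing the network as a set of directed edges (names are ours):

```python
def working_connections(edges, input_nodes, output_nodes):
    """Iteratively drop synapses leading into sinks (non-output nodes with no
    outgoing edges) or leaving sources (non-input nodes with no incoming
    edges); the surviving edges are the working connections."""
    edges = set(edges)
    while True:
        tails = {u for u, _ in edges}  # nodes that still have outgoing edges
        heads = {v for _, v in edges}  # nodes that still have incoming edges
        dead = {(u, v) for u, v in edges
                if (v not in tails and v not in output_nodes)   # v is a sink
                or (u not in heads and u not in input_nodes)}   # u is a source
        if not dead:
            return edges
        edges -= dead
```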
Figure 8 and Figure 9 show the distribution of working connections for a representative subset of the configurations presented in the previous section, considering the best solution (one per evolutionary run) obtained for each considered configuration. On the x-axis, we indicate the percentage of remaining working connections (after pruning) with respect to the total number of synapses, grouped every 10%, while on the y-axis we indicate how many networks (out of 30, one per run) have that number of connections. For example, a point at \(x=40\%\), \(y=0.5\) means that in 50% of the runs (i.e., 15 out of 30) the networks after pruning have a number of working connections in the range \((30\%,40\%]\).
In both figures, we can observe a quite clear pattern: all the FFNN configurations use the majority of the connections available. On the other hand, the distributions of working connections for the SBNN have two peaks: the first one occurs between 10% and 20% for the Mountain Car task and between 20% and 40% for the Lunar Lander one; the second peak is in common between the two tasks at around 90%. We visually analyzed all the networks and discovered that the SBNNs that compose the first peak have a structure like the one shown in Figure 4b, where all the hidden nodes are disconnected. Interestingly, the percentage of this kind of structures increases when \(pt\) increases, as the first peak is higher for higher values of \(pt\). This suggests that the later pruning occurs, the more probable it is that connections to the hidden nodes are pruned, thus leaving only input-output connections. Our intuition is that this form of simplification somehow correlates with the complexity needed to solve the task.
Concerning the total number of connections, we observe that in the Lunar Lander environment SBNNs maintain between 10 and 30 working connections for pruning rates higher than 20%. This range appears independent of the pruning time and the number of hidden nodes available. On the contrary, FFNNs use the majority of the connections available, as for Mountain Car. Hence, in this case the number of working connections is strongly dependent on the number of hidden nodes and the pruning rate, resulting in a total number of working connections between 10 and 64. The fact that the number of working connections varies is especially relevant when comparing the SBNN and the FFNN: for example, on the Lunar Lander task, the FFNN with 9 hidden nodes and a 40% pruning rate uses all the 64 connections available, while the SBNN
Figure 6: Median reward on the Lunar Lander task for different numbers of hidden nodes. The x-axis indicates the pruning rate, while the different lines refer to different values of pruning time. The red dashed line indicates the solving threshold. The results show that while increasing the pruning rate, the performance drop for the SBNN is lower than for the FFNN.
Figure 7: Average reward on the Lunar Lander task for an SBNN with a pruning rate of \(60\%\) and an FFNN with a pruning rate of \(20\%\). The x-axis indicates the number of hidden nodes (note that in this setting the two models have a similar number of total connections after the pruning process). The results show that the SBNN reaches similar or slightly better performance with respect to the FFNN. For 9 hidden nodes, the FFNN seems to perform slightly better, but the difference is not statistically significant (Wilcoxon rank-sum test, \(\alpha=0.01\)).
solves the task and obtains better performance using on average only \(20\) connections, see Figure 6.
In Figure 10, we show two SBNNs (after pruning) trained on the Mountain Car task. These networks are of two different kinds: one where all the hidden nodes have been removed, which uses only two actions (accelerate left and accelerate right); and one that uses \(2\) out of the \(4\) hidden nodes available, and all three available actions.
### RQ3: Generalization
In this section, we evaluate the generalization capabilities of the SBNN by testing the best agent (found after an evolution process on a given task) on another, unseen task.
In particular, we test the agents trained on Lunar Lander on the Cart Pole and Mountain Car tasks. We restrict the validation to this case, as the Lunar Lander environment is the only one with enough input and output nodes to perform the other two tasks. In fact, the different tasks have different input and action spaces.
To perform this analysis, we remap inputs and outputs from the validation task to the relative inputs in the Lunar Lander environment. For example, if in the Lunar Lander environment the first input is the x-position, we map the observation of the position from the validation task (i.e., the x-position of the cart or the car, respectively for Cart Pole and Mountain Car), to the first input. We set all the unused inputs to \(0\). Regarding the output, we consider only the outputs present in the validation task. For example, considering the validation task as Cart Pole, the only actions available are _move left_ and _move right_. Hence, we consider only the outputs that control the left and right engines in Lunar Lander.
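As an illustration, the remapping can be implemented as the following sketch (the index maps are hypothetical and are chosen by matching the semantics of each observation and action):

```python
import numpy as np

def remap_observation(obs, obs_map, lander_obs_dim=8):
    """Place the validation-task observation at the matching Lunar Lander inputs."""
    lander_obs = np.zeros(lander_obs_dim)   # unused inputs stay at 0
    lander_obs[obs_map] = obs
    return lander_obs

def remap_action(output_activations, act_map):
    """Consider only the Lunar Lander outputs that exist in the validation task."""
    return int(np.argmax(np.asarray(output_activations)[act_map]))

# Hypothetical example for Cart Pole: the cart x-position goes to Lunar
# Lander input 0, and the two actions map to the lateral-engine outputs.
# obs_map = [0, ...]; act_map = [left_engine_idx, right_engine_idx]
```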
Figure 12 shows the performances in the validation tasks for a subset of the configurations (statistical significance assessed with Wilcoxon rank-sum test, \(\alpha=0.05\)). On the Cart Pole task, the results
Figure 8. Distribution of the number of connections after pruning the network on the Mountain Car task. On the x-axis, we show the percentage of working connections, which are connections that do not lead to a sink node or are outgoing from a source node, grouped every \(10\%\). On the y-axis, we show how many networks (out of \(30\), one per run) have that percentage of working connections. The results show that, for a pruning rate higher than \(20\%\), FFNN tends to use all the available connections, while SBNN has two peaks: one around \(10-20\%\), and one at \(80-90\%\).
Figure 10. Structure of two selected SBNNs obtained on the Mountain Car task. On the left, an SBNN that uses \(2\) out of \(4\) hidden nodes and a \(2\)-layer structure. On the right, an SBNN that uses only two connections.
Figure 9. Distribution of the number of connections after pruning the network on the Lunar Lander task. On the x-axis, we show the percentage of working connections, which are connections that do not lead to a sink node or are outgoing from a source node, grouped every \(10\%\). On the y-axis, we show how many networks (out of \(30\), one per run) have that percentage of working connections. The results show that the FFNN tends to use all the available connections, while the SBNN has two peaks: one around \(20-40\%\), and one at \(80-90\%\).
of the SBNN are similar to or better than the ones of the FFNN. In the Mountain Car environment, the SBNN performs slightly worse than the FFNN for \(pr=20\%\), although the differences are not always statistically significant; for \(pr=80\%\), the SBNN shows better performance, with a higher first quartile (indicating, once again, that the SBNN model effectively learns to use the network structure).
Finally, with these validation experiments, we can observe how the task affects the network structure. Figure 11 shows the network structure of the same agent, which can solve both the Lunar Lander and the Cart Pole task after the pruning process. We can observe that the networks differ in the number of connections used and in the neurons that those synapses connect.
## 5. Conclusions
In this work, we took inspiration from the synaptogenesis process occurring in natural brains to propose a new learning model that combines both plasticity and pruning. At the beginning of the first episode of the task, all the neurons are connected with each other and initialized to 0 (i.e., the connections exist, but they are initially deactivated). Then, within each episode, a plasticity model based on Hebbian learning grows those synapses, activating the corresponding connections. Eventually, during the life of the agent (i.e. at a predetermined episode), those connections are pruned through a global magnitude pruning algorithm and Hebbian learning is stopped. We called this model Self-building Neural Network (SBNN), as it changes its structure based on the experience of the agent during the episodes of the task.
We tested our model on three classical control tasks from OpenAI, namely Cart Pole, Mountain Car, and Lunar Lander. In our experiments, we varied the three main parameters of the model, affecting respectively _when_ to prune, _how much_ to prune, and the number of hidden nodes. We showed that, in general, the SBNN reaches better performance than the FFNN, and that it can adapt better to unseen tasks. Furthermore, we assessed the importance of the model's parameters, in particular regarding when and how much to prune. Finally, we highlighted how the same agent reorganizes its brain differently, based on the task, and how it can remove unnecessary complexity from the brain, given enough time.
In the future, we plan to develop a system for automatically deciding how much and when to prune based on the information flow in the network. In addition, we aim to tackle more complex tasks, such as the control of soft robots, where we can test the proposed SBNN on larger input and action spaces. We also plan to address the two limitations indicated in Section 3. For the number of parameters to optimize, we will test the use of a Hebbian rule for each neuron, rather than one for each connection, also modifying the update rule so that not every connection receives the same update. For the activation order, we plan to use a distance-based approach to resolve the cycles, using the weights as the distance.
|
2303.13814 | Multimodal Adaptive Fusion of Face and Gait Features using Keyless
attention based Deep Neural Networks for Human Identification | Biometrics plays a significant role in vision-based surveillance
applications. Soft biometrics such as gait is widely used with face in
surveillance tasks like person recognition and re-identification. Nevertheless,
in practical scenarios, classical fusion techniques respond poorly to changes
in individual users and in the external environment. To this end, we propose a
novel adaptive multi-biometric fusion strategy for the dynamic incorporation of
gait and face biometric cues by leveraging keyless attention deep neural
networks. Various external factors such as viewpoint and distance to the
camera, are investigated in this study. Extensive experiments have shown
superior performance of the proposed model compared with the state-of-the-art
model. | Ashwin Prakash, Thejaswin S, Athira Nambiar, Alexandre Bernardino | 2023-03-24T05:28:35Z | http://arxiv.org/abs/2303.13814v1 | Multimodal Adaptive Fusion of Face and Gait Features using Keyless attention based Deep Neural Networks for Human Identification
###### Abstract
Biometrics plays a significant role in vision-based surveillance applications. Soft biometrics such as gait is widely used with face in surveillance tasks like person recognition and re-identification. Nevertheless, in practical scenarios, classical fusion techniques respond poorly to changes in individual users and in the external environment. To this end, we propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks. Various external factors such as viewpoint and distance to the camera, are investigated in this study. Extensive experiments have shown superior performance of the proposed model compared with the state-of-the-art model.
Soft-biometrics, surveillance, Gait, Face, Adaptive fusion, person identification, Deep Learning, attention models, multimodal fusion.
## I Introduction
Human biometrics refers to the unique intrinsic physical or behavioural traits that allow distinguishing between different individuals, e.g., face, fingerprint, hand geometry, iris, and gait. The use of biometrics helps in various surveillance applications such as access control, human recognition, and re-identification. Single biometric modalities are often affected by practical challenges such as noisy data, lack of distinctiveness, intra/ inter-class variability, error rate, and spoof attacks. A common method to overcome this issue is to combine multiple biometric modalities, known as multimodal biometric fusion.
A critical constraint that any biometric system confronts is the variation in the environment owing to external conditions. This includes user-induced variability, i.e., inherent distinctiveness, pose, distance, and expression, or environment-induced variability, i.e., lighting condition, background noise, and weather conditions [1]. These constraints have not been adequately addressed in the literature on multimodal fusion. For instance, most of the existing works are based on static fusion strategies, wherein the fusion rules are fixed for certain external conditions such as pose/ lighting/ distance or based on manual computations. As a result, when the environment changes, the biometric system performs sub-optimally. To overcome this issue, a novel context-aware adaptive multi-biometric fusion strategy, which can dynamically adapt the fusion rules to external conditions, is proposed in this paper. In particular, the adaptive fusion of gait and face at different viewpoints was investigated using an attention-based deep learning technique.
Face is one of the predominant biometric traits commonly employed in human recognition. Similarly, gait is an important soft biometric commonly used in surveillance applications, because it is unobtrusive and perceivable from a distance [2]. While fusing gait and face, the most influential factors may be the view angle and the distance from the subject to the camera. Notably, gait can be clearly captured in the lateral view, whereas the face can be well captured in the frontal view. Based on this rationale, a novel context-aware adaptive fusion mechanism was designed to assign weights to gait and face biometric cues based on the context. The key notion of the proposed model is that when the person is in a far/lateral view, gait features should gain more priority than the less visible facial cues, whereas when the person is in a near/frontal view, the face should get more importance than the partially occluded gait features.
To facilitate the aforesaid context-aware adaptive fusion strategy, a keyless attention-based deep learning fusion is leveraged in the multimodal biometric fusion framework. As mentioned in [3], keyless attention is a sophisticated and effective technique for better accounting for the sequential character of data without the need for supplementary input, thereby excelling in identifying relationships across modalities. Extensive experiments are conducted via individual biometric-based identification, naive bilinear pooling [4] based multimodal fusion and keyless attention-based adaptive fusion mechanism. Results clearly highlight the superior performance of the proposed model.
The remainder of this paper is organized as follows. Related works on face and gait-based human recognition are detailed in Section 2. Section 3 presents the framework of the proposed context-aware adaptive multibiometric fusion method. The experiments and results are presented in Sections 4 and 5, respectively. Finally, conclusions and future directions are presented in Section 6.
## II Related Work
One of the earliest face recognition systems was discovered in [5] using the manual marking of various facial landmarks. Recognition of faces in images with objects has gained popularity with [6], which introduced the eigenface method. Since then, various other similar techniques, e.g., Linear Discriminant Analysis, to produce Fisherfaces, Gabor, LBP, and PCANet were reported in [7]. Recently, deep learning-based techniques have also gained popularity, e.g. DeepFaces, Facenet, and Blazeface approach human-level performance under unconstrained conditions [7] (DeepFace: 97.35% vs. Human: 97.53%).
Classical gait-based identification approaches use either model-based or appearance-based approaches[2]. The former detects joints/body parts using 2D cameras or depth cameras. For example, [8] applied Hough transform to detect legs in each frame, whereas [9] leveraged Procrustes shape analysis to calculate joint angles of body parts. Gait recognition/re-identification using a Kinect camera has also been proposed in some works [10]. In contrast to model-based approaches, appearance-based approaches use richer information, such as silhouettes of the human body in gait frames, to recognise gaits, e.g., gait energy image (GEI) [11] and GEI-based local multi-scale feature descriptors [12]. Recent deep learning approaches presented advanced techniques, e.g., view-invariant gait recognition using a convolutional neural network GEINet [13], a comprehensive model with both LSTM and residual attention components for cross-view gait recognition [14].
On the fusion of gait and face for human identification, one of the early works [15] proposed a fusion strategy by combining the results of gait and face recognition algorithms based on sequential importance sampling. A probabilistic combination of facial and gait cues was studied in [16]. Yet another work on the adaptive fusion of gait and face is [17] via score-level fusion. All the aforementioned studies leverage either classical machine learning techniques using handcrafted features, static fusion rules, or manual computations. On the contrary, in this work, we present a deep learning technique based on a keyless attention-based adaptive fusion mechanism for human identification, one of its first kind to the best of our knowledge.
## III Multimodal Adaptive Fusion Methodology
The proposed keyless attention-based adaptive fusion of face and gait towards human identification is shown in Fig.1, in which all the symbols are introduced in the following subsections. The proposed framework maps spatio-temporal feature sequences corresponding to gait and face to a single label. First, the video sequence's descriptors of gait and face are extracted from each frame via a _Feature extractor_ module. Further, the _Attention & Fusion_ block is employed to compute the feature importance and adaptively amalgamate them. Finally, the class probabilities are generated by a _classifier_ module using a fully connected (FC) layer, followed by a softmax layer.
### _Gait feature extractor_
Gait recognition involves recognizing a person based on their gait features, i.e., movement patterns [18]. The temporal variation in human silhouettes is considered by calculating the cyclic pattern of movement, commonly referred to as the _gait cycle_. It can be observed that the size of the closed area between the legs and the aspect ratio of the human silhouette alternate periodically in a gait sequence (refer to Fig. 2(a) & 2(b)). Based on this notion, a complete gait cycle is determined by the number of frames between three consecutive local minima (two red points in Fig. 2(b)). The corresponding frames are extracted from the RGB images. This technique of gait cycle computation is applied to every person. Accordingly, the video is divided into an adequate number of frames required for gait feature computation.
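A possible implementation of this gait-cycle extraction, assuming binary silhouette masks and using the bounding-box width/height as the aspect ratio (the exact ratio convention is an assumption on our side), is sketched below:

```python
import numpy as np
from scipy.signal import argrelextrema

def gait_cycle_frames(silhouettes, order=3):
    """Return (start, end) frame indices of one gait cycle, located between
    three consecutive local minima of the silhouette aspect ratio."""
    ratios = []
    for s in silhouettes:                    # s: binary H x W silhouette mask
        ys, xs = np.nonzero(s)
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        ratios.append(width / height)        # bounding-box aspect ratio
    ratios = np.asarray(ratios)
    minima = argrelextrema(ratios, np.less, order=order)[0]
    return minima[0], minima[2]              # requires at least 3 detected minima
```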
The images are preprocessed and converted from RGB to grayscale to facilitate computational efficiency. Further, the extracted frames of gait silhouette images of height \(H\) and width \(W\) are fed into a Convolutional LSTM [19] architecture, as depicted by the gait feature extractor network \(\mathcal{G}\) in Fig. 1, to obtain a gait feature descriptor \(G\). Formally, the gait feature sequence of a video can be represented as \(G=\{g_{1},\cdots,g_{L}\},g_{i}\in\mathbb{R}^{C}\), where \(g_{i}\) denotes the gait feature of frame \(i\), \(C\) denotes the feature dimension, and \(L\) denotes the number of frames.
### _Face feature extractor_
Face recognition involves recognizing a person by their facial features [5]. In our case, since the viewpoint and distance of the person vary significantly across the frames, traditional face detection algorithms that rely on the frontal view do not work well. Hence, facial bounding boxes are initially cropped out of the video frames leveraging the Google Mediapipe human pose detection framework [20]. The framework employs a two-step detector-tracker setup, where the detector locates the pose region-of-interest (ROI) within the frame and the tracker predicts all 33 keypoints from this ROI. In the case of videos, the detector is run only on the first frame, and the ROI of the subsequent images is derived from the pose keypoints of the previous frame. As shown in Fig. 3, from the estimated Mediapipe keypoints, the human face is manually cropped out by fixed measurements with respect to the facial coordinates.
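As an illustrative sketch, the face region can be cut around the facial pose landmarks (indices 0-10 in Mediapipe Pose: nose, eyes, ears, and mouth); note that the relative crop margin below is our own assumption, whereas the paper uses fixed manual measurements:

```python
import cv2
import mediapipe as mp
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=False)  # detector-tracker mode

def crop_face(frame_bgr, margin=0.5):
    """Crop a box around the facial pose keypoints detected by Mediapipe Pose."""
    h, w = frame_bgr.shape[:2]
    result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return None
    pts = np.array([(lm.x * w, lm.y * h)
                    for lm in result.pose_landmarks.landmark[:11]])
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    pad = margin * max(x1 - x0, y1 - y0)     # relative margin around the face
    x0, y0 = int(max(0, x0 - pad)), int(max(0, y0 - pad))
    x1, y1 = int(min(w, x1 + pad)), int(min(h, y1 + pad))
    return frame_bgr[y0:y1, x0:x1]
```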
The cropped face images are preprocessed, converted from RGB to grayscale, and resized to the dimension of \(H\times W\). Further, the images are fed into a Convolutional LSTM [19] architecture to extract a facial feature descriptor \(F\) per person, as depicted by the face feature extractor network \(\mathcal{F}\) in Fig. 1. Formally, the face feature descriptor corresponding to an \(L\)-frame video is represented as \(F=\{f_{1},\cdots,f_{L}\},f_{i}\in\mathbb{R}^{C}\), where \(f_{i}\) represents the facial feature of frame \(i\).
### _Naive fusion of Face and Gait via Bilinear pooling_
As an initial fusion technique, we propose the naive bilinear pooling (BLP) method [4] to fuse features. The method takes in the 3D tensor outputs from the final max-pooling
layers of the face (\(\mathcal{F}\)) and gait (\(\mathcal{G}\)) feature extraction networks. The outputs are further reshaped into the matrix of dimensions \(p\times d\) and are combined via the bilinear pooling method to obtain the fusion result \(Z\), as follows:
\[Z=FG^{T},F\in\mathbb{R}^{p\times d},G\in\mathbb{R}^{p\times d} \tag{1}\]
The matrix \(Z\) is then flattened into a vector and then passed onto the softmax activation function, where it computes the probability for class \(k\) out of \(K\) classes.
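A minimal NumPy sketch of this naive fusion step:

```python
import numpy as np

def bilinear_pool(F, G):
    """Naive bilinear pooling (Eq. 1): combine the reshaped face features F
    and gait features G, both of shape (p, d), into Z = F G^T, flattened
    for the subsequent softmax classifier."""
    Z = F @ G.T            # (p, p) matrix of pairwise feature interactions
    return Z.reshape(-1)   # flattened vector fed to the softmax layer
```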
### _Keyless attention based Adaptive fusion of Face and Gait_
Attention mechanisms are widely used in sequence models, enabling the modelling of dependencies regardless of their location in the input or output sequences [21]. In our case, not every frame in a video helps identify a subject in the same way. In order to estimate the importance weights for each frame, we adapt the attention mechanism. An attention function is a process that takes a query vector and a set of key-value pairs and produces an output. In existing soft attention mechanisms [21], the weight computation is not limited to the feature vectors but also incorporates an additional input, such as the previously hidden state vector of the LSTM or a vector representing a target entity as in [22]. These additional inputs along with the feature vectors, referred to as _key vectors_, help to find the most related weighted average of feature vectors. However, the weights in our work depend only on the feature vectors and do not require any additional input, and are thus named _keyless attention_, synonymous with the work in [3]. In our case, referring to Fig. 1, the gait feature descriptor \(G\) and face feature descriptor \(F\) are further fed onto two attention modules _viz_. the Gait attention block and the Face attention block, respectively. Further, multimodal adaptive weights are computed via the fusion mechanism. Detailed explanations are given below.
#### Iii-D1 **Face Attention**
The facial feature is updated by incorporating the attention mechanism to assign weighted visual elements. Formally, face attention is computed as follows:
\[\bar{f_{i}}=\mathbf{W_{f}}f_{i}+\mathbf{b_{f}} \tag{2}\]
Fig. 1: Overall architecture of the proposed keyless attention-based adaptive fusion of face and gait for person recognition. The face and gait features are encoded by the face and gait feature extraction networks \(\mathcal{F}\) and \(\mathcal{G}\), respectively. The outputs are subsequently weighted using keyless attention. Context-aware adaptive multimodal fusion is then employed to fuse global gait and facial features. Finally, the outputs are passed through the classifier to determine the class (Person ID) of the person.
Fig. 3: Process of obtaining face images from Mediapipe pose estimation.
Fig. 2: (a) Human silhouette taken from a gait video sequence of CASIA-A. (b) Representation of silhouette aspect ratio over the whole video. The marked points in red represent the starting and ending of one gait cycle. (c),(d) & (e) Glimpses from the CASIA-A dataset at angles 0\({}^{\circ}\), 45\({}^{\circ}\), 90\({}^{\circ}\) respectively.
\[\bar{e_{i}}=\bar{u}^{T}\tanh(\bar{f_{i}}) \tag{3}\]
\[\bar{\alpha_{i}}=\frac{\exp(\lambda\bar{e_{i}})}{\sum_{k=1}^{L}\exp(\lambda\bar{e_{k}})} \tag{4}\]
Here, \(\bar{f_{i}}\) is the low-dimension representation of frame \(i\), and \(\mathbf{W_{f}}\) & \(\mathbf{b_{f}}\) are the learnable parameters. The importance weight \(\bar{e_{i}}\) of the element \(f_{i}\) is computed by the inner product between the new representation \(\bar{f_{i}}\) and a learnable vector \(\bar{u}\). The normalized importance weight of the facial feature, \(\bar{\alpha_{i}}\), is calculated using the softmax function, as shown in Eq. (4). \(\lambda\) is a scale factor, ranging between 0 and 1, that ensures that the importance weights are evenly distributed. Nevertheless, it can be observed in Eq. (3) that the \(\tanh\left(\cdot\right)\) non-linearity may not be effective for learning complicated linkages, since \(\tanh\left(x\right)\) is roughly linear for x \(\in\) [-1, 1]. Therefore, inspired by the method in [23], we leverage an effective gated mechanism, as shown in Eq. (5), to formulate a better normalized facial importance weight \(\bar{\alpha_{i}}\).
\[\bar{\alpha_{i}}=\frac{\exp\{\lambda\bar{u}^{T}(\tanh(\bar{f_{i}})\odot sigm(\bar{f_{i}}))\}}{\sum_{k=1}^{L}\exp\{\lambda\bar{u}^{T}(\tanh(\bar{f_{k}})\odot sigm(\bar{f_{k}}))\}} \tag{5}\]
\[\bar{\alpha}=\sum_{i=1}^{L}\bar{\alpha_{i}} \tag{6}\]
where _sigm(\(\cdot\))_ is the sigmoid non-linearity and \(\odot\) is an element-wise multiplication. This new \(\bar{\alpha_{i}}\) is further used to compute the global facial attention weight \(\bar{\alpha}\) by combining the facial importance weights across all \(L\) frames (refer to Eq. (6)).
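A NumPy sketch of this gated keyless attention, under one plausible reading of Eqs. (2) and (5) (returning a weighted feature summary is our interpretation of how the per-frame weights are consumed downstream):

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_keyless_attention(X, W, b, u, lam=1.0):
    """Per-frame gated keyless attention over features X of shape (L, C)."""
    X_proj = X @ W.T + b                       # Eq. (2): learnable projection
    gated = np.tanh(X_proj) * sigm(X_proj)     # gating avoids tanh's
    scores = lam * (gated @ u)                 # near-linear regime
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the L frames
    return weights, (weights[:, None] * X).sum(axis=0)
```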
#### Iii-D2 **Gait Attention**
Analogous to the face modality, the attention mechanism is incorporated in the gait counterpart as well. The global gait attention weight \(\bar{\beta}\) by leveraging the weighted visual elements in the gait stream is computed as follows.
\[\bar{g_{i}}=\mathbf{W_{g}}g_{i}+\mathbf{b_{g}} \tag{7}\]
\[\bar{\beta_{i}}=\frac{\exp\{\lambda\bar{u}^{T}(\tanh(\bar{g_{i}})\odot sigm(\bar{g_{i}}))\}}{\sum_{k=1}^{L}\exp\{\lambda\bar{u}^{T}(\tanh(\bar{g_{k}})\odot sigm(\bar{g_{k}}))\}} \tag{8}\]
\[\bar{\beta}=\sum_{i=1}^{L}\bar{\beta_{i}} \tag{9}\]
#### Iii-D3 **Context-aware Adaptive Fusion**
From Eq. (6) and Eq. (9), we obtain the values of \(\bar{\alpha}\) and \(\bar{\beta}\), which are the global individual attention weights of the face and gait features, respectively. The weighted average of the face and gait features is computed using the adaptive weights:
\[\alpha=\frac{\left\|\bar{\alpha}\right\|}{\left\|\bar{\alpha}\right\|+\left\| \bar{\beta}\right\|} \tag{10}\]
\[\beta=\frac{\left\|\bar{\beta}\right\|}{\left\|\bar{\alpha}\right\|+\left\| \bar{\beta}\right\|} \tag{11}\]
The adaptive fusion is performed by combining the two features multiplied individually by their weighted global attention weights, as follows:
\[\mathbb{Z}=\alpha F+\beta G \tag{12}\]
where \(\mathbb{Z}\) refers to the context-aware adaptively fused feature. \(\mathbb{Z}\) is further passed onto a fully-connected (FC) layer, followed by a softmax function that classifies the feature according to the \(K\) classes provided. The resultant column vector \(R\) is then used to determine the class identifier (_Person ID_) of the subject under consideration for the fused feature \(\mathbb{Z}\) by
\[ID(\mathbb{Z})=argmax(R) \tag{13}\]
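Putting Eqs. (10)-(13) together, a minimal sketch of the adaptive fusion and the final identity decision (the classifier weights below are hypothetical placeholders):

```python
import numpy as np

def adaptive_fusion_predict(F_att, G_att, alpha_bar, beta_bar, W_cls, b_cls):
    """Fuse the attended face/gait features with context-aware adaptive
    weights (Eqs. 10-12) and return the predicted person ID (Eq. 13)."""
    na, nb = np.linalg.norm(alpha_bar), np.linalg.norm(beta_bar)
    alpha, beta = na / (na + nb), nb / (na + nb)   # adaptive weights
    Z = alpha * F_att + beta * G_att               # Eq. (12)
    logits = W_cls @ Z + b_cls                     # fully-connected layer
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over K classes
    return int(np.argmax(probs))                   # Eq. (13): person ID
```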
### _Objective functions_
The model classifier employs categorical cross-entropy loss, also known as Softmax loss. This supervised loss calculates the classification error among \(K\) classes. The number of nodes in the softmax layer depends on the number of identities in the training set. Considering \(t\) and \(w_{k}\) as the target vector and learnable vector respectively, the loss is computed as:
\[Loss=-\sum_{i=1}^{K}t_{i}\log\bigl(softmax(w_{k}^{T}\mathbb{Z})_{i}\bigr) \tag{14}\]
## IV Experimental Setup
**Dataset:** In this work, we use CASIA Gait Dataset A [24], which includes 19139 images of 20 subjects. Each person has 12 image sequences, 4 sequences for each of the three directions, i.e. 0\({}^{\circ}\), 45\({}^{\circ}\), 90\({}^{\circ}\)(Refer Fig. 2(c), 2(d) and 2(e)). Among the 4 sequences per angle, 2 sequences are used for training, and the remaining 2 sequences are used for testing.
**Evaluation protocols:** Standard evaluation metrics like _accuracy_ and _log-loss_ are employed to validate the performance of our model. _Accuracy_ is used to evaluate how well the algorithm is performing for all classes by giving them equal importance, whereas _log loss_ is considered to be a crucial metric that is based on probabilities. Mathematically, log-loss is computed by:
\[log\_loss=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\ln p_{i}+(1-y_{i})\ln(1-p_{i})] \tag{15}\]
where \(N\) is the number of persons, \(y_{i}\) is the observed value, and \(p_{i}\) is the predicted probability.
**Implementation details:** The proposed method is implemented using the TensorFlow framework. During training, video frames corresponding to one gait cycle across the three orientations (0\({}^{\circ}\), 45\({}^{\circ}\), and 90\({}^{\circ}\)) are considered. In this work, the gait cycle corresponds to \(L\) = 24 frames, each with height \(H\) = 128 and width \(W\) = 128. The images are normalized using the RGB mean and standard deviation of ImageNet before passing them to the network. After dimension reduction, the resulting dimension of the gait and face feature descriptors is 588 each. In the experiments, we use Optuna [25], a hyperparameter optimization framework, to obtain the best hyperparameters for our models. We train the network for approximately 1000 iterations. The implementation runs on a machine with a Tesla V100 GPU with 12GB RAM, and training the model takes around 1 hour.
## V Experimental Results
To verify the effectiveness of our proposed approach, various experiments using single feature-based and multimodal fusion-based human identification are carried out. The result summary is shown in Tables I and II. Referring to Table I, the first two rows are single-modality based results, whereas the remaining are multi-modality results.
**(i) Face feature based human recognition:** Training the facial features separately on each angle (0\({}^{\circ}\), 45\({}^{\circ}\), and 90\({}^{\circ}\)) with custom parameters and hyperparameter tuning using Optuna [25] produces accuracies of 65%, 75%, and 85%, respectively (refer to Table II). The overall accuracy of the face model incorporating all orientations is 80% (refer to Table I).
**(ii) Gait feature based human recognition:** Training of the gait features across three view-points produces accuracies 75%, 60%, and 55%, respectively (Refer Table II). Referring to Table I, the overall accuracy of the gait model across viewpoints is observed to be 70%.
One noteworthy observation from the aforesaid single-modality based results is the outperformance of the face and gait models at the 90\({}^{\circ}\) and 0\({}^{\circ}\) viewpoints, respectively. This accentuates our initial intuition of the influence of viewpoint on feature modalities. To incorporate the best of both modalities in different viewpoints, we facilitate fusion techniques. In particular, four fusion approaches are carried out.
**(iii) Average based fusion:** Average weight-based fusion incorporates a manual weight input to the face and gait models. For this technique, a weight of 0.5 is assigned to each of the individual face and gait features, achieving an accuracy of \(75\%\) on the test dataset.
**(iv) Naive fusion via BLP:** Bilinear Pooling incorporates the fusion of both gait and face models, as explained in Section III (C). The model achieves 85%, 75%, and 85% viewpoint-wise accuracies, as shown in Table II. The overall fused model results in an accuracy of 80%. It was observed that compared to the _Average based fusion_ model, _Naive fusion with BLP_ improves the accuracy by \(5\%\).
**(v) Attention Fusion:** In this model, the keyless attention mechanism (discussed in Sec. III-D) is implemented to obtain the global face and gait attention weights \(\bar{\alpha}\) and \(\bar{\beta}\). These are further multiplied with the respective features \(F\) and \(G\), which are then concatenated to obtain a single feature vector. Note that no adaptive fusion strategy is employed in this scheme. This model is able to achieve an overall accuracy of \(85\%\) incorporating features over all the viewpoints.
**(vi) Context-aware Adaptive Fusion with attention:** This strategy incorporates the proposed context-aware adaptive fusion strategy into the attention module, discussed in Sec. III-D. The viewpoint-wise accuracies attained by this method are 90%, 80%, and 90%, respectively. Referring to Table I, this model outperforms all other models by achieving an overall accuracy of 90%, highlighting the importance of the context-aware fusion of modalities across the viewpoints. In terms of the log loss metric, the adaptive fusion strategy achieves the lowest value (0.389) compared to all other models.
From the viewpoint-wise performance of the models in Table II, some key interpretations can also be drawn. For 0\({}^{\circ}\), the gait model surpasses the face model performance, which aligns with our intuition that the model learns gait features better when the subject walks laterally. Similarly, the face model works well when the subject walks towards the camera at 90\({}^{\circ}\). However, with the adaptive fusion strategy, the best of both modalities is incorporated adaptively based on the context, resulting in high performance irrespective of the viewpoint.
A visual interpretation of the attention-based adaptive fusion model is depicted in Fig. 4 in terms of a confusion matrix. A confusion matrix visualizes and summarizes the performance of a classification algorithm. The confusion matrix aggregating all three viewpoints on the test dataset of 20 subjects is depicted in Fig. 4. We can observe that 18 out of 20 subjects are correctly classified, the exceptions being subject IDs '1' and '13'. Subject ID '1' is most often classified as 14, and subject ID '13' as 5 and 20 in most models. Further, we observe that subject 13 is identified correctly by the face model but not by the gait model, which might be ascribable to the suboptimal learning of the model from the grayscale features of the image.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Angle(\({}^{\circ}\))**} & \multicolumn{4}{c|}{**Accuracy(\(\%\))**} \\ \cline{2-5} & **Face** & **Gait** & **Naïve Fusion** & **Adaptive Fusion** \\ \hline
0\({}^{\circ}\) & 65 & 75 & 85 & 90 \\ \hline
45\({}^{\circ}\) & 75 & 60 & 75 & 80 \\ \hline
90\({}^{\circ}\) & 85 & 55 & 85 & 90 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Result of all models conducted angle-wise \(0^{\circ}\), \(45^{\circ}\), and \(90^{\circ}\) with respect to camera
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Index** & **Model** & **Accuracy(\%)** & **Log Loss** \\ \hline (i) & Face feature model & 80 & 0.436 \\ (ii) & Gait feature model & 70 & 1.641 \\ \hline (iii) & Average based fusion & 75 & 0.779 \\ (iv) & Naïve Fusion via BLP & 80 & 0.519 \\ (v) & Attention Fusion & 85 & 1.619 \\ (vi) & **Adaptive Fusion Attention** & **90** & **0.389** \\ \hline (vii) & **Geng, Wang et. al. [17]** & 86.67 & - \\ \hline \end{tabular}
\end{table} TABLE I: Overall result summary of the models
## VI Conclusion and Future Works
In this work, we proposed a multimodal adaptive fusion of face and gait towards human identification. In particular, a keyless attention-based deep neural network for learning the attention in the gait and face videos, together with a context-aware adaptive fusion strategy to efficiently extract and fuse the features, is presented. Based on the observation that a single biometric modality yields suboptimal results, various studies leveraging average-based fusion, naive fusion, attention fusion, and context-aware adaptive fusion were investigated. Results of the proposed attention-based adaptive fusion strategy show superior performance compared to all the other models as well as the state-of-the-art result. Future improvements can be made by introducing better attention mechanisms, such as dense co-attention and spatial-channel attention, as well as advanced fusion mechanisms like tucker fusion, block fusion, etc.
|
2306.12330 | ProtoGate: Prototype-based Neural Networks with Global-to-local Feature
Selection for Tabular Biomedical Data | Tabular biomedical data poses challenges in machine learning because it is
often high-dimensional and typically low-sample-size (HDLSS). Previous research
has attempted to address these challenges via local feature selection, but
existing approaches often fail to achieve optimal performance due to their
limitation in identifying globally important features and their susceptibility
to the co-adaptation problem. In this paper, we propose ProtoGate, a
prototype-based neural model for feature selection on HDLSS data. ProtoGate
first selects instance-wise features via adaptively balancing global and local
feature selection. Furthermore, ProtoGate employs a non-parametric
prototype-based prediction mechanism to tackle the co-adaptation problem,
ensuring the feature selection results and predictions are consistent with
underlying data clusters. We conduct comprehensive experiments to evaluate the
performance and interpretability of ProtoGate on synthetic and real-world
datasets. The results show that ProtoGate generally outperforms
state-of-the-art methods in prediction accuracy by a clear margin while
providing high-fidelity feature selection and explainable predictions. Code is
available at https://github.com/SilenceX12138/ProtoGate. | Xiangjian Jiang, Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik | 2023-06-21T15:17:39Z | http://arxiv.org/abs/2306.12330v2 | # ProtoGate: Prototype-based Neural Networks with Local Feature Selection for Tabular Biomedical Data
###### Abstract
Tabular biomedical data poses challenges in machine learning because it is often high-dimensional and typically low-sample-size. Previous research has attempted to address these challenges via feature selection approaches, which can lead to unstable performance on real-world data. This suggests that current methods lack appropriate inductive biases that capture patterns common to different samples. In this paper, we propose ProtoGate, a prototype-based neural model that introduces an inductive bias by attending to both homogeneity and heterogeneity across samples. ProtoGate selects features in a global-to-local manner and leverages them to produce explainable predictions via an interpretable prototype-based model. We conduct comprehensive experiments to evaluate the performance of ProtoGate on synthetic and real-world datasets. Our results show that exploiting the homogeneous and heterogeneous patterns in the data can improve prediction accuracy while prototypes imbue interpretability.
## 1 Introduction
In biomedical research, tabular data is frequently collected [1; 2; 3] for a wide range of applications such as detecting marker genes [4], identifying cancer sub-types [4], and performing survival analysis [5; 6]. Clinical trials, whilst collecting large amounts of high-dimensional data using modern high-throughput sequencing technologies, often consider a small number of patients due to practical reasons [7]. The resulting tabular datasets are thus often high-dimensional and typically low-sample-size (HDLSS). Moreover, given the inherent heterogeneity of biomedical data, important features often vary from sample to sample - even in the same dataset [5; 8]. Such scenarios have proven challenging for current machine learning approaches, including deep tabular models [9; 10; 11; 5; 12; 13].
Previous methods [14; 15; 16; 8; 5] have attempted to address such challenges by performing local feature selection: rather than selecting a general set of important features across all samples, local feature selection methods select specific subsets of features for each sample and these subsets may vary from sample to sample. However, existing methods have three limitations: (i) In many real-world tasks, even simple models - such as an MLP or Lasso - can outperform many existing methods [5]. One reason is the accuracy of current methods can be substantially lower for some classes than other classes, and we illustrate this in Figure 1. (ii) These methods commonly comprise a trainable feature selector to select features and a trainable predictor to make predictions with these features, which can be susceptible to the co-adaptation problem [17; 18; 19]. Because the two components are jointly trained, the predictor can fit the selected features to achieve high accuracy even when these features do not reflect the real data distribution [17]. Consequently, the prediction accuracy is
inconsistent with the quality of selected features. For instance, L2X [16] achieves 96% accuracy in digit classification on MNIST by using only one pixel as input [17]. (iii) Current methods [8; 17; 5; 16] are not explainable because they mainly use an MLP-based predictor. This lack of explainability is a major concern in high-stake applications such as medicine [20; 21; 22; 23; 24].
We hypothesise that existing local feature selection methods exhibit subpar performance on biomedical data for two reasons: (i) They lack appropriate inductive biases. These methods mainly make predictions using MLPs, although, in the biomedical domain, the clustering assumption (which states similar samples should belong to the same class [25]) has been shown effective [26; 27; 28; 29; 30]. Based on the clustering assumption, the prototype-based models can perform well on tabular data by classifying the new instances according to their similarities to the existing prototypes. For instance, a simple prototype-based model, such as \(k\)-means, can outperform complex neural networks with an accurate pre-trained feature selector [5]. (ii) Existing local feature selection models tend to overly emphasize heterogeneity, often neglecting that different samples might share some informative features. The high accuracy of global feature selection models on real-world datasets [13] suggests that informative features can indeed be shared across samples. We believe that effective local feature selection methods should be able to identify both homogeneous and heterogeneous feature patterns across samples, provided the data supports their existence.
We aim to address the challenges of suboptimal performance and opaqueness of local feature selection methods applied to tabular biomedical data. We propose ProtoGate, a novel method which performs local feature selection and makes accurate and explainable predictions in the HDLSS regime.
Firstly, ProtoGate uses a prototype-based predictor without learnable parameters - namely Differentiable K-Nearest Neighbors (DKNN) [31] - which enables explainable predictions. The prototype-based predictor confers two important properties on ProtoGate: (i) an inductive bias aligned with the clustering assumption in biomedical data; and (ii) consistent evaluations of the quality of selected features throughout the training process, eliminating the possibility of co-adaptation from joint training. Secondly, ProtoGate performs feature selection in a global-to-local manner with an \(\ell_{1}\)-regularised gating network. The global-to-local design helps ProtoGate consider the homogeneous and heterogeneous patterns across multiple samples.
Our contributions can be summarised as follows:
1. We propose ProtoGate, a novel method which addresses the challenge of high-dimensional and low-sample-size (HDLSS) biomedical data by achieving local feature selection and explainable predictions with a global-to-local feature selector and a prototype-based classifier.
2. We show that ProtoGate generally outperforms 12 benchmark methods on seven real-world biomedical datasets (Section 4.1) while selecting fewer features (Section 4.2), paving the path to more robust and interpretable local feature selection models.
3. We demonstrate that ProtoGate effectively handles the co-adaptation problem with a prototype-based predictor by comparing its performance against nine feature selection benchmark methods on three synthetic datasets (Section 4.4).
Figure 1: Illustration of the unstable performance of LSPIN [5] on the lung dataset. (a) The class-wise classification accuracy, with class distribution in the parentheses. (b) The mean number of selected features, displayed on a logarithmic scale. MLP and ProtoGate achieve stable performance, while LSPIN has a large variance in the accuracy and number of selected features across different classes.
## 2 Related Work
**Feature Selection Methods** Feature selection is a common technique for improving the accuracy and interpretability of machine learning models on HDLSS datasets. An extensive line of work selects features globally with Lasso-based regularisation [32; 33; 34; 35; 36] or specialised layers in neural networks [13; 37; 38; 39; 40]. However, the global feature selection ignores the heterogeneous nature of biomedical data, leading to insufficient interpretability [5; 8].
Prior studies attend to the heterogeneity between samples by designing local feature selection models that select instance-wise features for explaining a pre-trained predictor [17; 41; 42; 43; 44; 45; 46]. These methods are limited because the post-hoc analysis on feature importance does not improve the performance of pre-trained predictors.
Recent work proposes to select instance-wise features for making predictions [8; 5; 16; 15; 47]. L2X uses mutual information for instance-wise feature selection with Concrete distribution, but it requires specifying the number of selected features [16]. INVASE addresses such limitation by modelling each feature's mask/gate value with independent Bernoulli distributions [8]. However, both methods utilise computationally expensive gradient estimators: REINFORCE [48] or REBAR [49]. Similar to STG [40], LSPIN/LLSPIN re-formalises the mask/gate value with injected Gaussian noise and extends Localized Lasso [50] with a gating network that can select similar features for similar samples [5]. However, the poor performance of a vanilla KNN on real-world datasets (Table 1) demonstrates that the similarity in the initial high-dimensional feature space is inaccurate because a large proportion of features can be noise for the prediction. In contrast, ProtoGate measures the similarity across samples within an intrinsically interpretable DKNN predictor. The predictor takes the samples after feature selection as input, and thus the similarity is measured in a feature space with fewer dimensions than LSPIN/LLSPIN and Localized Lasso.
**Co-adaptation Problem** In feature selection, co-adaptation refers to the situation where the model encodes predictions into the feature selection, leading to high accuracy with features that do not reflect the real data distributions [17; 18; 19; 51]. REAL-X proves that co-adaptation can happen in models with jointly trained feature selectors and predictors [17], and addresses this problem by decoupling the training objectives of the feature selector and predictor. But it only provides post-hoc explanations of the feature importance for a predictor trained with all features. In ProtoGate, we propose to address the co-adaptation problem with DKNN, a prototype-based predictor without learnable parameters. The DKNN predictor can consistently evaluate the selected features throughout the training process, eliminating the possibility of co-adaptation from joint training.
## 3 Method
### Problem Setup
We consider the classification task on tabular biomedical data with \(\mathcal{Y}\) classes. Let \(X\coloneqq[\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}]^{\top}\in\mathbb{R}^{N\times D}\) be the data matrix consisting of \(N\) samples \(\mathbf{x}^{(i)}\in\mathbb{R}^{D}\) with \(D\) features, and let \(Y\coloneqq[y^{(1)},\ldots,y^{(N)}]\in\mathbb{R}^{N}\) be the corresponding labels. We denote \(x^{(i)}_{d}\) as the \(d\)-th feature of the \(i\)-th sample. To simplify the notation, we assume all samples in \(X\) are used for training.
A common local feature selection model contains two components: (i) an instance-wise feature selector \(S_{\mathbf{W}}:\mathbb{R}^{D}\rightarrow[0,1]^{D}\) that takes as input a sample \(\mathbf{x}^{(i)}\) and generates a mask \(\mathbf{s}^{(i)}\in[0,1]^{D}\) for its features, and (ii) a predictor model \(F_{\theta}:\mathbb{R}^{D}\rightarrow\mathcal{Y}\) which takes as input both the sample \(\mathbf{x}^{(i)}\) and the mask \(\mathbf{s}^{(i)}\) for prediction:
\[\hat{y}^{(i)}=F_{\theta}\left(S_{\mathbf{W}}(\mathbf{x}^{(i)}),\mathbf{x}^{(i)}\right)=F_{\theta}(\mathbf{x}^{(i)}\odot\mathbf{s}^{(i)}), \tag{1}\]
where \(\hat{y}^{(i)}\) is the predicted label and \(\odot\) is element-wise multiplication. Here, we say that the \(d\)-th feature is selected if and only if its mask value is positive (\(s^{(i)}_{d}>0\)).
### Rationale for Model Architecture
We propose ProtoGate as a method for selecting instance-wise features with inductive bias from the prototype-based model, as shown in Figure 2. Instead of predicting the local mask from all features,
ProtoGate selects instance-wise features in a global-to-local manner with an \(\ell_{1}\)-regularised gating network. This allows the feature selector to attend to both homogeneity and heterogeneity across samples. Additionally, ProtoGate leverages the selected features with a prototype-based predictor of DKNN. The DKNN predictor makes explainable predictions and encodes the clustering assumption into feature selection. Without learnable parameters, DKNN can further avoid the co-adaptation problem by providing consistent evaluations for the selected features while training the feature selector. The pseudocode for model training is summarised in Algorithm 1.
```
Require: training samples \(X\in\mathbb{R}^{N\times D}\), ground truth labels \(Y\in\mathbb{R}^{N}\), global-to-local feature selector \(S_{\mathbf{W}}\), prototype-based classifier \(F\), sparsity hyper-parameters (\(\lambda_{g}\), \(\lambda_{l}\)), number of nearest neighbours \(K\), total training epochs \(E\), learning rate \(\alpha\)
Ensure: trained model \(S_{\mathbf{W}}\), prototype base \(\mathcal{B}\)
\(\mathbf{W}\leftarrow\) GaussianInitialisation()        \(\triangleright\) Initialise the weights of the feature selector
for \(e\gets 1\) to \(E\) do
    \(\mathcal{B}\leftarrow\{\}\)        \(\triangleright\) Initialise the prototype base as an empty set
    for \(i\gets 1\) to \(N\) do        \(\triangleright\) Construct the prototype base
        \(\mathbf{x}^{(i)}_{\text{masked}}\leftarrow\mathbf{x}^{(i)}\odot S_{\mathbf{W}}(\mathbf{x}^{(i)})\)        \(\triangleright\) Select instance-wise features for training samples
        \(\mathcal{B}\leftarrow\mathcal{B}\cup\{(\mathbf{x}^{(i)}_{\text{masked}},y^{(i)})\}\)        \(\triangleright\) Add the masked sample and its label to the prototype base
    end for
    for \(i\gets 1\) to \(N\) do
        \(\mathbf{x}^{(i)}_{\text{masked}}\leftarrow\mathbf{x}^{(i)}\odot S_{\mathbf{W}}(\mathbf{x}^{(i)})\)        \(\triangleright\) Select instance-wise features for query samples
        \(P^{(i)}_{\mathcal{B}}\leftarrow\) NeuralSort(\(\mathcal{B},\mathbf{x}^{(i)}_{\text{masked}}\))        \(\triangleright\) Compute the permutation matrix for \(\mathbf{x}^{(i)}_{\text{masked}}\)
        \(\hat{y}^{(i)}\gets F(\mathcal{B},\mathbf{x}^{(i)}_{\text{masked}},K)\)        \(\triangleright\) Classify the query sample with the \(K\) nearest prototypes
    end for
    \(L=\frac{1}{N}\sum_{i=1}^{N}\left(\ell_{\text{pred}}(P^{(i)}_{\mathcal{B}},\mathbf{x}^{(i)},y^{(i)})+R(\mathbf{W}^{[1]},\mathbf{s}^{(i)},\lambda_{g},\lambda_{l})\right)\)        \(\triangleright\) Compute the training loss
    \(\mathbf{W}\leftarrow\mathbf{W}-\alpha\nabla_{\mathbf{W}}L\)        \(\triangleright\) Update the weights of the feature selector
end for
return \(S_{\mathbf{W}}\), \(\mathcal{B}\)
```
**Algorithm 1** Training Procedure of ProtoGate
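For concreteness, the following is a minimal PyTorch sketch of one training step in the spirit of Algorithm 1. It is a simplified illustration rather than the reference implementation: the NeuralSort-based DKNN is replaced by a soft-KNN that weights prototypes with a temperature softmax over negative Euclidean distances, and the names (`GatingNet`, `soft_knn_loss`, `tau`) are our own.

```python
import torch
import torch.nn as nn

class GatingNet(nn.Module):
    """Global-to-local feature selector S_W: maps x to mask values in [0, 1]."""
    def __init__(self, d, hidden=100, sigma=0.5):
        super().__init__()
        self.fc1 = nn.Linear(d, hidden)   # W^[1]: l1-regularised for global selection
        self.fc2 = nn.Linear(hidden, d)
        self.sigma = sigma

    def forward(self, x):
        mu = torch.tanh(self.fc2(torch.tanh(self.fc1(x))))
        noise = self.sigma * torch.randn_like(mu) if self.training else 0.0
        return torch.clamp(mu + noise, 0.0, 1.0), mu    # s = max(0, min(1, mu + eps))

def soft_knn_loss(masked, labels, tau=1.0):
    """Soft-KNN surrogate for the DKNN loss: each sample is classified by a
    softmax-weighted vote over the other masked samples (the prototypes)."""
    dist = torch.cdist(masked, masked)                   # pairwise Euclidean distances
    dist = dist + 1e9 * torch.eye(len(masked))           # a sample is not its own prototype
    weights = torch.softmax(-dist / tau, dim=1)          # soft neighbourhood weights
    same = (labels[:, None] == labels[None, :]).float()  # 1 iff prototype shares the label
    return (1.0 - (weights * same).sum(dim=1)).mean()    # prefer same-class neighbours

def train_step(model, X, y, opt, lam_g=3e-4, lam_l=1e-3):
    model.train()
    s, mu = model(X)                                     # instance-wise masks
    masked = X * s                                       # local feature selection
    # Sparsity: l1 on first-layer weights (global) + expected L0 of the masks (local).
    l0 = (0.5 - 0.5 * torch.erf(-mu / (model.sigma * 2 ** 0.5))).sum(dim=1).mean()
    loss = soft_knn_loss(masked, y) + lam_g * model.fc1.weight.abs().sum() + lam_l * l0
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

X, y = torch.randn(64, 200), torch.randint(0, 2, (64,))  # toy HDLSS-like batch
net = GatingNet(d=200)
opt = torch.optim.SGD(net.parameters(), lr=0.005, weight_decay=1e-4)
print(train_step(net, X, y, opt))
```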
### Global-to-local Feature Selection
The global-to-local feature selector \(S_{\mathbf{W}}:\mathbb{R}^{D}\rightarrow[0,1]^{D}\) (Figure 2 (A)) is a neural network that maps feature values \(\mathbf{x}^{(i)}\) into mask values \(\mathbf{s}^{(i)}\). The feature selector attends to the homogeneity
Figure 2: The architecture of ProtoGate. **(A)** Given a sample \(\mathbf{x}\in\mathbb{R}^{D}\), the global-to-local feature selector performs global feature selection in the first layer. The orange dashed lines denote sparsified weights in \(\mathbf{W}^{[1]}\) under \(\ell_{1}\) regularisation. The neural network then computes the instance-wise mask values \(\{s_{d}\}_{d=1}^{D}\in[0,1]^{D}\) with a thresholding function. **(B)** The mask is applied to the sample for local feature selection by element-wise multiplication. **(C)** The prototype-based predictor classifies \(\mathbf{x}\) by retrieving the \(K\) nearest neighbours to the masked sample in base \(\mathcal{B}\). The majority class of neighbours is used as the predicted label \(\hat{y}\).
between samples via applying \(\ell_{1}\)-regularisation on \(\mathbf{W}^{[1]}\), the weights of the first layer. Intuitively, the regularisation can lead to sparse weights in the first layer, which implicitly selects features globally for all samples. The output \(\boldsymbol{\mu}^{(i)}\) from the last layer is thresholded to obtain instance-wise mask values by
\[s_{d}^{(i)}=\max(0,\min(1,\mu_{d}^{(i)}+\epsilon_{d}^{(i)})) \tag{2}\]
where \(\epsilon_{d}^{(i)}\) is the injected noise sampled from a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\). The standard deviation \(\sigma\) is fixed during training, and it is removed at inference time to obtain deterministic mask values. With the injected noise, \(\boldsymbol{s}^{(i)}\) can be re-formalised as a random vector with parameters \(\boldsymbol{\mu}^{(i)}\) predicted by a neural network. Therefore, the sparsity regularisation on mask values can be computed by
\[R(\mathbf{W}^{[1]},\boldsymbol{s}^{(i)},\lambda_{g},\lambda_{l})=\lambda_{g} ||\mathbf{W}^{[1]}||_{1}+\mathbb{E}\left[\lambda_{l}||\boldsymbol{s}^{(i)}|| _{0}\right]\!=\!\lambda_{g}||\mathbf{W}^{[1]}||_{1}+\lambda_{l}\!\sum_{d=1}^{D }\!\left(\frac{1}{2}-\frac{1}{2}\text{erf}(-\frac{\mu_{d}^{(i)}}{\sqrt{2} \sigma})\right) \tag{3}\]
where \((\lambda_{g},\lambda_{l})\) is a pair of hyper-parameters to balance the effects of global and local feature selection, and \(\text{erf}(\cdot)\) is the Gauss error function. The full derivations are available in Appendix B.
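The expected-\(\ell_{0}\) term in Equation 3 follows from \(\mathbb{E}\left[\|\boldsymbol{s}^{(i)}\|_{0}\right]=\sum_{d}P(\mu_{d}^{(i)}+\epsilon_{d}^{(i)}>0)\) for Gaussian noise. The snippet below is a quick sanity check of the closed form against Monte Carlo sampling (a sketch; the values of `mu` and `sigma` are arbitrary):

```python
import torch

sigma = 0.5
mu = torch.tensor([-1.0, -0.1, 0.0, 0.1, 1.0])      # pre-threshold outputs mu_d

# Closed form: P(s_d > 0) = 1/2 - 1/2 * erf(-mu_d / (sqrt(2) * sigma))
closed = 0.5 - 0.5 * torch.erf(-mu / (sigma * 2 ** 0.5))

# Monte Carlo: fraction of noisy gates s_d = clamp(mu_d + eps, 0, 1) that are positive
eps = sigma * torch.randn(100_000, mu.numel())
s = torch.clamp(mu + eps, 0.0, 1.0)
mc = (s > 0).float().mean(dim=0)

print(closed)   # ~ [0.023, 0.421, 0.500, 0.579, 0.977]
print(mc)       # matches the closed form up to sampling error
```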
### Prototype-based Prediction
The prototype-based predictor \(F:\mathds{R}^{D}\rightarrow\mathcal{Y}\) is a DKNN model (Figure 2 (C)). The DKNN predictor first constructs a prototype base \(\mathcal{B}\) with training samples. After masking the training samples with the mask generated by \(S_{\mathbf{W}}(X)\), DKNN retains the masked samples and their labels as prototypes in the base \(\mathcal{B}\coloneqq\{(S_{\mathbf{W}}(\boldsymbol{x}^{(i)})\odot\boldsymbol{ x}^{(i)},y^{(i)})\}_{i=1}^{N}\). With the acquired prototypes, the predictor can classify a query sample \(\boldsymbol{x}_{\text{query}}\in X\) by retrieving the base \(\mathcal{B}\). The predictor sorts the prototypes by their similarities to the masked query sample with NeuralSort [31], a differentiable relaxed sorting operator. Note that ProtoGate computes the Euclidean distance between samples as the similarity evaluation metric. According to the sorting results, the predictor uses the majority class of the \(K\) closest prototypes as the predicted label \(\hat{y}_{\text{query}}\). Because the feature selector is learnable, the mask can change and thus the base \(\mathcal{B}\) is dynamic over the training time. After training, the prototype base \(\mathcal{B}\) is fixed and query samples are from unseen test data.
For each query sample, the loss of prototype-based classification is defined as:
\[\ell_{\text{pred}}(P_{\mathcal{B}}^{\text{query}},\boldsymbol{x}_{\text{query }},y_{\text{query}})=K-\frac{1}{K}\sum_{j=1}^{K}\sum_{i=1}^{N}\mathbbm{1}\left( y^{(i)}=y_{\text{query}}\right)P_{\mathcal{B}}^{\text{query}}[i,j] \tag{4}\]
where \(P_{\mathcal{B}}^{\text{query}}\in\mathds{R}^{N\times N}\) denotes the relaxed permutation matrix and \(\mathbbm{1}(\cdot)\) denotes the indicator function. In the permutation matrix, \(P_{\mathcal{B}}^{\text{query}}[i,j]\) denotes the possibility that the \(i\)-th prototype is the \(j\)-th closest to query sample \(\boldsymbol{x}_{\text{query}}\) under NeuralSort. Among the \(K\) nearest prototypes, Equation 4 estimates the number of prototypes that have different labels to \(\boldsymbol{x}_{\text{query}}\). DKNN encodes the clustering assumption into feature selection by encouraging samples of the same class to have similar representations. Additionally, DKNN measures the similarity between masked samples and mitigates the effects of the noisy features in the initial high-dimensional feature space.
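To see what Equation 4 computes, consider the limit in which the relaxed permutation matrix becomes hard (one-hot), i.e., NeuralSort reduces to exact sorting. The loss then decreases as more of the \(K\) nearest prototypes share the query's label; note that, as written, its minimum is \(K-1\), and only relative differences matter for optimisation. A small numpy illustration with toy values of our own:

```python
import numpy as np

labels = np.array([1, 0, 1, 0])   # labels of the four prototypes in the base B
y_query, K = 1, 2

# Hard permutation matrix: P[i, j] = 1 iff prototype i is the j-th closest.
# Here the ranking (closest first) is: prototype 2, 0, 1, 3.
P = np.zeros((4, 4))
for j, i in enumerate([2, 0, 1, 3]):
    P[i, j] = 1.0

# Equation 4: loss = K - (1/K) * sum_{j<=K} sum_i 1(y_i == y_query) * P[i, j]
agree = sum(P[i, j] for j in range(K) for i in range(4) if labels[i] == y_query)
loss = K - agree / K
print(loss)   # 1.0: both of the 2 nearest prototypes share the query's label
```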
### Training Loss
The training loss is comprised of the average classification loss in Equation 4 and the sparsity regularisation in Equation 3:
\[L=\frac{1}{N}\sum_{i=1}^{N}\left(\ell_{\text{pred}}(P_{\mathcal{B}}^{(i)}, \boldsymbol{x}^{(i)},y^{(i)})+R(\mathbf{W}^{[1]},\boldsymbol{s}^{(i)},\lambda _{g},\lambda_{l})\right) \tag{5}\]
Because the loss function is fully differentiable, the global-to-local feature selector and the prototype-based predictor can be trained in tandem. The whole model can be optimised with standard gradient-based approaches, such as stochastic gradient descent. We did not observe optimisation issues when training over 3,000 models (Appendix A.4).
## 4 Experiments
We now evaluate ProtoGate on both synthetic and real-world datasets to substantiate the model design choices. Firstly, we compare ProtoGate against 12 benchmark methods on real-world classification
tasks (Section 4.1 and Section 4.2). Secondly, we investigate the impact of the prototype-based predictor by replacing it with a linear or MLP-based prediction head (Section 4.3) and adjusting the number of nearest neighbours (Appendix D). Thirdly, we investigate the impact of the global-to-local feature selector by considering the interplay between global and local feature selection (Section 4.3). Finally, we analyse the co-adaptation problem by considering the performance misalignment between feature selection and classification on the synthetic datasets (Section 4.4). We also provide the comparison of training time in Appendix E.
**Real-world datasets.** Following [13], we utilise seven HDLSS tabular biomedical datasets. The datasets contain \(2000-5966\) features with \(62-197\) samples of \(2-4\) different classes. We are interested in datasets with much fewer samples than LSPIN [5], which uses \(\sim 1,500\) samples. Full descriptions of the real-world datasets are available in Appendix A.1.
**Experimental setup.** For each dataset, we perform 5-fold cross-validation on 5 different splits, summing up to 25 runs per model. We obtain the validation set by randomly selecting 10% of training data. For each benchmark model, the training loss is a weighted loss, and we perform a hyper-parameter search for model selection on the validation set. Full details about the reproducibility and hyper-parameter tuning are available in Appendix A.5.
**Evaluation metrics.** We report the results averaged over 25 runs on test sets. (i) For classification, we measure the performance by the mean \(\pm\) std test balanced accuracy. (ii) Note that the proportion of selected features varies across samples for local feature selection methods. Therefore, we measure the sparsity of feature selection by the mean \(\pm\) std proportion of selected features across samples. (iii) To distinguish between "similar number of selected features" and "similar selected features", we introduce a new metric: degree of local sparsity \(\mathcal{Q}\), which is computed by
\[\mathcal{Q}=\frac{1}{D\cdot N}\sum_{j=1}^{N}\text{card}\left(\bigcup_{i=1}^{N} \text{nonzero}(\mathbf{s}^{(i)})-\text{nonzero}(\mathbf{s}^{(j)})\right) \tag{6}\]
where \(\text{card}(\cdot)\) returns the cardinality of a set and \(\text{nonzero}(\cdot)\) returns the indices of non-zero elements in a vector. \(\mathcal{Q}\) measures the difference between the union set of selected features for all samples and the selected features for a specific sample. Intuitively, a non-zero \(\mathcal{Q}\) denotes selected features are different across samples, and thus the feature selection is local. For global feature selection, the degree of local sparsity is zero (\(\mathcal{Q}\equiv 0\)).
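Equation 6 translates directly into a few lines of numpy (a sketch; `masks` is assumed to be an \(N\times D\) array of mask values):

```python
import numpy as np

def degree_of_local_sparsity(masks):
    """Eq. 6: mean size of the union of all selected features minus the features
    selected for sample j, normalised by D. Zero iff the selection is global."""
    N, D = masks.shape
    selected = masks > 0                          # boolean selection per sample
    union = selected.any(axis=0)                  # features selected for any sample
    diff_sizes = (union & ~selected).sum(axis=1)  # size of union minus selected_j
    return diff_sizes.sum() / (D * N)

# Identical masks give Q = 0 (global); sample-specific masks give Q > 0 (local).
global_masks = np.tile(np.array([1, 1, 0, 0]), (3, 1))
local_masks = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
print(degree_of_local_sparsity(global_masks))   # 0.0
print(degree_of_local_sparsity(local_masks))    # 0.5
```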
**ProtoGate implementation.** The global-to-local feature selector is flexible on the number of hidden layers, and we implement it as a three-layer feed-forward neural network. The numbers of neurons in the input and output layers are the same as the number of features of the input data and the number of neurons in the hidden layer is set to 100. The feature selector has batch normalisation and \(tanh\) activation for all layers. We train the models with a batch size of 64 and utilise an SGD optimizer with a weight decay of \(1e-4\). The number of nearest neighbours \(K\) is searched in \(\{1,2,3,4,5\}\). The global sparsity hyper-parameter \(\lambda_{g}\) is searched in \(\{1e-4,2e-4,3e-4,4e-4,6e-4\}\), and the local sparsity hyper-parameter \(\lambda_{l}\) is set as \(1e-3\).
**Benchmark methods.** We evaluate the classification accuracy of ProtoGate and compare it with several benchmark models, including global feature selection models (LightGBM [52], Random Forest (RF) [53], Lasso [32] and STG [40]) and local feature selection models (TabNet [15], L2X [16], INVASE [8], REAL-X [17] and LSPIN/LLSPIN [5]). Additionally, we also compare ProtoGate with some standard models, including KNN [54] and MLP.
### Classification Performance
Table 1 shows that ProtoGate consistently achieves balanced accuracy that is better than or comparable to that of the benchmark models. We compute the average rank across the different datasets: ProtoGate ranks first, followed by Lasso. ProtoGate outperforms all other local feature selection models by a clear margin. We also find that the existing local feature selection methods cannot outperform even the simple linear Lasso or vanilla MLPs on HDLSS datasets. Note that REAL-X trains the MLP-based predictor with all features, and thus it achieves performance comparable to the MLP model.
The stable and competitive performance of ProtoGate shows the suitability of the clustering assumption in the biomedical field. Moreover, ProtoGate intrinsically provides explanations for the predictions by explicitly pointing out the \(K\) nearest prototypes, while other local feature selection methods can be unexplainable with MLP-based predictors. The poor performance of the vanilla KNN model also demonstrates that a large proportion of features can be irrelevant to the predictions, and thus the similarity in the high-dimensional feature space can introduce noise to feature selection, which can be one reason for the failure of LSPIN and LLSPIN.
In most HDLSS cases, ProtoGate consistently outperforms both Lasso and MLP. The exceptions are the lung, prostate and toxicity datasets, where the ProtoGate accuracy is slightly lower. As mentioned in [13, 5], Lasso and MLP can outperform other feature selection models when they are well-regularised on some datasets, such as the toxicity dataset. Compared with well-regularised Lasso and MLP, the prototype-based predictor could have limited expressivity, resulting in the suboptimal performance of ProtoGate.
### Feature Selection Performance
We compare ProtoGate against both global feature selection methods (RF, Lasso and STG) and local feature selection methods (L2X, LSPIN and LLSPIN). We plot the mean \(\pm\) std of the proportion of selected features across samples in Figure 3. The numerical results and full visualisation of selected features are available in Appendix C.
Figure 3 and Figure 4 show that ProtoGate consistently selects fewer features per sample than other benchmark methods, except L2X. Because the performance of L2X is the worst among the 12 benchmark models, we argue that the L2X model does not perform better than ProtoGate on feature selection
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
Methods & lung & meta-dr & meta-pam & prostate & \multicolumn{2}{c}{tcga-2y} & toxicity & colon & **Avg. Rank** \\ \hline
LightGBM & \(93.42\pm 5.91\) & \(58.23\pm 8.56\) & \(94.98\pm 5.19\) & \(91.38\pm 5.71\) & \(57.09\pm 7.87\) & \(81.98\pm 6.25\) & \(76.60\pm 11.67\) & \(5.71\) \\
RF & \(91.73\pm 6.61\) & \(51.48\pm 3.41\) & \(88.73\pm 6.24\) & \(90.38\pm 7.31\) & \(58.70\pm 6.84\) & \(79.78\pm 7.10\) & \(80.05\pm 10.37\) & \(7.14\) \\
KNN & \(91.06\pm 7.92\) & \(54.64\pm 7.95\) & \(82.79\pm 9.20\) & \(78.78\pm 6.71\) & \(58.83\pm 7.07\) & \(83.86\pm 1.20\) & \(77.33\pm 5.41\) & \(8.00\) \\
Lasso & \(94.47\pm 3.71\) & \(85.88\pm 9.49\) & \(91.55\pm 2.83\) & \(91.19\pm 6.39\) & \(56.99\pm 6.26\) & \(91.98\pm 5.27\) & \(94.09\pm 5.00\) & \(4.29\) \\
MLP & \(\mathbf{95.81\pm 2.69}\) & \(\mathbf{54.68\pm 9.63}\) & \(95.71\pm 2.89\) & \(87.22\pm 7.41\) & \(55.52\pm 7.24\) & \(93.84\pm 4.28\) & \(90.80\pm 8.70\) & \(5.14\) \\
STG & \(93.30\pm 6.28\) & \(81.58\pm 8.67\) & \(67.18\pm 8.91\) & \(89.38\pm 5.85\) & \(57.04\pm 5.76\) & \(87.95\pm 5.01\) & \(79.55\pm 10.53\) & \(6.29\) \\
TabNet & \(77.65\pm 11.56\) & \(49.18\pm 15.02\) & \(82.66\pm 7.81\) & \(65.66\pm 9.03\) & \(51.58\pm 8.26\) & \(40.06\pm 12.23\) & \(56.75\pm 7.31\) & \(12.00\) \\
L2X & \(50.02\pm 8.30\) & \(52.54\pm 13.75\) & \(62.64\pm 13.69\) & \(61.78\pm 6.29\) & \(52.30\pm 9.11\) & \(31.72\pm 13.48\) & \(57.60\pm 14.26\) & \(12.43\) \\
INVASE1 & \(91.22\pm 6.16\) & \(-\) & \(91.70\pm 6.84\) & \(-\) & \(-\) & \(55.98\pm 6.45\) & \(80.40\pm 6.60\) & \(-\) & \(9.00\) \\
REAL-X & \(93.27\pm 4.32\) & \(60.01\pm 7.12\) & \(95.95\pm 3.04\) & \(86.75\pm 6.68\) & \(59.30\pm 7.49\) & \(90.97\pm 4.75\) & \(76.55\pm 12.21\) & \(5.14\) \\
LLSPIN & \(70.10\pm 12.31\) & \(56.77\pm 9.65\) & \(95.50\pm 3.60\) & \(85.71\pm 5.98\) & \(57.87\pm 6.02\) & \(61.67\pm 9.01\) & \(79.35\pm 7.34\) & \(7.14\) \\
LSPIN & \(76.92\pm 9.38\) & \(53.98\pm 8.00\) & \(\mathbf{97.18\pm 3.16}\) & \(87.75\pm 6.74\) & \(55.95\pm 4.75\) & \(83.47\pm 8.59\) & \(\mathbf{81.30\pm 7.97}\) & \(6.71\) \\ \hline
**ProtoGate** & \(93.44\pm 6.37\) & \(\mathbf{60.43\pm 7.61}\) & \(95.96\pm 3.93\) & \(90.58\pm 5.64\) & \(\mathbf{61.18\pm 6.47}\) & \(92.34\pm 5.67\) & \(\mathbf{81.10\pm 12.14}\) & \(\mathbf{2.00}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation comparison of ProtoGate with 12 benchmark methods on seven real-world tabular biomedical datasets. We report the mean \(\pm\) std balanced accuracy (averaged across 25 runs) and average accuracy rank across datasets. A lower average rank implies higher accuracy. We highlight the **First**, Second and Third ranking accuracy for each dataset. ProtoGate consistently ranks Top-3 across datasets and achieves the best overall performance.
Figure 4: Heatmaps of mask values on the meta-dr dataset. ProtoGate has a sparser feature selection result, since it can better identify the homogeneity across samples.
Figure 3: Comparison of the feature selection sparsity on real-world datasets. We report the mean \(\pm\) std of the proportion of selected features on test samples, averaged over 25 runs. ProtoGate learns sparser patterns than other methods by a clear margin except for L2X.
although it has the fewest selected features. Compared with the remaining local feature selection methods, ProtoGate has smaller standard deviations in the proportion of selected features across test samples. Note that this does not mean that ProtoGate selects features globally, because similar proportions of selected features only denote similar numbers of selected features, not necessarily similar selected features (Section 4.3). The sparse feature selection results from ProtoGate demonstrate the effectiveness of global information in feature selection, and the global-to-local process helps ProtoGate attend to both homogeneity and heterogeneity across samples.
### Model Design Ablations
**Impact of prototype-based predictor.** We now investigate how the prototype-based predictor impacts classification performance. For a fair comparison, we replace the DKNN predictor with a linear head network or an MLP, and then tune the hyper-parameter for global sparsity \(\lambda_{g}\) by searching within \(\{1e-4,2e-4,3e-4\}\).
As shown in Table 2, the DKNN predictor consistently outperforms other predictors. We attribute the performance improvement to the appropriate inductive bias in prototype-based classification and the reduction in learnable parameters. In ProtoGate, only the feature selector needs training, while other local feature selection methods have learnable predictors with vast amounts of parameters to optimise. We also find that simply combining a global-to-local feature selector and an MLP/linear prediction head does not outperform LSPIN/LLSPIN. This further indicates that a prototype-based predictor is the key to the high accuracy of ProtoGate.
**Impact of global-to-local feature selector.** In Section 4.2, we discussed how the global-to-local feature selector helps ProtoGate generate a sparser feature selection result. We further examine how different hyper-parameter values of global sparsity \(\lambda_{g}\) impact the feature selection behaviour.
Figure 5(a) shows that increasing \(\lambda_{g}\) can lead to a lower degree of local sparsity. We also find in Figure 5(b) that ProtoGate achieves the best test accuracy when selecting features locally (\(\mathcal{Q}>0\)), which aligns with the domain knowledge that heterogeneity across samples is important for accurate predictions on biomedical data. This also suggests that the outstanding performance of ProtoGate is due to its considering both homogeneity and heterogeneity for feature selection.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
Predictors & lung & meta-dr & meta-pam & prostate & tcga-2y & toxicity & colon \\ \hline
MLP & \(69.97\pm 9.17\) & \(56.00\pm 6.37\) & \(93.62\pm 6.04\) & \(89.13\pm 6.36\) & \(54.74\pm 8.11\) & \(90.36\pm 5.61\) & \(80.95\pm 7.77\) \\
Linear head & \(66.51\pm 12.45\) & \(56.10\pm 8.95\) & \(93.20\pm 6.18\) & \(89.87\pm 5.80\) & \(56.60\pm 8.20\) & \(90.29\pm 5.93\) & \(79.45\pm 6.23\) \\
**DKNN** & \(\mathbf{93.44\pm 6.37}\) & \(\mathbf{60.43\pm 7.61}\) & \(\mathbf{95.96\pm 3.93}\) & \(\mathbf{90.58\pm 5.64}\) & \(\mathbf{61.18\pm 6.47}\) & \(\mathbf{92.34\pm 5.67}\) & \(\mathbf{81.10\pm 12.14}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Balanced accuracy for different predictors on real-world datasets, averaged over 25 runs. We **bold** the highest accuracy for each dataset. The prototype-based classifier consistently outperforms linear and MLP predictors on all datasets.
Figure 5: Comparison of different values of global sparsity hyper-parameter \(\lambda_{g}\). (a) The degree of local sparsity averaged over 25 runs. Increasing \(\lambda_{g}\) reduces the diversity of selected features between samples. (b) The balanced accuracy averaged over 25 runs. Increasing \(\lambda_{g}\) does not guarantee improvement in the prediction accuracy.
**Training considerations.** ProtoGate can require a larger training overhead than some existing models, mostly for hyper-parameter tuning, since we need to consider the interplay between \(\lambda_{g}\), \(\lambda_{l}\) and \(K\). ProtoGate also stores all training samples in the prototype base \(\mathcal{B}\), leading to higher memory consumption on large datasets than the benchmark methods. Because we mainly focus on HDLSS datasets, memory consumption is not a major problem in this regime.
### Co-adaptation Analysis
We evaluate ProtoGate and benchmark feature selection models on the synthetic datasets to examine their correctness in feature selection and susceptibility to the co-adaptation problem. We use the same experimental settings as for the real-world datasets and adjust the hyper-parameter search range for each model to achieve its optimal performance. Following [5; 17], we measure the quality of the selected features by computing the F1 score between the predicted masks and the ground-truth masks, and the results are averaged over 25 runs.
**Synthetic datasets.** We generate three synthetic datasets by adapting the nonlinear datasets used in [5; 8; 17], and the exact data models are described in Appendix A.2. Each dataset has 200 samples of 100 features, which is only 10% of the samples and 10 times more features compared to [5]. All feature values are sampled independently from \(\mathcal{N}(0,\mathbf{I})\), where \(\mathbf{I}\) is a \(100\times 100\) identity matrix. Each dataset has two classes, and we make the class distribution imbalanced by generating 50 and 150 samples for the two classes, respectively.
_We purposely design Syn3\({}_{(-)}\) to examine the inductive bias in ProtoGate._ Note that the absolute value function is an even function: two samples with opposite values of the same feature are likely to have equal logit values, and therefore belong to the same class. However, opposite values imply a large distance between the samples, so they should not belong to the same class according to the clustering assumption. Therefore, prototype-based models are expected to perform poorly in this regime. We implement this by adding the absolute value function \(|x_{9}|\) to the data model of the first class of Syn3\({}_{(-)}\) to observe the performance degradation in ProtoGate.
**Results.** On Syn1\({}_{(+)}\) and Syn2\({}_{(+)}\), ProtoGate achieves better than or comparable performance in feature selection and classification to the benchmark methods. On Syn3\({}_{(-)}\), ProtoGate performs poorly, as expected. Although Syn1\({}_{(+)}\) and Syn2\({}_{(+)}\) also contain even functions like the square and absolute value, they have many other informative features that do not rely on even functions to compute the logit value. Therefore, the side effect of even functions is diluted in Syn1\({}_{(+)}\) and Syn2\({}_{(+)}\).
We also find LSPIN achieves the highest accuracy on Syn1\({}_{(+)}\), but has poor F1\({}_{\text{select}}\) in the selected features, denoting a severe problem of co-adaptation between the feature selector and predictor. In other words, LSPIN simply overfits the dataset without correctly identifying the informative features, making the feature selection results meaningless. In contrast, ProtoGate has consistently non-positive rank differences between F1\({}_{\text{select}}\) and ACC\({}_{\text{pred}}\), showing that the co-adaptation does not occur. The results demonstrate that ProtoGate can achieve a well-aligned performance of feature selection and classification, guaranteeing the quality of selected features.
\begin{table}
\begin{tabular}{l c c c c c|c c c c} \hline \hline
\multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Syn1\({}_{(+)}\)} & \multicolumn{3}{c|}{Syn2\({}_{(+)}\)} & \multicolumn{3}{c}{Syn3\({}_{(-)}\)} \\ \cline{2-10}
& F1\({}_{\text{select}}\) & ACC\({}_{\text{pred}}\) & **Diff.** & F1\({}_{\text{select}}\) & ACC\({}_{\text{pred}}\) & **Diff.** & F1\({}_{\text{select}}\) & ACC\({}_{\text{pred}}\) & **Diff.** \\ \hline
RF & \(0.1461\pm 0.0367\) & \(57.08\pm 6.48\) & 3 & \(0.1921\pm 0.0230\) & \(59.44\pm 5.24\) & 1 & \(0.2232\pm 0.0241\) & \(56.33\pm 9.08\) & -1 \\
Lasso & \(0.0905\pm 0.0197\) & \(54.55\pm 6.14\) & 2 & \(0.1130\pm 0.0070\) & \(52.42\pm 6.69\) & 0 & \(0.0900\pm 0.0179\) & \(55.30\pm 7.44\) & 2 \\
STG & \(0.2656\pm 0.0420\) & \(86.85\pm 9.3\) & -1 & \(0.2247\pm 0.0994\) & \(58.28\pm 8.36\) & -2 & \(\mathbf{0.2846\pm 0.1820}\) & \(54.00\pm 9.09\) & -7 \\
TabNet & \(0.0843\pm 0.0172\) & \(48.59\pm 6.55\) & 1 & \(0.0642\pm 0.0246\) & \(49.57\pm 5.38\) & 0 & \(0.0605\pm 0.0200\) & \(48.45\pm 8.31\) & 0 \\
L2X & \(0.1599\pm 0.0710\) & \(52.89\pm 7.51\) & -3 & \(0.1873\pm 0.0976\) & \(55.78\pm 6.97\) & -1 & \(0.0984\pm 0.0889\) & \(55.92\pm 7.30\) & 2 \\
INVASE & \(0.1763\pm 0.0456\) & \(53.36\pm 9.00\) & -1 & \(0.1553\pm 0.0338\) & \(60.28\pm 8.61\) & 6 & \(0.1332\pm 0.0256\) & \(\mathbf{50.75\pm 8.70}\) & 5 \\
REAL-X & \(\mathbf{0.1850\pm 0.0438}\) & \(47.54\pm 9.51\) & -7 & \(0.2328\pm 0.0729\) & \(52.50\pm 6.38\) & -6 & \(0.2630\pm 0.0567\) & \(56.48\pm 9.34\) & -1 \\
LLSPIN & \(0.1060\pm 0.0246\) & \(54.96\pm 9.49\) & 2 & \(0.1692\pm 0.0795\) & \(56.18\pm 5.80\) & 1 & \(0.1031\pm 0.0635\) & \(52.35\pm 8.32\) & -2 \\
LSPIN & \(0.1466\pm 0.0380\) & \(\mathbf{59.04\pm 9.24}\) & 5 & \(0.1911\pm 0.0389\) & \(59.40\pm 8.07\) & 1 & \(0.1927\pm 0.0645\) & \(58.09\pm 6.41\) & 2 \\ \hline
ProtoGate & \(\mathbf{0.2948\pm 0.0728}\) & \(58.68\pm 6.28\) & -1 & \(\mathbf{0.2922\pm 0.0943}\) & \(\mathbf{60.67\pm 8.21}\) & 0 & \(0.1653\pm 0.0554\) & \(56.16\pm 6.82\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation comparison of ProtoGate and nine benchmark methods on three synthetic datasets. We report the F1 score of selected features (F1\({}_{\text{select}}\)) and the balanced accuracy for prediction (ACC\({}_{\text{pred}}\)). “Diff.” refers to the difference between the ranks of F1\({}_{\text{select}}\) and ACC\({}_{\text{pred}}\), and a positive value indicates a high possibility of co-adaptation. We highlight the First, Second and Third performance for each dataset. ProtoGate achieves well-aligned performance for feature selection and prediction.
## 5 Conclusion
We present ProtoGate, a prototype-based neural model for local feature selection on high-dimensional and low-sample-size datasets. ProtoGate selects features in a global-to-local manner and makes predictions with an interpretable prototype-based model. The experimental results on real-world datasets demonstrate that ProtoGate improves classification accuracy and interpretability by attending to both homogeneity and heterogeneity across samples. The analysis of synthetic datasets further reveals that ProtoGate can effectively avoid the co-adaptation problem by utilising a prototype-based predictor without learnable parameters. Although we evaluate ProtoGate only on classification tasks in this paper, it is readily extendable and applicable to other biomedical tasks, including regression.
|
2307.05217 | Supervised Attention Using Homophily in Graph Neural Networks | Graph neural networks have become the standard approach for dealing with
learning problems on graphs. Among the different variants of graph neural
networks, graph attention networks (GATs) have been applied with great success
to different tasks. In the GAT model, each node assigns an importance score to
its neighbors using an attention mechanism. However, similar to other graph
neural networks, GATs aggregate messages from nodes that belong to different
classes, and therefore produce node representations that are not well separated
with respect to the different classes, which might hurt their performance. In
this work, to alleviate this problem, we propose a new technique that can be
incorporated into any graph attention model to encourage higher attention
scores between nodes that share the same class label. We evaluate the proposed
method on several node classification datasets demonstrating increased
performance over standard baseline models. | Michail Chatzianastasis, Giannis Nikolentzos, Michalis Vazirgiannis | 2023-07-11T12:43:23Z | http://arxiv.org/abs/2307.05217v2 | # Supervised Attention Using Homophily in Graph Neural Networks
###### Abstract
Graph neural networks have become the standard approach for dealing with learning problems on graphs. Among the different variants of graph neural networks, graph attention networks (GATs) have been applied with great success to different tasks. In the GAT model, each node assigns an importance score to its neighbors using an attention mechanism. However, similar to other graph neural networks, GATs aggregate messages from nodes that belong to different classes, and therefore produce node representations that are not well separated with respect to the different classes, which might hurt their performance. In this work, to alleviate this problem, we propose a new technique that can be incorporated into any graph attention model to encourage higher attention scores between nodes that share the same class label. We evaluate the proposed method on several node classification datasets demonstrating increased performance over standard baseline models.
Keywords: Graph Neural Networks · Graph Attention Networks · Supervised Attention
## 1 Introduction
Graph neural networks (GNNs) have recently emerged as a general framework for learning graph representations and have been applied with great success in different domains such as in bioinformatics [22], in physics [9] and in natural language processing [29], just to name a few. Among others, GNNs have been used to generate molecules with specific chemical characteristics [24], to predict compound-protein interactions for drug discovery [34] and to detect misinformation in social media [16].
While different types of GNNs have been proposed, most of these models follow an iterative message passing scheme, where each node aggregates information from its neighbors [12]. One of the most popular classes of this kind of models are the graph attention networks (GATs) [35, 3, 6, 19]. GATs employ an attention mechanism which can capture the importance of each neighbor and are thus considered state-of-the-art models in various graph learning tasks. These models are also highly interpretable since the learned attention scores can provide information about the relevance of the neighboring nodes.
Unfortunately, real-world graphs often contain noise, as there usually exist edges between unrelated nodes. In such a setting, once multiple message passing
steps are performed, nodes will end up receiving too much noisy information from nodes that belong to different classes, thus leading to indistinguishable representations. This problem is known as oversmoothing in the graph representation learning literature [7, 4], and can dramatically harm the performance of GNNs in the node classification task. Several approaches have been proposed to address the issue of oversmoothing such as normalization layers [39, 11], generalized band-pass filtering operations [25], and approaches that change the graph structure [7]. However, most of them are computationally expensive or require extensive architectural modifications.
In this work, we focus on removing the noisy information from the graph using an attention mechanism. Specifically, we propose a new loss function that encourages nodes to mainly attend to nodes that belong to the same class, and to a lesser extent to nodes that belong to different classes by supervising the attention scores. To motivate our approach, we first experimentally verify that GNNs perform better as the edge homophily in the graph increases, i. e., as we remove inter-class edges. Therefore, it is important to learn attention scores close to 0 for the inter-class edges. Furthermore, we demonstrate that the proposed method outperforms various baselines in real-world node classification tasks. Our approach is computationally efficient, and it can be applied to any graph attention model with minimal modifications in the architecture. Finally, we visualize the distribution of the learned attention scores of our proposed model and of vanilla graph attention networks, for the intra- and the inter-class edges. We verify that our proposed model learns higher attention scores for the intra-class edges, leading to high quality node representations.
Our contributions can be summarized as follows:
* We show experimentally that GNNs perform better as the edge homophily in the graph increases, and that it is important to learn attention scores close to 0 for the inter-class edges.
* We propose a novel loss function for attentional GNNs that encourages nodes to attend mainly to nodes that belong to the same class, and to a lesser extent to nodes that belong to different classes.
* We show that our approach outperforms various baselines in real-world node classification tasks.
The rest of the paper is organized as follows. Section 2 presents the related work. Section 3 introduces the proposed loss function. Finally, Sections 4 and 5 present the experimental results and conclusions, respectively.
## 2 Related Work
Graph Neural Networks (GNNs) have received significant attention in the past years, with a growing number of research works proposing novel methods and applications. The first GNN models were proposed several years ago [33, 30], however with the rise of deep learning, GNNs have gained renewed interest in the research community [21, 15]. The majority of GNN models can be reformulated
into a single common framework known as Message Passing Neural Networks (MPNNs) [12]. These models iteratively update a given node's representation by aggregating the feature vectors of its neighbors. Graph attention networks (GATs) correspond to one of the major subclasses of MPNNs [35, 3, 6]. GATs employ an attention mechanism which allows them to incorporate explicit weights for each neighbor. One of the main advantages of these models is that they are highly interpretable due to the learned attention scores. Numerous studies have proposed several enhancements and expansions to the message passing mechanism of MPNNs. These include among others, works that use more expressive or learnable aggregation functions [26, 32, 10, 6], schemes that operate on high-order neighborhoods of nodes [1, 17, 28], and approaches that operate in the hyperbolic space [23, 5, 27]. However, a common issue that affects the performance of various MPNNs is oversmoothing. Several studies have investigated the causes and effects of oversmoothing, as well as potential solutions to mitigate this problem, including normalization techniques [38, 11] and graph rewiring methods [7, 18].
## 3 Methodology
### Preliminaries
Let \(G=(V,E)\) be an undirected graph where \(V\) is a set of nodes and \(E\) is a set of edges. We will denote by \(N\) the number of vertices and by \(M\) the number of edges, i. e., \(N=|V|\) and \(M=|E|\). Then, we have that \(V=\{v_{1},v_{2},\ldots,v_{N}\}\). Let \(\mathbf{A}\in\mathbb{R}^{N\times N}\) denote the adjacency matrix of \(G\), \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]^{\top}\in\mathbb{R}^{N\times d}\) be the matrix that stores the node features, and \(\mathbf{Y}=[y_{1},y_{2},\ldots,y_{N}]^{\top}\in\{1,\ldots,C\}^{N}\) the vector that stores the nodes' class labels where \(C\) is the number of classes. Let \(\mathcal{N}(i)\) denote the indices of the neighbors of node \(v_{i}\), i. e., the set \(\{j\colon\{v_{i},v_{j}\}\in E\}\). We denote the features of the neighbors of a node \(v_{i}\) by the multiset \(\mathbf{X}_{\mathcal{N}(i)}=\{\mathbf{x}_{j}\colon j\in\mathcal{N}(i)\}\). We also define the neighborhood of \(v_{i}\) including \(v_{i}\) as \(\overline{\mathcal{N}}(i)=\mathcal{N}(i)\cup\{i\}\) and the corresponding features as \(\mathbf{X}_{\overline{\mathcal{N}}(i)}\). Given a training set of nodes, the goal of supervised node classification is to learn a mapping from the node set to the set of labels, \(f:V\rightarrow\{1,\ldots,C\}\).
### Graph Neural Networks
GNNs typically use the graph structure \(\mathbf{A}\) along with the node features \(\mathbf{X}\) to learn a representation \(\mathbf{h}_{i}\) for each node \(v_{i}\in V\)[14]. As already discussed, most GNNs employ a message-passing scheme [12] where every node updates its representation by aggregating the representations of its neighbors and combining them with its own representation. Since there is no natural ordering of the neighbors of a node, the aggregation function needs to be permutation invariant, and usually has a significant impact on the performance and the expressiveness of the GNN model [37]. Common aggregation functions include the sum, mean, max, and min operators, but also attention-based pooling aggregators [21, 15].
In this work, we mainly focus on attention-based aggregators, where the representation of each node \(v_{i}\) is updated using a weighted sum of the representations of its neighbors:
\[\mathbf{h}_{i}=\sigma\left(\sum_{j\in\overline{\mathcal{N}}(i)}\alpha_{ij}\mathbf{W}\mathbf{h }_{j}\right) \tag{1}\]
where \(\mathbf{h}_{i}\in\mathbb{R}^{d}\) denotes the hidden representation of node \(v_{i}\), \(\mathbf{W}\in\mathbb{R}^{d_{o}\times d}\) is a weight matrix and \(\alpha_{ij}\) is the learned attention score (e. g., how much node \(v_{i}\) attends to node \(v_{j}\)). Equation (1) is applied iteratively, however, for ease of notation, we have dropped the superscript (that denotes the iteration number). Among the different attention models that have been proposed in the past years, the Graph Attention Network (GAT) [35] computes the attention scores by applying a single-layer feedforward neural network in the concatenated node features of the two nodes, while GATv2 [3], an improved version of GAT, computes more expressive and dynamic attention. In our experiments, we use GATv2 as the backbone network, but our approach can be easily applied to any graph attention model. Specifically, we compute the un-normalized attention score between two nodes \(v_{i},v_{j}\) using the following equation:
\[e_{ij}=\mathbf{a}^{\top}\text{LeakyReLU}\left(\mathbf{W}_{2}\left[\mathbf{h}_{i}\|\mathbf{h}_{ j}\right]\right) \tag{2}\]
where \(\mathbf{a}\in\mathbb{R}^{d_{o}}\) is a weight vector and \(\mathbf{W}_{2}\in\mathbb{R}^{d_{o}\times 2d}\) a weight matrix. Then, we apply the softmax function to normalize the attention scores across all neighbors of \(v_{i}\):
\[\alpha_{ij}=\frac{\exp\left(e_{ij}\right)}{\sum_{k\in\mathcal{N}(i)}\exp\left( e_{ik}\right)} \tag{3}\]
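The scoring in Equations 2 and 3 can be sketched with dense adjacency masking as follows (a minimal single-head module of our own, not tied to any graph library):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Score(nn.Module):
    """Computes un-normalised scores e_ij (Eq. 2) and attention alpha_ij (Eq. 3)."""
    def __init__(self, d, d_out):
        super().__init__()
        self.W2 = nn.Linear(2 * d, d_out, bias=False)   # applied to [h_i || h_j]
        self.a = nn.Parameter(torch.randn(d_out))

    def forward(self, H, A):
        n = H.size(0)
        pairs = torch.cat([H.unsqueeze(1).expand(n, n, -1),    # h_i
                           H.unsqueeze(0).expand(n, n, -1)],   # h_j
                          dim=-1)
        e = F.leaky_relu(self.W2(pairs)) @ self.a        # Eq. 2: e_ij for all pairs
        e = e.masked_fill(A == 0, float('-inf'))         # attend only to neighbours
        alpha = torch.softmax(e, dim=1)                  # Eq. 3: normalise over N(i)
        return alpha, e

H = torch.randn(5, 8)                                    # node representations
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.t() + torch.eye(5)) > 0).float()             # symmetric, with self-loops
alpha, e = GATv2Score(8, 16)(H, A)
print(alpha.sum(dim=1))                                  # each row sums to 1
```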
### Problem of Information Mixing
Definition 1 (Edge Homophily): Given a graph \(G=(V,E)\) with a vector of node class labels \(\mathbf{Y}\), the edge homophily ratio is the fraction of intra-class edges in the graph, i. e., the fraction of edges that connect nodes with the same labels.
\[h(G,\mathbf{Y})=\frac{\left|\left\{\left(v_{i},v_{j}\right)\colon\left(v_{i},v_{j} \right)\in E\wedge y_{i}=y_{j}\right\}\right|}{\left|E\right|} \tag{4}\]
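The edge homophily ratio is straightforward to compute from an edge list (a sketch; `edges` is assumed to be an \(M\times 2\) array of node-index pairs):

```python
import numpy as np

def edge_homophily(edges, y):
    """Eq. 4: fraction of edges whose endpoints share the same class label."""
    return float((y[edges[:, 0]] == y[edges[:, 1]]).mean())

y = np.array([0, 0, 1, 1])
edges = np.array([[0, 1], [1, 2], [2, 3]])
print(edge_homophily(edges, y))   # 2/3: only the edge (1, 2) is inter-class
```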
The problem of information mixing or oversmoothing [7; 36] occurs mainly in cases where there are edges between nodes that belong to different classes (i. e., inter-class edges). In each message passing iteration, information will be exchanged through these "noisy" edges, leading nodes that belong to different classes to obtain highly non-separable representations. Therefore, the node classification task becomes extremely challenging. Ideally, we would like to identify and eliminate those "noisy" edges, so that nodes only aggregate information through intra-class edges.
In this paper, we leverage graph attention networks in order to alleviate this issue. Specifically, we encourage the network to learn attention scores that minimize information mixing in the graph. Note that a node \(v_{i}\) receives noisy information as follows:
\[\text{noise}(i)=\sum_{j\in\mathcal{N}_{\text{inter}}(i)}\alpha_{ij}\mathbf{W}\mathbf{h}_ {j} \tag{5}\]
where \(\mathcal{N}_{\text{inter}}(i)\) is the set of indices of the inter-class neighbors of node \(v_{i}\). Therefore, we would like our attention scores to satisfy the following equation:
\[\left\{\alpha_{ij}^{*}\colon j\in\mathcal{N}_{\text{inter}}(i)\right\}=\operatorname* {arg\,min}_{\left\{\alpha_{ij}\colon j\in\mathcal{N}_{\text{inter}}(i)\right\} }\sum_{j\in\mathcal{N}_{\text{inter}}(i)}||\alpha_{ij}\mathbf{W}\mathbf{h}_{j}|| \tag{6}\]
The solution of the above equation gives us \(\alpha_{ij}=0\) for all the inter-class edges \(\{v_{i},v_{j}\}\).
### Supervised Attention using Homophily (HS-GATv2)
Based on the previous analysis, we propose a new loss function for training graph attention networks that deals with the information mixing problem. Specifically, we propose to supervise the attention scores between the edges, by providing labels that indicate if the edge is an intra- or an inter-class edge. Let \(V_{\text{train}}\) denote a set that contains the indices of the nodes that belong to the training set. Let also \(E_{\text{train}}\) denote the training edge set which consists of all the edges where both source and target nodes belong to the training node set, i. e., \(E_{\text{train}}=\left\{\{v_{i},v_{j}\}\colon\{v_{i},v_{j}\}\in E\wedge i\in V _{\text{train}}\wedge j\in V_{\text{train}}\right\}\). Formally, the proposed loss function combines the following two terms: (1) the cross-entropy loss between model predictions and class labels of nodes (denoted by \(L_{V}\)); and (2)
Figure 1: An illustration of our proposed method (**HS-GATv2**). We combine the loss \(L_{V}\) from the node classification task and the loss \(L_{E}\) from the attention scores based on the training edges, to push neighbor nodes with the same class to have large attention scores and nodes from different classes to have small attention scores. Our approach is applicable to any graph attention model, by setting accordingly the attention function \(\mathbf{f}\).
the supervised attention losses for the edges between nodes of the training set (denoted by \(L_{E}\)) with mixing coefficient \(\lambda\):
\[\begin{split} L&=L_{V}+\lambda\,L_{E}\\ L_{V}&=-\frac{1}{|V_{\text{train}}|}\sum_{i\in V_{ \text{train}}}\sum_{c=1}^{C}y_{i,c}\log(\mathbf{p}_{i,c})\\ L_{E}&=-\frac{1}{T\,|E_{\text{train}}|}\sum_{t=1}^{T }\sum_{e\in E_{\text{train}}}\Big{(}y_{e}\log\big{(}\sigma(e_{e}^{(t)})\big{)} \\ &\qquad\qquad\qquad+(1-y_{e})\log\big{(}1-\sigma(e_{e}^{(t)}) \big{)}\Big{)}\end{split} \tag{7}\]
where \(y_{i,c}\) indicates if node \(v_{i}\) belongs to the class \(c\) (i. e., \(y_{i,c}=1\) if \(v_{i}\) belongs to class \(c\), and \(0\) otherwise), \(\mathbf{p}_{i,c}\) is the predicted probability of node \(v_{i}\) belonging to class \(c\), \(e_{e}^{(t)}\) is the un-normalized attention score of edge \(e\) in the \(t\)-th message passing layer, and \(y_{e}\) is the label of the edge (i. e., \(1\) if source and target nodes belong to the same class, and \(0\) otherwise). An illustration of the proposed method is given in Figure 1.
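A sketch of the objective in Equation 7, under the assumption that the model exposes the un-normalised attention scores \(e_{e}^{(t)}\) of the training edges at each of the \(T\) layers (the function and variable names are ours):

```python
import torch
import torch.nn.functional as F

def hs_gat_loss(logits, y, train_nodes, att_logits, edge_labels, lam=0.1):
    """L = L_V + lambda * L_E (Eq. 7).

    logits:      (N, C) class logits; y: (N,) node labels
    train_nodes: indices of training nodes
    att_logits:  list of T tensors of shape (|E_train|,), one per attention layer
    edge_labels: (|E_train|,) 1.0 for intra-class edges, 0.0 for inter-class edges
    """
    L_V = F.cross_entropy(logits[train_nodes], y[train_nodes])
    L_E = torch.stack([F.binary_cross_entropy_with_logits(e, edge_labels)
                       for e in att_logits]).mean()
    return L_V + lam * L_E

logits = torch.randn(6, 3)                       # toy values
y = torch.randint(0, 3, (6,))
train_nodes = torch.tensor([0, 1, 2, 3])
att_logits = [torch.randn(5), torch.randn(5)]    # T = 2 attention layers
edge_labels = torch.randint(0, 2, (5,)).float()
print(hs_gat_loss(logits, y, train_nodes, att_logits, edge_labels))
```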
## 4 Experiments
In this section, we extensively evaluate our method on synthetic as well as standard node classification datasets, and compare it against state-of-the-art GNNs. As already mentioned, we apply the proposed method to the GATv2 model.
### Adjusting Homophily in Graphs
Our method is based on the assumption that node classification is easier in homophilic graphs, since nodes from different classes will have separable representations. In this experiment, we verify this claim by testing the performance of various GNNs while adjusting the edge homophily of several graph datasets. Specifically, we remove \(k|E_{\text{inter}}|\) inter-class edges from each dataset, where \(|E_{\text{inter}}|\) is the number of inter-class edges in the graph. Setting \(k=0\) corresponds to the original graph and \(k=1\) corresponds to a fully homophilic graph. We report the results for the Cora, Citeseer and Disease datasets in Figure 2. We observe that the performance of all models increases as the homophily of the graph increases. Our approach is strongly motivated by this observation, since the proposed loss function encourages the attention scores of inter-class edges to be close to \(0\), thus generating a more homophilic-like setting.
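As a concrete illustration, this homophily adjustment can be done with a short NumPy routine; the function name and array layout below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def remove_inter_class_edges(edge_index, labels, k, seed=0):
    """Drop a fraction k of the inter-class edges.

    edge_index: [2, E] integer array of (source, target) node indices.
    labels:     [N] array of node class labels.
    k = 0 keeps the original graph; k = 1 yields a fully homophilic one.
    For an undirected graph stored as two directed arcs, both directions
    of a removed edge should be dropped.
    """
    rng = np.random.default_rng(seed)
    src, dst = edge_index
    inter = np.where(labels[src] != labels[dst])[0]  # inter-class edge ids
    drop = rng.choice(inter, size=int(k * len(inter)), replace=False)
    keep = np.setdiff1d(np.arange(edge_index.shape[1]), drop)
    return edge_index[:, keep]
```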
### Node Classification Benchmarks
**Baselines.** We compare our approach (HS-GATv2) against the following state-of-the-art GNN models: Graph Convolutional Network (GCN) [21], GraphSAGE [15], Graph Attention Network (GAT) [35], GATv2 [3], and Principal Neighbourhood Aggregation (PNA) [8].
Figure 2: Test performance of GNNs by removing \(k|E_{\text{inter}}|\) inter-class edges. Setting \(k=0\) corresponds to the original graph and \(k=1\) corresponds to a fully homophilic graph. Performance improves as the ratio of homophilic edges increases.
**Datasets.** We utilize four well-known node classification benchmark datasets to evaluate our approach in real-world scenarios. We use three citation network datasets: Cora, CiteSeer and Pubmed [31], where each node corresponds to a scientific publication, edges correspond to citations and the goal is to predict the category of each publication. We follow the experimental setup of [21] and use 140 nodes for training, 300 for validation and 1000 for testing. We further use one disease spreading model: Disease [5]. It simulates the SIR disease spreading model [2], where the label of a node indicates if it is infected or not. We follow the experimental setup of [5] and use 30/10/60% for training, validation and test sets and report the average results from 10 different random splits.
**Experimental Setup.** We use the Adam optimizer [20] with the Glorot initialization [13]. We search the layers from \(\{1,2\}\) and the attention heads from \(\{1,4,8\}\). We set the weight decay equal to \(5\mathrm{e}{-5}\). We fix the mixing coefficient \(\lambda\) to \(0.1\). We search the hidden dimensions from \(\{8,16,32,64,128\}\), the learning rate from \(\{0.001,0.005\}\) and the dropout rate from \(\{0.0,0.2,0.5\}\).
**Results.** Table 1 reports the obtained test accuracies. We observe that the proposed HS-GATv2 method outperforms the baselines on all four datasets. This highlights the ability of the proposed approach to use the attention mechanism to reduce the noisy information that each node receives from its neighbors, thus producing high-quality node representations.
### Distribution of Attention Scores
In this experiment, we compute the distribution of the un-normalized attention scores produced by HS-GATv2 and GATv2 for edges whose endpoints are not in the training set. The results for the Cora dataset are illustrated in Figure 3. Attention scores obtained from GATv2 have the same distribution for the intra- and inter-class edges. On the other hand, we observe that HS-GATv2 produces higher attention values for the intra-class edges even though it has not seen them during training. This allows our model to reduce the noisy information in the message passing procedure, and to focus mainly on the homophilic edges.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
**Method** & **Cora** & **Citeseer** & **Disease** & **Pubmed** \\
\hline
MLP & 43.8 & 52.9 & 79.1 \(\pm\) 1.0 & 74.2 \(\pm\) 0.2 \\
GCN & 81.4 & 67.5 & 89.0 \(\pm\) 2.2 & 77.8 \(\pm\) 0.3 \\
GraphSAGE & 77.2 & 65.3 & 88.8 \(\pm\) 2.0 & 77.9 \(\pm\) 0.6 \\
PNA & 76.4 & 58.9 & 86.8 \(\pm\) 1.9 & 75.8 \(\pm\) 0.6 \\
\hline
GAT & 82.5 & 70.6 & 88.1 \(\pm\) 2.5 & 78.1 \(\pm\) 0.6 \\
GATv2 & 83.5 & 71.6 & 89.2 \(\pm\) 1.7 & 78.5 \(\pm\) 0.4 \\
**HS-GATv2 (ours)** & **85.3** & **73.5** & **89.3** \(\pm\) 3.3 & **79.1** \(\pm\) 0.3 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Test accuracy in the node classification benchmarks.
## 5 Conclusion
In this paper, we introduced a new type of graph attention model that uses supervision in the attention scores by exploiting the network homophily. Our proposed loss function contains a loss term that encourages attention scores to be high between nodes that share the same label and therefore alleviates the problem of information mixing in GNNs. Our extensive experiments demonstrate an increase in the performance of the proposed method over state-of-the-art GNNs such as GAT and GATv2 in node classification tasks.
## Acknowledgements
G.N. is supported by the French National research agency via the AML-HELAS (ANR-19-CHIA-0020) project.
|
2310.11875 | Fractional Concepts in Neural Networks: Enhancing Activation and Loss
Functions | The paper presents a method for using fractional concepts in a neural network
to modify the activation and loss functions. The methodology allows the neural
network to define and optimize its activation functions by determining the
fractional derivative order of the training process as an additional
hyperparameter. This will enable neurons in the network to adjust their
activation functions to match input data better and reduce output errors,
potentially improving the network's overall performance. | Zahra Alijani, Vojtech Molek | 2023-10-18T10:49:29Z | http://arxiv.org/abs/2310.11875v1 | # Fractional Concepts in Neural Networks: Enhancing Activation and Loss Functions
###### Abstract
The paper presents a method for using fractional concepts in a neural network to modify the activation and loss functions. The methodology allows the neural network to define and optimize its activation functions by determining the fractional derivative order of the training process as an additional hyperparameter. This will enable neurons in the network to adjust their activation functions to match input data better and reduce output errors, potentially improving the network's overall performance.
## 1 Introduction
Fractional derivatives have been studied in the context of activation functions [21, 20], which can morph activation functions from one to another in a continuous manner. The activation function is a fundamental element within a neural
Figure 1: Fractional derivative of Mish and sigmoid activation functions.
network, introducing essential nonlinearity required to model complex relations in machine learning tasks, including classification and regression.
In this paper, we emphasize the importance of selecting appropriate activation and loss functions. Over the years, various neural network architectures have surfaced, each accompanied by its distinctive set of activation functions, including the sigmoid, radial basis function (RBF), ReLU [5], Softplus [4], swish [16], Mish [14], and many more. However, current practice predominantly relies on manual or heuristic selection of activation functions, often leading to an exhaustive trial-and-error methodology and frequent retraining of networks to find the optimal configuration.
Fractional calculus, on the other hand, opens doors to entire families of functions by naturally extending the original function through fractional derivatives. Here, the user only needs to specify the original function, while the fractional derivative is automatically adjusted. This elegant approach holds particularly true for activation functions. In contrast, for loss functions, the fractional derivative requires manual configuration.
However, the application of fractional activation functions comes with its unique set of challenges. One prominent challenge is the potential increase in computational complexity associated with fractional derivatives, which may impact the efficient implementation and execution of neural networks. Additionally, fractional activation functions demand careful tuning of specific hyperparameters, which can be intricate and time-consuming. Furthermore, the research on fractional activation functions remains relatively limited, leaving the practical effectiveness of these functions still an open question.
All of our code is publicly available1.
Footnote 1: [https://gitlab.com/irafm-ai/frac_calculus_01](https://gitlab.com/irafm-ai/frac_calculus_01)
## 2 Fractional Calculus
Fractional derivatives and integrals are used in neural networks to define a numerical trainable parameter value that automatically selects the optimal activation function for a particular task. This approach can be helpful when it is difficult to manually select the best activation function, as fractional derivatives and integrals allow for a more flexible and adaptive selection process. Furthermore, using fractional calculus can also improve the performance of neural networks by introducing more complex and nuanced mathematical models.
### Fractional Derivatives
Fractional calculus is a powerful mathematical tool to model various complex engineering and real-world systems. The three most popular fundamental definitions of fractional derivatives are [7]:
* Grunwald-Letnikov [15] derivative of fractional order \(a\in\mathcal{R}^{+}\): it is defined as \[D^{a}f(t)=\lim_{h\to 0}\frac{1}{h^{a}}\sum_{n=0}^{\left[\frac{t}{h}\right]}(-1)^{n}C_{n,a}f(t-nh),\] (1) where \([x]\) is the integer part of \(x\) and \(C_{n,a}=\binom{a}{n}\) is the generalized binomial coefficient (a numerical sketch of this form follows after this list).
* Riemann-Liouville derivative of fractional order \(a\in\mathcal{R}^{+}\): it is defined as \[D^{a}f(t)=\frac{1}{\Gamma(n-a)}\frac{d^{n}}{dt^{n}}\int_{0}^{t}\frac{f(\mu)}{(t-\mu)^{a-n+1}}d\mu,\] (2) for \(n-1<a<n\), \(n\in\mathcal{Z}^{+}\), where \(\Gamma(\cdot)\) is the Gamma function (we will recall it shortly).
* Caputo derivative of fractional order \(a\in\mathcal{R}^{+}\): it is defined as \[D^{a}f(t)=\frac{1}{\Gamma(n-a)}\int_{0}^{t}\frac{f^{(n)}(\mu)}{(t-\mu)^{a-n+1}}d\mu,\] (3) for \(n-1<a<n\), \(n\in\mathcal{Z}^{+}\), where \(f^{(n)}(\mu)\) is the \(n\)-th order derivative of the function \(f(t)\).
It should be noted that for an initially relaxed function, all three definitions coincide. The Caputo definition, though more restrictive, is widely employed, as it allows the use of physical initial conditions.
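As a numerical sketch of the Grunwald-Letnikov form in Eq. 1 (our own illustrative code), the limit is replaced by a small fixed \(h\) and the sum is truncated. The weights \((-1)^{n}C_{n,a}\) are generated by a recurrence that is equivalent to the gamma-function form but avoids its poles at non-positive integer arguments:

```python
def gl_fractional_derivative(f, t, a, h=1e-3, n_terms=1001):
    """Truncated Grunwald-Letnikov approximation of (D^a f)(t).

    The weights w_n = (-1)^n * C(a, n) satisfy the recurrence
    w_0 = 1, w_n = w_{n-1} * (n - 1 - a) / n.
    """
    total, w = 0.0, 1.0
    for n in range(n_terms):
        if n > 0:
            w *= (n - 1 - a) / n
        total += w * f(t - n * h)
    return total / h ** a

# Half-derivative of f(t) = t (taken as 0 for t < 0) at t = 1:
# the exact value is 2 / sqrt(pi) ~ 1.1284.
print(gl_fractional_derivative(lambda s: max(s, 0.0), 1.0, 0.5))
```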
The Gamma function, denoted by \(\Gamma(z)\), is a generalization of the factorial operator and is used to define the fractional derivative in fractional calculus. The Gamma function is defined as [1]:
\[\Gamma(z)=\int_{0}^{\infty}t^{(z-1)}e^{-t}dt. \tag{4}\]
The Gamma function satisfies \(\Gamma(n)=(n-1)!\) for positive integers \(n\), and for other values of \(z\) it can be computed by the Weierstrass product [1]:
\[\Gamma(z)=\frac{e^{-\gamma z}}{z}\prod_{k=1}^{\infty}\left(1+\frac{z}{k}\right)^{-1}e^{\frac{z}{k}}. \tag{5}\]
Here, \(\gamma\) is the Euler-Mascheroni constant (\(\gamma\approx 0.57721\)). The binomial coefficient in the fractional derivative above can be generalized by replacing the factorials with Gamma functions. The following statement gives the fractional derivative of the function \(f(x)=x^{k}\) for \(k,x\geq 0\).
\[D^{a}x^{k}=\frac{\Gamma(k+1)}{\Gamma(k+1-a)}x^{k-a}. \tag{6}\]
The Gamma function allows for the definition of the fractional derivative for non-integer values of \(k\), while the factorial is only defined for integers. This allows for a more general and flexible formulation of the fractional derivative. In machine learning, fractional derivatives and integrals have been used in various ways. For example, they have been used to design activation functions, as discussed in [21]. They can also be used in the design of loss functions, where they can capture more complex patterns in data.
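The power rule of Eq. 6 is easy to check numerically; the following sketch (with names of our choosing) reproduces both the classical case \(a=1\) and a half-derivative:

```python
from math import gamma, pi, sqrt

def frac_deriv_power(k, a, x):
    """D^a x^k via the gamma-function power rule (Eq. 6), for k, x >= 0."""
    return gamma(k + 1) / gamma(k + 1 - a) * x ** (k - a)

assert abs(frac_deriv_power(2, 1, 3.0) - 6.0) < 1e-9          # D^1 x^2 = 2x
assert abs(frac_deriv_power(1, 0.5, 4.0) - 2 * sqrt(4 / pi)) < 1e-9
```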
## 3 Grouping Activation Functions by Using Fractional Calculus
It is possible to group certain activation functions into families using fractional calculus. By analyzing the fractional derivative of an activation function, it is possible to mathematically generate other activation functions that belong to the same family.
It is worth noting that not all activation functions can be grouped into families using fractional calculus. The step function, for example, is computationally efficient but a poor choice of activation function due to its discontinuity and flat (zero) derivative. Other available activation functions, such as sigmoid, ReLU, and tanh, each have their own strengths and weaknesses; see [19] for detailed information on the various options.
The choice also depends on the type of neural network and the problem being solved. It is important to note that fractional calculus is a relatively new area of research and is still not fully understood; its use in neural networks remains experimental and is not yet widespread.
Calculating the fractional derivative of some activation functions is challenging and may not have a simple closed-form expression. However, approximation methods, e.g., the Grunwald-Letnikov definition [15] or other specialized techniques, are generally used to calculate fractional derivatives. These methods approximate the fractional derivative using finite differences or numerical integration.
### Fractional Mish
Mish is mathematically defined as:
\[f(x)=x\tanh(\ln(1+e^{x}))=x\cdot\frac{(e^{x}+1)^{2}-1}{(e^{x}+1)^{2}+1}.\]
Some traditional activation functions, such as sigmoid and tanh, suffer from the saturation problem, where the gradients become very small for large inputs. The Mish activation (and its fractional version) helps mitigate this issue by avoiding gradient saturation for large positive inputs, leading to more stable training. The Mish activation has been reported to perform well in particular machine learning tasks, such as image classification. The fractional Mish can be computed as:
\[D^{a}f(x)=\lim_{h\to 0}\frac{1}{h^{a}}\sum_{n=0}^{\infty}(-1)^{n}\frac{\Gamma(a+1)(x-nh)}{\Gamma(n+1)\Gamma(1-n+a)}\cdot\frac{(e^{x-nh}+1)^{2}-1}{(e^{x-nh}+1)^{2}+1}. \tag{7}\]
The Mish activation function combines the linear and non-linear properties of the identity and hyperbolic tangent functions, respectively. It has been shown to improve the training performance of deep neural networks in some cases.
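A possible PyTorch realization of this truncated series is sketched below. The module interface, the four series terms with \(h=1\) (values taken from Sec. 7), and the recurrence used to generate the GL weights are our own implementation choices, not necessarily those of the paper's repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FractionalMish(nn.Module):
    """Truncated GL fractional derivative of Mish with trainable order a.

    The weights (-1)^n * C(a, n) are built by the recurrence
    w_0 = 1, w_n = w_{n-1} * (n - 1 - a) / n, which stays differentiable
    in a and avoids gamma-function poles at negative integers.
    """

    def __init__(self, a_init=1.0, n_terms=4, h=1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(float(a_init)))
        self.n_terms, self.h = n_terms, h

    def forward(self, x):
        mish = lambda z: z * torch.tanh(F.softplus(z))  # x tanh(ln(1+e^x))
        out = torch.zeros_like(x)
        w = torch.ones((), device=x.device)             # w_0 = 1
        for n in range(self.n_terms):
            if n > 0:
                w = w * (n - 1 - self.a) / n            # GL weight recurrence
            out = out + w * mish(x - n * self.h)
        return out / self.h ** self.a
```

With \(a=1\), the weights reduce to \(1,-1,0,0\), so the series collapses to the finite difference \((\text{mish}(x)-\text{mish}(x-h))/h\); with \(a\) trainable, each layer or block can adapt its own shape.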
### Fractional sigmoid
The sigmoid activation function takes an input value and maps it into a value between 0 and 1. One characteristic of the sigmoid function is its tendency to saturate as the input becomes large or small. The saturation behavior leads to the vanishing gradient problem, where the gradients approach zero during backpropagation, hindering the learning process. The fractional sigmoid activation function can be implemented using the soft plus function or directly by applying the fractional derivative to the sigmoid function. The sigmoid function is defined as:
\[f(x)=\frac{1}{1+e^{-x}}\]
and the fractional sigmoid is defined as:
\[D^{a}f(x)=\lim_{h\to 0}\frac{1}{h^{a}}\sum_{n=0}^{\infty}(-1)^{n}\frac{\Gamma(a+1)}{ \Gamma(n+1)\Gamma(1-n+a)(1+e^{-x+nh})} \tag{8}\]
where \(a\) is the fractional derivative order. By adjusting the value of \(a\), we can control the shape and smoothness of the activation function.
Sigmoid is an outdated activation function primarily used in the last layer of networks to squeeze an output into the \([0,1]\) interval. Fractionality makes sigmoid usable and alleviates gradient vanishing, although it does not remove it completely.
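The same truncated construction applies directly to the sigmoid; a self-contained functional sketch (again with our own naming, and the \(h=1\), four-term truncation as assumptions):

```python
import torch

def fractional_sigmoid(x, a, n_terms=4, h=1.0):
    """Truncated GL fractional derivative of the sigmoid (Eq. 8);
    `a` may be a Python float or a trainable torch.nn.Parameter."""
    out = torch.zeros_like(x)
    w = torch.ones((), device=x.device)  # w_0 = 1
    for n in range(n_terms):
        if n > 0:
            w = w * (n - 1 - a) / n      # GL weight recurrence
        out = out + w * torch.sigmoid(x - n * h)
    return out / h ** a
```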
## 4 Grouping Loss Functions by Using Fractional Calculus
The term "fractional" suggests that these loss functions may involve fractional exponents or weights, which can be used to fine-tune the contribution of individual data points or classes. Fractional loss functions can have a more complex form than traditional loss functions and may combine various components to achieve specific objectives. Here is an example of a loss function that utilizes fractional calculus in a time series prediction problem:
\[L(y,\hat{y})=\int_{t_{1}}^{t_{2}}w(t)D^{a}(y(t)-\hat{y}(t))dt\]
In this equation, \(y(t)\) represents the actual label or target value, \(\hat{y}(t)\) the value predicted by the model, \(a\) the fractional derivative order, \(D^{a}\) the fractional derivative operator, and \(w(t)\) a weight function that assigns weights to different time points or data points. The integral is taken over the specified time interval \([t_{1},t_{2}]\). The specific choice and formulation of the loss function depends on the problem at hand and the properties that need to be incorporated. Note that, unlike for the fractional activation functions, where the order is trainable, \(a\) is a non-trainable parameter for the loss functions.
### Fractional Mean Squared Error
Machine learning aims to find a model that can accurately predict the output given the input data. This is typically done by minimizing a loss function, which measures the difference between expected and actual output. The model parameters are then adjusted to reduce the loss function. A standard loss function used in many machine learning algorithms is the mean squared error (MSE), which is defined as:
\[\text{MSE}(y,\hat{y})=\frac{1}{n}\sum(y-\hat{y})^{2}. \tag{9}\]
\(y\) is the true output, \(\hat{y}\) is the predicted output, and \(n\) is the number of samples. The goal is to minimize the MSE by adjusting the model parameters. One example of a loss function with a fractional derivative is the fractional mean squared error (FMSE), which is defined as:
\[\text{FMSE}(y,\hat{y},a)=\frac{1}{n}\sum\big((y-\hat{y})^{2}\big)^{a}, \tag{10}\]
where \(a\) is the order of the fractional derivative. When \(a=1\), the FMSE is equivalent to the MSE. However, when \(a\) is a non-integer value, the FMSE captures more complex relationships between the input and output.
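Assuming the exponent reading above (so that \(a=1\) recovers the standard MSE), a minimal sketch:

```python
import torch

def fractional_mse(y, y_hat, a=1.0):
    """Mean of ((y - y_hat)^2)^a; a = 1 gives ordinary MSE.
    `a` is a fixed (non-trainable) hyperparameter here, as in the paper."""
    return torch.mean(((y - y_hat) ** 2) ** a)
```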
### Fractional Cross-Entropy Loss
The fractional cross-entropy loss is a loss function with a fractional derivative that can be used in classification problems to improve the model's accuracy. The fractional cross-entropy loss function is defined as:
\[\text{FCrossEntropy}(y,\hat{y},a)=-\frac{1}{n}\sum\big(y^{a}\log(\hat{y})+(1-y)^{a}\log(1-\hat{y})\big) \tag{11}\]
One potential limitation of using loss functions with fractional derivatives is that they may require more computational resources and may be more challenging to optimize than standard loss functions. Additionally, the choice of the order of the fractional derivative may not be evident in all cases and may require some trial and error.
### Fractional Huber Loss
The fractional Huber loss is a modified version of the Huber loss [8] that incorporates a fractional derivative to capture more complex relationships between variables. The Huber loss is a robust function combining a squared error for small errors with an absolute error for large ones. By adjusting the value of \(a\), the fractional Huber loss can adapt to different levels of non-linearity and robustness in the data. Additionally, the \(\delta\) parameter allows control over the point at which the loss function transitions from quadratic to linear. The fractional Huber loss is particularly useful when the data contain outliers or extreme values, as it provides a robust alternative to the standard squared error loss. A fractional version of the Huber loss can be defined as:
\[\text{FHuberLoss}(y,\hat{y},a)=\left\{\begin{array}{ll}\frac{1}{2}\big((y-\hat{y})^{2}\big)^{a},&\text{if}\quad|y-\hat{y}|\leq\delta,\\ \delta\,|y-\hat{y}|^{a}-\delta^{2}\big(\frac{1-a}{2}\big),&\text{otherwise}.\end{array}\right. \tag{12}\]
Here, \(a\) is the order of the fractional derivative and \(\delta\) is a threshold determining the point at which the loss transitions from quadratic to linear; in practice, the loss is averaged over the \(n\) samples.
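The following sketch mirrors the reconstruction of Eq. 12 above. Given the extraction damage in the source, the exponent placement should be treated as illustrative rather than definitive:

```python
import torch

def fractional_huber(y, y_hat, a=1.0, delta=1.0):
    """Fractional Huber loss: quadratic branch for |err| <= delta,
    linear-like branch (with the (1 - a)/2 offset) otherwise."""
    err = torch.abs(y - y_hat)
    quad = 0.5 * (err ** 2) ** a
    lin = delta * err ** a - delta ** 2 * (1 - a) / 2
    return torch.mean(torch.where(err <= delta, quad, lin))
```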
## 5 Activation function results
In our experimental study, we evaluated the performance of three activation functions (Mish, swish, and sigmoid) on the CIFAR-10 dataset [10]. These evaluations were conducted across four distinct network architectures: a simple CNN, ResNet-18 and ResNet-50 [6], and an architecture resembling EfficientNetB0 [18]. For details of the architectures, we encourage readers to refer to our repository. We note that we utilized the fractional swish implementation of [20] but, despite our efforts, could not surpass the performance of the original swish activation function.
Setup: We use the common CIFAR-10 setting: an SGD optimizer with a five-epoch warm-up and a learning rate decreased at the 30%, 60%, and 80% milestones. Data are fed to the network in batches of 128 images augmented with padded random cropping, horizontal flipping, and normalization. The network is trained for 200 epochs using cross-entropy with label smoothing. There is one exception: ResNet-50 with fractional Mish is trained with a batch size of 96 due to memory consumption. We track the best test accuracy and the fractional derivative order of each activation function during training. All fractional activation functions use the hyperparameters \(h=1\) and a series truncated at \(\sum_{n=0}^{3}\) (see the function definitions). Sec. 7 discusses these two constants in further detail.
Table 1 displays the CIFAR-10 test accuracies for various combinations of models and activation functions. The results for Fractional Mish and Swish are somewhat underwhelming, as they do not exhibit a significant improvement over the original activation functions.
\begin{table}
\begin{tabular}{l|c c c c}
\hline \hline
**Act fc** & **Simple CNN** [\%] & **ResNet-18** [\%] & **ResNet-50** [\%] & **EffNet** [\%] \\
\hline
ReLU & 75.97 & 95.38 & 95.42 & 94.40 \\
\hline
Sig & **65.69** & 82.01 & 56.49 & 84.16 \\
Frac sig & 10.00 & **94.39** & **90.77** & **92.21** \\
\hline
swish & **77.53** & **94.31** & **94.62** & **93.92** \\
Frac swish & 77.40 & 93.42 & 93.82 & 92.97 \\
\hline
Mish & **77.67** & **94.22** & **94.90** & **93.55** \\
Frac Mish & 77.46 & 94.07 & 94.06 & 93.34 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: CIFAR-10 test accuracies of different model/activation function combinations.
On the other hand, the Fractional Sigmoid notably outperforms the original function, except in the case of the simple CNN model, where the entire model fails to converge. It is noteworthy that Sigmoid performs significantly worse with ResNet-50 than with ResNet-18 and EfficientNetB0. To ensure that this behavior is not solely due to seeding issues, we conducted an additional run without seeding or deterministic algorithms, resulting in a final accuracy of 43.44%. One plausible explanation for this discrepancy is the vanishing gradient issue, which worsens as network depth increases.
Figures 2(a) and 2(b) provide insights into the evolution of the fractional derivative orders for Fractional Mish and Sigmoid. Notably, most fractional derivative orders for these activation functions tend to converge toward similar values. In the case of Fractional Sigmoid, convergence is observed toward its second derivative, whereas Fractional Mish remains closely aligned with the original function.
## 6 Loss function results
In this set of experiments, we explore the effectiveness of three distinct loss functions: fractional cross-entropy, fractional mean squared error (MSE), and fractional Huber loss. Fractional cross-entropy is evaluated on CIFAR-100LT [2] and ImageNet [17], while fractional MSE and fractional Huber loss are tested using the UTKFace regression dataset [22].
Figure 2: Evolution of fractional derivative order during the training of ResNet-18 on CIFAR-10.
Fractional cross-entropy and CIFAR-100LT: We use the Adam optimizer with a learning rate decreased at the 60%, 80%, and 92.5% milestones. Data are fed to the network in batches of 50 images augmented with translation, horizontal flipping, and normalization. The network is trained for 200 epochs. Learning rates vary between experiments because fractionality adds a power term to the cross-entropy equation. In Fig. 3(a), we present a visual representation of the impact of the fractional derivative order and the learning rate on the final test accuracy. Lower learning rates tend to benefit from fractional derivative orders less than 1, whereas higher learning rates exhibit improved performance with fractional derivative orders greater than 1.
Fractional cross-entropy and ImageNet: We use the Adam optimizer with a learning rate decreased at the 33% and 66% milestones. Data are fed to the network in batches of 256 images augmented with random resized cropping, horizontal flipping, and normalization. The network is trained for 90 epochs. As illustrated in Fig. 3(b), the model's accuracy with a fixed learning rate of 2e-3 exhibited minimal changes and peaked at a fractional derivative order of 0.6.
Fractional Huber/MSE loss and UTKFace: We use the same experimental setting as for CIFAR-10 in Sec. 5, except for using the Lion optimizer, which we chose over SGD due to its performance in this particular experiment. We run training with different fractional derivative orders. Table 2 summarizes the lowest losses achieved across different network architectures. It is important to note that, since the loss function itself is being tuned, we did not employ it for reporting test loss values. Instead, we utilized the L1 loss (mean absolute error, MAE) to assess test performance; the fractional loss function served solely for training in this experiment.
## 7 Practical challenges
As we transition from the theoretical framework of fractional calculus to practical implementation, we encounter several specific challenges, which we'll address in this section.
\begin{table}
\begin{tabular}{l|c c}
\hline \hline
**Loss function** & **Simple CNN** & **ResNet-18** \\
\hline
MSE & \(8.702\) & \(6.142\) \\
Fractional MSE & \(\mathbf{8.319}\) (1.6) & \(\mathbf{5.982}\) (0.9) \\
\hline
Huber loss & \(8.678\) & \(6.046\) \\
Fractional Huber loss & \(\mathbf{8.174}\) (3.4) & \(\mathbf{6.042}\) (1) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: UTKFace test MAE of baseline MSE and Huber loss and their fractional variants. The numbers in parentheses are fractional derivative orders.
Figure 3: Effect of the fractional derivative order of cross-entropy and learning rate on test accuracy.
Computational hyperparameters: One of the key challenges lies in setting appropriate computational hyperparameters, specifically the value of \(h\) and the number of sigma iterations. To illustrate this, let us begin with the fundamental derivative theorem:
\[L=\lim_{h\to 0}\frac{f(b+h)-f(b)}{h} \tag{13}\]
and its application to a sigmoid function. In practice, we must replace the limit \(h\to 0\) with a fixed scalar value of \(h\). Fig. 4 shows the effect of increasing \(h\): as \(h\) grows, the derivative graph shifts to the right. This holds for both Eq. 13 and Eq. 8.
In Eq. 8, we have the option to increase the number of sigma iterations to obtain a more precise approximation (see Fig. 5). However, this approach introduces its own set of challenges: increasing the number of iterations results in slower computation, potential type overflow due to the factorial function \(\Gamma(1+n)=n!\), negative integer arguments to the Gamma function, and increased memory consumption.
Modern deep learning frameworks, like PyTorch, construct a computational graph to track values relevant to gradient computations. In this context, each sigma iteration becomes a part of the computational graph, causing memory consumption to grow rapidly (see Table 3).
The problem of a negative integer Gamma-function argument comes from the term \(\Gamma(1-n+a)\) when \(a\in\mathcal{N}^{+}\); therefore, it is impossible to increase the number of sigma iterations beyond a certain point. In general, \(n<1+a\) must hold; we therefore impose the restriction \(a\in[0.1,1.9]\) and truncate the series at \(\sum_{n=0}^{3}\). To keep \(a\) within the interval \([0.1,1.9]\), we employ a leaky piecewise-linear function:
\[f(x)=\begin{cases}0.1+(x-0.1)*0.01,&x<=0.1\\ x,&0.1<x<1.9\\ 1.9+(x-1.9)*0.01,&x>=1.9\end{cases}.\]
Figure 4: The first derivative of a sigmoid function with variable \(h\). Left: using Eq. 13. Right: using the fractional derivative of Eq. 8.
\begin{table}
\begin{tabular}{l|c c c c}
\hline \hline
**Act fc** & **Simple CNN** [MiB] & **ResNet-18** [MiB] & **ResNet-50** [MiB] & **EffNet** [MiB] \\
\hline
ReLU & 1 867 & 2 961 & 6 081 & 5 017 \\
\hline
Sig & 1 867 & 2 963 & 6 075 & 5 165 \\
Frac sig & 1 897 & 5 623 & 20 503 & 15 775 \\
\hline
swish & 1 867 & 3 203 & 7 533 & 6 045 \\
Frac swish & 1 893 & 4 597 & 14 685 & 11 585 \\
\hline
Mish & 1 867 & 3 131 & 7 541 & 6 237 \\
Frac Mish & 1 955 & 10 517 & OOM & 26 059 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Memory consumption of different activation functions/models using CIFAR-10.
This formula enables the argument \(x\) to move outside the \([0.1,1.9]\) range when necessary. The slope or leakiness factor of 0.01 allows the fractional derivative order to recover from deviating outside these bounds.
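The constraint itself is a two-sided leaky clamp; in code (a direct transcription of the formula above, with the function name ours):

```python
def constrain_order(x, lo=0.1, hi=1.9, slope=0.01):
    """Leaky piecewise-linear map keeping the fractional order near [lo, hi]."""
    if x <= lo:
        return lo + (x - lo) * slope
    if x >= hi:
        return hi + (x - hi) * slope
    return x
```

For a trainable tensor-valued order, the same map can be expressed with two nested `torch.where` calls so that gradients still flow through the leaky branches.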
All the problems mentioned above can be alleviated by using alternative definitions of the fractional derivative, which is possible in particular cases. For example, the fractional derivative of the swish function has the closed form (Eq. (28) in [20]): \(D^{a}g(x)=g(x)+a\sigma(x)(1-g(x))\).
## 8 Ablation study
Optimizers: One of the aspects we explored is the impact of optimizers on the training process of fractional-derivative activation functions. These functions change shape during training, and it is unclear whether stochastic gradient descent (SGD) allows them to converge to the final fractional derivative order quickly enough. To test this convergence-speed hypothesis, we experimented with various optimizers, including Lion [3], AdaBound [13], Adam [9], AdamW [12], and RAdam [11].
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
**SGD** [\%] & **Lion** [\%] & **AdaBound** [\%] & **Adam** [\%] & **AdamW** [\%] & **RAdam** [\%] \\
\hline
**94.45** & 92.01 & 91.15 & 92.52 & 92.72 & 92.19 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Test accuracy of ResNet-18 trained on CIFAR-10 using different optimizing algorithms.
Figure 5: Different numbers of sigma iterations and \(h\) values. Graphs compare the sigmoid's first derivative (blue) and its 1.1-order derivative (green).
Table 4 displays the test accuracy of ResNet-18 trained on CIFAR-10 using different optimization algorithms. We adhered to the hyperparameter values recommended in the respective optimizer papers and to the general experimental settings outlined in Section 5.
The activation functions in question are notably sensitive to the initial value of the fractional derivative order. To identify the optimal initial settings for both the fractional sigmoid and fractional Mish, we utilized 20% of the CIFAR-10 dataset and the Optuna framework for a hyperparameter search.
As shown in Fig.6, these results align with Fig.2. The fractional sigmoid tends to converge close to the second derivative, while the fractional Mish converges towards the original function.
Block activation functions: In the ResNet architecture, layers are organized into blocks. We investigated the impact of using a distinct activation function with its own fractional derivative order per layer, compared to a shared activation function with one fractional order per block (meaning the same activation-function shape is employed within a block). To quantify the difference, we trained ResNet-18 on CIFAR-10 with the following result:
As Table 5 demonstrates, the difference between employing a fractional derivative order per layer and per block is insignificant. Consequently, we have adopted the per block variant for all our experiments.
## 9 Conclusion
This paper introduces a novel technique for integrating fractional concepts into neural networks, enabling the modification of activation and loss functions. We leverage fractional derivatives to create families of activation and loss functions. In this framework, the fractional derivative order becomes a trainable parameter for activation functions, while it remains non-trainable for loss functions.
Figure 6: CIFAR-10 test accuracy with respect to the initial fractional derivative order of the fractional Mish and sigmoid activation functions. Trained on 20% of data.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
 & **Per layer** [\%] & **Per block** [\%] \\
\hline
**Frac Sig** & 94.29 & 94.39 \\
**Frac Mish** & 94.05 & 94.07 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Fractional derivative order per layer versus per block.
The potential for shaping activation functions offers promise, particularly in situations where conventional functions, such as sigmoid, prove suboptimal.
However, we have encountered various challenges and obstacles throughout our exploration. The computation of fractional derivatives using the Grunwald-Letnikov method results in infinite-series representations, necessitating a trade-off between approximation accuracy and computational feasibility. Additionally, alternative calculation variants often involve multiple variables, making it challenging to integrate fractional derivatives into neural networks seamlessly.
The experimental results presented in our study underscore the potential benefits of fractional sigmoid and the fractional loss functions. Fractional sigmoid exhibits the ability to address some of the limitations encountered by the conventional sigmoid function. Moreover, the fractional loss functions consistently outperform their non-fractional counterparts, although they require the manual selection of fractional derivative orders.
## Acknowlegement
The work of Zahra Alijani was partially supported by project AIMet4AI No. CZ.02.1.01/0.0/0.0/17_049/0008414.
|
2307.10197 | Solvable Neural Network Model for Input-Output Associations: Optimal
Recall at the Onset of Chaos | In neural information processing, an input modulates neural dynamics to
generate a desired output.
To unravel the dynamics and underlying neural connectivity enabling such
input-output association, we proposed an exactly soluble neural-network model
with a connectivity matrix explicitly consisting of inputs and required
outputs.
An analytic form of the response upon the input is derived, whereas three
distinctive types of responses including chaotic dynamics as bifurcation
against input strength are obtained depending on the neural sensitivity and
number of inputs.
Optimal performance is achieved at the onset of chaos, and the relevance of
the results to cognitive dynamics is discussed. | Tomoki Kurikawa, Kunihiko Kaneko | 2023-07-11T17:35:45Z | http://arxiv.org/abs/2307.10197v1 | # Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos
###### Abstract
In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and underlying neural connectivity enabling such input-output association, we proposed an exactly soluble neural-network model with a connectivity matrix explicitly consisting of inputs and required outputs. An analytic form of the response upon the input is derived, whereas three distinctive types of responses including chaotic dynamics as bifurcation against input strength are obtained depending on the neural sensitivity and number of inputs. Optimal performance is achieved at the onset of chaos, and the relevance of the results to cognitive dynamics is discussed.
Neural systems exhibit rich dynamics generated by strong recurrent connections[1]. For performing cognitive tasks in neural systems, sensory inputs modulate the neural dynamics to generate specific output patterns resulting in suitable behaviors. In the association task between the input signals and output choices, for instance, the signal modifies ongoing (spontaneous) neural dynamics, leading to the emergence of an appropriate attractor that guides the correct choice[2; 3], as strongly contrasted with input-output transformation in feed-forward networks [4; 5]. Unveiling the mechanisms behind such modulation and the type of connectivity relevant to it is essential for understanding information processing in neural systems.
One widespread and powerful approach to understanding the information processing involves recurrent neural networks trained with machine learning techniques[2; 3; 6; 7; 8]. However, these trained networks are finely tuned for specific tasks, which masks the connectivity relevant to cognitive functions. There is a need for a simple model to bridge the gap between neural connectivity and neural dynamics in shaping the input/output transformations.
Another approach, the auto-associative memory model, offers network connectivity explicitly represented by memorized patterns, as pioneered in the Hopfield network [9; 10; 11]. In this approach, different fixed-point attractors correspond to distinct memorized patterns, to which neural states converge, depending on their initial states. Thus, neural dynamics themselves are not modulated by the input. The role of spontaneous dynamics without input and the connectivity underlying the change in the dynamics to produce the output remain to be elucidated.
In the present Letter, we propose a neural network model with a novel connectivity matrix designed to generate any memorized patterns when the associated inputs are applied with a certain strength. This connectivity is explicitly represented based on a set of input and output patterns. The recall to the input is given by the location of a fixed point for any strength of the input, which is analytically obtained in our model. Besides this fixed-point, a chaotic attractor also emerges depending on the input strength, the gain parameter, and the number of memorized patterns. We obtain the phase diagram on distinct recall behaviors against these parameters, which demonstrates that the highest performance in the recall is achieved at the onset of the chaos. Finally, computational roles of chaotic internal dynamics are discussed in possible relation to experimental observations.
We consider a neural network model composed of \(N\) neurons. The network is required to generate target patterns \(\mathbf{\xi^{\mu}}\) (\(\mu=1,2,\ldots,M\)) in response to input patterns \(\mathbf{\eta^{\mu}}\), where \(M=\alpha N\), and \(\mathbf{\eta^{\mu}}\) and \(\mathbf{\xi^{\mu}}\) are \(N\)-element column vectors. Each element of these vectors takes a binary value (\(\pm 1\)) that is randomly generated according to the probability distribution \(P(\xi^{\mu}_{i}=\pm 1)=P(\eta^{\mu}_{i}=\pm 1)=1/2\). The neural activity \(x_{i}\) evolves according to the following equation:
\[\dot{x_{i}}=\tanh(\beta(\Sigma_{j}J_{ij}x_{j}+\gamma\eta^{\mu}_{i}))-x_{i}, \tag{1}\]
where \(\beta\) and \(\gamma\) are the gain of the activation function and the input strength, respectively.
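For readers who wish to reproduce the dynamics, Eq. 1 integrates straightforwardly with an Euler scheme; the NumPy sketch below is ours, and the step size and iteration count are illustrative choices rather than the paper's settings:

```python
import numpy as np

def simulate(J, eta, x0, beta=4.0, gamma=1.0, dt=0.1, steps=2000):
    """Euler integration of dx_i/dt = tanh(beta (J x + gamma eta))_i - x_i."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (np.tanh(beta * (J @ x + gamma * eta)) - x)
    return x
```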
To memorize input/output maps between \(\mathbf{\eta}\) and \(\mathbf{\xi}\), we have designed the connectivity matrix \(J\) that is composed of \(\mathbf{\eta}\) and \(\mathbf{\xi}\), in contrast to the Hopfield network that incorporates only \(\mathbf{\xi}\). Further, to mitigate the effects of
potential interference across memories that could impair recall performance [12; 13; 14], the connectivity is designed using a pseudo-inverse matrix of the target-input matrix \(X\) as follows:
\[J= X\left(\begin{array}{cc}I&I\\ -I&-I\end{array}\right)X^{+} \tag{2}\] \[X= [\boldsymbol{\xi}^{\boldsymbol{1}},\boldsymbol{\xi}^{\boldsymbol {2}},\ldots,\boldsymbol{\xi}^{\boldsymbol{M}},\boldsymbol{\eta}^{\boldsymbol{1 }},\boldsymbol{\eta}^{\boldsymbol{2}},\ldots,\boldsymbol{\eta}^{\boldsymbol{M }}], \tag{3}\]
where \(I\) is an \(M\)-dimensional identity matrix, \(X\) is an \((N,2M)\)-matrix and \(X^{+}\triangleq(X^{T}X)^{-1}X^{T}\) is a pseudo-inverse matrix of \(X\), where \(X^{T}\) is the transpose of \(X\). Due to the pseudo-inverse matrix, \(J\boldsymbol{\xi}^{\boldsymbol{\mu}}+\gamma\boldsymbol{\eta}^{\boldsymbol{\mu}}=\boldsymbol{\xi}^{\boldsymbol{\mu}}+(\gamma-1)\boldsymbol{\eta}^{\boldsymbol{\mu}}\) and, consequently, the target \(\boldsymbol{\xi}^{\boldsymbol{\mu}}\) is a fixed point under \(\boldsymbol{\eta}^{\boldsymbol{\mu}}\) with \(\gamma=1\) for \(\beta\to\infty\), based on the properties of \(\tanh(\beta x)\). This property applies to all \(\mu\), indicating that all \(\boldsymbol{\xi}^{\boldsymbol{\mu}}\) are fixed points under the corresponding inputs with \(\gamma=1\). In other words, all associations are successfully memorized in this model. For the pseudo-inverse to exist, however, the \(2M\) column vectors must be linearly independent, which requires \(2M\leq N\). As a consequence, at best \(M=N/2\) associations are allowed, and the memory capacity is bounded by \(\alpha=0.5\).
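Constructing \(J\) from Eqs. 2-3 is a few lines of linear algebra; the sketch below (function name ours) also records the fixed-point property used in the text:

```python
import numpy as np

def build_connectivity(xis, etas):
    """J = X [[I, I], [-I, -I]] X^+ from targets and inputs.

    xis, etas: (N, M) arrays whose columns are the +/-1 target and input
    patterns; 2M <= N and linear independence are required so that
    X^+ = (X^T X)^{-1} X^T exists.
    """
    N, M = xis.shape
    X = np.hstack([xis, etas])              # (N, 2M)
    I = np.eye(M)
    B = np.block([[I, I], [-I, -I]])        # (2M, 2M)
    return X @ B @ np.linalg.pinv(X)

# Since X^+ X = I_{2M}, one can verify J xi = J eta = xi - eta, and hence
# J xi + gamma * eta = xi + (gamma - 1) * eta, as stated above.
```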
How does the network respond to the input away from \(\gamma=1\) and \(\beta\to\infty\)? We now derive an analytical form of the fixed point of the neural dynamics under input for any value of \(\gamma\) with finite \(\beta\). To this end, we consider \(x^{\text{fp}}(\gamma)=(a(\gamma)\boldsymbol{\xi}+b(\gamma)\boldsymbol{\eta})\) and derive \(a(\gamma)\) and \(b(\gamma)\) that satisfy the fixed-point condition for any \(\gamma\), as follows. Below, the superscript \(\mu\) is omitted for clarity unless otherwise noted, since the result does not depend on \(\mu\).
\[Jx^{\text{fp}}=(a+b)(\boldsymbol{\xi}-\boldsymbol{\eta}), \tag{4}\]
and, subsequently, by substituting \(x^{\text{fp}}\) to \(\dot{x}=0\) in Eq. 1,
\[a\boldsymbol{\xi}+b\boldsymbol{\eta}=f((a+b)(\boldsymbol{\xi}-\boldsymbol{ \eta})+\gamma\boldsymbol{\eta}), \tag{5}\]
where \(f(x)=\tanh(\beta x)\). Considering \(i\)-th elements such that \(\xi_{i}\) equals \(\eta_{i}\), \(a+b=f(\gamma)\) should be satisfied and, similarly, by considering \(i\)-th elements such that \(\xi_{i}\) equals \(-\eta_{i}\), \(a-b=f(2(a+b)-\gamma)\) should be satisfied. Thus we derive \(a\) and \(b\) as
\[a =(f(\gamma)+f(2f(\gamma)-\gamma))/2, \tag{6}\] \[b =(f(\gamma)-f(2f(\gamma)-\gamma))/2, \tag{7}\]
where \(a\) and \(b\) are uniquely determined and depend solely on \(\gamma\) for a given activation function \(f(x)\) while they are independent of \(N\), \(\alpha\). It is straightforward to check that Eq.4 is satisfied for any \(\boldsymbol{\xi}\) and \(\boldsymbol{\eta}\). Although not proven analytically, we have confirmed numerically that \(x^{\text{fp}}\) is a unique fixed-point for given \(\mu\) and \(\gamma\). This rigorous solution is applicable not only to \(\tanh(\cdot)\) but also to any arbitrary function, as long as \(\boldsymbol{\xi}\) and \(\boldsymbol{\eta}\) are binary vectors. As \(\gamma\) increases from zero, \(a(\gamma)\) increases and takes a peak for \(\gamma=1\), while \(b(\gamma)\) increases more slowly as plotted in Fig.1A. For \(\gamma\) less than 2, \(a(\gamma)\) is larger than \(b(\gamma)\) and, oppositely, beyond \(\gamma=2\), \(b(\gamma)\) is larger than \(a(\gamma)\)[15]. The overlap of \(x^{\text{fp}}\) with \(\boldsymbol{\xi}\), termed \(m\triangleq\Sigma_{x}x^{\text{fp}}_{i}\xi_{i}/N\), is also plotted in Fig. 1B. For a given \(\beta\), \(m\) increases up to \(\gamma=1\) and, subsequently, decreases. As \(\beta\) increases, the curve of \(m\) is steeper so that \(x^{\text{fp}}\) nearly equals 1 even for the weak input. The overlap of \(x^{\text{fp}}\) with \(\boldsymbol{\eta}\) slowly increases with \(\gamma\), followed by a sharp rise at \(\gamma\approx 2\) beyond which it approaches unity, i.e., the network just outputs the input as it is (Fig. 1B). Thus, in the following part, we consider the range of \(0\leq\gamma\leq 2\).
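The closed forms in Eqs. 6-7 are trivial to evaluate; the brief sketch below (ours) computes \(a(\gamma)\) and \(b(\gamma)\), and hence the overlap curves of Fig. 1:

```python
import numpy as np

def fixed_point_coeffs(gamma, beta):
    """a(gamma), b(gamma) of the fixed point x_fp = a*xi + b*eta (Eqs. 6-7)."""
    f = lambda u: np.tanh(beta * u)
    a = 0.5 * (f(gamma) + f(2 * f(gamma) - gamma))
    b = 0.5 * (f(gamma) - f(2 * f(gamma) - gamma))
    return a, b

# For large N and independent random patterns, the overlap of x_fp with
# the target is m ~ a(gamma), since the b*eta term averages out.
```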
Although \(x^{\text{fp}}\) is a fixed point for any value of parameters, it is necessary to ascertain its stability and the existence of other attractors, to assess whether the recall of \(x^{\text{fp}}\) really works from any initial states. We numerically solved Eq. (1) (\(N=2048\), unless otherwise noted), and found a chaotic attractor in addition to \(x^{\text{fp}}\). By varying \(\alpha\) and \(\beta\), three types of recall behaviors are observed depending on the stability of these two attractors, which are characterized by the distinct bifurcation diagrams of \(m\) against \(\gamma\) (as shown in Fig. 2A(i)-(iii)): (i) Stable recall of \(x^{\text{fp}}\) for any strength of the input: \(x^{\text{fp}}\) is a unique attractor for any \(\gamma\). (ii) Stable recall of \(x^{\text{fp}}\) only for a certain range of \(\gamma\): \(x^{\text{fp}}\) is a unique attractor for \(\gamma\sim 1\), whereas for smaller \(\gamma\) the chaotic attractor appears, which exhibits a smaller overlap with \(\xi\) compared with the overlap of \(x^{\text{fp}}\)[16]. For smaller \(\gamma\) values, the neural state fails to converge into \(x^{\text{fp}}\), and instead, it converges into the chaotic attractor from most initial states, meaning that the network fails to recall the target. Still, for \(\gamma\sim 1\), the neural state from any initial state converges to \(x^{\rm fp}\), whose overlap with the target is close to unity, resulting in the recall of the target. (iii) No stable recall of \(x^{\rm fp}\) for any \(\gamma\): the chaotic attractor exists across all ranges of \(\gamma\), even though \(x^{\rm fp}\) coexists around \(\gamma=1\). The chaotic attractor has a much larger basin of attraction than \(x^{\rm fp}\) even for \(\gamma\sim 1\) (Fig. S1C). Consequently, the recall of the target is impaired.
Figure 1: Analytically obtained response of the network to input with increasing the input strength \(\gamma\). A) \(a(\gamma)\) and \(b(\gamma)\) in Eqs.6 and 7, for \(\beta=1\). B) The overlaps of \(x^{\text{fp}}\) with a target and an input for increasing \(\gamma\), plotted for different \(\beta\) in blue and red, respectively.
To analyze these three behaviors, we first explored the stability of \(x^{\rm fp}\) and of the chaotic attractor across a range of \(\beta\) with a constant \(\alpha=0.38\). We found that for a small value of \(\beta\) (\(\beta=0.8\)), the stable recall (i) is achieved. The neural states from any initial states for any \(\gamma\) converge rapidly into \(x^{\rm fp}\) (as shown in Fig. S1A), indicating high robustness in the success recall. However, the degree of overlap with the target is notably below the unity.
As \(\beta\) increases, \(x^{\rm fp}\) approaches the target for all ranges of \(\gamma\). Beyond the critical \(\beta\), denoted by \(\beta_{F}\), \(x^{\rm fp}\) turns to be unstable for a certain range of \(\gamma\), while the chaotic attractor emerges, corresponding to the recall type (ii) as shown in Fig. 2A(ii). The overlap of the chaotic attractor with the target is much lower than that of \(x^{\rm fp}\). Although, for \(\gamma=1\), \(x^{\rm fp}\) is the unique attractor, there exists long-term transient chaos before the neural state converges into \(x^{\rm fp}\) (see Fig. S1B).
With the further increase in \(\beta\), the range of \(\gamma\) within which the chaotic attractor exists expands, eventually, covering all the range of \(0\leq\gamma\leq 2\) at another critical value of \(\beta\) (termed \(\beta_{I}\)). Beyond \(\beta_{I}\), the system exhibits the recall type (iii). Even for \(\gamma=1\), the basin of the chaotic attractor covers the full state space, and most orbits from random initial conditions converge into it (Fig. S1C). Thus, the recall of the target almost fails.
To comprehensively understand the recall behavior across \(\beta\) and \(\gamma\), we draw the regions where the chaotic attractor is present in Fig. 2B. We also investigated the stability of \(x^{\rm fp}\), which is, however, not directly related to the type of recall and is shown in Fig. S2. In the area above the curve, the chaotic attractor is present. \(\beta_{F}\) is the minimum value of the curve in \(\beta\leq 1\), whereas \(\beta_{I}\) is the maximum value of \(\beta\) on the curve at \(\gamma=1\). These two critical values of \(\beta\) determine the phase boundary of three recall behaviors (i)-(iii).
With an increase in \(\beta\), \(x^{\rm fp}\) approaches the target, and accordingly, the final states in all the recall trials overlap almost perfectly with the target below \(\beta=\beta_{I}\) at which the chaotic attractor emerges (Fig. 2C). As \(\beta\) increases beyond \(\beta_{I}\), the basin of the \(x^{\rm fp}\) attractor shrinks, while that of the chaotic attractor expands. Consequently, the overlap between the final state and the target averaged over randomly chosen initial states significantly decreases, as depicted in Fig. 2C. Thus, the recall performance reaches its peak (i.e., at the onset of chaos) across all ranges of \(\gamma\).
So far, we presented the results for a fixed number of memories \(\alpha N\) (\(\alpha=0.38\)). Note that in standard associative memory models such as the Hopfield network, recall fails beyond a critical number of embedded memories. We next analyze the change in the recall process with increasing \(\alpha\) and demonstrate that it exhibits behavior similar to the change with increasing \(\beta\): three types of recall behavior emerge as \(\alpha\) varies, as shown in Fig. 3A. For small \(\alpha\), \(x^{\rm fp}\) is stable and a unique attractor for any \(\gamma\) (type (i)). With the increase in \(\alpha\), the chaotic attractor emerges within a certain range of \(\gamma\) (type (ii)), and this range expands (Fig. 3B). Finally, the range within which the chaotic attractor is present covers the whole range of \(\gamma\) (type (iii)). In contrast to the clear change in the recall process with increasing \(\beta\), the value of \(x^{\rm fp}\) remains unchanged during the increase in \(\alpha\).
Figure 2: Three recall behaviors in response to \(\mathbf{\eta}\) depending on \(\beta\). A) The overlaps \(m\) against the increase in \(\gamma\) are shown for \(\beta=0.8,4,4.7\) in (i)-(iii) panels, respectively. In each panel, each of the black dots for a given \(\gamma\) represents the overlap of \(x\) with \(\mathbf{\xi}^{1}\) averaged over 100 unit-time after the transient period. To confirm the stability of \(x^{\rm fp}\) and explore another attractor, we sampled the dynamics from 20 random initial states in addition to an initial state equal to \(x^{\rm fp}\). The dotted lines represent the overlap of \(x^{\rm fp}\) with \(\mathbf{\xi}^{1}\) as also shown in panel C. B) Stability of the chaotic attractor against \(\beta\) and \(\gamma\). The chaotic attractor is present above the curve. All results are obtained for \(\alpha=0.38\). C) The overlap at \(\gamma=1\) with the increase in \(\beta\), as in A. Dots represent the overlaps obtained from 100 randomly chosen initial states, while the solid line exhibits the overlap averaged over them.
We now focus on the behavior for \(\gamma=1\) and explore the number of memories recalled successfully. We found that at \(\alpha=\alpha_{C}(\beta)\) the chaotic attractors emerge for all embedded patterns \(\mu\) (\(\alpha_{C}(\beta)\) is obtained by solving \(\beta=\beta_{I}(\alpha)\)). For \(\alpha<\alpha_{C}\), the fixed points \(x^{\mathrm{fp},\mu}=(a\mathbf{\xi}^{\mu}+b\mathbf{\eta}^{\mu})\) for all \(\mu\) are stable and all the embedded patterns are successfully recalled, whereas for \(\alpha>\alpha_{C}\), almost all the recall trials fail for all patterns due to the emergence of the chaotic attractors whose basins of attraction are much larger than those of \(x^{\mathrm{fp},\mu}\). The number of successfully recalled memories increases linearly below \(\alpha=\alpha_{C}(\beta)\) and then drops to zero drastically (see Fig. 3C,) signifying that \(\alpha_{C}(\beta)N\) is the memory capacity in this model (e.g., \(\alpha_{C}(4)=0.38\)). \(\alpha_{C}(\beta)\) decreases towards a certain finite value \(\alpha_{C}(\infty)\) with the increase in \(\beta\) as analyzed in detail in the following.
We finally show the phase diagram of the recall process against \(\alpha\) and \(\beta\) by identifying \(\beta_{F}(\alpha)\) and \(\beta_{I}(\alpha)\) (\(\alpha_{C}(\beta)\) is the inverse function of \(\beta_{I}(\alpha)\)), as shown in Fig 3D. As \(\alpha\) approaches zero, \(\beta_{F}\) diverges, meaning that if \(\alpha\) is set to a sufficiently small value, \(x^{\rm fp}\) is stable for all \(\gamma\) even for quite large \(\beta\). In this limit, \(x^{\rm fp}\) approaches a step function: \(x^{\rm fp}=1\) for \(0<\gamma<2\) and \(x^{\rm fp}=0\) otherwise. Consequently, the network perfectly recalls the target for \(0<\gamma<2\).
\(\beta_{I}\) increases drastically as \(\alpha\) decreases from \(0.5\) and diverges at \(\alpha_{C}(\infty)\). For \(\alpha\) below \(\alpha_{C}(\infty)\), the neural state converges to \(x^{\mathrm{fp}}\) for \(\gamma=1\) even for \(\beta\rightarrow\infty\). The asymptotic analysis demonstrates that \(\alpha_{C}(\infty)\sim 0.340\) for \(N\rightarrow\infty\) (See Fig. S3), indicating that the memory capacity is \(\alpha=0.340\) when \(\beta\) is sufficiently large.
In summary, we present an analytically solvable neural network model for I/O associations in which each input stabilizes the corresponding target pattern as a unique fixed point, up to the memory capacity. The connectivity in this network consists of both target and input patterns, via the pseudo-inverse matrix, which allows for rigorous recall of any (correlated) target patterns. This is in contrast to our previous model [17], which is valid only for mutually orthogonalized patterns. Using this model, we derive the response to the input as an analytical expression for the fixed point at any input strength, whereas previous work explored response dynamics in random networks (without embedded patterns) [18; 19; 20] and low-rank networks [21; 22]. We also numerically demonstrate the emergence of an additional chaotic attractor. Through exploration of the stability of these attractors, we identified three distinct recall processes.
Introducing the pseudo-inverse matrix (\(X^{+}\) in Eq. 3) into the connectivity generally requires global information about the network, which may be difficult to implement biologically (but see [14] for the Hopfield network). In our previous studies [23; 24], however, a Hebbian and anti-Hebbian learning rule that requires only local information can shape a connectivity similar to the current one. Still, filling the gap between the learning-shaped connectivity and the current connectivity needs further study.
Here, we uncovered three phases of recalls, concerning the dominance of the chaotic attractor. Interestingly, the recall performance is maximized at the onset of the chaos, where the spontaneous chaotic activity is bifurcated to the fixed point that corresponds to the target output. In fact, such transitions of the activities with changes in the stimuli are observed in many cortical areas[25; 26] These are consistent with our findings of the optimal performance under sponta
Figure 3: Three recall behaviors in response to \(\mathbf{\eta}\) depending on \(\alpha\). A) The overlaps against the increase in \(\gamma\) are shown for \(\alpha=0.05,0.3,0.4\) in (i)-(iii) panels, respectively. Black dots and dotted lines exhibit the neural states and \(x^{\mathrm{fp}}\) in the same way as in Fig. 2A. B) Stability of the chaotic attractor against \(\alpha\) and \(\gamma\). The chaotic attractor is present above the red curve. All results are obtained for \(\beta=4\). C) (Upper panel) Bifurcation diagram of the overlap at \(\gamma=1\) with the increase in \(\alpha\) is shown in the same way as in A. (Lower panel) The number of memorized patterns (i.e., the number of \(x^{\mathrm{fp},\mu}\) (\(\mu=1,\ldots,\alpha N\) ) into which the neural state converges) normalized by \(N\) is plotted. Filled lines in gray and black show the behavior for \(\beta=4,32\), respectively. The dotted line represents the maximum number of possible memories normalized by \(N\) (i.e., \(\alpha\)). D) Three recall behaviors in response to \(\mathbf{\eta}\). A phase diagram of the recall regimes (i,ii,iii) against \(\alpha\) and \(\beta\). \(\beta_{F}\) (orange) gives the boundary of the stable \(\mathbf{x}^{\mathrm{fp}}\), while \(\beta_{I}\) (magenta) shows the border of the impaired recall regime.
The roles of chaotic dynamics in the response and in learning, however, need to be further elucidated. Indeed, the relevance of spontaneous chaotic (and high-dimensional) dynamics to computational neuroscience has been discussed, for instance, in reservoir computing[27; 28; 29; 30], memory[31], mixed selectivity for efficient separation[32], sampling[33], neural avalanches[34; 35], and learning[36; 37]. Our study has demonstrated a new role of chaotic dynamics in recall performance.
Although Hopfield networks[9; 10] and their variants[12; 13; 14] have contributed greatly to associative memory, the modulation of internal dynamics by external input, which is essential for performing cognitive functions, has not been included. Our model presents a novel prototype of connectivity underlying such modulation, which will advance our understanding of neural processing.
###### Acknowledgements.
T.K. and K.K. are supported by JSPS KAKENHI (No.20H00123, T.T and K.K) and Novo Nordisk Foundation (0065542, K.K)
|
2308.09531 | Privacy-Preserving 3-Layer Neural Network Training | In this manuscript, we consider the problem of privacy-preserving training of
neural networks in the mere homomorphic encryption setting. We combine several
existing techniques available, extend some of them, and finally enable the
training of 3-layer neural networks for both the regression and classification
problems using mere homomorphic encryption technique. | John Chiang | 2023-08-18T13:11:23Z | http://arxiv.org/abs/2308.09531v2 | # Privacy-Preserving 3-Layer Neural Network Training using Mere Homomorphic Encryption
###### Abstract
In this manuscript, we consider the problem of privacy-preserving training of neural networks in the mere homomorphic encryption setting. We combine several existing techniques available, extend some of them, and finally enable the training of 3-layer neural networks for both the regression and classification problems using the mere homomorphic encryption technique.
## 1 Introduction
### Background
In the age of artificial intelligence and machine learning, neural networks have emerged as state-of-the-art models, exhibiting exceptional predictive capabilities for both regression and classification tasks. These networks are favored across diverse domains like healthcare and finance due to their remarkable performance. However, achieving high accuracy in training neural network models demands access to substantial volumes of private and sensitive data. This creates a requirement for individuals and institutions to securely share sensitive data, thereby extracting valuable insights from it. Homomorphic Encryption emerges as a straightforward solution for preserving privacy during neural network training in such scenarios, offering a high level of security.
### Related work
Several papers already discussed privacy-preserving training of binary logistic regression [23; 5; 7; 18; 1; 24; 11; 20] and homomorphic inference on neural networks [13; 17; 22]. Sav et al. [26] address the challenge of preserving privacy during the training and evaluation of neural networks within a multi-party federated learning framework.
The research most closely related to this paper is the work of Chiang et al. [16]. They also employed machine learning based on homomorphic encryption but used a faster gradient variant called Quadratic Gradient [11; 12; 15], extending the (Simplified) Fixed Hessian [5; 4; 3; 2]. However, it is important to note that their work only completed the training for classification problems using multiclass logistic regression, namely a simple 2-layer neural network without any hidden layer.
### Contributions
In this study, we extend the homomorphic multiclass logistic regression training of [16] to 3-layer neural networks with a single hidden layer of various numbers of nodes, for both regression and classification problems. We demonstrate the feasibility of training a homomorphic 3-layer neural network in the HE-encrypted domain and extend the loss function, Squared Likelihood Error, to two variants that can be used for classification problems in privacy-preserving neural network training
using mere homomorphic encryption. The Mean Squared Error (MSE) loss function is HE-friendly and can be applied for regression tasks in this paper.
## 2 Preliminaries
Let "\(\odot\)" denote the component-wise multiplication between matrices. The Sigmoid function is represented as follows:
\[Sigmoid(x)=\frac{1}{1+\exp(-x)}.\]
### Fully Homomorphic Encryption
Homomorphic encryption is an important technique in cryptography, often hailed as its Holy Grail. It is a special encryption method that allows calculations to be performed on data in its encrypted state, without the need to decrypt it. This means that, using homomorphic encryption, various operations such as addition and multiplication can be performed directly on encrypted data. This provides a powerful tool for protecting data privacy, as the data remains encrypted at all times.
HE is especially useful for handling sensitive data, such as medical records and financial information. However, due to its computational complexity, homomorphic encryption can be slower than traditional encryption methods. In recent years, with the ongoing development of the technology [19; 6; 27; 9], researchers have worked continuously to improve the efficiency of homomorphic encryption and make it more feasible in practical applications.
### Database Encoding Method
The current efficient database encoding method [23] for a given dataset matrix \(M\) first flattens the matrix \(M\) into a vector and encrypts this vector into a single ciphertext, finally viewing the resulting ciphertext as encrypting the matrix directly. This database encoding method makes full use of the HE computation and storage resources:
\[M=\left[\begin{array}{ccccc}x_{10}&x_{11}&\ldots&x_{1d}\\ x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\]
\[\Big{\downarrow}\]
Flatten the input dataset in a row-by-row manner
\[\left[\begin{array}{ccccccccc}x_{10}&x_{11}&\ldots&x_{1d}&x_{20}&x_{21}& \ldots&x_{2d}&\ldots&\ldots&\ldots&x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\]
\[\Big{\downarrow}\]
Encrypt the row vector
\[Enc\left[\begin{array}{ccccccccc}x_{10}&x_{11}&\ldots&x_{1d}&x_{20}&x_{21}& \ldots&x_{2d}&\ldots&\ldots&\ldots&x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\]
\[\Big{\downarrow}\]
Seen as encrypting the dataset directly
\[Enc\left[\begin{array}{ccccc}x_{10}&x_{11}&\ldots&x_{1d}\\ x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]=EncM.\]
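As a plaintext sketch of this encoding (no real encryption is performed; the `Enc` below is a hypothetical stand-in for an actual HE encryption call), the whole procedure is just a row-major flattening followed by re-viewing the slot vector as a matrix:

```python
import numpy as np

# Plaintext sketch of the database encoding; Enc is a hypothetical
# placeholder for a real HE encryption call (e.g., in HEAAN).
Enc = lambda v: v.copy()

n, d1 = 3, 4                                  # n rows, 1+d columns
M = np.arange(float(n * d1)).reshape(n, d1)   # toy dataset matrix

ct = Enc(M.flatten())                # one ciphertext holding all n*(1+d) slots
M_view = ct.reshape(n, d1)           # ... seen as encrypting the matrix directly
assert np.array_equal(M_view, M)
```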
Based on this database encoding, two simple operations, the complete row shifting and the \(incomplete\) column shifting, can be obtained by shifting the encrypted vector by two different
positions \((1+d)\) and \(1\), respectively:
\[Enc\left[\begin{array}{cccc}x_{10}&x_{11}&\ldots&x_{1d}\\ x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\stackrel{{\text{ complete row shifting}}}{{\longmapsto}}Enc\left[\begin{array}{cccc}x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\\ x_{10}&x_{11}&\ldots&x_{1d}\end{array}\right],\]
\[Enc\left[\begin{array}{cccc}x_{10}&x_{11}&\ldots&x_{1d}\\ x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\stackrel{{\text{ incomplete column shifting}}}{{\longmapsto}}Enc\left[\begin{array}{cccc}x_{11}&\ldots&x_{1d}&x_{20}\\ x_{21}&\ldots&x_{2d}&x_{30}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n1}&\ldots&x_{nd}&x_{10}\end{array}\right].\]
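In the same plaintext sketch, both shifts are cyclic rotations of the underlying slot vector: rotating left by \(1+d\) slots yields the complete row shifting, and rotating left by \(1\) slot yields the incomplete column shifting.

```python
import numpy as np

n, d1 = 3, 4                                  # d1 = 1 + d slots per row
ct = np.arange(float(n * d1))                 # plaintext stand-in for the ciphertext

row_shift = np.roll(ct, -d1).reshape(n, d1)   # complete row shifting (rotate by 1+d)
col_shift = np.roll(ct, -1).reshape(n, d1)    # incomplete column shifting (rotate by 1)
```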
The complete column shifting to obtain the matrix \(Z^{{}^{\prime\prime\prime}}\) can also be achieved by two Rot, two cMult, and an Add.
\[Enc\left[\begin{array}{cccc}x_{10}&x_{11}&\ldots&x_{1d}\\ x_{20}&x_{21}&\ldots&x_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n0}&x_{n1}&\ldots&x_{nd}\end{array}\right]\stackrel{{\text{ complete column shifting}}}{{\longmapsto}}Enc\left[\begin{array}{cccc}x_{11}&\ldots&x_{1d}&x_{10}\\ x_{21}&\ldots&x_{2d}&x_{20}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n1}&\ldots&x_{nd}&x_{n0}\end{array}\right].\]
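The two-rotation recipe can likewise be sketched in plaintext: rotate left by \(1\) for the interior of each row, rotate right by \(d\) for the wrapped first entries, and combine the two with complementary \(0/1\) masks (the two cMult steps) before adding.

```python
import numpy as np

n, d1 = 3, 4                                  # d1 = 1 + d
d = d1 - 1
ct = np.arange(float(n * d1))                 # plaintext stand-in for the ciphertext

j = np.arange(n * d1) % d1                    # within-row slot index
mask_body = (j != d).astype(float)            # slots taking the left-rotated value
mask_wrap = (j == d).astype(float)            # row-end slots take the wrapped value

# two Rot + two cMult + one Add
out = np.roll(ct, -1) * mask_body + np.roll(ct, d) * mask_wrap
print(out.reshape(n, d1))                     # each row cyclically shifted left by one
```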
The same database encoding method also facilitates the development of other procedures [20, 13], such as SumRowVec and SumColVec to compute the sum of each row and column, respectively.
To address the homomorphic evaluation of training 3-layer neural networks for regression and classification tasks, we introduce two procedures, KeepOnly and RollFill:
\[Enc\left[\begin{array}{cccccccc}x_{10}&x_{11}&\ldots&x_{ij}&\ldots&x_{n0}&x _{n1}&\ldots&x_{nd}\end{array}\right]\]
\[\odot\]
\[\left[\begin{array}{cccccccc}0&0&\ldots&1&\ldots&0&0&\ldots&0\end{array}\right]\]
\[\parallel\]
\[Enc\left[\begin{array}{cccccccc}0&0&\ldots&x_{ij}&\ldots&0&0&\ldots&0\end{array}\right]\]
For matrix multiplication under this encoding, encrypting the transpose of the first matrix is equally viable, leading to a multiplication algorithm similar to Algorithm 2 presented in [13].
Furthermore, should either of the matrices prove too large to be encrypted into a single ciphertext, an alternative approach encrypts the matrices into two separate groups, labeled Team \(A\) and Team \(B\), each comprising multiple ciphertexts. In this context, the encoding methodology referred to as Double Volley Revolver [16] comes into play; it encompasses two distinct loops. The outer loop manages calculations involving ciphertexts from both teams, while the inner loop performs calculations on two sub-matrices encrypted by the respective ciphertexts \(A_{[i]}\) and \(B_{[j]}\), following the fundamental Volley Revolver algorithm.
#### 2.3.1 Datasets
We adopt three common datasets in our experiments: Boston Housing Prices, Iris and MNIST. Table 1 describes the three datasets.
## 3 Technical details
### 3-Layer Neural Networks
Neural networks (NNs) are powerful machine learning algorithms engineered to capture complex nonlinear connections between input and output data, even though they may lack reasonable interpretability [14, 21, 8]. A typical NN consists of a series of layers, where iterative feedforward and backpropagation steps implement linear and non-linear transformations (activations) on input data. Each training iteration comprises a forward pass and a backward pass.
In our implementations, we utilize a 3-layer neural network comprising a single hidden layer with 12 nodes for the regression task using the Boston Housing Prices dataset. Similarly, for the classification tasks involving the Iris and MNIST datasets, we employ a 3-layer neural network featuring a single hidden layer equipped with 120 nodes.
Figure 1 illustrates the neural network architecture created for the regression task, whereas Figure 2 depicts the network architecture designed for the classification challenges.
Given the matrix \(X\in\mathbb{R}^{n\times(1+d)}\), the column vector \(Y\in\mathbb{N}^{n\times 1}\), the matrix \(\bar{Y}\in\mathbb{R}^{n\times c}\), the matrix \(W\in\mathbb{R}^{m\times(1+d)}\), and the matrix \(V\in\mathbb{R}^{c\times(1+m)}\), we interpret these entities as follows: \(X\) represents the dataset, \(Y\) denotes the class labels in column vector format, \(\bar{Y}\) signifies the one-hot encoding of the class labels, \(W\) stands for the weight matrix connecting the first two layers of the neural network, and \(V\) symbolizes the weight matrix linking the last two layers. Here, \(m\) is the number of nodes in the hidden layer.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Dataset & No. Training Samples & No. Testing Samples & No. Features & No. Classes \\ \hline Boston Housing Prices & 506 & - & 13 & - \\ \hline Iris & 150 & - & 4 & 3 \\ \hline MNIST & 60,000 & 10,000 & 28\(\times\)28 & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Characteristics of the several datasets used in our experiments
Figure 1: The neural network architecture created for the regression task
Figure 2: The network architecture designed for the classification challenges
\[X=\begin{bmatrix}x_{[1]}\\ x_{[2]}\\ \vdots\\ x_{[n]}\end{bmatrix}=\begin{bmatrix}1&x_{[1][1]}&\cdots&x_{[1][d]}\\ 1&x_{[2][1]}&\cdots&x_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ 1&x_{[n][1]}&\cdots&x_{[n][d]}\end{bmatrix}=\begin{bmatrix}x_{[1][0]}&x_{[1][1]}&\cdots&x_{[1][d]}\\ x_{[2][0]}&x_{[2][1]}&\cdots&x_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ x_{[n][0]}&x_{[n][1]}&\cdots&x_{[n][d]}\end{bmatrix},\]
\[Y=\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{bmatrix}\xrightarrow{\text{one-hot encoding}}Y=\begin{bmatrix}y_{[1]}\\ y_{[2]}\\ \vdots\\ y_{[n]}\end{bmatrix}=\begin{bmatrix}y_{[1][1]}&y_{[1][2]}&\cdots&y_{[1][c]}\\ y_{[2][1]}&y_{[2][2]}&\cdots&y_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ y_{[n][1]}&y_{[n][2]}&\cdots&y_{[n][c]}\end{bmatrix},\]
\[W=\begin{bmatrix}w_{[1]}\\ w_{[2]}\\ \vdots\\ w_{[m]}\end{bmatrix}=\begin{bmatrix}w_{[1][0]}&w_{[1][1]}&\cdots&w_{[1][d]}\\ w_{[2][0]}&w_{[2][1]}&\cdots&w_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ w_{[m][0]}&w_{[m][1]}&\cdots&w_{[m][d]}\end{bmatrix},\]
\[V=\begin{bmatrix}v_{[1]}\\ v_{[2]}\\ \vdots\\ v_{[c]}\end{bmatrix}=\begin{bmatrix}v_{[1][0]}&v_{[1][1]}&\cdots&v_{[1][m]}\\ v_{[2][0]}&v_{[2][1]}&\cdots&v_{[2][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[c][0]}&v_{[c][1]}&\cdots&v_{[c][m]}\end{bmatrix}.\]
NN training consists of two stages: forward inference and backward training.
#### Forward inference
Step 1:
\[X\times W^{\intercal}=\begin{bmatrix}z_{[1][1]}&z_{[1][2]}&\cdots&z_{[1][m]} \\ z_{[2][1]}&z_{[2][2]}&\cdots&z_{[2][m]}\\ \vdots&\vdots&\ddots&\vdots\\ z_{[n][1]}&z_{[n][2]}&\cdots&z_{[n][m]}\end{bmatrix}=Z_{0},\]
Step 2: We select the square function as the activation function, denoted by \(\phi(x)=x^{2}\):
\[Z_{0}\xrightarrow{\phi(Z)}Z_{1}=\begin{bmatrix}\phi(z_{[1][1]})&\phi(z_{[1][2 ]})&\cdots&\phi(z_{[1][m]})\\ \phi(z_{[2][1]})&\phi(z_{[2][2]})&\cdots&\phi(z_{[2][m]})\\ \vdots&\vdots&\ddots&\vdots\\ \phi(z_{[n][1]})&\phi(z_{[n][2]})&\cdots&\phi(z_{[n][m]})\end{bmatrix},\]
Step 3:
\[Z_{1}\mapsto Z=\begin{bmatrix}1&\phi(z_{[1][1]})&\phi(z_{[1][2]})&\cdots&\phi (z_{[1][m]})\\ 1&\phi(z_{[2][1]})&\phi(z_{[2][2]})&\cdots&\phi(z_{[2][m]})\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\phi(z_{[n][1]})&\phi(z_{[n][2]})&\cdots&\phi(z_{[n][m]})\end{bmatrix},\]
Step 4:
\[Z\times V^{\intercal}=\bar{Y}=\begin{bmatrix}\bar{y}_{[1][1]}&\bar{y}_{[1][2]} &\cdots&\bar{y}_{[1][c]}\\ \bar{y}_{[2][1]}&\bar{y}_{[2][2]}&\cdots&\bar{y}_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ \bar{y}_{[n][1]}&\bar{y}_{[n][2]}&\cdots&\bar{y}_{[n][c]}\end{bmatrix},\]
Backward training: \(\nabla W=(S\times\bar{V}\odot Z^{{}^{\prime}})^{\intercal}\times X\) and \(\nabla V=S^{\intercal}\times Z\), where \(\bar{V}\) is
\[\bar{V}=\begin{bmatrix}v_{[1][1]}&v_{[1][2]}&\cdots&v_{[1][m]}\\ v_{[2][1]}&v_{[2][2]}&\cdots&v_{[2][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[c][1]}&v_{[c][2]}&\cdots&v_{[c][m]}\end{bmatrix},\]
and \(Z^{{}^{\prime}}\) is obtained from
\[Z_{1}\xrightarrow{\nabla}Z^{{}^{\prime}}=\begin{bmatrix}\phi^{{}^{\prime}}( z_{[1][1]})&\phi^{{}^{\prime}}(z_{[1][2]})&\cdots&\phi^{{}^{\prime}}(z_{[1][m]}) \\ \phi^{{}^{\prime}}(z_{[2][1]})&\phi^{{}^{\prime}}(z_{[2][2]})&\cdots&\phi^{{} }(z_{[2][m]})\\ \vdots&\vdots&\ddots&\vdots\\ \phi^{{}^{\prime}}(z_{[n][1]})&\phi^{{}^{\prime}}(z_{[n][2]})&\cdots&\phi^{{} ^{\prime}}(z_{[n][m]})\end{bmatrix}.\]
Finally, various first-order gradient descent algorithms with a suitable learning rate schedule can be used to update the NN parameters \(W\) and \(V\). In our work, we simply adopt vanilla gradient descent with a fixed learning rate \(\eta\): \(W=W-\eta\cdot\nabla W\) and \(V=V-\eta\cdot\nabla V\).
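As a plaintext reference (no encryption involved), one training iteration with the square activation \(\phi(x)=x^{2}\) and the residual matrix \(S=2(\bar{Y}-Y)\) used later for the second SLE variant can be sketched as follows; the shapes match the matrices defined above.

```python
import numpy as np

def train_step(X, Y, W, V, eta=0.01):
    """One plaintext gradient step for the 3-layer NN with phi(x) = x^2.
    Shapes: X (n, 1+d), Y (n, c), W (m, 1+d), V (c, 1+m)."""
    # forward inference
    Z0 = X @ W.T                                   # pre-activations, (n, m)
    Z1 = Z0 ** 2                                   # phi(Z0), square activation
    Z = np.hstack([np.ones((X.shape[0], 1)), Z1])  # prepend the bias column
    Ybar = Z @ V.T                                 # network output, (n, c)

    # backward training (S = 2*(Ybar - Y), MSE-style residual)
    S = 2.0 * (Ybar - Y)
    Zp = 2.0 * Z0                                  # phi'(Z0)
    Vbar = V[:, 1:]                                # V without the bias column, (c, m)
    dW = ((S @ Vbar) * Zp).T @ X                   # nabla W = (S Vbar ⊙ Z')^T X
    dV = S.T @ Z                                   # nabla V = S^T Z
    return W - eta * dW, V - eta * dV
```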
### Approximating Softmax Function
The conventional training approach for classification involves utilizing the log-likelihood loss function, which incorporates the Softmax function:
\[\bar{Y}\xrightarrow{Softmax}S_{0}=\begin{bmatrix}\exp(\bar{y}_{[1][1]})/\sum_{i=1}^{c}\exp(\bar{y}_{[1][i]})&\cdots&\exp(\bar{y}_{[1][c]})/\sum_{i=1}^{c}\exp(\bar{y}_{[1][i]})\\ \exp(\bar{y}_{[2][1]})/\sum_{i=1}^{c}\exp(\bar{y}_{[2][i]})&\cdots&\exp(\bar{y}_{[2][c]})/\sum_{i=1}^{c}\exp(\bar{y}_{[2][i]})\\ \vdots&\ddots&\vdots\\ \exp(\bar{y}_{[n][1]})/\sum_{i=1}^{c}\exp(\bar{y}_{[n][i]})&\cdots&\exp(\bar{y}_{[n][c]})/\sum_{i=1}^{c}\exp(\bar{y}_{[n][i]})\end{bmatrix},\]
\[S_{0}\xrightarrow{\arg\max}output=\begin{bmatrix}\arg\max_{j}\exp(\bar{y}_{[1][j]})/\sum_{i=1}^{c}\exp(\bar{y}_{[1][i]})\\ \arg\max_{j}\exp(\bar{y}_{[2][j]})/\sum_{i=1}^{c}\exp(\bar{y}_{[2][i]})\\ \vdots\\ \arg\max_{j}\exp(\bar{y}_{[n][j]})/\sum_{i=1}^{c}\exp(\bar{y}_{[n][i]})\end{bmatrix}.\]
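In plaintext, this baseline readout is simply a row-wise Softmax followed by argmax:

```python
import numpy as np

def softmax_rows(Ybar):
    e = np.exp(Ybar - Ybar.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

Ybar = np.array([[0.2, 1.5, -0.3],
                 [2.0, 0.1,  0.4]])
pred = softmax_rows(Ybar).argmax(axis=1)                # predicted class indices
print(pred)                                             # [1 0]
```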
Given the presence of inherent uncertainties, it could be challenging to attain an acceptably usable polynomial approximation of the Softmax function in the context of privacy-preserving computations. To tackle this challenge, Chiang [16] applies the classical mathematical strategy of transforming a complex problem into a simpler counterpart: rather than attempting a direct approximation of the Softmax function, the focus is shifted to approximating the Sigmoid function within the encrypted domain. In doing so, a novel loss function named \(SLE\) (Squared Likelihood Error) was developed, which relies solely on the Sigmoid function rather than the Softmax function.
### Squared Likelihood Error
The SLE loss function is expected to yield the following neural network output:
\[\bar{Y}\xrightarrow{SLE_{1}}S_{1}=\begin{bmatrix}Sigmoid(\bar{y}_{[1][1]})&Sigmoid(\bar{y}_{[1][2]})&\cdots&Sigmoid(\bar{y}_{[1][c]})\\ Sigmoid(\bar{y}_{[2][1]})&Sigmoid(\bar{y}_{[2][2]})&\cdots&Sigmoid(\bar{y}_{[2][c]})\\ \vdots&\vdots&\ddots&\vdots\\ Sigmoid(\bar{y}_{[n][1]})&Sigmoid(\bar{y}_{[n][2]})&\cdots&Sigmoid(\bar{y}_{[n][c]})\end{bmatrix},\]
\[S_{1}\xrightarrow{\arg\max}output=\begin{bmatrix}\arg\max_{j}Sigmoid(\bar{y}_{[1][j]})\\ \arg\max_{j}Sigmoid(\bar{y}_{[2][j]})\\ \vdots\\ \arg\max_{j}Sigmoid(\bar{y}_{[n][j]})\end{bmatrix},\]
and thus has the \(S\):
\[S=\begin{bmatrix}s_{[1][1]}&s_{[1][2]}&\cdots&s_{[1][c]}\\ s_{[2][1]}&s_{[2][2]}&\cdots&s_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ s_{[n][1]}&s_{[n][2]}&\cdots&s_{[n][c]}\end{bmatrix}=\begin{bmatrix}1-Sigmoid(\bar{y}_{[1][1]})-y_{[1][1]}&\cdots&1-Sigmoid(\bar{y}_{[1][c]})-y_{[1][c]}\\ 1-Sigmoid(\bar{y}_{[2][1]})-y_{[2][1]}&\cdots&1-Sigmoid(\bar{y}_{[2][c]})-y_{[2][c]}\\ \vdots&\ddots&\vdots\\ 1-Sigmoid(\bar{y}_{[n][1]})-y_{[n][1]}&\cdots&1-Sigmoid(\bar{y}_{[n][c]})-y_{[n][c]}\end{bmatrix}.\]
The SLE loss function, denoted as \(L\), along with its logarithmic differential \(\mathrm{d}\ln L\), is defined as follows:
\[L=\prod_{i=1}^{n}\prod_{j=1}^{c}(Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})^{2}\longmapsto\ln L=\sum_{i=1}^{n}\sum_{j=1}^{c}\ln|Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]}|,\]
and
\[\mathrm{d}\ln L=\sum_{i=1}^{n}\sum_{j=1}^{c}(1-Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})\,\mathrm{d}\bar{y}_{[i][j]}.\]
#### 3.3.1 First Variant of SLE
Although SLE demonstrates success in multiclass logistic regression, as evidenced in [16], it fails to yield effective results in neural networks with hidden layers. This could be attributed to a tendency to easily converge towards local minima.
We propose a variant of SLE that addresses this issue while still using only the Sigmoid function. The first variant of SLE is:
\[L_{1}=\sum_{i=1}^{n}\sum_{j=1}^{c}(Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})^{2}.\]
Its differential \(\mathrm{d}L_{1}\), is defined as follows:
\[\mathrm{d}L_{1}=\sum_{i=1}^{n}\sum_{j=1}^{c}\mathrm{d}(Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})^{2}=\sum_{i=1}^{n}\sum_{j=1}^{c}2\cdot(Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})\cdot Sigmoid(\bar{y}_{[i][j]})\cdot(1-Sigmoid(\bar{y}_{[i][j]}))\,\mathrm{d}\bar{y}_{[i][j]},\]
and a simplified approximation of \(\mathrm{d}L_{1}\) also demonstrates remarkable performance while requiring fewer calculations:
\[\mathrm{d}L_{1}\approx\sum_{i=1}^{n}\sum_{j=1}^{c}2\cdot(Sigmoid(\bar{y}_{[i][j]})-y_{[i][j]})\cdot 0.25\,\mathrm{d}\bar{y}_{[i][j]}.\]
For this variant of the \(SLE\) loss function, we can formulate the expression for \(S\) as follows:
\[S=\begin{bmatrix}s_{[1][1]}&s_{[1][2]}&\cdots&s_{[1][c]}\\ s_{[2][1]}&s_{[2][2]}&\cdots&s_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ s_{[n][1]}&s_{[n][2]}&\cdots&s_{[n][c]}\end{bmatrix}=\begin{bmatrix}2\cdot(Sigmoid(\bar{y}_{[1][1]})-y_{[1][1]})\cdot 0.25&\cdots&2\cdot(Sigmoid(\bar{y}_{[1][c]})-y_{[1][c]})\cdot 0.25\\ 2\cdot(Sigmoid(\bar{y}_{[2][1]})-y_{[2][1]})\cdot 0.25&\cdots&2\cdot(Sigmoid(\bar{y}_{[2][c]})-y_{[2][c]})\cdot 0.25\\ \vdots&\ddots&\vdots\\ 2\cdot(Sigmoid(\bar{y}_{[n][1]})-y_{[n][1]})\cdot 0.25&\cdots&2\cdot(Sigmoid(\bar{y}_{[n][c]})-y_{[n][c]})\cdot 0.25\end{bmatrix}.\]
#### 3.3.2 Second Variant of SLE
The aforementioned first variant of SLE, with expression \(L_{1}\), demonstrates consistent, robust, and high performance when applied to neural networks with a single hidden layer. However, the input values to the Sigmoid function tend to be large. For instance, when implementing the first variant \(L_{1}\) on the MNIST dataset, the Sigmoid function often encounters input values outside the range \([-25,+25]\). Even a minor deviation of the approximation from the Sigmoid function itself can adversely affect the performance of this variant. While existing HE techniques can achieve near-perfect approximations of the Sigmoid function over such a range, doing so requires a significant amount of homomorphic computation in the encrypted domain. This drawback results in a less-than-ideal practical solution.
We introduce a second novel variant of SLE that can address the limitations of the first version. This second variant bears a strong resemblance to the Mean Squared Error (MSE) loss function and solely relies on the Sigmoid function:
2nd variant of SLE:
\[L_{2}=\sum_{i=1}^{n}\sum_{j=1}^{c}(\bar{y}_{[i][j]}-y_{[i][j]})^{2}.\]
Its differential \(\mathrm{d}L_{2}\), is defined as follows:
\[\mathrm{d}L_{2}=\sum_{i=1}^{n}\sum_{j=1}^{c}2\cdot(\bar{y}_{[i][j]}-y_{[i][j]} )\cdot\mathrm{d}\bar{y}_{[i][j]}.\]
For this particular variant of the \(SLE\) loss function, we can deduce the expression for \(S\):
\[S=\begin{bmatrix}s_{[1][1]}&s_{[1][2]}&\cdots&s_{[1][c]}\\ s_{[2][1]}&s_{[2][2]}&\cdots&s_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ s_{[n][1]}&s_{[n][2]}&\cdots&s_{[n][c]}\end{bmatrix}=\begin{bmatrix}2\cdot(\bar{y}_{[1][1]}-y_{[1][1]})&2\cdot(\bar{y}_{[1][2]}-y_{[1][2]})&\cdots&2\cdot(\bar{y}_{[1][c]}-y_{[1][c]})\\ 2\cdot(\bar{y}_{[2][1]}-y_{[2][1]})&2\cdot(\bar{y}_{[2][2]}-y_{[2][2]})&\cdots&2\cdot(\bar{y}_{[2][c]}-y_{[2][c]})\\ \vdots&\vdots&\ddots&\vdots\\ 2\cdot(\bar{y}_{[n][1]}-y_{[n][1]})&2\cdot(\bar{y}_{[n][2]}-y_{[n][2]})&\cdots&2\cdot(\bar{y}_{[n][c]}-y_{[n][c]})\end{bmatrix}.\]
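In plaintext form, the residual matrices \(S\) of the two variants differ only in whether the network output is first passed through the (0.25-linearized) Sigmoid:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def S_variant1(Ybar, Y):
    # simplified dL1: 2 * (Sigmoid(ybar) - y) * 0.25
    return 2.0 * (sigmoid(Ybar) - Y) * 0.25

def S_variant2(Ybar, Y):
    # dL2: 2 * (ybar - y); no Sigmoid evaluation needed in the encrypted domain
    return 2.0 * (Ybar - Y)
```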
Performance Evaluation. The two variants of SLE possess their individual advantages and disadvantages. We employ the first \(5,000\) MNIST training images to train the NN model with 120 hidden nodes, while using the complete MNIST testing dataset to evaluate the performance of the resulting model. This test is repeated 12 times, using two distinct learning rates for the two loss-function variants. The average performance in terms of loss and accuracy is presented in Figure 3 and Figure 4.
## 4 Homomorphic NN Training
For simplicity, we utilize the extreme case of the "Double Volley Revolver" methodology to illustrate the feasibility of homomorphic training for a 3-layer neural network (NN) in both regression and
classification tasks. In this scenario, we make the assumption that each individual ciphertext can only encrypt a single row vector such as \(x_{[i]}\), \(y_{[j]}\), \(w_{[k]}\), or \(v_{[g]}\):
\[X=\begin{bmatrix}x_{[1][0]}&x_{[1][1]}&\cdots&x_{[1][d]}\\ x_{[2][0]}&x_{[2][1]}&\cdots&x_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ x_{[n][0]}&x_{[n][1]}&\cdots&x_{[n][d]}\end{bmatrix}\longrightarrow\begin{bmatrix}\mathtt{Enc}\left[x_{[1][0]}&x_{[1][1]}&\ldots&x_{[1][d]}&0&\ldots&0\right]\\ \mathtt{Enc}\left[x_{[2][0]}&x_{[2][1]}&\ldots&x_{[2][d]}&0&\ldots&0\right]\\ \vdots\\ \mathtt{Enc}\left[x_{[n][0]}&x_{[n][1]}&\ldots&x_{[n][d]}&0&\ldots&0\right]\end{bmatrix},\]
\[Y=\begin{bmatrix}y_{[1][1]}&y_{[1][2]}&\cdots&y_{[1][c]}\\ y_{[2][1]}&y_{[2][2]}&\cdots&y_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ y_{[n][1]}&y_{[n][2]}&\cdots&y_{[n][c]}\end{bmatrix}\longrightarrow\begin{bmatrix} \mathtt{Enc}\left[y_{[1][1]}&y_{[1][2]}&\ldots&y_{[1][c]}&0&\ldots&0\right] \\ \mathtt{Enc}\left[y_{[2][1]}&y_{[2][2]}&\ldots&y_{[2][c]}&0&\ldots&0\right] \\ \vdots&\vdots&\ddots&\vdots\\ \mathtt{Enc}\left[y_{[n][1]}&y_{[n][2]}&\ldots&y_{[n][c]}&0&\ldots&0\right] \end{bmatrix},\]
\[W=\begin{bmatrix}w_{[1][0]}&w_{[1][1]}&\cdots&w_{[1][d]}\\ w_{[2][0]}&w_{[2][1]}&\cdots&w_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ w_{[m][0]}&w_{[m][1]}&\cdots&w_{[m][d]}\end{bmatrix}\longrightarrow\begin{bmatrix} \mathtt{Enc}\left[w_{[1][0]}&w_{[1][1]}&\ldots&w_{[1][d]}&0&\ldots&0\right] \\ \mathtt{Enc}\left[w_{[2][0]}&w_{[2][1]}&\ldots&w_{[2][d]}&0&\ldots&0\right] \\ \vdots&\vdots&\vdots&\vdots\\ \mathtt{Enc}\left[w_{[m][0]}&w_{[m][1]}&\ldots&w_{[m][d]}&0&\ldots&0\right] \end{bmatrix},\]
Figure 3: Training and testing experimental results for the two variants of SLE are provided using the same learning rate \(0.12\)
\[V=\begin{bmatrix}v_{[1][0]}&v_{[1][1]}&\cdots&v_{[1][m]}\\ v_{[2][0]}&v_{[2][1]}&\cdots&v_{[2][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[c][0]}&v_{[c][1]}&\cdots&v_{[c][m]}\end{bmatrix}\longrightarrow\begin{bmatrix} \text{Enc}\begin{bmatrix}v_{[1][0]}&v_{[1][1]}&\ldots&v_{[1][m]}&0&\ldots&0\\ \text{Enc}\begin{bmatrix}v_{[2][0]}&v_{[2][1]}&\ldots&v_{[2][m]}&0&\ldots&0\\ &\vdots&&&\\ \text{Enc}\begin{bmatrix}v_{[c][0]}&v_{[c][1]}&\ldots&v_{[c][m]}&0&\ldots&0 \end{bmatrix}\end{bmatrix}.\end{bmatrix}\]
Using only HE operations, we can obtain the ciphertexts that encrypt each row of the matrices \(S\), \(\bar{V}\), \(Z^{{}^{\prime}}\) and \(Z\):
\[S=\begin{bmatrix}s_{[1][1]}&s_{[1][2]}&\cdots&s_{[1][c]}\\ s_{[2][1]}&s_{[2][2]}&\cdots&s_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ s_{[n][1]}&s_{[n][2]}&\cdots&s_{[n][c]}\end{bmatrix}\longrightarrow\begin{bmatrix}\mathtt{Enc}\left[s_{[1][1]}&s_{[1][2]}&\ldots&s_{[1][c]}&0&\ldots&0\right]\\ \mathtt{Enc}\left[s_{[2][1]}&s_{[2][2]}&\ldots&s_{[2][c]}&0&\ldots&0\right]\\ \vdots\\ \mathtt{Enc}\left[s_{[n][1]}&s_{[n][2]}&\ldots&s_{[n][c]}&0&\ldots&0\right]\end{bmatrix},\]
\[\bar{V}=\begin{bmatrix}v_{[1][1]}&v_{[1][2]}&\cdots&v_{[1][m]}\\ v_{[2][1]}&v_{[2][2]}&\cdots&v_{[2][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[c][1]}&v_{[c][2]}&\cdots&v_{[c][m]}\end{bmatrix}\longrightarrow\begin{bmatrix} \text{Enc}\begin{bmatrix}v_{[1][1]}&v_{[1][2]}&\ldots&v_{[1][m]}&0&\ldots&0 \\ \text{Enc}\begin{bmatrix}v_{[2][1]}&v_{[2][2]}&\ldots&v_{[2][m]}&0&\ldots&0 \end{bmatrix}\\ \vdots&\vdots&\ddots&\vdots\\ \text{Enc}\begin{bmatrix}v_{[c][1]}&v_{[c][2]}&\ldots&v_{[c][m]}&0&\ldots&0 \end{bmatrix}\end{bmatrix},\]
Figure 4: Training and testing experimental results for the two variants of SLE are provided using the same learning rate \(0.01\)
\[Z^{\prime} =\begin{bmatrix}\phi^{{}^{\prime}}\left(z_{[1][1]}\right)&\phi^{{}^{ \prime}}\left(z_{[1][2]}\right)&\cdots&\phi^{{}^{\prime}}\left(z_{[1][m]}\right) \\ \phi^{{}^{\prime}}\left(z_{[2][1]}\right)&\phi^{{}^{\prime}}\left(z_{[2][2]} \right)&\cdots&\phi^{{}^{\prime}}\left(z_{[2][m]}\right)\\ \vdots&\vdots&\ddots&\vdots\\ \phi^{{}^{\prime}}\left(z_{[n][1]}\right)&\phi^{{}^{\prime}}\left(z_{[n][2]} \right)&\cdots&\phi^{{}^{\prime}}\left(z_{[n][m]}\right)\end{bmatrix}\] \[\longrightarrow\begin{bmatrix}\mathtt{Enc}\left[\phi^{{}^{\prime}} \left(z_{[1][1]}\right)&\phi^{{}^{\prime}}\left(z_{[1][2]}\right)&\cdots&\phi^ {{}^{\prime}}\left(z_{[1][m]}\right)&0&\ldots&0\\ \mathtt{Enc}\left[\phi^{{}^{\prime}}\left(z_{[2][1]}\right)&\phi^{{}^{ \prime}}\left(z_{[2][2]}\right)&\ldots&\phi^{{}^{\prime}}\left(z_{[2][m]} \right)&0&\ldots&0\right]\\ \mathtt{Enc}\left[\phi^{{}^{\prime}}\left(z_{[n][1]}\right)&\phi^{{}^{ \prime}}\left(z_{[n][2]}\right)&\cdots&\phi^{{}^{\prime}}\left(z_{[n][m]} \right)&0&\ldots&0\right]\end{bmatrix},\] \[Z =\begin{bmatrix}1&\phi\left(z_{[1][1]}\right)&\phi\left(z_{[1][2]} \right)&\cdots&\phi\left(z_{[1][m]}\right)\\ 1&\phi\left(z_{[2][1]}\right)&\phi\left(z_{[2][2]}\right)&\cdots&\phi\left(z_ {[2][m]}\right)\\ \vdots&\vdots&\ddots&\vdots\\ 1&\phi\left(z_{[n][1]}\right)&\phi\left(z_{[n][2]}\right)&\cdots&\phi\left(z_ {[n][m]}\right)\end{bmatrix}\] \[\longrightarrow\begin{bmatrix}\mathtt{Enc}\left[1&\phi\left(z_{[1] [1]}\right)&\phi\left(z_{[1][2]}\right)&\cdots&\phi\left(z_{[1][m]}\right)&0 &\ldots&0\\ \mathtt{Enc}\left[1&\phi\left(z_{[2][1]}\right)&\phi\left(z_{[2][2]}\right)& \cdots&\phi\left(z_{[2][m]}\right)&0&\ldots&0\right]\\ &&\vdots&\\ \mathtt{Enc}\left[1&\phi\left(z_{[n][1]}\right)&\phi\left(z_{[n][2]}\right)& \cdots&\phi\left(z_{[n][m]}\right)&0&\ldots&0\right]\end{bmatrix},\]
### For Classification Problems
We can first compute the gradients of \(w_{[k]}\) and \(v_{[k]}\) in the encrypted environment and then use the regular gradient descent method to update \(w_{[k]}\) and \(v_{[k]}\):
\[\frac{\mathrm{d}}{\mathrm{d}\mathrm{w}_{[k]}}L_{2}=\sum_{i=1}^{n}\sum_{j=1}^{c}\mathtt{Enc}\big{[}s_{[i][j]}\;\cdots\;s_{[i][j]}\;0\;\cdots\;0\big{]}\odot\mathtt{Enc}\big{[}v_{[j][k]}\;\cdots\;v_{[j][k]}\;0\;\cdots\;0\big{]}\odot\mathtt{Enc}\big{[}\phi^{\prime}(z_{[i][k]})\;\cdots\;\phi^{\prime}(z_{[i][k]})\;0\;\cdots\;0\big{]}\odot\mathtt{Enc}\big{[}x_{[i][0]}\;\cdots\;x_{[i][d]}\;0\;\cdots\;0\big{]},\]
and
\[\frac{\mathrm{d}}{\mathrm{d}\mathrm{v}_{[k]}}L_{2}=\sum_{i=1}^{n}\mathtt{Enc}\big{[}s_{[i][k]}\;s_{[i][k]}\;\cdots\;s_{[i][k]}\;0\;\cdots\;0\big{]}\odot\mathtt{Enc}\big{[}1\;\phi(z_{[i][1]})\;\phi(z_{[i][2]})\;\cdots\;\phi(z_{[i][m]})\;0\;\cdots\;0\big{]}.\]
Since the Double Volley Revolver method only requires one of the two matrices to be transposed before encryption, and the expressions \(\frac{\mathrm{d}}{\mathrm{d}\mathsf{w}_{[k]}}L_{2}\) and \(\frac{\mathrm{d}}{\mathrm{d}\mathsf{v}_{[k]}}L_{2}\) happen to meet this requirement during matrix multiplication, we are able to carry out the homomorphic evaluation of the whole pipeline for
homomorphic 3-layer NN training: \(\mathrm{w}_{[k]}=\mathrm{w}_{[k]}-\eta\cdot\frac{\mathrm{d}}{\mathrm{d}\mathrm{w}_{ [k]}}L_{2}\) and \(\mathrm{v}_{[k]}=\mathrm{v}_{[k]}-\eta\cdot\frac{\mathrm{d}}{\mathrm{d}\mathrm{ v}_{[k]}}L_{2}\), where the learning rate \(\eta\) is set to \(0.01\).
\(L_{2}\) regularization, also referred to as Ridge Regression, introduces a penalty term proportional to the square of the model's parameters. This approach encourages the utilization of all parameters while minimizing their magnitudes, resulting in a less complex model that is less susceptible to overfitting. It is straightforward that \(L_{2}\) regularization can be used in this encoding method by initializing the \(\frac{\mathrm{d}}{\mathrm{d}\mathrm{w}_{[k]}}L_{2}\) and \(\frac{\mathrm{d}}{\mathrm{d}\mathrm{v}_{[k]}}L_{2}\) with their corresponding L2 gradient components:
\[\frac{\mathrm{d}}{\mathrm{d}\mathrm{w}_{[k]}}L_{2}= \lambda\cdot\mathtt{Enc}\left[w_{[k][0]}\quad w_{[k][1]}\quad \dots\quad w_{[k][d]}\quad 0\quad\dots\quad 0\right]+\cdots,\]
and
\[\frac{\mathrm{d}}{\mathrm{d}\mathrm{v}_{[k]}}L_{2}= \lambda\cdot\mathtt{Enc}\left[v_{[k][0]}\quad v_{[k][1]}\quad \dots\quad v_{[k][m]}\quad 0\quad\dots\quad 0\right]+\cdots,\]
where \(\lambda\) represents the \(L_{2}\) regularization parameter.
### For Regression Problems
The mean squared error (MSE) loss function can be used in conjunction with this encoding method for regression tasks. Interestingly, the special case of the \(L_{2}\) loss function for a dataset containing only a single class label has the same formulation as the MSE loss function.
### Approximated Activation Functions
The first variant of the SLE loss function requires the Sigmoid function to be replaced by a polynomial approximation. Various methods [10, 25] can be employed to accomplish this task. For example, both Python and MATLAB (Octave) offer a function called polyfit, which approximates functions with polynomials using the least-squares approach.
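For instance, a least-squares polynomial fit of the Sigmoid over a working interval can be obtained with numpy's polyfit; the interval and degree below are illustrative choices, not the ones used in our implementation.

```python
import numpy as np

t = np.linspace(-8.0, 8.0, 1601)          # working interval (illustrative)
sig = 1.0 / (1.0 + np.exp(-t))

coeffs = np.polyfit(t, sig, deg=7)        # degree-7 least-squares polynomial
approx = np.polyval(coeffs, t)
print(np.max(np.abs(approx - sig)))       # maximum approximation error on [-8, 8]
```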
In our implementation, the activation function in the hidden layer is simply set to the square function.
## 5 Experiments
The C++ source code required to carry out the experiments described in this section is openly accessible at: [https://github.com/petitioner/HE.NNtraining](https://github.com/petitioner/HE.NNtraining).
ImplementationWe implemented vanilla gradient descent using the \(L_{2}\) loss function, solely relying on homomorphic encryption (HE), with the HEAAN library. All the experiments involving ciphertexts were performed on a public cloud with \(64\) virtual CPUs and \(192\) GB of RAM.
We use the entire set of \(150\) Iris examples for training and employ a fixed learning rate of \(0.01\). The complete Iris dataset \(X\) and its corresponding one-hot encoded labels \(Y\) are encrypted into two distinct ciphertexts. Moreover, for each row of the weight matrices \(W\) and \(V\), we encrypt its multiple repetitions into a single ciphertext:
\(X\)\(\xrightarrow{\text{encoding \& encryption}}Enc\begin{bmatrix}x_{[1][0]}&x_{[1][1]}&\cdots&x_{[1][d] }\\ x_{[2][0]}&x_{[2][1]}&\cdots&x_{[2][d]}\\ \vdots&\vdots&\ddots&\vdots\\ x_{[m][0]}&x_{[m][1]}&\cdots&x_{[n][d]}\end{bmatrix},\)
\(Y\)\(\xrightarrow{\text{encoding \& encryption}}Enc\begin{bmatrix}y_{[1][1]}&y_{[1][2]}&\cdots&y_{[1][c]}\\ y_{[2][1]}&y_{[2][2]}&\cdots&y_{[2][c]}\\ \vdots&\vdots&\ddots&\vdots\\ y_{[n][1]}&y_{[n][2]}&\cdots&y_{[n][c]}\end{bmatrix},\)
\(W\)\(\xrightarrow{\text{encoding \& encryption}}Enc\begin{bmatrix}w_{[1][0]}&w_{[1][1]}&\cdots&w_{[1 ][d]}\\ w_{[1][0]}&w_{[1][1]}&\cdots&w_{[1][d]}\\ \vdots&\vdots&\ddots&\vdots\\ w_{[1][0]}&w_{[1][1]}&\cdots&w_{[1][d]}\end{bmatrix},\cdots,Enc\begin{bmatrix}w_{[m][ 0]}&w_{[m][1]}&\cdots&w_{[m][d]}\\ w_{[m][0]}&w_{[m][1]}&\cdots&w_{[m][d]}\\ \vdots&\vdots&\ddots&\vdots\\ w_{[m][0]}&w_{[m][1]}&\cdots&w_{[m][d]}\end{bmatrix},\)
\(V\)\(\xrightarrow{\text{encoding \& encryption}}Enc\begin{bmatrix}v_{[1][0]}&v_{[1][1]}&\cdots&v_{[1][m]}\\ v_{[1][0]}&v_{[1][1]}&\cdots&v_{[1][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[1][0]}&v_{[1][1]}&\cdots&v_{[1][m]}\end{bmatrix},\cdots,Enc\begin{bmatrix}v_{ [c][0]}&v_{[c][1]}&\cdots&v_{[c][m]}\\ v_{[c][0]}&v_{[c][1]}&\cdots&v_{[c][m]}\\ \vdots&\vdots&\ddots&\vdots\\ v_{[c][0]}&v_{[c][1]}&\cdots&v_{[c][m]}\end{bmatrix}.\)
We selected the following parameters for the HEAAN library: \(logN=16\), \(logQ=990\), \(logp=30\), \(slots=32768\), which collectively ensure a security level of \(\lambda=128\). For detailed information about these parameters, please refer to [23]. We did not employ bootstrapping to refresh the weight ciphertexts, which limits our algorithm to only \(2\) iterations. Each iteration takes approximately \(52\) minutes to complete. The maximum runtime memory usage under these conditions is around \(60\) GB. The dataset of 150 Iris examples is encrypted into a single ciphertext, and the one-hot encoded labels \(Y\) are encrypted into another single ciphertext. The weight matrix \(W\) is encrypted into \(120\) ciphertexts, with each ciphertext encrypting multiple repetitions of one row. We initialized the weight matrices \(W\) and \(V\) with a normal distribution having a mean of \(0.0\) and a standard deviation of \(0.05\).
## 6 Conclusion
In this work, we implemented privacy-preserving 3-layer NN training using solely homomorphic encryption (HE) techniques. However, the current low-level encoding method within the framework of Double Volley Revolver is not well-suited for incorporating bootstrapping. Further research is necessary to investigate the integration of bootstrapping into our approach using an alternative low-level implementation.
|
2310.03530 | Joint Group Invariant Functions on Data-Parameter Domain Induce
Universal Neural Networks | The symmetry and geometry of input data are considered to be encoded in the
internal data representation inside the neural network, but the specific
encoding rule has been less investigated. In this study, we present a
systematic method to induce a generalized neural network and its right inverse
operator, called the ridgelet transform, from a joint group invariant function
on the data-parameter domain. Since the ridgelet transform is an inverse, (1)
it can describe the arrangement of parameters for the network to represent a
target function, which is understood as the encoding rule, and (2) it implies
the universality of the network. Based on the group representation theory, we
present a new simple proof of the universality by using Schur's lemma in a
unified manner covering a wide class of networks, for example, the original
ridgelet transform, formal deep networks, and the dual voice transform. Since
traditional universality theorems were demonstrated based on functional
analysis, this study sheds light on the group theoretic aspect of the
approximation theory, connecting geometric deep learning to abstract harmonic
analysis. | Sho Sonoda, Hideyuki Ishi, Isao Ishikawa, Masahiro Ikeda | 2023-10-05T13:30:37Z | http://arxiv.org/abs/2310.03530v2 | # Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks
###### Abstract
The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has been less investigated. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the _ridgelet transform_, from a _joint group invariant function_ on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the _encoding rule_, and (2) it implies the _universality_ of the network. Based on the group representation theory, we present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal _deep_ networks, and the dual voice transform. Since traditional universality theorems were demonstrated based on functional analysis, this study sheds light on the group theoretic aspect of the approximation theory, connecting geometric deep learning to abstract harmonic analysis.
Sho Sonoda ([email protected]) · Hideyuki Ishi ([email protected]) · Isao Ishikawa ([email protected]) · Masahiro Ikeda ([email protected])

Osaka Central Advanced Mathematical Institute (OCAMI), Osaka Metropolitan University; Center for Data Science, Ehime University

Editors: Sophia Sanborn, Christian Shewmake, Simone Azeglio, Nina Miolane
## 1 Introduction
The internal data representation of neural networks is expected to reflect the symmetry and geometry of the data domain. In geometric deep learning (Bronstein et al., 2021), several authors have developed novel network architectures that are compatible with the geometric structure of the data (e.g. group equivariant networks). However, these methods typically require handcrafting the network architecture for each specific symmetry and geometry. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the _ridgelet transform_, from a _joint group invariant function_ on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it explicitly describes the arrangement of parameters for the network to represent a target function, and (2) it implies the _universality_ of the network.
**Remark 1**: _Our reviewers have kindly let us know that Cohen et al. (2019), Finzi et al. (2021), and Aslan et al. (2023) have proposed versatile group equivariant network architectures that cover a wide class of groups in a unified manner, and Ravanbakhsh et al. (2017) have investigated the symmetry in the parameters. Since our results are applicable to any network architectures, it would be interesting to find the ridgelet transform for each network._
The proof of a universality theorem contains hints for understanding the internal data processing mechanisms inside neural networks. The year 1989 was the beginning of the universality theorem and a great year, as four different proofs were presented by Cybenko (1989), Hornik et al. (1989), Funahashi (1989), and Carroll and Dickinson (1989). Among them, Cybenko's proof using Hahn-Banach and Hornik et al.'s proof using Stone-Weierstrass are existential proofs, meaning that it is not clear how to assign the parameters. On the other hand, Funahashi's proof reducing to the Fourier transform and Carroll and Dickinson's proof reducing to the Radon transform are constructive proofs, meaning that it is clear how to assign the parameters. The latter constructive methods, which reduce to integral transforms, were refined as the so-called integral representation by Barron (1993) and further culminated as the ridgelet transform discovered by Murata (1996) and Candes (1998).
The ridgelet transform, the main topic of this study, is a pseudo-inverse operator of the integral representation neural network and is a detailed analysis tool that can describe the relationship between data and parameters due to its analytical representation. In the 2000s, thanks to the efforts of Donoho and others, research on ridgelet transforms evolved into geometric multiscale analysis (GMA, see e.g. Donoho, 2002), leading to the development of various x-lets such as curvelets (Candes and Donoho, 2004), contourlet (Do and Vetterli, 2005), shearlet (Labate et al., 2005), bandelet (Pennec and Mallat, 2005), and grouplet (Mallat, 2009b). These lines of studies mainly focused on developing multidimensional wavelet transforms for image processing (i.e., 2D signals) (Starck et al., 2010; Mallat, 2009a) and gradually moved apart from neural networks.
In the 2020s, the concept of integral representations has re-emerged as tools for analyzing deep learning theories, bringing renewed attention to ridgelet transforms. Precisely, they are often referred to by different names such as overparametrization, continuous/infinite width, mean field theory (Nitanda and Suzuki, 2017; Mei et al., 2018; Rotskoff and Vanden-Eijnden, 2018; Chizat and Bach, 2018; Sirignano and Spiliopoulos, 2020), and Langevin dynamics (Suzuki, 2020). Sonoda et al. (2022b, a) have developed ridgelet transforms for various networks, such as group convolutional networks and networks on manifolds, and have shown constructive universality theorems. In these proofs, reducing the network to Fourier transforms was an essential step to find the ridgelet transforms. In this study, we can find the ridgelet transforms even when there is no clear path to reducing them to Fourier transforms, as long as we can find a group invariant function.
The theory of function expansion based on group representations is well investigated in abstract harmonic analysis (Folland, 2015). There are two main streams: one is the generalization of _Fourier transform_, which expands functions on group \(G\) as a sum/integration of multiple irreducible unitary representations (Sugiura, 1990), and the other is the generalization of _wavelet transform_ called the _voice transform_, which expands functions in representation space \(\mathcal{H}\) as a sum/integration of functions generated by a single square-integrable unitary representation (Holschneider, 1998; Berge, 2021). For example, recent studies by Miyato et al. (2022) and Koyama et al. (2023) belong to the Fourier stream, while this study belongs to the wavelet/voice stream. Yet, it is precisely a new integral transform that differs from the conventional voice transform. The generalized ridgelet transform discovered in this study was motivated by the research objective of geometrically analyzing the parameters of neural networks, and we believe it is a missing link for connecting geometric deep learning to abstract harmonic analysis.
## 2 Preliminaries
We showcase the original integral representation and the ridgelet transform, a mathematical model of a depth-2 fully-connected network and its right inverse, and then list a few facts from group representation theory.
Notation. For any topological space \(X\), \(C_{c}(X)\) denotes the space of all compactly supported continuous functions on \(X\). \(\mathcal{S}(\mathbb{R}^{d})\) and \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\) denote the classes of rapidly decreasing functions (Schwartz test functions) and tempered distributions on \(\mathbb{R}^{d}\), respectively.
### Quick Introduction to Integral Representation and Ridgelet Transform
**Definition 2**: _For any measurable function \(\sigma:\mathbb{R}\to\mathbb{C}\) and Borel measure \(\gamma\) on \(\mathbb{R}^{m}\times\mathbb{R}\), put_
\[S_{\sigma}[\gamma](\mathbf{x}):=\int_{\mathbb{R}^{m}\times\mathbb{R}} \gamma(\mathbf{a},b)\sigma(\mathbf{a}\cdot\mathbf{x}-b)\mathrm{d}\mathbf{a}\mathrm{d}b,\quad \mathbf{x}\in\mathbb{R}^{m}. \tag{1}\]
_We call \(S_{\sigma}[\gamma]\) an (integral representation of) neural network, and \(\gamma\) a parameter distribution._
The integration over all the hidden parameters \((\mathbf{a},b)\in\mathbb{R}^{m}\times\mathbb{R}\) means that all the neurons \(\{\mathbf{x}\mapsto\sigma(\mathbf{a}\cdot\mathbf{x}-b)\ |\ (\mathbf{a},b)\in\mathbb{R}^{m}\times\mathbb{R}\}\) are summed (or integrated, to be precise) with weight \(\gamma\); hence formally \(S_{\sigma}[\gamma]\) is understood as a continuous neural network with a single hidden layer. We note, however, that when \(\gamma\) is a finite sum of point measures such as \(\gamma_{p}=\sum_{i=1}^{p}c_{i}\delta_{(\mathbf{a}_{i},b_{i})}\), it can also reproduce a finite-width network
\[S_{\sigma}[\gamma_{p}](\mathbf{x})=\sum_{i=1}^{p}c_{i}\sigma(\mathbf{a}_ {i}\cdot\mathbf{x}-b_{i}). \tag{2}\]
In other words, the integral representation is a mathematical model of a depth-2 network with _any_ width (ranging from finite to continuous).
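A minimal numerical sketch of Eq. (2), with arbitrary toy parameters, shows how a point-measure parameter distribution evaluates to a finite-width network:

```python
import numpy as np

def finite_network(x, A, b, c, sigma=np.tanh):
    """S_sigma[gamma_p](x) = sum_i c_i * sigma(a_i . x - b_i)."""
    return c @ sigma(A @ x - b)           # A: (p, m), b: (p,), c: (p,)

rng = np.random.default_rng(0)
p, m = 5, 3                               # width p, input dimension m
A, b, c = rng.normal(size=(p, m)), rng.normal(size=p), rng.normal(size=p)
print(finite_network(rng.normal(size=m), A, b, c))
```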
**Definition 3**: _For any measurable functions \(\rho:\mathbb{R}\to\mathbb{C}\) and \(f:\mathbb{R}^{m}\to\mathbb{C}\), put_
\[R_{\rho}[f](\mathbf{a},b):=\int_{\mathbb{R}^{m}}f(\mathbf{x})\overline {\rho(\mathbf{a}\cdot\mathbf{x}-b)}\mathrm{d}\mathbf{x},\quad(\mathbf{a},b)\in\mathbb{R}^{m} \times\mathbb{R}. \tag{3}\]
_We call \(R_{\rho}\) a ridgelet transform._
The ridgelet transform is known to be a right-inverse operator to \(S_{\sigma}\). To be precise, the following reconstruction formula holds.
**Theorem 4** (Reconstruction Formula): _Suppose \(\sigma\) and \(\rho\) are a tempered distribution (\(\mathcal{S}^{\prime}\)) and a rapid decreasing function (\(\mathcal{S}\)) respectively. There exists a bilinear form \((\!(\sigma,\rho)\!)\) such that_
\[S_{\sigma}\circ R_{\rho}[f]=(\!(\sigma,\rho)\!)f, \tag{4}\]
_for any square integrable function \(f\in L^{2}(\mathbb{R}^{m})\). Further, the bilinear form is given by_
\[(\!(\sigma,\rho)\!)=\int_{\mathbb{R}}\sigma^{\sharp}(\omega) \overline{\rho^{\sharp}(\omega)}|\omega|^{-m}\mathrm{d}\omega \tag{5}\]
_where \(\sharp\) denotes the 1-dimensional Fourier transform._
See Sonoda et al. (2021, Theorem 6) for the proof. In particular, according to Sonoda et al. (2021, Lemma 9), for any activation function \(\sigma\) there always exists a \(\rho\) satisfying \((\!(\sigma,\rho)\!)=1\). Here, allowing \(\sigma\) to be a tempered distribution means that typical activation functions such as ReLU, the step function, \(\tanh\), and the Gaussian are covered. We can interpret the reconstruction formula as a universality theorem for continuous neural networks, since for any given data-generating function \(f\), a network with output weight \(\gamma_{f}=R_{\rho}[f]\) reproduces \(f\) (up to the factor \((\!(\sigma,\rho)\!)\)), i.e., \(S_{\sigma}[\gamma_{f}]=f\). In other words, the ridgelet transform indicates how the network parameters should be organized so that the network represents an individual function \(f\).
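The reconstruction formula can be checked numerically in dimension \(m=1\). The sketch below assumes the admissible pair \(\sigma=\rho=(\mathrm{d}/\mathrm{d}t)\,e^{-t^{2}/2}\), for which, under the convention \(f^{\sharp}(\omega)=\int f(t)e^{-i\omega t}\mathrm{d}t\), one computes \((\!(\sigma,\rho)\!)=\int 2\pi\omega^{2}e^{-\omega^{2}}|\omega|^{-1}\mathrm{d}\omega=2\pi\); grid ranges and step sizes are ad hoc, so only an approximate match should be expected.

```python
import numpy as np

rho = lambda t: -t * np.exp(-t**2 / 2.0)   # sigma = rho = (d/dt) exp(-t^2/2)
C = 2.0 * np.pi                            # ((sigma, rho)) for this pair

f = lambda x: np.exp(-x**2)                # target function
x = np.linspace(-4, 4, 161)                # data grid
a = np.linspace(-10, 10, 201)              # weight grid (ad hoc range)
b = np.linspace(-34, 34, 341)              # bias grid (ad hoc range)
dx, da, db = x[1]-x[0], a[1]-a[0], b[1]-b[0]

# neuron activations P[i,j,k] = rho(a_i * x_k - b_j)
P = rho(a[:, None, None] * x[None, None, :] - b[None, :, None])

# ridgelet transform: gamma(a, b) = int f(x) rho(a x - b) dx
gamma = np.einsum('k,ijk->ij', f(x), P) * dx

# reconstruction: (1/C) int gamma(a, b) sigma(a x - b) da db  ~  f(x)
f_hat = np.einsum('ij,ijk->k', gamma, P) * da * db / C

print(np.max(np.abs(f_hat - f(x))))        # small, up to discretization error
```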
In this study, we showcase a new proof of the reconstruction formula based on the group theoretic arguments, and present a systematic scheme to find the ridgelet transform for a variety of given network architecture based on the symmetry in the data-parameter domain.
### Irreducible Unitary Representation and Schur's Lemma
Let \(G\) be a locally compact group, \(\mathcal{H}\) be a nonzero Hilbert space, and \(U(\mathcal{H})\) be the group of unitary operators on \(\mathcal{H}\). For example, any finite group, discrete group, compact group, and finite-dimensional Lie group is locally compact, while infinite-dimensional Lie groups are not. A _unitary representation_ \(\pi\) of \(G\) on \(\mathcal{H}\) is a group homomorphism that is continuous with respect to the strong operator topology, that is, a map \(\pi:G\to U(\mathcal{H})\) satisfying \(\pi(gh)=\pi(g)\pi(h)\) and \(\pi(g^{-1})=\pi(g)^{-1}=\pi(g)^{*}\), such that for any \(\psi\in\mathcal{H}\) the map \(G\ni g\mapsto\pi(g)[\psi]\in\mathcal{H}\) is continuous. Suppose \(\mathcal{M}\) is a closed subspace of \(\mathcal{H}\). \(\mathcal{M}\) is called an _invariant_ subspace when \(\pi(g)\mathcal{M}\subset\mathcal{M}\) for all \(g\in G\). In particular, \(\pi\) is called _irreducible_ when it does not admit any nontrivial invariant subspace \(\mathcal{M}\neq\{0\}\) nor \(\mathcal{H}\).
Let \(C(\pi)\) be the set of all bounded linear operators \(T\) on Hilbert space \(\mathcal{H}\) that commutes with \(\pi\), namely \(C(\pi):=\{T\in B(\mathcal{H})\mid T\pi(g)=\pi(g)T\text{ for all }g\in G\}\).
**Lemma 5** (Schur's lemma): _A unitary representation \(\pi\) of \(G\) is irreducible iff \(C(\pi)\) only contains scalar multiples of the identity, i.e., \(C(\pi)=\{c\operatorname{Id}\mid c\in\mathbb{C}\}\) or \(\{0\}\)._
See Folland (2015, Theorem 3.5(a)) for the proof.
### Calculus on Locally Compact Group
By Haar's theorem, if \(G\) is a locally compact group, then there uniquely exist left and right invariant measures \(\mathrm{d}_{l}g\) and \(\mathrm{d}_{r}g\), satisfying for any \(s\in G\) and \(f\in C_{c}(G)\),
\[\int_{G}f(sg)\mathrm{d}_{l}g=\int_{G}f(g)\mathrm{d}_{l}g,\quad\text{and} \quad\int_{G}f(gs)\mathrm{d}_{r}g=\int_{G}f(g)\mathrm{d}_{r}g.\]
Let \(X\) be a \(G\)-space with transitive left (resp. right) \(G\)-action \(g\cdot x\) (resp. \(x\cdot g\)) for any \((g,x)\in G\times X\). Then, we can further induce the left (resp. right) invariant measure \(\mathrm{d}_{l}x\) (resp. \(\mathrm{d}_{r}x\)) so that for any \(f\in C_{c}(X)\),
\[\int_{X}f(x)\mathrm{d}_{l}x:=\int_{G}f(g\cdot o)\mathrm{d}_{l}g,\quad\text{ resp.}\quad\int_{X}f(x)\mathrm{d}_{r}x:=\int_{G}f(o\cdot g)\mathrm{d}_{r}g,\]
where \(o\in G\) is a fixed point called the origin.
## 3 Main Results
We introduce generalized neural networks and generalized ridgelet transforms induced from joint group invariant functions on data-parameter domain, and present a simple group theoretic proof of the reconstruction formula.
Let \(G\) be a locally compact group equipped with a left invariant measure \(\mathrm{d}g\). Let \(X\) and \(\Xi\) be \(G\)-spaces equipped with \(G\)-invariant measures \(\mathrm{d}x\) and \(\mathrm{d}\xi\), called the data domain and the parameter domain, respectively. Particularly, we call the product space \(X\times\Xi\) the _data-parameter_ domain (like time-frequency domain). By abusing notation, we use the same symbol \(\cdot\) for the \(G\)-actions on \(X\) and \(\Xi\) (e.g., \(g\cdot x\) and \(g\cdot\xi\)).
Let \(\pi\) and \(\widehat{\pi}\) be left-regular actions of \(G\) on \(L^{2}(X)\) and \(L^{2}(\Xi)\), respectively. Namely, for any \(g\in G,f\in L^{2}(X)\) and \(\gamma\in L^{2}(\Xi)\),
\[\pi_{g}[f](x):=f(g^{-1}\cdot x),\quad\text{and}\quad\widehat{\pi}_{g}[\gamma] (\xi):=\gamma(g^{-1}\cdot\xi). \tag{6}\]
**Definition 6** (Joint \(G\)-Invariant Function): _We say a function \(\phi\) on \(X\times\Xi\) is joint \(G\)-invariant when it satisfies for all \(g\in G\) and \((x,\xi)\in X\times\Xi\),_
\[\phi(g\cdot x,g\cdot\xi)=\phi(x,\xi). \tag{7}\]
_By \(\mathcal{A}\), we symbolize the algebra of all joint \(G\)-invariant functions._
Here, \(\mathcal{A}\) is indeed an _algebra_ because if \(\phi\) and \(\psi\) are joint \(G\)-invariant, then so are \(\phi+\psi\) and \(\phi\psi\). Namely, \(\phi,\psi\in\mathcal{A}\implies\phi+\psi,\phi\psi\in\mathcal{A}\). As visualized in Figure 1, a joint \(G\)-invariant function is constant along each \(G\)-orbit \(\{(g\cdot x,g\cdot\xi)\mid g\in G\}\). Hence finding a joint \(G\)-invariant function is not difficult.
**Definition 7** (Generalized Neural Network Induced from Invariant \(\phi\)): _For any joint invariant function \(\phi\in\mathcal{A}\) and Borel measure \(\gamma\) on \(\Xi\), put_
\[\mathtt{NN}[\gamma;\phi](x):=\int_{\Xi}\gamma(\xi)\phi(x,\xi)\mathrm{d}\xi, \quad x\in X. \tag{8}\]
_We call the integral transform \(\mathtt{NN}[\bullet;\phi]\) a \(\phi\)-transform, and each individual image \(\mathtt{NN}[\gamma;\phi]\) a \(\phi\)-network for short._
The \(\phi\)-network is an extension of the original neural network because when \(X=\mathbb{R}^{m},\Xi=\mathbb{R}^{m}\times\mathbb{R}\) and \(\phi(\mathbf{x},(\mathbf{a},b)):=\sigma(\mathbf{a}\cdot\mathbf{x}-b)\) with some activation function \(\sigma:\mathbb{R}\to\mathbb{R}\), it reduces to a fully-connected network \(\int_{\mathbb{R}^{m}\times\mathbb{R}}\gamma(\mathbf{a},b)\sigma(\mathbf{a}\cdot\mathbf{x}-b) \mathrm{d}\mathbf{a}\mathrm{d}b\).
**Definition 8** (Generalized Ridgelet Transform Induced from Invariant \(\phi\)): _For any joint invariant map \(\phi\in\mathcal{A}\) and measurable function \(f\) on \(X\), put_
\[\mathtt{R}[f;\phi](\xi):=\int_{X}f(x)\overline{\phi(x,\xi)} \mathrm{d}x,\quad\xi\in\Xi. \tag{9}\]
_We call the integral transform \(\mathtt{R}[\bullet;\phi]\) a \(\phi\)-ridgelet transform for short._
As long as the integrals are convergent, it is the dual operator of \(\phi\)-transform, since
\[\langle\gamma,\mathtt{R}[f;\phi]\rangle_{L^{2}(\Xi)}=\int_{X\times \Xi}\gamma(\xi)\phi(x,\xi)\overline{f(x)}\mathrm{d}x\mathrm{d}\xi=\langle \mathtt{NN}[\gamma;\phi],f\rangle_{L^{2}(X)}. \tag{10}\]
**Theorem 9**: _Let \(G\) be a locally compact group. For any joint invariant functions \(\phi,\psi\in\mathcal{A}\), suppose that composite \(\mathtt{NN}_{\phi}\circ\mathtt{R}_{\psi}:L^{2}(X)\to L^{2}(X)\) is bounded, and that regular representation \((\pi,L^{2}(X))\) is irreducible. Then, there exists a bilinear form \((\!(\phi,\psi)\!)\in\mathbb{C}\) such that for any function \(f\in L^{2}(X)\),_
\[\mathtt{NN}_{\phi}\circ\mathtt{R}_{\psi}[f]=(\!(\phi,\psi)\!)f. \tag{11}\]
In other words, the \(\psi\)-ridgelet transform \(\mathtt{R}_{\psi}\) can be understood as a group theoretic generalization of the original ridgelet transform: up to the constant factor \((\!(\phi,\psi)\!)\), it is a right inverse operator of the \(\phi\)-transform \(\mathtt{NN}_{\phi}\).
**Proof** We write \(\mathtt{NN}[\bullet;\phi]\) as \(\mathtt{NN}_{\phi}\) and \(\mathtt{R}[\bullet;\phi]\) as \(\mathtt{R}_{\phi}\) for short. By the left-invariances of \(\mathrm{d}x\) and \(\psi\), for all \(g\in G\), we have
\[\mathtt{R}_{\psi}[\pi_{g}[f]](\xi) =\int_{X}f(g^{-1}\cdot x)\overline{\psi(x,\xi)}\mathrm{d}x =\langle\pi_{g}[f],\psi(\bullet,\xi)\rangle_{L^{2}(X)}\] \[=\int_{X}f(x)\overline{\psi(g\cdot x,\xi)}\mathrm{d}x =\langle f,\pi_{g}^{*}[\psi](\bullet,\xi)\rangle_{L^{2}(X)}\] \[=\int_{X}f(x)\overline{\psi(x,g^{-1}\cdot\xi)}\mathrm{d}x =\langle f,\widehat{\pi}_{g}[\psi](\bullet,\xi)\rangle_{L^{2}(X)}\] \[=\widehat{\pi}_{g}[\mathtt{R}_{\psi}[f]](\xi). \tag{12}\]
Here, \(\pi^{*}\) denotes the dual representation of \(\pi\) with respect to \(L^{2}(X)\)-product. Similarly,
\[\mathtt{NN}_{\phi}[\widehat{\pi}_{g}[\gamma]](x) =\int_{\Xi}\gamma(g^{-1}\cdot\xi)\phi(x,\xi)\mathrm{d}\xi =\langle\widehat{\pi}_{g}[\gamma],\phi(x,\bullet)\rangle_{L^{2}( \Xi)}\] \[=\int_{\Xi}\gamma(\xi)\phi(x,g\cdot\xi)\mathrm{d}\xi =\langle\gamma,\widehat{\pi}_{g}^{*}[\phi](x,\bullet)\rangle_{L^{2} (\Xi)}\] \[=\int_{\Xi}\gamma(\xi)\phi(g^{-1}\cdot x,\xi)\mathrm{d}\xi =\langle\gamma,\pi_{g}[\phi](x,\bullet)\rangle_{L^{2}(\Xi)}\] \[=\pi_{g}[\mathtt{NN}_{\phi}[\gamma]](x). \tag{13}\]
Here, \(\widehat{\pi}^{*}\) denotes the dual representation of \(\widehat{\pi}\) with respect to \(L^{2}(\Xi)\)-product.
As a consequence, \(\mathtt{NN}_{\phi}\circ\mathtt{R}_{\psi}:L^{2}(X)\to L^{2}(X)\) commutes with \(\pi\) as below
\[\mathtt{NN}_{\phi}\circ\mathtt{R}_{\psi}\circ\pi_{g}=\mathtt{NN}_{\phi}\circ \widehat{\pi}_{g}\circ\mathtt{R}_{\psi}=\pi_{g}\circ\mathtt{NN}_{\phi}\circ \mathtt{R}_{\psi} \tag{14}\]
for all \(g\in G\). Hence by Schur's lemma (Lemma 5), there exists a constant \(C_{\phi,\psi}\in\mathbb{C}\) such that \(\mathtt{NN}_{\phi}\circ\mathtt{R}_{\psi}=C_{\phi,\psi}\operatorname{Id}_{L^{2} (X)}\). By the construction of the left-hand side, \(C_{\phi,\psi}\) is bilinear in \(\phi\) and \(\psi\). \(\blacksquare\)
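The intertwining relations (12)-(13) can also be verified numerically. Below is a minimal sketch of our own for the cyclic group \(G=\mathbb{Z}/n\) acting on \(X=\Xi=\mathbb{Z}/n\) by shifts, where joint invariants take the form \(\phi(x,\xi)=h((x-\xi)\bmod n)\). Note that the regular representation of \(\mathbb{Z}/n\) is not irreducible, so this checks only the commutation (14), not the scalar conclusion of Theorem 9:

```python
import numpy as np

# Check of the intertwining relations (12)-(13) for the cyclic group
# G = Z/n acting on X = Xi = Z/n by shifts. Joint invariants have the
# form phi(x, xi) = h((x - xi) mod n). Caveat: the regular representation
# of Z/n is NOT irreducible, so only the commutation (14) is checked here,
# not the scalar conclusion of Theorem 9.

n, g = 8, 3
rng = np.random.default_rng(1)
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
Phi = rng.normal(size=n)[idx]          # Phi[x, xi] = h(x - xi)
Psi = rng.normal(size=n)[idx]          # Psi[x, xi] = k(x - xi)
f = rng.normal(size=n)

R = lambda v, P: P.conj().T @ v        # ridgelet: sum_x v(x) conj(P[x, xi])
NN = lambda w, P: P @ w                # network:  sum_xi w(xi) P[x, xi]
act = lambda v: np.roll(v, g)          # pi_g (here pi_hat_g has the same form)

print(np.allclose(R(act(f), Psi), act(R(f, Psi))))                     # Eq. (12)
print(np.allclose(NN(act(R(f, Psi)), Phi), act(NN(R(f, Psi), Phi))))   # Eq. (13)/(14)
```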
## 4 Examples
### Original Ridgelet Transform
This study started from a group theoretic proof of the original reconstruction formula (Theorem 4). The proof is in fact new and instructive in its own right, so we present the full version in Appendix A and sketch the setting below.
**Example 1**: _Let \(G\) be the affine group \(\operatorname{Aff}(m)=GL(m)\ltimes\mathbb{R}^{m}\), \(X=\mathbb{R}^{m}\) be the data domain with \(G\)-action_
\[g\cdot\boldsymbol{x}:=L\boldsymbol{x}+\boldsymbol{t},\quad g=(L,\boldsymbol{ t})\in G,\ \boldsymbol{x}\in\mathbb{R}^{m}=X\]
_and \(\Xi=\mathbb{R}^{m}\times\mathbb{R}\) be the parameter domain with dual \(G\)-action_
\[g\cdot(\boldsymbol{a},b)=(L^{-\top}\boldsymbol{a},b+\boldsymbol{t}^{\top}L^{- \top}\boldsymbol{a}),\quad g=(L,\boldsymbol{t})\in G,\ (\boldsymbol{a},b)\in \mathbb{R}^{m}\times\mathbb{R}=\Xi. \tag{15}\]
_We can see \(\phi(\boldsymbol{x},(\boldsymbol{a},b)):=\sigma(\boldsymbol{a}\cdot \boldsymbol{x}-b)\) is joint \(G\)-invariant. In fact,_
\[\phi(g\cdot\boldsymbol{x},g\cdot(\boldsymbol{a},b))=\sigma\left(L^{-\top} \boldsymbol{a}\cdot(L\boldsymbol{x}+\boldsymbol{t})-(b+\boldsymbol{t}^{\top}L ^{-\top}\boldsymbol{a})\right)=\sigma(\boldsymbol{a}\cdot\boldsymbol{x}-b)= \phi(\boldsymbol{x},(\boldsymbol{a},b)).\]
_Further, by Lemma 12, the regular representation \(\pi_{g}\) of \(G=\operatorname{Aff}(m)\) is known to be irreducible. Hence we can retain the original neural network and ridgelet transform:_
\[\mathtt{NN}[\gamma](\boldsymbol{x})=\int_{\mathbb{R}^{m}\times\mathbb{R}} \gamma(\boldsymbol{a},b)\sigma(\boldsymbol{a}\cdot\boldsymbol{x}-b) \mathrm{d}\boldsymbol{a}\mathrm{d}b,\quad\text{and}\quad\mathtt{R}[f]( \boldsymbol{a},b)=\int_{\mathbb{R}^{m}}f(\boldsymbol{x})\overline{\rho( \boldsymbol{a}\cdot\boldsymbol{x}-b)}\mathrm{d}\boldsymbol{x},\]
_satisfying \(\mathtt{NN}\circ\mathtt{R}=(\!(\sigma,\rho)\!)\operatorname{Id}_{L^{2}( \mathbb{R}^{m})}\)._
Additionally, a geometric interpretation of dual \(G\)-action (15) is discussed in Appendix B.
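The joint invariance in Example 1 is also easy to test numerically. The following sketch, with all names and random choices our own, draws a random \(g=(L,\boldsymbol{t})\) and random \((\boldsymbol{x},\boldsymbol{a},b)\) and confirms that \(\sigma(\boldsymbol{a}\cdot\boldsymbol{x}-b)\) is unchanged under the simultaneous actions:

```python
import numpy as np

# Sanity check of the joint invariance in Example 1: for random g = (L, t)
# in Aff(m) and random (x, a, b), sigma(a.x - b) is unchanged under the
# actions g.x = Lx + t and g.(a, b) = (L^{-T}a, b + t^T L^{-T} a).

rng = np.random.default_rng(2)
m = 3
L = rng.normal(size=(m, m)) + 3 * np.eye(m)   # generically invertible matrix
t = rng.normal(size=m)
x, a = rng.normal(size=m), rng.normal(size=m)
b = rng.normal()
sigma = np.tanh

gx = L @ x + t                                 # g.x
ga = np.linalg.solve(L.T, a)                   # L^{-T} a
gb = b + t @ ga                                # b + t^T L^{-T} a

print(np.isclose(sigma(a @ x - b), sigma(ga @ gx - gb)))   # True
```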
### Deep Ridgelet Transform
Sonoda et al. (2023) presented the ridgelet transform for _deep_ neural networks. We noticed that their network can also be induced from a joint invariant function. In other words, from the group representation theory perspective, function approximation with _any depth_ is unified.
**Example 2**: _Let \(G\) be any locally compact group, data domain \(X\) be any \(G\)-space, rewriting its \(G\)-action \(g\cdot x\) as \(g(x)\) so as to formally identify \(g\) with a hidden layer map, and parameter domain \(\Xi\) be the group \(G\) itself with dual \(G\)-action_
\[g\cdot\xi=\xi g^{-1}. \tag{16}\]
_We can see \(\phi(x,\xi):=\psi\circ\xi(x)\) is joint \(G\)-invariant. In fact,_
\[\phi(g\cdot x,g\cdot\xi)=\psi\circ(g\cdot\xi)(g\cdot x)=\psi\circ(\xi\circ g^{-1} )(g(x))=\psi\circ\xi(x)=\phi(x,\xi)\]
_Therefore, assuming that the regular representation \(\pi_{g}=\psi\circ g\) is irreducible on an invariant subspace \(\mathcal{H}\) of \(L^{2}(X)\), we can retain the formal deep network and deep ridgelet transform:_
\[\mathtt{NN}[\gamma](x):=\int_{\Xi}\gamma(\xi)\psi\circ\xi(x)\mathrm{d}\xi, \quad\text{and}\quad\mathtt{R}[f](\xi)=\int_{X}f(x)\overline{\psi\circ\xi(x)} \mathrm{d}x,\]
_satisfying \(\mathtt{NN}\circ\mathtt{R}=(\!(\phi,\phi)\!)\operatorname{Id}_{\mathcal{H}}\)._
### Voice Transform, or Generalized Wavelet Transform
The voice transform is also known as the _Gilmore-Perelomov coherent states_ and the _generalized wavelet transform_(Perelomov, 1986; Ali et al., 2014). It is well investigated in the research field of _coorbit theory_(Feichtinger and Grochenig, 1988, 1989a,b). We refer to Berge (2021) for a quick review of voice transform and coorbit theory.
**Definition 10**: _Given a unitary representation \((\pi,\mathcal{H})\) of group \(G\) on a Hilbert space \(\mathcal{H}\), the voice transform is defined as_
\[V_{\phi}[f](g):=\langle f,\pi_{g}[\phi]\rangle_{\mathcal{H}},\quad g\in G,\ f, \phi\in\mathcal{H}. \tag{17}\]
This unifies several integral transforms from the perspective of group theory such as short-time Fourier transform (STFT), wavelet transform (Grossmann et al., 1985, 1986; Holschneider, 1998; Laugesen et al., 2002; Gressman et al., 2003), and continuous shearlet transform (Labate et al., 2005; Guo and Labate, 2007; Kutyniok and Labate, 2012).
**Example 3**: _Let \(G\) be any group, data domain \(X\) be any \(G\)-space, and parameter domain \(\Xi\) be the group \(G\) itself with dual \(G\)-action \(g\cdot\xi=g\xi\). We can see \(\theta(x,\xi):=\psi(\xi^{-1}\cdot x)\) is joint \(G\)-invariant. In fact,_
\[\theta(g\cdot x,g\cdot\xi)=\psi((g\cdot\xi)^{-1}\cdot(g\cdot x))=\psi(\xi^{-1 }\cdot x)=\theta(x,\xi).\]
_Therefore, assuming that the regular representation \(\pi_{g}\) is irreducible, we can retain a dual voice transform and voice transform:_
\[\mathtt{NN}[\gamma](x):=\int_{\Xi}\gamma(\xi)\phi(\xi^{-1}\cdot x)\mathrm{d} \xi,\quad\text{and}\quad\mathtt{R}[f](\xi)=\int_{X}f(x)\overline{\psi(\xi^{-1}\cdot x)} \mathrm{d}x,\]
_satisfying \(\mathtt{NN}\circ\mathtt{R}=(\!(\phi,\psi)\!)\operatorname{Id}_{L^{2}(X)}\). This is a special case of the voice transform when \(\mathcal{H}=L^{2}(X)\), and \(\pi_{g}[\psi]=\psi(g^{-1}\cdot\bullet)\)._
We note that the voice transform \(V_{\phi}[f](g):=\langle f,\pi_{g}[\phi]\rangle_{\mathcal{H}}\) and the \(\phi\)-ridgelet transform \(\mathtt{R}_{\phi}[f](\xi):=\langle f,\phi(\bullet,\xi)\rangle_{L^{2}(X)}\) have common parts, but are different in general. While the example above and the original wavelet transform \(W_{\psi}[f](b,a):=\int_{\mathbb{R}}f(x)\psi((x-b)/a)\mathrm{d}x/\sqrt{a}\) are simultaneously the voice and ridgelet transforms, a ridgelet transform can be a voice transform only when the representation \((\pi,\mathcal{H})\) is the regular representation on \(L^{2}(X)\), and a voice transform can be a ridgelet transform only when the parameter domain \(\Xi\) is the group \(G\) itself and the feature map \(\phi\) is generated by \(G\)-action on a single function \(\psi\). Hence pursuing parallel results for the coorbit theory would be an interesting future work.
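For concreteness, here is a small discretization of the 1D wavelet transform \(W_{\psi}[f](b,a)\) mentioned above, i.e. the voice transform of the \(ax+b\) group with \(\pi_{g}[\psi](x)=\psi((x-b)/a)/\sqrt{a}\); the Mexican-hat window, grids and ranges are our own illustrative choices:

```python
import numpy as np

# A discretized voice transform for the 1D affine ("ax + b") group, which
# is exactly the continuous wavelet transform
#   W_psi[f](b, a) = int f(x) psi((x - b)/a) dx / sqrt(a).

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def mexican_hat(u):
    return (1 - u**2) * np.exp(-u**2 / 2)

f = np.exp(-(x - 1) ** 2)                       # a test signal

scales = np.linspace(0.3, 4.0, 40)              # a > 0
shifts = np.linspace(-6, 6, 121)                # b

W = np.array([[np.sum(f * mexican_hat((x - b) / a)) * dx / np.sqrt(a)
               for b in shifts] for a in scales])
print(W.shape)   # (40, 121): the voice-transform coefficients V_psi[f](g)
```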
## 5 Discussion
We presented a systematic method to induce a generalized neural network and its ridgelet transform, from a joint group invariant function on the data-parameter domain. Namely, given a joint group invariant function, the marginalization of parameter \(\xi\) (resp. data \(x\)) induces the network (resp. the ridgelet transform). Based on the group theoretic arguments, we demonstrated a simple proof of the reconstruction formula by using Schur's lemma, which implies the universality of the network. Since conventional universality theorems were shown using functional analytic tools, the group theoretic proof is a new contribution to the approximation theory, connecting geometric deep learning to abstract harmonic analysis. Further, since the proposed network covers both shallow and deep networks, the group representation theory can offer a unified perspective on function approximation with _any depth_.
In the past, Sonoda et al. (2022a,b) have developed the ridgelet transforms for neural networks on manifolds and function spaces using the Fourier transforms on those domains, and proposed a systematic scheme to derive a ridgelet transform for neural networks on a given domain based on the Fourier transform on that domain. Compared to our group theoretic method, the Fourier transform method is indirect and requires additional knowledge, not only of the symmetry of the data domain but also of the Fourier transform on it. We conjecture that those Fourier-based ridgelet transforms can also be derived by our group-theoretic method.
The authors are extremely grateful to the three anonymous reviewers for their valuable comments and suggestions, which have helped improve the quality of our manuscript. This work was supported by JSPS KAKENHI 20K03657, JST PRESTO JPMJPR2125, JST CREST JPMJCR2015 and JPMJCR1913, and JST ACTX JPMJAX2004.
|
2306.08383 | Neural network as a tool for design of amorphous metal alloys with
desired elastoplastic properties | The development and implementation of the methods for designing amorphous
metal alloys with desired mechanical properties is one of the most promising
areas of modern materials science. Here, the machine learning methods appear to
be a suitable complement to empirical methods related to the synthesis and
testing of amorphous alloys of various compositions. In the present work, it is
proposed a method to determine amorphous metal alloys with mechanical
properties closest to those required. More than $50\,000$ amorphous alloys of
different compositions have been considered, and the Young's modulus $E$ and
the yield strength $\sigma_{y}$ have been evaluated for them by the machine
learning model trained on the fundamental physical properties of the chemical
elements. Statistical treatment of the obtained results reveals that the
fundamental physical properties of the chemical element with the largest mass
fraction are the most significant factors, whose values correlate with the
values of the mechanical properties of the alloys, in which this element is
involved. It is shown that the values of the Young's modulus $E$ and the yield
strength $\sigma_{y}$ are higher for amorphous alloys based on Cr, Fe, Co, Ni,
Nb, Mo and W formed by the addition of semimetals (e.g. Be, B, Al, Sn),
nonmetals (e.g. Si and P) and lanthanides (e.g. La and Gd) than for alloys of
other compositions. Increasing the number of components in an alloy from $2$ to
$7$ and changing the mass fraction of chemical elements has no significant
impact on the strength characteristics $E$ and $\sigma_{y}$. Amorphous metal
alloys with the most improved mechanical properties have been identified. In
particular, such extremely high-strength alloys include Cr$_{80}$B$_{20}$
(among binary), Mo$_{60}$B$_{20}$W$_{20}$ (among ternary) and
Cr$_{40}$B$_{20}$Nb$_{10}$Pd$_{10}$Ta$_{10}$Si$_{10}$ (among multicomponent). | B. N. Galimzyanov, M. A. Doronina, A. V. Mokshin | 2023-06-14T09:18:27Z | http://arxiv.org/abs/2306.08383v1 | # Neural network as a tool for design of amorphous metal alloys with desired elastoplastic properties
###### Abstract
The development and implementation of the methods for designing amorphous metal alloys with desired mechanical properties is one of the most promising areas of modern materials science. Here, the machine learning methods appear to be a suitable complement to empirical methods related to the synthesis and testing of amorphous alloys of various compositions. In the present work, a method is proposed to determine amorphous metal alloys with mechanical properties closest to those required. More than \(50\,000\) amorphous alloys of different compositions have been considered, and the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) have been evaluated for them by the machine learning model trained on the fundamental physical properties of the chemical elements. Statistical treatment of the obtained results reveals that the fundamental physical properties of the chemical element with the largest mass fraction are the most significant factors, whose values correlate with the values of the mechanical properties of the alloys in which this element is involved. It is shown that the values of the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) are higher for amorphous alloys based on Cr, Fe, Co, Ni, Nb, Mo and W formed by the addition of semimetals (e.g. Be, B, Al, Sn), nonmetals (e.g. Si and P) and lanthanides (e.g. La and Gd) than for alloys of other compositions. Increasing the number of components in an alloy from 2 to 7 and changing the mass fraction of chemical elements has no significant impact on the strength characteristics \(E\) and \(\sigma_{y}\). Amorphous metal alloys with the most improved mechanical properties have been identified. In particular, such extremely high-strength alloys include Cr\({}_{80}\)B\({}_{20}\) (among binary), Mo\({}_{60}\)B\({}_{20}\)W\({}_{20}\) (among ternary) and Cr\({}_{40}\)B\({}_{20}\)Nb\({}_{10}\)Pd\({}_{10}\)Ta\({}_{10}\)Si\({}_{10}\) (among multicomponent).
keywords: machine learning; materials design; mechanical properties; metals; amorphous alloys +
Footnote †: journal:
## 1 Introduction
Amorphous metal alloys are promising materials for the automotive, aerospace, energy, electronics and medical technology industries [1; 2; 3; 4]. High corrosion resistance, high magnetic permeability, superior mechanical strength, high fracture toughness, high elastic strain limit and high formability are just some of the unique set of properties that make amorphous metal alloys widely applicable [5; 6; 7]. Such a combination of properties is directly due to the absence of structural order, and of the accompanying defects, typical for crystalline analogues [8; 9; 10; 11]. However, despite all the advantages of amorphous metal alloys, their production is complicated by the fact that the formation of a stable disordered structure depends strongly on the alloy composition (i.e. number of components, type of added chemical elements) and its preparation protocol (i.e. cooling and compression procedures, initial and final melt temperatures) [12; 13; 14; 15; 16].
Amorphous metal alloys have been actively studied for more than 80 years, beginning, in particular, with the works of Kramer [17; 18]. One of the first methods of practical formation of alloys with an amorphous structure was based on the so-called electrodeposition process. Later, in the 1960s, the first works related to the formation of amorphous metal films by rapid cooling of the corresponding melts appeared [19; 20]. As it turned out later, amorphization of metallic melts of almost any composition is possible if extremely fast cooling is used. The next stage in the development of the amorphous alloy formation methodology concerned alloys near eutectic compositions, where it was found that bulk amorphous samples of more than 1 mm thickness can be formed [21]. Further attention in this area focused on several aspects. Namely, the mechanical properties of bulk amorphous metal alloys are strongly dependent on the alloy composition and the chemical purity of the raw material. The strength properties of amorphous alloys can be significantly reduced due to the presence of impurities. Moreover, bulk amorphous metal alloys are inherently brittle. Therefore, in the early 2000s, studies were aimed at improving alloy hardening methods as well as at determining the relationship between the key mechanical properties of amorphous metal alloys, which include the Young's modulus \(E\), the yield strength \(\sigma_{y}\) and the strength \(\sigma_{f}\) [22; 23]. It has been shown that the relationship between the hardness \(H\) (by the Vickers method), the strength \(\sigma_{f}\), the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) of amorphous metal alloys is close to linear and can be reproduced, for example, by Tabor's relation \(H=K\sigma_{y}\), by Johnson's model \(H=\sigma_{y}(a+b\ln[cE/\sigma_{y}])\) and by the relation \(\sigma_{f}=dE^{1/2}\) (here, \(K\), \(a\), \(b\), \(c\) and \(d\) are constants) [24; 25; 26]. These studies found that amorphous metal alloys with large values of \(E\) and \(\sigma_{y}\) are characterized by high hardness \(H\) and strength \(\sigma_{f}\).
The synthesis of an amorphous metal alloy with desired mechanical properties may require enumerating various combinations of compositions followed by mechanical testing. This makes the process of synthesizing new alloys extremely difficult and significantly increases the costs. Here, computer design methods appear to be a suitable support for empirical methods at the stage of determining amorphous metal alloys with desired mechanical properties [27, 28]. In recent decades, the rapid development of information technologies as well as the automation of data collection and storage processes have contributed to the accumulation and systematization of information about the physical and mechanical properties of bulk amorphous metal alloys [29; 30; 31; 32]. Machine learning methods operate with large arrays of data and allow us to determine the relationship between the composition and properties of alloys, both already known and not previously known [33; 34; 35; 36]. For example, Xiong and co-authors have developed a machine learning model that can predict the glass-forming ability and elastic moduli of bulk metallic glasses based on the fundamental atomic properties, chemical and physical properties obtained from experiments or density functional theory simulations [37]. These results highlight the importance of the properties of individual chemical elements and of macroscopic properties in determining the strength characteristics of amorphous alloys. The results obtained by Khakurel et al. established that the average concentration of valence electrons, the atomic radius and the melting temperature are the key properties correlated with the Young's modulus of compositionally complex alloys [38]. The results of this work can also be extended to amorphous metal alloys, as confirmed in Refs. [39, 40]. In addition, as found in Ref. [41] using a machine learning model, the Young's modulus of metal alloys under normal conditions correlates with the yield strength and with the glass transition temperature. In this case, the specific "chemical formula" of the alloy, which is determined by the molar mass and the number of components, is not as important as is usually expected. Johnson and Samwer have found that the mechanical properties (elastic constants, compressive yield strength, elastic strain limit) of 30 bulk metallic glasses as functions of the scaled temperature \(T_{R}/T_{g}\) obey the universal law \(\propto a-b(T_{R}/T_{g})^{2/3}\), where \(a\) and \(b\) are constants, \(T_{R}\) is the room temperature and \(T_{g}\) is the glass transition temperature [42]. The results of this work systematize existing knowledge about the mechanical properties of amorphous alloys. An artificial neural network has been created by Jeon and co-authors for designing Fe-based amorphous metal alloys with the desired crystallization temperature and glass transition temperature [43]. Thus, all these studies show that machine learning methods are a suitable tool to find new amorphous alloys with the required physical and mechanical properties. Despite the significant number of such studies, little attention has been paid to the development of methods for determining previously unknown amorphous alloys with the desired mechanical properties.
The present work proposes a new method for determining amorphous metal alloys of arbitrary composition based on a large set of empirical data. The originality of this method is that it is based on a machine learning model capable of predicting the Young's modulus and the yield strength of amorphous alloys taking into account the fundamental properties of each chemical element that forms the alloys. The obtained results provide new knowledge that will contribute to identifying amorphous metal alloys that best satisfy the required mechanical properties.
## 2 Method for determining the mechanical properties of amorphous metal alloys
### General strategy of the method
The developed method for determining amorphous metal alloys is based on a machine learning model, which is a feed-forward artificial neural network. The main advantage of this method is the possibility to calculate the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) both for known amorphous metal alloys and for alloys that are yet to be synthesized. The developed method makes it possible to determine \(E\) and \(\sigma_{y}\) of alloys whose number of components varies in the range from 2 to 7. Note that this range of component numbers covers the majority of known metal alloys. In addition, the proposed method can be adapted to identify alloys with a larger number of components, provided that appropriate data for neural network training are available. The composition and mass fraction of chemical elements in the generated alloys are the control parameters, which allow us to construct a diverse set of compounds.
The general strategy for determining amorphous metal alloys implemented in this work consists of four main stages [see Figure 1]:
* Stage I. This stage includes the process of data collection and systematization of information about the properties of multicomponent amorphous metal alloys based on Al, Au, Ca, Co,
Cu, Fe, La, Hf, Mg, Ni, Pd, Pt, Sc, Ti, W, Zr, etc., as well as information about the properties of the other additional chemical elements involved in the formation of these alloys. Among these properties are the atomic mass \(m_{a}\), the covalent radius \(r_{c}\), the ionization energy \(E_{i}\) and the electronegativity \(\chi\), which characterize the nature of the chemical element [see Table 1]. This choice is due to the following reasons. First, these parameters most clearly define the possible physical and chemical bonds between the elements, which can either promote or inhibit the formation of an amorphous structure. For example, according to the empirical rule proposed by Inoue et al. in the early 1990's [44], the difference in atomic sizes must be greater than 12 % for good amorphization of a liquid. Secondly, most of the intrinsic properties of chemical elements (especially of the same type) are correlated. In addition, the thermal conductivity \(\lambda\), the specific heat capacity \(C_{s}\), the density \(\rho\), the melting temperature \(T_{m}\) and the boiling temperature \(T_{b}\) of chemical elements at normal conditions are used. The atomic number \(Z\) and the mass fraction \(m_{f}\) of each chemical element in the alloy are used to characterize the alloy composition. The Young's modulus \(E\) and the yield strength \(\sigma_{y}\) are also applied, whose values are known for the considered amorphous alloys. The values of all the
Figure 1: Four-stage scheme of the method for determining amorphous metal alloys and calculating their mechanical properties.
listed physical properties are taken from the database _ITPhyMS_ (Information technologies in physical materials science) [45] and the database _Materials Project_[46] as well as from Refs. [36; 47; 48; 49; 50] [see Supplementary data of the present work]. These properties are characterized by different physical dimensions and by different ranges of values. Therefore, the properties are calibrated so that their values vary in the range [0; 1]. The calibration is done according to the rule
\[\text{Property}^{\prime}=\frac{\text{Property}-\text{Value}_{\text{min}}}{ \text{Value}_{\text{max}}-\text{Value}_{\text{min}}}, \tag{1}\]
where "Value\({}_{\text{min}}\)" and "Value\({}_{\text{max}}\)" are the smallest and largest known values of the "Property". Moreover, all these listed properties correlate with the mechanical properties of materials. For example, Xiong et al. have shown that the accuracy of predicting the mechanical properties of amorphous metal alloys is improved when the quantities \(m_{a}\), \(Z\), \(r_{c}\), \(\rho\), \(\lambda\), \(T_{m}\) and \(T_{b}\) are considered in a machine learning model [37]. In addition, the results obtained by Wang based on the analysis of a large set of empirical data for amorphous alloys allow one to establish the existence of correlation between elastic moduli (i.e. Young's modulus, shear modulus, bulk modulus), microstructural features, rheological properties, the glass transition temperature, the melting temperature and the boson peak [47; 48].
* Stage II. Alloys with different compositions are generated. Taking into account the number of possible components, the combinations of all chemical elements and their mass fractions, up to \(10^{18}\) different compositions can be generated. When generating alloys, only those chemical elements are selected that appear in the alloys of the training dataset. In the present work, 32 chemical elements were used, including transition metals (Fe, Co, Ni, Cu, etc.), semimetals (B, Al, Sn, etc.), lanthanides (La, Gd, Er, etc.) and alkali and alkaline earth metals (Li, Be, Mg, Ca, etc.). A list of all the considered chemical elements is given in Table S1 in Supplementary data. The mass fraction of the chemical elements in a generated alloy is also set randomly so that the total mass fraction of all chemical elements is equal to 100 % (see the sketch after this list). A set of physical properties is created for each chemical element [see Table 1].
* Stage III. Information about the alloy composition and the physical properties of all the chemical elements is processed by the pre-trained neural network. This neural network evaluates the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) for all generated alloys. The training procedure of the neural network is discussed in more detail in the subsection "Machine learning model: structure and training".
* Stage IV. Statistical interpretation of machine learning results is performed.
Thus, the proposed method makes it possible to perform a complete cycle of alloy design and determine its mechanical properties: from obtaining the correct alloy composition to calculating the correct values of \(E\) and \(\sigma_{y}\).
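As referenced in Stages I and II above, the following minimal sketch illustrates these two preprocessing steps; the element list is a reduced, illustrative subset of the 32 elements used in the paper, and the property values and function names are our own assumptions:

```python
import random
import numpy as np

# Stage I: min-max calibration of Eq. (1) -- rescale a property to [0, 1]
# using the smallest and largest known values over the elements considered.
def calibrate(values):
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Stage II: generate a random alloy with 2 to 7 components whose mass
# fractions sum to 100 %.
ELEMENTS = ["Fe", "Co", "Ni", "Cu", "Cr", "Mo", "W", "Nb", "Ta", "Ti",
            "Zr", "Pd", "B", "Al", "Si", "P", "Sn", "Be", "Mg", "La", "Gd"]

def random_alloy(rng=random):
    n = rng.randint(2, 7)                       # number of components
    elements = rng.sample(ELEMENTS, n)
    w = [rng.random() for _ in range(n)]
    return {el: 100.0 * wi / sum(w) for el, wi in zip(elements, w)}

random.seed(0)
print(calibrate([1811.0, 1768.0, 1728.0, 2180.0]))   # e.g. T_m (K) of Fe, Co, Ni, Cr
print(random_alloy())
```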
### Machine learning model: structure and training
The machine learning model is a four-layer artificial neural network. The first layer has 77 input neurons for the values of the 11 physical properties of all chemical elements of the obtained alloy (7 input neurons are allocated to each property because the maximum number of components in the alloy is seven). If the number of components in the alloy is less than 7, the remaining neurons are unused. The next two layers are hidden. The first hidden layer consists of 80 neurons, while the second hidden layer has 10 neurons. Note that the exact number of neurons in the hidden layers is not critical: the neural network produces similar results with 80 to 100 neurons in the first hidden layer and with 10 to 80 neurons in the second hidden layer. The fourth layer consists of one neuron
| Property | Symbol | Unit |
| --- | --- | --- |
| Atomic number | \(Z\) | – |
| Mass fraction of elements | \(m_{f}\) | % |
| Atomic mass | \(m_{a}\) | a.m.u. |
| Covalent radius | \(r_{c}\) | pm |
| Ionization energy | \(E_{i}\) | eV |
| Electronegativity | \(\chi\) | – |
| Thermal conductivity | \(\lambda\) | W/(m\(\cdot\)K) |
| Specific heat capacity | \(C_{s}\) | J/(g\(\cdot\)K) |
| Density | \(\rho\) | g/cm\({}^{3}\) |
| Melting temperature | \(T_{m}\) | K |
| Boiling temperature | \(T_{b}\) | K |

Table 1: Physical properties of chemical elements used as input parameters in the artificial neural network.
that determines the Young's modulus \(E\) or the yield strength \(\sigma_{y}\). It is important to note that two separate independent neural networks with the same structure are used to calculate the values of \(E\) and \(\sigma_{y}\).
Calculation of the values of all neurons is carried out by expression [51]:
\[n_{i}^{(k)}=f\left(\sum_{j=1}^{N_{k-1}}w_{ij}^{(k-1)}n_{j}^{(k-1)}+b_{i}^{(k)} \right). \tag{2}\]
Here, \(n_{i}^{(k)}\) is the value of the \(i\)th neuron in the \(k\)th layer (\(k=2,\,3,\,4\)); \(w_{ij}^{(k-1)}\) is the value of the \((k-1)\)th layer weight going from a neuron with index \(j\) to a neuron with index \(i\) from the \(k\)th layer; \(b_{i}^{(k)}\) is the bias weight acting on a neuron with index \(i\); \(N_{k-1}\) is the number of neurons in the \((k-1)\)th layer. The sigmoid \(f(x)=1/(1+\exp[-x])\) is applied as the activation function [52].
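A minimal NumPy sketch of the forward pass of Eq. (2) for the 77-80-10-1 architecture described above; the weights here are random placeholders rather than trained values:

```python
import numpy as np

# Forward pass of Eq. (2) with the logistic sigmoid f(x) = 1/(1 + exp(-x)).
rng = np.random.default_rng(0)
sizes = [77, 80, 10, 1]
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(scale=0.1, size=n_out) for n_out in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # n^(k) = f( W^(k-1) n^(k-1) + b^(k) ), applied layer by layer
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)
    return x

x = rng.random(77)      # 11 calibrated properties x 7 component slots
print(forward(x))       # predicted (normalized) E or sigma_y
```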
The neural network is trained using the backpropagation algorithm [53; 54]. The values of the weight coefficients are adjusted as follows:
\[w_{ij}^{(k),\,new}=w_{ij}^{(k)}-\gamma\frac{\partial\xi}{\partial w_{ij}^{(k) }}, \tag{3}\]
Figure 2: (a) Plot of the predicted Young’s modulus \(E\) versus the empirical \(E\). (b) Plot of the predicted yield strength \(\sigma_{y}\) versus the empirical \(\sigma_{y}\). Top insets: mean relative error as function of the number of training cycles for \(E\) and \(\sigma_{y}\). Bottom insets: dependence of the mean relative error and the training cycles on the training rate \(\gamma\), from which the optimal value of \(\gamma\) (indicated by the red arrows) was determined.
where \(\xi\) is the squared error between the output neuron and the desired value of the mechanical property; \(\gamma\) is the training rate. In the present work, the training rate \(\gamma=0.3\) is used, which is optimal for the created neural network: at this value, the machine learning model gives the best results for \(E\) and \(\sigma_{y}\), with the lowest MRE at a relatively small number of training cycles [see insets in Figures 2(a) and 2(b)]. The original dataset is divided into training and validation subsets in the proportion 80:20. The training subset consists of amorphous metal alloys based on Al, Au, Ca, Co, Cu, Fe, La, Hf, etc. with different compositions, for which the values of \(E\) and \(\sigma_{y}\) are known [36; 47; 48; 49; 50]. The physical properties of the chemical elements of these alloys are also used in the training procedure [see Supplementary data]. To verify the correctness of the machine learning results, the validation subset is applied, which includes amorphous alloys that were not included in the training subset. The criterion to stop the training procedure is the minimal error between the results of the output neuron and the required values from the validation subset.
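The update rule (3) can be illustrated on a single logistic neuron with squared error \(\xi=(\hat{y}-y)^{2}\) and the paper's rate \(\gamma=0.3\); this toy example is our own and omits the full backpropagation through the hidden layers:

```python
import numpy as np

# One-neuron illustration of Eq. (3): gradient descent on the squared error
# xi = (y_hat - y_req)^2 with y_hat = f(w.x + b) and logistic f.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

gamma = 0.3
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
x, y_req = rng.random(3), 0.7

for _ in range(200):
    y_hat = sigmoid(w @ x + b)
    # chain rule: d(xi)/dw = 2 (y_hat - y_req) * f'(z) * x, with f' = f(1 - f)
    delta = 2.0 * (y_hat - y_req) * y_hat * (1.0 - y_hat)
    w -= gamma * delta * x          # Eq. (3) for the input weights
    b -= gamma * delta              # Eq. (3) for the bias weight
print(sigmoid(w @ x + b))           # approaches the required value 0.7
```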
### Validation of the machine learning model
Typically, RMSRE (Root Mean Squared Relative Error), RMSE (Root Mean Square Error), RRMSE (Relative Root Mean Square Error), MSE (Mean Square Error), MAE (Mean Absolute Error) or MRE (Mean Relative Error) are used as indicators for measuring the accuracy of results [55; 56; 57; 58; 59]. In the present work, it was important to use an indicator that does not depend on the units of physical quantities and, at the same time, is easy to estimate. Therefore, we chose the MRE, which is calculated by the expression:
\[\text{MRE}=\frac{1}{N}\sum_{i=1}^{N}\frac{|\mathcal{M}_{\text{ANN}}-\mathcal{ M}_{\text{req}}|}{\mathcal{M}_{\text{req}}}\times 100\%. \tag{4}\]
Here, \(\mathcal{M}=\{E\text{ or }\sigma_{y}\}\) denotes the mechanical property; \(\mathcal{M}_{\text{ANN}}\) is the result of the neural network; \(\mathcal{M}_{\text{req}}\) is the required value of the mechanical property; \(N\) is the number of items in the validation subset. We find that the MRE is \(\sim 13\) % for the Young's modulus \(E\) and \(\sim 11\) % for the yield strength \(\sigma_{y}\). This relatively low MRE indicates a correlation between the predicted and empirical values of the mechanical properties, which is also confirmed by the results presented in Figures 2(a) and 2(b). Moreover, the values of the MRE are stable. This is confirmed by the computed loss functions [see insets in Figures 2(a) and 2(b)], which reach a plateau after \(3\times 10^{3}\) training cycles. Thus, the results of the machine learning model are reliable and reproducible.
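Eq. (4) translates directly into code; a minimal sketch with illustrative numbers:

```python
import numpy as np

# Mean relative error of Eq. (4), used to score the validation subset.
def mre(predicted, required):
    predicted = np.asarray(predicted, dtype=float)
    required = np.asarray(required, dtype=float)
    return np.mean(np.abs(predicted - required) / required) * 100.0

print(mre([95.0, 210.0], [100.0, 200.0]))   # -> 5.0 (%)
```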
## 3 Properties importance scores
The analysis of the importance scores shows that all the considered physical properties (\(\lambda\), \(T_{b}\), \(\chi\), \(\rho\), \(C_{s}\), \(T_{m}\), \(E_{i}\), \(r_{c}\) and \(m_{a}\)) are necessary for the correct evaluation of the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) by the machine learning model. As can be seen from Figure 3(a), in the case of the Young's modulus \(E\), the importance scores of the properties \(\lambda\), \(T_{b}\), \(C_{s}\), \(T_{m}\) and \(m_{a}\) are similar (MRE\(\sim\) 45 %), and these physical properties can be recognized as significant factors. The lowest importance scores are observed for the density \(\rho\) and for the properties characterizing the chemical nature of the atoms: the ionization energy \(E_{i}\) (MRE\(\sim\) 75 %), the electronegativity \(\chi\) (MRE\(\sim\) 61 %) and the covalent radius \(r_{c}\) (MRE\(\sim\) 60 %). These properties have less impact on the result of the machine learning model. Furthermore, considering all the properties reduces the error to MRE\(\sim\) 13 %. The importance scores of the properties in the case of the yield strength \(\sigma_{y}\) differ significantly from the Young's modulus \(E\) [see Figure 3(b)]. The main factors affecting the yield strength \(\sigma_{y}\) are the thermal conductivity \(\lambda\) (MRE\(\sim\) 22 %) and the covalent radius \(r_{c}\) (MRE\(\sim\) 23 %). MRE for other parameters is above 28 %. Together, these properties lead to the best result, where the error is MRE\(\sim\) 11 %.
Thus, the formation of the machine learning models for the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) using a single physical property (\(\lambda\), or \(T_{b}\), or \(\chi\), or \(\rho\), or \(C_{s}\), or \(T_{m},\dots\)) produces an error that is much larger than the error when these machine learning models are formed with the entire set of physical properties. Mathematically, such a situation is possible when the correlation between the parameter \(E\) (or \(\sigma_{y}\)) and some individual physical property appears indirectly (not explicitly). In turn, this means that it is not possible to obtain analytical expressions relating a mechanical property to any parameter of the set (\(\lambda\), \(T_{b}\), \(\chi\), \(\rho\), \(C_{s}\), \(T_{m}\), \(E_{i}\), \(r_{c}\) and \(m_{a}\)) while correctly reproducing the results for an arbitrary metal alloy. The methodology of artificial neural networks used in this study makes it possible to obtain a correspondence between the Young's modulus \(E\) (or the yield strength \(\sigma_{y}\)) and the whole set of the considered physical properties; this correspondence is reproduced not by an analytical expression, but by the internal structure of the formed neural network. In fact, this feature of the methodology is an advantage when dealing with a fairly large set of parameters.
An additional evaluation of the accuracy of the machine learning model was performed by computing the MRE for different numbers of physical properties in the input of the neural network. Figure 3(c) shows that, when adding properties in the order \(\rho\), \(T_{m}\), \(T_{b}\), \(\lambda\), \(C_{s}\), \(m_{a}\), \(r_{c}\), \(\chi\) and \(E_{i}\), the MRE decreases from \(\sim 58\) % to \(\sim 13\) % for \(E\) and from \(\sim 33\) % to \(\sim 11\) % for \(\sigma_{y}\). A rapid decrease of the error is observed when the temperatures \(T_{m}\) and \(T_{b}\) as well as the quantities \(E_{i}\) and \(\chi\) are added, which may be due to their multicollinearity. Figure 3(d) shows that the Pearson correlation coefficients for the considered properties take both positive and negative values in the range from \(-1\) to \(1\) [60]. For example, the positive correlation between the temperatures \(T_{m}\) and \(T_{b}\) is due to the fact that an increase in the melting temperature leads to an increase in the boiling temperature [61]. An increase in the atomic mass \(m_{a}\) of the alloy components usually leads to an increase in its density \(\rho\), which leads to a positive correlation between \(m_{a}\) and \(\rho\) [62; 63].
Figure 3: Importance scores for each physical property: (a) in the case of the Young’s modulus and (b) in the case of the yield strength. (c) Mean relative error as a function of the number of parameters in the artificial neural network input. (d) Pearson correlation heat map for the considered physical properties.
The presence of the pronounced negative correlation between the pairs \(r_{c}\), \(E_{i}\) and \(r_{c}\), \(\chi\) is due to the fact that a decrease in the covalent radius \(r_{c}\) leads to an increase in \(E_{i}\) and \(\chi\) through the increase in the electron density in the atom [64; 65].
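A correlation heat map like Fig. 3(d) can be reproduced from a property table with pandas; the few rows below (roughly Fe, Co, Ni, Cr, Mo) are illustrative values, not the paper's dataset:

```python
import pandas as pd

# Pearson correlation matrix of elemental properties, as in Fig. 3(d).
df = pd.DataFrame({
    "m_a": [55.8, 58.9, 58.7, 52.0, 95.9],       # atomic mass (a.m.u.)
    "rho": [7.87, 8.90, 8.91, 7.19, 10.28],      # density (g/cm^3)
    "T_m": [1811, 1768, 1728, 2180, 2896],       # melting temperature (K)
    "T_b": [3134, 3200, 3186, 2944, 4912],       # boiling temperature (K)
})
print(df.corr(method="pearson").round(2))
```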
## 4 Statistical interpretation of the results
In the present study, \(50\,000\) different amorphous metal alloys were obtained by the proposed method. All alloys were sorted according to the atomic number \(Z\) of the basic chemical element and the number of components in the alloy. Using the trained machine learning model, the values of \(E\) and \(\sigma_{y}\) were calculated for each alloy. Then, the average value of the mechanical property was found for each \(X\)-based alloy consisting of \(n\) components (where \(n=2,\,3,...,\,7\)). Here, \(X\) denotes the chemical element whose mass fraction in the alloy is greater than that of the other elements. For example, Al-based binary alloys Al\({}_{90}\)Fe\({}_{10}\), Al\({}_{80}\)Cu\({}_{20}\), Al\({}_{60}\)Ni\({}_{40}\), etc. were selected and the average values of \(E\) and \(\sigma_{y}\) were determined for all these alloys. Similar calculations were performed for alloys based on other metals with different numbers of components. Then, the dependence of the average values of \(E\) and \(\sigma_{y}\) on the atomic number \(Z\) of the basic chemical element was determined.
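The averaging over \(X\)-based, \(n\)-component alloys described above is a straightforward group-by operation; a sketch with made-up rows and our own column names:

```python
import pandas as pd

# Group predicted alloys by basic element (largest mass fraction) and
# number of components, then average E and sigma_y within each group.
alloys = pd.DataFrame({
    "base":  ["Al", "Al", "Cr", "Cr", "Fe"],
    "n":     [2, 3, 2, 6, 5],
    "E":     [70.0, 75.0, 305.0, 310.0, 296.0],   # GPa
    "sig_y": [1.1, 1.3, 4.9, 5.6, 5.4],           # GPa
})
print(alloys.groupby(["base", "n"])[["E", "sig_y"]].mean())
```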
The statistical interpretation of the results reveals that \(E\) and \(\sigma_{y}\) depend mainly on the properties of the chemical element with the largest mass fraction. As seen in Figure 4, changing the number of components in the alloy has no significant effect on the values of \(E\) and \(\sigma_{y}\). From the array of \(50\,000\) different alloys obtained by the machine learning model, the alloys with the highest Young's modulus \(E\) and yield strength \(\sigma_{y}\) were selected [see Table 2]. The results show that these alloys are mainly based on Ti, Cr, Fe, Co, Ni, Zr, Nb, Mo, Pd, Ta and W. For example, these are the Mo\({}_{60}\)B\({}_{20}\)W\({}_{20}\), Co\({}_{40}\)B\({}_{20}\)Be\({}_{20}\)Al\({}_{20}\), Cr\({}_{40}\)B\({}_{20}\)Nb\({}_{10}\)Pd\({}_{10}\)Ta\({}_{10}\)Si\({}_{10}\) and Cr\({}_{30}\)Mo\({}_{20}\)W\({}_{20}\)Pd\({}_{10}\)Gd\({}_{10}\)B\({}_{10}\) alloys, for which the mechanical properties are \(E>300\) GPa and \(\sigma_{y}>5.0\) GPa. It is important to note that the results for alloys of these compositions were not previously known, although alloys of some related compositions have been studied. For example, for W\({}_{46}\)Ru\({}_{37}\)B\({}_{17}\), Co\({}_{43}\)B\({}_{31.5}\)Fe\({}_{20}\)Ta\({}_{5.5}\) and Co\({}_{60}\)B\({}_{35}\)Ta\({}_{5}\) it was experimentally established that \(E>250\) GPa and \(\sigma_{y}>5.0\) GPa [50; 41]. The obtained results reveal that the alloys based on Cr, Mo and W from group VI-B of the Periodic Table of the Elements have improved mechanical properties. Metals such as Cr, Mo and W are refractory and have very high hardness [66; 67; 68]. Hence, their significant presence in an alloy improves its strength. Note that this fact is also well known in metallurgy, where these metals are widely used to increase the hardness of steel alloys, to increase wear resistance and to form
wear-resistant coatings (e.g. alloys Cr-Co, Cr-Fe, Mo-Fe, Mo-Cr-Fe, W-Fe, W-Ni-Co) [69; 70]. The machine learning model predicts improved mechanical properties in the case of alloys based on Ti, Cr, Fe, Co, Ni, Zr, Nb, Mo, etc. when these alloys are doped with other metals (e.g. Be, B, Hf), nonmetals (e.g. Si, P) and lanthanides (e.g. La, Gd). The mechanical properties of alloys based on Al, Mg, Ca, Cu, Zn, Ag, Au, Hf, lanthanides, etc. are inferior to those of the alloys based on Ti, Cr, Fe, Co, etc. It should be noted that the relatively high values of \(E\) and \(\sigma_{y}\) obtained for B and Si are due to statistical error, since B-based and Si-based alloys were not used in the training stage of
Figure 4: 3D plot of the dependence of the mechanical properties on the atomic number \(Z\) of the basic chemical element in the alloy and on its number of components: (a) for the Young’s modulus \(E\) and (b) for the yield strength \(\sigma_{y}\).
the machine learning model. However, alloys containing B and Si were considered.
In Table 3, we list 10 binary and ternary amorphous metal alloys selected from the 50 000 alloys considered in this study. By simple comparison, one can see that the predicted \(E\) mainly correlates with the mechanical properties of the chemical element with the highest mass fraction. For example, for amorphous Cr\({}_{80}\)B\({}_{20}\), we find \(E\approx 305\) GPa, while the Young's modulus of pure crystalline Cr is \(E\approx 279\) GPa. In the case of amorphous W\({}_{40}\)Mo\({}_{40}\)B\({}_{20}\), we have \(E\approx 318\) GPa, while \(E\) is \(\sim 410\) GPa for pure crystalline W. The mechanical properties can vary depending on the concentration of the doped chemical elements and on the class to which these elements belong (metals, nonmetals, lanthanides, etc.). For example, the predicted Young's modulus for Ni\({}_{40}\)Cr\({}_{40}\)Co\({}_{20}\) is \(E\approx 58\) GPa. At the same time, the presence of Zr and Si in the Ni-based alloy doubles the Young's modulus \(E\) (i.e. one has \(E\approx 108\) GPa for Ni\({}_{40}\)Zr\({}_{40}\)Si\({}_{20}\)). For Ni\({}_{40}\)Mo\({}_{40}\)W\({}_{20}\), which includes the refractory metals Mo and W, the machine learning model predicts \(E\approx 183\) GPa [see Table 3]. Doping with refractory metals, nonmetals and lanthanides (e.g. B, Si, Gd, La) makes it possible to increase the strength of these alloys, which is actively used in modern metallurgy to produce heat-resistant alloys [71, 72]. This simple quantitative analysis confirms that a properly selected composition and the physical properties of the main chemical elements are most important in determining the alloys that best match the required mechanical properties.
| Number of components | Alloy | \(E\), GPa | Alloy | \(\sigma_{y}\), GPa |
| --- | --- | --- | --- | --- |
| 2 | Cr\({}_{80}\)B\({}_{20}\) | 305 | Pd\({}_{60}\)B\({}_{40}\) | 5.13 |
| 2 | W\({}_{60}\)Hf\({}_{40}\) | 271 | W\({}_{60}\)Hf\({}_{40}\) | 5.04 |
| 3 | Mo\({}_{60}\)B\({}_{20}\)W\({}_{20}\) | 319 | Mo\({}_{60}\)B\({}_{20}\)Si\({}_{20}\) | 5.35 |
| 3 | Nb\({}_{40}\)Hf\({}_{40}\)B\({}_{20}\) | 289 | Ni\({}_{60}\)B\({}_{20}\)Sc\({}_{20}\) | 5.17 |
| 3 | Ni\({}_{60}\)B\({}_{20}\)W\({}_{20}\) | 280 | Pd\({}_{60}\)B\({}_{20}\)P\({}_{20}\) | 4.31 |
| 4 | Ag\({}_{40}\)B\({}_{20}\)Sc\({}_{20}\)Ta\({}_{20}\) | 302 | Co\({}_{40}\)B\({}_{20}\)Be\({}_{20}\)Al\({}_{20}\) | 5.27 |
| 4 | Zr\({}_{40}\)Ni\({}_{20}\)B\({}_{20}\)Be\({}_{20}\) | 285 | Nb\({}_{40}\)W\({}_{20}\)La\({}_{20}\)B\({}_{20}\) | 4.88 |
| 4 | Cr\({}_{40}\)B\({}_{20}\)Zr\({}_{20}\)Hf\({}_{20}\) | 271 | Ti\({}_{40}\)W\({}_{20}\)Pd\({}_{20}\)B\({}_{20}\) | 4.70 |
| 5 | Ti\({}_{30}\)Fe\({}_{30}\)B\({}_{20}\)Sn\({}_{10}\)Be\({}_{10}\) | 296 | Co\({}_{40}\)B\({}_{30}\)Ag\({}_{10}\)Gd\({}_{10}\)Si\({}_{10}\) | 5.54 |
| 5 | Pd\({}_{40}\)B\({}_{20}\)Si\({}_{20}\)P\({}_{10}\)Hf\({}_{10}\) | 289 | Fe\({}_{50}\)B\({}_{20}\)Mo\({}_{10}\)Ta\({}_{10}\)Ag\({}_{10}\) | 5.39 |
| 6 | Cr\({}_{40}\)B\({}_{20}\)Nb\({}_{10}\)Pd\({}_{10}\)Ta\({}_{10}\)Si\({}_{10}\) | 310 | Cr\({}_{30}\)Mo\({}_{20}\)W\({}_{20}\)Pd\({}_{10}\)Gd\({}_{10}\)B\({}_{10}\) | 5.62 |
| 6 | Pd\({}_{40}\)Be\({}_{20}\)Mo\({}_{10}\)Ti\({}_{10}\)B\({}_{10}\)Fe\({}_{10}\) | 306 | Mo\({}_{40}\)W\({}_{20}\)Pd\({}_{10}\)Gd\({}_{10}\)B\({}_{10}\)Cr\({}_{10}\) | 5.53 |
| 6 | W\({}_{30}\)B\({}_{20}\)Au\({}_{20}\)Be\({}_{10}\)Nb\({}_{10}\)Ag\({}_{10}\) | 296 | Ta\({}_{20}\)Nb\({}_{20}\)Al\({}_{20}\)Au\({}_{20}\)B\({}_{10}\)W\({}_{10}\) | 5.06 |
| 7 | W\({}_{20}\)Co\({}_{20}\)Nb\({}_{20}\)Ag\({}_{10}\)B\({}_{10}\)Be\({}_{10}\)Mg\({}_{10}\) | 284 | W\({}_{20}\)B\({}_{20}\)Ag\({}_{20}\)Nb\({}_{10}\)Si\({}_{10}\)Co\({}_{10}\)Pd\({}_{10}\) | 5.24 |
| 7 | Cr\({}_{20}\)Ag\({}_{20}\)Ti\({}_{20}\)B\({}_{10}\)Gd\({}_{10}\)Be\({}_{10}\)Mg\({}_{10}\) | 234 | Cr\({}_{20}\)Fe\({}_{20}\)W\({}_{20}\)Ca\({}_{10}\)B\({}_{10}\)Sn\({}_{10}\)Be\({}_{10}\) | 3.78 |

Table 2: Young's modulus \(E\) and yield strength \(\sigma_{y}\) predicted by the machine learning model for different amorphous metal alloys. Here, the average accuracy is \(\sim 88\) %.
## 5 Conclusions
In the present study, the machine learning model was applied to predict the Young's modulus \(E\) and the yield strength \(\sigma_{y}\) of amorphous metal alloys with different compositions. More than \(50\,000\) different alloys were determined, and \(E\) and \(\sigma_{y}\) were evaluated for each of them. It was found that the artificial neural network trained on the basis of information about the atomic number of a chemical element, its atomic mass, covalent radius, ionization energy, electronegativity, thermal conductivity, specific heat capacity, density, melting temperature and boiling temperature allows us to correctly determine \(E\) and \(\sigma_{y}\) of amorphous metal alloys consisting of 2 to 7 components and containing chemical elements with atomic numbers from \(Z=3\) to \(Z=79\). Here, the mean relative error is \(\sim(12\pm 1)\%\), which is good accuracy for a feed-forward multilayer neural network. The results of the statistical treatment made it possible to determine the chemical elements with the largest mass fraction whose presence in the alloy leads to a significant increase in the strength of alloys. These chemical elements are B, Cr, Fe, Co, Ni, Nb, Mo, Pd and W. At the same time, the quantities \(E\) and \(\sigma_{y}\) show a weak dependence on the number of components in the alloy. Thus, the most significant factors in the synthesis of alloys with the desired mechanical properties are a properly selected composition and the physical properties of the basic chemical element of the alloy.
## Acknowledgment
This research was funded by the Russian Science Foundation (project no. 19-12-00022).
| Alloy | \(E\), GPa | Alloy | \(E\), GPa |
| --- | --- | --- | --- |
| Cr\({}_{80}\)B\({}_{20}\) | 305 | Cu\({}_{80}\)Mg\({}_{20}\) | 60 |
| W\({}_{40}\)Mo\({}_{40}\)B\({}_{20}\) | 318 | Cu\({}_{60}\)Mo\({}_{40}\) | 154 |
| Ni\({}_{40}\)Cr\({}_{40}\)Co\({}_{20}\) | 58 | W\({}_{40}\)Ag\({}_{40}\)B\({}_{20}\) | 234 |
| Ni\({}_{40}\)Zr\({}_{40}\)Si\({}_{20}\) | 108 | Cr\({}_{40}\)B\({}_{40}\)Gd\({}_{20}\) | 217 |
| Ni\({}_{40}\)Mo\({}_{40}\)W\({}_{20}\) | 183 | Cr\({}_{40}\)Nb\({}_{40}\)La\({}_{20}\) | 196 |

Table 3: Young's modulus \(E\) predicted by the machine learning model for binary and ternary amorphous metal alloys. Here, the average accuracy is \(\sim 87\) %.
2306.02686 | A Deep Learning Neural Network Algorithm for Classification of Eclipsing
Binary Light Curves | We present an image classification algorithm using deep learning
convolutional neural network architecture, which classifies the morphologies of
eclipsing binary systems based on their light curves. The algorithm trains the
machine with light curve images generated from the observational data of
eclipsing binary stars in contact, detached and semi-detached morphologies,
whose light curves are provided by Kepler, ASAS and CALEB catalogues. The
structure of the architecture is explained, the parameters of the network
layers and the resulting metrics are discussed. Our results show that the
algorithm, which is selected among 132 neural network architectures, estimates
the morphological classes of an independent validation dataset, 705 real data,
with an accuracy of 92%. | Burak Ulas | 2023-06-05T08:27:40Z | http://arxiv.org/abs/2306.02686v1 | # A Deep Learning Neural Network Algorithm for Classification of Eclipsing Binary Light Curves
###### Abstract
We present an image classification algorithm using deep learning convolutional neural network architecture, which classifies the morphologies of eclipsing binary systems based on their light curves. The algorithm trains the machine with light curve images generated from the observational data of eclipsing binary stars in contact, detached and semi-detached morphologies, whose light curves are provided by Kepler, ASAS and CALEB catalogues. The structure of the architecture is explained, the parameters of the network layers and the resulting metrics are discussed. Our results show that the algorithm, which is selected among 132 neural network architectures, estimates the morphological classes of an independent validation dataset, 705 real data, with an accuracy of 92%.
## 1 Introduction
Deep learning techniques strengthen their position in various areas from art to science with every passing day. At present, countless machine and deep learning applications let researchers achieve faster and more precise results in their studies, as well as changing daily life. Convolutional neural networks, a specialized architecture among deep learning algorithms using neural networks, provide powerful methods for processes such as image recognition and classification. The prototype of these networks, the neocognitron, was proposed by Fukushima (1980). Lecun et al. (1998) introduced convolutional networks, remarking on their robustness to variability in 2D shapes. They also noted the advantage of fast learning in their handwriting experiment. Improvements in both hardware and software technology have allowed giant leaps in the usage and development of convolutional neural networks. For instance, the famous architecture AlexNet (Krizhevsky et al., 2012) classified 1.2 million images with an accuracy of about 85% in the ImageNet computer vision challenge. CoAtNet (Dai et al., 2021) also reached 91% accuracy by improving the model capacity and introducing hybrid models.
Eclipsing binary stars are basically stellar systems showing light variations in their light curves due to occultations of the companions' light. Their importance arises from being tools for deriving crucial stellar parameters precisely and, therefore, for allowing researchers to determine the structure of the stars with realistic estimations.
Guinan (1993) remarked that the analyses of their light curves enable us to estimate important parameters like mass, radius, luminosity and effective temperature as well as atmospheric properties. The systems appear in several morphological types, mainly contact, detached and semi-detached (Kopal, 1955; Bradstreet, 2005), which can correspond to various phases of their evolution. Thus, determining the morphological classes of these stars reveals information about related light parameters as well as stellar evolution in different circumstances, which opens the door to a better understanding of the universe, since binary and multiple systems are very common in our galaxy.
Researchers have made efforts to detect, fit and classify the light curves of binary systems using machine and deep learning algorithms. Wyrzykowski et al. (2003) proposed an algorithm using artificial neural networks and detected 2580 binary systems in the Large Magellanic Cloud based on the OGLE data (Udalski et al., 1998). Prsa et al. (2008) presented an artificial neural network trained with data points of 33235 light curve samples of detached eclipsing binaries for deriving some physical parameters of eclipsing binary stars selected from several databases. The authors remarked that the success rate of the algorithm is more than 90% for the OGLE and CALEB data sets. Kochoska et al. (2020) evaluated different fitting methods and concluded that machine learning techniques are useful tools for estimating the initial parameters of the binaries. The preliminary results of a systematic classification of the light curve morphologies of eclipsing binaries from TESS (Ricker et al., 2015) using a machine learning technique were published by Birky et al. (2020). Ulas (2020) suggested a deep learning image classification algorithm for the classification of light curve morphologies of ASAS-SN eclipsing binaries with an accuracy value of 92%. Lately, Cokina et al. (2021b) introduced a two-class (detached and over-contact) classification based on 491425 synthetic light curve data generated by the ELISa software (Cokina et al., 2021a). The authors accomplished 98% accuracy with their combined deep learning architecture.
Applications of machine learning techniques to astrophysical data are developing rapidly and promise novel results in the area. The potential of the subject motivated us to apply the method to eclipsing binary light curves. In the following section, we introduce the details and structure of the light curve data used in the study. Sec. 3 deals with the details of our code and the architecture of the convolutional neural network. The results are discussed and the concluding remarks are given in the last section.
## 2 Light Curve Data
The algorithm needs light curve images corresponding to certain morphological classes of binary stars to train the machine and perform the classification. Therefore, we collected real light curve data to construct light curve images of eclipsing binary stars. The data for eclipsing binary stars with known morphological types are provided by three main data sources in this study: the Kepler Eclipsing Binary Catalog (Kirk et al., 2016), the All Sky Automated Survey (ASAS, Pojmanski, 1997) and the Catalog and Atlas of Eclipsing Binaries (CALEB, former EBOLA, Bradstreet et al., 2004).
Kepler light curves were accessed through the Kepler Eclipsing Binary Catalog
(Kirk et al., 2016). The authors catalogued some basic properties of 2920 binary systems and indicated a parameter for their morphological classes. The parameter (\(c\)), introduced by Matijevic et al. (2012) using the locally linear embedding method, is a classification criterion for contact (\(0.7<c<0.8\)), detached (\(c<0.5\)) and semi-detached (\(0.5<c<0.7\)) binary systems. We collected 1913 binary systems (239 contact, 1253 detached and 421 semi-detached) with corresponding \(c\) parameters and used their phase and detrended flux values from the catalogue to construct the light curve images.
ASAS (Pojmanski, 1997) variable star database was also used to gather light curve data and morphological classes of the eclipsing binary stars. The variability class of the targets in the catalogue was determined by Pojmanski (2002) using an approach based on a multidimensional parametric space as well as an extended method using certain Fourier coefficients. We were able to collect data and morphological classes of 5907 binary systems through the database query service1. The phases for the light curves were calculated by adopting the times of minimum and orbital period values from the ACVS (ASAS Catalog of Variable Stars) list given by the author. The magnitudes were also converted to normalized fluxes by deriving the maximum magnitudes of the corresponding light curves of the systems.
Footnote 1: [http://www.astrouw.edu.pl/asas/](http://www.astrouw.edu.pl/asas/)
The CALEB data were retrieved via the catalogue's web page2. The author catalogued light curves and observational properties of 305 individual stars with their morphological classes. Since the catalogue contains light curves in several filters for many stars, the actual number of light curves exceeds the above-mentioned value. 1632 light curves from the database were included in our study. The light curve images were constructed by using the phase and flux values given by the catalogue.
Footnote 2: [http://caleb.eastern.edu](http://caleb.eastern.edu)
The \(256\times 256\) pixels light curve images were generated by plotting the data in the \(0.25-1.25\) phase interval from the mentioned databases. The total number of data was decreased after eliminating the light curves which (\(i\)) show very large scattering, (\(ii\)) have very few data points and (\(iii\)) do not resemble the light curve of an eclipsing binary system. The incorrect orbital period values, especially in the ASAS Catalog, were also responsible for the decrement in the number of light curves. Additionally, the entire dataset from the three databases was checked by eye to prevent misclassification. The data in each class were also balanced. Namely, we limited the number of light curve images in each morphological class to be equal; thus, it is one-third of the total number of data in a given database (e.g. 2286 light curves from ASAS contain 762 images from each individual class; contact, detached and semi-detached). We randomly chose 657, 2286 and 585 images from the final datasets of Kepler, ASAS and CALEB, respectively. Therefore, a total of 3528 light curves from all databases were selected for image classification. The number of light curves in the training set is 2823, while the validation set covers 705 images, about 20% of the whole dataset, following the Pareto principle (Moore, 1897; Juran and Godfrey, 1999). Fig. 1 is a Sankey diagram showing the relation among the morphologies, databases and datasets based on the number of light curves. Nine samples of data with different morphologies in the training set from the three databases are illustrated in Fig 2.
Figure 1: Sankey diagram showing the distribution of 3528 light curve image data in three nodes; morphology, database and type of the dataset. C, D and SD refer to contact, detached and semi-detached morphologies, respectively. **tr** indicates the training set, while **val** denotes the validation data. See text for details. Diagram created using SankeyMATIC\({}^{3}\).
Figure 2: Selected images from the training set generated by using the light curve data of KIC 12458133 (a), ASAS 065227-5524.6 (b), V572 Cen (c), KIC 06545018 (d), ASAS 075602-4454.8 (e), QX Car (f), KIC 03954798 (g), ASAS 101553-6012.9 (h) and TZ Lyr (i) from three different databases. C, D and SD refer to contact, detached and semi-detached morphologies, respectively. Figure created using gnuplot4.
## 3 Architecture of the Neural Network
A Python (Van Rossum and Drake, 2009) code6 was written to set up a deep learning neural network algorithm and thus train the machine to classify the light curve images generated from the light curve data. The code contains the import procedure of the main and auxiliary sources in the preamble, that is, the _NumPy_ package (Harris et al., 2020), the _os_ module (Van Rossum, 2020), the _pandas_ library (Wes McKinney, 2010; The Pandas Development Team, 2020), the _Random_ module (Van Rossum, 2020), the _TensorFlow_ platform (Abadi et al., 2015) and the _Keras_ API (Chollet et al., 2015), which are needed for the processes in the algorithm to work. It proceeds with seed fixing for NumPy, Python and TensorFlow to avoid randomness and make the results reproducible; yet randomness that may arise from the calculations on the Graphics Processing Unit (GPU) still remains. It must be noted that when running the code on a GPU, randomness may alter the results slightly from one run to another due to the parallel operations, as remarked by the Keras team. The problem can be solved by conducting the calculations on a Central Processing Unit (CPU); however, neural networks are computationally expensive, and it takes an extremely long time to achieve the results on a CPU.
Footnote 6: [https://github.com/burakulas/ebclass](https://github.com/burakulas/ebclass)
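A minimal sketch of this seed-fixing step is given below; the seed value of 42 is an illustrative assumption, not a detail taken from the published code:

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42  # assumed value; any fixed integer serves the purpose
os.environ["PYTHONHASHSEED"] = str(SEED)  # fix Python hash randomization
random.seed(SEED)         # Python's built-in random module
np.random.seed(SEED)      # NumPy random number generator
tf.random.set_seed(SEED)  # TensorFlow global random seed
```

As remarked above, fixing these seeds does not remove the nondeterminism of parallel GPU operations.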
The image data in three folders (C: contact, D: detached and SD: semi-detached) were indicated in the code and the total number of the data files inside those directories was commanded to be displayed on the output. The sizes of input images were also defined. We applied data augmentation by adding random Gaussian blur to the images in the training dataset. Augmentation enriches information about data by applying certain operations to a given dataset, and it helps prevent overfitting (Shorten and Khoshgoftaar, 2019). As some augmentation methods (e.g. flip and rotation) may cause changes in the shape of the light curve, we avoided employing further augmentation to our data. Training and validation directories were defined to direct the algorithm to the targeted data which was intended to be dealt with. The data generation process resizes the images to \(128\times 128\) pixels to save computing time and converts them to greyscale since colour is not a distinctive feature of our data. In data generation, the class_mode argument was categorical, which defines 2D _one-hot_ encoded labels that contain one _hot_ (1) among all other _cold_ (0) values (Harris and Harris, 2007).
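The data generation stage described above can be sketched as follows; the directory paths, the blur strength and the 1/255 rescaling are assumptions for illustration, while the \(128\times 128\) greyscale resizing, the categorical class mode, the batch size and the blur-only augmentation follow the description above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def random_blur(img):
    # random Gaussian blur as the only augmentation; flips and rotations
    # are avoided since they would change the light curve shape
    return gaussian_filter(img, sigma=np.random.uniform(0.0, 1.0))

train_gen = ImageDataGenerator(rescale=1.0 / 255,
                               preprocessing_function=random_blur)
val_gen = ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory(
    "data/train",  # assumed layout: C/, D/ and SD/ subdirectories
    target_size=(128, 128), color_mode="grayscale",
    class_mode="categorical", batch_size=32)
val_data = val_gen.flow_from_directory(
    "data/val", target_size=(128, 128), color_mode="grayscale",
    class_mode="categorical", batch_size=32, shuffle=False)
```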
The backbone of the algorithm is the sequential model, a stack of layers, which includes several hyperparameters forming the convolutional neural network architecture consisting of convolutional, pooling, fully connected and output layers. Convolutional layers use kernels with the size of \((3,3)\), referring to the size of the convolutional window (Chollet et al., 2015). The Rectified Linear Unit (ReLU) function, \(f(x)=\max(0,x)\), was selected as the activation function for the layers, which assigns \(0\) to the values smaller than or equal to zero (Goodfellow et al., 2016). The training is done by using the stochastic gradient descent algorithm (Ruder, 2016) to achieve a converged result, and ReLU provides relatively effortless optimization and calculation since it represents the nonlinearity with two linear pieces. The convolution operation was done by applying an \(L2\) regularization penalty (Cortes et al., 2009). The regularization term is:
\[\lambda\sum_{i=1}^{N}\omega_{i}^{2} \tag{1}\]
where \(\lambda\) (=0.001 in our case), \(\omega\) and \(N\) are the regularization parameter, weight and the number of features, respectively. The term adds the squared weights to the loss function and keeps the weights at relatively small values, thus preventing the model from overfitting and structural complexity. The padding hyperparameter was adjusted to same, which guarantees that the feature map is the same size as the input (Chollet et al., 2015). The stride value was left at its default, \((1,1)\), which means that the filter moves one pixel at a time. We also applied a max pooling operation (Christlein et al., 2019) with a pool size of \((2,2)\) between convolutional layers. It basically downsamples the input data by taking the maximum values within the pool size. The pooling also helps avoid overfitting and lowers the computation time. The above processes lead to the feature extraction, and the next stage, the flattening operation, is necessary since the final convolutional layer does not cover the entire dimension of the input image (Basha et al., 2020). Flattening converts the data into a 1-dimensional array, the shape which is mandatory for the algorithm to be able to perform the classification. A Dropout layer with a rate of 0.5 follows the flattening, which prevents the model from overfitting, as mentioned by Srivastava et al. (2014). The last steps of the convolutional neural network include fully connected layers where the classification takes place. All the input neurons are connected to the neurons in the present layer at this stage (Geron, 2017); therefore, the dimensionality of the upper Dense layer is equal to the filter number of the last convolutional layer. The dimension was set to 3 in the output layer of the network since we have three classes (contact, detached and semi-detached). The probabilistic distribution was determined using the softmax activation function as it is appropriate for multiclass classifications using the categorical cross-entropy loss function, which is:
\[L=-\sum_{i=1}^{n}p(x_{i})\log_{e}(q(x_{i})) \tag{2}\]
given by Zhou et al. (2021), where \(p(x_{i})\) and \(q(x_{i})\) denote real and predicted distributions, and \(n\) is the number of classes. Layers from the first convolutional to the second last Dense layer, generally called hidden layers, consist of 5 trainable and 6 nontrainable layers.
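Putting the above pieces together, the final architecture (reported in Sec. 4) can be reconstructed roughly as below; this is a sketch following the stated hyperparameters, not the published code, and the Dense width of 256 follows the remark that it equals the filter number of the last convolutional layer:

```python
from tensorflow.keras import layers, models, regularizers

reg = regularizers.l2(0.001)  # L2 penalty with lambda = 0.001
model = models.Sequential()
for filters in (32, 32, 64, 128, 256):  # filter numbers of the final network
    model.add(layers.Conv2D(filters, (3, 3), activation="relu",
                            padding="same", kernel_regularizer=reg))
    model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(256, activation="relu", kernel_regularizer=reg))
model.add(layers.Dense(3, activation="softmax"))  # C, D, SD
model.build(input_shape=(None, 128, 128, 1))
```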
The Adam algorithm (Kingma and Ba, 2015) was chosen as the optimizer, which uses the stochastic gradient descent method. Adam is appropriate for multiclass problems and can be adjusted with the learning rate hyperparameter. The learning rate basically refers to the step size (Murphy, 2012) in the convergence of the learning process. Tuning this parameter plays an important role in obtaining reliable results during calculations. Large values can result in straying from convergence, while small values may cause long training times. We control the learning process by monitoring the validation loss through the EarlyStopping callback, which is known to boost the performance of algorithms (Yao et al., 2007). The arguments of early stopping were arranged to stop training when no decrement in validation loss is observed in 20 consecutive epochs, and therefore, training was prevented from overfitting. Another callback, ModelCheckpoint, was also included in the code to save the best model, the one having the maximum validation accuracy, in a model file. We compiled our model using the cross-entropy loss, as mentioned before, based on accuracy evaluation. The final operation, fitting the model, was done by specifying the number of training samples per iteration (batch_size=32), the generators, and the callbacks remarked above. Additionally, in our code, we stored the number of filters in convolutional layers and the learning rate values in variables (\(\mathtt{l1}\), \(\mathtt{l2}\), \(\mathtt{l3}\), \(\mathtt{l4}\) and \(\mathtt{lrate}\)) to be able to test various architectures quicker, only by changing the set of variables.
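A sketch of the compilation and fitting stage under the hyperparameters stated above; the model file name and the epoch budget are assumptions (early stopping terminates training in practice), and the batch size is set in the generators above:

```python
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    EarlyStopping(monitor="val_loss", patience=20),
    ModelCheckpoint("best_model.h5", monitor="val_accuracy",
                    save_best_only=True),
]
history = model.fit(train_data, validation_data=val_data,
                    epochs=500, callbacks=callbacks)
```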
The aim of the neural network is to minimize the cross-entropy type loss function and reach the maximum accuracy value. Accuracy is a measure of how close the model predictions are to the ones from the real model in all classes, while loss, a cost function, measures whether the predictions give the correct values (Sammut and Webb, 2017). Specifically, in zero-one loss, 0 and 1 refer to correct and incorrect classifications, respectively. To achieve the best result based on the corresponding values, we employed a total of 132 different convolutional neural network architectures (Fig. 3) with three different learning rate values (\(10^{-3}\), \(10^{-4}\), \(10^{-5}\)), which were run using the NVidia T4 GPU accelerator provided by the Kaggle5 platform. Although higher accuracy values were reached by some other architectures, the optimum result was achieved in the network of 5 convolutional layers having 32, 32, 64, 128, 256 filters, respectively (see Sec. 4). The final accuracy and loss values for the models with validation accuracies larger than 0.9 are represented in Fig. 4. Models with the learning rate value of \(10^{-3}\) are not included in the figure, since their validation accuracy never exceeded 0.9.
Footnote 5: [https://www.kaggle.com](https://www.kaggle.com)
## 4 Results and Conclusion
As mentioned in the previous section, an accuracy of 92% was achieved in the architecture with 5 convolutional layers having 32, 32, 64, 128 and 256 filters. The learning rate was \(10^{-4}\), and it took 142 epochs for the architecture to achieve the result, whose training loss (0.233) is slightly lower than the validation loss (0.257), while the training and validation accuracies (0.937 and 0.936) are close. Thus, the model can be considered prevented from overfitting and underfitting compared to the other models shown in Fig. 4. A visualization of the final architecture is shown in Fig. 5. The learning curves, accuracy and loss values for both training and validation versus epoch, are plotted in Fig. 6. The trends of the curves imply a typical good fit. The KERAS model file containing the final architecture is provided through the GitHub repository6.
Footnote 6: [https://github.com/burakulas/ebclass](https://github.com/burakulas/ebclass)
A plot of the confusion matrix (Fig. 7) for the validation dataset reveals the details of the classification result. 217 contact, 228 detached and 201 semi-detached systems out of a total of 705 were correctly classified. The maximum misclassification was seen in semi-detached binaries; 22 of them were classified as detached systems. Selected light curves among the best and the worst classified data for each of the morphological classes are given in Fig. 8. True positive (TP), true negative (TN), false positive (FP) and false negative (FN) values calculated based on the confusion matrix are listed in Table 1.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & _TP_ & _TN_ & _FP_ & _FN_ \\ \hline C & 217 & 458 & 12 & 18 \\ D & 228 & 448 & 22 & 7 \\ SD & 201 & 445 & 25 & 34 \\ \hline \hline \end{tabular}
\end{table}
Table 1: True positive (_TP_), true negative (_TN_), false positive (_FP_) and false negative (_FN_) values of classification for validation dataset covering 705 image data.
Figure 3: Schematic representation of 44 different architectures with their filter numbers which are employed with three specific learning rate values (\(10^{-3}\), \(10^{-4}\), \(10^{-5}\)) separately, and correspond to 132 different networks.
Figure 4: Final Accuracy (a and b) and Loss (c and d) values for different architectures whose validation accuracies are larger than 0.9. Filter orders refer to the filter numbers of convolutional layers. Blue and light blue refer to the training and validation datasets with the learning rate value of \(10^{-4}\). Red and orange bars represent the training and validation results when the learning rate was set to \(10^{-5}\).
Figure 5: Visualization of the final neural network architecture. Orange, red, green, blue and black colors refer to Convolutional, Max pooling, Flatten, Dropout and Dense layers. Figure created using visualkeras for Keras/TensorFlow (Gavrikov, 2020).
The classification report, indicating metrics for the classification of the validation dataset, is shown in Table 2. Precision is the ratio of true positives to the total number of true and false positives (\(TP/(TP+FP)\)), a measure of how trustworthy the model is in predicting positive samples (Ting, 2010). Recall is defined as the ratio of the number of correctly classified positives to the total number of positives, \(TP/(TP+FN)\), and it focuses on positive samples. The F\({}_{1}\) score is the harmonic mean of precision and recall. In addition to these metrics, the subset accuracy of the classification, calculated using the accuracy_score function of the scikit-learn library (Pedregosa et al., 2011), is 92%. This is simply the percentage of correctly classified samples (Tsoumakas and Vlahavas, 2007):
\[\frac{1}{|D|}\sum_{i=1}^{|D|}I(Z_{i}=Y_{i}) \tag{3}\]
where \(Y_{i}\) and \(Z_{i}\) are actual and predicted labels, while \(|D|\) is the number of multilabel examples and \(I\) takes the value of 0 or 1 for false or true statements, respectively.
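These metrics can be reproduced from the trained model with a short sketch; it assumes the validation generator was created with shuffle=False so that val_data.classes aligns with the prediction order:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

y_prob = model.predict(val_data)       # class probabilities, shape (705, 3)
y_pred = np.argmax(y_prob, axis=1)     # predicted class indices
y_true = val_data.classes              # actual class indices

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=["C", "D", "SD"], digits=3))
print("subset accuracy:", accuracy_score(y_true, y_pred))
```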
Furthermore, it is worth looking at the filters and the output of the convolutional layers (feature maps) of the final architecture to see how the machine sees and processes the light curves through the network. As an example, we demonstrate the feature maps for the light curve of KIC 03954798 as processed along with the filters of the first and the fourth convolutional layers in Fig. 9 and Fig. 10. For the human eye, the deeper the layer is, the harder the light curve perception is.
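Feature maps such as those in Figs. 9 and 10 can be extracted with a small sketch like the one below; the layer index and the variable img (one preprocessed image of shape (1, 128, 128, 1)) are assumptions for illustration:

```python
from tensorflow.keras.models import Model

# truncate the trained model at the first convolutional layer
feature_extractor = Model(inputs=model.inputs,
                          outputs=model.layers[0].output)
feature_maps = feature_extractor.predict(img)  # shape (1, 128, 128, 32)
```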
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Precision & Recall & F\({}_{1}\) score & number of data \\ \hline C & 0.948 & 0.923 & 0.935 & 235 \\ D & 0.912 & 0.970 & 0.940 & 235 \\ SD & 0.889 & 0.855 & 0.872 & 235 \\ \hline Average & 0.916 & 0.916 & 0.916 & 705 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification report. C, D and SD refer to contact, detached and semi-detached systems, respectively.
Figure 6: Learning curves, variation of training (black) and validation (red) accuracy and loss with epoch.
Figure 7: Confusion matrix for validation dataset (705 images) as obtained by using _metrics_ module of Scikit-learn (Pedregosa et al., 2011) based on Keras model file containing the final network architecture. C, D and SD refer to contact, detached and semi-detached morphologies, respectively.
Figure 8: Selection from the best and the worst classified light curves with their estimated morphological classes. The algorithm correctly classified KIC 10723143 (a), BW Aqr (b) and KIC 06852488 (c) with the probability of 99.9% as contact, detached and semi-detached, respectively. The semi-detached binaries ASAS 125523-7322.2 (d) and KIC 10191056 (e) were estimated as contact and detached systems with 99.9% probability. KIC 03530668 (f) was also misclassified as 93.1% semi-detached, while its actual class is detached. C, D and SD refer to contact, detached and semi-detached morphologies, respectively.
Besides fixing the seeds in our code for reproducibility, following version 2.0 of the reproducibility checklist for machine learning given by Pineau et al. (2020), we addressed the details of our model in the previous section. The algorithm was explained in detail with the necessary mathematical descriptions. The application platform and infrastructure used were also denoted. The sample size of the data was given, the number of examples in the training and validation sets was noted, and the data preparation process was described (Sec. 2). The dataset and the code executing the classification are downloadable6. We defined and gave the metrics of the classification and presented the classification report, which refers to the quality of the classification.
Footnote 6: [https://github.com/burakulas/ebclass](https://github.com/burakulas/ebclass)
The scientific importance of our neural network algorithm arises from its providing a way to distinguish the morphological types of eclipsing binary systems with high accuracy using only their light curve images. It is also much faster than conventional methods conducting the same process, such as a workflow covering the trials of at least two morphologies in a widely known light curve analysis software and comparing the results to choose the best one. The determination of the morphological class is vital in the analysis of an eclipsing binary light curve in order to yield physically meaningful results; therefore, our algorithm can be applied to a light curve image before its analysis to establish a rapid and reliable morphological assumption for the light curve solution.
When it comes to comparing our results to other studies using machine learning algorithms related to the morphologies, our accuracy is found to be close to that of investigations in the literature. Although it did not deal with the morphology alone, the accuracy of the three-layer artificial neural network by Prsa et al. (2008), which focused on detached morphological classification, was higher than 90%. An image classification algorithm proposed by Ulas (2020) also reached an accuracy value of 91%. Cokina et al. (2021b) achieved 98% accuracy through their combined classifier, which was trained with synthetic light curve data constructed using the ELISa software for detached and overcontact morphologies. We did not run our code on the images generated from the light curves of the above-mentioned studies, since a classification owes its resulting accuracy to properly collected training data as well as the architecture. A complete change in the training set most probably requires modification of the network architecture and hyperparameters to achieve the same accuracy, over 90%.
The accurate information on the classes of training samples plays a vital role in the quality of the results. Therefore, as future work, we plan to improve our algorithm by collecting light curve images with more accurate information on their types, namely the light curves of the systems having the morphological classes determined by analyses through human-controlled software, since hands-on modelling is the finest approach, as Kochoska et al. (2020) concluded. This is projected to be done by a detailed survey of the literature for individual analyses of eclipsing binary light curves. Additionally, we publish a data collection platform6 to which researchers from the community can upload morphological information and light curve data of human-confirmed eclipsing binary stars. In this way, the number of training and validation samples, another crucial parameter, is also aimed to be increased. The increasing number of space telescope data of binary stars will boost the number of samples without a doubt, as long as morphological classes are precisely determined.
Figure 9: 32 \((3\times 3)\) filters of the first convolutional layer (upper panel) and 32 feature maps of the light curve image of KIC 03954798 as output from the first convolutional layer with corresponding filters (lower panel). Figure created using Matplotlib (Hunter, 2007).
Figure 10: Same as the lower panel of Fig. 9, but for the fourth convolutional layer with 128 filters. Note that human perception for the light curve is almost lost.
Finally, our code and collected data are public; therefore, they are open to being improved by tuning the hyperparameters or altering the architecture by researchers from the area.
## Acknowledgements
This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## Data Availability
The datasets generated and analyzed during the current study are available in the GitHub repository, [https://github.com/burakulas/ebclass](https://github.com/burakulas/ebclass).
id: 2306.12495
title: Verifying Global Neural Network Specifications using Hyperproperties
authors: David Boetius, Stefan Leue
published_date: 2023-06-21T18:08:55Z
link: [http://arxiv.org/abs/2306.12495v1](http://arxiv.org/abs/2306.12495v1)
abstract: Current approaches to neural network verification focus on specifications that target small regions around known input data points, such as local robustness. Thus, using these approaches, we can not obtain guarantees for inputs that are not close to known inputs. Yet, it is highly likely that a neural network will encounter such truly unseen inputs during its application. We study global specifications that - when satisfied - provide guarantees for all potential inputs. We introduce a hyperproperty formalism that allows for expressing global specifications such as monotonicity, Lipschitz continuity, global robustness, and dependency fairness. Our formalism enables verifying global specifications using existing neural network verification approaches by leveraging capabilities for verifying general computational graphs. Thereby, we extend the scope of guarantees that can be provided using existing methods. Recent success in verifying specific global specifications shows that attaining strong guarantees for all potential data points is feasible.

# Verifying Global Neural Network Specifications using Hyperproperties
###### Abstract
Current approaches to neural network verification focus on specifications that target small regions around known input data points, such as local robustness. Thus, using these approaches, we can not obtain guarantees for inputs that are not close to known inputs. Yet, it is highly likely that a neural network will encounter such truly unseen inputs during its application. We study global specifications that -- when satisfied -- provide guarantees for all potential inputs. We introduce a hyperproperty formalism that allows for expressing global specifications such as monotonicity, Lipschitz continuity, global robustness, and dependency fairness. Our formalism enables verifying global specifications using existing neural network verification approaches by leveraging capabilities for verifying general computational graphs. Thereby, we extend the scope of guarantees that can be provided using existing methods. Recent success in verifying specific global specifications shows that attaining strong guarantees for all potential data points is feasible.
Keywords: Neural Network Verification · Safe Deep Learning · Hyperproperties · General Computational Graphs.
## 1 Introduction
Deep learning is a game changer for research, education, business and beyond [9, 11]. Yet, we remain unable to provide strong guarantees on the behaviour of neural networks. In particular, while neural network verification in principle can provide strong guarantees, current approaches almost exclusively consider _local_ specifications [1, 14, 20, 25, 32, 38] that only apply to small regions around known input data points. This means that the currently widely-used specifications only sparsely cover the input space, providing no guarantees for inputs that are not close to known inputs. In contrast, _global_ specifications cover the entire input space.
We propose a specification formalism for neural networks that encompasses a rich class of global specifications while enabling verification using existing verifier technology. In particular, we show how monotonicity, Lipschitz continuity, two notions of global robustness [21, 24], and dependency fairness [15, 35] can be expressed using our formalism.
As noted in [30], global specifications such as monotonicity and global robustness are hyperproperties [8]. In contrast to regular properties that only consider one network execution at a time, hyperproperties relate executions for several inputs of the same neural network to each other. This allows us, for example, to express a naive notion of global robustness stating that an arbitrary input and a second input that lies close to it need to receive the same classification.
A central aspect of our formalism is that we use auxiliary neural networks to define input sets and output sets. By leveraging capabilities for verifying general computational graphs [37], the auxiliary networks, together with _self-composition_[8], allow for verifying hyperproperties using existing neural network verification approaches. Here, the role of the auxiliary neural networks is to make complex hyperproperty input and output sets accessible to existing verification approaches. Concretely, we design an auxiliary neural network to generate the tuples of inputs that need to be compared to determine whether a hyperproperty is satisfied. Another auxiliary neural network detects whether the outputs a network produces for these inputs satisfy the output constraint. For the naive notion of global robustness, this means that we derive a neural network that generates arbitrary pairs of inputs that are close to each other and another neural network that detects whether two outputs represent the same classification. Importantly, these auxiliary neural networks _exactly_ capture the targeted input and output constraints using standard neural network components.
Recent success in verifying global robustness [36] and global individual fairness [35] demonstrates that verifying global specifications is feasible. Our formalism is a general framework for global specifications targeting existing verifiers [14, 22, 32, 38]. While our formalism does not alleviate the need for specialised techniques, such as the Interleaving Twin Encoding [36], it allows for
1. Comparing general-purpose verifiers with specialised verifiers for specific global specifications and
2. Applying general-purpose verifiers to global specifications for which no specialised verifier exists.
## 2 Preliminaries
We consider verifying whether a neural network conforms to a global specification. Neural networks are computational graphs [16]. Global specifications are formalised using hyperproperties [8, 30].
Definition 1 (Computational Graph): A _computational graph_ is a directed acyclic graph with computations \((V,E,h)\), where \(V=\{1,\ldots,L\}\) with \(L\in\mathbb{N}\) is the set of nodes, \(E\subseteq V\times V\) is the edge relation and \(h=(h_{1},\ldots,h_{L})\) is the computations tuple. Let \(\mathrm{degin}:V\rightarrow\mathbb{N}\) denote the in-degree. The computation of node \(i\in V\) is \(h_{i}:\mathbb{R}^{m_{k_{1}}}\times\cdots\times\mathbb{R}^{m_{k_{\mathrm{degin}( i)}}}\rightarrow\mathbb{R}^{m_{i}}\), where \(m_{i}\in\mathbb{N}\) is the output dimension of node \(i\) and \(k_{1},\ldots,k_{\mathrm{degin}(i)}\in\{k\mid(k,i)\in E\}\) with \(k_{1}\leq\cdots\leq k_{\mathrm{degin}(i)}\) are the direct predecessors of \(i\).
Definition 2 (Neural Network): A _neural network \(\mathrm{net}_{\boldsymbol{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\)_, \(n,m\in\mathbb{N}\) is a composition of affine transformations and non-affine activation functions
represented by a computational graph \((V,E,h)\) with a source \(i\) and a single sink \(j\), such that \(h_{i}:\{\emptyset\}\to\mathbb{R}^{n}\) and \(h_{j}:\mathbb{R}^{m_{k_{1}}}\times\cdots\times\mathbb{R}^{m_{k_{\mathrm{design} (j)}}}\to\mathbb{R}^{m}\). The source \(i\) is the _input_ of \(\mathrm{net}_{\boldsymbol{\theta}}\). The remaining sources of the computational graph together form the _parameters_\(\boldsymbol{\theta}\) of \(\mathrm{net}_{\boldsymbol{\theta}}\). The sink \(j\) is the _output_ of \(\mathrm{net}_{\boldsymbol{\theta}}\). For classification tasks, \(\arg\max_{j=1}^{m}\mathrm{net}_{\boldsymbol{\theta}}(\boldsymbol{x})\) is the class \(\mathrm{net}_{\boldsymbol{\theta}}\) assigns to an input \(\boldsymbol{x}\in\mathbb{R}^{n}\)._
Figure 1 contains the computational graph of a residual unit [19] as an example. This graph defines the steps necessary for computing the output of a residual unit, given an input. It also allows for computing gradients and verifying a residual unit. Assume we want to compute the outputs of a neural network for an input \(\boldsymbol{x}\in\mathbb{R}^{n}\). Also, assume we have a parameter assignment \(\boldsymbol{\theta}\). We assign \(\boldsymbol{x}\) to the network input node \(i\) and the corresponding parameter values to the remaining sources. Now, computing the outputs corresponds to a forward walk over the computational graph, propagating the computation results of each node to its direct successors. Similarly, a backwards walk from sinks to sources allows for computing the gradients of the sink with respect to each source (backpropagation). Forward and backwards walks also allow for computing certified lower and upper bounds on the network output that can be used for verifying the neural network [37].
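As a minimal illustration of such a forward walk (the node and edge encodings here are our own assumptions, not part of the definition):

```python
def evaluate_graph(order, predecessors, computations, source_values):
    """Forward walk over a computational graph (cf. Definition 1).

    order: node ids in topological order
    predecessors: node id -> list of direct predecessors, sorted ascending
    computations: node id -> callable h_i
    source_values: node id -> value for every source (input and parameters)
    """
    values = dict(source_values)
    for i in order:
        if i not in values:  # non-source node: apply its computation h_i
            values[i] = computations[i](*(values[k] for k in predecessors[i]))
    return values
```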
Verifying a neural network means that we want to automatically prove or disprove whether the neural network satisfies a _specification_. A specification is a set of properties.
Definition 3 (Property): A _property_\(\varphi=(\mathcal{X}_{\varphi},\mathcal{Y}_{\varphi})\) is a tuple of an _input set_\(\mathcal{X}_{\varphi}\subseteq\mathbb{R}^{n}\) and an _output set_\(\mathcal{Y}_{\varphi}\subseteq\mathbb{R}^{m}\), \(n,m\in\mathbb{N}\). We write \(\mathrm{net}_{\boldsymbol{\theta}}\vDash\varphi\) when a neural network \(\mathrm{net}_{\boldsymbol{\theta}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\)_satisfies the property \(\varphi\). Specifically,
\[\mathrm{net}_{\boldsymbol{\theta}}\vDash\varphi\Leftrightarrow\forall \boldsymbol{x}\in\mathcal{X}_{\varphi}:\mathrm{net}_{\boldsymbol{\theta}}( \boldsymbol{x})\in\mathcal{Y}_{\varphi}.\]
We call an input \(\boldsymbol{x}\in\mathcal{X}_{\varphi}\) for which \(\mathrm{net}_{\boldsymbol{\theta}}(\boldsymbol{x})\notin\mathcal{Y}_{\varphi}\) a _counterexample_.
A verifier determines whether a neural network \(\mathrm{net}_{\boldsymbol{\theta}}\) satisfies a property \(\varphi\). We require verifiers to **1.** report property satisfaction if and only if the property
Figure 1: **The computational graph of a residual unit [19].** In this figure, \(*\) denotes convolution, / denotes batch normalisation, \([\bullet]^{+}\) denotes ReLU, and \(+\) denotes addition. We use pink nodes \(\boxed\) for inputs, yellow \(\boxed\) for parameters, and blue \(\boxed\) for outputs.
is indeed satisfied (_soundness_) and **2.** to terminate (_completeness_). In this paper, we only require verifiers to support bounded hyperrectangles as property input sets and the non-negative real numbers as output set. Practically, verifiers can also handle more complicated input and output sets.
For formalising global specifications, we make use of _hyperproperties_. Hyperproperties extend properties by considering multiple input variables and input-dependent output sets.
Definition 4 (Hyperproperty): A _hyperproperty_\(\psi=(\mathcal{X}_{\psi},\mathcal{Y}_{\psi})\) is a tuple of a _multi-variable input set_\(\mathcal{X}_{\psi}\subseteq\left(\mathbb{R}^{n}\right)^{v}\) and an _input-dependent output set
\[\mathcal{Y}_{\psi}\subseteq\underbrace{\mathbb{R}^{n}\times\cdots\times \mathbb{R}^{n}}_{v\ times}\times\underbrace{\mathbb{R}^{m}\times\cdots\times \mathbb{R}^{m}}_{v\ times},\]
where \(n,m,v\in\mathbb{N}\). For a neural network \(\mathrm{net}_{\boldsymbol{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), we write \(\mathrm{net}_{\boldsymbol{\theta}}\vDash\psi\) if
\[\forall\left(\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(v)}\right)\in\mathcal{X}_{\psi}: \Big{(}\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(v)},\mathrm{net}_{ \boldsymbol{\theta}}\Big{(}\boldsymbol{x}^{(1)}\Big{)},\ldots,\mathrm{net}_{ \boldsymbol{\theta}}\Big{(}\boldsymbol{x}^{(v)}\Big{)}\Big{)}\in\mathcal{Y}_{ \psi}.\]
## 3 Formalising Global Specifications using Hyperproperties
Global specifications allow for expressing desired behaviour for the entire input domain of a neural network while local specifications only apply to small regions around known inputs. This property of local specifications brings with it that we have a fixed reference point for each property in a local specification. We typically do not have such a fixed reference point for global specifications, since they apply to the entire input domain.
For example, a local robustness property expresses that a classifier assigns the same class to all inputs that lie within a small \(L_{p}\)-ball \(\mathcal{B}_{p}(\boldsymbol{x})\) around a fixed input point \(\boldsymbol{x}\). Because we have this fixed input \(\boldsymbol{x}\) as a reference point, we know the class that should be assigned to all the inputs in \(\mathcal{B}_{p}(\boldsymbol{x})\). Knowing this class allows for judging whether an input \(\boldsymbol{x}^{\prime}\in\mathcal{B}_{p}(\boldsymbol{x})\) is a counterexample to the local robustness property by executing the network once for \(\boldsymbol{x}^{\prime}\).
If we now look at global robustness, we find that it does not suffice to consider a single execution of a network to check for specification violations. As the inputs now are arbitrary inputs from the entire input domain, we can not determine whether robustness is violated by looking only at the output for one input \(\boldsymbol{x}^{(1)}\). Instead, we need to find another input \(\boldsymbol{x}^{(2)}\in\mathcal{B}_{p}\big{(}\boldsymbol{x}^{(1)}\big{)}\) such that the classes that a network assigns to \(\boldsymbol{x}^{(1)}\) and \(\boldsymbol{x}^{(2)}\) do not match. Only in pair, these inputs form a counterexample. The necessity to compare outputs for multiple inputs requires us to adopt hyperproperties for formalising global specifications.
If we look more closely at our example of global robustness, we find that requiring the points in all \(L_{p}\)-balls to have the same output forces the network to produce the same output for all inputs. This means that we also have to consider more complicated output sets for global specifications. In this case, we either need to allow small changes in class scores (Example 2) or devise special
rules for points close to the decision boundary (Example 3). Furthermore, if we express global robustness as Lipschitz continuity [7] (Example 4), our output set needs to be _input-dependent_. This means that it does not suffice to only compare network outputs with network outputs to determine whether a specification is violated. Instead, we also need to take the inputs that lead to the observed outputs into account.
For the reasons outlined above, we consider hyperproperties with multi-variable input sets and input-dependent output sets as in Definition 4 for formalising global specifications. To leverage existing neural network verification approaches for verifying these hyperproperties, we express the multi-variable input set and the input-dependent output set using auxiliary neural networks.
Definition 5 (Neural-Network-Defined Hyperproperty): Let \(n,m,v,w\in\mathbb{N}\). A _Neural-Network-Defined Hyperproperty (NNDH)_ is a hyperproperty \(\psi=(\mathcal{X}_{\psi},\mathcal{Y}_{\psi})\), where \(\mathcal{X}_{\psi}=\{\mathrm{net}_{\mathrm{In}}(\mathbf{w})\mid\mathbf{w}\in\mathcal{ W}\}\) and
\[\mathcal{Y}_{\psi}=\left\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(v)},\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(v)}\ \Big{|}\,\mathrm{net}_{\mathrm{Sat}}\Big{(}\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(v)}, \mathbf{y}^{(1)},\ldots,\mathbf{y}^{(v)}\Big{)}\geq 0\right.\right\},\]
where \(\mathcal{W}\subset\mathbb{R}^{w}\) is a bounded hyperrectangle and \(\mathrm{net}_{\mathrm{In}}:\mathbb{R}^{w}\to\left(\mathbb{R}^{n}\right)^{v}\) and \(\mathrm{net}_{\mathrm{Sat}}:\underbrace{\mathbb{R}^{n}\times\cdots\times \mathbb{R}^{n}}_{v\ times}\times\underbrace{\mathbb{R}^{m}\times\cdots\times \mathbb{R}^{m}}_{v\ times}\to\mathbb{R}\) are neural networks.
We can think of the neural network \(\mathrm{net}_{\mathrm{In}}\) as generating the multi-variable input set from a single-variable hyperrectangular input space. The neural network \(\mathrm{net}_{\mathrm{Sat}}\) serves as a _satisfaction function_[4] for the output set. A satisfaction function is non-negative if and only if an output -- or, in this case, a tuple of inputs and outputs -- lies within the output set of a property or hyperproperty.
It is central to Definition 5 that \(\mathrm{net}_{\mathrm{In}}\) and \(\mathrm{net}_{\mathrm{Sat}}\) do not _approximate_ our desired input and output set, but express them _exactly_. Usually, we train neural networks to approximate a potentially unknown relationship between inputs and outputs. The neural networks \(\mathrm{net}_{\mathrm{In}}\) and \(\mathrm{net}_{\mathrm{Sat}}\), however, are not trained but carefully constructed to generate our desired input and output set. As such, these auxiliary neural networks are relatively simple structures in this paper. Their main purpose is to make hyperproperties accessible for existing neural network verification approaches.
We now provide several concrete examples of NNDHs including concrete \(\mathrm{net}_{\mathrm{In}}\) and \(\mathrm{net}_{\mathrm{Sat}}\) networks. We formalise global monotonicity, two notions of global robustness [21, 24], Lipschitz continuity, and dependency fairness [15, 35] as NNDHs. Afterwards, we show how NNDHs can be verified using existing neural network verifiers that can handle general computational graphs.
In the following, let \(\mathcal{X}\subset\mathbb{R}^{n}\) be the bounded hyperrectangular input domain of the neural network under consideration. This domain is determined by the target application. In the case of image classification, for example, \(\mathcal{X}\) would be the (normalised) pixel space.
Example 1 (Global Monotonicity): Monotonicity is a desired behaviour of a neural network in applications from medicine to aviation [33]. Here, we formalise
that the output \(j\in\{1,\ldots,m\}\) may not _increase_ when input \(i\in\{1,\ldots,n\}\) increases. Non-decreasing monotonicity can be formalised analogously. We formalise global monotonicity as a hyperproperty \(\psi_{M}=(\mathcal{X}_{\psi_{M}},\mathcal{Y}_{\psi_{M}})\), where the input set \(\mathcal{X}_{\psi_{M}}\subseteq\mathcal{X}\times\mathcal{X}\) and output set \(\mathcal{Y}_{\psi_{M}}\subset\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{ R}^{m}\times\mathbb{R}^{m}\) are
\[\mathcal{X}_{\psi_{M}} =\left\{\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)}\;\left|\; \boldsymbol{x}^{(2)}_{i}\geq\boldsymbol{x}^{(1)}_{i}\right.\right\}\] \[\mathcal{Y}_{\psi_{M}} =\left\{\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)},\boldsymbol{y} ^{(1)},\boldsymbol{y}^{(2)}\;\left|\;\boldsymbol{y}^{(2)}_{j}\leq\boldsymbol{y }^{(1)}_{j}\right.\right\}.\]
To generate these sets using neural networks to obtain an NNDH, we define
\[\mathcal{W}_{M}=\left\{\boldsymbol{x}^{(1)}_{1},\ldots,\boldsymbol{x}^{(1)}_ {n},\boldsymbol{x}^{(2)}_{1},\ldots,\boldsymbol{x}^{(2)}_{n}\;\left|\; \boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)}\in\mathcal{X}\right.\right\},\]
\[\text{net}_{\text{In}_{M}}\Big{(}\boldsymbol{x}^{(1)}_{1},\ldots, \boldsymbol{x}^{(1)}_{n},\boldsymbol{x}^{(2)}_{1},\ldots,\boldsymbol{x}^{(2)} _{n}\Big{)}=\Big{(}\boldsymbol{x}^{\prime(1)},\boldsymbol{x}^{\prime(2)} \Big{)}\,,\]
\[\text{where }\boldsymbol{x}^{\prime(1)}=\Big{(}\boldsymbol{x}^{(1)}_{1}, \ldots,\min\Big{(}\boldsymbol{x}^{(1)}_{i},\boldsymbol{x}^{(2)}_{i}\Big{)}, \ldots,\boldsymbol{x}^{(1)}_{n}\Big{)}\]
\[\boldsymbol{x}^{\prime(2)}=\Big{(}\boldsymbol{x}^{(2)}_{1},\ldots,\max\Big{(} \boldsymbol{x}^{(1)}_{i},\boldsymbol{x}^{(2)}_{i}\Big{)},\ldots,\boldsymbol{x }^{(2)}_{n}\Big{)}\,,\]
and
\[\text{net}_{\text{Sat}_{M}}\Big{(}\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)}, \boldsymbol{y}^{(1)},\boldsymbol{y}^{(2)}\Big{)}=\boldsymbol{y}^{(1)}_{j}- \boldsymbol{y}^{(2)}_{j}.\]
The function \(\text{net}_{\text{Sat}_{M}}\) is a neural network with a single affine layer. Concerning \(\text{net}_{\text{In}_{M}}\), we can compute max either using a maxpooling layer or by leveraging \(\forall a,b\in\mathbb{R}:\max(a,b)=\left[a-b\right]^{+}+b\) where \(\left[\bullet\right]^{+}=\max(\bullet,0)\) is ReLU. Furthermore, since \(\forall a,b\in\mathbb{R}:\min(a,b)=-\max(-a,-b)\), we can also compute min in a neural network. Therefore, \(\mathcal{W}_{M}\), \(\text{net}_{\text{In}_{M}}\) and \(\text{net}_{\text{Sat}_{M}}\) together form an NNDH having \(\mathcal{X}_{\psi_{M}}\) as its input set and \(\mathcal{Y}_{\psi_{M}}\) as its output set.
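For concreteness, these ReLU-based constructions can be written down directly; the sketch below uses PyTorch, which is an assumption of ours and not prescribed by the text:

```python
import torch

def relu_max(a, b):
    # max(a, b) = [a - b]^+ + b, exactly representable with a single ReLU
    return torch.relu(a - b) + b

def relu_min(a, b):
    # min(a, b) = -max(-a, -b)
    return -relu_max(-a, -b)
```

Both functions are exact rather than approximate, which matches the requirement that \(\mathrm{net}_{\mathrm{In}}\) and \(\mathrm{net}_{\mathrm{Sat}}\) express the input and output sets exactly.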
Example 2 (Global \(L_{\infty}\) Robustness following [21]): Neural networks are susceptible to adversarial attacks where slightly modifying the input allows an attacker to control the output produced by a neural network [34]. This is a safety concern, for example, for traffic sign recognition [12] and biometric authentication using face recognition [31]. In this example, we express \(L_{\infty}\) global robustness according to [21] as an NNDH. This specification limits how much the output of a neural network may change for inputs that lie within an \(L_{\infty}\)-ball of a certain size. Let \(\delta,\varepsilon\in\mathbb{R}_{>0}\) be the radius of the \(L_{\infty}\)-ball and the permitted magnitude of change, respectively. Let
\[\mathcal{W}_{R} =\{\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n},\boldsymbol{ \tau}_{1},\ldots,\boldsymbol{\tau}_{n}\mid\boldsymbol{x}\in\mathcal{X}, \boldsymbol{\tau}\in\left[-\delta,\delta\right]^{n}\}\] \[\text{net}_{\text{In}_{R}}(\boldsymbol{x}_{1},\ldots,\boldsymbol {x}_{n},\boldsymbol{\tau}_{1},\ldots,\boldsymbol{\tau}_{n}) =(\boldsymbol{x},\text{project}_{\mathcal{X}}(\boldsymbol{x}+ \boldsymbol{\tau}))\] \[\text{net}_{\text{Sat}_{R1}}\Big{(}\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)},\boldsymbol{y}^{(1)},\boldsymbol{y}^{(2)}\Big{)} =\varepsilon-\left\|\boldsymbol{y}^{(1)}-\boldsymbol{y}^{(2)} \right\|_{\infty}=\varepsilon-\max_{j=1}^{m}\left|\boldsymbol{y}^{(1)}_{j}- \boldsymbol{y}^{(2)}_{j}\right|,\]
where \(\text{project}_{\mathcal{X}}\) computes the projection into the hyperrectangle \(\mathcal{X}\). Projecting a point \(\boldsymbol{x}\) into a hyperrectangle corresponds to computing the maximum between each coordinate and the lower boundary of the hyperrectangle and the minimum between each coordinate and the upper boundary of the hyperrectangle. As we show in Example 1, we can compute minima and maxima in a neural network. Similarly, \(\mathrm{net}_{\mathrm{Sat}_{R1}}\) computes a maximum and absolute values, which we can compute by leveraging \(\forall a\in\mathbb{R}:|a|=\max(a,-a)\). Overall, \(\mathcal{W}_{R}\), \(\mathrm{net}_{\mathrm{In}_{R}}\), and \(\mathrm{net}_{\mathrm{Sat}_{R1}}\) define an NNDH \(\psi_{R1}=(\mathcal{X}_{\psi_{R}},\mathcal{Y}_{\psi_{R1}})\), where \(\mathcal{X}_{\psi_{R}}\subset\mathcal{X}\times\mathcal{X}\) and \(\mathcal{Y}_{\psi_{R1}}\subset\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{ R}^{m}\times\mathbb{R}^{m}\), with
\[\mathcal{X}_{\psi_{R}} =\left\{\mathbf{x}^{(1)},\mathbf{x}^{(2)}\ \Big{|}\ \Big{\|}\mathbf{x}^{(1)}-\mathbf{x}^{(2)}\Big{\|}_{\infty}\leq\delta\right\}\] \[\mathcal{Y}_{\psi_{R1}} =\left\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\mathbf{y}^{(1)},\mathbf{y}^{(2)}\ \Big{|}\ \Big{\|}\mathbf{y}^{(1)}-\mathbf{y}^{(2)}\Big{\|}_{\infty}\leq\varepsilon\right\}.\]
This captures that a network is globally robust as defined in [21].
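Using the min/max sketch from Example 1, the projection and the input network \(\mathrm{net}_{\mathrm{In}_{R}}\) can be sketched as follows; lower and upper stand for the boundary vectors of \(\mathcal{X}\), which is our own naming:

```python
def project_box(x, lower, upper):
    # coordinate-wise projection onto the hyperrectangle [lower, upper]:
    # max with the lower boundary, then min with the upper boundary
    return relu_min(relu_max(x, lower), upper)

def net_in_R(x, tau, lower, upper):
    # net_In_R from Example 2: the pair (x, project_X(x + tau))
    return x, project_box(x + tau, lower, upper)
```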
Example 3 (Global \(L_{\infty}\) Robustness following [24]): We also present an alternative definition of global robustness using an extra class representing non-robustness at an input point [24]. This definition may be more desirable in some applications, as it still permits non-robustness for noise-only _rubbish class_ inputs [17] that lie off the data manifold. Let \(\delta\in\mathbb{R}_{>0}\) be as in Example 2. Assume the classifier network we are studying produces an additional output \(\bot=m+1\) that shall indicate non-robustness. We reuse \(\mathcal{X}_{\psi_{R}}\) from Example 2 and define \(\psi_{R2}=(\mathcal{X}_{\psi_{R}},\mathcal{Y}_{\psi_{R2}})\), where \(\mathcal{Y}_{\psi_{R2}}\subset\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{ R}^{m+1}\times\mathbb{R}^{m+1}\) and, concretely,
\[\mathcal{Y}_{\psi_{R2}}=\left\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\mathbf{y}^{(1)},\mathbf{y}^{ (2)}\ \Big{|}\ N\!R\!\left(\mathbf{y}^{(1)}\right)\lor N\!R\!\left(\mathbf{y}^{(2)}\right) \lor Same\!\left(\mathbf{y}^{(1)},\mathbf{y}^{(2)}\right)\right\},\]
where
\[N\!R\!\left(\mathbf{y}\right) =\bigwedge_{j=1}^{m}\mathbf{y}_{\bot}\geq\mathbf{y}_{j}\] \[Same\!\left(\mathbf{y}^{(1)},\mathbf{y}^{(2)}\right) =\bigvee_{j_{1}=1}^{m}\bigwedge_{k=1}^{2}\bigwedge_{j_{2}=1}^{m} \mathbf{y}_{j_{1}}^{(k)}\geq\mathbf{y}_{j_{2}}^{(k)}.\]
Intuitively, \(N\!R\) captures that the extra class \(\bot\) is assigned to an input, while \(Same\) captures that the same class is assigned to \(\mathbf{y}^{(1)}\) and \(\mathbf{y}^{(2)}\)1. To construct a neural network \(\mathrm{net}_{\mathrm{Sat}_{R2}}\) that serves as a satisfaction function for \(\psi_{R2}\), we note that for an arbitrary vector \(\mathbf{u}\in\mathbb{R}^{u}\), \(u\in\mathbb{N}\)
Footnote 1: Strictly speaking, \(Same\) only requires that there is an intersection between the largest elements of \(\mathbf{y}^{(1)}\) and \(\mathbf{y}^{(2)}\). This comes into play when the assigned class is ambiguous due to an output having several largest elements.
\[\bigvee_{a\in\mathcal{A}}\bigwedge_{b\in B(a)}\mathbf{u}_{k_{1}(a,b) }\geq\mathbf{u}_{k_{2}(a,b)} \tag{1}\] \[\Leftrightarrow \left(\max_{a\in\mathcal{A}}\min_{b\in B(a)}\mathbf{u}_{k_{1}(a,b)}- \mathbf{u}_{k_{2}(a,b)}\right)\geq 0, \tag{2}\]
where \(\mathcal{A}\) and \(\mathcal{B}\) are finite sets, \(B:\mathcal{A}\to 2^{\mathcal{B}}\), and \(k_{1},k_{2}:\mathcal{A}\times\mathcal{B}\to\mathbb{N}\). As we can transform any formula in propositional logic into Disjunctive Normal Form, we
can bring the formula defining \(\mathcal{Y}_{\psi_{R2}}\) into the form of Equation (1). Therefore, since we can compute \(\min\) and \(\max\) using a neural network (Example 1), we can define a neural network \(\mathrm{net}_{\mathrm{Sat}_{R2}}\) serving as a satisfaction function for \(\psi_{R2}\). Together with \(\mathcal{W}_{R}\) and \(\mathrm{net}_{\mathrm{In}_{R}}\) from Example 2, \(\mathrm{net}_{\mathrm{Sat}_{R2}}\) defines an NNDH with the same input and output set as \(\psi_{R2}\).
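The max-min reformulation in Equation (2) can be illustrated with a small sketch; here a clause list encodes the DNF, and plain Python min/max stand in for the ReLU constructions of Example 1:

```python
def dnf_value(u, clauses):
    # clauses: list of conjunctions, each a list of index pairs (k1, k2)
    # encoding u[k1] >= u[k2]; returns the left-hand side of eq. (2),
    # which is non-negative exactly when the DNF in eq. (1) holds
    return max(min(u[k1] - u[k2] for (k1, k2) in clause)
               for clause in clauses)
```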
Example 4 (Lipschitz Continuity): The Lipschitz continuity of a neural network is linked not only to robustness [34] but also to fairness [10], generalisation [3], and explainability [13]. While many neural network architectures are always Lipschitz continuous [7; 34; 29], it is the magnitude of the Lipschitz constant that matters [7]. Let \(K\in\mathbb{R}_{\geq 0}\) be the desired global Lipschitz constant. Define \(\mathcal{W}_{C}=\left\{\left.\mathbf{x}_{1}^{(1)},\ldots,\mathbf{x}_{n}^{(1)},\mathbf{x}_ {1}^{(2)},\ldots,\mathbf{x}_{n}^{(2)}\ \right|\ \mathbf{x}^{(1)},\mathbf{x}^{(2)}\in\mathcal{X}\right\}\) and
\[\mathrm{net}_{\mathrm{In}_{C}}\Big{(}\mathbf{x}_{1}^{(1)},\ldots,\mathbf{ x}_{n}^{(1)},\mathbf{x}_{1}^{(2)},\ldots,\mathbf{x}_{n}^{(2)}\Big{)} =\Big{(}\mathbf{x}^{(1)},\mathbf{x}^{(2)}\Big{)}\] \[\mathrm{net}_{\mathrm{Sat}_{C}}\Big{(}\mathbf{x}^{(1)},\mathbf{x}^{(2)}, \mathbf{y}^{(1)},\mathbf{y}^{(2)}\Big{)} =K\Big{\|}\mathbf{x}^{(1)}-\mathbf{x}^{(2)}\Big{\|}_{\infty}-\Big{\|}\mathbf{ y}^{(1)}-\mathbf{y}^{(2)}\Big{\|}_{\infty}.\]
First, \(\mathrm{net}_{\mathrm{In}_{C}}\) is an identity function and, thus, a trivial neural network. Then, by computing \(\|\bullet\|_{\infty}\) as in Example 2 in a neural network, we obtain an NNDH \(\psi_{C}=(\mathcal{X}_{\psi_{C}},\mathcal{Y}_{\psi_{C}})\) with
\[\mathcal{X}_{\psi_{C}} =\mathcal{X}\times\mathcal{X}\] \[\mathcal{Y}_{\psi_{C}} =\left\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\mathbf{y}^{(1)},\mathbf{y}^{(2)}\ \Big{|}\ \Big{\|}\mathbf{y}^{(1)}-\mathbf{y}^{(2)}\Big{\|}_{\infty}\leq K\left\|\mathbf{x}^{(1)}- \mathbf{x}^{(2)}\right\|_{\infty}\right\},\]
which corresponds to Lipschitz continuity with Lipschitz constant \(K\).
Example 5 (Dependency Fairness): Machine learning applications from automated hiring [6] to image classification [28] bear the danger of producing unfair machine-learning models. However, in some applications, ensuring fairness may be legally required [27]. One fairness requirement that we may pose is that "similar individuals are treated similarly" [10]. _Dependency fairness_[15; 35] is a fairness criterion based on this idea2. Assume the first dimension of the input space is a categorical sensitive attribute with \(A\in\mathbb{N}\) disjoint values. We consider two inputs to be _similar_ if they are equal except for the sensitive attribute. Dependency fairness specifies that all similar inputs are assigned to the same class.
Footnote 2: We believe dependency fairness is an overly simplistic fairness criterion as it can be trivially satisfied by withholding sensitive attributes from the neural network, which is known to be insufficient for real-world fairness [2]. However, we still think that dependency fairness is suitable as an example for experimenting with verifying global specifications.
Let \(\psi_{F}=(\mathcal{X}_{\psi_{F}},\mathcal{Y}_{\psi_{F}})\), with \(\mathcal{X}_{\psi_{F}}\subset\mathcal{X}^{A}\), \(\mathcal{Y}_{\psi_{F}}\subset\left(\mathbb{R}^{n}\right)^{A}\times\left( \mathbb{R}^{m}\right)^{A}\), where
\[\mathcal{X}_{\psi_{F}} =\left\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(A)}\ \left|\begin{array}{c}\forall k\in\{1,\ldots,A\}:\\ \Big{(}\mathbf{x}_{1}^{(k)}=k\wedge\forall i\in\{2,\ldots,n\}:\mathbf{x}_{i}^{(1)}=\bm {x}_{i}^{(k)}\Big{)}\end{array}\right.\right\}\] \[\mathcal{Y}_{\psi_{F}} =\left\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(A)},\mathbf{y}^{(1)},\ldots,\mathbf{ y}^{(A)}\ \left|\begin{array}{c}\bigvee_{j_{1}=1}^{m}\bigwedge_{k=1}^{A}\bigwedge_{j_{2}=1}^{m} \mathbf{y}_{j_{1}}^{(k)}\geq\mathbf{y}_{j_{2}}^{(k)}\end{array}\right.\right\}.\]
We can construct a neural network satisfaction function \(\mathrm{net}_{\mathrm{Sat}_{F}}\) for this property analogously to Example 3. The input set \(\mathcal{X}_{\psi_{F}}\) consists of tuples of similar inputs which contain each value of the sensitive attribute in a fixed order. Let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) be the diagonal matrix with \(0,1,\ldots,1\) on its diagonal. Let \(\mathrm{assign}:\mathbb{N}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be an affine function with \(\mathrm{assign}(k,\mathbf{x})=\mathbf{A}\mathbf{x}+(k,0,\ldots,0)^{T}\). Define \(\mathcal{W}=\mathcal{X}\) and \(\mathrm{net}_{\mathrm{In}_{F}}(\mathbf{x})=(\mathrm{assign}(1,\mathbf{x}),\ldots, \mathrm{assign}(A,\mathbf{x}))\). Since assign is affine in \(\mathbf{x}\) for each fixed \(k\), \(\mathrm{net}_{\mathrm{In}_{F}}\) is a neural network. Overall, \(\mathcal{W}\), \(\mathrm{net}_{\mathrm{In}_{F}}\), and \(\mathrm{net}_{\mathrm{Sat}_{F}}\) define an NNDH with the same input and output set as \(\psi_{F}\).
These examples demonstrate that Definition 5 is an expressive specification formalism, despite restricting input and output sets to be defined by neural networks. It remains to show that we can indeed verify NNDHs using existing neural network verification approaches. This builds upon the ability to verify general computational graphs. In [37], the Linear Relaxation-based Perturbation Analysis (LiRPA) framework is extended to general computational graphs. LiRPA underlies verifiers such as \(\alpha\),\(\beta\)-CROWN [38] and ERAN [32], and is used in Marabou [22] and MN-BaB [14], among others. Among these verifiers, \(\alpha\),\(\beta\)-CROWN already supports verifying general computational graphs.
The central idea in verifying an NNDH \(\psi\) is to compose the network to verify \(\mathrm{net}_{\mathbf{\theta}}\) with itself and the networks \(\mathrm{net}_{\mathrm{In}_{\psi}}\) and \(\mathrm{net}_{\mathrm{Sat}_{\psi}}\) that define the input and output set of \(\psi\).
Theorem 3.1 (NNDH Verification): _Let \(\psi=(\mathcal{X}_{\psi},\mathcal{Y}_{\psi})\) with \(\mathcal{W}\subseteq\mathbb{R}^{w}\), \(\mathrm{net}_{\mathrm{In}}:\mathbb{R}^{w}\rightarrow(\mathbb{R}^{n})^{v}\) and \(\mathrm{net}_{\mathrm{Sat}}:(\mathbb{R}^{n})^{v}\times(\mathbb{R}^{m})^{v}\rightarrow\mathbb{R}\) be an NNDH. Let \(\mathrm{net}_{\boldsymbol{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) be a neural network. Define \(\mathrm{net}^{\prime}_{\boldsymbol{\theta}}:\mathbb{R}^{w}\rightarrow\mathbb{R}\) as_
\[\mathrm{net}^{\prime}_{\mathbf{\theta}}(\mathbf{w}) =\mathrm{net}_{\mathrm{Sat}}\Big{(}\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(v )},\mathrm{net}_{\mathbf{\theta}}\Big{(}\mathbf{x}^{(1)}\Big{)},\ldots,\mathrm{net}_{ \mathbf{\theta}}\Big{(}\mathbf{x}^{(v)}\Big{)}\Big{)}\] \[\text{where }\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(v)}=\mathrm{net}_{ \mathrm{In}}(\mathbf{w}).\]
_Further, let \(\varphi=(\mathcal{W},\mathbb{R}_{\geq 0})\). It holds that \(\mathrm{net}^{\prime}_{\mathbf{\theta}}\models\varphi\Leftrightarrow\mathrm{net}_{ \mathbf{\theta}}\models\psi\)._
Proof: Theorem 3.1 follows from applying Definitions 3 and 5.
Figure 2 visualises \(\mathrm{net}^{\prime}_{\mathbf{\theta}}\) from Theorem 3.1. We construct a new computational graph by generating several inputs using \(\mathrm{net}_{\mathrm{In}}\) and feeding each input to a separate copy of \(\mathrm{net}_{\mathbf{\theta}}\). Finally, \(\mathrm{net}_{\mathrm{Sat}}\) takes the generated inputs and the output of each copy of \(\mathrm{net}_{\mathbf{\theta}}\) and computes the satisfaction function value. Considering several copies of the same artefact is known as _self-composition_[8]. As Theorem 3.1 shows, verifying an NNDH \(\psi\) corresponds to verifying a property \(\varphi\) of the new computational graph \(\mathrm{net}^{\prime}_{\mathbf{\theta}}\). Overall, \(\mathrm{net}^{\prime}_{\mathbf{\theta}}\) has a more complicated graph structure than \(\mathrm{net}_{\mathbf{\theta}}\), but it only contains computations that also appear in \(\mathrm{net}_{\mathbf{\theta}}\), \(\mathrm{net}_{\mathrm{In}}\) or \(\mathrm{net}_{\mathrm{Sat}}\). Therefore, \(\psi\) can be verified using verifiers that can verify \(\mathrm{net}_{\mathbf{\theta}}\), \(\mathrm{net}_{\mathrm{In}}\) and \(\mathrm{net}_{\mathrm{Sat}}\) and support general computational graphs.
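A sketch of the self-composed network \(\mathrm{net}^{\prime}_{\boldsymbol{\theta}}\) from Theorem 3.1, again assuming PyTorch; the \(v\) copies of \(\mathrm{net}_{\boldsymbol{\theta}}\) share their parameters, so only the graph structure grows:

```python
import torch.nn as nn

class SelfComposition(nn.Module):
    # net'_theta(w) = net_Sat(x1, ..., xv, net_theta(x1), ..., net_theta(xv))
    # where (x1, ..., xv) = net_In(w)
    def __init__(self, net_in, net_theta, net_sat):
        super().__init__()
        self.net_in = net_in
        self.net_theta = net_theta
        self.net_sat = net_sat

    def forward(self, w):
        xs = self.net_in(w)                        # tuple of v inputs
        ys = tuple(self.net_theta(x) for x in xs)  # shared parameters theta
        return self.net_sat(*xs, *ys)  # >= 0 iff no violation for this w
```

Verifying the NNDH then amounts to checking the ordinary property \((\mathcal{W},\mathbb{R}_{\geq 0})\) on this module with a verifier that supports general computational graphs.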
## 4 Related Work
Using self-composition for verifying specific global specifications was explored previously [21, 23]. We use self-composition for verifying a range of global specifications. Improved encodings of self-composition [36] and approaches from differential verification of neural networks [26] are interesting directions for improving the verification of NNDHs.
In 2017, verifying global robustness was found to be infeasible using the then-available verifiers [21]. Recent approaches to global robustness [36] and global fairness specifications [35] have demonstrated that verifying global specifications is feasible today. A possible explanation is that, in practice, neural networks appear not to realise their full combinatorial potential [18], which allows for efficient branch-and-bound verification [35].
## 5 Conclusion
We present a versatile formalism for expressing global specifications while maintaining compatibility with existing verification approaches. Evaluating this approach empirically remains future work. A promising verifier for this approach is \(\alpha\),\(\beta\)-CROWN [38], as it already supports verifying arbitrary computational graphs. An interesting direction is comparing our generally applicable approach with approaches specialised to individual global specifications, such as, global robustness [36], dependency fairness [35] and Lipschitz continuity [5].
Figure 2: **Computational Graph for Verifying NNDHs.** Verifying an NNDH (Definition 5) reduces to verifying an input-output property of the computational graph in this figure. The boxes enclose sub-graphs of the computational graph. The contents of each box are placeholders. Pink nodes represent inputs, yellow nodes represent parameters, and blue nodes represent outputs. The input and output nodes in each sub-graph are repetitions of their direct predecessors or direct successors outside of the sub-graph. The inputs of \(\mathrm{net}_{\mathrm{Sat}}\) were rearranged for better legibility.
2304.05790 | Deep neural network approximation of composite functions without the curse of dimensionality | In this article we identify a general class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) with the rectified linear unit (ReLU) activation without the curse of dimensionality. In other words, the number of DNN parameters grows at most polynomially in the input dimension and the approximation error. The functions in our class can be expressed as a potentially unbounded number of compositions of special functions which include products, maxima, and certain parallelized Lipschitz continuous functions. | Adrian Riekert | 2023-04-12T12:08:59Z | http://arxiv.org/abs/2304.05790v1 | # Deep neural network approximation of composite functions without the curse of dimensionality
###### Abstract.
In this article we identify a general class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) with the rectified linear unit (ReLU) activation without the curse of dimensionality. In other words, the number of DNN parameters grows at most polynomially in the input dimension and the approximation error. The functions in our class can be expressed as a potentially unbounded number of compositions of special functions which include products, maxima, and certain parallelized Lipschitz continuous functions.
Key words and phrases: Approximation error, curse of dimensionality, artificial neural networks
## 1. Introduction
Many practically relevant numerical approximation algorithms suffer from the curse of dimensionality (cf., e.g., Bellman [2] and Novak & Ritter [27]). Roughly speaking, this means that the number of parameters needed to approximate certain functions grows exponentially in the input dimension, which is often problematic in high dimensions. In recent years deep neural networks (DNNs) have been successfully employed in numerous high-dimensional applications, where the unknown function to be approximated can depend on thousands or even millions of real parameters. It thus seems that DNNs are able to overcome the curse of dimensionality in many relevant cases. Nevertheless, the reasons for these promising practical results are still not fully understood.
In this article we aim to enhance the understanding of the approximation capabilities of DNNs by identifying a suitably large class of continuous functions that can provably be approximated by DNNs without the curse of dimensionality. That is, the number of DNN parameters is allowed to grow at most polynomially with respect to the input dimension and the prescribed approximation accuracy. The functions in this class can be expressed as compositions of particular functions which include products, maxima, and parallelized Lipschitz continuous functions; and the number of functions in the composition itself is allowed to grow polynomially in the input dimension, which leads to interesting new examples of approximable functions.
### Literature overview
The fact that neural networks with a suitable activation function and sufficiently many parameters can approximate any continuous function up to an arbitrarily small error is known in the literature as the universal approximation theorem. Qualitative results of this type were first established, e.g., in [5, 8, 14, 15, 19]. We also refer to Pinkus [29] for further references regarding such universal approximation results.
Quantitative upper bounds on the number of DNN parameters needed to approximate continuous functions can be found, e.g., in [9, 21, 29, 33, 35]. These estimates suffer from the curse of dimensionality, since one needs \(\Omega(\varepsilon^{-cd})\) DNN parameters to approximate a general \(d\)-dimensional Lipschitz continuous function up to accuracy \(\varepsilon\).
Throughout this article, \(\mathbf{N}\) denotes the set of all deep neural networks with ReLU activation, for every \(\Phi\in\mathbf{N}\) we write \(\mathcal{R}(\Phi)\) for the realization function of \(\Phi\), and \(\mathcal{P}(\Phi)\) denotes the number of real parameters (weights and biases) of \(\Phi\). This is the measure we use to describe the complexity of the neural network \(\Phi\).
For every \(p\in[1,\infty]\), every \(d\in\mathbb{N}\) and every \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) the \(\ell_{p}\)-norm of \(x\) is defined by
\[\|x\|_{p}=\begin{cases}\left(|x_{1}|^{p}+\cdots+|x_{d}|^{p}\right)^{1/p}&:p< \infty\\ \max\{|x_{1}|,\ldots,|x_{d}|\}&:p=\infty.\end{cases}\]
For every \(n\in\mathbb{N}_{0}=\mathbb{N}\cup\{0\}=\{0,1,2,\ldots\}\) we write \([n]=\mathbb{N}\cap(0,n]=\{1,2,\ldots,n\}\).
We now introduce the special function classes we consider throughout the remainder of this article. Firstly, we consider for \(d\in\mathbb{N}\) the multidimensional maximum and product functions \(m_{d},p_{d}\colon\mathbb{R}^{d}\to\mathbb{R}\) and \(\mathfrak{m}_{d},\mathfrak{p}_{d}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) which satisfy for all \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) that
\[\begin{split} m_{d}(x)&=\max\{x_{1},\ldots,x_{d}\}, \\ p_{d}(x)&=\prod_{i=1}^{d}x_{i},\\ \mathfrak{m}_{d}(x)&=(m_{1}(x_{1}),m_{2}(x_{1},x_{2 }),\ldots,m_{d}(x_{1},\ldots,x_{d})),\\ \mathfrak{p}_{d}(x)&=(p_{1}(x_{1}),p_{2}(x_{1},x_{2 }),\ldots,p_{d}(x_{1},\ldots,x_{d})).\end{split} \tag{1.2}\]
Furthermore, for every \(n\in\mathbb{N}\), \(d_{1},\ldots,d_{n}\), \(e_{1},\ldots,e_{n}\in\mathbb{N}\), subsets \(A_{i}\subseteq\mathbb{R}^{d_{i}}\), and arbitrary functions \(f_{i}\colon A_{i}\to\mathbb{R}^{e_{i}}\), \(i\in[n]\), we denote by \((f_{1}\square f_{2}\square\cdots\square f_{n})\colon A_{1}\times A_{2}\times \cdots\times A_{n}\to\mathbb{R}^{\sum_{i=1}^{n}e_{i}}\) the _parallelization_ of \(f_{1},\ldots,f_{n}\), i.e., the function which satisfies for all \(x_{1}\in A_{1},\ldots,x_{n}\in A_{n}\) that
\[(f_{1}\square f_{2}\square\cdots\square f_{n})(x_{1},x_{2},\ldots,x_{n})=(f_{ 1}(x_{1}),f_{2}(x_{2}),\ldots,f_{n}(x_{n})). \tag{1.3}\]
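To fix ideas, the functions in (1.2) and the parallelization (1.3) can be written down directly; the following NumPy sketch is an illustration of the definitions only and plays no role in the DNN constructions below.

```python
import numpy as np

def m(x):       # m_d: maximum of all coordinates
    return np.max(x)

def p(x):       # p_d: product of all coordinates
    return np.prod(x)

def m_cum(x):   # the extended maximum function: (m_1(x_1), m_2(x_1, x_2), ...)
    return np.maximum.accumulate(x)

def p_cum(x):   # the extended product function: (p_1(x_1), p_2(x_1, x_2), ...)
    return np.cumprod(x)

def parallelize(fs, dims):
    """(f_1 [] ... [] f_n): split the input into blocks of sizes dims, apply
    f_i to the i-th block, and concatenate the results."""
    def f(x):
        out, start = [], 0
        for fi, di in zip(fs, dims):
            out.append(np.atleast_1d(fi(x[start:start + di])))
            start += di
        return np.concatenate(out)
    return f

g = parallelize([m, p], [2, 3])                 # f_1 = m_2, f_2 = p_3
print(g(np.array([1.0, 4.0, 0.5, 2.0, 3.0])))   # -> [4.0, 3.0]
```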
Now let \(Q\) be a (\(d\)-dimensional) _hypercube_, by which we in the following mean a set of the form \(Q=[a,b]^{d}\subseteq\mathbb{R}^{d}\). Here \(d\in\mathbb{N}\) is an arbitrary natural number which describes the input dimension and \(a\in\mathbb{R}\), \(b\in(a,\infty)\) are real numbers which describe the input domain. For every \(k\in\mathbb{N}\), \(p\in[1,\infty]\), \(L\in[0,\infty)\) we denote by \(\mathscr{P}_{k,p}(Q,L)\subseteq\bigcup_{n\in\mathbb{N}}C(Q,\mathbb{R}^{n})\) the set of all continuous functions \(f\) on \(Q\) for which there exist \(n\in\mathbb{N}\), \(d_{1},\ldots,d_{n}\in\mathbb{N}\), and functions \(f_{i}\colon[a,b]^{d_{i}}\to\mathbb{R}\), \(i\in[n]\), such that
\[\begin{split}\sum\limits_{i=1}^{n}d_{i}=d,&\max_{i \in[n]}d_{i}\leq k,\qquad\forall\,i\in[n],\,x,y\in[a,b]^{d_{i}}\colon|f_{i}(x) -f_{i}(y)|\leq L\|x-y\|_{p},\\ \text{and}&\quad f=f_{1}\square f_{2}\square\cdots \square f_{n}.\end{split}\]
That is, \(f\) is the parallelization of functions defined on domains of dimension at most \(k\) which are Lipschitz with respect to the \(p\)-norm with Lipschitz constant \(L\).
We also denote by \(\mathbf{M}(Q)\subseteq\bigcup_{n\in\mathbb{N}}C(Q,\mathbb{R}^{n})\) the set of all continuous functions \(f\) on \(Q\) for which there exist \(n\in\mathbb{N}\) and \(d_{1},\ldots,d_{n}\in\mathbb{N}\) such that \(\sum_{i=1}^{n}d_{i}=d\) and
\[f=(m_{d_{1}}|_{[a,b]^{d_{1}}})\square(m_{d_{2}}|_{[a,b]^{d_{2}}})\square\cdots \square(m_{d_{n}}|_{[a,b]^{d_{n}}}),\]
i.e., the parallelizations of the maximum functions \(m_{d_{i}}\).
Finally, we denote by \(\mathbf{P}(Q)\subseteq\bigcup_{n\in\mathbb{N}}C(Q,\mathbb{R}^{n})\) the set of all continuous functions \(f\) on \(Q\) for which there exist \(n\in\mathbb{N}\) and \(d_{1},\ldots,d_{n}\in\mathbb{N}\) such that \(\sum_{i=1}^{n}d_{i}=d\) and
\[f=(p_{d_{1}}|_{[a,b]^{d_{1}}})\square(p_{d_{2}}|_{[a,b]^{d_{2}}})\square\cdots \square(p_{d_{n}}|_{[a,b]^{d_{n}}}),\]
i.e., the parallelizations of the product functions \(p_{d_{i}}\).
### Main results
In the following we show how to approximate certain compositions of parallelized functions by DNNs without the curse of dimensionality. Our results are somewhat similar to the previous results in Cheridito et al. [4] and Beneventano et al. [3]. While the authors of [4] use the general framework of catalog networks, our arguments exploit the compositional structure of the target functions more directly. While some of
our arguments rely on the results in [3], a main improvement is that we consider compositions of a variable and potentially unbounded number of functions. Specifically, in our first main result, Theorem 1.1 below, the number \(\mathbf{k}(d)\) of functions in the composition is allowed to grow at most polynomially in the parameter \(d\in\mathbb{N}\) which describes the dimension. An additional improvement in Theorem 1.1 compared to [3] is that the functions \(g_{i}^{d}\) are allowed to be parallelizations of Lipschitz functions of input dimension at most \(c\in\mathbb{N}\) (the class \(\mathscr{P}_{c,1}\)) instead of only \(1\)-dimensional Lipschitz functions.
We now present the precise statement of Theorem 1.1 and, thereafter, illustrate this statement by means of several examples.
**Theorem 1.1**.: _Let \(c\in\mathbb{N}\), for every \(d\in\mathbb{N}\) let \(\mathbf{k}(d),\mathfrak{d}_{1}^{d},\mathfrak{d}_{2}^{d},\ldots,\mathfrak{d}_{ \mathbf{k}(d)+1}^{d}\in[cd^{c}]\), for every \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) let \(Q_{i}^{d}\subseteq[-cd^{c},cd^{c}]^{\mathfrak{d}_{i}^{d}}\) be a \(\mathfrak{d}_{i}^{d}\)-dimensional hypercube and let \(g_{i}^{d}\in C(Q_{i}^{d},\mathbb{R}^{\mathfrak{d}_{i+1}^{d}})\) be a function, assume for every \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) that_
\[g_{i}^{d}\in\mathscr{P}_{c,1}(Q_{i}^{d},1)\cup\mathbf{M}(Q_{i}^{d})\cup \mathbf{P}(Q_{i}^{d}), \tag{1.4}\]
_assume for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)-1]\) that \(g_{i}^{d}(Q_{i}^{d})\subseteq Q_{i+1}^{d}\), assume for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) with \(g_{i}^{d}\in\mathbf{P}(Q_{i}^{d})\) that \(Q_{i}^{d}\subseteq[-\frac{1}{8},\frac{1}{8}]^{\mathfrak{d}_{i}^{d}}\), and for every \(d\in\mathbb{N}\) let \(F_{d}\in C(Q_{1}^{d},\mathbb{R}^{\mathfrak{d}_{\mathbf{k}(d)+1}^{d}})\) satisfy \(F_{d}=g_{\mathbf{k}(d)}^{d}\circ g_{\mathbf{k}(d)-1}^{d}\circ\cdots\circ g_{1} ^{d}\). Then there exists \(K\in\mathbb{N}\) such that for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\) there exists \(\Phi\in\mathbf{N}\) such that \(\mathcal{R}(\Phi)\in C(\mathbb{R}^{\mathfrak{d}_{1}^{d}},\mathbb{R}^{ \mathfrak{d}_{\mathbf{k}(d)+1}^{d}})\),_
\[\mathcal{P}(\Phi)\leq Kd^{K}\varepsilon^{-2c},\qquad\text{and}\qquad\forall\, x\in Q_{1}^{d}\colon\|\mathcal{R}(\Phi)(x)-F_{d}(x)\|_{1}\leq\varepsilon. \tag{1.5}\]
The condition (1.4) in Theorem 1.1 asserts that each function \(g_{i}^{d}\) in the composition is either a parallelized Lipschitz function with Lipschitz constant \(1\), a parallelized maximum function, or a parallelized product function of \(\mathfrak{d}_{i}^{d}\leq cd^{c}\) variables. Here we think of \(d\in\mathbb{N}\) as a parameter describing the order of magnitude of the dimensions. A particular case is \(\mathfrak{d}_{1}^{d}=d\) for all \(d\in\mathbb{N}\), so that the target function \(F_{d}\) is defined on a \(d\)-dimensional domain.
We need to assume for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) with \(g_{i}^{d}\in\mathbf{P}(Q_{i}^{d})\) that \(Q_{i}^{d}\subseteq[-\frac{1}{8},\frac{1}{8}]^{\mathfrak{d}_{i}^{d}}\) since only on sufficiently small hypercubes the parallelized product functions in \(\mathbf{P}(Q_{i}^{d})\) can be approximated with Lipschitz constant \(1\). For Lipschitz constants greater than \(1\), the composition could potentially produce exponentially large errors and thus could not be approximated without the curse of dimensionality.
The conclusion of the theorem in (1.5) establishes that each of the functions \(F_{d}\) can be approximated uniformly with accuracy \(\varepsilon\) on the \(\mathfrak{d}_{1}^{d}\)-dimensional hypercube \(Q_{1}^{d}\) by a DNN with ReLU activation using at most \(Kd^{K}\varepsilon^{-2c}\) parameters. Hence the number of parameters grows at most polynomially in \(d\) and the accuracy \(\varepsilon\).
Next let us illustrate the statement of Theorem 1.1 by some examples, which demonstrate several cases of compositions of a number of functions depending on the input dimension.
**Example 1.1**.: A situation where Theorem 1.1 applies is the family of functions
\[[e^{-1},1]^{d}\ni x=(x_{1},x_{2},\ldots,x_{d})\mapsto x_{1}^{x_{2}^{-x_{d}}} \in\mathbb{R},\quad d\in\mathbb{N},\]
which can be written as \(g_{1}^{d}\circ\cdots\circ g_{d-1}^{d}\) where \(g_{i}^{d}\colon[e^{-1},1]^{i+1}\to[e^{-1},1]^{i}\) defined by \(g_{i}^{d}(x_{1},\ldots,x_{i-1},x_{i},x_{i+1})=(x_{1},\ldots,x_{i-1},x_{i}^{x_{ i+1}})\) is an element of \(\mathscr{P}_{2,1}([e^{-1},1]^{i+1},1)\), as can be easily verified. Theorem 1.1 hence implies that these functions can be approximated by DNNs without the curse of dimensionality.
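As a quick numerical sanity check of this decomposition (a NumPy sketch; the dimension \(d=4\) and the sample point are arbitrary choices for the illustration):

```python
import numpy as np

def g(i, x):
    """g_i: (x_1, ..., x_{i+1}) -> (x_1, ..., x_{i-1}, x_i ** x_{i+1})."""
    return np.concatenate([x[:i - 1], [x[i - 1] ** x[i]]])

d = 4
x = np.array([0.5, 0.7, 0.9, 0.6])        # entries in [e^{-1}, 1]

y = x
for i in range(d - 1, 0, -1):             # apply g_{d-1} first, g_1 last
    y = g(i, y)

tower = x[0] ** (x[1] ** (x[2] ** x[3]))  # x_1^(x_2^(x_3^(x_4)))
print(float(y[0]), tower)                 # the two values agree
```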
**Example 1.2**.: We can also consider for an arbitrary \(a\in(1,\infty)\) the family of functions
\[[1,a]^{d}\ni x=(x_{1},x_{2},\ldots,x_{d})\mapsto\ln(x_{1}+\ln(x_{2}+\ln(\cdots+ \ln(x_{d}))))\in\mathbb{R},\quad d\in\mathbb{N}.\]
Define \(a_{1}^{d},a_{2}^{d},\ldots,a_{d}^{d}\in(1,\infty)\) by \(a_{d}^{d}=a\) and \(a_{i}^{d}=a_{i+1}^{d}+\ln(a_{i+1}^{d})\) for \(i\in\left[d-1\right]\). It is not hard to see that \(a_{i}^{d}\leq cd^{c}\) for a suitable constant \(c\). Now the functions in question can be written as \(g_{i}^{d}\circ\cdots\circ g_{d-1}^{d}\) where \(g_{i}^{d}\colon[1,a_{i+1}^{d}]^{i+1}\to[1,a_{i}^{d}]^{i}\), defined by \(g_{i}^{d}(x_{1},\ldots,x_{i-1},x_{i},x_{i+1})=(x_{1},\ldots,x_{i-1},x_{i}+\ln( x_{i+1}))\), is an element of \(\mathscr{P}_{2,1}([1,a_{i+1}^{d}]^{i+1},1)\). By Theorem 1.1 these functions can thus be approximated by DNNs without the curse of dimensionality.
**Example 1.3**.: For arbitrary \(a\in(0,\frac{1}{8}]\) we can consider the family of functions
\[[-a,a]^{d^{2}}\ni x=(x_{1},x_{2},\ldots,x_{d^{2}})\mapsto\\ \left(\prod_{i=1}^{d}x_{i}\right)\max\bigl{\{}\max_{j=1}^{d}x_{d+ j},\left(\prod_{k=1}^{d}x_{2d+k}\right)\max\{\max_{l=1}^{d}x_{3d+l},\cdots\} \bigr{\}}\in\mathbb{R},\quad d\in\mathbb{N},\]
which is a composition of \(\leq d\) parallelized maximum and product functions and thus satisfies the conditions of the theorem. For example, if \(d=4\) this is the function
\[(x_{1},\ldots,x_{16})\mapsto x_{1}x_{2}x_{3}x_{4}\max\{x_{5},x_{6},x_{7},x_{8 },x_{9}x_{10}x_{11}x_{12}\max\{x_{13},x_{14},x_{15},x_{16}\}\}.\]
By Theorem 1.1 these functions can thus again be approximated by DNNs without the curse of dimensionality.
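The following NumPy sketch evaluates the \(d=4\) instance once via the nested formula and once step by step as a composition of parallelized maximum and product blocks (the untouched coordinates correspond to parallelized one-dimensional blocks); the index bookkeeping is illustrative only.

```python
import numpy as np

a = 0.125                                  # a = 1/8, the largest admissible choice
rng = np.random.default_rng(0)
x = rng.uniform(-a, a, 16)

direct = (x[0] * x[1] * x[2] * x[3]
          * max(x[4], x[5], x[6], x[7],
                x[8] * x[9] * x[10] * x[11] * max(x[12], x[13], x[14], x[15])))

# Composition, applied innermost first; each step reduces one block.
y = np.concatenate([x[:12], [np.max(x[12:16])]])   # maximum of x_13, ..., x_16
y = np.concatenate([y[:8], [np.prod(y[8:13])]])    # product of x_9..x_12 and the maximum
y = np.concatenate([y[:4], [np.max(y[4:9])]])      # maximum of x_5..x_8 and the product
y = np.prod(y[:5])                                 # product of x_1..x_4 and the maximum
print(direct, float(y))                            # the two values agree
```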
In our second main result, Theorem 1.2, the number of functions in the composition is a fixed integer \(k\in\mathbb{N}\), but the Lipschitz constants of the functions in the composition are allowed to depend on the dimension \(d\in\mathbb{N}\). This is also an improvement compared to [3], where the maximal Lipschitz constant is a fixed number independent of \(d\in\mathbb{N}\). Here we allow both the parallelized maximum and product functions in \(\mathbf{M},\mathbf{P}\) and the generalized multidimensional maximum and product functions \(\mathfrak{m}_{d},\mathfrak{p}_{d}\).
**Theorem 1.2**.: _Let \(c,k\in\mathbb{N}\), for every \(d\in\mathbb{N}\) let \(\mathfrak{d}_{1}^{d},\mathfrak{d}_{2}^{d},\ldots,\mathfrak{d}_{k+1}^{d}\in[cd^{c}]\), for every \(d\in\mathbb{N}\), \(i\in[k]\) let \(Q_{i}^{d}\subseteq[-cd^{c},cd^{c}]^{\mathfrak{d}_{i}^{d}}\) be a hypercube and let \(g_{i}^{d}\in C(Q_{i}^{d},\mathbb{R}^{\mathfrak{d}_{i+1}^{d}})\) be a function, let \(p\in[1,\infty]\), assume for every \(d\in\mathbb{N}\), \(i\in[k]\) that_
\[g_{i}^{d}\in\mathscr{P}_{c,p}(Q_{i}^{d},cd^{c})\cup\{\mathfrak{m}_{\mathfrak{ d}_{i}^{d}}|_{Q_{i}^{d}}\}\cup\{\mathfrak{p}_{\mathfrak{d}_{i}^{d}}|_{Q_{i}^{d}}\} \cup\mathbf{M}(Q_{i}^{d})\cup\mathbf{P}(Q_{i}^{d}), \tag{1.6}\]
_assume for all \(d\in\mathbb{N}\), \(i\in[k-1]\) that \(g_{i}^{d}(Q_{i}^{d})\subseteq Q_{i+1}^{d}\), assume for all \(d\in\mathbb{N}\), \(i\in[k]\) with \(g_{i}^{d}\in\mathbf{P}(Q_{i}^{d})\cup\{\mathfrak{p}_{\mathfrak{d}_{i}^{d}}|_{Q_{ i}^{d}}\}\) that \(Q_{i}^{d}\subseteq[-1,1]^{\mathfrak{d}_{i}^{d}}\), and for every \(d\in\mathbb{N}\) let \(F_{d}\in C(Q_{1}^{d},\mathbb{R}^{\mathfrak{d}_{k+1}^{d}})\) satisfy \(F_{d}=g_{k}^{d}\circ g_{k-1}^{d}\circ\cdots\circ g_{1}^{d}\). Then there exists \(K\in\mathbb{N}\) such that for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\) there exists \(\Phi\in\mathbf{N}\) such that \(\mathcal{R}(\Phi)\in C(\mathbb{R}^{\mathfrak{d}_{1}^{d}},\mathbb{R}^{ \mathfrak{d}_{k+1}^{d}})\),_
\[\mathcal{P}(\Phi)\leq Kd^{K}\varepsilon^{-2c},\qquad\text{and}\qquad\forall\,x \in Q_{1}^{d}\colon\left\|\mathcal{R}(\Phi)(x)-F_{d}(x)\right\|_{p}\leq\varepsilon. \tag{1.7}\]
Again, we need to assume for all \(d\in\mathbb{N}\), \(i\in[k]\) with \(g_{i}^{d}\in\mathbf{P}(Q_{i}^{d})\cup\{\mathfrak{p}_{\mathfrak{d}_{i}^{d}}|_{Q_ {i}^{d}}\}\) that \(Q_{i}^{d}\subseteq[-1,1]^{\mathfrak{d}_{i}^{d}}\) since on larger hypercubes the multidimensional product functions can only be approximated with a Lipschitz constant growing exponentially in the dimension (cf., e.g., [3, Proposition 6.8]).
Let us also illustrate the statement of Theorem 1.2 by means of a few examples.
**Example 1.4**.: Consider the family of functions
\[[-1,1]^{d}\ni x\mapsto\max\bigl{\{}(x_{1})^{d},(x_{1}x_{2})^{d+1},\ldots,(x_{1 }x_{2}\cdots x_{d})^{2d-1}\bigr{\}}\in\mathbb{R},\quad d\in\mathbb{N},\]
which can be written as a composition of three functions \(g_{3}^{d}\circ g_{2}^{d}\circ g_{1}^{d}\). Here \(g_{1}^{d}=\mathfrak{p}_{d}\colon[-1,1]^{d}\to[-1,1]^{d}\) is the extended product function, \(g_{2}^{d}\colon[-1,1]^{d}\to[-1,1]^{d}\) is the parallelization of the component-wise functions \([-1,1]\ni y_{i}\mapsto(y_{i})^{d+i-1}\in\mathbb{R}\) which have a Lipschitz constant that grows polynomially in the dimension \(d\), and \(g_{3}^{d}=m_{d}\colon[-1,1]^{d}\to\mathbb{R}\) is the maximum function. Theorem 1.2 thus implies that these functions can be approximated by DNNs without the curse of dimensionality.
**Example 1.5**.: Consider for arbitrary \(c\in\mathbb{N}\) the family of functions
\[[-cd^{c},cd^{c}]^{d}\ni x\mapsto\prod_{i=1}^{d}\exp(-i|x_{i}|^{2})\in\mathbb{R}, \quad d\in\mathbb{N},\]
which is the composition of the product function \(p_{d}\) and the parallelization of the one-dimensional functions \(x_{i}\mapsto\exp(-i|x_{i}|^{2})\) which have a Lipschitz constant bounded by a polynomial in \(d\) and map into the interval \([0,1]\). By Theorem 1.2 these functions can thus be approximated by DNNs without the curse of dimensionality.
**Example 1.6**.: Consider for arbitrary \(c\in\mathbb{N}\) the family of functions
\[[-cd^{c},cd^{c}]^{3d}\ni x\mapsto\max_{l=1}^{d}\cos(lx_{3l-2}+l^{2}x_{3l-1}+l ^{3}x_{3l})\in\mathbb{R},\quad d\in\mathbb{N},\]
which can again be approximated by DNNs without the curse of dimensionality by Theorem 1.2, since the functions \(\mathbb{R}^{3}\ni(y_{1},y_{2},y_{3})\mapsto\cos(ly_{1}+l^{2}y_{2}+l^{3}y_{3}) \in\mathbb{R}\), \(1\leq l\leq d\), have a Lipschitz constant bounded by a polynomial in \(d\).
The above examples illustrate the type of compositional functions that can now be approximated by DNNs with number of parameters growing at most polynomially in the input dimension and the approximation accuracy.
The remainder of this article is organized as follows. In Section 2 we establish abstract DNN approximation results for certain function compositions. Afterwards, in Section 3 we prove that under suitable assumptions the parallelized Lipschitz, product, and maximum functions introduced above can be approximated without the curse of dimensionality. We then combine this with the abstract results from Section 2 to establish the main theorems.
## 2. DNN approximation of composite functions
Throughout this section we employ the notation introduced in Subsection 1.2. The main results of this section are the two abstract approximation results for compositions of functions in Propositions 2.9 and 2.10 which cover the cases of a polynomially growing number of functions with Lipschitz constant \(1\) and of a fixed number of functions with polynomially growing Lipschitz constants, respectively. To prove these we require some technical preparations.
### DNN approximation cost
We first define in Definition 2.1 our version of the approximation cost of continuous functions by DNNs and then establish some basic properties of the approximation cost. Definition 2.1 is inspired by Beneventano et al. [3, Definition 3.2], but is in a sense more general since it allows us to consider different \(\ell_{p}\)-norms on \(\mathbb{R}^{n}\).
**Definition 2.1** (Cost of DNN approximations).: For every \(p\in[1,\infty]\), \(m,n\in\mathbb{N}\), \(D\subseteq\mathbb{R}^{m}\), \(f\in C(D,\mathbb{R}^{n})\), \(L,\varepsilon\in[0,\infty)\) we denote by \(\mathrm{Cost}_{p}(f,L,\varepsilon)\in\mathbb{N}\cup\{\infty\}\) the extended real number given by
\[\begin{split}\mathrm{Cost}_{p}(f,L,\varepsilon)&=\min \Big{(}\Big{\{}N\in\mathbb{N}\colon\exists\,\Phi\in\mathbf{N}\colon\big{[}( \mathcal{R}(\Phi)\in C(\mathbb{R}^{m},\mathbb{R}^{n}))\\ &\wedge(N=\mathcal{P}(\Phi))\wedge\big{(}\sup_{x\in D}\lVert \mathcal{R}(\Phi)(x)-f(x)\rVert_{p}\leq\varepsilon\big{)}\\ &\wedge\big{(}\forall\,x,y\in D\colon\lVert\mathcal{R}(\Phi)(x)- \mathcal{R}(\Phi)(y)\rVert_{p}\leq L\lVert x-y\rVert_{p}\big{)}\Big{]}\Big{\}} \cup\{\infty\}\Big{)}.\end{split} \tag{2.1}\]
That is, \(\mathrm{Cost}_{p}(f,L,\varepsilon)\) is the minimal number of parameters of a DNN which can approximate \(f\) up to accuracy \(\varepsilon\) in the \(\ell_{p}\)-norm while being \(L\)-Lipschitz on the domain of \(f\) with respect to the \(\ell_{p}\)-norm. If such a DNN does not exist then the cost is defined to be infinite. Controlling the Lipschitz constant will be important when estimating the propagation of the approximation error through compositions; see Proposition 2.6 below.
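In practice one can at least falsify the two constraints in (2.1) on finitely many sample points. The following NumPy sketch does exactly this; it is a finite-sample check chosen for illustration, not a verification of the suprema over all of \(D\).

```python
import numpy as np

def check_constraints(net, f, samples, L, eps, p):
    """Check the error and Lipschitz constraints of Definition 2.1 on samples."""
    err_ok = all(np.linalg.norm(net(x) - f(x), ord=p) <= eps for x in samples)
    lip_ok = all(
        np.linalg.norm(net(x) - net(y), ord=p) <= L * np.linalg.norm(x - y, ord=p)
        for x in samples for y in samples
    )
    return err_ok and lip_ok

# Example: a one-neuron ReLU network representing f(x) = max(x, 0) exactly.
net = lambda x: np.maximum(x, 0.0)
f = lambda x: np.maximum(x, 0.0)
samples = [np.array([t]) for t in np.linspace(-1.0, 1.0, 21)]
print(check_constraints(net, f, samples, L=1.0, eps=0.01, p=2))  # True
```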
We next restate in Lemma 2.2 below the monotonicity of the approximation cost with respect to the parameters \(L,\varepsilon\) and the domain \(D\), which was established, e.g., in [3, Lemma 3.8] for the case \(p=2\). The case of general \(p\) is entirely analogous.
**Lemma 2.2**.: _Let \(p\in[1,\infty]\), \(m,n\in\mathbb{N}\), \(D\subseteq\mathbb{R}^{m}\), \(E\subseteq D\), \(f\in C(D,\mathbb{R}^{n})\), \(L_{1},L_{2},\varepsilon_{1},\varepsilon_{2}\in[0,\infty)\) satisfy \(L_{1}\leq L_{2}\) and \(\varepsilon_{1}\leq\varepsilon_{2}\). Then_
\[\operatorname{Cost}_{p}(f,L_{1},\varepsilon_{1})\geq\operatorname{Cost}_{p}(f| _{E},L_{2},\varepsilon_{2}). \tag{2.2}\]
In the next lemma we show for arbitrary \(p,q\in[1,\infty]\) how to estimate the cost with respect to the \(\ell_{q}\)-norm against the cost with respect to the \(\ell_{p}\)-norm. For this we need to rescale the Lipschitz constant \(L\) and the approximation accuracy \(\varepsilon\) by factors depending on the input and output dimensions \(m,n\).
**Lemma 2.3** (Cost with respect to different norms).: _Let \(p,q\in[1,\infty]\), \(m,n\in\mathbb{N}\), \(D\subseteq\mathbb{R}^{m}\), \(f\in C(D,\mathbb{R}^{n})\), \(L,\varepsilon\in[0,\infty)\). Then_
\[\operatorname{Cost}_{q}\bigl{(}f,\max\bigl{\{}m^{\nicefrac{{1}}{{p}}-\nicefrac{{1}}{{q}}},n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}}\bigr{\}}L,\max\bigl{\{}n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}},1\bigr{\}}\varepsilon\bigr{)}\leq\operatorname{Cost}_{p}(f,L,\varepsilon). \tag{2.3}\]
In Lemma 2.3 above we use the convention that \(\frac{1}{\infty}=0\).
Proof of Lemma 2.3.: Throughout this proof we use the well-known fact that for all \(k\in\mathbb{N}\), \(x\in\mathbb{R}^{k}\), \(1\leq s\leq t\leq\infty\) it holds that \(\|x\|_{t}\leq\|x\|_{s}\leq k^{\nicefrac{{1}}{{s}}-\nicefrac{{1}}{{t}}}\|x\|_{t}\). Assume w.l.o.g. that \(\operatorname{Cost}_{p}(f,L,\varepsilon)<\infty\). Then (2.1) assures that there exists \(\Phi\in\mathbf{N}\) which satisfies
\[\mathcal{R}(\Phi)\in C(\mathbb{R}^{m},\mathbb{R}^{n}),\qquad \forall\,x\in D\colon\|\mathcal{R}(\Phi)(x)-f(x)\|_{p}\leq\varepsilon,\] \[\forall\,x,y\in D\colon\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y )\|_{p}\leq L\|x-y\|_{p},\qquad\text{and}\qquad\mathcal{P}(\Phi)=\operatorname {Cost}_{p}(f,L,\varepsilon).\]
This yields for all \(x\in D\) that
\[\|\mathcal{R}(\Phi)(x)-f(x)\|_{q}\leq\max\bigl{\{}n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}},1\bigr{\}}\|\mathcal{R}(\Phi)(x)-f(x)\|_{p}\leq\max\bigl{\{}n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}},1\bigr{\}}\varepsilon.\]
Furthermore, if \(q\geq p\) we obtain for all \(x,y\in D\) that
\[\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{q} \leq\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{p}\leq L\|x-y\|_{p}\] \[\leq m^{\nicefrac{{1}}{{p}}-\nicefrac{{1}}{{q}}}L\|x-y\|_{q}.\]
On the other hand, if \(q\leq p\) we get for all \(x,y\in D\) that
\[\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{q} \leq n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}}\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{p}\leq n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}}L\|x-y\|_{p}\] \[\leq n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}}L\|x-y\|_{q}.\]

In any case, we have \(\forall\,x,y\in D\colon\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{q}\leq\max\{m^{\nicefrac{{1}}{{p}}-\nicefrac{{1}}{{q}}},n^{\nicefrac{{1}}{{q}}-\nicefrac{{1}}{{p}}}\}L\|x-y\|_{q}\). The proof of Lemma 2.3 is thus complete.
**Remark 2.4**.: Note that a particularly simple case of Lemma 2.3 arises if \(n=1\), i.e. the output dimension of the considered functions is \(1\), since then all \(\ell_{p}\)-norms on the output space agree. In this case, we obtain
\[\operatorname{Cost}_{q}\bigl{(}f,\max\bigl{\{}m^{\nicefrac{{1}}{{p}}-\nicefrac{{1}}{{q}}},1\bigr{\}}L,\varepsilon\bigr{)}\leq\operatorname{Cost}_{p}(f,L,\varepsilon).\]
In particular, if \(p\geq q\) we simply get \(\operatorname{Cost}_{q}(f,L,\varepsilon)\leq\operatorname{Cost}_{p}(f,L,\varepsilon)\).
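The norm comparison \(\|x\|_{t}\leq\|x\|_{s}\leq k^{\nicefrac{{1}}{{s}}-\nicefrac{{1}}{{t}}}\|x\|_{t}\) for \(1\leq s\leq t\), which the proof of Lemma 2.3 relies on, can be spot-checked numerically; the parameters in the following NumPy sketch are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
k, s, t = 8, 1.0, 3.0                  # dimension and norm exponents (assumptions)
for _ in range(5):
    x = rng.normal(size=k)
    lo = np.linalg.norm(x, ord=t)
    hi = np.linalg.norm(x, ord=s)
    assert lo <= hi <= k ** (1 / s - 1 / t) * lo + 1e-12
print("norm comparison holds on all samples")
```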
We next establish in Lemma 2.5 an auxiliary result which shows that if the values of a function are contained in a hypercube \(Q\), the output values of the approximating DNN can be clipped to be contained in \(Q\) as well with only a moderate increase in the number of parameters. This result will be useful when composing multiple DNNs. Lemma 2.5 is a generalization of [3, Lemma 3.7], which covers the case \(p=2\) and \(Q=[-R,R]^{n}\). The proof is very similar.
**Lemma 2.5**.: _Let \(p\in[1,\infty]\), \(m,n\in\mathbb{N}\), \(D\subseteq\mathbb{R}^{m}\), \(f\in C(D,\mathbb{R}^{n})\), \(L,\varepsilon\in[0,\infty)\) satisfy \(\operatorname{Cost}_{p}(f,L,\varepsilon)<\infty\), let \(Q\subseteq\mathbb{R}^{n}\) be a hypercube, and assume \(f(D)\subseteq Q\). Then there exists \(\Phi\in\mathbf{N}\) which satisfies_
(2.4) \[\mathcal{R}(\Phi)\in C(\mathbb{R}^{m},\mathbb{R}^{n}),\qquad\forall\,x\in D\colon\mathcal{R}(\Phi)(x)\in Q,\] \[\forall\,x\in D\colon\|\mathcal{R}(\Phi)(x)-f(x)\|_{p}\leq\varepsilon,\qquad\forall\,x,y\in D\colon\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{p}\leq L\|x-y\|_{p},\] \[\text{and}\qquad\mathcal{P}(\Phi)\leq\operatorname{Cost}_{p}(f,L,\varepsilon)+2n(n+1).\]
**Proposition 2.6**.: _Let \(p\in[1,\infty]\), \(n\in\mathbb{N}\), \(d_{1},d_{2},\ldots,d_{n+1}\in\mathbb{N}\), \(L_{1},L_{2},\ldots,L_{n}\in(0,\infty)\), \(\varepsilon\in(0,\infty)\), for every \(i\in[n]\) let \(Q_{i}\subseteq\mathbb{R}^{d_{i}}\) be a hypercube and let \(f_{i}\in C(Q_{i},\mathbb{R}^{d_{i+1}})\) be a function, and assume for all \(i\in[n-1]\) that \(f_{i}(Q_{i})\subseteq Q_{i+1}\). Then_

\[\operatorname{Cost}_{p}\bigl{(}f_{n}\circ f_{n-1}\circ\cdots\circ f_{1},\prod_{i=1}^{n}L_{i},\varepsilon\bigr{)}\leq 6\sum_{i=2}^{n}d_{i}(d_{i}+1)+3\sum_{i=1}^{n}\operatorname{Cost}_{p}\bigl{(}f_{i},L_{i},\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\bigr{)}.\]

Proof of Proposition 2.6.: Assume without loss of generality that for all \(i\in[n]\) it holds that

\[\operatorname{Cost}_{p}\bigl{(}f_{i},L_{i},\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\bigr{)}<\infty. \tag{2.7}\]

Combining (2.7) with Lemma 2.5 implies that there exist \(\Phi_{i}\in\mathbf{N}\), \(i\in[n]\), which satisfy for all \(i\in[n]\) that
\[\mathcal{R}(\Phi_{i})\in C(\mathbb{R}^{d_{i}},\mathbb{R}^{d_{i+1}}),\qquad\forall\,x\in Q_{i}\colon\,\mathcal{R}(\Phi_{i})(x)\in Q_{i+1},\] \[\qquad\forall\,x\in Q_{i}\colon\,\|\mathcal{R}(\Phi_{i})(x)-f_{i}(x)\|_{p}\leq\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1},\] \[\qquad\forall\,x,y\in Q_{i}\colon\,\|\mathcal{R}(\Phi_{i})(x)-\mathcal{R}(\Phi_{i})(y)\|_{p}\leq L_{i}\|x-y\|_{p},\] \[\qquad\text{and}\qquad\mathcal{P}(\Phi_{i})=\operatorname{Cost}_{p}\bigl{(}f_{i},L_{i},\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\bigr{)}+2d_{i+1}(d_{i+1}+1). \tag{2.8}\]
Now define \(\Phi\in\mathbf{N}\) as the composition \(\Phi=\Phi_{n}\circ\mathbf{I}_{d_{n}}\circ\Phi_{n-1}\cdots\circ\mathbf{I}_{d_{2 }}\circ\Phi_{1}\). Here \(\mathbf{I}_{d_{i}}\in\mathbf{N}\) denotes a suitable identity network for \(\mathbb{R}^{d_{i}}\), i.e., it satisfies \(\forall\,x\in\mathbb{R}^{d_{i}}\colon\mathcal{R}(\mathbf{I}_{d_{i}})(x)=x\) (cf., e.g., [3, Definition 2.12]). Observe that [3, Proposition 2.19] and (2.8) imply that
\[\mathcal{P}(\Phi)\leq 3\sum\limits_{i=1}^{n}\mathcal{P}(\Phi_{i})\leq 6\sum \limits_{i=2}^{n}d_{i}(d_{i}+1)+3\sum\limits_{i=1}^{n}\operatorname{Cost}_{p} \bigl{(}f_{i},L_{i},\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\bigr{)}.\]
Furthermore, we have \(\mathcal{R}(\Phi)=\mathcal{R}(\Phi_{n})\circ\cdots\circ\mathcal{R}(\Phi_{1})\), and thus \(\mathcal{R}(\Phi)\in C(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{n+1}})\) is Lipschitz continuous with Lipschitz constant \(\prod_{i=1}^{n}L_{i}\). Finally, [3, Lemma 6.5] ensures that
\[\sup_{x\in Q_{1}}\|\mathcal{R}(\Phi)(x)-(f_{n}\circ f_{n-1}\circ\cdots\circ f_{1})(x)\|_{p}\] \[\leq\sum\limits_{i=1}^{n}\Bigl{[}(\prod_{j=i+1}^{n}L_{j})\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\Bigr{]}=\sum\limits_{i=1}^{n}(\varepsilon n^{-1})=\varepsilon.\]
The proof of Proposition 2.6 is thus complete.
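The error-propagation mechanism behind Proposition 2.6 can be illustrated numerically: if each \(f_{i}\) is replaced by an approximation with error \(\varepsilon_{i}\) whose Lipschitz constant is at most \(L_{i}\), the composed error is at most \(\sum_{i}(\prod_{j>i}L_{j})\varepsilon_{i}\). All concrete functions and perturbations in the following NumPy sketch are assumptions made for the illustration.

```python
import numpy as np

L = [2.0, 3.0]                       # Lipschitz constants of f_1, f_2
eps = [0.01, 0.02]                   # per-stage approximation errors

f1 = lambda x: 2.0 * np.sin(x)       # 2-Lipschitz
f2 = lambda x: 3.0 * np.tanh(x)      # 3-Lipschitz
g1 = lambda x: f1(x) + eps[0]        # approximations within eps_i, same Lipschitz bound
g2 = lambda x: f2(x) - eps[1]

bound = eps[0] * L[1] + eps[1]       # sum_i (prod_{j>i} L_j) * eps_i = 0.01*3 + 0.02
xs = np.linspace(-2.0, 2.0, 401)
worst = np.max(np.abs(g2(g1(xs)) - f2(f1(xs))))
print(worst, "<=", bound, worst <= bound + 1e-12)
```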
As a consequence, we obtain in Corollary 2.7 a general result regarding approximations of compositions of functions without the curse of dimensionality.
**Corollary 2.7**.: _Let \(p\in[1,\infty]\), \(n\in\mathbb{N}\), let \(d_{1},d_{2},\ldots,d_{n+1}\in\mathbb{N}\), \(L_{1},L_{2},\ldots,L_{n}\in(0,\infty)\), for every \(i\in[n]\) let \(Q_{i}\subseteq\mathbb{R}^{d_{i}}\) be a hypercube and let \(f_{i}\in C(Q_{i},\mathbb{R}^{d_{i+1}})\) be a function, assume for all \(i\in[n-1]\) that \(f_{i}(Q_{i})\subseteq Q_{i+1}\), let \(\mathfrak{d}=\max\{d_{1},\ldots,d_{n}\}\), \(\mathfrak{L}=\max\{L_{1},\ldots,L_{n},1\}\), and assume that there exists \(c\in\mathbb{N}\) which satisfies for all \(i\in[n]\), \(\delta\in(0,1]\) that \(\operatorname{Cost}_{p}(f_{i},L_{i},\delta)\leq cd_{i}^{c}L_{i}^{c}\delta^{-c}\). Then it holds for all \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}\bigl{(}f_{n}\circ f_{n-1}\circ\cdots\circ f_{1},\prod _{i=1}^{n}L_{i},\varepsilon\bigr{)}\leq 12n\mathfrak{d}^{2}+3cn^{c+1} \mathfrak{d}^{c}\mathfrak{L}^{cn}\varepsilon^{-c}. \tag{2.9}\]
Proof of Corollary 2.7.: Applying Proposition 2.6 and the monotonicity from Lemma 2.2 yields for all \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}\bigl{(}f_{n}\circ f_{n-1}\circ\cdots\circ f_{1},\prod_{i=1}^{n}L_{i},\varepsilon\bigr{)}\] \[\leq 6\sum\limits_{i=2}^{n}d_{i}(d_{i}+1)+3\sum\limits_{i=1}^{n}\operatorname{Cost}_{p}\bigl{(}f_{i},L_{i},\varepsilon n^{-1}(\prod_{j=i+1}^{n}L_{j})^{-1}\bigr{)}\] \[\leq 12n\mathfrak{d}^{2}+3\sum\limits_{i=1}^{n}c\mathfrak{d}^{c}n^{c}\varepsilon^{-c}\mathfrak{L}^{c(n-i+1)}\leq 12n\mathfrak{d}^{2}+3c\mathfrak{d}^{c}n^{c+1}\varepsilon^{-c}\mathfrak{L}^{cn}.\]
The proof of Corollary 2.7 is thus complete.
**Remark 2.8**.: In Corollary 2.7 we want the right-hand side to grow at most polynomially in the dimension \(\mathfrak{d}\). In the case \(\mathfrak{L}=1\), i.e. \(\max\{L_{1},\ldots,L_{n}\}\leq 1\), the number \(n\) of functions in the composition is allowed to grow polynomially in \(\mathfrak{d}\), i.e., we can have \(n\leq c\mathfrak{d}^{c}\) for some constant \(c\in\mathbb{N}\). If the maximal Lipschitz constant \(\mathfrak{L}\) is larger than \(1\) and the number \(n\) of functions in the composition is a fixed constant we will also obtain an upper bound that avoids the curse of dimensionality.
### Abstract approximation results without the curse of dimensionality
In this subsection we establish two abstract approximation results for composite functions from which Theorems 1.1 and 1.2 in the introduction will follow. Both Proposition 2.9 and Proposition 2.10 are simple consequences of Corollary 2.7. First, in Proposition 2.9 we consider a composition of \(\mathcal{O}(d^{c})\) functions, where \(d\in\mathbb{N}\) is a dimensionality parameter, each of which can be approximated with Lipschitz constant \(1\) without the curse of dimensionality.
**Proposition 2.9**.: _Let \(c\in\mathbb{N}\), \(p\in[1,\infty]\), for every \(d\in\mathbb{N}\) let \(\mathfrak{d}_{1}^{d},\mathfrak{d}_{2}^{d},\ldots,\mathfrak{d}_{\mathbf{k}(d) +1}^{d}\in[d^{c}]\) and \(\mathbf{k}(d)\in[cd^{c}]\), for every \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) let \(Q_{i}^{d}\subseteq\mathbb{R}^{\mathfrak{d}_{i}^{d}}\) be a \(\mathfrak{d}_{i}^{d}\)-dimensional hypercube and let \(g_{i}^{d}\in C(Q_{i}^{d},\mathbb{R}^{\mathfrak{d}_{i+1}^{d}})\) be a function, assume for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\), \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}(g_{i}^{d},1,\varepsilon)\leq cd^{c}\varepsilon^{-c},\]
_assume for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)-1]\) that \(g_{i}^{d}(Q_{i}^{d})\subseteq Q_{i+1}^{d}\), and for every \(d\in\mathbb{N}\) let \(F_{d}\in C(Q_{1}^{d},\mathbb{R}^{\mathfrak{d}_{\mathbf{k}(d)+1}^{d}})\) satisfy \(F_{d}=g_{\mathbf{k}(d)}^{d}\circ g_{\mathbf{k}(d)-1}^{d}\circ\cdots\circ g_{1}^{d}\). Then there exists \(K\in\mathbb{N}\) such that for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\) there exists \(\Phi\in\mathbf{N}\) such that \(\mathcal{R}(\Phi)\in C(\mathbb{R}^{\mathfrak{d}_{1}^{d}},\mathbb{R}^{\mathfrak{d}_{\mathbf{k}(d)+1}^{d}})\),_
\[\mathcal{P}(\Phi)\leq Kd^{K}\varepsilon^{-c},\qquad\text{and}\qquad\forall\,x \in Q_{1}^{d}\colon\|\mathcal{R}(\Phi)(x)-F_{d}(x)\|_{p}\leq\varepsilon. \tag{2.10}\]
Proof of Proposition 2.9.: For every \(d\in\mathbb{N}\), applying Corollary 2.7 (with \(L_{i}\curvearrowright 1\), \(n\curvearrowright\mathbf{k}(d)\), \(d_{i}\curvearrowright\mathfrak{d}_{i}^{d}\)) yields for all \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}(F_{d},1,\varepsilon) =\operatorname{Cost}_{p}(g_{\mathbf{k}(d)}^{d}\circ g_{\mathbf{k} (d)-1}^{d}\circ\cdots\circ g_{1}^{d},1,\varepsilon)\] \[\leq 12\mathbf{k}(d)d^{2c}+3c(\mathbf{k}(d))^{c+1}d^{c^{2}} \varepsilon^{-c}\] \[\leq 12cd^{3c}+3c^{c+2}d^{c(2c+1)}\varepsilon^{-c}\leq 15c^{c+2}d^{c(2 c+1)}\varepsilon^{-c}.\]
The proof of Proposition 2.9 is thus complete.
Next, in Proposition 2.10 we consider compositions of a constant number \(k\in\mathbb{N}\) of functions, each of which can be approximated with Lipschitz constant \(\mathcal{O}(d^{c})\) without the curse of dimensionality.
**Proposition 2.10**.: _Let \(c,k\in\mathbb{N}\), \(p\in[1,\infty]\), for every \(d\in\mathbb{N}\) let \(\mathfrak{d}_{1}^{d},\mathfrak{d}_{2}^{d},\ldots,\mathfrak{d}_{k+1}^{d}\in[d^ {c}]\), for every \(d\in\mathbb{N}\), \(i\in[k]\) let \(Q_{i}^{d}\subseteq\mathbb{R}^{\mathfrak{d}_{i}^{d}}\) be a \(\mathfrak{d}_{i}^{d}\)-dimensional hypercube and let \(g_{i}^{d}\in C(Q_{i}^{d},\mathbb{R}^{\mathfrak{d}_{i+1}^{d}})\) be a function, assume for all \(d\in\mathbb{N}\), \(i\in[k]\), \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}(g_{i}^{d},cd^{c},\varepsilon)\leq cd^{c}\varepsilon^{-c},\]
_assume for all \(d\in\mathbb{N}\), \(i\in[k-1]\) that \(g_{i}^{d}(Q_{i}^{d})\subseteq Q_{i+1}^{d}\), and for every \(d\in\mathbb{N}\) let \(F_{d}\in C(Q_{1}^{d},\mathbb{R}^{\mathfrak{d}_{k+1}^{d}})\) satisfy \(F_{d}=g_{k}^{d}\circ g_{k-1}^{d}\circ\cdots\circ g_{1}^{d}\). Then there exists \(K\in\mathbb{N}\) such that for every \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\) there exists \(\Phi\in\mathbf{N}\) such that \(\mathcal{R}(\Phi)\in C(\mathbb{R}^{\mathfrak{d}_{1}^{d}},\mathbb{R}^{\mathfrak{ d}_{k+1}^{d}})\),_
\[\mathcal{P}(\Phi)\leq Kd^{K}\varepsilon^{-c},\qquad\text{and}\qquad\forall\,x \in Q_{1}^{d}\colon\|\mathcal{R}(\Phi)(x)-F_{d}(x)\|_{p}\leq\varepsilon. \tag{2.11}\]
Proof of Proposition 2.10.: For every \(d\in\mathbb{N}\), applying Corollary 2.7 (with \(L_{i}\curvearrowright cd^{c}\), \(n\curvearrowright k\), \(d_{i}\curvearrowright\mathfrak{d}_{i}^{d}\)) shows for all \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}(F_{d},(cd^{c})^{k},\varepsilon) =\operatorname{Cost}_{p}(g_{k}^{d}\circ g_{k-1}^{d}\circ\cdots \circ g_{1}^{d},(cd^{c})^{k},\varepsilon)\] \[\leq 12kd^{2c}+3ck^{c+1}d^{c^{2}}(cd^{c})^{ck}\varepsilon^{-c}\] \[=12kd^{2c}+3c^{ck+1}k^{c+1}d^{c^{2}(1+k)}\varepsilon^{-c}\leq 15 k^{c+1}c^{ck+1}d^{c^{2}(1+k)}\varepsilon^{-c}.\]
The proof of Proposition 2.10 is thus complete.
## 3. DNN approximation of specific function classes
In this section we establish approximation results for the specific functions introduced in Subsection 1.2, which allows us to apply Propositions 2.9 and 2.10 to these functions. At the end of this section we employ our approximation results to prove Theorems 1.1 and 1.2 from the introduction.
### DNN approximations of parallelizations
The first step is to show in Lemma 3.2 how parallelizations of the form \(f_{1}\square f_{2}\square\cdots\square f_{n}\), as defined in Subsection 1.2, can be approximated efficiently by DNNs. For this we employ the parallelized DNN architecture from Cheridito et al. [4], which allows one to approximate parallelizations efficiently. Specifically, we use Lemma 3.1 below, which is a reformulation of [4, Proposition 5] in the ReLU case, where \(c=2\) (in the notation of [4, Proposition 5]).
**Lemma 3.1**.: _Let \(n\in\mathbb{N}\), \(\Phi_{1},\ldots,\Phi_{n}\in\mathbf{N}\). Then there exists \(\Phi\in\mathbf{N}\) which satisfies \(\mathcal{R}(\Phi)=\mathcal{R}(\Phi_{1})\square\cdots\square\mathcal{R}(\Phi_ {n})\) and_
\[\mathcal{P}(\Phi)\leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}\max\{\mathcal{I}( \Phi_{i}),\mathcal{O}(\Phi_{i})\}\big{)}^{2}\sum\limits_{i=1}^{n}\mathcal{P}( \Phi_{i}). \tag{3.1}\]
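For two networks of equal depth the idea behind Lemma 3.1 is simply to stack the weight matrices block-diagonally and concatenate the biases; the general case of [4, Proposition 5] additionally pads networks of different depths, which accounts for the overhead factor in (3.1). The following NumPy sketch shows the equal-depth case for two one-hidden-layer ReLU networks; all shapes are assumptions made for the illustration.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def net(W1, b1, W2, b2):
    return lambda x: W2 @ relu(W1 @ x + b1) + b2

def block_diag(A, B):
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rng = np.random.default_rng(2)
W1a, b1a = rng.normal(size=(5, 2)), rng.normal(size=5)
W2a, b2a = rng.normal(size=(1, 5)), rng.normal(size=1)
W1b, b1b = rng.normal(size=(4, 3)), rng.normal(size=4)
W2b, b2b = rng.normal(size=(2, 4)), rng.normal(size=2)
net_a, net_b = net(W1a, b1a, W2a, b2a), net(W1b, b1b, W2b, b2b)

# Parallelization: block-diagonal weights, concatenated biases.
par = net(block_diag(W1a, W1b), np.concatenate([b1a, b1b]),
          block_diag(W2a, W2b), np.concatenate([b2a, b2b]))

x = rng.normal(size=5)   # input split as (x_a, x_b) with dimensions 2 and 3
print(np.allclose(par(x), np.concatenate([net_a(x[:2]), net_b(x[2:])])))  # True
```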
**Lemma 3.2**.: _Let \(p\in[1,\infty]\), \(n\in\mathbb{N}\), \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(L,\varepsilon\in[0,\infty)\), \(d_{1},\ldots,d_{n}\), \(e_{1},\ldots,e_{n}\in\mathbb{N}\), and for every \(i\in[n]\) let \(f_{i}\in C([a,b]^{d_{i}},\mathbb{R}^{e_{i}})\). Then_
\[\mathrm{Cost}_{p}(f_{1}\square f_{2}\square\cdots\square f_{n},L,\varepsilon) \leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}\max\{d_{i},e_{i}\}\big{)}^{2}\sum \limits_{i=1}^{n}\mathrm{Cost}_{p}\big{(}f_{i},L,n^{-\nicefrac{{1}}{{p}}} \varepsilon\big{)}. \tag{3.2}\]
Proof of Lemma 3.2.: Throughout this proof assume without loss of generality that
\[\forall\,i\in[n]\colon\,\mathrm{Cost}_{p}\big{(}f_{i},L,n^{-\nicefrac{{1}}{{ p}}}\varepsilon\big{)}<\infty. \tag{3.3}\]
Note that (3.3) and (2.1) imply that there exist \(\Phi_{i}\in\mathbf{N}\), \(i\in[n]\), which satisfy for all \(i\in[n]\) that
\[\mathcal{R}(\Phi_{i})\in C(\mathbb{R}^{d_{i}},\mathbb{R}^{e_{i}}),\qquad\forall\,x\in[a,b]^{d_{i}}\colon\|\mathcal{R}(\Phi_{i})(x)-f_{i}(x)\| _{p}\leq\varepsilon n^{-\nicefrac{{1}}{{p}}},\] \[\forall\,x,y\in[a,b]^{d_{i}}\colon\|\mathcal{R}(\Phi_{i})(x)- \mathcal{R}(\Phi_{i})(y)\|_{p}\leq L\|x-y\|_{p},\] \[\text{and}\qquad\mathcal{P}(\Phi_{i})=\mathrm{Cost}_{p}\big{(}f_{i },L,\varepsilon n^{-\nicefrac{{1}}{{p}}}\big{)}.\]
Let \(\Phi\in\mathbf{N}\) be the parallelization of \(\Phi_{1},\ldots,\Phi_{n}\) given by Lemma 3.1. That is, we have \(\mathcal{R}(\Phi)=\mathcal{R}(\Phi_{1})\square\cdots\square\mathcal{R}(\Phi_ {n})\in C(\mathbb{R}^{\sum_{i=1}^{n}d_{i}},\mathbb{R}^{\sum_{i=1}^{n}e_{i}})\) and
\[\mathcal{P}(\Phi) \leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}\max\{d_{i},e_{i}\} \big{)}^{2}\sum\limits_{i=1}^{n}\mathcal{P}(\Phi_{i})\] \[=\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}\max\{d_{i},e_{i}\} \big{)}^{2}\sum\limits_{i=1}^{n}\mathrm{Cost}_{p}(f_{i},L,n^{-\nicefrac{{1}}{{ p}}}\varepsilon).\]
We now consider two cases depending on whether \(p\in[1,\infty)\) or \(p=\infty\).
**Case 1.** Assume \(p<\infty\). Thus we get for all \(x=(x_{1},\ldots,x_{n})\in[a,b]^{\sum_{i=1}^{n}d_{i}}\) with \(\forall\,i\in[n]\colon x_{i}\in[a,b]^{d_{i}}\) that
\[\|\mathcal{R}(\Phi)(x)-(f_{1}\square f_{2}\square\cdots\square f_{n})(x)\|_{p}^ {p}=\sum\limits_{i=1}^{n}\|\mathcal{R}(\Phi_{i})(x_{i})-f_{i}(x_{i})\|_{p}^{p} \leq n(\varepsilon n^{-\nicefrac{{1}}{{p}}})^{p}=\varepsilon^{p}.\]
In addition, for all \(x=(x_{1},\ldots,x_{n})\), \(y=(y_{1},\ldots,y_{n})\in[a,b]^{\sum_{i=1}^{n}d_{i}}\) with \(\forall\,i\in[n]\colon x_{i},y_{i}\in[a,b]^{d_{i}}\) we obtain that
\[\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{p}^{p} =\sum\limits_{i=1}^{n}\|\mathcal{R}(\Phi_{i})(x_{i})-\mathcal{R}( \Phi_{i})(y_{i})\|_{p}^{p}\leq L^{p}\sum\limits_{i=1}^{n}\|x_{i}-y_{i}\|_{p}^ {p}\] \[=L^{p}\|x-y\|_{p}^{p}\]
and hence \(\mathcal{R}(\Phi)\) is indeed \(L\)-Lipschitz on \([a,b]^{\sum_{i=1}^{n}d_{i}}\) with respect to \(\|\cdot\|_{p}\).
**Case 2**.: Assume \(p=\infty\). Then we get for all \(x=(x_{1},\ldots,x_{n})\in[a,b]^{\sum_{i=1}^{n}d_{i}}\) with \(\forall\,i\in[n]\colon x_{i}\in[a,b]^{d_{i}}\) that
\[\|\mathcal{R}(\Phi)(x)-(f_{1}\square f_{2}\square\cdots\square f_{n})(x)\|_{\infty}=\max_{i\in[n]}\|\mathcal{R}(\Phi_{i})(x_{i})-f_{i}(x_{i})\|_{\infty}\leq\varepsilon.\]
In addition, for all \(x=(x_{1},\ldots,x_{n})\), \(y=(y_{1},\ldots,y_{n})\in[a,b]^{\sum_{i=1}^{n}d_{i}}\) with \(\forall\,i\in[n]\colon x_{i},y_{i}\in[a,b]^{d_{i}}\) we obtain that
\[\|\mathcal{R}(\Phi)(x)-\mathcal{R}(\Phi)(y)\|_{\infty} =\max_{i\in[n]}\|\mathcal{R}(\Phi_{i})(x_{i})-\mathcal{R}(\Phi_{ i})(y_{i})\|_{\infty}\] \[\leq L\max_{i\in[n]}\|x_{i}-y_{i}\|_{\infty}=L\|x-y\|_{\infty}\]
and thus \(\mathcal{R}(\Phi)\) is indeed \(L\)-Lipschitz with respect to \(\|\cdot\|_{\infty}\).
This completes the proof of Lemma 3.2.
### DNN approximations of parallelized Lipschitz functions
We next state the essentially known result that every Lipschitz continuous function on a compact set in \(\mathbb{R}^{d}\) can be approximated with error \(\varepsilon>0\) using at most \(\mathcal{O}(\varepsilon^{-2d})\) parameters. While this rate is not optimal (cf., e.g., Shen et al. [33]), it is also important for our purposes to control the Lipschitz constant of the approximating DNN realization itself, which does not directly follow from the optimal bounds in the literature. The proof of Proposition 3.3 uses the results from [16] and Lemma 2.3.
**Proposition 3.3**.: _Let \(d\in\mathbb{N}\). Then there exists \(K\in\mathbb{N}\) which satisfies for all \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(L\in[0,\infty)\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\), and \(f\colon[a,b]^{d}\to\mathbb{R}\) with \(\forall\,x,y\in[a,b]^{d}\colon|f(x)-f(y)|\leq L\|x-y\|_{p}\) that_
\[\operatorname{Cost}_{p}\bigl{(}f,d^{1-\nicefrac{{1}}{{p}}}L,\varepsilon \bigr{)}\leq K(\max\{L(b-a),1\})^{2d}\varepsilon^{-2d}. \tag{3.4}\]
Proof of Proposition 3.3.: First observe that for all \(u\in\mathbb{R}^{d}\), \(p\in[1,\infty]\) we have \(\|u\|_{p}\leq\|u\|_{1}\). Hence, [16, Corollary 4.9 and Eq. (4.38)] show that there exists \(K\in\mathbb{N}\) which satisfies for all \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(L\in[0,\infty)\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\), and \(f\colon[a,b]^{d}\to\mathbb{R}\) with \(\forall\,x,y\in[a,b]^{d}\colon|f(x)-f(y)|\leq L\|x-y\|_{p}\leq L\|x-y\|_{1}\) that1
Footnote 1: The Lipschitz property of the approximating DNN follows since its realization is given by a maximum convolution; cf. [16, Lemma 3.12 and Proposition 4.4].
\[\operatorname{Cost}_{1}(f,L,\varepsilon)\leq K(\max\{L(b-a),1\})^{2d} \varepsilon^{-2d}.\]
Combining this with Lemma 2.3 demonstrates for all \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(L\in[0,\infty)\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\), and \(f\colon[a,b]^{d}\to\mathbb{R}\) with \(\forall\,x,y\in[a,b]^{d}\colon|f(x)-f(y)|\leq L\|x-y\|_{p}\) that
\[\operatorname{Cost}_{p}(f,d^{1-\nicefrac{{1}}{{p}}}L,\varepsilon)\leq \operatorname{Cost}_{1}(f,L,\varepsilon)\leq K(\max\{L(b-a),1\})^{2d} \varepsilon^{-2d}.\]
This completes the proof of Proposition 3.3.
Combining this result with Lemma 3.2 we obtain the following general result about parallelized low-dimensional Lipschitz functions with input dimension \(d_{i}\leq k\).
**Corollary 3.4**.: _Let \(k\in\mathbb{N}\), \(L\in[0,\infty)\), \(n\in\mathbb{N}\), \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(p\in[1,\infty]\), \(d_{1},\ldots,d_{n}\in\mathbb{N}\) satisfy \(\max_{i\in[n]}d_{i}\leq k\), and for every \(i\in[n]\) let \(f_{i}\in C([a,b]^{d_{i}},\mathbb{R})\) satisfy \(\forall\,x,y\in[a,b]^{d_{i}}\colon|f_{i}(x)-f_{i}(y)|\leq L\|x-y\|_{p}\). Then there exists \(K\in\mathbb{N}\), depending only on \(k\), which satisfies for all \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}\bigl{(}f_{1}\square f_{2}\square\cdots\square f_{n},k^{1- \nicefrac{{1}}{{p}}}L,\varepsilon\bigr{)}\leq Kn^{2k+3}(\max\{L(b-a),1\})^{2k} \varepsilon^{-2k}. \tag{3.5}\]
Proof of Corollary 3.4.: Note that Proposition 3.3 and Lemma 2.2 imply that there exists \(K\in\mathbb{N}\), depending only on \(k\), which satisfies for all \(i\in[n]\), \(\varepsilon\in(0,1]\) that
\[\mathrm{Cost}_{p}\big{(}f_{i},k^{1-\nicefrac{{1}}{{p}}}L,\varepsilon\big{)}\leq \mathrm{Cost}_{p}\big{(}f_{i},(d_{i})^{1-\nicefrac{{1}}{{p}}}L,\varepsilon\big{)} \leq K(\max\{L(b-a),1\})^{2k}\varepsilon^{-2k}.\]
Hence, Lemma 3.2 demonstrates for all \(\varepsilon\in(0,1]\) that
\[\mathrm{Cost}_{p}\big{(}f_{1}\square f_{2}\square\cdots\square f_{n},k^{1-\nicefrac{{1}}{{p}}}L,\varepsilon\big{)}\] \[\leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}d_{i}\big{)}^{2}\sum\limits_{i=1}^{n}\mathrm{Cost}_{p}\big{(}f_{i},k^{1-\nicefrac{{1}}{{p}}}L,n^{-\nicefrac{{1}}{{p}}}\varepsilon\big{)}\] \[\leq\tfrac{11}{4}n^{2}k^{2}nK(\max\{L(b-a),1\})^{2k}\varepsilon^{-2k}n^{\nicefrac{{2k}}{{p}}}\] \[\leq\tfrac{11k^{2}K}{4}n^{2k+3}(\max\{L(b-a),1\})^{2k}\varepsilon^{-2k}.\]
The proof of Corollary 3.4 is thus complete.
Finally, we restate this result in terms of the parallelized Lipschitz function classes \(\mathscr{P}_{k,p}\) as defined above. Note that the required number of parameters grows only polynomially in the input dimension \(d\) for every fixed integer \(k\in\mathbb{N}\).
**Corollary 3.5**.: _Let \(k\in\mathbb{N}\). Then there exists \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(p\in[1,\infty]\), \(L\in[0,\infty)\), \(\varepsilon\in(0,1]\), \(f\in\mathscr{P}_{k,p}([a,b]^{d},L)\) that_
\[\mathrm{Cost}_{p}\big{(}f,k^{1-\nicefrac{{1}}{{p}}}L,\varepsilon\big{)}\leq Kd ^{2k+3}(\max\{L(b-a),1\})^{2k}\varepsilon^{-2k}. \tag{3.6}\]
### DNN approximations of products
In this section we state some results on the approximation cost of multidimensional products as defined in (1.2). The first result, Lemma 3.6 below, is a consequence of [3, Proposition 6.8] and Lemma 2.3. Similar approximation results for multidimensional products without the curse of dimensionality can also be found in [4, 32, 35].
**Lemma 3.6**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(a\in[1,\infty)\), \(p\in[1,\infty]\), \(\varepsilon\in(0,\infty)\) that_
\[\mathrm{Cost}_{p}\big{(}p_{d}|_{[-a,a]^{d}},\sqrt{32}d^{3}a^{2d-1},\varepsilon \big{)}\leq Kd^{3}\big{(}1+\ln(a)+\mathfrak{r}(\ln(\varepsilon^{-1}))\big{)}. \tag{3.7}\]
Proof of Lemma 3.6.: Observe that [3, Proposition 6.8] demonstrates that there exists \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(a\in[1,\infty)\), \(\varepsilon\in(0,\infty)\) that
\[\operatorname{Cost}_{2}\bigl{(}p_{d}|_{[-a,a]^{d}},\sqrt{32}d^{\nicefrac{{5}}{{2}}}a^{2d-1},\varepsilon\bigr{)}\leq Kd^{3}\bigl{(}1+\ln(a)+\mathfrak{r}(\ln(\varepsilon^{-1}))\bigr{)}.\]

Combining this with Lemma 2.3 and the fact that for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\) it holds that \(d^{\nicefrac{{1}}{{2}}-\nicefrac{{1}}{{p}}}\leq d^{\nicefrac{{1}}{{2}}}\) establishes (3.7). This completes the proof of Lemma 3.6.
Since the Lipschitz constant in (3.7) grows exponentially in \(d\) for \(a>1\), in the following we restrict to the hypercube \([-1,1]^{d}\). For the parallelized product functions in \(\mathbf{P}([-1,1]^{d})\) we obtain as a consequence the following approximation result.
**Corollary 3.7**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(\varepsilon\in(0,1]\), \(p\in[1,\infty]\), \(f\in\mathbf{P}([-1,1]^{d})\) that_
\[\mathrm{Cost}_{p}\big{(}f,\sqrt{32}d^{3},\varepsilon\big{)}\leq Kd^{K}\varepsilon^{-1}. \tag{3.8}\]
Proof of Corollary 3.7.: Note that Lemma 3.6 ensures that there exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\), \(\varepsilon\in(0,\infty)\) that
\[\mathrm{Cost}_{p}\big{(}p_{d}|_{[-1,1]^{d}},\sqrt{32}d^{3},\varepsilon\big{)} \leq Kd^{3}\big{(}1+\mathfrak{r}(\ln(\varepsilon^{-1}))\big{)}.\]
Combining this with Lemma 3.2 and the fact that \(\forall\,u\in[1,\infty)\colon 1+\ln(u)\leq u\) demonstrates for all \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\), \(d,n,d_{1},d_{2},\ldots,d_{n}\in\mathbb{N}\) with \(\sum_{i=1}^{n}d_{i}=d\) and all \(f=(p_{d_{1}}|_{[-1,1]^{d_{1}}})\square\cdots\square(p_{d_{n}}|_{[-1,1]^{d_{n}}})\in\mathbf{P}([-1,1]^{d})\) that
\[\operatorname{Cost}_{p}(f,\sqrt{32}d^{3},\varepsilon) \leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}d_{i}\big{)}^{2}\sum_{ i=1}^{n}\operatorname{Cost}_{p}\big{(}p_{d_{i}}|_{[-1,1]^{d_{i}}},\sqrt{32}d^{3}, n^{-\nicefrac{{1}}{{p}}}\varepsilon\big{)}\] \[\leq\tfrac{11}{4}n^{2}d^{2}K\sum_{i=1}^{n}(d_{i})^{3}\big{(}1+\ln (\varepsilon^{-1})+\tfrac{1}{p}\ln n\big{)}\] \[\leq\tfrac{11}{4}n^{2}d^{2}Kd^{3}(1+\ln(\varepsilon^{-1}))(1+\ln n)\] \[\leq\tfrac{11}{4}Kn^{3}d^{5}\varepsilon^{-1}\leq\tfrac{11}{4} Kd^{8}\varepsilon^{-1}.\]
The proof of Corollary 3.7 is thus complete.
Using the monotonicity in Lemma 2.2, the same approximation result holds for \(f\in\mathbf{P}(Q)\) for any \(d\)-dimensional hypercube \(Q\subseteq[-1,1]^{d}\).
We next turn to the extended product functions \(\mathfrak{p}_{d}\). Combining [3, Corollary 6.9] with Lemma 2.3 we obtain an approximation of \(\mathfrak{p}_{d}\) with respect to arbitrary \(\ell_{p}\)-norms.
**Lemma 3.8**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}\big{(}\mathfrak{p}_{d}|_{[-1,1]^{d}},\sqrt{32}d^{ \nicefrac{{1}}{{2}}},\varepsilon\big{)}\leq Kd^{K}\varepsilon^{-1}. \tag{3.9}\]
Proof of Lemma 3.8.: Observe that [3, Corollary 6.9] implies that there exists \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(a\in[1,\infty)\), \(\varepsilon\in(0,\infty)\) that
\[\operatorname{Cost}_{2}\bigl{(}\mathfrak{p}_{d}|_{[-a,a]^{d}},\sqrt{32}a^{2d-1},\varepsilon\bigr{)}\leq Kd^{5}\bigl{(}1+\ln(a)+\mathfrak{r}(\ln(\varepsilon^{-1}))\bigr{)}.\]

Combining this with Lemma 2.3 yields for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\), \(\varepsilon\in(0,\infty)\) that
\[\operatorname{Cost}_{p}\big{(}\mathfrak{p}_{d}|_{[-1,1]^{d}},\sqrt{32}d^{ \nicefrac{{1}}{{2}}},d^{\nicefrac{{1}}{{2}}}\varepsilon\big{)}\leq Kd^{5} \big{(}1+\mathfrak{r}(\ln(\varepsilon^{-1}))\big{)}.\]
Hence, we obtain for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}\big{(}\mathfrak{p}_{d}|_{[-1,1]^{d}}, \sqrt{32}d^{\nicefrac{{1}}{{2}}},\varepsilon\big{)} \leq Kd^{5}\big{(}1+\mathfrak{r}(\ln(d^{\nicefrac{{1}}{{2}}} \varepsilon^{-1}))\big{)}\] \[\leq Kd^{5}(1+\ln(d))(1+\ln(\varepsilon^{-1}))\leq Kd^{6} \varepsilon^{-1}.\]
This establishes (3.9). The proof of Lemma 3.8 is thus complete.
Again the analogous result holds for \(\mathfrak{p}_{d}|_{Q}\) for any hypercube \(Q\subseteq[-1,1]^{d}\).
### DNN approximations of products with Lipschitz constant 1
In this subsection we show that on smaller cubes of side-length at most \(\tfrac{1}{4}\) we can even approximate the product function \(p_{d}\) with Lipschitz constant \(1\) with respect to arbitrary \(\ell_{p}\)-norms. The proof of Lemma 3.9 is based on [3, Lemma 6.7], which essentially implies the claimed statement for dimensions equal to a power of \(2\). For a general input dimension we use a form of backward induction. If we want to approximate compositions of a polynomially growing number of functions we need the Lipschitz constant to be at most \(1\); cf. Remark 2.8 above.
**Lemma 3.9**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\) that_
\[\operatorname{Cost}_{p}\big{(}p_{d}|_{[-\frac{1}{8},\frac{1}{8}]^{d}},1, \varepsilon\big{)}\leq Kd^{4}\big{(}1+\ln(\varepsilon^{-1})\big{)}. \tag{3.10}\]
Proof of Lemma 3.9.: First, [3, Lemma 6.7] implies that there exists a constant \(K\in\mathbb{N}\) which satisfies for all \(e\in\mathbb{N}\), \(a,\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{2}\bigl{(}p_{2^{e}}|_{[-a,a]^{2^{e}}},2^{5e/2}a^{2^{e}-1},\varepsilon\bigr{)}\leq K8^{e}\bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}.\]
Combining this with Lemma 2.3 demonstrates for all \(p\in[1,\infty]\), \(e\in\mathbb{N}\), \(a,\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}\bigl{(}p_{2^{e}}|_{[-a,a]^{2^{e}}},2^{3e}a^{2^{e}-1},\varepsilon\bigr{)}\leq K8^{e}\bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}.\]
Applying this with \(a=\frac{1}{8}\) and \(\varepsilon\curvearrowright8^{-2^{e-1}}\varepsilon\) shows that for all \(p\in[1,\infty]\), \(e\in\mathbb{N}\), \(\varepsilon\in(0,1]\) there exists a network \(\Phi_{e}^{p,\varepsilon}\in\mathbf{N}\) such that
\[\mathcal{R}(\Phi_{e}^{p,\varepsilon})\in C(\mathbb{R}^{2^{e}},\mathbb{R}),\qquad\forall\,x\in[-\tfrac{1}{8},\tfrac{1}{8}]^{2^{e}}\colon|\mathcal{R}(\Phi_{e}^{p,\varepsilon})(x)-p_{2^{e}}(x)|\leq 8^{-2^{e-1}}\varepsilon,\] \[\qquad\forall\,x,y\in[-\tfrac{1}{8},\tfrac{1}{8}]^{2^{e}}\colon|\mathcal{R}(\Phi_{e}^{p,\varepsilon})(x)-\mathcal{R}(\Phi_{e}^{p,\varepsilon})(y)|\leq 8^{e+1-2^{e}}\|x-y\|_{p},\] \[\text{and}\qquad\mathcal{P}(\Phi_{e}^{p,\varepsilon})\leq K8^{e}\bigl{(}1+\ln(\varepsilon^{-1})+2^{e-1}\ln(8)\bigr{)}\leq(K+\ln(8))16^{e}\bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}. \tag{3.11}\]
Now let \(d\in\mathbb{N}\), assume w.l.o.g. that \(d\geq 3\) (otherwise, \(d\) is a power of \(2\)), and choose \(e\in\mathbb{N}\) with \(2^{e-1}+1\leq d\leq 2^{e}\). Denote by \(A\colon\mathbb{R}^{d}\to\mathbb{R}^{2^{e}}\) the affine linear map defined by
\[A(x_{1},\dots,x_{d})=\bigl{(}x_{1},\dots,x_{d},\tfrac{1}{8},\dots,\tfrac{1}{8 }\bigr{)}\in\mathbb{R}^{2^{e}}.\]
Note that \(A\) maps \([-\tfrac{1}{8},\tfrac{1}{8}]^{d}\) into \([-\tfrac{1}{8},\tfrac{1}{8}]^{2^{e}}\). Given \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\) we denote by \(\Psi_{d}^{p,\varepsilon}\in\mathbf{N}\) the network with realization given by
\[\mathcal{R}(\Psi_{d}^{p,\varepsilon})(x)=8^{2^{e}-d}\mathcal{R}(\Phi_{e}^{p, \varepsilon})(A(x)).\]
In other words, \(\Psi_{d}^{p,\varepsilon}\) is obtained from \(\Phi_{e}^{p,\varepsilon}\) by pre-composition with the affine map \(A\) and post-composition with a scalar multiplication. Hence, [16, Proposition 2.20] implies that \(\mathcal{I}(\Psi_{d}^{p,\varepsilon})=d\leq 2^{e}=\mathcal{I}(\Phi_{e}^{p,\varepsilon})\) and all other layer dimensions are equal, whence
\[\mathcal{P}(\Psi_{d}^{p,\varepsilon})\leq\mathcal{P}(\Phi_{e}^{p,\varepsilon })\leq(K+\ln(8))16^{e}\bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}\leq 16(K+3)d^{4} \bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}.\]
Furthermore, note that (3.11) and the fact that \(p_{2^{e}}(A(x))=8^{d-2^{e}}p_{d}(x)\) show for all \(x,y\in[-\tfrac{1}{8},\tfrac{1}{8}]^{d}\) that
\[|\mathcal{R}(\Psi_{d}^{p,\varepsilon})(x)-p_{d}(x)|=|8^{2^{e}-d}\mathcal{R}( \Phi_{e}^{p,\varepsilon})(A(x))-8^{2^{e}-d}p_{2^{e}}(A(x))|\leq 8^{2^{e}-d}8^{-2^{e-1} }\varepsilon\leq\varepsilon\]
and
\[|\mathcal{R}(\Psi_{d}^{p,\varepsilon})(x)-\mathcal{R}(\Psi_{d}^{ p,\varepsilon})(y)| =8^{2^{e}-d}|\mathcal{R}(\Phi_{e}^{p,\varepsilon})(A(x))- \mathcal{R}(\Phi_{e}^{p,\varepsilon})(A(y))|\] \[\leq 8^{2^{e}-d}8^{e+1-2^{e}}\|A(x)-A(y)\|_{p}=8^{e+1-d}\|x-y\|_{p}\] \[\leq 8^{e-2^{e-1}}\|x-y\|_{p}\leq\|x-y\|_{p}.\]
This completes the proof of Lemma 3.9.
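To make the padding-and-rescaling construction of the proof concrete, the following small sketch (plain Python, added here as an illustration; it is not part of the original argument) checks numerically that \(p_{2^{e}}(A(x))=8^{d-2^{e}}p_{d}(x)\), so that multiplying the padded product by \(8^{2^{e}-d}\) recovers \(p_{d}(x)\).

```python
import math
import random

def product(xs):
    # p_d(x) = x_1 * ... * x_d
    out = 1.0
    for x in xs:
        out *= x
    return out

d = 5                           # any d with 2^(e-1) + 1 <= d <= 2^e
e = math.ceil(math.log2(d))     # here e = 3, so 2^e = 8

x = [random.uniform(-1 / 8, 1 / 8) for _ in range(d)]
Ax = x + [1 / 8] * (2 ** e - d)  # A(x) = (x_1, ..., x_d, 1/8, ..., 1/8)

# p_{2^e}(A(x)) = (1/8)^{2^e - d} p_d(x), hence the rescaling by 8^{2^e - d}
assert abs(8 ** (2 ** e - d) * product(Ax) - product(x)) < 1e-15
```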
**Remark 3.10**.: From this proof one can see that the upper bound \(\frac{1}{8}\) is not optimal for high dimensions \(d\). Choosing \(e\) as in the proof, \(a\) can be an arbitrary number in \((0,8^{-e/2^{e-1}}]\) in order for the Lipschitz constant to be at most \(1\). This upper bound approaches \(1\) as \(d\to\infty\) (whence \(e\to\infty\)).
For any hypercube \(Q\subseteq[-\tfrac{1}{8},\tfrac{1}{8}]^{d}\) we thus obtain, as a consequence, the result in Corollary 3.11 for the parallelized product functions in \(\mathbf{P}(Q)\), again using Lemma 3.2.
**Corollary 3.11**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), every \(d\)-dimensional hypercube \(Q\subseteq[-\tfrac{1}{8},\tfrac{1}{8}]^{d}\), and all \(\varepsilon\in(0,1]\), \(p\in[1,\infty]\), \(f\in\mathbf{P}(Q)\) that_
\[\operatorname{Cost}_{p}(f,1,\varepsilon)\leq Kd^{K}\varepsilon^{-1}. \tag{3.12}\]
Proof of Corollary 3.11.: Observe that Lemma 3.9 implies that there exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), every hypercube \(Q=[a,b]^{d}\subseteq[-\frac{1}{8},\frac{1}{8}]^{d}\), and all \(\varepsilon\in(0,1]\), \(p\in[1,\infty]\), that
\[\operatorname{Cost}_{p}\bigl{(}p_{d}|_{Q},1,\varepsilon\bigr{)}\leq Kd^{4} \bigl{(}1+\ln(\varepsilon^{-1})\bigr{)}.\]
Combining this with Lemma 3.2 and the fact that \(\forall\,u\in[1,\infty)\colon 1+\ln(u)\leq u\) shows for all \(p\in[1,\infty]\), \(\varepsilon\in(0,1]\), \(d,n,d_{1},d_{2},\ldots,d_{n}\in\mathbb{N}\) with \(\sum_{i=1}^{n}d_{i}=d\) and all \(f=(p_{d_{1}}|_{[a,b]^{d_{1}}})\square\cdots\square(p_{d_{n}}|_{[a,b]^{d_{n}}}) \in\mathbf{P}(Q)\) that
\[\operatorname{Cost}_{p}(f,1,\varepsilon) \leq\tfrac{11}{4}n^{2}\bigl{(}\max_{i\in[n]}d_{i}\bigr{)}^{2} \sum_{i=1}^{n}\operatorname{Cost}_{p}\bigl{(}p_{d_{i}}|_{[a,b]^{d_{i}}},1,n^{ \nicefrac{{-1}}{{p}}}\varepsilon\bigr{)}\] \[\leq\tfrac{11}{4}n^{2}d^{2}K\sum_{i=1}^{n}(d_{i})^{4}\bigl{(}1+ \ln(\varepsilon^{-1})+\tfrac{1}{p}\ln n\bigr{)}\] \[\leq\tfrac{11}{4}n^{2}d^{2}Kd^{4}(1+\ln(\varepsilon^{-1}))(1+\ln n)\] \[\leq\tfrac{11}{4}Kn^{3}d^{6}\varepsilon^{-1}\leq\tfrac{11}{4} Kd^{9}\varepsilon^{-1}.\]
The proof of Corollary 3.11 is thus complete.
### DNN approximations of maxima
Our final example of a particular family of functions which can be approximated by DNNs without the curse of dimensionality is given by the multidimensional maximum functions as defined in (1.2). In fact, it is well-known in the scientific literature that these functions can be represented exactly by ReLU DNNs with only polynomially many parameters; cf., e.g., [1, 3, 4, 16]. Using Lemma 2.3 and [3, Proposition 5.4] we derive the following result for an approximation with Lipschitz constant \(1\).
**Lemma 3.12**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), every \(d\)-dimensional hypercube \(Q\), and all \(\varepsilon\in[0,\infty)\), \(p\in[1,\infty]\) that_
\[\operatorname{Cost}_{p}(m_{d}|_{Q},1,\varepsilon)\leq Kd^{2}. \tag{3.13}\]
Proof of Lemma 3.12.: Note that [3, Proposition 5.4] shows that there exist \(K\in\mathbb{N}\) and \(\Phi_{d}\in\mathbf{N}\), \(d\in\mathbb{N}\), which satisfy for all \(d\in\mathbb{N}\), \(x,y\in\mathbb{R}^{d}\) that \(\mathcal{R}(\Phi_{d})\in C(\mathbb{R}^{d},\mathbb{R})\), \(\mathcal{P}(\Phi_{d})\leq Kd^{2}\), \(\mathcal{R}(\Phi_{d})(x)=m_{d}(x)\), and \(|\mathcal{R}(\Phi_{d})(x)-\mathcal{R}(\Phi_{d})(y)|\leq\|x-y\|_{\infty}\). This ensures for all \(d\in\mathbb{N}\), \(\varepsilon\in[0,\infty)\) and every \(Q=[a,b]^{d}\subseteq\mathbb{R}^{d}\) that \(\operatorname{Cost}_{\infty}(m_{d}|_{Q},1,\varepsilon)\leq Kd^{2}\). Combining this with Lemma 2.3 establishes (3.13). This completes the proof of Lemma 3.12.
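To illustrate the exact ReLU representation behind Lemma 3.12, here is a small sketch (plain Python, added for illustration and not taken from [3]): the identity \(\max(a,b)=a+\mathrm{ReLU}(b-a)\) yields a binary tree of depth \(\mathcal{O}(\log d)\) that computes \(m_{d}\) exactly with \(\mathcal{O}(d)\) ReLU units, and each gadget is \(1\)-Lipschitz with respect to the \(\ell_{\infty}\)-norm.

```python
import random

def relu(t):
    return t if t > 0.0 else 0.0

def relu_max2(a, b):
    # max(a, b) = a + ReLU(b - a): one ReLU unit plus affine maps
    return a + relu(b - a)

def relu_max(xs):
    # Binary tree of pairwise maxima: depth O(log d), O(d) ReLU units
    xs = list(xs)
    while len(xs) > 1:
        nxt = [relu_max2(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2 == 1:
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

x = [random.uniform(-3.0, 3.0) for _ in range(7)]
# Exact in real arithmetic; the tolerance only guards floating-point rounding
assert abs(relu_max(x) - max(x)) < 1e-12
```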
Similarly as for the product functions, we get for the parallelized maximum functions in \(\mathbf{M}(Q)\) the following approximation result.
**Corollary 3.13**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), every \(d\)-dimensional hypercube \(Q\subseteq\mathbb{R}^{d}\), and all \(\varepsilon\in[0,\infty)\), \(p\in[1,\infty]\), \(f\in\mathbf{M}(Q)\) that_
\[\operatorname{Cost}_{p}(f,1,\varepsilon)\leq Kd^{K}. \tag{3.14}\]
Proof of Corollary 3.13.: Observe that Lemma 3.12 assures that there exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(Q=[a,b]^{d}\subseteq\mathbb{R}^{d}\), \(\varepsilon\in[0,\infty)\), \(p\in[1,\infty]\) that
\[\operatorname{Cost}_{p}(m_{d}|_{Q},1,\varepsilon)\leq Kd^{2}.\]
Analogously to the proof of Corollary 3.11, we combine this with Lemma 3.2 to obtain for all \(a\in\mathbb{R}\), \(b\in(a,\infty)\), \(p\in[1,\infty]\), \(\varepsilon\in(0,\infty)\), \(d,n,d_{1},d_{2},\ldots,d_{n}\in\mathbb{N}\) with \(\sum_{i=1}^{n}d_{i}=d\)
and all \(f=(m_{d_{1}}|_{[a,b]^{d_{1}}})\square\cdots\square(m_{d_{n}}|_{[a,b]^{d_{n}}})\in \mathbf{M}(Q)\) that
\[\operatorname{Cost}_{p}(f,1,\varepsilon) \leq\tfrac{11}{4}n^{2}\big{(}\max_{i\in[n]}d_{i}\big{)}^{2}\sum_{ i=1}^{n}\operatorname{Cost}_{p}\big{(}m_{d_{i}}|_{[a,b]^{d_{i}}},1,n^{-\nicefrac{{1}}{{ p}}}\varepsilon\big{)}\] \[\leq\tfrac{11}{4}n^{2}d^{2}\sum_{i=1}^{n}K(d_{i})^{2}\leq\tfrac{11 }{4}Kd^{6}.\]
This completes the proof of Corollary 3.13.
For the extended maximum functions \(\mathfrak{m}_{d}\) we obtain the following result by using [3, Corollary 5.5] and Lemma 2.3. For \(p<\infty\) the Lipschitz constant depends on the dimension, due to the use of the \(\ell_{p}\)-norm on the output space \(\mathbb{R}^{d}\).
**Lemma 3.14**.: _There exists a constant \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), every \(d\)-dimensional hypercube \(Q\subseteq\mathbb{R}^{d}\), and all \(\varepsilon\in[0,\infty)\), \(p\in[1,\infty]\) that_
\[\operatorname{Cost}_{p}\big{(}\mathfrak{m}_{d}|_{Q},d^{\nicefrac{{1}}{{p}}}, \varepsilon\big{)}\leq Kd^{4}. \tag{3.15}\]
Proof of Lemma 3.14.: Note that [3, Corollary 5.5] assures that there exist \(K\in\mathbb{N}\) and \(\Phi_{d}\in\mathbf{N}\), \(d\in\mathbb{N}\), which satisfy for all \(d\in\mathbb{N}\), \(x\in\mathbb{R}^{d}\) that \(\mathcal{R}(\Phi_{d})\in C(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\mathcal{P}(\Phi_{d})\leq Kd^{4}\), and \(\mathcal{R}(\Phi_{d})(x)=\mathfrak{m}_{d}(x)\). Furthermore, observe that \(\mathfrak{m}_{d}\) is \(1\)-Lipschitz with respect to the \(\ell_{\infty}\)-norm. Hence, we obtain for all \(d\in\mathbb{N}\), \(Q=[a,b]^{d}\subseteq\mathbb{R}^{d}\), \(\varepsilon\in[0,\infty)\) that \(\operatorname{Cost}_{\infty}\big{(}\mathfrak{m}_{d}|_{Q},1,\varepsilon\big{)} \leq Kd^{4}\). Combining this with Lemma 2.3 establishes (3.15). The proof of Lemma 3.14 is thus complete.
### Proof of the main results
Now we have all the ingredients to prove the main theorems from the introduction. We first employ Proposition 2.9 to establish the result in Theorem 1.1 with a polynomial number of functions in the composition.
Proof of Theorem 1.1.: Note that Corollary 3.5 shows that there exists \(K_{1}\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) with \(g_{i}^{d}\in\mathscr{P}_{\varepsilon,1}(Q_{i}^{d},1)\) that
\[\forall\,\varepsilon\in(0,1]\colon\operatorname{Cost}_{1}(g_{i}^{d},1, \varepsilon)\leq K_{1}(\mathfrak{d}_{i}^{d})^{2c+3}(2cd^{c})^{2c}\varepsilon^{ -2c}\leq K_{1}c^{2c+3}(2c)^{2c}d^{c(2c+3)+2c^{2}}\varepsilon^{-2c}.\]
Here we used that Corollary 3.5 can be applied with the side-length \(b-a\leq 2cd^{c}\), since \(Q_{i}^{d}\subseteq[-cd^{c},cd^{c}]^{\mathfrak{d}_{i}^{d}}\) by assumption. Furthermore, Corollary 3.11 ensures that there exists \(K_{2}\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) with \(g_{i}^{d}\in\mathbf{P}(Q_{i}^{d})\) that
\[\forall\,\varepsilon\in(0,1]\colon\operatorname{Cost}_{1}(g_{i}^{d},1, \varepsilon)\leq K_{2}(\mathfrak{d}_{i}^{d})^{K_{2}}\varepsilon^{-1}\leq K_{2 }c^{K_{2}}d^{K_{2}}\varepsilon^{-2c}.\]
Finally, Corollary 3.13 implies that there exists \(K_{3}\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\) with \(g_{i}^{d}\in\mathbf{M}(Q_{i}^{d})\) that
\[\forall\,\varepsilon\in(0,1]\colon\operatorname{Cost}_{1}(g_{i}^{d},1, \varepsilon)\leq K_{3}(\mathfrak{d}_{i}^{d})^{K_{3}}\leq K_{3}c^{K_{3}}d^{cK_{3 }}\varepsilon^{-2c}.\]
Hence, we obtain for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\), \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{1}(g_{i}^{d},1,\varepsilon)\leq\max\bigl{\{}2^{2c}K_{1}c^{ 4c+3},K_{2}c^{K_{2}},K_{3}c^{K_{3}}\bigr{\}}d^{c\max\{4c+3,K_{2},K_{3}\}} \varepsilon^{-2c}.\]
Combining this with Proposition 2.9 establishes the claim. The proof of Theorem 1.1 is thus complete.
Next we apply Proposition 2.10 to establish Theorem 1.2, where the Lipschitz constants of the composed functions grow polynomially in the dimension.
Proof of Theorem 1.2.: Analogously to the proof of Theorem 1.1 we can combine (1.6), Corollary 3.5, Corollary 3.7, Lemma 3.8, Corollary 3.13, and Lemma 3.14 to obtain that there exists \(K\in\mathbb{N}\) which satisfies for all \(d\in\mathbb{N}\), \(i\in[\mathbf{k}(d)]\), \(\varepsilon\in(0,1]\) that
\[\operatorname{Cost}_{p}(g_{i}^{d},Kd^{K},\varepsilon)\leq K(\mathfrak{d}_{i}^{d} )^{K}\max\{\varepsilon^{-2c},\varepsilon^{-1},1\}\leq Kd^{K}\varepsilon^{-2c}.\]
Applying the second abstract approximation result in Proposition 2.10 hence establishes the claim. The proof of Theorem 1.2 is thus complete.
### Acknowledgments
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure. Helpful suggestions by Patrick Cheridito, Arnulf Jentzen, Benno Kuckuck, and Florian Rossmannek, in particular regarding Lemma 3.9, are gratefully acknowledged.
|
2303.13627 | Associated Random Neural Networks for Collective Classification of Nodes
in Botnet Attacks | Botnet attacks are a major threat to networked systems because of their
ability to turn the network nodes that they compromise into additional
attackers, leading to the spread of high volume attacks over long periods. The
detection of such Botnets is complicated by the fact that multiple network IP
addresses will be simultaneously compromised, so that Collective Classification
of compromised nodes, in addition to the already available traditional methods
that focus on individual nodes, can be useful. Thus this work introduces a
collective Botnet attack classification technique that operates on traffic from
an n-node IP network with a novel Associated Random Neural Network (ARNN) that
identifies the nodes which are compromised. The ARNN is a recurrent
architecture that incorporates two mutually associated, interconnected and
architecturally identical n-neuron random neural networks, that act
simultaneously as mutual critics to reach the decision regarding which of n
nodes have been compromised. A novel gradient learning descent algorithm is
presented for the ARNN, and is shown to operate effectively both with
conventional off-line training from prior data, and with on-line incremental
training without prior off-line learning. Real data from a 107 node packet
network is used with over 700,000 packets to evaluate the ARNN, showing that it
provides accurate predictions. Comparisons with other well-known state of the
art methods using the same learning and testing datasets, show that the ARNN
offers significantly better performance. | Erol Gelenbe, Mert Nakıp | 2023-03-23T19:32:31Z | http://arxiv.org/abs/2303.13627v1 | # Associated Random Neural Networks for Collective Classification of Nodes in Botnet Attacks
###### Abstract
Botnet attacks are a major threat to networked systems because of their ability to turn the network nodes that they compromise into additional attackers, leading to the spread of high volume attacks over long periods. The detection of such Botnets is complicated by the fact that multiple network IP addresses will be simultaneously compromised, so that Collective Classification of compromised nodes, in addition to the already available traditional methods that focus on individual nodes, can be useful. Thus this work introduces a collective Botnet attack classification technique that operates on traffic from an \(n\)-node IP network, with a novel Associated Random Neural Network (ARNN) that identifies the nodes which are compromised. The ARNN is a recurrent architecture that incorporates two mutually associated, interconnected and architecturally identical \(n\)-neuron random neural networks, that act simultaneously as mutual critics to reach the decision regarding which of \(n\) nodes have been compromised. A novel gradient learning descent algorithm is presented for the ARNN, and is shown to operate effectively both with conventional off-line training from prior data, and with on-line incremental training without prior off-line learning. Real data from a 107 node packet network is used with over \(700,000\) packets to evaluate the ARNN, showing that it provides accurate predictions. Comparisons with other well-known state of the art methods using the same learning and testing datasets, show that the ARNN offers significantly better performance.
keywords: Collective Classification, Botnet Attack Detection, Associated Random Neural Networks, The Internet, Nodes Compromised by Botnets, Random Neural Networks, ARNN Learning
## 1 Introduction
Many classification problems, such as identifying a given individual's face in a large dataset of face images of people [1], associate a binary label to data items [2]. This is also the usual case for network attack detection from traffic data [3] that attempts to determine whether a given network node has been compromised by an attack [4]. Such problems are often solved with Machine Learning (ML) algorithms that learn off-line from one or more datasets that contain the ground-truth data. The trained ML algorithm can then be tested on datasets that have not been used for learning, and then used online with previously unseen or new data. Typically, the online usage of such attack detection algorithms is carried out "one node at a time", i.e. as an individual classification problem for a specific node that may be concerned by possible attacks [5; 6].
When we need to classify each individual node in a set \(V=\{v_{1},\...\,v_{n}\}\) of interconnected nodes in a
network as being "compromised" or uncompromised (i.e., "safe") we are obviously faced with a Binary _Individual_ Classification Problem for each of the \(n\) nodes. However, when the attacking entity is a Botnet which induces a compromised node to attack several other nodes with which it is able to directly communicate, then we are faced with a _Collective Binary Classification Problem_ where the classification of the distinct nodes is correlated, even though we cannot be sure that a compromised node has sufficient bandwidth or processing capacity to actually compromise other nodes.
Indeed let \(A=[A_{ij}]_{n\times n}\) be the (deterministic) adjacency matrix where \(A_{ij}=1\) indicates that node \(v_{i}\) has opened a connection to node \(v_{j}\) and therefore can send packet traffic to it, while \(A_{ij}=0\) indicates that node \(v_{i}\) is unable to send packets to node \(v_{j}\). Then during a Botnet attack, nodes that can receive traffic from compromised nodes are themselves likely to become compromised, and to become in turn attackers against other nodes, so that one needs to classify nodes by taking account both the local attack traffic at each node, and their patterns of communication between nodes.
Collective (also known as "relational") classification problems have been widely studied [7; 8] using a variety of techniques linked to ML. As indicated in the literature [9], collective classification may use a collection of local conditional classifiers which classify an individual's label conditionally on the label value of others, and then fuses the overall sets of outputs, or may try to solve the problem either as a global optimization or a global relaxation problem [10; 11], with the global approach being often computationally more costly.
Botnet attack detection has been discussed in numerous papers, mainly using single node attack detection techniques [12; 13; 14] which can identify individually compromised nodes, except for some studies that analyze relations between nodes to detect the existence or spread of a Botnet [15; 16; 17].
Thus in this paper we address the Collective Classification problem of detecting all the nodes in a given network which have been compromised by a Botnet. In particular, we introduce an ML method that combines supervised learning by a novel Random Neural Network [18] architecture - which we call the Associated Random Neural Network (ARNN) - that learns from a sample taken from the traffic flowing among a set of network nodes, to classify them as being either compromised by a Botnet, or as non-compromised.
The Random Neural Network is a bio-inspired spiking Neural Network that has a convenient mathematical solution, and has been applied by numerous authors, including [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], in diverse problems that can be addressed with ML such as video compression, tumor recognition from MRI images, video quality evaluation, smart building climate management, enhanced reality, voice quality evaluation over the internet, wireless channel modulation, climate control in buildings, the detection of network viruses and other cyberattacks.
In the case of Botnet detection, the ARNN is trained off-line with data that is certified as containing Botnet attacks, and with data that is attack free, and the trained ARNN is then used online to monitor a network's traffic and collectively classify which nodes - if any - are compromised by a Botnet.
In the sequel, Section 2 surveys previous research on Botnet attacks. In Section 3 the proposed ARNN is described; to improve readability its gradient learning algorithm is detailed separately in A.
Section 4 presents the experimental work based on a large MIRAI Botnet dataset involving 107 network nodes and over \(760,000\) packets [40] that is used for training and evaluating the proposed method. The evaluation of the ARNN using this dataset is detailed in Section 5, where we have also compared our results with other well known ML methods. Finally, conclusions and suggestions for further work are presented in Section 6.
## 2 Recent Work on Botnet Attack Detection
In networked systems the cost of not meeting security requirements can be very high [41; 42; 43], hence much effort has been devoted to developing techniques that **detect attacks** against network components such as hosts, servers, routers, switches, IoT
devices, mobile devices and various network applications.
Botnet attacks are particularly harmful, since they induce their victims to become sources of further attacks against third parties [44; 45; 46]. Recent Botnet reports include the 2016 MIRAI attack [47], and the MERIS type attacks from 2021 and 2022 that can generate some 46 million requests per second, lasting more than 60 minutes, exploiting over \(5,000\) source IP addresses as Bots from over 130 countries [48; 49], which is a similar rate of requests as all the Wikimedia daily requests made in ten seconds. Another MERIS attack generated 17.2 million requests per second against a commercial web site, and such attacks have been observed to target some 50 web sites per day, with over 100 Distributed Denial of Service (DDoS) attacks, of which one third appear to occur in China, and 13% in the USA, involving a number of Bots sometimes ranging between \(30,000\) up to \(250,000\).
Botnet attack detection techniques typically examine incoming traffic streams and identify sub-streams that are benign or "normal", and those that may contain attacks [50; 51; 52], and often classify attacks into "types" [53] based on signatures [54; 55] that exploit prior knowledge about attack patterns. In addition, false alarms should also be minimized so that useful network traffic is not eliminated by mistake. However, such methods can also be overwhelmed by attack generators [56] that have been designed to adaptively modify their behaviour.
Defense techniques for Botnets based on the smart location of counter-attacks by "white hat" worm launchers have also been suggested [57; 58], while refined deep learning (DL) techniques have been investigated to recognize constantly evolving Botnet traffic [59], and transfer learning can improve detection accuracy, without concatenating large datasets having different characteristics [60].
Recent work has also created a taxonomy of Botnet communication patterns including encryption and hiding [61] with some authors examining how Internet Service Providers (ISP) can participate collectively to mitigate their effect [62]. Other work suggests that traditional Botnet detection techniques in the Internet are not well adapted to emerging applications such as the IoT [63], some studies have addressed Botnet apps in specific operating system contexts such as Android [64] or Botnet detection for specific applications such as Peer to Peer Systems (P2P) [65], or Vehicular Networks for which specific detection and protection mechanisms are suggested [66].
Some recent research has focused on the manner in which Botnet variability can be reflected in intrusion detection software that is designed for a given host [67]. Universal sets of features that may be applicable to attack detection [68] have also been suggested, and detection techniques for specific types of Botnets such as the ones based on the Domain Generation Algorithm [69] have been proposed.
Most of the previous literature on Botnets, as well as our recent work, has focused on _single node detection_ with off-line learning. We developed detection techniques for Distributed Denial of Service (DDoS) attacks using _gradient descent learning_ with the RNN [4; 70], because Botnets often use DDoS as the means of bringing down their victims. The system-level remedial actions that should be taken after an attack is detected [71] were also analyzed. To avoid learning all possible types of attack patterns, an _auto-associative approach based on Deep Learning of "normal" patterns with a dense multi-layer RNN_[72] was developed to detect malicious attacks by identifying deviations from normal traffic [73; 74; 75]. It was also shown that a single trained auto-associative dense RNN can provide detection of multiple types of attacks (i.e. not just Botnets) [76], and that learning can be partially conducted on-line, with less need for long and computationally costly off-line training [6].
### Approach Developed in this Paper
While it is possible to accurately detect malicious attacks by processing traffic at a given node, it is difficult to certify that the detected attack is indeed a Botnet by observing a single node since Botnets are based on the propagation of attack patterns through multiple nodes. Furthermore, many attack detectors detect anomalies in the incoming traffic rather than pointing to a specific attack [76]. Thus the present paper develops a Collective Classification approach
to specifically address the Botnet detection problem in the following manner:
1. A finite set of \(n\) interconnected network IP (Internet Protocol) addresses is considered,
2. Some of these addresses are equipped with a Local Attack Detector (LAD), so that a local evaluation is available at some of the nodes about whether they are being attacked. Note that the fact that a node is attacked does not necessarily imply that it has been compromised,
3. A specific neural network architecture, the Associated RNN (ARNN) with \(2n\) neurons, is designed **to deduce which (if any) of the IP addresses have been _compromised_** by Botnet(s), using the available decisions from the LADs regarding individual nodes. The ARNN is trained, using the algorithm detailed in A, on a small subset of data taken from a large open access Botnet dataset [40] containing over \(760,000\) packets exchanged among \(107\) IP (Internet Protocol) addresses.
4. Then, using the remaining large dataset (not used for training), we determine which of the \(107\) IP addresses have been compromised and have become Botnet attackers, achieving a high level of accuracy regarding which IP addresses are compromised.
5. Two other well established ML methods are also used to identify which of the \(107\) nodes have been compromised. The results show that the ARNN provides significantly better accuracy concerning both True Positives and True Negatives.
## 3 The ARNN Decision System
The decision system presented in this paper, the "self-critical" ARNN with \(2n\) neurons, is shown schematically in Figure 1. The ARNN carries out a Collective Classification of the compromised nodes (if any) for an \(n\)-node IP network denoted \(V=\{v_{1},\...\,v_{n}\}\). For each network node \(v_{i}\), the ARNN has two neurons \(X_{i}\) and \(Y_{i}\) that represent opposite views. \(X_{i}\) indicates that \(v_{i}\) is compromised, while \(Y_{i}\) indicates that \(v_{i}\) is not compromised. Their corresponding numerical decision variables are \(Q_{i},\ q_{i}\in[0,1]\), where \(Q_{i}\) is the probability that \(X_{i}\) is excited and \(q_{i}\) is the probability that \(Y_{i}\) is excited. \(X_{i}\) has an excitatory connection \(W^{+}_{ij}\) to each other neuron \(X_{j}\) and an inhibitory connection \(W^{-}_{ij}\) to all other \(Y_{j}\) neurons, and \(Y_{i}\) has an excitatory connection \(w^{+}_{ij}\) to each other neuron \(Y_{j}\) and an inhibitory connection \(w^{-}_{ij}\) to all other \(X_{j}\) neurons. A neuron does not excite or inhibit its own self. Thus inside the ARNN, the neurons of type \(X\) excite other neurons of type \(X\) and inhibit all neurons of type \(Y\), and vice-versa for the neurons of type \(Y\). The ARNN is "self-critical" in the sense that neurons of type \(X\) try to suppress the neurons of type \(Y\), and vice-versa. \(\Lambda_{i}\) represents the output from the LAD (local attack detector) at node \(v_{i}\) stating that \(v_{i}\) has been compromised while \(\lambda_{i}\) represents the LAD output at node \(v_{i}\) stating that it has not been compromised. \(\Lambda_{i},\ \lambda_{i}\) act as an excitatory and inhibitory external input, respectively, for \(X_{i}\), while they act as an inhibitory, excitatory input for \(Y_{i}\).
Figure 1: A schematic diagram of the \(2n\)-neuron ARNN that carries out a Collective Classification of the compromised nodes (if any) for an \(n\)-node IP network denoted \(V=\{v_{1},\...\,v_{n}\}\), with the neuron pairs \(X_{i},Y_{i}\), the weights \(W^{\pm}_{ij},w^{\pm}_{ij}\) and the external LAD inputs \(\Lambda_{i},\lambda_{i}\) as described in the text.

Both \(\Lambda_{i}\) and \(\lambda_{i}\) are non-negative real numbers; in practice they can be chosen as the corresponding probabilities output by the LADs, so that they act as excitatory and inhibitory external inputs, respectively, for each \(X_{i}\), while they have the opposite effect as inhibitory and excitatory inputs for \(Y_{i}\), respectively.
The two neurons \(X_{i}\) and \(Y_{i}\) have _internal states_ \(K_{i}(t)\geq 0\) and \(k_{i}(t)\geq 0\), respectively. If its internal state \(K_{i}(t)\) is strictly positive, then the RNN neuron \(X_{i}\) will fire spikes at exponentially distributed successive intervals, sending excitatory and/or inhibitory spikes at rates \(W^{+}_{ij},\ W^{-}_{ij}\geq 0\) to the other neurons in the ARNN. Similarly, when \(k_{i}(t)>0\) neuron \(Y_{i}\) will fire spikes at rates \(w^{+}_{ij},\ w^{-}_{ij}\geq 0\) to the other neurons \(Y_{j}\) and \(X_{j}\) in the ARNN. These firing rates are the "weights" that are learned with the training dataset using the algorithm described in A.
When any of the neurons \(\{X_{i},\ Y_{i},\ i=1,\...\ n\}\) receives an excitatory spike either from its external input or from another neuron, say at time \(t\), its internal state will increase by 1, i.e. \(K_{i}(t^{+})=K_{i}(t)+1\) or \(k_{i}(t^{+})=k_{i}(t)+1\). Similarly if a neuron receives an inhibitory spike then its internal state decreases by 1 provided it was previously at a positive state value, and its state does not change if it was previously at the zero value, i.e. \(K_{i}(t^{+})=max[0,K_{i}(t)-1]\) or \(k_{i}(t^{+})=max[0,k_{i}(t)-1]\). Also when a neuron fires, its internal state drops by 1, i.e. \(K_{i}(t^{+})=K_{i}(t)-1\) or \(k_{i}(t^{+})=k_{i}(t)-1\); note that a neuron can only fire if its state was previously positive.
We thus define the probability that these \(2n\) neurons are "excited" or firing by:
\[For\ X_{i}:\ Q_{i}=\lim_{t\rightarrow\infty}Prob[K_{i}(t)>0], \tag{1}\] \[For\ Y_{i}:\ q_{i}=\lim_{t\rightarrow\infty}Prob[k_{i}(t)>0], \tag{2}\]
and \(Q_{i}\) is the variable that "advocates" that node \(i\) is compromised, while the role of \(q_{i}\) is to advocate the opposite.
Consider the following system of \(2n\) equations for \(Q_{i},\ q_{i}\), obtained from the RNN equations [77]:
\[Q_{i} =\frac{\Lambda_{i}+\sum_{j=1}^{n}W^{+}_{ji}Q_{j}}{\lambda_{i}+ \sum_{j=1}^{n}[W^{+}_{ij}+W^{-}_{ij}]+\sum_{j=1}^{n}w^{-}_{ji}q_{j}}, \tag{3}\] \[q_{i} =\frac{\lambda_{i}+\sum_{j=1}^{n}w^{+}_{ji}q_{j}}{\Lambda_{i}+ \sum_{j=1}^{n}[w^{+}_{ij}+w^{-}_{ij}]+\sum_{j=1}^{n}W^{-}_{ji}Q_{j}},\]
where
\[W^{+}_{ii}=W^{-}_{ii}=w^{+}_{ii}=w^{-}_{ii}=0. \tag{4}\]
Let \(K(t)=(K_{1}(t),\...\,K_{n}(t))\) and \(k(t)=(k_{1}(t),\...\,k_{n}(t))\), and define the vectors of non-negative integers \(H=(H_{1},\...\,H_{n})\) and \(h=(h_{1},\...\,h_{n})\). From [77], we know that if the solution to the equations (3) satisfy \(0\leq Q_{i},\ q_{i}<1\) for \(1\leq i\leq n\), then the joint stationary distribution of the ARNN's state is:
\[\lim_{t\rightarrow\infty}Prob[K(t)=H,\ k(t)=h\ ] \tag{5}\] \[=\prod_{i=1}^{n}Q^{H_{i}}_{i}(1-Q_{i}).q^{h_{i}}_{i}(1-q_{i})\.\]
**Note:** From (5) we can see that if \(Q_{i}>q_{i}\) then:

\[\lim_{t\rightarrow\infty}\frac{Prob[K_{i}(t)>k_{i}(t)]}{Prob[k_{i}(t)>K_{i}(t)]}=\frac{Q_{i}(1-q_{i})}{q_{i}(1-Q_{i})}>1, \tag{6}\]

i.e. the ratio of these two probabilities, obtained by summing the geometric marginals in (5), exceeds one, so that the state of \(X_{i}\) is more likely to dominate that of \(Y_{i}\).
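The ratio in (6) follows by summing the two independent geometric marginals in (5); a brief numerical check of this identity (a sketch added here for illustration, with arbitrary values of \(Q_{i}\) and \(q_{i}\)):

```python
Q, q = 0.7, 0.4     # illustrative values with Q > q
N = 400             # truncation level for the geometric tails

pK = [Q ** H * (1 - Q) for H in range(N)]   # P[K_i = H]
pk = [q ** h * (1 - q) for h in range(N)]   # P[k_i = h]

p_K_gt_k = sum(pK[H] * pk[h] for H in range(N) for h in range(H))
p_k_gt_K = sum(pk[h] * pK[H] for h in range(N) for H in range(h))

print(p_K_gt_k / p_k_gt_K)           # ~ 3.5
print(Q * (1 - q) / (q * (1 - Q)))   # = 3.5, matching (6)
```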
To simplify the learning algorithm, we restrict the weights in the following manner:
\[W=W^{+}_{ij}+W^{-}_{ij}=w^{+}_{ij}+w^{-}_{ij},\ i,j\in\{1,\dots,n\},\ i\neq j, \tag{7}\]

where \(W>0\) is a constant representing the total firing or spiking rate of any neuron \(X_{i}\) or \(Y_{i}\) towards other neurons. This restriction also avoids having weights which take very large values. We can write
the \(2n\) RNN equations (3) as:
\[Q_{i} =\frac{\Lambda_{i}+\sum_{j=1}^{n}W_{ji}^{+}Q_{j}}{\lambda_{i}+(n-1)W+\sum_{j=1}^{n}w_{ji}^{-}q_{j}}, \tag{8}\] \[q_{i} =\frac{\lambda_{i}+\sum_{j=1}^{n}w_{ji}^{+}q_{j}}{\Lambda_{i}+(n-1)W+\sum_{j=1}^{n}W_{ji}^{-}Q_{j}}.\]
On the other hand, the learning algorithm detailed in A computes the values of \(W_{ij}^{+},~{}w_{ij}^{+}\) for all the neuron pairs \(i,j,~{}i\neq j\) so as to minimize an error based cost function \(\mathbf{E}\) using an appropriate training dataset such as Kitsune [78; 40].
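Once the weights are given, the coupled equations (8) can be solved by simple fixed-point iteration. The sketch below (Python/NumPy, written for this note with arbitrary illustrative weights; the actual weight training is the gradient algorithm of A and is not reproduced here) iterates (8) and forms the ratio \(L_{i}=Q_{i}(1-q_{i})/(q_{i}(1-Q_{i}))\) that is later compared with a decision threshold.

```python
import numpy as np

def arnn_fixed_point(Lam, lam, Wp, wp, W, n_iter=200):
    """Iterate the 2n coupled ARNN equations (8) to a fixed point.

    Lam, lam : external excitatory/inhibitory LAD inputs (length n)
    Wp, wp   : excitatory weight matrices W+_{ij}, w+_{ij} (n x n, zero diagonal)
    W        : total firing rate, so W-_{ij} = W - W+_{ij} by restriction (7)
    """
    n = len(Lam)
    Wm = W - Wp
    np.fill_diagonal(Wm, 0.0)   # W-_{ij}, zero on the diagonal
    wm = W - wp
    np.fill_diagonal(wm, 0.0)   # w-_{ij}
    Q = np.full(n, 0.5)
    q = np.full(n, 0.5)
    for _ in range(n_iter):
        Q = (Lam + Wp.T @ Q) / (lam + (n - 1) * W + wm.T @ q)
        q = (lam + wp.T @ q) / (Lam + (n - 1) * W + Wm.T @ Q)
        Q = np.clip(Q, 0.0, 0.999)   # guard against transient overshoot above 1
        q = np.clip(q, 0.0, 0.999)
    return Q, q

# Toy example with n = 4 nodes and arbitrary (untrained) weights
rng = np.random.default_rng(0)
n, W = 4, 1.0
Wp = rng.uniform(0.0, W, (n, n)); np.fill_diagonal(Wp, 0.0)
wp = rng.uniform(0.0, W, (n, n)); np.fill_diagonal(wp, 0.0)
Lam = rng.uniform(0.0, 1.0, n)
Q, q = arnn_fixed_point(Lam, 1.0 - Lam, Wp, wp, W)
L = Q * (1 - q) / (q * (1 - Q))   # ratio compared with the threshold gamma
```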
## 4 Network Learning and Accuracy of Botnet Attack Prediction
The data we use concerns the MIRAI Botnet Attack [79], documented in the Kitsune dataset [78; 40], which contains a total of \(764,137\) individual packets. The dataset contains \(107\) network nodes identified by IP addresses, and a given node may be both a source node for some packets, and a destination for other packets.
This publicly available dataset, which is already partially processed (by the providers of the dataset) contains the ground-truth that the providers held, regarding the packets which are Botnet attack packets, and those which are not attack packets. Thus each packet is labeled as either an "attack" (\(a=1\)) or a "normal" packet (\(a=0\)), so that the Kitsune dataset already contains the "ground truth". Since the dataset is quite large, some parts of the data may be used for training the attack detection algorithms, while other parts may be used for evaluating their effectiveness.
The data items in this dataset are the individual packets, where each packet can be denoted as \(pk(t,s,d,a)\), where:
* \(t\) is a time-stamp indicating when the packet is sent,
* \(s,d\) are the source and destination nodes of the packet,
* \(a\) is the binary variable with \(a=1\) for a packet that has been identified as an attack packet, and \(a=0\) for a packet that has been identified as a benign non-attack packet.
It is interesting to note that this dataset is time varying. The obvious reason is that in the course of a Botnet attack the number of nodes that are compromised increases with the number of attacks which occur, and the number of attack packets obviously also increases as the number of compromised nodes increases. The Kitsune dataset does not incorporate the consequences of attack detection. Indeed if an attack is detected and the compromised nodes are progressively blacklisted, then the number of attack packets and the number of nodes that are compromised, may eventually decrease, but this is not incorporated in the Kitsune dataset.
Thus, since this data is based on an attack that is going unchecked, the initial part of the data contains hardly any attack packets, while the latter part contains many more attack packets, as would be expected. Whether a given node is compromised or not also depends on the amount of traffic it receives from compromised nodes, as this traffic may contain attack packets capable of compromising the destination node. Thus detecting whether a network node is compromised or not, does not only depend on its own behaviour, i.e. on whether it sends attack packets, but also on whether it has received traffic from other compromised nodes.
### Processing the MIRAI Botnet Data
These \(764,137\) packets in [40] cover a consecutive time period of roughly \(7137\) seconds (approximately \(2\) hours). Thus we aggregate the data in a more compact form by grouping packets into successive \(10\)-second time slots whose length is denoted by \(\tau\). The choice of \(\tau=10~{}secs\) is based on the need to have a significant number (\(\approx 713\)) of time slots, and to have a statistically significant number of packets in each slot. Since we have \(107\) nodes, the average number of packets per node in each slot is also approximately \(10\).
The packets within each successive slot are thus grouped into "buckets", where \(B^{l}\) denotes the \(l-th\) bucket, i.e. the set of packets whose time stamp lies between \((l-1)\tau\) and \(l\tau\) seconds:
\[B^{l}=\{pk(t,s,d,a),~{}(l-1)\tau\leq t<l\tau\},~{}\tau=10~{}secs.\]
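A minimal sketch of this slotting step (written for this note; the column names `t`, `s`, `d`, `a` are assumptions made for illustration and do not come from the Kitsune file format):

```python
import pandas as pd

TAU = 10.0  # slot length tau in seconds

# One row per packet pk(t, s, d, a); tiny illustrative sample
pkts = pd.DataFrame({
    "t": [0.5, 3.2, 12.1, 25.7],        # time stamp in seconds
    "s": ["ip1", "ip2", "ip1", "ip3"],  # source node
    "d": ["ip2", "ip1", "ip3", "ip1"],  # destination node
    "a": [0, 0, 1, 1],                  # 1 = attack packet, 0 = benign
})

# Bucket B^l collects the packets with (l-1)*tau <= t < l*tau
pkts["slot"] = (pkts["t"] // TAU).astype(int) + 1
buckets = {l: g for l, g in pkts.groupby("slot")}
print({l: len(g) for l, g in buckets.items()})   # {1: 2, 2: 1, 3: 1}
```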
Let \(S^{l}(s)\) denote the set of packets that have been transmitted by node \(s\)_until the end of the \(l-th\) time
slot_:
\[S^{l}(s)=\{pk(t,s,d,a),\ \forall d,\ \forall a,\ 0\leq t<l\tau\}, \tag{9}\]
and, let \(R^{l}(d)\) denote the set of packets that have been received by node \(d\) in the same time frame:
\[R^{l}(d)=\{pk(t,s,d,a),\ \forall s,\ \forall a,\ 0\leq t<l\tau\}. \tag{10}\]
Then \(A^{l}_{d}\) is the _attack ratio_ which represents the ratio of attack packets, among all packets received by node \(d\) at the end of \(l-th\) slot and is computed as
\[If\ \ |R^{l}(d)|>0:\] \[A^{l}_{d}=\frac{|\{pk(t,s,d,1),\ \forall s,\ 0\leq t<l\tau\}|}{|R^{l}(d)|}, \tag{11}\] \[Else\ \ A^{l}_{d}=0,\]
while \(K^{l}_{s}\) is the _proportion of compromised packets_ which is the ratio of attack packets sent by node \(s\) at the end of the same slot, given by:
\[If\ \ |S^{l}(s)|>0:\] \[K^{l}_{s}=\frac{|\{pk(t,s,d,1),\ \forall d,\ 0\leq t<l\tau\}|}{|S^{l}(s)|}, \tag{12}\] \[Else\ \ K^{l}_{s}=0.\]
Since any node \(i\) may be a source or destination, or both a source and destination, of packets, \(A^{l}_{i}\) and \(K^{l}_{i}\) are, respectively, the input and output ground truth data regarding which nodes are attacked, and which nodes are compromised at the end of \(l-th\) time slot.
In addition, for each node \(i\), we define the binary variable regarding the _ground truth_, denoted by \(G^{l}_{i}\) as:
\[G^{l}_{i}=\mathbf{1}\left[K^{l}_{i}>\Theta\right], \tag{13}\]
where \(\mathbf{1}\left[L\right]=1\) if \(L\) is true and \(0\) otherwise, where \(\Theta\in[0,1]\) is a threshold. Thus, at the end of the \(l-th\) slot, if \(G^{l}_{i}=1\) the ground truth indicates that node \(i\) has been compromised. If \(G^{l}_{i}=0\) then node \(i\) is considered not to be compromised.
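Continuing the sketch above, the cumulative quantities (11)-(13) can be computed per node as follows (again an illustrative sketch with the same assumed column names, not the authors' code):

```python
def node_ratios(pkts, l, tau=10.0, theta=0.3):
    """Return the dictionaries (A_i^l, K_i^l, G_i^l) at the end of slot l."""
    upto = pkts[pkts["t"] < l * tau]          # all packets with 0 <= t < l*tau
    nodes = set(pkts["s"]) | set(pkts["d"])
    A, K, G = {}, {}, {}
    for i in nodes:
        recv = upto[upto["d"] == i]           # R^l(i): packets received by i
        sent = upto[upto["s"] == i]           # S^l(i): packets sent by i
        A[i] = recv["a"].mean() if len(recv) else 0.0   # attack ratio (11)
        K[i] = sent["a"].mean() if len(sent) else 0.0   # compromised ratio (12)
        G[i] = int(K[i] > theta)                        # ground truth (13)
    return A, K, G

# Reuses the `pkts` frame from the earlier slotting sketch
A2, K2, G2 = node_ratios(pkts, l=2)
```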
### The ARNN Error Function \(\mathbf{E}\)
Let us call "**TrainData**" the subset of time slots used for Training the ARNN. The manner in which this subset is selected from the MIRAI dataset is detailed below. Since we wish to predict whether each of the \(n\) nodes has been compromised given the data about attacks, the error function to be minimized by the learning algorithm takes the form:
\[\mathbf{E}=\frac{1}{2}\sum_{l\in\textbf{TrainData}}\sum_{i=1}^{n} \left[(Q^{l}_{i}(A^{l}_{i})-K^{l}_{i})^{2}\right.\\ +\left(q^{l}_{i}(1-A^{l}_{i})-(1-K^{l}_{i})\right)^{2}\right], \tag{14}\]
where the functions \(Q^{l}_{i}(A^{l}_{i})\) and \(q^{l}_{i}(1-A^{l}_{i})\) are computed by the ARNN using equation (8) as follows:
\[Q^{l}_{i}(A^{l}_{i})=\] \[\frac{A^{l}_{i}+\sum_{j=1}^{n}W^{+}_{ji}Q^{l}_{j}(A^{l}_{i})}{(1- A^{l}_{i})+(n-1)W+\sum_{j=1}^{n}w^{-}_{ji}q^{l}_{j}(1-A^{l}_{i})},\] \[q^{l}_{i}(1-A^{l}_{i})\ =\] \[\frac{(1-A^{l}_{i})+\sum_{j=1}^{n}w^{+}_{ji}q^{l}_{j}(1-A^{l}_{i} )}{A^{l}_{i}+(n-1)W+\sum_{j=1}^{n}W^{-}_{ji}Q^{l}_{j}(A^{l}_{i})}.\]
For each node \(i\) and slot \(l\), we define the **binary decision** at the output of the ARNN, denoted by the binary variable \(Z_{i}^{l}\), as
\[Z^{l}_{i}=\mathbf{1}\left[L^{l}_{i}=\frac{Q^{l}_{i}(1-q^{l}_{i})}{q^{l}_{i}(1 -Q^{l}_{i})}>\gamma\right], \tag{15}\]
where \(\gamma\in[0,\infty)\) is a "decision threshold". Thus, at the \(l\)-th slot, if \(Z^{l}_{i}=1\) the ARNN indicates that node \(i\) has been compromised, while if \(Z^{l}_{i}=0\) then the ARNN considers that node \(i\) is not compromised.
Then, we perform two distinct experiments:
#### 4.2.1 Experiment I: Offline Training of ARNN
To construct a balanced training dataset **TrainData** for the ARNN, the sequence of slots was scanned chronologically from the beginning of the whole MIRAI dataset until the first slot was found that contained some nodes that had been compromised. Specifically, this was the \(l^{*}\)-th slot with \(l^{*}=445\) in the MIRAI dataset.
Then, the training set **TrainData** with a total of 25 time slots was constructed as follows:
\[\textbf{TrainData}=\\ \{(A^{l}_{i},K^{l}_{i}),\ l=l^{*}-12,...,l^{*}+12;\ i=1,...,n\},\]
of which the first 12 have very few attack packets, while the following 13 all contain a significant number of attack packets.
The test set, denoted by **TestData**, is composed of _all the remaining_ time slots which have not been used for training the ARNN:
\[\textbf{TestData}=\] \[\{(A_{i}^{l},K_{i}^{l}),\ l=\{1,...,l^{*}-13\}\cup\{l^{*}+13,...,713\};\] \[i=1,...,n\}.\]
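A sketch of this split (illustrative Python, written for this note; `A` and `K` are assumed to be indexable by slot, each entry holding the per-node values):

```python
def offline_split(A, K, l_star=445, half_width=12, n_slots=713):
    """Experiment I: balanced 25-slot training window around l*, rest for testing."""
    train_slots = set(range(l_star - half_width, l_star + half_width + 1))
    train = [(A[l], K[l]) for l in sorted(train_slots)]
    test = [(A[l], K[l]) for l in range(1, n_slots + 1) if l not in train_slots]
    return train, test
```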
#### 4.2.2 Experiment II: Online (Incremental) Training of ARNN
In this part, ARNN's training took place online, along with testing, which represents the case where no offline training set is available. To this end, it was used for prediction on every slot \(l\) and also, if \(mod(l,6)=0\), it was trained at the end of slot \(l\). That is, we perform testing on 10-second slots and training on 1-minute windows.
Accordingly, on each "training slot" \(l\) for which \(mod(l,6)=0\), the training set **TrainData** for incremental learning was constructed as follows:
\[\textbf{TrainData}=\{(A_{i}^{l^{\prime}},K_{i}^{l^{\prime}}),\ l^{\prime}=l-5,...,l;\ i=1,...,n\}.\]
Recall that **TrainData** is updated for each \(l\) such that \(mod(l,6)=0\), so that the ARNN's weights (\(W_{ij}^{+}\) and \(w_{ij}^{+}\)) are updated based on **TrainData** at the end of slot \(l\), without reinitializing the weights.
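The online procedure can be sketched as the following loop (illustrative; `arnn.predict` and `arnn.train_incremental` are hypothetical method names standing in for the fixed-point evaluation of (8) and the gradient algorithm of A):

```python
def online_loop(arnn, A, K, n_slots=713, window=6):
    """Experiment II: predict on every 10 s slot, retrain on 1-minute windows.

    The weights are never reinitialized between windows, so the training
    at the end of each window is incremental.
    """
    predictions = {}
    for l in range(1, n_slots + 1):
        predictions[l] = arnn.predict(A[l])       # decisions Z_i^l for all nodes
        if l % window == 0:
            recent = [(A[lp], K[lp]) for lp in range(l - window + 1, l + 1)]
            arnn.train_incremental(recent)        # update W+_{ij} and w+_{ij}
    return predictions
```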
### Other Machine Learning Models Used for Comparison
For both Experiments I and II, the performance of the ARNN is also compared with those obtained with two well-known ML models: the Multi-Layer Perceptron (MLP) and the Long-Short Term Memory (LSTM) neural network. We now briefly present the specific architectures of these models which we use during our experimental work, and Figure 2 displays the inputs and outputs which are common to the ML models.
Then, based on these input-output sets, each ML model is used as follows:
* **MLP**, which is a feedforward (fully-connected) neural network, is comprised of three hidden layers and an output layer, where there are \(n\) neurons at each layer. A sigmoidal activation function is used for each neuron in the network.
* **LSTM**, which is a recurrent neural network, is comprised of an LSTM layer, two hidden layers and an output layer, where there are \(n\) LSTM units or neurons at each layer. A sigmoidal activation function is used for each neuron in the network. (A sketch of both baselines is given below.)
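A hedged Keras sketch of the two baselines as described above (layer counts, widths and sigmoid activations follow the text; the input shapes, loss and optimizer are assumptions made for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(n):
    # Three hidden layers and an output layer, n sigmoid neurons each
    return keras.Sequential([
        keras.Input(shape=(n,)),
        layers.Dense(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
    ])

def build_lstm(n):
    # One LSTM layer, two hidden layers and an output layer, n units each
    return keras.Sequential([
        keras.Input(shape=(1, n)),   # one time step per slot (an assumption)
        layers.LSTM(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
        layers.Dense(n, activation="sigmoid"),
    ])

mlp = build_mlp(107)
mlp.compile(optimizer="adam", loss="mse")   # loss and optimizer are assumptions
```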
## 5 Experimental Results
We now evaluate the performance of the ARNN model and compare it with the performance of some existing techniques for Experiment I and Experiment II, respectively. Note that we set the learning rate \(\eta=0.1\) in the algorithm of A.
### Experiment I - Offline Training of the ARNN
We set \(\Theta=0.3\) and \(0.96\leq\gamma\leq 1\), and summarize the statistics of Accuracy, True Negative Rate (TNR) and True Positive Rate (TPR) performances of ARNN, which are presented in detail in Figures 4, 5, and 6, respectively. Figure 3 displays a box-plot that shows the statistics over all the IP Addresses. These results show that ARNN achieves a high performance with very few outliers with regard to Accuracy, TNR, and TPR. The median accuracy is about 92% while the first quartile is at 87%; that is, accuracy is above 87% for 75% of IP addresses. The median of TNR is almost 100%; that is, there are almost no false alarms (TNR= 100%) for more than 50% of IP addresses. Also, the median of TPR equals 100% and the first quartile is about 62%. Thus the TPR equals 100% for more than 50% of IP addresses while it is lower than 62% for only less than 25% of addresses.
Figure 4 displays the average decision accuracy for each IP Address \(i\in\{1,\ldots,107\}\). The results
Figure 2: High-level architecture that shows the inputs and outputs at each slot \(l\) for the ML techniques that are used in the comparison with the ARNN, where \(\hat{K}_{i}^{l}\) denotes the predicted compromised ratio of IP Address \(i\) at slot \(l\) by any considered ML model.
in this figure show that the accuracy of ARNN is above 95% for 50% of the IP Addresses while it is between 62% and 80% for only 20% of addresses and does not decrease below 62%. Next, Figure 5 presents the average percentage TNR of ARNN for each IP Address. The results in this figure show that TNR is above 95% for 59% of all IP Addresses, and it is between 62% and 80% for 15% of addresses. Lastly, Figure 6 displays the percentage average TPR for the 39 IP Addresses which are considered compromised at least once in the ground truth. The results in this figure show that TPR is greater than 95% for 64% of IP addresses while it is above 90% for more than 74% of the addresses.
### Experiment II - Online (Incremental) Training of the ARNN
Having set \(\Theta=0.3\) and \(0.96\leq\gamma\leq 1\), we obtain the Accuracy, TNR, and TPR of ARNN with online training shown in Figure 7 in the form of box-plots. In this figure, we see that median accuracy equals 92% while the first quartile equals 87%. That is, the accuracy is above 87% for 75% of all IP Addresses. The TNR is above 99% for at least 50% of IP Addresses. The median TPR equals 100%; that is, at least 50% (exactly 62%) of IP Addresses are 100% accurately identified as compromised. When the results in Experiment II in Figure 7 are compared with those of Experiment I of Figure 3, we see that TPR increases slightly with online training and TNR remains almost the same. In addition, recall that online training is simpler since it does not require data collection, as offline training does.
Figure 8 presents the average accuracy of ARNN
Figure 4: Evaluation of the average accuracy over all packets of each IP Address \(i\in\{1,\ldots,107\}\) in **TestData**. The accuracy is computed by comparing the binary decision in the ground truth \(G_{i}^{l}\) and the binary decision of ARNN \(Z_{i}^{l}\).
Figure 5: Evaluation of the average percentage TNR over all packets of each IP Address \(i\in\{1,\ldots,107\}\) in **TestData**. For each \(i\), TNR is computed by comparing \(G_{i}^{l}\) and \(Z_{i}^{l}\) for the values of \(l\) where \(G_{i}^{l}=0\).
Figure 3: Box-plots of the Accuracy, TNR and TPR performance of ARNN over IP Addresses, where each box-plot shows the calculated statistics (e.g. median) based on the results presented in Figures 4, 5, and 6, respectively.
Figure 6: Evaluation of the average percentage TPR over all packets of each IP Address \(i\in\{1,\ldots,107\}\) in **TestData**. For each \(i\), TPR is computed by comparing \(G_{i}^{l}\) and \(Z_{i}^{l}\) for the values of \(l\) where \(G_{i}^{l}=1\). Note that if \(G_{i}^{l}=0\) for an IP Address \(i\) for every \(l\) in **TestData** (that is, the ground truth indicates that IP Address \(i\) has not been compromised within the observation period of the dataset), TPR does not exist for \(i\). Accordingly, in the considered dataset TPR exists for 39 IP Addresses.
for each IP Address \(i\). The results in this figure show that the accuracy of ARNN is above 95% for 50% of IP Addresses while it is between 62% and 80% for only 20% of addresses and does not decrease below 62%. Next, we present the average percentage TNR in Figure 9, and show that the TNR is above 95% for 59% of IP Addresses. Moreover, Figure 10 displays the average percentage TPR for individual IP Addresses, where for IP Address \(i\), TPR is presented only if \(G_{i}^{l}=1\) for at least a single value of \(l\). The results in this figure show that percentage TPR is greater than 95% for 72% of IP Addresses, while TPR under offline training is shown (in Fig. 6) to be above 95% for 64% of IP Addresses. Hence, one may observe that ARNN achieves significantly higher TPR when it is trained online.
### Performance Comparison
We now compare the performance of ARNN with that of the MLP and LSTM neural networks with respect to the mean of each of Accuracy, TNR, TPR, and F1 Score. The traditional F-measure or \(F_{1}\) score is computed as
\[F_{1}=2\frac{Precision.Recall}{Precision+Recall}=\frac{TP}{TP+\frac{1}{2}(FP+FN)}. \tag{16}\]
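These four measures can be computed directly from the per-slot binary labels, as in the following sketch (written for this note):

```python
def binary_metrics(G, Z):
    """Accuracy, TNR, TPR and F1 from ground-truth labels G and decisions Z."""
    tp = sum(1 for g, z in zip(G, Z) if g == 1 and z == 1)
    tn = sum(1 for g, z in zip(G, Z) if g == 0 and z == 0)
    fp = sum(1 for g, z in zip(G, Z) if g == 0 and z == 1)
    fn = sum(1 for g, z in zip(G, Z) if g == 1 and z == 0)
    acc = (tp + tn) / len(G)
    tnr = tn / (tn + fp) if tn + fp else float("nan")
    tpr = tp / (tp + fn) if tp + fn else float("nan")  # undefined if never compromised
    f1 = tp / (tp + 0.5 * (fp + fn)) if tp + fp + fn else float("nan")  # as in (16)
    return acc, tnr, tpr, f1

print(binary_metrics([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]))  # (0.8, 1.0, 0.667, 0.8)
```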
First, Figure 11 presents the performance comparison of neural network models for Experiment I (offline training), where the results show that the ARNN model significantly outperforms all of the other techniques with respect
Figure 8: Evaluation of the average accuracy over all packets of each IP Address \(i\in\{1,\ldots,107\}\). The accuracy is computed by comparing the binary decision in the ground truth \(G_{i}^{l}\) and the binary decision of ARNN \(Z_{i}^{l}\).
Figure 10: Evaluation of the average percentage TPR over all packets of each IP Address \(i\in\{1,\ldots,107\}\) in **TestData**. For each \(i\), TPR is computed by comparing \(G_{i}^{l}\) and \(Z_{i}^{l}\) for the values of \(l\) where \(G_{i}^{l}=1\). Note that if \(G_{i}^{l}=0\) for an IP Address \(i\) for every \(l\) (that is, the ground truth indicates that IP Address \(i\) has not been compromised within the observation period of the dataset), TPR does not exist for \(i\). Accordingly, in the considered dataset TPR exists for 39 IP Addresses.
Figure 7: Box-plots of the Accuracy, TNR and TPR performance of ARNN over all IP Addresses, where each box-plot shows the calculated statistics (e.g. median) based on the results presented in Figures 8, 9, and 10, respectively
Figure 9: Evaluation of the average percentage TNR over all packets of each IP Address \(i\in\{1,\ldots,107\}\). For each \(i\), TNR is computed by comparing \(G_{i}^{l}\) and \(Z_{i}^{l}\) for the values of \(l\) where \(G_{i}^{l}=0\).
to all Accuracy, F1 Score, TNR, and TPR. In addition, we also see that LSTM is more successful than MLP for identifying uncompromised nodes (Figure 11 (bottom left)) while MLP identifies the compromised nodes more successfully than LSTM (Figure 11 (bottom right)). However, ARNN outperforms LSTM by 24% with respect to TNR and MLP by 13% with respect to TPR.
Then, in Figure 12, the comparison of the neural network models for Experiment II (online training) with respect to the mean of each of Accuracy, F1 Score, TNR and TPR is presented. The results in this figure show that ARNN significantly outperforms both MLP and LSTM with respect to any measure by at least 27%. Moreover, we see that although the overall performances of both MLP and LSTM have significantly decreased under online training compared with offline training, the performance of ARNN is almost the same under both online and offline training.
### Training and Execution Times
Finally, in Table 1, we present the average training and execution time. Note that these results are collected on a workstation with 32 GB RAM and an AMD 3.7 GHz (Ryzen 7 3700X) processor. The second row of this table displays the average training time that has been spent for a single data sample in a single training step. Thus, during the discussion of the results on training time, we shall calculate the total training time during Experiment I and that for one training window during Experiment II. One should note that both the number of inputs and the number of outputs of ARNN are twice those of MLP and LSTM. One should also note that the implementation of ARNN can be optimized to achieve lower training and execution time, and both MLP and LSTM have been implemented by using the Keras library in Python.
During Experiment I, ARNN, MLP and LSTM have been trained on 25 samples for 20 epochs, 1000 epochs and 1000 epochs, respectively. Accordingly, the total training times of these models are \(40.02\times 25\times 20=20010\ s\), \(3.82\times 10^{-4}\times 25\times 1000=9.55\ s\), and \(0.01\times 25\times 1000=250\ s\), respectively. We see that the training time of ARNN is much higher than those of the other models. However, ARNN can still be selected as the identification method, since the training of all models in Experiment I is performed offline and ARNN achieves significantly higher accuracy than MLP and LSTM.
During Experiment II, all three models have been trained online on 1 minute windows (6 samples) for 3 epochs, 100 epochs and 100 epochs respectively. Accordingly, the total training times of these models for each window are \(40.02\times 6\times 3=720.36\ s\), \(3.82\times 10^{-4}\times 6\times 100=0.23\ s\), and \(0.01\times 6\times 100=6\ s\), respectively. Although the training time results show that MLP and LSTM are suitable for training once every 1 minute, the performance of either MLP or LSTM has been shown not to be acceptable for practical usage. On the other hand, ARNN with its current implementation achieves high accuracy but can be trained once in 720.36 seconds (\(\approx\)12 minutes) on 1 minute of data.
Furthermore, the third row of this table displays the average execution time that has been spent to make a prediction for a single sample. The results in this row show that the execution time of ARNN is one order of magnitude higher than the execution times of MLP and LSTM.
## 6 Conclusions
In a network of IP addresses, when an individual node is attacked by a Botnet and becomes compromised, it can then compromise other network nodes and turn them into attackers. Thus attacks may propagate across the system and affect other nodes and IP addresses. There is a large prior literature regarding Botnet attacks, but most of the work has addressed attacks against a specific network node, while the collective detection of Botnet attacks has received less attention.
Thus in this paper we have developed an ML-based decision method that identifies all the nodes of a given interconnected set of nodes that are
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & ARNN & MLP & LSTM \\ \hline Training (\(s\)) & 40.02 & \(3.82\times 10^{-4}\) & 0.01 \\ \hline Execution (\(ms\)) & 8.4 & 0.17 & 0.78 \\ \hline \end{tabular}
\end{table}
Table 1: Average Training Time per Sample per Step and Average Execution Time per Sample of ARNN, MLP and LSTM
compromised by a Botnet attack. The approach is based on designing an Associated Random Neural Network (ARNN) that incorporates two connected and recurrent Random Neural Networks (RNNs), where
each RNN offers a contradictory recommendation regarding whether any one of the IP addresses or network nodes in the system is compromised. The final decision is taken by the ARNN based on which of the two recommendations for each of the nodes appears to be stronger. We have also developed a gradient-based learning algorithm for the ARNN, which learns through linear-algebraic operations on the network weights. If the system is composed of \(n\) IP addresses or nodes, then the resulting learning algorithm is of time complexity \(O(n^{3})\), since all computations are based on the inversion of \(n\times n\) matrices.
In this paper, the ARNN and its learning algorithm have been described and tested on real Botnet data involving some \(760,000\) packets. The experimental results show that the ARNN provides very accurate predictions of the order of \(92\%\) for a \(107\) node network. For comparison purposes, we have also implemented and tested two well-known ML approaches on the same training and testing datasets, showing that the ARNN provides significantly better accuracy.
In future work, we plan to develop a generalization of the ARNN for multiple-valued, rather than binary, collective decision making and classification in other significant areas with datasets that contain inter-related or inter-dependent data items, such as social networks and the analysis of epidemics.
## Appendix A: ARNN Learning Algorithm
In this Appendix, we focus on the ARNN's learning algorithm, recalling that the ARNN is a specific ML structure based on the Random Neural Network (RNN), which has been proven to be an effective approximator in the sense of [80] for continuous and bounded functions [81]. It was generalized to G-Networks in the framework of queueing theory [82; 83; 84]. Gradient learning for the RNN was initially designed for both feedforward and recurrent (feedback) RNNs [77], and other RNN learning algorithms have also been proposed [37; 85; 72].
Prior to running the learning algorithm, the ARNN parameters are set to "neutral" values which express the fact that _initially_ the ARNN does not know whether any of the network nodes are compromised. To this effect, we:
* Initialize all the weights between \(X_{i}\) and \(Y_{i}\) to zero: \(W_{ii}^{+}=w_{ii}^{+}=W_{ii}^{-}=w_{ii}^{-}=0\).
* Set \(W_{ij}^{+}=W_{ij}^{-}=w_{ij}^{+}=w_{ij}^{-}=0.5W\) for \(i\neq j\), and choose \(Q_{i}=q_{i}=0.5\) to represent the perfect ignorance of the ARNN.
* Set the external inputs of the ARNN to \(\Lambda_{i}=\lambda_{i}=L(n-1),\ L>0\), so that the _external_ excitatory and inhibitory inputs are all initially set to an identical value.
* Keep \(W\) constant in the learning procedure, and only learn \(W_{ij}^{+},\ w_{ij}^{+}\) for each \(i\neq j\).
* Accordingly, (8) becomes \[q_{i}=Q_{i}=0.5=\frac{L(n-1)+0.25(n-1)W}{L(n-1)+(n-1)W+0.25(n-1)W},\] (A.1) i.e., \[0.5=\frac{L+0.25W}{L+W+0.25W},\quad\text{yielding}\quad L=0.75W\,.\] (A.2)
* Taking \(W=1\) and \(L=0.75\), all the neuron states are initialized with the values \(Q_{i}=q_{i}=0.5,\ i=1,\ldots,n\); a code sketch of this initialization is given below.
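For concreteness, this neutral initialization can be written in a few lines of NumPy. The following is a minimal sketch; the function and variable names are illustrative and not taken from the authors' implementation:

```python
import numpy as np

def init_arnn(n, W=1.0):
    """Neutral ARNN initialization for n nodes, following the steps above."""
    L = 0.75 * W                     # from Eq. (A.2)
    Wp = np.full((n, n), 0.5 * W)    # W_ij^+ = 0.5 W for i != j
    wp = np.full((n, n), 0.5 * W)    # w_ij^+ = 0.5 W for i != j
    np.fill_diagonal(Wp, 0.0)        # weights between X_i and Y_i are zero
    np.fill_diagonal(wp, 0.0)
    Lam = np.full(n, L * (n - 1))    # external excitatory inputs Lambda_i
    lam = np.full(n, L * (n - 1))    # external inhibitory inputs lambda_i
    Q = np.full(n, 0.5)              # neuron states: perfect ignorance
    q = np.full(n, 0.5)
    return Wp, wp, Lam, lam, Q, q
```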
Now, for any given value of the data, we use gradient descent to update the ARNN weights so as to search for a local minimum of the error \(\mathbf{E}\). We drop the notation regarding the \(l\)-th data item for simplicity, and compute \(\mathbf{E}\)'s derivative with respect to each of the ARNN weights:
\[E^{U,V}\equiv\frac{\partial\mathbf{E}}{\partial W_{U,V}^{+}}=\sum_{i=1}^{n}\left[(Q_{i}-K_{i})\,Q_{i}^{U,V}+(q_{i}-1+K_{i})\,q_{i}^{U,V}\right],\] (A.3)
\[E^{u,v}\equiv\frac{\partial\mathbf{E}}{\partial w_{u,v}^{+}}=\sum_{i=1}^{n}\left[(Q_{i}-K_{i})\,Q_{i}^{u,v}+(q_{i}-1+K_{i})\,q_{i}^{u,v}\right],\] (A.4)
where the derivatives of the ARNN state values are denoted:
\[Q_{i}^{U,V}=\frac{\partial Q_{i}}{\partial W_{U,V}^{+}},\quad Q_{i}^{u,v}=\frac{\partial Q_{i}}{\partial w_{u,v}^{+}},\quad q_{i}^{U,V}=\frac{\partial q_{i}}{\partial W_{U,V}^{+}},\quad q_{i}^{u,v}=\frac{\partial q_{i}}{\partial w_{u,v}^{+}}\,.\]
We can then use the expressions (A.3) and (A.4) to update the ARNN weights iteratively for successive values of \(d=1,\ldots,|\textbf{TrainData}|\), using the gradient descent rule with some learning rate \(\eta>0\):
\[W_{new,U,V}^{+}\gets W_{U,V}^{+}-\eta\,E^{U,V}|_{(v^{d},v^{d})},\qquad w_{new,u,v}^{+}\gets w_{u,v}^{+}-\eta\,E^{u,v}|_{(v^{d},v^{d})}.\] (A.5)
### Derivatives of the ARNN State Probabilities
Now consider the ARNN with generic inputs \(\Lambda=(\Lambda_{1},\ldots,\Lambda_{n})\) and \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\). In order to obtain the derivatives needed for the gradient descent expression (A.5), we use (8) to write:
\[Q_{i}^{U,V}=\frac{Q_{U}}{D_{V}}1[i=V]+\sum_{j=1}^{n}\frac{W_{ji}^{+}}{D_{i}}\,Q_{j}^{U,V}-\sum_{j=1}^{n}\frac{Q_{i}[W-w_{ji}^{+}]}{D_{i}}\,q_{j}^{U,V},\] (A.6)
\[q_{i}^{U,V}=\frac{q_{U}}{d_{V}}1[i=V]+\sum_{j=1}^{n}\frac{w_{ji}^{+}}{d_{i}}\,q_{j}^{U,V}-\sum_{j=1}^{n}\frac{q_{i}[W-W_{ji}^{+}]}{d_{i}}\,Q_{j}^{U,V},\] (A.7)
where \(D_{i}\) and \(d_{i}\) are the denominators of \(Q_{i}\) and \(q_{i}\) respectively, in (8):
\[D_{i}=\Lambda_{i}+\sum_{j=1,j\neq i}^{n}W+\sum_{j=1,j\neq i}^{n}[W-w_{ji}^{+}]\cdot q_{j},\qquad d_{i}=\lambda_{i}+\sum_{j=1,j\neq i}^{n}W+\sum_{j=1,j\neq i}^{n}[W-W_{ji}^{+}]\cdot Q_{j}.\] (A.8)
Define the vectors \(Q=(Q_{1},\ldots,Q_{n})\) and \(q=(q_{1},\ldots,q_{n})\) and the corresponding vectors of derivatives \(Q^{U,V}=(Q_{1}^{U,V},\ldots,Q_{n}^{U,V})\) and \(q^{U,V}=(q_{1}^{U,V},\ldots,q_{n}^{U,V})\). Similarly we define the \(n\times n\) matrices:
\[B^{+}=\{\frac{W_{ij}^{+}}{D_{j}}\},\ C=\{\frac{Q_{j}[W-w_{ij}^{+}]}{D_{j}}\},\] (A.9) \[F^{+}=\{\frac{w_{ij}^{+}}{d_{j}}\},\ G=\{\frac{q_{j}[W-W_{ij}^{+} ]}{d_{j}}\}.\]
We use the vector \(\delta_{V}\) whose elements are zero everywhere, except in position \(V\) where the value is 1, and write (A.6) and (A.7) in vector form:
\[Q^{U,V}=B^{+}Q^{U,V}-Cq^{U,V}+\frac{Q_{U}}{D_{V}}\,\delta_{V},\]
\[q^{U,V}=F^{+}q^{U,V}-GQ^{U,V}+\frac{q_{U}}{d_{V}}\,\delta_{V}=\left[-GQ^{U,V}+\frac{q_{U}}{d_{V}}\,\delta_{V}\right][I-F^{+}]^{-1},\] (A.10)
which yields:
\[Q^{U,V}=B^{+}Q^{U,V}+\left[CGQ^{U,V}-\frac{q_{U}}{d_{V}}\,C\delta_{V}\right][I-F^{+}]^{-1}+\frac{Q_{U}}{D_{V}}\,\delta_{V},\]
and hence:
\[Q^{U,V}=\left\{-\frac{q_{U}}{d_{V}}\,C\delta_{V}[I-F^{+}]^{-1}+\frac{Q_{U}}{D_{V}}\,\delta_{V}\right\}\left\{I-B^{+}-CG[I-F^{+}]^{-1}\right\}^{-1}.\] (A.11)
Also define the matrices:
\[B_{*}^{+}=\{\frac{w_{ij}^{+}}{d_{j}}\},\ C_{*}=\{\frac{q_{j}[W-W_{ ij}^{+}]}{d_{j}}\},\] (A.12) \[F_{*}^{+}=\{\frac{W_{ij}^{+}}{D_{j}}\},\ G_{*}=\{\frac{Q_{j}[W-w_ {ij}^{+}]}{D_{j}}\}.\]
Since \(Q^{U,V}\) and \(q^{u,v}\) are symmetric with respect to each other, as are \(Q^{u,v}\) and \(q^{U,V}\), we also obtain:
\[q^{u,v}=\left\{-\frac{Q_{u}}{D_{v}}\,C_{*}\delta_{v}[I-F_{*}^{+}]^{-1}+\frac{q_{u}}{d_{v}}\,\delta_{v}\right\}\left\{I-B_{*}^{+}-C_{*}G_{*}[I-F_{*}^{+}]^{-1}\right\}^{-1},\] (A.13)
and
\[Q^{u,v}=\left\{-G_{*}q^{u,v}+\frac{Q_{u}}{D_{v}}\,\delta_{v}\right\}[I-F_{*}^{+}]^{-1}\,.\] (A.14)
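To make the \(O(n^{3})\) cost of these computations concrete, the fixed-point relations (A.6)-(A.7) can equivalently be assembled into one \(2n\times 2n\) linear system and solved directly, instead of evaluating the closed forms (A.11)-(A.14). The following Python sketch does this for the derivatives with respect to a single weight \(W_{U,V}^{+}\); it is an illustration under the notation above, not the authors' implementation, and the denominators \(D_{i}\), \(d_{i}\) are assumed precomputed from (A.8):

```python
import numpy as np

def state_derivatives(U, V, Wp, wp, Q, q, D, d, W=1.0):
    """Solve (A.6)-(A.7) for Q^{U,V} and q^{U,V} as one 2n x 2n linear
    system; the single O(n^3) solve reflects the stated complexity."""
    n = len(Q)
    M1 = Wp.T / D[:, None]                       # W_ji^+ / D_i
    M2 = Q[:, None] * (W - wp.T) / D[:, None]    # Q_i [W - w_ji^+] / D_i
    M3 = q[:, None] * (W - Wp.T) / d[:, None]    # q_i [W - W_ji^+] / d_i
    M4 = wp.T / d[:, None]                       # w_ji^+ / d_i
    A = np.block([[np.eye(n) - M1, M2],
                  [M3, np.eye(n) - M4]])
    b = np.zeros(2 * n)
    b[V] = Q[U] / D[V]                           # (Q_U / D_V) 1[i = V]
    b[n + V] = q[U] / d[V]                       # (q_U / d_V) 1[i = V]
    x = np.linalg.solve(A, b)
    return x[:n], x[n:]                          # Q^{U,V}, q^{U,V}
```

The derivatives with respect to \(w_{u,v}^{+}\) follow in the same way from the starred matrices in (A.12), and a gradient-descent sweep then applies (A.3) and (A.5) weight by weight.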
This completes the computation of all the needed derivatives of the ARNN state probability vectors \(Q\) and \(q\). |
2302.02403 | Neural networks meet hyperelasticity: A guide to enforcing physics | In the present work, a hyperelastic constitutive model based on neural
networks is proposed which fulfills all common constitutive conditions by
construction, and in particular, is applicable to compressible material
behavior. Using different sets of invariants as inputs, a hyperelastic
potential is formulated as a convex neural network, thus fulfilling symmetry of
the stress tensor, objectivity, material symmetry, polyconvexity, and
thermodynamic consistency. In addition, a physically sensible stress behavior
of the model is ensured by using analytical growth terms, as well as
normalization terms which ensure the undeformed state to be stress free and
with zero energy. In particular, polyconvex, invariant-based stress
normalization terms are formulated for both isotropic and transversely
isotropic material behavior. By fulfilling all of these conditions in an exact
way, the proposed physics-augmented model combines a sound mechanical basis
with the extraordinary flexibility that neural networks offer. Thus, it
harmonizes the theory of hyperelasticity developed in the last decades with the
up-to-date techniques of machine learning. Furthermore, the non-negativity of
the hyperelastic neural network-based potentials is numerically examined by
sampling the space of admissible deformations states, which, to the best of the
authors' knowledge, is the only possibility for the considered nonlinear
compressible models. For the isotropic neural network model, the sampling space
required for that is reduced by analytical considerations. In addition, a proof
for the non-negativity of the compressible Neo-Hooke potential is presented.
The applicability of the model is demonstrated by calibrating it on data
generated with analytical potentials, which is followed by an application of
the model to finite element simulations. In addition, an adaption of the model
to noisy data is shown and its [...] | Lennart Linden, Dominik K. Klein, Karl A. Kalina, Jörg Brummund, Oliver Weeger, Markus Kästner | 2023-02-05T15:20:42Z | http://arxiv.org/abs/2302.02403v2 | # Neural networks meet hyperelasticity: A guide to enforcing physics
###### Abstract
In the present work, a hyperelastic constitutive model based on neural networks is proposed which fulfills all common constitutive conditions by construction, and in particular, is applicable to compressible material behavior. Using different sets of invariants as inputs, a hyperelastic potential is formulated as a convex neural network, thus fulfilling symmetry of the stress tensor, objectivity, material symmetry, polyconvexity, and thermodynamic consistency. In addition, a physically sensible stress behavior of the model is ensured by using analytical growth terms, as well as normalization terms which ensure the undeformed state to be stress free and with zero energy. The normalization terms are formulated for both isotropic and transversely isotropic material behavior and do not violate polyconvexity. By fulfilling all of these conditions in an exact way, the proposed physics-augmented model combines a sound mechanical basis with the extraordinary flexibility that neural networks offer. Thus, it harmonizes the theory of hyperelasticity developed in the last decades with the up-to-date techniques of machine learning. Furthermore, the non-negativity of the hyperelastic potential is numerically verified by sampling the space of admissible deformations states, which, to the best of the authors' knowledge, is the only possibility for the considered nonlinear compressible models. The applicability of the model is demonstrated by calibrating it on data generated with analytical potentials, which is followed by an application of the model to finite element simulations. In addition, an adaption of the model to noisy data is shown and its extrapolation capability is compared to models with reduced physical background. Within all numerical examples, excellent and physically meaningful predictions have been achieved with the proposed physics-augmented neural network.
**Key words:** hyperelasticity, physics-augmented neural networks, normalization, anisotropy, constitutive modeling, finite element simulation
## 1 Introduction
The mechanical principles underlying hyperelasticity were extensively discussed in the last decades, but for a long time, fulfilling them all at once could be seen as "the main open problem of the theory of material
behavior" (Truesdell and Noll [51]). For instance, while both the polyconvexity [5, 6] and objectivity condition have a sound mechanical motivation, objective strain measures easily violate the polyconvexity condition [24]. This does not mean that different constitutive conditions contradict each other - it rather shows the big challenge of fulfilling them all at the same time. With an increasing amount of restrictions a model should fulfill, this effort increases considerably, and it took sophisticated approaches to construct analytical models which fulfill all relevant conditions at the same time [8, 44]. In addition, the calibration of such models is also not a trivial task and requires a lot of knowledge [42].
To overcome the time consuming task of formulating classical constitutive models and to improve the restricted functional relationships that most of these analytical models bring along, concepts like the data-driven mechanics approach [23] or modern machine learning methods such as Gaussian process regression [11, 13] or _neural networks (NNs)_[1, 26] represent promising alternatives. For the first time, the idea of applying NNs in constitutive modeling was proposed in the early 1990s by Ghaboussi et al. [16]. However, in this early phase, mostly pure black-box approaches were used, i.e., networks that do not take into account any physical principles and therefore can only reproduce the training dataset, here consisting of stress-strain couples, well but extrapolate poorly. To remedy this weakness, a fairly new trend in NN-based constitutive modeling, and in scientific machine learning in general [41], is to include essential underlying physics in a strong form, e.g., by using adapted network architectures, or in a weak form, e.g., by modifying the loss term for the training [34]. These types of approaches, coined as _physics-informed_[22], _mechanics-informed_[4], _physics-augmented_[25], _physics-constrained_[19], or _thermodynamics-based_[37], enable an improvement of the extrapolation capability and the usage of sparse training data [22, 27], which is particularly important when constitutive models are to be fitted to experimental data.
In the following, a brief overview on the mentioned NN-based approaches applied to _finite strain hyper-elasticity_ modeling is given. Regarding isotropic materials, transferred from analytical models, the works [29, 46] propose to approximate the elastic potential by a feed-forward neural network (FFNN) with three deformation-type invariants as input and thus fulfill several constitutive conditions, e.g., thermodynamic consistency, objectivity, or material symmetry. However, similar to the approaches [28, 31, 43] applied to anisotropic problems, the elastic potential is needed directly for training within [29, 46]. In the meantime, NNs using _invariants_ as inputs and the hyperelastic potential as output, thus also being a priori thermodynamically consistent, have become a fairly established approach [12, 19, 24, 25, 30, 32, 33, 48]. Thereby, a more sophisticated training is applied, which allows the direct calibration of the network by tuples of stress and strain, i.e., the derivative of the energy with respect to the deformation is used in the loss term. This technique is also named Sobolev training [52, 54]. Alternatively, in order to ensure thermodynamic consistency a posteriori, a previously trained network predicting stress coefficients can be used to construct a pseudo-potential [20]. An NN-based approach which is coupled to a specific model, the so-called micro-sphere approach, is presented in [57].
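As a rough illustration of the Sobolev training mentioned above, the loss can compare the derivative of the network's predicted energy with stress data. The following PyTorch sketch assumes a potential `psi_net` mapping a batch of deformation gradients to scalar energies; the names are placeholders, not a specific published implementation:

```python
import torch

def sobolev_loss(psi_net, F, P_data):
    """Calibrate an energy network on stress-strain tuples: the stress is the
    derivative of the predicted energy w.r.t. the deformation gradient."""
    F = F.clone().requires_grad_(True)           # batch of deformation gradients
    psi = psi_net(F).sum()
    # P = d(psi)/dF; create_graph=True keeps it differentiable for training
    P_pred = torch.autograd.grad(psi, F, create_graph=True)[0]
    return ((P_pred - P_data) ** 2).mean()
```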
Besides the mentioned requirements, namely thermodynamic consistency, objectivity, and material symmetry, there exist further physical conditions, e.g., ellipticity, which ensures material stability [56]. However, since ellipticity is difficult to verify and ensure, the concept of polyconvexity of the strain energy potential [5, 6], which implies ellipticity and is mathematically linked to the existence and stability of solutions of the elasticity problem, is preferable for the formulation of constitutive models [39]. There are several approaches for building polyconvex NNs [4, 7, 12, 24, 33, 47, 48, 50], with the most notable technique for incorporating this condition being the use of _input convex neural networks (ICNNs)_ originally introduced by Amos et al. [3]. For the fulfillment of the growth condition, a special network architecture may be applied [19], whereas using analytical growth terms is more widely spread [24, 25]. Finally, while several works introduce correction terms which ensure normalization conditions, they are either not polyconvex [9, 50] or restricted to the case of nearly incompressible material behavior [33]. E.g., the model proposed in [33] includes terms of the form \((I_{1}-3)^{2}\), where \(I_{1}\) denotes the first invariant of the right Cauchy-Green deformation tensor. However, in order to preserve polyconvexity of \(I_{1}\), the functions acting on it must in general be convex and _non-decreasing_,
cf. [24, Remark A.10]. For the quadratic function used in [33], this holds only if \(I_{1}\) is bounded from below by \(3\), which is only the case for \(\det\mathbf{F}=1\), cf. [44, Corollary A.11], where \(\mathbf{F}\) denotes the deformation gradient. Thus, for \(\det\mathbf{F}\neq 1\), the polyconvexity of the potential in [33] is not ensured by construction. Overall, to the best of our knowledge, the models found in literature so far only fulfill subsets of constitutive conditions belonging to compressible hyperelasticity in an exact way, while the remaining ones are only fulfilled in an approximate fashion, i.e., they are taken into account by penalty terms in the loss [49, 55]. Becoming more specific, while [24] fulfills the polyconvexity condition in an exact way, the normalization condition is only approximated by learning it through the calibration data. On the other side, while [50] uses a stress correction for the exact fulfillment of the normalization condition, this stress correction term includes non-diagonal components of the right Cauchy-Green deformation tensor, and thus is in general not polyconvex and violates material symmetry.
Concluding on NN-based constitutive models, fulfilling all common constitutive conditions of compressible hyperelasticity in an exact way at the same time has so far remained an open challenge.1 In the present work, such an approach which consequently accounts for _thermodynamic consistency, symmetry of the stress tensor, objectivity, material symmetry, polyconvexity, growth condition_ as well as _normalization of energy and stress_ by construction of the network architecture is systematically derived. Regarding the introduced new family of NN-based hyperelastic models, which fulfill all of the aforementioned conditions in an _exact_ way, we advocate naming them _physics-augmented neural networks (PANNs)_. The proposed framework will be very valuable in fields where highly flexible and at the same time physically sensible constitutive models are required, such as the simulation of microstructured materials [14, 19]. For this purpose, the PANN approach is built up by extending the aforementioned model [24] with polyconvex normalization terms, followed by a detailed analytical and numerical study analyzing the overall model. Thereby, it is clearly explained step by step how all the considered physical conditions are incorporated into the PANN approach, which is particularly applicable to compressible anisotropic material behavior. Our framework is applied to the isotropic and transversely isotropic case, where several descriptive examples including multiaxial stress-strain states and noisy data are considered. The data basis for training is thereby generated by using analytical potentials. After the calibration of the models, they are applied within finite element (FE) computations to demonstrate their usability and general accuracy.
Footnote 1: This is only true for the case of compressible elastic materials. A corresponding proposal is made by Linka and Kuhl [33] for the incompressible case. However, apart from the restriction to incompressible materials, it is less general in some other points compared to the approach presented here.
The outline of the manuscript is as follows: In Sec. 2, the fundamentals of hyperelasticity are discussed, which are then applied to the proposed NN model in Sec. 3. In Sec. 4, numerical examples are presented. Finally, in Sec. 5 we conclude with a short discussion on the importance of augmenting neural networks with physics.
NotationThroughout this work the space of tensors
\[\mathcal{L}_{n}:=\underbrace{\mathbb{R}^{3}\otimes\cdots\otimes\mathbb{R}^{3} }_{\text{$n$-times}}\ \forall n\in\mathbb{N} \tag{1}\]
is used, except for a tensor of rank zero. In Eq. (1), \(\mathbb{R}^{3}\), \(\mathbb{N}\) and \(\otimes\) denote the Euclidean vector space, the set of natural numbers without zero and the dyadic product, respectively. Tensors of rank one and two are given by boldface italic symbols in the following, i. e., \(\mathbf{a}\in\mathcal{L}_{1}\) or \(\mathbf{B},\mathbf{C}\in\mathcal{L}_{2}\). Transpose and inverse of a second order tensor \(\mathbf{B}\) are marked by \(\mathbf{B}^{T}\) and \(\mathbf{B}^{-1}\), respectively. Furthermore, trace, determinant and cofactor are denoted by \(\operatorname{tr}\mathbf{B}\), \(\det\mathbf{B}\) and \(\operatorname{cof}\mathbf{B}:=\det(\mathbf{B})\mathbf{B}^{-T}\). The set of invertible second order tensors with positive determinant is denoted by \(\mathcal{GL}^{+}(3):=\{\mathbf{A}\in\mathcal{L}_{2}\,|\det\mathbf{A}>0\}\), while the orthogonal group and special orthogonal group in \(\mathbb{R}^{3}\) are denoted by \(\mathcal{O}(3):=\big{\{}\mathbf{A}\in\mathcal{L}_{2}\,|\,\mathbf{A}^{T}\cdot\mathbf{A}=\mathbf{1}\big{\}}\) and \(\mathcal{SO}(3):=\big{\{}\mathbf{A}\in\mathcal{L}_{2}\,|\,\mathbf{A}^{T}\cdot\mathbf{A}=\mathbf{1},\,\det\mathbf{A}=1\big{\}}\), respectively.
Here, \(\mathbf{1}\in\mathcal{L}_{2}\) denotes the second order identity tensor. The space of symmetric second order tensors is denoted as \(\operatorname{Sym}:=\big{\{}\mathbf{A}\in\mathcal{L}_{2}\,|\,\mathbf{A}=\mathbf{A}^{T}\big{\}}\).
In order to describe hyperelastic material behavior, a potential
\[\psi:\mathcal{GL}^{+}(3)\to\mathbb{R},\quad\mathbf{F}\mapsto\psi(\mathbf{F}) \tag{5}\]
is introduced, which corresponds to the strain energy density stored in the body and is equal to the Helmholtz free energy density \(W\). Thus, to satisfy the inequality (4) for arbitrary \(\dot{\mathbf{F}}\), the corresponding first Piola-Kirchhoff stress follows to
\[\mathbf{P}=\frac{\partial\psi}{\partial\mathbf{F}}\,. \tag{6}\]
By this definition, the stress tensor is a gradient field which implies energy conservation and path-independency, thus being **thermodynamically consistent** by construction [19, 30, 33].2
Footnote 2: Note that the constitutive equations must always be physically consistent, i.e., without contradiction to the introduced balance equations. Special importance is attached to the compatibility with the second law of thermodynamics, given here in the form of the Clausius-Duhem inequality (4). Therefore, the thermodynamic consistency is especially highlighted here.
The hyperelastic potential is subjected to further mathematical and physical considerations, which are shortly discussed in the following and also illustrated in Fig. 1. For a more detailed introduction to hyperelasticity, the reader is referred to [17, 18, 42].
First of all, \(\psi(\mathbf{F})\) has to be constructed such that the compatibility with the balance of angular momentum is ensured. By using Eqs. (3) and (6) this requirement is expressed as
\[\frac{\partial\psi}{\partial\mathbf{F}}\cdot\mathbf{F}^{T}=\mathbf{F}\cdot \frac{\partial\psi}{\partial\mathbf{F}^{T}}\,. \tag{7}\]
Using the stress transformations introduced in Sec. 2.1, this results in the requirement for the **symmetry of the stress tensors \(\mathbf{\sigma}\)** and \(\mathbf{T}\), e.g., \(\mathbf{T}=\mathbf{F}^{-1}\cdot\frac{\partial\psi}{\partial\mathbf{F}^{T}}\overset{!}{=} \frac{\partial\psi}{\partial\mathbf{F}^{T}}\cdot\mathbf{F}^{-T}=\mathbf{T}^{T}\,.\)
Secondly, a constitutive model should be independent of the choice of observer, which is referred to as **objectivity**. In hyperelasticity, this is formalized as
\[\psi(\mathbf{Q}\cdot\mathbf{F})=\psi(\mathbf{F})\quad\forall\mathbf{F}\in\mathcal{GL}^{+}(3),\ \mathbf{Q}\in\mathcal{SO}(3)\,. \tag{8}\]
The constitutive equations should also reflect the material's underlying (an-)isotropy which is expressed as **material symmetry** and mathematically written as
\[\psi(\mathbf{F}\cdot\mathbf{Q}^{T})=\psi(\mathbf{F})\quad\forall\mathbf{F}\in\mathcal{GL}^{+}(3),\ \mathbf{Q}\in\mathcal{G}\subseteq\mathcal{O}(3)\,, \tag{9}\]
where \(\mathcal{G}\) denotes the symmetry group of the material under consideration.
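Both conditions lend themselves to simple numerical spot checks. As a sketch (assuming a potential `psi` that maps a \(3\times 3\) deformation gradient to a scalar), objectivity can be probed with random rotations; material symmetry follows analogously by testing \(\psi(\mathbf{F}\cdot\mathbf{Q}^{T})\) for \(\mathbf{Q}\in\mathcal{G}\):

```python
import torch

def check_objectivity(psi, F, n_trials=10, tol=1e-6):
    """Probe Eq. (8): psi(Q . F) == psi(F) for random rotations Q in SO(3)."""
    for _ in range(n_trials):
        Qr, _ = torch.linalg.qr(torch.randn(3, 3))   # orthogonal factor
        if torch.det(Qr) < 0:                        # enforce det Q = +1
            Qr = Qr.clone()
            Qr[:, 0] = -Qr[:, 0]
        if abs(psi(Qr @ F) - psi(F)) > tol:
            return False
    return True
```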
Furthermore, we consider hyperelastic potentials which are **polyconvex**[5, 6, 8, 44], allowing for a representation
\[\psi(\mathbf{F})=\mathcal{P}\left(\mathbf{F},\,\text{cof}\,\mathbf{F},\,\text {det}\,\mathbf{F}\right)\,, \tag{10}\]
where \(\mathcal{P}\left(\mathbf{F},\,\text{cof}\,\mathbf{F},\,\text{det}\,\mathbf{F}\right)\) is convex in its arguments, with \(\text{cof}\,\mathbf{F}=\text{det}(\mathbf{F})\,\mathbf{F}^{-T}\) denoting the cofactor of the deformation gradient. Polyconvexity stems from a quite theoretical context - however, it implies ellipticity [56], which is of practical importance and far more challenging to include in the model formulation than polyconvexity. The ellipticity (or rank-one convexity) condition [39, 56]
\[(\mathbf{a}\otimes\mathbf{b}):\,\frac{\partial^{2}\psi}{\partial\mathbf{F} \partial\mathbf{F}}\,:\,\,(\mathbf{a}\otimes\mathbf{b})\geq 0\qquad\forall\mathbf{a},\mathbf{b} \in\mathcal{L}_{1}\,, \tag{11}\]
ensures material stability of the model, which leads to a favorable behavior in numerical applications. Note that, in order to ensure polyconvexity, all steps that are made in the construction of the hyperelastic potential must preserve polyconvexity, in particular, polyconvex invariants have to be used, cf. Sec. 2.3.
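Since ellipticity is hard to verify analytically, condition (11) can at least be sampled numerically. A minimal sketch using double backpropagation, again with a scalar-valued `psi` of the deformation gradient as a placeholder:

```python
import torch

def rank_one_residual(psi, F, a, b):
    """Evaluate (a x b) : d2psi/dFdF : (a x b) from Eq. (11); non-negative
    values indicate ellipticity at F in the rank-one direction a x b."""
    F = F.clone().requires_grad_(True)
    H = torch.outer(a, b)                            # rank-one perturbation
    g = torch.autograd.grad(psi(F), F, create_graph=True)[0]
    gH = (g * H).sum()                               # first directional derivative
    hess_H = torch.autograd.grad(gH, F)[0]           # (d2psi/dFdF) : H
    return (hess_H * H).sum()
```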
In addition, a variety of coercivity conditions can be considered, the most common one being the **volumetric growth condition**
\[\psi\left(\mathbf{F}\right)\rightarrow\infty\quad\text{as}\quad\left(J\to 0^{+} \quad\vee\quad J\rightarrow\infty\right)\,, \tag{12}\]
in order to take into account the observation that a material body can not be compressed to a volume of zero or expanded to an infinite volume [18]; here, \(J:=\det\mathbf{F}\) denotes the determinant of the deformation gradient. In particular, the case of large volumetric compression is important, because it may be in a relevant range of practical engineering applications.
Finally, the functional relationship given in Eq. (5) is subjected to further considerations on a physically sensible behavior.

Figure 1: Schematic depiction of the common conditions on elastic potential and stresses: (a) thermodynamic consistency, (b) symmetry of the Cauchy stress, (c) objectivity, (d) material symmetry, (e) polyconvexity, (f) volumetric growth condition, (g) and (h) normalization conditions for energy and stress, as well as (i) non-negativity of energy. Within (e)–(i), solid blue lines denote models accounting for the respective condition, while the dashed red lines mark models which violate it. In (c) and (d), \(\mathrm{d}\mathbf{X}\), \(\mathrm{d}\mathbf{X}^{*}\), \(\mathrm{d}\mathbf{x}\), and \(\mathrm{d}\mathbf{x}^{*}\) denote material line elements for different deformation states, where \(\mathrm{d}\mathbf{X}^{*}\) and \(\mathrm{d}\mathbf{x}^{*}\) are transformed by a rotation of \(\mathrm{d}\mathbf{X}\) and \(\mathrm{d}\mathbf{x}\), respectively.

In the undeformed configuration, i.e., \(\mathbf{F}=\mathbf{1}\), the **normalization** conditions
\[\psi\left(\mathbf{F}=\mathbf{1}\right)\overset{!}{=}0\quad\text{and}\quad\mathbf{P}(\mathbf{F}=\mathbf{1})\overset{!}{=}\mathbf{0} \tag{13}\]
for both energy and stress should hold. Besides the normalization, the free energy should increase whenever deformation occurs. Thus, in addition to Eq. (13)\({}_{1}\), the **non-negativity of the strain energy**, i.e., \(\psi\left(\mathbf{F}\right)\geq 0\), is required.
By formulating the potential in terms of invariants following from the right Cauchy-Green deformation tensor \(\mathbf{C}\) and a set of structural tensors, summarized in a set \(\mathcal{S}^{\Box}\), which reflects the material symmetry of the material body under consideration [17, 18], i.e.,
\[\psi\!:\mathbb{R}^{m}\to\mathbb{R},\quad\mathbf{I}\mapsto\psi\left(\mathbf{I}\right)\,, \tag{14}\]
both the **objectivity** and the **material symmetry** condition are fulfilled [42].3 Therein, \(\mathbf{I}\!:=\!\left(I_{1},\ldots,I_{m}\right)\in\mathbb{R}^{m}\) denotes an \(m\)-tuple containing a set of complete and irreducible invariants \(I_{\alpha}(\mathbf{C},\mathcal{S}^{\Box})\) of the symmetry group under consideration. Here, complete means that an arbitrary invariant can be expressed as a function of the elements of \(\mathbf{I}\), e.g., \(\psi\left(\mathbf{I}\right)\). Irreducible means that one invariant \(I_{\alpha}\) of the set \(\mathbf{I}\) cannot be expressed by the other elements of \(\mathbf{I}\), cf. [8]. Note that these invariants have to be chosen such that they do not violate the **polyconvexity** condition.4 The corresponding second Piola-Kirchhoff stress tensor is then given as
Footnote 3: Note that \(\psi\left(\mathbf{F}\right)\) and \(\psi\left(\mathbf{I}\right)\) are different functions, as indicated by the arguments. In favor of a reduced set of symbols, this casual notation is used within this work.
Footnote 4: It should be noted that there are symmetry groups for which no complete set can be found that does not violate polyconvexity [25]. In this case it must be decided which property of the model is preferable. Furthermore, it might be possible that there does not exist a set of invariants to describe the anisotropy of materials with arbitrary microstructures, e.g., 3D-printed composites.
\[\mathbf{T}=2\frac{\partial\psi}{\partial\mathbf{C}}=2\sum_{\alpha=1}^{m}\frac{ \partial\psi}{\partial I_{\alpha}}\frac{\partial I_{\alpha}}{\partial\mathbf{C}} =\mathbf{F}^{-1}\cdot\mathbf{P}\,. \tag{15}\]
Finally, formulating the hyperelastic potential in terms of invariants of the right Cauchy-Green tensor implies **symmetry of the Cauchy stress tensor \(\mathbf{\sigma}\)**, which ensures the conservation of angular momentum [18].
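In an implementation, Eq. (15) does not have to be coded by hand: once the potential is a differentiable function of \(\mathbf{C}\), automatic differentiation evaluates the chain rule over the invariants. A minimal PyTorch sketch, with `psi_of_C` a placeholder for any invariant-based potential:

```python
import torch

def second_pk_stress(psi_of_C, C):
    """Second Piola-Kirchhoff stress via Eq. (15): T = 2 dpsi/dC."""
    C = C.clone().requires_grad_(True)
    return 2.0 * torch.autograd.grad(psi_of_C(C), C)[0]
```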
### 2.3 Special material symmetry classes and specific models
Throughout this work, both isotropic (\(\Box:=\oplus\)) and transversely isotropic (\(\Box:=\parallel\)) material behavior are considered as examples of specific material symmetry groups. In the isotropic case, i.e., when the material response is direction-independent, \(\mathcal{G}=\mathcal{O}(3)\), a complete and irreducible set is given by the three invariants
\[I_{1}:=\operatorname{tr}\mathbf{C},\qquad I_{2}:=\operatorname{tr}(\operatorname {cof}\mathbf{C}),\qquad I_{3}:=\operatorname{det}\mathbf{C}\,. \tag{16}\]
In this case, there are no structural tensors needed, hence \(\mathcal{S}^{\oplus}=\varnothing\) holds. For transversely isotropic materials, a complete functional basis is given by the three isotropic invariants \(I_{1}\), \(I_{2}\), \(I_{3}\) together with
\[I_{4}:=\operatorname{tr}(\mathbf{C}\cdot\mathbf{G}),\qquad I_{5}:=\operatorname{tr}( \operatorname{cof}(\mathbf{C})\cdot\mathbf{G})\,, \tag{17}\]
where \(\mathbf{G}\) denotes the second order transversely isotropic structural tensor [8, 45]. In case the preferred direction is parallel to the \(X_{1}\)-direction, the structural tensor's components are given by
\[\left(G_{KL}\right):=\begin{pmatrix}\beta^{2}&0&0\\ 0&\frac{1}{\beta}&0\\ 0&0&\frac{1}{\beta}\end{pmatrix}, \tag{18}\]
where \(\beta\in\mathbb{R}_{>0}\) is a model parameter [45]. Thus, it holds \(\mathcal{S}^{\parallel}=\left\{\mathbf{G}\right\}\).
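The invariants (16) and (17) are straightforward to evaluate. The following sketch computes them for a single right Cauchy-Green tensor, with the transversely isotropic pair included whenever a structural tensor \(\mathbf{G}\) is passed:

```python
import torch

def invariants(C, G=None):
    """Isotropic invariants (16) and, if G is given, the transversely
    isotropic invariants (17) of a 3x3 right Cauchy-Green tensor C."""
    cofC = torch.det(C) * torch.linalg.inv(C).T      # cof C = det(C) C^{-T}
    I = [torch.einsum('ii->', C),                    # I1 = tr C
         torch.einsum('ii->', cofC),                 # I2 = tr(cof C)
         torch.det(C)]                               # I3 = det C
    if G is not None:
        I += [torch.einsum('ij,ji->', C, G),         # I4 = tr(C . G)
              torch.einsum('ij,ji->', cofC, G)]      # I5 = tr(cof(C) . G)
    return torch.stack(I)
```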
#### 2.3.1 Isotropic model
At this point, analytical models have to make an explicit choice of the functional relationship for the hyperelastic potential. While some choices have a strong physical motivation [36], most models are of a heuristic nature - and the reduced flexibility that often goes along with this human choice of functional relationship purely stems from the necessity of an explicit form of the model. To become more specific, we consider the isotropic Neo-Hookean model
\[\psi^{\text{nh}}(I_{1},I_{3})=\frac{1}{2}\left(\mu\left(I_{1}-\ln I_{3}-3 \right)+\frac{\lambda}{2}\left(I_{3}-\ln I_{3}-1\right)\right)\,,\quad\mu=\frac {E}{2(1+\nu)}\,,\quad\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)} \tag{19}\]
with the material parameters \((E,\nu)\) corresponding to the Young's modulus and Poisson's ratio. From this potential, the second Piola-Kirchhoff stress can be derived as
\[\mathbf{T}^{\text{nh}}=\mu\mathbf{1}+\left(\frac{\lambda}{2}-\frac{2\mu+\lambda}{2I_{ 3}}\right)\text{cof}\,\mathbf{C}\,. \tag{20}\]
Note that, e.g., the linear dependency on \(I_{1}\) is a quite restrictive choice. Again, the second law of thermodynamics is fulfilled by constructing the stress as a gradient field, while symmetry of the Cauchy stress tensor, objectivity and material symmetry are fulfilled by using invariants. Then, the normalization conditions are fulfilled by cleverly combining both material parameters and invariants, which becomes evident when setting \(\mathbf{C}=\mathbf{1},I_{1}=3,I_{3}=1\) in Eqs. (19) and (20). By using the term \(-\ln I_{3}\) in combination with the remaining invariants, the volumetric growth condition is also fulfilled. Finally, all mathematical operations preserve the polyconvexity of the invariants.
Surprisingly, to the best of the authors' knowledge, it is not possible to prove analytically that the non-negativity of the elastic potential, i.e., \(\psi^{\text{nh}}(I_{1},I_{3})\geq 0\) for arbitrary physically permissible combinations of invariants \(I_{1}\) and \(I_{3}\), is satisfied by this compressible elastic model in every case. This is due to the fact that \(\psi^{\text{nh}}(I_{1},I_{3})\) is polyconvex, i.e., convex with respect to \(\mathbf{F}\), \(\text{cof}\,\mathbf{F}\) and \(\text{det}\,\mathbf{F}\), but not convex with respect to \(\mathbf{F}\) or \(\mathbf{C}\).5 Consequently, the non-negativity condition has to be numerically checked for admissible deformation states \(\mathbf{C}\), see Sec. 4.
Footnote 5: This is in contrast to the incompressible case [33], where the definition range of the invariants is furthermore significantly more limited since \(J\equiv 1\).
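The mentioned numerical check of the non-negativity condition can be sketched by sampling admissible deformation states, as done more systematically in Sec. 4; the sample size and perturbation scale below are illustrative choices:

```python
import torch

def psi_nh(C, E=1.0, nu=0.3):
    """Compressible Neo-Hooke potential, Eq. (19), as a function of C."""
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    I1, I3 = torch.einsum('ii->', C), torch.det(C)
    return 0.5 * (mu * (I1 - torch.log(I3) - 3.0)
                  + 0.5 * lam * (I3 - torch.log(I3) - 1.0))

min_psi = float('inf')
for _ in range(10_000):
    F = torch.eye(3) + 0.5 * torch.randn(3, 3)       # random deformation gradient
    if torch.det(F) <= 0:                            # keep F in GL+(3)
        continue
    min_psi = min(min_psi, psi_nh(F.T @ F).item())   # C = F^T F is admissible
print(f'smallest sampled energy: {min_psi:.3e}')     # expected to be >= 0
```

The corresponding stress (20) can then be recovered with the `second_pk_stress` sketch from above, e.g., `second_pk_stress(psi_nh, F.T @ F)`.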
#### 2.3.2 Transversely isotropic model
In order to describe transversely isotropic material behavior, we consider the analytical model
\[\psi^{\text{ii}}(I_{1},I_{2},I_{3},I_{4},I_{5})=\alpha_{1}I_{1}+\alpha_{2}I_{ 2}+\delta_{1}I_{3}-\delta_{2}\ln(\sqrt{I_{3}})+\eta^{*}\left(I_{4}^{\alpha_{4 }}+I_{5}^{\alpha_{4}}\right),\quad\eta^{*}:=\frac{\eta_{1}}{\alpha_{4}\,\text{ tr}(\mathbf{G})^{\alpha_{4}}}\,, \tag{21}\]
as proposed by Schroder et al. [45]. Here, the corresponding second Piola-Kirchhoff stress can be derived as
\[\mathbf{T}^{\text{ti}}=2\bigg{(}\alpha_{1}\mathbf{1}+\alpha_{2}\big{(}I_{1}\mathbf{1}-\mathbf{C }\big{)}+\bigg{(}\delta_{1}I_{3}-\frac{\delta_{2}}{2}\bigg{)}\mathbf{C}^{-1}+2 \eta^{*}I_{4}\mathbf{G}+2\eta^{*}I_{5}\bigg{(}I_{5}\mathbf{C}^{-1}-\text{cof}(\mathbf{C}) \cdot\mathbf{G}\cdot\mathbf{C}^{-1}\bigg{)}\bigg{)}\,. \tag{22}\]
The constitutive model thus includes the material parameters \((\beta,\alpha_{1},\alpha_{2},\delta_{1},\delta_{2},\alpha_{4},\eta_{1})\), with restrictions given in [45]. Herein, all conditions introduced in Sec. 2.2 are satisfied by the analytical model. However, as stated in Sec. 2.3.1, the non-negativity of the strain energy density has to be verified numerically.
To close, while the constitutive conditions on hyperelasticity discussed in this section have a sound physical and mathematical basis, the restricted functional relationships that most analytical models choose do not. This limitation can be circumvented by using PANNs.
## 3 Physics-augmented neural network constitutive model
In the previous section, we discussed the constitutive conditions of hyperelasticity. Now, instead of choosing an analytical formulation for the potential as in Secs. 2.3.1 and 2.3.2, we aim to exploit the excellent approximation properties of FFNNs [1, 26]. In the following, the overall PANN model is introduced, which satisfies all of the conditions introduced in Sec. 2.2, except for the non-negativity of the energy, in an exact way.
### Basic conditions
By constructing the stress as the gradient of a potential predicted directly by a FFNN, **thermodynamic consistency** of the model is fulfilled. If the potential is additionally formulated in terms of invariants, a **symmetric Cauchy stress tensor**6 is automatically obtained and the condition of **objectivity** as well as the condition of **material symmetry** are satisfied. At this stage, the neural network already fulfills _basic conditions_ by construction, i.e., thermodynamic consistency, objectivity, material symmetry and symmetry of the stress tensor.
Footnote 6: It should be noted that a FFNN directly mapping from \(\mathbf{F}\) to \(\mathbf{P}\) does not necessarily yield an elastic model which accounts for \(\mathbf{\sigma}=\mathbf{\sigma}^{T}\). The same restriction holds for a model which maps from \(\mathbf{F}\) to \(\mathbf{\psi}(\mathbf{F})\). E.g., in Naumann and Ihlemann [38], a pseudo elastic model is investigated which violates the integrability condition, so that the stress is no longer a gradient field.
We assume that the anisotropic hyperelastic material behavior can be described by the irreducible and independent set of invariants \(\mathbf{I}:=(I_{1},\ldots,I_{m})\) with \(I_{\beta}(\mathbf{C},\mathbf{\mathcal{S}}^{\Box})\), \(\beta\in\mathbb{N}_{\leq m}\). However, it may be necessary to incorporate further invariants \(I_{\gamma}^{*}(\mathbf{I}),\gamma\in\mathbb{N}_{\leq A}\) with \(A\in\mathbb{N}_{\geq 0}\) into the argument list of the predicted potential, e.g., in order to fulfill additional physical conditions or to increase the approximation quality of the predictions [19, 24]. By using the extended set of invariants \(\mathbf{I}^{*}:=(I_{1},\ldots,I_{m},I_{1}^{*},\ldots,I_{A}^{*})\) as inputs of a FFNN with scalar-valued output which is then taken as a hyperelastic potential \(\psi(\mathbf{I}^{*})\), the flexibility of the resulting model exceeds that of analytical formulations by far. Furthermore, in many practical applications it is sufficient to restrict the network architecture to only one hidden layer containing \(N^{\text{NN}}\) neurons. Using the activation function \(\mathcal{F}:\mathbb{R}\to\mathbb{R}\), which needs to be twice continuously differentiable, the simplest case of an invariant-based potential using only one hidden layer is given by
\[\psi^{\text{NN},\Box}(\mathbf{I}^{*}):=\sum_{\alpha=1}^{N^{\text{NN}}}W_{\alpha} \mathcal{F}\left(\sum_{\beta=1}^{m}w_{\alpha\beta}I_{\beta}+\sum_{\gamma=1}^ {A}w_{\alpha\gamma}^{*}I_{\gamma}^{*}+b_{\alpha}\right)\,, \tag{23}\]
where \(W_{\alpha},w_{\alpha\beta},w_{\alpha\gamma}^{*}\) and \(b_{\alpha}\) denote weights and bias values, respectively, which together form the set of parameters \(\mathbf{\mathcal{P}}\in\mathbb{R}^{P}\), with \(P\in\mathbb{N}\) denoting the total number of parameters, to be optimized in the calibration process to fit a given dataset. Note that none of the methods introduced in this paper is restricted to FFNNs with one hidden layer, but can directly be applied for multilayered network architectures, cf. Appendix A. Nevertheless, we use only one layer here to better illustrate the derivations. Then, the stress prediction is done by applying
\[\mathbf{T}^{\text{NN},\Box}=2\sum_{\alpha=1}^{m}\frac{\partial\psi^{\text{NN}, \Box}}{\partial I_{\alpha}}\frac{\partial I_{\alpha}}{\partial\mathbf{C}}+2\sum_{ \gamma=1}^{A}\sum_{\beta=1}^{m}\frac{\partial\psi^{\text{NN},\Box}}{\partial I _{\gamma}^{*}}\frac{\partial I_{\gamma}^{*}}{\partial I_{\beta}}\frac{\partial I _{\beta}}{\partial\mathbf{C}}. \tag{24}\]
This special choice of _input quantities_ - namely, invariants - is the first way of including physics into the NN.
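As an illustration of Eq. (23), the following sketch evaluates a single-hidden-layer invariant potential in plain numpy; the weight values are random placeholders, and, for brevity, the regular and additional invariants are collected in one input vector rather than split as in Eq. (23).

```python
# Sketch of the single-hidden-layer invariant potential of Eq. (23) in plain
# numpy; the weights are random placeholders, and the regular and additional
# invariants are collected in one input vector for brevity.
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def psi_NN(I, W, w, b):
    """psi^NN(I*) = sum_a W_a F(sum_b w_ab I*_b + b_a), cf. Eq. (23)."""
    return W @ softplus(w @ I + b)

rng = np.random.default_rng(0)
N_NN, m = 4, 4                       # 4 neurons; inputs (I1, I2, I3, I1* = -2J)
W, w, b = rng.random(N_NN), rng.random((N_NN, m)), rng.standard_normal(N_NN)

C = np.diag([1.2**2, 0.9**2, 0.9**2])              # an example deformation state
I1, I3 = np.trace(C), np.linalg.det(C)
I2 = np.trace(I3 * np.linalg.inv(C).T)             # I2 = tr(cof C)
print(psi_NN(np.array([I1, I2, I3, -2.0 * np.sqrt(I3)]), W, w, b))
```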
### Polyconvexity
For ensuring **polyconvexity** of the potential, it is necessary to use polyconvex invariants of the symmetry group under consideration. In addition, the _network architecture_ must be adapted in a certain way. By using
a convex and non-decreasing activation function \(\mathcal{F}\) and non-negative weights, polyconvexity of the overall potential \(\psi^{\mathrm{NN,\Box}}(\mathbf{I}^{*})\) is ensured [24]. Here, the _Softplus_ activation function \(\mathcal{SP}(x):=\log(1+\exp(x))\in\mathcal{C}^{\infty}\) is applied; it is convex and non-decreasing, which leads to the conditions
\[W_{\alpha},w_{\alpha\beta},w^{*}_{\alpha\gamma}\in\mathbb{R}_{\geq 0},b_{ \alpha}\in\mathbb{R}\quad\forall\alpha\in\mathbb{N}_{\leq N^{\mathrm{NN}}}, \beta\in\mathbb{N}_{\leq m},\gamma\in\mathbb{N}_{\leq A}\,. \tag{25}\]
**Remark 3.1** (**Input convex neural networks (ICNNs))**.: This special kind of FFNN, namely, one with its output being convex in its input arguments, is referred to as ICNN [3]. In order to understand what makes this kind of NN _convex_, we take a step back and consider the univariate function
\[f\colon\mathbb{R}\to\mathbb{R},\quad x\mapsto f(x):=(g\circ h)(x)\,, \tag{26}\]
where \(f\) is composed of two functions \(g,h\colon\mathbb{R}\to\mathbb{R}\). Given that all of the above functions are twice continuously differentiable, convexity of \(f(x)\) in \(x\) is equivalent to the non-negativity of its second derivative
\[f^{\prime\prime}(x)=(g^{\prime\prime}\circ h)(x)\,h^{\prime}(x)^{2}+(g^{ \prime}\circ h)(x)\,h^{\prime\prime}(x)\geq 0\,. \tag{27}\]
A sufficient, albeit not necessary condition for this is that the innermost function \(h\) is convex (\(h^{\prime\prime}\geq 0\)), while \(g\) is convex and non-decreasing (\(g^{\prime\prime}\geq 0\) and \(g^{\prime}\geq 0\)). This can be generalized to arbitrarily many function compositions, where the innermost function must be convex, while every following function must be convex and non-decreasing. Transferred to FFNNs, which can be seen as compositions of multiple vector-valued functions, this generalizes to the condition that in an ICNN the first hidden layer must be node-wise convex, while every subsequent layer must be node-wise convex and non-decreasing, cf. [24, Appendix A].
When applied to hyperelasticity, the architecture of ICNNs has to be further adapted: as the invariants are nonlinear functions of the arguments defined in the polyconvexity condition, i.e., \((\mathbf{F},\operatorname{cof}\mathbf{F},\operatorname{det}\mathbf{F})\), cf. Eq. (10), they are the innermost function acting on the arguments of the polyconvexity condition. Thus, already the first hidden layer has to be convex and _non-decreasing_. The only exception is the use of \(J=\sqrt{I_{3}}\) as an additional invariant (instead of \(I_{3}\)), since \(J\) is an argument of the polyconvexity condition. For this reason, the activation function acting on \(J\) must only be convex and not necessarily non-decreasing. This is pragmatically taken into account by including the additional invariant \(I_{1}^{*}:=-2J\) in the set of invariants, which is furthermore essential to represent negative stresses at all [24]. Note that there are different ways of constructing polyconvex neural networks, e.g., [48] or [7]. However, the simple structure and excellent flexibility of ICNNs makes them a very natural choice for this task. For a more extensive introduction to polyconvex neural networks and explicit proofs, see [24].
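A minimal sketch of such an ICNN, assuming the TensorFlow/Keras implementation mentioned in Sec. 3.4 and an arbitrarily chosen layer width, is given below; the non-negative weight constraint together with the convex, non-decreasing Softplus activation realizes the conditions of Eq. (25).

```python
# Sketch of an ICNN for the extended isotropic invariants, assuming a
# TensorFlow/Keras implementation; the layer width is an arbitrary choice.
import tensorflow as tf

icnn = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),      # inputs I* = (I1, I2, I3, I1* = -2J)
    # The NonNeg constraint enforces w >= 0 during training (after each
    # update); combined with the convex, non-decreasing Softplus activation,
    # this realizes the conditions of Eq. (25).
    tf.keras.layers.Dense(8, activation="softplus",
                          kernel_constraint=tf.keras.constraints.NonNeg()),
    tf.keras.layers.Dense(1, use_bias=False,       # scalar potential output
                          kernel_constraint=tf.keras.constraints.NonNeg()),
])
```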
### Growth and normalization conditions
Finally, **growth and normalization conditions** remain to be included in the neural network, ensuring a physically sensible stress behavior of the model. For this, growth and normalization terms are added to the original potential \(\psi^{\mathrm{NN,\Box}}(\mathbf{I}^{*})\) of Eq. (23). Then, with \(\Box\) denoting the symmetry group under consideration, the overall PANN model, given as
\[\psi^{\mathrm{PANN,\Box}}(\mathbf{I}^{*}):=\psi^{\mathrm{NN,\Box}}(\mathbf{I}^{*})+ \psi^{\mathrm{stress,\Box}}(J,I_{4},\dots,I_{m})+\psi^{\mathrm{energy,\Box}}+ \psi^{\mathrm{growth}}(J)\,, \tag{28}\]
fulfills all constitutive conditions introduced in Sec. 2.2, except for the non-negativity of the energy, in an exact way. In Fig. 2, the overall structure of the PANN model is illustrated for the example of one hidden layer. Consequently, the corresponding stress is given by the expression
\[\mathbf{T}^{\mathrm{PANN,\Box}}=2\frac{\partial\psi^{\mathrm{PANN,\Box}}}{ \partial\mathbf{C}}=\mathbf{T}^{\mathrm{NN,\Box}}+\mathbf{T}^{\mathrm{stress,\Box}}+\mathbf{T}^ {\mathrm{energy,\Box}}+\mathbf{T}^{\mathrm{growth}}\,\,. \tag{29}\]
One way to fulfill the volumetric **growth condition** is to use coercive functions. However, since ICNNs are not necessarily coercive, they are not suited to fulfill this condition, and therefore an analytical term
\[\psi^{\text{growth}}(J):=\left(J+\frac{1}{J}-2\right)^{2} \tag{30}\]
is introduced, which is chosen in such a way that polyconvexity is not violated. This leads to the corresponding stress contribution
\[\mathbf{T}^{\text{growth}}=2\left(J+\frac{1}{J}-2\right)\left(1-\frac{1}{J^{2}} \right)J\mathbf{C}^{-1}\,. \tag{31}\]
Another way to fulfill the volumetric growth condition is to further adapt the network architecture, cf. [19]; however, using an analytical term is more straightforward.
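The following short numpy check, a sketch with illustrative sample values, confirms the intended behavior of the growth term: Eq. (30) blows up for extreme volume changes, and both Eq. (30) and Eq. (31) vanish in the undeformed state.

```python
# Quick numpy check of the growth term, Eqs. (30) and (31).
import numpy as np

psi_growth = lambda J: (J + 1.0 / J - 2.0) ** 2

def T_growth(C):
    J = np.sqrt(np.linalg.det(C))
    return 2.0 * (J + 1.0 / J - 2.0) * (1.0 - 1.0 / J**2) * J * np.linalg.inv(C)

print(psi_growth(1e-6), psi_growth(1e6))     # both very large
print(psi_growth(1.0), T_growth(np.eye(3)))  # zero energy and zero stress at J = 1
```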
The correction term for the **energy normalization** is given by
\[\psi^{\text{energy},\Box}:=-\psi^{\text{NN},\Box}(\mathbf{I}^{*}) \Big{|}_{\mathbf{C}=\mathbf{1}}\in\mathbb{R}\,. \tag{32}\]
Since \(\psi^{\text{energy},\Box}\) is a constant, it holds \(\mathbf{T}^{\text{energy},\Box}=\mathbf{0}\). Together with the stress normalization term \(\psi^{\text{stress},\Box}(J,I_{4},\dots,I_{m})\), which enforces vanishing gradients of the potential for the undeformed state, we ensure
\[\psi^{\text{PANN},\Box}(\mathbf{I}^{*})\Big{|}_{\mathbf{C}=\mathbf{1}}=0 \tag{33}\]
as well as
\[\mathbf{T}^{\text{PANN},\Box}\Big{|}_{\mathbf{C}=\mathbf{1}}=2\left(\sum_{ \alpha=1}^{m}\frac{\partial\psi^{\text{PANN},\Box}}{\partial I_{\alpha}} \frac{\partial I_{\alpha}}{\partial\mathbf{C}}+\sum_{\gamma=1}^{A}\sum_{\beta=1}^ {m}\frac{\partial\psi^{\text{PANN},\Box}}{\partial I_{\gamma}^{*}}\frac{ \partial I_{\gamma}^{*}}{\partial I_{\beta}}\frac{\partial I_{\beta}}{ \partial\mathbf{C}}\right)\Bigg{|}_{\mathbf{C}=\mathbf{1}}=\mathbf{0}. \tag{34}\]
Thus, the potential has a local minimum exactly for the undeformed state with \(\mathbf{C}=\mathbf{1}\), where for this case we define the invariants as \(I_{\alpha}^{0}:=I_{\alpha}(\mathbf{C}=\mathbf{1})\) and \(I_{\beta}^{*0}:=I_{\beta}^{*}(\mathbf{C}=\mathbf{1})\). However, due to the fact that \(\psi^{\text{PANN},\Box}(\mathbf{I}^{*})\) is polyconvex, i.e., convex with respect to \(\mathbf{F}\), \(\operatorname{cof}\mathbf{F}\) and \(\operatorname{det}\mathbf{F}\), but not convex with respect to \(\mathbf{F}\) or \(\mathbf{C}\), the **non-negativity of the energy**, i.e., \(\psi^{\text{PANN},\Box}(\mathbf{I}^{*})\geq 0\), does not automatically follow. As already stated in Secs. 2.3.1 and 2.3.2, a numerical test for admissible deformation states is needed to prove the fulfillment of this condition, see Sec. 4.
The polyconvex stress correction \(\psi^{\text{stress},\Box}(J,I_{4},\dots,I_{m})\) depends on the symmetry group under consideration and is now further discussed.
Figure 2: Illustration of the PANN-based constitutive model for the material symmetry group \(\Box\) under consideration. Note that the hidden layer (yellow) of the NN may be multilayered.
#### 3.3.1 Isotropic normalization term
In the isotropic case with \(\mathbf{I}^{*}:=(I_{1},I_{2},I_{3},I_{1}^{*})\) the normalization term is given by
\[\psi^{\text{stress},\oplus}(J):=-\mathfrak{n}(J-1)\,, \tag{35}\]
where the constant
\[\mathfrak{n}:=2\left(\frac{\partial\psi^{\text{NN},\oplus}}{\partial I_{1}}+2\frac{\partial\psi^{\text{NN},\oplus}}{\partial I_{2}}+\frac{\partial\psi^{\text{NN},\oplus}}{\partial I_{3}}+\frac{\partial\psi^{\text{NN},\oplus}}{\partial I_{1}^{*}}\frac{\partial I_{1}^{*}}{\partial I_{3}}\right)\Bigg{|}_{\mathbf{C}=\mathbf{1}}\in\mathbb{R} \tag{36}\]
is a weighted sum of derivatives of the ICNN potential with respect to the invariants for the undeformed state \(\mathbf{C}=\mathbf{1}\). The corresponding stress contribution is given by
\[\mathbf{T}^{\text{stress},\oplus}=-\mathfrak{n}\,J\,\mathbf{C}^{-1} \tag{37}\]
and leads to the fact that the undeformed state is stress-free by construction.
This definition of the correction term comprises two ideas: first of all, this approach preserves polyconvexity of the potential, since the additional term is a linear function in \(J\), which is an invariant quantity included in the arguments of the polyconvexity condition, cf. Eq. (10). Furthermore, the partial derivatives
\[\left.\frac{\partial I_{\alpha}}{\partial\mathbf{C}}\right|_{\mathbf{C}=\mathbf{1}}=\xi \mathbf{1}\quad\text{and}\quad\left.\frac{\partial I_{1}^{*}}{\partial\mathbf{C}} \right|_{\mathbf{C}=\mathbf{1}}=\eta\mathbf{1} \tag{38}\]
of the isotropic invariants with respect to \(\mathbf{C}\) for the undeformed state are multiples of the identity tensor \(\mathbf{1}\) for all \(\alpha\in\mathbb{N}_{\leq 3}\) with constants \(\xi,\eta\in\mathbb{R}\). Hence, it can normalize the stress for the undeformed state to zero, which becomes evident when setting \(\mathbf{C}=\mathbf{1}\) in Eq. (37). Further details are provided in Appendix B, where all necessary tensor derivatives are given in analytical form.
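The constant \(\mathfrak{n}\) can conveniently be obtained by automatic differentiation; the following sketch evaluates Eq. (36) with TensorFlow, reusing the illustrative `icnn` model from the ICNN sketch above and assuming the invariant ordering \((I_{1},I_{2},I_{3},I_{1}^{*})\).

```python
# Sketch of evaluating the constant n of Eq. (36) by automatic differentiation
# of the ICNN potential at the undeformed state (reuses the illustrative icnn
# model from the ICNN sketch above).
import tensorflow as tf

I0 = tf.constant([[3.0, 3.0, 1.0, -2.0]])   # (I1, I2, I3, I1*) at C = 1
with tf.GradientTape() as tape:
    tape.watch(I0)
    psi0 = icnn(I0)
dpsi = tape.gradient(psi0, I0)[0]           # derivatives w.r.t. the invariants
# With I1* = -2*sqrt(I3), it holds dI1*/dI3 = -1 at C = 1:
n_const = 2.0 * (dpsi[0] + 2.0 * dpsi[1] + dpsi[2] - dpsi[3])
print(float(n_const))
```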
#### 3.3.2 Transversely isotropic normalization term
In the case of transverse isotropy with \(\mathbf{I}^{*}:=(I_{1},I_{2},I_{3},I_{4},I_{5},I_{1}^{*})\) the partial derivatives of the additional invariants \(I_{4},I_{5}\) with respect to \(\mathbf{C}\) for the undeformed state are not multiples of the identity anymore but include the structural tensor \(\mathbf{G}\), e.g.,
\[\left.\frac{\partial I_{4}}{\partial\mathbf{C}}\right|_{\mathbf{C}=\mathbf{1}}=\mathbf{G} \quad\text{and}\quad\left.\frac{\partial I_{5}}{\partial\mathbf{C}}\right|_{\mathbf{C }=\mathbf{1}}=I_{5}\mathbf{1}-\mathbf{G}. \tag{39}\]
Thus, the correction term
\[\psi^{\text{stress},\parallel}(J,I_{4},I_{5}):=-\mathfrak{o}(J-1)+\mathfrak{p}(I_{4}-I_{4}^{0})+\mathfrak{q}(I_{5}-I_{5}^{0}) \tag{40}\]
is introduced, with \(I_{4}^{0}=I_{5}^{0}=\text{tr}\,\mathbf{G}\). Again, the constant
\[\mathfrak{o}:=2\left(\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{1}}+2\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{2}}+\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{3}}+\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{1}^{*}}\frac{\partial I_{1}^{*}}{\partial I_{3}}+\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{5}}\,\text{tr}\,\mathbf{G}+\mathfrak{q}\,\text{tr}\,\mathbf{G}\right)\Bigg{|}_{\mathbf{C}=\mathbf{1}}\in\mathbb{R} \tag{41}\]
is a weighted sum of the derivatives of the ICNN potential with respect to the invariants for the undeformed state \(\mathbf{C}=\mathbf{1}\). Furthermore, with the ReLU-function denoted as \(\mathcal{R}\mathcal{L}\), the non-negative constants
\[\mathfrak{p}:=\mathcal{R}\mathcal{L}(-x)\in\mathbb{R}_{\geq 0},\quad\mathfrak{q}:= \mathcal{R}\mathcal{L}(x)\in\mathbb{R}_{\geq 0} \tag{42}\]
are defined with the argument
\[x:=\left(\frac{\partial\psi^{\text{NN},\parallel}}{\partial I_{4}}-\frac{\partial \psi^{\text{NN},\parallel}}{\partial I_{5}}\right)\bigg{|}_{\mathbf{C}=\mathbf{1}}\,. \tag{43}\]
Due to the non-negativity of \(\mathfrak{p}\), \(\mathfrak{q}\) as well as the polyconvexity of \(I_{4},I_{5}\), the correction term is again polyconvex. It should be emphasized again that in the correction terms no assumptions on the number of hidden layers in the neural networks are made, and the approach can directly be applied to multilayered network architectures. Overall, in the transversely isotropic case, the stress contribution of the normalization term is given by
\[\mathbf{T}^{\text{stress},\parallel}=-\mathfrak{o}\,J\,\mathbf{C}^{-1}+2\mathfrak{p}\,\mathbf{G}+2\mathfrak{q}\left(I_{5}\mathbf{C}^{-1}-\operatorname{cof}(\mathbf{C})\cdot\mathbf{G}\cdot\mathbf{C}^{-1}\right)\,, \tag{44}\]
where all necessary tensor derivatives, given in Appendix B, have been used.
Note that in the definitions of all correction terms, even though \(\psi^{\text{stress},\Box}(J,I_{4},\dots,I_{m})\) and \(\psi^{\text{energy},\Box}\) depend on the evaluation of \(\psi^{\text{NN},\Box}(\mathbf{I}^{*})\) and its partial derivatives at \(\mathbf{C}=\mathbf{1}\), no assumptions on the number of hidden layers in the ICNN \(\psi^{\text{NN},\Box}(\mathbf{I}^{*})\) are made, and the approaches can directly be applied to multilayered network architectures.
### Model calibration
Finally, the model has to be calibrated to data of a specific material including the set of structural tensors, summarized in \(\mathcal{S}^{\Box}\), which corresponds to the material symmetry group \(\Box\) under consideration. Throughout this work, datasets of the form
\[\mathcal{D}=\left\{\left(\,{}^{1}\mathbf{C},\,{}^{1}\mathbf{T}\,\right),\left(\,{}^{2 }\mathbf{C},\,{}^{2}\mathbf{T}\,\right),\dots\right\} \tag{45}\]
consisting of strain-stress tuples in terms of right Cauchy-Green deformation and second Piola-Kirchhoff stress tensors are used. In order to examine the generalization of a model, i.e., its prediction of general load cases, it is essential to evaluate it on data not seen in the calibration process. Thus, the overall dataset \(\mathcal{D}\) is split into a calibration dataset \(\mathcal{D}_{\text{c}}\) and a test dataset \(\mathcal{D}_{\text{t}}\) with \(\mathcal{D}_{\text{c}}\cap\mathcal{D}_{\text{t}}=\varnothing\). Then, after the calibration of the model on the dataset \(\mathcal{D}_{\text{c}}\), its predictions can be evaluated on \(\mathcal{D}_{\text{t}}\). Only if the model is able to predict the load cases of the test dataset, and only if these load cases are sufficiently general, can it be assumed that the model generalizes well and predicts the stress for arbitrary deformations.
The next step of the model calibration is the choice of its hyperparameters, here, the number of hidden layers and the nodes they contain. For this, a "sufficiently large" number of layers and nodes should be chosen, so that the model is flexible enough for the material behavior under consideration. For general NNs, overfitting [1, 26], i.e., an overly close interpolation of the calibration data which results in poor predictions for general data, is an issue. However, as we will demonstrate, for the PANN proposed in this work, the inclusion of physics provides the model with a pronounced mathematical structure, which makes it less prone to overfitting. Then, with a fixed model architecture, the model parameters \(\mathbf{\mathcal{P}}\), i.e., the weights and biases, must be optimized. To calibrate the model parameters \(\mathbf{\mathcal{P}}\) on the dataset \(\mathcal{D}_{\text{c}}\), the loss function defined in terms of the mean squared error
\[\mathcal{MSE}^{\Box}(\mathbf{\mathcal{P}})=\frac{1}{|\mathcal{D}_{\text{c}}|}\sum_{i=1}^{|\mathcal{D}_{\text{c}}|}\left\|{}^{i}\mathbf{T}-\mathbf{T}^{\text{model},\Box}({}^{i}\mathbf{C};\mathbf{\mathcal{P}})\right\|^{2} \tag{46}\]
is minimized, where \(|\mathcal{D}_{\text{c}}|\) denotes the number of tuples in \(\mathcal{D}_{\text{c}}\) and \(\left\|\cdot\right\|\) denotes the Frobenius norm. Note that the hyperelastic potential \(\psi^{\text{PANN}}(\mathbf{I}^{*})\) is calibrated only through its gradients, i.e., the corresponding stress tensor, which is referred to as Sobolev training [53]. For the optimization process, the _SLSQP optimizer_ (Sequential Least Squares Programming) is applied [40]. Regarding the proposed model, the loss function and the optimizer have no influence on the underlying physics, and they could also be chosen otherwise. An implementation of the described workflow is realized using _Python_, _TensorFlow_ and _SciPy_.
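To make the calibration workflow concrete, the following self-contained sketch fits a parametric stress model to strain-stress tuples by minimizing the MSE of Eq. (46) with SciPy's SLSQP optimizer; for brevity, the two-parameter Neo-Hooke stress of Eq. (20) stands in for the PANN, and the data are synthetic.

```python
# Self-contained sketch of the calibration workflow of Sec. 3.4; the
# two-parameter Neo-Hooke stress of Eq. (20) stands in for the PANN.
import numpy as np
from scipy.optimize import minimize

def T_model(C, params):
    mu, lam = params
    I3 = np.linalg.det(C)
    cofC = I3 * np.linalg.inv(C).T
    return mu * np.eye(3) + (0.5 * lam - (2.0 * mu + lam) / (2.0 * I3)) * cofC

# Synthetic calibration data generated with "true" parameters (0.4, 0.6):
C_cal = [np.diag([s**2, 1.0 / s, 1.0 / s]) for s in np.linspace(0.8, 2.0, 10)]
T_cal = [T_model(C, (0.4, 0.6)) for C in C_cal]

def mse(params):   # Eq. (46) with the squared Frobenius norm
    return np.mean([np.sum((T - T_model(C, params))**2)
                    for C, T in zip(C_cal, T_cal)])

res = minimize(mse, x0=[1.0, 1.0], method="SLSQP")
print(res.x)       # recovers approximately (0.4, 0.6)
```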
**Remark 3.2**.: In the calibration process, it is important to already account for the growth term \(\psi^{\rm growth}(J)\) included in the model \(\psi^{\rm PANN,\Box}(\mathbf{I}^{*})\), cf. Eq. (28), as in particular for calibration data including high volumetric compression, it can have a considerable influence on the model. On the other hand, the _stress and energy normalization_ terms do not necessarily have to be included in the calibration process: the energy normalization term \(\psi^{\rm energy,\Box}\) can be added to the PANN model after calibration; as it is a constant, it does not influence the stress prediction at all. Then, even when the model is calibrated without the stress correction term \(\psi^{\rm stress,\Box}(J,I_{4},\ldots,I_{m})\), it can learn through suitable data how to approximate the stress-free reference configuration [24]. Given that the approximation of the stress-free reference configuration is good enough, the derivatives of the potential w.r.t. the invariants become very small at the identity \(\mathbf{C}=\mathbf{1}\), and consequently, the constants \(\mathfrak{n},\mathfrak{o},\mathfrak{p},\mathfrak{q}\) on which the stress correction depends become very small, cf. Eqs. (36), (41) and (42). Then, both the correction term \(\psi^{\rm stress,\Box}(J,I_{4},\ldots,I_{m})\) and its gradient \(\mathbf{T}^{\rm stress,\Box}\) approximately vanish, and adding the stress correction after the model calibration has a negligibly small influence on the overall model behavior. While it is indeed possible to fulfill the normalization conditions in good approximation by learning them through suitable data, they are only fulfilled exactly when incorporated in the model formulation, e.g., by normalization terms.
## 4 Numerical examples
After the introduction of the PANN hyperelastic constitutive model in the former section, the ability of our approach is now demonstrated by several numerical examples and compared to approaches with reduced physical foundation. We start with simple stress states and analyze interpolation and extrapolation behavior, also for perturbed data. Thereafter, we go on to complex multiaxial deformation states. Finally, the trained PANN model is applied within a finite element simulation.
### Simple stress-strain states
In this subsection, three types of models, namely a network \(\mathbf{P}^{\rm simple}(\mathbf{F})\) directly mapping from \(\mathbf{F}\) to \(\mathbf{P}\), a network \(\psi^{\rm NN,\oplus}(\mathbf{I}^{*})\) accounting for the basic conditions, and the PANN \(\psi^{\rm PANN,\oplus}(\mathbf{I}^{*})\), are compared with respect to their interpolation and extrapolation capabilities for simple stress-strain states. Thereby, the first model, given by
\[P^{\rm simple}_{kL}(\mathbf{F})\coloneqq B_{kL}+\sum_{\alpha=1}^{N^{\rm NN}}W_{ \alpha kL}\,\mathcal{S}\mathcal{P}\left(\sum_{i=1}^{3}\sum_{J=1}^{3}w_{\alpha iJ }F_{iJ}+b_{\alpha}\right)\text{ with }B_{kL},W_{\alpha kL},w_{\alpha iJ},b_{ \alpha}\in\mathbb{R}\,, \tag{47}\]
does not take into account any of the introduced conditions; the second, which is defined by Eq. (23), accounts for thermodynamic consistency, symmetry of \(\mathbf{\sigma}\), objectivity and material symmetry; and the PANN fulfills the conditions of the previous model and additionally includes polyconvexity, the growth condition, as well as energy and stress normalization, see Eq. (28).
In the following examples, the architectures of the three models are set to one hidden layer with \(N^{\rm NN}\coloneqq 4\) neurons. Furthermore, the softplus activation function is used for all models. The calibration of the model accounting for basic conditions and the PANN is performed according to Sec. 3.4. In contrast, naturally, the training of the \(\mathbf{F}\)-\(\mathbf{P}\) model is performed with data for \(\mathbf{F}\) and \(\mathbf{P}\). Furthermore, the Adam optimizer is used for the training of this model. The training data for this study are generated by using the isotropic Neo-Hooke model, cf. Eq. (20), where the constants \((E,\nu)\coloneqq(1\,\mathrm{MPa},0.3)\) are chosen.
#### 4.1.1 Interpolation behavior
In order to analyze the interpolation behavior of the three models, we investigate three different sets of uniaxial stress states: ideal data, offset data, as well as noisy data.
**Ideal data.** The three models are first trained on analytical data from a uniaxial tension and compression test in \(X_{1}\)-direction, cf. Fig. 3(a1). The principal stretch \(\lambda_{1}\in\mathbb{R}_{>0}\) in \(X_{1}\)-direction is prescribed and the principal stretch \(\lambda_{2}\in\mathbb{R}_{>0}\) has to be calculated in such a way that the stresses \(T_{22}^{\oplus},T_{33}^{\oplus}\) vanish, so that the components of the deformation tensor and the stress tensor result in
\[(C_{KL})^{\text{uniaxial}}:=\begin{pmatrix}\lambda_{1}^{2}&0&0\\ 0&\lambda_{2}^{2}&0\\ 0&0&\lambda_{2}^{2}\end{pmatrix},\quad(T_{KL})^{\text{uniaxial},\oplus}= \begin{pmatrix}T_{11}^{\oplus}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}. \tag{48}\]
Accordingly, this leads to a uniaxial stress state, which is commonly applied in experimental investigations. Only 30 tuples \(({}^{i}\mathbf{C},{}^{i}\mathbf{T}^{\oplus})\) with \(0.8\leq\lambda_{1}\leq 2\) are used as training data for the models. Note that the tuple for \(\lambda_{1}=1\) occurs twice in the data set, since this point is assigned to both the compression and tension ranges. This has no influence on the approximation behavior of the PANN model \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\), since this model is normalized by construction. However, for both the \(\mathbf{F}\)-\(\mathbf{P}\) model and the basic conditions model \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\), the undeformed state is weighted more heavily in the data set.
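A sketch of how such a uniaxial dataset can be generated numerically is given below: for each prescribed \(\lambda_{1}\), the lateral stretch \(\lambda_{2}\) is determined from the condition \(T_{22}^{\oplus}=0\) by a scalar root search, reusing `T_nh` from the Neo-Hooke sketch in Sec. 2.3.1; the bracketing interval is an assumption.

```python
# Sketch of generating the uniaxial dataset of Eq. (48); reuses T_nh from the
# Neo-Hooke sketch above. The bracketing interval of brentq is an assumption.
import numpy as np
from scipy.optimize import brentq

def lateral_stretch(lam1):
    """Find lambda_2 such that the lateral stress component T_22 vanishes."""
    f = lambda lam2: T_nh(np.diag([lam1**2, lam2**2, lam2**2]))[1, 1]
    return brentq(f, 0.1, 10.0)

data = []
for lam1 in np.linspace(0.8, 2.0, 30):            # 30 tuples (C, T)
    lam2 = lateral_stretch(lam1)
    C = np.diag([lam1**2, lam2**2, lam2**2])
    data.append((C, T_nh(C)))
```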
After calibration, the stress-stretch curves \(T_{11}^{\text{simple}}(\lambda_{1})\), \(T_{11}^{\text{NN},\oplus}(\lambda_{1})\), and \(T_{11}^{\text{PANN},\oplus}(\lambda_{1})\) of the three models are available in analytical form and are compared with the training data in Fig. 3(a1). As expected, it can be seen that all three NN-based models are able to perfectly approximate the ideal dataset, which is also shown by the MSEs given in Tab. 1. The basic conditions model as well as the PANN also reproduce the energy \(\psi^{\oplus}(\lambda_{1})\) with high accuracy, although it was not trained directly, see Fig. 3(a2). Thus, incorporating physics does not degrade the model prediction quality. In addition, we would like to point out that only the PANN model really fulfills the normalization condition, i.e., \(T_{11}^{\text{PANN},\oplus}(\lambda_{1}=1)=0\), exactly.

Figure 3: Predicted stress-stretch curves \(T_{11}^{\oplus}(\lambda_{1})\) of the PANN, the NN fulfilling the basic conditions, as well as the \(\mathbf{F}\)-\(\mathbf{P}\) model with one hidden layer containing \(N^{\text{NN}}:=4\) neurons. The trained models are compared to data from a uniaxial tension/compression test in \(X_{1}\)-direction for: (a1) ideal isotropic Neo-Hooke data, (b1) data with offset, and (c1) noisy data. In (a2)–(c2), the corresponding energies of the models are shown. For reasons of improved comparability, energy normalization is applied for \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\) which, however, has no influence on the stress prediction.
**Offset data.** The next step is to investigate the flexibility of the models for non-ideal data. For this, the stress components of the ideal dataset are shifted according to
\[(T_{KL})^{\text{offset},\oplus}:=(T_{KL})^{\text{uniaxial},\oplus}+\begin{pmatrix} 100&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\text{kPa}\,, \tag{49}\]
so that the calibration dataset is no longer normalized for the undeformed state, i.e., \(T_{11}^{\text{offset},\oplus}(\lambda_{1}=1)\neq 0\). Thereby, the normalization condition included in the PANN model is of particular interest now.
As can be seen in Fig. 3(b1), the \(\mathbf{F}\)-\(\mathbf{P}\) model \(\mathbf{P}^{\text{simple}}(\mathbf{F})\) perfectly approximates the training data again, see also the MSE given in Tab. 1. However, this is only possible since this simple model does not know about the existence of a potential and is thus not really a hyperelastic one. In contrast, the basic conditions model \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\) is not able to reproduce the training data well near \(\lambda_{1}=1\), which is due to the fact that it cannot violate the material symmetry and that the stress results from the derivative of an elastic potential. However, for the undeformed state a non-zero stress is predicted, which is in contradiction to the expectation of a standard elastic model.
As can be seen in Fig. 3(b1), due to the model approach according to Sec. 3, the normalization condition is only exactly fulfilled by the PANN model \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\) after model calibration, i.e., \(T_{11}^{\text{PANN},\oplus}(\lambda_{1}=1)=0\). Although it becomes evident from the proofs in Sec. 3, it is now demonstrated that this important condition is fulfilled at all times by construction, even for the case when the training data have an offset. Similar to the basic conditions model, due to the stress normalization and the polyconvexity, the PANN model does not approximate the data points in the neighborhood of \(\lambda_{1}=1\) well - but in a physically meaningful way - resulting in an inflection point within the stress-stretch curve. Outside this neighborhood, the approximation of the data points is close to perfect. Thereby, the non-convexity of \(T_{11}\) in \(\lambda_{1}\) should not be mistaken for a violation of the polyconvexity condition. The proposed NN model is polyconvex by construction, cf. Remark 3.1, and furthermore, polyconvexity does not pose restrictions on the dependence of \(\mathbf{T}\) on \(\mathbf{F}\).
Due to the described physical restrictions, the MSEs of the basic conditions model and the PANN are several orders of magnitude larger compared to the \(\mathbf{F}\)-\(\mathbf{P}\) model for the offset data, cf. Tab. 1. The elastic energies given by \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\) and \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\) are shown in Fig. 3(b2). Thereby, no significant difference between the two models occurs. Note that the training data are not included in this plot because no energy is available for the offset dataset under consideration.
**Noisy data.** In the last step, the approximation behavior of the three models with respect to noisy data is investigated. For this purpose, the stress components of a larger ideal dataset containing 100 tuples \(({}^{i}\mathbf{C},{}^{i}\mathbf{T}^{\oplus})\) are shifted according to
\[(T_{KL})^{\text{noisy},\oplus}:=(T_{KL})^{\text{uniaxial},\oplus}+ \begin{pmatrix}\xi&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\quad\text{with}\quad\xi\sim\mathcal{N}(\mu,\sigma^{2})\,, \tag{50}\]
where \(\xi\) describes a normally distributed noise with mean value \(\mu=0\,\text{kPa}\) and standard deviation \(\sigma=50\,\text{kPa}\).
Although the training data are overlaid with a strong Gaussian noise, as it can be seen in Fig. 3(c1), all three calibrated models give a result that appears to be physically reasonable. To be more precise, the
predicted stress-stretch curves are monotonically increasing and the potentials \(\psi^{\text{PANN},\oplus}(\lambda_{1})\) and \(\psi^{\text{NN},\oplus}(\lambda_{1})\) shown in Fig. 3(c2) are convex in \(\lambda_{1}\), both of which are to be expected for uniaxial stress; moreover, the curves are smooth and do not show any marked oscillations. It can be seen that the stress prediction is approximately zero for the undeformed state in this case, see Fig. 3(c1). However, only the PANN model satisfies all relevant conditions in an exact manner, which also applies to stress-strain states not included in the training. The advantage of this property will be revealed in the next study. The _F-P_ model even violates the balance of angular momentum, since \(\mathbf{T}^{\text{simple}}\) and \(\mathbf{\sigma}^{\text{simple}}\) are not symmetric.
The MSEs for all three models are again given in Tab. 1. Due to the imperfect data, the values are increased by several orders of magnitude compared to the ideal case.
#### 4.1.2 Extrapolation behavior
Now, the extrapolation behavior of the compared models is analyzed by considering three load cases: uniaxial tension/compression, biaxial tension/compression, as well as simple shear. For the training of \(\mathbf{P}^{\text{simple}}(\mathbf{F})\), \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\), and \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\) only uniaxial stress states and the corresponding deformations within the narrow range \(0.8\leq\lambda_{1}\leq 1.1\) have been used here, cf. Eq. (48), which are stored in the dataset \(\mathcal{D}^{\text{uniaxial},\oplus}\) with \(|\mathcal{D}^{\text{uniaxial},\oplus}|=15\). Again, note that the tuple for \(\lambda_{1}=1\) occurs twice in the data set, since this point is assigned to both the compression and tension ranges. Given that the PANN model \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\) is normalized by construction, this does not affect the approximation behavior. In contrast, for both the _F-P_ model and the basic conditions model \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\), the undeformed state is weighted more heavily in the data set.
**Uniaxial stress states.** As one can see in Fig. 4(a1), all models again perfectly approximate the training data, which is also evident from the MSEs given in Tab. 2. However, the _F-P_ model fails immediately when it has to extrapolate. The model \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\), in contrast, is very good at extrapolating up to a stretch of \(\lambda_{1}=4\), which is really impressive from the authors' point of view. Likewise, the related energy shown in Fig. 4(a2) is very well reproduced. Thus, the advantage of invariant-based approaches [12, 19, 24, 25, 30, 32, 33, 48], which approximate the energy and not directly the stress, is particularly evident here. Even more impressive is the result for the PANN model, which is able to reproduce the data almost perfectly despite extrapolation. This is also evident from the MSE for the extrapolated data, which is two orders of magnitude lower for the PANN model than for the basic conditions model, cf. Tab. 2. The observed improvement results from the insertion of the additional physical principles, namely polyconvexity, growth condition, as well as energy and stress normalization, into the PANN model.
\begin{table}
\begin{tabular}{c c c c} Model & Ideal data & Data with offset & Noisy data \\ \hline _F-P_ model & \(3.14\,\text{kPa}^{2}\) & \(2.87\,\text{kPa}^{2}\) & \(2.02\cdot 10^{3}\,\text{kPa}^{2}\) \\ Basic conditions NN & \(1.74\cdot 10^{-4}\,\text{kPa}^{2}\) & \(1.22\cdot 10^{3}\,\text{kPa}^{2}\) & \(2.03\cdot 10^{3}\,\text{kPa}^{2}\) \\ PANN model & \(5.92\cdot 10^{-5}\,\text{kPa}^{2}\) & \(2.77\cdot 10^{3}\,\text{kPa}^{2}\) & \(2.02\cdot 10^{3}\,\text{kPa}^{2}\) \\ \end{tabular}
\end{table}
Table 1: MSE according to Eq. (46) achieved with the _F-P_ model, the NN fulfilling the basic conditions, and the PANN for training with different uniaxial stress data. The respective stress predictions are shown in Fig. 3.

**Biaxial stress states and simple shear.** Now, as a next step, we want to evaluate how the models perform when they have to extrapolate to completely unknown load cases, i.e., biaxial stress states
\[(C_{KL})^{\text{biaxial}}:=\begin{pmatrix}\lambda_{1}^{2}&0&0\\ 0&\lambda_{1}^{2}&0\\ 0&0&\lambda_{2}^{2}\end{pmatrix},\quad(T_{KL})^{\text{biaxial},\oplus}= \begin{pmatrix}T_{11}^{\oplus}&0&0\\ 0&T_{11}^{\oplus}&0\\ 0&0&0\end{pmatrix} \tag{51}\]
and simple shear
\[(C_{KL})^{\text{shear}}:=\begin{pmatrix}1&\gamma&0\\ \gamma&\gamma^{2}+1&0\\ 0&0&1\end{pmatrix} \tag{52}\]
with \(0\leq\gamma\leq 2\) denoting the shearing. The results for the stress predictions are given in Fig. 4(b1) and (c1). Again, the simple approach \(\mathbf{P}^{\text{simple}}(\mathbf{F})\) completely fails for the unseen data. In the simple shear load case, even a completely implausible shear stress \(T_{12}^{\text{simple}}\approx 0\) is predicted, since the model has learned in training that the off-diagonal elements of the stress tensor vanish. In contrast, the basic conditions model \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\) is still quite good at physically plausible extrapolation, but shows noticeable deviations for \(\lambda_{1}>1.4\) within the biaxial stress loading. A similar result is obtained for simple shear, where noticeable deviations occur starting at \(\gamma\approx 0.8\). The same holds for the approximation of the energy, which is shown in Fig. 4(b2) and (c2). Surprisingly, the PANN model is able to predict the data almost perfectly for biaxial loading and simple shear, although full extrapolation is required here. The significantly improved extrapolation capability of the PANN model \(\psi^{\text{PANN},\oplus}(\mathbf{I}^{*})\) compared to \(\mathbf{P}^{\text{simple}}(\mathbf{F})\) and \(\psi^{\text{NN},\oplus}(\mathbf{I}^{*})\) also becomes evident from the MSEs given in Tab. 2. To illustrate into which ranges the models must extrapolate for the considered test cases, the deformation states examined are shown within the invariant space in Fig. 5.

Figure 4: Predicted stress-stretch curves \(T_{KL}^{\oplus}(\lambda_{1})\) and \(T_{KL}^{\oplus}(\gamma)\) of the PANN, the NN fulfilling the basic conditions, as well as the \(\mathbf{F}\)-\(\mathbf{P}\) model with one hidden layer containing \(N^{\text{NN}}:=4\) neurons. Shown is the extrapolation of the models trained with uniaxial stress data stored in \(\mathcal{D}^{\text{uniaxial},\oplus}\) from an isotropic Neo-Hooke model for: (a1) uniaxial tension/compression test, (b1) biaxial tension/compression test, and (c1) a simple shear test. The corresponding energies are given in (a2)–(c2).
Thus, summarizing the findings of the presented study, the adaptation of NN-based models in such a way that they fulfill physical conditions for arbitrary loadings does not necessarily improve the approximation of training data, at least for the simple load cases considered, but it does allow for a significant improvement in the extrapolation capability.
### Complex multiaxial stress-strain states
Now, as we have shown the advantage of _invariant-energy-based NN approaches_, we restrict our attention to this model class in the following. It is of interest whether the exact fulfillment of the physical principles introduced in Sec. 2.2 could be too restrictive, so that the model is not flexible enough to sufficiently approximate complex multiaxial stress-strain states given by a nonlinear material behavior. To this end, the four different architectures \(\psi^{\text{(i)},\Box}(\mathbf{I}^{*})\), \(\psi^{\text{(ii)},\Box}(\mathbf{I}^{*})\), \(\psi^{\text{(iii)},\Box}(\mathbf{I}^{*})\), as well as \(\psi^{\text{(iv)},\Box}(\mathbf{I}^{*})\) satisfying
1. the basic conditions, i.e., thermodynamic consistency, symmetric \(\mathbf{\sigma}\), objectivity and material symmetry,
2. the conditions of (i) + polyconvexity,
3. the conditions of (ii) + growth condition, as well as
4. the conditions of (iii) + energy and stress normalization
are compared to each other for isotropic as well as transversely isotropic behavior, respectively. Note that the first and the fourth model are equal to \(\psi^{\text{NN},\Box}(\mathbf{I}^{*})\) and \(\psi^{\text{PANN},\Box}(\mathbf{I}^{*})\).
\begin{table}
\begin{tabular}{c c c c c} Model & Training uniaxial stress & Extrapolation uniaxial stress & Biaxial stress & Simple shear stress \\ \hline \(\mathbf{F}\)-\(\mathbf{P}\) model & \(1.95\cdot 10^{-1}\) kPa\({}^{2}\) & \(9.43\cdot 10^{4}\) kPa\({}^{2}\) & \(1.35\cdot 10^{5}\) kPa\({}^{2}\) & \(8.06\cdot 10^{5}\) kPa\({}^{2}\) \\ Basic conditions NN & \(1.30\cdot 10^{-4}\) kPa\({}^{2}\) & \(1.26\cdot 10^{5}\) kPa\({}^{2}\) & \(6.24\cdot 10^{5}\) kPa\({}^{2}\) & \(5.65\cdot 10^{4}\) kPa\({}^{2}\) \\ PANN model & \(3.91\cdot 10^{-5}\) kPa\({}^{2}\) & \(6.21\cdot 10^{2}\) kPa\({}^{2}\) & \(4.11\cdot 10^{3}\) kPa\({}^{2}\) & \(1.58\cdot 10^{-5}\) kPa\({}^{2}\) \\ \end{tabular}
\end{table}
Table 2: MSE according to Eq. (46) achieved with the \(\mathbf{F}\)-\(\mathbf{P}\) model, the NN fulfilling the basic conditions, and the PANN model for training with uniaxial tension/compression (\(0.8\leq\lambda_{1}\leq 1.1\)) as well as extrapolation to unknown states.
Figure 5: Location of the deformation states within the isotropic invariant space for uniaxial tension/compression test, biaxial tension/compression test, and simple shear test. Shown are the sectional planes \(I_{1}\)-\(I_{2}\), \(I_{1}\)-\(I_{3}\), and \(I_{2}\)-\(I_{3}\).
#### 4.2.1 Generation of training data
In a first step, the data basis for the training of the NNs has to be acquired. In the absence of real experiments, it is generated numerically here. To this end, a uniaxial tensile test is applied to a virtual sample in an FE simulation, see Fig. 6(a) and (b). Here, the sample's geometric dimensions are specified by \(L_{x_{1}}\times L_{x_{2}}\times L_{x_{3}}=(100\times 100\times 5)\,\mathrm{mm}\) and, within the uniaxial tensile test, the displacement boundary condition with a maximum value \(\hat{u}\leq 40\,\mathrm{mm}\) is linearly increased in each increment of the FE simulation. The geometry generation and the meshing were realized using the tool _Gmsh_ [15].
Within this virtual experiment, the required dataset \(\mathcal{D}^{\oplus}\) consisting of the tuples \(\mathcal{D}^{\oplus}_{i}:=(^{i}\mathbf{C},^{i}\mathbf{T}^{\oplus})\) is collected at the quadrature points of the finite elements within 30 increments. In order to demonstrate the ability of the proposed NN-based method, the nonlinear stress-strain relation (20) is chosen for the isotropic constitutive behavior of the sample's material. Here the material parameters for the analytical model are chosen according to Tab. 3.
Following the work of Kalina et al. [20] and prescribing a relative tolerance \(\eta:=1\,\%\), the dataset was then filtered with respect to the invariants \((^{i}I_{1},^{i}I_{2},^{i}I_{3})\in\mathbb{R}^{3}\) for the corresponding deformation state \({}^{i}\mathbf{C}\) to obtain a reduced dataset \(\mathcal{D}^{\mathrm{red},\oplus}\) with \(|\mathcal{D}^{\mathrm{red},\oplus}|=963\) for the calibration process of the PANN. This procedure took advantage of the fact that the introduced model lives in the space of invariants rather than in the space of deformations.
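The exact filtering criterion is the one of Kalina et al. [20]; the following sketch shows a simplified interpretation of such an invariant-space reduction, in which a tuple is kept only if its invariant triple differs from all previously kept triples by more than the relative tolerance \(\eta\).

```python
# Simplified interpretation of an invariant-space filtering (the exact
# criterion follows Kalina et al. [20]): keep a tuple only if its invariant
# triple differs from all previously kept triples by more than eta (relative).
import numpy as np

def filter_dataset(invariants, eta=0.01):
    """invariants: (n, 3) array of (I1, I2, I3); returns indices of kept tuples."""
    kept = []
    for i, I in enumerate(invariants):
        if all(np.linalg.norm(I - invariants[j]) > eta * np.linalg.norm(I)
               for j in kept):
            kept.append(i)
    return kept
```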
Finally, different deformation states are available, which are stored for the isotropic case with the corresponding stresses of the Neo-Hooke model in the dataset \(\mathcal{D}^{\mathrm{red},\oplus}\). For the transversely isotropic case with the preferred direction parallel to the \(X_{1}\)-direction, the same deformation states are chosen as data basis and stored with the corresponding stresses of Schröder's model (22) in the dataset \(\mathcal{D}^{\mathrm{red},\parallel}\) with \(|\mathcal{D}^{\mathrm{red},\parallel}|=963\) as well. The chosen parameters of the analytical transversely isotropic model are also given in Tab. 3.

\begin{table}
\begin{tabular}{c c||c c c c c c c} \(E\) & \(\nu\) & \(\beta\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\delta_{1}\) & \(\delta_{2}\) & \(\alpha_{4}\) & \(\eta_{1}\) \\ \hline \(10^{3}\,\mathrm{kPa}\) & 0.3 & 2 & 8 kPa & 0 kPa & 10 kPa & 56 kPa & 2 & 10 kPa \\ \end{tabular}
\end{table}
Table 3: Material parameters of the isotropic Neo-Hooke model and the transversely isotropic model proposed by Schröder et al. [45]. The models are given in Eqs. (20) and (22), respectively.

Figure 6: Uniaxial tensile test for data generation: (a) applied boundary conditions with prescribed displacement \(\hat{u}\,\mathbf{e}_{1}\) and (b) inhomogeneous specimen geometry with \(L_{x_{1}}\times L_{x_{2}}\times L_{x_{3}}=(100\times 100\times 5)\,\mathrm{mm}\).

#### 4.2.2 Overall prediction quality

First, for the reduced dataset \(\mathcal{D}^{\mathrm{red},\Box}\), we compare the overall prediction quality of the trained NNs (i)-(iv), satisfying the different physical constraints introduced in Sec. 2.2, for the isotropic as well as the transversely isotropic case. Thereby, for the transversely isotropic case, \(\beta=2\) is chosen for the structural tensor \(\mathbf{G}\) given in Eq. (18). In the following examples, the network architecture is set to one hidden layer with \(N^{\text{NN}}:=8\) neurons. The NN-based models are trained with respect to the dataset \(\mathcal{D}^{\text{red},\Box}\), where a random division into calibration (70 %) and test (30 %) data is made once. Within the training process, the weights and bias values \(W_{\alpha},b_{\alpha},w_{\alpha\beta}\) and \(w^{*}_{\alpha\gamma}\) are then determined according to Sec. 3.4. Within one training run, the respective NN is trained 30 times, and the parameters of the best achieved training state with the lowest MSE (46), see Sec. 3.4, are stored at the end [20].7 In order to evaluate the approximation behavior of the trained NNs, we compute the relative error measure
Footnote 7: Due to local minima within the loss function, the optimization procedure which is applied here depends on the starting values of the weights and biases. Thus, the network is trained several times to overcome this, cf. Kalina et al. [19, 20].
\[\varepsilon^{\Box}:=\frac{\max\limits_{i\in\mathbb{N}_{\leq|\mathcal{D}^{\Box}|}}\left\|{}^{i}\mathbf{T}^{\Box}-{}^{i}\mathbf{T}^{\text{model},\Box}\right\|}{\max\limits_{j\in\mathbb{N}_{\leq|\mathcal{D}^{\Box}|}}\left\|{}^{j}\mathbf{T}^{\Box}\right\|} \tag{53}\]
for the Frobenius norm \(\left\|\mathbf{T}\right\|\) of the second Piola-Kirchhoff stress. To exclude random effects, a statistical study is performed, i.e., a total of 300 training runs has been performed for each model. Since no uniform distribution with respect to the error measure \(\varepsilon^{\Box}\) can be seen from the results, no underlying distribution is assumed here, see the histogram plots given in App. C. Thus, median \(\varepsilon^{\text{med,\Box}}\) and quantiles are used to compare the interpolation quality of the models with each other. The results of the described study are shown in Fig. 7.
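A direct numpy transcription of Eq. (53) could read as follows; the list names are placeholders.

```python
# Numpy transcription of the relative error measure of Eq. (53);
# T_data and T_pred are lists of 3x3 stress tensors (placeholder names).
import numpy as np

def rel_error(T_data, T_pred):
    num = max(np.linalg.norm(T - Tm) for T, Tm in zip(T_data, T_pred))
    den = max(np.linalg.norm(T) for T in T_data)
    return num / den   # np.linalg.norm of a matrix defaults to Frobenius
```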
**Isotropic model.** Regarding the NNs' stress approximation for the isotropic case, an extremely good prediction quality with a median of the errors \(\varepsilon^{\text{med},\oplus}<0.003\,\%\) is achieved for all models, cf. Fig. 7(a). As one can see, the approximation quality of \(\psi^{\text{(ii)},\oplus}(\mathbf{I}^{*})\) accounting for the polyconvexity worsens in comparison to the architecture \(\psi^{\text{(i)},\oplus}(\mathbf{I}^{*})\) which only fulfills the basic conditions. Thus, the limitation to positive weights reduces the approximation quality of the NN in the statistical sense. However, as already mentioned, the errors are still extremely low. If the growth condition is further added, the error of this NN, denoted as \(\psi^{\text{(iii)},\oplus}(\mathbf{I}^{*})\), which fulfills basic conditions + polyconvexity + growth condition, stays in a similar range. Surprisingly, when the normalization conditions are finally added, the resulting distribution achieved with \(\psi^{\text{(iv)},\oplus}(\mathbf{I}^{*})\) is very similar to the first model, i.e., the basic conditions model. Thus, summarizing the study carried out, adding all common physical principles of hyperelasticity to the NN-based model does not lead to a deterioration of the approximation for the isotropic case. This holds even though conditions such as the positive weights accounting for polyconvexity reduce the flexibility of the NN.

Figure 7: Boxplots with the median \(\varepsilon^{\text{med},\Box}\), the 25th and 75th as well as the 1st and 99th percentile of the relative error measure \(\varepsilon^{\Box}\) according to Eq. (53) for the reduced dataset \(\mathcal{D}^{\text{red},\Box}\) and the (a) isotropic NNs or (b) transversely isotropic NNs with (i) basic conditions, (ii) polyconvexity, (iii) growth condition and polyconvexity as well as (iv) the PANN satisfying all conditions including normalization.
Finally, regarding the non-negativity condition on the elastic energy, a numerical test has been applied. To scan only physically meaningful deformation states, the principal stretches \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{R}_{>0}\) are varied and \(\mathbf{C}^{\text{diag}}(\lambda_{1},\lambda_{2},\lambda_{3}):=(\lambda_{1}^{2}\mathbf{e}_{1}\otimes\mathbf{e}_{1}+\lambda_{2}^{2}\mathbf{e}_{2}\otimes\mathbf{e}_{2}+\lambda_{3}^{2}\mathbf{e}_{3}\otimes\mathbf{e}_{3})\) is computed as a diagonal tensor, which is sufficient for isotropy. The invariants are calculated and the respective energy is determined. Within a range of \(1/10\leq\lambda_{\alpha}\leq 10\) with \(\alpha\in\mathbb{N}_{\leq 3}\), only positive energies have been numerically detected for both the Neo-Hooke model (19) and, as an example, the trained isotropic PANN with the relative error \(\varepsilon^{\oplus}\) closest to the median \(\varepsilon^{\text{med},\oplus}\).
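Such a scan can be set up in a few lines; the sketch below samples the principal stretches on a logarithmic grid and evaluates the energy on diagonal \(\mathbf{C}\). For illustration, the analytical potential `psi_nh` from the Neo-Hooke sketch stands in for the trained PANN, and the grid resolution is an assumption.

```python
# Sketch of the numerical non-negativity scan over diagonal C (sufficient for
# isotropy); psi_nh from the Neo-Hooke sketch stands in for the trained PANN.
import itertools
import numpy as np

stretches = np.geomspace(0.1, 10.0, 25)   # assumed grid resolution
min_energy = min(psi_nh(np.diag([l1**2, l2**2, l3**2]))
                 for l1, l2, l3 in itertools.product(stretches, repeat=3))
print(min_energy >= 0.0)                  # True: no negative energy detected
```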
**Transversely isotropic model.** Regarding the NNs' stress approximation for the transversely isotropic scenario given in Fig. 7(b), compared to isotropy, the error is now an order of magnitude higher even for the best model \(\psi^{(\text{i}),\parallel}(\mathbf{I}^{*})\). This can be explained by the increased complexity of the reference model (21). However, with a median of the errors \(\varepsilon^{\text{med},\parallel}<0.4\,\%\), a high level of prediction quality is still attained for all models. Now the four models (i)-(iv) are compared. Applying the model \(\psi^{(\text{ii}),\parallel}(\mathbf{I}^{*})\), which takes polyconvexity into account, leads to errors that are increased by an order of magnitude compared to the basic conditions architecture \(\psi^{(\text{i}),\parallel}(\mathbf{I}^{*})\). Thus, the limitation to positive weights significantly reduces the approximation quality of the NN. If the growth condition is further added, the error of this NN, denoted as \(\psi^{(\text{iii}),\parallel}(\mathbf{I}^{*})\), stays in a similar range. Finally, when the normalization conditions are added, the resulting distribution of errors achieved with \(\psi^{(\text{iv}),\parallel}(\mathbf{I}^{*})\) shifts again to slightly larger values.
We would like to emphasize that the models are based on only one hidden layer with \(N^{\text{NN}}=8\) neurons. The approximation quality of the NN-based models could easily be increased by adapting the network architecture, i.e., using several hidden layers or more neurons, in order to be able to represent the complex material behavior sufficiently well. Summarizing, adding all common physical principles of hyperelasticity to the NN-based model leads to a deterioration of the prediction quality of approximately one order of magnitude for the transversely isotropic case. Since the errors are nevertheless very small, the PANN model should be chosen anyway, especially with regard to the very good extrapolation capability.
Regarding the non-negativity condition on the elastic energy, a numerical test has been applied for transverse isotropy, too. Again, to scan only physically meaningful deformation states, the principal stretches \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{R}_{>0}\) are varied and the diagonal tensor \(\mathbf{C}^{\text{diag}}(\lambda_{1},\lambda_{2},\lambda_{3})\) is computed. Here, in addition, rotations of these states perpendicular to the preferred direction \(X_{1}\) have to be considered. Thus, we end up with a five-parameter space for the deformation to be sampled:
\[\mathbf{C}^{\parallel}(\lambda_{1},\lambda_{2},\lambda_{3},\varphi_{2},\varphi_{3 })=\mathbf{R}(\varphi_{2},\varphi_{3})\cdot\mathbf{C}^{\text{diag}}(\lambda_{1}, \lambda_{2},\lambda_{3})\cdot\mathbf{R}^{T}(\varphi_{2},\varphi_{3}). \tag{54}\]
In the equation above, \(\mathbf{R}(\varphi_{2},\varphi_{3})=\mathbf{R}_{x_{2}}(\varphi_{2})\cdot\mathbf{R}_{x_{3}}(\varphi_{3})\in\mathcal{SO}(3)\) is a rotation tensor, with \(\mathbf{R}_{x_{2}}(\varphi_{2})\) and \(\mathbf{R}_{x_{3}}(\varphi_{3})\) denoting rotations around the \(X_{2}\)- and \(X_{3}\)-axis, respectively. With Eq. (54) and the structural tensor given in Eq. (18), the invariants are calculated and the respective energy is determined. Within a range of \(1/10\leq\lambda_{\alpha}\leq 10\), \(0\leq\varphi_{\beta}\leq\pi/2\) with \(\alpha\in\mathbb{N}_{\leq 3},\beta\in\{2,3\}\), only positive energies have been numerically detected for both the model (21) and, as an example, the trained transversely isotropic PANN with the relative error \(\varepsilon^{\parallel}\) closest to the median \(\varepsilon^{\text{med},\parallel}\).
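A numpy sketch of the corresponding five-parameter sampling of Eq. (54) is given below; the rotation conventions match the axis definitions above.

```python
# Numpy sketch of the five-parameter deformation sampling of Eq. (54).
import numpy as np

def R_x2(phi):   # rotation around the X2-axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def R_x3(phi):   # rotation around the X3-axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def C_rotated(l1, l2, l3, phi2, phi3):
    """Rotated diagonal right Cauchy-Green tensor, Eq. (54)."""
    R = R_x2(phi2) @ R_x3(phi3)
    return R @ np.diag([l1**2, l2**2, l3**2]) @ R.T

C = C_rotated(1.5, 0.8, 0.9, 0.3, 1.0)   # one sampled deformation state
```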
#### 4.2.3 Application of the calibrated PANN within an FE simulation
In order to prove the suitability of invariant-based NNs for the numerical simulation of complex shaped samples and components, a comparison to reference results generated with the isotropic Neo-Hooke model (20), which has been used for the calibration, is shown in the following. Thereby, the model fulfilling the basic
conditions and the PANN are each analyzed in two different scenarios: once trained with the complex multiaxial stress-strain states stored in the dataset \(\mathcal{D}^{\text{red},\oplus}\) and once trained with only the uniaxial stress states stored in the dataset \(\mathcal{D}^{\text{uniaxial},\oplus}\). As a boundary value problem, the torsion of a sample is considered, where a distortion of \(\hat{\phi}=45^{\circ}\) is prescribed, cf. Fig. 10(a). For the implementation of the NN-based models into the FE framework, the
material tangents
\[\mathbb{C}^{\text{NN},\oplus}:=4\frac{\partial^{2}\psi^{\text{NN},\oplus}}{ \partial\mathbf{C}\partial\mathbf{C}}\in\mathcal{L}_{4}\quad\text{and}\quad\mathbb{C}^ {\text{PANN},\oplus}:=4\frac{\partial^{2}\psi^{\text{PANN},\oplus}}{\partial\bm {C}\partial\mathbf{C}}\in\mathcal{L}_{4}\;, \tag{55}\]
which are required within the solution via a standard Newton-Raphson scheme, are calculated by means of _automatic differentiation_.
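To make this step concrete, the following minimal PyTorch sketch obtains both the second Piola-Kirchhoff stress and the material tangent of Eq. (55) purely via automatic differentiation, assuming a standard compressible Neo-Hooke potential; the function name and material parameters are illustrative and not taken from the released implementation.

```python
import torch

def psi_neo_hooke(C, mu=1.0, lam=1.0):
    """Compressible Neo-Hooke strain energy as a function of the
    right Cauchy-Green tensor C (symmetric 3x3); an assumed form."""
    I1 = torch.trace(C)
    J = torch.sqrt(torch.det(C))
    return mu / 2.0 * (I1 - 3.0) - mu * torch.log(J) + lam / 2.0 * torch.log(J) ** 2

C = torch.eye(3, dtype=torch.float64).requires_grad_(True)  # undeformed state

# Second Piola-Kirchhoff stress S = 2 * dpsi/dC (vanishes at C = 1,
# i.e., the stress normalization condition holds for this potential)
S = 2.0 * torch.autograd.grad(psi_neo_hooke(C), C)[0]

# Material tangent CC = 4 * d^2 psi / dC dC, cf. Eq. (55)
CC = 4.0 * torch.autograd.functional.hessian(psi_neo_hooke, C)
print(S.shape, CC.shape)  # torch.Size([3, 3]) torch.Size([3, 3, 3, 3])
```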
To evaluate the NNs' prediction quality, a comparison to the local stress field \(P_{31}^{\text{nh}}\) is considered in the following, cf. Fig. 9(b). We start with the models trained on the complex multiaxial stress-strain states stored in the dataset \(\mathcal{D}^{\text{red},\oplus}\), where the covered domain in the invariant space is shown in Fig. 8. For both NNs, relative errors below \(0.004\,\%\) occur, see the surface plots given in Fig. 9(c), (d), i.e., the predictions are almost perfect with respect to the reference stresses \(P_{31}^{\text{nh}}\). This is not surprising, since a mapping of the deformations of the torsional sample into the invariant space shows that it is almost completely covered by \(\mathcal{D}^{\text{red},\oplus}\). Therefore, nearly no extrapolation is necessary here, see also the discussion in [20]. In the next step, we consider the NN-based models trained only on uniaxial stress states stored in the dataset \(\mathcal{D}^{\text{uniaxial},\oplus}\), where the covered curve in the invariant space is again shown in Fig. 8. Looking at the results in Figs. 10(c) and (d), we see a significant error of \(3.7\,\%\) for the basic conditions model, which is due to the need for extrapolation. In contrast, very low errors below \(0.14\,\%\) are achieved with the PANN model.
Consequently, in this example, both the basic conditions model and the PANN approach are very well able to describe the learned nonlinear constitutive behavior within the FE simulation of a comparatively complex load case, with the PANN performing better, especially when extrapolation is required. Thereby, NNs with only \(N^{\text{NN}}=8\) and \(N^{\text{NN}}=4\) neurons in a single hidden layer are used, respectively, which is very small compared to typical NNs applied to problems originating from computational mechanics. Moreover, the implemented basic conditions model and the PANN provide the typical quadratic convergence of the Newton iteration and are therefore computationally very efficient.
Figure 10: FE-simulation of torsional sample: (a) loading conditions, (b) macroscopic stress field \(P_{31}^{\text{nh}}\) on the deformed configuration \(\mathcal{B}\) by specifying a distortion of \(\hat{\phi}=45^{\circ}\), and (c) relative error of the PANN-stress field \(P_{31}^{\text{PANN},\oplus}\) as well as (d) relative error of the stress field \(P_{31}^{\text{NN},\oplus}\) of basic conditions model with respect to \(P_{31}^{\text{nh}}\). The NNs trained with only uniaxial stress-strain data \(\mathcal{D}^{\text{uniaxial},\oplus}\), \(|\mathcal{D}^{\text{uniaxial},\oplus}|=15\), and \(N^{\text{NN}}=4\) neurons in only one hidden-layer were implemented as constitutive equations each.
## Conclusion
In the present work, an NN-based constitutive model for compressible finite strain hyperelasticity is proposed. This approach, denoted as PANN, fulfills all common constitutive conditions belonging to the class of hyperelasticity, i.e., _thermodynamic consistency, symmetry of the stress tensor, objectivity, material symmetry, growth condition_, as well as _normalization of energy and stress_, in an exact way. Furthermore, the condition on the _non-negativity of the strain energy_ is numerically validated for the trained models. The proposed model allows the description of highly nonlinear hyperelastic relationships while taking into account the underlying physics, and it is trainable with standard machine learning libraries such as TensorFlow.
Starting with a short literature review on NN-based elastic models, an introduction to finite strain hyperelasticity is given, including an overview of general requirements as well as two specific anisotropy classes and models. Based on this, the PANN approach is built up step by step: using different sets of invariants as inputs for a convex neural network, the model fulfills the balance of angular momentum, objectivity and material symmetry conditions, as well as thermodynamic consistency and polyconvexity. Then, the volumetric growth condition is fulfilled by using an analytical growth term. Finally, energy and stress normalization are fulfilled by polyconvex normalization terms. The stress normalization terms depend on the material symmetry group and are exemplarily derived for isotropic and transversely isotropic material behavior. However, the procedure for fulfilling physical conditions, e.g., using normalization terms, can also be applied to analytical or other machine learning approaches. Afterwards, the applicability of the PANN models is demonstrated, where a calibration to isotropic and transversely isotropic data generated with analytical potentials is performed. For all cases, even for highly multiaxial deformation states and noisy stress-strain data, a highly accurate and robust prediction quality has been shown. In addition, it has been shown that the PANN is characterized by an extremely good extrapolation capability. Finally, the straightforward application in FE simulations is demonstrated.
Summarizing, the introduced PANN model for compressible finite strain hyperelasticity has proven to be an efficient tool, which can be used in numerous applications stemming from solid mechanics. Thereby, including physics into the NN-based model is the crucial step for several reasons: first of all, it leads to reliable, i.e., physically sensible, model predictions. But more than that, it is also essential in order to improve the generalization properties of NNs [22, 27] and allows for extrapolation [19, 25]. Furthermore, only with the pronounced mathematical structure that the inclusion of constitutive conditions provides is it possible to calibrate the models with the small amounts of data usually available in engineering applications. Finally, this also enables the use of comparatively small network architectures, cf. [19, 25]. Thus, when constructing NN-based constitutive models, as many constitutive conditions as possible should be fulfilled in an exact way, as this is the only way to ensure their fulfillment with absolute certainty. However, for some applications this might not be possible, e.g., when no complete functional basis in invariants is available for the symmetry group under consideration and using invariants would restrict the model flexibility too much, cf. [24]. Only then should the structure of the model be weakened by fulfilling some constitutive conditions in an approximate fashion, in order to gain more model flexibility. Besides that, as already mentioned, the structure and reliability that the exact fulfillment of constitutive conditions provides should always be prioritized.
In order to generalize our proposed NN-based framework for elastic materials, several extensions are planned in the future. For instance, polyconvex normalization terms for further material symmetry groups [8] have to be derived. Furthermore, in order to allow an automated discovery of the type and orientation of the underlying anisotropy, the usage of tensor-basis NNs would be a valuable addition [12]. The application of the PANN model in the identification of material models from experimental data is also promising [10, 50]. Finally, an extension to multiphysics problems [21, 25] is feasible to expand the possible field of application.
## Acknowledgements
Dominik K. Klein and Oliver Weeger acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) - Grant No. 492770117 and support by the Graduate School of Computational Engineering within the Centre of Computational Engineering at the TU Darmstadt.
All presented computations were performed on a PC cluster at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden. The authors thus thank the ZIH for the generous allocations of computing time. Finally, the authors would like to thank Franz Hirsch and Philipp Metsch for providing the server job scripts to communicate with the HPC cluster.
## CRediT authorship contribution statement
**Lennart Linden:** Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft, Writing - review and editing. **Dominik Klein:** Conceptualization, Formal analysis, Methodology, Visualization, Software, Validation, Writing - original draft, Writing - review and editing. **Karl A. Kalina:** Conceptualization, Formal analysis, Methodology, Visualization, Software, Writing - original draft, Writing - review and editing. **Jorg Brummund:** Formal analysis, Methodology, Writing - review and editing. **Oliver Weeger:** Conceptualization, Funding acquisition, Resources, Writing - review and editing.
## Declarations
**Conflict of interest:** The authors declare that they have no conflict of interest.
## Appendix A Multilayered neural networks
In this work, sets of invariants are used as inputs for FFNNs [1, 26] with scalar-valued output, where the output is used to model a hyperelastic potential. In Sec. 3, an FFNN architecture with only one hidden layer is introduced, which proves to be flexible enough for a lot of practical applications [19, 25]. Nevertheless, the methods introduced in Sec. 3 are not restricted to network architectures containing only one hidden layer, and for the sake of completeness, multilayered network architectures, cf. Fig. 11, are now introduced.
In a nutshell, FFNNs can be seen as a composition of multiple vector-valued functions, where the components are referred to as nodes or neurons and the functions acting in each node are referred to as activation functions. FFNNs can gain flexibility in two ways: either the number of nodes in each hidden layer is increased, as is done for one hidden layer in Sec. 3, or the number of hidden layers is increased. Generalizing the single-layered architecture of Eq. (23) to a network architecture with \(H\) hidden layers and \(N^{\text{NN},h}\) nodes in hidden layer \(h\) yields
\[\mathbf{A}_{\alpha}^{[1]} =\mathcal{F}\bigg{(}\sum_{\beta=1}^{m}w_{\alpha\beta}^{[1]}I_{ \beta}+\sum_{\gamma=1}^{A}w_{\alpha\gamma}^{*[1]}I_{\gamma}^{*}+b_{\alpha}^{[1 ]}\bigg{)}\in\mathbb{R}^{N^{\text{NN},1}}\,, \tag{56}\] \[\mathbf{A}_{\alpha}^{[h]} =\mathcal{F}\bigg{(}\sum_{\beta=1}^{N^{\text{NN},h-1}}w_{\alpha \beta}^{[h]}\mathbf{A}_{\beta}^{[h-1]}+b_{\alpha}^{[h]}\bigg{)}\in\mathbb{R}^{N^{ \text{NN},h}}\text{ with }h=2,\ldots,H\,,\] (57) \[\psi^{\text{NN},\Box}(\mathbf{I}^{*}) =\sum_{\alpha=1}^{N^{\text{NN},H}}W_{\alpha}\,\mathbf{A}_{\alpha}^{[ H]}\in\mathbb{R}\,, \tag{58}\]
with the polyconvex, irreducible and independent invariants \(I_{\beta}(\mathbf{C},\mathcal{S}^{\mathbb{D}})\) as well as the additional invariants \(I_{\gamma}^{*}(\mathbf{I})\) as defined in Sec. 3. By choosing the activation function \(\mathcal{F}\) to be convex and non-decreasing in every hidden layer and requiring \(W_{\alpha}\geq 0\), the polyconvexity of the invariants is preserved through the network. Again, the _Softplus_ activation function \(\mathcal{SP}(x):=\log(1+\exp(x))\) is applied, which is convex and non-decreasing, so that together with non-negative weights and arbitrary bias values, the conditions
\[W_{\alpha},w_{\alpha\beta}^{[h]},w_{\alpha\gamma}^{*[1]}\in\mathbb{R}_{\geq 0 },b_{\alpha}\in\mathbb{R}\quad\forall h\in\mathbb{N}_{\leq H},\alpha\in \mathbb{N}_{\leq N^{\mathrm{NN},h-1}},\beta\in\mathbb{N}_{\leq m},\gamma\in \mathbb{N}_{\leq A} \tag{59}\]
result in a polyconvex neural network, cf. Remark 3.1, see also [24] for a more extensive discussion and explicit proofs.
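For illustration, a minimal sketch of Eqs. (56)-(59) is given below, where the non-negativity constraints are enforced by passing unconstrained parameters through a Softplus before use; this reparametrization trick, the layer widths and the initialization are assumptions made for the example (PyTorch is used here merely for brevity, while the paper mentions TensorFlow).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolyconvexFFNN(nn.Module):
    """Multilayered FFNN on invariants with Softplus activations and
    non-negative effective weights, preserving polyconvexity."""

    def __init__(self, n_inv, hidden=(8, 8)):
        super().__init__()
        dims = (n_inv,) + tuple(hidden)
        self.weights = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(o, i)) for i, o in zip(dims[:-1], dims[1:])]
        )
        self.biases = nn.ParameterList([nn.Parameter(torch.zeros(o)) for o in dims[1:]])
        self.w_out = nn.Parameter(0.1 * torch.randn(dims[-1]))

    def forward(self, invariants):  # invariants: (batch, n_inv)
        a = invariants
        for w, b in zip(self.weights, self.biases):
            # softplus(w) >= 0 realizes Eq. (59); the Softplus activation
            # is convex and non-decreasing, cf. Eqs. (56)-(57)
            a = F.softplus(F.linear(a, F.softplus(w), b))
        return a @ F.softplus(self.w_out)  # scalar potential, Eq. (58)

psi = PolyconvexFFNN(n_inv=5)(torch.rand(4, 5))  # four potential values
```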
## Appendix B Derivatives of invariants
For the convenience of the reader, in Tab. 4, both isotropic and transversely isotropic invariants, as well as their derivatives w.r.t. \(\mathbf{C}\), are provided. From this, the derivative of the determinant \(J=\sqrt{I_{3}}\) follows as
\[\frac{\partial J}{\partial\mathbf{C}}=\frac{J}{2}\mathbf{C}^{-1}\, \tag{60}\]
which implies the derivative of the adapted invariant \(I_{1}^{*}:=-2J\) as
\[\frac{\partial I_{1}^{*}}{\partial\mathbf{C}}=-J\,\mathbf{C}^{-1}\,. \tag{61}\]
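Such closed-form derivatives can be verified directly against automatic differentiation; the following short sketch (purely illustrative) checks Eq. (60) for a random symmetric positive definite \(\mathbf{C}\).

```python
import torch

# random symmetric positive definite right Cauchy-Green tensor
C = torch.rand(3, 3, dtype=torch.float64)
C = (C @ C.T + 3 * torch.eye(3, dtype=torch.float64)).requires_grad_(True)

J = torch.sqrt(torch.det(C))
dJ_dC = torch.autograd.grad(J, C)[0]

# Eq. (60): dJ/dC = (J/2) C^{-1}; autograd treats the entries of C as
# independent, which coincides with Eq. (60) since C is symmetric here
expected = 0.5 * J.detach() * torch.linalg.inv(C.detach())
print(torch.allclose(dJ_dC, expected))  # True
```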
\begin{table}
\begin{tabular}{|l|l|c|} \hline \hline
 & \(I_{\alpha}\) & \(\frac{\partial I_{\alpha}}{\partial\mathbf{C}}\) \\ \hline
\multirow{3}{*}{Isotropy} & \(I_{1}:=\operatorname{tr}\mathbf{C}\) & \(\mathbf{1}\) \\
 & \(I_{2}:=\operatorname{tr}(\operatorname{cof}\mathbf{C})\) & \(I_{1}\mathbf{1}-\mathbf{C}\) \\
 & \(I_{3}:=\operatorname{det}\mathbf{C}\) & \(\operatorname{cof}\mathbf{C}\) \\ \hline
\multirow{2}{*}{Transverse isotropy} & \(I_{4}:=\operatorname{tr}(\mathbf{C}\cdot\mathbf{G})\) & \(\mathbf{G}\) \\
 & \(I_{5}:=\operatorname{tr}(\operatorname{cof}(\mathbf{C})\cdot\mathbf{G})\) & \(I_{5}\mathbf{C}^{-1}-\operatorname{cof}(\mathbf{C})\cdot\mathbf{G}\cdot\mathbf{C}^{-1}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Derivatives of isotropic and transversely isotropic invariants.
Figure 11: Illustration of the multilayered PANN based constitutive model for the material symmetry group \(\square\) under consideration.
## Appendix C Stochastic nature of the training
In this appended section, the raw data obtained in the statistical study discussed in Sec. 4.2.2 are presented. The histograms including the median and the 25th as well as 75th percentiles for the isotropic (\(\Box=\otimes\)) and the transversely isotropic (\(\Box=\|\)) case are given in Fig. 12(a)-(b), respectively. For each model in this statistical investigation, a total of 300 training runs have been completed. Within each run, the respective NN is trained 30 times on the reduced data set \(\mathcal{D}^{\text{red},\Box}\), and the parameters of the optimal training state with the lowest MSE (46), as described in Sec. 3.4, are stored.
Figure 12: Histograms with \(N^{\text{bins}}:=30\) of the error measure \(\varepsilon^{\Box}\) given in Eq. (53) for different anisotropies: (a) isotropic and (b) transversely isotropic invariant-based models. (i) basic conditions, (ii) polyconvexity, (iii) growth condition and polyconvexity, as well as (iv) PANN satisfying all conditions including normalization. The results were generated with 300 training runs, selecting the best of 30 trains each. Calibration has been done with the reduced data set \(\mathcal{D}^{\text{red,\Box}}\). |
2302.12177 | EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for
Ligand Binding Site Prediction | Predicting the binding sites of target proteins plays a fundamental role in
drug discovery. Most existing deep-learning methods consider a protein as a 3D
image by spatially clustering its atoms into voxels and then feed the voxelized
protein into a 3D CNN for prediction. However, the CNN-based methods encounter
several critical issues: 1) defective in representing irregular protein
structures; 2) sensitive to rotations; 3) insufficient to characterize the
protein surface; 4) unaware of protein size shift. To address the above issues,
this work proposes EquiPocket, an E(3)-equivariant Graph Neural Network (GNN)
for binding site prediction, which comprises three modules: the first one to
extract local geometric information for each surface atom, the second one to
model both the chemical and spatial structure of protein and the last one to
capture the geometry of the surface via equivariant message passing over the
surface atoms. We further propose a dense attention output layer to alleviate
the effect incurred by variable protein size. Extensive experiments on several
representative benchmarks demonstrate the superiority of our framework to the
state-of-the-art methods. | Yang Zhang, Zhewei Wei, Ye Yuan, Chongxuan Li, Wenbing Huang | 2023-02-23T17:18:26Z | http://arxiv.org/abs/2302.12177v3 | # EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction
###### Abstract.
Predicting the binding sites of the target proteins plays a fundamental role in drug discovery. Most existing deep-learning methods consider a protein as a 3D image by spatially clustering its atoms into voxels and then feed the voxelized protein into a 3D CNN for prediction. However, the CNN-based methods encounter several critical issues: 1) defective in representing irregular protein structures; 2) sensitive to rotations; 3) insufficient to characterize the protein surface; 4) unaware of data distribution shift. To address the above issues, this work proposes EquiPocket, an E(3)-equivariant Graph Neural Network (GNN) for binding site prediction. In particular, EquiPocket consists of three modules: the first one to extract local geometric information for each surface atom, the second one to model both the chemical and spatial structure of the protein, and the last one to capture the geometry of the surface via equivariant message passing over the surface atoms. We further propose a dense attention output layer to better alleviate the data distribution shift effect incurred by the variable protein size. Extensive experiments on several representative benchmarks demonstrate the superiority of our framework to the state-of-the-art methods.
Binding Site Prediction, Graph Neural Network, Drug Discovery
## 1. Introduction
Nearly all biological and pharmacological processes in living systems involve interactions between receptors (_i.e._ target proteins) and ligands (_i.e._ small molecules or other proteins) (Steintein et al., 2017). The places where such interactions occur are known as the binding sites/pockets of the ligands on the target protein structures, which are essential to determine whether or not the ligands are druggable and functionally relevant. Moreover, the knowledge of the binding site is able to facilitate many downstream tasks, such as docking (Steintein et al., 2017), the design of drug molecules (Stein et al., 2018), etc. Therefore, predicting the binding sites of the target proteins via in-silico algorithms forms an indispensable and even the first step in drug discovery.
Over the past years, plenty of computational methods have been proposed to detect binding sites, which can be roughly classified into three categories (Stein et al., 2017): the geometry-based (Stein et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), the probe-energy-based (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and the template-based (Stein et al., 2017; Wang et al., 2018). The computational methods exploit hand-crafted algorithmic procedures guided by domain knowledge or external templates, leading to insufficient expressivity in representing complicated proteins. Recently, with the accumulation of labeled data and the development of machine learning techniques, learning-based approaches have been developed (Wang et al., 2018), which manage to analyze and extract the underlying patterns of the input data that eventually align with the assigned labels through the iterative process of learning. Although the learning-based methods have exhibited clear superiority over the classical computational counterparts, their performance and flexibility are still limited by the hand-crafted input features and the insufficiently-expressive models they use (Wang et al., 2018).
More recently, motivated by the breakthrough of deep learning in a variety of fields, Convolutional Neural Networks (CNNs) have been applied successfully for the prediction of binding sites (Wang et al., 2018). Typical works include DeepSite (Wang et al., 2018), DeepPocket (Chen et al., 2018), DeepSurf (Wang et al., 2018), etc. The CNN-based methods consider a protein as a 3D image by spatially clustering its atoms into the nearest voxels, and then model
Figure 1. Illustrative comparison between previous CNN-based methods and our EquiPocket.
the binding site prediction as an object detection problem or a semantic segmentation task on 3D grids. Thanks to the multi-layer and end-to-end learning paradigm, the CNN-based methods are observed to outperform traditional learning-based approaches and generally achieve the best performance on several public benchmarks (Shen et al., 2017).
In spite of the impressive progress, existing CNN-based models still encounter several issues, as described below:
**Issue 1.** Defective in leveraging regular voxels to model proteins of irregular shape. First, a considerable number of voxels probably contain no atom, since the protein atoms are unevenly distributed in space, which yields unnecessary redundancy in computation and memory. Moreover, the voxelization is usually constrained within a fixed-size space (_e.g._, \(70\,\text{\AA}\times 70\,\text{\AA}\times 70\,\text{\AA}\)) (Shen et al., 2017; Wang et al., 2018). The atoms beyond this size will be directly discarded, resulting in incomplete and inaccurate modeling, particularly for large proteins. Besides, although the voxelization process is able to encode certain spatial structure of the protein, it overlooks the irregular chemical interactions (_i.e._, the chemical bonds) between atoms and the topological structure built upon them, which is also useful for binding site detection.
**Issue 2.** Sensitive to rotations. To discretize the protein into 3D grids, the CNN methods fix the three bases of the coordinates beforehand. When rotating the protein, the voxelization results could be distinct, and the predicted binding sites will change, which, however, conflicts with the fact that any rotation of the protein keeps the binding sites invariant. While this can be alleviated by local grids (Shen et al., 2017) or by augmenting the training data with random rotations (Shen et al., 2017; Wang et al., 2018), such remedies are data-dependent and unable to guarantee rotation invariance in theory.
**Issue 3.** Insufficient to characterize the geometry of the protein surface. The surface atoms comprise the major part of the binding pocket, which should be elaborately modeled. In the CNN-based methods, the surface atoms are located in the voxels that are surrounded by empty voxels, which somehow encodes the surface geometry. Nevertheless, such information is too coarse to depict how the surface atoms interact and what their local geometry is. Indeed, the description of the surface atoms is purely driven by the geometric shape of the solvent-accessible surface of the protein (Shen et al., 2017) (Figure 1(b)), which, unfortunately, is less explored in current works.
**Issue 4.** Unaware of data distribution shift. In practical scenarios, the size of the proteins varies greatly across different datasets. It requires the deep learning model we apply to be well generalizable and adaptive, so that it is able to overcome the distribution shift incurred by the variable protein size. However, this point has not been seriously discussed in previous works.
In this paper, to address the above issues, we propose to apply Graph Neural Networks (GNNs) (Grover et al., 2016; Wang et al., 2018; Wang et al., 2018) instead of CNNs to represent proteins. By considering atoms as nodes and interactions as edges, GNNs are able to encode the irregular protein structures by multi-layer message passing. More importantly, a recent line of research (Grover et al., 2016; Wang et al., 2018; Wang et al., 2018) has enhanced GNNs by encapsulating E(3) equivariance/invariance with respect to translations/rotations; in this way, equivariant GNNs yield outputs that are independent of the choice of the coordinate system, leading to improved generalization ability. That being said, trivially applying equivariant GNNs to the binding site prediction task is still incapable of providing desirable performance, and even achieves worse accuracy than the CNN-based counterparts. By looking into the design of the architecture, equivariant GNNs naturally cope with the first two issues mentioned above, yet leave the other two unsolved. To this end, we make the contributions as follows:
* To the best of our knowledge, we are the first to apply an E(3)-equivariant GNN for ligand binding site prediction, which is dubbed **EquiPocket**. In contrast to conventional CNN-based methods, EquiPocket is free of the voxelization process, able to model irregular protein structures by nature, and insensitive to any Euclidean transformation, thereby addressing Issues 1 and 2.
* EquiPocket consists of three modules: the first one to extract local geometric information for each surface atom with the help of the solvent-accessible surface (Shen et al., 2017), the second one to model both the chemical and spatial structure of the protein, and the last one to capture the comprehensive geometry of the surface via equivariant message passing over the surface atoms. The first and last modules are proposed to tackle Issue 3, while the second module attempts to involve both the chemical and spatial interactions, as presented in Issue 1.
* To resolve Issue 4, namely, alleviating the effect of data distribution shift, we further propose a novel output layer called the _dense attention output layer_ in EquiPocket, which enables us to adaptively balance the scope of the receptive field for each atom in accordance with the density distribution of the neighbor atoms.
* Extensive experiments on several representative benchmarks demonstrate the superiority of our framework to the state-of-the-art methods in prediction accuracy. The design of our model is sufficiently ablated as well.
## 2. Related Work
### Binding Site Prediction
**Computational Methods.** The computational methods for binding site prediction include geometry-based (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), probe- and energy-based (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and template-based (Beng et al., 2016; Wang et al., 2018) methods: 1) Since most ligand binding sites occur on the 3D structure, geometry-based methods (POCKET (Wang et al., 2018), CriticalFinder (Wang et al., 2018), LigSite (Wang et al., 2018), Fpocket (Wang et al., 2018), etc.) are designed to identify these hollow spaces and then rank them using expert-designed geometric features. 2) Probe-based methods (SURFNET (Wang et al., 2018), Q-SiteFinder (Wang et al., 2018), etc. (Wang et al., 2018)), also known as energy-based methods, calculate the energy resulting from the interaction between protein atoms and a small-molecule probe, whose value dictates the existence of binding sites. 3) Template-based methods (FINDSITE (Beng et al., 2016), LIBRA (Wang et al., 2018), etc.) mainly compare the query protein with published protein structure databases to identify the binding sites.
**Traditional Learning-based Methods.** PRANK (Kang et al., 2018) is a learning-based method that employs the traditional machine learning algorithm random forest (RF) (Beng et al., 2016). Based on the pocket points and chemical properties from Fpocket (Wang et al., 2018) and Concavity (Beng et al., 2016), this method measures the "ligandability", i.e., the binding ability of a candidate pocket, using the RF model. However, such methods require the manual extraction of numerous features and offer limited room for improvement.
**CNN-based Methods.** Over the last few years, deep learning has far surpassed traditional ML methods in many domains.
For the binding site prediction task, many researchers (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) regard a protein as a 3D image and model this task as a computer vision problem. DeepSite (Wang et al., 2019) is the first attempt to employ the CNN architecture for binding site prediction, which, like P2Rank (Wang et al., 2019), treats this task as a binary classification problem and converts a protein to 3D voxelized grids. The methods FRSite (Wang et al., 2019) and Kalasanty (Klass and Kalasundy, 2019) adhere to the principle of DeepSite, but the former regards this task as an object detection problem and the latter as a semantic segmentation task. DeepPocket (Han et al., 2017) is a method similar to P2Rank, but implements a CNN-based segmentation model as the scoring function in order to more precisely locate the binding sites. The recent CNN-based method DeepSurf (Wang et al., 2019) constructs a local 3D grid and updates the 3D-CNN architecture to mitigate the detrimental effects of protein rotation.
### Graph Neural Networks for Molecule Modeling
There is multi-level information in molecules, including atom attributes, chemical bonds, spatial structure, physical constraints, etc. Numerous researchers view molecules as topological structures and apply topology-based GNN models (like graph2vec (Wang et al., 2019), GAT (Wang et al., 2019), GCN (Chen et al., 2019), GCN2 (Chen et al., 2019), GIN (Yang et al., 2019), etc. (Wang et al., 2019)) to extract the chemical info, which achieves positive outcomes. With the accumulation of structure data for molecules, spatial graph models (DimeNet (Wang et al., 2019), DimeNet++ (DimeNet et al., 2019), SphereNet (Sandel et al., 2019), SchNet (Wang et al., 2019), EGNN (Yang et al., 2019), etc. (Wang et al., 2019)) are proposed for molecular tasks, aggregating spatial and topological information. However, these models may not be adequate for macro-molecules due to their high calculation and resource requirements.
## 3. Notations and Definitions
**Protein Graph.** A protein, such as the example in Figure 1(b), is denoted as a graph \(\mathcal{G}_{P}=(\mathcal{V}_{P},\mathcal{E}_{C},\mathcal{E}_{D})\), where \(\mathcal{V}_{P}=\{v_{0},...,v_{N}\}\) forms the set of \(N\) atoms, \(\mathcal{E}_{C}\) represents the chemical-bond edges, and \(\mathcal{E}_{D}\) collects the spatial edges between any two atoms if their spatial distance is less than a cutoff \(\theta>0\). In particular, each node (_i.e._, atom) is associated with a feature \((\mathbf{x}_{i},\mathbf{c}_{i})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) denotes the 3D coordinates and \(\mathbf{c}_{i}\in\mathbb{R}^{5}\) is the chemical feature.
**Surface Point Set.** The surface geometry of a protein is of crucial interest for binding site detection. Here we define the set of surface points by \(\mathbb{S}=\{s_{0},...,s_{M}\}\), \(M\gg N\). Each surface point \(s_{i}\) is NOT necessarily an atom of the protein, and it corresponds to \((\mathbf{x}_{i},p_{i})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) represents the 3D coordinates of \(s_{i}\) and \(p_{i}\in\mathcal{V}_{P}\) indicates the index of the nearest protein atom in \(\mathcal{V}_{P}\) to \(s_{i}\). We employ the open-source MSMS (Wang et al., 2019) to derive surface points.
**Protein Surface Graph.** By referring to the surface points defined above, we collect all the nearest protein atoms \(p_{i}\) of the surface points, giving rise to the surface graph \(\mathcal{G}_{S}=(\mathcal{V}_{S},\mathcal{E}_{S})\), and clearly \(\mathcal{G}_{S}\subseteq\mathcal{G}_{P}\). _We call the atoms in the surface graph surface atoms, which are distinguished from the surface points defined in the last paragraph_. Notably, the edges of the surface graph, _i.e._, \(\mathcal{E}_{S}\), are only composed of spatial edges from \(\mathcal{E}_{D}\), since the chemical edges are mostly broken among the extracted atoms.
**Equivariance and Invariance.** In 3D space, the symmetry of the physical laws requires the detection model to be equivariant with respect to arbitrary coordinate systems (Klass and Kalasundy, 2019). In form, suppose \(\mathbf{X}\) to be 3D geometric vectors (positions, velocities, etc.) that are steerable by the E(3) group (rotations/translations/reflections), and \(\mathbf{h}\) non-steerable
Figure 2. An illustration of the scheme of our EquiPocket framework.
features. The function \(f\) is E(3)-equivariant if, for any transformation \(g\in\mathbb{E}(3)\), \(f(g\cdot\mathbf{X},\mathbf{h})=g\cdot f(\mathbf{X},\mathbf{h})\), \(\forall\mathbf{X}\in\mathbb{R}^{3\times m},\mathbf{h}\in\mathbb{R}^{d}\). Similarly, \(f\) is invariant if \(f(g\cdot\mathbf{X},\mathbf{h})=f(\mathbf{X},\mathbf{h})\). The group action \(\cdot\) is instantiated as \(g\cdot\mathbf{X}:=\mathbf{X}+\mathbf{b}\) for a translation \(\mathbf{b}\in\mathbb{R}^{3}\) and \(g\cdot\mathbf{X}:=\mathbf{OX}\) for a rotation/reflection \(\mathbf{O}\in\mathbb{R}^{3\times 3}\).
**Problem Statement.** Given a protein \(\mathcal{G}_{P}\) and its surface points \(\mathbb{S}\), as well as the constructed surface graph \(\mathcal{G}_{S}\), our goal is to learn an E(3)-invariant model \(f(\mathcal{G}_{P},\mathbb{S},\mathcal{G}_{S})\) to predict the atoms of the binding site: \(\mathcal{V}_{B}\subseteq\mathcal{V}_{P}\).
## 4. The Proposed Methodology
Figure 2 illustrates the overall framework of our EquiPocket, which consists of three modules: the _local geometric modeling module_ (§ 4.1) that focuses on extracting the geometric information of each surface atom, the _global structure modeling module_ (§ 4.2) to characterize both the chemical and spatial structures of the protein, and the _surface message passing module_ (§ 4.3) which concentrates on capturing the entire surface geometry based on the information extracted by the two former modules. The training losses are also presented. We defer the pseudo code of EquiPocket to Appendix 1.
### Local Geometric Modeling Module
This subsection presents how to extract the local geometric information of the protein surface \(\mathcal{G}_{S}\), with the help of surface points \(\mathbb{S}\). The local geometry of each protein atom closely determines if the region nearby is appropriate or not to become part of binding sites. We adopt the surrounding surface points of each protein surface atom to describe the local geometry.
To be specific, for every surface atom \(i\in\mathcal{V}_{S}\), its surrounding surface points are returned by a subset of \(\mathbb{S}\), namely, \(\mathbb{S}_{i}=\{s_{j}=(\mathbf{x}_{j},p_{j})\in\mathbb{S}\mid p_{j}=i\}\), where \(p_{j}\), as defined before, indicates the nearest protein atom. We now construct the geometric information based on \(\mathbb{S}_{i}\). We denote the center/mean of all 3D coordinates in \(\mathbb{S}_{i}\) as \(\bar{x}_{i}\). For each surrounding surface point \(s_{j}\in\mathbb{S}_{i}\), we first search its two nearest surface points from \(\mathbb{S}\) as \(s_{j_{1}}\) and \(s_{j_{2}}\), and then calculate the following relative position vectors:
\[\left\{\begin{array}{l}\mathbf{x}_{jj_{1}}=\mathbf{x}_{j}-\mathbf{x}_{j_{1}},\\ \mathbf{x}_{jj_{2}}=\mathbf{x}_{j}-\mathbf{x}_{j_{2}},\\ \mathbf{x}_{j,\text{center}}=\mathbf{x}_{j}-\bar{\mathbf{x}}_{i},\\ \mathbf{x}_{j,\text{protein}}=\mathbf{x}_{j}-\mathbf{x}_{i},\\ \mathbf{x}_{\text{center,protein}}=\bar{\mathbf{x}}_{i}-\mathbf{x}_{i}.\end{array}\right. \tag{1}\]
We further derive the following scalars upon Eq. 1:
\[\begin{split}&\mathbf{g}(s_{j}):=[\|\mathbf{x}_{jj_{1}}\|_{2},\|\mathbf{x}_{jj_{2}}\|_{2},\angle_{1},\\ &\|\mathbf{x}_{j,\text{center}}\|_{2},\|\mathbf{x}_{j,\text{protein}}\|_{2},\|\mathbf{x}_{\text{center,protein}}\|_{2},\angle_{2}],\end{split} \tag{2}\]
where the angles are computed by \(\angle_{1}=\frac{\mathbf{x}_{jj_{1}}\cdot\mathbf{x}_{jj_{2}}}{\|\mathbf{x}_{jj_{1}}\|_{2}\|\mathbf{x}_{jj_{2}}\|_{2}}\) and \(\angle_{2}=\frac{\mathbf{x}_{j,\text{center}}\cdot\mathbf{x}_{\text{center,protein}}}{\|\mathbf{x}_{j,\text{center}}\|_{2}\|\mathbf{x}_{\text{center,protein}}\|_{2}}\); here the operator \(\cdot\) defines the inner-product between two vectors. Basically, as displayed in Figure 3, the first three quantities in \(\mathbf{g}(s_{j})\) depict how the nearby surface points are arranged around \(s_{j}\), and the last four describe where \(s_{j}\) is located within the global region of \(\mathbb{S}_{i}\).
We aggregate the geometric information \(\mathbf{g}(s_{j})\) over all surface points in \(\mathbb{S}_{i}\) and obtain a readout descriptor for surface atom \(i\) as
\[\mathbf{g}_{i}=[\text{Pooling}(\{\text{MLP}(\mathbf{g}(s_{j}))\}_{s_{j}\in\mathbb{S}_{i}}),\ \text{MLP}(\text{Pooling}(\{\mathbf{g}(s_{j})\}_{s_{j}\in\mathbb{S}_{i}}))] \tag{3}\]
Here, MLP denotes a multi-layer perceptron, and the function Pooling is implemented as a concatenation of mean pooling and max pooling throughout our experiments. The first part in Eq. 3 is used to gather local geometric features, while the latter part computes the global size of the surrounding surface points. Notably, the geometric descriptor \(\mathbf{g}_{i}\) is E(3)-invariant.
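For illustration, the following NumPy sketch computes the seven invariant scalars of Eq. (2) and the pooling part of Eq. (3) (the MLPs are omitted); for simplicity the nearest-neighbor search is restricted to the points in \(\mathbb{S}_{i}\) rather than the full set \(\mathbb{S}\), and all identifiers are hypothetical.

```python
import numpy as np

def local_descriptor(surf_pts, atom_pos):
    """surf_pts: (M, 3) surrounding surface points S_i of one surface atom;
    atom_pos: (3,) coordinates of that atom. Returns the pooled descriptor."""
    center = surf_pts.mean(axis=0)  # \bar{x}_i
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    feats = []
    for j, x_j in enumerate(surf_pts):
        d = np.linalg.norm(surf_pts - x_j, axis=1)
        d[j] = np.inf
        j1, j2 = np.argsort(d)[:2]  # two nearest surface points, Eq. (1)
        v1, v2 = x_j - surf_pts[j1], x_j - surf_pts[j2]
        v_c, v_p, v_cp = x_j - center, x_j - atom_pos, center - atom_pos
        feats.append([np.linalg.norm(v1), np.linalg.norm(v2), cos(v1, v2),
                      np.linalg.norm(v_c), np.linalg.norm(v_p),
                      np.linalg.norm(v_cp), cos(v_c, v_cp)])   # Eq. (2)
    feats = np.asarray(feats)
    # E(3)-invariant readout: mean- and max-pooling over the surface points
    return np.concatenate([feats.mean(axis=0), feats.max(axis=0)])

g_i = local_descriptor(np.random.rand(20, 3), np.random.rand(3))  # (14,)
```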
### Global Structure Modeling Module
This module aims at processing the information of the whole protein \(\mathcal{G}_{P}\), including atom type, chemical bonds, relevant spatial positions, etc. Although the binding pocket is majorly comprised of surface atoms, the global structure of the protein in general influences how the ligand is interacted with and how the pocket is formulated, which should be modeled. We fulfil this purpose via two concatenated processes: chemical-graph modeling and spatial-graph modeling.
The chemical-graph modeling process copes with the chemical features \(\{\mathbf{c}_{i}\}_{i\in\mathcal{V}_{P}}\) and the chemical interactions \(\mathcal{E}_{C}\) of the protein graph. For each atom in the protein, its chemical type, the numbers of electrons around, and the chemical bonds connected to other atoms are important clues to identify the interaction between the protein and the ligand (Stein and Grinner, 2017). We employ typical GNNs (Stein and Grinner, 2017; Grinner, 2018; Grinner, 2019) to distill this type of information. Formally, we proceed:
\[\{\mathbf{c}^{\prime}_{i}\}_{i\in\mathcal{V}_{P}}=\text{GNN}(\{\mathbf{c}_{i}\}_{i\in \mathcal{V}_{P}},\mathcal{E}_{C}), \tag{4}\]
where \(\mathbf{c}^{\prime}_{i}\) is the updated chemical feature for atom \(v_{i}\). While various GNNs can be used in Eq. 4, here we implement GAT (Grinner, 2019) given its desirable performance observed in our experiments.
The spatial-graph modeling process further involves the 3D coordinates \(\{\mathbf{x}_{i}\}_{i\in\mathcal{V}_{P}}\) to better depict the spatial interactions \(\mathcal{E}_{D}\) within the protein. Different from chemical features \(\mathbf{c}^{\prime}_{i}\), the 3D coordinates provide the spatial position of each atom and reflect the pair-wise distances in 3D space, which is helpful for physical interaction modeling. We leverage EGNN (Stein and Grinner, 2019) as it conforms to E(3) equivariance/invariance and achieves promising performance on modeling spatial graphs. Specifically, we process EGNN as follows:
\[\{\mathbf{c}^{\prime\prime}_{i}\}_{i\in\mathcal{V}_{P}}=\text{EGNN}(\{\mathbf{x}_{i}, \mathbf{c}^{\prime}_{i}\}_{i\in\mathcal{V}_{P}},\mathcal{E}_{D}). \tag{5}\]
Here, we only retain the invariant output (_i.e._, \(\mathbf{c}^{\prime\prime}_{i}\)) and have discarded the equivariant output (_e.g._, updated 3D coordinates) of EGNN, since the goal of this module is to provide invariant features.
Figure 3. An illustration of local geometric features.
We select the updated features of the surface atoms \(\mathcal{V}_{\text{S}}\), which will be fed into the module in § 4.3.
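A minimal sketch of this two-stage module is given below, using PyTorch Geometric's GATConv for Eq. (4) and a simplified invariant-only EGNN layer for Eq. (5); the layer widths, activations and the distance-only message function are illustrative assumptions rather than the exact released architecture.

```python
import torch
from torch_geometric.nn import GATConv

class MinimalEGNNLayer(torch.nn.Module):
    """Invariant branch of an EGNN layer: messages depend on the node
    features and the pairwise distance, and only h is updated."""
    def __init__(self, dim):
        super().__init__()
        self.phi_m = torch.nn.Sequential(torch.nn.Linear(2 * dim + 1, dim), torch.nn.SiLU())
        self.phi_h = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.SiLU())

    def forward(self, h, x, edge_index):
        src, dst = edge_index
        dist = (x[src] - x[dst]).norm(dim=-1, keepdim=True)
        m = self.phi_m(torch.cat([h[src], h[dst], dist], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
        return self.phi_h(torch.cat([h, agg], dim=-1))

class GlobalStructureModule(torch.nn.Module):
    """Chemical-graph modeling over E_C (Eq. 4), then spatial-graph
    modeling over E_D keeping only the invariant output (Eq. 5)."""
    def __init__(self, in_dim, dim):
        super().__init__()
        self.gat = GATConv(in_dim, dim)
        self.egnn = MinimalEGNNLayer(dim)

    def forward(self, c, x, edge_index_C, edge_index_D):
        c1 = self.gat(c, edge_index_C)
        return self.egnn(c1, x, edge_index_D)

mod = GlobalStructureModule(in_dim=5, dim=16)
c, x = torch.randn(6, 5), torch.randn(6, 3)
ei = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
out = mod(c, x, ei, ei)  # (6, 16) invariant atom features
```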
### Surface Message Passing Module
Given the local geometric features \(\{\mathbf{g}_{i}\}_{i\in\mathcal{V}_{\text{S}}}\) from § 4.1 and the globally-encoded features of the surface atoms \(\{\mathbf{c}^{\prime\prime}_{i}\}_{i\in\mathcal{V}_{\text{S}}}\) from § 4.2, the module in this subsection carries out equivariant message passing on the surface graph \(\mathcal{G}_{\text{S}}\) to renew the entire features of the protein surface. We mainly focus on the surface atoms here, because firstly the surface atoms are more relevant to the binding sites than the interior atoms, and secondly the features \(\{\mathbf{c}^{\prime\prime}_{i}\}_{i\in\mathcal{V}_{\text{S}}}\) that are considered as the input have somehow encoded the information of the interior structure via the processes in § 4.2.
**Surface-EGNN.** During the \(l\)-th layer message passing, each node is associated with an invariant feature \(\mathbf{h}^{(l)}_{i}\in\mathbb{R}^{m_{l}}\) and an equivariant double-channel matrix \(\mathbf{X}^{(l)}_{i}\in\mathbb{R}^{3\times 2}\). We first concatenate \(\mathbf{c}^{\prime\prime}_{i}\) with \(\mathbf{g}_{i}\) as the initial invariant feature:
\[\mathbf{h}^{(0)}_{i}=[\mathbf{c}^{\prime\prime}_{i},\mathbf{g}_{i}]. \tag{6}\]
The equivariant matrix \(\mathbf{X}^{(0)}_{i}\) is initialized by the 3D coordinates of the atom and the center of its surrounding surface points, that is,
\[\mathbf{X}^{(0)}_{i}=[\mathbf{x}_{i},\bar{\mathbf{x}}_{i}]. \tag{7}\]
We update \(\mathbf{h}^{(l)}_{i}\in\mathbb{R}^{m_{l}}\) and \(\mathbf{X}^{(l)}_{i}\in\mathbb{R}^{3\times 2}\) synchronously to unveil both the topological and geometrical patterns. Inspired by EGNN (Steintein and Tschur, 2017) and its multi-channel version GMN (Garvin et al., 2018), we formulate the \(l\)-th layer for each surface atom \(i\in\mathcal{V}_{\text{S}}\) as:
\[\mathbf{m}_{ij} =\phi_{m}\left(\mathbf{h}^{(l)}_{i},\mathbf{h}^{(l)}_{j},f_{x}(\mathbf{X}^{(l)}_{i},\mathbf{X}^{(l)}_{j}),e_{ij}\right), \tag{8}\]
\[\mathbf{m}_{i} =\sum\nolimits_{j\in\mathcal{N}(i)}\mathbf{m}_{ij}, \tag{9}\]
\[\mathbf{h}^{(l+1)}_{i} =\phi_{h}\left(\mathbf{h}^{(l)}_{i},\mathbf{m}_{i}\right), \tag{10}\]
\[\mathbf{X}^{(l+1)}_{i} =\mathbf{X}^{(l)}_{i}+\frac{1}{|\mathcal{N}(i)|}\sum\nolimits_{j\in\mathcal{N}(i)}\left(\mathbf{X}^{(l)}_{i}-\mathbf{X}^{(l)}_{j}\right)\phi_{x}(\mathbf{m}_{ij}), \tag{11}\]
where the functions \(\phi_{m},\phi_{h},\phi_{x}\) are all MLPs, \(\mathcal{N}(i)\) denotes the neighbors of node \(i\) in terms of the spatial edges \(\mathcal{E}_{D}\), and \(|\cdot|\) counts the size of the input set. The invariant message \(\mathbf{m}_{ij}\) from node \(j\) to node \(i\) is aggregated into \(\mathbf{m}_{i}\) and employed to update the invariant feature \(\mathbf{h}^{(l+1)}_{i}\) via \(\phi_{h}\), as well as the equivariant matrix \(\mathbf{X}^{(l+1)}_{i}\) via the aggregation of the relative positions \(\mathbf{X}^{(l)}_{i}-\mathbf{X}^{(l)}_{j}\) multiplied with \(\phi_{x}(\mathbf{m}_{ij})\).
As a core operator in the message passing above, the function \(f_{x}(\mathbf{X}_{i},\mathbf{X}_{j})\) is defined as follows:
\[f_{x}(\mathbf{X}_{i},\mathbf{X}_{j})\coloneqq\{\|\mathbf{x}_{ij}\|_{2},\|\mathbf{x}_{ci}\|_{2},\|\mathbf{x}_{cj}\|_{2},\angle_{ci,ij},\angle_{cj,ij},\angle_{ci,cj}\}, \tag{12}\]
where the relative positions are given by \(\mathbf{x}_{ij}=\mathbf{x}_{i}-\mathbf{x}_{j}\), \(\mathbf{x}_{ci}=\bar{\mathbf{x}}_{i}-\mathbf{x}_{i}\) and \(\mathbf{x}_{cj}=\bar{\mathbf{x}}_{j}-\mathbf{x}_{j}\); the angles \(\angle_{ci,ij}\), \(\angle_{cj,ij}\) and \(\angle_{ci,cj}\) are defined as the normalized inner-products of the corresponding vectors denoted in the subscripts, _e.g._, \(\angle_{ci,ij}=\frac{\mathbf{x}_{ci}\cdot\mathbf{x}_{ij}}{\|\mathbf{x}_{ci}\|_{2}\|\mathbf{x}_{ij}\|_{2}}\). Through the design in Eq. 12, \(f_{x}(\mathbf{X}_{i},\mathbf{X}_{j})\) elaborates the critical information (including relative distances and angles) around the four points \(\mathbf{x}_{i},\bar{\mathbf{x}}_{i},\mathbf{x}_{j},\bar{\mathbf{x}}_{j}\), which largely characterizes the geometrical interaction between the two input matrices. Nicely, \(f_{x}(\mathbf{X}_{i},\mathbf{X}_{j})\) is invariant, ensuring the equivariance of the proposed Surface-EGNN.
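The invariant edge features of Eq. (12) follow directly from the two channels of \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\); a minimal batched PyTorch sketch (identifiers hypothetical) is:

```python
import torch

def f_x(x_i, xbar_i, x_j, xbar_j):
    """Eq. (12): three distances and three cosine angles built from
    x_ij = x_i - x_j, x_ci = xbar_i - x_i and x_cj = xbar_j - x_j."""
    x_ij, x_ci, x_cj = x_i - x_j, xbar_i - x_i, xbar_j - x_j
    cos = lambda a, b: (a * b).sum(-1) / (a.norm(dim=-1) * b.norm(dim=-1) + 1e-12)
    return torch.stack([x_ij.norm(dim=-1), x_ci.norm(dim=-1), x_cj.norm(dim=-1),
                        cos(x_ci, x_ij), cos(x_cj, x_ij), cos(x_ci, x_cj)], dim=-1)

feats = f_x(*(torch.randn(8, 3) for _ in range(4)))  # (8, 6) invariants
```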
**Dense Attention Output Layer.** Conventionally, we can apply the output of the final layer, _i.e._, \((\mathbf{h}^{(L)}_{i},\mathbf{X}^{(L)}_{i})\), to estimate the binding site. Nevertheless, such a flat output overlooks the discrepancy in size and shape between different proteins. As shown in Figure 5(b), for small or densely-connected proteins, the receptive field of each node will easily cover most nodes after a small number of message-passing layers, and excessive message passing will lead to over-smoothing (Krause et al., 2018), which incurs a performance detriment. For large or sparsely-connected proteins, on the contrary, insufficient message passing can hardly attain a receptive field with a desirable scope, which will also decrease the performance. It thus requires us to develop an adaptive mechanism to balance the message passing scope between different proteins. We propose the _dense attention output layer_ to achieve this goal.
Intuitively, for each target atom, the spatial distribution of its neighbors is able to reflect the density of the spatial connections around it. This motivates us to calculate the proportion of atoms within different distance ranges. As \(\theta\) is the cutoff used to create the spatial graph, we use it as the distance unit and compute:
\[n^{(l)}_{i}=\frac{|\{j\in\mathcal{V}_{\text{P}}\mid 0\leq\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{2 }<l\theta\}|}{N_{\text{P}}}, \tag{13}\]
where the proportion is evaluated within the distance range \([0,l\theta)\), \(N_{\text{P}}=|\mathcal{V}_{P}|\), and the neighbor hop \(l\in\mathbb{Z}^{+}\). We collect the proportions of all hops from 0 to \(L\), yielding the proportion vector \(\mathbf{n}_{i}=[n^{(0)}_{i},n^{(1)}_{i},\cdots,n^{(L)}_{i},N_{\text{P}}]\in\mathbb{R}^{L+2}\), with \(N_{\text{P}}\) appended to emphasize the total number of protein atoms. Clearly, \(\mathbf{n}_{i}\) contains rich information about the spatial density, and we apply it to determine the importance of different layers by producing the attention as:
\[\mathbf{a}_{i}=\text{Sigmoid}(\phi_{a}(\mathbf{n}_{i})). \tag{14}\]
Here, \(\phi_{a}\) is an MLP with \(L+1\) output channels, and the Sigmoid function\({}^{1}\) is applied to each channel, implying that \(\mathbf{a}_{i}\in(0,1)^{L+1}\). We then multiply the hidden feature of the corresponding layer with each channel of the attention vector, and concatenate them into a vector:
Footnote 1: Note that the sum of all channels of \(\mathbf{a}_{i}\) is not necessarily equal to 1, since the Sigmoid function instead of the commonly-used SoftMax function is applied here.
\[\mathbf{h}^{\text{out}}_{i}=\text{Concat}(a_{i0}\mathbf{h}^{(0)}_{i},...,a_{iL}\mathbf{h}^{(L )}_{i}),\]
Figure 4. An illustration of Dense Attention in a Protein.
where \(a_{il}\) is the \(l\)-th channel of \(\mathbf{a_{i}}\). By making use of Eq. 14, the learnable attentions enable the model to adaptively balance the importance of different layers for different input proteins. We will illustrate the benefit of the proposed strategy in our experiments. As for the coordinates, we simply compute the mean of all layers to retain translation equivariance:
\[\mathbf{X}_{i}^{\text{out}}=\frac{1}{L+1}\sum_{l=0}^{L}\mathbf{X}_{i}^{(l)}. \tag{15}\]
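A minimal sketch of Eqs. (13)-(14) and the concatenated readout is given below; for brevity, the proportions and attentions are computed for all protein atoms rather than only the surface atoms, and \(\phi_{a}\) is a plain linear layer (both simplifying assumptions).

```python
import torch

def dense_attention_readout(h_layers, x_all, cutoff, phi_a):
    """h_layers: list of L+1 feature tensors (N_P, d); x_all: (N_P, 3)
    protein coordinates; phi_a: MLP with L+1 output channels."""
    L, N_P = len(h_layers) - 1, x_all.shape[0]
    dists = torch.cdist(x_all, x_all)                       # pairwise distances
    props = [(dists < l * cutoff).float().mean(dim=1, keepdim=True)
             for l in range(L + 1)]                         # n_i^{(l)}, Eq. (13)
    n = torch.cat(props + [torch.full((N_P, 1), float(N_P))], dim=1)  # (N_P, L+2)
    a = torch.sigmoid(phi_a(n))                             # attentions, Eq. (14)
    return torch.cat([a[:, l:l + 1] * h_layers[l] for l in range(L + 1)], dim=1)

L, d, N = 3, 16, 50
h_out = dense_attention_readout([torch.randn(N, d) for _ in range(L + 1)],
                                torch.randn(N, 3), cutoff=6.0,
                                phi_a=torch.nn.Linear(L + 2, L + 1))
```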
### Optimization Objective
We set \(y_{i}=1\) if a surface atom \(i\) is within \(4\,\text{\AA}\) of any ligand atom (Kang et al., 2017). We predict the probability \(\hat{y_{i}}\) of an atom being part of the binding site according to its dense embedding \(\mathbf{h}_{i}^{\text{out}}\).
\[\hat{y_{i}}=\textbf{Sigmoid}(\text{MLP}(\mathbf{h}_{i}^{\text{out}})). \tag{16}\]
Following (Kang et al., 2017; Li et al., 2018), Dice loss is used:
\[\mathcal{L}_{b}=1-\frac{2\cdot\sum(\hat{y_{i}}\cdot y_{i})}{\sum(\hat{y_{i}}) +\sum(y_{i})+\epsilon}, \tag{17}\]
where \(\epsilon>0\) is a small value to maintain numeric stability.
**Predicting the relative direction of the nearest ligand atom.** Beyond the CNN-based methods, our EquiPocket is an E(3)-equivariant model, which outputs not only the embedding \(\mathbf{h}_{i}^{\text{out}}\) but also the coordinate matrix \(\mathbf{X}_{i}^{\text{out}}\) (with the initial position vector \(\mathbf{x_{i}}\)). We further leverage the position vector to predict the relative direction \(\mathbf{d_{i}}\) of the nearest ligand atom (with position vector \(\mathbf{m_{i}}\)), in order to enhance our framework's ability to gather local geometric features.
\[\mathbf{d_{i}}=\frac{\mathbf{m_{i}}-\mathbf{x_{i}}}{\|\mathbf{m_{i}}-\mathbf{x_{i}}\|_{2}},\quad \hat{\mathbf{d_{i}}}=\frac{\mathbf{x}_{i}^{\text{out}}-\mathbf{x_{i}}}{\|\mathbf{x}_{i}^{ \text{out}}-\mathbf{x_{i}}\|_{2}}. \tag{18}\]
The cosine loss function is used for the direction loss \(\mathcal{L}_{d}\):
\[\mathcal{L}_{d}=\sum(1-\cos(\hat{\mathbf{d_{i}}},\mathbf{d_{i}})). \tag{19}\]
The eventual loss is \(\mathcal{L}=\mathcal{L}_{b}+\mathcal{L}_{d}\). We train the parameters of all the three modules end to end.
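Both objectives translate directly into code; a minimal PyTorch sketch of the Dice loss of Eq. (17) and the cosine direction loss of Eqs. (18)-(19) (identifiers hypothetical) is:

```python
import torch
import torch.nn.functional as F

def dice_loss(y_hat, y, eps=1e-6):
    """Eq. (17) over per-atom probabilities y_hat and binary labels y."""
    return 1.0 - 2.0 * (y_hat * y).sum() / (y_hat.sum() + y.sum() + eps)

def direction_loss(x_out, x, m):
    """Eqs. (18)-(19): align the predicted offset x_out - x with the
    unit direction from each atom x to its nearest ligand atom m."""
    d = F.normalize(m - x, dim=-1)
    d_hat = F.normalize(x_out - x, dim=-1)
    return (1.0 - (d * d_hat).sum(dim=-1)).sum()

loss = dice_loss(torch.rand(10), torch.randint(0, 2, (10,)).float()) \
     + direction_loss(torch.randn(10, 3), torch.randn(10, 3), torch.randn(10, 3))
```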
## 5. Experiments
In this section, we will conduct experiments on multiple datasets to evaluate the performance of our framework in comparison to baseline methods and investigate the following tasks:
* **Task 1.** Can our framework's performance match or surpass that of existing methods?
* **Task 2.** Can different modules of our framework bring significant improvement for binding site prediction?
* **Task 3.** Can our framework mitigate the detrimental effects of the data distribution shift?
* **Task 4.** How do the hyperparameters (the cutoff \(\theta\) and the depth of the Surface-EGNN) affect the performance and computational cost?
### Experimental Settings
#### 5.1.1. Dataset
We conduct experiments based on the following datasets:
* **scPDB**(Kang et al., 2017) is the famous dataset for binding site prediction, which contains the protein structure, ligand structure, and 3D cavity structure generated by VolSite (Kang et al., 2017). The 2017 release of scPDB is used for training and cross-validation of our framework, which contains 17,594 structures, 16,034 entries, 4,782 proteins, and 6,326 ligands.
* **PDBbind**(Kang et al., 2017) is a well-known and commonly used dataset for research on protein-ligand complexes. It contains the 3D structures of proteins, ligands, and binding sites, along with accurate binding affinities determined in the laboratory. We use the v2020 release, which consists of two parts: a general set (14,127 complexes) and a refined set (5,316 complexes). The general set contains all protein-ligand complexes. The refined set contains better-quality compounds selected from the general set, which is used for the test in our experiments.
* **COACH 420 and HOLO4K** are two test datasets for the binding site prediction, which are first introduced by (Kang et al., 2017). Consistent with (Beng et al., 2017; Kang et al., 2017; Kang et al., 2017), we use the mlig subsets of each dataset for evaluation, which contain the relevant ligands for binding site prediction.
**Data Distribution Shift.** As depicted in Figure 5(a) and Table 1, after data processing there is a significant gap in protein size and protein distribution between the training dataset (scPDB) and the test datasets (COACH420, HOLO4K, PDBbind). The number of atoms within a protein ranges from hundreds to tens of thousands. As for the protein distribution in the datasets, scPDB has the largest average structure, followed by HOLO4K and PDBbind, with COACH420 having the smallest average protein structure. This fact will hurt model learning and generalization, as discussed in § 5.2.3.
#### 5.1.2. Target of Binding Sites
The CNN-based methods (Beng et al., 2017; Li et al., 2018; Li et al., 2018) label a subgrid as positive if its geometric center is closer than \(4\,\text{\AA}\) to the geometric center of the binding site. In our experiment, consistent with (Kang et al., 2017), we set the protein atoms within \(4\,\text{\AA}\) of any ligand atom as positive and negative otherwise. After obtaining the probability that an atom is a candidate binding site, we use the mean-shift algorithm (Beng et al., 2017) to predict the binding site center, which can determine the number of clusters on its own (details in Appendix A.2.4).
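For illustration, a minimal sketch of this post-processing step with scikit-learn's MeanShift is shown below; the probability threshold is a hypothetical choice, and a protein without any candidate atom yields no center, which is counted as a failure.

```python
import numpy as np
from sklearn.cluster import MeanShift

def predict_centers(coords, probs, threshold=0.5):
    """coords: (N, 3) surface-atom coordinates; probs: (N,) predicted
    probabilities. Returns the predicted binding-site centers."""
    candidates = coords[probs > threshold]
    if len(candidates) == 0:
        return np.empty((0, 3))          # no prediction -> failure case
    ms = MeanShift(bin_seeding=True).fit(candidates)  # picks cluster count itself
    return ms.cluster_centers_

centers = predict_centers(30 * np.random.rand(100, 3), np.random.rand(100))
```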
#### 5.1.3. Data Preparation
We perform the following four processing steps: i) Cluster the structures in scPDB by their Uniprot IDs, and
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{DataSet} & \multicolumn{4}{c}{Average} \\ \cline{2-5} & Atom Num & Atoms in Surface & Surface Points & Target Atoms \\ \hline scPDB & 4205 & 2317 & 24010 & 47 \\ COACH420 & 2123 & 1217 & 12325 & 58 \\ HOLO4K & 3845 & 2052 & 20023 & 106 \\ PDBbind & 3104 & 1677 & 17357 & 37 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of the datasets.
Figure 5. The protein distribution of datasets and spatial distribution of atom neighbors
select the protein structure with the longest sequence from every cluster as the training data (Kalasanty et al., 2017). Finally, 5,372 structures are selected. ii) Split proteins and ligands for the structures in COACH420 and HOLO4K, according to the research (Kalasanty et al., 2016). iii) Clean each protein by removing the solvent and hydrogen atoms, and use MSMS (Kalasanty et al., 2016) to generate the solvent-accessible surface of the protein. iv) Read the protein file with RDKit (Kalasanty et al., 2016) and extract the atom and chemical-bond features; structures that produce errors are removed.
#### 5.1.4. Evaluation Metrics
**DCC** is the distance between the predicted binding site center and the true binding site center. **DCA** is the shortest distance between the predicted binding site center and any atom of the ligand. The samples with DCC (DCA) less than the threshold are considered successful, while the samples without any predicted binding site center are considered failures. Consistent with (Becker et al., 2015; Kalasanty et al., 2016; Kalasanty et al., 2017; Kalasanty et al., 2018), the threshold is set to \(4\,\text{\AA}\). We use **Success Rate** and **Failure Rate** to evaluate experimental performance.
\[\text{Success Rate}(\text{DCC})=\frac{1(\{\text{Predicted sites}|\text{DCC}<\text{ threshold}\})}{1(\{\text{True sites}\})},\] \[\text{Success Rate}(\text{DCA})=\frac{1(\{\text{Predicted sites}|\text{DCA}<\text{threshold}\})}{1(\{\text{True sites}\})},\] \[\text{Failure Rate}=\frac{1(\{\text{Protein}|1(\text{predicted binding center})=0\})}{1(\{\text{Protein}\})}, \tag{20}\]
where \(1(\cdot)\) represents the cardinality of a set. After ranking the predicted binding sites, we take the same number as the true binding sites to calculate the success rate.
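A minimal sketch of these metrics (single-ligand case, identifiers hypothetical) is:

```python
import numpy as np

def success_rates(pred_centers, true_centers, ligand_coords, thr=4.0):
    """DCC: distance between a predicted and a true binding-site center;
    DCA: distance between a predicted center and the nearest ligand atom.
    The top-n ranked predictions are kept, n = number of true sites."""
    n = len(true_centers)
    pred = np.asarray(pred_centers)[:n]
    dcc = sum(np.linalg.norm(pred - t, axis=1).min() < thr for t in true_centers)
    dca = sum(np.linalg.norm(ligand_coords - p, axis=1).min() < thr for p in pred)
    return dcc / n, dca / n

rates = success_rates(np.random.rand(3, 3), np.random.rand(3, 3), np.random.rand(20, 3))
```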
#### 5.1.5. EquiPocket Framework
We implement our EquiPocket framework with (GAT (Kalasanty et al., 2016) + EGNN (Kalasanty et al., 2016)) as our global structure modeling module. The cutoff \(\theta\) and the depth of our Surface-EGNN model are set to 6 and 4, respectively.
To denote the EquiPocket framework with different modules, we adopt the following notation: i) **EquiPocket-L**: contains only the local geometric modeling module. ii) **EquiPocket-G**: contains only the global structure modeling module. iii) **EquiPocket-LG**: contains both the local geometric and global structure modeling modules. iv) **EquiPocket**: contains all the modules.
#### 5.1.6. Baseline Models
We compare our framework with the following models: 1) geometric-based method(Fpocket (Kalasanty et al., 2016)), 2) CNN-based methods (DeepSite (Kalasanty et al., 2016), Kalasanty et al. (2016) and DeepSurf (Kalasanty et al., 2016)), 3) topological graph-based models (GAT (Kalasanty et al., 2016), GCN (Kalasanty et al., 2016) and GCN2 (Kalasanty et al., 2016)), 4) spatial graph-based models (SchNet (Net et al., 2017), EGNN (Kalasanty et al., 2016)).
#### 5.1.7. Environment and Parameter
We implement our EquiPocket framework in PyTorch Geometric; all the experiments are conducted on a machine with an NVIDIA A100 GPU (80GB memory). We perform 5-fold cross validation on the training data scPDB and use the validation loss to save checkpoints. The batch size is set to 8. For the baseline models and the models in the global structure modeling module of EquiPocket, we use their suggested settings to obtain optimal performance. More details and related resources for our experiments can be found in **Appendix A.2.2**.
### Result Analysis
#### 5.2.1. Model Comparison
In Table 2, we compare our EquiPocket framework with the baseline methods mentioned above. As can be observed, the performance of the computational method Fpocket is inferior, albeit with no failure rate, since it simply employs the geometric features of a protein. The performance of the CNN-based methods is much superior to that of the conventional method, with DCC and DCA metrics improving by more than 50 percent, but at the cost of enormous parameter counts and computing resources. However, the two early methods DeepSite and Kalasanty are hampered by data distribution shift (Issue 4) and their inability to process big proteins, which may cause prediction failures. The recently proposed method DeepSurf employs the local-grid concept to handle proteins of any size, although its CNN architecture still results in inevitable failures.
For graph models, the poor performance of topological-graph models (GCN, GAT, GCN2) is primarily due to the fact that they only
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Type} & Param & Failure & \multicolumn{2}{c}{COACH420} & \multicolumn{2}{c}{HOLO4K} & \multicolumn{2}{c}{PDBbind2020} \\ \cline{4-10} & & (M) & Rate \(\downarrow\) & DCC\(\uparrow\) & DCA\(\uparrow\) & DCC\(\uparrow\) & DCA\(\uparrow\) & DCC\(\uparrow\) & DCA\(\uparrow\) \\ \hline Fpocket\({}^{\text{b}}\) & Geometric-based & \(\backslash\) & **0.000** & 0.228 & 0.444 & 0.192 & 0.457 & 0.253 & 0.371 \\ \hline DeepSite\({}^{\text{b}}\) & & 1.00 & \(\backslash\) & \(\backslash\) & 0.564 & \(\backslash\) & 0.456 & \(\backslash\) & \(\backslash\) \\ Kalasanty\({}^{\text{b}}\) & 3D-CNN & 70.64 & 0.120 & 0.335 & 0.636 & 0.244 & 0.515 & 0.416 & 0.625 \\ DeepSurf\({}^{\text{b}}\) & & 33.06 & 0.054 & 0.386 & **0.658** & 0.289 & 0.635 & 0.510 & 0.708 \\ \hline GAT & & **0.03** & 0.11 & 0.039(0.005) & 0.130(0.009) & 0.036(0.003) & 0.110(0.010) & 0.032(0.001) & 0.088(0.011) \\ GCN & Topological & 0.06 & 0.163 & 0.049(0.001) & 0.139(0.010) & 0.044(0.003) & 0.174(0.003) & 0.018(0.001) & 0.070(0.002) \\ GAT + GCN & Graph & 0.08 & 0.31 & 0.036(0.009) & 0.131(0.021) & 0.042(0.003) & 0.152(0.020) & 0.022(0.008) & 0.074(0.007) \\ GCN2 & & 0.11 & 0.466 & 0.042(0.098) & 0.131(0.017) & 0.051(0.004) & 0.163(0.008) & 0.023(0.007) & 0.089(0.013) \\ \hline SchNet & Spatial & 0.49 & 0.14 & 0.168(0.019) & 0.444(0.020) & 0.192(0.005) & 0.501(0.004) & 0.263(0.003) & 0.457(0.004) \\ Egnn & Graph & 0.41 & 0.270 & 0.156(0.017) & 0.361(0.020) & 0.127(0.005) & 0.406(0.004) & 0.143(0.007) & 0.302(0.006) \\ \hline EquiPocket-L & & 0.15 & 0.552 & 0.070(0.009) & 0.171(0.008) & 0.044(0.004) & 0.138(0.006) & 0.051(0.003) & 0.132(0.009) \\ EquiPocket-G & Ours & 0.42 & 0.292 & 0.159(0.016) & 0.373(0.021) & 0.129(0.005) & 0.411(0.005) & 0.145(0.007) & 0.311(0.007) \\ EquiPocket-LG & & 0.50 & 0.220 & 0.212(0.016) & 0.443(0.011) & 0.183(0.004) & 0.502(0.008) & 0.274(0.004) & 0.462(0.005) \\ EquiPocket & & 1.70 & 0.051 & **0.423(0.014)** & 0.556(0.007) & **0.337(0.006)** & **0.662(0.007)** & **0.545(0.010)** & **0.721(0.004)** \\ \hline \hline \end{tabular}
* The standard deviation of each index is indicated in brackets. The result of 5-fold for EquiPocket is shown in Appendix A.2.5.
* We use their published pre-train models or published result, details in Appendix A.2.3.
\end{table}
Table 2. Experimental and ablation results of baseline models and our framework.a
consider atom attributes and chemical bond information, ignoring the spatial structure of a protein. The performance of spatial-graph models is generally better than that of topological-graph models. The EGNN model utilizes not only the properties of atoms but also their relative and absolute spatial positions, resulting in a better effect, whereas SchNet merely updates atom information based on relative interatomic distances. We also attempted to run DimeNet++ (Diment++, 2018), which additionally uses the angles between atom pairs, but it requires too many computing resources, resulting in an OOM (Out Of Memory) error. However, the performance of the spatial-graph models is still worse than that of the CNN-based and geometric-based methods, because the former cannot obtain enough geometric features (Issue 3) and cannot address the data distribution shift (Issue 4).
As the above results indicate, the geometric information of the protein surface and the multi-level structure information of a protein are essential for binding site prediction. The results also reflect the limitations of current GNN models: it is difficult for them to collect sufficient geometric information from the protein surface, or their computational cost is too large to apply to macromolecular systems like proteins. Consequently, our EquiPocket framework is not only able to update chemical and spatial information from an atomic perspective but is also able to effectively collect geometric information without excessive computing expense, resulting in a 10-20% improvement over previous results. A case study comparing different methods is shown in Appendix 5.2.5.
#### 5.2.2. **Ablation Study**
As shown in Table 2, we conduct ablation experiments on our EquiPocket framework with different modules.
**Local Geometric Modeling Module.** This module is used to extract the geometric features of protein atoms from their nearest surface points. EquiPocket-L consists solely of this module, and its performance is poor. There are two primary causes for this result. First, geometric information alone can determine only part of the binding sites. Second, it reflects geometric features only over a relatively small distance and cannot cover an expansive area.
**Global Structure Modeling Module.** The primary purpose of this module is to extract information about the whole protein, such as atom types, chemical bonds, and relative spatial positions. We implement EquiPocket-G based on (GAT + EGNN) models, which are E(3)-equivariant/invariant and perform better than EquiPocket-L. In comparison, the value of DCC increased by about 10%, and DCA increased by about 20%. This demonstrates that structure information of the whole protein is necessary for binding site prediction. In addition, when the two modules are combined as EquiPocket-LG, the prediction effect is significantly improved, proving the complementarity of surface geometric information and global structure information.
**Surface Message Passing Module.** In the previous model, EquiPocket-LG, information was extracted solely from atoms and their closest surface points. Nonetheless, a binding site is determined not only by the information of a single atom but also by the atoms surrounding it. Therefore, the surface message passing module is proposed to collect and update an atom's features from its neighbors, as sketched below. After adding this module, the performance of EquiPocket is significantly enhanced: DCC and DCA increase by approximately 20% on average, and the failure rate is significantly reduced. Through the addition of multiple modules, we address Issue 3, and the performance of our framework eventually surpasses that of the existing SOTA method, demonstrating the efficacy of our framework design.
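As a rough illustration of this idea, one message-passing step over surface points within a cutoff radius might look like the following sketch; it is a simplified stand-in for the actual surface-egnn layer, and all names are ours:

```
import torch

def surface_mp_step(pos, feat, cutoff=6.0):
    """pos: (N, 3) surface-point coordinates; feat: (N, F) point features."""
    dist = torch.cdist(pos, pos)                       # (N, N) pairwise distances
    mask = (dist < cutoff) & (dist > 0)                # neighbors inside the cutoff
    deg = mask.sum(dim=1, keepdim=True).clamp(min=1)   # neighbor count per point
    agg = mask.float() @ feat / deg                    # mean over neighbor features
    return feat + agg                                  # residual feature update
```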
#### 5.2.3. **Data Distribution Shift**
As shown in Figure 6, we calculate the average DCC across proteins of various sizes. The geometric-based method Fpocket utilizes only the geometric features of a protein surface; therefore, its performance is superior to that of most other methods for proteins with fewer than 1,000 atoms, but its prediction effect decreases significantly as the size of the protein increases. Kalasanty is a CNN-based, learning-based method. As the number of atoms in a protein varies, its prediction effect first increases and then decreases, which is influenced not only by the size of the protein but also strongly correlated with the dataset's distribution. In the training data (scPDB), the majority of proteins contain fewer than 2,000 atoms (as depicted in Figure 5(a)); consequently, the model's parameters are biased toward this protein size. In addition, for proteins with more than 8,000 atoms, the prediction effect is not even as good as that of the geometric-based method. This is because CNN methods typically restrict the protein space to 70 Å × 70 Å × 70 Å, and for proteins larger than this size, the prediction frequently fails. Our EquiPocket framework does not need to cut the protein into grids and utilizes both geometric information from the surface points and global structure information from the whole protein, so its performance for proteins of varying sizes is significantly superior to that of other methods.
**Dense Attention.** The Dense Attention is introduced in § 4.3 to reduce the negative impact caused by the data distribution shift (Issue 4). As shown in Figure 6, when a protein contains fewer than 3,000 atoms, the result of EquiPocket (w/o Dense Attention) is weaker than that of the original EquiPocket, whereas when the protein is larger, there is no significant distinction between the two models. This reflects the role of Dense Attention: by weighting the surface-egnn layers at different depths, it mitigates the detrimental effect of the data distribution shift.
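Our reading of Dense Attention can be sketched as a learned softmax weighting over the outputs of surface-egnn layers at different depths; the published module may additionally condition these weights on the input protein, so treat this as an assumption-laden illustration with names of our choosing:

```
import torch
import torch.nn as nn

class DenseAttention(nn.Module):
    def __init__(self, depth):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(depth))   # one logit per layer depth

    def forward(self, layer_outputs):                    # list of (N, F) tensors
        w = torch.softmax(self.logits, dim=0)            # normalized depth weights
        stacked = torch.stack(layer_outputs, dim=0)      # (depth, N, F)
        return (w[:, None, None] * stacked).sum(dim=0)   # weighted combination
```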
**Direction Loss.** Direction loss is a novel auxiliary task designed to improve the extraction of local geometric features. The result of EquiPocket (w/o Direction Loss) in Figure 6 demonstrates conclusively that the prediction performance on small proteins with fewer than 3,000 atoms is diminished in the absence of this task, which reveals its importance.
Figure 6. The performance of EquiPocket and baseline methods for proteins of various sizes.
#### 5.2.4. **Hyperparameters Analysis**
In our EquiPocket framework, the cutoff \(\theta\) and depth of surface-egnn are two crucial parameters that can impact performance and computational efficiency.
**Cutoff \(\theta\).** We set the depth of surface-egnn to 4 and evaluate various cutoff values (2, 4, 6, 8, 10). Figure 7(a) indicates that when the cutoff is set to 2, the average DCC of our framework is poor, while GPU memory consumption is relatively low (22GB). This is because a small cutoff gives the surface-egnn only a tiny receptive field. As the cutoff increases, performance and GPU memory consumption continue to rise, until DCC reaches a bottleneck at a cutoff of 10, where GPU memory reaches 62GB. Therefore, when selecting parameters for our framework, we must strike a balance between performance and efficiency.
**Depth.** The depth of surface-egnn has an immediate influence on performance and computation cost. We set the cutoff to 6 and evaluate various depths (1, 2, 3, 4, 5, 6). Figure 7(b) demonstrates that as depth increases, performance steadily improves and then stabilizes, while GPU memory continues to grow. Because the prediction of binding sites is highly influenced by the surrounding atoms, an excessively large receptive field may not offer any benefit but will necessitate additional computing resources.
#### 5.2.5. Case Study
We also display two examples of our EquiPocket and other methods in Figure 8. We take two proteins, 1f8e (with 12,268 atoms) and 5ei3 (with 1,572 atoms), from the test dataset PDBbind. As can be seen from Figure 8, the binding sites predicted by the geometry-based method Fpocket are extremely distant from the actual binding sites. This is because the method prioritizes local geometric information and disregards the multi-level structure information of proteins, resulting in limited scope and weak performance. The CNN-based method Kalasanty did not provide any predicted binding site for protein 1f8e; we conjecture that this method restricts the protein within a space of fixed size, which is highly susceptible to failure on large proteins. The recently proposed CNN-based method DeepSurf takes local grids on the protein surface, which addresses the issue of fixed space size. However, DeepSurf's prediction of binding sites in protein 5ei3 is far from the ground truth, because CNN-based methods are deficient in capturing geometric and chemical features. Our EquiPocket framework is unaffected by the shortcomings of the aforementioned methods, allowing it to achieve superior outcomes for both large and small proteins.
## 6. Conclusion
In this paper, concentrating on ligand binding site prediction, we propose a novel E(3)-equivariant geometric graph framework called EquiPocket, which contains a local geometric modeling module, a global structure modeling module, and a surface message passing module to gather the surface geometric and multi-level structure features of a protein. Experiments demonstrate that our framework is highly generalizable and achieves superior prediction accuracy and computational efficiency compared with existing methods.
### Future Work
#### 6.1.1. **Protein Surface**
As demonstrated by our experiments, the geometric information derived from the protein surface plays a significant role in the prediction of binding sites. In this work, we use MSMS (Song et al., 2017) to generate the protein surface, which may have an uncertain impact on the prediction results. In the future, we therefore plan to develop a more efficient surface generation method, or to gather the geometric information of a protein without relying on a fixed protein surface.
#### 6.1.2. **Global Structure Model**
In this work, we take the existing graph-based models (GAT + EGNN) to gather the multi-level structure information of a protein. However, their capabilities are limited because these models are not specifically tailored to complex structures such as proteins. In the future, we plan to develop more effective models to gather information from proteins in order to improve the prediction performance of binding sites.
#### 6.1.3. **Computing Resources**
As the experimental results show, both our EquiPocket and the CNN-based methods require a significant amount of computing resources to analyze protein data, which hinders their practical deployment. Consequently, when applying our method to real-world problems, it is crucial to consider how to compress model parameters, increase computational efficiency, and decrease resource consumption.
|
2308.08174 | Accelerating Generic Graph Neural Networks via Architecture, Compiler,
Partition Method Co-Design | Graph neural networks (GNNs) have shown significant accuracy improvements in
a variety of graph learning domains, sparking considerable research interest.
To translate these accuracy improvements into practical applications, it is
essential to develop high-performance and efficient hardware acceleration for
GNN models. However, designing GNN accelerators faces two fundamental
challenges: the high bandwidth requirement of GNN models and the diversity of
GNN models. Previous works have addressed the first challenge by using more
expensive memory interfaces to achieve higher bandwidth. For the second
challenge, existing works either support specific GNN models or have generic
designs with poor hardware utilization.
In this work, we tackle both challenges simultaneously. First, we identify a
new type of partition-level operator fusion, which we utilize to internally
reduce the high bandwidth requirement of GNNs. Next, we introduce
partition-level multi-threading to schedule the concurrent processing of graph
partitions, utilizing different hardware resources. To further reduce the extra
on-chip memory required by multi-threading, we propose fine-grained graph
partitioning to generate denser graph partitions. Importantly, these three
methods make no assumptions about the targeted GNN models, addressing the
challenge of model variety. We implement these methods in a framework called
SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware
accelerator. Our evaluation demonstrates that SwitchBlade achieves an average
speedup of $1.85\times$ and energy savings of $19.03\times$ compared to the
NVIDIA V100 GPU. Additionally, SwitchBlade delivers performance comparable to
state-of-the-art specialized accelerators. | Shuwen Lu, Zhihui Zhang, Cong Guo, Jingwen Leng, Yangjie Zhou, Minyi Guo | 2023-08-16T07:05:47Z | http://arxiv.org/abs/2308.08174v1 | # Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
###### Abstract
Graph neural networks (GNNs) have shown significant accuracy improvements in a variety of graph learning domains, sparking considerable research interest. To translate these accuracy improvements into practical applications, it is essential to develop high-performance and efficient hardware acceleration for GNN models. However, designing GNN accelerators faces two fundamental challenges: the high bandwidth requirement of GNN models and the diversity of GNN models. Previous works have addressed the first challenge by using more expensive memory interfaces to achieve higher bandwidth. For the second challenge, existing works either support specific GNN models or have generic designs with poor hardware utilization.
In this work, we tackle both challenges simultaneously. First, we identify a new type of partition-level operator fusion, which we utilize to internally reduce the high bandwidth requirement of GNNs. Next, we introduce partition-level multi-threading to schedule the concurrent processing of graph partitions, utilizing different hardware resources. To further reduce the extra on-chip memory required by multi-threading, we propose fine-grained graph partitioning to generate denser graph partitions. Importantly, these three methods make no assumptions about the targeted GNN models, addressing the challenge of model variety. We implement these methods in a framework called SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware accelerator. Our evaluation demonstrates that SwitchBlade achieves an average speedup of \(1.85\times\) and energy savings of \(19.03\times\) compared to the NVIDIA V100 GPU. Additionally, SwitchBlade delivers performance comparable to state-of-the-art specialized accelerators.
GNN, bandwidth, multi-threading.
## I Introduction
Graph neural networks (GNNs) have gained significant momentum as researchers have begun to integrate the concept of _graphs_ into deep learning (DL) [34]. By merging the end-to-end hierarchical learning capabilities of DL with the structural representation power of graphs, GNNs have achieved improved accuracy across various domains, including molecular science [10], recommendation systems [39], and transportation [3]. To translate the algorithmic advancements of GNNs into practical applications, effective and efficient execution is crucial [23]. However, general-purpose processors, such as CPUs and GPUs, struggle with performance and energy inefficiency when executing graph-related operations [36, 42]. As a result, GNN-dedicated accelerators have been proposed.
Numerous efforts have focused on accelerating one of the most popular GNN models, Graph Convolutional Networks (GCNs) [14, 20]. A GCN layer comprises two primary operations, one on the sparse graph adjacency matrix and one on the dense vertex feature matrix, which are typically organized into a two-stage computation. Based on this formulation, various designs, including inter- and intra-stage optimizations, have been proposed to achieve high performance [8, 9, 18, 19, 21, 37, 40].
GNNs, however, encompass a broad category that varies in both the number and combination of operators [6]. As shown in Tbl. I, several popular GNN models exhibit quite different characteristics in each stage, despite being divisible into two-stage forms. Thus, prior GCN-specific accelerators may not offer the same performance and efficiency when executing other models. Additionally, designing dedicated accelerators for every GNN model is impractical due to the high hardware development costs and the vast GNN model space. As a result, a generic solution is desirable for the diverse range of GNN models. Though uniform architectures have been proposed to address these challenges [2, 15], they suffer from long-distance data movement [1], leading to increased latency and energy consumption.
Another challenge in GNN acceleration is the high bandwidth requirement. Since GNNs combine deep learning with graph processing [34], they inherit the characteristics of large feature maps and poor data locality [26]. Both characteristics necessitate substantial reads and writes to off-chip DRAM. Moreover, current GNN models are primarily executed in an operator-by-operator paradigm, where all operators read and write to DRAM, and modern GNN models typically comprise ten or more operators in one layer [35]. As a result, the current GNN model execution leads to massive off-chip data access [4]. Such high requirements make hardware bandwidth a potential bottleneck in GNN execution. Existing works addressing GNN acceleration satisfy this high band
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & **Aggregation (\(a_{i}\))** & **Combination (\(h_{i}^{l+1}\))** \\ \hline GCN [20] & \(\sum_{j\in N(i)}h_{j}^{l}d_{j}^{-1/2}\) & \(\texttt{ReLU}(d_{i}^{-1/2}W^{l}a_{i})\) \\ \hline GAT [29] & \(\sum_{j\in N(i)}\alpha_{ij}W^{l}h_{j}^{l}\) & \(\texttt{ReLU}(a_{i})\) \\ \hline SAGE-Pool [14] & \(\max_{j\in N(i)}(W^{l}_{pool}h_{j}^{l}+b)\) & \(\texttt{ReLU}(W^{l}(h_{i}^{l}\parallel a_{i}))\) \\ \hline GG-NN [22] & \(\sum_{j\in N(i)}(W^{l}h_{j}^{l}+b)\) & \(\texttt{GRU}(h_{i}^{l},a_{i})\) \\ \hline \hline \end{tabular} where \(\alpha_{ij}\) is the attention coefficient whose calculation is omitted in this table, \(||\) is matrix concatenation, MLP is Multi-Layer Perceptron [24], and GRU is Gated Recurrent Unit [5].
\end{table} TABLE I: Operations in popular GNN models.
width requirement through emerging yet costly techniques, such as Processing-In-Memory (PIM) [1, 33]. This work aims to provide a more cost-effective solution to overcome this challenge.
Though considerable effort has been expended, none of the prior works address both variety and bandwidth challenges simultaneously. To fill this gap, this work seeks to tackle both challenges within a single system. Our core idea is to optimize GNN bandwidth requirements without making any assumptions about the targeted GNN model structure. Guided by this idea, we propose three generic methods.
The first method is partition-level operator fusion (PLOF). Operator fusion has been proven to effectively mitigate the high bandwidth requirement challenge for conventional neural networks [4]. However, previous efforts have neglected the irregular graph-traversal-based operators (GTRs), which are central to GNNs. In this work, we propose a new graph partition-level operator fusion that fuses operators in arbitrary GNN models into three phases to alleviate high bandwidth requirements. This is based on two observations. First, all operators can be reorganized into a three-phase paradigm by borrowing the programming model of traditional graph processing [11]. Second, graphs are typically partitioned into smaller components for on-chip loading and processing due to their large data size. As a result, we can transfer data only at phase boundaries rather than operator boundaries.
The second method is shard-level multi-threading (SLMT). Although operators can be fused together to reduce memory footprint, they are still executed sequentially. Since each operator utilizes only one part of hardware resources, including bandwidth, resource utilization for other parts can be low during that operator's execution. To enhance the utilization of different hardware resources simultaneously, we introduce shard-level multi-threading, where different shards of the graph are assigned to different hardware units. This parallelizes the execution of multiple fused phases across different shards, allowing for more efficient use of hardware resources and further reducing the memory bandwidth requirements.
The third method is fine-grained graph partitioning (FGGP) on the host. The above two methods, which deploy multiple threads for concurrent processing, improve hardware utilization yet increase on-chip memory pressure. To mitigate this memory-concurrency contention, we further propose fine-grained graph partitioning running on the host device. FGGP generates denser partitions that store more effective data under the same memory budget, improving graph data reuse and reducing the bandwidth requirement.
All three proposed methods enhance hardware performance for GNN acceleration without making any assumptions about the GNN model structure. As a result, they offer a fresh perspective to design more flexible architectures that achieve both excellent applicability and high performance.
We propose a systematic framework, SwitchBlade, which comprises a compiler, a graph partitioner, and a hardware accelerator to implement the above methods, as illustrated in Fig. 1. The compiler is responsible for implementing PLOF. It maps GNN models written in high-level frameworks such as DGL and PyG into our PLOF phases, which are expressed in the instruction set architecture (ISA) of our hardware accelerator. Additionally, the compiler provides the model information to our graph partitioner, which implements FGGP and generates graph partitions. The accelerator ultimately executes the PLOF phases and processes the graph partitions concurrently using SLMT.
We evaluate SwitchBlade in comparison to the NVIDIA V100 GPU and demonstrate that it achieves an average speedup of 1.85\(\times\) and energy savings of 19.03\(\times\) across a diverse range of GNN models. Furthermore, SwitchBlade attains comparable or even superior performance on GCN models when compared to state-of-the-art GCN accelerators like HyGCN [37], but with significantly greater flexibility.
The main contributions of this work are as follows:
* We propose a set of generic methods that make no assumptions about the underlying GNNs, addressing the bandwidth and variety challenges of generic GNN acceleration. These methods span algorithmic, software, and hardware aspects.
* We develop SwitchBlade, a comprehensive full-stack framework that implements the proposed three methods. The framework consists of a compiler, a graph partitioner, and a hardware accelerator.
* We evaluate the performance of SwitchBlade and demonstrate its effectiveness by achieving a 1.85\(\times\) speedup and 19.03\(\times\) energy savings compared to the NVIDIA V100 GPU. Furthermore, SwitchBlade exhibits comparable performance against a prior state-of-the-art GCN accelerator, showcasing its flexibility and adaptability.
## II Background
In this section, we provide an overview of Graph Neural Networks (GNNs) and the dual-sliding-window graph partitioning method as the foundation for SwitchBlade.
### _Graph Neural Networks_
Graph Neural Networks (GNNs) combine the power of deep learning with traditional graph processing, leading to improved accuracy in a wide range of domains that depend on graph structures, such as molecular science [10], recommendation systems [39], and transportation [3]. This improvement is
Fig. 1: The workflow of SwitchBlade.
achieved by replacing prior hand-crafted or intuition-based methods (e.g., node2vec [12]) with end-to-end learning capabilities.
Similar to conventional deep neural networks (DNNs), GNNs consist of layers. A layer \(l\) takes the vertex and edge embedding matrix as input, along with the adjacency matrix representing the graph structure, and produces a new embedding matrix for layer \(l+1\)[34]. The primary distinction between GNNs and DNNs lies in the graph-traversal operators, which exhibit irregular computation and memory access patterns [36]. We now introduce the primitive operators and their role in various GNN models.
**Primitive Operators.** Each GNN layer typically comprises two types of primitive operators: graph-traversal operators (GTRs, detailed below) and neural network operators. The latter can be further categorized as dense matrix multiplication operators (DMMs) [24] or element-wise operators (ELWs) such as ADD, EXP, and ReLU. The combination of these three operator types covers all forms of GNN computation. Most GNN libraries, including popular ones like DGL [31] and PyG [7], support GNN programming by providing efficient implementations of these three primitive operators.
GTRs can be generalized into two types: ScatterOp and GatherOp. They are considered as _vectorized_ graph propagation operators in traditional graph processing models (e.g., GAS [11]). The ScatterOp distributes the embedding of each vertex to its outgoing (or incoming) edges, while the GatherOp collects the embeddings of all incoming (or outgoing) edges of each vertex and reduces them into a fixed-length vector using a _reduction_ function. Examples of reduction functions include \(max\), \(sum\), and \(mean\), which align vertices' embeddings for subsequent operators, as different vertices may have varying edge numbers.
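On a COO edge list, the two GTRs can be sketched in a few lines; the sum reduction is shown here, and max/mean follow the same pattern. This is an illustration of the semantics, not the libraries' internals:

```
import torch

def scatter_op(v_feat, src_idx):
    """Distribute each source vertex's embedding to its outgoing edges."""
    return v_feat[src_idx]                               # (E, F) edge embeddings

def gather_op(e_feat, dst_idx, num_vertices):
    """Reduce each vertex's incoming-edge embeddings into a fixed-length vector."""
    out = torch.zeros(num_vertices, e_feat.size(1))
    out.index_add_(0, dst_idx, e_feat)                   # sum reduction per destination
    return out
```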
### _Dual-Sliding-Window Graph Partitioning_
Traditional graph processing [32] often has a memory footprint that can reach tens of gigabytes, including vertex and edge features, which exceeds both on-chip and off-chip memory capacities. Numerous graph partitioning techniques have been studied to reduce memory footprints [16, 30].
One notable approach is the dual-sliding-window-based graph partitioning (DSW-GP) method [43]. This method is popular due to its regular memory access patterns and has been adopted by several graph processing systems [27]. DSW-GP first divides vertices into disjoint destination _intervals_ and then creates smaller _shards_ containing edges under each destination interval. Fig. 2 illustrates an example graph partitioned into 4 destination intervals and 16 shards. During partitioning, it is ensured that each shard can fit into the targeted memory space. During computation, operations are performed on each interval and shard instead of the entire graph, with shards being iterated interval-wise--referred to as window sliding.
GNNs, as an extension of traditional graph processing, also demand a large amount of memory concerning both graph scale and embedding size. To reduce memory footprint in GNN acceleration, we consider employing DSW-GP in this work.
```
Input: G, input graph; M, targeted GNN model.
Output: S, the resulted shards.
shardHeight ← calShardHeight(G, M);
shardNumInterval ← G.vTotal / shardHeight;
I ← initIntervals(G, M);
for each i in I do
    for sidx in range(shardNumInterval) do
        srcBegin ← sidx × shardHeight;
        srcEnd ← srcBegin + shardHeight;
        if srcEnd > G.vTotal then
            srcEnd ← G.vTotal;
        s ← initShard(i);
        setShardSource(s, srcBegin, srcEnd);
        finalizeShard(i, s);
    finalizeInterval(i);
```
**Algorithm 1**_DSW-GP_
## III Motivation
In this section, we discuss the motivation behind designing our SwitchBlade GNN accelerator framework. We first analyze the characteristics of GNNs to identify two primary challenges in GNN acceleration. Next, we review existing solutions and find that none of them address both challenges simultaneously. Finally, we present the goal of our SwitchBlade framework, inspired by the above insights.
### _Challenges of GNN Acceleration_
By analyzing GNN characteristics, we identify two significant challenges in GNN acceleration:
**High model variety.** GNNs exhibit a high degree of variety in model structure. Unlike graph processing and deep learning, which are dominated by GEMM/CONV and SpMV/SpMM operations [41], GNNs do not have a single computational hotspot. Instead, they are more ad-hoc, depending on the
Fig. 2: Dual-Sliding-Window Graph Partitioning (DSW-GP) adopted in SwitchBlade. The red arrow indicates the shard processing order used to traverse the whole graph.
targeted applications [34]. This characteristic leads to the challenge of accommodating the diverse GNN model structures when designing accelerators.
**High bandwidth demand.** Previous studies have shown that graph processing and deep learning applications are often limited by off-chip memory bandwidth [4]. While deep learning achieves high bandwidth utilization when transferring large amounts of weights and feature maps, graph processing experiences lower bandwidth utilization due to irregularity or sparsity in the graphs. As a combination of both graph processing and deep learning [34], GNNs demand even higher bandwidth, exceeding today's bandwidth capacity.
### _Limitations of Existing Solutions_
Although numerous GNN accelerators have been proposed in recent years, achieving significant performance improvements compared to general-purpose architectures, none of them fully address the two challenges mentioned above. One group of prior works focuses on the bandwidth challenge, either alleviating bandwidth demand by designing dedicated cores that exploit graph sparsity [37] or satisfying bandwidth demand by incorporating Process-In-Memory (PIM) in the accelerator [1, 33]. However, most of these solutions lack flexibility in supporting more powerful yet complex GNNs, as their designs typically make strong assumptions about the targeted GNN models, limiting their applicability to a broader range of GNNs. Another group of works [2, 15] offers high flexibility for various GNNs but fails to address the bandwidth challenge, resulting in long-distance data movement problems and limited performance [1].
### _Our Goal._
The limitations of prior work motivate us to tackle the challenges of bandwidth requirement and model variety concurrently. Our approach explores generic and cross-stack optimizations for GNN computation, making few assumptions about GNN model structures. We integrate these optimizations into a single framework called SwitchBlade, which we will detail in the following sections.
## IV SwitchBlade Design
### SwitchBlade _Overview_
Fig. 1 illustrates the SwitchBlade workflow, which encompasses the three core methods that form the backbone of SwitchBlade. Specifically, a GNN model \(M\) written in a high-level framework such as DGL or PyG is first compiled by our GNN compiler to process a graph \(G\). The design spans the algorithm, software, and hardware layers.
### _Partition-Level Operator Fusion (PLOF)_
PLOF is co-designed with compiling-level operator fusion and dual-sliding-window-based graph processing style to eliminate redundant DRAM accesses between GNN operators. In contrast to prior operator fusion for Deep Neural Networks (DNNs), which only considers the program of the DNN model [4], PLOF also takes the graph data structure into account to introduce a new fusion paradigm. However, due to the irregularity of the graph, determining which operators should be fused and at which abstraction level remains challenging.
To achieve PLOF's goal, we propose fusing operators that process vertices or edges at the graph interval and shard level. The GNN model is first divided into multiple phases, and the input graph is partitioned into intervals and shards. Then, all phases will iterate either the intervals or shards according to the DSW-GP to complete the GNN computation. In this scenario, the total off-chip memory access is roughly \(n_{p}\times M\) instead of \(n_{o}\times M\), where \(n_{p}\) is the phase number, \(n_{o}\) is the operator number, and \(M\) is the off-chip memory access of one operator. Consequently, each phase can be regarded as a fused operator, which alleviates the high bandwidth requirement.
We employ a template program to separate a GNN model and iterate the graph, defining three _phases_: ScatterPhase, GatherPhase, and ApplyPhase. The pseudo code is presented in Alg. 2. The template is inspired by traditional graph processing [11], where a graph analytic algorithm can be represented using a similar three-stage GAS programming model. Instead of the GAS model operating on a single vertex or edge, we batch them into vertex intervals and edge shards produced by DSW-GP to further improve locality and expose parallelism. A GNN model can thus be represented by phases of the template program operating on different parts of the graph. Fig. 6-d shows an example of the PLOF program written in our ISA (Sec. V-A) corresponding to the high-level language in Fig. 6-a.
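The control flow of Alg. 2 can be sketched in a few lines; this is a hedged Python rendering in which the phase callables stand in for the fused operator groups emitted by the compiler:

```
def run_plof(intervals, scatter_phase, gather_phase, apply_phase):
    # Data moves on/off chip only at phase boundaries, not per operator.
    for interval in intervals:
        scatter_phase(interval)               # operates on the destination interval
        for shard in interval.shards:
            gather_phase(interval, shard)     # operates per edge shard (see SLMT)
        apply_phase(interval)                 # finalizes the destination vertices
```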
**Compiler Support.** Assigning each operator in a GNN model to the appropriate phase is a non-trivial task due to the large number of operators involved. A GNN model typically consists of multiple layers, with each layer containing tens of operators, including GTRs. Consequently, it can be challenging for programmers to take advantage of the proposed operator fusion technique. However, the semantics of each operator can be inferred from specific operators within the model, offering an opportunity to automate the phase construction process through software support. To achieve this, we incorporate the process into our compiler, which will be detailed later in Sec. V-C.
### _Shard-Level Multi-Threading (SLMT)_
We propose SLMT to balance hardware flexibility and performance. In contrast, previous works [37] primarily focus on designing hardwired accelerators dedicated to specific GNN
models. Although these designs achieve remarkable performance and efficiency, they suffer from limited flexibility. To address the variety challenge, we carefully trade off hardware flexibility and performance via SLMT.
The SLMT exploits shard-level parallelism, a new parallelism type dedicated to graph-related computation. We can parallelize shards since most operators in GatherPhase for shard processing have minimal dependency between shards, except for the GatherOp. Additionally, the shard is an ideal abstraction for parallelization due to its suitable size for the targeted memory and adjustability.
To implement SLMT, we construct a GNN accelerator with simultaneous multi-threading [28]. The approach allows the hardware to automatically schedule different shards to issue their GatherPhase instructions, enabling those shards to utilize different hardware resources, such as functional units and bandwidth, simultaneously. Fig. 3 demonstrates this process, where multiple shards are processed concurrently by two shard-threads (sThreads). As shown in the figure, SLMT optimizes hardware resource utilization during shard processing, which is the central aspect of the overall GNN computation. We will describe the architectural details later in Sec. V-B.
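Functionally, the scheduling can be mimicked in software as below. This is illustration only: the real SLMT is a hardware mechanism, and the mild inter-shard dependency of GatherOp is ignored here:

```
from concurrent.futures import ThreadPoolExecutor

def run_gather_phase(interval, gather_phase, num_sthreads=2):
    # Each sThread drains shards from the same interval; while one shard
    # waits on memory, another can occupy the functional units.
    with ThreadPoolExecutor(max_workers=num_sthreads) as pool:
        for _ in pool.map(lambda s: gather_phase(interval, s), interval.shards):
            pass
```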
### _Fine-Grained Graph Partitioning (FGGP)_
FGGP is proposed to reduce redundant and unnecessary data transfers for partitioning-based GNN computation by exploiting graph sparsity. Generally, FGGP achieves this goal in two ways. First, FGGP can significantly increase the destination interval size (also the shard width) under memory constraints. We use Fig. 4-a as an example to illustrate the benefit. Initially, the source vertex 5 would be loaded twice under the first and second destination intervals. By increasing the interval size from 3 to 6, the source vertex 5 will only be loaded once, saving bandwidth. In this context, we identify the multiple loads of a single source vertex as redundant data transfer. The second aspect of FGGP is to skip unused source vertices for each shard, eliminating useless data transfers.
The core of FGGP lies in generating shards at the level of individual edges and source vertices during graph partitioning. To generate a shard, rather than forming a list of consecutive source vertices and simply assuming each source is fully connected to the vertices in the destination interval, we propose dynamically adding each edge with its associated source vertex to the shard until it is full. As a result, the source vertex lists of our shards can be discontinuous, as shown in Fig. 4-b. This method not only skips unused sources but also decouples the interval size from the memory constraint. Therefore, we can specify an interval size that far exceeds the memory size available for storing the shard.
We implement a graph partitioner to realize FGGP. To generate shards, the partitioner requires both the adjacency matrix of the input graph and the data dimensions from the compiler. We will discuss the graph partitioner in more detail later in Sec. V-D.
## V SwitchBlade Implementation
To actualize the proposed methods, we develop a GNN accelerator framework comprising the GNN Accelerator (GA), GNN Compiler (GC), and Graph Partitioner (GP). Furthermore, we design an instruction set architecture (ISA) to serve as the interface connecting these three components of our framework.
In particular, the GA features a versatile instruction-driven pipeline equipped with various domain-specific functional units. Additionally, the GA employs SLMT to automatically pipeline GNN execution, ensuring efficient execution across a wide range of GNN models. The GC is devised to generate ISA code in the form of PLOF phases, accepting arbitrary GNN models written in high-level frameworks (e.g., DGL [31], PyG [7]) as input, automatically mapping them into PLOF phases, and generating the corresponding ISA code. The GP employs FGGP to create denser shards from the input graph in accordance with the GA specification and GNN model information derived from the GC. We delve into the details of our developed framework in the following sections.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Type** & **Opname** & **Operand** \\ \hline \multirow{2}{*}{Compute} & ELW (ADD, MUL, RELU), DMM (GEMM), & Dimensions, \\ & GTR (GTR.SUM.F, SCTR.F, SCTR.B) & Memory \\ \hline \multirow{2}{*}{Memory} & Load/Store Src/Dst (LD.D, LD.S, ST.D) & \\ \hline \hline \end{tabular}
\end{table} TABLE II: Example of SwitchBlade instructions.
Fig. 4: Graph shards produced by (a) prior graph partitioning with sparsity elimination [37] and (b) our fine-grained graph partitioning.
Fig. 3: Executing PLOF phases using Shard-Level Multi-Threading with 2 sThreads.
### _Instruction Set Architecture (ISA)_
We first introduce the ISA of SwitchBlade, which serves as the foundation for our GA, GC, and GP. Tbl. II presents an example of the instructions, while Fig. 5 illustrates the GNN accelerator architecture.
Initially, we categorize the instructions in the ISA into two types: Compute and Memory. Each instruction type targets a specific architectural component within the hardware. Compute instructions are further classified into three sub-types, corresponding to the three primitive operator types in GNNs (Sec. II). Generally, compute instructions are directed to the functional unit depicted in Fig. 5. Memory instructions, on the other hand, handle data transfer of vertices and edges between the on-chip embedding buffer and off-chip DRAM and are issued to the load-store unit (LSU) shown in Fig. 5.
For each instruction, we define three fields: opname, data-dimension, and memory-symbol. The opname and data-dimension fields specify the targeted operator and the associated parameters of the input and output data. To apply a single program to multiple intervals and shards with varying sizes, we also establish a set of macros representing the parameters of intervals and shards in the second field. For instance, we use \(E\) for the number of edges in the current shard. These macros should be decoded at runtime by the hardware controller. The third field, memory-symbol, indicates the memory addresses of the input and output data. We define memory-symbols as another kind of macro, offering three symbol types--\(D\) (destination), \(S\) (source), and \(E\) (edge)--to denote the data types of input and output, which may correspond to different hardware stores. To obtain the actual addresses, the hardware controller calculates the address at runtime, also accounting for the varying sizes of intervals and shards.
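As a concrete (and purely illustrative) encoding of the three fields, one might write the following; the macro operands and symbol names are assumptions chosen for the example:

```
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Instruction:
    opname: str                 # e.g. "GTR.SUM", "GEMM", "LD.S"
    dims: Tuple[str, ...]       # data-dimension field; may hold macros such as "E"
    symbols: Tuple[str, ...]    # memory-symbols: D* (dest), S* (source), E* (edge)

# Gather-sum over the current shard's E edges (dimension 128) into symbol D0;
# the controller resolves "E" and the symbol addresses at runtime.
gtr = Instruction(opname="GTR.SUM", dims=("E", "128"), symbols=("D0", "E0"))
```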
### _GNN Accelerator (GA)_
The GA is an instruction-driven platform designed to perform various GNN computations as expressed by the ISA. To address the challenge of supporting diverse operator numbers and orders, we utilize the SLMT as the core of GA. Based on SLMT, we achieve a more flexible design where different functional units connect to various buffers in parallel, enabling versatile data access. Our GA design is illustrated in Fig. 5.
#### V-B1 Functional Unit
The GA comprises two types of dedicated functional units for GNN primitive operators (Sec. II-A). The first, called a _vector unit (VU)_, includes multiple cores operating in the Single-Instruction-Multiple-Data (SIMD) paradigm. It targets the lightweight ELW and GTR operators. For ELW, each core operates independently on different embeddings. For GTR, each core is responsible for one destination vertex in GatherOp or one edge in ScatterOp. The second, a _matrix unit (MU)_, is an output-stationary systolic multiply-accumulate (MAC) array targeting compute-intensive DMM operators for increased efficiency. In summary, this functional unit covers all three operator types in GNNs.
#### V-B2 Controller
The Controller implements the SLMT as described in Sec. IV-C. It maintains multiple program counters (PCs), each corresponding to a thread. To simplify control flow, we employ one _interval thread (iThread)_ and multiple _shard threads (sThreads)_. The iThread executes only the ScatterPhase and ApplyPhase, processing destination vertices in intervals, while sThreads execute only the GatherPhase, processing source vertices and edges in shards. For each instruction fetched by a PC, the controller decodes it and enqueues it into a queue specified by its instruction type. Each queue dequeues and issues an instruction when the targeted component is ready.
To adhere to the execution logic in Alg. 2, we introduce a Phase Scheduler (PS) to manage phase switching comprehensively. Whenever a PC reaches the end of a phase, the PS is activated to determine the next phase for the PC or to pause or resume a thread by accessing the shard metadata in the Graph Buffer. For instance, when an sThread completes a shard, the PS resets its PC and assigns it another shard if any are available within the interval. If no shards are left within this interval, the PS pauses the sThread. If all sThreads are paused, the PS resumes the iThread and sets its PC to ApplyPhase.
#### V-B3 Embedding Buffer
We employ two on-chip scratchpad memory (SPM) pieces as Embedding Buffers to store vertex and edge embeddings and intermediate data. These buffers connect in parallel to the functional units through a crossbar, enhancing data access flexibility and allowing for arbitrary operator numbers and orders to be executed on the GA. One embedding buffer, called DstBuffer, stores data of destination vertices in intervals, corresponding to memory-symbol D in the ISA. The other embedding buffer, named SrcEdgeBuffer, stores data of edges and source vertices in shards, corresponding to memory-symbol S and E in the ISA. As the buffer holds various data for intervals and shards, we develop a compiler (Sec. V-C) to effectively manage and access this buffer.
To supply data for the SLMT, we logically divide the SrcEdgeBuffer into \(num\_stthread\) parts to hold multiple shards simultaneously. Each sThread privately accesses shard data in the SrcEdgeBuffer using different base addresses, while all threads share access to interval data in the DstBuffer.
#### V-B4 Graph Buffer
The Graph Buffer is composed of three parts: MetaBuffer, DataBuffer, and LSU. The DataBuffer stores vectors representing connections between vertices and edges of a shard, using Coordinate Format (COO) to sup
Fig. 5: The architecture of SwitchBlade GNN Accelerator.
port GTR operators performed by the Functional Unit. The MetaBuffer stores scalars indicating current shard sizes, including the number of source vertices and edges. Both DataBuffer and MetaBuffer are designed to hold multiple shards simultaneously, supporting the SLMT. The LSU handles shard and embedding transfers, receiving memory instructions from the controller, translating them into low-level transactions using shard and interval data in the DataBuffer, and sending the transactions to the downstream DRAM interface.
We implement a straightforward prefetch mechanism to supply data to graph buffers. A 1-bit flag is assigned to each shard, indicating whether the data requires an update. If the bit is set to \(1\), the current data is deemed outdated, prompting the LSU to automatically load the subsequent shard into the buffer and change the bit to \(0\). The system remains idle until the phase scheduler in the controller switches the bit back to \(1\) due to phase changes.
### _PLOF Compiler_
We develop the GNN Compiler (GC) to automatically map a GNN model, written in a high-level framework, to the PLOF template program. Fig. 6 illustrates the workflow of GC. The entire compilation process consists of three steps, as described below.
#### V-C1 Constructing Unified Computational Graph
In this phase, GC extracts a unified computational graph from high-level frameworks. This unified graph replaces framework-specific graph operators--such as apply_edges and update_all in DGL, and scatter in PyG--with more generic GTR operators, as presented in Sec. II.
#### V-C2 Constructing PLOF Phases
GC decomposes the unified computational graph into _groups_ of PLOF phases, as outlined in Sec. IV-B. Specifically, GC traverses the unified graph from each GTR operator along both in-edges and out-edges, respectively, until encountering another GTR. During traversals, GC labels each visited edge with _src_, _dst_, or _edge_, depending on the starting GTR operator type. Subsequently, GC performs reverse topology sorting on the labeled unified graph, cutting the foremost edge of each successive edge block--marked with dst labels--and other corresponding unvisited edges.
Next, GC determines the phase to which each operator belongs. This process also relies on the data labels marked earlier. Specifically, GC executes reverse topology sorting, during which it appends visited operators to ApplyPhase until encountering an operator whose in-edge lacks a dst label. Then, GC appends visited operators to GatherPhase until encountering an operator whose out-edge does not have an edge label. Finally, GC appends all remaining operators to ScatterPhase.
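The phase-assignment pass can be sketched as a single reverse-topological sweep; this is a hedged reading of the rules above, with the label predicates standing in for the compiler's edge labels:

```
def assign_phases(ops_rev_topo, has_dst_in_edge, has_edge_out_edge):
    """ops_rev_topo: operators in reverse topological order."""
    phases = {"apply": [], "gather": [], "scatter": []}
    state = "apply"
    for op in ops_rev_topo:
        if state == "apply" and not has_dst_in_edge(op):
            state = "gather"                 # left ApplyPhase at this operator
        if state == "gather" and not has_edge_out_edge(op):
            state = "scatter"                # left GatherPhase likewise
        phases[state].append(op)
    return phases["scatter"], phases["gather"], phases["apply"]
```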
#### V-C3 Code Generation and Post-Processing
GC generates compute and memory instructions for operators in accordance with the ISA. Specifically, GC first creates one compute instruction for each operator. The opname and data-dimension are directly derived from the current operator, while the memory-symbols are jointly determined by the current and adjacent operators. For the destination memory-symbol, GC establishes its type based on the operator label marked in Sec. V-C2 and assigns a new number to the type. The source memory-symbols are therefore the destination symbols of the instructions it depends on. GC also inserts corresponding memory instructions into the phases if the input or output memory-symbols are not produced or consumed within the same PLOF phase.
After instruction generation, GC performs memory-symbol liveness analysis to manage the on-chip buffer. GC first calculates the size of each symbol and then merges two symbols of the same size if the former is no longer in use. Consequently, more useful data can be stored on-chip, resulting in better buffer utilization. At this stage, GC completes code generation for GNNs from high-level frameworks.
During code generation, GC also produces parameters for graph partitioning. Specifically, GC calculates two parameters: _dim_src_ and _dim_edge_. The former represents the total data-dimensions of all source vertex memory-symbols in each GatherPhase,
Fig. 6: The workflow of SwitchBlade GNN Compiler with an example DGL program.
while the latter signifies the total data-dimensions of all edge memory-symbols. These parameters are determined by accumulating the data-dimensions of corresponding symbols. These parameters allow the downstream GP to perform FGGP, as detailed in the following Sec. V-D.
### _Graph Partitioner (GP)_
GP implement our FGGP based on DSW-GP. Alg. 3 presents the pseudocode for our GP. The primary distinction between our GP and the original DSW-GP, as shown in Alg. 1, lies in the inner loop, where we iterate through source vertices for each interval. For every source, GP first executes _acquireNeiList_ to obtain the _dstList_ of adjacent destination vertices, along with the associated edge under the current interval. If dstList is empty, GP directly skips the source; otherwise, GP performs _probeShardSize_ to check whether there is extra space for the source and its associated edges in the shard, based on the following rule:
\[num\_src\times dim\_src+num\_edge\times dim\_edge\leq\frac{mem\_capacity}{num\_thread}, \tag{1}\]
where \(num\_src\) and \(num\_edge\) represent the numbers of source vertices and edges of a shard, \(dim\_src\) and \(dim\_edge\) denote the total dimensions of all source vertex and edge memory-symbols in the generated phases from GC, and \(num\_thread\) is the number of sThreads running on GA. If Equ. 1 is satisfied, GP performs _appendSource_ to include the current source vertex and its associated edges into the shard. Otherwise, GP finalizes the current shard and initializes a new one before appending the source vertex.
```
Input: G, input graph; M, targeted GNN model.
Output: S, the resulted shards.
I ← initIntervals(G, M);
for each i in I do
    s ← initShard(i);
    srcPtr ← 0;
    while srcPtr < G.vTotal do
        dstList ← acquireNeiList(G, i, srcPtr);
        if dstList.size > 0 then
            if not probeShardSize(s, dstList, M) then   // no space left (Equ. 1 violated)
                finalizeShard(i, s);
                s ← initShard(i);
            appendShardSource(s, srcPtr, dstList);
        srcPtr ← srcPtr + 1;
    finalizeShard(i, s);
    finalizeInterval(i);
```
**Algorithm 3**_DSW-GP with proposed FGGP_
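For readers who prefer running code, the inner loop of Alg. 3 together with the Equ. 1 check can be rendered as the following sketch; the names and per-edge bookkeeping are ours, not the released partitioner:

```
def fggp_interval(neighbors, num_vertices, dim_src, dim_edge,
                  mem_capacity, num_sthread):
    """neighbors(v) returns v's destination list inside the current interval."""
    budget = mem_capacity / num_sthread
    shards, shard = [], {"srcs": [], "edges": []}
    for src in range(num_vertices):
        dst_list = neighbors(src)               # acquireNeiList
        if not dst_list:
            continue                            # skip unused sources entirely
        n_src = len(shard["srcs"]) + 1
        n_edge = len(shard["edges"]) + len(dst_list)
        if n_src * dim_src + n_edge * dim_edge > budget:   # Equ. 1 violated
            shards.append(shard)                # finalizeShard
            shard = {"srcs": [], "edges": []}   # initShard
        shard["srcs"].append(src)               # appendShardSource
        shard["edges"].extend((src, d) for d in dst_list)
    shards.append(shard)                        # finalize the last shard
    return shards
```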
## VI Methodology
**Simulation Method.** We implement, synthesize, and validate the SwitchBlade components using Verilog HDL with Synopsys Design Compiler and TSMC 28 nm standard cell library at 1 GHz. Based on the synthesis, we construct a C++-based cycle-level simulator with aligned latency for end-to-end SwitchBlade evaluation. We integrate Ramulator [17] to obtain accurate latency for off-chip memory behaviors. Our simulator is validated against DGL built-in models [31] to ensure correct functionality. For on-chip scratchpad memory, we use Synopsys Memory Compiler to estimate area and power. We measure HBM access energy at 7 pJ/bit [38].
**Benchmark Datasets and Models.** We select five graphs from real-world workloads [13] with varying graph sizes and sparsity levels, detailed in Tbl. IV. Additionally, we choose four popular and diverse GNN models [6] with mathematical expressions shown in Tbl. I. For each model, we stack two identical layers with a dimension of 128 for input, hidden, and output embeddings for simplicity.
**Baselines.** We compare SwitchBlade with a general-purpose processor and a GNN accelerator. For the general-purpose processor, we use the DGL-0.7 [31] library running on an NVIDIA V100 GPU [25] with 32 GB memory and an Intel Xeon E5-2630 v4 with 256 GB memory. For the GNN accelerator, we reproduce HyGCN, a state-of-the-art accelerator specifically designed for the GCN model, and compare its performance against SwitchBlade under the GCN. Tbl. III summarizes the configuration details. We set three sThreads for our shard-level multi-threading to match the three types of hardware resources: VU, MU, and bandwidth.
## VII Evaluation Results
**Energy.** To ensure a fair comparison, we convert the results from 28nm to 12nm [26]. Fig. 8 displays the results. SwitchBlade achieves an average \(19.03\times\) energy saving over the baseline V100 GPU and \(0.82\times\) over HyGCN. The high energy efficiency compared to the GPU is attributed to reduced on-chip data access and efficient domain-specific functional units in the SwitchBlade accelerator. In comparison to HyGCN, SwitchBlade also holds a slight advantage due to its simpler MU micro-architecture.
**Area and Power.** Tbl. V summarizes the area and power of SwitchBlade components under the TSMC 28 nm standard library. The total area and power of SwitchBlade are \(28.25\)\(mm^{2}\) and \(6.06\)\(W\), respectively, accounting for only \(3.47\%\) and \(2.43\%\) of the baseline V100 GPU with \(815\)\(mm^{2}\) and \(250\)\(W\) under the 12 nm technology node. Among the components, the SRAM-based SPM (SEB, DB, and TB) consumes the majority, representing \(76\%\) and \(58\%\) of the total area and power, respectively.
In the following subsections, we evaluate the effectiveness and sensitivity of each individual method proposed in this paper.
### _Partition-Level Operator Fusion_
We assess the effectiveness of PLOF by measuring the reduction in total data transfer between on-chip and off-chip memory. Fig. 9 presents the results. PLOF is effective for all types of GNN workloads and significantly reduces data transfer compared to the operator-by-operator execution paradigm on the GPU platform. This substantial reduction contributes to the speedup and energy reduction over the baseline V100 GPU.
### _Shard-Level Multi-Threading_
**Hardware Utilization.** To evaluate the effectiveness of SLMT, we measure the overall hardware utilization by averaging the individual utilization of DRAM bandwidth, VU, and MU. Fig. 10 presents the results. SwitchBlade achieves higher overall utilization across all workloads when employing 3 SLMT sThreads rather than 1, which is regarded as SLMT turned off.
**sThread Number.** We also assess the performance and resource utilization under different concurrent sThread numbers. Fig. 11 displays the results. Generally, the execution latency decreases initially and then increases, reaching optimal performance with two sThreads on each workload. The decrease in latency is expected as multiple sThreads can concurrently exercise different hardware execution units (VU, MU, and bandwidth), resulting in higher overall hardware utilization and performance. The reason for the increase in latency after two sThreads is the reduced data parallelism and hardware functional unit efficiency due to limited on-chip memory space for each sThread. Consequently, given the three types of hardware execution units, we observe minimal performance improvement with more than three sThreads.
TABLE V: Area and power breakdown of SwitchBlade.

| **TSMC 28nm** | **MU** | **VU** | **CTRL** | **RAM** | **Total** |
| --- | --- | --- | --- | --- | --- |
| **Area / \%** | 15.46 | 6.37 | 2.11 | 76.06 | 28.25 mm\({}^{2}\) |
| **Power / \%** | 24.02 | 14.95 | 2.66 | 58.38 | 6.06 W |
Fig. 8: Energy saving over the baseline V100 GPU.
Fig. 7: Speedup over the baseline V100 GPU.
Fig. 9: Normalized off-chip data-transfer with PLOF over GPU execution paradigm.
### _Fine-Grained Graph Partitioning_
**Data Density.** To evaluate the effectiveness of mitigating the larger on-chip SPM requirement from SLMT, we first measure the average buffer occupancy rate for SEB and DB on different datasets, calculated as:
\[\textit{occupancy\_rate}=\frac{1}{W}\sum_{i=1}^{W}\frac{\textit{data\_accessed}_{i}}{\textit{total\_buffer\_space}},\]
where \(W\) is the total number of writes to the targeted buffer. Fig. 12 presents the results. The occupancy rate of SwitchBlade reaches nearly \(99\%\) using FGGP, whereas prior graph partitioning with sparsity elimination in HyGCN only achieves a \(44\%\) occupancy rate. As a result, SwitchBlade can attain similar performance using \(31\%\) less on-chip SPM.
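For concreteness, the following minimal Python sketch computes this metric from a trace of buffer writes; the trace values and function name are illustrative assumptions rather than part of the SwitchBlade toolchain.

```python
# Minimal sketch (illustrative, not part of the SwitchBlade toolchain):
# average fraction of the buffer that holds useful data at each write.
def occupancy_rate(bytes_accessed_per_write, total_buffer_bytes):
    """Mean over all W writes of data_accessed_i / total_buffer_space."""
    writes = len(bytes_accessed_per_write)
    assert writes > 0 and total_buffer_bytes > 0
    return sum(b / total_buffer_bytes for b in bytes_accessed_per_write) / writes

# Example: a 4 MB shard-edge buffer (SEB) that is nearly full on most writes.
seb_trace = [4.0e6, 3.9e6, 4.0e6, 3.8e6]   # bytes of useful data per write
print(f"SEB occupancy: {occupancy_rate(seb_trace, 4.0e6):.2%}")
```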
**Data Reuse.** We also measure the total data transfer and corresponding performance improvement from FGGP under the same on-chip SPM budget as HyGCN. Fig. 13 displays the results. By increasing the DB size from 8MB to 13MB, we achieve an additional \(10\%\) data-transfer reduction through FGGP, leading to a further \(1.1\times\) speedup. FGGP gains less speedup on the HW graph because HW is relatively dense, making the bandwidth less critical than functional units and rendering the FGGP optimization less effective.
## VIII Related Work
In the past three years, GNN hardware acceleration has become a prominent research topic. Some studies have achieved high performance and efficiency through model-specific optimizations. HyGCN [37] is the first to decouple the two operators in GCNs into stages, designing two dedicated compute engines to enable a two-stage pipeline execution. GReTA [18] extends this concept with a multi-stage pipeline and bidirectional dataflow. GraphACT [40] and ReGraphX [1] also incorporate CPU and 3D ReRAM techniques in their multi-stage acceleration. AWB-GCN [8] proposes a unified SpMM architecture for GCN's two operators, improving hardware utilization. GCNAX [21] presents a reconfigurable loop ordering and fusion design for GCN, while I-GCN [9] explores data locality with specific reorder mechanisms to enhance accelerator performance. These approaches achieve high performance and efficiency by making strong assumptions about the targeted GNN models and designing specialized hardware architectures. Although they can run other models that do not match their hardware designs by bypassing some stages, this increases data movement and subsequently reduces performance and efficiency.
Other studies focus on more flexible hardware or scheduling designs. EnGN [15] introduces a unified SIMD architecture that adopts a ring-based dataflow for SpMM to improve hardware utilization. Auten et al. [2] connect all components with a crossbar switch in a hardware block. However, these works do not capture the generic GNN characteristics and suffer from long-distance data movement problems, leading to low performance and energy efficiency. In contrast, we make no assumptions about the targeted GNN models and propose three methods to enhance GNN accelerator performance. Our approach involves co-designing the software and hardware of the compiler, architecture, and graph partitioner as a whole.
## IX Conclusion
In this paper, we present SwitchBlade, a comprehensive framework designed to address both variety and bandwidth challenges in GNN acceleration. Our approach achieves an average speedup of \(1.85\times\) and energy savings of \(19.03\times\) compared to GPUs, while also delivering performance on par with state-of-the-art model-specific GNN accelerators. To accomplish these results, we introduce a partition-level operator fusion technique that significantly reduces the high bandwidth requirements associated with GNN execution. Furthermore, we propose a shard-level multi-threading approach to enhance the overall utilization of bandwidth and other functional units. Lastly, to tackle the increased on-chip memory contention caused by multi-threading, we present a fine-grained graph partitioning method for refining shard data. Importantly, all three methods are model-agnostic, making them suitable for a wide range of GNN models. Our experimental results demonstrate the effectiveness of the proposed methods in SwitchBlade for enhancing the performance of various GNN models and datasets.
|
2303.16038 | Polar Coded Integrated Data and Energy Networking: A Deep Neural Network
Assisted End-to-End Design | Wireless sensors are everywhere. To address their energy supply, we proposed
an end-to-end design for polar-coded integrated data and energy networking
(IDEN), where the conventional signal processing modules, such as
modulation/demodulation and channel decoding, are replaced by deep neural
networks (DNNs). Moreover, the input-output relationship of an energy harvester
(EH) is also modelled by a DNN. By jointly optimizing both the transmitter and
the receiver as an autoencoder (AE), we minimize the bit-error-rate (BER) and
maximize the harvested energy of the IDEN system, while satisfying the transmit
power budget constraint determined by the normalization layer in the
transmitter. Our simulation results demonstrate that the DNN aided end-to-end
design conceived outperforms its conventional model-based counterpart both in
terms of the harvested energy and the BER. | Luping Xiang, Jingwen Cui, Jie Hu, Kun Yang, Lajos Hanzo | 2023-03-28T15:14:31Z | http://arxiv.org/abs/2303.16038v1 | # Polar Coded Integrated Data and Energy Networking: A Deep Neural Network Assisted End-to-End Design
###### Abstract
Wireless sensors are everywhere. To address their energy supply, we proposed an end-to-end design for polar-coded integrated data and energy networking (IDEN), where the conventional signal processing modules, such as modulation/demodulation and channel decoding, are replaced by deep neural networks (DNNs). Moreover, the input-output relationship of an energy harvester (EH) is also modelled by a DNN. By jointly optimising both the transmitter and the receiver as an autoencoder (AE), we minimize the bit-error-rate (BER) and maximize the harvested energy of the IDEN system, while satisfying the transmit power budget constraint determined by the normalization layer in the transmitter. Our simulation results demonstrate that the DNN aided end-to-end design conceived outperforms its conventional model-based counterpart both in terms of the harvested energy and the BER.
## I Introduction
Wireless sensors are becoming pervasive in support of the Internet of Everything (IoE) [1]. However, their limited energy storage constrains their operational cycles. Fortunately, radio-frequency (RF) signals can be relied upon for controllable wireless energy transfer (WET) towards these miniature sensors. Generally, the RF signals simultaneously convey energy as well as information, which forms the basis of integrated data and energy networking (IDEN). The WET aims to meet the associated recharging requirement, while the wireless information transfer (WIT) aims for meeting the communication requirement. Nevertheless, coordinating both WIT and WET within the same spectrum is challenging, although highly desirable for simultaneously satisfying both communication and recharging requirements [2].
The concept of the end-to-end communication system was proposed in [3] for improving the attainable performance in complex scenarios, in the face of uncertainties where conventional mathematical methods were hard to apply. In such systems, the transmitter, the channel, and the receiver may be implemented in the form of deep neural networks (DNNs), which can be trained together as an autoencoder (AE). This approach does not rely on the classical functional modules for modulation/demodulation, hence it is also often termed as being model-free. This novel architecture achieves competitive bit error rate (BER) performance when compared to traditional model-based communication systems. This is because the DNN aided model-free transceiver is capable of jointly optimizing the entire process from the generation of data bits at the transmitter to their reception at the receiver, which constitutes a so-called "end-to-end" design. Moreover, this design allows the transceiver to cope with the imperfections of practical systems, such as their non-linearity for example. Since the channel is unknown in practice, Aoudia and Hoydis [4] presented a new learning algorithm, which alleviated this problem by training the transmitter and receiver differently. Explicitly, they trained the receiver with the aid of an approximation of the loss function gradient, while training the transmitter by relying on the true gradient. Most AEs were trained based on symbol-level information [4, 5, 6], but this philosophy is incompatible with practical bit-metric based decoding (BMD) at the receivers [7]. Therefore, Cammerer _et al._[8] conceived an AE based on bit-wise mutual information (BMI), which was eminently suitable for integration with practical receivers.
For WET systems, the transmit signals carry energy. Hence, the energy harvester (EH) at the receiver harvests RF energy from the received signals. The EH relies on an antenna and a rectifier, which converts the RF signal power into direct current (DC) by relying on a non-linear mapping characteristic [1]. Obviously, the specific characteristics of EHs have a substantial impact on the WET performance at the receiver. Therefore, it is crucial to model the nonlinear nature of the energy harvesting process accurately. Varasteh _et al._[6] proposed a pair of analytical EH models for low and high RF input power, respectively. However, some practical hardware impairments, such as the impedance mismatch and the non-ideal nature of the low-pass filters, were hard to model accurately. Accordingly, they proposed to characterise the EH model by a DNN and they investigated the IDEN performance in an end-to-end manner for the first time.
As one of the most important functions of an end-to-end communication system, channel decoding has a beneficial impact on
the BER performance [12]. Polar codes have been adopted in the 5G New Radio (NR) control channel as a benefit of their good performance at short block-length. Hence, much research on polar codes has been conducted for further improvement, e.g., polar code design adapted to multiple fast fading channels [13] and soft list polar decoding for multiple-input multiple-output (MIMO) systems [14]. As a further advance, deep learning aided polar decoding designs were conceived in [9, 10, 11]. Specifically, Zhu _et al._[9] designed a residual neural network decoder for polar codes, where a denoising module based on residual learning was appended before the neural network. In 3GPP Release 15 [15], the cyclic redundancy check-assisted successive cancellation list (CA-SCL) algorithm is standardized as the polar decoder because of its superiority in error correction. However, the BP algorithm achieves lower latency than the SCL due to its parallel structure, although it suffers from slow convergence and inferior error correction; DNN based BP decoders have been proposed to overcome these problems. For instance, Xu _et al._[10] proposed a novel DNN based polar decoder, which reduced the latency and complexity compared to the conventional belief propagation (BP) based method, while a recurrent neural network (RNN)-aided polar decoder was proposed in [11], which required reduced memory without substantial performance erosion.
However, in existing systems, typically a single functional module (e.g., modulation/demodulation, the EH or the channel decoder) is implemented by a DNN in isolation, which merely optimizes that module but fails to achieve globally optimal performance. Moreover, the adoption of polar codes in 5G demonstrates their practical significance, yet the benefits of polar codes have been overlooked in the existing IDEN literature, even though they are capable of substantially improving the WIT performance. Hence, harnessing them in IDEN systems is also expected to improve the WET performance, since we may be able to allocate more communication resources to WET services. Therefore, it is essential to consider the end-to-end design of a polar-coded IDEN system.
Against this background, our main contributions are explicitly contrasted with the existing literature in Table I at a glance, and they are summarized in more detail as follows:
* We conceive an end-to-end polar-coded IDEN system, where the polar code is harnessed both for data and energy transmission. The original functional modules of modulation, demodulation, EH and polar decoding are replaced by DNNs, which are jointly optimized to achieve an improved IDEN performance.
* By exploiting the similarities between the polar code's graph-based representation and the neural network connections, we formulate a DNN-aided BP based polar decoder. In contrast to [10], this decoder is designed for satisfying both the WIT and WET requirements, minimizing the BER while satisfying the energy harvesting requirement and the transmit power budget of the IDEN system designed.
* Our proposed system provides a gain of almost \(14\) dB in comparison to the traditional system at the target BER of \(10^{-3}\) at \(22\) dB, with the aid of \(3\) BP iterations and \(\lambda=0.01\), for transmission over a Rayleigh fading channel.
The rest of this paper is organised as follows. Our system model is described in Section II, while our optimization problem and the corresponding solution is detailed in Section III. After providing our simulation results in Section IV, we finally conclude in Section V.
## II System Model
### _Transmitter_
The transmitter is constituted by a polar encoder and an AE mapper, as illustrated in Fig. 1.
#### Ii-A1 Polar Encoder
A \(K\)-bit information sequence \(\mathbf{b}\) is first polar-coded into an \(N\)-bit coded bit sequence. To obtain an \((N,K)\) polar code, we assign the information bits in \(\mathbf{b}\) to the \(K\) most reliable "sub-channels" out of the total of \(N\) "sub-channels". The remaining \((N-K)\) bits are referred to as zero-valued frozen bits and they are assigned to the \((N-K)\) less reliable sub-channels. The sub-channel reliability sequence we implemented is the one proposed in [15]. Note that the positions and values of the frozen bits are known by both the polar encoder and the decoder. Polar encoding is performed based on the combined information and frozen bit sequence \(\mathbf{u}\) having \(N\) bits in total. The output \(\mathbf{c}\) of the polar encoder is obtained as
\[\mathbf{c}=\mathbf{u}\mathbf{G}_{N}=\mathbf{u}\mathbf{F}^{\otimes\mathbf{n}} \mathbf{B}_{N}, \tag{1}\]
where \(\mathbf{G}_{N}\) is the generator matrix, while \(\mathbf{B}_{N}\) is the bit-reversal permutation matrix, which is harnessed for simplifying the design of the decoder [16]. Furthermore, the symbol \(\otimes\) denotes the Kronecker product and \(\mathbf{F}^{\otimes\mathbf{n}}\) is the \(n\)-th Kronecker power of \(\mathbf{F}=\left[\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}\right]\) associated with \(n=\log_{2}N\).
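To make the encoding step concrete, a minimal Python sketch of Eq. (1) is given below; the information-bit positions are toy assumptions rather than the reliability sequence of [15], and the bit-reversal permutation \(\mathbf{B}_{N}\) is omitted for brevity.

```python
import numpy as np

def polar_encode(u, n):
    """Sketch of Eq. (1): c = u F^{(x)n} over GF(2), with F = [[1, 0], [1, 1]].
    The bit-reversal permutation B_N is omitted here for brevity."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = F
    for _ in range(n - 1):
        G = np.kron(G, F)                  # n-th Kronecker power of F
    return u.dot(G) % 2

# Toy (N, K) = (8, 4) example: 4 information bits on illustrative positions,
# the remaining sub-channels carry zero-valued frozen bits.
N, n = 8, 3
u = np.zeros(N, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]
print(polar_encode(u, n))
```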
#### Ii-A2 AE-Mapper
The AE-mapper performs the modulation function with the binary bit sequence as its input and the modulated symbols as its output. A generic architecture of the AE-mapper is portrayed in Fig. 2(a), where the channel signal-to-noise ratio (SNR) \(\gamma\) together with the polar encoded bit vector \(\mathbf{c}\) are input to the AE-mapper. The AE-mapper includes a fully-connected \(I_{\text{S}}\)-layer DNN \(f_{\boldsymbol{\theta}_{\text{S}}}\), with the last layer being the so-called normalization layer for satisfying the transmit power constraint, as shown in Fig. 2(a). One-hot mapping is applied to \(\mathbf{c}\). Given the modulation order \(M\), we have a matrix of dimension \(\mathbf{V}\in\mathbb{C}^{M\times N/\log_{2}M}\), where the \(n\)-th column represents a one-hot vector \(\mathbf{v}_{n}\in\mathbb{C}^{M\times 1}\). Given a channel SNR \(\gamma\), the DNN \(f_{\boldsymbol{\theta}_{\text{S}}}\) learns to constitute an \(M\)-ary constellation \(\mathbf{M}_{\gamma}\in\mathbb{C}^{M\times 2}\), whose two columns represent the real and imaginary parts of the \(M\) constellation points, respectively. The function \(f_{\boldsymbol{\theta}_{\text{S}}}\) of the DNN can be expressed as
\[\mathbf{M}_{\gamma} =f_{\boldsymbol{\theta}_{\text{S}}}(\gamma) \tag{2}\] \[=f_{\text{norm}}\left(\mathbf{W}_{I_{\text{S}}}^{(\text{S})}\left[ \cdots f_{\text{ReLU}}(\mathbf{W}_{1}^{(\text{S})}\gamma+\mathbf{b}_{1}^{( \text{S})})\cdots\right]+\mathbf{b}_{I_{\text{S}}}^{(\text{S})}\right),\]
where \(f_{\text{norm}}(\cdot)\) and \(f_{\text{ReLU}}(\cdot)\) represent the normalization and the ReLU activation functions, respectively, while the trainable weight and bias parameters \(\mathbf{W}_{i}^{(\text{S})}\) and \(\mathbf{b}_{i}^{(\text{S})}\) for \(\forall i=1,\ldots,I_{\text{S}}\) are collected in the set \(\boldsymbol{\theta}_{\text{S}}\).
Fig. 1: An end-to-end polar-coded IDEN system.
To obtain the modulated symbols \(\mathbf{x}\), the constellation set \(\mathbf{M}_{\gamma}\) is multiplied by the matrix \(\mathbf{V}\) to map the one-hot vectors to the constellation points in \(\mathbf{M}_{\gamma}\). Upon considering the \(n\)-th symbol as an example, the real and imaginary parts of the complex baseband symbol \(x_{n}\) are obtained by multiplying the constellation matrix \(\mathbf{M}_{\gamma}\) with the one-hot vector \(\mathbf{v}_{n}\), which can be expressed as
\[[\Re(x_{n})\ \Im(x_{n})]=\mathbf{v}_{n}^{T}\cdot\mathbf{M}_{\gamma}. \tag{3}\]
The resultant modulated symbol vector \(\mathbf{x}=[x_{1},\ldots,x_{N/\log_{2}M}]^{T}\) is then transmitted.
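The following minimal NumPy sketch illustrates this one-hot mapping (Eq. (3)); the constellation matrix stands in for the trained DNN output \(\mathbf{M}_{\gamma}\) and is randomly generated here, which is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                        # modulation order
bits_per_sym = int(np.log2(M))

# Stand-in for the trained DNN output f_theta_S(gamma): M constellation
# points (real, imag) -- random here, learned in the actual system.
M_gamma = rng.standard_normal((M, 2))
# Normalization layer: enforce unit average symbol power E[|x|^2] = 1.
M_gamma /= np.sqrt(np.mean(np.sum(M_gamma ** 2, axis=1)))

# One-hot mapping of the coded bits to constellation points (Eq. (3)).
c = np.array([0, 1, 1, 1, 0, 0])             # example coded bits
idx = c.reshape(-1, bits_per_sym) @ (1 << np.arange(bits_per_sym)[::-1])
V = np.eye(M)[idx]                           # one-hot vectors v_n
x = (V @ M_gamma) @ np.array([1.0, 1.0j])    # complex baseband symbols
print(x)
```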
### _Receiver_
As illustrated in Fig. 1, the receiver consists of a power splitter, an AE-demapper, an EH and a BP decoder. The signal \(\mathbf{y}\in\mathbb{C}^{N/\log_{2}M\times 1}\) impinging at the single-antenna receiver can be expressed as
\[\mathbf{y}=\mathbf{h}\odot\mathbf{x}+\mathbf{n}, \tag{4}\]
where \(\mathbf{h}\in\mathbb{C}^{N/\log_{2}M\times 1}\) represents the channel coefficients, and \(\odot\) denotes the element-wise multiplication. We assume encountering an uncorrelated Rayleigh fading channel, where \(\mathbf{h}\) follows a complex Gaussian distribution \(\mathbf{h}\sim\mathcal{CN}(0,1)\), while the additive white Gaussian noise (AWGN) \(\mathbf{n}\in\mathbb{C}^{N/\log_{2}M\times 1}\) follows \(\mathbf{n}\sim\mathcal{CN}(0,\sigma^{2})\). Since the normalization layer of the AE-mapper ensures that \(\mathbb{E}\left[\|\mathbf{x}\|^{2}\right]=1\) and \(2\sigma^{2}\) is the complex noise variance, \(\sigma^{2}\) can be expressed as \(\sigma^{2}=\frac{\mathbb{E}\left[\|\mathbf{x}\|^{2}\right]}{2\gamma}=\frac{1} {2\gamma}\).
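A short sketch of this channel model, under the stated property \(\mathbb{E}[\|\mathbf{x}\|^{2}]=1\), may look as follows; the helper name and seed are illustrative assumptions.

```python
import numpy as np

def rayleigh_channel(x, snr_db, rng=np.random.default_rng(1)):
    """Sketch of Eq. (4): y = h * x + n (element-wise), with h ~ CN(0, 1) and
    per-dimension noise variance sigma^2 = 1 / (2 * gamma), assuming E[|x|^2] = 1."""
    gamma = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * gamma))
    h = (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)) / np.sqrt(2.0)
    n = sigma * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
    return h * x + n, h

y, h = rayleigh_channel(np.ones(4, dtype=complex), snr_db=20.0)
print(y)
```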
#### Ii-B1 Power Splitter
The received signal \(\mathbf{y}\) is firstly input to the power splitter, which can divide the input signal into two branches according to the specified energy ratio. As shown in Fig. 1, the parameter \(0\leq\rho\leq 1\) denotes the power splitting factor that determines the energy ratio. Then these two branches are forwarded to the AE-demapper and EH, respectively.
#### Ii-B2 AE-Demapper
A portion of the received signal given by \(\sqrt{\rho}\mathbf{y}\) is then fed into the AE-demapper for demodulation, where the output \(\mathbf{d}\) represents the prediction of the encoded sequence \(\mathbf{c}\). The AE-demapper employs an \(I_{\text{D}}\)-layer DNN \(f_{\boldsymbol{\theta}_{\text{D}}}\) relying on the ReLU and linear activation functions for recovering the received symbols, as shown in Fig. 2(b). The action of this DNN \(f_{\boldsymbol{\theta}_{\text{D}}}\) can be formulated as
\[\begin{split}\mathbf{d}=& f_{\boldsymbol{\theta}_{ \text{D}}}(\mathbf{y})\\ =&\mathbf{W}_{I_{\text{D}}}^{(\text{D})}\left[\cdots f _{\text{ReLU}}\left(\mathbf{W}_{1}^{(\text{D})}\mathbf{y}+\mathbf{b}_{1}^{( \text{D})}\right)\cdots\right]+\mathbf{b}_{I_{\text{D}}}^{(\text{D})},\end{split} \tag{5}\]
where \(\mathbf{W}_{i}^{(\text{D})}\) and \(\mathbf{b}_{i}^{(\text{D})}\) are collected into the parameter set \(\boldsymbol{\theta}_{\text{D}}\) denoting the weight and bias of the \(i\)-th layer in the DNN \(f_{\boldsymbol{\theta}_{\text{D}}}\) for \(\forall i=1,\ldots,I_{\text{D}}\).
#### Ii-B3 EH
The remaining portion of the received signal, namely \(\sqrt{1-\rho}\mathbf{y}\) flows into the EH. The harvested direct-current (DC) power is \(P_{\text{del}}\), while the corresponding input RF power is \(P_{\text{in}}\). The relationship between the input RF power and the output DC power can be modeled by a \(I_{\text{E}}\)-layer DNN \(f_{\boldsymbol{\theta}_{\text{E}}}\), where \(\boldsymbol{\theta}_{\text{E}}\) is the parameter set, as proposed in [6]. The function \(f_{\boldsymbol{\theta}_{\text{E}}}\) is formulated as
\[\begin{split} P_{\text{del}}=& f_{\boldsymbol{\theta}_{\text{E}}}(P_{\text{in}})\\ =& f_{\text{tanh}}(\mathbf{W}_{I_{\text{E}}}^{(\text{E})}\left[\cdots f_{\text{tanh}}(\mathbf{W}_{1}^{(\text{E})}\cdot P_{\text{in}}+\mathbf{b}_{1}^{(\text{E})})\cdots\right]+\mathbf{b}_{I_{\text{E}}}^{(\text{E})}),\end{split} \tag{6}\]
where we have \(P_{\text{in}}=(1-\rho)\|\boldsymbol{y}\|^{2}\) and \(f_{\text{tanh}}(\cdot)\) represents the tanh function, while \(\mathbf{W}_{i}^{(\text{E})}\) and \(\mathbf{b}_{i}^{(\text{E})}\) represent the weight and bias of the \(i\)-th layer in the DNN \(f_{\boldsymbol{\theta}_{\text{E}}}\) for \(\forall i=1,\ldots,I_{\text{E}}\), respectively. Note that the EH model is trained separately in advance, using a nonlinear regression algorithm. Then the well-trained model operates as a fixed module in our system during the global training, without any further adjustment.
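As an illustration of the power splitter and the EH chain, the sketch below uses a toy two-layer tanh mapping with placeholder weights; it is not the trained EH model of [6], merely a stand-in with the same input-output structure.

```python
import numpy as np

def power_split(y, rho):
    """Sketch of the power splitter: sqrt(rho)*y feeds the AE-demapper and the
    remaining power P_in = (1 - rho) * ||y||^2 feeds the energy harvester."""
    data_branch = np.sqrt(rho) * y
    p_in = (1 - rho) * np.sum(np.abs(y) ** 2)
    return data_branch, p_in

def eh_dnn(p_in, W1=0.8, b1=0.0, W2=0.5, b2=0.0):
    """Toy stand-in for the pre-trained tanh MLP f_theta_E; placeholder weights."""
    hidden = np.tanh(W1 * p_in + b1)
    return np.tanh(W2 * hidden + b2)       # harvested DC power P_del

y = np.array([0.5 + 0.5j, -0.3 + 0.1j])
branch, p_in = power_split(y, rho=np.sqrt(2) / 2)
print(p_in, eh_dnn(p_in))
```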
#### Ii-B4 BP Decoder
After obtaining the demodulated vector \(\mathbf{d}\), the DNN-aided BP algorithm processes the logarithmic likelihood ratios (LLRs) for carrying out channel decoding and outputs the prediction \(\hat{\mathbf{b}}\) of the original bits \(\mathbf{b}\). The conventional scaled BP decoder is replaced in our system by a multi-layer partially-connected DNN [10], where the connections between two layers correspond to those in the polar code's factor graph, as exemplified in Fig. 3. Generally, for a polar code of length \(N\), its polar factor graph has \(\log_{2}N\) stages, which corresponds to a \(\left[\log_{2}N+1\right]\)-layer DNN associated with \(N(\log_{2}N+1)\) neurons in total. Each layer has \(N\) neurons. As the decoding iteration index \(t=1,\cdots,T\) increases, the DNN expands by repeating the initial \(\log_{2}N\) number of stages. Specifically, the BP decoding process having \(T\) iterations is represented by \([2(\log_{2}N-1)T+1]\) hidden layers in the process of completing the left-to-right (\(L\to R\)) and right-to-left \((R\to L)\) LLR propagation, as illustrated in Fig. 3. The updates of the left-to-right LLR \(R_{i,j}^{(t)}\) and the right-to-left LLR \(L_{i,j}^{(t)}\) at the \(t\)-th iteration follow the scaled min-sum rules

\[L_{i,j}^{(t)}=\alpha_{i,j}^{(t)}\,g\Big(L_{i+1,j}^{(t)},L_{i+1,j+N/2}^{(t)}+R_{i,j+N/2}^{(t)}\Big),\quad R_{i+1,j}^{(t)}=\beta_{i,j}^{(t)}\,g\Big(R_{i,j}^{(t)},L_{i+1,j+N/2}^{(t)}+R_{i,j+N/2}^{(t)}\Big), \tag{7}\]

where we have \(g(a,b)\approx\text{sign}(a)\text{sign}(b)\min(|a|,|b|)\), and \(\alpha_{i,j}^{(t)}\) as well as \(\beta_{i,j}^{(t)}\) are the right-to-left and the left-to-right scaling parameters of the \(j\)-th neuron at the \(i\)-th stage during the \(t\)-th iteration, respectively. In the DNN-aided decoder, the basic computation unit termed as a "processing element" is composed of connected neurons as shown in Fig. 4. The LLRs update throughout this process according to Eq. (7).
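A minimal sketch of the min-sum approximation and one scaled message update is given below; the operand wiring is schematic, since the exact indices follow the polar factor graph, and the function names are illustrative assumptions.

```python
import numpy as np

def g(a, b):
    """Min-sum approximation: g(a, b) = sign(a) * sign(b) * min(|a|, |b|)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def scaled_update(llr_a, llr_b, llr_partner, scale):
    """One scaled message update inside a processing element: a trainable
    scaling factor (alpha or beta in the text) multiplies the g output."""
    return scale * g(llr_a, llr_b + llr_partner)

print(scaled_update(1.5, -0.7, 0.2, scale=0.9))
```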
To ensure that the output falls into the range of \([0,1]\), the classic sigmoid activation function \(f_{\text{sigmoid}}\) is employed by the last layer of the DNN. The function of the DNN-aided BP decoder \(f_{\mathbf{\theta}_{\text{br}}}\) can be expressed as

\[\hat{\textbf{b}}=f_{\mathbf{\theta}_{\text{br}}}(\textbf{d}). \tag{8}\]

Note that the structure of the DNN-aided BP decoder \(f_{\mathbf{\theta}_{\text{br}}}\) follows the design guidelines of [10].

Fig. 2: A generic architecture of (a) AE-Mapper and (b) AE-Demapper and EH.

Fig. 3: An example of DNN-based BP decoding with \(N=4\).
Given the SNR-dependent characteristics of the system, the neural network parameters are sensitive to the channel conditions, which have a substantial impact on the communication performance. Therefore, in order to enhance the adaptability of our system to time-variant communication scenarios and reduce the offline training time, we train it for multiple SNRs within a complete training process. We select three SNRs with an appropriate spacing of \(2\) dB as a training SNR set to avoid the SNR range becoming too wide, which may result in poor performance at some specific SNR levels.
## III DNN based End-to-End Design
In this section, the end-to-end optimization problem is formulated for our IDEN system, followed by our end-to-end training example for characterizing the overall process.
### _Optimization Problem_
We aim for satisfying the energy harvesting requirement \(P_{\text{del}}\), while minimizing the BER performance. Hence, the optimization problem of our AE architecture can be formulated as
\[\begin{split}\text{(P1):}\ \min_{\boldsymbol{\theta}_{\text{S}},\boldsymbol{\theta}_{\text{D}},\boldsymbol{\theta}_{\text{br}},\rho}\ &-\mathrm{E}\Big[\sum_{n=1}^{N}\Big(b_{n}\log\hat{b}_{n}+(1-b_{n})\log\Big(1-\hat{b}_{n}\Big)\Big)\Big]-\lambda P_{\text{del}}\\ \text{s.t.}\ &\mathrm{E}\big[\|\mathbf{x}\|^{2}\big]=1,\end{split}\]

where the first term is the binary cross-entropy loss governing the BER, the second term rewards the harvested energy \(P_{\text{del}}\) weighted by the bias parameter \(\lambda\geq 0\), and the constraint represents the transmit power budget enforced by the normalization layer of the AE-mapper.
Compared to the isolated training model, the main drawback of the proposed joint training model is its complexity. In terms of the joint training model, the complexity of the end-to-end training is \(\mathcal{O}((T+1)N\log N+Q_{m}^{2}+2MQ_{m}+Q_{d}^{2}+Q_{d}\log_{2}M)\), which is more complex than the isolated training model associated with \(\mathcal{O}((T+1)N\log N)\). Here \(Q_{m}\) and \(Q_{d}\) represent the number of neurons in each layer of the AE-mapper and AE-demapper, respectively.
## IV Simulation Results
In this section, we evaluate the performance of the proposed polar-coded IDEN system over both AWGN and Rayleigh channels. A polar code having \(K=32\) and \(N=64\) is employed. We implement our system in TensorFlow 1.14. The number of training epochs for each training SNR is set to \(E=5\), and in each epoch, the training samples are randomly generated with the mini-batch size being 1000. We use the Adam optimizer with a learning rate of \(\delta=0.005\). The training SNR ranges from 16 dB to 30 dB for Rayleigh channels. The power splitting factor \(\rho\) is set to \(\frac{\sqrt{2}}{2}\) in our system, which means that the data and energy branches output by the power splitter have equal energy. Moreover, the size and activation functions of the layers in each DNN based module are summarized in Table II. The AE-mapper and the AE-demapper rely on fully connected neural networks, where the activation functions for the hidden layers are either the ReLU function or the Linear function. By contrast, the BP decoder is formed by a non-fully connected neural network, whose number of layers depends on the number of BP iterations.
We first investigate the constellation output by the AE-mapper. Fig. 5 shows the substantial difference between the conventional \(8\)-PSK constellation and the output of our AE-mapper having \(M=8\) at different values of the bias parameter \(\lambda\) over AWGN channels. The constellation points in Fig. 5(a) are uniformly distributed on the circumference of a circle. The Euclidean distance between adjacent constellation points is limited at a given power, which determines the BER performance. By contrast, the constellation points output by the AE-mapper have unequal distances, as shown in Fig. 5(b) and (c), which depend on the parameter \(\lambda\). Moreover, the constellation associated with \(\lambda=0.05\) in Fig. 5(c) is different from that of \(\lambda=0\) in Fig. 5(b). This demonstrates that the data vs. energy trade-off of the IDEN receiver affects the optimal shape of the AE-mapper's output constellation. This is because we take \(\lambda\) into consideration while designing the loss function. As \(\lambda\) increases, the WET part dominates the loss function, and hence the DNNs are mainly trained to improve the energy harvesting performance. The constellation of Fig. 5(b) is constructed for exclusively optimizing the WIT performance, while that of Fig. 5(c) strikes a compromise between the WIT and WET performance, which ensures that the amplitude of the transmitted phasor is large enough. Therefore, the constellation of Fig. 5(c) achieves a better WET performance than that of Fig. 5(b).
Let us now compare the BER and the energy harvesting performance of the proposed DNN-aided IDEN system to that of the conventional \(8\)-PSK and BP decoding. These simulations are carried out using \(M=8\) and \(\lambda=0.01\) for a Rayleigh channel and either \(1\) or \(3\) BP iterations. Observe from Fig. 6(a) that in the Rayleigh channel, the BER performance of both systems improves as we increase the number of iterations. However, the improvement attained by the DNN-aided system is more significant than that of its conventional counterpart. This explicitly demonstrates the advantage of our DNN-aided scaled BP decoder, where the scaling parameters \(\alpha_{i,j}^{(t)}\) and \(\beta_{i,j}^{(t)}\) of the BP decoder achieve near-optimality after training for just a few epochs. The simulation results demonstrate that our IDEN system performs well in practical propagation environments in the face of both fading and noise.
To explore the impact of WET on WIT, the DNN-aided IDEN system having different data vs. energy demands \(\lambda\) over the Rayleigh channel is investigated in Fig. 6(b), where we have \(M=8\) and \(3\) BP iterations. Observe from Fig. 6(b) that upon increasing \(\lambda\) of the IDEN receiver in the DNN-aided system, the energy curve is significantly shifted upward and the BER performance deteriorates, as expected. By contrast, since the 8-PSK constellation is fixed and the conventional design does not strike a tradeoff between the WIT and WET performance, the IDEN performance of the conventional system remains unchanged. Hence, it cannot achieve a satisfactory WET performance. The simulation results demonstrate the superiority of the DNN-aided system in terms of WET (e.g., the harvested energy increases by \(2.5\times 10^{-3}\) mW when \(\lambda\) increases from 0 to 0.25 at SNR \(=14\) dB). Moreover, we can control the tradeoff between WIT and WET by adjusting the parameter \(\lambda\) depending on the near-instantaneous demands. Furthermore, the marginal gain in harvested energy diminishes as \(\lambda\) increases, which is in line with the non-linear relationship between the input RF power \(P_{\text{in}}\) and the output DC power \(P_{\text{del}}\).

Fig. 6: BER and energy harvesting performance of IDEN system over Rayleigh channel: (a) Performance of IDEN system and conventional system with \(\lambda=0.01\), the noise power \(P_{noise}=0.17\) dBm under different iterations; (b) Performance of the IDEN system with iteration=3 and different \(\lambda\).
Given that 5G predominantly relies on QAM rather than PSK for modulation, we compare the BER performance of our proposed DNN-aided IDEN system with that of traditional modulation schemes, e.g., \(4\)-QAM, \(8\)-QAM and \(16\)-QAM, to provide a more comprehensive comparison. The simulations were carried out using \(\lambda=0.01\) for a Rayleigh channel and 3 BP iterations. As shown in Fig. 7, our AE-aided modulation scheme outperforms its corresponding traditional counterparts at the same modulation order \(M\). Due to the adaptability of the AE, which can adjust its trainable parameters according to the time-variant channel conditions, the advantage of our IDEN system is clearly demonstrated.
## V Conclusions
A DNN-aided polar-coded IDEN system was proposed, which replaces the conventional functional modules by DNNs and characterises the whole system as an AE. All the DNNs can be trained in an end-to-end manner for the sake of jointly optimising the WET and the WIT performance. Our simulation results conducted in both AWGN and Rayleigh channels demonstrate the superiority of our data-driven end-to-end design over its conventional model-based counterpart in terms of both the BER and the energy harvesting performance.
|
2308.05322 | DegUIL: Degree-aware Graph Neural Networks for Long-tailed User Identity
Linkage | User identity linkage (UIL), matching accounts of a person on different
social networks, is a fundamental task in cross-network data mining. Recent
works have achieved promising results by exploiting graph neural networks
(GNNs) to capture network structure. However, they rarely analyze the realistic
node-level bottlenecks that hinder UIL's performance. First, node degrees in a
graph vary widely and are long-tailed. A significant fraction of tail nodes
with small degrees are underrepresented due to limited structural information,
degrading linkage performance seriously. The second bottleneck usually
overlooked is super head nodes. It is commonly accepted that head nodes perform
well. However, we find that some of them with super high degrees also have
difficulty aligning counterparts, due to noise introduced by the randomness of
following friends in real-world social graphs. In pursuit of learning ideal
representations for these two groups of nodes, this paper proposes a
degree-aware model named DegUIL to narrow the degree gap. To this end, our
model complements missing neighborhoods for tail nodes and discards redundant
structural information for super head nodes in embeddings respectively.
Specifically, the neighboring bias is predicted and corrected locally by two
modules, which are trained using the knowledge from structurally adequate head
nodes. As a result, ideal neighborhoods are obtained for meaningful aggregation
in GNNs. Extensive experiments demonstrate the superiority of our model. Our
data and code can be found at https://github.com/Longmeix/DegUIL. | Meixiu Long, Siyuan Chen, Xin Du, Jiahai Wang | 2023-08-10T03:48:18Z | http://arxiv.org/abs/2308.05322v1 | # DegUIL: Degree-aware Graph Neural Networks for Long-tailed User Identity Linkage
###### Abstract
User identity linkage (UIL), matching accounts of a person on different social networks, is a fundamental task in cross-network data mining. Recent works have achieved promising results by exploiting graph neural networks (GNNs) to capture network structure. However, they rarely analyze the realistic node-level bottlenecks that hinder UIL's performance. First, node degrees in a graph vary widely and are long-tailed. A significant fraction of _tail nodes_ with small degrees are underrepresented due to limited structural information, degrading linkage performance seriously. The second bottleneck usually overlooked is _super head nodes_. It is commonly accepted that head nodes perform well. However, we find that some of them with super high degrees also have difficulty aligning counterparts, due to noise introduced by the randomness of following friends in real-world social graphs. In pursuit of learning ideal representations for these two groups of nodes, this paper proposes a degree-aware model named DegUIL to narrow the degree gap. To this end, our model complements missing neighborhoods for tail nodes and discards redundant structural information for super head nodes in embeddings respectively. Specifically, the neighboring bias is predicted and corrected locally by two modules, which are trained using the knowledge from structurally adequate head nodes. As a result, ideal neighborhoods are obtained for meaningful aggregation in GNNs. Extensive experiments demonstrate the superiority of our model. Our data and code can be found at [https://github.com/Longmeix/DegUIL](https://github.com/Longmeix/DegUIL).
Keywords:User identity linkage Long-tailed graph representation learning Graph neural networks.
## 1 Introduction
To enjoy diverse types of services, people tend to join multiple social media sites at the same time. Generally, the identities of a person on various social platforms have underlying connections, which triggers research interest in user identity linkage (UIL). This task aims to link identities belonging to the same natural person across distinct social networks. As an information fusion task, UIL has enormous practical value in many network data fusion and mining applications, such as cross-platform recommendation [8, 14], etc.
To date, a corpus of literature has emerged to tackle the UIL problem. Earlier approaches [31, 22] aligned users by comparing account profiles such as usernames or post contents. However, such auxiliary information is becoming less accessible and inconsistent due to increased privacy concerns. With the advent of graph neural networks (GNNs), research attention related to this problem has been shifted to network-structured data. Although structure-based methods [25, 15, 2] have achieved substantial progress, they rarely doubt whether social networks provide reliable and adequate information for each node.
Realistic Problems.In reality, however, social networks are always full of noise and provide scarce structural information, especially in cold-start scenarios with lots of new users. There are three problems that cannot be ignored.
(1) **An inherent structural gap exists among nodes.** The number of neighbors varies from user to user in many social networks, and approximately follows a long-tailed distribution, as shown in Fig.1(a). However, existing approaches apply the same learning strategy to all nodes despite their diverse degrees, which hinders the overall linkage performance. (2) **The limited neighborhoods of tail nodes hinder the linkage performance.** The performance of structure-aware UIL methods heavily depends on the observed neighborhood. Unfortunately, a significant fraction of low-degree nodes, known as _tail nodes_, connect to few neighbors. In the absence of sufficient structural information, the embeddings of these tail nodes may be unsatisfactory or biased, resulting in inferior performance, as demonstrated in Fig.1(b). (3) **Noise hidden in super head nodes exacerbates the quality of representation.** According to the first-order proximity [26], UIL works typically assume that friends have similar
Figure 1: A motivation example on the Foursquare-Twitter dataset with PALE [20]. (a) illustrates the node degree distribution of the Foursquare network, with a large proportion of nodes below 10 degrees. (b) presents PALE’s performance by the degrees of test nodes when 50% anchors are used for training. Low-degree nodes (\(0,5\)] and super high-degree nodes (\(200,522\)) perform worse than the others, indicating these two groups of nodes are the major bottleneck of UIL.
interests. However, the random nature of users' behavior in following friends is unavoidable [17]. Due to this, fraudulent or meaningless edges are hidden in a graph unnoticeably, especially around users with thousands of friends, who are called _super head nodes_ in this paper. Small structural noise can easily propagate to the entire graph, thereby affecting the embeddings of many other nodes.
All of these realistic issues motivate us to formulate a novel setting for user identity linkage, aimed at improving the linkage performance of tail nodes, which are the most vulnerable and dominant group. In other words, this paper investigates the following research problem: **how can we effectively link identities for socially-inactive users in a noisy graph?**
Challenges and Our ApproachTo obtain more competitive embeddings for tail nodes, we need to address three core issues, i.e. data gap, the absence of neighboring information, and noise-filled graphs, which present three challenges.
First, addressing absent neighborhoods poses a dilemma: _tail nodes have no additional information but few neighbors._ This is especially severe if only network structures are available, without accessing additional side information such as profiles or posts on a platform. Secondly, to defend against the noise in networks, an intuitive idea is to delete fake edges or reduce their negative impacts. However, _how can noise be eliminated while preserving the intrinsic graph structure?_ Social networks are full of complicated relationships, making it difficult to discern which edges should be discarded. The above two issues lead to the third challenge: _each node owns both a unique locality and a generality_, which means that bias should be locally corrected without losing the common knowledge across nodes.
To address these challenges, this paper proposes a degree-aware user identity linkage method named DegUIL to improve the matching of tail identities that account for the majority. More concretely, to address the first and second challenges, we utilize the ideal neighborhood knowledge of head nodes to train two modules. They complement potential local contexts for tail nodes and remove redundant neighborhoods of super head nodes in embeddings. Due to this, degree bias is mitigated and their observed neighborhoods are corrected for meaningful aggregation in each GNN layer, thereby improving the quality of node embeddings. For the third challenge, two shared vectors are employed across the graph, which adapt to the local context of each node without losing generality.
ContributionsTo summarize, our main contributions are three-fold:
* **Problem**: This paper highlights that the performance bottlenecks of user identity linkage arise not only from tail nodes but also from super head nodes. The observation motivates us to explore the realistic long-tailed UIL.
* **Algorithm**: A degree-aware model is proposed to tackle the above two issues, in pursuit of learning high-quality node embeddings for tail nodes' alignment. Our DegUIL corrects the neighborhood bias of the two groups of nodes and thus narrows the degree gap without additional attributes. This strategy brings a novel perspective to the long-tailed UIL problem.
* **Evaluations**: Extensive experiments demonstrate that our model is superior and has significant advantages in dealing with complex networks.
## 2 Related Work
Structure-based UIL Methods.Structure-based methods have become increasingly promising in tackling the UIL problem. Most of them are composed of two major phases: feature extraction and identity matching. Recently, graph neural networks have been well extended into the UIL task [2, 3, 7, 9, 13, 33] and have become mainstream, owing to their powerful capabilities in extracting graph data. For instance, dName [33] learns a proximity-preserving model locally by graph convolutional networks. As simple topology information may be insufficient, MGCN [2] considers convolutions on both local and hypergraph network structures. While many works neglect topological differences such as low-degree nodes, whose small neighborhood impedes the advance of GNN-based approaches. Some recent works in entity alignment are devoted to handling the long-tailed issue by supplementing entity names [29, 30], or by preventing entities with similar degrees from clustering into the same region of embedded space [23].
However, we have not seen a method that rectifies structural bias and narrows the degree gap for the realistic UIL task. Different from the existing approaches, our model is dedicated to obtaining high-quality tail nodes' embeddings when no additional side information is available.
Other Long-tailed Problems.The long-tailed problem has been studied in many fields [11, 4], but most of the findings cannot be directly applied to the UIL problem due to differences in problem settings. Two closely related works are Tail-GNN [18] and meta-tail2vec [19], which refine feature vectors of tail nodes by transferring the prior knowledge gained from ideal head nodes, leading to a significant improvement in node classification performance. Nevertheless, we observe that not all head nodes are surrounded by ideal neighborhoods in social networks. Structural noise exists in some very high-degree nodes and impairs performance, as seen in Fig.1(b). Therefore, our paper mitigates the noise issue of super head nodes to improve the linkage performance of tail nodes.
## 3 Preliminaries
### Problem Formulation
This paper regards a social network as an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) is the set of vertices (user identities), \(\mathcal{E}=\{e_{ij}=(v_{i},v_{j})\}\subseteq\mathcal{V}\times\mathcal{V}\) represents the edge set (social connections between users). Each edge \(e_{ij}\) is associated with a weight \(a_{ij}\in\mathbb{R}\), and \(a_{ij}>0\) denotes that node \(v_{i}\) and \(v_{j}\) are connected, otherwise \(a_{ij}=0\). Here \(\mathbf{A}=[a_{ij}]\in\mathbb{R}^{N\times N}\) is a symmetric adjacency matrix. \(\mathbf{X}\in\mathbb{R}^{N\times d}\) is a feature matrix with \(\mathbf{x}_{i}\) representing the \(d\)-dimensional feature vector for node \(v_{i}\). Now our problems are formally defined as below.
Definition 1 (Super Head Nodes and Tail Nodes): For a node \(v_{i}\in\mathcal{V}\), let \(\mathcal{N}_{i}\) denote the set of first-order neighbors (neighborhood), and its size \(|\mathcal{N}_{i}|\) is the degree of \(v_{i}\). Tail nodes have a small degree not exceeding some threshold \(D\)
i.e. \(\mathcal{V}_{tail}\ =\{v_{i}:|\mathcal{N}_{i}|\leq D\}\). Nodes with a degree greater than \(M\) are super head nodes, i.e. \(\mathcal{V}_{super}=\{v_{i}:|\mathcal{N}_{i}|>M\}\). The remaining nodes are called head nodes, i.e. \(\mathcal{V}_{head}\ =\{v_{i}:D<|\mathcal{N}_{i}|\leq M\}\). Apparently, \(\mathcal{V}_{tail}\), \(\mathcal{V}_{super}\) and \(\mathcal{V}_{head}\) are pairwise disjoint.
Definition 2 (User Identity Linkage Aimed at Tail Nodes): Given two social networks \(\mathcal{G}^{1}\), \(\mathcal{G}^{2}\), and a collection of observed anchor links as inputs, our goal is to identify the unobserved corresponding anchors of tail nodes. Ideally, the matched node should be ranked as top as possible in predicted top-\(k\) candidates.
### Graph Neural Networks
A graph neural network with multiple layers transforms the raw node features to another Euclidean space as output. Under the message-passing mechanism, the initial features of any two nodes can affect each other even if they are far away, along with the network going deeper. The input features to the \(l\)-th layer can be represented by a set of vectors \(\mathbf{H}^{l}=\left\{\mathbf{h}_{1}^{l},...,\mathbf{h}_{N}^{l}\right\}\), where \(\mathbf{h}_{i}^{l}\in\mathbb{R}^{d_{l}}\) is \(v_{i}\)'s representation in the \(l\)-th layer. Particularly, \(\mathbf{H}^{0}=\mathbf{X}\) is in the input layer. The output node features of the (\(l\)+1)-th layer are generated as:
\[\mathbf{h}_{i}^{l+1}=\text{Agg}\left(\mathbf{h}_{i}^{l},\left\{\mathbf{h}_{k} ^{l}:k\in\mathcal{N}_{i}\right\};\theta^{l+1}\right) \tag{1}\]
where \(\text{Agg}\left(\cdot\right)\) parameterized by \(\theta^{l+1}\), denotes an aggregation function such as mean-pooling, generating new node features from the previous one and messages from first-order neighbors. Most GNNs [12, 28] follow the above definition.
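As an illustration of Eq. (1), the following NumPy sketch implements one layer with mean-pooling aggregation and a ReLU non-linearity; the weight matrix and toy graph are assumptions for demonstration only.

```python
import numpy as np

def gnn_layer(H, A, W):
    """One layer in the spirit of Eq. (1): every node mean-pools its own
    features together with its neighbors' and applies a transform W + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees (with self-loop)
    return np.maximum((A_hat / deg) @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H0 = rng.standard_normal((3, 4))            # H^0 = X
print(gnn_layer(H0, A, rng.standard_normal((4, 4))))
```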
## 4 The Proposed Framework: DegUIL
DegUIL aims to learn high-quality embeddings for tail nodes and super head nodes as a way to enhance linkage performance. Its overall framework is illustrated in Fig.2. As shown in Fig.2(b), we train two predictors named _absent neighborhood predictor_ and _noisy neighborhood remover_ to predict the neighborhood bias of these two groups of nodes (Section 4.1-4.2). As a result, tail nodes are enriched by complementing potential neighboring data, and super head nodes are refined by removing noise adaptively, thereby supporting meaningful aggregation (Section 4.3). Finally, predictors and weight-sharing GNNs are jointly optimized by the task loss and several auxiliary constraints (Section 4.4), for matching identities effectively in Fig.2(c). The target node with the highest similarity to a source anchor node is returned as its alignment result.
### Uncovering Absent Neighborhood
Neighboring relations connected with tail nodes are relatively few, resulting in biased representations and further hindering linkage results. To solve this problem, we propose an _absent neighborhood predictor_ to predict the missing information in their structure, which facilitates subsequent aggregation in each GNN layer. It is trained by exploiting the structurally rich prior learned from head nodes. This component enriches the structural information of tail nodes to obtain better representations as ideal as head nodes.
#### 3.0.1 Absent Neighborhood Information for Tail Nodes.
Tail nodes lack structural data owing to a variety of reasons, such as being new users on a social platform. Relationships in networks change dynamically, in other words, tail users may interact with other users in the near future, which can be considered as potential relations. Thus, predicting and completing the latent structural information for tail nodes is reasonable.
More concretely, for a tail node \(v_{i}\in\mathcal{V}_{\text{tail}}\), the absent information \(\mathbf{m}_{i}\) measures the gap of feature vectors between its observed neighborhood \(\mathcal{N}_{i}\) and _ideal neighborhood_\(\mathcal{N}_{i}^{*}\), that is,
\[\mathbf{m}_{i}=\mathbf{h}_{\mathcal{N}_{i}^{*}}-\mathbf{h}_{\mathcal{N}_{i}}. \tag{2}\]
The ideal representation \(\mathbf{h}_{\mathcal{N}_{i}^{*}}\) theoretically contains not only the observed aggregated information from local neighborhoods but also friends that would have been associated with \(v_{i}\). To construct \(\mathbf{h}_{\mathcal{N}_{i}^{*}}\), we train an absent neighborhood predictor \(f_{m}\) to uncover the missing features caused by limited local contexts. That is, the ideal neighborhood representation of \(v_{i}\in\mathcal{V}_{\text{tail}}\) can be predicted as \(\mathbf{h}_{\mathcal{N}_{i}^{*}}=\mathbf{h}_{\mathcal{N}_{i}}+\mathbf{m}_{i}\). Empirically \(\mathbf{h}_{\mathcal{N}_{i}}\) is represented by a mean-pooling over all nodes in the observed neighborhood, i.e., \(\mathbf{h}_{\mathcal{N}_{i}}=\text{MEAN}(\{\mathbf{h}_{k}:v_{k}\in\mathcal{N} _{i}\})\). Now the problem turns into modeling the potential information in a neighborhood.
#### 3.0.2 Training Absent Neighborhood Predictor.
The prediction model is learned using the local contexts of head nodes. Let \(\mathbf{m}_{i}^{l}\) be the absent neighboring information of node \(v_{i}\) in the \(l\)-th GNN layer. For a head node \(v_{j}\), its observed neighborhood is regarded as complete and ideal, thus no information is missing from its neighborhood. In other words, the representation of \(v_{j}\)'s ideal neighborhood can be approximated by \(\mathbf{h}_{\mathcal{N}_{j}}^{l}\), the representation of the observed neighborhood \(\mathcal{N}_{j}\) in the same layer. Therefore, we train the prediction model \(f_{m}\) by pushing the predicted missing neighborhood information of \(v_{j}\) close to zero, i.e. \(\|\mathbf{m}_{j}^{l}\|_{2}\approx 0\). It will be an auxiliary loss term further discussed in Section 4.4.

Figure 2: Overview of DegUIL. (a) Inputting two networks; (b) Complementing potential information \(m_{2}\) for tail nodes and removing redundant data \(r_{0}\) for super head nodes to correct their observed neighborhood to be ideal, which improves their representations during aggregation; (c) Mapping two embeddings into a unified space and then matching identities.
However, the training scheme has a major flaw: the abundance of head nodes in training differs from tail nodes in testing. To tackle this problem, _forged tail nodes_ are supplemented via edge dropout on head nodes. On each head node, neighbors (\(|\mathcal{N}_{i}|\leq D\)) are randomly sampled to mimic the real tail nodes. For example, in Fig.2(b), \(v_{1}^{\prime}\) is a forged tail node generated from the head node \(v_{1}\).
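A possible sketch of this forging step is shown below; the neighbor set, threshold \(D\) and sampling policy are illustrative assumptions.

```python
import random

def forge_tail_node(neighbors, D, rng=random.Random(0)):
    """Mimic a tail node from a head node: keep at most D of its observed
    neighbors via random edge dropout (D is the tail-degree threshold)."""
    k = rng.randint(1, min(D, len(neighbors)))
    return set(rng.sample(sorted(neighbors), k))

head_neighbors = {2, 5, 7, 9, 11, 14}       # a head node's neighborhood
print(forge_tail_node(head_neighbors, D=5)) # e.g. a forged tail node v_1'
```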
Toward ideal tail nodes representations, a key idea is to uncover the latent information \(\mathbf{m}_{i}^{l}\) on tail nodes (forged or real), which will be predicted adaptively in Section 4.3 to correct their observed neighborhoods that may be biased.
### Removing Noisy Neighborhood
As the first step of UIL, learning effective representations for users is crucial. In contrast to tail nodes, super head nodes are structurally rich and even have redundant edges connecting them, since social networks are complex and unreliable. Perturbed neighbors may cause error propagation through the network that degrades the final performance [5]. To defend against this damage and further enhance tail node alignment, we design a _noisy neighborhood remover_.
To be specific, given a super head node \(v_{i}\in\mathcal{V}_{\text{super}}\), \(\mathbf{r}_{i}\) denotes the embedding redundancy between its observed neighborhood \(\mathcal{N}_{i}\) and ideal one \(\mathcal{N}_{i}^{*}\), i.e.,
\[\mathbf{r}_{i}=\mathbf{h}_{\mathcal{N}_{i}}-\mathbf{h}_{\mathcal{N}_{i}^{*}}. \tag{3}\]
Our module removes the neighboring bias \(\mathbf{r}_{i}^{l}\) in each layer \(l\) to mitigate the error cascade in message aggregation of GNNs. As a result, the ideal neighborhood representation of \(v_{i}\) can be obtained by \(\mathbf{h}_{\mathcal{N}_{i}^{*}}^{l}=\mathbf{h}_{\mathcal{N}_{i}}^{l}-\mathbf{ r}_{i}^{l}\). Similar to the first module, the absent neighborhood predictor, we employ a function \(f_{r}\) to predict \(\mathbf{r}_{i}^{l}\).
To refine an ideal graph, a natural strategy is to eliminate adversarial noise. Many works [10, 27, 34] delete perturbed edges by graph structure learning or graph defense techniques, but such techniques act on a single network rather than cross-network user matching. Besides, mistakenly deleting a useful edge may lead to cascading defects. Instead, we refine node embeddings directly to distill local structure, which eliminates noise without destroying scarce but valuable relations on tail nodes. We locally predict redundancy in the following section.
### Adaptive Aggregation
#### 4.3.1 Localization.
The absent or redundant neighborhood information varies across nodes, hence necessitating fine-grained node-wise adaptation. To capture the unique locality of each node while simultaneously preserving generality across the graph, two globally shared vectors \(\mathbf{m}\) and \(\mathbf{r}\) (per layer) are introduced.
Formally, for each node \(v_{i}\) in the \(l\)-th layer of DegUIL, a locality-aware missing vector \(\mathbf{m}_{i}\in\mathbb{R}^{d_{l}}\) and a redundant vector \(\mathbf{r}_{i}\in\mathbb{R}^{d_{l}}\) are customized according to its local context. Specifically, the local context information is defined as the
concatenation of the node representation with its local observed neighborhood representation, i.e. \(\mathbf{c}_{i}^{l}=\left[\mathbf{h}_{i}^{l},\mathbf{h}_{\mathcal{N}_{i}}^{l}\right]\). Then, the absent neighborhood predictor \(f_{m}\) and the noisy neighborhood remover \(f_{r}\) output localized structural information \(\mathbf{m}_{i}^{l}\) and \(\mathbf{r}_{i}^{l}\), respectively. That is,
\[\mathbf{m}_{i}^{l} =f_{m}\left(\mathbf{c}_{i}^{l},\mathbf{m}^{l};\theta_{m}^{l} \right)=\boldsymbol{\gamma}_{i}^{l}\odot\mathbf{m}^{l}+\boldsymbol{\alpha}_{i} ^{l}, \tag{4}\] \[\mathbf{r}_{i}^{l} =f_{r}\left(\mathbf{c}_{i}^{l},\mathbf{r}^{l};\theta_{r}^{l} \right)=\boldsymbol{\gamma}_{i}^{l}\odot\mathbf{r}^{l}+\boldsymbol{\beta}_{i} ^{l}, \tag{5}\]
where \(\theta_{m}^{l}\) and \(\theta_{r}^{l}\) are the parameters of \(f_{m}\) and \(f_{r}\) in the \(l\)-th layer. Element-wise scaling (\(\odot\)) and shifting (\(+\)) operations are used to implement the personalization function for each node. The scaling vector \(\boldsymbol{\gamma}_{i}^{l}\in\mathbb{R}^{d_{l}}\) can be calculated as \(\boldsymbol{\gamma}_{i}^{l}=\mathbf{c}_{i}^{l}\mathbf{W}_{\gamma}^{l}\) with a learnable matrix \(\mathbf{W}_{\gamma}^{l}\in\mathbb{R}^{2d_{l}\times d_{l}}\). Shift vectors \(\boldsymbol{\alpha}_{i}^{l}\) and \(\boldsymbol{\beta}_{i}^{l}\) are trained using two fully connected networks, respectively.
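A possible PyTorch sketch of this node-wise localization is given below. The layer superscript is dropped, and the two-layer MLP used for the shift vector is an assumption — the text only states that the shifts are produced by fully connected networks.

```python
import torch
import torch.nn as nn

class Localizer(nn.Module):
    """Node-wise localization of a globally shared vector (Eqs. 4-5).

    The same module structure serves both f_m (shared vector m) and
    f_r (shared vector r).
    """
    def __init__(self, d):
        super().__init__()
        self.shared = nn.Parameter(torch.zeros(d))       # m^l or r^l
        self.W_gamma = nn.Linear(2 * d, d, bias=False)   # scaling gamma_i
        # Assumption: a two-layer MLP produces the shift (alpha_i or beta_i).
        self.shift = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                   nn.Linear(d, d))

    def forward(self, h, h_nbr):                         # both (N, d)
        c = torch.cat([h, h_nbr], dim=-1)                # local context c_i
        return self.W_gamma(c) * self.shared + self.shift(c)
```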
#### 4.3.2 Neighborhood Aggregation.
Our discussion now turns to neighborhood aggregation related to super head nodes and tail nodes. The neighborhoods of head nodes are taken as ideal to follow the standard GNNs aggregation in Eq.(1). In contrast, the embedding vectors of tail nodes are underrepresented and those of super head nodes tend to be noisy. Thankfully, our DegUIL complements potential neighboring data for the former and removes local noise for the latter.
The corrected neighborhoods of these two groups of nodes are ideal for key aggregation in GNN-based methods. In the (\(l\)+1)-th layer, the standard neighborhood aggregation in Eq.(1) is adjusted as follows:
\[\mathbf{h}_{i}^{l+1}=\text{Agg}\left(\mathbf{h}_{i}^{l},\left\{\mathbf{h}_{k} ^{l}:v_{k}\in\mathcal{N}_{i}\right\}\cup\left\{I\left(v_{i}\in\mathcal{V}_{ \text{tail}}\right)\mathbf{m}_{i}^{l}-I\left(v_{i}\in\mathcal{V}_{\text{super} }\right)\mathbf{r}_{i}^{l}\right\};\theta^{l+1}\right), \tag{6}\]
where \(I(\cdot)\) is a 0/1 indicator function based on the truth value of its argument.
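For concreteness, the following sketch instantiates Eq. (6) with plain mean pooling; in DegUIL the correction term is injected into GCN/GAT aggregators instead, and the equal-weight combination of self and neighborhood representations is an illustrative assumption.

```python
import torch

def corrected_aggregate(h_i, h_nbrs, m_i, r_i, is_tail, is_super):
    """Mean-pooling instance of Eq. (6).

    h_i: (d,) self embedding; h_nbrs: (k, d) neighbor embeddings;
    m_i / r_i: (d,) predicted absent / redundant information;
    at most one of is_tail / is_super is True.
    """
    correction = float(is_tail) * m_i - float(is_super) * r_i
    msgs = torch.cat([h_nbrs, correction.unsqueeze(0)], dim=0)
    return 0.5 * (h_i + msgs.mean(dim=0))
```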
#### 4.3.3 Global and Local Aggregation for UIL.
This paper employs two different aggregation strategies to maintain global common knowledge and local structure:
\[\mathbf{Z}=\left[\text{Agg}_{\text{GA}}\left(\mathbf{X},\mathbf{A}\right), \text{Agg}_{\text{LA}}\left(\mathbf{X},\mathbf{A}\right)\right]. \tag{7}\]
Here, the global structure aggregator \(\text{Agg}_{\text{GA}}\left(\cdot\right)\) observes the whole network by graph convolutional networks (GCN)[12]. The local structure aggregator \(\text{Agg}_{\text{LA}}\left(\cdot\right)\) acquires specific patterns of nodes' 1-hop neighborhood, implemented by graph attention networks (GAT)[28]. Both of them adopt a two-layer architecture in our method, i.e., \(\ell=2\). By stacking aggregation layers, larger area patterns are observed. The final representation \(\mathbf{Z}\) is obtained by concatenating the outputs of aggregators. To preserve the consistency of cross-network node pairs in the embedding space, we apply a shared weight GNN architecture for \(\mathcal{G}^{1}\) and \(\mathcal{G}^{2}\). In other words, GCN and GAT embed nodes from both the source network and target network via shared learnable parameters.
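A sketch of the weight-sharing dual aggregator using PyTorch Geometric follows. The 64-dimensional hidden layers follow the experimental setup; the per-aggregator output dimension and the ReLU nonlinearity are assumptions, and the bias-correction terms of Eq. (6) are omitted for brevity.

```python
import torch
from torch_geometric.nn import GCNConv, GATConv

class DualAggregator(torch.nn.Module):
    """Two-layer global (GCN) and local (GAT) aggregators, Eq. (7).

    Applying the same instance to both graphs realizes the weight
    sharing between the source and target networks.
    """
    def __init__(self, d_in=256, d_hid=64, d_out=128):   # d_out assumed
        super().__init__()
        self.gcn1, self.gcn2 = GCNConv(d_in, d_hid), GCNConv(d_hid, d_out)
        self.gat1, self.gat2 = GATConv(d_in, d_hid), GATConv(d_hid, d_out)

    def forward(self, x, edge_index):
        zg = self.gcn2(torch.relu(self.gcn1(x, edge_index)), edge_index)
        zl = self.gat2(torch.relu(self.gat1(x, edge_index)), edge_index)
        return torch.cat([zg, zl], dim=-1)   # Z = [Agg_GA, Agg_LA]
```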
### Training Loss
The whole training process is governed by three objective terms: 1) the topology loss; 2) the cross-network matching loss; and 3) the prediction constraints of Eq.(2) and Eq.(3). They are described as follows.
Topology Loss. Global topology is preserved by minimizing the weighted difference on all edges between the input and reconstructed networks, i.e.,
\[\mathcal{L}_{s}=\sum_{i=1}^{N}\sum_{j=1}^{N}b_{ij}\left(a_{ij}-s_{ij}\right)^{2} =\|(\mathbf{A}-\mathbf{S})\odot\mathbf{B}\|_{F}^{2}. \tag{8}\]
Here, \(\mathbf{A}\) represents the adjacency matrix. \(\mathbf{S}=[s_{ij}]\) is the new connection matrix where each element is \(s_{ij}=\text{Sim}(\mathbf{z}_{i},\mathbf{z}_{j})\), with \(\text{Sim}(\cdot,\cdot)\) being the similarity function (cosine similarity here). \(s_{ij}\) ranges from \(-1\) to \(1\); a larger value indicates a stronger social connection between \(v_{i}\) and \(v_{j}\). Moreover, the sampling matrix \(\mathbf{B}=[b_{ij}]\in\{0,1\}^{N\times N}\) is used to balance the number of connected and unconnected edges. We adopt simple uniform negative sampling [24] here, though it could be replaced with more advanced sampling strategies [21].
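A direct tensor implementation of Eq. (8) might look as follows, assuming the sampling mask \(\mathbf{B}\) has already been built by negative sampling.

```python
import torch
import torch.nn.functional as F

def topology_loss(Z, A, B):
    """Eq. (8): ||(A - S) * B||_F^2 with S_ij = cosine(z_i, z_j).

    Z: (N, d) node embeddings; A: (N, N) adjacency matrix;
    B: (N, N) 0/1 sampling mask balancing edges and sampled non-edges.
    """
    Z_norm = F.normalize(Z, dim=-1)
    S = Z_norm @ Z_norm.t()      # pairwise cosine similarity in [-1, 1]
    return (((A - S) * B) ** 2).sum()
```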
Cross-network Matching Loss. Existing UIL models [20] learn desirable mapping functions \(f\) to unify the embeddings of different graphs. Formally, given a matched pair \((v_{i}^{1},v_{a}^{2})\) from the set of anchor links \(U_{a}\) with features \((\mathbf{z}_{i}^{1},\mathbf{z}_{a}^{2})\), \(p=5\) unmatched node pairs \((v_{i}^{1},v_{b}^{2})\) are sampled uniformly as negative identity links with features \((\mathbf{z}_{i}^{1},\mathbf{z}_{b}^{2})\). After mapping by functions \(f_{1}\) and \(f_{2}\), the embedding vectors from the source network \(\mathcal{G}^{1}\) and target network \(\mathcal{G}^{2}\) are projected into a common embedding space, i.e. \(o_{i}=f_{1}(z_{i}^{1})\), \(o_{a}=f_{2}(z_{a}^{2})\) and \(o_{b}=f_{2}(z_{b}^{2})\). Let \(t_{ia}=\text{Sim}(o_{i},o_{a})\); the loss is defined as:
\[\mathcal{L}_{t}=\sum_{\left(v_{i}^{1},v_{a}^{2}\right)\in U_{a}}\left(1-t_{ia} \right)^{2}+\sum_{\left(v_{i}^{1},v_{b}^{2}\right)\notin U_{a}}(t_{ib}^{2}+t_ {ab}^{2}). \tag{9}\]
The objective aims to maximize the similarities of anchor links while minimizing the link probabilities of unmatched identities. \(f_{1}\left(\cdot;\theta_{f_{1}}\right)\) and \(f_{2}\left(\cdot;\theta_{f_{2}}\right)\) are implemented by two multi-layer perceptrons (MLPs) with learnable parameters \(\theta_{f}=(\theta_{f_{1}},\theta_{f_{2}})\).
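The per-anchor contribution to Eq. (9) can be sketched as below; looping over all anchors with their \(p\) sampled negatives yields \(\mathcal{L}_{t}\).

```python
import torch.nn.functional as F

def matching_loss(o_i, o_a, o_b):
    """Per-anchor contribution to Eq. (9).

    o_i: (d,) mapped source anchor embedding; o_a: (d,) its matched
    target embedding; o_b: (p, d) mapped embeddings of p negatives.
    """
    t_ia = F.cosine_similarity(o_i, o_a, dim=0)
    t_ib = F.cosine_similarity(o_i.unsqueeze(0), o_b, dim=1)  # (p,)
    t_ab = F.cosine_similarity(o_a.unsqueeze(0), o_b, dim=1)  # (p,)
    return (1 - t_ia) ** 2 + (t_ib ** 2 + t_ab ** 2).sum()
```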
Constraints on Predicted Information. For tail nodes, DegUIL aims to complement rather than refine the neighborhood. In contrast, the neighborhood of super head nodes is refined but not enriched. The local contexts of the remaining nodes are regarded as ideal, with neither absence nor redundancy. Therefore, the predicted missing information for all nodes except tail nodes, and the predicted noisy information for all nodes except super head nodes, should both be close to zero, which can be formulated as:
\[\mathcal{L}_{p}=\sum_{l=1}^{\ell}\left(\sum_{v_{i}\notin\mathcal{V}_{\text{tail} }}\left\|\mathbf{m}_{i}^{l-1}\right\|_{2}^{2}+\sum_{v_{i}\notin\mathcal{V}_{ \text{super}}}\left\|\mathbf{r}_{i}^{l-1}\right\|_{2}^{2}\right). \tag{10}\]
Optimization. For \(g=2\) social networks (\(\mathcal{G}\)), the total loss is a combination of the three terms:
\[\mathcal{L}=\mathcal{L}_{t}+\lambda\sum_{i}^{g}\mathcal{L}_{s}^{\mathcal{G}^{i }}+\mu\sum_{i}^{g}\mathcal{L}_{p}^{\mathcal{G}^{i}}. \tag{11}\]
Hyperparameters \(\lambda\) and \(\mu\) balance the importance of topology and predicted information constraint.
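Combining the terms is then a one-liner; the default \(\lambda\) and \(\mu\) below follow the FT setting reported later in the experimental setup.

```python
def total_loss(L_t, L_s, L_p, lam=0.2, mu=0.001):
    """Eq. (11) for g = 2 networks.

    L_t: matching loss; L_s, L_p: lists with the per-network topology
    and prediction-constraint losses.
    """
    return L_t + lam * sum(L_s) + mu * sum(L_p)
```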
Here we discuss the computational complexity of DegUIL. Let \(N_{\max}=\max\left(\left|\mathcal{V}^{1}\right|,\left|\mathcal{V}^{2}\right|\right)\) denote the maximum number of nodes of the two input graphs. First, we employ node2vec to generate initial features, incurring \(O(N_{\max})\) complexity. Next, our model employs GCN and GAT to learn powerful representations. In each GNN layer \(l\), the overhead involves forging tail nodes, the localization, and the aggregation of absent and redundant information. Forging tail nodes consumes \(O(ND)\) time since we sample up to \(D\) neighbors of a head node to forge a tail node, where \(D\) is the degree threshold of tail nodes. Locally predicting \(\mathbf{m}_{i}^{l}\) in (4) and \(\mathbf{r}_{i}^{l}\) in (5) needs \(O(N\bar{D}d_{l}^{2})\) complexity, where \(d_{l}\) is the dimension of the \(l\)-th layer and \(\bar{D}\) is the average node degree. Aggregating the corrected neighboring information takes \(O(N(\bar{D}+1)d_{l}d_{l-1})\) time. As \(d_{l}\), \(d_{l-1}\) and the number of GNN layers are small constants, when \(\bar{D}\ll N_{\max}\), the complexity of node2vec and our degree-aware GNNs is \(O(N_{\max})\) for the representation learning process. Overall, the time complexity of the proposed DegUIL is \(O(N_{\max})\), i.e., it scales linearly with the number of nodes.
### Characteristics of DegUIL
DegUIL is characterized by the following features. (1) Unlike most UIL methods that apply the same learning approach to all nodes, our method divides nodes into three groups (tail/head/super head nodes) according to their degrees. DegUIL considers neighborhood differences and adopts different neighboring bias correction strategies for them to narrow the structural gap by a node-wise localization technique. (2) DegUIL predicts and complements potential neighboring information of tail nodes directly, which avoids designing an extra neighborhood translation [18] or separates the embedding and refinement processes [19]. It eliminates noisy topology of super head nodes implicitly, preventing valuable edges from being deleted by mistake like some graph structure learning methods [10, 27, 34]. (3) We use weight-sharing GNNs instead of two separate GNNs to preserve cross-network similarity and reduce training parameters.
## 5 Experiments
In this section, we aim to answer the following questions via experiments. **Q1**: How effective is our proposed DegUIL compared with baselines? **Q2**: How does each component of DegUIL contribute to the final results? **Q3**: Is our method compatible with previous data partitions? **Q4**: How much performance does our method improve for nodes in each degree interval?
### Experimental Settings
**Datasets.** Two benchmark datasets are employed for evaluation, as summarized in Table 1. **Foursquare-Twitter** (FT), a widely used real-world dataset in previous literature [15, 16], provides partial anchor nodes for identity linkage. **DBLP17-DBLP19** (DBLP) [1] includes two co-author networks, in which a node represents an author and an edge connects two nodes if they are co-authors of at
least one paper. Common authors across the two networks are used as the ground truth. We define tail links as anchor links that involve a node with degree 5 or less.
To simulate a user cold-start scenario where a large number of nodes are tail nodes, anchor links containing tail nodes are assigned to the testing set, and the remaining anchor links are used for training.
#### 5.1.1 Baselines.
To evaluate the effectiveness of DegUIL, we compare it with three kinds of embedding-based baselines, including a conventional representation learning method (node2vec), state-of-the-art UIL methods and a tail node refinement model (Tail-GNN). The baselines are described as follows.
* **node2vec**[6]: It encodes network topology into a low-dimensional space, whose outputs serve as initial input features to our methods.
* **PALE**[20]: This method learns embeddings and predicts anchor links by maximizing the log-likelihood of observed edges and latent space matching.
* **SEA**[23]: It is a semi-supervised entity alignment method that tries to avoid embedding entities with similar degrees closely, via adversarial training.
* **NeXtAlign**[32]: A semi-supervised network alignment method that achieves a balance between alignment consistency and disparity.
* **Tail-GNN**[18]: The GNN framework refines embeddings of tail nodes with predicted missing neighborhood information. Tail-GCN is compared here.
Note that node2vec and Tail-GNN are not UIL methods, so we equip them with the same matching process and settings as ours for the sake of fair comparison. All code comes from the open-access repositories of the original papers.
#### 5.1.2 Evaluation Metrics.
Following previous works [22, 23, 33], we employ two widely used metrics, Hits-Precision (Hits@\(k\)) and mean reciprocal rank (MRR). \(Hits@k=\frac{1}{N}\sum_{i=1}^{N}\frac{k-(hit(v_{i})-1)}{k}\), where \(hit(v_{i})\) is the rank position of the matched target user in the top-\(k\) candidates. MRR denotes the average reciprocal rank of the ground truth results. Higher metric values indicate better performance.
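Both metrics are straightforward to compute from the 1-based ranks of the true matches; clamping the contribution of matches ranked outside the top-\(k\) to zero is our assumption.

```python
def hits_at_k(ranks, k):
    """Hits-Precision; ranks are 1-based positions of the true matches.

    Assumption: matches ranked outside the top-k contribute zero.
    """
    return sum(max(k - (r - 1), 0) / k for r in ranks) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the ground-truth matches."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```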
#### 5.1.3 Setup and Parameters.
For each method, we set the embedding vector dimension \(d=256\) on all datasets. The initial node feature of our method is generated by node2vec [6]. We set hyperparameter \(\lambda=0.2\) in Eq.(11), \(\mu\) to \(0.001\) and \(0.01\) for FT and DBLP datasets respectively. The dimension of hidden layers in Agg
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Networks & \#Nodes & \#Edges & \#Anchor links & \#Tail links \\ \hline \hline Foursquare & 5313 & 76972 & & \\ Twitter & 5120 & 164919 & 1609 & 443 \\ \hline DBLP17 & 9086 & 51700 & & \\ DBLP19 & 9325 & 47775 & 2832 & 975 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
is 64. Tail nodes' degree is set to be no greater than 5, i.e. \(D=5\), consistent with Tail-GNN. Super head nodes are the top 10% nodes with the highest degree; thus \(M\) is set to {46, 116, 25, 23} for the four networks (Foursquare, Twitter, DBLP17, DBLP19), respectively. The 2-layer MLP network for matching outputs 256-dimensional embeddings, and the dimension of its hidden layers is twice the input length. The optimal hyperparameters for each method are determined either by experiments or by the suggestions in the original papers. All experiments are repeated five times to obtain the average Hits@\(k\) and MRR scores.
### Results
**Overview of Results (Q1).** Comparison results on two UIL datasets are presented in Table 2. From the results, we have the following observations.
- _DegUIL consistently outperforms other baselines._ On the Foursquare-Twitter dataset, DegUIL achieves a remarkable relative improvement of 16%-39% over the best baseline, Tail-GNN. This is empirical evidence that our method is more effective than previous models in boosting linkage accuracy. An exception is the DBLP dataset, where SEA obtains the best Hits@1 and MRR, while DegUIL remains a close runner-up ahead of the other baselines. We infer that SEA's technique of encoding relations benefits learning node representations. Besides, with the same mapping process, node2vec is inferior to the GNN-based Tail-GNN. This demonstrates the power of GNNs in capturing neighboring topology, so mitigating neighborhood bias to further advance GNNs is worthwhile.
- _Degree-aware models perform better than traditional methods._ node2vec and PALE treat all nodes uniformly without considering structural disparities such as node degree. As a result, node representations learned by these two simple methods are unsatisfactory for linking user identities. This highlights the importance of the degree-aware baselines, which achieve more effective results. However, SEA, NeXtAlign, and Tail-GNN are not specifically designed to enhance super head nodes, so their performance still falls short of our model.
- _DegUIL has a greater advantage on complex long-tailed datasets._ Under all evaluation metrics, methods perform worse on the FT dataset than on the
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline Dataset & \multicolumn{4}{c|}{Foursquare-Twitter} & \multicolumn{4}{c}{DBLP17-DBLP19} \\ \hline Metric & Hits@1 & Hits@10 & Hits@30 & MRR & Hits@1 & Hits@10 & Hits@30 & MRR \\ \hline \hline node2vec & 5.43 & 15.08 & 25.49 & 10.93 & 33.18 & 55.10 & 66.52 & 44.17 \\ PALE & 6.00 & 15.77 & 26.48 & 11.51 & 21.28 & 39.78 & 52.04 & 30.94 \\ SEA & 6.93 & 15.89 & 23.94 & 11.80 & **38.62** & 60.13 & 71.01 & **49.27** \\ NeXtAlign & 6.47 & 12.23 & 16.62 & 9.63 & 36.82 & 59.58 & 70.46 & 48.06 \\ Tail-GNN & 6.70 & 17.67 & 28.39 & 12.66 & 36.36 & 56.58 & 67.21 & 46.44 \\ DegUIL & **9.33** & **21.70** & **32.81** & **16.00** & 37.59 & **60.73** & **71.51** & 48.96 \\ \hline DegUIL\({}_{w/o\_AP}\) & 8.11 & 19.39 & 30.39 & 14.30 & 36.26 & 59.29 & 70.32 & 47.67 \\ DegUIL\({}_{w/o\_NR}\) & 8.94 & 20.53 & 31.79 & 15.21 & 37.13 & 59.61 & 70.02 & 48.26 \\ \hline \end{tabular}
\end{table}
Table 2: Overall performance. Best result appears in bold and the second best model is underlined except for ablation variants.
DBLP dataset, despite the former having more known anchor links. One explanation for this discrepancy may be the greater complexity of edge relationships in FT, which makes it challenging to link users in social networks with disparate node degrees. Our model can effectively handle this complex situation, giving it a distinct advantage. Further discussions are in the ablation study.
#### 5.2.1 Ablation Study (Q2).
DegUIL comprises two components: an absent neighborhood predictor (AP) and a noisy neighborhood remover (NR). To evaluate the contribution of each component, we designed two variants of our model. \(\textbf{DegUIL}_{w/o\_AP}\) does not complement the predicted potential neighborhood for learning tail nodes' embeddings. Another variant model \(\textbf{DegUIL}_{w/o\_NR}\) does not eliminate the noise from the local structure of super head nodes.
The results of the ablation study are presented in Table 2, which reveals several conclusions. First, without AP predicting and complementing absent neighborhoods for tail nodes, UIL performance declines by 1.70% and 1.29% in terms of MRR on the FT and DBLP datasets, respectively. This indicates that the limited local context of tail nodes hinders user alignment, and our AP component is proposed as a solution for improving tail node embeddings. Second, removing structural noise from super head nodes also contributes to performance, which supports our motivation that super head nodes are likewise a challenging group of nodes. Notably, the gain of AP is more significant than that of NR on both datasets, suggesting that correcting the neighborhoods of tail nodes offers more substantial alignment benefits. One explanation for this phenomenon is the greater number of tail nodes, compared to super head nodes, which allows them to exert a more considerable influence on the overall performance.
#### 5.2.2 Effect on Dataset with Classic Partition (Q3).
This paper splits datasets in a novel way to mimic a challenging UIL scenario, i.e., an anchor link without tail nodes is assigned to the training set, and to the testing set otherwise. This naturally raises a question: is DegUIL compatible with previous ways of data partitioning, and does it still outperform other baselines under this setting? To answer it, we vary the proportion of labeled anchors for training from 20% to
60% with a step of 10%, and use the rest for testing. Experiments are conducted on the FT dataset with competitive PALE and SEA as comparison methods.
Fig.3 illustrates the Hits@1 and MRR scores. As the training ratio increases, more alignment information is available, enabling all models to discover potential user identities more easily. In most cases, our proposed DegUIL achieves superior performance in both metrics, except when the training data is less than 30%. This exception arises from the difficulty of effectively training the GNNs used in DegUIL when labeled supervision is insufficient. In such scenarios, SEA and PALE show slight superiority thanks to their semi-supervised training or network extension using observed anchor links. In the future, we will consider semi-supervised or self-supervised training to mitigate the problem of data scarcity. With more supervision information, DegUIL consistently and significantly outperforms the other two baselines. This means that our degree-aware method is also applicable and competitive under the previous data partition.
#### 5.2.3 Evaluation by Degree (Q4).
To demonstrate the effectiveness of DegUIL in aligning long-tail entities, we divide the test anchors into multiple groups based on their source node degrees. We compare our method with the simpler PALE and illustrate their MRR results by degree in Fig.4. As hypothesized, low-degree nodes and super high-degree nodes perform worse than normal nodes with adequate local topology information. This experimental evidence shows that drastic disparities in node degrees can lead to unsatisfactory node representations and biased outcomes. Moreover, DegUIL outperforms PALE across all degree groups on both datasets, validating its effectiveness in handling long-tail issues. The improvements are smaller on nodes with fewer than two neighbors, since DegUIL is also constrained by their very limited structural information.
## 6 Conclusion
Commonly, node degrees in a social graph are long-tailed, yet UIL works rarely explore the issue of degree bias. We relate this overlooked distribution to UIL performance and observe that tail nodes and super head nodes hold the key to improving overall performance. This paper defines a realistic problem setting and proposes DegUIL to learn high-quality node embeddings by mitigating degree differences in the embedding process through two localized modules. These modules enrich neighborhood information for tail nodes and refine the local contexts of super head nodes. As a result, node representations are improved thanks to the corrected, ideal neighborhoods. Extensive experiments show that DegUIL significantly surpasses the baselines. In the future, we will consider higher-order neighborhoods and predict structural bias more accurately to enhance our model.
#### 6.0.1 Acknowledgement.
This work is supported by the National Natural Science Foundation of China (62072483), and the Guangdong Basic and Applied Basic Research Foundation (2022A1515011690, 2021A1515012298). |
2306.04590 | Proximity-Informed Calibration for Deep Neural Networks | Confidence calibration is central to providing accurate and interpretable
uncertainty estimates, especially under safety-critical scenarios. However, we
find that existing calibration algorithms often overlook the issue of
*proximity bias*, a phenomenon where models tend to be more overconfident in
low proximity data (i.e., data lying in the sparse region of the data
distribution) compared to high proximity samples, and thus suffer from
inconsistent miscalibration across different proximity samples. We examine the
problem over 504 pretrained ImageNet models and observe that: 1) Proximity bias
exists across a wide variety of model architectures and sizes; 2)
Transformer-based models are relatively more susceptible to proximity bias than
CNN-based models; 3) Proximity bias persists even after performing popular
calibration algorithms like temperature scaling; 4) Models tend to overfit more
heavily on low proximity samples than on high proximity samples. Motivated by
the empirical findings, we propose ProCal, a plug-and-play algorithm with a
theoretical guarantee to adjust sample confidence based on proximity. To
further quantify the effectiveness of calibration algorithms in mitigating
proximity bias, we introduce proximity-informed expected calibration error
(PIECE) with theoretical analysis. We show that ProCal is effective in
addressing proximity bias and improving calibration on balanced, long-tail, and
distribution-shift settings under four metrics over various model
architectures. We believe our findings on proximity bias will guide the
development of *fairer and better-calibrated* models, contributing to the
broader pursuit of trustworthy AI. Our code is available at:
https://github.com/MiaoXiong2320/ProximityBias-Calibration. | Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi | 2023-06-07T16:40:51Z | http://arxiv.org/abs/2306.04590v2 | # Proximity-Informed Calibration for Deep Neural Networks
###### Abstract
Confidence calibration is central to providing accurate and interpretable uncertainty estimates, especially under safety-critical scenarios. However, we find that existing calibration algorithms often overlook the issue of _proximity bias_, a phenomenon where models tend to be more overconfident in low proximity data (i.e., data lying in the sparse region of the data distribution) compared to high proximity samples, and thus suffer from inconsistent miscalibration across different proximity samples. We examine the problem over \(504\) pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms like temperature scaling; 4) Models tend to overfit more heavily on low proximity samples than on high proximity samples. Motivated by the empirical findings, we propose ProCal, a plug-and-play algorithm with a theoretical guarantee to adjust sample confidence based on proximity. To further quantify the effectiveness of calibration algorithms in mitigating proximity bias, we introduce proximity-informed expected calibration error (PIECE) with theoretical analysis. We show that ProCal is effective in addressing proximity bias and improving calibration on balanced, long-tail, and distribution-shift settings under four metrics over various model architectures. 2
Footnote 2: Our codes are available at: [https://github.com/MiaoXiong2320/ProximityBias-Calibration.git](https://github.com/MiaoXiong2320/ProximityBias-Calibration.git)
## 1 Introduction
Machine learning systems are increasingly deployed in high-stakes and safety-critical applications such as autonomous driving and medical diagnosis [30; 36; 9; 6], where incorrect decisions can have severe human health consequences. To ensure safe and reliable deployment, _confidence calibration_ approaches [10; 23; 28] are employed to produce more accurate uncertainty estimates, which allow models to establish trust by communicating their level of uncertainty, and to defer to human decision-making when the models are uncertain.
In this paper, we present a calibration-related phenomenon termed _proximity bias_, which refers to the tendency of current deep classifiers to exhibit higher levels of overconfidence on samples of low proximity, i.e., samples in sparse areas within the data distribution (see Figure 1bc for an illustrative example). In this study, we quantify the proximity of a sample as the average distance to its \(K\) (e.g. \(K=10\)) nearest neighbor samples in the data distribution, and we observe that proximity bias holds for various choices of \(K\). Importantly, the phenomenon persists even after applying existing popular calibration methods, leading to different levels of miscalibration across proximities.
The proximity bias issue raises safety concerns in real-world applications, particularly for underrepresented populations (i.e. low proximity samples) [30; 35]. A recent skin cancer analysis highlights
this concern by revealing that AI-powered models demonstrate high performance for light-skinned individuals but struggle with dark-skinned individuals due to their underrepresentation [11]. This issue can also manifest in the form of proximity bias: suppose a dark-skinned individual has a high, 40% risk of having cancer. However, due to their underrepresentation within the data distribution, the model overconfidently assigns them a 98% confidence of not having cancer. As a result, these low proximity individuals may be deprived of timely intervention.
To study the ubiquity of this problem, we examine \(504\) ImageNet pretrained models from the timm library [41] and make the following key observations: 1) Proximity bias exists generally across a wide variety of model architectures and sizes; 2) Transformer-based models are more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms including temperature scaling; 4) Low proximity samples are more prone to model overfitting while high proximity samples are less susceptible to this issue.
Besides, we argue that proximity bias is overlooked by _confidence calibration_. Revisiting its definition, \(\mathbb{P}(Y=\hat{Y}\mid\hat{P}=p)=p\) for all \(p\in[0,1]\), we find that its primary goal is to match confidence with the accuracy of samples sharing the same confidence level. However, Figure 1a reveals that although the model seems well-calibrated within each confidence group, there still exists miscalibration errors among these groups (e.g. low and high proximity samples) due to proximity bias.
Motivated by this, we propose a debiased variant of the expected calibration error (ECE) metric, called proximity-informed expected calibration error (PIECE), to further capture the miscalibration error due to proximity bias. Its effectiveness is supported by our theoretical analysis showing that PIECE is at least as large as ECE, with equality holding only when there is no cancellation effect with respect to proximity bias.
To tackle proximity bias and further improve confidence calibration, we propose a plug-and-play method, ProCal. Intuitively, ProCal learns a joint distribution of proximity and confidence to adjust probability estimates. To fully leverage the characteristics of the input information, we develop two separate algorithms tailored for continuous and discrete inputs. We evaluate the algorithms on large-scale datasets: **balanced datasets** including ImageNet [7] and Yahoo-Topics [47], **long-tail datasets** iNaturalist 2021 [3] and ImageNet-LT [27], and **distribution-shift datasets** MultiNLI [42] and ImageNet-C [15]. The results show that our algorithm consistently improves the performance of existing algorithms under four metrics with 90% significance (p-value < 0.1).
Our main contributions can be summarized as follows:
* **Findings**: We discover the proximity bias issue and show its prevalence over large-scale analysis (\(504\) ImageNet pretrained models).
* **Metrics**: To quantify the effectiveness of mitigating proximity bias, we introduce proximity-informed expected calibration error (PIECE) with theoretical analysis.
* **Method Effectiveness**: To address proximity bias and improve calibration, we propose a plug-and-play algorithm ProCal with the theoretical guarantee. We verify its effectiveness on image and text datasets with balanced, distribution-shift, and long-tail settings.
## 2 Related Work
Confidence Calibration. Confidence calibration aims to yield uncertainty estimates via aligning a model's confidence with the accuracy of samples with the same confidence level [10; 25; 28]. To
Figure 1: **Samples with lower (higher) proximity tend to be more overconfident (underconfident)**. These results are obtained using XCiT, an Image Transformer, on the ImageNet validation set (**All Samples**). The proximity of a sample is measured as its average distance to its nearest neighbors (\(K=10\)) in the validation set. We split samples into \(10\) equal-size bins based on proximity and choose the bin with the highest proximity (**High Proximity Samples**) and lowest proximity (**Low Proximity Samples**).
achieve this, **Scaling-based** methods represented by temperature scaling [10] adjust the predicted probabilities by learning a temperature scalar for all samples. Similarly, parameterized temperature scaling [39] offers improved expressiveness via input-dependent temperature parameterization, and Mix-n-Match [46] adopts ensemble and composition strategies to yield data-efficient and accuracy-preserving estimates. **Binning-based** methods divide samples into multiple bins based on confidence and calibrate each bin. Popular methods include classic histogram binning [44], mutual-information-maximization-based binning [32], and isotonic regression [45]. However, existing calibration methods overlook the proximity bias issue, which fundamentally limits the methods' capabilities in delivering reliable and interpretable uncertainty estimates.
Multicalibration. Multicalibration algorithms [13; 21] aim to achieve a certain level of fairness by ensuring that a predictor is well-calibrated for the overall population as well as different computationally-identifiable subgroups. [33] proposes a grouping loss to evaluate subgroup calibration error while we propose a metric to integrate the group cancellation effect into existing calibration loss. [21] focuses on understanding the fundamental trade-offs between group calibration and other fairness criteria, and [13] proposes a conceptual iterative algorithm to learn a multi-calibrated predictor. In this regard, our proposed framework can be considered a specific implementation of the fairness objectives outlined in [13], with a particular focus on proximity-based subgroups. This approach offers easier interpretation and implementation compared to subgroups discussed in [13; 21].
## 3 What is Proximity Bias?
In this section, we study the following questions: What is proximity bias? When and why does proximity bias occur?
Background. We consider a supervised multi-class classification problem, where input \(X\in\mathcal{X}\) and its label \(Y\in\mathcal{Y}=\{1,2,\cdots,C\}\) follows a joint distribution \(\pi(X,Y)\). Let \(f\) be a classifier with \(f(X)=(\hat{Y},\hat{P})\), where \(\hat{Y}\) represents the predicted label, and \(\hat{P}\) is the model's confidence, i.e. the estimate of the probability of correctness [10]. For simplicity, we use \(\hat{P}\) to denote both the model's confidence and the confidence calibrated using existing calibration algorithms.
Proximity. We define proximity as a function of the average distance between a sample and its \(K\) nearest neighbors in the data distribution:
\[D(X)=\exp\left(-\frac{1}{K}\sum_{X_{i}\in\mathcal{N}_{K}(X)}\mathrm{dist}(X,X_ {i})\right), \tag{1}\]
where \(\mathcal{N}_{K}(X)\) represents the set of \(K\) nearest neighbors of sample \(X\), and \(\mathrm{dist}(X,X_{i})\) denotes the distance between sample \(X\) and its \(i\)-th nearest neighbor \(X_{i}\), estimated using the Euclidean distance between the features of \(X\) and \(X_{i}\) from the model's penultimate layer. \(K\) is a hyperparameter (we set \(K=10\) in this paper). We use the validation set as a proxy to estimate the data distribution. Although the training set can also be employed to compute proximity, we utilize the validation set because it is readily accessible during the calibration process.
This definition allows us to capture the local density of a sample and its relationship to its neighborhood. For instance, a sample situated in a sparse region of the training distribution would receive a low proximity value, while a sample located in a dense region would receive a high proximity value. Samples with low proximity values represent **underrepresented samples** in the data distribution that merit attention, such as rare ("long-tail") diseases, minority populations, and samples with distribution shift.
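A minimal sketch of Eq. (1) using scikit-learn is given below; note that when scoring the validation set against itself, the query point should be excluded from its own neighbor set (not handled here).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def proximity(feats, feats_val, K=10):
    """Eq. (1): exp(-mean Euclidean distance to the K nearest validation
    neighbors), computed on penultimate-layer features."""
    knn = NearestNeighbors(n_neighbors=K).fit(feats_val)
    dists, _ = knn.kneighbors(feats)   # Euclidean distances by default
    return np.exp(-dists.mean(axis=1))
```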
Proximity Bias. To investigate the relationship between proximity and model miscalibration, we define proximity bias as follows:
**Definition 3.1**.: Given any confidence level \(p\), the model suffers from proximity bias if the following condition does not hold:
\[\mathbb{P}\left(\hat{Y}=Y\mid\hat{P}=p,D=d_{1}\right)=\mathbb{P}\left(\hat{Y}= Y\mid\hat{P}=p,D=d_{2}\right)\quad\forall\;d_{1},d_{2}\in(0,1],d_{1}\neq d_{2}.\]
The intuition behind this definition is that, ideally, a sample with a confidence level of \(p\) should have a probability of being correct equal to \(p\), regardless of proximity. However, if low proximity samples
consistently display higher confidence than high proximity samples (as shown in Figure 1), it can lead to unreliable and unjust decision-making, particularly for underrepresented populations.
### Main Empirical Findings
To showcase the ubiquity of this problem, we examine the proximity bias phenomenon on \(504\) ImageNet pretrained models from the timm library [41] and show the results in Figure 2 (see Appendix D for additional figures and analysis).
We use statistical hypothesis testing to investigate the presence of proximity bias. The null hypothesis \(H_{0}\) is that proximity bias does not exist, formally, for any confidence \(p\) and proximities \(d_{1}>d_{2}\):
\[\mathbb{P}\left(\hat{Y}=Y\mid\hat{P}=p,D=d_{1}\right)=\mathbb{P}\left(\hat{Y} =Y\mid\hat{P}=p,D=d_{2}\right). \tag{2}\]
To test the above null hypothesis, i.e., to evaluate whether there is a significant difference in the sample means (i.e. accuracy) of the two selected groups, we use the Wilcoxon rank-sum test [24]. Specifically, we split the samples into 5 equal-sized proximity groups and select the highest and lowest proximity groups. Each time, we randomly sample a point from the highest (or lowest) group and search for a point from the lowest (or highest) group _with the same confidence_. To ensure statistical significance, we repeat this process 20,000 times. We call the resulting sampled points the _confidence-matched high and low proximity groups_ \(B_{H}\), \(B_{L}\), and apply the Wilcoxon rank-sum test to evaluate whether the mean difference in their accuracy is significantly different from zero.
The hypothesis testing results indicate that over 80% of \(504\) models have a p-value less than \(0.05\) (72% after Bonferroni correction [4]), i.e., the null hypothesis is rejected with a confidence level of at least 95%, indicating that proximity bias plagues most of the models in timm.
Inspired by the hypothesis testing, we define **Bias Index** as the accuracy drop between the confidence-matched high proximity group \(B_{H}\) and low proximity group \(B_{L}\) to reflect the degree of bias:
\[\text{Bias Index}=\frac{\sum_{(X,Y)\in B_{H}}\mathbbm{1}\{\hat{Y}=Y\}}{|B_{H}|} -\frac{\sum_{(X,Y)\in B_{L}}\mathbbm{1}\{\hat{Y}=Y\}}{|B_{L}|}. \tag{3}\]
Figure 2: Proximity bias analysis on \(504\) public models. Each marker represents a model, where marker sizes indicate model parameter numbers and different colors/shapes represent different architectures. The bias index is computed using Equation (3) (0 indicates no proximity bias). **Left**: We observed the following: 1) Models with higher accuracy tend to have a larger bias index. 2) Proximity bias exists across a wide range of model architectures. 3) Transformer variants (e.g. DEiT, XCiT, CaiT, and SwinV2) have a relatively larger bias compared to convolution-based networks (e.g. VGG and ResNet variants). **Right**: Confidence calibrated by temperature scaling (Upper Right) is similar to the original model confidence w.r.t proximity bias. Our ProCal (Bottom Right) is effective in reducing proximity bias. Analysis of other existing calibration algorithms can be found in Appendix D.
Note that \(B_{H},B_{L}\) are obtained from the hypothesis testing process and hence have the same mean confidence.
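Given the correctness labels of the two confidence-matched groups, the bias index and its significance test reduce to a few lines; the confidence-matched resampling itself is omitted here.

```python
import numpy as np
from scipy.stats import ranksums

def bias_index(correct_H, correct_L):
    """Eq. (3) plus the Wilcoxon rank-sum test.

    correct_H / correct_L: 0/1 correctness arrays of the
    confidence-matched high/low proximity groups B_H and B_L.
    """
    stat, p_value = ranksums(correct_H, correct_L)
    return correct_H.mean() - correct_L.mean(), p_value
```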
We show the bias index of \(504\) models in Figure 2 and make the following findings:
**1. Proximity bias exists generally across a wide variety of model architecture and sizes.** Figure 2 shows that most models (85% of the models as supported by hypothesis testing) have a bias index larger than 0, indicating the existence of proximity bias.
**2. Transformer-based methods are more susceptible to proximity bias than CNN-based methods.** In Figure 2, models with lower accuracy (primarily CNN-based models such as VGG, EfficientNet, MobileNet, and ResNet [12] variants) tend to have lower bias index. On the other hand, among models with higher accuracy, Transformer variants (e.g., DEiT, XCiT [1], CaiT [40], and SwinV2) demonstrate relatively higher bias compared to convolution-based networks (e.g., ResNet variants). This is concerning given the increasing popularity of Transformer-based models in recent years and highlights the need for further research to study and address this issue.
**3. Popular calibration methods such as temperature scaling do not noticeably alleviate proximity bias.** Figure 2 (upper right) shows that the proximity bias index remains large even after applying temperature scaling, indicating that this method does not noticeably alleviate the problem. In contrast, Figure 2 (bottom right) demonstrates that our proposed approach successfully shifts the models to a much closer distribution around the line \(y=0\) (indicating no proximity bias). The bias index figures for more existing calibration methods are provided in Appendix D.
**4. Low proximity samples are more prone to model overfitting.** Figure 4 in Appendix D shows that the model's accuracy difference between the training and validation set is more significant on low proximity samples (31.67%) compared to high proximity samples (0.6%). This indicates that the model generalizes well on samples of high proximity but tends to overfit on samples of low proximity. The overconfidence of low proximity samples can be a consequence of the overfitting tendency, as the overfitting gap also reflects the mismatch between the model's confidence and its actual accuracy.
## 4 Proximity-Informed ECE
As depicted in Figure 1, _existing evaluation metrics underestimate the true miscalibration level_, as proximity bias causes certain errors in the model to cancel out. As an example, consider a scenario:
**Example 4.1**.: All samples are only drawn from two proximity groups of equal probability mass, \(d=0.2\) and \(d=0.8\), with true probabilities of \(\mathbb{P}(Y=\hat{Y}|X,f)\) being \(0.5\) and \(0.9\), respectively. The model outputs the confidence score \(p=0.7\) to all samples.
We consider the most commonly used metric, expected calibration error (ECE), which is defined as \(\mathrm{ECE}=\mathbb{E}_{\hat{P}}\left[\left|\mathbb{P}(\hat{Y}=Y\mid\hat{P})-\hat{P}\right|\right]\). In Example 4.1, the ECE is 0, suggesting that the model is perfectly calibrated in terms of confidence calibration. In fact, the model has significant miscalibration issues: it is heavily overconfident in one proximity group and heavily underconfident in the other, highlighting the limitations of existing calibration metrics. The miscalibration errors within the same confidence group are _canceled out_ between samples of high and low proximity, a phenomenon we term the _cancellation effect_.
To further evaluate the miscalibration canceled out by proximity bias, we propose the proximity-informed expected calibration error (PIECE). PIECE is defined in an analogous fashion as ECE, yet it further examines information about the proximity of the input sample, \(D(X)\), in the calibration evaluation:
\[\mathrm{PIECE}=\mathbb{E}_{\hat{P},D}\left[\left|\mathbb{P}(\hat{Y}=Y\mid\hat {P},D)-\hat{P}\right|\right]. \tag{4}\]
Back to Example 4.1 where \(\mathrm{ECE}=0\), we have PIECE \(=0.2\), revealing its miscalibration level in the subpopulations of different proximities, i.e., the calibration error regarding proximity bias. Additionally, we demonstrate that PIECE is _always at least as large_ as ECE, with the equality holding only when there is no cancellation effect w.r.t proximity. The detailed proof is relegated to Appendix B.
**Theorem 4.2** (PIECE captures cancellation effect.).: _Given any joint distribution \(\pi(X,Y)\) and any classifier \(f\) that outputs model confidence \(\hat{P}\) for sample \(X\), we have the following inequality, where
equality holds only when there is no cancellation effect with respect to proximity:_
\[\underbrace{\mathbb{E}_{\hat{P}}\left[\left|\mathbb{P}(\hat{Y}=Y\mid\hat{P})-\hat{ P}\right|\right]}_{\text{ECE}}\leq\underbrace{\mathbb{E}_{\hat{P},D}\left[\left| \mathbb{P}(\hat{Y}=Y\mid\hat{P},D)-\hat{P}\right|\right]}_{\text{PIECE}}.\]
## 5 How to Mitigate Proximity Bias?
In this section, we propose ProCal to achieve three goals: 1) **mitigate proximity bias**, i.e., ensure samples with the same confidence level have the same miscalibration gap across all proximity levels, 2) **improve confidence calibration** by reducing overconfidence and underconfidence, and 3) provide a **plug-and-play** method that can combine the strengths of existing approaches with our proximity-informed approach.
The high-level intuition is to explicitly incorporate proximity when estimating the underlying probability of the model prediction being correct. To fully leverage the distinct properties of the input information, we develop two separate algorithms tailored for continuous and discrete inputs. This differentiation is based on the observation that continuous outputs (e.g., those produced by scaling-based methods) contain rich distributional information suitable for density estimation. On the other hand, discrete inputs (e.g., those generated by binning-based methods) allow for robust binning-based adjustments. By treating these inputs separately, we can effectively harness the characteristics of each type.
### Continuous Confidence: Density-Ratio Calibration
Our objective is to estimate the likelihood of a model prediction \(\hat{Y}\) being identical to the ground truth label \(Y\) for every sample \(X\). Computing this probability directly with density estimation methods can be computationally demanding, particularly in high-dimensional spaces. To circumvent the curse of dimensionality and address proximity bias, we incorporate the model confidence \(\hat{P}\) and proximity information \(D(X)\) to estimate the posterior probability of correctness, i.e., \(\mathbb{P}(\hat{Y}=Y\mid\hat{P},D)\). This approach is data-efficient since it conducts density estimation in a two-dimensional space only, rather than in the higher dimensional feature or prediction simplex space.
Consider a test sample \(X\) with proximity \(D=D(X)\) and uncalibrated confidence score \(\hat{P}\), which can be the standard Maximum Softmax Probability (MSP), or the output of any calibration method. \(\mathbb{P}(\hat{Y}=Y\mid\hat{P},D)\) can be computed via Bayes' rule:
\[\mathbb{P}\left(\hat{Y}=Y\mid\hat{P},D\right)=\frac{\mathbb{P}\left(\hat{P},D \mid\hat{Y}=Y\right)\mathbb{P}\left(\hat{Y}=Y\right)}{\mathbb{P}(\hat{P},D)},\]
where \(\hat{Y}\) is the model prediction and \(Y\) is the ground truth label. This can be re-expressed as follows by using the law of total probability:
\[\frac{\mathbb{P}(\hat{P},D\mid\hat{Y}=Y)}{\mathbb{P}\left(\hat{P},D\mid\hat{Y }=Y\right)+\mathbb{P}\left(\hat{P},D\mid\hat{Y}\neq Y\right)\cdot\frac{ \mathbb{P}\left(\hat{Y}\neq Y\right)}{\mathbb{P}\left(\hat{Y}=Y\right)}}.\]
To compute this calibrated score, we need to estimate the distributions \(\mathbb{P}\left(\hat{P},D\mid\hat{Y}=Y\right)\) and \(\mathbb{P}\left(\hat{P},D\mid\hat{Y}\neq Y\right)\), and the class ratio \(\frac{\mathbb{P}\left(\hat{Y}\neq Y\right)}{\mathbb{P}\left(\hat{Y}=Y\right)}\).
To estimate the probability density functions \(\mathbb{P}\left(\hat{P},D\mid\hat{Y}=Y\right)\) and \(\mathbb{P}\left(\hat{P},D\mid\hat{Y}\neq Y\right)\), various density estimation methods can be used, such as parametric methods like Gaussian mixture models or non-parametric methods like kernel density estimation (KDE) [31]. We choose KDE because it is flexible and robust, making no assumptions about the underlying distribution. Specifically, we split samples into two groups based on whether they are correctly classified and then use KDE to estimate the two densities. To obtain the class ratio \(\frac{\mathbb{P}\left(\hat{Y}\neq Y\right)}{\mathbb{P}\left(\hat{Y}=Y\right)}\), we simply use the ratio of the number of misclassified samples to the number of correctly classified samples in the validation set. The **pseudocode** for training and inference can be found in Algorithms 1 and 2 in the Appendix.
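A compact sketch of the Density-Ratio calibration using SciPy's Gaussian KDE is shown below; bandwidth selection and numerical safeguards (e.g., for degenerate validation splits) are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_density_ratio(conf, prox, correct):
    """Fit the two conditional KDEs and the class ratio on validation data.

    conf, prox: 1-D arrays; correct: boolean array of prediction correctness.
    """
    kde_pos = gaussian_kde(np.vstack([conf[correct], prox[correct]]))
    kde_neg = gaussian_kde(np.vstack([conf[~correct], prox[~correct]]))
    ratio = (~correct).sum() / correct.sum()   # P(wrong) / P(correct)
    return kde_pos, kde_neg, ratio

def density_ratio_calibrate(conf, prox, kde_pos, kde_neg, ratio):
    """Posterior P(Y_hat = Y | conf, prox) via Bayes' rule."""
    pts = np.vstack([conf, prox])
    p_pos = kde_pos(pts)
    return p_pos / (p_pos + kde_neg(pts) * ratio)
```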
### Discrete Confidence: Bin Mean-Shift
The Bin-Mean-Shift approach aims to first use 2-dimensional binning to estimate the joint distribution of proximity \(D\) and input confidence \(\hat{P}\) and then estimate \(\mathbb{P}\left(\hat{Y}=Y\mid\hat{P},D\right)\). Considering test samples \(X\) with proximity \(D=D(X)\) and uncalibrated confidence score \(\hat{P}\), we first group samples into 2-dimensional equal-size bins based on their \(D\) and \(\hat{P}\) (other binning schemes can also be used; we choose quantile for simplicity). Next, for each bin \(B_{mh}\), we calculate its accuracy \(\mathcal{A}(B_{mh})\) and mean confidence \(\mathcal{F}(B_{mh})\). Then, for all the samples in the bin, we adjust their confidence scores as follows:
\[\hat{P}_{ours}=\hat{P}+\lambda\cdot\left(\mathcal{A}(B_{mh})-\mathcal{F}(B_{mh })\right), \tag{5}\]
where the weight hyperparameter \(\lambda\in(0,1]\) is a constant, acting as regularization to stabilize the adjustment.
Note that our approach (i.e. applying a mean-shift in each bin) differs from the typical histogram binning method (replacing the confidence scores with its mean accuracy in each bin). Rather than completely replacing the input confidence scores \(\hat{P}\) (which are often reasonably well-calibrated), our approach better utilizes these scores by only adjusting them by the minimal mean-shift needed to correct for proximity bias in each of the 2-dimensional bins.
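The fitting pass of Bin-Mean-Shift can be sketched as follows on a labeled validation set; in deployment the learned bin edges and shifts would be stored and applied to new samples, and the final clipping to \([0,1]\) is our safeguard assumption.

```python
import numpy as np

def bin_mean_shift(conf, prox, correct, n_bins=10, lam=1.0):
    """Eq. (5): shift each confidence by lam * (bin accuracy - bin mean
    confidence) inside 2-D equal-mass (confidence, proximity) bins."""
    calibrated = conf.astype(float).copy()
    for cb in np.array_split(np.argsort(conf), n_bins):   # confidence axis
        for pb in np.array_split(cb[np.argsort(prox[cb])], n_bins):
            if len(pb) > 0:                               # proximity axis
                shift = correct[pb].mean() - conf[pb].mean()
                calibrated[pb] += lam * shift
    return np.clip(calibrated, 0.0, 1.0)
```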
### Theoretical Guarantee
Here we show that our method, Bin-Mean-Shift, consistently achieves a smaller Brier Score given a sufficient amount of data in the context of binary classification. The Brier Score [5] is a strictly proper scoring rule that measures both calibration and accuracy [22], with a smaller value indicating better performance. As illustrated below, our algorithm's Brier Score is asymptotically bounded by the original Brier Score, augmented by a non-negative term.
**Theorem 5.1** (Brier Score after Bin-Mean-Shift is asymptotically bounded by Brier Score before calibration).: _Given a joint data distribution \(\pi(X,Y)\) and a binary classifier \(\hat{f}\), for any calibration algorithm \(h\) that outputs score \(h(\hat{P})\) based on model confidence \(\hat{P}\), we apply Bin-Mean-Shift to derive calibrated score \(\tilde{h}(\hat{P})\) as defined in Equation (5). Let \(h_{c}(\hat{P})=h(\hat{P})\times\mathbb{1}\{\hat{Y}=1\}+(1-h(\hat{P}))\times \mathbb{1}\{\hat{Y}=0\}\) denote the probability assigned to class 1 by \(h(\hat{P})\), and define \(\tilde{h}_{c}(\hat{P})\) similarly. Then, the Brier Score before calibration can be decomposed as follows:_
\[\underbrace{\mathbb{E}_{\pi(X,Y)}\left[\left(h_{c}(\hat{P})-Y\right)^{2} \right]}_{\text{Brier Score before Calibration}}=\underbrace{\mathbb{E}_{\pi(X,Y)} \left[\left(\tilde{h}_{c}(\hat{P})-Y\right)^{2}\right]}_{\text{Brier Score after Calibration}}+\underbrace{\mathbb{E}_{B\sim\mathbb{P}(B)}\left[\left(\hat{\mathcal{A}}(B)- \hat{\mathcal{F}}(B)\right)^{2}\right]}_{\geq 0}+o(1),\]
_where \(\mathbb{P}(B)\) is determined by the binning mechanism used in Bin-Mean-Shift._
Remark. Note that when the calibration algorithm \(h\) is an identity mapping, the theorem demonstrates that Bin-Mean-Shift achieves better model calibration performance than the original model confidence, given a sufficient amount of data. The detailed proof is relegated to Appendix A.
## 6 Experiments
In this section, we aim to answer the following questions:
* **Performance across different datasets and model architectures:** How does ProCal perform on datasets with balanced distribution, long-tail distribution, and distribution shift, as well as on different model architectures?
* **Inference efficiency:** How efficient is our ProCal? (see Appendix E.1)
* **Hyperparameter sensitivity**: How sensitive is ProCal to different hyperparameters, e.g. the neighbor size \(K\)? (See Appendix F.2)
* **Ablation study:** What is the difference between Density-Ratio and Bin-Mean-Shift on calibration? How should the choice between these techniques be determined? (See Appendix F.1)
### Experiment Setup
Evaluation Metrics. Following [10], we adopt three commonly used metrics to evaluate _confidence calibration_: Expected Calibration Error (ECE), Adaptive Calibration Error (ACE) [29], and Maximum Calibration Error (MCE), together with our proposed PIECE to evaluate _bias mitigation_ performance.
Datasets. We evaluate the effectiveness of our approach across large-scale datasets with three types of data characteristics (balanced, long-tail, and distribution-shifted) in the image and text domains: (1) datasets with **balanced** class distribution (i.e. each class has an equal number of samples), including the vision dataset ImageNet [7] and two text datasets, Yahoo Answers Topics [47] and MultiNLI-Match [42]; (2) datasets with **long-tail** class distribution, including two image datasets, iNaturalist 2021 [3] and ImageNet-LT [27]; (3) datasets with **distribution shift**, including ImageNet-C [15] and MultiNLI-Mismatch [42].
Comparison methods. We compare our method to existing calibration algorithms: the base confidence score (Conf) [16]; _scaling-based methods_ such as Temperature Scaling (TS) [10], Ensemble Temperature Scaling (ETS) [46], Parameterized Temperature Scaling (PTS) [39], and Parameterized Temperature Scaling with K Nearest Neighbors (PTSK); and _binning-based methods_ such as Histogram Binning (HB), Isotonic Regression (IR), and Multi-Isotonic Regression (MIR) [46]. Throughout the experiment section, we apply Density-Ratio Calibration to Conf, TS, ETS, PTS, and PTSK, and apply Bin-Mean-Shift to the binning-based methods IR, HB, and MIR. HB and IR are excluded from the long-tail setting due to their instability when the class sample size is very small.
More details on baseline algorithms, datasets, pretrained models, hyperparameters, and implementation details can be found in Appendix C.
### Effectiveness
Datasets with the balanced class distribution. The results on ImageNet of 504 models from timm [41] are depicted in Figure 3, where our method (red markers) consistently appears at the bottom, achieving the lowest calibration error across all four evaluation metrics in general. This indicates that our method consistently outperforms other approaches in eliminating proximity bias and improving confidence calibration. We also summarize the results of four popular models, namely BeiT [2], MLP-Mixer [38], ResNet50 [12] and ViT [8], in Table 4 of Appendix E. Additionally, Table 2a presents the calibration results for the text classification task on Yahoo
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{ECE \(\downarrow\)} & \multicolumn{2}{c}{ACE \(\downarrow\)} & \multicolumn{2}{c}{MCE \(\downarrow\)} & PIECE \(\downarrow\) \\ \cline{2-7} Method & base & +ours & base & +ours & base & +ours & base & +ours \\ \hline Conf & 4.85 & **0.78*** & 4.86 & **0.76*** & 0.55 & **0.18*** & 4.91 & **1.51*** \\ TS & 2.03 & **0.70*** & 2.02 & **0.78*** & 0.30 & **0.14*** & 2.34 & **1.43*** \\ ETS & 1.12 & **0.66*** & 1.15 & **0.77*** & 0.18 & **0.13*** & 1.79 & **1.38*** \\ PTS & 4.86 & **0.71*** & 4.90 & **0.86*** & 2.96 & **0.12** & 7.04 & **1.44*** \\ PTSK & 2.97 & **0.65*** & 3.01 & **0.82*** & 0.56 & **0.11*** & 4.66 & **1.41*** \\ MIR & 1.05 & **0.98** & 1.09 & **1.06** & **0.18** & 0.19 & **1.63** & 1.64 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance on iNaturalist 2021 long-tail dataset with ResNet50 pretrained on ImageNet. * denotes significant improvement (p-value < 0.1). ‘Base’ refers to existing calibration methods, ‘Ours’ to our method applied to calibration. Note that ‘Conf+Ours’ shows the result of our method applied directly to model confidence. Calibration error is given by \(\times 10^{-2}\).
Figure 3: Calibration errors on ImageNet across 504 timm models. Each point represents the calibration result of applying a calibration method to the model confidence. Marker colors indicate different calibration algorithms used. Among all calibration algorithms, our method consistently appears at the bottom of the plot. See Appendix E Figure 11 for high resolution figures.
Answers Topics where our method consistently improves the calibration of existing methods and model confidence. See Table 5 in Appendix E for results of the text understanding task MultiNLI.
Datasets with the long-tail class distribution. Table 1 shows the results on the image dataset iNaturalist 2021. Our method ('ours') consistently improves upon existing algorithms ('base') in reducing confidence calibration errors (ECE, ACE, and MCE) and mitigating proximity bias (PIECE). Note that even when used independently, without being combined with existing algorithms ('Conf+ours'), our method achieves the best performance across all metrics. This result suggests that our algorithm can make the model better calibrated in the long-tail setting by effectively mitigating the bias towards low proximity samples (i.e. tail classes), highlighting its practicality in real-world scenarios where data is often imbalanced and long-tailed. Besides, the ImageNet-LT results in Table 6 of Appendix E.3 show similar improvements.
Datasets with distribution shift. Table 2b shows our method's calibration performance when trained on an in-distribution validation set (MultiNLI Match) and applied to a cross-domain test set (MultiNLI Mismatch). The results show that our method improves upon most existing methods on ECE, ACE and MCE, and gains consistent improvements on PIECE, indicating its effectiveness in mitigating proximity bias. Moreover, compared to Bin-Mean-Shift, Density-Ratio Calibration enhances the existing baselines more stably; further analysis of this comparison can be found in Appendix F.1. Finally, empirical results on ImageNet-C in Figure 12 of Appendix E also demonstrate consistent improvements of our method over the baselines.
## 7 Conclusions and Discussion
In this paper, we focus on the problem of proximity bias in model calibration, a phenomenon wherein deep models tend to be more overconfident on data of low proximity (i.e. lying in the sparse region of data distribution) and thus suffer from miscalibration. We study this phenomenon on \(504\) public models across a wide variety of model architectures and sizes on ImageNet and find that the bias persists even after applying the existing calibration methods, which drives us to propose ProCal for tackling proximity bias. To further evaluate the miscalibration due to proximity bias, we propose a proximity-informed expected calibration error (PIECE) with theoretical analysis. Extensive empirical studies on balanced, long-tail, and distribution-shifted datasets under four metrics support our findings and showcase the effectiveness of our method.
Potential Impact, Limitations and Future Work. We uncover the proximity bias phenomenon and show its prevalence through a large-scale analysis, highlighting its negative impact on the safe deployment of deep models, e.g. unfair decisions on minority populations and false diagnoses for underrepresented patients. We also provide ProCal as a starting point for mitigating proximity bias, which we believe has the potential to inspire subsequent work, serve as useful guidance in the literature, and ultimately lead to _improved and fairer decision-making in real-world applications_, especially for underrepresented populations and safety-critical scenarios. However, our study also has several limitations. First, ProCal maintains a held-out validation set during inference for computing proximity. While we have shown that this cost can be marginal for large models (see Inference Efficiency in Appendix E.1), it may be challenging on small devices where memory is limited. Future research can investigate the underlying mechanisms of proximity bias and explore alternatives to the current local density estimation. Additionally, we focus only on the closed-set multi-class classification problem; future work can generalize this to multi-label, open-set, or generative settings.
\begin{table}
\end{table}
Table 2: We use RoBERTa models [26] fine-tuned on Yahoo and MultiNLI Match, respectively, as the underlying models. ‘Base’ refers to existing calibration methods and ‘Ours’ refers to our method applied to existing calibration methods. Calibration errors are given in units of \(10^{-2}\).
2305.12852 | Cycle Consistency-based Uncertainty Quantification of Neural Networks in
Inverse Imaging Problems | Uncertainty estimation is critical for numerous applications of deep neural
networks and draws growing attention from researchers. Here, we demonstrate an
uncertainty quantification approach for deep neural networks used in inverse
problems based on cycle consistency. We build forward-backward cycles using the
physical forward model available and a trained deep neural network solving the
inverse problem at hand, and accordingly derive uncertainty estimators through
regression analysis on the consistency of these forward-backward cycles. We
theoretically analyze cycle consistency metrics and derive their relationship
with respect to uncertainty, bias, and robustness of the neural network
inference. To demonstrate the effectiveness of these cycle consistency-based
uncertainty estimators, we classified corrupted and out-of-distribution input
image data using some of the widely used image deblurring and super-resolution
neural networks as testbeds. The blind testing of our method outperformed other
models in identifying unseen input data corruption and distribution shifts.
This work provides a simple-to-implement and rapid uncertainty quantification
method that can be universally applied to various neural networks used for
solving inverse problems. | Luzhe Huang, Jianing Li, Xiaofu Ding, Yijie Zhang, Hanlong Chen, Aydogan Ozcan | 2023-05-22T09:23:18Z | http://arxiv.org/abs/2305.12852v1 | # Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems
###### Abstract
Uncertainty estimation is critical for numerous applications of deep neural networks and draws growing attention from researchers. Here, we demonstrate an uncertainty quantification approach for deep neural networks used in inverse problems based on cycle consistency. We build forward-backward cycles using the physical forward model available and a trained deep neural network solving the inverse problem at hand, and accordingly derive uncertainty estimators through regression analysis on the consistency of these forward-backward cycles. We theoretically analyze cycle consistency metrics and derive their relationship with respect to uncertainty, bias, and robustness of the neural network inference. To demonstrate the effectiveness of these cycle consistency-based uncertainty estimators, we classified corrupted and out-of-distribution input image data using some of the widely used image deblurring and super-resolution neural networks as testbeds. The blind testing of our method outperformed other models in identifying unseen input data corruption and distribution shifts. This work provides a
simple-to-implement and rapid uncertainty quantification method that can be universally applied to various neural networks used for solving inverse problems.
## 1 Introduction
Over the past decades, deep learning has achieved enormous advances, demonstrated unprecedented performance, and significantly impacted numerous research fields, e.g., data mining [1], natural language processing [2], and computer vision (CV) [3]. For example, deep neural networks (DNNs) are widely applied in CV fields such as autonomous driving [4] and biomedical imaging and microscopy [5, 6]. However, there is still a strong need for further improvements in the reliability of DNNs, since incorrect network inference can lead to wrong decisions and jeopardize their use in critical real-world applications. Most existing deep learning models cannot provide reliable uncertainty quantification (UQ) for their predictions to distinguish input data distribution shifts during the test stage [7, 8] or to detect adversarial attacks [9, 10].
The sources of uncertainty are categorized into data (aleatoric) uncertainty and model (epistemic) uncertainty [11, 12]. Data uncertainty comes from the inherent errors and random perturbations of the measurement system and data acquisition process, e.g., measurement noise [13] or image distortions [14]. On the other hand, model uncertainty usually results from model imperfections caused by DNN architecture design and stochastic training procedure, as well as generalization errors on unknown data distributions [15]. Some of the existing deep learning methods solving inverse imaging problems attempt to avoid data uncertainty caused by perturbations in the physical systems, e.g., movements/misalignments of the imaging elements or the objective lens, through training techniques involving specialized loss functions, training protocols or image data augmentation [16, 17, 18]. However, despite these existing efforts to alleviate data uncertainty in the network training, the joint effects of uncontrollable perturbations and model uncertainty make it nearly impossible to eliminate inference uncertainty for various practical applications. Thus, estimating the uncertainty of network inference has drawn increasing interest among researchers. The most common uncertainty quantification approaches are based on Bayesian techniques [15, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Bayesian uncertainty quantification has been applied in inverse imaging problems [19, 20, 25], especially in the field of biomedical imaging [26, 27, 28, 29, 30]. For example, Bayesian methods were
integrated into neural network training procedures to provide uncertainty estimation[21, 31, 32]. Other techniques were also demonstrated for UQ in inverse problems, such as ensemble learning[33, 34, 35] and function priors[36]. However, most of these existing UQ methods require large amounts of training data, additional network modules, or significant modifications to the training process.
In this work, we introduce a simple yet powerful UQ method to quantitatively measure the uncertainty of neural network outputs in solving inverse problems and automatically detect input data corruption and distribution shifts that have never been seen before. As shown in Fig. 1, this approach is based on the execution of forward-backward cycles using the physical forward model and an accordingly trained neural network in an iterative manner. Through a regression-based method, we use these iterative back-and-forth cycles to estimate the neural network's inference uncertainty and derive uncertainty estimators utilizing cycle consistency as a quantitative measure (Fig. 1(b)). Furthermore, we theoretically establish the relationship of these cycle consistency metrics with respect to the uncertainty, bias, and robustness of the neural network inference. Based on these fitted uncertainty estimators, a linear regression model was established to accurately predict the uncertainty for each image inference, and a simple and fast binary classifier was trained to successfully distinguish out-of-distribution (OOD) data where the neural network would normally produce significant generalization errors (Fig. 1(c)) - i.e., helping us avoid catastrophic errors in the network inference.
This UQ method broadly applies to various deep learning-based solutions to inverse problems. To demonstrate its effectiveness, we validated this UQ approach on two tasks used as our testbeds: (1) corrupted input detection for an image deblurring neural network and (2) OOD data detection for an image super-resolution (SR) neural network. Compared to existing UQ methods, our method is simpler as it does not require any modification to the neural network architecture, training or testing processes, and can be directly applied to various trained networks as a universal approach. In addition, our method is independent of the distributions of the OOD data during the test stage. Through our testbeds, we demonstrated that the OOD data classifier could be trained on a limited set of outliers generated by introducing, on purpose, noise into in-distribution (ID) image data to successfully generalize to OOD test image data perturbed by unseen factors, e.g., different noise levels, noise distributions, and different object classes never
seen before. Our results reveal that the presented approach, with its cycle-consistency metrics, outperforms existing OOD detection methods across multiple inference tasks; specifically, our approach outperformed other supervised deep neural network-based methods while having significantly lower model complexity and training much faster (\(\leq 1\) sec). This cycle-consistency-based UQ framework provides a simple yet powerful approach to rapidly quantify output uncertainty in various fields where neural network inference is used to solve inverse problems.
## 2 Results
### Theoretical bounds of cycle consistency
In most imaging and sensing tasks, low-dimensional measurements \(x\in R^{N}\) about a ground truth high-dimensional signal \(y\in R^{M}\) are captured through a non-linear measurement system. The forward process of such a measurement can be formulated by[5]
\[x=f\circ h(y)+n \tag{1}\]
where \(f\) and \(h\) represent the sampling process and the transfer function of the measurement system, respectively, \(\circ\) represents function composition, and \(n\) is the random noise present in the measurement system. In most cases, we can assume a linear transfer function and simplify its implementation using the multiplication of the input with a transfer matrix \(H\in R^{N\times M}\), representing the physical forward model.
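For illustration, a minimal NumPy sketch of this linear measurement model (Eq. 1) is given below; the dimensions and the random Gaussian transfer matrix \(H\) are illustrative placeholders, not tied to any specific imaging system.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 64, 16                       # signal and measurement dimensions (placeholders)
H = rng.standard_normal((N, M))     # transfer matrix of the measurement system
y = rng.standard_normal(M)          # ground-truth high-dimensional signal

sigma = 0.01
n = sigma * rng.standard_normal(N)  # additive measurement noise

x = H @ y + n                       # forward process of Eq. (1) with a linear transfer function
```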
Without loss of generality, we select inverse problems in imaging as our testbed in this manuscript. A network is trained on a dataset \(\mathcal{D}=\{(x,y),x\in X,y\in Y\}\), where \(X\) and \(Y\) denote the domains of the input (measurements) and target images (ground truth), respectively. We denote a trained network to estimate the ground truth image from the measurements as \(g_{\theta}\) parameterized by \(\theta\) such that:
\[y_{0}=g_{\theta}(x)=y+\varepsilon_{0} \tag{2}\]
\(\varepsilon_{0}\) represents the error (uncertainty) between the network output \(y_{0}\) and the ground truth \(y\).
In general, such uncertainty is hard to estimate without the knowledge of the distribution of the ground truth signals during the test stage. To address this issue, we build forward-backward
cycles between the input and target domains, where the uncertainty accumulates iteratively and can be effectively estimated (see Fig. 1). These cycles are built by sequentially passing the images through the neural network (\(g_{\theta}\)) and the deterministic physical forward model, i.e.,
\[\begin{cases}y_{0}=g_{\theta}(x)\\ x_{n}=f\circ h(y_{n-1})\\ y_{n}=g_{\theta}(x_{n})\end{cases} \tag{3}\]
where \(n=1,2,\cdots\) is the cycle index. Through these cycles, we get two image sequences, \(\{x,x_{1},x_{2},\cdots,x_{n}\}\) and \(\{y_{0},y_{1},\cdots,y_{n}\}\), in the input and target domains, respectively.
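For concreteness, the cyclic inference of Eq. 3 reduces to a short loop in code. In the sketch below, `g_theta` and `forward_model` are placeholders for the trained network and the deterministic forward process \(f\circ h\); a final forward pass also produces \(x_{N+1}\), which is used in the input-domain regression described later.

```python
import torch

def cyclic_inference(x, g_theta, forward_model, num_cycles=5):
    """Forward-backward cycles of Eq. (3).

    g_theta: trained network solving the inverse problem (placeholder)
    forward_model: deterministic physical forward process f∘h (placeholder)
    Returns the sequences {x, x_1, ..., x_{N+1}} and {y_0, y_1, ..., y_N}.
    """
    xs, ys = [x], []
    with torch.no_grad():
        y = g_theta(x)                  # y_0 = g_theta(x)
        ys.append(y)
        for _ in range(num_cycles):
            x = forward_model(y)        # x_n = f∘h(y_{n-1})
            y = g_theta(x)              # y_n = g_theta(x_n)
            xs.append(x)
            ys.append(y)
        xs.append(forward_model(y))     # x_{N+1}, completing the input-domain sequence
    return xs, ys
```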
Next, we theoretically show that the cycle consistency, defined as the difference between two adjacent outputs \(\|y_{n+1}-y_{n}\|\), has an upper bound proportional to \(\|\varepsilon_{0}\|\) with two assumptions about the forward physical process and the trained model:
1. The trained model is unbiased on the data distribution \(\mathcal{D}=\{(x,y),x\in X,y\in Y\}\), i.e., \[y=g_{\theta}\big{(}f\circ h(y)\big{)},\forall y\in Y\] (4)
2. The deterministic forward-backward cycle satisfies Lipschitz continuity with a constant \(L\) around a neighbor of \(y\) containing \(y_{0},y_{1},\cdots y_{N}\). \[\big{\|}g_{\theta}(f\circ h(z_{1}))-g_{\theta}\big{(}f\circ h(z_{2})\big{)} \big{\|}\leq L\|z_{1}-z_{2}\|,\forall z_{1},z_{2}\in N_{y}\] (5)
\(N_{y}\) is the neighbor of \(y\), and \(z_{i}\in N_{y}\), \(i=0,1,2,\cdots,N\). \(N\) is the number of maximum cycles. The Lipschitz continuity of common neural network components, e.g., linear and convolutional layers, was proven, and efficient estimation of the corresponding Lipschitz constants has been thoroughly studied in the literature [37, 38, 39].
To quantify the uncertainty \(\varepsilon_{0}\)_without_ any access to the ground truth \(y\), \(y_{0}\) is passed through the forward-backward cycles, with the uncertainty accumulating gradually. We can derive the following recursive relationship for the differences between adjacent outputs:
\[\|y_{n+1}-y_{n}\|=\big{\|}g_{\theta}\big{(}f\circ h(y_{n})\big{)}-g_{\theta} \big{(}f\circ h(y_{n-1})\big{)}\big{\|}\leq L\|y_{n}-y_{n-1}\|,\forall n\geq 1 \tag{6}\]
By induction, the difference can be further bounded by \(\|\varepsilon_{0}\|\):
\[\begin{split}\|\Delta y_{n}\|&=\|y_{n}-y_{n-1}\|\leq L\|y_{n-1}-y_{n-2}\|\leq L^{n-1}\|y_{1}-y_{0}\|\\ &=L^{n-1}\big\|g_{\theta}\big(f\circ h(y+\varepsilon_{0})\big)-y-\varepsilon_{0}\big\|\\ &\leq L^{n-1}\big(\big\|g_{\theta}\big(f\circ h(y+\varepsilon_{0})\big)-g_{\theta}\big(f\circ h(y)\big)\big\|+\|\varepsilon_{0}\|\big)\\ &\leq L^{n-1}(L+1)\|\varepsilon_{0}\|=L^{n}\left(\frac{L+1}{L}\right)\|\varepsilon_{0}\|,\quad\forall n\geq 1\end{split} \tag{7}\]

where the third line uses the triangle inequality together with the unbiasedness assumption \(y=g_{\theta}\big(f\circ h(y)\big)\), and the last line applies the Lipschitz bound of Eq. 5.
Likewise, the lower bound of the cycle consistency can be derived with the assumption that
\[\|g_{\theta}(f\circ h(z_{1}))-g_{\theta}(f\circ h(z_{2}))\|\geq l\|z_{1}-z_{2}\|, \forall z_{1},z_{2}\in N_{y} \tag{8}\]
In the neighbor of \(y\), the two constants \(L\) and \(l\) can be approximated by the largest and smallest singular values of the Jacobian matrix \(\frac{\partial}{\partial y}g_{\theta}(f\circ h(y))\), respectively.
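For small signal sizes, this singular-value approximation can be computed directly with automatic differentiation; the sketch below assumes `composed_map` implements \(z\mapsto g_{\theta}\big(f\circ h(z)\big)\) on flattened 1-D tensors, which is a simplifying assumption for illustration.

```python
import torch
from torch.autograd.functional import jacobian

def lipschitz_bounds(composed_map, y):
    """Approximate L and l as the largest/smallest singular values of the
    Jacobian of z -> g_theta(f∘h(z)) evaluated at y (tractable only for small M)."""
    J = jacobian(composed_map, y)          # (M, M) Jacobian matrix at y
    s = torch.linalg.svdvals(J)
    return s.max().item(), s.min().item()  # (L, l)
```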
In our analysis, we consider two common cases: the sequence of the cycle outputs \(\{y_{n},n=0,1,2,\cdots,N\}\) diverges or converges. When \(\{y_{n}\}\) diverges, i.e., \(l\geq 1\), the lower bound can be expressed in terms of \(\|\varepsilon_{0}\|\):
\[\begin{split}\|\Delta y_{n}\|&=\|y_{n}-y_{n-1}\|\geq l^{n-1}\|y_{1}-y_{0}\|\\ &\geq l^{n-1}\Big|\big\|g_{\theta}\big(f\circ h(y+\varepsilon_{0})\big)-y\big\|-\|\varepsilon_{0}\|\Big|\\ &\geq l^{n}\left(\frac{l-1}{l}\right)\|\varepsilon_{0}\|,\quad\forall n\geq 1\end{split} \tag{9}\]
On the other hand, for \(1>L>l>0\), the cycle outputs \(\{y_{n}\}\) will eventually converge, and in this case, the lower bound can be written as:
\[\begin{split}\|\Delta y_{n}\|&\geq l^{n-1}\Big|\big\|g_{\theta}\big(f\circ h(y+\varepsilon_{0})\big)-y\big\|-\|\varepsilon_{0}\|\Big|\\ &\geq l^{n}\left(\frac{1-L}{l}\right)\|\varepsilon_{0}\|,\quad\forall n\geq 1\end{split} \tag{10}\]
Based on the exponential form of the upper and lower bounds in Eqs. 7, 9 and 10 as a function of \(n\), we can fit \(\|\Delta y_{n}\|\) to an exponential function of the cycle index \(n\). Therefore, we used the following regression relationship to estimate the uncertainty of the neural network output from cycle consistency \(\|\Delta y_{n}\|\), _without_ any knowledge of the ground truth:
\[\|\Delta y_{n}\|=k_{y}^{n}\varepsilon_{y}\big{(}1+e_{n,y}\big{)}+b_{y},n=1,2, \cdots,N \tag{11}\]
Here \(e_{n,y}\) represents the random errors in the regression model that capture the effect of any unmodeled variables. For simplicity, we further assume that each \(e_{n,y}\) independently follows a normal distribution with a variance of \(s^{2}\), i.e., \(e_{n,y}\overset{i.i.d.}{\sim}N(0,s^{2}),\forall n=1,2,\cdots,N\). The coefficients \(k_{y}\), \(\varepsilon_{y}\) and \(b_{y}\) represent (and model) the robustness, uncertainty, and bias of \(g_{\theta}\) inference, respectively. In compliance with the upper and lower bounds reported in Eqs. 7, 9 and 10, Eq. 11 indicates that the cycle consistency \(\|\Delta y_{n}\|\) should exhibit an exponentially increasing or decaying trend over the cycle index \(n\). Although the theoretical bounds in Eqs. 7, 9 and 10 require the unbiasedness of the trained neural network, i.e., \(y=g_{\theta}\big{(}f\circ h(y)\big{)},\forall y\in Y\), the
regression relationship in Eq. 11 relaxes this assumption and can be applied to even biased neural networks.
We can perform a similar regression to the cycle consistency in the input domain, i.e., \(\|\Delta x_{n}\|\):
\[\|\Delta x_{n}\|=k_{x}^{n}\varepsilon_{x}\big(1+e_{n,x}\big)+b_{x},\quad n=2,3,\cdots,N+1 \tag{12}\]
Note here that since \(\|\Delta x_{1}\|=\|x_{1}-x\|\) is directly affected by the noise in the original input (measurement) \(x\), it is not included in the regression analysis and is instead kept as a separate attribute. The maximum cycle number \(N\) should be selected in view of the applicability of our assumptions: \(y_{n},n=1,2,\cdots,N\) must remain within the neighborhood \(N_{y}\) of \(y\) where Eqs. 5 and 8 hold. For each input image and a given \(N\), we perform the cyclic inference to obtain two sequences \(\{x,x_{1},x_{2},\cdots,x_{N+1}\}\) and \(\{y_{0},y_{1},\cdots,y_{N}\}\). Then, five uncertainty and bias estimators, i.e., \(\hat{\varepsilon}_{x},\hat{\varepsilon}_{y},\hat{b}_{x},\hat{b}_{y},\|\Delta x_{1}\|\), are obtained from the regression analysis based on Eqs. 11 and 12.
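In practice, these estimators can be obtained by non-linear least squares on the mean regression relation \(k^{n}\varepsilon+b\) (taking \(E[e_{n}]=0\) in Eqs. 11 and 12). A possible implementation with SciPy is sketched below; the initial guess and parameter bounds are illustrative choices rather than the exact settings used in our experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_estimators(deltas, start_index=1):
    """Fit ||Δ|| ≈ k^n * eps + b (the mean of Eqs. 11/12) to one cycle sequence.

    deltas: ||y_n - y_{n-1}|| for n = 1..N (or ||x_n - x_{n-1}|| for n = 2..N+1,
    in which case start_index=2). Returns the fitted (k_hat, eps_hat, b_hat).
    """
    n = np.arange(start_index, start_index + len(deltas), dtype=float)
    model = lambda n, k, eps, b: k**n * eps + b
    p0 = (0.9, max(deltas[0], 1e-8), 0.0)           # illustrative initial guess
    (k, eps, b), _ = curve_fit(model, n, np.asarray(deltas), p0=p0,
                               bounds=([0.0, 0.0, -np.inf],
                                       [np.inf, np.inf, np.inf]),
                               maxfev=10000)
    return k, eps, b
```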
To demonstrate the effectiveness of our method, we validated it with two OOD input detection tasks on two common inverse imaging problems used as our testbeds, which are detailed in the next sub-sections. First, we tested our method on a corrupted input detection task with an image deblurring network. We quantified the classification accuracy on various input data corruption cases, including different noise levels and distributions. Second, we applied our method to REAL-ESRGAN[44], an image SR network, and detected OOD test images of unseen object classes. These two inference tasks consider data and model uncertainty separately to comprehensively evaluate our method for quantifying neural network uncertainty caused by various factors. In both tasks, the classifier was trained on a combination of ID data and OOD data generated by injecting noise, and the trained classifier, using cycle consistency metrics, was shown to distinguish OOD data perturbed by various unseen factors.
### Corrupted input detection for an image deblurring neural network
We used the GoPro dataset[41] and the DeepRFT[42] model to implement image deblurring. Figure 2(a) illustrates a sharp image of a typical field of view (FOV) and a randomly generated motion blur kernel (see the Methods). The trained DeepRFT (indicated by the red arrows) could recover the sharp image from a noiseless blurry input image \(x\), as shown in Fig. 2(b). For each blurred test image (\(x\)), the recovered sharp image \(y_{0}\) was passed through the deterministic physical
forward model (gray arrows) to generate the predicted input image \(x_{1}\). In this way, a total of \(N=5\) cycles were implemented, and two sequences of images were created in the input and target domains, i.e., \(\{x,x_{1},x_{2},\cdots,x_{N+1}\}\) and \(\{y_{0},y_{1},\cdots,y_{N}\}\), respectively. In Fig. 2(c), the cycle consistency \(\|\Delta y_{n}\|\) and \(\|\Delta x_{n}\|\) versus the cycle number \(n\) are plotted together with the corresponding regression curves from Eqs. 11 and 12, respectively. Both curves demonstrate a good fit to the data points, with coefficients of determination (\(R^{2}\)) of 0.9994 and 0.9960 for \(\|\Delta y_{n}\|\) and \(\|\Delta x_{n}\|\), respectively. Figure 2(d) further shows the actual uncertainty \(\|\varepsilon_{0}\|\) versus \(\hat{\varepsilon}_{y}\) over 100 blurry test images of the GoPro dataset generated with the shown blur kernel. The resulting data points indicate a positive correlation between \(\|\varepsilon_{0}\|\) and \(\hat{\varepsilon}_{y}\), confirming the effectiveness of our uncertainty estimator.
To further demonstrate the uncertainty quantification of our method, we fitted a linear regression model based on the five uncertainty and bias estimators \((\hat{\varepsilon}_{x},\hat{\varepsilon}_{y},\hat{b}_{x},\hat{b}_{y},\|\Delta x _{1}\|)\) to predict the value of \(\|\varepsilon_{0}\|\). The linear regression model was first trained on \(\sim\)900 Gaussian noise-corrupted blurry input images of the test scene in the GoPro dataset with a random noise level \(\sigma_{Gauss}\) from 0 to 0.1, and then tested on \(\sim\)400 salt-and-pepper (SNP) noise-corrupted blurry input images of the test scene with a random noise level \(\sigma_{SNP}\) from 0 to 0.1 (see the Methods for details). Figure 2(e) illustrates the predicted and ground truth uncertainty values on the training and testing sets of the linear regression model. This linear regression model accurately predicted the uncertainty for each input image in the training set, resulting in \(R^{2}=0.9724\); after its training, the linear model could further generalize to the testing set with SNP noise corruption, achieving \(R^{2}=0.8073\) (see Fig. 2(e)).
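As an illustration, this linear regression step can be implemented in a few lines with scikit-learn; the random arrays below are placeholders for the fitted attributes and measured errors of the actual training images.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Each row holds the five attributes (eps_x, eps_y, b_x, b_y, ||dx_1||) of one
# training image; eps0 holds the corresponding actual errors ||eps_0||.
A_train = rng.random((900, 5))      # placeholder attribute matrix
eps0_train = rng.random(900)        # placeholder measured uncertainties

reg = LinearRegression().fit(A_train, eps0_train)
print("train R^2:", r2_score(eps0_train, reg.predict(A_train)))
```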
Next, we implemented corrupted input image detection using the cycle-consistency-based uncertainty and bias estimators. This detection was performed through a binary XGBoost[43] classifier using 5 attributes \(\hat{\varepsilon}_{x},\hat{\varepsilon}_{y},\hat{b}_{x},\hat{b}_{y},\|\Delta x _{1}\|\), which were calculated through the cyclic inference and regression analysis of each input image as shown in Fig. 2(b, c). The classifier was trained on a dataset combining 1,000 ID input images and 1,200 corrupted (OOD) input images with random motion blur kernels and took \(\sim\)1 sec to train (see the Methods for details). The corrupted input images were generated by introducing Gaussian noise with a noise level of
\(\sigma_{Gauss}\geq 0.01\) into the noiseless blurry image. Another 100 sharp images and 5 random kernels (excluded from training) were used to generate testing images. Figure 3(a) visualizes corrupted input images under various Gaussian noise levels \(\sigma_{Gauss}\) from 0.00 to 0.03 and their corresponding outputs. As expected, we observed significant errors and artifacts in the output images corresponding to the corrupted input images with relatively high Gaussian noise levels, emphasizing the importance of corrupted input detection. In Fig. 3(b), a random subset of ID and OOD training and testing data are projected into a two-dimensional space formed by \(\|\Delta x_{1}\|\) and \(\hat{\varepsilon}_{y}\). Overall, the ID and OOD data are spatially separated, demonstrating the feasibility of corrupted input detection using the cycle consistency-based uncertainty and bias estimators. Furthermore, Fig. 3(c) quantifies the accuracy of the same classifier on a balanced dataset of ID input images and corrupted input images with Gaussian noise levels from 0.01 to 0.10. Two other baseline methods, i.e., an XGBoost classifier without cyclic inference and a supervised ResNet-50, were trained and tested on the same datasets for comparison (see the Methods for details). The XGBoost baseline took a similar time as our method for training and tuning (\(\sim\)1 sec per model), while the ResNet-50 baseline took \(\sim\)12 hours to converge. As a simple machine learning algorithm without the need for time-consuming training, our method based on cycle consistency metrics matches the accuracy of the supervised ResNet-50 baseline over various noise levels, and shows considerably higher accuracy than the XGBoost baseline. The accuracy drops around \(\sigma_{Gauss}=0.01\) as the data distribution approaches the boundary between ID and OOD data. However, our method significantly outperforms the two other baselines on this boundary; for example, when \(\sigma_{Gauss}=0.009\), our method scores an accuracy of 0.736, while the XGBoost and ResNet-50 baselines score lower accuracies of 0.576 and 0.485, respectively. In Fig. 3(d), we depict the 5 classification attributes (\(\hat{\varepsilon}_{x},\hat{\varepsilon}_{y},\hat{b}_{x},\hat{b}_{y},\|\Delta x_{1}\|\)) and the actual uncertainty as a function of the noise level \(\sigma_{Gauss}\). The uncertainty and bias estimators (blue and purple lines) exhibit trends similar to the actual uncertainty (gray lines), further validating the effectiveness of our quantitative uncertainty estimation. The importance of each attribute is also visualized in Fig. 3(d). Among the five attributes, \(\hat{\varepsilon}_{y}\) and \(\|\Delta x_{1}\|\) contributed more to the classification results than the other attributes, scoring relatively high importance values of 0.37 and 0.22, respectively.
We also blindly tested the same corrupted input image classifier, trained with the Gaussian noised inputs, on images with SNP noise that it never saw before. A summary of its classification performance on these corrupted inputs with Gaussian and SNP noise and its quantitative comparison against the two baseline methods are presented in Table 1. When tested on the balanced dataset of ID and Gaussian noise-injected OOD input images (\(\sigma_{Gauss}\in[0.01,0.10]\)), our method scored an accuracy of 0.850, equivalent to the ResNet-50 baseline (0.845) and much higher than the XGBoost baseline (0.600). Furthermore, on the balanced dataset of ID and SNP noise-injected OOD input images (\(\sigma_{SNP}\in[0.01,0.10]\)), our method scored an average accuracy of 0.980, which is better than the ResNet-50 baseline (0.845). These results prove the strong generalization of the OOD data classifier, trained on limited data, to distinguish corrupted data using cycle consistency-based metrics.
### Out-of-distribution detection for an image super-resolution (SR) neural network
To validate our method's effectiveness on a different inverse problem with different sources of OOD data, we also implemented our method on the image SR task. For image SR networks, a common source of OOD data is distribution shifts, e.g., test images from object classes/types unseen in the training. In such cases, the external generalization of a neural network refers to testing on data distributions unseen in the training process, while internal generalization refers to testing on the dataset from the same distribution as in the training. External generalization usually results in more notable inference errors.
For this SR task, we selected the REAL-ESRGAN[44] model as our testbed, which demonstrates good performance with checkpoints optimized for various image classes, including anime and natural images. We implemented the cyclic inference process similarly to the experiment reported in the previous section, where 4\(\times\) average pooling formed the forward measurement function \(f\circ h\), and the SR model maps low-resolution measurements back to the domain of high-resolution images. We empirically adopted a maximum cycle number of \(N=3\). Figure 4(a) visualizes the training pipeline of our method and the two baseline methods: we first prepared OOD data by introducing noise into the training images of the model, e.g., noisy anime images were used as OOD data in the training of the classifier for the anime image SR model (see the Methods for more details). Then, similar to the training process used in the
previous experiment (Section 2.2), our method implemented the cyclic inference (highlighted by the blue dashed lines in Fig. 4a) and trained the XGBoost classifier (blue arrows) to distinguish the network outputs for the ID images and the OOD noise-injected images, with both ID and OOD images belonging to the same object class. Although the trained classifier in our method never saw other distribution shifts during its training, it could generalize to and classify new OOD data from unseen distributions, e.g., other object classes. The training processes of the ResNet-50 (green arrows) and the XGBoost (orange arrows) baselines (without any cycle consistency metrics used) are also illustrated in Fig. 4(a), where the ResNet-50 baseline directly performed classification on \(y_{0}\) and the XGBoost baseline relied on \(\|\Delta x_{1}\|\) for its classification. Our method and the XGBoost baseline took \(\sim\)0.6 sec for training and tuning, whereas the ResNet-50 baseline took \(\sim\)12 hours to converge. Although the XGBoost classifier in our method has lower model complexity and requires significantly less training time than standard deep neural networks, the reported cycle consistency-based quantitative metrics enabled the XGBoost classifier to surpass the performance of the ResNet-50 baseline. Figures 4(b, c, d) quantify the classification accuracies of our method as well as the ResNet-50 and XGBoost baseline models on three classes of test images: anime, microscopy and human face images. As reported in Fig. 4(d), when the anime image SR model is externally generalizing on images of unseen object classes (microscopy images and human face images), our method classifies OOD data with an accuracy of 0.971 on the microscopy image dataset and with an accuracy of 0.891 on the face image dataset. For the other two specialized models (microscopy and face image SR models), our method can achieve accuracies of 0.743 and 0.809, respectively, to identify the OOD data. In contrast, the baseline classification methods not using cycle consistency metrics shown in Fig. 4(b, c) significantly underperform compared to our method and cannot distinguish OOD images from different classes. The XGBoost baseline without cyclic inference lacks a high-dimensional description of uncertainty, and therefore results in worse classification accuracy, as shown in Fig. 4(c). For the ResNet-50 baseline, the distribution shifts between the anime images and the microscopy and face images were more significant than that between the anime ID and OOD training images, such that the trained classifier on anime images could easily distinguish the other two image classes but remained confused on identifying ID anime images (see the first row of Fig. 4(b)). Due to the limited number of training samples, the ResNet-50 baseline also tended to overfit to the training image class and could not generalize to unseen distribution shifts,
as indicated by the failure on the second and third rows of Fig. 4(b). In contrast, our method successfully utilized the cycle consistency metrics and generalized to unseen distribution shifts, as shown by the relatively high off-diagonal accuracy scores in Fig. 4(d). Moreover, the diagonal entries (highlighted by the gray dashed lines) of the accuracy heatmaps shown in Fig. 4(b, c, d) report the accuracy scores identifying ID data that the SR models generalized well. Our method achieves \(>\)0.85 accuracy for such cases, avoiding excessive false positives.
## 3 Discussion and Conclusions
We introduced a novel UQ approach to quantify the inference errors of neural networks used in inverse problems and detect unseen corrupted input and OOD data. First, we built forward-backward cycles by iteratively running the physical forward model and the trained neural network, and established the theoretical relationship of the cycle consistency metrics in the expression of uncertainty, bias, and robustness of the neural network inference. Then, we derived uncertainty estimators through a regression analysis of the cycle consistency, without knowing the ground truth data. Finally, our method was validated on two applications: corrupted input detection on image deblurring tasks and OOD data detection on image SR tasks. Different from most existing UQ methods, our approach is model-agnostic and can adapt to various deep learning models in inverse imaging problems without modifications to the model architecture, training, or testing. On the corrupted input detection task with the image deblurring model, our method scored an accuracy of up to 0.980 on unseen corrupted input distributions; on the OOD data detection task with the image SR models, our method achieved up to 0.971 accuracy on unseen OOD image classes. Compared to the ResNet-50 baseline, our method achieved superior performance when generalizing on unseen distribution shifts, as reported in Table 1; when trained on Gaussian noise-injected images, our method scored 0.980 accuracy on unseen SNP noise-injected images, while the ResNet-50 scored a lower accuracy of 0.845.
In addition, our UQ method does not require prior knowledge of the OOD data distribution. In our experiments, the binary classifier was trained on simulated OOD data generated by injecting random noise of specific noise distributions, whereas the trained classifier could generalize to unseen OOD distributions from unseen sources, including e.g., different noise levels (Fig. 3), noise types (Table 1), and object classes (Fig. 4).
Future research could potentially apply deep neural networks such as ResNet to substitute the XGBoost classifier used in our method and further improve the classification accuracy using the uncertainty and bias estimators or the cyclic inference outputs directly. However, compared to the deep learning-based classifiers requiring millions of trainable parameters and hours of training, such as the ResNet-50 baseline used in this work, our method utilized the cycle consistency-based uncertainty estimators to enhance the standard XGBoost classifier that entails considerably less model complexity and less training time (\(\leq\) 1 sec per model), providing a simple but efficient tool for OOD detection. In contrast, it took us \(\sim\)12 hours on average to train the ResNet-50 baseline models with \(\sim\)25M trainable parameters used in our comparisons.
Our UQ method also has some limitations. First, our UQ method originates from estimating the norm between the network outputs and ground truths, which, being an unnormalized metric, often varies considerably from image to image. This issue impedes consistent quantitative uncertainty estimation across multiple datasets and physical forward models. As shown in Fig. 2, for input images of the same scene whose foreground and background have similar subjects and styles, a simple linear regression model can be established on the cycle consistency-based uncertainty estimators to provide quantitative uncertainty estimation per input image. However, fitting such a linear regression model may not always be possible, especially on training datasets with significant variations in the image objects and styles. Additionally, our cycle consistency metrics may not be able to detect hallucinations and artifacts generated by advanced neural networks, which produce photorealistic images with distributions almost identical to the ground truth[45, 46]. In these cases, human perception or a discriminator with prior knowledge about the objects could be used to identify artifacts and hallucinations in the outputs of these networks. Second, our UQ method evaluates the comprehensive influence of data and model uncertainty without differentiating them from each other. However, as revealed in Fig. 3(d), the cycle consistency-based uncertainty estimators have different correlations with the measurement noise, which might be utilized to separate data and model uncertainty. Third, prior knowledge about the physical forward model \(f\circ h\) is required by our UQ method. In some practical applications, the forward model is often parameterized by specific variables \(\phi\), e.g., the blur kernel and the camera distortion/aberrations, and the prior knowledge or measurement of such variables during the
blind test stage could be unavailable. Nevertheless, the cycle consistency metrics and the uncertainty/bias estimators in our method can potentially adapt to such situations with proper modifications. First, our derivation of the relationship between the cycle consistency and uncertainty/bias does not require the exact forward model \(f\circ h(\cdot)\) with ground truth parameters \(\phi_{GT}\). Therefore, using the training-stage forward model parameters \(\phi_{train}\) to approximate \(\phi_{GT}\) could be a promising solution, which needs further investigation. Furthermore, some of the existing models have explored simultaneous estimation of the forward model parameters and the ground truth signal in inverse problems[47, 48, 49]. Integrating the estimated forward model parameters into the cycle consistency evaluation process could be another future improvement to our method.
## Methods
### Training of OOD classifiers
The uncertainty and bias estimators derived in Section 2.1 are used as the attributes for OOD data classification. We picked XGBoost[43] as our classifier for this task. XGBoost is a highly accurate implementation of gradient boosting that builds parallel trees and follows a level-wise strategy to iteratively train decision trees using the previous model's error residual. Before training, we randomly split 80% of the training data into the training set, and the remaining 20% into the validation set.
To find the optimal parameters for XGBoost, we first used the random grid cross-validation search to fit and test the model with parameters randomly chosen from a wide range. The resulting parameters gave an estimation of the optimal parameters. We then tuned each parameter one by one using a grid search to select the best model. The optimal classification threshold was selected to maximize the F1 score on the validation set.
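A condensed sketch of this tuning pipeline is given below; the hyperparameter ranges and threshold grid are illustrative, not the exact values used in our experiments, and the random arrays stand in for the real attribute matrix and labels.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.random((400, 5))            # the five attributes per image (placeholder)
y = rng.integers(0, 2, 400)         # 0 = ID, 1 = OOD (placeholder labels)

param_dist = {"max_depth": [2, 3, 4, 6],        # illustrative search ranges
              "n_estimators": [50, 100, 200],
              "learning_rate": [0.05, 0.1, 0.3]}
search = RandomizedSearchCV(XGBClassifier(), param_dist,
                            n_iter=10, cv=3, random_state=0)
search.fit(X, y)

# pick the classification threshold maximizing F1 (here on the training data
# for brevity; in practice on the held-out 20% validation split)
proba = search.predict_proba(X)[:, 1]
best_t = max(np.linspace(0.1, 0.9, 17),
             key=lambda t: f1_score(y, (proba >= t).astype(int)))
```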
Additionally, we established two baseline models for comparison. The XGBoost baseline without cyclic inference adopted the same XGBoost training and tuning processes and utilized \(\|\Delta x_{1}\|\) as the classification attribute. The second baseline leveraged the ResNet-50 architecture[50] to perform binary classification on the direct output \(y_{0}\) and learned on the same training and validation datasets as our method and the XGBoost baseline.
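For reference, setting up the ResNet-50 baseline as a binary classifier takes only a few lines with torchvision (a sketch; the training loop, augmentation, and data loading are omitted):

```python
import torch.nn as nn
from torchvision.models import resnet50

# ResNet-50 trained from scratch to classify the direct network output y_0
model = resnet50(weights=None)                  # no pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)   # binary ID-vs-OOD head
criterion = nn.CrossEntropyLoss()
```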
### Testbeds: networks and datasets
For the image deblurring task, we used the DeepRFT[42] model trained on the GoPro dataset[41]. We selected 900 sharp images (9 scenes) from the GoPro dataset and randomly split them into a training set of 800 images (8 training scenes) and a test set of 100 images (1 test scene) for the classifier, where the test scene was strictly excluded from the training set. Meanwhile, we randomly generated 26 motion-blur kernels to blur the training set images, resulting in \(\sim\)1K ID and \(\sim\)1.2K OOD training images. The motion blur kernels were convolved with the sharp images to generate blurry inputs, and Gaussian noise \(n_{Gauss}\sim N(0,\sigma_{Gauss}^{2})\) with a random level \(\sigma_{Gauss}\) from 0 to 0.1 was introduced. The same operation was implemented on the testing images with 5 different blur kernels. SNP noise with a random level \(\sigma_{SNP}\) from 0 to 0.1 was also introduced to the test images to generate another test set to assess the classifier's generalization to other types of noise during the testing stage. The level of SNP noise \(\sigma_{SNP}\) is defined as the ratio of the number of pixels whose values are randomly turned to 0 or 1 to the total number of pixels within each image. For the DeepRFT model, we empirically regard the input images with Gaussian noise levels lower than 0.01 as ID data, since the corresponding outputs are almost indistinguishable from the outputs of their noiseless counterparts, as shown in Fig. 3(a). All input images with Gaussian or SNP noise levels higher than or equal to 0.01 are classified as OOD data.
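The two corruption processes can be reproduced with a few lines of NumPy; the sketch below assumes single-channel images normalized to \([0,1]\), which is a simplifying assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma):
    """Additive Gaussian noise n ~ N(0, sigma^2), clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_snp_noise(img, sigma):
    """Salt-and-pepper noise: a fraction sigma of pixels set to 0 or 1."""
    out = img.copy()
    mask = rng.random(img.shape) < sigma
    out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return out
```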
In the SR task, we used pre-trained models of REAL-ESRGAN[44], a widely-used image SR neural network for image restoration applications, together with three different image datasets: an anime image dataset, a microscopy image dataset, and the Flickr-Faces-HQ dataset[51]. The microscopy image dataset was captured by a benchtop brightfield microscope (Leica Biosystems Aperio AT2) equipped with a 40\(\times\) objective on hematoxylin and eosin (H&E) stained human lung, breast, and kidney tissue sections (from existing, deidentified samples and data)[52].
Throughout this paper, we refer to these three datasets as the anime dataset, microscopy dataset, and face dataset. Three ESRGAN models were separately optimized for the three image datasets. First, a public ESRGAN model optimized for anime images was used for the anime dataset. Following the recommended training setup, we also finetuned the pre-trained ESRGAN model on 15,000 face images and 15,000 microscopy images to generate two SR models specialized for
the face and microscopy datasets, respectively. Test sets of anime, microscopy and face images were excluded from the corresponding training datasets, and each test set contains 100 images.
For the training of the OOD data classifier for each image SR model, 100 random pristine images were selected from the corresponding training set of the SR model and served as ID data; random noise was injected into the ID images to create OOD data. The injected noise was randomly chosen from Gaussian and SNP noise processes and set at a random noise level from 0.02 to 0.05 for each OOD image. In addition, Gaussian and SNP noise at a random level of 0.005 or 0.01 were added to the ID data for augmentation. As a result, the OOD data classifier for each image SR model was trained on 300 ID and 400 OOD images.
### Algorithm implementation
All the algorithms and neural networks were implemented using Python and the PyTorch framework[53] on a computer with an Intel Xeon W-2195 CPU @ 2.30 GHz, 256 GB RAM and four Nvidia GeForce RTX 2080 Ti GPUs. For example, in the image SR task, the classifier in our method takes \(\sim\)0.6s for training, and a single cyclic inference process (\(N=3\)) takes approximately 0.71s for a high-resolution image with 1024\(\times\)1024 pixels from the face dataset on a single RTX 2080 Ti GPU. The XGBoost baseline took a similar training time and a slightly shorter inference time compared to our method, since it implemented the same XGBoost classification processes but skipped the cyclic inference. The ResNet-50 baseline was trained from scratch and stopped after 1000 epochs to avoid overfitting. Standard augmentation techniques, including random flipping and rotation, were applied. The training time for the ResNet-50 baseline was \(\sim\)12 hours, and the inference time on a single 1024\(\times\)1024 image is 0.015s on the same machine using one RTX 2080 Ti GPU.
## Figures and Tables
Figure 2: Uncertainty estimation using cycle consistency. (a) The sharp image (ground truth) and the motion blur kernel used. (b) The generated blurry image \(x\) and the subsequent cyclic inference, following Fig. 1(b). (c) Regression analysis of the cycle consistency in (b). (d) Scatter plot of the uncertainty estimator and the actual uncertainty of 100 input images. (e) A linear regression model to predict the actual uncertainty. The linear regression model was first fitted on \(\sim\)900 Gaussian noise-injected input images and then tested on \(\sim\)400 SNP noise-injected input images.
Figure 3: Corrupted input image detection using cycle consistency-based uncertainty and bias estimators. (a) Left: the sharp image (ground truth) and the motion blur kernel used. Right: the Gaussian noise-corrupted input images and the corresponding outputs. (b) Projection of the ID and OOD training and testing data on the two-dimensional space formed by the uncertainty estimators. (c) Detection accuracy of our method and two baseline methods on corrupted input images under various noise levels \(\sigma_{Gauss}\). The classifiers were trained with ID images with \(\sigma_{Gauss}<0.01\) and OOD images with \(\sigma_{Gauss}\geq 0.01\). (d) The estimated and actual uncertainty under various noise levels, and the importance of each attribute for ID vs. OOD classification. Scale bar: 20 pixels.
Figure 4: (a) Training pipelines of our method and the baseline methods (XGBoost and ResNet-50) for image SR models specialized for different image classes. The training OOD images were synthesized by injecting noise into the training images of each image SR model. After training, the classifiers were tested to distinguish ID images of the same class as training and OOD images of different classes. (b, c, d) OOD detection accuracy of ResNet-50, XGBoost and our method, respectively. For each method, a classifier was trained for each of the three image SR models specialized for anime, microscopy, and face images and tested on all three test sets. Diagonal entries highlighted by gray dashed lines represent the detection accuracy of ID images. Scale bar: 20 pixels.
**Table 1**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise-corrupted input images, and then tested on corrupted input images with Gaussian and SNP noise under various noise levels. The accuracy was calculated on balanced datasets with the same numbers of ID and OOD images. AP: average precision, AUC (ROC): area under the curve (receiver operating characteristic curve).
**Table 11**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise-corrupted input images, and then tested on corrupted input images with Gaussian and SNP noise under various noise levels. The accuracy was calculated on balanced datasets with the same numbers of ID and OOD images. AP: average precision, AUC (ROC): area under the curve (receiver operating characteristic curve).
**Table 12**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise-corrupted input images, and then tested on corrupted input images with Gaussian and SNP noise under various noise levels. The accuracy was calculated on balanced datasets with the same numbers of ID and OOD images. AP: average precision, AUC (ROC): area under the curve (receiver operating characteristic curve).
**Table 13**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise-corrupted input images, and then tested on corrupted input images with Gaussian and SNP noise under various noise levels. The accuracy was calculated on balanced datasets with the same numbers of ID and OOD images. AP: average precision, AUC (ROC): area under the curve (receiver operating characteristic curve).
**Table 14**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise-corrupted input images, and then tested on corrupted input images with Gaussian and SNP noise under various noise levels. The accuracy was calculated on balanced datasets with the same numbers of ID and OOD images. AP: average precision, AUC (ROC): area under the curve (receiver operating characteristic curve).
**Table 15**. ID vs. OOD classification performance of our method and the baseline methods (XGBoost and ResNet-50). For each method, the classifier was trained on Gaussian noise |
2303.17468 | Surrogate Neural Networks for Efficient Simulation-based Trajectory
Planning Optimization | This paper presents a novel methodology that uses surrogate models in the
form of neural networks to reduce the computation time of simulation-based
optimization of a reference trajectory. Simulation-based optimization is
necessary when there is no analytical form of the system accessible, only
input-output data that can be used to create a surrogate model of the
simulation. Like many high-fidelity simulations, this trajectory planning
simulation is very nonlinear and computationally expensive, making it
challenging to optimize iteratively. Through gradient descent optimization, our
approach finds the optimal reference trajectory for landing a hypersonic
vehicle. In contrast to the large datasets used to create the surrogate models
in prior literature, our methodology is specifically designed to minimize the
number of simulation executions required by the gradient descent optimizer. We
demonstrated this methodology to be more efficient than the standard practice
of hand-tuning the inputs through trial-and-error or randomly sampling the
input parameter space. Due to the intelligently selected input values to the
simulation, our approach yields better simulation outcomes that are achieved
more rapidly and to a higher degree of accuracy. Optimizing the hypersonic
vehicle's reference trajectory is very challenging due to the simulation's
extreme nonlinearity, but even so, this novel approach found a 74%
better-performing reference trajectory compared to nominal, and the numerical
results clearly show a substantial reduction in computation time for designing
future trajectories. | Evelyn Ruff, Rebecca Russell, Matthew Stoeckle, Piero Miotto, Jonathan P. How | 2023-03-30T15:44:30Z | http://arxiv.org/abs/2303.17468v1 | # Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization
###### Abstract
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory. Simulation-based optimization is necessary when there is no analytical form of the system accessible, only input-output data that can be used to create a surrogate model of the simulation. Like many high-fidelity simulations, this trajectory planning simulation is very non-linear and computationally expensive, making it challenging to optimize iteratively. Through gradient descent optimization, our approach finds the optimal reference trajectory for landing a hypersonic vehicle. In contrast to the large datasets used to create the surrogate models in prior literature, our methodology is specifically designed to minimize the number of simulation executions required by the gradient descent optimizer. We demonstrated this methodology to be more efficient than the standard practice of hand-tuning the inputs through trial-and-error or randomly sampling the input parameter space. Due to the intelligently selected input values to the simulation, our approach yields better simulation outcomes that are achieved more rapidly and to a higher degree of accuracy. Optimizing the hypersonic vehicle's reference trajectory is very challenging due to the simulation's extreme nonlinearity, but even so, this novel approach found a 74% better-performing reference trajectory compared to nominal, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
## I Introduction
High-fidelity simulations are used to analyze the dynamics of complex systems in many engineering and scientific disciplines. For most of these applications, there is some desired system outcome, such as the curvature of an airfoil or the path planned for a robot, but, lacking a closed-form model, the simulation must be used in the design process. Here arises the need for simulation-based optimization (SO), as opposed to traditional optimization with analytical models, generally with the goal of optimizing the simulation input values to achieve the desired output.
There are many methods of simulation-based optimization [1, 2] applied across disciplines. A key issue in these approaches is that the execution of such simulations often requires a large amount of computational power and/or processing time. When the simulation itself is too computationally expensive, surrogate models are needed to minimize the number of queries and, therefore, the total computation time. The surrogate model is designed to imitate the input-output behavior of the simulation. Input-output data is required from the simulation to create this surrogate model, so efficiency in creating that model is a key concern [3]. The methodology presented herein uses neural networks as surrogate models and provides a novel sampling strategy to reduce the number of simulation runs - a crucial concern when each run of the simulation takes several minutes, and optimization might require hundreds, if not thousands [4], of simulation runs.
The use case simulation for this methodology is an Approach and Landing (A/L) simulation for hypersonic vehicles (HV) combined with the reference trajectory calculation [5]. The HV simulation includes high-fidelity models for flex, slosh, engine thrust, aerodynamics, and full flight software in addition to the trajectory planning algorithm which propagates the vehicle down many possible trajectories. The simulation takes 2-3 minutes to run and the relationship between inputs and outputs is highly nonlinear due to the series of models listed. The system inputs are the 13 trajectory design parameters and the three outputs are the most important performance metrics of the vehicle during A/L.
Hence, finding the set of trajectory design parameters that provides the optimal landing trajectory for the HV is typically done by manually tuning the various input parameters, running the full simulation, and evaluating the performance outputs. A Monte Carlo approach could also be implemented by randomly selecting hundreds or thousands of input values until a sufficiently good reference trajectory is found. Finding a sufficiently good A/L reference trajectory using either of these approaches is a very time-consuming iterative process that requires extensive domain knowledge to either understand the complicated input-output mapping or to accurately evaluate the performance outcomes.
The algorithm outlined in this paper offers a novel simulation-based optimization approach that eliminates the need for hand-tuning input parameters and significantly reduces the overall computation time. The main contributions of this methodology are:
* Novel algorithm using neural network gradients to intelligently select simulation input values that improves the desired simulation outcome by 74%;
* Computationally efficient methodology for optimizing general black-box simulations with desired outputs, shown to be six times faster than a Monte Carlo approach;
* New automated optimal trajectory planning tool for hypersonic vehicles that takes hours as opposed to days.
## II Related Work
This paper aims to improve upon two categories of research: simulation-based optimization via surrogate models and optimal trajectory planning.
### _Simulation-based Optimization via Surrogate Models_
There are many simulation-based optimization algorithms, and their suitability relies heavily on the specific application. Whether the system is continuous or discrete, cheap or expensive to evaluate, and deterministic or stochastic all must be considered [2]. This makes various SO methods very challenging to compare. In literature, these methods are often divided into four categories: statistical selection, surrogate models (metamodels), stochastic gradient estimation, and global search [1]. Surrogate models reduce the computational burden of optimizing the simulation outputs by creating a simpler, less-expensive model of the simulation to use instead of the simulation itself.
For the purpose of this paper, only SO via surrogate models will be analyzed because the chosen application is continuous and very expensive to run, making surrogate modeling the ideal type of algorithm to use [1]. There have been many kinds of surrogate models applied across various fields. Most notable include algebraic models [3], kriging [6], polynomial response surfaces [7], and radial basis functions [8]. However, with the proliferation of machine learning in recent decades, surrogate models have more recently been created using support-vector machines [9] and neural networks [10].
Once the surrogate model has been fitted to the data, it is integrated with an optimizer to find the optimal set of inputs that achieve the desired output to the simulation. The applications where neural networks have been applied as surrogate models are very diverse, such as fluids systems [11, 12, 13] and airfoil design [14, 15]. Instead of taking advantage of the gradient information available from the neural network, previous methods generally have relied on gradient-free optimization, such as genetic algorithms [12, 13, 14], particle swarm optimization [15], and the grey wolf optimizer [11].
Ref. [16] uses the gradients of a neural network in combination with the IPOPT software library to optimize a topology. However, the simulation was simple enough that only a single-layer feedforward neural network was necessary, which meant the derivatives had to be calculated analytically and fed into the optimizer. Ref. [4] uses a similar method of gradient-based optimization using multi-layer neural networks combined with the SNOPT software library to optimize airfoils. Both methods rely on creating a highly accurate surrogate model that only needs to be trained once to be used by the optimizer. Ref. [16] does this through a complicated training procedure using the Sobolev error and [4] does this by collecting tens of thousands of data points. In the method proposed in this paper, the algorithm collects more data from the actual simulation iteratively in areas of interest suggested by the surrogate model to reduce the total number of queries and save on computation cost.
### _Optimal Trajectory Planning_
Reference trajectory optimization is a very challenging process that requires formulating a nonlinear optimal control problem with multiple constraints and solving it, directly or indirectly [17]. In general, reference trajectories are calculated offline and stored in the HV's computer, although some algorithms work to reduce computation time so that trajectories can be calculated during flight [18]. For comparison, only offline algorithms will be considered in this review.
Ref. [19] proposes a methodology for HV trajectory optimization by combining particle swarm optimization and non-intrusive polynomial chaos. This method works to minimize the flight time of the HV over the course of the trajectory while being robust to physical uncertainties. While a time-based objective function works for the re-entry phase of hypersonic flight, it does not allow for optimizing specific properties at phase completion. Furthermore, this is a direct numerical method that requires access to the dynamical functions, which is not the case in the application of the algorithm proposed in this paper. The same is true for the generalized polynomial chaos algorithm proposed by [20] to optimize landing trajectories of airplanes and the mapped Chebyshev pseudospectral method presented by [21].
Currently, there are no methods of optimal trajectory planning that use surrogate models for simulation-based optimization.
## III Preliminaries
### _Neural Networks as Surrogate Models_
Neural networks (NNs) are very good at modeling complex relationships between inputs and outputs. In fact, multi-layer feedforward networks can approximate any measurable function to any degree of accuracy, provided enough training data [22]. Multilayer NNs can find intricate structures in high-dimensional data and learn hierarchical feature representations with multiple levels of abstraction [23]. Furthermore, neural networks can simultaneously estimate the derivatives of an approximated function, even if the function is not classically differentiable [22]. This makes them a perfect choice for surrogate models. Most commonly, the derivatives of the neural network, referred to as gradients, are used to backpropagate error through the neural network to train them on an initial dataset. These gradients can also be used for sensitivity analysis [16] or for gradient-based optimization, as in this proposed method.
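A minimal PyTorch sketch of such a surrogate follows; the width and depth are illustrative placeholders rather than the tuned architecture, while the input and output dimensions match the 13 design parameters and three performance metrics introduced later.

```python
import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    """Feedforward surrogate f_hat: R^m -> R^n for the black-box simulation."""

    def __init__(self, m=13, n=3, hidden=128, depth=3):
        super().__init__()
        layers, width = [], m
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, n))   # linear head for regression outputs
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```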
### _Objective Function with Constraints_
For all simulations, there must be \(m\)-dimensional inputs \((x\in\mathbb{R}^{m})\) into the simulation \(f(x)\) and \(n\)-dimensional outputs \((y\in\mathbb{R}^{n})\). Furthermore, there are \(m\)-dimensional minimum and maximum bounds \((x_{\min},x_{\max}\in\mathbb{R}^{m})\) for each input as well as an \(n\)-dimensional desired set of outputs \((y_{\text{target}}\in\mathbb{R}^{n})\). Using an initial training dataset, a neural network is trained to be a surrogate model \(\hat{f}(x)\) for the real simulation. This surrogate model is able to predict \(n\)-dimensional outputs based on real \(m\)-dimensional inputs
\[\hat{y}=\hat{f}(x) \tag{1}\]
where \(\hat{y}\) is the vector of predicted outputs from the surrogate model. This surrogate model is then used to minimize the loss between the desired outputs and the predicted outputs from the surrogate model. This loss is evaluated by the objective
\[g\left(\hat{y},y_{\text{target}}\right). \tag{2}\]
Note that the parameters being optimized are not the predicted output from the surrogate model, but the inputs to the surrogate model, making the objective a function of \(x\):
\[g\left(\hat{f}(x),y_{\text{target}}\right) \tag{3}\]
Further note that this is a constrained optimization problem because the inputs must be bounded within the initial training data range or else the surrogate model will not be able to accurately predict the input's respective output. The inputs are bounded in the physically feasible range by the inequality conditions:
\[c_{1}(x):x\geq x_{\min} \tag{4}\]
\[c_{2}(x):x\leq x_{\max} \tag{5}\]
Because the objective function is a function of the inputs, its gradients can be used to optimize the inputs to minimize the objective loss. The gradient of the objective function with respect to the inputs is
\[\frac{\partial}{\partial x}\left[g(\hat{f}(x))\right]=g^{\prime}(\hat{f}(x)) \hat{f}^{\prime}(x). \tag{6}\]
This derivation shows the need for the surrogate model and its gradients for optimizing the inputs to minimize the objective function.
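In PyTorch this gradient comes directly from automatic differentiation: freezing the network weights and marking the inputs as differentiable yields \(\partial g/\partial x\) without any hand-derived expressions. A minimal sketch, reusing the hypothetical `SurrogateNet` defined above with an illustrative target:

```python
import torch

surrogate = SurrogateNet()                   # hypothetical module from the sketch above
for p in surrogate.parameters():
    p.requires_grad_(False)                  # freeze f_hat: only x is a free variable

x = torch.randn(1, 13, requires_grad=True)   # candidate simulation inputs
y_target = torch.zeros(1, 3)                 # illustrative target outputs

g = (y_target - surrogate(x)).abs().mean()   # one concrete choice of g(f_hat(x), y_target)
g.backward()                                 # x.grad now holds the Eq. (6) gradient
```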
## IV Approach
This section details the general methodology for using surrogate neural networks to optimize a black-box simulation, as shown in Algorithm 1. Subsections B-H detail how to apply the methodology to a specific trajectory planning simulation.
```
1:Query an initial dataset \(\mathcal{D}=\{(x,f(x))\,:\,x\in\mathbb{R}^{m}\}\) from a quasi-random sample. [IV-B]
2:Train a neural network surrogate model \(\hat{f}(x)\) on \(\mathcal{D}\) to approximate \(f(x)\). [IV-C]
3:Initialize set of quasi-randomly distributed input parameters \(x_{\text{init}}\) across initial dataset. [IV-F]
4:Use stochastic gradient descent to find the set \(x_{\text{local}}\) that locally minimizes \(g(\hat{f}(x),y_{\text{target}})\). [IV-D]
5:Select the input \(x_{\text{best}}\) that produces minimum loss according to the objective function. [IV-F]
6:Intelligently query \(f(x)\) with the selected \(x_{\text{best}}\). [IV-G]
7:if stopping criteria \(=True\) then
8: Take \(x_{\text{best}}\) and \(y_{\text{best}}=f(x_{\text{best}})\).
9:else
10: Add \(x_{\text{best}}\) and \(y_{\text{best}}\) to the training dataset \(\mathcal{D}\).
11: Iterate from Step 2. [IV-H]
```
**Algorithm 1**
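A pseudocode-level Python rendering of Algorithm 1 may help fix the flow of data. Every name here (`sobol_sample`, `run_simulation`, `train_surrogate`, `optimize_inputs`, `loss`, `stopping_criteria`, plus the free variables `x_min`, `x_max`, `y_target`, `max_iters`) is a hypothetical stand-in for the steps sketched in the subsections below.

```python
X = sobol_sample(n=400, bounds=(x_min, x_max))            # Step 1: initial dataset D
Y = [run_simulation(x) for x in X]

for it in range(max_iters):
    surrogate = train_surrogate(X, Y)                     # Step 2: fit f_hat on D
    x_init = sobol_sample(n=100, bounds=(x_min, x_max))   # Step 3: spread seed points
    x_local = optimize_inputs(surrogate, x_init, y_target)   # Step 4: SGD on g
    x_best = min(x_local, key=lambda x: loss(surrogate, x, y_target))  # Step 5
    y_best = run_simulation(x_best)                       # Step 6: one intelligent query
    if stopping_criteria(y_best, it):                     # Step 7
        break
    X.append(x_best); Y.append(y_best)                    # Step 10: grow D and iterate
```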
### _Reference Trajectory Planning Algorithm and Simulation_
Designing the approach and landing trajectory for a hypersonic vehicle is very challenging due to the vehicle's lack of thrusters and large complex physical uncertainties. The process is so computationally expensive that the nominal trajectory is calculated offline and then stored in an onboard computer for the guidance and control system to follow using speed brakes and unboosted maneuvers.
Our simulation uses the Autolanding I-load Program (ALIP) [5] to calculate the A/L reference trajectory for the HV. This program has legacy from Orbital Science's X-34 and NASA's Shuttle Program [24]. ALIP relies on initializing the parameters defining the geometric segments of the trajectory with an accurate prediction and then optimizes those parameters to achieve the desired dynamic pressure on the vehicle at touchdown. Various physical constraints are applied to reduce the trajectory design problem to a two-point boundary value problem [5].
There are 13 input parameters to ALIP that define the initial trajectory that all must be optimized to result in a successful landing of the HV. These parameters include initial dynamic pressure, landing velocity, and flight path angle of constant-glide segments, among others. Fig. 1 shows an example reference trajectory broken into its geometric segments.
Once a reference trajectory has been defined using ALIP, that trajectory is tested through a digital twin simulation. This simulation outputs certain metrics which quantify the success of the HV touchdown. Specifically, there is an ideal rate at which the vehicle loses altitude (sink rate), distance from the start of the runway where landing happens (downrange position), and horizontal velocity of the HV at touchdown. The digital twin simulation is executed using physical parameters from the hypersonic vehicle.
All input parameters to ALIP and all output parameters from the landing simulation have been standardized and selectively scaled to protect any proprietary information.
### _Initial Data Collection_
Each input to the simulation is bounded by physical constraints inside the trajectory planning simulation. These bounds can be adjusted based on historical knowledge of the simulation to reduce the size of the input parameter space. The 13-dimensional input parameters are bounded around a known set of nominal inputs that produce an acceptable landing of the hypersonic vehicle. To create sufficient coverage of the high-dimensional input parameter space, the initial input values were sampled using a Sobol sequence. Sobol sequences are low-discrepancy, quasi-random sequences that provide a more uniform distribution than other quasi-random sampling methods such as Latin hypercube [26].

Fig. 1: Defined geometric segments of reference trajectory for approach and landing of hypersonic vehicle.
Enough data samples from the simulation must be collected to sufficiently train the neural network. Too few samples will lead to the surrogate model being a poor representation of the actual simulation and too many samples lead to an unnecessarily high computation cost and time.
Although the input parameter space is large, the number of samples can be reduced because the surrogate model only needs to understand the gradients of the predicted outputs based on the inputs. An initial sample size of 400 points was chosen to collect input-output mapping for training the surrogate neural network. Increasing the size of the training data past 400 points leads to diminishing improvements to the surrogate's model accuracy, as shown in Fig. 2.
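With SciPy's quasi-Monte Carlo module, the initial sampling step might look like the sketch below. The bounds are placeholders for the proprietary physical limits, and we draw \(2^{9}=512\) points rather than the paper's 400 only because Sobol balance properties prefer powers of two.

```python
import numpy as np
from scipy.stats import qmc

x_min = np.full(13, -1.0)            # placeholder bounds around the nominal inputs
x_max = np.full(13,  1.0)            # (the real physical bounds are proprietary)

sampler = qmc.Sobol(d=13, scramble=True, seed=0)
unit = sampler.random_base2(m=9)     # 2**9 = 512 points in [0, 1)^13
X0 = qmc.scale(unit, x_min, x_max)   # rescale to the physical input box
```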
### _Surrogate Model Training_
Once the initial sampling is done, the data must be standardized so that the neural network trained on it is the best possible surrogate model of the simulation. The neural network is trained to minimize the mean absolute error (MAE) between the outputs predicted by the network and the outputs from the training data using the Adam optimizer in PyTorch. The weights of the network are tuned to reduce the MAE using a variant of stochastic gradient descent with backpropagated gradients. The model can be further improved by tuning the hyperparameters defining the neural network's number of layers, number of nodes per layer, batch size, and learning rate. The hyperparameters were optimized using the Optuna software library [27] integrated with PyTorch.
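A compact sketch of this training step, assuming the hypothetical `SurrogateNet` above and already-standardized arrays `X` and `Y`; the epoch count, learning rate and batch size are placeholders for the Optuna-tuned values.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_surrogate(X, Y, epochs=500, lr=1e-3, batch=32):
    """Fit a fresh surrogate by minimizing the MAE (L1 loss) with Adam."""
    X = torch.as_tensor(X, dtype=torch.float32)
    Y = torch.as_tensor(Y, dtype=torch.float32)
    model = SurrogateNet(m=X.shape[1], n=Y.shape[1])   # hypothetical module above
    loader = DataLoader(TensorDataset(X, Y), batch_size=batch, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                              # mean absolute error
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```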
Ideally, the neural network would be able to perfectly predict the three outputs of the simulation based on the same set of inputs. Due to the highly nonlinear nature of the simulation and the limited size of the training data, this is not possible. A comparison of the predicted versus actual outputs is shown in Fig. 3 for all three outputs of the simulation.
Fig. 4 shows the test loss for each output as it converges over epochs. The scale of the test loss varies based on each output's feasible range in its physical units. For example, the possible range for sink rate only varies between 0 and 5 ft/s, so the surrogate model predicting to an accuracy under 0.2 ft/s is quite good. Similarly, when the feasible range of physical values for downrange position is 0 to 500 feet, predicting within 10 feet is good.
Fig. 2: Normalized validation losses from surrogate neural network depending on size of training data set. Diminishing returns on minimizing validation loss after 400 data points.

Fig. 3: Predicted outputs from surrogate neural network versus actual outputs from simulation. Ideally, these outputs would be perfectly correlated, but the neural network is able to predict all three outputs mostly accurately, with a few outliers. Sink rate is the most challenging output to predict.

Fig. 4: Validation losses for each simulation output over training epochs. All three outputs converge to low loss relative to their scale.

### _Custom Objective Function_

The objective of this constrained optimization problem is to minimize the distance between desired and predicted outputs. For this application, the objective function to be minimized evaluates the mean absolute error between the desired outputs and the predicted outputs from the surrogate model. MAE calculates the average of the absolute distance between all the predicted and desired outputs. MAE is more robust to outliers and ensures stable gradients across the different terms in the objective function. This makes the objective function
\[g(\hat{y})=\left|y_{\text{target}}-\hat{y}\right|. \tag{7}\]
However, the parameters being optimized are not the three predicted outputs from the surrogate model \(\hat{y}\), but the 13 inputs to the surrogate model \(x\), making the objective function
\[g(x)=\left|y_{\text{target}}-\hat{f}(x)\right|. \tag{8}\]
Furthermore, each of the desired outputs can be weighted based on domain knowledge of the simulation. In this application, sink rate is more important to a successful landing than either horizontal velocity or downrange position. Therefore, the MAE of predicted versus desired sink rate is scaled by a factor of two. Additionally, the desired downrange position is too high to be realistically achievable by any combination of input values, so its MAE is reduced by a factor of 10 to ensure it does not overpower the other two terms.
To constrain the inputs to the physically feasible parameter space, the inequality constraints are added directly to the objective function as a soft penalty. When either the upper or lower bounds are violated, these penalties are triggered as
\[g(x)=\left|y_{\text{target}}-\hat{f}(x)\right|+\alpha\max\left(0,x_{\min}-x\right)+\alpha\max\left(0,x-x_{\max}\right) \tag{9}\]
where \(\alpha=1\), but can be tuned depending on the scale of the other components of the objective function. These penalties act to steer the inputs being optimized to stay in the constrained input parameter space through gradient descent. The gradients of the objective function with respect to the inputs are used to optimize the inputs to minimize the objective. The gradient of the objective function with respect to the inputs when the input bounds are violated is
\[\frac{\partial g}{\partial x}=\hat{f}^{\prime}(x)\left(\frac{\hat{f}(x)-y_{\text{target}}}{\left|y_{\text{target}}-\hat{f}(x)\right|}\right)\pm\alpha \tag{10}\]
As defined in IV-A, the outputs are sink rate, downrange position, and horizontal velocity of the HV at touchdown. The target values of the outputs are determined by domain knowledge and vehicle requirements. It is important to note that the number of desired outputs as well as their values could be changed, and this novel algorithm would still work. For example, if the requirement for sink rate were lowered to 1.5 ft/s, the objective function could be adjusted to produce a new optimal reference trajectory.
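A sketch of Eq. (9) as a batched PyTorch function; the output ordering and the exact weight values are assumptions based on the weighting described above, and `x_min`, `x_max` are taken to be tensors of shape `(13,)`.

```python
import torch

# Assumed output order: [sink rate, downrange position, horizontal velocity].
w_out = torch.tensor([2.0, 0.1, 1.0])    # sink-rate term x2, downrange term /10

def objective(x, surrogate, y_target, x_min, x_max, alpha=1.0):
    """Eq. (9): weighted MAE plus soft box-constraint penalties on the inputs."""
    mae = (w_out * (y_target - surrogate(x)).abs()).mean(dim=-1)
    penalty = (torch.clamp(x_min - x, min=0)             # max(0, x_min - x)
               + torch.clamp(x - x_max, min=0)).sum(dim=-1)
    return mae + alpha * penalty                         # one loss per input vector
```

Returning one loss value per row makes the same function usable both for evaluating single candidates and for the batched input optimization of Sec. IV-F.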
### _Sensitivity Analysis and Loss Landscape Visualization_
It is impossible to easily visualize the gradients of the objective function with respect to all 13 inputs. While it would be possible to do a dimensional reduction to two dimensions, because these inputs have real physical meanings the two most important inputs can simply be identified instead. A sensitivity analysis was conducted to identify the two most important inputs, meaning the two inputs that most affect the output of the simulation. This was done by excluding one input at a time from the NN training and evaluating which input increased the loss the most, therefore most affecting the neural network's ability to accurately predict the outputs. In Fig. 5, the two most important inputs for predicting the outputs are shown to be the landing velocity of the HV and the circular flare radius.
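This leave-one-input-out procedure can be sketched as below, reusing the hypothetical `train_surrogate` above; `X`, `Y`, `X_val`, `Y_val` are assumed standardized train/validation numpy arrays.

```python
import torch

sensitivity = {}
for i in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != i]
    model_i = train_surrogate(X[:, keep], Y)      # retrain without input i
    with torch.no_grad():
        pred = model_i(torch.as_tensor(X_val[:, keep], dtype=torch.float32))
    err = (pred - torch.as_tensor(Y_val, dtype=torch.float32)).abs().mean()
    sensitivity[i] = err.item()                   # larger error => more important input
```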
With the two most important inputs identified, the way varying the inputs affects the objective function can be visualized. While not necessary for optimization, these visualizations help to better understand and validate this methodology. Linearly spaced samples across the input range for landing velocity and flare radius were selected while all other inputs were frozen to their nominal values. Each combination of landing velocity and flare radius was fed through the trained surrogate neural network, and the corresponding output was evaluated by the objective function to calculate loss. This loss landscape in Fig. 6 shows how the loss from the objective function varies across the two most important inputs, as predicted by the surrogate model.
It is evident that the surrogate neural network has identified regions in the input parameter space as more promising for minimizing the objective function than others. Specifically, a trough in the middle where both variables increase proportionally and a section of low flare radius and high landing velocity. To validate the surrogate model, the true loss landscape from actually querying the simulation is shown in Fig. 7.
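The grid evaluation behind such a landscape might look like the following sketch; the column indices and the nominal input vector `x_nominal` are assumptions, and `objective`, `surrogate` and the bounds reuse the earlier sketches (with the bounds as torch tensors).

```python
import torch

I_VEL, I_RAD = 0, 1            # hypothetical column indices of the two key inputs
v = torch.linspace(float(x_min[I_VEL]), float(x_max[I_VEL]), 100)
r = torch.linspace(float(x_min[I_RAD]), float(x_max[I_RAD]), 100)
V, R = torch.meshgrid(v, r, indexing="ij")

grid = x_nominal.repeat(V.numel(), 1)   # freeze the other 11 inputs at nominal
grid[:, I_VEL] = V.reshape(-1)
grid[:, I_RAD] = R.reshape(-1)

with torch.no_grad():
    L = objective(grid, surrogate, y_target, x_min, x_max).reshape(V.shape)
# L is the surrogate's predicted loss landscape over (velocity, flare radius)
```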
### _Input Optimization and Intelligent Querying_
With the initial surrogate model trained, the 13 inputs can now be optimized to minimize the objective function using gradient-based descent. Similar to how the gradients of the neural network are used to optimize the weights during the initial training, the gradients are used to optimize the inputs to the simulation to minimize the objective function (9).
One of the issues with gradient-based optimization is the convergence to local minima or maxima. To mitigate this concern, many input vectors are optimized according to the objective function. These input vectors are quasi-randomly distributed across the input parameter space using a Sobol sequence. Even if some of the inputs become stuck at local minima or plateaus, at least one of the vectors will find the global minimum.

Fig. 5: Sensitivity Analysis Results. Landing velocity and flare radius inputs have the most significant impact on the accuracy of the surrogate model.
Through trial-and-error, the best optimizer was found to be stochastic gradient descent with added momentum. Because the input vectors are optimized in batches, this process is extremely rapid compared to the time it takes to query the simulation: seconds as opposed to minutes. This means it is computationally cheap to initialize hundreds of inputs and optimize over many steps to ensure convergence. In the methodology proposed in this paper, 100 input vectors are optimized over 500 steps.
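This batched descent reduces to a few lines of PyTorch; the learning rate and momentum values below are assumptions, and `X_init`, `surrogate`, `objective` and the bounds reuse the earlier sketches.

```python
import torch

x = torch.tensor(X_init, dtype=torch.float32, requires_grad=True)  # (100, 13) Sobol seeds
opt = torch.optim.SGD([x], lr=1e-2, momentum=0.9)   # lr and momentum are assumptions

for _ in range(500):                                # 500 descent steps, as in the text
    opt.zero_grad()
    losses = objective(x, surrogate, y_target, x_min, x_max)  # per-vector loss, Eq. (9)
    losses.sum().backward()     # the vectors are independent, so summing is equivalent
    opt.step()

x_best = x[losses.argmin()].detach()   # Step 5: the one input sent to the simulation
```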
This process is visualized in Fig. 8 using the same loss landscape from the previous section. Again, the inputs are limited to the two most important parameters, landing velocity and flare radius, so the optimization corresponds to the loss landscape background. Only 10 of the 100 input vectors are shown, their initial and final points shown in red and green, respectively. The input vector that finds the global minimum has its path in white instead of black.
The input vectors clearly travel down the gradient from places with higher estimated loss to places with lower estimated loss over optimization. It is also shown that the input vectors are successfully constrained within the bounds.
### _Intelligent Querying_
The input vector that found the global minimum of the loss landscape is now sent to the actual simulation. While there are many input vectors that show promising results for minimizing the objective function according to the surrogate model, only one is sent to the trajectory planning simulation. The input used to query the simulation is selected intelligently instead of being chosen randomly or by an engineer's intuition. This reduces the overall computational cost of optimizing the simulation and finding an exceptional reference trajectory for the HV.
### _Iteration_
Until the stopping criteria are met, this algorithm will iteratively train a new neural network, optimize the objective function, and intelligently query the simulation. The stopping criteria for this application check for three conditions every iteration, sketched in code after the list:
1. If the goal criteria for the outputs have been met
2. If the algorithm converged on a solution outside of the goal criteria
3. If the maximum number of iterations has been met
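One hypothetical rendering of these three checks; `goal_met`, the tolerance and the iteration budget are illustrative, not the paper's actual thresholds.

```python
def stopping_criteria(y_best, loss_history, it, tol=1e-3, max_iters=50):
    """Hypothetical rendering of the three conditions above."""
    if goal_met(y_best):                               # 1. output goals satisfied
        return True
    if len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < tol:
        return True                                    # 2. converged off-target
    return it + 1 >= max_iters                         # 3. iteration budget spent
```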
Each iteration of the algorithm allows the surrogate model to improve its representation of the actual system and increase its fidelity in areas that appear promising for minimizing loss. Without initializing and training a new neural network each time, the algorithm would always search around the same global minimum. Instead, the random weight initialization for each iteration's surrogate model means each neural network will be slightly different.
Furthermore, having only one surrogate model does not capture any 'model uncertainty'. Creating a new model every time is more similar to ensemble learning and is shown to be more robust to outliers [28]. This encourages more exploration during optimization instead of continually exploiting the same promising regions.
## V Numerical Results
### _Optimal Reference Trajectory_
This section demonstrates the success of this algorithm when applied to optimizing the reference trajectory for the hypersonic vehicle. This algorithm found a new reference trajectory for approach and landing that outperformed the existing nominal solution. The new reference trajectory produced a 74% decrease in loss from the objective function when compared to the nominal solution as shown in Table I.
Fig. 6: Loss landscape as predicted by surrogate neural network.

Fig. 7: Loss landscape from real outputs of the simulation. The loss scale and gradients match those of the trained surrogate model.

Fig. 8: Optimization of inputs across loss landscape. The input vectors are optimized from places with higher loss to areas that, according to the surrogate model, will reduce the loss from the objective function.

The algorithm is very successful at finding values for the 13 simulation inputs that result in values for sink rate and horizontal velocity almost exactly at their target values. However, downrange position is harder to achieve due to internal simulation constraints. The algorithm is essentially trying to maximize downrange position without compromising the results for sink rate and horizontal velocity.
This new optimal reference trajectory was obtained by training the surrogate NN on an initial dataset of 400 points quasi-randomly selected across the input parameter space. The set of input parameter values that produced this optimal reference trajectory was found after only 10 queries to the actual simulation. The objective loss of the true outputs from the simulation is shown in Fig. 9 over those 10 iterations, until the end condition of reaching the desired values is satisfied.
### _Monte Carlo Comparison_
Although this methodology has proved its ability to optimize the trajectory planning simulation, the true motivation of this algorithm is in its time savings. It is able to intelligently and efficiently search the input parameter space to find an optimal solution faster than tuning by hand or through a Monte Carlo random search approach.
This can further be proved by tracking the lowest objective loss found so far by the quasi-random sampling and this intelligent algorithm. Over 50 iterations, this approach consistently finds better solutions faster with less variation than quasi-random sampling. Fig. 10 shows the loss from the best solution found so far by this algorithm as compared to the quasi-random search, averaged over five trials.
Evidently, this algorithm finds inputs that minimize the objective function much faster than a quasi-random search can. The quasi-random search takes 30 simulation queries to reduce the objective loss from 0.16 to 0.13, while the intelligent search reduces the same loss in the first five queries, making this method six times faster. Furthermore, it consistently reaches a lower objective loss, meaning it finds a better reference trajectory. Similarly, Fig. 11 shows how the algorithm will converge to the desired output values faster than the quasi-random search.
### _Second Hypersonic Vehicle Simulation_
All the results so far show the optimization of an A/L reference trajectory based on a simulation of a proprietary hypersonic vehicle. If the underlying physical parameters of the vehicle change, a new reference trajectory will need to be calculated to meet the new set of requirements, for example if additional wind tunnel testing results in an update to the vehicle aerodynamic model. This methodology can quickly be applied to calculate a new optimal reference trajectory, negating the need for an engineer to spend copious time tuning the input variables.
To test this, the existing HV's aerodynamic properties were perturbed randomly within a range of \(3\sigma\). Specifically, the drag was increased by 11%, the lift was reduced by 1% and the pitch moment coefficient was increased by 13%. Now, the set of inputs found in Subsection V-A results in a very low sink rate and therefore a poor landing of the HV. When applied, this methodology finds a better performing reference trajectory that decreases loss by 100% as shown in Table II.
Fig. 9: Objective loss over algorithm iterations until successful simulation outputs are achieved.

Fig. 10: Average lowest loss from intelligent queries compared to quasi-random queries over 50 algorithm iterations with 0.5\(\sigma\) shaded. The intelligent queries result in lower losses quicker than quasi-random queries.

Fig. 11: Best outputs found by intelligent queries compared to Monte Carlo quasi-random search. The intelligent queries converge much faster to the ideal outputs than the quasi-random search.
## VI Conclusions
This paper presents a new simulation-based optimization algorithm that uses surrogate neural networks and their ability to produce gradients to drastically reduce computation time. When applied to a highly nonlinear A/L reference trajectory planning simulation for hypersonic vehicles, this algorithm rapidly optimized the 13 input parameters to produce the best possible result for landing the hypersonic vehicle. The novelty of this algorithm is that it uses neural network surrogate models to intelligently select queries to the simulation. By doing so, the total number of simulation runs can be reduced while simultaneously finding the optimal reference trajectory.
This generalized methodology has been shown to work for different HV simulations. However, future work includes testing this methodology in an entirely different field. This methodology should be able to accommodate any black-box simulation with any number of inputs or outputs. Furthermore, the total number of queries could be reduced by eliminating the initial quasi-random training dataset so that all queries are intelligently selected by the surrogate model. It would also be interesting to explicitly incorporate the epistemic uncertainty of the neural network to better balance and understand exploration versus exploitation.
Compared to quasi-random search, this algorithm works six times faster to find the optimal output of the simulation by intelligently querying areas that minimize loss as predicted by the surrogate model. Furthermore, no current optimal trajectory planning algorithms make use of surrogate models, their efficiency, and their ability to produce gradients. This methodology will enable much more rapid calculation of optimal approach and landing reference trajectories for hypersonic vehicles.
|
2304.13871 | Typical and atypical solutions in non-convex neural networks with
discrete and continuous weights | We study the binary and continuous negative-margin perceptrons as simple
non-convex neural network models learning random rules and associations. We
analyze the geometry of the landscape of solutions in both models and find
important similarities and differences. Both models exhibit subdominant
minimizers which are extremely flat and wide. These minimizers coexist with a
background of dominant solutions which are composed of an exponential number of
algorithmically inaccessible small clusters for the binary case (the frozen
1-RSB phase) or a hierarchical structure of clusters of different sizes for the
spherical case (the full RSB phase). In both cases, when a certain threshold in
constraint density is crossed, the local entropy of the wide flat minima
becomes non-monotonic, indicating a break-up of the space of robust solutions
into disconnected components. This has a strong impact on the behavior of
algorithms in binary models, which cannot access the remaining isolated
clusters. For the spherical case the behaviour is different, since even beyond
the disappearance of the wide flat minima the remaining solutions are shown to
always be surrounded by a large number of other solutions at any distance, up
to capacity. Indeed, we exhibit numerical evidence that algorithms seem to find
solutions up to the SAT/UNSAT transition, which we compute here using a 1RSB
approximation. For both models, the generalization performance as a learning
device is shown to be greatly improved by the existence of wide flat minimizers
even when trained in the highly underconstrained regime of very negative
margins. | Carlo Baldassi, Enrico M. Malatesta, Gabriele Perugini, Riccardo Zecchina | 2023-04-26T23:34:40Z | http://arxiv.org/abs/2304.13871v2 | # Typical and atypical solutions in non-convex neural networks with discrete and continuous weights
###### Abstract
We study the binary and continuous negative-margin perceptrons as simple non-convex neural network models learning random rules and associations. We analyze the geometry of the landscape of solutions in both models and find important similarities and differences. Both models exhibit subdominant minimizers which are extremely flat and wide. These minimizers coexist with a background of dominant solutions which are composed of an exponential number of algorithmically inaccessible small clusters for the binary case (the frozen 1-RSB phase) or a hierarchical structure of clusters of different sizes for the spherical case (the full RSB phase).
In both cases, when a certain threshold in constraint density is crossed, the local entropy of the wide flat minima becomes non-monotonic, indicating a break-up of the space of robust solutions into disconnected components. This has a strong impact on the behavior of algorithms in binary models, which cannot access the remaining isolated clusters. For the spherical case the behaviour is different, since even beyond the disappearance of the wide flat minima the remaining solutions are shown to always be surrounded by a large number of other solutions at any distance, up to capacity. Indeed, we exhibit numerical evidence that algorithms seem to find solutions up to the SAT/UNSAT transition, which we compute here using a 1RSB approximation. For both models, the generalization performance as a learning device is shown to be greatly improved by the existence of wide flat minimizers even when trained in the highly underconstrained regime of very negative margins.
## I Introduction
One of the most important and open problems in machine learning is to characterize at the theoretical level the typical properties of the loss landscape of neural networks. Given the impressive results that machine learning has achieved, e.g. in computer vision, image and speech recognition and translation, it is of primary importance to understand which key ingredients of those high-dimensional loss landscapes allow very simple algorithms to work so well.
In the field of the physics of disordered systems [1] there have been, in the last few decades, many efforts to develop analytical and numerical techniques suitable for the study of nonconvex high-dimensional landscapes. Despite the many differences in the types of landscapes studied [2; 3], the main questions addressed by physicists are very similar to those of interest to the machine learning community. For example: can we predict, given an initial condition, in which part of the landscape a given algorithm will converge? What is the role played by local minima and saddles?
In the last years several empirical results have emerged on the general structure of the landscape. First, it has been found that there exists a wide and flat region in the bottom of the landscape: empirical analysis of the Hessian of configurations obtained by using simple variants of gradient descent algorithms [4; 5] showed that such regions are very attractive for simple learning dynamics. In [6] it was also shown that the most common algorithm used in machine learning, Stochastic Gradient Descent (SGD), is intrinsically biased by its noisy component towards flatter solutions. Low-dimensional representations of the landscape have also shown [7] that several heuristic techniques used in machine learning are effective because they tend to smooth the landscape, thereby increasing the convergence abilities of gradient descent algorithms. Other lines of research [8; 9; 10] studied the connectivity properties of solutions, showing that minimizers of the loss function obtained with different initializations actually lie in the same basin, i.e. can be connected by a zero training-error path. In [11] the authors tested a large number of complexity measures on several and diverse models, concluding that the ones based on flatness are more predictive of generalization properties.
In the statistical mechanics literature the role of wide and flat minima in the neural network landscape has emerged recently from the study of simple one-layer [12; 13] and two-layer models [14; 15]. Those works, which we summarize for convenience in the next section, have been limited so far to non-convex neural networks with binary weights; the connections between the geometrical properties of the landscape and the behavior of algorithms have been investigated in several works. In this paper we bridge this gap, trying to shed new light on the geometrical organization of solutions in simple non-convex continuous neural network models.
Our main findings show that both discrete and continuous models are characterized by the existence of wide-flat minima which undergo a shattering phase transition below (but close to) capacity. For the continuous model the disappearance of wide-flat minima does not imply the onset of algorithmic hardness, as it happens for the discrete case. We argue that this is due to the underlying full-RSB background of ground states, which is different from the disconnected structure of ground states of the discrete case. We show numerically that the algorithms typically used for learning tend to end up in the wide-flat regions when they exist and that these regions show remarkable generalization capabilities.
For more complex neural network architectures, such as deep multi-layer networks, we expect the differences between discrete and continuous models to become less evident. However, more analytical studies will be needed to address this point.
The paper is organized as follows: in Section II we summarize the main phenomenology that has emerged in the context of binary weights models; in Section III we introduce the non-convex models studied in this paper, namely the binary and spherical negative-margin perceptrons and we summarize their known properties. In Section IV we present the 1-step replica symmetry breaking (1RSB) computation of the SAT/UNSAT transition of the negative-margin perceptron. In Section V we employ the Franz-Parisi technique to study the local landscape of a configuration extracted with a given probability measure from the set of solutions. In particular we study the flatness of the typical minima of a generic loss function and we develop a method to extract configurations maximizing the flatness. We then explore how clusters of flat solutions evolve as a function of the training set size, revealing a phase transition in the geometrical organization of the solutions in the landscape. We call this _Local Entropy_ (LE) transition. In Section VI we present further numerical experiments: firstly we show numerical evidence suggesting that at the Local Entropy transition the manifold of atypical solutions undergoes a profound change in structure; secondly we show in a continuous-weights model how the generalization performance is greatly improved by the existence of wide and flat minimizers. Section VII contains our conclusions.
## II Overview of the phenomenology in binary models
In this section we summarize the main results on the connection between algorithmic behaviour and geometrical properties of the landscape of solutions in binary models of neural networks. We focus for simplicity on the paradigmatic example of the binary perceptron problem, defined as follows.
Given a training set composed of \(P=\alpha N\) binary random patterns \(\mathbf{\xi}^{\mu}\in\{-1,1\}^{N}\) and labels \(y^{\mu}\in\{-1,1\}\) with \(\mu=1,\ldots,P\), we aim to find a binary \(N\)-dimensional weight \(\mathbf{w}\in\{-1,1\}^{N}\) such that
\[y^{\mu}=\text{sign}\left(\mathbf{w}\cdot\mathbf{\xi}^{\mu}\right) \tag{1}\]
for all \(\mu\). The non-convexity of the problem is induced by the binary nature of the weights. Since the late 1980s this model has been studied using statistical physics techniques by Gardner and Derrida [16; 17] and by Krauth and Mezard [18] in the limit \(N\), \(P\to\infty\) with fixed \(\alpha\). They found that one can find solutions to the problem only up to a given critical value of \(\alpha\) that separates a SAT from an UNSAT phase: for \(\alpha<\alpha_{c}\) the problem admits an exponential number of solutions, whereas for \(\alpha>\alpha_{c}\) no solution to the problem can be found. Huang and Kabashima [19] computed the entropy of solutions at a given distance from a typical solution (the so-called _Franz-Parisi_ entropy), and showed that for \(\alpha<\alpha_{c}\) the space of solutions splits into well-separated clusters of vanishing entropy. More in detail, they found that picking a configuration \(\tilde{\mathbf{w}}\) at random from the set of all solutions, its closest other solution is found by flipping an extensive number of weights. This means that typical solutions are completely isolated. The geometrical organization of typical solutions was later confirmed rigorously in a related model, the symmetric binary perceptron [20; 21]. In several constraint satisfaction problems (CSPs), such point-like solutions are generally believed to be hard to find algorithmically. In particular, in Boolean satisfiability problems like \(k\)-SAT [22] and in graph coloring [23] all the best known algorithms start to fail in finding solutions at the so-called _clustering_ transition, which occurs at the constraint density for which only such isolated solutions remain.
The analytical picture of the landscape of the binary perceptron, and its relation with the behaviour of algorithms, however, was far from being completely understood. Indeed, if only those kinds of point-like, sharp solutions existed, it should be really hard to find them. Nevertheless, not only do there exist algorithms that efficiently solve the optimization problem [24; 25; 26], but the solutions they find also have substantially different properties from the typical, isolated ones that can be described analytically: we mention in particular the fact that one can numerically show that solutions found by algorithms usually lie in dense clusters of solutions [15], and observables like the stability distribution [14] and generalization (in teacher-student models) [12] do not coincide with the theoretical calculations cited above.
This apparent paradox has been resolved in [12; 13], where it was shown analytically that there exist rare, subdominant (i.e. atypical) clusters of solutions in the landscape which are highly attractive for the learning algorithm dynamics. It was then shown [27; 28] that the typical (sharp) and atypical (clusterized) solutions coexist in the landscape until a certain critical value \(\alpha_{\text{LE}}\) is reached; this new transition has been named the _Local Entropy_ (LE) transition: for \(\alpha<\alpha_{\text{LE}}\), both sharp and flat minima can be found, whereas for \(\alpha>\alpha_{\text{LE}}\) only sharp minima exist. This has a drastic impact on learning: the most efficient algorithms are not able to overcome this threshold, since beyond it only isolated solutions exist. \(\alpha_{\text{LE}}\) can therefore be thought of as a "clustering transition" as was studied earlier in various CSPs, but whose nature is different, being the result of the disappearance of atypical, clustered solutions. Moreover, this analytical picture was exploited to create new algorithms that have been explicitly designed to target wide and flat minima: we mention in particular focusing Belief Propagation (fBP); for other types of "replicated algorithms" see [12]. Those algorithms in turn are the ones that succeed in getting closer to the local entropy transition. Other algorithms like Reinforced Belief Propagation (rBP) [24], reinforced maximum [26] and Stochastic BP-inspired (SBPI) [25] also induce a bias towards clustered solutions.
The phenomenology described above has been shown to hold qualitatively also in other binary models, e.g. one-hidden-layer tree committee machines and teacher-student models. Those lines of research also showed that the heuristic techniques and architectural choices used in machine learning, such as the use of the cross-entropy loss [15], of the ReLU activation function [14], or of regularization [29], tend to affect the learning landscape and to induce wider and flatter minima. In addition, a similar analysis of the role of overparameterization [28] suggests that it allows the emergence of atypical clustered solutions at the local entropy transition. The appearance of those solutions makes the dynamics change from glassy to non-glassy [3].
## III The model
The scenario described in the previous section holds for neural network models with binary weights. For continuous models, on the other hand, fewer results are known so far. In [14; 15] the existence of wide and flat solutions was shown in one-layer models with continuous weights. However, there has been no detailed study of how the structure of these regions changes as a function of overparameterization, nor of its algorithmic implications, as was done for binary models. Here we study these questions in an even simpler model, the negative-margin perceptron. It is defined as follows.
Given a set of \(P=\alpha N\) normally distributed random patterns \(\xi^{\mu}\), \(\mu=1,\ldots,P\) we want to find a vector of \(N\) continuous weights \(\mathbf{w}\) normalized on \(\mathcal{S}_{N}\), the sphere of radius \(\sqrt{N}\)
\[\mathbf{w}\cdot\mathbf{w}=\sum_{i=1}^{N}w_{i}^{2}=N \tag{2}\]
that satisfy the set of \(P\) constraints
\[\Delta^{\mu}(\mathbf{w};\kappa)=\frac{1}{\sqrt{N}}\mathbf{w}\cdot\mathbf{\xi}^{\mu}-\kappa \geq 0\,,\quad\forall\mu=1,\ldots,P\,. \tag{3}\]
The parameter \(\kappa\) is fixed and is called the _margin_. We will refer to a configuration \(\tilde{\mathbf{w}}\) satisfying (2) and (3) simply as a _solution_ to the problem. For \(\kappa\geq 0\) the problem is convex and the space of solutions is always connected. For \(\kappa<0\) the problem becomes non-convex; it has been named the negative spherical perceptron problem [33].
In the following we will be also interested in the binary analog of the problem (\(w_{i}=\pm 1\)), that we will call the "negative binary perceptron problem".
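To make the setup concrete, the following minimal sketch (in Python with NumPy; all function names and parameter values are illustrative choices of ours, not taken from any reference implementation) samples a random instance of the problem and checks whether a candidate weight vector satisfies the normalization (2) and the margin constraints (3), in both the spherical and binary variants.

```python
import numpy as np

def sample_patterns(N, alpha, rng):
    """Draw P = alpha*N i.i.d. standard Gaussian patterns xi^mu in R^N."""
    P = int(alpha * N)
    return rng.standard_normal((P, N))

def gaps(w, xi, kappa):
    """Gap variables Delta^mu(w; kappa) = xi^mu . w / sqrt(N) - kappa, eq. (3)."""
    N = w.shape[0]
    return xi @ w / np.sqrt(N) - kappa

def is_solution(w, xi, kappa):
    """w is a solution iff every gap variable is non-negative."""
    return bool(np.all(gaps(w, xi, kappa) >= 0))

rng = np.random.default_rng(0)
N, alpha, kappa = 1000, 1.0, -0.5
xi = sample_patterns(N, alpha, rng)

# Spherical candidate: rescale a Gaussian vector onto the sphere of radius sqrt(N).
w_sph = rng.standard_normal(N)
w_sph *= np.sqrt(N) / np.linalg.norm(w_sph)

# Binary candidate: w_i = +/-1, whose norm is automatically sqrt(N).
w_bin = rng.choice([-1.0, 1.0], size=N)

print(is_solution(w_sph, xi, kappa), is_solution(w_bin, xi, kappa))
```

Note that a uniformly random configuration will typically violate an extensive fraction of the constraints; finding actual solutions requires the algorithms discussed below.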
### Related work
The negative spherical perceptron has been studied in the statistical mechanics community since the 1980s, starting with the seminal work of Gardner and Derrida [30]. In particular they studied, for a fixed value of \(\kappa\), the so-called SAT/UNSAT transition \(\alpha_{c}\): this is the maximum number of patterns per parameter that can be perfectly classified by the network, in the limit of large \(N\). They computed it by using the replica method in the Replica-Symmetric (RS) approximation. They also computed the De Almeida-Thouless (dAT) line that marks the onset of the instability of the RS ansatz, showing that the SAT/UNSAT transition is correctly computed in the RS approximation only for \(\kappa\geq 0\). We plot \(\alpha_{c}(\kappa)\) in the RS approximation and \(\alpha_{\text{dAT}}(\kappa)\) for \(\kappa<0\) in Fig. 1.
More recently, the negative spherical perceptron has been studied as the "simplest model of jamming" [34]: the patterns \(\mathbf{\xi}^{\mu}\) can be interpreted as point obstacles in fixed random positions on \(\mathcal{S}_{N}\), and the constraints (3) correspond to imposing that a particle at position \(\mathbf{w}\) on \(\mathcal{S}_{N}\) is at a Euclidean distance larger than \(\sigma=\sqrt{2N+2\kappa}\) from the point obstacles1. The jamming point corresponds, for a fixed \(\alpha\), to the maximization of the margin \(\kappa\) (which we will denote from now on as \(\kappa_{\text{max}}(\alpha)\)2), i.e. to the maximization of the distance from the point obstacles. Because of the isomorphism with the problem of sphere packing, the negative perceptron problem has been studied using the replica method in [31], where the whole phase diagram of typical solutions was derived. In particular it was shown that for low enough margin the model exhibits the classical Random First Order Transition (RFOT) phenomenology: increasing \(\alpha\) for a fixed \(\kappa\) one first finds a _clustering_ transition, then a _Kauzmann_ transition and finally a _Gardner_ transition. For clarity, we have plotted those lines in Fig. 1; we refer to the caption of the figure and the paper [31] for their precise definitions. In the same paper, the critical exponents were computed on the jamming line, and they turn out to be equal to the ones found for the jamming of hard spheres in infinite dimension [35].
Footnote 1: For this reason the variable \(\Delta^{\mu}\) has been named the _gap variable_. In the statistical mechanics literature it has also been called the _stability_ of pattern \(\mu\).
Footnote 2: \(\kappa_{\text{max}}(\alpha)\) is simply the inverse function of the SAT/UNSAT transition \(\alpha_{c}(\kappa)\) mentioned before.
In another work [33], Montanari and coworkers studied the performance of several algorithms by characterizing their algorithmic thresholds and comparing them with rigorous upper and lower bounds on the critical capacity \(\alpha_{c}\). In particular they showed that there exists a gap between the interpolation threshold of the Linear Programming (LP) algorithm and the lower bound on \(\alpha_{c}\). However, other algorithms, such as Gradient Descent (GD) and Stochastic GD (SGD) on the cross-entropy loss function, behave much better, since they have a much higher algorithmic threshold than LP. They were not able to conclude whether in this problem there exists a computational barrier that cannot be overcome by any algorithm (as is common in binary CSPs) or whether one can design better optimization algorithms able to reach \(\alpha_{c}\).
## IV 1RSB critical capacity in the negative spherical perceptron
Following the seminal work of Gardner [16; 17; 30] the partition function of the negative perceptron models is
\[Z=\int d\mu(\mathbf{w})\,\mathbb{X}_{\xi}(\mathbf{w};\kappa) \tag{4}\]
where \(\Theta(\cdot)\) is the Heaviside theta function and we have denoted with
\[\mathbb{X}_{\xi}(\mathbf{w};\kappa)\equiv\prod_{\mu=1}^{\alpha N}\Theta\left(\Delta^{ \mu}(\mathbf{w};\kappa)\right) \tag{5}\]
the indicator that selects the solutions to the optimization problem (3). \(d\mu(\mathbf{w})\) is a measure over the weights that we have introduced to treat both the spherical and binary cases:
\[d\mu_{\text{sph}}(\mathbf{w}) =\delta\left(N-\sum_{i}w_{i}^{2}\right)\prod_{i=1}^{N}dw_{i} \tag{6a}\] \[d\mu_{\text{bin}}(\mathbf{w}) =\prod_{i=1}^{N}dw_{i}\,\left[\frac{1}{2}\delta\left(w_{i}-1 \right)+\frac{1}{2}\delta\left(w_{i}+1\right)\right] \tag{6b}\]
One is then interested in computing the free entropy of the system in the thermodynamic limit
\[\phi\equiv\lim_{N\to\infty}\frac{1}{N}\left\langle\ln Z\right\rangle_{\xi}\, \tag{7}\]
where \(\left\langle\cdot\right\rangle_{\xi}\) is the average over all the random patterns \(\{\xi^{\mu}\}_{\mu=1}^{\alpha N}\). The free entropy can be computed by using the replica trick [1]
\[\left\langle\ln Z\right\rangle_{\xi}=\lim_{n\to 0}\frac{1}{n}\ln\left\langle Z ^{n}\right\rangle_{\xi} \tag{8}\]
The entropy can be fully characterized by an \(n\times n\) order parameter matrix \(q^{ab}\), which physically represents the typical overlap between two replicas extracted from the Gibbs measure (5), i.e.
\[q^{ab}=\frac{1}{N}\sum_{i=1}^{N}\left\langle w_{i}^{a}w_{i}^{b}\right\rangle. \tag{9}\]
where we have indicated by \(\left\langle\cdot\right\rangle\) the average over the Gibbs measure in (5). We review in appendix B the analytical calculations of the entropy in the case in which the structure of the overlap matrix \(q^{ab}\) is Replica-Symmetric (RS)
\[q_{ab}=\delta_{ab}+(1-\delta_{ab})q \tag{10}\]
Figure 1: Phase diagram of the negative spherical perceptron. The light blue line represents the SAT/UNSAT transition computed in the Replica-Symmetric approximation as presented originally in Gardner-Derrida’s work [30]. This line is correct only in the region \(\kappa\geq 0\), where the model is convex. A better approximation to the SAT/UNSAT transition is the orange line, which is computed here using the 1RSB approximation. The other lines were computed in [30] and [31]. The green line represents the De Almeida-Thouless instability of the RS ansatz: when one crosses \(\alpha_{\text{dAT}}\) for \(\kappa>\kappa_{\text{1RSB}}\) one goes from an RS to a fRSB phase; when \(\kappa_{\text{RFOT}}<\kappa<\kappa_{\text{1RSB}}\) one instead goes from RS to a 1RSB stable phase. For a value of \(\kappa<\kappa_{\text{RFOT}}\) one encounters several transitions when \(\alpha\) is increased: the _dynamical_ or _clustering_ transition \(\alpha_{\text{dyn}}\) (blue line), the _Kauzmann transition_ (green line) and the _Gardner transition_ (in red). For \(\alpha<\alpha_{\text{dyn}}\) the system is RS. For \(\alpha_{\text{dyn}}<\alpha<\alpha_{\text{K}}\) the system is in a dynamical 1RSB phase, where \(q_{1}>q_{0}\) and the Parisi block parameter \(m=1\). For \(\alpha_{\text{K}}<\alpha<\alpha_{\text{G}}\) the Gibbs measure is dominated by a 1RSB solution having \(m<1\). Above the Gardner transition \(\alpha>\alpha_{\text{G}}\) the system is fRSB. Note that the dynamical transition presented in [31] is slightly below the one presented here [32].
or broken at 1-step Replica Symmetry Breaking level (1RSB)
\[q_{ab}=q_{0}+(q_{1}-q_{0})I_{ab}^{m}+(1-q_{1})\delta_{ab} \tag{11}\]
where \(I_{ab}^{m}\) is the \(n\times n\) matrix having elements equal to 1 inside the blocks of size \(m\) located around the diagonal and 0 otherwise.
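For readers less familiar with the replica notation, the block structure of the 1RSB overlap matrix (11) is easy to visualize numerically; the following short sketch (our own illustration) builds it for given \(n\), \(m\), \(q_{0}\), \(q_{1}\).

```python
import numpy as np

def q_1rsb(n, m, q0, q1):
    """1RSB overlap matrix of eq. (11): entries q0 outside the diagonal blocks,
    q1 inside the m x m diagonal blocks, and 1 on the diagonal."""
    assert n % m == 0, "the block size m must divide n"
    I_m = np.kron(np.eye(n // m), np.ones((m, m)))  # the matrix I^m_{ab}
    return q0 * (1.0 - I_m) + q1 * I_m + (1.0 - q1) * np.eye(n)

print(q_1rsb(n=6, m=3, q0=0.2, q1=0.7))
```

Setting \(q_{0}=q_{1}=q\) recovers the RS matrix (10).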
In both spherical and binary negative perceptron problems the free entropy is a decreasing function of \(\alpha\), meaning that the solution space shrinks when constraints are added. Increasing \(\alpha\) one then crosses a critical value \(\alpha_{c}\) (the SAT/UNSAT transition) such that for larger constraint densities there is no solution to the problem. In binary models \(\alpha_{c}\) can be computed easily by evaluating the value of \(\alpha\) for which the Replica-Symmetric (RS) free entropy goes to 0 [18; 36]. In this model the typical overlap between solutions does not go to 1 at the SAT/UNSAT transition.
In continuous models the estimation of \(\alpha_{c}\) is instead much harder and requires the use of the full Replica Symmetry Breaking ansatz (fRSB) [37]. We present in appendix B the computation of \(\alpha_{c}\) (or, equivalently, of the _maximum margin_ \(\kappa_{\text{max}}\) at a fixed value of \(\alpha\)) in the spherical negative perceptron in the RS and 1RSB approximations. In the RS case, this requires studying the limit \(q\to 1\). In the 1RSB case, we must consider instead the limit \(q_{1}\to 1\) with \(m\to 0\) and \(\tilde{m}\equiv m/(1-q_{1})\) finite [38]. We plot those two approximations in Fig. 1; the 1RSB ansatz substantially changes the estimate of the critical capacity. It can be regarded as a good upper bound on the true SAT/UNSAT transition. For another upper bound (which is slightly less stringent than the one presented here) and a lower bound on the true value of \(\alpha_{c}\) see [33].
## V Probing the local entropy landscape using the Franz-Parisi method
In this section we discuss how the landscape of solutions of the negative perceptron problem is composed of solutions that can be completely different in nature. In particular, among the various observables that we can compute analytically, we are interested in the so-called "_local entropy_" of a given solution. Given a configuration \(\tilde{\mathbf{w}}\), normalized as in eq. (2) and solving the set of constraints (3), we define its local entropy \(\mathcal{S}_{\xi}(\tilde{\mathbf{w}},d;\kappa)\) as the log of the volume (or number, in the binary case) of solutions at a given (normalized) distance \(d\) from \(\tilde{\mathbf{w}}\); namely
\[\mathcal{S}_{\xi}(\tilde{\mathbf{w}},d;\kappa) =\frac{1}{N}\ln\mathcal{N}_{\xi}(\tilde{\mathbf{w}},d;\kappa) \tag{12}\] \[\equiv\frac{1}{N}\ln\!\int\!d\mu(\mathbf{w})\,\mathbb{X}_{\xi}\left(\mathbf{w};\kappa\right)\delta(N(1-2d)-\mathbf{w}\cdot\tilde{\mathbf{w}})\]
For any distance \(d\) the local entropy is bounded from above by the value it attains for \(\alpha=0\), which we call \(\mathcal{S}_{\text{max}}\). In this case the previous equation simply measures the log of the _total_ volume (or the total number, in the binary case) of configurations that are at distance \(d\) from \(\tilde{\mathbf{w}}\). Because of the homogeneity of space, \(\mathcal{S}_{\text{max}}\) cannot depend on \(\tilde{\mathbf{w}}\). In the spherical case one gets (see appendix C.1)
\[\begin{split}\mathcal{S}_{\text{max}}^{\text{sph}}(d)&\equiv\frac{1}{N}\ln\int\,d\mu_{\text{sph}}(\mathbf{w})\,\delta\left(N(1-2d)-\sum_{i}w_{i}\right)\\ &\overset{N\to\infty}{=}\frac{1}{2}\left[1+\ln(2\pi)+\ln\left(1-(1-2d)^{2}\right)\right]\.\end{split} \tag{13}\]
Figure 2: Left panel: average local entropy of typical minimizers of the cross-entropy loss function defined in (17). The light blue dots are obtained by optimizing the same loss using SGD (\(N=1000\)). Here we have chosen \(\alpha=3.0\), \(\kappa=-0.5\) and \(\gamma=5\). Right panel: comparison of the average local entropy of typical and atypical solutions in the spherical negative perceptron. Franz-Parisi entropy for \(\kappa=-0.5\) and \(\alpha=1\) for a typical reference extracted from the flat measure over solutions having margin \(\tilde{\kappa}\) as in (19). The blue line corresponds to the case \(\tilde{\kappa}=\kappa\); the red curve corresponds instead to the maximum margin \(\tilde{\kappa}=\kappa_{\text{max}}\). Since \(\alpha<2\), in this case the RS estimation of \(\kappa_{\text{max}}\) is correct. The dashed line corresponds to the bound in (13). The points correspond to the average local entropy estimated using BP on solutions found by SA on the number-of-errors loss (light blue points) and by fBP (orange ones). The agreement between theory and simulations is remarkable.
whereas in the binary case we have
\[\begin{split}\mathcal{S}_{\max}^{\text{bin}}(d)&\equiv\frac{1}{N}\ln\int d\mu_{\text{bin}}(\mathbf{w})\,\delta\left(N(1-2d)-\sum_{i}w_{i}\right)\\ &\overset{N\to\infty}{=}-d\ln(d)-(1-d)\ln(1-d)\.\end{split} \tag{14}\]
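As a quick numerical aid, the two closed-form bounds (13) and (14) can be tabulated directly; the following sketch (ours, purely for illustration) evaluates them at a few distances.

```python
import numpy as np

def S_max_sph(d):
    """Spherical upper bound, eq. (13)."""
    return 0.5 * (1.0 + np.log(2.0 * np.pi) + np.log(1.0 - (1.0 - 2.0 * d) ** 2))

def S_max_bin(d):
    """Binary upper bound, eq. (14): the binary entropy of d."""
    return -d * np.log(d) - (1.0 - d) * np.log(1.0 - d)

for d in (0.01, 0.1, 0.25, 0.5):
    print(f"d={d:4.2f}  sph={S_max_sph(d):+.4f}  bin={S_max_bin(d):+.4f}")
```

Both bounds are maximized at \(d=1/2\); \(\mathcal{S}_{\max}^{\text{bin}}\) stays finite as \(d\to 0^{+}\), while \(\mathcal{S}_{\max}^{\text{sph}}\) diverges to \(-\infty\).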
Given a probability distribution \(P_{\xi}(\tilde{\mathbf{w}})\) over the set of configurations satisfying the constraints (3), we are interested in computing the local entropy of a typical \(\tilde{\mathbf{w}}\) sampled from \(P_{\xi}(\tilde{\mathbf{w}})\) and averaged over all the possible realizations of the \(\alpha N\) patterns:
\[\phi_{\text{FP}}(d;\kappa)=\left\langle\int d\mu(\tilde{\mathbf{w}})P_{\xi}(\tilde{\mathbf{w}})\mathcal{S}_{\xi}(\tilde{\mathbf{w}},d;\kappa)\right\rangle_{\xi}. \tag{15}\]
This "averaged local entropy" was introduced in the context of mean field spin glasses by Franz and Parisi [40] and for this reason we will call it _Franz-Parisi entropy_. Intuitively, sampling from a distribution \(P_{\xi}^{1}(\tilde{\mathbf{w}})\) that has a larger Franz-Parisi entropy with respect to another one \(P_{\xi}^{2}(\tilde{\mathbf{w}})\) for all distances within some radius will result in solutions that lie in wider and flatter minima.
It is therefore interesting to study how different choices of the probability distribution \(P_{\xi}(\tilde{\mathbf{w}})\) from which the reference configuration \(\tilde{\mathbf{w}}\) is extracted lead to different types of local entropy curves. Those analytical results on the flatness of a particular class of solutions can then be compared with the ones actually found by several algorithms.
In the next two subsections we explore two different ways of choosing \(P_{\xi}(\tilde{\mathbf{w}})\).
### Extracting the reference from the typical minima of a loss function
What one usually does to find a solution to (3) is to introduce a loss function \(\mathcal{L}\)
\[\mathcal{L}(\tilde{\mathbf{w}})=\sum_{\mu=1}^{\alpha N}\ell(\Delta^{\mu}(\tilde{ \mathbf{w}};\kappa)) \tag{16}\]
where \(\ell(\cdot)\) is a loss per pattern. In first-order algorithms such as gradient descent (GD) and Stochastic GD (SGD) the loss \(\mathcal{L}\) needs to be differentiable. A very common loss that is used extensively in machine learning practice is the cross-entropy, which for a binary classification problem has the form3
Footnote 3: In standard deep learning practice the "pseudo-inverse-temperature" parameter \(\gamma\) is not normally used, since in the exponent it is redundant with the norm of the last layer; in our models the norm is fixed, so we add it explicitly (and keep it fixed). The renormalization by \(\gamma^{-1}\) keeps under control the limits of small or large \(\gamma\), which could also be achieved by re-parameterizing the SGD learning rate.
\[\ell(x)=\frac{1}{2\gamma}\log\left(1+e^{-2\gamma x}\right) \tag{17}\]
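For illustration, here is a minimal sketch (ours; the learning rate, batch size and number of steps are untuned illustrative choices) of projected SGD on the loss (16)-(17) for the spherical case.

```python
import numpy as np

def ce_loss_grad(w, xi_batch, kappa, gamma):
    """Mean loss and gradient of l(x) = log(1+exp(-2*gamma*x))/(2*gamma),
    with x = Delta^mu(w; kappa) the gap variables of eq. (3)."""
    N = w.shape[0]
    x = xi_batch @ w / np.sqrt(N) - kappa
    loss = np.mean(np.logaddexp(0.0, -2.0 * gamma * x)) / (2.0 * gamma)
    dldx = -np.exp(-np.logaddexp(0.0, 2.0 * gamma * x))  # = -1/(1 + e^{2 gamma x})
    grad = xi_batch.T @ dldx / (np.sqrt(N) * x.shape[0])
    return loss, grad

rng = np.random.default_rng(1)
N, alpha, kappa, gamma = 1000, 3.0, -0.5, 5.0
P = int(alpha * N)
xi = rng.standard_normal((P, N))

w = rng.standard_normal(N)
w *= np.sqrt(N) / np.linalg.norm(w)
lr, batch = 1.0, 128
for step in range(20_000):
    idx = rng.integers(0, P, size=batch)
    _, g = ce_loss_grad(w, xi[idx], kappa, gamma)
    w -= lr * g
    w *= np.sqrt(N) / np.linalg.norm(w)   # project back onto the sphere, eq. (2)

print("violated constraints:", np.mean(xi @ w / np.sqrt(N) - kappa < 0))
```

The projection step enforces the spherical constraint (2) after each update; the analogous procedure on the hinge loss is used for the experiments of Sec. VI.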
In order to study the local entropy landscape around the minimizer found by such algorithms we use as reference configurations \(\tilde{\mathbf{w}}\) the ones extracted from the Gibbs measure
\[P_{\xi}(\tilde{\mathbf{w}})=\frac{e^{-\beta\sum_{\mu=1}^{\alpha N}\ell(\Delta^{ \mu}(\tilde{\mathbf{w}};\kappa))}}{\int d\mu(\tilde{\mathbf{w}}^{\prime})\,e^{-\beta \sum_{\mu=1}^{\alpha N}\ell(\Delta^{\mu}(\tilde{\mathbf{w}}^{\prime};\kappa))}} \tag{18}\]
In order to focus on the minima of the loss function \(\mathcal{L}\) we take the large \(\beta\) limit. We report in Appendix C the result of the computation of the Franz-Parisi entropy in the RS ansatz.
Figure 3: Average local entropy \(\phi_{\text{FP}}\) of typical and atypical solutions in the binary (left panel, with \(\alpha=1\), \(\kappa=-0.5\)) and continuous weights models (right panel, with \(\alpha=2\), \(\kappa=-0.5\)) as a function of the distance. In both cases \(\phi_{\text{FP}}\) is computed within the RS ansatz and for several values of the margin \(\tilde{\kappa}\) of the reference solution. The dashed grey lines correspond to the upper bounds on the entropy given in equations (14) and (13) for the binary and spherical cases, respectively. The distance at which each curve attains its maximum corresponds to the typical distance between solutions having margins \(\tilde{\kappa}\) and \(\kappa\) [39].
We plot in the left panel of Fig. 2 the RS Franz-Parisi entropy of the typical minima of the cross-entropy loss function. We show in the same plot that the theoretical computation is in striking agreement with numerical simulations done by optimizing the same loss with the SGD algorithm. The local entropy of a configuration of weights has been computed using Belief Propagation (BP); see [13] for the details of the implementation.
### Flat measure over solutions having margin \(\tilde{\kappa}\)
Secondly, we studied the average local entropy of solutions sampled using a flat measure over all configurations having margin \(\tilde{\kappa}\geq\kappa\), that is
\[P_{\xi}(\tilde{\mathbf{w}})=\frac{\mathbb{X}_{\xi}(\tilde{\mathbf{w}};\tilde{\kappa})}{\int d\mu(\tilde{\mathbf{w}}^{\prime})\,\mathbb{X}_{\xi}(\tilde{\mathbf{w}}^{\prime};\tilde{\kappa})} \tag{19}\]
In the case \(\tilde{\kappa}=\kappa\) we are sampling typical solutions to the problem. We studied how the Franz-Parisi curve changes as we vary the margin \(\tilde{\kappa}\) in the RS ansatz (see SM); since a solution having a margin \(\tilde{\kappa}>\kappa\) solves a more constrained problem than (3), we intuitively expect that its Franz-Parisi entropy will be higher than that for \(\tilde{\kappa}=\kappa\).
This approach has been explored so far in models with binary weights and \(\kappa\geq 0\). We summarize here the main findings for the case of the negative binary perceptron, see the left panel of Fig. 3. First, if one samples a typical solution to the problem one will find that there exists, for any (arbitrarily small) value of \(\alpha\), a neighborhood of \(d=0\) where the Franz-Parisi entropy is negative [27]. This implies that, within a certain distance from the reference, only a sub-extensive number of solutions can be found. One therefore says that typical solutions are _isolated_[19; 20; 41]. Secondly, if one samples solutions having larger margin with respect to the one of the problem \(\tilde{\kappa}>\kappa\) one finds that there always exists a neighborhood around \(d=0\) having positive average local entropy. Therefore solutions having larger margin are always surrounded by an exponential number of other solutions within a small but extensive distance. Moreover as one decreases the distance from the reference further, the local entropy curve becomes nearly indistinguishable from the total number of configurations at that distance, implying that the cluster where the reference is located is very dense. If \(\alpha\) is sufficiently small, the local entropy becomes monotonic as the margin \(\tilde{\kappa}\) is increased.
Differently from the binary weights case, in the spherical case we found that no typical solution is actually isolated: there is always a non-vanishing volume of solutions at a given distance from a typical solution, see Fig. 3, right panel. Moreover, if \(\alpha\) is low enough, even a typical solution has a monotonic local entropy. Apart from those two differences, the general picture is similar: as the margin \(\tilde{\kappa}\) is increased, the local entropy gets larger in a given range of distances from the reference, meaning that those (atypical) solutions are located inside a denser cluster. As in the binary case, the solution having the largest local entropy at small distances is the one sampled with the maximum margin \(\tilde{\kappa}=\kappa_{\text{max}}\). We refer to appendix C.5 for a discussion of the RSB effects that play an important role in the large \(\alpha\) regime.
In the right panel of Fig. 2, we also show the comparison between the average local entropy of typical and atypical solutions in the negative spherical perceptron and the agreement with the one found by two algorithms: Simulated Annealing on the number-of-errors loss and fBP. The latter algorithm was explicitly designed to target flatter solutions, see [12] for the details of its implementation; we found that fBP finds solutions whose local entropy is comparable to (even slightly larger than) the theoretical one found by imposing the maximum possible margin for the reference, as was previously observed in binary models [27]. In Appendix E we show that even for larger values of \(\alpha\), where the RS ansatz on the order parameters for the reference configuration is wrong (i.e. \(\alpha>\alpha_{\text{dAT}}(\tilde{\kappa})\)), the agreement with numerical simulations is still rather good.

Figure 4: Derivative of the Franz-Parisi entropy \(\partial\phi_{\text{FP}}/\partial d\) for the maximum margin solutions as a function of \(d\) for several values of \(\alpha\). The left panel is for the binary case (\(\kappa=-1\)), whereas the right panel is for the spherical case (\(\kappa=-0.5\)). Qualitatively the two plots are similar: for small values of \(\alpha\), \(\phi_{\text{FP}}\) is monotonic (the derivative is always positive). At \(\alpha_{\text{LE}}\), \(\partial\phi_{\text{FP}}/\partial d\) develops a zero at small distances. For \(\alpha>\alpha_{\text{LE}}\) the Franz-Parisi entropy is not monotonic.
### Phase transition in the geometrical organization of solutions: local entropy transition
Next, we investigate what happens to the widest and flattest minima as we increase the constraint density \(\alpha\). We therefore study the local entropy profile of the maximum-margin solutions for several increasing values of \(\alpha\), for a fixed value of \(\kappa\).
In Fig. 4 we plot \(\frac{\partial\phi_{\text{FP}}}{\partial d}\) as a function of the distance \(d\). The phenomenology is quite similar in both the binary and the spherical cases. If \(\alpha\) is lower than a critical threshold \(\alpha_{\text{LE}}(\kappa)\), the local entropy profile exhibits only one maximum (not shown in the figure), located at the typical distance \(d_{\text{typ}}(\alpha)\) between solutions with margins \(\tilde{\kappa}=\kappa_{\text{max}}\) and \(\kappa\); for \(d<d_{\text{typ}}\) the local entropy is monotonic with positive derivative. This means that the reference is located in a wide and flat region that extends to very large scales [12; 27]. For \(\alpha=\alpha_{\text{LE}}\), there appears at small distances another point where the derivative \(\frac{\partial\phi_{\text{FP}}}{\partial d}\) vanishes. For \(\alpha>\alpha_{\text{LE}}\) the local entropy is non-monotonic and it has a local maximum at a distance \(d_{\star}<d_{\text{typ}}(\alpha)\): this suggests that the most robust solutions are no longer located in regions that extend to arbitrarily large distances, but in regions that have a typical size \(d_{\star}\) instead. This _Local Entropy transition_ [12; 27] occurring at \(\alpha_{\text{LE}}(\kappa)\) can therefore be interpreted as the point at which the cluster of atypical robust solutions fractures into many pieces.
In [27] the local entropy transition has been computed for the binary perceptron model with \(\kappa=0\), and it has been shown that it gives results similar to the more precise method of finding the reference that maximizes the local entropy at every distance [12]. In the same works, moreover, it has been shown that this change in the geometry of atypical solutions strongly affects the behaviour of algorithms: no known algorithm is able to find solutions for \(\alpha>\alpha_{\text{LE}}\). We plot in Fig. 5 the local entropy transition as a function of \(\kappa\) for the binary (left panel) and for the spherical case (right panel). In the same plots we show the algorithmic thresholds of several algorithms, together with the SAT/UNSAT transition, which was computed by using the zero-entropy criterion in the binary case [18] and by using the RS and 1RSB approximations in the spherical case. In the left panel of Fig. 5 we can see that in the binary case no algorithm is able to cross the local entropy transition; in addition fBP, which is an algorithm designed to target maximally entropic regions, appears to stop working _exactly_ at the local entropy transition. In the spherical case this does not happen: even if the atypical states fracture into many pieces, algorithms are still able to overcome the threshold and find solutions. Indeed the typical landscape of solutions is very different in the two models: in the spherical case even typical solutions are surrounded by an exponential number of solutions up to capacity. The algorithmic thresholds plotted in the right panel of Fig. 5 seem to suggest that algorithms are able to reach the SAT/UNSAT transition of the model, especially knowing that taking into account higher order RSB corrections can considerably lower the estimate of \(\alpha_{c}\). Binary and spherical models are thus significantly different from the optimization point of view.
Notice that the computation of \(\alpha_{\text{LE}}\) could still be very imprecise in the spherical case because of the presence of large RSB effects. However, when \(\kappa\) is near zero, we expect the RSB corrections to play a minor role and our computation to be reliable; indeed the RS estimate of the maximum margin configuration is expected to approximate the true value quite well. On the other hand, when the margin is very negative, the RS estimate of \(\kappa_{\text{max}}\) is very imprecise (cf. Fig. 1); therefore we expect our estimate of \(\alpha_{\text{LE}}\) to be imprecise as well. In the appendix we describe a method to estimate the local entropy transition that does not rely on the ability to sample a replica in a deep RSB phase; this method is therefore expected to give much more precise estimates when \(\kappa\) is large in modulus.
## VI Numerical experiments
### Numerical justification of the local entropy transition
While we expect to observe a clear difference in algorithmic behaviour between the discrete and continuous versions of the model at the local entropy transition, we still expect to observe in both cases a structural geometrical change. Beyond the local entropy transition the discrete model displays a disconnected 1-RSB structure of solutions also at the out-of-equilibrium level. On the contrary, the continuous version displays a full RSB structure which is expected to be accessible (though not particularly flat). For the discrete case several works have already clarified the phenomenon both analytically and numerically. Here we focus on the continuous case.
We measure numerically the error of a weight vector \(\mathbf{w}_{\mathbf{\gamma}}\) obtained as a convex combination of \(y\) solutions \(\mathbf{w}^{a}\), with \(a=1,\ldots,y\), normalized on the sphere in \(N\) dimensions of radius \(\sqrt{N}\), namely
\[\mathbf{w}_{\mathbf{\gamma}}\equiv\frac{\sqrt{N}\sum_{a=1}^{y}\gamma_{a}\mathbf{w}^{a}}{ \|\sum_{a=1}^{y}\gamma_{a}\mathbf{w}^{a}\|}\,,\qquad\sum_{a=1}^{y}\gamma_{a}=1\,, \quad\forall a:\;\gamma_{a}\geq 0 \tag{20}\]
The study of the training error around geodesic paths connecting the same or different classes of solutions is actually an interesting problem in its own right [43]. Here we provide some preliminary numerical results on the simple cases \(y=2\) or \(3\), in which the solutions \(\mathbf{w}^{a}\) are obtained with SGD on the hinge loss \(\ell\left(x\right)=\max\left(0,-x\right)\) (the margin is included in \(x\), see eq. (16)). In particular, the case \(y=2\) amounts to computing the barrier along a "linear" (geodesic) path connecting two given solutions. This topic has been widely studied in the deep learning literature [8; 9; 10], as it is believed to be a good proxy to probe the error landscape around solutions. Based on the phenomenology exhibited by deep networks, we expect two robust solutions to be connected by an almost zero error path [10]. This is in fact what we observe in the overparameterized regime (low \(\alpha\)), see Fig. 6. However, as the constraint density is increased, a barrier in the linear path connecting solutions appears, in a region close to the RS estimate of the local entropy transition. Moreover, if we study the error landscape on the "plane" (\(2D\) manifold) spanned by \(y=3\) solutions, we see that, for \(\alpha>\alpha_{\text{LE}}\), a high error region appears at the barycenter, signaling that right above the local entropy transition SGD starts to find solutions that are likely to be located in different basins.
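A minimal sketch of the \(y=2\) measurement (our own illustrative code, not the scripts used to produce Fig. 6): it scans the normalized convex combination (20) between two given solutions and records the largest fraction of violated constraints along the path.

```python
import numpy as np

def path_barrier(w1, w2, xi, kappa, n_points=51):
    """Max fraction of violated constraints along the normalized linear
    (geodesic) path of eq. (20) with y = 2 between two configurations."""
    N = w1.shape[0]
    worst = 0.0
    for g in np.linspace(0.0, 1.0, n_points):
        w = g * w1 + (1.0 - g) * w2
        w *= np.sqrt(N) / np.linalg.norm(w)  # back onto the sphere of radius sqrt(N)
        err = np.mean(xi @ w / np.sqrt(N) - kappa < 0)
        worst = max(worst, err)
    return worst

# w1, w2 would be two solutions found, e.g., by independent SGD runs as in
# the sketch of the previous section; xi is the pattern matrix.
```

The \(y=3\) case is analogous: one scans the simplex of coefficients \((\gamma_{1},\gamma_{2},\gamma_{3})\) instead of the segment.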
### Connections with generalization
In order to probe numerically the computational advantages of wide flat minima and to create a natural link to future studies on multilayered models, we have analyzed the generalization properties in a teacher-student setting. Specifically, we generate data with a random teacher perceptron (\(\kappa=0\)) and train a student perceptron with negative \(\kappa\). Once learning is complete, we test the generalization performance of the student with zero margin. Remarkably, we find that - provided we converge into wide flat minima - even learning with very negative values of \(\kappa\) (a very under-constrained learning problem, with very little signal coming from the training set) leads to good generalization performance, see Fig. 7 for the continuous case. Learning with fBP leads to minimizers which are well inside the flat region and as such are effectively robust, even though the robustness condition coming from the learning constraint is very weak (the negative \(\kappa\)). Other algorithms display different degrees of robustness depending on details such as the effective temperature \(\gamma\) of the cross-entropy loss minimized by SGD, or the cooling schedule for Simulated Annealing. Similar behaviours are found in the binary case, for which we refer to the appendix.
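The following sketch illustrates one plausible version of this protocol (our own simplified code: projected SGD on the hinge loss rather than fBP, with untuned hyperparameters): a zero-margin teacher labels the training patterns, the student is trained with a negative margin, and generalization is then measured at zero margin on fresh data.

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, kappa = 1000, 2.0, -1.0
P = int(alpha * N)

def on_sphere(w):
    return w * np.sqrt(N) / np.linalg.norm(w)

# Teacher (kappa = 0) generates the labels of the training set.
w_T = on_sphere(rng.standard_normal(N))
xi = rng.standard_normal((P, N))
y = np.sign(xi @ w_T)
sx = y[:, None] * xi   # signed patterns: constraints read sx.w/sqrt(N) >= kappa

# Student: projected SGD on the hinge loss l(x) = max(0, -x).
w = on_sphere(rng.standard_normal(N))
lr = 0.5
for step in range(20_000):
    idx = rng.integers(0, P, size=128)
    x = sx[idx] @ w / np.sqrt(N) - kappa
    viol = x < 0                              # only violated constraints contribute
    grad = -sx[idx][viol].sum(axis=0) / (np.sqrt(N) * len(idx))
    w = on_sphere(w - lr * grad)

# Generalization is tested at zero margin on fresh patterns.
xi_test = rng.standard_normal((20_000, N))
gen_err = np.mean(np.sign(xi_test @ w) != np.sign(xi_test @ w_T))
print("generalization error:", gen_err)
```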
Figure 5: SAT/UNSAT and local entropy transitions as a function of \(\kappa\) for the binary (left) and spherical (right) cases. The points represent the highest value of \(\alpha\) that we were able to reach at \(N=1000\) with several algorithms: fBP, SBPI [25] and Binary NET (BNET) [42] in the binary case; fBP and SGD for the continuous case. In the spherical case we plot the critical capacity both in the RS and 1RSB approximations. Notice also that in this case the local entropy transition coincides with the SAT/UNSAT transition for \(\kappa\geq 0\), since the solution space is connected and convex.
Figure 6: Average maximum error fraction along the geodesic path connecting two solutions (red points) and on the \(2D\) manifold ("plane") spanned by three solutions on the \(N\)-dimensional hypersphere surface (blue points). A high error barrier between solutions appears before the algorithmic threshold \(\alpha_{c}\), and its onset is compatible with the local entropy transition (in this case \(\alpha_{\text{LE}}(\kappa=-0.5)\simeq 4.2\)). The two inset plots show the error on the plane spanned by three solutions (represented by red dots at the vertices of the triangle): for \(\alpha<\alpha_{\text{LE}}\), the error along the linear paths (edges of the triangles) is tiny, and it is even smaller at the barycenter of the plane; for \(\alpha>\alpha_{\text{LE}}\) a high error peak appears at the barycenter. Configurations were obtained by optimizing the hinge loss with margin \(\kappa=-0.5\) using SGD. Points are averages over 20 realizations of the patterns and 5 different runs for each dataset. The value of \(N\) is 2000.
## VII Conclusions
In this paper we have studied the binary and spherical negative-margin perceptrons, i.e. two simple non-convex neural network models learning a random rule, with binary and spherical weights respectively. We have analyzed in both models the geometry of the landscape of solutions, showing important similarities but also differences. First, we have pointed out how the typical solutions of the two models are substantially different: in the binary case, for any \(\alpha>0\), the landscape is composed of an extensive number of clusters with vanishing entropy; in the spherical case, instead, typical solutions are always surrounded by an exponential number of other solutions. Second, we have studied highly robust (i.e. high-margin) but exponentially rare solutions of both problems, and showed that in both cases those configurations have a larger local entropy compared to typical solutions. In binary models the configurations having the largest local entropy correspond to the ones having the largest margin; in continuous weights models the same is true, modulo RSB effects (see Appendix).
Finally, we analyzed the solutions with the largest local entropy as a function of the constraint density \(\alpha\): we showed that in both models, when a certain threshold \(\alpha_{\text{LE}}\) is crossed, the local entropy becomes non-monotonic. This seems to signal a break-up of the space of robust solutions into disconnected components. Indeed, we have verified numerically in the spherical case that for \(\alpha<\alpha_{\text{LE}}\) algorithms find solutions lying in the same basin, whereas for \(\alpha>\alpha_{\text{LE}}\) we observe a sudden rise of energy barriers along the geodesic path between pairs of solutions and in between triplets of solutions. Even if we cannot rule out at this time the existence of non-geodesic zero-energy paths connecting these solutions, this is already an indication of a profound change in the structure of the manifold of solutions.
In binary models we have verified that the transition has a very strong impact on the behavior of algorithms. In spherical models, even if we have a strong indication that the geometry of the space of solutions is undergoing a radical change in structure near the transition, it does not appear to have an impact on algorithmic hardness. As we have shown, efficient algorithms are probably able to reach the SAT/UNSAT transition, which we have computed here in the 1RSB ansatz.
We have verified that similar analytical findings hold also in other models, such as the so-called _tree-committee_ machine [14], where the non-convexity of the problem is induced not by the negative margin but by the presence of an additional layer and a generic non-linearity. Interesting future research directions include the extension of these results to models presenting the notion of generalization and, finally, the analytical investigation of the connectivity properties of solutions in neural network models [43].
## Acknowledgements
E.M.M. wishes to thank R. Diaz Hernandez Rojas, S. Franz, P. Urbani and F. Zamponi for several interesting discussions.
|
2307.06092 | Quantitative CLTs in Deep Neural Networks | We study the distribution of a fully connected neural network with random
Gaussian weights and biases in which the hidden layer widths are proportional
to a large constant $n$. Under mild assumptions on the non-linearity, we obtain
quantitative bounds on normal approximations valid at large but finite $n$ and
any fixed network depth. Our theorems show both for the finite-dimensional
distributions and the entire process, that the distance between a random fully
connected network (and its derivatives) to the corresponding infinite width
Gaussian process scales like $n^{-\gamma}$ for $\gamma>0$, with the exponent
depending on the metric used to measure discrepancy. Our bounds are strictly
stronger in terms of their dependence on network width than any previously
available in the literature; in the one-dimensional case, we also prove that
they are optimal, i.e., we establish matching lower bounds. | Stefano Favaro, Boris Hanin, Domenico Marinucci, Ivan Nourdin, Giovanni Peccati | 2023-07-12T11:35:37Z | http://arxiv.org/abs/2307.06092v5 | # Quantitative CLTs in Deep Neural Networks
###### Abstract.
We study the distribution of a fully connected neural network with random Gaussian weights and biases in which the hidden layer widths are proportional to a large constant \(n\). Under mild assumptions on the non-linearity, we obtain quantitative bounds on normal approximations valid at large but finite \(n\) and any fixed network depth. Our theorems show both for the finite-dimensional distributions and the entire process, that the distance between a random fully connected network (and its derivatives) to the corresponding infinite width Gaussian process scales like \(n^{-\gamma}\) for \(\gamma>0\), with the exponent depending on the metric used to measure discrepancy. Our bounds are strictly stronger in terms of their dependence on network width than any previously available in the literature; in the one-dimensional case, we also prove that they are optimal, i.e., we establish matching lower bounds.
**AMS 2010 Classification:** 60F05; 60F07; 60G60; 68T07.
[email protected], Department of Economics and Statistics, University of Torino and Collegio Carlo Alberto
[email protected], Department of Operations Research and Financial Engineering, Princeton University
[email protected], Department of Mathematics, University of Rome Tor Vergata
[email protected], Department of Mathematics, Luxembourg University
[email protected], Department of Mathematics, Luxembourg University
in this family for an approximation to \(f\). In this article we will study the simplest, so-called fully connected, network architectures:
**Definition 1.1** (Fully Connected Network).: _Fix a positive integer \(L\) as well as \(L+2\) positive integers \(n_{0},\ldots,n_{L+1}\) and a function \(\sigma:\mathbb{R}\to\mathbb{R}\). A_ **fully connected depth \(L\) neural network** _with input dimension \(n_{0}\), output dimension \(n_{L+1}\), hidden layer widths \(n_{1},\ldots,n_{L}\), and non-linearity \(\sigma\) is any function \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) of the following form_
\[z_{\alpha}^{(\ell)}=\begin{cases}W^{(1)}x_{\alpha}+b^{(1)},&\ell=1\\ W^{(\ell)}\sigma(z_{\alpha}^{(\ell-1)})+b^{(\ell)},&\ell=2,\ldots,L+1\end{cases}, \qquad z_{\alpha}^{(\ell)}\in\mathbb{R}^{n_{\ell}}, \tag{1.1}\]
_where \(W^{(\ell)}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) are matrices, \(b^{(\ell)}\in\mathbb{R}^{n_{\ell}}\) are vectors, and \(\sigma\) applied to a vector is shorthand for \(\sigma\) applied to each component._
The trainable parameters of a fully connected network are the **network weights**\(W_{ij}^{(\ell)}\) (entries of the weight matrices \(W^{(\ell)}\)) and **network biases**\(b_{i}^{(\ell)}\) (components of the bias vectors \(b^{(\ell)}\)). Of course, the network architecture and dataset must be compatible in the sense that \(f\) must be a function from \(\mathbb{R}^{n_{0}}\) to \(\mathbb{R}^{n_{L+1}}\). For a training dataset and a network architecture, the goal is to find a setting of the weights and biases so that not only do we have
\[z_{\alpha}^{(L+1)}\approx f(x_{\alpha})\]
for \(x_{\alpha}\) in the training dataset but also for inputs not included in the training data. This optimization is typically done in two steps:
1. Randomly initialize (i.e. sample) the network weights and biases.
2. Optimize the weights and biases by some variant of gradient descent on an empirical loss such as the squared error: \[\sum_{\alpha=1}^{k}\big{|}\big{|}z_{\alpha}^{(L+1)}-f(x_{\alpha})\big{|} \big{|}_{2}^{2}.\]
We thus see that neural networks with random weights and biases, the main subject of this article, describe the properties of neural networks at the start of training. The usual way to initialize parameters in practice leads to the following:
**Definition 1.2** (Random Fully Connected Neural Network).: _Fix \(L\geq 1,\,n_{0},\ldots,n_{L+1}\geq 1,\,\sigma:\mathbb{R}\to\mathbb{R}\) as well as \(C_{b}\geq 0,\,C_{W}>0\). A_ **random depth \(L\) neural network** _with input dimension \(n_{0}\), output dimension \(n_{L+1}\), hidden layer widths \(n_{1},\ldots,n_{L}\), and non-linearity \(\sigma\), is the random field (1.1) with random weights and biases:_
\[W_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{C_{W}}{n_{\ell-1}}\right),\qquad b _{i}^{(\ell)}\sim\mathcal{N}(0,C_{b})\qquad\text{independent}. \tag{1.2}\]
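A minimal sketch of Definitions 1.1 and 1.2 (Python/NumPy; the function name and the default values \(C_{W}=2\), \(C_{b}=0\) are illustrative choices of ours):

```python
import numpy as np

def random_network(n0, hidden, nL1, sigma, C_W=2.0, C_b=0.0, rng=None):
    """Sample weights and biases as in (1.2) and return the map
    x -> z^{(L+1)}(x) defined by the recursion (1.1)."""
    rng = rng or np.random.default_rng()
    widths = [n0] + list(hidden) + [nL1]
    # W^{(l)} has shape (n_l, n_{l-1}) with entries N(0, C_W / n_{l-1}).
    Ws = [rng.normal(0.0, np.sqrt(C_W / widths[l]), size=(widths[l + 1], widths[l]))
          for l in range(len(widths) - 1)]
    bs = [rng.normal(0.0, np.sqrt(C_b), size=widths[l + 1])
          for l in range(len(widths) - 1)]

    def forward(x):
        z = Ws[0] @ x + bs[0]            # first layer acts on the raw input
        for W, b in zip(Ws[1:], bs[1:]):
            z = W @ sigma(z) + b         # eq. (1.1)
        return z

    return forward

relu = lambda t: np.maximum(t, 0.0)
net = random_network(n0=3, hidden=[512, 512, 512], nL1=1, sigma=relu,
                     rng=np.random.default_rng(0))
print(net(np.ones(3)))
```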
In general, describing the distribution of a randomly initialized neural network and tracking the dynamics of optimization are both quite difficult. To make progress, several influential lines of research study these questions asymptotically when the network widths \(n_{1},\ldots,n_{L}\) are large [62, 27, 54, 34, 57, 82, 17, 39, 77, 78, 47, 14, 58, 30, 72, 81, 4]. That neural networks simplify significantly in this _infinite width limit_ can already be seen at initialization:
**Theorem 1.3** (Infinite Networks as Gaussian Processes - [62, 54, 57, 83, 17, 39]).: _Fix \(L,n_{0},n_{L+1},r\geq 1\) and a non-linearity \(\sigma:\mathbb{R}\to\mathbb{R}\) that is polynomially bounded to order \(r\) in the sense of the forthcoming formula (2.1). As \(n_{1},\ldots,n_{L}\to\infty\), the random field \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) converges weakly in distribution, as an element of \(C^{r-1}(\mathbb{R}^{n_{0}},\mathbb{R}^{n_{L+1}})\), to a Gaussian process with \(n_{L+1}\) iid centered components \((z_{i;\alpha}^{(L+1)},\,i=1,\ldots,n_{L+1})\) with limiting covariance_
\[K_{\alpha\beta}^{(L+1)}:=\lim_{n_{1}\ldots,n_{L}\to\infty}\operatorname{Cov} \left(z_{i;\alpha}^{(L+1)},z_{i;\beta}^{(L+1)}\right)\]
_satisfying_
\[K_{\alpha\beta}^{(\ell+1)}=\begin{cases}C_{b}+C_{W}\left\langle\sigma\left(z_ {i;\alpha}^{(\ell)}\right)\sigma\left(z_{i;\beta}^{(\ell)}\right)\right\rangle _{K^{(\ell)}},&\ell\geq 1\\ C_{b}+\frac{C_{W}}{n_{0}}x_{\alpha}\cdot x_{\beta},&\ell=0\end{cases}, \tag{1.3}\]
_where for any \(f:\mathbb{R}^{2}\to\mathbb{R}\) we've written \(\left\langle f(z_{i;\alpha}^{(\ell)},z_{i;\beta}^{(\ell)})\right\rangle_{K^{(\ell)}}\) for the average value of \(f\) with respect to the distribution_
\[\left(z_{i;\alpha}^{(\ell)},z_{i;\beta}^{(\ell)}\right)\sim\mathcal{N}\left(0,\left(\begin{array}{cc}K_{\alpha\alpha}^{(\ell)}&K_{\alpha\beta}^{(\ell)} \\ K_{\alpha\beta}^{(\ell)}&K_{\beta\beta}^{(\ell)}\end{array}\right)\right).\]
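The recursion (1.3) is straightforward to iterate numerically. Below is a minimal sketch (ours) for a pair of inputs, which estimates the two-dimensional Gaussian average by Monte Carlo rather than by exact integration; closed forms exist for special non-linearities such as ReLU, but the Monte Carlo version works for any \(\sigma\).

```python
import numpy as np

def K_recursion(x_a, x_b, L, sigma, C_W=2.0, C_b=0.0, n_mc=200_000, rng=None):
    """Iterate eq. (1.3) for two inputs; returns the 2x2 matrix K^{(L+1)}
    restricted to {x_a, x_b}, with Monte Carlo Gaussian averages."""
    rng = rng or np.random.default_rng(0)
    n0 = x_a.shape[0]
    k = lambda u, v: C_b + C_W * (u @ v) / n0          # the ell = 0 line of (1.3)
    K = np.array([[k(x_a, x_a), k(x_a, x_b)],
                  [k(x_a, x_b), k(x_b, x_b)]])
    for _ in range(L):
        z = rng.multivariate_normal(np.zeros(2), K, size=n_mc)
        K = C_b + C_W * (sigma(z).T @ sigma(z)) / n_mc  # <sigma(z_a) sigma(z_b)>_K
    return K

relu = lambda t: np.maximum(t, 0.0)
print(K_recursion(np.array([1.0, 0.0]), np.array([0.6, 0.8]), L=3, sigma=relu))
```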
We can now state the main question taken up in this article:
**Question:** How close is a random neural network at finite width to the infinite width Gaussian process described in Theorem 1.3?
Perhaps the main motivation for taking up this question comes from prior work on the neural tangent kernel (NTK) regime [47, 30, 5, 55, 82, 83, 84, 72, 37, 81], which occurs when \(L,n_{0},n_{L+1}\), and the training dataset are fixed, weights and biases are initialized as in (1.2), and the hidden layer widths \(n_{1},\ldots,n_{L}\) are sent to infinity. The NTK regime has two salient features:
* The stochastic processes \(x_{\alpha}\mapsto z_{\alpha}^{(L+1)}\) converge in distribution to centered Gaussian processes with independent components. We already saw this in Theorem 1.3.
* Using sufficiently small learning rates and losses such as the mean squared error, the entire trajectory of optimization coincides with the one obtained by replacing the non-linear network \(z_{\alpha}^{(L+1)}\) by its linearization around the randomly initialized setting of network weights and biases (see [47], Theorem 3.2 in [5], and Theorem 5.4 in [8]).
The second point shows that in the infinite width limit optimization will not get stuck in a spurious local minimum of the training loss, as the loss is convex after we replace the network by its linearization. But it also suggests that taking the width to infinity comes at a steep explanatory cost. Indeed, one of the most important practical features of neural networks is precisely that they are _not_ linear models and hence learn data-dependent features [21]. The NTK regime is thus too rigid to capture important aspects of the behavior of realistic neural networks. To study non-linear effects such as feature learning one must either change the initialization scheme (leading to the mean-field limit [59, 20, 73, 77]), consider regimes in which the training dataset size grows with network width (see e.g. [29, 60, 23, 76, 43, 4]), or
study neural networks at finite width (see e.g. [63, 85, 43, 72, 81, 37, 40]). In this article, we focus on this last option and develop new probabilistic tools for analyzing neural networks at finite but large width (see (3.1)).
Beyond the development of the NTK regime, the infinite width Gaussian process of Theorem 1.3 has been exploited to develop Bayesian inference for deep neural networks [34, 54, 43, 81, 25, 4] and to investigate properties of infinitely wide neural networks as functions of the depth through information propagation [70, 75, 44, 41, 81, 72, 38].
### Informal Overview of Results
Our main results, which we present in more detail in §3 below, can be summarized as follows:
1. **One-dimensional QCLTs.** For a fixed network input \(x_{\alpha}\in\mathbb{R}^{n_{0}}\) we consider a single component \(z_{i;\alpha}^{(L+1)}\) of the network output. Here we ask, as a function of network width, how close is the distribution of \(z_{i;\alpha}^{(L+1)}\) (and its derivatives with respect to \(x_{\alpha}\)) to the corresponding infinite width Gaussian? We find that the total-variation distance between them is bounded above by a constant times \((\text{width})^{-1}\); we also prove that this rate is optimal, i.e., we establish matching lower bounds. See Theorem 3.3 for the precise statement.
2. **Finite-dimensional QCLTs.** For a fixed finite collection of network inputs \(x_{\alpha}\in\mathbb{R}^{n_{0}},\,\alpha\in\mathcal{A}\), we obtain in Theorem 3.5 upper bounds on the convex distance (see (3.11)) between the vector \(\left(z_{i;\alpha}^{(L+1)},\,\alpha\in\mathcal{A}\right)\) (and its derivatives with respect to \(x_{\alpha}\)) and the corresponding Gaussian. Here we find an upper bound of the order of \((\text{width})^{-1/2}\), with a pre-factor that scales polynomially with the number of elements in \(\mathcal{A}\). We conjecture that this rate is sub-optimal and can be improved to be a constant depending on \(\mathcal{A}\) times \((\text{width})^{-1}\).
3. **Functional QCLTs.** We prove upper bounds in Theorem 3.10 between \(z_{\alpha}^{(L+1)}\) - viewed as an element of the infinite-dimensional Sobolev space of weakly differentiable functions from a compact set in \(\mathbb{R}^{n_{0}}\) to \(\mathbb{R}^{n_{L+1}}\) - and its infinite width limit. These bounds are in terms of both the Wasserstein-2 metric and the \(d_{2}\) distance (see (3.26)) and scale like \((\text{width})^{-\kappa}\) for certain exponents \(\kappa\in(0,1/2]\). When \(\sigma\) is either a smooth or the ReLU non-linearity, and we study the Wasserstein-2 metric in some appropriate Sobolev space, we can take \(\kappa=1/8\). In Theorem 3.14 - which is one of the most innovative contributions of our work and applies to smooth nonlinearities - we use a Sobolev embedding argument to deduce upper bounds on transport distances associated with supremum norms on spaces of differentiable functions. In this case, we again achieve bounds that scale as \((\text{width})^{-1/8}\).
As we review in more detail in §4, our results in cases (1)--(3) above are strictly stronger in terms of their dependence on network width than any previously available in the literature. We conclude by emphasizing that, on a technical level, this article introduces several novel ideas:
* We establish new quantitative estimates on the fluctuations of the conditional covariances of a random neural network (see Theorem 5.1) - a result that is of independent interest - which extends techniques and ideas initially introduced in [65, 66, 67].
* Our approach to one- and finite-dimensional approximations revolves around the use of integration by parts formulae and the so-called **Stein's method** for probabilistic approximations - see [53, 66, 68].
* We formulate, based on ideas from [28], a novel coupling argument for conditionally Gaussian fields (see Proposition 5.14) that is used in the proof of Theorem 3.10 and Theorem 3.14.
* We develop a new application of **modified Powers-Størmer inequalities** [71], which we formally state (in full generality) in Proposition 5.12 below. Such a result, which extends bounds already established in [28], allows one to upper bound the Hilbert-Schmidt norm of the difference of the square roots of two positive semi-definite operators without requiring that one of them be strictly positive definite. This will be used in the proof of Theorem 3.10. Our approach should be compared with the discussion contained in [10, Section 5], where some alternative strategies for deriving functional bounds are partially outlined.
**Remark 1.4**.: The recent (independently written) paper [3] uses Stein's method in a way comparable to ours to deduce upper bounds in the total variation and convex distances for shallow and deep networks, in the case where the input is fixed and only the network's output is concerned (no derivatives). We stress that our paper focuses on the derivation of tight probabilistic bounds in terms of the width of the neural network, providing only a partial description of the analytical dependence of the constants on the other parameters of the model (e.g. \(C_{W},C_{b}\)). While it is in principle possible to make such dependence explicit in a finite-dimensional setting (see e.g. [10, 15, 3]), the task becomes much more complicated in a functional framework, since in this case the constants in our bounds crucially depend on certain traces of integral operators that one cannot directly represent in terms of the involved parameters.
We prefer to think of this point as a separate issue, and leave it open for further research.
### Outline for Remainder of Article
The rest of this article is structured as follows. First, in §2, we formally introduce assumptions and notation related to the non-linearity \(\sigma\). Then, in §3, we state our precise results on one-dimensional (§3.2), finite-dimensional (§3.3), and infinite-dimensional quantitative CLTs (§3.4). Throughout §3 we compare our results to prior work and mention a range of further related articles in §4. We then develop in §5 some preparatory results that will be used in the proofs of our main theorems. Specifically, §5.1 builds on the simple observation from Lemma 2.5 that random neural networks are conditionally Gaussian and recalls key estimates on the fluctuations of the conditional covariances (see Theorem 5.1). Next, §5.2 recalls Stein's method for one-dimensional quantitative CLTs, while §5.3 and §5.4 provide the finite-dimensional and infinite-dimensional extensions, respectively. Several of these extensions (specifically Propositions 5.9, 5.12, and 5.14) are elementary but new. Finally, in §6 we complete the proofs of our main results.
## 2. Assumptions and Definitions
For our precise results, we will need the following mild technical condition on the activation function.
**Definition 2.1** (Polynomially Bounded Activations).: _For fixed \(r\geq 1\), we say that the non-linearity \(\sigma:\mathbb{R}\to\mathbb{R}\) is_ **polynomially bounded to order \(r\)**_, if either \(\sigma\) is \(r\) times continuously differentiable or if it is \(r-1\) times continuously differentiable and its \((r-1)-\)st derivative is a continuous piecewise linear function with a finite number of points of discontinuity for its derivative. In either case we also require that the \(r\)-th derivative of \(\sigma\) is polynomially bounded_
\[\exists k\geq 0\text{ s.t. }\left|\left|(1+|x|)^{-k}\frac{d^{r}}{dx^{r}} \sigma(x)\right|\right|_{L^{\infty}(\mathbb{R})}<\infty, \tag{2.1}\]
_and that for every fixed \(x_{\alpha},x_{\beta}\) and \(I,J\) such that \(|I|,|J|=r\), the mixed derivatives \(D_{\alpha}^{J}D_{\beta}^{I}\Sigma_{\alpha\beta}^{(\ell)}\) are well-defined and finite with probability one, where \(\Sigma^{(\ell)}\) is defined according to the forthcoming formula (2.8)._
**Remark 2.2**.: The condition that the mixed derivatives \(D_{\alpha}^{J}D_{\beta}^{I}\Sigma_{\alpha\beta}^{(\ell)}\) are well-defined and finite with probability one hold in the following settings, which cover virtually all cases of practical importance:
* \(r\) is arbitrary and \(\sigma\) is a smooth function.
* \(r\) is arbitrary and \(\sigma\) is strictly monotone (e.g. leaky ReLU with \(r=1\))
* \(r\) is arbitrary and the bias variance \(C_{b}\) (see (1.2)) is strictly positive.
* \(r=1,C_{b}=0\) and \(\sigma=\) ReLU and we restrict the network inputs \(x_{\alpha},x_{\beta}\) to be non-zero (this case is somewhat more subtle and is proved in the course of establishing Proposition 9 in [42]).
Virtually all non-linearities used in practice are polynomially bounded to some order \(r\geq 1\). This is true, for instance, of smooth non-linearities such as tanh and GeLU [46] (in which case \(r\) is arbitrary), as well as of piecewise linear non-linearities such as ReLU [51] and leaky ReLU [45].
Since we are concerned in this article with quantitative central limit theorems not only for the outputs of a random neural network but also for its derivatives with respect to network inputs, let us agree that for a **multi-index**\(J=(j_{1},\ldots,j_{n_{0}})\in\mathbb{N}^{n_{0}}\) we will write \(|J|:=j_{1}+\cdots+j_{n_{0}}\) for the **order** of \(J\) and let
\[D_{\alpha}^{J}=\partial_{x_{1}}^{j_{1}}\cdots\partial_{x_{n_{0}}}^{j_{n_{0}}} \bigg{|}_{x=x_{\alpha}=(x_{1},\ldots,x_{n_{0}})} \tag{2.2}\]
denote the corresponding partial derivative operator, with the convention that \(D^{0}=\mathrm{Id}\).
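As a practical aside, low-order input derivatives of a given network realization can be sanity-checked by finite differences; the sketch below (ours; `random_network` refers to the hypothetical helper from the earlier sketch) estimates a first-order derivative. For non-smooth \(\sigma\) such as ReLU this is only meaningful away from the kinks, consistent with Remark 2.2.

```python
import numpy as np

def input_derivative(f, x, i, eps=1e-5):
    """Central finite-difference estimate of the partial derivative of the
    network map f: R^{n0} -> R^{nL+1} with respect to the i-th input at x."""
    e = np.zeros_like(x)
    e[i] = eps
    return (f(x + e) - f(x - e)) / (2.0 * eps)

# Example, e.g. with the random_network sketch shown earlier:
# net = random_network(n0=3, hidden=[512], nL1=1, sigma=np.tanh)
# print(input_derivative(net, np.ones(3), i=0))
```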
**Remark 2.3**.: For fixed \(\ell=1,...,L+1\), consider the real valued random field \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{i;\alpha}^{(\ell)}\in\mathbb{R}\) defined in (1.1), and denote by \(\Gamma\) the centered Gaussian field having covariance \(K^{(\ell)}\), as defined in (1.3). Assume that \(\sigma\) is polynomially bounded to the order \(r\) (Definition 2.1), and that weights and biases are selected according to Definition 1.2. Then, one can verify the following properties:
1. both \(\Gamma\) and \(z_{i}^{(\ell)}\) are of class \(C^{r-1}\) with probability one;
2. with probability one, both \(z_{i}^{(\ell)}\) and \(\Gamma\) are Lebesgue-almost everywhere \(r\) times differentiable and, for all \(J\) such that \(|J|=r\), there exist versions of the fields \(x_{\alpha}\mapsto D_{\alpha}^{J}z_{i;\alpha}^{(\ell)}\) and \(x_{\alpha}\mapsto D_{\alpha}^{J}\Gamma_{\alpha}\) that are locally bounded;
3. for every fixed non-zero \(x_{\alpha}\) and \(J\) such that \(|J|=r\), the mixed derivatives \(D_{\alpha}^{J}z_{i;\alpha}^{(\ell)}\) and \(D_{\alpha}^{J}\Gamma_{\alpha}\) are well-defined and finite with probability one;
4. For every \(x_{\alpha},x_{\beta}\in\mathbb{R}^{n_{0}}\) and every \(I,J\) such that \(|I|,|J|\leq r-1\), \[\mathbb{E}[D_{\alpha}^{I}\Gamma_{\alpha}\cdot D_{\beta}^{J}\Gamma_{\beta}]=D_ {\alpha}^{I}D_{\beta}^{J}K_{\alpha\beta}^{(\ell)},\] (2.3) and the mapping \((x_{\alpha},x_{\beta})\mapsto D_{\alpha}^{I}D_{\beta}^{J}K_{\alpha\beta}^{( \ell)}\) is continuous. Relation (2.3) continues to hold whenever \(\max\{|J|,|I|\}=r\) and one considers non-zero inputs \(x_{\alpha},x_{\beta}\), and in this case there exists a version of the mapping \((x_{\alpha},x_{\beta})\mapsto D_{\alpha}^{I}D_{\beta}^{J}K_{\alpha\beta}^{( \ell)}\) that is bounded on compact sets.
The following definition formalizes what it means for a random neural network, together with some of its derivatives, to have a non-degenerate covariance structure in the infinite width limit. This condition will be used as a hypothesis in most of our results.
**Definition 2.4**.: _Fix \(r\geq 1\) and suppose that \(\sigma\) is polynomially bounded to order \(r\), in the sense of Definition 2.1. Consider any finite set \(\mathcal{A}\) indexing distinct network inputs_
\[x_{\mathcal{A}}=\left\{x_{\alpha}:\alpha\in\mathcal{A}\right\}\subseteq \mathbb{R}^{n_{0}}\]
_and any finite set of directional derivative operators_
\[V=\left\{V_{1},\ldots,V_{p}\right\},\quad V_{j}:=\sum_{i=1}^{n_{0}}v_{ij} \partial_{x_{i}}. \tag{2.4}\]
_We say that the infinite-width covariance structure \(\left\{K^{(\ell)}:\ell=1,...,L+1\right\}\) from Theorem 1.3 is_ **non-degenerate on \(x_{\mathcal{A}}\) to order \(q\leq r\) with respect to \(V\)** _if, for all \(\ell=1,...,L+1\), the infinite width covariance matrix_
\[K_{\mathcal{A},V}^{(\ell),\leq q}:=\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}} ^{J_{2}}K_{\alpha_{1}\alpha_{2}}^{(\ell)},\,|J_{1}|,|J_{2}|\leq q,\ \alpha_{1},\alpha_{2}\in\mathcal{A}\right) \tag{2.5}\]
_is invertible, where for each multi-index \(J_{i}=(j_{i1},\ldots,j_{ip})\in\mathbb{N}^{p}\) with order \(|J_{i}|=j_{i1}+\cdots+j_{ip}\) we have written_
\[V_{\alpha_{i}}^{J_{i}}:=V_{1}^{j_{i1}}\cdots V_{p}^{j_{ip}}\bigg{|}_{x=x_{ \alpha_{i}}} \tag{2.6}\]
_for the corresponding differential operators and have set \(V^{0}=\mathrm{Id}.\) We stress that, in (2.5), the rows and columns of the matrix \(K_{\mathcal{A},V}^{(\ell),\leq q}\) are indexed, respectively, by pairs \((J_{1},\alpha_{1})\) and \((J_{2},\alpha_{2})\)._
_When the set \(V\) is such that \(p=n_{0}\) and \(v_{i,j}=\delta_{ij}\), then \(V^{J}=D^{J}\) -- as defined in (2.2); in this special case, we will say that the infinite-width covariance structure \(\{K^{(\ell)}:\ell=1,...,L+1\}\) is_ **canonically non-degenerate to the order \(q\leq r\)** _and use the notation \(K^{(\ell),\leq q}_{\mathcal{A},V}=K^{(\ell),\leq q}_{\mathcal{A}}\)._
By virtue of (2.3), the invertibility of the covariance matrix \(K^{(\ell),\leq q}_{\mathcal{A},V}\) implies (thanks to Cauchy-Schwarz) that \(K^{(\ell),\leq q}_{\mathcal{A},V}\) has strictly positive diagonal terms. Plainly, the covariance structure \(\{K^{(\ell)}\}\) is non-degenerate on \(x_{\mathcal{A}}\) to order \(0\) if and only if the matrices \(\{K^{(\ell)}_{\alpha\beta}:\alpha,\beta\in\mathcal{A}\}\) defined in (1.3) are invertible for \(\ell=1,...,L+1\). In particular, if \(\mathcal{A}=\{\alpha\}\) is a singleton, non-degeneracy to the order \(0\) simply means that \(K^{(\ell)}_{\alpha\alpha}>0\) for \(\ell=1,...,L+1\). Finally, for \(\ell=1,...,L+1\), we introduce the notation
\[\kappa^{(\ell)}_{\alpha\beta}:=\operatorname{Cov}\left(z^{(\ell)}_{i;\alpha},\,z^{(\ell)}_{i;\beta}\right), \tag{2.7}\]
and write
\[\mathcal{F}^{(\ell)}:=\text{ sigma field generated by weights and biases in layers }1,\ldots,\ell.\]
We conclude this section with the following elementary lemma that will be used throughout the paper.
**Lemma 2.5**.: _Conditionally on \(\mathcal{F}^{(\ell)}\) the random field \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z^{(\ell+1)}_{\alpha}\in\mathbb{R}^{n _{\ell+1}}\) has i.i.d. centered Gaussian components with conditional covariance_
\[\operatorname{Cov}\left(z^{(\ell+1)}_{i;\alpha},\,z^{(\ell+1)}_{j;\beta}\ \mid\mathcal{F}^{(\ell)}\right)=\delta_{ij}\Sigma^{(\ell)}_{\alpha\beta}\]
_where_
\[\Sigma^{(\ell)}_{\alpha\beta}:=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{ \ell}}\sigma\left(z^{(\ell)}_{j;\alpha}\right)\sigma\left(z^{(\ell)}_{j; \beta}\right). \tag{2.8}\]
_One has in particular that \(\kappa^{(\ell+1)}_{\alpha\beta}=\mathbb{E}[\Sigma^{(\ell)}_{\alpha\beta}]\), where we used the notation (2.7)._
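To make (2.8) concrete, the following is a minimal NumPy sketch (an illustration outside the formal development; all widths, seeds and sample sizes are arbitrary choices) that estimates \(\kappa^{(L+1)}_{\alpha\alpha}=\mathbb{E}[\Sigma^{(L)}_{\alpha\alpha}]\) by Monte Carlo for a ReLU network with \(C_{b}=0\), \(C_{W}=2\). For this particular non-linearity one has \(\mathbb{E}[\sigma(z)^{2}\mid\mathcal{F}^{(\ell-1)}]=\Sigma^{(\ell-1)}_{\alpha\alpha}/2\), so that \(\kappa^{(\ell)}_{\alpha\alpha}\) coincides with the infinite-width value \(2\|x_{\alpha}\|^{2}/n_{0}\) at every width, and the simulation should reproduce this up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

def preactivations(x, widths, C_b=0.0, C_W=2.0):
    """One draw of the network of Definition 1.2 with ReLU non-linearity;
    returns the pre-activation vector z^{(L)} of the last hidden layer."""
    h, n_prev = x, x.shape[0]
    for n in widths:
        W = rng.normal(0.0, np.sqrt(C_W / n_prev), size=(n, n_prev))
        b = np.sqrt(C_b) * rng.normal(size=n)
        z = W @ h + b
        h, n_prev = np.maximum(z, 0.0), n   # ReLU activations
    return z

# Monte Carlo estimate of kappa^{(L+1)}_{aa} = E[Sigma^{(L)}_{aa}], via (2.8)
n0, n, L, trials, C_W = 4, 256, 3, 4000, 2.0
x = rng.normal(size=n0)
sig = [C_W * np.mean(np.maximum(preactivations(x, [n] * L), 0.0) ** 2)
       for _ in range(trials)]
print("MC estimate of kappa :", np.mean(sig))
print("infinite-width value :", C_W * (x @ x) / n0)  # 2||x||^2/n_0 for ReLU
```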
**Remark 2.6**.: Assume that \(\sigma\) is polynomially bounded to the order \(r\geq 1\). In resonance with Remark 2.3, the random covariance function \((x_{\alpha},x_{\beta})\mapsto\Sigma^{(\ell)}_{\alpha\beta}\) verifies the following properties:
1. \(\Sigma^{(\ell)}\) is of class \(C^{r-1,r-1}(\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{0}})\) with probability one;
2. with probability one, \(\Sigma^{(\ell)}\) is Lebesgue-almost everywhere \(r\) times differentiable in each variable and, for all multi-indices \(I,J\) such that \(|I|,|J|=r\), there exists a version of the field \((x_{\alpha},x_{\beta})\mapsto D^{I}_{\alpha}D^{J}_{\beta}\Sigma^{(\ell)}_{\alpha\beta}\) that is locally bounded;
3. For every \(x_{\alpha},x_{\beta}\in\mathbb{R}^{n_{0}}\) and every \(I,J\) such that \(|I|,|J|\leq r\), \[\mathbb{E}[D^{J}_{\alpha}D^{I}_{\beta}\Sigma^{(\ell)}_{\alpha\beta}]=D^{J}_{\alpha}D^{I}_{\beta}\kappa^{(\ell+1)}_{\alpha\beta},\] (2.9) and the mapping \((x_{\alpha},x_{\beta})\mapsto\mathbb{E}[(D^{J}_{\alpha}D^{I}_{\beta}\Sigma^{(\ell)}_{\alpha\beta})^{2}]\) is integrable over arbitrary compact sets.
## 3. Main Results
In the subsequent sections we will present our main results, respectively, in the one-dimensional (§3.2), the finite-dimensional (§3.3), and the functional setting (§3.4). Before stating them we present some notation in §3.1.
### Notation and Setting for Main Results
Our results give quantitative CLTs for random neural networks verifying the following (parameterized) set of assumptions.
**Assumption 3.1**.: Fix constants \(c_{1}\geq c_{2}>0\), integers \(r,L,n_{0},n_{L+1}\geq 1\), scalars \(C_{b}\geq 0,\,C_{W}>0\), and a mapping \(\sigma:\mathbb{R}\to\mathbb{R}\) that is polynomially bounded to order \(r\) as in Definition 2.1. We then consider a random neural network \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+ 1}}\) with input dimension \(n_{0}\), output dimension \(n_{L+1}\), hidden layer widths \(n_{1},\ldots,n_{L}\), and non-linearity \(\sigma\) as in Definition 1.2 and suppose that for some \(n\geq 1\)
\[c_{2}n\leq n_{1},\ldots,n_{L}\leq c_{1}n. \tag{3.1}\]
For the sake of brevity, we define the set of parameters
\[\mathcal{P}:=\{\sigma,c_{1},c_{2},L,n_{0},C_{b},C_{W}\} \tag{3.2}\]
(note that \(\mathcal{P}\) does not contain \(r\)).
The results in this article give quantitative CLTs showing that when \(n\) is large the random field \(z_{\alpha}^{(L+1)}\) and its derivatives
\[D_{\alpha}^{J}z_{i;\alpha}^{(L+1)}:=\partial_{x_{1}}^{j_{1}}\cdots\partial_{x _{n_{0}}}^{j_{n_{0}}}\bigg{|}_{x=x_{\alpha}}z_{i;\alpha}^{(L+1)},\qquad J=(j_{ 1},\ldots,j_{n_{0}})\in\mathbb{N}^{n_{0}} \tag{3.3}\]
(or, more generally, the mixed directional derivatives appearing in (2.2)) are close to those of a centered Gaussian process with \(n_{L+1}\) independent and identically distributed components.
Although the classes of probabilistic distances we study vary somewhat between the one-dimensional, finite-dimensional, and functional cases below, they all contain Wasserstein distances, which we recall here for the sake of readability. See [80, Chapter 6] for further details and proofs.
**Definition 3.2**.: _Let \(K\) be a real separable Hilbert space, let \(X,Y\) be two \(K\)-valued random elements, and fix \(p\geq 1\). We define the \(p\)_**-Wasserstein distance**_, between the distributions of \(X\) and \(Y\), to be the quantity_
\[\mathbf{W}_{p}(X,Y):=\left(\inf_{(T,S)}\mathbb{E}[\|T-S\|_{K}^{p}]\right)^{1/ p}, \tag{3.4}\]
_where the infimum runs over all random elements \((T,S)\) such that \(T\stackrel{{ law}}{{=}}X\) and \(S\stackrel{{ law}}{{=}}Y\)._
Definition 3.2 will be applied to \(K=\mathbb{R}\) in Section 3.2, to \(K=\mathbb{R}^{m}\) in Section 3.3 and to \(K\) equal to some appropriate Sobolev space in Section 3.4. We note that, trivially, \(\mathbf{W}_{p}\leq\mathbf{W}_{q}\) for \(p\leq q\), and also record the following two additional facts:
* if \(U\) is an arbitrary random element defined on the same probability space as \(X\) (taking values in some Polish space \(\mathcal{U}\) and with law \(\mathbb{P}_{U}\)), then there exists a version \[\mathbb{P}_{X|U}:\mathcal{U}\times\mathcal{B}(K)\to[0,1]:(u,B)\mapsto\mathbb{P}_{X|U=u}(B)\] of the conditional distribution of \(X\) given \(U\) such that \[\mathbf{W}_{q}^{q}(X,Y)\leq\int_{\mathcal{U}}\mathbf{W}_{q}^{q}(\mathbb{P}_{X|U=u},\mathbb{P}_{Y})\,d\mathbb{P}_{U}(u),\] (3.5) see e.g. [9];
* in the case \(q=1\) one has the dual representation \[\mathbf{W}_{1}(X,Y)=\sup_{h\in\mathrm{Lip}(1)}|\mathbb{E}h(X)-\mathbb{E}h(Y)|,\] (3.6) where the supremum runs over all \(1\)-Lipschitz mappings on \(K\), that is, all real-valued mappings \(h\) such that \(|h(a)-h(b)|\leq\|a-b\|_{K}\), for all \(a,b\in K\).
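On the real line, the distance \(\mathbf{W}_{1}\) between two empirical measures of equal size reduces to the \(L^{1}\) distance between order statistics (the quantile coupling attains the infimum in (3.4)). The following minimal NumPy sketch of this estimator, with illustrative sample sizes, is reused informally in the numerical remarks below:

```python
import numpy as np

def empirical_w1(xs, ys):
    """W_1 between the empirical measures of two equal-size 1-d samples:
    the L^1 distance between the sorted samples (quantile coupling)."""
    assert len(xs) == len(ys)
    return float(np.mean(np.abs(np.sort(xs) - np.sort(ys))))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=100_000)
b = rng.normal(0.0, 1.0, size=100_000)
print(empirical_w1(a, b))   # small: both samples come from the same law
```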
### One-dimensional bounds
Our first result, Theorem 3.3, measures the total variation and \(1\)-Wasserstein distances between the output of a random neural network evaluated at a single input and a Gaussian random variable. To state it, recall that, given random variables \(X,Y\), the **total variation distance** between the distributions of \(X\) and \(Y\) is defined as
\[d_{TV}(X,Y):=\sup_{B\in\mathcal{B}(\mathbb{R})}|\mathbb{P}(X\in B)-\mathbb{P} (Y\in B)| \tag{3.7}\]
where \(\mathcal{B}(\mathbb{R})\) denotes the Borel-measurable subsets of \(\mathbb{R}\).
**Theorem 3.3**.: _Consider a random neural network \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) verifying Assumption 3.1 with a non-linearity \(\sigma\) that is polynomially bounded to order \(r\geq 1\) as in Definition 2.1, and recall notation (3.2). Fix a network input \(x_{\alpha}\in\mathbb{R}^{n_{0}}\), and directional derivative operators \(V=\{V_{1},...,V_{p}\}\) as in (2.4). Fix also a multi-index \(J\in\mathbb{N}^{p}\) such that \(|J|\leq r\), and let \(Z\) be a centered Gaussian random variable with variance \(V_{\alpha}^{J}V_{\beta}^{J}K_{\alpha\beta}^{(L+1)}|_{x_{\alpha}=x_{\beta}}\), where we have adopted the notation (2.6). If the infinite-width covariance structure \(\{K^{(\ell)}\}\) is non-degenerate on the singleton \(\{x_{\alpha}\}\) to order \(q=|J|\leq r\) with respect to \(V\), in the sense of Definition 2.4, then the following conclusions hold:_
**(1)**: _there exists_ \(C>0\)_, depending on_ \(r,V,J,x_{\alpha},\mathcal{P}\)_, with the property that, for each_ \(i=1,\ldots,n_{L+1}\)_,_
\[\max\Big{\{}\mathbf{W}_{1}(V_{\alpha}^{J}z_{i;\alpha}^{(L+1)},Z),\ d_{TV}(V_{ \alpha}^{J}z_{i;\alpha}^{(L+1)},Z)\Big{\}}\leq Cn^{-1},\] (3.8) _and the constant_ \(C\) _can be chosen uniformly when_ \(||x_{\alpha}||^{2}\,/n_{0}\) _varies over a compact set;_
**(2)**: _the dependence on_ \(n\) _in (_3.8_) is optimal when_ \(q=0\)_, in the following sense: denoting by_ \(Z^{\prime}\) _a centered Gaussian random variable with the same variance as_ \(z_{i;\alpha}^{(L+1)}\)_, there exists_ \(C_{0}>0\)_, depending on_ \(x_{\alpha}\) _and_ \(\mathcal{P}\)_, such that, for each_ \(i=1,\ldots,n_{L+1}\)_,_
\[\min\Big{\{}\mathbf{W}_{1}(z_{i;\alpha}^{(L+1)},Z^{\prime}),\ d_{TV}(z_{i; \alpha}^{(L+1)},Z^{\prime})\Big{\}}\geq C_{0}n^{-1}. \tag{3.9}\]
We prove Theorem 3.3 in §6.1. Before presenting our results in the multi-dimensional setting, we make several remarks:
1. Let us give two examples of situations where Theorem 3.3 applies: * When \(\sigma(t)=\operatorname{ReLU}(t)=\max\left\{0,t\right\}\), we may take \(C_{b}=0,C_{W}=2\) and \[V=\left\{\partial_{x_{i}}\right\}\text{ for some }i.\] For any non-zero network input \(x_{\alpha}\), a simple computation shows that \[K_{\alpha\alpha}^{(\ell)}=\frac{2}{n_{0}}\left|\left|x_{\alpha}\right|\right|^ {2},\qquad\partial_{x_{i;\alpha}}\partial_{x_{i;\beta}}K_{\alpha\beta}^{(\ell) }\big{|}_{\alpha=\beta}=\frac{2}{n_{0}}.\] Hence, the infinite width covariance structure is non-degenerate on the singleton \(\left\{x_{\alpha}\right\}\) both to order \(0\) and to order \(1\) with respect to \(V\). * By inspection of the proof of Theorem 3.3 (given in Section 6.1) and by virtue of Theorem 5.1, one sees that the conclusion continues to hold if \(\sigma\) is smooth (that is, of class \(C^{\infty}(\mathbb{R})\)) and \(V_{\alpha}^{J}V_{\beta}^{J}K_{\alpha\beta}^{(L+1)}|_{x_{\alpha}=x_{\beta}}>0.\) In particular, we may take \(\sigma\) to be any smooth function such as \(\tanh(t)\) and set \(C_{b}=0,C_{W}=1\). For any non-zero network input \(x_{\alpha}\) the recursion from Theorem 1.3 then reads \[K_{\alpha\alpha}^{(\ell+1)}=\left\langle\sigma(z_{i;\alpha}^{(\ell)})^{2} \right\rangle_{K^{(\ell)}},\] showing that the infinite width covariance structure is non-degenerate on the singleton \(\left\{x_{\alpha}\right\}\) to order \(q=0\).
2. We recall that \(V_{\alpha}^{0}\) corresponds to the identity operator, so that Theorem 3.3 in the case \(\left|J\right|=0\) yields quantitative CLTs for the random variables \(z_{i;\alpha}^{(L+1)}\).
3. When \(\left|J\right|=0\) and \(L=1\), our estimates on the total variation distance strictly improve those proved in [15, Theorem 4] and [3, Theorem 4.1], that obtain a rate of convergence of the order \(n^{-1/2}\) by using some version of Stein's method in dimension one. In particular, the results of [15] are based on the **improved second-order Poincare inequalities** established in [79]. Similarly, in the case \(\left|J\right|=0\) and \(L\geq 1\) arbitrary, our estimates on \(\mathbf{W}_{1}\) strictly improve those that can be deduced by combining [10, Theorem 1.1] with the general relation \(\mathbf{W}_{1}\leq\mathbf{W}_{2}\).
4. In probabilistic approximations, it is typical to measure the distance between the laws of two random variables \(X,Y\) by using the so-called **Kolmogorov distance**, which is defined as \[d_{K}(X,Y):=\sup_{t\in\mathbb{R}}\left|\mathbb{P}(X>t)-\mathbb{P}(Y>t)\right|.\] (3.10) We observe that \(d_{TV}\geq d_{K}\) so that, in particular, our bound (3.8) implies an estimate on the Kolmogorov distance \(d_{K}(V_{\alpha}^{J}z_{i;\alpha}^{(L+1)},Z)\) that is strictly better than the one implied by the standard relation \(d_{K}(V_{\alpha}^{J}z_{i;\alpha}^{(L+1)},Z)\leq c\sqrt{\mathbf{W}_{1}(V_{ \alpha}^{J}z_{i;\alpha}^{(L+1)},Z)}\), with \(c\) an absolute constant - see [66, Remark C.22]. We refer the reader to [66, Appendix C], and the references therein, for further details on probabilistic distances.
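As a purely numerical illustration of Theorem 3.3 (again outside the formal development, with all parameters illustrative), one can compare the first output of a ReLU network at a fixed input with the limiting Gaussian of variance \(K^{(L+1)}_{\alpha\alpha}=2\|x_{\alpha}\|^{2}/n_{0}\), using the empirical \(\mathbf{W}_{1}\) estimator sketched after Definition 3.2. Note that the estimator carries its own sampling error of order \(\mathrm{trials}^{-1/2}\), so only the decay at moderate widths is visible:

```python
import numpy as np

rng = np.random.default_rng(2)

def output(x, widths, C_W=2.0):
    """One draw of z_{1;alpha}^{(L+1)} for a ReLU network with C_b = 0."""
    h, n_prev = x, x.shape[0]
    for n in widths:
        W = rng.normal(0.0, np.sqrt(C_W / n_prev), size=(n, n_prev))
        h, n_prev = np.maximum(W @ h, 0.0), n
    w = rng.normal(0.0, np.sqrt(C_W / n_prev), size=n_prev)  # output layer
    return w @ h

n0, L, trials = 4, 2, 20_000
x = rng.normal(size=n0)
K = 2.0 * (x @ x) / n0                       # infinite-width variance (ReLU)
for n in [16, 64, 256]:
    zs = np.sort([output(x, [n] * L) for _ in range(trials)])
    gs = np.sort(rng.normal(0.0, np.sqrt(K), size=trials))
    print(f"n = {n:4d}   empirical W1 ~ {np.mean(np.abs(zs - gs)):.4f}")
```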
### Finite-dimensional bounds
We now report what happens at the level of finite-dimensional distributions. For this, we recall that, for all integers \(m\geq 1\), the **convex distance** between the distributions of two \(m-\)dimensional random vectors \(X\) and \(Y\) is
\[d_{c}(X,Y):=\sup_{B}\left|\mathbb{P}(X\in B)-\mathbb{P}(Y\in B)\right|, \tag{3.11}\]
where the supremum runs over all convex \(B\subset\mathbb{R}^{m}\).
**Remark 3.4**.: The convex distance \(d_{c}\) is a natural generalization of (3.10) in a multivariate setting. Another popular distance for measuring the discrepancy between the distributions of random vectors is the so-called **multivariate Kolmogorov distance**, which is obtained from (3.11) by restricting the supremum to the class of all hyperrectangles \(R\subset\mathbb{R}^{m}\) (see e.g. [32] and the references therein). Our choice of \(d_{c}\) over the multivariate Kolmogorov distance is motivated by the following two features: (i) \(d_{c}\) is invariant with respect to orthogonal and affine transformations, and (ii) \(d_{c}\) can be directly connected to multivariate transport distances through the estimate (3.14) discussed below. In particular, property (i) will allow us to compare the distribution of the output of a given neural network and that of a possibly singular Gaussian random vector, see the forthcoming Theorem 3.5-**(2)** and its proof. We refer the reader to [11, 36] for some classical examples of the use of \(d_{c}\) in the context of the multivariate CLT, as well as to [68, 49] for a discussion of recent developments.
**Theorem 3.5**.: _Let \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) be a random neural network verifying Assumption 3.1 with a non-linearity \(\sigma\) that is polynomially bounded to order \(r\geq 1\) as in Definition 2.1; recall notation (3.2). Fix \(m\geq 1\), a set \(\mathcal{A}=\{\alpha_{1},\ldots,\alpha_{m}\}\), a finite collection of distinct non-zero network inputs_
\[\{x_{\alpha}:\alpha\in\mathcal{A}\}\subseteq\mathbb{R}^{n_{0}}\]
_and a collection of directional derivatives \(V=\{V_{1},...,V_{p}\}\) as in (2.4). Further, consider a family \(\mathbf{B}=\{(J_{k},\alpha_{k}):k=1,...,M\}\) of distinct pairs such that \(M\geq 2\), where \(J_{k}\in\mathbb{N}^{p}\) is a multi-index verifying \(|J_{k}|\leq r\) and \(\alpha_{k}\in\mathcal{A}\). Finally, for any multi-index \(J=(j_{1},\ldots,j_{p})\in\mathbb{N}^{p}\), use the notation (2.6) and set_
\[G:=\begin{pmatrix}V_{\alpha_{k}}^{J_{k}}\Gamma_{i;\alpha_{k}}^{(L+1)}\end{pmatrix} _{\begin{subarray}{c}1\leq i\leq n_{L+1}\\ (J_{k},\alpha_{k})\in\mathbf{B}\end{subarray}}\in\mathbb{R}^{M\times n_{L+1}},\]
_where \(\mathbb{R}^{n_{0}}\ni x_{\alpha}\mapsto(\Gamma_{1;\alpha}^{(L+1)},...,\Gamma _{n_{L+1};\alpha}^{(L+1)})\) is the centered Gaussian field with covariance_
\[\operatorname{Cov}\left(\Gamma_{i;\alpha}^{(L+1)},\Gamma_{j;\beta}^{(L+1)} \right)=\delta_{ij}K_{\alpha\beta}^{(L+1)},\]
_as defined in (1.3)._
**(1)**: _Suppose the infinite width covariance structure_ \(\{K^{(\ell)}:\ell=1,...,L+1\}\) _is non-degenerate to the order_ \(r\) _on_ \(\{x_{\alpha}:\alpha\in\mathcal{A}\}\) _with respect to_ \(V\)_, in the sense of Definition_ 2.4_. Then, the covariance matrix of_ \(G\) _is invertible, and there exists a constant_ \(C_{0}>0\) _depending on_ \(V,r,\mathbf{B},\mathcal{P}\) _such that_ \[d_{c}\left(\begin{pmatrix}V_{\alpha_{k}}^{J_{k}}z_{i;\alpha_{k}}\end{pmatrix}_{\begin{subarray}{c}1\leq i\leq n_{L+1}\\ (J_{k},\alpha_{k})\in\mathbf{B}\end{subarray}},G\right)\leq C_{0}\,n^{-1/2},\] (3.12)
_where we have implicitly regarded \(\big{(}V^{J_{k}}_{\alpha_{k}}z_{i;\alpha_{k}}\big{)}_{\begin{subarray}{c}1\leq i \leq n_{L+1}\\ (J_{k},\alpha_{k})\in\mathbf{B}\end{subarray}}\) and \(G\) as \((M\cdot n_{L+1})\)-dimensional random vectors._
**(2)**: _Assume that the non-linearity_ \(\sigma\) _is smooth (_\(\sigma\in C^{\infty}(\mathbb{R})\)_). Then, there exists a constant_ \(C_{1}>0\) _depending on_ \(V,r,\mathbf{B},\mathcal{P}\) _such that_
\[d_{c}\left(\big{(}V^{J_{k}}_{\alpha_{k}}z_{i;\alpha_{k}}\big{)}_{\begin{subarray} {c}1\leq i\leq n_{L+1}\\ (J_{k},\alpha_{k})\in\mathbf{B}\end{subarray}},G^{\prime}\right)\leq C_{1}\,n ^{-1/2}, \tag{3.13}\]
_where_
\[G^{\prime}:=\big{(}V^{J_{k}}_{\alpha_{k}}\Gamma^{\prime}_{i;\alpha_{k}}\big{)} _{\begin{subarray}{c}1\leq i\leq n_{L+1}\\ (J_{k},\alpha_{k})\in\mathbf{B}\end{subarray}}\in\mathbb{R}^{M\times n_{L+1}},\]
_and_ \(\mathbb{R}^{n_{0}}\ni x_{\alpha}\mapsto(\Gamma^{\prime}_{1;\alpha},...,\Gamma ^{\prime}_{n_{L+1};\alpha})\) _is the centered Gaussian field with covariance_
\[\operatorname{Cov}\big{(}\Gamma^{\prime}_{i;\alpha},\Gamma^{\prime}_{j;\beta }\big{)}=\delta_{ij}\mathbb{E}[\Sigma^{(L)}_{\alpha\beta}]=\delta_{ij}\kappa ^{(L+1)}_{\alpha\beta},\]
_with_ \(\Sigma^{(L)}_{\alpha\beta}\) _and_ \(\kappa^{(L+1)}_{\alpha\beta}\) _defined according to (_2.8_) and (_2.7_), respectively._
We prove Theorem 3.5 in §6.2. Before providing in the next section our infinite-dimensional results, we make several remarks, where we write for simplicity
\[\big{(}V^{J_{\ell}}z_{i;\alpha_{\ell}}\big{)}:=\big{(}V^{J_{\ell}}_{\alpha_{ \ell}}z_{i;\alpha_{\ell}}\big{)}_{\begin{subarray}{c}1\leq i\leq n_{L+1}\\ (J_{\ell},\alpha_{\ell})\in\mathbf{B}\end{subarray}}.\]
1. Under the assumptions of Point **(2)** of Theorem 3.5, one might have that the covariance matrix of the vector \(G\) is singular. In this case, the law of \(G\) is supported by a lower-dimensional subspace \(\mathcal{L}\subset\mathbb{R}^{M\cdot n_{L+1}}\), and, in principle, one might have that \(\mathbb{P}\left[\big{(}V^{J_{\ell}}z_{i;\alpha_{\ell}}\big{)}\in\mathcal{L} \right]=0\) for any choice of \(n_{1},...,n_{L}\), which would imply in turn \[d_{c}\left(\big{(}V^{J_{\ell}}z_{i;\alpha_{\ell}}\big{)}\,,G\right)=1.\] This difficulty is resolved by replacing \(G\) with a vector \(G^{\prime}\) having the same covariance matrix as \(\big{(}V^{J_{\ell}}z_{i;\alpha_{\ell}}\big{)}\).
2. Theorem 1.1 in [10] allows one to deduce an estimate analogous to (3.12)-(3.13) in the case \(r=0\), where the left-hand sides of these bounds are replaced by the quantities \(\mathbf{W}_{2}\big{(}z_{i;\alpha}^{(L+1)}\,,\,G\big{)}\) and \(\mathbf{W}_{2}\big{(}z_{i;\alpha}^{(L+1)}\,,\,G^{\prime}\big{)}\), respectively -- see Definition 3.2. As discussed in Section 6.2, our findings are based on some refinements of the bounds established in [68] by means of the so-called _Stein's method_ for multivariate approximations, whereas [10] exploit optimal transport arguments, close to those that we will use in an infinite-dimensional setting (see the forthcoming Section 3.4). It is also remarkable that our bounds (in the case \(r=0\)) are strictly stronger than those obtained by combining the results of [10] with the following standard estimate (proved in [68, Proposition A.1]): if \(N\) is a centered Gaussian vector with invertible covariance matrix \(\Sigma\), then, for all random vectors \(F\), \[d_{c}(F,N)\leq C\sqrt{\|\Sigma\|_{HS}}\cdot\mathbf{W}_{1}(F,N)^{1/2}.\] (3.14) Indeed, applying (3.14) to \(F=\Big{(}z_{i;\alpha}^{(L+1)}\Big{)}\) and \(N=G\) or \(N=G^{\prime}\) (and using the fact that \(\mathbf{W}_{1}\leq\mathbf{W}_{2}\)), one can infer from [10] an upper bound of the order \(n^{-1/4}\) on
\(d_{c}\left(\left(z_{i;\alpha}^{(L+1)}\right),G\right)\) and \(d_{c}\left(\left(z_{i;\alpha}^{(L+1)}\right),G^{\prime}\right)\), as \(n\to\infty\).
3. (_Dimensional dependence_) An inspection of the arguments rehearsed in our proofs (see Section 6.3) reveals the following estimates: (i) the constant \(C_{0}\) in (3.12) is such that \[C_{0}\leq a_{0}(M\cdot n_{L+1})^{65/24},\] (3.15) where \(a_{0}\) depends on \(\sigma,L,n_{0},C_{b},C_{W}\); (ii) the constant \(C_{1}\) in (3.13) is such that \[C_{1}\leq a_{1}\lambda_{+}^{-3/2}R^{65/24},\] where the constant \(a_{1}\) depends on \(\sigma,L,C_{b},C_{W},n_{0}\), the symbol \(R\) denotes the rank of the covariance matrix of \(G^{\prime}\), and \(\lambda_{+}\) is the smallest strictly positive eigenvalue of the covariance matrix of \(G^{\prime}\). This implies in turn the following rough estimate: if \(\lambda_{+}\) is bounded away from zero as \(M\to\infty\), then \(C_{1}=O(M^{65/24})\). The exponent \(65/24\) does not carry any specific meaning: it is an artifact of the (recursive) techniques used in [68] and can be improved in special situations. To see this, consider for instance the elementary situation where \(L=1\) and \(\mathbf{B}=\{(0,\alpha_{0})\}\) (so that \(|\mathbf{B}|=1\)). In this case, \(\left(V^{J_{\ell}}z_{i;\alpha_{\ell}}\right)=(z_{i;\alpha_{0}})_{1\leq i\leq n_{2}}\) has the law of some multiple of a random vector with the form \[Z_{n_{1}}=\frac{1}{\sqrt{n_{1}}}\sum_{k=1}^{n_{1}}Y_{k},\] where the \(\{Y_{k}\}\) are i.i.d. centered \(n_{2}\)-dimensional Gaussian vectors with identity covariance matrices. Now, assuming for simplicity that the non-linearity \(\sigma\) is bounded, one deduces from [11, 36] (and some elementary computations left to the reader) that, denoting by \(Z\) a standard \(n_{2}\)-dimensional Gaussian vector, \[d_{c}(Z_{n_{1}},Z)\leq B\,n_{1}^{-1/2},\] where \(B=O(n_{2}^{7/4})\) as \(n_{2}\to\infty\) (and the implicit constants are absolute). We also observe that, in the case where \(\mathbf{B}=\{(0,\alpha_{0})\}\) (one input, no derivatives), the exponent \(65/24\) implicitly appearing in our bound can be reduced to \(53/24\).
4. We provide here one important example in which the infinite width covariance structure fails to be canonically non-degenerate but is instead non-degenerate with respect to a particular set of directional derivatives. Specifically, consider \[\sigma(t)=\text{ReLU}(t)=\max\left\{0,t\right\},\quad C_{b}=0,\quad C_{W}=2, \quad r=1,\] and fix a network input \(x_{\alpha}\) with \(||x_{\alpha}||=1\). Note that, since \(x_{\alpha}\mapsto z_{i;\alpha}^{(\ell)}\) is homogeneous of degree one with respect to \(x_{\alpha}\), we have \[z_{i;\alpha}^{(\ell)}=(x_{\alpha}\cdot\nabla)z_{i;\alpha}^{(\ell)}.\] A direct computation now shows that for any \(v_{1},v_{2}\in\mathbb{R}^{n_{0}}\) \[V_{j}=v_{j}\cdot\nabla=\sum_{i=1}^{n_{0}}(v_{j})_{i}\partial_{x_{i}}\quad \Rightarrow\quad V_{j}V_{k}K_{\alpha\alpha}^{(\ell)}=\frac{2}{n_{0}}\left\langle v _{j},\,v_{k}\right\rangle.\] (3.16)
Thus, writing \(\partial_{x_{0}}=\mathrm{id}\) we have \[\left(\partial_{x_{i}}\partial_{x_{j}}K^{(\ell)}_{\alpha\alpha}\right)_{i,j=0,\ldots,n_{0}}=\frac{2}{n_{0}}\mathrm{Gram}\left(x_{\alpha},e_{1},\ldots,e_{n_{0}}\right),\] where \(e_{j}\) is the \(j\)-th standard unit vector. The Gram matrix is not invertible since the vectors are not linearly independent. In contrast, taking \(V=(V_{1},\ldots,V_{n_{0}-1})\) to be partial derivatives in any set of directions that form a basis for the orthogonal complement of \(x_{\alpha}\), we see from (3.16) that the infinite width covariance structure is indeed non-degenerate to order \(1\) on \(\{x_{\alpha}\}\) with respect to \(V\) (a numerical check of this rank drop is sketched right after these remarks).
* Similarly to the results obtained in [10], and as noted in the discussion of dimensional dependence above, the bounds we obtain diverge when the dimension \(M\) increases. It is hence natural to focus on functional bounds, which are indeed addressed in the next subsection.
* In the case where \(\mathbf{B}=\{(0,\alpha_{0})\}\) (one input, no derivatives), our results are comparable to [3, Theorem 6.5], where some slightly different set of assumptions on the nonlinearity \(\sigma\) is considered (see [66, Chapters 4 and 6] for a discussion).
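The rank drop discussed in the ReLU remark above is elementary linear algebra and can be checked numerically. The sketch below (illustrative only, with an arbitrary unit input) verifies that the canonical order-one covariance matrix is singular, while its restriction to directions orthogonal to \(x_{\alpha}\) is invertible, in accordance with (3.16):

```python
import numpy as np

rng = np.random.default_rng(3)
n0 = 5
x = rng.normal(size=n0)
x /= np.linalg.norm(x)                       # unit input, as in the remark

# (2/n0) * Gram(x, e_1, ..., e_{n0}): n0 + 1 vectors in R^{n0}, hence singular
vecs = np.vstack([x, np.eye(n0)])
G = (2.0 / n0) * vecs @ vecs.T
print("rank:", np.linalg.matrix_rank(G), "of", n0 + 1)

# an orthonormal basis of the orthogonal complement of x restores invertibility
Q, _ = np.linalg.qr(np.column_stack([x, rng.normal(size=(n0, n0 - 1))]))
V = Q[:, 1:]                                 # columns orthogonal to x
GV = (2.0 / n0) * V.T @ V                    # = (2/n0) * Identity, cf. (3.16)
print("min eigenvalue:", np.linalg.eigvalsh(GV).min())   # strictly positive
```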
### Functional Bounds
For the rest of the section, we let \(\mathbb{R}^{n_{0}}\ni x_{\alpha}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L +1}}\) be a random neural network verifying Assumption 3.1; in particular, \(\sigma\) is polynomially bounded to the order \(r\geq 1\). To simplify the discussion, from now on we will use the symbol \(\mathcal{M}_{r}\) to denote the class of all multi-indices \(J\in\mathbb{N}^{n_{0}}\) such that \(|J|\leq r\); with such a notation, one has that \(\mathcal{M}_{0}=\{0\}\). For the rest of the section, \(\mathbb{U}\) is an open ball contained in \(\mathbb{R}^{n_{0}}\).
**Remark 3.6** (Spaces of smooth functions).:
1. For \(k\geq 0\), we write \(C^{k}(\mathbb{U};\mathbb{R}^{n_{L+1}}):=C^{k}(\mathbb{U})\) to indicate the class of \(\mathbb{R}^{n_{L+1}}\)-valued, \(k\) times differentiable functions on \(\mathbb{U}\). We also denote by \(C^{k}_{b}(\mathbb{U})\) the subspace of \(C^{k}(\mathbb{U})\) composed of functions whose derivatives of order \(\leq k\) are bounded and uniformly continuous on \(\mathbb{U}\). It is a well-known fact (see e.g. [26, Section 1.3]) that the elements of \(C^{k}_{b}(\mathbb{U})\), as well as their derivatives, admit continuous extensions to the closure \(\bar{\mathbb{U}}\). It follows that \(C^{k}_{b}(\mathbb{U})\) can be identified with the space \(C^{k}(\bar{\mathbb{U}})\), that we endow with the supremum norm, defined as follows: for \(f=(f_{1},...,f_{n_{L+1}})\in C^{k}(\bar{\mathbb{U}})\), \[\|f\|_{C^{k}(\bar{\mathbb{U}})}:=\max_{i\in[n_{L+1}]}\max_{J\in\mathcal{M}_{k}}\max_{x\in\bar{\mathbb{U}}}|D^{J}f_{i}(x)|.\] (3.17) It is clear that the space \(C^{k}(\bar{\mathbb{U}})\) is Polish. In this paper, we will sometimes use the following fact (whose proof can be deduced along the lines of [61, proof of Lemma A.2]): for every \(k\geq 0\) and every \(m>k\), the set \(C^{m}(\bar{\mathbb{U}})\) is a Borel subset of \(C^{k}(\bar{\mathbb{U}})\).
Analogous conventions and results hold for the spaces \(C^{k,k}(\mathbb{U}\times\mathbb{U})\) and \(C^{k,k}(\bar{\mathbb{U}}\times\bar{\mathbb{U}})\), \(k\geq 0\).
2. For \(r\geq 0\), we define \[\mathbb{W}^{r,2}(\mathbb{U}):=\mathbb{W}^{r,2}(\mathbb{U};\mathbb{R}^{n_{L+1}})\] to be the Sobolev space obtained as the closure of square-integrable \(\mathbb{R}^{n_{L+1}}\)-valued mappings on \(\mathbb{U}\) with square-integrable (weak) derivatives up to the order \(r\), see [1, Chapter 3]. We observe that \(\mathbb{W}^{r,2}(\mathbb{U})\) is a closed subspace of the Hilbert space \[H:=L^{2}\big{(}\mathcal{M}_{r}\times[n_{L+1}]\times\mathbb{U},d\nu_{0}\otimes d \nu_{1}\otimes dx\big{)}\] (3.18) (from which it inherits the inner product), where \([n]:=\{1,...,n\}\), and \(\nu_{0}\) and \(\nu_{1}\) are the counting measures on \(\mathcal{M}_{r}\) and \([n_{L+1}]\), respectively; plainly, for \(r=0\), the spaces \(\mathbb{W}^{0,2}(\mathbb{U})\) and \(H=L^{2}\big{(}[n_{L+1}]\times\mathbb{U},d\nu_{1}\otimes dx\big{)}\) coincide.
3. For every \(r\geq 0\), there exists a canonical continuous injection \(\iota\) from \(C^{r}(\bar{\mathbb{U}})\) to \(\mathbb{W}^{r,2}(\mathbb{U})\). This implies that, if \(X\) is a random element with values in \(C^{r}(\bar{\mathbb{U}})\), then \(\iota(X)\) is a well-defined random element with values in \(\mathbb{W}^{r,2}(\mathbb{U})\). For the sake of brevity, we will often refer to this fact by saying that "\(X\) is regarded as a random element with values in \(\mathbb{W}^{r,2}(\mathbb{U})\)" (or some equivalent formulation), and write \(X\) instead of \(\iota(X)\) by a slight abuse of notation. Similar conventions are tacitly adopted to deal with the spaces \(C^{r,r}(\bar{\mathbb{U}}\times\bar{\mathbb{U}})\) and \(\mathbb{W}^{r,2}(\mathbb{U})\otimes\mathbb{W}^{r,2}(\mathbb{U})\).
4. We will use the following special consequence of the **Sobolev embedding theorem** (as stated e.g. in [26, Theorem 2.72] or [28, Lemma 4.3]): if \(\mathbb{U}\) is an open ball (or, more generally, a Lipschitz domain) and \(u\in C^{\infty}(\bar{\mathbb{U}}):=\bigcap_{k}C^{k}(\bar{\mathbb{U}})\), then, for all \(k\geq 1\), \[\|u\|_{C^{k}(\mathbb{U})}\leq A\cdot\|u\|_{\mathbb{W}^{r,2}(\mathbb{U})},\] (3.19) where \(r:=k+1+\lfloor\frac{n_{0}}{2}\rfloor\), \(\lfloor y\rfloor\) stands for the integer part of \(y\), and \(A\) is an absolute constant uniquely depending on \(\mathbb{U}\).
#### 3.4.1. Random fields as random elements
Given the ball \(\mathbb{U}\subset\mathbb{R}^{n_{0}}\), we define the random field
\[z_{\mathbb{U}}^{(L+1)}:=\{z_{i;x_{\alpha}}^{(L+1)}:i=1,...,n_{L+1}\,;\,x_{ \alpha}\in\mathbb{U}\}. \tag{3.20}\]
Our aim is to compare the law of \(z_{\mathbb{U}}^{(L+1)}\) with that of
\[\Gamma_{\mathbb{U}}^{(L+1)}=\{\Gamma_{i;\alpha}^{(L+1)}:i\in[n_{L+1}]\,;\,x_{\alpha}\in\mathbb{U}\}, \tag{3.21}\]
where, as before, \(x_{\alpha}\mapsto(\Gamma_{1;\alpha}^{(L+1)},...,\Gamma_{n_{L+1};\alpha}^{(L+1)})\) is the centered Gaussian field with covariance
\[\mathbb{E}(\Gamma_{i;\alpha}^{(L+1)}\Gamma_{j;\beta}^{(L+1)})=\delta_{ij}K_{ \alpha\beta}^{(L+1)}, \tag{3.22}\]
with \(K^{(L+1)}\) recursively defined according to (1.3). In view of Remarks 2.3, 2.6 and 3.6-(c), \(z_{\mathbb{U}}^{(L+1)}\) and \(\Gamma_{\mathbb{U}}^{(L+1)}\) can be regarded as both \(C^{q}(\mathbb{U})\)- and \(\mathbb{W}^{q,2}(\mathbb{U})\)-valued random elements, for all \(q\in\{0,...,r-1\}\). The case \(q=r\) is more delicate, however, and we will sometimes make the following assumption:
**Assumption 3.7**.: The domain \(\bar{\mathbb{U}}\) does not contain the origin, the non-linearity \(\sigma\) is polynomially bounded to the order \(r\geq 1\), and both \(z_{\mathbb{U}}^{(L+1)}\) and \(\Gamma_{\mathbb{U}}^{(L+1)}\) are random elements with values in \(\mathbb{W}^{r,2}(\mathbb{U})\) with probability \(1\).
Though we do not know if it always holds, this assumption is easy to verify in the following three cases:
1. \(\sigma\) is smooth (i.e. in \(C^{\infty}(\mathbb{R})\)) and \(r\geq 1\) is arbitrary.
2. \(\sigma\) is ReLU or leaky ReLU and \(r=1\).
3. \(\sigma\) is any non-linearity that is polynomially bounded to the order \(r\geq 1\) and \(C_{b}>0\).
Indeed, case (i) is trivial. Case (ii) follows from the observation that the network function is continuous and piecewise linear, subordinate to a partition of the input space \(\mathbb{R}^{n_{0}}\) into a finite collection of convex polyhedra on which the network is affine. And case (iii) follows from the fact that, when \(C_{b}>0\), the set of inputs \(x_{\alpha}\) for which \(\sigma\) is not \(r\)-times differentiable at \(z_{i;\alpha}^{(\ell)}\) for some \(i,\ell\) has Lebesgue measure \(0\). With this in mind, from now on, for an integer \(q\leq r-1\), let us write \(\mathbf{W}_{2;q}\) to indicate the distance \(\mathbf{W}_{2}\) defined in (3.4) for \(K=\mathbb{W}^{q,2}(\mathbb{U})\) (and we extend this definition to \(q=r\) when Assumption 3.7 holds).
#### 3.4.2. Bounds in Sobolev spaces
For \(q\leq r\), we canonically associate with the covariance (3.22) the (symmetric and positive) trace-class operator \(\mathbf{K}:\mathbb{W}^{q,2}(\mathbb{U})\to\mathbb{W}^{q,2}(\mathbb{U})\) given by
\[\mathbb{W}^{q,2}(\mathbb{U})\ni h=\left\{h_{i}(x):i\in[n_{L+1}],x \in\mathbb{U}\right\}\mapsto\mathbf{K}h \tag{3.23}\] \[:=\left\{(\mathbf{K}h)_{j}(x_{\beta})=\sum_{J\in\mathcal{M}_{q}} \int_{\mathbb{U}}D_{\alpha}^{J}h_{j}(x_{\alpha})D_{\alpha}^{J}K_{\alpha\beta}^ {(L+1)}dx_{\alpha}:j\in[n_{L+1}],x_{\beta}\in\mathbb{U}\right\},\]
and denote by
\[\lambda_{1;q}^{(L+1)}\geq\lambda_{2;q}^{(L+1)}\geq\cdots\geq\lambda_{k;q}^{(L+ 1)}\geq\cdots\geq 0\]
its eigenvalues.
**Remark 3.8**.: Let \(T\) be a generic smooth covariance kernel, and let \(\mathbf{K}_{T}:\mathbb{W}^{q,2}(\mathbb{U})\to\mathbb{W}^{q,2}(\mathbb{U})\) be the operator obtained from (3.23) by replacing \(K^{(L+1)}\) with \(T\). Then, exploiting the fact that \(\mathbf{K}_{T}\,g=0\) for all \(g\in H\cap\mathbb{W}^{q,2}(\mathbb{U})^{\perp}\) (with \(H\) defined as in (3.18)), one deduces that
\[\|\mathbf{K}_{T}\|_{HS}^{2}=\|T\|_{\mathbb{W}^{q,2}(\mathbb{U})\otimes \mathbb{W}^{q,2}(\mathbb{U})}^{2}:=n_{L+1}\times\sum_{I,J\in\mathcal{M}_{q}} \int_{\mathbb{U}}\int_{\mathbb{U}}D_{\alpha}^{J}D_{\beta}^{I}T(x_{\alpha},x_{ \beta})^{2}dx_{\alpha}dx_{\beta} \tag{3.24}\]
**Remark 3.9**.: For \(q\leq r-1\), and adopting the notation (3.23), one has that
\[\operatorname{Tr}\left(\mathbf{K}\right)=n_{L+1}\sum_{J\in\mathcal{M}_{q}} \int_{\mathbb{U}}\left(D_{\alpha}^{J}D_{\beta}^{J}K_{\alpha\beta}^{(L+1)} \left|{}_{x_{\alpha}=x_{\beta}}\right)\,dx_{\alpha}=\sum_{k\geq 1}\lambda_{k;q}^{(L+ 1)}, \tag{3.25}\]
where \(\operatorname{Tr}\left(\cdot\right)\) stands for the trace operator. Relation (3.25) can be deduced e.g. from [24, Proposition 1.8 and its proof], combined with the fact that - by virtue of Remark 2.3 - one has that \(\mathbb{E}[\|\Gamma_{\mathbb{U}}^{(L+1)}\|_{\mathbb{W}^{q,2}(\mathbb{U})}^{2} ]<\infty\).
In addition to the Wasserstein distances \(\mathbf{W}_{2;q}\) introduced above, we will consider a smoother notion of discrepancy between the distributions of Hilbert space-valued random elements, that we borrow from [16]. More precisely, following Bourguin and Campese [16, Section 3.1],
given two random elements \(X,Y\) with values in a real separable Hilbert space \(K\), we define the distance \(d_{2}\), between the distributions of \(X\) and \(Y\), to be the quantity
\[d_{2}(X,Y):=\sup_{\begin{subarray}{c}g\in C_{b}^{2}(K):\\ \sup_{x\in K}\|\nabla^{2}g(x)\|_{K^{\otimes 2}}\leq 1\end{subarray}}|\mathbb{E}[g(X) ]-\mathbb{E}[g(Y)]|, \tag{3.26}\]
where \(C_{b}^{2}(K)\) indicates the class of twice Frechet differentiable real-valued mappings on \(K\) with bounded second derivative; it is a standard fact that \(d_{2}\) metrizes convergence in distribution on \(K\).
The following statement contains explicit bounds on the functional Gaussian approximation of random neural networks with arbitrary depth. It is one of the main contributions of our work.
**Theorem 3.10**.: _Let the above assumptions prevail (in particular, \(\sigma\) is polynomially bounded to the order \(r\geq 1\)), and suppose moreover that the infinite width covariance structure \(\{K^{(\ell)}:\ell=1,...,L+1\}\) is canonically non-degenerate up to the order \(q\leq r-1\), in the sense of Definition 2.4, for all finite subsets \(x_{\mathcal{A}}\subset\mathbb{U}\). Then, one has the following two estimates:_
1. _There exists a constant_ \(C>0\) _depending on_ \(q,\mathbb{U},\mathcal{P}\) _(see (_3.2_)) such that,_ \[d_{2}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)}\right)\leq Cn^{ -1/2},\] (3.27) _where_ \(d_{2}\) _is the distance defined in (_3.26_), with_ \(K=\mathbb{W}^{q,2}(\mathbb{U})\)_._
2. _Suppose that_ \[\sum_{k=1}^{\infty}\left(\lambda_{k;q}^{(L+1)}\right)^{\frac{1}{2}}<\infty;\] (3.28) _then, there exists a constant_ \(C>0\) _depending on_ \(q,\mathbb{U},\mathcal{P}\) _such that_ \[\mathbf{W}_{2;q}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)} \right)\leq Cn^{-\frac{1}{8}}.\] (3.29)
_The conclusions of Points_ **(1)**_-_**(2)** _hold, more generally, for every \(q\leq r\) if one of the following sets of assumptions is verified:_
1. _the non-linearity_ \(\sigma\) _is smooth (regardless of any non-degeneracy assumption on the infinite-width covariance structure);_
2. _Assumption_ 3.7 _holds, the infinite width covariance structure_ \(\{K^{(\ell)}:\ell=1,...,L+1\}\) _is canonically non-degenerate up to the order_ \(r\)_, and (_3.25_) is verified for_ \(q=r\)_._
**Remark 3.11**.: A careful inspection of the proof reveals that the constants in Theorem 3.10 depend on the volume measure of \(\mathbb{U}\) and hence, indirectly, on the input dimension \(n_{0}\). To make the comparison with the existing literature more transparent, it is convenient to normalise \(\mathbb{U}\) to have unit measure, analogously to what was done in [31], [50] and [19], see the discussion below in Example 3.13 and Section 4.
**Remark 3.12**.: Consider the case in which \(r=1\) and the infinite-width covariance structure is canonically non-degenerate to the order \(q=0\) on every finite collection of inputs \(x_{\mathcal{A}}\subset\mathbb{R}^{n_{0}}\). Then, the conclusions of Points **(1)** and **(2)** in Theorem 3.10 continue to hold when replacing the ball \(\mathbb{U}\) with any bounded subset \(\mathbb{V}\subset\mathbb{R}^{n_{0}}\) (possibly lower dimensional) endowed with
some finite measure \(\mu\). In this case, one has to interpret \(\mathbb{W}^{0,2}(\mathbb{U})\) to be the Hilbert space \(L^{2}(\mathbb{V},d\mu)\), in such a way that the eigenvalues considered in (3.28) are those of the integral operator from \(L^{2}(\mathbb{V},\mu)\) to \(L^{2}(\mathbb{V},\mu)\) given by
\[h\mapsto\int_{\mathbb{V}}K^{(L+1)}(x,y)h(y)\mu(dy). \tag{3.30}\]
**Example 3.13**.: If \(\sigma(x)=\max\{0,x\}\) (ReLU), then we know that Assumption 3.7 is verified, and that the infinite-width covariance structure is canonically non-degenerate up to the order \(q=0\) on every finite collection of inputs \(x_{\mathcal{A}}\subset\mathbb{R}^{n_{0}}\). We can therefore apply the content of Remark 3.12 to the case \(\mathbb{V}=\mathbb{S}^{n_{0}-1}\) (sphere) endowed with the unit mass Haar measure. In this case, the results of [13, Corollary 2] imply that (3.28) is true for \(q=0\), since \(\sum_{k=1}^{\infty}(\lambda_{k;0}^{(L+1)})^{p}<\infty\) for all \(p>1/(2+n_{0})\). This yields that there exists a constant \(C>0\) depending on \(\mathcal{P}\) such that
\[\mathbf{W}_{2;0}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)} \right)\leq Cn^{-\frac{1}{8}}.\]
This result can be compared with the bounds established in similar circumstances but for shallow networks (i.e., for \(L=1\)) by [31], where a logarithmic rate \(O((\log n)^{-1})\) is obtained, and by [50], whose bound is of order \(O(n^{-3/(4n_{0}-2)})\); see Section 4 for more discussion and details.
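For this sphere example one can also inspect the spectrum numerically. The sketch below (illustrative grid size and depth; the helper is ours) iterates the closed-form ReLU covariance recursion, namely the arc-cosine kernel formula \(\mathbb{E}[\sigma(u)\sigma(v)]=\tfrac{\sqrt{K_{uu}K_{vv}}}{2\pi}\,(\sin\theta+(\pi-\theta)\cos\theta)\) with \(\cos\theta=K_{uv}/\sqrt{K_{uu}K_{vv}}\), on a grid over \(\mathbb{S}^{1}\) (so \(n_{0}=2\)) with the unit-mass measure, and computes the eigenvalues of the discretized operator (3.30); the partial sums of \(\sqrt{\lambda_{k}}\) flatten quickly, consistent with (3.28) for \(q=0\):

```python
import numpy as np

def relu_cov_recursion(C, depth):
    """Iterate K^{(l+1)} = 2 E[ReLU(u) ReLU(v)] (C_b = 0, C_W = 2) using the
    arc-cosine closed form, starting from the covariance matrix C."""
    for _ in range(depth):
        d = np.sqrt(np.diag(C))
        cs = np.clip(C / np.outer(d, d), -1.0, 1.0)
        t = np.arccos(cs)
        C = np.outer(d, d) * (np.sin(t) + (np.pi - t) * cs) / np.pi
    return C

m = 400
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
K1 = np.cos(theta[:, None] - theta[None, :])  # K^{(1)} = (2/n_0)<x, y> on S^1
KL = relu_cov_recursion(K1, depth=2)          # K^{(L+1)} with L = 2
lam = np.clip(np.linalg.eigvalsh(KL / m), 0.0, None)[::-1]   # operator (3.30)
print("trace             :", lam.sum())       # = int K(x,x) dmu, numerically
print("sum sqrt(lambda_k):", np.sqrt(lam).sum())             # cf. (3.28)
```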
#### 3.4.3. Embedding of smooth non-linearities
In this section, we assume that Assumption 3.1 is verified for a certain non-linearity \(\sigma\in C^{\infty}(\mathbb{R})\) that is moreover polynomially bounded to the order \(r\), for every \(r\geq 1\) (this is equivalent to saying that all derivatives of \(\sigma\) are polynomially bounded). In this case, one has that \(K^{(L+1)}\in C^{\infty,\infty}(\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{0}})\), and the results stated in [28, Lemma 4.3] yield that, since \(\mathbb{U}\) is an open ball, the estimate (3.28) holds for all \(q\geq 0\). Thanks to the last part of Theorem 3.10, this implies in turn that, for all \(r\geq 1\), there exists a constant \(C>0\) depending on \(r,\mathcal{P}\) (see (3.2)) such that
\[\mathbf{W}_{2;r}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)} \right)\leq Cn^{-\frac{1}{8}}. \tag{3.31}\]
By using (3.19) one deduces from (3.31) some remarkable uniform estimates, that can be expressed in terms of the transport distance \(\mathbf{W}_{\infty;k}\), defined as follows: given two random elements \(X,Y\) with values in \(C^{k}(\bar{\mathbb{U}})\), we set
\[\mathbf{W}_{\infty;k}(X,Y):=\left(\inf_{(T,S)}\mathbb{E}[\|T-S\|_{C^{k}(\bar{ \mathbb{U}})}^{2}]\right)^{1/2}, \tag{3.32}\]
where the infimum runs over all random elements \((T,S)\) such that \(T\stackrel{{ law}}{{=}}X\) and \(S\stackrel{{ law}}{{=}}Y\).
**Theorem 3.14**.: _Let the assumptions of the present section prevail, and fix \(k\geq 1\). Then, there exists a probability space \((\Omega_{0},\mathcal{F}_{0},\mathbb{P}_{0})\) supporting two random elements \(X,Y\) with values in \(C^{k}(\bar{\mathbb{U}})\) such that_
1. \(X\stackrel{{ law}}{{=}}z_{\mathbb{U}}^{(L+1)}\)_;_
2. \(Y\stackrel{{ law}}{{=}}\Gamma_{\mathbb{U}}^{(L+1)}\)_;_
3. _There exists a constant_ \(C>0\) _depending on_ \(r,k,\mathbb{U},\mathcal{P}\) _such that_ \[\mathbb{E}_{0}\left[\left\|X-Y\right\|_{C^{k}(\bar{\mathbb{U}})}^{2}\right]^{1/ 2}\leq Cn^{-\frac{1}{8}}.\] (3.33) _In particular, one has that,_ \[\mathbf{W}_{\infty;k}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)} \right)\leq Cn^{-\frac{1}{8}},\] (3.34) _where the constant_ \(C>0\) _is the same as in (_3.31_)._
The previous result implicitly uses the fact that, under the assumptions in the statement, \(z_{\mathbb{U}}^{(L+1)}\) and \(\Gamma_{\mathbb{U}}^{(L+1)}\) take values in \(C_{b}^{k}(\mathbb{U})\) (and, consequently, in \(C^{k}(\bar{\mathbb{U}})\)) with probability one.
**Remark 3.15**.:
1. The results in Theorem 3.14 can be exploited to obtain useful bounds for the Wasserstein distance between the finite-dimensional distributions of neural networks and their Gaussian limit. Indeed, for some \(i\in\{1,\ldots,n_{L+1}\}\) consider the two \(M\)-dimensional vectors \((z_{i;\alpha_{k}}^{(L+1)})_{k=1,2,...,M}\), \(G\); under the previous assumptions and notation, equation (3.34) and some simple algebra yield a bound of the form \[\mathbf{W}_{2}(z_{i;\alpha}^{(L+1)},G)\leq C\times\sqrt{M}\times\left(\frac{1}{n}\right)^{\frac{1}{8}},\] (3.35) where \(\mathbf{W}_{2}\) is the distance defined in (3.4) for \(K=\mathbb{R}^{M}\). In view of the dependence of the constants in equations (3.12) and (3.13) on the dimension of the vectors \((z_{i;\alpha}^{(L+1)},G)\) (see also equation (3.15)), it is immediate to check that, for \(M\) large enough with respect to \(n\), the bound in (3.35) can be tighter than those in Theorem 3.5.
2. We believe that the uniform convergence results established in Theorem 3.14 can open several avenues for further research. In particular, these results are the first step to establish weak convergence for geometric and topological functionals of neural networks at the initialization step: for instance, the distribution of their critical points and extrema, the Euler-Poincaré characteristic of their excursion sets and their nodal volume. On the one hand, these functionals can provide neat characterizations of the complexity of the neural network landscape; on the other hand, the analytic determination of their expected values and moments is made possible, in the limiting Gaussian case, by classical tools of stochastic geometry such as the Kac-Rice Theorem and the Gaussian Kinematic Formula, see [6] and [2]. We leave these topics for further research.
3. In our proofs, we will exploit the following property, valid for all \(k\geq 0\) and analogous to (3.5): if \(X,Y\) are random elements with values in \(C^{k}(\bar{\mathbb{U}})\) and \(V\) is a random element defined on the same probability space as \(X\) and taking values in some Polish space \(\mathcal{V}\) and with law \(\mathbb{P}_{V}\), then \[\mathbf{W}_{\infty;k}^{2}(X,Y)\leq\int_{\mathcal{V}}\mathbf{W}_{\infty;k}^{2}( \mathbb{P}_{X|V=v},\mathbb{P}_{Y})\,d\mathbb{P}_{V}(v),\] (3.36) see again [9].
## 4. Related Work
A few papers have recently addressed quantitative functional central limit theorems for neural networks in the shallow case of \(L=1\) hidden layers. More precisely, [31] and [50] have studied one-hidden-layer neural networks, with random initialization models that are slightly different than ours. In particular, in both cases it is assumed that \(x_{\alpha}\in\mathbb{S}^{n_{0}-1}\); also, their random weights are assumed to be Rademacher sequences for the second layer (in the special case of polynomial activations, [50] allows for more general random weights with finite fourth moments). The random coefficients in the inner layer are Gaussian for [31], uniform on the sphere in [50]. For activation functions which are polynomials of order \(p\), the bounds by [31] and [50] are respectively of order
\[\mathbf{W}_{2;0}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{\mathbb{S}^{n_{ 0}-1}}\right)\leq C\frac{n_{0}^{5p/6-1/12}}{n^{1/6}}\,\ \mathbf{W}_{2;0}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{ \mathbb{S}^{n_{0}-1}}\right)\leq C\frac{(n_{0}+p)^{n_{0}}}{n^{1/2}}\ ;\]
these rates can be compared with those given above in Theorem 3.10, which for \(L=1\) and \(\mathbb{U}=\mathbb{S}^{n_{0}-1}\) yield a decay of square root order, irrespective of the input dimension. In the more relevant case of ReLU nonlinearities, these same authors obtain, respectively:
\[\mathbf{W}_{2;0}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{\mathbb{S}^{n_ {0}-1}}\right)\leq C\left(\frac{\log\log n\times\log n_{0}}{\log n}\right)^{3/ 4}\,\ \mathbf{W}_{2;0}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{\mathbb{S}^{n_{ 0}-1}}\right)\leq 7\times\frac{1}{n^{3/(4n_{0}-2)}}\ ;\]
comparing to the results discussed above in Example 3.13 it is easy to see that the bounds in the present paper improve from logarithmic to algebraic compared to [31], and are exponentially more efficient in the input dimension \(n_{0}\) compared to [50]. It should also be noted that both [31] and [50] cleverly use some special properties of the sphere and its eigenfunctions to construct explicit couplings of the neural networks with Gaussian processes; as such, it does not seem trivial to generalize their arguments to arbitrary input spaces and/or to the multi-layer framework.
Even more recently, the authors in [19] have considered one-hidden-layer networks (\(L=1\)) on the sphere with Gaussian initializations; they have hence exploited functional quantitative central limit results by [16] to obtain the following bounds in the \(d_{2}\) norm, in the ReLU and polynomial case, respectively:
\[d_{2}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{\mathbb{S}^{n_{0}-1}} \right)\leq C\left(\frac{1}{\log n}\right)^{3/4}\,\ d_{2}\left(z_{\mathbb{S}^{n_{0}-1}}^{(L+1)},\Gamma_{ \mathbb{S}^{n_{0}-1}}\right)\leq C\times\frac{1}{n^{1/2}}\.\]
The bounds obtained in the present paper are tighter: for instance, for the ReLU case they are algebraic rather than logarithmic in the number of nodes \(n\), even when applied to the Wasserstein metric (which, we recall, is strictly stronger than \(d_{2}\)). Moreover, the argument in [19] exploits a direct computation of moments and cumulants which seems difficult to extend to networks of arbitrary depth.
To the best of our knowledge, the only paper so far devoted to quantitative functional central limit theorems for multi-layer networks is [7], where the authors establish bounds on the uniform Wasserstein distance between a neural network defined on a sphere, and a Gaussian field with matching covariance. The results in [7] allow for non-Gaussian weights and hold with respect to rather stringent distance functions. However, the bounds established in [7] do not apply to the regime \(n_{1}\asymp n_{2}\asymp\cdots\asymp n_{L}\) considered in the present paper. As a consequence, a direct comparison with our findings is not possible.
## 5. Preparatory results
### Variance estimates
Our proofs rely extensively on the following estimates from [38] on the fluctuations of the random covariances \(\Sigma^{(\ell)}\), defined in (2.8). In what follows, we write \(\kappa(Z_{1},...,Z_{m})\) to indicate the joint cumulant of random variables \(Z_{1},...,Z_{m}\) (see e.g. [69, Chapter 3] for definitions), with the special notation \(\kappa_{m}(Z)\) in the case \(Z_{1}=\cdots=Z_{m}=Z\).
**Theorem 5.1** (Thm 3.1, Corollary 3.15, Equation (11.31) in [38]).: _Let \(x_{\alpha}\in\mathbb{R}^{n_{0}}\mapsto z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) be a random neural network verifying Assumption 3.1 where, for \(r\geq 1\), \(\sigma\) is polynomially bounded to order \(r\) in the sense of Definition 2.1. Fix also a collection of distinct non-zero network inputs_
\[x_{\mathcal{A}}:=\{x_{\alpha},\quad\alpha\in\mathcal{A}\}\]
_and directional derivative operators \(\{V_{1},...,V_{p}\}\) as in (2.4). Suppose that either \(\sigma\) is smooth or that the infinite width covariance structure \(\left\{K^{(\ell)}\right\}\) is non-degenerate to order \(q\leq r\) on \(x_{\mathcal{A}}\) with respect to \(V\), in the sense of Definition 2.4. Then, the following asymptotic relations are in order:_
1. _for_ \(\ell=1,\ldots,L\)_, all multi-indices_ \(J_{1},J_{2}\) _of order at most_ \(q\)_, and any network inputs_ \(x_{\alpha_{1}},x_{\alpha_{2}}\in x_{\mathcal{A}}\) _we have for all_ \(n\geq 1\) \[\max\left\{\mathbf{Var}(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}),\,\left|V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}}\left\{\mathbb{E}\left[\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}\right]-K_{\alpha_{1}\alpha_{2}}^{(\ell+1)}\right\}\right|\right\}\leq Cn^{-1},\] (5.1) _where for a multi-index_ \(J=(j_{1},\ldots,j_{p})\) _we have used notation (_2.6_) and have adopted the notational conventions_ \[V_{\alpha_{1}}^{J_{1}}V_{\alpha_{1}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{1}}^{(\ell)}:=V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}\left|{}_{x_{\alpha_{1}}=x_{\alpha_{2}}},\quad V_{\alpha_{1}}^{J_{1}}V_{\alpha_{1}}^{J_{2}}\mathbb{E}[\Sigma_{\alpha_{1}\alpha_{1}}^{(\ell)}]:=V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}}\mathbb{E}[\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}]\left|{}_{x_{\alpha_{1}}=x_{\alpha_{2}}},\right.\] \[V_{\alpha_{1}}^{J_{1}}V_{\alpha_{1}}^{J_{2}}K_{\alpha_{1}\alpha_{1}}^{(\ell)}:=V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}}K_{\alpha_{1}\alpha_{2}}^{(\ell)}\left|{}_{x_{\alpha_{1}}=x_{\alpha_{2}}}.\right.\] (5.2) _The constant_ \(C\) _depends on_ \(\alpha_{1},\alpha_{2},J_{1},J_{2},\ell,r,q,\mathcal{P}\) _(see (_3.2_)) but is uniform over_ \(\alpha_{1},\alpha_{2}\) _when the ratios_ \(\left|\left|x_{\alpha_{1}}\right|\right|^{2}/n_{0},\left|\left|x_{\alpha_{2}}\right|\right|^{2}/n_{0}\) _vary over a compact set._
2. _When_ \(r=1\) _and_ \(\mathcal{A}=\{\alpha\}\) _is a singleton, one has also that_ \[\kappa_{3}(\Sigma_{\alpha\alpha}^{(\ell)})\leq C_{1}n^{-2},\quad\text{and}\quad\kappa_{4}(\Sigma_{\alpha\alpha}^{(\ell)})\leq C_{2}n^{-3},\] (5.3) _where the constants_ \(C_{1},C_{2}\) _depend on_ \(\alpha,\ell,\mathcal{P}\) _and are uniform over_ \(\alpha\) _when the ratio_ \(\left|\left|x_{\alpha}\right|\right|^{2}/n_{0}\) _varies over a compact set._
3. _Again when_ \(r=1\) _and_ \(\mathcal{A}=\{\alpha\}\) _is a singleton, there exist strictly positive constants_ \(B_{1},B_{2}\) _and_ \(D_{1},D_{2}\) _(depending on_ \(\alpha,\ell,\mathcal{P}\) _and uniform over_ \(\alpha\) _when the ratio_ \(\left|\left|x_{\alpha}\right|\right|^{2}/n_{0}\) _varies over a compact set) such that_ \[\left|\mathbf{Var}(\Sigma_{\alpha\alpha}^{(\ell)})-B_{1}n^{-1}\right|\leq B_{2}\,n^{-2},\] (5.4) _and_ \[\left|\left|\mathbb{E}\left[\Sigma_{\alpha\alpha}^{(\ell)}\right]-K_{\alpha\alpha}^{(\ell+1)}\right|-D_{1}n^{-1}\right|\leq D_{2}\,n^{-2}.\] (5.5)
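Before turning to the proof idea, here is a quick numerical sanity check (illustrative parameters only) of the \(n^{-1}\) scaling in (5.4): simulating \(\Sigma^{(L)}_{\alpha\alpha}\) at a few widths, the rescaled quantity \(n\cdot\mathbf{Var}(\Sigma^{(L)}_{\alpha\alpha})\) should be roughly constant, its common value playing the role of \(B_{1}\).

```python
import numpy as np

rng = np.random.default_rng(5)

def sigma_diag(x, widths, C_W=2.0):
    """One draw of Sigma^{(L)}_{alpha alpha} of (2.8), ReLU and C_b = 0."""
    h, n_prev = x, x.shape[0]
    for n in widths:
        W = rng.normal(0.0, np.sqrt(C_W / n_prev), size=(n, n_prev))
        h, n_prev = np.maximum(W @ h, 0.0), n
    return C_W * np.mean(h ** 2)

n0, L, trials = 4, 3, 4000
x = rng.normal(size=n0)
for n in [32, 64, 128]:
    v = np.var([sigma_diag(x, [n] * L) for _ in range(trials)])
    print(f"n = {n:4d}   n * Var(Sigma) ~ {n * v:.4f}")   # ~ B_1 in (5.4)
```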
Proof Idea.: Although the proof of Theorem 5.1 is somewhat technical, for the sake of completeness we indicate here a few of the key ideas and refer the interested reader to §7 of [38] for the details. The starting point for the approach is based on the following structural properties of random neural networks:
* The sequence of fields \(z_{\alpha}^{(\ell)}\) is a Markov Chain with respect to \(\ell\).
* Conditional on the sigma algebra \(\mathcal{F}^{(\ell)}\) generated by \(z_{\alpha}^{(\ell)}\), the field \(z_{\alpha}^{(\ell+1)}\) is Gaussian with independent components \(z_{i;\alpha}^{(\ell+1)}\).
* The conditional variance \(\Sigma_{\alpha\alpha}^{(\ell)}\) of each component \(z_{i;\alpha}^{(\ell+1)}\) depends on \(z_{\alpha}^{(\ell)}\) only through random variables of the form \[\mathcal{O}_{f}^{(\ell)}:=\frac{1}{n_{\ell}}\sum_{i=1}^{n_{\ell}}f(z_{i;\alpha }^{(\ell)}).\] The article [38] refers to such random variables as collective observables.
* Centered moments of collective observables depend on \(n\) as if the random variables \(f(z_{i;\alpha}^{(\ell)})\) were independent: \[\mathbb{E}\left[\left(\mathcal{O}_{f}^{(\ell)}-\mathbb{E}\left[\mathcal{O}_{ f}^{(\ell)}\right]\right)^{q}\right]=O_{q}\left(n^{-\lceil\frac{q}{2}\rceil} \right),\qquad q\geq 0.\] (5.6) Establishing this is the most difficult technical aspect of [38]. The basic idea is to proceed by induction on \(\ell\). When \(\ell=1\), the neuron pre-activations \(z_{i;\alpha}^{(1)}\) are independent and hence the estimate (5.6) is straight-forward. When \(\ell\geq 2\), however, the neuron pre-activations \(z_{i;\alpha}^{(\ell)}\) are not independent. The idea is to analyze them by first using the law of total cumulance to write cumulants of collective observables in layer \(\ell+1\) in terms of cumulants of such objects at layer \(\ell\).
Once the estimate (5.6) is established, it is now fairly straight-forward to study the mean and variance of \(\Sigma_{\alpha\alpha}^{(\ell)}\). So let us now explain, mostly dispensing with rigor, how these four ideas come together to obtain a recursive description of the distribution of the field \(z_{\alpha}^{(\ell+1)}\) in terms of that of \(z_{\alpha}^{(\ell)}\) (we stick here to the case of a single input \(x_{\alpha}\)). Denoting by \(\xi=(\xi_{1},\ldots,\xi_{m})\) dual variables, consider the characteristic function
\[p^{(\ell+1)}(\xi):=\mathbb{E}\left[\exp\left[-i\sum_{i=1}^{m}\xi_{i}z_{i;\alpha }^{(\ell+1)}\right]\right]\]
of \(m\) neuron pre-activations \(\left(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m\right)\). Conditioning on \(z_{\alpha}^{(\ell)}\) and using conditional Gaussianity allows us to write
\[p^{(\ell+1)}(\xi)=\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Sigma_{\alpha\alpha}^{(\ell)}\right]\right],\]
where we note that \(\Sigma_{\alpha\alpha}^{(\ell)}\) is a collective observable in layer \(\ell\). Writing as before
\[\kappa_{\alpha\alpha}^{(\ell)}:=\mathbb{E}\left[\Sigma_{\alpha\alpha}^{(\ell) }\right],\qquad\Delta_{\alpha\alpha}^{(\ell)}:=\Sigma_{\alpha\alpha}^{(\ell) }-\mathbb{E}\left[\Sigma_{\alpha\alpha}^{(\ell)}\right],\]
we find
\[p^{(\ell+1)}(\xi)=\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Delta_{\alpha\alpha}^{(\ell)}\right]\right]\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\kappa_{\alpha\alpha}^{(\ell)}\right].\]
The second term is precisely the characteristic function of a centered \(m\)-dimensional Gaussian with iid components of variance \(\kappa_{\alpha\alpha}^{(\ell)}\). Moreover, at least heuristically, the first term can be written
\[\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Delta_ {\alpha\alpha}^{(\ell)}\right]\right]=\sum_{q\geq 0}\mathbb{E}\left[\left(\Delta_{ \alpha\alpha}^{(\ell)}\right)^{q}\right]\frac{(-1)^{q}}{2^{q}q!}\left|\left| \xi\right|\right|^{2q}.\]
Since
\[-||\xi||^{2}=\text{ Laplacian in the variables }z_{i;\alpha}^{(\ell+1)},\]
we have for any reasonable test function \(f\) that
\[\mathbb{E}\left[f(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\right]=\sum_{q=0}^{ \infty}\frac{1}{2^{q}q!}\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)} \right)^{q}\right]\left\langle\left(\sum_{i=1}^{m}\partial_{z_{i;\alpha}}^{2} \right)^{q}f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa_{\alpha\alpha}^ {(\ell)}},\]
where \((z_{i;\alpha},\,i=1,\ldots,m)\) is a vector of iid centered Gaussians with variance \(\kappa_{\alpha\alpha}^{(\ell)}\). The concentration estimates (5.6) ensure that this expression is a power series in \(1/n\). In particular,
\[\mathbb{E}\left[f(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\right] =\left\langle f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa _{\alpha\alpha}^{(\ell)}} \tag{5.7}\] \[+\frac{\mathbb{E}\left[(\Delta_{\alpha\alpha}^{(\ell)})^{2} \right]}{8}\left\langle\left(\sum_{i=1}^{m}\partial_{z_{i;\alpha}}^{2}\right) ^{2}f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa_{\alpha\alpha}^{(\ell )}}+O(n^{-2}).\]
To derive usable recursions for cumulants of \(z_{i;\alpha}^{(\ell+1)}\) in terms of those of \(z_{i;\alpha}^{(\ell)}\) one now notes that \(\mathbb{E}\left[(\Delta_{\alpha\alpha}^{(\ell)})^{2}\right]\) is precisely one third of the \(4\)-th cumulant of \(z_{i;\alpha}^{(\ell+1)}\) (see Lemma 5.3) and takes \(f\) to be various polynomials.
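For instance, taking \(m=1\) and \(f(z)=z^{2}\) in (5.7) gives \(\mathbb{E}[(z_{i;\alpha}^{(\ell+1)})^{2}]=\kappa_{\alpha\alpha}^{(\ell)}\), since all the derivative terms vanish, whereas taking \(f(z)=z^{4}\), for which \(\langle z^{4}\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}=3(\kappa_{\alpha\alpha}^{(\ell)})^{2}\) and \(\partial_{z}^{4}z^{4}=24\), gives

\[\mathbb{E}\left[(z_{i;\alpha}^{(\ell+1)})^{4}\right]=3\left(\kappa_{\alpha\alpha}^{(\ell)}\right)^{2}+3\,\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)}\right)^{2}\right],\]

so that \(\kappa_{4}(z_{i;\alpha}^{(\ell+1)})=3\,\mathbf{Var}(\Sigma_{\alpha\alpha}^{(\ell)})\), in agreement with the exact identity recorded at the end of Section 5.1.1.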
The following statement - used repeatedly in our proofs - is a direct consequence of Theorem 5.1-**(1)**. As before, we write \(\mathcal{M}_{r}\) to denote the class of all multi-indices \(J\) such that \(|J|\leq r\).
**Proposition 5.2**.: _Fix a compact domain \(\mathbb{T}\subset\mathbb{R}^{n_{0}}\), and consider a Borel-measurable \(\mathbb{U}\subset\mathbb{T}\). Suppose that the assumptions of Theorem 5.1 are satisfied for every finite collection of distinct network inputs \(x_{\mathcal{A}}\subset\mathbb{U}\), and let \(\mu\) be a finite measure on \(\mathcal{M}_{q}\times\mathbb{U}\). Then, defining_
\[A_{n}:=\int_{\mathcal{M}_{q}\times\mathbb{U}}\int_{\mathcal{M}_{q}\times \mathbb{U}}\mathbf{Var}\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}} \Sigma_{\alpha_{1}\alpha_{2}}^{(L)}\right)\mu(dJ_{1},dx_{\alpha_{1}})\mu(dJ_ {2},dx_{\alpha_{2}}), \tag{5.8}\]
\[B_{n}:=\int_{\mathcal{M}_{q}\times\mathbb{U}}\int_{\mathcal{M}_{q}\times \mathbb{U}}\mathbb{E}\left[\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}} \Sigma_{\alpha_{1}\alpha_{2}}^{(L)}-V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{ 2}}K_{\alpha_{1}\alpha_{2}}^{(L+1)}\right)^{2}\right]\mu(dJ_{1},dx_{\alpha_{1} })\mu(dJ_{2},dx_{\alpha_{2}}), \tag{5.9}\]
_and_
\[C_{n}:=\int_{\mathcal{M}_{q}\times\mathbb{U}}\mathbb{E}\left[\left(V_{\alpha }^{J}V_{\alpha}^{J}\Sigma_{\alpha\alpha}^{(L)}-V_{\alpha}^{J}V_{\alpha}^{J}K_{ \alpha\alpha}^{(L+1)}\right)^{2}\right]\mu(dJ,dx_{\alpha}), \tag{5.10}\]
_one has that_
\[\max\{A_{n},B_{n}\}\leq D\cdot\mu(\mathcal{M}_{q}\times\mathbb{U})^{2}\cdot n ^{-1}\quad\text{and}\quad C_{n}\leq D\cdot\mu(\mathcal{M}_{q}\times\mathbb{U}) \cdot n^{-1}, \tag{5.11}\]
_where the constant \(D\) depends on \(\mathbb{T},\mathcal{P},q,r\)._
#### 5.1.1. Connection with output cumulants
The variances appearing in (5.1) admit a direct interpretation in terms of the cumulants of the network outputs \(\{z_{i;\alpha}^{(L+1)}\}\) and their derivatives. To see this, we record the following elementary statement (the proof is left to the reader).
**Lemma 5.3**.: _Consider a random vector \((X,Y)\) as well as a positive definite \(2\times 2\) symmetric random matrix \(\Sigma=\{\Sigma(i,j):1\leq i,j\leq 2\}\) with square-integrable entries. Assume that, conditionally on \(\Sigma\), \((X,Y)\) is a centered Gaussian vector with covariance \(\Sigma\). Then, \(X\) and \(Y\) have finite moments of order 4 and_
\[2\mathbf{Var}(\Sigma(1,2))=\kappa(X,X,Y,Y)-\mathbf{Cov}(\Sigma(1,1),\Sigma(2,2)).\]
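Since the proof of Lemma 5.3 is left to the reader, a quick Monte Carlo sanity check may be a useful companion; the following Python sketch (all names are ours) samples \(\Sigma=LL^{T}\) with Gaussian \(L\), draws \((X,Y)=Lg\) with \(g\) a standard Gaussian vector, and compares the two sides of the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Random conditional covariance Sigma = L L^T (a.s. positive definite)
L = rng.standard_normal((n, 2, 2))
Sigma = L @ np.swapaxes(L, 1, 2)

# Conditionally on Sigma, (X, Y) is a centered Gaussian vector with covariance Sigma
g = rng.standard_normal((n, 2, 1))
XY = (L @ g)[:, :, 0]
X, Y = XY[:, 0], XY[:, 1]

# Joint 4th cumulant kappa(X, X, Y, Y) of the centered vector (X, Y)
k4 = np.mean(X**2 * Y**2) - np.mean(X**2) * np.mean(Y**2) - 2 * np.mean(X * Y) ** 2

lhs = 2 * np.var(Sigma[:, 0, 1])
rhs = k4 - np.cov(Sigma[:, 0, 0], Sigma[:, 1, 1])[0, 1]
print(lhs, rhs)  # the two values agree up to Monte Carlo error
```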
Applying Lemma 5.3 to \(X=V_{\alpha_{1}}^{J_{1}}z_{i;\alpha_{1}}^{(\ell+1)}\) and \(Y=V_{\alpha_{2}}^{J_{2}}z_{i;\alpha_{2}}^{(\ell+1)}\), and exploiting Lemma 2.5, yields the remarkable identity
\[2\mathbf{Var}\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{2}}^{J_{2}} \Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}\right) + \mathbf{Cov}\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{1}}^{J_{1}} \Sigma_{\alpha_{1}\alpha_{1}}^{(\ell)},V_{\alpha_{2}}^{J_{2}}V_{\alpha_{2}}^{ J_{2}}\Sigma_{\alpha_{2}\alpha_{2}}^{(\ell)}\right)\] \[=\kappa\left(V_{\alpha_{1}}^{J_{1}}z_{i;\alpha_{1}}^{(\ell+1)},V_ {\alpha_{1}}^{J_{1}}z_{i;\alpha_{1}}^{(\ell+1)},V_{\alpha_{2}}^{J_{2}}z_{i; \alpha_{2}}^{(\ell+1)},V_{\alpha_{2}}^{J_{2}}z_{i;\alpha_{2}}^{(\ell+1)}\right),\]
where we have used (5.2); in particular,
\[3\mathbf{Var}\left(V_{\alpha_{1}}^{J_{1}}V_{\alpha_{1}}^{J_{1}}\Sigma_{\alpha_ {1}\alpha_{1}}^{(\ell)}\right)=\kappa_{4}\left(V_{\alpha_{1}}^{J_{1}}z_{i; \alpha_{1}}^{(\ell+1)}\right).\]
In the next two sections, we will focus on probabilistic bounds based on the so-called **Stein's method** for normal approximations. The reader is referred e.g. to [66] for a general introduction to this topic.
### Stein's bounds in dimension 1
Our main tool for one-dimensional probabilistic approximations is the following new estimate on the normal approximation of conditionally Gaussian random variables.
**Proposition 5.4**.: _Let \(F\) be a centered random variable with finite variance \(\sigma^{2}>0\), and consider \(Z\sim N(0,\sigma^{2})\). Assume that there exists an auxiliary integrable random variable \(A\geq 0\) such that, conditionally on \(A\), the random variable \(F\) has a centered Gaussian distribution with variance \(A\). Then, for all functions \(f:\mathbb{R}\to\mathbb{R}\) continuously differentiable and Lipschitz and every \(\varphi:\mathbb{R}_{+}\to\mathbb{R}\) bounded,_
\[\mathbb{E}[Ff(F)\varphi(A)]=\mathbb{E}[Af^{\prime}(F)\varphi(A)], \tag{5.12}\]
_so that, in particular, \(\sigma^{2}=\mathbb{E}(A)\). Moreover, the following two properties hold:_
**(1)**: _if_ \(A\) _is square-integrable, then_
\[d_{TV}(F,Z) \leq \frac{8}{\sigma^{4}}\mathbf{Var}(A) \tag{5.13}\] \[\mathbf{W}_{1}(F,Z) \leq \frac{4}{\sigma^{2}}\mathbf{Var}(A); \tag{5.14}\]
**(2)**: _if_ \(\mathbb{E}(A^{4})<\infty\)_, then_
\[\min\{2d_{TV}(F,Z)\,;\,\mathbf{W}_{1}(F,Z)\}\geq e^{-\sigma^{2}/2}\left|\frac {1}{8}\mathbf{Var}(A)-\frac{1}{48}\mathbb{E}[(A-\sigma^{2})^{3}]+R\right|,\] (5.15) _where_ \(|R|\leq 384^{-1}e^{\sigma^{2}/2}\mathbb{E}[(A-\sigma^{2})^{4}]\)_._
**Remark 5.5**.: By virtue of Lemma 5.3, one has that \(\mathbf{Var}(A)=\frac{1}{3}\kappa_{4}(F)\).
Proof of Proposition 5.4.: Formula (5.12) follows by conditioning and Gaussian integration by parts, and we can consequently focus on the proof of Point **(1)**. Using the fact that the random variable \(\widetilde{F}:=F/\sigma\) verifies the assumptions in the statement with \(\widetilde{A}:=A/\sigma^{2}\), one sees that it is sufficient to only consider the case \(\sigma=1\). Combining Stein's method with Lusin's theorem (see [74, p. 56]) as in [64, Lemma 3.1, Proposition 4.16 and Theorem 5.2] yields that
\[d_{TV}(F,N)\leq\sup_{f:|f|\leq 1,\,|f^{\prime}|\leq 2}\left|\mathbb{E}[Ff(F)-f^{ \prime}(F)]\right|,\]
where the supremum runs over all mappings \(f:\mathbb{R}\to\mathbb{R}\) of class \(C^{1}(\mathbb{R})\) such that \(|f|\) and \(|f^{\prime}|\) are bounded by \(1\) and \(2\), respectively. Similarly, [66, Theorem 3.5.2] yields that
\[\mathbf{W}_{1}(F,N)\leq\sup_{f:|f^{\prime}|\leq 1}\left|\mathbb{E}[Ff(F)-f^{ \prime}(F)]\right|,\]
where the supremum runs over all mappings \(f:\mathbb{R}\to\mathbb{R}\) of class \(C^{1}(\mathbb{R})\) such that \(|f^{\prime}|\) is bounded by \(1\). Combining (5.12) with the two estimates above and taking conditional expectations yields that
\[d_{TV}(F,Z)\leq 2\,\mathbb{E}[|\mathbb{E}(1-A\,|\,F)|],\quad\text{and}\quad \mathbf{W}_{1}(F,Z)\leq\mathbb{E}[|\mathbb{E}(1-A\,|\,F)|]\]
(recall that, in this part of the proof, \(\sigma^{2}=1\) by assumption). The key step (corresponding to a strategy already exploited in [67, Section 3]) is now to observe that
\[\mathbb{E}[|\mathbb{E}(1-A\,|\,F)|]=\mathbb{E}[\,\mathbf{sign}(\mathbb{E}(1-A \,|\,F))\,\mathbb{E}(1-A\,|\,F)],\]
so that, by using once again Lusin's theorem in the form of [74, p. 56] one deduces that
\[\mathbb{E}[|\mathbb{E}(1-A\,|\,F)|]\leq\sup_{g\in\mathcal{C}}\left|\mathbb{E} [g(F)(1-A)]\right|,\]
where the supremum runs over the class \(\mathcal{C}\) of all continuous functions \(g:\mathbb{R}\to\mathbb{R}\) that have compact support and are such that \(|g|\leq 1\). Fix \(g\in\mathcal{C}\). Since \(\mathbb{E}(A)=1\), one has that
\[\mathbb{E}[g(F)(1-A)]=\mathbb{E}[(g(F)-\mathbb{E}[g(Z)])(1-A)].\]
To estimate the right-hand side of the previous equation, we use the classical fact that, according, e.g., to [67, Proposition 2.1], the differential equation
\[g(x)-\mathbb{E}[g(Z)]=f^{\prime}(x)-xf(x),\]
admits a unique bounded solution \(f_{g}\in C^{1}(\mathbb{R})\) such that \(|f_{g}^{\prime}|\leq 4\). As a consequence, one has that
\[\mathbb{E}[g(F)(1-A)]=\mathbb{E}[f_{g}^{\prime}(F)(1-A)]-\mathbb{ E}[Ff_{g}(F)(1-A)]\] \[=\mathbb{E}[f_{g}^{\prime}(F)(1-A)]-\mathbb{E}[f_{g}^{\prime}(F) A(1-A)]=\mathbb{E}[f_{g}^{\prime}(F)(1-A)^{2}],\]
where in the second equality we have used the fact that \(\mathbb{E}[Ff_{g}(F)\,|\,A]=A\mathbb{E}[f_{g}^{\prime}(F)\,|\,A]\), by (5.12). This implies that \(|\mathbb{E}[g(F)(1-A)]|\leq 4\mathbf{Var}(A)\), and the proof of Point **(1)** is complete. To deal with Point **(2)**, we consider a generic \(\sigma^{2}>0\) and observe that, according, e.g., to [66, Proposition C.3.5],
\[2d_{TV}(F,Z)=\sup_{h:|h|\leq 1}|\mathbb{E}[h(F)]-\mathbb{E}[h(Z)]|,\]
where the supremum runs over all Borel measurable functions \(h\) whose absolute value is bounded by \(1\). By virtue of (3.6) one has therefore that both \(2d_{TV}(F,Z)\) and \(\mathbf{W}_{1}(F,Z)\) are bounded from below by the quantity
\[|\mathbb{E}(\cos(F))-\mathbb{E}(\cos(Z))|=\left|\mathbb{E}[e^{-A/2}-e^{-\sigma^{ 2}/2}]\right|\]
Relation (5.15) now follows by writing the Taylor expansion
\[e^{-A/2}-e^{-\sigma^{2}/2}\] \[=-e^{-\sigma^{2}/2}(A/2-\sigma^{2}/2)+\frac{e^{-\sigma^{2}/2}}{2} (A/2-\sigma^{2}/2)^{2}-\frac{e^{-\sigma^{2}/2}}{6}(A/2-\sigma^{2}/2)^{3}+R_{0},\]
with \(|R_{0}|\leq\frac{1}{24}(A/2-\sigma^{2}/2)^{4}\), and taking expectations on both sides.
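To see the bounds of Proposition 5.4 at work, here is a minimal numerical sketch (ours, in Python): when \(A\) takes only two values, \(F\) is an explicit Gaussian mixture, so \(d_{TV}(F,Z)\) can be computed by numerical integration and compared with the right-hand side of (5.13).

```python
import numpy as np
from scipy.stats import norm

# Toy conditionally Gaussian variable: A in {a1, a2} with probability 1/2 each,
# and F | A ~ N(0, A); then F is a two-component centered Gaussian mixture.
a1, a2, p = 0.8, 1.2, 0.5
sigma2 = p * a1 + (1 - p) * a2                      # sigma^2 = E[A]
var_A = p * a1**2 + (1 - p) * a2**2 - sigma2**2     # Var(A)

xs = np.linspace(-12, 12, 400001)
dens_F = p * norm.pdf(xs, scale=np.sqrt(a1)) + (1 - p) * norm.pdf(xs, scale=np.sqrt(a2))
dens_Z = norm.pdf(xs, scale=np.sqrt(sigma2))

# Total variation distance by a Riemann sum, versus the bound (5.13)
d_tv = 0.5 * np.sum(np.abs(dens_F - dens_Z)) * (xs[1] - xs[0])
bound = 8.0 / sigma2**2 * var_A
print(f"d_TV = {d_tv:.6f} <= {bound:.6f}")
```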
**Remark 5.6**.: If \(Z_{1}\sim N(0,\sigma_{1}^{2})\) and \(Z_{2}\sim N(0,\sigma_{2}^{2})\), then [67, Proposition 3.6.1] implies that
\[d_{TV}(Z_{1},Z_{2})\leq\frac{2}{\sigma_{1}^{2}\vee\sigma_{2}^{2}}\times| \sigma_{1}^{2}-\sigma_{2}^{2}|. \tag{5.16}\]
Also, choosing as a coupling \(T=\sigma_{1}\cdot Z\) and \(S=\sigma_{2}\cdot Z\), with \(Z\sim N(0,1)\), one infers that
\[\mathbf{W}_{1}(Z_{1},Z_{2})\leq|\sigma_{1}-\sigma_{2}|. \tag{5.17}\]
### Multidimensional Stein's bounds
When dealing with multidimensional normal approximations in the convex distance \(d_{c}\), one has to deal separately with the case of singular and non-singular target covariance matrices.
The next statement deals with the non-singular case; the proof can be deduced by reproducing the arguments leading to the proof of [68, Theorem 1.2], and is omitted for the sake of brevity.
**Proposition 5.7** (Convex distance to non-singular Gaussian vectors).: _Let \(F=(F_{1},...,F_{M})\) be a centered random vector with square-integrable entries. Assume that there exists a \(M\times M\) random matrix \(\Sigma=\{\Sigma(i,j):i,j=1,...,M\}\) with square-integrable entries and such that, for all twice differentiable functions \(h:\mathbb{R}^{M}\to\mathbb{R}\) that are \(1\)-Lipschitz and such that_
\[\sup_{x\in\mathbb{R}^{M}}\|\mathrm{Hess}\,h(x)\|_{HS}\leq 1,\]
_one has the identity_
\[\mathbb{E}[\langle F,\nabla h(F)\rangle]=\mathbb{E}[\langle\Sigma,\mathrm{Hess }\,h(F)\rangle_{HS}]. \tag{5.18}\]
_Then, \(\mathbf{Cov}(F_{i},F_{j})=\mathbb{E}[\Sigma(i,j)]\), \(i,j=1,...,M\). Moreover, denoting by \(N=(N_{1},...,N_{M})\) a centered Gaussian vector with covariance \(B>0\), the following estimate is in order:_
\[d_{c}(F,N)\leq 402\left\{\lambda_{min}(B)^{-3/2}+1\right\}M^{41/24}\sqrt{\mathbb{E}\left[\|\Sigma-B\|_{HS}^{2}\right]},\]
_where \(\lambda_{min}(B)\) is the smallest eigenvalue of \(B\)._
**Remark 5.8**.: In the parlance of [53, 22, 33], any random matrix \(\Sigma\) verifying relation (5.18) is a **Stein's kernel** associated with \(F\). It is a well-known fact that, for \(m\geq 2\), Stein's kernels are in general not unique (see e.g. the discussion contained in [22]).
The second result of the section is new and allows one to deal with singular covariance matrices in some specific situations (that are relevant to the present paper). The proof uses ideas already exploited in [10].
**Proposition 5.9** (Convex distance to singular Gaussian vectors).: _Let \(F=(F_{1},...,F_{M})\) be a centered random vector with square-integrable entries. Assume that there exists an \(M\times M\) positive definite symmetric random matrix \(\Sigma=\{\Sigma(i,j):i,j=1,...,M\}\) with square-integrable entries and such that, conditionally on \(\Sigma\), \(F\) has a centered Gaussian distribution with covariance \(\Sigma\). Then, \(\mathbf{Cov}(F_{i},F_{j}):=C(i,j)=\mathbb{E}[\Sigma(i,j)]\), \(i,j=1,...,M\). Moreover, denoting by \(N=(N_{1},...,N_{M})\) a centered Gaussian vector with covariance \(C\), the following estimate is in order_
\[d_{c}(F,N)\leq 402\left\{\lambda_{+}(C)^{-3/2}+1\right\}\operatorname{rk}(C)^ {41/24}\sqrt{\sum_{i,j=1}^{M}\mathbf{Var}(\Sigma(i,j))},\]
_where \(\lambda_{+}(C)\) is the smallest positive eigenvalue of \(C\) and we have written \(\operatorname{rk}(C)\) for the rank of \(C\)._
Proof.: If \(C\) has full rank, then the result follows from Proposition 5.7. We can therefore assume that \(\operatorname{rk}(C)=k<M\). Without loss of generality, we may also assume that \(C=U^{T}DU\), where \(U\) is an orthogonal matrix, and \(D\) is a diagonal matrix whose diagonal entries \(d_{i}\) are such that \(d_{i}>0\) if \(i\leq k\) and \(d_{i}=0\) otherwise. Following a strategy put forward in [10], we now introduce an auxiliary random vector \(Z=(Z_{1},...,Z_{M})\) defined as \(Z:=UF\). A direct computation shows the following facts:
1. conditionally on \(\Sigma\), the vector \(Z\) is centered and Gaussian with covariance \(\Sigma_{0}:=U\Sigma U^{T}\);
2. as a consequence, \(Z\) is centered with covariance given by the diagonal matrix \(D=UCU^{T}\), which yields \(Z_{i}=0\), a.s.-\(\mathbb{P}\), for all \(i>k\), and \(\Sigma_{0}(i,\ell)=0\), a.s.-\(\mathbb{P}\), whenever \(\max(i,\ell)>k\);
3. the vector \(UN\) is centered and Gaussian, with covariance matrix given by \(D\).
To conclude the proof, we observe that
\[d_{c}(F,N)=d_{c}(Z,UN)\leq d_{c}(Z(k),UN(k)),\]
where \(Z(k)\) and \(UN(k)\) denote, respectively, the first \(k\) entries of \(Z\) and \(UN\). Applying Proposition 5.7 to the right-hand side of the previous inequality yields the desired conclusion, by virtue of the relation
\[\|\Sigma-C\|_{HS}=\|U^{T}\Sigma U-U^{T}CU\|_{HS}=\|\Sigma_{0}-D\|_{HS},\]
where the first equality follows from the unitary invariance of the Hilbert-Schmidt norm.
**Remark 5.10**.: Let \(N_{1}\) and \(N_{2}\) be \(M\)-dimensional centered Gaussian vectors with covariances \(C_{1}\) and \(C_{2}\), respectively. Then, choosing the pairing \(T=\sqrt{C_{1}}\,N\) and \(S=\sqrt{C_{2}}\,N\), where \(N\) is a standard Gaussian vector, one has the following estimate:
\[\mathbf{W}_{2}(N_{1},N_{2})\leq\|\sqrt{C_{1}}-\sqrt{C_{2}}\|_{HS}. \tag{5.19}\]
See e.g. [35] for optimal bounds.
### Comparison of Hilbert-space valued Gaussian random elements
Let \(H\) be a separable real Hilbert space, and endow \(H\) with the Borel \(\sigma\)-field associated with the norm \(\|\bullet\|_{H}\).
We consider two centered, Gaussian \(H\)-valued random elements \(X_{1},X_{2}\), and denote by \(S_{1}\) and \(S_{2}\) their covariance operators. We recall that \(S_{i}\) is the unique symmetric, positive and trace-class linear operator \(S_{i}:H\to H\) such that, for all \(g\in H\), \(\langle X_{i},g\rangle_{H}\) is a centered Gaussian random variable with variance \(\langle S_{i}g,g\rangle_{H}\geq 0\) (see e.g. [24, Chapter 1]). The following classical bound allows one to compare the distributions of \(X_{1}\) and \(X_{2}\) in the sense of the \(2\)-Wasserstein distance. It is a direct consequence of Gelbrich [35, Theorem 3.5]; see also [56] for a modern discussion of Gelbrich's results.
**Proposition 5.11** (See [35, 56]).: _Let the above assumptions and notations prevail. Then,_
\[\mathbf{W}_{2}(X_{1},X_{2})\leq\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{HS}.\]
In order to deal with the norm \(\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{HS}\) (which is typically not directly amenable to analysis), we will use a variation of the classical **Powers–Størmer inequality** from [71, Lemma 4.2], in a form that represents a slight generalization of [28, Lemma 4.4]. A detailed proof is provided for the sake of completeness.
**Proposition 5.12**.: _Under the assumptions of the present section, one has that_
\[\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{HS}\leq|\mathrm{Tr}\left(S_{1}\right)-\mathrm{Tr}\left(S_{2}\right)|^{1/2}+\sqrt{2}\,\|S_{1}-S_{2}\|_{HS}^{\frac{1}{4}}\,\min\left\{\mathrm{Tr}\left(\sqrt{S_{1}}\right),\mathrm{Tr}\left(\sqrt{S_{2}}\right)\right\}^{1/2}.\]
Proof of Proposition 5.12.: By symmetry, it is enough to prove
\[\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{HS}\leq|\mathrm{Tr}\left(S_{1}\right)-\mathrm{Tr}\left(S_{2}\right)|^{1/2}+\sqrt{2}\,\|S_{1}-S_{2}\|_{HS}^{\frac{1}{4}}\left(\mathrm{Tr}\left(\sqrt{S_{1}}\right)\right)^{1/2}.\]
For this, let us denote by
\[\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{k}\geq\cdots\geq 0\]
the eigenvalues of the operator \(S_{1}\). We can assume that \(\mathrm{Tr}\left(\sqrt{S_{1}}\right)=\sum_{k=1}^{\infty}(\lambda_{k})^{1/2}<\infty\), because otherwise the inequality is trivial. For all \(h\in H\), the action of the operators \(S_{1}\) and \(\sqrt{S_{1}}\) on \(h\) can be written, respectively, as \(S_{1}h=\sum_{i}\lambda_{i}\langle e_{i},h\rangle_{H}\,e_{i}\) and \(\sqrt{S_{1}}h=\sum_{i}\sqrt{\lambda_{i}}\langle e_{i},h\rangle_{H}\,e_{i}\), for some orthonormal basis \(\{e_{i}:i\geq 1\}\) of \(H\) such that \(e_{i}\) is an eigenfunction of \(S_{1}\) with eigenvalue \(\lambda_{i}\) (such a basis \(\{e_{i}\}\) is fixed for the rest of the proof). We start by writing the elementary relation
\[\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{HS}^{2}\leq|\mathrm{Tr}\left(S_{1}\right)- \mathrm{Tr}\left(S_{2}\right)|+2\left|\langle\sqrt{S_{1}}-\sqrt{S_{2}},\sqrt {S_{1}}\rangle_{HS}\right|.\]
Writing \(T:=\sqrt{S_{1}}-\sqrt{S_{2}}\), from the definition of the Hilbert-Schmidt norm one infers that
\[\left|\langle\sqrt{S_{1}}-\sqrt{S_{2}},\sqrt{S_{1}}\rangle_{HS}\right| \leq \sum_{j=1}^{\infty}\sqrt{\lambda_{j}}\,|\langle Te_{j},e_{j}\rangle _{H}|.\]
The conclusion now follows by observing that, for every \(j\geq 1\), \(|\langle Te_{j},e_{j}\rangle_{H}|\leq\|T\|_{op}\), and by exploiting the relations
\[\|T\|_{op}=\|\sqrt{S_{1}}-\sqrt{S_{2}}\|_{op}\leq\|S_{1}-S_{2}\|_{op}^{1/2} \leq\|S_{1}-S_{2}\|_{HS}^{1/2},\]
where the first inequality in the previous display is a consequence of [12, Theorem V.1.9 and Theorem X.1.1], and the second inequality is standard2.
Footnote 2: The results from [12] cited in our proof are stated in such a reference only in the case where \(H\) is finite-dimensional (and, consequently, \(S_{1},\,S_{2}\) are matrices); the needed extension to a separable Hilbert space follows from a standard limiting procedure.
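For a finite-dimensional illustration (the matrix case covered by [12], see Footnote 2), the following Python sketch, with illustrative names, compares the two sides of Proposition 5.12 on random positive semi-definite matrices.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def random_psd(d):
    """A random d x d positive semi-definite matrix."""
    B = rng.standard_normal((d, d))
    return B @ B.T / d

d = 30
S1, S2 = random_psd(d), random_psd(d)

lhs = np.linalg.norm(sqrtm(S1) - sqrtm(S2), "fro")
hs = np.linalg.norm(S1 - S2, "fro")
rhs = (abs(np.trace(S1) - np.trace(S2)) ** 0.5
       + np.sqrt(2.0) * hs ** 0.25
       * min(np.trace(sqrtm(S1)).real, np.trace(sqrtm(S2)).real) ** 0.5)
print(f"{lhs:.4f} <= {rhs:.4f}: {lhs <= rhs}")
```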
We also record the following bound from [16]: for the sake of completeness, we provide here a direct proof neither appealing to the notion of abstract Wiener space nor assuming that \(X_{1}\) and \(X_{2}\) are non-degenerate (as in [16, Corollary 3.3]).
**Proposition 5.13**.: _Let the assumptions and notation of the present section prevail. Then,_
\[d_{2}(X_{1},X_{2})\leq\frac{1}{2}\|S_{1}-S_{2}\|_{HS}.\]
Proof.: Let \(h\in C_{b}^{2}(H)\) be such that \(\sup_{x\in H}\|\nabla^{2}h(x)\|_{H^{\otimes 2}}\leq 1\). Without loss of generality, let us assume that \(X_{1}\) and \(X_{2}\) are independent, and let us set \(U_{t}=\sqrt{t}X_{1}+\sqrt{1-t}X_{2}\) for \(t\in[0,1]\). We have
\[\mathbb{E}[h(X_{1})]-\mathbb{E}[h(X_{2})] = \int_{0}^{1}\frac{d}{dt}\mathbb{E}[h(U_{t})]\,dt\] \[= \int_{0}^{1}\left(\frac{1}{2\sqrt{t}}\mathbb{E}\big{[}\langle \nabla h(U_{t}),X_{1}\rangle_{H}\big{]}-\frac{1}{2\sqrt{1-t}}\mathbb{E}\big{[} \langle\nabla h(U_{t}),X_{2}\rangle_{H}\big{]}\right)dt\] \[= \frac{1}{2}\int_{0}^{1}\mathbb{E}\big{[}\langle\nabla^{2}h(U_{t}),S_{1}-S_{2}\rangle_{HS}\big{]}\,dt.\]
Therefore
\[\big{|}\mathbb{E}[h(X_{1})]-\mathbb{E}[h(X_{2})]\big{|} \leq \frac{1}{2}\sup_{x\in H}\|\nabla^{2}h(x)\|_{HS}\ \|S_{1}-S_{2}\|_{HS},\]
and the desired conclusion follows.
We will use Propositions 5.12 and 5.13 in combination with (3.5), in order to compare the distributions of \(H\)-valued random elements \(Z,Y\) such that \(Y\) is Gaussian and \(Z\) is conditionally Gaussian. To simplify the discussion, the corresponding statement is provided below in the special case in which \(H\) is a subspace of a \(L^{2}\) space.
**Proposition 5.14**.: _Let \((T,\mathcal{T},\nu)\) be a measure space such that \((T,\mathcal{T})\) is Polish and \(\nu\) is a finite positive Borel measure. Write \(L^{2}(\nu):=L^{2}(T,\mathcal{T},\nu)\), consider a closed subspace \(H_{1}\subset L^{2}(\nu)\), and select two \(H_{1}\)-valued random elements \(Z,Y\) with the following properties:_
* \(Y=\{Y(x):x\in T\}\) _is a centered Gaussian field with covariance_ \(K(x,y)=\mathbb{E}[Y(x)Y(y)]\) _such that_ \(\int_{T}\int_{T}K(x,y)^{2}\nu(dx)\nu(dy),\int_{T}K(x,x)^{2}\nu(dx)<\infty\)_;_
* _there exists a symmetric positive definite random field_ \(\Sigma=\{\Sigma(x,y):x,y\in T\}\) _such that_ \(\mathbb{E}\left[\int_{T}\int_{T}\Sigma(x,y)^{2}\nu(dx)\nu(dy)\right],\ \mathbb{E} \left[\int_{T}\Sigma(x,x)^{2}\nu(dx)\right]<\infty\) _and, conditionally on_ \(\Sigma\)_,_ \(Z=\{Z(x):x\in T\}\) _is a centered Gaussian field with covariance_ \(\Sigma\)_._
_Then, the following estimates hold._
**(1)**: _One has that_
\[d_{2}(Z,Y)\leq\frac{1}{2}\sqrt{\mathbb{E}\left[\int_{T}\int_{T}(K(x,y)-\Sigma(x,y) )^{2}\,\nu(dx)\nu(dy)\right]}.\]
**(2)**: _Let_ \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq 0\) _denote the eigenvalues of the covariance_ \(K\)_, that we identify with the integral operator_ \(S_{1}:H_{1}\to H_{1}\)__
\[h\mapsto S_{1}h:=\int_{T}K(\cdot,y)h(y)\nu(dy).\]
_Also, denote by_ \(S_{2}\) _the (random integral operator) associated with the covariance_ \(\Sigma\)_. Then,_
\[\operatorname{Tr}\left(S_{1}\right)=\int_{T}K(x,x)\nu(dx),\quad\operatorname{ Tr}\left(S_{2}\right)=\int_{T}\Sigma(x,x)\nu(dx)\quad\text{(a.s.-$\mathbb{P}$)}, \tag{5.20}\]
_and_
\[\mathbf{W}_{2}(Z,Y) \leq \left\{\mathbb{E}\left[\int_{T}(K(x,x)-\Sigma(x,x))^{2}\,\nu(dx) \right]\right\}^{\frac{1}{4}}\] \[+2^{\frac{1}{2}}\left\{\mathbb{E}\left[\int_{T}\int_{T}(K(x,y)- \Sigma(x,y))^{2}\,\nu(dx)\nu(dy)\right]\right\}^{\frac{1}{8}}.\]
_At Point_ **(1)** _and Point_ **(2)**_, the distances_ \(d_{2}\) _and_ \(\mathbf{W}_{2}\) _are defined with respect to the Hilbert space_ \(H_{1}\)_._
Proof.: To prove Point **(1)**, we can assume without loss of generality that \(Z,Y,\Sigma\) are defined on the same probability space, and that \((Z,\Sigma)\) and \(Y\) are stochastically independent. Now, for every \(h\in C_{b}^{2}(H)\) such that \(\|h\|_{C_{b}^{2}(H)}\leq 1\) one has that
\[\Big{|}\mathbb{E}[h(Z)]-\mathbb{E}[h(Y)]\Big{|}\leq\mathbb{E}\Big{[}\Big{|} \mathbb{E}[h(Z)\,|\,\Sigma]-\mathbb{E}[h(Y)\,|\,\Sigma]\,\Big{|}\Big{]},\]
and the result follows by applying Proposition 5.13 in the case \(S_{1}=\Sigma\) and \(S_{2}=K\). The proof of Point **(2)** follows by applying (3.5) to the case \(q=2\) and \(U=\Sigma\), and then by applying Proposition 5.12 to the case \(S_{1}=K\) and \(S_{2}=\Sigma\). Relation (5.20) follows e.g. from the arguments rehearsed in [24, Proof of Proposition 1.8] and the fact that the assumptions on \(Z\) and \(Y\) imply that \(\mathbb{E}[\|Y\|_{H_{1}}^{2}]\), \(\mathbb{E}[\|Z\|_{H_{1}}^{2}]<\infty\).
## 6. Proof of the main results
### Proof of Theorem 3.3
Fix \(J\) and \(x_{\alpha}\) as in the statement. Then, conditionally on \(\mathcal{F}^{(L)}\), the random variable \(V_{\alpha}^{J}z_{i;\alpha}^{(L+1)}\) is centered and Gaussian, with variance \(V_{\alpha}^{J}V_{\beta}^{J}\Sigma_{\alpha\beta}^{(L)}\left|{}_{x_{\alpha}=x_{ \beta}}:=A\). Writing \(d\) for either \(d_{TV}\) or \(\mathbf{W}_{1}\) and denoting by \(Y\) a centered Gaussian random variable with variance \(\mathbb{E}(A)\), we infer that
\[d(V_{\alpha}^{J}z_{i;\alpha}^{(L+1)},Z)\leq d(V_{\alpha}^{J}z_{i;\alpha}^{(L+ 1)},Y)+d(Y,Z):=P+Q,\]
and the conclusion of Point **(1)** is obtained by bounding \(P\) and \(Q\) by means of (5.13)-(5.14) and (5.16)-(5.17), respectively, and then by applying (5.1) in the case \(J_{1}=J_{2}=J,\,\ell=L\) and \(\alpha_{1}=\alpha_{2}=\alpha\). Point **(2)** in the statement follows from (5.15) in the case \(A=\Sigma_{\alpha\alpha}^{(L)}\)
and \(\sigma^{2}=\mathbb{E}(\Sigma^{(L)}_{\alpha\alpha})\), which one should combine with (5.4), and the fact that, in this specific configuration and by virtue of (5.3),
\[|R+\mathbb{E}[(A-\sigma^{2})^{3}]|\leq Qn^{-2}, \tag{6.1}\]
for some constant \(Q\) independent of \(n\). We observe that, in order to deduce (6.1), we used the two elementary identities: \(\mathbb{E}[(A-\sigma^{2})^{3}]=\kappa_{3}(A)\), and \(\mathbb{E}[(A-\sigma^{2})^{4}]=\kappa_{4}(A)+3\kappa_{2}(A)^{2}\).
### Proof of Theorem 3.5
Write \(M_{0}:=M\cdot n_{L+1}\). We start by observing that, conditionally on \(\mathcal{F}^{(L)}\), the \(M_{0}\)-dimensional random vector \(F:=\begin{pmatrix}V^{J_{\ell}}_{\alpha_{\ell}}z^{(L+1)}_{i;\alpha_{\ell}}\end{pmatrix}_{\begin{subarray}{c}1\leq i\leq n_{L+1}\\ (J_{\ell},\alpha_{\ell})\in\mathbf{B}\end{subarray}}\) is Gaussian and centered, with covariance
\[\Sigma(i,(J_{\ell},\alpha_{\ell})\,;\,j,(J_{k},\alpha_{k})):=\delta_{ij}V^{J_ {\ell}}_{\alpha_{\ell}}V^{J_{k}}_{\alpha_{k}}\Sigma^{(L)}_{\alpha_{\ell}\alpha _{k}},\]
where we used the convention (5.2) to deal with the case \(\alpha_{k}=\alpha_{\ell}\). Gaussian integration by parts yields, in particular, that, for all twice differentiable functions \(h:\mathbb{R}^{M_{0}}\to\mathbb{R}\) that are \(1\)-Lipschitz and such that
\[\sup_{x\in\mathbb{R}^{M_{0}}}\|\mathrm{Hess}\,h(x)\|_{HS}\leq 1,\]
one has the identity
\[\mathbb{E}[\langle\nabla h(F),F\rangle_{\mathbb{R}^{M_{0}}}]=\mathbb{E}[ \mathbb{E}[\langle\nabla h(F),F\rangle_{\mathbb{R}^{M_{0}}}\,|\,\mathcal{F}^{ (L)}]]=\mathbb{E}[\langle\Sigma,\mathrm{Hess}\,h(F)\rangle_{HS}].\]
Now suppose that the assumptions of Point **(1)** in the statement are in order. One can apply Proposition 5.7 in the case \(M=M_{0}\) and \(N=G\) to deduce that the quantity \(d_{c}(F,G)\) is bounded by a multiple of \(\sqrt{B_{n}}\), where \(B_{n}\) is defined in (5.9) with \(\mu(dJ,dx)\) equal to the counting measure on \(\mathbf{B}\), and the conclusion follows from (5.11). Similarly, under the assumptions of Point **(2)** in the statement, one can exploit Proposition 5.9 in the case \(M=M_{0}\) and \(N=G^{\prime}\) to deduce that the quantity \(d_{c}(F,G^{\prime})\) is bounded by a multiple of \(\sqrt{A_{n}}\), where \(A_{n}\) is defined in (5.8) with \(\mu(dJ,dx)\) equal to the counting measure on \(\mathbf{B}\), and (5.11) yields once again the desired conclusion.
### Proof of Theorem 3.10
The statement follows from Proposition 5.14, as applied to the following configuration
* \(T=\mathcal{M}_{q}\times[n_{L+1}]\times\mathbb{U}\) and \(\nu=\nu_{0}\otimes\nu_{1}\otimes dx\), where \(\nu_{0}\) and \(\nu_{1}\) are counting measures;
* \(Y=\Gamma^{(L+1)}_{\mathbb{U}}\), regarded as a random element with values in \(H_{1}=\mathbb{W}^{q;2}(\mathbb{U})\subset L^{2}(\nu)\);
* \(Z=z^{(L+1)}_{\mathbb{U}}\), regarded as a random element with values in \(H_{1}=\mathbb{W}^{q;2}(\mathbb{U})\subset L^{2}(\nu)\);
* for \((J_{1},i_{1},x_{\alpha_{1}})\), \((J_{2},i_{2},x_{\alpha_{2}})\in T\), \[\Sigma((J_{1},i_{1},x_{\alpha_{1}});(J_{2},i_{2},x_{\alpha_{2}}))=\delta_{i_{1 }i_{2}}D^{J_{1}}_{\alpha_{1}}D^{J_{2}}_{\alpha_{2}}\Sigma^{(L)}_{\alpha_{1} \alpha_{2}},\] where the convention (5.2) has been implicitly applied.
Proposition 5.14 implies therefore that, under the assumptions of Theorem 3.10-**(1)**, the quantity \(d_{2}\left(z^{(L+1)}_{\mathbb{U}},\Gamma^{(L+1)}_{\mathbb{U}}\right)\) is bounded by a multiple of \(\sqrt{B_{n}}\), where \(B_{n}\) is defined according to (5.9) in the case \(\mu=\nu_{0}\otimes dx\), so that the conclusion follows from (5.11). Analogously, under the assumptions of Theorem 3.10-**(2)**, Proposition 5.14 yields that the quantity \(\mathbf{W}_{2;q}\left(z^{(L+1)}_{\mathbb{U}},\Gamma^{(L+1)}_{\mathbb{U}}\right)\) is bounded by a multiple of \(B^{\frac{1}{8}}_{n}+C^{\frac{1}{4}}_{n}\) (see (5.10)), and (5.11) yields
once again the desired conclusion. The last statement in the theorem follows by an analogous route.
### Proof of Theorem 3.14
Fix \(\mathbb{U}\) and \(k\geq 1\) as in the statement, and define \(r:=k+1+\lfloor\frac{n_{0}}{2}\rfloor\). In view of [80, Theorem 4.1], it is sufficient to prove formula (3.34). To accomplish this task, we will exploit relation (3.36) in the following setting: \(X=z_{\mathbb{U}}^{(L+1)}\), \(Y=\Gamma_{\mathbb{U}}^{(L+1)}\) and \(V=\Sigma^{(L)}=\{\Sigma_{\alpha\beta}^{(L)}:x_{\alpha},x_{\beta}\in\bar{ \mathbb{U}}\}\), as defined in (2.8). We regard \(z_{\mathbb{U}}^{(L+1)}\) and \(\Gamma_{\mathbb{U}}^{(L+1)}\) as random elements with values in \(C^{k}(\bar{\mathbb{U}})\), such that \(\mathbb{P}(z_{\mathbb{U}}^{(L+1)}\in C^{\infty}(\bar{\mathbb{U}}))=\mathbb{P} (\Gamma_{\mathbb{U}}^{(L+1)}\in C^{\infty}(\bar{\mathbb{U}}))=1\). Similarly, we regard \(\Sigma^{(L)}\) as a random element with values in the space \(C^{k,k}(\bar{\mathbb{U}}\times\bar{\mathbb{U}})\) such that \(\mathbb{P}_{\Sigma^{(L)}}(C^{\infty,\infty}(\bar{\mathbb{U}}\times\bar{ \mathbb{U}}))=1\), where \(\mathbb{P}_{\Sigma^{(L)}}\) is shorthand for the law of \(\Sigma^{(L)}\). By construction, there exists a version of the conditional probability
\[\mathbb{Q}_{S}:=\mathbb{P}_{z_{\mathbb{U}}^{(L+1)}\,|\,\Sigma^{(L)}=S}\]
such that, for \(\mathbb{P}_{\Sigma^{(L)}}\)-almost every \(S\), one has that \(S\in C^{\infty,\infty}(\bar{\mathbb{U}}\times\bar{\mathbb{U}})\) and, under \(\mathbb{Q}_{S}\), the random element \(z_{\mathbb{U}}^{(L+1)}\) is a centered Gaussian random field with \(n_{L+1}\) independent components with common covariance \(S\); when these two requirements are met, one has that
\[\mathbb{Q}_{S}(C^{\infty}(\bar{\mathbb{U}}))=1.\]
The following statement gathers together the main results one can deduce from the construction of coupled smooth Gaussian fields detailed in [28, Section 4.2].
**Lemma 6.1** (See [28]).: _Let the above notation and assumptions prevail, and let \(S\) be a symmetric and positive definite element of \(C^{\infty,\infty}(\bar{\mathbb{U}}\times\bar{\mathbb{U}})\). Let \(\mathbf{K}\) be the operator defined in (3.23) for \(r=k+1+\lfloor\frac{n_{0}}{2}\rfloor\), and let \(\mathbf{K}_{S}\) be the operator obtained from (3.23) by replacing the kernel \(K^{(L+1)}\) with \(S\). Then, there exists a probability space \((\Omega_{1},\mathcal{F}_{1},\mathbb{P}_{1})\) supporting two random elements \(E,F\), with values in \(C^{k}(\bar{\mathbb{U}})\) and such that:_
* \(E\) _has the law of a centered Gaussian field on_ \(\bar{\mathbb{U}}\) _with_ \(n_{L+1}\) _independent components having common covariance_ \(S\)_;_
* \(F\overset{\text{law}}{=}\Gamma_{\mathbb{U}}^{(L+1)}\)_;_
* _the following estimate is in order:_ \[\mathbb{E}_{1}[\|E-F\|_{\mathscr{W}^{r,2}(\mathbb{U})}^{2}]=\|\sqrt{\mathbf{K }}-\sqrt{\mathbf{K}_{S}}\|_{HS}^{2}.\]
For \(E,F\) as in Lemma 6.1, one has that \(\mathbb{P}_{1}(E,F\in C^{\infty}(\bar{\mathbb{U}}))=1\), and one can apply (3.19) to deduce that, for some absolute constant \(A\) depending on \(\mathbb{U}\), one has the bound \(\|E-F\|_{C^{k}(\bar{\mathbb{U}})}\leq A\cdot\|E-F\|_{\mathscr{W}^{r,2}(\mathbb{ U})}\), a.s.-\(\mathbb{P}_{1}\). Since, by virtue of Proposition 5.12, one has that, for all \(p\in(0,1)\),
\[\|\sqrt{\mathbf{K}}-\sqrt{\mathbf{K}_{S}}\|_{HS}\leq c\cdot\|\mathbf{K}- \mathbf{K}_{S}\|_{HS}^{\frac{1-p}{2-p}}\]
for some finite constant \(c\) uniquely depending on \(p\) and on the deterministic operator \(\mathbf{K}\), we deduce from (3.36) that \(\mathbf{W}_{\infty;k}\left(z_{\mathbb{U}}^{(L+1)},\Gamma_{\mathbb{U}}^{(L+1)}\right)\) is bounded by a multiple of \(B_{n}^{\frac{1}{3}}+C_{n}^{\frac{1}{4}}\), where \(B_{n},C_{n}\) are defined according to (5.9) and (5.10), respectively, in the case \(\mu=\nu_{0}\otimes dx\). The conclusion now follows from relation (5.11).
## 7. Statements and Declarations
### Funding
BH gratefully acknowledges support from NSF CAREER grant DMS-2143754 as well as NSF grants DMS-1855684, DMS-2133806 and an ONR MURI on Foundations of Deep Learning. DM is grateful to MUR projects _MatModTov_, _Grafia_ and to PNRR/CN1 Spoke 3 for financial support. IN's research is supported by the Luxembourg National Research Fund (Grant: O22/17372844/FraMStA). GP's research is supported by the Luxembourg National Research Fund (Grant: O21/16236290/HDSA).
### Other Interests
The authors declare no financial or non-financial competing interests.
### Acknowledgement
We thank Nicholas Nelsen for pointing out a mistake in the previous version of this paper.
|
2301.02191 | Physics informed neural network for charged particles surrounded by
conductive boundaries | In this paper, we developed a new PINN-based model to predict the potential
of point-charged particles surrounded by conductive walls. As a result of the
proposed physics-informed neural network model, the mean square error and R2
score are less than 7% and more than 90% for the corresponding example
simulation, respectively. Results have been compared with typical neural
networks and random forest as a standard machine learning algorithm. The R2
score of the random forest model was 70%, and a standard neural network could
not be trained well. Besides, computing time is significantly reduced compared
to the finite element solver. | Fatemeh Hafezianzade, Morad Biagooi, SeyedEhsan Nedaaee Oskoee | 2023-01-05T17:52:36Z | http://arxiv.org/abs/2301.02191v1 | # Physics informed neural network for charged particles surrounded by conductive boundaries
###### Abstract
In this paper, we developed a new PINN-based model to predict the potential of point-charged particles surrounded by conductive walls. As a result of the proposed physics-informed neural network model, the mean square error and \(R^{2}\) score are less than 7% and more than 90% for the corresponding example simulation, respectively. Results have been compared with typical neural networks and random forest as a standard machine learning algorithm. The \(R^{2}\) score of the random forest model was 70%, and a standard neural network could not be trained well. Besides, computing time is significantly reduced compared to the finite element solver.
Poisson · Laplace · Physics-informed neural network · charged particles · Conductive boundaries · supercapacitor
## 1 Introduction
Computational Electromagnetic Simulation (CES) plays a significant role in many areas of science and engineering, such as soft matter, electrical engineering, biomedical engineering and chemistry. In addition, it has numerous applications in industry. For example, it is one of the main tools in investigating and designing the process of supercapacitors, which are porous energy storage devices with many applications in industry, especially when high power consumption or transfer is needed (Miller and Simon, 2008). Here, studying the physical mechanisms arising from charge storage in supercapacitors is essential for further technological development (Salanne et al., 2016; Simon and Gogotsi, 2008).
Solving Maxwell's equations, especially the Poisson equation in this study, is an essential part of computational electromagnetic algorithms (Jackson, 1962). Solving the Poisson equation can help scientists to calculate the potential of electrical sources in any system. However, many difficulties arise in practice due to the long-range nature of electrical interactions. In particular, estimating the potential of point-charged components in an environment with conductive walls is challenging because of the induced charges present on the boundaries.
Generally, there are two approaches to solving the Poisson equation: analytical solution (Jackson, 1962) and numerical methods. There are limited techniques for solving analytically, like the image charges method, applicable for cases with regular geometries; however, there is no guarantee of achieving practical results. If, for example, a particle is placed in a cubic conductive container, the image charges method will produce an infinite series. On the other hand, numerical methods lead to approximate solutions based on discretizing space and/or time domains. One of the typical numerical methods is the Finite Element Method (FEM) (Jin, 2014), which discretizes the continuous partial differential equations (PDEs) and forms a linear set of algebraic equations (S. et al., 1991). Nevertheless, even FEM fails in calculating the potential at a charged particle's position since the electrical potential is singular at the location of the charges. A number of methods and algorithms have been developed to address this problem, including Induced Charge MMM2D (ICMMM2D) (Tyagi et al., 2007) for 2D, ELCIC (Tyagi et al., 2008) for 2D + h, and Induced Charge Computation (\(ICC*\)) (Tyagi et al., 2010; Kesselheim et al., 2010; Arnold et al., 2013) for 3D periodicity, as well as a method introduced by Reed et al. (2007). In addition, there is a more recent algorithm named PLT. It was first demonstrated for a partially periodic system constrained between two metallic plates in Rostami et al. (2016), and then it was applied in CAVIAR (Biagooi et al., 2020), a molecular dynamics simulation package for charged particles surrounded by non-trivial conductive boundaries. Numerical solving of these problems with the CAVIAR package is accurate; moreover, it takes less time than \(ICC*\) (Biagooi et al., 2020) but is still time- and memory-consuming.
Recently, another data-driven approach to solving PDEs, based on deep machine learning, has attracted great interest. For instance, Shan et al. (2020) present a CNN to predict the electric potential with different excitations and permittivity distributions in 2D and 3D models. It is fast and efficient compared with FEM (Jin, 2014). However, a couple of problems prevent it from being utilized as a Poisson solver in the MD simulation process: first, it cannot handle discrete density functions such as those of point charges, and second, it is a physics-free approach, which makes it hard to consider boundary conditions. To overcome the first problem, one can use the PLT algorithm. Additionally, Raissi et al. (2019) introduced the physics-informed neural network (PINN), whose loss function incorporates the governing equations, boundary conditions, and initial conditions, making it an excellent alternative to the conventional deep learning method.
In this paper, we applied a new PINN-based model to predict the potential of point-charged particles surrounded by conductive walls. We then compared the results with typical neural networks and with random forest as a standard machine learning algorithm. As a case study, we implemented these models for a charged particle in a spherical container. The reason for utilizing this simple example is that there is an exact analytical solution to this problem, namely the image charges method. As a starting point, we used the PLT algorithm to transform the Poisson equation into the Laplace equation with modified boundary conditions. Then we trained the model to solve the Laplace equation with the new boundary conditions. The input data includes the position at which we want to evaluate the potential and the modified boundary conditions; the output data is the corresponding electrical potential at that position.
## 2 Methods
This article aims to build a machine-learning model (ML-Model) to predict the potential of point-charged particles surrounded by conductive walls. The potential of charged particles is calculated by solving the Poisson equation, which can be written as (Jackson, 1962):
\[\nabla^{2}\phi=-\rho/\epsilon_{0}=-\sum_{i=1}^{N}q_{i}\delta(x-x_{q_{i}})/ \epsilon_{0}, \tag{1}\]
where \(\phi\) is the potential and \(\rho\) is a charge distribution. The first and most straightforward ML-Model that comes to mind is a model that includes \(x_{q}\) and \(x\) as inputs and \(\phi(x_{q},x)\) as an output. Here \(x_{q}\) is the position of the point-charged particle, \(x\) is the position at which we want to calculate the potential, and \(\phi(x_{q},x)\) is the corresponding potential. So the number of input features depends on the number of charged particles; for instance, in 3 dimensions, if there are \(N\) charged particles, the number of input features has to be \(3+3\times N\). Therefore,
this kind of model could only predict the potential of a fixed number of charged particles. In many applications of this method, such as molecular dynamics simulation, this number is not fixed and can even increase or decrease during the simulation. To overcome this problem, we use the PLT algorithm to transform the Poisson equation into the Laplace equation with new boundary conditions. This algorithm is discussed in more detail in Section 2.1. So we can train a model which includes \(x\) and the modified boundary conditions as input features and \(\phi(\phi_{b},x)\) as an output. We define the boundary conditions only at \(N_{b}\) fixed points on the boundary, \(\{\phi_{1},\phi_{2},\ldots,\phi_{N_{b}}\}\). In this case, with the PLT algorithm, we can build a model with a fixed number of input features that can predict the potential of any number of charged particles.
### Poisson to Laplace Transformation (PLT)
According to the PLT algorithm, the electrical potential is divided into two parts: singular potential (\(\phi_{si}\)) and smooth potential (\(\phi_{sm}\)); \(\phi(\vec{x})=\phi_{si}(\vec{x})+\phi_{sm}(\vec{x})\). It is important to note that the smooth part here is the solution of the Laplace equation with modified boundary conditions,
\[\nabla^{2}\phi_{sm}(\vec{x})=0, \tag{2}\]
Figure 1: Methodology flow chart, The blue part: Preparing the data, in which the reference data set is created based on the PLT algorithm. The red part: Training models process, first the reference set is split to train and test set, then RF, ANN, and PINN model were applied on the train set, after tuning the hyperparameters the best model were chose.
while \(\phi_{si}\) obeys the familiar Coulomb law
\[\phi_{si}(\vec{x})=\sum_{i=1}^{N}\frac{q_{i}}{4\pi\epsilon_{0}\|\vec{x}-\vec{x}_{ i}\|}. \tag{3}\]
It can be seen that the modified boundary condition for \(\phi_{sm}\) is represented by
\[\phi_{sm}\|_{\vec{x}_{bc}}=\phi\|_{\vec{x}_{bc}}-\phi_{si}\|_{\vec{x}_{bc}}, \tag{4}\]
where \(\phi\|_{\vec{x}_{bc}}\) corresponds to the initial electrical potential on the boundaries. Finally, with the PLT algorithm, we could transfer the Poisson to the Laplace equation with new modified boundary conditions, then train an ML-Model with these modified boundary conditions as an input parameter and the smooth potential as an output. Afterward with the summation of singular and predicted smooth potential, we can reach the total potential. Advantage of utilizing PLT algorithm is that it leads to having a fixed number of input data since the number of input data would be independent of the number of point-charged particles.
### Data Engineering
For training a highly accurate model, having a good train set is crucial. In this work, the reference set is \(\Gamma=\left\{x^{i},y^{i},z^{i},\vec{\varphi}_{bc}^{\,i},\phi^{i}\right\}_{i=1}^{N}\), where the input consists of \(\{x,y,z,\vec{\varphi}_{bc}=\{\phi_{1},\phi_{2},...,\phi_{N_{b}}\}\}\) and \(\phi\) is the target. \(x,y,z\) are the coordinates of a point in the container at which we want to calculate the potential, \(\phi\) is the numeric value of the potential at this point, and \(\{\phi_{1},\phi_{2},...,\phi_{N_{b}}\}\) are the boundary conditions at \(N_{b}\) points on the boundary. First, \(N_{p}\) positions in the container are chosen at which the potential is to be predicted. In fact, for each boundary condition \(\{\phi_{1},\phi_{2},...,\phi_{N_{b}}\}\), there are \(N_{p}\) points at which we want to calculate the potential. Then, the reference set can be created for \(N_{q}\) different boundary conditions, so it consists of \(N=N_{q}\times N_{p}\) samples, which can be split into train and test sets. In this case, our container is a sphere; we set \(N_{p}=78\), \(N_{b}=26\), and \(N_{q}=100\). So, our reference set consists of 100 different boundary conditions, and for each boundary condition \(\{\phi_{1},\phi_{2},...,\phi_{26}\}\) there are 78 points in the sphere at which we want to calculate the potential. We use the solution of the image charges method (Eq. 5) to calculate the targets of the reference set:
\[\phi(\vec{x})=\frac{1}{4\pi\epsilon_{0}}\left\{\frac{q}{\|\vec{x}-\vec{x}_{q}\|}+\frac{q^{\prime}}{\|\vec{x}-\vec{x}_{q^{\prime}}\|}\right\},\quad q^{\prime}=-\frac{aq}{r},\quad\vec{x}_{q^{\prime}}=\frac{a^{2}}{r}\frac{\vec{x}_{q}}{\|\vec{x}_{q}\|}, \tag{5}\]
where \(a\) is the radius of the conductive spherical shell and \(r\) is the distance of the point charge \(q\) from its center. The numeric value of the potential is very small (\(\sim 10^{-9}\)), which leads to significant rounding errors during computation; therefore, the potential of an electron at a \(1\,m\) distance from it, \(1.44\times 10^{-9}[V]\), is used as a unit
Figure 2: Schematic of the PLT method: a) the main system which had point charges inside of it b) the new system without any point charges and the boundaries were modified c) \(N_{b}\) points on the boundary are shown to be used as our model input.
to make Eq. 5 dimensionless. We randomly chose 5000 and 1000 samples from the reference set to create the train and test sets. The train and test sets have no samples in common. In addition, the best model should adequately predict the potential not only for test samples but also for samples with boundary conditions different from those in the train and test sets. So, to evaluate the model better, we prepared an extrapolation set that includes 1000 samples with 55 distinct boundary conditions.
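As an illustration of how the reference set can be assembled, the following Python sketch (names are ours; dimensionless units with the \(1/4\pi\epsilon_{0}\) prefactor absorbed) builds smooth-potential targets from the image-charge solution of Eq. 5 for a single charge:

```python
import numpy as np

def sphere_image_potential(x, xq, q=1.0, a=1.0):
    """Total potential at x of a charge q at xq inside a grounded sphere of
    radius a, via the image-charge solution (Eq. 5)."""
    r = np.linalg.norm(xq)
    q_img = -a * q / r                 # image charge q'
    x_img = (a ** 2 / r ** 2) * xq     # image position (a^2 / r) * xq / |xq|
    return q / np.linalg.norm(x - xq) + q_img / np.linalg.norm(x - x_img)

rng = np.random.default_rng(1)

def random_point_in_ball(radius):
    """Uniform point in a ball by rejection sampling."""
    while True:
        p = rng.uniform(-radius, radius, 3)
        if np.linalg.norm(p) < radius:
            return p

a = 1.0
xq = random_point_in_ball(0.8 * a)                       # one point charge
eval_pts = [random_point_in_ball(0.95 * a) for _ in range(78)]
# Smooth-part targets: total (image) potential minus the singular Coulomb part
phi_sm = [sphere_image_potential(x, xq, a=a) - 1.0 / np.linalg.norm(x - xq)
          for x in eval_pts]
```

Repeating this for 100 charge positions yields the \(100\times 78\) samples of the reference set.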
### ML Algorithms
In this work, three different supervised learning methods have been used, and their regression accuracy, based on the metrics presented in Section 2.4, has been evaluated. Mainly, we stick to Physics-Informed Neural Networks (PINN; Raissi et al., 2019), but to compare our results with other ML algorithms, we use Random Forest (RF; Breiman, 2001) and Artificial Neural Networks (ANN). All the models are briefly introduced, the hyperparameters are fine-tuned, and their performance is reported. Scikit-learn (Pedregosa et al., 2011), TensorFlow (Abadi, 2016), Keras (Chollet, 2015), and NumPy (Walt et al., 2011) are the Python libraries used in this project.
#### 2.3.1 Random Forest(RF)
RF is one of the most popular machine learning algorithms for regression problems for many reasons; in this project, this model has been chosen since
a) It is speedy to learn.
b) It is robust against over-fitting.
Over-fitting is detected when the performance on train samples is perfect while the performance on test samples is poor. RF is an ensemble model in which an average of many uncorrelated trees determines the predicted potential for the target data set. Although each tree is a weak learner, many trees grouped together make a strong learner. The RF randomizes the trees by choosing a subset of the training data and of the features for each tree. Here we use the scikit-learn (Pedregosa et al., 2011) RF implementation.
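A minimal scikit-learn version of this step reads as follows, assuming arrays X (one row per sample, columns \(x,y,z,\phi_{1},\ldots,\phi_{26}\)) and y (smooth-potential targets) built from the reference set of Section 2.2:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=5000, test_size=1000, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0)  # 100 trees, cf. 3.1.1
rf.fit(X_train, y_train)
pred = rf.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred), "R2:", r2_score(y_test, pred))
```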
#### 2.3.2 ANN
A typical neural network architecture consists of an input layer, multiple hidden layers, and an output layer, with several neurons in each layer. In detail:
* Input layer: The neurons in the input layer are the input features.
* Hidden layers: The value of every neuron in the hidden layers is a linear combination of the neurons in the previous layer followed by the application of an activation function (Eq. 6); in most cases, the activation function is non-linear. \[a_{n}=\sigma_{l}\left(a_{n-1}\mathbf{w}_{n}+\mathbf{b}_{n}\right).\] (6) \(\mathbf{n}\) is the layer number, \(\mathbf{w}\) and \(\mathbf{b}\) are the model parameters, weights and biases respectively, and \(\sigma_{l}\) is the activation function (Goodfellow et al., 2016).
* Output layer: The neurons in the output layer are the model targets, and they are calculated with Eq. 6 using a linear activation function.
* Loss function: In all neural networks there is a function that must be minimized over the model parameters during the training stage via back-propagation; typically, the loss function is the mean square error between the true and the predicted values. \[\text{Loss}(w)=MSE_{d}+\lambda\sum_{w}w^{2},\] (7) \[MSE_{d}=\frac{1}{N}\sum_{i=1}^{N}\left[\mathrm{U}\left(\mathbf{X}_{i},\mathbf{w}\right)-\mathrm{T}_{i}\right]^{2}.\] (8)
U and T are the predicted output and true target values, respectively, X is the input data, and w denotes the neural network parameters, i.e., weights and biases. The first term in Eq. 7 is the mean square error, and
the second term exists to prevent over-fitting, namely \(L2\) regularization (a. K. Connect et al., 1992), which is used to reduce the effect of large weights.
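For concreteness, a minimal Keras sketch of such a network (29 inputs: \(x,y,z\) and the 26 boundary potentials; tanh hidden layers; linear output; \(L2\) penalty \(10^{-4}\)); we compile with Adam only to keep the sketch short, whereas the runs reported below use L-BFGS-B:

```python
import tensorflow as tf

reg = tf.keras.regularizers.l2(1e-4)   # L2 penalty, matching lambda = 0.0001
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(29,))]
    + [tf.keras.layers.Dense(50, activation="tanh", kernel_regularizer=reg)
       for _ in range(7)]
    + [tf.keras.layers.Dense(1, activation="linear", kernel_regularizer=reg)]
)
model.compile(optimizer="adam", loss="mse")   # Eq. 7 (the L2 term via regularizers)
```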
#### 2.3.3 PINN
PINN (Raissi et al., 2019) enforces the Laplace equation, a physical law of the electromagnetic system, as a constraint on the neural network. This study proposes a PINN-based approach to solve the Laplace equation with changeable boundary conditions. Fig. 3 shows a schematic of the neural network layout for this approach. PINN-based models are neural networks with modified loss functions:
\[\text{Loss}=\lambda_{1}MSE_{d}+\lambda_{2}MSE_{f}+\lambda_{3}MSE_{b}+\lambda_ {4}\sum_{w}w^{2}, \tag{9}\]
The first and the last terms of Eq. 9 are the same as those of typical neural networks in Eq. 7. The second term corresponds to the governing physical equation, i.e., the Laplace equation, and the third term corresponds to the boundary conditions;
\[MSE_{d}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\|u(\mathbf{x}_{d}^{i},\vec{\varphi}_ {d}^{i};w)-\phi_{d}^{i}\|^{2}, \tag{10}\]
\[MSE_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\|f(\mathbf{x}_{f}^{i},u_{f}^{i};w) \|^{2}, \tag{11}\]
and
\[MSE_{b}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\|\mathcal{B}(\mathbf{x}_{b}^{i}, \vec{\varphi}_{b}^{i},u_{b}^{i};w)\|^{2}. \tag{12}\]
Here we define \(f\left(\mathbf{x},\mathrm{u};w\right)\)
\[\begin{split} f\left(\mathbf{x},\mathrm{u};w\right)& =0,\hskip 28.452756pt\mathrm{x}\in\Gamma_{f}\\ &=\nabla^{2}u\\ &=\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{ \partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}}\\ &=\frac{\partial w}{\partial x}\frac{\partial}{\partial w}\left( \frac{\partial w}{\partial x}\frac{\partial u}{\partial w}\right)+\frac{ \partial w}{\partial y}\frac{\partial}{\partial w}\left(\frac{\partial w}{ \partial y}\frac{\partial u}{\partial w}\right)+\frac{\partial w}{\partial z} \frac{\partial}{\partial w}\left(\frac{\partial w}{\partial z}\frac{ \partial u}{\partial w}\right),\end{split} \tag{13}\]
with Dirichlet boundary conditions
\[\begin{split}\mathcal{B}\left(\mathbf{x},\vec{\varphi},u;w \right)&=0,\hskip 42.679134pt\mathrm{x}\in\Gamma_{b}\\ &=\sum_{j=1}^{26}(u(\mathbf{x}^{j},\vec{\varphi};w)-\varphi_{b} ^{j}).\end{split} \tag{14}\]
\(\lambda_{1},\lambda_{2},\lambda_{3}\) in Eq. 9 correspond to the weight coefficients for the data contribution, the Laplace equation, and the boundary losses. We use these weight coefficients motivated by the study of Kag et al. (2022). The last term is the \(L2\) regularization (a. K. Connect et al., 1992). Notice that the model with \(\lambda_{2}=\lambda_{3}=0.0\) is exactly a typical neural network as described in the previous subsection.
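A minimal TensorFlow sketch of how the loss of Eq. 9 can be assembled (function names, tensor shapes, and the example weights are our illustrative choices; the \(L2\) term is handled via kernel regularizers as above):

```python
import tensorflow as tf

def laplace_residual(model, xyz, phi_bc):
    """Residual f of Eq. 11: Laplacian of u w.r.t. the coordinates (x, y, z)."""
    with tf.GradientTape() as outer:
        outer.watch(xyz)
        with tf.GradientTape() as inner:
            inner.watch(xyz)
            u = model(tf.concat([xyz, phi_bc], axis=1))
        grad = inner.gradient(u, xyz)            # (N, 3): du/dx, du/dy, du/dz
    hess = outer.batch_jacobian(grad, xyz)       # (N, 3, 3)
    return tf.linalg.trace(hess)                 # u_xx + u_yy + u_zz

def pinn_loss(model, data, colloc, bc, lam=(1.0, 0.3, 0.3)):
    xyz_d, phi_bc_d, phi_d = data                # labelled samples (Eq. 10)
    mse_d = tf.reduce_mean(
        tf.square(model(tf.concat([xyz_d, phi_bc_d], axis=1)) - phi_d))
    mse_f = tf.reduce_mean(tf.square(laplace_residual(model, *colloc)))  # Eq. 11
    xyz_b, phi_bc_b, phi_b = bc                  # boundary samples (Eq. 12)
    mse_b = tf.reduce_mean(
        tf.square(model(tf.concat([xyz_b, phi_bc_b], axis=1)) - phi_b))
    return lam[0] * mse_d + lam[1] * mse_f + lam[2] * mse_b
```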
### Evaluating Metrics
The performance of the different algorithms for potential estimation is evaluated with the following metrics,
\[\Delta\phi=\phi_{True}-\phi_{Pred}, \tag{15}\]
\[\sigma=\sqrt{\frac{1}{n}\sum_{i=0}^{n-1}(\Delta\phi)^{2}}, \tag{16}\]
\[\text{R}^{2}=1-\frac{\sum_{i=1}^{n}\left(\phi_{\text{True}}-\phi_{\text{Pred}} \right)^{2}}{\sum_{i=1}^{n}\left(\phi_{\text{True}}-\bar{\phi}_{\text{True}} \right)^{2}}, \tag{17}\]
\[\text{MSE}=<(\Delta\phi)^{2}>. \tag{18}\]
where \(\phi_{True}\) is the true potential, \(\phi_{Pred}\) is the predicted potential, and \(\bar{\phi}_{True}\) is the mean true potential of a given test sample. In this study, we used the scatter \(\sigma\), the \(R^{2}\) score, and the \(MSE\) as our evaluation metrics.
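These metrics are straightforward to compute; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def evaluate(phi_true, phi_pred):
    d = phi_true - phi_pred                                        # Eq. 15
    sigma = np.sqrt(np.mean(d ** 2))                               # Eq. 16
    r2 = 1.0 - np.sum(d ** 2) / np.sum((phi_true - phi_true.mean()) ** 2)  # Eq. 17
    mse = np.mean(d ** 2)                                          # Eq. 18
    return sigma, r2, mse
```

Note that, with these definitions, \(\sigma=\sqrt{MSE}\).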
## 3 Result
In this paper, we predict the smooth potential of a point-charged particle in a spherical conductive container. First, we set up the train and test sets with 5000 and 1000 samples (Section 2.2); then we train our models to predict the smooth potential. We can calculate the total potential by summing the smooth and singular potentials (detailed in Section 2.1). However, in this work, to compare our results with CAVIAR (Biagooi et al., 2020), we investigate the smooth potential.
### Random Forest
#### 3.1.1 Hyperparameter for RF
We optimize over the only hyperparameter that influences the fitting of the random forest model, namely the number of trees in the forest. In Fig. 4, we plot the \(MSE\) (left panel) and the \(R^{2}\) score (right panel) as functions of the number of trees for the test set to determine the optimal hyperparameter, which we find to be 100 trees, since the improvement beyond 100 trees is negligible. Afterward, we trained the RF model using 100 trees.
Figure 3: Physics-informed neural network scheme for solving Laplace equation with variable boundaries
#### 3.1.2 RF Prediction
Fig. 5 illustrates the RF model with 100 trees. It shows that the prediction is acceptable when the numeric value of the potential is less than 0.3 (\(\phi_{True}<0.3\)), while the model cannot predict precisely in the case of \(\phi_{True}\geq 0.3\). The graphs of Fig. 5 compare the true potential \(\phi_{True}\) and the RF-predicted potential \(\phi_{RF}\) for the train set (left) and the test set (right). The RF method is relatively fast; however, it only works when the predicted potential is smooth and relatively small, and it is not suitable in the case of point-charged particles (where both the gradient of the potential and its numeric value are high at the position of the charge). Furthermore, it fails to predict the potential near the boundaries, since the gradient of the potential is considerable there.
Figure 4: \(MSE\) (left), and \(R^{2}\) score (right) for 1000 different sample of Test set
Figure 5: Potential estimation of RF model: a) show the train data set with 5000 samples with scatter of \(\sigma=0.02\), b) show the test data set with 1000 samples with scatter of \(\sigma=0.07\). The dashed red line shows where the predicted potential equals the true potential. The pink-shaded region marks \(1\sigma\) scatter of potential errors.
### PINN based model and NN
By setting both \(\lambda_{2}\) and \(\lambda_{3}\) to zero in the PINN-based model, one obtains exactly the NN model. Therefore, we investigate both models together and report their results at the same time in the following section.
#### 3.2.1 Hyperparameters for PINN and NN
Unlike the RF model, we define several hyperparameters: the number of neurons, the number of layers, \(\lambda_{2}\), and \(\lambda_{3}\). To tune all hyperparameters, we train the model for up to 100000 epochs, using the L-BFGS-B optimizer (Liu and Nocedal, 1989), until the model's tolerance reaches the level of machine epsilon. For all layers except the last one, we use a tanh activation function. Table 1 reports the \(MSE\) between the predicted and the true potential for different values of the hyperparameters, \(\lambda_{2}=[0,0.1,0.2,0.3]\), \(\lambda_{3}=[0,0.1,0.2,0.3,0.4]\), number of hidden layers \(=[1,3,5,7]\), and number of neurons per hidden layer \(=[10,30,50]\), for 1000 samples of the test set. As shown in Table 1, we observe that a model with one hidden layer could not predict the potential well. Also, a model with ten neurons per layer did not work well. So, Tables 2 and 3 report the results for just \([3,5,7]\) layers and \([30,50]\) neurons per layer. We chose \(\lambda_{4}=0.0001\) to prevent over-fitting.
#### 3.2.2 PINN and NN Prediction
In contrast with Table 1, Table 2 reports not only the \(MSE\) but also the \(R^{2}\) score of the test set. As can be seen in Table 2, the model with seven layers and 50 neurons per layer performed better when \(\lambda_{2}\) and \(\lambda_{3}\) are \(0.3,0.3\) or \(0.2,0.4\), respectively. When \(\lambda_{2}\) and \(\lambda_{3}\) are zero, i.e., a standard neural network, the model does not work well, as can be observed from Table 2 and Fig. 6.
Both plots in Fig. 6 compare the true and predicted potentials on the train set for the best-tuned NN and the best-tuned PINN model, with 7 layers and 50 neurons per layer, \(\lambda_{2}=0.3\) and \(\lambda_{3}=0.3\). As can be seen, the NN model is not trained well, while the PINN-based model predicts the potential precisely, with a scatter of 0.01. Although the PINN-based model predicts the train set well, to verify that over-fitting has not occurred, we also evaluate the model on the test set (Fig. 7).
Figure 6: Potential estimation of best-tuned (a) NN with scatter of \(\sigma=0.07\) and (b) PINN model on the 5000 samples of the train set with scatter of \(\sigma=0.01\). The dashed red line shows where the predicted potential equals the true potential. The pink-shaded region marks \(1\sigma\) scatter of potential errors.
Figure 7: Potential estimation of best-tuned PINN model on the 1000 samples of the test set with scatter of \(\sigma=0.02\), \(MSE=0.069\) and \(R^{2}score=0.851\). The dashed red line shows where the predicted potential equals the true potential. The pink-shaded region marks \(1\sigma\) scatter of potential errors.
\begin{table}
\begin{tabular}{c c||c c c|c c c} \hline \hline & \multicolumn{4}{c}{\(MSE_{test}\)} & \multicolumn{4}{c}{\(MSE_{test}\)} \\ \cline{2-9} \(\lambda_{2}\) & \(\lambda_{3}\) & **Neurons=10** & **Neurons=30** & **Neurons=50** & **Neurons=10** & **Neurons=30** & **Neurons=50** \\ \hline \hline \multicolumn{9}{c}{**Num of hidden layer:1**} & \multicolumn{4}{c}{**Num of hidden layer:5**} \\ \hline \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.162 & 0.145 & 0.124 & **0.099** & 0.145 & **0.078** \\
0.0 & 0.2 & 0.191 & 0.212 & 0.132 & 0.136 & **0.062** & **0.072** \\ & 0.3 & 0.175 & 0.159 & 0.171 & 0.14 & **0.069** & 0.124 \\ & 0.4 & 0.24 & 0.171 & 0.18 & 0.191 & **0.064** & **0.076** \\ \hline & 0.0 & 0.246 & 0.247 & 0.247 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.222 & 0.143 & 0.117 & 0.107 & 0.245 & 0.192 \\
0.1 & 0.2 & 0.201 & 0.165 & 0.129 & 0.243 & **0.079** & **0.062** \\ & 0.3 & 0.22 & 0.159 & 0.16 & 0.112 & **0.094** & **0.071** \\ & 0.4 & 0.249 & 0.177 & 0.243 & 0.205 & **0.088** & **0.07** \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.158 & 0.144 & 0.154 & 0.235 & 0.161 & **0.083** \\
0.2 & 0.2 & 0.225 & 0.174 & 0.174 & 0.198 & **0.097** & **0.071** \\ & 0.3 & 0.223 & 0.143 & 0.158 & 0.192 & **0.081** & **0.075** \\ & 0.4 & 0.197 & 0.185 & 0.189 & 0.216 & **0.065** & 0.118 \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.21 & 0.132 & 0.129 & 0.245 & **0.088** & **0.098** \\
0.3 & 0.2 & 0.199 & 0.139 & 0.157 & 0.209 & 0.117 & **0.065** \\ & 0.3 & 0.216 & 0.241 & 0.202 & 0.182 & 0.168 & **0.08** \\ & 0.4 & 0.225 & 0.182 & 0.196 & 0.239 & **0.083** & **0.097** \\ \hline & \multicolumn{4}{c}{**Num of hidden layer:3**} & \multicolumn{4}{c}{**Num of hidden layer:7**} \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.157 & 0.104 & **0.086** & 0.244 & 0.237 & 0.247 \\
0.0 & 0.2 & 0.151 & **0.088** & **0.086** & 0.238 & 0.087 & 0.105 \\ & 0.3 & 0.13 & **0.078** & **0.089** & 0.244 & **0.07** & **0.067** \\ & 0.4 & 0.202 & 0.173 & 0.102 & 0.188 & **0.068** & **0.063** \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & **0.099** & **0.089** & 0.132 & 0.244 & 0.244 & 0.244 \\
0.1 & 0.2 & 0.198 & **0.093** & **0.077** & 0.23 & 0.221 & **0.069** \\ & 0.3 & 0.188 & **0.088** & **0.083** & 0.224 & 0.223 & **0.082** \\ & 0.4 & 0.262 & 0.104 & **0.079** & 0.213 & **0.072** & **0.074** \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.119 & 0.146 & **0.082** & 0.244 & 0.244 & 0.245 \\
0.2 & 0.2 & 0.172 & 0.09 & **0.086** & 0.244 & 0.244 & **0.071** \\ & 0.3 & 0.179 & 0.084 & **0.085** & 0.225 & 0.251 & **0.08** \\ & 0.4 & 0.197 & 0.185 & **0.086** & 0.256 & 0.166 & **0.067** \\ \hline & 0.0 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 & 0.246 \\ & 0.1 & 0.17 & 0.125 & **0.082** & 0.244 & 0.244 & 0.244 \\
0.3 & 0.2 & 0.199 & 0.117 & **0.096** & 0.244 & **0.083** & **0.08** \\ & 0.3 & 0.224 & 0.107 & **0.088** & 0.202 & 0.235 & **0.069** \\ & 0.4 & 0.199 & 0.222 & 0.152 & 0.236 & **0.097** & **0.075** \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(MSE\) between the predicted and the exact potential \(\phi(x)\) for different values of \(\lambda_{2}\) and \(\lambda_{3}\), and different numbers of hidden layers and neurons per hidden layer in the PINN, for 1000 different samples of the test set. Here, \(\lambda_{4}=0.0001\) is fixed and \(\lambda_{1}=1-(\lambda_{2}+\lambda_{3}+\lambda_{4})\). Bold numbers indicate \(MSE<0.1\).
\begin{table}
\begin{tabular}{c c||c c c c c c} \hline \hline & \multicolumn{3}{c}{\(Num_{layer}\)=3} & \multicolumn{3}{c}{\(Num_{layer}\)=5} & \multicolumn{3}{c}{\(Num_{layer}\)=7} \\ \cline{3-8} \(\lambda_{2}\) & \(\lambda_{3}\) & \(MSE\) & \(R^{2}\) & \(MSE\) & \(R^{2}\) & \(MSE\) & \(R^{2}\) \\ \hline \hline \multicolumn{8}{c}{\(Num_{neuron}\)=30} \\ \hline \hline \multirow{4}{*}{0.0} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.104 & 0.774 & 0.145 & 0.365 & 0.237 & -14.7 \\ & 0.2 & 0.088 & 0.85 & 0.062 & 0.889 & 0.087 & 0.807 \\ & 0.3 & 0.078 & 0.826 & 0.069 & 0.884 & **0.07** & **0.904** \\ & 0.4 & 0.173 & 0.551 & 0.064 & 0.863 & 0.068 & 0.868 \\ \hline \multirow{4}{*}{0.1} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.089 & 0.796 & 0.245 & -1478 & 0.244 & 0 \\ & 0.2 & 0.093 & 0.825 & 0.079 & 0.852 & 0.221 & -2.40 \\ & 0.3 & 0.088 & 0.793 & 0.094 & 0.714 & 0.223 & 0.094 \\ & 0.4 & 0.104 & 0.719 & 0.088 & 0.793 & 0.072 & 0.887 \\ \hline \multirow{4}{*}{0.2} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.146 & 0.64 & 0.161 & 0.566 & 0.244 & 0 \\ & 0.2 & 0.09 & 0.769 & 0.097 & 0.75 & 0.244 & -3167 \\ & 0.3 & 0.084 & 0.819 & 0.081 & 0.857 & 0.251 & -2.32 \\ & 0.4 & 0.185 & 0.521 & 0.065 & 0.874 & 0.166 & 0.07 \\ \hline \multirow{4}{*}{0.3} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.125 & 0.626 & 0.088 & 0.747 & 0.244 & 0 \\ & 0.2 & 0.117 & 0.666 & 0.117 & 0.689 & 0.083 & 0.863 \\ & 0.3 & 0.107 & 0.67 & 0.168 & 0.59 & 0.235 & 0 \\ & 0.4 & 0.222 & 0.25 & 0.083 & 0.837 & 0.097 & 0.804 \\ \hline \multicolumn{8}{c}{\(Num_{neuron}\)=50} \\ \hline \hline \multirow{4}{*}{0.0} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.086 & 0.832 & 0.078 & 0.865 & 0.247 & -254 \\ & 0.2 & 0.086 & 0.838 & 0.072 & 0.873 & 0.105 & 0.711 \\ & 0.3 & 0.089 & 0.767 & 0.124 & 0.66 & 0.067 & 0.875 \\ & 0.4 & 0.102 & 0.734 & 0.076 & 0.846 & 0.063 & 0.849 \\ \hline \multirow{4}{*}{0.1} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.132 & 0.64 & 0.192 & -0.48 & 0.244 & 0 \\ & 0.2 & 0.077 & 0.85 & 0.062 & 0.888 & 0.069 & 0.897 \\ & 0.3 & 0.083 & 0.837 & 0.071 & 0.895 & 0.082 & 0.774 \\ & 0.4 & 0.079 & 0.847 & 0.07 & 0.887 & 0.074 & 0.863 \\ \hline \multirow{4}{*}{0.2} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.082 & 0.756 & 0.083 & 0.859 & 0.245 & -1774 \\ & 0.2 & 0.086 & 0.796 & 0.071 & 0.88 & **0.071** & **0.908** \\ & 0.3 & 0.085 & 0.799 & 0.075 & 0.888 & 0.08 & 0.87 \\ & 0.4 & 0.086 & 0.8 & 0.118 & 0.727 & **0.067** & **0.902** \\ \hline \multirow{4}{*}{0.3} & 0.0 & 0.246 & 0 & 0.246 & 0 & 0.246 & 0 \\ & 0.1 & 0.082 & 0.75 & 0.098 & 0.753 & 0.244 & 0 \\ \cline{1-1} & 0.2 & 0.096 & 0.777 & 0.065 & 0.894 & 0.08 & 0.798 \\ \cline{1-1} & 0.3 & 0.088 & 0.785 & 0.08 & 0.834 & **0.069** & **0.902** \\ \cline{1-1} & 0.4 & 0.152 & 0.557 & 0.097 & 0.818 & 0.075 & 0.867 \\ \hline \hline \end{tabular}
\end{table}
Table 2: \(MSE\) and \(R^{2}\) score between the predicted and the exact potential \(\phi(x)\) for different values of \(\lambda_{2}\) and \(\lambda_{3}\), and different numbers of hidden layers and neurons per hidden layer in the PINN, for 1000 different samples of the test set. Here, \(\lambda_{4}=0.0001\) is fixed and \(\lambda_{1}=1-(\lambda_{2}+\lambda_{3}+\lambda_{4})\). Bold numbers mark cases with \(MSE<0.1\) and \(R^{2}\) score \(>0.9\).
\begin{table}
\begin{tabular}{l c||c c c c c c} \hline \hline & \multicolumn{3}{c}{\(Num_{layer}\)=3} & \multicolumn{3}{c}{\(Num_{layer}\)=5} & \multicolumn{3}{c}{\(Num_{layer}\)=7} \\ \cline{2-7} \(\lambda_{2}\) & \(\lambda_{3}\) & \(MSE\) & \(R^{2}\) & \(MSE\) & \(R^{2}\) & \(MSE\) & \(R^{2}\) \\ \hline \hline \multicolumn{7}{c}{\(Num_{neuron}\)=30} \\ \hline \multirow{7}{*}{0.0} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.14 & 0.71 & 0.148 & 0.364 & 0.275 & -15.28 \\ & 0.2 & 0.118 & 0.764 & 0.077 & 0.847 & 0.126 & 0.646 \\ & 0.3 & 0.104 & 0.737 & 0.09 & 0.782 & 0.089 & 0.835 \\ & 0.4 & 0.208 & 0.587 & 0.072 & 0.842 & 0.083 & 0.822 \\ \hline \multirow{7}{*}{0.1} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.105 & 0.808 & 0.28 & -1801 & 0.28 & 0 \\ & 0.2 & 0.112 & 0.775 & 0.09 & 0.845 & 0.259 & -2.137 \\ & 0.3 & 0.105 & 0.793 & 0.123 & 0.697 & 0.264 & 0.07 \\ & 0.4 & 0.116 & 0.735 & 0.12 & 0.684 & 0.092 & 0.813 \\ \hline \multirow{7}{*}{0.2} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.183 & 0.601 & 0.201 & 0.484 & 0.28 & 0 \\ & 0.2 & 0.118 & 0.739 & 0.114 & 0.68 & 0.28 & -3620 \\ & 0.3 & 0.095 & 0.829 & 0.109 & 0.799 & 0.275 & -1.82 \\ & 0.4 & 0.209 & 0.586 & 0.078 & 0.851 & 0.205 & 0.183 \\ \hline \multirow{7}{*}{0.3} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.154 & 0.663 & 0.105 & 0.743 & 0.28 & 0 \\ & 0.2 & 0.13 & 0.74 & 0.139 & 0.688 & 0.109 & 0.725 \\ & 0.3 & 0.143 & 0.565 & 0.2 & 0.604 & 0.271 & 0.012 \\ & 0.4 & 0.269 & 0.356 & 0.158 & 0.364 & 0.118 & 0.726 \\ \hline \multicolumn{7}{c}{\(Num_{neuron}\)=50} \\ \hline \multirow{7}{*}{0.0} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0 & 0.281 & 0 \\ & 0.1 & 0.116 & 0.764 & 0.112 & 0.769 & 0.28 & -310 \\ & 0.2 & 0.1 & 0.837 & 0.092 & 0.829 & 0.135 & 0.598 \\ & 0.3 & 0.101 & 0.79 & 0.145 & 0.719 & 0.083 & 0.843 \\ & 0.4 & 0.117 & 0.706 & 0.096 & 0.801 & 0.106 & 0.662 \\ \hline \multirow{7}{*}{0.1} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.149 & 0.657 & 0.227 & -0.465 & 0.28 & 0 \\ & 0.2 & 0.121 & 0.658 & 0.077 & 0.832 & 0.09 & 0.824 \\ & 0.3 & 0.108 & 0.719 & 0.096 & 0.817 & 0.102 & 0.734 \\ & 0.4 & 0.117 & 0.716 & 0.085 & 0.843 & 0.093 & 0.842 \\ \hline \multirow{7}{*}{0.2} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.094 & 0.782 & 0.112 & 0.777 & 0.28 & -2164 \\ & 0.2 & 0.114 & 0.758 & 0.094 & 0.785 & 0.091 & 0.854 \\ & 0.3 & 0.101 & 0.762 & 0.093 & 0.818 & 0.104 & 0.771 \\ & 0.4 & 0.124 & 0.644 & 0.143 & 0.703 & 0.088 & 0.849 \\ \hline \multirow{7}{*}{0.3} & 0.0 & 0.281 & 0 & 0.281 & 0 & 0.281 & 0 \\ & 0.1 & 0.097 & 0.772 & 0.115 & 0.727 & 0.28 & 0 \\ \cline{1-1} & 0.2 & 0.112 & 0.768 & 0.096 & 0.807 & 0.167 & 0.363 \\ \cline{1-1} & 0.3 & 0.11 & 0.777 & 0.113 & 0.682 & **0.089** & **0.851** \\ \cline{1-1} & 0.4 & 0.177 & 0.563 & 0.114 & 0.76 & 0.103 & 0.741 \\ \hline \hline \end{tabular}
\end{table}
Table 3: \(MSE\) and \(R^{2}\) score between the predicted and the exact potential \(\phi(x)\) for different values of \(\lambda_{2}\) and \(\lambda_{3}\), and different numbers of hidden layers and neurons per hidden layer in the PINN, for 1000 different samples of the extrapolation set. Here, \(\lambda_{4}=0.0001\) is fixed and \(\lambda_{1}=1-(\lambda_{2}+\lambda_{3}+\lambda_{4})\). Bold numbers mark the best hyperparameters for our PINN-based model.
### Comparison
We evaluate RF, NN, and PINN for estimating the potential of point-charged particles surrounded by conductive walls. According to Fig. 6, the NN was not trained well, while the RF and PINN-based models predict the potential precisely. However, RF does not work well for estimating \(\phi_{True}>0.3\). Beyond this, the best model should estimate not only the potential of the train and test sets but also the potential of point-charged particles that appear in neither. We therefore evaluate the best-tuned PINN model and RF on the extrapolation samples; the results are reported in Table 3. As can be seen in Fig. 8, the PINN-based model predicts the potential of new charged particles better than the RF model, and it predicts \(\phi_{True}>0.3\) far better than RF.
### Generalization (Multiple charged particles)
For generalization, we test the PINN-based model with \(\lambda_{2}=0.3\) and \(\lambda_{3}=0.3\) on configurations with more than one charged particle surrounded by conductive boundaries. Since the Laplace equation is linear, we predict the potential of each charged particle separately and then obtain the total potential as the superposition of the individual predictions, as sketched below. We then report the \(MSE\) between the predicted smooth potential and the exact smooth solution, which is calculated by the image-charges method. Fig. 9 shows the relation between the \(MSE\) and \(N\), the number of charged particles. As expected, the \(MSE\) is independent of the number of charged particles, which means we can also use this method for problems with any desired number of particles.
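A minimal sketch of this superposition step, where `predict_single` is a placeholder for the trained single-charge model:

```python
# Superposition of single-charge predictions (Laplace's equation is linear).
import numpy as np

def predict_single(eval_pts, charge_pos):
    # Placeholder for the trained single-charge PINN prediction.
    return -np.log(np.linalg.norm(eval_pts - charge_pos, axis=1) + 0.1)

def predict_multi(eval_pts, charge_positions):
    # Total smooth potential = sum of the per-charge predictions.
    return sum(predict_single(eval_pts, q) for q in charge_positions)

pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 2))
charges = [np.array([0.2, 0.3]), np.array([-0.4, 0.1]), np.array([0.0, -0.5])]
phi_total = predict_multi(pts, charges)
```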
Figure 8: Potential estimation of best-tuned (a) RF with scatter of \(\sigma=0.07\), and (b) PINN-based model with scatter of \(\sigma=0.02\) on the 1000 samples of the Extrapolation set. The dashed red line shows where the predicted potential equals the true potential. The pink-shaded region marks \(1\sigma\) scatter of potential errors.
## 4 Conclusion
In this study, we have trained machine-learning models to predict the smooth potential of charged components surrounded by conductive boundaries. In this scheme, the total potential can be easily calculated as the sum of the predicted smooth potential and the singular potential from the PLT algorithm. The reference set consists of analytic solutions obtained with the image-charge method, split into a train set with 5000 samples and a test set with 1000 samples. To check the accuracy of our model, we built another data set, the extrapolation set, consisting of 1000 samples with boundary conditions that appear in neither the train nor the test set. Our main conclusions can be summarized as follows:
* We find that the PINN-based model trains better than the RF and NN models. RF cannot predict high potentials, while the NN could not be trained well at all.
* Our PINN-based model predicts the potential of the test set with \(MSE=0.069\), \(R^{2}\) score \(=0.902\), and scatter \(\sigma=0.02\). It also predicts the potential of the extrapolation set with \(MSE=0.089\), \(R^{2}\) score \(=0.851\), and scatter \(\sigma=0.02\).
* Since the Laplace equation is linear, the trained model can predict the potential of more than one charged particle by summing the predicted potential of every particle. Moreover, we show that the \(MSE\) is independent of the number of particles.
|
2306.03228 | Discovering Novel Biological Traits From Images Using Phylogeny-Guided
Neural Networks | Discovering evolutionary traits that are heritable across species on the tree
of life (also referred to as a phylogenetic tree) is of great interest to
biologists to understand how organisms diversify and evolve. However, the
measurement of traits is often a subjective and labor-intensive process, making
trait discovery a highly label-scarce problem. We present a novel approach for
discovering evolutionary traits directly from images without relying on trait
labels. Our proposed approach, Phylo-NN, encodes the image of an organism into
a sequence of quantized feature vectors -- or codes -- where different segments
of the sequence capture evolutionary signals at varying ancestry levels in the
phylogeny. We demonstrate the effectiveness of our approach in producing
biologically meaningful results in a number of downstream tasks including
species image generation and species-to-species image translation, using fish
species as a target example. | Mohannad Elhamod, Mridul Khurana, Harish Babu Manogaran, Josef C. Uyeda, Meghan A. Balk, Wasila Dahdul, Yasin Bakış, Henry L. Bart Jr., Paula M. Mabee, Hilmar Lapp, James P. Balhoff, Caleb Charpentier, David Carlyn, Wei-Lun Chao, Charles V. Stewart, Daniel I. Rubenstein, Tanya Berger-Wolf, Anuj Karpatne | 2023-06-05T20:22:05Z | http://arxiv.org/abs/2306.03228v1 | # Discovering Novel Biological Traits From Images Using Phylogeny-Guided Neural Networks
###### Abstract.
Discovering evolutionary traits that are heritable across species on the tree of life (also referred to as a phylogenetic tree) is of great interest to biologists to understand how organisms diversify and evolve. However, the measurement of traits is often a subjective and labor-intensive process, making _trait discovery_ a highly label-scarce problem. We present a novel approach for discovering evolutionary traits directly from images without relying on trait labels. Our proposed approach, _Phylo-NN_, encodes the image of an organism into a sequence of quantized feature vectors-or codes-where different segments of the sequence capture evolutionary signals at varying ancestry levels in the phylogeny. We demonstrate the effectiveness of our approach in producing biologically meaningful results in a number of downstream tasks including species image generation and species-to-species image translation, using fish species as a target example.1
computer vision, neural networks, phylogeny, morphology, knowledge-guided machine learning
## 1. Introduction
One of the grand challenges in biology is to find features of organisms, or _traits_, that define groups of organisms, their genetic and developmental underpinnings, and their interactions with environmental selection pressures (Battelle, 2016). Traits can be physiological, morphological, and/or behavioral (e.g., beak color, stripe pattern, and fin curvature) and are integrated products of genes and the environment. The analysis of traits is critical for predicting the effects of environmental change or genetic manipulation, and for understanding the process of evolution. For example, discovering traits that are heritable across species on the tree of life (also referred to as the _phylogenetic tree_) can serve as a starting point for linking traits to underlying genetic factors. Traits with such genetic or phylogenetic signal, termed _evolutionary traits_, are of great interest to biologists, as the history of genetic ancestry captured by such traits can guide our understanding of how organisms diversify and evolve. This understanding enables tasks such as estimating the morphological features of ancestors, understanding how species have responded to environmental changes, and even predicting the potential future course of trait changes (Battelle, 2016; Mabee, 2016). However, the measurement of traits is not straightforward and often relies on subjective and labor-intensive human expertise and definitions (Battelle, 2016). Hence, _trait
_discovery_ has remained a highly label-scarce problem, hindering rapid scientific advancement (Krishnan et al., 2017).
With the recent availability of large-scale image repositories containing millions of images of biological specimens (Zhu et al., 2017; Zhang et al., 2018; Zhang et al., 2018), there is a great opportunity for machine learning (ML) to contribute to the problem of trait discovery (Krishnan et al., 2017). In particular, advances in deep learning have enabled us to extract useful information from images and to map them to structured feature spaces where they can be manipulated in a number of ways. We ask the question: _how can we develop deep learning models to discover novel evolutionary traits automatically from images without using trait labels?_
Despite the biological relevance of this question, answering it is challenging for two main reasons. First, not all image features extracted by a deep learning model for ML tasks such as image reconstruction or species classification will exhibit evolutionary signals. Hence, it is important to disentangle the image features of an organism that preserve evolutionary information from the remaining features influenced by unrelated factors (Beng et al., 2016). Second, information about evolutionary signals is not available as a set of known attributes (or trait labels) but rather in the form of structured knowledge of how species are related to each other in the phylogenetic tree (see Figure 1). Without access to trait labels, current methods for feature disentanglement in deep learning (Beng et al., 2016; Krishnan et al., 2017) are unfit for discovering evolutionary traits. Furthermore, current standards in deep learning for generative modeling (Zhu et al., 2017; Zhang et al., 2018) or interpretable ML (Beng et al., 2016; Krishnan et al., 2017) are unable to leverage structured forms of biological knowledge (e.g., phylogenetic trees) in the learning of image features, and hence are unable to analyze and manipulate learned features in biologically meaningful ways.
We propose a novel approach for discovering evolutionary traits automatically from images, termed _phylogeny-guided neural networks_ (_Phylo-NN_), which encodes the image of an organism into a sequence of quantized feature vectors or "codes" (see Figure 1). A unique feature of the image-derived sequences learned by _Phylo-NN_ is that different segments of the sequence capture evolutionary information at varying ancestry levels in the phylogeny, where every level corresponds to a certain point of time in evolutionary history. Analogous to how the genome of an organism encodes all its genetic information and structures it as a set of genes, our image-derived sequences encode all of the visual information contained in the organism's image and structure it as a set of evolutionary traits shared with ancestor nodes within its lineage (at different levels of phylogeny). We thus refer to the image-derived sequences of _Phylo-NN_ as _Imageomes_, a brand-new concept in evolutionary biology. By analyzing and manipulating the codes in the Imageomes of organisms, we can perform a number of biologically meaningful downstream tasks such as species image generation, species-to-species image translation, and visualization of evolutionary traits. We demonstrate the effectiveness of _Phylo-NN_ in solving these tasks using fish species as a target example.1
Our work, for the first time, provides a bridge between the "language of evolution" represented as phylogeny and the "language of images" extracted by _Phylo-NN_ as Imageomes. This work is part of a larger-scale effort to establish a new field of research in "Imageomics" (Zhu et al., 2017), where images are used as the source of information to accelerate biological understanding of traits, ranging from their selective consequences to their causation. Our work also provides a novel methodological advance in the emerging field of knowledge-guided machine learning (KGML) (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) by using tree-based knowledge to structure the embedding space of neural networks and produce scientifically meaningful image generation and translation results.
## 2. Background and Related Work
**What is a Phylogenetic Tree?** A phylogenetic tree visually characterizes the evolutionary distances among a set of species and their common ancestors represented as nodes of the tree. In this tree, the length of every edge represents the evolutionary distance between two nodes (measured in time intervals representing thousands or millions of years), which is estimated from living species and time-calibrated ages using dated fossil ancestors or molecular methods. While rates of change along different edges may vary substantially, on average we expect longer edges to accumulate higher levels of evolutionary trait change than shorter edges. In our work, we consider discretized versions of the phylogenetic tree with \(n_{\text{l}}=4\) ancestry levels, such that every species class (leaf node in the tree) has exactly \(n_{\text{l}}-1\) ancestors. Every ancestry level corresponds to a certain point of time in evolutionary history. See Appendix B for details on phylogeny preprocessing.
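As a toy illustration of this discretization, each species (leaf) can be mapped to one ancestor label per ancestry level, so the node class \(c_{i}(\mathbf{x})\) used in the losses below reduces to a lookup. The clade names here are illustrative and not taken from the paper's data.

```python
# Toy discretized phylogeny: one ancestor label per level for each species.
N_LEVELS = 4  # n_l = 4 ancestry levels, the last one being the species class

# species -> [ancestor at level 1, level 2, level 3, species class at level 4]
ANCESTRY = {
    "Carassius auratus": ["cladeA", "cladeA1", "Cyprinidae", "Carassius auratus"],
    "Lepomis cyanellus": ["cladeB", "cladeB1", "Centrarchidae", "Lepomis cyanellus"],
}

def node_class(species: str, level: int) -> str:
    """Return the ancestor node label of `species` at ancestry level 1..N_LEVELS."""
    return ANCESTRY[species][level - 1]

assert node_class("Carassius auratus", N_LEVELS) == "Carassius auratus"
```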
**Generative Modeling for Images:** There exists a large body of work in deep learning for image generation, including methods based on Variational Autoencoders (VAEs) (Krishnan et al., 2017), Generative Adversarial Networks (GANs) (Krishnan et al., 2017; Krishnan et al., 2017; Zhang et al., 2018; Zhang et al., 2018), Transformer networks (Zhu et al., 2017; Zhang et al., 2018), and Diffusion models (Beng et al., 2016). While some recent advances in this field (e.g., DALL-E 2) have been shown to produce images with very high visual quality, they involve large and complex embedding spaces that are difficult to structure and analyze using tree-based knowledge (e.g., phylogeny). Instead, we build upon a recent line of work in generative modeling using vector-quantized feature representations of images (Zhu et al., 2017; Zhang et al., 2018) that are easier to manipulate than continuous features. In particular, a recent variant of the VAE, termed Vector-Quantized VAE (VQVAE) (Zhang et al., 2018), uses discrete feature spaces quantized using a learned codebook of feature vectors and employs a PixelCNN (Zhu et al., 2017) model for sampling in the discrete feature
Figure 1. _Phylo-NN_ converts images to discrete sequences of features (called Imageomes) where different sequence segments (shown in distinct colors) capture evolutionary information at varying ancestry levels of phylogeny (L1 to L4).
space. This work was extended in [14] to produce VQGAN, which is different from VQVAE in two aspects. First, it adds a discriminator to its framework to improve the quality of the generated images. Second, it uses a Transformer model, namely the GPT architecture [39], to generate images from the quantized feature space instead of a PixelCNN. VQGAN is a state-of-the-art method that generates images of better quality efficiently at higher resolutions than other counterparts such as StyleGAN [23] and Vision Transformers [11; 17]. Our work draws inspiration from VQGAN to embed images in discrete feature spaces (analogous to the discrete nature of symbols used in genome sequences) but with the grounding of biological knowledge available as phylogenetic trees.
**Interpretable ML:** There is a growing trend in the ML community to focus on the interpretability of deep learning features [12]. Some of the earliest works in this direction include the use of saliency scores [44] and Class Activation Maps (CAMs) [42] that reveal sensitive regions of an image influencing classification decisions. However, these methods are known to be noisy and often imprecise [1]. Recent work includes the ProtoPNet model [4], which first learns a set of template image patches (or prototypes) for each class during training, and then uses those templates to both predict and explain the class label of a test image. These methods suffer from two drawbacks. First, they do not allow for structured knowledge to guide the learning of interpretable features and hence are not designed to produce results that are _biologically meaningful_. Second, they are mostly developed for classification problems and cannot be directly applied to image generation or translation problems.
**Disentangling ML Features:** Another related line of research involves disentangling the feature space of deep learning models to align the disentangled features with target "concepts." This includes the approach of "Concept whitening" (CW) [5], where the latent space of a classification model is whitened (i.e., normalized and decorrelated) such that the features along every axis of the latent space corresponds to a separate class. Another approach in this area is that of Latent Space Factorization (LSF) [28], where the latent space of an autoencoder is linearly transformed using matrix subspace projections to partition it into features aligned with target concepts (or attributes) and features that do not capture attribute information. Note that our proposed _Phylo-NN_ model can also be viewed as a latent space disentanglement technique, where the disentangled segments of the learned Imageome correspond to different ancestry levels of the phylogeny. We thus use CW and LSF as baselines in our experiments to test if it is possible to discover evolutionary traits just by disentangling the latent space using species classes as orthogonal concepts, without using the structured knowledge of how species are related to one another in the phylogeny.
**Knowledge-Guided ML:** KGML is an emerging area of research that aims to integrate scientific knowledge in the design and learning of ML models to produce generalizable and scientifically valid solutions [22]. Some examples of previous research in KGML include modifying the architecture of deep learning models to capture known forms of symmetries and invariances [2; 51], and adding loss functions that constrain the model outputs to be scientifically consistent even on unlabeled data [9; 40]. In biology, KGML methods have been developed for species classification that leverage the knowledge of taxonomic grouping of species [10; 13]. KGML methods have also been developed for generative modeling of images using domain knowledge available as knowledge graphs or ontologies [15; 37]. In contrast to these prior works, we focus on structuring the embedding space of neural networks using tree-based knowledge (i.e., phylogeny) to enable the discovery and analysis of novel evolutionary traits automatically from images.
## 3. Proposed Approach: Phylo-NN
We consider the problem of discovering novel (or "unknown") evolutionary traits automatically from images without using any trait labels or knowledge of how the unknown traits correspond to known concepts in a knowledge graph or ontology. We only use the "distant" supervision of how these unknown traits have evolved over time and are shared across species, available in the form of the phylogenetic tree. Figure 2 provides an overview of our proposed _Phylo-NN_ model. Our method can operate on the latent space of any backbone encoder model \(E\) that takes in images as input and produces continuous feature maps \(\mathbf{x}\) as output. There are three computing blocks in _Phylo-NN_ as shown in Figure 2. The first block, Phylo-Encoder (\(PE\)), takes continuous feature maps \(\mathbf{x}\) as input and generates quantized feature sequences (or Imageomes) as output. Imageome sequences \(\mathbf{z}^{Q}\) comprise two _disentangled_ segments: \(\mathbf{z}^{Q}_{\text{p}}\), which captures phylogenetic information (p) at varying ancestry levels, and \(\mathbf{z}^{Q}_{\text{np}}\), which captures non-phylogenetic information (np) that is still important for image reconstruction but is unrelated to the phylogeny. The second block, Phylo-Decoder (\(PD\)), maps the Imageome sequences back to the space of feature maps \(\mathbf{\hat{x}}\), such that \(\mathbf{\hat{x}}\) is a good reconstruction of \(\mathbf{x}\). We then feed \(\mathbf{\hat{x}}\) into a backbone decoder model \(D\) that reconstructs the original image. Note that in the training of \(PE\) and \(PD\) models, both the backbone models \(E\) and \(D\) are kept frozen, thus requiring low training time. _Phylo-NN_ can thus be plugged into the latent space of any powerful encoder-decoder framework. The third block of _Phylo-NN_ is a transformer model \(T\) that takes in the species class variable as input, and generates a distribution of plausible Imageome sequences corresponding to the class as output. These sequences can be fed to the \(PD\) model to generate a distribution of synthetic images. In the following, we provide details on each of the three blocks of _Phylo-NN_.
### Phylo-Encoder (PE) Block
Figure 3 shows the sequence of operations that we perform inside the \(PE\) block. We first apply a convolutional layer on \(\mathbf{x}\) to produce feature maps of size (\(H\times W\times C\)), where \(C\) is the number of channels. We split these \(C\) feature maps into two sets. The first \(C_{\text{p}}\) maps are fed into an MLP layer to learn a global set of feature vectors \(\mathbf{z}_{\text{p}}\) capturing phylogenetic information. The size of \(\mathbf{z}_{\text{p}}\) is kept equal to
Figure 2. Overview of proposed _Phylo-NN_ model architecture.
(\(n_{\text{l}}\,n_{\text{p}}\times d\)), where \(n_{\text{l}}\) is the number of phylogeny levels, \(n_{\text{p}}\) is the number of feature vectors we intend to learn at every phylogeny level, and \(d\) is the dimensionality of the feature vectors. Similarly, the remaining \(C-C_{\mathrm{p}}\) maps are fed into an MLP layer to produce a set of feature vectors \(\mathbf{z}_{\mathrm{np}}\) capturing non-phylogenetic information, of size (\(n_{\mathrm{np}}\times d\)).
**Vector Quantization:** Both \(\mathbf{z}_{\mathrm{p}}\) and \(\mathbf{z}_{\mathrm{np}}\) are converted to _quantized_ sequences of feature vectors, \(\mathbf{z}_{\mathrm{p}}^{Q}\) and \(\mathbf{z}_{\mathrm{np}}^{Q}\), respectively, using the approach developed in VQVAE (Sutton et al., 2017). The basic idea of this quantization approach is to learn a set (or codebook) of \(n_{\mathrm{q}}\) distinct feature vectors (or codes), such that every feature vector in \(\mathbf{z}_{\mathrm{p}}\) and \(\mathbf{z}_{\mathrm{np}}\) is replaced by its nearest counterpart in the codebook. This is achieved by minimizing the _quantization loss_, \(L_{\mathrm{q}}=|\mathbf{z}-\mathbf{z}^{Q}|\). The advantage of working with quantized vectors is that every feature vector in \(\mathbf{z}_{\mathrm{p}}^{Q}\) and \(\mathbf{z}_{\mathrm{np}}^{Q}\) can be referenced just by its location (or index) in the codebook. This allows for faster feature manipulations in the space of discrete code positions than continuous feature vectors.
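A minimal sketch of this quantization step is shown below, written in the standard VQ-VAE style with a straight-through estimator; the two-term codebook/commitment loss is one common realization of the compact \(L_{\mathrm{q}}=|\mathbf{z}-\mathbf{z}^{Q}|\) form, and the codebook size and dimensions are illustrative.

```python
# Nearest-code quantization with a straight-through estimator (illustrative).
import torch

def quantize(z, codebook):
    """z: (seq_len, d) continuous features; codebook: (n_q, d) learned codes."""
    dists = torch.cdist(z, codebook)       # (seq_len, n_q) pairwise distances
    idx = dists.argmin(dim=1)              # codebook index for each position
    z_q = codebook[idx]                    # quantized sequence
    # Codebook + commitment terms; gradients pass straight through to z below.
    loss_q = ((z.detach() - z_q) ** 2).mean() + ((z - z_q.detach()) ** 2).mean()
    z_q = z + (z_q - z).detach()
    return z_q, idx, loss_q

codebook = torch.randn(128, 16, requires_grad=True)  # n_q = 128 codes of dim d = 16
z = torch.randn(40, 16, requires_grad=True)
z_q, idx, loss_q = quantize(z, codebook)
```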
**Using phylogenetic knowledge in \(\mathbf{z}_{\mathrm{p}}^{Q}\)**: Here, we describe our approach to ensure that the quantized feature sequence \(\mathbf{z}_{\mathrm{p}}^{Q}\) contains phylogenetic information. Note that \(\mathbf{z}_{\mathrm{p}}^{Q}\) contains \(n_{\text{l}}\) sub-sequences of length \(n_{\text{p}}\), where every sub-sequence corresponds to a different ancestry level in the phylogeny. While the first sub-sequence \(S_{1}\) should ideally capture information contained in \(\mathbf{x}\) that is necessary for identifying ancestor nodes at level 1 of the phylogeny, \(S_{2}\) should contain additional information that, when combined with \(S_{1}\), is sufficient to identify the correct ancestor node of \(\mathbf{x}\) at level 2. In general, we define the concept of a _Phylo-descriptor_ \(D_{i}=\{S_{1},S_{2},\ldots,S_{i}\}\) of \(\mathbf{x}\) that contains the necessary information for identifying nodes at level \(i\) (see Figure 3). We feed \(D_{i}\) to an MLP layer that predicts the class probabilities of nodes at level \(i\), which are then matched with the correct node class of \(\mathbf{x}\) at level \(i\), \(c_{i}(\mathbf{x})\), by minimizing the following _phylogeny-guided loss_, \(L_{\mathrm{p}}\):
\[L_{\mathrm{p}}=\sum_{i=1}^{n_{\text{l}}}\beta_{i}\,\mathrm{CE}(\mathrm{MLP}_{i}(D_{i}(\mathbf{x})),c_{i}(\mathbf{x})), \tag{1}\]
where \(\mathrm{CE}\) is the cross-entropy loss and \(\beta_{i}\) is the weighting hyperparameter for level \(i\).
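The sketch below spells out Eq. (1), assuming \(n_{\text{l}}=4\) levels, \(n_{\text{p}}\) codes per level, and one small linear head per level standing in for \(\mathrm{MLP}_{i}\); the per-level class counts and \(\beta_{i}\) values are illustrative.

```python
# Illustrative phylogeny-guided loss: one classification head per descriptor D_i.
import torch

n_l, n_p, d = 4, 8, 16
classes_per_level = [4, 8, 16, 38]   # e.g., 38 species at the leaf level
betas = [1.0, 1.0, 1.0, 1.0]
heads = [torch.nn.Linear(i * n_p * d, c)
         for i, c in zip(range(1, n_l + 1), classes_per_level)]

def phylo_loss(z_p_q, labels):
    """z_p_q: (batch, n_l * n_p, d) phylogenetic codes; labels: one tensor per level."""
    loss = 0.0
    for i in range(1, n_l + 1):
        D_i = z_p_q[:, : i * n_p, :].flatten(1)   # descriptor D_i = {S_1, ..., S_i}
        logits = heads[i - 1](D_i)
        loss = loss + betas[i - 1] * torch.nn.functional.cross_entropy(
            logits, labels[i - 1])
    return loss

z = torch.randn(2, n_l * n_p, d)
labels = [torch.randint(0, c, (2,)) for c in classes_per_level]
print(phylo_loss(z, labels))
```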
**Disentangling \(\mathbf{z}_{\mathrm{p}}^{Q}\) and \(\mathbf{z}_{\mathrm{np}}^{Q}\)**: While minimizing \(L_{\mathrm{p}}\) guides the learning of \(\mathbf{z}_{\mathrm{p}}^{Q}\) to contain phylogenetic information, we still need a way to ensure that \(\mathbf{z}_{\mathrm{np}}^{Q}\) focuses on complementary features and does not contain phylogenetic information. To achieve this, we first apply an orthogonal convolution loss \(L_{\mathrm{o}}\) (originally proposed in (Sutton et al., 2017)) to the convolutional layer of the Phylo-Encoder, to constrain the \(C\) convolutional kernels to be orthogonal to each other. To further ensure that \(\mathbf{z}_{\mathrm{np}}^{Q}\) has no phylogenetic information, we also employ an adversarial training procedure to incrementally remove phylogenetic information from \(\mathbf{z}_{\mathrm{np}}^{Q}\). In particular, we apply an MLP layer \(\mathrm{MLP}_{\mathrm{adv}}\) on \(\mathbf{z}_{\mathrm{np}}^{Q}\), and then train the parameters of \(\mathrm{MLP}_{\mathrm{adv}}\) to minimize the following _adversarial loss_:
\[L_{\mathrm{adv}}=\sum_{i=1}^{n_{\text{l}}}\beta_{i}\,\mathrm{CE}(\mathrm{MLP}_{\mathrm{adv},i}(\mathbf{z}_{\mathrm{np}}^{Q}(\mathbf{x})),c_{i}(\mathbf{x})), \tag{2}\]
This is aimed at training \(\mathrm{MLP}_{\mathrm{adv}}\) to detect any phylogenetic information contained in \(\mathbf{z}_{\mathrm{np}}^{Q}\). Simultaneously, we train the rest of _Phylo-NN_'s parameters to maximize \(L_{\mathrm{adv}}\), such that \(\mathbf{z}_{\mathrm{np}}^{Q}\) becomes irrelevant for the task of identifying nodes in the phylogeny and only contains non-phylogenetic information.
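One compact way to sketch this min-max game is a gradient-reversal layer: the adversarial heads minimize \(L_{\mathrm{adv}}\) while the reversed gradient pushes the rest of the network to maximize it. The paper describes the alternating formulation; this equivalent trick and all shapes below are illustrative.

```python
# Illustrative adversarial disentanglement via gradient reversal.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g  # encoder receives the negated gradient (maximizes L_adv)

def adversarial_loss(z_np_q, labels, adv_heads, betas):
    feats = GradReverse.apply(z_np_q.flatten(1))
    loss = 0.0
    for i, (head, y) in enumerate(zip(adv_heads, labels)):
        loss = loss + betas[i] * torch.nn.functional.cross_entropy(head(feats), y)
    return loss

n_np, d = 10, 16
adv_heads = [torch.nn.Linear(n_np * d, c) for c in [4, 8, 16, 38]]
z_np = torch.randn(2, n_np, d)
labels = [torch.randint(0, c, (2,)) for c in [4, 8, 16, 38]]
print(adversarial_loss(z_np, labels, adv_heads, [1.0] * 4))
```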
### Phylo-Decoder (PD) Block
The goal of the PD block is to convert the space of Imageome sequences, \(\mathbf{z}^{Q}=\{\mathbf{z}_{\mathrm{p}}^{Q},\mathbf{z}_{\mathrm{np}}^{Q}\}\), back to the space of original feature maps, \(\mathbf{x}\). The sequence of operations in \(PD\) is almost a mirror image of those used in _PE_. We first pass \(\mathbf{z}_{\mathrm{p}}^{Q}\) and \(\mathbf{z}_{\mathrm{np}}^{Q}\) through two MLPs, and then concatenate their outputs to create feature maps of size (\(H\times W\times C\)). These feature maps are then fed into a convolutional layer to produce \(\hat{\mathbf{x}}\). Minimizing the reconstruction loss, \(L_{\mathrm{rec}}=|\hat{\mathbf{x}}-\mathbf{x}|\), ensures that \(\hat{\mathbf{x}}\) is a good approximation of \(\mathbf{x}\). Finally, _PE_ and _PD_ are jointly trained using a weighted summation of all the losses mentioned above.
### Transformer (T) Block
Once _PE_ and _PD_ are trained, we can extract Imageome sequences \(\mathbf{z}^{Q}\) for every image in the training set. The goal of the Transformer block is to learn the patterns of codes in the extracted Imageome sequences of different classes (e.g., species class or ancestor node class), and to use these patterns to generate synthetic Imageome sequences for every class. To achieve this, we follow the approach used by VQGAN (Krizhevsky et al., 2014) and train a GPT transformer model (Zhu et al., 2017), \(T_{i}\), to generate plausible sequences of \(\mathbf{z}^{Q}\) for every node class at level \(i\). The generated Imageome sequences can then be converted into synthesized specimen images using _PD_ and \(D\).
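A rough sketch of class-conditional generation, assuming a trained autoregressive model `gpt` that maps a code prefix to logits over the \(n_{\text{q}}\) codebook indices; the class token is prepended and Imageome codes are sampled one position at a time. The placeholder model and sequence length are illustrative.

```python
# Illustrative class-conditional sampling of an Imageome sequence.
import torch

def sample_imageome(gpt, class_token, seq_len, temperature=1.0):
    seq = torch.tensor([[class_token]])              # (1, 1) conditioning prefix
    for _ in range(seq_len):
        logits = gpt(seq)[:, -1, :] / temperature    # next-code distribution
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, nxt], dim=1)
    return seq[:, 1:]                                # drop the class token

# Placeholder "model": uniform logits over n_q = 128 codes.
gpt = lambda s: torch.zeros(s.shape[0], s.shape[1], 128)
codes = sample_imageome(gpt, class_token=7, seq_len=42)  # e.g., n_l*n_p + n_np codes
```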
## 4. Evaluation Setup
### Data
We used a curated dataset of teleost fish images from five ichthyological research collections that participated in the Great Lakes Invasives Network (GLIN) project. After obtaining the raw images from these collections, we handpicked a subset of about \(11,000\) images and pre-processed them by resizing and appropriately padding each image to a \(256\times 256\) pixel resolution. Finally, we partitioned the images into a training set and a validation set using an \(80-20\) split. See Appendix A for details on data pre-processing.
Our dataset includes images from \(38\) species of teleost fishes, with an average of \(200\) images per species. We discretized the phylogenetic tree to have \(n_{\text{l}}=4\) ancestry levels, where the last
Figure 3. Detailed view of the Phylo-Encoder block.
level is the species class. See Appendix B for details on phylogeny selection and discretization.
### Backbone Encoder and Decoder
Since _Phylo-NN_ can operate on the feature space \(\mathbf{x}\) of any backbone encoder \(E\) and produce reconstructed feature maps \(\mathbf{\hat{x}}\) that can be decoded back to images by a corresponding backbone decoder \(D\), we tried different encoder-decoder choices including pix2pix (Dordes and Torr, 2017), ALAE (Srivastava et al., 2017), and StyleGAN (Srivastava et al., 2017). However, we found VQGAN (Srivastava et al., 2017) feature maps to produce images of better visual quality than other encoder-decoder models. Hence, we used the embeddings of a base VQGAN encoder \(E\) as inputs in _Phylo-NN_ for all our experiments. The reconstructed feature maps of _Phylo-NN_ were then fed into a base VQGAN quantizer serving as the backbone decoder \(D\). Note that while training _Phylo-NN_, we kept the parameters of the backbone models fixed, thus saving training time and resources.
### Baseline Methods
Since no direct baselines exist for structuring the embedding space of neural networks using tree-based knowledge or discovering novel evolutionary traits from images, we considered the following baselines that are closest in motivation to _Phylo-NN_:
**Vanilla VQGAN (Srivastava et al., 2017):** The first baseline that we consider is a vanilla VQGAN model trained to generate and reconstruct images on the fish dataset. By comparing the learned embeddings and generated images of _Phylo-NN_ with vanilla VQGAN, we aim to demonstrate the importance of using biological knowledge to structure the embedding space of neural networks for trait discovery, rather than solely relying on information contained in data.
**Concept whitening (CW) (Beng et al., 2015):** For this second baseline, we replaced the last normalization layer in the encoder block of vanilla VQGAN with the concept whitening (CW) module, where we used species class labels as concept definitions. This is intended to evaluate if CW is capable of disentangling the evolutionary traits of species automatically from images without using the phylogeny. The whitened embeddings \(z_{\text{CW}}\) produced by the CW module are fed into the quantizer module of vanilla VQGAN for converting the embeddings to images. While training the CW module, we optimized the whitening and rotation matrices for all concepts every 30 batches. We used the VQGAN's transformer to generate plausible feature sequences \(z_{\text{cw}}\) conditioned on the species label, which are then decoded into specimen images using the VQGAN's decoder.
**Latent Space Factorization (LSF) (Krizhevsky et al., 2014):** The third baseline that we considered is the LSF method, which is another approach for feature disentanglement given concept attribute labels. Specifically, we introduced a variational autoencoder (VAE) model between the encoder and the quantization layer of the base VQGAN model. Similar to CW, we used the species class of each image as the concept attribute for factorizing the latent space in LSF. The LSF module was trained to optimize VAE's KL-divergence loss and recreation loss along with the attribute and non-attribute losses, as originally defined in the LSF method (Krizhevsky et al., 2014).
## 5. Results
In the following, we analyze the results of _Phylo-NN_ from multiple angles to assess the quality of its learned embeddings and generated images in comparison with baseline methods.
### Validating Species Distances in the Embedding Space
In order to evaluate the ability of _Phylo-NN_ to extract novel (or unknown) evolutionary traits from images without using trait labels, we show that distances between species pairs in the embedding space of _Phylo-NN_ are biologically meaningful and are correlated with ground-truth values better than baseline methods. In the following, we describe the two types of ground-truths used, the approach used for computing distances in the embedding space of comparative methods, and the comparison of correlations with ground-truth values.
**Phylogenetic Ground-truth (GT):** The first ground-truth distance between pairs of species is the _evolutionary distance_ between their corresponding nodes in the phylogenetic tree. In particular, for any two species, we can calculate the total sum of edge lengths in the path between their nodes in the phylogenetic tree. The longer the path, the more distant the species are on the evolutionary scale. Hence, if _Phylo-NN_ indeed captures evolutionary traits in its embedding space, we would expect it to show higher correlations with evolutionary distances computed from the phylogeny as compared to baselines. We applied min-max scaling of evolutionary distances so that they range from 0 to 1.
**Morphological Ground-truth (GT):** Another type of ground-truth distance between species was computed based on measurements of known morphological traits obtained from the FishShapes v1.0 dataset (Srivastava et al., 2017), which contains expert-measured traits known to carry evolutionary signals, defined and collected using traditional methods that are subjective and labor-intensive. We specifically used 8 functionally relevant traits from this dataset for every fish species. Some species were not available in this dataset, so when possible, either the closest relative was substituted or the species was dropped. The species were then matched to a time-calibrated phylogeny of fishes (Beng et al., 2015; Wang et al., 2016) and the log-transformed measurements were rotated with phylogenetically-aligned components analysis (PACA) (Beng et al., 2015), which rotates the traits to the axis with the highest level of phylogenetic signal. After correcting for overall size and allometry, the principal components of PACA were used to compute the Mahalanobis distance between every species-pair, using a covariance matrix proportional to the evolutionary rate matrix. See Appendix D for details on PACA calculations.
**Computing Embedding Distances:** To compute pair-wise distances in the embedding space of _Phylo-NN_, we first compute the probability distributions (or histograms) of quantized codes at every position of the Imageome sequence (i.e., \(\mathbf{z}_{\text{p}}^{Q}\) and \(\mathbf{z}_{\text{np}}^{Q}\)) in the test images for every species. We then compute the _Jensen-Shannon (JS) divergence_(Srivastava et al., 2017) between the probability distributions of codes at a pair of species to measure the dissimilarity of their learned embeddings. We adopt a similar approach for computing the JS-divergence of species-pairs in the quantized feature space of vanilla VQGAN. For baseline methods that operate in continuous feature spaces (CW and LSF), we first calculate the mean feature vector for every
species and then compute the _cosine distance_ (\(1-\text{cosine similarity}\)) of vectors for a pair of species. For both metrics, JS-divergence and cosine distance, a value closer to \(0\) represents higher similarity.
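A minimal sketch of this distance computation, assuming per-species histograms of codebook indices at each Imageome position; averaging the per-position divergences is an illustrative aggregation choice, and SciPy's `jensenshannon` returns the square root of the JS divergence, so it is squared here.

```python
# Illustrative JS-divergence between the code distributions of two species.
import numpy as np
from scipy.spatial.distance import jensenshannon

def code_histograms(imageomes, n_q):
    """imageomes: (n_specimens, seq_len) integer code indices -> (seq_len, n_q)."""
    seq_len = imageomes.shape[1]
    H = np.zeros((seq_len, n_q))
    for j in range(seq_len):
        H[j] = np.bincount(imageomes[:, j], minlength=n_q)
    return H / H.sum(axis=1, keepdims=True)

def species_distance(H_a, H_b):
    # Mean squared JS distance = mean JS divergence across sequence positions.
    return np.mean([jensenshannon(pa, pb) ** 2 for pa, pb in zip(H_a, H_b)])

rng = np.random.default_rng(0)
H_a = code_histograms(rng.integers(0, 128, (200, 42)), n_q=128)
H_b = code_histograms(rng.integers(0, 128, (200, 42)), n_q=128)
print(species_distance(H_a, H_b))
```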
**Comparing Correlations with Ground Truth:** Figure 4(a) and Figure 4(b) show the pair-wise species distance matrices for morphological and phylogenetic GTs, respectively. Note that the rows and columns of all matrices in Figure 4 are species ordered according to their position in the phylogeny (see Appendix B for details), and the diagonal values (self-distances) are removed so that they do not affect the colormap scale. We can see that both ground-truths show a similar clustering structure of species. However, there are differences too; while the phylogenetic GT is solely based on phylogeny, the morphological GT uses both the phylogeny and information about "known" traits. Figure 4(c) and Figure 4(d) show the JS-divergences among species computed separately for the two disentangled parts of _Phylo-NN_'s embeddings (\(\mathbf{z}_{\text{p}}^{Q}\) and \(\mathbf{z}_{\text{np}}^{Q}\)). We can see that the embeddings containing phylogenetic information show a similar clustering structure of distances as the GT matrices, in contrast to the non-phylogenetic embeddings. This shows the ability of _Phylo-NN_ to disentangle features related to phylogeny from other unrelated features. Figure 4 also shows the embedding distance matrices of the baseline methods, which are not as visually clean as _Phylo-NN_ in terms of matching with the GT matrices.
To quantitatively evaluate the ability of _Phylo-NN_ to match GT distances, we compute the Spearman correlation between the GT distance matrices and the embedding distance matrices of the different methods, as shown in Table 1. We can see that _Phylo-NN_ achieves higher correlations with both GTs at the species level. Furthermore, since _Phylo-NN_ learns a different descriptor for every ancestry level, in contrast to baseline methods that learn a flat representation, we can also compute _Phylo-NN_'s distance matrix at any ancestry level and compare it with the GT matrices at the same level. Table 1 shows that _Phylo-NN_ achieves significantly higher correlations with the GT matrices at higher ancestry levels than at the species level.
### Evaluating Species-to-species Image Translations
To further assess how well _Phylo-NN_'s embeddings capture evolutionary traits, we investigate how incrementally altering the learned Imageome sequence of a specimen image, in a phylogenetically meaningful order, affects the observed traits when the altered embeddings are decoded back into an image (a sketch of the procedure follows this discussion). To do so, we set up the following experiment. We pick two specimen images from a pair of species. By encoding the two images using _Phylo-NN_, we obtain their corresponding Imageome encodings, \(\mathbf{z}_{1}^{Q}\) and \(\mathbf{z}_{2}^{Q}\). We then iteratively replace the codes in the Imageome sequence \(\mathbf{z}_{1}^{Q}\) with the corresponding codes in \(\mathbf{z}_{2}^{Q}\), until \(\mathbf{z}_{1}^{Q}\) is transformed completely into \(\mathbf{z}_{2}^{Q}\). The replacement proceeds by first swapping the codes of the non-phylogenetic part of the embedding \(\mathbf{z}_{\text{np}}^{Q}\), then the part capturing evolutionary information at the earliest ancestry level (level 0), then the next ancestry level (level 1), and so on until we reach the last level of the phylogeny, the species level. At that point, the entire Imageome sequence \(\mathbf{z}_{1}^{Q}\) has been replaced with \(\mathbf{z}_{2}^{Q}\). This phylogeny-driven ordering of code replacements helps us capture key "snapshots" of the species-to-species translation process that are biologically
\begin{table}
\begin{tabular}{l l|c c}
\hline\hline
 & & Morphological & Phylogenetic \\
\hline
\multirow{4}{*}{_Phylo-NN_} & level0 & 0.86 & 0.83 \\
 & level1 & 0.87 & 0.85 \\
 & level2 & 0.78 & 0.83 \\
 & species & 0.70 & 0.78 \\
\hline
\multicolumn{2}{l|}{LSF} & 0.38 & 0.55 \\
\multicolumn{2}{l|}{CW} & 0.70 & 0.67 \\
\multicolumn{2}{l|}{vanilla VQGAN} & 0.31 & 0.24 \\
\hline\hline
\end{tabular}
\end{table}
Table 1. Correlations between GT and embedding distances.
Figure 4. Comparing embedding distance matrices of methods with morphological and phylogenetic ground-truths.
meaningful. In particular, by observing the traits that appear or disappear at every ancestry level of code replacement, we can infer and generate novel hypotheses about the biological timing of trait changes as they may have happened in evolutionary history.
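A sketch of the phylogeny-ordered code replacement described above: the non-phylogenetic segment is swapped first, followed by the phylogenetic segments from the earliest ancestry level down to the species level. The segment boundaries are illustrative, and `decode` stands in for the \(PD\) and \(D\) blocks.

```python
# Illustrative species-to-species translation by ordered code replacement.
import numpy as np

def translate(z1, z2, np_slice, level_slices, decode):
    z = z1.copy()
    snapshots = [decode(z)]                # source image
    z[np_slice] = z2[np_slice]             # 1) replace non-phylogenetic codes
    snapshots.append(decode(z))
    for sl in level_slices:                # 2) earliest ancestry level first
        z[sl] = z2[sl]
        snapshots.append(decode(z))
    return snapshots                       # final snapshot decodes to the target

n_p, n_np = 8, 10
level_slices = [slice(n_np + i * n_p, n_np + (i + 1) * n_p) for i in range(4)]
z1 = np.random.default_rng(0).integers(0, 128, n_np + 4 * n_p)
z2 = np.random.default_rng(1).integers(0, 128, n_np + 4 * n_p)
frames = translate(z1, z2, slice(0, n_np), level_slices, decode=lambda z: z.copy())
```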
Figure 5 shows an example of such a translation process between a specimen of the species _Carassius auratus_ and a specimen of the species _Lepomis cyanellus_. We can see that although the two specimens look similar on the surface, there are several subtle traits that differ between the two species and are biologically interesting. For example, the source species has a V-shaped tail fin (termed caudal fin), while the target species has a rounded caudal fin. By looking at their place of occurrence in the translation process of _Phylo-NN_, we can generate novel biological hypotheses of whether they are driven by phylogeny or not, and whether they appeared earlier or later in the target species in the course of evolution. For example, we can see that the rounded-tail feature of the target species appears right after replacing the non-phylogenetic part of the embedding (see blue circle), indicating that this feature may not be capturing evolutionary signals and may instead be affected by unrelated factors (e.g., environment). On the other hand, if we observe another fin (termed pectoral fin) that appears on the side of the body just behind the gill cover, compared to the fin's lower position closer to the underside of the specimen in the source image, we can see that it becomes sharper and more compact only in the later levels (it is faintly visible in level 1 but shows up prominently as a white region in level 2; see green circle). This suggests that the change in the position and shape of the pectoral fin occurred later in fish evolution, which is supported by the phylogeny and is in fact the case. Our work opens novel opportunities for generating such biological hypotheses, which can be further investigated by biologists to potentially accelerate scientific discoveries. Figure 5 also shows the translations obtained by baseline methods for the same pair of species specimens. We can see that the baselines mostly perform a smooth interpolation between the source and target images. This is in contrast to the discrete and non-smooth nature of the changes observed in the translation of _Phylo-NN_, which is indeed the desired behavior, since the traits appearing or disappearing at every ancestry level are expected to be orthogonal to those at other levels. Furthermore, the transition points in the translation process of baseline methods do not correspond to biologically meaningful events, as opposed to _Phylo-NN_.
### Generalization to Unseen Species
As _Phylo-NN_ aims to encode specimen images into their corresponding phylogenetic and non-phylogenetic sequences of codes, we expect specimens that belong to the same species to largely share the same phylogenetic code in terms of the species-level descriptor \(D_{n_{\text{l}}}\), while varying in terms of the non-phylogenetic codes. More generally, specimens belonging to species sharing a common ancestor at phylogenetic level \(i\) should largely share the codes within the descriptor at level \(i\), \(D_{i}\), while varying in the rest of the codes. This should also apply to specimens of _unseen_ (or newly discovered) species that we have not observed in the training set. We posit that by looking at the similarity of codes generated for an unseen species, we should be able to infer its ancestral lineage in terms of the species sampled during training. In other words, by analyzing the distribution of codes generated from the image of an unseen species, we should be able to locate it on the phylogenetic tree and position it next to the subset of known species (seen during training) that share a common ancestor.
To quantify this phenomenon, for a given species or ancestor node, we construct two sets of histograms, \(H_{\text{p}}\) and \(H_{\text{np}}\), of sizes \([n_{\text{l}}\times n_{\text{p}}]\) and \([n_{\text{np}}]\), respectively. Each value in the histograms, \(H_{\text{p}}^{i,j}\) and \(H_{\text{np}}^{k}\), describes the distribution of codes at a certain location in the Imageomes across all specimens belonging to the species or ancestor node. See Appendix F for an example. We then compute the entropy of each sequence location in \(H_{\text{p}}\) and \(H_{\text{np}}\) to measure the "purity" of the codes used at every location. If the entropy is low for a certain location, it means only a few possible codes occur at that location, suggesting that those specific codes are key to characterizing the species or ancestor node in question. On the other hand, higher entropy means a variety of codes occur at that location, implying that such a location is not discriminative for the species or ancestor node. Finally, to compare the code distributions for two species, we use the JS-divergence metric to calculate the difference between two histograms of a sequence location. Similar to Section 5.1, this metric can be aggregated to quantify the coding differences between species-pairs.
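The per-position purity measure can be sketched as follows, assuming the normalized histograms \(H_{\text{p}}\) or \(H_{\text{np}}\) from above; low entropy at a position means that a few codes dominate there, making the position discriminative for the species or ancestor node.

```python
# Illustrative per-position entropy ("purity") of Imageome code distributions.
import numpy as np
from scipy.stats import entropy

def position_entropies(H):
    """H: (seq_len, n_q) rows of code probabilities -> one entropy per position."""
    return np.array([entropy(row) for row in H])

H = np.random.default_rng(0).dirichlet(np.ones(128), size=42)  # stand-in histograms
ent = position_entropies(H)
key_positions = np.argsort(ent)[:5]  # the most discriminative sequence positions
```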
To assess _Phylo-NN_'s ability to generalize to unseen species, we train it on a subset of the species and then evaluate the quality of the embedding space when the model is shown species it has never seen during training. In our experiment, we train on the same dataset as before while excluding three species. Once the model is trained, we look at the average JS-divergence between these missing species and three other species in the tree. These three species were selected such that each missing species has one seen species that is close to it phylogenetically (i.e., both species share the same ancestor at the immediate ancestry level), while the others are relatively far from it.
Table 2 shows the average distance of the phylogenetic codes among the six aforementioned species. We can see that the distance is smallest between each unseen species and its counterpart sharing the same immediate ancestor (the diagonal of the table). This confirms that even though the model has not seen the former species, it characterizes them with Imageome sequences that are significantly closer to those of their seen counterparts than to the other species' Imageomes.
While Table 2 highlights the phylogenetic matching in the embedding space at the species-level descriptor, \(D_{n_{\text{l}}}\), Table 3 does the same for the descriptor at a distant ancestry level (level 0), i.e., \(D_{0}\). Based on the phylogenetic tree used in this example, the _Notropis_ and _Noturus_ species share the same distant ancestor at that descriptor level, whereas the _Lepomis_ species does not. Hence, we find that the JS-divergences increase for the _Lepomis_ unseen species with respect to seen species that are not _Lepomis_, compared to Table 2. On the other hand, the JS-divergences decrease for the other two unseen species w.r.t. seen species on the off-diagonals of the table. This confirms that \(D_{0}\) specifically captures the phylogenetic information of the distant ancestor that is common across the _Notropis_ and _Noturus_ seen and unseen species. Finally, to confirm that this phylogenetic correlation is mainly constrained to the phylo-descriptors, we
calculate the same distances but using the non-phylogenetic part of the sequences. The result is shown in Table 4. We can see that the distances are much closer to each other, implying that the non-phylogenetic embedding is not specialized at differentiating among different species, and hence cannot be used to phylogenetically categorize the unseen species.
### Assessing the Clustering Quality of the Embedding Space Using t-SNE Plots
In this section, we qualitatively assess the quality of generated images by visualizing their embedding space. Visualization tools such as loss landscape visualizations (Kang et al., 2017) and t-SNE plots (Zhu et al., 2018), have been frequently used as investigative tools in deep learning in recent years as they help gauge a model's generalization power. To that end, we are interested in understanding how _Phylo-NN_ clusters the embedding space compared to other baselines by analyzing these models' t-SNE plots. To construct the t-SNE plot for each model,
Table 4. JS-divergence of the non-phylogenetic codes between unseen and seen species.

| Unseen species \ Seen species | _Notropis nubilus_ | _Lepomis macrochirus_ | _Noturus flavus_ |
| --- | --- | --- | --- |
| _Notropis percobromus_ | 0.39 | 0.45 | 0.39 |
| _Lepomis megalotis_ | 0.46 | 0.36 | 0.48 |
| _Noturus miurus_ | 0.40 | 0.42 | 0.36 |
Figure 5. Comparing species-to-species image translations from a _Carassius auratus_ specimen to a _Lepomis cyanellus_ specimen.
Table 2. JS-divergence of the phylogenetic codes at the species level between unseen and seen species.

| Unseen species \ Seen species | _Notropis nubilus_ | _Lepomis macrochirus_ | _Noturus flavus_ |
| --- | --- | --- | --- |
| _Notropis percobromus_ | 0.47 | 0.71 | 0.62 |
| _Lepomis megalotis_ | 0.73 | 0.43 | 0.72 |
| _Noturus miurus_ | 0.62 | 0.71 | 0.48 |
Table 3. JS-divergence of the phylogenetic codes at the earliest ancestral level between unseen and seen species.

| Unseen species \ Seen species | _Notropis nubilus_ | _Lepomis macrochirus_ | _Noturus flavus_ |
| --- | --- | --- | --- |
| _Notropis percobromus_ | 0.26 | 0.81 | 0.50 |
| _Lepomis megalotis_ | 0.81 | 0.27 | 0.81 |
| _Noturus miurus_ | 0.52 | 0.80 | 0.31 |
we iterate through its generated images, encode them, obtain the quantized embedding vector for each image (\(\mathbf{z}_{\text{up}}^{Q}\) and \(\mathbf{z}^{Q}\) for _Phylo-NN_ and vanilla VQGAN, respectively), and finally create the t-SNE plots. For CW, we use the whitened embeddings \(z_{\text{cw}}\) instead.
Figure 6 shows these constructed t-SNE plots with two different color-coding schemes. The first one (left column) color-codes the data-points based on the grouping of species at the second phylogenetic level (i.e., the direct ancestor of the specimen's species). This color-coding scheme allows us to inspect how different species cluster in the embedding space. The second color-coding (right column) is the average phylogenetic distance between the data-point and its \(k\)-nearest neighbors (KNN), where \(k=5\) in this setup. The higher the average distance (i.e., the darker the data-point's color), the more distant the specimen is from the \(k\) specimens that are closest to it in the quantized embedding space. This color-coding helps us see how well the different species are separated from each other in the embedding space, which generally characterizes the quality of the encoding and its suitability for downstream tasks, such as classification.
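A sketch of how such a plot can be assembled (our utility; `z_q` is the matrix of quantized embeddings described above, and `phylo_dist` is assumed to be a precomputed pairwise phylogenetic-distance matrix between the specimens' species):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def tsne_with_knn_phylo_distance(z_q, phylo_dist, k=5):
    """z_q: (N, D) quantized embeddings of generated images.
    phylo_dist: (N, N) pairwise phylogenetic distances.
    Returns 2-D t-SNE coordinates and, per point, the average
    phylogenetic distance to its k nearest embedding-space
    neighbours (the second color-coding scheme)."""
    coords = TSNE(n_components=2, init="pca").fit_transform(z_q)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(z_q)  # +1 skips self
    _, idx = nbrs.kneighbors(z_q)
    knn_dist = np.array([phylo_dist[i, idx[i, 1:]].mean()
                         for i in range(len(z_q))])
    return coords, knn_dist
```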
From Figure 6, we can see that _Phylo-NN_ (top row) clusters the generated images better than vanilla VQGAN and CW as evident from its hierarchical clustering where the specimens belonging to the same species clump into small clusters and these clusters in turn clump into larger clusters (representing ancestor nodes) that have a singular color. This demonstrates that _Phylo-NN_ is able to learn a phylogenetically-meaningful encoding, whereas the other base models' clustering is quite fuzzy and poorly characterizes any biological knowledge. Also, by looking at the right column, we can see that _Phylo-NN_ commits very little clustering error in terms of its phylogenetic constraints because the average phylogenetic distance is low (almost zero) for the majority of points. This is in contrast to the other baselines where there is quite a high clustering error as seen from the "heat" of its scatter plot.
## 7. Conclusions and Future Work
In this work, we presented _Phylo-NN_, a novel approach for automatically discovering biological traits related to evolution from images in an unsupervised manner, without requiring any trait labels. The key novelty of our approach is to leverage the biological knowledge of phylogeny to structure the quantized embedding space of _Phylo-NN_, where different parts of the embedding capture phylogenetic information at different ancestry levels of the phylogeny. This enables our method to perform a variety of tasks in a biologically meaningful way, such as species-to-species image translation and identifying the ancestral lineage of newly discovered unseen species.
In the future, our work can be extended to include a larger number of embedding dimensions to improve the visual quality of generated images and can be applied to other image datasets beyond the fish dataset. Future work can explore extensions of _Phylo-NN_ to generate images of ancestor species or to predict images of species that are yet to be evolved. Future work can also focus on making the discovered Imageome sequences more explainable by understanding the correspondence of each quantized code with a region in the image space. Our work opens a novel area of research in grounding image representations using tree-based knowledge, which can lead to new research paradigms in other fields of science where images are abundant but labels are scarce.
###### Acknowledgements.
This work was supported, in part, by NSF awards for the HDR Imageomics Institute (Award # 2118240) and the Biology-guided Neural Network (BGNN) projects (Award # 1940247, # 1940322, # 1940233, # 2022042, # 1939505). Access to computing facilities was provided by the Advanced Research Computing (ARC) Center at Virginia Tech.
|
2305.16375 | Data Topology-Dependent Upper Bounds of Neural Network Widths | This paper investigates the relationship between the universal approximation
property of deep neural networks and topological characteristics of datasets.
Our primary contribution is to introduce data topology-dependent upper bounds
on the network width. Specifically, we first show that a three-layer neural
network, applying a ReLU activation function and max pooling, can be designed
to approximate an indicator function over a compact set, one that is
encompassed by a tight convex polytope. This is then extended to a simplicial
complex, deriving width upper bounds based on its topological structure.
Further, we calculate upper bounds in relation to the Betti numbers of select
topological spaces. Finally, we prove the universal approximation property of
three-layer ReLU networks using our topological approach. We also verify that
gradient descent converges to the network structure proposed in our study. | Sangmin Lee, Jong Chul Ye | 2023-05-25T14:17:15Z | http://arxiv.org/abs/2305.16375v1 | # Data Topology-Dependent Upper Bounds of Neural Network Widths
###### Abstract
This paper investigates the relationship between the universal approximation property of deep neural networks and topological characteristics of datasets. Our primary contribution is to introduce data topology-dependent upper bounds on the network width. Specifically, we first show that a three-layer neural network, applying a ReLU activation function and max pooling, can be designed to approximate an indicator function over a compact set, one that is encompassed by a tight convex polytope. This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure. Further, we calculate upper bounds in relation to the Betti numbers of select topological spaces. Finally, we prove the universal approximation property of three-layer ReLU networks using our topological approach. We also verify that gradient descent converges to the network structure proposed in our study.
## 1 Introduction
This paper addresses a fundamental question in machine learning: for any \(p\geq 1\), \(d\in\mathbb{N}\), and \(f^{*}\in L^{p}([0,1]^{d})\), what is the necessary depth and width of a neural network to approximate \(f^{*}\) within a small error? This constitutes the universal approximation property (UAP) of deep neural networks (DNNs). Since Cybenko's seminal work in 1989 [6], demonstrating the UAP of two-layer networks with non-polynomial activation functions, subsequent research has extended these results to various settings.
In the context of deep ReLU networks, recent literature establishes that the minimal depth is \(2\) (given sufficient width), and the minimal width is \(\max\{d_{x},d_{y}\}\) assuming adequate depth over a compact domain for some function classes where \(d_{x}\) and \(d_{y}\) are input and output dimensions [21; 34]. However, these UAP results on a compact domain have limitations for classifiers or discriminators. For instance, a discriminator used in generative adversarial network (GAN) training receives both training data and output from a generator, which is not confined to a specific bounded domain [15; 20; 24; 26; 35; 36].
There are recent UAP results for neural networks concerning unbounded input domains (like \(\mathbb{R}^{d}\)). Notably, a study by Wang et al. [45] demonstrates that two-layer ReLU networks fail to serve as universal approximators on \(\mathbb{R}^{2}\). As such, to approximate compactly supported functions in \(\mathbb{R}^{d}\), the required minimum depth is at least 3. While Wang et al. [45] affirmed that three-layer ReLU networks are universal approximators in \(L^{p}(\mathbb{R}^{d})\), they only establish the existence of such networks, leaving open the questions of their construction and the number of required hidden neurons. This paper addresses these questions specifically for the class of discriminator functions.
Specifically, we explore the following question: _given dataset \(\mathcal{X}\), \(\varepsilon>0\) and \(p\geq 1\), how can we construct a neural network \(\mathcal{N}\) such that \(\left\|\mathcal{N}(\mathbf{x})-\mathbb{1}_{\{\mathcal{X}\}}(\mathbf{x})\right\|_{L^{p} (\mathbb{R}^{d})}<\varepsilon\)?_ Intuitively, the answer is closely tied to the topological structure of the dataset \(\mathcal{X}\). The manifold assumption in machine
learning suggests that dataset \(\mathcal{X}\) adheres to a low-dimensional manifold, representable by simple topological features [4; 10; 28; 30; 33]. Consequently, if \(\mathcal{X}\) has a 'simple' topological structure, the required depth and width to approximate \(\mathbbm{1}_{\{\mathcal{X}\}}\) would be minimal. Numerous experimental and theoretical results indeed show a strong correlation between the required width of neural networks and the topology of the training dataset \(\mathcal{X}\)[8; 12; 30; 42; 43; 46]. Despite this, no research has specifically addressed how to architect neural networks based on the topological structure of the dataset. For instance, Betti numbers in topological data analysis (TDA) represent the topological structure of a dataset \(\mathcal{X}\), but no previous work has linked this quantity with network architecture.
Therefore, this paper aims to bridge this gap by constructing three or four-layer neural networks with bounded widths determined by the topological features of the dataset \(\mathcal{X}\). Our contributions are summarized below.
* Motivated by the results of Wang et al. [45], we generalize their negative result for two-layer ReLU networks to \(\mathbb{R}^{d}\) with \(d\geq 2\). The proof is in Appendix B.

**Proposition 1.1**.: _Let \(f^{*}:\mathbb{R}^{d}\to\mathbb{R}\) be a nonzero compactly supported function. Then for \(p\geq 1\) and \(d\geq 2\), two-layer ReLU networks cannot universally approximate \(f^{*}\) in \(L^{p}(\mathbb{R}^{d})\)._
* For a compact set \(\mathcal{X}\subset\mathbb{R}^{d}\), we develop a three-layer neural network \(\mathcal{N}\) approximating the indicator function \(\mathbbm{1}_{\{\mathcal{X}\}}\) over \(\mathbb{R}^{d}\) within a given \(\varepsilon\) error. The network's width is bounded by a quantity determined by a convex polytope cover of \(\mathcal{X}\) (as stated in Proposition 3.1). We further refine this result for a four-layer ReLU network when \(\mathcal{X}\) is represented by the difference of two unions of convex polytopes (Theorem 3.2).
* When \(\mathcal{X}\) forms an \(m\)-simplicial complex with \(k\) facets in \(\mathbb{R}^{d}\), we derive a similar result. Here, we establish upper bounds on the width in terms of \(d\), \(k\), and \(m\), introducing novel data topology-dependent bounds (Theorem 3.3). If \(\mathcal{X}\) is a topological space represented by the difference of \(k\)-dimensional disjoint cuboids, we propose a four-layer ReLU network that can approximate \(\mathbbm{1}_{\{\mathcal{X}\}}\), with widths bounded in terms of the Betti numbers of \(\mathcal{X}\) (Theorem 3.5). This underscores the significant impact of Betti numbers on the network architecture, a novel contribution in this field.
* As a practical application, we prove that the set of three-layer ReLU networks is dense in \(L^{p}(\mathbb{R}^{d_{x}},[0,1]^{d_{y}})\) for \(p\geq 1\), confirming the UAP of three-layer ReLU networks over an unbounded domain (Theorem 4.1). In conjunction with Proposition 1.1, this confirms that the minimal depth of deep ReLU networks in \(L^{p}(\mathbb{R}^{d\geq 2})\) is exactly \(3\). Furthermore, for a Lipschitz function \(f^{*}:[0,1]^{d_{x}}\to[0,1]^{d_{y}}\), we provide width upper bounds of \(O(\varepsilon^{-d_{x}})\), where \(\varepsilon\) represents the error bound.
## 2 Related Works
In this section, we will review studies that relate to topological approaches in deep learning, compactly supported DNNs, and the universal approximation property (UAP) along with width bounds for small depth neural networks.
Topological approach in deep learning.Although few studies have explored the connection between neural network architecture and the topological features of the training data, their findings hold significant implications. [32] conducted experiments demonstrating the rapid reduction of Betti numbers of the data topology during training with deep ReLU layers. [12] attempted to correlate the architecture and data topology by increasing the complexity of the training data to guide the selection of deep neural network architecture. [46] offered upper bounds on the Betti numbers for the preimage of a layer in deep neural networks and compared them with the Betti numbers of the dataset to inform network architecture selection. Further experiments have underscored the relevance of topology-dependent neural network architecture [13; 14; 30; 42]. For an in-depth review of topological deep learning, refer to [19; 47].
Compactly supported DNNs.The construction of compactly supported subnetworks for use as building blocks in DNNs has been a focus in several studies, often with the aim of approximating a compactly supported function [1; 16; 40]. For instance, [22] proposed a 'TRLU function' for approximating constant or piecewise linear functions on a compact domain, and [23] further examined the resulting neural networks, considering open convex polytopes as the partition of input space.
This concept aligns closely with Lemma C.2 which we used to construct desired neural networks. Additionally, [26] studied the UAP for deep neural networks with ReLU and pooling layers over compactly-supported integrable functions, providing bounds of width, depth, and the number of pooling layers. Our work aligns with these studies as we explicitly construct small depth (three or four-layer) neural networks using ReLU activation and pooling layers to approximate the indicator function over a given topological space.
UAP and width bounds for small depth neural networks.The Universal Approximation Property (UAP) of shallow neural networks and the width bounds associated with them have been a subject of extensive study under various conditions [2; 18; 25; 37]. Recently, [20] utilized tropical geometry [29] and polyhedral theory to prove the UAP of two-layer neural networks, a method analogous to our approach in this paper. They also provided width and depth bounds to approximate a function that is the difference of two convex functions, a result we replicate for indicator functions over the difference of unions of convex polytopes.
Addressing unbounded domain \(\mathbb{R}^{d}\), [7] proposed a radial function that two-layer networks fail to approximate. [45] further contributed to this dialogue by presenting both negative and positive results on unbounded domains. They demonstrated that two-layer ReLU neural networks cannot universally approximate on the Euclidean plane \(\mathbb{R}^{2}\). However, they also proved that three-layer ReLU networks can serve as universal approximators in \(L^{p}(\mathbb{R}^{d})\). Drawing inspiration from these findings, we extend their initial result by illustrating that two-layer ReLU networks cannot serve as universal approximators on \(L^{p}(\mathbb{R}^{d})\) for \(p\geq 1\) and \(d\geq 2\) (Proposition 1.1).
On a different note, [5] deduced that any Lipschitz function over a compact set can be approximated by a three-layer neural network with a novel activation function. This research bears similarity to our result in Section 4, albeit with a slightly different conclusion. While they introduced a new activation function and focused on the boundedness of matrix norm, we exclusively employ the ReLU activation function and provide the upper bound of widths. Additionally, [26] explored the UAP of DNNs approximating compactly supported Lipschitz functions through ReLU and pooling layers.
In this paper, we merge these approaches to establish data-topology dependent width bounds and to validate the UAP of three-layer ReLU networks on \(\mathbb{R}^{d}\).
## 3 Data-Topology Dependent Upper Bounds of Widths
### Preliminaries
Notation.In this article, scalars are denoted by lowercase letters, vectors by boldface lowercase letters, and matrices by boldface uppercase letters. The \(L^{p}\)-norm in function spaces is represented as \(\left\|\cdot\right\|_{L^{p}}\). For a positive integer \(m\), \([m]\) is used to represent the set \(\{1,2,\cdots,m\}\). The ReLU activation function is denoted by \(\sigma(x):=\text{ReLU}(x)=\max\{0,x\}\), and it is applied to a vector coordinate-wise. The sigmoid activation function is denoted as \(\texttt{SIG}(x)=\frac{1}{1+e^{-x}}\). The max pooling operation is represented as \(\texttt{MAX}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), returning the maximum value among the elements of a given vector. \(B_{\varepsilon}(\mathbf{x}_{0}):=\{\mathbf{x}:\left\|\mathbf{x}-\mathbf{x}_{0}\right\|_{2}<\varepsilon\}\) denotes the epsilon neighborhood of \(\mathbf{x}_{0}\). The \(\varepsilon\) neighborhood of a compact set \(\mathcal{X}\) is defined by \(B_{\varepsilon}(\mathcal{X}):=\{\mathbf{x}:\min_{\mathbf{y}\in\mathcal{X}}\left\|\mathbf{x }-\mathbf{y}\right\|_{2}<\varepsilon\}\). Lebesgue measure in \(\mathbb{R}^{d}\) is represented by \(\mu_{d}\), or simply \(\mu\) when \(d\) is apparent in the context. For a given set \(\mathcal{X}\subset\mathbb{R}^{d}\), the indicator function over \(\mathcal{X}\) is denoted as follows:
\[\mathbbm{1}_{\{\mathcal{X}\}}(\mathbf{x}):=\begin{cases}1,&\quad\text{if }\mathbf{x}\in\mathcal{X},\\ 0,&\quad\text{otherwise.}\end{cases}\]
Deep neural networks.We use the following notation to represent the architecture of a \(k\)-layer neural network \(\mathcal{N}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) with widths \(d_{1},d_{2},\cdots,d_{k-1}\) and activation functions \(\texttt{ACT}_{1},\texttt{ACT}_{2},\cdots,\texttt{ACT}_{k}\) on each hidden layer: \(d\overset{\texttt{ACT}}{\rightarrow}d_{1}\overset{\texttt{ACT}}{\rightarrow}d _{2}\overset{\texttt{ACT}_{2}}{\rightarrow}\cdots\overset{\texttt{ACT}_{k-1}} {\rightarrow}d_{k-1}\overset{\texttt{ACT}_{k}}{\rightarrow}1\). When the activation function is identity, we denote nothing on the arrow. For example, consider a three-layer fully connected network \(\mathcal{N}\) with ReLU activation functions on hidden layers and a max pooling operation for the last layer, defined by
\[\mathcal{N}(\mathbf{x})=\texttt{MAX}\left[\sigma(\mathbf{W}_{2}\sigma(\mathbf{W}_{1}\mathbf{x} +\mathbf{b}_{1})+\mathbf{b}_{2})\right].\]
Then its architecture is denoted by \(d\overset{\sigma}{\rightarrow}d_{1}\overset{\sigma}{\rightarrow}d_{2} \overset{\texttt{MAX}}{\rightarrow}1\).
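For concreteness, a minimal NumPy sketch of this example network follows (ours; the weight matrices are placeholders supplied by the caller):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def three_layer_max_net(x, W1, b1, W2, b2):
    """The network d --sigma--> d1 --sigma--> d2 --MAX--> 1:
    two ReLU layers followed by a max-pooling readout."""
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return h2.max()
```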
The objective of this paper is to construct a neural network \(\mathcal{N}\) such that, given \(\varepsilon>0\), \(p\geq 1\), and a compact set \(\mathcal{X}\subset\mathbb{R}^{d}\), the following inequality holds: \(\left\|\mathcal{N}(\mathbf{x})-\mathbb{1}_{\{\mathcal{X}\}}(\mathbf{x})\right\|_{L^{p} (\mathbb{R}^{d})}<\varepsilon\). By Proposition C.1, it is sufficient to construct a neural network \(\mathcal{N}\) that satisfies the following three conditions: \(\mathbf{A}\). \(\mathcal{N}(\mathbb{R}^{d})\subset[0,1]\), \(\mathbf{B}\). \(\mathcal{N}(\mathbf{x})=1\) if \(\mathbf{x}\in\mathcal{X}\), and \(\mathbf{C}\). \(\mathcal{N}(\mathbf{x})=0\) if \(\mathbf{x}\not\in B_{\varepsilon^{\prime}}(\mathcal{X})\), for a given \(\varepsilon^{\prime}>0\). In other words, the desired network should output a constant value of \(1\) over the given manifold \(\mathcal{X}\) and vanish for inputs farther than \(\varepsilon^{\prime}\) from \(\mathcal{X}\). This is a property desired for classifiers or discriminators.
### Main Theoretical Findings
#### 3.2.1 Upper bounds of the widths for compact sets
Consider \(\mathcal{X}\subset\mathbb{R}^{d}\), a compact set, and the task of approximating its indicator function \(\mathbb{1}_{\{\mathcal{X}\}}\) within \(\mathbb{R}^{d}\). As per Proposition 1.1, a minimum of three layers is required for this task. Intriguingly, if \(\mathcal{X}\) can be encapsulated within a collection of convex polytopes, a three-layer neural network employing ReLU activation and max pooling operations can be constructed. The following proposition outlines not just the feasibility of such a neural network, but also provides a method for its construction.
**Proposition 3.1**.: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be a compact set. For a given \(\varepsilon>0\), suppose there exists a finite collection of convex polytopes \(\mathcal{C}\) such that \(\mathcal{X}\subset\bigcup_{C\in\mathcal{C}}C\subset B_{\varepsilon}(\mathcal{X})\). Let \(k:=|\mathcal{C}|\) be the cardinality of \(\mathcal{C}\), and \(l\) be the total sum of the number of faces of each polytope in \(\mathcal{C}\). Then, there exists a three-layer neural network \(\mathcal{N}\) with the architecture \(d\xrightarrow{\sigma}l\xrightarrow{\sigma}k\xrightarrow{\texttt{MAX}}1\) such that \(\mathcal{N}(\mathbb{R}^{d})=[0,1]\) and_
\[\mathcal{N}(\mathbf{x}) =1 \text{if }\mathbf{x}\in\mathcal{X},\] \[\mathcal{N}(\mathbf{x}) =0 \text{if }\mathbf{x}\not\in B_{\varepsilon}(\mathcal{X}).\]
Proof.: Let \(\mathcal{C}=\{C_{1},\cdots,C_{k}\}\) and \(l_{i}\) be the number of faces of \(C_{i}\). By Lemma C.2, for each \(C_{i}\in\mathcal{C}\), there exists a two-layer ReLU network \(\mathcal{T}_{i}\) with the architecture \(d\xrightarrow{\sigma}l_{i}\to 1\) such that \(\mathcal{T}_{i}(\mathbf{x})=1\) for \(\mathbf{x}\in C_{i}\) and \(\mathcal{T}_{i}(\mathbf{x})<0\) for \(\mathbf{x}\not\in B_{\varepsilon}(C)\). Therefore, \(\sigma(\mathcal{T}_{i}(\mathbf{x}))=1\) for \(\mathbf{x}\in C_{i}\) and \(\sigma(\mathcal{T}_{i}(\mathbf{x}))=0\) for \(\mathbf{x}\not\in B_{\varepsilon}(C_{i})\). Now, we take max pooling operation to define a three-layer neural network \(\mathcal{N}\).
\[\mathcal{N}(\mathbf{x}):=\texttt{MAX}(\sigma(\mathcal{T}_{1}(\mathbf{x})),\cdots,\sigma(\mathcal{T}_{k}(\mathbf{x}))).\]
Then it is easy to verify that \(\mathcal{N}(\mathbf{x})\) is the desired three-layer neural network which has the architecture \(d\xrightarrow{\sigma}l\xrightarrow{\sigma}k\xrightarrow{\texttt{MAX}}1\), where \(l=l_{1}+\cdots+l_{k}\).
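A numerical sketch of this construction is given below (ours, under the assumption that each polytope is given in half-space form \(\{x: A x\leq c\}\) with unit-norm rows of \(A\); the \(1/\varepsilon\) slope is one concrete choice of the steepness constant from Lemma C.2 and may need tuning for a particular geometry):

```python
import numpy as np

def polytope_unit(A, c, eps):
    """Two-layer ReLU unit T for one convex polytope C = {x : A x <= c}:
    T(x) = 1 on C and T(x) <= 0 once x is roughly eps outside C."""
    def T(x):
        return 1.0 - np.sum(np.maximum(A @ x - c, 0.0)) / eps
    return T

def indicator_network(polytopes, eps):
    """Three-layer network of Proposition 3.1: ReLU-clip each polytope
    unit, then max-pool. `polytopes` is a list of (A, c) pairs."""
    units = [polytope_unit(A, c, eps) for A, c in polytopes]
    def N(x):
        return max(np.maximum(u(x), 0.0) for u in units)
    return N

# the unit square in R^2 as {x : A x <= c}
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
c = np.array([1., 0., 1., 0.])
N = indicator_network([(A, c)], eps=0.1)
print(N(np.array([0.5, 0.5])), N(np.array([2.0, 2.0])))  # 1.0, 0.0
```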
Proposition 3.1 confirms the universal approximation property of three-layer neural networks for indicator functions over a compact set, granted it can be closely covered by finite convex polytopes. It is crucial to highlight that the upper bound on the neural network's widths is dictated by the covering \(\mathcal{C}\), specifically the number of constituents \(k\) and the cumulative number of all faces \(l\). Given that a single compact set \(\mathcal{X}\) may have multiple potential convex polytope coverings, deciding on the optimal covering method becomes a significant consideration. If \(\varepsilon\) is significantly large, we can opt for a loose cover of \(\mathcal{X}\) with smaller values of \(k\) and \(l\). However, a smaller \(\varepsilon\) necessitates larger values for both \(k\) and \(l\).
This brings us to an extension of the original proposition to address the issue of a high count of convex polytopes in the covering of \(\mathcal{X}\), which can occur due to the set's intricate characteristics. The subsequent theorem tackles this problem by increasing depth: if \(\mathcal{X}\) can be enveloped by the difference between two unions of convex polytopes, then a four-layer ReLU network can effectively approximate \(\mathbb{1}_{\{\mathcal{X}\}}\).
**Theorem 3.2**.: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be a compact set. Suppose there exists a finite collection of convex polytopes \(\mathcal{C}=\{P_{1},\cdots,P_{n_{P}},Q_{1},\cdots,Q_{n_{Q}}\}\) such that the set difference \(D:=\bigcup_{i\in[n_{P}]}P_{i}-\bigcup_{j\in[n_{Q}]}Q_{j}\) satisfies \(\mathcal{X}\subset D\subset B_{\varepsilon}(\mathcal{X})\). Let \(l\) denote the total number of faces of the convex polytopes in \(\mathcal{C}\). Then, there exists a four-layer ReLU network \(\mathcal{N}\) with the architecture \(d\xrightarrow{\sigma}l\xrightarrow{\sigma}(n_{P}+n_{Q})\xrightarrow{\sigma}2 \xrightarrow{\sigma}1\) such that \(\mathcal{N}(\mathbb{R}^{d})=[0,1]\) and_
\[\mathcal{N}(\mathbf{x}) =1 \text{if }\mathbf{x}\in\mathcal{X},\] \[\mathcal{N}(\mathbf{x}) =0 \text{if }\mathbf{x}\not\in B_{\varepsilon}(\mathcal{X}).\]
Proof.: By Lemma C.2, for each \(A\in\mathcal{C}=\{P_{1},\cdots,P_{n_{P}},Q_{1},\cdots,Q_{n_{Q}}\}\), we can construct a two-layer ReLU network \(\mathcal{T}_{A}\) such that \(\mathcal{T}_{A}(\mathbf{x})=1\) for \(\mathbf{x}\in A\) and \(\mathcal{T}_{A}(\mathbf{x})=0\) for \(\mathbf{x}\not\in B_{\varepsilon}(A)\). Let \(a_{i}:=\mathcal{T}_{P_{i}}\) for \(i\in[n_{P}]\) and \(b_{j}:=\mathcal{T}_{Q_{j}}\) for \(j\in[n_{Q}]\). Define two neurons in the third layer by
\[a:=\sigma(1-a_{1}-\cdots-a_{n_{P}})\qquad\text{and}\qquad b:=\sigma(1-b_{1}- \cdots-b_{n_{Q}}).\]
Finally, defining the last layer by \(\sigma(b-a)\), we obtain the desired network \(\mathcal{N}\). The architecture of this network is \(d\xrightarrow{\sigma}l\xrightarrow{\sigma}(n_{P}+n_{Q})\xrightarrow{\sigma}2 \xrightarrow{\sigma}1\).
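The same construction, sketched numerically (ours; `P_units` and `Q_units` are the per-polytope maps of Lemma C.2, e.g. the clipped units from the sketch above, each equal to 1 on its polytope and 0 well outside it):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def difference_network(P_units, Q_units):
    """Four-layer network of Theorem 3.2."""
    def N(x):
        a = relu(1.0 - sum(relu(t(x)) for t in P_units))  # 0 iff x lies in some P_i
        b = relu(1.0 - sum(relu(t(x)) for t in Q_units))  # 0 iff x lies in some Q_j
        return relu(b - a)  # 1 on (union P_i) minus (union Q_j), 0 far outside
    return N
```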
The primary advantage of this theorem is the reduction in the width of neural networks. While Proposition 3.1 requires the covering of \(\mathcal{X}\) solely through the union of convex polytopes, Theorem 3.2 relaxes this requirement by allowing the difference between two unions of convex polytopes. This can decrease the necessary number of neurons, given that a compact set might be covered by the difference of two unions of fewer convex polytopes, as we will illustrate in Example 3.4 (Figure 1).
However, a persisting challenge is that there is no general method known for covering a compact set using convex polytopes, or their differences. To address this, in the following section, we present general upper bounds of widths in three-layer neural networks when \(\mathcal{X}\) forms a simplicial complex.
#### 3.2.2 Upper bounds of widths for simplicial \(m\)-complexes
Before delving into the theorem, we recall some definitions. A simplicial \(m\)-complex is a type of simplicial complex where the highest dimension of any simplex equals \(m\). For a given simplicial complex \(K\), a facet of \(K\) is a maximal simplex which does not serve as a face of any larger simplex. With these definitions in mind, in the following theorem, we showcase the architecture of three-layer neural networks that can approximate \(1_{\{\mathcal{X}\}}\) for a given simplicial complex \(\mathcal{X}\).
**Theorem 3.3**.: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be a simplicial \(m\)-complex consisting of \(k\) facets, and let \(k_{j}\) be the number of \(j\)-dimensional facets of \(\mathcal{X}\). Then, for a given \(\varepsilon>0\), there exists a three-layer neural network \(\mathcal{N}\) with the architecture \(d\xrightarrow{\sigma}d_{1}\xrightarrow{\sigma}k\xrightarrow{\texttt{MAX}}1\) such that \(\mathcal{N}(\mathbb{R}^{d})=[0,1]\), \(\mathcal{N}(\mathbf{x})=1\) for \(\mathbf{x}\in\mathcal{X}\), and \(\mathcal{N}(\mathbf{x})=0\) for \(\mathbf{x}\not\in B_{\varepsilon}(\mathcal{X})\). Furthermore, \(d_{1}\) is bounded by_
\[d_{1}\leq\min\left\{k(d+1)-(d-1)\left\lfloor\sum_{j<\frac{d}{2}}\frac{k_{j}}{ 2}\right\rfloor,\;(d+1)\left[\sum_{j\leq\frac{d}{2}}\left(k_{j}\frac{j+2}{d-j} +\frac{j+2}{j+1}\right)+\sum_{j>\frac{d}{2}}k_{j}\right]\right\}. \tag{1}\]
Proof Sketch.: Let \(X_{1},X_{2},\cdots,X_{k}\) be the \(k\) facets of \(\mathcal{X}\). For each facet \(X_{i}\), consider the \(d\)-simplex cover appearing in Lemma C.3. Then Proposition 3.1 provides a neural network \(\mathcal{N}\) with the architecture \(d\xrightarrow{\sigma}(d+1)k\xrightarrow{\sigma}k\xrightarrow{\texttt{MAX}}1\). Lastly, removing neurons in the first layer that overlap, we get a slightly improved bound. The full proof can be found in Appendix B.
Theorem 3.3 reveals that the width is restricted by the dimension \(m\) and the number of facets \(k\) of a provided simplicial complex. Looking at this from a topological perspective, it is generally intuitive that a smaller number of facets signifies a simpler structure of the simplicial complex. This notion is mathematically expressed in (1), which suggests that when \(m<\frac{d}{2}\) is fixed, the first bound in (1) gives \(d_{1}\lesssim k(d+1)-(d-1)\frac{k}{2}=\frac{k}{2}(d+3)\), which grows as \(k\) increases. Similarly, if \(k\) is fixed and \(\mathcal{X}\) consists of \(m\)-simplices with \(m<\frac{d}{2}\), the summation in the second bound in (1) reduces to \(d_{1}\lesssim(d+1)\left(k\frac{m+2}{d-m}+2\right)\), which rapidly diminishes as \(m\) decreases. This suggests that a smaller dimension \(m\) demands smaller widths, which aligns with the intuition that a low-dimensional manifold could be approximated with fewer neurons. To our knowledge, this is the first upper bound on the width of neural networks that depends on the topological data structure.
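The bound (1) is straightforward to evaluate numerically; a small utility follows (ours, implementing the displayed formula verbatim, including its strict \(j<d/2\) and non-strict \(j\leq d/2\) boundary conventions):

```python
from math import floor, ceil

def d1_upper_bound(d, k_by_dim):
    """Evaluate bound (1) on the first hidden width d1.
    k_by_dim maps facet dimension j -> k_j, the number of
    j-dimensional facets of the simplicial complex in R^d."""
    k = sum(k_by_dim.values())
    first = k * (d + 1) - (d - 1) * floor(
        sum(kj for j, kj in k_by_dim.items() if j < d / 2) / 2)
    second = (d + 1) * (
        sum(kj * (j + 2) / (d - j) + (j + 2) / (j + 1)
            for j, kj in k_by_dim.items() if j <= d / 2)
        + sum(kj for j, kj in k_by_dim.items() if j > d / 2))
    return min(first, ceil(second))

print(d1_upper_bound(3, {2: 5}))  # five 2-simplices in R^3 -> 20
```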
Theorem 3.3 is a 'universal' approximation result since the width bounds presented in (1) apply to any simplicial complex. However, being a general upper bound, there might be a smaller network architecture that can approximate a given simplicial complex. Proposition 3.1 and Theorem 3.2 show that the width upper bounds are determined by a convex polytope covering, which is heavily dependent on the topological features of \(\mathcal{X}\). In the upcoming toy example, we clarify how the upper bounds of width can vary based on the choice of covering, for the same topological space \(\mathcal{X}\).
Figure 1. Comparison of Proposition 3.1, Theorem 3.2, and Theorem 3.3. (a) \(\mathcal{X}\) is the boundary of a regular \(k\)-gon in \(\mathbb{R}^{2}\). (b) A triangle covering of \(\mathcal{X}\) is given; Proposition 3.1 provides the network architecture \(2\overset{\sigma}{\to}3k\overset{\sigma}{\to}k\overset{\texttt{MAX}}{\to}1\). (c) \(B_{\varepsilon}(\mathcal{X})\) can be covered by the difference of two \(k\)-gons \(P\) and \(Q\); Theorem 3.2 then guarantees that a network with the architecture \(2\overset{\sigma}{\to}2k\overset{\sigma}{\to}2\overset{\sigma}{\to}2\overset{\sigma}{\to}1\) can approximate \(\mathbbm{1}_{\{\mathcal{X}\}}\).
**Example 3.4** (Comparison of Proposition 3.1, Theorem 3.2, and Theorem 3.3).: Let \(\mathcal{X}\) be the boundary of a regular \(k\)-gon in \(\mathbb{R}^{2}\) as shown in Figure 1(a). First, consider a convex polytope covering \(\mathcal{C}\) that consists of \(k\) green triangles in Figure 1(b). Proposition 3.1 shows that a three-layer network with the architecture \(2\overset{\sigma}{\to}3k\overset{\sigma}{\to}k\overset{\texttt{MAX}}{\to}1\) can approximate \(\mathbbm{1}_{\{\mathcal{X}\}}\). Since \(\mathcal{X}\) can be regarded as a simplicial \(2\)-complex with \(k\) facets, Theorem 3.3 provides a slightly better bound \(2\overset{\sigma}{\to}\left(3k-\left\lfloor\frac{1}{2}k\right\rfloor\right) \overset{\sigma}{\to}k\overset{\texttt{MAX}}{\to}1\), but both networks still have \(O(k)\) widths for two hidden layers. However, from the special structure of \(\mathcal{X}\), it can be covered by the difference of two solid \(k\)-gons \(P\) and \(Q\) (Figure 1(c)). Then, Theorem 3.2 provides a four-layer ReLU network with the architecture \(2\overset{\sigma}{\to}2k\overset{\sigma}{\to}2\overset{\sigma}{\to}2\overset{ \sigma}{\to}1\), which has only one \(O(k)\) width. This example helps understand the benefits of depth from a topological perspective, akin to the advantages of depth studied in previous works [3, 39, 41, 44, 45]. It is also important to note that the high number of neurons in the first layer is inevitable, as described in [12] and [46].
It is remarkable that our findings can be extended to various other neural network architectures. In Appendix A, we broaden our results to encompass deep ReLU networks (as seen in Corollary A.1) and networks using sigmoid activation function at last (as presented in Corollary A.2).
#### 3.2.3 Upper bounds of widths in terms of Betti numbers
The Betti number is a key metric used in TDA to denote the number of \(k\)-dimensional 'holes' in a data distribution. Owing to its homotopy invariance, Betti numbers are frequently employed to study the topological features of a given topological space. Interestingly, our prior results can be leveraged to ascertain a neural network architecture with width bounds defined in terms of the Betti numbers, given that the dataset \(\mathcal{X}\) exhibits certain structural characteristics.
Specifically, Theorem 3.2 offers upper width bounds when \(\mathcal{X}\) can be depicted as a difference between unions of convex sets. Consequently, if \(\mathcal{X}\) is convex and only contains 'convex-shaped holes', we can derive a network width bound in relation to its Betti numbers. This concept is further explicated in the following theorem and example.
**Theorem 3.5**.: _Suppose \(\mathcal{X}\subset\mathbb{R}^{d}\) is a topological space obtained by removing some disjoint rectangular prisms from a \(d\)-dimensional cuboid. Let \(\beta_{0},\beta_{1},\cdots,\beta_{d}\) be the Betti numbers of \(\mathcal{X}\). Then for any \(\varepsilon>0\), there exists a four-layer ReLU network \(\mathcal{N}\) with the architecture_
\[d\ \overset{\sigma}{\to}\ 2\left(d-1+\sum_{k=0}^{d}(k+1)\beta_{k}\right) \ \overset{\sigma}{\to}\ \left(\sum_{k=0}^{d}\beta_{k}\right)\ \overset{\sigma}{\to}2\ \overset{\sigma}{\to}1 \tag{2}\]
_such that \(\mathcal{N}(\mathbb{R}^{d})=[0,1]\), \(\mathcal{N}(\mathbf{x})=1\) for \(\mathbf{x}\in\mathcal{X}\), and \(\mathcal{N}(\mathbf{x})=0\) for \(\mathbf{x}\not\in B_{\varepsilon}(\mathcal{X})\)._
Proof Sketch.: Since each \(k\)-dimensional hole is a rectangular prism, we can consider it as convex polytopes. The result is deduced from Theorem 3.2 by computing the required number of faces of polytopes. The full proof can be found in Appendix B.
**Example 3.6**.: Let \(\mathcal{X}\) be a topological space in \(\mathbb{R}^{3}\) shown in Figure 2(a), which has three nonzero Betti numbers \(\beta_{0}=\beta_{1}=2\) and \(\beta_{2}=3\). Then we can consider the homotopy equivalent topological space \(\mathcal{X}^{\prime}\subset\mathbb{R}^{3}\) that satisfies the assumptions in Theorem 3.5: \(\mathcal{X}^{\prime}\) is obtained by 'cutting out' a plate (orange), 'punching' two rectangular prisms (green), and 'hollowing out' three small cubes (red) from a large cuboid (blue) as described in Figure 2(c). Then, Theorem 3.5 shows that a four-layer ReLU network with the architecture \(3\overset{\sigma}{\to}34\overset{\sigma}{\to}7\overset{\sigma}{\to}2\overset{ \sigma}{\to}1\) can approximate \(\mathbbm{1}_{\{\mathcal{X}^{\prime}\}}\) arbitrarily closely.
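The widths in (2) can be checked directly; the sketch below (ours) reproduces the architecture of Example 3.6 from its Betti numbers:

```python
def betti_architecture(d, betti):
    """Widths of the four-layer ReLU network of Theorem 3.5, Eq. (2),
    from the Betti numbers betti[k] = beta_k of X."""
    w1 = 2 * (d - 1 + sum((k + 1) * b for k, b in enumerate(betti)))
    w2 = sum(betti)  # the topological complexity of X
    return [d, w1, w2, 2, 1]

# Example 3.6: beta_0 = beta_1 = 2, beta_2 = 3 in R^3
print(betti_architecture(3, [2, 2, 3, 0]))  # -> [3, 34, 7, 2, 1]
```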
Theorem 3.5 and Example 3.6 illustrate how the Betti numbers of a topological space \(\mathcal{X}\) can contribute to defining upper bounds for the widths of neural networks. However, it's important to note that two homotopy equivalent spaces may require differing network architectures. In Proposition C.4, we prove that the indicator function over a crown-shaped topological space (Figure 5(a)) in \(\mathbb{R}^{2}\) cannot be approximated by a two-layer ReLU network with the architecture \(2\overset{\sigma}{\to}3\to 1\), while a triangle can be approximated by Proposition 3.1. Since these two spaces have the same Betti numbers \(\beta_{0}=1\), this example suggests that a neural network architecture cannot be solely determined by Betti numbers.
Nevertheless, Theorem 3.5 provides an upper bound on the widths of four-layer ReLU networks in terms of Betti numbers of \(\mathcal{X}\), under the conditions stipulated. This is another novel result linking the topological characteristics of a dataset with upper bounds on widths. It is worth noting that similar results can be achieved when the cuboid assumptions in Theorem 3.5 are modified to other convex polytopes, using the same proof strategy.
We further elaborate on the topic of network architecture. Intriguingly, the sum of Betti numbers \(\sum_{k=0}^{d}\beta_{k}\) that appears in the third layer in (2) is termed the _topological complexity_ of \(\mathcal{X}\). This quantity is recognized as a measure of the complexity of a given topological space [32]. This value has connections with other fields, for example, it has some lower and upper bounds from Morse theory [31] and Gromov's Betti number Theorem [11]. In the context of topological data analysis, consider a Cech complex constructed on a dataset \(\mathcal{X}\) consisting of \(n\) points, using a filtration parameter \(\varepsilon\). Its topological complexity fluctuates from \(n\) (when \(\varepsilon=0\)) to \(1\) (when \(\varepsilon>\text{diam}(\mathcal{X})\)) as \(\varepsilon\) increases. This implies that the architecture in Theorem 3.5 is dictated by the filtration number \(\varepsilon\), which controls the topological structure of the given dataset. We believe this approach could inspire novel investigative methods in TDA, which we propose as an avenue for future research.
## 4 Universal Approximation Property of Three-Layer ReLU Networks
In the preceding section, we demonstrated how indicator functions over certain topological spaces can be approximated by three-layer neural networks. Interestingly, this topological result has an application in proving the Universal Approximation Property (UAP) of three-layer ReLU networks. Moreover, we can derive upper bounds on the widths in three-layer ReLU networks for approximating Lipschitz functions. We present this result in the upcoming theorem.
**Theorem 4.1**.: _Let \(d_{x},d_{y}\in\mathbb{N}\) and \(p\geq 1\). Then, the set of three-layer ReLU networks is dense in \(L^{p}(\mathbb{R}^{d_{x}},[0,1]^{d_{y}})\). Furthermore, let \(f:\mathbb{R}^{d_{x}}\to[0,1]^{d_{y}}\) be a compactly supported Lipschitz function. Then for any \(\varepsilon>0\), there exists a three-layer ReLU network \(\mathcal{N}\) with the architecture_
\[d_{x}\overset{\sigma}{\to}(2nd_{x}d_{y})\overset{\sigma}{\to}(nd_{y})\ \to d_{y}\]
_such that \(\left\|\mathcal{N}-f\right\|_{L^{p}(\mathbb{R}^{d_{x}})}<\varepsilon\). Here, \(n=O(\varepsilon^{-d_{x}})\)._
Figure 2: An illustration of the correlation between Betti numbers and the network architecture (Example 3.6). (a) A topological space \(\mathcal{X}\subset\mathbb{R}^{3}\) is given, whose Betti numbers are \(\beta_{0}=\beta_{1}=2\) and \(\beta_{2}=3\). (b) A homotopy equivalent space \(\mathcal{X}^{\prime}\) is presented, obtained by removing several small cuboids from a larger one. (c) The removed cuboids from \(\mathcal{X}^{\prime}\) are shown. Theorem 3.5 demonstrates that a four-layer ReLU network with the architecture \(3\overset{\sigma}{\to}34\overset{\sigma}{\to}7\overset{\sigma}{\to}2 \overset{\sigma}{\to}1\) can approximate \(\mathbbm{1}_{\{\mathcal{X}^{\prime}\}}\).
Proof Sketch.: The first assertion is a consequence of the second one. Regarding the second assertion, note that \(f^{*}\), being Lipschitz, is continuous and Riemann integrable. Consequently, we can construct a linear combination of indicator functions that approximates \(f^{*}\), which are known as simple functions in Lebesgue theory [38]. Each of these indicator functions can be implemented by a three-layer ReLU network using Proposition 3.1. The complete proof can be found in Appendix B.
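A sketch of the simple-function construction behind this proof is given below (ours; each grid cell is a convex polytope, hence its indicator is realizable by Proposition 3.1, and \(m=O(\varepsilon^{-1})\) cells per axis give \(m^{d}=O(\varepsilon^{-d_{x}})\) indicator networks in total):

```python
import numpy as np

def simple_function_approx(f, m):
    """Piecewise-constant (simple-function) approximation of f on
    [0,1]^d over an m-per-axis grid; each cell's indicator could be
    realized by a three-layer ReLU network via Proposition 3.1."""
    def approx(x):
        x = np.asarray(x, dtype=float)
        idx = np.minimum((x * m).astype(int), m - 1)
        return f(idx / m)  # f evaluated at the cell's lower corner
    return approx

# sup-error <= Lip(f) * sqrt(d) / m for a Lipschitz f
g = simple_function_approx(lambda x: np.sin(x).sum(), m=100)
print(g([0.31, 0.77]))
```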
Theorem 4.1 makes two assertions. The first one affirms that the set of three-layer ReLU networks is dense in \(L^{p}(\mathbb{R}^{d_{x}},[0,1]^{d_{y}})\), aligning with the findings presented in [45]. Considering that we have also demonstrated that the set of two-layer ReLU networks cannot universally approximate a compactly supported function (as per Proposition 1.1), we can conclude that the minimum depth of DNNs to achieve universal approximation in \(L^{p}(\mathbb{R}^{d_{x}})\) is precisely 3. This conclusion refines the results shown in [45]. Moreover, while [45] merely demonstrates the possibility of approximating functions using three-layer ReLU networks, the second assertion of Theorem 4.1 provides upper bounds on the widths, which are \(O(\varepsilon^{-d_{x}})\), when the function to be approximated, \(f^{*}\), is Lipschitz and compactly supported. As far as we are aware, this is the first work to present an upper bound on the width of three-layer ReLU networks for UAP.
These findings open a potential path to extend our topological results. We expect that for certain classes of functions, the width bounds could be reduced by adding more layers to the network, as demonstrated in Theorem 3.2 and Example 3.4. However, we leave this as an area for future research.
## 5 Experimental Results
In Section 3, we introduced a construction for three or four-layer neural networks that can approximate \(\mathbbm{1}_{\{\mathcal{X}\}}\) for any given topological space \(\mathcal{X}\) with a sufficient degree of accuracy. Naturally, this leads us to the question of whether these networks can be produced using a gradient method. While most prior theoretical studies concerning the existence of neural networks have not included experimental verification [22; 23; 25; 34], the issue of experimental verification remains an important one, as noted in [44]. In this section, we will provide experimental evidence that gradient descent can indeed converge to the neural networks that we constructed in Section 3.
We consider two illustrative manifolds, \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\), depicted in Figure 3(a) and (d) respectively. The first compact set \(\mathcal{X}_{1}\) is a simplicial \(2\)-complex in \(\mathbb{R}^{2}\) comprised of two triangles. The second compact set \(\mathcal{X}_{2}\) is a hexagon with a pentagonal hole, which is not a simplicial complex but a compact manifold. The datasets are constructed by selecting 1600 equidistant lattice points in the domain \([-20,20]\times[-20,20]\subset\mathbb{R}^{2}\), where each point has label '\(1\)' if it lies on the manifold \(\mathcal{X}_{i}\), and '\(0\)' otherwise. We undertake both regression and classification tasks, using mean square error (MSE) loss and binary cross entropy (BCE) loss respectively. For the BCE loss task, we adhere to the architecture proposed in Corollary A.2 to ensure trainability. We employ the gradient descent algorithm for training our networks. For a clearer visualization of weight vectors in each layer, we plot the lines of vanishing points for each layer in blue (1st layer), red (2nd layer), etc. The grayscale color denotes the output range of the trained network.
For the first data manifold \(\mathcal{X}_{1}\), Theorem 3.3 suggests that a network with the architecture \(2\overset{\sigma}{\rightarrow}6\overset{\sigma}{\rightarrow}2\overset{ \text{MAX}}{\rightarrow}1\) can precisely represent \(\mathbbm{1}_{\{\mathcal{X}_{1}\}}\). As depicted in Figure 3(b), the trained network indeed converges exactly to the network proposed in the theorem under MSE loss. The weight vectors in the first layer encapsulate the two triangles, reflecting the topology of \(\mathcal{X}_{1}\). A similar result can be observed for a network with the architecture \(2\overset{\sigma}{\rightarrow}6\overset{\sigma}{\rightarrow}2\overset{ \texttt{SIG}}{\rightarrow}1\), as suggested by Corollary A.2. For the second data manifold \(\mathcal{X}_{2}\), Theorem 3.2 suggests that a network with the architecture \(2\overset{\sigma}{\rightarrow}11\overset{\sigma}{\rightarrow}2\overset{ \sigma}{\rightarrow}2\overset{\sigma}{\rightarrow}1\) can fit this manifold, and Figure 3(e) verifies this under MSE loss. The architecture \(2\overset{\sigma}{\rightarrow}11\overset{\sigma}{\rightarrow}2\overset{ \texttt{SIG}}{\rightarrow}1\) also converges to the network proposed in Corollary A.2, as shown in Figure 3(f). More specifically, the eleven weight vectors in the first layer align with the eleven boundaries of the outer hexagon and the inner pentagon, with two neurons in the second layer encapsulating each polygon.
These experimental results provide evidence that the networks proposed in Section 3 can indeed be reached as the global minima by gradient descent, under both MSE loss and BCE loss. However, we must stress the importance of initialization. Although we observe successful convergence to the expected networks, the results are heavily dependent on the initialization. For instance, under random initialization, it is known that all neurons in a layer may 'die' at initialization with a nonzero
probability [27], leading to a poor network performance. We provide further experimental results concerning different initializations in Appendix D. The key takeaway from this section is that _the networks proposed in Section 3 can indeed be reached by gradient descent_, a fact not demonstrated by prior studies on UAP, even in the context of toy examples.
## 6 Conclusions
While many previous studies have explored the universal approximation property of DNNs, they have largely overlooked the connection with the topological features of datasets. In this paper, we have addressed this gap by providing data topology-dependent upper bounds on the width of DNNs. We have shown that for a dataset \(\mathcal{X}\) that can be tightly covered by convex polytopes, a three-layer network architecture can be derived to approximate the indicator function \(\mathbbm{1}_{\{\mathcal{X}\}}\). We also extended this to four-layer ReLU networks for \(\mathcal{X}\) that can be covered by the difference of two unions of convex polytopes. We further generalized this to simplicial \(m\)-complexes and demonstrated the construction of a three-layer neural network with ReLU activation and max pooling operation. Imposing further assumptions on \(\mathcal{X}\), we proposed the architecture of four-layer ReLU networks whose widths are bounded in terms of Betti numbers, a novel result in this field. Finally, we demonstrated that to approximate a compactly supported Lipschitz function in \(\mathbb{R}^{d_{x}}\) by a three-layer ReLU network under an \(\varepsilon\) error, a width of \(O(\varepsilon^{-d_{x}})\) is sufficient. Through our experiments, we showed that gradient descent indeed converges to the networks we proposed.
Limitations and Future WorksThere are several limitations. Firstly, although we experimentally showed convergence to the constructed network, we did not provide a rigorous guarantee for this convergence here. To overcome this and to ensure both existence and convergence, it would be valuable to prove a convergence result under specific conditions. Secondly, further research is needed to investigate the relationship between the network architecture and Betti numbers of the given topological spaces. The results in Theorem 3.5 suggest that it may be possible to expand our findings or reduce some assumptions on the given topological space. Since Betti numbers are commonly used features to characterize topological features of data manifolds in Topological Data Analysis (TDA), we believe that this line of research could yield significant insights.
Figure 3: Experimental verification of convergence of gradient descent. (a) and (d) exhibit the shape of two data manifolds, which are ‘two triangles’ and ‘a hexagon with a pentagon hole’. (b) and (e) show the converged networks by gradient descent, when the loss function is given by the mean square error (MSE) loss. Similarly, (c) and (f) show the results for the binary cross entropy (BCE) loss. These results verify that gradient descent indeed converges to the networks we proposed in Section 3. |
2303.13197 | Stochastic Recurrent Neural Networks for Modelling Astronomical Time
Series: Advantages and Limitations | This paper reviews the Stochastic Recurrent Neural Network (SRNN) as applied
to the light curves of Active Galactic Nuclei by Sheng et al. (2022).
Astronomical data have inherent limitations arising from telescope
capabilities, cadence strategies, inevitable observing weather conditions, and
current understanding of celestial objects. When applying machine learning
methods, it is vital to understand the effects of data limitations on our
analysis and ability to make inferences. We take Sheng et al. (2022) as a case
study, and illustrate the problems and limitations encountered in implementing
the SRNN for simulating AGN variability as seen by the Rubin Observatory. | Xinyue Sheng, Matt Nicholl, Nicholas Ross | 2023-03-23T11:53:21Z | http://arxiv.org/abs/2303.13197v1 | Stochastic Recurrent Neural Networks for Modelling Astronomical Time Series: Advantages and Limitations
###### Abstract
This paper reviews the Stochastic Recurrent Neural Network (SRNN) as applied to the light curves of Active Galactic Nuclei by Sheng et al. (2022). Astronomical data have inherent limitations arising from telescope capabilities, cadence strategies, inevitable observing weather conditions, and current understanding of celestial objects. When applying machine learning methods, it is vital to understand the effects of data limitations on our analysis and ability to make inferences. We take Sheng et al. (2022) as a case study, and illustrate the problems and limitations encountered in implementing the SRNN for simulating AGN variability as seen by the Rubin Observatory.
quasars: general - methods: statistical - surveys - software: data analysis
## 1 Introduction
Machine learning has become increasingly popular in many branches of astronomical research. In particular, these methods are being applied to the large data sets from wide-field sky surveys. Despite many successes, it is vital to understand that any machine learning implementation comes with difficulties and limitations, and may not always be appropriate for every problem (Kremer et al., 2017).
Various model architectures, such as the many flavours of neural networks, extract features through multiple layers. This is particularly suited to imaging data, though without care these models may extract noise as well as, or instead of, real astronomical features. However, they are often treated as intelligent "black boxes" whose complex nonlinear computations prevent researchers from directly understanding the feature extraction process, especially for non-image data. Additionally, the methods used for data pre-processing and augmentation can greatly affect the training process and model accuracy (Maharana et al., 2022).
Limitations also inevitably arise from the data themselves. For example, there is often a trade-off between purity and completeness (Smethurst et al., 2021); simulated data may not be representative of reality; and the quality of the data can be influenced by observing conditions, signal-to-noise ratio (SNR), etc. It is becoming more crucial to understand these
effects with increasingly big data from large sky surveys, such as Vera Rubin Observatory Legacy Survey of Space and Time (LSST), where the utility of the data for a particular investigation are also influenced by the chosen filter and cadence strategies.
In this paper, we summarize and review the paper Sheng et al. (2022), discussing the advantages and limitations of applying machine learning techniques for astronomical research, using a novel neural network architecture - a stochastic recurrent neural network (SRNN) - as a case study. To our knowledge, this was the first application of the SRNN in astronomical research. In that project, the motivation was to estimate the suitability of different cadence strategies in the upcoming LSST survey for studying the variability of active galactic nuclei (AGN) time series. The SRNN was applied to model simulated AGN light curves as observed with various proposed cadence strategies for the LSST Wide-Fast-Deep (WFD) survey (LSST Science Collaboration et al., 2009).
## 2 Data sets
To evaluate LSST cadences over a 10-year observation period, AGN light curves are simulated using Continuous Auto-Regressive Moving Average (CARMA) models.
The CARMA model is a statistical description of stochastic and stationary processes in time series. Although it is not a physical model, it has been widely employed as a description of long-term AGN variability. CARMA models are notated as CARMA(p, q), where p gives the order of the autoregressive (AR) process and q the order of the moving average (MA) process. The first-order CARMA model - CARMA(1,0) or the Damped Random Walk (DRW) - has been applied to many quasar variability studies (e.g. Kelly et al., 2014; Feigelson et al., 2018; Moreno et al., 2019). It can be expressed by Equation 1, where \(\alpha\) is the C-AR coefficient and \(\beta\) is the coefficient of the random perturbations. In the case of AGN, x corresponds to the flux or magnitude. \(W(t)\) is a Wiener process, and \(dW(t)\) denotes a white noise process with \(\mu\) = 0 and \(\sigma^{2}\) = 1 (Kelly et al., 2014).
\[dx(t)+\alpha_{1}x(t)\,dt=\beta_{0}\,dW(t) \tag{1}\]
Its Structure Function (SF), the average difference in amplitude between points separated by a given time interval \(\Delta t\), is expressed as
\[\mathrm{SF}(\Delta t)=\mathrm{SF}_{\infty}(1-e^{-|\Delta t|/\tau})^{1/2}, \,\mathrm{SF}_{\infty}=\sqrt{2}\sigma. \tag{2}\]
There are two key parameters: the characteristic timescale \(\tau\), and the long-term variability amplitude \(\mathrm{SF}_{\infty}\).
The second-order CARMA(2,1) or Damped Harmonic Oscillators (DHO) is also applied to simulate AGN with quasi-periodic features, such as blazars. For the detailed formula, see Sheng et al. (2022, Table A1).
This project used the DRW parameters derived from 7384 quasars from the Sloan Digital Sky Survey Stripe 82 field (MacLeod et al., 2010), and extended them to both DRW and DHO cases. The _EzTao_ Python Package (Yu & Richards, 2022) was applied to simulate CARMA light curves with daily observations in the \(u,g,r,i,z,y\) bands. Then, five proposed LSST cadence strategies1 were selected to downsample the light curves to simulate realistic observations. On average, LSST will re-observe an object in some band every \(\sim 3\) days and in the same band every \(\gtrsim 7\) days. These could then be modelled using an SRNN to attempt to recover the input CARMA parameters from the incomplete data.
Footnote 1: The five chosen cadence strategies are baseline, u_long, filterdist, cadence_drive and rolling. For details, see Sheng et al. (2022, section 3.2)
### Representativeness of the simulated data
It is worth noting that there are differences between CARMA models and the true AGN light curves, which may cause the former to be less representative of the latter:
1. CARMA models are stationary time-series processes, but AGN light curves seem to be non-stationary (Tachibana et al., 2020).
2. CARMA models are statistical without any physical mechanism. Moreover, quasars can have occasional large flares on top of their DRW-like variability.
3. CARMA models do not consider the correlations between bands, whereas quasars' timescales and variability amplitudes vary across bands.
### Limitations of LSST data
Most of the proposed WFD cadence strategies distribute observations unevenly among the six bands, with large allocations in the \(r\), \(i\), \(z\), \(Y\) bands and fewer in the \(u\) and \(g\) bands. Furthermore, the observations in different bands are not simultaneous. These factors complicate multi-band time-series modelling, with gaps in observations leading to poor sensitivity in recovering short-timescale variability (Sheng et al. 2022).
Also problematic for AGN light curve analysis with the DRW model, Kozlowski (2017) suggests that reliably measuring the variability timescale \(\tau\) requires a temporal baseline of at least \(8-10\tau\). Given the finite 10-year duration of the LSST survey, this introduces significant biases in recovering timescales for the large fraction of AGN with \(\tau\gtrsim 1\) year, regardless of the machine learning employed (Sheng et al. 2022).
## 3 Stochastic Recurrent Neural Networks high-level overview
Inspired by Bayer and Osendorfer (2015), Fraccaro et al. (2016) proposed adding stochasticity to a latent state representation on top of classical Recurrent Neural Networks (RNNs). They stack a state space model (SSM) on deterministic RNNs to obtain a stochastic, sequential generative model (see Figure 1a) and a structured variational inference network (see Figure 1b), which produce the output sequences and the model's posterior distributions, respectively. The loss function combines the negative log-likelihood of the predictions against the targets with the Kullback-Leibler divergence (\(D_{KL}\)) (Kullback and Leibler 1951) between the prior and posterior distributions.
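As a concrete illustration of this training objective, the following PyTorch sketch combines the two loss terms for a single time step. The tensor names are ours, and a unit-variance Gaussian observation model is a simplifying assumption; the original implementation may differ.

```python
import torch
from torch.distributions import Normal, kl_divergence

def srnn_step_loss(y_pred, y_true, mu_prior, sig_prior, mu_post, sig_post):
    """One-timestep SRNN loss (sketch): prediction negative log-likelihood
    plus the KL divergence between the approximate posterior and the prior
    over the stochastic latent state."""
    nll = -Normal(y_pred, torch.ones_like(y_pred)).log_prob(y_true).sum()
    kl = kl_divergence(Normal(mu_post, sig_post),    # from the backward inference RNN
                       Normal(mu_prior, sig_prior))  # from the generative SSM layer
    return nll + kl.sum()
```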
This algorithm is expected to be compatible with CARMA models, since CARMA processes can be represented as state space models. The SRNN is applied to the simulated LSST-cadence AGN light curves and outputs predicted/interpolated light curves on a daily cadence over 10 years. Figure 2 shows an example.
### Limitations of SRNN
The results from Sheng et al. (2022, Section 5) show the SRNN modelling performance for both uniformly-sampled and LSST-like light curves. Given similar numbers of observations, the SRNN models light curves better with uniform cadences than with LSST cadences. The SRNN can recover the long-term variability \(SF_{\infty}\) well, but the timescale \(\tau\) is always underestimated when \(\tau\) is long, a limitation imposed by the number of input observations and the gaps between groups of observations.
Compared with Variational Auto-encoders (such as Sanchez-Saez et al. 2021), the SRNN also lacks interpretable latent features: the correlations between close and distant time steps are not human-interpretable.
The SRNN is designed to model and compare 10-year-long light curves under potential cadence strategies; however, SRNN modelling of the upcoming early LSST data releases could be difficult, as those light curves will be much shorter.
### Problems of 'filling the gaps'
Here we discuss how SRNN modelling fills in the gaps between distant observations. As shown in Sheng et al. (2022, Figures 8-11), the SRNN can reconstruct the input observations when cadence gaps are reasonably short compared with the variability timescales, but for large gaps the SRNN's general performance is weak. The following factors all affect the SRNN light curve reconstruction:
1. Number of observations.
2. Cadence strategies and different bands.
3. Level/timescale of variability: high SF\({}_{\infty}\)/short \(\tau\).
4. Quasi-Periodicity.
5. Assumption of stationarity.
In summary, for the LSST cadences shown in Sheng et al. (2022, Figures 9-11), long gaps exist between observations, and for the reasons above, the SRNN model struggles to impute the behaviour during these gaps, especially for the non-periodic DRW and DHO-overdamped cases. This will turn out to be an important limitation when attempting to infer CARMA parameters from these light curves.
## 4 Better data or better models?
Recently, there has been a debate in machine learning research: do better data or better models contribute more to high accuracy? For a fixed model, accuracy increases with the amount of training data, but with diminishing 'marginal utility'. Conversely, model capability is restricted by the data volume, although a better model can raise accuracy up to a certain level.
In astronomy, the situation can be more complicated: data quality is not always sufficient owing to weather conditions, satellite interference, and other constraints.
Figure 1: (a) Generative model (left) (Sheng et al. 2022, Figure 7(a)). Observed light curves are fed into the input layer and then through a number of hidden (RNN) layers. The output of the last hidden layer follows two paths: one copy is fed to the SSM layer, realized by multiple **prior** Gaussian distributions at each time step, after which the sampling layer randomly samples a value from each Gaussian distribution; the other copy is combined with the sampled values and fed to the output layer. The output layer produces predicted daily light curves. (b) Inference network (right) (Sheng et al. 2022, Figure 7(b)). It is used only during the training process. The output of the last hidden layer is combined with the target light curves at each timestep and fed into a reversed RNN layer, producing the approximate **posterior** Gaussian distributions.
Figure 2: An example of SRNN modelling. The input light curves are the observed light curves with time dilation considered, presented with grey points. Black points with error bars show the light curve with 'baseline' cadence. The reconstructed light curves (by SRNN) are shown with red points. The lower left shows the Mean Square Errors of the SRNN modelling and the DRW parameters recovered by Gaussian Process Regression.
Accordingly, it is worth discussing whether we should use only "good" (quality-trimmed) data or all real data for training, validation, and testing.
While effective data preprocessing can greatly improve model results, several tricky tasks deserve the developer's attention. For example: how should missing values be replaced with fill values that carry no spurious physical meaning? How should the input data be scaled, normalized, and fed to the model? How should poor-quality data that still retains useful information be handled? How can the model's behaviour be interpreted? Compared with designing model architectures, these issues are often more consequential and deserve attention. One possible treatment of the first two questions is sketched below.
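The following sketch is one possible (by no means authoritative) preprocessing of a light curve with missing epochs: standardize using only the observed points, fill gaps with a neutral value, and pass the observation mask along so the model can distinguish real data from padding.

```python
import numpy as np

def prepare_light_curve(mag, mask, fill_value=0.0):
    """Standardize a light curve using only its observed epochs, fill the
    unobserved epochs with a neutral value, and stack the observation mask
    as an extra channel so the model can tell real data from padding."""
    mu = mag[mask].mean()
    sigma = mag[mask].std() + 1e-8
    scaled = (mag - mu) / sigma
    scaled = np.where(mask, scaled, fill_value)  # padding carries no physical meaning
    return np.stack([scaled, mask.astype(float)], axis=-1)
```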
## 5 Conclusions
In this paper, we discuss the unavoidable problems of real and simulated astronomical data for machine learning applications as well as the limitations of applying SRNN for astronomical time series.
However, the difficulties encountered in this project are not uncommon. Researchers are encouraged to investigate model interpretation and data sets more thoroughly while developing machine learning algorithms and applying them to specific astronomical tasks.
|
2305.18402 | Neural Sculpting: Uncovering hierarchically modular task structure in
neural networks through pruning and network analysis | Natural target functions and tasks typically exhibit hierarchical modularity
-- they can be broken down into simpler sub-functions that are organized in a
hierarchy. Such sub-functions have two important features: they have a distinct
set of inputs (input-separability) and they are reused as inputs higher in the
hierarchy (reusability). Previous studies have established that hierarchically
modular neural networks, which are inherently sparse, offer benefits such as
learning efficiency, generalization, multi-task learning, and transfer.
However, identifying the underlying sub-functions and their hierarchical
structure for a given task can be challenging. The high-level question in this
work is: if we learn a task using a sufficiently deep neural network, how can
we uncover the underlying hierarchy of sub-functions in that task? As a
starting point, we examine the domain of Boolean functions, where it is easier
to determine whether a task is hierarchically modular. We propose an approach
based on iterative unit and edge pruning (during training), combined with
network analysis for module detection and hierarchy inference. Finally, we
demonstrate that this method can uncover the hierarchical modularity of a wide
range of Boolean functions and two vision tasks based on the MNIST digits
dataset. | Shreyas Malakarjun Patil, Loizos Michael, Constantine Dovrolis | 2023-05-28T15:12:32Z | http://arxiv.org/abs/2305.18402v3 | Neural Sculpting: Uncovering hierarchically modular task structure through pruning and network analysis
###### Abstract
Natural target functions and tasks typically exhibit hierarchical modularity -- they can be broken down into simpler sub-functions that are organized in a hierarchy. Such sub-functions have two important features: they have a distinct set of inputs (_input-separability_) and they are reused as inputs higher in the hierarchy (_reusability_). Previous studies have established that hierarchically modular neural networks, which are inherently sparse, offer benefits such as learning efficiency, generalization, multi-task learning, and transferability. However, identifying the underlying sub-functions and their hierarchical structure for a given task can be challenging. The high-level question in this work is: if we learn a task using a sufficiently deep neural network, how can we uncover the underlying hierarchy of sub-functions in that task? As a starting point, we examine the domain of Boolean functions, where it is easier to determine whether a task is hierarchically modular. We propose an approach based on iterative unit and edge pruning (during training), combined with network analysis for module detection and hierarchy inference. Finally, we demonstrate that this method can uncover the hierarchical modularity of a wide range of Boolean functions and two vision tasks based on the MNIST digits dataset.
## 1 Introduction
Modular tasks typically consist of smaller sub-functions that operate on distinct input modalities, such as visual, auditory, or haptic inputs. Additionally, modular tasks are often hierarchical, with simpler sub-functions embedded in, or reused by, more complex functions [1]. Consequently, hierarchical modularity is a key organizing principle studied in both artificial and biological systems. In neuroscience, the hierarchical modularity of the brain's neural circuits is believed to play a crucial role in its ability to process information efficiently and adaptively [2; 3; 4; 5]. This hierarchical organization allows the brain to break down complex tasks into simpler sub-tasks, which can be processed in a distributed manner. Translating the hierarchical modularity of the brain's neural circuits to artificial neural networks (NNs) can potentially lead to more efficient, adaptable, and interpretable learning systems. Prior works have already shown that modular NNs efficiently adapt to new tasks [6; 7; 8] and display superior generalization over standard NNs [9; 10; 11]. However, those studies either assume knowledge of the task's hierarchy or hand-design modular NNs at initialization. Given an arbitrary task, however, we normally do not know its underlying sub-functions or their hierarchical organization. The high-level question in this work is: if we learn a task using a NN, how can we uncover the underlying hierarchical organization of sub-functions in that task?
Recent studies have demonstrated that certain hierarchically modular structures can emerge during the training of NNs. Previous methods to extract those structures in NNs can be categorized into _structural_ and _functional_. Structural methods organize trained networks based on the structural properties of units, such as their connectivity and edge-weights [12; 13; 14], while functional methods
consider characteristics based on unit activation [15; 16]. However, it is unclear whether the structures extracted reflect the underlying hierarchy of sub-functions in a task. In a recent study, Csordas et al. [17] proposed a method that identifies sub-networks or modules in NNs that learn specific sub-functions. However, this method requires knowledge of the exact sub-functions involved, and cannot be applied when such knowledge is unavailable. Ideally, modules corresponding to specific sub-functions should emerge through a training strategy, and a method should be available to detect these modules without explicit knowledge of the corresponding sub-functions.
Biological networks exhibit hierarchically modular structures where clusters of nodes with relatively dense internal connectivity and sparse external connectivity learn specific sub-functions [4; 5; 18; 19]. Drawing inspiration from this, we propose _Neural Sculpting_, a novel approach to train and structurally organize NN units to reveal the underlying hierarchy of sub-functions in a task. We begin by considering sub-functions that are input-separable and reused within a hierarchically modular task. We show that standard training of NNs does not result in the acquisition of structural properties that reflect these sub-function properties. To address this, we propose an iterative unit and edge pruning method to train NNs, which results in sparse networks that do acquire those previous structural properties. Further, we propose a method based on network analysis to uncover modules within these sparse NNs. Finally, we showcase the capability of the proposed methodology to reveal the structure of diverse hierarchically modular tasks. To the best of our knowledge, this paper is the first to analyze specific sub-function properties within a hierarchically modular task and propose an end-to-end methodology to uncover its structure. This work also sheds light on the potential of pruning and network analysis methods to uncover and harness structural properties in NNs.
### Preliminary
To represent the decomposition of a task, we visualize it as a graph. Initially, we consider Boolean functions and their corresponding function graphs. A Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\), maps \(n\) input bits to \(m\) output bits. We define the set of gates \(G\) as \(\{\wedge,\vee,\mathbb{I}\}\), where \(\wedge\) represents logical conjunction (AND), \(\vee\) represents logical disjunction (OR), and \(\mathbb{I}\) represents the identity function (ID). Additionally, we define the set of edge-types \(E\) as \(\{\rightarrow,\neg\}\), where \(\rightarrow\) represents a transfer edge and \(\neg\) represents a negation edge.
**Function Graph:** A \((G,E)\)**-graph** representing a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\) is a directed acyclic graph comprising: \(n\) sequentially ordered vertices with zero in-degree, designated as the _input nodes_; \(k\) vertices with non-zero in-degree, designated as the _gate nodes_, with each vertex associated with a gate in \(G\) and each of its in-edges associated with an edge-type in \(E\); and \(m\) sequentially-ordered vertices with zero out-degree, designated as the _output nodes_.
We use the \(G\) and \(E\) defined above, as NN units have been demonstrated to learn these universal gates [20]. Function graphs, which break down complex computations into simpler ones, are typically sparse. Modularity in sparse graphs refers to the structural organization of subsets of nodes that exhibit strong internal and weak external connectivity, which are grouped into modules.
**Sub-function:** A _sub-function_ is a subset of nodes within the function graph that collectively perform a specific task or computation. The sub-function is characterized by having strong internal connectivity between its nodes, meaning that they are highly interdependent and work together to achieve the desired output. At the same time, nodes in the sub-function have weak external connectivity, indicating that they are relatively independent from the rest of the graph.
**Input Separable:** Two sub-functions are considered _input separable_ if their in-edges originate from distinct and non-overlapping subsets of nodes in the function graph.
**Reused:** A sub-function is _reused_ if it has two or more outgoing edges in the function graph.
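To make these definitions concrete, the following Python sketch encodes and evaluates a toy \((G,E)\)-graph; the node names and the graph itself are illustrative, not taken from the paper's experiments.

```python
GATES = {"AND": all, "OR": any, "ID": lambda vals: vals[0]}

# Toy (G, E)-graph: each gate/output node maps to (gate, [(source, negated), ...]).
# g1 has two out-edges, so it is "reused"; it reads only {x1, x2}, so it is
# input-separable from any sub-function that reads only x3.
graph = {
    "g1": ("AND", [("x1", False), ("x2", True)]),   # g1 = x1 AND (NOT x2)
    "y1": ("ID",  [("g1", False)]),
    "y2": ("OR",  [("g1", False), ("x3", False)]),
}

def evaluate(graph, inputs):
    """Evaluate gate and output nodes in topological order (here, the
    insertion order of the dict is assumed to be topological)."""
    values = dict(inputs)
    for node, (gate, in_edges) in graph.items():
        vals = [(not values[src]) if neg else bool(values[src]) for src, neg in in_edges]
        values[node] = GATES[gate](vals)
    return values

print(evaluate(graph, {"x1": 1, "x2": 0, "x3": 1}))
```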
**Training NNs on target Boolean functions:** We start by obtaining the truth table of each function graph, which serves as our data source. The training set \((\mathbf{\mathcal{X}}_{t},\mathbf{\mathcal{Y}}_{t})\) includes the complete truth table with added random noise (\(\mathcal{N}(0,0.1)\)) during each iteration to increase the number of training samples. The validation set \((\mathbf{\mathcal{X}}_{v},\mathbf{\mathcal{Y}}_{v})\) consists of the noise-free rows of the truth table. We use multi-layered perceptrons (MLPs) with ReLU activation for the hidden units and Kaiming weight initialization to learn the Boolean functions. The loss function is bitwise cross-entropy with Sigmoid activation. The NNs are trained using Adam optimizer with L2 regularization of \(1e-4\).
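A minimal PyTorch sketch of this training setup follows. The hyperparameters not stated above (width, depth, learning rate, number of epochs) are placeholders, and we read the stated \(\mathcal{N}(0,0.1)\) as a noise standard deviation of 0.1.

```python
import torch
import torch.nn as nn

def make_mlp(n_in, n_out, width, depth):
    layers, d = [], n_in
    for _ in range(depth):
        lin = nn.Linear(d, width)
        nn.init.kaiming_normal_(lin.weight, nonlinearity="relu")  # Kaiming init
        layers += [lin, nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)

def train(model, X, Y, epochs=2000, lr=1e-3):
    # Bitwise cross-entropy with Sigmoid == BCEWithLogitsLoss; L2 via weight_decay.
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        noisy_X = X + 0.1 * torch.randn_like(X)  # fresh noise each iteration
        opt.zero_grad()
        loss = loss_fn(model(noisy_X), Y)
        loss.backward()
        opt.step()
    return model
```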
## 2 Standard training of NNs is not enough
In this section, we show that NNs, through standard training, do not acquire structural properties that reflect the properties of sub-functions. Specifically, we consider the two properties in isolation by constructing two graphs: one with input separable sub-functions, and the other with a reused sub-function.
**Input Separable:** The first function graph we consider has 4 input nodes and 4 output nodes. The output nodes \(\{y_{1},y_{2}\}\) depend only on \(\{x_{1},x_{2}\}\), while the output nodes \(\{y_{3},y_{4}\}\) depend only on \(\{x_{3},x_{4}\}\) (Figure 1a). We train various NN architectures to learn the target function with perfect accuracy on the validation set. The property of the function graph that reflects input-separable sub-functions is that there are no paths from input nodes \(\{x_{1},x_{2}\}\) to output nodes \(\{y_{3},y_{4}\}\) and from input nodes \(\{x_{3},x_{4}\}\) to output nodes \(\{y_{1},y_{2}\}\). However, in a neural network, all input units are connected to all output units through the same number of paths. Therefore, we analyze the strength of their learned relationship by considering the product of weight magnitudes along those paths.
Consider a trained NN parametrized by \(\mathbf{\theta}\in\mathbb{R}^{a}\) and the set of all paths, \(\mathbf{\mathcal{P}}\). Each edge weight \(\theta_{i}\) is assigned a binary variable \(p_{i}\) to indicate whether it belongs to a given path \(p\in\mathbf{\mathcal{P}}\). We define the edge-weight product of a path as \(\mathbf{\pi}_{p}=\prod_{i=1}^{a}|\theta_{i}|^{p_{i}}\), and \(\mathbf{\pi}(i,j)=\sum_{p\in\mathbf{\mathcal{P}}_{i\to j}}\mathbf{\pi}_{p}\) represents the sum of \(\mathbf{\pi}_{p}\) over all paths from input unit \(i\) to output unit \(j\). We evaluate \(\mathbf{\pi}(i,j)\) for \(i=1,2\) and \(j=3,4\), and \(\mathbf{\pi}(i,j)\) for \(i=1,2\) and \(j=1,2\). If the former is significantly smaller than the latter, then the input units \(1\) and \(2\) are not used by output units \(3\) and \(4\).
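For a layered MLP, \(\mathbf{\pi}(i,j)\) can be computed for all input-output pairs without enumerating paths, since the sum over paths of products of absolute weights equals the product of the absolute weight matrices. A sketch, assuming the model is an `nn.Sequential` of Linear/ReLU layers as built above:

```python
import torch

@torch.no_grad()
def path_weight_products(model):
    """pi(i, j) for every input/output pair: the sum over all paths of the
    product of absolute edge weights. Equals the product of the absolute
    weight matrices (biases play no role in the path definition)."""
    Ws = [m.weight.abs() for m in model if isinstance(m, torch.nn.Linear)]
    pi = Ws[0]
    for W in Ws[1:]:
        pi = W @ pi            # each step sums path products over one more layer
    return pi.T                # entry (i, j): from input unit i to output unit j
```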
We perform a two-sample mean test with unknown standard deviations, with \((\mu_{1},s_{1})\) representing the mean and sample standard deviation of \(\mathbf{\pi}(i,j)\) for \(i=1,2\) and \(j=3,4\), and \((\mu_{2},s_{2})\) representing the mean and sample standard deviation of \(\mathbf{\pi}(i,j)\) for \(i=1,2\) and \(j=1,2\). The null hypothesis \(H_{0}\) is \(\mu_{1}=\mu_{2}\), and the alternative hypothesis is \(\mu_{1}<\mu_{2}\). A similar test is performed for input units \(3\) and \(4\). Figure 1b shows an example heat map for \(\mathbf{\pi}(i,j)\), where \(i,j\in\{1,2,3,4\}\). For the example shown we cannot reject the null hypotheses. Similarly, we conducted statistical tests on nine different NNs with varying architectures and seed values, and the results are summarized in Table 1. For NNs with a single hidden layer, we can reject the null hypotheses, but for deeper NNs, both the null hypotheses cannot be rejected.
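Assuming the two-sample test with unknown standard deviations corresponds to Welch's t-test, it can be carried out as follows (array names are ours):

```python
from scipy.stats import ttest_ind

# pi_cross: the pi(i, j) values for i in {1, 2}, j in {3, 4};
# pi_within: the pi(i, j) values for i in {1, 2}, j in {1, 2}.
stat, p_value = ttest_ind(pi_cross, pi_within, equal_var=False, alternative="less")
reject_h0 = p_value < 0.05   # evidence that mu_1 < mu_2
```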
**Reused:** Consider the function graph shown in Figure 2a, consisting of 4 input nodes and 16 output nodes. First, we construct an intermediate sub-function \(g(X)\) such that the three gate nodes depend on all the inputs. This sub-function is utilized by two separate gates, \(f_{1}(g)\) and \(f_{2}(g)\), which are then used 8 times by different output nodes. All paths to the output nodes pass through three gate nodes in the first hierarchical level and two gate nodes in the second level. We analyze the edge-weight product of the paths from hidden units to output units. Let \(\mathbf{\pi}_{p}^{l}\) denote the sum of \(\mathbf{\pi}_{p}\) for all paths originating from hidden layer \(l\), which contains \(N_{l}\) hidden units. We compute the minimum number of units, \(N_{P}^{l}\), that are necessary to achieve \(P\%\) of the total \(\mathbf{\pi}_{p}\). By comparing \(N_{P}^{l}\) to that of the function graph, we can conclude whether the NN has learned to reuse intermediate states.
\begin{table}
\begin{tabular}{c|c c c} \hline
 & \multicolumn{3}{c}{Width} \\
Hidden Layers & 24 & 36 & 48 \\ \hline
1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
2 & & & \\
3 & & & \\ \hline
\end{tabular}
\end{table}
Table 1: NN architectures with varying widths and depths for which both null hypotheses are rejected (check marks; consistent with the text, only the single-hidden-layer NNs reject them).
Figure 1: a. Function graph with input separable sub-function, b. Edge-weight product of paths from input units to output units in trained NNs.
Figure 2: a. Function graph with reused sub-function, b. the number of units covering 90% of the total edge-weight product of paths.
We independently trained NNs with two hidden layers and different widths on the target Boolean function. The resulting \(N_{90}^{l}\) values for different layers are shown in Figure 2b. Our analysis indicates that, as the width of the NN increases, so does \(N_{90}^{l}\) in both the hidden layers, and these values are consistently close to the actual width of the NN. We trained an NN with hidden layers of width 3 and 2, respectively, to confirm that NNs with those widths can learn the function well.
## 3 Iterative pruning of NNs
In the previous section, we demonstrated that standard NN training is inadequate for learning structural properties that reveal input separable or reused sub-functions. Nevertheless, we observed that NNs with a relatively low number of parameters could still acquire these properties. Since hierarchical modularity is a property of sparse networks, we explored NN pruning algorithms as a means of achieving this sparsity. Prior research has shown that pruning NNs can reduce their number of parameters without compromising their performance [21; 22; 23]. Edge pruning methods typically require specifying a target NN density or pruning ratio [21; 24], but determining a density value that preserves the NN's performance remains an open question. Recently, iterative magnitude pruning [25; 26] has emerged as a natural solution to this problem. The algorithm prunes edges iteratively, removing some edges in each iteration and then retraining the NN. This process can be repeated until the NN achieves the same validation performance as the dense NN while being as sparse as possible.
**Iterative edge pruning:** Consider the initial edge pruning step \(p_{e}\). We train the dense NN and prune \(p_{e}\%\) of the edges with the lowest weight magnitude. The sparse NN resulting from this pruning is then trained with the same number of epochs and learning rate schedule as the original NN. We repeat this process until the sparse NN can no longer achieve the same validation accuracy as the dense NN. At this point, we rewind the NN to the previous sparse NN that achieved the same validation accuracy as the dense NN and update the value of \(p_{e}\) as \(p_{e}=p_{e}/2\). We repeat this process until \(p_{e}\) becomes lower than the required step to prune a single edge, thereby ensuring that the lowest possible density is achieved.
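The sketch below captures this pruning schedule. The `retrain` and `val_acc` callbacks are placeholders, and a complete implementation must also keep pruned weights frozen at zero during retraining (e.g., via gradient masks), which we omit for brevity.

```python
import copy
import torch

def prune_lowest_edges(model, frac):
    """Zero out the fraction `frac` of the remaining (non-zero) weights
    with the smallest magnitude, across the whole network."""
    with torch.no_grad():
        mags = torch.cat([p.abs().flatten() for n, p in model.named_parameters()
                          if "weight" in n])
        alive = mags[mags > 0]
        k = int(frac * alive.numel())
        if k == 0:
            return
        thresh = alive.kthvalue(k).values
        for n, p in model.named_parameters():
            if "weight" in n:
                p[p.abs() <= thresh] = 0.0

def iterative_edge_pruning(model, retrain, val_acc, dense_acc, p_e=0.2):
    """Prune, retrain, and rewind/halve p_e on failure, until p_e drops
    below the step needed to prune a single edge."""
    n_alive = sum(int((p != 0).sum()) for n, p in model.named_parameters()
                  if "weight" in n)
    best = copy.deepcopy(model)
    while p_e * n_alive >= 1:          # sketch: uses the initial edge count
        cand = copy.deepcopy(best)
        prune_lowest_edges(cand, p_e)
        retrain(cand)                  # same epochs / LR schedule as the dense NN
        if val_acc(cand) >= dense_acc:
            best = cand                # accept; keep pruning at this rate
        else:
            p_e /= 2                   # rewind to `best` and halve the step
    return best
```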
We next apply the tests developed in the previous section to analyze whether edge-pruned NNs acquire structural properties resembling those of the sub-functions. Specifically, we train and prune different NN architectures to learn the function graph with input-separable sub-functions. For the example shown in Figure 3a, as well as for NNs with different architectures, we can reject the null hypotheses in favor of the alternate hypotheses. This implies that edge-pruned NNs acquire structural properties that enable them to recognize input-separable sub-functions.
Consider the previous function graph with reused sub-functions, and NNs of increasing widths. We independently train and prune each NN architecture and determine the number of units \(N_{90}^{l}\) (Figure 3b). We find that in a majority of the trials \(N_{90}^{2}=2\), i.e., two second-layer units are reused by the output units. Although there is a significant reduction in \(N_{90}^{1}\), the edge-pruned NNs fail to identify 3 units that are being reused. We observe that edge-pruned NNs identify sparse connectivity between units and reuse those units. However, they do not identify units corresponding to sub-functions that are not sparsely connected yet reused. We hypothesize that this may be due to the initial pruning iterations, where some edges from all units are removed, leaving no hidden units with dense connectivity to learn densely connected and reused sub-functions. Additionally, this could also be due to very large NN widths and the absence of an objective conditioning NNs to utilize as few hidden units as possible. Therefore, we next introduce an iterative unit pruning method to limit the number of units used.
**Iterative hidden unit pruning:** To prune hidden units, we first train the neural network and then assign each hidden unit a score. We eliminate \(p_{u}\%\) of the hidden units with the lowest scores and train the resulting pruned network for the same number of epochs and learning rate schedule as the original network. We repeat this two-step pruning process iteratively.
Figure 3: a. Edge-weight product of paths from input units to output units in edge-pruned NNs; b. the number of units covering 90% of the total edge-weight product of paths in edge-pruned NNs.
We use loss sensitivity as the scoring metric for the units [27; 28; 29; 30], which approximates the change in the network's loss when the activation of a particular hidden unit is set to zero. Specifically, for a unit \(i\) in hidden layer \(l\), with activation \(a_{i}^{l}\), the score is computed as the first-order Taylor approximation

\[s_{i}^{l}=\left|\frac{\partial\mathcal{L}}{\partial a_{i}^{l}}\,a_{i}^{l}\right| \tag{1}\]
The iterative unit pruning process continues until the NN can no longer achieve the same validation accuracy as the original NN. To ensure that we have pruned as many units as possible, we revert to the latest NN that achieved the same validation accuracy as the original NN and halve the value of \(p_{u}\). This process is repeated until the unit pruning step becomes lower than the step size needed to prune a single unit. We perform unit pruning before the edge pruning process to identify the minimum widths required in each hidden layer. Once the minimum widths are determined, edges are pruned to reveal the sparse connectivity between those units.
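A sketch of this sensitivity score, under our reading that it is the first-order Taylor estimate from the cited pruning literature:

```python
import torch

def unit_sensitivity(activations, loss):
    """Score each hidden unit by the first-order Taylor estimate of the loss
    change when its activation is zeroed: s_i = |a_i * dL/da_i|, averaged
    over the batch. `activations` is a list of per-layer tensors retained
    from the forward pass (they must be part of the autograd graph)."""
    grads = torch.autograd.grad(loss, activations, retain_graph=True)
    return [(a * g).abs().mean(dim=0) for a, g in zip(activations, grads)]
```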
We conducted tests to analyze whether the unit-edge pruned NNs acquire structural properties resembling those of the sub-functions, as presented in the previous section. The results of these tests are shown in Figure 4, and show that the unit-edge pruned NNs do acquire structural properties resembling both input separable sub-functions and reused sub-functions.
## 4 Detecting modules within sparse NNs
In this section, we propose a method to uncover sub-networks or modules that approximate the sub-functions within the target function. Owing to the layered structure of NNs, we cast this as a two-step partitioning problem. First, we cluster the units belonging to the same layer, under the assumption that within a layer there exist subsets of units that participate in learning the same sub-function. Next, we merge these unit clusters across layers wherever they exhibit strong connectivity. The overview of our proposed method is illustrated in Figure 5.
**Layer-wise unit clustering:** Let us consider a single layer \(l\) with \(N_{l}\) units. For each unit, we construct a feature vector based on its outgoing connectivity. The feature vector for a unit \(i\) is a binary vector \(f_{i}^{l}\in\{0,1\}^{g}\), where \(g\) is the total number of units in all the later layers. If unit \(j\) is connected to unit \(i\) through at least one directed path, then \(f_{i}^{l}(j)=1\), otherwise \(f_{i}^{l}(j)=0\). Our hypothesis is that the units that participate in learning the same sub-function are reused by similar units in the later layers. To partition the units into clusters such that their feature vectors have low intra-cluster and high inter-cluster distances, we use the Agglomerative clustering method with cosine distance and average linkage. To identify the optimal number of clusters \(K_{l}\), we use the modularity metric, which we have modified for our problem [31; 32; 33].
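A sketch of the clustering step itself using SciPy (the modularity-based choice of \(K_{l}\) is defined next):

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_layer_units(features, k):
    """Agglomerative clustering of one layer's units. `features` is the
    (N_l x g) binary reachability matrix of outgoing connectivity;
    cosine distance with average linkage, cut into k clusters."""
    Z = linkage(pdist(features, metric="cosine"), method="average")
    return fcluster(Z, t=k, criterion="maxclust")   # labels in 1..k
```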
Let \(\tilde{D}\) be the normalized distance matrix of the unit feature vectors, where the sum of all elements is equal to \(1\). Consider the units partitioned into \(k\) clusters and a matrix \(A\in\mathbb{R}^{k\times k}\), where \(A_{ij}=\sum_{a\in C_{i},b\in C_{j}}\tilde{D}_{ab}\) represents the sum of the distances between all pairs of units in clusters \(C_{i}\) and \(C_{j}\). The modularity metric, denoted by \(M\), measures the quality of the partitioning and is defined as:
\[M=\sum_{i=1}^{k}\left(A_{ii}-\left[\sum_{j=1}^{k}A_{ij}\right]^{2}\right) \tag{2}\]
Figure 4: a. Edge-weight product of paths from input units to output units in unit-edge pruned NNs; b. the number of units covering 90% of the total edge-weight product of paths in unit-edge pruned NNs.
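Equation 2 can be computed directly from the partition labels and the normalized distance matrix \(\tilde{D}\); a sketch:

```python
import numpy as np

def modularity(D, labels):
    """Modularity M of a layer partition (Eq. 2), computed from the pairwise
    distance matrix D of the unit feature vectors, normalized to sum to 1."""
    Dn = D / D.sum()
    M = 0.0
    for c in np.unique(labels):
        members = labels == c
        A_cc = Dn[np.ix_(members, members)].sum()   # intra-cluster distances
        row_c = Dn[members, :].sum()                # sum_j A_cj over all clusters
        M += A_cc - row_c ** 2
    return M
```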
The modularity metric can accurately detect the presence of multiple unit clusters (\(K_{l}=2,...,N_{l}-1\)). However, it fails in the edge cases where all units belong to a single cluster (\(K_{l}=1\)) or to separate individual clusters (\(K_{l}=N_{l}\)). To address these cases, we conduct a separability test whenever the modularity metric is lowest for \(k=2\) or \(k=N_{l}-1\), or whenever the modularity values are close to zero. A modularity metric close to zero indicates that the intra-cluster distances do not differ significantly from the random baseline. To detect this, we set a threshold on the lowest modularity value obtained.
**Separability test:** The unit separability test is designed to evaluate whether two units in a cluster can be separated into sub-clusters. Consider two units \(i\) and \(j\), with \(o_{i}=\sum f_{i}^{l}\) and \(o_{j}=\sum f_{j}^{l}\) neighbors respectively, and \(o_{ij}=\sum f_{i}^{l}\odot f_{j}^{l}\) common neighbors. We consider a random baseline that preserves \(o_{i}\) and \(o_{j}\). The number of common neighbors is modeled as a binomial random variable with \(g\) trials and probability of success \(p=\frac{o_{i}\times o_{j}}{g^{2}}\). The units are separable if the observed value of \(o_{ij}\) is less than the expected value \(\mathbb{E}(o_{ij})=g\,p\) under the random model.
Consider the partition where \(N_{l}-1\) clusters are obtained. If the two units that are found in the same cluster are separable, it implies that all units belong to separate clusters. Now let us consider the partition of units into two clusters. We merge the feature vectors of the two unit groups. If the two groups of units are not separable, it implies that all units must belong to the same cluster. In some cases, both tests yield positive results. We determine the optimal number of clusters by selecting the result that is more statistically significant.
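A sketch of the separability test; returning the binomial tail probability as well allows the two edge cases to be compared by statistical significance:

```python
from scipy.stats import binom

def separability(f_i, f_j):
    """Separability test for two units with binary out-neighbor vectors
    f_i, f_j of length g. Under the random baseline the common-neighbor
    count is Binomial(g, o_i*o_j/g^2); the pair is separable when the
    observed overlap falls below its expectation E[o_ij] = g*p."""
    g = f_i.size
    o_i, o_j = int(f_i.sum()), int(f_j.sum())
    o_ij = int((f_i * f_j).sum())
    p = o_i * o_j / g**2
    return o_ij < g * p, binom.cdf(o_ij, g, p)   # (separable?, significance)
```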
**Merging clusters across layers:** Strongly connected clusters from adjacent layers are next merged to uncover multi-layered modules. Consider, \(C_{l}^{i}\), \(i=1,2,...,K_{l}\) to be the clusters identified at layer \(l\). Let \(e_{i,j}^{l}\) be the number of edges from cluster \(C_{l}^{i}\) to cluster \(C_{l+1}^{j}\). The two clusters are merged if : \(\frac{e_{i,j}^{l}}{\sum_{j=1}^{K_{l+1}}e_{i,j}^{l}}\geq\delta_{m}\) and \(\frac{e_{i,j}^{l}}{\sum_{i=1}^{K_{l}}e_{i,j}^{l}}\geq\delta_{m}\), where \(\delta_{m}\) is the merging threshold.
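A sketch of the merging criterion, with `e` the inter-cluster edge-count matrix between layers \(l\) and \(l+1\):

```python
def should_merge(e, i, j, delta_m=0.9):
    """Merge cluster C_l^i with C_{l+1}^j when their connecting edges make up
    at least a fraction delta_m of both C_l^i's out-edges and C_{l+1}^j's
    in-edges. e[i][j] = number of edges from cluster i (layer l) to cluster
    j (layer l+1)."""
    out_frac = e[i][j] / sum(e[i])
    in_frac = e[i][j] / sum(row[j] for row in e)
    return out_frac >= delta_m and in_frac >= delta_m
```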
The output units are merged with the previous layer's modules, ensuring that at least a fraction \(\delta_{m}\) of the incoming edges to each output unit come from that module. This allows multiple output units to be matched to the same structural module.
## 5 Experiments and results
### Modular and hierarchical Boolean function graphs
In this section, we conduct experiments on Boolean function graphs with different sub-function properties to validate our pipeline. We begin by testing the pipeline on four function graphs shown in Figure 7. These graphs include: 1) input separable sub-functions, 2) a reused sub-function, 3) sub-functions that are both input separable and reused, and 4) a function graph without any such sub-function where all nodes are strongly connected to every other node.
We perform 36 trials for each function graph by training neural networks with combinations of 3 width values, 3 depth values, and 4 seed values. A trial is considered successful if the proposed pipeline detects a module corresponding to an input separable or reused sub-function (Figure 5). For Boolean
Figure 5: Proposed module detection pipeline.
Figure 6: Success rates for the validation function graphs
functions, we set the modularity metric threshold to -0.2 and the cluster merging threshold to 0.9. Figure 6 shows the success rates for each function graph and NNs with different depths. We observed that the proposed pruning and module detection pipeline has a high success rate when the depth of the NN exceeds that of the function graph (see appendix section B for NN visualizations).
**Sub-functions that are reused many times are uncovered more accurately:** To demonstrate this, we consider the function graph shown in Figure 7b, which contains a single reused sub-function. We vary the number of times the two intermediate gate nodes in the second hierarchical level are used by decreasing the number of output nodes, and measure the success rate. The results are shown in Figure 8, where we observe an increasing trend in the success rate as the number of output nodes using the two gate nodes increases. As the number of output units using the two gate nodes decreases, learning the two gate nodes on top of the preceding "dense" sub-function may no longer be efficient. We provide visualizations of the corresponding NNs in the appendix section C. Note that NNs with only one hidden layer recover a single module, as the function graphs require four hierarchical levels. If only three hierarchical levels are available, the function graph collapses to a dense graph.
**Sub-functions with higher input separability are detected more accurately:** In this experiment, we consider a function graph with two input separable sub-functions, shown in Figure 7a. We increase the overlap between the two separable input sets by decreasing the total number of input nodes and reusing them (Figure 9). We also vary the number of times each sub-function is replicated or used in the output nodes. Our goal is to uncover three properties of the structure: input units accurately separated into sub-function-specific and reused ones; the two output sub-functions accurately detected in later layers; and all hidden units assigned to one of the two output modules. The success rate for each of these properties as a function of input overlap and sub-function use is shown in Figure 10.
Figure 10: Success rates for various properties uncovered by the pipeline when the input overlap between two sub-functions is increased
Figure 7: Function graphs used to validate the proposed pipeline
Figure 9: Increasing the input overlap (reuse) between two input separable sub-functions
We observe that our method has a high success rate for detecting these properties for sub-functions with high input separability. However, as the overlap between input node sets increases, the success rate for detecting input modules decreases. Furthermore, the success rate decreases as the number of times a sub-function is used decreases. We also observe that for sub-functions with low input separability (and high input overlap), intermediate units are often clustered into a single module due to the hidden units pruning step. Finally, we find that the same trend is observed for detecting output sub-functions, where increasing input overlap and decreasing sub-function use results in a low success rate for our method. (See appendix section D for visualization of these results).
**Hierarchical structures uncovered vary depending on NN depth:** The function graph in Figure 11a consists of two input separable sub-functions whose outputs are used by a single intermediate sub-function. This intermediate sub-function is then reused by two additional sub-functions to produce the final output. Interestingly, we observe that the proposed pipeline uncovers different hierarchical structures depending on the depth of the NN.
Figure 12a shows the success rate of uncovering the specific sub-functions, broken down by NN depth. We find that NNs with depth greater than or equal to the number of hierarchical levels (5) can uncover all three types of sub-functions in the function graph. However, for NNs with lower depth, only the input separable sub-functions are uncovered, while the intermediate and output sub-functions are merged into a single module. This observation highlights the importance of selecting an appropriate depth for the NN architecture to effectively uncover hierarchical structures in Boolean function graphs (see appendix section E for NN visualizations). In addition, we report the success rates of uncovering the exact hierarchical structure that corresponds to the depth of the NN as we vary the number of times the output sub-functions are used (Figure 12b). The findings demonstrate an increasing trend in the success rate, which is consistent with our previous results.
### Modular and hierarchical functions with MNIST digits
In this section, we present an experimental evaluation of hierarchically modular tasks constructed using the MNIST handwritten digits dataset. We set the modularity metric threshold to -0.3 and the cluster merging threshold to 0.6 for tasks with MNIST digits.
**Classifying two MNIST digits:** We begin by considering a simple function constructed to classify two MNIST digits. The NN is presented with two images concatenated together, and the output is a vector of length 20. The first 10 indices of the output predict the class for the first image, and the remaining 10 predict the class for the second image. We construct the dataset such that each unique combination of labels has 1000 input data points generated by randomly selecting images. We split
Figure 11: The same Boolean function represented by different function graphs depending on the number of hierarchical levels
Figure 12: Success rates: a. uncovering specific sub-functions, b. uncovering the overall hierarchical structure, for NNs with varying depths trained on function graph in Figure 11
the data into training and validation sets with an 8:2 ratio, and then train nine neural networks with varying widths (392, 784, 1568) and number of hidden layers (2, 3, 4) to learn the function for four different seed values. We use the Adam optimizer with bit-wise cross-entropy as the loss function, and select \(99\%\) as the accuracy threshold for the pruning algorithm, as all dense NNs achieve at least \(99\%\) validation accuracy. Out of the 36 trials conducted, we obtain two completely separable modules for 31 out of 36 trials, indicating the high effectiveness of our approach. (see appendix F for NN visualizations)
**Hierarchical and Modular MNIST Task:** In this section, we present a hierarchical and modular task that uses the MNIST digits dataset (Figure 13a). The task takes two MNIST images as input, each belonging to the first 8 classes (digits 0-7). The digit values are represented as 3-bit binary vectors and used to construct three output Boolean sub-functions. To generate the dataset, we randomly select 1000 input data points for each unique combination of digits. The data is then split into training and validation sets with an 8:2 ratio. All the NNs that we experiment with reach a validation accuracy of \(98\%\), which we use as the accuracy threshold for the pruning algorithm.
To analyze the modular structure uncovered by our methodology, we divide the success rates into three categories: (1) detecting the two input separable modules, (2) detecting the three output modules, and (3) separating the middle-layer units into either the input separable modules or the output modules. We observe that the NNs uncover the three output modules with high success rates. However, NNs with lower depths fail to recover the two input separable modules, and all the units in the early layers are clustered into a single module. As the depth of the NN increases, the input separable modules are recovered with high accuracy as well. The success rates for middle-layer unit separability remain very low, even when the depth of the NN increases: the units belonging to those 1-2 hidden layers are clustered into a single module. These units may be learning representations required to approximate the two digits well. This finding indicates that the NN depths we experiment with may not be sufficient to capture the underlying function, despite the NNs learning the task with high validation accuracy. Please refer to appendix F for NN visualizations.
## 6 Conclusion
We have introduced _Neural Sculpting_, a methodology to uncover the hierarchical and modular structure of target tasks in NNs. Our findings first demonstrated that NNs trained conventionally do not naturally acquire the desired structural properties related to input separable and reused sub-functions. To address this limitation, we proposed a training strategy based on iterative pruning of units and edges, resulting in sparse NNs with those previous structural properties. Building upon this, we introduced an unsupervised method to detect modules corresponding to various sub-functions while also uncovering their hierarchical organization. Finally, we validated our proposed methodology by using it to uncover the underlying structure of a diverse set of modular and hierarchical tasks.
As a future research direction, we could investigate the efficiency and theoretical underpinnings of modularity in function graphs, which could further motivate pruning NNs. One potential approach to overcome the computational costs and dependence on initial architecture depth could be to use neural architecture search algorithms to construct modular NNs during training. Additionally, exploring the use of attention mechanisms and transformers to uncover hierarchical modularity in tasks could be an interesting direction for future work. These approaches could provide a more efficient way of obtaining modular NNs that can also better capture the underlying structure of the target task.
Figure 13: a. Hierarchically modular task constructed with MNIST digits; b. Success rates for various modules recovered by NNs with varying depths. |
2301.08494 | Application of a Neural Network classifier for the generation of clean
Small Magellanic Cloud stellar samples | Context. Previous attempts to separate Small Magellanic Cloud (SMC) stars
from the Milky Way (MW) foreground stars are based only on the proper motions
of the stars. Aims. In this paper we develop a statistical classification
technique to effectively separate the SMC stars from the MW stars using a wider
set of Gaia data. We aim to reduce the possible contamination from MW stars
compared to previous strategies. Methods. The new strategy is based on neural
network classifier, applied to the bulk of the Gaia DR3 data. We produce three
samples of stars flagged as SMC members, with varying levels of completeness
and purity, obtained by application of this classifier. Using different test
samples we validate these classification results and we compare them with the
results of the selection technique employed in the Gaia Collaboration papers,
which was based solely on the proper motions. Results. The contamination of MW
in each of the three SMC samples is estimated to be in the 10-40%; the "best
case" in this range is obtained for bright stars (G > 16), which belong to the
Vlos sub-samples, and the "worst case" for the full SMC sample determined by
using very stringent criteria based on StarHorse distances. A further check
based on the comparison with a nearby area with uniform sky density indicates
that the global contamination in our samples is probably close to the low end
of the range, around 10%. Conclusions. We provide three selections of SMC star
samples with different degrees of purity and completeness, for which we
estimate a low contamination level and have successfully validated using SMC RR
Lyrae, SMC Cepheids and SMC/MW StarHorse samples. | Ó. Jiménez-Arranz, M. Romero-Gómez, X. Luri, E. Masana | 2023-01-20T09:49:45Z | http://arxiv.org/abs/2301.08494v1 | Application of a Neural Network classifier for the generation of clean Small Magellanic Cloud stellar samples+
###### Abstract
Context:Previous attempts to separate Small Magellanic Cloud (SMC) stars from the Milky Way (MW) foreground stars are based only on the proper motions of the stars.
Aims:In this paper we develop a statistical classification technique to effectively separate the SMC stars from the MW stars using a wider set of _Gaia_ data. We aim to reduce the possible contamination from MW stars compared to previous strategies.
Methods:The new strategy is based on neural network classifier, applied to the bulk of the _Gaia_ DR3 data. We produce three samples of stars flagged as SMC members, with varying levels of completeness and purity, obtained by application of this classifier. Using different test samples we validate these classification results and we compare them with the results of the selection technique employed in the _Gaia_ Collaboration papers, which was based solely on the proper motions.
Results:The contamination of MW in each of the three SMC samples is estimated to be in the \(10-40\%\); the "best case" in this range is obtained for bright stars (\(G>16\)), which belong to the \(V_{los}\) sub-samples, and the "worst case" for the full SMC sample determined by using very stringent criteria based on StarHorse distances. A further check based on the comparison with a nearby area with uniform sky density indicates that the global contamination in our samples is probably close to the low end of the range, around 10%.
Conclusions:We provide three selections of SMC star samples with different degrees of purity and completeness, for which we estimate a low contamination level and have successfully validated using SMC RR Lyrae, SMC Cepheids and SMC/MW StarHorse samples.
## 1 Introduction
This paper is a follow-up of Jimenez-Arranz et al. (2022) (hereafter, J22). In that paper the authors analyzed the kinematics of the Large Magellanic Cloud (LMC) using the _Gaia_ DR3 data; the analysis required a reliable separation of LMC and foreground (Milky Way) stars in the dataset; for this purpose a classification method based on a Neural Network was developed, tested and applied. The result was a series of datasets providing a reliable selection of LMC objects, published through the Centre de Donnees de Strasbourg for public use.
In this paper we extend the application of this methodology to the Small Magellanic Cloud (SMC), in order to obtain similarly reliable datasets for the study of this object, and we also make them public for general use.
The paper is organized as follows. In Section 2 we describe the _Gaia_ base sample and the training sample. In Section 3 we explain how we train the classifier and apply it to the _Gaia_ base sample. We also compare the different datasets obtained. Next, in Section 4, we validate the data sets with external data, such as Cepheids (Ripepi et al. 2017), RRLyrae (Muraveva et al. 2018) and StarHorse (Anders et al. 2022). Finally, we give our conclusions in Section 5.
## 2 Data selection
In this section we introduce the samples used in this paper. First, we characterise the _Gaia_ DR3 base sample (Gaia Collaboration et al. 2021a) with stars selected around the SMC centre. The contamination of foreground MW stars in this sample is non-negligible. One might consider separating SMC and MW stars through their distances; however, owing to the large uncertainties of parallax-based distances at the SMC (Lindegren et al. 2021), this is not feasible and would only be effective for removing bright MW stars. Second, we characterise the _Gaia_ training sample we use to train the machine learning classifier (a Neural Network) to discriminate SMC stars from MW foreground stars. This training sample is intended to mimic the full dataset available in the _Gaia_ catalogue.
### _Gaia_ base sample
The _Gaia_ base sample was obtained using a selection from the gaia_source table in _Gaia_ DR3 with a \(10^{\circ}\) radius around the SMC centre defined as \((\alpha,\delta)=(12.80^{\circ},-73.15^{\circ})\)(Cioni et al. 2000a) and a limiting \(G\) magnitude of 20.5. We only kept the stars with parallax and integrated photometry information, since they are used in the SMC/MW classification. This selection can be reproduced using the following ADQL query in the _Gaia_ archive:
SELECT * FROM gaiadr3.gaia_source as g
WHERE 1=CONTAINS(POINT('ICRS', g.ra, g.dec),
                 CIRCLE('ICRS', 12.80, -73.15, 10))
  AND g.parallax IS NOT NULL
  AND g.phot_g_mean_mag IS NOT NULL
  AND g.phot_bp_mean_mag IS NOT NULL
  AND g.phot_rp_mean_mag IS NOT NULL
  AND g.phot_g_mean_mag < 20.5
The resulting base sample contains a total of 4 047 225 objects.
### Gaia training sample
As in J22, we use GOG (Luri et al. 2014) to produce a training data set of similar characteristics to the base sample. We select particles within 10\({}^{\circ}\) around the SMC centre. We make it compatible with recent estimations of the mean distance and systemic motion obtained from EDR3 data: a distance of 62.8 kpc (Cioni et al. 2000b) and a systemic motion of \(\mu_{\alpha\ast}=1.858\) mas yr\({}^{-1}\), \(\mu_{\delta}=0.385\) mas yr\({}^{-1}\) as inferred in the linear fit (Table 4) to the proper motions in Gaia Collaboration et al. (2021b) (hereafter, MC21).
The _Gaia_ training sample is split into two labelled subsets, one containing SMC and the other MW stars. The SMC simulation includes 54 109 sources, fewer stars than expected in the real data. That is because the GOG simulator is based on a pre-defined catalogue of OGLE stars to provide real positions for the SMC stars (see details in Luri et al. 2014). On the other hand, the MW simulation is based on a realistic Galactic model which generates a number of stars that matches the observations. Similarly to the strategy used in J22, we compensate for this unbalanced and unrealistic ratio between SMC and MW stars by retaining a random 20% fraction of the MW simulation, obtaining 285 258 sources. In Figure 1, both SMC and MW training subsets are characterised.
Our training sample is the result of combining these two simulations, which we contrast with the _Gaia_ base sample in Figure 2. These plots demonstrate that the _Gaia_ training sample roughly matches the major characteristics of the _Gaia_ base sample, but they also highlight some of its limitations. For example, the colour-magnitude diagram (CMD) of the SMC simulation is not fully representative at the faintest magnitudes, with a lack of stars and an artificial cut line, and the distribution of the SMC stars on the sky forms a kind of square owing to its origin in an extraction from the OGLE catalogue. We will test the samples' effectiveness using a number of validation samples to ensure that they are appropriate.
### Proper motions-based classification
To establish a baseline comparison with previous methods, we use the same selection based on the proper motions as in MC21. In short, the MW foreground contamination is minimized by computing the median proper motions of the SMC from a sample constrained to its very centre plus a cut in magnitude and parallax. We keep only stars whose proper motions obey the constraint of \(\chi^{2}<9.21\), that is, an estimated 99% confidence region (see details in Section 2.2 of MC21). The resulting sample (hereafter, PM selection) contains 1 720 856 objects1.
Footnote 1: Note that the difference in the number of sources with the ones in MC21 comes from the different cut in radius, now being of 10\({}^{\circ}\) instead of 11\({}^{\circ}\).
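The essence of this baseline selection can be sketched as follows; the treatment of uncertainties is simplified relative to MC21 (we neglect the pmra-pmdec correlation), and the array names are ours:

```python
def pm_selection(pmra, pmdec, pmra_err, pmdec_err, pm_smc, chi2_max=9.21):
    """Keep stars whose proper motion lies inside the ~99% confidence region
    (chi^2 < 9.21 for 2 degrees of freedom) around the median SMC proper
    motion pm_smc = (mu_alpha*, mu_delta)."""
    chi2 = ((pmra - pm_smc[0]) / pmra_err) ** 2 + ((pmdec - pm_smc[1]) / pmdec_err) ** 2
    return chi2 < chi2_max
```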
## 3 SMC/MW classification
In this section we define an improved, more efficient and adjustable selection strategy to distinguish the SMC stars from the Milky Way foreground. Then, based on this classifier, we select three samples of candidate SMC stars with different degrees of completeness and purity.
### Training the classifier
The sklearn Python package (Pedregosa et al. 2011) was used to create a classifier. Using the _Gaia_ data, this module includes a number of classifiers that can be used to differentiate the MW foreground objects from the SMC objects in our base sample using the training sample mentioned in the preceding section. We use position (\(\alpha\), \(\delta\)), parallax and its uncertainty (\(\varpi\), \(\sigma_{\varpi}\)), along with the proper motions and their uncertainties (\(\mu_{\alpha\ast}\), \(\mu_{\delta}\), \(\sigma_{\mu\ast}\), \(\sigma_{\mu\ast}\)), and _Gaia_ photometry (\(G\), \(G_{BP}\), \(G_{RP}\)).
As in J22, we select the Neural Network (NN) as classifier. The NN has 11 input neurons, corresponding to the 11 _Gaia_ parameters listed above; three hidden layers with six, three, and two nodes, respectively; and a single output that gives, for each object, the probability \(P\) of being a SMC star (or, conversely, the probability of not being a MW star). The object is very likely to belong to the SMC (MW) if the \(P\) value is close to 1 (0). The activation function that we employed is the Rectified Linear Unit (ReLU). With a constant learning rate, stochastic gradient descent is used in our model to optimize the log-loss function. The strength of the L2 regularization term is 1e-5.2
Footnote 2: The corresponding author can be contacted if readers are interested in using the Neural Network developed in the paper.
To train the algorithm, we used 60% of the training sample, and the remaining 40% was used for testing purposes. By creating the Receiver Operating Characteristic (ROC) curve and computing the Area Under the Curve (AUC), we assessed the classifier performance. One of the most crucial evaluation criteria for determining the effectiveness of any classification model is the ROC curve. Using various probability thresholds, it summarizes the trade-off between the true positive rate and false positive rate. Another useful tool for classifier evaluation is the AUC of the ROC curve. The larger the AUC, the better the classifier works. An excellent model has an AUC that is close to 1, indicating that it has a high level of separability. Having an AUC equal to 0.5 indicates that the model is incapable of classifying the data.
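A sketch of this setup with scikit-learn follows; `train_df` and the exact column names are placeholders for the labelled training sample described in Section 2.2:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

features = ["ra", "dec", "parallax", "parallax_error",
            "pmra", "pmdec", "pmra_error", "pmdec_error",
            "phot_g_mean_mag", "phot_bp_mean_mag", "phot_rp_mean_mag"]

X_train, X_test, y_train, y_test = train_test_split(
    train_df[features], train_df["is_smc"], train_size=0.6, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(6, 3, 2), activation="relu",
                    solver="sgd", learning_rate="constant", alpha=1e-5)
clf.fit(X_train, y_train)
p_smc = clf.predict_proba(X_test)[:, 1]   # probability of being an SMC star
```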
We provide the ROC curve of our NN classifier in the left panel of Figure 3. We achieve an AUC of 0.998, indicating that our classifier accurately distinguishes between SMC and MW stars in the test sample. We show the Precision-Recall curve in the right panel of Figure 3. When the classes are severely unbalanced, it is another helpful indicator to assess the output quality of the classifier. Both evaluation criteria display a nearly flawless classifier when applied to the training (simulated) data; however, the same caveats regarding the classifier described in J22 apply here.
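The evaluation can be reproduced along these lines; defining the ROC "elbow" as the threshold closest to the perfect (0, 1) corner is one common convention, which we assume here:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

fpr, tpr, thr = roc_curve(y_test, p_smc)
print(f"AUC = {auc(fpr, tpr):.3f}")

# "Elbow" of the ROC curve: the threshold closest to the ideal (0, 1) corner.
elbow = thr[np.argmin(fpr**2 + (1.0 - tpr)**2)]
```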
### Applying the classifier to the _Gaia_ base data
After the NN has been trained, we use it to extract probabilities for each object in the _Gaia_ base sample. Figure 4 displays the resulting probability distribution. Two distinct peaks can be seen, one with probability near 0 and the other with probability near 1.
Figure 1: Characteristics of the GOG simulated samples, in orange and blue: SMC and the MW training samples, respectively. Top left and middle: Distribution of proper motions in right ascension and declination, respectively. Top right: Parallax distribution. Bottom left: Magnitude \(G\) distribution of the simulated samples. Bottom middle and right: Colour-magnitude diagram of the SMC and MW, respectively. Colors represent relative stellar density, with darker colors meaning higher densities.
Figure 2: _Gaia_ base and training samples comparison. Top from left to right: Density distribution in equatorial coordinates of the _Gaia_ base and _Gaia_ training samples in logarithmic scale, parallax, and G-magnitude distributions. Bottom from left to right: Proper motion distributions in right ascension and declination and colour-magnitude diagrams for the _Gaia_ base and training samples. In the histograms, in gray we show the _Gaia_ base sample, while in dotted purple we show the _Gaia_ training sample. In the color-magnitude diagrams, colors represent relative stellar density with darker colors meaning higher densities.
These peaks correspond to stars that the classifier confidently identifies as MW and SMC sources, respectively. In between there is a flat tail of intermediate probabilities, representing sources that the Neural Network has more difficulty classifying. Only 537 137 stars have a probability \(P\) between 0.01 and 0.9, corresponding to 13% of the SMC base sample.
To turn the probabilities generated by the classifier into a classification, we must establish a probability threshold \(P_{cut}\): a star is considered to belong to the SMC if \(P>P_{cut}\) and to the MW if \(P<P_{cut}\) (alternatively, stars with intermediate probabilities could be deemed unclassified). Fixing a low probability threshold ensures that no SMC objects are missed, but at the cost of more "mistaken" MW stars in the SMC-classified sample. Conversely, a high probability threshold reduces contamination in the resulting SMC-classified sample, but at the cost of omitting some SMC stars and producing a less complete sample.
As seen in J22, a choice about the purity-completeness trade-off will determine the characteristics of the final sample and may, therefore, have an impact on the results. To examine the impact of this trade-off, we defined two different samples in this work:
1. Complete sample (\(P_{cut}=0.01\)). In this case, a cut at low probability prioritizes completeness at the cost of larger MW contamination. We determined the cut value by looking at the classification's probability histogram (Figure 4) and selecting the upper limit of the peak of small probability values.
2. Optimal sample (\(P_{cut}=0.31\)). The probability cut in this instance was determined to be the best possible in terms of classification; the value corresponds to the "elbow" of the ROC curve (Figure 3), which is in principle the ideal compromise between completeness and purity.
Additionally, because MW stars increase exponentially at fainter magnitudes whereas SMC stars rapidly decrease beyond \(G\simeq 19.5\) (see discussion in the next section), we introduced a third case after carefully studying the results for the optimal sample. We refer to it as the truncated-optimal sample (\(P_{cut}=0.31\) with \(G<19.5\) mag). As mentioned above, this cut avoids a region at the faint end where the SMC training sample is not representative; by removing these stars, the MW contamination is reduced and the stars with the largest uncertainties are also discarded. Given the purity of the SMC diagrams in Figure 5, we decided against making a further selection by excluding areas of the CMD diagram where contamination is more likely.
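A minimal sketch of how the three probability cuts translate into sample masks; the probability array `p` and the magnitude array `g_mag` are assumed inputs (numpy arrays over the base sample).

```python
# p: SMC membership probabilities for the Gaia base sample (assumed input);
# g_mag: apparent G magnitudes of the same stars (assumed input).
smc_complete = p > 0.01                    # complete sample
smc_optimal = p > 0.31                     # optimal sample (ROC elbow)
smc_trunc = smc_optimal & (g_mag < 19.5)   # truncated-optimal sample
```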
Finally, we consider two datasets for each of the four samples. First, the full sample, where we use no information on line-of-sight velocities. Second, the subset of the first that contains only stars with _Gaia_ DR3 line-of-sight velocities; these are referred to as the corresponding \(V_{los}\) samples. In Table 1, the second and third columns show the number of stars in each dataset together with the mean astrometric information.
### Comparison of classifications
Figure 6 displays the sky density distributions of the classified SMC/MW members in our various samples. The left column shows the SMC selection for each sample, and the right column shows the sources classified as MW. Each row corresponds to one selection technique: the proper motion selection first, followed by the three NN-based strategies. As expected, the outcome of the proper motion-based selection closely resembles that of MC21.
No anomalous classification is seen in the SMC outskirts in these figures, showing that the restricted spatial distribution of the SMC training sample (square region in the top-left panel of Figure 2) does not pose a problem when extrapolating the membership outside this region.
Additionally, we observe that the sources identified as MW in all four samples exhibit an overdensity in the central, most populated region of the SMC, indicating that some SMC stars were misidentified. The two globular clusters 47 Tuc and NGC 362 are successfully removed from the SMC samples; see the concentrations of stars around \((\alpha,\delta)\simeq(5^{\circ},-72^{\circ})\) and \((16^{\circ},-71^{\circ})\), respectively. Moreover, in accordance with the concept of the probability cut, the more complete the SMC sample, the fewer stars are classified as belonging to the MW. In this regard, a cross-match between the complete sample and the proper motion selection sample reveals that the former almost entirely contains the latter: of the 1 720 856 stars in the proper motion sample, 1 697 614 are included in the complete sample, and the complete sample contains nearly four hundred thousand additional stars. Regarding the MW samples, we can estimate their SMC contamination by comparing their density with that of a uniform sky field observed nearby, but away from the SMC centre.
Figure 4: _Gaia_ base sample’s probability distribution for the NN classifier. A high likelihood of being an SMC (MW) star is indicated by a probability value close to 1 (0).
Figure 3: Evaluation metrics for the Neural Network classifier performance. Left: ROC curve. Black dot is in the “elbow” of the ROC curve and it shows the best balance between completeness and purity. The purple star shows the completeness threshold. Right: Precision-Recall curve. In both cases, we compare our model (orange solid curve) with a classifier that has no class separation capacity (blue dashed curve).
The observed overdensity gives an estimate of the "excess" of SMC stars. From this comparison, the percentage of SMC stars in the MW samples is estimated to be around 5-10%, the least contaminated being the MW optimal sample.
We also notice that the dispersion of the astrometric parameters decreases from the NN complete to the NN truncated-optimal sample. This is to be expected, since the increasingly strict selection criteria yield samples that are more homogeneous in distance and velocity.
In Figure 5, we compare the astrometric and photometric distributions of the different SMC samples. In the proper motion selection sample, the proper motion distribution is narrow around the bulk motion of the SMC, owing to the severe cut in proper motion enforced; in the corresponding MW classification, however, two minor peaks are evident beside the SMC one. The NN samples do not show this misclassification. We observe a secondary peak in the right ascension proper motion around 5.2 mas yr\({}^{-1}\), which corresponds to the systemic motion of 47 Tuc (Gaia Collaboration et al. 2018). The truncated-optimal sample has the narrowest parallax distribution among the four SMC samples, which are otherwise quite similar to one another. The \(G\) magnitude distributions of the four SMC selections differ significantly from one another. Both the PM and the NN samples show a \(G\) magnitude peak at \(G\sim 19\) mag, associated with the SMC stars, and a secondary peak at the limiting magnitude \(G=20.5\) mag, which corresponds to the MW contamination. For this reason, we define the truncated-optimal sample by removing this secondary peak from the optimal sample, as mentioned above. The secondary peak is caused by the exponential distribution in \(G\) of the MW stars, which arises from the logarithmic relation between stellar flux and apparent magnitude, combined with the magnitude cut and the spatial distribution of stars in the disc. The SMC stars, in contrast, exhibit a prominent peak at \(G\simeq 19\) mag, varying slightly between samples depending on the amount of misclassified MW sources.
All SMC samples have a fairly similar CMD. Only minor variations are visible in the MW selection of the optimal and truncated-optimal samples, which comprise, as expected, sources of the red giant branch of the SMC that the NN classifier misidentifies as MW.
## 4 External validation of the classification
In order to validate the results of our selection criteria we compare each of the generated samples with external independent classifications. To do so, we cross-matched our samples with dedicated catalogues of the SMC chosen to have a high degree of purity in the visible band. For this reason, we exclude from this exercise the VMC survey (Cioni et al., 2011) for being in the near-infrared and the SMASH survey (Nidever et al., 2017) for not performing any contamination study, and we use the following:
* SMC Cepheids (Ripepi et al. 2017): we used the 4 793 Cepheids from the paper's sample as a set of highly reliable SMC objects. To obtain the _Gaia_ DR3 data, we cross-matched the positions supplied in the study with the _Gaia_ DR3 catalogue using a 0.3" search radius to find high-confidence matches, keeping 4 788 stars. To make the final selection of 4 765 SMC Cepheids, we applied a cut with a 10\({}^{\circ}\) radius around the SMC centre (replicating our base sample).
* SMC RR Lyrae (Muraveva et al. 2018): similarly, we employed the 2 997 RR Lyrae from the paper as high-reliability SMC objects. After cross-matching with the _Gaia_ DR3 catalogue the sample is reduced to 2 982 stars, from which we kept a final sample of 2 922 SMC RR Lyrae within a 10\({}^{\circ}\) radius around the SMC centre.
* StarHorse (Anders et al. 2022): using a cut of 10\({}^{\circ}\) around the SMC centre, we cross-matched this catalogue with the _Gaia_ DR3 data and obtained a sample of 1 000 066 stars. We distinguished MW and SMC stars using the StarHorse distances with a cutoff of \(d=55\) kpc, following criteria similar to those put forward in Schmidt et al. (2020, 2022) for the LMC. This choice is supported by the distance distribution of the StarHorse sample, depicted in Figure 7. A cut at \(d=55\) kpc produces a very stringent classification, reducing the contamination by MW stars (see discussion below). As a result, we are left with a StarHorse SMC sample of 193 402 stars and a StarHorse MW sample of 806 660 stars. Note that this sample only contains stars brighter than \(G=18.5\).
The Cepheids and RR-Lyrae datasets contain objects that are highly reliably identified as SMC stars, therefore they are used to assess how complete our classification of SMC objects is (i.e., how many we lose). On the other hand, because the StarHorse classification is imperfect, this sample can be used to estimate the contamination brought on by incorrectly identified MW stars. Furthermore, the estimated amount of MW contamination in the classification will be a "worst case" scenario because of the extremely strict criteria utilized in StarHorse for the separation (cut in \(d=55\) kpc).
Table 2 compares the outcomes of our four classification criteria as they were applied to the stars in the three validation samples. The results using the Cepheids, RR-Lyrae, and StarHorse SMC validation samples reveal that the completeness of the resulting SMC classifications is excellent, typically exceeding 95%. The truncated-optimal sample is the exception, where the cut in faint stars reduces the RR-Lyrae's completeness.
On the other hand, the relative contamination by MW stars is more challenging to evaluate in our samples. We depend on an external comparison, the StarHorse distance-based classification, with the caveat that this classification includes its own errors. To do this, we recalculate the Precision-Recall curve, this time using the StarHorse classification as reference; the outcome is depicted in Figure 8. We observe that the precision stays essentially flat across the entire range of probability threshold values. This suggests that the complete and optimal samples have similar relative contamination: the more restrictive we are, the more MW stars we remove, but the more SMC stars we lose as well.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline SMC sample & \(N\) & \(N_{V_{los}}\) & \(\overline{\varpi}\) & \(\sigma_{\varpi}\) & \(\overline{\mu_{\alpha*}}\) & \(\sigma_{\mu_{\alpha*}}\) & \(\overline{\mu_{\delta}}\) & \(\sigma_{\mu_{\delta}}\) \\ \hline Proper motion selection & 1 720 856 & 4 014 & -0.0029 & 0.323 & 0.731 & 0.370 & -1.226 & 0.297 \\ NN complete & 2 172 427 & 4 195 & -0.0013 & 0.417 & 0.706 & 0.580 & -1.221 & 0.558 \\ NN optimal & 1 979 603 & 3 335 & -0.0083 & 0.381 & 0.696 & 0.485 & -1.218 & 0.463 \\ NN truncated-optimal & 1 265 824 & 3 335 & -0.0018 & 0.254 & 0.700 & 0.383 & -1.225 & 0.349 \\ \hline \end{tabular}
\end{table}
Table 1: Number of sources and mean astrometry of the SMC samples, comparing the proper motion selection (MC21) with the neural network classifications. Parallax is in mas and proper motions in mas yr\({}^{-1}\).
According to the precision values in Figure 8, using the classification based on StarHorse distances as reference, the relative contamination of our samples could be around 40%; this is a worst-case scenario because we used a very restrictive distance cut. These statistics must be interpreted carefully, because the MW-SMC separation based on StarHorse distances is not a perfect classification criterion and actually uses less data than ours. As noted in the StarHorse publication (Anders et al. 2022), many stars still have intermediate distances, falling between the Magellanic Clouds and the MW as a result of multimodal posterior distance distributions, and these populations are plainly visible as overdensities in the maps.
These findings indicate that there may be a few tens of percent of MW stars in our samples, but we can investigate further using the line-of-sight velocities in _Gaia_ DR3, which are available only for a (small) subset of the full sample. These line-of-sight velocities have distinct mean values for the MW and SMC and are not used by any of our classification criteria, therefore providing an independent check. The contamination of the SMC sample is evident from the histograms of line-of-sight velocities plotted separately for MW- and SMC-classified stars in Fig. 9, but it is most likely far lower than the values mentioned above. For instance, we estimate the MW contamination to be around 10% if we take the SMC NN complete sample and (roughly) separate the MW stars with a cut at \(V_{los}<75\) km s\({}^{-1}\). Note that this check is not entirely representative, since only stars at the bright end of the sample (\(G\lesssim 16\)) are included in the subset of _Gaia_ DR3 stars with observed line-of-sight velocities.
Finally, we made a new query to the _Gaia_ archive similar to the one described in Section 2.1, this time selecting all the sources within a \(10^{\circ}\) radius of a nearby area with uniform sky density.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Stars classified as SMC & SMC Cepheids & SMC RR-Lyrae & SMC StarHorse & MW StarHorse \\ & (4 765) & (2 922) & (193 402) & (806 664) \\ \hline Proper motion selection & 4 578 (96.1\%) & 2 447 (83.7\%) & 190 166 (98.3\%) & 114 354 (14.2\%) \\ NN complete & 4 688 (98.4\%) & 2 814 (96.3\%) & 191 692 (99.1\%) & 125 200 (15.5\%) \\ NN optimal & 4 599 (96.5\%) & 2 694 (92.2\%) & 186 063 (96.2\%) & 110 704 (13.7\%) \\ NN truncated-optimal & 4 598 (96.5\%) & 821 (28.1\%) & 186 063 (96.2\%) & 110 704 (13.7\%) \\ \hline \end{tabular}
\end{table}
Table 2: Matches of the classified SMC members in our four considered samples against the validation samples. Percentages are computed with respect to the total number of stars in each validation sample, listed beneath its name.
Figure 6: Sky density distribution in equatorial coordinates of both the SMC (left) and MW (right) sample obtained from the different classifiers. First row: proper motion selection classification. Second row: Complete NN classification. Third row: Optimal NN classification. Fourth row: Truncated-optimal NN classification. Note: in the fourth row, the magnitude cut \(G<19.5\) is applied to both the SMC and MW samples and, therefore, the total number of stars is reduced.
Figure 7: StarHorse validation sample distance distribution. In blue (orange), the StarHorse stars classified as MW (SMC) according to the \(d=55\) kpc criteria.
By doing so, we can estimate the number of MW stars expected in the region covered by our _Gaia_ base sample. This new query returned 932 332 stars, so we may anticipate a comparable number of MW stars in the area we chose around the SMC. Given that the _Gaia_ base sample contains 4 047 225 objects and the number of objects classified as SMC (Table 1) is around 1-2 million, the number of stars classified as MW is around 2-3 million; therefore, we conclude that our NN SMC samples prioritise purity over completeness, since there is an excess of 1-2 million stars classified as MW. This is also clear from the right panels of Figure 6, where the pattern of SMC contamination is visible in the distribution of stars classified as MW.
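For illustration, such a comparison-field count can be obtained with an ADQL cone query through astroquery. The field centre below is a hypothetical placeholder, and the quality cuts of Section 2.1 are not reproduced here.

```python
from astroquery.gaia import Gaia

# Hypothetical centre of a nearby comparison field with uniform sky density.
ra0, dec0 = 40.0, -73.0  # degrees (placeholder values)

query = f"""
SELECT COUNT(*) AS n_stars
FROM gaiadr3.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', {ra0}, {dec0}, 10.0))
"""
job = Gaia.launch_job_async(query)
print(job.get_results())
```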
## 5 Conclusions
In this work, we present a new SMC/MW classification method, which we compare with previous selection strategies based on proper motions. It is based on neural networks and trained using a MW+SMC simulation created by GOG. We created two SMC samples using different probability cuts \(P_{cut}\): the NN complete sample, with \(P_{cut}=0.01\), and the NN optimal sample, with \(P_{cut}=0.31\), which corresponds to the best value according to the ROC curve. In order to remove remaining contamination from incorrectly classified faint stars, we added an additional cut to this last sample at the apparent magnitude \(G<19.5\) mag, creating the NN truncated-optimal sample. Moreover, we created sub-samples that contain both proper motions and line-of-sight velocities by using the spectroscopic line-of-sight velocities newly released in _Gaia_ DR3. Finally, we successfully validated our classifier against external and independent classifications: SMC Cepheids, SMC RR Lyrae and SMC/MW StarHorse stars. In general, the estimated contamination by MW stars in each of the SMC samples is about \(10-40\%\), the "best case" corresponding to the bright stars (\(G\lesssim 16\)) that belong to the \(V_{los}\) sub-samples, and the "worst case" to the full SMC sample as determined by the very stringent criteria used for the separation in the StarHorse validation sample. A further check based on the comparison with a nearby area of uniform sky density indicates that the global contamination in our samples is probably close to the low end of this range, around 10%.
## Acknowledgements
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ (https://www.cosmos.esa.int/gaia), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. OJA acknowledges funding by l'Agència de Gestió d'Ajuts Universitaris i de Recerca (AGAUR) official doctoral program for the development of a R+D+i project under the FI-SDUR grant (2020 FISDU 00011). OJA, MRG, XL and EM acknowledge funding by the Spanish MICIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" by the "European Union" through grant RTI2018-095076-B-C21, and the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia 'María de Maeztu') through grant CEX2019-000918-M.
## References
* Anders et al. (2022) Anders, F., Khalatyan, A., Queiroz, A. B. A., et al. 2022, A&A, 658, A91
* Cioni et al. (2011) Cioni, M. R. L., Clementini, G., Girardi, L., et al. 2011, A&A, 527, A116
* Cioni et al. (2000) Cioni, M. R. L., Habing, H. J., & Israel, F. P. 2000a, A&A, 358, L9
* Cioni et al. (2000) Cioni, M. R. L., van der Marel, R. P., Loup, C., & Habing, H. J. 2000b, A&A, 359, 601
* Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021a, A&A, 649, A1
* Gaia Collaboration et al. (2018) Gaia Collaboration, Helmi, A., van Leeuwen, F., et al. 2018, A&A, 616, A12
* Gaia Collaboration et al. (2021) Gaia Collaboration, Luri, X., Chemin, L., et al. 2021b, A&A, 649, A7
* Jimenez-Arranz et al. (2022) Jiménez-Arranz, Ó., Romero-Gómez, M., Luri, X., et al. 2022, arXiv e-prints, arXiv:2210.01728
* Lindegren et al. (2021) Lindegren, L., Klioner, S. A., Hernandez, J., et al. 2021, A&A, 649, A2
* Luri et al. (2014) Luri, X., Palmer, M., Arenou, F., et al. 2014, A&A, 566, A119
* Muraveva et al. (2018) Muraveva, T., Subramanian, S., Clementini, G., et al. 2018, MNRAS, 473, 3131
* Nidever et al. (2017) Nidever, D. L., Olson, K., Walker, A. R., et al. 2017, AJ, 154, 199
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825
* Ripepi et al. (2017) Ripepi, V., Cioni, M.-R. L., Moretti, M. I., et al. 2017, MNRAS, 472, 808
* Schmidt et al. (2020) Schmidt, T., Cioni, M.-R. L., Niederhofer, F., et al. 2020, A&A, 641, A134
* Schmidt et al. (2022) Schmidt, T., Cioni, M.-R. L., Niederhofer, F., et al. 2022, arXiv e-prints, arXiv:2201.10018
Figure 8: Evaluation metrics for the Neural Network classifier performance using the StarHorse sample. Left: ROC curve. Black dot is in the “elbow” of the ROC curve and it shows the best balance between completeness and purity. Right: Precision-Recall curve. In both cases, we compare our model (orange solid curve) with a classifier that has no class separation capacity (blue dashed curve). |
2305.18370 | Explainable Brain Age Prediction using coVariance Neural Networks | In computational neuroscience, there has been an increased interest in
developing machine learning algorithms that leverage brain imaging data to
provide estimates of "brain age" for an individual. Importantly, the
discordance between brain age and chronological age (referred to as "brain age
gap") can capture accelerated aging due to adverse health conditions and
therefore, can reflect increased vulnerability towards neurological disease or
cognitive impairments. However, widespread adoption of brain age for clinical
decision support has been hindered due to lack of transparency and
methodological justifications in most existing brain age prediction algorithms.
In this paper, we leverage coVariance neural networks (VNN) to propose an
explanation-driven and anatomically interpretable framework for brain age
prediction using cortical thickness features. Specifically, our brain age
prediction framework extends beyond the coarse metric of brain age gap in
Alzheimer's disease (AD) and we make two important observations: (i) VNNs can
assign anatomical interpretability to elevated brain age gap in AD by
identifying contributing brain regions, (ii) the interpretability offered by
VNNs is contingent on their ability to exploit specific eigenvectors of the
anatomical covariance matrix. Together, these observations facilitate an
explainable and anatomically interpretable perspective to the task of brain age
prediction. | Saurabh Sihag, Gonzalo Mateos, Corey McMillan, Alejandro Ribeiro | 2023-05-27T22:28:25Z | http://arxiv.org/abs/2305.18370v3 | # Explainable Brain Age Prediction using coVariance Neural Networks
###### Abstract
In computational neuroscience, there has been an increased interest in developing machine learning algorithms that leverage brain imaging data to provide estimates of "brain age" for an individual. Importantly, the discordance between brain age and chronological age (referred to as "brain age gap") can capture accelerated aging due to adverse health conditions and therefore, can reflect increased vulnerability towards neurological disease or cognitive impairments. However, widespread adoption of brain age for clinical decision support has been hindered due to lack of transparency and methodological justifications in most existing brain age prediction algorithms. In this paper, we leverage coVariance neural networks (VNN) to propose an anatomically interpretable framework for brain age prediction using cortical thickness features. Specifically, our brain age prediction framework extends beyond the coarse metric of brain age gap in Alzheimer's disease (AD) and we make two important observations: (i) VNNs can assign anatomical interpretability to elevated brain age gap in AD by identifying contributing brain regions, (ii) the interpretability offered by VNNs is contingent on their ability to exploit specific eigenvectors of the anatomical covariance matrix. Together, these observations facilitate an explainable perspective to the task of brain age prediction.
## 1 Introduction
Aging is characterized by progressive changes in the anatomy and function of the brain [1] that can be captured by different modalities of neuroimaging [2; 3]. Importantly, individuals can age at variable rates, a phenomenon described as "biological aging" [4]. Numerous existing studies based on a large spectrum of machine learning approaches study brain-predicted biological age, also referred to as brain age, which is derived from neuroimaging data [5; 6; 7; 8; 9; 10; 11; 12]. Accelerated aging, i.e., when biological age is elevated as compared to chronological age (time since birth), may predict age-related vulnerabilities like risk for cognitive decline or neurological conditions like Alzheimer's disease (AD) [13; 14]. In this domain, the metric of interest is _brain age gap_, i.e., the difference between brain age and chronological age. We use the notation \(\Delta\)-Age to refer to the brain age gap.
Inferring \(\Delta\)-Age from neuroimaging data presents a unique statistical challenge, as it is essentially a qualitative metric with no ground truth that is expected to be elevated in individuals with an underlying neurodegenerative condition as compared to the healthy population [15; 12]. The existing machine learning approaches often rely on models that have been trained to predict chronological age for a healthy population and are then applied to a cohort with a neurodegenerative condition. A lay overview of the procedure for inferring \(\Delta\)-Age is included in Appendix B. Several criticisms of brain age
evaluation approaches have been identified that stem from the lack of transparency in the machine learning models. Major criticisms include the coarseness of \(\Delta\)-Age, which results in a lack of specificity about the brain regions contributing to the elevated brain age, and an unexplained reliance on the prediction accuracy for chronological age in the design of these machine learning models [5]. A significant portion of the existing literature that studies brain age using deep learning models treats the ability of the models to accurately predict chronological age for healthy controls [16; 17; 18] as a relevant metric for assessing the quality of the methodological approach. At the same time, deep learning models with a relatively moderate fit on the chronological age of healthy controls can provide better insights into brain age than the ones with a tighter fit [17; 19]. Thus, there is a lack of conceptual clarity about the role that training to predict the chronological age of healthy controls plays in predicting a meaningful \(\Delta\)-Age [20].
In this paper, we propose a principled framework for brain age prediction using cortical thickness data that accommodates interpretability by isolating the contributing brain regions to elevated \(\Delta\)-Age in neurodegeneration. Our framework is based on the recently studied coVariance neural networks (VNNs) [21]. VNN is a graph convolutional network (GCN) with sample covariance matrix as the graph. The foundational analyses of VNNs in [21] showed that VNNs draw similarities with principal component analysis (PCA) while overcoming its significant drawbacks concerning reproducibility. Cortical thickness evolves with normal aging [22] and is impacted due to neurodegeneration [23; 24]. Thus, the age-related and disease severity related variations also appear in anatomical covariance matrices evaluated from the correlation among the cortical thickness measures across a population [25]. We focus our analysis on open access OASIS-3 dataset consisting of cortical thickness features from cognitively normal individuals and subjects in various stages of cognitive decline [26]. The utility of VNNs in predicting \(\Delta\)-Age has been explored previously in [27] but no insights were provided regarding their explainability. See Appendix A for other relevant studies.
**Contributions.** Our contributions in this paper are summarized as follows.
1. **VNNs provide anatomically interpretable \(\Delta\)-Age:**\(\Delta\)-Age in individuals with AD diagnosis was elevated as compared to healthy controls and significantly correlated with a clinical marker of dementia severity. Moreover, we could identify contributing brain regions to elevated \(\Delta\)-Age by analyses of the outputs at the final layer of the VNNs. Hence, by exploiting the VNN architecture, we could characterize \(\Delta\)-Age with anatomical interpretability (Fig. 2).
2. **Anatomical interpretability correlate with eigenvectors of the anatomical covariance matrix:** Our experiments demonstrated that certain eigenvectors of the anatomical covariance matrix were strongly correlated with the features that facilitated anatomical interpretability for \(\Delta\)-Age (Fig. 3). Thus, \(\Delta\)-Age was linked to the ability of VNNs to exploit specific eigenvectors of the anatomical covariance matrix.
The aforementioned findings also helped clarify the role of the preliminary step of training the VNNs to predict chronological age of a healthy population in \(\Delta\)-Age prediction. Specifically, this step equipped the VNNs with the ability to exploit the eigenvectors of the anatomical covariance matrix associated with elevated \(\Delta\)-Age or accelerated aging in AD-driven neurodegeneration.
## 2 coVariance Neural Networks
We begin with a brief introduction to VNNs. VNNs inherit the architecture of GCNs and operate on sample covariance matrix as the graph [21]. A dataset consisting of \(n\) random, independent and identically distributed (i.i.d) samples, given by \(\mathbf{x}_{i}\in\mathbb{R}^{m\times 1},\forall i\in\{1,\ldots,n\}\), can be represented in matrix form as \(\mathbf{X}_{n}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]\). Using \(\mathbf{X}_{n}\), the sample covariance matrix is estimated as
\[\mathbf{C}\triangleq\frac{1}{n-1}\sum_{i=1}^{n}(\mathbf{x}_{i}-\bar{\mathbf{x }})(\mathbf{x}_{i}-\bar{\mathbf{x}})^{\mathsf{T}}\;, \tag{1}\]
where \(\bar{\mathbf{x}}\) is the sample mean of samples in \(\mathbf{X}_{n}\). The covariance matrix \(\mathbf{C}\) can be viewed as the adjacency matrix of a graph representing the stochastic structure of the dataset \(\mathbf{X}_{n}\), where the \(m\) dimensions of the data can be thought of as the nodes of an \(m\)-node, undirected graph and its edges represent the pairwise covariances between different dimensions.
### Architecture
Similar to GCNs that rely on convolution operations modeled by _linear-shift-and-sum_ operators [28; 29], the convolution operation in a VNN is modeled by a coVariance filter, given by
\[\mathbf{H}(\mathbf{C})\triangleq\sum_{k=0}^{K}h_{k}\mathbf{C}^{k}\, \tag{2}\]
where scalar parameters \(\{h_{k}\}_{k=0}^{K}\) are referred to as filter taps. The application of coVariance filter \(\mathbf{H}(\mathbf{C})\) on an input \(\mathbf{x}\) translates to combining information across different sized neighborhoods. For \(K>1\), the convolution operation combines information across multi-hop neighborhoods (up to \(K\)-hop) according to the weights \(h_{k}\) to form the output \(\mathbf{z}=\mathbf{H}(\mathbf{C})\mathbf{x}\).
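A minimal numpy sketch of the coVariance filter in (2), accumulating the \(k\)-hop terms iteratively instead of forming matrix powers explicitly; the data, taps, and dimensions below are purely illustrative.

```python
import numpy as np

def covariance_filter(x, C, h):
    """Apply H(C) x = sum_k h[k] C^k x, accumulating z_k = C @ z_{k-1}
    instead of forming matrix powers explicitly."""
    z = x.copy()
    out = h[0] * x
    for hk in h[1:]:
        z = C @ z              # one more hop over the covariance graph
        out = out + hk * z
    return out

# Example with K = 2 (three filter taps) on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # n = 100 samples, m = 5 features
C = np.cov(X, rowvar=False)        # sample covariance, as in Eq. (1)
z = covariance_filter(X[0], C, h=np.array([0.5, 0.3, 0.2]))
```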
A single layer of VNN is formed by passing the output of the coVariance filter through a non-linear activation function \(\sigma(\cdot)\) (e.g., \(\mathsf{ReLU},\tanh\)) that satisfies \(\sigma(\mathbf{u})=[\sigma(u_{1}),\ldots,\sigma(u_{m})]\) for \(\mathbf{u}=[u_{1},\ldots,u_{m}]\). Hence, the output of a single layer VNN with input \(\mathbf{x}\) is given by \(\mathbf{z}=\sigma(\mathbf{H}(\mathbf{C})\mathbf{x})\). The construction of a multi-layer VNN is formalized next.
**Remark 1** (Multi-layer VNN).: _For an \(L\)-layer VNN, denote the coVariance filter in layer \(\ell\) of the VNN by \(\mathbf{H}_{\ell}(\mathbf{C})\) and its corresponding set of filter taps by \(\mathcal{H}_{\ell}\). Given a pointwise nonlinear activation function \(\sigma(\cdot)\), the relationship between the input \(\mathbf{x}_{\ell-1}\) and the output \(\mathbf{x}_{\ell}\) for the \(\ell\)-th layer is_
\[\mathbf{x}_{\ell}=\sigma(\mathbf{H}_{\ell}(\mathbf{C})\mathbf{x}_{\ell-1}) \quad\text{ for }\quad\ell\in\{1,\ldots,L\}\, \tag{3}\]
_where \(\mathbf{x}_{0}\) is the input \(\mathbf{x}\)._
Furthermore, similar to other deep learning models, sufficient expressive power can be facilitated in the VNN architecture by incorporating multiple input multiple output (MIMO) processing at every layer. Formally, consider a VNN layer \(\ell\) that can process \(F_{\ell-1}\) number of \(m\)-dimensional inputs and outputs \(F_{\ell}\) number of \(m\)-dimensional outputs via \(F_{\ell-1}\times F_{\ell}\) number of filter banks [30]. In this scenario, the input is specified as \(\mathbf{X}_{\text{in}}=[\mathbf{x}_{\text{in}}[1],\ldots,\mathbf{x}_{\text{in }}[F_{\text{in}}]]\), and the output is specified as \(\mathbf{X}_{\text{out}}=[\mathbf{x}_{\text{out}}[1],\ldots,\mathbf{x}_{\text{ out}}[F_{\text{out}}]]\). The relationship between the \(f\)-th output \(\mathbf{x}_{\text{out}}[f]\) and the input \(\mathbf{x}_{\text{in}}\) is given by \(\mathbf{x}_{\text{out}}[f]=\sigma\Big{(}\sum_{g=1}^{F_{\text{in}}}\mathbf{H}_{ fg}(\mathbf{C})\mathbf{x}_{\text{in}}[g]\Big{)}\), where \(\mathbf{H}_{fg}(\mathbf{C})\) is the coVariance filter that processes \(\mathbf{x}_{\text{in}}[g]\). Without loss of generality, we assume that \(F_{\ell}=F,\forall\ell\in\{1,\ldots,L\}\). In this case, the set of all filter taps is given by \(\mathcal{H}=\{\mathcal{H}^{\ell}_{fg}\},\forall f,g\in\{1,\ldots,F\},\ell\in\{ 1,\ldots,L\}\), where \(\mathcal{H}_{fg}=\{h^{\ell}_{fg}[k]\}_{k=0}^{K}\) and \(h^{\ell}_{fg}[k]\) is the \(k\)-th filter tap for filter \(\mathbf{H}_{fg}(\mathbf{C})\). Thus, we can compactly represent a multi-layer VNN architecture capable of MIMO processing via the notation \(\Phi(\mathbf{x};\mathbf{C},\mathcal{H})\), where the set of filter taps \(\mathcal{H}\) captures the full span of its architecture. We also use the notation \(\Phi(\mathbf{x};\mathbf{C},\mathcal{H})\) to denote the output at the final layer of the VNN. Various aspects of the VNN architecture are illustrated in Fig. 6 in Appendix C. The VNN final layer output \(\Phi(\mathbf{x};\mathbf{C},\mathcal{H})\) is succeeded by a readout function that maps it to the desired inference outcome.
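Reusing `covariance_filter` from the sketch above, a single VNN layer with a MIMO filter bank can be written as follows; the loop-based form is for clarity rather than efficiency, and ReLU is one of the admissible pointwise nonlinearities.

```python
import numpy as np

def vnn_layer(X_in, C, H):
    """One VNN layer with a MIMO filter bank.
    X_in: (m, F_in) input; H: (F_in, F_out, K+1) filter taps.
    Returns the (m, F_out) output after a pointwise ReLU."""
    F_in, F_out, _ = H.shape
    X_out = np.zeros((X_in.shape[0], F_out))
    for f in range(F_out):
        for g in range(F_in):
            X_out[:, f] += covariance_filter(X_in[:, g], C, H[g, f])
    return np.maximum(X_out, 0.0)  # ReLU nonlinearity
```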
### Properties of VNNs
The foundational analyses of VNNs in [21] established the following properties relevant to the data analysis in this paper.
**VNNs and PCA.** Given the eigendecomposition of \(\mathbf{C}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\mathsf{T}}\), the spectral properties of the coVariance matrix are established by studying the projection of the coVariance filter output \(\mathbf{z}=\mathbf{H}(\mathbf{C})\mathbf{x}\) on the eigenvectors \(\mathbf{V}\) (similar to that for a graph filter using graph Fourier transform [31; 32]). Theorem 1 in [21] establishes the equivalence between processing data samples with PCA and processing data samples with a specific polynomial on the covariance matrix \(\mathbf{C}\). Hence, it can be concluded that input data is processed with VNNs, at least in part, by exploiting the eigenvectors of \(\mathbf{C}\).
**Stability to perturbations in C.** VNNs are stable to perturbations in \(\mathbf{C}\)[21; Theorem 3]. This property implies that the inference performance achieved by VNN is likely to be reproducible, and hence, robust to the number of samples \(n\) used to estimate the covariance matrix \(\mathbf{C}\).
**VNNs are scale-free.** The filter taps in the coVariance filter in (2) are independent of the dimension of \(\mathbf{C}\). Hence, the covariance matrix \(\mathbf{C}\) can readily be replaced with another covariance matrix \(\mathbf{C}^{\prime}\) (not necessarily of the same dimensionality as \(\mathbf{C}\)) to process another dataset. By extension, the set of filter taps \(\mathcal{H}\) for a VNN is scale-free and can be used to process a dataset of arbitrary dimensionality. This
observation is particularly relevant for data analysis in neuroimaging, where the number of features is highly variable across datasets, but different datasets contain similar information [33].
We leverage the aforementioned properties of VNNs in the context of brain age to demonstrate the relationships of the inference outcomes with the eigenvectors of \(\mathbf{C}\), robustness of the results to the number of samples used to estimate \(\mathbf{C}\), and cross-validate various findings by leveraging datasets of different dimensionalities.
### VNN Learning
The VNN model is trained for a regression task using a dataset of \(m\) cortical thickness features and the chronological age for a population. Since the VNN architecture has \(F\) number of \(m\)-dimensional outputs in the final layer, \(\Phi(\mathbf{x};\mathbf{C},\mathcal{H})\) is of dimensionality \(m\times F\). The regression output is determined by a readout layer, which evaluates an unweighted mean of all outputs at the final layer of VNN. Therefore, the regression output for an individual with cortical thickness \(\mathbf{x}\) is given by
\[\hat{y}=\frac{1}{Fm}\sum_{j=1}^{m}\sum_{f=1}^{F}[\Phi(\mathbf{x};\mathbf{C}, \mathcal{H})]_{jf}\;. \tag{4}\]
Prediction using unweighted mean at the output implies that the VNN model exhibits permutation-invariance (i.e., the final output is independent of the permutation of the input features and covariance matrix) and VNN retains its scale-free property. Moreover, it allows us to associate a scalar output with each brain region among the \(m\) regions at the final layer. Specifically, we have
\[\mathbf{p}=\frac{1}{F}\sum_{f=1}^{F}[\Phi(\mathbf{x};\mathbf{C},\mathcal{H})] _{f}\;, \tag{5}\]
where \(\mathbf{p}\) is the vector denoting the mean of filter outputs in the final layer's filter bank. Note that the mean of all elements in \(\mathbf{p}\) is the prediction \(\hat{y}\) formed in (4). In the context of cortical thickness datasets, each element of \(\mathbf{p}\) can be associated with a distinct brain region. Therefore, \(\mathbf{p}\) is a vector of "regional contributions" to the output \(\hat{y}\) by the VNN. This observation will be instrumental to establishing the interpretability offered by VNNs in the context of \(\Delta\)-Age prediction in Section 3. For a regression task, the training dataset \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) (where \(\mathbf{x}_{i}\) are the cortical thickness data for an individual \(i\) with chronological age \(y_{i}\)) is leveraged to learn the filter taps in \(\mathcal{H}\) for the VNN \(\Phi(\cdot;\mathbf{C},\mathcal{H})\) such that they minimize the training loss, i.e.,
\[\mathcal{H}_{\text{opt}}=\min_{\mathcal{H}}\frac{1}{n}\sum_{i=1}^{n}\ell(\hat {y}_{i},y_{i})\;, \tag{6}\]
where \(\hat{y}_{i}\) is evaluated similarly to (4) and \(\ell(\cdot)\) is the mean-squared error (MSE) loss function.
## 3 Methods Overview for Brain Age Prediction
In this section, we provide an overview of the brain age prediction framework based on VNNs (see Fig. 1 for a summary). Our results primarily focus on the dataset described below.
**OASIS-3 Dataset**. This dataset was derived from publicly available freesurfer estimates of cortical thickness (derived from MRI images collected by 3 Tesla scanners and hosted on central.xnat.org),
Figure 1: Workflow for brain age and \(\Delta\)-Age prediction using VNNs.
as previously reported [26], and comprised of cognitively normal individuals (HC; \(n=611\), age = \(68.38\pm 7.62\) years, \(351\) females) and individuals with an AD dementia diagnosis at various stages of cognitive decline (\(n=194\), age = \(74.72\pm 7.02\) years, \(94\) females). The cortical thickness features were curated according to the Destrieux (DKT) atlas [34] (consisting of \(m=148\) cortical regions). For clarity of exposition of the brain age prediction method, any dementia staging to subdivide the group of individuals with AD dementia diagnosis into mild cognitive impairment (MCI) or AD was not performed, and we use the label AD+ to refer to this group. The individuals in the AD+ group were significantly older than those in the HC group (t-test: \(p\)-value = \(2.46\times 10^{-23}\)). The boxplots for the distributions of chronological age for HC and AD+ groups are included in Fig. 14 in Appendix H. For \(191\) individuals in the AD+ group, the clinical dementia rating (CDR) sum of boxes scores evaluated within one year (365 days) of the MRI scan were available (CDR sum of boxes = \(3.45\pm 1.74\)). CDR sum of boxes scores are commonly used in clinical settings to stage dementia severity [35] and were evaluated according to [36].
The findings obtained via the analyses of the OASIS-3 dataset were validated using a smaller, independent dataset (described in Appendix G).
### Training the VNNs on HC group
We first train the VNN model to predict chronological age using the cortical thickness features of the HC group. This enables the VNN models to capture information about healthy aging from the cortical thickness and the associated anatomical covariance matrix. The hyperparameters of the VNN architecture and the learning rate of the optimizer were chosen via a hyperparameter search using the package Optuna [37]. The VNN model had \(L=2\) layers with filter banks such that \(F=5\), with \(6\) filter taps in the first layer and \(10\) filter taps in the second layer. The learning rate for the Adam optimizer was set to \(0.059\). The number of learnable parameters for this VNN was \(290\). The HC group was split into a \(90/10\) training/test split, and the covariance matrix was set to be the anatomical covariance evaluated from the training set. A part of the training set was used as a validation set and the rest for training the VNN model. We trained a set of 100 VNNs on different permutations of the training data. The training process was similar for all VNNs and is described in Appendix D. No further training was performed for the VNN models in the subsequent analyses.
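A minimal Optuna sketch of such a hyperparameter search; the search ranges and the helper `train_vnn_and_validate` are hypothetical, since the exact search space used here is not stated.

```python
import optuna

def objective(trial):
    # Hypothetical search space; the exact ranges are not stated in the text.
    lr = trial.suggest_float("lr", 1e-3, 1e-1, log=True)
    num_filters = trial.suggest_int("filters", 2, 10)
    taps_l1 = trial.suggest_int("taps_layer1", 2, 12)
    taps_l2 = trial.suggest_int("taps_layer2", 2, 12)
    # train_vnn_and_validate is an assumed helper that trains a VNN with
    # these hyperparameters and returns the validation MSE.
    return train_vnn_and_validate(lr, num_filters, taps_l1, taps_l2)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```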
### Analyses of regional residuals in AD+ and HC groups
Next, the VNN models trained to predict the chronological age for the HC group and (5) were adopted to study the effect of neurodegeneration on brain regions for AD+ group. Since the impact of neurodegeneration is expected to appear in the anatomical covariance matrix, we report the results when anatomical covariance \(\mathbf{C}_{\text{HA}}\) from the combined cortical thickness data of HC and AD+ groups was deployed in the trained VNN models. Because of the stability property of VNNs [21, Theorem 3], the inference drawn from VNNs is expected to be stable to the composition of combined HC and AD+ groups used to estimate the anatomical covariance matrix \(\mathbf{C}_{\text{HA}}\).
For every individual in the combined dataset of HC and AD+ groups, we processed their cortical thickness data \(\mathbf{x}\) through the model \(\Phi(\mathbf{x};\mathbf{C}_{\text{HA}},\mathcal{H})\), where the parameters \(\mathcal{H}\) were learned in the regression task on the data from the HC group as described previously. Hence, the vector of means of all final layer outputs for cortical thickness input \(\mathbf{x}\) is given by \(\mathbf{p}=\frac{1}{F}\sum_{f=1}^{F}[\Phi(\mathbf{x};\mathbf{C}_{\text{HA}},\mathcal{H})]_{f}\) and the VNN output is \(\hat{y}=\frac{1}{148}\sum_{j=1}^{148}[\mathbf{p}]_{j}\). Furthermore, we define the residual for feature \(a\) (or the brain region represented by feature \(a\)) as
\[[\mathbf{r}]_{a}\triangleq[\mathbf{p}]_{a}-\hat{y}. \tag{7}\]
Thus, (7) allows us to characterize the residuals with respect to the VNN output \(\hat{y}\) at the regional level, where brain regions are indexed by \(a\). Henceforth, we refer to the residuals (7) as "regional residuals". Recall that these are evaluated for an individual with cortical thickness data \(\mathbf{x}\).
In our experiments, for a given VNN model, the residual vector \(\mathbf{r}\) was evaluated for every individual in the OASIS-3 dataset. Also, the population of residual vectors for the HC group is denoted by \(\mathbf{r}_{\text{HC}}\), and that for individuals in the AD+ group by \(\mathbf{r}_{\text{AD+}}\). The length of the residual vectors is the same as the number of cortical thickness features (i.e., \(m=148\)). Since each element of the residual vector is associated with a distinct brain region, ANOVA was used to test for group differences between
individuals in HC and AD+ groups. Also, since elevation in \(\Delta\)-Age is the biomarker of interest in this analysis, we hypothesized that the brain regions that exhibited higher means for regional residuals for AD+ group than HC group would be the most relevant to capturing accelerated aging. Hence, the results are reported only for brain regions that showed elevated regional residual distribution in AD+ group with respect to HC group. Further, the group difference between AD+ and HC groups in the residual vector element for a brain region was deemed significant if it met the following criteria: i) the corrected \(p\)-value (Bonferroni correction) for the clinical diagnosis label in the ANOVA model was smaller than \(0.05\); and ii) the uncorrected \(p\)-value for clinical diagnosis label in ANCOVA model with age and sex as covariates was smaller than \(0.05\). See Appendix F for an example of this analysis.
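A minimal sketch of the regional-residual computation in (7) and the per-region group comparison; the ANCOVA step with age and sex as covariates is omitted here, and the residual arrays `r_hc` and `r_ad` are assumed to be precomputed for the two groups.

```python
from scipy import stats

def regional_residuals(Phi):
    """Eq. (7): residual of each region's contribution w.r.t. the VNN output."""
    p = Phi.mean(axis=1)
    return p - p.mean()

# r_hc: (n_hc, m) and r_ad: (n_ad, m) arrays of regional residuals (assumed).
m = r_hc.shape[1]
for a in range(m):
    _, p_val = stats.f_oneway(r_ad[:, a], r_hc[:, a])
    elevated = r_ad[:, a].mean() > r_hc[:, a].mean()
    if elevated and p_val * m < 0.05:  # Bonferroni correction over m regions
        print(f"region {a}: significantly elevated in AD+")
```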
Recall that 100 distinct VNNs were trained as regression models on different permutations of the training set of cortical thickness features from HC group. We used these trained models to establish the robustness of observed group differences in the distributions of regional residuals.
**Deriving regional profile for \(\Delta\)-Age from the robustness of findings from regional analyses.** We performed the regional analysis described above corresponding to each trained VNN model and tabulated the number of VNN models for which a brain region was deemed to be associated with a significantly elevated regional residual for the AD+ group. A higher number of VNN models isolating a brain region as significant suggested that this region was likely to be a highly robust contributor to accelerated aging in the AD+ group.
### Individual-level Brain Age Prediction
Finally, a scalar estimate for the brain age was obtained from the VNN regression output through a procedure consistent with the existing studies in this domain. Note that \(100\) VNNs provide \(100\) estimates \(\hat{y}\) of the chronological age for each subject. For simplicity, we consider \(\hat{y}\) to be the mean of these estimates. A systematic bias in the gap between \(\hat{y}\) and \(y\) may potentially exist when the correlation between \(\hat{y}\) and \(y\) is smaller than 1. Such a bias can confound the interpretations of brain age due to underestimation for older individuals and overestimation for younger individuals [38]. Therefore, to correct for this age-induced bias in \(\hat{y}-y\), we adopted a linear regression model-based approach [38; 39]. Specifically, the following bias correction steps were applied to the VNN estimated age \(\hat{y}\) to obtain the brain age \(\hat{y}_{\text{B}}\) for an individual with chronological age \(y\):
**Step 1.** Fit a regression model for the HC group to determine scalars \(\alpha\) and \(\beta\) in the following model:
\[\hat{y}-y=\alpha y+\beta. \tag{8}\]
**Step 2.** Evaluate brain age \(\hat{y}_{\text{B}}\) as follows:
\[\hat{y}_{\text{B}}=\hat{y}-(\alpha y+\beta). \tag{9}\]
The gap between \(\hat{y}_{\text{B}}\) and \(y\) is the \(\Delta\)-Age and is defined below. For an individual with cortical thickness \(\mathbf{x}\) and chronological age \(y\), the brain age gap \(\Delta\)-Age is formally defined as
\[\Delta\text{-Age}\triangleq\hat{y}_{\text{B}}-y\, \tag{10}\]
where \(\hat{y}_{\text{B}}\) is determined from the VNN age estimate \(\hat{y}\) and \(y\) according to steps in (8) and (9). The age-bias correction in (8) and (9) was performed only for the HC group to account for bias in the VNN estimates due to healthy aging, and then applied to the AD+ group. Further, the distributions of \(\Delta\)-Age were obtained for all individuals in HC and AD+ groups.
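A minimal sketch of the age-bias correction in (8)-(10), assuming 1-D arrays of VNN estimates and chronological ages for each group.

```python
import numpy as np

# y_hat_hc, y_hc: VNN age estimates and chronological ages for the HC group;
# y_hat_ad, y_ad: the same for the AD+ group (assumed inputs).
alpha, beta = np.polyfit(y_hc, y_hat_hc - y_hc, deg=1)  # Step 1, HC group only

def delta_age(y_hat, y):
    """Step 2 and Eq. (10): bias-corrected brain age minus chronological age."""
    y_brain = y_hat - (alpha * y + beta)
    return y_brain - y

gap_hc = delta_age(y_hat_hc, y_hc)
gap_ad = delta_age(y_hat_ad, y_ad)  # correction fitted on HC, applied to AD+
```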
\(\Delta\)-Age for AD+ group was expected to be elevated as compared to HC group as a consequence of elevated regional residuals derived from the VNN model. To elucidate this, consider a toy example with two individuals of the same chronological age \(y\), with one belonging to the AD+ group and another to the HC group. Equation (9) suggests that their corresponding VNN outputs (denoted by \(\hat{y}_{\text{AD+}}\) for individual in the AD+ group and \(\hat{y}_{\text{HC}}\) for individual in the HC group) are corrected for age-bias using the same term \(\alpha y+\beta\). Hence, \(\Delta\)-Age for the individual in the AD+ group will be elevated with respect to that from the HC group only if the VNN prediction \(\hat{y}_{\text{AD+}}\) is elevated with respect to \(\hat{y}_{\text{HC}}\). Since the VNN predictions \(\hat{y}_{\text{AD+}}\) and \(\hat{y}_{\text{HC}}\) are proportional to the unweighted aggregations of the estimates at the regional level [see (4) and (5)], larger \(\hat{y}_{\text{AD+}}\) with respect to \(\hat{y}_{\text{HC}}\) can be a direct consequence of a subset of regional residuals [see (7)] being robustly elevated in AD+ group with respect to HC group across the \(100\) VNN models. When the individuals in this example have different chronological age, the age-bias correction is expected to remove any variance due to chronological age in \(\Delta\)-Age. We also verified that the differences in \(\Delta\)-Age for AD+ and HC group were not driven by age or sex differences via ANCOVA with age and sex as covariates.
## 4 Results
### Chronological age prediction for the HC group
The performance achieved by the VNNs on the chronological age prediction task for the HC group has been summarized over the \(100\) nominal VNN models. VNNs achieved a mean absolute error (MAE) of \(5.82\pm 0.13\) years and Pearson's correlation of \(0.51\pm 0.078\) between the chronological age estimates and the ground truth on the test set. Moreover, on the complete dataset, the MAE was \(5.44\pm 0.18\) years and Pearson's correlation was \(0.47\pm 0.074\). Thus, the trained VNNs were not overfit on the training set.
Next, for every individual in the HC group, we evaluated the mean of the inner products (equivalently, dot products) between the vectors of contributions of every brain region [\(\mathbf{p}\) in (5)] and the eigenvectors of the anatomical covariance matrix across all \(100\) VNN models. The strongest alignment was with the first eigenvector of the anatomical covariance matrix (i.e., the eigenvector associated with the largest eigenvalue; \(0.987\pm 0.0005\) across the HC group), with relatively smaller associations for the second (\(0.051\pm 0.003\)), third (\(0.075\pm 0.004\)), and fourth (\(0.094\pm 0.003\)) eigenvectors. Additional details are included in Appendix E. Thus, the VNNs exploited the eigenvectors of the anatomical covariance matrix to achieve the learning performance in this task. The first eigenvector of the anatomical covariance matrix predominantly included bilateral anatomic brain regions in the parahippocampal gyrus, precuneus, inferior medial temporal gyrus, and precentral gyrus. These findings were cross-validated on the HC group by the VNNs that had been trained on a different dataset (Appendix G).
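A minimal sketch of this eigenvector-alignment analysis, assuming the anatomical covariance matrix `C` and a unit-normalized contribution vector derived from \(\mathbf{p}\) in (5).

```python
import numpy as np

# C: anatomical covariance matrix; p: per-region contribution vector (Eq. (5)).
eigvals, V = np.linalg.eigh(C)
V = V[:, ::-1]                      # reorder eigenvectors by decreasing eigenvalue
p_unit = p / np.linalg.norm(p)
alignment = np.abs(V.T @ p_unit)    # |<p, v_i>| for each eigenvector v_i
print("alignment with the first four eigenvectors:", alignment[:4])
```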
We remark that several existing studies on brain age prediction have utilized deep learning and other approaches to report better MAE on their respective healthy populations [16; 17; 18; 40]. In contrast, our contribution here is conceptual: we have explored the properties of VNNs when they are trained to predict the chronological age of the HC group. Our primary focus in the context of brain age is on demonstrating the anatomical interpretability offered by VNNs and the relevance of the eigenvectors of the anatomical covariance matrix. Thus, we provide insights that have not been explored (or are infeasible to obtain) with most existing brain age evaluation frameworks based on less transparent deep learning models.
### Analyses of regional residuals derived from VNNs reveal regions characteristic of AD
Figure 2a displays the robustness (determined via analyses of 100 VNN models) of various brain regions being associated with significantly larger residual elements for the AD+ group than for the HC group. The most significant regions with elevated regional residuals in AD+ with respect to HC were concentrated in bilateral inferior parietal, temporal, entorhinal, parahippocampal, subcallosal, and precentral regions. All these brain regions, except for the precentral and subcallosal ones, mirrored the cortical atrophy (Fig. 15 in the supplementary material), and these regions are known to be highly interconnected with the hippocampus [41]. Hence, brain regions characteristic of AD had significant differences in regional residual distributions for the AD+ group as compared to the HC group. Similar findings were recovered by the analysis of the regional residuals derived from the VNNs that were trained on an independent dataset (Appendix G).
Although the results in Fig. 2a provided a meaningful regional profile for the AD+ group, we further performed an exploratory analysis to check whether the regional residuals had any clinical significance. To this end, we evaluated the Pearson's correlations between the CDR sum of boxes and the regional residuals derived from final layer VNN outputs for the AD+ group for all 100 VNN models. Interestingly, the brain regions with the largest correlations with the CDR sum of boxes scores in Fig. 2c were concentrated in the parahippocampal, medial temporal lobe, and temporal pole regions (also isolated in Fig. 2a). This observation provides evidence that the regional residuals for the AD+ group that led to the result in Fig. 2a could predict dementia severity.
### \(\Delta\)-Age is elevated in AD+ group and correlated with CDR
We evaluated the \(\Delta\)-Age for HC and AD+ groups according to the procedure specified in Section 3.3. We also investigated the Pearson's correlation between \(\Delta\)-Age and CDR sum of boxes scores in the AD+ group. Figure 2b illustrates the distributions of \(\Delta\)-Age for the HC and AD+ groups (\(\Delta\)-Age for HC: \(0\pm 2.83\) years, \(\Delta\)-Age for AD+: \(3.54\pm 4.49\) years). The difference in \(\Delta\)-Age for AD+ and
HC groups was significant (Cohen's \(d\) = 0.942, ANCOVA with age and sex as covariates: \(p\)-value \(<10^{-20}\), partial \(\eta^{2}=0.158\)). Also, age and sex were not significant in ANCOVA (\(p\)-value \(>0.4\) for both). Hence, the group difference in \(\Delta\)-Age for the two groups was not driven by the difference in the distributions of their chronological age. Figure 2d plots CDR sum of boxes scores versus \(\Delta\)-Age for the AD+ group. Pearson's correlation between CDR sum of boxes score and \(\Delta\)-Age was \(0.474\) (\(p\)-value \(=2.88\times 10^{-12}\)), thus, implying that the \(\Delta\)-Age evaluated for AD+ group captured information about dementia severity. Hence, as expected, the \(\Delta\)-Age for AD+ was likely to be larger with an increase in CDR sum of boxes scores. For instance, the mean \(\Delta\)-Age for individuals with CDR sum of boxes greater than 4 was \(6.04\) years, and for CDR sum of boxes \(\leq\) 4 was \(2.42\) years.
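The reported effect size and covariate-adjusted group test can be reproduced with standard tooling. A sketch assuming a dataframe with `delta_age`, `group`, `age`, and `sex` columns (column names are illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.formula.api import ols

def group_statistics(df):
    """df columns: 'delta_age', 'group' ('HC'/'AD+'), 'age', 'sex'."""
    hc = df.loc[df.group == 'HC', 'delta_age']
    ad = df.loc[df.group == 'AD+', 'delta_age']

    # Cohen's d with pooled standard deviation
    pooled = np.sqrt(((len(hc) - 1) * hc.var() + (len(ad) - 1) * ad.var())
                     / (len(hc) + len(ad) - 2))
    cohens_d = (ad.mean() - hc.mean()) / pooled

    # ANCOVA: effect of group on Delta-Age with age and sex as covariates
    model = ols('delta_age ~ C(group) + age + C(sex)', data=df).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    partial_eta2 = anova.loc['C(group)', 'sum_sq'] / (
        anova.loc['C(group)', 'sum_sq'] + anova.loc['Residual', 'sum_sq'])
    return cohens_d, anova, partial_eta2
```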
Given that the age-bias correction procedure is a linear transformation of the VNN outputs, it can readily be concluded that the statistical patterns for regional residuals in Fig. 2a and Fig. 2c lead to the elevated \(\Delta\)-Age and to the correlation between \(\Delta\)-Age and CDR sum of boxes scores. Therefore, our framework provides a feasible way to characterize accelerated biological aging in the AD+ group with a regional profile. Additional figures and details pertaining to VNN outputs and brain age before and after the age-bias correction are included in Appendix H.
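The age-bias correction itself is specified in Section 3.3 (not reproduced in this excerpt); a sketch assuming the common regression-based scheme, in which the age-dependent bias of the raw estimates is fitted on the HC group and then removed before taking the brain-age gap:

```python
import numpy as np

def delta_age(y_hat_hc, age_hc, y_hat, age):
    """Linear age-bias correction fitted on the HC group.
    y_hat_*: raw VNN age estimates; age_*: chronological ages."""
    # fit the bias of the raw estimates against chronological age on HC
    gap_hc = y_hat_hc - age_hc
    alpha, beta = np.polyfit(age_hc, gap_hc, deg=1)
    # remove the fitted bias, then take the brain-age gap (Delta-Age)
    corrected = y_hat - (alpha * age + beta)
    return corrected - age   # approximately zero-mean on HC by construction
```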
### Regional residuals derived from VNNs trained on OASIS-3 are correlated with eigenvectors of the anatomical covariance matrix
We further investigated the relationship between regional residuals derived from VNNs and the eigenvectors of \(\mathbf{C}_{\text{HA}}\) to determine whether any specific eigenvectors (principal components) of \(\mathbf{C}_{\text{HA}}\) were instrumental to recover the findings in Fig. 2b. For this purpose, we evaluated the inner product of normalized residual vectors (norm = 1) obtained from VNNs and the eigenvectors of the covariance matrix \(\mathbf{C}_{\text{HA}}\) for the individuals in the AD+ group. The normalized residual vector is denoted by \(\mathbf{\bar{r}}_{\text{AD+}}\). For every individual, the mean of the absolute value of the inner product \(|<\mathbf{\bar{r}}_{\text{AD+}},\mathbf{v}_{i}>|\) (where \(\mathbf{v}_{i}\) is the \(i\)-th eigenvector of \(\mathbf{C}_{\text{HA}}\)) was evaluated for the 100 VNN models that were used to derive the results in Fig. 2. Figure 3 plots the mean inner product for the eigenvectors associated with the \(50\) largest eigenvalues of \(\mathbf{C}_{\text{HA}}\). The three largest mean correlations with the regional residuals in the AD+ group were observed for the third eigenvector of \(\mathbf{C}_{\text{HA}}\) (\(|<\mathbf{\bar{r}}_{\text{AD+}},\mathbf{v}_{3}>|=0.645\pm 0.016\)), the fourth eigenvector (\(|<\mathbf{\bar{r}}_{\text{AD+}},\mathbf{v}_{4}>|=0.305\pm 0.02\)), and the first eigenvector (\(|<\mathbf{\bar{r}}_{\text{AD+}},\mathbf{v}_{1}>|=0.299\pm 0.001\)). Similar correlation patterns between the regional residuals and the eigenvectors of \(\mathbf{C}_{\text{HA}}\) for the AD+ group were recovered by the VNNs that had been trained on an independent dataset and used to process the OASIS-3 dataset (Appendix G). These eigenvectors are plotted on a brain template in the expanded Fig. 9b in the supplementary material. Inspection of the first, third, and fourth eigenvectors of \(\mathbf{C}_{\text{HA}}\) suggested that the subcallosal, entorhinal, parahippocampal, and temporal pole regions could be the most relevant contributors to the elevated \(\Delta\)-Age in the AD+ group in Fig. 2.

Figure 2: **Interpretable \(\Delta\)-Age evaluation in OASIS-3 dataset.** Panel **a** displays the robustness of the significantly elevated regional residuals for the AD+ group with respect to the HC group for different brain regions. For every VNN model in the set of \(100\) nominal VNN models that were trained on the HC group, the analyses of regional residuals helped isolate brain regions with significantly elevated regional residuals in the AD+ group with respect to the HC group. After performing this experiment for all \(100\) VNN models, the robustness of the observed significant effect in a brain region was evaluated by counting the number of times that region was identified as having significantly elevated regional residuals in the AD+ group with respect to the HC group; this count is projected on the brain template. Panel **b** displays the distribution of \(\Delta\)-Age for the HC and AD+ groups. The elevated brain age effect here is characterized by the regional profile in Panel **a**. Panel **c** projects the mean Pearson's correlation between regional residuals (derived for each VNN model in the set of 100 nominal VNN models) and the CDR sum of boxes for the AD+ group on the brain template. Panel **d** displays the scatter plot of CDR sum of boxes versus \(\Delta\)-Age in the AD+ group. The correlation between \(\Delta\)-Age and CDR sum of boxes could be attributed to the observations made in Panel **c**.
### Additional Experiments
**Cross-validation:** Using the VNNs trained to predict chronological age for an independent dataset of healthy population (cortical thickness curated according to a different brain atlas), we recovered a regional profile similar to Fig. 2a on the OASIS-3 dataset. Furthermore, the regional residuals derived from this set of VNNs had similar correlation patterns with the eigenvectors of \(\mathbf{C_{HA}}\). This observation and Fig. 3 led to the conclusion that training to predict chronological age of a healthy population was instrumental for the VNNs to gain the ability to exploit eigenvectors of the anatomical covariance matrix that are relevant to brain age prediction. See Appendix G for details.
**Stability to perturbations in \(\mathbf{C_{HA}}\):** As a consequence of the stability of VNNs, we observed that the regional profile for \(\Delta\)-Age in Fig. 2a was stable even when the covariance matrix \(\mathbf{C_{HA}}\) was estimated by a variable composition of individuals from the HC and AD+ group (Appendix J).
**Anatomical covariance matrix and brain age:** Use of anatomical covariance matrix derived from only HC group provides results consistent with Fig. 2, albeit with a slightly smaller group difference between the \(\Delta\)-Age distributions for HC and AD+ groups. See Appendix K for details.
## 5 Discussion
Our study has provided a foundational contribution to the methodology of brain age prediction. The anatomical interpretability of \(\Delta\)-Age was derived from the features extracted at the final layer of the VNNs, and these features were correlated with a marker of dementia severity and with certain eigenvectors of the anatomical covariance matrix. Importantly, training the VNNs to predict chronological age helped fine-tune their parameters to exploit the relevant eigenvectors of the anatomical covariance matrix. Thus, the role of the age-bias correction step was restricted to projecting the VNN outputs onto a space where one could infer accelerated biological aging with respect to chronological age from a layman's perspective. By associating \(\Delta\)-Age with a regional profile, VNNs also provide a feasible tool to distinguish pathologies even when their \(\Delta\)-Age distributions overlap. Moreover, a near-perfect chronological age prediction for healthy controls may not, by itself, be a determinant of the quality of brain age prediction in neurodegeneration. A larger focus is needed on principled statistical approaches for brain age prediction that can capture the factors that lead to accelerated aging. Locally interpretable and theoretically grounded deep learning models such as VNNs provide a feasible, promising direction towards building statistically and conceptually legitimate brain age prediction models in broader contexts. Incorporating other modalities of neuroimaging, or alternative metrics of aging beyond chronological age (such as DNA methylation aging [42]), in the training of VNNs provides a promising future direction that can help improve our understanding of aging.
**Limitations.** We provide findings derived on a single dataset (although corroborated using VNNs trained on an independent dataset), and further investigation on larger and more diverse datasets may improve the confidence in our conclusions. Existing studies, including this paper, fall short of concretely defining the notion of optimal brain age. From a broader perspective, quantifying biological age even for a healthy population is a complex task due to various factors that can contribute to accelerated aging in the absence of an adverse health condition [43; 44; 45]. Moreover, the impacts of the quality of MRI scans and brain atlases across datasets on \(\Delta\)-Age must also be explored.

Figure 3: Bar plots of the mean inner products between the normalized vector of regional residuals (norm = 1) of VNN outputs (VNNs trained on OASIS-3) obtained from the AD+ group and the eigenvectors of \(\mathbf{C}_{\text{HA}}\) (covariance matrix of the combined HC and AD+ group), with respective standard deviations as error bars. Results with coefficient of variation \(>30\%\) across the AD+ group have been excluded.
## 6 Data and Code Availability
OASIS-3 dataset is publicly available and hosted on central.xnat.org. Access to OASIS-3 dataset may be requested through [https://www.oasis-brains.org/](https://www.oasis-brains.org/). Code for demonstrating brain age evaluation is available at [https://github.com/pennbindlab/VNN_Brain_Age](https://github.com/pennbindlab/VNN_Brain_Age). Requests for details regarding IDs of individuals in OASIS-3 and source data for all figures may be sent to [email protected].
## 7 Acknowledgements
OASIS-3 data were provided by Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, R01 EB009352. The MRI data for XYZ dataset was provided by the Penn Frontotemporal Degeneration Center (NIH AG066597) and Penn Institute on Aging. Cortical thickness data were made available by Penn Image Computing and Science Lab at University of Pennsylvania.
|
2309.02572 | Experience Capture in Shipbuilding through Computer Applications and
Neural Networks | It has always been a severe loss for any establishment when an experienced
hand retires or moves to another firm. The specific details of what his
job/position entails will always make the work more efficient. To curtail such
losses, it is possible to implement a system that takes input from a new
employee regarding the challenges he/she is facing and matches it to a previous
occurrence where someone else held his/her chair. This system could be made
possible with input through the ages from the array of individuals who managed
that particular job and processing this data through a neural network that
recognizes the pattern. The paper is based on data collected from traditional
wooden dhow builders and some of the modern day unconventional shipyards. Since
the requirements for successful implementation in such scenarios seem too
steep at the moment, an alternate approach has been suggested by implementation
through the design processes across multiple shipyards. The process entails the
traditional value passed down through generations regarding a particular
profession and analysis has been done regarding how this knowledge/experience
can be captured and preserved for future generations to work upon. A series of
tools including SharePoint, MATLAB, and some similar software working in tandem
can be used for the design of the same. This research will provide valuable
insight as to how information sharing can be applied through generations for
effective application of production capabilities. | Sankaramangalam Ulhas Sangeet, Sivaprasad K, Yashwant R. Kamath | 2023-09-05T20:48:27Z | http://arxiv.org/abs/2309.02572v1 | ## Experience Capture in Shipbuilding Through Computer Applications and Neural Networks
## Abstract
It has always been a severe loss for any establishment when an experienced hand retires or moves to another firm. The specific details of what his job/position entails will always make the work more efficient. To curtail such losses, it is possible to implement a system that takes input from a new employee regarding the challenges he/she is facing and matches it to a previous occurrence where someone else held his/her chair. This system could be made possible with input through the ages from the array of individuals who managed that particular job and processing this data through a neural network that recognizes the pattern. The paper is based on data collected from traditional wooden dhow builders and some of the modern-day unconventional shipyards. Since the requirements for successful implementation in such scenarios seem too steep at the moment, an alternate approach has been suggested by implementation through the design processes across multiple shipyards. The process entails the traditional value passed down through generations regarding a particular profession, and analysis has been done regarding how this knowledge/experience can be captured and preserved for future generations to work upon. A series of tools including SharePoint, MATLAB, and some similar software working in tandem can be used for the design of the same. This research will provide valuable insight as to how information sharing can be applied through generations for effective application of production capabilities.
## 1 Nomenclature
* ANN: Artificial Neural Network
* EULA: End User License Agreement
* GUI: Graphical User Interface
* IACS: International Association of Classification Societies
* KBMS: Knowledge Base Management System
* NPO: Non-profit Organisation
## 2 Introduction
Since the advent of ships, there have been accidents at sea. The root cause of many of these unfortunate events has been a lack of attention to detail, both in the maintenance of existing ships and in new builds. Human error is the main cause of roughly 80-85% of maritime accidents [2]. The entities responsible for supervising and enforcing attention to detail are the classification societies, which ensure that shipbuilding conforms to certain acceptable standards. It would be a huge leap if we could ensure better quality and increase production efficiency with a single tool. This paper provides a proposal that could accomplish just that.
## 3 Methodology
Two approaches were considered, based on reach and economic viability. Since computational power has only in the past decade caught up with the fast-advancing requirements of such systems, this method has acquired relevance only recently. It is therefore justified to verify the applicability of such a design through a case study. The analysis, conducted through case studies, provides an indication of how the proposal would help or harm the current scenario; this paper presents the results of that study.
## 4 Case Study
### Case 1- Implementation in Production
### (a) Applicability in a Shipyard
It is possible to implement a database development programme to serve as the base for a knowledge system that is preformatted as per the requirements of an artificial neural network capable of deep learning. Deep learning recognises intricate patterns in huge databases that have varying levels of hierarchy through a backpropagation algorithm, and suggests how a machine should re-compute each layer using different parameters [1]. With this, a suitable neural network can calculate the extent of dependency between databases generated at different hierarchical positions. In the present-day scenario, this applies to the different levels of communication between the design, production and outfit departments in a new-build shipyard [3]. The most viable place to implement such a database development programme would be the production department, where decisions are made every day as to the next step in assembly, transportation and welding. A lot of time can be saved by creating a comprehensive timeline where every department can work in tandem without disrupting another's work. User-friendly GUIs can further alleviate the pains associated with training the working professionals to log their work into the database every day. Microsoft SharePoint is a platform where shipyard workers can log their everyday decisions and their subsequent effects [4]. The prime requirement of our solution is therefore anomaly detection in the logs created through Microsoft SharePoint. As shown in Figure 1, there can be different types of anomalies depending on the type of ship being built. Since shipbuilding is largely one-off, there can be point anomalies like the one in Figure 1(B), which could simply reflect a different type of outfitting requested by the owner that calls for a specific type of welding in a location or timeframe not found among the data from other shipbuilding activities. There can also be collective anomalies like the one shown in Figure 1(C), which could be due to an entirely different kind of ship being built, involving entirely different procedures in unit assembly, sub-assembly, welding, outfitting and launching [5]; together, these form a group of anomalies. If we assume that our observations follow a stochastic model, statistical analysis can be used to forecast further data points with occasional anomalies [5]. Figure 1(A) represents a typical new-build job with no anomalies in any of the processes involved; a simple illustration of such detection is sketched below.
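As a first approximation of this statistical view, point and collective anomalies in logged process data can be flagged with simple score thresholds. The sketch below is illustrative only and is not tied to any specific shipyard logging system:

```python
import numpy as np

def detect_anomalies(durations, z_thresh=3.0, window=5):
    """durations: 1-D array of logged task durations (e.g., welding hours).
    Returns indices of point anomalies and starts of anomalous windows."""
    x = np.asarray(durations, dtype=float)
    z = (x - x.mean()) / x.std()

    # point anomalies: single observations far from the bulk of the data
    points = np.where(np.abs(z) > z_thresh)[0]

    # collective anomalies: windows whose mean deviates, even if the
    # individual points within them look ordinary
    means = np.convolve(x, np.ones(window) / window, mode='valid')
    zw = (means - x.mean()) / (x.std() / np.sqrt(window))
    windows = np.where(np.abs(zw) > z_thresh)[0]
    return points, windows
```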
Simply put, after such a neural network has been subjected to deep learning and is properly trained, the ANN can forecast the entire timeline of the shipbuilding process, including many circumstances that are generally overlooked, together with solutions to get around those difficulties. This could have a huge impact on the production scenario in shipyards and could boost production efficiency drastically, while saving a lot of time and effort for the planning team. Such a system in its entirety may be termed a knowledge-based management system (KBMS) [6]. A series of steps necessary for the implementation of such a system in a shipyard is detailed below in chronological order:

The above process will require at least a decade to properly train the network to analyse and provide solutions for problems. Since this follows a statistical approach, the only way to improve accuracy is to add more data points, provided the network definition is sound. A programme with such a long-term goal can only be undertaken by a body that has several shipyards working around the world, which can provide different databases to efficiently train the network and make it capable of dealing with ships of all types and sizes. Any other shipyard would find it wasteful, both economically and in human resources. Even here, the classification societies could jointly (through IACS) create a database for the same using data from shipyards all around the world.
### (b) Applicability in Traditional Shipbuilding
Traditional wooden dhow construction carried out in Kerala, India is a perfect example of knowledge capture. The head shipwright, known as 'Moothasari' in the local language, trains an apprentice as his successor. The apprentice works with the head shipwright for more than half of his mentor's working tenure, until the mentor deems his successor worthy to take over. This would therefore be the perfect platform on which to implement the deep learning concept and create an artificial neural network that can later become the apprentice's personal troubleshooting tool. In the said context, it is still uncertain whether this can be implemented immediately without an overseeing body that can provide technical assistance. Hence this approach requires more resources, including financial funding. This method is therefore not feasible at present, but could be very useful in the future, provided the classification societies take a more active role in analysing and classifying traditional wooden dhows.

Figure 1: Statistical representation of data points collected. (A): Normal distribution, (B): Anomaly in a single data value, (C): Collective anomalies
### (c) Applicability in an NPO
A KBMS may contribute differently to different shipyards around the world that predominantly deal in certain types or sizes of ships. Hence it is more sensible to create a global model that can be trained with data from shipyards around the world, capable of dealing with huge amounts of data on all varieties of ocean-going vessels. Even though this might require several teraflops or more of computing power, it will certainly yield better efficiency in production and further improve the communication of practices among shipyards globally. It is possible to set up a non-profit organisation (NPO) that is funded and run by classification societies and other institutions like RINA (Royal Institution of Naval Architects). Such an organisation can monitor and provide a common platform for various shipbuilding activities to work in tandem, under an array of strict quality control systems that ensure the safety of vessels at sea while decreasing the production time of every individual ship. This will greatly reduce the time required to train an ANN to comply with all the required norms in shipbuilding. Having a great number of databases to include in the knowledge base will also increase the accuracy of all predictions made by the network tremendously. A common data entry form can be provided to all shipyards to log their data and the problems they face. Instant troubleshooting will be available to all employees through real-time feedback from the network, as well as from other working individuals holding the same position in different shipyards. All amendments made by the IMO, IACS and other authorities can be brought into practice with ease, and with fewer resources, through simple parameter manipulation. In an age where macrocosms of computers will become the tools used to govern industries at large, such a tool would be the perfect ground to build upon. It remains to be seen whether a technology will arise that can further simplify the process in far less time than this approach needs. In the event of such a discovery, this method will lose its relevance and applicability.
### (d) Conclusion of Case 1
In all three methods elaborated above, it was apparent that successfully applying such a tool requires substantial resources, both financial and in manpower. Hence, we came to the conclusion that it would be easier to implement the tool at the grassroots level, that is, in the design department. The shipbuilding industry has already become very progressive, developing countless comprehensive software tools that deal with both design and production. Hence the second case studies the effects and applicability of an ANN in design software.
### Case 2- Implementation in Design
With time, ship design has evolved from full-scale drawings by master loftsmen to complete three-dimensional renderings of ship models with computerised structural analysis using advanced finite element analysis.
It is now possible to simulate almost all conditions that the vessel might face during its lifetime on desktop based computers. As shown in Figure 2, all parts including structure, tanks, plumbing, outfitting etc. can be simulated along with the hull for exact real time data replication. Every aspect of the shipbuilding process from the stockyard to the fully outfitted ship at sea can now be designed, planned and constructed using comprehensive software tools that provide naval architects with everything they need to complete the detailed design. Simulations can even tell us what might happen if we start using some new technology, or different equipment mainly focusing on the cost of operation and strict adherence to build timeline [7]. This field is still evolving. All software are now being equipped with user-friendly, interactive, and easy to learn graphical user interfaces that simplify the process for the working personnel in a shipyard's design department. It would be very easy to train an ANN with deep learning using data from widely used ship design software. Since the same software will be used in shipyards all around the world, there will be numerous data points to serve as input while training the neural network. Hence prediction accuracy will be very high. Since classification societies are already taking an active role in promoting ship design software to incorporate eco friendly building techniques as well as the design of hulls, bows and propellers with higher efficiencies, it will not require a great deal of effort to include a knowledge acquiring system in them. Due to the large user base that such software has in the industry, we can train the ANN in a very short span. Qualitative and quantitative data from over a hundred shipyards can easily train the ANN for life. Such a venture can be tailored to perfection by IACS. Similar to the archives of ship data that some of the classification societies now maintain, a knowledge base that supplements the archives with the
details of the building process will easily suffice when collected from a large number of shipyards. Once so many data points are amassed, the network will be able to forecast data on a variety of subjects covering all aspects of the design process.
Once it is successfully implemented in the design process, the system can be extended to the production stage through class surveyors who supervise the production process. Information can be logged through the small tablet computers that a surveyor carries. Such a device can reduce the number of items he has to carry around, which is an exceptionally difficult task while accessing areas like double bottoms and wing tanks for survey. If everything starts to be logged electronically from now on, the database we build will be most useful for developing better ways of accomplishing tasks that are otherwise hard to do.
### Sample Case Study in Suggestion Output
Figure 3 shows a sample data entry form that would be used by the designers in the shipyard to train the ANN for future suggestions. In the future, when the network is capable of extending beyond the design department, it will need both qualitative and quantitative data from more than a hundred shipyards. For example, a welder in the right mindset can do his work perfectly, while a disturbed one cannot. In everyday shipbuilding there are numerous welded joints being created per day. If a welder is careless in his work, a whole region of the newly built ship might become weak and prone to fracture. In a scenario where the ANN extends its reach into production, it can be monitored whether workers are overexerted or whether someone is not good at their job. This can improve quality assurance at every shipyard. Moreover, different problems faced by different shipyards might have a single, easy solution. Such instances can be easily identified and rectified.
## 5 Conclusion
In short, it can be concluded that the easiest and most economical way to start building a KBMS that employs an ANN capable of deep learning is through the premium design software tools that are already widely used. Once a legal EULA is drawn up that safeguards the commercial interests of all users, the data can be used to build a knowledge base that future generations can build upon. Shipbuilding practices have remained more or less the same, save for some advanced tools that simplify the process; there have been no radical technological advancements that question the relevance of conventionally modelled ships. Hence it is paramount that a digital database be developed that encapsulates all the core values of traditional and present-day shipbuilding.
## 6 Acknowledgements
The work on this paper started amidst a pandemonium of other responsibilities and ended the same way. Therefore, first and foremost I would like to thank my family, who provided valuable advice, support and motivation and so became the prime mover in this venture. This paper would have been impossible without the invaluable help and advice provided by my teacher and mentor, Dr. K Sivaprasad.

Figure 2: Typical schematic rendered on a design tool [8]

Figure 3: A GUI for Initial Database Creation/Output
|
2301.06136 | Quantitative Verification with Neural Networks | We present a data-driven approach to the quantitative verification of
probabilistic programs and stochastic dynamical models. Our approach leverages
neural networks to compute tight and sound bounds for the probability that a
stochastic process hits a target condition within finite time. This problem
subsumes a variety of quantitative verification questions, from the
reachability and safety analysis of discrete-time stochastic dynamical models,
to the study of assertion-violation and termination analysis of probabilistic
programs. We rely on neural networks to represent supermartingale certificates
that yield such probability bounds, which we compute using a
counterexample-guided inductive synthesis loop: we train the neural certificate
while tightening the probability bound over samples of the state space using
stochastic optimisation, and then we formally check the certificate's validity
over every possible state using satisfiability modulo theories; if we receive a
counterexample, we add it to our set of samples and repeat the loop until
validity is confirmed. We demonstrate on a diverse set of benchmarks that,
thanks to the expressive power of neural networks, our method yields smaller or
comparable probability bounds than existing symbolic methods in all cases, and
that our approach succeeds on models that are entirely beyond the reach of such
alternative techniques. | Alessandro Abate, Alec Edwards, Mirco Giacobbe, Hashan Punchihewa, Diptarko Roy | 2023-01-15T16:35:36Z | http://arxiv.org/abs/2301.06136v3 | # Quantitative Verification With Neural Networks For Probabilistic Programs and Stochastic Systems
###### Abstract.
We present a machine learning approach to quantitative verification. We investigate the quantitative reachability analysis of probabilistic programs and stochastic systems, which is the problem of computing the probability of hitting in finite time a given target set of states. This general problem subsumes a wide variety of other quantitative verification problems, from the invariant and the safety analysis of discrete-time stochastic systems to the assertion-violation and the termination analysis of single-loop probabilistic programs. We exploit the expressive power of neural networks as novel templates for indicating supermartingale functions, which provide general certificates of reachability that are both tight and sound. Our method uses machine learning to train a neural certificate that minimises an upper bound for the probability of reachability over sampled observations of the state space. Then, it uses satisfiability modulo theories to verify that the obtained neural certificate is valid over every possible state of the program and conversely, upon receiving a counterexample, it refines neural training in a counterexample-guided inductive synthesis loop, until the solver confirms the certificate. We experimentally demonstrate on a diverse set of benchmark probabilistic programs and stochastic dynamical models that neural indicating supermartingales yield smaller or comparable probability bounds than existing state-of-the-art methods in all cases, and further that the approach succeeds on models that are entirely beyond the reach of such alternative techniques.
Keywords: Theory of computation → Program verification; Probabilistic computation; Computing methodologies → Machine learning; Neural networks.
## 1. Introduction
_Probabilistic Programs_. Probabilistic programs extend imperative programs with the ability to sample from probability distributions (Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996), which provide an expressive language for modelling applications such as randomised algorithms (Hastings and Ghahram, 1996; Hastings and Ghahram, 1996), robot motion planning (Kirkpatrick and Ghahram, 1996), cryptographic protocols (Kirkpatrick and Ghahram, 1996), and more generally for modelling and simulation of discrete-time stochastic processes (Hastings and Ghahram, 1996).
_Quantitative Reachability Analysis_. A fundamental problem in the formal verification of probabilistic programs is to compute _reachability probabilities_(Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996). More concretely, this requires us to compute the probability that an execution of a probabilistic program will reach a given subset of program configurations. By selecting the appropriate set of program configurations, we can express a variety of interesting quantitative verification questions in terms of computing reachability probabilities, such as computing the probability that a probabilistic loop terminates or violates an assertion (Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996), and computing the probability that a discrete-time stochastic system, such as a robot moving across an environment, remains within a set of its safe configurations (Hastings and Ghahram, 1996; Hastings and Ghahram, 1996; Hastings and Ghahram, 1996). However, as we might expect for a question of such generality, the problem of determining exact reachability probabilities for probabilistic loops over infinite state spaces is an uncomputable problem, as it can be instantiated to the decision problem of (deterministic) termination. Therefore, algorithmic methods for reachability analysis of probabilistic programs consider the problem of _bounding_ the probability of reaching a predetermined subset of states.
## 2. Quantitative Verification of Probabilistic Programs

### Infinitely Running Probabilistic Programs

An infinitely running probabilistic program is defined by the following three components.
**Variables**: An ordered set \(\operatorname{Vars}\) of real-valued variables with \(n=|\operatorname{Vars}|\), which in turn defines the state space \(\mathbb{R}^{n}\). Under this definition, a state \(s\in\mathbb{R}^{n}\) is an \(n\)-dimensional tuple of reals.
**Initial Condition**: A Boolean expression \(B_{0}\), which in turn defines the initial set \(S_{0}\subseteq\mathbb{R}^{n}\), as the set of states for which expression \(B_{0}\) evaluates to true.
**Update Command**: An update command \(C_{U}\) generated by the grammar of Fig. 1, where we consider every distribution associated with a probabilistic assignment as being obtained by applying the appropriate inverse transformation to a random variable uniformly distributed in the interval \([0,1]\). This defines the update function \(f\colon\mathbb{R}^{n}\times[0,1]^{k}\to\mathbb{R}^{n}\), where \(k\) is the number of syntactic probabilistic assignment statements occurring in \(C_{U}\).
Altogether, our probabilistic program defines a stochastic process, whose behavior is determined by the update equations
\[s_{t+1}=f(s_{t},r_{t}),\quad r_{t}\sim\mathbb{U}^{k},\qquad s_{0}\in S_{0}, \tag{1}\]
where \(\mathbb{U}^{k}\) denotes the uniform distribution over the \(k\)-dimensional hypercube \([0,1]^{k}\). The state at time \(0\) is determined by the initial condition, and each subsequent state is determined by the previous state and \(k\) random numbers sampled from the uniform distribution over \([0,1]\).
We remark that our model of infinitely running probabilistic programs encompasses state-dependent distributions (cf. Fig. 1, probability distributions). Probability distributions can be either discrete or continuous and may depend not only on constant parameters, but also on parameters that are determined from the system state. As such, parameters may depend on other distributions and thus define joint, multi-variate and hierarchically-structured distributions.
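As a concrete illustration of the semantics in Eq. (1), the sketch below simulates a one-dimensional program whose update draws a Gaussian sample by inverse-transform sampling from a uniform draw; the update function is an illustrative example, not a benchmark from this work:

```python
import numpy as np
from scipy.stats import norm

def f(s, r):
    """Example update x' = 0.9*x + w with w ~ Normal(0, 1), realised by
    inverse-transform sampling from the uniform draw r[0]."""
    w = norm.ppf(r[0])  # maps a U[0,1] sample to a standard Gaussian sample
    return np.array([0.9 * s[0] + w])

def simulate(s0, steps, k=1, seed=0):
    """Executes Eq. (1): s_{t+1} = f(s_t, r_t) with r_t ~ Uniform([0,1]^k)."""
    rng = np.random.default_rng(seed)
    s, trace = np.asarray(s0, dtype=float), []
    for _ in range(steps):
        s = f(s, rng.uniform(size=k))
        trace.append(s)
    return trace
```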
To formalise our quantitative verification questions, we first describe the semantics of our probabilistic programs as a Markov chain over the probability space of infinite words of random samples. Namely, this is the probability space defined as \((\Omega,\mathcal{F},\mathbb{P})\), where
* \(\Omega\) is the set of infinite sequences \(\big{(}[0,1]^{k}\big{)}^{\omega}\) of \(k\)-dimensional tuples of values in \([0,1]\), where \(k\) is the number of probabilistic assignments that occur in the update command;
* \(\mathcal{F}\) is the extension of the Borel \(\sigma\)-algebra over the unit interval \(\mathcal{B}([0,1])\) to \(\Omega\);
* \(\mathbb{P}\) is the extension of the Lebesgue measure on \([0,1]\) to \(\Omega\).
Figure 1: Grammar of update commands, Boolean and arithmetic expressions.

This extension of the Borel \(\sigma\)-algebra and Lebesgue measure to infinite sequences is a standard construction [12; 30]. Every state \(s\in\mathbb{R}^{n}\), when used as an initial state, induces a stochastic process \(\{X_{t}^{s}(\omega)\}_{t\in\mathbb{N}}\) over the state space \(\mathbb{R}^{n}\). Let \(\omega=r_{0}r_{1}r_{2}\dots\) be an infinite sequence of random samples in \([0,1]^{k}\); then the stochastic process is defined by the sequence of random variables
\[X^{s}_{t+1}(\omega)=f(X^{s}_{t}(\omega),r_{t}),\qquad X^{s}_{0}(\omega)=s, \tag{2}\]
This defines the natural filtration \(\{\mathcal{F}_{t}\}_{t\in\mathbb{N}}\) which is the smallest filtration to which the stochastic process \(X^{s}_{t}\) is adapted. In other words (Srivastava, 2017), this can be seen as a Markov chain with state space \(\mathbb{R}^{n}\) and transition kernel
\[T(s,S^{\prime})=\operatorname{Leb}\left(\left\{r\in[0,1]^{k}\mid f(s,r)\in S ^{\prime}\right\}\right). \tag{3}\]
where \(\operatorname{Leb}\) refers to the Lebesgue measure of a measurable subset of \([0,1]^{k}\). The transition kernel defines the _next time operator_\(\mathbb{X}\)(Srivastava, 2017, Definition 2.16) which can be applied to an arbitrary Borel-measurable function \(h\colon\mathbb{R}^{n}\to\mathbb{R}\) as
\[\mathbb{X}[h](s)=\int h(s^{\prime})T(s,\mathrm{d}s^{\prime}). \tag{4}\]
The next time operator applied to the function \(h\) defines the _post-expectation_\(\mathbb{X}[h]\) of \(h\).
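In the sample-based setting used later for training, the post-expectation in Eq. (4) can be approximated by averaging \(h\) over one-step successors drawn with fresh uniform samples; a sketch reusing the update function `f` from above:

```python
import numpy as np

def post_expectation(h, f, s, k=1, num_samples=10_000, seed=0):
    """Monte Carlo estimate of X[h](s) = E[h(f(s, r))], r ~ Uniform([0,1]^k),
    i.e., the integral in Eq. (4) under the kernel of Eq. (3)."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(size=(num_samples, k))
    return float(np.mean([h(f(s, r)) for r in samples]))
```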
### Quantitative Reachability Verification
We treat the general question of computing an upper bound on the probability that our stochastic process reaches a state in a target set \(A\in\mathcal{B}(\mathbb{R}^{n})\). This is the _reachability verification_ question. We define the event of reaching the target set, starting from state \(s\in\mathbb{R}^{n}\) after exactly \(t\) time steps as
\[Reach_{t}^{s}(A)=\left\{\omega\in\Omega\mid X^{s}_{0}(\omega)\notin A,\ldots,X^ {s}_{t-1}(\omega)\notin A,X^{s}_{t}(\omega)\in A\right\}. \tag{5}\]
Note that, under this definition, the special case \(t=0\) is defined as \(Reach_{0}^{s}(A)=\Omega\) when \(s\in A\) and \(Reach_{0}^{s}(A)=\emptyset\) when \(s\notin A\).
Henceforth, we use \(\mathbf{1}_{S}\) to denote the _indicator function_ of \(S\subseteq\mathbb{R}^{n}\), namely \(\mathbf{1}_{S}(s)=1\) if \(s\in S\) and \(\mathbf{1}_{S}(s)=0\) if \(s\notin S\). Below, we express the probability measure of the reachability event in exactly \(t\) steps, and at most \(t\) steps, in terms of indicator functions and the next time operator \(\mathbb{X}\).
**Lemma 2.1**.: \(Reach_{t}^{s_{0}}(A)\) _is measurable for every \(A\in\mathcal{B}(\mathbb{R}^{n})\), \(s_{0}\in\mathbb{R}^{n}\) and \(t\in\mathbb{N}\), and_
\[\mathbb{P}[Reach_{t+1}^{s_{0}}(A)]=\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}\big[\lambda s_{1}.\,\mathbb{P}[Reach_{t}^{s_{1}}(A)]\big](s_{0}), \tag{6a}\]
\[\mathbb{P}[Reach_{0}^{s_{0}}(A)]=\mathbf{1}_{A}(s_{0}). \tag{6b}\]
Proof.: Measurability of \(Reach_{t}^{s_{0}}(A)\) is a standard result (Srivastava, 2017, Sect. 10.1.1). Eq. (6a) follows from the measurability of the transition kernel.
**Lemma 2.3**.: \(Reach_{\mathrm{fin}}^{s}(A)\) _is measurable for every \(A\in\mathcal{B}(\mathbb{R}^{n})\) and \(s\in\mathbb{R}^{n}\), and_
\[\mathbb{P}[Reach_{\mathrm{fin}}^{s}(A)]=\lim_{t\to\infty}\mathbb{P}[ Reach_{\leq t}^{s}(A)]. \tag{10}\]
Proof.: \(Reach_{\mathrm{fin}}^{s}\) is measurable because it is a countable union of measurable sets. Furthermore, (10) follows from (9) and the properties of the measure of an increasing union [7, Prop. 2.59].
As we show in Sect. 3, our method computes an upper bound for \(\mathbb{P}[Reach_{\mathrm{fin}}^{s}(A)]\) over the initial states \(s\in S_{0}\). Reachability verification subsumes several important verification questions, and infinitely running probabilistic programs generalise a wide variety of systems.
### Quantitative Verification of Probabilistic Loops
Infinitely running probabilistic programs are generalised models that subsume a wide variety of other probabilistic models. A common example are _probabilistic loops_(Nakamura and others, 2010), which refer to probabilistic programs with a single while-loop, a Boolean expression \(B_{G}\) that denotes the guard condition of the loop, and an update command \(C_{L}\) that denotes the body of the loop:
\[\textbf{while}\ B_{G}\ \textbf{do}\ C_{L}\ \textbf{od} \tag{11}\]
Since our model is infinitely running, we express probabilistic loops as infinitely running probabilistic programs with the initial condition \(B_{0}\) defined as \(B_{G}\), and the update command \(C_{U}\) defined as \(\textbf{if}\ B_{G}\ \textbf{then}\ C_{L}\ \textbf{else skip}\ \textbf{fi}\). This implies that the program executes \(C_{L}\) as long as \(B_{G}\) is satisfied, while it leaves all variables unchanged after leaving the loop. We remark that this model is restricted to single loops in isolation, without nesting. Common verification problems for probabilistic loops are the analysis of termination and of assertion-violation, which we express as follows.
**Termination Analysis**: Let \(G\in\mathcal{B}(\mathbb{R}^{n})\) be the guard set of a probabilistic loop, i.e., the set of states for which the guard condition evaluates to true. Then, the termination event is characterised by \(Reach_{\mathrm{fin}}^{s}(\mathbb{R}^{n}\setminus G)\). Our method computes an upper bound for the probability of termination \(\mathbb{P}[Reach_{\mathrm{fin}}^{s}(\mathbb{R}^{n}\setminus G)]\) over every state \(s\in G\) or, dually, a lower bound for the probability of non-termination.
**Assertion-violation Analysis**: Let \(G\in\mathcal{B}(\mathbb{R}^{n})\) be the guard set of a single-loop probabilistic program and let \(A\in\mathcal{B}(\mathbb{R}^{n})\) be the satisfying set of an assertion at the beginning of the loop body. The assertion-violation event is characterised by \(Reach_{\mathrm{fin}}^{s}(G\setminus A)\). Our method computes an upper bound for the probability of assertion violation or, dually, a lower bound for the probability of its satisfaction. Note that multiple assertions can be modelled similarly by adding variables.
### Quantitative Verification of Discrete-time Stochastic Systems
Instances of probabilistic systems that are subsumed by our definition of infinitely running probabilistic programs include stochastic (autonomous) dynamical systems in discrete time, which are widely studied in control theory [4; 12]. Below, we instantiate these exemplars within our modelling framework. Discrete-time stochastic systems can be defined in terms of stochastic difference equations given a possibly nonlinear vector field \(g\colon\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{w}}\to\mathbb{R}^{n_{x}}\), a time-invariant input disturbance over \(\mathbb{R}^{n_{w}}\) with arbitrary distribution \(\mathcal{W}\), and an initial condition \(x_{0}\in\mathbb{R}^{n_{x}}\):
\[x_{t+1}=g(x_{t},w_{t}),\quad w_{t}\sim\mathcal{W}. \tag{12}\]
Under the assumption that \(g\) and the distribution \(\mathcal{W}\) can be defined as an update command generated by our program grammar, this is an infinitely running probabilistic program with \(n=n_{x}+n_{w}\) variables, where \(C_{U}\) encodes \(g\) as a composition of deterministic assignments, sampling from \(\mathcal{W}\) as probabilistic assignments from distributions with the respective constant parameters. Autonomous systems are naturally infinitely running and typical questions are the reachability analysis question
(cf. Sect. 2.2), the safety verification question, and the dual invariant verification question, which we phrase as follows.
**Safety Verification**: Let \(B\in\mathcal{B}(\mathbb{R}^{n})\) be a set of bad states. Then \(\Omega\setminus Reach^{s}_{\text{fin}}(B)\) characterises the event that the system never reaches a bad state when initialised in \(s\). Our method computes a lower bound over \(s\in S_{0}\) for the probability of safety, that is, \(1-\mathbb{P}[ Reach^{s}_{\text{fin}}(B)]\).
**Invariant Verification**: Let \(I\in\mathcal{B}(\mathbb{R}^{n})\) be a candidate invariant, then \(\Omega\setminus Reach^{s}_{\text{fin}}(\mathbb{R}^{n}\setminus I)\) characterises the event that \(I\) is an invariant, when the system is initialised in \(s\). Our method computes a lower bound over \(s\in S_{0}\) for the probability that \(I\) is invariant.
## 3. Neural Indicating Supermartingales
We provide the criteria for a certificate to ensure that the probability that a system with update function \(f\) reaches a state in a target set \(A\in\mathcal{B}(\mathbb{R}^{n})\) from a state in an initial set \(S_{0}\in\mathcal{B}(\mathbb{R}^{n})\) is bounded from above by a given value \(p\in[0,1]\). We call this certificate a _neural indicating supermartingale_. We define it as a neural network with \(n\) input neurons, an arbitrary number of hidden layers, and one output neuron, which encodes a function of type \(V\colon\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\cup\{\infty\}\), whose output is guaranteed to be non-negative over the entire domain and satisfies the following three conditions:
\[\text{(indicating condition)}\qquad\forall s\in A\colon V(s)\geq 1, \tag{13a}\]
\[\text{(non-increasing condition)}\qquad\forall s\notin A\colon\mathbb{X}[V](s)\leq V(s), \tag{13b}\]
\[\text{(bounding condition)}\qquad\forall s\in S_{0}\colon V(s)<p. \tag{13c}\]
Theorem 3.1 ().: _Let \(A\in\mathcal{B}(\mathbb{R}^{n})\) be a target set and \(s\in\mathbb{R}^{n}\) be an arbitrary initial state. Let \(V\colon\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\cup\{\infty\}\) be a non-negative function that satisfies (13a) and (13b). Then \(V(s)\geq\mathbb{P}[ Reach^{s}_{\text{fin}}(A)]\)._
Proof.: It follows from Lem. 2.2 and 2.3 and Kleene's fixpoint theorem over a CPO defined over the space of Borel measurable non-negative functions that satisfy (13a) and (13b), cf. Appendix A.3. We discard the requirement of an externally provided invariant from the soundness proof for non-negative repulsing supermartingales (59).

As we show empirically, our approach yields probability bounds that are comparable or better than existing methods, and further succeeds on a wider range of probabilistic programs than previously possible.
### Training of Neural ISMs From Sample Observations
As previously stated, in the present work a neural ISM consists of a neural network with ReLU activation functions and an arbitrary number of hidden layers. As with any ReLU-based neural network, we can train it using gradient descent over a finite set of states sampled from \(\mathbb{R}^{n}\), which we denote \(D=\{d^{(1)},\ldots,d^{(m)}\}\).
We seek to construct a loss function that guides the neural network to satisfy the specification set out in Eq. (13). We therefore consider a loss function consisting of the summation of multiple terms, with each term corresponding to a condition for the ISM:
\[\mathcal{L}(V)=\beta_{1}\mathcal{L}_{\text{ind}}(V)+\beta_{2}\mathcal{L}_{ \text{bnd}}(V)+\beta_{3}\mathcal{L}_{\text{non-inc}}(V). \tag{15}\]
Here, \(\beta_{j}>0,j\in\{1,2,3\}\), denotes a constant coefficient for scaling the terms of each constituent loss. This reduces the task of learning a neural ISM to constructing a suitable loss function. The proposed loss function in Eq. (15) consists of three separate components, one for each condition in Eq. (13). Notably, we satisfy a typical non-negativity condition by construction (cf. Eq. (14)). Next, we consider each component in turn and provide a suitable loss that will encourage satisfaction of Eqs. (13a), (13c) and (13b).
#### 3.1.1. Indicating Loss
First, consider the condition in Eq. (13a), which we refer to as the indicating condition. For this, we use the following loss function:
\[\mathcal{L}_{\text{ind}}(V)=\mathbb{E}_{d\in D\cap A}\left[\text{ReLU}(1-V(d) )\right]. \tag{16}\]
This penalises all sampled points within the target set which fail to satisfy the indicating condition, whilst ignoring any states which already satisfy it. In particular, we take the expectation over our finite data set \(D\).
#### 3.1.2. Bounding Loss
Secondly, we turn to the bounding condition, given by Eq. (13c). This condition determines the 'quality' of the ISM, as it provides the upper bound \(p\). We can encourage the synthesis of better ISMs by including a term in the loss that penalises large values of \(V\) over the initial states:

\[\mathcal{L}_{\text{bnd}}(V)=\mathbb{E}_{d\in D\cap S_{0}}\left[V(d)\right]. \tag{17}\]
Finally, we consider the non-increasing condition, Eq. (13b), which relies on computing the post-expectation of the martingale. We consider two alternative approaches to this calculation: one which assumes information about the program, by leveraging a symbolic inference engine, and one which does not, relying instead on a data-driven approach to perform approximate inference. We label these approaches as _program-aware_ and _program-agnostic_ synthesis, and treat them next.
#### 3.1.3. Program-aware Non-increasing Loss
We begin by considering program-aware synthesis, which leverages knowledge of the program within the loss function itself. In particular, we implement the algorithm used by the exact symbolic inference engine PSI (Zhu et al., 2017; Zhang et al., 2018), along with symbolic representations of the program and candidate martingale \(V\), to calculate the post-expectation \(\mathbb{X}[V]\) of the martingale. This can then be used to construct a loss function which penalises samples for which the non-increasing condition is unsatisfied:
\[\mathcal{L}_{\text{non-inc}}(V)=\mathbb{E}_{d\in D\setminus A}\left[\text{ReLU}( \mathbb{X}[V](d)-V(d))\right]. \tag{18}\]
#### 3.1.4. Program-agnostic Non-increasing Loss
While it is advantageous to incorporate as much knowledge of the program into the loss function as possible, the program-aware approach relies on a symbolic formula representing the program. For large programs this formula becomes more complex and possibly intractable. Furthermore, it relies on the smoothness of \(\mathbb{X}[V]\). We therefore consider an alternative formulation of the loss function which does not require knowledge of the program--rather, solely access to traces of it--and therefore is likely to perform better for more complex programs.
Ultimately, a program-agnostic approach differs from a program-aware one in that it does not exactly calculate the post-expectation of the ISM \(\mathbb{X}[V]\); instead, this is approximately calculated using a data-driven approach. We can then use the same loss function as in the program-aware case, Eq. (18), but with this approximation for the post-expectation. We emphasise that this does not impact the soundness of our scheme, as an exact symbolic calculation of the post-expectation is used during verification. However, we expect the probability bounds associated with the certificates to be less tight than those obtained from program-aware synthesis. In practice, we utilise a Monte Carlo based approach to estimate the post-expectation. For each state, we sample its successor states by executing the program \(f\). To obtain an estimate of the post-expectation of \(V\), we extend every element \(d^{(i)}\) in our data set with a set of \(m^{\prime}\) randomly sampled successor states \(D^{\prime(i)}=\{d^{(i,1)},\ldots,d^{(i,m^{\prime})}\}\). Then \(\mathbb{X}[V](d)\) is estimated as:
\[\mathbb{X}[V](d)\approx\mathbb{E}_{d^{\prime}\in D^{\prime}}[V(d^{\prime})]. \tag{19}\]
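As a concrete illustration, the following sketch implements the Monte Carlo estimator of Eq. (19); the callable `step`, standing in for one execution of the program's update \(f\), and the batching conventions are our own assumptions.

```python
import torch

def post_expectation_mc(V, step, d, m_prime=64):
    """Estimate X[V](d) as in Eq. (19): sample m' successor states of d by
    executing one (stochastic) loop iteration, then average V over them."""
    successors = torch.stack([step(d) for _ in range(m_prime)])  # D'^(i)
    return V(successors).mean()
```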
#### 3.1.5. Smoothness of Activation Functions
Finally, we note that since ReLU is a non-smooth function, we found that in practice it is preferable to substitute it with a smooth approximation, Softplus, which takes the form
\[\text{Softplus}(s)=\log(1+\exp(s)). \tag{20}\]
However, Softplus can be a poor approximation to ReLU over the required \([0,1]\) interval for probabilities. Therefore, in practice we learn a function that instead represents a scaled probability \(\alpha p\), with the value \(\alpha>0\) being fixed to some large integer. As a result, the indicating component of the loss, Eq. (16), becomes \(\mathcal{L}_{\text{ind}}(V)=\mathbb{E}_{d\in D\cap A}[\text{ReLU}(\alpha-V(d))]\). It is important to note that while Softplus is used as the activation in the learning stage, during the verification stage we still verify using ReLU activation functions, ensuring soundness of the generated neural ISM.
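Putting the pieces together, the sketch below shows one plausible PyTorch implementation of the candidate network and the composite loss of Eq. (15), with Softplus activations during training and the \(\alpha\)-scaled indicating loss discussed above. The layer sizes, the value of \(\alpha\), and the interface of `post_V` (which stands for \(\mathbb{X}[V]\), computed symbolically or via the estimator of Eq. (19)) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ALPHA = 1000.0  # the network represents the scaled probability alpha * p

class NeuralISM(nn.Module):
    """Candidate certificate V: R^n -> R_{>=0}. Softplus is a smooth stand-in
    for ReLU during training (Eq. (20)); the output activation enforces
    non-negativity by construction. Verification swaps Softplus for ReLU."""
    def __init__(self, n_in, hidden=(3,)):
        super().__init__()
        dims = (n_in, *hidden, 1)
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(dims, dims[1:]))

    def forward(self, s):
        for layer in self.layers[:-1]:
            s = F.softplus(layer(s))
        return F.softplus(self.layers[-1](s))

def ism_loss(V, post_V, d_in_A, d_in_S0, d_not_A, betas=(1.0, 1.0, 1.0)):
    """Composite loss of Eq. (15) over batches of sampled states."""
    l_ind = F.relu(ALPHA - V(d_in_A)).mean()             # Eq. (16), alpha-scaled
    l_bnd = V(d_in_S0).mean()                            # Eq. (17)
    l_dec = F.relu(post_V(d_not_A) - V(d_not_A)).mean()  # Eq. (18)
    return betas[0] * l_ind + betas[1] * l_bnd + betas[2] * l_dec
```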
### Verification of Neural ISMs Using SMT Solving
The purpose of the verification stage is to ensure that for no possible program trace are the conditions described in Eq. (13) violated. For this purpose, we rely on SMT solving. This involves providing an SMT solver with the negation of the specification described by Eq. (13). However, since the non-negativity condition is satisfied by construction of the networks, this part can be disregarded. Furthermore, the bounding condition does not impact the validity of the ISM, but instead its quality. The probability bound \(p\) is first optimised by the \(\mathcal{L}_{\text{bnd}}\) component in our loss function, then estimated using a binary search over \([0,1]\) and then checked over \(S_{0}\) as part of the verification step. The conditions given in Eq. (13) become
\[\forall s\in\mathbb{R}^{n}:\underbrace{(s\in A\to V(s)\geq 1)}_{\varphi_{\text{ind}}}\wedge\underbrace{(s\notin A\to\mathbb{X}[V](s)\leq V(s))}_{\varphi_{\text{non-inc}}}\wedge\underbrace{(s\in S_{0}\to V(s)<p)}_{\varphi_{\text{bnd}}}. \tag{21}\]
Therefore, the SMT solver is provided with the negation of Eq. (21), namely
\[\exists s\in\mathbb{R}^{n}:\underbrace{(s\in A\wedge V(s)<1)}_{\neg\varphi_{\text{ind}}}\vee\underbrace{(s\notin A\wedge\mathbb{X}[V](s)>V(s))}_{\neg\varphi_{\text{non-inc}}}\vee\underbrace{(s\in S_{0}\wedge V(s)\geq p)}_{\neg\varphi_{\text{bnd}}}. \tag{22}\]
The verifier seeks any assignment \(d_{\text{ex}}\) of \(s\) such that any of the formulae \(\neg\varphi_{\text{ind}}\), \(\neg\varphi_{\text{non-inc}}\) and \(\neg\varphi_{\text{bnd}}\) are satisfiable (this check can be run in parallel over the three formulae). SMT solving is _sound_: if an assignment for \(s\) exists that satisfies any of the provided formulae \(\neg\varphi_{\text{ind}}\), \(\neg\varphi_{\text{non-inc}}\) and \(\neg\varphi_{\text{bnd}}\), then it will be found. Crucially, this means that if no counterexample is found, then the conditions in Eq. (13) are satisfied and the neural ISM is valid.

Alternatively, an assignment that satisfies any of \(\neg\varphi_{\text{ind}}\), \(\neg\varphi_{\text{non-inc}}\) or \(\neg\varphi_{\text{bnd}}\) is a counterexample \(d_{\text{ex}}\): this represents a state for which the required conditions are invalidated and thus the candidate ISM is incorrect. The counterexample is then added to the data set \(D\) for synthesis to resume. Furthermore, we can determine which condition is invalidated by evaluating the clauses \(\neg\varphi_{\text{ind}}\), \(\neg\varphi_{\text{non-inc}}\) and \(\neg\varphi_{\text{bnd}}\) separately at \(d_{\text{ex}}\) and determining which of these are satisfied.
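To illustrate the verification step, the sketch below uses the z3 Python bindings to check the negated conditions of Eq. (22) for a hand-written toy example: a kernel that, from state \(0\), moves to the target state \(1\) with probability \(0.01\) and otherwise to a sink state \(2\), with the post-expectation \(\mathbb{X}[V]\) encoded by hand. The system, candidate \(V\), and bound \(p\) are our own illustrative choices, not one of the paper's benchmarks.

```python
from z3 import Real, Solver, If, Or, And, Not, unsat

s = Real("s")

def V(x):
    # Piecewise-constant candidate certificate (non-negative by construction).
    return If(x == 1, 1.0, If(x == 0, 0.01, 0.0))

def post_V(x):
    # Hand-coded X[V](x): from 0, reach the target 1 w.p. 0.01 (V(1) = 1)
    # and the sink 2 w.p. 0.99 (V(2) = 0); every other state jumps to 2.
    return If(x == 0, 0.01 * 1.0 + 0.99 * 0.0, 0.0)

in_A, in_S0, p = (s == 1), (s == 0), 0.02

not_phi_ind = And(in_A, V(s) < 1)               # violates Eq. (13a)
not_phi_dec = And(Not(in_A), post_V(s) > V(s))  # violates Eq. (13b)
not_phi_bnd = And(in_S0, V(s) >= p)             # violates Eq. (13c)

solver = Solver()
solver.add(Or(not_phi_ind, not_phi_dec, not_phi_bnd))  # negation, Eq. (22)
if solver.check() == unsat:
    print("certificate valid: P[reach A from S0] < 0.02")
else:
    print("counterexample:", solver.model()[s])
```

For this toy instance the check returns unsat, so \(V\) certifies the (exact) reachability probability \(0.01<p=0.02\).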
### Counterexample-guided Inductive Synthesis
In this section we have detailed an inductive synthesis procedure to generate neural ISMs which utilises counterexamples. This procedure is known as counterexample-guided inductive synthesis (CEGIS) (Steintein and Tschum, 2007; Steintein and Tschum, 2008) (cf. Fig. 2), and consists of two components: a learner and a verifier. These components work in opposition to each other. On the one hand, the learner seeks to synthesise a candidate neural ISM that meets the desired specification (cf. Eq. (13)) over a finite set of samples from the state space. On the other, the verifier seeks to disprove the candidate generated by the learner by searching for counterexamples, i.e., instances of invalidation of the desired specification, over the original state space. If the verifier shows that no such counterexample exists, then the desired specification is met by the program and synthesis terminates successfully.
In this work, the learner takes the form of a ReLU-based neural network, trained over finite samples; meanwhile, the role of the verifier is assumed by the SMT solver. Several options exist for the latter choice, under the condition that the SMT solver can reason over nonlinear real arithmetic: in this work, we use Z3 (Zi et al., 2017).
#### 3.3.1. Generation of Additional Counterexamples
In order to better facilitate counterexample-guided synthesis, it is desirable to generate multiple counterexamples (if they exist). This can be achieved by calling the verifier with the condition \((\neg\varphi_{\text{ind}}\lor\neg\varphi_{\text{non-inc}}\lor\neg\varphi_{\text{bnd}})\land s\neq d_{\text{ex}}\), i.e., we now seek an assignment satisfying \(\neg\varphi_{\text{ind}}\lor\neg\varphi_{\text{non-inc}}\lor\neg\varphi_{\text{bnd}}\) that is different to the one already obtained. However, in practice more useful counterexamples exist outside the neighbourhood of our current one. Suppose we have \(N\) counterexamples \(d_{\text{ex}}^{(1)},\ldots,d_{\text{ex}}^{(N)}\); we can instead construct
\[(\neg\varphi_{\text{ind}}\lor\neg\varphi_{\text{non-inc}}\lor\neg\varphi_{\text{bnd}})\land\bigwedge_{1\leq i\leq N}(||d_{\text{ex}}^{(i)}-s||\geq\gamma), \tag{23}\]
where \(\gamma\in\mathbb{R}^{+}\) and \(||\cdot||\) denotes any norm of its input, to find further counterexamples that are at least \(\gamma\) from all current counterexamples (according to the chosen norm).
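Continuing the z3 sketch above, the \(\gamma\)-separation constraint of Eq. (23) can be added as follows; the counterexample values are illustrative, and the absolute value serves as the norm in one dimension.

```python
from z3 import If, Or, Solver

def dist(a, b):
    d = a - b
    return If(d >= 0, d, -d)  # |a - b|: the 1-D instance of the norm

gamma = 0.5
known = [2.5, -1.0]  # previously found counterexamples d_ex^(1..N) (illustrative)

solver = Solver()
solver.add(Or(not_phi_ind, not_phi_dec, not_phi_bnd))   # disjunction of Eq. (22)
for d_ex in known:
    solver.add(dist(s, d_ex) >= gamma)  # stay at least gamma away from each d_ex
```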
Figure 2. Overview of the counterexample-guided inductive synthesis procedure used to synthesise neural ISMs. Inputs to the procedure include a probabilistic program \(f\), initial set \(S_{0}\) and target set \(A\). The procedure generates a neural indicating supermartingale \(V(s)\) alongside a probability bound \(p\).
```
error = 0; i = 0;
while i < 10 do
    assert(error == 0);
    p ~ Bernoulli(0.999);
    if p == 1 then
        q ~ Bernoulli(0.5);
        if q == 1 then
            i += 1
        fi
    else
        error = 1
    fi
od
```
Listing 1: faulty_loop
## 4. Experimental Evaluation
The previous section develops a method for synthesising neural ISMs. This section presents an empirical evaluation of the method, by testing it against a series of benchmarks. Each benchmark is tested ten times. A test is successful if a valid ISM is synthesised, and the proportion of successful tests is recorded. Further, the average bound from the valid ISMs is also recorded, along with the average time taken by learning and verification. This procedure is applied separately for program-aware and program-agnostic synthesis. To compare our method against existing work, we perform template-based synthesis of linear ISMs using Farkas' Lemma.
It should be noted that our method is inherently stochastic. One source of randomness is the initialisation of template parameters in learning. In program-agnostic synthesis, an additional source of randomness is sampling successor states. So that the results accurately reflect the performance of our method, the random seed for these sources of randomness is different for each test. An additional source of randomness arises from the SMT solver Z3, when generating counterexamples. This cannot be controlled externally. Benchmarks are run on a machine with an Nvidia A40 GPU.
The benchmarks in this report are created using two patterns.
**Unreliable Hardware**: The first pattern models a program that executes on unreliable hardware. The goal is to upper bound the probability that the program fails to terminate due to a hardware fault. A simple example of such a program is faulty_loop, whose source code is presented in Listing 1. In this program, a variable i is initialised to 0. In each successful iteration, it is incremented by 1 with probability 0.5. A hardware fault is modelled with the error flag being set to 1.
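As a sanity check of our own (not part of the synthesis method), the probability that faulty_loop violates its assertion can also be estimated by direct simulation; the estimate gives an empirical point of comparison for the certified upper bounds reported in Table 1.

```python
# Monte Carlo simulation of faulty_loop (Listing 1): returns True iff the
# assertion assert(error == 0) would be violated before i reaches 10.
import random

def faulty_loop_violates():
    error, i = 0, 0
    while i < 10:
        if error != 0:
            return True                    # assert(error == 0) fails
        if random.random() < 0.999:        # p ~ Bernoulli(0.999)
            if random.random() < 0.5:      # q ~ Bernoulli(0.5)
                i += 1
        else:
            error = 1
    return False

trials = 100_000
hits = sum(faulty_loop_violates() for _ in range(trials))
print(f"estimated assertion-violation probability: {hits / trials:.4f}")
```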
**Robot Motion**: The second pattern models an agent that is moving through a physical environment. In these benchmarks, the uncertainty in control and sensing is modelled probabilistically. Part of the environment is designated as the target region, and part of the environment is designated as a hazardous region. The goal is to upper bound the probability that the robot enters the hazardous region. A simple example of such a program is repulse, whose source code is presented in Listing 2. The program starts at co-ordinate 10, in a one-dimensional environment. The target region is where \(x<0\), and the hazardous region is where \(x>100\). In each iteration, there is an equal probability of \(x\) being decremented by 2 and of \(x\) being incremented by 1.
Several of the programs are based on benchmarks used in prior work focusing on other types of supermartingales (Beng et al., 2015; Beng et al., 2015). Additionally, there are several benchmarks that are entirely new.
### Results
The results are reported in Table 1. The table is divided into three sections. The first section shows the benchmarks where Farkas' Lemma can not be applied, and only our method is capable of producing a bound. The second section shows examples where both methods are able to produce a bound, but our method produces a notably better bound. The third section shows benchmarks where both methods produce comparable bounds.
The table has a column which describes the structure of the neural ISM that was trained. A single number \(h\) indicates that the ISM has a single hidden layer with \(h\) neurons. A pair \((h_{1},h_{2})\) indicates that there are two hidden layers with \(h_{1}\) and \(h_{2}\) neurons respectively.
Dashes in the table indicate experiments where a valid ISM could not be obtained. In the case of Farkas' Lemma, there are several cases where no linear ISM exists for the benchmark. By contrast, program-agnostic and program-aware synthesis could be applied to almost all benchmarks. The one exception is \(\mathsf{repulse\_uniform}\), for which only program-aware synthesis was unsuccessful. This is due to indicator functions in the post-expectation: these are not smooth expressions, which posed a problem for the optimiser. This benchmark underscores the value of program-agnostic synthesis, since it does not require embedding the post-expectation in the loss function.
The first section of the benchmarks demonstrates that our method is capable of producing useful results on programs that are out-of-scope for previous methods. Furthermore, it is encouraging that the success ratio is high on all benchmarks. The second and third sections show benchmarks in which Farkas' Lemma can be applied. For these sections, the success ratio of our method is mostly 1.0. A high success ratio is to be expected, since these programs can also be solved by Farkas' Lemma.
The second section shows more complex benchmarks where our method was able to significantly improve upon Farkas' Lemma. The smallest improvement was over 0.04, and the largest improvement was over 0.39. The intuition here is that the neural structure allows more sophisticated ISMs to be learnt, that can better approximate how the reachability probability varies across the state space than is possible with linear templates, and thereby yield tighter probability bounds.
The third section shows relatively simple benchmarks. Here our method produces results that are marginally less tight in comparison to Farkas' Lemma. This is not surprising since our method uses neural networks consisting of a single neuron for these examples, owing to their simplicity. The expressive power of these networks is therefore similar to linear templates.
In summary, the results show our method does significantly better on more complex examples, and marginally worse on very simple examples. This is highlighted in Figure 3. Each point represents a benchmark. The position on the \(x\)-axis shows the probability bound obtained by our program-agnostic method, and the \(y\)-axis shows the probability bound obtained by Farkas' Lemma. Points above the line indicate benchmarks where neural ISMs outperform Farkas' Lemma, and vice versa. The scale is logarithmic to emphasise order-of-magnitude differences.
It is also worth considering the difference between the two loss functions used. Notice that the program-aware algorithm usually performs better than the program-agnostic algorithm, but the improvement is mostly marginal. This is in fact a strength of our method: our data-driven approach performs almost as well as one dependent on symbolic representations, which is promising in light of questions of scalability to more complex programs.
| **Benchmark** | **Farkas'** | **Program-Agnostic \(p\)** | **Success Ratio** | **Program-Aware \(p\)** | **Success Ratio** | **Network Arch.** |
| --- | --- | --- | --- | --- | --- | --- |
| persist_2d | - | \(\leq\) 0.1026 | 0.9 | \(\leq\) 0.1175 | 0.9 | (3, 1) |
| faulty_marbles | - | \(\leq\) 0.0739 | 0.9 | \(\leq\) 0.0649 | 0.8 | 3 |
| faulty_unreliable | - | \(\leq\) 0.0553 | 0.9 | \(\leq\) 0.0536 | 1.0 | 3 |
| faulty_regions | - | \(\leq\) 0.0473 | 0.9 | \(\leq\) 0.0411 | 0.9 | (3, 1) |
| cliff_crossing | \(\leq\) 0.4546 | \(\leq\) 0.0553 | 0.9 | \(\leq\) 0.0591 | 0.8 | 4 |
| repulse | \(\leq\) 0.0991 | \(\leq\) 0.0288 | 1.0 | \(\leq\) 0.0268 | 1.0 | 3 |
| repulse_uniform | \(\leq\) 0.0991 | \(\leq\) 0.0344 | 1.0 | - | - | 2 |
| repulse_2d | \(\leq\) 0.0991 | \(\leq\) 0.0568 | 1.0 | \(\leq\) 0.0541 | 1.0 | 3 |
| faulty_varying | \(\leq\) 0.1819 | \(\leq\) 0.0864 | 1.0 | \(\leq\) 0.0865 | 1.0 | 2 |
| faulty_concave | \(\leq\) 0.1819 | \(\leq\) 0.1399 | 1.0 | \(\leq\) 0.1356 | 0.9 | (3, 1) |
| fixed_loop | \(\leq\) 0.0091 | \(\leq\) 0.0095 | 1.0 | \(\leq\) 0.0094 | 1.0 | 1 |
| faulty_loop | \(\leq\) 0.0181 | \(\leq\) 0.0195 | 1.0 | \(\leq\) 0.0184 | 1.0 | 1 |
| faulty_uniform | \(\leq\) 0.0181 | \(\leq\) 0.0233 | 1.0 | \(\leq\) 0.0221 | 1.0 | 1 |
| faulty_rare | \(\leq\) 0.0019 | \(\leq\) 0.0022 | 1.0 | \(\leq\) 0.0022 | 1.0 | 1 |
| faulty_easyl | \(\leq\) 0.0801 | \(\leq\) 0.1007 | 1.0 | \(\leq\) 0.0865 | 1.0 | 1 |
| faulty_ndercr | \(\leq\) 0.0561 | \(\leq\) 0.0723 | 1.0 | \(\leq\) 0.0630 | 1.0 | 1 |
| faulty_walk | \(\leq\) 0.0121 | \(\leq\) 0.0173 | 1.0 | \(\leq\) 0.0166 | 1.0 | 1 |

Table 1: Results comparing neural ISMs with Farkas' Lemma for different benchmarks. Here, \(p\) is the average probability bound generated by the certificate; the success ratio is the number of successful experiments, out of 10 repeats, generated by CEGIS with neural ISMs; '-' means no result was obtained. We also denote the architecture of the network: \((h_{1},h_{2})\) denotes a network with 2 hidden layers consisting of \(h_{1}\) and \(h_{2}\) neurons respectively.
| **Benchmark** | **Farkas'** | **Program-Agnostic Learn** | **Program-Agnostic Verify** | **Program-Aware Learn** | **Program-Aware Verify** |
| --- | --- | --- | --- | --- | --- |
| persist_2d | - | 169.14 | 85.31 | 44.96 | 74.90 |
| faulty_marbles | - | 114.24 | 29.23 | 15.86 | 28.68 |
| faulty_unreliable | - | 123.85 | 45.48 | 18.34 | 33.97 |
| faulty_regions | - | 17.92 | 35.85 | 17.55 | 32.38 |
| cliff_crossing | 0.11 | 134.61 | 19.02 | 21.27 | 29.07 |
| repulse | 0.19 | 16.65 | 5.00 | 6.49 | 3.74 |
| repulse_uniform | 0.19 | 21.28 | 14.18 | - | - |
| repulse_2d | 0.12 | 122.92 | 64.54 | 15.75 | 47.70 |
| faulty_varying | 0.36 | 21.74 | 5.06 | 4.71 | 3.28 |
| faulty_concave | 0.39 | 49.12 | 13.37 | 13.49 | 7.82 |
| fixed_loop | 0.15 | 14.16 | 3.14 | 3.34 | 2.43 |
| faulty_loop | 0.16 | 25.52 | 3.81 | 3.73 | 2.66 |
| faulty_uniform | 0.34 | 20.20 | 1.91 | 6.75 | 1.33 |
| faulty_rare | 0.27 | 25.52 | 4.27 | 3.71 | 2.96 |
| faulty_easyl | 0.31 | 104.20 | 12.78 | 4.95 | 7.51 |
| faulty_ndercr | 0.33 | 104.89 | 9.06 | 5.37 | 4.66 |
| faulty_walk | 0.32 | 15.08 | 4.00 | 6.97 | 3.33 |

Table 2: Results showing the time taken in seconds to synthesise ISMs by our method and by Farkas' Lemma. For our method, we show the time taken during learning and verification separately.
The times taken to run the benchmarks are shown in Table 2, separated into the time taken for learning and the time taken for verification. Unsurprisingly, the time taken for synthesis varies depending on the complexity of the program. Time taken for verification is similar for both program-aware and program-agnostic synthesis. However, time taken for learning is significantly higher in the program-agnostic algorithm. This is not surprising, since the ISM needs to be evaluated many times for each state in program-agnostic synthesis, whereas the post-expectation needs to be evaluated once for each state in program-aware synthesis. Also, note that Farkas' Lemma is faster than our method. This is expected, as when the ISM certificate is instantiated to the setting of linear templates and programs, the conditions give rise to a convex optimisation problem that can be solved by linear programming (Farkas, 2018). By contrast, learning neural certificates is a non-convex optimisation problem for which we must resort to gradient descent.
### Case Studies
Having presented the experimental results, we shall look at some benchmarks in more detail.
Recall the repulse program introduced earlier (Listing 2). While this is a small program, our method is still able to produce a significantly better result than Farkas' Lemma. The linear and neural ISMs are illustrated in Figure 4(a). This neural ISM has a single hidden layer consisting of three ReLU components that are summed together. This allows a convex piecewise linear function to be learnt.
The cliff_crossing program (Listing 3) is an interesting benchmark, for which our method is capable of producing a significantly better bound. This program models a robot crossing a road that
Figure 3. Probability bounds generated using program-agnostic neural ISMs and using Farkas' Lemma. We also show the line \(y=x\) to indicate which approach outperforms and by how much: above the line indicates that neural ISMs outperform Farkas' Lemma, below the line the opposite. This demonstrates that neural ISMs can significantly outperform linear templates when a better bound exists, but otherwise achieve similar results. Note that our approach with program-aware neural ISMs provides even better outcomes, compared with Farkas' Lemma.
is adjacent to a cliff. The x-coordinate models how far along the road the robot is, with \(x=0\) being the beginning, and \(x=50\) being the end. Then the y-coordinate models how close to the cliff the robot is, with \(y=0\) being the furthest point away from the cliff, and \(y=10\) being the closest point to the cliff still on the road. Here the neural ISM consists of four ReLU components, and the benefits of a piecewise linear function occur in two dimensions, leading to a clear improvement compared to the linear ISM. These two ISMs are illustrated in Figure 5. Notice the probability bound at the initial state \((0,0)\) is significantly lower in the neural ISM.
So far, these examples have used neural ISMs with a single hidden layer. An example that uses two hidden layers is faulty_concave. The source code can be found in Listing 4. Two hidden layers are necessary in order to obtain performance that is superior to Farkas' Lemma. The benchmark faulty_concave is similar to faulty_loop, which has already been introduced. The variable \(i\) is initialised to \(1\). In each successful iteration, it is incremented with probability \(0.5\) until \(10\) is reached. In each iteration, there is a small probability of a hardware fault. Unlike faulty_loop, the probability of failure changes depending on the value of \(i\). When \(1\leq i\leq 3\), the probability of failure is \(0.001\). However, when \(4\leq i\leq 9\), the probability of failure is \(0.01\), an order of magnitude higher. A linear template cannot account for this conditional behaviour, so Farkas' Lemma produces an overly conservative ISM, which behaves as if the probability of failure were \(0.01\) in every iteration. Indeed, a neural ISM with a single hidden layer also cannot account for this branching behaviour. This is because such an ISM would always be convex. A concave ISM is needed, which requires at least two hidden layers. This is illustrated in Figure 4(b).
## 5. Related Work
### Martingales
Martingales have already been applied extensively in the context of verifying probabilistic programs.
_Ranking supermartingales_. RSMs have previously been studied to prove positive almost-sure termination [13]. This is a qualitative reachability property, in contrast to the quantitative properties studied here. In particular, [3] use a CEGIS-based approach to synthesise neural RSMs.
_Repulsing supermartingales_. RepSMs are another kind of martingale used to prove quantitative reachability properties [16]. They produce a bound using Azuma's inequality. A weakness of RepSMs is that they can only be applied to a narrow range of problems [59]. In many other cases, a valid RepSM does not exist, or such a RepSM produces a trivial bound. By contrast, ISMs can be applied to a broad range of probabilistic problems.
Figure 4. Indicating supermartingales for the repulse benchmark and faulty_concave as generated using Farkas’ Lemma and using neural ISMs. The initial state (dashed line) shows that tighter upper bounds are generated using neural ISMs compared to Farkas’ Lemma.
Figure 5. Indicating supermartingales for the cliff_crossing benchmark as generated using Farkas’ Lemma (5a) and using neural ISMs (5b). The right hand figure illustrates the tighter bounds obtainable through the use of neural templates.
_Indicating supermartingales_. This work introduces ISMs to verify quantitative reachability properties. Similar certificates have been introduced earlier under the names _non-negative repulsive supermartingales_ and _stochastic invariant indicators_(Han and Krapivsky, 2015; Krapivsky, 2016; Krapivsky, 2016). However, existing methods rely on symbolic synthesis approaches which impose further, strong conditions.
Specifically, they require the identification of deterministic invariants that are stronger than \(\mathbb{R}^{n}\). Such invariants are required by methods for the synthesis of linear certificates based on Farkas' Lemma to enforce non-negativity, and by methods for the synthesis of polynomial certificates, which additionally impose compactness to leverage Putinar's Positivstellensatz.
By contrast, neural ISMs do not rely on further conditions beyond (13a), (13b), and (13c), and leverage the expressivity of neural networks to represent certificates without requiring a deterministic invariant to restrict the domain of the function. While we ensure the soundness of our approach, this additional expressive power limits its theoretical completeness. Still, we have experimentally shown that our method is highly likely to succeed in practice over a broad variety of benchmarks.
_Cost martingales_. Previous work has used martingales and similar structures to bound the expected value of a variable in the program state when a probabilistic program terminates. This variable can be interpreted as a reward or a cost (Krapivsky, 2016; Krapivsky, 2016). This work has been extended to higher moments (Krapivsky, 2016). These techniques use linear and semidefinite programming for synthesis, unlike this work, which uses learning in a CEGIS architecture.
### Probabilistic Model Checking
A well-developed approach to reasoning about probabilistic systems is probabilistic model checking (Krapivsky, 2016; Krapivsky, 2016); a well-established tool is PRISM (Krapivsky, 2016). This approach encodes properties using probabilistic extensions of temporal logics such as LTL and CTL. Verification occurs through a combination of graph analysis and numerical approximation. These techniques can be applied to systems modelled as discrete-time Markov chains (DTMCs), continuous-time Markov chains (CTMCs) and Markov decision processes (MDPs). Probabilistic model checking is usually applied to finite state spaces, whereas this work looks at programs with infinite state spaces, and programs that combine discrete and continuous variables.
### Pre-Expectation Calculus
The weakest pre-expectation calculus is a formalism for reasoning about probabilistic programs (Krapivsky, 2016; Krapivsky, 2016; Krapivsky, 2016). It is an extension of predicate transformer semantics for classical programs (Krapivsky, 2016). The calculus relates expressions to their expected values after a program runs. The pre-expectation calculus is connected to martingales, and this is studied in (Krapivsky, 2016). In order to reason about the weakest pre-expectations of loops, invariants are used.
Recent work concentrates on automatically finding weakest pre-expectations. (Krapivsky, 2016) present a method in which finding an invariant from a template is reduced to first-order logic constraints, which are then handed to a constraint solver. (Krapivsky, 2016) use counterexample refinement and Lagrange interpolation to find polynomial invariants. (Krapivsky, 2016) introduce a method for the static analysis of probabilistic programs, based on algebraic structures. This method was applied to generate linear invariants.
Most recently, a CEGIS-like method has been used to learn invariants (Krapivsky, 2016). That work is similar to ours in that it is a data-driven approach using learning, uses a computer algebra system, and uses counterexamples to explore the state space. One difference is that it uses regression trees as models, as opposed to neural networks. A limitation is that it only supports discrete distributions, whereas our work also handles probabilistic programs that sample from continuous distributions.
### Dynamical Systems
While our work looks at discrete-time stochastic systems, CEGIS-based methods have also been used to verify continuous-time systems, specified with differential equations. CEGIS has been used to verify stability for such systems by synthesising Lyapunov functions with polynomial and neural templates (Bauer and Sigmond, 2001; Sigmond, 2002; Sigmond, 2002). Further, safety has been verified with barrier certificates (Sigmond, 2002; Sigmond, 2002).
## 6. Conclusion
Our method is the first to use neural networks as certificates for quantitative reachability analysis of probabilistic programs, to upper bound the probability that a target set is reached in finitely many steps. We exploit the expressive power of neural networks as templates for indicating supermartingales to obtain tight approximations of the reachability probability, without the need for supporting deterministic invariants. Our program-agnostic synthesis method uses gradient descent optimisation of a loss function defined over program executions to discover neural certificates, whose soundness we verify using satisfiability modulo theories in a counterexample-guided inductive synthesis loop. We provide a prototype implementation, and show the advantages of neural certificates over a diverse set of benchmarks, including their applicability to programs that lack linear/polynomial certificates, and their ability to attain tighter probability bounds on reachability. Our approach is suited to extension to further quantitative verification questions, such as obtaining probability bounds on a wider range of temporal properties beyond reachability (Bauer and Sigmond, 2002), and bounds on other types of quantities such as expected accrued costs (Sigmond, 2002; Sigmond, 2002; Sigmond, 2002).
## Appendix A. Detailed Proofs
### Proof of Lemma 2.1
From measurability of the transition kernel and (Bauer and Sigmond, 2002, Lem. 1) we have that
\[\begin{split}\mathbb{P}[Reach_{t+1}^{s_{0}}(A)]&=\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\int_{\mathbb{R}^{n}}\mathbb{P}[Reach_{t}^{s_{1}}(A)]\ T(s_{0},\mathrm{d}s_{1})\\ &=\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}[\lambda s_{1}.\,\mathbb{P}[Reach_{t}^{s_{1}}(A)]](s_{0})\qquad\text{by (4).}\end{split}\]
Case \(t=0\) follows from the definition of the reachability event in (5).
### Proof of Lemma 2.2
Since the sets in the family \(\{Reach_{t}^{s_{0}}\colon t\in\mathbb{N}\}\) are disjoint, the union in (7) that defines \(Reach_{\leq t}^{s_{0}}\) is a union of disjoint sets. By countable additivity of measure, we see that:

\[\mathbb{P}[Reach_{\leq t}^{s_{0}}(A)]=\sum_{i=0}^{t}\mathbb{P}[Reach_{i}^{s_{0}}(A)]. \tag{24}\]
By Lem. 2.1, we have that
\[\begin{split}\mathbb{P}[Reach_{\leq t+1}^{s_{0}}(A)]&=\mathbb{P}[Reach_{0}^{s_{0}}(A)]+\sum_{i=1}^{t+1}\mathbb{P}[Reach_{i}^{s_{0}}(A)]\qquad\text{by (24)}\\ &=\mathbf{1}_{A}(s_{0})+\sum_{i=1}^{t+1}\mathbb{P}[Reach_{i}^{s_{0}}(A)]\qquad\text{by (6b)}\\ &=\mathbf{1}_{A}(s_{0})+\sum_{i=0}^{t}\mathbb{P}[Reach_{i+1}^{s_{0}}(A)]\\ &=\mathbf{1}_{A}(s_{0})+\sum_{i=0}^{t}\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}[\lambda s_{1}.\,\mathbb{P}[Reach_{i}^{s_{1}}(A)]](s_{0})\qquad\text{by (6a)}\\ &=\mathbf{1}_{A}(s_{0})+\sum_{i=0}^{t}\left(\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\int_{\mathbb{R}^{n}}\mathbb{P}[Reach_{i}^{s_{1}}(A)]\ T(s_{0},\mathrm{d}s_{1})\right)\qquad\text{by (4)}\\ &=\mathbf{1}_{A}(s_{0})+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\int_{\mathbb{R}^{n}}\left(\sum_{i=0}^{t}\mathbb{P}[Reach_{i}^{s_{1}}(A)]\right)\ T(s_{0},\mathrm{d}s_{1})\\ &=\mathbf{1}_{A}(s_{0})+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\int_{\mathbb{R}^{n}}\mathbb{P}[Reach_{\leq t}^{s_{1}}(A)]\ T(s_{0},\mathrm{d}s_{1})\qquad\text{by (24)}\\ &=\mathbf{1}_{A}(s_{0})+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}[\lambda s_{1}.\,\mathbb{P}[Reach_{\leq t}^{s_{1}}(A)]](s_{0})\qquad\text{by (4)}\end{split}\tag{25}\]
Furthermore, we have that
\[\mathbb{P}[Reach_{\leq 0}^{s_{0}}(A)]=\mathbb{P}[Reach_{0}^{s_{0}}(A)]=\mathbf{1}_{A}(s_{0})\qquad\text{by (24) and (6b).}\]
### Proof of Theorem 3.1
We adapt the soundness proof of non-negative repulsing supermartingales [59] to neural certificates (which do not require an externally provided deterministic invariant to restrict their domain) in our setting of probabilistic systems (i.e. we do not consider non-deterministic transition dynamics).
We consider the set of Borel measurable functions
\[\mathcal{V}=\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\cup\{\infty\} \tag{26}\]
and give it the structure of a complete partially ordered set (CPO), [19, p. 175]:
* we extend the natural ordering relation on \(\mathbb{R}_{\geq 0}\cup\{\infty\}\) to the pointwise partial order over functions: \[U\sqsubseteq V\Leftrightarrow\forall s\in\mathbb{R}^{n}.\,U(s)\leq V(s)\] (27)
* we define the bottom element \(\bot\) as the function that always returns zero, \[\bot(x)=0,\] (28)
* for a countable increasing chain of functions \(\{V_{i}\}_{i\in\mathbb{N}}\) we define its least upper bound (lub) \(\sqcup\{V_{i}\colon i\in\mathbb{N}\}\) argumentwise: \[(\sqcup\{V_{i}\colon i\in\mathbb{N}\})\ (s)=\sqcup\{V_{i}(s)\colon i\in \mathbb{N}\}\] (29)
We introduce a higher-order function \(\Phi:\mathcal{V}\rightarrow\mathcal{V}\) defined as
\[\Phi(V)(s)=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot \mathbb{X}[V](s) \tag{30}\]
We now show that if a \(V\in\mathcal{V}\) satisfies conditions (13a) and (13b) then it is a pre-fixpoint of \(\Phi\), namely that:
\[\Phi(V)\sqsubseteq V \tag{31}\]
Let us recall that \(\Phi(V)(s)=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot \mathbb{X}[V](s)\) by definition (30). Then, we let \(s\in\mathbb{R}^{n}\) be arbitrary, and proceed by cases.
\[\text{Case }s\in A\colon\quad\Phi(V)(s)=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[V](s)=1\leq V(s)\qquad\text{by (13a);}\]
\[\text{Case }s\notin A\colon\quad\Phi(V)(s)=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[V](s)=\mathbb{X}[V](s)\leq V(s)\qquad\text{by (13b).}\]
We now aim to show that the function \(\lambda s_{0}\in\mathbb{R}^{n}.\,\mathbb{P}[Reach_{\mathrm{fin}}^{s_{0}}(A)]\) is the least fixpoint of \(\Phi\), using Kleene's Fixpoint Theorem [19, CPO Fixpoint Theorem 1, Theorem 8.15, p. 183], and to do this we have to show that \(\Phi\) is continuous, that is,
* \(\Phi\) is monotone, that is, \(U\sqsubseteq V\) implies \(\Phi(U)\sqsubseteq\Phi(V)\) and
* \(\Phi\) preserves least upper bounds, that is, \(\Phi(\sqcup\{V_{i}\colon i\in\mathbb{N}\})=\sqcup\{\Phi(V_{i})\colon i\in\mathbb{N}\}\)
_Monotonicity._ To show that \(\Phi\) is monotone, let \(s\in\mathbb{R}^{n}\) be arbitrary and assume \(U\sqsubseteq V\):
\[\begin{split}\Phi(U)(s)&=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[U](s)\\ &=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\int_{\mathbb{R}^{n}}U(s^{\prime})\ T(s,\mathrm{d}s^{\prime})\qquad\text{by (4)}\\ &\leq\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\int_{\mathbb{R}^{n}}V(s^{\prime})\ T(s,\mathrm{d}s^{\prime})\qquad\text{by }U\sqsubseteq V\text{ and (27)}\\ &=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[V](s)\qquad\text{by (4)}\\ &=\Phi(V)(s),\end{split}\]
therefore \(\Phi(U)\sqsubseteq\Phi(V)\).
_Least upper bound preservation._ We first note that \(\{\Phi(V_{i}):i\in\mathbb{N}\}\) is a countable increasing chain because \(\Phi\) is monotone. Then, to show that \(\Phi\) preserves least upper bounds, we show that for any countable increasing chain \(\{V_{i}:i\in\mathbb{N}\}\):
\[\Phi(\sqcup\{V_{i}\colon i\in\mathbb{N}\})=\sqcup\{\Phi(V_{i})\colon i\in \mathbb{N}\} \tag{32}\]
To this end, we first note that by Eq. 29, and properties of monotone increasing real sequences in the extended non-negative reals (54, Theorem 15, p.21), for arbitrary \(s\in\mathbb{R}^{n}\) we have:
\[(\sqcup\{V_{i}\colon i\in\mathbb{N}\})(s)=\sqcup\{V_{i}(s):i\in\mathbb{N}\}= \lim_{i\to\infty}V_{i}(s) \tag{33}\]
Then, we show that (33) implies (34) defined as
\[\mathbb{X}[\sqcup\{V_{i}:i\in\mathbb{N}\}](s)=(\sqcup\{\mathbb{X}[V_{i}]:i\in \mathbb{N}\})(s) \tag{34}\]
which holds because
\[\begin{split}\mathbb{X}[\sqcup\{V_{i}:i\in\mathbb{N}\}](s)&=\int_{\mathbb{R}^{n}}(\sqcup\{V_{i}:i\in\mathbb{N}\})(s^{\prime})\ T(s,\mathrm{d}s^{\prime})\qquad\text{by (4)}\\ &=\int_{\mathbb{R}^{n}}\left(\lim_{i\to\infty}V_{i}(s^{\prime})\right)\ T(s,\mathrm{d}s^{\prime})\qquad\text{by (33)}\\ &=\lim_{i\to\infty}\int_{\mathbb{R}^{n}}V_{i}(s^{\prime})\ T(s,\mathrm{d}s^{\prime})\qquad\text{by MCT}\\ &=\lim_{i\to\infty}\mathbb{X}[V_{i}](s)\qquad\text{by (4)}\\ &=(\sqcup\{\mathbb{X}[V_{i}]:i\in\mathbb{N}\})(s)\qquad\text{by (33)}\end{split}\]
This is an application of the Monotone Convergence Theorem (MCT) (7, p.78) because \(\{V_{i}\}_{i\in\mathbb{N}}\) is a countable increasing chain of non-negative functions. We now use (34) to show that \(\Phi\) preserves least upper bounds. Let \(s\in\mathbb{R}^{n}\) be arbitrary:
\[\begin{split}\Phi(\sqcup\{V_{i}\colon i\in\mathbb{N}\})(s)&=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[\sqcup\{V_{i}\colon i\in\mathbb{N}\}](s)\qquad\text{by (30)}\\ &=\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\sqcup\{\mathbb{X}[V_{i}](s)\colon i\in\mathbb{N}\}\qquad\text{by (34) and (29)}\\ &=\sqcup\{\mathbf{1}_{A}(s)+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s)\cdot\mathbb{X}[V_{i}](s)\colon i\in\mathbb{N}\}\\ &=(\sqcup\{\Phi(V_{i})\colon i\in\mathbb{N}\})\ (s)\qquad\text{by (29) and (30)}\end{split}\]
_Least fixpoint of \(\Phi\)._ Next, we define \(\Phi^{0}(V)=V\) and \(\Phi^{t+1}(V)=\Phi(\Phi^{t}(V))\), and show that the function that maps a state to its finite time reachability probability is equal to a particular least upper bound
\[\lambda s_{0}\in\mathbb{R}^{n}.\,\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s_{0 }}(A)]=\sqcup\{\Phi^{t}(\bot)\colon t\in\mathbb{N}\} \tag{35}\]
which by Kleene's Fixpoint Theorem is the least fixpoint of \(\Phi\). To achieve this, we show that:

for all \(t\in\mathbb{N}\) and \(s_{0}\in\mathbb{R}^{n}\), we have
\[\mathbb{P}[\mathit{Reach}_{\leq t}^{s_{0}}(A)]=\Phi^{t+1}(\bot)(s_{0}); \tag{36}\]
and for all \(s_{0}\in\mathbb{R}^{n}\), we have
\[\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s_{0}}(A)]=(\sqcup\{\Phi^{t}(\bot)\colon t\in\mathbb{N}\})(s_{0}). \tag{37}\]
_Proof of (36)._ We proceed by induction on \(t\in\mathbb{N}\), that is, we show
* \(\mathbb{P}[\mathit{Reach}_{\leq 0}^{s_{0}}(A)]=\Phi^{1}(\bot)(s_{0})\),
* and, assuming \(\mathbb{P}[\mathit{Reach}_{\leq t}^{s_{0}}(A)]=\Phi^{t+1}(\bot)(s_{0})\), we show that \(\mathbb{P}[\mathit{Reach}_{\leq t+1}^{s_{0}}(A)]=\Phi^{t+2}(\bot)(s_{0})\).
For the base case, we recall that \(\forall s.\,\bot(s)=0\) and therefore \(\mathbb{X}[\bot](s_{0})=0\). Then, we have
\[\begin{split}\mathbb{P}[\mathit{Reach}_{\leq 0}^{s_{0}}(A)]&=\mathbf{1}_{A}(s_{0})\qquad\text{by (25)}\\ &=\mathbf{1}_{A}(s_{0})+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}[\bot](s_{0})\qquad\text{since }\mathbb{X}[\bot](s_{0})=0\\ &=\Phi^{1}(\bot)(s_{0})\qquad\text{by (30)}\end{split}\]
For the inductive case, we leverage Lem. 2.2 as follows:
\[\begin{split}\mathbb{P}[\mathit{Reach}_{\leq t+1}^{s_{0}}(A)]&=\mathbf{1}_{A}(s_{0})+\mathbf{1}_{\mathbb{R}^{n}\setminus A}(s_{0})\cdot\mathbb{X}[\lambda s_{1}.\,\mathbb{P}[\mathit{Reach}_{\leq t}^{s_{1}}(A)]](s_{0})\qquad\text{by Lem.~2.2}\\ &=\Phi(\lambda s_{1}.\,\mathbb{P}[\mathit{Reach}_{\leq t}^{s_{1}}(A)])(s_{0})\qquad\text{by (30)}\\ &=\Phi(\Phi^{t+1}(\bot))(s_{0})\qquad\text{by IH}\\ &=\Phi^{t+2}(\bot)(s_{0})\end{split}\]
_Proof of (37)._ We leverage Lem. 2.3 and (36):
\[\begin{split}\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s_{0}}(A)]&=\lim_{t\to\infty}\mathbb{P}[\mathit{Reach}_{\leq t}^{s_{0}}(A)]\qquad\text{by Lem.~2.3}\\ &=\lim_{t\to\infty}\Phi^{t+1}(\bot)(s_{0})\qquad\text{by (36)}\\ &=\lim_{t\to\infty}\Phi^{t}(\bot)(s_{0})\\ &=(\sqcup\{\Phi^{t}(\bot)\colon t\in\mathbb{N}\})(s_{0})\qquad\text{by (33)}\end{split}\]
_Conclusion._ By (37) we conclude (35). By (35), and continuity of \(\Phi\), we invoke Kleene's Fixpoint Theorem [19, CPO Fixpoint Theorem 1, Theorem 8.15, p. 183] to conclude that \(\lambda s_{0}\in\mathbb{R}^{n}.\,\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s_ {0}}(A)]\) is the least fixpoint of \(\Phi\). Therefore, \(\lambda s_{0}\in\mathbb{R}^{n}.\,\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s_ {0}}(A)]\) is also the least pre-fixpoint of \(\Phi\). This implies that \(\left(\lambda s_{0}\in\mathbb{R}^{n}.\,\mathbb{P}[\mathit{Reach}_{\mathrm{fin }}^{s_{0}}(A)]\right)\sqsubseteq V\) and therefore \(V(s)\geq\mathbb{P}[\mathit{Reach}_{\mathrm{fin}}^{s}(A)]\) follows.
2303.10139 | Distill n' Explain: explaining graph neural networks using simple surrogates | Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza | 2023-03-17T17:27:18Z | http://arxiv.org/abs/2303.10139v2

# Distill n' Explain: explaining graph neural networks using simple surrogates
distillation error) to the faithfulness of explanations. | Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza | 2023-03-17T17:27:18Z | http://arxiv.org/abs/2303.10139v2 | # Distill n' Explain: explaining graph neural networks using simple surrogates
###### Abstract
Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve predictions. Finding these structures usually implies backpropagating through the GNN, bonding the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally begs the question: _Can we break this bond by explaining a simpler surrogate GNN?_ To answer the question, we propose _Distill n' Explain_ (DnX). First, DnX learns a surrogate GNN via _knowledge distillation_. Then, DnX extracts node or edge-level explanations by solving a simple convex program. We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model. Experiments show that DnX and FastDnX often outperform state-of-the-art GNN explainers while being orders of magnitude faster. Additionally, we support our empirical findings with theoretical results linking the quality of the surrogate model (i.e., distillation error) to the faithfulness of explanations.
## 1 Introduction
Graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009) have become the pillars of representation learning on graphs. Typical GNNs resort to message passing on input graphs to extract meaningful node/graph representations for the task at hand. Despite the success of GNNs in many domains (Stokes et al., 2020; Gilmer et al., 2017; Ying et al., 2018; Sanchez-Gonzalez et al., 2020), their architectural design often results in models with limited interpretability. This naturally makes it hard to diagnose scenarios in which GNNs are fooled by confounding effects or align poorly with expert knowledge.
To mitigate this lack of interpretability, a popular strategy is to use post-hoc explanation methods (Ribeiro et al., 2016; Lundberg and Lee, 2017; Slack et al., 2021; Han et al., 2022; Huang et al., 2022). The idea is to increase model transparency by highlighting input/model elements that are particularly important for predictions, helping users to understand what is happening under the hood.
There has been a recent outbreak of methods for explaining GNNs (Yuan et al., 2022). Although GNN explanations can come in different flavors (Ying et al., 2019; Yuan et al., 2021; Wang et al., 2021; Lucic et al., 2022; Yuan et al., 2020), they usually take the form of (minimal) substructures of input graphs that are highly influential to the prediction we want to explain. The seminal work of Ying et al. (2019, GNNExplainer) proposes learning a _soft_ mask to weigh graph edges. To find meaningful masks, GNNExplainer maximizes the mutual information between the GNN predictions given the original graph and the masked one. To alleviate the burden of optimizing again whenever we want to explain a different node, Luo et al. (2020, PGExplainer) propose using node embeddings to parameterize the masks, i.e., amortizing the inference. Nonetheless, GNNExplainer and PGExplainer impose strong assumptions on our access to the GNN we are trying to explain. The former assumes we are able to back-propagate through the GNN. The latter further assumes that we can access hidden activations of the GNN. Vu and Thai (2020, PGMExplainer) relieve these assumptions by approximating the local behavior of the GNN with a probabilistic graphical model (PGM) over components, which can be used to rank the relevance of nodes and edges. On the other hand, getting explanations from PGMExplainer involves learning the structure of a PGM, and may not scale well.
In this work, we adopt the same black-box setting of Vu and Thai (2020) but severely cut down on computational cost by extracting explanations from a _global_ surrogate model. In particular, we propose _Distill n' Explain_ (DnX). DnX uses knowledge distillation to learn a simple GNN \(\Psi\), e.g. simple graph convolution (Wu et al., 2019, SGC), that mimics the behavior of the GNN \(\Phi\) we want to explain. Then, it solves a simple convex program to find a mask that weighs the influence of each node in the output of \(\Psi\). We also propose FastDnX, a variant of DnX that leverages the
linear nature of our surrogate to speed up the explanation procedure. Notably, we only require evaluations of \(\Phi\) to learn the surrogate \(\Psi\) and, after \(\Psi\) is fixed, we can use it to explain any node-level prediction. To back up the intuition that explaining a surrogate instead of the original GNN is a sensible idea, we provide a theoretical result linking the distillation quality to the faithfulness of our explanations.
Experiments on eight popular node classification benchmarks show that DnX and FastDnX often outperform GNN-, PG-, and PGM-Explainers. We also demonstrate that both DnX and FastDnX are much faster than the competitors. Remarkably, FastDnX presents a speedup of up to \(65K\times\) over GNNExplainer. Finally, we discuss the limitations of current benchmarks and show that explainers capable of leveraging simple inductive biases can ace them.
**Our contributions** are three-fold:
1. we propose a new framework for GNN explanations that treats GNNs as black-box functions and hinges on explaining a simple surrogate model obtained through knowledge distillation;
2. we provide theoretical bounds on the quality of explanations based on these surrogates, linking the error in the distillation procedure to the faithfulness of the explanation;
3. we carry out extensive experiments, showing that our methods outperform the prior art while running orders of magnitude faster.
## 2 Background
**Notation.** We define a graph \(\mathcal{G}=(V,E)\), with a set of nodes \(V=\{1,\dots,n\}\) and a set of edges \(E\subseteq V\times V\). We denote the adjacency matrix of \(\mathcal{G}\) by \(A\in\mathbb{R}^{n\times n}\), i.e., \(A_{ij}\) is one if \((i,j)\in E\) and zero otherwise. Let \(D\) be the diagonal degree matrix of \(\mathcal{G}\), i.e., \(D_{ii}=\sum_{j}A_{ij}\). We also define the _normalized_ adjacency matrix with added self-loops as \(\widetilde{A}=(D+I_{n})^{-1/2}(A+I_{n})(D+I_{n})^{-1/2}\), where \(I_{n}\) is the \(n\)-dimensional identity matrix. Furthermore, let \(X\in\mathbb{R}^{n\times d}\) be a matrix of \(d\)-dimensional node features. Throughout this work, we often represent a graph \(\mathcal{G}\) using the pair \((A,X)\).
Graph neural networks (GNNs). We consider the general framework of message-passing GNNs (Gilmer et al., 2017). Typical GNNs interleave aggregation and update steps at each layer. Specifically, for each node \(v\) at layer \(\ell\), the aggregation is a nonlinear function of the \((\ell-1)\)-layer representations of \(v\)'s neighbors. The update step computes a new representation for \(v\) based on its representation at layer \(\ell-1\) and the aggregated messages (output of the aggregation step). Here we cover two specific GNN architectures: graph convolutional networks (Kipf and Welling, 2017, GCNs) and simplified graph convolutions (Wu et al., 2019, SGC). The former is arguably the most popular GNN in the literature and is used profusely throughout our experiments. The latter is a linear graph model, which will be an asset to our explanation method. For a more thorough overview of GNNs, we refer the reader to Hamilton (2020).
Graph convolutional networks combine local filtering operations (i.e., graph convolutions) and non-linear activation functions (most commonly ReLU) at each layer. Denoting the weights of the \(\ell\)-th GCN layer by \(W^{(\ell)}\) and the element-wise activation function by \(\sigma\), we can recursively write the output of the \(\ell\)-th layer \(H^{(\ell)}\) as:
\[H^{(\ell)}=\sigma\left(\widetilde{A}H^{(\ell-1)}W^{(\ell)}\right), \tag{1}\]
where \(H^{(0)}=X\). To obtain node-level predictions, we propagate the final embeddings -- after an arbitrary number of layers -- through a modified convolution with a row-wise softmax instead of \(\sigma\), i.e., \(\hat{Y}=\operatorname{softmax}(\widetilde{A}H^{(\ell)}W^{(\ell+1)})\). In practice, it is also common to apply multilayer perceptron on top of the final embeddings.
SGC can be viewed as a simplification of the GCN model. Wu et al. (2019) derive SGC by removing the nonlinear activation functions in GCNs. Consequently, the chained linear transformations become redundant and we can use a single parameter matrix \(\Theta\). Thus, node predictions from an \(L\)-layer SGC are:
\[\hat{Y}=\operatorname{softmax}(\widetilde{A}^{L}X\Theta). \tag{2}\]
Interestingly, Wu et al. (2019) showed that SGC often performs similarly to or better than GCN in a variety of node classification tasks. On top of that, training SGCs is computationally more efficient than training GCNs, and SGC has significantly fewer parameters.
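For concreteness, a minimal NumPy sketch of the SGC forward pass in Equation 2 is given below; the dense-matrix representation and the helper name are our own choices.

```python
import numpy as np

def sgc_predict(A, X, Theta, L=2):
    """Node predictions of an L-layer SGC (Eq. 2) for a dense 0/1 adjacency A."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                          # add self-loops: A + I_n
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # (D + I_n)^{-1/2}
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    logits = np.linalg.matrix_power(A_norm, L) @ X @ Theta
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # row-wise softmax
```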
## 3 DnX: Distill n' Explain
We now introduce DnX -- a new post-hoc explanation method for GNNs. DnX comprises two steps: knowledge distillation and explanation extraction. During the former, we use a linear GNN \(\Psi\) to approximate the predictions from the GNN \(\Phi\) we want to explain. In the second step, we extract explanations directly from \(\Psi\) (instead of \(\Phi\)). We hypothesize that, as long as \(\Psi\) is a good approximation of \(\Phi\), substructures highly influential to the output of \(\Phi\) should also be relevant to \(\Psi\). Therefore, explanations of our surrogate should also explain well the original GNN. To obtain explanations, we exploit the linear nature of \(\Psi\) and propose two simple procedures. The first consists of solving a convex program. The second ranks nodes based on a simple decomposition of predictions into additive terms.
Following Vu and Thai (2020), we assume \(\Phi\) is a black-box model that we can only probe to get outputs. More
specifically, we cannot access gradients of \(\Phi\), nor can we access inner layers to extract node embeddings.
### Knowledge distillation
We use SGC (Wu et al., 2019) to approximate the predictions obtained with the GNN \(\Phi\). Formally, the surrogate model (SGC) \(\Psi\) receives the input graph \(\mathcal{G}=(A,X)\) and provides class predictions \(\hat{Y}^{(\Psi_{\Theta})}=\operatorname{softmax}(\widetilde{A}^{L}X\Theta)\), where \(\Theta\) is the matrix of model parameters, and \(L\) is a hyper-parameter.
The distillation process consists of adjusting the parameters of \(\Psi_{\Theta}\) to match its predictions to those of the network \(\Phi\). We do so by minimizing the Kullback-Leibler divergence \(\operatorname{KL}\) between the predictions of \(\Phi\) and \(\Psi_{\Theta}\). Let \(\hat{Y}_{i}^{(\Psi_{\Theta})}\) and \(\hat{Y}_{i}^{(\Phi)}\) denote the class predictions for node \(i\) from the \(\Psi_{\Theta}\) and \(\Phi\) models, respectively. We distill \(\Phi\) into \(\Psi\) by solving:
\[\min_{\Theta}\left\{\operatorname{KL}\left(\hat{Y}^{(\Phi)},\hat{Y}^{(\Psi_{ \Theta})}\right)\coloneqq\sum_{i\in V}\sum_{c}\hat{Y}_{ic}^{(\Phi)}\log\frac{ \hat{Y}_{ic}^{(\Phi)}}{\hat{Y}_{ic}^{(\Psi_{\Theta})}}\right\}, \tag{3}\]
whose minimization is equivalent to minimizing the categorical cross-entropy between \(\hat{Y}^{(\Phi)}\) and \(\hat{Y}^{(\Psi_{\Theta})}\), since the entropy of \(\hat{Y}^{(\Phi)}\) is constant in \(\Theta\). Note that minimizing this loss does not require back-propagating through the original GNN \(\Phi\), only through the surrogate \(\Psi\). We also do not require any knowledge about \(\Phi\)'s architecture.
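The distillation step only requires probing \(\Phi\) for its output probabilities. Below is a minimal sketch of minimizing Equation 3, using the hyperparameters reported later in the experimental setup; `teacher_probs` stands for the probed \(n\times C\) matrix \(\hat{Y}^{(\Phi)}\), and all names are ours:

```python
import torch

def distill(sgc, A_tilde, X, teacher_probs, epochs=10000, lr=0.1, wd=5e-6):
    """Fit the surrogate Psi to the probed predictions of the black-box GNN Phi (Equation 3)."""
    opt = torch.optim.AdamW(sgc.parameters(), lr=lr, weight_decay=wd)
    for _ in range(epochs):
        opt.zero_grad()
        student_probs = sgc(A_tilde, X)
        # KL(teacher || student) equals this cross-entropy up to a constant in Theta
        loss = -(teacher_probs * torch.log(student_probs + 1e-12)).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return sgc
```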
### Explanation extraction
To obtain an explanation for a given prediction \(\hat{Y}_{i}^{(\Psi_{\Theta})}\), we want to identify a subgraph of \(\mathcal{G}\) containing the nodes that most influence that prediction. We denote an explanation \(\mathcal{E}\) as an \(n\)-dimensional vector of importance scores (higher equals more relevant), one for each node in the vertex set \(V\). We introduce two strategies to compute \(\mathcal{E}\).
Optimizing for \(\mathcal{E}\).We can formulate the problem of finding the explanation \(\mathcal{E}\) by treating it as a vector of 0-1 weights, and minimizing the squared \(L_{2}\) norm between the logits associated with \(\hat{Y}_{i}^{(\Psi_{\Theta})}\) and those from the graph with node features masked by \(\mathcal{E}\):
\[\min_{\mathcal{E}\in\{0,1\}^{n}}\parallel\widetilde{A}_{i}^{L}\mathrm{diag}( \mathcal{E})X\Theta-\widetilde{A}_{i}^{L}X\Theta\parallel_{2}^{2}, \tag{4}\]
where \(\widetilde{A}_{i}^{L}\) denotes the \(i\)-th row of the matrix \(\widetilde{A}^{L}\). Note that the formulation in Equation 4 has a major issue: it admits the trivial solution \(\mathcal{E}=[1,1,\dots,1]\). To circumvent the issue and simultaneously avoid binary optimization, we replace the search space \(\{0,1\}^{n}\) by the \((n-1)\)-simplex \(\Delta=\{r\in\mathbb{R}^{n}:\sum_{i}r_{i}=1,\;r_{i}\geq 0\ \forall i\}\). Implementing this change and re-arranging computations, we wind up with:
\[\min_{\mathcal{E}\in\Delta}\left\|\widetilde{A}_{i}^{L}\left(\mathrm{diag}( \mathcal{E})-I_{n}\right)X\Theta\right\|_{2}^{2}. \tag{5}\]
Note that nodes outside the \(L\)-hop neighborhood of node \(i\) do not affect how \(\Psi\) classifies it. Thus, we can mask all nodes at distance \(\geq L+1\) without altering the solution of Equation 5. For ease of implementation, we solve Equation 5 by reparameterizing \(\mathcal{E}\) as a softmax-transformed vector.
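A sketch of this extraction step, using the softmax reparameterization just mentioned; `A_tilde_L` is the precomputed \(\widetilde{A}^{L}\), `Theta` the learned surrogate weights, and the optimizer settings are illustrative:

```python
import torch

def dnx_explain(A_tilde_L, X, Theta, i, steps=500, lr=0.1):
    """Approximately solve Equation 5 for node i via softmax reparameterization."""
    a_i = A_tilde_L[i]                         # i-th row of A~^L
    target = a_i @ (X @ Theta)                 # unmasked logits of node i (constant)
    w = torch.zeros(X.shape[0], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        E = torch.softmax(w, dim=0)            # E always lies on the (n-1)-simplex
        masked = (a_i * E) @ (X @ Theta)       # a_i diag(E) X Theta, written element-wise
        loss = ((masked - target) ** 2).sum()
        loss.backward()
        opt.step()
    return torch.softmax(w, dim=0).detach()    # node importance scores E
```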
Finding \(\mathcal{E}\) via linear decomposition.Let \(Z_{i}\) denote the logit vector associated with the prediction \(\hat{Y}_{i}^{(\Psi_{\Theta})}\). Due to the linear nature of \(\Psi\), we can decompose \(Z_{i}\) into a sum of \(n\) terms, one for each node in \(V\) (plus the bias):
\[\widetilde{A}_{i1}^{L}X_{1}\Theta+\widetilde{A}_{i2}^{L}X_{2}\Theta+\dots+ \widetilde{A}_{in}^{L}X_{n}\Theta+b=Z_{i}. \tag{6}\]
Therefore, we can measure the contribution of each node to the prediction as its scalar projection onto \(Z_{i}-b\):
\[\mathcal{E}_{j}\coloneqq\widetilde{A}_{ij}^{L}X_{j}\Theta(Z_{i}-b)^{\intercal} \tag{7}\]
When we use this strategy instead of solving Equation 5, we refer to our method as FastDnX.
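In code, FastDnX amounts to a single pass of matrix arithmetic per node; a minimal sketch of Equations 6-7 with illustrative names:

```python
import torch

def fastdnx_explain(A_tilde_L, X, Theta, b, i):
    """Score node j by projecting its additive term onto Z_i - b (Equations 6-7)."""
    terms = A_tilde_L[i].unsqueeze(1) * (X @ Theta)   # n x C; row j is A~^L_ij X_j Theta
    Z_i = terms.sum(dim=0) + b                        # logits of node i (Equation 6)
    return terms @ (Z_i - b)                          # n importance scores, no optimization
```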
## 4 Analysis
In this section, we discuss the theoretical and computational aspects of our method. We first provide theoretical results supporting the hypothesis that good explanations of a global surrogate \(\Psi\) also characterize good explanations of \(\Phi\) -- in terms of faithfulness. Then, we discuss the convexity of the optimization problem DnX solves to extract explanations. We delegate proofs to Appendix A.
Let \(\mathcal{G}_{u}\) denote the subgraph of \(\mathcal{G}\) induced by the \(L\)-hop neighborhood around node \(u\). We say an explanation \(\mathcal{E}_{u}\) for a node \(u\) is faithful with respect to \(\Phi\) if: i) \(\Phi\) outputs approximately the same predictions for \(u\) regardless of using \(\mathcal{E}_{u}\) to weigh the nodes of \(\mathcal{G}_{u}\) or not; and ii) the same holds under small perturbations of \(\mathcal{G}_{u}\). We can define a perturbation \(\mathcal{G}_{u}^{\prime}\) of \(\mathcal{G}_{u}\) by adding noise to \(u\)'s features or by randomly rewiring node \(u\)'s incident edges (Agarwal et al., 2022). In this work, we consider perturbations over node features. More precisely, this entails that \(V(\mathcal{G}_{u}^{\prime})=V(\mathcal{G}_{u})\), \(E(\mathcal{G}_{u}^{\prime})=E(\mathcal{G}_{u})\), and that features are corrupted by noise, i.e., \(X_{i}^{\prime}=X_{i}+\epsilon_{i}\) for \(i\in V(\mathcal{G}_{u})\) and \(\epsilon_{i}\in\mathbb{R}^{d}\).
**Definition 1** (Faithfulness).: _Given a set \(\mathcal{K}\) of perturbations of \(\mathcal{G}_{u}\), an explanation \(\mathcal{E}_{u}\) is faithful to a model \(f\) if_
\[\frac{1}{|\mathcal{K}|+1}\sum_{\mathcal{G}_{u}^{\prime}\in\mathcal{K}\cup \{\mathcal{G}_{u}\}}\left\|f(\mathcal{G}_{u}^{\prime})-f(t(\mathcal{G}_{u}^{ \prime},\mathcal{E}_{u}))\right\|_{2}\leq\delta,\]
_where \(\mathcal{G}_{u}^{\prime}\) is a possibly perturbed version of \(\mathcal{G}_{u}\), \(t\) is a function that applies the explanation \(\mathcal{E}_{u}\) to the graph \(\mathcal{G}_{u}^{\prime}\), and \(\delta\) is a small constant (Agarwal et al., 2022)._
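Definition 1 can be estimated empirically by sampling feature perturbations. A minimal sketch, assuming `model` maps \((\widetilde{A},X)\) to row-wise class probabilities and taking \(t\) to weigh node features by \(\mathcal{E}_{u}\) (one concrete choice of \(t\); names are ours):

```python
import torch

def unfaithfulness(model, A_tilde, X, E_u, u, K=10, sigma=0.1):
    """Empirical LHS of Definition 1 with Gaussian feature perturbations."""
    variants = [X] + [X + sigma * torch.randn_like(X) for _ in range(K)]
    gap = 0.0
    with torch.no_grad():
        for X_p in variants:                   # the original graph plus K perturbations
            full = model(A_tilde, X_p)[u]
            masked = model(A_tilde, E_u.unsqueeze(1) * X_p)[u]   # t(G', E_u)
            gap += torch.linalg.norm(full - masked).item()
    return gap / len(variants)
```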
Lemma 1 provides an upper bound on the unfaithfulness of \(\mathcal{E}_{u}\) with respect to the surrogate model \(\Psi\). Theorem 1 extends this result to obtain a bound for \(\mathcal{E}_{u}\) with respect to the model we originally want to explain, i.e., \(\Phi\).
**Lemma 1** (Unfaithfulness with respect to \(\Psi\)).: _Given a node \(u\) and a set \(\mathcal{K}\) of perturbations, the unfaithfulness of the explanation \(\mathcal{E}_{u}\) with respect to the prediction \(Y_{u}^{(\Psi_{\Theta})}\) of node \(u\) is bounded as follows:_
\[\frac{1}{|\mathcal{K}|+1}\sum_{\begin{subarray}{c}\mathcal{G}_{u}^{\prime}\in \\ \mathcal{K}\cup\{\mathcal{G}_{u}\}\end{subarray}}\left\|\Psi(\mathcal{G}_{u}^{ \prime})-\Psi(t(\mathcal{G}_{u}^{\prime},\mathcal{E}_{u}))\right\|_{2}\leq \gamma\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2},\]
_where \(\mathcal{G}_{u}^{\prime}\) is a possibly perturbed version of \(\mathcal{G}_{u}\), \(t\) is a function that applies the explanation \(\mathcal{E}_{u}\) to the graph \(\mathcal{G}_{u}^{\prime}\), \(\gamma\) is a constant that depends on the model weights \(\Theta\), node features \(X\), and perturbation \(\epsilon\). Furthermore, \(\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\) is the \(u\)-th row of the difference of the powered, normalized adjacency matrix \(\widetilde{A}^{L}\) before and after applying the explanation \(\mathcal{E}_{u}\)._
Sketch of the proof.: We first show that
\[\left\|\Psi(\mathcal{G}_{u})-\Psi(t(\mathcal{G}_{u},\mathcal{E}_{u}))\right\| _{2}\leq\left\|(X\Theta)^{\intercal}\right\|_{2}\left\|\widetilde{A}_{u}^{L}- \widetilde{E}_{u}^{L}\right\|_{2}\]
by using Lipschitz continuity of the softmax function and the compatibility property of the \(L_{2}\) matrix norm. We repeat for \(\mathcal{G}_{u}^{\prime}\in\mathcal{K}\), take the mean in \(\mathcal{K}\cup\{\mathcal{G}_{u}\}\) and isolate \(\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}= \left\|\widetilde{A}_{u}^{L}-\widetilde{E}_{u}^{L}\right\|_{2}\). The complete proof is available in Appendix A.
**Theorem 1** (Unfaithfulness with respect to \(\Phi\)).: _Under the same assumptions of Lemma 1 and assuming the \(L_{2}\) distillation error is bounded by \(\alpha\), the unfaithfulness of the explanation \(\mathcal{E}_{u}\) for the original model \(\Phi\)'s node \(u\) prediction is bounded as follows:_
\[\frac{1}{|\mathcal{K}|+1}\sum_{\begin{subarray}{c}\mathcal{G}_{u}^{\prime}\in \\ \mathcal{K}\cup\{\mathcal{G}_{u}\}\end{subarray}}\left\|\Phi(\mathcal{G}_{u}^{ \prime})-\Phi(t(\mathcal{G}_{u}^{\prime},\mathcal{E}_{u}))\right\|_{2}\leq \zeta\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}\] \[+2\alpha.\]
Note that Theorem 1 establishes a bound on faithfulness that depends directly on the distillation error \(\alpha\). Importantly, when \(\Psi\) is a perfect approximation of \(\Phi\), we retrieve the upper bound on the RHS of Lemma 1.
We note that Theorem 1 by Agarwal et al. (2022) covers an upper bound for the unfaithfulness of GNN explanation methods. However, they do not cover the case in which the explanation is a (weighted) subset of nodes in the \(L\)-hop neighborhood of \(u\), as in our method.
For completeness, we also extend Lemma 1 and Theorem 1 to account for the (very often) probabilistic nature of the noise, i.e., for the case in which \(\epsilon_{i}\) are random variables.
**Lemma 2** (Probability bound on unfaithfulness _w.r.t. \(\Psi\)).: _Given a node \(u\) and a set \(\mathcal{K}\) of perturbations and assuming the perturbations are i.i.d. with distribution \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\), the unfaithfulness of the explanation \(\mathcal{E}_{u}\) with respect to the prediction \(Y_{u}^{(\Psi_{\Theta})}\) of node \(u\) is bounded in probability as follows:_
\[\mathbb{P}\left(\frac{1}{|\mathcal{K}|+1}\sum_{\begin{subarray}{c}\mathcal{G}_{u}^{\prime}\in\\ \mathcal{K}\cup\{\mathcal{G}_{u}\}\end{subarray}}\left\|\Psi(\mathcal{G}_{u}^{\prime})-\Psi(t(\mathcal{G}_{u}^{\prime},\mathcal{E}_{u}))\right\|_{2}\leq\xi\right)\geq\] \[\geq F_{\chi^{2}_{|\mathcal{K}|nd}}\left(\frac{\xi-\gamma_{1}\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}}{\gamma_{2}\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}\sigma}-|\mathcal{K}|\right)\]
_where \(\gamma_{1}\) is a constant that depends on the model weights \(\Theta\) and node features \(X\), \(\gamma_{2}\) is a constant that depends on the model weights \(\Theta\), and \(F_{\chi^{2}_{|\mathcal{K}|nd}}\) is the c.d.f. of a chi-square r.v. with \(|\mathcal{K}|\times n\times d\) degrees of freedom, where \((n,d)\) are the row- and column-wise dimensions of \(X\)._
**Theorem 2** (Probability bound on unfaithfulness _w.r.t. \(\Phi\)).: _Under the same assumptions of Lemma 2 and assuming the \(L_{2}\) distillation error is bounded by \(\alpha\), the unfaithfulness of the explanation \(\mathcal{E}_{u}\) for the original model \(\Phi\)'s node \(u\) prediction is bounded in probability as follows:_
\[\mathbb{P}\left(\frac{1}{|\mathcal{K}|+1}\sum_{\begin{subarray}{c}\mathcal{G}_{u}^{\prime}\in\\ \mathcal{K}\cup\{\mathcal{G}_{u}\}\end{subarray}}\left\|\Phi(\mathcal{G}_{u}^{\prime})-\Phi(t(\mathcal{G}_{u}^{\prime},\mathcal{E}_{u}))\right\|_{2}\leq\xi\right)\geq\] \[\geq F_{\chi^{2}_{|\mathcal{K}|nd}}\left(\frac{\xi-\gamma_{1}\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}-2\alpha}{\gamma_{2}\left\|\mathop{\Delta}_{\mathcal{E}_{u}}\widetilde{A}_{u}^{L}\right\|_{2}\sigma}-|\mathcal{K}|\right)\]
In Lemma 2 and Theorem 2, when the variance \(\sigma^{2}\) approaches zero, the perturbations become deterministic and the probability on the RHS converges to one. We note that the numerators in the RHS must be non-negative.
Recall DnX/FastDnX's pipeline involves two steps: model distillation (Equation 3) and explanation extraction (Equation 5). The former is done only once to learn the surrogate \(\Psi\). The latter, however, must be executed for each node whose prediction we want to explain. Then, gauging the cost of the extraction step may become a genuine concern from a practical point of view, especially for DnX, which implies solving an optimization problem repeatedly. Fortunately, the loss landscape of our extraction problem depends only on the shape of \(\Psi\), and not on the original GNN \(\Phi\) as in GNNExplainer. Since \(\Psi\) is an SGC, Equation 5 is a convex program (Theorem 3) and we reach global optima using, e.g., gradient-based algorithms.
**Theorem 3** (Convexity of DnX).: _The optimization problem of Equation 5 is convex._
## 5 Additional related works
Explanations for GNNs.The ever-increasing application of GNNs to support high-stakes decisions in critical domains (Stokes et al., 2020; Jimenez-Luna et al., 2020; Derrow-Pinion et al., 2021) has recently boosted interest in explainability methods for graph models. Pope et al. (2019) first extended classical gradient-based explanation methods for GNNs. Importantly, Ying et al. (2019) introduced GNNExplainer and synthetic benchmarks that have been widely adopted to assess GNN explainers. Building on parameterized explainers by Luo et al. (2020), Wang et al. (2021) proposed ReFine to leverage both global information (e.g., class-wise knowledge) via pre-training and local information (i.e., instance-specific patterns) using a fine-tuning process. Lucic et al. (2022); Bajaj et al. (2021) investigated counterfactual explanations for GNNs, aiming to find minimal perturbations to the input graph such that the prediction changes, e.g., using edge deletions. Feng et al. (2021) proposed measuring the contribution of different components of the input graph to the GNN prediction by decomposing the information generation and aggregation mechanism of GNNs. Recently, Zhang et al. (2022) introduced a structure-aware scoring function derived from cooperative game theory to determine node importance. Explainability methods for GNNs have also been approached through the lens of causal inference (Lin et al., 2021, 2022). For a more comprehensive coverage of the literature, we refer the reader to Yuan et al. (2022).
Knowledge distillation.Since the pivotal work of Hinton et al. (2015), condensing the knowledge from a possibly complex _teacher_ model into a simpler _student_ surrogate has been an active research topic (e.g. Vadera et al., 2020; Malinin et al., 2020; Ryabinin et al., 2021; Zhou et al., 2022; Hen et al., 2021). Nonetheless, despite numerous works using distillation in image domains (e.g. Rebuffi et al., 2017; Douillard et al., 2021; Baek et al., 2022), the distillation of GNNs is still a blooming direction. Yang et al. (2020) proposed the first method for GNN distillation, using a structure-preserving module to explicitly factor in the topological structure embedded by the teacher. Joshi et al. (2021) proposed using contrastive learning to implicitly align the node embeddings of the student and the teacher in a common representation space. Jing et al. (2021) combined the knowledge of complementary teacher networks into a single student using a dedicated convolutional operator and topological attribution maps. Zhang et al. (2022) used an attention mechanism to weigh different teachers depending on the local topology of each node.
## 6 Experiments
In this section, we assess the performance of DnX and FastDnX on several popular benchmarks, including artificial and real-world datasets. We have implemented experiments using PyTorch (Paszke et al., 2017) and Torch Geometric (Fey and Lenssen, 2019). Our code is available at [https://github.com/tamararruda/DnX](https://github.com/tamararruda/DnX).
### Experimental setup
Datasets.We consider six synthetic datasets broadly used for evaluating explanations of GNNs: BA-House-Shapes, BA-Community, BA-Grids, Tree-Cycles, Tree-Grids, and BA-Bottle-Shaped. These datasets are available in (Ying et al., 2019) and (Vu and Thai, 2020). Each dataset is a single graph with multiple copies of identical motifs connected to base subgraphs. These subgraphs consist either of random graphs sampled from the Barabasi-Albert (BA) model (Barabasi and Albert, 1999) or of 8-level balanced binary trees. An explanation associated with a motif-node must only include motif elements. Thus, base nodes denote information irrelevant to the prediction of any node.
We also use two real-world datasets: Bitcoin-Alpha and Bitcoin-OTC (Kumar et al., 2016, 2018). These datasets denote networks in which nodes correspond to user accounts that trade Bitcoin. A directed edge \((u,v)\) (between users \(u\) and \(v\)) denotes the degree of reliability assigned by \(u\) to \(v\), i.e., each edge has a score denoting the degree of trust. Appendix B provides more details regarding datasets.
Baselines.We compare DnX against three baseline explainers: GNNExplainer (Ying et al., 2019), PGExplainer (Luo et al., 2020), and PGMExplainer (Vu and Thai, 2020). To ensure a valid comparison, we closely follow guidelines and the evaluation setup from the original works. We first generate explanations for a 3-layer GCN (Kipf and Welling, 2017) with ReLU activation. We also consider three additional architectures: graph isomorphism networks (GIN) (Xu et al., 2019), gated graph sequence neural networks (GATED) (Li et al., 2016) and auto-regressive moving average GNNs (ARMA) (Bianchi et al., 2022). This allows for evaluating the robustness and performance of explainers across GNNs of different complexities.
Implementation details.We use an 80/10/10% (train/val/test) split for all datasets. All GNNs have 3 layers and are trained for \(1000\) epochs, with early stopping if the validation accuracy does not improve in \(100\) consecutive epochs. We train all baseline GNNs using Adam (Kingma and Ba, 2015) with a learning rate of 0.01 and a weight decay of \(5.0\times 10^{-4}\). We show the performance of these GNNs on the benchmark datasets in the supplementary material. Importantly, we observe accuracy \(\geq 95\%\) for most data/model combinations.
For the distillation phase in DnX, we use an SGC model with \(3\) layers. We use the predictions for all nodes to train the surrogate SGC. For the optimization, we use
AdamW (Loshchilov and Hutter, 2019) with a learning rate of \(0.1\), a weight decay of \(5.0\times 10^{-6}\), and \(10000\) epochs.
It is worth mentioning that PGExplainer and GNNExplainer -- as described in the experimental section of their respective papers -- output edge-level explanations, so their results are not immediately comparable to that of our methods and PGMExplainer. More specifically, the former two output importance scores for each edge. On the other hand, our methods and PGMExplainer output node importance scores. Therefore, we convert edge-level explanations to node-level ones by averaging over the scores of all edges incident in a node. For completeness, we provide additional results doing the reverse transformation (i.e., node- to edge-level explanations) in the Supplement.
### Results
Table 1 compares the performance of DnX and FastDnX against previous art in terms of explanation accuracy, i.e., the number of nodes in a method's output that are also in the ground-truth explanations divided by the total number of nodes in the latter. Overall, FastDnX is the best-performing method for all network architectures (GCN, ARMA, GATED, and GIN) on all datasets but Tree-Cycles and Tree-Grids. For Tree-Grids, FastDnX places second for GCN, ARMA and GATED whereas PGMExplainer obtains the highest accuracies. We also note that, while DnX is often better than GNNExplainer and PGExplainer, its performance bests FastDnX only in \(12.5\%\) of cases. GNN- and PGExplainer do not appear in the comparison for GIN since they require propagating edge masks, and Torch Geometric does not support edge features for GIN.
Table 2 reports the performance of all explainers on the Bitcoin-Alpha and Bitcoin-OTC datasets. Following previous work (Vu and Thai, 2020), we use average precision (AP) as the evaluation metric, i.e., the percentage of top-\(k\) nodes obtained from each explainer that are correct, averaged over all nodes to be explained. While running the experiments, we noticed that the evaluation protocol employed by Vu and Thai (2020) obtains explanations for a 3-layer GCN but only considers 1-hop candidate nodes during the explanation phase. This implies that some potentially relevant nodes are discarded by design. Table 2 shows results for both 1-hop and 3-hop settings. DnX is the best-performing method, and its fast variant is the second-best across all experiments. For 3-hop candidate nodes, the absolute precision gap between DnX and the best baseline is at least 14% for Bitcoin-Alpha and 11% for Bitcoin-OTC. Overall, DnX outperforms GNNExplainer and PGMExplainer by a large margin. Note that the performance of PGMExplainer drops considerably when going from 1- to 3-hop. We report additional results in the Appendix.
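For reference, the AP metric reduces to top-\(k\) precision averaged over explained nodes; a minimal sketch with illustrative names:

```python
import numpy as np

def average_precision(scores, ground_truth, k=5):
    """Top-k precision of each explanation, averaged over all explained nodes."""
    precisions = []
    for u, s in scores.items():                 # s: importance vector over candidate nodes
        topk = np.argsort(-s)[:k]               # k highest-scoring candidates
        hits = len(set(topk.tolist()) & set(ground_truth[u]))
        precisions.append(hits / k)
    return float(np.mean(precisions))
```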
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Model** & **Explainer** & **BA-House** & **BA-Community** & **BA-Grids** & **Tree-Cycles** & **Tree-Grids** & **BA-Bottle** \\ \hline \multirow{4}{*}{GCN} & GNNExplainer & \(77.5\pm 1.2\) & \(64.7\pm 1.0\) & \(89.2\pm 2.0\) & \(77.2\pm 9.0\) & \(71.1\pm 1.0\) & \(73.3\pm 3.0\) \\ & PGExplainer & \(95.0\pm 1.1\) & \(70.6\pm 2.0\) & \(86.2\pm 9.0\) & \(92.4\pm 5.2\) & \(76.7\pm 1.2\) & \(98.2\pm 3.0\) \\ & PGMExplainer & \(97.9\pm 0.9\) & \(92.2\pm 0.2\) & \(88.6\pm 0.9\) & \(94.1\pm 0.8\) & \(86.8\pm 2.0\) & \(97.5\pm 1.5\) \\ \cline{2-8} & DnX & \(97.7\pm 0.2\) & \(94.6\pm 0.1\) & \(89.8\pm 0.1\) & \(83.3\pm 0.4\) & \(80.2\pm 0.1\) & \(99.6\pm 0.1\) \\ & FastDnX & \(99.6\pm \text{NA}\) & \(95.4\pm \text{NA}\) & \(93.9\pm \text{NA}\) & \(87.3\pm \text{NA}\) & \(85.0\pm \text{NA}\) & \(99.8\pm \text{NA}\) \\ \hline \hline \multirow{4}{*}{ARMA} & GNNExplainer & \(80.9\pm 1.2\) & \(78.5\pm 1.0\) & \(87.3\pm 1.3\) & \(77.7\pm 1.0\) & \(79.3\pm 1.1\) & \(84.3\pm 1.3\) \\ & PGExplainer & \(91.4\pm 0.1\) & \(72.1\pm 0.1\) & \(83.8\pm 1.0\) & \(92.6\pm 2.1\) & \(85.1\pm 0.1\) & \(97.0\pm 1.1\) \\ & PGMExplainer & \(99.3\pm 0.2\) & \(67.5\pm 0.8\) & \(86.8\pm 0.3\) & \(95.0\pm 0.2\) & \(90.6\pm 0.3\) & \(99.7\pm 0.1\) \\ \cline{2-8} & DnX & \(98.1\pm 0.2\) & \(92.7\pm 0.2\) & \(90.8\pm 0.1\) & \(83.5\pm 0.4\) & \(79.6\pm 0.3\) & \(96.9\pm 0.2\) \\ & FastDnX & \(100.0\pm\text{NA}\) & \(95.2\pm\text{NA}\) & \(94.7\pm\text{NA}\) & \(87.1\pm\text{NA}\) & \(87.7\pm\text{NA}\) & \(99.9\pm\text{NA}\) \\ \hline \hline \multirow{4}{*}{GATED} & GNNExplainer & \(79.7\pm 1.0\) & \(68.8\pm 1.0\) & \(91.4\pm 3.0\) & \(85.2\pm 2.0\) & \(73.2\pm 4.0\) & \(70.0\pm 2.0\) \\ & PGExplainer & \(96.1\pm 4.1\) & \(70.9\pm 3.0\) & \(90.7\pm 1.0\) & \(91.7\pm 7.0\) & \(83.7\pm 1.5\) & \(98.7\pm 0.1\) \\ \cline{1-1} & PGMExplainer & \(98.6\pm\text{NA}\) & \(69.4\pm 0.5\) & \(86.8\pm 0.3\) & \(94.1\pm 0.2\) & \(90.1\pm 0.2\) & \(98.3\pm 0.2\) \\ \cline{1-1} \cline{2-8} & DnX & \(98.3\pm 0.1\) & \(91.1\pm 0.1\) & \(90.8\pm 0.1\) & \(85.0\pm 0.3\) & \(82.1\pm 0.2\) & \(98.0\pm 0.2\) \\ \cline{1-1} & FastDnX & \(99.6\pm\text{NA}\) & \(93.5\pm\text{NA}\) & \(94.0\pm\text{NA}\) & \(76.8\pm\text{NA}\) & \(86.8\pm\text{NA}\) & \(98.0\pm\text{NA}\) \\ \hline \hline \multirow{4}{*}{GIN} & PGMExplainer & \(60.2\pm 0.2\) & \(84.5\pm 0.3\) & \(68.4\pm 0.2\) & \(89.3\pm 0.2\) & \(85.0\pm 0.5\) & \(55.7\pm 0.4\) \\ \cline{1-1} \cline{2-8} & DnX & \(99.0\pm 0.1\) & \(94.0\pm 0.2\) & \(91.1\pm 0.1\) & \(84.1\pm 0.3\) & \(77.3\pm 0.2\) & \(95.3\pm 0.2\) \\ \cline{1-1} & FastDnX & \(99.6\pm\text{NA}\) & \(94.7\pm\text{NA}\) & \(93.9\pm\text{NA}\) & \(75.2\pm\text{NA}\) & \(76.5\pm\text{NA}\) & \(99.1\pm\text{NA}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance (accuracy) of explanation methods for node-level explanations (i.e., explanations given as subsets of nodes) in the synthetic datasets. Blue and Green numbers denote the best and second-best methods, respectively. Standard deviations are taken over 10 runs of the explanation process, distillation is not included. Since FastDnX’s explanations are deterministic, we mark its variance with not applicable (NA). In most cases, FastDnX achieves the best performance.
Time comparison.To demonstrate the computational efficiency of DnX/FastDnX, Figure 1 shows the time each method takes to explain a single GCN prediction. For a fair comparison, we also take into account the distillation step in DnX/FastDnX. In particular, we add a fraction of the distillation time (one over the total number of nodes we wish to explain) to the time DnX and FastDnX actually take to generate an explanation. Notably, both DnX and FastDnX are consistently much faster than GNNExplainer and PGMExplainer. For instance, FastDnX is more than forty thousand times faster than PGMExplainer in Bitcoin-Alpha and Bitcoin-OTC.
Distillation results.For completeness, Table 3 shows the distillation accuracy achieved by our linear network \(\Psi\) when \(\Phi\) is a GCN, for both the synthetic and the real datasets. Here, we measure accuracy using the predictions of the model \(\Phi\) as ground truth. For all cases, we observe accuracy superior to \(86\%\). Table 3 also shows the time elapsed during the distillation step. Similar results are achieved when distilling ARMA, GATED and GIN models; these results are shown and described in the Appendix.
Interestingly, although BA-community is the dataset with the lowest distillation accuracy (86.6%), DnX and FastDnX achieve significantly better results than the previous state-of-the-art (cf. Table 1). The rationale for these counter-intuitive results is that the distiller can differentiate between motif nodes and base nodes, and this is enough to get good explanations, since the evaluation set comprises motif nodes only. More concretely, the confusion matrix in Figure 2 reveals that, despite the low distillation accuracy, the surrogate model \(\Psi\) correctly predicts the base nodes (classes 1 and 5). Therefore, \(\Psi\) achieves high accuracy for the binary classification problem of distinguishing motif and base nodes, supporting our hypothesis.
\begin{table}
\begin{tabular}{l l|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{**Bitcoin-Alpha**} & \multicolumn{3}{c}{**Bitcoin-OTC**} \\ \hline
**GNN** & **Explainer** & **top 3** & **top 4** & **top 5** & **top 3** & **top 4** & **top 5** \\ \hline \multirow{4}{*}{\begin{tabular}{l} GCN (1-hop) \\ \end{tabular} } & GNNEx & \(86.3\) & \(85.2\) & \(81.2\) & \(83.3\) & \(81.7\) & \(77.0\) \\ & PGEx & \(83.5\) & \(83.6\) & \(79.5\) & \(79.9\) & \(80.1\) & \(76.6\) \\ & PGMEx & \(87.3\) & \(85.7\) & \(84.8\) & \(83.3\) & \(81.7\) & \(80.8\) \\ \cline{2-7} & DnX & \(92.2\) & \(89.5\) & \(88.4\) & \(89.4\) & \(86.6\) & \(84.7\) \\ & FastDnX & \(89.4\) & \(87.8\) & \(86.8\) & \(87.7\) & \(85.1\) & \(83.4\) \\ \hline \hline \multirow{4}{*}{
\begin{tabular}{l} GCN (3-hop) \\ \end{tabular} } & GNNEx & \(80.1\) & \(74.9\) & \(70.9\) & \(82.4\) & \(79.6\) & \(70.6\) \\ & PGEx & \(81.5\) & \(78.1\) & \(69.5\) & \(78.5\) & \(74.5\) & \(67.4\) \\ \cline{1-1} & PGMEx & \(67.0\) & \(59.8\) & \(51.8\) & \(63.0\) & \(55.2\) & \(47.4\) \\ \cline{1-1} \cline{2-7} & DnX & \(95.8\) & \(91.9\) & \(87.9\) & \(94.8\) & \(91.4\) & \(86.3\) \\ \cline{1-1} & FastDnX & \(89.8\) & \(85.2\) & \(80.2\) & \(88.0\) & \(83.0\) & \(78.8\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance (average precision) of node-level explanations for real-world datasets. Blue and Green numbers denote the best and second-best methods, respectively. DnX significantly outperforms the baselines (GNN-, PG-, and PGM-Explainers).
Figure 1: Time comparison. The bar plots show the average time each method takes to explain a prediction from GCN. FastDnX is consistently the fastest method, often by a large margin. For the datasets with largest average degree (Bitcoin datasets), FastDnX is 4 orders of magnitude faster than PGMExplainer and 2 orders faster than the other methods.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Dataset** & **Accuracy** & **Time (s)** \\ \hline BA-House & \(94.2\pm 1.2\) & \(13.996\) \\ BA-Community & \(86.6\pm 0.1\) & \(16.447\) \\ BA-Grids & \(99.9\pm 0.1\) & \(2.721\) \\ Tree-Cycles & \(97.7\pm 0.2\) & \(3.820\) \\ Tree-Grids & \(98.0\pm 0.2\) & \(3.803\) \\ BA-Bottle & \(98.5\pm 0.2\) & \(3.181\) \\ Bitcoin-Alpha & \(90.4\pm 0.1\) & \(28.317\) \\ Bitcoin-OTC & \(89.1\pm 0.2\) & \(32.414\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distillation accuracy and time for GCN. For all cases, accuracy \(>86\%\) and the distillation phase takes considerably less than 1 minute.
Fidelity results.To further assess the quality of explanations, we consider a fidelity metric -- we use _Fidelity-_ as in (Yuan et al., 2022). This metric measures how the GNN's predictive performance (accuracy) fluctuates when we classify nodes based only on the subgraph induced by the explanations. When the fidelity is positive, there is a decrease in performance. When it is negative, using "only the explanation" yields better predictions on average. Tables 4 and 5 report fidelity for the synthetic and the real datasets, respectively. Note that we have considered three additional real-world datasets (citation networks): Cora, Citeseer, and Pubmed. Results obtained from DnX for the synthetic datasets are the best ones in 50% of the cases. It is interesting to observe that for Tree-Cycles and Tree-Grids, DnX/FastDnX are not the best performing ones w.r.t. accuracy (Table 1), but are the best ones w.r.t. fidelity (Table 4). For real datasets, in most cases, either DnX or FastDnX achieves the best results overall. Importantly, this corroborates the results we observed for the precision metric on Bitcoin-Alpha/OTC datasets. We note that it was infeasible to run PGMExplainer on Pubmed as explaining one prediction with it can take up to an hour on our local hardware.
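Concretely, this fidelity score can be computed as the accuracy drop when predictions use only the explanation; a minimal sketch, assuming explanations are applied as node-feature masks (names are ours):

```python
import torch

def fidelity(model, A_tilde, X, explanations, labels, nodes):
    """Accuracy on the full graph minus accuracy using only explanation-weighted features."""
    full_hits, expl_hits = 0, 0
    with torch.no_grad():
        preds_full = model(A_tilde, X).argmax(dim=1)
        for u in nodes:
            X_masked = explanations[u].unsqueeze(1) * X      # keep only the explanation
            pred_expl = model(A_tilde, X_masked)[u].argmax()
            full_hits += int(preds_full[u] == labels[u])
            expl_hits += int(pred_expl == labels[u])
    return (full_hits - expl_hits) / len(nodes)
```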
## 7 Discussion
Are benchmarks too simple?Given that DnX/FastDnX often achieve remarkable performance by explaining simple surrogates, a natural question arises: _are these popular benchmarks for GNN explanations too simple?_ Since these benchmarks rely on model-agnostic ground-truth explanations, we now investigate inductive biases behind these explanations, and show that they can be easily captured.
Figure 3 reports the degree distribution of motif and base nodes for all synthetic datasets. Recall that, by design, ground-truth explanations are always given by motif nodes. Note also that the supports of the degree distributions of motif and base nodes have almost no overlap for most datasets (except Tree-Cycles & Tree-Grids). Thus, any explainer capable of leveraging degree information would obtain high accuracy.
To make this more concrete, we propose a very simple baseline "explainer" that outputs explanations based on the normalized adjacency matrix. In particular, we define the importance of node \(j\) to the prediction of node \(i\) as the \((i,j)\)-entry of \(\widetilde{A}^{L}\), with \(L=3\). With this simple baseline, we obtain the following accuracy values: 99.9%
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Explainer** & **BA-House** & **BA-Community** & **BA-Grids** & **Tree-Cycles** & **Tree-Grids** & **BA-Bottle** \\ \hline GNNEx & \(0.035\) & \(-0.276\) & \(0.015\) & \(-0.810\) & \(-0.120\) & \(-0.290\) \\ PGEx & \(0.035\) & \(-0.232\) & \(-0.194\) & \(-0.830\) & \(-0.175\) & \(0.142\) \\ PGMEx & \(0.035\) & \(-0.290\) & \(0.015\) & \(-0.677\) & \(-0.005\) & \(0.025\) \\ \hline DnX & \(0.035\) & \(-0.286\) & \(0.008\) & \(-0.230\) & \(-0.001\) & \(0.002\) \\ FastDnX & \(0.035\) & \(-0.272\) & \(-0.018\) & \(-0.240\) & \(0.000\) & \(0.050\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance (fidelity) of different methods for node-level explanations (i.e., explanations given as subsets of nodes) on synthetic datasets. The numbers in Blue and Green denote the best and second-best methods, respectively. The closer to zero, the better. DnX performs as well as or better than GNNEx, PGEx, and PGMEx in 5 out of 6 datasets.
\begin{table}
\begin{tabular}{l l|c|c|c|c|c} \hline \hline
**Sparsity** & **Explainer** & **Bitcoin-Alpha** & **Bitcoin-OTC** & **Cora** & **Citeseer** & **Pubmed** \\ \hline \multirow{8}{*}{\(30\%\)} & GNNEx & \(0.008\) & \(0.060\) & \(0.015\) & \(0.006\) & \(0.000\) \\ & PGEx & \(0.101\) & \(0.100\) & \(0.019\) & \(0.051\) & \(0.046\) \\ & PGMEx & \(0.154\) & \(0.155\) & \(0.013\) & \(0.012\) & - \\ & DnX & \(0.028\) & \(0.020\) & \(0.007\) & \(0.006\) & \(0.015\) \\ & FastDnX & \(0.012\) & \(0.036\) & \(0.015\) & \(0.006\) & \(0.015\) \\ \hline \multirow{8}{*}{\(50\%\)} & GNNEx & \(0.148\) & \(0.040\) & \(0.014\) & \(0.003\) & \(0.006\) \\ & PGEx & \(0.102\) & \(0.107\) & \(0.014\) & \(0.027\) & \(0.025\) \\ & PGMEx & \(0.102\) & \(0.118\) & \(0.011\) & \(-0.003\) & - \\ & DnX & \(0.012\) & \(0.018\) & \(0.000\) & \(0.009\) & \(0.010\) \\ & FastDnX & \(0.004\) & \(0.056\) & \(0.000\) & \(0.003\) & \(0.005\) \\ \hline \multirow{8}{*}{\(70\%\)} & GNNEx & \(-0.004\) & \(0.016\) & \(0.015\) & \(-0.009\) & \(-0.005\) \\ & PGEx & \(0.091\) & \(0.099\) & \(-0.003\) & \(-0.006\) & \(0.005\) \\ \cline{1-1} & PGMEx & \(0.088\) & \(0.099\) & \(0.009\) & \(0.008\) & - \\ \cline{1-1} & DnX & \(0.000\) & \(0.000\) & \(-0.004\) & \(0.003\) & \(0.000\) \\ \cline{1-1} & FastDnX & \(0.004\) & \(-0.012\) & \(0.000\) & \(-0.003\) & \(0.010\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance (fidelity) of different methods for node-level explanations on real-world datasets. The numbers in Blue and Green denote the best and second-best method, respectively. The closer to zero the better. We show results for sparsity levels of 30%, 50%, and 70%. In all cases, FastDnX or DnX are among the two best-performing methods.
(BA-House), 98.1% (BA-Community), 99.9% (BA-Grids), 95.9% (Tree-Cycles), 90.4% (Tree-Grids), and 99.9% (BA-Bottle). Notably, this baseline would rank first if included as an explanation method for GCNs in Table 1.
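This baseline requires no training at all; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def baseline_explainer(A, i, L=3):
    """Importance of node j for node i = (i, j)-entry of A~^L; no learned parameters."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_tilde = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.matrix_power(A_tilde, L)[i]            # i-th row: scores for all nodes
```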
Faber et al. (2021) have also raised issues regarding these benchmarks, proposing alternative datasets as well. We have run FastDnX to explain a 2-layer GCN model for two of their proposed datasets (_Community_ and _Negative evidence_), and obtained remarkably good accuracy results: 94.0% and 99.5%, respectively. Also, simply ranking nodes based on the entries of \(\widetilde{A}^{L}\) (\(L=2\)) achieves accuracy of 93.0% (_Community_) and 99.6% (_Neg. evidence_).
Limitations.While simple graph models (like SGC) have been shown to achieve good performance on node-level classification tasks, they fail to rival recent GNNs for graph-level prediction tasks (Huang et al., 2021; Wu et al., 2019). Naturally, we would not expect DnX and FastDnX to work well off-the-shelf to explain graph-level predictions. However, our methods could be easily extended to use more powerful linear GNNs that incorporate different types of diffusion operators (Rossi et al., 2020), or use long-range residual connections (Chen et al., 2020).
## 8 Conclusion
This work proposes _DnX_ as a simple and intuitive two-step framework for post-hoc explanation of GNNs. First, we distill the GNN into a simpler and more interpretable one, that serves as a global surrogate. Then, we leverage the simple structure of the surrogate to extract explanations. Experiments show that (Fast)DnX outperforms the prior art on a variety of benchmarks. Remarkably, our simple design allows FastDnX to run at least \(200\times\) faster than relevant baselines on real-world tasks. Additionally, we provide theoretical results that justify our framework and support our empirical findings. Besides advancing the current art, we hope this work will motivate other researchers to focus on developing compute-efficient explainability methods.
## Acknowledgments
This work was supported by the Silicon Valley Community Foundation (SVCF) through the Ripple impact fund, the Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), the Fundacao Cearense de Apoio ao Desenvolvimento Cientifico e Tecnologico (FUNCAP), the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), and the Getulio Vargas Foundation's school of applied mathematics (FGV EMAp).
Figure 3: Degree distribution of motif and base nodes. While we can overall distinguish motif and base nodes from degree information on BA-based datasets, there is a significant overlap on Tree-Cycles and Tree-Grids.
Figure 2: Confusion matrix of the distillation process for the BA-Community dataset. Classes 1 and 5 correspond to base nodes. While the surrogate misclassifies many motif nodes, it is able to correctly predict almost all base ones. |
2302.12407 | HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure
Attack of Hypergraph Neural Networks | Hypergraph neural networks (HGNN) have shown superior performance in various
deep learning tasks, leveraging the high-order representation ability to
formulate complex correlations among data by connecting two or more nodes
through hyperedge modeling. Despite the well-studied adversarial attacks on
Graph Neural Networks (GNN), there is few study on adversarial attacks against
HGNN, which leads to a threat to the safety of HGNN applications. In this
paper, we introduce HyperAttack, the first white-box adversarial attack
framework against hypergraph neural networks. HyperAttack conducts a white-box
structure attack by perturbing hyperedge link status towards the target node
with the guidance of both gradients and integrated gradients. We evaluate
HyperAttack on the widely-used Cora and PubMed datasets and three hypergraph
neural networks with typical hypergraph modeling techniques. Compared to
state-of-the-art white-box structural attack methods for GNN, HyperAttack
achieves a 10-20X improvement in time efficiency while also increasing attack
success rates by 1.3%-3.7%. The results show that HyperAttack can achieve
efficient adversarial attacks that balance effectiveness and time costs. | Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai Liu, Heyuan Shi | 2023-02-24T02:15:42Z | http://arxiv.org/abs/2302.12407v1 | HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks
###### Abstract.
Hypergraph neural networks (HGNN) have shown superior performance in various deep learning tasks, leveraging the high-order representation ability to formulate complex correlations among data by connecting two or more nodes through hyperedge modeling. Despite the well-studied adversarial attacks on Graph Neural Networks (GNN), there are few studies on adversarial attacks against HGNN, which poses a threat to the safety of HGNN applications. In this paper, we introduce HyperAttack, the first white-box adversarial attack framework against hypergraph neural networks. HyperAttack conducts a white-box structure attack by perturbing hyperedge link status towards the target node with the guidance of both gradients and integrated gradients. We evaluate HyperAttack on the widely-used Cora and PubMed datasets and three hypergraph neural networks with typical hypergraph modeling techniques. Compared to state-of-the-art white-box structural attack methods for GNN, HyperAttack achieves a 10-20X improvement in time efficiency while also increasing attack success rates by 1.3%-3.7%. The results show that HyperAttack can achieve efficient adversarial attacks that balance effectiveness and time costs.
Adversarial attack, hypergraph neural network, white-box testing
## 1. Introduction
As a unique non-Euclidean data structure for machine learning, graph modeling and hypergraph modeling are widely used to formulate the relationships between data, and graph neural networks (GNN) and hypergraph neural networks (HGNNs) are designed to derive knowledge embedding and learn the low-dimensional representation of nodes and links in many deep learning tasks. Various types of GNN have been proposed, e.g., GraphSAGE (Gan et al., 2015), Graph Convolutional Networks (GCN) (Kipf and Welling, 2015) and Graph Attention Network (GAT) (Kipf and Welling, 2015), for many graph-based analysis tasks, such as link prediction (Kipf and Welling, 2015; Li et al., 2017), node classification (Wang et al., 2016; Wang et al., 2017), community detection (Kipf and Welling, 2015; Li et al., 2017), and social network analysis (Kipf and Welling, 2015). Though the network structure of GNNs is different, they all aggregate information based on graph modeling, where one edge can only have two nodes in a graph. Hypergraph Neural Networks (HGNNs) have become a hot topic in recent years, and various HGNNs have been proposed, such as HyperGCN (Wang et al., 2016), HGNN* (Chen et al., 2017) and Dynamic Hypergraph Neural Networks (DHGNN) (Kipf and Welling, 2015). HGNNs take advantage of the stronger relational representation ability of hypergraph modeling, i.e., one hyperedge can connect two or more nodes, which is more practical for formulating the numerous complex relationships in real-world scenarios, compared to the graph. A number of studies have shown that HGNNs outperform GNNs in many deep learning tasks, such as graph visualization (Gan et al., 2015; Wang et al., 2017), bio-informatics and medical image analysis (Gan et al., 2015; Wang et al., 2017) and recommendation systems (Chen et al., 2017).
**However, there is little research on adversarial attacks against HGNNs, although similar work on GNNs has been extensively studied.** GNNs are easily fooled by malicious adversarial examples (AEs) and output incorrect results (Wang et al., 2017). Recent studies show that HGNNs outperform GNNs in many deep-learning tasks, but they neglect the safety of HGNNs. **Moreover, the efficiency of adversarial attacks is severely limited even though existing adversarial attack methods of GNNs can be adapted to HGNNs.** For example, when we exploit the integrated gradient (Wang et al., 2017), which is used for structure attacks on GNNs, the time cost of a successful attack on a single node is 40 seconds on average (more details on our motivation are given in Section 2). Since the Cora dataset has 2000 nodes in the hypergraph, the time cost of attacking all the hypergraph nodes is unacceptable. Even worse, the problem of limited attack performance is more severe for hypergraphs built on real-world data that contain even more nodes. Therefore, despite the superior performance of HGNN, it is still challenging to guarantee its safety.
In this paper, we propose the first adversarial attack framework against HGNNs, called HyperAttack, which focuses on white-box structure attacks against HGNNs. The goal of HyperAttack is to mislead HGNN and output incorrect node classification results by modifying the hypergraph structure. For each attacked node in the hypergraph, HyperAttack disturbs the connectivity state of its hyperedges, i.e., adds or removes certain hyperedges of the target node. The perturbation priority of hyperedges is determined by the guidance of calculated gradients and integrated gradients to
improve attack efficiency. To evaluate the efficiency and effectiveness of HyperAttack, we first consider three different approaches to generating hypergraphs and pre-training HGNN models using popular datasets, i.e., Cora and PubMed, respectively. We then perform white-box structure attacks on these HGNN models and evaluate the performance of the attacks. Compared to white-box structural attack methods for GNNs, which use the fast-gradient algorithm (FGA) (Cheng et al., 2017) and the integrated-gradient algorithm (IGA) (Zhou et al., 2018), HyperAttack achieves a 10-13X improvement in time efficiency for successful attacks while also increasing attack success rates by 1.3%-1.7%. Our main contributions are:
* We present HyperAttack, a framework for white-box structural attacks designed for hypergraph neural networks. To the best of our knowledge, this is the first work on adversarial attacks against HGNNs.
* We clarify the differences between GNN and HGNN structure attacks, analyze the difficulties of simply applying GNN structure attack methods to HGNNs, and present the main challenges of HGNN structure attacks.
* We propose a multi-gradient guided hyperedge selection algorithm to determine hyperedge perturbation priorities, and the experimental results demonstrate that the method can improve the performance of structure attacks for HGNNs.
## 2. Background and Motivation
In this section, we first briefly introduce the network structures of GNN and HGNN. Then, we show the existing structure attack methods of GNN and clarify the difficulties of applying them to HGNNs. Finally, we summarize the main challenges of HGNN structure attacks.
### Structure of GNN and HGNN
Though Graph Neural Networks (GNN) and Hypergraph Neural Networks (HGNN) are designed to derive knowledge embedding by aggregating the relationship among data, their network structure and required inputs are different. We first introduce the structure of GNN and HGNN, respectively. Then, we introduce the main difference between GNN and HGNN applications.
**GNN and graph modeling.** All GNNs use data features and relationships between data as inputs, and the relationships are formulated by graph modeling. The graph modeling is introduced as follows. Given a graph \(G=(V,E)\), where \(V\) is the node set and \(E\) is the edge set, \(X\in\mathcal{R}^{|V|\times D}\) is the node feature matrix. \(D\) represents the dimension of the node feature. \(A\in\mathcal{R}^{|V|\times|V|}\) is the adjacency matrix, which represents the relationship between each pair of nodes. If the pair of nodes \(v_{i}\) and \(v_{j}\) are connected, we set the corresponding element \(A_{ij}\) in \(A\) to 1, otherwise we set it to 0. After we model the graph structure by using the node feature matrix \(X\) and the adjacency matrix \(A\), we input \(X\) and \(A\) into the propagation module (Kang et al., 2017), and each layer can be formulated as follows:
\[X^{l+1}=\sigma(\widetilde{D}^{-1/2}\widetilde{A}\widetilde{D}^{-1/2}X^{l}\mathbf{W}^{l}) \tag{1}\]
where \(\mathbf{X}^{l}\) and \(\mathbf{X}^{l+1}\) represent the input and the output of the layer, respectively. \(\mathbf{W}^{l}\) is the weight matrix of the \(l\)-th layer, and \(\sigma\) indicates the nonlinear activation function. \(\widetilde{A}\) represents the adjacency matrix with self-connections, and \(\widetilde{D}\) is the degree matrix of \(\widetilde{A}\).
The convolution operator thereby captures aggregated information and propagates it between neighboring nodes.
**HGNN and hypergraph modeling.** A hypergraph is a graph type with more complex modeling: a single hyperedge can carry more information on nodes and their connections than a pairwise edge.
Generally, the HGNN model can be divided into hypergraph modeling and hypergraph learning (Han et al., 2017). Here, we introduce hypergraph modeling and defer the relevant details of hypergraph learning to Section 7.
A hypergraph can be denoted as \(\mathcal{G}\), which consists of a set of nodes \(\mathcal{V}\) and a set of hyperedges \(\mathcal{E}\). We concatenate the hyperedge groups to generate the incidence matrix \(\mathcal{H}\in\mathcal{R}^{|\mathcal{V}|\times|\mathcal{E}|}\), which represents the relationship between each node and the hyperedges. Each entry \(\mathcal{H}(v,e)\) indicates whether the node \(v\) belongs to the hyperedge \(e\), i.e., \(\mathcal{H}(v,e)=1\) if \(v\in e\) and \(0\) otherwise. After hypergraph convolution, the training loss
can be determined by comparing the output with the real node label.
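To make the role of \(\mathcal{H}\) concrete, the sketch below builds an incidence matrix and applies one hypergraph convolution of the standard form \(\sigma(D_{v}^{-1/2}\mathcal{H}\mathbf{W}D_{e}^{-1}\mathcal{H}^{\top}D_{v}^{-1/2}X\Theta)\); this is a minimal NumPy illustration rather than the exact implementation, and all names are ours:

```python
import numpy as np

def incidence_matrix(num_nodes, hyperedges):
    """H(v, e) = 1 iff node v belongs to hyperedge e, and 0 otherwise."""
    H = np.zeros((num_nodes, len(hyperedges)))
    for e, nodes in enumerate(hyperedges):
        H[list(nodes), e] = 1.0
    return H

def hgnn_layer(H, X, Theta, w=None):
    """One hypergraph convolution: ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)."""
    if w is None:
        w = np.ones(H.shape[1])            # unit hyperedge weights on the diagonal of W
    Dv = np.diag(1.0 / np.sqrt(H @ w))     # Dv^-1/2; node degree d(v) = sum_e w(e) H(v, e)
    De = np.diag(1.0 / H.sum(axis=0))      # De^-1; edge degree d(e) = sum_v H(v, e)
    W = np.diag(w)
    return np.maximum(Dv @ H @ W @ De @ H.T @ Dv @ X @ Theta, 0.0)   # ReLU activation
```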
There is a huge difference between \(\mathcal{H}\) and \(\mathbf{A}\) in terms of dimension and the meaning of their entries. To be specific, \(\mathcal{H}\) records which nodes are included in each hyperedge, while \(\mathbf{A}\) encodes pairwise node relationships. We believe that a major difference between HGNN and GNN lies in the graph modeling process, i.e., the different construction strategies of the adjacency matrix and the incidence matrix.
### Structure Attack of GNN and HGNN
Although both structure attacks are used to mislead the model by changing edge connectivity relationships, the differences in relationship modeling, network structure, and inputs between GNN and HGNN lead to differences in how structure attacks are designed for the two models. Here, we introduce the structure attack of GNN and HGNN, respectively. Then, we analyze the difference between structure attacks on GNN and HGNN and the difficulties of applying GNN structure attack methods to HGNNs.
**Structure Attack of GNN.** One of the most common strategies of the GNN structure attack is deleting or adding specific edges on the original graph. A key problem is knowing which edges have the greatest impact on the classification ability of the target model. In white-box settings, gradient information is often used as the basis for judging the priority of each edge. For example, some works use gradients as indicators: they select the pair of nodes with the maximum absolute gradient (Beng et al., 2019) or perturb the edges indicated as most influential by integrated gradients (Zhu et al., 2019). Some other works, such as (Zhu et al., 2019; Li et al., 2019), attempt to inject fake nodes and link them with some benign nodes in the original graph, which increases the classification error rate without changing the primary structure between existing nodes. However, these methods focus primarily on the attack success rate, which can make them very time-consuming.
**Structure Attack of HGNN.** Given the inherent ability of hypergraphs to preserve data relevance, we can still learn from existing attack methods to address structure disturbance. One of the simplest strategies is to randomly perturb the inherent structure of the hypergraph. For example, inspired by (Zhu et al., 2019), we randomly connect nodes with different labels through newly generated hyperedges, or discard specific existing hyperedges between nodes sharing the same label. Besides, using gradients to guide structure attacks can produce more effective changes on a hypergraph. We can use two typical algorithms, the Fast Gradient Sign Method (FGSM) (Gil et al., 2018) and the Jacobian-based Saliency Map Approach (JSMA) (Zhu et al., 2019), which were first shown successful on image data and were first studied for graph models by (Zhu et al., 2019).
**Difficulties of Applying GNN Structure Attack Methods to HGNNs.**
* The structure attack on HGNN is realized by modifying the internal information of the incidence matrix. Different from modifying the adjacency matrix of GNN, this unique modification is currently in the blank, which brings the first difficulty.
* Because the construction of HGNN is more complex in preserving data correlations compared with GNN, directly migrating existing graph attack methods has been shown in experiments to yield unsatisfactory performance, especially in terms of time complexity.
We verified this through some direct migration experiments (e.g., the integrated-gradient-based attack (Zhu et al., 2019)). For example, it takes at least 40 seconds to fully implement the attack on one single target node when we use two Tesla V100S GPUs as our computation resources. If we consider the implementation against the entire dataset (e.g., the Cora dataset contains 2708 scientific publications as nodes), the time cost will be more than 30 hours. Worse still, this is only the cost when we assume that all the attacks succeed. With weaker computing resources, the time burden will be even more serious.
As a result, a large gap between the increasing-mature GNN testing and deficient HGNN testing methods still exists.
### Challenges of HGNN structure attacks
Based on the clarified differences between HGNN and GNN, as well as the difficulties of applying GNN structure attack methods, we introduce the main challenges of HGNN structure attacks.
**Lack of modification on hypergraph structure.** Compared with graphs, the dimension of the hypergraph structure is even larger in many cases. The dimension grows with the complexity of the original data, such as multi-modal data or data with high-order associations. Obtaining the internal information and addressing a structure attack therefore incurs more overhead on an HGNN than on a GNN. For example, when targeting a specific node, traversing and computing on the incidence matrix \(\mathcal{H}\) may cause a greater burden, which means that existing methods for modifying the structure matrix cannot be applied directly. Modification of the hypergraph structure still lacks practical strategies.
**Balance between attack success rate and time consumption.** A superior attack algorithm should balance the time cost and
Figure 2. Difference of structure attack against hypergraph and graph. (a) Given a node set \((v_{1},...v_{6})\), we attack the target node \(v_{2}\) in the pre-trained GNN and HGNN models. (b) In the upper part, by adding a specific edge connecting the target node \(v_{2}\) with \(v_{1}\), the adjacency matrix structure is modified. The attack ends when the classification of target node \(v_{2}\) changes. (c) In the lower part, we modify the relationship between \(v_{2}\) and the hyperedge \(E1\) in \(\mathcal{H}\), which means we have added the target node to a new hyperedge. At last, we input the modified \(\mathcal{H}\) into the forward propagation of HGNN to judge the classification of the target node.
attack success rate. However, graph attack algorithms have long paid too much attention to the attack success rate. Under the premise of a successful attack, reducing the running time of generating adversarial samples should not be ignored. Therefore, it is another challenge for us to realize HyperAttack with both a high attack success rate and a low time cost.
## 3. Threat Model
The goal of our structure attack is to reduce the classification ability of the target node. We expect to attack the target node of a hypergraph by changing the structure within a constrained perturbation budget. In this section, we clarify the structure attack scenario of HyperAttack by introducing the target model, the attack objectives, and the specific attack actions.
### Target Model
We consider a two-layer HGNN model (Han et al., 2017), a general framework for hypergraph representation learning, as our target model. Based on Section 2, the target model can be described as follows:
Given \(\mathcal{G}=(\mathcal{H},X)\) with a set of labeled nodes, where \(\mathbf{Y_{I}}\) is the ground truth of each labeled node, \(\mathcal{H}\) and \(\mathbf{X}\) represent the incidence matrix and node feature matrix, respectively. Our target model focuses on the node classification task to categorize nodes without labels into different classes by using a large number of labeled nodes. After the training process as mentioned in Eq 4, the target model can complete the semi-supervised learning task and predict the classification result. We set the target node as \(v_{t}\) and attack \(v_{t}\). In the classification results, the target model will be misled only when facing the target node.
### Structure Attack Objectives
Because the HGNN model first constructs a hypergraph from the original data, the hypergraph generation strategy directly affects both the quality of the prediction ability and the stability under adversarial attack. The hypergraph generation methods vary a lot, and we introduce the details in Section 7. Here, we consider the incidence matrix of the hypergraph as the objective in our structure attack and introduce three classic strategies to verify the different results of different construction strategies under the same attack algorithm.
Based on the K nearest neighbors (KNN) method, we follow (Koh et al., 2017) and construct hyperedges by connecting each node with its nearest \(K\) nodes, called Hypergraph_KNN. Based on the \(\epsilon\)-ball, we use another distance-based method to construct hyperedges by connecting nodes whose distances are less than a preset threshold, called Hypergraph_\(\epsilon\). Based on the \(\mathbf{L1}\) reconstruction method, we follow (Koh et al., 2018) to generate each hyperedge via modeling the relationship between nodes through feature reconstruction, called Hypergraph_L1. We also show the different robustness of the above three modeling strategies under attacks in Section 7.
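A minimal sketch of the KNN-based construction (Hypergraph_KNN); the \(\epsilon\)-ball variant differs only in the neighbor-selection rule, and all names are illustrative:

```python
import numpy as np

def knn_hypergraph(X, K=10):
    """Hypergraph_KNN: one hyperedge per node, linking it with its K nearest neighbors."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise feature distances
    H = np.zeros((n, n))                                            # one hyperedge (column) per node
    for v in range(n):
        H[np.argsort(dists[v])[:K + 1], v] = 1.0   # v itself (distance 0) plus its K neighbors
    return H
```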
### Structure Attack Operations
In this subsection, to make our experiments more interpretable, we introduce the attack actions behind our proposed structure attack. Our goal is to find the elements of \(\mathcal{H}\) whose perturbation has the maximum impact on the classification result. We define our attack actions as deleting (or adding) the target node on the selected hyperedges, which in effect modifies the incidence matrix \(\mathcal{H}\). As a highly targeted attack, modifying the relations of the target node has little impact on other nodes. HyperAttack therefore only changes the classification of the target node, which keeps the attack budget small.
It is worth mentioning that we abandon another possible attack action: creating a new hyperedge containing the target node. A newly created hyperedge would change the dimension of the original matrix \(\mathcal{H}\), which would be a heavy burden for HyperAttack. Worse still, the other nodes placed in the new hyperedge could also damage the global classification ability on the hypergraph.
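The retained attack action amounts to flipping entries in the target node's row of \(\mathcal{H}\). A minimal sketch, with hypothetical names, is:

```python
import numpy as np

def flip_membership(H, target, hyperedges):
    """Toggle the target node's membership in the selected hyperedges.

    H          : (num_nodes, num_hyperedges) binary incidence matrix
    target     : index of the target node v_t
    hyperedges : indices of hyperedges chosen by the selection stage
    """
    H_adv = H.copy()
    # 1 -> 0 deletes v_t from a hyperedge, 0 -> 1 adds it; the matrix
    # dimensions never change, so no new hyperedge is created.
    H_adv[target, hyperedges] = 1 - H_adv[target, hyperedges]
    return H_adv

# Example: remove the target from hyperedge 0 and add it to hyperedge 3.
H = np.array([[1, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0]])
print(flip_membership(H, target=1, hyperedges=[0, 3]))
```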
## 4. Methodology
In this section, we introduce the framework and algorithm of HyperAttack. For convenience, we briefly summarize some important symbols and their definitions in Table 1.
### Overview
As shown in Figure 3, there are three main parts in HyperAttack: the fast-gradient-based algorithm part, the integrated-gradient-based algorithm part, and the modified-hypergraph part. The selection process is made up of the first two parts, and the modification process is the third.

Table 1. Notation And Definition

| Symbol | Definition |
| --- | --- |
| \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) | input graph \(\mathcal{G}\) with nodes \(\mathbf{V}\) and edges \(\mathbf{E}\) |
| \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) | input hypergraph \(\mathcal{G}\) with nodes \(\mathcal{V}\) and hyperedges \(\mathcal{E}\) |
| \(\mathcal{G}^{*}\) | output hypergraph \(\mathcal{G}^{*}\) |
| \(D_{v}\) | the diagonal matrix of the node degrees |
| \(D_{e}\) | the diagonal matrix of the edge degrees |
| \(\mathbf{W}\) | the diagonal matrix of the hyperedge weights |
| \(\mathbf{A}\) | the adjacency matrix of GNN |
| \(\mathcal{H}\) | the incidence matrix of HGNN |
| \(X\) | the node feature matrix |
| \(D\) | the dimension of the node feature |
| \(\tilde{\mathcal{H}}\) | the convolved signal matrix of \(\mathcal{H}\) |
| \(F\) | the forward propagation output of the HGNN |
| \(Y_{I}\) | the real label confidence list |
| \(v_{t}\) | the target node |
| \(M\) | the size of the fast-gradient-based hyperedge filter |
| \(N\) | the size of the integrated-gradient-based hyperedge selection |
| \(\mathcal{H}^{*}\) | the modified incidence matrix |
| \(\gamma\) | perturbation limit used for ASR |
| \(\eta\) | perturbation limit used for AML |

Figure 3. The framework of HyperAttack.
Taking both a low time cost and a high attack success rate into consideration, HyperAttack is designed around a fast-gradient-based hyperedge filter that uses gradients and an integrated-gradient-based hyperedge selection that uses integrated gradients. Both gradients and integrated gradients serve as indicators that evaluate the priority of each hyperedge for the target node.
To be specific, we first obtain the gradient over the whole \(\mathcal{H}\) by calculating a designed loss function, and keep the hyperedges with the largest absolute gradient values as the result of the fast-gradient-based hyperedge filter. Secondly, by setting another loss function, we compute the integrated gradients of each hyperedge returned by the filter; the hyperedges with the largest values are selected as the result of fine-grained screening. In the third step, we take the hyperedges from the fine-grained screening as the final selection and remove or add the target node on these hyperedges by modifying the incidence matrix \(\mathcal{H}\). The details are as follows.
### Fast-gradient-based hyperedge filter
As mentioned in the target model, we use a two-layer HGNN model (Han et al., 2017) for the node classification task, with an incidence matrix \(\mathcal{H}\) and a node feature matrix \(\mathbf{X}\) as the input of the forward propagation. As mentioned in Eq. (4), we use \(\mathcal{H}\) as the variable of the output prediction \(\mathbf{F}(\mathcal{H})\). The forward propagation of the HGNN model produces the output \(\mathbf{F}\) of the last layer, which can be simply described as \(\mathbf{F}=forward(\mathcal{H},\mathbf{X})\); its explicit form is as follows:
\[\mathbf{F}(\mathcal{H})=f\left(\tilde{\mathcal{H}}\,\sigma\left(\tilde{\mathcal{H}}\mathbf{X}\mathbf{W}_{0}\right)\mathbf{W}_{1}\right) \tag{4}\]
where \(\tilde{\mathcal{H}}\) is formulated as shown in Eq. (5). \(\mathbf{W}_{0}\) and \(\mathbf{W}_{1}\) are the input-to-hidden and hidden-to-output weight matrices, respectively. \(f\) denotes the \(softmax\) function and \(\sigma\) is the \(ReLU\) activation function. Note that \(\mathcal{H}\) is the only variable of the forward propagation: HyperAttack adds no disturbance to \(\mathbf{X}\), so \(\mathbf{X}\) remains unchanged and can be regarded as a constant, just like \(\mathbf{W}_{0}\) and \(\mathbf{W}_{1}\).
\[\tilde{\mathcal{H}}=D_{v}^{-1/2}\mathcal{H}WD_{e}^{-1}\mathcal{H}^{T}D_{v}^{-1/2} \tag{5}\]
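For reference, a minimal NumPy sketch of Eq. (5) might look as follows; the function name and the handling of hyperedge weights are our assumptions, and it presumes no isolated nodes and no empty hyperedges (all degrees positive).

```python
import numpy as np

def hgnn_normalize(H, w=None):
    """Convolved signal matrix of Eq. (5)."""
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if w is None else w
    Dv = (H * w).sum(axis=1)   # node degree: sum of weights of incident hyperedges
    De = H.sum(axis=0)         # hyperedge degree: number of member nodes
    Dv_inv_sqrt = np.diag(Dv ** -0.5)
    return Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / De) @ H.T @ Dv_inv_sqrt
```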
We take the difference between the output prediction \(\mathbf{F}\) and the ground-truth result \(\mathbf{Y}_{\mathbf{I}}\) and use a simple cross-entropy to express the discrepancy. For the target node \(v_{t}\), we formulate the designed loss function \(L_{t}\) as:
\[L_{t}=-\sum_{k=0}^{|\mathcal{E}|-1}Y_{I,tk}\ln\left(F_{tk}\left(\mathcal{H}\right)\right) \tag{6}\]
We calculate the partial derivative of Eq. (6) with respect to \(\mathcal{H}\) and obtain \(g_{tk}\) as the \(k^{th}\) gradient of the target node \(v_{t}\).
\[g_{tk}=\frac{\partial L_{t}}{\partial\mathcal{H}_{tk}} \tag{7}\]
We set a hyperparameter \(M\) \((0<M<|\mathcal{E}|)\). The hyperedges with the top-\(M\) largest absolute gradients are recorded as the result of the fast-gradient-based hyperedge filter. The pseudo-code for the fast-gradient-based hyperedge filter is given in lines 3 to 4 of Algorithm 1.
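A minimal PyTorch sketch of this coarse screening step is given below; the callable `model(H, X)` returning per-node class scores is a hypothetical interface, not the paper's implementation.

```python
import torch

def fast_gradient_filter(model, H, X, target, y_target, M):
    """Coarse screening: keep the top-M hyperedges ranked by |dL_t / dH[t, k]|."""
    H = H.clone().detach().requires_grad_(True)
    log_probs = torch.log_softmax(model(H, X), dim=1)  # (n_nodes, n_classes)
    loss = -log_probs[target, y_target]                # cross-entropy on the target node only
    loss.backward()
    grads = H.grad[target].abs()                       # gradients of the target node's row of H
    return torch.topk(grads, M).indices                # candidates for fine-grained screening
```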
### Integrated-gradient-based hyperedge selection
The results of the preliminary filtering with the fast-gradient-based method are saved. Because the number of perturbations is limited, and inspired by (Zhu et al., 2017; Wang et al., 2018), we choose the integrated-gradient-based method for the hyperedge selection. The integrated-gradients method combines the direct gradient and back-propagation-based approaches. Let \(x\) be the input value and \(x^{\prime}\) be the baseline value, and let \(F\) denote the function mapping. The integrated gradient of the \(i^{th}\) input can be expressed as follows:
\[\mathrm{IntegratedGrads}_{i}\left(x\right):=\left(x_{i}-x_{i}^{\prime}\right)\times\int_{\alpha=0}^{1}\frac{\partial F\left(x^{\prime}+\alpha\times\left(x-x^{\prime}\right)\right)}{\partial x_{i}}\,d\alpha \tag{8}\]
Since the gradients of all points along the whole path are considered, the method is no longer limited by the gradient at a specific point. For the target node \(v_{t}\), we set the target row of the matrix \(H_{a}^{\prime}\) to all ones and the target row of the matrix \(H_{r}^{\prime}\) to all zeros; they represent the target node \(v_{t}\) with all hyperedges connected and fully unconnected, respectively.
\[H^{\prime}=\begin{cases}H_{a}^{\prime}:\ H_{a}^{\prime}\left[t\right]\left[i\right]=1,\ 0\leq i<|\mathcal{E}|\\ H_{r}^{\prime}:\ H_{r}^{\prime}\left[t\right]\left[i\right]=0,\ 0\leq i<|\mathcal{E}|\end{cases} \tag{9}\]
For the target node \(v_{t}\): when there is no connection between \(v_{t}\) and hyperedge \(i\), we take \(H_{a}^{\prime}\) as the baseline, since we want to describe the overall change pattern of the target function \(F\) while gradually disconnecting the target node \(v_{t}\) from the hyperedges down to the current state \(H\). On the contrary, when \(v_{t}\) is already connected to hyperedge \(i\), we use \(H_{r}^{\prime}\) as the baseline and calculate the change pattern by gradually adding the connections between \(v_{t}\) and the hyperedges.
\[IG\left(F\left(H,t\right)\right)\left[t,i\right]=\begin{cases}\left(H_{ti}-0\right)\times\sum_{k=1}^{m}\dfrac{\partial F\left(H_{r}^{\prime}+\frac{k}{m}\left(H-H_{r}^{\prime}\right)\right)}{\partial H_{ti}}\times\dfrac{1}{m},&H\left[t\right]\left[i\right]\neq 0,\\[2mm]\left(1-H_{ti}\right)\times\sum_{k=1}^{m}\dfrac{\partial F\left(H_{a}^{\prime}-\frac{k}{m}\left(H_{a}^{\prime}-H\right)\right)}{\partial H_{ti}}\times\dfrac{1}{m},&H\left[t\right]\left[i\right]=0.\end{cases} \tag{10}\]
Lines 5 to 11 of Algorithm 1 show the pseudo-code for the Integrated-gradient-based hyperedge selection of HyperAttack. We calculate the integrated gradient of the hyperedges and then save \(N\) hyperedges with the largest gradient, where \(N\) is the size of perturbations.
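The following is a hedged PyTorch sketch of Eq. (10) with an \(m\)-step Riemann-sum approximation of the path integral; using the target node's cross-entropy as the scalar function and the `model(H, X)` interface are our assumptions.

```python
import torch

def integrated_gradient_entry(model, H, X, target, y_target, k, m=20):
    """Riemann-sum approximation of Eq. (10) for the entry H[target, k]."""
    H = H.detach()
    connected = bool(H[target, k] != 0)
    baseline = H.clone()
    baseline[target, :] = 0.0 if connected else 1.0   # H'_r if connected, else H'_a
    total = 0.0
    for step in range(1, m + 1):
        # Interpolate from the baseline toward the current incidence matrix H.
        H_path = (baseline + step / m * (H - baseline)).requires_grad_(True)
        loss = -torch.log_softmax(model(H_path, X), dim=1)[target, y_target]
        grad = torch.autograd.grad(loss, H_path)[0]
        total = total + grad[target, k] / m
    scale = H[target, k] if connected else 1.0 - H[target, k]
    return scale * total
```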
### Matrix-modified operation
The result of HyperAttack is an adversarial hypergraph network with a modified matrix \(\mathcal{H}^{*}\) which makes the original HGNN model produce a wrong classification for the target node in forward propagation. In consideration of the notion of "unnoticeable changes" on hypergraphs, we set a strict upper limit on the number of perturbations. We use the indices of the hyperedges selected by the fine-grained screening to modify the links of the original incidence matrix \(\mathcal{H}\), which is defined as:
\[\mathcal{H}^{*}=\mathcal{H}+\theta\left(\mathbf{g_{i}}\right), \tag{11}\]
where \(\theta\left(\mathbf{g_{i}}\right)\) represents the sign of adding/removing the \(i^{th}\) relationship between the target node and the links. The overall HyperAttack algorithm is summarized in Algorithm 1.
## 5. Evaluation
### Evaluation Design
#### 5.1.1. Research questions
We conduct experiments and try to answer the questions as follows.
**RQ1. How is the performance of HyperAttack in attack success rate compared to other state-of-the-art methods?** In this RQ, we compare the attack success rate of HyperAttack with state-of-the-art attack methods on graphs, and examine whether these graph attack methods can be adapted to HGNNs with a high attack success rate.
**RQ2. How is the performance of HyperAttack in time cost when completing a successful attack compared to other state-of-the-art methods?** In this RQ, we examine the efficiency of HyperAttack and the baselines. We use running time to record the time consumption required for a successful attack with HyperAttack and the other methods. A method is more efficient if it spends less running time while achieving the same attack effect.
**RQ3. How does the classification margin perform under HyperAttack compared to other state-of-the-art methods?** In this RQ, we calculate the classification margin for all methods and analyze the characteristics of HyperAttack.
#### 5.1.2. Evaluation environment
Our experimental environment consists of 2× Intel(R) Xeon Gold 6248 CPU @ 2.50GHz (40 cores), 2× Tesla V100S GPUs, 384 GB memory, and CentOS 7. The maximum number of perturbations \(N\) is 10. We use three different hypergraph generation approaches to formulate the data correlation from the original data: the K-nearest-neighbor-based (KNN) method, the \(\varepsilon\)-ball-based method, and the \(l1\)-hypergraph-based method. The reason for using different hypergraph modelings is that the sensitivity of
Figure 4. The flowchart of HyperAttack.
HGNNs to adversarial attacks differs across constructions, and the robustness of the attack performance needs to be evaluated.
#### 5.1.3. Evaluation metrics
As stated in Section 2, a successful structure attack must have two key characteristics: (1) achieving the desired attack performance; (2) balancing the attack time consumption. Therefore, we use the Attack Success Rate (ASR) and the Average Number of Modified Links (AML) to evaluate attack performance, and the Running Time (RT) to evaluate the time cost of the structure attack; a short computational sketch of all three metrics follows the list below.
* Attack Success Rate (ASR). Given a hyper-parameter \(\gamma(0<\gamma\leq 10)\), each target node changes no more than \(\gamma\) links. ASR is the success rate of the target nodes over a certain number of attacks: \[ASR=\frac{\text{Successful attacks}}{\text{All attacks}}\] (12)
* Average Number of Modified Links (AML). AML is the average number of modified links needed for a successful attack, where \(\eta\) denotes the upper bound on the number of perturbations: \[AML=\frac{\text{Modified links}}{\text{All attacks}}\] (13)
* Running Time (RT). RT records the time consumption required for each successful attack. It takes the input of the original matrix \(\mathcal{H}\) as the starting time and the output of the modified matrix \(\mathcal{H}^{*}\) after a successful attack as the ending time: \[RT=Time_{end}-Time_{start}\] (14)
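A minimal sketch of how these three metrics could be computed is shown below; the `attack_fn` interface is hypothetical.

```python
import time

def evaluate_attack(attack_fn, targets, gamma=10):
    """ASR, AML, and RT as in Eqs. (12)-(14); attack_fn(v) is assumed to
    return (success: bool, n_modified_links: int) for a target node v."""
    successes, modified_links, runtime = 0, 0, 0.0
    for v in targets:
        start = time.time()                  # input of the original H
        success, n_modified = attack_fn(v)
        runtime += time.time() - start       # output of the modified H*
        if success and n_modified <= gamma:  # attacks above gamma links count as failed
            successes += 1
            modified_links += n_modified
    asr = successes / len(targets)           # Eq. (12)
    aml = modified_links / len(targets)      # Eq. (13)
    rt = runtime / len(targets)              # average per-attack time, Eq. (14)
    return asr, aml, rt
```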
#### 5.1.4. Parameter settings
We give the parameter settings that will be used in the experiments. \(\gamma\): we set the ASR perturbation limit \(\gamma\) to 10, which means a successful attack must use at most 10 perturbations; otherwise the attack is considered failed. In the running-time experiments, \(\gamma\) increases continuously from 1 to 10.
\(\eta\): We set the upper bound of the number of perturbations \(\eta\) to 10 in our experiments. We only consider the successful attack in 10 times and calculate the average number of perturbations.
\(K\): KNN is a traditional method for constructing hyperedges. In the experiments we fix \(K\) to 10, which means each hyperedge contains 10 nodes. In addition, we conduct experiments on how \(K\) affects the robustness of HGNN in the Discussion section.
#### 5.1.5. Datasets
The target models in our experiments focus on the node classification task, a semi-supervised learning task that aims to accurately predict the category of each node. We select two widely-used datasets; some basic statistics are provided below.
* Cora: The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links, which represent the citation relationships.
* PubMed: The PubMed dataset consists of 19717 scientific publications from the PubMed database pertaining to diabetes classified into one of three classes. The citation network contains 88676 links, which represent the citation relationships.
#### 5.1.6. Compared Methods
Here, we compare HyperAttack with five baseline methods. In general, we divide the five existing baseline methods into two categories: random-based attack methods and gradient-based attack methods. We briefly describe them as follows.
**Random-based Attack.** Since graphs and hypergraphs lack a stability mechanism for robustness, the classification performance of the target node is affected to a certain extent when inherent structural attributes are changed by random-based methods (i.e., connecting or disconnecting edges by modifying the original incidence matrix). In our work, we introduce three random-based methods: Random Delete (**RanD**), Random Modified (**RanA**), and Disconnect Internally, Connect Externally (**DiceA**) (Kumar et al., 2017). Specifically, let \(c\) and \(C\) denote the actual number of perturbations and the maximum allowed number of perturbations. RanD randomly selects \(c\) \((c<C)\) hyperedges from those connected to the target node and disconnects the target node from them. RanA randomly selects \(c\) \((c<C)\) hyperedges and can either disconnect or connect the relationship between the target node and the selected hyperedges. DiceA first randomly disconnects \(\frac{c}{2}\) \((c<C)\) hyperedges of the target node, then randomly connects the target node among the \(|E|-c\) non-selected hyperedges.
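A minimal NumPy sketch of the three random baselines, under our reading of the (partly ambiguous) DiceA description, is:

```python
import numpy as np

rng = np.random.default_rng(0)

def ran_d(H, target, c):
    """RanD: disconnect the target node from c randomly chosen incident hyperedges
    (assumes the target node has at least one incident hyperedge)."""
    incident = np.flatnonzero(H[target])
    chosen = rng.choice(incident, size=min(c, incident.size), replace=False)
    H_adv = H.copy()
    H_adv[target, chosen] = 0
    return H_adv

def ran_a(H, target, c):
    """RanA: flip the target node's membership on c randomly chosen hyperedges."""
    chosen = rng.choice(H.shape[1], size=c, replace=False)
    H_adv = H.copy()
    H_adv[target, chosen] = 1 - H_adv[target, chosen]
    return H_adv

def dice_a(H, target, c):
    """DiceA: disconnect c/2 incident hyperedges, then connect the target node
    to randomly chosen hyperedges among the non-selected ones."""
    H_adv = ran_d(H, target, c // 2)
    non_incident = np.flatnonzero(H_adv[target] == 0)
    chosen = rng.choice(non_incident, size=min(c - c // 2, non_incident.size),
                        replace=False)
    H_adv[target, chosen] = 1
    return H_adv
```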
**Gradient-based Attack.** The gradient-based attack is widely used as an efficient algorithm in much research, especially in white-box settings. When the internal information of the target model is available to attackers, the simplest approach is to use the gradients of the training process as indicators to evaluate the priority of each connection between nodes and hyperedges; attackers can then modify the matrix according to this priority. In our work, we introduce two gradient-based methods, **FGA** (Kumar et al., 2017) and **IGA** (Kumar et al., 2017), which use the gradient and the integrated gradient, respectively.
### Structure Attack Effectiveness
As shown in Table 2, we run HyperAttack and the five baseline methods on the Cora and PubMed datasets. Taking the Cora dataset as an example, HyperAttack has a clear advantage over all random-based attack methods: the average ASR increases from 41% to more than 92% when the KNN-based method is used to generate hyperedges. Compared with the 95% attack success rate achieved by IGA, the state-of-the-art attack method on graphs, HyperAttack reaches 96%. When we use the \(\varepsilon\)-ball, ASR increases from 91% with IGA to 96% with HyperAttack. On the PubMed dataset, ASR decreases to different degrees under every attack algorithm, but HyperAttack remains far superior to all random-based methods and slightly better than IGA in ASR. **Answer to RQ1:** We conclude that HyperAttack has outstanding advantages over the existing random-based methods, and is slightly superior to IGA by 1.3%-3.7% in ASR.
We use the classification margin to evaluate attack performance. For a target node \(v\), its classification margin is \(X=Z_{v,c}-\max_{c^{\prime}\neq c}Z_{v,c^{\prime}}\), where \(c\) is the ground-truth label and \(Z_{v,c}\) is the probability that the HGNN model assigns label \(c\) to node \(v\). Lower values of \(X\) are better, and \(X<0\) means the target node is misclassified. Figure 5 shows the classification margins of the different attack methods on the Cora and PubMed datasets. The classification margins of IGA and HyperAttack are clearly better than those of the other attack methods. IGA is relatively stable, but HyperAttack has a slightly higher success rate than IGA. **Answer to RQ3:** HyperAttack achieves the highest attack success rate with a guaranteed classification margin better than all other attack methods except IGA. We believe HyperAttack is accurate and relatively stable.
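For reference, the classification margin can be computed from the model's output probabilities with a few lines; this is a generic sketch, not the authors' code.

```python
import numpy as np

def classification_margin(Z, v, c):
    """X = Z[v, c] - max_{c' != c} Z[v, c']; X < 0 means v is misclassified."""
    scores = Z[v].astype(float).copy()
    true_score = scores[c]
    scores[c] = -np.inf        # exclude the true class from the max
    return true_score - scores.max()
```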
### Structure Attack Performance
In addition, Table 2 reports the running time of a successful attack. The running time of all random-based methods is only 0.02 seconds, because these methods only need the time for the disturbance operation itself and spend no time filtering the incidence matrix. In contrast, the gradient-based methods take longer, since they spend most of the time on selection. IGA and HyperAttack both show superior ASR; however, the time cost of HyperAttack is much lower than that of IGA. Specifically, a successful attack takes IGA 42-45 seconds, whereas HyperAttack needs only 2-4 seconds, reducing the time by a factor of 10-20. **Answer to RQ2:** The running time of HyperAttack is 10-20 times shorter than that of comparable state-of-the-art methods while maintaining a high attack success rate.
### Visualization and case study
To make HyperAttack easier to understand, Figure 6 visualizes it: we show the connection relationships of the target node and compare them before and after HyperAttack. We take a HyperAttack run on Hypergraph_KNN for the Cora dataset as the example, with the perturbation size set to 2. After the original hypergraph is trained by HGNN, the connection relationships of the target node \(V1\) are shown in Figure 6 (a): \(V1\) is connected with six hyperedges and belongs to the category shown in purple. In HyperAttack, we choose hyperedges \(E1\) and \(E2\) to attack via the fast-gradient-based filter and the integrated-gradient-based selection. We then modify the incidence matrix and test it in the trained HGNN; the result is shown in Figure 6 (b). After HyperAttack, the target node \(V1\) adds connections to the hyperedges \(E1\) and \(E2\), and its label changes to the category shown in orange, which proves the attack is successful.
## 6. Discussion
In this section, we discuss the robustness of different hyperedge construction methods under HyperAttack. In particular, we focus on the KNN-based modeling method and set up experiments to investigate how the parameter \(K\) affects robustness.
Table 2. Structure attack performance (ASR, AML, and RT) of the five baseline methods (RanD, RanA, DiceA, FGA, IGA) and HyperAttack with different hypergraph models. HyperAttack shows the best results in both attack success rate and time cost.

| Dataset | Attack type | Method | Hypergraph_KNN ASR(%) / AML / RT(s) | Hypergraph_\(\varepsilon\) ASR(%) / AML / RT(s) | Hypergraph_L1 ASR(%) / AML / RT(s) |
| --- | --- | --- | --- | --- | --- |
| Cora | Random-based | RanD | 15 / 4.20 / 0.02 | 12 / 4.39 / 0.02 | 80 / 1.96 / 0.02 |
| Cora | Random-based | RanA | 48 / 4.78 / 0.02 | 52 / 4.34 / 0.02 | 45 / 4.27 / 0.02 |
| Cora | Random-based | DiceA | 37 / 5.49 / 0.02 | 34 / 4.88 / 0.02 | 69 / 3.2 / 0.02 |
| Cora | Gradient-based | FGA | 85 / 3.00 / 0.03 | 91 / 1.37 / 0.04 | 89 / 1.75 / 0.03 |
| Cora | Gradient-based | IGA | 88 / 2.49 / 42.23 | 95 / 1.43 / 42.18 | 91 / 1.85 / 44.44 |
| Cora | Ours | HyperAttack | 89 / 2.84 / 3.65 | 96 / 1.39 / 3.64 | 96 / 2.02 / 3.74 |
| PubMed | Random-based | RanD | 11 / 6.45 / 0.08 | 3 / 8.74 / 0.08 | / | 
| PubMed | Random-based | RanA | 29 / 6.36 / 0.08 | 30 / 6.49 / 0.08 | 40 / 3.92 / 0.08 |
| PubMed | Random-based | DiceA | 17 / 5.43 / 0.08 | 14 / 7.33 / 0.08 | / |
| PubMed | Gradient-based | FGA | 48 / 5.22 / 0.16 | 72 / 3.79 / 0.15 | 96 / 1.47 / 0.10 |
| PubMed | Gradient-based | IGA | 69 / 5.16 / 187.13 | 82 / 3.60 / 187.33 | 97 / 1.32 / 186.13 |
| PubMed | Ours | HyperAttack | 74 / 5.11 / 19.38 | 83 / 3.68 / 19.50 | 97 / 1.26 / 19.48 |
Figure 5. The classification margin of different attack methods on the Cora dataset.
### Impact on perturbation
Figure 7 shows the ASR of the different attack methods as a function of the perturbation size \(\gamma\) on various hypergraph models for the Cora and PubMed datasets. For the random-based attacks, ASR is not proportional to the number of perturbations due to the high level of randomness in these methods; for example, as shown in Figure 7 (c), the ASR of DiceA fluctuates as \(\gamma\) increases. For the gradient-based attacks, ASR shows an overall growing tendency as the number of perturbations rises. When \(\gamma\) increases to a certain value, ASR may reach an upper limit and no longer change significantly, as shown in Figure 7 (f).
Figure 8 shows the running time of the attacks as the number of perturbations increases. The running time consists of the time to modify the incidence matrix and the time to calculate the gradients. The time for modifying the incidence matrix is very small, so the number of perturbations has little effect on the random-based attack methods. As the number of perturbations increases, FGA and HyperAttack calculate more gradients, so their RT increases. IGA needs to calculate the gradients of all hyperedges associated with the target node regardless of the number of perturbations; therefore, the perturbation size barely affects it.
### Impact of hypergraph modeling
We study the impact of different hypergraph generation methods on the attack. As shown in Figure 7, Hypergraph_KNN has the lowest ASR and Hypergraph_L1 the highest ASR on both datasets, which we believe is related to the number of nodes connected by the hyperedges. Hypergraph_KNN has the largest average number of nodes per hyperedge and is the most stable; Hypergraph_L1 has the lowest and is the most likely to be attacked successfully. In terms of classification accuracy, however, Hypergraph_L1 performs best. How to balance hypergraph performance and stability is therefore an important topic.
In particular, we study the effect of hyperedges connecting different numbers of nodes on hypergraph robustness. Figure 9 shows the ASR of HyperAttack when the incidence matrix \(H\) is constructed with KNN for different values of \(K\). We find that the ASR is higher when \(K\) is smaller. We therefore believe that the stability of the hypergraph is related to the number of nodes connected by its hyperedges: the hypergraph structure is more stable and robust when each hyperedge connects more nodes. Meanwhile, when \(K=2\), i.e., each hyperedge connects 2 nodes, the constructed hypergraph reduces to an ordinary graph, indicating that our method also applies to graphs and has generalization ability.
## 7. Related Work
In this section, we introduce the related work of hypergraph learning application and the adversarial attack against graph data.
### Hypergraph learning
A hypergraph, denoted as \(\mathcal{G}\), is composed of a set of nodes \(\mathcal{V}\), a set of hyperedges \(\mathcal{E}\), and a weight matrix \(\mathcal{W}\), which represents the connection strengths between the nodes in the hypergraph.
To preserve the high-order associations within the hypergraph and perform downstream tasks effectively, two main steps are involved. The first step is to construct the hypergraph from the original graph-based data. Previous studies on hypergraph generation methods have been reviewed in (Kang et al., 2018), which categorize these methods into four types: distance-based (Kang et al., 2018; Li et al., 2018), representation-based (Li et al., 2018; Li et al., 2018; Li et al., 2018), attribute-based (Li et al., 2018; Li et al., 2018), and network-based (Li et al., 2018; Li et al., 2018). These methods can be further classified into implicit and explicit methods. Implicit methods, such as distance-based and representation-based methods, do not directly obtain the hyperedges from the original data and require reconstruction through specific measurement and representation algorithms. Explicit methods, including attribute-based and network-based methods, can directly construct the hyperedges based on the attributes or network connections of the original data, preserving the correlations within the data.
The second step is to design learning methods for the constructed hypergraph. Hypergraph learning can be divided into spectral-analysis methods, neural-network methods, and other methods, based on their implementation. Spectral-analysis methods are the mainstream in hypergraph learning, utilizing matrix analysis and spectral theory. For instance, Yadati et al. (Yadati et al., 2018) employed group expansion to convert the hypergraph into an ordinary graph and used trainable hyperedge perception and hyperedge scoring layers to retain the high-order associativity between nodes and hyperedges. With the advancement of GNN research, some researchers have introduced neural networks into hypergraphs. Inspired by graph neural networks, Feng et al. (Feng et al., 2019) introduced the Laplace matrix into hypergraphs and proposed the Hypergraph Neural Network (HGNN) framework, which generalizes the star expansion of hypergraphs to neural networks. Other hypergraph learning methods focus on specific applications, such as video and image analysis. For example, Su et al. (Su et al., 2020) introduced weighted hypergraphs into the 3D target classification task, allowing node correlations to be re-evaluated through the weight matrix and capturing the potential correlations between nodes.
### Adversarial attack on graph data
In recent years, a large number of researchers have tried to generate adversarial examples so that graph-based deep learning systems can be tested for their ability to resist adversarial inputs (Kang et al., 2018; Li et al., 2018) [23, 25, 45, 48].
Figure 6. The visualization of HyperAttack. (a) is the connection relationship of the nodes before the attack. (b) is the connection relationship of the nodes after the attack.
The vast majority of attack methods achieve effective performance by modifying graph structures. A structure attack manipulates the adjacency matrix around the target node in chosen directions and leads to a large classification loss. Many methods try to apply traditional software-testing technology to graph-based deep learning systems via white-box and black-box testing. In white-box settings, attackers can acquire internal parameters and modify the graph structure with specific algorithms built on these available parameters. Several gradient-based strategies have been proposed to realize effective structure attacks. Nettack [48] is regarded as the first work to exploit this idea: it obtains an approximately optimal perturbation with a greedy algorithm by computing scores for structure attacks and feature attacks. Some works [4] use the gradients of pairwise nodes with respect to the target model and select the pair of nodes with the maximum absolute gradient to update the modified adjacency matrix. The topology attack [25] uses randomized sampling to select sub-optimal binary perturbations. The meta-gradient was introduced for the first time [1] on the graph adjacency matrix to solve a bi-level optimization problem for the poisoning attack; Mettack greedily adds perturbations via the largest meta-gradients.
In black-box settings, the attacker must propose adversarial samples without any access to the target model. This is the most dangerous kind of attack, because such attacks can be mounted against real-world models with very limited knowledge. Several strategies [3, 5, 24, 28] have been proposed to learn a generalized attack policy while only requiring prediction labels from the target classifier. The attackers model the attack as a Markov decision process: the current decision depends only on the current state and is independent of previous decisions. The reward function is designed according to the
Figure 8. Time cost of attack methods with different perturbation size \(\gamma\) on Cora dataset.
Figure 7. ASR of different attack methods as functions of perturbation size \(\gamma\) on various hypergraph generation methods.
Figure 9. ASR of HyperAttack on attacking HGNN and KNN-based hypergraph construction with different K value.
feedback of the victim model after the attack, and then it is solved by reinforcement learning.
## 8. Conclusion
In this paper, we propose HyperAttack, the first white-box structure attack framework against hypergraph neural networks. HyperAttack modifies the incidence matrix of the hypergraph by adding or deleting the target node on specific hyperedges. To measure the priority of each hyperedge, we use both the gradient and the integrated gradient as indicators. We conduct extensive experiments against five baseline methods on the two widely-used datasets Cora and PubMed: HyperAttack greatly shortens the time of the structure attack while its attack effect still achieves state-of-the-art performance. We also apply HyperAttack to hypergraphs built with three different modeling methods to evaluate their respective robustness. In the future, we plan to study attacks with other levels of knowledge (e.g., black-box attacks) on HGNNs, and to influence the classification result by modifying the node feature matrix.
|
2310.10767 | Wide Neural Networks as Gaussian Processes: Lessons from Deep
Equilibrium Models | Neural networks with wide layers have attracted significant attention due to
their equivalence to Gaussian processes, enabling perfect fitting of training
data while maintaining generalization performance, known as benign overfitting.
However, existing results mainly focus on shallow or finite-depth networks,
necessitating a comprehensive analysis of wide neural networks with
infinite-depth layers, such as neural ordinary differential equations (ODEs)
and deep equilibrium models (DEQs). In this paper, we specifically investigate
the deep equilibrium model (DEQ), an infinite-depth neural network with shared
weight matrices across layers. Our analysis reveals that as the width of DEQ
layers approaches infinity, it converges to a Gaussian process, establishing
what is known as the Neural Network and Gaussian Process (NNGP) correspondence.
Remarkably, this convergence holds even when the limits of depth and width are
interchanged, which is not observed in typical infinite-depth Multilayer
Perceptron (MLP) networks. Furthermore, we demonstrate that the associated
Gaussian vector remains non-degenerate for any pairwise distinct input data,
ensuring a strictly positive smallest eigenvalue of the corresponding kernel
matrix using the NNGP kernel. These findings serve as fundamental elements for
studying the training and generalization of DEQs, laying the groundwork for
future research in this area. | Tianxiang Gao, Xiaokai Huo, Hailiang Liu, Hongyang Gao | 2023-10-16T19:00:43Z | http://arxiv.org/abs/2310.10767v1 | # Wide Neural Networks as Gaussian Processes:
###### Abstract
Neural networks with wide layers have attracted significant attention due to their equivalence to Gaussian processes, enabling perfect fitting of training data while maintaining generalization performance, known as benign overfitting. However, existing results mainly focus on shallow or finite-depth networks, necessitating a comprehensive analysis of wide neural networks with infinite-depth layers, such as neural ordinary differential equations (ODEs) and deep equilibrium models (DEQs). In this paper, we specifically investigate the deep equilibrium model (DEQ), an infinite-depth neural network with shared weight matrices across layers. Our analysis reveals that as the width of DEQ layers approaches infinity, it converges to a Gaussian process, establishing what is known as the Neural Network and Gaussian Process (NNGP) correspondence. Remarkably, this convergence holds even when the limits of depth and width are interchanged, which is not observed in typical infinite-depth Multilayer Perceptron (MLP) networks. Furthermore, we demonstrate that the associated Gaussian vector remains non-degenerate for any pairwise distinct input data, ensuring a strictly positive smallest eigenvalue of the corresponding kernel matrix using the NNGP kernel. These findings serve as fundamental elements for studying the training and generalization of DEQs, laying the groundwork for future research in this area.
## 1 Introduction
Neural networks with wide layers have recently received significant attention due to their intriguing equivalence to Gaussian processes, known as the Neural Network and Gaussian Process (NNGP) correspondence. It has been established that two-layer fully-connected networks tend towards Gaussian processes as the width of the layers approaches infinity [29; 25]. This equivalence has also been theoretically demonstrated in various neural network architectures, including deep feed-forward networks [28], convolutional neural networks [33; 16], recurrent networks [39], and residual neural networks [35]. This equivalence not only sheds light on the training dynamics of these networks but also highlights their generalization performance, especially when the corresponding covariance matrix is strictly positive definite. These discoveries have paved the way for overparameterized neural networks to achieve perfect fit on training data [11; 31; 3], while maintaining low generalization error on unseen data [4; 30; 2], a phenomenon known as benign overfitting [7; 8; 27].
In recent years, the emergence of infinite-depth neural network architectures, such as neural ordinary differential equations (ODEs) [9] and deep equilibrium models (DEQs) [6], has demonstrated their potential to capture complex dynamic behaviors and achieve superior modeling capabilities [24; 34; 22; 6]. However, the analysis of these architectures in the context of wide neural networks with infinite-depth layers remains largely unexplored. Understanding the convergence properties and relationship to Gaussian processes of these networks is crucial to unravel their underlying mechanisms and unlock their full potential. While some limited studies have investigated the convergence properties of neural ODEs or ResNet architectures [21], such as demonstrating the convergence to a diffusion process in the infinite-depth limit for a specific ResNet architecture [35] and introducing scaling to allow the interchange of the two limits [20], to the best of our knowledge, there is no existing work studying the commutative limits of DEQs.
In this paper, we focus on analyzing the deep equilibrium model (DEQ), an infinite-depth neural network architecture with shared weight matrices across layers. Our objective is to comprehensively analyze the properties of DEQs in the context of wide neural networks and investigate their convergence behavior as the width of the layers tends to infinity. We establish that as the width approaches infinity, DEQ tends to a Gaussian process. Furthermore, under appropriate scaling of the weight matrices, the limits of depth and width commute, enhancing our understanding of the convergence properties of DEQs. Additionally, we demonstrate that the resulting covariance function is strictly positive definite for any distinct input data, provided that the activation function is non-polynomial.
## 2 Related Works
Implicit neural networks [12], such as deep equilibrium models (DEQs), have gained significant attention in the research community over the past decade. Recent studies have shown that implicit neural network architecture encompasses a broader class of models, making it a versatile framework that includes feed-forward neural networks, convolutional neural networks, residual networks, and recurrent neural networks [12; 6]. Moreover, DEQs have been recognized for their competitive performance compared to standard deep neural networks, offering the advantage of achieving comparable results while demanding much fewer computational and memory resources, especially due to the utilization of shared weights [6]. Despite the practical success of DEQs in various real-world applications, our theoretical understanding of DEQs remains limited.
On the other hand, numerous studies [29; 25; 28; 33; 16; 39; 35] have made observations that finite-depth neural networks with random initialization tend to exhibit behavior similar to Gaussian processes as the width approaches infinity, known as NNGP correspondence. This correspondence has led to investigations into the global convergence properties of gradient-based optimization methods. The work of [23] established that the trajectory of the gradient-based method can be characterized by the spectral property of a kernel matrix that is computed by the so-called neural tangent kernel (NTK). Consequently, if the limiting covariance function or NNGP kernel \(\Sigma^{L}\) can be shown to be strictly positive definite under mild conditions, simple first-order methods such as stochastic gradient descent can be proven to converge to a global minimum at a linear rate, provided the neural networks are sufficiently overparameterized [11; 3; 40; 32; 5; 31; 15; 13]. Furthermore, this equivalence offers valuable insights into the generalization performance of neural networks on unseen data. It suggests that wide neural networks can be viewed as kernel methods, and the Rademacher complexity of these networks can be easily computed if the parameter values remain bounded during training. As a result, a line of current research [4; 30; 2; 7; 8; 27; 14] has demonstrated that gradient-based methods can train neural networks of various architectures to achieve arbitrarily small generalization error, given that the neural networks are sufficiently overparameterized.
Unfortunately, the existing results in the literature primarily focus on finite-depth neural networks, and there has been relatively limited research investigating the training and generalization properties of infinite-depth neural networks. One key challenge arises when the limits of depth and width do not commute, leading to distinct behaviors depending on whether the depth or width is relatively larger. For instance, [35] demonstrates that a ResNet with bounded width tends to exhibit a diffusion process, while [20] observes heavy-tail distributions in standard MLPs when the depth is relatively larger than the width. These unstable behaviors give rise to a loss of expressivity in large-depth neural networks, specifically in terms of perfect correlation among network outputs for different inputs, as highlighted by studies such as [36; 37; 19]. This raises a significant issue, as it indicates that the networks lose the covariance structure of the inputs as the depth grows. In the case of ResNets,
a proposed solution to mitigate this problem involves employing a carefully chosen scaling on the residual branches [20; 18], resulting in commutative limits of depth and width. However, to the best of our knowledge, there is currently no research exploring the interplay between the limits of depth and width for DEQs, which represent another class of infinite-depth neural networks with shared weights.
## 3 Preliminary and Overview of Results
In this paper, we consider a simple deep equilibrium model \(f_{\theta}(x)\) defined as follows:
\[f_{\theta}(x)= V^{T}h^{*}(x). \tag{1}\]
Here, \(h^{0}(x)=0\), and \(h^{*}(x)\) represents the limit of the transition defined by:
\[h^{\ell}(x) =\phi(W^{T}h^{\ell-1}(x)+U^{T}x), \tag{2}\] \[h^{*}(x) =\lim_{\ell\to\infty}h^{\ell}(x), \tag{3}\]
where \(\phi(\cdot)\) is an activation function. The parameters are defined as \(U\in\mathbb{R}^{n_{in}\times n}\), \(W\in\mathbb{R}^{n\times n}\), and \(V\in\mathbb{R}^{n\times n_{out}}\). The following equilibrium equality arises from the fact that \(h^{*}(x)\) is a fixed point of the equation (2):
\[h^{*}(x)=\phi(W^{T}h^{*}(x)+U^{T}x). \tag{4}\]
To initialize the parameters \(\theta:=\text{vec}\left(U,W,V\right)\), we use random initialization as follows:
\[U_{ij}\stackrel{\text{iid}}{\sim}\mathcal{N}\left(0,\frac{\sigma_{u}^{2}}{n_{in}}\right),\quad W_{ij}\stackrel{\text{iid}}{\sim}\mathcal{N}\left(0,\frac{\sigma_{w}^{2}}{n}\right),\quad V_{ij}\stackrel{\text{iid}}{\sim}\mathcal{N}\left(0,\frac{\sigma_{v}^{2}}{n}\right), \tag{5}\]
where \(\sigma_{u},\sigma_{w},\sigma_{v}>0\) are fixed variance parameters.
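To make the construction concrete, here is a minimal NumPy sketch of the DEQ: parameters are drawn according to (5), and \(h^{*}(x)\) is obtained by iterating the fixed-point map in (2)/(4). The tanh activation, tolerance, and sizes below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def deq_forward(x, U, W, V, phi=np.tanh, tol=1e-8, max_iter=1000):
    """Iterate h <- phi(W^T h + U^T x) to a fixed point h*, then return V^T h*."""
    h = np.zeros(W.shape[0])
    inject = U.T @ x                          # the input injection is fixed along iterations
    for _ in range(max_iter):
        h_next = phi(W.T @ h + inject)
        if np.linalg.norm(h_next - h) < tol:  # contraction holds for small sigma_w
            return V.T @ h_next
        h = h_next
    return V.T @ h

# Illustrative sizes and variances; sigma_w is kept small (cf. Proposition 3.1).
rng = np.random.default_rng(0)
n_in, n, n_out = 3, 256, 1
s_u, s_w, s_v = 1.0, 0.3, 1.0
U = rng.normal(0.0, s_u / np.sqrt(n_in), (n_in, n))
W = rng.normal(0.0, s_w / np.sqrt(n), (n, n))
V = rng.normal(0.0, s_v / np.sqrt(n), (n, n_out))
x = rng.normal(size=n_in)
x /= np.linalg.norm(x)                        # inputs on the unit sphere
print(deq_forward(x, U, W, V))
```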
To ensure the well-definedness of the neural network parameterized by (1), we establish sufficient conditions for the existence of the unique limit \(h^{*}\) by leveraging fundamental results from random matrix theory applied to random square matrices \(A\in\mathbb{R}^{n\times n}\):
\[\lim_{n\to\infty}\frac{\|A\|_{op}}{\sqrt{n}}=\sqrt{2}\quad\text{ almost surely (a.s.)},\]
where \(\|A\|_{op}\) denotes the operator norm of \(A\).
**Proposition 3.1** (Informal Version of Lemma F.1).: _There exists an absolute small constant \(\sigma_{w}>0\) such that \(h^{*}(x)\) is uniquely determined almost surely for all \(x\)._
While similar results have been obtained in [15] using non-asymptotic analysis, our contribution lies in the asymptotic analysis, which is essential for studying the behaviors of DEQ under the limit of width approaching infinity. This asymptotic perspective allows us to investigate the convergence properties and relationship between the limits of depth and width. We refer readers to Section 4 and Theorem 4.4 where we leverage this result to demonstrate the limits of depth and width commutes.
After ensuring the well-definedness of \(h^{*}(x)\), the next aspect of interest is understanding the behavior of the neural network \(f_{\theta}\) as a random function at the initialization. Previous studies [28; 33; 16; 39; 35] have demonstrated that finite-depth neural networks behave as Gaussian processes when their width \(n\) is sufficiently large. This raises the following question:
_Q1: Do wide neural networks still exhibit Gaussian process behavior when they have infinite-depth, particularly with shared weights?_
Unfortunately, the answer is generally _No_. The challenge arises when the limits of depth and width do not commute. Several studies have observed that switching the convergence sequence of depth and width leads to different limiting behaviors. While wide neural networks behave as Gaussian processes, [26; 20] have observed heavy-tail distributions when the depth becomes relatively larger than the width. However, in the case of DEQs, we demonstrate that such deviations from Gaussian process behavior do not occur, since the infinite width limit and infinite depth limit do commute for DEQs given by (2). This crucial property is established through a meticulous analysis, focusing on fine-grained analysis to accurately determine the convergence rates of the two limits. Our findings affirm the stability and consistent Gaussian process behavior exhibited by DEQs, reinforcing their unique characteristics in comparison to other wide neural networks with infinite-depth layers.
**Theorem 3.1** (Informal Version of Theorem 4.4).: _Under the limit of width \(n\to\infty\), the neural network \(f_{\theta}\) defined on (1) tends to a centered Gaussian process with a covariance function \(\Sigma^{*}\)._
Once we have confirmed that wide DEQs act as Gaussian processes, the multidimensional Gaussian random vectors corresponding to a given set of inputs are of interest, especially the non-degeneracy of their covariance matrix. This raises the following question:
_Q2: Is the covariance function \(\Sigma^{*}\) strictly positive definite?_
If the covariance function \(\Sigma^{*}\) of the Gaussian process associated with the DEQ is strictly positive definite, it implies that the corresponding covariance matrix is nondegenerate and has a strictly positive least eigenvalue. This property is crucial in various classical statistical analyses, including inference, prediction, and parameter estimation. Furthermore, the strict positive definiteness of \(\Sigma^{*}\) has implications for the global convergence of gradient-based optimization methods used in training neural networks. In the context of wide neural networks, these networks can be viewed as kernel methods under gradient descent, utilizing the NTK [23]. By making appropriate assumptions on the activation function \(\phi\), we establish that the covariance function \(\Sigma^{*}\) of DEQs is indeed strictly positive definite, meaning that the corresponding covariance matrix \(K^{*}\) has strictly positive least eigenvalue when the inputs are distinct.
**Theorem 3.2** (Informal Version of Theorem 4.5).: _If the activation function \(\phi\) is nonlinear but non-polynomial, then the covariance function \(\Sigma^{*}\) is strictly positive definite._
These findings expand the existing literature on the convergence properties of infinite-depth neural networks and pave the way for further investigations into the training and generalization of DEQs.
## 4 Main Results
To study the DEQ, we introduce the concept of finite-depth neural networks, denoted as \(f^{L}_{\theta}(x)=V^{T}h^{L-1}(x)\), where \(h^{\ell}(x)\) represents the post-activation values. The definition of \(h^{\ell}(x)\) for \(\ell\in[L-1]\) is as follows:
\[\begin{split} g^{1}(x)&=U^{T}x,\quad h^{1}(x)=\phi(g^{1}(x)),\\ g^{\ell}(x)&=W^{T}h^{\ell-1}(x),\quad h^{\ell}(x)=\phi(g^{\ell}(x)+g^{1}(x)),\quad\text{for }\ell=2,3,\ldots,L-1.\end{split} \tag{6}\]
**Remark 4.1**.: _We assume \(U\), \(W\), and \(V\) are randomly initialized according to (5). The post-activation values \(h^{\ell}\) differ slightly from those in classical Multilayer Perceptron (MLP) models due to the inclusion of input injection. It is worth mentioning that \(f^{L}_{\theta}\) is equivalent to the DEQ \(f_{\theta}\) when we let \(L\to\infty\), provided that the limit exists._
### \(f^{L}_{\theta}\) as a Gaussian Process
The finite-depth neural network \(f^{L}_{\theta}\) can be expressed as a Tensor Program, which is a computational algorithm introduced in [39] for implementing neural networks. In their work, [39] provides examples of various neural network architectures represented as tensor programs. They also establish that all G-var (or pre-activation vectors in our case) in a tensor program tend to Gaussian random variables as the width \(n\) approaches infinity [39, Theorem 5.4]. Building upon this result, we can employ a similar argument to demonstrate that the neural network \(f^{L}_{\theta}\) defined by (6) converges to a Gaussian process, with the covariance function computed recursively, under the assumption of a controllable activation function.
**Definition 4.1**.: _A real-valued function \(\phi:\mathbb{R}^{k}\to\mathbb{R}\) is called **controllable** if there exists some absolute constants \(C,c>0\) such that \(|\phi(x)|\leq Ce^{c\sum_{i=1}^{k}|x_{i}|}\)._
It is important to note that controllable functions are not necessarily smooth, although smooth functions can be easily shown to be controllable. Moreover, controllable functions, as defined in [39, Definition 5.3], can grow faster than exponential but remain \(L^{1}\) and \(L^{2}\)-integrable with respect to the Gaussian measure. However, the simplified definition presented here encompasses most functions encountered in practice.
Considering the activation function \(\phi\) as controllable and conditioned on previous layers, we observe that the pre-activation \(g^{\ell}_{k}(x)\) behaves like independent and identically distributed (i.i.d.) Gaussian
random variables. By induction, both the conditioned and unconditioned distributions of \(g_{k}^{\ell}(x)\) converge to the same Gaussian random variable \(z^{\ell}(x)\) as the width approaches infinity. This result is proven in Appendix B.
**Theorem 4.1**.: _For a finite-depth neural network \(f_{\theta}^{L}\) defined in (6), as the width \(n\to\infty\), the output functions \(f_{\theta,k}^{L}\) for \(k\in[1,n_{out}]\) tends to centered Gaussian processes in distribution with covariance function \(\Sigma^{L}\) defined recursively as follows: for all \(\ell\in[2,L-1]\)_
\[\Sigma^{1}(x,x^{\prime})=\sigma_{u}^{2}\left\langle x,x^{\prime}\right\rangle/n_{in} \tag{7}\] \[\Sigma^{2}(x,x^{\prime})=\sigma_{w}^{2}\mathbb{E}\,\phi(z^{1}(x))\phi(z^{1}(x^{\prime})) \tag{8}\] \[\Sigma^{\ell+1}(x,x^{\prime})=\sigma_{w}^{2}\mathbb{E}\,\phi(z^{\ell}(x)+z^{1}(x))\phi(z^{\ell}(x^{\prime})+z^{1}(x^{\prime})), \tag{9}\]
_where_
\[\begin{bmatrix}z^{1}(x)\\ z^{\ell}(x)\\ z^{1}(x^{\prime})\\ z^{\ell}(x^{\prime})\end{bmatrix}\sim\mathcal{N}\left(0,\begin{bmatrix}\Sigma^{1}(x,x)&0&\Sigma^{1}(x,x^{\prime})&0\\ 0&\Sigma^{\ell}(x,x)&0&\Sigma^{\ell}(x,x^{\prime})\\ \Sigma^{1}(x^{\prime},x)&0&\Sigma^{1}(x^{\prime},x^{\prime})&0\\ 0&\Sigma^{\ell}(x^{\prime},x)&0&\Sigma^{\ell}(x^{\prime},x^{\prime})\end{bmatrix}\right) \tag{10}\]
Furthermore, we derive a compact form of the covariance function \(\Sigma^{L}\) in Corollary 4.2 by using the fact that \(z^{1}\) and \(z^{\ell}\) are independent, which is proven in Appendix C.
**Corollary 4.2**.: _The covariance function \(\Sigma^{L}\) in Theorem 4.1 is rewritten as follows: \(\forall\ell\in[1,L-1]\)_
\[\Sigma^{1}(x,x^{\prime}) =\sigma_{u}^{2}\left\langle x,x^{\prime}\right\rangle/n_{in}, \tag{11}\] \[\Sigma^{\ell+1}(x,x^{\prime}) =\sigma_{w}^{2}\mathbb{E}\phi(u^{\ell}(x))\phi(u^{\ell}(x^{\prime })), \tag{12}\]
_where \((u^{\ell}(x),u^{\ell}(x^{\prime}))\) follows a centered bivariate Gaussian distribution with covariance_
\[\mathrm{Cov}(u^{\ell}(x),u^{\ell}(x^{\prime}))=\begin{cases}\Sigma^{1}(x,x^{ \prime}),&\ell=1\\ \Sigma^{\ell}(x,x^{\prime})+\Sigma^{1}(x,x^{\prime}),&\ell\in[2,L-1]\end{cases} \tag{13}\]
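The recursion in Corollary 4.2 can be evaluated numerically for a single pair of inputs; below is a hedged Monte-Carlo sketch in NumPy, with tanh and the variance parameters chosen arbitrarily for illustration.

```python
import numpy as np

def nngp_pair(x, xp, L, s_u=1.0, s_w=0.3, phi=np.tanh, n_mc=200_000, seed=0):
    """Monte-Carlo evaluation of Sigma^L(x, x') via Eqs. (11)-(13)."""
    rng = np.random.default_rng(seed)
    n_in = x.shape[0]
    S1 = s_u**2 / n_in * np.array([[x @ x, x @ xp],
                                   [xp @ x, xp @ xp]])   # Sigma^1 on the pair, Eq. (11)
    S = S1.copy()
    for ell in range(1, L):
        cov = S1 if ell == 1 else S + S1                 # Eq. (13): add the input injection
        u = rng.multivariate_normal(np.zeros(2), cov, size=n_mc)
        p, q = phi(u[:, 0]), phi(u[:, 1])
        S = s_w**2 * np.array([[np.mean(p * p), np.mean(p * q)],
                               [np.mean(q * p), np.mean(q * q)]])  # Eq. (12)
    return S[0, 1]                                       # Sigma^L(x, x')
```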
**Remark 4.2**.: _It is worth noting that the same Gaussian process and covariance function \(\Sigma^{L}\) are obtained regardless of whether the same weight matrix \(W\) is shared among layers. Additionally, there is no dependence across layers in the limit if different weight matrices are used; that is, if \(W^{\ell}\neq W^{k}\), then \(\mathrm{Cov}(z^{\ell}(x),z^{k}(x^{\prime}))=0\). These observations align with studies of recurrent neural networks [1, 39], where the same weight matrix \(W\) is applied in each layer._
### On the Strictly Positive Definiteness of \(\Sigma^{L}\)
To clarify the mathematical context, we provide a precise definition of the strict positive definiteness of a kernel function:
**Definition 4.2**.: _A kernel function \(k:X\times X\to\mathbb{R}\) is said to be strictly positive definite if, for any finite set of pairwise distinct points \(x_{1},x_{2},\ldots,x_{n}\in X\), the matrix \(K=[k(x_{i},x_{j})]_{i,j=1}^{n}\) is strictly positive definite. In other words, for any non-zero vector \(c\in\mathbb{R}^{n}\), we have \(c^{T}Kc>0\)._
Recent works [11, 38, 3] have studied the convergence of (stochastic) gradient descent to global minima when training neural networks. It has been shown that strict positive definiteness of the covariance function, or NNGP kernel, \(\Sigma^{L}\) guarantees convergence. When the data set is supported on a sphere, we can establish the strict positive definiteness of \(\Sigma^{L}\) using Gaussian integration techniques and the strict positive definiteness of the priors. The following theorem (Theorem 4.3) is proven in Appendix D.
**Theorem 4.3**.: _For a non-polynomial Lipschitz nonlinearity \(\phi\) and any input dimension \(n_{0}\), the restriction of the limiting covariance function \(\Sigma^{L}\) to the unit sphere \(\mathbb{S}^{n_{0}-1}=\{x:\|x\|=1\}\) is strictly positive definite for \(2\leq L<\infty\)._
This theorem establishes that the limiting covariance function \(\Sigma^{L}\) of finite-depth neural network \(f_{\theta}^{L}\) is strictly positive definite when restricted to the unit sphere \(\mathbb{S}^{n_{0}-1}\), provided that a non-polynomial activation function is used.
### \(f_{\theta}\) as a Gaussian Process
In this subsection, we explore the convergence behavior of the infinite-depth neural network \(f_{\theta}\) to a Gaussian process as the width \(n\) tends to infinity. Since we have two limits involved, namely the depth and the width, it can be considered as a double sequence. Therefore, it is essential to review the definitions of convergence in double sequences.
**Definition 4.3**.: _Let \(\left\{a_{m,n}\right\}\) be a double sequence, then it has two types of **iterated limits**_
\[\lim_{m\to\infty}\lim_{n\to\infty}a_{m,n}=\lim_{m\to\infty}\left(\lim_{n\to \infty}a_{m,n}\right), \tag{14}\]
\[\lim_{n\to\infty}\lim_{m\to\infty}a_{m,n}=\lim_{n\to\infty}\left(\lim_{m\to \infty}a_{m,n}\right). \tag{15}\]
_The **double limit** of \(\left\{a_{m,n}\right\}\) is denoted by_
\[L:=\lim_{m,n\to\infty}a_{m,n}, \tag{16}\]
_which means that for all \(\varepsilon>0\), there exists \(N(\varepsilon)\in\mathbb{N}\) s.t. \(m,n\geq N(\epsilon)\) implies \(|a_{m,n}-L|\leq\epsilon\)._
In Subsection 4.1, we have previously shown that \(f_{\theta}^{L}\) converges to a centered Gaussian process with a covariance function \(\Sigma^{L}\), which is recursively defined. However, it is important to note that this convergence does not necessarily imply that the infinite-depth neural network \(f_{\theta}\) also converges to a Gaussian process, as the order in which the limits are taken can affect the result. Recent studies, such as [28, 20], have demonstrated that the convergence behavior of neural networks depends on the order in which the width and depth limits are taken. Specifically, when the width tends to infinity first, followed by the depth, a standard multi-layer perceptron (MLP) converges weakly to a Gaussian process. However, if the depth tends to infinity first, followed by the width, a heavy-tail distribution emerges. Additionally, when both the depth and width tend to infinity at a fixed ratio, a log-normal distribution is observed for Residual neural networks. Hence, the two limits are not necessarily equivalent unless they commute.
When studying the convergence behavior of DEQs, the infinite-depth-then-width limit is the relevant one, as DEQs are defined as infinite-depth neural networks. Therefore, to establish the Gaussian process nature of DEQs, it is important to show that the infinite-depth-then-width limit equals the infinite-width-then-depth limit. Fortunately, we can demonstrate that these two limits commute and are equal to the double limit, because the convergence in depth is much faster than that in width, as proven in Appendix F.
**Lemma 4.1**.: _Choose \(\sigma_{w}>0\) small such that \(\gamma:=2\sqrt{2}\sigma_{w}<1\). Then for every \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\), \(\lim_{\ell\to\infty}\lim_{n\to\infty}\frac{1}{n}\left\langle h^{\ell}(x),h^{ \ell}(x^{\prime})\right\rangle\) and \(\lim_{n\to\infty}\lim_{\ell\to\infty}\frac{1}{n}\left\langle h^{\ell}(x),h^{ \ell}(x^{\prime})\right\rangle\) exist and are equal to \(\Sigma^{*}(x,x^{\prime})\) almost surely, i.e.,_
\[\Sigma^{*}(x,x^{\prime}):=\lim_{\ell\to\infty}\lim_{n\to\infty}A_{n,\ell}= \lim_{n\to\infty}\lim_{\ell\to\infty}A_{n,\ell}=\lim_{\ell,n\to\infty}A_{n, \ell}, \tag{17}\]
_where \(A_{n,\ell}:=\frac{1}{n}\left\langle h^{\ell}(x),h^{\ell}(x^{\prime})\right\rangle\)._
Proof.: See Appendix F.
Lemma 4.1 confirms that the two iterated limits of the empirical covariance \(\frac{1}{n}\left\langle h^{\ell}(x),h^{\ell}(x^{\prime})\right\rangle\) exist and are equal to the double limit \(\Sigma^{*}(x,x^{\prime})\) for any \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\). Consequently, it establishes that the two limits commute: the depth-then-width and width-then-depth limits coincide. Building upon this result, we can state Theorem 4.4, which asserts that as the width \(n\) of the infinite-depth neural network \(f_{\theta}\) tends to infinity, the output functions \(f_{\theta,k}\) converge to independent and identically distributed (i.i.d.) centered Gaussian processes with covariance function \(\Sigma^{*}(x,x^{\prime})\). The detailed proofs can be found in Appendix G.
**Theorem 4.4**.: _Choose \(\sigma_{w}>0\) small such that \(\gamma:=2\sqrt{2}\sigma_{w}<1\). For the infinite-depth neural network \(f_{\theta}\) defined in (1), as the width \(n\to\infty\), the output functions \(f_{\theta,k}\) tend to i.i.d. centered Gaussian processes with covariance function \(\Sigma^{*}\) defined by_
\[\Sigma^{*}(x,x^{\prime})=\lim_{\ell\to\infty}\Sigma^{\ell}(x,x^{\prime}), \tag{18}\]
_where \(\Sigma^{\ell}\) are defined in Theorem 4.1._
### The Strict Positive Definiteness of \(\Sigma^{*}\)
We conclude this section by establishing the strict positive definiteness of the limiting covariance function \(\Sigma^{*}\). Notably, the proof techniques used in Theorem 4.3 are not applicable here, as the strict positive definiteness of \(\Sigma^{L}\) may be lost as \(L\) approaches infinity.
Instead, we leverage the intrinsic properties of \(\Sigma^{*}\) itself and the Hermite expansion of the dual activation \(\hat{\phi}\) of \(\phi\) [10]. To extract the essential properties of \(\Sigma^{*}\), we perform a fine-grained analysis of the pointwise convergence of the covariance functions \(\Sigma^{\ell}\) for each pair of inputs \((x,x^{\prime})\).
**Lemma 4.2**.: _Choose \(\sigma_{w}>0\) small enough that \(\beta:=\frac{\sigma_{w}^{2}}{2}\mathbb{E}|z|^{2}|z^{2}-1|<1\), where \(z\) is a standard Gaussian random variable. Then for all \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\), the function \(\Sigma^{\ell}\) satisfies_
1. \(\Sigma^{\ell}(x,x)=\Sigma^{\ell}(x^{\prime},x^{\prime})\)_,_
2. \(\Sigma^{\ell}(x,x)\leq(1+1/\beta)\Sigma^{2}(x,x)\)_._
_Consequently, \(\Sigma^{*}(x,x^{\prime})=\lim_{\ell\to\infty}\Sigma^{\ell}(x,x^{\prime})\) is well-defined and satisfies for all \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\)_
\[0<\Sigma^{*}(x,x)=\Sigma^{*}(x^{\prime},x^{\prime})<\infty.\]
Lemma 4.2, proven in Appendix E, ensures that the limiting covariance function \(\Sigma^{*}\) is well-defined for all \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\), provided \(\sigma_{w}>0\) is chosen small. The lemma also guarantees that \(\Sigma^{*}(x,x)\) and \(\Sigma^{*}(x^{\prime},x^{\prime})\) are strictly positive, equal, and finite for all \(x,x^{\prime}\in\mathbb{S}^{n_{in}-1}\). These findings are crucial for demonstrating the strict positive definiteness of \(\Sigma^{*}\): by leveraging these properties, we can derive the Hermite expansion of the limiting kernel \(\Sigma^{*}\).
By utilizing [23, Theorem 3], we establish in Theorem 4.5, proven in Appendix H, that \(\Sigma^{*}\) is strictly positive definite whenever \(\phi\) is non-polynomial. It is important to note that our analysis extends to covariance or kernel functions induced by other infinite-depth neural networks, because it does not rely on the strict positive definiteness of finite-depth priors. Instead, we examine the intrinsic properties of \(\Sigma^{*}\) directly.
**Theorem 4.5**.: _For a non-polynomial Lipschitz nonlinearity \(\phi\) and any input dimension \(n_{0}\), the restriction of the limiting covariance \(\Sigma^{*}\) to the unit sphere \(\mathbb{S}^{n_{0}-1}=\{x:\|x\|=1\}\) is strictly positive definite._
## 5 Experimental Results
In this section, we present a series of numerical experiments to validate the theoretical results established above. Our experiments verify the well-posedness of the fixed point of the transition equation (2), and investigate whether the DEQ behaves as a Gaussian process when the width is sufficiently large, as stated in our main result, Theorem 4.4. Additionally, we examine the strict positive definiteness of the limiting covariance function \(\Sigma^{*}\), as established in Theorem 4.5, by computing the smallest eigenvalue of the associated covariance matrix \(K^{*}\). These experiments serve to empirically support our theoretical findings.
### Convergence to the fixed point
Proposition 3.1 guarantees the existence of a unique fixed point for the DEQ. To verify this, we simulated neural networks with the transition equation (2) and plotted the relative error between \(h^{\ell}\) and \(h^{\ell+1}\) in Figure 1. As shown in the figure, roughly 25 iterations suffice for convergence to the fixed point across various widths. This observation aligns with our theoretical findings in Lemma F.1, where the random initialization (5) scales the weight matrix \(W\) such that \(\|W\|_{\text{op}}=\mathcal{O}(\sigma_{w})\).
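As a rough illustration of this experiment (ours, not the authors' code), the following sketch iterates a DEQ-style transition to its fixed point and reports the relative error. It assumes a transition of the common form \(h^{\ell+1}=\tanh(Wh^{\ell}+Ux)\) with the entries of \(W\) drawn as \(N(0,\sigma_{w}^{2}/n)\), which may differ from the exact form of (2); the dimensions, the scaling of \(U\), and the iteration count are illustrative. It also prints the empirical covariance \(\frac{1}{n}\langle h,h\rangle\) appearing in Lemma 4.1.

```
/* Minimal sketch (ours) of the fixed-point check; all choices below
 * marked "assumption" are illustrative, not taken from the paper. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_IN 10        /* input dimension (assumption)           */
#define N    500       /* width (assumption)                     */
#define SIGMA_W 0.25   /* gamma = 2*sqrt(2)*sigma_w ~= 0.71 < 1  */
#define TWO_PI 6.283185307179586

static double gauss(void) { /* standard normal via Box-Muller */
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

int main(void) {
    static double W[N * N], U[N * N_IN], x[N_IN], h[N], h_next[N];
    double nx = 0.0;
    for (int i = 0; i < N * N; i++)    W[i] = SIGMA_W / sqrt((double)N) * gauss();
    for (int i = 0; i < N * N_IN; i++) U[i] = gauss() / sqrt((double)N_IN);
    for (int i = 0; i < N_IN; i++)   { x[i] = gauss(); nx += x[i] * x[i]; }
    for (int i = 0; i < N_IN; i++)     x[i] /= sqrt(nx); /* put x on the sphere */

    for (int l = 0; l < 50; l++) {   /* iterate the transition to the fixed point */
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j < N; j++)    s += W[i * N + j] * h[j];
            for (int j = 0; j < N_IN; j++) s += U[i * N_IN + j] * x[j];
            h_next[i] = tanh(s);
        }
        double num = 0.0, den = 1e-12;
        for (int i = 0; i < N; i++) {
            num += (h_next[i] - h[i]) * (h_next[i] - h[i]);
            den += h[i] * h[i];
            h[i] = h_next[i];
        }
        printf("iter %2d  relative error %.3e\n", l + 1, sqrt(num / den));
    }
    double A = 0.0;                  /* empirical covariance of Lemma 4.1 */
    for (int i = 0; i < N; i++) A += h[i] * h[i];
    printf("(1/n)<h,h> = %.4f\n", A / N);
    return 0;
}
```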
### The Gaussian behavior
Theorem 4.4 predicts that the outputs of a DEQ tend to a Gaussian process as the width approaches infinity. To demonstrate this, we consider a specific DEQ with \(n_{in}=10\) and \(n_{out}=10\), activated by tanh, and analyze the output distributions of 10,000 such networks. An important implication of Theorem 4.4 is that the outputs form independent, identical Gaussian distributions. To visualize this, we plot a pairplot in Figure 1 of three randomly selected outputs, confirming the validity of this implication.
Next, we generate histograms of the 10,000 neural networks to approximate the distribution of the first neuron in the output layer. In the third plot of Figure 1, we present the histogram for a width of 1000. Remarkably, the output distribution exhibits a strong adherence to the Gaussian model, as evidenced by a Kolmogorov-Smirnov (KS) statistic of 0.0056 and a corresponding p-value of 0.9136. Furthermore, in Figure 1(a) in the supplementary material, we provide histograms for widths of 10, 50, 100, 500, and 1000. As the width increases, the output distribution progressively converges towards a Gaussian distribution. This is evident from the decreasing KS statistics and the increasing p-values as the width extends from 10 to 1000.
Based on Theorem 4.4, the outputs of the neural network exhibit a behavior reminiscent of a joint Gaussian distribution for different inputs \(x\) and \(x^{\prime}\). To illustrate this, we plot the first output of the 10,000 neural networks for two distinct inputs in the first plot of Figure 2. Notably, the predicted limiting Gaussian level curves, derived from the limiting kernel function stated in Lemma 4.2, perfectly match the simulation results when the width is set to 1000.
### Convergence of the kernel
According to Theorem 4.4, the DEQ tends to a Gaussian process with covariance function \(\Sigma^{*}=\lim_{\ell\to\infty}\Sigma^{\ell}\). Given \(N\) distinct inputs \(\{x_{i}\}_{i=1}^{N}\), as stated in Theorem 4.1, the limiting covariance matrix \(K^{*}\) can be computed recursively, _i.e._, \(K^{\ell}_{ij}=\Sigma^{\ell}(x_{i},x_{j})\). By Lemma 4.1, each element \(K^{\ell}_{ij}\) can be approximated by \(\frac{1}{n}\langle h^{\ell}(x_{i}),h^{\ell}(x_{j})\rangle\). We conduct a series of numerical experiments to visually assess this convergence.
First, we examine convergence in width. We fix a large depth \(\ell\) and vary the width over \(2^{2}\) to \(2^{13}\). The first two plots of Figure 3 show the errors between the limiting covariance matrix \(K^{*}\) and the finite-width empirical estimate \(K^{\ell}_{n}\). The relative errors \(\|K^{\ell}_{n}-K^{*}\|_{F}/\|K^{*}\|_{F}\) consistently decrease as the width grows, and a convergence rate of order \(n^{-1}\) is observed.
Figure 1: Convergence to the fixed point (left); distribution of the first neuron of the output over \(10,000\) neural networks, with KS statistic and p-value (middle); joint distributions of three randomly selected output neurons over \(10,000\) neural networks, where the orange curves denote the Gaussian distribution (right)
Figure 2: Joint distributions for the first neuron for two different inputs over \(10,000\) neural networks (left); Covariance matrix obtained by a neural network (middle); Covariance matrix obtained by Gaussian process (right)
Next, we examine the convergence in depth by fixing a large width. The results are shown in the third and fourth plots of Figure 3. From these plots, we can observe that the error converges rapidly as the depth of the network increases, illustrating an exponential convergence rate.
### The positive definiteness of the kernel
Theorem 4.5 establishes that the NNGP kernel is strictly positive definite. As discussed earlier, the kernel matrix \(K^{L}\) can be computed recursively, as stated in Theorem 4.1 or Corollary 4.2; we refer to this as the _theoretical_ approach. Alternatively, it can be estimated as the empirical covariance through simulation, which we denote the _simulation_ approach. We employ both methods to compute the smallest eigenvalue of the kernel matrix \(K^{L}\). The results are summarized in Figure 4. It is evident from the figure that the smallest eigenvalues increase with depth and stabilize once the kernel is well approximated. Furthermore, the smallest eigenvalue increases with higher values of \(\sigma_{w}\).
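For completeness, one library-free way to obtain \(\lambda_{\min}\) of a symmetric positive semi-definite kernel matrix is a shifted power iteration; the sketch below is our illustration (not the authors' code), with the matrix size and iteration counts chosen arbitrarily.

```
/* Sketch (ours): power iteration yields lambda_max(K); power iteration on
 * B = lambda_max*I - K (also PSD) yields lambda_max - lambda_min(K). */
#include <math.h>

#define M 64 /* number of inputs, so K is M x M (assumption) */

static double power_iter(double A[M][M], int iters) {
    double v[M], w[M], lam = 0.0;
    for (int i = 0; i < M; i++) v[i] = 1.0 / sqrt((double)M);
    for (int t = 0; t < iters; t++) {
        double nrm = 0.0;
        for (int i = 0; i < M; i++) {
            w[i] = 0.0;
            for (int j = 0; j < M; j++) w[i] += A[i][j] * v[j];
            nrm += w[i] * w[i];
        }
        nrm = sqrt(nrm);
        lam = nrm; /* ||Av|| -> lambda_max for unit v and symmetric PSD A */
        for (int i = 0; i < M; i++) v[i] = w[i] / nrm;
    }
    return lam;
}

double lambda_min_psd(double K[M][M]) {
    static double B[M][M];
    double lmax = power_iter(K, 500);
    for (int i = 0; i < M; i++)
        for (int j = 0; j < M; j++)
            B[i][j] = (i == j ? lmax : 0.0) - K[i][j];
    return lmax - power_iter(B, 500);
}
```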
### Test Performance
To complement the theoretical analysis, we conducted numerical experiments demonstrating the NNGP correspondence for DEQs on real datasets with varying widths; a visual summary is given in Figure 4. Our observations consistently reveal that the NNGP outperforms trained finite-width DEQs. Moreover, a compelling trend emerges: as the network width increases, the performance of DEQs converges toward the NNGP performance. Notably, this phenomenon mirrors observations made for standard feedforward neural networks [25, 28]. These experiments provide practical evidence on the behavior of DEQs across different network sizes.
## 6 Conclusion and Future Work
This paper establishes that deep equilibrium models (DEQs) behave as Gaussian processes with a strictly positive definite covariance function \(\Sigma^{*}\) in the limit of the network width approaching infinity. This finding contributes to the understanding of the convergence properties of infinite-depth neural networks, demonstrating when and how the depth and width limits commute. An important direction for future research is to leverage the results presented in this paper to investigate the training and generalization performance of DEQs. While the results obtained in
Figure 4: From left to right: \(\lambda_{\min}(K^{\ell})\) across varying depths \(\ell\); \(\lambda_{\min}(K^{*})\) for different \(\sigma_{w}\) (blue curve: theory; orange curve: simulation); Test accuracy of the MNIST dataset using NNGP and DEQs with various widths; MSE of the MNIST dataset using NNGP and DEQs with various widths.
Figure 3: Covariance behaviors with varying width and depth
this paper hold for commonly used activation functions, it would be interesting to explore more complex transition functions in future work.
## 7 Acknowledgements
We would like to acknowledge the generous support of the National Science Foundation (NSF) under grants DMS-1812666 and III-2104797.
|
2310.00574 | YFlows: Systematic Dataflow Exploration and Code Generation for
Efficient Neural Network Inference using SIMD Architectures on CPUs | We address the challenges associated with deploying neural networks on CPUs,
with a particular focus on minimizing inference time while maintaining
accuracy. Our novel approach is to use the dataflow (i.e., computation order)
of a neural network to explore data reuse opportunities using heuristic-guided
analysis and a code generation framework, which enables exploration of various
Single Instruction, Multiple Data (SIMD) implementations to achieve optimized
neural network execution. Our results demonstrate that the dataflow that keeps
outputs in SIMD registers while also maximizing both input and weight reuse
consistently yields the best performance for a wide variety of inference
workloads, achieving up to 3x speedup for 8-bit neural networks, and up to 4.8x
speedup for binary neural networks, respectively, over the optimized
implementations of neural networks today. | Cyrus Zhou, Zack Hassman, Ruize Xu, Dhirpal Shah, Vaugnn Richard, Yanjing Li | 2023-10-01T05:11:54Z | http://arxiv.org/abs/2310.00574v3 | YFlows: Systematic Dataflow Exploration and Code Generation for Efficient Neural Network Inference using SIMD Architectures on CPUs
###### Abstract
We address the challenges associated with deploying neural networks on CPUs, with a particular focus on minimizing inference time while maintaining accuracy. Our novel approach is to use the dataflow (i.e., computation order) of a neural network to explore data reuse opportunities using heuristic-guided analysis and a code generation framework, which enables exploration of various Single Instruction, Multiple Data (SIMD) implementations to achieve optimized neural network execution. Our results demonstrate that the dataflow that keeps outputs in SIMD registers while also maximizing both input and weight reuse consistently yields the best performance for a wide variety of inference workloads, achieving up to 3x speedup for 8-bit neural networks, and up to 4.8x speedup for binary neural networks, respectively, over the optimized implementations of neural networks today.
Code Generation, Compiler Support, SIMD Vectorization, CPU Optimization, Dataflow, Neural Network
## I Introduction
In recent years, neural networks have expanded their reach beyond high-performance computing environments, permeating low-end servers and edge devices such as smartphones, IoT devices, and smart sensors [1, 2, 3, 4]. However, the deployment of neural networks on these devices presents various challenges, with inference time being a critical factor [5, 6, 7, 8]. The Single Instruction, Multiple Data (SIMD) capabilities of contemporary CPUs present an opportunity to accelerate neural networks. SIMD allows a single instruction to be executed on multiple data elements concurrently, thereby substantially improving computational throughput and overall performance, and yielding benefits in terms of both energy conservation and efficient utilization of computational resources [9, 10, 11].
_Dataflow_ refers to the execution order of the computational operations of a neural network, and it is an important consideration when utilizing SIMD for inference. It determines the reuse opportunities of different variables (e.g., inputs, weights, and outputs), and can therefore guide how to best allocate valuable SIMD register resources to maximize reuse. While dataflows for deep learning accelerators have been extensively explored [12, 13, 14, 15], the majority of previous studies and libraries for CPUs do not consider dataflows [16, 17, 18, 19]. Instead, weight stationary (i.e., keeping the same weight value in use until all computations requiring it are done before moving on to the next weight value) is widely adopted [20, 21, 22]. However, we found that by adopting a carefully designed dataflow and co-optimizing with other techniques (i.e., blocking and operator fusion), inference speed can be improved significantly: up to 3.5 times compared to state-of-the-art implementations of 8-bit integer networks [18], and more than 10 times / 4.8 times compared to optimized bitserial [18, 23] / state-of-the-art SIMD [20] implementations of binary neural networks, respectively.
Unfortunately, compiler support for efficient SIMD code generation is lacking [24, 25, 26], as demonstrated in our experiments on x86 and ARM architectures. Programs written to explicitly utilize SIMD often receive no further compiler optimization, such as harnessing unused vector registers [27]. Furthermore, auto-vectorization features in compilers [28, 29] overlook vectorizable scalar implementations [24, 25, 30, 31], possibly due to the expansive search space noted in [32].
The nuances of SIMD optimization, such as ensuring non-dependency among vector register values, are highlighted in [27, 31]. These complexities are compounded by the reliance on fragile heuristics in current autovectorization techniques, as critiqued in [26, 33]. This is also true for highly optimized frameworks like TVM [18], as they rely on compiler backends such as LLVM [34]. With these challenges, the burden of SIMD optimization predominantly lies with programmers. Consequently, there is a pressing need for a systematic approach to maximize SIMD implementation efficiency.
To this end, we present the first work that employs the notion of dataflow to systematically explore the full SIMD computation capacities on CPUs for efficient neural network inference. The major contributions include:
1. We extended the existing dataflows, which typically specify only one type of variable to be reused, by allowing all types of variables to be reused. Extended dataflows enable systematic exploration to fully utilize SIMD register resources, and substantially reduce costs associated with data and instruction movements.
2. We formalized a set of heuristics, based on data movement costs, to optimize three basic, general neural network dataflows - defined in Sec. II - by maximizing reuse opportunities within each dataflow.
3. We implemented a code generator that automatically uses SIMD instructions to implement the three basic dataflows and various extended dataflows, for any given neural network configuration. This code generator allows us to compare different dataflows to determine the most efficient implementation.
4. We quantitatively compared our best implementation against state-of-the-art implementations using representative workloads, and show that our results achieve substantial improvements: up to 3.5x speedup for 8-bit neural networks (against TVM [18]), and up to 4.8x speedup for binary neural networks (against [20]), respectively.
## II Basic Dataflows of Neural Networks
Three major, basic dataflows have been identified in the literature\({}^{1}\) [36, 37, 38], as shown in Algorithms 1, 2, and 3 in the semantics of ARM SIMD intrinsics [39], using convolution layers as an example.
Footnote 1: We exclude dataflows that are tailored to specific deep learning accelerator architectures (e.g., _Row-stationary_ [12], _No-local-reuse_ [35], etc.) as they cannot be applied to CPUs. For example, row-stationary keeps software variables stationary in the rows of processing engines of a 2D systolic array; however, there is no notion of “rows of cores” in CPUs.
### _Input Stationary (IS)_

IS iterates through the input tensor: for each input entry, it performs all computations involving that input and accumulates the partial results into the corresponding output entries.

```
inputs[H], weights[R], outputs[E]
for h in H do
    input ← vload(&inputs[h]);
    for r in R do
        weight = vload(&weights[r]);
        calculate e from h, r;
        outputs[e] += vredsum(vmul(input, weight));
    endfor
endfor
```
**Algorithm 1** IS Dataflow for Convolution Layers.
### _Weight Stationary (WS)_
WS iterates through the weight tensor. For each output entry whose computation depends on the current weight element, WS fetches the relevant input entry, performs the multiplication, and accumulates the result into the corresponding output.
```
inputs[H], weights[R], outputs[E]
for r in R do
    weight ← vload(&weights[r]);
    for e in E do
        calculate i from e, r;
        input = vload(&inputs[i]);
        outputs[e] += vredsum(vmul(input, weight));
    endfor
endfor
```
**Algorithm 2** WS Dataflow for Convolution Layers.
### _Output Stationary (OS)_
OS iterates through the output tensor. It performs all necessary multiply-accumulate computations to obtain the final result for one output entry before moving on to the next.
```
inputs[H], weights[R], outputs[E]
for e in E do
    output = vmov(0)
    for r in R do
        if such i exists, calculate i from e, r; else continue;
        input, weight = vload(&inputs[i]), vload(&weights[r]);
        output = vadd(vmul(input, weight), output);
    endfor
    outputs[e] = vredsum(output);
endfor
```
**Algorithm 3** OS Dataflow for Convolution Layers.
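For concreteness, the following is our illustrative NEON (AArch64) rendering of Algorithm 3 for a single output element and one 4-lane float channel block; the linear index computation and array extents are placeholder assumptions, and the pseudocode primitives vload, vmul/vadd, and vredsum map to vld1q_f32, vmlaq_f32 (fused multiply-accumulate), and vaddvq_f32, respectively.

```
/* Our illustrative NEON sketch of the OS dataflow, not the generated code. */
#include <arm_neon.h>

float os_one_output(const float *inputs, const float *weights,
                    int R, int e, int row_stride) {
    float32x4_t acc = vdupq_n_f32(0.0f);             /* output = vmov(0)       */
    for (int r = 0; r < R; r++) {
        int i = e * row_stride + r;                  /* "calculate i from e, r" (placeholder) */
        float32x4_t in = vld1q_f32(&inputs[4 * i]);  /* vload(&inputs[i])      */
        float32x4_t wt = vld1q_f32(&weights[4 * r]); /* vload(&weights[r])     */
        acc = vmlaq_f32(acc, in, wt);                /* vadd(vmul(in, wt), acc) */
    }
    return vaddvq_f32(acc);                          /* one final vredsum      */
}
```

Note how the partial sums stay in `acc` and the horizontal reduction happens once per output; this is the register-resident accumulation discussed later in Sec. IV-B1.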
### _Memory layout and Computation Order_
Naturally, the computation order under a dataflow follows the sequential memory addresses of the corresponding data elements. We illustrate the memory layout scheme in Fig. 1.
We opt for the NCHW[xc] memory layout for each input/output tensor. In the traditional NCHW alignment, tensors are arranged first by batch size (N), then channels (C), followed by height (H), and lastly width (W). In NCHW[xc], data are grouped into blocks of size \(x\times H\times W\), which we call _channel blocks_. The channel blocks follow the NCHW layout, while data in each channel block follow the HW[xc] layout, and \(x\) is typically chosen so that \(x\times element\_width\) is a multiple of the size of the physical vector registers (\(1\)-\(3\times\) in our implementation).
There are two main reasons for this memory layout choice. First, vectorization in the channel dimension streamlines vector computations, avoiding excessive operations such as shifting, because the number of channels multiplied by data size in a neural network layer is usually a multiple of SIMD register length (or vice versa). Previous works have demonstrated the effectiveness of this scheme for floating-point, integer and binary neural networks [19, 20, 40].
Second, NCHW[xc] enables data reuse between successive channel blocks. With NHWC, no element (input, weight, or output) participates in the computations of two successive elements under any dataflow. In contrast, NCHW[xc] enables various dataflows to be exploited to maximize data reuse (see Sec. III). Note that, for binary networks, NHWC can perform largely the same as NCHW[xc], since the number of channels in most network architectures is \(\leq 512\) and a multiple of the vector register size in modern ISAs [19].
To optimize weight data access locality, we adopt the CKRS[xc] memory layout (matching the input/output tensor layout), where \(C\), \(K\), \(R\), \(S\) denote \(\#\)Input Channels, \(\#\)Output Channels, \(\#\)rows/filter height, and \(\#\)columns/filter width, respectively, and \(x\) in the notation for the weight tensor is chosen to be exactly the \(x\) of the input tensor. Following this layout, the output tensors can be written back sequentially regardless of the input/output channel block sizes and the dataflow.
In terms of the compute order across input channel blocks, for better memory locality (as validated by our observations), we proceed along the output channel dimension before moving on to the next input channel block. In other words, the loop over the input channel dimension is an outer loop of that over the output channel dimension.
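To make the layouts concrete, the following C helpers are our sketch of the addressing implied by the descriptions above; the exact nesting order is an assumption consistent with that text, and \(x\) is assumed to divide \(C\).

```
/* Sketch (ours) of NCHW[xc] and CKRS[xc] addressing. */
#include <stddef.h>

/* Input/output tensors: channel c splits into block c/x and lane c%x;
 * the x-lane group of each (h, w) position is contiguous in memory.   */
static size_t nchwxc_offset(size_t n, size_t c, size_t h, size_t w,
                            size_t C, size_t H, size_t W, size_t x) {
    size_t block = c / x, lane = c % x;
    return (((n * (C / x) + block) * H + h) * W + w) * x + lane;
}

/* Weight tensor in CKRS[xc]: input-channel block, then kernel k, then
 * filter row r and column s, with the x input-channel lanes innermost. */
static size_t ckrsxc_offset(size_t c, size_t k, size_t r, size_t s,
                            size_t K, size_t R, size_t S, size_t x) {
    size_t block = c / x, lane = c % x;
    return (((block * K + k) * R + r) * S + s) * x + lane;
}
```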
### _Implementation and Performance of Basic SIMD Dataflows_
In software, we declare three _vector variables_ to implement any of the three basic dataflows, one for each of the input, weight, and output data types. The size of each vector variable is \(x\times element\_width\) (as shown in Fig. 1), which is a multiple of the vector register size. Also, the total size of all vector variables is less than or equal to the total size of all vector registers. We distinguish these two terms because physical vector registers in some architectures can be concatenated to form longer vectors. For example, in ARM, vector registers are \(128\) bits in size, but vector variables can be multiples of \(128\) bits occupying multiple physical registers.
We compared the three basic dataflows (the experiment setup is outlined in Sec. V), and the results can be found in Fig. 2. We see that OS consistently outperforms the others in terms of runtime across all tests conducted. With a stride of 1, OS is, by median, 1.93x and 3.41x faster than IS and WS, respectively. With a stride of 2, OS is, by median, 5.39x and 2.81x faster than IS and WS, respectively. The superior performance of OS is due to a multitude of factors, including fewer reduction sum operations, reduced output tensor data movement, and more regular instruction and memory access patterns.
While the basic dataflows capture the reuse opportunities of the data that are active in the current computation, they only utilize a limited number of vector registers (precisely \(\frac{3\times\text{vector variable size}}{\text{vector register size}}\)), leaving all others idle. This is because, as discussed in Sec. I, compilers today are not able to discover vectorizable code (except in simple cases) and fully utilize all vector registers automatically. This motivates extending the basic dataflows to achieve faster inference.
## III Extending the Basic Dataflows
We say that a dataflow utilizes the stationarity of some data if it keeps that data close to the compute units - in vector registers in our case - for reuse. A dataflow is \(\sigma\) stationary if it uses \(\sigma\) stationarity, where \(\sigma\) is a predefined type of data (inputs, weights, or outputs). We extend the notion of dataflow by defining two types of stationarities, i.e., _anchoring stationarities_ and _auxiliary stationarities_.
_Anchoring stationarity_ is the stationarity that decides the execution order of computations. For example, output stationary dataflows have the outputs as their anchoring data type, so we always complete **all** computations involving an output element
Fig. 1: Memory layout of tensors. Red arrows show a subset of data elements following sequential memory addresses. Input channel blocks are traversed first along the output channel dimension. The purple shade covers a single vector variable.
Fig. 2: Relative latency of basic dataflows for various convolution layers for \(\text{Vector Length}=(elem\_width\times c)\in\{128,256,512\}\) (mean of 100 runs), normalized to the latency of OS. Configurations on the y-axes are in the format of \((fw/fh,iw/ih,nf)\).
before moving on to the next. One dataflow can have at most one _anchoring stationarity_. The most naive implementation of a dataflow consists of an anchoring stationarity only, which is equivalent to one of the basic dataflows discussed in Sec. II. The major limitation of the basic dataflows is that not all vector registers are utilized.
In optimized implementations, vector registers are fully utilized to stash data to lower the data movement costs associated with both anchoring and non-anchoring data types - non-anchoring data types are also referred to as _auxiliary data types_. The _auxiliary stationarities_ determine which auxiliary data types should be allocated in vector registers. For example, an output-anchored dataflow may be accompanied by weight and/or input auxiliary stationarity. More than one auxiliary stationarity can accompany an anchoring stationarity.
An important question is how to allocate vector registers to store (or stash) anchoring and auxiliary data types, which depends on two factors: (1) the total number of available vector registers, which constrains the overall SIMD capability, and (2) data reuse opportunities, which affect data movement costs and bound the benefits that can be obtained by stashing the corresponding data in vector registers.
## IV Optimizing Extended Dataflows
Our methodology for optimizing an extended dataflow follows two steps. First, we analyze reuse opportunities and develop heuristics to maximize data reuse benefits within each basic (i.e., anchoring stationarity only) dataflow to derive the corresponding auxiliary stationarities. Next, we empirically compare different implementations of the extended dataflows by varying vector register allocation schemes using a code generator to determine the best dataflow for performance.
While this methodology can be applied to most layers in neural networks, we focus our discussions on convolution layers, including simple convolutions [41], depthwise convolutions [6, 42], grouped convolutions [43], shuffled grouped convolutions [44], and so on. This is because these layers are common, and their latencies are generally longer compared to other layers [45, 5, 6, 7]. The convolution operation is shown in Fig. 3. Notation-wise, we use \(ih\), \(iw\), \(fh\), \(fw\), \(oh\), \(ow\) for input height, input width, filter/weight height, filter/weight width, output height, and output width, \(s\) for strides, \(x\) for the number of data elements in a vector variable, and \(H\), \(R\), \(E\) for the sizes of input, filter/weight, and output tensors. Thus, \(H=ih\cdot iw\cdot x\), \(R=fh\cdot fw\cdot x\), \(E=oh\cdot ow\).
### _Maximizing Data Reuse under Each Basic Dataflow_
#### IV-A1 Reuse under Output Stationary Dataflows
Under output-anchored dataflows, with the computation sequence following the description in Sec. II-C, all corresponding weights in each channel, totaling \(R\), are reused between the computations for two successive output elements. Additionally, there are \((fw-s)\cdot fh\) reusable input elements involved in the computations for two successive outputs. We demonstrate these reuse opportunities in Fig. 4(a).
The reuse scheme of inputs is similar for \(s>1\), as shown in Fig. 4(b), differing only in the number of inputs reusable between the computations around two successive outputs.
#### IV-A2 Reuse under Input Stationary Dataflows
Given the algorithm of input-anchored dataflows (Sec. II-A), when \(s=1\), all corresponding weights in each channel, totaling \(R\), can be reused between the computations around two successive input elements. Outputs (partial sums) under input-anchored dataflows can be reused in a way similar to how inputs are reused under output-anchored dataflows. We demonstrate this reuse scheme in Fig. 4(d). Note that we would need to reverse the sequence of the weights (i.e., following the order of the outputs) to enable this reuse scheme (see Fig. 4(d)).
When \(s>1\), reusing both outputs and weights becomes complicated. Not all weights are applied to every input. For \(s=2\), the number of weights/outputs associated with the computations around one input can be 1, 2, or 4, as demonstrated in Fig. 5. In this case, the reuse opportunities become sparse. Additionally, code structure becomes less regular.
#### IV-A3 Reuse under Weight Stationary Dataflows
In weight-anchored dataflows (Sec. II-B), between the computations around two successive weights in an input channel block, all \(H\) inputs and \(E\) outputs can be reused, as depicted in Fig. 4(c).
When using vector registers to stash an input, the input will not be reused in the computation involving each weight when \(s>1\). On the other hand, stashed outputs are guaranteed to be reused with each weight. As stashing outputs also saves write-related operations and the size of the output tensor is almost always greater than the remaining SIMD vector registers, we will later demonstrate the sufficiency of only supporting output auxiliary stationarity under weight-anchored dataflows.
#### IV-A4 Heuristics to Quantify the Effectiveness of Data Reuse under Each Dataflow
We use the reduction in the number of memory instructions (both reads and writes, data size \(=c\times elem\_width\)) for each input channel as the guiding metric for framing the heuristics for choosing auxiliary stationarities, summarized in Table I. The baseline configurations correspond to the basic dataflow implementations discussed in Sec. II, where only \(3\times\text{vector variable size}/\text{vector register size}\) vector registers are allocated. For the extended dataflows, we utilize additional vector variables (which are mapped to vector registers) for the auxiliary data types to further reduce data movement costs.
Fig. 3: Convolution operations and notations, showing only 1 channel and 1 kernel.
_Output-anchored Dataflows:_ Independent of the value of \(s\), the numbers of inputs and weights associated with an output element, disregarding edge cases, are always equal to \(R\) for each input channel. Thus, every time we stash an input or weight vector variable in one or more vector register(s), the number of memory reads goes down by the size of the output tensor.
_Input-anchored Dataflows:_ When \(s=1\), the gains from auxiliary allocation mimic those under output-anchored dataflows. We expect a reduction of \(H\) memory reads and \(H\) memory writes for every vector variable allocated to stash outputs for each input channel block. For each vector variable allocated for stashing weights, we expect a reduction of \(H\) memory reads per input channel block. Note that \(H\approx E\) in this case. When \(s>1\), the gains from auxiliary allocation become complex, as shown in Table I.
_Weight-anchored Dataflows:_ Recall from Sec. IV-A3 that we iterate through both the whole input and output tensors under weight-anchored dataflows. While we proceed by \(1\) element on the output tensor, we need to leap forward by \(s\) elements on the input tensor and also increment the starting input index (i.e., the first weight starts with the input at index \(0\), the second weight starts with the input at index \(1\), and so forth) for the computations associated with each weight element. This naturally implies that each vector variable allocated for inputs saves \(R\approx\frac{H}{s^{2}}\) memory reads, and each vector variable assigned to stash outputs saves \(R\) reads and \(R\) writes, respectively, per input channel block.
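To make these counts concrete, a worked example with our own illustrative numbers, disregarding edge cases as above:

```
% Worked example (illustrative): fh = fw = 3, ih = iw = 56, s = 1,
% so oh = ow = 54 without padding. Per the notation above,
R = fh \cdot fw \cdot x = 9x, \qquad
H = ih \cdot iw \cdot x = 3136x, \qquad
E = oh \cdot ow = 2916.
% Per Table I, each auxiliary vector variable under OS saves E = 2916
% reads per channel block, versus R reads under WS.
```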
Guided by the heuristics, we derive the following observations:
**Observation 1:** Weight-anchored dataflows will gain the least performance improvement from auxiliary stationarities.
**Observation 2:** Output-anchored dataflows will likely yield better performance than input-anchored dataflows when both are fully optimized.
**Observation 3:** Under output-anchored dataflows, prioritizing input auxiliary stationarity and prioritizing weight auxiliary stationarity will yield similar results.
**Observation 4:** Under input-anchored dataflows, prioritizing output auxiliary stationarity will yield better performance than prioritizing weight auxiliary stationarity.
**Observation 5:** Under weight-anchored dataflows, prioritizing output auxiliary stationarity will yield better performance than prioritizing input auxiliary stationarity.
### _Extended Dataflow Implementations and Code Generator_
Based upon the above observations, we develop a code generator that extends all three basic anchoring dataflows with auxiliary stationarities and determines vector register allocation schemes by varying the number of vector registers allocated to each type of data. We first allocate a subset of vector registers (sweeping from \(v_{0}\) to \(v_{3n-1}\), where \(n=size(vec\_var)/size(vec\_reg)\), \(size(vec\_var)\in\{128,256,512\}\), and \(size(vec\_reg)=128\) in our implementation) to store the vector variables corresponding to the anchoring data type, then the remaining vector registers to the auxiliary data types.
```
Initialize the original allocation sequence with sequential row-major allocation.
for un in range[1, lcm(all #vector variables per row > stride)] do
    if #vector variables on this row > stride then
        Rotate the stash indices on this row left by stride
    else
        The sequence stays the same
    endif
endfor
```
**Algorithm 4** Allocation sequence for inputs under secondary-unrolled output-anchored dataflows. (The same sequence applies for outputs under input-anchored dataflows when \(s=1\).)
#### IV-B1 Implementation of Output-anchored Dataflows
For each output element under computation, we first determine whether the required input and weight elements are already stashed in vector variables. If so, we perform the computation using the stashed data. Otherwise, we load the required data from memory into two vector variables of length \(size(vec\_var)=x\times element\_width\). Note that the sequence of vector variable usage between every two consecutive outputs is identical for weights but different for inputs. This means that we incur the cost of SIMD data transfer if we assign vector registers in the same way across all unrolled iterations of the weight loop, as the same position in the "window" covering all inputs
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
Anc. Dataflow & Aux. & \# vector variables for aux. & Stride & Reduction in \# mem. reads for each additional vector variable allocated to auxiliary data & Reduction in \# mem. writes for each additional vector variable allocated to auxiliary data \\
\hline \hline
OS & Both & \([1,R]\) & \([1,fw-1]\) & \(E\) & 0 \\
\hline
WS & Input & \([1,H]\) & \([1,fw-1]\) & \(R\) & 0 \\
 & Output & \([1,E]\) & \([1,fw-1]\) & \(R\) & \(R\) \\
\hline
IS & Weight & \([1,R]\) & \(1\) & \(H\) & 0 \\
 & Weight & \([1,fw]\) & \([2,fw-1]\) & \(\frac{H}{s}\) & 0 \\
 & Weight & \([fw+1,2\cdot fw]\) & \([2,fw-1]\) & \(\frac{H}{(fw-s)s}\) & 0 \\
 & Output & \([1,R]\) & \(1\) & \(H\) & \(H\) \\
 & Output & \(\{1\}\) & \([2,fw-1]\) & \(H+\frac{H}{s^{2}}\) & \(H+\frac{H}{s^{2}}\) \\
 & Output & \(\{2\}\) & \([2,fw-1]\) & \(\frac{ih}{fw-s}(H+\frac{H}{fw})+\frac{ih}{s}(fw-s-1)\) & \(\frac{ih}{fw-s}(H+\frac{H}{fw})+\frac{ih}{s}(fw-s-1)\) \\
 & Output & \([3,(3+fw-s)]\) & \([2,fw-1]\) & \((fh-s)(fw-s)\frac{H}{R}\) & \((fh-s)(fw-s)\frac{H}{R}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Summary of gains from auxiliary allocation for each operation involving one channel block and one kernel
involved in the computations of an output element would be matched to a different input in two successive iterations.
To circumvent unnecessary data transfers between vector registers used for auxiliary input stationarity, we implement _secondary unrolling_ on the output loop, with a magnitude equal to the least common multiple of all numbers of input vector variables per row (in the input tensor) that are greater than \(s\). Each iteration of the secondary-unrolled loop thus uses vector variables differently: the specific sequence of allocating input vector variables differs between the computations around two successive outputs if the number of input vector variables in that row is greater than \(s\), and remains the same otherwise. Algorithm 4 demonstrates the sequences of vector variable allocations for input auxiliary stationarity across each secondary-unrolled iteration, and Fig. 6 provides a graphical example of secondary loop unrolling.
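The rotation step of Algorithm 4 can be sketched as follows; this is our illustration, not the generator's emitted code, and the row-width bound is an assumption.

```
/* Sketch (ours) of the per-row index rotation in Algorithm 4. */
static void rotate_left(int *idx, int len, int stride) {
    int tmp[64]; /* assumes len <= 64 for the sketch */
    for (int i = 0; i < len; i++) tmp[i] = idx[(i + stride) % len];
    for (int i = 0; i < len; i++) idx[i] = tmp[i];
}

/* Allocation sequence for secondary-unrolled iteration u of one row that
 * holds `len` input vector variables: rotate left by stride per iteration
 * when len > stride, otherwise keep the original order. */
static void sequence_for_iteration(int *idx, int len, int stride, int u) {
    if (len > stride)
        for (int j = 0; j < u; j++) rotate_left(idx, len, stride);
}
```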
To further minimize data movements, we directly load vectors of input data to be newly stashed into their corresponding vector variables (thereby overwriting the previous data), instead of new vector variables.
It is also worth noting that we found it advantageous to accumulate all results in a single vector register (instead of a scalar register) and execute the reduction sum operation only once all computations involving an output element have been completed. Although this approach consumes more vector registers, it ultimately saves the cost of performing a reduction sum into a scalar variable upon the completion of each computation.
Algorithm 5 summarizes the implementation of output-anchored dataflows.
#### IV-B2 Implementation of Input-anchored Dataflows
Under input-anchored dataflows, we can allocate the remaining vector variables to both weights and outputs. When \(s\) is 1, we observe that the sequences of vector variable usage between every two consecutive inputs are identical for weight data but different for output data. Similar to the output-anchored dataflows, this means that we incur the cost of vector data transfer if we consistently use variables in the same sequence. Therefore,
Fig. 4: Reuse opportunities under each anchoring dataflow, showing only one channel and one kernel.
Fig. 5: Under input-anchored dataflows: weights and outputs associated with each input when s=2 for each channel. Darker color means more data are associated with that input element.
Fig. 6: Secondary loop unrolling to bypass vector data transfer, using one channel for demonstration.
again, we perform secondary unrolling on the output loop, following a similar procedure as described in Sec. IV-B1, but with the sequence of weights in reverse. We write the stashed outputs back to memory when their usage is complete for the current row, i.e., when the output is in the first column of the current window of computation. The pseudocode of input-anchored dataflows is provided in Algorithm 6.
```
inputs[ih·iw·ic], weights[fh·fw·ic·oc], outputs[oh·ow·oc]
Require: numInStash, numWgtStash, x, s
Prep 1: Initialize a total of numInStash input vector variables by loading data from the input tensor.
Prep 2: Initialize a total of numWgtStash weight vector variables by loading data from the weight tensor.
for c in ic by x do
    for k in oc do
        for h in oh by s do
            for w in ow by s do                      ▷ Secondary Unroll
                Set the anchoring output vector variable to 0
                for r in fh do                       ▷ Unroll
                    for s′ in fw do
                        if r·fw + s′ < numInStash then
                            Use the stashed vector as input
                        else if r·fw − (fw − s′) < numInStash then
                            Overwrite the completely used input stash with the new input loaded by vload(c·ih·iw + h·iw + w), then use it as input
                        else
                            input = vload(c·ih·iw + h·iw + w)
                        endif
                        if r·fw + s′ < numWgtStash then
                            Use the stashed vector as weight
                        else
                            weight = vload(c·oc·fh·fw + k·fh·fw + r·fw + s′)
                        endif
                        res = vmul(input, weight)
                        output = vadd(output, res)
                    endfor
                endfor
                outputs[k·oh·ow + h·ow + w] += vaddv(output)
            endfor
        endfor
    endfor
endfor
```
**Algorithm 5** Implementation of Output-anchored Dataflows
#### IV-B3 Implementation of Weight-anchored Dataflows
Similar to output- and input-anchored dataflows, we describe a concrete and general method to implement weight-anchored dataflows in Algorithm 7. For input and output auxiliary stationarity under weight-anchored dataflows, we always stash the earliest yet unstashed element to exploit locality. We perform a loop split on the weight loop on top of unrolling to write stashed outputs back to memory only when their last usage is complete. When \(s>1\), inputs are reused once for every \(s\) weights.
Our code generator follows Algorithms 5, 6, and 7 to implement various extended dataflows using ARM Intrinsics. Users input the anchoring stationarity, the number of vector variables to be allocated to each auxiliary stationarity, and the layer configurations to generate custom dataflow implementations.
```
inputs[ih·iw·ic], weights[fh·fw·ic·oc], outputs[oh·ow·oc]
Require: numInStash, numWgtStash, x
Prep 1: Initialize a total of numInStash input vector variables by loading data from the input tensor.
Prep 2: Initialize a total of numWgtStash weight vector variables by loading data from the weight tensor.
for c in ic by x do
    for k in oc do
        for h in ih do                               ▷ Secondary Unroll
            for w in iw do
                input = vload(c·ih·iw + h·iw + w)
                for ((h′, w′), (r, s)) in assoc_idx(h, w, c) do   ▷ In reverse order; output and weight indices, see Fig. 5
                    if r·fw + s ∈ stashedWeightsIndices then
                        Use the stashed vector as weight
                    else
                        weight = vload(c·oc·fh·fw + k·fh·fw + r·fw + s)
                    endif
                    if h′·ow + w′ ∈ stashedOutputIndices then
                        Use the stashed vector as output
                        res = vmul(input, weight)
                        output = vadd(res, output)
                        if this is the last use of the output then
                            outputs[k·oh·ow + h′·ow + w′] += vaddv(output)
                        endif
                    else if h′·ow + w′ is to be newly stashed then
                        Use a free vector as output
                        output = vmul(input, weight)
                    else
                        outputs[k·oh·ow + h′·ow + w′] += vaddv(vmul(input, weight))
                    endif
                endfor
            endfor
        endfor
    endfor
endfor
```
**Algorithm 6** Implementation of Input-anchored Dataflows
### _End-to-End Optimization of Memory Layout Sequence_
Consistent memory layout alignment across consecutive layers is a prerequisite for efficient neural network inference; any layout discrepancy entails a transformation and hence additional overhead. To combat this issue, we resort to the commonly adopted dynamic programming approach based on measured costs [47, 20, 48]: we obtain per-layer costs from repeated runs of different scheduling schemes (to reduce variance), and the dynamic program then selects layouts that agree across every two successive layers whenever beneficial, curtailing the need for layout transformations.
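A minimal sketch of such a dynamic program follows; it is our illustration, with the layer count, number of layouts, and cost tables as placeholder assumptions (the costs would come from the measured runs described above).

```
/* Sketch (ours) of layout selection: minimize summed per-layer cost plus
 * transformation cost between consecutive layers. */
#include <float.h>

#define NLAYERS  8
#define NLAYOUTS 2 /* e.g., 0: NCHW[xc], 1: NHWC (assumption) */

void choose_layouts(double cost[NLAYERS][NLAYOUTS],
                    double trans[NLAYOUTS][NLAYOUTS],
                    int best[NLAYERS]) {
    double dp[NLAYERS][NLAYOUTS]; /* cheapest total cost ending in each layout */
    int prev[NLAYERS][NLAYOUTS];  /* argmin predecessor for backtracking       */

    for (int b = 0; b < NLAYOUTS; b++) dp[0][b] = cost[0][b];
    for (int l = 1; l < NLAYERS; l++)
        for (int b = 0; b < NLAYOUTS; b++) {
            dp[l][b] = DBL_MAX;
            for (int a = 0; a < NLAYOUTS; a++) {
                double c = dp[l - 1][a] + trans[a][b] + cost[l][b];
                if (c < dp[l][b]) { dp[l][b] = c; prev[l][b] = a; }
            }
        }

    int b = 0; /* backtrack the cheapest chain of layouts */
    for (int k = 1; k < NLAYOUTS; k++)
        if (dp[NLAYERS - 1][k] < dp[NLAYERS - 1][b]) b = k;
    for (int l = NLAYERS - 1; l > 0; l--) { best[l] = b; b = prev[l][b]; }
    best[0] = b;
}
```

A disagreement between `best[l-1]` and `best[l]` marks exactly where a layout transformation is inserted.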
In addition, we search for the optimal blocking schemes in compile time by running the program under each of the possible configurations and comparing their performance.
## V Experiment Setup
We use physical ARM machines to quantitatively evaluate and compare dataflows implemented using our code generator. These experiments encompass executing convolution layers
with various combinations of the following parameters, as well as collecting end-to-end runtime results for neural networks, to facilitate a thorough and comprehensive evaluation and comparison of different dataflows.
* **Input Size:** We focus on larger convolution layers that are time-consuming with input sizes of \(56\times 56\) and \(112\times 112\).
* **Weight/Filter Size:** We use filters of sizes \(3\times 3\), \(4\times 4\), and \(5\times 5\), as these dimensions are the most widely employed.
* **Stride:** We use strides of \(1\) and \(2\), as these values are also the most commonly used.
* **Number of Filters:** We tested with \(128\), \(256\), and \(512\) filters to compare the different dataflows across various numbers of filters.
* **Vector Lengths:**\(128\), \(256\), and \(512\), which are supported by modern ISAs such as ARM [28] and x86 [29, 49].
We use the GCC compiler [50] with the most aggressive optimization flags to compile all programs. We ran our experiments on a system with 64-bit quad-core ARM Neoverse-N1 CPUs which adopts the aarch64 architecture. Each program was executed 100 times to obtain the average run time.
## VI Results and Discussions
### _Validation of Heuristics_
We generated programs that implement extended dataflows for various convolution layers in ARM Intrinsics and ran experiments following the setup described in Sec. V to validate the heuristics described in Sec. IV.
We primarily present the results for \(s=1\) because: (1) with output-anchored dataflows, the relative gains from weight and input auxiliary stationarities stay constant regardless of whether \(s\) is \(1\) or \(2\); (2) for weight-anchored dataflows, according to our heuristics in Sec. IV-A4, the improvement of extended dataflows over the basic, anchoring-only dataflow under \(s=2\) is expected to be less than that for \(s=1\); (3) under input-anchored dataflows, as \(s\) increases, the difference between the gains from weight and output auxiliary stationarity amplifies, which we have observed empirically; and (4) to compare output-anchored and input-anchored dataflows, we aim to determine whether the additional memory writes due to auxiliary output stationarity can outweigh the 1.93x difference, and studying this under \(s=2\) is less insightful, as the gap between OS and IS (5.39x) is considerably larger.
#### VI-A1 Comparing Different Anchoring Stationarities
**Finding 1:** Weight-anchored dataflows yield the least improvement from auxiliary dataflow optimizations and are consistently the slowest by a large margin.
Weight-anchored dataflows, even when fully optimized, significantly underperform in comparison to other anchoring stationarities (Fig. 7b). Surprisingly, fully optimized output-anchored dataflow implementations are by median approximately 7.41x faster than their weight-anchored counterparts. However, when comparing the basic dataflows, we observe only a median performance difference of about 5.44x between WS and OS, and roughly 2.91x between WS and IS, given \(s=1\). This escalating disparity is attributed to the different performance enhancements yielded by our optimization technique for different anchoring dataflows. As illustrated in Fig. 7a, the introduction of auxiliary stationarities results in a modest median improvement of around 1.08x for WS, while IS and OS enjoy more substantial median speedups of approximately 1.96x and 1.78x, respectively. In fact, we find that adding auxiliary stationarities to the basic WS dataflow can sometimes lengthen the compute time, due to the low reuse frequency of the stashed auxiliary data and the more dominant increase in instruction cache footprint. This result validates **Observation 1** derived from our heuristics.
**Finding 2:** Output-anchored dataflows outperform input-anchored dataflows in the majority of cases.
While IS seems to gain a larger performance improvement from the addition of auxiliary stationarities, we still
find output-anchored dataflows to be superior upon full optimization. For the same convolution layer configuration, optimized output-anchored dataflows are faster than input-anchored dataflows for around 90% of the cases, which validates **Observation 2**.
#### VI-A2 Findings Related to Auxiliary Stationarity
Here, we compare different auxiliary stationarity schemes under each anchoring dataflow.
**Finding 3:** Prioritizing stashing inputs or weights does not significantly impact performance under output-anchored dataflows.
This finding validates **Observation 3**. By comparing the latency of dataflows that prioritize allocation for weight auxiliary stationarity and the ones that prioritize input auxiliary stationarity, we observe that neither allocation scheme is consistently superior to the other, and the differences between the two schemes are small (within 6%).
**Finding 4:** Allocating vector variables to outputs first improves performance compared to prioritizing allocation for weights under input-anchored dataflows.
On average, prioritizing stashing outputs yields an 8% performance gain, which becomes more evident as the vector length increases. It follows that **Observation 4** is validated.
**Finding 5:** Prioritizing output allocation yields only slightly better performance than prioritizing input allocation under weight-anchored dataflows.
We find that under almost all cases, prioritizing output auxiliary stationarity brings a performance gain of up to 3% over prioritizing weight auxiliary stationarity. This validates **Observation 5**; however, the differences are negligible.
**Require:**\(numVecReg\), \(vecVarSize\), \(vecRegSize\)
\(regsPerVar\) = \(vecVarSize\) / \(vecRegSize\)
\(numVarAvailable\) = \(numVecReg\) / \(regsPerVar\)
\(auxVarAvailable\) = \(numVarAvailable-3\)
1. Use **output stationary** as the anchoring stationarity
2. Allocate \(auxVarAvailable\) vector variables **first to weight and then to input** (if there are still some remaining).
**Algorithm 8** Optimized Dataflow: Output Anchored Stationarity with Weight Auxiliary Stationarity
#### VI-A3 Optimized Dataflow
From all previous analyses and results, we conclude that the OS-anchored dataflow with auxiliary weight stationarity is the most efficient dataflow in our study. While there is generally little difference between prioritizing auxiliary WS and prioritizing auxiliary IS, we find the former to yield better code readability and more regular instruction patterns. Algorithm 8 summarizes this dataflow.
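As a worked instance of Algorithm 8's budget arithmetic (our numbers: AArch64 NEON provides 32 architectural 128-bit vector registers, and the vector variable size is one of the paper's choices):

```
/* Worked instance of Algorithm 8's budget arithmetic (ours). */
#include <stdio.h>

int main(void) {
    int numVecReg  = 32;  /* AArch64 NEON architectural vector registers */
    int vecRegSize = 128; /* bits */
    int vecVarSize = 256; /* bits; one of {128, 256, 512} in the paper   */

    int regsPerVar      = vecVarSize / vecRegSize; /* 2  */
    int numVarAvailable = numVecReg / regsPerVar;  /* 16 */
    int auxVarAvailable = numVarAvailable - 3;     /* 13: first weights, then inputs */

    printf("vars=%d aux=%d\n", numVarAvailable, auxVarAvailable);
    return 0;
}
```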
### _Neural Network Speedup against State-of-the-Art Implementations_
Applying end-to-end optimizations discussed in Sec. IV-C, we compare our technique to state-of-the-art baselines.
For INT8 neural networks, we use TVM as one of the baselines. TVM is a highly optimized machine learning compiler stack for efficient neural network deployment across various hardware platforms [18]. We compare the end-to-end inference latency of variants of ResNet [51] (Resnet-18 and Resnet-34) and VGG [52] (VGG-11, VGG-13, and VGG-16) with TVM-autotuned (we use GridSearchTuner as the KernelTuner - this enumerates through the entire search space for configurations [53]) implementations and untuned implementations (TVM default). We set TVM to target the architecture and SIMD extension to match the physical machines used for our experiments. Across all network architectures
Fig. 7: Performance results of extended dataflows. Vector \(\text{Length}=(elem\_width\times c)\in\{128,256,512\}\) (mean of 100 runs). Configurations on the y-axes are in the format of \((fw/fh,iw/ih,nf)\).
and numbers of threads, we observe a \(\sim\)3x speedup over TVM's implementations, and up to \(\sim\)14x over its untuned implementation. Moreover, our multithreading scheme yields comparable scalability. We also compare the end-to-end results with programs generated by gcc/clang (with the highest level of optimization and autovectorization enabled). Ours achieve significant (4x-6x) speedup.
For the evaluation of binary neural networks, we compared the inference latency of our implementations with Cowan et al.'s TVM-based bitserial implementations [23]. Since the code released by Cowan et al. only works for convolution layers on CPUs (while their end-to-end code generation tool targets Raspberry Pi and is not applicable to CPUs), we only perform this comparison for convolution layers. Bitserial implementations, although optimized for low-power consumption, do not offer satisfactory inference speed. Notably, our implementations are over 12x faster for various convolution layers. Based on the end-to-end results reported in their paper (which incorporates additional optimizations through microkernel synthesis) [23], we anticipate that our implementations will still outperform theirs by a large margin (6x or higher) in the end-to-end comparisons. We also compared our implementations of various convolution layers in VGG against those from [20], and ours achieve up to 4.8x speedup.
## VII Related Work
This section offers an overview of prevalent techniques for accelerating neural network inference. Our work already employs quantization [54, 55, 56, 57, 58], vectorization [59, 60, 61, 62], tiling/blocking [62, 63], operator fusion [64, 65, 66]. We compare and contrast our work with other related efforts.
_Unroll-and-Jam:_ Unroll-and-jam reduces memory access costs by reordering instructions without breaking data dependencies [67, 68, 69, 70], which can enhance the performance of convolution and fully-connected layers in DNNs [71, 72, 18]. Our technique bypasses unneeded load instructions previously handled by jamming, and further jamming can be applied on top of our technique to lower latency.
**Winograd Convolution.** Winograd convolution reduces the complexity of convolution operations [73, 74, 75, 76, 77], and there exist various optimizations of its implementation on CPUs [78, 79, 80, 81, 82]. Using a similar idea of reusing data to speed up convolution inference, DREW [83] optimizes Winograd convolution by clustering data and reusing computed results, trading off accuracy against inference performance. In contrast, our method retains accuracy and suits all architectures with SIMD support. Moreover, standard Winograd convolutions struggle with quantization [84, 85, 86, 80], while our technique does not suffer from this limitation.
**Transformer Optimizations.** Transformers have revolutionized several areas of machine learning [87, 88, 89, 90, 91, 92]. However, optimizing their performance, particularly on CPUs, remains a significant challenge [93, 94, 95, 96]. Efforts to date include pruning [97, 98, 99, 100], quantization [101, 102, 103, 104], knowledge distillation [105, 106, 107, 108], architecture search [109, 110, 94, 111, 95], GEMM optimizations [96, 95, 111], and hardware-level optimizations [93, 112]. Moreover, while there exist previous works studying dataflows for transformers on other hardware platforms [113, 114, 115, 116], to the best of our knowledge no dataflow work has targeted CPUs. Our technique is orthogonal to and may be combined with other Transformer optimization techniques such as GEMM optimizations (e.g., [96]).
**Intel AMX Extension.** Intel's AMX [49] is designed to accelerate matrix-level operations on CPUs, and is only available in high-performance processors like the 4th Generation Xeon Scalable Processors [117]. Our research focuses on prevalent SIMD extensions. Moreover, it is essential to develop dataflows that maximize data reuse opportunities in AMX to further optimize its performance, and our methodology may be extended for this purpose.
Fig. 8: End-to-end relative speedups for 8-bit quantized neural networks from our techniques, normalized to TVM default mode without autotuning. (Note: for DenseNet-121 we do not have results for TVM default mode; we had to use a different tuner (TaskScheduler) and take the first tuning trial as the baseline.)
Fig. 9: Layer-wise latency comparisons for binary ResNet workloads between ours and Cowan et al. [23].
**Binary Neural Network Optimizations.** Frameworks that specifically optimize binary neural networks exist. An example is daBNN [118], which employs various assembly-level microkernels to optimize performance. However, daBNN fails to harvest all data reuse opportunities, such as reusing input data between two successive outputs, or reusing weight data. By combining our dataflow technique with daBNN, further improvements can be achieved.
## VIII Conclusions
In this paper, we present the first study to systematically explore dataflows for efficient neural network inference using SIMD capabilities. We developed heuristics for optimized vector register allocation by analyzing the reuse opportunities of different dataflows, validated these heuristics through automatic code generation and thorough experimentation, and demonstrated significant performance improvements over state-of-the-art implementations. We anticipate that this work will catalyze further investigation of dataflows to reduce inference time on contemporary CPU architectures.
|
2310.11479 | On the Temperature of Bayesian Graph Neural Networks for Conformal
Prediction | Accurate uncertainty quantification in graph neural networks (GNNs) is
essential, especially in high-stakes domains where GNNs are frequently
employed. Conformal prediction (CP) offers a promising framework for
quantifying uncertainty by providing $\textit{valid}$ prediction sets for any
black-box model. CP ensures formal probabilistic guarantees that a prediction
set contains a true label with a desired probability. However, the size of
prediction sets, known as $\textit{inefficiency}$, is influenced by the
underlying model and data generating process. On the other hand, Bayesian
learning also provides a credible region based on the estimated posterior
distribution, but this region is $\textit{well-calibrated}$ only when the model
is correctly specified. Building on a recent work that introduced a scaling
parameter for constructing valid credible regions from posterior estimate, our
study explores the advantages of incorporating a temperature parameter into
Bayesian GNNs within CP framework. We empirically demonstrate the existence of
temperatures that result in more efficient prediction sets. Furthermore, we
conduct an analysis to identify the factors contributing to inefficiency and
offer valuable insights into the relationship between CP performance and model
calibration. | Seohyeon Cha, Honggu Kang, Joonhyuk Kang | 2023-10-17T10:24:25Z | http://arxiv.org/abs/2310.11479v3 | # On the Temperature of Bayesian Graph Neural Networks for Conformal Prediction
###### Abstract
Accurate uncertainty quantification in graph neural networks (GNNs) is essential, especially in high-stakes domains where GNNs are frequently employed. Conformal prediction (CP) offers a promising framework for quantifying uncertainty by providing _valid_ prediction sets for any black-box model. CP ensures formal probabilistic guarantees that a prediction set contains a true label with a desired probability. However, the size of prediction sets, known as _inefficiency_, is influenced by the underlying model and data generating process. On the other hand, Bayesian learning also provides a credible region based on the estimated posterior distribution, but this region is _well-calibrated_ only when the model is correctly specified. Building on a recent work that introduced a scaling parameter for constructing valid credible regions from posterior estimate, our study explores the advantages of incorporating a temperature parameter into Bayesian GNNs within CP framework. We empirically demonstrate the existence of temperatures that result in more efficient prediction sets. Furthermore, we conduct an analysis to identify the factors contributing to inefficiency and offer valuable insights into the relationship between CP performance and model calibration.
## 1 Introduction
Graph neural networks (GNNs) have demonstrated impressive abilities to learn from graph-structured data in a wide range of domains, including social sciences, chemistry, knowledge graphs, and recommendation systems [18; 34; 12; 39; 8]. Meanwhile, with the increasing application of GNNs to safety-critical tasks, there is a growing demand for trustworthy and reliable uncertainty estimates from GNN outputs [14; 37]. However, many existing uncertainty quantification methods are not applicable to GNNs, since they rely on the independent and identically distributed (i.i.d.) data assumption, which is violated in graph-structured data [1; 16]. Recent studies have addressed uncertainty in graph-related tasks, but these approaches introduce additional model architectures for density estimation or knowledge distillation [31; 43].
Conformal prediction (CP) [36] is a promising framework for obtaining uncertainty estimates under the sole assumption of data exchangeability1. CP constructs prediction sets that are guaranteed to contain the ground-truth label with a desired coverage level. Formally, consider a training dataset \((x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y},\ i\in\{1,\cdots,n\}\), and a test point \((x_{n+1},y_{n+1})\) drawn exchangeably from an underlying distribution \(P\). Then, for a given pre-defined coverage \(1-\alpha\), CP generates a prediction set \(\mathcal{C}_{n,\alpha}(x_{n+1})\subseteq\mathcal{Y}\) for a test input \(x_{n+1}\) that satisfies
Footnote 1: Data exchangeability assumes that every permutation of the data samples has the same probability, which is a weaker assumption than the i.i.d. assumption.
\[\mathbb{P}[y_{n+1}\in\mathcal{C}_{n,\alpha}(x_{n+1})]\geq 1-\alpha \tag{1}\]
where the probability is taken over \(n+1\) data samples \(\{(x_{i},y_{i})\}_{i=1}^{n+1}\). A prediction set satisfying the coverage condition in the equation (1) is said to be _valid_. While CP guarantees a valid prediction for
any classifier, the size of a prediction set, called _inefficiency_, is largely affected by the underlying classifier and the data generating process [7, 40].
Meanwhile, Bayesian learning [3, 9, 38] (see also [29, Ch. 12] for an overview) is a traditional way of building predictive intervals. In contrast to frequentist learning, which aims to find an optimal point estimate of the model parameters, Bayesian learning produces a posterior estimate over the model parameters. Consequently, it can naturally generate credible regions by simply taking the \((1-\alpha)\cdot 100\)% confidence region of the posterior distribution. However, such a region can only be _valid_ without model misspecification, meaning that the prior, the likelihood, and the computational capability must be correctly specified in posterior inference [23, 38]. To address the issue of model misspecification when constructing a posterior credible region, a recent study introduced a scaling parameter that controls the spread of the posterior [33] (see also [19] for a related discussion on _generalized Bayesian learning_). Their findings demonstrate that controlling the scaling parameter can yield a credible region of appropriate size but with only approximate validity, whereas CP offers a rigorous guarantee of validity.
Building upon insights from previous works, we study the benefits of Bayesian GNNs within the CP framework to obtain valid and efficient uncertainty estimates. While recent CP approaches for GNNs have focused on ensuring validity and reducing inefficiency, these methods are applicable only to specific tasks [6, 41] or require a correction dataset for additional GNN training [15]. In our work, we propose the use of a temperature parameter in Bayesian GNNs to allow for flexible control of inefficiency within the CP framework. We show that Bayesian GNNs improve the performance of CP, especially the inefficiency, compared to frequentist GNNs while preserving validity. Furthermore, our experiments on both node classification and graph classification tasks demonstrate the existence of temperatures of Bayesian GNNs that lead to more efficient prediction sets. While previous studies have explored the temperature parameter in Bayesian learning mainly for calibration purposes [33, 42], our study investigates the impact of temperature on CP performance. Moreover, we provide an analysis that explores the connection between CP performance and model calibration, providing valuable insights into the varying inefficiency depending on the temperature.
The main contributions of this study are summarized as follows:
* We show that Bayesian GNNs outperform frequentist GNNs in CP.
* Our experiments on graph-structured data demonstrate the existence of temperatures that result in better efficiency2.

Footnote 2: Throughout the paper, we use the terms “lower inefficiency” and “better efficiency” interchangeably.
* We analyze the connection between inefficiency and model calibration to identify the factors that contribute to inefficiency.
Figure 1: **Conformal prediction (CP) of Bayesian GNNs with tempered posteriors. In Bayesian GNNs, the temperature parameter \(\beta\) controls the spread of the posterior over model parameters \(\theta\). Once we train Bayesian GNNs with a specific temperature, CP generates prediction sets by comparing softmax output of a test sample to a threshold obtained from calibration set. In the right box of this figure, the blue bar represents the true label, and CP guarantees that prediction sets contain the true label with a pre-defined coverage. We show that Bayesian GNNs with an appropriate temperature produce more efficient prediction sets, reducing inefficiency, while keeping the desired coverage.**
## 2 Background
### Conformal Prediction
Conformal prediction (CP) [36] aims to construct a set of candidate labels that is likely to contain the true label. Formally, given a test example \(x\in\mathcal{X}\) and for every candidate label \(y\in\mathcal{Y}\), CP either rejects or accepts the candidate pair \((x,y)\) for inclusion in the prediction set. This decision is made based on a statistic called the _non-conformity score_ \(s((x,y)|\theta_{\mathcal{D}})\), a real-valued function that takes an input-output pair \((x,y)\) as input, given a pre-trained model parameter \(\theta_{\mathcal{D}}\). The score \(s((x,y)|\theta_{\mathcal{D}})\) quantifies the disagreement between a data sample \(x\) and a candidate label \(y\). One typical non-conformity score, broadly used in classification tasks, is the negative log-likelihood (\(-\log p_{\theta}(y|x)\)).
In this work, we use one of the most common types of CP, called _split conformal prediction (SCP)_ [36], which splits the available dataset into training and calibration sets. Given a model trained on the training set \(\mathcal{D}\), SCP creates a prediction set \(\mathcal{C}_{n,\alpha}(x)\) for a test input \(x\) using the calibration set \(\{(x_{i},y_{i})\}_{i=1}^{n}\). Specifically, we first compute the non-conformity scores of the calibration samples as \(\{s((x_{i},y_{i})|\theta_{\mathcal{D}})\}_{i=1}^{n}\). Then, the \((1-\alpha)\)-quantile of the set \(\{s((x_{1},y_{1})|\theta_{\mathcal{D}}),\ldots,s((x_{n},y_{n})|\theta_{\mathcal{D}})\}\cup\{\infty\}\) becomes the threshold that determines whether a candidate label \(y\in\mathcal{Y}\) is accepted into or rejected from the set. The threshold can be obtained by selecting the \(\lceil(1-\alpha)(n+1)\rceil\)-th smallest value of the set \(\{s((x_{1},y_{1})|\theta_{\mathcal{D}}),\ldots,s((x_{n},y_{n})|\theta_{\mathcal{D}})\}\cup\{\infty\}\), denoted as \(Q_{\alpha}(\{s((x_{i},y_{i})|\theta_{\mathcal{D}})\}_{i=1}^{n})\). Consequently, the corresponding prediction set for a test sample \(x\) is defined as
\[\mathcal{C}_{n,\alpha}(x)=\{y\in\mathcal{Y}:s((x,y)|\theta_{\mathcal{D}})\leq Q _{\alpha}(\{s((x_{i},y_{i})|\theta_{\mathcal{D}})\}_{i=1}^{n})\}\,. \tag{2}\]
**Theorem 1**: _Given that calibration set \(\{(x_{i},y_{i})\}_{i=1}^{n}\) and a test data point \((x,y)\) are exchangeable random variables, for any coverage level \(1-\alpha\), for any trained model \(\theta_{\mathcal{D}}\), and for any nonconformity score \(s(\cdot|\theta_{\mathcal{D}})\), a conformal prediction set \(\mathcal{C}_{n,\alpha}(x)\) defined in the equation (2) satisfies the equation (1) [36]._
Theorem 1 provides a marginal coverage guarantee for all \(x\) on average. Roughly speaking, the SCP can induce any classifier to produce prediction sets guaranteed to satisfy the desired coverage level. This implies that, on average, these sets have a pre-defined probability of containing the true label.
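For concreteness, the SCP recipe can be stated in a few lines of code. The following NumPy sketch implements the quantile rule and the set construction of the equation (2), assuming the trained model's class probabilities are already available as arrays; the Dirichlet toy data and all variable names are our own illustration, not part of the paper.

```python
import numpy as np

def scp_threshold(cal_probs, cal_labels, alpha=0.1):
    """(1 - alpha)-quantile of calibration non-conformity scores.

    cal_probs:  (n, C) predicted class probabilities on the calibration set
    cal_labels: (n,)   true labels
    Scores are negative log-likelihoods, the score used in the paper.
    """
    n = len(cal_labels)
    scores = -np.log(cal_probs[np.arange(n), cal_labels])
    k = int(np.ceil((1 - alpha) * (n + 1)))   # ceil((1-alpha)(n+1))-th smallest
    return np.append(np.sort(scores), np.inf)[k - 1]

def prediction_set(test_probs, threshold):
    """All labels y with s((x, y)) <= threshold, cf. the equation (2)."""
    return np.flatnonzero(-np.log(test_probs) <= threshold)

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(7), size=500)   # 7 classes, as in Cora
cal_labels = rng.integers(0, 7, size=500)
tau = scp_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(7)), tau))
```

Appending \(\infty\) before indexing reproduces the convention that the threshold becomes \(\infty\) when \(\lceil(1-\alpha)(n+1)\rceil>n\).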
### Bayesian Learning
We introduce two learning paradigms, _frequentist learning_ and _Bayesian learning_, for finding a classifier that predicts an output \(y\in\mathcal{Y}\) for a given input \(x\in\mathcal{X}\). The conventional approach, which finds an optimal model parameter \(\theta\) and the corresponding single classifier \(p(y|x,\theta)\) given a training set \(\mathcal{D}=\{x_{i},y_{i}\}_{i=1}^{n}\), is called _frequentist learning_ [25]. In general, it optimizes the model parameter by minimizing the training loss as
\[\theta^{*}=\operatorname*{argmin}_{\theta}L_{\mathcal{D}}(\theta)=\sum_{(x,y) \in\mathcal{D}}-\log p(y|x,\theta). \tag{3}\]
Meanwhile, _Bayesian learning_ [25; 3; 29] updates beliefs about the model parameter \(\theta\) in light of the dataset \(\mathcal{D}\), starting from a prior belief \(p(\theta)\) about the values of \(\theta\). Instead of providing a single point estimate for \(\theta\), it infers a posterior distribution over possible parameter values based on Bayes' rule, incorporating the likelihood of the data and prior knowledge about \(\theta\). Hence, it is readily capable of capturing and quantifying the uncertainty about the model parameter \(\theta\). Concretely, the Bayesian posterior can be obtained by minimizing the _free energy criterion_ [20; 17],
\[q_{\beta}^{*}(\theta|\mathcal{D})=\operatorname*{argmin}_{q}\mathbb{E}_{q( \theta)}[L_{\mathcal{D}}(\theta)]+\beta\cdot\text{KL}[q(\theta)||p(\theta)], \tag{4}\]
where \(\text{KL}[q(\theta)||p(\theta)]\) is the Kullback-Leibler (KL) divergence between two distributions \(q\) and \(p\), and \(\beta>0\) is the _temperature_ parameter which creates a tempered posterior \(q_{\beta}^{*}(\theta|\mathcal{D})\).
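For intuition about the trade-off in the equation (4), the criterion can be evaluated in closed form for a toy factorized Gaussian variational family, where the KL term is analytic. The sketch below is purely illustrative; the fixed expected loss value is a placeholder, not a trained quantity.

```python
import numpy as np

def gaussian_kl(mu_q, sig_q, mu_p=0.0, sig_p=1.0):
    """Sum of elementwise KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) )."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2) - 0.5)

def free_energy(expected_loss, mu_q, sig_q, beta):
    """The equation (4): expected training loss plus beta-weighted KL."""
    return expected_loss + beta * gaussian_kl(mu_q, sig_q)

mu, sig = np.zeros(10), 0.5 * np.ones(10)
for beta in (0.1, 1.0, 10.0):      # larger beta pulls q toward the prior
    print(beta, round(free_energy(12.3, mu, sig, beta), 2))
```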
One advantage of posterior inference in Bayesian learning is that it can naturally produce credible regions from the posterior. For example, we can simply take the 90% confidence region of the posterior distribution if we want a _well-calibrated_ (or _valid_) region that contains the target output with the pre-defined coverage probability of 90%, as in the equation (1). However, this commonly fails when the model is _misspecified_ [10; 23; 19]. Model misspecification arises from
three assumptions in posterior inference: the prior belief, the likelihood, and the computational power required to perform inference on an intractable posterior [19]. A model might converge to an incorrect solution if the true model is not included in the prior hypothesis space; even if the space contains the true solution, a model may struggle to find a good solution unless it benefits from reasonable inductive biases in the likelihood [38]. Hence, under model misspecification, the resulting posterior can be poorly calibrated and yield undesirable uncertainty quantification (refer to Fig. 3 in [19]). There has been a study that introduces a scaling parameter to address model misspecification when constructing a posterior credible region [33]. The scaling parameter controls the spread of the posterior distribution so that the credible region is _efficient_ in terms of its size and approximately _well-calibrated_. Note that our work differs from previous studies in that we investigate how the temperature parameter \(\beta\) affects the size of prediction sets obtained by CP when applied on top of Bayesian learning.
## 3 Conformal Prediction for Bayesian GNNs
### Bayesian GNNs with Tempered Posterior
We begin by explaining how to implement Bayesian graph convolutional networks (GCNs). We use Graph DropConnect (GDC), a stochastic regularization technique with adaptive connection sampling [13]. Adaptive connection sampling is a generalized technique that encompasses a range of other regularization and sampling techniques, including DropOut [30], DropEdge [27], and Node Sampling [5]. It operates by applying binary random masks to the adjacency matrix, randomly masking edges, nodes, and channels. The authors show that connection sampling in GDC can be interpreted as a Bayesian extension of GNNs by transferring randomness from the sampling to the model parameter space. Consequently, by implementing GDC within GNNs, we are able to obtain an appropriate posterior estimate for the GNN model parameters and further adjust its temperature.
In detail, for every edge \(e=(v,u)\in\mathcal{E}\), we generate a random mask vector denoted as \(\mathbf{z}_{v,u}^{(l)}\in\{0,1\}^{1\times f_{l}}\) by sampling from a Bernoulli distribution with drop rate \(\pi_{l}\), denoted as \(\text{Bern}(\pi_{l})\). Here, \(f_{l}\) represents the number of features in the \(l\)-th layer of the GNN. We define the degree of node \(v\) as \(c_{v}\) and its neighborhood as \(\mathcal{N}(v)\). Then, for every \(l\)-th layer with weight parameter matrix \(\mathbf{w}^{(l)}\) and activation function \(\sigma(\cdot)\), the output feature of each node \(v\) under connection sampling can be represented as the equation (5), where \(\mathbf{w}_{v,u}^{(l)}:=\text{diag}(\mathbf{z}_{v,u}^{(l)})\mathbf{w}^{(l)}\) and \(\mathbf{h}_{u}^{(l)}\) denotes the output feature of node \(u\) at the \(l\)-th layer; this results in a different random weight parameter \(\mathbf{w}_{v,u}^{(l)}\) being learned for each edge \((v,u)\in\mathcal{E}\), i.e.,
\[\mathbf{h}_{v}^{(l+1)}=\sigma\left(\frac{1}{c_{v}}(\sum_{u\in\mathcal{N}(v) \cup\{v\}}\mathbf{z}_{v,u}^{(l)}\odot\mathbf{h}_{u}^{(l)})\mathbf{w}^{(l)} \right)\,=\sigma\left(\frac{1}{c_{v}}\sum_{u\in\mathcal{N}(v)\cup\{v\}} \mathbf{h}_{u}^{(l)}\mathbf{w}_{v,u}^{(l)}\right). \tag{5}\]
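As a rough illustration of the propagation rule in the equation (5), the following NumPy sketch applies fresh per-edge Bernoulli masks during aggregation. It is a simplified stand-in rather than the paper's implementation: we read \(\pi_{l}\) as the drop probability (keep probability \(1-\pi_{l}\)), take ReLU for \(\sigma\), and count the self-loop in the degree \(c_{v}\).

```python
import numpy as np

def gdc_layer(H, adj, W, drop_rate=0.2, rng=None):
    """One Graph DropConnect layer, cf. the equation (5).

    H:   (V, f_in)     node features h_u^{(l)}
    adj: (V, V)        binary adjacency matrix (self-loops added below)
    W:   (f_in, f_out) layer weights w^{(l)}
    A fresh Bernoulli mask z_{v,u} is drawn per edge and feature channel;
    pi_l is read as the drop probability, so entries keep with prob 1 - pi_l.
    """
    rng = rng or np.random.default_rng()
    V, f_in = H.shape
    A = adj + np.eye(V)                     # include u = v in the sum
    out = np.zeros((V, W.shape[1]))
    for v in range(V):
        neigh = np.flatnonzero(A[v])
        acc = np.zeros(f_in)
        for u in neigh:
            z = rng.binomial(1, 1.0 - drop_rate, size=f_in)
            acc += z * H[u]
        out[v] = (acc / len(neigh)) @ W     # len(neigh) plays the role of c_v
    return np.maximum(out, 0.0)             # ReLU as the activation sigma

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))
adj = np.triu((rng.random((5, 5)) < 0.4).astype(float), 1)
adj = adj + adj.T                            # symmetric, no self-loops yet
print(gdc_layer(H, adj, rng.normal(size=(8, 4)), rng=rng).shape)   # (5, 4)
```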
In Bayesian GNNs, we estimate the posterior distribution of the GNN weight parameters, considering them as random parameters. Since GDC incorporates random parameters including weight parameters \(\mathbf{W}^{(l)}=\{\mathbf{w}_{e}^{(l)}\}_{e=1}^{|\mathcal{E}|}\), random masks \(\mathbf{Z}^{(l)}=\{\mathbf{z}_{e}^{(l)}\}_{e=1}^{|\mathcal{E}|}\), and the corresponding drop rate \(\pi_{l}\), we infer posteriors on these parameters, denoted as \(\boldsymbol{\theta}=\{\mathbf{W}^{(l)},\mathbf{Z}^{(l)},\pi_{l}\}_{l=1}^{L}\). Note that the drop rates are also learnable to enhance the flexibility of the model [4]. Then, the free energy criterion for training Bayesian GNNs is defined as
\[F_{\beta}(q)=\mathbb{E}_{q(\boldsymbol{\theta})}\left[\sum_{(x,y)\in\mathcal{D }}-\log p(y|x,\boldsymbol{\theta})\right]\,+\beta\cdot\sum_{l=1}^{L}\text{KL} \left[q(\boldsymbol{\theta})||p(\boldsymbol{\theta})\right] \tag{6}\]
where the _temperature_ parameter \(\beta\) controls the spread of the posterior distribution, as illustrated in Fig. 1. The posterior with \(\beta=1\) corresponds to the standard Bayesian posterior, while posteriors with \(\beta\gg 1\) and \(\beta\ll 1\) rely more on the prior assumption and on the information from data, respectively. Consequently, the _tempered posterior_ can be obtained by minimizing the equation (6) as
\[q_{\beta}(\boldsymbol{\theta}|\mathcal{D})\propto p(\boldsymbol{\theta})\prod_{ (x,y)\in\mathcal{D}}p(y|x,\boldsymbol{\theta})^{1/\beta}. \tag{7}\]
Though the posterior distribution in the equation (7) is intractable to compute exactly, _variational inference (VI)_ [9] provides a good approximation, where a variational distribution
from a defined family of distributions (e.g., Gaussian distributions) approximates the true posterior distribution. Details on the prior assumptions and the VI algorithm for GDC are stated in Appendix D. Suppose we have the tempered posterior \(q_{\beta}(\mathbf{\theta}|\mathcal{D})\) or its VI approximation; then the predictive distribution for a test sample \(x\) can be obtained by averaging over all likely models as
\[p_{\beta}(y|x,q)=\int p(y|x,\mathbf{\theta})q_{\beta}(\mathbf{\theta}|\mathcal{D})d\mathbf{ \theta}\approx\frac{1}{T}\sum_{t=1}^{T}p(y|x,\hat{\mathbf{\theta}}_{t}) \tag{8}\]
where the distribution can be approximated by the average over \(T\) i.i.d. samples \(\hat{\mathbf{\theta}}_{t}\) drawn from \(q_{\beta}(\mathbf{\theta}|\mathcal{D})\). Since we use the negative log-loss as the non-conformity score for CP, i.e., \(-\log p_{\beta}(y|x,q)\), the prediction sets, and especially their size, are directly affected by the temperature. Accordingly, we discuss the adjustment of the temperature of Bayesian GNNs for CP in the following.
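The Monte Carlo average in the equation (8) is easy to mimic numerically. In the sketch below, a Gaussian whose spread scales with \(\beta\) serves as a stylized stand-in for the tempered posterior \(q_{\beta}\) of a toy linear classifier; it is meant only to show how larger temperatures flatten the averaged predictive distribution, not to reproduce the VI posterior of GDC.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tempered_predictive(x, w_mean, w_std, beta, T=100, rng=None):
    """Monte Carlo estimate of the equation (8) for a toy linear classifier.

    As a stylized stand-in for q_beta, weights are drawn from a Gaussian
    whose spread grows with beta; this is an illustration, not the paper's
    variational posterior.
    """
    rng = rng or np.random.default_rng()
    probs = np.zeros(w_mean.shape[1])
    for _ in range(T):
        w = w_mean + np.sqrt(beta) * w_std * rng.normal(size=w_mean.shape)
        probs += softmax(x @ w)
    return probs / T              # averaged probabilities p_beta(y|x, q)

rng = np.random.default_rng(2)
x = rng.normal(size=4)
w_mean = rng.normal(size=(4, 6))
w_std = 0.5 * np.ones((4, 6))
for beta in (1e-3, 1.0, 10.0):    # predictions flatten as beta grows
    print(beta, np.round(tempered_predictive(x, w_mean, w_std, beta, rng=rng), 3))
```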
### Temperature Control for Conformal Prediction
We now present how to exploit the benefits of Bayesian GNNs with tempered posteriors in split conformal prediction (SCP). As illustrated in Fig. 1, we first train Bayesian GNNs with the loss function in the equation (6) for a pre-determined temperature \(\beta\) using the training set. Afterward, for a given calibration set \(\{(x_{i},y_{i})\}_{i=1}^{n}\), we compute the non-conformity score of each calibration sample as the negative log-loss based on the predictive distribution obtained from the tempered posterior, as in the equation (8). Subsequently, for a pre-defined coverage level of \(1-\alpha\), the prediction set for a test input \(x\), generated using SCP on top of Bayesian GNNs with the tempered posterior, is defined as follows:
\[\mathcal{C}_{n,\alpha,\beta}(x)=\left\{y\in\mathcal{Y}:-\log p_{\beta}(y|x,q) \leq Q_{\alpha}(\{-\log p_{\beta}(y_{i}|x_{i},q)\}_{i=1}^{n})\right\}. \tag{9}\]
This prediction set satisfies the desired coverage condition, as stated in Theorem 2. Additionally, as depicted in Fig. 1, we suggest that employing the tempered posterior in Bayesian GNNs, where the temperature controls its spread as a means to overcome model misspecification, can generate efficient prediction sets within CP. While the credible region introduced in [33] provides a coverage guarantee only in an asymptotic sense, SCP provides a rigorous guarantee without relying on specific distributional assumptions, as outlined in Theorem 1. Given that conformal predictors alone do not necessarily create the smallest prediction sets, adjusting the temperature provides greater freedom to reduce the size of the prediction sets. Please refer to Appendix B for the proofs of the theorems.
**Theorem 2**: _Given that calibration set \(\{(x_{i},y_{i})\}_{i=1}^{n}\) and a test data point \((x,y)\) are exchangeable random variables, for any coverage level \(1-\alpha\), and for any pre-determined temperature parameter \(\beta\in[0,\infty)\), a prediction set \(\mathcal{C}_{n,\alpha,\beta}(x)\) defined in the equation (9) satisfies the equation (1) with additional marginalization over posterior sampling in the equation (8) when assuming a finite number of sampling \(T\)._
Figure 2: **Example of label distribution depending on the temperature of Bayesian GCNs.**
To offer a brief insight into how the temperature affects inefficiency, we provide an example of label distributions from Bayesian GNNs at different temperatures, extracted from real data samples of Cora and Citeseer. In Fig. 2, note that there are some cases where the true label is included in the prediction set but not selected as the point prediction. While the prediction sets from all models include the true label, their size largely depends on the temperature of the Bayesian GNNs. As depicted in Fig. 1, when the temperature parameter \(\beta\) is set too low, the label distribution tends to reflect over-confident behavior compared to other models. This leads to extremely low quantiles and prediction sets containing a large number of candidate labels. Conversely, if the temperature is set too high, all labels have nearly identical confidence levels, leading to prediction sets with less informative labels. However, when an appropriate temperature is selected between these extremes, Bayesian GNNs generate efficient prediction sets that contain a smaller number of informative labels, resulting in better efficiency.
## 4 Experimental Results
In this section, we study (i) the conformal prediction performance of tempered posteriors in GCNs, in terms of empirical coverage and inefficiency, and analyze (ii) the contributing factors of inefficiency, considering the connection between inefficiency and model calibration.
**Experimental settings.** We consider two major tasks on graph-structured data: graph classification and node classification. We focus on the transductive setting in the node classification task, which ensures validity [15]. We selected public graph datasets with a sufficient number of classes to effectively reflect the change of set size depending on the temperature. We use CIFAR10-superpixel [21; 8] and ENZYMES [24] for the graph classification task, and Cora and Citeseer for the node classification task [28]. The overall dataset statistics used for model training and CP are stated in Table 1. Our models are based on GCN [18], and Bayesian GCNs are trained with Graph DropConnect (GDC) assuming the same priors for the random parameters as [13]. We selected temperature parameters for each dataset that yield reasonable test accuracy. To ensure the exchangeability required by split conformal prediction, we randomly split the calibration and test sets as illustrated in Table 1, and CP results are averaged over 50 runs for CIFAR10 and 100 runs for the other datasets. Further details on the experimental settings and the exchangeability conditions for GNNs can be found in Appendices A and C.
**Evaluation metrics.** We use two conformal prediction metrics: empirical coverage and empirical inefficiency. A conformal predictor is considered valid when the empirical coverage over the test set \(\mathcal{D}_{\text{test}}\), denoted as \(\widehat{cov}\), satisfies the pre-defined coverage level (we use 0.9). Note that achieving higher coverage is not necessarily advantageous if it leads to an increase in set size, quantified as higher inefficiency. The second metric is the empirical inefficiency, which measures the average size of the prediction sets over the test set, denoted as \(\widehat{\text{ineff}}\). Lower inefficiency indicates that a conformal predictor is more efficient, since it outputs a smaller number of candidate labels while keeping the same pre-defined coverage level.
\[\widehat{cov}=\frac{1}{|\mathcal{D}_{\text{test}}|}\sum_{(x,y)\in\mathcal{D}_{ \text{test}}}\mathbbm{1}\{y\in C_{n,\alpha}(x)\},\ \widehat{\text{ineff}}=\frac{1}{|\mathcal{D}_{\text{test}}|}\sum_{(x,y)\in \mathcal{D}_{\text{test}}}|C_{n,\alpha}(x)| \tag{10}\]
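Both quantities in the equation (10) reduce to simple averages over the test set, as in the following sketch (the toy sets and labels are purely illustrative):

```python
import numpy as np

def coverage_and_inefficiency(pred_sets, labels):
    """Empirical estimates of the equation (10).

    pred_sets: list of arrays, each the label set C(x) for one test sample
    labels:    (N,) true labels
    """
    cov = np.mean([y in set(C) for C, y in zip(pred_sets, labels)])
    ineff = np.mean([len(C) for C in pred_sets])
    return cov, ineff

sets = [np.array([0, 2]), np.array([1]), np.array([0, 1, 3])]
print(coverage_and_inefficiency(sets, np.array([2, 1, 2])))  # (0.667, 2.0)
```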
**Calibration measures.** To assess the calibration of GCNs, we employ commonly used calibration measures alongside the CP measures. For each data sample \((x,y)\), suppose we have a point prediction from a model \(\theta\) as \(\hat{y}=\operatorname*{argmax}_{c}p(c|x,\theta)\); then the accuracy and confidence for every sample are defined as \(\mathbbm{1}\{\hat{y}=y\}\) and \(p(c=\hat{y}|x,\theta)\), respectively. We use the reliability diagram, which depicts the average sample accuracy against confidence levels. If a model is perfectly calibrated, where the estimated
| Dataset | #Nodes | #Edges | #Graphs | #Feat | #Classes | #Train | #Cal | #Test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cora | 2708 | 5429 | - | 1433 | 7 | 140 | 500 | 1000 |
| Citeseer | 3327 | 4732 | - | 3703 | 6 | 120 | 500 | 1000 |
| CIFAR10 | - | - | 50000 | 5 | 10 | 10000 | 5000 | 10000 |
| ENZYMES | - | - | 600 | 21 | 6 | 500 | 50 | 50 |

Table 1: Dataset statistics for Cora, Citeseer, CIFAR10, and ENZYMES.
probability of the true label is exactly the same as the confidence at all levels, the reliability diagram takes the form of the identity function. In our experiments, we divide samples into \(M=20\) confidence bins, each denoted as \(B_{m}\), whose samples have confidence within the range \((\frac{m-1}{M},\frac{m}{M}]\). The average accuracy and confidence over samples in \(B_{m}\) are denoted as \(\text{acc}(B_{m})\) and \(\text{conf}(B_{m})\), respectively. We additionally use two measures obtained from the reliability diagram, the expected calibration error (ECE) and the maximum calibration error (MCE) [11], defined as
\[\text{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\left|\text{acc}(B_{m})-\text{conf}(B _{m})\right|,\text{ MCE}=\max_{m\in\{1,\cdots,M\}}\left|\text{acc}(B_{m})-\text{ conf}(B_{m})\right|, \tag{11}\]
where \(n\) is the total number of samples. ECE is a weighted average of the differences between confidence and accuracy across all bins, while MCE is the largest of these differences among all bins.
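A minimal NumPy implementation of the binning in the equation (11) might look as follows; the random probabilities are placeholders for actual model outputs, and empty bins are simply skipped.

```python
import numpy as np

def ece_mce(probs, labels, M=20):
    """ECE and MCE of the equation (11) with M equal-width confidence bins."""
    conf = probs.max(axis=1)
    acc = (probs.argmax(axis=1) == labels).astype(float)
    n = len(labels)
    ece, mce = 0.0, 0.0
    for m in range(1, M + 1):
        lo, hi = (m - 1) / M, m / M
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(acc[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap
            mce = max(mce, gap)
    return ece, mce

rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(6), size=1000)   # 6 classes, as in Citeseer
labels = rng.integers(0, 6, size=1000)
print(ece_mce(probs, labels))
```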
### Efficiency of Tempered Posteriors
We evaluate our frequentist and Bayesian GCNs using the CP evaluation metrics, empirical coverage and empirical inefficiency, in Fig. 3. As theoretically guaranteed, Bayesian models at all temperatures satisfy the pre-defined coverage level of 0.9. Since coverage is empirically guaranteed across all models, the empirical inefficiency determines the performance of the conformal predictors. A conformal predictor producing a smaller prediction set implies that the set conveys more valuable information. In Fig. 3, we verify that Bayesian GCNs yield lower inefficiency while guaranteeing the desired coverage. _Notably, there exists a model with a temperature that achieves better inefficiency than the other temperatures_, illustrated as yellow boxes in Fig. 3. One may suspect that a model with higher accuracy creates smaller sets and thus lower inefficiency. However, while inefficiency is affected by model performance, we have observed that this is not always the case. In the following sections, we delve into this matter further by examining the relationship between inefficiency and model calibration.
### Contributing Factors to Inefficiency
Our experiments have shown that inefficiency is influenced not only by model performance but also by model calibration. For example, in Fig. 3(a), the Bayesian model with temperature \(\beta=2\cdot 10^{-3}\) has lower inefficiency than the model with \(\beta=8\cdot 10^{-3}\), even though its test accuracy is lower. Motivated by this observation, we conducted the following analysis to answer the question: _"What factors contribute to inefficiency in CP due to changes in temperature?"_
**Relation between inefficiency and model calibration.** To provide a deeper understanding of the connection between prediction sets and the underlying model, we evaluate Bayesian GCNs with different temperatures, assessing calibration through reliability diagrams, ECE, and MCE.
Figure 3: **Empirical coverage and inefficiency of Bayesian GCNs with different temperatures**. Model that produces smallest sized prediction sets on average is denoted as yellow box. The exact average coverage and inefficiency of Bayesian models are represented as red line inside each box.
Overall, we have verified that employing an appropriate temperature parameter \(\beta\) enhances the calibration of GCNs. This improvement is evident in the reliability diagrams, as shown in Fig. 4(b). The Bayesian GCN model with the lowest inefficiency, at \(\beta=10^{-3}\), is better calibrated than the other models.
Furthermore, our experiments yield two major observations. First, _when two models exhibit similar accuracy levels, the better-calibrated model tends to have lower inefficiency, even if the
Figure 4: **Results of inefficiency and calibration on Citeseer**. (a) Model with lower inefficiency implies that it generates prediction sets with smaller size on average. Bayesian GCNs with \(\beta=4\cdot 10^{-3}\) has the lowest inefficiency. (b) The line \(y=x\) on reliability diagrams represents a perfectly calibrated model.
Figure 5: **Results of inefficiency and calibration on CIFAR10**. (a) Model with the lower inefficiency implies that it generates prediction sets with smaller size on average. Bayesian GCNs with \(\beta=10^{-3}\) has the lowest inefficiency. (b) The line \(y=x\) on reliability diagrams represents a perfectly calibrated model.
difference can be slight_. In Fig. 4a, the Bayesian model with \(\beta=4\cdot 10^{-3}\), although having accuracy similar to the model with \(\beta=8\cdot 10^{-4}\), exhibits reduced inefficiency. This can be attributed to its more robust calibration, as evidenced by the ECE, the MCE, and the reliability diagram. A similar case can be observed in Fig. 5 between the two Bayesian models with \(\beta=10^{-3}\) and \(\beta=10^{-4}\).
The second finding is that _a well-calibrated model, even when its accuracy is relatively low, yields reasonably efficient prediction sets_. From the comparison of the Bayesian models with \(\beta=2\cdot 10^{-3}\) and \(\beta=8\cdot 10^{-3}\), we clearly see that the well-calibrated one with \(\beta=2\cdot 10^{-3}\) results in lower inefficiency, despite its lower test accuracy. Note that in the case of \(\beta=2\cdot 10^{-3}\), the two calibration measures ECE and MCE reflect the reliability diagram well, as seen in Fig. 4a. However, in the case of the model with \(\beta=10^{-2}\), MCE reasonably captures the inefficiency resulting from its poor model quality, whereas ECE does not. Hence, in the following, we explore how the previous calibration measures, ECE and MCE, reflect CP inefficiency and propose insights on temperature selection through a new calibration measure.
**Combined measure for temperature selection.** While our previous findings have indicated the existence of temperatures that lead to efficient prediction sets through CP, it is preferable to find such good temperatures without adding the computational burden of CP. For this purpose, MCE proves better than ECE, which can be attributed to the sensitivity of inefficiency to difficult samples when constructing prediction sets. However, it is important to note that neither ECE nor MCE perfectly reflects inefficiency, as seen at points such as \(\beta=10^{-2}\) in Fig. 4 and \(\beta=10^{-2}\) in Fig. 5. Consequently, recognizing that both model performance and calibration contribute to set size, we propose that a combined measure involving accuracy and a calibration measure offers a better chance of capturing the inefficiency of models with different temperatures. In the rightmost panels of Fig. 4 and Fig. 5, we present the relation between the combined measure, calculated as MCE divided by accuracy, and inefficiency. These figures show that the combined measure better accounts for the inefficiency of models with high temperatures, addressing the limitations of ECE and MCE. While we have provided a preliminary exploration of the combined measure's ability to represent set size in CP, we plan to further investigate a measure that fully explains inefficiency. Additionally, in future research, we aim to develop methods for optimizing the temperature through a CP-aware loss [15; 40; 32; 26] during the training process.
## 5 Conclusion
In this work, we explore the impact of temperature on CP in Bayesian GNNs, focusing specifically on the inefficiency of prediction sets. We show that Bayesian GNNs improve CP performance compared to frequentist GNNs while providing more flexibility to control inefficiency via the temperature. Our experiments demonstrate that there exist temperatures of Bayesian GNNs that generate efficient prediction sets within CP. Moreover, our analysis of the connection between inefficiency and model calibration suggests the possibility of a measure capable of accurately capturing inefficiency. Assessing CP performance using such a measure can reduce the computational burden induced by CP. Furthermore, investigating a novel method for training the temperature parameter using a CP-aware loss [15; 40; 32; 26] remains interesting future work. We anticipate that these findings can provide guidance for trustworthy graph learning that studies reliable uncertainty estimates employing CP.
## Acknowledgement
The authors want to thank Sangwoo Park and Osvaldo Simeone for providing the main idea of this work that studies the impact of temperature in Bayesian learning when used in conjunction with CP under GNN architecture. This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-0-01787 and IITP-2023-RS-2023-00259991) supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation). |
2302.03244 | Quantum Recurrent Neural Networks for Sequential Learning | Quantum neural network (QNN) is one of the promising directions where the
near-term noisy intermediate-scale quantum (NISQ) devices could find
advantageous applications against classical resources. Recurrent neural
networks are the most fundamental networks for sequential learning, but up to
now there is still a lack of canonical model of quantum recurrent neural
network (QRNN), which certainly restricts the research in the field of quantum
deep learning. In the present work, we propose a new kind of QRNN which would
be a good candidate as the canonical QRNN model, where, the quantum recurrent
blocks (QRBs) are constructed in the hardware-efficient way, and the QRNN is
built by stacking the QRBs in a staggered way that can greatly reduce the
algorithm's requirement with regard to the coherent time of quantum devices.
That is, our QRNN is much more accessible on NISQ devices. Furthermore, the
performance of the present QRNN model is verified concretely using three
different kinds of classical sequential data, i.e., meteorological indicators,
stock price, and text categorization. The numerical experiments show that our
QRNN achieves much better performance in prediction (classification) accuracy
against the classical RNN and state-of-the-art QNN models for sequential
learning, and can predict the changing details of temporal sequence data. The
practical circuit structure and superior performance indicate that the present
QRNN is a promising learning model to find quantum advantageous applications in
the near term. | Yanan Li, Zhimin Wang, Rongbing Han, Shangshang Shi, Jiaxin Li, Ruimin Shang, Haiyong Zheng, Guoqiang Zhong, Yongjian Gu | 2023-02-07T04:04:39Z | http://arxiv.org/abs/2302.03244v1 | # Quantum Recurrent Neural Networks for Sequential Learning
###### Abstract
Quantum neural network (QNN) is one of the promising directions where the near-term noisy intermediate-scale quantum (NISQ) devices could find advantageous applications against classical resources. Recurrent neural networks are the most fundamental networks for sequential learning, but up to now there is still a lack of canonical model of quantum recurrent neural network (QRNN), which certainly restricts the research in the field of quantum deep learning. In the present work, we propose a new kind of QRNN which would be a good candidate as the canonical QRNN model, where, the quantum recurrent blocks (QRBs) are constructed in the hardware-efficient way, and the QRNN is built by stacking the QRBs in a staggered way that can greatly reduce the algorithm's requirement with regard to the coherent time of quantum devices. That is, our QRNN is much more accessible on NISQ devices. Furthermore, the performance of the present QRNN model is verified concretely using three different kinds of classical sequential data, i.e., meteorological indicators, stock price, and text categorization. The numerical experiments show that our QRNN achieves much better performance in prediction (classification) accuracy against the classical RNN and state-of-the-art QNN models for sequential learning, and can predict the changing details of temporal sequence data. The practical circuit structure and superior performance indicate that the present QRNN is a promising learning
model to find quantum advantageous applications in the near term.
keywords: Quantum deep neural networks, Quantum recurrent neural networks, Temporal sequential data, Meteorological indicators, Stock price, Text categorization
## 1 Introduction
In recent years, deep neural networks (DNNs) [1] have enabled revolutionary applications in several domains of artificial intelligence [2], such as computer vision [3] and natural language processing [4]. In parallel, remarkable breakthroughs have been made in quantum computing [5; 6; 7; 8]. With the demonstrations of quantum supremacy, we are entering the NISQ era of quantum computing, where NISQ refers to noisy intermediate-scale quantum devices [9]. There is a growing consensus that NISQ devices may find useful applications in the near term. One of the most promising directions is the quantum neural network (QNN) [10; 11; 12]. A QNN takes a parameterized quantum circuit (PQC) [13] as its learning model, which is a quantum analogue of a classical neural network.
The great success of classical DNNs is mainly attributed to their flexible architecture. That is, the multilayer architecture is versatile enough to discover intricate structures in high-dimensional data. Specifically, convolutional neural networks (CNNs) [14] can effectively capture spatial correlations within, say, image data, while recurrent neural networks (RNNs) perform well when learning sequential data [15], e.g., in natural language processing.
Inspired by DNNs, quantum deep neural networks (QDNNs) should naturally have similar architectures to process the corresponding types of data. Indeed, for quantum convolutional neural networks (QCNNs), there has been a fair amount of research covering QCNNs' structure, learnability, and applications [16; 17; 18; 19]. In contrast, studies on quantum recurrent neural networks (QRNNs) are rather sparse. Bausch developed a high-degree nonlinear quantum neuron [20] based on the work of Cao et al. [21], and used this neuron to build recurrent networks. Such models possess good non-linearity but need to implement amplitude amplification operations, resulting in high circuit complexity. Takaki et al. [22] proposed a kind of QRNN employing a PQC with a recurrent structure. Such models use simple quantum circuits that are easy to implement on NISQ devices, but their performance on non-trivial sequential data has yet to be verified. In
addition, Sipio and Chen et al. [23; 24] developed hybrid quantum-classical models of QRNNs, where the classical linear layers in RNNs are replaced with PQCs. Such hybrid models merely take the quantum circuits as acceleration sub-modules plugged into the classical networks, and face the dilemma of the interface between quantum and classical systems.
Until now, there has been a lack of a canonical QRNN model, which certainly restricts the research on QRNNs. Inspired by the QCNN model proposed by Cong et al. [16], we consider that a canonical QRNN model should possess the following features: (1) flexible to be implemented on various NISQ platforms; (2) fully quantum evolved, rather than a quantum-classical hybrid network, to ease the interface problem; (3) efficient for sequential learning of classical data.
In order to address the above issues, in the present work, we develop a new kind of QRNN that can fulfill the requirements of the canonical QRNN. Specifically,
1. We propose to construct the quantum recurrent block in a more hardware-efficient way, but not based on Hamiltonian dynamics as done by Takaki et al. [22]. More importantly, we propose a staggered architecture of QRNN by stacking the recurrent blocks in a staggered way. The staggered QRNN can greatly reduce the algorithm's requirement with regard to the coherent time of quantum devices. This property is of great significance, because increasing the coherent time of quantum hardware is extremely hard from the technology development point of view.
2. The present QRNN is a fully quantum learning model, where the outcome of quantum transformation is taken as the prediction of the data with minor post-processing. Our QRNN is a standard NISQ algorithm [5], which can take full advantage of the near term quantum computers.
3. The performance of the present QRNN on classical sequential data is verified concretely. Three different kinds of sequential data including meteorological indicators, stock price, and natural language are applied to test the models. Our QRNN model shows better performance in prediction (classification) accuracy against the classical RNN and state-of-the-art QNN models for sequential learning, and can predict the changing details of the sequence data.
The simple structure as well as the good performance imply that our QRNN would be a promising candidate to find useful applications in the near term.
The rest of the paper is organized as follows. In Section 2, we describe the structure of the quantum recurrent block, which is the basic cell to construct the QRNNs. Section 3 shows the details of the QRNNs, including the architectures of QRNN and the method of optimizing parameters. In Section 4, we present the performance of the QRNNs on three kinds of classical sequential data, i.e., the data of meteorological indicators, stock price, and natural language. Finally, conclusions and outlook of the present work are discussed in Section 5.
## 2 Quantum recurrent block
In general, RNNs possess a multilayer architecture and each layer is the basic recurrent block. Depending on the specific design of the recurrent block, there are a number of RNN variants, such as long short-term memory (LSTM) [25] and gated recurrent unit (GRU) [26]. The main idea of the recurrent block is that the prediction at a given moment is determined by both the new input data of the current moment and the information about the history of all the past elements of the sequence.
In the most basic recurrent block, the output at the time step \(t\) can be expressed as
\[\begin{split}\vec{y}^{(t)}&=f_{y}(V_{o}\vec{h}^{(t)}),\\ \vec{h}^{(t)}&=f_{h}(V_{in}\vec{x}^{(t)}+W\vec{h}^{(t-1)}),\end{split} \tag{1}\]
where \(\vec{x}^{(t)}\) is the input at time step \(t\), \(\vec{h}^{(t-1)}\) is the output of the previous step, implicitly containing information about all the previous elements of the sequence, and \(f_{y}\) and \(f_{h}\) are the activation functions. The training process of RNNs optimizes the parameters \(V_{in}\), \(W\), and \(V_{o}\) by minimizing a loss function. The basic classical recurrent block and an RNN built by stacking the recurrent blocks are schematically shown in Fig. 1.
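As a point of reference for the quantum construction that follows, the equation (1) amounts to a few lines of NumPy; here tanh and softmax are assumed for \(f_{h}\) and \(f_{y}\), a common but not unique choice, and the toy dimensions are our own.

```python
import numpy as np

def rnn_step(x_t, h_prev, V_in, W, V_o):
    """One step of the equation (1); tanh and softmax assumed for f_h, f_y."""
    h_t = np.tanh(V_in @ x_t + W @ h_prev)
    z = V_o @ h_t
    e = np.exp(z - z.max())
    return e / e.sum(), h_t

rng = np.random.default_rng(0)
d_in, d_h, d_out = 3, 8, 2
V_in, W, V_o = (rng.normal(scale=0.3, size=s)
                for s in ((d_h, d_in), (d_h, d_h), (d_out, d_h)))
h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):   # a length-5 input sequence
    y, h = rnn_step(x_t, h, V_in, W, V_o)
print(y)                                  # prediction after the last step
```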
Inspired by the structure of the above classical recurrent block, the quantum recurrent block (QRB) is designed as schematically shown in Fig. 2. The qubits of the QRB are divided into two groups, i.e., two quantum registers denoted as Reg. D and Reg. H. Reg. D is used to embed the sequential data, one element at each time step, and Reg. H is used to store information about the history of all previous elements. In general, the QRB consists of three parts: data encoding \(U_{in}(x^{(t)})\), the ansatz circuit, and partial quantum measurement. Below we go into the details of implementing these three parts.
### Data encoding
Data encoding is the process of loading classical data into a quantum computer, i.e., representing the classical data as quantum states. It is worth noting that in quantum machine learning, data encoding plays a crucial role far beyond preparing the input. On the one hand, encoding classical data quantumly is by no means trivial and can become the bottleneck for the runtime of the whole algorithm. On the other hand, data encoding can be interpreted as a feature map, dubbed a "quantum feature map", which maps the input to the Hilbert space of the quantum system [27]. A well-chosen quantum feature map can make the data linearly separable in the feature space, and thereby efficiently solve the learning problem.
Data encoding is equivalent to performing a unitary transformation \(U_{in}(x)\) on the initial state, i.e., \(\left|f(x)\right\rangle=U_{in}(x)\left|0\right\rangle^{\otimes n}\) with \(n\) being the number of qubits. There exist numerous circuit structures for embedding classical data into a quantum state. Among them, the most famous one would be amplitude encoding, which can embed exponentially many classical data points [27]. Specifically, given a normalized classical vector \(x=(x_{1},\ldots,x_{N})^{T}\) of dimension \(N=2^{n}\), amplitude encoding represents this vector as the amplitudes of an \(n\)-qubit quantum state, i.e., \(U_{in}(x)\left|0\right\rangle^{\otimes n}=\sum_{i=0}^{2^{n}-1}x_{i}\left|i\right\rangle\). Similarly, a data matrix \(A\in\mathbb{C}^{2^{m}\times 2^{n}}\) with entries \(a_{ij}\) satisfying \(\sum_{ij}|a_{ij}|^{2}=1\) can be encoded as \(U_{in}(A)\left|0\right\rangle^{\otimes m}\left|0\right\rangle^{\otimes n}=\sum_{i=0}^{2^{m}-1}\sum_{j=0}^{2^{n}-1}a_{ij}\left|i\right\rangle\left|j\right\rangle\), where \(\left|i\right\rangle\) and \(\left|j\right\rangle\) are
Figure 1: (a) Structure of the basic classical recurrent block. (b) One basic architecture of RNN by stacking the basic recurrent blocks.
respectively the \(i\)th and \(j\)th computational basis states. However, amplitude encoding is much less common in QNNs, because the quantum circuit cost of amplitude encoding usually grows as \(O(poly(N))\), i.e., exponentially in the number of qubits. Therefore, although it offers an exponentially large data-encoding space, amplitude encoding cannot be implemented efficiently on NISQ devices.
In QNNs, the most commonly used encoding techniques are the angle encoding [27] and the associated circuit encoding [28]. In general, angle encoding and circuit encoding embed the classical data as the rotation angles of single-qubit or controlled two-qubit rotation gates. Specifically, given one classical data point \(x_{i}\), the angle encoding first rescales the data to \(\tilde{x}_{i}\) lying between \(0\) and \(\pi\), and then embeds it into a single qubit as, say, \(U_{in}(\tilde{x}_{i})\left|0\right\rangle=\cos(\frac{\tilde{x}_{i}}{2})\left|0\right\rangle+\sin(\frac{\tilde{x}_{i}}{2})\left|1\right\rangle\) with \(U_{in}(\tilde{x}_{i})\) being the \(R_{y}\) gate \(R_{y}(\theta)=\begin{pmatrix}\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}\) [29]. For \(N\) data points \(x=(x_{1},\ldots,x_{N})^{T}\), the angle encoding embeds them using \(N\) qubits,
\[R_{y}^{\otimes N}(\tilde{x})\left|0\right\rangle^{\otimes N}=\bigotimes_{i=1 }^{N}(\cos(\frac{\tilde{x}_{i}}{2})\left|0\right\rangle+\sin(\frac{\tilde{x}_ {i}}{2})\left|1\right\rangle). \tag{2}\]
Formally, angle encoding is a kind of time-evolution encoding. Time-evolution encoding associates a scalar value \(x\in\mathbb{R}\) with the time \(t\) in the unitary evolution under a Hamiltonian \(\hat{H}\), i.e., \(U(x)=exp(-i\hat{H}t)\). In angle encoding, \(\hat{H}\) is simply a Pauli operator, and the corresponding unitaries \(U(\theta)=exp(-i\hat{\sigma}\theta)\) can be implemented efficiently on NISQ devices. Therefore, the computational cost of angle encoding is minor, while the required number of qubits is \(O(N)\) for \(N\) data points.
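The product state of the equation (2) can be built directly by Kronecker products, as in the following sketch (the min-max rescaling to \([0,\pi]\) is one simple choice among several, and is our assumption):

```python
import numpy as np

def angle_encode(x):
    """Product state of the equation (2) for rescaled inputs in [0, pi]."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)   # one qubit per data point
    return state

raw = np.array([0.3, -1.2, 2.0])
x = np.pi * (raw - raw.min()) / (raw.max() - raw.min())  # rescale to [0, pi]
psi = angle_encode(x)
print(psi.shape)              # (8,) amplitudes for 3 qubits
print(np.dot(psi, psi))       # normalization check -> 1.0
```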
Figure 2: Structure of the quantum recurrent block inspired by the basic classical recurrent block shown in Fig. 1.
Circuit encoding builds on angle encoding and embeds the data into a more complex circuit. For example, more than one data point, say \(x_{1},x_{2},x_{3}\), can be encoded into one qubit as follows,
\[R_{z}(\tilde{x}_{3})R_{x}(\tilde{x}_{2})R_{z}(\tilde{x}_{1})H\left|0\right>. \tag{3}\]
Such a dense-encoding strategy can address, to some extent, the problem that angle encoding needs \(N\) qubits to embed \(N\) data points. Based on this strategy, data of up to \(67\) dimensions has been handled successfully using quantum kernel methods on current quantum processors [30]. More importantly, circuit encoding can be used to construct a complex feature map which is hard to simulate on classical computers. For example, in Ref. [28], one encoder is proposed as follows,
\[U_{\phi(X)}=exp(i\sum_{j,k}^{n}\phi_{j,k}(x)Z_{j}Z_{k})H^{\otimes n}, \tag{4}\]
where \(n\) is the number of qubits, \(Z_{j}\) (\(Z_{k}\)) is the Pauli-Z operator for the \(j\)th (\(k\)th) qubit, \(\phi_{j,k}\) are real functions, and \(H\) is the Hadamard gate. Even two layers of such an encoder circuit would be computationally hard to simulate with classical resources [28]. However, it is worth mentioning that whether such a complex feature map leads to an advantage in discovering structure in data is still an open question.
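To make the dense encoding of the equation (3) concrete, the sketch below computes the resulting single-qubit state numerically, assuming the standard matrix conventions for \(R_{x}\), \(R_{z}\), and \(H\); the paper does not fix a phase convention, so this is one consistent choice.

```python
import numpy as np

def Rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Rx(t):
    c, s = np.cos(t / 2), -1j * np.sin(t / 2)
    return np.array([[c, s], [s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def dense_encode(x1, x2, x3):
    """Single-qubit state of the equation (3) for three rescaled inputs."""
    return Rz(x3) @ Rx(x2) @ Rz(x1) @ H @ np.array([1.0, 0.0])

psi = dense_encode(0.4 * np.pi, 0.7 * np.pi, 0.1 * np.pi)
print(np.abs(psi) ** 2)           # measurement probabilities
print(np.vdot(psi, psi).real)     # normalization check -> 1.0
```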
In the present work, we use the angle encoding to load the sequential data. This choice is mainly because the scope of the present work is to verify whether the QRNNs we developed are efficient for learning classical sequential data. We therefore employ a data encoding that is as simple as possible, and in particular one commonly used on NISQ devices. Specifically, the circuit for angle encoding used in the present work is shown in Fig. 3. Note that each element of the sequential data is embedded in a replicative fashion. It has been shown that input redundancy can provide an advantage in classification accuracy [31; 32].
Generally, more complex data encoding techniques, such as the circuit encoding, should be helpful to increase the performance of our QRNNs. We leave this exploration for future work.
### Ansatz
The ansatz in Fig. 2 processes the features produced by the encoder circuit from the raw data, and outputs new features for the subsequent task
of classification or regression. The ansatz is, in fact, a parameterized quantum circuit with adjustable quantum gates. The adjustable parameters are optimized based on the data to be learned, just as they are in classical neural network algorithms.
The ansatz, namely the PQC, is used to approximate the target function, which maps the feature data into different value domains representing different classes. Similar to the universal approximation theorem in classical neural networks [33], there always exists a quantum circuit that can approximate the target function within an arbitrarily small error [34; 35]. Moreover, it is possible to construct such a circuit with a polynomial cost of quantum gates [13; 32]. Therefore, the ansatz aims to use a polynomial number of quantum gates (i.e., a polynomial number of parameters) to implement a function that can approximate the task at hand.
In practice, the ansatz follows a fixed structure of quantum gates. There are two strategies to design the circuits: problem-inspired and hardware-efficient methods. The problem-inspired method leverages the Hamiltonian of the problem to construct the circuit, and optimizing the parameters of the circuit amounts to searching for the ground state of the Hamiltonian. The ansatz applied in variational quantum algorithms (VQAs) for solving eigenvalues and eigenstates [36; 32], and for approximation and optimization [37], usually adopts the problem-inspired method. In contrast, the ansatz in QNNs is typically a hardware-efficient circuit, which consists of layers of native entangling gates (i.e., two-qubit gates) and single-qubit gates [38]. Essentially, such an ansatz can be customized to the gate set and connectivity of a specific quantum device, and can be implemented directly on the device without the need for compilation.
Figure 3: The circuit for encoding the sequential data at the time step \(t\).

Here, the ansatz circuit used in Fig. 2 is constructed in the hardware-efficient way. Specifically, the circuit is composed of single-qubit rotation gates and two-qubit gates (i.e., controlled rotation gates). As is well known, an arbitrary single-qubit gate can be expressed as a combination of rotation gates about the \(\hat{x},\hat{y}\) and \(\hat{z}\) axes [29]. We adopt the \(X-Z\) decomposition to represent the single-qubit gates in the circuit,
\[U_{1q}=R_{x}(\alpha)R_{z}(\beta)R_{x}(\gamma), \tag{5}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are the adjustable parameters to be optimized in the learning process.
For the two-qubit gates, they are applied to produce entanglement between the qubits in the circuit. The two-qubit gates can be fixed, without adjustable parameters, such as the CNOT and controlled Pauli-Z gates; or they can be adjustable, typically the controlled \(R_{x}(\theta)\) and \(R_{z}(\theta)\) gates. In order to increase the expressibility and entangling capability of the ansatz, we use the \(R_{zz}(\theta)\) gate as the two-qubit gate in the circuit,
\[U_{2q}=R_{zz}(\theta)=exp(i\theta Z_{j}Z_{k}), \tag{6}\]
where \(Z_{j}\) and \(Z_{k}\) are the Pauli-Z operators on the \(j\)th and \(k\)th qubit, respectively. The \(R_{zz}(\theta)\) gate can be implemented using two CNOT gates and a single-qubit \(R_z\) rotation, as shown in Fig. 4.
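This decomposition can be checked numerically; a minimal numpy/scipy sketch (an illustration we add, using the convention \(R_z(\phi)=\mathrm{diag}(e^{-i\phi/2},e^{i\phi/2})\) and the first qubit as CNOT control) is:

```python
import numpy as np
from scipy.linalg import expm

theta = 0.37
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
Rz = lambda phi: np.diag([np.exp(-1j*phi/2), np.exp(1j*phi/2)])

U_target = expm(1j * theta * np.kron(Z, Z))           # equation (6)
U_circuit = CNOT @ np.kron(I2, Rz(-2*theta)) @ CNOT   # CNOT - Rz - CNOT

print(np.allclose(U_target, U_circuit))  # True
```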
Figure 4: Implementation of the two-qubit gate \(R_{zz}(\theta)\) using CNOT gates and an \(R_z\) rotation.

Next, the two-qubit gates need to be arranged in a regular way to boost the expressibility and entangling capability of the whole circuit. There are mainly three configurations of two-qubit gates, namely the nearest-neighbor (NN), circuit-block (CB), and all-to-all (AA) structures shown in Fig. 5. On the one hand, these configurations are proposed to harness various quantum hardware platforms with different qubit topologies. Specifically, the NN circuit is the most natural configuration for quantum devices with a linear array of qubits, while AA requires a fully connected architecture of qubits. On the other hand, as discussed in Ref. [39], the three configurations are distinguished from each other in their expressibility, entangling capability, and circuit cost. For the same number of two-qubit gates (e.g., \(D=4\) for NN, \(D=3\) for CB, and \(D=1\) for AA in Fig. 5), the NN circuit has the worst expressibility and entangling capability but the lowest circuit depth, while AA has the best expressibility and entangling capability but the highest requirements on circuit depth and connectivity. The CB circuit provides a good balance: it is much cheaper in circuit depth, while its expressibility and entangling capability are only slightly below those of AA. Therefore, we apply the CB configuration in the ansatz circuit.
Finally, we put everything together, including the single-qubit gates of Equation 5, the two-qubit gates of Fig. 4, and the CB configuration of two-qubit gates of Fig. 5, to obtain the circuit of the ansatz in Fig. 2. Fig. 6 shows one ansatz circuit with 6 qubits.
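As an illustration (written here in Qiskit rather than in the pyQPanda framework used for our experiments, with the CB wiring read off Fig. 5 and phase conventions possibly differing from equation (6)), such an ansatz can be assembled as follows:

```python
import numpy as np
from qiskit import QuantumCircuit

def ansatz(n_qubits, params):
    """Hardware-efficient ansatz: an X-Z-X single-qubit layer (equation (5))
    followed by Rzz entanglers arranged in the circuit-block (CB) pattern."""
    qc = QuantumCircuit(n_qubits)
    p = iter(params)
    for q in range(n_qubits):                 # single-qubit layer, eq. (5)
        qc.rx(next(p), q)
        qc.rz(next(p), q)
        qc.rx(next(p), q)
    for q in range(0, n_qubits - 1, 2):       # CB: first layer of blocks
        qc.rzz(next(p), q, q + 1)
    for q in range(1, n_qubits - 1, 2):       # CB: staggered layer of blocks
        qc.rzz(next(p), q, q + 1)
    return qc

n = 6
qc = ansatz(n, np.random.uniform(0, 2*np.pi, size=3*n + (n - 1)))
```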
### Partial quantum measurement
The final step of the recurrent block is to output a prediction \(y_{t}\) of the current stage, and maintain an intermediate state \(h_{t}\) that contains information about the history of the sequential data. This is achieved by implementing the partial quantum measurement, namely measuring a portion of the qubits, as shown in Fig. 2.

Figure 5: The circuits for the three configurations of two-qubit gates, (a) nearest-neighbor (NN), (b) circuit-block (CB), and (c) all-to-all (AA).
Quantum measurement corresponds to a physical observable \(M\), which can be decomposed as \(M=\sum_{i}\lambda_{i}P_{i}\), with \(\lambda_{i}\) being the \(i\)th eigenvalue and \(P_{i}\) the projector onto the corresponding eigenspace. According to the Born rule, the outcome of the measurement corresponds to one of the eigenvalues \(\lambda_{i}\); that is, the quantum state of the qubits \(\left|\varphi\right\rangle\) randomly collapses to the corresponding eigenstate \(\left|\lambda_{i}\right\rangle\) with probability \(p(\lambda_{i})=\left\langle\varphi\right|P_{i}\left|\varphi\right\rangle\). Then, the expectation value of the measurement outcome can be formalized as
\[\left\langle M\right\rangle=\sum_{i}\lambda_{i}p(\lambda_{i})=\sum_{i}\lambda_ {i}\left\langle\varphi\right|P_{i}\left|\varphi\right\rangle. \tag{7}\]
The most straightforward and commonly used measurement in quantum algorithms is the computational basis measurement, namely the Pauli-Z measurement with the observable \(\sigma_{Z}\),
\[\sigma_{Z}=\left(+1\right)\left|0\right\rangle\left\langle 0\right|+\left(-1 \right)\left|1\right\rangle\left\langle 1\right|. \tag{8}\]
When acting on multiple qubits, Pauli-Z measurement measures whether the individual qubits are in state \(\left|0\right\rangle\) or \(\left|1\right\rangle\), from which we can read off the eigenvalues \(+1\) or \(-1\), respectively.
Figure 6: The circuit of the ansatz used in Fig. 2.

The expectation value \(\left\langle\sigma_{Z}\right\rangle\) of a single qubit is a value in the range \([-1,1]\). Note that every time we measure a quantum state \(\left|\varphi\right\rangle\), it collapses to \(\left|0\right\rangle\) or \(\left|1\right\rangle\). Hence, in practice, the expectation is estimated by repeating the operations of preparing the state \(\left|\varphi\right\rangle\) and measuring it \(S\) times, where \(S\) is also known as the number of shots. The average of the \(S\) results is taken as an estimate of the expectation. Therefore, a high-precision estimate of the expectation requires a large number of shots, namely rerunning the algorithm many times. It can be proved that the scaling of \(S\) is \(O(1/\varepsilon^{2})\), where \(\varepsilon\) is the error of the estimation [40].
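This \(O(1/\varepsilon^{2})\) scaling is easy to reproduce with a toy numpy experiment (an illustration we add; the sampled \(\pm 1\) outcomes stand in for real measurement shots):

```python
import numpy as np

rng = np.random.default_rng(0)
p1 = 0.3                        # assumed probability of measuring |1>
exact = 1 - 2 * p1              # exact <sigma_Z> = (+1)(1-p1) + (-1)p1

for S in (100, 10_000, 1_000_000):
    shots = rng.choice([+1, -1], size=S, p=[1 - p1, p1])
    print(S, abs(shots.mean() - exact))  # error shrinks roughly as 1/sqrt(S)
```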
In the partial quantum measurement shown in Fig. 2, only the quantum Reg. D is measured. Specifically, we first implement the Pauli-Z measurement only on the first qubit of Reg. D. The probability of the first qubit collapsing to the state \(\left|1\right\rangle\) is estimated and, after minor post-processing, taken as the prediction \(y_{t}\). Then, all the qubits of Reg. D are measured and reinitialized to the state \(\left|0\right\rangle\), ready to embed the next element of the sequential data. Here, we would like to mention that the probability of the first qubit collapsing to \(\left|1\right\rangle\) can be expressed using the partial trace technique. Specifically, passing through the circuits of data encoding \(U_{in}\) and ansatz \(U_{a}\), the initial state evolves into \(U_{a}U_{in}\left|0\right\rangle\), and the corresponding density matrix is \(\rho=U_{a}U_{in}\left|0\right\rangle\left\langle 0\right|U_{in}^{\dagger}U_{a}^{\dagger}\). Then the reduced density operator of the first qubit of Reg. D is \(\rho^{(1)}=tr_{\overline{1}}(\rho)\), where \(tr_{\overline{1}}\) represents the partial trace over all the qubits except the first qubit of Reg. D. Considering that the measurement operator acting on the first qubit is the projector \(\left|1\right\rangle\left\langle 1\right|\), the probability is
\[p(\left|1\right\rangle)=tr\left(\left|1\right\rangle\left\langle 1\right| \rho^{(1)}\right). \tag{9}\]
We use this partial trace technique to implement the partial quantum measurement in our program for simulating the present QRNN.
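For illustration, the partial-trace computation of equation (9) can be sketched in a few lines of numpy (assuming the first qubit corresponds to the most significant amplitude index):

```python
import numpy as np

def prob_first_qubit_one(state):
    """p(|1>) on the first qubit of a pure state, cf. equation (9).

    The first qubit is taken as the most significant index of the 2^n
    amplitudes; tracing out the other qubits gives rho^(1)."""
    psi = np.asarray(state).reshape(2, -1)   # split first qubit / the rest
    rho1 = psi @ psi.conj().T                # reduced density matrix
    return np.real(rho1[1, 1])

# Example: the GHZ state (|000> + |111>)/sqrt(2) gives p(|1>) = 0.5
psi = np.zeros(8, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)
print(prob_first_qubit_one(psi))  # 0.5
```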
There are two reasons why only the first qubit, rather than all the qubits of Reg. D, is measured. First, measuring fewer qubits can greatly reduce the measurement error when implementing the algorithm on quantum devices. Second, a global measurement would exhibit a barren plateau (BP); that is, the cost-function gradients would vanish exponentially in the number of qubits [41]. The partial measurement used here can mitigate the BP and thus improve the trainability of the network.
## 3 Quantum Recurrent Neural Networks
With the quantum recurrent block in hand, we can construct quantum recurrent neural networks by stacking the blocks according to certain rules. Below we first present two architectures of QRNNs, then discuss the way of optimizing their parameters.
### Two QRNN architectures
The most straightforward way of building the QRNNs is to arrange the QRBs in sequence as shown in Fig. 7. Hereafter, this architecture is called pQRNN, i.e., plain QRNN. As discussed in Section 2, each QRB proceeds as follows: first, Reg. D embeds one element of the sequential data into the circuit; then Reg. D and Reg. H are entangled through the ansatz; next, the first qubit of Reg. D is measured to obtain an intermediate prediction; finally, Reg. D is reinitialized to the state \(\left|0\right>\) and Reg. H feeds its state directly into the next QRB to transmit the history information of the sequential data.
In pQRNN, the qubits assigned to Reg. D and Reg. H are fixed. That is, when implementing pQRNN, the qubits of Reg. H need to work all the time. Hence, pQRNN requires quantum devices with a long coherence time. However, coherence time is one of the central limitations of NISQ devices and is extremely hard to increase substantially.
In order to address the issues of pQRNN, we propose another architecture of QRNN. In this model, the QRBs are arranged in a staggered way as shown in Fig. 8, and the architecture is called sQRNN, i.e., staggered QRNN. In sQRNN, qubits are assigned to Reg. H in turn, so each qubit has a chance to be reinitialized to the state \(\left|0\right>\) after several time steps. Using this strategy of shift work, sQRNN greatly reduces the coherence-time requirement on the quantum hardware and is therefore more accessible on near-term quantum devices.
### Parameter learning
With the two architectures of QRNNs in place, we now discuss the method for learning the optimal parameters. First, the predictions (i.e., the outcomes of the quantum measurements) at different discrete time steps are rescaled to match the real values or labels. The rescaled predictions are
\[\tilde{y}_{t}=y_{t}\cdot(x_{max}-x_{min})+x_{min}, \tag{10}\]

where \(x_{min}\) and \(x_{max}\) are the minimum and maximum of the input data.

Figure 7: The straightforward way of building the QRNNs using the QRB. This architecture is called pQRNN (i.e., plain QRNN).
Just as in classical neural networks, the errors between the predictions and the real values are quantified by a loss function, and learning the parameters is done by minimizing these errors. Various loss functions can be used; they create different landscapes with different properties in terms of plateaus, saddle points, global minima, etc. Widely used choices include the mean squared error (i.e., \(L_{2}\) loss) and the cross-entropy loss. In the present work, we use the most straightforward \(L_{2}\) loss,
\[L_{2}(\vec{\theta})=\frac{1}{N}\sum_{i=1}^{N}\left(\tilde{y}_{t}^{(i)}(x^{(i)},\vec{\theta})-y_{true}^{(i)}\right)^{2}, \tag{11}\]
where \(\vec{\theta}\) are the parameters to learn (i.e., the rotation angles in the ansatz circuit) and \(N\) is the number of data samples.
Just as in classical neural networks, the parameters can be optimized based on the gradient of the loss function; that is, the parameters are updated in the direction of steepest descent of the loss function. Gradient-based approaches play a crucial role in deep learning because of the backpropagation algorithm, which computes the derivatives of the loss function with respect to the parameters through a computationally efficient organization of the chain rule. However, in quantum computing there exists no similar backpropagation algorithm to evaluate the derivatives, because backpropagation relies on storing the intermediate states of the network during computation, which is forbidden by the quantum no-cloning theorem.
Figure 8: The second architecture of QRNN, which is built by arranging the QRBs in a staggered way. This model is called sQRNN (i.e., staggered QRNN).

In quantum deep learning, there are two typical methods to compute the derivatives with respect to the parameters, namely the difference method and the analytical method. In the difference method, the partial derivatives are approximated by the finite-difference scheme,
\[\frac{\partial L_{2}(\vec{\theta})}{\partial\theta_{j}}\approx\frac{L_{2}(\vec{\theta}+\Delta\cdot\vec{e_{j}})-L_{2}(\vec{\theta}-\Delta\cdot\vec{e_{j}})}{2\Delta}, \tag{12}\]
where \(\Delta\) is a tiny hyper-parameter, and \(\vec{e_{j}}\) is a unit vector whose \(j\)th element is 1 and the rest are 0. Note that in order to estimate the derivative of each parameter, this method requires evaluating the loss function twice, that is, implementing the quantum circuit twice.
In the analytical method, analytical gradients can be obtained based on the form of the quantum gates used in the PQC. Suppose the trainable gates in the PQC are of the form \(U(\theta_{j})=e^{-i\theta_{j}P_{j}}\), where \(P_{j}\) is a tensor product of Pauli matrices (in fact, almost all PQCs in the literature apply this form of quantum gates). Then the derivative of the measurement expectation value (i.e., Equation 7) with respect to the parameter \(\theta_{j}\) can be written as [13]
\[\frac{\partial\langle M\rangle}{\partial\theta_{j}}=\frac{\langle M\rangle_{ \vec{\theta}+\frac{\pi}{2}\cdot\vec{e_{j}}}-\langle M\rangle_{\vec{\theta}- \frac{\pi}{2}\cdot\vec{e_{j}}}}{2}, \tag{13}\]
where the subscript \(\vec{\theta}\pm\frac{\pi}{2}\cdot\vec{e_{j}}\) indicates that the parameter \(\theta_{j}\) is shifted by \(\pm\pi/2\). That is, the derivative is estimated by executing the two circuits with shifted parameter vectors. This formula is thus also known as the parameter-shift rule [11]. Furthermore, the derivative of the loss function with respect to \(\theta_{j}\) can be obtained by the chain rule,
\[\frac{\partial L_{2}\left(\langle M\rangle\right)}{\partial\theta_{j}}=\frac{ \partial L_{2}\left(\langle M\rangle\right)}{\partial\langle M\rangle}\frac{ \partial\langle M\rangle}{\partial\theta_{j}}. \tag{14}\]
Here we would like to remark that the estimation of the derivatives is performed in a quantum-classical hybrid way. Specifically, the terms \(L_{2}(\vec{\theta}\pm\Delta\cdot e_{j})\) in Equation 12 and the terms \(\langle M\rangle_{\vec{\theta}\pm\frac{\pi}{2}\cdot e_{j}}\) in Equation 13 are evaluated by executing the quantum circuit, while the arithmetic involving these terms, including Equation 14, is done on a classical computer. Additionally, as can be seen above, estimating the gradients of the parameters is rather costly in quantum neural networks. Further work is required to clarify the limitations brought by quantum gradient estimation and to design a quantum backpropagation algorithm [13].
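Both gradient estimators are easy to illustrate on a one-parameter toy circuit. In the following numpy sketch (an illustration we add), the function \(\langle Z\rangle=\cos\theta\) obtained from \(R_x(\theta)\left|0\right>\), with the convention \(R_x(\theta)=e^{-i\theta X/2}\), stands in for an actual circuit execution:

```python
import numpy as np

def expectation_Z(theta):
    """<Z> after Rx(theta)|0>, i.e. cos(theta); a stand-in for a circuit run."""
    return np.cos(theta)

theta, delta = 0.7, 1e-4

# Finite-difference estimate, equation (12)
g_fd = (expectation_Z(theta + delta) - expectation_Z(theta - delta)) / (2 * delta)

# Parameter-shift estimate, equation (13)
g_ps = (expectation_Z(theta + np.pi/2) - expectation_Z(theta - np.pi/2)) / 2

print(g_fd, g_ps, -np.sin(theta))  # all three agree with the exact derivative
```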
Having the gradients of the parameters, the optimizers commonly used in classical neural networks can be employed to update them; note that this step is performed on a classical computer. We test the gradient-descent and Adam optimizers. The numerical experiments show that both optimizers work well; for comparison, Adam gives a 32% improvement in training speed, at the cost of a slight decrease in accuracy.
## 4 Experimental results
In order to verify the performance of the present QRNN models, we evaluate them on three different kinds of classical sequential data: meteorological indicators, stock prices, and natural language. In the numerical experiments, both architectures, i.e., pQRNN and sQRNN, are evaluated. The difference method for estimating the gradients and the gradient-descent optimizer are used to update the parameters. The algorithms are implemented with the pyQPanda quantum programming framework [42]. The following experiments show that the present QRNN models achieve promising performance on these three quite different kinds of sequential data.
### Meteorological indicators
The meteorological data contains five indicators: the atmospheric pressure, maximum temperature, minimum temperature, relative humidity, and wind speed. The sequence for each indicator contains 500 elements, representing the values of the indicator over 500 days. The task is to train the QRNN model to predict the value of each indicator on the eighth day using the values of the preceding seven days.
The circuits of the pQRNN and sQRNN used here are those shown in Figs. 7 and 8. Specifically, the number of qubits used in Reg. D (and in Reg. H) is 3; the number of QRBs is seven, so as to embed seven days' data, and the output of the seventh QRB is taken as the prediction for the eighth day. The 500 elements of each indicator are divided into 300 for training and 200 for testing. The learning-rate hyperparameter is 0.03. In addition, as a reference, a classical RNN is constructed with the structure shown in Fig. 1, where the number of recurrent blocks is also seven for a fair comparison with the QRNN.
The prediction accuracy of each indicator is evaluated using the following formula,
\[\begin{split} Accuracy&=\left(1-\sqrt{\frac{1}{N}\sum_{i=1}^{N}E_{ i}^{2}}\right)\times 100\%,\\ E_{i}&=\frac{actual-predicted}{actual},\end{split} \tag{15}\]
where \(E_{i}\) is the relative error of each predicted value (i.e., the data of the eighth day) and \(N\) is the total number of predictions (i.e., the number of test samples).
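In code, this metric can be written as a short numpy helper (a sketch we add for illustration):

```python
import numpy as np

def prediction_accuracy(actual, predicted):
    """Accuracy metric of equation (15): 1 minus the RMS relative error."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    E = (actual - predicted) / actual
    return (1.0 - np.sqrt(np.mean(E**2))) * 100.0
```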
The prediction accuracies of the two QRNN models as well as the classical RNN are shown in Table 1. The first remarkable conclusion from the table is that the sQRNN model achieves a similar accuracy to pQRNN. This is impressive because sQRNN can be implemented efficiently on NISQ devices, which, together with this strong performance, implies that sQRNN has the potential to find useful applications on near-term quantum devices.
The second remarkable conclusion is that both QRNN models predict the five indicators with higher accuracy than the classical RNN. In particular, for the wind-speed indicator, the performance of the QRNNs is much better than that of the classical RNN. The relative variation of the wind speed is much sharper than that of the other indicators, which implies that the QRNNs possess a better capability of predicting sharply varying trends.
Let us make a further comparison between the QRNN and the classical RNN. In the QRNN, the number of parameters to learn is 30, while in the RNN it is 49.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Meteorological indicators**} & \multicolumn{3}{c}{**Prediction accuracy**} \\ \cline{2-4} & **pQRNN** & **sQRNN** & **RNN** \\ \hline Atmospheric Pressure & 99.91\% & 99.87\% & 99.83\% \\ Minimum Temperature & 96.96\% & 93.09\% & 92.65\% \\ Maximum Temperature & 97.68\% & 90.15\% & 91.29\% \\ Relative Humidity & 98.51\% & 96.19\% & 92.99\% \\ Wind Speed & 90.13\% & 87.96\% & 70.38\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy of the two QRNN models and the classical RNN on the sequential data of meteorological indicators.
That is, the QRNNs use fewer parameters but achieve a higher prediction accuracy. This should result from the fact that PQCs have stronger expressibility than classical networks [43; 44]. In other words, with a similar number of parameters, the function space reachable by a PQC is exponentially larger than that of a classical network. Furthermore, we find that the QRNNs provide better predictions of the variation details of the temporal data than the classical RNN. Fig. 9 shows the curves of the relative-humidity indicator predicted by pQRNN and RNN. As can be seen from the figure, the details of the fluctuations present in the actual sequence data are learned by the QRNN, while the classical RNN has the effect of smoothing out the fluctuations. This, together with the above observation that the QRNNs perform much better when predicting the wind speed, whose variation is much sharper, shows that our QRNNs can capture the fine changes in temporal sequence data more efficiently. More predicted curves for the other indicators are presented in A.
Figure 9: Curves of the indicator of relative humidity predicted by pQRNN and RNN.

In order to verify the flexibility of the circuit structure of the QRNN, we test the prediction accuracy of pQRNN with different numbers of qubits. Specifically, the number of qubits used in Reg. D (and in Reg. H) is set to 4, 6, and 8. The results are shown in Table 2. As can be seen from the table, higher prediction accuracy is generally obtained when using more qubits in the QRNN circuit. However, there is a phenomenon of accuracy saturation; that is, as the number of qubits increases, the improvement in accuracy becomes small.
### Stock price
The second task is to predict the variation of a stock price, including the opening price, maximum price, minimum price, closing price, and trading volume. The sequence for each component of the stock price contains 180 elements, representing the prices over 180 days. The task is to use the first seven days' data to predict the price on the eighth day. The circuits of the pQRNN and sQRNN used here, as well as the classical RNN, are the same as those used for predicting the meteorological indicators. The 180 elements of each component are divided into 100 for training and 80 for testing. The learning-rate hyperparameter is 0.03.
The prediction accuracies of the two QRNN models as well as the classical RNN are shown in Table 3. Almost the same conclusions are obtained: (1) the sQRNN model achieves a similar accuracy to pQRNN; (2) both QRNN models predict the five components of the stock price with higher accuracy than the classical RNN. Therefore, the advantages of QRNN over the classical RNN carry over to different kinds of learning data.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Meteorological indicators**} & \multicolumn{3}{c}{**Prediction accuracy**} \\ \cline{2-4} & **4-qubits** & **6-qubits** & **8-qubits** \\ \hline Atmospheric Pressure & 99.94\% & 99.98\% & 99.99\% \\ Minimum Temperature & 94.89\% & 96.96\% & 97.07\% \\ Maximum Temperature & 97.05\% & 97.68\% & 98.08\% \\ Relative Humidity & 98.32\% & 98.51\% & 98.30\% \\ Wind Speed & 87.57\% & 90.13\% & 91.12\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Prediction accuracy of the pQRNN with 4, 6 and 8 qubits in Reg. D (and Reg. H).
The predicted curves of the five components of the stock price are presented in B.
### Text categorization
The above two tasks are regression problems, while the third task is a classification problem. In order to verify the feasibility of QRNNs for natural language processing, we use the MC (meaning classification) task to test our QRNNs. The MC task contains 130 sentences, and each sentence has 3 or 4 words. Half of the sentences are related to food and half to information technology (IT). Hence, MC is a binary classification task that categorizes a sentence as food or IT. There are 17 words in total in MC, and part of the vocabulary is shared between the two classes, so the task is not trivial [45].
The circuits of the pQRNN and sQRNN used here are the same as above. The 130 sentences are divided into 100 for training and 30 for testing. The learning-rate hyperparameter is 0.01.
The classification accuracies of the two QRNN models as well as of two state-of-the-art QNN models for natural language learning are shown in Table 4. As can be seen from the table, both QRNN models achieve an accuracy of 100%, which is much better than that of the syntactic-analysis-based quantum model [45]. On the other hand, the accuracy of QSANN (quantum self-attention neural networks) proposed in Ref. [46] also reaches 100%, but the computational cost of QSANN is much heavier than that of the present QRNNs. In addition, QSANN is a quantum-classical hybrid model, in which the queries, keys, and values are implemented by PQCs and the self-attention coefficients are calculated with classical resources.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Stock indicators**} & \multicolumn{3}{c}{**Prediction accuracy**} \\ \cline{2-4} & **pQRNN** & **sQRNN** & **RNN** \\ \hline Opening price & 98.69\% & 97.29\% & 95.83\% \\ Highest price & 98.83\% & 97.99\% & 96.68\% \\ Lowest price & 98.82\% & 97.89\% & 96.20\% \\ Closing price & 99.08\% & 97.61\% & 96.11\% \\ Volume & 90.36\% & 87.13\% & 73.73\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction accuracy of the two QRNN models and the classical RNN on the sequential data of stock price.
QSANN uses 12 quantum parameters to encode each word into the circuit, whereas in our QRNNs only one parameter is needed.
## 5 Conclusion
In the present work, we develop a hardware-efficient way of constructing quantum recurrent blocks, and by stacking the blocks in a staggered way we obtain the staggered QRNN model, which can be implemented efficiently on quantum devices with a much lower coherence-time requirement. The efficiency of the present QRNN models is verified on three different kinds of classical sequential data, i.e., meteorological indicators, stock prices, and text categorization. The numerical experiments show that our QRNN models achieve much better prediction (classification) accuracy than the classical RNN and state-of-the-art QNN models for sequential learning, and can capture the sharp variation trends present in temporal sequence data. In short, the present QRNNs possess a simple and well-designed structure, and show great performance on both regression and classification problems.
We regard the present work as a meaningful starting point for studying quantum recurrent neural networks on classical sequential data. The present QRNN model can be taken as a candidate canonical QRNN model for studying possible near-term applications of quantum deep learning. Interesting future work includes (1) further optimizing the present QRNNs, e.g., by applying different data encoding methods; (2) characterizing the trainability of the QRNNs, i.e., the barren-plateau properties and the landscape of the cost functions; and (3) extending the present models to deep recurrent neural networks, reminiscent of the LSTM and GRU networks in deep learning.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{**Prediction accuracy**} \\ \hline **pQRNN** & **sQRNN** & **QSANN**[46] & **DisCoCat**[45] \\ 100\% & 100\% & 100\% & 79.8\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification accuracy of the two QRNN models and two state-of-the-art QNN models for sequential learning on the MC task.
## Acknowledgments
The present work is supported by the Natural Science Foundation of Shandong Province of China (ZR2021ZD19) and the National Natural Science Foundation of China (Grant No. 12005212).
We are grateful for the support from Big Data Center of Marine Advanced Research Institute of Ocean University of China. We also thank the technical team from the Origin Quantum Computing Technology Co., Ltd, in Hefei for their professional services.
|
2308.12312 | Physics informed Neural Networks applied to the description of
wave-particle resonance in kinetic simulations of fusion plasmas | The Vlasov-Poisson system is employed in its reduced form version (1D1V) as a
test bed for the applicability of Physics Informed Neural Network (PINN) to the
wave-particle resonance. Two examples are explored: the Landau damping and the
bump-on-tail instability. PINN is first tested as a compression method for the
solution of the Vlasov-Poisson system and compared to the standard neural
networks. Second, the application of PINN to solving the Vlasov-Poisson system
is also presented with the special emphasis on the integral part, which
motivates the implementation of a PINN variant, called Integrable PINN
(I-PINN), based on the automatic-differentiation to solve the partial
differential equation and on the automatic-integration to solve the integral
equation. | Jai Kumar, David Zarzoso, Virginie Grandgirard, Jan Ebert, Stefan Kesselheim | 2023-08-23T07:00:56Z | http://arxiv.org/abs/2308.12312v1 | Physics informed Neural Networks applied to the description of wave-particle resonance in kinetic simulations of fusion plasmas
###### Abstract
The Vlasov-Poisson system is employed in its reduced form version (1D1V) as a test bed for the applicability of Physics Informed Neural Network (PINN) to the wave-particle resonance. Two examples are explored: the Landau damping and the bump-on-tail instability. PINN is first tested as a compression method for the solution of the Vlasov-Poisson system and compared to the standard neural networks. Second, the application of PINN to solving the Vlasov-Poisson system is also presented with the special emphasis on the integral part, which motivates the implementation of a PINN variant, called Integrable PINN (I-PINN), based on the automatic-differentiation to solve the partial differential equation and on the automatic-integration to solve the integral equation.
## 1 Introduction
Nuclear fusion emerges as the most promising solution to the increasing energy demand. However, a fusion plasma is an extremely complex system, characterized by large gradients of density, pressure and current that lead to the triggering of instabilities. Among those instabilities one may find those due to temperature and density gradients resulting in turbulent transport [1, 2, 3, 4, 5, 6, 7], those driven by the energetic particles (such as the alpha particles produced by the fusion reactions) [8, 9], or those driven by the gradients of the magnetic equilibrium [10, 11]. In general, such instabilities are deleterious for the overall confinement of particles and energy. This is the reason why understanding, predicting and eventually controlling instabilities in burning nuclear fusion plasmas is of prime importance on the route towards the steady-state production of energy in future fusion devices such as ITER or DEMO.
The study of instabilities can be done by means of analytic theory, experimental measurements or numerical simulations. One of the most complete frameworks for numerical analyses relies upon the kinetic description of the plasma, which requires solving the Vlasov equation (or Boltzmann equation in the presence of collisions) coupled to the Maxwell equations (Poisson equation and Ampere's law). Such an approach involves the numerical treatment of equations in 6D phase-space, which is numerically expensive and sometimes unaffordable. An approximation consists in averaging out the fastest gyromotion (cyclotron motion), whose frequency is usually much larger than the characteristic frequencies of the instabilities in fusion plasmas. This approach is the fundamental basis of the so-called gyrokinetic codes [12], which have been widely used for turbulent transport studies [13], energetic particle physics [14] and macro-scale instabilities linked to the magnetic equilibrium [15]. The advantage of the gyrokinetic approach is that the dimension of the problem is reduced down to 5D. Nonetheless, the high dimensionality of the data implies that the cost of gyrokinetic simulations is still quite high, in terms of both the computational time required to perform the simulations and the disk space required to store the data to be post-processed. For this reason, new techniques to either accelerate the gyrokinetic simulations or optimize the way data is stored are mandatory.
In this framework, the use of artificial intelligence (AI) techniques might provide solutions to deal with high-dimensional data from gyrokinetic simulations. In this paper, we explore the possibility of using deep neural networks to reduce the percentage of data that needs to be stored. Such an approach might also be used online as real-time processing of data and, in the limit of no stored data, amounts to solving the integro-differential equations by means of neural networks only.
Deep Learning has evolved since the 1990s and transformed how research is done in bio-informatics [16], self-driving cars [17], natural language processing [18], and further applications [19, 20]. The usual application of deep learning follows the data-driven paradigm: large amounts of data whose underlying model is unknown can be used to derive a reduced model based on artificial neural networks organized in sequential layers. The use of sequential layers (hence the designation of _deep learning_) helps capture non-linearities in the data and augments the capability of representation. Such a reduced model is not analytical and can only be applied if one has the weights and biases of the neural network. A step forward can be taken when dealing with data from a physical system whose evolution is given by a set of equations. Coupling the data-driven paradigm with the information from the equations can increase the capability of representation with a higher degree of accuracy with respect to the case where no information from the equation is employed [21]. In that context, Physics informed Neural Networks (PINN) [22] are one class of neural networks exploiting the capability of deep neural networks as universal approximators [23] and encoding the differential equations underlying the physical system with the help of automatic differentiation [24].
The natural question that arises when using PINN is "why do we need neural networks if we already have the equations that govern the evolution of the physical
system?". Indeed, solving the differential equations by means of standard numerical methods such as finite differences or finite elements might be more appropriate. However, using these methods requires storing the numerical solution on a large number of grid points with a certain temporal resolution. For realistic physical systems this approach implies storing large amounts of data. In that respect, storing a neural network that can be used to obtain the solution at any point might represent a solution to optimise the storage. Depending on the amount of data used to train the neural network, PINN can exhibit different applications. For instance, it can be applied to infer missing data from known (or stored) data. When reducing the amount of known data down to the points at initial and boundary conditions, PINN is nothing else but a new way of solving differential equations. In the context of solving differential equations, PINN exhibits the advantage of being mesh-independent. Whereas PINN has already been used to solve linear and non-linear Partial Differential Equations (PDEs) [22, 25, 26], there are some variants of PINN used to solve integro-differential equations [27, 28, 29, 30], upon which part of our work relies.
In this paper we apply PINN to a specific physical problem: the wave-particle resonance [31]. Such a mechanism is at the origin of most of the instabilities in fusion plasmas. Indeed, for an instability to occur, energy must flow from the particles of the plasma to the electromagnetic field. This exchange of energy is only possible when the phase velocity of the waves is close to the velocity of particles, which is the wave-particle resonance. Particles resonating with the wave but slightly faster (resp. slower) than the wave will transfer (resp. receive) energy to (resp. from) the wave. If on average there are more particles providing energy to the wave than particles receiving energy from the wave, the amplitude of the wave increases and an instability is triggered. This is the fundamental mechanism of the bump-on-tail instability [32]. When the opposite happens, the wave is damped. This is the mechanism for the Landau damping [33]. In order to simulate the two mechanisms we need to solve a system of coupled integro-differential equations, known as the Vlasov-Poisson system. The Vlasov equation is a PDE and the Poisson equation is an integro-differential equation (IDE). We will use the VOICE code [34] to numerically solve this system of coupled equations. We use the numerical results as ground truth for training AI models and also to compare the predictions after training. Since both mechanisms are ubiquitous in burning fusion plasmas, we focus in the present paper on the application of AI to capture the underlying physics by means of PINN-based models. In particular, we will explore three applications of PINN to the wave-particle resonance.
1. Compare the use of Deep Neural Network (DeepNN) and PINN for storing the simulation data by using a small percentage of stored results for training and inferring the rest.
2. Use PINN to solve the Vlasov equation (the PDE in our system) and use the stored result of the Poisson equation (the IDE in our system) to predict the solution of the coupled equations.
3. A variant of PINN, called I-PINN (for Integrable-PINN) is implemented to solve the integro-differential equation by combining automatic differentiation and the fundamental theorem of Calculus, inspired by [29] and [30]. We also compare the results provided by I-PINN with the already existing f-PINN method [27].
The remainder of the paper is structured as follows. Section 2 is devoted to introducing the numerical methods used in the VOICE code to solve the Vlasov-Poisson system. The PINN method, together with the employed architecture, is also briefly discussed. Section 3 is devoted to hyper-parameter tuning of DeepNN and PINN for inferring the missing data using a very small amount of stored results. Both PINN and DeepNN are compared for the inference of missing data using the least amount of stored data. At the end of section 3, PINN is used to solve the Vlasov equation. In section 4.1 f-PINN is used for the first time to solve the Vlasov-Poisson system using only the boundary and initial conditions. Finally, the I-PINN method is introduced in section 4.2 to solve integro-differential equations and applied to the Vlasov-Poisson system. Section 5 concludes the paper and summarizes our work and future plans.
## 2 Numerical methods: VOICE and PINN
In the context of gyro-kinetic simulations for fusion plasmas, GYSELA exhibits the advantage that it is global, full-f and flux-driven [35]. It simulates the electrostatic plasma turbulence and induced transport in the core of the tokamak, evolving the 5-dimensional (3D in real space coordinates, and 2D in velocity coordinates) guiding-center distribution function of ions and electrons. Owing to its complexity, a reduced version of GYSELA has been developed, called VOICE [34]. It is a two-dimensional (1D in real space, and 1D in velocity space) kinetic code. In this paper, before applying PINN to GYSELA, we explore its applicability to the VOICE code, which we use to simulate the Landau damping and the bump-on-tail instability. These two physical mechanisms are used as test-bed cases for exploring the use of PINN to capture the physics of the wave-particle resonance. In the following subsections we discuss briefly the VOICE code, the results for the Landau damping and the bump-on-tail instabilities as well as the PINN method, with special emphasis on the loss function and the training details.
### VOICE to solve the Vlasov-Poisson system
The Vlasov-Poisson system describes the collective behaviour of charged particles in plasmas. It combines the Vlasov equation, which describes the kinetics of the particles, with the Poisson equation, which relates the electric potential to the charge distribution. The Vlasov equation governs the evolution of the distribution function \(f(x,v,t)\), which represents the distribution of particles at a given position \(x\) with velocity \(v\) at a time \(t\). In the following, we assume a plasma with electrons and ions. Electrons are much faster than ions; therefore, we assume that on the analyzed time scale the ions are at rest, so the only distribution function to evolve is that of the electrons. The normalised
Vlasov equation is given by
\[\frac{\partial f(x,v,t)}{\partial t}+v\frac{\partial f(x,v,t)}{\partial x}+E(x,t) \frac{\partial f(x,v,t)}{\partial v}=0, \tag{1}\]
where \(E(x,t)\) is the electric field, derived from the electric potential \(E\left(x,t\right)=-\partial_{x}\phi\left(x,t\right)\). The evolution of the electric potential is given by the normalised Poisson equation, which expresses the relation between the potential and the charge density in the plasma
\[-\frac{\partial^{2}\phi(x,t)}{\partial x^{2}}=\int f(x,v,t)dv-1. \tag{2}\]
The VOICE code computes the solutions for \(x\in[x_{i},x_{f}]\) and \(v\in[v_{i},v_{f}]\) and restricting \(t\) to the interval \([0,t_{f}]\). For a detailed explanation of the normalization of equations 1 and 2 as well as the numerical method employed in VOICE, the reader is encouraged to refer to [34]. We impose Neumann boundary conditions in velocity for the distribution function,
\[\left.\frac{\partial f}{\partial v}\right|_{v=v_{i}}=\left.\frac{\partial f}{ \partial v}\right|_{v=v_{f}}=0,\qquad\forall\left(x,t\right) \tag{3}\]
Furthermore, we impose periodic boundary conditions in \(x\) for both the distribution function and electrostatic potential,
\[f\left(x_{i},v,t\right) = f\left(x_{f},v,t\right),\qquad\forall\left(v,t\right) \tag{4}\] \[\phi\left(x_{i},t\right) = \phi\left(x_{f},t\right),\qquad\forall t \tag{5}\]
The initial conditions are simply \(f\left(x,v,t=0\right)=f_{0}\left(x,v\right)\) and \(\phi\left(x,t=0\right)\) is obtained from the Poisson equation 2 using \(f_{0}\). The parameters used for each of the VOICE simulations reported in this paper are summarized in table 1. The discretization in \((x,v)\) is given by \((N_{x},N_{v})\), the number of points used in the respective dimensions.
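For illustration, under the periodic boundary conditions of equation 5 the electrostatic potential can be obtained from equation 2 with a spectral method; the following numpy sketch shows one such solver (an illustration we add, not necessarily the scheme implemented in VOICE):

```python
import numpy as np

def solve_poisson_periodic(f, x, v):
    """Solve -phi_xx = int f dv - 1 (equation 2) with periodic BCs via FFT.

    f : array of shape (Nx, Nv), distribution function on the grid
    x : Nx equidistant positions; v : Nv equidistant velocities
    """
    rho = np.trapz(f, dx=v[1] - v[0], axis=1) - 1.0   # right-hand side of eq. 2
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=x[1] - x[0])
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[k != 0] = rho_k[k != 0] / k[k != 0]**2      # k^2 phi_k = rho_k
    return np.real(np.fft.ifft(phi_k))                # zero-mean potential
```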
In this article, we explore two cases: Landau damping and the bump-on-tail instability, as mentioned earlier. Without loss of generality, the initial distribution function \(f_{0}\) is decomposed into equilibrium (\(f_{eq}\)) and perturbed (\(\delta f\)) distribution functions.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Case & Simulation & \(\varepsilon\), \(\alpha\) & \(\Delta t\) & \((N_{x},N_{v})\) & \([0,t_{f}]\) & \([x_{i},x_{f}]\) & \([v_{i},v_{f}]\) & Storage \\ & & & & & & & & (MB) \\ \hline I & LLD & \(\varepsilon\)=0.01 & 0.0125 & \((128,512)\) & [0,45] & \([0,4\pi]\) & \([-6,6]\) & 1804 \\ \hline II & NLLD & \(\varepsilon\)=0.1 & 0.0125 & \((128,512)\) & [0,45] & \([0,4\pi]\) & \([-6,6]\) & 1804 \\ \hline III & BOT & \(\alpha\)=0.1 & 0.05 & (256,1024) & \([0,100]\) & \([0,50]\) & \([-8,8]\) & 4200 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters of the three cases considered in this paper. LLD stands for _linear Landau damping_, NLLD stands for _non-linear Landau damping_ and BOT stands for _bump-on-tail_. \(\Delta t\) is the time step, \(N_{x}\) and \(N_{v}\) are the total number of equidistant points chosen to solve equations 1 and 2 in \(x\) and \(v\), respectively. Storage is the disk-space used to store the results of each simulation, namely the distribution function values at each point of \((x,v,t)\) and the electric potential at each \((x,t)\).
In our case, the initial perturbation is simply proportional to \(f_{eq}\), as \(\delta f=f_{eq}\varepsilon\cos{kx}\). Depending on the choice of \(f_{eq}\), we will analyse either the Landau damping or the bump-on-tail instability. The phenomenon of Landau damping will be explored with the following equilibrium distribution function,
\[f_{eq}=\frac{1}{\sqrt{2\pi}}\exp\left(\frac{-v^{2}}{2}\right). \tag{6}\]
VOICE has been run for this case up to \(t_{f}\)=45. Two values of the initial perturbation, \(\varepsilon=0.01\) and \(\varepsilon=0.1\), are employed for the same \(k=0.5\). Indeed, the amplitude \(\varepsilon=0.01\) leads to linear damping during the whole simulation, whereas the amplitude \(\varepsilon=0.1\) results in a linear phase followed by a nonlinear phase where the damping rate is modified. This nonlinearity leads to higher-amplitude oscillations of the distribution function in phase space. The top panels of figure 1 summarize the results obtained with VOICE for the Landau damping cases. The top right panel represents the distribution function for \(\varepsilon=0.01\) (thick solid black line) and \(\varepsilon=0.1\) (thin dashed blue line). The left panels represent the time evolution of the electrostatic potential evaluated at the mid \(x\)-position. The \(y\)-axis is given in logarithmic scale. It is observed that Landau damping is characterized by the formation of structures (or oscillations in velocity space). The amplitude of these oscillations increases with \(\varepsilon\), but the location remains unchanged. The increasing amplitude is characteristic of the nonlinearities that play a major role in modifying the damping rate at \(t\approx 25\).
Moving on to the bump-on-tail instability, the analysis of the physics requires inverting the slope of the distribution function. Therefore, we consider an equilibrium distribution function \(f_{eq}\) composed of a Maxwellian distribution \(f_{1}\) with density \(1-\alpha\) (\(0\leq\alpha\leq 1\)), and a shifted Maxwellian distribution \(f_{2}\) representing a population with a mean velocity \(v_{0}\), density \(\alpha\) and temperature \(T_{0}\). The parameter \(\alpha\) controls the fraction of superthermal particles leading to an inversion of the slope of the total distribution function. The equilibrium distribution function therefore reads
\[f_{eq}= f_{1}+f_{2},\,\mbox{with} \tag{7}\] \[f_{1}=\frac{1-\alpha}{\sqrt{2\pi}}\exp\left\{\frac{-v^{2}}{2}\right\}\] \[f_{2}=\frac{\alpha}{\sqrt{2\pi T_{0}}}\exp\left\{\frac{-(v-v_{0} )^{2}}{2T_{0}}\right\}\]
As for the Landau damping, the initial distribution function is
\[f_{0}=f_{eq}(1+\varepsilon\cos{kx}) \tag{8}\]
This case has been simulated with VOICE up to \(t_{f}\)=100. The parameters used are: \(\varepsilon=10^{-5}\), \(\alpha=0.1\), \(T_{0}=0.2\), \(v_{0}=3.8\) and \(k=0.5\). The results are summarised in the bottom panels of figure 1. For the electrostatic potential, the two phases (linear and nonlinear) are conveniently highlighted by blue and red lines, respectively. The
final distribution function is plotted in red in the bottom right panel, together with the initial distribution function plotted in blue. The initial distribution function exhibits a bump on the tail, responsible for the so-called bump-on-tail instability. Particles are redistributed in phase space during the nonlinear phase, as can be observed in the final distribution function. This redistribution leads to the saturation of the instability.
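For concreteness, the initial conditions of equations 6-8 can be written down directly; a minimal numpy sketch (an illustration we add, using the parameters quoted above) is:

```python
import numpy as np

def f0_landau(x, v, eps=0.01, k=0.5):
    """Initial condition for the Landau damping cases, eqs. (6) and (8)."""
    X, V = np.meshgrid(x, v, indexing='ij')
    feq = np.exp(-V**2 / 2) / np.sqrt(2 * np.pi)
    return feq * (1 + eps * np.cos(k * X))

def f0_bump_on_tail(x, v, eps=1e-5, k=0.5, alpha=0.1, v0=3.8, T0=0.2):
    """Initial condition for the bump-on-tail case, eqs. (7) and (8)."""
    X, V = np.meshgrid(x, v, indexing='ij')
    f1 = (1 - alpha) / np.sqrt(2 * np.pi) * np.exp(-V**2 / 2)
    f2 = alpha / np.sqrt(2 * np.pi * T0) * np.exp(-(V - v0)**2 / (2 * T0))
    return (f1 + f2) * (1 + eps * np.cos(k * X))
```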
Figure 1: Summary of physical results from VOICE simulations. Top panels: time evolution of the electrostatic potential (left) evaluated at the mid \(x\)-position for the two considered Landau damping cases: \(\varepsilon=0.01\) (thick solid black line) and \(\varepsilon=0.1\) (thin dashed blue line). The final distribution function is evaluated at the mid \(x\)-position as a function of the velocity (right) for the two considered cases of Landau damping. Bottom panels: same quantities, but for the bump-on-tail instability. For the electrostatic potential, the linear and nonlinear phases are highlighted by thick blue and red lines, respectively. The final distribution function on the right figure is represented by a thick red line, together with the initial distribution represented by a thin blue line, where the bump-on-tail is clearly seen.

The computed solutions of the Vlasov-Poisson system for the two physical mechanisms (Landau damping and bump-on-tail) are stored. The distribution function \(f(x,v,t)\) is recorded for each spatial position \(x\) and velocity \(v\) at every time step during the numerical integration of the Vlasov equation. Additionally, the solution of the Poisson equation, representing the electrostatic potential \(\phi(x,t)\), is stored for every spatial position and time. It should be noted that storing the results of a single VOICE simulation, corresponding to \(\phi\) and \(f\), requires approximately 1 to 4 GB of disk space, as indicated in the "Storage" column of table 1. These recorded data serve as the ground truth for training deep learning models in subsequent analyses.
### Physics Informed Neural Networks
The ability of PINN to simulate a physical system in a mesh-free way without requiring much data has attracted the interest of many researchers, resulting in quite a few variants of PINN [36]. Among those, fractional PINN (f-PINN) [27] is used to solve integro-differential equations, variational PINN (vPINN) [37] uses a test space of Legendre polynomials to reduce the training cost, and hp-vPINN is an extended version of vPINN. Many more variants exist, and the reader is encouraged to go through the cited bibliography [38, 39, 40, 41, 42].
Mathematically speaking, PINN is a fully connected deep neural network that takes a vector \(\left(\mathbf{y},t\right)\) in the defined phase-space of a PDE as input and returns as output an approximation to the solution of the PDE evaluated at that vector. The PDE is defined as a differential operator \(\mathcal{L}\) applied to a function \(f\) to give a function \(g\)
\[\mathcal{L}\left(f\left(\mathbf{y},t\right)\right)=g(\mathbf{y},t),\qquad \left(\mathbf{y},t\right)\in\Omega\cup\left[t_{i},t_{f}\right]\,, \tag{9}\]
where \(\Omega\) is the sub-space on which the solution of the PDE is defined from initial time \(t_{i}\) till final time \(t_{f}\). The boundary of \(\Omega\) is divided into \(\partial\Omega=\partial\Omega_{N}\cup\partial\Omega_{D}\cup\partial\Omega_{ P1}\cup\partial\Omega_{P2}\), where \(\partial\Omega_{N}\) is the boundary where the solution satisfies Neumann conditions, \(\partial\Omega_{D}\) is the boundary where Dirichlet conditions apply and \(\partial\Omega_{P1}\) and \(\partial\Omega_{P2}\) represent the boundaries where the solution satisfies periodic conditions. In addition, initial conditions must be satisfied. Therefore, the solution of the PDE is supposed to satisfy the following conditions
\[f\left(\mathbf{y},t=t_{i}\right) =f_{0}\left(\mathbf{y}\right),\forall\mathbf{y}\in\Omega \tag{10}\] \[\mathbf{n}\cdot\nabla f\left(\mathbf{y},t\right) =f_{\partial\Omega_{N}}\left(\mathbf{y},t\right),\forall\mathbf{y }\in\partial\Omega_{N},\,\forall t\geq t_{i}\] (11) \[f\left(\mathbf{y},t\right) =f_{\partial\Omega_{D}}\left(\mathbf{y},t\right),\,\forall \mathbf{y}\in\partial\Omega_{D},\,\forall t\geq t_{i}\] (12) \[f\left(\mathbf{y},t\right) =f\left(\mathbf{y}^{\prime},t\right),\,\forall\left(\mathbf{y}, \mathbf{y}^{\prime}\right)\in\partial\Omega_{P1}\times\partial\Omega_{P2},\, \forall t\geq t_{i} \tag{13}\]
In the previous equation, \(f_{0}\) is a function of \(\mathbf{y}\) only, \(\mathbf{n}\) is the normal vector to \(\partial\Omega_{N}\) for the defined time range and \(f_{\partial\Omega_{N}}\) and \(f_{\partial\Omega_{D}}\) are functions of \(\mathbf{y}\) and \(t\). The goal is to approximate the solution of the PDEs by neural networks, defined as \(f_{NN}=f_{NN}\left(\mathbf{y},t;\boldsymbol{\theta}\right)\), where \(\boldsymbol{\theta}\) represents the vector of weights and biases of the neural network and the subscript \(NN\) stands for _Neural Network_. To achieve this goal, an optimization problem is set up, aiming at minimizing a cost or a loss function \(L\) made up of different terms, each term accounting for a given condition that the neural network must satisfy. These conditions are the following:
* The network must satisfy the PDEs at any point.
* The network must satisfy the initial and boundary conditions.
* The network must be a good approximator of the solution at the known or stored data points, referred to as _data points_ in the remainder of the paper.
In order to compute the derivatives of the neural network, automatic differentiation is utilized, leveraging the chain rule to compute exact derivatives up to machine precision. Consequently, the overall loss function can be expressed as the sum of the following terms:
\[L=L_{\mathrm{PDE}}+L_{t=t_{i}}+L_{\partial\Omega_{N}}+L_{\partial\Omega_{D}}+ L_{\partial\Omega_{P1}\times\partial\Omega_{P2}}+L_{\mathrm{data}} \tag{14}\]
The losses are defined over all the points described before. In this paper, we use Mean Squared Error (_MSE_) as loss function. In that case, each of these terms has the following expression
\[L_{\mathrm{PDE}}\left(\mathbf{\theta}\right) =\sum_{\hat{\mathbf{y}}\in\Omega_{\mathrm{PDE}}}\frac{\left|\mathcal{L}\left(f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)\right)-g\left(\hat{\mathbf{y}}\right)\right|^{2}}{N_{\mathrm{PDE}}} \tag{15a}\] \[L_{t=t_{i}}\left(\mathbf{\theta}\right) =\sum_{\hat{\mathbf{y}}\in\Omega_{0}}\frac{\left|f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)-f_{0}\left(\hat{\mathbf{y}}\right)\right|^{2}}{N_{0}}\] (15b) \[L_{\partial\Omega_{N}}\left(\mathbf{\theta}\right) =\sum_{\hat{\mathbf{y}}\in\partial\Omega_{N}}\frac{\left|\mathbf{n}\cdot\nabla f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)-f_{\partial\Omega_{N}}\left(\hat{\mathbf{y}}\right)\right|^{2}}{N_{\partial\Omega_{\mathrm{N}}}}\] (15c) \[L_{\partial\Omega_{D}}\left(\mathbf{\theta}\right) =\sum_{\hat{\mathbf{y}}\in\partial\Omega_{D}}\frac{\left|f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)-f_{\partial\Omega_{D}}\left(\hat{\mathbf{y}}\right)\right|^{2}}{N_{\partial\Omega_{\mathrm{D}}}}\] (15d) \[L_{\partial\Omega_{P1}\times\partial\Omega_{P2}}\left(\mathbf{\theta}\right) =\sum_{\left(\hat{\mathbf{y}},\hat{\mathbf{y}}^{\prime}\right)\in\partial\Omega_{P1}\times\partial\Omega_{P2}}\frac{\left|f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)-f_{NN}\left(\hat{\mathbf{y}}^{\prime};\mathbf{\theta}\right)\right|^{2}}{N_{\partial\Omega_{\mathrm{P}}}}\] (15e) \[L_{\mathrm{data}}\left(\mathbf{\theta}\right) =\sum_{\hat{\mathbf{y}}\in\Omega_{\mathrm{data}}}\frac{\left|f_{NN}\left(\hat{\mathbf{y}};\mathbf{\theta}\right)-f\left(\hat{\mathbf{y}}\right)\right|^{2}}{N_{\mathrm{data}}} \tag{15f}\]
where we have written \(\hat{\mathbf{y}}\equiv\left(\mathbf{y},t\right)\) for the sake of readability. \(\Omega_{\mathrm{PDE}}\) represents the subset of points where the neural network is forced to satisfy the PDE. These points are usually called _collocation points_ in the literature. \(\Omega_{0}\) is the subset of points where the neural network is forced to satisfy the initial conditions. \(\partial\Omega_{N}\), \(\partial\Omega_{D}\), \(\partial\Omega_{P1}\) and \(\partial\Omega_{P2}\) are the set of points where the boundary conditions must be satisfied. \(\Omega_{\mathrm{data}}\) is the set of points where the neural network is forced to approximate the stored data. We have also defined \(N_{\mathrm{PDE}}=\#\Omega_{\mathrm{PDE}}\), \(N_{0}=\#\Omega_{0}\), \(N_{\partial\Omega_{\mathrm{N}}}=\#\Omega_{\partial\Omega_{N}}\), \(N_{\partial\Omega_{\mathrm{D}}}=\#\Omega_{\partial\Omega_{D}}\), \(N_{\partial\Omega_{\mathrm{P}}}=\#\Omega_{\partial\Omega_{P1}}=\#\Omega_{ \partial\Omega_{P2}}\) and \(N_{\mathrm{data}}=\#\Omega_{\mathrm{data}}\). The different subsets of points are schematically represented in figure 2. It is to be noted that since the neural network \(f_{NN}\) depends on \(\hat{\mathbf{y}}\) and also on the vector of weights and biases \(\mathbf{\theta}\), each term of the loss function depends on \(\mathbf{\theta}\). Therefore, the general minimization problems reads
\[\mathbf{\theta}^{*}=\operatorname*{arg\,min}_{\mathbf{\theta}}L\left(\mathbf{\theta}\right) \tag{16}\]
This minimization problem can be solved using standard optimization techniques, such as gradient descent, stochastic gradient descent or extended versions with adaptive learning rates, such as Root Mean Squared Propagation (_RMSP_) and adaptive moment estimation (_Adam_), where the learning rate is adapted using running averages of the second moments of the gradients (for RMSP) or of both the first and second moments (for Adam).
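As an illustration of how the \(L_{\mathrm{PDE}}\) term of equation 15a is evaluated for the Vlasov equation 1, the following TensorFlow sketch (a simplified version with hypothetical names; the electric field \(E\) is here assumed to be given at the collocation points, e.g. from stored VOICE data) uses automatic differentiation to build the residual:

```python
import tensorflow as tf

def vlasov_pde_loss(f_nn, x, v, t, E):
    """MSE of the Vlasov residual f_t + v f_x + E f_v at collocation points.

    f_nn    : Keras model mapping (x, v, t) -> f
    x, v, t : 1D tensors of collocation coordinates
    E       : 1D tensor of the electric field evaluated at (x, t)
    """
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, v, t])
        f = tf.squeeze(f_nn(tf.stack([x, v, t], axis=1)), axis=1)
    f_x = tape.gradient(f, x)
    f_v = tape.gradient(f, v)
    f_t = tape.gradient(f, t)
    del tape
    residual = f_t + v * f_x + E * f_v          # left-hand side of equation 1
    return tf.reduce_mean(tf.square(residual))
```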
The workflow to train PINN is represented schematically in figure 3. The neural network is composed of \(N_{l}\) layers and each layer is composed of \(N_{\mathrm{n}}\) neurons or nodes. Each node is characterized by an activation function \(a\). The network takes as input vector a \(\mathbf{y}\) and a time \(t\). The network output is supposed to approximate the solution of the PDE, \(f\left(\mathbf{y},t\right)\). The automatic-differentiation package computes the partial derivatives needed to calculate \(L_{\mathrm{PDE}}\) and \(L_{\partial\Omega_{N}}\) with respect to \(\left(\mathbf{y},t\right)\). Similarly, other terms of the loss function \(\left(L_{t=t_{i}},L_{\partial\Omega_{D}},L_{\partial\Omega_{P1}\times\partial \Omega_{P2}},L_{\mathrm{data}}\right)\) can be evaluated for the respective input values and required derivatives. All the terms are then summed up to evaluate the total loss.If the value of the loss function is smaller than the desire value or maximum iterations are reached, we consider the training to be finished. Otherwise, the weights and biases are re-adapted following the optimizer. The dataset is split into training and test datasets. Usually in this article, the proportion used is 90-10 unless specified otherwise. The training dataset is distributed into \(N_{\mathrm{b}}\) batches. All the trainings in this paper use \(N_{\mathrm{b}}\)=10, unless specified otherwise. The weights and biases of the neural network are updated after each batch and after each epoch. Once the network is trained, it is used to predict the values for unseen \(\left(\mathbf{y},t\right)\) inputs and compared with the test data set to produce the test loss. It is obvious that one needs to store the weights and biases of the trained Neural Network instead of storing the entire distribution function at each
Figure 2: Schematic representation of the domain where the PDEs are defined as well as the subset of points where the neural networks are forced to minimize each term of the loss function. Here, we assume initial time, \(t_{i}=0\).
\((x,v,t)\). The disk space required to store the trained Neural Network represents orders of magnitude smaller than the storage given in the table 1. This is our motivation behind using PINN as a storage solution for exascale simulations.
There exists already many packages mainly written in Python using _TensorFlow_ or _Pytorch_ backend, like SciANN [43] and DeepXDE [44]. In this paper, we made our own program to suit our needs as these packages are currently under development. In particular, we use _TensorFlow 2.8_[45] and _TensorFlow probability 0.15_[46] in Python along with _Adam_ optimizer in _Keras_ for training. _TensorFlow_ is GPU compatible for fast training of AI models. We implement data parallelization with the help of _horovod_[47] in tensorflow.
Defining the loss function for the coupled Vlasov-Poisson system is not straight forward. The Poisson equation is an integro-differential equation for which auto-differentiation techniques do not apply a priory. For this reason, the remainder of the paper is split into two applications. First, we will focus on solving PDE and apply PINN to the Vlasov equation. We will use the values of the distribution function and electric potential for this application. It can be seen as an inference problem where some values of the distribution function are known and the other values are missing.
Figure 3: Schematic representation of the PINN workflow. A deep neural network is built with the activation function \(a\). It takes \((\mathbf{y},t)\) as input and returns as output a value which is supposed to approximate \(f\). The derivatives of the output \(f\) are computed and evaluated numerically with respect to the inputs to find the loss for the desired PDE that we want to solve along with boundary conditions and initial conditions. The total loss is then computed by adding all the losses together. The training of the network is done using the total loss function until the desired accuracy or epochs are achieved.
This has a clear interest towards optimising the storage of values in high-dimensional plasma simulations. It is to be noted that when the percentage of known values of the distribution function tends to zero (only remaining known data of the initial and boundary conditions), the inference simply results in solving the Vlasov equation. The second application will focus on solving the coupled Vlasov-Poisson system for which other techniques different from the auto-differentiation are required to integrate the neural network that approximates the distribution function.
From the inference of the distribution function to the full integration of the Vlasov equation: a first step towards a solution to the storage problem of exa-scale simulations
Performing extensive kinetic or gyrokinetic simulations of fusion plasmas is a challenging task in terms of the complexity of the numerical methods and the storage of the solution, which is intrinsically high-dimensional and therefore requires large disk space. It is the case of the code GYSELA-X, used in simulations of electrostatic plasma turbulence and transport in the core of tokamaks. Codes like GYSELA-X output heavy files (up to 2 TB) at every time step, which makes it impossible to store all the results. Therefore, in general, a compromise must be reached and one is constrained to save only some reduced part of the output every few time steps. This solution can, of course, jeopardize the understanding of the physics underneath and is prone to missing some crucial physical behaviour. It is the case in particular for the physical applications of the Landau damping and the bump-on-tail instability that we deal with in the present work. Both of these applications are based on the wave-particle interaction, which is the primary mechanism at the origin of kinetic instabilities in plasmas. The wave-particle interaction allows energy and momentum exchange between particles and waves, where nonlinear effects lead to the formation of structures that are localized in phase space. Access to these structures is important for understanding the interaction between particles and waves.
However, storing only a minor percentage of data might result in misleading interpretations of the data. For this reason, in this section, the application of neural networks to the wave-particle interaction is presented. Section 3.1 and 3.2 is devoted to fixing hyper-parameters of neural network for training DeepNN and PINN in case of inferring the missing data using very small amount of stored results. In Section 3.3 the results obtained for inference of stored results using least amount of data for DeepNN and PINN are compared. In section 3.4, PINN is used to solve the Vlasov equation utilizing only the stored results of Poisson equation, without using any data of distribution function. Different methods to choose training points (specifically collocation points) for training PINNs are also defined and compared. All the training in this section is done for different subsets of datasets given in table 1. The subsets are in terms of the initial and final time of the chosen dataset. The reason is to capture the physical meaning and trend of the data with the limited resources of training AI models.
### DeepNN applied to infer the stored simulation data
DeepNN has the same architecture as given in figure 3, the only difference comes from the loss function. The loss function \(L\) used to train DeepNN is composed of just two terms, \(L_{t=t_{i}}\) and \(L_{data}\), as defined in equation 14. In this case,
\[L= L_{t=t_{i}}\left(\boldsymbol{\theta}\right)+L_{\mathrm{data}} \left(\boldsymbol{\theta}\right)\] (17) where, \[L_{t=t_{i}}\left(\boldsymbol{\theta}\right) =\sum_{\left(x,v,t_{i}\right)\in\Omega_{0}}\frac{\left|f_{DeepNN} \left(x,v,t_{i};\boldsymbol{\theta}\right)-f\left(x,v,t_{i}\right)\right|^{2} }{N_{0}}\] \[L_{\mathrm{data}}\left(\boldsymbol{\theta}\right) =\sum_{\left(x,v,t\right)\in\Omega_{\mathrm{data}}}\frac{\left|f_{ DeepNN}\left(x,v,t;\boldsymbol{\theta}\right)-f\left(x,v,t\right)\right|^{2}}{N_{ \mathrm{data}}}\]
where the output from DeepNN is denoted as \(f_{DeepNN}\). For the training process, a small portion of the stored distribution function results are used. Specifically, we select a subset of Case-II data spanning from \(t_{i}=0\) to \(t_{f}=20\) to evaluate the performance of the DeepNN for inference. This subset is chosen due to the time required for the DeepNN to learn non-linear patterns within the data. By selecting a smaller dataset, we mitigate the training time and GPU resource requirements. However, despite not using the entire dataset, we can still capture the underlying physics of Landau damping effectively.
We randomly choose 1% (\(\approx\)9.2x10\({}^{5}\) points) of the stored distribution function data to estimate \(L_{\mathrm{data}}\) and 80% (\(\approx\)6.4x10\({}^{4}\) points) of the initial distribution function data to estimate \(L_{t=t_{i}}\) in the training. The optimizer (\(Adam\)) and learning rate (0.001) are fixed for all the trainings. To determine the ideal configuration of the neural network, first we explore a range of options for the number of hidden layers and the number of nodes within each layer, keeping the activation function of each layer fixed to \(swish\). Specifically, we vary the number of hidden layers from 2 to 25 and the number of nodes per layer from 20 to 120. By systematically testing different combinations within these ranges, we aim to identify the optimal configuration that yields the best performance. The final test losses after 10000 epochs of training with different DeepNN architectures are given in figure 4a. The values for the test losses are actually computed by performing rolling average over 500 epochs. This is done in order to smooth the trends of the loss as a function of the number of epochs. The lowest final test loss is observed in DeepNN with 80 nodes in each of the 8 layers. Neural networks deeper than 8 layers also do very well giving the final test loss to the single precision, of the order of 10\({}^{-7}\). As expected, the training time increases exponentially as the NN gets deeper.
To determine the best activation function for the DeepNN with 80 nodes in each of the 8 layers, we try several activation functions. Some of the most common ones are the Rectified Linear Unit (_ReLU_), the _tanh_, the _sigmoid_, the Scaled Exponential Linear Unit (_SELU_), the Exponential Linear Unit (_ELU_) and the _swish_. The training curves of DeepNN using different activation functions for the best NN are given in figure 4b.
It can be observed that the training with _swish_ (given in red solid line) converges faster than the other activation functions. According to the results, _swish_ activation function works best for the non-linear Landau damping data. Therefore, this is the one that will be used in all the trainings in the remainder of the paper.
### PINN applied to infer the stored simulation data
PINN is used here to infer the full distribution function using a small percentage of randomly chosen stored data. The architecture of the PINN follows the structure outlined in figure 3. In VOICE simulations, the boundary conditions are defined by equations 3 and 5, which do not involve Dirichlet boundary conditions. Therefore, the total loss is modified by removing the \(L_{\partial\Omega_{D}}\) term in expression 14,
\[L=L_{\mathrm{PDE}}+L_{t=t_{i}}+L_{\partial\Omega_{N}}+L_{\partial\Omega_{P1} \times\partial\Omega_{P2}}+L_{\mathrm{data}} \tag{18}\]
Figure 4: (a) The final value of rolling average over 500 epochs of the test loss for 10000 epochs are plotted for different DeepNN architectures. The same colour and symbol data points represent the same number of layers with different number of nodes in each layer. The NN with 80 nodes in each of the 8 layers is trained for different activation functions, the test loss during training is plotted on the right (b). It can be observed that the training with _swish_ (shown in red solid line) converges faster than the other activation functions for the same number of epochs.
where,
\[L_{\mathrm{PDE}}\left(\mathbf{\theta}\right)= \frac{1}{N_{\mathrm{PDE}}}\sum_{\left(x,v,t\right)\in\Omega_{ \mathrm{PDE}}}\left|\frac{\partial f_{\mathrm{PINN}}\left(x,v,t;\mathbf{\theta} \right)}{\partial t}+v\frac{\partial f_{\mathrm{PINN}}\left(x,v,t;\mathbf{\theta} \right)}{\partial x}+\right. \tag{19}\] \[\left.E(x,t)\frac{\partial f_{\mathrm{PINN}}\left(x,v,t;\mathbf{ \theta}\right)}{\partial v}\right|^{2}\] \[L_{t=t_{i}}\left(\mathbf{\theta}\right)= \frac{1}{N_{0}}\sum_{\left(x,v,t\right)\in\Omega_{0}}\left|f_{ \mathrm{PINN}}\left(x,v,t;\mathbf{\theta}\right)-f_{0}\left(x,v,t\right)\right|^{2}\] \[L_{\partial\Omega_{D}}\left(\mathbf{\theta}\right)= \frac{1}{N_{\partial\Omega_{\mathrm{D}}}}\sum_{\left(x,v,t\right) \in\partial\Omega_{D}}\left|f_{\mathrm{PINN}}\left(x,v,t;\mathbf{\theta}\right)- f_{\partial\Omega_{D}}\left(x,v,t;\mathbf{\theta}\right)\right|^{2}\] \[L_{\partial\Omega_{N}}\left(\mathbf{\theta}\right)= \frac{1}{N_{\partial\Omega_{\mathrm{N1}}}}\sum_{\left(x,v_{i},t \right)\in\partial\Omega_{N1}}\left|\frac{\partial f_{\mathrm{PINN}}\left(x,v, t;\mathbf{\theta}\right)}{\partial v}\right|^{2}+\] \[\frac{1}{N_{\partial\Omega_{\mathrm{N2}}}}\sum_{\left(x,v_{f},t \right)\in\partial\Omega_{N2}}\left|\frac{\partial f_{\mathrm{PINN}}\left(x,v, t;\mathbf{\theta}\right)}{\partial v}\right|^{2}\] \[L_{\partial\Omega_{P1}\times\partial\Omega_{P2}}\left(\mathbf{\theta}\right)= \frac{1}{N_{\partial\Omega_{\mathrm{P}}}}\] \[\sum_{\left(\left(x_{i},v,t\right),\left(x_{f},v,t\right)\right) \in\partial\Omega_{P1}\times\partial\Omega_{P2}}\left|f_{\mathrm{PINN}}\left(x _{i},v,t;\mathbf{\theta}\right)-f_{\mathrm{PINN}}\left(x_{f},v,t;\mathbf{\theta} \right)\right|^{2}\] \[L_{\mathrm{data}}\left(\mathbf{\theta}\right)= \frac{1}{N_{\mathrm{data}}}\sum_{\left(x,v,t\right)\in\Omega_{ \mathrm{data}}}\left|f_{\mathrm{PINN}}\left(x,v,t;\mathbf{\theta}\right)-f\left(x,v,t\right)\right|^{2}\]
The output from PINN is denoted as \(f_{\mathrm{PINN}}\). The loss for Neumann boundary condition over \(v\) is represented by \(L_{\partial\Omega_{N}}\) on boundaries \(\Omega_{N1}\) and \(\Omega_{N2}\). The loss for periodic boundary conditions over \(x\) is represented by \(L_{\partial\Omega_{P1}\times\partial\Omega_{P2}}\) on boundaries \(\Omega_{P1}\) and \(\Omega_{P2}\). As explained earlier, vanilla PINN [22] can only solve PDEs, so the Vlasov equation is used in the loss for PDE. This means that the stored result of Poisson equation, i.e. the electric potential, is required to evaluate \(L_{PDE}\) for the collocation points. The stored electric potential data from the simulation is 2D, which is \(N_{v}\) times smaller than the 3D distribution function data. So, even though we are using the full solution of Poisson equation, it represents a very small amount in terms of data storage as compared to the full distribution function.
Similar to the hyper-parameter tuning performed for the DeepNN, we also conduct hyper-parameter tuning for PINN. As already seen for DeepNN, training time increases exponentially as the network gets deeper. Using PINN implies that more gradients have to be evaluated which will take even more time and resources. Therefore, a subset of Case-II data from \(t_{i}=15\) to \(t_{f}=20\) is chosen to test the PINN for inference. We choose this subset out of the full dataset because of non-linearities at this time in simulation. Indeed, if PINN can correctly infer data for this time-interval, it can easily infer the data for all the previous times. As before, 1% of distribution function data is chosen to estimate \(L_{data}\). Here, the points on different boundaries have to be chosen to evaluate
the loss terms in equation 18. 80% of stored values of input are randomly chosen for initial and boundary conditions, i.e., \(\Omega_{0}\) (\(\approx\)5.2x10\({}^{4}\)), \(\partial\Omega_{N_{1}}\) (\(\approx\)4.1x10\({}^{4}\)), \(\partial\Omega_{N_{2}}\) (\(\approx\)4.1x10\({}^{4}\)), \(\partial\Omega_{P_{1}}\) (\(\approx\)1.6x10\({}^{5}\)) and \(\partial\Omega_{P_{2}}\) (\(\approx\)1.6x10\({}^{5}\)). 5% of collocation points are randomly chosen out of the mesh-grid of the stored input, i.e., \((x,v,t)\) (\(\approx\)1.1x10\({}^{6}\) points) over \(\Omega_{\rm PDE}\) for \(L_{\rm PDE}\). Also, the electric field (or electric potential) data-points used for evaluating the \(L_{PDE}\) are \(\approx\)5.1x10\({}^{4}\).
When training NN, evaluating the gradients is the most time consuming task. To train PINN, the loss \(L_{PDE}\) is estimated by calculating three partial derivatives, which is not the case for DeepNN. This implies that the time required for PINN training is 3 to 4 times higher than that for DeepNN. Therefore, PINN is trained here up to 4000 epochs keeping all the training parameters the same as in section 3.1. It can be seen from figure 5 that the final test loss is of the order of \(10^{-7}\) after 4000 epochs, which is small enough for an accurate prediction of the results. Comparison of final test losses of different PINN architectures is given in figure 5a. It can be seen from the figure that PINN needs to be deeper than 8 layers to give good prediction. The corresponding training curves are given in figure 5b showing that deeper PINN converges faster to lower test loss. A compromise has to be made between deeper PINN and training time. The best PINN architecture turns out to be 40 nodes in each of the 15 layers. It takes 5 GPU hours for the training to get a final test loss of the order of \(10^{-7}\). One could argue that 8 layers with 80 neurons is the best model instead of 15 layers with 40 neurons. It is true that the former model converges faster for lower loss than the later but the training is unstable as the loss increases abruptly again. This abrupt change is mainly due to shallowness of the network. From now on, we only use NN with 40 neurons in each of 15 layers for our training with the _swish_ activation function.
Collocation points and distribution function data points were chosen randomly to evaluate \(L_{PDE}\) and \(L_{data}\) respectively, in the above trainings. Different ways can be employed for choosing these points to facilitate the training. Three methods to choose the training points are devised for our aim to store as less data from simulation as possible. These methods are summarised in table 2 and schematically represented in figure 6.
The \(t\)-grid, \(x\)-grid and \(v\)-grid given in table 2 refers to the equidistant grid points used by VOICE to solve equation 1 and 2 for the parameters given in table 1. These methods are devised keeping in mind the amount of data we need to store for the inference from simulation. In M-1 method, training points are chosen randomly from full phase space \((x,v,t)\) which needs all the data to be stored. In M-2 method, we randomly choose and fix \(x\)-grid, \(v\)-grid and \(t\)-grid points for training, so only data at these points are required for storage. This drastically reduces the disk space required for inference. In M-3 method, \((x,v)\) points are chosen for randomly chosen timesteps, which requires to store the data for randomly chosen timesteps for each \(x\) and \(v\)-grid points. Of course, once the training is done all the stored data can be deleted and only trained model is stored. M-2 and M-3 methods differ when it comes to using the potential data to estimate \(L_{PDE}\). M-3 method necessitates the storage of all data points
corresponding to the potential in the \(x\)-grid but M-2 method only requires the storage of selectively chosen \(x\)-grid points. For example, if we run a simulation, we will have to wait and store all the data to select random points from all the \((x,v,t)\) phase space while using M-1. But using M-2 requires saving the data at the chosen \(x\)-grid, \(v\)-grid points at the random time steps, which reduces the disk space required for the training data. For using M-3, while running the simulation, data can be stored at random time steps but for all \((x,v)\). However, for M-2 since the grid-points for \(x\) are fixed, the potential
\begin{table}
\begin{tabular}{l l} \hline Method & Description \\ \hline M-1 & Randomly choose given percentage of points from the full phase space of \((x,v,t)\). \\ M-2 & Randomly choose given percentage of points from \(x\)-grid, \(v\)-grid and \(t\)-grid separately and make a 3D meshgrid using the chosen \((x,v,t)\) points for training. \\ M-3 & Randomly choose given percentage of points from \(t\)-grid and for each of these timesteps randomly choose given percentage of points from 2D \((x,v)\) meshgrid. \\ \hline \end{tabular}
\end{table}
Table 2: Different methods to choose collocation points for \(L_{\text{PDE}}\) and distribution function points for \(L_{\text{data}}\) to train PINN. Method-1 is abbreviated as M-1 and similarly the others.
Figure 5: (a) The final value of rolling average over 500 epochs of the test loss after 4000 epochs are plotted for different model architectures of PINN. The same colour data points represent the same number of layers with different numbers of nodes in each layer. The best model has 40 nodes in each of the 15 layers. (b) The test loss curves corresponding to the different model architectures are plotted with same color and symbol in the right with different line style for nodes. The same color and symbol represents the same number of layers, and the same line style represents the same number of nodes.
values are needed for only few \(x\) positions. We compare these methods to obtain the most effective way of choosing training points while using the least amount of stored data points for accurate inference.
### Comparison of DeepNN and PINN to infer the stored simulation data
In this section, we conduct comparative analysis of PINN and DeepNN in terms of their suitability for data storage in simulations. A subset from \(t_{i}=0\) till \(t_{f}=10\) of the case-II data set is chosen. As concluded from the previous subsection, we use a NN with 15 layers and 40 nodes each and \(swish\) activation function for both DeepNN and PINN. The training is done till 10000 epochs for both networks using different percentage of stored distribution function data (or \(f\)-data). M-1 method is used to select collocation
Figure 6: Schematic representation of different methods used to select training points corresponding to table 2. Grid-points are represented in blue for \(x\), \(v\) and \(t\) and the selected training points are represented in red. The selected points are shown for a given time represented by \(t_{n_{0}}\). \(t_{n_{0}+1}\) represents immediate time step after \(t_{n_{0}}\) and \(t_{n_{0}+n_{r}}\) represent next random time step after \(t_{n_{0}}\) for which the training points are selected. M-1 selects points randomly from the mesh grid of \((x,v,t)\). Therefore, each time step will have some chosen training points. M-2 select random points directly from \(x\)-grid, \(v\)-grid and \(t\)-grid and make mesh grid of chosen points for training. M-3 selects random \(t\)-grid points and randomly chooses \((x,v)\) grid points for each chosen time steps.
points to train PINN, 5% of collocation points (totalling 2.31x10\({}^{6}\) points) are randomly chosen for most of the PINN trainings.
The inference results of DeepNN and PINN trainings are given in the table 3 and 4 for different percentage of chosen \(f\)-data. The Mean Square Error (MSE) given in the tables is the mean of square of the difference between true values of distribution function across all \((x,v,t)\) on which the training is done and the prediction by NN on the same points,
\[\mathrm{MSE}=\sum_{(x,v,t)\in\Omega_{data}}\frac{\left|f_{\mathrm{data}}-f_{ \mathrm{pred}}\right|^{2}}{N_{\mathrm{data}}} \tag{20}\]
The final test loss is the final value of test loss after taking a moving average over 500 epochs. In the following, the absolute error is defined as
\[\mathrm{Absolute\ Error}=\left|f_{data}-f_{pred}\right| \tag{21}\]
where, \(f_{data}\) represents the true value of distribution function for a given \((x,v,t)\) and \(f_{pred}\), the predicted values from NN at the same \((x,v,t)\) points. This absolute error is plotted in figure 7b for the training where 0.1% of \(f\)-data is retained and compared with the case where only 0.01% of \(f\)-data is retained (figure 7c). It can be seen that the absolute error increased one order from \(10^{-2}\) to \(10^{-3}\).
The results of PINN training are given in table 4. It can be seen from the table that choosing different percentage of collocation points does not affect much the final test loss for the same fixed percentage of \(f\)-data used. The comparison of test loss during training between PINN and DeepNN is given in figure 8a. It can be clearly seen from the figure that the test loss for PINN training using 0.1% \(f\)-data is the same as for DeepNN training but an order of magnitude smaller for 0.01% \(f\)-data. This means that DeepNN is better suited than PINN for inference when 0.1% \(f\)-data is used as it is less expensive to train. But PINN is clearly more powerful when less data (0.01% \(f\)-data) is used. It also indicates that incorporating the information of the physical system in \(L_{\mathrm{PDE}}\) helps infer the missing results even if we decrease the data used in training (from 0.1%
\begin{table}
\begin{tabular}{l l l l l} \hline S.No. & \% \(f\)-data & Total training points & Final test loss & MSE \\ \hline
1 & 1 & 461764 & 2.3x10\({}^{-07}\) & 7.34x10\({}^{-08}\) \\
2 & 0.1 & 46176 & 3.30x10\({}^{-07}\) & 2.51x10\({}^{-08}\) \\
3 & 0.05 & 23087 & 3.55x10\({}^{-06}\) & 3.31x10\({}^{-06}\) \\
4 & 0.01 & 4617 & 5.25x10\({}^{-06}\) & 1.07x10\({}^{-05}\) \\ \hline \end{tabular}
\end{table}
Table 3: Training results after 10000 epochs of training on 1 GPU using DeepNN model of 15 layers with 40 nodes each is given for different percentage of \(f\)-data used. Including the actual number of points used for training, the final test loss after taking the moving average over 500 epochs and MSE of prediction over the full dataset. Total time taken by each training is around 3.34 GPU hours.
to \(0.01\%\) of \(f\)-data). In order to compare the difference in terms of physics between the training of two NNs, the electrostatic potential, \(\phi\) at \(x=x_{mid}\), is evaluated using the predicted distribution function values of PINN and DeepNN. A numerical technique using Fast Fourier Transform (FFT) is used to solve the Poisson equation as given in 2, details of which can be found in [34]. The potential at \(x=6.28\) (middle \(x\)-position) is plotted against time in figure 7(b). It can be observed from the plots that the PINN prediction of \(\phi\) overlaps with the true results. On the other hand DeepNN prediction falls short in case of \(0.01\%\)\(f\)-data. It is to be noted that the determination of the total number of epochs for training is based on the accuracy of the predicted potential. It is observed that when the test loss reaches the order of \(10^{-7}\), the predicted values of
Figure 7: (a) The true 2D plot of distribution function, \(f\) is plotted as a function of \(x\) and \(v\) at final time \(t_{f}=10\) on the left. (b) The 2D plot of absolute error (\(f_{true}-f_{pred}\)) is plotted against \(x\) and \(v\) at the final time \(t_{f}=10\) for the training done using DeepNN with \(0.1\%\) of \(f\)-data as given in table 3. (c) The absolute error for training done using \(0.01\%\)\(f\)-data. (d) The true and prediction of \(f\) are respectively plotted in blue dashed and black solid lines as a function of \(v\) at final time and middle \(x\)-position, \(x_{mid}\) (corresponding to the white solid line in (c)).
\(\phi\) align with the physical accuracy. Consequently, the number of epochs is adjusted accordingly during the subsequent training process.
These results confirm that PINN is able to capture the physical aspect of the simulated data and is more powerful than a simple DeepNN for inference when less data is used. More PINN training results using different selection methods as given in table 2 are discussed in Appendix A.
\begin{table}
\begin{tabular}{l l l l l} \hline S.No. & \% \(f\)-data & \% Collocation points & Final test loss & MSE \\ \hline
1 & 1 & 5 & 3.01x\(10^{-07}\) & 1.03x\(10^{-07}\) \\
2 & 1 & 10 & 2.71x\(10^{-07}\) & 8.83x\(10^{-07}\) \\
3 & 0.1 & 5 & 2.21x\(10^{-07}\) & 4.31x\(10^{-08}\) \\
4 & 0.05 & 5 & 1.94x\(10^{-07}\) & 3.47x\(10^{-08}\) \\
5 & 0.01 & 5 & 2.20x\(10^{-07}\) & 4.30x\(10^{-08}\) \\
6 & 0.01 & 1 & 2.26x\(10^{-07}\) & 3.65x\(10^{-07}\) \\ \hline \end{tabular}
\end{table}
Table 4: Training results after \(\approx\)10000 epochs of training on 1 GPU using PINN with 40 nodes in each of 15 layers for randomly choosing training points as per M-1. The time taken for each training is around \(\approx\)11 GPU hours. The total number of 1% \(f\)-data points are 461764 and 5% collocation points in total amounts to 2.31x\(10^{6}\). The final test loss is given after taking the moving average over 500 epochs. MSE of the prediction over the full dataset is given in the last column.
Figure 8: (a) The training test loss is plotted as a function of number of epochs using the model with 40 nodes in each of the 15 layers on 1 GPU for PINN and DeepNN. The green dot dashed and red dashed lines are the test loss curve for PINN using 0.1% and 0.01% of \(f\)-data respectively, also, utilizing 5% collocation points (abbreviated as C.P) as per M-1 given in table 2. The blue solid and orange dotted lines are the test loss curve for DeepNN using 0.1% and 0.01% of \(f\)-data. (b) The true electrostatic potential at the middle of \(x\) is plotted in red solid line as a function of time. The prediction from PINN and DeepNN of \(\phi\) for 0.01% of \(f\)-data is plotted in black dashed and dotted lines respectively.
Another PINN with the same architecture is trained for Case-I in table 1 for a subset of data from \(t_{i}\)=0 till \(t_{f}\)=10. The training points are the same as given in the sixth training of table A1, using method M-2. The results after training for 20000 epochs during 22 GPU hours are given in figure 9. The evolution of the different
Figure 9: Results of the training using PINN and the predictions compared to the true results for the Landau damping with \(\epsilon=0.01\). (a) Evolution of the different terms of the loss function during the training. Each term has been averaged over 500 epochs in order to smooth the oscillations due to the gradient descent. The loss for the Vlasov equation, \(L_{\mathrm{PDE}}\) is represented by solid black line. The loss for initial condition, \(L_{t=t_{i}}\) for \(t_{i}\)=0 is given in black dashed line. The loss for all the boundary conditions, which includes, periodic and Neumann boundary conditions is shown in black dashdotted line. The loss for original data, \(L_{\mathrm{data}}\) is shown in red dotted line. (b) For the the mid \(x\)-position, the electric potential at \(t=t_{f}\) predicted by PINN (solid black line) is plotted together with the true value (dashed blue line), showing a good quantitative agreement. For more clarity, absolute error between the predicted and true values of \(\phi\) is also plotted in red dotted line. (c) The absolute error between true and predicted value are plotted in \((x,v)\) at final time, \(t_{f}\)=10. (d) \(f-f_{eq}\) (\(f_{eq}\) representing the equilibrium distribution function) is compared with true and predicted values at \(t_{f}\)=10 and \(x=x_{mid}\) (corresponding to the white solid line plotted in (c)).
terms of the loss function during the training is shown in figure 9a. It is observed that the neural network is indeed learning the distribution function. The prediction of electrostatic potential at the mid \(x\)-position is given in figure 9b, showing good quantitative agreement with true results. The absolute error (also shown in the same figure) is of the order of \(10^{-4}\) which confirms the accurate prediction. As shown in figure 9c with the plotting of the absolute error for distribution function against \((x,v)\), the maximum error is of the order of \(10^{-3}\) at final time, \(t_{f}\)=10. The perturbation in distribution function can be captured by calculating the difference between the evolved distribution function, \(f(x,v,t)\), and the equilibrium distribution function, \(f_{eq}\) (given in equation 6), at any time, namely \(\delta f(x,v,t)=f(x,v,t)-f_{eq}\). In the case of Landau damping, the distribution function exhibits minimal changes due to the small amplitude of the perturbation so the good indication of prediction accuracy is the comparison of distribution function. The predicted and true \(\delta f\) at the mid \(x\)-position and final time is given in figure 9d, it reveals an excellent agreement with an absolute error of \(10^{-4}\).
The training time to infer the distribution function is increased by a factor of 2 from Case-II to Case-I. This is because in order to capture small changes in the distribution function an accuracy of \(10^{-8}\) is required.
### PINN to solve Vlasov equation
Our aim is to store as few data as possible from the simulation. This can be controlled by reducing \(N_{\rm data}\), i.e. the number of points to estimate \(L_{\rm data}\). However, it is to be noted that when \(N_{\rm data}\to 0\), this is equivalent to solving the Vlasov equation, since the only information we have left is the initial and boundary conditions for the distribution function. This means that we effectively solve the Vlasov equation using PINN. However, the electric potential values obtained from the solution of the Poisson equation are still utilized to calculate the corresponding electric field. This electric field is then used to evaluate the loss for the PDE, \(L_{PDE}\). The same PINN architecture and parameters are used here as in the previous section. The total loss now reads
\[L=L_{\rm PDE}+L_{t=t_{i}}+L_{\partial\Omega_{N}}+L_{\partial\Omega_{P1}\times \partial\Omega_{P2}} \tag{22}\]
We use the same methods for selecting collocation points that we described in the previous section. To make comparison easier with previous section results, the same Case-II of non-linear Landau damping from \(t_{i}=0\) to \(t_{f}=10\) is used. We also start with the same percentage of collocations points for PINN training in each of the methods described in table 2. The test loss after training PINN for 20000 epochs for M-1 are given in table 5. The number of epochs to get the same order of accuracy as in the previous section is increased from 10000 to 20000. This is because \(L_{\rm data}\) is not used for training, which makes PINN harder to converge to the solution. This also means that it takes twice the time in the training.
Comparing the results of table 5 and 4 reveals that without using any \(f\)-data, PINN also gives the accurate results. The ability of PINN is exploited to successfully
solve the Vlasov equation. This means we just need to store the data for electrostatic potential (\(\phi\)-data) to infer the stored results, which represents 512 times less memory than the full distribution function data. Using DeepNN to infer \(f\) using only \(\phi\) is impossible, which confirms that PINN is efficient in storing and inferring or predicting the simulation results. All the \(\phi\)-data is used in training when M-1 method is used for choosing collocation points, M-2 and M-3 provides better alternatives. The results using M-2 and M-3 methods are discussed in Appendix B.
M-2 turns out to be the best method to choose collocation points. The lowest amount of data is used when 5% of \(t\)-grid, 50% of \(x\)-grid and 20% of \(v\)-grid points are chosen for training to give the accuracy of \(10^{-7}\) (given in ninth training of table B1).
\begin{table}
\begin{tabular}{c c c} \hline S.No. & \% Collocation points & Final test loss \\ \hline
1 & 5 & 1.80x\(10^{-07}\) \\
2 & 1 & 7.30x\(10^{-08}\) \\
3 & 0.5 & 1.56x\(10^{-07}\) \\ \hline \end{tabular}
\end{table}
Table 5: Training results after 20000 epochs using PINN model of 15 layers with 40 nodes each is given for randomly chosen 5% (totalling 2.31x\(10^{6}\) points) and 1% collocation points using M-1. The time taken for training is around 24 GPU hours. The final test loss is given after taking the moving average over 500 epochs. The MSE is given for all the phase space in which the training is done.
Figure 10: (a) The prediction of distribution function is plotted in black solid line at final time, \(t_{f}=20\) at mid \(x\)-position as a function of \(v\) after training of 80000 epochs. The true value is plotted in blue dashed line and absolute error is plotted in red dotted line with y-axis on the right. (b) The true values of electrostatic potential at the mid of \(x\)-position are plotted in blue dashed line against time (y-axis is given in logarithmic scale). The \(\phi\) evaluated from the prediction of distribution function by PINN is plotted in black solid line. The absolute error is plotted in red dotted line. The time taken for the training is 72 GPU hours.
The total electric potential data points used from stored results are only 2560 (2.5% of \(\phi\)), which amounts to 20 KiloBytes (KB) in disk space. The disk space required to store PINN is only 678 KB. In total, the disk space to store the simulation results from \(t_{i}=0\) to \(t_{f}=10\) of Case-II, is reduced from 400.8 Megabytes (MB) to 698 KB. The reduction in storage space required to store simulation data amounts to a significant factor of 588. This reduction signifies substantial decrease in the overall storage demands when employing PINN, thereby enabling more efficient data management and storage for the simulations.
From now on, the M-2 method is used for all the upcoming trainings. During the PINN training, we ended up using the \(t\)-grid points for every 20 timesteps, which makes the effective \(dt\) = 0.25. Using the same PINN architecture, we solve the Vlasov equation for Case-II, from \(t_{i}\)=10 to \(t_{f}\)=20 for the sake of completeness. M-2 method is used to choose the collocation points: 5% of uniformly chosen \(t\)-grid points, 50% \(x\)-grid points and 50% \(v\)-grid points. The results after 80000 epochs of training are given in figure 10. The predicted and true results exhibit excellent agreement. The increase in epochs from 20000 to 80000 is mainly because of the presence of highly non-linear data to be fitted, which takes more training time. It is important to realise that we do not directly train PINN to solve Vlasov equation for the full time till \(t_{f}=45\). This is because it is better to split the training for landau damping into a stretch of 10 unit times (from \(t=0\) to \(t=10\) then till \(t=20\)) per training since the non-linear phase takes a lot of time to converge for accurate prediction. One can predict correct results this way till \(t_{f}=45\). This way we get accurate and faster results while keeping the data storage and resources consumed to minimum.
We also solve the Vlasov equation for the bump-on-tail case as given in Case-III of table 1. The subset of data used for training is from \(t_{i}=55\) till \(t_{f}=65\) because most activity happens in the simulation when it goes from linear to non-linear phase. The same M-2 method is used to choose collocation points for training, 50% of \(x\)-grid points and 50% of \(v\)-grid points for uniformly chosen 25% of \(t\)-grid points. The \(t\)-grid points are chosen at every four time steps, which is equivalent to the effective \(dt=0.2\). After 70000 epochs of training, the final test loss after taking the moving average over 500 epochs is 2.11x10\({}^{-7}\). The potential evaluated using the predicted distribution function values is in good agreement compared to the true results as seen from figure 11b. It can be observed from figures 11a and 11c that PINN predictions are accurate to the order of \(10^{-3}\).
Solving the Vlasov-Poisson system: implementation of an integrable PINN method to solve integro-differential equations.
As explained before, since the Vlasov-Poisson system includes an integro-differential equation (IDE) one can not use PINN in its existing form to solve the whole system. There are some variants of PINN used to bypass this issue. Pang et al. showed in their paper [27] that one can solve IDEs combining PINN to solve the differential
equation and a numerical method to solve the integral equation. They formulated an approximate function which depends on the PINN output. Such function includes trainable parameters and at the same time satisfies initial and boundary conditions. Using the approximate function, they solve the integral equation using the Finite Element (FE) method for the collocation points. The total loss is calculated by inserting the values of the solved integral equation into the loss for the PDE. PINN is afterwards is trained using this loss. This PINN variant is called fractional-PINN (f-PINN) as it is used to solve fractional differential equations. They used a particular setup of initial and boundary conditions for which an approximate solution can be easily formulated.
Figure 11: (a) The prediction of distribution function is plotted in black solid line at final time, \(t_{f}=65\) for mid \(x\) position as a function of \(v\) after 70000 epoch training. The true value is plotted in blue dashed line and the absolute error is plotted in red dotted line with y-axis in the right side. (b) The true electrostatic potential at the mid of \(x\) position is plotted in blue dashed line as a function of time (y-axis is given in logarithmic scale). The prediction from PINN is plotted in black solid line and the absolute error is plotted in red dotted line. (c) The true distribution function is plotted over \(x\) and \(v\) on the left at final time \(t_{f}\)=65. The absolute error is plotted on the right for the same time against \(x\) and \(v\).
Although the numerical technique they used is finite element method. They pointed out that f-PINN should work for any IDE with any initial and boundary conditions using any numerical method. In this section, f-PINN is used to solve the Vlasov-Poisson system and the limitation of the method is discussed. In addition, another method to solve IDEs using PINN, namely I-PINN (for Integrable-PINN), is formulated, which does not use any numerical method to solve the integral.
### f-PINN to solve Vlasov-Poisson equation
In contrast to the approach suggested by Pang et al., we do not rely on defining an approximate function based on initial and boundary conditions. Instead, we utilize loss terms to enforce the satisfaction of the initial and boundary conditions. Moreover, defining an approximate function becomes more challenging when working with periodic boundary conditions. Therefore, Fast Fourier Transform (FFT) is used instead as done in VOICE to solve the Poisson equation at each training instance. It is an additional part of training as compared to previous section and also the most resource consuming (even more than gradients) to evaluate during training. To solve the Poisson equation at the collocation points, equidistant points in \(x\), \(v\) and \(t\) are defined. The grids resulted from uniform discretization are the same as those used in the VOICE simulation. The output from the network, \(f_{\text{fPINN}}\left(x,v,t;\boldsymbol{\theta}\right)\) is integrated over \(v\) using the trapezoidal rule to obtain the gradient of the electric field.
\[\frac{\partial E\left(x,t;\boldsymbol{\theta}\right)}{\partial x}=1-\int_{v_{i }}^{v_{f}}f_{\text{fPINN}}\left(x,v,t;\boldsymbol{\theta}\right)dv\quad \forall\left(x,t\right)\in\Omega_{\text{PDE}} \tag{23}\]
Using FFT one can write,
\[\texttt{fft}\left(E\left(x,t;\boldsymbol{\theta}\right)\right)=\begin{cases} \frac{\texttt{fft}\left(\int_{v_{i}}^{v_{f}}f_{\text{PINN}}\left(x,v,t; \boldsymbol{\theta}\right)dv\right)}{\texttt{ik}},&\text{if }k\neq 0\\ 0,&\text{if }k=0\end{cases} \tag{24}\]
and taking Inverse Fast Fourier Transform (ifft) we obtain the electric field. Here, i is the imaginary unit and k is the wave vector. Note that now the electric field depends on trainable parameters (\(\boldsymbol{\theta}\)) of PINN. The loss for the PDE is changed from equation 22 as follows
\[\begin{split}& L_{\text{PDE}}\left(\boldsymbol{\theta}\right)= \\ &\frac{1}{N_{\text{PDE}}}\sum_{\left(x,v,t\right)\in\Omega_{ \text{PDE}}}\left|\frac{\partial f_{\text{PINN}}\left(x,v,t;\boldsymbol{ \theta}\right)}{\partial t}+v\frac{\partial f_{\text{PINN}}\left(x,v,t; \boldsymbol{\theta}\right)}{\partial x}+E\left(x,t;\boldsymbol{\theta}\right) \frac{\partial f_{\text{PINN}}\left(x,v,t;\boldsymbol{\theta}\right)}{ \partial v}\right|^{2}\end{split} \tag{25}\]
_Tensorflow-probability_ library is used to evaluate fft, ifft and the integral of \(f\) using the trapezoidal rule. During the training, in order to compute the gradient of the PDE loss with respect to \(\boldsymbol{\theta}\), one has to compute the gradient of the electric field. This is different from what was done in the previous section, where the electric field is constant
with respect to \(\mathbf{\theta}\). This enables PINN to learn the relation between \(f(x,v,t)\) and \(E(x,t)\), making it f-PINN. The same subset from \(t_{i}=0\) till \(t_{f}=10\) of non-linear Landau damping simulation given in case-II is used for training. We found in previous section that method M-2 to select collocation points with (5%,50%,50%) of \((x,v,t)\) is enough to get accurate results. We are using the same number of points to train f-PINN. The training is done till f-PINN has an accuracy of the order of \(10^{-7}\), which occurs after 30000 epochs. The time taken for training is 220 GPU hours due to the increased number of operations which includes solving the Poisson equation. The point to note here is that we are not using any pre-trained network or weighted loss or special optimizer which would help in speed up the training. Since our goal here is to test the feasibility of using PINN and its variants for plasma simulations, increasing the efficiency of training is not in the scope of the current work. The predictions from f-PINN training can be seen in figure 12. The predicted and true distribution functions at final time and mid \(x\)-position are plotted as a function of \(v\) in figure 12. It can be observed from the error plot in the figure that the value of absolute error is of the order of \(10^{-3}\). The electric potential is evaluated from predicted distribution function, and the true and predicted values are plotted in figure 12, which completely overlap each other at mid \(x\)-position and the error is also of the order of \(10^{-3}\).
After the training, the entire solution of the Vlasov-Poisson system under the form of weights and biases of the neural network is obtained without using any data from simulation except the initial and boundary conditions. However, using f-PINN implies that there is a grid dependence to solve the Poisson equation.
Figure 12: (a) The prediction of distribution function is plotted in black solid line at the final time, \(t_{f}=10\), for the mid \(x\)-position as a function of \(v\) after training of 30000 epochs using f-PINN. The true value is plotted in blue dashed line and the absolute error is plotted in red dotted line with the y-axis on the right. (b) The electrostatic potential at the mid \(x\)-position is plotted against time (y-axis is given in logarithmic scale). The true results from VOICE simulation are plotted in blue dashed line and the prediction from f-PINN is plotted in black solid line. The absolute error is plotted in red dotted line.
### Using I-PINN to solve Vlasov-Poisson equation
In this section we aim at solving integro-differential equations by keeping the features of PINN to have mesh-free solution without using any numerical technique. There are some very recent attempts to solve integro-differential equations. For example, Yuan et al. [28] proposed an A-PINN (for Auxiliary-PINN) and apply to the Volterra equation. In practive, A-PINN outputs the solution of integral equation along with the solution of the PDE. Therefore, the total loss includes the loss for the PDE, for the initial and boundary conditions and for the integral equation. It means that A-PINN learns the solution of integral equation through the loss function. Moreover, works where integro-differential equations are solved in the framework of the Landau damping have been reported recently, making use of the so-called gPINN [48] and gPINNp [49], where the kinetic data for the initial time is used for PINN training and the total loss is defined using the moments of the distribution function, essentially solving fluid equations. The method we propose in the present paper is inspired by very recent works [29, 30] and construct our own I-PINN (Integrable-PINN) capable for solving IDEs based on the fundamental theorem of Calculus. More specifically, let us assume that we need to compute the integral \(\int_{a}^{b}f\left(v\right)dv\), with \(f\) approximated by a neural network \(f_{\mathrm{NN}}\). Instead of numerically integrating \(f_{\mathrm{NN}}\), we define the function
\[F\left(v\right)=\int_{a}^{v}f\left(v^{\prime}\right)dv \tag{26}\]
such that \(F^{\prime}\left(v\right)=f\left(v\right)\). Therefore, we can approximate \(F\) by a neural network and use the automatic differentiation to train it so that its derivative is the integrand. In other words, I-PINN does not learn the solution of the integral equation as A-PINN, instead it is satisfied through construction. No data from simulation is used to train I-PINN and the structure of the total loss remains the same as given in equation 22.
To apply I-PINN to the Vlasov-Poisson system given in equations 1 and 2, we need to define a function such that its gradient gives the distribution function. Two integrals have to be done over \(x\) and \(v\) to solve the Poisson equation. Therefore, we introduce a function \(\hat{E}\) that we call grad network, since it gives \(f\) after taking partial derivatives with respect to \(x\) and \(v\), i.e.
\[\partial_{x}\partial_{v}\hat{E}\left(x,v,t;\boldsymbol{\theta} \right)=f\left(x,v,t;\boldsymbol{\theta}\right)\] \[\text{with, }\hat{E}\left(x,v,t;\boldsymbol{\theta}\right)=\int_{v_{i} }^{v}\int_{x_{i}}^{x}f\left(x^{\prime},v^{\prime},t;\boldsymbol{\theta} \right)dx^{\prime}dv^{\prime}\]
assuming that \(\hat{E}\) vanishes at the boundary. This condition is also taken into account in the total loss by adding an extra term for \(\hat{E}\), called \(L_{\hat{E}}\). The advantage of changing the variable is that there is no more integral in Poisson equation. Indeed, integrating the
Poisson equation w.r.t \(x\) leads to
\[E\left(x,t;\boldsymbol{\theta}\right)=E\left(x_{i},t;\boldsymbol{ \theta}\right)+\left(x-x_{i}\right)+\\ \left(\hat{E}\left(x,v_{i},t;\boldsymbol{\theta}\right)-\hat{E} \left(x_{i},v_{i},t;\boldsymbol{\theta}\right)\right)-\left(\hat{E}\left(x,v_{ f},t;\boldsymbol{\theta}\right)-\hat{E}\left(x_{i},v_{f},t;\boldsymbol{\theta} \right)\right) \tag{28}\]
The total loss can then be defined with an extra term for the boundaries of \(\hat{E}\),
\[L=L_{\mathrm{PDE}}+L_{t=t_{i}}+L_{\partial\Omega_{N}}+L_{\partial\Omega_{P1} \times\partial\Omega_{P2}}+L_{\hat{E}} \tag{29}\]
where,
\[L_{\mathrm{PDE}}\left(\boldsymbol{\theta}\right)=\frac{1}{N_{ \mathrm{PDE}}}\sum_{\left(x,v,t\right)\in\Omega_{\mathrm{PDE}}}\left|\frac{ \partial}{\partial x}\frac{\partial}{\partial v}\frac{\partial}{\partial t} \hat{E}\left(x,v,t;\boldsymbol{\theta}\right)+\right.\] \[\left.v\frac{\partial}{\partial v}\frac{\partial^{2}}{\partial x ^{2}}\hat{E}\left(x,v,t;\boldsymbol{\theta}\right)+\right.\] \[\left.E(x,t)\frac{\partial}{\partial x}\frac{\partial^{2}}{ \partial v^{2}}\hat{E}\left(x,v,t;\boldsymbol{\theta}\right)\right|^{2}\] \[L_{t=t_{i}}\left(\boldsymbol{\theta}\right)=\frac{1}{N_{0}}\sum _{\left(x,v,t\right)\in\Omega_{0}}\left|\frac{\partial}{\partial x}\frac{ \partial}{\partial v}\hat{E}\left(x,v,t;\boldsymbol{\theta}\right)-f_{0}\left( x,v,t\right)\right|^{2}\] \[L_{\partial\Omega_{D}}\left(\boldsymbol{\theta}\right)=\frac{1}{N _{\partial\Omega_{\mathrm{D}}}}\sum_{\left(x,v,t\right)\in\partial\Omega_{D} }\left|\frac{\partial}{\partial x}\frac{\partial^{2}}{\partial v^{2}}\hat{E} \left(x,v,t;\boldsymbol{\theta}\right)-f_{\partial\Omega_{D}}\left(x,v,t; \boldsymbol{\theta}\right)\right|^{2}\] \[L_{\partial\Omega_{N}}\left(\boldsymbol{\theta}\right)=\frac{1}{N _{\partial\Omega_{\mathrm{N1}}}}\sum_{\left(x,v_{i},t\right)\in\partial\Omega_ {N1}}\left|\frac{\partial}{\partial x}\frac{\partial^{2}}{\partial v^{2}}\hat{ E}\left(x,v,t;\boldsymbol{\theta}\right)\right|^{2}+\] \[\left.\frac{1}{N_{\partial\Omega_{\mathrm{N2}}}}\sum_{\left(x,v_ {f},t\right)\in\partial\Omega_{N2}}\left|\frac{\partial}{\partial x}\frac{ \partial^{2}}{\partial v^{2}}\hat{E}\left(x,v,t;\boldsymbol{\theta}\right) \right|^{2}\] \[L_{\partial\Omega_{P1}\times\partial\Omega_{P2}}\left(\boldsymbol {\theta}\right)=\frac{1}{N_{\partial\Omega_{\mathrm{P}}}}\sum_{\left(\left(x_{ i},v,t\right),\left(x_{f},v,t\right)\right)\in\partial\Omega_{P1} \times\partial\Omega_{P2}}\left|\frac{\partial}{\partial x}\frac{\partial}{ \partial v}\hat{E}\left(x_{i},v,t;\boldsymbol{\theta}\right)-\frac{\partial}{ \partial x}\frac{\partial}{\partial v}\hat{E}\left(x_{f},v,t;\boldsymbol{ \theta}\right)\right|^{2}\] \[L_{\hat{E}}\left(\boldsymbol{\theta}\right)=\frac{1}{N_{t}}\sum _{t\in N_{t}}\left|\hat{E}(x_{i},v_{i},t)\right|^{2}+\frac{1}{N_{\Omega_{P1}}} \sum_{\left(x_{i},v,t\right)\in\partial\Omega_{P1}}\left|\frac{\partial\hat{ E}(x_{i},v,t)}{\partial v}\right|^{2}+\] \[\frac{1}{N_{\Omega_{P1}}}\sum_{i=0}^{N_{\Omega_{N1}}}\left|\frac{ \partial\hat{E}(x,v_{i},t)}{\partial x}\right|^{2}\]
Here \(N_{t}\) is the total time points. I-PINN is constructed with 40 neurons in each of 15 layers, having as input vector \(\left(x,v,t\right)\), but with output \(\hat{E}(x,v,t)\). Since there is no integration in any equation, we solved the integral equation with the help of grad network created. In that sense, we call it automatic-integration, as the integral is inherently computed by construction.
I-PINN is now used to solve for the case-II of Landau damping as done before. Since, three fold gradients need to be computed at each epoch, which consumes resources and time, we decide to use a subset of small time interval, \(t_{i}\)=0 till \(t_{f}\)=5 for training I-PINN. The collocation points for training are chosen using M-2 method. The percentage of points in \(t\),\(x\) and \(v\) are 20%, 50% and 50%, respectively, totalling 1.19x10\({}^{6}\) points. The training was done on 4 GPUs for 116 hours. It is important to highlight here again is that the training is done without optimization of training time and resources consumed as it is a test bed case. The result after training is given in figure 13. The predicted distribution function at \(t_{f}\)=5 at mid \(x\) position overlaps with the true results within an error of the order \(10^{-3}\) as given in figure 12(a). The electric potential evaluated from the predicted \(f\) is given in figure 12(b) at mid \(x\)-position, which also agrees with the true results within an error of the order of \(10^{-3}\).
Figure 13: (a) The prediction of the distribution function is plotted as a black solid line at the final time, \(t_{f}=5\), at the mid-\(x\) position against \(v\) after training for 100000 epochs using the I-PINN model. The true value is plotted as a red dashed line and the absolute error as a red dotted line, with values on the right-hand y-axis. (b) The electrostatic potential (\(\phi\)) at the mid-\(x\) position is plotted against time (the y-axis is given in logarithmic scale). The true results from the VOICE simulation are plotted as a blue dashed line and the prediction from I-PINN as a black solid line. The absolute error is plotted as a red dotted line.

## 5 Conclusion

In this article, we explored the application of Physics-Informed Neural Networks (PINNs) to the Vlasov-Poisson system for two purposes: first, as a storage method for kinetic data, since storing the weights and biases of a neural network requires much less disk space than storing the raw data; and second, as a mesh-free technique for integrating differential and integro-differential equations (IDEs). Vanilla PINN relies on automatic differentiation, which is not suitable for integro-differential equations. For this purpose we first used f-PINN (PINN to solve the Vlasov equation and an FFT to solve the Poisson equation). Doing so required defining a grid on which to integrate the distribution function, which means that the advantage of PINN, namely mesh-free results obtained without numerical techniques, is lost. This motivated the development of I-PINN, a method to solve coupled integro-differential equations. I-PINN uses a grad network, in which the integration is satisfied by construction, based on the fundamental theorem of calculus.
This work also serves as a proof of concept for I-PINN. In the future, we will focus on optimizing I-PINN to solve different integro-differential equations with the help of techniques existing in the literature, for example using a weighted loss function, using different optimizers, or adding causality information. We will also continue our work on using PINN as a storage solution and expand it to higher dimensions and multi-species plasma simulations.
This work has received financial support from the AIM4EP project (ANR-21-CE30-0018), funded by the French National Research Agency (ANR) and was granted access to the HPC resources of IDRIS under the allocations 2021-AD011011719R1 and 2022-AD011011719R2 made by GENCI. The authors are deeply thankful for the financial assistance provided by A*MIDEX for ISFIN-SRIM research mobility for PhD students. Their support enabled us to carry out the research work with our collaborators at the Jülich Supercomputing Centre in Forschungszentrum Jülich. The authors gratefully acknowledge computational resources provided by the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) on the GCS Supercomputer JUWELS [50] at Jülich Supercomputing Centre (JSC).
|
2301.11462 | How poor is the stimulus? Evaluating hierarchical generalization in
neural networks trained on child-directed speech | When acquiring syntax, children consistently choose hierarchical rules over
competing non-hierarchical possibilities. Is this preference due to a learning
bias for hierarchical structure, or due to more general biases that interact
with hierarchical cues in children's linguistic input? We explore these
possibilities by training LSTMs and Transformers - two types of neural networks
without a hierarchical bias - on data similar in quantity and content to
children's linguistic input: text from the CHILDES corpus. We then evaluate
what these models have learned about English yes/no questions, a phenomenon for
which hierarchical structure is crucial. We find that, though they perform well
at capturing the surface statistics of child-directed speech (as measured by
perplexity), both model types generalize in a way more consistent with an
incorrect linear rule than the correct hierarchical rule. These results suggest
that human-like generalization from text alone requires stronger biases than
the general sequence-processing biases of standard neural network
architectures. | Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy | 2023-01-26T23:24:17Z | http://arxiv.org/abs/2301.11462v2 | How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech
###### Abstract
When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities. Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers--two types of neural networks without a hierarchical bias--on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.
## 1 Introduction
Syntax is driven by hierarchical structure, yet we typically encounter sentences as linear sequences of words. How do children come to recognize the hierarchical nature of the languages they acquire? Some argue that humans must have a hierarchical inductive bias--an innate predisposition for hierarchical structure (Chomsky, 1965, 1980). An alternative view (e.g., Lewis and Elman, 2001) is that no such bias is necessary: there may be clear evidence for hierarchical structure in children's input, so that children would choose hierarchical rules even without a hierarchical bias.
At first blush, recent work in natural language processing (NLP) may seem to indicate that no hierarchical bias is necessary. Neural networks trained on naturally-occurring text perform impressively on syntactic evaluations even though they have no explicit syntactic structure built into them (e.g., Gulordava et al., 2018; Wilcox et al., 2018; Warstadt et al., 2020). However, these results do not provide strong evidence about the learning biases required to learn language from the data available to humans because these models receive very different training data than humans do (Warstadt and Bowman, 2022). First, NLP models are typically trained on far more data than children receive, so models have more opportunities to encounter rare syntactic structures (Linzen, 2020). Second, most training sets in NLP are built from Internet text (e.g., Wikipedia), which differs qualitatively from the utterances that children typically hear; e.g., sentences in Wikipedia are on average 25 words long (Yasseri et al., 2012), compared to 5 words for sentences in the North American English subset of the CHILDES corpus of child-directed speech (MacWhinney, 2000).
In this work, to evaluate if neural networks without a hierarchical bias generalize like children do, we train models on text1 comparable to the sentences in children's linguistic input: English data from CHILDES. We then analyze what they have learned about the relationship between declarative sentences, such as (1a), and their corresponding yes/no questions, such as (1b):
Footnote 1: Section 6.5 discusses other input types (e.g., visual input).
1. a. Those are your checkers. b. Are those your checkers?
Crucially, nearly all naturally-occurring yes/no questions are consistent with two rules: one based
on hierarchical structure (2), and one based on linear order (3):2,3

Footnote 2: In past work these rules have been framed as transformations named Move-First and Move-Main (McCoy et al., 2020). We instead follow Berwick et al. (2011) and frame the child’s knowledge as a relationship between sentences.
2. HierarchicalQ: The auxiliary at the start of a yes/no question corresponds to the **main** auxiliary of the corresponding declarative.
3. LinearQ: The auxiliary at the start of a yes/no question corresponds to the **first** auxiliary of the corresponding declarative.
Despite the scarcity of evidence disambiguating these rules, children reliably favor HierarchicalQ (Crain and Nakayama, 1987), albeit with occasional errors consistent with LinearQ (Ambridge et al., 2008). Yes/no questions thus are a prime candidate for an aspect of English syntax for which human-like generalization requires a hierarchical bias. We evaluate yes/no question performance in LSTMs and Transformers, two neural-network architectures that have no inherent hierarchical inductive bias (McCoy et al., 2020; Petty and Frank, 2021). These architectures employ different computational mechanisms, so consistent results across both would indicate that our results are not due to idiosyncrasies of one particular architecture.
To investigate if models generalize more consistently with the hierarchical or linear rule, we evaluate them on cases where the rules make different predictions, such as (4): under HierarchicalQ, the question that corresponds to (4a) is (4b), whereas under LinearQ it is (4c).
(4) a. The boy who **has** talked **can** read. b. **Can** the boy who **has** talked read? c. **Has** the boy who talked **can** read?

As discussed above, results from models trained on massive Internet corpora do not show how models would generalize when faced with the type of data that children receive.
## 3 Overview of Experimental Setup
We evaluated models on yes/no questions in two ways. First, we used relative acceptability judgments (Experiment 1): We trained neural networks on the task of language modeling (predicting the next word at every point in the sentence) and evaluated whether they assigned a higher probability to sentences consistent with LinearQ or HierarchicalQ. Our second approach was based on text generation (Experiment 2): We trained networks to take in a declarative sentence and output the corresponding question, and tested whether they generalized in a way more consistent with LinearQ or HierarchicalQ. Under both framings, we trained models on data from CHILDES and evaluated them on targeted datasets constructed to differentiate LinearQ and HierarchicalQ.
## 4 Experiment 1: Relative Acceptability
### Dataset
To train models on data as similar as possible to the sentences children receive, we extracted data from CHILDES MacWhinney (2000). We used the North American English portion. We wished to replicate children's _input_, so we excluded the children's own utterances, leaving a 9.6-million-word corpus. We allocated 90% of the data to training, 5% to validation, and 5% to testing. We replaced words that appeared two or fewer times in the training set with \(<\)unk\(>\), giving a replacement rate of 0.3%. See Appendix A for more details.
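As an illustration of this preprocessing, the sketch below (our code, not the authors'; the whitespace tokenization is an assumption) performs the 90/5/5 split and the \(<\)unk\(>\) replacement rule described above:

```python
from collections import Counter

def split_and_unk(sentences, min_count=3, unk="<unk>"):
    """90/5/5 split, then replace words that appear fewer than `min_count`
    times in the training portion (i.e., two or fewer times, as in the
    text) with an <unk> token in all three portions."""
    n = len(sentences)
    train = sentences[: int(0.9 * n)]
    valid = sentences[int(0.9 * n): int(0.95 * n)]
    test = sentences[int(0.95 * n):]

    counts = Counter(w for sent in train for w in sent.split())
    vocab = {w for w, c in counts.items() if c >= min_count}

    def unkify(sent):
        return " ".join(w if w in vocab else unk for w in sent.split())

    return [[unkify(s) for s in part] for part in (train, valid, test)]
```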
### Task: Next-Word Prediction
We trained models on next-word prediction, also known as language modeling. We chose this task for two reasons. First, it is clear empirically that next-word prediction can teach neural networks a substantial amount about syntax (e.g., Hu et al., 2020). Second, it is plausible that humans perform some version of next-word prediction during sentence processing Altmann and Kamide (1999); Hale (2001); Levy (2008); Kutas et al. (2011) and that such prediction may play a role in acquisition Elman (1991). Thus, while next-word prediction is certainly not the only goal of human language learners, we view this task as a reasonable first step in emulating human language acquisition.
### Architectures
We used two neural network architectures: LSTMs Hochreiter and Schmidhuber (1997) and Transformers Vaswani et al. (2017). We chose these models for two reasons. First, they have been the most successful architectures in NLP. Thus, we have reason to believe that, of the types of low-bias models invented, these two are the ones most likely to discover linguistic regularities in our CHILDES training data. Second, the two architectures process sequences very differently (via recurrence vs. via attention). Thus, if both generalize similarly, we would have evidence that what was learned is strongly evidenced in the data, rather than due to a quirk of one particular architecture.
For our LSTMs, we used 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10. For our Transformers, the corresponding values were 4, 800, 10, 0.2, and 5, and we used 4 attention heads. We chose these values based on a hyperparameter search described in Appendix B. All following results are averaged across 10 runs with different random seeds.
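For concreteness, a minimal LSTM language model matching the quoted hyperparameters might look as follows (a sketch, not the authors' code; the vocabulary size, optimizer, and any weight-tying choices are unspecified here and left out):

```python
import torch

class LSTMLanguageModel(torch.nn.Module):
    """2-layer LSTM LM with embedding/hidden size 800 and dropout 0.4,
    matching the hyperparameters reported in the text."""
    def __init__(self, vocab_size):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, 800)
        self.lstm = torch.nn.LSTM(800, 800, num_layers=2,
                                  dropout=0.4, batch_first=True)
        self.drop = torch.nn.Dropout(0.4)
        self.out = torch.nn.Linear(800, vocab_size)

    def forward(self, tokens, hidden=None):
        # tokens: (batch, seq_len) -> logits: (batch, seq_len, vocab_size)
        h, hidden = self.lstm(self.drop(self.embed(tokens)), hidden)
        return self.out(self.drop(h)), hidden
```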
### Results: Language Model Quality
Before testing models on questions, we used perplexity to evaluate how well they captured the basic structure of their training domain. As a baseline, we used a 5-gram model with Kneser-Ney smoothing Kneser and Ney (1995) trained with KenLM Heafield (2011). The test set perplexity for the 5-gram baseline was 24.37, while the average test set perplexity for the LSTMs and Transformers was 20.05 and 19.69, respectively. For perplexity, lower is better. Thus, both neural network types outperformed the strong baseline of a smoothed 5-gram model, showing that they performed well at capturing the basic statistics of their training domain.5
Footnote 5: For an intuitive illustration of our model quality, see the sample text generated by them in Appendix H.
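Perplexity here is the exponential of the mean per-token negative log-likelihood on the test set; a sketch of the computation (our code, assuming a model with the interface from the previous sketch):

```python
import math
import torch

@torch.no_grad()
def perplexity(model, batches):
    """exp of the average per-token negative log-likelihood; lower is better."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="sum")
    total_nll, total_tokens = 0.0, 0
    for tokens in batches:                 # tokens: (batch, seq_len) of ids
        logits, _ = model(tokens[:, :-1])  # predict each next word
        targets = tokens[:, 1:]
        total_nll += loss_fn(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1)).item()
        total_tokens += targets.numel()
    return math.exp(total_nll / total_tokens)
```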
### General Syntactic Evaluation
As an additional way to check the validity of our setup, we evaluated our models on the Zorro dataset Huebner et al. (2021), which is based on BLiMP Warstadt et al. (2020). Zorro contains 24 evaluations, each of which targets one syntactic phenomenon (e.g., subject-verb agreement) and involves sentence pairs for which one sentence is grammatical, and the other is minimally different
but ungrammatical (e.g., by violating subject-verb agreement). A model is said to get a sentence pair correct if it assigns a higher probability to the grammatical sentence than the ungrammatical one. Huebner et al. (2021) showed that Transformers trained on CHILDES data can perform well on many of the Zorro categories, so if our setup is sound, our own models should also perform well on Zorro.
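This scoring criterion can be sketched in a few lines (our code; `pairs` is assumed to hold tokenized grammatical/ungrammatical sentence pairs as id tensors):

```python
import torch

@torch.no_grad()
def log_prob(model, token_ids):
    """Total log-probability a language model assigns to one sentence."""
    logits, _ = model(token_ids[:, :-1])
    logp = torch.log_softmax(logits, dim=-1)
    return logp.gather(-1, token_ids[:, 1:].unsqueeze(-1)).sum().item()

def pairwise_accuracy(model, pairs):
    """Fraction of (grammatical, ungrammatical) pairs where the model
    assigns higher probability to the grammatical sentence."""
    correct = sum(log_prob(model, g) > log_prob(model, u) for g, u in pairs)
    return correct / len(pairs)
```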
See Appendix D for full results. For each syntactic phenomenon, most model re-runs scored above 0.9, though at least one scored near the chance level of 0.5. For each re-run of each architecture there is at least one phenomenon for which the model scores over 0.97, and many models score 1.00 on some phenomena. Thus, all models score well on at least some syntactic evaluations, attaining results comparable to those of Huebner et al. (2021) and providing additional support for the validity of our setup. We now test whether these models have also successfully learned the specific phenomenon that we focus on, yes/no questions--a phenomenon not included in the Zorro dataset.
### Yes/No Questions
**Evaluation Dataset: Forced-Choice Acceptability Judgments** As a first way to test whether our models have learned HierarchicalQ, we evaluate whether they assign higher probabilities to sentences consistent with HierarchicalQ than to minimally different sentences that are ungrammatical. For this purpose, we create an evaluation dataset containing groups of 6 questions, each created by starting with a declarative sentence, such as (5), and then deleting the **first**, **main**, or neither auxiliary, and inserting the **first** or **main** auxiliary at the front of the sentence.6 For instance, in (6b), the **first** auxiliary has been preposed, and the **main** auxiliary has been deleted.
Footnote 6: It would be possible to also use a ‘prepose other’ category, where an auxiliary not in the input is inserted (McCoy et al., 2018). We excluded this category because using it would raise complications about which ‘other’ auxiliary to choose.
(5) The dog who **has** seen a boy **did** try.
(6) a. **Has** the dog who seen a boy **did** try? b. **Has** the dog who **has** seen a boy try? c. **Has** the dog who **has** seen a boy **did** try? d. **Did** the dog who seen a boy **did** try? e. **Did** the dog who **has** seen a boy try? f. **Did** the dog who **has** seen a boy **did** try?

Within each group, we evaluate which question the model assigned the highest probability to. If a model has correctly learned HierarchicalQ, it should assign the highest probability to the question consistent with this rule, such as (6e).
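A small sketch of how such six-way groups can be generated from a declarative whose first and main auxiliaries are known (our illustration; the paper's actual stimuli come from the CFG in its Appendix F, and we assume the main auxiliary is the later of the two occurrences, as in these stimuli):

```python
def six_questions(declarative, first_aux, main_aux):
    """Form the six candidates by preposing the first or main auxiliary
    and deleting the first, main, or neither auxiliary."""
    words = declarative.rstrip(".").split()
    variants = {}
    for prepose in ("first", "main"):
        for delete in ("first", "main", "none"):
            body = list(words)
            if delete == "first":
                body.remove(first_aux)  # removes the first occurrence
            elif delete == "main":
                # main auxiliary assumed to be the last occurrence
                del body[len(body) - 1 - body[::-1].index(main_aux)]
            aux = first_aux if prepose == "first" else main_aux
            variants[(prepose, delete)] = " ".join([aux] + body) + "?"
    return variants

qs = six_questions("the dog who has seen a boy did try.", "has", "did")
print(qs[("main", "main")])  # -> 'did the dog who has seen a boy try?'
```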
Several past papers about yes/no questions have used the same general approach (Lewis and Elman, 2001; Reali and Christiansen, 2005). However, these papers considered only pairs of sentences, whereas we consider groups of 6 to allow for a wider range of possible generalizations that a model might have learned.
To generate the declaratives from which we formed groups of 6 questions, we used the context-free grammar (CFG) in Appendix F, which has a vocabulary selected from the most common words in CHILDES. Each declarative generated by the CFG (e.g., (5)) contains two auxiliary verbs: one before the sentence's main verb and one inside a relative clause modifying the subject. One potential problem is that some questions are consistent with both HierarchicalQ and LinearQ. For instance, (7a) can be formed from (7b) with the HierarchicalQ-consistent steps Prepose-Main, Delete-Main, or from (7c) with the LinearQ-consistent steps Prepose-First, Delete-Main.
(7) a. Did the boy who did see the person laugh? b. The boy who did see the person did laugh. c. The boy who did see the person can laugh.
To avoid this problem, we required that the auxiliary before the main verb must select for a different verb inflection than the one in the relative clause. For instance in (5), **did** selects for the verb's bare form, while **has** selects for the past participle form. Thus, the auxiliary at the start of the question could only correspond to whichever auxiliary in the declarative has the same selectional properties.7
Footnote 7: A model could succeed on this dataset with a rule that relates the auxiliary at the start of a question with the _last_ auxiliary in the declarative form. Since our models fail on this dataset, this consideration is not relevant here.
**Results: Relative Question Acceptability** For each sentence group, we used per-word perplexity to see which of the 6 candidates the models scored most highly.8 For both LSTMs and Transformers, the correct category (Prepose Main, Delete Main) was the second-rarest choice, and
the most frequent preference was for Prepose First, Delete Main, a category that is only partially correct because it references linear order in addition to hierarchical structure (Figure 1).
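The selection criterion can be written directly in terms of per-word perplexity (our sketch; `sentence_log_prob` stands for any function returning a model's total log-probability for a token sequence, such as the `log_prob` sketch above):

```python
import math

def preferred_question(candidates, sentence_log_prob):
    """Pick the candidate with the lowest per-word perplexity, i.e. the
    highest average per-token log-probability under the language model.
    `candidates` is a list of token lists, one per question variant."""
    def per_word_ppl(tokens):
        return math.exp(-sentence_log_prob(tokens) / len(tokens))
    return min(candidates, key=per_word_ppl)
```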
Thus, neither model displays preferences consistent with the correct, fully-hierarchical generalization. The two model types showed similar scores, which may mean that these results are largely driven by the statistics of the training data that both models share, rather than the models' differing inductive biases.
One of the incorrect categories--Prepose Main, Delete None, such as (6f)--only requires reference to hierarchical structure, so it could be said to capture the hierarchical nature of yes/no questions. Nonetheless, this category was also relatively rare: combining the two fully hierarchical possibilities (Prepose Main, Delete Main and Prepose Main, Delete None) accounts for only 26% of LSTM preferences and 27% of Transformer preferences, meaning that both models over 70% of the time favored a sentence generated at least partially based on linear order.
There are two likely reasons for why our models performed so poorly on yes-no questions when they performed well on many of the phenomena in the Zorro dataset (Section 4.5). First, yes/no questions may simply be harder to learn than the other phenomena; indeed, yes/no questions are often singled out as being likely to pose difficulties for a general-purpose learner (Section 1). Alternatively, it might be that the six-way evaluation we used for yes/no questions is stricter than the binary judgments used for the Zorro dataset.
## 5 Experiment 2: Question Formation
The previous experiment was designed to operate entirely in the next-word-prediction paradigm, motivated by arguments from past literature about the strength and relative ecological validity of next-word-prediction as a training objective (see Section 4.2). However, one of this setup's shortcomings is that HierarchicalQ describes correspondences between questions and declaratives, but Experiment 1 focused on questions alone, with no consideration of declaratives.
In this second experiment, to better capture that HierarchicalQ is defined over sentence pairs, we trained models on a sentence-pair task: transforming a declarative into a question (McCoy et al., 2020). For instance, given _the child did learn_ the model must produce _did the child learn?_
We evaluated models in two ways. First, we checked if the models' predictions fully matched the correct questions. This full-sentence evaluation is demanding, and models might fail this evaluation for reasons unrelated to our core hypotheses. For instance, given _the child did learn_ the model might produce _did the baby learn_, which would be marked as incorrect, even though this lexical error is not relevant to HierarchicalQ.
As a metric that is less demanding and that also more directly targets HierarchicalQ, we measured if the first word of the output question corresponded to the first or main auxiliary of the input. Critically, LinearQ and HierarchicalQ make different predictions for the first word of a question so long as the two auxiliaries are distinct: see (4). Because this framing lets the model freely generate its output (instead of choosing one option from a pre-specified set), we allow for the possibility that the rule learned by models may not be identical to any of our manually-generated hypotheses.
Solely training models to perform this transformation involves the implicit assumption that, when children acquire English yes/no questions, the only evidence they leverage is English yes/no questions. However, other types of sentences may also provide useful evidence (Pearl and Mis, 2016): e.g., _wh_-questions also illustrate subject-auxiliary inversion (Pullum and Scholz, 2002), while, more generally, many types of sentences could provide evidence that the syntax as a whole is hierarchical (Perfors et al., 2011). To explore this possibility, we compared a condition in which models were only trained to perform question formation (the Question Formation condition) to another in which models were first pre-trained on next-word prediction with the exact same setup as in Experiment 1 before being further trained to perform question formation (the Next-word Prediction + Question Formation condition).

Figure 1: The question types that models prefer when offered a choice between 6 questions. These 6 questions are formed by modifying a declarative with a relative clause on the subject according to ‘prepose’ and ‘delete’ rules. The correct category is Prepose Main, Delete Main. Within each architecture, the proportions across all 6 question types necessarily sum to 1. Each bar shows the average across 10 model re-runs, with single-standard-deviation error bars.
### Dataset
**Training Set** Our question formation dataset consisted of the yes/no questions in the CHILDES Treebank (Pearl and Sprouse, 2013), a parsed subset of CHILDES containing 189,359 sentences. We used these parses to extract all yes/no questions from the CHILDES Treebank and derive their corresponding declarative forms. The resulting declarative was concatenated with the question. An example declarative/question pair is:
(8) you can spell your name. can you spell your name?
The training set consisted of 10,870 declarative/question pairs, the validation set 1,360 pairs, and the test set 1,358 pairs (we will call this test set the _randomly-partitioned test set_ to distinguish it from two other evaluation sets discussed below). We trained models to perform next-word prediction on such concatenated sentence pairs.
The first-word accuracy of the trained model was then computed based on the model's prediction for the word after the period in each test example, while the full-sentence accuracy was computed based on its predictions for all tokens after the period. All questions in the randomly-partitioned test set were withheld from both the question-formation training set and the next-word-prediction training set. Thus, models had not seen these test examples in their training, even in the Next-word Prediction + Question Formation condition in which they were trained on both tasks.
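The two accuracy metrics can be sketched as follows (our code; `generate` stands for whatever decoding procedure maps the declarative prefix, up to and including the period, to the model's predicted question):

```python
def first_word_and_full_accuracy(examples, generate):
    """`examples` are (declarative, question) string pairs. Returns
    (first-word accuracy, full-question accuracy): the first compares
    only the word predicted right after the period, the second compares
    all predicted tokens against the reference question."""
    first, full = 0, 0
    for declarative, question in examples:
        predicted = generate(declarative)
        first += predicted.split()[:1] == question.split()[:1]
        full += predicted.split() == question.split()
    n = len(examples)
    return first / n, full / n
```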
**Evaluation Sets** In addition to the randomly-partitioned test set, we used CFGs to generate two targeted evaluation sets. As in Experiment 1, we selected the CFGs' vocabulary from common words in our CHILDES data. In sentences generated from the first CFG, the sentence's first auxiliary was also its main auxiliary, so LinearQ and HierarchicalQ make the same predictions. (8) exemplifies the type of declarative-question pair in this dataset. We call this dataset First-Aux = Main-Aux. For sentences generated by the second CFG, the main auxiliary was the _second_ auxiliary in the sentence; thus, these examples disambiguate LinearQ and HierarchicalQ. Example (9) is a declarative-question pair from this evaluation set.
(9) a boy who is playing can try. can a boy who is playing try?
We call this dataset First-Aux \(\neq\) Main-Aux. See Appendix F for the CFGs used. We sampled 10,000 declarative sentences from these grammars and transformed them into questions according to HierarchicalQ to create our evaluation sets.
### Results
**Randomly-Partitioned Test Set** The LSTMs and Transformers in the Question Formation condition performed well on the randomly-partitioned test set, with a full-question accuracy of 0.68 \(\pm\) 0.014 and 0.87 \(\pm\) 0.005 (averaged across 10 re-runs with margins indicating one standard deviation). The models in the Next-word Prediction + Question Formation condition performed similarly well, with a full-question accuracy of 0.66 \(\pm\) 0.008 for the LSTMs and 0.93 \(\pm\) 0.004 for the Transformers. For both model types, the first-word accuracy for the question was nearly 1.00 across re-runs. We suspect that Transformers have a stronger full-question accuracy because producing the question requires copying all words from the declarative (but in a different order). Copying is likely easy for Transformers because they can attend to specific words in the prior context, while our LSTMs must compress the entire context into a fixed-size vector, which may degrade the individual word representations. Because both model types achieved near-perfect performance on the crucial first-word accuracy metric, we conclude that our models have successfully learned how to handle the types of declarative/question pairs that we extracted from the CHILDES Treebank.
**Targeted Evaluation Sets** On our two targeted evaluation sets, models almost never produced the complete question correctly. Turning to the more lenient measure of first-word accuracy, for examples on which LinearQ and HierarchicalQ predict the same first output word (First-Aux = Main-Aux), the Transformer trained only on question formation performed strongly, while the Transformer trained on both tasks, and both LSTMs, performed reasonably well (Figure 2; note models could choose any word in their vocabulary to begin the output, so chance performance is near 0.00). For the crucial cases that disambiguate the two rules (First-Aux \(\neq\) Main-Aux), both models in both conditions performed more consistently with LinearQ than HierarchicalQ. Training on next-word prediction before question formation had inconsistent effects: it modestly increased the likelihood of hierarchical generalization in LSTMs, yet it decreased that likelihood in Transformers.
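The rule-consistency labels underlying Figure 2 depend only on the first word of the model's output; a minimal sketch (ours):

```python
def classify_first_word(predicted_question, first_aux, main_aux):
    """Label the model's first output word as consistent with LinearQ,
    HierarchicalQ, both (when the two auxiliaries coincide), or neither."""
    word = predicted_question.split()[0]
    linear = word == first_aux
    hierarchical = word == main_aux
    if linear and hierarchical:
        return "both"
    if linear:
        return "LinearQ"
    if hierarchical:
        return "HierarchicalQ"
    return "other"
```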
**Lexical Specificity** In Appendix G, we further break down the First-Aux \(\neq\) Main-Aux results based on the auxiliaries' identity. The generalization pattern varied considerably across auxiliary pairs. For some auxiliary pairs, the auxiliary chosen to begin the question was usually neither auxiliary in the input (Figure 3, left facet). For other pairs, models usually chose the first auxiliary, regardless of lexical identity (Figure 3, middle facet). Finally, for some pairs, the auxiliary chosen was usually the same one, regardless of whether it was the first or main auxiliary (Figure 3, right facet).
Generalization based on lexical identity is rarely considered in past discussions of English yes/no question acquisition. Of the papers on this phenomenon (see Clark and Lappin (2010), Lasnik and Lidz (2017), and Pearl (2021) for overviews), the only one to our knowledge that discusses lexical specificity is Frank and Mathis (2007), which studied models trained on synthetic data. Our results highlight the importance of testing for a broad range of generalizations: Lexically-specific hypotheses appear attractive for our low-bias learners, so an account of what biases can yield human-like learning should rule out these lexically-specific hypotheses along with linear ones.
## 6 Discussion
We have found that, when trained on child-directed speech, two types of standard neural networks performed reasonably well at capturing the statistical properties of the dataset, yet their handling of English yes/no questions was more consistent with a linear rule LinearQ than the correct hierarchical rule HierarchicalQ. These results support the hypothesis that a learner requires a hierarchical bias to consistently learn hierarchical rules when learning from the linguistic data children receive.
### Takeaways for LSTMs and Transformers
When trained on massive corpora, LSTMs and Transformers perform impressively on some syntactic evaluations. Based on such results, it is tempting to conclude that the general-purpose biases of these architectures suffice to yield human-like syntax acquisition. Our results caution against this interpretation: When we trained the same architectures on data more similar to children's input, they failed to learn the structure of English yes/no questions. Thus, at least when learning from text alone, LSTMs and Transformers do not display human-like language learning--they do not generalize as humans do _from the data that humans receive_.

Figure 2: Proportion of model-produced questions that were consistent with the linear rule LinearQ and/or the hierarchical rule HierarchicalQ. In the First-Aux = Main-Aux dataset, the first auxiliary is the main auxiliary, so both LinearQ and HierarchicalQ produce the correct question string. The First-Aux \(\neq\) Main-Aux dataset disambiguates the two rules. Each bar shows the average across 10 model re-runs, with error bars showing one standard deviation.

Figure 3: Lexical specificity in model behavior. Each facet considers only the evaluation examples containing the two auxiliaries in the facet heading; e.g., the _can and do_ facet includes, for example, the inputs _the children who **can** play **do** learn_ and _the children who **do** play **can** learn_. The bars show the proportion of model predictions for the first word of the output that are consistent with four potential movement rules, averaged across 10 model re-runs and with error bars showing one standard deviation above and below the mean. This plot only shows an illustrative subset of auxiliary pairs for one model type (Transformers in the Next-Word Prediction + Question Formation condition); see Appendix G for the full results.
### Takeaways for the Poverty of the Stimulus Debate
Below we specify four possible positions in the poverty-of-the-stimulus debate about the adequacy of children's input for inducing hierarchical rules in low-bias learners, arranged from assuming the most limited to the most expansive innate component:
(10) **Any inductive biases:** Any learner trained on CHILDES will generalize like humans do.
(11) **Any inductive biases that enable in-distribution learning:** Any learner that captures the statistical patterns of the training distribution will generalize to HierarchicalQ.
(12) **Some non-hierarchical inductive biases:** Some general-purpose learners will generalize as humans do, but others will not.
(13) **Only a hierarchical inductive bias:** No general-purpose learners will generalize as humans do: hierarchical biases are necessary.
Position (10) is clearly false: many learners cannot learn certain aspects of syntax, no matter their training data (e.g., bigram models cannot capture long-distance dependencies). Our work shows that position (11) is also false: Though our models performed well on the in-distribution test sets of Experiments 1 and 2, they did not generalize in human-like ways. This leaves positions (12) and (13), which our existing results cannot differentiate. It is possible that only learners with hierarchical inductive biases can demonstrate human-like language learning (position (13)), but also that some learners without this bias can succeed (position (12))--just not the learners we tested. For further discussion of how computational modeling can bear on learnability arguments, see Wilcox et al. (2021).
One potential solution supporting position (12) would be that learners leverage the hierarchical structure of some syntactic phenomenon to help conclude that other, impoverished phenomena are hierarchical Perfors et al. (2011); Mulligan et al. (2021). However, our results from Experiment 2 show that giving learners access to a wider range of phenomena does not automatically improve hierarchical generalization: Models' performance on question formation was not substantially improved (and in some cases was even harmed) when they were trained not just on question formation but also on next-word prediction on the entire CHILDES corpus. Thus, although training on text that contains many linguistic phenomena can give models a hierarchical inductive bias when the training is done over large Internet corpora Warstadt and Bowman (2020); Mueller et al. (2022), our results provide evidence that this conclusion does not extend to models trained on child-directed speech.
Though both (12) and (13) remain as possibilities, we believe that our results more strongly support (13). Of all currently available general-purpose learners, LSTMs and Transformers are the best at modeling the probabilistic structure of linguistic data. Therefore, if child-directed speech contains clear evidence for the hierarchical nature of yes/no questions--evidence so clear that at least some general-purpose learners could recognize it--it is likely that LSTMs and Transformers would be among the set of general-purpose learners that could use this evidence to make hierarchical generalizations in our experiments. The fact that these architectures instead predominantly favored linear generalizations therefore supports position (13).
### How to test for HierarchicalQ
We have argued that an ideal simulation of the acquisition of English yes/no questions would have the following properties:
(14) The training data should be similar to children's linguistic input.
(15) The training task should be ecologically valid.
(16) The evaluation method should focus on correspondences between pairs of sentences rather than the acceptability of individual sentences.
Property (14) motivated our use of text from CHILDES as the training data. We are not aware of a single experimental setup that fully satisfies both Property (15) and Property (16), so we instead used two experiments, each one focusing on one property at the cost of satisfying the other one less well. Experiment 1 works entirely in the context of the relatively ecologically valid task of next-word prediction, motivated by Property (15), but its
evaluation is only based on the acceptability of individual sentences, failing to satisfy Property (16). Experiment 2 fully satisfies Property (16) by using an evaluation based on sentence pairs, at the cost of including a less ecologically-valid training component based on sentence transformations. Both experiments yielded qualitatively similar conclusions (failure of models to learn HierarchicalQ).
### Quantity of Training Data
The size of our training set was plausibly within the range from which children can acquire HierarchicalQ. Crain and Nakayama (1987) found that children between ages 3 and 5 behaved much more consistently with HierarchicalQ than LinearQ. Though these children made many errors, their errors were usually compatible with a hierarchical rule (e.g., Prepose Main, Delete None errors: see Section 4.6). By age 3, American children receive approximately 10 to 33 million words of input Hart and Risley (1995), and the 8.5 million words of our training set is close to the lower end of that range. Thus, it is reasonable to suppose that a learner that generalizes as children do would favor HierarchicalQ after being trained on our training set. Our models, in contrast, regularly preferred sentences generated in ways based on linear order (Figures 1 and 2), a category of error that is very rare in children Crain and Nakayama (1987); Ambridge et al. (2008).
In order to give our models the strongest chance of generalizing correctly, it would have been ideal to provide a quantity of data closer to 33 million words, the high end of Hart and Risley's range. Our data source did not contain enough text to make this possible, but future work could investigate ways to augment the data using other sources.
### Type of Training Data
Our training set was both qualitatively and quantitatively closer to children's input than the massive Internet corpora standardly used to train models in NLP (Linzen, 2020). This difference is important: Lin et al. (2019), Warstadt and Bowman (2020), and Mueller et al. (2022) all found evidence that models trained on large Internet corpora performed well on yes/no question evaluations, whereas our models trained on CHILDES performed poorly--though we cannot be certain the differences in results are solely due to differences in the training data, since these prior papers used different model architectures, training tasks, and evaluation setups.
Though our training data are more similar to children's input than massive Internet corpora are, differences remain. Our experiments omit several aspects of a child's experience that might help them acquire syntax, such as prosody Morgan and Demuth (1996), visual information Shi et al. (2019), and meaning Fitz and Chang (2017); Abend et al. (2017), all of which might correlate with syntactic structure and thus provide cues to the correct hierarchical generalization. On the other hand, our dataset might present an easier learning scenario than children are faced with, because children must learn to segment the speech stream into words Lakhotia et al. (2021), while our models do not need to. Further, though real-world grounding could provide helpful information, learners might struggle to leverage this information due to difficulty determining what is being discussed in the physical world Gleitman et al. (2005).
## 7 Conclusion
In this work, we trained two types of neural networks (LSTMs and Transformers) on sentences of the types available to children and then analyzed what they had learned about English yes/no questions. Across several evaluation paradigms, these models failed to generalize in human-like ways: Humans display hierarchical generalization, while the models' generalization was instead based on linear order and individual words' identities. Our results support the hypothesis that human-like linguistic generalization requires biases stronger than those of LSTMs and Transformers. Future work should investigate what inductive biases enable successful generalization. One approach would be to test architectures with built-in hierarchical structure; past work has shown that such architectures have a hierarchical bias McCoy et al. (2020) and generalize better on the hierarchical phenomenon of subject-verb agreement Kuncoro et al. (2018); Lepori et al. (2020), so they may also generalize better on English yes/no questions. A final direction would be to expand the input beyond words alone so that learners can leverage hierarchical structure that is present in other modalities, such as hierarchical structure in visual scenes.
### Ethics Statement
**Use of human data:** While we did not collect any new human data ourselves, many of our analyses involved the use of prior datasets within the
CHILDES database. All of these datasets were collected in accordance with IRB policies at the institutions of the data collectors, and all followed standard practices in obtaining informed consent and deidentifying data.9
Footnote 9: https://talkbank.org/share/irb/
**Risks and limitations:** The main risk of our proposed analyses is that future work using the same analyses might draw overly strong conclusions based on increased model performance, leading to overestimates of model strength. Such overestimates are an issue because they can lead users to place more trust in a model than is warranted.
To clarify, we view strong performance on our evaluation datasets as necessary but not sufficient to demonstrate human-like learning. Thus, if models perform poorly on our datasets (as the models we evaluated did), then we have strong reason to conclude that models are not learning in human-like ways. If future models perform better, such results would be consistent with human-like learning but would not conclusively establish that models learn as humans do, as they might instead be using some shallow heuristic that is not controlled for in our datasets. In other words, a criterion that is necessary but not sufficient facilitates strong conclusions about failure but does not facilitate strong conclusions about success. If future papers are faced with models that are more successful, such papers would ideally supplement results based on our datasets with analyses of models' internal strategies in order to more conclusively establish that what they have learned is not a spurious heuristic.
|